Active Learning Ranking from Pairwise Preferences with Almost Optimal Query Complexity

Nir Ailon∗ (Technion, Haifa, Israel; nailon@cs.technion.ac.il)

Abstract

Given a set V of n elements we wish to linearly order them using pairwise preference labels which may be non-transitive (due to irrationality or arbitrary noise). The goal is to linearly order the elements while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the number of disagreements (loss) and the query complexity (number of pairwise preference labels). Our algorithm adaptively queries at most O(n poly(log n, ε−1)) preference labels for a regret of ε times the optimal loss. This is strictly better, and often significantly better, than what non-adaptive sampling could achieve. Our main result helps settle an open problem posed by learning-to-rank (from pairwise information) theoreticians and practitioners: What is a provably correct way to sample preference labels?

1 Introduction

We study the problem of learning to rank from pairwise preferences, and solve an open problem that has led to the development of many heuristics but no provable results. The input is a set V of n elements from some universe, and we wish to linearly order them given pairwise preference labels, given as responses to queries of the form "which is preferred, u or v?" for pairs u, v ∈ V. The goal is to linearly order the elements from the most preferred to the least preferred, while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the loss (number of disagreements) and the query complexity (number of preference responses we need). This is a learning problem with a finite sample space of only (n choose 2) possibilities (hence a transductive learning problem). The loss minimization problem given the entire n × n preference matrix is a well-known NP-hard problem called MFAST (minimum feedback arc-set in tournaments) [5].
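As a tiny illustration of why non-transitivity makes the problem combinatorial (a made-up 3-element example, not from the paper): a cyclic tournament a ≻ b, b ≻ c, c ≻ a forces every linear order to disagree with at least one label, so a ranker can only minimize, never eliminate, the loss.

```python
from itertools import permutations

# Preference matrix for a 3-cycle: W[(u, v)] = 1 means u is preferred over v.
V = ["a", "b", "c"]
W = {("a", "b"): 1, ("b", "a"): 0,
     ("b", "c"): 1, ("c", "b"): 0,
     ("c", "a"): 1, ("a", "c"): 0}

def loss(order):
    """Number of pairs the order places as u before v while W prefers v."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(W[(v, u)] for u in V for v in V
               if u != v and pos[u] < pos[v])

# Every one of the 6 linear orders disagrees with at least one of the 3 labels.
losses = [loss(p) for p in permutations(V)]
print(min(losses))  # -> 1
```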
Recently, Kenyon and Schudy [23] devised a PTAS for it, namely a polynomial (in n) time algorithm computing a solution with loss at most (1 + ε) times the optimal, for any ε > 0 (the degree of the polynomial may depend on ε). In our case each edge from the input graph is given for a unit cost, hence we seek query efficiency. Our algorithm samples preference labels non-uniformly and adaptively, hence we obtain an active learning algorithm. Our output is not a solution to MFAST, but rather a reduction of the original learning problem to a simpler one, decomposed into small instances in which the optimal loss is high; consequently, uniform sampling of preferences can be shown to be sufficiently good.

Our Setting vs. The Usual "Learning to Rank" Problem. Our setting differs from much of the learning to rank (LTR) literature. Usually, the labels used in LTR problems are responses to individual elements, not to pairs of elements. A typical example is the 1..5 scale rating for restaurants, or 0/1 rating (irrelevant/relevant) for candidate documents retrieved for a query (known as the binary ranking problem). The preference graph induced from these labels is transitive, hence no combinatorial problems arise due to nontransitivity. We do not discuss this version of LTR. Some LTR literature does consider the pairwise preference label approach, and there is much justification for it (see [11, 22] and references therein). Other works (e.g. [26]) discuss pairwise or higher order (listwise) approaches, but a close inspection reveals that they do not use pairwise (or listwise) labels, only pairwise (or listwise) loss functions.

(∗ Supported by a Marie Curie International Reintegration Grant PIRG07-GA-2010-268403.)

Using Kenyon and Schudy's PTAS as a starting point. As mentioned above, our main algorithm is derived from the PTAS of [23], but with a significant difference. We use their algorithm to obtain a certain decomposition of the input.
A key change to their algorithm, which is not query efficient, involves careful sampling followed by iterated sample refreshing steps. Our work can be studied in various contexts, aside from LTR.

Machine Learning Reductions: Our main algorithm reduces a given instance to smaller subproblems decomposing it. We mention other work in this vein: [6, 3, 9].

Active Learning: An important field of statistical learning theory and practice ([8, 21, 15, 14, 24, 17, 13, 20, 16]). In the most general setting, one wishes to improve on standard statistical learning theoretical complexity bounds by actively choosing instances for labeling. Many heuristics have been developed, while algorithms with provable bounds (especially in the agnostic case) are known for few problems, often toy ones. General bounds are difficult to use: [8] provides general purpose active learning bounds which are quite difficult to apply in actual specific problems; the A2 algorithm [7], analyzed in [21] using the disagreement coefficient, is not useful here. It can be shown that the disagreement coefficient here is trivial (omitted due to lack of space).

Noisy Sorting: There is much literature in theoretical computer science on sorting noisy data. [10] work in a Bayesian setting; in [19], the input preference graph is transitive, and labels are nondeterministic. In other work, elements from the set of alternatives are assumed to have a latent value. In this work the input is worst case and not Bayesian, query responses are deterministic, and elements do not necessarily have a latent value.

Paper Organization: Section 2 presents basic definitions and lemmata, and in particular defines what a good decomposition is and how it can be used in learning permutations from pairwise preferences. Section 3 presents our main active learning algorithm, which is, in fact, an algorithm for producing a good decomposition query efficiently. The main result is presented in Theorem 3.1.
Section 4 discusses future work and followup work appearing in the full version of this paper.

2 Notation and Basic Lemmata

Let V denote a finite set of size n that we wish to rank.1 We assume an unknown preference function W on pairs of elements in V. For any pair u, v ∈ V, W(u, v) is 1 if u is deemed preferred over v, and 0 otherwise. We enforce W(u, v) + W(v, u) = 1 (no abstention); hence (V, W) is a tournament. We assume that W is agnostic: it is not necessarily transitive and may contain errors and inconsistencies. For convenience, for any two real numbers a, b we will let [a, b] denote the interval {x : a ≤ x ≤ b} if a ≤ b and {x : b ≤ x ≤ a} otherwise. We wish to predict W using a hypothesis h from concept class H = Π(V), where Π(V) is the set of permutations π over V, viewed equivalently as binary functions over V × V satisfying, for all u, v, w ∈ V, π(u, v) = 1 − π(v, u), and π(u, w) = 1 whenever π(u, v) = π(v, w) = 1. For π ∈ Π(V) we also use the notation π(u, v) = 1 if and only if u ≺π v, namely, if u precedes v in π. Abusing notation, we also view permutations as injective functions from [n] to V, so that the element π(1) ∈ V is in the first, most preferred position and π(n) is the least preferred one. We also define the function ρπ inverse to π as the unique function satisfying π(ρπ(v)) = v for all v ∈ V. Hence, u ≺π v is equivalent to ρπ(u) < ρπ(v). As in standard ERM, we define a risk function Cu,v penalizing the error of π with respect to the pair u, v, namely, Cu,v(π, V, W) = 1[π(u, v) ≠ W(u, v)]. The total loss, C(h, V, W), is defined as Cu,v summed over all unordered u, v ∈ V. Our goal is to devise an active learning algorithm for the purpose of minimizing this loss. In this paper we show an improved, almost optimal statistical learning theoretical bound using recent important breakthroughs in combinatorial optimization of a related problem called minimum feedback arc-set in tournaments (MFAST).
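The two equivalent views of a permutation above (as a binary function on pairs and as a position map with inverse ρπ) can be sketched as follows (a minimal illustration with made-up element names):

```python
# A permutation as a ranked list; pi[i] is the element in position i
# (1-indexed), and rho is its inverse: rho[v] = position of v.
order = ["x", "y", "z"]                    # most preferred first
pi = {i + 1: v for i, v in enumerate(order)}
rho = {v: i + 1 for i, v in enumerate(order)}

def precedes(u, v):
    """The binary view: pi(u, v) = 1 iff u precedes v, i.e. rho(u) < rho(v)."""
    return int(rho[u] < rho[v])

assert pi[rho["y"]] == "y"                           # pi and rho are inverses
assert precedes("x", "z") == 1 - precedes("z", "x")  # antisymmetry
# Transitivity: x before y and y before z implies x before z.
assert precedes("x", "y") == precedes("y", "z") == precedes("x", "z") == 1
```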
The relation between this NP-hard problem and our learning problem has been noted before (e.g. [12]), when these breakthroughs were yet to be known. MFAST is more precisely defined as follows: V and W are given in their entirety (we pay no price for reading W), and we seek π ∈ Π(V) minimizing the MFAST cost C(π, V, W). A PTAS has been discovered for this NP-hard problem very recently in groundbreaking work by Kenyon and Schudy [23]. This PTAS is not useful, however, for the purpose of learning to rank from pairwise preferences, because it is not query efficient: it may require reading all quadratically many entries in W. In this work we fix this drawback, and use the PTAS to obtain a certain useful decomposition.

(Footnote 1: In a more general setting we are given a sequence V1, V2, . . . of sets, but there is enough structure and interest in the single set case, which we focus on in this work.)

Definition 2.1. Given a set V of size n, an ordered decomposition is a list of pairwise disjoint subsets V1, . . . , Vk ⊆ V such that V1 ∪ · · · ∪ Vk = V. We let W|Vi denote the restriction of W to Vi × Vi for i = 1, . . . , k. For a permutation π ∈ Π(V) we let π|Vi denote its restriction to the elements of Vi (hence, π|Vi ∈ Π(Vi)). We say that π ∈ Π(V) respects V1, . . . , Vk if for all u ∈ Vi, v ∈ Vj, i < j, we have u ≺π v. We denote the set of permutations π ∈ Π(V) respecting the decomposition V1, . . . , Vk by Π(V1, . . . , Vk). We say that a subset U of V is small in V if |U| ≤ log n / log log n; otherwise we say that U is big in V. A decomposition V1, . . . , Vk is ε-good with respect to W if:2

Local Chaos:

  min_{π∈Π(V)} Σ_{i : Vi big in V} C(π|Vi, Vi, W|Vi) ≥ ε² Σ_{i : Vi big in V} (ni choose 2).   (2.1)

Approximate Optimality:

  min_{σ∈Π(V1,...,Vk)} C(σ, V, W) ≤ (1 + ε) min_{π∈Π(V)} C(π, V, W).   (2.2)

We will show how to use an ε-good decomposition, and how to obtain one query-efficiently.
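To make the "respects" property and the small/big threshold of Definition 2.1 concrete, here is a small sketch (block contents and sizes are made up for illustration):

```python
import math

# Definition 2.1 in code: an ordered decomposition, the "respects" check,
# and the small/big threshold log n / log log n (natural logs used here).
def respects(order, blocks):
    """True iff every element of an earlier block precedes every later block."""
    pos = {v: i for i, v in enumerate(order)}
    for a in range(len(blocks)):
        for b in range(a + 1, len(blocks)):
            if any(pos[u] >= pos[v] for u in blocks[a] for v in blocks[b]):
                return False
    return True

def is_small(block, n):
    return len(block) <= math.log(n) / math.log(math.log(n))

n = 9
blocks = [[0, 1, 2], [3, 4], [5, 6, 7, 8]]    # an ordered decomposition of V
assert respects([0, 1, 2, 3, 4, 5, 6, 7, 8], blocks)
assert respects([2, 0, 1, 4, 3, 8, 5, 6, 7], blocks)  # order inside blocks is free
assert not respects([3, 0, 1, 2, 4, 5, 6, 7, 8], blocks)
assert is_small(blocks[1], n) and not is_small(blocks[2], n)
```

Permutations in Π(V1, . . . , Vk) may reorder freely inside each block but never across blocks, which is exactly what the second assertion exercises.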
Basic (suboptimal) results from statistical learning theory: Viewing pairs of V-elements as data points, the loss C(π, V, W) is, up to normalization, an expected cost over a random draw of a data point. A sample E of unordered pairs gives rise to a partial cost CE, defined as

  CE(π, V, W) = (n choose 2) |E|^{-1} Σ_{(u,v)∈E : u≺πv} W(v, u).

(We assume throughout that E is chosen with repetitions and is hence a multiset; the accounting of parallel edges is clear.) CE(·, ·, ·) is an unbiased empirical estimator of C(π, V, W) if E ⊆ V2 is chosen uniformly at random among all (multi)subsets of a given size. The basic question in statistical learning theory is: how good is the minimizer π of CE, in terms of C? The notion of VC dimension [25] gives us a nontrivial (albeit suboptimal - see below) bound.

Lemma 2.2. The VC dimension of the set of permutations on V, viewed as binary classifiers on pairs of elements, is n − 1.

It is easy to show that the VC dimension is at most O(n log n), which is the logarithm of the number of permutations. See [4] for a linear bound. The implications are:

Proposition 2.3. If E is chosen uniformly at random (with repetitions) as a sample of m elements from V2, where m > n, then with probability at least 1 − δ over the sample, all permutations π satisfy:

  |CE(π, V, W) − C(π, V, W)| = n² · O( sqrt( (n log m + log(1/δ)) / m ) ).

Hence, if we want to minimize C(π, V, W) over π to within an additive error of µn² with probability at least 1 − δ, it suffices to choose a sample E of m = O(µ^{-2}(n log n + log δ^{-1})) elements from V2 uniformly at random (with repetitions), and optimize CE(π, V, W) instead.3 Assume δ ≥ e^{-n}, so that we get a more manageable sample bound of O(µ^{-2} n log n). Is this bound at all interesting? For two permutations π, σ, the Kendall-Tau metric dτ(π, σ) is defined as

  dτ(π, σ) = Σ_{u≠v} 1[(u ≺π v) ∧ (v ≺σ u)].

The Spearman Footrule metric dfoot(π, σ) is defined as

  dfoot(π, σ) = Σ_u |ρπ(u) − ρσ(u)|.

The following is well known [18]:

  dτ(π, σ) ≤ dfoot(π, σ) ≤ 2 dτ(π, σ).
(2.3)

Clearly C(·, V, ·) extends dτ(·, ·) to distances between permutations and binary tournaments, with the triangle inequality dτ(π, σ) ≤ C(π, V, W) + C(σ, V, W) satisfied for all W and π, σ ∈ Π(V). Assume we use Proposition 2.3 to find π ∈ Π(V) with an additive regret of O(µn²) with respect to an optimal solution π∗ for some µ > 0. The triangle inequality implies dτ(π, π∗) = Ω(µn²). By (2.3), hence, dfoot(π, π∗) = Ω(µn²). By the definition of dfoot, this means that the average element v ∈ V is translated Ω(µn) positions away from its position in π∗. In some applications (e.g. IR), one may want elements to be at most a constant γ positions off. This translates to a sought regret of O(γn) for constant γ, and, using our notation, to µ = γ/n. Proposition 2.3 cannot guarantee less than a quadratic sample size for such a regret, which is tantamount to querying all of W. We can do better: for any ε > 0 we achieve an additive regret of O(εC(π∗, V, W)) using O(n poly(log n, ε−1)) W-queries, for arbitrarily small optimal loss C(π∗, V, W). This is not achievable using Proposition 2.3. One may argue that the VC bound may be too pessimistic, and that other arguments may work for the uniform sample case. A simple extremal case (omitted from this abstract) shows that this is false.

(Footnote 2: We will just say ε-good if W is clear from the context. Footnote 3: V2 denotes the set of unordered pairs of distinct elements in V.)

Proposition 2.4. Let V1, . . . , Vk be an ordered decomposition of V. Let B denote the set of indices i ∈ [k] such that Vi is big in V. Assume E is chosen uniformly at random (with repetitions) as a sample of m elements from ∪_{i∈B} (Vi choose 2), where m > n. For each i = 1, . . . , k, let Ei = E ∩ (Vi choose 2). Define CE(π, {V1, . . . , Vk}, W) to be

  CE(π, {V1, . . . , Vk}, W) = ( Σ_{i∈B} (ni choose 2) ) |E|^{-1} Σ_{i∈B} (ni choose 2)^{-1} |Ei| CEi(π|Vi, Vi, W|Vi).

(The normalization is defined so that the expression is an unbiased estimator of Σ_{i∈B} C(π|Vi, Vi, W|Vi).
If |Ei| = 0 for some i, formally define (ni choose 2)^{-1} |Ei| CEi(π|Vi, Vi, W|Vi) = 0.) Then with probability at least 1 − e^{-n} over the sample, all permutations π ∈ Π(V) satisfy:

  | CE(π, {V1, . . . , Vk}, W) − Σ_{i∈B} C(π|Vi, Vi, W|Vi) | = Σ_{i∈B} (ni choose 2) · O( sqrt( (n log m + log(1/δ)) / m ) ).

The proof (omitted from this abstract) uses simple VC dimension arithmetic. Now, why is ε-goodness good?

Lemma 2.5. Fix ε > 0 and assume we have an ε-good partition (Definition 2.1) V1, . . . , Vk of V. Let B denote the set of i ∈ [k] such that Vi is big in V, and let B̄ = [k] \ B. Let ni = |Vi| for i = 1, . . . , k, and let E denote a random sample of O(ε^{-6} n log n) elements from ∪_{i∈B} (Vi choose 2), each element chosen uniformly at random with repetitions. Let Ei denote E ∩ (Vi choose 2). Let CE(π, {V1, . . . , Vk}, W) be defined as in Proposition 2.4. For any π ∈ Π(V1, . . . , Vk) define:

  C̃(π) := CE(π, {V1, . . . , Vk}, W) + Σ_{i∈B̄} C(π|Vi, Vi, W|Vi) + Σ_{1≤i<j≤k} Σ_{(u,v)∈Vi×Vj} 1[v ≺π u].   (2.4)

Then the following event occurs with probability at least 1 − e^{-n}: for any minimizer σ∗ of C̃(·) over Π(V1, . . . , Vk), C(σ∗, V, W) ≤ (1 + 2ε) min_{π∈Π(V)} C(π, V, W). (Proof omitted from abstract.)

The consequence: given an ε-good decomposition V1, . . . , Vk, optimizing C̃(σ) over σ ∈ Π(V1, . . . , Vk) would give a solution with relative regret of 2ε w.r.t. the optimum. The first and last terms on the RHS of (2.4) require no more than O(ε^{-6} n log n) W-queries to compute (by definition of E, and given the decomposition). The middle term runs over small Vi's, and can be computed from O(n log n / log log n) W-queries. If we now assume that a good decomposition can be efficiently computed using O(n polylog(n, ε^{-1})) W-queries (as we indeed show), then we would beat the VC bound whenever the optimal loss is at most O(n^{2−ν}) for some ν > 0.

3 A Query Efficient Algorithm for ε-Good Decompositions

Theorem 3.1.
Given a set V of size n, a preference oracle W and an error tolerance parameter 0 < ε < 1, there exists a poly(n, ε^{-1})-time algorithm returning, with constant probability, an ε-good partition of V, querying at most O(ε^{-6} n log^5 n) locations in W in expectation.

Before describing the algorithm and its analysis, we need some definitions.

Definition 3.2. Let π denote a permutation over V. Let v ∈ V and i ∈ [n]. We define πv→i to be the permutation obtained by moving the rank of v to i in π, and leaving the rest of the elements in the same order.4

(Footnote 4: For example, if V = {x, y, z} and (π(1), π(2), π(3)) = (x, y, z), then (πx→3(1), πx→3(2), πx→3(3)) = (y, z, x).)

Definition 3.3. Fix π ∈ Π(V), v ∈ V and i ∈ [n]. We define TestMove(π, V, W, v, i) := C(π, V, W) − C(πv→i, V, W). Equivalently, if i ≥ ρπ(v) then

  TestMove(π, V, W, v, i) = Σ_{u : ρπ(u) ∈ [ρπ(v)+1, i]} (W(u, v) − W(v, u)).

A similar expression can be written for i < ρπ(v). For a multiset E ⊆ V2, define TestMoveE(π, V, W, v, i), for i ≥ ρπ(v), as

  TestMoveE(π, V, W, v, i) := (|i − ρπ(v)| / |Ẽ|) Σ_{u : (u,v)∈Ẽ} (W(u, v) − W(v, u)),

where the multiset Ẽ is defined as {(u, v) ∈ E : ρπ(u) ∈ [ρπ(v) + 1, i]}. Similarly, for i < ρπ(v) we define

  TestMoveE(π, V, W, v, i) := (|i − ρπ(v)| / |Ẽ|) Σ_{u : (u,v)∈Ẽ} (W(v, u) − W(u, v)),

where Ẽ is now {(u, v) ∈ E : ρπ(u) ∈ [i, ρπ(v) − 1]}.

Lemma 3.4. Fix π ∈ Π(V), v ∈ V, i ∈ [n] and an integer N. Let E ⊆ V2 be a random (multi)set of size N with elements (v, u1), . . . , (v, uN), drawn so that for each j ∈ [N] the element uj is chosen uniformly at random from among the elements lying between v (exclusive) and position i (inclusive) in π. Then E[TestMoveE(π, V, W, v, i)] = TestMove(π, V, W, v, i). Additionally, for any δ > 0, except with probability of failure δ,

  | TestMoveE(π, V, W, v, i) − TestMove(π, V, W, v, i) | = O( |i − ρπ(v)| sqrt( log δ^{-1} / N ) ).

The lemma is easily proven using Hoeffding tail bounds, using the fact that |W(u, v)| ≤ 1 for all u, v.
Our decomposition algorithm SampleAndRank is detailed in Algorithm 1, with subroutines in Algorithms 2 and 3. It is a query efficient improvement of the PTAS in [23], with the following difference: here we are not interested in an approximation algorithm for MFAST, but just in an ε-good decomposition. Whenever we reach a small block (line 3) or a big block with a probably approximately sufficiently high cost (line 8) in our recursion (Algorithm 2), we simply output it as a block in our partition. Denote the resulting output partition by V1, . . . , Vk. Denote by π̂ the minimizer of C(·, V, W) over Π(V1, . . . , Vk). We need to show that C(π̂, V, W) ≤ (1 + ε) min_{π∈Π(V)} C(π, V, W), thus establishing (2.2). The analysis closely follows [23]. Due to space limitations, we focus on the differences, and specifically on Procedure ApproxLocalImprove (Algorithm 3), which replaces a greedy local improvement step in [23] that is not query efficient. SampleAndRank (Algorithm 1) takes the following arguments: the set V, the preference matrix W and an accuracy argument ε. It is implicitly understood that the argument W passed to SampleAndRank is given as a query oracle, incurring a unit cost upon each access. The first, warm start step in SampleAndRank computes an expected constant factor approximation π to MFAST on (V, W) using QuickSort [2]. The query complexity of this step is O(n log n) in expectation (see [3]). Before continuing, we make the following assumption, which holds with constant probability using Markov probability bounds.

Assumption 3.5. The cost C(π, V, W) of π computed in line 2 of SampleAndRank is O(1) times that of the optimal π∗, and the query cost incurred in the computation is O(n log n).

Next, a recursive procedure SampleAndDecompose is called, running a divide-and-conquer algorithm. Before branching, it executes the following: lines 5 to 9 identify local chaos (2.1) (with high probability).
Line 10 calls ApproxLocalImprove (Algorithm 3), responsible for performing query-efficient approximate greedy steps, as we now explain.

Approximate local improvement steps. ApproxLocalImprove takes a set V of size N, W, a permutation π on V, two numbers C0, ε and an integer n.5 The number n is always the size of the input in the root call to SampleAndDecompose, passed down in the recursion, and used for the purpose of controlling success probabilities. The goal of the procedure is to repeatedly identify, w.h.p., single vertex moves that considerably decrease the cost. The procedure starts by creating a sample ensemble S = {Ev,i : v ∈ V, i ∈ [B, L]}, where B = ⌈log Θ(εN/ log n)⌉ and L = ⌈log N⌉. The size of each Ev,i ∈ S is Θ(ε^{-2} log² n), and each element (v, x) ∈ Ev,i was added (with possible multiplicity) by uniformly at random selecting, with repetitions, an element x ∈ V positioned at distance at most 2^i from the position of v in π. Let Dπ denote the distribution space from which S was drawn, and let Pr_{X∼Dπ}[X = S] denote the probability of obtaining a given sample ensemble S. S will enable us to approximate the improvement in cost obtained by moving a single element u to position j.

(Footnote 5: Notation abuse: V here is a subset of the original input.)

Definition 3.6. Fix u ∈ V and j ∈ [n], and assume log |j − ρπ(u)| ≥ B. Let ℓ = ⌈log |j − ρπ(u)|⌉. We say that S is successful at u, j if

  |{x : (u, x) ∈ Eu,ℓ} ∩ {x : ρπ(x) ∈ [ρπ(u), j]}| = Ω(ε^{-2} log² n).

Success of S at u, j means that sufficiently many samples x ∈ V such that ρπ(x) is between ρπ(u) and j are represented in Eu,ℓ. Conditioned on S being successful at u, j, note that the denominator from the definition of TestMoveE does not vanish, and we can thereby define:

Definition 3.7. S is a good approximation at u, j if (defining ℓ as in Definition 3.6)

  | TestMoveEu,ℓ(π, V, W, u, j) − TestMove(π, V, W, u, j) | ≤ (1/2) ε |j − ρπ(u)| / log n.
S is a good approximation if it is successful and a good approximation at all u ∈ V, j ∈ [n] satisfying ⌈log |j − ρπ(u)|⌉ ∈ [B, L]. Using Chernoff bounds to ensure success and Hoeffding bounds to ensure good approximation, and a union bound:

Lemma 3.8. Except with probability O(n^{-4}), S is a good approximation.

Algorithm 1 SampleAndRank(V, W, ε)
1: n ← |V|
2: π ← expected O(1)-approximate solution to MFAST using O(n log n) W-queries in expectation, using QuickSort [2]
3: return SampleAndDecompose(V, W, ε, n, π)

Algorithm 2 SampleAndDecompose(V, W, ε, n, π)
1: N ← |V|
2: if N ≤ log n / log log n then
3:   return trivial partition {V}
4: end if
5: E ← random subset of O(ε^{-4} log n) elements from V2 (with repetitions)
6: C ← CE(π, V, W) (C is an additive O(ε²N²) approximation of C(π, V, W) w.p. ≥ 1 − n^{-4})
7: if C = Ω(ε²N²) then
8:   return trivial partition {V}
9: end if
10: π1 ← ApproxLocalImprove(V, W, π, ε, n)
11: k ← random integer in the range [N/3, 2N/3]
12: VL ← {v ∈ V : ρπ(v) ≤ k}, πL ← restriction of π1 to VL
13: VR ← V \ VL, πR ← restriction of π1 to VR
14: return concatenation of SampleAndDecompose(VL, W, ε, n, πL), SampleAndDecompose(VR, W, ε, n, πR)

Mutating the Pair Sample To Reflect a Single Element Move. Line 16 in ApproxLocalImprove requires elaboration. In lines 15-18 we sought (using S) an element u and position j such that moving u to j (giving rise to πu→j) would considerably improve the cost w.h.p. If such an element u existed, we executed the exchange π ← πu→j. Unfortunately, the sample ensemble S then becomes stale: even if S was a good approximation, it is no longer necessarily so w.r.t. the new value of π. We refresh it in line 16 by applying a transformation ϕu→j to S, resulting in a new sample ensemble ϕu→j(S) approximately distributed by Dπu→j. More precisely, ϕ (defined below) is such that

  ϕu→j(Dπ) = Dπu→j,   (3.1)

where the left hand side denotes the distribution obtained by drawing from Dπ and applying ϕu→j to the result. We now define ϕu→j.
Denoting ϕu→j(S) = S′ = {E′v,i : v ∈ V, i ∈ [B, L]}, we need to define each E′v,i.

Definition 3.9. Ev,i is interesting in the context of π and πu→j if the two sets T1, T2, defined as T1 = {x ∈ V : |ρπ(x) − ρπ(v)| ≤ 2^i} and T2 = {x ∈ V : |ρπu→j(x) − ρπu→j(v)| ≤ 2^i}, differ.

We set E′v,i = Ev,i for all v, i for which Ev,i is not interesting. Fix one interesting choice v, i. Let T1, T2 be as in Definition 3.9. It can be easily shown that each of T1 and T2 contains O(1) elements that are not contained in the other, and it can be assumed (using a simple clipping argument - omitted) that this number is exactly 1, hence |T1| = |T2|. Let X1 = T1 \ T2 and X2 = T2 \ T1. Fix any injection α : X1 → X2, and extend α : T1 → T2 so that α(x) = x for all x ∈ T1 ∩ T2. Finally, define E′v,i = {(v, α(x)) : (v, x) ∈ Ev,i}. For v = u we create E′v,i from scratch by repeating the loop in line 7 for that v. It is easy to see that (3.1) holds.

Algorithm 3 ApproxLocalImprove(V, W, π, ε, n) (Note: π used as both input and output)
1: N ← |V|, B ← ⌈log(Θ(εN/ log n))⌉, L ← ⌈log N⌉
2: if N = O(ε^{-3} log³ n) then
3:   return
4: end if
5: for v ∈ V do
6:   r ← ρπ(v)
7:   for i = B . . . L do
8:     Ev,i ← ∅
9:     for m = 1 . . . Θ(ε^{-2} log² n) do
10:      j ← integer chosen uniformly at random from [max{1, r − 2^i}, min{n, r + 2^i}]
11:      Ev,i ← Ev,i ∪ {(v, π(j))}
12:    end for
13:   end for
14: end for
15: while ∃u ∈ V and j ∈ [n] s.t. (setting ℓ := ⌈log |j − ρπ(u)|⌉): ℓ ∈ [B, L] and TestMoveEu,ℓ(π, V, W, u, j) > ε|j − ρπ(u)|/ log n do
16:   for v ∈ V, i ∈ [B, L], refresh Ev,i w.r.t. the move u → j using ϕu→j (Section 3)
17:   π ← πu→j
18: end while

By Lemma 3.8, the total variation distance between (Dπ | good approximation) and Dπu→j is O(n^{-4}). Using a simple chain rule argument:

Lemma 3.10. Fix π0 on V of size N, and fix u1, . . . , uk ∈ V and j1, . . . , jk ∈ [n]. Draw S0 from Dπ0, and define S1 = ϕu1→j1(S0), S2 = ϕu2→j2(S1), . . . , Sk = ϕuk→jk(Sk−1), and π1 = π0u1→j1, π2 = π1u2→j2, . . . , πk = πk−1uk→jk. Consider the random variable Sk conditioned on S0, S1, . .
. , Sk−1 being good approximations for π0, . . . , πk−1, respectively. Then the total variation distance between the distribution of Sk and the distribution (Dπk | πk) (corresponding to the process of obtaining πk and drawing from Dπk "from scratch") is at most O(kn^{-4}).

The difference between S and S′, defined as dist(S, S′) := | ∪_{v,i} Ev,i Δ E′v,i |, bounds the query complexity of computing mutations. The proof of the following has been omitted from this abstract.

Lemma 3.11. Assume S ∼ Dπ for some π, and S′ = ϕu→j(S). Then E[dist(S, S′)] = O(ε^{-3} log³ n).

Analysis of SampleAndDecompose. Various high probability events must occur in order for the algorithm guarantees to hold. Let E1 denote the event that the first Θ(n⁴) sample ensembles S1, S2, . . . created by ApproxLocalImprove, either in lines 5-14 or via mutations, are good approximations. By Lemmas 3.8 and 3.10, using a union bound, with constant probability (say, 0.99) this happens. Let E2 denote the event that the cost approximations obtained in line 5 of SampleAndDecompose are successful at all recursive calls. By Hoeffding tail bounds, this happens with probability 1 − O(n^{-4}) for each call; there are O(n log n) calls, hence we can lower bound the probability of success of all executions by 0.99. Concluding, the following holds with probability at least 0.97:

Assumption 3.12. Events E1 and E2 hold true.

We condition what follows on this assumption.6 Let π∗ denote the optimal permutation for the root call to SampleAndDecompose with V, W, ε. The permutation π is, by Assumption 3.5, a constant factor approximation for π∗. By the triangle inequality, dτ(π, π∗) ≤ C(π, V, W) + C(π∗, V, W), hence E[dτ(π, π∗)] = O(C(π∗, V, W)). From this, using (2.3), E[dfoot(π, π∗)] = O(C(π∗, V, W)). Now consider the recursion tree T of SampleAndDecompose. Denote by I the set of internal nodes, and by L the set of leaves (i.e. executions exiting from line 8).
For a call to SampleAndDecompose corresponding to a node X, denote the input arguments by (VX, W, ε, n, πX). Let L[X], R[X] denote the left and right children of X, respectively. Let kX denote the integer k from line 11 in the context of X ∈ I. Hence, by our definitions, VL[X], VR[X], πL[X] and πR[X] are precisely VL, VR, πL, πR from lines 12-13 in the context of node X. Take, as in line 1, NX = |VX|. Let π∗X denote the optimal MFAST solution for the instance (VX, W|VX).

(Footnote 6: This may bias some expectation upper bounds derived earlier and in what follows. This bias can multiply the estimates by at most 1/0.97, which can be absorbed in our O-notation.)

By E1 we conclude that the cost of πX u→j is always an actual improvement compared to πX (for the current value of πX, u and j in the iteration), and the improvement in cost is of magnitude at least Ω(ε|ρπX(u) − j|/ log n), which is Ω(ε²NX/ log² n) due to the use of B defined in line 1.7 But then the number of iterations of the while loop in line 15 of ApproxLocalImprove is O(ε^{-2} C(πX, VX, W|VX) log² n / NX) (otherwise the true cost of the running solution would go below 0). Since C(πX, VX, W|VX) ≤ (NX choose 2), the number of iterations is hence at most O(ε^{-2} NX log² n). By Lemma 3.11, the expected query complexity incurred by the call to ApproxLocalImprove is therefore O(ε^{-5} NX log⁵ n). Summing over the recursion tree, the total query complexity incurred by calls to ApproxLocalImprove is, in expectation, at most O(ε^{-5} n log⁶ n). Now consider the moment at which the while loop of ApproxLocalImprove terminates. Let π1X denote the permutation obtained at that point, returned to SampleAndDecompose in line 10. We classify the elements v ∈ VX into two families: VshortX denotes all u ∈ VX s.t. |ρπ1X(u) − ρπ∗X(u)| = O(εNX/ log n), and VlongX denotes VX \ VshortX.
We know, by assumption, that the last sample ensemble S used in ApproxLocalImprove was a good approximation, hence for all u ∈ VlongX:

  (*) TestMove(π1X, VX, W|VX, u, ρπ∗X(u)) = O(ε|ρπ1X(u) − ρπ∗X(u)|/ log n).

Following [23], we say for u ∈ VX that u crosses kX if [ρπ1X(u), ρπ∗X(u)] contains kX. Let VcrossX denote the (random) set of elements u ∈ VX that cross kX. We define a key quantity as in [23]:

  TX := Σ_{u∈VcrossX} TestMove(π1X, VX, W|VX, u, ρπ∗X(u)).

Following (*), the elements u ∈ VlongX can contribute at most O(ε Σ_{u∈VlongX} |ρπ1X(u) − ρπ∗X(u)|/ log n) to TX. This latter bound is, by definition, O(ε dfoot(π1X, π∗X)/ log n), which is, using (2.3), at most O(ε dτ(π1X, π∗X)/ log n). By the triangle inequality and the definition of π∗X, the last expression is O(ε C(π1X, VX, W|VX)/ log n). How much can elements in VshortX contribute to TX? The probability of each such element crossing kX is O(|ρπ1X(u) − ρπ∗X(u)|/NX). Hence, the total expected contribution is O(Σ_{u∈VshortX} |ρπ1X(u) − ρπ∗X(u)|²/NX). Under the constraints Σ_{u∈VshortX} |ρπ1X(u) − ρπ∗X(u)| ≤ dfoot(π1X, π∗X) and |ρπ1X(u) − ρπ∗X(u)| = O(εNX/ log n), this is O(dfoot(π1X, π∗X) εNX/(NX log n)) = O(dfoot(π1X, π∗X) ε/ log n). Again using (2.3) and the triangle inequality, the bound becomes O(ε C(π1X, VX, W|VX)/ log n). Combining the bounds for VlongX and VshortX, we conclude:

  (**) E_{kX}[TX] = O(ε C(π∗X, VX, W|VX)/ log n)

(the expectation is over the choice of kX). The bound (**) is the main improvement over [23], and should be compared with Lemma 3.2 there, stating (in our notation) TX = O(εC∗NX/(4n log n)). The latter bound is more restrictive than ours in certain cases, and obtaining it relies on a procedure that cannot be performed without having access to W in its entirety. (**), however, can be achieved using efficient querying of W, as we have shown. The remainder of the arguments leading to the proof of Theorem 3.1 closely follow those in Section 4 of [23]. The details have been omitted from this abstract.
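As a high-level illustration of the pipeline in this section, the control flow of SampleAndRank can be sketched as below. This is emphatically not the paper's exact procedure: the oracle is simulated, the constants are arbitrary, the cost check of lines 5-9 uses a crude Monte Carlo estimate, and the ApproxLocalImprove step is stubbed out entirely.

```python
import math
import random

random.seed(0)
QUERIES = 0
TRUTH = list(range(100))                 # hidden ground-truth order (simulation)

def W(u, v):
    """Simulated preference oracle: labels agree with TRUTH w.p. 0.9; unit cost."""
    global QUERIES
    QUERIES += 1
    good = TRUTH.index(u) < TRUTH.index(v)
    return int(good) if random.random() < 0.9 else int(not good)

def quicksort_rank(items):
    """Warm start: pivot-based ranking driven only by pairwise oracle calls."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    left, right = [], []
    for x in items:
        if x != pivot:
            (left if W(x, pivot) else right).append(x)
    return quicksort_rank(left) + [pivot] + quicksort_rank(right)

def sampled_cost(order, m=200):
    """Crude stand-in for line 6 of Algorithm 2: estimate cost from m pairs."""
    pos = {v: i for i, v in enumerate(order)}
    N = len(order)
    hits = 0
    for _ in range(m):
        u, v = random.sample(order, 2)
        hits += W(v, u) if pos[u] < pos[v] else W(u, v)
    return (N * (N - 1) / 2) * hits / m

def decompose(order, n, eps=0.25):
    """Algorithm 2 skeleton: emit small or chaotic blocks, else split randomly.
    (ApproxLocalImprove, which would refine `order` here, is omitted.)"""
    N = len(order)
    if N <= math.log(n) / math.log(math.log(n)):   # small block: emit as-is
        return [order]
    if sampled_cost(order) >= eps ** 2 * N * N:    # chaotic block: emit as-is
        return [order]
    k = random.randint(N // 3, 2 * N // 3)         # random split point
    return decompose(order[:k], n, eps) + decompose(order[k:], n, eps)

pi0 = quicksort_rank(list(range(100)))
blocks = decompose(pi0, 100)
assert sorted(x for b in blocks for x in b) == list(range(100))
```

The returned blocks form an ordered decomposition of V; in the real algorithm their within-block orders would then be finished off by uniform sampling as in Lemma 2.5.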
4 Future Work

We presented a statistical learning theoretical active learning result for pairwise ranking. The main vehicle was a query (and time) efficient decomposition procedure, reducing the problem to smaller ones in which the optimal loss is high and uniform sampling suffices. The main drawback of our result is the inability to use it in order to search in a limited subspace of permutations. A typical example of such a subspace is the case in which each element v ∈ V has a corresponding feature vector in a real vector space, and we only seek permutations induced by linear score functions. In followup work, Ailon, Begleiter and Ezra [1] show a novel technique achieving a slightly better query complexity than here with a simpler proof, while also admitting search in restricted spaces.

Acknowledgements

The author gratefully acknowledges the help of Warren Schudy with the derivation of some of the bounds in this work. Special thanks to Ron Begleiter for helpful comments. Apologies for omitting references to much relevant work that could not fit in this version's bibliography.

(Footnote 7: This also bounds the number of times a sample ensemble is created by O(n⁴), as required by E1.)

References

[1] Nir Ailon, Ron Begleiter, and Esther Ezra, A new active learning scheme with applications to learning to rank from pairwise preferences, arxiv.org/abs/1110.2136 (2011).
[2] Nir Ailon, Moses Charikar, and Alantha Newman, Aggregating inconsistent information: Ranking and clustering, J. ACM 55 (2008), no. 5.
[3] Nir Ailon and Mehryar Mohri, Preference based learning to rank, vol. 80, 2010, pp. 189–212.
[4] Nir Ailon and Kira Radinsky, Ranking from pairs and triplets: Information quality, evaluation methods and query complexity, WSDM, 2011.
[5] Noga Alon, Ranking tournaments, SIAM J. Discret. Math. 20 (2006), no. 1, 137–142.
[6] M. F. Balcan, N. Bansal, A. Beygelzimer, D. Coppersmith, J. Langford, and G. B.
Sorkin, Robust reductions from ranking to classification, Machine Learning 72 (2008), no. 1-2, 139– 153. [7] Maria-Florina Balcan, Alina Beygelzimer, and John Langford, Agnostic active learning, J. Comput. Syst. Sci. 75 (2009), no. 1, 78–89. [8] Maria-Florina Balcan, Steve Hanneke, and Jennifer Vaughan, The true sample complexity of active learning, Machine Learning 80 (2010), 111–139. [9] A. Beygelzimer, J. Langford, and P. Ravikumar, Error-correcting tournaments, ALT, 2009, pp. 247–262. [10] M. Braverman and E. Mossel, Noisy sorting without resampling, SODA: Proceedings of the 19th annual ACM-SIAM symposium on Discrete algorithms, 2008, pp. 268–276. [11] B. Carterette, P. N. Bennett, D. Maxwell Chickering, and S. T. Dumais, Here or there: Preference judgments for relevance, ECIR, 2008. [12] William W. Cohen, Robert E. Schapire, and Yoram Singer, Learning to order things, NIPS ’97, 1998, pp. 451–457. [13] D. Cohn, L. Atlas, and R. Ladner, Improving generalization with active learning, Machine Learning 15 (1994), no. 2, 201–221. [14] A. Culotta and A. McCallum, Reducing labeling effort for structured prediction tasks, AAAI: Proceedings of the 20th national conference on Artificial intelligence, 2005, pp. 746–751. [15] S. Dasgupta, Coarse sample complexity bounds for active learning, Advances in Neural Information Processing Systems 18, 2005, pp. 235–242. [16] S. Dasgupta, A. Tauman Kalai, and C. Monteleoni, Analysis of perceptron-based active learning, Journal of Machine Learning Research 10 (2009), 281–299. [17] Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni, A general agnostic active learning algorithm, NIPS, 2007. [18] Persi Diaconis and R. L. Graham, Spearman’s footrule as a measure of disarray, Journal of the Royal Statistical Society. Series B (Methodological) 39 (1977), no. 2, pp. 262–268. [19] U. Feige, D. Peleg, P. Raghavan, and E. 
Upfal, Computing with unreliable information, STOC: Proceedings of the 22nd annual ACM symposium on Theory of computing, 1990, pp. 128–137. [20] Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby, Selective sampling using the query by committee algorithm, Mach. Learn. 28 (1997), no. 2-3, 133–168. [21] Steve Hanneke, A bound on the label complexity of agnostic active learning, ICML, 2007, pp. 353–360. [22] Eyke H¨ullermeier, Johannes F¨urnkranz, Weiwei Cheng, and Klaus Brinker, Label ranking by learning pairwise preferences, Artif. Intell. 172 (2008), no. 16-17, 1897–1916. [23] Claire Kenyon-Mathieu and Warren Schudy, How to rank with few errors, STOC, 2007, pp. 95– 103. [24] Dan Roth and Kevin Small, Margin-based active learning for structured output spaces, 2006. [25] V. N. Vapnik and A. Ya. Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities, Theory of Prob. and its Applications 16 (1971), no. 2, 264–280. [26] F. Xia, T-Y Liu, J. Wang, W. Zhang, and H. Li, Listwise approach to learning to rank: theory and algorithm, ICML ’08, 2008, pp. 1192–1199. 9
Hashing Algorithms for Large-Scale Learning

Ping Li (Cornell University, pingli@cornell.edu), Anshumali Shrivastava (Cornell University, anshu@cs.cornell.edu), Joshua Moore (Cornell University, jlmo@cs.cornell.edu), Arnd Christian König (Microsoft Research, chrisko@microsoft.com)

Abstract

Minwise hashing is a standard technique in the context of search for efficiently computing set similarities. The recent development of b-bit minwise hashing provides a substantial improvement by storing only the lowest b bits of each hashed value. In this paper, we demonstrate that b-bit minwise hashing can be naturally integrated with linear learning algorithms such as linear SVM and logistic regression, to solve large-scale and high-dimensional statistical learning tasks, especially when the data do not fit in memory. We compare b-bit minwise hashing with the Count-Min (CM) and Vowpal Wabbit (VW) algorithms, which have essentially the same variances as random projections. Our theoretical and empirical comparisons illustrate that b-bit minwise hashing is significantly more accurate (at the same storage cost) than VW (and random projections) for binary data.

1 Introduction

With the advent of the Internet, many machine learning applications are faced with very large and inherently high-dimensional datasets, resulting in challenges in scaling up training algorithms and storing the data. Especially in the context of search and machine translation, corpus sizes used in industrial practice have long exceeded the main memory capacity of a single machine. For example, [33] discusses training sets with $10^{11}$ items and $10^9$ distinct features, requiring novel algorithmic approaches and architectures.
As a consequence, there has been a renewed emphasis on scaling up machine learning techniques by using massively parallel architectures; however, methods relying solely on parallelism can be expensive (both with regards to hardware requirements and energy costs) and often induce significant additional communication and data distribution overhead. This work approaches the challenges posed by large datasets by leveraging techniques from the area of similarity search [2], where similar increases in data sizes have made the storage and computational requirements for computing exact distances prohibitive, thus making data representations that allow compact storage and efficient approximate similarity computation necessary. The method of b-bit minwise hashing [26–28] is a recent technique for efficiently (in both time and space) computing resemblances among extremely high-dimensional (e.g., $2^{64}$) binary vectors. In this paper, we show that b-bit minwise hashing can be seamlessly integrated with linear Support Vector Machine (SVM) [13,18,20,31,35] and logistic regression solvers.

1.1 Ultra High-Dimensional Large Datasets and Memory Bottlenecks

In the context of search, a standard procedure to represent documents (e.g., Web pages) is to use w-shingles (i.e., w contiguous words), where $w \ge 5$ in several studies [6,7,14]. This procedure can generate datasets of extremely high dimensions. For example, suppose we only consider $10^5$ common English words. Using $w = 5$ may require the size of the dictionary $\Omega$ to be $D = |\Omega| = 10^{25} = 2^{83}$. In practice, $D = 2^{64}$ often suffices, as the number of available documents may not be large enough to exhaust the dictionary. For w-shingle data, normally only absence/presence (0/1) information is used: it is known that word frequency distributions within documents approximately follow a power-law [3], meaning that most single terms occur rarely, making it unlikely that a w-shingle occurs more than once in a document.
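To make the representation above concrete, here is a minimal sketch of turning a document into its set of w-shingles. The toy document and the word-level tokenization by whitespace are illustrative assumptions for this example, not details from the paper:

```python
def shingle_set(text, w=5):
    """Return the set of w-shingles (w contiguous words) of a document."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

doc = "the quick brown fox jumps over the lazy dog"
shingles = shingle_set(doc, w=5)
# A 9-word document has at most 9 - 5 + 1 = 5 distinct 5-shingles.
print(len(shingles))
```

Each shingle would then be hashed into the dictionary $\Omega$, and only its presence (0/1) recorded, per the power-law argument above.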
Interestingly, even when the data are not too high-dimensional, empirical studies [8,17,19] achieved good performance with binary-quantized data. When the data can fit in memory, linear SVM training is often extremely efficient after the data are loaded into the memory. It is, however, often the case that, for very large datasets, the data loading time dominates the computing time for solving the SVM problem [35]. A more severe problem arises when the data cannot fit in memory. This situation can be common in practice. The publicly available webspam dataset (in LIBSVM format) needs about 24GB disk space, which exceeds the memory capacity of many desktop PCs. Note that webspam, which contains only 350,000 documents represented by 3-shingles, is still very small compared to industry applications [33].

1.2 Our Proposal

We propose a solution which leverages b-bit minwise hashing. Our approach assumes the data vectors are binary, high-dimensional, and relatively sparse, which is generally true of text documents represented via shingles. We apply b-bit minwise hashing to obtain a compact representation of the original data. In order to use the technique for efficient learning, we have to address several issues:

• We need to prove that the matrices generated by b-bit minwise hashing are positive definite, which will provide the solid foundation for our proposed solution.
• If we use b-bit minwise hashing to estimate the resemblance, which is nonlinear, how can we effectively convert this nonlinear problem into a linear problem?
• Compared to other hashing techniques such as random projections, Count-Min (CM) sketch [11], or Vowpal Wabbit (VW) [32,34], does our approach exhibit advantages?

It turns out that our proof in the next section that b-bit hashing matrices are positive definite naturally provides the construction for converting the otherwise nonlinear SVM problem into a linear SVM.
2 Review of Minwise Hashing and b-Bit Minwise Hashing

Minwise hashing [6,7] has been successfully applied to a wide range of real-world problems [4,6,7,9,10,12,15,16,30] for efficiently computing set similarities. Minwise hashing mainly works well with binary data, which can be viewed either as 0/1 vectors or as sets. Given two sets $S_1, S_2 \subseteq \Omega = \{0, 1, 2, ..., D-1\}$, a widely used measure of similarity is the resemblance $R$:

$R = \frac{|S_1 \cap S_2|}{|S_1 \cup S_2|} = \frac{a}{f_1 + f_2 - a}$, where $f_1 = |S_1|$, $f_2 = |S_2|$, $a = |S_1 \cap S_2|$.  (1)

Applying a random permutation $\pi: \Omega \to \Omega$ on $S_1$ and $S_2$, the collision probability is simply

$\Pr\big(\min(\pi(S_1)) = \min(\pi(S_2))\big) = \frac{|S_1 \cap S_2|}{|S_1 \cup S_2|} = R$.  (2)

One can repeat the permutation $k$ times, $\pi_1, \pi_2, ..., \pi_k$, to estimate $R$ without bias. The common practice is to store each hashed value, e.g., $\min(\pi(S_1))$ and $\min(\pi(S_2))$, using 64 bits [14]. The storage (and computational) cost will be prohibitive in truly large-scale (industry) applications [29]. b-bit minwise hashing [27] provides a strikingly simple solution to this (storage and computational) problem by storing only the lowest $b$ bits (instead of 64 bits) of each hashed value. For convenience, denote $z_1 = \min(\pi(S_1))$ and $z_2 = \min(\pi(S_2))$, and denote by $z_1^{(b)}$ ($z_2^{(b)}$) the integer value corresponding to the lowest $b$ bits of $z_1$ ($z_2$). For example, if $z_1 = 7$, then $z_1^{(2)} = 3$.

Theorem 1 [27] Assume $D$ is large. Then

$P_b = \Pr\big(z_1^{(b)} = z_2^{(b)}\big) = C_{1,b} + (1 - C_{2,b})\,R$,  (3)

where $r_1 = f_1/D$, $r_2 = f_2/D$, $f_1 = |S_1|$, $f_2 = |S_2|$,
$C_{1,b} = A_{1,b}\frac{r_2}{r_1+r_2} + A_{2,b}\frac{r_1}{r_1+r_2}$, $C_{2,b} = A_{1,b}\frac{r_1}{r_1+r_2} + A_{2,b}\frac{r_2}{r_1+r_2}$,
$A_{1,b} = \frac{r_1 [1-r_1]^{2^b-1}}{1 - [1-r_1]^{2^b}}$, $A_{2,b} = \frac{r_2 [1-r_2]^{2^b-1}}{1 - [1-r_2]^{2^b}}$. □

This (approximate) formula (3) is remarkably accurate, even for very small $D$; see Figure 1 in [25].
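The collision probability (2) is easy to verify by simulation. The following sketch estimates $R$ from $k$ explicit random permutations of a toy universe; the universe size, sets, and $k$ are illustrative choices of ours, not values from the paper:

```python
import random

random.seed(0)
D = 500
S1 = set(range(0, 200))      # f1 = 200
S2 = set(range(100, 300))    # f2 = 200, a = 100, so R = 100/300 = 1/3
k = 2000

universe = list(range(D))
matches = 0
for _ in range(k):
    pi = universe[:]
    random.shuffle(pi)                       # pi[i] = image of element i
    if min(pi[i] for i in S1) == min(pi[i] for i in S2):
        matches += 1

R_hat = matches / k                          # unbiased estimate of R, by (2)
print(abs(R_hat - 1 / 3) < 0.05)
```

The standard deviation of this estimate is $\sqrt{R(1-R)/k} \approx 0.01$ here, so the printed check passes comfortably. Storing only the lowest $b$ bits of each minimum, rather than the full hashed values, is exactly the step that Theorem 1 corrects for.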
We can then estimate $P_b$ (and $R$) from $k$ independent permutations:

$\hat{R}_b = \frac{\hat{P}_b - C_{1,b}}{1 - C_{2,b}}$, $\mathrm{Var}\big(\hat{R}_b\big) = \frac{\mathrm{Var}\big(\hat{P}_b\big)}{[1 - C_{2,b}]^2} = \frac{1}{k}\,\frac{[C_{1,b} + (1 - C_{2,b})R]\,[1 - C_{1,b} - (1 - C_{2,b})R]}{[1 - C_{2,b}]^2}$.  (4)

It turns out that our method only needs $\hat{P}_b$ for linear learning, i.e., there is no need to explicitly estimate $R$.

3 Kernels from Minwise Hashing and b-Bit Minwise Hashing

Definition: A symmetric $n \times n$ matrix $K$ satisfying $\sum_{ij} c_i c_j K_{ij} \ge 0$ for all real vectors $c$ is called positive definite (PD). Note that here we do not differentiate PD from nonnegative definite.

Theorem 2 Consider $n$ sets $S_1, ..., S_n \subseteq \Omega = \{0, 1, ..., D-1\}$. Apply one permutation $\pi$ to each set. Define $z_i = \min\{\pi(S_i)\}$ and $z_i^{(b)}$ the lowest $b$ bits of $z_i$. The following three matrices are PD.
1. The resemblance matrix $R \in \mathbb{R}^{n\times n}$, whose $(i,j)$-th entry is the resemblance between set $S_i$ and set $S_j$: $R_{ij} = \frac{|S_i \cap S_j|}{|S_i \cup S_j|} = \frac{|S_i \cap S_j|}{|S_i| + |S_j| - |S_i \cap S_j|}$.
2. The minwise hashing matrix $M \in \mathbb{R}^{n\times n}$: $M_{ij} = 1\{z_i = z_j\}$.
3. The b-bit minwise hashing matrix $M^{(b)} \in \mathbb{R}^{n\times n}$: $M^{(b)}_{ij} = 1\{z_i^{(b)} = z_j^{(b)}\}$.

Consequently, consider $k$ independent permutations and denote by $M^{(b)}_{(s)}$ the b-bit minwise hashing matrix generated by the $s$-th permutation. Then the summation $\sum_{s=1}^k M^{(b)}_{(s)}$ is also PD.

Proof: A matrix $A$ is PD if it can be written as an inner product $B^T B$. Because

$M_{ij} = 1\{z_i = z_j\} = \sum_{t=0}^{D-1} 1\{z_i = t\} \times 1\{z_j = t\}$,  (5)

$M_{ij}$ is the inner product of two $D$-dimensional vectors. Thus, $M$ is PD. Similarly, $M^{(b)}$ is PD because $M^{(b)}_{ij} = \sum_{t=0}^{2^b-1} 1\{z_i^{(b)} = t\} \times 1\{z_j^{(b)} = t\}$. $R$ is PD because $R_{ij} = \Pr\{M_{ij} = 1\} = E(M_{ij})$, $M_{ij}$ is the $(i,j)$-th element of the PD matrix $M$, and expectation is a linear operation. □

4 Integrating b-Bit Minwise Hashing with (Linear) Learning Algorithms

Linear algorithms such as linear SVM and logistic regression have become very powerful and extremely popular. Representative software packages include SVMperf [20], Pegasos [31], Bottou's SGD SVM [5], and LIBLINEAR [13]. Given a dataset $\{(x_i, y_i)\}_{i=1}^n$, $x_i \in \mathbb{R}^D$, $y_i \in \{-1, 1\}$.
The L2-regularized linear SVM solves the following optimization problem:

$\min_w\ \frac{1}{2} w^T w + C \sum_{i=1}^n \max\big\{1 - y_i w^T x_i,\ 0\big\}$,  (6)

and the L2-regularized logistic regression solves a similar problem:

$\min_w\ \frac{1}{2} w^T w + C \sum_{i=1}^n \log\big(1 + e^{-y_i w^T x_i}\big)$.  (7)

Here $C > 0$ is a regularization parameter. Since our purpose is to demonstrate the effectiveness of our proposed scheme using b-bit hashing, we simply provide results for a wide range of $C$ values and assume that the best performance is achievable if we conduct cross-validations. In our approach, we apply $k$ random permutations on each feature vector $x_i$ and store the lowest $b$ bits of each hashed value. This way, we obtain a new dataset which can be stored using merely $nbk$ bits. At run-time, we expand each new data point into a $2^b \times k$-length vector with exactly $k$ 1's. For example, suppose $k = 3$ and the hashed values are originally {12013, 25964, 20191}, whose binary digits are {010111011101101, 110010101101100, 100111011011111}. Consider $b = 2$. Then the binary digits are stored as {01, 00, 11} (which corresponds to {1, 0, 3} in decimals). At run-time, we need to expand them into a vector of length $2^b k = 12$: {0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0}, which will be the new feature vector fed to a solver such as LIBLINEAR. Clearly, this expansion is directly inspired by the proof that the b-bit minwise hashing matrix is PD in Theorem 2.

5 Experimental Results on Webspam Dataset

Our experiment settings closely follow the work in [35]. They conducted experiments on three datasets, of which only the webspam dataset is public and reasonably high-dimensional ($n = 350000$, $D = 16609143$). Therefore, our experiments focus on webspam. Following [35], we randomly selected 20% of the samples for testing and used the remaining 80% for training. We chose LIBLINEAR as the workhorse to demonstrate the effectiveness of our algorithm.
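The run-time expansion described in Section 4 above can be sketched as follows. The position of the 1 within each $2^b$ block is chosen to match the paper's worked example; any other fixed one-hot convention would yield the same inner products:

```python
def expand(hashed_values, b):
    """Expand the lowest b bits of each of the k hashed values into a
    binary vector of length 2^b * k containing exactly k ones."""
    block = 2 ** b
    vec = [0] * (block * len(hashed_values))
    for s, z in enumerate(hashed_values):
        v = z % block                        # lowest b bits of the hash
        vec[s * block + (block - 1 - v)] = 1
    return vec

# The paper's worked example: k = 3, b = 2, hashed values {12013, 25964, 20191}
x = expand([12013, 25964, 20191], b=2)
print(x)  # [0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0], as in the text
```

Note that the inner product of two such expanded vectors counts the permutations on which the two data points' b-bit values collide, i.e., it is the $(i,j)$-th entry of $\sum_s M^{(b)}_{(s)}$ from Theorem 2, which is why a linear solver on these features implicitly uses the (PD) b-bit resemblance kernel.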
All experiments were conducted on workstations with Xeon(R) CPU (W5590@3.33GHz) and 48GB RAM, under Windows 7. Thus, in our case, the original data (about 24GB in LIBSVM format) fit in memory. In applications where the data do not fit in memory, we expect that b-bit hashing will be even more substantially advantageous, because the hashed data are relatively very small. In fact, our experimental results will show that for this dataset, using $k = 200$ and $b = 8$ can achieve similar testing accuracies as using the original data. The effective storage for the reduced dataset (with 350K examples, using $k = 200$ and $b = 8$) would be merely about 70MB.

5.1 Experimental Results on Nonlinear (Kernel) SVM

We implemented a new resemblance kernel function and tried to use LIBSVM to train an SVM on the webspam dataset. The training time well exceeded 24 hours. Fortunately, using b-bit minwise hashing to estimate the resemblance kernels provides a substantial improvement. For example, with $k = 150$, $b = 4$, and $C = 1$, the training time is about 5185 seconds and the testing accuracy is quite close to the best results given by LIBLINEAR on the original webspam data.

5.2 Experimental Results on Linear SVM

There is an important tuning parameter $C$. To capture the best performance and ensure repeatability, we experimented with a wide range of $C$ values (from $10^{-3}$ to $10^2$) with fine spacings in [0.1, 10]. We experimented with $k = 10$ to $k = 500$, and $b = 1, 2, 4, 6, 8, 10$, and 16. Figure 1 (average) and Figure 2 (std, standard deviation) provide the test accuracies. Figure 1 demonstrates that using $b \ge 8$ and $k \ge 200$ achieves similar test accuracies as using the original data. Since our method is randomized, we repeated every experiment 50 times. We report both the mean and std values. Figure 2 illustrates that the stds are very small, especially with $b \ge 4$. In other words, our algorithm produces stable predictions. For this dataset, the best performances were usually achieved at $C \ge 1$.
[Figure 1: SVM test accuracy on webspam (averaged over 50 repetitions) as a function of $C$, for $k = 30$ to $500$ and $b = 1$ to $16$. With $k \ge 200$ and $b \ge 8$, b-bit hashing achieves very similar accuracies as using the original data (dashed, red if color is available).]

[Figure 2: SVM test accuracy (std). The standard deviations are computed from 50 repetitions.]
When $b \ge 8$, the standard deviations become extremely small (e.g., 0.02%). Compared with the original training time (about 100 seconds), Figure 3 (upper panels) shows that our method only needs about 3 seconds (near $C = 1$). Note that our reported training time did not include data loading (about 12 minutes for the original data and 10 seconds for the hashed data). Compared with the original testing time (about 150 seconds), Figure 3 (bottom panels) shows that our method needs merely about 2 seconds. Note that the testing time includes the data loading time, as designed by LIBLINEAR. The efficiency of testing may be very important in practice, for example, when the classifier is deployed in a user-facing application (such as search), while the cost of training or preprocessing may be less critical and can be conducted off-line.

[Figure 3: SVM training time (upper panels) and testing time (bottom panels). The original costs are plotted using dashed (red, if color is available) curves.]

5.3 Experimental Results on Logistic Regression

Figure 4 presents the test accuracies and training time using logistic regression.
Again, with $k \ge 200$ and $b \ge 8$, b-bit minwise hashing can achieve similar test accuracies as using the original data. The training time is substantially reduced, from about 1000 seconds to only about 30 seconds.

[Figure 4: Logistic regression test accuracy (upper panels) and training time (bottom panels).]

In summary, it appears b-bit hashing is highly effective in reducing the data size and speeding up the training (and testing), for both SVM and logistic regression. We notice that when using $b = 16$, the training time can be much larger than when using $b \le 8$. Interestingly, we find that b-bit hashing can be easily combined with Vowpal Wabbit (VW) [34] to further reduce the training time when $b$ is large.
6 Random Projections, Count-Min (CM) Sketch, and Vowpal Wabbit (VW)

Random projections [1,24], Count-Min (CM) sketch [11], and Vowpal Wabbit (VW) [32,34], as popular hashing algorithms for estimating inner products for high-dimensional datasets, are naturally applicable in large-scale learning. In fact, those methods are not limited to binary data. Interestingly, the three methods all have essentially the same variances. Note that in this paper, we use "VW" specifically for the hashing algorithm in [34], not the influential "VW" online learning platform.

6.1 Random Projections

Denote the first two rows of a data matrix by $u_1, u_2 \in \mathbb{R}^D$. The task is to estimate the inner product $a = \sum_{i=1}^D u_{1,i} u_{2,i}$. The general idea is to multiply the data vectors by a random matrix $\{r_{ij}\} \in \mathbb{R}^{D\times k}$, where $r_{ij}$ is sampled i.i.d. from the following generic distribution [24]:

$E(r_{ij}) = 0$, $\mathrm{Var}(r_{ij}) = 1$, $E(r_{ij}^3) = 0$, $E(r_{ij}^4) = s$, $s \ge 1$.  (8)

Note that $\mathrm{Var}(r_{ij}^2) = E(r_{ij}^4) - E^2(r_{ij}^2) = s - 1 \ge 0$. This generates two $k$-dimensional vectors, $v_1$ and $v_2$:

$v_{1,j} = \sum_{i=1}^D u_{1,i} r_{ij}$, $v_{2,j} = \sum_{i=1}^D u_{2,i} r_{ij}$, $j = 1, 2, ..., k$.  (9)

The general family of distributions (8) includes the standard normal distribution (in this case, $s = 3$) and the "sparse projection" distribution specified as $r_{ij} = \sqrt{s} \times \{1$ with prob. $\frac{1}{2s}$; $0$ with prob. $1 - \frac{1}{s}$; $-1$ with prob. $\frac{1}{2s}\}$. [24] provided the following unbiased estimator $\hat{a}_{rp,s}$ of $a$ and the general variance formula:

$\hat{a}_{rp,s} = \frac{1}{k} \sum_{j=1}^k v_{1,j} v_{2,j}$, $E(\hat{a}_{rp,s}) = a = \sum_{i=1}^D u_{1,i} u_{2,i}$,  (10)

$\mathrm{Var}(\hat{a}_{rp,s}) = \frac{1}{k}\left[\sum_{i=1}^D u_{1,i}^2 \sum_{i=1}^D u_{2,i}^2 + a^2 + (s - 3)\sum_{i=1}^D u_{1,i}^2 u_{2,i}^2\right]$,  (11)

which means $s = 1$ achieves the smallest variance. The only elementary distribution we know that satisfies (8) with $s = 1$ is the two-point distribution in $\{-1, 1\}$ with equal probabilities. [23] proposed an improved estimator for random projections as the solution to a cubic equation. Because it cannot be written as an inner product, that estimator cannot be used for linear learning.
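A minimal sketch of the estimator (10) with the variance-optimal $s = 1$ distribution follows; the toy vectors and the values of $D$ and $k$ are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 1000, 5000
u1, u2 = rng.random(D), rng.random(D)    # toy dense data vectors
a = float(u1 @ u2)                       # true inner product

# s = 1: projection entries drawn from {-1, +1} with equal probability
R = rng.choice([-1.0, 1.0], size=(D, k))
v1, v2 = u1 @ R, u2 @ R                  # projected vectors, Eq. (9)
a_hat = float(v1 @ v2) / k               # unbiased estimator, Eq. (10)
print(abs(a_hat - a) / a < 0.15)
```

Per (11), the relative standard deviation here is a few percent at $k = 5000$, so the printed check holds with ample margin.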
6.2 Count-Min (CM) Sketch and Vowpal Wabbit (VW)

Again, in this paper, "VW" always refers to the hashing algorithm in [34]. VW may be viewed as a "bias-corrected" version of the Count-Min (CM) sketch [11]. In the original CM algorithm, the key step is to independently and uniformly hash elements of the data vectors to $k$ buckets, and the hashed value is the sum of the elements in the bucket. That is, $h(i) = j$ with probability $\frac{1}{k}$, where $j \in \{1, 2, ..., k\}$. Writing $I_{ij} = 1$ if $h(i) = j$ and $0$ otherwise, we can express the hashed data as

$w_{1,j} = \sum_{i=1}^D u_{1,i} I_{ij}$, $w_{2,j} = \sum_{i=1}^D u_{2,i} I_{ij}$.  (12)

The estimate $\hat{a}_{cm} = \sum_{j=1}^k w_{1,j} w_{2,j}$ is (severely) biased for estimating inner products. The original paper [11] suggested a "count-min" step for positive data, by generating multiple independent estimates $\hat{a}_{cm}$ and taking the minimum as the final estimate. That step can reduce but cannot remove the bias. Note that the bias can be easily removed by using $\frac{k}{k-1}\big[\hat{a}_{cm} - \frac{1}{k}\sum_{i=1}^D u_{1,i} \sum_{i=1}^D u_{2,i}\big]$. [34] proposed a creative method for bias-correction, which consists of pre-multiplying (element-wise) the original data vectors with a random vector whose entries are sampled i.i.d. from the two-point distribution in $\{-1, 1\}$ with equal probabilities. Here, we consider the general distribution (8). After applying multiplication and hashing on $u_1$ and $u_2$, the resultant vectors $g_1$ and $g_2$ are

$g_{1,j} = \sum_{i=1}^D u_{1,i} r_i I_{ij}$, $g_{2,j} = \sum_{i=1}^D u_{2,i} r_i I_{ij}$, $j = 1, 2, ..., k$,  (13)

where $E(r_i) = 0$, $E(r_i^2) = 1$, $E(r_i^3) = 0$, $E(r_i^4) = s$. We have the following theorem.

Theorem 3

$\hat{a}_{vw,s} = \sum_{j=1}^k g_{1,j} g_{2,j}$, $E(\hat{a}_{vw,s}) = \sum_{i=1}^D u_{1,i} u_{2,i} = a$,  (14)

$\mathrm{Var}(\hat{a}_{vw,s}) = (s - 1)\sum_{i=1}^D u_{1,i}^2 u_{2,i}^2 + \frac{1}{k}\left[\sum_{i=1}^D u_{1,i}^2 \sum_{i=1}^D u_{2,i}^2 + a^2 - 2\sum_{i=1}^D u_{1,i}^2 u_{2,i}^2\right]$. □  (15)

Interestingly, the variance (15) says we do need $s = 1$; otherwise the additional term $(s-1)\sum_{i=1}^D u_{1,i}^2 u_{2,i}^2$ will not vanish even as the sample size $k \to \infty$.
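The VW construction (13)–(14) can be sketched in a few lines; the toy vectors and the values of $D$ and $k$ are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 1000, 200
u1, u2 = rng.random(D), rng.random(D)    # toy data vectors
a = float(u1 @ u2)                       # true inner product

r = rng.choice([-1.0, 1.0], size=D)      # i.i.d. sign flips (s = 1)
h = rng.integers(0, k, size=D)           # h(i): bucket of coordinate i
# Eq. (13): bucket-wise sums of the sign-flipped coordinates
g1 = np.bincount(h, weights=u1 * r, minlength=k)
g2 = np.bincount(h, weights=u2 * r, minlength=k)
a_hat = float(g1 @ g2)                   # unbiased estimator, Eq. (14)
print(abs(a_hat - a) / a < 0.5)
```

Dropping the sign vector `r` recovers the biased CM estimate (12); the signs are exactly what makes the cross-bucket collision terms cancel in expectation.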
In other words, the choice of random distribution in VW is essentially the only option if we want to remove the bias by pre-multiplying the data vectors (element-wise) with a vector of random variables. Of course, once we let $s = 1$, the variance (15) becomes identical to the variance of random projections (11).

7 Comparing b-Bit Minwise Hashing with VW (and Random Projections)

We implemented VW and tested it on the same webspam dataset. Figure 5 shows that b-bit minwise hashing is substantially more accurate (at the same sample size $k$) and requires significantly less training time (to achieve the same accuracy). Basically, 8-bit minwise hashing with $k = 200$ achieves similar test accuracies as VW with $k = 10^4 \sim 10^6$ (note that we only stored the non-zeros).

[Figure 5: Test accuracies and training times (SVM and logistic regression), VW versus 8-bit minwise hashing, for $C = 0.01, 0.1, 1, 10, 100$. The dashed (red if color is available) curves represent b-bit minwise hashing results (only for $k \le 500$), while the solid curves represent VW.]

This empirical finding is not surprising, because the variance of b-bit hashing is usually substantially smaller than the variance of VW (and random projections).
In the technical report (arXiv:1106.0967, which also includes the complete proofs of the theorems presented in this paper), we show that, at the same storage cost, b-bit hashing usually improves VW by 10- to 100-fold, assuming each sample of VW needs 32 bits to store. Of course, even if VW only stores each sample using 16 bits, an improvement of 5- to 50-fold would still be very substantial. There is one interesting issue here. Unlike random projections (and minwise hashing), VW is a sparsity-preserving algorithm, meaning that in the resultant sample vector of length $k$, the number of non-zeros will not exceed the number of non-zeros in the original vector. In fact, it is easy to see that the fraction of zeros in the resultant vector would be (at least) $\big(1 - \frac{1}{k}\big)^c \approx \exp\big(-\frac{c}{k}\big)$, where $c$ is the number of non-zeros in the original data vector. In this paper, we mainly focus on the scenario in which $c \gg k$, i.e., we use b-bit minwise hashing or VW for the purpose of data reduction. However, in some cases, we care about $c \ll k$, because VW is also an excellent tool for compact indexing. In fact, our b-bit minwise hashing scheme for linear learning may face such an issue.

8 Combining b-Bit Minwise Hashing with VW

In Figures 3 and 4, when $b = 16$, the training time becomes substantially larger than for $b \le 8$. Recall that at run-time we expand the b-bit minwise hashed data to sparse binary vectors of length $2^b k$ with exactly $k$ 1's. When $b = 16$, the vectors are very sparse. On the other hand, once we have expanded the vectors, the task is merely computing inner products, for which we can use VW. Therefore, at run-time, after we have generated the sparse binary vectors of length $2^b k$, we hash them using VW with sample size $m$ (to differentiate from $k$). How large should $m$ be? Theorem 4 may provide an insight. Recall Section 2 provides the estimator, denoted by $\hat{R}_b$, of the resemblance $R$, using b-bit minwise hashing.
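As a quick numeric check of the sparsity approximation above (with illustrative values of $c$ and $k$ chosen by us for the $c \ll k$ regime):

```python
import math

# c non-zeros hashed uniformly into k buckets: expected fraction of
# empty buckets is (1 - 1/k)^c, well approximated by exp(-c/k).
c, k = 50, 1000
exact = (1 - 1 / k) ** c
approx = math.exp(-c / k)
print(abs(exact - approx) < 1e-3)
```

Here both quantities are about 0.95, i.e., roughly 95% of the $k$ buckets stay empty, illustrating why VW preserves sparsity when $c \ll k$.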
Now, suppose we first apply VW hashing with size $m$ on the binary vector of length $2^b k$ before estimating $R$, which will introduce some additional randomness. We denote the new estimator by $\hat{R}_{b,vw}$. Theorem 4 provides its theoretical variance.

[Figure 6: We apply VW hashing on top of the binary vectors (of length $2^b k$) generated by b-bit hashing, with size $m = 2^0 k, 2^1 k, 2^2 k, 2^3 k, 2^8 k$, for $k = 200$ and $b = 16$. The numbers on the solid curves (0, 1, 2, 3, 8) are the exponents. The dashed (red if color is available) curves are the results from only using b-bit hashing. When $m = 2^8 k$, this method achieves similar test accuracies (left panels) while substantially reducing the training time (right panels).]

Theorem 4

$\mathrm{Var}\big(\hat{R}_{b,vw}\big) = \mathrm{Var}\big(\hat{R}_b\big) + \frac{1}{m}\,\frac{1}{[1 - C_{2,b}]^2}\left[1 + P_b^2 - \frac{P_b(1 + P_b)}{k}\right]$,  (16)

where $\mathrm{Var}\big(\hat{R}_b\big) = \frac{1}{k}\frac{P_b(1 - P_b)}{[1 - C_{2,b}]^2}$ is given by (4) and $C_{2,b}$ is the constant defined in Theorem 1. □

Compared to the original variance $\mathrm{Var}\big(\hat{R}_b\big)$, the additional term in (16) can be relatively large if $m$ is small. Therefore, we should choose $m \gg k$ and $m \ll 2^b k$. If $b = 16$, then $m = 2^8 k$ may be a good trade-off. Figure 6 provides an empirical study to verify this intuition.

9 Limitations

While using b-bit minwise hashing for training linear algorithms is successful on the webspam dataset, it is important to understand the following three major limitations of the algorithm:

(A): Our method is designed for binary (0/1) sparse data.
(B): Our method requires an expensive preprocessing step for generating $k$ permutations of the data. For most applications, we expect the preprocessing cost is not a major issue because the preprocessing can be conducted off-line (or combined with the data-collection step) and is easily parallelizable. However, even if the speed is not a concern, the energy consumption might be an issue, especially considering that (b-bit) minwise hashing is mainly used for industry applications. In addition, testing a new unprocessed data vector (e.g., a new document) will be expensive.

(C): Our method performs only reasonably well in terms of dimension reduction. The processed data need to be mapped into binary vectors in $2^b \times k$ dimensions, which is usually not small. (Note that the storage cost is just $bk$ bits.) For example, for the webspam dataset, using $b = 8$ and $k = 200$ seems to suffice, and $2^8 \times 200 = 51200$ is quite large, although it is much smaller than the original dimension of 16 million. It would be desirable if we could further reduce the dimension, because the dimension determines the storage cost of the model and (moderately) increases the training time for batch learning algorithms such as LIBLINEAR.

In hopes of fixing the above limitations, we experimented with an implementation using another hashing technique named Conditional Random Sampling (CRS) [21,22], which is not limited to binary data and requires only one permutation of the original data (i.e., no expensive preprocessing). We achieved some limited success. For example, CRS compares favorably to VW in terms of storage (to achieve the same accuracy) on the webspam dataset. However, so far CRS cannot compete with b-bit minwise hashing for linear learning (in terms of training speed, storage cost, and model size).
The reason is that even though the estimator of CRS is an inner product, the normalization factors (i.e., the effective sample sizes of CRS) required to ensure unbiased estimates differ substantially from pair to pair (which is a significant advantage in other applications). In our implementation, we could not use fully correct normalization factors, which led to severe bias of the inner product estimates and less than satisfactory performance of linear learning compared to b-bit minwise hashing.

10 Conclusion

As data sizes continue to grow faster than memory and computational power, statistical learning tasks in industrial practice are increasingly faced with training datasets that exceed the resources on a single server. A number of approaches have been proposed that address this by either scaling out the training process or partitioning the data, but both solutions can be expensive. In this paper, we propose a compact representation of sparse, binary datasets based on b-bit minwise hashing, which can be naturally integrated with linear learning algorithms such as linear SVM and logistic regression, leading to dramatic improvements in training time and/or resource requirements. We also compare b-bit minwise hashing with the Count-Min (CM) sketch and Vowpal Wabbit (VW) algorithms, which, according to our analysis, all have (essentially) the same variances as random projections [24]. Our theoretical and empirical comparisons illustrate that b-bit minwise hashing is significantly more accurate (at the same storage) for binary data. There are various limitations (e.g., expensive preprocessing) in our proposed method, leaving ample room for future research.

Acknowledgement

This work is supported by NSF (DMS-0808864), ONR (YIP-N000140910911), and a grant from Microsoft. We thank John Langford and Tong Zhang for helping us better understand the VW hashing algorithm, and Chih-Jen Lin for his patient explanation of the LIBLINEAR package and datasets.
References

[1] Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671–687, 2003.
[2] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM, 51:117–122, 2008.
[3] Harald Baayen. Word Frequency Distributions, volume 18 of Text, Speech and Language Technology. Kluwer Academic Publishers, 2001.
[4] Michael Bendersky and W. Bruce Croft. Finding text reuse on the web. In WSDM, pages 262–271, Barcelona, Spain, 2009.
[5] Leon Bottou. http://leon.bottou.org/projects/sgd.
[6] Andrei Z. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21–29, Positano, Italy, 1997.
[7] Andrei Z. Broder, Steven C. Glassman, Mark S. Manasse, and Geoffrey Zweig. Syntactic clustering of the web. In WWW, pages 1157–1166, Santa Clara, CA, 1997.
[8] Olivier Chapelle, Patrick Haffner, and Vladimir N. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks, 10(5):1055–1064, 1999.
[9] Ludmila Cherkasova, Kave Eshghi, Charles B. Morrey III, Joseph Tucek, and Alistair C. Veitch. Applying syntactic similarity algorithms for enterprise information management. In KDD, pages 1087–1096, Paris, France, 2009.
[10] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Michael Mitzenmacher, Alessandro Panconesi, and Prabhakar Raghavan. On compressing social networks. In KDD, pages 219–228, Paris, France, 2009.
[11] Graham Cormode and S. Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. Journal of Algorithms, 55(1):58–75, 2005.
[12] Yon Dourisboure, Filippo Geraci, and Marco Pellegrini. Extraction and classification of dense implicit communities in the web graph. ACM Trans. Web, 3(2):1–36, 2009.
[13] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin.
LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[14] Dennis Fetterly, Mark Manasse, Marc Najork, and Janet L. Wiener. A large-scale study of the evolution of web pages. In WWW, pages 669–678, Budapest, Hungary, 2003.
[15] George Forman, Kave Eshghi, and Jaap Suermondt. Efficient detection of large-scale redundancy in enterprise file systems. SIGOPS Oper. Syst. Rev., 43(1):84–91, 2009.
[16] Sreenivas Gollapudi and Aneesh Sharma. An axiomatic approach for result diversification. In WWW, pages 381–390, Madrid, Spain, 2009.
[17] Matthias Hein and Olivier Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In AISTATS, pages 136–143, Barbados, 2005.
[18] Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In Proceedings of the 25th International Conference on Machine Learning (ICML), pages 408–415, 2008.
[19] Yugang Jiang, Chongwah Ngo, and Jun Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In CIVR, pages 494–501, Amsterdam, Netherlands, 2007.
[20] Thorsten Joachims. Training linear SVMs in linear time. In KDD, pages 217–226, Pittsburgh, PA, 2006.
[21] Ping Li and Kenneth W. Church. Using sketches to estimate associations. In HLT/EMNLP, pages 708–715, Vancouver, BC, Canada, 2005 (the full paper appeared in Computational Linguistics in 2007).
[22] Ping Li, Kenneth W. Church, and Trevor J. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, pages 873–880, Vancouver, BC, Canada, 2006 (newer results appeared in NIPS 2008).
[23] Ping Li, Trevor J. Hastie, and Kenneth W. Church. Improving random projections using marginal information. In COLT, pages 635–649, Pittsburgh, PA, 2006.
[24] Ping Li, Trevor J. Hastie, and Kenneth W. Church. Very sparse random projections.
In KDD, pages 287–296, Philadelphia, PA, 2006.
[25] Ping Li and Arnd Christian König. Theory and applications of b-bit minwise hashing. Commun. ACM, 2011.
[26] Ping Li and Arnd Christian König. Accurate estimators for improving minwise hashing and b-bit minwise hashing. Technical report, 2011 (arXiv:1108.0895).
[27] Ping Li and Arnd Christian König. b-bit minwise hashing. In WWW, pages 671–680, Raleigh, NC, 2010.
[28] Ping Li, Arnd Christian König, and Wenhao Gui. b-bit minwise hashing for estimating three-way similarities. In NIPS, Vancouver, BC, 2010.
[29] Gurmeet Singh Manku, Arvind Jain, and Anish Das Sarma. Detecting near-duplicates for web-crawling. In WWW, Banff, Alberta, Canada, 2007.
[30] Marc Najork, Sreenivas Gollapudi, and Rina Panigrahy. Less is more: sampling the neighborhood graph makes SALSA better and faster. In WSDM, pages 242–251, Barcelona, Spain, 2009.
[31] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, pages 807–814, Corvallis, Oregon, 2007.
[32] Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and S.V.N. Vishwanathan. Hash kernels for structured data. Journal of Machine Learning Research, 10:2615–2637, 2009.
[33] Simon Tong. Lessons learned developing a practical large scale machine learning system. http://googleresearch.blogspot.com/2010/04/lessons-learned-developing-practical.html, 2008.
[34] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, pages 1113–1120, 2009.
[35] Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Large linear classification when data cannot fit in memory. In KDD, pages 833–842, 2010.
Nonnegative dictionary learning in the exponential noise model for adaptive music signal representation

Onur Dikmen, CNRS LTCI; Télécom ParisTech, 75014, Paris, France. dikmen@telecom-paristech.fr
Cédric Févotte, CNRS LTCI; Télécom ParisTech, 75014, Paris, France. fevotte@telecom-paristech.fr

Abstract

In this paper we describe a maximum likelihood approach for dictionary learning in the multiplicative exponential noise model. This model is prevalent in audio signal processing where it underlies a generative composite model of the power spectrogram. Maximum joint likelihood estimation of the dictionary and expansion coefficients leads to a nonnegative matrix factorization problem where the Itakura-Saito divergence is used. The optimality of this approach is in question because the number of parameters (which include the expansion coefficients) grows with the number of observations. In this paper we describe a variational procedure for optimization of the marginal likelihood, i.e., the likelihood of the dictionary where the activation coefficients have been integrated out (given a specific prior). We compare the output of both maximum joint likelihood estimation (i.e., standard Itakura-Saito NMF) and maximum marginal likelihood estimation (MMLE) on real and synthetic datasets. The MMLE approach is shown to embed automatic model order selection, akin to automatic relevance determination.

1 Introduction

In this paper we address the task of nonnegative dictionary learning described by

    V ≈ WH,    (1)

where V, W, H are nonnegative matrices of dimensions F × N, F × K and K × N, respectively. V is the data matrix, where each column v_n is a data point, W is the dictionary matrix, with columns {w_k} acting as "patterns" or "explanatory variables" representative of the data, and H is the activation matrix, with columns {h_n}.
For example, in this paper we will be interested in music data such that V is a time-frequency spectrogram matrix and W is a collection of spectral signatures of latent elementary audio components. The most common approach to nonnegative dictionary learning is nonnegative matrix factorization (NMF) [1], which consists in retrieving the factorization (1) by solving

    min_{W,H} D(V|WH) := Σ_fn d(v_fn | [WH]_fn)  s.t. W, H ≥ 0,    (2)

where d(x|y) is a measure of fit between nonnegative scalars, v_fn are the entries of V, and A ≥ 0 expresses nonnegativity of the entries of matrix A. The cost function D(V|WH) is often a likelihood function −log p(V|W,H) in disguise, e.g., the Euclidean distance underlies additive Gaussian noise, the Kullback-Leibler (KL) divergence underlies Poissonian noise, while the Itakura-Saito (IS) divergence underlies multiplicative exponential noise [2]. The latter noise model will be central to this work because it underlies a suitable generative model of the power spectrogram, as shown in [3] and later recalled.

A criticism about NMF is that little can be said about the asymptotic optimality of the learnt dictionary W. Indeed, because W is estimated jointly with H, the total number of parameters FK + KN grows with the number of data points N. As such, this paper instead addresses optimization of the likelihood in the marginal model described by

    p(V|W) = ∫_H p(V|W,H) p(H) dH,    (3)

where H is treated as a random latent variable with prior p(H). The evaluation and optimization of the marginal likelihood is not trivial in general, and this paper is precisely devoted to these tasks in the multiplicative exponential noise model. The maximum marginal likelihood estimation approach we seek here is related to IS-NMF in the same way that Latent Dirichlet Allocation (LDA) [4] is related to probabilistic Latent Semantic Indexing (pLSI) [5].
LDA and pLSI are two estimators in the same model, but LDA seeks estimation of the topic distributions in the marginal model, from which the topic weights describing each document have been integrated out. In contrast, pLSI (which is essentially equivalent to KL-NMF, as shown in [6]) performs maximum joint likelihood estimation (MJLE) for the topics and weights. Blei et al. [4] show the better performance of LDA with respect to (w.r.t) pLSI. Welling et al. [7] also report similar results with a discussion, stating that deterministic latent variable models assign zero probability to input configurations that do not appear in the training set. A similar approach is Discrete Component Analysis (DCA) [8], which considers maximum marginal a posteriori estimation in the Gamma-Poisson (GaP) model [9]; see also [10] for maximum marginal likelihood estimation in the same model. In this paper, we will follow the same objective for the multiplicative exponential noise model. We will describe a variational algorithm for the evaluation and optimization of (3); note that the algorithm exploits specificities of the model and is not a mere adaptation of LDA or DCA to an alternative setting. We will consider a nonnegative Generalized inverse-Gaussian (GIG) distribution as a prior for H, a flexible distribution which takes the Gamma and inverse-Gamma as special cases. As will be detailed later, this work relates to recent work by Hoffman et al. [11], which considers full Bayesian integration of W and H (both assumed random) in the exponential noise model, in a nonparametric setting allowing for model order selection. We will show that our simpler maximum likelihood approach inherently performs model selection as well, by automatically pruning "irrelevant" dictionary elements.
Applied to a short, well-structured piano sequence, our approach is shown to capture the correct number of components, corresponding to the expected note spectra, and outperforms the nonparametric Bayesian approach of [11]. The paper is organized as follows. Section 2 introduces the multiplicative exponential noise model with the prior distribution for the expansion coefficients p(H). Sections 3 and 4 describe the MJLE and MMLE approaches, respectively. Section 5 reports results on synthetic and real audio data. Section 6 concludes.

2 Model

The generative model assumed in this paper is

    v_fn = v̂_fn · ε_fn,    (4)

where v̂_fn = Σ_k w_fk h_kn and ε_fn is a nonnegative multiplicative noise with exponential distribution ε_fn ∼ exp(−ε_fn). In other words, and under independence assumptions, the likelihood function is

    p(V|W,H) = Π_fn (1/v̂_fn) exp(−v_fn/v̂_fn).    (5)

When V is a power spectrogram matrix such that v_fn = |x_fn|^2 and {x_fn} are the complex-valued short-time Fourier transform (STFT) coefficients of some signal data, where f typically acts as a frequency index and n acts as a time-frame index, it was shown in [3] that an equivalent generative model of v_fn is

    x_fn = Σ_k c_fkn,  c_fkn ∼ Nc(0, w_fk h_kn),    (6)

where Nc refers to the circular complex Gaussian distribution.¹ In other words, the exponential multiplicative noise model underlies a generative composite model of the STFT. The complex-valued matrix {c_fkn}_fn, referred to as the k-th component, is characterized by a spectral signature w_k, amplitude-modulated in time by the frame-dependent coefficient h_kn, which accounts for nonstationarity. In analogy with LDA or DCA, if our data consisted of word counts, with f indexing words and n indexing documents, then the columns of W would describe topics and c_fkn would denote the number of occurrences of word f stemming from topic k in document n. In our setting W is considered a free deterministic parameter to be estimated by maximum likelihood.
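The equivalence between the multiplicative noise view (4) and the composite Gaussian view (6) is easy to check by simulation. The sketch below is illustrative only (not code from the paper): it draws c_fkn from the circular complex Gaussian Nc(0, w_fk h_kn), sums over k, and verifies that |x_fn|² / v̂_fn behaves as unit-mean exponential noise.

```python
import numpy as np

rng = np.random.default_rng(1)
F, K, N = 8, 3, 5000
W = rng.gamma(2.0, 1.0, size=(F, K))
H = rng.gamma(2.0, 1.0, size=(K, N))
V_hat = W @ H                                   # v-hat_fn = sum_k w_fk h_kn

# Composite model (6): c_fkn ~ Nc(0, w_fk h_kn), i.e. real and imaginary
# parts are independent N(0, w_fk h_kn / 2); then x_fn = sum_k c_fkn.
std = np.sqrt(W[:, :, None] * H[None, :, :] / 2.0)          # (F, K, N)
C = std * (rng.standard_normal((F, K, N)) + 1j * rng.standard_normal((F, K, N)))
X = C.sum(axis=1)                                           # (F, N)

# Model (4): v_fn = v-hat_fn * eps_fn with eps_fn ~ Exp(1), unit mean.
eps = np.abs(X) ** 2 / V_hat
assert abs(eps.mean() - 1.0) < 0.05   # empirical mean of the Exp(1) noise
```

This is the mechanism that makes |x_fn|² an exponential random variable with mean v̂_fn, as stated after equation (6).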
In contrast, H is treated as a nonnegative random latent variable over which we will integrate. It is assigned a GIG prior, such that

    h_kn ∼ GIG(α_k, β_k, γ_k),    (7)

with

    GIG(x | α, β, γ) = [(β/γ)^{α/2} / (2 K_α(2√(βγ)))] x^{α−1} exp(−(βx + γ/x)),    (8)

where K_α is a modified Bessel function of the second kind and x, β and γ are nonnegative scalars. The GIG distribution unifies the Gamma (α > 0, γ = 0) and inverse-Gamma (α < 0, β = 0) distributions. Its sufficient statistics are x, 1/x and log x, and in particular we have

    ⟨x⟩ = [K_{α+1}(2√(βγ)) / K_α(2√(βγ))] √(γ/β),  ⟨1/x⟩ = [K_{α−1}(2√(βγ)) / K_α(2√(βγ))] √(β/γ),    (9)

where ⟨x⟩ denotes expectation. Although all derivations and the implementations are done for the general case, in practice we will only consider the special case of the Gamma distribution for simplicity. In that case, the β parameter merely acts as a scale parameter, which we fix so as to solve the scale ambiguity between the columns of W and the rows of H. We will also assume the shape parameters {α_k} fixed to arbitrary values (typically, α_k = 1, which corresponds to the exponential distribution). Given the generative model specified by equations (4) and (7), we now describe two estimators for W.

3 Maximum joint likelihood estimation

3.1 Estimator

The joint (penalized) log-likelihood of W and H is defined by

    C_JL(W,H) := log p(V|W,H) + log p(H)    (10)
               = −D_IS(V|WH) − Σ_kn [(1 − α_k) log h_kn + β_k h_kn + γ_k/h_kn] + cst,    (11)

where D_IS(V|WH) is defined as in equation (2) with d_IS(x|y) = x/y − log(x/y) − 1 (the Itakura-Saito divergence) and "cst" denotes terms constant w.r.t W and H. The subscript JL stands for joint likelihood, and the estimation of W by maximization of C_JL(W,H) will be referred to as maximum joint likelihood estimation (MJLE).

3.2 MM algorithm for MJLE

We describe an iterative algorithm which sequentially updates W given H and H given W.
Each of the two steps can be achieved in a minorization-maximization (MM) setting [12], where the original problem is replaced by the iterative optimization of an easier-to-optimize auxiliary function. We first describe the update of H, from which the update of W will be easily deduced. Given W, our task consists in maximizing C(H) = −D_IS(V|WH) − L(H), where L(H) = Σ_kn [(1 − α_k) log h_kn + β_k h_kn + γ_k/h_kn]. Using Jensen's inequality to majorize the convex part of D_IS(V|WH) (terms in v_fn/v̂_fn) and a first-order Taylor approximation to majorize its concave part (terms in log v̂_fn), as in [13], the functional

    G(H, H̃) = −Σ_kn [p_kn/h_kn + q_kn h_kn] − L(H) + cst,    (12)

where p_kn = h̃²_kn Σ_f w_fk v_fn/ṽ²_fn, q_kn = Σ_f w_fk/ṽ_fn and ṽ_fn = [W H̃]_fn, can be shown to be a tight lower bound of C(H), i.e., G(H, H̃) ≤ C(H) and G(H̃, H̃) = C(H̃). Its iterative maximization w.r.t H, where H̃ = H^(i) acts as the current iterate at iteration i, produces an ascent algorithm, such that C(H^(i+1)) ≥ C(H^(i)). The update is easily shown to amount to solving a second-order polynomial with a single positive root given by

    h_kn = [(α_k − 1) + √((α_k − 1)² + 4(p_kn + γ_k)(q_kn + β_k))] / (2(q_kn + β_k)).    (13)

The update preserves nonnegativity given positive initialization. By exchangeability of W and H when the data is transposed (V^T = H^T W^T), and dropping the penalty term (α_k = 1, β_k = 0, γ_k = 0), the update of W is given by the multiplicative update

    w_fk = w̃_fk √( [Σ_n h_kn v_fn/ṽ²_fn] / [Σ_n h_kn/ṽ_fn] ),    (14)

which is known from [13].

[Footnote 1: A complex random variable has distribution Nc(µ, λ) if and only if its real and imaginary parts are independent and distributed as N(ℜ(µ), λ/2) and N(ℑ(µ), λ/2), respectively.]

4 Maximum marginal likelihood estimation

4.1 Estimator

We define the marginal log-likelihood objective function as

    C_ML(W) := log ∫ p(V|W,H) p(H) dH.    (15)

The subscript ML stands for marginal likelihood, and the estimation of W by maximization of C_ML(W) will be referred to as maximum marginal likelihood estimation (MMLE).
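Going back to the MM scheme of Section 3.2, one MJLE sweep alternating the H-update (13) and the W-update (14) can be sketched in a few lines of numpy. This is an illustrative reimplementation (not the authors' code); with α_k = 1, β_k = 1, γ_k = 0 (the exponential prior used later in the experiments), the joint objective C_JL of (10)-(11) is non-decreasing at every sweep, as the MM construction guarantees.

```python
import numpy as np

def mjle_sweep(V, W, H, alpha=1.0, beta=1.0, gamma=0.0):
    """One MM sweep: update H via (13) given W, then W via (14) given H."""
    Vt = W @ H
    P = H**2 * (W.T @ (V / Vt**2))     # p_kn = h~_kn^2 sum_f w_fk v_fn / v~_fn^2
    Q = W.T @ (1.0 / Vt)               # q_kn = sum_f w_fk / v~_fn
    H = ((alpha - 1.0) + np.sqrt((alpha - 1.0)**2
         + 4.0 * (P + gamma) * (Q + beta))) / (2.0 * (Q + beta))
    Vt = W @ H
    W = W * np.sqrt(((V / Vt**2) @ H.T) / ((1.0 / Vt) @ H.T))
    return W, H

def c_jl(V, W, H, alpha=1.0, beta=1.0, gamma=0.0):
    """Joint penalized log-likelihood (10)-(11), up to an additive constant."""
    R = V / (W @ H)
    d_is = np.sum(R - np.log(R) - 1.0)
    pen = np.sum((1.0 - alpha) * np.log(H) + beta * H + (gamma / H if gamma else 0.0))
    return -d_is - pen

rng = np.random.default_rng(0)
V = rng.gamma(2.0, 1.0, size=(10, 20))
W = rng.gamma(1.0, 1.0, size=(10, 3))
H = rng.gamma(1.0, 1.0, size=(3, 20))
objs = []
for _ in range(30):
    W, H = mjle_sweep(V, W, H)
    objs.append(c_jl(V, W, H))
assert all(b >= a - 1e-6 for a, b in zip(objs, objs[1:]))   # monotone ascent
```

Note that the monotonicity check is the practical signature of a correct MM implementation: each half-step maximizes a minorizer of C_JL, so the objective can never decrease.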
Note that in Bayesian estimation the term marginal likelihood is sometimes used as a synonym for the model evidence, which is the likelihood of the data given the model, i.e., where all random parameters (including W) have been marginalized. This is not the case here, where W is treated as a deterministic parameter and marginal likelihood only refers to the likelihood of W, where H has been integrated out. The integral in equation (15) is intractable given our model. In the next section we resort to a variational Bayes procedure for the evaluation and maximization of C_ML(W).

4.2 Variational algorithm for MMLE

In the following we propose an iterative lower bound evaluation/maximization procedure for approximate maximization of C_ML(W). We will construct a bound B(W, W̃) such that ∀(W, W̃), C_ML(W) ≥ B(W, W̃), where W̃ acts as the current iterate and W acts as the free parameter over which the bound is maximized. The maximization is approximate in that the bound will only satisfy B(W̃, W̃) ≈ C_ML(W̃), i.e., it is only loosely tight at the current iterate W̃, which fails to ensure ascent of the objective function as in the MM setting of Section 3.2. We propose to construct the bound from a variational Bayes perspective [14]. The following inequality holds for any distribution function q(H):

    C_ML(W) ≥ ⟨log p(V|W,H)⟩_q + ⟨log p(H)⟩_q − ⟨log q(H)⟩_q =: B^vb_q(W).    (16)

The inequality becomes an equality when q(H) = p(H|V,W); when the latter is available in closed form, the EM algorithm consists in using q̃(H) = p(H|V, W̃), maximizing B^vb_q̃(W) w.r.t W, and iterating. The true posterior of H being intractable in our case, we take q(H) to be a factorized, parametric distribution q_θ(H), whose parameter θ is updated so as to tighten B^vb_q(W̃) to C_ML(W̃). As in [11], we choose q_θ(H) to be in the same family as the prior, such that

    q_θ(H) = Π_kn GIG(ᾱ_kn, β̄_kn, γ̄_kn).
    (17)

The first term of B^vb_q(W) essentially involves the expectation of −D_IS(V|WH) w.r.t the variational distribution q_θ(H). The product WH introduces some coupling of the coefficients of H (via the sum Σ_k w_fk h_kn) which makes the integration difficult. Following [11], and similarly to Section 3.2, we propose to lower bound this term using Jensen- and Taylor-type inequalities to majorize the convex and concave parts of −D_IS(V|WH). The contributions of the elements of H become decoupled w.r.t k, which allows for evaluation and maximization of the bound. This leads to

    ⟨log p(V|W,H)⟩_q ≥ −Σ_fn [ Σ_k φ²_fkn (v_fn/w_fk) ⟨1/h_kn⟩_q + log ψ_fn + (1/ψ_fn) Σ_k w_fk ⟨h_kn⟩_q − 1 ],    (18)

where {ψ_fn} and {φ_fkn} are nonnegative free parameters such that Σ_k φ_fkn = 1. We define B_{θ,φ,ψ}(W) as B^vb_q(W) but with the expectation of the joint log-likelihood replaced by its lower bound given by the right side of equation (18). From there, our algorithm is a two-step procedure consisting in 1) computing θ̃, φ̃, ψ̃ so as to tighten B_{θ,φ,ψ}(W̃) to C_ML(W̃), and 2) maximizing B_{θ̃,φ̃,ψ̃}(W) w.r.t W. The corresponding updates are given next. Note that evaluation of the bound only involves expectations of h_kn and 1/h_kn w.r.t the GIG distribution, which are readily given by equation (9).

Step 1: Tightening the bound. Given the current dictionary update W̃, run the following fixed-point equations:

    φ_fkn = [w̃_fk / ⟨1/h_kn⟩_q] / [Σ_j w̃_fj / ⟨1/h_jn⟩_q],  ψ_fn = Σ_j w̃_fj ⟨h_jn⟩_q,
    ᾱ_kn = α_k,  β̄_kn = β_k + Σ_f w̃_fk/ψ_fn,  γ̄_kn = γ_k + Σ_f v_fn φ²_fkn / w̃_fk.

Step 2: Optimizing the bound. Given the variational distribution q̃ = q_θ̃ from the previous step, update W as

    w_fk = w̃_fk √( [Σ_n v_fn (Σ_j w̃_fj ⟨1/h_jn⟩⁻¹_q̃)⁻² ⟨1/h_kn⟩⁻¹_q̃] / [Σ_n (Σ_j w̃_fj ⟨h_jn⟩_q̃)⁻¹ ⟨h_kn⟩_q̃] ).    (19)

The VB update has a similar form to the MM update of equation (14), but the contributions of H are replaced by expected values w.r.t the variational distribution.
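To make Steps 1 and 2 concrete, here is an illustrative numpy sketch (not the authors' code) of one tightening/optimization cycle on random data. The GIG expectations (9) are evaluated through the modified Bessel function of the second kind, computed here by quadrature of its integral representation K_a(z) = ∫₀^∞ e^{−z cosh t} cosh(at) dt so that the example is dependency-free; in practice one would use scipy.special.kv.

```python
import numpy as np

def bessel_k(a, z, tmax=30.0, nt=3001):
    """K_a(z) = int_0^inf exp(-z cosh t) cosh(a t) dt, trapezoidal quadrature.
    Valid for z > 0; accuracy is ample for this illustration."""
    t = np.linspace(0.0, tmax, nt)
    f = np.exp(-np.asarray(z, float)[..., None] * np.cosh(t)) * np.cosh(a * t)
    return np.sum(0.5 * (f[..., 1:] + f[..., :-1]), axis=-1) * (t[1] - t[0])

def gig_moments(a, b, g):
    """<h> and <1/h> of GIG(a, b, g), equation (9)."""
    z = 2.0 * np.sqrt(b * g)
    k0 = bessel_k(a, z)
    return (bessel_k(a + 1.0, z) / k0 * np.sqrt(g / b),
            bessel_k(a - 1.0, z) / k0 * np.sqrt(b / g))

def vb_cycle(V, W, alpha=1.0, beta=1.0, gamma=0.0, n_fp=5):
    """Step 1: fixed-point updates of (phi, psi, beta-bar, gamma-bar);
    Step 2: multiplicative update (19) of W. alpha-bar_kn stays at alpha."""
    K, N = W.shape[1], V.shape[1]
    bb = np.full((K, N), beta + 1.0)      # beta-bar, arbitrary positive init
    gb = np.ones((K, N))                  # gamma-bar, arbitrary positive init
    for _ in range(n_fp):
        Eh, Einv = gig_moments(alpha, bb, gb)
        R = 1.0 / Einv                                 # <1/h_kn>^{-1}
        num = W[:, :, None] * R[None, :, :]            # w_fk / <1/h_kn>
        Phi = num / num.sum(axis=1, keepdims=True)     # sums to 1 over k
        Psi = W @ Eh                                   # psi_fn
        bb = beta + (W.T @ (1.0 / Psi))                # beta-bar_kn
        gb = gamma + np.einsum('fn,fkn,fk->kn', V, Phi**2, 1.0 / W)
    Eh, Einv = gig_moments(alpha, bb, gb)
    R = 1.0 / Einv
    W_new = W * np.sqrt(((V / (W @ R)**2) @ R.T) / ((1.0 / (W @ Eh)) @ Eh.T))
    return W_new, bb, gb

rng = np.random.default_rng(3)
V = rng.gamma(2.0, 1.0, size=(6, 12))
W = rng.gamma(1.0, 1.0, size=(6, 2))
W_new, bb, gb = vb_cycle(V, W)
assert np.all(np.isfinite(W_new)) and np.all(W_new > 0)
```

The initializations of β̄ and γ̄, and the fixed number of fixed-point iterations, are assumptions made for this sketch; the paper does not specify them at this level of detail.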
4.3 Relation to other works

A variational algorithm using the activation matrix H and the latent components C = {c_fkn} as hidden data can easily be devised, as sketched in [2]. Including C in the variational distribution also allows to decouple the contributions of the activation coefficients w.r.t k, but leads from our experience to a looser bound, a finding also reported in [11]. In a fully Bayesian setting, Hoffman et al. [11] assume Gamma priors for both W and H. The model is such that v̂_fn = Σ_k λ_k w_fk h_kn, where λ_k acts as a component weight parameter. The number of components is potentially infinite but, in a nonparametric setting, the prior for λ_k favors a finite number of active components. Posterior inference of the parameters W, H, {λ_k} is achieved in a variational setting similar to Section 4.2, by maximizing a lower bound on p(V). In contrast to this method, our approach does not require specifying a prior for W, leads to simple updates for W that are directly comparable to IS-NMF, and experiments will reveal that our approach embeds model order selection as well, by automatically pruning unnecessary columns of W, without resorting to the nonparametric framework.

Figure 1: Marginal likelihood C_ML (a) and joint likelihood C_JL (b) versus number of components K. C_ML values corresponding to dictionaries estimated by C_JL maximization (c). [Panels: (a) C_ML by MMLE; (b) C_JL by MJLE; (c) C_ML by MJLE.]

5 Experiments

In this section, we study the performances of the MJLE and MMLE methods on both synthetic and real-world datasets. The prior hyperparameters are fixed to α_k = 1, γ_k = 0 (exponential distribution) and β_k = 1, i.e., h_kn ∼ exp(−h_kn). We used 5000 algorithm iterations and nonnegative random initializations in all cases.
In order to minimize the odds of getting stuck in local optima, we adapted the deterministic annealing method proposed in [15] for MMLE. Deterministic annealing is applied by multiplying the entropy term −⟨log q(H)⟩ in the lower bound in (16) by 1/η^(i). The initial η^(0) is chosen in (0, 1) and increased through iterations. In our experiments, we set η^(0) = 0.6 and updated it with the rule η^(i+1) = min(1, 1.005 η^(i)).

5.1 Swimmer dataset

First, we consider the synthetic Swimmer dataset [16], for which the ground truth of the dictionary is available. The dataset is composed of 256 images of size 32 × 32, representing a swimmer built of an invariant torso and 4 limbs. Each of the 4 limbs can be in one of 4 positions and the dataset is formed of all combinations. Hence, the ground truth dictionary corresponds to the collection of individual limb positions. As explained in [16], the torso is an unidentifiable component that can be paired with any of the limbs, or even split among the limbs. In our experiments, we mapped the values in the dataset onto the range [1, 100] and multiplied with exponential noise; see some samples in Fig. 2 (a). We ran the MM and VB algorithms (for MJLE and MMLE, respectively) for K = 1, …, 20, and the joint and marginal log-likelihood end values (after the 5000 iterations) are displayed in Fig. 1. The marginal log-likelihood is here approximated by its lower bound, as described in Section 4.2. In Fig. 1 (a) and (b) the respective objective criteria (C_ML and C_JL) maximized by MMLE and MJLE are shown. The increase of C_ML stops after K = 16, whereas C_JL continues to increase as K gets larger. Fig. 1 (c) displays the corresponding marginal likelihood values, C_ML, of the dictionaries obtained by MJLE in Fig. 1 (b); this figure empirically shows that maximizing the joint likelihood does not necessarily imply maximization of the marginal likelihood. These figures display the mean and standard deviation values obtained from 7 experiments.
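For reference, the annealing schedule described at the start of this section is easy to reproduce: starting from η^(0) = 0.6 and multiplying by 1.005 at each iteration, η reaches 1 after roughly ln(1/0.6)/ln(1.005) ≈ 102 iterations, after which the bound reverts to the standard one.

```python
# Deterministic annealing schedule used with MMLE: the entropy term in (16)
# is weighted by 1/eta_i, with eta_0 = 0.6 and eta_{i+1} = min(1, 1.005 * eta_i).
etas = [0.6]
while etas[-1] < 1.0:
    etas.append(min(1.0, 1.005 * etas[-1]))
assert etas == sorted(etas)          # monotonically increasing toward 1
assert etas[-1] == 1.0
```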
The likelihood values increase with the number of components, as expected from nested models. However, the marginal likelihood stagnates after K = 16. Manual inspection reveals that past this value of K, the extra columns of W are pruned to zero, leaving the criterion unchanged. Hence, MMLE appears to embed automatic order selection, similar to automatic relevance determination [17, 18]. The dictionaries learnt from MJLE and MMLE with K = 20 components are shown in Fig. 2 (b) and (c). As can be seen from Fig. 2 (b), MJLE produces spurious or duplicated components. In contrast, the ground truth is well recovered with MMLE.

[Footnote 2: MATLAB code is available at http://perso.telecom-paristech.fr/∼dikmen/nips11/]

Figure 2: Data samples and dictionaries learnt on the swimmer dataset with K = 20. (a) Data; (b) W_MJLE; (c) W_MMLE.

5.2 A piano excerpt

In this section, we consider the piano data used in [3]. It is a toy audio sequence recorded in real conditions, consisting of four notes played all together in the first measure and in all possible pairs in the subsequent measures. A power spectrogram with an analysis window of size 46 ms was computed, leading to F = 513 frequency bins and N = 676 time frames. We ran MMLE with K = 20 on the spectrogram. We reconstructed STFT component estimates from the factorization Ŵ Ĥ, where Ŵ is the MMLE dictionary estimate and Ĥ = ⟨H⟩_q. We used the minimum mean square error (MMSE) estimate given by ĉ_fkn = g_fkn · x_fn, where g_fkn is the time-frequency Wiener mask defined by ŵ_fk ĥ_kn / Σ_j ŵ_fj ĥ_jn. The estimated dictionary and the reconstructed components in the time domain after inverse STFT are shown in Fig. 3 (a). Out of the 20 components, 12 were assigned to zero during inference. The remaining 8 are displayed. 3 of the nonzero dictionary columns have very small values, leading to inaudible reconstructions. The five significant dictionary vectors correspond to the frequency templates of the four notes and the transients.
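The Wiener-mask reconstruction just described has a useful conservation property: since the masks g_fkn sum to one over k, the MMSE component estimates sum back to the observed STFT exactly. A small illustrative sketch (random placeholders for Ŵ, Ĥ and X, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
F, K, N = 6, 4, 10
W_hat = rng.gamma(2.0, 1.0, size=(F, K))                 # stand-in for the MMLE dictionary
H_hat = rng.gamma(2.0, 1.0, size=(K, N))                 # stand-in for <H>_q
X = rng.standard_normal((F, N)) + 1j * rng.standard_normal((F, N))   # "observed" STFT

# Wiener masks g_fkn = w_fk h_kn / sum_j w_fj h_jn, and MMSE components
num = W_hat[:, :, None] * H_hat[None, :, :]              # (F, K, N)
G = num / num.sum(axis=1, keepdims=True)
C_hat = G * X[:, None, :]                                # c-hat_fkn = g_fkn x_fn

assert np.allclose(G.sum(axis=1), 1.0)                   # masks sum to one over k
assert np.allclose(C_hat.sum(axis=1), X)                 # components sum to the STFT
```

This conservativeness is what makes the decomposition directly usable for editing: re-weighting or removing components and summing back always yields a consistent STFT.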
For comparison, we applied the nonparametric approach by Hoffman et al. [11] on the same data with the same hyperparameters for H. The estimated dictionary and the reconstructed components are presented in Fig. 3 (b). 10 out of 20 components had very small weight values. The most significant 8 of the remaining components are presented in the figure. These components do not exactly correspond to individual notes and transients as they did with MMLE. The fourth note is mainly represented in the fifth component, but partially appears in the first three components as well. In general, the performance of the nonparametric approach depends more on initialization, i.e., requires more repetitions than MMLE. For the above results, we used 200 repetitions for the nonparametric method and 20 for MMLE (without annealing, same stopping criterion) and chose the repetition with the highest likelihood.

5.3 Decomposition of a real song

In this last experiment, we decompose the first 40 seconds of God Only Knows by the Beach Boys. This song was produced in mono and we retrieved a downsampled version of it at 22 kHz from the CD release. We computed a power spectrogram with a 46 ms analysis window and ran our VB algorithm with K = 50. Fig. 4 displays the original data, and two examples of estimated time-frequency masks and reconstructed components. The figure also shows the variance of the reconstructed components and the evolution of the variational bound along iterations. In this example, 5 components out of the 50 are completely pruned in the factorization and 7 others are inaudible. Such a decomposition can be used in various music editing settings, for example for mono to stereo remixing; see, e.g., [3].

Figure 3: The estimated dictionary and the reconstructed components obtained (a) by MMLE (W_MMLE, c_MMLE) and (b) by the nonparametric approach of Hoffman et al. (W_hoffman, c_hoffman), with K = 20.
Figure 4: Decomposition results of a real song. [Panels: log-power data spectrogram and temporal data; variance of the reconstructed components; variational bound against iterations; time-frequency Wiener masks and reconstructed components 13 and 18.] The Wiener masks take values between 0 (white) and 1 (black).

The first example of reconstructed component captures the first chord of the song, repeated 4 times in the intro. The other component captures the cymbal, which starts with the first verse of the song.

Acknowledgments

This work is supported by project ANR-09-JCJC-0073-01 TANGERINE (Theory and applications of nonnegative matrix factorization).

6 Conclusions

In this paper we have challenged the standard NMF approach to nonnegative dictionary learning, based on maximum joint likelihood estimation, with a better-posed approach consisting in maximum marginal likelihood estimation. The proposed algorithm based on variational inference has computational complexity comparable to standard NMF/MJLE. Our experiments on synthetic and real data have brought up a very attractive feature of MMLE, namely its inherent ability to discard irrelevant columns in the dictionary, without resorting to elaborate schemes such as Bayesian nonparametrics.

References

[1] D. D. Lee and H. S. Seung. Learning the parts of objects with nonnegative matrix factorization. Nature, 401:788–791, 1999.
[2] C. Févotte and A. T. Cemgil. Nonnegative matrix factorisations as probabilistic inference in composite models. In Proc.
17th European Signal Processing Conference (EUSIPCO), pages 1913–1917, Glasgow, Scotland, Aug. 2009. [3] C. Févotte, N. Bertin, and J.-L. Durrieu. Nonnegative matrix factorization with the Itakura-Saito divergence. With application to music analysis. Neural Computation, 21(3):793–830, Mar. 2009. [4] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, Jan. 2003. [5] Thomas Hofmann. Probabilistic latent semantic indexing. In Proc. 22nd International Conference on Research and Development in Information Retrieval (SIGIR), 1999. [6] E. Gaussier and C. Goutte. Relation between PLSA and NMF and implications. In Proc. 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'05), pages 601–602, New York, NY, USA, 2005. ACM. [7] M. Welling, C. Chemudugunta, and N. Sutter. Deterministic latent variable models and their pitfalls. In SIAM Conference on Data Mining (SDM), pages 196–207, 2008. [8] W. L. Buntine and A. Jakulin. Discrete component analysis. In Lecture Notes in Computer Science, volume 3940, pages 1–33. Springer, 2006. [9] John F. Canny. GaP: A factor model for discrete data. In Proceedings of the 27th ACM International Conference on Research and Development of Information Retrieval (SIGIR), pages 122–129, 2004. [10] O. Dikmen and C. Févotte. Maximum marginal likelihood estimation for nonnegative dictionary learning. In Proc. of International Conference on Acoustics, Speech and Signal Processing (ICASSP'11), Prague, Czech Republic, 2011. [11] M. Hoffman, D. Blei, and P. Cook. Bayesian nonparametric matrix factorization for recorded music. In Proc. 27th International Conference on Machine Learning (ICML), Haifa, Israel, 2010. [12] D. R. Hunter and K. Lange. A tutorial on MM algorithms. The American Statistician, 58:30–37, 2004. [13] Y. Cao, P. P. B. Eggermont, and S. Terebey.
Cross Burg entropy maximization and its application to ringing suppression in image reconstruction. IEEE Transactions on Image Processing, 8(2):286–292, Feb. 1999. [14] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2008. ISBN-13: 9780387310732. [15] K. Katahira, K. Watanabe, and M. Okada. Deterministic annealing variant of variational Bayes method. In International Workshop on Statistical-Mechanical Informatics 2007 (IWSMI 2007), 2007. [16] D. Donoho and V. Stodden. When does non-negative matrix factorization give a correct decomposition into parts? In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004. [17] D. J. C. MacKay. Probable networks and plausible predictions – a review of practical Bayesian models for supervised neural networks. Network: Computation in Neural Systems, 6(3):469–505, 1995. [18] C. M. Bishop. Bayesian PCA. In Advances in Neural Information Processing Systems (NIPS), pages 382–388, 1999.
2011
From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models Skander Mensi School of Computer and Communication Sciences and Brain-Mind Institute Ecole Polytechnique Federale de Lausanne 1015 Lausanne EPFL, SWITZERLAND skander.mensi@epfl.ch Richard Naud School of Computer and Communication Sciences and Brain-Mind Institute Ecole Polytechnique Federale de Lausanne 1015 Lausanne EPFL, SWITZERLAND richard.naud@epfl.ch Wulfram Gerstner School of Computer and Communication Sciences and Brain-Mind Institute Ecole Polytechnique Federale de Lausanne 1015 Lausanne EPFL, SWITZERLAND wulfram.gerstner@epfl.ch Abstract Variability in single neuron models is typically implemented either by a stochastic Leaky-Integrate-and-Fire model or by a model of the Generalized Linear Model (GLM) family. We use analytical and numerical methods to relate state-of-the-art models from both schools of thought. First we find the analytical expressions relating the subthreshold voltage from the Adaptive Exponential Integrate-and-Fire model (AdEx) to the Spike-Response Model with escape noise (SRM, as an example of a GLM). Then we calculate numerically the link-function that provides the firing probability given a deterministic membrane potential. We find a mathematical expression for this link-function and test the ability of the GLM to predict the firing probability of a neuron receiving complex stimulation. Comparing the prediction performance of various link-functions, we find that a GLM with an exponential link-function provides an excellent approximation to the Adaptive Exponential Integrate-and-Fire with colored-noise input. These results help to understand the relationship between the different approaches to stochastic neuron models. 1 Motivation When it comes to modeling the intrinsic variability in simple neuron models, we can distinguish two traditional approaches.
One approach is inspired by the stochastic Leaky Integrate-and-Fire (LIF) hypothesis of Stein (1967) [1], where a noise term is added to the system of differential equations implementing the leaky integration to a threshold. There are multiple versions of such a stochastic LIF [2]. How the noise affects the firing probability is also a function of the parameters of the neuron model. Therefore, it is important to take into account the refinements of simple neuron models in terms of subthreshold resonance [3, 4], spike-triggered adaptation [5, 6] and non-linear spike initiation [7, 5]. All these improvements are encompassed by the Adaptive Exponential Integrate-and-Fire model (AdEx [8, 9]). The other approach is to start with some deterministic dynamics for the state of the neuron (for instance the instantaneous distance from the membrane potential to the threshold) and link the probability intensity of emitting a spike with a non-linear function of the state variable. Under some conditions, this type of model is part of a greater class of statistical models called Generalized Linear Models (GLM [10]). As a single neuron model, the Spike Response Model (SRM) with escape noise is a GLM in which the state variable is explicitly the distance between a deterministic voltage and the threshold. The original SRM could account for subthreshold resonance, refractory effects and spike-frequency adaptation [11]. Mathematically similar models were developed independently in the study of the visual system [12], where spike-frequency adaptation has also been modeled [13]. Recently, this approach has received increased attention since the probabilistic framework can be linked with the Bayesian theory of neural systems [14] and because Bayesian inference can be applied to populations of neurons [15]. In this paper, we investigate the similarities and differences between the state-of-the-art GLM and the stochastic AdEx.
The motivation behind this work is to relate the traditional threshold neuron models to Bayesian theory. Our results extend the work of Plesser and Gerstner (2000) [16] since we include the non-linearity for spike initiation and spike-frequency adaptation. We also provide relationships between the parameters of the AdEx and the equivalent GLM. These precise relationships can be used to relate analog implementations of threshold models [17] to the probabilistic models used in the Bayesian approach. The paper is organized as follows: We first describe the expressions relating the SRM state-variable to the parameters of the AdEx (Sect. 3.1) in the subthreshold regime. Then, we use numerical methods to find the non-linear link-function that models the firing probability (Sect. 3.2). We find a functional form for the SRM link-function that best describes the firing probability of a stochastic AdEx. We then compare the performance of this link-function with the often used exponential and linear-rectifier link-functions (also called half-wave linear rectifier) in terms of predicting the firing probability of an AdEx under a complex stimulus (Sect. 3.3). We find that the exponential link-function yields almost perfect prediction. Finally, we explore the relations between the statistics of the noise and the sharpness of the non-linearity for spike initiation with the parameters of the SRM. 2 Presentation of the Models In this section we present the general formulation of the stochastic AdEx model (Sect. 2.1) and of the SRM (Sect. 2.2). 2.1 The Stochastic Adaptive Exponential Integrate-and-Fire Model The voltage dynamics of the stochastic AdEx is given by: τm dV/dt = El − V + ∆T exp((V − Θ)/∆T) − Rw + RI + Rϵ (1) and τw dw/dt = a(V − El) − w (2), where τm is the membrane time constant, El the reversal potential, R the membrane resistance, Θ the threshold, ∆T the shape factor and I(t) the input current, which is chosen to be an Ornstein-Uhlenbeck process with a correlation time-constant of 5 ms.
The exponential term ∆T exp((V − Θ)/∆T) is a non-linear function responsible for the emission of spikes and ϵ is a diffusive white noise with standard deviation σ (i.e. ϵ ∼ N(0, σ)). Note that the diffusive white noise does not imply white-noise fluctuations of the voltage V(t); the probability distribution of V(t) will depend on ∆T and Θ. The second variable, w, describes the subthreshold as well as the spike-triggered adaptation, both parametrized by the coupling strength a and the time constant τw. Each time t̂j the voltage goes to infinity, we assume that a spike is emitted. Then the voltage is reset to a fixed value Vr and w is increased by a constant value b. 2.2 The Generalized Linear Model In the SRM, the voltage V(t) is given by the convolution of the injected current I(t) with the membrane filter κ(t) plus an additional kernel η(t) that acts after each spike (here we split the spike-triggered kernel in two, η(t) = ηv(t) + ηw(t), for reasons that will become clear later): V(t) = El + [κ ∗ I](t) + Σ_{t̂j} [ηv(t − t̂j) + ηw(t − t̂j)] (3). Then at each time t̂j a spike is emitted, which results in a change of voltage described by η(t) = ηv(t) + ηw(t). Given the deterministic voltage (Eq. 3), a spike is emitted according to the firing intensity λ(V): λ(t) = f(V(t)) (4), where f(·) is an arbitrary function called the link-function. The firing behavior of the SRM thus depends on the choice of the link-function and its parameters. The most common link-functions used to model single neuron activity are the linear-rectifier and the exponential function. 3 Mapping In order to map the stochastic AdEx to the SRM we follow a two-step procedure. First we derive the filter κ(t) and the kernels ηv(t) and ηw(t) analytically as functions of the AdEx parameters. Second, we derive the link-function of the SRM from the stochastic spike emission of the AdEx. Figure 1: Mapping of the subthreshold dynamics of an AdEx to an equivalent SRM. A.
Membrane filter κ(t) for three different sets of parameters of the AdEx leading to over-damped, critically damped and under-damped cases (upper, middle and lower panel, respectively). B. Spike-triggered kernels η(t) (black), ηv(t) (light gray) and ηw(t) (gray) for the three cases. C. Example of voltage trace produced when an AdEx is stimulated with a step of colored noise (black). The corresponding voltage from an SRM stimulated with the same current and where we forced the spikes to match those of the AdEx (red). D. Error in the subthreshold voltage (VAdEx − VGLM) as a function of the mean voltage of the AdEx, for the three different cases: over-, critically and under-damped (light gray, gray and black, respectively) with ∆T = 1 mV. The red line represents the voltage threshold Θ. E. Root Mean Square Error (RMSE) ratio for the three cases with ∆T = 1 mV. The RMSE ratio is the RMSE between the deterministic VSRM and the stochastic VAdEx divided by the RMSE between repetitions of the stochastic AdEx voltage. The error bar shows a single standard deviation, as the RMSE ratio is averaged across multiple values of σ. 3.1 Subthreshold voltage dynamics We start by assuming that the non-linearity for spike initiation does not affect the mean subthreshold voltage of the stochastic AdEx (see Figure 1 D). This assumption is motivated by the small ∆T observed in in-vitro recordings (from 0.5 to 2 mV [8, 9]), which suggests that the subthreshold dynamics are mainly linear except very close to Θ. Also, we expect that the non-linear link-function will capture some of the dynamics due to the non-linearity for spike initiation. Thus it is possible to rewrite the deterministic subthreshold part of the AdEx (Eq. 1-2 without ϵ and without ∆T exp((V − Θ)/∆T)) using matrices: ẋ = Ax (5), with x = (V, w)⊤ and A = [−1/τm, −1/(gl τm); a/τw, −1/τw] (6). In this form, the dynamics of the deterministic AdEx voltage is a damped oscillator with a driving force.
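The filter κ(t) discussed next is the impulse response of this linear system, so it can be checked numerically by integrating ẋ = Ax from an impulse. The sketch below is a hedged illustration with invented parameter values; the discriminant function mirrors the damping conditions summarized in Table 1.

```python
import numpy as np

def membrane_filter(tau_m=10.0, tau_w=100.0, g_l=0.1, a=0.05, dt=0.01, T=300.0):
    """Numerical impulse response of the subthreshold system (Eq. 5-6).
    Parameter values are illustrative assumptions."""
    C = g_l * tau_m                      # capacitance implied by tau_m = C / g_l
    A = np.array([[-1.0 / tau_m, -1.0 / (g_l * tau_m)],
                  [a / tau_w,    -1.0 / tau_w]])
    x = np.array([1.0 / C, 0.0])         # state just after a unit Dirac current pulse
    n = int(T / dt)
    kappa = np.empty(n)
    for i in range(n):
        kappa[i] = x[0]                  # kappa(t) is the voltage component
        x = x + dt * (A @ x)             # forward-Euler step of x' = A x
    return kappa

def damping_case(tau_m, tau_w, g_l, a):
    """Damping regime from the discriminant of A (same comparison as Table 1)."""
    lhs = (tau_m + tau_w) ** 2
    rhs = 4.0 * tau_m * tau_w * (g_l + a) / g_l
    return "over" if lhs > rhs else ("under" if lhs < rhs else "critical")
```

For the default parameters the system is over-damped and κ(t) decays monotonically back to zero; increasing the coupling a pushes it into the under-damped, oscillatory regime.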
Depending on the eigenvalues of A, the system can be over-damped, critically damped or under-damped. The filter κ(t) of the GLM is given by the impulse response of the system of coupled differential equations of the AdEx, described by Eq. 5 and 6. In other words, one has to derive the response of the system when stimulated with a Dirac-delta function. The type of damping gives three different qualitative shapes of the kernel κ(t), which are summarized in Table 1 and Figure 1 A. Since the three different filters also affect the nature of the stochastic voltage fluctuations, we will keep the distinction between over-damped, critically damped and under-damped scenarios throughout the paper. This means that our approach is valid for at least 3 types of diffusive voltage-noise (i.e. the white noise ϵ in Eq. 1 filtered by 3 different membrane filters κ(t)). To complete the description of the deterministic voltage, we need an expression for the spike-triggered kernels. The voltage reset at each spike brings a spike-triggered jump in voltage of magnitude ∆ = Vr − V(t̂). This perturbation is superposed on the current fluctuations due to I(t) and can be mediated by a Dirac-delta pulse of current. Thus we can write the voltage-reset kernel as: ηv(t) = (∆/κ(0)) [δ ∗ κ](t) = (∆/κ(0)) κ(t) (7), where δ(t) is the Dirac-delta function. The shape of this kernel depends on κ(t) and can be computed from Table 1 (see Figure 1 B). Finally, the AdEx mediates spike-frequency adaptation by the jump of the second variable w. From Eq. 2 we can see that this produces a current wspike(t) = b exp(−t/τw) that can accumulate over subsequent spikes. The effect of this current on the voltage is then given by the convolution of wspike(t) with the membrane filter κ(t). Thus in the SRM framework the spike-frequency adaptation is taken into account by: ηw(t) = [wspike ∗ κ](t) (8). Again the precise form of ηw(t) depends on κ(t) and can be computed from Table 1 (see Figure 1 B).
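Putting Eq. 3, 4 and the kernels together, the SRM can be simulated directly. This is a hedged sketch: the exponential link-function choice anticipates Sect. 3.2, a single kernel `eta` stands for the sum ηv + ηw, and all numeric values are our own placeholders.

```python
import numpy as np

def simulate_srm(I, kappa, eta, dt=0.1, E_l=-70.0, lam0=1.0, V_T=-50.0,
                 Delta_V=1.0, seed=0):
    """SRM with escape noise: deterministic voltage from Eq. 3, spike draw
    from the per-bin probability 1 - exp(-f(V)dt) (Eq. 9). eta is the summed
    spike-triggered kernel eta_v + eta_w; all defaults are placeholders."""
    rng = np.random.default_rng(seed)
    n = len(I)
    V = E_l + np.convolve(I, kappa)[:n] * dt          # E_l + [kappa * I](t)
    spikes = []
    for t in range(n):
        lam = lam0 * np.exp((V[t] - V_T) / Delta_V)   # exponential link-function
        if rng.random() < 1.0 - np.exp(-lam * dt):    # escape-noise spike draw
            spikes.append(t)
            m = min(len(eta), n - t)
            V[t:t + m] += eta[:m]                     # superpose eta after the spike
    return V, spikes
```

With a decaying-exponential κ(t) and a negative, slowly recovering η(t), a constant suprathreshold drive produces irregular spiking with a post-spike suppression of the voltage.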
At this point, we would like to verify our assumption that the non-linearity for spike emission can be neglected. Fig. 1 C and D show that the error between the voltage from Eq. 3 and the voltage from the stochastic AdEx is generally small. Moreover, we see that the main contribution to the voltage prediction error is due to the mismatch close to the spikes. However, the non-linearity for spike initiation may change the probability distribution of the voltage fluctuations, which in turn influences the probability of spiking. This will influence the choice of the link-function, as we will see in the next section. 3.2 Spike Generation Using κ(t), ηv(t) and ηw(t), we must express the spiking probability of the stochastic AdEx as a function of its deterministic voltage. According to [2] the probability of spiking in a time bin dt given the deterministic voltage V(t) is given by: p(V) = prob{spike in [t, t + dt]} = 1 − exp(−f(V(t)) dt) (9), where f(·) gives the firing intensity as a function of the deterministic V(t) (Eq. 3). Thus to extract the link-function f we have to compute the probability of spiking given V(t) for our SRM. To do so we apply the method proposed by Jolivet et al. (2004) [18], where the probability of spiking is simply given by the distribution of the deterministic voltage estimated at the spike times divided by the distribution of the SRM voltage when there is no spike (see Figure 2 A). One can numerically compute these two quantities for our models using N repetitions of the same stimulus. Table 1: Analytical expressions for the membrane filter κ(t) in terms of the parameters of the AdEx for over-, critically-, and under-damped cases.
Membrane filter κ(t): the damping case is determined by comparing (τm + τw)² with 4 τm τw (gl + a)/gl. Over-damped case ((τm + τw)² > 4 τm τw (gl + a)/gl): κ(t) = k1 e^{λ1 t} + k2 e^{λ2 t}, with λ1,2 = (1/(2 τm τw)) [−(τm + τw) ± √((τm + τw)² − 4 (τm τw/gl)(gl + a))], k1 = −(1 + τm λ2)/(C τm (λ1 − λ2)) and k2 = (1 + τm λ1)/(C τm (λ1 − λ2)). Critically damped case ((τm + τw)² = 4 τm τw (gl + a)/gl): κ(t) = (α t + β) e^{λ t}, with λ = −(τm + τw)/(2 τm τw), α = (τm − τw)/(2 C τm τw) and β = 1/C. Under-damped case ((τm + τw)² < 4 τm τw (gl + a)/gl): κ(t) = (k1 cos(ω t) + k2 sin(ω t)) e^{λ t}, with λ = −(τm + τw)/(2 τm τw), ω = √(a/(gl τm τw) − ((τw − τm)/(2 τm τw))²), k1 = 1/C and k2 = −(1 + τm λ)/(C ω τm). The standard deviation σ of the noise and the parameter ∆T of the AdEx non-linearity may affect the shape of the link-function. We thus extract p(V) for different σ and ∆T (Fig. 2 B). Then, using visual heuristics and previous knowledge about the potential analytical expression of the link-function, we try to find a simple analytical function that captures p(V) for a large range of combinations of σ and ∆T. We observed that log(−log(p)) is close to linear in most studied conditions (Fig. 2 B), suggesting the following two candidate forms for p(V): p(V) = 1 − exp(−exp((V − VT)/∆V)) (10) and p(V) = exp(−exp(−(V − VT)/∆V)) (11). Once we have p(V), we can invert Eq. 9 to obtain the equivalent SRM link-function, which leads to: f(V) = −(1/dt) log(1 − p(V)) (12). The two potential link-functions of the SRM can then be derived from Eq. 10 and Eq. 11 (respectively): f(V) = λ0 exp((V − VT)/∆V) (13) and f(V) = −λ0 log(1 − exp(−exp(−(V − VT)/∆V))) (14), with λ0 = 1/dt, VT the threshold of the SRM and ∆V the sharpness of the link-function (i.e. the parameter that governs the degree of stochasticity). Note that the exact value of λ0 has no importance since it is redundant with VT. Eq. 13 is the standard exponential link-function, and we call Eq. 14 the log-exp-exp link-function. 3.3 Prediction The next point is to evaluate the fit quality of each link-function.
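Before doing so, the histogram procedure of Sect. 3.2 that produced these p(V) estimates, together with the Eq. 12 conversion, can be sketched in a few lines. This is a hedged illustration on synthetic data: the binning choices and function names are our own assumptions.

```python
import numpy as np

def estimate_p_of_V(V, spike_idx, bins=20):
    """Per-bin fraction of time steps whose voltage led to a spike: a histogram
    proxy for p(V), in the spirit of the Jolivet et al. (2004) method."""
    edges = np.linspace(V.min(), V.max() + 1e-9, bins + 1)
    all_counts, _ = np.histogram(V, bins=edges)
    spk_counts, _ = np.histogram(V[spike_idx], bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = np.where(all_counts > 0, spk_counts / np.maximum(all_counts, 1), 0.0)
    return centers, p

def f_from_p(p, dt):
    """Firing intensity recovered from the bin-wise spike probability (Eq. 12)."""
    return -np.log(1.0 - np.clip(p, 0.0, 1.0 - 1e-12)) / dt
```

Plotting the recovered f against the bin centers gives the empirical link-function of Fig. 2 C, which can then be compared against the candidate forms of Eq. 13 and 14.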
To do this, we first estimate the parameters VT and ∆V of the GLM link-function that maximize the likelihood of observing a spike train generated with an AdEx. Second, we look at the predictive power of the resulting SRM in terms of the Peri-Stimulus Time Histogram (PSTH). In other words, we ask how close the spike trains generated with a GLM are to the spike trains generated with a stochastic AdEx when both models are stimulated with the same input current. Figure 2: SRM link-function. A. Histogram of the SRM voltage at the AdEx firing times (red) and at non-firing times (gray). The ratio of the two distributions gives p(V) (Eq. 9, dashed lines). Inset: zoom to see the voltage histogram evaluated at the firing times (red). B. log(−log(p)) as a function of the SRM voltage for three different noise levels σ = 0.07, 0.14, 0.18 nA (pale gray, gray, black dots, respectively) and ∆T = 1 mV. The line is a linear fit corresponding to the log-exp-exp link-function and the dashed line corresponds to a fit with the exponential link-function. C. Same data and labeling scheme as B, but plotting f(V) according to Eq. 12. The lines are produced with Eq. 14 with parameters fitted as described in B, and the dashed lines are produced with Eq. 13. Inset: same plot but on a semi-log(y) axis. For any GLM with link-function f(V) ≡ f(t|I, θ) and parameters θ regulating the shape of κ(t), ηv(t) and ηw(t), the Negative Log-Likelihood (NLL) of observing a spike-train {t̂} is given by: NLL = −Σ_{t̂} log f(t̂|I, θ) + Σ_t f(t|I, θ) dt (15). It has been shown that the negative log-likelihood is convex in the parameters if f is convex and log-concave [19]. It is easy to show that the linear-rectifier link-function, the exponential link-function and the log-exp-exp link-function all satisfy these conditions. This allows efficient estimation of the optimal parameters V̂T and ∆̂V using a simple gradient descent.
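A minimal sketch of this fitting step for the exponential link-function follows, with the gradient descent replaced by a coarse grid search for brevity; the synthetic data, the search grids and fixing λ0 = 1 are all our own assumptions.

```python
import numpy as np

def nll_exp(V, spike_idx, V_T, Delta_V, dt):
    # NLL of Eq. 15 for the exponential link-function, with lam0 fixed to 1
    lam = np.exp((V - V_T) / Delta_V)
    return -np.sum(np.log(lam[spike_idx] * dt)) + np.sum(lam) * dt

def fit_exp_link(V, spike_idx, dt):
    # grid search stands in for the gradient descent of the paper
    best = (np.inf, None, None)
    for V_T in np.arange(-54.0, -46.0, 0.5):          # assumed search grids
        for Delta_V in np.arange(0.5, 4.01, 0.5):
            nll = nll_exp(V, spike_idx, V_T, Delta_V, dt)
            if nll < best[0]:
                best = (nll, V_T, Delta_V)
    return best[1], best[2]

# synthetic check: draw spikes from a known link-function and recover it
rng = np.random.default_rng(1)
dt, V_T_true, Delta_V_true = 1e-3, -50.0, 2.0
V = rng.uniform(-60.0, -45.0, size=100_000)
p = 1.0 - np.exp(-np.exp((V - V_T_true) / Delta_V_true) * dt)   # Eq. 9
spike_idx = np.where(rng.random(V.size) < p)[0]
V_T_hat, Delta_V_hat = fit_exp_link(V, spike_idx, dt)
```

Because the NLL is convex for this link-function, a gradient-based optimizer would reach the same optimum as this exhaustive search, only faster.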
One can thus estimate from a single AdEx spike train the optimal parameters of a given link-function, which is more efficient than the method used in Sect. 3.2. The minimal NLL resulting from the gradient descent gives an estimate of the fit quality. A better estimate of the fit quality is given by the distance between the PSTHs in response to stimuli not used for parameter fitting. Let ν1(t) be the PSTH of the AdEx and ν2(t) be the PSTH of the fitted SRM; then we use Md ∈ [0, 1] as a measure of match: Md = 2 ∫ν1(t)ν2(t)dt / (∫ν1(t)²dt + ∫ν2(t)²dt) (16). Md = 1 means that it is impossible to differentiate the SRM from the AdEx in terms of their PSTHs, whereas an Md of 0 means that the two PSTHs are completely different. Thus Md is a normalized similarity measure between two PSTHs. In practice, Md is estimated from the smoothed (boxcar average of 1 ms half-width) averaged spike train of 1,000 repetitions for each model. We use both the NLL and Md to quantify the fit quality for each of the three damping cases and each of the three link-functions. Figure 3: PSTH prediction. A. Injected current. B. Voltage traces produced by an AdEx (black) and the equivalent SRM (red) when stimulated with the current in A. C. Raster plot for 20 realizations of the AdEx (black tick marks) and the equivalent SRM (red tick marks). D. PSTH of the AdEx (black) and the SRM (red) obtained by averaging 10,000 repetitions. E. Optimal log-likelihood for the three cases of the AdEx, using three different link-functions: a linear-rectifier (light gray), an exponential link-function (gray) and the link-function defined by Eq. 14 (dark gray); these values are obtained by averaging over 40 different combinations of σ and ∆T (see Fig. 4). Error bars are one standard deviation; the stars denote a significant difference (two-sample t-test with α = 0.01). F. Same as E but for Md (Eq. 16).
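A hedged sketch of the Md computation on toy PSTHs, written so that identical PSTHs give Md = 1 and non-overlapping ones give Md = 0, as the text requires; the boxcar half-width is expressed in bins and the discretization is our own choice (the time step cancels between numerator and denominator).

```python
import numpy as np

def smooth_psth(counts, half_width_bins):
    # boxcar average, as described in the text (window = 2*half_width + 1 bins)
    w = np.ones(2 * half_width_bins + 1)
    return np.convolve(counts, w / w.sum(), mode="same")

def match_measure(nu1, nu2):
    # Md similarity: 1 for identical PSTHs, 0 for non-overlapping ones
    return 2.0 * np.sum(nu1 * nu2) / (np.sum(nu1 ** 2) + np.sum(nu2 ** 2))
```

In practice the two arguments would be the smoothed, trial-averaged spike trains of the AdEx and the fitted SRM in response to the same held-out stimulus.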
Figure 3 shows the match between the stochastic AdEx used as a reference and the derived GLM when both are stimulated with the same input current (Fig. 3 A). The resulting voltage traces are almost identical (Fig. 3 B) and both models predict almost the same spike trains and hence the same PSTHs (Fig. 3 C and D). More quantitatively, we see in Fig. 3 E and F that the linear-rectifier fits significantly worse than both the exponential and log-exp-exp link-functions, both in terms of NLL and of Md. The exponential link-function performs as well as the log-exp-exp link-function, with the spike train similarity measure Md being almost 1 for both. Finally, the likelihood-based method described above gives us the opportunity to look at the relationship between the AdEx parameters σ and ∆T that govern its spike emission and the parameters VT and ∆V of the link-function (Fig. 4). We observe that an increase of the noise level produces a flatter link-function (greater ∆V), while an increase in ∆T also produces an increase in ∆V and VT (note that Fig. 4 shows ∆V and VT for the exponential link-function only, but equivalent results are obtained with the log-exp-exp link-function). Figure 4: Influence of the AdEx parameters on the parameters of the exponential link-function. A. VT as a function of ∆T and σ. B. ∆V as a function of ∆T and σ. 4 Discussion In Sect. 3.3 we have shown that it is possible to predict with almost perfect accuracy the PSTH of a stochastic AdEx model using an appropriate set of parameters in the SRM. Moreover, since the subthreshold voltage of the AdEx also gives a good match with the deterministic voltage of the SRM, we expect that the AdEx and the SRM will not differ in higher moments of the spike train probability distributions beyond the PSTH. We therefore conclude that diffusive noise models of the type of Eq. 1-2 are equivalent to GLMs of the type of Eq. 3-4. Once combined with similar results on other types of stochastic LIF (e.g.
correlated noise), we could bridge the gap between the literature on GLMs and the literature on diffusive noise models. Another noteworthy observation pertains to the nature of the link-function. The link-function has been hypothesized to be a linear-rectifier, an exponential, a sigmoidal or a Gaussian [16]. We have observed that for the AdEx the link-function follows Eq. 14, which we called the log-exp-exp link-function. Although the link-function is log-exp-exp for most of the AdEx parameters, the exponential link-function gives an equivalently good prediction of the PSTH. This can be explained by the fact that the difference between the log-exp-exp and exponential link-functions occurs mainly at low voltage (i.e. far from the threshold), where the probability of emitting a spike is very low (Figure 2 C, up to −50 mV). Therefore, even if the exponential link-function overestimates the firing probability at these low voltages, it rarely produces extra spikes. At voltages closer to the threshold, where most of the spikes are emitted, the two link-functions behave almost identically and hence produce the same PSTH. The Gaussian link-function can be seen as lying in-between the exponential link-function and the log-exp-exp link-function in Fig. 2. This means that the work of Plesser and Gerstner (2000) [16] is in agreement with the results presented here. The importance of the time-derivative of the voltage stressed by Plesser and Gerstner (leading to a two-dimensional link-function f(V, dV/dt)) was not studied here, to remain consistent with the typical usage of GLMs in neural systems [14]. Finally, we restricted our study to the exponential non-linearity for spike initiation and did not consider other cases such as the Quadratic Integrate-and-Fire (QIF, [5]) or other polynomial functional shapes. We overlooked these cases for two reasons.
First, there is considerable evidence that the non-linearity in neurons (estimated from in-vitro recordings of pyramidal neurons) is well approximated by a single exponential [9]. Second, the exponential non-linearity of the AdEx only affects the subthreshold voltage at high voltage (close to threshold) and thus can be neglected when deriving the filters κ(t) and η(t). Polynomial non-linearities, on the other hand, affect a larger range of the subthreshold voltage, so that it would be difficult to justify the linearization of subthreshold dynamics essential to the method presented here. References [1] R. B. Stein, "Some models of neuronal variability," Biophys J, vol. 7, no. 1, pp. 37–68, 1967. [2] W. Gerstner and W. Kistler, Spiking Neuron Models. Cambridge University Press, New York, 2002. [3] E. Izhikevich, "Resonate-and-fire neurons," Neural Networks, vol. 14, pp. 883–894, 2001. [4] M. J. E. Richardson, N. Brunel, and V. Hakim, "From subthreshold to firing-rate resonance," Journal of Neurophysiology, vol. 89, pp. 2538–2554, 2003. [5] E. Izhikevich, "Simple model of spiking neurons," IEEE Transactions on Neural Networks, vol. 14, pp. 1569–1572, 2003. [6] S. Mensi, R. Naud, M. Avermann, C. C. H. Petersen, and W. Gerstner, "Parameter extraction and classification of three neuron types reveals two different adaptation mechanisms," under review. [7] N. Fourcaud-Trocme, D. Hansel, C. van Vreeswijk, and N. Brunel, "How spike generation mechanisms determine the neuronal response to fluctuating inputs," Journal of Neuroscience, vol. 23, no. 37, pp. 11628–11640, 2003. [8] R. Brette and W. Gerstner, "Adaptive exponential integrate-and-fire model as an effective description of neuronal activity," Journal of Neurophysiology, vol. 94, pp. 3637–3642, 2005. [9] L. Badel, W. Gerstner, and M. Richardson, "Dependence of the spike-triggered average voltage on membrane response properties," Neurocomputing, vol. 69, pp. 1062–1065, 2007. [10] P. McCullagh and J. A.
Nelder, Generalized Linear Models, 2nd ed. Chapman & Hall/CRC, 1998, vol. 37. [11] W. Gerstner, J. van Hemmen, and J. Cowan, "What matters in neuronal locking?" Neural Computation, vol. 8, pp. 1653–1676, 1996. [12] D. Hubel and T. Wiesel, "Receptive fields and functional architecture of monkey striate cortex," Journal of Physiology, vol. 195, pp. 215–243, 1968. [13] J. Pillow, L. Paninski, V. Uzzell, E. Simoncelli, and E. Chichilnisky, "Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model," Journal of Neuroscience, vol. 25, no. 47, pp. 11003–11013, 2005. [14] K. Doya, S. Ishii, A. Pouget, and R. P. N. Rao, Bayesian Brain: Probabilistic Approaches to Neural Coding. The MIT Press, 2007. [15] S. Gerwinn, J. H. Macke, M. Seeger, and M. Bethge, "Bayesian inference for spiking neuron models with a sparsity prior," in Advances in Neural Information Processing Systems, 2007. [16] H. Plesser and W. Gerstner, "Noise in integrate-and-fire neurons: From stochastic input to escape rates," Neural Computation, vol. 12, pp. 367–384, 2000. [17] J. Schemmel, J. Fieres, and K. Meier, "Wafer-scale integration of analog neural networks," in IEEE International Joint Conference on Neural Networks (IJCNN 2008, IEEE World Congress on Computational Intelligence), June 2008, pp. 431–438. [18] R. Jolivet, T. Lewis, and W. Gerstner, "Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy," Journal of Neurophysiology, vol. 92, pp. 959–976, 2004. [19] L. Paninski, "Maximum likelihood estimation of cascade point-process neural encoding models," Network: Computation in Neural Systems, vol. 15, pp. 243–262, 2004.
2011
Generalised Coupled Tensor Factorisation Y. Kenan Yılmaz A. Taylan Cemgil Umut Şimşekli Department of Computer Engineering Boğaziçi University, Istanbul, Turkey kenan@sibnet.com.tr, {taylan.cemgil, umut.simsekli}@boun.edu.tr Abstract We derive algorithms for generalised tensor factorisation (GTF) by building upon the well-established theory of Generalised Linear Models. Our algorithms are general in the sense that we can compute arbitrary factorisations in a message passing framework, derived for a broad class of exponential family distributions including special cases such as Tweedie's distributions corresponding to β-divergences. By bounding the step size of the Fisher Scoring iteration of the GLM, we obtain general updates for real data and multiplicative updates for non-negative data. The GTF framework is then easily extended to address problems where multiple observed tensors are factorised simultaneously. We illustrate our coupled factorisation approach on synthetic data as well as on a musical audio restoration problem. 1 Introduction A fruitful modelling approach for extracting meaningful information from highly structured multivariate datasets is based on matrix factorisations (MFs). In fact, many standard data processing methods of machine learning and statistics such as clustering, source separation, independent components analysis (ICA), nonnegative matrix factorisation (NMF), and latent semantic indexing (LSI) can be expressed and understood as MF problems. These MF models also have well understood probabilistic interpretations as probabilistic generative models. Indeed, many standard algorithms mentioned above can be derived as maximum likelihood or maximum a-posteriori parameter estimation procedures. It is also possible to do a full Bayesian treatment for model selection [1]. Tensors appear as a natural generalisation of matrix factorisation, when observed data and/or a latent representation have several semantically meaningful dimensions.
Before giving a formal definition, consider the following motivating example: X1^{i,j,k} ≈ Σ_r Z1^{i,r} Z2^{j,r} Z3^{k,r}, X2^{j,p} ≈ Σ_r Z2^{j,r} Z4^{p,r}, X3^{j,q} ≈ Σ_r Z2^{j,r} Z5^{q,r} (1), where X1 is an observed 3-way array and X2, X3 are 2-way arrays, while the Zα for α = 1 . . . 5 are the latent 2-way arrays. Here, the 2-way arrays are just matrices, but this can be easily extended to objects having an arbitrary number of indices. As the term 'N-way array' is awkward, we prefer the more convenient term tensor. Here, Z2 is a shared factor, coupling all models. As the first model is a CP (Parafac) while the second and the third ones are MFs, we call the combined factorisation a CP/MF/MF model. Such models are of interest when one can obtain different 'views' of the same piece of information (here Z2) under different experimental conditions. Singh and Gordon [2] focused on a similar problem called collective matrix factorisation (CMF) or multi-matrix factorisation, for relational learning, but only for matrix factors and observations. In addition, their generalised Bregman divergence minimisation procedure assumes matching link and loss functions. For coupled matrix and tensor factorisation (CMTF), [3] recently proposed a gradient-based all-at-once optimisation method as an alternative to alternating least squares (ALS) optimisation and demonstrated their approach for a CP/MF coupled model. Similar models are used for protein-protein interaction (PPI) problems in gene regulation [4]. The main motivation of the current paper is to construct a general and practical framework for the computation of tensor factorisations (TF), by extending the well-established theory of Generalised Linear Models (GLM). Our approach is also partially inspired by probabilistic graphical models: our computation procedures for a given factorisation have a natural message passing interpretation.
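The coupled CP/MF/MF model of Eq. 1 is easy to write down concretely. The sketch below is a hedged illustration with arbitrary shapes and random factors, meant only to show how the three observed arrays share the factor Z2.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, P, Q, R = 4, 5, 6, 3, 2, 2          # arbitrary sizes and rank (assumed)
Z1, Z2, Z3 = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
Z4, Z5 = rng.random((P, R)), rng.random((Q, R))

X1 = np.einsum("ir,jr,kr->ijk", Z1, Z2, Z3)  # CP model of Eq. 1
X2 = Z2 @ Z4.T                               # first MF, sharing Z2
X3 = Z2 @ Z5.T                               # second MF, also sharing Z2
```

An alternating scheme would update each Zα while holding the others fixed, with the shared factor Z2 receiving contributions from all three observed arrays.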
This provides a structured and efficient approach that enables very easy development of application-specific custom models, priors or error measures, as well as algorithms for joint factorisations where an arbitrary set of tensors can be factorised simultaneously. Well known models of multiway analysis (Parafac, Tucker [5]) appear as special cases, and novel models and associated inference algorithms can automatically be developed. In [6], the authors take a similar approach to tensor factorisations as ours, but that work is limited to KL and Euclidean costs, generalising the MF models of [7] to the tensor case. It is possible to generalise this line of work to β-divergences [8], but none of these works address the coupled factorisation case, and they consider only a restricted class of cost functions.

2 Generalised Linear Models for Matrix/Tensor Factorisation

To set the notation and our approach, we briefly review GLMs, following closely the original notation of [9, ch. 5]. A GLM assumes that a data vector x has conditionally independently drawn components xi according to an exponential family density

  xi ∼ exp{ (xi γi − b(γi)) / τ² − c(xi, τ) },   ⟨xi⟩ = x̂i = ∂b(γi)/∂γi,   var(xi) = τ² ∂²b(γi)/∂γi²   (2)

Here, the γi are canonical parameters and τ² is a known dispersion parameter. ⟨xi⟩ is the expectation of xi and b(·) is the log partition function, enforcing normalization. The canonical parameters are not directly estimated; instead, one assumes a link function g(·) that 'links' the mean of the distribution x̂i via g(x̂i) = li⊤ z, where li⊤ is the i-th row vector of a known model matrix L and z is the parameter vector to be estimated (A⊤ denotes the matrix transpose of A). The model is linear in the sense that a function of the mean is linear in the parameters, i.e., g(x̂) = Lz. A Linear Model (LM) is a special case of GLM that assumes normality, i.e. xi ∼ N(xi; x̂i, σ²), as well as linearity, which implies the identity link function g(x̂i) = x̂i = li⊤ z, assuming the li are known.
Logistic regression assumes a logit link, g(x̂i) = log(x̂i/(1 − x̂i)) = li⊤ z; here the log-odds of x̂i and z have a linear relationship [9]. The goal in classical GLM is to estimate the parameter vector z. This is typically achieved via a Gauss-Newton method (Fisher Scoring). The necessary objects for this computation are the log likelihood, its derivative and the Fisher Information (the expected value of the negative Hessian of the log likelihood). These are easily derived as

  L = Σi [xi γi − b(γi)]/τ² − Σi c(xi, τ),   ∂L/∂z = (1/τ²) Σi (xi − x̂i) wi g_x̂(x̂i) li⊤   (3)

  ∂L/∂z = (1/τ²) L⊤ D G (x − x̂),   E[−∂²L/∂z²] = (1/τ²) L⊤ D L   (4)

where w is a vector with elements wi, and D and G are the diagonal matrices D = diag(w) and G = diag(g_x̂(x̂i)), with

  wi = ( v(x̂i) g_x̂²(x̂i) )⁻¹,   g_x̂(x̂i) = ∂g(x̂i)/∂x̂i   (5)

Here v(x̂i) is the variance function, related to the observation variance by var(xi) = τ² v(x̂i). Via Fisher Scoring, the general update equation in matrix form is

  z ← z + (L⊤ D L)⁻¹ L⊤ D G (x − x̂)   (6)

Although this formulation is somewhat abstract, it covers a very broad range of model classes used in practice. For example, an important special case appears when the variance functions have the form v(x̂) = x̂^p. Setting p = {0, 1, 2, 3} yields the Gaussian, Poisson, Exponential/Gamma and Inverse Gaussian distributions [10, pp. 30]; for arbitrary p, these are the special cases of the exponential family known as Tweedie's family [11]. Those for p = {0, 1, 2}, in turn, correspond to the EU, KL and IS cost functions often used for NMF decompositions [12, 7].

2.1 Tensor Factorisations (TF) as GLMs

The key observation for expressing a TF model as a GLM is to identify the multilinear structure and to use an alternating optimisation approach. To hide the notational complexity, we will give an example with a simple matrix factorisation model; the extension to tensors requires heavier notation but is otherwise conceptually straightforward.
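A minimal sketch of one Fisher-scoring step (6), assuming an identity link (so x̂ = Lz). In the Gaussian case (v = 1, g' = 1) a single step from z = 0 is exactly ordinary least squares, which gives a convenient check. The function and variable names are ours, not the paper's.

```python
import numpy as np

def fisher_scoring_step(z, L, x, var_fn, link_grad):
    """One Fisher-scoring step (6): z <- z + (L^T D L)^{-1} L^T D G (x - xhat).
    Sketched for the identity-link case, where xhat = L z; a general link
    would instead set xhat = g^{-1}(L z)."""
    xhat = L @ z
    g = link_grad(xhat)                      # g'(xhat_i); all ones for identity
    w = 1.0 / (var_fn(xhat) * g ** 2)        # w_i = (v(xhat_i) g'(xhat_i)^2)^{-1}
    D, G = np.diag(w), np.diag(g)
    return z + np.linalg.solve(L.T @ D @ L, L.T @ D @ G @ (x - xhat))

rng = np.random.default_rng(1)
L = rng.normal(size=(50, 3))
z_true = np.array([1.0, -2.0, 0.5])
x = L @ z_true                               # noiseless Gaussian-case data

# Gaussian / identity link (v = 1, g' = 1): one step from z = 0 recovers z
ones = lambda m: np.ones_like(m)
z = fisher_scoring_step(np.zeros(3), L, x, var_fn=ones, link_grad=ones)
```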
Consider an MF model g(X̂) = Z1 Z2, in scalar form

  g(X̂)(i,j) = Σ_r Z1(i,r) Z2(j,r)   (7)

where Z1, Z2 and g(X̂) are matrices of compatible sizes. Indeed, by applying the vec operator (vectorisation: stacking the columns of a matrix to obtain a vector) to both sides of (7), we obtain two equivalent representations of the same system

  vec(g(X̂)) = (I_|j| ⊗ Z1) vec(Z2) = [∂(Z1 Z2)/∂Z2] vec(Z2) = [∂g(X̂)/∂Z2] vec(Z2) ≡ ∇2 vec(Z2)   (8)

where I_|j| denotes the |j| × |j| identity matrix, ⊗ denotes the Kronecker product [13], and vec Z ≡ Z⃗. Clearly, this is a GLM where ∇2 plays the role of a model matrix and vec(Z2) is the parameter vector. By alternating between Z1 and Z2, we can maximise the log-likelihood iteratively; indeed, this alternating maximisation is standard for solving matrix factorisation problems. In the sequel, we will show that a much broader range of algorithms can be readily derived in the GLM framework.

2.2 Generalised Tensor Factorisation

We define a tensor Λ as a multiway array with an index set V = {i1, i2, ..., i_|α|}, where each index i_n, for n = 1 ... |α|, runs as i_n = 1 ... |i_n|. An element of the tensor Λ is a scalar that we denote by Λ(i1, i2, ..., i_|α|), by Λ_{i1,i2,...,i_|α|}, or, as a shorthand, by Λ(v) with v being a particular configuration. |v| denotes the number of all distinct configurations for V; e.g., if V = {i1, i2} then |v| = |i1||i2|. We call the form Λ(v) element-wise; the notation [·] yields a tensor by enumerating all the indices, i.e., Λ = [Λ_{i1,i2,...,i_|α|}] or Λ = [Λ(v)]. For any two tensors X and Y of compatible order, X ◦ Y is an element-wise multiplication and, if not explicitly stressed otherwise, X/Y is an element-wise division. 1 is an object of all ones whose order depends on the context where it is used. A generalised tensor factorisation problem is specified by an observed tensor X (with possibly missing entries, to be treated later) and a collection of latent tensors to be estimated, Z_{1:|α|} = {Zα} for α = 1 ...
|α|, and by an exponential family of form (2). The index set of X is denoted by V0 and the index set of each Zα by Vα. The set of all model indices is V = ∪_{α=1}^{|α|} Vα. We use vα (or v0) to denote a particular configuration of the indices of Zα (or X), while v̄α denotes a configuration of the complement V̄α = V/Vα. The goal is to find the latent Zα that maximise the likelihood p(X | Z_{1:α}), where ⟨X⟩ = X̂ is given via

  g(X̂(v0)) = Σ_{v̄0} Π_α Zα(vα)   (9)

To clarify our notation with an example, we express the CP (Parafac) model, defined as X̂(i,j,k) = Σ_r Z1(i,r) Z2(j,r) Z3(k,r). In our notation, we take the identity link g(X̂) = X̂ and the index sets V = {i,j,k,r}, V0 = {i,j,k}, V̄0 = {r}, V1 = {i,r}, V2 = {j,r} and V3 = {k,r}. Our notation deliberately follows that of graphical models; the reader might find it useful to associate indices with discrete random variables and factors with probability tables [14]. Obviously, while a TF model does not represent a discrete probability measure, the algebraic structure is nevertheless analogous. To extend the discussion in Section 2.1 to the tensor case, we need the equivalent of the model matrix when updating Zα. This is obtained by summing over the product of all remaining factors:

  g(X̂(v0)) = Σ_{v̄0∩vα} Zα(vα) Σ_{v̄0∩v̄α} Π_{α′≠α} Zα′(vα′) = Σ_{v̄0∩vα} Zα(vα) Lα(oα)
  Lα(oα) = Σ_{v̄0∩v̄α} Π_{α′≠α} Zα′(vα′)   with oα ≡ (v0 ∪ vα) ∩ (v̄0 ∪ v̄α)

One related quantity is the derivative of the tensor g(X̂) with respect to the latent tensor Zα, denoted ∇α and defined (following the convention of [13, pp. 196]) as

  ∇α = ∂g(X̂)/∂Zα = I_{|v0∩vα|} ⊗ Lα   with Lα ∈ R^{|v0∩v̄α| × |v̄0∩vα|}   (10)

The importance of Lα is that all the update rules can be formulated as a product and subsequent contraction of Lα with another tensor Q having exactly the same index set as the observed tensor X. As a notational abstraction, it is useful to formulate the following function. Definition 1.
The tensor-valued function Δα^ε(Q) : R^{|v0|} → R^{|vα|} is defined as

  Δα^ε(Q) = [ Σ_{v0∩v̄α} Q(v0) Lα(oα)^ε ]   (11)

with Δα(Q) being an object of the same order as Zα and oα ≡ (v0 ∪ vα) ∩ (v̄0 ∪ v̄α). Here, on the right-hand side, the nonnegative integer ε denotes an element-wise power, not to be confused with an index; on the left, it should be interpreted as a parameter of the Δ function. Arguably, the Δ function abstracts away all the tedious reshape and unfolding operations [5]. This abstraction also has an important practical facet: the computation of Δ is algebraically (almost) equivalent to the computation of marginal quantities on a factor graph, for which efficient message passing algorithms exist [14].

Example 1. TUCKER3 is defined as X̂(i,j,k) = Σ_{p,q,r} A(i,p) B(j,q) C(k,r) G(p,q,r), with V = {i,j,k,p,q,r}, V0 = {i,j,k}, V_A = {i,p}, V_B = {j,q}, V_C = {k,r}, V_G = {p,q,r}. Then, for the first factor A, the objects L_A and Δ_A^ε(·) are computed as follows:

  L_A = [ Σ_{q,r} B(j,q) C(k,r) G(p,q,r) ] = [ ((C ⊗ B) G_(1)⊤)(k,j; p) ]   (12)
  Δ_A^ε(Q) = [ Σ_{j,k} Q(i,j,k) L_A^ε(k,j; p) ] = [ (Q_(1) L_A^ε)(i; p) ]   (13)

The index sets marginalised out for L_A and Δ_A are V̄0 ∩ V̄_A = {p,q,r} ∩ {j,q,k,r} = {q,r} and V0 ∩ V̄_A = {i,j,k} ∩ {j,q,k,r} = {j,k}. We can also verify the order of the gradient ∇_A in (10) as I_{|i|} ⊗ L_A, which conforms to the matrix derivative convention [13, pp. 196].

2.3 Iterative Solution for GTF

As we have now established a one-to-one relationship between GLM and GTF objects, namely the observation x ≡ vec X, the mean (and model estimate) x̂ ≡ vec X̂, the model matrix L ≡ Lα and the parameter vector z ≡ vec Zα, we can write directly from (6)

  vec Zα ← vec Zα + (∇α⊤ D ∇α)⁻¹ ∇α⊤ D G (vec X − vec X̂)   with ∇α = ∂g(X̂)/∂Zα   (14)

There are at least two ways in which this update can be further simplified. We may assume an identity link function, or alternatively we may choose matching link and loss functions such that they cancel each other smoothly [2].
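The matricised identity for L_A in (12) can be checked numerically. The unfolding convention below (mode-1 columns ordered with q varying fastest, matching the column ordering of the Kronecker product) is our choice for the check, in the standard Kolda-Bader style:

```python
import numpy as np

rng = np.random.default_rng(3)
P, Q, R, J, K = 2, 3, 4, 5, 6
B, C = rng.random((J, Q)), rng.random((K, R))
G = rng.random((P, Q, R))

# Element-wise definition of L_A for TUCKER3 (eq. 12):
# L_A(k,j; p) = sum_{q,r} B(j,q) C(k,r) G(p,q,r)
L_elem = np.einsum('jq,kr,pqr->kjp', B, C, G)

# Matricised form (C kron B) G_(1)^T, where G_(1) is the mode-1 unfolding of G
# (rows indexed by p, columns by (q,r) with q fastest, matching kron(C, B))
G1 = G.reshape(P, Q * R, order='F')
L_mat = np.kron(C, B) @ G1.T          # shape (K*J, P), rows (k,j) with j fastest
```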
In the sequel we consider the identity link g(X̂) = X̂, which results in g_X̂(X̂) = 1. This implies that G is the identity, i.e. G = I. We define a tensor W that plays the same role as w in (5), which becomes simply the precision (inverse variance function), i.e. W = 1/v(X̂); for the Gaussian, Poisson, Exponential and Inverse Gaussian distributions we have simply W = X̂^(−p) with p = {0, 1, 2, 3} [10, pp. 30]. The update (14) then reduces to

  vec Zα ← vec Zα + (∇α⊤ D ∇α)⁻¹ ∇α⊤ D (vec X − vec X̂)   (15)

After this simplification we obtain two update rules for GTF, for non-negative and for real data. The update (15) can be used to derive the multiplicative update rules (MUR) popularised by [15] for nonnegative matrix factorisation (NMF). MUR equations ensure non-negative parameter updates as long as one starts from non-negative initial values.

Theorem 1. The update equation (15) for nonnegative GTF reduces to the multiplicative form

  Zα ← Zα ◦ Δα(W ◦ X) / Δα(W ◦ X̂)   s.t. Zα(vα) > 0   (16)

(Proof sketch) Due to space limitations we leave out the full details, but the idea is that the inverse of H = ∇⊤D∇ is identified as a step size, which, by use of the Perron-Frobenius theorem [16, pp. 125], we can bound as

  η = Z⃗α / (∇⊤D vec X̂) < 2 Z⃗α / (∇⊤D vec X̂) ≤ 2 / λmax(∇⊤D∇)   since   λmax(H) ≤ max_{vα} (H Z⃗α)(vα) / Zα(vα)   (17)

For the special case of the Tweedie family, where the precision is a function of the mean, W = X̂^(−p) for p = {0, 1, 2, 3}, the update (15) reduces to

  Zα ← Zα ◦ Δα(X̂^(−p) ◦ X) / Δα(X̂^(1−p))   (18)

For example, to update Z2 for the NMF model X̂ = Z1 Z2, Δ2 is Δ2(Q) = Z1⊤ Q. For the Gaussian (p = 0) this reduces to NMF-EU, Z2 ← Z2 ◦ (Z1⊤ X)/(Z1⊤ X̂). For the Poisson (p = 1) it reduces to NMF-KL, Z2 ← Z2 ◦ (Z1⊤ (X/X̂))/(Z1⊤ 1) [15]. By dropping the non-negativity requirement we obtain the following update equation: Theorem 2.
The update equation for GTF with real data can be expressed as

  Zα ← Zα + (2/λ_{α/0}) Δα(W ◦ (X − X̂)) / Δα²(W)   with λ_{α/0} = |vα ∩ v̄0|   (19)

(Proof sketch) Again skipping the full details: as part of the proof we set Zα = 1 in (17), and replacing the matrix multiplication ∇⊤D∇1 by ∇⊤²D1 λ_{α/0} completes the proof. Here the multiplier λ_{α/0} is a cardinality, arising from the fact that only λ_{α/0} elements are non-zero in a row of ∇⊤D∇. As an example, if Vα ∩ V̄0 = {p, q} then λ_{α/0} = |p||q|, the number of all distinct configurations for the index set {p, q}.

Missing data can be handled easily by dropping the missing-data terms from the likelihood [17]. The net effect is the addition of an indicator variable m_i to the gradient, ∂L/∂z = τ⁻² Σi (xi − x̂i) m_i wi g_x̂(x̂i) li⊤, with m_i = 1 if xi is observed and m_i = 0 otherwise. Hence we simply define a mask tensor M of the same order as the observation X, where the element M(v0) is 1 if X(v0) is observed and zero otherwise. In the update equations, we merely replace W with W ◦ M.

3 Coupled Tensor Factorisation

Here we address the problem where multiple observed tensors Xν, for ν = 1 ... |ν|, are factorised simultaneously. Each observed tensor Xν now has a corresponding index set V_{0,ν}, and a particular configuration will be denoted by v_{0,ν} ≡ uν. Next, we define a |ν| × |α| coupling matrix R, where

  R(ν,α) = 1 if Xν and Zα are connected, 0 otherwise;   X̂ν(uν) = Σ_{ūν} Π_α Zα(vα)^{R(ν,α)}   (20)

For the coupled factorisation, the derivative of the log likelihood is

  ∂L/∂Zα(vα) = Σ_ν R(ν,α) Σ_{uν∩v̄α} ( Xν(uν) − X̂ν(uν) ) Wν(uν) ∂X̂ν(uν)/∂Zα(vα)   (21)

where Wν ≡ W(X̂ν(uν)) are the precisions. Then, proceeding as in Section 2.3 (i.e., obtaining the Hessian and the Fisher Information), we arrive at the update rule in vector form

  vec Zα ← vec Zα + ( Σ_ν R(ν,α) ∇_{α,ν}⊤ Dν ∇_{α,ν} )⁻¹ Σ_ν R(ν,α) ∇_{α,ν}⊤ Dν (vec Xν − vec X̂ν)   (22)
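The masking mechanism described above (replacing W by W ◦ M) is easy to demonstrate on the smallest instance, NMF with Euclidean cost: the multiplicative update (16) with W = M. A minimal sketch on made-up data (sizes, seed and the tiny epsilon guarding empty denominators are our additions):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.random((8, 9)) + 0.1
M = (rng.random(X.shape) < 0.7).astype(float)   # 1 = observed, 0 = missing
Z1, Z2 = rng.random((8, 2)) + 0.1, rng.random((2, 9)) + 0.1
err0 = np.linalg.norm(M * (X - Z1 @ Z2))        # initial masked error

# EU-cost NMF (p = 0, so W = 1): multiplicative updates with W -> W o M
for _ in range(300):
    Z2 *= (Z1.T @ (M * X)) / (Z1.T @ (M * (Z1 @ Z2)) + 1e-12)
    Z1 *= ((M * X) @ Z2.T) / ((M * (Z1 @ Z2)) @ Z2.T + 1e-12)
```

Only observed entries contribute to the updates, so the fit is driven purely by the mask's support, as the text prescribes.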
Figure 1: (Left) Coupled factorisation structure, where an arrow indicates that the latent tensor Zα influences the observed tensor Xν. (Right) The CP/MF/MF coupled factorisation problem of (1).

Here ∇_{α,ν} = ∂g(X̂ν)/∂Zα. The update equations for the coupled case are quite intuitive: we calculate the Δ_{α,ν} functions, defined as

  Δ^ε_{α,ν}(Q) = [ Σ_{uν∩v̄α} Q(uν) ( Π_{α′≠α} Zα′(vα′)^{R(ν,α′)} )^ε ]   (23)

for each submodel, and add the results:

Lemma 1. Update for non-negative CTF:

  Zα ← Zα ◦ ( Σ_ν R(ν,α) Δ_{α,ν}(Wν ◦ Xν) ) / ( Σ_ν R(ν,α) Δ_{α,ν}(Wν ◦ X̂ν) )   (24)

In the special case of a Tweedie family, i.e. for distributions whose precision is Wν = X̂ν^(−p), the update is Zα ← Zα ◦ (Σ_ν R(ν,α) Δ_{α,ν}(X̂ν^(−p) ◦ Xν)) / (Σ_ν R(ν,α) Δ_{α,ν}(X̂ν^(1−p))).

Lemma 2. General update for CTF:

  Zα ← Zα + (2/λ_{α/0}) ( Σ_ν R(ν,α) Δ_{α,ν}(Wν ◦ (Xν − X̂ν)) ) / ( Σ_ν R(ν,α) Δ²_{α,ν}(Wν) )   (25)

For the special case of the Tweedie family we plug in Wν = X̂ν^(−p) and get the related formula.

4 Experiments

Here we want to solve the CTF problem introduced in (1), which is a coupled CP/MF/MF problem

  X̂1(i,j,k) = Σ_r A(i,r) B(j,r) C(k,r),   X̂2(j,p) = Σ_r B(j,r) D(p,r),   X̂3(j,q) = Σ_r B(j,r) E(q,r)   (26)

where we employ the symbols A ... E for the latent tensors instead of Zα. This factorisation problem has the following R matrix, with |α| = 5 and |ν| = 3:

  R = [ 1 1 1 0 0
        0 1 0 1 0
        0 1 0 0 1 ]
  with X̂1 = Σ A¹B¹C¹D⁰E⁰, X̂2 = Σ A⁰B¹C⁰D¹E⁰, X̂3 = Σ A⁰B¹C⁰D⁰E¹   (27)

We want to use the general update equation (25). This requires the derivation of Δ^ε_{α,ν}(·) for ν = 1 (CP) and ν = 2 (MF), but not for ν = 3, since Δ_{α,3}(·) has the same form as Δ_{α,2}(·). Here we show the computation for B, i.e. for Z2, which is the common factor:

  Δ^ε_{B,1}(Q) = [ Σ_{i,k} Q(i,j,k) (A(i,r) C(k,r))^ε ] = Q_(1) (C^ε ⊙ A^ε)   (28)
  Δ^ε_{B,2}(Q) = [ Σ_p Q(j,p) D(p,r)^ε ] = Q D^ε   (29)

with Q_(n) being the mode-n unfolding operation that turns a tensor into matrix form [5]. In addition, for ν = 1 the required scalar value λ_{B/0} is |r| here, since V_B ∩ V̄0 = {j, r} ∩ {r} = {r}; the value λ_{B/0} is the same for ν = 2, 3.
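A sketch of the non-negative coupled update (24) for the shared factor B, using the Δ's of (28)-(29) with ε = 1, under the Euclidean cost (p = 0, so Wν = 1). All sizes and data below are synthetic stand-ins of our own; the einsum contraction replaces the explicit unfolding/Khatri-Rao product of (28).

```python
import numpy as np

rng = np.random.default_rng(5)
I, J, K, P, Q, R = 4, 5, 6, 3, 3, 2
A, C = rng.random((I, R)) + 0.1, rng.random((K, R)) + 0.1
D, E = rng.random((P, R)) + 0.1, rng.random((Q, R)) + 0.1
B_true = rng.random((J, R)) + 0.1
X1 = np.einsum('ir,jr,kr->ijk', A, B_true, C)   # data generated by the model
X2, X3 = B_true @ D.T, B_true @ E.T

def delta_B(Q1, Q2, Q3):
    """sum_nu R(nu,B) Delta_{B,nu}(Q_nu) for the shared factor B, eqs. (28)-(29):
    Delta_{B,1}(Q)(j,r) = sum_{i,k} Q(i,j,k) A(i,r) C(k,r);  Delta_{B,2}(Q) = Q D."""
    return np.einsum('ijk,ir,kr->jr', Q1, A, C) + Q2 @ D + Q3 @ E

def cost(B):
    X1h = np.einsum('ir,jr,kr->ijk', A, B, C)
    return sum(np.sum((X - Xh) ** 2) for X, Xh in
               ((X1, X1h), (X2, B @ D.T), (X3, B @ E.T)))

B = rng.random((J, R)) + 0.1
c0 = cost(B)
for _ in range(500):
    X1h = np.einsum('ir,jr,kr->ijk', A, B, C)
    # coupled multiplicative update (24) for B, EU cost (W_nu = 1)
    B *= delta_B(X1, X2, X3) / delta_B(X1h, B @ D.T, B @ E.T)
```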
The simulated data size for the observables is |i| = |j| = |k| = |p| = |q| = 30, while the latent dimension is |r| = 5. The number of iterations is 1000 with the Euclidean cost; the experiment produced similar results for the KL cost, as shown in Figure 2.

Figure 2: The figure compares the original, the initial (start-up) and the final (estimate) factors for Zα = A, B, C, D, E. Only the first column, i.e. Zα(1:10, 1), is plotted. Note that CP factorisation is unique up to permutation and scaling [5] while MF factorisation is not unique, but when coupled with CP it recovers the original data as shown in the figure. For visualisation, to find the correct permutation, for each Zα the matching permutation between the original and the estimate is found by solving an orthogonal Procrustes problem [18, pp. 601].

4.1 Audio Experiments

In this section, we illustrate a real-data application of our approach, where we reconstruct missing parts of an audio spectrogram X(f, t) that represents the STFT coefficient magnitude at frequency bin f and time frame t of a piano piece; see the top left panel of Fig. 3. This is a difficult matrix completion problem: as entire time frames (columns of X) are missing, low-rank reconstruction techniques are likely to be ineffective. Yet such missing data patterns arise often in practice, e.g., when packets are dropped during digital communication. We will develop here a novel approach, expressed as a coupled TF model. In particular, the reconstruction will be aided by an approximate musical score, not necessarily belonging to the played piece, and by spectra of isolated piano sounds.
Pioneering work of [19] has demonstrated that, when an audio spectrogram of music is decomposed using NMF as X1(f, t) ≈ X̂(f, t) = Σ_i D(f, i) E(i, t), the computed factors D and E tend to be semantically meaningful and correlate well with the intuitive notion of spectral templates (harmonic profiles of musical notes) and a musical score (reminiscent of a piano-roll representation such as a MIDI file). However, as time frames are modelled conditionally independently, it is impossible to reconstruct audio with this model when entire time frames are missing. In order to restore the missing parts of the audio, we form a model that can incorporate musical information about chord structures and how they evolve in time. To achieve this, we hierarchically decompose the excitation matrix E as a convolution of some basis matrices and their weights: E(i, t) = Σ_{k,τ} B(i, τ, k) C(k, t − τ). Here, the basis tensor B encapsulates both vertical and temporal information of the notes that are likely to be used in a musical piece; the musical piece to be reconstructed will share B, possibly played at different times or tempi as modelled by G. After replacing E with the decomposed version, we get the following model:

  X̂1(f, t) = Σ_{i,τ,k,d} D(f, i) B(i, τ, k) C(k, d) Z(d, t, τ)   (Test file)   (30)
  X̂2(i, n) = Σ_{τ,k,m} B(i, τ, k) G(k, m) Y(m, n, τ)   (MIDI file)   (31)
  X̂3(f, p) = Σ_i D(f, i) F(i, p) T(i, p)   (Merged training files)   (32)

Here we have introduced new dummy indices d and m, and new (fixed) factors Z(d, t, τ) = δ(d − t + τ) and Y(m, n, τ) = δ(m − n + τ), to express this model in our framework. In (32), while forming X3 we concatenate isolated recordings corresponding to different notes. Besides, T is a 0-1 matrix, where T(i, p) = 1 (0) if the note i is played (not played) during the time frame p, and F models the time-varying amplitudes of the training data.
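The convolutive decomposition of the excitation matrix is straightforward to sketch; the sizes below are made up, and C(k, s) is taken as zero for s < 0 (a causal-boundary assumption on our part, matching the delta-factor construction above).

```python
import numpy as np

rng = np.random.default_rng(8)
n_i, n_k, n_t, n_tau = 4, 3, 12, 5
B = rng.random((n_i, n_tau, n_k))     # basis tensor B(i, tau, k)
C = rng.random((n_k, n_t))            # weights C(k, t)

# E(i,t) = sum_{k,tau} B(i,tau,k) C(k, t - tau), with C(k, s) = 0 for s < 0
E = np.zeros((n_i, n_t))
for tau in range(n_tau):
    E[:, tau:] += B[:, tau, :] @ C[:, :n_t - tau]
```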
The R matrix for this model is

  R = [ 1 1 1 1 0 0 0 0
        0 1 0 0 1 1 0 0
        1 0 0 0 0 0 1 1 ]
  with X̂1 = Σ D¹B¹C¹Z¹G⁰Y⁰F⁰T⁰, X̂2 = Σ D⁰B¹C⁰Z⁰G¹Y¹F⁰T⁰, X̂3 = Σ D¹B⁰C⁰Z⁰G⁰Y⁰F¹T¹   (33)

Figure 3 illustrates the performance of the model, using the KL cost (W = X̂⁻¹) on a 30-second piano recording where 70% of the data is missing; we get about 5 dB SNR improvement, gracefully degrading from 10% to 80% missing data. The results are encouraging, as quite long portions of the audio are missing; see the bottom right panel of Fig. 3.

Figure 3: Top row, left to right: observed matrices X1, spectrum of the piano performance, darker colors imply higher magnitude (missing data (70%) are shown white); X2, a piano roll obtained from a musical score of the piece; X3, spectra of 88 isolated notes from a piano. Bottom row: reconstructed X1, the ground truth, and the SNR results with increasing missing data. Here, the initial SNR is computed by substituting 0 for the missing values.

5 Discussion

This paper establishes a link between GLMs and TFs and provides a general solution for the computation of arbitrary coupled TFs, using message passing primitives. The current treatment focused on ML estimation; as immediate future work, the probabilistic interpretation is to be extended to full Bayesian inference with appropriate priors and inference methods. A powerful aspect, which we have not been able to summarise here, is assigning different cost functions, i.e. distributions, to different observation tensors in a coupled factorisation model.
This requires only minor modifications to the update equations. We believe that, as a whole, the GCTF framework covers a broad range of models that can be useful in many different application areas beyond audio processing, such as network analysis, bioinformatics or collaborative filtering.

Acknowledgements: This work is funded by the TÜBİTAK grant number 110E292, Bayesian matrix and tensor factorisations (BAYTEN), and Boğaziçi University research fund BAP5723. Umut Şimşekli is also supported by a Ph.D. scholarship from TÜBİTAK. We would also like to thank Evrim Acar for the fruitful discussions.

References
[1] A. T. Cemgil, Bayesian inference for nonnegative matrix factorisation models, Computational Intelligence and Neuroscience 2009 (2009) 1-17.
[2] A. P. Singh, G. J. Gordon, A unified view of matrix factorization models, in: ECML PKDD'08, Part II, no. 5212, Springer, 2008, pp. 358-373.
[3] E. Acar, T. G. Kolda, D. M. Dunlavy, All-at-once optimization for coupled matrix and tensor factorizations, CoRR abs/1105.3422. arXiv:1105.3422.
[4] Q. Xu, E. W. Xiang, Q. Yang, Protein-protein interaction prediction via collective matrix factorization, in: Proc. of the IEEE International Conference on BIBM, 2010, pp. 62-67.
[5] T. G. Kolda, B. W. Bader, Tensor decompositions and applications, SIAM Review 51 (3) (2009) 455-500.
[6] Y. K. Yılmaz, A. T. Cemgil, Probabilistic latent tensor factorization, in: Proceedings of the 9th International Conference on Latent Variable Analysis and Signal Separation, LVA/ICA'10, Springer-Verlag, 2010, pp. 346-353.
[7] C. Fevotte, A. T. Cemgil, Nonnegative matrix factorisations as probabilistic inference in composite models, in: Proc. 17th EUSIPCO, 2009.
[8] Y. K. Yılmaz, A. T. Cemgil, Algorithms for probabilistic latent tensor factorization, Signal Processing (2011), doi:10.1016/j.sigpro.2011.09.033.
[9] C. E. McCulloch, S. R. Searle, Generalized, Linear, and Mixed Models, Wiley, 2001.
[10] P. McCullagh, J. A. Nelder, Generalized Linear Models, 2nd Edition, Chapman and Hall, 1989.
[11] R. Kaas, Compound Poisson distributions and GLM's, Tweedie's distribution, Tech. rep., Lecture, Royal Flemish Academy of Belgium for Science and the Arts, 2005.
[12] A. Cichocki, R. Zdunek, A. H. Phan, S. Amari, Nonnegative Matrix and Tensor Factorization, Wiley, 2009.
[13] J. R. Magnus, H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics, 3rd Edition, Wiley, 2007.
[14] M. Wainwright, M. I. Jordan, Graphical models, exponential families, and variational inference, Foundations and Trends in Machine Learning 1 (2008) 1-305.
[15] D. D. Lee, H. S. Seung, Algorithms for non-negative matrix factorization, in: NIPS, Vol. 13, 2001, pp. 556-562.
[16] M. Marcus, H. Minc, A Survey of Matrix Theory and Matrix Inequalities, Dover, 1992.
[17] R. Salakhutdinov, A. Mnih, Probabilistic matrix factorization, in: Advances in Neural Information Processing Systems, Vol. 20, 2008.
[18] G. H. Golub, C. F. Van Loan, Matrix Computations, 3rd Edition, Johns Hopkins UP, 1996.
[19] P. Smaragdis, J. C. Brown, Non-negative matrix factorization for polyphonic music transcription, in: WASPAA, 2003, pp. 177-180.
Prismatic Algorithm for Discrete D.C. Programming Problem

Yoshinobu Kawahara* and Takashi Washio
The Institute of Scientific and Industrial Research (ISIR), Osaka University
8-1 Mihogaoka, Ibaraki-shi, Osaka 567-0047, Japan
{kawahara,washio}@ar.sanken.osaka-u.ac.jp

Abstract. In this paper, we propose the first exact algorithm for minimizing the difference of two submodular functions (D.S.), i.e., the discrete version of the D.C. programming problem. The developed algorithm is a branch-and-bound-based algorithm which responds to the structure of this problem through the relationship between submodularity and convexity. The D.S. programming problem covers a broad range of applications in machine learning. In fact, it generalizes any set-function optimization. We empirically investigate the performance of our algorithm, and illustrate the difference between exact and approximate solutions respectively obtained by the proposed and existing algorithms in feature selection and discriminative structure learning.

1 Introduction

Combinatorial optimization techniques have been actively applied to many machine learning applications, where submodularity often plays an important role in developing algorithms [10, 16, 27, 14, 15, 19, 1]. In fact, many fundamental problems in machine learning can be formulated as submodular optimization. One of the important categories is the D.S. programming problem, i.e., the problem of minimizing the difference of two submodular functions. This is a natural formulation of many machine learning problems, such as learning graph matching [3], discriminative structure learning [21], feature selection [1] and energy minimization [24]. In this paper, we propose a prismatic algorithm for the D.S. programming problem, which is a branch-and-bound-based algorithm responding to the specific structure of this problem. To the best of our knowledge, this is the first exact algorithm for the D.S.
programming problem (although there exists an approximate algorithm for this problem [21]). As is well known, the branch-and-bound method is one of the most successful frameworks in mathematical programming and has been incorporated into commercial software such as CPLEX [13, 12]. We develop the algorithm based on the analogy with the D.C. programming problem, through the continuous relaxation of solution spaces and objective functions with the help of the Lovász extension [17, 11, 18]. The algorithm is implemented as an iterative calculation of binary-integer linear programs (BILP). Also, we discuss applications of the D.S. programming problem in machine learning, and empirically investigate the performance of our method and the difference between exact and approximate solutions through feature selection and discriminative structure-learning problems. The remainder of this paper is organized as follows. In Section 2, we give the formulation of the D.S. programming problem and then describe its applications in machine learning. In Section 3, we give an outline of the proposed algorithm for this problem. Then, in Section 4, we explain the details of its basic operations. Finally, we give several empirical examples using artificial and real-world datasets in Section 5, and conclude the paper in Section 6.

Preliminaries and Notation: A set function f is called submodular if f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ N, where N = {1, ..., n} [5, 7]. Throughout this paper, we denote by f̂ the Lovász extension of f, i.e., a continuous function f̂ : Rⁿ → R defined by

  f̂(p) = Σ_{j=1}^{m−1} (p̂_j − p̂_{j+1}) f(U_j) + p̂_m f(U_m),

where U_j = {i ∈ N : p_i ≥ p̂_j} and p̂_1 > ... > p̂_m are the m distinct elements of p [17, 18]. Also, we denote by I_A ∈ {0, 1}ⁿ the characteristic vector of a subset A ⊆ N, i.e., I_A = Σ_{i∈A} e_i, where e_i is the i-th unit vector.

* http://www.ar.sanken.osaka-u.ac.jp/~kawahara/
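The Lovász extension is easy to implement directly from this definition (the version below assumes nonnegative entries in p, which suffices for points of the cube used by the algorithm; the toy submodular function f is our own choice):

```python
def lovasz_ext(f, p):
    """f_hat(p) = sum_{j<m} (p_j - p_{j+1}) f(U_j) + p_m f(U_m), over the
    distinct values p_1 > ... > p_m of p, with U_j = {i : p_i >= p_j}."""
    vals = sorted(set(p), reverse=True)
    total = 0.0
    for j, v in enumerate(vals):
        U = frozenset(i for i, pi in enumerate(p) if pi >= v)
        nxt = vals[j + 1] if j + 1 < len(vals) else 0.0
        total += (v - nxt) * f(U)
    return total

f = lambda A: len(A) ** 0.5   # a simple submodular set function
# On a characteristic vector, the extension agrees with f:
# lovasz_ext(f, I_A) == f(A)
```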
Note that, through the definition of the characteristic vector, any subset A ⊆ N has a one-to-one correspondence with a vertex of the n-dimensional cube D := {x ∈ Rⁿ : 0 ≤ x_i ≤ 1 (i = 1, ..., n)}. And we denote by (A, t)(T) all combinations of a real value plus subset whose corresponding vectors (I_A, t) are inside or on the surface of a polytope T ⊂ R^{n+1}.

2 The D.S. Programming Problem and its Applications

Let f and g be submodular functions. In this paper, we address an exact algorithm to solve the D.S. programming problem, i.e., the problem of minimizing the difference of two submodular functions:

  min_{A⊆N} f(A) − g(A).   (1)

As is well known, any real-valued function whose second partial derivatives are continuous everywhere can be represented as the difference of two convex functions [12]. Likewise, Problem (1) generalizes any set-function optimization problem. Problem (1) covers a broad range of applications in machine learning [21, 24, 3, 1]. Here, we give a few examples.

Feature selection using structured-sparsity-inducing norms: Sparse methods for supervised learning, where we aim at finding good predictors from as few variables as possible, have attracted much interest from the machine learning community. This combinatorial problem is known to be a submodular maximization problem with a cardinality constraint for commonly used measures such as least-squared errors [4, 14]. And, as is well known, if we replace the cardinality function with a convex envelope such as the l1-norm, this can be turned into a convex optimization problem. Recently, it has been reported that submodular functions in place of the cardinality can give a wider family of polyhedral norms and may incorporate prior knowledge or structural constraints in sparse methods [1]. Then, the objective (that is supposed to be minimized) becomes the sum of a loss function (often supermodular) and submodular regularization terms.
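The defining inequality of submodularity can be verified by brute force on small ground sets; this is exponential in n and intended purely as an illustration of the definition:

```python
from itertools import chain, combinations

def is_submodular(f, n, tol=1e-9):
    """Check f(A) + f(B) >= f(A | B) + f(A & B) for all pairs of subsets of
    N = {0, ..., n-1}. Exponential in n -- for small illustrations only."""
    subsets = [frozenset(s) for s in
               chain.from_iterable(combinations(range(n), k) for k in range(n + 1))]
    return all(f(A) + f(B) + tol >= f(A | B) + f(A & B)
               for A in subsets for B in subsets)
```

For example, a concave function of the cardinality (such as the square root) passes this check, while a convex one (such as the square) fails it.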
Discriminative structure learning: It has been reported that a discriminatively structured Bayesian classifier often outperforms a generatively structured one [21, 22]. One commonly used metric for discriminative structure learning is EAR (explaining away residual) [2]. EAR is defined as the difference between the conditional mutual information of variables given the class C and the unconditional one, i.e., I(X_i; X_j | C) − I(X_i; X_j). In structure learning, we repeatedly try to find a subset of variables that minimizes this kind of measure. Since the (symmetric) mutual information is a submodular function, this problem obviously leads to the D.S. programming problem [21].

Energy minimization in computer vision: In computer vision, an image is often modeled with a Markov random field, where each node represents a pixel. Let G = (V, E) be the undirected graph, where a label x_s ∈ L is assigned to each node. Then, many tasks in computer vision can be naturally formulated in terms of energy minimization, where the energy function has the form E(x) = Σ_{p∈V} θ_p(x_p) + Σ_{(p,q)∈E} θ_{pq}(x_p, x_q), with θ_p and θ_{pq} being univariate and pairwise potentials. For a pairwise potential on binarized energies (i.e., L = {0, 1}), submodularity is defined as θ_{pq}(1, 1) + θ_{pq}(0, 0) ≤ θ_{pq}(1, 0) + θ_{pq}(0, 1) (see, for example, [26]). Based on this, any energy function in computer vision can be written with a submodular function E1(x) and a supermodular function E2(x) as E(x) = E1(x) + E2(x) (e.g., [24]). Or, in the case of binarized energy, even if such an explicit decomposition is not known, a (non-unique) decomposition into submodular and supermodular functions can always be given [25].

3 Prismatic Algorithm for the D.S. Programming Problem

By introducing an additional variable t (∈ R), Problem (1) can be converted into an equivalent problem with a supermodular objective function and a submodular feasible set, i.e.,

  min_{A⊆N, t∈R} t − g(A)   s.t. f(A) − t ≤ 0.
(2)

Figure 1: Illustration of the prismatic algorithm.

Obviously, if (A*, t*) is an optimal solution of Problem (2), then A* is an optimal solution of Problem (1) and t* = f(A*). The proposed algorithm is a realization of the branch-and-bound scheme which responds to this specific structure of the problem. To this end, we first define a prism T(S) ⊂ R^{n+1} by T = {(x, t) ∈ Rⁿ × R : x ∈ S}, where S is an n-simplex. S is obtained from the n-dimensional cube D at the initial iteration (as described in Section 4.1), or by the subdivision operation described in the later part of this section (the details are given in Section 4.2). The prism T has n + 1 edges that are vertical lines (i.e., lines parallel to the t-axis) passing through the n + 1 vertices of S, respectively [11]. Our algorithm is an iterative procedure which mainly consists of two parts, branching and bounding, as in other branch-and-bound frameworks [13]. In branching, subproblems are constructed by dividing the feasible region of a parent problem. In bounding, we judge whether an optimal solution exists in the region of a subproblem and its descendants, by calculating an upper bound of the subproblem and comparing it with a lower bound of the original problem. Some more details of branching and bounding are as follows.

Branching: The branching operation in our method is carried out using a property of the simplex. That is, since in an n-simplex no r + 1 vertices lie on an (r − 1)-dimensional hyperplane for r ≤ n, any n-simplex can be divided as S = ∪_{i=1}^p S_i, where p ≥ 2 and the S_i are n-simplices such that each pair of simplices S_i, S_j (i ≠ j) intersects at most in common boundary points (the way of constructing such a partition is explained in Section 4.2). Then T = ∪_{i=1}^p T_i, where T_i = {(x, t) ∈ Rⁿ × R : x ∈ S_i}, is a natural prismatic partition of T induced by the above simplicial partition.
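The paper's own subdivision rule is given in its Section 4.2 (not reproduced in this excerpt); longest-edge bisection, sketched below, is one standard way to realise a partition S = S1 ∪ S2 with the required boundary-only intersection:

```python
import numpy as np
from itertools import combinations

def bisect_simplex(S):
    """Split an n-simplex (rows of S are its n+1 vertices) into two
    sub-simplices meeting only in a common facet, via the midpoint of a
    longest edge (longest-edge bisection; one common subdivision choice)."""
    i, j = max(combinations(range(len(S)), 2),
               key=lambda e: np.linalg.norm(S[e[0]] - S[e[1]]))
    mid = (S[i] + S[j]) / 2.0
    S1, S2 = S.copy(), S.copy()
    S1[j] = mid
    S2[i] = mid
    return S1, S2

S = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # a 2-simplex
S1, S2 = bisect_simplex(S)
```

The two children tile the parent exactly, so the prisms above them tile the parent prism as required by the branching step.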
Bounding: For the bounding operation on S_k (resp. T_k) at iteration k, we consider a polyhedral convex set P_k such that P_k ⊃ D̃, where D̃ = {(x, t) ∈ R^n × R : x ∈ D, f̂(x) ≤ t} is the region corresponding to the feasible set of Problem (2). At the first iteration, such a set is obtained as P_0 = {(x, t) ∈ R^n × R : x ∈ S_0, t ≥ t̃}, where t̃ is a real number satisfying t̃ ≤ min{f(A) : A ⊆ N}. Here, t̃ can be determined with an existing submodular minimization solver [23, 8]. At later iterations, refined sets P_k with P_0 ⊃ P_1 ⊃ · · · ⊃ D̃ are constructed as described in Section 4.4. As described in Section 4.3, a lower bound β(T_k) of t − g(A) on the current prism T_k can then be calculated through a binary-integer linear program (BILP) (or a linear program (LP)) using the set P_k obtained above. Let α be the lowest objective value (i.e., an upper bound of t − g(A) on D̃) found so far. Then, if β(T_k) ≥ α, we can conclude that T_k contains no feasible solution with objective value better than α, and we can remove T_k without loss of optimality.

The pseudo-code of the proposed algorithm is given in Algorithm 1. In the following sections, we explain the details of the operations involved in this algorithm.

4 Basic Operations

The procedure described in Section 3 involves the following basic operations:
1. Construction of the first prism: a prism is first constructed from the hypercube;
2. Subdivision process: a prism is divided into a finite number of sub-prisms at each iteration;
3. Bound estimation: for each prism generated throughout the algorithm, a lower bound of the objective function t − g(A) over the part of the feasible set contained in the prism is computed;
4. Construction of cutting planes: throughout the algorithm, a sequence of polyhedral convex sets P_0, P_1, · · · is constructed such that P_0 ⊃ P_1 ⊃ · · · ⊃ D̃, where each set P_j is generated by a cutting plane that cuts off a part of P_{j−1}; and
5.
Deletion of non-optimal prisms: at each iteration, we try to delete prisms that contain no feasible solution better than the best one obtained so far.

Algorithm 1: Pseudo-code of the prismatic algorithm for the D.S. programming problem.
 1: Construct a simplex S0 ⊃ D, its corresponding prism T0 and a polyhedral convex set P0 ⊃ D̃.
 2: Let α0 be the best objective function value known in advance. Then, solve the BILP (5) corresponding to α0 and T0, and let β0 = β(T0, P0, α0) and (Ā0, t̄0) be the point satisfying β0 = t̄0 − g(Ā0).
 3: Set R0 ← T0.
 4: while Rk ≠ ∅ do
 5:   Select a prism T∗k ∈ Rk satisfying βk = β(T∗k), with (v̄k, t̄k) ∈ T∗k.
 6:   if (v̄k, t̄k) ∈ D̃ then
 7:     Set Pk+1 = Pk.
 8:   else
 9:     Construct lk(x, t) according to (8), and set Pk+1 = {(x, t) ∈ Pk : lk(x, t) ≤ 0}.
10:   Subdivide T∗k = T(S∗k) into a finite number of subprisms Tk,j (j ∈ Jk) (cf. Section 4.2).
11:   For each j ∈ Jk, solve the BILP (5) with respect to Tk,j, Pk+1 and αk.
12:   Delete all Tk,j (j ∈ Jk) satisfying (DR1) or (DR2). Let Mk denote the collection of remaining prisms Tk,j (j ∈ Jk), and for each T ∈ Mk set β(T) = max{β(T∗k), β(T, Pk+1, αk)}.
13:   Let Fk be the set of new feasible points detected while solving the BILPs in Step 11, and set αk+1 = min{αk, min{t − g(A) : (A, t) ∈ Fk}}.
14:   Delete all T ∈ Mk satisfying β(T) ≥ αk+1, and update Rk+1 from Rk by removing T∗k and adding the remaining prisms in Mk.
15:   Set βk+1 ← min{β(T) : T ∈ Mk} and k ← k + 1.

4.1 Construction of the first prism

The initial simplex S0 ⊃ D (which yields the initial prism T0 ⊃ D̃) can be constructed as follows. Let v and Av be a vertex of D and its corresponding subset of N, respectively, i.e., v = ∑_{i∈Av} e_i. Then, the initial simplex S0 ⊃ D can be constructed as S0 = {x ∈ R^n : x_i ≤ 1 (i ∈ Av), x_i ≥ 0 (i ∈ N \ Av), aᵀx ≤ γ}, where a = ∑_{i∈N\Av} e_i − ∑_{i∈Av} e_i and γ = |N \ Av|. The n + 1 vertices of S0 are v and the n points where the hyperplane {x ∈ R^n : aᵀx = γ} intersects the edges of the cone {x ∈ R^n : x_i ≤ 1 (i ∈ Av), x_i ≥ 0 (i ∈ N \ Av)}. Note that this is just one option; any n-simplex S ⊃ D can be used.
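The construction of Section 4.1, together with the bisection-based subdivision described next in Section 4.2, can be sketched concretely. In a minimal sketch (our own encoding), the vertex formulas follow directly from aᵀx = γ: along the cone edge through v in direction −e_i (for i ∈ Av) or +e_i (for i ∉ Av), the hyperplane is reached after a step of length n.

```python
def initial_simplex(n, Av):
    """Vertices of S0 containing [0,1]^n, built from the cube vertex
    v = 1_{Av} (Section 4.1)."""
    v = [1.0 if i in Av else 0.0 for i in range(n)]
    vertices = [v]
    for i in range(n):
        w = list(v)
        # Step of length n along the cone edge to reach a^T x = gamma.
        w[i] += -float(n) if i in Av else float(n)
        vertices.append(w)
    return vertices

def bisect_simplex(vertices):
    """Split a simplex at the midpoint of its longest edge (Section 4.2)."""
    def dist2(u, w):
        return sum((a - b) ** 2 for a, b in zip(u, w))
    m = len(vertices)
    # Pair of vertices at maximum distance.
    i1, i2 = max(((i, j) for i in range(m) for j in range(i + 1, m)),
                 key=lambda p: dist2(vertices[p[0]], vertices[p[1]]))
    r = [(a + b) / 2.0 for a, b in zip(vertices[i1], vertices[i2])]
    s1 = [r if k == i1 else u for k, u in enumerate(vertices)]
    s2 = [r if k == i2 else u for k, u in enumerate(vertices)]
    return s1, s2
```

For n = 2 and Av = {0}, the simplex has vertices (1, 0), (−1, 0) and (1, 2), and it indeed contains the unit square.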
4.2 Subdivision of a prism

Let Sk and Tk be the simplex and prism at the k-th iteration of the algorithm, respectively. We denote Sk by Sk = [v_k^1, . . . , v_k^{n+1}] := conv{v_k^1, . . . , v_k^{n+1}}, the convex hull of its vertices v_k^1, . . . , v_k^{n+1}. Then, any r ∈ Sk can be represented as r = ∑_{i=1}^{n+1} λ_i v_k^i, with ∑_{i=1}^{n+1} λ_i = 1 and λ_i ≥ 0 (i = 1, . . . , n + 1). Suppose that r ≠ v_k^i (i = 1, . . . , n + 1). For each i satisfying λ_i > 0, let S_k^i be the subsimplex of Sk defined by
S_k^i = [v_k^1, . . . , v_k^{i−1}, r, v_k^{i+1}, . . . , v_k^{n+1}]. (3)
Then, the collection {S_k^i : λ_i > 0} defines a partition of Sk, i.e., we have ∪_{λ_i>0} S_k^i = Sk and int S_k^i ∩ int S_k^j = ∅ for i ≠ j [12]. In a natural way, the prisms T(S_k^i) generated by the simplices S_k^i defined in Eq. (3) form a partition of Tk. This subdivision process of prisms is exhaustive, i.e., for every nested (decreasing) sequence of prisms {Tq} generated by this process, we have ∩_{q=0}^∞ Tq = τ, where τ is a line perpendicular to R^n (a vertical line) [11]. Although several subdivision processes can be applied, we use the classical bisection, i.e., each simplex is divided into two subsimplices by choosing r in Eq. (3) as r = (v_k^{i1} + v_k^{i2})/2, where ∥v_k^{i1} − v_k^{i2}∥ = max{∥v_k^i − v_k^j∥ : i, j ∈ {1, . . . , n + 1}, i ≠ j} (see Figure 1).

4.3 Lower bounds

Again, let Sk and Tk be the simplex and prism at the k-th iteration of the algorithm, respectively, and let α be an upper bound of t − g(A), i.e., the smallest value of t − g(A) attained at a feasible point found so far by the algorithm. Moreover, let Pk be a polyhedral convex set which contains D̃ and is represented as
Pk = {(x, t) ∈ R^n × R : A_k x + a_k t ≤ b_k}, (4)
where A_k is a real (m × n)-matrix and a_k, b_k ∈ R^m.¹ Now, a lower bound β(Tk, Pk, α) of t − g(A) over Tk ∩ D̃ can be computed as follows. First, let v_k^i (i = 1, . . . , n + 1) denote the vertices of Sk, and define I(Sk) = {i ∈ {1, . . .
, n + 1} : v_k^i ∈ B^n} and
μ = min{α, min{f̂(v_k^i) − ĝ(v_k^i) : i ∈ I(Sk)}} if I(Sk) ≠ ∅, and μ = α if I(Sk) = ∅.
For each i = 1, . . . , n + 1, consider the point (v_k^i, t_k^i) where the edge of Tk passing through v_k^i intersects the level set {(x, t) : t − ĝ(x) = μ}, i.e., t_k^i = ĝ(v_k^i) + μ (i = 1, . . . , n + 1). Then, let us denote the uniquely defined hyperplane through the points (v_k^i, t_k^i) by H = {(x, t) ∈ R^n × R : pᵀx − t = γ}, where p ∈ R^n and γ ∈ R. Consider the upper and lower halfspaces generated by H, i.e., H+ = {(x, t) ∈ R^n × R : pᵀx − t ≤ γ} and H− = {(x, t) ∈ R^n × R : pᵀx − t ≥ γ}. If Tk ∩ D̃ ⊆ H+, then we see from the supermodularity of g(A) (the concavity of ĝ(x)) that
min{t − g(A) : (A, t) ∈ (A, t)(Tk ∩ D̃)} ≥ min{t − g(A) : (A, t) ∈ (A, t)(Tk ∩ H+)} ≥ min{t − ĝ(x) : (x, t) ∈ Tk ∩ H+} = t_k^i − ĝ(v_k^i) (for every i = 1, . . . , n + 1) = μ.
Otherwise, we shift the hyperplane H downward (with respect to t) until it reaches a point z = (x∗, t∗) ∈ Tk ∩ Pk ∩ H− with x∗ ∈ B^n ((x∗, t∗) is a point with the largest distance to H, and the corresponding pair (A, t) is in (A, t)(Tk ∩ Pk ∩ H−) since x∗ ∈ B^n). Let H̄ denote the resulting supporting hyperplane, and denote by H̄+ the upper halfspace generated by H̄. Moreover, for each i = 1, . . . , n + 1, let z^i = (v_k^i, t̄_k^i) be the point where the edge of Tk passing through v_k^i intersects H̄. Then, it follows that (A, t)(Tk ∩ D̃) ⊂ (A, t)(Tk ∩ Pk) ⊂ (A, t)(Tk ∩ H̄+), and hence
min{t − g(A) : (A, t) ∈ (A, t)(Tk ∩ D̃)} ≥ min{t − g(A) : (A, t) ∈ (A, t)(Tk ∩ H̄+)} = min{t̄_k^i − ĝ(v_k^i) : i = 1, . . . , n + 1}.
Now, the above considerations lead to the following BILP in (λ, x, t):
max_{λ,x,t} ∑_{i=1}^{n+1} t_k^i λ_i − t
s.t. A_k x + a_k t ≤ b_k, x = ∑_{i=1}^{n+1} λ_i v_k^i, x ∈ B^n, ∑_{i=1}^{n+1} λ_i = 1, λ_i ≥ 0 (i = 1, . . . , n + 1), (5)
where A_k, a_k and b_k are given in Eq. (4).
Proposition 1. (a) If the system (5) has no solution, then the intersection (A, t)(Tk ∩ D̃) is empty. (b) Otherwise, let (λ∗, x∗, t∗) be an optimal solution of the BILP (5) and c∗ = ∑_{i=1}^{n+1} t_k^i λ∗_i − t∗ its optimal value, respectively.
Then, the following statements hold: (b1) If c∗ ≤ 0, then (A, t)(Tk ∩ D̃) ⊂ (A, t)(H+). (b2) If c∗ > 0, then z = (∑_{i=1}^{n+1} λ∗_i v_k^i, t∗), z^i = (v_k^i, t̄_k^i) = (v_k^i, t_k^i − c∗) and t̄_k^i − ĝ(v_k^i) = μ − c∗ (i = 1, . . . , n + 1).

Proof. First, we prove part (a). Since every point in Sk is uniquely representable as x = ∑_{i=1}^{n+1} λ_i v_k^i, we see from Eq. (4) that the set (A, t)(Tk ∩ Pk) coincides with the feasible set of problem (5). Therefore, if the system (5) has no solution, then (A, t)(Tk ∩ Pk) = ∅, and hence (A, t)(Tk ∩ D̃) = ∅ (because D̃ ⊂ Pk). Next, we move to part (b). Since the equation of H is pᵀx − t = γ, determining the hyperplane H̄ and the point z amounts to solving the binary-integer linear programming problem:
max pᵀx − t s.t. (x, t) ∈ Tk ∩ Pk, x ∈ B^n. (6)
Here, we note that the objective of the above can be represented as pᵀx − t = pᵀ(∑_{i=1}^{n+1} λ_i v_k^i) − t = ∑_{i=1}^{n+1} λ_i pᵀv_k^i − t. On the other hand, since (v_k^i, t_k^i) ∈ H, we have pᵀv_k^i − t_k^i = γ (i = 1, . . . , n + 1), and hence pᵀx − t = ∑_{i=1}^{n+1} λ_i(γ + t_k^i) − t = ∑_{i=1}^{n+1} t_k^i λ_i − t + γ. Thus, the two BILPs (5) and (6) are equivalent. If γ∗ denotes the optimal objective function value of Eq. (6), then γ∗ = c∗ + γ. If γ∗ ≤ γ, then it follows from the definition of H+ that H̄ is obtained by a parallel shift of H in the direction of H+. Therefore, c∗ ≤ 0 implies (A, t)(Tk ∩ Pk) ⊂ (A, t)(H+), and hence (A, t)(Tk ∩ D̃) ⊂ (A, t)(H+). Since H̄ = {(x, t) ∈ R^n × R : pᵀx − t = γ∗} and H = {(x, t) ∈ R^n × R : pᵀx − t = γ}, we see that for each intersection point (v_k^i, t̄_k^i) (resp. (v_k^i, t_k^i)) of the edge of Tk passing through v_k^i with H̄ (resp. H), we have pᵀv_k^i − t̄_k^i = γ∗ and pᵀv_k^i − t_k^i = γ, respectively. This implies that t̄_k^i = t_k^i + γ − γ∗ = t_k^i − c∗, and (using t_k^i = ĝ(v_k^i) + μ) that t̄_k^i = ĝ(v_k^i) + μ − c∗.

¹ Note that Pk is updated at each iteration, and does not depend on Sk, as described in Section 4.4.
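The hyperplane H = {(x, t) : pᵀx − t = γ} through the n + 1 points (v_k^i, t_k^i) used above is determined by a linear system in the unknowns (p, γ). A minimal sketch (plain Gaussian elimination with partial pivoting; the encoding is ours):

```python
def hyperplane_through(points):
    """Solve p^T v_i - gamma = t_i for (p, gamma), given n+1 pairs (v_i, t_i).

    Returns the hyperplane H = {(x, t) : p^T x - t = gamma} through all
    input points.
    """
    n = len(points[0][0])
    # Row i encodes v_i . p + (-1) * gamma = t_i.
    rows = [list(map(float, v)) + [-1.0, float(t)] for v, t in points]
    m = n + 1  # unknowns: p_1..p_n and gamma
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(rows[r][col]))
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(m):
            if r != col and rows[r][col] != 0.0:
                fac = rows[r][col] / rows[col][col]
                rows[r] = [a - fac * b for a, b in zip(rows[r], rows[col])]
    sol = [rows[i][m] / rows[i][i] for i in range(m)]
    return sol[:n], sol[n]  # (p, gamma)
```

For the points ((0,0), 0), ((1,0), 1), ((0,1), 2) this yields p = (1, 2) and γ = 0, and pᵀv_i − t_i = γ indeed holds for every input point.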
From the above, we see that, in case (b1), μ constitutes a lower bound of t − g(A), whereas in case (b2) such a lower bound is given by min{t̄_k^i − ĝ(v_k^i) : i = 1, . . . , n + 1}. Thus, Proposition 1 provides the lower bound
β(Tk, Pk, α) = +∞ if the BILP (5) has no feasible point; μ if c∗ ≤ 0; and μ − c∗ if c∗ > 0. (7)
As stated in Section 4.5, Tk can be deleted from further consideration when β(Tk, Pk, α) equals +∞ or μ.

4.4 Outer approximation

The polyhedral convex set Pk ⊃ D̃ used in the preceding section is updated at each iteration, i.e., a sequence P0, P1, · · · is constructed such that P0 ⊃ P1 ⊃ · · · ⊃ D̃. The update from Pk to Pk+1 (k = 0, 1, . . .) is done in a way that is standard for pure outer approximation methods [12]. That is, a certain linear inequality lk(x, t) ≤ 0 is added to the constraint set defining Pk, i.e., we set Pk+1 = Pk ∩ {(x, t) ∈ R^n × R : lk(x, t) ≤ 0}. The function lk(x, t) is constructed as follows. At iteration k, we have a lower bound βk of t − g(A) as defined in Eq. (7), and a point (v̄k, t̄k) satisfying t̄k − ĝ(v̄k) = βk. We update the outer approximation only in the case (v̄k, t̄k) ∉ D̃, and then set
lk(x, t) = s_k^T [(x, t) − zk] + (f̂(x∗_k) − t∗_k), (8)
where sk is a subgradient of f̂(x) − t at zk = (x∗_k, t∗_k). The subgradient can be calculated as, for example, stated in [9] (see also [7]).

Proposition 2. The hyperplane {(x, t) ∈ R^n × R : lk(x, t) = 0} strictly separates zk from D̃, i.e., lk(zk) > 0 and lk(x, t) ≤ 0 for all (x, t) ∈ D̃.

Proof. Since we assume that zk ∉ D̃ (with x∗_k ∈ D), we have f̂(x∗_k) > t∗_k, and hence lk(zk) = f̂(x∗_k) − t∗_k > 0. The latter inequality, lk(x, t) ≤ 0 on D̃, is an immediate consequence of the definition of a subgradient.

4.5 Deletion rules

At each iteration of the algorithm, we try to delete certain subprisms that contain no optimal solution. To this end, we adopt the following two deletion rules:
(DR1) Delete Tk if the BILP (5) has no feasible solution.
(DR2) Delete Tk if the optimal value c∗ of the BILP (5) satisfies c∗ ≤ 0.
The validity of these rules can be seen from Proposition 1 and the theory of D.C. programming [11]. That is, (DR1) is valid because, by Proposition 1, Tk ∩ D̃ = ∅ in this case, i.e., the prism Tk is infeasible; and (DR2) is valid because, by Proposition 1 and the definition of μ, the current best feasible solution cannot be improved within Tk.

[Figure 2: Training errors, test errors and computational time versus λ for the prismatic algorithm and the supermodular-submodular procedure.]

Table 1: Normalized mean-square prediction errors of training and test data by the prismatic algorithm, the supermodular-submodular procedure, the greedy algorithm and the lasso.
p    n    k    exact (PRISM)    SSP             greedy          lasso
120  150  5    1.8e-4 (192.6)   1.9e-4 (0.93)   1.8e-4 (0.45)   1.9e-4 (0.78)
120  150  10   2.0e-4 (262.7)   2.4e-4 (0.81)   2.3e-4 (0.56)   2.4e-4 (0.84)
120  150  20   7.3e-4 (339.2)   7.8e-4 (1.43)   8.3e-4 (0.59)   7.7e-4 (0.91)
120  150  40   1.7e-3 (467.6)   2.1e-3 (1.17)   2.9e-3 (0.63)   1.9e-3 (0.87)

5 Experimental Results

We first provide illustrations of the proposed algorithm and its solutions on toy examples from feature selection in Section 5.1, and then apply the algorithm to discriminative structure learning on data from the UCI repository in Section 5.2. The experiments below were run on a 2.8 GHz 64-bit workstation using Matlab and IBM ILOG CPLEX ver. 12.1.

5.1 Application to feature selection

We compared the performance and solutions of the proposed prismatic algorithm (PRISM), the supermodular-submodular procedure (SSP) [21], the greedy method and the lasso. To this end, we generated data as follows: given p, n and k, the design matrix X ∈ R^{n×p} is a matrix of i.i.d. Gaussian components.
A feature set J of cardinality k is chosen at random, and the weights on the selected features are sampled from a standard multivariate Gaussian distribution; the weights on the other features are 0. We then take y = Xw + n^{−1/2}∥Xw∥_2 ε, where w is the weight vector and ε is a standard Gaussian vector. In the experiment, we used the trace norm of the submatrix X_J corresponding to J, i.e., tr(X_J^T X_J)^{1/2}. Thus, our problem is min_{w∈R^p} (1/(2n))∥y − Xw∥_2^2 + λ · tr(X_J^T X_J)^{1/2}, where J is the support of w, or equivalently, min_{A⊆V} g(A) + λ · tr(X_A^T X_A)^{1/2}, where g(A) := min_{w_A∈R^{|A|}} ∥y − X_A w_A∥_2. Since the first term is a supermodular function [4] and the second is a submodular function, this problem is a D.S. programming problem.

First, the graphs in Figure 2 show the training errors, test errors and computational time versus λ for PRISM and SSP (for p = 120, n = 150 and k = 10). The values in the graphs are averaged over 20 datasets. For the test errors, we generated another 100 data points from the same model and applied the estimated model to them. For all methods, we tried several possible regularization parameters. From the graphs, we can see the following. First, the exact solutions (by PRISM) always outperform the approximate ones (by SSP). This shows the significance of optimizing the submodular norm exactly: better solutions (in the sense of prediction error) are obtained by optimizing the objective with the submodular norm more exactly. On the other hand, our algorithm took longer, especially when λ is smaller; this is presumably because a smaller λ yields a larger subset as the solution. Also, Table 1 shows normalized mean-square prediction errors by the prismatic algorithm, the supermodular-submodular procedure, the greedy method and the lasso for several values of k; the numbers in parentheses are computational times in seconds. The values are averaged over 10 datasets. These results also indicate that optimizing the objective with the submodular norm exactly is beneficial in terms of prediction error.

5.2 Application to discriminative structure learning

Our second application is discriminative structure learning using the UCI machine learning repository.² Here, we used CHESS, GERMAN, CENSUS-INCOME (KDD) and HEPATITIS, which have two classes. The Bayesian network topology used was the tree-augmented naive Bayes (TAN) [22]. We estimated TANs from data in both generative and discriminative manners. To this end, we used the procedure described in [20] with a submodular minimization solver (for the generative case), and the procedure of [21] combined with our prismatic algorithm (PRISM) or the supermodular-submodular procedure (SSP) (for the discriminative case). Once the structures had been estimated, the parameters were learned by the maximum likelihood method.

Table 2: Empirical accuracy of the classifiers in [%] with standard deviation, for the TANs discriminatively learned with PRISM or SSP and generatively learned with a submodular minimization solver.
Data           Attr.  Class  exact (PRISM)  approx. (SSP)  generative
Chess          36     2      96.6 (±0.69)   94.4 (±0.71)   92.3 (±0.79)
German         20     2      70.0 (±0.43)   69.9 (±0.43)   69.1 (±0.49)
Census-income  40     2      73.2 (±0.64)   71.2 (±0.74)   70.3 (±0.74)
Hepatitis      19     2      86.9 (±1.89)   84.3 (±2.31)   84.2 (±2.11)

Table 2 shows the empirical accuracy of the classifiers in [%] with standard deviation for these datasets. We used the train/test scheme described in [6, 22], and removed instances with missing values. The results suggest that optimizing the EAR measure more exactly can improve classification performance (which would mean that EAR is a meaningful measure for discriminative structure learning in the sense of classification).

6 Conclusions

In this paper, we proposed a prismatic algorithm for the D.S. programming problem (1), which is the first exact algorithm for this problem; it is a branch-and-bound method that responds to the structure of the problem.
We developed the algorithm based on an analogy with the D.C. programming problem, through the continuous relaxation of the solution space and objective functions with the help of the Lovász extension. We applied the proposed algorithm to several situations in feature selection and discriminative structure learning, using artificial and real-world datasets. The D.S. programming problem addressed in this paper covers a broad range of applications in machine learning. In future work, we will develop specializations of the presented framework to the specific structure of each problem. Also, it would be interesting to investigate extensions of our method to enumerate solutions, which could make the framework more useful in practice.

Acknowledgments

This research was supported in part by the JST PRESTO program (Synthesis of Knowledge for Information Oriented Society), the JST ERATO program (Minato Discrete Structure Manipulation System Project) and KAKENHI (22700147). We are also very grateful to the reviewers for their helpful comments.

² http://archive.ics.uci.edu/ml/index.html

References
[1] F. Bach. Structured sparsity-inducing norms through submodular functions. In Advances in Neural Information Processing Systems 23, pages 118–126, 2010.
[2] J. A. Bilmes. Dynamic Bayesian multinets. In Proc. of the 16th Conf. on Uncertainty in Artificial Intelligence (UAI'00), pages 38–45, 2000.
[3] T. S. Caetano, J. J. McAuley, L. Cheng, Q. V. Le, and A. J. Smola. Learning graph matching. IEEE Trans. on Pattern Analysis and Machine Intelligence, 31(6):1048–1058, 2009.
[4] A. Das and D. Kempe. Algorithms for subset selection in linear regression. In Proc. of the 40th Annual ACM Symp. on Theory of Computing (STOC'08), pages 45–54, 2008.
[5] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In R. Guy, H. Hanani, N. Sauer, and J. Schönheim, editors, Combinatorial Structures and Their Applications, pages 69–87, 1970.
[6] N. Friedman, D. Geiger, and M. Goldszmidt.
Bayesian network classifiers. Machine Learning, 29:131–163, 1997.
[7] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2nd edition, 2005.
[8] S. Fujishige, T. Hayashi, and S. Isotani. The minimum-norm-point algorithm applied to submodular function minimization and linear programming. Technical report, Research Institute for Mathematical Sciences, Kyoto University, 2006.
[9] E. Hazan and S. Kale. Beyond convexity: online submodular minimization. In Advances in Neural Information Processing Systems 22, pages 700–708, 2009.
[10] S. Hoi, R. Jin, J. Zhu, and M. Lyu. Batch mode active learning and its application to medical image classification. In Proc. of the 23rd Int'l Conf. on Machine Learning (ICML'06), pages 417–424, 2006.
[11] R. Horst, T. Q. Phong, Ng. V. Thoai, and J. de Vries. On solving a D.C. programming problem by a sequence of linear programs. Journal of Global Optimization, 1:183–203, 1991.
[12] R. Horst and H. Tuy. Global Optimization (Deterministic Approaches). Springer, 3rd edition, 1996.
[13] T. Ibaraki. Enumerative approaches to combinatorial optimization. In J. C. Baltzer and A. G. Basel, editors, Annals of Operations Research, volumes 10 and 11, 1987.
[14] Y. Kawahara, K. Nagano, K. Tsuda, and J. A. Bilmes. Submodularity cuts and applications. In Advances in Neural Information Processing Systems 22, pages 916–924. MIT Press, 2009.
[15] A. Krause and V. Cevher. Submodular dictionary selection for sparse representation. In Proc. of the 27th Int'l Conf. on Machine Learning (ICML'10), pages 567–574. Omnipress, 2010.
[16] A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta. Robust submodular observation selection. Journal of Machine Learning Research, 9:2761–2801, 2008.
[17] L. Lovász. Submodular functions and convexity. In A. Bachem, M. Grötschel, and B. Korte, editors, Mathematical Programming – The State of the Art, pages 235–257, 1983.
[18] K. Murota. Discrete Convex Analysis. Monographs on Discrete Math and Applications. SIAM, 2003.
[19] K. Nagano, Y.
Kawahara, and S. Iwata. Minimum average cost clustering. In Advances in Neural Information Processing Systems 23, pages 1759–1767, 2010.
[20] M. Narasimhan and J. A. Bilmes. PAC-learning bounded tree-width graphical models. In Proc. of the 20th Ann. Conf. on Uncertainty in Artificial Intelligence (UAI'04), pages 410–417, 2004.
[21] M. Narasimhan and J. A. Bilmes. A submodular-supermodular procedure with applications to discriminative structure learning. In Proc. of the 21st Ann. Conf. on Uncertainty in Artificial Intelligence (UAI'05), pages 404–412, 2005.
[22] F. Pernkopf and J. A. Bilmes. Discriminative versus generative parameter and structure learning of Bayesian network classifiers. In Proc. of the 22nd Int'l Conf. on Machine Learning (ICML'05), pages 657–664, 2005.
[23] M. Queyranne. Minimizing symmetric submodular functions. Math. Prog., 82(1):3–12, 1998.
[24] C. Rother, T. Minka, A. Blake, and V. Kolmogorov. Cosegmentation of image pairs by histogram matching – incorporating a global constraint into MRFs. In Proc. of the 2006 IEEE Comp. Soc. Conf. on Computer Vision and Pattern Recognition (CVPR'06), pages 993–1000, 2006.
[25] A. Shekhovtsov. Supermodular decomposition of structural labeling problem. Control Systems and Computers, 20(1):39–48, 2006.
[26] A. Shekhovtsov, V. Kolmogorov, P. Kohli, V. Hlaváč, C. Rother, and P. Torr. LP-relaxation of binarized energy minimization. Technical Report CTU-CMP-2007-27, Czech Technical University, 2007.
[27] M. Thoma, H. Cheng, A. Gretton, J. Han, H.-P. Kriegel, A. J. Smola, L. Song, P. S. Yu, X. Yan, and K. Borgwardt. Near-optimal supervised feature selection among frequent subgraphs. In Proc. of the 2009 SIAM Conf. on Data Mining (SDM'09), pages 1076–1087, 2009.
On the Completeness of First-Order Knowledge Compilation for Lifted Probabilistic Inference Guy Van den Broeck Department of Computer Science, Katholieke Universiteit Leuven Celestijnenlaan 200A, B-3001 Heverlee, Belgium guy.vandenbroeck@cs.kuleuven.be Abstract Probabilistic logics are receiving a lot of attention today because of their expressive power for knowledge representation and learning. However, this expressivity is detrimental to the tractability of inference, when done at the propositional level. To solve this problem, various lifted inference algorithms have been proposed that reason at the first-order level, about groups of objects as a whole. Despite the existence of various lifted inference approaches, there are currently no completeness results about these algorithms. The key contribution of this paper is that we introduce a formal definition of lifted inference that allows us to reason about the completeness of lifted inference algorithms relative to a particular class of probabilistic models. We then show how to obtain a completeness result using a first-order knowledge compilation approach for theories of formulae containing up to two logical variables. 1 Introduction and related work Probabilistic logic models build on first-order logic to capture relational structure and on graphical models to represent and reason about uncertainty [1, 2]. Due to their expressivity, these models can concisely represent large problems with many interacting random variables. While the semantics of these logics is often defined through grounding the models [3], performing inference at the propositional level is – as for first-order logic – inefficient. This has motivated the quest for lifted inference methods that exploit the structure of probabilistic logic models for efficient inference, by reasoning about groups of objects as a whole and avoiding repeated computations. 
The first approaches to exact lifted inference have upgraded the variable elimination algorithm to the first-order level [4, 5, 6]. More recent work is based on methods from logical inference [7, 8, 9, 10], such as knowledge compilation. While these approaches often yield dramatic improvements in runtime over propositional inference methods on specific problems, it is still largely unclear for which classes of models these lifted inference operators will be useful and for which ones they will eventually have to resort to propositional inference. One notable exception in this regard is lifted belief propagation [11], which performs exact lifted inference on any model whose factor graph representation is a tree.

A first contribution of this paper is that we introduce a notion of domain lifted inference, which formally defines what lifting means, and which can be used to characterize the classes of probabilistic models to which lifted inference applies. Domain lifted inference essentially requires that probabilistic inference runs in polynomial time in the domain size of the logical variables appearing in the model. As a second contribution we show that the class of models expressed as 2-WFOMC formulae (weighted first-order model counting with up to 2 logical variables per formula) can be domain lifted using an extended first-order knowledge compilation approach [10]. The resulting approach allows for lifted inference even in the presence of (anti-) symmetric or total relations in a theory. These are extremely common and useful concepts that cannot be lifted by any of the existing first-order knowledge compilation inference rules.

2 Background

We will use standard concepts of function-free first-order logic (FOL). An atom p(t1, . . . , tn) consists of a predicate p/n of arity n followed by n arguments, which are either constants or logical variables. An atom is ground if it does not contain any variables. A literal is an atom a or its negation ¬a.
A clause is a disjunction l1 ∨... ∨lk of literals. If k = 1, it is a unit clause. An expression is an atom, literal or clause. The pred(a) function maps an atom to its predicate and the vars(e) function maps an expression to its logical variables. A theory in conjunctive normal form (CNF) is a conjunction of clauses. We often represent theories by their set of clauses and clauses by their set of literals. Furthermore, we will assume that all logical variables are universally quantified. In addition, we associate a set of constraints with each clause or atom, either of the form X ̸= t, where X is a logical variable and t is a constant or variable, or of the form X ∈D, where D is a domain, or the negation of these constraints. These define a finite domain for each logical variable. Abusing notation, we will use constraints of the form X = t to denote a substitution of X by t. The function atom(e) maps an expression e to its atoms, now associating the constraints on e with each atom individually. To add the constraint c to an expression e, we use the notation e ∧c. Two atoms unify if there is a substitution which makes them identical and if the conjunction of the constraints on both atoms with the substitution is satisfiable. Two expressions e1 and e2 are independent, written e1 ⊥⊥e2, if no atom a1 ∈atom(e1) unifies with an atom a2 ∈atom(e2). We adopt the Weighted First-Order Model Counting (WFOMC) [10] formalism to represent probabilistic logic models, building on the notion of a Herbrand interpretation. Herbrand interpretations are subsets of the Herbrand base HB(T), which consists of all ground atoms that can be constructed with the available predicates and constant symbols in T. The atoms in a Herbrand interpretation are assumed to be true. All other atoms in HB(T) are assumed to be false. An interpretation I satisfies a theory T, written as I |= T, if it satisfies all the clauses c ∈T. 
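Unification of function-free atoms, as used in the independence test above, can be sketched as follows. This is a simplified sketch that ignores the associated constraint sets; the atom encoding and the uppercase-means-variable convention are ours.

```python
def unify(atom1, atom2):
    """Most general unifier of two function-free atoms, or None.

    Atoms are (predicate, args) tuples; argument strings starting with an
    uppercase letter are logical variables, everything else is a constant.
    """
    pred1, args1 = atom1
    pred2, args2 = atom2
    if pred1 != pred2 or len(args1) != len(args2):
        return None
    subst = {}

    def walk(t):
        # Follow variable bindings to their current representative.
        while t in subst:
            t = subst[t]
        return t

    def is_var(t):
        return t[:1].isupper()

    for a, b in zip(args1, args2):
        a, b = walk(a), walk(b)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b
        elif is_var(b):
            subst[b] = a
        else:
            return None  # two distinct constants clash
    return subst
```

For example, p(X, a) and p(b, Y) unify with {X → b, Y → a}, while p(X, X) and p(a, b) do not.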
The WFOMC problem is defined on a weighted logic theory T, which is a logic theory augmented with a positive weight function w and a negative weight function w̄, each assigning a weight to every predicate. The WFOMC problem involves computing
wmc(T, w, w̄) = ∑_{I |= T} ∏_{a∈I} w(pred(a)) · ∏_{a∈HB(T)\I} w̄(pred(a)). (1)

3 First-order knowledge compilation for lifted probabilistic inference

3.1 Lifted probabilistic inference

A first-order probabilistic model defines a probability distribution P over the set of Herbrand interpretations H. Probabilistic inference in these models is concerned with computing the posterior probability P(q|e) of a query q given evidence e, where q and e are logical expressions in general:
P(q|e) = ( ∑_{h∈H, h|=q∧e} P(h) ) / ( ∑_{h∈H, h|=e} P(h) ). (2)

We propose one notion of lifted inference for first-order probabilistic models, defined in terms of the computational complexity of inference w.r.t. the domains of logical variables. It is clear that other notions of lifted inference are conceivable, especially in the case of approximate inference.

Definition 1 (Domain Lifted Probabilistic Inference). A probabilistic inference procedure is domain lifted for a model m, query q and evidence e iff the inference procedure runs in polynomial time in |D1|, . . . , |Dk|, with Di the domain of the logical variable vi ∈ vars(m, q, e).

Domain lifted inference does not prohibit the algorithm from being exponential in the size of the vocabulary, that is, the number of predicates, arguments and constants of the probabilistic model, query and evidence. In fact, the definition allows inference to be exponential in the number of constants which occur in arguments of atoms in the theory, query or evidence, as long as it is polynomial in the cardinality of the logical variable domains.
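For small ground theories, Eq. (1) can be evaluated directly by enumerating all Herbrand interpretations. A brute-force sketch (the atom and clause encodings are ours); note that this enumeration is exponential in the size of the Herbrand base, which is exactly the cost that domain lifted inference avoids:

```python
from itertools import product

def wfomc(ground_clauses, atoms, w_pos, w_neg):
    """Brute-force WFOMC over a ground theory, per Eq. (1).

    ground_clauses: list of clauses, each a list of (atom, sign) literals;
    atoms: the Herbrand base; w_pos / w_neg: weight per predicate name,
    where an atom is a (predicate, args) pair.
    """
    total = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        truth = dict(zip(atoms, bits))
        # An interpretation is a model if every clause has a true literal.
        if all(any(truth[a] == sign for a, sign in clause)
               for clause in ground_clauses):
            weight = 1.0
            for a, val in truth.items():
                weight *= w_pos[a[0]] if val else w_neg[a[0]]
            total += weight
    return total
```

For the single ground clause smokes(a) ∨ smokes(b) with w(smokes) = 2 and w̄(smokes) = 1, the three models contribute 2 + 2 + 4 = 8.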
This definition of lifted inference stresses the ability to efficiently deal with the domains of the logical variables that arise, regardless of their size, and formalizes what seems to be generally accepted in the lifted inference literature.

A class of probabilistic models is a set of probabilistic models expressed in a particular formalism. As examples, consider Markov logic networks (MLN) [12] or parfactors [4], or the weighted FOL theories for WFOMC that we introduced above, when the weights are normalized.

Definition 2 (Completeness). Restricting queries to atoms and evidence to a conjunction of literals, a procedure that is domain lifted for all probabilistic models m in a class of models M and for all queries q and evidence e is called complete for M.

3.2 First-order knowledge compilation

First-order knowledge compilation is an approach to lifted probabilistic inference consisting of the following three steps (see Van den Broeck et al. [10] for details):

1. Convert the probabilistic logical model to a weighted CNF. Converting MLNs or parfactors requires adding new atoms to the theory that represent the (truth) value of each factor or formula.
Figure 1b represents the same model as a weighted CNF, introducing a new atom f(X, Y) to encode the truth value of the MLN formula. The probabilistic information is captured by the weight functions in Figure 1c.

2. Compile the logical theory into a First-Order d-DNNF (FO d-DNNF) circuit. Figure 1d shows an example of such a circuit. Leaves represent unit clauses. Inner nodes represent the disjunction or conjunction of their children l and r, but with the constraint that disjunctions must be deterministic (l ∧ r is unsatisfiable) and conjunctions must be decomposable (l ⊥⊥ r).

3. Perform WFOMC inference to compute posterior probabilities. In a FO d-DNNF circuit, WFOMC is polynomial in the size of the circuit and the cardinality of the domains.

To compile the CNF theory into a FO d-DNNF circuit, Van den Broeck et al. [10] propose a set of compilation rules, which we will refer to as CR1. We will now briefly describe these rules.

Unit Propagation introduces a decomposable conjunction when the theory contains a unit clause. Independence creates a decomposable conjunction when the theory contains independent subtheories. Shannon decomposition applies when the theory contains ground atoms and introduces a deterministic disjunction between two modified theories: one where the ground atom is true, and one where it is false. Shattering splits clauses in the theory until all pairs of atoms represent either a disjoint or identical set of ground atoms.

Example 2. In Figure 2a, the first two clauses are made independent from the friends(X, X) clause and split off in a decomposable conjunction by unit propagation. The unit clause becomes a leaf of the FO d-DNNF circuit, while the other operand requires further compilation.
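Step 3 above can be illustrated propositionally (a minimal sketch of ours, assuming the standard bottom-up d-DNNF evaluation scheme): the weighted model count of a leaf is its literal weight, of a decomposable conjunction the product of its children, and of a deterministic disjunction the sum of its children. The node encoding below is hypothetical.

```python
def wmc_circuit(node, w, w_bar):
    """Bottom-up WMC evaluation of a propositional d-DNNF circuit.
    A node is ('leaf', pred, polarity), ('and', children) for a decomposable
    conjunction, or ('or', children) for a deterministic disjunction."""
    kind = node[0]
    if kind == 'leaf':
        _, pred, positive = node
        return w[pred] if positive else w_bar[pred]
    result = 1.0 if kind == 'and' else 0.0
    for child in node[1]:
        if kind == 'and':
            result *= wmc_circuit(child, w, w_bar)   # decomposable: multiply
        else:
            result += wmc_circuit(child, w, w_bar)   # deterministic: add
    return result

# A d-DNNF for "smokes(a) ∨ smokes(b)": (a ∧ (b ∨ ¬b)) ∨ (¬a ∧ b)
circuit = ('or', [
    ('and', [('leaf', 'smokes', True),
             ('or', [('and', [('leaf', 'smokes', True)]),
                     ('and', [('leaf', 'smokes', False)])])]),
    ('and', [('leaf', 'smokes', False), ('leaf', 'smokes', True)]),
])
print(wmc_circuit(circuit, {'smokes': 2.0}, {'smokes': 1.0}))  # → 8.0
```

The result agrees with direct enumeration of the satisfying interpretations of the same theory, as it must for any correct d-DNNF.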
(a) Unit propagation of friends(X, X). Before:
friends(X, Y) ∨ dislikes(X, Y)
¬friends(X, Y) ∨ likes(X, Y)
friends(X, X)
After (decomposable conjunction of the leaf friends(X, X) with):
friends(X, Y) ∨ dislikes(X, Y), X ≠ Y
¬friends(X, Y) ∨ likes(X, Y), X ≠ Y
likes(X, X)

(b) Independent partial grounding. Before:
dislikes(X, Y) ∨ friends(X, Y)
fun(X) ∨ ¬friends(X, Y)
After:
⋀_{x ∈ People} { dislikes(x, Y) ∨ friends(x, Y); fun(x) ∨ ¬friends(x, Y) }

(c) Atom counting of fun(X). Before:
fun(X) ∨ ¬friends(X, Y)
fun(X) ∨ ¬friends(Y, X)
After:
⋁_{FunPeople ⊆ People} { fun(X), X ∈ FunPeople; ¬fun(X), X ∉ FunPeople; fun(X) ∨ ¬friends(X, Y); fun(X) ∨ ¬friends(Y, X) }

Figure 2: Examples of compilation rules. Circles are FO d-DNNF inner nodes. White rectangles show theories before and after applying the rule. All variable domains are People. (taken from [10])

Independent Partial Grounding creates a decomposable conjunction over a set of child circuits, which are identical up to the value of a grounding constant. Since they are structurally identical, only one child circuit is actually compiled.

Atom Counting applies when the theory contains an atom with a single logical variable X ∈ D. It explicitly represents the domain D⊤ ⊆ D of X for which the atom is true. It compiles the theory into a deterministic disjunction between all possible such domains. Again, these child circuits are identical up to the value of D⊤ and only one is compiled.

Example 3. The theory in Figure 2b is compiled into a decomposable set-conjunction of theories that are independent and identical up to the value of the x constant. The theory in Figure 2c contains an atom with one logical variable: fun(X). Atom counting compiles it into a deterministic set-disjunction over theories that differ in FunPeople, which is the domain of X for which fun(X) is true. Subsequent steps of unit propagation remove the fun(X) atoms from the theory entirely.
3.3 Completeness

We will now characterize those theories where the CR1 compilation rules cannot be used, and where the inference procedure has to resort to grounding out the theory to propositional logic. For these, first-order knowledge compilation using CR1 is not yet domain lifted.

When a logical theory contains symmetric, anti-symmetric or total relations, such as

friends(X, Y) ⇒ friends(Y, X), (3)
parent(X, Y) ⇒ ¬parent(Y, X), X ≠ Y, (4)
≤(X, Y) ∨ ≤(Y, X), (5)

or more general formulas, such as

enemies(X, Y) ⇒ ¬friend(X, Y) ∧ ¬friend(Y, X), (6)

none of the CR1 rules apply. Intuitively, the underlying problem is the presence of either:

• Two unifying (not independent) atoms in the same clause which contain the same logical variable in different positions of the argument list. Examples include (the CNF of) Formulas 3, 4 and 5, where the X and Y variables are bound by unifying two atoms from the same clause.

• Two logical variables that bind when unifying one pair of atoms but appear in different positions of the argument list of two other unifying atoms. Examples include Formula 6, which in CNF is

¬friend(X, Y) ∨ ¬enemies(X, Y)
¬friend(Y, X) ∨ ¬enemies(X, Y)

Here, unifying the enemies(X, Y) atoms binds the X variables from both clauses, which appear in different positions of the argument lists of the unifying atoms friend(X, Y) and friend(Y, X).

Both of these properties preclude the use of CR1 rules. Also in the context of other model classes, such as MLNs, probabilistic versions of the above formulas cannot be processed by CR1 rules.

Even though first-order knowledge compilation with CR1 rules does not have a clear completeness result, we can show some properties of theories to which none of the compilation rules apply. First, we need to distinguish between the arity of an atom and its dimension. A predicate with arity two might have atoms with dimension one, when one of the arguments is ground or both are identical.
Definition 3 (Dimension of an Expression). The dimension of an expression e is the number of logical variables it contains: dim(e) = |vars(e)|.

Lemma 1 (CR1 Postconditions). The CR1 rules remove all atoms from the theory T which have zero or one logical variable arguments, such that afterwards ∀a ∈ atom(T): dim(a) > 1. When no CR1 rule applies, the theory is shattered and contains no independent subtheories.

Proof. Ground atoms are removed by the Shannon decomposition operator followed by unit propagation. Atoms with a single logical variable (including unary relations) are removed by the atom counting operator followed by unit propagation. If T contains independent subtheories, the independence operator can be applied. Shattering is always applied when T is not yet shattered.

4 Extending first-order knowledge compilation

In this section we introduce a new operator which does apply to the theories from Section 3.3.

4.1 Logical variable properties

To formally define the operator we propose, and prove its correctness, we first introduce some mathematical concepts related to the logical variables in a theory (partly after Jha et al. [8]).

Definition 4 (Binding Variables). Two logical variables X, Y are directly binding, b(X, Y), if they are bound by unifying a pair of atoms in the theory. The binding relationship b+(X, Y) is the transitive closure of the directly binding relation b(X, Y).

Example 4. In the theory

¬p(W, X) ∨ ¬q(X)
r(Y) ∨ ¬q(Y)
¬r(Z) ∨ s(Z)

the variable pairs (X, Y) and (Y, Z) are directly binding. The variables X, Y and Z are binding. Variable W does not bind to any other variable. Note that the binding relationship b+(X, Y) is an equivalence relation that defines two equivalence classes: {X, Y, Z} and {W}.

Lemma 2 (Binding Domains). After shattering, binding logical variables have identical domains.

Proof.
During shattering (see Section 3.2), when two atoms unify, binding two variables with partially overlapping domains, the atoms' clauses are split up into clauses where the domain of the variables is identical, and clauses where the domains are disjoint and the atoms no longer unify.

Definition 5 (Root Binding Class). A root variable is a variable that appears in all the atoms in its clause. A root binding class is an equivalence class of binding variables where all variables are root.

Example 5. In the theory of Example 4, {X, Y, Z} is a root binding class and {W} is not.

4.2 Domain recursion

We will now introduce the new domain recursion operator, starting with its preconditions.

Definition 6. A theory allows for domain recursion when (i) the theory is shattered, (ii) the theory contains no independent subtheories and (iii) there exists a root binding class.

From now on, we will denote with C the set of clauses of the theory at hand and with B a root binding class guaranteed to exist if C allows for domain recursion. Lemma 2 states that all variables in B have identical domains. We will denote the domain of these variables with D. The intuition behind the domain recursion operator is that it modifies D by making one element explicit: D = D′ ∪ {xD} with xD ∉ D′. This explicit domain element is introduced by the SPLITD function, which splits clauses w.r.t. the new subdomain D′ and element xD.

Definition 7 (SPLITD). For a clause c and a given set of variables Vc ⊆ vars(c) with domain D, let

SPLITD(c, Vc) = { c, if Vc = ∅
                { SPLITD(c1, Vc \ {V}) ∪ SPLITD(c2, Vc \ {V}), if Vc ≠ ∅ (7)

where c1 = c ∧ (V = xD) and c2 = c ∧ (V ≠ xD) ∧ (V ∈ D′) for some V ∈ Vc. For a set of clauses C and a set of variables V with domain D: SPLITD(C, V) = ⋃_{c ∈ C} SPLITD(c, V ∩ vars(c)).

The domain recursion operator creates three sets of clauses: SPLITD(C, B) = Cx ∪ Cv ∪ Cr, with

Cx = {c ∧ ⋀_{V ∈ B ∩ vars(c)} (V = xD) | c ∈ C}, (8)
Cv = {c ∧ ⋀_{V ∈ B ∩ vars(c)} (V ≠ xD) ∧ (V ∈ D′) | c ∈ C}, (9)
Cr = SPLITD(C, B) \ Cx \ Cv.
(10)

Proposition 3. The conjunction of the domain recursion sets is equivalent to the original theory: ⋀_{c ∈ C} c ≡ ⋀_{c ∈ SPLITD(C,B)} c and therefore ⋀_{c ∈ C} c ≡ (⋀_{c ∈ Cx} c) ∧ (⋀_{c ∈ Cv} c) ∧ (⋀_{c ∈ Cr} c).

We will now show that these sets are independent and that their conjunction is decomposable.

Theorem 4. The theories Cx, Cv and Cr are independent: Cx ⊥⊥ Cv, Cx ⊥⊥ Cr and Cv ⊥⊥ Cr.

The proof of Theorem 4 relies on the following lemma.

Lemma 5. If the theory allows for domain recursion, all clauses and atoms contain the same number of variables from B: ∃n, ∀c ∈ C, ∀a ∈ atom(C): |vars(c) ∩ B| = |vars(a) ∩ B| = n.

Proof. Denote with Cn the clauses in C that contain n logical variables from B and with Cc_n its complement in C. If C is nonempty, there is an n > 0 for which Cn is nonempty. Then every atom in Cn contains exactly n variables from B (Definition 5). Since the theory contains no independent subtheories, there must be an atom a in Cn which unifies with an atom ac in Cc_n, or Cc_n is empty. After shattering, all unifications bind one variable from a to a single variable from ac. Because a contains exactly n variables from B, ac must also contain exactly n (Definition 4), and because B is a root binding class, the clause of ac also contains exactly n, which contradicts the definition of Cc_n. Therefore, Cc_n is empty, and because the variables in B are root, they also appear in all atoms.

Proof of Theorem 4. From Lemma 5, all atoms in C contain the same number of variables from B. In Cx, these variables are all constrained to be equal to xD, while in Cv and Cr at least one variable is constrained to be different from xD. An attempt to unify an atom from Cx with an atom from Cv or Cr therefore creates an unsatisfiable set of constraints. Similarly, atoms from Cv and Cr cannot be unified.

Finally, we extend the FO d-DNNF language proposed in Van den Broeck et al. [10] with a new node, the recursive decomposable conjunction ∧⃝r, and define the domain recursion compilation rule.
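Before turning to the new node, the binding machinery above can be made concrete: the binding classes of Definition 4 are just connected components under positional unification, so a union-find pass computes them. This is a simplified sketch of ours (it ignores polarity and constants and assumes unifying atoms share predicate and arity); on the theory of Example 4 it recovers the classes {X, Y, Z} and {W}.

```python
def binding_classes(clauses):
    """Equivalence classes of the binding relation b+ (Definition 4).

    clauses: list of clauses; each clause is a list of atoms (pred, args),
    where args are variable names. Atoms unify when they share a predicate;
    unification binds variables in the same argument position.
    """
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:                    # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)

    atoms = [a for c in clauses for a in c]
    for v in {v for _, args in atoms for v in args}:
        find(v)                                  # register every variable
    for i, (p1, args1) in enumerate(atoms):
        for p2, args2 in atoms[i + 1:]:
            if p1 == p2:                         # unifying pair: bind positionally
                for v1, v2 in zip(args1, args2):
                    union(v1, v2)
    classes = {}
    for v in parent:
        classes.setdefault(find(v), set()).add(v)
    return list(classes.values())

# Example 4: {¬p(W,X), ¬q(X)}, {r(Y), ¬q(Y)}, {¬r(Z), s(Z)}
theory = [[("p", ("W", "X")), ("q", ("X",))],
          [("r", ("Y",)), ("q", ("Y",))],
          [("r", ("Z",)), ("s", ("Z",))]]
print(sorted(map(sorted, binding_classes(theory))))  # → [['W'], ['X', 'Y', 'Z']]
```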
Definition 8 (∧⃝r). The FO d-DNNF node ∧⃝r(nx, nr, D, D′, V) represents a decomposable conjunction between the d-DNNF nodes nx, nr and a d-DNNF node isomorphic to the ∧⃝r node itself. In particular, the isomorphic operand is identical to the node itself, except for the size of the domain of the variables in V, which becomes one smaller, going from D to D′ in the isomorphic operand.

We have shown that the conjunction between the sets Cx, Cv and Cr is decomposable (Theorem 4) and logically equivalent to the original theory (Proposition 3). Furthermore, Cv is identical to C, up to the constraints on the domain of the variables in B. This leads us to the following definition of domain recursion.

Definition 9 (Domain Recursion). The domain recursion compilation rule compiles C into ∧⃝r(nx, nr, D, D′, B), where nx, nr are the compiled circuits for Cx, Cr. The third set Cv is represented by the recursion on D, according to Definition 8.

Figure 3: Circuit for the symmetric relation in Equation 3, rooted in a recursive conjunction. [circuit diagram: the ∧⃝r root conjoins nr, compiled from Cr = {¬friends(x, X) ∨ friends(X, x), X ≠ x; ¬friends(X, x) ∨ friends(x, X), X ≠ x}; nx, compiled from Cx = {¬friends(x, x) ∨ friends(x, x)}, a deterministic disjunction of friends(x, x) and ¬friends(x, x); and the recursive operand nv with Person ← Person \ {x}]

Example 6. Figure 3 shows the FO d-DNNF circuit for Equation 3. The theory is split up into three independent theories: Cr and Cx, shown in Figure 3, and Cv = {¬friends(X, Y) ∨ friends(Y, X), X ≠ x, Y ≠ x}. The conjunction of these theories is equivalent to Equation 3. Theory Cv is identical to Equation 3, up to the inequality constraints on X and Y.

Theorem 6.
Given a function size, which maps domains to their size, the weighted first-order model count of a ∧⃝r(nx, nr, D, D′, V) node is

wmc(∧⃝r(nx, nr, D, D′, V), size) = wmc(nx, size)^{size(D)} · ∏_{s=0}^{size(D)−1} wmc(nr, size ∪ {D′ ↦ s}), (11)

where size ∪ {D′ ↦ s} adds to the size function that the subdomain D′ has cardinality s.

Proof. If C allows for domain recursion, due to Theorem 4, the weighted model count is

wmc(C, size) = { 1, if size(D) = 0
               { wmc(Cx) · wmc(Cv, size′) · wmc(Cr, size′), if size(D) > 0 (12)

where size′ = size ∪ {D′ ↦ size(D) − 1}. Unrolling this recursion yields Equation 11.

Theorem 7. The Independent Partial Grounding compilation rule is a special case of the domain recursion rule, where ∀c ∈ C: |vars(c) ∩ B| = 1 (and therefore Cr = ∅).

4.3 Completeness

In this section, we introduce a class of models for which first-order knowledge compilation with domain recursion is complete.

Definition 10 (k-WFOMC). The class of k-WFOMC consists of WFOMC theories with clauses that have up to k logical variables.

A first completeness result is for 2-WFOMC, using the set of knowledge compilation rules CR2, which are the rules in CR1 extended with domain recursion.

Theorem 8 (Completeness for 2-WFOMC). First-order knowledge compilation using the CR2 compilation rules is a complete domain lifted probabilistic inference algorithm for 2-WFOMC.

Proof. From Lemma 1, after applying the CR1 rules, the theory contains only atoms with dimension larger than or equal to two. From Definition 10, each clause has dimension smaller than or equal to two. Therefore, each logical variable in the theory is a root variable and, according to Definition 5, every equivalence class of binding variables is a root binding class. Because of Lemma 1, the theory allows for domain recursion, which requires further compilation of two theories: Cx and Cr into nx and nr. Both have dimension smaller than 2 and can be lifted by the CR1 compilation rules.
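For intuition, the recursion of Equation 12 can be instantiated by hand for the symmetric theory of Equation 3 with unit weights (pure model counting): wmc(Cx) = 2, since ¬friends(x, x) ∨ friends(x, x) is a tautology and leaves friends(x, x) free, and wmc(Cr) = 2^{|D′|}, since each remaining element must agree with x in both directions. The following sketch (ours, not part of the WFOMC implementation) checks the recursion against brute-force propositional counting.

```python
from itertools import product

def wmc_recursive(n):
    """Model count of {friends(X, Y) => friends(Y, X)} over a domain of
    size n, via the domain recursion of Eq. (12) with unit weights."""
    if n == 0:
        return 1
    wmc_cx = 2              # ¬f(x,x) ∨ f(x,x) is a tautology: f(x,x) is free
    wmc_cr = 2 ** (n - 1)   # for each X ≠ x: f(x,X) and f(X,x) must be equal
    return wmc_cx * wmc_cr * wmc_recursive(n - 1)   # Cv: recurse on D'

def wmc_ground(n):
    """Brute force over all assignments to the n*n ground friends atoms."""
    count = 0
    for bits in product([0, 1], repeat=n * n):
        f = lambda i, j: bits[i * n + j]
        if all(not f(i, j) or f(j, i) for i in range(n) for j in range(n)):
            count += 1
    return count

for n in range(4):
    assert wmc_recursive(n) == wmc_ground(n)
print(wmc_recursive(3))   # → 64
```

The closed form 2^n · 2^{n(n−1)/2} (one free bit per diagonal atom, one per unordered pair) is exactly what the unrolled product in Equation 11 produces here.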
The properties of 2-WFOMC are a sufficient but not necessary condition for first-order knowledge compilation to be domain lifted. We can obtain a similar result for MLNs or parfactors by reducing them to a WFOMC problem. If an MLN contains only formulae with up to k logical variables, then its WFOMC representation will be in k-WFOMC.

This result for 2-WFOMC is not trivial. Van den Broeck et al. [10] showed in their experiments that counting first-order variable elimination (C-FOVE) [6] fails to lift the "Friends Smoker Drinker" problem, which is in 2-WFOMC. We will show in the next section that the CR1 rules fail to lift the theory in Figure 4a, which is in 2-WFOMC. Note that there are also useful theories that are not in 2-WFOMC, such as those containing the transitive relation friends(X, Y) ∧ friends(Y, Z) ⇒ friends(X, Z).

5 Empirical evaluation

To complement the theoretical results of the previous section, we extended the WFOMC implementation1 with the domain recursion rule. We performed experiments with the theory in Figure 4a, which is a version of the friends and smokers model [11] extended with the symmetric relation of Equation 3. We evaluate the performance when querying P(smokes(bob)) with increasing domain size, comparing our approach to the existing WFOMC implementation and its propositional counterpart, which first grounds the theory and then compiles it with the c2d compiler [13] to a propositional d-DNNF circuit. We did not compare to C-FOVE [6] because it cannot perform lifted inference on this model.

Propositional inference quickly becomes intractable when there are more than 20 people. The lifted inference algorithms scale much better. The CR1 rules can exploit some regularities in the model. For example, they eliminate all the smokes(X) atoms from the theory. They do, however, resort to grounding at a later stage of the compilation process. With the domain recursion rule, there is no need for grounding.
This advantage is clear in the experiments, our approach having an almost constant inference time in this range of domain sizes. Note that the runtimes for c2d include compilation and evaluation of the circuit, whereas the WFOMC runtimes only represent evaluation of the FO d-DNNF. After all, propositional compilation depends on the domain size but first-order compilation does not. First-order compilation takes a constant two seconds for both rule sets.

(a) MLN Model:
2 smokes(X) ∧ friends(X, Y) ⇒ smokes(Y)
friends(X, Y) ⇒ friends(Y, X).
(b) Evaluation Runtime [log-scale plot of runtime in seconds against the number of people, for c2d, WFOMC-CR1 and WFOMC-CR2]

Figure 4: Symmetric friends and smokers experiment, comparing propositional knowledge compilation (c2d) to WFOMC using compilation rules CR1 and CR2 (which includes domain recursion).

6 Conclusions

We proposed a definition of complete domain lifted probabilistic inference w.r.t. classes of probabilistic logic models. This definition considers algorithms to be lifted if they are polynomial in the size of logical variable domains. Existing first-order knowledge compilation turns out not to admit an intuitive completeness result. Therefore, we generalized the existing Independent Partial Grounding compilation rule to the domain recursion rule. With this one extra rule, we showed that first-order knowledge compilation is complete for a significant class of probabilistic logic models, where the WFOMC representation has up to two logical variables per clause.

Acknowledgments

The author would like to thank Luc De Raedt, Jesse Davis and the anonymous reviewers for valuable feedback. This work was supported by the Research Foundation-Flanders (FWO-Vlaanderen).

1 http://dtai.cs.kuleuven.be/wfomc/

References

[1] Lise Getoor and Ben Taskar, editors. An Introduction to Statistical Relational Learning. MIT Press, 2007.
[2] Luc De Raedt, Paolo Frasconi, Kristian Kersting, and Stephen Muggleton, editors.
Probabilistic inductive logic programming: theory and applications. Springer-Verlag, Berlin, Heidelberg, 2008.
[3] Daan Fierens, Guy Van den Broeck, Ingo Thon, Bernd Gutmann, and Luc De Raedt. Inference in probabilistic logic programs using weighted CNF's. In Proceedings of UAI, pages 256–265, 2011.
[4] David Poole. First-order probabilistic inference. In Proceedings of IJCAI, pages 985–991, 2003.
[5] Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In Proceedings of IJCAI, pages 1319–1325, 2005.
[6] Brian Milch, Luke S. Zettlemoyer, Kristian Kersting, Michael Haimes, and Leslie Pack Kaelbling. Lifted probabilistic inference with counting formulas. In Proceedings of AAAI, pages 1062–1068, 2008.
[7] Vibhav Gogate and Pedro Domingos. Exploiting logical structure in lifted probabilistic inference. In Proceedings of StarAI, 2010.
[8] Abhay Jha, Vibhav Gogate, Alexandra Meliou, and Dan Suciu. Lifted inference seen from the other side: The tractable features. In Proceedings of NIPS, 2010.
[9] Vibhav Gogate and Pedro Domingos. Probabilistic theorem proving. In Proceedings of UAI, pages 256–265, 2011.
[10] Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proceedings of IJCAI, pages 2178–2185, 2011.
[11] Parag Singla and Pedro Domingos. Lifted first-order belief propagation. In Proceedings of AAAI, pages 1094–1099, 2008.
[12] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1):107–136, 2006.
[13] Adnan Darwiche. New advances in compiling CNF to decomposable negation normal form. In Proceedings of ECAI, pages 328–332, 2004.
Analysis and Improvement of Policy Gradient Estimation

Tingting Zhao, Hirotaka Hachiya, Gang Niu, and Masashi Sugiyama
Tokyo Institute of Technology
{tingting@sg., hachiya@sg., gang@sg., sugiyama@}cs.titech.ac.jp

Abstract

Policy gradient is a useful model-free reinforcement learning approach, but it tends to suffer from instability of gradient estimates. In this paper, we analyze and improve the stability of policy gradient methods. We first prove that the variance of gradient estimates in the PGPE (policy gradients with parameter-based exploration) method is smaller than that of the classical REINFORCE method under a mild assumption. We then derive the optimal baseline for PGPE, which contributes to further reducing the variance. We also theoretically show that PGPE with the optimal baseline is more preferable than REINFORCE with the optimal baseline in terms of the variance of gradient estimates. Finally, we demonstrate the usefulness of the improved PGPE method through experiments.

1 Introduction

The goal of reinforcement learning (RL) is to find an optimal decision-making policy that maximizes the return (i.e., the sum of discounted rewards) through interaction with an unknown environment [13]. Model-free RL is a flexible framework in which decision-making policies are directly learned without going through explicit modeling of the environment. Policy iteration and policy search are two popular formulations of model-free RL.

In the policy iteration approach [6], the value function is first estimated and then policies are determined based on the learned value function. Policy iteration was demonstrated to work well in many real-world applications, especially in problems with discrete states and actions [14, 17, 1]. Although policy iteration can naturally deal with continuous states by function approximation [8], continuous actions are hard to handle due to the difficulty of finding maximizers of value functions with respect to actions.
Moreover, since policies are indirectly determined via value function approximation, misspecification of value function models can lead to inappropriate policies even in very simple problems [15, 2]. Another limitation of policy iteration, especially in physical control tasks, is that control policies can vary drastically in each iteration. This causes severe instability in the physical system and thus is not favorable in practice.

Policy search is another approach to model-free RL that can overcome the limitations of policy iteration [18, 4, 7]. In the policy search approach, control policies are directly learned so that the return is maximized, for example, via a gradient method (called the REINFORCE method) [18], an EM algorithm [4], or a natural gradient method [7]. Among them, the gradient-based method is particularly useful in physical control tasks since policies are changed gradually. This ensures the stability of the physical system. However, since the REINFORCE method tends to have a large variance in the estimation of the gradient directions, its naive implementation converges slowly [9, 10, 12]. Subtraction of the optimal baseline [16, 5] can ease this problem to some extent, but the variance of gradient estimates is still large. Furthermore, the performance heavily depends on the choice of an initial policy, and appropriate initialization is not straightforward in practice.

To cope with this problem, a novel policy gradient method called policy gradients with parameter-based exploration (PGPE) was proposed recently [12]. In PGPE, an initial policy is drawn from a prior probability distribution, and then actions are chosen deterministically. This construction contributes to mitigating the problem of initial policy choice and stabilizing gradient estimates. Moreover, by subtracting a moving-average baseline, the variance of gradient estimates can be further reduced.
Through robot-control experiments, PGPE was demonstrated to achieve more stable performance than existing policy-gradient methods.

The goal of this paper is to theoretically support the usefulness of PGPE, and to further improve its performance. More specifically, we first give bounds of the gradient estimates of the REINFORCE and PGPE methods. Our theoretical analysis shows that gradient estimates for PGPE have smaller variance than those for REINFORCE under a mild condition. We then show that the moving-average baseline for PGPE adopted in the original paper [12] has excess variance; we give the optimal baseline for PGPE that minimizes the variance, following the line of [16, 5]. We further theoretically show that PGPE with the optimal baseline is more preferable than REINFORCE with the optimal baseline in terms of the variance of gradient estimates. Finally, the usefulness of the improved PGPE method is demonstrated through experiments.

2 Policy Gradients for Reinforcement Learning

In this section, we review policy gradient methods.

2.1 Problem Formulation

Let us consider a Markov decision problem specified by (S, A, PT, PI, r, γ), where S is a set of ℓ-dimensional continuous states, A is a set of continuous actions, PT(s′|s, a) is the transition probability density from current state s to next state s′ when action a is taken, PI(s) is the probability density of initial states, r(s, a, s′) is an immediate reward for the transition from s to s′ by taking action a, and 0 < γ < 1 is the discount factor for future rewards.

Let p(a|s, θ) be a stochastic policy with parameter θ, which represents the conditional probability density of taking action a in state s. Let h = [s1, a1, ..., sT, aT] be a trajectory of length T. Then the return (i.e., the discounted sum of future rewards) along h is given by R(h) := Σ_{t=1}^{T} γ^{t−1} r(s_t, a_t, s_{t+1}). The expected return for parameter θ is defined by J(θ) := ∫ p(h|θ) R(h) dh, where p(h|θ) = p(s1) Π_{t=1}^{T} p(s_{t+1}|s_t, a_t) p(a_t|s_t, θ).
The goal of reinforcement learning is to find the optimal policy parameter θ* that maximizes the expected return J(θ): θ* := arg max_θ J(θ).

2.2 Review of the REINFORCE Algorithm

In the REINFORCE algorithm [18], the policy parameter θ is updated via gradient ascent: θ ← θ + ε∇θJ(θ), where ε is a small positive constant. The gradient ∇θJ(θ) is given by

∇θJ(θ) = ∫ ∇θ p(h|θ) R(h) dh = ∫ p(h|θ) ∇θ log p(h|θ) R(h) dh = ∫ p(h|θ) Σ_{t=1}^{T} ∇θ log p(a_t|s_t, θ) R(h) dh,

where we used the so-called 'log trick': ∇θ p(h|θ) = p(h|θ) ∇θ log p(h|θ). Since p(h|θ) is unknown, the expectation is approximated by the empirical average:

∇θ Ĵ(θ) = (1/N) Σ_{n=1}^{N} Σ_{t=1}^{T} ∇θ log p(a_t^n|s_t^n, θ) R(h^n),

where h^n := [s_1^n, a_1^n, ..., s_T^n, a_T^n] is a roll-out sample.

Let us employ the Gaussian policy model with parameter θ = (μ, σ), where μ is the mean vector and σ is the standard deviation:

p(a|s, θ) = (1/(σ√(2π))) exp(−(a − μ⊤s)²/(2σ²)).

Then the policy gradients are explicitly given as ∇μ log p(a|s, θ) = ((a − μ⊤s)/σ²) s and ∇σ log p(a|s, θ) = ((a − μ⊤s)² − σ²)/σ³.

A drawback of REINFORCE is that the variance of the above policy gradients is large [10, 11], which leads to slow convergence.

2.3 Review of the PGPE Algorithm

One of the reasons for the large variance of policy gradients in the REINFORCE algorithm is that the empirical average is taken at each time step, which is caused by stochasticity of policies. In order to mitigate this problem, another method called policy gradients with parameter-based exploration (PGPE) was proposed recently [11]. In PGPE, a linear deterministic policy, π(a|s, θ) = θ⊤s, is adopted, and stochasticity is introduced by considering a prior distribution over policy parameter θ with hyper-parameter ρ: p(θ|ρ). Since the entire history h is solely determined by a single sample of parameter θ in this formulation, it is expected that the variance of gradient estimates can be reduced. The expected return for hyper-parameter ρ is expressed as J(ρ) = ∫∫ p(h|θ) p(θ|ρ) R(h) dh dθ.
Differentiating this with respect to ρ, we have

∇ρJ(ρ) = ∫∫ p(h|θ) ∇ρ p(θ|ρ) R(h) dh dθ = ∫∫ p(h|θ) p(θ|ρ) ∇ρ log p(θ|ρ) R(h) dh dθ,

where the log trick for ∇ρ p(θ|ρ) is used. We then approximate the expectation over h and θ by the empirical average:

∇ρ Ĵ(ρ) = (1/N) Σ_{n=1}^{N} ∇ρ log p(θ^n|ρ) R(h^n),

where each trajectory sample h^n is drawn from p(h|θ^n) and the parameter θ^n is drawn from p(θ^n|ρ). Let us employ the Gaussian prior distribution with hyper-parameter ρ = (η, τ) to draw the parameter vector θ, where η is the mean vector and τ is the vector consisting of the standard deviation in each element:

p(θ_i|ρ_i) = (1/(τ_i√(2π))) exp(−(θ_i − η_i)²/(2τ_i²)).

Then the derivatives of log p(θ|ρ) with respect to η_i and τ_i are given as ∇η_i log p(θ|ρ) = (θ_i − η_i)/τ_i² and ∇τ_i log p(θ|ρ) = ((θ_i − η_i)² − τ_i²)/τ_i³.

3 Variance of Gradient Estimates

In this section, we theoretically investigate the variance of gradient estimates in REINFORCE and PGPE. For multi-dimensional state space, we consider the trace of the covariance matrix of gradient vectors. That is, for a random vector A = (A1, ..., Aℓ)⊤, we define

Var(A) = tr(E[(A − E[A])(A − E[A])⊤]) = Σ_{m=1}^{ℓ} E[(A_m − E[A_m])²], (1)

where E denotes the expectation. Let B = Σ_{i=1}^{ℓ} τ_i^{−2}, where ℓ is the dimensionality of state s. Below, we consider a subset of the following assumptions:

Assumption (A): r(s, a, s′) ∈ [−β, β] for β > 0.
Assumption (B): r(s, a, s′) ∈ [α, β] for 0 < α < β.
Assumption (C): For δ > 0, there exist two series {c_t}_{t=1}^{T} and {d_t}_{t=1}^{T} such that ∥s_t∥₂ ≥ c_t and ∥s_t∥₂ ≤ d_t hold with probability at least (1−δ)^{1/2N} respectively over the choice of sample paths, where ∥·∥₂ denotes the ℓ₂-norm.

Note that Assumption (B) is stronger than Assumption (A). Let L(T) = C_T α² − D_T β²/(2π), C_T = Σ_{t=1}^{T} c_t², and D_T = Σ_{t=1}^{T} d_t².

First, we analyze the variance of gradient estimates in PGPE (the proofs of all the theorems are provided in the supplementary material):

Theorem 1.
Under Assumption (A), we have the following upper bounds:

Var[∇η Ĵ(ρ)] ≤ β²(1−γ^T)²B / (N(1−γ)²) and Var[∇τ Ĵ(ρ)] ≤ 2β²(1−γ^T)²B / (N(1−γ)²).

This theorem means that the upper bound of the variance of ∇η Ĵ(ρ) is proportional to β² (the upper bound of squared rewards), B (the trace of the inverse Gaussian covariance), and (1−γ^T)²/(1−γ)², and is inverse-proportional to sample size N. The upper bound of the variance of ∇τ Ĵ(ρ) is twice as large as that of ∇η Ĵ(ρ). When T goes to infinity, (1 − γ^T)² converges to 1.

Next, we analyze the variance of gradient estimates in REINFORCE:

Theorem 2. Under Assumptions (B) and (C), we have the following lower bound with probability at least 1 − δ:

Var[∇μ Ĵ(θ)] ≥ (1−γ^T)² L(T) / (Nσ²(1−γ)²).

Under Assumptions (A) and (C), we have the following upper bound with probability at least (1 − δ)^{1/2}:

Var[∇μ Ĵ(θ)] ≤ D_T β²(1−γ^T)² / (Nσ²(1−γ)²).

Under Assumption (A), we have

Var[∇σ Ĵ(θ)] ≤ 2Tβ²(1−γ^T)² / (Nσ²(1−γ)²).

The upper bounds for REINFORCE are similar to those for PGPE, but they are monotone increasing with respect to trajectory length T. The lower bound for the variance of ∇μ Ĵ(θ) is non-trivial if it is positive, i.e., L(T) > 0. This can be fulfilled, e.g., if α and β satisfy 2πC_T α² > D_T β². Deriving a lower bound for the variance of ∇σ Ĵ(θ) is left open as future work.

Finally, we compare the variance of gradient estimates in REINFORCE and PGPE:

Theorem 3. In addition to Assumptions (B) and (C), we assume that L(T) is positive and monotone increasing with respect to T. If there exists T₀ such that L(T₀) ≥ β²Bσ², then we have Var[∇μ Ĵ(θ)] > Var[∇η Ĵ(ρ)] for all T > T₀, with probability at least 1 − δ.

The above theorem means that PGPE is more favorable than REINFORCE in terms of the variance of gradient estimates of the mean, if trajectory length T is large. This theoretical result would partially support the experimental success of the PGPE method [12].
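The qualitative content of Theorem 3 is easy to reproduce numerically. The following toy simulation is our own construction (constant state s_t = 1, reward r_t = −a_t², μ = η = 0 and σ = τ = 1; it is not one of the paper's benchmarks): it compares the empirical variance of single-sample gradient estimates of μ (REINFORCE) and of η (PGPE), using the eligibility formulas of Sections 2.2 and 2.3.

```python
import numpy as np

def toy_variance_comparison(T=50, gamma=0.95, n=20000, seed=0):
    """Monte-Carlo sketch of Theorem 3 on a toy problem: scalar state s_t = 1,
    reward r_t = -a_t^2; REINFORCE uses a_t = mu + sigma*eps_t, while PGPE
    samples theta ~ N(eta, tau^2) once and plays a_t = theta deterministically.
    Returns the empirical variances of the single-sample mu/eta gradients."""
    rng = np.random.default_rng(seed)
    disc = gamma ** np.arange(T)               # gamma^{t-1} for t = 1..T
    mu = eta = 0.0
    sigma = tau = 1.0

    # REINFORCE: one gradient estimate per trajectory
    eps = rng.normal(size=(n, T))
    actions = mu + sigma * eps
    R = -(disc * actions ** 2).sum(axis=1)     # return R(h)
    g_reinforce = R * (eps / sigma).sum(axis=1)    # R(h) * sum_t (a_t - mu)/sigma^2

    # PGPE: one gradient estimate per sampled parameter
    theta = eta + tau * rng.normal(size=n)
    R_pgpe = -disc.sum() * theta ** 2          # deterministic roll-out return
    g_pgpe = R_pgpe * (theta - eta) / tau ** 2

    return g_reinforce.var(), g_pgpe.var()

var_r, var_p = toy_variance_comparison()
print(var_r > var_p)   # → True: PGPE's estimate has the smaller variance here
```

In this setting the per-step exploration noise of REINFORCE enters both the return and the eligibility at every time step, so its variance grows with T, while the PGPE estimate depends on a single parameter sample, matching the qualitative prediction of the theorem.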
4 Variance Reduction by Subtracting a Baseline

In this section, we give a method to reduce the variance of gradient estimates in PGPE and analyze its theoretical properties.

4.1 Basic Idea of Introducing a Baseline

It is known that the variance of gradient estimates can be reduced by subtracting a baseline b; for REINFORCE and PGPE, the modified gradient estimates are given by

∇θ Ĵ_b(θ) = (1/N) Σ_{n=1}^N (R(h_n) − b) Σ_{t=1}^T ∇θ log p(a_t^n | s_t^n, θ),
∇ρ Ĵ_b(ρ) = (1/N) Σ_{n=1}^N (R(h_n) − b) ∇ρ log p(θ_n | ρ).

The adaptive reinforcement baseline [18] was derived as the exponential moving average of past experience:

b(n) = γR(h_{n−1}) + (1 − γ)b(n − 1), where 0 < γ ≤ 1.

Based on this, an empirical gradient estimate with the moving-average baseline was proposed for REINFORCE [18] and PGPE [12]. The moving-average baseline contributes to reducing the variance of gradient estimates. However, it was shown [5, 16] that the moving-average baseline is not optimal; the optimal baseline is, by definition, the minimizer of the variance of gradient estimates with respect to the baseline. Following this formulation, the optimal baseline for REINFORCE is given as follows [10]:

b*_REINFORCE := argmin_b Var[∇θ Ĵ_b(θ)] = E[R(h) ∥Σ_{t=1}^T ∇θ log p(a_t|s_t, θ)∥²] / E[∥Σ_{t=1}^T ∇θ log p(a_t|s_t, θ)∥²].

However, only the suboptimal moving-average baseline has so far been introduced to PGPE [12]. Below, we derive the optimal baseline for PGPE and study its theoretical properties.

4.2 Optimal Baseline for PGPE

Let b*_PGPE be the optimal baseline for PGPE that minimizes the variance:

b*_PGPE := argmin_b Var[∇ρ Ĵ_b(ρ)].

Then the following theorem gives the optimal baseline for PGPE:

Theorem 4. The optimal baseline for PGPE is given by

b*_PGPE = E[R(h) ∥∇ρ log p(θ|ρ)∥²] / E[∥∇ρ log p(θ|ρ)∥²],

and the excess variance for a baseline b is given by

Var[∇ρ Ĵ_b(ρ)] − Var[∇ρ Ĵ_{b*_PGPE}(ρ)] = ((b − b*_PGPE)² / N) E[∥∇ρ log p(θ|ρ)∥²].

The above theorem gives an analytic-form expression for the optimal baseline for PGPE.
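A minimal sketch of estimating Theorem 4's optimal baseline from samples is shown below; the toy returns are an assumption for illustration (in practice R(h_n) would come from rollouts).

```python
import numpy as np

def pgpe_optimal_baseline(thetas, returns, eta, tau):
    """Sample estimate of b* = E[R ||g||^2] / E[||g||^2],
    where g = grad_rho log p(theta|rho) for the Gaussian hyper-policy."""
    d_eta = (thetas - eta) / tau**2
    d_tau = ((thetas - eta)**2 - tau**2) / tau**3
    sq_norm = (d_eta**2).sum(axis=1) + (d_tau**2).sum(axis=1)
    return (returns * sq_norm).mean() / sq_norm.mean()

rng = np.random.default_rng(1)
eta, tau = np.zeros(3), np.ones(3)
thetas = rng.normal(eta, tau, size=(500, 3))
returns = 1.0 + thetas[:, 0]       # toy returns correlated with theta
b_star = pgpe_optimal_baseline(thetas, returns, eta, tau)
```

Note that b* is a return average weighted by the squared eligibility norm; it coincides with the plain average return E[R(h)] only when the two quantities are independent.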
When the return R(h) and the squared norm of the characteristic eligibility ∥∇ρ log p(θ|ρ)∥² are independent of each other, the optimal baseline reduces to the average expected return E[R(h)]. However, the optimal baseline is generally different from the average expected return. The above theorem also shows that the excess variance is proportional to the squared difference of baselines (b − b*_PGPE)² and the expected squared norm of the characteristic eligibility E[∥∇ρ log p(θ|ρ)∥²], and is inversely proportional to the sample size N.

Next, we analyze the contribution of the optimal baseline to the variance with respect to the mean parameter η in PGPE:

Theorem 5. If r(s, a, s′) ≥ α > 0, we have the following lower bound:

Var[∇η Ĵ(ρ)] − Var[∇η Ĵ_{b*_PGPE}(ρ)] ≥ α²(1 − γ^T)² B / (N(1 − γ)²).

Under Assumption (A), we have the following upper bound:

Var[∇η Ĵ(ρ)] − Var[∇η Ĵ_{b*_PGPE}(ρ)] ≤ β²(1 − γ^T)² B / (N(1 − γ)²).

This theorem shows that the lower and upper bounds of the excess variance are proportional to α² and β² (the bounds of the squared immediate rewards), B (the trace of the inverse Gaussian covariance), and (1 − γ^T)²/(1 − γ)², and are inversely proportional to the sample size N. As T goes to infinity, (1 − γ^T)² converges to 1.

4.3 Comparison with REINFORCE

Next, we analyze the contribution of the optimal baseline for REINFORCE and compare it with that for PGPE. It was shown [5, 16] that the excess variance for a baseline b in REINFORCE is given by

Var[∇θ Ĵ_b(θ)] − Var[∇θ Ĵ_{b*_REINFORCE}(θ)] = ((b − b*_REINFORCE)² / N) E[∥Σ_{t=1}^T ∇θ log p(a_t|s_t, θ)∥²].

Based on this, we have the following theorem:

Theorem 6. Under Assumptions (B) and (C), we have the following bounds with probability at least 1 − δ:

C_T α²(1 − γ^T)² / (Nσ²(1 − γ)²) ≤ Var[∇µ Ĵ(θ)] − Var[∇µ Ĵ_{b*_REINFORCE}(θ)] ≤ β²(1 − γ^T)² D_T / (Nσ²(1 − γ)²).

The above theorem shows that the lower and upper bounds of the excess variance are monotone increasing with respect to the trajectory length T. In terms of the amount of reduction in the variance of gradient estimates, Theorems 5 and 6 show that the optimal baseline for REINFORCE contributes more than that for PGPE. Finally, based on Theorems 1 and 5 and on Theorems 2 and 6, we have the following theorem:

Theorem 7. Under Assumptions (B) and (C), we have

Var[∇η Ĵ_{b*_PGPE}(ρ)] ≤ ((1 − γ^T)² / (N(1 − γ)²)) (β² − α²)B,
Var[∇µ Ĵ_{b*_REINFORCE}(θ)] ≤ ((1 − γ^T)² / (Nσ²(1 − γ)²)) (β²D_T − α²C_T),

where the latter inequality holds with probability at least 1 − δ.

This theorem shows that the upper bound of the variance of gradient estimates for REINFORCE with the optimal baseline is still monotone increasing with respect to the trajectory length T. On the other hand, since (1 − γ^T)² ≤ 1, the above upper bound of the variance of gradient estimates in PGPE with the optimal baseline can be further upper-bounded as

Var[∇η Ĵ_{b*_PGPE}(ρ)] ≤ (β² − α²)B / (N(1 − γ)²),

which is independent of T. Thus, when the trajectory length T is large, the variance of gradient estimates in REINFORCE with the optimal baseline may be significantly larger than that in PGPE with the optimal baseline.

5 Experiments

In this section, we experimentally investigate the usefulness of the proposed method, PGPE with the optimal baseline.

5.1 Illustrative Data

Let the state space S be one-dimensional and continuous, with the initial state randomly chosen from the standard normal distribution. The action space A is also one-dimensional and continuous.
The transition dynamics of the environment are given by s_{t+1} = s_t + a_t + ε, where ε ∼ N(0, 0.5²) is stochastic noise. The immediate reward is defined as r = exp(−s²/2 − a²/2) + 1, which is bounded as 1 < r ≤ 2. The discount factor is set at γ = 0.9.

Here, we compare the following five methods: REINFORCE without any baseline, REINFORCE with the optimal baseline (OB), PGPE without any baseline, PGPE with the moving-average baseline (MB), and PGPE with the optimal baseline (OB). For fair comparison, all of these methods use the same parameter setup: the mean and standard deviation of the Gaussian distribution are set at µ = −1.5 and σ = 1, the number of episodic samples is set at N = 100, and the trajectory length is set at T = 10 or 50. We then calculate the variance of gradient estimates over 100 runs. Table 1 summarizes the results, showing that the variance of REINFORCE is overall larger than that of PGPE. A notable difference between REINFORCE and PGPE is that the variance of REINFORCE

Table 1: Variance and bias of estimated gradients for the illustrative data.
T = 10:
Method        Variance(µ,η)  Variance(σ,τ)  Bias(µ,η)  Bias(σ,τ)
REINFORCE        13.2570        26.9173      -0.3102    -1.5098
REINFORCE-OB      0.0914         0.1203       0.0672     0.1286
PGPE              0.9707         1.6855      -0.0691     0.1319
PGPE-MB           0.2127         0.3238       0.0828    -0.1295
PGPE-OB           0.0372         0.0685      -0.0164     0.0512

T = 50:
Method        Variance(µ,η)  Variance(σ,τ)  Bias(µ,η)  Bias(σ,τ)
REINFORCE       188.3860       278.3095      -1.8126    -5.1747
REINFORCE-OB      0.5454         0.8996      -0.2988    -0.2008
PGPE              1.6572         3.3720      -0.1048    -0.3293
PGPE-MB           0.4123         0.8332       0.0925    -0.2556
PGPE-OB           0.0850         0.1815       0.0480    -0.0779

[Figure 1: Variance of gradient estimates with respect to the mean parameter through policy-update iterations for the illustrative data. Panels: (a) REINFORCE and REINFORCE-OB; (b) PGPE, PGPE-MB and PGPE-OB; (c) REINFORCE and PGPE; (d) REINFORCE-OB and PGPE-OB. Axes: iteration vs. variance in log10 scale.]

[Figure 2: Return as functions of the number of episodic samples N for the illustrative data. Panels: (a) Good initial policy; (b) Poor initial policy.]

significantly grows as T increases, whereas that of PGPE is not influenced that much by T. This agrees well with our theoretical analysis in Section 3. The results also show that the variance of PGPE-OB (the proposed method) is much smaller than that of PGPE-MB. REINFORCE-OB contributes highly to reducing the variance especially when T is large, which also agrees well with our theory. However, PGPE-OB still provides much smaller variance than REINFORCE-OB. We also investigate the bias of gradient estimates of each method.
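For reference, the illustrative MDP producing these results is easy to simulate. The sketch below uses a deterministic linear policy a = θ·s, which is an assumption for illustration only (the paper's REINFORCE uses a Gaussian policy, and PGPE draws θ from the hyper-policy).

```python
import numpy as np

def rollout(theta, T=10, gamma=0.9, rng=None):
    """One trajectory of the 1-D toy MDP: s' = s + a + eps with
    eps ~ N(0, 0.5^2), reward r = exp(-s^2/2 - a^2/2) + 1, discounted return."""
    if rng is None:
        rng = np.random.default_rng()
    s, R = rng.standard_normal(), 0.0       # initial state ~ N(0, 1)
    for t in range(T):
        a = theta * s                       # assumed linear deterministic policy
        r = np.exp(-s**2 / 2 - a**2 / 2) + 1.0
        R += gamma**t * r
        s = s + a + rng.normal(0.0, 0.5)
    return R

rng = np.random.default_rng(0)
returns = [rollout(-0.5, rng=rng) for _ in range(200)]
```

Because 1 < r ≤ 2, every T = 10 return lies between Σ_t γ^t ≈ 6.513 and 2 Σ_t γ^t ≈ 13.03, matching the bounded-reward assumptions of Section 3.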
We regard gradients estimated with N = 1000 as true gradients, and compute the bias of gradient estimates when N = 100. The results are also included in Table 1, showing that the introduction of baselines does not increase the bias; rather, it tends to reduce the bias.

Next, we investigate the variance of gradient estimates when policy parameters are updated over iterations. In this experiment, we set N = 10 and T = 20, and the variance is computed from 50 runs. Policies are updated over 50 iterations. In order to evaluate the variance in a stable manner, we repeat the above experiments 20 times with a random choice of the initial mean parameter µ from [−3.0, −0.1], and investigate the average variance of gradient estimates with respect to the mean parameter µ over the 20 trials, in log10 scale. The results are summarized in Figure 1. Figure 1(a) compares the variance of REINFORCE with/without baselines, whereas Figure 1(b) compares the variance of PGPE with/without baselines. These plots show that the introduction of baselines contributes highly to the reduction of the variance over iterations. Figure 1(c) compares the variance of REINFORCE and PGPE without baselines, showing that PGPE provides much more stable gradient estimates than REINFORCE. Figure 1(d) compares the variance of REINFORCE and PGPE with the optimal baselines, showing that the variance of gradient estimates obtained by PGPE-OB is much smaller than that of REINFORCE-OB. Overall, in terms of the variance of gradient estimates, the proposed PGPE-OB compares favorably with the other methods.

Next, we evaluate the returns obtained by each method. The trajectory length is fixed at T = 20, and the maximum number of policy-update iterations is set at 50. We investigate average returns over 20 runs as functions of the number of episodic samples N. We report two experimental results for different initial policies. Figure 2(a) shows the results when the initial mean parameter µ is chosen randomly from [−1.6, −0.1], which tends to perform well.
The graph shows that PGPE-OB performs the best, especially when N < 5; REINFORCE-OB follows with a small margin. PGPE-MB and plain PGPE also work reasonably well, although they are slightly unstable due to larger variance. Plain REINFORCE is highly unstable, which is caused by the huge variance of its gradient estimates (see Figure 1 again). Figure 2(b) shows the results when the initial mean parameter µ is chosen randomly from [−3.0, −0.1], which tends to result in poorer performance. In this setup, the difference among the compared methods is more significant than in the case with good initial policies. Overall, plain REINFORCE performs very poorly, and even REINFORCE-OB tends to be outperformed by the PGPE methods. This means that REINFORCE is very sensitive to the choice of initial policies. Among the PGPE methods, the proposed PGPE-OB works very well and converges quickly.

5.2 Cart-Pole Balancing

Finally, we evaluate the performance of the proposed method on the more complex task of cart-pole balancing [3]. A pole is hinged to a cart, and the goal is to swing up the pole by moving the cart properly and to keep the pole at the top. The state space S is two-dimensional and continuous, consisting of the angle ϕ ∈ [0, 2π] and the angular velocity ϕ̇ ∈ [−3π, 3π] of the pole. The action space A is one-dimensional and continuous, corresponding to the force applied to the cart (note that we cannot directly control the pole, only indirectly through moving the cart). We use the Gaussian policy model for REINFORCE and the linear policy model for PGPE, where the state s is non-linearly transformed to a feature space via a basis function vector. We use 20 Gaussian kernels with standard deviation σ = 0.5 as the basis functions, where the kernel centers are distributed over the following grid points: {0, π/2, π, 3π/2} × {−3π, −3π/2, 0, 3π/2, 3π}.
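The kernel basis just described can be sketched as follows; whether the kernels act on the raw state with a single shared width, as assumed here, is not fully specified in the text.

```python
import numpy as np

# 20 Gaussian kernel centers on the grid
# {0, pi/2, pi, 3pi/2} x {-3pi, -3pi/2, 0, 3pi/2, 3pi}
angles = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
velocities = np.array([-3 * np.pi, -3 * np.pi / 2, 0.0, 3 * np.pi / 2, 3 * np.pi])
centers = np.array([(a, v) for a in angles for v in velocities])  # shape (20, 2)

def features(state, sigma=0.5):
    """Gaussian kernel basis functions evaluated at state (phi, dphi)."""
    d2 = ((centers - np.asarray(state))**2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma**2))

f = features((np.pi, 0.0))   # (pi, 0) is itself a grid point
```

The linear PGPE policy would then be a(s) = θ⊤features(s), with θ drawn from the Gaussian hyper-policy.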
The dynamics of the pole (i.e., the update rule of the angle and the angular velocity) are given by

ϕ_{t+1} = ϕ_t + ϕ̇_{t+1} Δt,
ϕ̇_{t+1} = ϕ̇_t + [ 9.8 sin(ϕ_t) − αwl ϕ̇_t² sin(2ϕ_t)/2 + α cos(ϕ_t) a_t ] / [ 4l/3 − αwl cos²(ϕ_t) ] · Δt,

where α = 1/(W + w) and a_t is the action taken at time t. We set the problem parameters as follows: the mass of the cart W = 8 [kg], the mass of the pole w = 2 [kg], and the length of the pole l = 0.5 [m]. We set the time step Δt at 0.01 [s] for the position and velocity updates and at 0.1 [s] for action selection. The reward function is defined as r(s_t, a_t, s_{t+1}) = cos(ϕ_{t+1}); that is, the higher the pole is, the more reward we obtain. The initial policy is chosen randomly, and the initial-state probability density is set to be uniform. The agent collects N = 100 episodic samples with trajectory length T = 40. The discount factor is set at γ = 0.9.

We investigate average returns over 10 trials as functions of the number of policy-update iterations. The return at each trial is computed over 100 test episodic samples (which are not used for policy learning). The experimental results are plotted in Figure 3, showing that the improvement of both plain REINFORCE and REINFORCE-OB tends to be slow, and all PGPE methods outperform the REINFORCE methods overall. Among the PGPE methods, the proposed PGPE-OB converges faster than the others.

6 Conclusion

In this paper, we analyzed and improved the stability of the policy gradient method called PGPE (policy gradients with parameter-based exploration). We theoretically showed that, under a mild condition, PGPE provides more stable gradient estimates than the classical REINFORCE method. We also derived the optimal baseline for PGPE, and theoretically showed that PGPE with the optimal baseline is preferable to REINFORCE with the optimal baseline in terms of the variance of gradient estimates. Finally, we demonstrated the usefulness of PGPE with the optimal baseline through experiments.
[Figure 3: Performance of policy learning on the cart-pole balancing task (return R vs. policy-update iteration) for REINFORCE, REINFORCE-OB, PGPE, PGPE-MB, and PGPE-OB.]

Acknowledgments: TZ and GN were supported by the MEXT scholarship and the GCOE program, HH was supported by the FIRST program, and MS was supported by MEXT KAKENHI 23120004.

References
[1] N. Abe, P. Melville, C. Pendus, C. K. Reddy, D. L. Jensen, V. P. Thomas, J. J. Bennett, G. F. Anderson, B. R. Cooley, M. Kowalczyk, M. Domick, and T. Gardinier. Optimizing debt collections using constrained reinforcement learning. In Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 75–84, 2010.
[2] J. Baxter, P. Bartlett, and L. Weaver. Experiments with infinite-horizon, policy-gradient estimation. Journal of Artificial Intelligence Research, 15:351–381, 2001.
[3] M. Bugeja. Non-linear swing-up and stabilizing control of an inverted pendulum system. In Proceedings of IEEE Region 8 EUROCON, volume 2, pages 437–441, 2003.
[4] P. Dayan and G. E. Hinton. Using expectation-maximization for reinforcement learning. Neural Computation, 9(2):271–278, 1997.
[5] E. Greensmith, P. L. Bartlett, and J. Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5:1471–1530, 2004.
[6] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[7] S. Kakade. A natural policy gradient. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 1531–1538, Cambridge, MA, 2002. MIT Press.
[8] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[9] P. Marbach and J. N. Tsitsiklis. Approximate gradient methods in policy-space optimization of Markov reward processes. Discrete Event Dynamic Systems, 13(1-2):111–148, 2004.
[10] J. Peters and S. Schaal. Policy gradient methods for robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006.
[11] F. Sehnke, C. Osendorfer, T. Rückstiess, A. Graves, J. Peters, and J. Schmidhuber. Policy gradients with parameter-based exploration for control. In Proceedings of the 18th International Conference on Artificial Neural Networks, pages 387–396, 2008.
[12] F. Sehnke, C. Osendorfer, T. Rückstiess, A. Graves, J. Peters, and J. Schmidhuber. Parameter-exploring policy gradients. Neural Networks, 23(4):551–559, 2010.
[13] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, USA, 1998.
[14] G. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215–219, 1994.
[15] L. Weaver and J. Baxter. Reinforcement learning from state and temporal differences. Technical report, Department of Computer Science, Australian National University, 1999.
[16] L. Weaver and N. Tao. The optimal reward baseline for gradient-based reinforcement learning. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 538–545, 2001.
[17] J. D. Williams and S. Young. Partially observable Markov decision processes for spoken dialog systems. Computer Speech and Language, 21(2):231–422, 2007.
[18] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
On Tracking The Partition Function

Guillaume Desjardins, Aaron Courville, Yoshua Bengio
{desjagui,courvila,bengioy}@iro.umontreal.ca
Département d'informatique et de recherche opérationnelle
Université de Montréal

Abstract

Markov Random Fields (MRFs) have proven very powerful both as density estimators and feature extractors for classification. However, their use is often limited by an inability to estimate the partition function Z. In this paper, we exploit the gradient descent training procedure of restricted Boltzmann machines (a type of MRF) to track the log partition function during learning. Our method relies on two distinct sources of information: (1) estimating the change ∆Z incurred by each gradient update, (2) estimating the difference in Z over a small set of tempered distributions using bridge sampling. The two sources of information are then combined using an inference procedure similar to Kalman filtering. Learning MRFs through Tempered Stochastic Maximum Likelihood, we can estimate Z using no more temperatures than are required for learning. Comparing to both exact values and estimates using annealed importance sampling (AIS), we show on several datasets that our method is able to accurately track the log partition function. In contrast to AIS, our method provides this estimate at each time-step, at a computational cost similar to that required for training alone.

1 Introduction

In many areas of application, problems are naturally expressed as a Gibbs measure, where the distribution over the domain X is given, for x ∈ X, by

q(x) = q̃(x)/Z(β) = exp{−βE(x)}/Z(β),  with  Z(β) = Σ_{x∈X} q̃(x).   (1)

E(x) is referred to as the "energy" of configuration x, β is a free parameter known as the inverse temperature, and Z(β) is the normalization factor commonly referred to as the partition function.
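For small systems, Z(β) in Eq. 1 can be computed exactly by enumerating all configurations, which is useful for sanity-checking estimators; a sketch follows, with an independent-unit toy energy chosen as the assumed example.

```python
import numpy as np
from itertools import product

def log_partition(E, n, beta=1.0):
    """Exact log Z(beta) = log sum_x exp(-beta E(x)), enumerating all
    binary configurations x in {0,1}^n (tractable only for small n)."""
    energies = np.array([E(np.array(x)) for x in product([0, 1], repeat=n)])
    m = (-beta * energies).max()
    return m + np.log(np.exp(-beta * energies - m).sum())  # log-sum-exp

# toy energy with independent units: E(x) = -b^T x, so Z factorizes
b = np.array([0.5, -1.0, 2.0])
logZ = log_partition(lambda x: -b @ x, n=3)
```

For this factorized energy the closed form is log Z = Σ_i log(1 + exp(b_i)), so the enumeration can be checked exactly; for n beyond a few dozen units, enumeration becomes infeasible, which is precisely the problem the paper addresses.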
Under certain general conditions on the form of E, these models are known as Markov Random Fields (MRFs), and have been very popular within the vision and natural language processing communities. MRFs with latent variables – in particular restricted Boltzmann machines (RBMs) [9] – are among the most popular building blocks for deep architectures [1], being used in the unsupervised initialization of both Deep Belief Networks [9] and Deep Boltzmann Machines [22]. As illustrated in Eq. 1, the partition function is computed by summing over all variable configurations. Since the number of configurations scales exponentially with the number of variables, exact calculation of the partition function is generally computationally intractable. Without the partition function, probabilities under the model can only be determined up to a multiplicative constant, which seriously limits the model's utility. One method recently proposed for estimating Z(β) is annealed importance sampling (AIS) [18, 23]. In AIS, Z(β) is approximated by the sum of a set of importance-weighted samples drawn from the model distribution. With a large number of variables, drawing a set of importance-weighted samples is generally subject to extreme variance in the importance weights. AIS alleviates this issue by annealing the model distribution through a series of slowly changing distributions that link the target model distribution to one whose log partition function is tractable. While AIS is quite successful, it generally requires the use of tens of thousands of annealing distributions in order to achieve accurate results. This computationally intensive requirement renders AIS inappropriate as a means of maintaining a running estimate of the log partition function throughout training. Yet, having ready access to this quantity throughout learning opens the door to a range of possibilities.
Likelihood could be used as a basis for model comparison throughout training; early stopping could be accomplished by monitoring an estimate of the likelihood of a validation set. Another important application is Bayesian inference in MRFs [17], where we require the partition function for each value of the parameters in the region of support. Tracking the log partition function would also enable simultaneous estimation of all the parameters of a heterogeneous model, for example an extended directed graphical model with Gibbs distributions forming some of the model components. In this work, we consider a method for tracking the log partition function during training, which builds upon the parallel tempering (PT) framework [7, 10, 15]. Our method relies on two basic observations. First, when using stochastic gradient descent¹, parameters tend to change slowly during training; consequently, the partition function Z(β) also tends to evolve slowly. We exploit this property of the learning process by using importance sampling to estimate changes in the log partition function from one learning iteration to the next. If the changes in the distribution from time-step t to t + 1 are small, the importance sampling estimate can be very accurate, even with relatively few samples. This is the same basic strategy employed in AIS, but while in AIS one constructs a path of close distributions through an annealing schedule, in our procedure we simply rely on the path of distributions that emerges from the learning process. Second, parallel tempering (PT) relies on simulating an extended system, consisting of multiple models, each running at its own temperature. These temperatures are chosen such that neighboring models overlap sufficiently as to allow for frequent cross-temperature state swaps. This is an ideal operating regime for bridge sampling [2, 19], which can thus serve to estimate the difference in log partition functions between neighboring models.
While each method on its own tends not to provide reliable estimates with relatively few samples, we propose to combine these measurements using a variation of the well-known Kalman filter (KF), allowing us to accurately track the evolution of the log partition function throughout learning. The efficiency of our method stems from the fact that our estimator makes use of the samples generated in the course of training, thus incurring relatively little additional computational cost.

This paper is structured as follows. In Section 2, we provide a brief overview of RBMs and the SML-PT training algorithm, which serves as the basis of our tracking algorithm. Sections 3.1–3.3 cover the details of the importance and bridge sampling estimates, while Section 3.4 provides a comprehensive look at our filtering procedure and the tracking algorithm as a whole. Experimental results are presented in Section 4.

2 Stochastic Maximum Likelihood with Parallel Tempering

Our proposed log partition function tracking strategy is applicable to any Gibbs distribution model that is undergoing relatively smooth changes in the partition function. However, we concentrate on its application to the RBM, since it has become a model of choice for learning unsupervised features for use in deep feed-forward architectures [9, 1] as well as for modeling complex, high-dimensional distributions [27, 24, 12]. RBMs are bipartite graphical models where visible units v ∈ {0, 1}^{nv} interact with hidden units h ∈ {0, 1}^{nh} through the energy function

E(v, h) = −h⊤Wv − c⊤h − b⊤v.

The model parameters θ = [W, c, b] consist of the weight matrix W ∈ R^{nh×nv}, whose entries W_{ij} connect units (v_i, h_j), and the offset vectors b and c. RBMs can be trained through a stochastic approximation to the negative log-likelihood gradient ∂F(v)/∂θ − E_p[∂F(v)/∂θ], where F(v) is the free-energy function defined as F(v) = −log Σ_h exp(−E(v, h)).
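Because the hidden units are conditionally independent given v, the sum over h in F(v) factorizes into a product over hidden units, giving a closed form; a minimal sketch (the random toy parameters are assumptions for illustration):

```python
import numpy as np

def free_energy(v, W, b, c):
    """RBM free energy F(v) = -log sum_h exp(-E(v,h)) with
    E(v,h) = -h^T W v - c^T h - b^T v; the sum over h in {0,1}^nh
    factorizes: F(v) = -b^T v - sum_j log(1 + exp(c_j + (W v)_j))."""
    return -b @ v - np.sum(np.logaddexp(0.0, c + W @ v))

rng = np.random.default_rng(0)
nv, nh = 4, 3
W = 0.1 * rng.standard_normal((nh, nv))
b = rng.standard_normal(nv)   # visible offsets
c = rng.standard_normal(nh)   # hidden offsets
v = np.array([1.0, 0.0, 1.0, 1.0])
F = free_energy(v, W, b, c)
```

The analytic form can be verified against brute-force enumeration of all 2^{nh} hidden configurations for small nh.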
In Stochastic Maximum Likelihood (SML) [25], we replace the expectation by a sample average, where approximate samples are drawn from a persistent Markov chain, updated through k steps of Gibbs sampling between parameter updates. Other algorithms improve upon this default formulation by replacing Gibbs sampling with more powerful sampling algorithms [26, 7, 21, 20]. By increasing the mixing rate of the underlying Markov chain, these methods can lead to lower variance estimates of the maximum likelihood gradient and faster convergence.¹ However, from the perspective of tracking the log partition function, we will see in Section 3 that the SML-PT scheme [7] presents a rather unique advantage.

Throughout training, parallel tempering draws samples from an extended system M_t = {q_{i,t}; i ∈ [1, M]}, where q_{i,t} denotes the model with inverse temperature β_i ∈ [0, 1] obtained after t steps of gradient descent. Each model q_{i,t} (associated with a unique partition function Z_{i,t}) represents a smoothed version of the target distribution q_{1,t} (with β_1 = 1). The inverse temperature β_i = 1/T_i ∈ [0, 1] controls the degree of smoothing, with smaller values of β_i leading to distributions which are easier to sample from. To leverage these fast-mixing chains, PT alternates k steps of Gibbs sampling (performed independently at each temperature) with cross-temperature state swaps. These are proposed between neighboring chains using a Metropolis-Hastings-based acceptance criterion.

¹ Stochastic gradient descent is one of the most popular methods for training MRFs, precisely because second-order optimization methods typically require a deterministic gradient, whereas sampling-based estimators are the only practical option for models with an intractable partition function.
If we denote the particle obtained by each model q_{i,t} after k steps of Gibbs sampling as x_{i,t}, then the swap acceptance ratio r_{i,t} for chains (i, i + 1) is given by:

r_{i,t} = min( 1, [q̃_{i,t}(x_{i+1,t}) q̃_{i+1,t}(x_{i,t})] / [q̃_{i,t}(x_{i,t}) q̃_{i+1,t}(x_{i+1,t})] )   (2)

These swaps ensure that samples from highly ergodic chains are gradually swapped into lower-temperature chains. Our swapping schedule is the deterministic even-odd algorithm [14], which proposes swaps between all pairs (q_{i,t}, q_{i+1,t}) with even i's, followed by those with odd i's. The gradient is then estimated by using the sample which was last swapped into temperature β_1. To reduce the variance of our estimate, we run multiple Markov chains per temperature, yielding a mini-batch of model samples X_{i,t} = {x^{(n)}_{i,t} ∼ q_{i,t}(x); 1 ≤ n ≤ N} at each time-step and temperature. SML with Adaptive Parallel Tempering (SML-APT) [6] further improves upon SML-PT by automating the choice of temperatures. It does so by maximizing the flow of particles between extremal temperatures, yielding better ergodicity and more robust sampling in the negative phase of training.

3 Tracking the Partition Function

Unrolling in time (learning iterations) the M models being simulated by PT, we can envision a two-dimensional lattice of RBMs indexed by (i, t). As previously mentioned, gradient descent learning causes q_{i,t}, the model with inverse temperature β_i obtained at time-step t, to be close to q_{i,t−1}. We can thus apply importance sampling between adjacent temporal models² to obtain an estimate of ζ_{i,t} − ζ_{i,t−1}, where ζ_{i,t} denotes the log partition function log Z_{i,t}; this estimate is denoted as O^{Δt}_{i,t}. Inspired by the annealing distributions used in AIS, one might think to iterate this process from a known quantity ζ_{i,1} in order to estimate ζ_{i,t}. Unfortunately, the variance of such an estimate would grow quickly with t. PT provides an interesting solution to this problem, by simulating an extended system M_t where the β_i's are selected such that q_{i,t} and q_{i+1,t} have enough overlap to allow for frequent cross-temperature state swaps.
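For Gibbs measures q̃_i(x) = exp(−β_i E(x)), the ratio in Eq. (2) reduces to exp((β_i − β_{i+1})(E(x_i) − E(x_{i+1}))). A sketch of the acceptance test and the deterministic even-odd sweep follows; for brevity, scalar energies stand in for the particle states here, whereas a real implementation would swap the states themselves.

```python
import numpy as np

def swap_prob(E_xi, E_xip1, beta_i, beta_ip1):
    """Eq. (2) for q~_i(x) = exp(-beta_i E(x)):
    r = min(1, exp((beta_i - beta_ip1) * (E_xi - E_xip1)))."""
    return min(1.0, np.exp((beta_i - beta_ip1) * (E_xi - E_xip1)))

def even_odd_sweep(energies, betas, rng):
    """Deterministic even-odd schedule: propose swaps for all even
    pairs (0,1), (2,3), ..., then for all odd pairs (1,2), (3,4), ..."""
    for parity in (0, 1):
        for i in range(parity, len(betas) - 1, 2):
            if rng.random() < swap_prob(energies[i], energies[i + 1],
                                        betas[i], betas[i + 1]):
                energies[i], energies[i + 1] = energies[i + 1], energies[i]
    return energies
```

Note that a swap moving a lower-energy state into a colder (higher-β) chain is always accepted, which is how well-mixed high-temperature samples percolate down to β_1.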
This motivates using bridge sampling [2] to provide an estimate of ζ_{i+1,t} − ζ_{i,t}, the difference in log partition functions between temperatures β_{i+1} and β_i. We denote this estimate as O^{Δβ}_{i,t}. Additionally, we can treat ζ_{M,t} as a known quantity during training, by setting β_M = 0.³ Beginning with ζ_{M,t} (see definition in Fig. 1), repeated application of bridge sampling alone could in principle arrive at an accurate estimate of {ζ_{i,t}; i ∈ [1, M], t ∈ [1, T]}. However, reducing the variance sufficiently to provide useful estimates of the log partition function would require using a relatively large number of samples at each temperature. Within the context of RBM training, the required number of samples at each of the parallel chains would have an excessive computational cost. Nonetheless, even with relatively few samples, the bridge sampling estimate provides an additional source of information regarding the log partition function. Our strategy is to combine the two high-variance estimates O^{Δt}_{i,t} and O^{Δβ}_{i,t} by treating the unknown log partition functions as a latent state to be tracked by a Kalman filter. In this framework, we consider O^{Δt}_{i,t} and O^{Δβ}_{i,t} as observed quantities, used to iteratively refine the joint distribution over the latent state at each learning iteration. Formally, we define this latent state to be ζ_t = [ζ_{1,t}, . . . , ζ_{M,t}, b_t], where b_t is an extra term to account for a systematic bias in O^{Δt}_{1,t} (see Sec. 3.2 for details). The corresponding graphical model is shown in Figure 1.

² This same technique was recently used in [5], in the context of learning rate adaptation.
³ The visible units of an RBM with zero weights are marginally independent. Its log partition function is thus given as Σ_i log(1 + exp(b_i)) + n_h · log(2).
Figure 1: A directed graphical model for log partition function tracking, with observed nodes O^{Δt}_{1,t}, …, O^{Δt}_{M,t} (importance sampling) and O^{Δβ}_{1,t}, …, O^{Δβ}_{M−1,t} (bridge sampling), and latent nodes ζ_{1,t}, …, ζ_{M,t} and b_t. The shaded nodes represent observed variables, and the double-walled nodes represent the tractable ζ_{M,:} with β_M = 0. For clarity of presentation, we show the bias term as distinct from the other ζ_{i,t} (recall b_t = ζ_{M+1,t}).

System equations:
p(ζ_0) = N(µ_0, Σ_0)
p(ζ_t | ζ_{t−1}) = N(ζ_{t−1}, Σ_ζ)
p(O^{(Δt)}_t | ζ_t, ζ_{t−1}) = N(C[ζ_t, ζ_{t−1}]⊤, Σ_{Δt})
p(O^{(Δβ)}_t | ζ_t) = N(Hζ_t, Σ_{Δβ})

Here C = [ I_M  e_1  −I_M  0 ], where e_1 = (1, 0, …, 0)⊤ picks out the bias term (so that row i of C[ζ_t, ζ_{t−1}]⊤ is ζ_{i,t} − ζ_{i,t−1} + b_t·I_{i=1}), and H is the (M − 1) × (M + 1) first-difference matrix whose i-th row has −1 and +1 in positions i and i + 1 and zeros elsewhere (including the bias column):

H = [ −1 +1  0 ⋯  0  0
       0 −1 +1 ⋯  0  0
       ⋮              ⋮
       0  0  ⋯ −1 +1  0 ]

3.1 Model Dynamics

The first step is to specify how we expect the log partition function to change over training iterations, i.e., our prior over the model dynamics. SML training of the RBM model parameters is a stochastic gradient descent algorithm (typically over a mini-batch of N examples) where the parameters change by small increments specified by an approximation to the likelihood gradient. This implies that both the model distribution and the partition function change relatively slowly over learning increments, with the rate of change being a function of the SML learning rate; i.e., we expect q_{i,t} and ζ_{i,t} to be close to q_{i,t−1} and ζ_{i,t−1}, respectively. Our model dynamics are thus simple and capture the fact that the log partition function is slowly changing. Characterizing the evolution of the log partition functions as independent Gaussian processes, we model the probability of ζ_t conditioned on ζ_{t−1} as p(ζ_t | ζ_{t−1}) = N(ζ_{t−1}, Σ_ζ), a normal distribution with mean ζ_{t−1} and fixed diagonal covariance Σ_ζ = Diag[σ²_Z, …, σ²_Z, σ²_b]. σ²_Z and σ²_b are hyper-parameters controlling how quickly the latent states ζ_{i,t} and b_t are expected to change between learning iterations.
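The random-walk dynamics above yield a standard Kalman predict step: the posterior mean is carried forward unchanged and Σ_ζ is added to the posterior covariance. A minimal sketch (the dimensions and noise values are illustrative assumptions):

```python
import numpy as np

def predict(mu, Sigma, sigma_Z, sigma_b):
    """Kalman predict step under p(zeta_t | zeta_{t-1}) = N(zeta_{t-1}, Sigma_zeta)
    with Sigma_zeta = Diag[sigma_Z^2, ..., sigma_Z^2, sigma_b^2]."""
    M = len(mu) - 1                        # last entry of the state is the bias b_t
    Sigma_zeta = np.diag([sigma_Z**2] * M + [sigma_b**2])
    return mu.copy(), Sigma + Sigma_zeta   # mean unchanged, covariance inflated

mu0 = np.zeros(4)                          # M = 3 log partitions plus the bias
P0 = np.eye(4)
mu1, P1 = predict(mu0, P0, sigma_Z=0.1, sigma_b=0.05)
```

The subsequent measurement updates (Sections 3.2 and 3.3) would then shrink this inflated covariance using the two observation models.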
3.2 Importance Sampling Between Learning Iterations

The observation distribution p(O^{(Δt)}_t | ζ_t, ζ_{t−1}) = N(C [ζ_t, ζ_{t−1}]^T, Σ_Δt) models the relationship between the evolution of the latent log partitions and the statistical measurements O^{(Δt)}_t = [O^{(Δt)}_{1,t}, ..., O^{(Δt)}_{M,t}] given by importance sampling, with O^{Δt}_{i,t} defined as:

O^{Δt}_{i,t} = log( (1/N) Σ_{n=1}^{N} w^{(n)}_{i,t} )   with   w^{(n)}_{i,t} = q̃_{i,t}(x^{(n)}_{i,t−1}) / q̃_{i,t−1}(x^{(n)}_{i,t−1}).    (3)

In the above distribution, the matrix C encodes the fact that the average importance weights estimate ζ_{i,t} − ζ_{i,t−1} + b_t · I_{i=1}, where I is the indicator function. It is formally defined in Fig. 1. Σ_Δt is a diagonal covariance matrix, whose elements are updated online from the estimated variances of the log-importance weights. At time-step t, the i-th entry of its diagonal is thus given by Var[w_{i,t}] / (Σ_n w^{(n)}_{i,t})². The term b_t accounts for a systematic bias in O^{(Δt)}_{1,t}. It stems from the reuse of samples X_{1,t−1}: first, for estimating the negative phase gradient at time-step t−1 (i.e. the gradient applied between q_{i,t−1} and q_{i,t}) and second, to compute the importance weights of Eq. 3. Since the SML gradient acts to lower the probability of negative particles, w^{(n)}_{i,t} is biased.

3.3 Bridging the Parallel Tempering Temperature Gaps

Consider now the other dimension of our parallel tempered lattice of RBMs: temperature. As previously mentioned, neighboring distributions in PT are designed to have significant overlap in their densities in order to permit particle swaps. However, the intermediate distributions q_{i,t}(v, h) are not so close to one another that we can use them as the intermediate distributions of AIS. AIS typically requires thousands of intermediate chains, and maintaining that number of parallel chains would carry a prohibitive computational burden. On the other hand, the parallel tempering strategy of spacing the temperatures to ensure moderately frequent swapping nicely matches the ideal operating regime of bridge sampling [2].
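To make this concrete, the following is a log-domain sketch of a bridge-sampling estimate between two adjacent unnormalized densities, using the approximately optimal bridging distribution of Eq. 4 below. This is an illustration under our own naming conventions, not the authors' implementation; only unnormalized log-densities evaluated at each chain's samples are assumed available:

```python
import numpy as np

def _logsumexp(x):
    # numerically stable log(sum(exp(x)))
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def log_bridge(log_qi, log_qj, log_s):
    # approximately optimal bridge, in the log domain:
    # q*(x) = q~_i(x) q~_{i+1}(x) / (s q~_i(x) + q~_{i+1}(x))
    return log_qi + log_qj - np.logaddexp(log_s + log_qi, log_qj)

def bridge_obs(log_qi_xi, log_qj_xi, log_qi_xj, log_qj_xj, log_s):
    """Bridge-sampling estimate of zeta_{i+1,t} - zeta_{i,t}.
    log_qi_xi / log_qj_xi: both log-densities at samples x_i ~ q_i;
    log_qi_xj / log_qj_xj: both log-densities at samples x_{i+1} ~ q_{i+1}."""
    log_u = log_bridge(log_qi_xi, log_qj_xi, log_s) - log_qi_xi
    log_v = log_bridge(log_qi_xj, log_qj_xj, log_s) - log_qj_xj
    return _logsumexp(log_u) - _logsumexp(log_v)
```

A simple sanity check: when the two unnormalized densities are identical, the estimated log-partition gap is exactly zero, whatever coarse guess `log_s` is used.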
We thus consider a second observation model, p(O^{(Δβ)}_t | ζ_t) = N(H ζ_t, Σ_Δβ), with H defined in Fig. 1. The quantities O^{(Δβ)}_t = [O^{Δβ}_{1,t}, ..., O^{Δβ}_{M−1,t}] are obtained via bridge sampling as estimates of ζ_{i+1,t} − ζ_{i,t}. Entries O^{Δβ}_{i,t} are given by:

O^{Δβ}_{i,t} = log Σ_{n=1}^{N} u^{(n)}_{i,t} − log Σ_{n=1}^{N} v^{(n)}_{i,t},   where   u^{(n)}_{i,t} = q*_{i,t}(x^{(n)}_{i,t}) / q̃_{i,t}(x^{(n)}_{i,t}),   v^{(n)}_{i,t} = q*_{i,t}(x^{(n)}_{i+1,t}) / q̃_{i+1,t}(x^{(n)}_{i+1,t}).    (4)

The bridging distribution [2, 19] q*_{i,t} is chosen such that it has large support with both q_i and q_{i+1}. For all i ∈ [1, M−1], we choose the approximately optimal distribution

q^{(opt)}_{i,t}(x) = q̃_{i,t}(x) q̃_{i+1,t}(x) / (s_{i,t} q̃_{i,t}(x) + q̃_{i+1,t}(x)),   where s_{i,t} ≈ Z_{i+1,t} / Z_{i,t}.

Since the Z_{i,t}'s are the very quantities we are trying to estimate, this definition may seem problematic. However, it is possible to start with a coarse estimate of s_{i,1} and refine it in subsequent iterations by using the output of our tracking algorithm. Σ_Δβ is once again a diagonal covariance matrix, updated online from the variance of the log-importance weights u and v [19]. The i-th entry is given by Var[u_{i,t}] / (Σ_n u^{(n)}_{i,t})² + Var[v_{i,t}] / (Σ_n v^{(n)}_{i,t})².

3.4 Kalman Filtering of the Log-Partition Function

In the above we have described two sources of information regarding the log partition function for each of the RBMs in the lattice. In this section we describe a method to fuse all available information to improve the overall accuracy of the estimate of every log partition function. We now consider the steps involved in the inference process in moving from an estimate of the posterior over the latent state at time t−1 to an estimate of the posterior at time t. We begin by assuming we know the posterior p(ζ_{t−1} | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}), where O^{(·)}_{t−1:0} = [O^{(·)}_1, ..., O^{(·)}_{t−1}].
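Backing up briefly to the first observation model: the average importance weight in Eq. 3 overflows easily if computed naively, so in practice it would be evaluated in the log domain. A minimal sketch (names ours, illustrative only):

```python
import numpy as np

def importance_obs(log_q_t, log_q_tm1):
    """Eq. 3 in the log domain: O = log( (1/N) sum_n w_n ), with
    log w_n = log q~_{i,t}(x_n) - log q~_{i,t-1}(x_n) evaluated at the
    previous iteration's negative particles x_n."""
    log_w = np.asarray(log_q_t) - np.asarray(log_q_tm1)
    m = np.max(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))
```

When the parameters barely change between iterations, log_w ≈ 0 and the observation is near zero, matching the slow-dynamics prior of Sec. 3.1: ζ_{i,t} ≈ ζ_{i,t−1}.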
We follow the treatment of Neal [18] in characterizing our uncertainty regarding ζ_{i,t} as a Gaussian distribution, and define p(ζ_{t−1} | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}) ∼ N(μ_{t−1,t−1}, P_{t−1,t−1}), a multivariate Gaussian with mean μ_{t−1,t−1} and covariance P_{t−1,t−1}. The double index notation is used to indicate which is the latest observation being conditioned on, for each of the two types of observations: e.g. μ_{t,t−1} represents the posterior mean given O^{(Δt)}_{t:0} and O^{(Δβ)}_{t−1:0}. Departing from the typical Kalman filter setting, O^{(Δt)}_t depends on both ζ_t and ζ_{t−1}. In order to incorporate this observation into our estimate of the latent state, we first need to specify the prior joint distribution p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}) = p(ζ_t | ζ_{t−1}) p(ζ_{t−1} | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}), with p(ζ_t | ζ_{t−1}) as defined in Sec. 3.1. Observation O^{(Δt)}_t is then incorporated through Bayes' rule, yielding p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t−1:0}). Having incorporated the importance sampling estimate into the model, we can then marginalize over ζ_{t−1} (which is no longer required), to yield p(ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t−1:0}). Finally, it remains only to incorporate the bridge sampler estimate O^{(Δβ)}_t by a second application of Bayes' rule, which gives us p(ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t:0}), the updated posterior over the latent state at time-step t. The detailed inference equations are provided in Fig. 2 and can be derived easily from standard textbook equations on products and marginals of normal distributions [4].
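The four steps just described can be sketched compactly in information form (an illustration under our own conventions, not the authors' code; note that here the stacked state is [ζ_{t−1}; ζ_t], so the `C` passed in must follow that ordering, and dense matrix inverses are used for clarity rather than efficiency):

```python
import numpy as np

def kf_step(mu, P, O_dt, O_db, Sig_zeta, Sig_dt, Sig_db, C, H):
    """One tracking update: fuse the importance-sampling observation O_dt
    (which depends on both zeta_{t-1} and zeta_t) and the bridge-sampling
    observation O_db into the posterior over the latent log partitions."""
    D = len(mu)
    # (i) joint prior over (zeta_{t-1}, zeta_t) under the random-walk dynamics
    eta = np.concatenate([mu, mu])
    V = np.block([[P, P], [P, Sig_zeta + P]])
    # (ii) condition on O_dt via Bayes rule, in information form
    Vi = np.linalg.inv(V)
    V2 = np.linalg.inv(Vi + C.T @ np.linalg.inv(Sig_dt) @ C)
    eta2 = V2 @ (C.T @ np.linalg.inv(Sig_dt) @ O_dt + Vi @ eta)
    # (iii) marginalize out zeta_{t-1}: keep the bottom half / quadrant
    mu_t, P_t = eta2[D:], V2[D:, D:]
    # (iv) condition on the bridge-sampling observation O_db
    P_tt = np.linalg.inv(np.linalg.inv(P_t) + H.T @ np.linalg.inv(Sig_db) @ H)
    mu_tt = P_tt @ (H.T @ np.linalg.inv(Sig_db) @ O_db + np.linalg.inv(P_t) @ mu_t)
    return mu_tt, P_tt
```

A sanity check on the structure: as the observation covariances grow (uninformative measurements), the updated mean collapses back to the prior mean, as it should.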
Inference Equations:

(i) p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}) = N(η_{t−1,t−1}, V_{t−1,t−1}) with
    η_{t−1,t−1} = [μ_{t−1,t−1}; μ_{t−1,t−1}]   and   V_{t−1,t−1} = [ P_{t−1,t−1}, P_{t−1,t−1}; P_{t−1,t−1}, Σ_ζ + P_{t−1,t−1} ]

(ii) p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t−1:0}) = N(η_{t,t−1}, V_{t,t−1}) with
    V_{t,t−1} = (V_{t−1,t−1}^{−1} + C^T Σ_{Δt}^{−1} C)^{−1}   and   η_{t,t−1} = V_{t,t−1} (C^T Σ_{Δt}^{−1} O^{(Δt)}_t + V_{t−1,t−1}^{−1} η_{t−1,t−1})

(iii) p(ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t−1:0}) = N(μ_{t,t−1}, P_{t,t−1}) with μ_{t,t−1} = [η_{t,t−1}]_2 and P_{t,t−1} = [V_{t,t−1}]_{2,2}

(iv) p(ζ_t | O^{(Δt)}_{t:0}, O^{(Δβ)}_{t:0}) = N(μ_{t,t}, P_{t,t}) with
    P_{t,t} = (P_{t,t−1}^{−1} + H^T Σ_{Δβ}^{−1} H)^{−1}   and   μ_{t,t} = P_{t,t} (H^T Σ_{Δβ}^{−1} O^{(Δβ)}_t + P_{t,t−1}^{−1} μ_{t,t−1})

Figure 2: Inference equations for our log partition tracking algorithm, a variant on the Kalman filter. For any vector v and matrix V, we use the notation [v]_2 to denote the vector obtained by preserving the bottom half elements of v and [V]_{2,2} to indicate the lower right-hand quadrant of V.

4 Experimental Results

For the following experiments, SML was performed using either constant or decreasing learning rates. We used the decreasing schedule ε_t = min(ε_init · α / (t+1), ε_init), where ε_t is the learning rate at time-step t, ε_init is the initial or base learning rate and α is the decrease constant. Entries of Σ_ζ (see Section 3.1) were set as follows. We set σ²_Z = +∞, which is to say that we did not exploit the smoothness prior when estimating the prior distribution over the joint p(ζ_{t−1}, ζ_t | O^{(Δt)}_{t−1:0}, O^{(Δβ)}_{t−1:0}). σ²_b was set to 10⁻³ · ε_t, allowing the estimated bias on O^{(Δt)}_{1,t} to change faster for large learning rates. When initializing the RBM visible offsets⁴ as proposed in [8], the intermediate distributions of Eq. 1 lead to sub-optimal swap rates between adjacent chains early in training, with a direct impact on the quality of tracking. In our experiments, we avoid this issue by using the intermediate distributions q_{i,t}(x) ∝ exp[β_i · (−h^T W v − c^T h) − b^T v]. We tested mini-batch sizes N ∈ {10, 20}.
Comparing to Exact Likelihood. We start by comparing the performance of our tracking algorithm to the exact likelihood, obtained by marginalizing over both visible and hidden units. We chose 25 hidden units and trained on the ubiquitous MNIST [13] dataset for 300k updates, using both fixed and adaptive learning rates. The main results are shown in Figure 3. In Figure 3(a), we can see that our tracker provides a very good fit to the likelihood with ε_init = 0.001 and decrease constants α ∈ {10³, 10⁴, 10⁵}. Increasing the base learning rate to ε_init = 0.01 in Figure 3(b), we maintain a good fit up to α = 10⁴, with a small dip in performance at 50k updates. Our tracker fails, however, to capture the oscillatory behavior engendered by too high a learning rate (ε_init = 0.01, α = 10⁵). It is interesting to note that the failure mode of our algorithm seems to coincide with an unstable optimization process.

Comparing to AIS for Large-Scale Models. In evaluating the performance of our tracking algorithm on larger models, exact computation of the likelihood is no longer possible, so we use AIS as our baseline.⁵ Our models consisted of RBMs with 500 hidden units, trained using SML-APT [6] on the MNIST and Caltech Silhouettes [16] datasets. We performed 200k updates, with learning rate parameters ε_init ∈ {0.01, 0.001} and α ∈ {10³, 10⁴, 10⁵}. On MNIST, AIS estimated the test-likelihood of our best model at −94.34 ± 3.08 (where ± indicates the 3σ confidence interval), while our tracking algorithm reported a value of −89.96. On Caltech Silhouettes, our model reached −134.23 ± 21.14 according to AIS, while our tracker reported

⁴ Each b_k is initialized to log(x̄_k / (1 − x̄_k)), where x̄_k is the mean of the k-th dimension on the training set.
⁵ Our base AIS configuration was 10³ intermediate distributions spaced linearly between β ∈ [0, 0.5], 10⁴ distributions for the interval [0.5, 0.9] and 10⁴ for [0.9, 1.0]. Estimates of log Z are averaged over 100 annealed importance weights.
−114.31. To put these numbers in perspective, Salakhutdinov and Murray [23] report values of −125.53, −105.50 and −86.34 for 500-hidden-unit RBMs trained with CD-{1, 3, 25} respectively. Marlin et al. [16] report around −120 for Caltech Silhouettes, again using 500 hidden units.

Figure 3: Comparison of exact test-set likelihood and estimated likelihood as given by AIS and our tracking algorithm. We trained a 25-hidden-unit RBM for 300k updates using SML, with a learning rate schedule ε_t = min(α · ε_init / (t+1), ε_init), with (left) ε_init = 0.001 and (right) ε_init = 0.01, varying α ∈ {10³, 10⁴, 10⁵}.

Figure 4: (left) Plotted on the left y-axis are the Kalman filter measurements O^{(Δβ)}_t, our log partition estimate ζ_{1,t} and point estimates of ζ_{1,t} obtained by AIS. On the right y-axis, measurement O^{(Δt)}_t is plotted, along with the estimated bias b_t. Note how b_t becomes progressively less pronounced as ε_t decreases and the model converges. Also of interest, the variance on O^{(Δβ)}_t increases with t but is compensated by a decreasing variance on O^{(Δt)}_t, yielding a relatively smooth estimate of ζ_{1,t}. (not shown) The ±3σ confidence interval of the AIS estimate at 200k updates was measured to be 3.08. (right) Example of early-stopping on the dna dataset.

Figure 4(left) shows a detailed view of the Kalman filter measurements and its output, for the best performing MNIST model.
We can see that the variance on O^{(Δβ)}_t (plotted on the left y-axis) grows slowly over time, which is mitigated by a decreasing variance on O^{(Δt)}_t (plotted on the right y-axis). As the model converges and the learning rate decreases, q_{i,t−1} and q_{i,t} become progressively closer and the importance sampling estimates become more robust. The estimated bias term b_t also converges to zero. An important point to note is that a naive linear spacing of temperatures yielded low exchange rates between neighboring temperatures, with adverse effects on the quality of our bridge sampling estimates. As a result, we observed a drop in performance, both in likelihood as well as tracking performance. Adaptive tempering [6] (with a fixed number of chains M) proved crucial in getting good tracking for these experiments.

Early-Stopping Experiments. Our final set of experiments highlights the performance of our method on a wide variety of datasets [11]. In these experiments, we use our estimate of the log partition to monitor model performance on a held-out validation set.

Dataset       RBM (Kalman)   RBM (AIS)          RBM-25     NADE
adult         -15.24         -15.70 (± 0.50)    -16.29     -13.19
connect4      -15.77         -16.81 (± 0.67)    -22.66     -11.99
dna           -87.97         -88.51 (± 0.97)    -96.90     -84.81
mushrooms     -10.49         -14.68 (± 30.75)   -15.15     -9.81
nips          -270.10        -271.23 (± 0.58)   -277.37    -273.08
ocr letters   -33.87         -31.45 (± 2.70)    -43.05     -27.22
rcv1          -46.89         -48.61 (± 0.69)    -48.88     -46.66
web           -28.95         -29.91 (± 0.74)    -29.38     -28.39

Table 1: Test set likelihood on various datasets. Models were trained using SML-PT. Early-stopping was performed by monitoring likelihood on a hold-out validation set, using our KF estimate of the log partition function. Best models (i.e. the choice of hyper-parameters) were then chosen according to the AIS likelihood estimate. Results for 25-hidden-unit RBMs (RBM-25) and NADE are taken from [11]. ± indicates a confidence interval of three standard deviations.
When the onset of over-fitting is detected, we store the model parameters and report the associated test-set likelihood, as estimated by both AIS and our tracking algorithm. The advantages of such an early-stopping procedure are shown in Figure 4(right), where training log-likelihood increases throughout training while validation performance starts to decrease around 250 epochs. Detecting over-fitting without tracking the log partition would require a dense grid of AIS runs, which would prove computationally prohibitive. We tested parameters in the following ranges: number of hidden units in {100, 200, 500, 1000} (depending on dataset size), learning rates in {10⁻², 10⁻³, 10⁻⁴}, either held constant during training or annealed with constants α ∈ {10³, 10⁴, 10⁵}. For tempering, we used 10 fixed temperatures, spaced linearly between β = [0, 1]. SGD was performed using mini-batches of size {10, 100} when estimating the gradient, and mini-batches of size {10, 20} for our set of tempered chains (we thus simulate 10 × {10, 20} tempered chains in total). As can be seen in Table 1, our tracker performs very well compared to the AIS estimates, across all datasets. Efforts to lower the variance of the AIS estimate proved unsuccessful, even going as far as 10⁵ intermediate distributions.

5 Discussion

In this paper, we have shown that while exact calculation of the partition function of RBMs may be intractable, one can exploit the smoothness of gradient descent learning in order to approximately track the evolution of the log partition function during learning. Treating the ζ_{i,t}'s as latent variables, the graphical model of Figure 1 allowed us to combine multiple sources of information to achieve good tracking of the log partition function throughout training, on a variety of datasets. We note, however, that good tracking performance is contingent on the ergodicity of the negative phase sampler.
Unsurprisingly, this is the same condition required by SML for accurate estimation of the negative phase gradient. The method presented in the paper is also computationally attractive, with only a small computational overhead relative to SML-PT training. The added computational cost lies in the computation of the importance weights for importance sampling and bridge sampling. However, this boils down to computing free energies which are mostly pre-computed in the course of gradient updates, with the sole exception being the computation of q̃_{i,t}(x_{i,t−1}) in the importance sampling step. In comparison to AIS, our method allows us to fairly accurately track the log partition function, and at a per-estimate cost well below that of AIS. Having a reliable and accurate online estimate of the log partition function opens the door to a wide range of new research directions.

Acknowledgments

The authors acknowledge the financial support of NSERC and CIFAR; and Calcul Québec for computational resources. We also thank Hugo Larochelle for access to the datasets of Sec. 4; Hannes Schulz, Andreas Mueller, Olivier Delalleau and David Warde-Farley for feedback on the paper and algorithm; along with the developers of Theano [3].

References

[1] Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127. Also published as a book, Now Publishers, 2009.
[2] Bennett, C. (1976). Efficient estimation of free energy differences from Monte Carlo data. Journal of Computational Physics, 22(2), 245–268.
[3] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy). Oral.
[4] Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
[5] Cho, K., Raiko, T., and Ilin, A. (2011).
Enhanced gradient and adaptive learning rate for training restricted Boltzmann machines. In L. Getoor and T. Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 105–112, New York, NY, USA. ACM.
[6] Desjardins, G., Courville, A., and Bengio, Y. (2010a). Adaptive parallel tempering for stochastic maximum likelihood learning of RBMs. NIPS*2010 Deep Learning and Unsupervised Feature Learning Workshop.
[7] Desjardins, G., Courville, A., Bengio, Y., Vincent, P., and Delalleau, O. (2010b). Tempered Markov chain Monte Carlo for training of restricted Boltzmann machines. In JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), volume 9, pages 145–152.
[8] Hinton, G. (2010). A practical guide to training restricted Boltzmann machines. Technical Report 2010-003, University of Toronto. Version 1.
[9] Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.
[10] Iba, Y. (2001). Extended ensemble Monte Carlo. International Journal of Modern Physics, C12, 623–656.
[11] Larochelle, H. and Murray, I. (2011). The Neural Autoregressive Distribution Estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011), volume 15 of JMLR: W&CP.
[12] Larochelle, H., Bengio, Y., and Turian, J. (2010). Tractable multivariate binary density estimation and the restricted Boltzmann forest. Neural Computation, 22(9), 2285–2307.
[13] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
[14] Lingenheil, M., Denschlag, R., Mathias, G., and Tavan, P. (2009). Efficiency of exchange schemes in replica exchange. Chemical Physics Letters, 478(1-3), 80–84.
[15] Marinari, E. and Parisi, G. (1992). Simulated tempering: A new Monte Carlo scheme.
EPL (Europhysics Letters), 19(6), 451.
[16] Marlin, B., Swersky, K., Chen, B., and de Freitas, N. (2009). Inductive principles for restricted Boltzmann machine learning. In Proceedings of The Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS'10), volume 9, pages 509–516.
[17] Murray, I. and Ghahramani, Z. (2004). Bayesian learning in undirected graphical models: Approximate MCMC algorithms.
[18] Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2), 125–139.
[19] Neal, R. M. (2005). Estimating ratios of normalizing constants using linked importance sampling.
[20] Salakhutdinov, R. (2010a). Learning deep Boltzmann machines using adaptive MCMC. In L. Bottou and M. Littman, editors, Proceedings of the Twenty-Seventh International Conference on Machine Learning (ICML-10), volume 1, pages 943–950. ACM.
[21] Salakhutdinov, R. (2010b). Learning in Markov random fields using tempered transitions. In NIPS'09.
[22] Salakhutdinov, R. and Hinton, G. E. (2009). Deep Boltzmann machines. In AISTATS'2009, volume 5, pages 448–455.
[23] Salakhutdinov, R. and Murray, I. (2008). On the quantitative analysis of deep belief networks. In W. W. Cohen, A. McCallum, and S. T. Roweis, editors, ICML 2008, volume 25, pages 872–879. ACM.
[24] Taylor, G. and Hinton, G. (2009). Factored conditional restricted Boltzmann machines for modeling motion style. In L. Bottou and M. Littman, editors, ICML 2009, pages 1025–1032. ACM.
[25] Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In W. W. Cohen, A. McCallum, and S. T. Roweis, editors, ICML 2008, pages 1064–1071. ACM.
[26] Tieleman, T. and Hinton, G. (2009). Using fast weights to improve persistent contrastive divergence. In L. Bottou and M. Littman, editors, ICML 2009, pages 1033–1040. ACM.
[27] Welling, M., Rosen-Zvi, M., and Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval.
In NIPS'04, volume 17, Cambridge, MA. MIT Press.
Portmanteau Vocabularies for Multi-Cue Image Representation

Fahad Shahbaz Khan¹, Joost van de Weijer¹, Andrew D. Bagdanov¹,², Maria Vanrell¹
¹ Centre de Visió per Computador, Computer Science Department, Universitat Autònoma de Barcelona, Edifici O, Campus UAB (Bellaterra), Barcelona, Spain
² Media Integration and Communication Center, University of Florence, Italy

Abstract

We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau¹ words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.

1 Introduction

Image categorization is the task of classifying an image as containing an object from a predefined list of categories. One of the most successful approaches to this problem is the bag-of-words (BOW) model [4, 15, 11, 2]. In the bag-of-words model an image is first represented by a collection of local image features detected either sparsely or in a regular, dense grid. Each local feature is then represented by one or more cues, each describing one aspect of a small region around the corresponding feature. Typical local cues include color, shape, and texture.
These cues are then quantized into visual words and the final image representation is a histogram over these visual vocabularies. In the final stage of the BOW approach the histogram representations are sent to a classifier. The success of BOW is highly dependent on the quality of the visual vocabulary. In this paper we investigate visual vocabularies which are used to represent images whose local features are described by both shape and color. To extend BOW to multiple cues, two properties are especially important: cue binding and cue weighting. A visual vocabulary is said to have the binding property when two independent cues appearing at the same location in an image remain coupled in the final image representation. For example, if every local patch in an image is independently described by a shape word and a color word, in the final image representation using compound words the binding property ensures that shape and color words coming from the same feature location are coupled in the final representation. The term binding is borrowed from the neuroscience field, where it is used to describe the way in which humans select and integrate the separate cues of objects in the correct combinations in order to accurately recognize them [17]. The property of cue weighting implies that it is possible to adapt the relevance of each cue depending on the dataset. The importance of cue weighting can be seen from the success of Multiple Kernel Learning (MKL) techniques, where weights for each cue are automatically learned [3, 13, 21, 14, 1, 20]. Traditionally, two approaches exist to handle multiple cues in BOW.

¹ A portmanteau is a combination of two or more words to form a neologism that communicates a concept better than any individual word (e.g. Ski resort + Konference = Skonference). We use the term to describe our vocabularies to emphasize the connotation of combining color and shape words into new, more meaningful representations.
When each cue has its own visual vocabulary the result is known as a late fusion image representation, in which an image is represented as one histogram over shape-words and another histogram over color-words. Such a representation does not have the cue binding property, meaning that it is impossible to know exactly which color-shape events co-occurred at local features. Late fusion does, however, allow cue weighting. Another approach, called early fusion, constructs a single visual vocabulary of joint color-shape words. Representations over early fusion vocabularies have the cue binding property, meaning that the spatial co-occurrence of shape and color events is preserved. However, cue weighting in early fusion vocabularies is very cumbersome, since it must be performed before vocabulary construction, making cross-validation very expensive. Recently, Khan et al. [10] proposed a method which combines cue binding and weighting. However, their final image representation size is equal to the number of vocabulary words times the number of classes, and is therefore not feasible for the large datasets considered in this paper. A straightforward, if combinatorially inconvenient, approach to ensuring the binding property is to create a new vocabulary that contains one word for each combination of original shape and color features. Considering that each of the original shape and color vocabularies may contain thousands of words, the resulting joint vocabulary may contain millions. Such large vocabularies are impractical as estimating joint color-shape statistics is often infeasible due to the difficulty of sampling from limited training data. Furthermore, with so many parameters the resulting classifiers are prone to overfitting. Because of this and other problems, this type of joint feature representation has not been further pursued as a way of ensuring that image representations have the binding property.
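The late-fusion representation just described can be sketched in a few lines (an illustration with our own names, not the authors' code): two per-cue histograms are weighted and concatenated, which makes cue weighting trivial but discards which shape/color words co-occurred at the same patch.

```python
import numpy as np

def late_fusion(shape_hist, color_hist, w):
    """Late fusion: weighted concatenation of per-cue histograms.
    w in [0, 1] trades off shape vs. color (easy cue weighting),
    but per-patch shape/color co-occurrence (binding) is lost."""
    return np.concatenate([w * np.asarray(shape_hist),
                           (1.0 - w) * np.asarray(color_hist)])
```

Setting w = 1 discards color entirely, w = 0 discards shape; in practice w would be cross-validated per dataset.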
In recent years a number of vocabulary compression techniques have appeared that derive small, discriminative vocabularies from very large ones [16, 7, 5]. Most of these techniques are based on information theoretic clustering algorithms that attempt to combine words that are equivalently discriminative for the set of object categories being considered. Because these techniques are guided by the discriminative power of clusters of visual words, estimates of class-conditional visual word probabilities are essential. These recent developments in vocabulary compression allow us to reconsider the direct, Cartesian product approach to building compound vocabularies. These vocabulary compression techniques have been demonstrated on single-cue vocabularies with a few tens of thousands of words. Starting from even moderately sized shape and color vocabularies results in a compound shape-color vocabulary an order of magnitude larger. In such cases, robust estimates of the underlying class-conditional joint-cue distributions may be difficult to obtain. We show that for typical datasets a strong independence assumption about the joint color-shape distribution leads to more robust estimates of the class-conditional distributions needed for vocabulary compression. In addition, our estimation technique allows flexible cue-specific weighting that cannot be easily performed with other cue combination techniques that maintain the binding property. 2 Portmanteau vocabularies In this section we propose a new multi-cue vocabulary construction method that results in compact vocabularies which possess both the cue binding and the cue weighting properties described above. Our approach is to build portmanteau vocabularies of discriminative, compound shape and color words chosen from independently learned color and shape lexicons. The term portmanteau is used in natural language for words which are a blend of two other words and which combine their meaning. 
We use the term portmanteau to describe these compound terms to emphasize the fact that, similarly to the use of neologistic portmanteaux in natural language to capture complex and compound concepts, we create groups of color and shape words to describe semantic concepts inadequately described by shape or color alone. A simple way to ensure the binding property is by considering a product vocabulary that contains a new word for every combination of shape and color terms. Assume that S = {s_1, s_2, ..., s_M} and C = {c_1, c_2, ..., c_N} represent the visual shape and color vocabularies, respectively. Then the product vocabulary is given by W = {w_1, w_2, ..., w_T} = {{s_i, c_j} | 1 ≤ i ≤ M, 1 ≤ j ≤ N}, where T = M × N. We will also use the notation s_m to identify a member of the set S.

Figure 1: Comparison of two estimates of the joint cue distribution p(S, C|R) on two large datasets (left: Bird-200; right: Flower-102). The graphs plot the Jensen-Shannon divergence between each estimate ("Direct Empirical" and "Independence Assumption") and the true joint distribution as a function of the number of training images used to estimate them. The true joint distribution is estimated empirically over all images in each dataset. Estimation using the independence assumption of equation (2) yields similar or better estimates than the empirical counterparts.

A disadvantage of vocabularies of compound terms constructed by considering the Cartesian product of all primitive shape and color words is that the total number of visual words is equal to the number of color words times the number of shape words, which typically results in hundreds of thousands of elements in the final vocabulary. This is impractical for two reasons.
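The product construction is pure index arithmetic; a hypothetical sketch (function and variable names are ours) shows how a bound compound-word histogram would be built from per-feature shape and color word assignments:

```python
import numpy as np

def compound_histogram(shape_words, color_words, M, N):
    """Bind cues per local feature: the compound word for the pair
    (s_i, c_j) is i*N + j, indexing into the product vocabulary W of
    T = M*N words. Returns a normalized histogram over W."""
    compound = np.asarray(shape_words) * N + np.asarray(color_words)
    hist = np.bincount(compound, minlength=M * N).astype(float)
    return hist / hist.sum()
```

Because each patch contributes a single joint index, spatial co-occurrence of shape and color (the binding property) is preserved, at the cost of a T = M × N dimensional representation.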
First, the high dimensionality of the representation hampers the use of complex classifiers such as SVMs. Second, insufficient training data often renders robust estimation of parameters very difficult and the resulting classifiers tend to overfit the training set. Because of these drawbacks, compound product vocabularies have, to the best of our knowledge, not been pursued in the literature. In the next two subsections we discuss our approach to overcoming these two drawbacks.

2.1 Compact Portmanteau Vocabularies

In recent years, several algorithms for feature clustering have been proposed which compress large vocabularies into small ones [16, 7, 5]. To reduce the high dimensionality of the product vocabulary, we apply the Divisive Information-Theoretic feature Clustering (DITC) algorithm [5], which was shown to outperform AIB [16]. Furthermore, DITC has also been successfully employed to construct compact pyramid representations [6]. The DITC algorithm is designed to find a fixed number of clusters which minimize the loss in mutual information between clusters and the class labels of training samples. In our algorithm, the loss in mutual information is measured between the original product vocabulary and the resulting clusters. The algorithm joins words which have similar discriminative power over the set of classes in the image categorization problem. This is measured by the probability distributions p(R | w_t), where R = {r_1, r_2, ..., r_L} is the set of L classes. More precisely, the drop in mutual information I between the vocabulary W and the class labels R, when going from the original set of vocabulary words W to the clustered representation W^R = {W_1, W_2, ..., W_J} (where every W_j represents a cluster of words from W), is equal to

I(R; W) − I(R; W^R) = Σ_{j=1}^{J} Σ_{w_t ∈ W_j} p(w_t) KL( p(R | w_t) || p(R | W_j) ),    (1)

where KL is the Kullback-Leibler divergence between two distributions.
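This clustering objective can be sketched numerically as follows (an illustration with our own names; it assumes strictly positive class-conditional distributions, and the cluster conditional p(R|W_j) is the prior-weighted mean of its member distributions):

```python
import numpy as np

def mi_drop(p_w, p_r_given_w, assign, J):
    """Eq. 1: loss in mutual information for a word-to-cluster assignment,
    sum_j sum_{w_t in W_j} p(w_t) * KL( p(R|w_t) || p(R|W_j) )."""
    loss = 0.0
    for j in range(J):
        idx = np.where(assign == j)[0]
        if idx.size == 0:
            continue
        # cluster conditional p(R|W_j): prior-weighted mean of member rows
        pj = p_w[idx] @ p_r_given_w[idx] / p_w[idx].sum()
        for t in idx:
            loss += p_w[t] * np.sum(p_r_given_w[t] * np.log(p_r_given_w[t] / pj))
    return loss
```

Merging words with identical class-conditional distributions costs nothing, while merging words that discriminate different classes incurs a positive loss, which is exactly what drives DITC to join equivalently discriminative words.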
Equation (1) states that the drop in mutual information is equal to the prior-weighted KL-divergence between a word and its assigned cluster. The DITC algorithm minimizes this objective function by alternating computation of the cluster distributions and assignment of compound visual words to their closest cluster. For more details on the DITC algorithm we refer to Dhillon et al. [5]. Here we apply the DITC algorithm to reduce the high dimensionality of the compound vocabularies. We call the compact vocabulary which is the output of the DITC algorithm the portmanteau vocabulary, and its words accordingly portmanteau words. The final image representation p(W^R) is a distribution over the portmanteau words.

Figure 2: The effect of α on DITC clusters. Each of the large boxes contains 100 image patches sampled from one portmanteau word on the Oxford Flower-102 dataset. Top row: five clusters for α = 0.1. Note how these clusters are relatively homogeneous in color, while shape varies considerably within each. Middle row: five clusters sampled for α = 0.5. The clusters show consistency over both color and shape. Bottom row: five clusters sampled for α = 0.9. Notice how in this case shape is instead homogeneous within each cluster.

2.2 Joint distribution estimation

In solving the problem of high dimensionality of the compound vocabularies we have seemingly further complicated the estimation problem. As DITC is based on estimates of the class-conditional distributions p(S, C|R) = p(W|R) over product vocabularies, we have increased the number of parameters to be estimated to M × N × L. This can easily reach millions of parameters for standard image datasets. To solve this problem we propose to estimate the class-conditional distributions by assuming independence of color and shape given the class:

p(s_m, c_n|R) ∝ p(s_m|R) p(c_n|R).   (2)

Note that we do not assume independence of the cues themselves, but rather the less restrictive independence of the cues given the class.
Instead of directly estimating the empirical joint distribution p(S, C|R), we reduce the number of parameters to estimate to (M + N) × L, which in the vocabulary configurations discussed in this paper represents a reduction in complexity of two orders of magnitude. As an additional advantage, we will show in section 2.3 that estimating the joint distribution p(S, C|R) in this way allows us to introduce cue weighting. To verify the quality of the empirical estimates of equation (2) we perform the following experiment. In figure 1 we plot the Jensen-Shannon (JS) divergence between the empirical joint distribution obtained from the test images and the two estimates: direct estimation of the empirical joint distribution p(S, C|R) on the training set, and an approximate estimate made by assuming independence as in equation (2).

Figure 3: The effect of β on DITC clusters. For 20 words, p(R|w_t) is plotted in dotted grey lines. DITC is used to obtain ten portmanteau means p(R|W_j), plotted in different colors. On the left is shown the final clustering for β = 1.0. Note that none of the portmanteau means are especially discriminative for one particular class. On the right, however, for β = 5.0 each portmanteau concentrates on discriminating one class.

Results are provided as a function of the number of training images for two large datasets. A low JS-divergence means a better estimate of the true joint-cue distribution. The plotted lines show the curves for a color cue vocabulary of 100 words and a shape cue vocabulary of 5,000 words, resulting in a product vocabulary of 500,000 words. On both datasets we see that the independence assumption actually leads to a better or equally good estimate of the joint distribution.
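The comparison underlying figure 1 can be reproduced in miniature on synthetic data. This sketch is ours, not the paper's code: it drops the conditioning on the class R for brevity, and the vocabulary sizes, sample count, and smoothing are toy choices.

```python
# Compare two estimates of a joint distribution p(S, C) from few samples:
# a direct empirical histogram versus the product of its marginals
# (the independence assumption of equation (2)), each scored against the
# true joint with the Jensen-Shannon divergence.
import math, random

def js(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

random.seed(0)
M, N = 20, 5                            # toy shape / color vocabulary sizes
ps = [random.random() for _ in range(M)]
pc = [random.random() for _ in range(N)]
ps = [x / sum(ps) for x in ps]
pc = [x / sum(pc) for x in pc]
truth = [ps[i] * pc[j] for i in range(M) for j in range(N)]  # factorizing truth

# Draw a small sample and form the two estimates (with add-one smoothing).
samples = random.choices(range(M * N), weights=truth, k=200)
joint = [1.0] * (M * N)
for s in samples:
    joint[s] += 1
direct = [x / sum(joint) for x in joint]
marg_s = [sum(direct[i * N + j] for j in range(N)) for i in range(M)]
marg_c = [sum(direct[i * N + j] for i in range(M)) for j in range(N)]
indep = [marg_s[i] * marg_c[j] for i in range(M) for j in range(N)]

# Lower JS-divergence against `truth` means a better estimate; with few
# samples the factored estimate typically matches or beats the direct one.
print(js(direct, truth), js(indep, truth))
```

The factored estimate fits (M + N) numbers instead of M × N, which is why it is more robust when samples are scarce, as figure 1 shows on real data.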
Increasing the number of training samples, or starting with smaller color and shape vocabularies and hence reducing the number of parameters to estimate, will improve direct empirical estimates of p(S, C). However, figure 1 shows that for typical vocabulary settings on large datasets the independence assumption results in equally good or better estimates of the joint distribution.

2.3 Cue weighting

Constructing the compact portmanteau vocabularies based on the independence assumption significantly reduces the number of parameters to estimate. Furthermore, as we will see in this section, it allows us to control the relative contribution of the color and shape cues in the final representation. We introduce a weighting parameter α ∈ [0, 1] in the estimate of p(S, C):

p_α(s_m, c_n|R) ∝ p(s_m|R)^α p(c_n|R)^{1−α},   (3)

where an α close to zero results in a larger influence of the color words, and an α close to one leads to a vocabulary which focuses predominantly on shape. To illustrate the influence of α on the vocabulary construction, we show in figure 2 samples from portmanteau words obtained on the Oxford Flower-102 dataset (see figure 4). The DITC algorithm is applied to reduce the product vocabulary of 500,000 compound words to 100 portmanteau words. For settings of α ∈ {0.1, 0.5, 0.9} we show five of the hundred words. Each word is represented by one hundred randomly sampled patches from the dataset which have been assigned to that word. The effect of changing α can be clearly seen. For low α the portmanteau words exhibit homogeneity of color but lack within-cluster shape consistency. On the other hand, for high α the words show strong shape homogeneity, such as low- and high-frequency lines and blobs, while color is more uniformly distributed. For a setting of α = 0.5 the clustering is more consistent in both color and shape.
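The weighting in equation (3) amounts to exponentiating each cue's conditional distribution and renormalizing. A minimal sketch, with toy distributions of our own (the function name and numbers are illustrative, not from the paper):

```python
# Sketch of the cue weighting in equation (3): the class-conditional joint
# over compound words is p(s|R)^alpha * p(c|R)^(1-alpha), renormalized.
def weighted_joint(p_s, p_c, alpha):
    """p_alpha(s, c | r) ∝ p(s|r)^alpha * p(c|r)^(1 - alpha)."""
    raw = [ps ** alpha * pc ** (1 - alpha) for ps in p_s for pc in p_c]
    z = sum(raw)
    return [x / z for x in raw]

p_s = [0.7, 0.3]        # toy p(s|r) for two shape words
p_c = [0.6, 0.3, 0.1]   # toy p(c|r) for three color words

# alpha = 1 ignores color: within each shape word, all color words get
# equal mass, so the vocabulary is effectively shape-only.
only_shape = weighted_joint(p_s, p_c, 1.0)
assert abs(only_shape[0] - only_shape[1]) < 1e-12

# alpha = 0.5 balances both cues; the result is still a distribution.
balanced = weighted_joint(p_s, p_c, 0.5)
assert abs(sum(balanced) - 1.0) < 1e-12
```

Because the weighting is applied to the already-estimated conditionals, changing α only requires re-running DITC, not rebuilding the vocabularies, which is why cross-validating α is cheap compared to early fusion.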
Additionally, another parameter β is introduced:

p_{α,β}(s_m, c_n|R) ∝ ( p(s_m|R)^α p(c_n|R)^{1−α} )^β.   (4)

To illustrate the influence of β, consider the following experiment on synthetic data. We generate a set of 100 words which have random discriminative power p(R|w_t) over L = 10 classes. In figure 3 we show p(R|w_t) for a subset of 20 words in grey, and p(R|W_j) ∝ \sum_{w_t \in W_j} p(w_t) p(R|w_t) for the ten portmanteau words in color. We observe that increasing the β parameter directs DITC to find clusters which are each highly discriminative for a single class, rather than being discriminative over all classes. We found that higher β values often lead to image representations which improve classification results. These weighting parameters are learned through cross-validation on the training set. In practice we found α to change with the dataset according to the importance of color and shape. The β parameter was found to be constant at a value of 5 for the two datasets evaluated in this paper. Both parameters were found to significantly improve results on both datasets.

Figure 4: Example images from the two datasets used in our experiments. Top: images from four categories of the Flower-102 dataset. Bottom: four example images from the Bird-200 dataset.

2.4 Image representation with portmanteau vocabularies

We summarize our approach to constructing portmanteau vocabularies for image representation. We emphasize the fact that our approach is fundamentally about deriving compact multi-cue image representations and, as such, can be used as a drop-in replacement in any bag-of-words pipeline. Image representation by a portmanteau vocabulary built from color and shape cues follows these steps: 1. Independent color and shape vocabularies are constructed by standard K-means clustering over color and shape descriptors extracted from training images. 2.
Empirical class-conditional word distributions p(S|R) and p(C|R) are computed from the training set, and the joint cue distribution p(S, C|R) is estimated assuming conditional independence, weighted as in equation (4). 3. The portmanteau vocabulary is computed with the DITC algorithm. The output of DITC is a list of indexes which maps each member of the compound vocabulary to one of the J portmanteau words. 4. Using the index list output by DITC, the original image features are revisited and the index corresponding to the compound shape-color word at each feature is used to represent each image as a histogram over the portmanteau vocabulary.

3 Experimental results

We follow the standard bag-of-words approach. We use a combination of interest-point detectors along with a dense multi-scale grid detector. The SIFT descriptor [12] is used to construct a shape vocabulary. For color we use the color name descriptor, which is computed by converting sRGB values to color names according to [19], after which each patch is represented as a histogram over the eleven color names. The shape and color vocabularies are constructed using the standard K-means algorithm. In all our experiments we use a shape vocabulary of 5,000 words and a color vocabulary of 100 words. Applying Laplace weighting was not found to influence the results and was therefore not used in the experiments. The classifier is a non-linear, multi-way, one-versus-all SVM using the χ² kernel [24]. Each test image is assigned the label of the classifier giving the highest response, and the final classification score is the mean recognition rate per category. We performed several experiments to validate our approach to building multi-cue vocabularies by comparing with other methods which are based on exactly the same initial SIFT and CN descriptors: • Shape and Color only: a single vocabulary of 5,000 SIFT words and one of 100 CN words. • Early fusion: SIFT and CN are concatenated into a single descriptor.
The relative weight of shape and color is optimized by cross-validation. Note that cross-validation on cue weighting parameters for early fusion must be done over the entire BOW pipeline, from vocabulary construction to classification. Vocabulary size is 5,000. • Direct empirical: DITC based on the empirical distribution of p(S, C|R) over a total of 500,000 compound words estimated on the training set. • Independence assumption: where p(S, C|R) = p(S|R)p(C|R) is assumed. We also show separate results with and without using α and β. In all cases the color-shape visual vocabularies are compressed to 500 visual words, and spatial pyramids are constructed for the final image representation as in [11]. All of the above approaches were evaluated on two standard and challenging datasets: Oxford Flower-102 and Caltech-UCSD Bird-200. The train-test splits are fixed for both datasets and are provided on their respective websites.2

3.1 Results on the Flower-102 and Bird-200 datasets

The Oxford Flower-102 dataset contains 8189 images of 102 different flower species. It is a challenging dataset due to significant scale and illumination changes (see figure 4). The results are presented in table 1(a). We see that shape alone yields results superior to color. Early fusion is reasonably good at 70.5%. This is, however, obtained through laborious cross-validation to find the optimal balance between the CN and SIFT cues. Since our cue weighting is done after the initial vocabulary and histogram construction, cross-validation is significantly faster than for early fusion. The bottom three rows of table 1(a) give the results of our approach to image representation with portmanteau vocabularies in a variety of configurations. The direct empirical estimation of the joint shape-color distribution provides slightly better results than estimation based on the independence assumption.
However, weighting the two visual cues using the α parameter described in equation (3) in the independent estimation of p(s, c|R) improves the results significantly. In particular, the gain of almost 7% obtained by adding β is remarkable. The best recognition performance was obtained for α = 0.8 and β = 5. The Caltech-UCSD Bird-200 dataset contains 6033 images from 200 different bird species. This dataset contains many bird species that closely resemble each other in terms of color and shape cues, making the recognition task extremely difficult. Table 1(a) contains test results for our approach on Bird-200 as well. Interestingly, on this dataset color outperforms shape alone, and early fusion yields only a small improvement over color. Results based on portmanteau vocabularies outperform early fusion, and estimation based on the independence assumption provides better results than direct empirical estimation. These results are further improved by the introduction of cue weighting, with a final score of 22.4% obtained with α = 0.7 and β = 5, outperforming all others.

3.2 Comparison with the state-of-the-art

Recently, an extensive performance evaluation of color descriptors was presented by van de Sande et al. [18]. In this evaluation OpponentSIFT and C-SIFT were reported to provide superior performance on image categorization problems. We construct a visual vocabulary of 5,000 visual words for both OpponentSIFT and C-SIFT and apply the DITC algorithm to compress it to 500 visual words. As shown in table 1(b), our approach provides significantly better results than both OpponentSIFT and C-SIFT, possibly due to the fact that neither supports cue weighting.
2 The Flower-102 dataset at http://www.robots.ox.ac.uk/vgg/research/flowers/ and the Birds-200 set at http://www.vision.caltech.edu/visipedia/CUB-200.html

Table 1: Comparative evaluation of our approach. (a) Classification score on the Flower-102 and Bird-200 datasets for individual features, early fusion, and several configurations of our approach. (b) Comparison of our approach to the state-of-the-art on the Bird-200 and Flower-102 datasets.

(a)
Method                 Flower-102   Bird-200
Shape only             60.7         12.9
Color only             48.5         16.8
Early fusion           70.5         17.0
Direct empirical       64.6         18.9
Independent            63.5         19.8
Independent + α        66.4         21.6
Independent + α + β    73.3         22.4

(b)
Method                 Bird-200     Flower-102
OpponentSIFT           14.0         69.2
C-SIFT                 13.9         65.9
MKL [13]               −            72.8
MKL [3]                19.0         −
Random Forest [23]     19.2         −
Saliency [9]           −            71.0
Our approach           22.4         73.3

In recent years, combining multiple cues using Multiple Kernel Learning (MKL) techniques has received a lot of attention. These approaches combine multiple cues and multiple kernels and apply per-class cue weighting. Table 1(b) includes two recent MKL techniques that report state-of-the-art performance. The technique described in [3] is based on geometric blur, grayscale SIFT, color SIFT, and full-image color histograms, while the approach in [13] also employs HSV, SIFT int, SIFT bd, and HOG descriptors in the MKL framework of [21]. Despite the simplicity of our approach, which is based on only two cues and a single kernel, it outperforms these complex multi-cue learning techniques. Also note that both MKL approaches are based on learning class-specific weightings for multiple cues. This is especially cumbersome when there are several hundred object categories in a dataset (e.g., the Bird-200 dataset contains 200 bird categories). In contrast to these approaches, we learn global, class-independent cue weighting parameters to balance the color and shape cues.
On the Flower-102 dataset, our final classification score of 73.3% is comparable to the state-of-the-art recognition performance [13, 9, 8]3 obtained on this dataset. It should be noted that Nilsback and Zisserman [13] obtain a classification performance of 72.8% using segmented images and a combination of four different visual cues in a multiple kernel learning framework. Our performance, however, is obtained on unsegmented images using only color and shape cues. On the Bird-200 dataset, our approach significantly outperforms state-of-the-art methods [23, 3, 22].

4 Conclusions

In this paper we propose a new method to construct multi-cue visual portmanteau vocabularies that combine color and shape cues. When constructing a multi-cue vocabulary, two properties are especially desirable: cue binding and cue weighting. Starting from multi-cue product vocabularies, we compress this representation to form discriminative compound terms, or portmanteaux, used in the final image representation. Experiments demonstrate that assuming independence of the visual cues given the categories provides a more robust estimation of joint-cue distributions than direct empirical estimation. Assuming independence also has the advantage of both reducing the complexity of the representation by two orders of magnitude and allowing flexible cue weighting. Our final image representation is compact, maintains the cue binding property, admits cue weighting, and yields state-of-the-art performance on the image categorization problem. We tested our approach on two datasets, each with more than one hundred object categories. Results demonstrate the superiority of our approach over existing ones combining color and shape cues. We obtain gains of 2.8% and 5.4% over the early fusion approach. Our approach also outperforms methods based on multiple cues and MKL with per-class parameter learning.
This leaves open the possibility of using our approach to multi-cue image representation within an MKL framework.

Acknowledgments: This work is supported by the EU project ERG-TS-VICI-224737; by the Spanish Research Program Consolider-Ingenio 2010: MIPRCV (CSD200700018); by the Tuscan Regional project MNEMOSYNE (POR-FSE 2007-2013, A.IV-OB.2); and by the Spanish projects TIN2009-14173 and TIN2010-21771-C02-1. Joost van de Weijer acknowledges the support of a Ramon y Cajal fellowship.

3 From correspondence with the authors of [8] we learned that the results reported in their paper are erroneous and they do not obtain results better than [13].

References

[1] Francis Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In NIPS, 2008.
[2] A. Bosch, A. Zisserman, and X. Munoz. Scene classification via pLSA. In ECCV, 2006.
[3] Steve Branson, Catherine Wah, Florian Schroff, Boris Babenko, Peter Welinder, Pietro Perona, and Serge Belongie. Visual recognition with humans in the loop. In ECCV, 2010.
[4] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV, 2004.
[5] Inderjit Dhillon, Subramanyam Mallela, and Rahul Kumar. A divisive information-theoretic feature clustering algorithm for text classification. Journal of Machine Learning Research (JMLR), 3:1265–1287, 2003.
[6] Noha M. Elfiky, Fahad Shahbaz Khan, Joost van de Weijer, and Jordi Gonzalez. Discriminative compact pyramids for object and scene recognition. Pattern Recognition, 2011.
[7] Brian Fulkerson, Andrea Vedaldi, and Stefano Soatto. Localizing objects with smart dictionaries. In ECCV, 2008.
[8] Satoshi Ito and Susumu Kubota. Object classification using heterogeneous co-occurrence features. In ECCV, 2010.
[9] Christopher Kanan and Garrison Cottrell. Robust classification of objects, faces, and flowers using natural image statistics. In CVPR, 2010.
[10] Fahad Shahbaz Khan, Joost van de Weijer, and Maria Vanrell. Top-down color attention for object recognition. In ICCV, 2009.
[11] Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[12] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[13] M-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In ICVGIP, 2008.
[14] Alain Rakotomamonjy, Francis Bach, Stephane Canu, and Yves Grandvalet. More efficiency in multiple kernel learning. In ICML, 2007.
[15] J. Sivic, B. Russell, A. Efros, A. Zisserman, and W. Freeman. Discovering object categories in image collections. In ICCV, 2005.
[16] Noam Slonim and Naftali Tishby. Agglomerative information bottleneck. In NIPS, 1999.
[17] Anne Treisman. Feature binding, attention and object perception. Philosophical Transactions: Biological Sciences, 353(1373):1295–1306, 1998.
[18] Koen E. A. van de Sande, Theo Gevers, and Cees G. M. Snoek. Evaluating color descriptors for object and scene recognition. PAMI, 32(9):1582–1596, 2010.
[19] J. van de Weijer, C. Schmid, Jakob J. Verbeek, and D. Larlus. Learning color names for real-world applications. IEEE Transactions on Image Processing (TIP), 18(7):1512–1524, 2009.
[20] Manik Varma and Bodla Rakesh Babu. More generality in efficient multiple kernel learning. In ICML, 2009.
[21] Manik Varma and Debajyoti Ray. Learning the discriminative power-invariance trade-off. In ICCV, 2007.
[22] Jinjun Wang, Jianchao Yang, Kai Yu, Fengjun Lv, Thomas Huang, and Yihong Gong. Locality-constrained linear coding for image classification. In CVPR, 2010.
[23] Bangpeng Yao, Aditya Khosla, and Li Fei-Fei. Combining randomization and discrimination for fine-grained image categorization. In CVPR, 2011.
[24] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid.
Local features and kernels for classification of texture and object categories: A comprehensive study. IJCV, 73(2):213–218, 2007.
Algorithms for Hyper-Parameter Optimization

James Bergstra
The Rowland Institute, Harvard University
bergstra@rowland.harvard.edu

Rémi Bardenet
Laboratoire de Recherche en Informatique, Université Paris-Sud
bardenet@lri.fr

Yoshua Bengio
Dépt. d'Informatique et Recherche Opérationelle, Université de Montréal
yoshua.bengio@umontreal.ca

Balázs Kégl
Linear Accelerator Laboratory, Université Paris-Sud, CNRS
balazs.kegl@gmail.com

Abstract

Several recent advances to the state of the art in image classification benchmarks have come from better configurations of existing techniques rather than novel approaches to feature learning. Traditionally, hyper-parameter optimization has been the job of humans because they can be very efficient in regimes where only a few trials are possible. Presently, computer clusters and GPU processors make it possible to run more trials, and we show that algorithmic approaches can find better results. We present hyper-parameter optimization results on tasks of training neural networks and deep belief networks (DBNs). We optimize hyper-parameters using random search and two new greedy sequential methods based on the expected improvement criterion. Random search has been shown to be sufficiently efficient for learning neural networks for several datasets, but we show it is unreliable for training DBNs. The sequential algorithms are applied to the most difficult DBN learning problems from [1] and find significantly better results than the best previously reported. This work contributes novel techniques for making response surface models P(y|x) in which many elements of hyper-parameter assignment (x) are known to be irrelevant given particular values of other elements.
1 Introduction

Models such as Deep Belief Networks (DBNs) [2], stacked denoising autoencoders [3], convolutional networks [4], as well as classifiers based on sophisticated feature extraction techniques have from ten to perhaps fifty hyper-parameters, depending on how the experimenter chooses to parametrize the model, and how many hyper-parameters the experimenter chooses to fix at a reasonable default. The difficulty of tuning these models makes published results difficult to reproduce and extend, and makes even the original investigation of such methods more of an art than a science. Recent results such as [5], [6], and [7] demonstrate that the challenge of hyper-parameter optimization in large and multilayer models is a direct impediment to scientific progress. These works have advanced state of the art performance on image classification problems by more concerted hyper-parameter optimization in simple algorithms, rather than by innovative modeling or machine learning strategies. It would be wrong to conclude from a result such as [5] that feature learning is useless. Instead, hyper-parameter optimization should be regarded as a formal outer loop in the learning process. A learning algorithm, as a functional from data to classifier (taking classification problems as an example), includes a budgeting choice of how many CPU cycles are to be spent on hyper-parameter exploration, and how many CPU cycles are to be spent evaluating each hyper-parameter choice (i.e. by tuning the regular parameters). The results of [5] and [7] suggest that with current generation hardware such as large computer clusters and GPUs, the optimal allocation of CPU cycles includes more hyper-parameter exploration than has been typical in the machine learning literature. Hyper-parameter optimization is the problem of optimizing a loss function over a graph-structured configuration space. In this work we restrict ourselves to tree-structured configuration spaces.
Configuration spaces are tree-structured in the sense that some leaf variables (e.g. the number of hidden units in the 2nd layer of a DBN) are only well-defined when node variables (e.g. a discrete choice of how many layers to use) take particular values. Not only must a hyper-parameter optimization algorithm optimize over variables which are discrete, ordinal, and continuous, but it must simultaneously choose which variables to optimize. In this work we define a configuration space by a generative process for drawing valid samples. Random search is the algorithm of drawing hyper-parameter assignments from that process and evaluating them. Optimization algorithms work by identifying hyper-parameter assignments that could have been drawn, and that appear promising on the basis of the loss function's value at other points. This paper makes two contributions: 1) Random search is competitive with the manual optimization of DBNs in [1], and 2) Automatic sequential optimization outperforms both manual and random search. Section 2 covers sequential model-based optimization and the expected improvement criterion. Section 3 introduces a Gaussian Process based hyper-parameter optimization algorithm. Section 4 introduces a second approach based on adaptive Parzen windows. Section 5 describes the problem of DBN hyper-parameter optimization, and shows the efficiency of random search. Section 6 shows the efficiency of sequential optimization on the two hardest datasets according to random search. The paper concludes with a discussion of results in Section 7 and concluding remarks in Section 8.

2 Sequential Model-based Global Optimization

Sequential Model-Based Global Optimization (SMBO) algorithms have been used in many applications where evaluation of the fitness function is expensive [8, 9]. In an application where the true fitness function f : X → R is costly to evaluate, model-based algorithms approximate f with a surrogate that is cheaper to evaluate.
Typically the inner loop in an SMBO algorithm is the numerical optimization of this surrogate, or some transformation of the surrogate. The point x* that maximizes the surrogate (or its transformation) becomes the proposal for where the true function f should be evaluated. This active-learning-like algorithm template is summarized in Figure 1. SMBO algorithms differ in what criterion they optimize to obtain x* given a model (or surrogate) of f, and in how they model f via the observation history H.

SMBO(f, M_0, T, S)
1   H ← ∅
2   for t ← 1 to T
3       x* ← argmin_x S(x, M_{t−1})
4       Evaluate f(x*)                    ▷ expensive step
5       H ← H ∪ {(x*, f(x*))}
6       Fit a new model M_t to H
7   return H

Figure 1: The pseudo-code of generic Sequential Model-Based Optimization.

The algorithms in this work optimize the criterion of Expected Improvement (EI) [10]. Other criteria have been suggested, such as Probability of Improvement and Expected Improvement [10], minimizing the Conditional Entropy of the Minimizer [11], and the bandit-based criterion described in [12]. We chose to use the EI criterion in our work because it is intuitive and has been shown to work well in a variety of settings. We leave the systematic exploration of improvement criteria for future work. Expected improvement is the expectation under some model M of f : X → R^N that f(x) will exceed (negatively) some threshold y*:

EI_{y*}(x) := \int_{-\infty}^{\infty} \max(y* − y, 0) \, p_M(y|x) \, dy.   (1)

The contribution of this work is two novel strategies for approximating f by modeling H: a hierarchical Gaussian Process and a tree-structured Parzen estimator. These are described in Section 3 and Section 4, respectively.

3 The Gaussian Process Approach (GP)

Gaussian Processes have long been recognized as a good method for modeling loss functions in the model-based optimization literature [13].
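Before turning to the GP model, the generic loop of Figure 1 can be made concrete. The sketch below is ours: it folds the model-fitting step into a deliberately crude criterion S (nearest-neighbour prediction minus an exploration bonus), standing in for the GP and TPE surrogates developed in the paper; all names and ranges are illustrative.

```python
# Minimal runnable version of the SMBO template of Figure 1, on a single
# continuous hyper-parameter. The surrogate/criterion is a toy placeholder.
import random

def smbo(f, T, S, candidates_per_step=50, seed=0):
    rng = random.Random(seed)
    history = []                        # H in Figure 1
    for _ in range(T):
        # step 3: pick the candidate minimizing the criterion S(x, H)
        cands = [rng.uniform(-5.0, 5.0) for _ in range(candidates_per_step)]
        x_star = min(cands, key=lambda x: S(x, history))
        history.append((x_star, f(x_star)))   # steps 4-5: expensive eval
        # step 6 (fitting a model) is folded into S, which reads H directly
    return history

def nearest_value_minus_distance(x, history):
    """Toy criterion: loss of the nearest observed point, minus an
    exploration bonus for being far from all observations."""
    if not history:
        return 0.0
    xo, yo = min(history, key=lambda p: abs(p[0] - x))
    return yo - abs(xo - x)

hist = smbo(lambda x: (x - 2.0) ** 2, T=30, S=nearest_value_minus_distance)
assert len(hist) == 30
assert all(-5.0 <= x <= 5.0 for x, _ in hist)
```

Swapping in a real surrogate only changes S (and adds an explicit model-fitting step); the outer loop is unchanged, which is the point of the SMBO template.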
Gaussian Processes (GPs, [14]) are priors over functions that are closed under sampling, which means that if the prior distribution of f is believed to be a GP with mean 0 and kernel k, the conditional distribution of f knowing a sample H = (x_i, f(x_i))_{i=1}^{n} of its values is also a GP, whose mean and covariance function are analytically derivable. GPs with generic mean functions can in principle be used, but it is simpler and sufficient for our purposes to consider only zero-mean processes. We do this by centering the function values in the considered data sets. Modelling e.g. linear trends in the GP mean leads to undesirable extrapolation in unexplored regions during SMBO [15]. The above-mentioned closedness property, along with the fact that GPs provide an assessment of prediction uncertainty incorporating the effect of data scarcity, makes the GP an elegant candidate for both finding a candidate x* (Figure 1, step 3) and fitting a model M_t (Figure 1, step 6). The runtime of each iteration of the GP approach scales cubically in |H| and linearly in the number of variables being optimized; however, the expense of the function evaluations f(x*) typically dominates even this cubic cost.

3.1 Optimizing EI in the GP

We model f with a GP and set y* to the best value found after observing H: y* = min{f(x_i), 1 ≤ i ≤ n}. The model p_M in (1) is then the posterior GP knowing H. The EI function in (1) encapsulates a compromise between regions where the mean function is close to or better than y* and under-explored regions where the uncertainty is high. EI functions are usually optimized with an exhaustive grid search over the input space, or a Latin hypercube search in higher dimensions.
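When p_M(y|x) is Gaussian, as for a GP posterior, the integral in equation (1) has a well-known closed form. The sketch below takes the predictive mean and standard deviation as given (obtaining them is the GP regression step, not shown here):

```python
# Closed form of the EI integral in equation (1) for a Gaussian p_M(y|x)
# with mean mu and standard deviation sigma:
#   EI = (y_star - mu) * Phi(z) + sigma * phi(z),  z = (y_star - mu) / sigma,
# where phi and Phi are the standard normal pdf and cdf.
import math

def expected_improvement(mu, sigma, y_star):
    if sigma <= 0.0:                      # degenerate (noiseless) prediction
        return max(y_star - mu, 0.0)
    z = (y_star - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # N(0,1) pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # N(0,1) cdf
    return (y_star - mu) * Phi + sigma * phi

# EI rewards both a low predicted mean and a high predictive uncertainty,
# which is exactly the compromise described in the text.
assert expected_improvement(0.0, 1.0, 1.0) > expected_improvement(1.0, 1.0, 1.0)
assert expected_improvement(1.0, 2.0, 1.0) > expected_improvement(1.0, 0.5, 1.0)
```

Even with this closed form, the EI surface itself remains multi-modal in x, which is why the text turns to global search strategies (CMA-ES, EDA) for its maximization.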
However, some information on the landscape of the EI criterion can be derived from simple computations [16]: 1) it is always non-negative, and zero at training points from D; 2) it inherits the smoothness of the kernel k, which in practice is often at least once differentiable; and, noticeably, 3) the EI criterion is likely to be highly multi-modal, especially as the number of training points increases. The authors of [16] used the preceding remarks on the landscape of EI to design an evolutionary algorithm with mixture search, specifically aimed at optimizing EI, that is shown to outperform exhaustive search for a given budget of EI evaluations. We borrow their approach here and go one step further. We keep the Estimation of Distribution (EDA, [17]) approach on the discrete part of our input space (categorical and discrete hyper-parameters), where we sample candidate points according to binomial distributions, while we use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES, [18]) for the remaining part of our input space (continuous hyper-parameters). CMA-ES is a state-of-the-art gradient-free evolutionary algorithm for optimization on continuous domains, which has been shown to outperform the Gaussian-search EDA. Notice that such a gradient-free approach allows non-differentiable kernels for the GP regression. We do not adopt the use of mixtures from [16], but rather restart the local searches several times, starting from promising places. The use of tessellations suggested by [16] is prohibitive here, as our task often means working in more than 10 dimensions; thus we start each local search at the center of mass of a simplex with vertices randomly picked among the training points. Finally, we remark that not all hyper-parameters are relevant for each point. For example, a DBN with only one hidden layer does not have parameters associated with a second or third layer. Thus it is not enough to place one GP over the entire space of hyper-parameters.
We chose to group the hyper-parameters by common use in a tree-like fashion and place different independent GPs over each group. As an example, for DBNs, this means placing one GP over common hyper-parameters, including categorical parameters that indicate which conditional groups to consider, three GPs on the parameters corresponding to each of the three layers, and a few 1-dimensional GPs over individual conditional hyper-parameters, like ZCA energy (see Table 1 for DBN parameters).

4 Tree-structured Parzen Estimator Approach (TPE)

Anticipating that our hyper-parameter optimization tasks will mean high dimensions and small fitness evaluation budgets, we now turn to another modeling strategy and EI optimization scheme for the SMBO algorithm. Whereas the Gaussian-process based approach modeled p(y|x) directly, this strategy models p(x|y) and p(y). Recall from the introduction that the configuration space X is described by a graph-structured generative process (e.g. first choose a number of DBN layers, then choose the parameters for each). The tree-structured Parzen estimator (TPE) models p(x|y) by transforming that generative process, replacing the distributions of the configuration prior with non-parametric densities. In the experimental section, we will see that the configuration space is described using uniform, log-uniform, quantized log-uniform, and categorical variables. In these cases, the TPE algorithm makes the following replacements: uniform → truncated Gaussian mixture, log-uniform → exponentiated truncated Gaussian mixture, categorical → re-weighted categorical. Using different observations {x^(1), ..., x^(k)} in the non-parametric densities, these substitutions represent a learning algorithm that can produce a variety of densities over the configuration space X.
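As an illustration of the first substitution, a uniform prior over (a, b) can be replaced by an equally weighted mixture of that prior and Gaussians centred at the observations, each truncated to (a, b). A sketch with our own helper names (the actual hyperopt estimator differs in details such as weighting and bandwidths):

```python
# Density of a truncated-Gaussian mixture replacing a U(a, b) prior:
# one equally weighted component per observation, plus the prior itself.
import math

def _pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mixture_pdf(x, obs, sigmas, a, b):
    if not (a <= x <= b):
        return 0.0
    comps = [1.0 / (b - a)]  # the original uniform prior as one component
    for mu, s in zip(obs, sigmas):
        mass = _cdf((b - mu) / s) - _cdf((a - mu) / s)  # truncation constant
        comps.append(_pdf((x - mu) / s) / (s * mass))
    return sum(comps) / len(comps)
```

Each component integrates to one over (a, b), so the mixture is itself a proper density on (a, b) concentrated near past observations.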
The TPE defines p(x|y) using two such densities:

p(x|y) = ℓ(x) if y < y*, and p(x|y) = g(x) if y ≥ y*,   (2)

where ℓ(x) is the density formed using the observations {x^(i)} whose corresponding loss f(x^(i)) was less than y*, and g(x) is the density formed using the remaining observations. Whereas the GP-based approach favoured quite an aggressive y* (typically less than the best observed loss), the TPE algorithm depends on a y* that is larger than the best observed f(x), so that some points can be used to form ℓ(x). The TPE algorithm chooses y* to be some quantile γ of the observed y values, so that p(y < y*) = γ, but no specific model for p(y) is necessary. By maintaining sorted lists of observed variables in H, the runtime of each iteration of the TPE algorithm can scale linearly in |H| and linearly in the number of variables (dimensions) being optimized.

4.1 Optimizing EI in the TPE algorithm

The parametrization of p(x, y) as p(y)p(x|y) in the TPE algorithm was chosen to facilitate the optimization of EI:

EI_{y*}(x) = ∫_{−∞}^{y*} (y* − y) p(y|x) dy = ∫_{−∞}^{y*} (y* − y) p(x|y) p(y) / p(x) dy.   (3)

By construction, γ = p(y < y*) and p(x) = ∫ p(x|y) p(y) dy = γℓ(x) + (1 − γ)g(x). Therefore

∫_{−∞}^{y*} (y* − y) p(x|y) p(y) dy = ℓ(x) ∫_{−∞}^{y*} (y* − y) p(y) dy = γ y* ℓ(x) − ℓ(x) ∫_{−∞}^{y*} y p(y) dy,

so that finally

EI_{y*}(x) = [γ y* ℓ(x) − ℓ(x) ∫_{−∞}^{y*} y p(y) dy] / [γℓ(x) + (1 − γ)g(x)] ∝ (γ + (g(x)/ℓ(x))(1 − γ))^{−1}.

This last expression shows that to maximize improvement we would like points x with high probability under ℓ(x) and low probability under g(x). The tree-structured form of ℓ and g makes it easy to draw many candidates according to ℓ and to evaluate them according to g(x)/ℓ(x). On each iteration, the algorithm returns the candidate x* with the greatest EI.

4.2 Details of the Parzen Estimator

The models ℓ(x) and g(x) are hierarchical processes involving discrete-valued and continuous-valued variables. The Adaptive Parzen Estimator yields a model over X by placing density in the vicinity of K observations B = {x^(1), ..., x^(K)} ⊆ H.
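The proposal step derived in Section 4.1 (draw candidates from ℓ and keep the one maximizing ℓ(x)/g(x)) can be sketched as follows; the densities are passed in as callables and all names are ours:

```python
# TPE-style proposal: sample candidates from l(x), return the candidate with
# the largest ratio l(x)/g(x), which maximizes EI per the derivation above.
import random

def propose(sample_from_l, l_pdf, g_pdf, n_candidates, rng):
    candidates = [sample_from_l(rng) for _ in range(n_candidates)]
    return max(candidates, key=lambda x: l_pdf(x) / max(g_pdf(x), 1e-300))

# toy check: l uniform on (0, 1), g increasing in x => small x is preferred
best = propose(lambda r: r.random(), lambda x: 1.0,
               lambda x: x + 0.01, 100, random.Random(1))
```

The guard against g(x) = 0 is our own defensive choice; the ranking by ℓ/g is the essential point.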
Each continuous hyper-parameter was specified by a uniform prior over some interval (a, b), or a Gaussian, or a log-uniform distribution. The TPE substitutes an equally-weighted mixture of that prior with Gaussians centered at each of the x^(i) ∈ B. The standard deviation of each Gaussian was set to the greater of the distances to the left and right neighbor, but clipped to remain in a reasonable range. In the case of the uniform, the points a and b were considered to be potential neighbors. For discrete variables, supposing the prior was a vector of N probabilities p_i, the posterior vector elements were proportional to N p_i + C_i, where C_i counts the occurrences of choice i in B. The log-uniform hyper-parameters were treated as uniforms in the log domain.

Table 1: Distribution over DBN hyper-parameters for random sampling. Options separated by "or" (such as pre-processing, and including the random seed) are weighted equally. Symbol U means uniform, N means Gaussian-distributed, and log U means uniformly distributed in the log-domain. CD (also known as CD-1) stands for contrastive divergence, the algorithm used to initialize the layer parameters of the DBN.

Whole model: pre-processing: raw or ZCA; ZCA energy: U(.5, 1); random seed: 5 choices; classifier learn rate: log U(0.001, 10); classifier anneal start: log U(100, 10^4); classifier ℓ2-penalty: 0 or log U(10^-7, 10^-4); n. layers: 1 to 3; batch size: 20 or 100.

Per-layer: n. hidden units: log U(128, 4096); W init: U(−a, a) or N(0, a^2), with a set by algo A or B (see text); algo A coef: U(.2, 2); CD epochs: log U(1, 10^4); CD learn rate: log U(10^-4, 1); CD anneal start: log U(10, 10^4); CD sample data: yes or no.

5 Random Search for Hyper-Parameter Optimization in DBNs

One simple, but recent, step toward formalizing hyper-parameter optimization is the use of random search [5]. [19] showed that random search was much more efficient than grid search for optimizing the parameters of one-layer neural network classifiers.
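The neighbour-based bandwidth rule of Section 4.2 (the std of each Gaussian is the larger of the distances to its left and right neighbours, with a and b acting as boundary neighbours, then clipped) can be sketched as follows; the names and default clipping bounds are our own:

```python
# Per-observation bandwidths for the adaptive Parzen estimator sketch.
def parzen_bandwidths(obs, a, b, min_s=1e-3, max_s=None):
    if max_s is None:
        max_s = b - a
    order = sorted(range(len(obs)), key=lambda i: obs[i])
    sigmas = [0.0] * len(obs)
    for rank, i in enumerate(order):
        left = obs[order[rank - 1]] if rank > 0 else a
        right = obs[order[rank + 1]] if rank + 1 < len(order) else b
        s = max(obs[i] - left, right - obs[i])   # greater neighbour distance
        sigmas[i] = min(max(s, min_s), max_s)    # clip to a reasonable range
    return sigmas
```

Wide gaps between observations thus get wide Gaussians, and densely sampled regions get narrow ones.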
In this section, we evaluate random search for DBN optimization, compared with the sequential grid-assisted manual search carried out in [1]. We chose the prior listed in Table 1 to define the search space over DBN configurations. The details of the datasets, the DBN model, and the greedy layer-wise training procedure based on CD are provided in [1]. This prior corresponds to the search space of [1] except for the following differences: (a) we allowed for ZCA pre-processing [20], (b) we allowed each layer to have a different size, (c) we allowed each layer to have its own training parameters for CD, (d) we allowed for the possibility of treating the continuous-valued data as either Bernoulli means (more theoretically correct) or Bernoulli samples (more typical) in the CD algorithm, and (e) we did not discretize the possible values of real-valued hyper-parameters. These changes expand the hyper-parameter search problem, while maintaining the original hyper-parameter search space as a subset of the expanded search space. The results of this preliminary random search are in Figure 2. Perhaps surprisingly, the result of manual search can be reliably matched with 32 random trials for several datasets. The efficiency of random search in this setting is explored further in [21]. Where random search results match human performance, it is not clear from Figure 2 whether the reason is that it searched the original space as efficiently, or that it searched a larger space where good performance is easier to find. But the objection that random search is somehow cheating by searching a larger space is backward: the search space outlined in Table 1 is a natural description of the hyper-parameter optimization problem, and the restrictions to that space by [1] were presumably made to simplify the search problem and make it tractable for grid-search-assisted manual search. Critically, both methods train DBNs on the same datasets.
The results in Figure 2 indicate that hyper-parameter optimization is harder for some datasets. For example, in the case of the "MNIST rotated background images" dataset (MRBI), random sampling appears to converge to a maximum relatively quickly (best models among experiments of 32 trials show little variance in performance), but this plateau is lower than what was found by manual search. In another dataset (convex), the random sampling procedure exceeds the performance of manual search, but is slow to converge to any sort of plateau: there is considerable variance in generalization when the best of 32 models is selected. This slow convergence indicates that better performance is probably available, but we need to search the configuration space more efficiently to find it. The remainder of this paper explores sequential optimization strategies for hyper-parameter optimization for these two datasets: convex and MRBI.

[Figure 2: Deep Belief Network (DBN) performance according to random search, plotted as accuracy against experiment size (1 to 128 trials) for six datasets: mnist basic, mnist background images, mnist rotated background images, convex, rectangles, and rectangles images. Random search is used to explore up to 32 hyper-parameters (see Table 1). Results found using a grid-search-assisted manual search over a similar domain with an average of 41 trials are given in green (1-layer DBN) and red (3-layer DBN). Each box-plot (for N = 1, 2, 4, ...) shows the distribution of test set performance when the best model among N random trials is selected. The datasets "convex" and "mnist rotated background images" are used for more thorough hyper-parameter optimization.]

6 Sequential Search for Hyper-Parameter Optimization in DBNs

We validated our GP approach of Section 3.1 by comparing with random sampling on the Boston Housing dataset, a regression task with 506 points made of 13 scaled input variables and a scalar regressed output. We trained a Multi-Layer Perceptron (MLP) with 10 hyper-parameters, including the learning rate, ℓ1 and ℓ2 penalties, the size of the hidden layer, the number of iterations, and whether PCA pre-processing was to be applied; the PCA energy was the only conditional hyper-parameter [22]. Our results are depicted in Figure 3. The first 30 iterations were made using random sampling; from the 30th on, we differentiated the random samples from the GP approach trained on the updated history. The experiment was repeated 20 times. Although the number of points is particularly small compared to the dimensionality, the surrogate modelling approach finds noticeably better points than random, which supports the application of SMBO approaches to more ambitious tasks and datasets.

Applying the GP to the problem of optimizing DBN performance, we allowed 3 random restarts of the CMA-ES algorithm per proposal x*, and up to 500 iterations of the conjugate gradient method in fitting the length scales of the GP. The squared exponential kernel [14] was used for every node. The CMA-ES part of the GP approach dealt with boundaries using a penalty method; the binomial sampling part respected boundaries by construction. The GP algorithm was initialized with 30 randomly sampled points in H. After 200 trials, the prediction of a point x* using this GP took around 150 seconds. For the TPE-based algorithm, we chose γ = 0.15 and picked the best among 100 candidates drawn from ℓ(x) on each iteration as the proposal x*. After 200 trials, the prediction of a point x* using this TPE algorithm took around 10 seconds.
TPE was allowed to grow past the initial bounds used for random sampling in the course of optimization, whereas the GP and random search were restricted to stay within the initial bounds throughout. The TPE algorithm was also initialized with the same 30 randomly sampled points as were used to seed the GP.

6.1 Parallelizing Sequential Search

Both the GP and TPE approaches were run asynchronously in order to make use of multiple compute nodes and to avoid wasting time waiting for trial evaluations to complete. For the GP approach, the so-called constant liar approach was used: each time a candidate point x* was proposed, a fake fitness evaluation equal to the mean of the y's within the training set D was assigned temporarily, until the evaluation completed and reported the actual loss f(x*). For the TPE approach, we simply ignored recently proposed points and relied on the stochasticity of draws from ℓ(x) to provide different candidates from one iteration to the next. The consequence of parallelization is that each proposal x* is based on less feedback. This makes search less efficient, though faster in terms of wall time.

[Figure 3: After time 30, GP optimizing the MLP hyper-parameters on the Boston Housing regression task. Best minimum found so far every 5 iterations, against time. Red = GP, Blue = Random. Shaded areas = one-sigma error bars.]

Table 2: The test set classification error of the best model found by each search algorithm on each problem. Each search algorithm was allowed up to 200 trials. The manual searches used 82 trials for convex and 27 trials for MRBI. Runtime per trial was limited to 1 hour of GPU computation regardless of whether execution was on a GTX 285, 470, 480, or 580.

          convex           MRBI
TPE       14.13 ± 0.30 %   44.55 ± 0.44 %
GP        16.70 ± 0.32 %   47.08 ± 0.44 %
Manual    18.63 ± 0.34 %   47.39 ± 0.44 %
Random    18.97 ± 0.34 %   50.52 ± 0.44 %
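The constant-liar bookkeeping described in Section 6.1 can be sketched as follows (our own minimal structure, not the paper's implementation):

```python
# Pending proposals are temporarily assigned the mean observed loss ("the lie"),
# so the next EI maximization sees them as ordinary points and avoids
# re-proposing them before their real evaluation arrives.
class ConstantLiar:
    def __init__(self):
        self.history = []   # completed (x, y) pairs
        self.pending = []   # proposed x's awaiting evaluation

    def training_set(self):
        lie = (sum(y for _, y in self.history) / len(self.history)
               if self.history else 0.0)
        return self.history + [(x, lie) for x in self.pending]

    def propose(self, x_star):
        self.pending.append(x_star)

    def report(self, x_star, y):
        self.pending.remove(x_star)
        self.history.append((x_star, y))
```

The surrogate would be refit on `training_set()` before each new proposal; once `report` is called, the lie is replaced by the actual loss.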
The difference in speed between the slowest and fastest machine was roughly two-fold in theory, but the actual efficiency of computation also depended on the load of the machine and the configuration of the problem (the relative speed of the different cards differs across hyper-parameter configurations). With the parallel evaluation of up to five proposals from the GP and TPE algorithms, each experiment took about 24 hours of wall time using five GPUs.

7 Discussion

The trajectories (H) constructed by each algorithm up to 200 steps are illustrated in Figure 4, and compared with random search and the manual search carried out in [1]. The generalization scores of the best models found using these algorithms and others are listed in Table 2. On the convex dataset (2-way classification), both algorithms converged to a validation score of 13% error. In generalization, TPE's best model had 14.1% error and GP's best had 16.7%. TPE's best was significantly better than both manual search (19%) and random search with 200 trials (17%). On the MRBI dataset (10-way classification), random search was the worst performer (50% error), the GP approach and manual search approximately tied (47% error), while the TPE algorithm found a new best result (44% error). The models found by the TPE algorithm in particular are better than those previously found on both datasets. The GP and TPE algorithms were only slightly less efficient than manual search: both identified performance on par with manual search within 80 trials, whereas the manual search of [1] used 82 trials for convex and 27 trials for MRBI. There are several possible reasons why the TPE approach outperformed the GP approach on these two datasets. Perhaps the factorization into p(x|y) and p(y) is more accurate than the p(y|x) model of the Gaussian process. Perhaps, conversely, the exploration induced by the TPE's lack of accuracy turned out to be a good heuristic for search.
Perhaps the hyper-parameters of the GP approach itself were not set to correctly trade off exploitation and exploration in the DBN configuration space. More empirical work is required to test these hypotheses. Critically though, all four SMBO runs matched or exceeded both random search and a careful human-guided search, which are currently the state-of-the-art methods for hyper-parameter optimization. The GP and TPE algorithms work well in both of these settings, but there are certainly settings in which these algorithms, and in fact SMBO algorithms in general, would not be expected to do well. Sequential optimization algorithms work by leveraging structure in observed (x, y) pairs. It is possible for SMBO to be arbitrarily bad with a bad choice of p(y|x). It is also possible to be slower than random sampling at finding a global optimum with an apparently good p(y|x), if it extracts structure in H that leads only to a local optimum.

8 Conclusion

This paper has introduced two sequential hyper-parameter optimization algorithms, and shown them to meet or exceed human performance and the performance of a brute-force random search in two difficult hyper-parameter optimization tasks involving DBNs. We have relaxed standard constraints (e.g. equal layer sizes at all layers) on the search space, and fallen back on a more natural hyper-parameter space of 32 variables (including both discrete and continuous variables) in which many variables are sometimes irrelevant, depending on the values of other parameters (e.g. the number of layers). In this 32-dimensional search problem, the TPE algorithm presented here has uncovered new best results on both of these datasets that are significantly better than what DBNs were previously believed to achieve. Moreover, the GP and TPE algorithms are practical: the optimization for each dataset was done in just 24 hours using five GPU processors. Although our results are only for DBNs, our methods are quite general, and extend naturally to any hyper-parameter optimization problem in which the hyper-parameters are drawn from a measurable set.

[Figure 4: Efficiency of Gaussian Process-based (GP) and graphical model-based (TPE) sequential optimization algorithms on the task of optimizing the validation set performance of a DBN of up to three layers on the convex task (left) and the MRBI task (right), plotted as error (fraction incorrect) over 200 trials. The dots are the elements of the trajectory H produced by each SMBO algorithm. The solid coloured lines are the validation set performance of the best trial found before each point in time. Both the TPE and GP algorithms make significant advances from their random initial conditions, and substantially outperform the manual and random search methods. A 95% confidence interval about the best validation means on the convex task extends 0.018 above and below each point, and on the MRBI task extends 0.021 above and below each point. The solid black line is the test set performance obtained by domain experts using a combination of grid search and manual search [1]. The dashed line is the 99.5% quantile of validation performance found among trials sampled from our prior distribution (see Table 1), estimated from 457 and 361 random trials on the two datasets respectively.]

We hope that our work may spur researchers in the machine learning community to treat the hyper-parameter optimization strategy as an interesting and important component of all learning algorithms. The question "How well does a DBN do on the convex task?" is not a fully specified, empirically answerable question: different approaches to hyper-parameter optimization will give different answers.
Algorithmic approaches to hyper-parameter optimization make machine learning results easier to disseminate, reproduce, and transfer to other domains. The specific algorithms we have presented here are also capable, at least in some cases, of finding better results than were previously known. Finally, powerful hyper-parameter optimization algorithms broaden the horizon of models that can realistically be studied; researchers need not restrict themselves to systems of a few variables that can readily be tuned by hand. The TPE algorithm presented in this work, as well as the parallel evaluation infrastructure, is available as BSD-licensed free open-source software, which has been designed not only to reproduce the results in this work, but also to facilitate the application of these and similar algorithms to other hyper-parameter optimization problems.¹

Acknowledgements

This work was supported by the National Science and Engineering Research Council of Canada, Compute Canada, and by the ANR-2010-COSI-002 grant of the French National Research Agency. GPU implementations of the DBN model were provided by Theano [23].

¹"Hyperopt" software package: https://github.com/jaberg/hyperopt

References

[1] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML 2007, pages 473–480, 2007.
[2] G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006.
[3] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371–3408, 2010.
[4] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[5] N. Pinto, D. Doukhan, J. J. DiCarlo, and D. D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11):e1000579, 2009.
[6] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. NIPS Deep Learning and Unsupervised Feature Learning Workshop, 2010.
[7] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML-11), 2011.
[8] F. Hutter. Automated Configuration of Algorithms for Solving Hard Computational Problems. PhD thesis, University of British Columbia, 2009.
[9] F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In LION-5, 2011. Extended version as UBC technical report TR-2010-10.
[10] D. R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21:345–383, 2001.
[11] J. Villemonteix, E. Vazquez, and E. Walter. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 2006.
[12] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In ICML, 2010.
[13] J. Mockus, V. Tiesis, and A. Zilinskas. The application of Bayesian methods for seeking the extremum. In L. C. W. Dixon and G. P. Szego, editors, Towards Global Optimization, volume 2, pages 117–129. North Holland, New York, 1978.
[14] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[15] D. Ginsbourger, D. Dupuy, A. Badea, L. Carraro, and O. Roustant. A note on the choice and the estimation of kriging models for the analysis of deterministic computer experiments. 25:115–131, 2009.
[16] R. Bardenet and B. Kégl. Surrogating the surrogate: accelerating Gaussian Process optimization with mixtures. In ICML, 2010.
[17] P. Larrañaga and J. Lozano, editors. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Springer, 2001.
[18] N. Hansen. The CMA evolution strategy: a comparing review. In J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea, editors, Towards a New Evolutionary Computation: Advances on Estimation of Distribution Algorithms, pages 75–102. Springer, 2006.
[19] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. The Learning Workshop (Snowbird), 2011.
[20] A. Hyvärinen and E. Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4–5):411–430, 2000.
[21] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. JMLR, 2012. Accepted.
[22] C. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[23] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010.
Finite-Time Analysis of Stratified Sampling for Monte Carlo

Alexandra Carpentier, INRIA Lille - Nord Europe, alexandra.carpentier@inria.fr
Rémi Munos, INRIA Lille - Nord Europe, remi.munos@inria.fr

Abstract

We consider the problem of stratified sampling for Monte-Carlo integration. We model this problem in a multi-armed bandit setting, where the arms represent the strata, and the goal is to estimate a weighted average of the mean values of the arms. We propose a strategy that samples the arms according to an upper bound on their standard deviations and compare its estimation quality to an ideal allocation that would know the standard deviations of the strata. We provide two regret analyses: a distribution-dependent bound O(n^{−3/2}) that depends on a measure of the disparity of the strata, and a distribution-free bound O(n^{−4/3}) that does not.

1 Introduction

Consider a polling institute that has to estimate as accurately as possible the average income of a country, given a finite budget for polls. The institute has call centers in every region in the country, and gives a part of the total sampling budget to each center so that they can call random people in the area and ask about their income. A naive method would allocate a budget proportionally to the number of people in each area. However, some regions show a high variability in the income of their inhabitants whereas others are very homogeneous. Now if the polling institute knew the level of variability within each region, it could adjust the budget allocated to each region in a cleverer way (allocating more polls to regions with high variability) in order to reduce the final estimation error. This example is just one of many for which an efficient method of sampling a function with natural strata (i.e., the regions) is of great interest.
Note that even when there are no natural strata, it is always a good strategy to design arbitrary strata and to allocate to each stratum a budget proportional to its size, compared to crude Monte-Carlo. There are many good surveys on the topic of stratified sampling for Monte-Carlo, such as (Rubinstein and Kroese, 2008)[Subsection 5.5] or (Glasserman, 2004). The main problem in performing efficient sampling is that the variances within the strata (in the previous example, the income variability per region) are usually unknown. One possibility is to estimate the variances online while sampling the strata. There is some interesting research along this direction, such as (Arouna, 2004) and more recently (Etoré and Jourdain, 2010; Kawai, 2010). The work of Etoré and Jourdain (2010) matches exactly our problem of designing an efficient adaptive sampling strategy. In that article they propose to sample according to an empirical estimate of the variance of the strata, whereas Kawai (2010) addresses a computational complexity problem which is slightly different from ours. The recent work of Etoré et al. (2011) describes a strategy that enables sampling asymptotically according to the (unknown) standard deviations of the strata while at the same time adapting the shape (and number) of the strata online. This is a very difficult problem, especially in high dimension, that we will not address here, although we think it is a very interesting and promising direction for further research. These works provide asymptotic convergence of the variance of the estimate to the targeted stratified variance¹ divided by the sample size. They also prove that the number of pulls within each stratum converges to the desired number of pulls, i.e., the optimal allocation had the variances per stratum been known. Like Etoré and Jourdain (2010), we consider a stratified Monte-Carlo setting with fixed strata.
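To make the benefit of stratification concrete, here is a small illustrative experiment (ours, not from the paper): estimating E[f(U)] = 1/3 for f(u) = u² with U uniform on [0, 1), comparing crude Monte-Carlo against four equal strata with proportional allocation:

```python
# Crude Monte-Carlo vs. stratified sampling with proportional allocation.
import random

def crude(n, rng):
    return sum(rng.random() ** 2 for _ in range(n)) / n

def stratified(n, K, rng):
    per = n // K                       # proportional allocation: n/K per stratum
    est = 0.0
    for k in range(K):
        a, b = k / K, (k + 1) / K      # stratum [a, b) of probability mass 1/K
        est += (1.0 / K) * sum((a + (b - a) * rng.random()) ** 2
                               for _ in range(per)) / per
    return est

rng = random.Random(0)
crude_err = [abs(crude(400, rng) - 1.0 / 3.0) for _ in range(200)]
strat_err = [abs(stratified(400, 4, rng) - 1.0 / 3.0) for _ in range(200)]
```

Because the within-stratum variances of u² are much smaller than the overall variance, the stratified estimator is markedly more accurate at the same budget.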
Our contribution is to design a sampling strategy for which we can derive a finite-time analysis (where 'time' refers to the number of samples). This enables us to predict the quality of our estimate for any given budget n. We model this problem using the setting of multi-armed bandits where our goal is to estimate a weighted average of the mean values of the arms. Although our goal is different from a usual bandit problem, where the objective is to play the best arm as often as possible, this problem also exhibits an exploration-exploitation trade-off. The arms have to be pulled both in order to estimate the initially unknown variability of the arms (exploration) and to correctly allocate the budget according to our current knowledge of the variability (exploitation). Our setting is close to the one described in (Antos et al., 2010), which aims at estimating uniformly well the mean values of all the arms. The authors present an algorithm, called GAFS-MAX, that allocates samples proportionally to the empirical variance of the arms, while imposing that each arm be pulled at least √n times to guarantee a sufficiently good estimation of the true variances. Note though that in the master's thesis (Grover, 2009), the author presents an algorithm named GAFS-WL which is similar to GAFS-MAX and has an analysis close to that of GAFS-MAX. It deals with stratified sampling, i.e., it targets an allocation which is proportional to the standard deviation (and not to the variance) of the strata times their size². Some questions remain open in this work, notably that no distribution-independent regret bound is provided for GAFS-WL. We clarify this point in Section 4. Our objective is similar, and we extend the analysis of this setting.
We provide a finite-time analysis of its performance. The algorithm, called MC-UCB, samples the arms proportionally to a UCB³ on the standard deviation times the size of the stratum. Note that the idea is similar to the one in (Carpentier et al., 2011). Our contributions are the following:
• We derive a finite-time analysis for the stratified sampling for Monte-Carlo setting by using an algorithm based on upper confidence bounds. We show how such a family of algorithms is particularly interesting in this setting.
• We provide two regret analyses: (i) a distribution-dependent bound Õ(n^{−3/2})⁴ that depends on the disparity of the strata (a measure of the problem complexity), and which corresponds to a stationary regime where the budget n is large compared to this complexity; (ii) a distribution-free bound Õ(n^{−4/3}) that does not depend on the disparity of the strata, and corresponds to a transitory regime where n is small compared to the complexity. The characterization of these two regimes, and the fact that the corresponding excess error rates differ, highlights that a finite-time analysis is very relevant for this problem.
The rest of the paper is organized as follows. In Section 2 we formalize the problem and introduce the notation used throughout the paper. Section 3 introduces the MC-UCB algorithm and reports performance bounds. In Section 4 we then discuss the parameters of the algorithm and its performance. In Section 5 we report numerical experiments that

¹The target is defined in Subsection 5.5 of (Rubinstein and Kroese, 2008) and later in this paper; see Equation 4.
²This is explained in (Rubinstein and Kroese, 2008) and will be formulated precisely later.
³Note that we consider a sampling strategy based on UCBs on the standard deviations of the arms, whereas the so-called UCB algorithm of Auer et al. (2002), in the usual multi-armed bandit setting, computes UCBs on the mean rewards of the arms.
⁴The notation Õ(·) corresponds to O(·) up to logarithmic factors.

illustrate our method on the problem of pricing Asian options as introduced in (Glasserman et al., 1999). Finally, Section 6 concludes the paper and suggests future works.

2 Preliminaries

The allocation problem mentioned in the previous section is formalized as a K-armed bandit problem where each arm (stratum) k = 1, ..., K is characterized by a distribution ν_k with mean value μ_k and variance σ_k^2. At each round t ≥ 1, an allocation strategy (or algorithm) A selects an arm k_t and receives a sample drawn from ν_{k_t} independently of the past samples. Note that a strategy may be adaptive, i.e., the arm selected at round t may depend on past observed samples. Let {w_k}_{k=1,...,K} denote a known set of positive weights which sum to 1. For example, in the setting of stratified sampling for Monte-Carlo, this would be the probability mass in each stratum. The goal is to define a strategy that estimates as precisely as possible μ = Σ_{k=1}^K w_k μ_k using a total budget of n samples.

Let us write T_{k,t} = Σ_{s=1}^t I{k_s = k} for the number of times arm k has been pulled up to time t, and

    μ̂_{k,t} = (1 / T_{k,t}) Σ_{s=1}^{T_{k,t}} X_{k,s}

for the empirical estimate of the mean μ_k at time t, where X_{k,s} denotes the sample received when pulling arm k for the s-th time. After n rounds, the algorithm A returns the empirical estimates μ̂_{k,n} of all the arms. Note that in the case of a deterministic strategy, the expected quadratic estimation error of the weighted mean μ, as estimated by the weighted average μ̂_n = Σ_{k=1}^K w_k μ̂_{k,n}, satisfies:

    E[(μ̂_n − μ)^2] = E[(Σ_{k=1}^K w_k (μ̂_{k,n} − μ_k))^2] = Σ_{k=1}^K w_k^2 E_{ν_k}[(μ̂_{k,n} − μ_k)^2].

We thus use the following measure for the performance of any algorithm A:

    L_n(A) = Σ_{k=1}^K w_k^2 E[(μ_k − μ̂_{k,n})^2].    (1)

The goal is to define an allocation strategy that minimizes the global loss defined in Equation 1.
If the variance of the arms were known in advance, one could design an optimal static⁵ allocation strategy A* by pulling each arm k proportionally to the quantity w_k σ_k. Indeed, if arm k is pulled a deterministic number of times T*_{k,n}, then

    L_n(A*) = Σ_{k=1}^K w_k^2 σ_k^2 / T*_{k,n}.    (2)

By choosing T*_{k,n} so as to minimize L_n under the constraint that Σ_{k=1}^K T*_{k,n} = n, the optimal static allocation (up to rounding effects) of algorithm A* is to pull each arm k

    T*_{k,n} = (w_k σ_k / Σ_{i=1}^K w_i σ_i) n    (3)

times, and achieves the global performance

    L_n(A*) = Σ_w^2 / n,    (4)

where Σ_w = Σ_{i=1}^K w_i σ_i. In the following, we write λ_k = T*_{k,n} / n = w_k σ_k / Σ_w for the optimal allocation proportion for arm k, and λ_min = min_{1≤k≤K} λ_k. Note that a small λ_min means a large disparity of the w_k σ_k and, as explained later, provides a characterization of the hardness of a problem for the algorithm we build in Section 3.

However, in the setting considered here, the σ_k are unknown, and thus the optimal allocation is out of reach. A possible allocation is the uniform strategy A^u, i.e., such that T^u_k = (w_k / Σ_{i=1}^K w_i) n. Its performance is

    L_n(A^u) = (Σ_{k=1}^K w_k) (Σ_{k=1}^K w_k σ_k^2) / n = Σ_{w,2} / n,

where Σ_{w,2} = Σ_{k=1}^K w_k σ_k^2. Note that by the Cauchy-Schwarz inequality, we have Σ_w^2 ≤ Σ_{w,2}, with equality if and only if the σ_k are all equal. Thus A* is always at least as good as A^u. In addition, since Σ_i w_i = 1, we have Σ_w^2 − Σ_{w,2} = −Σ_k w_k (σ_k − Σ_w)^2. The difference between these two quantities is the weighted quadratic variation of the σ_k around their weighted mean Σ_w, in other words, the variance of the (σ_k)_{1≤k≤K}. As a result, the gain of A* compared to A^u grows with the disparity of the σ_k. We would like to do better than the uniform strategy by considering an adaptive strategy A that estimates the σ_k at the same time as it tries to implement an allocation as close as possible to that of the optimal allocation algorithm A*.

⁵Static means that the number of pulls allocated to each arm does not depend on the received samples.
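The closed-form quantities above are easy to check numerically. The sketch below (illustrative only; the weights and standard deviations are made up) computes the optimal allocation of Equation 3 and verifies that its loss equals Σ_w²/n (Equation 4) and improves on the uniform strategy:

```python
import numpy as np

def optimal_allocation(w, sigma, n):
    """Optimal static allocation T*_k = n * w_k sigma_k / sum_i w_i sigma_i (Eq. 3)."""
    w, sigma = np.asarray(w, float), np.asarray(sigma, float)
    return n * w * sigma / np.sum(w * sigma)

def loss(w, sigma, T):
    """Loss of a deterministic allocation: sum_k w_k^2 sigma_k^2 / T_k (Eq. 2)."""
    w, sigma, T = (np.asarray(a, float) for a in (w, sigma, T))
    return float(np.sum(w**2 * sigma**2 / T))

w = np.array([0.2, 0.3, 0.5])       # strata weights (sum to 1); made-up values
sigma = np.array([2.0, 1.0, 0.5])   # per-stratum standard deviations; made-up
n = 1000

T_star = optimal_allocation(w, sigma, n)
Sigma_w = float(np.sum(w * sigma))

assert np.isclose(loss(w, sigma, T_star), Sigma_w**2 / n)   # Eq. 4 holds exactly
assert loss(w, sigma, T_star) <= loss(w, sigma, n * w)      # beats uniform (T^u_k = n w_k)
```

Since the uniform loss here is Σ_{w,2}/n, the gap between the two asserts is exactly the variance of the σ_k discussed above.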
This introduces a natural trade-off between the exploration needed to improve the estimates of the variances and the exploitation of the current estimates to allocate the pulls nearly optimally. In order to assess how well A solves this trade-off and manages to sample according to the true standard deviations without knowing them in advance, we compare its performance to that of the optimal allocation strategy A*. For this purpose we define the notion of regret of an adaptive algorithm A as the difference between the performance loss incurred by the algorithm and that of the optimal algorithm:

    R_n(A) = L_n(A) − L_n(A*).    (5)

The regret indicates how much we lose, in terms of expected quadratic estimation error, by not knowing in advance the standard deviations (σ_k). Note that since L_n(A*) = Σ_w^2 / n, a consistent strategy, i.e., one asymptotically equivalent to the optimal strategy, is obtained whenever its regret is negligible compared to 1/n.

3 Allocation based on Monte Carlo Upper Confidence Bound

3.1 The algorithm

In this section, we introduce our adaptive algorithm for the allocation problem, called Monte Carlo Upper Confidence Bound (MC-UCB). The algorithm computes a high-probability bound on the standard deviation of each arm and samples the arms proportionally to their bounds times the corresponding weights. The MC-UCB algorithm, A_{MC-UCB}, is described in Figure 1. It requires three parameters as inputs: c1 and c2, which are related to the shape of the distributions (see Assumption 1), and δ, which defines the confidence level of the bound. In Subsection 4.2, we discuss a way to reduce the number of parameters from three to one. The amount of exploration of the algorithm can be adapted by properly tuning these parameters.

Input: c1, c2, δ. Let b = 2√(2 log(2/δ)) √(c1 log(c2/δ)) + √(2 c1 δ (1 + log(c2/δ))) n^{1/2} / (1 − δ).
Initialize: Pull each arm twice.
for t = 2K + 1, ...
, n do
    Compute B_{k,t} = (w_k / T_{k,t-1}) (σ̂_{k,t-1} + b √(1 / T_{k,t-1})) for each arm 1 ≤ k ≤ K
    Pull an arm k_t ∈ arg max_{1≤k≤K} B_{k,t}
end for
Output: μ̂_{k,t} for each arm 1 ≤ k ≤ K

Figure 1: The pseudo-code of the MC-UCB algorithm. The empirical standard deviations σ̂_{k,t-1} are computed using Equation 6.

The algorithm starts by pulling each arm twice in rounds t = 1 to 2K. From round t = 2K + 1 on, it computes an upper confidence bound B_{k,t} on the standard deviation σ_k for each arm k, and then pulls the one with largest B_{k,t}. The upper bounds on the standard deviations are built by using Theorem 10 in (Maurer and Pontil, 2009)⁶ and are based on the empirical standard deviation σ̂_{k,t-1}:

    σ̂_{k,t-1}^2 = (1 / (T_{k,t-1} − 1)) Σ_{i=1}^{T_{k,t-1}} (X_{k,i} − μ̂_{k,t-1})^2,    (6)

where X_{k,i} is the i-th sample received when pulling arm k, and T_{k,t-1} is the number of pulls allocated to arm k up to time t − 1. After n rounds, MC-UCB returns the empirical mean μ̂_{k,n} for each arm 1 ≤ k ≤ K.

3.2 Regret analysis of MC-UCB

Before stating the main results of this section, we state the assumption that the distributions are sub-Gaussian, which includes, e.g., Gaussian or bounded distributions. See (Buldygin and Kozachenko, 1980) for more details.

Assumption 1 There exist c1, c2 > 0 such that for all 1 ≤ k ≤ K and any ε > 0,

    P_{X∼ν_k}(|X − μ_k| ≥ ε) ≤ c2 exp(−ε^2 / c1).    (7)

We provide two analyses of MC-UCB, a distribution-dependent one and a distribution-free one, which are respectively of interest in two regimes of the algorithm, namely the stationary and transitory regimes. We will comment on this later in Section 4.

A distribution-dependent result: We now report the first bound on the regret of the MC-UCB algorithm. The proof is reported in (Carpentier and Munos, 2011) and relies on upper- and lower-bounds on T_{k,t} − T*_{k,t}, i.e., the difference in the number of pulls of each arm compared to the optimal allocation (see Lemma 3).

⁶We could also have used the variant reported in (Audibert et al., 2009).
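Before turning to the analysis, the pseudo-code of Figure 1 can be sketched in a few lines of Python. This is illustrative only: the constant b is passed in as a plain number rather than computed from (c1, c2, δ), and Gaussian strata stand in for the unknown distributions ν_k:

```python
import numpy as np

def mc_ucb(sample, w, n, b, rng):
    """Minimal MC-UCB sketch; sample(k, rng) draws one value from stratum k.
    Returns the per-stratum empirical means and pull counts."""
    K = len(w)
    X = [[sample(k, rng), sample(k, rng)] for k in range(K)]  # pull each arm twice
    for t in range(2 * K, n):
        # B_{k,t} = (w_k / T_k) * (sigma_hat_k + b * sqrt(1 / T_k))
        B = [w[k] / len(X[k]) * (np.std(X[k], ddof=1) + b * np.sqrt(1.0 / len(X[k])))
             for k in range(K)]
        k_next = int(np.argmax(B))            # pull the arm with largest UCB
        X[k_next].append(sample(k_next, rng))
    return [float(np.mean(x)) for x in X], [len(x) for x in X]

rng = np.random.default_rng(0)
w = [0.5, 0.5]                                # equal-weight strata (made up)
sigmas = [0.1, 2.0]                           # one low- and one high-variance stratum
sample = lambda k, rng: rng.normal(0.0, sigmas[k])

means, T = mc_ucb(sample, w, n=2000, b=1.0, rng=rng)
assert T[1] > T[0]   # the high-variance stratum receives most of the pulls
```

As the assert suggests, the allocation concentrates on the stratum with the larger w_k σ_k, mimicking the optimal proportions λ_k.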
Theorem 1 Under Assumption 1, and if we choose c2 such that c2 ≥ 2K n^{-5/2}, the regret of MC-UCB run with parameter δ = n^{-7/2}, with n ≥ 4K, is bounded as

    R_n(A_{MC-UCB}) ≤ (c1 (c2 + 2) log(n) / (n^{3/2} λ_min^{3/2})) (112 Σ_w + 6K) + (19 / (λ_min^3 n^2)) (K Σ_w^2 + 720 c1 (c2 + 1) log(n)^2).

Note that this result crucially depends on the smallest proportion λ_min, which is a measure of the disparity of the standard deviations times their weights. For this reason we refer to it as a "distribution-dependent" result.

A distribution-free result: We now report our second regret bound, which does not depend on λ_min but whose rate is poorer. The proof is reported in (Carpentier and Munos, 2011) and relies on other upper- and lower-bounds on T_{k,t} − T*_{k,t}, detailed in Lemma 4.

Theorem 2 Under Assumption 1, and if we choose c2 such that c2 ≥ 2K n^{-5/2}, the regret of MC-UCB run with parameter δ = n^{-7/2}, with n ≥ 4K, is bounded as

    R_n(A_{MC-UCB}) ≤ (200 √(c1 (c2 + 2)) Σ_w K / n^{4/3}) log(n) + (365 / n^{3/2}) (129 c1 (c2 + 2)^2 K^2 log(n)^2 + K Σ_w^2).

This bound does not depend on 1/λ_min. Note that the bound is not entirely distribution-free, since Σ_w appears; but it can be proved using Assumption 1 that Σ_w^2 ≤ c1 c2. This is obtained at the price of the slightly worse rate O(n^{-4/3}).

4 Discussion on the results

4.1 Distribution-free versus distribution-dependent

Theorem 1 provides a regret bound of order Õ(λ_min^{-5/2} n^{-3/2}), whereas Theorem 2 provides a bound in Õ(n^{-4/3}) independently of λ_min. Hence, for a given problem, i.e., a given λ_min, the distribution-free result of Theorem 2 is more informative than the distribution-dependent result of Theorem 1 in the transitory regime, that is to say when n is small compared to λ_min^{-1}. The distribution-dependent result of Theorem 1 is better in the stationary regime, i.e., for large n. This distinction reminds us of the difference between distribution-dependent and distribution-free bounds for the UCB algorithm in usual multi-armed bandits⁷.
⁷The distribution-dependent bound is in O(K log n / Δ), where Δ is the difference between the mean values of the two best arms, and the distribution-free bound is in O(√(n K log n)), as explained in (Auer et al., 2002; Audibert and Bubeck, 2009).

Although we do not have a lower bound on the regret yet, we believe that the rate n^{-3/2} cannot be improved for general distributions. As explained in the proof in Appendix B of (Carpentier and Munos, 2011), this rate is a direct consequence of the high-probability bounds on the estimates of the standard deviations of the arms, which are in O(1/√n), and those bounds are tight. A natural question is whether there exists an algorithm with a regret of order Õ(n^{-3/2}) without any dependence on λ_min^{-1}. Although we do not have an answer to this question, we can say that our algorithm MC-UCB does not satisfy this property. In Appendix D.1 of (Carpentier and Munos, 2011), we give a simple example where λ_min = 0 and for which the rate of MC-UCB cannot be better than O(n^{-4/3}). This shows that our analysis of MC-UCB is tight.

The problem-dependent upper bound is similar to the one provided for GAFS-WL in (Grover, 2009). We however expect that GAFS-WL exhibits a sub-optimal behavior on some problems: it is possible to find cases where R_n(A_{GAFS-WL}) = Ω(1/n); see Appendix D.1 of (Carpentier and Munos, 2011). Note however that when there is an arm with zero standard deviation, GAFS-WL is likely to perform better than MC-UCB, as it will only sample this arm O(√n) times while MC-UCB samples it O(n^{2/3}) times.

4.2 The parameters of the algorithm

Our algorithm takes three parameters as input, namely c1, c2 and δ, but only a combination of them is used in the algorithm, through the quantity b = 2√(2 log(2/δ)) √(c1 log(c2/δ)) + √(2 c1 δ (1 + log(c2/δ))) n^{1/2} / (1 − δ). For practical use of the method, it is therefore enough to tune the algorithm with the single parameter b.
By the choice of the value assigned to δ in the two theorems, b should be chosen of order c log(n), where c can be interpreted as a high-probability bound on the range of the samples. We thus simply require a rough estimate of the magnitude of the samples. Note that in the case of bounded distributions, b can be chosen as b = 4 5 2c log(n), where c is a true bound on the variables. This result is easy to deduce by simplifying Lemma 1 in Appendix A of (Carpentier and Munos, 2011) for the case of bounded variables.

5 Numerical experiment: Pricing of an Asian option

We consider the pricing problem of an Asian option introduced in (Glasserman et al., 1999) and later considered in (Kawai, 2010; Etoré and Jourdain, 2010). This uses a Black-Scholes model with strike C and maturity T. Let (W(t))_{0≤t≤1} be a Brownian motion that is discretized at d equidistant times {i/d}_{1≤i≤d}, which defines the vector W ∈ R^d with components W_i = W(i/d). The discounted payoff of the Asian option is defined as a function of W by:

    F(W) = exp(−rT) max( (1/d) Σ_{i=1}^d S0 exp((r − (1/2) s0^2) iT/d + s0 √T W_i) − C, 0 ),    (8)

where S0, r, and s0 are constants, and the price is defined by the expectation p = E_W F(W). We want to estimate the price p by Monte-Carlo simulations (by sampling on W = (W_i)_{1≤i≤d}). In order to reduce the variance of the estimated price, we can stratify the space of W. Glasserman et al. (1999) suggest stratifying according to a one-dimensional projection of W, i.e., choosing a projection vector u ∈ R^d and defining the strata as the sets of W such that u · W lies in given intervals of R. They further argue that the best direction for stratification is u = (0, ..., 0, 1), i.e., stratifying according to the last component W_d of W. We thus sample W_d and then conditionally sample W_1, ..., W_{d-1} according to a Brownian bridge, as explained in (Kawai, 2010).
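For concreteness, the payoff of Equation 8 and a crude (non-stratified) Monte-Carlo estimate of p can be written as follows. The constants are those of the experiment below, with C = 90 being one of the three strikes studied; the stratified version is discussed next:

```python
import numpy as np

S0, r, s0, T, d, C = 100.0, 0.05, 0.30, 1.0, 16, 90.0

def payoff(W):
    """Discounted payoff F(W) of the Asian option (Equation 8); W has d components."""
    i = np.arange(1, d + 1)
    S = S0 * np.exp((r - 0.5 * s0**2) * i * T / d + s0 * np.sqrt(T) * W)
    return np.exp(-r * T) * max(S.mean() - C, 0.0)

# Crude Monte-Carlo: W_i = W(i/d) is a discretized Brownian motion on [0, 1],
# so the increments are i.i.d. N(0, 1/d) and W is their cumulative sum.
rng = np.random.default_rng(0)
n_sims = 20000
prices = [payoff(np.cumsum(rng.normal(0.0, np.sqrt(1.0 / d), d))) for _ in range(n_sims)]
print(np.mean(prices))   # crude estimate of the price p
```

Stratifying on W_d then replaces the unconditional draw of W by one conditioned on the stratum of its last component.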
Note that this choice of stratification is also intuitive, since W_d has the largest exponent in the payoff (8), and thus the highest volatility. Kawai (2010) and Etoré and Jourdain (2010) also use the same direction of stratification. Like in (Kawai, 2010), we consider 5 strata of equal weight. Since W_d follows a N(0, 1) distribution, the strata correspond to the 20% quantiles of a standard normal distribution. The left plot of Figure 2 represents the cumulative distribution function of W_d and shows the strata in terms of percentiles of W_d. The right plot represents, as a dotted line, the curve E[F(W)|W_d = x] versus P(W_d < x), parameterized by x, and the box plot represents the expectation and standard deviations of F(W) conditioned on each stratum. We observe that this stratification produces an important heterogeneity of the standard deviations per stratum, which indicates that stratified sampling would be profitable compared to crude Monte-Carlo sampling.

Figure 2: Left: Cdf of W_d and the definition of the strata. Right: expectation and standard deviation of F(W) conditioned on each stratum, for a strike C = 90.

We choose the same numerical values as Kawai (2010): S0 = 100, r = 0.05, s0 = 0.30, T = 1 and d = 16. Note that the strike C of the option has a direct impact on the variability of the strata. Indeed, the larger C, the more probable F(W) = 0 is for strata with small W_d, and thus the smaller λ_min. Our two main competitors are the SSAA algorithm of Etoré and Jourdain (2010) and GAFS-WL of Grover (2009). We did not compare to (Kawai, 2010), which aims at minimizing the computational time and not the loss considered here⁸. SSAA works in K_r rounds of length N_k where, at each round, it allocates proportionally to the empirical standard deviations computed in the previous rounds.
Etoré and Jourdain (2010) report the asymptotic consistency of the algorithm whenever k N_k goes to 0 when k goes to infinity. Since their goal is not to obtain a finite-time performance, they do not mention how to calibrate the length and number of rounds in practice. We choose the same parameters as in their numerical experiments (Section 3.2.2 of (Etoré and Jourdain, 2010)), using 3 rounds. In this setting, where we know the budget n at the beginning of the algorithm, GAFS-WL pulls each arm a√n times and then pulls at time t + 1 the arm k_{t+1} that maximizes w_k σ̂_{k,t} / T_{k,t}. We set a = 1. As mentioned in Subsection 4.2, an advantage of our algorithm is that it requires a single parameter to tune. We chose b = 1000 log(n), where 1000 is a high-probability range of the variables (see right plot of Figure 2).

Table 1 reports the performance of MC-UCB, GAFS-WL, SSAA, and the uniform strategy, for different values of the strike C, i.e., for different values of λ_min^{-1} and Σ_{w,2}/Σ_w^2 = (Σ_k w_k σ_k^2) / (Σ_k w_k σ_k)^2. The total budget is n = 10^5. The results are averaged over 50000 trials. We notice that MC-UCB outperforms SSAA, the uniform strategy, and the GAFS-WL strategy. Note however that, in the case of the GAFS-WL strategy, the small gain could come from the fact that there are more parameters in MC-UCB, and that we were thus able to adjust them (even if we kept the same parameters for the three values of C).

In the left plot of Figure 3, we plot the rescaled regret R_n n^{3/2}, averaged over 50000 trials, as a function of n, where n ranges from 50 to 5000. The value of the strike is C = 120. Again, we notice that MC-UCB performs better than Uniform and SSAA because it adapts

⁸In that paper, the computational costs for each stratum vary, i.e., it is faster to sample in some strata than in others, and the aim of their paper is to minimize the global computational cost while achieving a given performance.
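The stratification of W_d used in this experiment is easy to set up: since W_d ~ N(0, 1) and the five strata have equal weight, their boundaries are the standard normal quintiles, and sampling W_d inside a stratum reduces to inverse-CDF sampling on the matching probability interval. The sketch below uses only the standard library and omits the Brownian-bridge completion of W_1, ..., W_{d-1}:

```python
import random
from statistics import NormalDist

nd = NormalDist()  # standard normal, the law of W_d

# Boundaries of the 5 equal-weight strata: the 20%, 40%, 60%, 80% quantiles.
boundaries = [nd.inv_cdf(q) for q in (0.2, 0.4, 0.6, 0.8)]

def sample_wd(stratum, rng):
    """Draw W_d conditioned on falling in stratum 0..4, by inverse-CDF
    sampling on the corresponding probability interval of width 0.2."""
    lo, hi = stratum * 0.2, (stratum + 1) * 0.2
    return nd.inv_cdf(rng.uniform(lo, hi))

rng = random.Random(0)
x = sample_wd(2, rng)   # middle stratum: between the 40% and 60% quantiles
assert boundaries[1] <= x <= boundaries[2]
```

Each stratified sample of F(W) would then draw W_d this way for the chosen stratum before completing the path.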
C     1/λ_min   Σ_{w,2}/Σ_w^2   Uniform      SSAA         GAFS-WL      MC-UCB
60      6.18       1.06         2.52 10^-2   5.87 10^-3   8.25 10^-4   7.29 10^-4
90     15.29       1.24         3.32 10^-2   6.14 10^-3   8.58 10^-4   8.07 10^-4
120   744.25       3.07         3.56 10^-2   6.22 10^-3   9.89 10^-4   9.28 10^-4

Table 1: Characteristics of the distributions (λ_min^{-1} and Σ_{w,2}/Σ_w^2) and regret of the Uniform, SSAA, GAFS-WL, and MC-UCB strategies, for different values of the strike C.

faster to the distributions of the strata. But it performs very similarly to GAFS-WL. In addition, it seems that the regret of Uniform and SSAA decreases slower than the rate n^{-3/2}, whereas MC-UCB, as well as GAFS-WL, decreases at this rate. The right plot focuses on the MC-UCB algorithm and rescales the y-axis to observe the variations of its rescaled regret more accurately. The curve grows first and then stabilizes. This could correspond to the two regimes discussed previously.

Figure 3: Left: Rescaled regret (R_n n^{3/2}) of the Uniform, SSAA, GAFS-WL, and MC-UCB strategies, for C = 120. Right: zoom on the rescaled regret for MC-UCB, which illustrates the two regimes.

6 Conclusions

We provided a finite-time analysis of stratified sampling for Monte-Carlo in the case of fixed strata. We reported two bounds: (i) a distribution-dependent bound Õ(n^{-3/2} λ_min^{-5/2}), which is of interest when n is large compared to a measure of disparity λ_min^{-1} of the standard deviations (stationary regime), and (ii) a distribution-free bound in Õ(n^{-4/3}), which is of interest when n is small compared to λ_min^{-1} (transitory regime). Possible directions for future work include: (i) making the MC-UCB algorithm anytime (i.e.
not requiring the knowledge of n), (ii) investigating whether there exists an algorithm with Õ(n^{-3/2}) regret without dependency on λ_min^{-1}, and (iii) deriving distribution-dependent and distribution-free lower bounds for this problem.

Acknowledgements

We thank András Antos for several comments that helped us improve the quality of the paper. This research was partially supported by the Region Nord-Pas-de-Calais Regional Council, French ANR EXPLO-RA (ANR-08-COSI-004), the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 231495 (project CompLACS), and by Pascal-2.

References

András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411:2712–2728, June 2010.
B. Arouna. Adaptative Monte Carlo method, a variance reduction technique. Monte Carlo Methods and Applications, 10(1):1–24, 2004.
J.Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In 22nd Annual Conference on Learning Theory, 2009.
J.Y. Audibert, R. Munos, and Cs. Szepesvári. Exploration-exploitation trade-off using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876–1902, 2009.
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
V.V. Buldygin and Y.V. Kozachenko. Sub-Gaussian random variables. Ukrainian Mathematical Journal, 32(6):483–489, 1980.
A. Carpentier and R. Munos. Finite-time analysis of stratified sampling for Monte Carlo. Technical Report inria-00636924, INRIA, 2011.
A. Carpentier, A. Lazaric, M. Ghavamzadeh, R. Munos, and P. Auer. Upper-confidence-bound algorithms for active learning in multi-armed bandits. In Algorithmic Learning Theory, pages 189–203. Springer, 2011.
Pierre Etoré and Benjamin Jourdain. Adaptive optimal allocation in stratified sampling methods. Methodol. Comput. Appl. Probab., 12(3):335–360, September 2010.
Pierre Etoré, Gersende Fort, Benjamin Jourdain, and Éric Moulines. On adaptive stratification. Ann. Oper. Res., 2011. To appear.
P. Glasserman. Monte Carlo Methods in Financial Engineering. Springer Verlag, 2004. ISBN 0387004513.
P. Glasserman, P. Heidelberger, and P. Shahabuddin. Asymptotically optimal importance sampling and stratification for pricing path-dependent options. Mathematical Finance, 9(2):117–152, 1999.
V. Grover. Active learning and its application to heteroscedastic problems. Department of Computing Science, Univ. of Alberta, MSc thesis, 2009.
R. Kawai. Asymptotically optimal allocation of stratified sampling with adaptive variance reduction by strata. ACM Transactions on Modeling and Computer Simulation (TOMACS), 20(2):1–17, 2010. ISSN 1049-3301.
A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In Proceedings of the Twenty-Second Annual Conference on Learning Theory, pages 115–124, 2009.
S.I. Resnick. A Probability Path. Birkhäuser, 1999.
R.Y. Rubinstein and D.P. Kroese. Simulation and the Monte Carlo Method. Wiley-Interscience, 2008. ISBN 0470177942.
Active learning of neural response functions with Gaussian processes

Mijung Park, Electrical and Computer Engineering, The University of Texas at Austin, mjpark@mail.utexas.edu
Greg Horwitz, Departments of Physiology and Biophysics, The University of Washington, ghorwitz@uw.edu
Jonathan W. Pillow, Departments of Psychology and Neurobiology, The University of Texas at Austin, pillow@mail.utexas.edu

Abstract

A sizeable literature has focused on the problem of estimating a low-dimensional feature space for a neuron's stimulus sensitivity. However, comparatively little work has addressed the problem of estimating the nonlinear function from feature space to spike rate. Here, we use a Gaussian process (GP) prior over the infinite-dimensional space of nonlinear functions to obtain Bayesian estimates of the "nonlinearity" in the linear-nonlinear-Poisson (LNP) encoding model. This approach offers increased flexibility, robustness, and computational tractability compared to traditional methods (e.g., parametric forms, histograms, cubic splines). We then develop a framework for optimal experimental design under the GP-Poisson model using uncertainty sampling. This involves adaptively selecting stimuli according to an information-theoretic criterion, with the goal of characterizing the nonlinearity with as little experimental data as possible. Our framework relies on a method for rapidly updating hyperparameters under a Gaussian approximation to the posterior. We apply these methods to neural data from a color-tuned simple cell in macaque V1, characterizing its nonlinear response function in the 3D space of cone contrasts. We find that it combines cone inputs in a highly nonlinear manner. With simulated experiments, we show that optimal design substantially reduces the amount of data required to estimate these nonlinear combination rules.
1 Introduction

One of the central problems in systems neuroscience is to understand how neural spike responses convey information about environmental stimuli, which is often called the neural coding problem. One approach to this problem is to build an explicit encoding model of the stimulus-conditional response distribution p(r|x), where r is a (scalar) spike count elicited in response to a (vector) stimulus x. The popular linear-nonlinear-Poisson (LNP) model characterizes this encoding relationship in terms of a cascade of stages: (1) linear dimensionality reduction using a bank of filters or receptive fields; (2) a nonlinear function from filter outputs to spike rate; and (3) an inhomogeneous Poisson spiking process [1]. While a sizable literature [2–10] has addressed the problem of estimating the linear front end to this model, the nonlinear stage has received comparatively less attention. Most prior work has focused on: simple parametric forms [6, 9, 11]; non-parametric methods that do not scale easily to high dimensions (e.g., histograms, splines) [7, 12]; or nonlinearities defined by a sum or product of 1D nonlinear functions [10, 13].

Figure 1: Encoding model schematic. The nonlinear function f converts an input vector x to a scalar, which g then transforms to a non-negative spike rate λ = g(f(x)). The spike response r is a Poisson random variable with mean λ.

In this paper, we use a Gaussian process (GP) to provide a flexible, computationally tractable model of the multi-dimensional neural response nonlinearity f(x), where x is a vector in feature space.
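The three-stage LNP cascade is straightforward to simulate. The sketch below is illustrative only: the filter weights and the soft-rectifying nonlinearity are made up, simplified to a single filter, standing in for the quantities the paper estimates from data:

```python
import numpy as np

rng = np.random.default_rng(0)

w = np.array([1.0, -0.5, 0.25])        # stage 1: a single linear filter (made up)
g = lambda u: np.logaddexp(0.0, u)     # stage 2: soft-rectifying nonlinearity

def lnp_spike_count(x, rng):
    """Draw one spike count from the three-stage LNP cascade."""
    rate = g(w @ x)                    # filter the stimulus, then rectify
    return rng.poisson(rate)           # stage 3: Poisson spiking with mean `rate`

x = np.array([0.8, 0.1, -0.3])         # an example stimulus vector
count = lnp_spike_count(x, rng)
assert count >= 0
```

Model fitting inverts this generative process: given many (x, r) pairs, recover the filter and, in this paper's case, the nonlinearity.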
Intuitively, a GP defines a probability distribution over the infinite-dimensional space of functions by specifying a Gaussian distribution over its finite-dimensional marginals (i.e., the probability over the function values at any finite collection of points), with hyperparameters that control the function's variability and smoothness [14]. Although exact inference under a model with GP prior and Poisson observations is analytically intractable, a variety of approximate and sampling-based inference methods have been developed [15, 16]. Our work builds on a substantial literature in neuroscience that has used GP-based models to decode spike trains [17–19], estimate spatial receptive fields [20, 21], infer continuous spike rates from spike trains [22–24], infer common inputs [25], and extract low-dimensional latent variables from multi-neuron spiking activity [26, 27]. We focus on data from trial-based experiments where stimulus-response pairs (x, r) are sparse in the space of possible stimuli. We use a fixed inverse link function g to transform f(x) to a non-negative spike rate, which ensures the posterior over f is log-concave [6, 20]. This log-concavity justifies a Gaussian approximation to the posterior, which we use to perform rapid empirical Bayes estimation of hyperparameters [5, 28]. Our main contribution is an algorithm for optimal experimental design, which allows f to be characterized quickly and accurately from limited data [29, 30]. The method relies on uncertainty sampling [31], which involves selecting the stimulus x for which g(f(x)) is maximally uncertain given the data collected in the experiment so far. We apply our methods to the nonlinear color-tuning properties of macaque V1 neurons. We show that the GP-Poisson model provides a flexible, tractable model for these responses, and that optimal design can substantially reduce the number of stimuli required to characterize them.
2 GP-Poisson neural encoding model

2.1 Encoding model (likelihood)

We begin by defining a probabilistic encoding model for the neural response. Let r_i be an observed neural response (the spike count in some time interval T) at the i'th trial given the input stimulus x_i. Here, we will assume that x is a D-dimensional vector in the moderately low-dimensional neural feature space to which the neuron is sensitive, the output of the "L" stage in the LNP model. Under the encoding model (Fig. 1), an input vector x_i passes through a nonlinear function f, whose real-valued output is transformed to a positive spike rate through a (fixed) function g. The spike response is a Poisson random variable with mean g(f(x)), so the conditional probability of a stimulus-response pair is Poisson:

    p(r_i | x_i, f) = (1 / r_i!) λ_i^{r_i} e^{−λ_i},   λ_i = g(f(x_i)).    (1)

For a complete dataset, the log-likelihood is:

    L(f) = log p(r | X, f) = r^T log(g(f)) − 1^T g(f) + const,    (2)

where r = (r_1, ..., r_N)^T is a vector of spike responses, 1 is a vector of ones, and f = (f(x_1), ..., f(x_N))^T is shorthand for the vector defined by evaluating f at the points in X = {x_1, ..., x_N}. Note that although f is an infinite-dimensional object in the space of functions, the likelihood only depends on the value of f at the points in X.

In this paper, we fix the inverse-link function to g(f) = log(1 + exp(f)), which has the nice property that it grows linearly for large f and decays gracefully to zero for negative f. This allows us to place a Gaussian prior on f without allocating probability mass to negative spike rates, and obviates the need for constrained optimization of f (but see [22] for a highly efficient solution). Most importantly, for any g that is simultaneously convex and log-concave¹, the log-likelihood L(f) is concave in f, meaning it is free of non-global local extrema [6, 20]. Combining L with a log-concave prior (as we do in the next section) ensures the log-posterior is also concave.
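Equation 2 is a one-liner in practice. The sketch below (with illustrative spike counts and function values) also shows the softplus inverse link g(f) = log(1 + exp(f)) and its limiting behavior:

```python
import numpy as np

def g(f):
    """Inverse link g(f) = log(1 + exp(f)), computed stably via logaddexp."""
    return np.logaddexp(0.0, f)

def log_likelihood(r, f):
    """Poisson log-likelihood L(f) = r^T log(g(f)) - 1^T g(f), up to the
    constant -sum(log r_i!)  (Equation 2)."""
    lam = g(f)
    return float(r @ np.log(lam) - lam.sum())

r = np.array([0.0, 2.0, 5.0])     # spike counts (made up)
f = np.array([-1.0, 0.5, 2.0])    # function values at the three stimuli (made up)
ll = log_likelihood(r, f)

# g grows linearly for large f and decays to zero for negative f:
assert np.isclose(g(np.array([50.0]))[0], 50.0)
assert g(np.array([-50.0]))[0] < 1e-20
```

Because g is convex and log-concave, `log_likelihood` is concave in f, which is what makes the MAP problem of the next section well behaved.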
2.2 Gaussian Process prior

Gaussian processes (GPs) allow us to define a probability distribution over the infinite-dimensional space of functions by specifying a Gaussian distribution over a function's finite-dimensional marginals (i.e., the probability over the function values at any finite collection of points). The hyperparameters defining this prior are a mean μ_f and a kernel function k(x_i, x_j) that specifies the covariance between function values f(x_i) and f(x_j) for any pair of input points x_i and x_j. Thus, the GP prior over the function values f is given by

    p(f) = N(f | μ_f 1, K) = |2πK|^{−1/2} exp( −(1/2) (f − μ_f 1)^T K^{−1} (f − μ_f 1) ),    (3)

where K is a covariance matrix whose (i, j)'th entry is K_ij = k(x_i, x_j). Generally, the kernel controls the prior smoothness of f by determining how quickly the correlation between nearby function values falls off as a function of distance. (See [14] for a general treatment.) Here, we use a Gaussian kernel, since neural response nonlinearities are expected to be smooth in general:

    k(x_i, x_j) = ρ exp( −||x_i − x_j||^2 / (2τ) ),    (4)

where the hyperparameters ρ and τ control the marginal variance and smoothness scale, respectively. The GP therefore has three total hyperparameters, θ = {μ_f, ρ, τ}, which set the prior mean and covariance matrix over f for any collection of points in X.

2.3 MAP inference for f

The maximum a posteriori (MAP) estimate can be obtained by numerically maximizing the posterior for f. From Bayes' rule, the log-posterior is simply the sum of the log-likelihood (eq. 2) and log-prior (eq. 3) plus a constant:

    log p(f | r, X, θ) = r^T log(g(f)) − 1^T g(f) − (1/2) (f − μ_f)^T K^{−1} (f − μ_f) + const.    (5)

As noted above, this posterior has a unique maximum f_map so long as g is convex and log-concave. However, the solution vector f_map defined this way contains only the function values at the points in the training set X. How do we find the MAP estimate of f at other points not in our training set?
The GP prior provides a simple analytic formula for the maximum of the joint marginal containing the training data and any new point f* = f(x*), for a new stimulus x*. We have

    p(f*, f | x*, r, X, θ) = p(f* | f, θ) p(f | r, X, θ) = N(f* | μ*, σ*^2) p(f | r, X, θ),    (6)

where, from the GP prior, μ* = μ_f + k*^T K^{−1} (f − μ_f) and σ*^2 = k(x*, x*) − k*^T K^{−1} k* are the (f-dependent) mean and variance of f*, and the row vector k* = (k(x_1, x*), ..., k(x_N, x*)). This factorization arises from the fact that f* is conditionally independent of the data given the value of the function at X. Clearly, this posterior marginal (eq. 6) is maximized when f* = μ* and f = f_map.² Thus, for any collection of novel points X*, the MAP estimate for f(X*) is given by the mean of the conditional distribution over f* given f_map:

    p(f(X*) | X*, f_map, θ) = N( μ_f + K* K^{−1} (f_map − μ_f), K** − K* K^{−1} K*^T ),    (7)

where K*_il = k(x*_i, x_l) and K**_ij = k(x*_i, x*_j).

In practice, the prior covariance matrix K is often ill-conditioned when datapoints in X are closely spaced and the smoothing hyperparameter τ is large, making it impossible to numerically compute K^{−1}. When the number of points is not too large (N < 1000), we can address this by performing a singular value decomposition (SVD) of K and keeping only the singular vectors with singular value above some threshold. This results in a lower-dimensional numerical optimization problem, since we only have to search the space spanned by the singular vectors of K. We discuss strategies for scaling to larger datasets in the Discussion.

¹Such functions must grow monotonically at least linearly and at most exponentially [6]. Examples include the exponential, half-rectified linear, and log(1 + exp(f))^p for p ≥ 1.
²Note that this is not necessarily identical to the marginal MAP estimate of f* | x*, r, X, θ, which requires maximizing (eq. 6) integrated with respect to f.
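The conditional of Equation 7 is the standard GP predictive formula, and is easy to implement. This sketch (with arbitrary kernel hyperparameters and made-up training values f_map) uses a small jitter on the diagonal of K instead of the SVD truncation described above:

```python
import numpy as np

def kernel(X, Y, rho=1.0, tau=0.5):
    """Gaussian kernel k(x_i, x_j) = rho * exp(-||x_i - x_j||^2 / (2 tau)) (Eq. 4)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return rho * np.exp(-d2 / (2.0 * tau))

def conditional(Xtrain, f_map, Xstar, mu_f=0.0):
    """Mean mu_f + K* K^{-1}(f_map - mu_f) and covariance K** - K* K^{-1} K*^T
    of the GP conditional at new points Xstar (Equation 7)."""
    K = kernel(Xtrain, Xtrain) + 1e-8 * np.eye(len(Xtrain))  # jitter for conditioning
    Ks = kernel(Xstar, Xtrain)
    Kss = kernel(Xstar, Xstar)
    alpha = np.linalg.solve(K, f_map - mu_f)
    mean = mu_f + Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

Xtrain = np.array([[0.0], [1.0], [2.0]])   # three 1D training stimuli (made up)
f_map = np.array([0.5, 1.5, 0.2])          # hypothetical MAP values at Xtrain

# At a training input the conditional mean reproduces f_map (up to jitter),
# and the conditional variance collapses there.
mean, cov = conditional(Xtrain, f_map, Xtrain)
assert np.allclose(mean, f_map, atol=1e-5)
assert np.all(np.diag(cov) < 1e-5)
```

Evaluating `conditional` on a grid of novel points X* is how the fitted nonlinearity would be visualized between the sampled stimuli.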
2.4 Efficient evidence optimization for θ

The hyperparameters θ = {µ_f, ρ, τ} that control the GP prior have a major influence on the shape of the inferred nonlinearity, particularly in high dimensions and when data is scarce. A theoretically attractive and computationally efficient approach for setting θ is to maximize the evidence p(r|θ, X), also known as the marginal likelihood, a general approach known as empirical Bayes [5,14,28,32]. Here we describe a method for rapid evidence maximization that we will exploit to design an active learning algorithm in Section 3. The evidence can be computed by integrating the product of the likelihood and prior with respect to f, but it can also be obtained by solving for the (often neglected) denominator term in Bayes' rule:

p(r|θ) = ∫ p(r|f) p(f|θ) df = p(r|f) p(f|θ) / p(f|r, θ),   (8)

where we have dropped conditioning on X for notational convenience. For the GP-Poisson model here, this integral is not tractable analytically, but we can approximate it as follows. We begin with a well-known Gaussian approximation to the posterior known as the Laplace approximation, which comes from a 2nd-order Taylor expansion of the log-posterior around its maximum [28]:

p(f|r, θ) ≈ N(f | fmap, Λ),   Λ^{−1} = H + K^{−1},   (9)

where H = −∂²L(f)/∂f² is the Hessian (second-derivative matrix) of the negative log-likelihood, with L the log-likelihood (eq. 2) evaluated at fmap, and K^{−1} is the inverse prior covariance (eq. 3). This approximation is reasonable given that the posterior is guaranteed to be unimodal and log-concave. Plugging it into the denominator in (eq. 8) gives us a formula for evaluating the approximate evidence,

p(r|θ) ≈ exp(L(f)) N(f | µ_f 1, K) / N(f | fmap, Λ),   (10)

which we evaluate at f = fmap, since the Laplace approximation is most accurate there [20,33]. The hyperparameters θ directly affect the prior mean and covariance (µ_f, K), as well as the posterior mean and covariance (fmap, Λ), all of which are essential for evaluating the evidence.
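Evaluating eq. (10) at f = fmap reduces to three log-Gaussian terms. Below is a hedged Python/NumPy sketch under the assumption of a Poisson likelihood with exponential link, for which the negative log-likelihood Hessian is simply diag(e^f); the constant r! normalizer is omitted, and all names are ours.

```python
import numpy as np

def approx_log_evidence(f_map, r, K, mu_f, g=np.exp):
    """Laplace approximation to the log-evidence (eq. 10), evaluated at f = fmap.
    Assumes a Poisson likelihood with g = exp, so the negative log-likelihood
    Hessian is H = diag(g(fmap)). The r! normalizer is dropped."""
    H = np.diag(g(f_map))
    log_lik = r @ np.log(g(f_map)) - g(f_map).sum()
    resid = f_map - mu_f
    _, logdet_2piK = np.linalg.slogdet(2 * np.pi * K)
    log_prior = -0.5 * logdet_2piK - 0.5 * resid @ np.linalg.solve(K, resid)
    Lam = np.linalg.inv(H + np.linalg.inv(K))        # Laplace posterior covariance
    _, logdet_2piLam = np.linalg.slogdet(2 * np.pi * Lam)
    # subtract log N(fmap | fmap, Lam) = -0.5 log|2 pi Lam|
    return log_lik + log_prior + 0.5 * logdet_2piLam
```

For a well-behaved 1D example this approximation agrees closely with direct numerical integration of eq. (8), consistent with the log-concavity argument above.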
Finding fmap and Λ given θ requires numerical optimization of log p(f|r, θ), which is computationally expensive to perform for each search step in θ. To overcome this difficulty, we decompose the posterior moments (fmap, Λ) into terms that depend on θ and terms that do not, via a Gaussian approximation to the likelihood. The logic here is that a Gaussian posterior and prior imply a likelihood function proportional to a Gaussian, which in turn allows prior and posterior moments to be computed analytically for each θ. This trick is similar to that of the EP algorithm [34]: we divide a Gaussian component out of the Gaussian posterior and approximate the remainder as Gaussian. The resulting moments are H = Λ^{−1} − K^{−1} for the likelihood inverse-covariance (which is the Hessian of the negative log-likelihood from eq. 9), and m = H^{−1}(Λ^{−1} fmap − K^{−1} µ_f 1) for the likelihood mean, which comes from the standard formula for the product of two Gaussians. Our algorithm for evidence optimization proceeds as follows: (1) given the current hyperparameters θ_i, numerically maximize the posterior and form the Laplace approximation N(fmap_i, Λ_i); (2) compute the Gaussian "potential" N(m_i, H_i^{−1}) underlying the likelihood, given the current values of (fmap_i, Λ_i, θ_i), as described above; (3) find θ_{i+1} by maximizing the log-evidence, which is:

E(θ) = r^⊤ log(g(fmap)) − 1^⊤ g(fmap) − (1/2) log |K H_i + I| − (1/2)(fmap − µ_f 1)^⊤ K^{−1} (fmap − µ_f 1),   (11)

where fmap and Λ are updated using the H_i and m_i obtained in step (2), i.e., fmap = Λ(H_i m_i + K^{−1} µ_f 1) and Λ = (H_i + K^{−1})^{−1}. Note that this significantly expedites evidence optimization, since we do not have to numerically re-optimize fmap for each θ.
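The "divide out the prior, recombine with a new prior" step is a pair of standard Gaussian-product identities. A minimal Python/NumPy sketch (our own function names; µ_f is treated as a scalar mean) makes the round trip explicit:

```python
import numpy as np

def likelihood_potential(f_map, Lam, mu_f, K):
    """Divide the Gaussian prior out of the Laplace posterior N(fmap, Lam):
    H = Lam^{-1} - K^{-1},  m = H^{-1}(Lam^{-1} fmap - K^{-1} mu_f 1)."""
    Lam_inv, K_inv = np.linalg.inv(Lam), np.linalg.inv(K)
    H = Lam_inv - K_inv
    m = np.linalg.solve(H, Lam_inv @ f_map - K_inv @ (mu_f * np.ones(len(f_map))))
    return m, H

def refit_posterior(m, H, mu_f, K_new):
    """Recombine the fixed likelihood potential with a new prior (mu_f, K_new):
    Lam = (H + K_new^{-1})^{-1},  fmap = Lam (H m + K_new^{-1} mu_f 1)."""
    K_inv = np.linalg.inv(K_new)
    Lam = np.linalg.inv(H + K_inv)
    f_map = Lam @ (H @ m + K_inv @ (mu_f * np.ones(len(m))))
    return f_map, Lam
```

This is why step (3) is cheap: once (m, H) are fixed, updating (fmap, Λ) for a candidate θ is pure linear algebra, with no inner numerical optimization.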
Figure 2: Comparison of random and optimal design in a simulated experiment with a 1D nonlinearity. The true nonlinear response function g(f(x)) is in gray, the posterior mean in solid black, the 95% confidence interval in dotted black, and the stimuli in blue dots. A (top): Random design: responses were measured with 20 (left) and 100 (right) additional stimuli, with stimuli sampled uniformly over the interval shown on the x axis. A (bottom): Optimal design: responses were measured with the same numbers of additional stimuli selected by uncertainty sampling (see text). B: Mean square error as a function of the number of stimulus-response pairs. The optimal design achieved half the error rate of the random design experiment.

3 Optimal design: uncertainty sampling

So far, we have introduced an efficient algorithm for estimating the nonlinearity f and hyperparameters θ for an LNP encoding model under a GP prior. Here we introduce a method for adaptively selecting stimuli during an experiment (often referred to as active learning or optimal experimental design) to minimize the amount of data required to estimate f [29]. The basic idea is that we should select stimuli that maximize the expected information gained about the model parameters. This information gain of course depends on the posterior distribution over the parameters given the data collected so far. Uncertainty sampling [31] is an algorithm that is appropriate when the model parameters and stimulus space are in a 1-1 correspondence. It involves selecting the stimulus x for which the posterior over the parameter f(x) has highest entropy, which in the case of a Gaussian posterior corresponds to the highest posterior variance.
Here we alter the algorithm slightly to select stimuli for which we are most uncertain about the spike rate g(f(x)), not (as stated above) the stimuli where we are most uncertain about the underlying function f(x). The rationale for this approach is that we are generally more interested in the neuron's spike rate as a function of the stimulus (which involves the inverse link function g) than in the parameters we have used to define that function. Moreover, for any link function that maps R to the positive reals R+, as required for Poisson models, we will have unavoidable uncertainty about negative values of f, which will not be overcome by sampling small (integer) spike-count responses. Our strategy therefore focuses on uncertainty in the expected spike rate rather than uncertainty in f. Our method proceeds as follows. Given the data observed up to a certain time in the experiment, we define a grid of (evenly-spaced) points {x*_j} as candidate next stimuli. For each point, we compute the posterior uncertainty γ_j about the spike rate g(f(x*_j)) using the delta method, i.e., γ_j = g′(f(x*_j)) σ_j, where σ_j is the posterior standard deviation (square root of the posterior variance) at f(x*_j) and g′ is the derivative of g with respect to its argument. The stimulus for trial t + 1, given all data observed up to time t, is selected randomly from the set

x_{t+1} ∈ {x*_j | γ_j ≥ γ_i ∀i},   (12)

that is, the set of all stimuli for which the uncertainty γ is maximal. To find {σ_j} at each candidate point, we must first update θ and fmap. After each trial, we update fmap by numerically optimizing the posterior, then update the hyperparameters using (eq. 11), and then numerically re-compute fmap and Λ given the new θ. The method is summarized in Algorithm 1, and runtimes are shown in Fig. 5.

Algorithm 1 Optimal design for nonlinearity estimation under a GP-Poisson model
1.
given the current data D_t = {x_1, ..., x_t, r_1, ..., r_t}, the posterior mode fmap_t, and hyperparameters θ_t, compute the posterior mean and standard deviation (f*_map, σ*) at a grid of candidate stimulus locations {x*}.
2. select the element of {x*} for which γ* = g′(f*_map) σ* is maximal
3. present the selected x_{t+1} and record the neural response r_{t+1}
4. find fmap_{t+1} | D_{t+1}, θ_t; update θ_{t+1} by maximizing the evidence; find fmap_{t+1} | D_{t+1}, θ_{t+1}

4 Simulations

We tested our method in simulation using a 1-dimensional feature space, where it is easy to visualize the nonlinearity and the uncertainty of our estimates (Fig. 2). The stimulus space was taken to be the range [0, 100], the true f was a sinusoid, and spike responses were simulated as Poisson with rate g(f(x)). We compared the estimate of g(f(x)) obtained using optimal design to the estimate obtained with "random sampling", i.e., stimuli drawn uniformly from the stimulus range. Fig. 2 shows the estimates of g(f(x)) after 20 and 100 trials using each method, along with the marginal posterior standard deviation, which provides a ±2 SD Bayesian confidence interval for the estimate. The optimal design method effectively decreased the high variance in the middle of the range (near 50) because it drew more samples where uncertainty about the spike rate was higher (due to the fact that variance increases with mean for Poisson neurons). The estimates using random sampling (A, top) were less accurate because random sampling drew more points in the tails, where the variance was originally lower than in the center. We also examined the error of each method as a function of the number of data points. For each number of data points we drew 100 samples and computed the average error between the estimate and the true g(f(x)). As shown in (B), uncertainty sampling achieved roughly half the error rate of random sampling after 20 datapoints.
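The stimulus-selection rule of eq. (12), with the delta-method uncertainty γ_j = g′(f(x*_j)) σ_j, can be sketched in a few lines of Python (a toy illustration with our own names; g′ = exp corresponds to the exponential-link assumption):

```python
import numpy as np

def select_next_stimulus(candidates, post_mean, post_var, gprime=np.exp, seed=0):
    """Uncertainty sampling on the spike rate (eq. 12): compute
    gamma_j = g'(f(x_j)) * sigma_j via the delta method, then pick uniformly
    at random among the candidates attaining the maximal gamma."""
    rng = np.random.default_rng(seed)
    gamma = gprime(post_mean) * np.sqrt(post_var)
    best = np.flatnonzero(np.isclose(gamma, gamma.max()))
    return candidates[rng.choice(best)]
```

Note that a candidate with moderate posterior variance but a high posterior mean can still win, since g′ grows with f for an exponential link; this is exactly the behavior that concentrates samples where spike-rate uncertainty (rather than f-uncertainty) is largest.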
5 Experiments

Figure 3: Raw experimental data: stimuli in 3D cone-contrast space (above) and recorded spike counts (below) during the first 60 experimental trials. Several (3-6) stimulus staircases along different directions in color space were randomly interleaved to avoid the effects of adaptation; a color direction is defined as the relative proportions of L, M, and S cone contrasts, with [0 0 0] corresponding to a neutral gray (zero-contrast) stimulus. In each color direction, contrast was actively titrated with the aim of evoking a response of 29 spikes/sec. This sampling procedure permitted a broad survey of the stimulus space, with the objective that many stimuli evoked a statistically reliable but non-saturating response. In all, 677 stimuli in 65 color directions were presented for this neuron.

We recorded from a V1 neuron in an awake, fixating rhesus monkey while Gabor patterns with varying color and contrast were presented at the receptive field. The orientation and spatial frequency of the Gabor were fixed at the neuron's preferred values, and each pattern drifted at 3 Hz for 667 ms. Contrast was varied using multiple interleaved staircases along different axes in color space, and spikes were counted during a 557 ms window beginning 100 ms after stimulus onset. The staircase design was used because the experiments were carried out prior to formulating the optimal design methods described in this paper. However, we analyze them here via a "simulated optimal design experiment", in which we choose stimuli sequentially from the list of stimuli actually presented during the experiment, in an order determined by our information-theoretic criterion. See the Fig. 3 caption for more details of the experimental recording.
Figure 4: One- and two-dimensional conditional "slices" through the 3D nonlinearity of a V1 simple cell in cone-contrast space. A: 1D conditionals showing spike rate as a function of L, M, and S cone contrast, respectively, with the other cone contrasts fixed to zero. Traces show the posterior mean and ±2 SD credible interval given all datapoints (solid and dotted gray), and the posterior mean given only 150 data points selected randomly (black) or by optimal design (red), carried out by drawing a subset of the data points actually collected during the experiment. Note that even with only 1/4 of the data, the optimal design estimate is nearly identical to the estimate obtained from all 677 datapoints. B: 2D conditionals on M and L (first row), S and L (second row), and M and S (third row) cone contrasts, respectively, with the remaining cone contrast set to zero. 2D conditionals using optimal design sampling (middle column) with 150 data points are much closer to the 2D conditionals using all data (right column) than those from a random sub-sampling of 150 points (left column).

We first used the entire dataset (677 stimulus-response pairs) to find the posterior maximum fmap, with hyperparameters set by maximizing the evidence (sequential optimization of fmap and θ (eq. 11) until convergence). Fig. 4 shows 1D and 2D conditional slices through the estimated 3D nonlinearity g(f(x)), with contour plots constructed using the MAP estimate of f on a fine grid of points. The contours for a neuron with linear summation of cone contrasts followed by an output nonlinearity (i.e., as assumed by the standard model of V1 simple cells) would consist of straight lines.
The curvature observed in the contour plots (Fig. 4B) indicates that cone contrasts are summed together in a highly nonlinear fashion, especially for the L and M cones (top). We then performed a simulated optimal design experiment by selecting from the 677 stimulus-response pairs collected during the experiment and re-ordering them greedily according to the uncertainty sampling algorithm described above. We compared the estimate obtained using only 1/4 of the data (150 points) with the estimate obtained from a random sub-sample of 150 data points (Fig. 4). Using only 150 data points, the conditionals of the estimate using uncertainty sampling were almost identical to those using all data (677 points). Although our software implementation of the optimal design method was crude (using Matlab's fminunc twice to find fmap and fmincon once to optimize the hyperparameters during each inter-trial interval), the speed was more than adequate for the experimental data collected (Fig. 5A) on a machine with an Intel 3.33 GHz XEON processor. The largest bottleneck by far was computing the eigendecomposition of K for each search step for θ. We briefly discuss how to improve the speed of our algorithm in the Discussion. Lastly, we added a recursive filter h to the model (Fig. 1) to incorporate the effects of spike history on the neuron's response, allowing us to account for the possible effects of adaptation on the spike counts obtained. We computed the maximum a posteriori (MAP) estimate for h under a temporal smoothing prior (Fig. 5). It shows that the neuron's response has a mild dependence on its recent spike history, with a self-exciting effect of spikes within the last 25 s. We evaluated the performance of the augmented model by holding out a random 10% of the data for cross-validation. Prediction performance on test data was more accurate by an average of 0.2 spikes per trial in predicted spike count, a 4 percent reduction in cross-validation error compared to the original model.

Figure 5: Comparison of run time and error of the optimal design method in simulated experiments obtained by resampling experimental data. A: The run time for uncertainty sampling (including the posterior update and the evidence optimization) as a function of the number of data points observed. (The grid of "candidate" stimuli {x*} was the subset of stimuli in the experimental dataset not yet selected, but the speed was not noticeably affected by scaling to much larger sets of candidate stimuli.) The black dotted line shows the mean inter-trial interval of 677 ms. B: The mean squared error between the estimate obtained using each sampling method and that obtained using the full dataset. Note that the error of uncertainty sampling with 150 points is even lower than that of random sampling with 300 data points. C: Estimated response-history filter h, which describes how recent spiking influences the neuron's spike rate. This neuron shows a self-excitatory influence on the time-scale of 25 s, with self-suppression on a longer scale of approximately 1 minute.

6 Discussion

We have developed an algorithm for optimal experimental design, which allows the nonlinearity in a cascade neural encoding model to be characterized quickly and accurately from limited data. The method relies on a fast technique for updating the hyperparameters using a Gaussian factorization of the Laplace approximation to the posterior, which removes the need to numerically recompute the MAP estimate as we optimize the hyperparameters.
We described a method for optimal experimental design, based on uncertainty sampling, to reduce the number of stimuli required to estimate such response functions. We applied our method to the nonlinear color-tuning properties of macaque V1 neurons and showed that the GP-Poisson model provides a flexible, tractable model for these responses, and that optimal design can substantially reduce the number of stimuli required to characterize them. One additional virtue of the GP-Poisson model is that conditionals and marginals of the high-dimensional nonlinearity are straightforward to compute, making it easy to visualize their lower-dimensional slices and projections (as we have done in Fig. 4). We added a history term to the LNP model in order to incorporate the effects of recent spike history on the spike rate (Fig. 5), which provided a very slight improvement in prediction accuracy. We expect the ability to incorporate dependencies on spike history to be important for the success of optimal design experiments, especially with neurons that exhibit strong spike-rate adaptation [30]. One potential criticism of our approach is that uncertainty sampling in unbounded spaces is known to "run away from the data", repeatedly selecting stimuli that are far from previous measurements. We wish to point out that in neural applications the stimulus space is always bounded (e.g., by the gamut of the monitor), and in our case stimuli at the corners of the space are actually helpful for initializing estimates of the range and smoothness of the function. In future work, we will improve the speed of the algorithm for use in real-time neurophysiology experiments, using analytic first and second derivatives for evidence optimization and exploring approximate methods for sparse GP inference [35]. We will also examine kernel functions with a more tractable matrix inverse [20], and test other information-theoretic data selection criteria for response function estimation [36].

References

[1] E.
P. Simoncelli, J. W. Pillow, L. Paninski, and O. Schwartz. The Cognitive Neurosciences, III, chapter 23, pages 327–338. MIT Press, Cambridge, MA, October 2004.
[2] R. R. de Ruyter van Steveninck and W. Bialek. Proc. R. Soc. Lond. B, 234:379–414, 1988.
[3] E. J. Chichilnisky. Network: Computation in Neural Systems, 12:199–213, 2001.
[4] F. Theunissen, S. David, N. Singh, A. Hsu, W. Vinje, and J. Gallant. Network: Computation in Neural Systems, 12:289–316, 2001.
[5] M. Sahani and J. Linden. NIPS, 15, 2003.
[6] L. Paninski. Network: Computation in Neural Systems, 15:243–262, 2004.
[7] Tatyana Sharpee, Nicole C. Rust, and William Bialek. Neural Comput, 16(2):223–250, Feb 2004.
[8] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Journal of Vision, 6(4):484–507, 2006.
[9] J. W. Pillow and E. P. Simoncelli. Journal of Vision, 6(4):414–428, 2006.
[10] Misha B. Ahrens, Jennifer F. Linden, and Maneesh Sahani. J Neurosci, 28(8):1929–1942, Feb 2008.
[11] Nicole C. Rust, Odelia Schwartz, J. Anthony Movshon, and Eero P. Simoncelli. Neuron, 46(6):945–956, Jun 2005.
[12] I. DiMatteo, C. Genovese, and R. Kass. Biometrika, 88:1055–1073, 2001.
[13] S. F. Martins, L. A. Sousa, and J. C. Martins. ICIP 2007, IEEE International Conference on Image Processing, volume 3, pages III-309. IEEE, 2007.
[14] Carl Rasmussen and Chris Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[15] Liam Paninski, Yashar Ahmadian, Daniel Gil Ferreira, Shinsuke Koyama, Kamiar Rahnama Rad, Michael Vidne, Joshua Vogelstein, and Wei Wu. J Comput Neurosci, Aug 2009.
[16] Jarno Vanhatalo, Ville Pietiläinen, and Aki Vehtari. Statistics in Medicine, 29(15):1580–1607, July 2010.
[17] E. Brown, L. Frank, D. Tang, M. Quirk, and M. Wilson. Journal of Neuroscience, 18:7411–7425, 1998.
[18] W. Wu, Y. Gao, E. Bienenstock, J. P. Donoghue, and M. J. Black. Neural Computation, 18(1):80–118, 2006.
[19] Y. Ahmadian, J. W. Pillow, and L. Paninski. Neural Comput, 23(1):46–96, Jan 2011.
[20] K. R.
Rad and L. Paninski. Network: Computation in Neural Systems, 21(3-4):142–168, 2010.
[21] Jakob H. Macke, Sebastian Gerwinn, Leonard E. White, Matthias Kaschube, and Matthias Bethge. Neuroimage, 56(2):570–581, May 2011.
[22] John P. Cunningham, Krishna V. Shenoy, and Maneesh Sahani. Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 192–199, New York, NY, USA, 2008. ACM.
[23] R. P. Adams, I. Murray, and D. J. C. MacKay. Proceedings of the 26th Annual International Conference on Machine Learning. ACM, New York, NY, USA, 2009.
[24] Todd P. Coleman and Sridevi S. Sarma. Neural Computation, 22(8):2002–2030, 2010.
[25] J. E. Kulkarni and L. Paninski. Network: Computation in Neural Systems, 18(4):375–407, 2007.
[26] A. C. Smith and E. N. Brown. Neural Computation, 15(5):965–991, 2003.
[27] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Journal of Neurophysiology, 102(1):614, 2009.
[28] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[29] D. MacKay. Neural Computation, 4:589–603, 1992.
[30] J. Lewi, R. Butera, and L. Paninski. Neural Computation, 21(3):619–687, 2009.
[31] David D. Lewis and William A. Gale. Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3–12. Springer-Verlag, 1994.
[32] G. Casella. American Statistician, pages 83–87, 1985.
[33] J. W. Pillow, Y. Ahmadian, and L. Paninski. Neural Comput, 23(1):1–45, Jan 2011.
[34] T. P. Minka. UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362–369, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
[35] E. Snelson and Z. Ghahramani. Advances in Neural Information Processing Systems, 18:1257, 2006.
[36] Andreas Krause, Ajit Singh, and Carlos Guestrin. J. Mach. Learn. Res., 9:235–284, June 2008.
Online Submodular Set Cover, Ranking, and Repeated Active Learning Andrew Guillory Department of Computer Science University of Washington guillory@cs.washington.edu Jeff Bilmes Department of Electrical Engineering University of Washington bilmes@ee.washington.edu Abstract We propose an online prediction version of submodular set cover with connections to ranking and repeated active learning. In each round, the learning algorithm chooses a sequence of items. The algorithm then receives a monotone submodular function and suffers loss equal to the cover time of the function: the number of items needed, when items are selected in order of the chosen sequence, to achieve a coverage constraint. We develop an online learning algorithm whose loss converges to approximately that of the best sequence in hindsight. Our proposed algorithm is readily extended to a setting where multiple functions are revealed at each round and to bandit and contextual bandit settings. 1 Problem In an online ranking problem, at each round we choose an ordered list of items and then incur some loss. Problems with this structure include search result ranking, ranking news articles, and ranking advertisements. In search result ranking, each round corresponds to a search query and the items correspond to search results. We consider online ranking problems in which the loss incurred at each round is the number of items in the list needed to achieve some goal. For example, in search result ranking a reasonable loss is the number of results the user needs to view before they find the complete information they need. We are specifically interested in problems where the list of items is a sequence of questions to ask or tests to perform in order to learn. In this case the ranking problem becomes a repeated active learning problem. For example, consider a medical diagnosis problem where at each round we choose a sequence of medical tests to perform on a patient with an unknown illness. 
The loss is the number of tests we need to perform in order to make a confident diagnosis. We propose an approach to these problems using a new online version of submodular set cover. A set function F(S) defined over a ground set V is called submodular if it satisfies the following diminishing returns property: for every A ⊆ B ⊆ V \ {v}, F(A + v) − F(A) ≥ F(B + v) − F(B). Many natural objectives measuring information, influence, and coverage turn out to be submodular [1, 2, 3]. A set function is called monotone if for every A ⊆ B, F(A) ≤ F(B), and normalized if F(∅) = 0. Submodular set cover is the problem of selecting an S ⊆ V minimizing |S| under the constraint that F(S) ≥ 1, where F is submodular, monotone, and normalized (note we can always rescale F). This problem is NP-hard, but a greedy algorithm gives a solution with cost less than 1 + ln(1/ε) times that of the optimal solution, where ε is the smallest non-zero gain of F [4]. We propose the following online prediction version of submodular set cover, which we simply call online submodular set cover. At each time step t = 1 . . . T we choose a sequence of elements S^t = (v^t_1, v^t_2, . . . , v^t_n), where each v^t_i is chosen from a ground set V of size n (we use a superscript for rounds of the online problem and a subscript for other indices). After choosing S^t, an adversary reveals a submodular, monotone, normalized function F^t, and we suffer loss ℓ(F^t, S^t) where

ℓ(F^t, S^t) ≜ min( {n} ∪ {i : F^t(S^t_i) ≥ 1} )   (1)

and S^t_i ≜ ∪_{j≤i} {v^t_j} is defined to be the set containing the first i elements of S^t (let S^t_0 ≜ ∅). Note ℓ can be equivalently written ℓ(F^t, S^t) ≜ Σ^n_{i=0} I(F^t(S^t_i) < 1), where I is the indicator function. Intuitively, ℓ(F^t, S^t) corresponds to a bounded version of cover time: it is the number of items, up to n, needed to achieve F^t(S) ≥ 1 when we select items in the order specified by S^t. Thus, if coverage is not achieved, we suffer a loss of n.
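The cover-time loss of eq. (1) is easy to state in code. Below is a minimal Python sketch (our own names), together with a toy coverage objective — the fraction of a hypothetical relevant set R that has been covered — which is monotone, normalized, and submodular:

```python
def cover_time_loss(F, seq):
    """Bounded cover time (eq. 1): the number of items of seq needed, taken in
    order, to reach F(S) >= 1; equals len(seq) if coverage is never achieved."""
    covered = set()
    for i, v in enumerate(seq, start=1):
        covered.add(v)
        if F(covered) >= 1:
            return i
    return len(seq)

# Toy objective: fraction of a relevant set R covered so far (assumed example).
R = {"a", "b"}
F = lambda S: len(S & R) / len(R)
```

For instance, the ordering ("c", "a", "b", "d") incurs loss 3, since coverage is first achieved at the third prefix, while an ordering that places "a" and "b" first incurs loss 2.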
We assume that F^t(V) ≥ 1 (therefore coverage is achieved if S^t does not contain duplicates) and that the sequence of functions (F^t)_t is chosen in advance (by an oblivious adversary). The goal of our learning algorithm is to minimize the total loss Σ_t ℓ(F^t, S^t). To make the problem clear, we present it first in its simplest, full-information version. However, we will later consider more complex variations including (1) a version where we only produce a list of length k ≤ n instead of n, (2) a multiple-objective version where a set of objectives F^t_1, F^t_2, . . . , F^t_m is revealed each round, (3) a bandit (partial information) version where we do not get full access to F^t and instead only observe F^t(S^t_1), F^t(S^t_2), . . . , F^t(S^t_n), and (4) a contextual bandit version where there is some context associated with each round. We argue that online submodular set cover, as we have defined it, is an interesting and useful model for ranking and repeated active learning problems. In a search result ranking problem, after presenting search results to a user we can obtain implicit feedback from this user (e.g., clicks, time spent viewing each result) to determine which results were actually relevant. We can then construct an objective F^t(S) such that F^t(S) ≥ 1 iff S covers or summarizes the relevant results. Alternatively, we can avoid explicitly constructing an objective by considering the bandit version of the problem where we only observe the values F^t(S^t_i). For example, if the user clicked on k total results then we can let F^t(S^t_i) ≜ c_i/k, where c_i ≤ k is the number of results in the subset S^t_i which were clicked. Note that the user may click an arbitrary set of results in an arbitrary order, and the user's decision whether or not to click a result may depend on previously viewed and clicked results. All that we assume is that there is some unknown submodular function explaining the click counts.
If the user clicks on a small number of very early results, then coverage is achieved quickly and the ordering is desirable. This coverage objective makes sense if we assume that the set of results the user clicked are of roughly equal importance and together summarize the results of interest to the user. In the medical diagnosis application, we can define F^t(S) to be proportional to the number of candidate diseases which are eliminated after performing the set of tests S on patient t. If we assume that a particular test result always eliminates a fixed set of candidate diseases, then this function is submodular. Specifically, this objective is the reduction in the size of the version space [5, 6]. Other active learning problems can also be phrased in terms of satisfying a submodular coverage constraint, including problems that allow for noise [7]. Note that, as in the search result ranking problem, F^t is not initially known but can be inferred after we have chosen S^t and suffered loss ℓ(F^t, S^t).

2 Background and Related Work

Recently, Azar and Gamzu [8] extended the O(ln(1/ε)) greedy approximation algorithm for submodular set cover to the more general problem of minimizing the average cover time of a set of objectives. Here ε is the smallest non-zero gain over all the objectives. Azar and Gamzu [8] call this problem ranking with submodular valuations. More formally, we have a known set of functions F_1, F_2, . . . , F_m, each with an associated weight w_i. The goal is then to choose a permutation S of the ground set V to minimize Σ^m_{i=1} w_i ℓ(F_i, S). The offline approximation algorithm for ranking with submodular valuations will be a crucial tool in our analysis of online submodular set cover. In particular, this offline algorithm can be viewed as constructing the best single permutation S for a sequence of objectives F^1, F^2, . . . , F^T in hindsight (i.e., after all the objectives are known).
Recently the ranking with submodular valuations problem was extended to metric costs [9]. Online learning is a well-studied problem [10]. In one standard setting, the online learning algorithm has a collection of actions A, and at each time step t the algorithm picks an action S^t ∈ A. The learning algorithm then receives a loss function ℓ^t, and the algorithm incurs the loss value for the action it chose, ℓ^t(S^t). We assume ℓ^t(S^t) ∈ [0, 1] but make no other assumptions about the form of loss. The performance of an online learning algorithm is often measured in terms of regret, the difference between the loss incurred by the algorithm and the loss of the best single fixed action in hindsight:

R = Σ^T_{t=1} ℓ^t(S^t) − min_{S∈A} Σ^T_{t=1} ℓ^t(S).

There are randomized algorithms which guarantee E[R] ≤ √(T ln |A|) for adversarial sequences of loss functions [11]. Note that because E[R] = o(T), the per-round regret approaches zero. In the bandit version of this problem the learning algorithm only observes ℓ^t(S^t) [12]. Our problem fits in this standard setting with A chosen to be the set of all ground set permutations (v_1, v_2, . . . , v_n) and ℓ^t(S^t) ≜ ℓ(F^t, S^t)/n. However, in this case A is very large, so standard online learning algorithms which keep weight vectors of size |A| cannot be directly applied. Furthermore, our problem generalizes an NP-hard offline problem which has no polynomial time approximation scheme, so it is not likely that we will be able to derive any efficient algorithm with o(T ln |A|) regret. We therefore instead consider α-regret, the loss incurred by the algorithm as compared to α times the best fixed prediction:

R_α = Σ^T_{t=1} ℓ^t(S^t) − α min_{S∈A} Σ^T_{t=1} ℓ^t(S).

α-regret is a standard notion of regret for online versions of NP-hard problems. If we can show that R_α grows sublinearly with T, then we have shown that the loss converges to that of an offline approximation with ratio α.
Streeter and Golovin [13] give online algorithms for the closely related problems of submodular function maximization and min-sum submodular set cover. In online submodular function maximization, the learning algorithm selects a set St with |St| ≤ k before F t is revealed, and the goal is to maximize Σt F t(St). This problem differs from ours in that our problem is a loss minimization problem as opposed to an objective maximization problem. Online min-sum submodular set cover is similar to online submodular set cover except the loss is not cover time but rather ˆℓ(F t, St) ≜ Σn i=0 max(1 − F t(St i), 0). (2) Min-sum submodular set cover penalizes 1 − F t(St i) where submodular set cover uses I(F t(St i) < 1). We claim that for certain applications the hard threshold makes more sense. For example, in repeated active learning problems, minimizing Σt ℓ(F t, St) naturally corresponds to minimizing the number of questions asked. Minimizing Σt ˆℓ(F t, St) does not have this interpretation, as it charges less for questions asked when F t is closer to 1. One might hope that minimizing ℓ could be reduced to or shown equivalent to minimizing ˆℓ. This is not likely to be the case, as the approximation algorithm of Streeter and Golovin [13] does not carry over to online submodular set cover. Their online algorithm is based on approximating an offline algorithm which greedily maximizes Σt min(F t(S), 1). Azar and Gamzu [8] show that this offline algorithm, which they call the cumulative greedy algorithm, does not achieve a good approximation ratio for average cover time. Radlinski et al. [14] consider a special case of online submodular function maximization applied to search result ranking. In their problem the objective function is assumed to be a binary valued submodular function with 1 indicating the user clicked on at least one document. The goal is then to maximize the number of queries which receive at least one click.
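To make the contrast concrete, here is a small sketch comparing the cover-time loss ℓ (hard threshold) with the min-sum loss ˆℓ of Eq. (2) on a hypothetical coverage objective:

```python
# Sketch: the cover-time loss ℓ counts prefixes that are not yet covered,
# while the min-sum loss of Eq. (2) charges the residual 1 - F linearly.
# The toy objective F (each distinct item covers 1/3) is hypothetical.
def prefixes(S):
    return [set(S[:i]) for i in range(len(S) + 1)]

def F(S):
    return min(len(S) / 3.0, 1.0)

def cover_loss(F, S):    # ℓ: number of prefixes with F < 1 (hard threshold)
    return sum(1 for P in prefixes(S) if F(P) < 1)

def minsum_loss(F, S):   # ℓ̂: Eq. (2), linear penalty for partial coverage
    return sum(max(1 - F(P), 0) for P in prefixes(S))

S = ["a", "b", "c", "d"]
print(cover_loss(F, S), minsum_loss(F, S))  # 3 vs 2.0 on this instance
```

On this instance the sequence covers F at its third element, so ℓ = 3 (three uncovered prefixes, counting the empty one), while ˆℓ = 1 + 2/3 + 1/3 = 2: the min-sum loss discounts questions asked while F is close to 1, exactly the behavior the text argues against for question-counting applications.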
For binary valued functions ˆℓ and ℓ are the same, so in this setting minimizing the number of documents a user must view before clicking on a result is a min-sum submodular set cover problem. Our results generalize this problem to minimizing the number of documents a user must view before some possibly non-binary submodular objective is met. With non-binary objectives we can incorporate richer implicit feedback such as multiple clicks and time spent viewing results. Slivkins et al. [15] generalize the results of Radlinski et al. [14] to a metric space bandit setting. Our work differs from the online set cover problem of Alon et al. [16]; this problem is a single set cover problem in which the items that need to be covered are revealed one at a time. Kakade et al. [17] analyze general online optimization problems with linear loss. If we assume that the functions F t are all taken from a known finite set of functions F then we have linear loss over a |F| dimensional space. However, this approach gives poor dependence on |F|. 3 Offline Analysis In this work we present an algorithm for online submodular set cover which extends the offline algorithm of Azar and Gamzu [8] for the ranking with submodular valuations problem. Algorithm 1 shows this offline algorithm, called the adaptive residual updates algorithm. Here we use T to denote the number of objective functions and superscript t to index the set of objectives. This notation is chosen to make the connection to the subsequent online algorithm clear: our online algorithm will approximately implement Algorithm 1 in an online setting, and in this case the set of objectives in the offline algorithm will be the sequence of objectives in the online problem.

Algorithm 1 Offline Adaptive Residual
Input: Objectives F 1, F 2, . . . , F T
Output: Sequence S1 ⊂ S2 ⊂ . . . ⊂ Sn
  S0 ← ∅
  for i ← 1 . . . n do
    v ← argmax_{v∈V} Σt δ(F t, Si−1, v)
    Si ← Si−1 + v
  end for

Figure 1: Histograms used in offline analysis
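Algorithm 1's greedy rule, driven by the relative gain δ(F, S, v) = min((F(S + v) − F(S)) / (1 − F(S)), 1), can be sketched directly; the two toy objectives below are hypothetical, not from the paper:

```python
# Sketch of Algorithm 1 (offline adaptive residual greedy): each step
# picks the element with the largest total relative gain δ summed over
# objectives. The two toy coverage objectives are hypothetical.
def delta(F, S, v):
    """Relative gain δ(F, S, v): gain normalized by the residual 1 - F(S)."""
    if F(S) >= 1:
        return 0.0
    return min((F(S | {v}) - F(S)) / (1.0 - F(S)), 1.0)

def offline_adaptive_residual(objectives, V):
    S, order = set(), []
    for _ in range(len(V)):
        v = max(V, key=lambda u: sum(delta(F, S, u) for F in objectives))
        S = S | {v}
        order.append(v)
    return order

F1 = lambda S: min(len(S & {"a", "b"}) / 2.0, 1.0)  # needs both a and b
F2 = lambda S: 1.0 if "c" in S else 0.0             # needs c only
order = offline_adaptive_residual([F1, F2], ["a", "b", "c"])
print(order)  # ['c', 'a', 'b']
```

The first step illustrates the "residual" intuition: "c" fully covers F2 (relative gain 1) while "a" or "b" only half-covers F1 (relative gain 0.5), so the algorithm selects "c" first even though the absolute gains are comparable.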
The algorithm is a greedy algorithm similar to the standard algorithm for submodular set cover. The crucial difference is that instead of a normal gain term of F t(S + v) − F t(S) it uses a relative gain term

δ(F t, S, v) ≜ min((F t(S + v) − F t(S)) / (1 − F t(S)), 1) if F t(S) < 1, and 0 otherwise.

The intuition is that (1) a small gain for F t matters more if F t is close to being covered (F t(S) close to 1) and (2) gains for F t with F t(S) ≥ 1 do not matter, as these functions are already covered. The main result of Azar and Gamzu [8] is that Algorithm 1 is approximately optimal. Theorem 1 ([8]). The loss Σt ℓ(F t, S) of the sequence produced by Algorithm 1 is within a factor of 4(ln(1/ϵ) + 2) of that of any other sequence. We note Azar and Gamzu [8] allow for weights for each F t. We omit weights for simplicity. Also, Azar and Gamzu [8] do not allow the sequence S to contain duplicates while we do; selecting a ground set element twice has no benefit, but allowing duplicates will be convenient for the online analysis. The proof of Theorem 1 involves representing solutions to the submodular ranking problem as histograms. Each histogram is defined such that the area of the histogram is equal to the loss of the corresponding solution. The approximate optimality of Algorithm 1 is shown by proving that the histogram for the solution it finds is approximately contained within the histogram for the optimal solution. In order to convert Algorithm 1 into an online algorithm, we will need a stronger version of Theorem 1. Specifically, we will need to show that when there is some additive error in the greedy selection rule, Algorithm 1 is still approximately optimal. For the optimal solution S∗ = argmin_{S∈V^n} Σt ℓ(F t, S) (V^n is the set of all length-n sequences of ground set elements), define a histogram h∗ with T columns, one for each function F t. Let the tth column have width 1 and height equal to ℓ(F t, S∗).
Assume that the columns are ordered by increasing cover time so that the histogram is monotone non-decreasing. Note that the area of this histogram is exactly the loss of S∗. For a sequence of sets ∅ = S0 ⊆ S1 ⊆ . . . ⊆ Sn (e.g., those found by Algorithm 1) define a corresponding sequence of truncated objectives

ˆF t i (S) ≜ min((F t(S ∪ Si−1) − F t(Si−1)) / (1 − F t(Si−1)), 1) if F t(Si−1) < 1, and 1 otherwise.

ˆF t i (S) is essentially F t except with (1) Si−1 given “for free”, and (2) values rescaled to range between 0 and 1. We note that ˆF t i is submodular and that if F t(S) ≥ 1 then ˆF t i (S) ≥ 1. In this sense ˆF t i is an easier objective than F t. Also, for any v, ˆF t i ({v}) − ˆF t i (∅) = δ(F t, Si−1, v). In other words, the gain of ˆF t i at ∅ is the normalized gain of F t at Si−1. This property will be crucial. We next define truncated versions of h∗: ˆh1, ˆh2, . . . , ˆhn, which correspond to the loss of S∗ for the easier covering problems involving ˆF t i . For each j ∈ 1 . . . n, let ˆhi have T columns of height j, with the tth such column of width ˆF t i (S∗ j ) − ˆF t i (S∗ j−1) (some of these columns may have 0 width). Assume again the columns are ordered by height. Figure 1 shows h∗ and ˆhi. We assume without loss of generality that F t(S∗ n) ≥ 1 for every t (clearly some choice of S∗ contains no duplicates, so under our assumption that F t(V ) ≥ 1 we also have F t(S∗ n) ≥ 1). Note that the total width of ˆhi is then the number of functions remaining to be covered after Si−1 is given for free (i.e., the number of F t with F t(Si−1) < 1). It is not hard to see that the total area of ˆhi is Σt ˆℓ( ˆF t i , S∗), where ˆℓ is the loss function for min-sum submodular set cover (2). From this we know ˆhi has area less than h∗. In fact, Azar and Gamzu [8] show the following. Lemma 1 ([8]). ˆhi is completely contained within h∗ when ˆhi and h∗ are aligned along their lower right boundaries. We need one final lemma before proving the main result of this section.
For a sequence S define Qi = Σt δ(F t, Si−1, vi) to be the total normalized gain of the ith selected element, and let ∆i = Σn j=i Qj be the sum of the normalized gains from i to n. Define Πi = |{t : F t(Si−1) < 1}| to be the number of functions which are still uncovered before vi is selected (i.e., the loss incurred at step i). Azar and Gamzu [8] show the following result relating ∆i to Πi. Lemma 2 ([8]). For any i, ∆i ≤ (ln(1/ϵ) + 2)Πi. We now state and prove the main result of this section: that Algorithm 1 is approximately optimal even when the ith greedy selection is performed with some additive error Ri. This theorem shows that in order to achieve low average cover time it suffices to approximately implement Algorithm 1. Aside from being useful for converting Algorithm 1 into an online algorithm, this theorem may be useful for applications in which the ground set V is very large. In these situations it may be possible to approximate Algorithm 1 (e.g., through sampling). Streeter and Golovin [13] prove similar results for submodular function maximization and min-sum submodular set cover. Our result is similar, but the proof is nontrivial. The loss function ℓ is highly nonlinear with respect to changes in F t(St i), so it is conceivable that small additive errors in the greedy selection could have a large effect. The analysis of Im and Nagarajan [9] involves a version of Algorithm 1 which is robust to a sort of multiplicative error in each stage of the greedy selection. Theorem 2. Let S = (v1, v2, . . . , vn) be any sequence for which

Σt δ(F t, Si−1, vi) + Ri ≥ max_{v∈V} Σt δ(F t, Si−1, v).

Then Σt ℓ(F t, S) ≤ 4(ln(1/ϵ) + 2) Σt ℓ(F t, S∗) + n Σi Ri. Proof. Let h be a histogram with a column for each Πi with Πi ̸= 0. Let γ = ln(1/ϵ) + 2. Let the ith column have width (Qi + Ri)/(2γ) and height max(Πi − Σj Rj, 0)/(2(Qi + Ri)). Note that Πi ̸= 0 iff Qi + Ri ̸= 0, as if there are functions not yet covered then there is some set element with nonzero gain (and vice versa).
The area of h is

Σ_{i : Πi ̸= 0} [(Qi + Ri)/(2γ)] · [max(Πi − Σj Rj, 0)/(2(Qi + Ri))] ≥ (1/(4γ)) Σt ℓ(F t, S) − (n/(4γ)) Σj Rj.

Assume h and every ˆhi are aligned along their lower right boundaries. We show that if the ith column of h has non-zero area then it is contained within ˆhi. Then, it follows from Lemma 1 that h is contained within h∗, completing the proof. Consider the ith column in h. Assume this column has non-zero area, so Πi ≥ Σj Rj. This column is at most (∆i + Σ_{j≥i} Rj)/(2γ) away from the right hand boundary. To show that this column is in ˆhi, it suffices to show that after selecting the first k = ⌊(Πi − Σj Rj)/(2(Qi + Ri))⌋ items in S∗ we still have Σt (1 − ˆF t i (S∗ k)) ≥ (∆i + Σ_{j≥i} Rj)/(2γ). The most that Σt ˆF t i can increase through the addition of one item is Qi + Ri. Therefore, using the submodularity of ˆF t i ,

Σt ˆF t i (S∗ k) − Σt ˆF t i (∅) ≤ k(Qi + Ri) ≤ Πi/2 − Σj Rj/2.

Therefore Σt (1 − ˆF t i (S∗ k)) ≥ Πi/2 + Σj Rj/2, since Σt (1 − ˆF t i (∅)) = Πi. Using Lemma 2,

Πi/2 + Σj Rj/2 ≥ ∆i/(2γ) + Σj Rj/2 ≥ (∆i + Σ_{j≥i} Rj)/(2γ).

Algorithm 2 Online Adaptive Residual
Input: Integer T
  Initialize n online learning algorithms E1, E2, . . . , En with A = V
  for t = 1 → T do
    ∀i ∈ 1 . . . n, predict vt i with Ei
    St ← (vt 1, . . . , vt n)
    Receive F t, pay loss ℓ(F t, St)
    For each Ei, set ℓt(v) ← 1 − δ(F t, St i−1, v)
  end for

Figure 2: Ei selects the ith element in St.

4 Online Analysis We now show how to convert Algorithm 1 into an online algorithm. We use the same idea used by Streeter and Golovin [13] and Radlinski et al. [14] for online submodular function maximization: we run n copies of some low regret online learning algorithm, E1, E2, . . . , En, each with action space A = V . We use the ith copy Ei to select the ith item in each predicted sequence St. In other words, the predictions of Ei will be v1 i , v2 i , . . . , vT i . Figure 2 illustrates this.
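A minimal runnable sketch of this scheme, with a hand-rolled exponentially weighted forecaster standing in for each Ei (the fixed objective, horizon, and learning rate are illustrative assumptions, not the paper's experimental setup):

```python
# Sketch of Algorithm 2: n forecasters E_1..E_n over the ground set V,
# where E_i receives loss 1 - δ(F^t, S^t_{i-1}, v) each round. The toy
# objective, T, and eta are illustrative choices.
import math, random

random.seed(0)
V = ["a", "b", "c"]
n, T = len(V), 300
eta = math.sqrt(2.0 * math.log(len(V)) / T)

def delta(F, S, v):
    if F(S) >= 1:
        return 0.0
    return min((F(S | {v}) - F(S)) / (1.0 - F(S)), 1.0)

def cover_time(F, seq):
    """ℓ(F, S): number of prefixes (including the empty one) with F < 1."""
    return sum(1 for i in range(len(seq) + 1) if F(set(seq[:i])) < 1)

def sample(w):
    r = random.random() * sum(w.values())
    for v, x in w.items():
        r -= x
        if r <= 0:
            return v
    return v

F = lambda S: min(len(S & {"a", "b"}) / 2.0, 1.0)  # fixed toy objective

weights = [{v: 1.0 for v in V} for _ in range(n)]  # one forecaster per position
total = 0.0
for t in range(T):
    seq = [sample(weights[i]) for i in range(n)]   # E_i predicts position i
    total += cover_time(F, seq)
    S = set()
    for i in range(n):                             # feed back loss 1 - δ
        for v in V:
            weights[i][v] *= math.exp(-eta * (1.0 - delta(F, S, v)))
        S.add(seq[i])

avg = total / T
print("average cover time:", avg)
```

Because position 1's forecaster always sees loss 1 for the useless action "c" and loss 0.5 for "a" or "b", its weight on "c" decays fastest, which is exactly the per-position greedy behavior Theorem 3 requires.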
Our algorithm assigns loss values to each Ei so that, assuming Ei has low regret, Ei approximately implements the ith greedy selection in Algorithm 1. Algorithm 2 shows this approach. Note that under our assumption that F 1, F 2, . . . , F T is chosen by an oblivious adversary, the loss values for the ith copy of the online algorithm are oblivious to the predictions of that run of the algorithm. Therefore we can use any algorithm for learning against an oblivious adversary. Theorem 3. Assume we use as a subroutine an online prediction algorithm with expected regret E[R] ≤ √(T ln n). Algorithm 2 has expected α-regret E[Rα] ≤ n2 √(T ln n) for α = 4(ln(1/ϵ) + 2). Proof. Define a meta-action ˜vi for the sequence of actions chosen by Ei, ˜vi = (v1 i , v2 i , . . . , vT i ). We can extend the domain of F t to allow for meta-actions: F t(S ∪ {˜vi}) = F t(S ∪ {vt i}). Let ˜S be the sequence of meta-actions ˜S = (˜v1, ˜v2, . . . , ˜vn). Let Ri be the regret of Ei. Note that from the definition of regret and our choice of loss values we have that

max_{v∈V} Σt δ(F t, ˜Si−1, v) − Σt δ(F t, ˜Si−1, ˜vi) = Ri.

Therefore, ˜S approximates the greedy solution in the sense required by Theorem 2. Theorem 2 did not require that the elements of S come from V . From Theorem 2 we then have

Σt ℓ(F t, St) = Σt ℓ(F t, ˜S) ≤ α Σt ℓ(F t, S∗) + n Σi Ri.

The expected α-regret is then E[n Σi Ri] ≤ n2 √(T ln n). We describe several variations and extensions of this analysis, some of which mirror those for related work [13, 14, 15]. Avoiding Duplicate Items Since each run of the online prediction algorithm is independent, Algorithm 2 may select the same ground set element multiple times. This drawback is easy to fix. We can simply select an arbitrary vi ∉ Si−1 whenever Ei selects some vi ∈ Si−1. This modification does not affect the regret guarantee, as selecting a vi ∈ Si−1 will always result in a gain of zero (loss of 1). Truncated Loss In some applications we only care about the first k items in the sequence St.
For these applications it makes sense to consider a truncated version of ℓ(F t, St) with parameter k: ℓk(F t, St) ≜ min({k} ∪ {|St i | : F t(St i ) ≥ 1}). This is cover time computed up to the kth element in St. The analysis for Theorem 2 also shows Σt ℓk(F t, St) ≤ 4(ln(1/ϵ) + 2) Σt ℓ(F t, S∗) + k Σk i=1 Ri. The corresponding regret bound is then k2 √(T ln n). Note here we are bounding truncated loss Σt ℓk(F t, St) in terms of untruncated loss Σt ℓ(F t, S∗). In this sense this bound is weaker. However, we replace n2 with k2, which may be much smaller. Algorithm 2 achieves this bound simultaneously for all k. Multiple Objectives per Round Consider a variation of online submodular set cover in which instead of receiving a single objective F t each round we receive a batch of objectives F t 1, F t 2, . . . , F t m and incur loss Σm i=1 ℓ(F t i , St). In other words, each round corresponds to a ranking with submodular valuations problem. It is easy to extend Algorithm 2 to this setting by using 1 − (1/m) Σm i=1 δ(F t i , St i−1, v) for the loss of action v in Ei. We then get O(k2 √(mL∗ ln n) + k2 m ln n) total regret, where L∗ = ΣT t=1 Σm i=1 ℓ(F t i , S∗) (Section 2.6 of [10]). Bandit Setting Consider a setting where instead of receiving full access to F t we only observe the sequence of objective function values F t(St 1), F t(St 2), . . . , F t(St n) (or, in the case of multiple objectives per round, F t i (St j) for every i and j). We can extend Algorithm 2 to this setting using a nonstochastic multiarmed bandits algorithm [12]. We note duplicate removal becomes more subtle in the bandit setting: should we feed back a gain of zero when a duplicate is selected, or the gain of the non-duplicate replacement? We propose either is valid if replacements are chosen obliviously. Bandit Setting with Expert Advice We can further generalize the bandit setting to the contextual bandit setting [18] (e.g., the bandit setting with expert advice [12]).
Say that we have access at time step t to predictions from a set of m experts. Let ˜vj be the meta-action corresponding to the sequence of predictions from the jth expert, and let ˜V be the set of all ˜vj. Assume that Ei guarantees low regret with respect to ˜V :

Σt δ(F t, St i−1, vt i) + Ri ≥ max_{˜v∈ ˜V} Σt δ(F t, St i−1, ˜v) (3)

where we have extended the domain of each F t to include meta-actions as in the proof of Theorem 3. Additionally assume that F t( ˜V ) ≥ 1 for every t. In this case we can show Σt ℓk(F t, St) ≤ min_{S∗∈ ˜V m} Σt ℓm(F t, S∗) + k Σk i=1 Ri. The Exp4 algorithm [12] has Ri = O(√(nT ln m)), giving total regret O(k2 √(nT ln m)). Experts may use context in forming recommendations. For example, in a search ranking problem the context could be the query. 5 Experimental Results 5.1 Synthetic Example We present a synthetic example for which the online cumulative greedy algorithm [13] fails, based on the example in Azar and Gamzu [8] for the offline setting. Consider an online ad placement problem where the ground set V is a set of available ad placement actions (e.g., a v ∈ V could correspond to placing an ad on a particular web page for a particular length of time). On round t, we receive an ad from an advertiser, and our goal is to acquire λ clicks for the ad using as few advertising actions as possible. Define F t(St i) to be min(ct i, λ)/λ, where ct i is the number of clicks acquired from the ad placement actions St i. Say that we have n advertising actions of two types: 2 broad actions and n − 2 narrow actions. Say that the ads we receive are also of two types. Common type ads occur with probability (n − 1)/n and receive 1 and λ − 1 clicks respectively from the two broad actions and 0 clicks from narrow actions. Uncommon type ads occur with probability 1/n and receive λ clicks from one randomly chosen narrow action and 0 clicks from all other actions. Assume λ ≥ n2. Intuitively, broad actions could correspond to ad placements on sites for which many ads are relevant.
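The construction above can be sketched directly; the value of n below is an illustrative choice, and cover time is computed as the paper's loss ℓ:

```python
# Sketch of the synthetic example: n actions (2 broad, n-2 narrow).
# Common ads need both broad actions; each uncommon ad needs one narrow
# action. n is an illustrative choice satisfying λ >= n^2.
n = 10
lam = n * n
broad = ["B1", "B2"]
narrow = [f"N{i}" for i in range(n - 2)]

def common_ad(S):
    """B1 contributes 1 click, B2 contributes λ-1 clicks."""
    clicks = (1 if "B1" in S else 0) + (lam - 1 if "B2" in S else 0)
    return min(clicks, lam) / lam

def uncommon_ad(j):
    """Narrow action N_j alone contributes λ clicks."""
    return lambda S: 1.0 if f"N{j}" in S else 0.0

def cover_time(F, seq):
    return sum(1 for i in range(len(seq) + 1) if F(set(seq[:i])) < 1)

good = broad + narrow   # broad actions first
bad = narrow + broad    # narrow actions first
print(cover_time(common_ad, good), cover_time(common_ad, bad))  # 2 vs n
```

Ordering the broad actions first covers every common ad within two selections, while ordering the narrow actions first forces common ads to wait for all n positions, which is the O(1) versus O(n) gap discussed next.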
The optimal strategy, giving an average cover time of O(1), is to first select the two broad actions, covering all common ads, and then select the narrow actions in any order. However, the offline cumulative greedy algorithm will pick all narrow actions before picking the broad action with gain 1, giving average cover time O(n). The left of Figure 3 shows average cover time for our proposed algorithm and the cumulative greedy algorithm of [13] on the same sequences of random objectives. For this example we use n = 25 and the bandit version of the problem with the Exp3 algorithm [12]. We also plot the average cover times for offline solutions as baselines. As seen in the figure, the cumulative algorithms converge to higher average cover times than the adaptive residual algorithms. Interestingly, the online cumulative algorithm does better than the offline cumulative algorithm: it seems added randomization helps. (Figure 3: Average cover time.) 5.2 Repeated Active Learning for Movie Recommendation Consider a movie recommendation website which asks users a sequence of questions before they are given recommendations. We define an online submodular set cover problem for choosing sequences of questions in order to quickly eliminate a large number of movies from consideration. This is conceptually similar to the diagnosis problem discussed in the introduction. Define the ground set V to be a set of questions (for example “Do you want to watch something released in the past 10 years?” or “Do you want to watch something from the Drama genre?”). Define F t(S) to be proportional to the number of movies eliminated from consideration after asking the tth user the questions in S. Specifically, let H be the set of all movies in our database and V t(S) be the subset of movies consistent with the tth user’s responses to S. Define F t(S) ≜ min(|H \ V t(S)|/c, 1), where c is a constant. F t(S) ≥ 1 iff after asking the set of questions S we have eliminated at least c movies.
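A toy sketch of this elimination objective (the four-movie database, two questions, and constant c below are hypothetical, not the paper's Netflix data):

```python
# Sketch of F^t(S) = min(|H \ V^t(S)| / c, 1): movies inconsistent with
# the simulated user's answers are eliminated. The tiny database and
# questions are hypothetical.
movies = {
    "m1": {"recent": True,  "drama": True},
    "m2": {"recent": True,  "drama": False},
    "m3": {"recent": False, "drama": True},
    "m4": {"recent": False, "drama": False},
}
target = "m1"  # the simulated user answers consistently with this movie
c = 2          # goal: eliminate at least 2 movies

def consistent(S):
    """V^t(S): movies agreeing with the target's answers to every question in S."""
    return {m for m, answers in movies.items()
            if all(answers[q] == movies[target][q] for q in S)}

def F(S):
    eliminated = len(movies) - len(consistent(S))
    return min(eliminated / c, 1.0)

print(F(set()), F({"recent"}), F({"recent", "drama"}))
```

Here a single question already halves the version space, reaching F = 1; this mirrors the paper's simulation, where a random target movie generates the user's answers.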
We set H to be a set of 11634 movies available on Netflix’s Watch Instantly service and use 803 questions based on those we used for an offline problem [7]. To simulate user responses to questions, on round t we randomly select a movie from H and assume the tth user answers questions consistently with this movie. We set c = |H| − 500, so the goal is to eliminate about 95% of all movies. We evaluate in the full information setting: this makes sense if we assume we receive as feedback the movie the user actually selected. As our online prediction subroutine we tried Normal-Hedge [19], a second order multiplicative weights method [20], and a version of multiplicative weights for small gains using the doubling trick (Section 2.6 of [10]). We also tried a heuristic modification of Normal-Hedge which fixes ct = 1 for a fixed, more aggressive learning rate than theoretically justified. The right of Figure 3 shows average cover time for 100 runs of T = 10000 iterations. Note the different scale in the bottom row; these methods performed significantly worse than Normal-Hedge. The online cumulative greedy algorithm converges to an average cover time close to but slightly worse than that of the adaptive greedy method. The differences are more dramatic for prediction subroutines that converge slowly. The modified Normal-Hedge has no theoretical justification, so it may not generalize to other problems. For the modified Normal-Hedge the final average cover times are 7.72 adaptive and 8.22 cumulative. The offline values are 6.78 and 7.15. 6 Open Problems It is not yet clear what practical value our proposed approach will have for web search result ranking. A drawback to our approach is that we pick a fixed order in which to ask questions. For some problems it makes more sense to consider adaptive strategies [5, 6].
Acknowledgments This material is based upon work supported in part by the National Science Foundation under grant IIS-0535100, by an Intel research award, a Microsoft research award, and a Google research award. References [1] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In HLT, 2011. [2] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In KDD, 2003. [3] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. JMLR, 2008. [4] L.A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4), 1982. [5] D. Golovin and A. Krause. Adaptive submodularity: A new approach to active learning and stochastic optimization. In COLT, 2010. [6] Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. In ICML, 2010. [7] Andrew Guillory and Jeff Bilmes. Simultaneous learning and covering with adversarial noise. In ICML, 2011. [8] Yossi Azar and Iftah Gamzu. Ranking with submodular valuations. In SODA, 2011. [9] S. Im and V. Nagarajan. Minimum latency submodular cover in metrics. ArXiv e-prints, October 2011. [10] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006. [11] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, pages 23–37, 1995. [12] P. Auer, N. Cesa-Bianchi, Y. Freund, and R.E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2003. [13] M. Streeter and D. Golovin. An online algorithm for maximizing submodular functions. In NIPS, 2008. [14] F. Radlinski, R. Kleinberg, and T. Joachims. Learning diverse rankings with multi-armed bandits. In ICML, 2008. [15] A. Slivkins, F. Radlinski, and S. Gollapudi.
Learning optimally diverse rankings over large document collections. In ICML, 2010. [16] N. Alon, B. Awerbuch, and Y. Azar. The online set cover problem. In STOC, 2003. [17] Sham M. Kakade, Adam Tauman Kalai, and Katrina Ligett. Playing games with approximation algorithms. In STOC, 2007. [18] J. Langford and T. Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In NIPS, 2007. [19] K. Chaudhuri, Y. Freund, and D. Hsu. A parameter-free hedging algorithm. In NIPS, 2009. [20] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 2007.
The Fast Convergence of Boosting Matus Telgarsky Department of Computer Science and Engineering University of California, San Diego 9500 Gilman Drive, La Jolla, CA 92093-0404 mtelgars@cs.ucsd.edu Abstract This manuscript considers the convergence rate of boosting under a large class of losses, including the exponential and logistic losses, where the best previous rate of convergence was O(exp(1/ε^2)). First, it is established that the setting of weak learnability aids the entire class, granting a rate O(ln(1/ε)). Next, the (disjoint) conditions under which the infimal empirical risk is attainable are characterized in terms of the sample and weak learning class, and a new proof is given for the known rate O(ln(1/ε)). Finally, it is established that any instance can be decomposed into two smaller instances resembling the two preceding special cases, yielding a rate O(1/ε), with a matching lower bound for the logistic loss. The principal technical hurdle throughout this work is the potential unattainability of the infimal empirical risk; the technique for overcoming this barrier may be of general interest. 1 Introduction Boosting is the task of converting inaccurate weak learners into a single accurate predictor. The existence of any such method was unknown until the breakthrough result of Schapire [1]: under a weak learning assumption, it is possible to combine many carefully chosen weak learners into a majority of majorities with arbitrarily low training error. Soon after, Freund [2] noted that a single majority is enough, and that O(ln(1/ε)) iterations are both necessary and sufficient to attain accuracy ε. Finally, their combined effort produced AdaBoost, which attains the optimal convergence rate (under the weak learning assumption), and has an astonishingly simple implementation [3]. It was eventually revealed that AdaBoost was minimizing a risk functional, specifically the exponential loss [4].
Aiming to alleviate perceived deficiencies in the algorithm, other loss functions were proposed, foremost amongst these being the logistic loss [5]. Given the wide practical success of boosting with the logistic loss, it is perhaps surprising that no convergence rate better than O(exp(1/ε^2)) was known, even under the weak learning assumption [6]. The reason for this deficiency is simple: unlike SVM, least squares, and basically any other optimization problem considered in machine learning, there might not exist a choice which attains the minimal risk! This reliance is carried over from convex optimization, where the assumption of attainability is generally made, either directly, or through stronger conditions like compact level sets or strong convexity [7]. Convergence rate analysis provides a valuable mechanism to compare and improve minimization algorithms. But there is a deeper significance with boosting: a convergence rate of O(ln(1/ε)) means that, with a combination of just O(ln(1/ε)) predictors, one can construct an ε-optimal classifier, which is crucial to both the computational efficiency and statistical stability of this predictor. The contribution of this manuscript is to provide a tight convergence theory for a large class of losses, including the exponential and logistic losses, which has heretofore resisted analysis. The goal is a general analysis without any assumptions (attainability of the minimum, or weak learnability); however, this manuscript also demonstrates how the classically understood scenarios of attainability and weak learnability can be understood directly from the sample and the weak learning class. The organization is as follows. Section 2 provides a few pieces of background: how to encode the weak learning class and sample as a matrix, boosting as coordinate descent, and the primal objective function. Section 3 then gives the dual problem, max entropy.
Given these tools, section 4 shows how to adjust the weak learning rate to a quantity which is useful without any assumptions. The first step towards convergence rates is then taken in section 5, which demonstrates that the weak learning rate is in fact a mechanism to convert between the primal and dual problems. The convergence rates then follow: section 6 and section 7 discuss, respectively, the conditions under which classical weak learnability and (disjointly) attainability hold, both yielding the rate O(ln(1/ε)), and finally section 8 shows how the general case may be decomposed into these two, and the conflicting optimization behavior leads to a degraded rate of O(1/ε). The last section will also exhibit an Ω(1/ε) lower bound for the logistic loss. 1.1 Related Work The development of general convergence rates has a number of important milestones in the past decade. The first convergence result, albeit without any rates, is due to Collins et al. [8]; the work considered the improvement due to a single step, and as its update rule was less aggressive than the line search of boosting, it appears to imply general convergence. Next, Bickel et al. [6] showed a rate of O(exp(1/ε^2)), where the assumptions of bounded second derivatives on compact sets are also necessary. Many extremely important cases have also been handled. The first is the original rate of O(ln(1/ε)) for the exponential loss under the weak learning assumption [3]. Next, Rätsch et al. [9] showed, for a class of losses similar to those considered here, a rate of O(ln(1/ε)) when the loss minimizer is attainable. The current manuscript provides another mechanism to analyze this case (with the same rate), which is crucial to being able to produce a general analysis. And, very recently, parallel to this work, Mukherjee et al. [10] established the general convergence under the exponential loss, with a rate of Θ(1/ε).
The same matrix, due to Schapire [11], was used to show the lower bound there as for the logistic loss here; their upper bound proof also utilized a decomposition theorem. It is interesting to mention that, for many variants of boosting, general convergence rates were known. Specifically, once it was revealed that boosting is trying to be not only correct but also have large margins [12], much work was invested into methods which explicitly maximized the margin [13], or penalized variants focused on the inseparable case [14, 15]. These methods generally impose some form of regularization [15], which grants attainability of the risk minimizer, and allows standard techniques to grant general convergence rates. Interestingly, the guarantees in those works cited in this paragraph are O(1/ε^2). 2 Setup A view of boosting, which pervades this manuscript, is that the action of the weak learning class upon the sample can be encoded as a matrix [9, 15]. Let a sample S := {(xi, yi)}_1^m ⊆ (X × Y)^m and a weak learning class H be given. For every h ∈ H, let S|h denote the projection onto S induced by h; that is, S|h is a vector of length m, with coordinates (S|h)i = yi h(xi). If the set of all such columns {S|h : h ∈ H} is finite, collect them into the matrix A ∈ R^{m×n}. Let ai denote the ith row of A, corresponding to the example (xi, yi), and let {hj}_1^n index the set of weak learners corresponding to columns of A. It is assumed, for convenience, that entries of A are within [−1, +1]; relaxing this assumption merely scales the presented rates by a constant. The setting considered in this manuscript is that this finite matrix can be constructed. Note that this can encode infinite classes, so long as they map to only k < ∞ values (in which case A has at most k^m columns). As another example, if the weak learners are binary and H has VC dimension d, then Sauer’s lemma grants that A has at most (m + 1)^d columns.
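The matrix encoding can be made concrete with a hypothetical one-dimensional sample and threshold stumps (an illustrative construction, not from the manuscript); note the all-ones column, which certifies Aλ > 0m for λ supported on that coordinate, the condition tied to weak learnability below:

```python
# Sketch: build A with entries A[i][j] = y_i h_j(x_i) for a hypothetical
# 1-d sample and threshold weak learners h_t(x) = sign(x - t).
sample = [(-2.0, -1), (-0.5, -1), (1.0, +1), (3.0, +1)]  # (x_i, y_i)
thresholds = [-1.0, 0.0, 2.0]
H = [lambda x, t=t: 1 if x > t else -1 for t in thresholds]

A = [[y * h(x) for h in H] for (x, y) in sample]
for row in A:
    print(row)
```

Entry A[i][j] is +1 exactly when weak learner j classifies example i correctly, so a column of all +1's (here, the threshold-0 stump) is a perfect weak learner on this sample.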
This matrix view of boosting is thus similar to the interpretation of boosting performing descent on functional space, but the class complexity and finite sample have been used to reduce the function class to a finite object [16, 5].

Routine BOOST.
Input: Convex function f ◦ A.
Output: Approximate primal optimum λ.
1. Initialize λ0 := 0n.
2. For t = 1, 2, . . ., while ∇(f ◦ A)(λt−1) ≠ 0n:
(a) Choose column jt := argmax_j |∇(f ◦ A)(λt−1)ᵀ ej|.
(b) Line search: αt approximately minimizes α ↦ (f ◦ A)(λt−1 + α ejt).
(c) Update λt := λt−1 + αt ejt.
3. Return λt−1.

Figure 1: ℓ1 steepest descent [17, Algorithm 9.4] applied to f ◦ A.

To make the connection to boosting, the missing ingredient is the loss function. Let G0 denote the set of loss functions g satisfying: g is twice continuously differentiable, g′′ > 0 (which implies strict convexity), and lim_{x→∞} g(x) = 0. (A few more conditions will be added in section 5 to prove convergence rates, but these properties suffice for the current exposition.) Crucially, the exponential loss exp(−x) from AdaBoost and the logistic loss ln(1 + exp(−x)) are in G0 (and the eventual G). Boosting determines some weighting λ ∈ R^n of the columns of A, which correspond to weak learners in H. The (unnormalized) margin of example i is thus ⟨ai, λ⟩ = eiᵀAλ, where ei is an indicator vector. Since the prediction on xi is 1[⟨ai, λ⟩ ≥ 0], it follows that Aλ > 0m (where 0m is the zero vector) implies a training error of zero. As such, boosting solves the minimization problem

inf_{λ∈R^n} Σm i=1 g(⟨ai, λ⟩) = inf_{λ∈R^n} Σm i=1 g(eiᵀAλ) = inf_{λ∈R^n} f(Aλ) = inf_{λ∈R^n} (f ◦ A)(λ) =: f̄A, (2.1)

where f : R^m → R is the convenience function f(x) = Σi g((x)i), and in the present problem denotes the (unnormalized) empirical risk. f̄A will denote the optimal objective value. The infimum in eq. (2.1) may well not be attainable. Suppose there exists λ′ such that Aλ′ > 0m (theorem 6.1 will show that this is equivalent to the weak learning assumption).
Then

0 ≤ inf_{λ∈R^n} f(Aλ) ≤ inf{f(Aλ) : λ = cλ′, c > 0} = inf_{c>0} f(c(Aλ′)) = 0.

On the other hand, for any λ ∈ R^n, f(Aλ) > 0. Thus the infimum is never attainable when weak learnability holds. The template boosting algorithm appears in fig. 1, formulated in terms of f∘A to make the connection to coordinate descent as clear as possible. To interpret the gradient terms, note that

(∇(f∘A)(λ))_j = (A^⊤ ∇f(Aλ))_j = Σ_{i=1}^m g′(⟨a_i, λ⟩) h_j(x_i) y_i,

which is the expected correlation of h_j with the target labels according to an unnormalized distribution with weights g′(⟨a_i, λ⟩). The stopping condition ∇(f∘A)(λ) = 0_n means: either the distribution is degenerate (it is exactly zero), or every weak learner is uncorrelated with the target. As such, eq. (2.1) represents an equivalent formulation of boosting, with one minor modification: the column (weak learner) selection has an absolute value. But note that this is the same as closing H under complementation (i.e., for any h ∈ H, there exists h′ with h(x) = −h′(x)), which is assumed in many theoretical treatments of boosting. In the case of the exponential loss with binary weak learners, the line search step has a convenient closed form; but for other losses, or even for the exponential loss but with confidence-rated predictors, there may not be a closed form. Moreover, this univariate search problem may lack a minimizer. To produce the eventual convergence rates, this manuscript utilizes a step size minimizing an upper bounding quadratic (which is guaranteed to exist); if instead a standard iterative line search guarantee were used, rates would only degrade by a constant factor [17, section 9.3.1]. As a final remark, consider the rows {a_i}_{i=1}^m of A as a collection of m points in R^n. Due to the form of g, BOOST is therefore searching for a halfspace, parameterized by a vector λ, which contains all of the points.
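Routine BOOST of fig. 1 can be sketched in a few lines. In this minimal, hypothetical sketch a small fixed step stands in for the line search of step 2(b) (it is not the upper-bounding-quadratic step the manuscript actually analyzes); each update tilts the normal λ of the candidate halfspace along one coordinate, against the gradient:

```python
import math

def boost(A, g, gp, iters=200, step=0.1):
    """Sketch of routine BOOST: l1 steepest coordinate descent on
    (f o A)(lam) = sum_i g(<a_i, lam>); gp is g'. A fixed step size
    replaces the line search of step 2(b) for illustration only."""
    m, n = len(A), len(A[0])
    lam = [0.0] * n
    for _ in range(iters):
        margins = [sum(A[i][j] * lam[j] for j in range(n)) for i in range(m)]
        # (gradient of f o A)_j = sum_i g'(<a_i, lam>) * A[i][j]
        grad = [sum(gp(margins[i]) * A[i][j] for i in range(m)) for j in range(n)]
        jt = max(range(n), key=lambda j: abs(grad[j]))  # steepest coordinate
        if grad[jt] == 0.0:
            break  # stopping condition: gradient vanished
        lam[jt] += step if grad[jt] < 0 else -step  # move against the gradient
    return lam
```

On a separable instance with the exponential loss, the iterates push all margins positive and the objective toward its (unattained) infimum of zero.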
Sometimes such a halfspace may not exist, and g applies a smoothly increasing penalty to points that are farther and farther outside it.

3 Dual Problem

This section provides the convex dual to eq. (2.1). The relevance of the dual to convergence rates is as follows. First, although the primal optimum may not be attainable, the dual optimum is always attainable—this suggests a strategy of mapping the convergence strategy to the dual, where there exists a clear notion of progress to the optimum. Second, this section determines the dual feasible set—the space of dual variables or what the boosting literature typically calls unnormalized weights. Understanding this set is key to relating weak learnability, attainability, and general instances. Before proceeding, note that the dual formulation will make use of the Fenchel conjugate h*(φ) = sup_{x∈dom(h)} ⟨x, φ⟩ − h(x), a concept taking a central place in convex analysis [18, 19]. Interestingly, the Fenchel conjugates to the exponential and logistic losses are respectively the Boltzmann–Shannon and Fermi–Dirac entropies [19, Commentary, section 3.3], and thus the dual is explicitly performing entropy maximization (cf. lemma C.2). As a final piece of notation, denote the kernel of a matrix B ∈ R^{m×n} by Ker(B) = {φ ∈ R^n : Bφ = 0_m}.

Theorem 3.1. For any A ∈ R^{m×n} and g ∈ G_0 with f(x) = Σ_i g((x)_i),

inf{f(Aλ) : λ ∈ R^n} = sup{−f*(−φ) : φ ∈ Φ_A},  (3.2)

where Φ_A := Ker(A^⊤) ∩ R^m_+ is the dual feasible set. The dual optimum ψ_A is unique and attainable. Lastly, f*(φ) = Σ_{i=1}^m g*((φ)_i).

The dual feasible set Φ_A = Ker(A^⊤) ∩ R^m_+ has a strong interpretation. Suppose φ ∈ Φ_A; then φ is a nonnegative vector (since φ ∈ R^m_+), and, for any j, 0 = (φ^⊤A)_j = Σ_{i=1}^m φ_i y_i h_j(x_i). That is to say, every nonzero feasible dual vector provides an (unnormalized) distribution upon which every weak learner is uncorrelated!
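The membership test φ ∈ Φ_A = Ker(A^⊤) ∩ R^m_+ can be written out directly: check nonnegativity, then check that every column of A has zero correlation under the weights φ. A minimal sketch (function name and tolerance are illustrative):

```python
def is_dual_feasible(A, phi, tol=1e-12):
    """Check phi in Phi_A = Ker(A^T) intersect R^m_+: a nonnegative
    weighting of the examples under which every column (weak learner)
    of A has zero correlation. A is a list of rows; phi a list of weights."""
    m, n = len(A), len(A[0])
    if any(p < 0 for p in phi):
        return False  # outside the nonnegative orthant
    # phi^T A must vanish coordinate-wise (phi in the kernel of A^T)
    return all(abs(sum(phi[i] * A[i][j] for i in range(m))) <= tol
               for j in range(n))
```

For instance, with a single weak learner that is right on one example and wrong on another, the uniform weighting is feasible, exhibiting the uncorrelating distribution described above.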
Furthermore, recall that the weak learning assumption states that under any weighting of the input, there exists a correlated weak learner; as such, weak learnability necessitates that the dual feasible set contain only the zero vector. There is also a geometric interpretation. Ignoring the constraint, −f* attains its maximum at some rescaling of the uniform distribution (for details, please see lemma C.2). As such, the constrained dual problem is aiming to write the origin as a high entropy convex combination of the points {a_i}_{i=1}^m.

4 A Generalized Weak Learning Rate

The weak learning rate was critical to the original convergence analysis of AdaBoost, providing a handle on the progress of the algorithm. Recall that this quantity appeared in the denominator of the convergence rate, and a weak learning assumption critically provided that it is nonzero. This section will generalize the weak learning rate to a quantity which is always positive, without any assumptions. Note briefly that this manuscript will differ slightly from the norm in that weak learning will be a purely sample-specific concept. That is, the concern here is convergence, and all that matters is the sample S = {(x_i, y_i)}_{i=1}^m, as encoded in A; it does not matter if there are wild points outside this sample, because the algorithm has no access to them. This distinction has the following implication. The usual weak learning assumption states that there exists no uncorrelating distribution over the input space. This of course implies that any training sample S used by the algorithm will also have this property; however, it suffices that there is no distribution over the input sample S which uncorrelates the weak learners from the target. Returning to task, the weak learning assumption posits the existence of a constant, the weak learning rate γ, which lower bounds the correlation of the best weak learner with the target for any distribution.
Stated in terms of the matrix A,

0 < γ = inf_{φ∈R^m_+, ‖φ‖₁=1} max_{j∈[n]} |Σ_{i=1}^m (φ)_i y_i h_j(x_i)| = inf_{φ∈R^m_+∖{0_m}} ‖A^⊤φ‖_∞ / ‖φ‖₁ = inf_{φ∈R^m_+∖{0_m}} ‖A^⊤φ‖_∞ / ‖φ − 0_m‖₁.  (4.1)

This quantity can only be positive when no nonzero φ ∈ R^m_+ lies in Ker(A^⊤), that is, when the dual feasible set Φ_A = Ker(A^⊤) ∩ R^m_+ is exactly {0_m}. As such, one candidate adjustment is to simply replace {0_m} with the dual feasible set:

γ′ := inf_{φ∈R^m_+∖Φ_A} ‖A^⊤φ‖_∞ / inf_{ψ∈Φ_A} ‖φ − ψ‖₁.

Indeed, by the forthcoming proposition 4.3, γ′ > 0 as desired. Due to technical considerations which will be postponed until the various convergence rates, it is necessary to tighten this definition with another set.

Definition 4.2. For a given matrix A ∈ R^{m×n} and set S ⊆ R^m, define

γ(A, S) := inf { ‖A^⊤φ‖_∞ / inf_{ψ∈S∩Ker(A^⊤)} ‖φ − ψ‖₁ : φ ∈ S ∖ Ker(A^⊤) }.

Crucially, for the choices of S pertinent here, this quantity is always positive.

Proposition 4.3. Let A ≠ 0_{m×n} and polyhedron S be given. If S ∖ Ker(A^⊤) ≠ ∅ and S has nonempty interior, then γ(A, S) ∈ (0, ∞).

To simplify discussion, the following projection and distance notation will be used in the sequel:

P^p_C(x) ∈ Argmin_{y∈C} ‖y − x‖_p,  D^p_C(x) = ‖x − P^p_C(x)‖_p,

with some arbitrary choice made when the minimizer is not unique.

5 Prelude to Convergence Rates: Three Alternatives

The pieces are in place to finally sketch how the convergence rates may be proved. This section identifies how the weak learning rate γ(A, S) can be used to convert the standard gradient guarantees into something which can be used in the presence of no attainable minimum. To close, three basic optimization scenarios are identified, which lead to the following three sections on convergence rates. But first, it is a good time to define the final loss function class.

Definition 5.1. Every g ∈ G satisfies the following properties. First, g ∈ G_0. Next, for any x ∈ R^m satisfying f(x) ≤ f(Aλ_0), and for any coordinate (x)_i, there exist constants η > 0 and β > 0 such that g′′((x)_i) ≤ η g((x)_i) and g((x)_i) ≤ −β g′((x)_i).
The exponential loss is in this class with η = β = 1, since exp(·) is a fixed point with respect to the differentiation operator. Furthermore, as is verified in remark F.1 of the full version, the logistic loss is also in this class, with η = 2^m/(m ln(2)) and β ≤ 1 + 2^m. Intuitively, η and β encode how similar some g ∈ G is to the exponential loss, and thus these parameters can degrade radically. However, outside the weak learnability case, the other terms in the bounds here will also incur a penalty of the form e^m for the exponential loss, and there is some evidence that this is unavoidable (see the lower bounds in Mukherjee et al. [10] or the upper bounds in Rätsch et al. [9]). Next, note how the standard guarantee for coordinate descent methods can lead to guarantees on the progress of the algorithm in terms of dual distances, thanks to γ(A, S).

Proposition 5.2. For any t, A ≠ 0_{m×n}, S ⊇ {−∇f(Aλ_t)} with γ(A, S) > 0, and g ∈ G,

f(Aλ_{t+1}) − f̄_A ≤ f(Aλ_t) − f̄_A − γ(A, S)² D¹_{S∩Ker(A^⊤)}(−∇f(Aλ_t))² / (2η f(Aλ_t)).

Proof. The stopping condition grants −∇f(Aλ_t) ∉ Ker(A^⊤). Thus, by definition of γ(A, S),

γ(A, S) = inf_{φ∈S∖Ker(A^⊤)} ‖A^⊤φ‖_∞ / D¹_{S∩Ker(A^⊤)}(φ) ≤ ‖A^⊤∇f(Aλ_t)‖_∞ / D¹_{S∩Ker(A^⊤)}(−∇f(Aλ_t)).

Figure 2 ((a) weak learnability, (b) attainability, (c) general case): Viewing the rows {a_i}_{i=1}^m of A as points in R^n, boosting seeks a homogeneous halfspace, parameterized by a normal λ ∈ R^n, which contains all m points. The dual, on the other hand, aims to express the origin as a high entropy convex combination of the rows. The convergence rate and dynamics of this process are controlled by A, which dictates one of the three above scenarios.

Combined with a standard guarantee of coordinate descent progress (cf. lemma F.2),

f(Aλ_t) − f(Aλ_{t+1}) ≥ ‖A^⊤∇f(Aλ_t)‖²_∞ / (2η f(Aλ_t)) ≥ γ(A, S)² D¹_{S∩Ker(A^⊤)}(−∇f(Aλ_t))² / (2η f(Aλ_t)).

Subtracting f̄_A from both sides and rearranging yields the statement.
Recall the interpretation of boosting closing section 2: boosting seeks a halfspace, parameterized by λ ∈ R^n, which contains the points {a_i}_{i=1}^m. Progress onward from proposition 5.2 will be divided into three cases, each distinguished by the kind of halfspace which boosting can reach. These cases appear in fig. 2. The first case is weak learnability: positive margins can be attained on each example, meaning a halfspace exists which strictly contains all points. Boosting races to push all these margins unboundedly large, and has a convergence rate O(ln(1/ε)). Next is the case that no halfspace contains the points within its interior: either any such halfspace has the points on its boundary, or no such halfspace exists at all (the degenerate choice λ = 0_n). This is the case of attainability: boosting races towards finite margins at the rate O(ln(1/ε)). The final situation is a mix of the two: there exists a halfspace with some points on the boundary, some within its interior. Boosting will try to push some margins to infinity, and keep others finite. These two desires are at odds, and the rate degrades to O(1/ε). Less metaphorically, the analysis will proceed by decomposing this case into the previous two, applying the above analysis in parallel, and then stitching the result back together. It is precisely while stitching up that an incompatibility arises, and the rate degrades. This is no artifact: a lower bound will be shown for the logistic loss.

6 Convergence Rate under Weak Learnability

To start this section, the following result characterizes weak learnability, including the earlier relationship to the dual feasible set (specifically, that it is precisely the origin), and, as analyzed by many authors, the relationship to separability [1, 9, 15].

Theorem 6.1. For any A ∈ R^{m×n} and g ∈ G the following conditions are equivalent:

∃λ ∈ R^n · Aλ ∈ R^m_{++},  (6.2)
inf_{λ∈R^n} f(Aλ) = 0,  (6.3)
ψ_A = 0_m,  (6.4)
Φ_A = {0_m}.
(6.5)

The equivalence means the presence of any of these properties suffices to indicate weak learnability. The last two statements encode the usual distributional version of the weak learning assumption. The first encodes the fact that there exists a homogeneous halfspace containing all points within its interior; this encodes separability, since removing the factor y_i from the definition of a_i will place all negative points outside the halfspace. Lastly, the second statement encodes the fact that the empirical risk approaches zero.

Theorem 6.6. Suppose Aλ′ > 0_m for some λ′, and g ∈ G; then γ(A, R^m_+) > 0, and for all t,

f(Aλ_t) − f̄_A ≤ f(Aλ_0) (1 − γ(A, R^m_+)²/(2β²η))^t.

Proof. By theorem 6.1, R^m_+ ∩ Ker(A^⊤) = Φ_A = {0_m}, which combined with g ≤ −βg′ gives

D¹_{Φ_A}(−∇f(Aλ_t)) = inf_{ψ∈Φ_A} ‖−∇f(Aλ_t) − ψ‖₁ = ‖∇f(Aλ_t)‖₁ ≥ f(Aλ_t)/β.

Plugging this and f̄_A = 0 (again by theorem 6.1), along with the polyhedron R^m_+ ⊇ −∇f(R^m) (whereby γ(A, R^m_+) > 0 by proposition 4.3), into proposition 5.2 gives

f(Aλ_{t+1}) ≤ f(Aλ_t) − γ(A, R^m_+)² f(Aλ_t)/(2β²η) = f(Aλ_t) (1 − γ(A, R^m_+)²/(2β²η)),

and recursively applying this inequality yields the result.

Since the present setting is weak learnability, note by (4.1) that the choice of polyhedron R^m_+ grants that γ(A, R^m_+) is exactly the original weak learning rate. When specialized to the exponential loss (where η = β = 1), the bound becomes (1 − γ(A, R^m_+)²/2)^t, which exactly recovers the bound of Schapire and Singer [20], although via different analysis. In general, solving for t in the expression

ε = (f(Aλ_t) − f̄_A)/(f(Aλ_0) − f̄_A) ≤ (1 − γ(A, R^m_+)²/(2β²η))^t ≤ exp(−γ(A, R^m_+)² t/(2β²η))

reveals that t ≥ (2β²η/γ(A, R^m_+)²) ln(1/ε) iterations suffice to reach error ε. Recall that β and η, in the case of the logistic loss, have only been bounded by quantities like 2^m. While it is unclear if this analysis of β and η was tight, note that it is plausible that the logistic loss is slower than the exponential loss in this scenario, as it works less in initial phases to correct minor margin violations.
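Theorem 6.6's geometric rate translates directly into an iteration count; a minimal numeric check (the parameter values below are purely illustrative, not from the paper):

```python
import math

def iters_to_eps(gamma, beta, eta, eps):
    """Iterations sufficient for relative error eps under a geometric rate
    (1 - q)^t with q = gamma^2 / (2 beta^2 eta):
    t >= (2 beta^2 eta / gamma^2) * ln(1/eps)."""
    return math.ceil(2 * beta ** 2 * eta / gamma ** 2 * math.log(1.0 / eps))

def bound_after(t, gamma, beta, eta):
    """Value of the geometric factor (1 - q)^t after t iterations."""
    q = gamma ** 2 / (2 * beta ** 2 * eta)
    return (1 - q) ** t
```

The derivation uses 1 − q ≤ e^{−q}, so the suggested t indeed drives the geometric factor below ε.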
7 Convergence Rate under Attainability

Theorem 7.1. For any A ∈ R^{m×n} and g ∈ G, the following conditions are equivalent:

∀λ ∈ R^n · Aλ ∉ R^m_+ ∖ {0_m},  (7.2)
f∘A has minimizers,  (7.3)
ψ_A ∈ R^m_{++},  (7.4)
Φ_A ∩ R^m_{++} ≠ ∅.  (7.5)

Interestingly, as revealed in (7.4) and (7.5), attainability entails that the dual has fully interior points, and furthermore that the dual optimum is interior. On the other hand, under weak learnability, eq. (6.4) provided that the dual optimum has zeros at every coordinate. As will be made clear in section 8, the primal and dual weights have the following dichotomy: either the margin ⟨a_i, λ⟩ goes to infinity and (ψ_A)_i goes to zero, or the margin stays finite and (ψ_A)_i goes to some positive value.

Theorem 7.6. Suppose A ≠ 0_{m×n}, g ∈ G, and the infimum of eq. (2.1) is attainable. Then there exists a (compact) tightest axis-aligned rectangle C containing the initial level set, and f is strongly convex with modulus c > 0 over C. Finally, γ(A, −∇f(C)) > 0, and for all t,

f(Aλ_t) − f̄_A ≤ (f(0_m) − f̄_A) (1 − c γ(A, −∇f(C))²/(η f(Aλ_0)))^t.

In other words, t ≥ (η f(Aλ_0)/(c γ(A, −∇f(C))²)) ln(1/ε) iterations suffice to reach error ε.

The appearance of a modulus of strong convexity c (i.e., a lower bound on the eigenvalues of the Hessian of f) may seem surprising, and sketching the proof illuminates its appearance and subsequent function. When the infimum is attainable, every margin ⟨a_i, λ⟩ converges to some finite value. In fact, they all remain bounded: (7.2) provides that no halfspace contains all points, so if one margin becomes positive and large, another becomes negative and large, giving a terrible objective value. But objective values never increase with coordinate descent. To finish the proof, strong convexity (i.e., quadratic lower bounds in the primal) grants quadratic upper bounds in the dual, which can be used to bound the dual distance in proposition 5.2, and yield the desired convergence rate.
This approach fails under weak learnability—some primal weights grow unboundedly, all dual weights shrink to zero, and no compact set contains all margins.

8 General Convergence Rate

The final characterization encodes two principles: the rows of A may be partitioned into two matrices A_0, A_+ which respectively satisfy theorem 6.1 and theorem 7.1, and these two subproblems affect the optimization problem essentially independently.

Theorem 8.1. Let A_0 ∈ R^{z×n}, A_+ ∈ R^{p×n}, and g ∈ G be given. Set m := z + p, and A ∈ R^{m×n} to be the matrix obtained by stacking A_0 on top of A_+. The following conditions are equivalent:

(∃λ ∈ R^n · A_0λ ∈ R^z_{++} ∧ A_+λ = 0_p) ∧ (∀λ ∈ R^n · A_+λ ∉ R^p_+ ∖ {0_p}),  (8.2)
(inf_{λ∈R^n} f(Aλ) = inf_{λ∈R^n} f(A_+λ)) ∧ (inf_{λ∈R^n} f(A_0λ) = 0) ∧ f∘A_+ has minimizers,  (8.3)
ψ_A = [ψ_{A_0}; ψ_{A_+}] with ψ_{A_0} = 0_z ∧ ψ_{A_+} ∈ R^p_{++},  (8.4)
(Φ_{A_0} = {0_z}) ∧ (Φ_{A_+} ∩ R^p_{++} ≠ ∅) ∧ (Φ_A = Φ_{A_0} × Φ_{A_+}).  (8.5)

To see that any matrix A falls into one of the three scenarios here, fix a loss function g, and recall from theorem 3.1 that ψ_A is unique. In particular, the set of zero entries in ψ_A exactly specifies which of the three scenarios holds, the current scenario allowing for simultaneous positive and zero entries. Although this reasoning made use of ψ_A, note that it is A which dictates the behavior: in fact, as is shown in remark I.1 of the full version, the decomposition is unique. Returning to theorem 8.1, the geometry of fig. 2c is provided by (8.2) and (8.5). The analysis will start from (8.3), which allows the primal problem to be split into two pieces, which are then individually handled precisely as in the preceding sections. To finish, (8.5) will allow these pieces to be stitched together.

Theorem 8.6. Suppose A ≠ 0_{m×n}, g ∈ G, ψ_A ∈ R^m_+ ∖ (R^m_{++} ∪ {0_m}), and adopt the notation from theorem 8.1. Set w := sup_t ‖∇f(A_+λ_t) + P¹_{Φ_{A_+}}(−∇f(A_+λ_t))‖₁. Then w < ∞, and there exists a tightest cube C_+ so that C_+ ⊇ {x ∈ R^p : f(x) ≤ f(Aλ_0)}; let c > 0 be the modulus of strong convexity of f over C_+.
Then γ(A, R^z_+ × −∇f(C_+)) > 0, and for all t,

f(Aλ_t) − f̄_A ≤ 2f(Aλ_0) / ((t + 1) min{1, γ(A, R^z_+ × −∇f(C_+))²/((β + w/(2c))²η)}).

(In the case of the logistic loss, w ≤ sup_{x∈R^m} ‖∇f(x)‖₁ ≤ m.) As discussed previously, the bounds deteriorate to O(1/ε) because the finite and infinite margins sought by the two pieces A_0, A_+ are in conflict. For a beautifully simple, concrete case of this, consider the following matrix, due to Schapire [11]:

S := [ −1 +1 ]
     [ +1 −1 ]
     [ +1 +1 ].

The optimal solution here is to push both coordinates of λ unboundedly positive, with margins approaching (0, 0, ∞). But pushing any coordinate λ_i too quickly will increase the objective value, rather than decreasing it. In fact, this instance will provide a lower bound, and the mechanism of the proof shows that the primal weights grow extremely slowly, as O(ln(t)).

Theorem 8.7. Using the logistic loss and exact line search, for any t ≥ 1, f(Sλ_t) − f̄_S ≥ 1/(8t).

Acknowledgement

The author thanks Sanjoy Dasgupta, Daniel Hsu, Indraneel Mukherjee, and Robert Schapire for valuable conversations. The NSF supported this work under grants IIS-0713540 and IIS-0812598.

References

[1] Robert E. Schapire. The strength of weak learnability. Machine Learning, 5:197–227, July 1990.
[2] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
[3] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119–139, 1997.
[4] Leo Breiman. Prediction games and arcing algorithms. Neural Computation, 11:1493–1517, October 1999.
[5] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28, 1998.
[6] Peter J. Bickel, Yaacov Ritov, and Alon Zakai. Some theory for generalized boosting algorithms. Journal of Machine Learning Research, 7:705–732, 2006.
[7] Z. Q. Luo and P. Tseng.
On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72:7–35, 1992.
[8] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1-3):253–285, 2002.
[9] Gunnar Rätsch, Sebastian Mika, and Manfred K. Warmuth. On the convergence of leveraging. In NIPS, pages 487–494, 2001.
[10] Indraneel Mukherjee, Cynthia Rudin, and Robert Schapire. The convergence rate of AdaBoost. In COLT, 2011.
[11] Robert E. Schapire. The convergence rate of AdaBoost. In COLT, 2010.
[12] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In ICML, pages 322–330, 1997.
[13] Gunnar Rätsch and Manfred K. Warmuth. Maximizing the margin with boosting. In COLT, pages 334–350, 2002.
[14] Manfred K. Warmuth, Karen A. Glocer, and Gunnar Rätsch. Boosting algorithms for maximizing the soft margin. In NIPS, 2007.
[15] Shai Shalev-Shwartz and Yoram Singer. On the equivalence of weak learnability and linear separability: New relaxations and efficient boosting algorithms. In COLT, pages 311–322, 2008.
[16] Llew Mason, Jonathan Baxter, Peter L. Bartlett, and Marcus R. Frean. Functional gradient techniques for combining hypotheses. In A.J. Smola, P.L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 221–246, Cambridge, MA, 2000. MIT Press.
[17] Stephen P. Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[18] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Fundamentals of Convex Analysis. Springer Publishing Company, Incorporated, 2001.
[19] Jonathan Borwein and Adrian Lewis. Convex Analysis and Nonlinear Optimization. Springer Publishing Company, Incorporated, 2000.
[20] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions.
Machine Learning, 37(3):297–336, 1999.
[21] George B. Dantzig and Mukund N. Thapa. Linear Programming 2: Theory and Extensions. Springer, 2003.
[22] Adi Ben-Israel. Motzkin's transposition theorem, and the related theorems of Farkas, Gordan and Stiemke. In M. Hazewinkel, editor, Encyclopaedia of Mathematics, Supplement III. 2002.
See the Tree Through the Lines: The Shazoo Algorithm∗

Fabio Vitale, DSI, University of Milan, Italy (fabio.vitale@unimi.it)
Nicolò Cesa-Bianchi, DSI, University of Milan, Italy (nicolo.cesa-bianchi@unimi.it)
Claudio Gentile, DICOM, University of Insubria, Italy (claudio.gentile@uninsubria.it)
Giovanni Zappella, Dept. of Mathematics, Univ. of Milan, Italy (giovanni.zappella@unimi.it)

Abstract

Predicting the nodes of a given graph is a fascinating theoretical problem with applications in several domains. Since graph sparsification via spanning trees retains enough information while making the task much easier, trees are an important special case of this problem. Although it is known how to predict the nodes of an unweighted tree in a nearly optimal way, in the weighted case a fully satisfactory algorithm is not available yet. We fill this hole and introduce an efficient node predictor, SHAZOO, which is nearly optimal on any weighted tree. Moreover, we show that SHAZOO can be viewed as a common nontrivial generalization of both previous approaches for unweighted trees and weighted lines. Experiments on real-world datasets confirm that SHAZOO performs well in that it fully exploits the structure of the input tree, and gets very close to (and sometimes better than) less scalable energy minimization methods.

1 Introduction

Predictive analysis of networked data is a fast-growing research area whose application domains include document networks, online social networks, and biological networks. In this work we view networked data as weighted graphs, and focus on the task of node classification in the transductive setting, i.e., when the unlabeled graph is available beforehand. Standard transductive classification methods, such as label propagation [2, 3, 18], work by optimizing a cost or energy function defined on the graph, which includes the training information as labels assigned to training nodes.
Although these methods perform well in practice, they are often computationally expensive, and have performance guarantees that require statistical assumptions on the selection of the training nodes. A general approach to sidestep the above computational issues is to sparsify the graph to the largest possible extent, while retaining much of its spectral properties—see, e.g., [5, 6, 12, 16]. Inspired by [5, 6], this paper reduces the problem of node classification from graphs to trees by extracting suitable spanning trees of the graph, which can be done quickly in many cases. The advantage of performing this reduction is that node prediction is much easier on trees than on graphs. This fact has recently led to the design of very scalable algorithms with nearly optimal performance guarantees in the online transductive model, which comes with no statistical assumptions. Yet, the current results in node classification on trees are not satisfactory. The TREEOPT strategy of [5] is optimal to within constant factors, but only on unweighted trees. No equivalent optimality results are available for general weighted trees. To the best of our knowledge, the only other comparable result is WTA by [6], which is optimal (within log factors) only on weighted lines. In fact, WTA can still be applied to weighted trees by exploiting an idea contained in [9], based on linearizing the tree via a depth-first visit. Since linearization loses most of the structural information of the tree, this approach yields suboptimal mistake bounds. This theoretical drawback is also confirmed by empirical performance: throwing away the tree structure negatively affects the practical behavior of the algorithm on real-world weighted graphs.

∗This work was supported in part by Google Inc. through a Google Research Award, and by the PASCAL2 Network of Excellence under EC grant 216886. This publication only reflects the authors' views.
The importance of weighted graphs, as opposed to unweighted ones, is suggested by many practical scenarios where the nodes carry more information than just labels, e.g., vectors of feature values. A natural way of leveraging this side information is to set the weight on the edge linking two nodes to be some function of the similarity between the vectors associated with these nodes. In this work, we bridge the gap between the weighted and unweighted cases by proposing a new prediction strategy, called SHAZOO, achieving a mistake bound that depends on the detailed structure of the weighted tree. We carry out the analysis using a notion of learning bias different from the one used in [6] and more appropriate for weighted graphs. More precisely, we measure the regularity of the unknown node labeling via the weighted cutsize induced by the labeling on the tree (see Section 3 for a precise definition). This replaces the unweighted cutsize that was used in the analysis of WTA. When the weighted cutsize is used, a cut edge violates this inductive bias in proportion to its weight. This modified bias does not prevent a fair comparison between the old algorithms and the new one: SHAZOO specializes to TREEOPT in the unweighted case, and to WTA when the input tree is a weighted line. By specializing SHAZOO's analysis to the unweighted case we recover TREEOPT's optimal mistake bound. When the input tree is a weighted line, we recover WTA's mistake bound expressed through the weighted cutsize instead of the unweighted one. The effectiveness of SHAZOO on any tree is guaranteed by a corresponding lower bound (see Section 3). SHAZOO can be viewed as a common nontrivial generalization of both TREEOPT and WTA. Obtaining this generalization while retaining and extending the optimality properties of the two algorithms is far from being trivial from a conceptual and technical standpoint.
Since SHAZOO works in the online transductive model, it can easily be applied to the more standard train/test (or "batch") transductive setting: one simply runs the algorithm on an arbitrary permutation of the training nodes, and obtains a predictive model for all test nodes. However, the implementation might take advantage of knowing the set of training nodes beforehand. For this reason, we present two implementations of SHAZOO, one for the online and one for the batch setting. Both implementations result in fast algorithms. In particular, the batch one is linear in |V|. This is achieved by a fast algorithm for weighted cut minimization on trees, a procedure which lies at the heart of SHAZOO. Finally, we test SHAZOO against WTA, label propagation, and other competitors on real-world weighted graphs. In almost all cases (as expected), we report improvements over WTA due to the better sensitivity to the graph structure. In some cases, we see that SHAZOO even outperforms standard label propagation methods. Recall that label propagation has a running time per prediction which is proportional to |E|, where E is the graph edge set. On the contrary, SHAZOO can typically be run in constant amortized time per prediction by using Wilson's algorithm for sampling random spanning trees [17]. By disregarding edge weights in the initial sampling phase, this algorithm is able to draw a random (unweighted) spanning tree in time proportional to |V| on most graphs. Our experiments reveal that using the edge weights only in the subsequent prediction phase causes in practice only a minor performance degradation.

2 Preliminaries and basic notation

Let T = (V, E, W) be an undirected and weighted tree with |V| = n nodes, positive edge weights W_{i,j} > 0 for (i, j) ∈ E, and W_{i,j} = 0 for (i, j) ∉ E. A binary labeling of T is any assignment y = (y_1, . . . , y_n) ∈ {−1, +1}^n of binary labels to its nodes. We use (T, y) to denote the resulting labeled weighted tree.
The online learning protocol for predicting (T, y) is defined as follows. The learner is given T while y is kept hidden. The nodes of T are presented to the learner one by one, according to an unknown and arbitrary permutation i_1, . . . , i_n of V. At each time step t = 1, . . . , n node i_t is presented and the learner must issue a prediction ŷ_{i_t} ∈ {−1, +1} for the label y_{i_t}. Then y_{i_t} is revealed and the learner knows whether a mistake occurred. The learner's goal is to minimize the total number of prediction mistakes. Following previous works [10, 9, 5, 6], we measure the regularity of a labeling y of T in terms of φ-edges, where a φ-edge for (T, y) is any (i, j) ∈ E such that y_i ≠ y_j. The overall amount of irregularity in a labeled tree (T, y) is the weighted cutsize Φ_W = Σ_{(i,j)∈E_φ} W_{i,j}, where E_φ ⊆ E is the subset of φ-edges in the tree. We use the weighted cutsize as our learning bias, that is, we want to design algorithms whose predictive performance scales with Φ_W. Unlike the φ-edge count Φ = |E_φ|, which is a good measure of regularity for unweighted graphs, the weighted cutsize takes the edge weight W_{i,j} into account when measuring the irregularity of a φ-edge (i, j). In the sequel, when we measure the distance between any pair of nodes i and j on the input tree T we always use the resistance distance metric d, that is, d(i, j) = Σ_{(r,s)∈π(i,j)} 1/W_{r,s}, where π(i, j) is the unique path connecting i to j.

3 A lower bound for weighted trees

In this section we show that the weighted cutsize can be used as a lower bound on the number of online mistakes made by any algorithm on any tree. In order to do so (and unlike previous papers on this specific subject—see, e.g., [6]), we need to introduce a more refined notion of adversarial "budget". Given T = (V, E, W), let ξ(M) be the maximum number of edges of T such that the sum of their weights does not exceed M:

ξ(M) = max{ |E′| : E′ ⊆ E, Σ_{(i,j)∈E′} W_{i,j} ≤ M }.
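Since only the count of edges matters, ξ(M) admits a direct greedy computation: take edges in order of increasing weight until the budget M is exhausted. A minimal sketch (function name is illustrative):

```python
def xi(weights, M):
    """xi(M) = max number of edges whose total weight is at most M.
    weights: list of positive edge weights of the tree."""
    total, count = 0.0, 0
    for w in sorted(weights):  # lightest edges first
        if total + w > M:
            break
        total += w
        count += 1
    return count
```

Note that ξ is nondecreasing in M, and that when all weights equal 1 it reduces to ⌊M⌋ (capped at the number of edges), matching the unweighted cutsize budget.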
We have the following simple lower bound (all proofs are omitted from this extended abstract).

Theorem 1. For any weighted tree T = (V, E, W) there exists a randomized label assignment to V such that any algorithm can be forced to make at least ξ(M)/2 online mistakes in expectation, while Φ_W ≤ M.

Specializing [6, Theorem 1] to trees gives the lower bound K/2 under the constraint Φ ≤ K ≤ |V|. The main difference between the two bounds is the measure of label regularity being used: whereas Theorem 1 uses Φ_W, which depends on the weights, [6, Theorem 1] uses the weight-independent quantity Φ. This dependence of the lower bound on the edge weights is consistent with our learning bias, stating that a heavy φ-edge violates the bias more than a light one. Since ξ is nondecreasing, the lower bound implies a number of mistakes of at least ξ(Φ_W)/2. Note that ξ(Φ_W) ≥ Φ for any labeled tree (T, y). Hence, whereas a constraint K on Φ implies forcing at least K/2 mistakes, a constraint M on Φ_W allows the adversary to force a potentially larger number of mistakes. In the next section we describe an algorithm whose mistake bound nearly matches the above lower bound on any weighted tree when using ξ(Φ_W) as the measure of label regularity.

4 The Shazoo algorithm

In this section we introduce the SHAZOO algorithm, and relate it to previously proposed methods for online prediction on unweighted trees (TREEOPT from [5]) and weighted line graphs (WTA from [6]). In fact, SHAZOO is optimal on any weighted tree, and reduces to TREEOPT on unweighted trees and to WTA on weighted line graphs. Since TREEOPT and WTA are optimal on any unweighted tree and any weighted line graph, respectively, SHAZOO necessarily contains elements of both of these algorithms. In order to understand our algorithm, we now define some relevant structures of the input tree T. See Figure 1 (left) for an example. These structures evolve over time according to the set of observed labels.
First, we call revealed a node whose label has already been observed by the online learner; otherwise, a node is unrevealed. A fork is any unrevealed node connected to at least three different revealed nodes by edge-disjoint paths. A hinge node is either a revealed node or a fork. A hinge tree is any component of the forest obtained by removing from T all edges incident to hinge nodes; hence any fork or labeled node forms a 1-node hinge tree. When a hinge tree H contains only one hinge node, a connection node for H is the node contained in H. In all other cases, we call a connection node for H any node outside H which is adjacent to a node in H. A connection fork is a connection node which is also a fork. Finally, a hinge line is any path connecting two hinge nodes such that no internal node is a hinge node.

Given an unrevealed node i and a label value y ∈ {−1, +1}, the cut function cut(i, y) is the value of the minimum weighted cutsize of T over all labelings in {−1, +1}^n consistent with the labels seen so far and such that y_i = y. Define ∆(i) = cut(i, −1) − cut(i, +1) if i is unrevealed, and ∆(i) = y_i otherwise.

The algorithm's pseudocode is given in Algorithm 1. At time t, in order to predict the label y_{i_t} of node i_t, SHAZOO calculates ∆(i) for all connection nodes i of H(i_t), where H(i_t) is the hinge tree containing i_t. Then the algorithm predicts y_{i_t} using the label of the connection node i of H(i_t) which is closest to i_t and such that ∆(i) ≠ 0 (recall from Section 2 that all distances/lengths are measured using the resistance metric). Ties are broken arbitrarily. If ∆(i) = 0 for all connection nodes i in H(i_t), then SHAZOO predicts a default value (−1 in the pseudocode). If i_t is a fork (which is also a hinge node), then H(i_t) = {i_t}. In this case, i_t is a connection node of H(i_t), and obviously the one closest to itself. Hence, in this case SHAZOO predicts y_{i_t} simply by ŷ_{i_t} = sgn(∆(i_t)). See Figure 1 (middle) for an example.

Algorithm 1: SHAZOO
  for t = 1 ... n
    Let C(H(i_t)) be the set of the connection nodes i of H(i_t) for which ∆(i) ≠ 0
    if C(H(i_t)) ≠ ∅
      Let j be the node of C(H(i_t)) closest to i_t
      Set ŷ_{i_t} = sgn(∆(j))
    else
      Set ŷ_{i_t} = −1 (default value)

Figure 1: Left: An input tree. Revealed nodes are dark grey, forks are doubly circled, and hinge lines have thick black edges. The hinge trees not containing hinge nodes (i.e., the ones that are not singletons) are enclosed by dotted lines. The dotted arrows point to the connection node(s) of such hinge trees. Middle: The predictions of SHAZOO on the nodes of a hinge tree. The numbers on the edges denote edge weights. At a given time t, SHAZOO uses the value of ∆ on the two hinge nodes (the doubly circled ones, which are also forks in this case), and is required to issue a prediction on node i_t (the black node in this figure). Since i_t lies between a hinge node with positive ∆ and a hinge node with negative ∆, SHAZOO goes with the one which is closer in resistance distance, hence predicting ŷ_{i_t} = −1. Right: A simple example where the mincut prediction strategy does not work well in the weighted case. In this example, mincut mispredicts all labels, yet Φ = 1, and the ratio of Φ_W to the total weight of all edges is about 1/|V|. The labels to be predicted are presented according to the numbers on the left of each node. Edge weights are also displayed, where a is a very small constant.

On unweighted trees, computing ∆(i) for a connection node i reduces to the Fork Label Estimation Procedure in [5, Lemma 13]. On the other hand, predicting with the label of the connection node closest to i_t in resistance distance is reminiscent of the nearest-neighbor prediction of WTA on weighted line graphs [6]. In fact, as in WTA, this enables SHAZOO to take advantage of labelings whose φ-edges are lightly weighted.
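The per-trial prediction rule of Algorithm 1 is easy to sketch once the ∆ values and resistance distances to the connection nodes are available (computing those is the actual work). A minimal illustration in Python, with our own names and data layout:

```python
def shazoo_predict(connection_nodes, delta, dist):
    """One prediction step of Algorithm 1 (sketch).

    connection_nodes: connection nodes of the hinge tree containing i_t
    delta[i]: the value Delta(i) at each connection node i
    dist[i]: resistance distance from i to the node i_t to be predicted
    """
    # Keep only connection nodes with a non-zero Delta value.
    candidates = [i for i in connection_nodes if delta[i] != 0]
    if not candidates:
        return -1  # default value
    # Predict with the sign of Delta at the closest such node.
    j = min(candidates, key=lambda i: dist[i])
    return 1 if delta[j] > 0 else -1
```

Ties in the distance are broken arbitrarily here by `min`, matching the "ties are broken arbitrarily" statement above.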
An important limitation of WTA is that this algorithm linearizes the input tree. On the one hand, this greatly simplifies the analysis of nearest-neighbor prediction; on the other hand, it prevents exploiting the structure of T, thereby causing logarithmic slacks in the upper bound of WTA. The TREEOPT algorithm, instead, performs better when the unweighted input tree is very different from a line graph (more precisely, when the input tree cannot be decomposed into long edge-disjoint paths, e.g., a star graph). Indeed, TREEOPT's upper bound does not suffer from logarithmic slacks, and is tight up to constant factors on any unweighted tree. Similar to TREEOPT, SHAZOO does not linearize the input tree, and it extends TREEOPT's superior performance to the weighted case, as also confirmed by the experimental comparison reported in Section 6.

In Figure 1 (right) we show an example that highlights the importance of using the ∆ function to compute the fork labels. Since ∆ predicts a fork with the label that minimizes the weighted cutsize of T consistent with the revealed labels, one may wonder whether computing ∆ through a mincut based on the number of φ-edges (rather than their weighted sum) could be an effective prediction strategy. Figure 1 (right) illustrates an example of a simple tree where such a ∆ mispredicts the labels of all nodes, when both Φ_W and Φ are small.

Remark 1 We would like to stress that SHAZOO can also be used to predict the nodes of an arbitrary graph by first drawing a random spanning tree T of the graph, and then predicting optimally on T (see, e.g., [5, 6]). The resulting mistake bound is simply the expected value of SHAZOO's mistake bound over the random draw of T. By using a fast spanning tree sampler [17], the involved computational overhead amounts to constant amortized time per node prediction on "most" graphs.
Remark 2 In certain real-world input graphs, the presence of an edge linking two nodes may also carry information about the extent to which the two nodes are dissimilar, rather than similar. This information can be encoded by the sign of the weight, and the resulting network is called a signed graph. The regularity measure is naturally extended to signed graphs by counting the weight of frustrated edges (e.g., [7]), where (i, j) is frustrated if y_i y_j ≠ sgn(W_{i,j}). Many of the existing algorithms for node classification [18, 9, 10, 5, 8, 6] can in principle be run on signed graphs. However, the computational cost may not always be preserved. For example, mincut [4] is in general NP-hard when the graph is signed [13]. Since our algorithm sparsifies the graph using trees, it can be run efficiently even in the signed case. We just need to re-define the ∆ function as ∆(i) = fcut(i, −1) − fcut(i, +1), where fcut is the minimum total weight of frustrated edges consistent with the labels seen so far. The argument contained in Section 5 for positive edge weights (see, e.g., Eq. (1) therein) allows us to show that this version of ∆ can also be computed efficiently. The prediction rule has to be re-defined as well: we count the parity of the number z of negative-weighted edges along the path connecting i_t to the closest node j ∈ C(H(i_t)), i.e., ŷ_{i_t} = (−1)^z sgn(∆(j)).

Remark 3 In [5] the authors note that TREEOPT approximates a version space (Halving) algorithm on the set of tree labelings. Interestingly, SHAZOO is also an approximation to a more general Halving algorithm for weighted trees. This generalized Halving gives a weight to each labeling consistent with the labels seen so far and with the sign of ∆(f) for each fork f. These weighted labelings, which depend on the weights of the φ-edges generated by each labeling, are used for computing the predictions.
One can show (details omitted due to space limitations) that this generalized Halving algorithm has a mistake bound within a constant factor of SHAZOO's.

5 Mistake bound analysis and implementation

We now show that SHAZOO is nearly optimal on every weighted tree T. We obtain an upper bound in terms of Φ_W and the structure of T, nearly matching the lower bound of Theorem 1. We now give some auxiliary notation that is strictly needed for stating the mistake bound. Given a labeled tree (T, y), a cluster is any maximal subtree whose nodes have the same label. An in-cluster line graph is any line graph that is entirely contained in a single cluster. Finally, given a line graph L, we set R^W_L = Σ_{(i,j) ∈ L} 1/W_{i,j}, i.e., the (resistance) distance between its terminal nodes.

Theorem 2 For any labeled and weighted tree (T, y), there exists a set L_T of O(ξ(Φ_W)) edge-disjoint in-cluster line graphs such that the number of mistakes made by SHAZOO is at most of the order of

Σ_{L ∈ L_T} min{ |L|, 1 + log(1 + Φ_W R^W_L) }.

The above mistake bound depends on the tree structure through L_T. The sum contains O(ξ(Φ_W)) terms, each one being at most logarithmic in the scale-free products Φ_W R^W_L. The bound is governed by the same key quantity ξ(Φ_W) occurring in the lower bound of Theorem 1. However, Theorem 2 also shows that SHAZOO can take advantage of trees that cannot be covered by long line graphs. For example, if the input tree T is a weighted line graph, then it is likely to contain long in-cluster lines. Hence, the factor multiplying ξ(Φ_W) may be of the order of log(1 + Φ_W R^W_L). If, instead, T has constant diameter (e.g., a star graph), then the in-cluster lines can only contain a constant number of nodes, and the number of mistakes can never exceed O(ξ(Φ_W)).
This is a log factor improvement over WTA which, by its very nature, cannot exploit the structure of the tree it operates on.¹

As for the implementation, we start by describing a method for calculating cut(v, y) for any unlabeled node v and label value y. Let T^v be the maximal subtree of T rooted at v such that no internal node is revealed. For any node i of T^v, let T^v_i be the subtree of T^v rooted at i. Let Φ^v_i(y) be the minimum weighted cutsize of T^v_i consistent with the revealed nodes and such that y_i = y. Since ∆(v) = cut(v, −1) − cut(v, +1) = Φ^v_v(−1) − Φ^v_v(+1), our goal is to compute Φ^v_v(y). It is easy to see by induction that the quantity Φ^v_i(y) can be recursively defined as follows, where C^v_i is the set of all children of i in T^v, and Y_j ≡ {y_j} if y_j is revealed, and Y_j ≡ {−1, +1} otherwise:²

Φ^v_i(y) = Σ_{j ∈ C^v_i} min_{y′ ∈ Y_j} ( Φ^v_j(y′) + I{y′ ≠ y} W_{i,j} )  if i is an internal node of T^v,  and Φ^v_i(y) = 0 otherwise.   (1)

Now, Φ^v_v(y) can be computed through a simple depth-first visit of T^v. In all backtracking steps of this visit the algorithm uses (1) to compute Φ^v_i(y) for each node i, the values Φ^v_j(y) for all children j of i having been calculated during the previous backtracking steps. The total running time is therefore linear in the number of nodes of T^v. Next, we describe the basic implementation of SHAZOO for the online setting.

¹ One might wonder whether an arbitrarily large gap between upper (Theorem 2) and lower (Theorem 1) bounds exists due to the extra factors depending on Φ_W R^W_L. One way to get around this is to follow the analysis of WTA in [6]. Specifically, we can adapt here the more general analysis from that paper (see Lemma 2 therein), which allows us to drop, for any integer K, the resistance contribution of K arbitrary non-φ edges of the line graphs in L_T (thereby reducing R^W_L for any L containing any of these edges) at the cost of increasing the mistake bound by K. The details will be given in the full version of this paper.
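The recursion (1) is a bottom-up dynamic program over the tree, and ∆(v) follows from two evaluations of it at the root. A minimal Python sketch with our own names and data layout (a plain recursive version; an explicit-stack depth-first visit as described above would avoid recursion limits on deep trees):

```python
def phi(tree, weights, revealed, i, y):
    """Minimum weighted cutsize of the subtree rooted at i given y_i = y,
    consistent with the revealed labels: the recursion in Eq. (1).

    tree[i]: list of children of i (absent key = leaf);
    weights[(i, j)]: weight of edge (i, j);
    revealed[j]: observed label of node j, if any.
    """
    total = 0.0
    for j in tree.get(i, []):
        # Y_j is {y_j} if j's label is revealed, {-1, +1} otherwise.
        Yj = [revealed[j]] if j in revealed else [-1, +1]
        total += min(phi(tree, weights, revealed, j, yp)
                     + (weights[(i, j)] if yp != y else 0.0)
                     for yp in Yj)
    return total

def delta(tree, weights, revealed, v):
    # Delta(v) = cut(v, -1) - cut(v, +1) = Phi^v_v(-1) - Phi^v_v(+1)
    return (phi(tree, weights, revealed, v, -1)
            - phi(tree, weights, revealed, v, +1))
```

For instance, on a root with two revealed +1 children, ∆ at the root is positive, since labeling the root −1 cuts both edges.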
A batch learning implementation will be given at the end of this section. The online implementation is made up of three steps.

1. Find the hinge nodes of subtree T^{i_t}. Recall that a hinge node is either a fork or a revealed node. Observe that a fork is incident to at least three nodes lying on different hinge lines. Hence, in this step we perform a depth-first visit of T^{i_t}, marking each node lying on a hinge line. In order to accomplish this task, it suffices to mark each labeled node and, recursively, each parent of a marked node of T^{i_t}. At the end of this process we are able to single out the forks by counting, for each marked node i, the number of edges (i, j) such that j has been marked too. The remaining hinge nodes are the leaves of T^{i_t} whose labels have currently been revealed.

2. Compute sgn(∆(i)) for all connection forks of H(i_t). From the previous step we can easily find the connection node(s) of H(i_t). Then, we simply exploit the above-described technique for computing the cut function, obtaining sgn(∆(i)) for all connection forks i of H(i_t).

3. Propagate the labels of the nodes of C(H(i_t)) (only if i_t is not a fork). We perform a visit of H(i_t) starting from every node r ∈ C(H(i_t)). During these visits, we mark each node j of H(i_t) with the label of r computed in the previous step, together with the length of π(r, j), which is what we need for predicting any label of H(i_t) at the current time step.

The overall running time is dominated by the first step and the calculation of ∆(i). Hence the worst-case running time is proportional to Σ_{t ≤ |V|} |V(T^{i_t})|. This quantity can be quadratic in |V|, though this is rarely encountered in practice if the node presentation order is not adversarial. For example, it is easy to show that in a line graph, if the node presentation order is random, then the total time is of the order of |V| log |V|.
For a star graph the total time complexity is always linear in |V|, even on adversarial orders. In many real-world scenarios, one is interested in the more standard problem of predicting the labels of a given subset of test nodes based on the available labels of another subset of training nodes. Building on the above online implementation, we now derive an implementation of SHAZOO for this train/test (or "batch learning") setting. We first show that computing Φ^i_i(+1) and Φ^i_i(−1) for all unlabeled nodes i in T takes O(|V|) time. This allows us to compute sgn(∆(v)) for all forks v in O(|V|) time, and then use the first and the third steps of the online implementation. Overall, we show that predicting all labels in the test set takes O(|V|) time.

Consider tree T^i as rooted at i. Given any unlabeled node i, we perform a visit of T^i starting at i. During the backtracking steps of this visit we use (1) to calculate Φ^i_j(y) for each node j in T^i and label y ∈ {−1, +1}. Observe now that for any pair i, j of adjacent unlabeled nodes and any label y ∈ {−1, +1}, once we have obtained Φ^i_i(y), Φ^i_j(+1) and Φ^i_j(−1), we can compute Φ^j_i(y) in constant time, as

Φ^j_i(y) = Φ^i_i(y) − min_{y′ ∈ {−1,+1}} ( Φ^i_j(y′) + I{y′ ≠ y} W_{i,j} ).

In fact, all children of j in T^i are descendants of i, while the children of i in T^i (except j) are descendants of j in T^j. Once SHAZOO computes Φ^i_i(y), we can compute in constant time Φ^j_i(y) for all child nodes j of i in T^i, and use this value for computing Φ^j_j(y). Generalizing this argument, it is easy to see that in the next phase we can compute Φ^k_k(y) in constant time for all nodes k of T^i such that, for all ancestors u of k and all y ∈ {−1, +1}, the values Φ^u_u(y) have previously been computed.

² The recursive computations contained in this section are reminiscent of the sum-product algorithm [11].
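The constant-time rerooting step described above can be sketched as follows (our own names; phi_ij is assumed to hold the two values Φ^i_j(±1) computed during the first visit):

```python
def reroot(phi_ii, phi_ij, w_ij, y):
    """Given Phi^i_i(y) (cutsize rooted at i) and Phi^i_j(.) for a child j,
    recover Phi^j_i(y): the cutsize of i's side when the tree is rerooted at j.

    phi_ii: Phi^i_i(y);
    phi_ij: dict {+1: Phi^i_j(+1), -1: Phi^i_j(-1)};
    w_ij:   weight of edge (i, j);
    y:      label in {-1, +1}.
    """
    # Subtract the contribution of child j from the root value at i,
    # i.e., the min over j's label of its subtree cutsize plus the
    # edge penalty when the labels disagree.
    return phi_ii - min(phi_ij[yp] + (w_ij if yp != y else 0.0)
                        for yp in (-1, +1))
```

Applying this once per edge yields all root values Φ^k_k(·) in overall linear time, as claimed.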
The time for computing Φ^s_s(y) for all nodes s of T^i and any label y is therefore linear in the time of performing a breadth-first (or depth-first) visit of T^i, i.e., linear in the number of nodes of T^i. Since each labeled node with degree d is part of at most d trees T^i for some i, the total number of nodes of all distinct (edge-disjoint) trees T^i across i ∈ V is linear in |V|. Finally, we need to propagate the connection node labels of each hinge tree as in the third step of the online implementation. Since this last step also takes linear time, we conclude that the total time for predicting all labels is linear in |V|.

6 Experiments

We tested our algorithm on a number of real-world weighted graphs from different domains (character recognition, text categorization, bioinformatics, Web spam detection) against the following baselines.

Online Majority Vote (OMV). This is an intuitive and fast algorithm which sequentially predicts the node labels via a weighted majority vote over the labels of the adjacent nodes seen so far. Namely, OMV predicts y_{i_t} through the sign of Σ_s y_{i_s} W_{i_s,i_t}, where s ranges over all s < t such that (i_s, i_t) ∈ E. Both the total time and space required by OMV are Θ(|E|).

Label Propagation (LABPROP). LABPROP [18, 2, 3] is a batch transductive learning method computed by solving a system of linear equations, which requires total time of the order of |E| × |V|. This relatively high computational cost should be taken into account when comparing LABPROP to faster online algorithms. Recall that OMV can be viewed as a fast "online approximation" to LABPROP.

Weighted Tree Algorithm (WTA). As explained in the introductory section, WTA can be viewed as a special case of SHAZOO. When the input graph is not a line, WTA turns it into a line by first extracting a spanning tree of the graph, and then linearizing it.
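The OMV baseline described above takes only a few lines. A minimal sketch with our own names, an undirected edge-weight map, and an arbitrary tie-breaking choice toward −1:

```python
def omv_predict(neighbors, weights, seen_labels, node):
    """Online Majority Vote: the sign of the weighted sum of the labels of
    node's neighbors revealed so far; ties are broken toward -1 here."""
    vote = sum(seen_labels[u] * weights[frozenset((u, node))]
               for u in neighbors[node] if u in seen_labels)
    return 1 if vote > 0 else -1
```

Each prediction touches only the already-labeled neighbors of the queried node, which is why the total time and space over a full pass are Θ(|E|).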
The implementation described in [6] runs in constant amortized time per prediction whenever the spanning tree sampler runs in time Θ(|V|). The Graph Perceptron algorithm [10] is another readily available baseline. This algorithm has been excluded from our comparison because it does not seem to be very competitive in terms of performance (see, e.g., [6]), and is also computationally expensive. In our experiments, we combined SHAZOO and WTA with spanning trees generated in different ways (note that OMV and LABPROP do not need to extract spanning trees from the input graph).

Random Spanning Tree (RST). Following Ch. 4 of [12], we draw a weighted spanning tree with probability proportional to the product of its edge weights. We also tested our algorithms combined with random spanning trees generated uniformly at random, ignoring the edge weights (the weights were only used to compute predictions on the randomly generated tree); we call these spanning trees NWRST (no-weight RST). On most graphs, this procedure can be run in time linear in the number of nodes [17]. Hence, the combinations SHAZOO+NWRST and WTA+NWRST run in O(|V|) time on most graphs.

Minimum Spanning Tree (MST). This is just the minimal weight spanning tree, where the weight of a spanning tree is the sum of its edge weights. This is the tree that best approximates the original graph in terms of the trace norm distance of the corresponding Laplacian matrices.

Following [10, 6], we also ran SHAZOO and WTA using committees of spanning trees, aggregating their predictions via a majority vote. The resulting algorithms are denoted by k*SHAZOO and k*WTA, where k is the number of spanning trees in the aggregation. We used either k = 7, 11 or k = 3, 7, depending on the dataset size. For our experiments, we used five datasets: RCV1, USPS, KROGAN, COMBINED, and WEBSPAM.
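As an illustration of RST sampling, the Aldous-Broder random walk below also draws a spanning tree with probability proportional to the product of its edge weights; the paper instead uses the faster sampler of Wilson [17]. Names and data layout are our own:

```python
import random

def aldous_broder(neighbors, weights, root):
    """Sample a spanning tree with probability proportional to the product
    of its edge weights, via a weighted Aldous-Broder random walk: the
    first-entrance edge of every node joins the tree."""
    tree, visited, cur = [], {root}, root
    n = len(neighbors)
    while len(visited) < n:
        nbrs = neighbors[cur]
        ws = [weights[frozenset((cur, v))] for v in nbrs]
        nxt = random.choices(nbrs, weights=ws)[0]
        if nxt not in visited:
            visited.add(nxt)
            tree.append((cur, nxt))
        cur = nxt
    return tree
```

Dropping the `weights=ws` argument (a uniform step among neighbors) gives a sampler in the spirit of NWRST, where the edge weights are ignored during tree generation.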
WEBSPAM is a big dataset (110,900 nodes and 1,836,136 edges) of inter-host links created for the Web Spam Challenge 2008 [15].³ KROGAN (2,169 nodes and 6,102 edges) and COMBINED (2,871 nodes and 6,407 edges) are high-throughput protein-protein interaction networks of budding yeast taken from [14]; see [6] for a more complete description. Finally, USPS and RCV1 are graphs obtained from the USPS handwritten characters dataset (all ten categories) and the first 10,000 documents in chronological order of Reuters Corpus Vol. 1 (the four most frequent categories), respectively. In both cases, we used Euclidean 10-nearest-neighbor to create edges, each weight W_{i,j} being equal to exp(−‖x_i − x_j‖² / σ²_{i,j}). We set σ²_{i,j} = (σ²_i + σ²_j)/2, where σ²_i is the average squared distance between i and its 10 nearest neighbours. Following previous experimental settings [6], we associate binary classification tasks with the five datasets/graphs via a standard one-vs-all reduction. Each error rate is obtained by averaging over ten randomly chosen training sets (and ten different trees in the case of RST and NWRST). WEBSPAM is natively a binary classification problem, and we used the same train/test split provided with the dataset: 3,897 training nodes and 1,993 test nodes (the remaining nodes being unlabeled). In the table below, we show the macro-averaged classification error rates (percentages) achieved by the various algorithms on the first four datasets. For each dataset we trained ten times over a random subset of 5%, 10% and 25% of the total number of nodes and tested on the remaining ones. In boldface are the lowest error rates on each column, excluding LABPROP which is used as a "yardstick" comparison.

³ We do not compare our results to those obtained within the challenge since we are only exploiting the graph (weighted) topology here, disregarding content features.
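The adaptive-bandwidth Gaussian weights just described can be computed as follows (a sketch with our own names; the per-node variances σ²_i are assumed precomputed from the 10-NN distances):

```python
import math

def knn_weight(xi, xj, sigma2_i, sigma2_j):
    """Edge weight w_ij = exp(-||xi - xj||^2 / sigma2_ij) with the
    adaptive bandwidth sigma2_ij = (sigma2_i + sigma2_j) / 2."""
    d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    sigma2_ij = 0.5 * (sigma2_i + sigma2_j)
    return math.exp(-d2 / sigma2_ij)
```

Averaging the two per-node bandwidths keeps the weight matrix symmetric while letting dense and sparse regions of the feature space use different scales.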
Standard deviations averaged over the binary problems are small: most of the time less than 0.5%.

Predictors        | USPS 5% 10% 25%    | RCV1 5% 10% 25%     | KROGAN 5% 10% 25%   | COMBINED 5% 10% 25%
SHAZOO+RST        | 3.62  2.82  2.02   | 21.72 18.70 15.68   | 18.11 17.68 17.10   | 17.77 17.24 17.34
SHAZOO+NWRST      | 3.88  3.03  2.18   | 21.97 19.21 15.95   | 18.11 18.14 17.32   | 17.22 17.21 17.53
SHAZOO+MST        | 1.07  0.96  0.80   | 17.71 14.87 11.73   | 17.46 16.92 16.30   | 16.79 16.64 17.15
WTA+RST           | 5.34  4.23  3.02   | 25.53 22.66 19.05   | 21.82 21.05 20.08   | 21.76 21.38 20.26
WTA+NWRST         | 5.74  4.45  3.26   | 25.50 22.70 19.24   | 21.90 21.28 20.18   | 21.58 21.42 20.64
WTA+MST           | 1.81  1.60  1.21   | 21.07 17.94 13.92   | 21.41 20.63 19.61   | 21.74 21.20 20.32
7*SHAZOO+RST      | 1.68  1.28  0.97   | 16.33 13.52 11.07   | 15.54 15.58 15.46   | 15.12 15.24 15.84
7*SHAZOO+NWRST    | 1.89  1.38  1.06   | 16.49 13.98 11.37   | 15.61 15.62 15.50   | 15.02 15.12 15.80
7*WTA+RST         | 2.10  1.56  1.14   | 17.44 14.74 12.15   | 16.75 16.64 15.88   | 16.42 16.09 15.72
7*WTA+NWRST       | 2.33  1.73  1.24   | 17.69 15.18 12.53   | 16.71 16.60 16.00   | 16.24 16.13 15.79
11*SHAZOO+RST     | 1.52  1.17  0.89   | 15.82 13.04 10.59   | 15.36 15.40 15.29   | 14.91 15.06 15.61
11*SHAZOO+NWRST   | 1.70  1.27  0.98   | 15.95 13.42 10.93   | 15.40 15.33 15.32   | 14.87 14.99 15.67
11*WTA+RST        | 1.84  1.36  1.01   | 16.40 13.95 11.42   | 16.20 16.15 15.53   | 15.90 15.58 15.30
11*WTA+NWRST      | 2.04  1.51  1.12   | 16.70 14.28 11.68   | 16.22 16.05 15.50   | 15.74 15.57 15.33
OMV               | 24.79 12.34 2.10   | 31.65 22.35 11.79   | 43.13 38.75 29.84   | 44.72 40.86 33.24
LABPROP           | 1.95  1.11  0.82   | 16.28 12.99 10.00   | 15.56 14.98 15.23   | 14.79 14.93 15.18

Next, we extract from the above table a specific comparison among SHAZOO, WTA, and LABPROP. SHAZOO and WTA use a single minimum spanning tree (the best performing tree type for both algorithms). Note that SHAZOO consistently outperforms WTA. We then report the results on WEBSPAM. SHAZOO and WTA use only non-weighted random spanning trees (NWRST) to optimize scalability. Since this dataset is extremely unbalanced (5.4% positive labels), we use the average test set F-measure instead of the error rate.
Predictor:  SHAZOO  WTA    OMV    LABPROP  3*WTA  3*SHAZOO  7*WTA  7*SHAZOO
F-measure:  0.954   0.947  0.706  0.931    0.967  0.964     0.968  0.968

Our empirical results can be briefly summarized as follows:

1. Without using committees, SHAZOO outperforms WTA on all datasets, irrespective of the type of spanning tree being used. With committees, SHAZOO works better than WTA almost always, although the gap between the two reduces.

2. The predictive performance of SHAZOO+MST is comparable to, and sometimes better than, that of LABPROP, though the latter algorithm is slower.

3. k*SHAZOO, with k = 11 (or k = 7 on WEBSPAM), seems to be especially effective, outperforming LABPROP with a small (e.g., 5%) training set size.

4. NWRST does not offer the same theoretical guarantees as RST, but it is extremely fast to generate (linear in |V| on most graphs; see, e.g., [1]), and in our experiments it is only slightly inferior to RST.

References

[1] N. Alon, C. Avin, M. Koucký, G. Kozma, Z. Lotker, and M.R. Tuttle. Many random walks are faster than one. In Proc. 20th Symposium on Parallel Algorithms and Architectures, pages 119–128. Springer, 2008.
[2] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. In Proceedings of the 17th Annual Conference on Learning Theory, pages 624–638. Springer, 2004.
[3] Y. Bengio, O. Delalleau, and N. Le Roux. Label propagation and quadratic criterion. In Semi-Supervised Learning, pages 193–216. MIT Press, 2006.
[4] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In Proceedings of the 18th International Conference on Machine Learning. Morgan Kaufmann, 2001.
[5] N. Cesa-Bianchi, C. Gentile, and F. Vitale. Fast and optimal prediction of a labeled tree. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[6] N. Cesa-Bianchi, C. Gentile, F. Vitale, and G. Zappella. Random spanning trees and the prediction of weighted graphs. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[7] G. Iacono and C. Altafini. Monotonicity, frustration, and ordered response: an analysis of the energy landscape of perturbed large-scale biological networks. BMC Systems Biology, 4(83), 2010.
[8] M. Herbster and G. Lever. Predicting the labelling of a graph via minimum p-seminorm interpolation. In Proceedings of the 22nd Annual Conference on Learning Theory. Omnipress, 2009.
[9] M. Herbster, G. Lever, and M. Pontil. Online prediction on large diameter graphs. In Advances in Neural Information Processing Systems 22. MIT Press, 2009.
[10] M. Herbster, M. Pontil, and S. Rojas-Galeano. Fast prediction on a tree. In Advances in Neural Information Processing Systems 22. MIT Press, 2009.
[11] F.R. Kschischang, B.J. Frey, and H.A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[12] R. Lyons and Y. Peres. Probability on Trees and Networks. Manuscript, 2008.
[13] S.T. McCormick, M.R. Rao, and G. Rinaldi. Easy and difficult objective functions for max cut. Mathematical Programming, 94(2–3):459–466, 2003.
[14] G. Pandey, M. Steinbach, R. Gupta, T. Garg, and V. Kumar. Association analysis-based transformations for protein interaction networks: a function prediction case study. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 540–549. ACM Press, 2007.
[15] Yahoo! Research (Barcelona) and Laboratory of Web Algorithmics (Univ. of Milan). Web spam collection. URL: barcelona.research.yahoo.net/webspam/datasets/.
[16] D.A. Spielman and N. Srivastava. Graph sparsification by effective resistances. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing (STOC 2008). ACM Press, 2008.
[17] D.B. Wilson. Generating random spanning trees more quickly than the cover time. In Proceedings of the 28th ACM Symposium on the Theory of Computing, pages 296–303. ACM Press, 1996.
[18] X. Zhu, Z. Ghahramani, and J. Lafferty.
Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning, 2003.
t-divergence Based Approximate Inference

Nan Ding², S.V.N. Vishwanathan¹,², Yuan Qi²,¹
Departments of ¹Statistics and ²Computer Science, Purdue University
ding10@purdue.edu, vishy@stat.purdue.edu, alanqi@cs.purdue.edu

Abstract

Approximate inference is an important technique for dealing with large, intractable graphical models based on the exponential family of distributions. We extend the idea of approximate inference to the t-exponential family by defining a new t-divergence. This divergence measure is obtained via convex duality between the log-partition function of the t-exponential family and a new t-entropy. We illustrate our approach on the Bayes Point Machine with a Student's t-prior.

1 Introduction

The exponential family of distributions is ubiquitous in statistical machine learning. One prominent application is their use in modeling conditional independence between random variables via a graphical model. However, when the number of random variables is large, and the underlying graph structure is complex, a number of computational issues need to be tackled in order to make inference feasible. Therefore, a number of approximate techniques have been brought to bear on the problem. Two prominent approximate inference techniques are the Markov chain Monte Carlo (MCMC) method [1] and deterministic methods [2, 3]. Deterministic methods are gaining significant research traction, mostly because of their high efficiency and practical success in many applications. Essentially, these methods are premised on the search for a proxy in an analytically solvable distribution family that approximates the true underlying distribution. To measure the closeness between the true and the approximate distributions, the relative entropy between these two distributions is used. When working with the exponential family, one uses the Shannon-Boltzmann-Gibbs (SBG) entropy, in which case the relative entropy is the well-known Kullback-Leibler (KL) divergence [2].
Numerous well-known algorithms for the exponential family, such as the mean field method [2, 4] and expectation propagation [3, 5], are based on this criterion. The thin-tailed nature of the exponential family makes it unsuitable for designing algorithms which are potentially robust against certain kinds of noisy data. Notable work, including [6, 7], utilizes mixture/split exponential family based approximate models to improve robustness. Meanwhile, effort has also been devoted to developing alternate, generalized distribution families in statistics [e.g. 8, 9], statistical physics [e.g. 10, 11], and most recently in machine learning [e.g. 12]. Of particular interest to us is the t-exponential family¹, which was first proposed by Tsallis and co-workers [10, 13, 14]. It is a special case of the more general φ-exponential family of Naudts [11, 15–17]. Related work in [18] has applied the t-exponential family to generalize logistic regression and obtain an algorithm that is robust against certain types of label noise.

In this paper, we attempt to generalize deterministic approximate inference by using the t-exponential family. In other words, the approximate distribution used is from the t-exponential family. To obtain the corresponding divergence measure, as in the exponential family, we exploit the convex duality between the log-partition function of the t-exponential family and a new t-entropy² to define the t-divergence. To illustrate the usage of the above procedure, we use it for approximate inference in the Bayes Point Machine (BPM) [3], but with a Student's t-prior. The rest of the paper is organized as follows. Section 2 consists of a brief review of the t-exponential family. In Section 3 a new t-entropy is defined as the convex dual of the log-partition function of the t-exponential family. In Section 4, the t-divergence is derived and is used for approximate inference in Section 5.

¹ Sometimes also called the q-exponential family or the Tsallis distribution.
Section 6 illustrates the inference approach by applying it to the Bayes Point Machine with a Student's t-prior, and we conclude the paper with a discussion in Section 7.

2 The t-exponential Family and Related Entropies

The t-exponential family was first proposed by Tsallis and co-workers [10, 13, 14]. It is defined as

p(x; θ) := exp_t(⟨Φ(x), θ⟩ − g_t(θ)),   (1)

where

exp_t(x) := exp(x) if t = 1,  and  exp_t(x) := [1 + (1 − t)x]_+^{1/(1−t)} otherwise.   (2)

The inverse of the exp_t function is called log_t. Note that the log-partition function g_t(θ) in (1) preserves convexity and satisfies

∇_θ g_t(θ) = E_q[Φ(x)].   (3)

Here q(x) is called the escort distribution of p(x), and is defined as

q(x) := p(x)^t / ∫ p(x)^t dx.   (4)

See the supplementary material for a proof of convexity of g_t(θ) based on material from [17], and a detailed review of the t-exponential family of distributions.

There are various generalizations of the Shannon-Boltzmann-Gibbs (SBG) entropy which have been proposed in statistical physics and paired with the t-exponential family of distributions. Perhaps the most well-known among them is the Tsallis entropy [10]:

H_tsallis(p) := −∫ p(x)^t log_t p(x) dx.   (5)

Naudts [11, 15, 16, 17] proposed a more general framework, wherein the familiar exp and log functions are generalized to exp_φ and log_φ functions which are defined via a function φ. These generalized functions are used to define a family of distributions, and corresponding to this family an entropy-like measure called the information content I_φ(p), as well as its divergence measure, are defined. The information content is the dual of a function F(θ), where

∇_θ F(θ) = E_p[Φ(x)].   (6)

Setting φ(p) = p^t in the Naudts framework recovers the t-exponential family defined in (1). Interestingly, when φ(p) = (1/t) p^{2−t}, the information content I_φ is exactly the Tsallis entropy (5). Another well-known non-SBG entropy is the Rényi entropy [19]. The Rényi α-entropy (for α ≠ 1) of the probability distribution p(x) is defined as:

H_α(p) = (1/(1 − α)) log ∫ p(x)^α dx.   (7)
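Definition (2) and its inverse log_t are straightforward to implement; a small sketch (our own code) that reduces to the usual exp and log at t = 1:

```python
import math

def exp_t(x, t):
    """Generalized exponential of Eq. (2): exp(x) for t = 1,
    [1 + (1 - t) x]_+^(1/(1-t)) otherwise."""
    if t == 1:
        return math.exp(x)
    base = max(0.0, 1.0 + (1.0 - t) * x)
    return base ** (1.0 / (1.0 - t))

def log_t(x, t):
    """Inverse of exp_t on x > 0: log(x) for t = 1,
    (x^(1-t) - 1) / (1 - t) otherwise."""
    if t == 1:
        return math.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)
```

For t > 1 the [·]_+ truncation gives exp_t heavy polynomial tails, which is exactly what makes the family attractive for noise-robust modeling.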
(7)

Besides these entropies proposed in statistical physics, it is also worth noting efforts that work with generalized linear models or utilize different divergence measures, such as [5, 8, 20, 21]. It is well known that the negative SBG entropy is the Fenchel dual of the log-partition function of an exponential family distribution. This fact is crucially used in variational inference [2]. Although all of the above generalized entropies are useful in their own way, none of them satisfies this important property for the t-exponential family. In the following sections we attempt to find an entropy which satisfies this property (although closely related, our t-entropy definition is different from both the Tsallis entropy [10] and the information content in [17]; nevertheless, it can be regarded as an example of the generalized entropy framework proposed in [8]), and outline the principles of approximate inference using the t-exponential family. Note that although our main focus is the t-exponential family, we believe that our results can also be extended to the more general φ-exponential family of Naudts [15, 17].

3 Convex Duality and the t-Entropy

Definition 1 (Inspired by Wainwright and Jordan [2]) The t-entropy of a distribution p(x; θ) is defined as

Ht(p(x; θ)) := −∫ q(x; θ) logt p(x; θ) dx = −Eq[logt p(x; θ)], (8)

where q(x; θ) is the escort distribution of p(x; θ). It is straightforward to verify that the t-entropy is non-negative. Furthermore, the following theorem establishes the duality between −Ht and gt. The proof is provided in the supplementary material. This extends Theorem 3.4 of [2] to the t-entropy.

Theorem 2 For any µ, define θ(µ) (if it exists) to be the parameter of the t-exponential family s.t.

µ = Eq(x;θ(µ))[Φ(x)] = ∫ Φ(x) q(x; θ(µ)) dx. (9)

Then

g*t(µ) = −Ht(p(x; θ(µ))) if θ(µ) exists, and +∞ otherwise, (10)

where g*t(µ) denotes the Fenchel dual of gt(θ). By duality it also follows that

gt(θ) = sup_µ {⟨µ, θ⟩ − g*t(µ)}.
(11)

From Theorem 2, it is obvious that Ht(µ) is a concave function. Below, we derive the t-entropy corresponding to two commonly used distributions. See Figure 1 for a graphical illustration.

Example 1 (t-entropy of the Bernoulli distribution) Assume the Bernoulli distribution is Bern(p) with parameter p. The t-entropy is

Ht(p) = [−p^t logt p − (1 − p)^t logt(1 − p)] / [p^t + (1 − p)^t] = [1 − (p^t + (1 − p)^t)^−1] / (1 − t). (12)

Example 2 (t-entropy of the Student's t-distribution) Assume that a k-dimensional Student's t-distribution p(x; µ, Σ, v) is given by (54); then the t-entropy of p(x; µ, Σ, v) is

Ht(p(x)) = −(Ψ/(1 − t)) (1 + v^−1) + 1/(1 − t), (13)

where K = (v Σ)^−1, v = 2/(t − 1) − k, and Ψ = [Γ((v + k)/2) / ((πv)^{k/2} Γ(v/2) |Σ|^{1/2})]^{−2/(v+k)}.

Figure 1: t-entropy corresponding to two well-known probability distributions. Left: the Bernoulli distribution Bern(x; p); right: the Student's t-distribution St(x; 0, σ², v), where v = 2/(t − 1) − 1. One can recover the SBG entropy by setting t = 1.0.

3.1 Relation with the Tsallis Entropy

Using (4), (5), and (8), the relation between the t-entropy and the Tsallis entropy is obvious: the t-entropy is a normalized version of the Tsallis entropy,

Ht(p) = −(1 / ∫ p(x)^t dx) ∫ p(x)^t logt p(x) dx = (1 / ∫ p(x)^t dx) Htsallis(p). (14)

3.2 Relation with the Rényi Entropy

We can equivalently rewrite the Rényi entropy as:

Hα(p) = (1/(1 − α)) log ∫ p(x)^α dx = −log [ (∫ p(x)^α dx)^{−1/(1−α)} ]. (15)

The t-entropy of p(x) (when t ≠ 1) is equal to

Ht(p) = −(∫ p(x)^t logt p(x) dx) / (∫ p(x)^t dx) = −logt [ (∫ p(x)^t dx)^{−1/(1−t)} ]. (16)

Therefore, when α = t,

Ht(p) = −logt(exp(−Hα(p))). (17)

When t and α → 1, both entropies go to the SBG entropy.

4 The t-divergence

Recall that the Bregman divergence defined by a convex function −H between p and ˜p is [22]:

D(p‖˜p) = −H(p) + H(˜p) + ∫ (dH(˜p)/d˜p) (p(x) − ˜p(x)) dx.
(18)

For the SBG entropy, it is easy to verify that the Bregman divergence leads to the relative SBG entropy (also widely known as the Kullback-Leibler (KL) divergence). Analogously, one can define the t-divergence as the Bregman divergence, or relative entropy, based on the t-entropy.

Definition 3 The t-divergence, which is the relative t-entropy between two distributions p(x) and ˜p(x), is defined as

Dt(p‖˜p) = ∫ q(x) logt p(x) − q(x) logt ˜p(x) dx, (19)

where q(x) is the escort distribution of p(x). The following theorem states the relationship between the relative t-entropy and the Bregman divergence. The proof is provided in the supplementary material.

Theorem 4 The t-divergence is the Bregman divergence defined on the negative t-entropy −Ht(p). (Note that the t-divergence is not a special case of the divergence measure of Naudts [17], because the entropies are defined differently, although the derivations are fairly similar in spirit.)

The t-divergence plays a central role in the variational inference that will be derived shortly. It also has the following properties:

• Dt(p‖˜p) ≥ 0 for all p, ˜p, with equality only for p = ˜p.
• Dt(p‖˜p) ≠ Dt(˜p‖p) in general, i.e. the t-divergence is not symmetric.
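As a sanity check on Definition 3, the t-divergence between two Bernoulli distributions can be computed directly from the definition. The following minimal Python sketch (function names and test values are ours, not from the paper) exhibits non-negativity, the asymmetry of the t-divergence, and its convergence to the KL divergence as t → 1:

```python
import math

def log_t(x, t):
    """Generalized logarithm log_t; reduces to the natural log as t -> 1."""
    if t == 1.0:
        return math.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def t_divergence_bernoulli(p1, p2, t):
    """D_t(p1 || p2) for Bernoulli distributions, straight from the
    definition: the escort q1 of p1 weights log_t p1 - log_t p2."""
    s1 = p1 ** t + (1.0 - p1) ** t            # escort normalizer of p1
    q0, q1 = p1 ** t / s1, (1.0 - p1) ** t / s1  # escort distribution of p1
    return (q0 * (log_t(p1, t) - log_t(p2, t))
            + q1 * (log_t(1.0 - p1, t) - log_t(1.0 - p2, t)))

def kl_bernoulli(p1, p2):
    """Ordinary KL divergence between Bernoulli distributions."""
    return (p1 * math.log(p1 / p2)
            + (1.0 - p1) * math.log((1.0 - p1) / (1.0 - p2)))
```

For instance, `t_divergence_bernoulli(0.2, 0.6, 1.5)` and `t_divergence_bernoulli(0.6, 0.2, 1.5)` differ, illustrating the asymmetry, while at t close to 1 the t-divergence is numerically indistinguishable from `kl_bernoulli`.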
Example 3 (Relative t-entropy between Bernoulli distributions) Assume two Bernoulli distributions Bern(p1) and Bern(p2) are given; then the relative t-entropy Dt(p1‖p2) between these two distributions is:

Dt(p1‖p2) = [p1^t logt p1 + (1 − p1)^t logt(1 − p1) − p1^t logt p2 − (1 − p1)^t logt(1 − p2)] / [p1^t + (1 − p1)^t] (20)
= [1 − p1^t p2^{1−t} − (1 − p1)^t (1 − p2)^{1−t}] / [(1 − t)(p1^t + (1 − p1)^t)]. (21)

Example 4 (Relative t-entropy between Student's t-distributions) Assume that two Student's t-distributions p1(x; µ1, Σ1, v) and p2(x; µ2, Σ2, v) are given; then the relative t-entropy Dt(p1‖p2) between these two distributions is:

Dt(p1‖p2) = ∫ q1(x) logt p1(x) − q1(x) logt p2(x) dx
= (Ψ1/(1 − t)) (1 + v^−1) + (2Ψ2/(1 − t)) µ1ᵀ K2 µ2 (22)
− (Ψ2/(1 − t)) Tr(K2 Σ1) − (Ψ2/(1 − t)) µ1ᵀ K2 µ1 − (Ψ2/(1 − t)) µ2ᵀ K2 µ2 + 1, (23)

where q1 is the escort of p1, and Ki and Ψi are defined from (µi, Σi, v) as in Example 2.

Figure 2: The t-divergence between: left: Bern(p1) and Bern(p2 = 0.5); middle: St(x; µ, 1, v) and St(x; 0, 1, v); right: St(x; 0, σ², v) and St(x; 0, 1, v), where v = 2/(t − 1) − 1.

5 Approximate Inference in the t-Exponential Family

In essence, deterministic approximate inference finds an approximate distribution, from an analytically tractable distribution family, which minimizes the relative entropy (e.g. the KL divergence in the exponential family) with the true distribution. Since the relative entropy is not symmetric, the results of minimizing D(p‖˜p) and D(˜p‖p) are different. In the main body of the paper we describe methods which minimize D(p‖˜p), where ˜p comes from the t-exponential family. Algorithms which minimize D(˜p‖p) are described in the supplementary material. Given an arbitrary probability distribution p(x), in order to obtain a good approximation ˜p(x; θ) in the t-exponential family, we minimize the relative t-entropy (19):

˜p = argmin_˜p Dt(p‖˜p) = argmin_˜p ∫ q(x) logt p(x) − q(x) logt ˜p(x; θ) dx. (24)

Here q(x) = (1/Z) p(x)^t denotes the escort of the original distribution p(x). Since

˜p(x; θ) = expt(⟨Φ(x), θ⟩ − gt(θ)), (25)

using the fact that ∇θ gt(θ) = E˜q[Φ(x)], one can take the derivative of (24) with respect to θ and obtain

Eq[Φ(x)] = E˜q[Φ(x)]. (26)

In other words, the approximate distribution can be obtained by matching the escort expectations of Φ(x) between the two distributions. The escort expectation matching in (26) is reminiscent of the moment matching in the Power-EP [5] or Fractional BP [23] algorithms, where the approximate distribution is obtained by

E˜p[Φ(x)] = E_{p^α ˜p^{1−α}/Z}[Φ(x)]. (27)

The main reason for using the t-divergence, however, is not to address computational or convergence issues, as is done in the case of Power EP/Fractional BP. In contrast, we use the generalized exponential family (the t-exponential family) to build our approximate models; in this context, the t-divergence plays the same role as the KL divergence does in the exponential family. To illustrate our ideas on a non-trivial problem, we apply escort expectation matching to the Bayes Point Machine (BPM) [3] with a Student's t-distribution prior.

6 Bayes Point Machine with Student's t-Prior

Let D = {(x1, y1), . . . , (xn, yn)} be the training data. Consider a linear model parametrized by the k-dimensional weight vector w. For each training data point (xi, yi), the conditional distribution of the label yi given xi and w is modeled as [3]:

ti(w) = p(yi | xi, w) = ε + (1 − 2ε) Θ(yi ⟨w, xi⟩), (28)

where Θ(z) is the step function (Θ(z) = 1 if z > 0 and Θ(z) = 0 otherwise) and ε is the labeling-noise rate. By making a standard i.i.d. assumption about the data, the posterior distribution can be written as

p(w | D) ∝ p0(w) ∏i ti(w), (29)

where p0(w) denotes a prior distribution. Instead of using a multivariate Gaussian prior as was done by Minka [3], we use a Student's t-prior, because we want to build robust models:

p0(w) = St(w; 0, I, v). (30)

As it turns out, the posterior p(w | D) is infeasible to obtain in practice.
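Equation (26) reduces the minimization to matching escort moments. The following minimal numerical sketch, for a 1-dimensional weight (k = 1), checks that the escort q ∝ p^t of a Student's t-distribution with t = 1 + 2/(v + k) has mean and variance exactly equal to the parameters (µ, σ²); the grid approximation, the function names, and all parameter values are our own illustration:

```python
from math import gamma, pi, sqrt

def student_t_pdf(x, mu, sigma2, v):
    """Density of a 1-d Student's t with mean mu, scale sigma2 and dof v."""
    c = gamma((v + 1) / 2) / (gamma(v / 2) * sqrt(v * pi * sigma2))
    return c * (1 + (x - mu) ** 2 / (v * sigma2)) ** (-(v + 1) / 2)

# For k = 1 the matching exponent is t = 1 + 2/(v + k).  The escort of a
# Student's t is again a Student's t, and its mean/variance recover (mu,
# sigma2) exactly: these are the escort moments matched in (26).
v, mu, sigma2 = 5.0, 0.7, 2.0
t = 1.0 + 2.0 / (v + 1.0)

n, lo, hi = 200001, -60.0, 60.0          # simple Riemann-sum grid
dx = (hi - lo) / (n - 1)
xs = [lo + i * dx for i in range(n)]
q = [student_t_pdf(x, mu, sigma2, v) ** t for x in xs]
z = sum(q) * dx
q = [qi / z for qi in q]                 # normalized escort distribution

escort_mean = sum(x * qi for x, qi in zip(xs, q)) * dx
escort_var = sum((x - escort_mean) ** 2 * qi for x, qi in zip(xs, q)) * dx
```

Up to grid error, `escort_mean` and `escort_var` come back as 0.7 and 2.0, i.e. the escort moments of the approximating Student's t are exactly the parameters one solves for.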
Therefore we will find a multivariate Student's t-distribution to approximate the true posterior:

p(w | D) ≈ ˜p(w) = St(w; ˜µ, ˜Σ, v). (31)

In order to obtain such a distribution, we implement the Bayesian online learning method [24], also known as Assumed Density Filtering [25]. (The extension to expectation propagation is similar to [3] and is omitted due to space limitations.) The main idea is to process the data points one by one and update the posterior by escort moment matching. Let ˜pi−1(w) denote the approximate posterior after processing (x1, y1), . . . , (xi−1, yi−1), and define

˜p0(w) = p0(w), (32)
pi(w) ∝ ˜pi−1(w) ti(w). (33)

Then the approximate posterior ˜pi(w) is updated as

˜pi(w) = St(w; µ(i), Σ(i), v) = argmin_{µ,Σ} Dt(pi(w) ‖ St(w; µ, Σ, v)). (34)

Because ˜pi(w) is a k-dimensional Student's t-distribution with v degrees of freedom, for which Φ(w) = [w, w wᵀ] and t = 1 + 2/(v + k) (see Example 5 in Appendix A), it turns out that we only need

∫ qi(w) w dw = ∫ ˜qi(w) w dw, and (35)
∫ qi(w) w wᵀ dw = ∫ ˜qi(w) w wᵀ dw. (36)

Here ˜qi(w) ∝ ˜pi(w)^t, qi(w) ∝ ˜pi−1(w)^t ˜ti(w), and

˜ti(w) = ti(w)^t = ε^t + ((1 − ε)^t − ε^t) Θ(yi ⟨w, xi⟩). (37)

Denote ˜pi−1(w) = St(w; µ(i−1), Σ(i−1), v) and ˜qi−1(w) = St(w; µ(i−1), v Σ(i−1)/(v + 2), v + 2) (also see Example 5), and make use of the following relations:

Z1 = ∫ ˜pi−1(w) ˜ti(w) dw (38)
   = ε^t + ((1 − ε)^t − ε^t) ∫_{−∞}^{z} St(x; 0, 1, v) dx, (39)
Z2 = ∫ ˜qi−1(w) ˜ti(w) dw (40)
   = ε^t + ((1 − ε)^t − ε^t) ∫_{−∞}^{z} St(x; 0, v/(v + 2), v + 2) dx, (41)
g = (1/Z2) ∇µ Z1 = yi α xi, (42)
G = (1/Z2) ∇Σ Z1 = −(1/2) yi α (⟨xi, µ(i−1)⟩ / (xiᵀ Σ(i−1) xi)) xi xiᵀ, (43)

where

α = ((1 − ε)^t − ε^t) St(z; 0, 1, v) / (Z2 √(xiᵀ Σ(i−1) xi)), and z = yi ⟨xi, µ(i−1)⟩ / √(xiᵀ Σ(i−1) xi).

Equations (39) and (41) are analogous to Eq. (5.17) in [3]. By assuming that a regularity condition holds (a fairly standard technical requirement, often proved using the Dominated Convergence Theorem; see e.g. Section 9.2 of Rosenthal [26]), ∫ and ∇ can be interchanged in ∇Z1 of (42) and (43). Combining with (38) and (40), we obtain the escort expectations of pi(w) from Z1 and Z2 (similar to Eqs. (5.12) and (5.13) in [3]):

Eq[w] = (1/Z2) ∫ ˜qi−1(w) ˜ti(w) w dw = µ(i−1) + Σ(i−1) g, (44)
Eq[w wᵀ] − Eq[w] Eq[w]ᵀ = (1/Z2) ∫ ˜qi−1(w) ˜ti(w) w wᵀ dw − Eq[w] Eq[w]ᵀ = r Σ(i−1) − Σ(i−1) (g gᵀ − 2G) Σ(i−1), (45)

where r = Z1/Z2 and Eq[·] denotes expectation with respect to qi(w). Since the mean and variance of the escort of ˜pi(w) are µ(i) and Σ(i) (again see Example 5), combining with (42) and (43) yields the updates

µ(i) = Eq[w] = µ(i−1) + α yi Σ(i−1) xi, (46)
Σ(i) = Eq[w wᵀ] − Eq[w] Eq[w]ᵀ = r Σ(i−1) − (Σ(i−1) xi) (α yi ⟨xi, µ(i)⟩ / (xiᵀ Σ(i−1) xi)) (Σ(i−1) xi)ᵀ. (47)

6.1 Results

In the above Bayesian online learning algorithm, every time a new data point xn comes in, p(θ | x1, . . . , xn−1) is used as a prior, and the posterior is computed by incorporating the likelihood p(xn | θ). The Student's t-distribution is a more conservative, or non-subjective, prior than the Gaussian distribution because of its heavy-tailed nature. More specifically, it means that the Student's t-based BPM can be more strongly influenced by newly arriving points. In many binary classification problems, it is assumed that the underlying classification hyperplane is fixed. However, in some real situations this assumption might not hold. In particular, in an online learning problem the incoming data sequence is time dependent, and it is possible that the underlying classifier is also time dependent. For a scenario like this, we require our learning machine to be able to adjust itself over time given the data.

Figure 3: The number of wrong signs between the posterior mean and the base weight vector. Left: case I; right: case II.

Table 1: The classification error of all the data points

          Gauss   v = 3   v = 10
Case I    0.337   0.242   0.254
Case II   0.150   0.130   0.128

In our experiment, we build a synthetic online dataset which mimics the above scenario, i.e. the underlying classification hyperplane changes during certain time intervals. Our sequence of data is composed of 4000 data points randomly generated from a 100-dimensional isotropic Gaussian distribution N(0, I). The sequence can be partitioned into 10 sub-sequences of length 400. During each sub-sequence s, there is a base weight vector wb(s) ∈ {−1, +1}^100.
Each point x(i) of the subsequence is labeled as y(i) = sign(⟨w(i), x(i)⟩), where w(i) = wb(s) + n and n is random noise drawn from [−0.1, +0.1]^100. The base weight vector wb(s) can be (I) generated totally at random, or (II) generated based on the previous base weight vector wb(s−1) in the following way:

wb(s)_j = Rand{−1, +1} if j ∈ [400s − 399, 400s], and wb(s)_j = wb(s−1)_j otherwise. (48)

Namely, only 10% of the base weight vector is changed relative to the previous base weight vector. We compare the Bayes Point Machine with a Student's t-prior (with v = 3 and v = 10) against the one with a Gaussian prior. For both methods, ε = 0.01. We report (1) for each point, the number of differing signs between the base weight vector and the mean of the posterior, and (2) the error rate over all points. According to Fig. 3 and Table 1, we find that the Bayes Point Machine with the Student's t-prior adjusts itself significantly faster than the one with the Gaussian prior, and it also ends up with better classification results. We believe this mostly results from its heavy-tailedness.

7 Discussion

In this paper, we investigated the convex duality of the log-partition function of the t-exponential family, and defined a new t-entropy. Using the t-divergence as a divergence measure, we proposed approximate inference in the t-exponential family by matching the expectations of the escort distributions. The results in this paper can be extended to the more general φ-exponential family of Naudts [15]. So far, the t-divergence based approximate inference has only been applied to a toy example; the focus of our future work is on utilizing this approach in various graphical models. In particular, it is important to investigate a new family of graphical models based on heavy-tailed distributions for applications involving noisy data.

References

[1] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman & Hall, 1995.
[2] M. J. Wainwright and M. I. Jordan.
Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
[3] T. Minka. Expectation Propagation for approximate Bayesian inference. PhD thesis, MIT Media Labs, Cambridge, USA, 2001.
[4] Y. Weiss. Comparing the mean field method and belief propagation for approximate inference in MRFs. In David Saad and Manfred Opper, editors, Advanced Mean Field Methods. MIT Press, 2001.
[5] T. Minka. Divergence measures and message passing. Report 173, Microsoft Research, 2005.
[6] C. Bishop, N. Lawrence, T. Jaakkola, and M. Jordan. Approximating posterior distributions in belief networks using mixtures. In Advances in Neural Information Processing Systems 10, 1997.
[7] G. Bouchard and O. Zoeter. Split variational inference. In Proc. Intl. Conf. Machine Learning, 2009.
[8] P. Grünwald and A. Dawid. Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Annals of Statistics, 32(4):1367–1433, 2004.
[9] C. R. Shalizi. Maximum likelihood estimation for q-exponential (Tsallis) distributions, 2007. URL http://arxiv.org/abs/math.ST/0701854.
[10] C. Tsallis. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys., 52:479–487, 1988.
[11] J. Naudts. Deformed exponentials and logarithms in generalized thermostatistics. Physica A, 316:323–334, 2002. URL http://arxiv.org/pdf/cond-mat/0203489.
[12] T. D. Sears. Generalized Maximum Entropy, Convexity, and Machine Learning. PhD thesis, Australian National University, 2008.
[13] A. Sousa and C. Tsallis. Student's t- and r-distributions: Unified derivation from an entropic variational principle. Physica A, 236:52–57, 1994.
[14] C. Tsallis, R. S. Mendes, and A. R. Plastino. The role of constraints within generalized nonextensive statistics. Physica A: Statistical and Theoretical Physics, 261:534–554, 1998.
[15] J. Naudts. Generalized thermostatistics based on deformed exponential and logarithmic functions.
Physica A, 340:32–40, 2004.
[16] J. Naudts. Generalized thermostatistics and mean-field theory. Physica A, 332:279–300, 2004.
[17] J. Naudts. Estimators, escort probabilities, and φ-exponential families in statistical physics. Journal of Inequalities in Pure and Applied Mathematics, 5(4), 2004.
[18] N. Ding and S. V. N. Vishwanathan. t-logistic regression. In Richard Zemel, John Shawe-Taylor, John Lafferty, Chris Williams, and Aron Culotta, editors, Advances in Neural Information Processing Systems 23, 2010.
[19] A. Rényi. On measures of information and entropy. In Proc. 4th Berkeley Symposium on Mathematics, Statistics and Probability, pages 547–561, 1960.
[20] J. D. Lafferty. Additive models, boosting, and inference for generalized divergences. In Proc. Annual Conf. Computational Learning Theory, volume 12, pages 125–133. ACM Press, New York, NY, 1999.
[21] I. Csiszár. Information type measures of differences of probability distribution and indirect observations. Studia Math. Hungarica, 2:299–318, 1967.
[22] K. Azoury and M. K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211–246, 2001. Special issue on Theoretical Advances in On-line Learning, Game Theory and Boosting.
[23] W. Wiegerinck and T. Heskes. Fractional belief propagation. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 438–445, 2003.
[24] M. Opper. A Bayesian approach to online learning. In On-line Learning in Neural Networks, pages 363–378. Cambridge University Press, 1998.
[25] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In UAI, 1998.
[26] J. S. Rosenthal. A First Look at Rigorous Probability Theory. World Scientific Publishing, 2006.
Boosting with Maximum Adaptive Sampling Charles Dubout Idiap Research Institute charles.dubout@idiap.ch François Fleuret Idiap Research Institute francois.fleuret@idiap.ch Abstract Classical Boosting algorithms, such as AdaBoost, build a strong classifier without concern about the computational cost. Some applications, in particular in computer vision, may involve up to millions of training examples and features. In such contexts, the training time may become prohibitive. Several methods exist to accelerate training, typically either by sampling the features or the examples used to train the weak learners. Even if those methods can precisely quantify the speed improvement they deliver, they offer no guarantee of being more efficient than any other, given the same amount of time. This paper aims at shedding some light on this problem, i.e. given a fixed amount of time, for a particular problem, which strategy is optimal in order to reduce the training loss the most. We apply this analysis to the design of new algorithms which estimate on the fly, at every iteration, the optimal trade-off between the number of samples and the number of features to look at in order to maximize the expected loss reduction. Experiments in object recognition with two standard computer vision data-sets show that the adaptive methods we propose outperform basic sampling and state-of-the-art bandit methods. 1 Introduction Boosting is a simple and efficient machine learning algorithm which provides state-of-the-art performance on many tasks. It consists of building a strong classifier as a linear combination of weak learners, by adding them one after another in a greedy manner. However, while textbook AdaBoost repeatedly selects each of them using all the training examples and all the features for a predetermined number of rounds, one is not obligated to do so and can instead choose only to look at a subset of examples and features.
For the sake of simplicity, we identify the space of weak learners and the feature space by considering all the thresholded versions of the latter. More sophisticated combinations of features can be envisioned in our framework by expanding the feature space. The computational cost of one iteration of Boosting is roughly proportional to the product of the number of candidate weak learners Q and the number of samples T considered, and the performance increases with both. More samples allow a more accurate estimation of the weak-learners' performance, and more candidate weak-learners increase the performance of the best one. Therefore, one wants at the same time to look at a large number of candidate weak-learners, in order to find a good one, but also needs to look at a large number of training examples, to get an accurate estimate of the weak-learner performances. As Boosting progresses, the candidate weak-learners tend to behave more and more similarly, as their performance degrades. While a small number of samples is initially sufficient to characterize the good weak-learners, doing so becomes more and more difficult, and the optimal values for a fixed product QT move to larger T and smaller Q. We focus in this paper on giving a clear mathematical formulation of the behavior described above. Our main analytical results are Equations (13) and (17) in § 3. They give exact expressions of the expected edge of the selected weak-learner – that is, the immediate loss reduction it provides in the considered Boosting iteration – as a function of the number T of samples and the number Q of weak-learners used in the optimization process. From this result we derive several algorithms described in § 4, and estimate their performance compared to standard and state-of-the-art baselines in § 5. 2 Related works The most computationally intensive operation performed in Boosting is the optimization of the weak-learners.
In the simplest version of the procedure, one must estimate, for each candidate weak-learner, a score dubbed the "edge", which requires looping through every training example. Reducing this computational cost is crucial to cope with high-dimensional feature spaces or very large training sets. This can be achieved through two main strategies: sampling the training examples, or sampling the feature space, since there is a direct relation between features and weak-learners. Sampling the training set was introduced historically to deal with weak-learners which cannot be trained with weighted samples. This procedure consists of sampling examples from the training set according to their Boosting weights, and of approximating a weighted average over the full set by a non-weighted average over the sampled subset. See § 3.1 for formal details. Such a procedure has been re-introduced recently for computational reasons [5, 8, 7], since the number of subsampled examples controls the trade-off between statistical accuracy and computational cost. Sampling the feature space is the central idea behind LazyBoost [6], and consists simply of replacing the brute-force exhaustive search over the full feature set by an optimization over a subset produced by sampling uniformly a predefined number of features. The natural redundancy of most families of features makes such a procedure particularly efficient. Recently developed methods rely on multi-armed bandit methods to balance properly the exploitation of features known to be informative and the exploration of new features [3, 4]. The idea behind those methods is to associate a bandit arm to every feature, and to see the loss reduction as a reward. Maximizing the overall reduction is achieved with a standard bandit strategy such as UCB [1] or Exp3.P [2]; see § 5.2 for details. These techniques suffer from three important drawbacks.
First, they make the assumption that the quality of a feature – the expected loss reduction of a weak-learner using it – is stationary. This goes against the underpinning of Boosting, which is that at any iteration the performance of the learners is relative to the sample weights, which evolve over the training (Exp3.P does not make such an assumption explicitly, but still relies only on the history of past rewards). Second, without additional knowledge about the feature space, the only structure they can exploit is the stationarity of individual features. Hence, improvement over random selection can only be achieved by sampling again the exact same features one has already seen in the past. We therefore only use those methods in a context where features come from multiple families. This allows us to model the quality, and to bias the sampling, at the level of families instead of individual features. Third, while those approaches exploit information about features to bias the sampling, hence making it more efficient and reducing the number of weak-learners required to achieve the same loss reduction, they do not explicitly aim at controlling the computational cost. In particular, there is no notion of varying the number of samples used for the estimation of the weak-learners' performance. 3 Boosting with noisy maximization We present in this section some analytical results to approximate a standard round of AdaBoost – or most other Boosting algorithms – by sampling both the training examples and the features used to build the weak-learners. Our main goal is to devise a way of selecting the optimal numbers of weak-learners Q and samples T to look at, so that their product is upper-bounded by a given constant, and that the expectation of the real performance of the selected weak-learner is maximal. In § 3.1 we recall standard notation for Boosting, the concept of the edge of a weak-learner, and how it can be approximated by a sampling of the training examples.
In § 3.2 we formalize the optimization of the learners and derive the expectation E[G∗] of the true edge G∗ of the selected weak-learner, and we illustrate these results in the Gaussian case in § 3.3.

Table 1: Notations
1{condition}: equal to 1 if the condition is true, 0 otherwise
N: number of training examples
F: number of weak-learners
K: number of families of weak-learners
T: number of examples subsampled from the full training set
Q: number of weak-learners sampled in the case of a single family of features
Q1, . . . , QK: number of weak-learners sampled from each one of the K families
(xn, yn) ∈ X × {−1, 1}: training examples
ωn ∈ R: weight of the nth training example in the considered Boosting iteration
Gq: true edge of the qth weak-learner
G∗: true edge of the selected weak-learner
e(Q, T): value of E[G∗], as a function of Q and T
e(Q1, . . . , QK, T): value of E[G∗], in the case of K families of features
Hq: approximated edge of the qth weak-learner, estimated from the T subsampled examples
∆q: estimation error in the approximated edge, Hq − Gq

As stated in the introduction, we will ignore the feature space itself, and only consider in the following sections the set of weak-learners built from it. Also, note that both the Boosting procedure and our methods are presented in the context of binary classification, but can be easily extended to a multi-class context using for example AdaBoost.MH, which we used in all our experiments.

3.1 Edge estimation with weighting-by-sampling

Given a training set

(xn, yn) ∈ X × {−1, 1}, n = 1, . . . , N, (1)

and a set H of weak-learners, the standard Boosting procedure consists of building a strong classifier

f(x) = Σ_i αi hi(x) (2)

by choosing the terms αi ∈ R and hi ∈ H in a greedy manner to minimize a loss estimated over the training samples. At every iteration, choosing the optimal weak-learner boils down to finding the weak-learner with the largest edge, which is the derivative of the loss reduction w.r.t. the weak-learner weight.
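The weighted edge of a candidate weak-learner, and the weighting-by-sampling approximation of it described in § 2, can be sketched in a few lines of Python; the toy 1-d data, the stump, and all constants below are our own illustration, not from the paper:

```python
import random

def edge(h, xs, ys, ws):
    """Exact edge: weighted correlation between h's outputs and the labels,
    with the Boosting weights normalized to sum to one."""
    z = sum(ws)
    return sum(w / z * y * h(x) for x, y, w in zip(xs, ys, ws))

def sampled_edge(h, xs, ys, ws, T, rng):
    """Weighting-by-sampling: draw T indices with probability proportional
    to the weights, then take an unweighted average of y * h(x)."""
    idx = rng.choices(range(len(xs)), weights=ws, k=T)
    return sum(ys[n] * h(xs[n]) for n in idx) / T

# Toy data: 1-d points with slightly noisy labels, one stump at threshold 0.
rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(2000)]
ys = [1 if x + 0.1 * rng.uniform(-1, 1) > 0 else -1 for x in xs]
ws = [rng.uniform(0.5, 1.5) for _ in range(2000)]
stump = lambda x: 1 if x > 0 else -1

G = edge(stump, xs, ys, ws)             # exact edge, cost O(N)
H = sampled_edge(stump, xs, ys, ws, T=500, rng=rng)  # approximation, cost O(T)
```

The sampled estimate is unbiased for the exact edge, and its fluctuation around it shrinks as T grows, which is exactly the trade-off between statistical accuracy and computational cost discussed above.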
The higher this value, the more the loss can be reduced locally, and thus the better the weak-learner. The edge is a linear function of the responses of the weak-learner over the samples:

G(h) = Σ_{n=1}^{N} yn ωn h(xn), (3)

where the ωn depend on the current responses of f over the xn. We consider without loss of generality that they have been normalized such that Σ_{n=1}^{N} ωn = 1. Given an arbitrary distribution η over the sample indexes, with a non-zero mass over every index, we can rewrite the edge as

G(h) = E_{N∼η}[ (y_N ω_N / η(N)) h(x_N) ], (4)

which, for η(n) = ωn, gives

G(h) = E_{N∼η}[ y_N h(x_N) ]. (5)

The idea of weighting-by-sampling consists of replacing the expectation in that expression with an approximation obtained by sampling. Let N1, . . . , NT be i.i.d. of distribution η; we define the approximated edge as

H(h) = (1/T) Σ_{t=1}^{T} y_{Nt} h(x_{Nt}), (6)

which follows a binomial distribution centered on the true edge, with a variance decreasing with the number of samples T. It is accurately modeled with

H(h) ∼ N(G, (1 + G)(1 − G)/T). (7)

Figure 1: To each of the Q weak-learners corresponds a real edge Gq computed over all the training examples, and an approximated edge Hq computed from a subsampling of T training examples. The approximated edge fluctuates around the true value, with a binomial distribution. The Boosting algorithm selects the weak-learner with the highest approximated edge, which has a real edge G∗. On this figure, the largest approximated edge is H1, hence the real edge G∗ of the selected weak-learner is equal to G1, which is less than G3.

3.2 Formalization of the noisy maximization

Let G1, . . . , GQ be a series of independent, real-valued random variables standing for the true edges of Q weak-learners sampled randomly. Let ∆1, . . .
, ∆Q be a series of independent, real-valued random variables standing for the noise in the estimation of the edges due to the sampling of only T training examples, and finally, for all q, let Hq = Gq + ∆q be the approximated edge. We define G∗ as the true edge of the weak-learner which has the highest approximated edge:

G∗ = G_{argmax_{1≤q≤Q} Hq}. (8)

This quantity is random due to both the sampling of the weak-learners and the sampling of the training examples. The quantity we want to optimize is e(Q, T) = E[G∗], the expectation of the true edge of the selected learner, which increases with both Q and T. A higher Q increases the number of terms in the maximization of Equation (8), and a higher T reduces the variance of the ∆s, ensuring that G∗ is close to maxq Gq. In practice, if the variance of the ∆s is of the order of, or higher than, the variance of the Gs, the maximization is close to a pure random selection, and looking at many weak-learners is useless. We have:

e(Q, T) = E[G∗] (9)
= E[ G_{argmax_{1≤q≤Q} Hq} ] (10)
= Σ_{q=1}^{Q} E[ Gq Π_{u≠q} 1{Hq > Hu} ] (11)
= Σ_{q=1}^{Q} E[ E[ Gq Π_{u≠q} 1{Hq > Hu} | Hq ] ] (12)
= Σ_{q=1}^{Q} E[ E[Gq | Hq] Π_{u≠q} E[ 1{Hq > Hu} | Hq ] ]. (13)

If the distributions of the Gq and the ∆q are Gaussians or mixtures of Gaussians, we can derive analytical expressions for both E[Gq | Hq] and E[1{Hq > Hu} | Hq], and compute the value of e(Q, T) efficiently. In the case of multiple families of weak-learners, it makes sense to model the distributions of the edges Gq separately for each family, as they often have a more homogeneous behavior inside a family than across families. We can easily adapt the framework developed in the previous sections to that case, and we define e(Q1, . . . , QK, T), the expected edge of the selected weak-learner when we sample T examples from the training set and Qk weak-learners from the kth family.

3.3 Gaussian case

As an illustrative example, we consider here the case where the Gq, the ∆q, and hence also the Hq all follow Gaussian distributions.
We take G_q \sim \mathcal{N}(0, 1) and \Delta_q \sim \mathcal{N}(0, \sigma^2), and obtain:

    e(Q, T) = Q \, E\left[ E[G_1 \mid H_1] \prod_{u \ne 1} E\left[ \mathbf{1}\{H_1 > H_u\} \,\middle|\, H_1 \right] \right]    (14)
            = Q \, E\left[ \frac{H_1}{\sigma^2 + 1} \, \Phi\left( \frac{H_1}{\sqrt{\sigma^2 + 1}} \right)^{Q-1} \right]    (15)
            = \frac{1}{\sqrt{\sigma^2 + 1}} \, E\left[ Q \, G_1 \, \Phi(G_1)^{Q-1} \right]    (16)
            = \frac{1}{\sqrt{\sigma^2 + 1}} \, E\left[ \max_{1 \le q \le Q} G_q \right],    (17)

where \Phi stands for the cumulative distribution function of the unit Gaussian, and \sigma depends on T. See Figure 2 for an illustration of the behavior of e(Q, T) for two different variances of the G_q's. There is no reason to expect the distribution of the G_q's to be Gaussian, contrary to the \Delta_q's, as shown by Equation (7), but this is not a problem, as it can always be approximated by a mixture, for which we can still derive analytical expressions, even if the G_q's or the \Delta_q's have different distributions for different q's.

4 Adaptive Boosting Algorithms

We propose here several new algorithms to sample features and training examples adaptively at each Boosting step. While all the formulation above deals with uniform sampling of weak-learners, we actually sample features and optimize thresholds to build stumps. We observed that after a small number of Boosting iterations, the Gaussian model of Equation (7) is sufficiently accurate.

4.1 Maximum Adaptive Sampling

At every Boosting step, our first algorithm, MAS Naive, models G_q with a Gaussian mixture model fitted on the edges estimated at the previous iteration, computes from that density model the pair (Q, T) maximizing e(Q, T), samples the corresponding number of examples and features, and keeps the weak-learner with the highest approximated edge. The algorithm MAS 1.Q takes into account the decomposition of the feature set into K families of feature extractors. It models the distributions of the G_q's separately, estimating the distribution of each on a small number of features and examples sampled at the beginning of each iteration, chosen so as to account for 10% of the total cost. From these models, it optimizes Q, T and the index l of the family to sample from, to maximize e(Q \mathbf{1}\{l=1\}, \ldots, Q \mathbf{1}\{l=K\}, T).
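The identity in Equation (17) invites a quick numerical sanity check. Below is a minimal Monte Carlo sketch, not from the paper, comparing a direct simulation of E[G^*] with E[max_q G_q]/\sqrt{\sigma^2+1}; the values of Q, \sigma and the trial count are arbitrary choices for illustration:

```python
# Monte Carlo sanity check of the Gaussian-case identity (Eq. 17):
# with G_q ~ N(0, 1) and Delta_q ~ N(0, sigma^2),
#   E[G*] = E[max_q G_q] / sqrt(sigma^2 + 1).
import math
import random

def selected_true_edge(Q, sigma, rng):
    """One draw of G*: the true edge of the learner whose noisy edge is best."""
    G = [rng.gauss(0.0, 1.0) for _ in range(Q)]
    H = [g + rng.gauss(0.0, sigma) for g in G]
    return G[max(range(Q), key=H.__getitem__)]

def expected_max_gaussian(Q, rng, trials):
    """Monte Carlo estimate of E[max of Q i.i.d. standard normals]."""
    return sum(max(rng.gauss(0.0, 1.0) for _ in range(Q))
               for _ in range(trials)) / trials

rng = random.Random(0)
Q, sigma, trials = 10, 1.0, 100_000
e_star = sum(selected_true_edge(Q, sigma, rng) for _ in range(trials)) / trials
prediction = expected_max_gaussian(Q, rng, trials) / math.sqrt(sigma ** 2 + 1)
print(e_star, prediction)  # the two estimates should agree closely
```

With noiseless estimates (\sigma = 0) the two quantities coincide by construction; the interest of the identity is that estimation noise only rescales the achievable expected edge by 1/\sqrt{\sigma^2 + 1}.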
Hence, in a given Boosting step, it does not mix weak-learners based on features from different families. Finally, MAS Q.1 similarly models the distributions of the G_q's, but it optimizes Q_1, \ldots, Q_K, T greedily, starting from Q_1 = 0, \ldots, Q_K = 0, and iteratively incrementing one of the Q_l so as to maximize e(Q_1, \ldots, Q_K, T).

Figure 2: Simulation of the expectation of G^* in the case where both the G_q's and the \Delta_q's follow Gaussian distributions. Top: G_q \sim \mathcal{N}(0, 10^{-2}). Bottom: G_q \sim \mathcal{N}(0, 10^{-4}). In both simulations \Delta_q \sim \mathcal{N}(0, 1/T). Left: expectation of G^* vs. the number of sampled weak-learners Q and the number of samples T. Right: the same value as a function of Q alone, for different fixed costs (product of the number of examples T and the number of features Q). As these graphs illustrate, the optimal value for Q is greater for larger variances of the G_q's. In such a case the G_q's are more spread out, and identifying the largest one can be done despite a large noise in the estimations, hence with a limited number of samples.

4.2 Laminating

The fourth algorithm we have developed tries to reduce the requirement for a density model of the G_q's. At every Boosting step, it iteratively reduces the number of considered weak-learners and increases the number of samples. More precisely: given a fixed Q_0 and T_0, at every Boosting iteration, the Laminating first samples Q_0 weak-learners and T_0 training examples. Then, it computes the approximated edges and keeps the Q_0/2 best weak-learners.
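A minimal sketch of the full Laminating step follows. The helper `edges(h, sample)`, returning the approximated edge of weak-learner `h` on a subsample, and the data representation are illustrative assumptions, not the paper's implementation:

```python
# Sketch of one Boosting step of the Laminating algorithm: iteratively
# halve the candidate set of weak-learners while doubling the subsample,
# so each round costs roughly Q0 * T0.
import random

def laminate(learners, examples, Q0, T0, edges):
    """Return the surviving weak-learner after the halving/doubling rounds."""
    candidates = random.sample(learners, Q0)
    T = T0
    while len(candidates) > 1:
        sample = [random.choice(examples) for _ in range(T)]
        scored = sorted(candidates, key=lambda h: edges(h, sample), reverse=True)
        candidates = scored[: max(1, len(candidates) // 2)]  # keep the best half
        T *= 2  # double the number of examples for the next round
    return candidates[0]
```

Since the candidate set halves while the subsample doubles, every round has the same cost Q_0 T_0, and there are at most \log_2(Q_0) rounds.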
If more than one remains, it samples 2T_0 examples and re-iterates. The cost of each iteration is constant, equal to Q_0 T_0, and there are at most \log_2(Q_0) of them, leading to an overall cost of O(\log_2(Q_0) Q_0 T_0). In the experiments, we equalize the computational cost with the MAS approaches parametrized by T, Q by forcing \log_2(Q_0) Q_0 T_0 = T Q.

5 Experiments

We demonstrate the validity of our approach for pattern recognition on two standard data-sets, using multiple types of image features. We compare our algorithms both to different flavors of uniform sampling and to state-of-the-art bandit-based methods, all tuned to deal properly with multiple families of features.

5.1 Datasets and features

For the first set of experiments we use the well-known MNIST handwritten digits database [10], containing respectively 60,000/10,000 train/test grayscale images of size 28 × 28 pixels, divided into ten classes. We use features computed by multiple image descriptors, leading to a total of 16,451 features. Those descriptors can be broadly divided into two categories. (1) Image transforms: identity, gradient image, Fourier and Haar transforms, local binary patterns (LBP/iLBP). (2) Histograms: sums of the intensities in random image patches, histograms of (oriented and non-oriented) gradients at different locations, Haar-like features.

For the second set of experiments we use the challenging CIFAR-10 data-set [9], a subset of the 80 million tiny images data-set. It contains respectively 50,000/10,000 train/test color images of size 32 × 32 pixels, also divided into ten classes. We call it challenging because state-of-the-art results without using additional training data barely reach 65% accuracy. We directly use as features the same image descriptors as described above for MNIST, plus additional versions of some of them making use of color information.

5.2 Baselines

We first define three baselines extending LazyBoost in the context of multiple feature families.
The most naive strategy one could think of, which we call Uniform Naive, simply ignores the families and picks features uniformly at random. This strategy does not properly distribute the sampling among the families; thus, if one of them had a far greater cardinality than the others, all features would come from it. We define Uniform 1.Q to pick one of the feature families at random and then sample the Q features from that single family, and Uniform Q.1 to pick Q families of features uniformly at random and then pick one feature uniformly in each family.

The second family of baselines we have tested bias their sampling at every Boosting iteration according to the edges observed in the previous iterations, and balance the exploitation of families of features known to perform well with the exploration of new families by using bandit algorithms [3, 4]. We use three such baselines (UCB, Exp3.P, ϵ-greedy), which differ only in the underlying bandit algorithm used. We tune the meta-parameters of these techniques (namely the scale of the reward and the exploration/exploitation trade-off) by training them multiple times over a large range of parameters and keeping only the results of the run with the smallest final Boosting loss. Hence, their computational cost is around one order of magnitude higher than for our methods in the experiments.

Table 2: Mean and standard deviation of the Boosting loss (log10) on the two data-sets and for each method, estimated on ten randomized runs. Methods highlighted with a ⋆ require the tuning of meta-parameters, which have been optimized by training fully multiple times.

Nb. of stumps | Uniform Naive | Uniform 1.Q | Uniform Q.1 | UCB⋆ | Exp3.P⋆ | ϵ-greedy⋆ | MAS Naive | MAS 1.Q | MAS Q.1 | Laminating
MNIST
10     | -0.34 (0.01) | -0.33 (0.02) | -0.35 (0.02) | -0.33 (0.01) | -0.32 (0.01) | -0.34 (0.02) | -0.51 (0.02) | -0.50 (0.02) | -0.52 (0.01) | -0.43 (0.00)
100    | -0.80 (0.01) | -0.73 (0.03) | -0.81 (0.01) | -0.73 (0.01) | -0.73 (0.02) | -0.73 (0.03) | -1.00 (0.01) | -1.00 (0.01) | -1.03 (0.01) | -1.01 (0.01)
1,000  | -1.70 (0.01) | -1.45 (0.02) | -1.68 (0.01) | -1.64 (0.01) | -1.52 (0.02) | -1.60 (0.04) | -1.83 (0.01) | -1.80 (0.01) | -1.86 (0.00) | -1.99 (0.01)
10,000 | -5.32 (0.01) | -3.80 (0.02) | -5.04 (0.01) | -5.26 (0.01) | -5.35 (0.04) | -5.38 (0.09) | -5.35 (0.01) | -5.05 (0.02) | -5.30 (0.00) | -6.14 (0.01)
CIFAR-10
10     | -0.26 (0.00) | -0.25 (0.01) | -0.26 (0.00) | -0.25 (0.01) | -0.25 (0.01) | -0.26 (0.00) | -0.28 (0.00) | -0.28 (0.00) | -0.28 (0.01) | -0.28 (0.00)
100    | -0.33 (0.00) | -0.33 (0.01) | -0.34 (0.00) | -0.33 (0.00) | -0.33 (0.00) | -0.33 (0.00) | -0.35 (0.00) | -0.35 (0.00) | -0.37 (0.01) | -0.37 (0.00)
1,000  | -0.47 (0.00) | -0.46 (0.00) | -0.48 (0.00) | -0.48 (0.00) | -0.47 (0.00) | -0.48 (0.00) | -0.48 (0.00) | -0.48 (0.00) | -0.49 (0.01) | -0.50 (0.00)
10,000 | -0.93 (0.00) | -0.85 (0.00) | -0.91 (0.00) | -0.90 (0.00) | -0.91 (0.00) | -0.91 (0.00) | -0.93 (0.00) | -0.88 (0.00) | -0.89 (0.01) | -0.90 (0.00)

5.3 Results and analysis

We report the results of the proposed algorithms against the baselines introduced in § 5.2 on the two data-sets of § 5.1, using the standard train/test cuts, in Tables 2 and 3. We ran each configuration ten times and report the mean and standard deviation of each. We set the maximum cost of all the algorithms to 10N, setting Q = 10 and T = N for the baselines, as this configuration leads to the best results after 10,000 Boosting rounds of AdaBoost.MH. These results illustrate the efficiency of the proposed methods.
For 10, 100 and 1,000 weak-learners, both the MAS and the Laminating algorithms perform far better than the baselines. Performance tends to become similar at 10,000 stumps, which is an unusually large number. As stated in § 5.2, the meta-parameters of the bandit methods have been optimized by running the training fully ten times, with the corresponding computational effort.

Table 3: Mean and standard deviation of the test error (in percent) on the two data-sets and for each method, estimated on ten randomized runs. Methods highlighted with a ⋆ require the tuning of meta-parameters, which have been optimized by training fully multiple times.

Nb. of stumps | Uniform Naive | Uniform 1.Q | Uniform Q.1 | UCB⋆ | Exp3.P⋆ | ϵ-greedy⋆ | MAS Naive | MAS 1.Q | MAS Q.1 | Laminating
MNIST
10     | 51.18 (4.22) | 54.37 (7.93) | 48.15 (3.66) | 52.86 (4.75) | 53.80 (4.53) | 51.37 (6.35) | 25.91 (2.04) | 25.94 (2.57) | 25.73 (1.33) | 35.70 (2.35)
100    |  8.95 (0.41) | 11.64 (1.06) |  8.69 (0.48) | 11.39 (0.53) | 11.58 (0.93) | 11.59 (1.12) |  4.87 (0.29) |  4.78 (0.16) |  4.54 (0.21) |  4.85 (0.16)
1,000  |  1.75 (0.06) |  2.37 (0.12) |  1.76 (0.08) |  1.80 (0.08) |  2.18 (0.14) |  1.83 (0.16) |  1.50 (0.06) |  1.59 (0.08) |  1.45 (0.04) |  1.34 (0.08)
10,000 |  0.94 (0.06) |  1.13 (0.03) |  0.94 (0.04) |  0.90 (0.05) |  0.84 (0.02) |  0.85 (0.07) |  0.92 (0.03) |  0.97 (0.05) |  0.94 (0.04) |  0.85 (0.04)
CIFAR-10
10     | 76.27 (0.97) | 78.57 (1.94) | 76.00 (1.60) | 77.04 (1.65) | 77.51 (1.50) | 77.13 (1.15) | 71.54 (0.69) | 71.13 (0.49) | 70.63 (0.34) | 71.54 (1.06)
100    | 56.94 (1.01) | 58.33 (1.30) | 54.48 (0.64) | 57.49 (0.46) | 58.47 (0.81) | 58.19 (0.83) | 53.94 (0.55) | 52.79 (0.09) | 50.15 (0.64) | 50.44 (0.68)
1,000  | 39.13 (0.61) | 39.97 (0.37) | 37.70 (0.38) | 38.13 (0.30) | 39.23 (0.31) | 38.36 (0.72) | 38.79 (0.28) | 38.31 (0.27) | 36.95 (0.25) | 36.39 (0.58)
10,000 | 31.83 (0.29) | 31.16 (0.29) | 30.56 (0.30) | 30.55 (0.24) | 30.39 (0.22) | 29.96 (0.45) | 32.07 (0.27) | 31.36 (0.13) | 32.51 (0.38) | 31.17 (0.22)

On the MNIST data-set, when adding 10 or 100 weak-learners, our methods roughly divide the error rate by two, and still improve it by ≃30% with 1,000 stumps.
The loss reduction follows the same pattern. The CIFAR data-set is a very difficult pattern recognition problem. Still, our algorithms perform substantially better than the baselines for 10 and 100 weak-learners, gaining more than 10% in the test error rates, and behave similarly to the baselines for larger numbers of stumps. As stated in § 1, the optimal values for a fixed product QT move to larger T and smaller Q. For instance, on the MNIST data-set with MAS Naive, averaging over ten randomized runs, for respectively 10, 100, 1,000 and 10,000 stumps, T = 1,580, 13,030, 37,100, 43,600, and Q = 388, 73, 27, 19. We obtain similar and consistent results across settings. The overhead of the MAS algorithms compared to the Uniform ones is small: in our experiments, taking into account the time spent computing features, it is approximately 0.2% for MAS Naive, 2% for MAS 1.Q and 8% for MAS Q.1. The Laminating algorithm has no overhead. The poor behavior of the bandit methods for small numbers of stumps may be related to the large variations of the sample weights during the first iterations of Boosting, which go against the underlying assumption of stationarity of the loss reduction.

6 Conclusion

We have improved Boosting by modeling the statistical behavior of the weak-learners' edges. This allowed us to maximize the loss reduction under strict control of the computational cost. Experiments demonstrate that the algorithms perform well on real-world pattern-recognition tasks. Extensions of the proposed methods could be investigated along two axes. The first is to blur the boundary between the MAS procedures and the Laminating by deriving an analytical model of the loss reduction for generalized sampling procedures: instead of doubling the number of samples and halving the number of weak-learners, we could adapt both set sizes optimally.
The second is to add a bandit-like component to our methods by adding a variance term related to the lack of samples, and their obsolescence in the Boosting process. This would account for the degrading density estimation when weak-learner families have not been sampled for a while, and induce an exploratory sampling which may be missing in the current algorithms.

Acknowledgments

This work was supported by the European Community's 7th Framework Programme under grant agreement 247022 – MASH, and by the Swiss National Science Foundation under grant 200021124822 – VELASH. We would also like to thank Dr. Robert B. Israel, Associate Professor Emeritus at the University of British Columbia, for his help on the derivation of the expectation of the true edge of the weak-learner with the highest approximated edge (Equations (9) to (13)).

References

[1] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[2] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2003.
[3] R. Busa-Fekete and B. Kegl. Accelerating AdaBoost using UCB. JMLR W&CP, Jan 2009.
[4] R. Busa-Fekete and B. Kegl. Fast Boosting using adversarial bandits. In ICML, 2010.
[5] N. Duffield, C. Lund, and M. Thorup. Priority sampling for estimation of arbitrary subset sums. J. ACM, 54, December 2007.
[6] G. Escudero, L. Màrquez, and G. Rigau. Boosting applied to word sense disambiguation. Machine Learning: ECML 2000, pages 129–141, 2000.
[7] F. Fleuret and D. Geman. Stationary features and cat detection. Journal of Machine Learning Research (JMLR), 9:2549–2578, 2008.
[8] Z. Kalal, J. Matas, and K. Mikolajczyk. Weighted sampling for large-scale Boosting. British Machine Vision Conference, 2008.
[9] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Master's thesis, 2009. http://www.cs.toronto.edu/~kriz/cifar.html.
[10] Y.
LeCun and C. Cortes. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
Inverting Grice's Maxims to Learn Rules from Natural Language Extractions

Mohammad Shahed Sorower, Thomas G. Dietterich, Janardhan Rao Doppa, Walker Orr, Prasad Tadepalli, and Xiaoli Fern
School of Electrical Engineering and Computer Science
Oregon State University
Corvallis, OR 97331
{sorower,tgd,doppa,orr,tadepall,xfern}@eecs.oregonstate.edu

Abstract

We consider the problem of learning rules from natural language text sources. These sources, such as news articles and web texts, are created by a writer to communicate information to a reader, where the writer and reader share substantial domain knowledge. Consequently, the texts tend to be concise and mention the minimum information necessary for the reader to draw the correct conclusions. We study the problem of learning domain knowledge from such concise texts, which is an instance of the general problem of learning in the presence of missing data. However, unlike standard approaches to missing data, in this setting we know that facts are more likely to be missing from the text in cases where the reader can infer them from the facts that are mentioned combined with the domain knowledge. Hence, we can explicitly model this "missingness" process and invert it via probabilistic inference to learn the underlying domain knowledge. This paper introduces a mention model that models the probability of facts being mentioned in the text based on what other facts have already been mentioned and domain knowledge in the form of Horn clause rules. Learning must simultaneously search the space of rules and learn the parameters of the mention model. We accomplish this via an application of Expectation Maximization within a Markov Logic framework. An experimental evaluation on synthetic and natural text data shows that the method can learn accurate rules and apply them to new texts to make correct inferences. Experiments also show that the method outperforms the standard EM approach that assumes mentions are missing at random.
1 Introduction

The immense volume of textual information available on the web provides an important opportunity and challenge for AI: can we develop methods that learn domain knowledge by reading natural texts such as news articles and web pages? We would like to acquire at least two kinds of domain knowledge: concrete facts and general rules. Concrete facts can be extracted as logical relations or as tuples to populate a database. Systems such as Whirl [3], TextRunner [5], and NELL [1] learn extraction patterns that can be applied to text to extract instances of relations. General rules can be acquired in two ways. First, they may be stated explicitly in the text, particularly in tutorial texts. Second, they can be acquired by generalizing from the extracted concrete facts. In this paper, we focus on the latter setting: given a database of literals extracted from natural language texts (e.g., newspaper articles), we seek to learn a set of probabilistic Horn clauses that capture general rules.

Unfortunately for rule learning algorithms, natural language texts are incomplete. The writer tends to mention only enough information to allow the reader to easily infer the remaining facts from shared background knowledge. This aspect of economy in language was first pointed out by Grice [7] in his maxims of cooperative conversation (see Table 1). For example, consider the following sentence that discusses a National Football League (NFL) game:

Table 1: Grice's Conversational Maxims
1. Be truthful: do not say falsehoods.
2. Be concise: say as much as necessary, but no more.
3. Be relevant.
4. Be clear.

"Given the commanding lead of Kansas City on the road, Denver Broncos' 14-10 victory surprised many."

This mentions that Kansas City is the away team and that the Denver Broncos won the game, but does not mention that Kansas City lost the game or that the Denver Broncos were the home team.
Of course these facts can be inferred from domain knowledge rules such as the rule that “if one team is the winner, the other is the loser (and vice versa)” and the rule “if one team is the home team, the other is the away team (and vice versa)”. This is an instance of the second maxim. Another interesting case arises when shared knowledge could lead the reader to an incorrect inference: “Ahmed Said Khadr, an Egyptian-born Canadian, was killed last October in Pakistan.” This explicitly mentions that Khadr is Canadian, because otherwise the reader would infer that he was Egyptian based on the domain knowledge rule “if a person is born in a country, then the person is a citizen of that country”. Grice did not discuss this case, but we can state this as a corollary of the first maxim: Do not by omission mislead the reader into believing falsehoods. This paper formalizes the first two maxims, including this corollary, and then shows how to apply them to learn probabilistic Horn clause rules from propositions extracted from news stories. We show that rules learned this way are able to correctly infer more information from incomplete texts than a baseline approach that treats propositions in news stories as missing at random. The problem of learning rules from extracted texts has been studied previously [11, 2, 17]. These systems rely on finding documents in which all of the facts participating in a rule are mentioned. If enough such documents can be found, then standard rule learning algorithms can be applied. A drawback of this approach is that it is difficult to learn rules unless there are many documents that provide such complete training examples. The central hypothesis of our work is that by explicitly modeling the process by which facts are mentioned, we can learn rules from sets of documents that are smaller and less complete. The line of work most similar to this paper is that of Michael and Valiant [10, 9] and Doppa, et al. [4]. 
They study learning hard (non-probabilistic) rules from incomplete extractions. In contrast with our approach of learning explicit probabilistic models, they take the simpler approach of implicitly inverting the conversational maxims when counting evidence for a proposed rule. Specifically, they count an example as consistent with a proposed rule unless it explicitly contradicts the rule. Although this approach is much less expensive than the probabilistic approach described in this paper, it has difficulty with soft (probabilistic) rules. To handle these, the authors sort the rules by their scores and keep high-scoring rules even if they have some contradictions. Such an approach can learn "almost hard" rules, but will have difficulty with rules that are highly probabilistic (e.g., that the home team is somewhat more likely to win a game than the away team). Our method has additional advantages. First, it provides a more general framework that can support alternative sets of conversational maxims, such as mentions based on saliency, recency (prefer to mention a more recent event rather than an older event), and surprise (prefer to mention a less likely event rather than a more likely event). Second, when applied to new articles, it assigns probabilities to alternative interpretations, which is important for subsequent processing. Third, it provides an elegant, first-principles account of the process, which can then be compiled to yield more efficient learning and reasoning procedures.

2 Technical Approach

We begin with a logical formalization of the Gricean maxims. Then we present our implementation of these maxims in Markov Logic [15]. Finally, we describe a method for probabilistically inverting the maxims to learn rules from textual mentions.

Formalizing the Gricean maxims. Consider a writer and a reader who share domain knowledge K. Suppose that when told a fact F, the reader will infer an additional fact G.
We will write this as (K, MENTION(F) ⊢_reader G), where ⊢_reader represents the inference procedure of the reader and MENTION is a modal operator that captures the action of mentioning a fact in the text. Note that the reader's inference procedure is not standard first-order deduction, but instead is likely to be incomplete and non-monotonic or probabilistic. With this notation, we can formalize the first two Gricean maxims as follows:

• Mention true facts / don't lie:

    F ⇒ MENTION(F)    (1)
    MENTION(F) ⇒ F    (2)

The first formula is overly strong, because it requires the writer to mention all true facts. Below, we will show how to use Markov Logic weights to weaken it. The second formula captures a positive version of "don't lie": if something is mentioned, then it is true. For news articles, it does not need to be weakened probabilistically.

• Don't mention facts that can be inferred by the reader:

    MENTION(F) ∧ G ∧ (K, MENTION(F) ⊢_reader G) ⇒ ¬MENTION(G)

• Mention facts needed to prevent incorrect reader inferences:

    MENTION(F) ∧ ¬G ∧ (K, MENTION(F) ⊢_reader G) ∧ H ∧ (K, MENTION(F ∧ H) ⊬_reader G) ⇒ MENTION(H)

In this formula, H is a true fact that, when combined with F, is sufficient to prevent the reader from inferring G.

Implementation in Markov Logic. Although this formalization is very general, it is difficult to apply directly because of the embedded invocation of the reader's inference procedure and the use of the MENTION modality. Consequently, we sidestep this problem by manually "compiling" the maxims into ordinary first-order Markov Logic as follows. The notation w : indicates that a rule has weight w in Markov Logic. The first maxim is encoded in terms of fact-to-mention and mention-to-fact rules. For each predicate P in the domain of discourse, we write

    w1 : FACT P ⇒ MENTION P
    w2 : MENTION P ⇒ FACT P.
Suppose that the shared knowledge K contains the Horn clause rule P ⇒ Q. Then we encode the positive form of the second maxim in terms of the mention-to-mention rule:

    w3 : MENTION P ∧ FACT Q ⇒ ¬MENTION Q

One might expect that we could encode the faulty-inference-by-omission corollary as

    w4 : MENTION P ∧ ¬FACT Q ⇒ MENTION NOTQ,

where we have chosen MENTION NOTQ to play the role of H in axiom 2. However, in news stories there is a strong preference for H to be a positive assertion rather than a negative assertion. For example, in the citizenship case, it would be unnatural to say "Ahmed Said Khadr, an Egyptian-born non-Egyptian...". In particular, because CITIZENOF(p, c) is generally a function from p to c (i.e., a person is typically a citizen of only one country), it suffices to mention CITIZENOF(Khadr, Canada) to prevent the faulty inference CITIZENOF(Khadr, Egypt). Hence, for rules of the form P(x, y) ⇒ Q(x, y), where Q is a function from its first to its second argument, we can implement the inference-by-omission maxim as

    w5 : MENTION P(x, y) ∧ FACT Q(x, z) ∧ (y ≠ z) ⇒ MENTION Q(x, z).

Finally, the shared knowledge P ⇒ Q is represented by the fact-to-fact rule:

    w6 : FACT P ⇒ FACT Q

In Markov Logic, each of these rules is assigned a (learned) weight, which can be viewed as the cost of violating the rule. The probability of a world ω is proportional to

    exp( \sum_j w_j I[rule j is satisfied by ω] ),

where j iterates over all groundings of the Markov Logic rules in world ω, and I[φ] is 1 if φ is true and 0 otherwise. An advantage of Markov Logic is that it allows us to define a probabilistic model even when there are contradictions and cycles in the logical rules. Hence, we can include both a rule that says "if the home team is mentioned, then the away team is not mentioned" and rules that say "the home team is always mentioned" and "the away team is always mentioned". Obviously a possible world ω cannot satisfy all of these rules.
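To make these semantics concrete, here is a toy computation in pure Python (not Alchemy) of the world distribution induced by three mutually contradictory weighted rules; the two ground atoms and the weights are invented for illustration:

```python
# Toy Markov Logic semantics: P(world) ∝ exp(sum_j w_j * I[rule j satisfied]),
# over the four truth assignments to two ground atoms, mentionHome and
# mentionAway. Rules and weights are made up for this sketch.
import itertools
import math

# each rule: (weight, predicate over a world dict)
rules = [
    (1.5, lambda w: w["mentionHome"]),  # "the home team is always mentioned"
    (1.5, lambda w: w["mentionAway"]),  # "the away team is always mentioned"
    (2.0, lambda w: not (w["mentionHome"] and w["mentionAway"])),  # economy rule
]

def score(world):
    """Unnormalized probability: exp of the total weight of satisfied rules."""
    return math.exp(sum(wt for wt, rule in rules if rule(world)))

worlds = [dict(zip(("mentionHome", "mentionAway"), vals))
          for vals in itertools.product([False, True], repeat=2)]
Z = sum(score(w) for w in worlds)  # partition function
probs = {(w["mentionHome"], w["mentionAway"]): score(w) / Z for w in worlds}
```

No world satisfies all three rules, yet the distribution is well defined: the worlds mentioning exactly one team get the highest probability, exactly the trade-off the surrounding text describes.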
The relative weights on the rules determine the probability that particular literals are actually mentioned.

Learning. We seek to learn both the rules and their weights. We proceed by first proposing candidate fact-to-fact rules and then automatically generating the other rules (especially the mention-to-mention rules) from the general rule schemata described above. Then we apply EM to learn the weights on all of the rules. This has the effect of removing unnecessary rules by driving their weights to zero.

Proposing Candidate Fact-to-Fact Rules. For each predicate symbol and its specified arity, we generate a set of candidate Horn clauses with that predicate as the head (consequent). For the rule body (antecedent), we consider all conjunctions of literals involving other predicates (i.e., we do not allow recursive rules) up to a fixed maximum length. Each candidate rule is scored on the mentions in the training documents for support (the number of training examples that mention all facts in the body) and confidence (the conditional probability that the head is mentioned given that the body is satisfied). We discard all rules that do not achieve minimum support σ and then keep the τ most confident rules. The values of σ and τ are determined via cross-validation within the training set. The selected rules are then entered into the knowledge base. From each fact-to-fact rule, we derive mention-to-mention rules as described above. For each predicate, we also generate fact-to-mention and mention-to-fact rules.
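The support/confidence filtering step can be sketched as follows; the literal strings, the toy examples, and the helper name are illustrative assumptions, not the authors' code:

```python
# Sketch of candidate-rule scoring: support = number of training examples
# mentioning all body literals; confidence = P(head mentioned | body mentioned).
# Rules below minimum support sigma are discarded; the tau most confident
# survivors are kept.
def score_rules(candidates, examples, sigma, tau):
    """candidates: list of (body, head), with body a set of literal strings."""
    kept = []
    for body, head in candidates:
        support = [ex for ex in examples if body <= ex]  # body is a subset of ex
        if len(support) < sigma:
            continue  # discard rules without minimum support
        confidence = sum(head in ex for ex in support) / len(support)
        kept.append((confidence, body, head))
    kept.sort(key=lambda t: t[0], reverse=True)
    return kept[:tau]  # keep the tau most confident rules

examples = [{"homeTeam(A)", "winner(A)"},
            {"homeTeam(A)", "winner(A)"},
            {"homeTeam(A)", "loser(A)"},
            {"awayTeam(B)"}]
rules = [({"homeTeam(A)"}, "winner(A)"), ({"awayTeam(B)"}, "winner(B)")]
best = score_rules(rules, examples, sigma=2, tau=10)
```

In this toy run the first rule has support 3 and confidence 2/3, while the second falls below the support threshold and is discarded.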
Table 2: Learn Gricean Mention Model

Input: D_I = incomplete training examples; τ = number of rules per head; σ = minimum support per rule
Output: M = explicit mention model

 1: LEARN GRICEAN MENTION MODEL:
 2: exhaustively learn rules for each head
 3: discard rules with less than σ support
 4: select the τ most confident rules R for each head
 5: R′ := R
 6: for each rule (factP ⇒ factQ) ∈ R do
 7:     add mentionP ⇒ ¬mentionQ to R′
 8: end for
 9: for every factP ∈ R do
10:     add factP ⇒ mentionP to R′
11:     add mentionP ⇒ factP to R′
12: end for
13: repeat
14:     E-Step: apply inference to predict weighted facts F
15:     define complete weighted data D_C := D_I ∪ F
16:     M-Step: learn weights for rules in R′ using data D_C
17: until convergence
18: return the set of weighted rules R′

Learning the Weights. The goal of weight learning is to maximize the likelihood of the observed mentions (in the training set) by adjusting the weights of the rules. Because our training data consist only of mentions and no facts, the facts are latent (hidden) variables, and we must apply the EM algorithm to learn the weights. We employ the Markov Logic system Alchemy [8] for learning and inference. To implement EM, we applied the MC-SAT algorithm in the E-step and maximum pseudo-log-likelihood ("generative training") in the M-step. EM is iterated to convergence, which requires only a few iterations. Table 2 summarizes the pseudo-code of the algorithm. MAP inference for prediction is achieved using Alchemy's extension of MaxWalkSat.
Treating Missing Mentions as Missing At Random: An alternative to the Gricean mention model described above is to assume that the writer chooses which facts to mention (or omit) at random according to some unknown probability distribution that does not depend on the values of the missing variables, a setting known as Missing-At-Random (MAR).

Table 3: Synthetic Data Properties

q                      | 0.17  | 0.33  | 0.50  | 0.67  | 0.83  | 0.97
Mentioned literals (%) | 91.38 | 80.74 | 68.72 | 63.51 | 51.70 | 42.13
Complete records (%)   | 61.70 | 30.64 |  8.51 |  5.53 |  0.43 |  0.00

When data are MAR, it is possible to obtain unbiased estimates of the true distribution via imputation using EM [16]. We implemented this approach as follows. We apply the same method of learning rules (requiring minimum support σ and then taking the τ most confident rules). Each learned rule has the general form MENTION A ⇒ MENTION B. The collection of rules is treated as a model of the joint distribution over the mentions. Generative weight learning combined with Alchemy's built-in EM implementation is then applied to learn the weights on these rules.

3 Experimental Evaluation

We evaluated our mention model approach using data generated from a known mention model to understand its behavior. Then we compared its performance to the MAR approach on actual extractions from news stories about NFL football games, citizenship, and Somali ship hijackings.

Synthetic Mention Experiment. The goal of this experiment was to evaluate the ability of our method to learn accurate rules from data that match the assumptions of the algorithm. We also sought to understand how performance varies as a function of the amount of information omitted from the text. The data were generated using a database of NFL games (from 1998 and 2000-2005) downloaded from www.databasefootball.com.
These games were then encoded using the predicates TEAMINGAME(Game, Team), GAMEWINNER(Game, Team), GAMELOSER(Game, Team), HOMETEAM(Game, Team), AWAYTEAM(Game, Team), and TEAMGAMESCORE(Game, Team, Score), and treated as the ground truth. Note that these predicates can be divided into two correlated sets: WL = {GAMEWINNER, GAMELOSER, TEAMGAMESCORE} and HA = {HOMETEAM, AWAYTEAM}. From this ground truth, we generate a set of mentions for each game as follows. One literal is chosen uniformly at random from each of WL and HA and mentioned. Then each of the remaining literals is mentioned with probability 1 − q, where q is a parameter that we varied in the experiments. Table 3 shows the average percentage of literals mentioned in each generated "news story" and the percentage of generated "news stories" that mentioned all literals.

For each q, we generated 5 different datasets, each containing 235 games. For each value of q, we ran the algorithm five times. In each iteration, one dataset was used for training, another for validation, and the remaining 3 for testing. The training and validation datasets shared the same value of q. The resulting learned rules were evaluated on the test sets for all of the different values of q. The validation set is employed to determine the thresholds τ and σ during rule learning and to decide when to terminate EM. The chosen values were τ = 10, σ = 0.5 (50% of the total training instances), and between 3 and 8 EM iterations.

Table 4: Gricean Mention Model Performance on Synthetic Data. Each cell indicates the % of complete records inferred.

Training q | Test q: 0.17 | 0.33 | 0.50 | 0.67 | 0.83 | 0.97
0.17       |          100 |  100 |  100 |  100 |  100 |  100
0.33       |          100 |   99 |   97 |   96 |   90 |   85
0.50       |          100 |   99 |   98 |   97 |   93 |   87
0.67       |          100 |   98 |   92 |   92 |   81 |   66
0.83       |           99 |   98 |   72 |   71 |   61 |   54
0.97       |           91 |   81 |   72 |   68 |   56 |   41

Table 4 reports the proportion of complete game records (i.e., all four literals) that were correctly inferred, averaged over the five runs.
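The synthetic mention-generation process described above is simple enough to sketch directly; the function below is an illustrative reconstruction, not the authors' code:

```python
# Sketch of the synthetic mention generator: one literal from each correlated
# set (WL and HA) is always mentioned, and every remaining literal is
# mentioned independently with probability 1 - q.
import random

WL = ["GAMEWINNER", "GAMELOSER", "TEAMGAMESCORE"]
HA = ["HOMETEAM", "AWAYTEAM"]

def generate_mentions(q, rng):
    """Return the subset of literals 'mentioned' in one synthetic article."""
    mentioned = {rng.choice(WL), rng.choice(HA)}
    for literal in set(WL + HA) - mentioned:
        if rng.random() < 1.0 - q:
            mentioned.add(literal)
    return mentioned

rng = random.Random(0)
articles = [generate_mentions(0.97, rng) for _ in range(10_000)]
frac = sum(len(a) for a in articles) / (5 * len(articles))
print(frac)  # roughly matches the ~42% mentioned-literal rate of Table 3 at q = 0.97
```

With five literals per game, the expected mentioned fraction is (2 + 3(1 − q))/5, which for q = 0.97 is about 0.42, consistent with the q = 0.97 column of Table 3.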
Note that any facts mentioned in the generated articles are automatically correctly inferred, so if no inference was performed at all, the results would match the second row of Table 3. Notice that when trained on data with low missingness (e.g. q = 0.17), the algorithm was able to learn rules that predict well for articles with much higher levels of missing values. This is because q = 0.17 means that only 8.62% of the literals are missing in the training dataset, which results in 61.70% complete records. These are sufficient to allow learning highly accurate rules. However, as the proportion of missing literals in the training data increases, the algorithm starts learning incorrect rules, so performance drops. In particular, when q = 0.97, the training documents contain no complete records (Table 3). Nonetheless, the learned rules are still able to completely and correctly reconstruct 41% of the games! The rules learned under such high levels of missingness are not totally correct. Here is an example of one learned rule (for q = 0.97): FACT HOMETEAM(g, t1) ∧ FACT TEAMINGAME(g, t1) ⇒ FACT GAMEWINNER(g, t1). This rule says that the home team always wins. When appropriately weighted in Markov Logic, this is a reasonable rule even though it is not perfectly correct (nor was it a rule that we applied during the synthetic data generation process).

Table 5: Percentage of Literals Correctly Predicted (row: training q; columns: test q)
Training q \ Test q   0.17   0.33   0.50   0.67   0.83   0.97
0.97                   98     95     93     92     89     85

In addition to measuring the fraction of entire games correctly inferred, we can obtain a more fine-grained assessment by measuring the fraction of individual literals correctly inferred. Table 5 shows this for the q = 0.97 training scenario. We can see that even when the test articles have q = 0.97 (which means only 42.13% of literals are mentioned), the learned rules are able to correctly infer 85% of the literals.
By comparison, if the literals had been predicted independently at random, only 6.25% would be correctly predicted. Experiments with Real Data: We performed experiments on three datasets extracted from news stories: NFL games, citizenship, and Somali ship hijackings.

Table 6: Statistics on mentions for extracted NFL games (after repairing violations of integrity constraints). Under “Home/Away”, “men none” gives the percentage of articles in which neither the Home nor the Away team was mentioned; “men one”, the percentage in which exactly one of Home or Away was mentioned; and “men both”, the percentage where both were mentioned.
              Home/Away (%)                  Winner/Loser (%)
              men none  men one  men both    men none  men one  men both
NFL Train     17.9      58.9     23.2        17.9      57.1     25.0
NFL Test      83.6      19.6      0.0         1.8      98.2      0.0

NFL Games. A state-of-the-art information extraction system from BBN Technologies [6, 14] was applied to a corpus of 1000 documents taken from the Gigaword corpus V4 [13] to extract the same five propositions employed in the synthetic data experiments. The BBN coreference system attempted to detect and combine multiple mentions of the same game within a single article. The resulting data set contained 5,850 games. However, the data still contained many coreference errors, which produced games apparently involving more than two teams or where one team achieved multiple scores. To address these problems, we took each extracted game and applied a set of integrity constraints. The integrity constraints were learned automatically from 5 complete game records. Examples of the learned constraints include “Every game has exactly two teams” and “Every game has exactly one winner.” Each extracted game was then converted into multiple games by deleting literals in all possible ways until all of the integrity constraints were satisfied. The team names were replaced (arbitrarily) with constants A and B. The games were then processed to remove duplicates.
The result was a set of 56 distinct extracted games, which we call NFL Train. To develop a test set, NFL Test, we manually extracted 55 games from news stories about the 2010 NFL season (which has no overlap with Gigaword V4). Table 6 summarizes these game records. Here is an excerpt from one of the stories that was analyzed during learning: “William Floyd rushed for three touchdowns and Steve Young scored two more, moving the San Francisco 49ers one victory from the Super Bowl with a 44-15 American football rout of Chicago.” The initial set of literals extracted by the BBN system was the following: MENTION TEAMINGAME(NFLGame9209, SanFrancisco49ers) ∧ MENTION TEAMINGAME(NFLGame9209, ChicagoBears) ∧ MENTION GAMEWINNER(NFLGame9209, SanFrancisco49ers) ∧ MENTION GAMEWINNER(NFLGame9209, ChicagoBears) ∧ MENTION GAMELOSER(NFLGame9209, ChicagoBears). After processing with the learned integrity constraints, the extracted interpretation was the following: MENTION TEAMINGAME(NFLGame9209, SanFrancisco49ers) ∧ MENTION TEAMINGAME(NFLGame9209, ChicagoBears) ∧ MENTION GAMEWINNER(NFLGame9209, SanFrancisco49ers) ∧ MENTION GAMELOSER(NFLGame9209, ChicagoBears).

Table 7: Observed percentage of cases where exactly one literal is mentioned and the percentage predicted if the literals were missing at random
              Home/Away                      Winner/Loser
              obs. one (%)  pred. one (%)    obs. one (%)  pred. one (%)
NFL Train     58.9          49.9             57.1          49.8
NFL Test      19.6          34.5             98.2          47.9

It is interesting to ask whether these data are consistent with the explicit mention model versus the missing-at-random model. Let us suppose that under MAR, the probability that a fact will be mentioned is p. Then the probability that both literals in a rule (e.g., home/away or winner/loser) will be mentioned is p^2, the probability that both will be missing is (1 − p)^2, and the probability that exactly one will be mentioned is 2p(1 − p).
We can fit the best value for p to the observed missingness rates to minimize the KL divergence between the predicted and observed distributions. If the explicit mention model is correct, then the MAR fit will be a poor estimate of the fraction of cases where exactly one literal is missing. Table 7 shows the results. On NFL Train, it is clear that the MAR model seriously underestimates the probability that exactly one literal will be mentioned. The NFL Test data is inconsistent with the MAR assumption, because there are no cases where both predicates are mentioned. If we estimate p based only on the cases where both are missing or one is missing, the MAR model seriously underestimates the one-missing probability. Hence, train and test, though drawn from different corpora and extracted by different methods, are both inconsistent with the MAR assumption.

Table 8: NFL test set performance
Gricean Model (%)   MAR Model (%)
100.0               50.0

We applied both our explicit mention model and the MAR model to the NFL dataset. The cross-validated parameter values for the explicit mention model were ϵ = 0.5 and τ = 50, and the number of EM iterations varied between 2 and 3. We measured performance relative to the performance that could be attained by a system that uses the correct rules. The results are summarized in Table 8. Our method achieves perfect performance, whereas the MAR method only reconstructs half of the reconstructable games. This reflects the extreme difficulty of the test set, where none of the articles mentions all literals involved in any rule. Here are a few examples of the rules that are learned:

0.00436 : FACT TEAMINGAME(g, t1) ∧ FACT GAMELOSER(g, t2) ∧ (t1 ≠ t2) ⇒ FACT GAMEWINNER(g, t1)
0.17445 : MENTION TEAMINGAME(g, t1) ∧ MENTION GAMELOSER(g, t2) ∧ (t1 ≠ t2) ⇒ ¬MENTION GAMEWINNER(g, t1)

The first rule is a weak form of the “fact” rule that if one team is the loser, the other is the winner.
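The MAR fit described above (choosing p to minimize the KL divergence between predicted and observed mention patterns) can be sketched with a simple grid search; the paper does not specify the fitting procedure beyond KL minimization, so this is an illustrative reconstruction:

```python
# Under MAR with mention probability p, a literal pair shows the patterns
# (both, exactly one, neither) with probabilities (p^2, 2p(1-p), (1-p)^2).
# Fit p by grid search to minimize KL(observed || predicted).
import math

def kl(obs, pred):
    return sum(o * math.log(o / q) for o, q in zip(obs, pred) if o > 0)

def fit_mar_p(obs):
    """obs = (frac. both mentioned, frac. exactly one, frac. neither)."""
    best_p, best_kl = 0.5, float("inf")
    for i in range(1, 1000):
        p = i / 1000
        d = kl(obs, (p * p, 2 * p * (1 - p), (1 - p) ** 2))
        if d < best_kl:
            best_p, best_kl = p, d
    return best_p

# NFL Train, Home/Away (Table 6): 23.2% both, 58.9% exactly one, 17.9% neither.
p = fit_mar_p((0.232, 0.589, 0.179))
print(round(p, 3), round(2 * p * (1 - p), 3))  # fitted p and predicted one-mention rate
```

For this row the fit lands near p ≈ 0.53, whose predicted one-mention rate 2p(1 − p) ≈ 0.499 matches the 49.9% entry for NFL Train Home/Away in Table 7.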
The second rule is the corresponding “mention” rule that if the loser is mentioned then the winner is not. The small weights on these rules are difficult to interpret in isolation, because in Markov logic, all of the weights are coupled and there are other learned rules that involve the same literals. Birthplace and Citizenship. We repeated this same experiment on a different set of 182 articles selected from the ACE08 Evaluation corpus [12] and extracted by the same methods. In these articles, the citizenship of a person is mentioned 583 times and birthplace only 25 times. Both are mentioned in the same article only 6 times (and of these, birthplace and citizenship are the same in only 4). Clearly, this is another case where the MAR assumption does not hold. Integrity constraints were applied to force each person to have at most one birthplace and one country of citizenship, and then both methods were applied. The cross-validated parameter values for the explicit mention model were ϵ = 0.5 and τ = 50, and the number of EM iterations varied between 2 and 3. Table 9 shows the two cases of interest and the probability assigned to the missing fact by the two methods. The inverse Gricean approach gives much better results.

Table 9: Birthplace and Citizenship: Predicted probability assigned to the correct interpretation by the Gricean mention model and the MAR model
Configuration         Gricean Model pred. prob.   MAR pred. prob.
Citizenship missing   1.000                       0.969
Birthplace missing    1.000                       0.565

Somali Ship Hijacking. We collected a set of 41 news stories concerning ship hijackings based on ship names taken from the web site coordination-maree-noire.eu. From these documents, we manually extracted all mentions of the ownership country and flag country of the hijacked ships. Twenty-five stories mentioned only one fact (ownership or flag), while 16 mentioned both. Of the 16, 14 reported the flag country as different from the ownership country.
The Gricean maxims predict that if the two countries are the same, then only one of them will be mentioned. The results (Table 10) show that the Gricean model is again much more accurate than the MAR model.

Table 10: Flag and Ownership: Predicted probability assigned to the missing fact by the Gricean mention model and the MAR model. Cross-validated parameter values ϵ = 0.5 and τ = 50; 2-3 EM iterations.
Configuration       Gricean Model pred. prob.   MAR pred. prob.
Ownership missing   1.000                       0.459
Flag missing        1.000                       0.519

4 Conclusion

This paper has shown how to formalize the Gricean conversational maxims, compile them into Markov Logic, and invert them via probabilistic reasoning to learn Horn clause rules from facts extracted from documents. Experiments on synthetic mentions showed that our method is able to correctly reconstruct complete records even when neither the training data nor the test data contain complete records. Our three studies provide evidence that news articles obey the maxims across three domains. In all three domains, our method achieves excellent performance that far exceeds the performance of standard EM imputation. This shows conclusively that rule learning benefits from employing an explicit model of the process that generates the data. Indeed, it allows rules to be learned correctly from only a handful of complete training examples. An interesting direction for future work is to learn forms of knowledge more complex than Horn clauses. For example, the state of a hijacked ship can change over time from states such as “attacked” and “captured” to states such as “ransom demanded” and “released”. The Gricean mention model predicts that if a news story mentions that a ship was released, then it does not need to mention that the ship was “attacked” or “captured”. Handling such cases will require extending the methods in this paper to reason about time and what the author and reader know at each point in time.
It will also require better methods for joint inference, because there are more than 10 predicates in this domain, and our current EM implementation scales exponentially in the number of interrelated predicates. Acknowledgments This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8750-09-C-0179 and by Army Research Office (ARO). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA, the Air Force Research Laboratory (AFRL), ARO, or the US government. 8 References [1] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E.R. Hruschka Jr., and T.M. Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the Conference on Artificial Intelligence (AAAI), pages 1306–1313. AAAI Press, 2010. [2] A. Carlson, J. Betteridge, R. C. Wang, E. R. Hruschka, Jr., and T. M. Mitchell. Coupled semisupervised learning for information extraction. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM ’10, pages 101–110, New York, NY, USA, 2010. ACM. [3] W. W. Cohen. WHIRL: A word-based information representation language. Artificial Intelligence, 118(1-2):163–196, 2000. [4] J. R. Doppa, M. S. Sorower, M. Nasresfahani, J. Irvine, W. Orr, T. G. Dietterich, X. Fern, and P. Tadepalli. Learning rules from incomplete examples via implicit mention models. In Proceedings of the 2011 Asian Conference on Machine Learning, 2011. [5] O. Etzioni, M. Banko, S. Soderland, and D. S. Weld. Open information extraction from the web. Commun. ACM, 51(12):68–74, 2008. [6] M. Freedman, E. Loper, E. Boschee, and R. Weischedel. Empirical Studies in Learning to Read. In Proceedings of Workshop on Formalisms and Methodology for Learning by Reading (NAACL-2010), pages 61–69, 2010. [7] H. P. Grice. Logic and conversation. 
In Syntax and semantics: Speech acts, volume 3, pages 43–58. Academic Press, New York, 1975. [8] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, and P. Domingos. The Alchemy system for statistical relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2007. [9] L. Michael. Reading between the lines. In IJCAI, pages 1525–1530, 2009. [10] L. Michael and L. G. Valiant. A first experimental demonstration of massive knowledge infusion. In KR, pages 378–389, 2008. [11] U. Y. Nahm and R. J. Mooney. A mutually beneficial integration of data mining and information extraction. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and the Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 627–632. AAAI Press, 2000. [12] NIST. Automatic Content Extraction 2008 Evaluation Plan. [13] R. Parker, D. Graff, J. Kong, K. Chen, and K. Maeda. English Gigaword Fourth Edition. Linguistic Data Consortium, Philadelphia, 2009. [14] L. Ramshaw, E. Boschee, M. Freedman, J. MacBride, R. Weischedel, and A.Zamanian. Serif language processing effective trainable language understanding. In Joseph Olive, Caitlin Christianson, and John McCary, editors, Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. Springer, 2011. [15] M. Richardson and P. Domingos. Markov logic networks. Machine learning, 62:107–136, February 2006. [16] J. L. Schafer and M. K. Olsen. Multiple imputation for multivariate missing-data problems: a data analyst’s perspective. Multivariate Behavioral Research, 33:545–571, 1998. [17] S. Schoenmackers, O. Etzioni, D. S. Weld, and J. Davis. Learning first-order Horn clauses from web text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 1088–1098, Stroudsburg, PA, USA, 2010. Association for Computational Linguistics. 9
Efficient anomaly detection using bipartite k-NN graphs Kumar Sricharan Department of EECS University of Michigan Ann Arbor, MI 48104 kksreddy@umich.edu Alfred O. Hero III Department of EECS University of Michigan Ann Arbor, MI 48104 hero@umich.edu Abstract Learning minimum volume sets of an underlying nominal distribution is a very effective approach to anomaly detection. Several approaches to learning minimum volume sets have been proposed in the literature, including the K-point nearest neighbor graph (K-kNNG) algorithm based on the geometric entropy minimization (GEM) principle [4]. The K-kNNG detector, while possessing several desirable characteristics, suffers from high computation complexity, and in [4] a simpler heuristic approximation, the leave-one-out kNNG (L1O-kNNG) was proposed. In this paper, we propose a novel bipartite k-nearest neighbor graph (BPkNNG) anomaly detection scheme for estimating minimum volume sets. Our bipartite estimator retains all the desirable theoretical properties of the K-kNNG, while being computationally simpler than the K-kNNG and the surrogate L1OkNNG detectors. We show that BP-kNNG is asymptotically consistent in recovering the p-value of each test point. Experimental results are given that illustrate the superior performance of BP-kNNG as compared to the L1O-kNNG and other state of the art anomaly detection schemes. 1 Introduction Given a training set of normal events, the anomaly detection problem aims to identify unknown, anomalous events that deviate from the normal set. This novelty detection problem arises in applications where failure to detect anomalous activity could lead to catastrophic outcomes, for example, detection of faults in mission-critical systems, quality control in manufacturing and medical diagnosis. Several approaches have been proposed for anomaly detection. One class of algorithms assumes a family of parametrically defined nominal distributions. 
Examples include Hotelling’s T test and the Fisher F-test, which are both based on a Gaussian distribution assumption. The drawback of these algorithms is model mismatch: the supposed distribution need not be a correct representation of the nominal data, which can then lead to poor false alarm rates. More recently, several non-parametric methods based on minimum volume (MV) set estimation have been proposed. These methods aim to find the minimum volume set that recovers a certain probability mass α with respect to the unknown probability density of the nominal events. If a new event falls within the MV set, it is classified as normal and otherwise as anomalous. Estimation of minimum volume sets is a difficult problem, especially for high dimensional data. There are two types of approaches to this problem: (1) transform the MV estimation problem to an equivalent density level set estimation problem, which requires estimation of the nominal density; and (2) directly identify the minimal set using function approximation and non-parametric estimation [10, 6, 9]. Both types of approaches involve explicit approximation of high dimensional quantities (the multivariate density function in the first case and the boundary of the minimum volume set in the second) and are therefore not easily applied to high dimensional problems. The GEM principle developed by Hero [4] for determining MV sets circumvents the above difficulties by using the asymptotic theory of random Euclidean graphs instead of function approximation. However, the GEM-based K-kNNG anomaly detection scheme proposed in [4] is computationally difficult. To address this issue, a surrogate L1O-kNNG anomaly detection scheme was proposed in [4]. L1O-kNNG is computationally simpler than K-kNNG, but loses some desirable properties of the K-kNNG, including asymptotic consistency, as shown below.
In this paper, we use the GEM principle to develop a bipartite k-nearest neighbor (k-NN) graph-based anomaly detection algorithm. BP-kNNG retains the desirable properties of the GEM principle and as a result inherits the following features: (i) it is not restricted to linear or even convex decision regions, (ii) it is completely non-parametric, (iii) it is optimal in that it converges to the uniformly most powerful (UMP) test when the anomalies are drawn from a mixture of the nominal density and the uniform density, (iv) it does not require knowledge of anomalies in the training sample, (v) it is asymptotically consistent in recovering the p-value of the test point and (vi) it produces estimated p-values, allowing for false positive rate control. K-LPE [13] and RRS [7] are anomaly detection methods which are also based on k-NN graphs. BP-kNNG differs from L1O-kNNG, K-LPE and RRS in the following respects. L1O-kNNG, K-LPE and RRS do not use bipartite graphs. We will show that the bipartite nature of BP-kNNG results in significant computational savings. In addition, the K-LPE and RRS test statistics involve only the k-th nearest neighbor distance, while the statistic in BP-kNNG, like the L1O-kNNG, involves summation of the power weighted distance of all the edges in the k-NN graph. This will result in increased robustness to outliers in the training sample. Finally, we will show that the mean square rate of convergence of p-values in BP-kNNG, $O(T^{-2/(2+d)})$, is faster than the convergence rate of K-LPE, $O(T^{-2/5} + T^{-6/5d})$, where T is the size of the nominal training sample and d is the dimension of the data. The rest of this paper is organized as follows. In Section 2, we outline the statistical framework for minimum volume set anomaly detection. In Section 3, we describe the GEM principle and the K-kNNG and L1O-kNNG anomaly detection schemes proposed in [4]. Next, in Section 4, we develop our bipartite k-NN graph (BP-kNNG) method for anomaly detection.
We show consistency of the method and compare its computational complexity with that of the K-kNNG, L1O-kNNG and K-LPE algorithms. In Section 5, we show simulation results that illustrate the superior performance of BP-kNNG over L1O-kNNG. We also show that our method compares favorably to other state of the art anomaly detection schemes when applied to real world data from the UCI repository [1]. We conclude with a short discussion in Section 6.

2 Statistical novelty detection

The problem setup is as follows. We assume that a training sample $X_T = \{X_1, \ldots, X_T\}$ of d-dimensional vectors is available. Given a new sample X, the objective is to declare X to be either a 'nominal' event consistent with $X_T$ or an 'anomalous' event which deviates from $X_T$. We seek a functional D and corresponding detection rule so that X is declared nominal if D(X) > 0 holds and anomalous otherwise. The acceptance region is given by $A = \{x : D(x) > 0\}$. We seek to further constrain the choice of D to allow as few false negatives as possible for a fixed allowance of false positives. To formulate this problem, we adopt the standard statistical framework for testing composite hypotheses. We assume that the training sample $X_T$ is an i.i.d. sample drawn from an unknown d-dimensional probability distribution $f_0(x)$ on $[0,1]^d$. Let X have density f on $[0,1]^d$. The anomaly detection problem can be formulated as testing the hypotheses $H_0 : f = f_0$ versus $H_1 : f \neq f_0$. For a given $\alpha \in (0,1)$, we seek an acceptance region A that satisfies $\Pr(X \in A \mid H_0) \ge 1 - \alpha$. This requirement maintains the false positive rate at a level no greater than α. Let $\mathcal{A} = \{A : \int_A f_0(x)\,dx \ge 1 - \alpha\}$ denote the collection of acceptance regions of level α. The most suitable acceptance region from the collection $\mathcal{A}$ is the set which minimizes the false negative rate. Assume that the density f is bounded above by some constant C. In this case the false negative rate is bounded by $C\lambda(A)$, where $\lambda(\cdot)$
is the Lebesgue measure in $\mathbb{R}^d$. Consider the relaxed problem of minimizing the upper bound $C\lambda(A)$, or equivalently the volume $\lambda(A)$ of A. The optimal acceptance region with a maximum false alarm rate α is therefore given by the minimum volume set of level α: $\Lambda_\alpha = \min\{\lambda(A) : \int_A f_0(x)\,dx \ge 1 - \alpha\}$. Define the minimum entropy set of level α to be $\Omega_\alpha = \min\{H_\nu(A) : \int_A f_0(x)\,dx \ge 1 - \alpha\}$, where $H_\nu(A) = (1-\nu)^{-1} \log \int_A f_0^\nu(x)\,dx$ is the Rényi ν-entropy of the density $f_0$ over the set A. It can be shown that when $f_0$ is a Lebesgue density in $\mathbb{R}^d$, the minimum volume set and the minimum entropy set are equivalent, i.e. $\Lambda_\alpha$ and $\Omega_\alpha$ are identical. Therefore, the optimal decision rule for a given level of false alarm α is to declare an anomaly if $X \notin \Omega_\alpha$. This decision rule has a strong optimality property [4]: when $f_0$ is Lebesgue continuous and has no 'flat' regions over its support, this decision rule is a uniformly most powerful (UMP) test at level 1 − α for the null hypothesis that the test point has density f(x) equal to the nominal $f_0(x)$ versus the alternative hypothesis that $f(x) = (1-\epsilon) f_0(x) + \epsilon U(x)$, where U(x) is the uniform density over $[0,1]^d$ and $\epsilon \in [0,1]$. Furthermore, the power function is given by $\beta = \Pr(X \notin \Omega_\alpha \mid H_1) = (1-\epsilon)\alpha + \epsilon(1 - \lambda(\Omega_\alpha))$.

3 GEM principle

In this section, we briefly review the geometric entropy minimization (GEM) principle [4] for determining minimum entropy sets $\Omega_\alpha$ of level α. The GEM method directly estimates the critical region $\Omega_\alpha$ for detecting anomalies using minimum coverings of subsets of points in a nominal training sample. These coverings are obtained by constructing minimal graphs, e.g., the k-minimal spanning tree or the k-nearest neighbor graph, covering a K-point subset that is a given proportion of the training sample. Points in the training sample that are not covered by the K-point minimal graphs are identified as tail events. In particular, let $X_{K,T}$ denote one of the $\binom{T}{K}$ K-point subsets of $X_T$.
The k-nearest neighbors (k-NN) of a point $X_i \in X_{K,T}$ are the k closest points to $X_i$ among $X_{K,T} - X_i$. Denote the corresponding set of edges between $X_i$ and its k-NN by $\{e_i(1), \ldots, e_i(k)\}$. For any subset $X_{K,T}$, define the total power weighted edge length of the k-NN graph on $X_{K,T}$, with power weighting γ (0 < γ < d), as $L_{kNN}(X_{K,T}) = \sum_{i=1}^{K} \sum_{l=1}^{k} |e_{t_i}(l)|^\gamma$, where $\{t_1, \ldots, t_K\}$ are the indices of $X_i \in X_{K,T}$. Define the K-kNNG graph to be the K-point k-NN graph having minimal length $\min_{X_{K,T} \subset X_T} L_{kNN}(X_{K,T})$ over all $\binom{T}{K}$ subsets $X_{K,T}$. Denote the corresponding length minimizing subset of K points by $X^*_{K,T} = \operatorname{argmin}_{X_{K,T}} L_{kNN}(X_{K,T})$. The K-kNNG thus specifies a minimal graph covering $X^*_{K,T}$ of size K. This graph can be viewed as capturing the densest regions of $X_T$. If $X_T$ is an i.i.d. sample from a multivariate density $f_0(x)$ and if $\lim_{K,T\to\infty} K/T = \rho$, then the set $X^*_{K,T}$ converges a.s. to the minimum ν-entropy set containing a proportion of at least ρ of the mass of $f_0(x)$, where $\nu = 1 - \gamma/d$ [4]. This set can be used to perform anomaly detection.

3.1 K-kNNG anomaly detection

Given a test sample X, denote the pooled sample $X_{T+1} = X_T \cup \{X\}$ and determine the K-kNNG graph over $X_{T+1}$. Declare X to be an anomaly if $X \notin X^*_{K,T+1}$ and nominal otherwise. When the density $f_0$ is Lebesgue continuous, it follows from [4] that as $K, T \to \infty$, this anomaly detection algorithm has a false alarm rate that converges to $\alpha = 1 - K/T$ and power that converges to that of the minimum volume set test of level α. An identical detection scheme based on the K-minimal spanning tree has also been developed in [4]. The K-kNNG anomaly detection scheme therefore offers a direct approach to detecting outliers while bypassing the more difficult problems of density estimation and level set estimation in high dimensions. However, this algorithm requires construction of k-nearest neighbor graphs (or k-minimal spanning trees) over $\binom{T}{K}$ different subsets.
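The total power-weighted k-NN edge length used above can be computed directly by brute force; a minimal sketch with illustrative names and Euclidean distances:

```python
# Brute-force power-weighted k-NN edge length: for each point, sum the
# gamma-powered Euclidean distances to its k nearest neighbors within
# the same point set.
import math

def knn_length(points, k, gamma):
    total = 0.0
    for i, x in enumerate(points):
        dists = sorted(math.dist(x, y) for j, y in enumerate(points) if j != i)
        total += sum(d ** gamma for d in dists[:k])  # k-NN edges of x
    return total

# Corners of the unit square with k = 1, gamma = 1: each point's nearest
# neighbor lies at distance 1, so the total length is 4.0.
print(knn_length([(0, 0), (0, 1), (1, 0), (1, 1)], k=1, gamma=1))
```

The exhaustive K-kNNG would evaluate this length over every K-point subset, which is exactly the combinatorial cost the text flags next.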
For each input test point, the runtime of this algorithm is therefore $O(dK^2 \binom{T}{K})$. As a result, the K-kNNG method is not well suited for anomaly detection for large sample sizes.

3.2 L1O-kNNG

To address the computational problems of K-kNNG, Hero [4] proposed implementing the K-kNNG for the simplest case K = T − 1. The runtime of this algorithm for each input test point is $O(dT^2)$. Clearly, the L1O-kNNG is of much lower complexity than the K-kNNG scheme. However, the L1O-kNNG detects anomalies at a fixed false alarm rate 1/(T + 1), where T is the training sample size. To detect anomalies at a higher false alarm rate $\alpha^*$, one would have to subsample the training set and use only $T^* = 1/\alpha^* - 1$ training samples. This destroys any hope for asymptotic consistency of the L1O-kNNG. In the next section, we propose a different GEM-based algorithm that uses bipartite graphs. The algorithm has a much faster runtime than the L1O-kNNG and, unlike the L1O-kNNG, is asymptotically consistent and can operate at any specified alarm rate α. We describe our algorithm below.

4 BP-kNNG

Let $\{X_N, X_M\}$ be a partition of $X_T$ with $\mathrm{card}\{X_N\} = N$ and $\mathrm{card}\{X_M\} = M = T - N$ respectively. As above, let $X_{K,N}$ denote one of the $\binom{N}{K}$ subsets of K distinct points from $X_N$. Define the bipartite k-NN graph on $\{X_{K,N}, X_M\}$ to be the set of edges linking each $X_i \in X_{K,N}$ to its k nearest neighbors in $X_M$. Define the total power weighted edge length of this bipartite k-NN graph, with power weighting γ (0 < γ < d) and a fixed number of edges s (1 ≤ s ≤ k) corresponding to each vertex $X_i \in X_{K,N}$, to be $L_{s,k}(X_{K,N}, X_M) = \sum_{i=1}^{K} \sum_{l=k-s+1}^{k} |e_{t_i}(l)|^\gamma$, where $\{t_1, \ldots, t_K\}$ are the indices of $X_i \in X_{K,N}$ and $\{e_{t_i}(1), \ldots, e_{t_i}(k)\}$ are the k-NN edges in the bipartite graph originating from $X_{t_i} \in X_{K,N}$. Define the bipartite K-kNNG graph to be the one having minimal weighted length $\min_{X_{K,N} \subset X_N} L_{s,k}(X_{K,N}, X_M)$ over all $\binom{N}{K}$ subsets $X_{K,N}$.
Define the corresponding minimizing subset of K points of $X_{K,N}$ by $X^*_{K,N} = \operatorname{argmin}_{X_{K,N}} L_{s,k}(X_{K,N}, X_M)$. Using the theory of partitioned k-NN graph entropy estimators [11], it follows that as $k/M \to 0$ and $k, N \to \infty$, for fixed s, the set $X^*_{K,N}$ converges a.s. to the minimum ν-entropy set $\Omega_{1-\rho}$ containing a proportion of at least ρ of the mass of $f_0(x)$, where $\rho = \lim_{K,N\to\infty} K/N$ and $\nu = 1 - \gamma/d$. This suggests using the bipartite k-NN graph to detect anomalies in the following way. Given a test point X, denote the pooled sample $X_{N+1} = X_N \cup \{X\}$ and determine the optimal bipartite K-kNNG graph $X^*_{K,N+1}$ over $\{X_{K,N+1}, X_M\}$. Now declare X to be an anomaly if $X \notin X^*_{K,N+1}$ and nominal otherwise. It is clear that by the GEM principle, this algorithm detects false alarms at a rate that converges to $\alpha = 1 - K/T$ and has power that converges to that of the minimum volume set test of level α. We can equivalently determine $X^*_{K,N+1}$ as follows. For each $X_i \in X_N$, construct $d_{s,k}(X_i) = \sum_{l=k-s+1}^{k} |e_i(l)|^\gamma$. For each test point X, define $d_{s,k}(X) = \sum_{l=k-s+1}^{k} |e_X(l)|^\gamma$, where $\{e_X(1), \ldots, e_X(k)\}$ are the k-NN edges from X to $X_M$. Now, choose the K points among $X_N \cup \{X\}$ with the K smallest of the N + 1 edge lengths $\{d_{s,k}(X_i) : X_i \in X_N\} \cup \{d_{s,k}(X)\}$. Because of the bipartite nature of the construction, this is equivalent to choosing $X^*_{K,N+1}$. This leads to the proposed BP-kNNG anomaly detection algorithm described by Algorithm 1.

Algorithm 1: Anomaly detection scheme using bipartite k-NN graphs
1. Input: training samples $X_T$, test samples X, false alarm rate α.
2. Training phase:
   a. Create the partition $\{X_N, X_M\}$.
   b. Construct the bipartite k-NN graph on the partition.
   c. Compute the k-NN length $d_{s,k}(X_i) = \sum_{l=k-s+1}^{k} |e_i(l)|^\gamma$ for each $X_i \in X_N$.
3. Test phase (detect anomalous points): for each input test sample X, compute the k-NN length $d_{s,k}(X) = \sum_{l=k-s+1}^{k} |e_X(l)|^\gamma$; if $(1/N) \sum_{X_i \in X_N} \mathbf{1}(d_{s,k}(X_i) < d_{s,k}(X)) \ge 1 - \alpha$, declare X to be anomalous; otherwise declare X to be non-anomalous.

4.1 BP-kNNG p-value estimates

The p-value is a score between 0 and 1 that is associated with the likelihood that a given point $X_0$ comes from a specified nominal distribution. The BP-kNNG generates an estimate of the p-value that is asymptotically consistent, guaranteeing that the BP-kNNG detector is a consistent novelty detector. Specifically, the true p-value associated with a test point $X_0$ in a minimum volume set test is given by $p_{true}(X_0) = \int_{S(X_0)} f_0(z)\,dz$, where $S(X_0) = \{z : f_0(z) \le f_0(X_0)\}$ and $E(X_0) = \{z : f_0(z) = f_0(X_0)\}$. $p_{true}(X_0)$ is the minimal level α at which $X_0$ would be rejected. The empirical p-value associated with the BP-kNNG is defined as

$p_{bp}(X_0) = \frac{1}{N} \sum_{X_i \in X_N} \mathbf{1}(d_{s,k}(X_i) \ge d_{s,k}(X_0)). \qquad (1)$

4.2 Asymptotic consistency and optimal convergence rates

Here we prove that the BP-kNNG detector is asymptotically consistent by showing that, for a fixed number of edges s, $E[(p_{bp}(X_0) - p_{true}(X_0))^2] \to 0$ as $k/M \to 0$ and $k, N \to \infty$. In the process, we also obtain rates of convergence of this mean-squared error. These rates depend on k, N and M and result in the specification of an optimal number of neighbors k and an optimal partition ratio N/M that achieve the best trade-off between bias and variance of the p-value estimates $p_{bp}(X_0)$. We assume that the density $f_0$ (i) is bounded away from 0 and ∞ and is continuous on its support S, (ii) has no flat spots over its support set and (iii) has a finite number of modes. Let E denote the expectation w.r.t. the density $f_0$, and B, V denote the bias and variance operators. Throughout this section, assume without loss of generality that $\{X_1, \ldots, X_N\} \in X_N$ and $\{X_{N+1}, \ldots, X_T\} \in X_M$. Bias: We first introduce the oracle p-value $p_{orac}(X_0) = (1/N) \sum_{X_i \in X_N} \mathbf{1}(f_0(X_i) \le f_0(X_0))$ and note that $E[p_{orac}(X_0)] = p_{true}(X_0)$.
The distance e_i(l) of a point X_i ∈ X_N to its l-th nearest neighbor in X_M is related to the bipartite l-nearest-neighbor density estimate f̂_l(X_i) = (l − 1)/(M c_d e_i^d(l)) (Section 2.3, [11]), where c_d is the volume of the unit ball in d dimensions. Let

e(X) = Σ_{l=k−s+1}^{k} (((k − 1)/(l − 1)) f̂_l(X))^{ν−1} − s (f(X))^{ν−1}

and δ(X_i, X_0) = δ_i = (f(X_i))^{ν−1} − (f(X_0))^{ν−1}. We then have

B[p_bp(X_0)] = E[p_bp(X_0)] − p_true(X_0) = E[p_bp(X_0) − p_orac(X_0)]
            = E[1(d_{s,k}(X_1) ≥ d_{s,k}(X_0))] − E[1(f(X_1) ≤ f(X_0))]
            = E[1(e(X_1) − e(X_0) + δ_1 ≤ 0) − 1(δ_1 ≤ 0)].

This bias will be non-zero when 1(e(X_1) − e(X_0) + δ_1 ≤ 0) ≠ 1(δ_1 ≤ 0). First we investigate this condition when δ_1 > 0. In this case, for the two indicators to differ, we need e(X_0) − e(X_1) ≥ δ_1. Likewise, when δ_1 ≤ 0, the indicators differ when e(X_1) − e(X_0) > |δ_1|. From the theory developed in [11], for any fixed s, |e(X)| = O((k/M)^{1/d} + 1/√k) with probability greater than 1 − o(1/M). This implies that

B[p_bp(X_0)] = E[1(e(X_1) − e(X_0) + δ_1 ≤ 0) − 1(δ_1 ≤ 0)]
            = Pr{|δ_1| = O((k/M)^{1/d} + 1/√k)} + o(1/M)
            = O((k/M)^{1/d} + 1/√k),   (2)

where the last step follows from our assumption that the density f_0 is continuous and has a finite number of modes.

Variance: Define b_i = 1(e(X_i) − e(X_0) + δ_i ≤ 0) − 1(δ_i ≤ 0). We can compute the variance in a manner similar to the bias (for additional details, please refer to the supplementary material):

V[p_bp(X_0)] = (1/N) V[1(e(X_1) − e(X_0) + δ_1 ≤ 0)] + ((N − 1)/N) Cov[b_1, b_2]
            = O(1/N) + E[b_1 b_2] − E[b_1] E[b_2]
            = O(1/N + (k/M)^{2/d} + 1/k).   (3)

Consistency of p-values: From (2) and (3), we obtain E[(p_bp(X_0) − p_true(X_0))²] = O((k/M)^{2/d} + 1/k + 1/N). This implies that p_bp converges in mean square to p_true, for a fixed number of edges s, as k/M → 0 and k, N → ∞.

Optimal choice of parameters: The optimal choice of k to minimize the MSE is given by k = Θ(M^{2/(2+d)}).
For fixed M + N = T, to minimize the MSE, N should then be chosen to be of order O(M^{(4+d)/(4+2d)}), which implies that M = Θ(T). The mean square convergence rate for this optimal choice of k and partition ratio N/M is O(T^{−2/(2+d)}). In comparison, the K-LPE method requires that k grow with the sample size at rate k = Θ(T^{2/5}); the mean square rate of convergence of the p-values in K-LPE is then O(T^{−2/5} + T^{−6/(5d)}). The p-values therefore converge faster for BP-kNNG than for K-LPE.

4.3 Comparison of run-time complexity

Here we compare the complexity of BP-kNNG with that of K-kNNG, L1O-kNNG and K-LPE. For a single query point X, the runtime of K-kNNG is O(dK²T^K), while the complexity of the surrogate L1O-kNNG algorithm and of K-LPE is O(dT²). On the other hand, the complexity of the proposed BP-kNNG algorithm is dominated by the computation of d_{s,k}(X_i) for each X_i ∈ X_N and of d_{s,k}(X), which is O(dNM) = O(dT^{(8+3d)/(4+2d)}) = o(dT²). For K-kNNG, L1O-kNNG and K-LPE, a new k-NN graph has to be constructed on X_N ∪ {X} for every new query point X. On the other hand, because of the bipartite construction of our k-NN graph, d_{s,k}(X_i) for each X_i ∈ X_N needs to be computed and stored only once. For every new query X, the cost to compute d_{s,k}(X) is only O(dM) = O(dT). For a total of L query points, the overall runtime of our algorithm, O(dT(T^{(4+d)/(4+2d)} + L)), is therefore much smaller than that of L1O-kNNG and K-LPE (O(dLT²) each) and of K-kNNG (O(dLK²T^K)).

5 Simulation comparisons

We compare the L1O-kNNG and BP-kNNG schemes on a simulated data set. The training set contains 1000 realizations drawn from a 2-dimensional Gaussian density f_0 with mean 0 and diagonal covariance with identical component variances (σ = 0.1). The test set contains 500 realizations drawn from 0.8 f_0 + 0.2 U, where U is the uniform density on [0, 1]².
Samples from the uniform distribution are classified to be anomalies. The percentage of anomalies in the test set is therefore 20%.

Figure 1: Comparison of performance of L1O-kNNG and BP-kNNG. (a) ROC curves for L1O-kNNG and BP-kNNG; the curve labeled 'clairvoyant' is the ROC of the UMP anomaly detector. (b) Comparison of observed false alarm rates for L1O-kNNG and BP-kNNG with the desired false alarm rates.

Table 1: Description of data used in anomaly detection experiments.

Data set         Sample size   Dimension   Anomaly class
HTTP (KDD'99)    567497        3           attack (0.4%)
Forest           286048        10          class 4 vs class 2 (0.9%)
Mulcross         262144        4           2 clusters (10%)
SMTP (KDD'99)    95156         3           attack (0.03%)
Shuttle          49097         9           class 2,3,5,6,7 vs class 1 (7%)

The distribution f_0 has essential support on the unit square. For this simple case the minimum volume set of level α is a disk centered at the origin with radius √(2σ² log(1/α)), and the power of the uniformly most powerful (UMP) test is 1 − 2πσ² log(1/α). L1O-kNNG and BP-kNNG were implemented in Matlab 7.6 on a 2 GHz Intel processor with 3 GB of RAM. The value of k was set to 5. For BP-kNNG, we set s = 1, N = 100 and M = 900. In Fig. 1(a), we compare the detection performance of L1O-kNNG and BP-kNNG against the clairvoyant UMP detector in terms of the ROC; the proposed BP-kNNG is closer to the optimal UMP test than L1O-kNNG. In Fig. 1(b) we note the close agreement between the desired and observed false alarm rates for BP-kNNG, whereas L1O-kNNG significantly underestimates its false alarm rate at higher false alarm levels. In the case of L1O-kNNG, it took an average of 60 ms to test each instance for possible anomaly.
The total run-time was therefore 60 ms × 500 = 30000 ms. For BP-kNNG, a single instance took an average of 57 ms; when all the instances were processed together, the total run time was only 97 ms. These significant savings in runtime are due to the fact that the bipartite graph does not have to be constructed separately for each new test instance; it suffices to construct it once on the entire data set.

5.1 Experimental comparisons

In this section, we compare our algorithm to several other state-of-the-art anomaly detection algorithms, namely: MassAD [12], isolation forest (or iForest) [5], two distance-based methods ORCA [2] and K-LPE [13], a density-based method LOF [3], and the one-class support vector machine (or 1-SVM) [9]. All the methods are tested on the five largest data sets used in [5]. The data characteristics are summarized in Table 1. One of the anomaly data generators is Mulcross [8] and the other four are from the UCI repository [1]. Full details about the data can be found in [5]. Performance is evaluated in terms of averaged AUC (area under the ROC curve) and processing time (the total of training and test time). Results for BP-kNNG are compared with results for L1O-kNNG, K-LPE, MassAD, iForest and ORCA in Table 2. The results for MassAD, iForest and ORCA are reproduced from [12]. MassAD and iForest were implemented in Matlab and tested on an AMD Opteron machine with a 1.8 GHz processor and 4 GB memory.
Table 2: Comparison of anomaly detection schemes in terms of AUC and run-time for BP-kNNG (BP) against L1O-kNNG (L10), K-LPE, MassAD (Mass), iForest (iF) and ORCA. For L1O-kNNG and K-LPE we report the processing time per test instance (/i). We are unable to report the AUC for K-LPE and L1O-kNNG because of the large processing time. BP-kNNG compares favorably in terms of AUC while also requiring the least run-time.

            AUC                                   Time (secs)
Data set    BP    L10  K-LPE  Mass  iF    ORCA    BP    L10    K-LPE  Mass  iF   ORCA
HTTP        0.99  NA   NA     1.00  1.00  0.36    3.81  .10/i  .19/i  34    147  9487
Forest      0.86  NA   NA     0.91  0.87  0.83    7.54  .18/i  .18/i  18    79   6995
Mulcross    1.00  NA   NA     0.99  0.96  0.33    4.68  .26/i  .17/i  17    75   2512
SMTP        0.90  NA   NA     0.86  0.88  0.87    0.74  .11/i  .17/i  7     26   267
Shuttle     0.99  NA   NA     0.99  1.00  0.60    1.54  .45/i  .16/i  4     15   157

Table 3: Comparison of desired and observed false alarm rates for BP-kNNG. There is good agreement between the desired and observed rates.

                  Desired false alarm rate
Data set          0.01    0.02    0.05    0.1     0.2
HTTP (KDD'99)     0.007   0.015   0.063   0.136   0.216
Forest            0.009   0.015   0.035   0.071   0.150
Mulcross          0.008   0.014   0.040   0.096   0.186
SMTP (KDD'99)     0.006   0.017   0.046   0.099   0.204
Shuttle           0.026   0.030   0.045   0.079   0.179

The results for ORCA, LOF and 1-SVM were obtained using the same experimental setting but on a faster 2.3 GHz machine. We exclude the results for LOF and 1-SVM from Table 2 because MassAD, iForest and ORCA have been shown to outperform LOF and 1-SVM in [12]. We implemented BP-kNNG, L1O-kNNG and K-LPE in Matlab on an Intel 2 GHz processor with 3 GB RAM; this machine is comparable to the 1.8 GHz AMD Opteron machine. We choose T = 10^4 training samples and fix k = 50 in all three cases. For BP-kNNG, we fix s = 5 and N = 10^3. When reporting results for L1O-kNNG and K-LPE, we report the processing time per test instance (/i).
We are unable to report the AUC for K-LPE because of the large processing time, and for L1O-kNNG because it cannot operate at high false alarm rates. From the results in Table 2, we see that BP-kNNG performs comparably in terms of AUC to the other algorithms, while having the least processing time across all algorithms (implemented on different, but comparable, machines). In addition, BP-kNNG allows the specification of a threshold for anomaly detection at a desired false alarm rate. This is corroborated by the results in Table 3, where we see that the observed false alarm rates across the different data sets are close to the desired false alarm rates.

6 Conclusions

The geometric entropy minimization (GEM) principle was introduced in [4] to extract minimal set coverings that can be used to detect anomalies from a set of training samples. In this paper we propose a bipartite k-nearest neighbor graph (BP-kNNG) anomaly detection algorithm based on the GEM principle. BP-kNNG inherits the theoretical optimality properties of GEM methods, including consistency, while being an order of magnitude faster than the methods proposed in [4]. We compared BP-kNNG against state-of-the-art anomaly detection algorithms and showed that BP-kNNG compares favorably in terms of both ROC performance and computation time. In addition, BP-kNNG enjoys several other advantages, including the ability to detect anomalies at a desired false alarm rate. In BP-kNNG, the p-value of each test point can also be easily computed using (1), making BP-kNNG readily extendable to incorporate false discovery rate constraints.

References

[1] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[2] S. D. Bay and M. Schwabacher. Mining distance-based outliers in near linear time with randomization and a simple pruning rule. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '03, pages 29–38, New York, NY, USA, 2003. ACM.
[3] M. M. Breunig, H.
Kriegel, R. T. Ng, and J. Sander. LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, SIGMOD '00, pages 93–104, New York, NY, USA, 2000. ACM.
[4] A. O. Hero. Geometric entropy minimization (GEM) for anomaly detection and localization. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 585–592. MIT Press, 2006.
[5] F. T. Liu, K. M. Ting, and Z. Zhou. Isolation forest. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, pages 413–422, Washington, DC, USA, 2008. IEEE Computer Society.
[6] C. Park, J. Z. Huang, and Y. Ding. A computable plug-in estimator of minimum volume sets for novelty detection. Operations Research, 58(5):1469–1480, 2010.
[7] S. Ramaswamy, R. Rastogi, and K. Shim. Efficient algorithms for mining outliers from large data sets. SIGMOD Rec., 29:427–438, May 2000.
[8] D. M. Rocke and D. L. Woodruff. Identification of outliers in multivariate data. Journal of the American Statistical Association, 91(435):1047–1061, 1996.
[9] B. Schölkopf, R. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt. Support vector method for novelty detection. In Advances in Neural Information Processing Systems, volume 12, 2000.
[10] C. Scott and R. Nowak. Learning minimum volume sets. Journal of Machine Learning Research, 7:665–704, 2006.
[11] K. Sricharan, R. Raich, and A. O. Hero. Empirical estimation of entropy functionals with confidence. ArXiv e-prints, December 2010.
[12] K. M. Ting, G. Zhou, T. F. Liu, and J. S. C. Tan. Mass estimation and its applications. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, pages 989–998, New York, NY, USA, 2010. ACM.
[13] M. Zhao and V. Saligrama. Anomaly detection with score functions based on nearest neighbor graphs. Computing Research Repository, abs/0910.5461, 2009.
Shallow vs. Deep Sum-Product Networks

Olivier Delalleau, Department of Computer Science and Operations Research, Université de Montréal, delallea@iro.umontreal.ca
Yoshua Bengio, Department of Computer Science and Operations Research, Université de Montréal, yoshua.bengio@umontreal.ca

Abstract

We investigate the representational power of sum-product networks (computation networks analogous to neural networks, but whose individual units compute either products or weighted sums), through a theoretical analysis that compares deep (multiple hidden layers) vs. shallow (one hidden layer) architectures. We prove there exist families of functions that can be represented much more efficiently with a deep network than with a shallow one, i.e. with substantially fewer hidden units. Such results were not available until now, and contribute to motivate recent research involving learning of deep sum-product networks, and more generally motivate research in Deep Learning.

1 Introduction and prior work

Many learning algorithms are based on searching a family of functions so as to identify one member of said family which minimizes a training criterion. The choice of this family of functions, and of how members of that family are parameterized, can be a crucial one. Although there is no universally optimal choice of parameterization or family of functions (or "architecture"), as demonstrated by the no-free-lunch results [37], it may be the case that some architectures are appropriate (or inappropriate) for a large class of learning tasks and data distributions, such as those related to Artificial Intelligence (AI) tasks [4]. Different families of functions have different characteristics that can be appropriate or not depending on the learning task of interest. One of the characteristics that has spurred much interest and research in recent years is the depth of the architecture. In the case of a multi-layer neural network, depth corresponds to the number of (hidden and output) layers.
A fixed-kernel Support Vector Machine is considered to have depth 2 [4] and boosted decision trees to have depth 3 [7]. Here we use the word circuit or network to talk about a directed acyclic graph, where each node is associated with some output value which can be computed based on the values associated with its predecessor nodes. The arguments of the learned function are set at the input nodes of the circuit (which have no predecessor) and the outputs of the function are read off the output nodes of the circuit. Different families of functions correspond to different circuits and allowed choices of computations in each node. Learning can be performed by changing the computation associated with a node, or rewiring the circuit (possibly changing the number of nodes). The depth of the circuit is the length of the longest path in the graph from an input node to an output node. Deep Learning algorithms [3] are tailored to learning circuits with variable depth, typically greater than depth 2. They are based on the idea of multiple levels of representation, with the intuition that the raw input can be represented at different levels of abstraction, with more abstract features of the input or more abstract explanatory factors represented by deeper circuits. These algorithms are often based on unsupervised learning, opening the door to semi-supervised learning and efficient use of large quantities of unlabeled data [3]. Analogies with the structure of the cerebral cortex (in particular the visual cortex) [31] and similarities between features learned with some Deep Learning algorithms and those hypothesized in the visual cortex [17] further motivate investigations into deep architectures. It has been suggested that deep architectures are more powerful in the sense of being able to more efficiently represent highly-varying functions [4, 3]. In this paper, we measure "efficiency" in terms of the number of computational units in the network.
An efficient representation is important mainly because: (i) it uses less memory and is faster to compute, and (ii) given a fixed amount of training samples and computational power, better generalization is expected. The first successful algorithms for training deep architectures appeared in 2006, with efficient training procedures for Deep Belief Networks [14] and deep auto-encoders [13, 27, 6], both exploiting the general idea of greedy layer-wise pre-training [6]. Since then, these ideas have been investigated further and applied in many settings, demonstrating state-of-the-art learning performance in object recognition [16, 28, 18, 15] and segmentation [20], audio classification [19, 10], natural language processing [9, 36, 21, 32], collaborative filtering [30], modeling textures [24], modeling motion [34, 33], information retrieval [29, 26], and semi-supervised learning [36, 22]. Poon and Domingos [25] introduced deep sum-product networks as a method to compute partition functions of tractable graphical models. These networks are analogous to traditional artificial neural networks but with nodes that compute either products or weighted sums of their inputs. Analogously to neural networks, we define “hidden” nodes as those nodes that are neither input nodes nor output nodes. If the nodes are organized in layers, we define the “hidden” layers to be those that are neither the input layer nor the output layer. Poon and Domingos [25] report experiments with networks much deeper (30+ hidden layers) than those typically used until now, e.g. in Deep Belief Networks [14, 3], where the number of hidden layers is usually on the order of three to five. Whether such deep architectures have theoretical advantages compared to so-called “shallow” architectures (i.e. those with a single hidden layer) remains an open question. 
After all, in the case of a sum-product network, the output value can always be written as a sum of products of input variables (possibly raised to some power by allowing multiple connections from the same input), and consequently it is easily rewritten as a shallow network with a sum output unit and product hidden units. The argument supported by our theoretical analysis is that a deep architecture is able to compute some functions much more efficiently than a shallow one. Until recently, very few theoretical results supported the idea that deep architectures could present an advantage in terms of representing some functions more efficiently. Most related results originate from the analysis of boolean circuits (see e.g. [2] for a review). Well-known results include the proof that solving the n-bit parity task with a depth-2 circuit requires an exponential number of gates [1, 38], and more generally that there exist functions computable with a polynomial-size depth-k circuit that would require exponential size when restricted to depth k − 1 [11]. Another recent result on boolean circuits by Braverman [8] offers proof of a longstanding conjecture, showing that bounded-depth boolean circuits are unable to distinguish some (non-uniform) input distributions from the uniform distribution (i.e. they are "fooled" by such input distributions). In particular, Braverman's result suggests that shallow circuits can in general be fooled more easily than deep ones, i.e., that they would have more difficulty efficiently representing high-order dependencies (those involving many input variables). It is not obvious that circuit complexity results (that typically consider only boolean or at least discrete nodes) are directly applicable in the context of typical machine learning algorithms such as neural networks (that compute continuous representations of their input). Orponen [23] surveys theoretical results in computational complexity that are relevant to learning algorithms.
For instance, Håstad and Goldmann [12] extended some results to the case of networks of linear threshold units with positivity constraints on the weights. Bengio et al. [5, 7] investigate, respectively, complexity issues in networks of Gaussian radial basis functions and decision trees, showing intrinsic limitations of these architectures e.g. on tasks similar to the parity problem. Utgoff and Stracuzzi [35] informally discuss the advantages of depth in boolean circuits in the context of learning architectures. Bengio [3] suggests that some polynomials could be represented more efficiently by deep sum-product networks, but without providing any formal statement or proofs. This work partly addresses this void by demonstrating families of circuits for which a deep architecture can be exponentially more efficient than a shallow one in the context of real-valued polynomials. Note that we do not address in this paper the problem of learning these parameters: even if an efficient deep representation exists for the function we seek to approximate, in general there is no guarantee for standard optimization algorithms to easily converge to this representation. This paper focuses on the representational power of deep sum-product circuits compared to shallow ones, and studies it by considering particular families of target functions (to be represented by the learner). We first formally define sum-product networks. We consider two families of functions represented by deep sum-product networks (families F and G). For each family, we establish a lower bound on the minimal number of hidden units a depth-2 sum-product network would require to represent a function of this family, showing it is much less efficient than the deep representation.

2 Sum-product networks

Definition 1. A sum-product network is a network composed of units that either compute the product of their inputs or a weighted sum of their inputs (where weights are strictly positive).
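A network in the sense of Definition 1 can be evaluated recursively. The sketch below is our own illustration (the nested-tuple encoding is not from the paper); it also encodes the two-product, one-sum network of Figure 1, for which f(x1, x2, x3, x4) = x1x2 + x3x4:

```python
import math

def eval_spn(node, x):
    """Evaluate a sum-product network given as nested tuples:
    ('in', j)                       -- input unit reading x[j]
    ('prod', [children])            -- product unit
    ('sum', [weights], [children])  -- weighted sum, weights > 0 (Def. 1)
    """
    kind = node[0]
    if kind == 'in':
        return x[node[1]]
    if kind == 'prod':
        return math.prod(eval_spn(c, x) for c in node[1])
    assert all(w > 0 for w in node[1]), "Definition 1 requires positive weights"
    return sum(w * eval_spn(c, x) for w, c in zip(node[1], node[2]))

# The network of Figure 1: two product units feeding one sum unit
fig1 = ('sum', [1, 1],
        [('prod', [('in', 0), ('in', 1)]),
         ('prod', [('in', 2), ('in', 3)])])
```

For example, eval_spn(fig1, [1, 2, 3, 4]) computes 1·2 + 3·4 = 14.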
Here, we restrict our definition of the generic term "sum-product network" to networks whose summation units have positive incoming weights^1, while others are called "negative-weight" networks.

Definition 2. A "negative-weight" sum-product network may contain summation units whose weights are non-positive (i.e. less than or equal to zero).

Finally, we formally define what we mean by deep vs. shallow networks in the rest of the paper.

Definition 3. A "shallow" sum-product network contains a single hidden layer (i.e. a total of three layers when counting the input and output layers, and a depth equal to two).

Definition 4. A "deep" sum-product network contains more than one hidden layer (i.e. a total of at least four layers, and a depth of at least three).

3 The family F

3.1 Definition

The first family of functions we study, denoted by F, is made of functions built from deep sum-product networks that alternate layers of product and sum units with two inputs each (details are provided below). The basic idea we use here is that composing layers (i.e. using a deep architecture) is equivalent to using a factorized representation of the polynomial function computed by the network. Such a factorized representation can be exponentially more compact than its expansion as a sum of products (which can be associated to a shallow network with product units in its hidden layer and a sum unit as output). This is what we formally show in what follows.

Figure 1: Sum-product network computing the function f ∈ F such that i = λ11 = µ11 = 1: two product units ℓ^1_1 = x1x2 and ℓ^1_2 = x3x4 feed a sum unit ℓ^2_1 = λ11 ℓ^1_1 + µ11 ℓ^1_2 = x1x2 + x3x4 = f(x1, x2, x3, x4).

Let n = 4^i, with i a positive integer value. Denote by ℓ^0 the input layer containing scalar variables {x1, . . . , xn}, such that ℓ^0_j = x_j for 1 ≤ j ≤ n.
Now define f ∈ F as any function computed by a sum-product network (deep for i ≥ 2) composed of alternating product and sum layers:

• ℓ^{2k+1}_j = ℓ^{2k}_{2j−1} · ℓ^{2k}_{2j} for 0 ≤ k ≤ i − 1 and 1 ≤ j ≤ 2^{2(i−k)−1}
• ℓ^{2k}_j = λ_{jk} ℓ^{2k−1}_{2j−1} + µ_{jk} ℓ^{2k−1}_{2j} for 1 ≤ k ≤ i and 1 ≤ j ≤ 2^{2(i−k)}

where the weights λ_{jk} and µ_{jk} of the summation units are strictly positive. The output of the network is given by f(x1, . . . , xn) = ℓ^{2i}_1 ∈ R, the unique unit in the last layer. The corresponding (shallow) network for i = 1 and additive weights set to one is shown in Figure 1 (this architecture is also the basic building block of bigger networks for i > 1). (Footnote 1: This condition is required by some of the proofs presented here.) Note that both the input size n = 4^i and the network's depth 2i increase with the parameter i.

3.2 Theoretical results

The main result of this section is presented below in Corollary 1, providing a lower bound on the minimum number of hidden units required by a shallow sum-product network to represent a function f ∈ F. The high-level proof sketch consists in the following steps: (1) Count the number of unique products found in the polynomial representation of f (Lemma 1 and Proposition 1). (2) Show that the only possible architecture for a shallow sum-product network to compute f is to have a hidden layer made of product units, with a sum unit as output (Lemmas 2 to 5). (3) Conclude that the number of hidden units must be at least the number of unique products counted in step (1) (Lemma 6 and Corollary 1).

Lemma 1. Any element ℓ^k_j can be written as a (positively) weighted sum of products of input variables, such that each input variable x_t is used in exactly one unit of ℓ^k. Moreover, the number m_k of products found in the sum computed by ℓ^k_j does not depend on j and obeys the following recurrence rule for k ≥ 0: if k + 1 is odd, then m_{k+1} = m_k², otherwise m_{k+1} = 2 m_k.

Proof. We prove the lemma by induction on k. It is obviously true for k = 0 since ℓ^0_j = x_j.
Assuming this is true for some k ≥ 0, we consider two cases:

• If k + 1 is odd, then ℓ^{k+1}_j = ℓ^k_{2j−1} · ℓ^k_{2j}. By the inductive hypothesis, it is the product of two (positively) weighted sums of products of input variables, and no input variable can appear in both ℓ^k_{2j−1} and ℓ^k_{2j}, so the result is also a (positively) weighted sum of products of input variables. Additionally, if the number of products in ℓ^k_{2j−1} and ℓ^k_{2j} is m_k, then m_{k+1} = m_k², since all products involved in the multiplication of the two units are different (since they use disjoint subsets of input variables), and the sums have positive weights. Finally, by the induction assumption, an input variable appears in exactly one unit of ℓ^k. This unit is an input to a single unit of ℓ^{k+1}, which will thus be the only unit of ℓ^{k+1} where this input variable appears.

• If k + 1 is even, then ℓ^{k+1}_j = λ_{jk} ℓ^k_{2j−1} + µ_{jk} ℓ^k_{2j}. Again, from the induction assumption, it must be a (positively) weighted sum of products of input variables, but with m_{k+1} = 2 m_k such products. As in the previous case, an input variable will appear in the single unit of ℓ^{k+1} that has as input the single unit of ℓ^k in which this variable must appear.

Proposition 1. The number of products in the sum computed in the output unit ℓ^{2i}_1 of a network computing a function in F is m_{2i} = 2^{√n − 1}.

Proof. We first prove by induction on k ≥ 1 that for odd k, m_k = 2^{2^{(k+1)/2} − 2}, and for even k, m_k = 2^{2^{k/2} − 1}. This is obviously true for k = 1 since 2^{2^{(1+1)/2} − 2} = 2^0 = 1, and all units in ℓ^1 are single products of the form x_r x_s. Assuming this is true for some k ≥ 1, then:

• if k + 1 is odd, then from Lemma 1 and the induction assumption, we have: m_{k+1} = m_k² = (2^{2^{k/2} − 1})² = 2^{2^{k/2 + 1} − 2} = 2^{2^{((k+1)+1)/2} − 2}
• if k + 1 is even, then instead we have: m_{k+1} = 2 m_k = 2 · 2^{2^{(k+1)/2} − 2} = 2^{2^{(k+1)/2} − 1}

which shows the desired result for k + 1, and thus concludes the induction proof.
Applying this result with k = 2i (which is even) yields m_{2i} = 2^{2^{(2i)/2} − 1} = 2^{√(2^{2i}) − 1} = 2^{√n − 1}.

Lemma 2. The products computed in the output unit ℓ^{2i}_1 can be split in two groups, one with products containing only variables x_1, . . . , x_{n/2} and one containing only variables x_{n/2+1}, . . . , x_n.

Proof. This is obvious since the last unit is a "sum" unit that adds two terms whose inputs are these two groups of variables (see e.g. Fig. 1).

Lemma 3. The products computed in the output unit ℓ^{2i}_1 involve more than one input variable.

Proof. It is straightforward to show by induction on k ≥ 1 that the products computed by ℓ^k_j all involve more than one input variable, thus it is true in particular for the output layer (k = 2i).

Lemma 4. Any shallow sum-product network computing f ∈ F must have a "sum" unit as output.

Proof. By contradiction, suppose the output unit of such a shallow sum-product network is multiplicative. This unit must have more than one input, because in the case that it has only one input, the output would be either a (weighted) sum of input variables (which would violate Lemma 3), or a single product of input variables (which would violate Proposition 1), depending on the type (sum or product) of the single input hidden unit. Thus the last unit must compute a product of two or more hidden units. It can be re-written as a product of two factors, where each factor corresponds to either one hidden unit, or a product of multiple hidden units (it does not matter here which specific factorization is chosen among all possible ones). Regardless of the type (sum or product) of the hidden units involved, those two factors can thus be written as weighted sums of products of variables x_t (with positive weights, and input variables potentially raised to powers above one). From Lemma 1, both x_1 and x_n must be present in the final output, and thus they must appear in at least one of these two factors.
Without loss of generality, assume x_1 appears in the first factor. Variables x_{n/2+1}, . . . , x_n then cannot be present in the second factor, since otherwise one product in the output would contain both x_1 and one of these variables (this product cannot cancel out since weights must be positive), violating Lemma 2. But by a similar reasoning, since as a result x_n must appear in the first factor, variables x_1, . . . , x_{n/2} cannot be present in the second factor either. Consequently, no input variable can be present in the second factor, leading to the desired contradiction.

Lemma 5. Any shallow sum-product network computing f ∈ F must have only multiplicative units in its hidden layer.

Proof. By contradiction, suppose there exists a "sum" unit in the hidden layer, written s = Σ_{t∈S} α_t x_t with S the set of input indices appearing in this sum, and α_t > 0 for all t ∈ S. Since according to Lemma 4 the output unit must also be a sum (and have positive weights according to Definition 1), the final output will also contain terms of the form β_t x_t for t ∈ S, with β_t > 0. This violates Lemma 3, establishing the contradiction.

Lemma 6. Any shallow negative-weight sum-product network (see Definition 2) computing f ∈ F must have at least 2^{√n − 1} hidden units, if its output unit is a sum and its hidden units are products.

Proof. Such a network computes a weighted sum of its hidden units, where each hidden unit is a product of input variables, i.e. its output can be written as Σ_j w_j Π_t x_t^{γ_{jt}} with w_j ∈ R and γ_{jt} ∈ {0, 1}. In order to compute a function in F, this shallow network thus needs a number of hidden units at least equal to the number of unique products in that function. From Proposition 1, this number is equal to 2^{√n − 1}.

Corollary 1. Any shallow sum-product network computing f ∈ F must have at least 2^{√n − 1} hidden units.

Proof.
This is a direct corollary of Lemmas 4 (showing the output unit is a sum), 5 (showing that hidden units are products), and 6 (showing the desired result for any shallow network with this specific structure, regardless of the sign of weights).

3.3 Discussion

Corollary 1 above shows that in order to compute some function in F with n inputs, the number of units in a shallow network has to be at least 2^{√n−1}, i.e. it grows exponentially in √n. On the other hand, the total number of units in the deep (for i > 1) network computing the same function, as described in Section 3.1, is equal to 1 + 2 + 4 + 8 + ... + 2^{2i−1} (since all units are binary), which is also equal to 2^{2i} − 1 = n − 1, i.e. it grows only quadratically in √n. This shows that some deep sum-product network with n inputs and depth O(log n) can represent with O(n) units what would require O(2^{√n}) units for a depth-2 network. Lemma 6 also shows a similar result regardless of the sign of the weights in the summation units of the depth-2 network, but assumes a specific architecture for this network (products in the hidden layer with a sum as output).

4 The family G

In this section we present similar results with a different family of functions, denoted by G. Compared to F, one important difference of deep sum-product networks built to define functions in G is that they can vary their input size independently of their depth. Their analysis thus provides additional insight when comparing the representational efficiency of deep vs. shallow sum-product networks in the case of a fixed dataset.

4.1 Definition

Networks in family G also alternate sum and product layers, but their units have as inputs all units from the previous layer except one. More formally, define the family G = ∪_{n≥2, i≥0} G_{in} of functions represented by sum-product networks, where the sub-family G_{in} is made of all sum-product networks with n input variables and 2i + 2 layers (including the input layer ℓ^0), such that:

1.
ℓ^1 contains summation units; further layers alternate multiplicative and summation units.

2. Summation units have positive weights.

3. All layers are of size n, except the last layer ℓ^{2i+1}, which contains a single sum unit that sums all units in the previous layer ℓ^{2i}.

4. In each layer ℓ^k for 1 ≤ k ≤ 2i, each unit ℓ^k_j takes as inputs {ℓ^{k−1}_m | m ≠ j}.

An example of a network belonging to G_{1,3} (i.e. with three layers and three input variables) is shown in Figure 2.

[Figure 2: Sum-product network computing a function of G_{1,3} (summation units' weights are all 1's). The sum layer computes ℓ^1_1 = x_2 + x_3, ℓ^1_2 = x_1 + x_3, ℓ^1_3 = x_1 + x_2; the product layer computes ℓ^2_1 = x_1^2 + x_1x_2 + x_1x_3 + x_2x_3 (and similarly for ℓ^2_2, ℓ^2_3); the output is ℓ^3_1 = x_1^2 + x_2^2 + x_3^2 + 3(x_1x_2 + x_1x_3 + x_2x_3) = g(x_1, x_2, x_3).]

4.2 Theoretical results

The main result is stated in Proposition 3 below, establishing a lower bound on the number of hidden units of a shallow sum-product network computing g ∈ G. The proof sketch is as follows:

1. We show that the polynomial expansion of g must contain a large set of products (Proposition 2 and Corollary 2).

2. We use both the number of products in that set as well as their degree to establish the desired lower bound (Proposition 3).

We will also need the following lemma, which states that when n − 1 items each belong to at least n − 1 sets among a total of n sets, then we can associate to each item one of the sets it belongs to without using the same set for different items.

Lemma 7. Let S_1, ..., S_n be n sets (n ≥ 2) containing elements of {P_1, ..., P_{n−1}}, such that for any q, |{r | P_q ∈ S_r}| ≥ n − 1 (i.e. each element P_q belongs to at least n − 1 sets). Then there exist distinct indices r_1, ..., r_{n−1} such that P_q ∈ S_{r_q} for 1 ≤ q ≤ n − 1.

Proof. Omitted due to lack of space (very easy to prove by construction).

Proposition 2.
For any 0 ≤ j ≤ i, and any product of variables P = Π_{t=1}^n x_t^{α_t} such that α_t ∈ N and Σ_t α_t = (n − 1)^j, there exists a unit in ℓ^{2j} whose computed value, when expanded as a weighted sum of products, contains P among these products.

Proof. We prove this proposition by induction on j. First, for j = 0, this is obvious since any P of this form must be made of a single input variable x_t, which appears in ℓ^0_t = x_t.

Suppose now the proposition is true for some j < i. Consider a product P = Π_{t=1}^n x_t^{α_t} such that α_t ∈ N and Σ_t α_t = (n − 1)^{j+1}. P can be factored in n − 1 sub-products of degree (n − 1)^j, i.e. written P = P_1 ... P_{n−1} with P_q = Π_{t=1}^n x_t^{β_{qt}}, β_{qt} ∈ N and Σ_t β_{qt} = (n − 1)^j for all q. By the induction hypothesis, each P_q can be found in at least one unit ℓ^{2j}_{k_q}. As a result, by property 4 (in the definition of family G), each P_q will also appear in the additive layer ℓ^{2j+1}, in at least n − 1 different units (the only sum unit that may not contain P_q is the one that does not have ℓ^{2j}_{k_q} as input). By Lemma 7, we can thus find a set of units ℓ^{2j+1}_{r_q} such that for any 1 ≤ q ≤ n − 1, the product P_q appears in ℓ^{2j+1}_{r_q}, with the indices r_q all different from each other. Let 1 ≤ s ≤ n be such that s ≠ r_q for all q. Then, from property 4 of family G, the multiplicative unit ℓ^{2(j+1)}_s computes the product Π_{q=1}^{n−1} ℓ^{2j+1}_{r_q}, and as a result, when expanded as a sum of products, it contains in particular P_1 ... P_{n−1} = P. The proposition is thus true for j + 1, and by induction, is true for all j ≤ i.

Corollary 2. The output g_{in} of a sum-product network in G_{in}, when expanded as a sum of products, contains all products of variables of the form Π_{t=1}^n x_t^{α_t} such that α_t ∈ N and Σ_t α_t = (n − 1)^i.

Proof. Applying Proposition 2 with j = i, we obtain that all products of this form can be found in the multiplicative units of ℓ^{2i}. Since the output unit ℓ^{2i+1}_1 computes a sum of these multiplicative units (weighted with positive weights), those products are also present in the output.
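As a concrete check of Corollary 2 (our own sketch, not from the paper), one can expand the G_{1,3} network of Figure 2 symbolically, representing polynomials as dicts from exponent tuples to coefficients, and verify that every monomial of total degree (n − 1)^i = 2 appears with a positive coefficient:

```python
from itertools import product
from collections import defaultdict

def poly_add(p, q):
    r = defaultdict(int)
    for mono, c in list(p.items()) + list(q.items()):
        r[mono] += c
    return dict(r)

def poly_mul(p, q):
    r = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return dict(r)

def g_expanded(n):
    """Expand the G_{1,n} network: sum layer, product layer, output sum;
    each unit takes all previous-layer units except the one with its index."""
    xs = [{tuple(1 if t == s else 0 for s in range(n)): 1} for t in range(n)]
    l1 = []
    for j in range(n):                         # sum layer l^1
        s = {}
        for m in range(n):
            if m != j:
                s = poly_add(s, xs[m])
        l1.append(s)
    out = {}
    for j in range(n):                         # product layer l^2, then output sum
        p = {tuple(0 for _ in range(n)): 1}
        for m in range(n):
            if m != j:
                p = poly_mul(p, l1[m])
        out = poly_add(out, p)
    return out

g = g_expanded(3)
assert g == {(2, 0, 0): 1, (0, 2, 0): 1, (0, 0, 2): 1,
             (1, 1, 0): 3, (1, 0, 1): 3, (0, 1, 1): 3}   # matches Figure 2
monomials = [m for m in product(range(3), repeat=3) if sum(m) == 2]
assert all(g.get(m, 0) > 0 for m in monomials)  # Corollary 2 for n = 3, i = 1
```

All six degree-2 monomials indeed appear, with the coefficients shown in Figure 2.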
Proposition 3. A shallow negative-weight sum-product network computing g_{in} ∈ G_{in} must have at least (n − 1)^i hidden units.

Proof. First suppose the output unit of the shallow network is a sum. Then it may be able to compute g_{in}, assuming we allow multiplicative units in the hidden layer to use powers of their inputs in the product they compute (which we allow here to make the proof more generic). However, it will require at least as many of these units as the number of unique products that can be found in the expansion of g_{in}. In particular, from Corollary 2, it will require at least the number of unique tuples of the form (α_1, ..., α_n) such that α_t ∈ N and Σ_{t=1}^n α_t = (n − 1)^i. Denoting d_{ni} = (n − 1)^i, this number is known to be equal to C(n + d_{ni} − 1, d_{ni}), and it is easy to verify it is higher than (or equal to) d_{ni} for any n ≥ 2 and i ≥ 0.

Now suppose the output unit is multiplicative. Then there can be no multiplicative hidden unit, otherwise it would mean one could factor some input variable x_t out of the computed function: this is not possible since, by Corollary 2, for any variable x_t there exist products in the output function that do not involve x_t. So all hidden units must be additive, and since the computed function contains products of degree d_{ni}, there must be at least d_{ni} such hidden units.

4.3 Discussion

Proposition 3 shows that in order to compute the same function as g_{in} ∈ G_{in}, the number of units in the shallow network has to grow exponentially in i, i.e. in the network's depth (while the deep network's size grows linearly in i). The shallow network also needs to grow polynomially in the number of input variables n (with a degree equal to i), while the deep network grows only linearly in n. It means that some deep sum-product network with n inputs and depth O(i) can represent with O(ni) units what would require O((n − 1)^i) units for a depth-2 network.
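The counting step in the proof is easy to check numerically. The sketch below (ours, not from the paper) computes the number of exponent tuples of total degree d = (n − 1)^i via the stars-and-bars formula C(n + d − 1, d), cross-checks it by brute force, and confirms it dominates (n − 1)^i for small n and i:

```python
from math import comb
from itertools import product

def num_tuples(n, d):
    """Number of tuples (a_1, ..., a_n) with a_t in N and sum a_t = d."""
    return comb(n + d - 1, d)

# Brute-force cross-check for n = 3, d = 4: enumerate all candidate tuples.
assert num_tuples(3, 4) == sum(1 for a in product(range(5), repeat=3)
                               if sum(a) == 4)  # both equal 15

# Lower bound used in Proposition 3: C(n + d - 1, d) >= d for d = (n - 1)^i.
for n in range(2, 6):
    for i in range(0, 4):
        d = (n - 1) ** i
        assert num_tuples(n, d) >= d
```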
Note that in the similar results found for family F, the depth-2 network computing the same function as a function in F had to be constrained to either have a specific combination of sum and product units (in Lemma 6) or to have non-negative weights (in Corollary 1). On the contrary, the result presented here for family G holds without requiring either of these assumptions.

5 Conclusion

We compared a deep sum-product network and a shallow sum-product network representing the same function, taken from two families of functions F and G. For both families, we have shown that the number of units in the shallow network has to grow exponentially, compared to a linear growth in the deep network, so as to represent the same functions. The deep version thus offers a much more compact representation of the same functions.

This work focuses on two specific families of functions: finding more general parameterizations of functions leading to similar results would be an interesting topic for future research. Another open question is whether it is possible to represent such functions only approximately (e.g. up to an error bound ε) with a much smaller shallow network. Results by Braverman [8] on boolean circuits suggest that results similar to those presented in this paper may still hold, but this topic has yet to be formally investigated in the context of sum-product networks. A related problem is also to look into functions defined only on discrete input variables: our proofs do not trivially extend to this situation because we can no longer assume that two polynomials yielding the same output values must have the same expansion coefficients (since the number of input combinations becomes finite).

Acknowledgments

The authors would like to thank Razvan Pascanu and David Warde-Farley for their help in improving this manuscript, as well as the anonymous reviewers for their careful reviews. This work was partially funded by NSERC, CIFAR, and the Canada Research Chairs.
References

[1] Ajtai, M. (1983). Σ¹₁-formulae on finite structures. Annals of Pure and Applied Logic, 24(1), 1–48.
[2] Allender, E. (1996). Circuit complexity before the dawn of the new millennium. In 16th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pages 1–18. Lecture Notes in Computer Science 1180, Springer Verlag.
[3] Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127. Also published as a book, Now Publishers, 2009.
[4] Bengio, Y. and LeCun, Y. (2007). Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, Large Scale Kernel Machines. MIT Press.
[5] Bengio, Y., Delalleau, O., and Le Roux, N. (2006). The curse of highly variable functions for local kernel machines. In NIPS'05, pages 107–114. MIT Press, Cambridge, MA.
[6] Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In NIPS 19, pages 153–160. MIT Press.
[7] Bengio, Y., Delalleau, O., and Simard, C. (2010). Decision trees do not generalize to new variations. Computational Intelligence, 26(4), 449–467.
[8] Braverman, M. (2011). Poly-logarithmic independence fools bounded-depth boolean circuits. Communications of the ACM, 54(4), 108–115.
[9] Collobert, R. and Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML 2008, pages 160–167.
[10] Dahl, G. E., Ranzato, M., Mohamed, A., and Hinton, G. E. (2010). Phone recognition with the mean-covariance restricted Boltzmann machine. In Advances in Neural Information Processing Systems (NIPS).
[11] Håstad, J. (1986). Almost optimal lower bounds for small depth circuits. In Proceedings of the 18th annual ACM Symposium on Theory of Computing, pages 6–20, Berkeley, California. ACM Press.
[12] Håstad, J. and Goldmann, M. (1991). On the power of small-depth threshold circuits.
Computational Complexity, 1, 113–129.
[13] Hinton, G. E. and Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507.
[14] Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.
[15] Kavukcuoglu, K., Sermanet, P., Boureau, Y.-L., Gregor, K., Mathieu, M., and LeCun, Y. (2010). Learning convolutional feature hierarchies for visual recognition. In NIPS'10.
[16] Larochelle, H., Erhan, D., Courville, A., Bergstra, J., and Bengio, Y. (2007). An empirical evaluation of deep architectures on problems with many factors of variation. In ICML'07, pages 473–480. ACM.
[17] Lee, H., Ekanadham, C., and Ng, A. (2008). Sparse deep belief net model for visual area V2. In NIPS'07, pages 873–880. MIT Press, Cambridge, MA.
[18] Lee, H., Grosse, R., Ranganath, R., and Ng, A. Y. (2009a). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML 2009. Montreal (Qc), Canada.
[19] Lee, H., Pham, P., Largman, Y., and Ng, A. (2009b). Unsupervised feature learning for audio classification using convolutional deep belief networks. In NIPS'09, pages 1096–1104.
[20] Levner, I. (2008). Data Driven Object Segmentation. Ph.D. thesis, Department of Computer Science, University of Alberta.
[21] Mnih, A. and Hinton, G. E. (2009). A scalable hierarchical distributed language model. In NIPS'08, pages 1081–1088.
[22] Mobahi, H., Collobert, R., and Weston, J. (2009). Deep learning from temporal coherence in video. In ICML'2009, pages 737–744.
[23] Orponen, P. (1994). Computational complexity of neural networks: a survey. Nordic Journal of Computing, 1(1), 94–110.
[24] Osindero, S. and Hinton, G. E. (2008). Modeling image patches with a directed hierarchy of Markov random fields. In NIPS'07, pages 1121–1128, Cambridge, MA. MIT Press.
[25] Poon, H. and Domingos, P. (2011). Sum-product networks: A new deep architecture.
In UAI’2011, Barcelona, Spain. [26] Ranzato, M. and Szummer, M. (2008). Semi-supervised learning of compact document representations with deep networks. In ICML. [27] Ranzato, M., Poultney, C., Chopra, S., and LeCun, Y. (2007). Efficient learning of sparse representations with an energy-based model. In NIPS’06, pages 1137–1144. MIT Press. [28] Ranzato, M., Boureau, Y.-L., and LeCun, Y. (2008). Sparse feature learning for deep belief networks. In NIPS’07, pages 1185–1192, Cambridge, MA. MIT Press. [29] Salakhutdinov, R. and Hinton, G. E. (2007). Semantic hashing. In Proceedings of the 2007 Workshop on Information Retrieval and applications of Graphical Models (SIGIR 2007), Amsterdam. Elsevier. [30] Salakhutdinov, R., Mnih, A., and Hinton, G. E. (2007). Restricted Boltzmann machines for collaborative filtering. In ICML 2007, pages 791–798, New York, NY, USA. [31] Serre, T., Kreiman, G., Kouh, M., Cadieu, C., Knoblich, U., and Poggio, T. (2007). A quantitative theory of immediate visual recognition. Progress in Brain Research, Computational Neuroscience: Theoretical Insights into Brain Function, 165, 33–56. [32] Socher, R., Lin, C., Ng, A. Y., and Manning, C. (2011). Learning continuous phrase representations and syntactic parsing with recursive neural networks. In ICML’2011. [33] Taylor, G. and Hinton, G. (2009). Factored conditional restricted Boltzmann machines for modeling motion style. In ICML 2009, pages 1025–1032. [34] Taylor, G., Hinton, G. E., and Roweis, S. (2007). Modeling human motion using binary latent variables. In NIPS’06, pages 1345–1352. MIT Press, Cambridge, MA. [35] Utgoff, P. E. and Stracuzzi, D. J. (2002). Many-layered learning. Neural Computation, 14, 2497–2539. [36] Weston, J., Ratle, F., and Collobert, R. (2008). Deep learning via semi-supervised embedding. In ICML 2008, pages 1168–1175, New York, NY, USA. [37] Wolpert, D. H. (1996). The lack of a priori distinction between learning algorithms. Neural Computation, 8(7), 1341–1390. [38] Yao, A. 
(1985). Separating the polynomial-time hierarchy by oracles. In Proceedings of the 26th Annual IEEE Symposium on Foundations of Computer Science, pages 1–10.
Expressive Power and Approximation Errors of Restricted Boltzmann Machines

Guido F. Montúfar¹, Johannes Rauh¹, and Nihat Ay¹,²
¹Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
²Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501, USA
{montufar,jrauh,nay}@mis.mpg.de

Abstract

We present explicit classes of probability distributions that can be learned by Restricted Boltzmann Machines (RBMs) depending on the number of units that they contain, and which are representative for the expressive power of the model. We use this to show that the maximal Kullback-Leibler divergence to the RBM model with n visible and m hidden units is bounded from above by (n − 1) − log(m + 1). In this way we can specify the number of hidden units that guarantees a sufficiently rich model containing different classes of distributions and respecting a given error tolerance.

1 Introduction

A Restricted Boltzmann Machine (RBM) [24, 10] is a learning system consisting of two layers of binary stochastic units, a hidden layer and a visible layer, with a complete bipartite interaction graph. RBMs are used as generative models to simulate input distributions of binary data. They can be trained in an unsupervised way and more efficiently than general Boltzmann Machines, which are not restricted to have a bipartite interaction graph [11, 6]. Furthermore, they can be used as building blocks to progressively train and study deep learning systems [13, 4, 16, 21]. Hence, RBMs have received increasing attention in the past years.

An RBM with n visible and m hidden units generates a stationary distribution on the states of the visible units which has the following form:

p_{W,C,B}(v) = (1/Z_{W,C,B}) Σ_{h∈{0,1}^m} exp(h^⊤Wv + C^⊤h + B^⊤v), for all v ∈ {0,1}^n,

where h ∈ {0,1}^m denotes a state vector of the hidden units, W ∈ R^{m×n}, C ∈ R^m and B ∈ R^n constitute the model parameters, and Z_{W,C,B} is a corresponding normalization constant.
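For intuition, the visible distribution of a very small RBM can be computed by brute-force enumeration of the hidden states. The sketch below (ours; the parameter values are arbitrary and only illustrative) evaluates the formula above directly; note that the sum over h also factorizes as exp(B^⊤v) Π_j (1 + exp(W_j·v + C_j)):

```python
from itertools import product
from math import exp

def rbm_visible_distribution(W, C, B):
    """p(v) proportional to sum_h exp(h.W.v + C.h + B.v), enumerating h in {0,1}^m."""
    m, n = len(W), len(B)
    def unnorm(v):
        total = 0.0
        for h in product([0, 1], repeat=m):
            e = sum(B[i] * v[i] for i in range(n))
            e += sum(C[j] * h[j] for j in range(m))
            e += sum(h[j] * W[j][i] * v[i] for j in range(m) for i in range(n))
            total += exp(e)
        return total
    vs = list(product([0, 1], repeat=n))
    z = {v: unnorm(v) for v in vs}
    Z = sum(z.values())          # the normalization constant Z_{W,C,B}
    return {v: z[v] / Z for v in vs}

# Arbitrary example parameters: n = 3 visible, m = 2 hidden units.
W = [[1.0, -0.5, 0.3], [0.2, 0.8, -1.0]]
C = [0.1, -0.2]
B = [0.0, 0.5, -0.3]
p = rbm_visible_distribution(W, C, B)
assert abs(sum(p.values()) - 1.0) < 1e-12  # a proper probability distribution
```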
In the sequel we denote by RBM_{n,m} the set of all probability distributions on {0,1}^n which can be approximated arbitrarily well by a visible distribution generated by the RBM with m hidden and n visible units for an appropriate choice of the parameter values. As shown in [21] (generalizing results from [15]), RBM_{n,m} contains any probability distribution if m ≥ 2^{n−1} − 1. On the other hand, if RBM_{n,m} equals the set P of all probability distributions on {0,1}^n, then it must have at least dim(P) = 2^n − 1 parameters, and thus at least ⌈2^n/(n+1)⌉ − 1 hidden units [21]. In fact, in [8] it was shown that for most combinations of m and n the dimension of RBM_{n,m} (as a manifold, possibly with singularities) equals either the number of parameters or 2^n − 1, whichever is smaller. However, the geometry of RBM_{n,m} is intricate, and even an RBM of dimension 2^n − 1 is not guaranteed to contain all visible distributions; see [20] for counterexamples. In summary, an RBM that can approximate any distribution arbitrarily well must have a very large number of parameters and hidden units. In practice, training such a large system is not desirable or even possible. However, there are at least two reasons why in many cases this is not necessary:

• An appropriate approximation of distributions is sufficient for most purposes.

• The interesting distributions the system shall simulate belong to a small class of distributions, so the model does not need to approximate all distributions. For example, the set of optimal policies in reinforcement learning [25], the set of dynamics kernels that maximize predictive information in robotics [26] or the information flow in neural networks [3] are contained in very low dimensional manifolds; see [2].

On the other hand, usually it is very hard to mathematically describe a set containing the optimal solutions to general problems, or a set of interesting probability distributions (for example the class of distributions generating natural images).
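The parameter-counting argument above is easy to tabulate (our sketch, not from the paper): an RBM with m hidden units has nm + n + m parameters, so matching dim P = 2^n − 1 forces m ≥ ⌈2^n/(n+1)⌉ − 1:

```python
from math import ceil

def min_hidden_units(n):
    """Smallest m with n*m + n + m >= 2**n - 1; equals ceil(2**n/(n+1)) - 1."""
    m = 0
    while n * m + n + m < 2 ** n - 1:
        m += 1
    return m

# The loop and the closed form agree: m(n+1) >= 2^n - 1 - n = 2^n - (n+1).
for n in range(2, 11):
    assert min_hidden_units(n) == ceil(2 ** n / (n + 1)) - 1
```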
Furthermore, although RBMs are parametric models and any choice of the parameters yields a resulting probability distribution, in general it is difficult to explicitly specify this resulting distribution (or even to estimate it [18]). Due to these difficulties, the number of hidden units m is often chosen on the basis of experience [12], or m is considered as a hyperparameter which is optimized by extensive search, depending on the distributions to be simulated by the RBM.

In this paper we give an explicit description of classes of distributions that are contained in RBM_{n,m}, and which are representative for the expressive power of this model. Using this description, we estimate the maximal Kullback-Leibler divergence between an arbitrary probability distribution and the best approximation within RBM_{n,m}.

This paper is organized as follows: Section 2 discusses the different kinds of errors that appear when an RBM learns. Section 3 introduces the statistical models studied in this paper. Section 4 studies submodels of RBM_{n,m}. An upper bound of the approximation error for RBMs is found in Section 5.

2 Approximation Error

When training an RBM to represent a distribution p, there are mainly three contributions to the discrepancy between p and the state of the RBM after training:

1. Usually the underlying distribution p is unknown and only a set of samples generated by p is observed. These samples can be represented as an empirical distribution p^{Data}, which usually is not identical with p.

2. The set RBM_{n,m} does not contain every probability distribution, unless the number of hidden units is very large, as we outlined in the introduction. Therefore, we have an approximation error given by the distance of p^{Data} to the best approximation p^{Data}_{RBM} contained in the RBM model.

3. The learning process may yield a solution p̃^{Data}_{RBM} in RBM_{n,m} which is not the optimum p^{Data}_{RBM}.
This occurs, for example, if the learning algorithm gets trapped in a local optimum, or if it optimizes an objective different from maximum likelihood, e.g. contrastive divergence (CD), see [6].

In this paper we study the expressive power of the RBM model and the Kullback-Leibler divergence from an arbitrary distribution to its best representation within the RBM model. Estimating the approximation error is difficult, because the geometry of the RBM model is not sufficiently understood. Our strategy is to find subsets M ⊆ RBM_{n,m} that are easy to describe. Then the maximal error when approximating probability distributions with an RBM is upper bounded by the maximal error when approximating with M.

Consider a finite set X. A real valued function on X can be seen as a real vector with |X| entries. The set P = P(X) of all probability distributions on X is a (|X| − 1)-dimensional simplex in R^{|X|}. There are several notions of distance between probability distributions, and in turn for the error in the representation (approximation) of a probability distribution. One possibility is to use the induced distance of the Euclidean space R^{|X|}. From the point of view of information theory, a more meaningful distance notion for probability distributions is the Kullback-Leibler divergence:

D(p‖q) := Σ_x p(x) log(p(x)/q(x)).

In this paper we use the base 2 logarithm. The Kullback-Leibler (KL) divergence is non-negative and vanishes if and only if p = q. If the support of q does not contain the support of p it is defined as ∞. The summands with p(x) = 0 are set to 0. The KL-divergence is not symmetric, but it has nice information theoretic properties [14, 7].

[Figure 1: This figure gives an intuition of what the size of an error means for probability distributions on images with 16 × 16 pixels. Every column shows four samples drawn from the best approximation q of the distribution p = (1/2)(δ_{(1...1)} + δ_{(0...0)}) within a partition model with 2 randomly chosen cubical blocks, containing (0...0) and (1...1), of cardinality from 1 (first column) to |X|/2 (last column). As a measure of error ranging from 0 to 1 we take D(p‖q)/D(p‖1/|X|). The last column shows samples from the uniform distribution, which is, in particular, the best approximation of p within RBM_{n,0}. Note that an RBM with 1 hidden unit can approximate p with arbitrary accuracy, see Theorem 4.1.]

If E ⊆ P is a statistical model and if p ∈ P, then any probability distribution p^E ∈ Ē satisfying

D(p‖p^E) = D(p‖E) := min{D(p‖q) : q ∈ Ē}

is called a (generalized) reversed information projection, or rI-projection. Here, Ē denotes the closure of E. If p is an empirical distribution, then one can show that any rI-projection is a maximum likelihood estimate.

In order to assess an RBM or some other model M we use the maximal approximation error with respect to the KL-divergence when approximating arbitrary probability distributions using M:

D_M := max{D(p‖M) : p ∈ P}.

For example, the maximal KL-divergence to the uniform distribution 1/|X| is attained by any Dirac delta distribution δ_x, x ∈ X, and amounts to:

D_{{1/|X|}} = D(δ_x ‖ 1/|X|) = log |X|.   (1)

3 Model Classes

3.1 Exponential families and product measures

In this work we only need a restricted class of exponential families, namely exponential families on a finite set with uniform reference measure. See [5] for more on exponential families. The boundary of discrete exponential families is discussed in [23], which uses a similar notation.

Let A ∈ R^{d×|X|} be a matrix. The columns A_x of A will be indexed by x ∈ X. The rows of A can be interpreted as functions on X. The exponential family E_A with sufficient statistics A consists of all probability distributions of the form p_λ, λ ∈ R^d, where

p_λ(x) = exp(λ^⊤A_x) / Σ_{x'} exp(λ^⊤A_{x'}), for all x ∈ X.

Note that any probability distribution in E_A has full support. Furthermore, E_A is in general not a closed set.
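The divergence defined above and the maximum in equation (1) are easy to verify numerically. A minimal sketch (ours; base-2 logarithms as in the text):

```python
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in bits; summands with p(x)=0 are 0."""
    return sum(px * log2(px / q[x]) for x, px in p.items() if px > 0)

X = range(8)                                     # |X| = 8
uniform = {x: 1 / 8 for x in X}
delta = {x: (1.0 if x == 3 else 0.0) for x in X}  # a Dirac delta
assert kl(uniform, uniform) == 0.0
assert abs(kl(delta, uniform) - log2(8)) < 1e-12  # D(delta||uniform) = log|X| = 3
```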
The closure Ē_A (with respect to the usual topology on R^X) will be important in the following. Exponential families behave nicely with respect to rI-projection: any p ∈ P has a unique rI-projection p^E to Ē_A.

The most important exponential families in this work are the independence models. The independence model of n binary random variables consists of all probability distributions on {0,1}^n that factorize:

Ē_n = { p ∈ P(X) : p(x_1, ..., x_n) = Π_{i=1}^n p_i(x_i) for some p_i ∈ P({0,1}) }.

It is the closure of an n-dimensional exponential family E_n. This model corresponds to the RBM model with no hidden units. An element of the independence model is called a product distribution.

Lemma 3.1 (Corollary 4.1 of [1]) Let E_n be the independence model on {0,1}^n. If n > 0, then D_{E_n} = n − 1. The global maximizers are the distributions of the form (1/2)(δ_x + δ_y), where x, y ∈ {0,1}^n satisfy x_i + y_i = 1 for all i.

This result should be compared with (1). Although the independence model is much larger than the set {1/|X|}, the maximal divergence decreases only by 1. As shown in [22], if E is any exponential family of dimension k, then D_E ≥ log(|X|/(k + 1)). Thus, this notion of distance is rather strong. The exponential families satisfying D_E = log(|X|/(k + 1)) are partition models; they will be defined in the following section.

3.2 Partition models and mixtures of products with disjoint supports

The mixture of m models M_1, ..., M_m ⊆ P is the set of all convex combinations

p = Σ_i α_i p_i, where p_i ∈ M_i, α_i ≥ 0, Σ_i α_i = 1.   (2)

In general, mixture models are complicated objects. Even if all models M_1 = ... = M_m are equal, it is difficult to describe the mixture [17, 19]. The situation simplifies considerably if the models have disjoint supports. Note that given any partition ξ = {X_1, ..., X_m} of X, any p ∈ P can be written as p(x) = p^{X_i}(x) p(X_i) for all x ∈ X_i and i ∈ {1, ..., m}, where p^{X_i} is a probability measure in P(X_i) for all i.

Lemma 3.2 Let ξ = {X_1, ...
, X_m} be a partition of X and let M_1, ..., M_m be statistical models such that M_i ⊆ P(X_i). Consider any p ∈ P and corresponding p^{X_i} such that p(x) = p^{X_i}(x) p(X_i) for x ∈ X_i. Let p_i be an rI-projection of p^{X_i} to M_i. Then the rI-projection p^M of p to the mixture M of M_1, ..., M_m satisfies

p^M(x) = p(X_i) p_i(x), whenever x ∈ X_i.

Therefore, D(p‖M) = Σ_i p(X_i) D(p^{X_i}‖M_i), and so D_M = max_{i=1,...,m} D_{M_i}.

Proof. Let p ∈ M be as in (2). Then D(q‖p) = Σ_{i=1}^m q(X_i) D(q^{X_i}‖p_i) for all q ∈ P. For fixed q this sum is minimal if and only if each term is minimal. □

If each M_i is an exponential family, then the mixture is also an exponential family (this is not true if the supports of the models M_i are not disjoint). In the rest of this section we discuss two examples.

If each M_i equals the set containing just the uniform distribution on X_i, then M is called the partition model of ξ, denoted by P_ξ. The partition model P_ξ is given by all distributions with constant value on each block X_i, i.e. those that satisfy p(x) = p(y) for all x, y ∈ X_i. This is the closure of the exponential family with sufficient statistics A_x = (χ_1(x), χ_2(x), ..., χ_d(x))^⊤, where χ_i := χ_{X_i} is 1 for x ∈ X_i and 0 everywhere else. See [22] for interesting properties of partition models. The partition models include the set of finite exchangeable distributions (see e.g. [9]), where the blocks of the partition are the sets of binary vectors which have the same number of entries equal to one: the probability of a vector v depends only on the number of ones, but not on their position.

Corollary 3.3 Let ξ = {X_1, ..., X_m} be a partition of X. Then D_{P_ξ} = max_{i=1,...,m} log |X_i|.

[Figure 2, left panel: Models in P({0,1}^2). The blue line represents the partition model P_ξ with partition ξ = {(11), (01)} ∪ {(00), (10)}. The dashed lines represent the set of KL-divergence maximizers for P_ξ, which includes (1/2)(δ_{(11)} + δ_{(01)}).]
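Both divergence formulas can be checked numerically. The sketch below (ours, base-2 logs) verifies Lemma 3.1, computing D((1/2)(δ_x + δ_y)‖Ē_n) = n − 1 for a complementary pair via the product-of-marginals rI-projection, and Corollary 3.3, where a Dirac delta on the largest block of a partition attains max_i log|X_i|:

```python
from itertools import product
from math import log2

def kl(p, q):
    """D(p||q) in bits; summands with p(x) = 0 contribute 0."""
    return sum(p[x] * log2(p[x] / q[x]) for x in p if p[x] > 0)

# Lemma 3.1: p = (1/2)(delta_x + delta_y) for complementary x, y in {0,1}^n.
n = 3
X = list(product([0, 1], repeat=n))
p = {v: (0.5 if v in ((0,) * n, (1,) * n) else 0.0) for v in X}
marg = [sum(p[v] for v in X if v[i] == 1) for i in range(n)]
# rI-projection to the independence model = product of the marginals of p
proj = {v: 1.0 for v in X}
for v in X:
    for i in range(n):
        proj[v] *= marg[i] if v[i] == 1 else 1.0 - marg[i]
assert abs(kl(p, proj) - (n - 1)) < 1e-12       # D = n - 1

# Corollary 3.3: partition of {0, ..., 7} into blocks of sizes 2 and 6.
blocks = [(0, 1), (2, 3, 4, 5, 6, 7)]
delta = {x: (1.0 if x == 2 else 0.0) for x in range(8)}
q = {}
for block in blocks:            # rI-projection: spread block mass uniformly
    mass = sum(delta[x] for x in block)
    for x in block:
        q[x] = mass / len(block)
assert abs(kl(delta, q) - log2(6)) < 1e-12      # = max_i log|X_i|
```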
[Figure 2, right panel: The mixture of the product distributions Ē_1 and Ē_2 with disjoint supports on {(11), (01)} and {(00), (10)}, corresponding to the same partition ξ, equals the whole simplex P.]

Now assume that X = {0,1}^n is the set of binary vectors of length n. As a subset of R^n it consists of the vertices (extreme points) of the n-dimensional hypercube. The vertices of a k-dimensional face of the n-cube are given by fixing the values of x in n − k positions:

{x ∈ {0,1}^n : x_i = x̃_i ∀ i ∈ I}, for some I ⊆ {1, ..., n} with |I| = n − k.

We call such a subset Y ⊆ X cubical, or a face of the n-cube. A cubical subset of cardinality 2^k can be naturally identified with {0,1}^k. This identification allows us to define independence models and product measures on P(Y) ⊆ P(X). Note that product measures on Y are also product measures on X, and the independence model on Y is a subset of the independence model on X.

Corollary 3.4 Let ξ = {X_1, ..., X_m} be a partition of X = {0,1}^n into cubical sets. For any i let E_i be the independence model on X_i, and let M be the mixture of Ē_1, ..., Ē_m. Then D_M = max_{i=1,...,m} log(|X_i|) − 1.

See Figure 1 for an intuition of the approximation error of partition models, and see Figure 2 for small examples of a partition model and of a mixture of products with disjoint support.

4 Classes of distributions that RBMs can learn

Consider a set ξ = {X_i}_{i=1}^m of m disjoint cubical sets X_i in X. Such a ξ is a partition of some subset ∪ξ = ∪_i X_i of X into m disjoint cubical sets. We write G_m for the collection of all such partitions. We have the following result:

Theorem 4.1 RBM_{n,m} contains the following distributions:

• Any mixture of one arbitrary product distribution, m − k product distributions with support on arbitrary but disjoint faces of the n-cube, and k arbitrary distributions with support on any edges of the n-cube, for any 0 ≤ k ≤ m. In particular:

• Any mixture of m + 1 product distributions with disjoint cubical supports.
In consequence, RBM_{n,m} contains the partition model of any partition in G_{m+1}. Restricting the cubical sets of the second item to edges, i.e. pairs of vectors differing in one entry, we see that the above theorem implies the following previously known result, which was shown in [21].

Corollary 4.2 RBM_{n,m} contains the following distributions:

• Any distribution with a support set that can be covered by m + 1 pairs of vectors differing in one entry. In particular, this includes:

• Any distribution in P with a support of cardinality smaller than or equal to m + 1.

Corollary 4.2 implies that an RBM with m ≥ 2^{n−1} − 1 hidden units is a universal approximator of distributions on {0,1}^n, i.e. it can approximate any distribution to an arbitrarily good accuracy.

Assume m + 1 = 2^k and let ξ be a partition of X into m + 1 disjoint cubical sets of equal size. Let us denote by P_{ξ,1} the set of all distributions which can be written as a mixture of m + 1 product distributions with support on the elements of ξ. The dimension of P_{ξ,1} is given by

dim P_{ξ,1} = (m+1) log(2^n/(m+1)) + (m+1) + n = (m+1)·n + (m+1) + n − (m+1) log(m+1).

The dimension of the set of visible distributions represented by an RBM is at most equal to the number of parameters, see [21]; this is m·n + m + n. This means that the class given above has roughly the same dimension as the set of distributions that can be represented. In fact,

dim P_{ξ,1} − dim RBM_{n,m} = n + 1 − (m+1) log(m+1).

This means that the class of distributions P_{ξ,1}, which by Theorem 4.1 can be represented by RBM_{n,m}, is not contained in RBM_{n,m−1} when (m+1)^{m+1} ≤ 2^{n+1}.

Proof of Theorem 4.1. The proof draws on ideas from [15] and [21]. An RBM with no hidden units can represent precisely the independence model, i.e. all product distributions, and in particular any uniform distribution on a face of the n-cube. Consider an RBM with m − 1 hidden units.
For any choice of the parameters W ∈ R^{(m−1)×n}, B ∈ R^n, C ∈ R^{m−1} we can write the resulting distribution on the visible units as: p(v) = Σ_h z(v, h) / Σ_{v′,h′} z(v′, h′) , (3) where z(v, h) = exp(hWv + Bv + Ch). Appending one additional hidden unit, with connection weights w to the visible units and bias c, produces a new distribution which can be written as follows: p_{w,c}(v) = (1 + exp(wv + c)) Σ_h z(v, h) / Σ_{v′,h′} (1 + exp(wv′ + c)) z(v′, h′) . Consider now any set I ⊆ [n] := {1, . . . , n} and an arbitrary visible vector u ∈ X. The values of u in the positions [n] \ I define a face F := {v ∈ X : v_i = u_i , ∀i ∉ I} of the n-cube of dimension |I|. Let 1 := (1, . . . , 1) ∈ R^n and denote by u^{I,0} the vector with entries u^{I,0}_i = u_i , ∀i ∉ I and u^{I,0}_i = 0 , ∀i ∈ I. Let λ^I ∈ R^n with λ^I_i = 0 , ∀i ∉ I, and let λ_c, a ∈ R. Define the connection weights w and c as follows: w = a(u^{I,0} − (1/2)1^{I,0}) + λ^I , c = −a(u^{I,0} − (1/2)1^{I,0})^⊤ u + λ_c . For this choice and a → ∞, equation (4) yields: p_{w,c}(v) = p(v) / (1 + Σ_{v′∈F} exp(λ^I · v′ + λ_c) p(v′)) for all v ∉ F, and p_{w,c}(v) = (1 + exp(λ^I · v + λ_c)) p(v) / (1 + Σ_{v′∈F} exp(λ^I · v′ + λ_c) p(v′)) for all v ∈ F. (4) If the initial p from equation (3) is such that its restriction to F is a product distribution, then p(v) = K exp(η^I · v) , ∀v ∈ F, where K is a constant and η^I is a vector with η^I_i = 0 , ∀i ∉ I. We can choose λ^I = β^I − η^I and exp(λ_c) = (α/(1 − α)) · 1/(K Σ_{v∈F} exp(β^I · v)). For this choice, equation (4) yields: p_{w,c} = (1 − α)p + α p̂ , where p̂ is a product distribution with support in F and arbitrary natural parameters β^I, and α is an arbitrary mixture weight in [0, 1]. Finally, the product distributions on edges of the cube are arbitrary, see [19] or [21] for details, and hence the restriction of any p to any edge is a product distribution.
□ [Figure 3: two panels, "RBMs with 3 visible units" and "RBMs with 4 visible units"; axes D(p_parity∥p_RBM) vs. number of hidden units m; reference curve (n − 1) − log(m + 1).] Figure 3: This figure demonstrates our results for n = 3 and n = 4 visible units. The red curves represent the bounds from Theorem 5.1. We fixed p_parity as target distribution, the uniform distribution on binary length-n vectors with an even number of ones. The distribution p_parity is not the KL-maximizer from RBM_{n,m}, but it is in general difficult to represent. Qualitatively, samples from p_parity look uniformly distributed, and representing p_parity requires the maximal number of product mixture components [20, 19]. For both values of n and each m = 0, . . . , 2^n/2 we initialized 500 resp. 1000 RBMs at parameter values chosen uniformly at random in the range [−10, 10]. The inset of the left figure shows the resulting KL-divergence D(p_parity∥p^rand_RBM) (for n = 4 the resulting KL-divergence was larger). Randomly chosen distributions in RBM_{n,m} are likely to be very far from the target distribution. We trained these randomly initialized RBMs using CD for 500 training epochs, learning rate 1 and a list of even parity vectors as training data. The result after training is given by the blue circles. After training the RBMs the result is often not better than the uniform distribution, for which D(p_parity∥1/|{0,1}^n|) = 1. For each m, the best set of parameters after training was used to initialize a further CD training with a smaller learning rate (green squares, mostly covered) followed by a short maximum likelihood gradient ascent (red filled squares). 5 Maximal Approximation Errors of RBMs Let m < 2^{n−1} − 1. By Theorem 4.1 all partition models for partitions of {0, 1}^n into m + 1 cubical sets are contained in RBM_{n,m}.
Applying Corollary 3.3 to such a partition, where the cardinality of all blocks is at most 2^{n−⌊log(m+1)⌋}, yields the bound D_{RBM_{n,m}} ≤ n − ⌊log(m + 1)⌋. Similarly, using mixtures of product distributions, Theorem 4.1 and Corollary 3.4 imply the smaller bound D_{RBM_{n,m}} ≤ n − 1 − ⌊log(m + 1)⌋. In this section we derive an improved bound which strictly decreases, as m increases, until 0 is reached. Theorem 5.1 Let m ≤ 2^{n−1} − 1. Then the maximal Kullback-Leibler divergence from any distribution on {0, 1}^n to RBM_{n,m} is upper bounded by max_{p∈P} D(p∥RBM_{n,m}) ≤ (n − 1) − log(m + 1) . Conversely, given an error tolerance 0 ≤ ε ≤ 1, the choice m ≥ 2^{(n−1)(1−ε)} − 1 ensures a sufficiently rich RBM model that satisfies D_{RBM_{n,m}} ≤ ε D_{RBM_{n,0}}. For m = 2^{n−1} − 1 the error vanishes, corresponding to the fact that an RBM with that many hidden units is a universal approximator. In Figure 3 we use computer experiments to illustrate Theorem 5.1. The proof makes use of the following lemma: Lemma 5.2 Let n_1, . . . , n_m ≥ 0 be such that 2^{n_1} + · · · + 2^{n_m} = 2^n. Let M be the union of all mixtures of independence models corresponding to all cubical partitions of X into blocks of cardinalities 2^{n_1}, . . . , 2^{n_m}. Then D_M ≤ Σ_{i: n_i > 1} (n_i − 1)/2^{n−n_i} . Proof of Lemma 5.2 The proof is by induction on n. If n = 1, then m = 1 or m = 2, and in both cases it is easy to see that the inequality holds (both sides vanish). If n > 1, then order the n_i such that n_1 ≥ n_2 ≥ · · · ≥ n_m ≥ 0. Without loss of generality assume m > 1. Let p ∈ P(X), and let Y be a cubical subset of X of cardinality 2^{n−1} such that p(Y) ≤ 1/2. Since the numbers 2^{n_1} + · · · + 2^{n_i} for i = 1, . . . , m contain all multiples of 2^{n_1} up to 2^n and 2^n/2^{n_1} is even, there exists k such that 2^{n_1} + · · · + 2^{n_k} = 2^{n−1} = 2^{n_{k+1}} + · · · + 2^{n_m}. Let M′ be the union of all mixtures of independence models corresponding to all cubical partitions ξ = {X_1, . . . , X_m} of X into m blocks of cardinalities 2^{n_1}, . . . , 2^{n_m} such that X_1 ∪ · · · ∪ X_k = Y.
In the following, the symbol Σ′_i shall denote summation over all indices i such that n_i > 1. By induction, D(p∥M) ≤ D(p∥M′) ≤ p(Y) Σ′_{i=1}^{k} (n_i − 1)/2^{n−1−n_i} + p(X \ Y) Σ′_{j=k+1}^{m} (n_j − 1)/2^{n−1−n_j} . (5) There exist j_1 = k + 1 < j_2 < · · · < j_k < j_{k+1} = m + 1 such that 2^{n_i} = 2^{n_{j_i}} + · · · + 2^{n_{j_{i+1}−1}} for all i ≤ k. Note that Σ′_{j=j_i}^{j_{i+1}−1} (n_j − 1)/2^{n−1−n_j} ≤ ((n_i − 1)/2^{n−1}) (2^{n_{j_i}} + · · · + 2^{n_{j_{i+1}−1}}) = (n_i − 1)/2^{n−1−n_i} , and therefore (1/2 − p(Y)) (n_i − 1)/2^{n−1−n_i} + (1/2 − p(X \ Y)) Σ′_{j=j_i}^{j_{i+1}−1} (n_j − 1)/2^{n−1−n_j} ≥ 0 . Adding these terms for i = 1, . . . , k to the right hand side of equation (5) yields D(p∥M) ≤ (1/2) Σ′_{i=1}^{k} (n_i − 1)/2^{n−1−n_i} + (1/2) Σ′_{j=k+1}^{m} (n_j − 1)/2^{n−1−n_j} , from which the assertions follow. □ Proof of Theorem 5.1 From Theorem 4.1 we know that RBM_{n,m} contains the union M of all mixtures of independence models corresponding to all partitions with up to m + 1 cubical blocks. Hence, D_{RBM_{n,m}} ≤ D_M. Let k = n − ⌊log(m + 1)⌋ and l = 2m + 2 − 2^{n−k+1} ≥ 0; then l·2^{k−1} + (m + 1 − l)·2^k = 2^n. Lemma 5.2 with n_1 = · · · = n_l = k − 1 and n_{l+1} = · · · = n_{m+1} = k implies D_M ≤ l(k − 2)/2^{n−k+1} + (m + 1 − l)(k − 1)/2^{n−k} = k − (m + 1)/2^{n−k} . The assertion follows from log(m + 1) ≤ (n − k) + (m + 1)/2^{n−k} − 1, where log(1 + x) ≤ x for all x > 0 was used. □ 6 Conclusion We studied the expressive power of the Restricted Boltzmann Machine model with n visible and m hidden units. We presented a hierarchy of explicit classes of probability distributions that an RBM can represent. These classes include large collections of mixtures of m + 1 product distributions, in particular any mixture of an arbitrary product distribution and m further product distributions with disjoint supports. The geometry of these submodels is easier to study than that of the RBM models, while these subsets still capture many of the distributions contained in the RBM models. Using these results we derived bounds for the approximation errors of RBMs. We showed that it is always possible to reduce the error to at most (n − 1) − log(m + 1).
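The error bound of Theorem 5.1 is straightforward to evaluate numerically. The sketch below is our own illustrative helper, not part of the paper; the function names `rbm_error_bound` and `hidden_units_for_tolerance` are assumptions of this sketch, and logarithms are base 2 as in the text.

```python
import math

def rbm_error_bound(n, m):
    """Upper bound (n - 1) - log2(m + 1) on the maximal KL divergence
    from any distribution on {0,1}^n to RBM_{n,m} (Theorem 5.1)."""
    assert 0 <= m <= 2 ** (n - 1) - 1
    return (n - 1) - math.log2(m + 1)

def hidden_units_for_tolerance(n, eps):
    """Smallest integer m with m >= 2^{(n-1)(1-eps)} - 1, which by
    Theorem 5.1 guarantees D_{RBM_{n,m}} <= eps * D_{RBM_{n,0}} = eps*(n-1)."""
    return math.ceil(2 ** ((n - 1) * (1 - eps)) - 1)

# With m = 2^{n-1} - 1 hidden units (a universal approximator) the bound is 0:
print(rbm_error_bound(10, 2 ** 9 - 1))  # 0.0
# Hidden units sufficient to halve the worst-case error (eps = 0.5) for n = 10:
m = hidden_units_for_tolerance(10, 0.5)
print(m, rbm_error_bound(10, m))
```

Note how the required number of hidden units grows exponentially in (n − 1)(1 − ε), mirroring the exponential size of the universal approximator.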
That is, given any target distribution, there is a distribution within the RBM model for which the Kullback-Leibler divergence between both is not larger than that number. Our results give a theoretical basis for selecting the size of an RBM which accounts for a desired error tolerance. Computer experiments showed that the bound captures the order of magnitude of the true approximation error, at least for small examples. However, learning may not always find the best approximation, resulting in an error that may well exceed our bound. Acknowledgments Nihat Ay acknowledges support by the Santa Fe Institute. 8 References [1] N. Ay and A. Knauf. Maximizing multi-information. Kybernetika, 42:517–538, 2006. [2] N. Ay, G. Mont´ufar, and J. Rauh. Selection criteria for neuromanifolds of stochastic dynamics. International Conference on Cognitive Neurodynamics, 2011. [3] N. Ay and T. Wennekers. Dynamical properties of strongly interacting Markov chains. Neural Networks, 16:1483–1497, 2003. [4] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2007. [5] L. Brown. Fundamentals of Statistical Exponential Families: With Applications in Statistical Decision Theory. Inst. Math. Statist., Hayworth, CA, USA, 1986. [6] M. A. Carreira-Perpi˜nan and G. E. Hinton. On contrastive divergence learning. In Proceedings of the 10-th International Workshop on Artificial Intelligence and Statistics, 2005. [7] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 2006. [8] M. A. Cueto, J. Morton, and B. Sturmfels. Geometry of the Restricted Boltzmann Machine. In M. A. G. Viana and H. P. Wynn, editors, Algebraic methods in statistics and probability II, AMS Special Session. AMS, 2010. [9] P. Diaconis and D. Freedman. Finite exchangeable sequences. Ann. Probab., 8:745–764, 1980. [10] Y. Freund and D. Haussler. Unsupervised learning of distributions on binary vectors using 2-layer networks. 
NIPS, pages 912–919, 1992. [11] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Comput., 14:1771–1800, 2002. [12] G. E. Hinton. A practical guide to training Restricted Boltzmann Machines, version 1. Technical report, UTML2010-003, University of Toronto, 2010. [13] G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for Deep Belief Nets. Neural Comput., 18:1527–1554, 2006. [14] S. Kullback and R. Leibler. On information and sufficiency. Ann. Math. Stat., 22:79–86, 1951. [15] N. Le Roux and Y. Bengio. Representational power of Restricted Boltzmann Machines and Deep Belief Networks. Neural Comput., 20(6):1631–1649, 2008. [16] N. Le Roux and Y. Bengio. Deep Belief Networks are compact universal approximators. Neural Comput., 22:2192–2207, 2010. [17] B. Lindsay. Mixture models: theory, geometry, and applications. Inst. Math. Statist., 1995. [18] P. M. Long and R. A. Servedio. Restricted Boltzmann Machines are hard to approximately evaluate or simulate. In Proceedings of the 27-th ICML, pages 703–710, 2010. [19] G. Mont´ufar. Mixture decompositions using a decomposition of the sample space. ArXiv 1008.0204, 2010. [20] G. Mont´ufar. Mixture models and representational power of RBMs, DBNs and DBMs. NIPS Deep Learning and Unsupervised Feature Learning Workshop, 2010. [21] G. Mont´ufar and N. Ay. Refinements of universal approximation results for Deep Belief Networks and Restricted Boltzmann Machines. Neural Comput., 23(5):1306–1319, 2011. [22] J. Rauh. Finding the maximizers of the information divergence from an exponential family. PhD thesis, Universit¨at Leipzig, 2011. [23] J. Rauh, T. Kahle, and N. Ay. Support sets of exponential families and oriented matroids. Int. J. Approx. Reason., 52(5):613–626, 2011. [24] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory. In Symposium on Parallel and Distributed Processing, 1986. [25] R. S. Sutton and A. G. Barto. 
Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning). MIT Press, March 1998. [26] K. G. Zahedi, N. Ay, and R. Der. Higher coordination with less control – a result of information maximization in the sensorimotor loop. Adaptive Behavior, 18(3-4):338–355, 2010.
Directed Graph Embedding: an Algorithm based on Continuous Limits of Laplacian-type Operators Dominique C. Perrault-Joncas Department of Statistics University of Washington Seattle, WA 98195 dcpj@stat.washington.edu Marina Meilă Department of Statistics University of Washington Seattle, WA 98195 mmp@stat.washington.edu Abstract This paper considers the problem of embedding directed graphs in Euclidean space while retaining directional information. We model the observed graph as a sample from a manifold endowed with a vector field, and we design an algorithm that separates and recovers the features of this process: the geometry of the manifold, the data density and the vector field. The algorithm is motivated by our analysis of Laplacian-type operators and their continuous limit as generators of diffusions on a manifold. We illustrate the recovery algorithm on both artificially constructed and real data. 1 Motivation Recent advances in graph embedding and visualization have focused on undirected graphs, for which the graph Laplacian properties make the analysis particularly elegant [1, 2]. However, a large amount of graph data, such as social networks, alignment scores between biological sequences, and citation data, is naturally asymmetric. A commonly used approach for this type of data is to disregard the asymmetry by studying the spectral properties of W + W^T or W^T W, where W is the affinity matrix of the graph. Some approaches have been proposed to preserve the asymmetry information contained in the data [3, 4, 5], or to define directed Laplacian operators [6]. Although quite successful, these works adopt a purely graph-theoretical point of view. Thus, they are not concerned with the generative process that produces the graph, nor with the interpretability and statistical properties of their algorithms.
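To see concretely what the symmetrization W + W^T discards, consider a minimal sketch on a hypothetical 3-node graph (our toy example, not from the paper): two graphs with opposite edge directions have identical symmetric parts, while the antisymmetric part carries all the directional information.

```python
import numpy as np

# Two hypothetical 3-node directed graphs with opposite preferred directions.
W_forward = np.array([[0., 5., 0.],
                      [1., 0., 5.],
                      [0., 1., 0.]])
W_backward = W_forward.T  # every edge direction reversed

# Symmetrizing makes the two graphs indistinguishable:
assert np.allclose(W_forward + W_forward.T, W_backward + W_backward.T)

# ...but the antisymmetric part, dropped by the symmetrized analysis,
# is exactly where the direction lives:
A = (W_forward - W_forward.T) / 2
print(A[0, 1])  # 2.0: node 0 -> node 1 is preferred over the reverse
```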
In contrast, we view the nodes of a directed graph as a finite sample from a manifold in Euclidean space, and the edges as macroscopic observations of a diffusion kernel between neighboring points on the manifold. We explore how this diffusion kernel determines the overall connectivity and asymmetry of the resulting graph and demonstrate how Laplacian-type operators of this graph can offer insights into the underlying generative process. Based on the analysis of the Laplacian-type operators, we derive an algorithm that, in the limit of infinite sample and vanishing bandwidth, recovers the key features of the sampling process: manifold geometry, sampling distribution, and local directionality, up to their intrinsic indeterminacies. 2 Model The first premise here is that we observe a directed graph G, with n nodes, having weights W = [W_ij] for the edge from node i to node j. Following common Laplacian-based embedding approaches, we assume that G is a geometric random graph constructed from n points sampled according to distribution p = e^{−U} on an unobserved compact smooth manifold M ⊆ R^l of known intrinsic dimension d ≤ l. The edge weight W_ij is then determined by a directed similarity kernel k_ϵ(x_i, x_j) with bandwidth ϵ. The directional component of k_ϵ(x_i, x_j) will be taken to be derived from a vector field r on M, which assigns a preferred direction between weights W_ij and W_ji. The choice of a vector field r to characterize the directional component of G might seem restrictive at first. In the asymptotic limit of ϵ → 0 and n → ∞, however, kernels are characterized by their diffusion, drift, and source components [7]. As such, r is sufficient to characterize any directionality associated with a drift component, and as it turns out, the component of r normal to M in R^l can also be used to characterize any source component. As for the diffusion component, it is not possible to uniquely identify it from G alone [8].
Some absolute knowledge of M is needed to say anything about it. Hence, without loss of generality, we will construct k_ϵ(x_i, x_j) so that the diffusion component ends up being isotropic and constant, i.e. equal to the Laplace-Beltrami operator ∆ on M. The schematic of this generative process is shown in the top left of Figure 1 below. From left to right: the graph generative process mapping the sample on M to the geometric random graph G via the kernel k_ϵ(x, y), then the subsequent embedding Ψ_n of G by the operators H^{(α)}_{aa,n}, H^{(α)}_{ss,n} (defined in section 3.1). As these operators converge to their respective limits, H^{(α)}_{aa} and H^{(α)}_{ss}, so will Ψ_n → Ψ, p_n → p, and r_n → r. We design an algorithm that, given G, produces the top right embedding (Ψ_n, p_n, and r_n). Figure 1: Schematic of our framework. The question is then as follows: can the generative process' geometry M, distribution p = e^{−U}, and directionality r, be recovered from G? In other words, is there an embedding of G in R^m, m ≥ d, that approximates all three components of the process and that is also consistent as the sample size increases and the bandwidth vanishes? In the case of undirected graphs, the theory of Laplacian eigenmaps [1] and Diffusion maps [9] answers this question in the affirmative, in that the geometry of M and p = e^{−U} can be inferred using spectral graph theory. The aim here is to build on the undirected problem and recover all three components of the generative process from a directed graph G. The spectral approach to undirected graph embedding relies on the fact that eigenfunctions of the Laplace-Beltrami operator are known to preserve the local geometry of M [1]. With a consistent empirical Laplace-Beltrami operator based on G, its eigenvectors also recover the geometry of M and converge to the corresponding eigenfunctions on M. For a directed graph G, an additional operator is needed to recover the local directional component r, but the principle remains the same.
The schematic for this is shown in Figure 1, where two operators - H^{(α)}_{ss,n}, introduced in [9] for undirected embeddings, and H^{(α)}_{aa,n}, a new operator defined in section 3.1 - are used to obtain the embedding Ψ_n, distribution p_n, and vector field r_n. As H^{(α)}_{aa,n} and H^{(α)}_{ss,n} converge to H^{(α)}_{aa} and H^{(α)}_{ss}, Ψ_n, p_n, and r_n also converge to Ψ, p, and r, where Ψ is the local-geometry-preserving embedding of M into R^m. The algorithm we propose in Section 4 will calculate the matrices corresponding to H^{(α)}_{·,n} from the graph G, and with their eigenvectors, will find estimates for the node coordinates Ψ, the directional component r, and the sampling distribution p. In the next section we briefly describe the mathematical models of the diffusion processes that our model relies on. 2.1 Problem Setting The similarity kernel k_ϵ(x, y) can be used to define transport operators on M. The natural transport operator is defined by normalizing k_ϵ(x, y) as T_ϵ[f](x) = ∫_M (k_ϵ(x, y)/p_ϵ(x)) f(y) p(y) dy , where p_ϵ(x) = ∫_M k_ϵ(x, y) p(y) dy . (1) T_ϵ[f](x) represents the diffusion of a distribution f(y) by the transition density k_ϵ(x, y)p(y)/∫ k_ϵ(x, y′)p(y′)dy′. The eigenfunctions of this infinitesimal operator are the continuous limit of the eigenvectors of the transition probability matrix P = D^{−1}W given by normalizing the affinity matrix W of G by D = diag(W1) [10]. Meanwhile, the infinitesimal transition ∂f/∂t = lim_{ϵ→0} (T_ϵ − I)f / ϵ (2) defines the backward equation for this diffusion process over M based on the kernel k_ϵ. Obtaining the explicit expression for transport operators like (2) is then the main technical challenge. 2.2 Choice of Kernel In order for T_ϵ[f] to have the correct asymptotic form, some hypotheses about the similarity kernel k_ϵ(x, y) are required. The hypotheses are best presented by considering the decomposition of k_ϵ(x, y) into symmetric h_ϵ(x, y) = h_ϵ(y, x) and anti-symmetric a_ϵ(x, y) = −a_ϵ(y, x) components: k_ϵ(x, y) = h_ϵ(x, y) + a_ϵ(x, y) .
(3) The symmetric component h_ϵ(x, y) is assumed to satisfy the following properties: 1. h_ϵ(x, y) = h(∥y − x∥²/ϵ)/ϵ^{d/2}, and 2. h ≥ 0 and h is exponentially decreasing as ∥y − x∥ → ∞. This form of symmetric kernel was used in [9] to analyze the diffusion map. For the asymmetric part of the similarity kernel, we assume the form a_ϵ(x, y) = (r(x, y)/2) · (y − x) h(∥y − x∥²/ϵ)/ϵ^{d/2} , (4) with r(x, y) = r(y, x) so that a_ϵ(x, y) = −a_ϵ(y, x). Here r(x, y) is a smooth vector field on the manifold that gives an orientation to the asymmetry of the kernel k_ϵ(x, y). It is worth noting that the dependence of r(x, y) on both x and y implies that r : M × M → R^l, with R^l the ambient space of M; however, in the asymptotic limit, the dependence in y is only important "locally" (x = y), and as such it is appropriate to think of r(x, x) as being a vector field on M. As a side note, it is worth pointing out that even though the form of a_ϵ(x, y) might seem restrictive at first, it is sufficiently rich to describe any vector field. This can be seen by taking r(x, y) = (w(x) + w(y))/2, so that at x = y the resulting vector field is given by r(x, x) = w(x) for an arbitrary vector field w(x). 3 Continuous Limit of Laplacian Type Operators We are now ready to state the main asymptotic result. Proposition 3.1 Let M be a compact, closed, smooth manifold of dimension d and k_ϵ(x, y) an asymmetric similarity kernel satisfying the conditions of section 2.2; then for any function f ∈ C²(M), the integral operator based on k_ϵ has the asymptotic expansion ∫_M k_ϵ(x, y)f(y)dy = m_0 f(x) + ϵ g(f(x), x) + o(ϵ) , (5) where g(f(x), x) = (m_2/2)(ω(x)f(x) + ∆f(x) + r · ∇f(x) + f(x)∇ · r + c(x)f(x)) (6) and m_0 = ∫_{R^d} h(∥u∥²)du, m_2 = ∫_{R^d} u_i² h(∥u∥²)du. The proof can be found in [8], along with the definition of ω(x) and c(x) in (6). For now, it suffices to say that ω(x) corresponds to an interaction between the symmetric kernel h_ϵ and the curvature of M and was first derived in [9].
Meanwhile, c(x) is a new term that originates from the interaction between h_ϵ and the component of r that is normal to M in the ambient space R^l. Proposition 3.1 foreshadows a general fact about spectral embedding algorithms: in most cases, Laplacian operators confound the effects of spatial proximity, sampling density and directional flow due to the presence of the various terms above. 3.1 Anisotropic Limit Operators Proposition 3.1 above can be used to derive the limits of a variety of Laplacian type operators associated with spectral embedding algorithms like [5, 6, 3]. Although we will focus primarily on a few operators that give the most insight into the generative process and enable us to recover the model defined in Figure 1, we first present four distinct families of operators for completeness. These operator families are inspired by the anisotropic family of operators that [9] introduced for undirected graphs, which make use of anisotropic kernels of the form: k^{(α)}_ϵ(x, y) = k_ϵ(x, y)/(p^α_ϵ(x) p^α_ϵ(y)) , (7) with α ∈ [0, 1], where α = 0 is the isotropic limit. To normalize the anisotropic kernels, we need to redefine the outdegree distribution of k^{(α)}_ϵ as p^{(α)}_ϵ(x) = ∫_M k^{(α)}_ϵ(x, y)p(y)dy. From (7), four families of diffusion processes of the form f_t = H^{(α)}[f](x) can be derived depending on which kernel is normalized and which outdegree distribution is used for the normalization. Specifically, we define transport operators by normalizing the asymmetric k^{(α)}_ϵ or symmetric h^{(α)}_ϵ kernels with the asymmetric p_ϵ or symmetric q_ϵ = ∫_M h_ϵ(x, y)p(y)dy outdegree distribution.¹ To keep track of all options, we introduce the following notation: the operators will be indexed by the type of kernel and outdegree distribution they correspond to (symmetric or asymmetric), with the first index identifying the kernel and the second index identifying the outdegree distribution.
For example, the family of anisotropic limit operators introduced by [9] is defined by normalizing the symmetric kernel by the symmetric outdegree distribution; hence they will be denoted as H^{(α)}_{ss}, with the superscript corresponding to the anisotropic power α. Proposition 3.2 With the above notation, H^{(α)}_{aa}[f] = ∆f − 2(1 − α)∇U · ∇f + r · ∇f (8) H^{(α)}_{as}[f] = ∆f − 2(1 − α)∇U · ∇f − cf + (α − 1)(r · ∇U)f − (∇ · r)f + r · ∇f (9) H^{(α)}_{sa}[f] = ∆f − 2(1 − α)∇U · ∇f + (c + ∇ · r + (α − 1)r · ∇U)f (10) H^{(α)}_{ss}[f] = ∆f − 2(1 − α)∇U · ∇f (11) The proof of this proposition, which can be found in [8], follows from repeated application of Proposition 3.1 to p(y) or q(y) and then to k_α(x, y) or h_α(x, y), as well as the fact that 1/p^α_ϵ = p^{−α}[1 − αϵ(ω + ∆p/p + 2 r · ∇p/p + 2∇ · r + c)] + o(ϵ). Thus, if we use the asymmetric k_ϵ and p_ϵ, we get H^{(α)}_{aa}, defined by the advected diffusion equation (8). In general, H^{(α)}_{aa} is not hermitian, so it commonly has complex eigenvectors. This makes embedding directed graphs with this operator problematic. Nevertheless, H^{(1)}_{aa} will play an important role in extracting the directionality of the sampling process. If we use the symmetric kernel h_ϵ but the asymmetric outdegree distribution p_ϵ, we get the family of operators H^{(α)}_{sa}, of which the WCut of [3] is a special case (α = 0). If we reverse the above, i.e. use k_ϵ and q_ϵ, we obtain H^{(α)}_{as}. This turns out to be merely a combination of H^{(α)}_{aa} and H^{(α)}_{sa}. ¹The reader may notice that there are in fact eight possible combinations of kernel and degree distribution, since the anisotropic kernel (7) could also be defined using a symmetric or asymmetric outdegree distribution. However, there are only four distinct asymptotic results and they are all covered by using one kernel (symmetric or asymmetric) and one degree distribution (symmetric or asymmetric) throughout. Algorithm 1 Directed Embedding Input: Affinity matrix W_{i,j} and embedding dimension m (m ≥ d)
1.
S ← (W + W^T)/2 (Steps 1–6 estimate the coordinates as in [11])
2. q_i ← Σ_{j=1}^n S_{i,j}, Q = diag(q)
3. V ← Q^{−1} S Q^{−1}
4. q^{(1)}_i ← Σ_{j=1}^n V_{i,j}, Q^{(1)} = diag(q^{(1)})
5. H^{(1)}_{ss,n} ← Q^{(1)−1} V
6. Compute Ψ, the n × (m+1) matrix with orthonormal columns containing the m+1 largest right eigenvectors (by eigenvalue) of H^{(1)}_{ss,n}, as well as Λ, the (m+1) × (m+1) diagonal matrix of eigenvalues. Eigenvectors 2 to m+1 of Ψ are the m coordinates of the embedding.
7. Compute π, the left eigenvector of H^{(1)}_{ss,n} with eigenvalue 1. (Steps 7–8 estimate the density)
8. π ← π/Σ_{i=1}^n π_i is the density distribution over the embedding.
9. p_i ← Σ_{j=1}^n W_{i,j}, P = diag(p) (Steps 9–13 estimate the vector field r)
10. T ← P^{−1} W P^{−1}
11. p^{(1)}_i ← Σ_{j=1}^n T_{i,j}, P^{(1)} = diag(p^{(1)})
12. H^{(1)}_{aa,n} ← P^{(1)−1} T
13. R ← (H^{(1)}_{aa,n} − H^{(1)}_{ss,n}) Ψ/2. Columns 2 to m+1 of R are the vector field components in the direction of the corresponding coordinates of the embedding.
Finally, if we only consider the symmetric kernel h_ϵ and degree distribution q_ϵ, we recover H^{(α)}_{ss}, the anisotropic kernels of [9] for symmetric graphs. This operator for α = 1 is shown to separate the manifold from the probability distribution [11] and will be used as part of our recovery algorithm. 4 Isolating the Vector Field r Our aim is to estimate the manifold M, the density distribution p = e^{−U}, and the vector field r. The first two components of the data can be recovered from H^{(1)}_{ss} as shown in [11] and summarized in Algorithm 1. At this juncture, one feature of the generative process is missing: the vector field r. The natural approach for recovering r is to isolate the linear operator r · ∇ from H^{(α)}_{aa} by subtracting H^{(α)}_{ss}: H^{(α)}_{aa} − H^{(α)}_{ss} = r · ∇ . (12) The advantage of recovering r in operator form as in (12) is that r · ∇ is coordinate free. In other words, as long as the chosen embedding of M is diffeomorphic to M², (12) can be used to express the component of r that lies in the tangent space TM, which we denote by r_∥.
Specifically, let Ψ be a diffeomorphic embedding of M; the component of r along coordinate ψ_k is then given by r · ∇ψ_k = r_k, and so, in general, r_∥ = r · ∇Ψ . (13) The subtle point that only r_∥ is recovered from (13) follows from the fact that the operator r · ∇ is only defined along M, and hence any directional derivative is necessarily along TM. Equation (13) and the previous observations are the basis for Algorithm 1, which recovers the three important features of the generative process for an asymmetric graph with affinity matrix W. A similar approach can be employed to recover c + ∇ · r, or simply ∇ · r if r has no component perpendicular to the tangent space TM (meaning that c ≡ 0). Recovering c + ∇ · r is achieved by taking advantage of the fact that (H^{(1)}_{sa} − H^{(1)}_{ss}) = (c + ∇ · r) , (14) which is a diagonal operator. Taking into account that for finite n (H^{(1)}_{sa,n} − H^{(1)}_{ss,n}) is not perfectly diagonal, using ψ_n ≡ 1_n (vector of ones), i.e. (H^{(1)}_{sa,n} − H^{(1)}_{ss,n})[1_n] = (c_n + ∇ · r_n), has been found empirically to be more stable than simply extracting the diagonal of (H^{(1)}_{sa,n} − H^{(1)}_{ss,n}). ²A diffeomorphic embedding is guaranteed by using the eigendecomposition of H^{(1)}_{ss}. 5 Experiments Artificial Data For illustrative purposes, we begin by applying our method to an artificial example. We use the planet Earth as a manifold with a topographic density distribution, where sampling probability is proportional to elevation. We also consider two vector fields: the first is parallel to the line of constant latitude and purely tangential to the sphere, while the second is parallel to the line of constant longitude with a component of the vector field perpendicular to the manifold. The true model with constant latitude vector field is shown in Figure 2, along with the estimated density and vector field projected on the true manifold (sphere).
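Algorithm 1 transcribes almost line-for-line into NumPy. The sketch below is our illustrative implementation, not the authors' code: the function name `directed_embedding` and the eigensolver and sorting details are our own choices under the assumption of a strictly positive affinity matrix.

```python
import numpy as np

def directed_embedding(W, m):
    """Sketch of Algorithm 1 (Directed Embedding).
    Returns embedding coordinates Psi, density estimate pi, vector field R."""
    S = (W + W.T) / 2                    # step 1: symmetric part
    Q = S.sum(axis=1)                    # step 2: degrees of S
    V = S / np.outer(Q, Q)               # step 3: Q^{-1} S Q^{-1}
    Q1 = V.sum(axis=1)                   # step 4
    H_ss = V / Q1[:, None]               # step 5: row-stochastic H^(1)_ss,n
    # step 6: top m+1 right eigenvectors of H_ss, sorted by eigenvalue
    evals, evecs = np.linalg.eig(H_ss)
    order = np.argsort(-evals.real)
    Psi = evecs[:, order[:m + 1]].real
    # steps 7-8: stationary density = left eigenvector with eigenvalue 1
    wl, vl = np.linalg.eig(H_ss.T)
    pi = vl[:, np.argmax(wl.real)].real
    pi = pi / pi.sum()
    P = W.sum(axis=1)                    # step 9: outdegrees of W
    T = W / np.outer(P, P)               # step 10: P^{-1} W P^{-1}
    P1 = T.sum(axis=1)                   # step 11
    H_aa = T / P1[:, None]               # step 12: H^(1)_aa,n
    R = (H_aa - H_ss) @ Psi / 2          # step 13: vector field components
    return Psi, pi, R

# Smoke test on a small random positive affinity matrix.
rng = np.random.default_rng(0)
W = rng.random((10, 10)) + 0.1
Psi, pi, R = directed_embedding(W, m=2)
print(Psi.shape, R.shape, round(float(pi.sum()), 6))  # (10, 3) (10, 3) 1.0
```

Note that, as the text warns, H^{(1)}_{aa,n} is generally non-Hermitian; only the difference in step 13 is used, projected onto the real embedding coordinates.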
[Figure 2 panels: Model / Recovered; Latitudinal / Longitudinal; (a), (b)] Figure 2: (a): Sphere with latitudinal vector field, i.e. East-West asymmetry, with W_ew > W_we if node w lies to the West of node e. The graph nodes are sampled non-uniformly, with the topographic map of the world as sampling density. We sample n = 5000 nodes, and observe only the resulting W matrix, but not the node locations. From W, our algorithm estimates the sample locations (geometry), the vector field (black arrows) generating the observed asymmetries, and the sampling distribution at each data point (colormap). (b) Vector fields on a spherical region (blue), and their estimates (red): latitudinal vector field tangent to the manifold (left) and longitudinal vector field with component perpendicular to the manifold tangent plane (right). Both the estimated density and vector field agree with the true model, demonstrating that for artificial data, the recovery Algorithm 1 performs quite well. We note that the estimated density does not recover all the details of the original density, even for large sample size (here n = 5000 with ϵ = 0.07). Meanwhile, the estimated vector field performs quite well even when the sampling is reduced to n = 500 with ϵ = 0.1. This can be seen in Figure 2(b), where the true and estimated vector fields are superimposed. Figure 2 also demonstrates how r · ∇ only recovers the tangential component of r. The estimated geometry is not shown on any of these figures, since the success of the diffusion map in recovering the geometry for such a simple manifold is already well established [2, 9]. Real Data The National Longitudinal Survey of Youth (NLSY) 1979 Cohort is a representative sample of young men and women in the United States who were followed from 1979 to 2000 [12, 13]. The aim here is to use this survey to obtain a representation of the job market as a diffusion process over a manifold.
The data set consists of a sample of 7,816 individual career sequences of length 64, listing the jobs a particular individual held every quarter between the ages of 20 and 36. Each token in the sequence identifies a job. Each job corresponds to an industry × occupation pair. There are 25 unique industry and 20 unique occupation indices. Out of the 500 possible pairings, approximately 450 occur in the data, with only 213 occurring with sufficient frequency to be included here. Thus, our graph G has 213 nodes - the jobs - and our observations consist of 7,816 walks between the graph nodes. We convert these walks to a directed graph with affinity matrix W. Specifically, W_ij represents the number of times a transition from job i to job j was observed (note that this matrix is asymmetric, i.e. W_ij ≠ W_ji). Normalizing each row i of W by its outdegree d_i gives P = diag(d_i)^{−1}W, the non-parametric maximum likelihood estimator for the Markov chain over G for the progression of career sequences. This Markov chain has as limit operator H^{(0)}_{aa}, as the granularity of the job market increases along with the number of observations. Thus, in trying to recover the geometry, distribution and vector field, we are actually interested in estimating the full advective effect of the diffusion process generated by H^{(0)}_{aa}; that is, we want to estimate r · ∇ − 2∇U · ∇, where we can use −2∇U · ∇ = H^{(0)}_{ss} − H^{(1)}_{ss} to complement Algorithm 1. [Figure 3 panels (a), (b): embedding coordinates ψ_2 vs. ψ_3, colored by mean monthly wage (roughly 800–2000 dollars) and by female proportion (roughly 0.1–0.9).] Figure 3: Embedding the job market along with field r − 2∇U over the first two non-constant eigenvectors. The color map corresponds to the mean monthly wage in dollars (a) and to the female proportion (b) for each job.
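The normalization just described, from transition counts W to the Markov chain estimator P = diag(d_i)^{−1}W, can be sketched as follows (toy counts of our own invention, not the NLSY data):

```python
import numpy as np

# Hypothetical transition counts between 3 jobs:
# W[i, j] = number of observed transitions from job i to job j.
W = np.array([[10.,  4.,  1.],
              [ 2., 20.,  8.],
              [ 0.,  5., 15.]])

d = W.sum(axis=1)       # outdegree d_i of each job
P = W / d[:, None]      # P = diag(d)^{-1} W, the MLE of the Markov chain

# Each row of P is now a probability distribution over next jobs.
print(np.allclose(P.sum(axis=1), 1.0))  # True
```

Row normalization is the maximum likelihood estimator because, conditional on being in job i, the observed next-job frequencies W_ij/d_i maximize the multinomial likelihood.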
We obtain an embedding of the job market that describes the relative position of jobs, their distribution, and the natural time progression from each job. Of these, the relative position and natural time progression are the most interesting. Together, they summarize the job market dynamics by describing which jobs are naturally "close" as well as where they can lead in the future. From a public policy perspective, this can potentially improve focus on certain jobs for helping individuals attain better upward mobility. The job market was found to be a high-dimensional manifold. We present only the first two dimensions, that is, the second and third eigenvectors of H^{(0)}_{ss}, since the first eigenvector is uninformative (constant) by construction. The eigenvectors showed correlation with important demographic data, such as wages and gender. Figure 3 displays this two-dimensional sub-embedding along with the directional information r − 2∇U for each dimension. The plot shows very little net progression toward regions of increasing mean salary.³ This is somewhat surprising, but it is easy to overstate this observation: diffusion alone would be enough to move individuals towards higher salary. What Figure 3 (a) suggests is that there appear to be no "external forces" advecting individuals towards higher salary. Nevertheless, there appear to be other external forces at play in the job market: Figure 3 (b), which is analogous to Figure 3 (a) but with gender replacing the salary color scheme, suggests that these forces push individuals towards greater gender differentiation. This is especially true amongst male-dominated jobs, which appear to be advected toward the left edge of the embedding. Hence, this simple analysis of the job market can be seen as an indication that males and females tend to move away from each other over time, while neither seems to have a monopoly on high- or low-paying jobs.
6 Discussion

This paper makes three contributions: (1) it introduces a manifold-based generative model for directed graphs with weighted edges, (2) it obtains asymptotic results for operators constructed from the directed graphs, and (3) these asymptotic results lead to a natural algorithm for estimating the model.

³It is worth noting that in the NLSY data set, high paying jobs are teacher, nurse and mechanic. This is due to the fact that the career paths observed stop at age 36, which is relatively early in an individual's career.

Generative Models that assume that data are sampled from a manifold are standard for undirected graphs, but to our knowledge, none have yet been proposed for directed graphs. When W is symmetric, it is natural to assume that it depends on the points' proximity. For asymmetric affinities W, one must include an additional component to explain the asymmetry. In the asymptotic limit, this is tantamount to defining a vector field on the manifold.

Algorithm  We have used from [9] the idea of defining anisotropic kernels (indexed by α) in order to separate the density p and the manifold geometry M. We also adopted their general assumptions about the symmetric part of the kernel. As a consequence, the recovery algorithm for p and M is identical to theirs. However, insofar as the asymmetric part of the kernel is concerned, everything, starting from the definition and the introduction of the vector field r as a way to model the asymmetry, through the derivation of the asymptotic expression for the symmetric plus asymmetric kernel, is new. We go significantly beyond the elegant idea of [9] regarding the use of anisotropic kernels by analyzing the four distinct renormalizations possible for a given α, each of them combining different aspects of M, p and r. Only the successful (and novel) combination of two different anisotropic operators is able to recover the directional flow r.
Algorithm 1 is natural, but we do not claim it is the only possible one in the context of our model. For instance, we can also use H^{(α)}_{sa} to recover the operator ∇·r (which empirically seems to have worse numerical properties than r·∇). In the National Longitudinal Survey of Youth study, we were interested in the whole advective term, so we estimated it from a different combination of operators. Depending on the specific question, other features of the model could be obtained.

Limit Results  Proposition 3.1 is a general result on the asymptotics of asymmetric kernels. Recovering the manifold and r is just one, albeit the most useful, of the many ways of exploiting these results. For instance, H^{(0)}_{sa} is the limit operator of the operators used in [3] and [5]. The limit analysis could be extended to other digraph embedding algorithms such as [4, 6].

How general is our model?  Any kernel can be decomposed into a symmetric and an asymmetric part, as we have done. The assumptions on the symmetric part h are standard. The paper of [7] goes one step further from these assumptions; we will discuss it in relation to our work shortly. The more interesting question is how limiting our assumptions are regarding the choice of kernel, especially the asymmetric part, which we parameterized as a_ϵ(x, y) = r/2 · (y − x) h_ϵ(x, y) in (4). In the asymptotic limit, this choice turns out to be fully general, at least up to the identifiable aspects of the model. For a more detailed discussion of this issue, see [8]. In [7], Ting, Huang and Jordan presented asymptotic results for a general family of kernels that includes asymmetric and random kernels. Our k_ϵ can be expressed in the notation of [7] by taking w_x(y) ← 1 + r(x, y)·(y − x), r_x(y) ← 1, K_0 ← h, h ← ϵ. Their assumptions are more general than the assumptions we make here, yet our model is general up to what can be identified from G alone.
The distinction arises because [7] focuses on graph construction methods from an observed sample of M, while we focus on explaining an observed directed graph G through a manifold generative process. Moreover, while the results of [7] can be used to analyze data from directed graphs, they differ from our Proposition 3.1. Specifically, with respect to the limit in Theorem 3 from [7], we obtain the additional source terms f(x)∇·r and c(x)f(x) that follow from not enforcing conservation of mass while defining the operators H^{(α)}_{sa} and H^{(α)}_{as}. We applied our theory of directed graph embedding to the analysis of the career sequences in Section 5, but asymmetric affinity data abound in other social contexts, and in the physical and life sciences. Indeed, any "similarity" score that is obtained from a likelihood of the form W_vu = likelihood(u|v) is generally asymmetric. Hence our methods can be applied to study not only social networks, but also patterns of human movement, road traffic, and trade relations, as well as alignment scores in molecular biology. Finally, the physical interpretation of our model also makes it naturally applicable to physical models of flows.

Acknowledgments  This research was partially supported by NSF awards IIS-0313339 and IIS-0535100.

References
[1] Belkin and Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373-1396, 2002.
[2] Nadler, Lafon, and Coifman. Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators. In Neural Information Processing Systems Conference, 2006.
[3] Meila and Pentney. Clustering by weighted cuts in directed graphs. In SIAM Data Mining Conference, 2007.
[4] Zhou, Huang, and Schölkopf. Learning from labeled and unlabeled data on a directed graph. In International Conference on Machine Learning, pages 1041-1048, 2005.
[5] Zhou, Schölkopf, and Hofmann. Semi-supervised learning on directed graphs.
In Advances in Neural Information Processing Systems, volume 17, pages 1633-1640, 2005.
[6] Fan R. K. Chung. The diameter and Laplacian eigenvalues of directed graphs. Electr. J. Comb., 13, 2006.
[7] Ting, Huang, and Jordan. An analysis of the convergence of graph Laplacians. In International Conference on Machine Learning, 2010.
[8] Dominique Perrault-Joncas and Marina Meilă. Directed graph embedding: an algorithm based on continuous limits of Laplacian-type operators. Technical Report TR 587, University of Washington, Department of Statistics, November 2011.
[9] Coifman and Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21:6-30, 2006.
[10] Mikhail Belkin and Partha Niyogi. Convergence of Laplacian eigenmaps. Preprint; short version in NIPS 2008, 2008.
[11] Coifman, Lafon, Lee, Maggioni, Warner, and Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. In Proceedings of the National Academy of Sciences, pages 7426-7431, 2005.
[12] United States Department of Labor. National Longitudinal Survey of Youth 1979 cohort. http://www.bls.gov/nls/, retrieved October 2011.
[13] Marc A. Scott. Affinity models for career sequences. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(3):417-436, 2011.
Nonstandard Interpretations of Probabilistic Programs for Efficient Inference

David Wingate, BCS / LIDS, MIT, wingated@mit.edu
Noah D. Goodman, Psychology, Stanford, ngoodman@stanford.edu
Andreas Stuhlmüller, BCS, MIT, ast@mit.edu
Jeffrey M. Siskind, ECE, Purdue, qobi@purdue.edu

Abstract

Probabilistic programming languages allow modelers to specify a stochastic process using syntax that resembles modern programming languages. Because the program is in machine-readable format, a variety of techniques from compiler design and program analysis can be used to examine the structure of the distribution represented by the probabilistic program. We show how nonstandard interpretations of probabilistic programs can be used to craft efficient inference algorithms: information about the structure of a distribution (such as gradients or dependencies) is generated as a monad-like side computation while executing the program. These interpretations can be easily coded using special-purpose objects and operator overloading. We implement two examples of nonstandard interpretations in two different languages, and use them as building blocks to construct inference algorithms: automatic differentiation, which enables gradient-based methods, and provenance tracking, which enables efficient construction of global proposals.

1 Introduction

Probabilistic programming simplifies the development of probabilistic models by allowing modelers to specify a stochastic process using syntax that resembles modern programming languages. These languages permit arbitrary mixing of deterministic and stochastic elements, resulting in tremendous modeling flexibility. The resulting programs define probabilistic models that serve as prior distributions: running the (unconditional) program forward many times results in a distribution over execution traces, with each trace being a sample from the prior.
Examples include BLOG [13], Bayesian Logic Programs [10], IBAL [18], CHURCH [6], Stochastic MATLAB [28], and HANSEI [11]. The primary challenge in developing such languages is scalable inference. Inference can be viewed as reasoning about the posterior distribution over execution traces conditioned on a particular program output, and is difficult because of the flexibility these languages present: in principle, an inference algorithm must behave reasonably for any program a user wishes to write. Sample-based MCMC algorithms are the state-of-the-art method, due to their simplicity, universality, and compositionality.

But in probabilistic modeling more generally, efficient inference algorithms are designed by taking advantage of structure in distributions. How can we find structure in a distribution defined by a probabilistic program? A key observation is that some languages, such as CHURCH and Stochastic MATLAB, are defined in terms of an existing (non-probabilistic) language. Programs in these languages may literally be executed in their native environments, suggesting that tools from program analysis and programming language theory can be leveraged to find and exploit structure in the program for inference, much as a compiler might find and exploit structure for performance.

Here, we show how nonstandard interpretations of probabilistic programs can help craft efficient inference algorithms. Information about the structure of a distribution (such as gradients, dependencies or bounds) is generated as a monad-like side computation while executing the program. This extra information can be used to, for example, construct good MH proposals, or search efficiently for a local maximum. We focus on two such interpretations, automatic differentiation and provenance tracking, and show how they can be used as building blocks to construct efficient inference algorithms.
We implement nonstandard interpretations in two different languages (CHURCH and Stochastic MATLAB), and experimentally demonstrate that while they typically incur some additional execution overhead, they dramatically improve inference performance.

2 Background and Related Work

We begin by outlining our setup, following [28]. We define an unconditioned probabilistic program to be a parameterless function f with an arbitrary mix of stochastic and deterministic elements (hereafter, we will use the terms function and program interchangeably). The function f may be written in any language, but our running example will be MATLAB. We allow the function to be arbitrarily complex inside, using any additional functions, recursion, language constructs or external libraries it wishes. The only constraint is that the function must be self-contained, with no external side-effects which would impact the execution of the function from one run to another.

The stochastic elements of f must come from a set of known, fixed elementary random primitives, or ERPs. Complex distributions are constructed compositionally, using ERPs as building blocks. In MATLAB, ERPs may be functions such as rand (sample uniformly from [0,1]) or randn (sample from a standard normal). Higher-order random primitives, such as nonparametric distributions, may also be defined, but must be fixed ahead of time. Formally, let T be the set of ERP types. We assume that each type t ∈ T is a parametric family of distributions p_t(x|θ_t), with parameters θ_t.

Now, consider what happens while executing f. As f is executed, it encounters a series of ERPs. Alg. 1 shows an example of a simple f written in MATLAB with three syntactic ERPs: rand, randn, and gammarnd.

Alg. 1: A Gaussian-Gamma mixture
for i=1:1000
  if ( rand > 0.5 )
    X(i) = randn;
  else
    X(i) = gammarnd;
  end;
end;
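For concreteness, here is a Python analogue of the generative program in Alg. 1 (a sketch only: the paper does not show gammarnd's parameters, so shape 1 and scale 1 are assumed here). Each call to a random primitive plays the role of an ERP, and one run of f() produces one execution trace:

```python
import random

def f():
    # Python analogue of Alg. 1 (sketch; gamma shape/scale are assumed).
    X = []
    for _ in range(1000):
        if random.random() > 0.5:                     # ERP: rand
            X.append(random.gauss(0.0, 1.0))          # ERP: randn
        else:
            X.append(random.gammavariate(1.0, 1.0))   # ERP: gammarnd (assumed params)
    return X

sample = f()  # one execution trace; 2000 random choices were made in total
```

Running f() repeatedly yields the prior distribution over traces described in the text.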
During execution, depending on the return value of each call to rand, different paths will be taken through the program, and different ERPs will be encountered. We call this path an execution trace. A total of 2000 random choices will be made when executing this f.

Let f_{k|x_1,···,x_{k−1}} be the k'th ERP encountered while executing f, and let x_k be the value it returns. Note that the parameters passed to the k'th ERP may change depending on previous x_k's (indeed, its type may also change, as well as the total number of ERPs). We denote by x all of the random choices which are made by f, so f defines the probability distribution p(x). In our example, x ∈ R^2000. The probability p(x) is the product of the probability of each individual ERP choice:

    p(x) = ∏_{k=1}^{K} p_{t_k}(x_k | θ_{t_k}, x_1, ···, x_{k−1})    (1)

again noting explicitly that types and parameters may depend arbitrarily on previous random choices. To simplify notation, we will omit the conditioning on the values of previous ERPs, but we wish to emphasize that these dependencies are critical and cannot be ignored. By f_k, it should therefore be understood that we mean f_{k|x_1,···,x_{k−1}}, and by p_{t_k}(x_k|θ_{t_k}) we mean p_{t_k}(x_k|θ_{t_k}, x_1, ···, x_{k−1}).

Generative functions as described above are, of course, easy to write. A much harder problem, and our goal in this paper, is to reason about the posterior conditional distribution p(x|y), where we define y to be a subset of random choices which we condition on and (in an abuse of notation) x to be the remaining random choices. For example, we may condition f on the X(i)'s, and reason about the sequence of rand's most likely to generate the X(i)'s. For the rest of this paper, we will drop y and simply refer to p(x), but it should be understood that the goal is always to perform inference in conditional distributions.

2.1 Nonstandard Interpretations of Probabilistic Programs

With an outline of probabilistic programming in hand, we now turn to nonstandard interpretations.
The idea of nonstandard interpretations originated in model theory and mathematical logic, where it was proposed that a set of axioms could be interpreted by different models. For example, differential geometry can be considered a nonstandard interpretation of classical arithmetic.

In programming, a nonstandard interpretation replaces the domain of the variables in the program with a new domain, and redefines the semantics of the operators in the program to be consistent with the new domain. This allows reuse of program syntax while implementing new functionality. For example, the expression "a ∗ b" can be interpreted equally well if a and b are either scalars or matrices, but the "∗" operator takes on different meanings. Practically, many useful nonstandard interpretations can be implemented with operator overloading: variables are redefined to be objects with operators that implement special functionality, such as tracing, reference counting, or profiling.

For the purposes of inference in probabilistic programs, we will augment each random choice x_k with additional side information s_k, and replace each x_k with the tuple ⟨x_k, s_k⟩. The native interpreter for the probabilistic program can then interpret the source code as a sequence of operations on these augmented data types. For a recent example of this, we refer the reader to [24].

3 Automatic Differentiation

For probabilistic models with many continuous-valued random variables, the gradient of the likelihood ∇_x p(x) provides local information that can significantly improve the properties of Monte-Carlo inference algorithms. For instance, Langevin Monte-Carlo [20] and Hamiltonian MCMC [15] use this gradient as part of a variable-augmentation technique (described below). We would like to be able to use gradients in the probabilistic-program setting, but p(x) is represented implicitly by the program. How can we compute its gradient?
We use automatic differentiation (AD) [3, 7], a nonstandard interpretation that automatically constructs ∇_x p(x). The automatic nature of AD is critical because it relieves the programmer from hand-computing derivatives for each model; moreover, some probabilistic programs dynamically create or delete random variables, making simple closed-form expressions for the gradient very difficult to find. Unlike finite differencing, AD computes an exact derivative of a function f at a point (up to machine precision). To do this, AD relies on the chain rule to decompose the derivative of f into derivatives of its sub-functions: ultimately, known derivatives of elementary functions are composed together to yield the derivative of the compound function. This composition can be computed as a nonstandard interpretation of the underlying elementary functions.

The derivative computation as a composition of the derivatives of the elementary functions can be performed in different orders. In forward mode AD [27], computation of the derivative proceeds by propagating perturbations of the input toward the output. This can be done by a nonstandard interpretation that extends each real value to the first two terms of its Taylor expansion [26], overloading each elementary function to operate on these real "polynomials". Because the derivatives of f at c can be extracted from the coefficients of ε in f(c + ε), this allows computation of the gradient. In reverse mode AD [25], computation of the derivative proceeds by propagating sensitivities of the output toward the input. One way this can be done is by a nonstandard interpretation that extends each real value into a "tape" that captures the trace of the real computation which led to that value from the inputs, overloading each elementary function to incrementally construct these tapes. Such a tape can be postprocessed, in a fashion analogous to backpropagation [21], to yield the gradient.
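As an illustration of the overloading idea (a generic sketch, not the paper's Scheme implementation), forward-mode AD on dual numbers fits in a few lines:

```python
# Forward-mode AD sketch via operator overloading: each value carries the
# first two Taylor coefficients (val, eps), and the overloaded operators
# propagate the perturbation by the chain rule.
class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps

    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)  # constants: zero perturbation

    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__

    def __mul__(self, o):
        o = self._lift(o)
        # product rule: d(uv) = u dv + v du
        return Dual(self.val * o.val, self.val * o.eps + self.eps * o.val)
    __rmul__ = __mul__

def deriv(f, c):
    """Evaluate f(c + eps) and read the derivative off the eps coefficient."""
    return f(Dual(c, 1.0)).eps
```

For example, deriv(lambda x: x * x + 3 * x, 2.0) gives 7.0, matching d/dx (x² + 3x) = 2x + 3 at x = 2; the program text of f is reused unchanged under the new domain.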
These two approaches have complementary computational tradeoffs: reverse mode (which we use in our implementation) can compute the gradient of a function f : R^n → R with the same asymptotic time complexity as computing f, but not the same asymptotic space complexity (due to its need for saving the computation trace), while forward mode can compute the gradient with the same asymptotic space complexity, but with a factor of O(n) slowdown (due to its need for constructing the gradient out of partial derivatives along each independent variable).

There are implementations of AD for many languages, including SCHEME (e.g., [17]), FORTRAN (e.g., ADIFOR [2]), C (e.g., ADOL-C [8]), C++ (e.g., FADBAD++ [1]), MATLAB (e.g., INTLAB [22]), and MAPLE (e.g., GRADIENT [14]). See www.autodiff.org. Additionally, overloading and AD are well-established techniques that have been applied to machine learning, and even to application-specific programming languages for machine learning, e.g., LUSH [12] and DYNA [4]. In particular, DYNA applies a nonstandard interpretation for ∧ and ∨ as a semiring (× and +, + and max, ...) in a memoizing PROLOG to generalize Viterbi, forward/backward, inside/outside, etc., and uses AD to derive the outside algorithm from the inside algorithm and support parameter estimation; but unlike probabilistic programming, it does not model general stochastic processes and does not do general inference over such processes. Our use of overloading and AD differs in that it facilitates inference in complicated models of general stochastic processes formulated as probabilistic programs. Probabilistic programming provides a powerful and convenient framework for formulating complicated models and, more importantly, separating such models from orthogonal inference mechanisms.
Moreover, overloading provides a convenient mechanism for implementing many such inference mechanisms (e.g., Langevin MC, Hamiltonian MCMC, provenance tracking, as demonstrated below) in a probabilistic programming language.

(define (perlin-pt x y keypt power)
  (* 255 (sum (map (lambda (p2 pow)
                     (let ((x0 (floor (* p2 x)))
                           (y0 (floor (* p2 y))))
                       (* pow (2d-interp (keypt x0 y0)
                                         (keypt (+ 1 x0) y0)
                                         (keypt x0 (+ 1 y0))
                                         (keypt (+ 1 x0) (+ 1 y0))))))
                   powers-of-2 power))))

(define (perlin xs ys power)
  (let ([keypt (mem (lambda (x y)
                      (/ 1 (+ 1 (exp (- (gaussian 0.0 2.0)))))))])
    (map (lambda (x)
           (map (lambda (y) (perlin-pt x y keypt power)) xs))
         ys)))

Figure 1: Code for the structured Perlin noise generator. 2d-interp is B-spline interpolation.

3.1 Hamiltonian MCMC

Alg. 2: Hamiltonian MCMC
1: repeat forever
2:   Gibbs step:
3:     Draw momentum m ∼ N(0, σ²)
4:   Metropolis step:
5:     Start with current state (x, m)
6:     Simulate Hamiltonian dynamics to give (x′, m′)
7:     Accept w/ p = min[1, e^{−H(x′,m′)+H(x,m)}]
8: end;

To illustrate the power of AD in probabilistic programming, we build on Hamiltonian MCMC (HMC), an efficient algorithm whose popularity has been somewhat limited by the necessity of computing gradients, a difficult task for complex models. Neal [15] introduces HMC as an inference method which "produces distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behavior of simple random-walk proposals." HMC begins by augmenting the state space with "momentum variables" m. The distribution over this augmented space is e^{−H(x,m)}, where the Hamiltonian function H is decomposed into the sum of a potential energy term U(x) = −ln p(x) and a kinetic energy term K(m), which is usually taken to be Gaussian.
Inference proceeds by alternating between a Gibbs step and a Metropolis step: fixing the current state x, a new momentum m is sampled from the prior over m; then x and m are updated together by following a trajectory according to Hamiltonian dynamics. Discrete integration of Hamiltonian dynamics requires the gradient of H, and must be done with a symplectic (i.e. volume-preserving) integrator (following [15], we use the leapfrog method). While this is a complex computation, incorporating gradient information dramatically improves performance over vanilla random-walk style MH moves (such as Gaussian drift kernels), and its statistical efficiency also scales much better with dimensionality than simpler methods [15]. AD can also compute higher-order derivatives. For example, Hessian matrices can be used to construct blocked Metropolis moves [9] or proposals based on Newton's method [19], or as part of Riemannian manifold methods [5].

3.2 Experiments and Results

We implemented HMC by extending BHER [28], a lightweight implementation of the CHURCH language which provides simple, but universal, MH inference. We used an implementation of AD based on [17] that uses hygienic operator overloading to do both forward and reverse mode AD for Scheme (the target language of the BHER compiler). The goal is to compute ∇_x p(x). By Eq. 1, p(x) is the product of the individual choices made by each x_i (though each probability can depend on previous choices, through the program evaluation). To compute p(x), BHER executes the corresponding program, accumulating likelihoods. Each time a continuous ERP is created or retrieved, we wrap it in a "tape" object which is used to track gradient information; as the likelihood p(x) is computed, these tapes flow through the program and through appropriately overloaded operators, resulting in a dependency graph for the real portion of the computation. The gradient is then computed in reverse mode, by "back-propagating" along this graph.
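The leapfrog/accept loop just described can be sketched in a few lines (a generic Python sketch, not BHER's implementation; the gradient is written by hand for a standard-normal target, where AD would supply it in the real system):

```python
import math, random

# HMC sketch for a 1-D target p(x) ∝ exp(-U(x)) with U(x) = x^2/2
# (standard normal). In the real system, grad_U comes from AD.
def U(x):      return 0.5 * x * x
def grad_U(x): return x

def leapfrog(x, m, step, n_steps):
    """Symplectic leapfrog integration of Hamiltonian dynamics."""
    m = m - 0.5 * step * grad_U(x)          # initial half-step on momentum
    for _ in range(n_steps - 1):
        x = x + step * m                    # full step on position
        m = m - step * grad_U(x)            # full step on momentum
    x = x + step * m
    m = m - 0.5 * step * grad_U(x)          # final half-step on momentum
    return x, m

def hmc_step(x, step=0.1, n_steps=25, rng=random):
    m = rng.gauss(0.0, 1.0)                 # Gibbs step: fresh momentum
    H0 = U(x) + 0.5 * m * m
    x_new, m_new = leapfrog(x, m, step, n_steps)
    H1 = U(x_new) + 0.5 * m_new * m_new
    # Metropolis step: accept with probability min(1, exp(H0 - H1))
    if rng.random() < math.exp(min(0.0, H0 - H1)):
        return x_new
    return x
```

Because leapfrog is symplectic, H drifts very little over a trajectory, so acceptance rates stay high even for the long, distal moves that make HMC effective.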
We implement an HMC kernel by using this gradient in the leapfrog integrator. Since program states may contain a combination of discrete and continuous ERPs, we use an overall cycle kernel which alternates between a standard MH kernel for individual discrete random variables and the HMC kernel for all continuous random choices. To decrease burn-in time, we initialize the sampler by using annealed gradient ascent (again implemented using AD). We ran two sets of experiments that illustrate two different benefits of HMC with AD: automated gradients of complex code, and good statistical efficiency.

Figure 2: On the left: samples from the structured Perlin noise generator. On the right: convergence of expected mean for a draw from a 3D spherical Gaussian conditioned on lying on a line.

Structured Perlin noise generation.  Our first experiment uses HMC to generate modified Perlin noise with soft symmetry structure. Perlin noise is a procedural texture used by computer graphics artists to add realism to natural textures such as clouds, grass or tree bark. We generate Perlin-like noise by layering octaves of random but smoothly varying functions. We condition the result on approximate diagonal symmetry, forcing the resulting image to incorporate additional structure without otherwise skewing the statistics of the image. Note that the MAP solution for this problem is uninteresting, as it is a uniform image; it is the variations around the MAP that provide rich texture. We generated 48x48 images; the model had roughly 1000 variables. Fig. 2 shows the result via typical samples generated by HMC, where the approximate symmetry is clearly visible. A code snippet demonstrating the complexity of the calculations is shown in Fig.
1; this experiment illustrates how the automatic nature of the gradients is most helpful, as it would be time-consuming to compute these gradients by hand, particularly since we are free to condition using any function of the image.

Normal distribution noisily conditioned on line (2D projection)
1: x ∼ N(µ, σ)
2: k ∼ Bernoulli(e^{−dist(line,x)/noise})
3: Condition on k = 1

Complex conditioning.  For our second example, we demonstrate the improved statistical efficiency of the samples generated by HMC versus BHER's standard MCMC algorithm. The goal is to sample points from a complex 3D distribution, defined by starting with a Gaussian prior and sampling points that are noisily conditioned to be on a line running through R³. This creates complex interactions with the prior, yielding a smooth but strongly coupled energy landscape. Fig. 2 compares our HMC implementation with BHER's standard MCMC engine. The x-axis denotes samples, while the y-axis denotes the convergence of an estimator of certain marginal statistics of the samples. We see that this estimator converges much faster for HMC, implying that the samples which are generated are less autocorrelated, affirming that HMC is indeed making better distal moves. HMC is about 5x slower than MCMC for this experiment, but the overhead is justified by the significant improvement in the statistical quality of the samples.

4 Provenance Tracking for Fine-Grained Dynamic Dependency Analysis

One reason gradient-based inference algorithms are effective is that the chain rule of derivatives propagates information backwards from the data up to the proposal variables. But gradients, and the chain rule, are only defined for continuous variables. Is there a corresponding structure for discrete choices? We now introduce a new nonstandard interpretation based on provenance tracking (PT).
In programming language theory, the provenance of a variable is the history of variables and computations that combined to form its value. We use this idea to track fine-grained dependency information between random values and intermediate computations as they combine to form a likelihood. We then use this provenance information to construct good global proposals for discrete variables as part of a novel factored multiple-try MCMC algorithm.

4.1 Defining and Implementing Provenance Tracking

Like AD, PT can be implemented with operator overloading. Because provenance information is much coarser than gradient information, the operators in PT objects have a particularly simple form; most program expressions can be covered by considering a few cases. Let X denote the set {x_i} of all (not necessarily random) variables in a program. Let R(x) ⊂ X define the provenance of a variable x. Given R(x), the provenance of expressions involving x can be computed by breaking down expressions into a sequence of unary operations, binary operations, and function applications. Constants have empty provenances.

Let x and y be expressions in the program (consisting of an arbitrary mix of variables, constants, functions and operators). For a binary operation x ⊙ y, the provenance R(x ⊙ y) of the result is defined to be R(x ⊙ y) = R(x) ∪ R(y). Similarly, for a unary operation, the provenance is R(⊙x) = R(x). For assignments, x = y ⇒ R(x) = R(y). For a function, R(f(x, y, ...)) may be computed by examining the expressions within f; a worst-case approximation is R(f(x, y, ...)) = R(x) ∪ R(y) ∪ ···.

A few special cases are also worth noting. Strictly speaking, the previous rules track a superset of provenance information, because some functions and operations are constant for certain inputs. In the case of multiplication, x ∗ 0 = 0, so R(x ∗ 0) = {}. Accounting for this gives tighter provenances, implying, for example, that special considerations apply to sparse linear algebra.
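These rules translate directly into operator overloading. The following is a minimal Python sketch of the idea (not the Stochastic MATLAB implementation), including the x ∗ 0 tightening rule:

```python
# Provenance-tracking sketch: each value carries the set R(x) of variables
# whose values flowed into it. Operators take the union of provenances;
# multiplication by zero yields an empty provenance (tightening rule).
class PV:
    def __init__(self, val, prov=()):
        self.val, self.prov = val, frozenset(prov)

    def _lift(self, o):
        return o if isinstance(o, PV) else PV(o)   # constants: empty provenance

    def __add__(self, o):
        o = self._lift(o)
        return PV(self.val + o.val, self.prov | o.prov)
    __radd__ = __add__

    def __mul__(self, o):
        o = self._lift(o)
        if self.val == 0 or o.val == 0:            # x * 0 = 0, so R = {}
            return PV(0, ())
        return PV(self.val * o.val, self.prov | o.prov)
    __rmul__ = __mul__

x = PV(2.0, {"x"})
y = PV(3.0, {"y"})
z = x * y + 5     # z depends on both x and y: z.prov == {"x", "y"}
w = x * PV(0.0)   # result is constant 0, so w.prov is empty
```

Ordinary arithmetic expressions over PV objects then accumulate exactly the dependency sets defined by the rules above.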
In the case of probabilistic programming, recall that random variables (or ERPs) are represented as stochastic functions f_i that accept parameters θ_i. Whenever a random variable is conditioned, the output of the corresponding f_i is fixed; thus, while the likelihood of a particular output of f_i depends on θ_i, the specific output of f_i does not. For the purposes of inference, therefore, R(f_i(θ_i)) = {}.

4.2 Using Provenance Tracking as Part of Inference

Provenance information could be used in many ways. Here, we illustrate one use: to help construct good block proposals for MH inference. Our basic idea is to construct a good global proposal by starting with a random global proposal (which is unlikely to be good) and then inhibiting the bad parts. We do this by allowing each element of the likelihood to "vote" on which proposals seemed good. This can be considered a factored version of a multiple-try MCMC algorithm [16]. The algorithm is shown in Fig. 3.

Let x^O be the starting state. In step (2), we propose a new state x^{O′}. This new state changes many ERPs at once, and is unlikely to be good (for the proof, we require that x^{O′}_i ≠ x^O_i for all i). In step (3), we accept or reject each element of the proposal based on a function α. Our choice of α (Fig. 3, left) uses PT, as we explain below. In step (4), we construct a new proposal x^M by "mixing" two states: we set the variables in the accepted set A to the values of x^{O′}_i, and we leave the variables in the rejected set R at their original values in x^O. In steps (5-6), we compute the forward probabilities. In steps (7-8), we sample one possible path backwards from x^M to x^O, with the relevant probabilities. Finally, in step (9), we accept or reject the overall proposal.

We use α(x^O, x^{O′}) to allow the likelihood to "vote" in a fine-grained way for which proposals seemed good and which seemed bad. To do this, we compute p(x^O) using PT to track how each x^O_i influences the overall likelihood p(x^O).
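The per-ERP accept/reject and the construction of the mixed state in steps (3)-(4) can be sketched as follows (a hypothetical helper, not the paper's code, with states represented as dicts from ERP name to value):

```python
import random

def mix_states(x_old, x_new, alpha, rng=random):
    """Steps (3)-(4) of the factored proposal: accept each ERP's proposed
    value independently with probability alpha[i]; rejected ERPs keep
    their old value. Returns the mixed state and the accepted index set A."""
    accepted, mixed = set(), {}
    for i in x_old:
        if rng.random() < alpha[i]:
            accepted.add(i)
            mixed[i] = x_new[i]
        else:
            mixed[i] = x_old[i]
    return mixed, accepted

x_old = {"a": 0.0, "b": 1.0}
x_new = {"a": 2.0, "b": 3.0}
# With alpha = 1 every proposal is accepted; with alpha = 0, none is.
m_all, A_all   = mix_states(x_old, x_new, {"a": 1.0, "b": 1.0})
m_none, A_none = mix_states(x_old, x_new, {"a": 0.0, "b": 0.0})
```

In the full algorithm, the per-variable probabilities alpha[i] come from the PT-based acceptance test described below, so each ERP's vote reflects only the part of the likelihood it actually influenced.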
Let D(i; x^O) denote the "descendants" of variable x^O_i, defined as all ERPs whose likelihood x^O_i impacted. We also use PT to compute p(x^{O'}), again tracking dependents D(i; x^{O'}), and let D(i) be the joint set of ERPs that x_i influences in either state x^O or x^{O'}. We then use D(i), p(x^O) and p(x^{O'}) to estimate the amount by which each constituent element x^{O'}_i in the proposal changed the likelihood. We assign "credit" to each i as if it were the only proposal; that is, we assume that if, for example, the likelihood went up, it was entirely due to the change in x^O_i. Of course, the variables' effects are not truly independent; this is a fully-factored approximation to those effects. The final α is shown in Fig. 3 (left), where we define p(x_{D(i)}) to be the likelihood of only the subset of variables that x_i impacted. Here, we prove that our algorithm is valid MCMC by following [16] and showing detailed balance. To do this, we must integrate over all possible rejected paths of the rejected bits x^{O'}_R and x^{MI}_R:

p(x^O) P(x^M | x^O)
  = p(x^O) ∫_{x^{O'}_R} ∫_{x^{MI}_R} Q^{O'}_A Q^{O'}_R P^M_A P^M_R Q^{MI}_R min{ 1, [p(x^M) Q^{MI}_A P^{MI}_A P^{MI}_R] / [p(x^O) Q^{O'}_A P^M_A P^M_R] }
  = ∫_{x^{O'}_R} ∫_{x^{MI}_R} Q^{O'}_R Q^{MI}_R min{ p(x^O) Q^{O'}_A P^M_A P^M_R , p(x^M) Q^{MI}_A P^{MI}_A P^{MI}_R }
  = p(x^M) ∫_{x^{O'}_R} ∫_{x^{MI}_R} Q^{MI}_A Q^{MI}_R P^{MI}_A P^{MI}_R Q^{O'}_R min{ 1, [p(x^O) Q^{O'}_A P^M_A P^M_R] / [p(x^M) Q^{MI}_A P^{MI}_A P^{MI}_R] }
  = p(x^M) P(x^O | x^M),

where the subtlety to the equivalence is that the rejected bits x^{O'}_R and x^{MI}_R have switched roles. □

Alg. 3: Factored Multiple-Try MH
1: Begin in state x^O. Assume it is composed of individual ERPs x^O = x^O_1, · · · , x^O_k.
2: Propose a new state for many ERPs. For i = 1, · · · , k, propose x^{O'}_i ∼ Q(x^{O'}|x^O) s.t. x^{O'}_i ≠ x^O_i.
3: Decide to accept or reject each element of x^{O'}. This test can depend arbitrarily on x^O and x^{O'}, but must decide for each ERP independently; let α_i(x^O, x^{O'}) be the probability of accepting x^{O'}_i. Let A be the set of indices of accepted proposals, and R the set of rejected ones.
4: Construct a new state, x^M = {x^{O'}_i : i ∈ A} ∪ {x^O_j : j ∈ R}. This new state mixes new values for the ERPs from the accepted set A and old values for the ERPs in the rejected set R.
5: Let P^M_A = ∏_{i∈A} α_i(x^O, x^{O'}) be the probability of accepting the ERPs in A, and let P^M_R = ∏_{j∈R} (1 − α_j(x^O, x^{O'})) be the probability of rejecting the ERPs in R.
6: Let Q^{O'}_A = ∏_{i∈A} Q(x^{O'}_i | x^O) and Q^{O'}_R = ∏_{j∈R} Q(x^{O'}_j | x^O).
7: Construct a new state x^{MI}. Propose new values for all of the rejected ERPs using x^M as the start state, but leave ERPs in the accepted set at their original value. For j ∈ R let x^{MI}_j ∼ Q(·|x^M). Then, x^{MI} = {x^O_i : i ∈ A} ∪ {x^{MI}_j : j ∈ R}.
8: Let P^{MI}_A = ∏_{i∈A} α_i(x^M, x^{MI}), and let P^{MI}_R = ∏_{j∈R} (1 − α_j(x^M, x^{MI})).
9: Accept x^M with probability min{ 1, [p(x^M) Q^{MI}_A P^{MI}_A P^{MI}_R] / [p(x^O) Q^{O'}_A P^M_A P^M_R] }.

Alg. 4: A PT-based Acceptance Test
1: The PT algorithm implements α_i(x, x').
2: Compute p(x), tracking D(x_i; x).
3: Compute p(x'), tracking D(x_i; x').
4: Let D(i) = D(x_i; x) ∪ D(x_i; x').
5: Let α_i(x, x') = min{ 1, [p(x'_i) p(x'_{D(i)})] / [p(x_i) p(x_{D(i)})] }.

Figure 3: The factored multiple-try MH algorithm (top), the PT-based acceptance test (left) and an illustration of the process of mixing together different elementary proposals (right).

4.3 Experiments and Results

We implemented provenance tracking in Stochastic MATLAB [28] by leveraging MATLAB's object oriented capabilities, which provide full operator overloading. We tested on four tasks: a Bayesian "mesh induction" task, a small QMR problem, probabilistic matrix factorization [23] and an integer-valued variant of PMF. We measured performance by examining likelihood as a function of wallclock time; an important property of the provenance tracking algorithm is that it can help mitigate constant factors affecting inference performance.

Alg. 5: Bayesian Mesh Induction
1: function X = bmi( base_mesh )
2:   mesh = base_mesh + randn;
3:   img = render( mesh );
4:   X = img + randn;
5: end;

Bayesian mesh induction. The BMI task is simple: given a prior distribution over meshes and a target image, sample a mesh which, when rendered, looks like the target image. The prior is a Gaussian centered around a "mean mesh," which is a perfect sphere; Gaussian noise is added to each vertex to deform the mesh. The model is shown in Alg. 5. The rendering function is a custom OpenGL renderer implemented as a MEX function. No gradients are available for this renderer, but it is reasonably easy to augment it with provenance information recording the vertices of the triangle that were responsible for each pixel. This allows us to make proposals to mesh vertices, while assigning credit based on pixel likelihoods. Results for this task are shown in Fig. 4 ("Face"). Even though the renderer is quite fast, MCMC with simple proposals fails: after proposing a change to a single variable, it must re-render the image in order to compute the likelihood. In contrast, making large, global proposals is very effective. Fig. 4 (top) shows a sequence of images representing burn-in of the model as it starts from the initial condition and samples its way towards regions of high likelihood. A video demonstrating the results is available at http://www.mit.edu/˜wingated/papers/index.html.

Figure 4: Top: Frames from the face task. Bottom: results on Face, QMR, PMF and Integer PMF.

QMR. The QMR model is a bipartite, binary model relating diseases (hidden) to symptoms (observed) using a log-linear noisy-or model. Base rates on diseases can be quite low, so "explaining away" can cause poor mixing.
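The factored multiple-try scheme of Alg. 3 can be sketched compactly on a toy target. This is not the paper's MATLAB implementation: we assume independent standard-normal ERPs, so each variable's descendant set D(i) is just the variable itself and the Alg. 4 test reduces to a per-coordinate likelihood ratio; the symmetric random-walk proposal makes the Q-terms in step 9 cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_phi(x):
    # Unnormalized N(0,1) log-density, per coordinate.
    return -0.5 * x ** 2

def alpha(x, x_new):
    # Per-ERP acceptance probabilities (Alg. 4 with D(i) = {i}).
    return np.minimum(1.0, np.exp(log_phi(x_new) - log_phi(x)))

def step(x, sigma=1.0):
    k = len(x)
    x_prop = x + sigma * rng.standard_normal(k)      # step 2: global proposal
    a_fwd = alpha(x, x_prop)
    acc = rng.random(k) < a_fwd                      # step 3: per-ERP votes
    x_mix = np.where(acc, x_prop, x)                 # step 4: mixed state
    # Step 7: reverse path -- resample rejected ERPs from x_mix, keep
    # accepted ERPs at their original values.
    x_rev = np.where(acc, x, x_mix + sigma * rng.standard_normal(k))
    a_bwd = alpha(x_mix, x_rev)
    # Steps 5-8: P^M and P^MI terms (symmetric Q, so Q-terms cancel).
    with np.errstate(divide="ignore"):
        log_fwd = np.sum(np.log(a_fwd[acc])) + np.sum(np.log1p(-a_fwd[~acc]))
        log_bwd = np.sum(np.log(a_bwd[acc])) + np.sum(np.log1p(-a_bwd[~acc]))
    # Step 9: overall accept/reject of the mixed state.
    log_ratio = (log_phi(x_mix).sum() + log_bwd) - (log_phi(x).sum() + log_fwd)
    return x_mix if np.log(rng.random()) < log_ratio else x
```

A quick consistency check: when every per-ERP vote accepts, the step-9 ratio equals one, as it should for this target. Running the chain and pooling coordinates should recover N(0, 1) statistics.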
Here, MCMC with provenance tracking is effective: it finds high-likelihood solutions quickly, again outperforming naive MCMC.

Probabilistic Matrix Factorization. For the PMF task, we factored a matrix A ∈ R^{1000×1000} with 99% sparsity. PMF places a Gaussian prior over two matrices, U ∈ R^{1000×10} and V ∈ R^{1000×10}, for a total of 20,000 parameters. The model assumes that A_ij ∼ N(U_i V_j^T, 1). In Fig. 4, we see that MCMC with provenance tracking is able to find regions of much higher likelihood much more quickly than naive MCMC. We also compared to an efficient hand-coded MCMC sampler which is capable of making, scoring and accepting/rejecting about 20,000 proposals per second. Interestingly, MCMC with provenance tracking is more efficient than the hand-coded sampler, presumably because of the economies of scale that come with making global proposals.

Integer Probabilistic Matrix Factorization. The Integer PMF task is like ordinary PMF, except that every entry in U and V is constrained to be an integer between 1 and 10. These constraints imply that no gradients exist. Empirically, this does not seem to matter for the efficiency of the algorithm relative to standard MCMC: in Fig. 4 we again see dramatic performance improvements over the baseline Stochastic MATLAB sampler and the hand-coded sampler.

5 Conclusions

We have shown how nonstandard interpretations of probabilistic programs can be used to extract structural information about a distribution, and how this information can be used as part of a variety of inference algorithms. The information can take the form of gradients, Hessians, fine-grained dependencies, or bounds. Empirically, we have implemented two such interpretations and demonstrated how this information can be used to find regions of high likelihood quickly, and how it can be used to generate samples with improved statistical properties versus random-walk style MCMC. There are other types of interpretations which could provide additional information.
For example, interval arithmetic [22] could be used to provide bounds or as part of adaptive importance sampling. Each of these interpretations can be used alone or in concert with each other; one of the advantages of the probabilistic programming framework is the clean separation of models and inference algorithms, making it easy to explore combinations of inference algorithms for complex models. More generally, this work begins to illuminate the close connections between probabilistic inference and programming language theory. It is likely that other techniques from compiler design and program analysis could be fruitfully applied to inference problems in probabilistic programs.

Acknowledgments

DW was supported in part by AFOSR (FA9550-07-1-0075) and by Shell Oil, Inc. NDG was supported in part by ONR (N00014-09-1-0124) and a J. S. McDonnell Foundation Scholar Award. JMS was supported in part by NSF (CCF-0438806), by NRL (N00173-10-1-G023), and by ARL (W911NF-10-2-0060). All views expressed in this paper are the sole responsibility of the authors.

References

[1] C. Bendtsen and O. Stauning. FADBAD, a flexible C++ package for automatic differentiation. Technical Report IMM-REP-1996-17, Department of Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark, Aug. 1996.
[2] C. H. Bischof, A. Carle, G. F. Corliss, A. Griewank, and P. D. Hovland. ADIFOR: Generating derivative codes from Fortran programs. Scientific Programming, 1(1):11–29, 1992.
[3] G. Corliss, C. Faure, A. Griewank, L. Hascoët, and U. Naumann. Automatic Differentiation: From Simulation to Optimization. Springer-Verlag, New York, NY, 2001.
[4] J. Eisner, E. Goldlust, and N. A. Smith. Compiling comp ling: Weighted dynamic programming and the Dyna language. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT-EMNLP), pages 281–290, Vancouver, October 2005.
[5] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Statist. Soc. B, 73(2):123–214, 2011.
[6] N. Goodman, V. Mansinghka, D. Roy, K. Bonawitz, and J. Tenenbaum. Church: a language for generative models. In Uncertainty in Artificial Intelligence (UAI), 2008.
[7] A. Griewank. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Number 19 in Frontiers in Applied Mathematics. SIAM, 2000.
[8] A. Griewank, D. Juedes, and J. Utke. ADOL-C, a package for the automatic differentiation of algorithms written in C/C++. ACM Trans. Math. Software, 22(2):131–167, 1996.
[9] E. Herbst. Gradient and Hessian-based MCMC for DSGE models (job market paper), 2010.
[10] K. Kersting and L. D. Raedt. Bayesian logic programming: Theory and tool. In L. Getoor and B. Taskar, editors, An Introduction to Statistical Relational Learning. MIT Press, 2007.
[11] O. Kiselyov and C. Shan. Embedded probabilistic programming. In Domain-Specific Languages, pages 360–384, 2009.
[12] Y. LeCun and L. Bottou. Lush reference manual. Technical report, 2002. URL http://lush.sourceforge.net.
[13] B. Milch, B. Marthi, S. Russell, D. Sontag, D. L. Ong, and A. Kolobov. BLOG: Probabilistic models with unknown objects. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1352–1359, 2005.
[14] M. B. Monagan and W. M. Neuenschwander. GRADIENT: Algorithmic differentiation in Maple. In International Symposium on Symbolic and Algebraic Computation (ISSAC), pages 68–76, July 1993.
[15] R. M. Neal. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo (Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, Eds.), 2010.
[16] S. Pandolfi, F. Bartolucci, and N. Friel. A generalization of the multiple-try Metropolis algorithm for Bayesian estimation and model selection. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[17] B. A. Pearlmutter and J. M. Siskind. Lazy multivariate higher-order forward-mode AD. In Symposium on Principles of Programming Languages (POPL), pages 155–160, 2007. doi: 10.1145/1190215.1190242.
[18] A. Pfeffer. IBAL: A probabilistic rational programming language. In International Joint Conference on Artificial Intelligence (IJCAI), pages 733–740. Morgan Kaufmann Publ., 2001.
[19] Y. Qi and T. P. Minka. Hessian-based Markov chain Monte Carlo algorithms (unpublished manuscript), 2002.
[20] P. J. Rossky, J. D. Doll, and H. L. Friedman. Brownian dynamics as smart Monte Carlo simulation. Journal of Chemical Physics, 69:4628–4633, 1978.
[21] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
[22] S. Rump. INTLAB - INTerval LABoratory. In Developments in Reliable Computing, pages 77–104. Kluwer Academic Publishers, Dordrecht, 1999.
[23] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Neural Information Processing Systems (NIPS), 2008.
[24] J. M. Siskind and B. A. Pearlmutter. First-class nonstandard interpretations by opening closures. In Symposium on Principles of Programming Languages (POPL), pages 71–76, 2007. doi: 10.1145/1190216.1190230.
[25] B. Speelpenning. Compiling Fast Partial Derivatives of Functions Given by Algorithms. PhD thesis, Department of Computer Science, University of Illinois at Urbana-Champaign, Jan. 1980.
[26] B. Taylor. Methodus Incrementorum Directa et Inversa. London, 1715.
[27] R. E. Wengert. A simple automatic derivative evaluation program. Commun. ACM, 7(8):463–464, 1964.
[28] D. Wingate, A. Stuhlmueller, and N. D. Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
|
2011
|
17
|
4,224
|
Information Rates and Optimal Decoding in Large Neural Populations

Kamiar Rahnama Rad, Liam Paninski
Department of Statistics, Columbia University
{kamiar,liam}@stat.columbia.edu
http://www.stat.columbia.edu/˜liam/research/pubs/kamiar-ss-info.pdf

Abstract

Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations carrying a finite total amount of information, the full spiking population response is asymptotically as informative as a single observation from a Gaussian process whose mean and covariance can be characterized explicitly in terms of network and single neuron properties. The Gaussian form of this asymptotic sufficient statistic allows us in certain cases to perform optimal Bayesian decoding by simple linear transformations, and to obtain closed-form expressions of the Shannon information carried by the network. One technical advantage of the theory is that it may be applied easily even to non-Poisson point process network models; for example, we find that under some conditions, neural populations with strong history-dependent (non-Poisson) effects carry exactly the same information as do simpler equivalent populations of non-interacting Poisson neurons with matched firing rates. We argue that our findings help to clarify some results from the recent literature on neural decoding and neuroprosthetic design.

Introduction

It has long been argued that many key questions in neuroscience can best be posed in information-theoretic terms; the efficient coding hypothesis discussed in [2, 3, 1] represents perhaps the best-known example.
Answering these questions quantitatively requires us to compute the Shannon information rate of neural channels, whether numerically using experimental data or analytically in mathematical models. In many cases it is useful to exploit connections with "ideal observer" analysis, in which the performance of an optimal Bayesian decoder places fundamental bounds on the performance of any biological system given access to the same neural information. However, the non-linear, non-Gaussian, and correlated nature of neural responses has hampered the development of this theory, particularly in the case of high-dimensional and/or time-varying stimuli. The neural decoding literature is far too large to review systematically here; instead, we will focus our attention on work which has attempted to develop an analytical theory to simplify these complex decoding and information-rate problems. Two limiting regimes have received significant analytical attention in the neuroscience literature. In the "high-SNR" regime, n → ∞, where n is the number of neurons encoding the signal of interest; if the information rate of each neuron is bounded away from zero and neurons respond in a conditionally weakly-dependent manner given the stimulus, then the total information provided by the neural population becomes infinite, and the error rate of any reasonable neural decoder tends to zero. For discrete stimuli, the Shannon information is effectively determined in this asymptotic limit by a simpler quantity known as the Chernoff information [9]; for continuous stimuli, maximum likelihood estimation is asymptotically optimal, and the asymptotic Shannon information is controlled by the Fisher information [8, 7]. On the other hand we can consider the "low-SNR" limit, where only a few neurons are observed and each neuron is asymptotically weakly tuned to the stimulus.
In this limit, the Shannon information tends to zero, and under certain conditions the optimal Bayesian estimator (which can be strongly nonlinear in general) can be approximated by a simpler linear estimator; see [5] and more recently [16] for details. In this paper, we study information transmission and optimal decoding in what we would argue is a more biologically-relevant "intermediate" regime, where n is large but the total amount of information provided by the population remains finite, and the problem of decoding the stimulus given the population neural activity remains nontrivial.

Likelihood in the intermediate regime: the inhomogeneous Poisson case

For clarity, we begin by analyzing the information in a simple population of neurons, represented as inhomogeneous Poisson processes that are conditionally independent given the stimulus. We will extend our analysis to more general neural populations in the next section. In response to the stimulus, at each time step t neuron i fires with probability λ_i(t)dt, where the rate is given by

λ_i(t) = f[b_i(t) + ϵ ℓ_{i,t}(θ)],   (1)

where f(·) is a smooth rectifying non-linearity and ϵ is a gain factor controlling each neuron's sensitivity. The baseline firing rate is determined by b_i(t) and is independent of the input signal. The true stimulus at time t is denoted by θ_t, and θ abbreviates the time-varying stimulus θ_{0:T} in the time interval [0, T dt]. The term ℓ_{i,t}(θ) summarizes the dependence of the neuron's firing rate on θ; depending on the setting, this term may represent e.g. a tuning curve or a spatiotemporal filter applied to the stimulus (see examples below). The likelihood includes all the information about the stimulus encoded in the population's spiking response. Neuron i's response at time step t is denoted by the binary variable r_i(t). The loglikelihood at the parameter value ϑ (which may be different from the true parameter θ) is given by the standard point-process formula [21]:

L_ϑ(r) := log p(r|ϑ) = Σ_{i=1}^n Σ_{t=0}^T [ r_i(t) log λ_i(t) − λ_i(t) dt ].   (2)

This expression can be expanded around ϵ = 0:

L_ϑ(r) = L_ϑ(r)|_{ϵ=0} + ϵ ∂L_ϑ(r)/∂ϵ |_{ϵ=0} + (1/2) ϵ² ∂²L_ϑ(r)/∂ϵ² |_{ϵ=0} + O(nϵ³),

where

∂L_ϑ(r)/∂ϵ |_{ϵ=0} = Σ_{i,t} ℓ_{i,t}(ϑ) [ r_i(t) (f′/f)(b_i(t)) − f′(b_i(t)) dt ]
∂²L_ϑ(r)/∂ϵ² |_{ϵ=0} = Σ_{i,t} ℓ²_{i,t}(ϑ) [ r_i(t) (f′/f)′(b_i(t)) − f′′(b_i(t)) dt ].

Let r_i denote the vector representation of the i-th neuron's spike train and let

g_i(r_i) := [ r_i(1)(f′/f)(b_i(1)) − f′(b_i(1))dt, · · · , r_i(T)(f′/f)(b_i(T)) − f′(b_i(T))dt ]^T
h_i(r_i) := [ r_i(1)(f′/f)′(b_i(1)) − f′′(b_i(1))dt, · · · , r_i(T)(f′/f)′(b_i(T)) − f′′(b_i(T))dt ]^T
ℓ_i(ϑ) := [ ℓ_{i,1}(ϑ), ℓ_{i,2}(ϑ), · · · , ℓ_{i,T}(ϑ) ]^T

(with a slight abuse of notation, we use T for both the total number of time steps and the transpose operation; the difference is clear from the context); then

L_ϑ(r) = L_ϑ(r)|_{ϵ=0} + ϵ Σ_{i=1}^n ℓ_i(ϑ)^T g_i(r_i) + (1/2) ϵ² Σ_{i=1}^n ℓ_i(ϑ)^T diag[h_i(r_i)] ℓ_i(ϑ) + O(nϵ³).

This second-order loglikelihood expansion is standard in likelihood theory [24]; as usual, the first term is constant in ϑ and can therefore be ignored, while the third (quadratic) term controls the curvature of the loglikelihood at ϵ = 0, and scales as ϵ²n. In the high-SNR regime discussed above, where n → ∞ and ϵ is fixed, the likelihood becomes sharply peaked at θ (and therefore the Fisher information, which may be understood as the curvature of the log-likelihood at θ, controls the asymptotics of the estimation error in the case of continuous stimuli), and estimation of θ becomes easy; in the low-SNR regime, we fix n and consider the ϵ → 0 limit. Now, finally, we can more precisely define the "intermediate" SNR regime: we will focus on the case of large populations (n → ∞), but in order to keep the total information in a finite range we need to scale the sensitivity ϵ as ϵ ∼ n^{−1/2}. In this setting, the error term O(nϵ³) = O(n^{−1/2}) = o(1) and can therefore be neglected, and the law of large numbers (LLN) implies that

ϵ² ∂²L_ϑ(r)/∂ϵ² |_{ϵ=0} = E_{r|θ} [ (1/n) Σ_i ℓ_i(ϑ)^T diag[h_i(r_i)] ℓ_i(ϑ) ];

consequently, the quadratic term ϵ² ∂²L_ϑ(r)/∂ϵ² |_{ϵ=0} will be independent of the observed spike train and therefore void of information about θ. So the first derivative term is the only part of the likelihood that depends both on the neural activity and ϑ, and may therefore be considered a sufficient statistic in this asymptotic regime: all the information about the stimulus is summarized in

ϵ ∂L_ϑ(r)/∂ϵ |_{ϵ=0} = (1/√n) Σ_i ℓ_i(ϑ)^T g_i(r_i).   (3)

We may further apply the central limit theorem (CLT) to this sum of independent random vectors to conclude that this term converges to a Gaussian process indexed by ϑ (under mild technical conditions that we will ignore here, for clarity). Thus this model enjoys the local asymptotic normality property observed in many parametric statistical models [24]: all of the information in the data can be summarized asymptotically by a sufficient statistic with a sampling distribution that turns out to be Gaussian.

Example: Linearly filtered stimuli and state-space models

In many cases neurons are modeled in terms of simple rectified linear filters responding to the stimulus. We can handle this case easily using the language introduced above, if we let K_i denote the matrix implementing the transformation (K_i θ)_t = ℓ_{i,t}(θ), the projection of the stimulus onto the i-th neuron's stimulus filter. Then,

ϵ ∂L_ϑ(r)/∂ϵ |_{ϵ=0} = ϑ^T [ (1/√n) Σ_{i=1}^n K_i^T ( diag(f′_i/f_i) r_i − f′_i dt ) ] := ϑ^T Δ(r),

where f_i stands for the vector version of f[b_i(t)]. Thus all the information in the population spike train can be summarized in the random vector Δ(r), which is a simple linear function of the observed spike train data. This vector has an asymptotic Gaussian distribution, with mean and covariance

E_{r|θ}(Δ(r)) = (1/√n) Σ_{i=1}^n K_i^T [ diag(f′_i/f_i) ( f_i dt + f′_i dt K_i θ/√n + O(1/n) ) − f′_i dt ]
 = [ (1/n) Σ_{i=1}^n K_i^T diag(f′²_i/f_i dt) K_i ] θ + O(1/√n),

J := cov_{r|θ}(Δ(r)) = (1/n) Σ_{i=1}^n K_i^T diag(f′_i/f_i) cov_{r|θ}(r_i) diag(f′_i/f_i) K_i = (1/n) Σ_{i=1}^n K_i^T diag(f′²_i/f_i dt) K_i + O(1/√n).

Thus, the neural population's non-linear and temporally dynamic response to the stimulus is as informative in this intermediate regime as a single observation from a standard Gaussian experiment, in which the parameter θ is filtered linearly by J and corrupted by Gaussian noise. All of the filtering properties of the population are summarized by the matrix J. (Note that if we consider each K_i as a random sample from some distribution of filters, then J will converge by the law of large numbers to a matrix we can compute explicitly.) Thus in many cases we can perform optimal Bayesian decoding of θ given the spike trains quite easily. For example, if θ has a zero-mean Gaussian prior distribution with covariance C_θ, then the posterior mean and the maximum-a-posteriori (MAP) estimate are well known and coincide with the optimal linear estimate (OLE):

θ̂_OLE(r) = E(θ|r) = (J + C_θ^{−1})^{−1} Δ(r).   (4)

We may compute the Shannon information I(θ : r) between r and θ in a similarly direct fashion. We know that, asymptotically, the sufficient statistic Δ(r) is as informative as the full population response r:

I(θ : r) = I(θ : Δ(r)).

In the case that the prior of θ is Gaussian, as above, the information can therefore be computed quite explicitly via standard formulas for the linear-Gaussian channel [9]:

I(θ : Δ(r)) = (1/2) log det(I + J C_θ).   (5)

To summarize, when the encodings ℓ_{i,t}(θ) are linear in θ, and we are in the intermediate-SNR regime, and the parameter θ has a Gaussian prior distribution, then the optimal Bayesian estimate is obtained by applying a linear transformation to the sufficient statistic Δ(r), which itself is linear in the spike train, and the mutual information between the stimulus and full population response has a particularly simple form.
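A small numerical sketch can illustrate this recipe end-to-end, assuming (our choice, not the paper's) an exponential nonlinearity f = exp (so f′/f = 1 and f′²/f = f), a static stimulus, and one time bin per neuron; all sizes, the baseline b, and the random filters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative intermediate-SNR setup: n Poisson neurons, d-dim stimulus,
# gain eps = n^{-1/2}, exponential nonlinearity.
n, d, dt, b = 20000, 10, 0.1, np.log(10.0)
eps = n ** -0.5
K = rng.standard_normal((n, d))      # row i is neuron i's filter
theta = rng.standard_normal(d)

# Simulate the population response (eq. (1)) and form the sufficient
# statistic Delta(r) and effective filter matrix J; with f = exp,
# f'_i dt = exp(b) dt at eps = 0 and f'^2/f dt = exp(b) dt.
r = rng.poisson(np.exp(b + eps * (K @ theta)) * dt)
Delta = (K.T @ (r - np.exp(b) * dt)) / np.sqrt(n)
J = (np.exp(b) * dt / n) * (K.T @ K)

# MAP / optimal linear decoder (eq. (4)) under a N(0, I) prior, and the
# Shannon information of the linear-Gaussian channel (eq. (5)).
C_theta = np.eye(d)
theta_hat = np.linalg.solve(J + np.linalg.inv(C_theta), Delta)
info = 0.5 * np.linalg.slogdet(np.eye(d) + J @ C_theta)[1]
```

Because e^b dt = 1 here, J is close to the identity, the decoder shrinks the raw statistic roughly by half, and `info` gives the population's Shannon information in nats.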
These results help to extend previous theoretical studies [5, 18, 20, 16] demonstrating that in some cases linear decoding can be optimal, and also shed some light on recent experimental studies indicating that optimal linear and nonlinear Bayesian estimators often have similar performance in practice [13, 12]. To work through a concrete example, consider the case that the temporal sequence of parameter values θ_t is generated by an autoregressive process:

θ_{t+1} = A θ_t + η_t,   η_t ∼ N(0, R),

for a stable dynamics matrix A and positive-semidefinite covariance matrix R. Further assume that the observation matrices K_i act instantaneously, i.e., K_i is block-diagonal with blocks K_{i,t}, and therefore the responses are modeled as

r_i(t) ∼ Poiss[f(b_i(t) + ϵ K_{i,t} θ_t) dt].

Thus θ and the responses r together represent a state-space model. This framework has been shown to lead to state-of-the-art performance in a wide variety of neural data analysis settings [14]. To understand optimal inference in this class of models in the intermediate-SNR regime, we may follow the recipe outlined above: we see that the asymptotic sufficient statistic in this model can be represented as

Δ_t = J_t θ_t + ϵ_t,   ϵ_t ∼ N(0, J_t),

where the effective filter matrix J defined above is block-diagonal (due to the block-diagonal structure of the filter matrices K_i), with blocks we have denoted J_t. Thus Δ_t represents observations from a linear-Gaussian state-space model, i.e., a Kalman filter model [17]. Optimal decoding of θ given the observation sequence Δ_{1:T} can therefore be accomplished via the standard forward-backward Kalman filter-smoother [10]; see Fig. 1 for an illustration. The information rate lim_{T→∞} I(θ_{0:T} : r_{0:T}) = lim_{T→∞} I(θ_{0:T} : Δ(r)_{0:T}) may be computed via similar recursions in the stationary case (i.e., when J_t is constant in time).
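The forward half of the filter-smoother is the standard Kalman recursion applied to the effective model Δ_t = J θ_t + ϵ_t, ϵ_t ∼ N(0, J). A hedged sketch (the concrete matrices in the usage note are our own illustrative choices, not values from the paper):

```python
import numpy as np

# Forward Kalman filtering for theta_{t+1} = A theta_t + eta_t,
# eta_t ~ N(0, R), with observations Delta_t = J theta_t + eps_t,
# eps_t ~ N(0, J): observation matrix and noise covariance are both J.
def kalman_filter(Delta_seq, A, R, J, mu0, P0):
    mu, P = mu0, P0
    means, covs = [], []
    for Delta in Delta_seq:
        # Measurement update.
        S = J @ P @ J.T + J
        G = P @ J.T @ np.linalg.inv(S)      # Kalman gain
        mu = mu + G @ (Delta - J @ mu)
        P = P - G @ J @ P
        means.append(mu.copy())
        covs.append(P.copy())
        # Time update through the dynamics.
        mu, P = A @ mu, A @ P @ A.T + R
    return means, covs
```

For example, with A = 0.9 I, R = 0.1 I, J = I and a standard normal prior on θ_0, the first measurement update cuts the posterior covariance exactly in half, since S = 2I and G = I/2.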
The result may be expressed most explicitly in terms of a matrix which is the solution of a Riccati equation involving the effective Kalman model parameters; the details are provided in the appendix.

Nonlinear examples: orientation coding, place fields, and small-time expansions

While the linear setting discussed above can handle many examples of interest, it does not seem general enough to cover two well-studied decoding problems: inferring the orientation of a visual stimulus from a population of cortical neurons [19, 4], or inferring position from a population of hippocampal or entorhinal neurons [6]. In the former case, the stimulus is a phase variable, and therefore does not fit gracefully into the linear setting described above; in the latter case, place fields and grid fields are not well-approximated as linear functions of position. If we apply our general theory in these settings, the interpretation of the encoding function ℓ_i(θ) does not change significantly: ℓ_i(θ) could represent the tuning curve of neuron i as a function of the orientation of the visual stimulus, or of the animal's location in space. However, without further assumptions the limiting sufficient statistic, which is a weighted sum of these encoding functions ℓ_i(θ) (recall eq. 3), may result in an infinite-dimensional Gaussian process, which may be computationally inconvenient. To simplify matters somewhat, we can introduce a mild assumption on the tuning functions ℓ_i(θ). Let's assume that these functions may be expressed in some low-dimensional basis:

ℓ_i(θ) = K_i Φ(θ),

for some vectors K_i, where Φ(θ) is defined to map θ into an mT-dimensional space which is usually smaller than dim(θ) = dim(θ_t) T.
This finite-basis assumption is very natural: in the orientation example, tuning curves are periodic in the angle θ_t and are therefore typically expressed as sums of a few Fourier functions; similarly, two-dimensional finite Fourier or Zernike bases are often used to represent grid or place fields [6]. The key point here is that we may now simply follow the derivation of the last section with Φ(θ) in place of θ; we find that the sufficient statistic may be represented asymptotically as an mT-dimensional Gaussian vector with mean JΦ(θ) and covariance J, with J defined as in the preceding section. We should note that this nonlinear case does remain slightly more complicated than the linear case in one respect: while the likelihood with respect to Φ(θ) reduces to something very simple and tractable, the prior (which is typically defined as a function of θ) might be some complicated function of the remapped variable Φ(θ). So in most interesting nonlinear cases we can no longer compute the optimal Bayesian decoder or the Shannon information rate analytically. However, our approach does lead to a major simplification in numerical investigations into theoretical coding issues. For example, to examine the coding efficiency of a population of neurons encoding an orientation variable in this intermediate-SNR regime we do not need to simulate the responses of the entire population (which would involve drawing nT random variables, for some large population size n); instead, we only need to draw a single equivalent mT-dimensional Gaussian vector Δ(r), and quantify the decoding performance based on the approximate loglikelihood

L_ϑ(r) = L_ϑ(r)|_{ϵ=0} + Φ(ϑ)^T Δ(r) − (1/2) Φ(ϑ)^T J Φ(ϑ) + O(1/√n),

which as emphasized above has a simple quadratic form as a function of Φ(ϑ). Since m can typically be chosen to be much smaller than n, this approach can result in significant computational savings.
We now switch gears slightly and examine another related intermediate regime in which nonlinear encoding plays a key role: instead of letting the sensitivity ϵ of each neuron become small (in order to keep the total information in the population finite), we could instead keep the sensitivity constant and let the time period over which we are observing the population scale inversely with the population size n. This short-time limit is sensible in some physiological and psychophysical contexts [22] and was examined analytically in [15] to study the impact of inter-neuron dependencies on information transmission. Our methods can also be applied to this short-time limit. We begin by writing the loglikelihood of the observed spike count vector r in a single time-bin of length dt:

L_ϑ(r) := log p(r|ϑ) = Σ_i { r_i log f[b_i + ℓ_i(ϑ)] − f[b_i + ℓ_i(ϑ)] dt }.

The second term does not depend on r; therefore, all information in r about θ resides in the sufficient statistic

Δ_ϑ(r) := Σ_i r_i log f[b_i + ℓ_i(ϑ)].

Since the i-th neuron fires with probability f[b_i + ℓ_i(θ)] dt, the mean of Δ_ϑ(r) scales with n dt, and it is clear that dt = 1/n is a natural scaling of the time bin. With this scaling Δ_ϑ(r) converges to a Gaussian stochastic process with mean

E_{r|θ}[Δ_ϑ(r)] = (1/n) Σ_i f[b_i + ℓ_i(θ)] log f[b_i + ℓ_i(ϑ)]

and covariance

cov_{r|θ}[Δ_ϑ(r), Δ_{ϑ′}(r)] = (1/n) Σ_i f[b_i + ℓ_i(θ)] { log f[b_i + ℓ_i(ϑ)] } { log f[b_i + ℓ_i(ϑ′)] },

where we have used the fact that the variance of a Poisson random variable coincides with its mean. In general, this limiting Gaussian process will be infinite-dimensional. However, if we choose the exponential nonlinearity (f(·) = exp(·)) and the encoding functions ℓ_i(θ) are of the finite-dimensional form considered above, ℓ_i(θ) = K_i^T Φ(θ), then the log f[b_i + ℓ_i(ϑ)] term in the definition of Δ_ϑ(r) simplifies: in this case, all information about θ is captured by the sufficient statistic

Δ(r) = Σ_i r_i K_i.
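A quick Monte Carlo check of this short-time statistic, assuming the exponential nonlinearity and the dt = 1/n scaling: since E[r_i] = f(b_i + K_i^T Φ(θ)) dt, the mean of Δ(r) = Σ_i r_i K_i equals (1/n) Σ_i f(b_i + K_i^T Φ(θ)) K_i. The basis vector Φ(θ), the baselines b_i and the filters K_i below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative short-time setup: n neurons, m-dimensional basis.
n, m = 500, 4
b = -1.0 + 0.1 * rng.standard_normal(n)
K = rng.standard_normal((n, m))
phi_theta = np.array([0.3, -0.2, 0.1, 0.0])     # Phi(theta), fixed

rates = np.exp(b + K @ phi_theta)                # f[b_i + K_i^T Phi(theta)]
dt = 1.0 / n                                     # natural time-bin scaling

# Analytic mean of Delta(r): (1/n) sum_i f(.) K_i.
analytic_mean = (rates[:, None] * K).sum(axis=0) / n

# Monte Carlo: draw Poisson spike counts for one bin, form Delta per trial.
trials = 4000
counts = rng.poisson(rates * dt, size=(trials, n))
mc_mean = (counts @ K).mean(axis=0)              # average of sum_i r_i K_i
```

With 4000 trials, the Monte Carlo estimate agrees with the analytic mean to within a few standard errors per component.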
If we again let dt = 1/n, then we find that ∆(r) converges to a finite-dimensional Gaussian random vector with mean and covariance

E_{r|θ}[∆(r)] = (1/n) Σ_i f(b_i + K_i^T Φ(θ)) K_i;  cov_{r|θ}[∆(r)] = (1/n) Σ_i f(b_i + K_i^T Φ(θ)) K_i K_i^T;

again, if the filters K_i are modeled as independent draws from some fixed distribution, then the above normalized sums converge to their expectations, by the LLN. Thus, as in the intermediate-SNR regime, we see that inference can be dramatically simplified in this short-time setting.

Likelihood in the intermediate regime: non-Poisson effects

We conclude by discussing the generalization to non-Poisson networks with interneuronal dependencies and nontrivial correlation structure. We generalize the rate equation (1) to

λ_i(t) = f_i( b_i(t) + ε ℓ_{i,t}(θ) | H_t ),

where H_t stands for the spiking activity of all neurons prior to time t: H_t = {r_i(t′)}_{t′<t, 1≤i≤n}. Note that the influence of spiking history may be different for each neuron: refractory periods, self-inhibition and coupling between neurons can be formulated by appropriately defining the dependence of f_i(·) on H_t. We begin, as usual, by expanding the log-likelihood. The basic point-process likelihood (eq. 2) remains valid. Let g_i(r) and h_i(r) denote the vector versions of

r_i(t) (f_i′/f_i)( b_i(t) | H_t ) − f_i′( b_i(t) | H_t ) dt

and

r_i(t) [(f_i′/f_i)′]( b_i(t) | H_t ) − f_i″( b_i(t) | H_t ) dt,

respectively, analogously to the Poisson case. Then, the first and second terms in the expansion of the loglikelihood may be written as

ε ∂L_ϑ(r)/∂ε |_{ε=0} = ε Σ_i ℓ_i^T(ϑ) g_i(r)  and  (1/2) ε² ∂²L_ϑ(r)/∂ε² |_{ε=0} = (1/2) ε² Σ_i ℓ_i^T(ϑ) diag[h_i(r)] ℓ_i(ϑ),

as before. For independent neurons, the log-likelihood was composed of normalized sums of independent random variables that converged to a Gaussian process, by the CLT. In the history-dependent, coupled case, g_i(r) and h_i(r) depend not only on the i-th neuron's activity r_i, but rather on the whole network history.
Nonetheless, under technical conditions on the network's dependence structure (to ensure that the firing rates and correlations in the network remain bounded), we may still exploit versions of the LLN and CLT. Thus, under conditions ensuring the validity of the LLN we may conclude that, as before, the second-order term ε² ∂²L_ϑ(r)/∂ε² |_{ε=0} converges to its expectation under the intermediate ε ∼ n^{−1/2} scaling, and therefore carries no information about θ. When we discard this second-order term, along with higher-order terms that are negligible in the intermediate-SNR, large-n limit, we are left once again with the gradient term

ε ∂L_ϑ(r)/∂ε |_{ε=0} = (1/√n) Σ_i ℓ_i(ϑ)^T g_i(r),

which under appropriate conditions (ensuring the validity of a CLT) will converge to a Gaussian process limit whose mean and covariance we can often compute analytically. Let's turn to a specific example, in order to make these claims somewhat more concrete. Consider a network with weak couplings and possibly strong self-inhibition and history dependence; more precisely, we assume that interneuronal conditional cross-covariances are weak, given the stimulus: cov[r_i(t), r_j(t + τ)|θ] = O(n^{−1}) for i ≠ j. See, e.g., [11, 23] for further discussion of this condition, which is satisfied for many spiking networks in which the synaptic weights scale uniformly as O(n^{−1}). For simplicity, we will also restrict our attention to linear encoding functions, though generalizations to the nonlinear case are straightforward. Thus, as before, let K_i denote the matrix implementing the transformation (K_i θ)_t = ℓ_{i,t}(θ), the projection of the stimulus onto the i-th neuron's stimulus filter. Then

ε ∂L_ϑ(r)/∂ε |_{ε=0} = ϑ^T ( (1/√n) Σ_{i=1}^n K_i^T [ diag(f_i′/f_i) r_i − f_i′ dt ] ),

where f_i stands for the vector version of f_i( b_i(t) | H_t ); in other words, the t-th entry of f_i dt is the probability of observing a spike in the interval [t, t + dt], given the network spiking history H_t in the absence of input.
Our sufficient statistic is therefore exactly as in the Poisson setting,

∆(r) := (1/√n) Σ_{i=1}^n K_i^T [ diag(f_i′/f_i) r_i − f_i′ dt ],  (6)

except for the history-dependence induced through the redefinition of f_i. Computing the necessary means and covariances in this case requires more work than in the Poisson case; see the appendix for details. It is helpful (though not necessary) to make the stationarity assumption b_i(t) ≡ b_i, which implies in this setting that E[(f_i′)²/f_i] can also be chosen to be time-invariant; in this case the limiting covariance and mean of the sufficient statistic are given by

J := cov_{r|θ}[∆(r)] = (1/n) Σ_{i=1}^n K_i^T diag( E_{r|θ=0}[ (f_i′)²/f_i dt ] ) K_i;  E_{r|θ}[∆(r)] = Jθ,

where the expectations are over the spontaneous network activity in the absence of any input. In short, once again, we have ∆(r) →_D N(Jθ, J). Analytically, the only challenge here is to compute the expectations in the definition of J. In many cases this can be done analytically (e.g., in any population of uncoupled renewal-process neurons), or by using mean-field theory [23], or numerically by simply calculating the mean firing rate of the network in the undriven state θ = 0. We examine this convergence quantitatively in Fig. 1. In this case the stimulus θ_t was a sample path from a one-dimensional autoregressive (AR(1)) process. Spikes were generated according to

λ_i(t) = λ_0 exp( θ_t/√n + Σ_{j=1}^n w_ji I_j(t) ) 1_{τ_i(t) > τ_ref},

where I_j(t) is the synaptic input from the j-th cell (generated by convolving the spike train r_j with an exponential of time constant 20 ms), w_ji is the synaptic weight matrix coupling the output of neuron j to the input of neuron i, and τ_i(t) is the time since the last spike; therefore, 1_{τ_i(t) > τ_ref} enforces the absolute refractory period τ_ref, which was set to be 2 ms here.
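As an illustration of how cheap the limiting decoder is when the prior inverse covariance is tridiagonal (as for the AR(1) stimulus used here), the following sketch computes θ̂_∆ = (J + C_θ^{−1})^{−1}∆(r) in O(T) with the classic Thomas algorithm. All numerical values (the AR(1) prior parameters, the stand-in diagonal J, and the stand-in ∆(r)) are hypothetical placeholders, not the paper's simulation settings.

```python
import numpy as np

def thomas_solve(sub, diag, sup, b):
    """Solve a tridiagonal linear system in O(T) (Thomas algorithm)."""
    n = len(diag)
    c, d = np.empty(n - 1), np.empty(n)
    c[0] = sup[0] / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / denom
        d[i] = (b[i] - sub[i - 1] * d[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Hypothetical AR(1) prior theta_t = rho*theta_{t-1} + N(0, tau2) noise,
# whose inverse covariance C^{-1} is tridiagonal.
T, rho, tau2 = 200, 0.95, 0.1
main = np.full(T, (1 + rho**2) / tau2)
main[0] = main[-1] = 1.0 / tau2
off = np.full(T - 1, -rho / tau2)

J_diag = np.full(T, 2.0)            # stand-in for the diagonal limiting matrix J
rng = np.random.default_rng(1)
delta = rng.normal(size=T)          # stand-in for the sufficient statistic Delta(r)

# Limiting optimal estimator: theta_hat = (J + C^{-1})^{-1} Delta(r), in O(T).
theta_hat = thomas_solve(off, main + J_diag, off, delta)
```

The same banded structure is what makes the decoder linear-time in the state-space setting discussed in the text.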
Since the encoding filters K_i act instantaneously in this model (K_i can be represented as a delta function, weighted by n^{−1/2}), the observed spike trains can be considered observations from a state-space model, as described above. The weights w_ji were generated randomly from a uniform distribution on the interval [−5/n, 5/n], with self-weights w_ii = 0, and Σ_j w_ji = 0 to enforce detailed balance in the network. Note that, while the interneuronal coupling is weak in this example, the autocorrelation in these spike trains is quite strong on short time scales, due to the absolute refractory effect. We compared two estimators of θ: the full (nonlinear) MAP estimate θ̂_MAP = arg max_θ p(θ|r), which we computed using the fast direct optimization methods described in [14], and the limiting optimal estimator θ̂_∆ := (J + C_θ^{−1})^{−1}∆(r). Note that J is diagonal; we computed the expectations in the definition of J using the numerical approach described above in this simulation, though in

Figure 1: The left panels show the true stimulus (green), MAP estimate (red) and the limiting optimal estimator θ̂_∆ := (J + C_θ^{−1})^{−1}∆(r) (blue) for various population sizes n (n = 1, 5, 20; spike trains with 2 ms refractory period, 20 ms synaptic time constant and baseline rate 30 Hz). The middle panels show the spike trains used to compute these estimates. The right panels show the sufficient statistics ∆(r) used to compute θ̂_∆. Note that the same true stimulus was used in all three simulations. As n increases, the linear decoder converges to the MAP estimate, despite the nonlinear and correlated nature of the network model generating the spike trains (see main text for details).
other simulations (with uncoupled renewal-model populations) we checked that the fully-analytical approach gave the correct solution. In addition, C_θ^{−1} is tridiagonal in this state-space setting; thus the linear matrix equation in eq. (4) can be solved efficiently in O(T) time using standard tridiagonal matrix solvers. We find that, as predicted, the full nonlinear Bayesian estimator θ̂_MAP approaches the limiting optimal estimator θ̂_∆ as n becomes large; n = 20 is basically sufficient in this case, although of course the convergence will be slower for larger values of the gain factor ε (or, equivalently, larger filters K_i or larger values of the variance of θ_t). We conclude with a few comments about these results. First, note that the covariance matrix J we have computed here coincides almost exactly with what we computed previously in the Poisson case. Indeed, we can make this connection much more precise: we can always choose an equivalent Poisson network with rates defined so that the E_{r|θ=0}[(f_i′)²/f_i] term in the non-Poisson network matches the (f_i′)²/f_i term in the Poisson network. Since J determines the information rate completely, we conclude that for any weakly-coupled network there is an equivalent Poisson network which conveys exactly the same information in the intermediate regime. However, note that the sufficient statistic ∆(r) is different in the Poisson and non-Poisson settings, since the f′/f term linearly reweights the observed spikes, depending on how likely they were given the history; thus the optimal Bayesian decoder incorporates non-Poisson effects explicitly. A number of interesting questions remain open. For example, while we expect a LLN and CLT to continue to hold in many cases of strong, structured interneuronal coupling, computing the asymptotic mean and covariance of the sufficient statistic ∆(r) may be more challenging in such cases, and new phenomena may arise.

References

[1] J. Atick.
Could information theory provide an ecological theory of sensory processing? Network: Computation in Neural Systems, pages 213–251, May 1992.
[2] F. Attneave. Some informational aspects of visual perception. Psychological Review, 1954.
[3] H. B. Barlow. Possible principles underlying the transformation of sensory messages. Sensory Communication, pages 217–234, 1961.
[4] P. Berens, A. S. Ecker, S. Gerwinn, A. S. Tolias, and M. Bethge. Reassessing optimal neural population codes with neurometric functions. Proceedings of the National Academy of Sciences, 108:4423–4428, 2011.
[5] W. Bialek and A. Zee. Coding and computation with neural spike trains. Journal of Statistical Physics, 59:103–115, 1990.
[6] E. Brown, L. Frank, D. Tang, M. Quirk, and M. Wilson. A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. Journal of Neuroscience, 18:7411–7425, 1998.
[7] N. Brunel and J.-P. Nadal. Mutual information, Fisher information, and population coding. Neural Computation, 10(7):1731–1757, 1998.
[8] B. Clarke and A. Barron. Information-theoretic asymptotics of Bayes methods. IEEE Transactions on Information Theory, 36:453–471, 1990.
[9] T. Cover and J. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[10] J. Durbin and S. Koopman. Time Series Analysis by State Space Methods. Oxford University Press, 2001.
[11] I. Ginzburg and H. Sompolinsky. Theory of correlations in stochastic neural networks. Physical Review E, 50(4):3171–3191, 1994.
[12] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski. Population decoding of motor cortical activity using a generalized linear model with hidden states. Journal of Neuroscience Methods, 2011.
[13] J. Macke, L. Büsing, J. P. Cunningham, B. Yu, K. Shenoy, and M. Sahani. Modelling low-dimensional dynamics in recorded spiking populations. COSYNE, 2011.
[14] L. Paninski, Y. Ahmadian, D. Ferreira, S. Koyama, K. Rahnama Rad, M. Vidne, J. Vogelstein, and W. Wu. A new look at state-space models for neural data. Journal of Computational Neuroscience, 29(1):107–126, 2010.
[15] S. Panzeri, S. Schultz, A. Treves, and E. Rolls. Correlations and the encoding of information in the nervous system. Proceedings of the Royal Society London B, 266(1423):1001–1012, 1999.
[16] J. Pillow, Y. Ahmadian, and L. Paninski. Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains. Neural Computation, 23(1):1–45, January 2011.
[17] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11:305–345, 1999.
[18] E. Salinas and L. Abbott. Vector reconstruction from firing rates. Journal of Computational Neuroscience, 1:89–107, 1994.
[19] H. S. Seung and H. Sompolinsky. Simple models for reading neuronal population codes. Proceedings of the National Academy of Sciences, 90:10749–10753, 1993.
[20] H. Snippe. Parameter extraction from population codes: A critical assessment. Neural Computation, 8:511–529, 1996.
[21] D. Snyder and M. Miller. Random Point Processes in Time and Space. Springer-Verlag, 1991.
[22] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520–522, 1996.
[23] T. Toyoizumi, K. Rahnama Rad, and L. Paninski. Mean-field approximations for coupled populations of generalized linear model spiking neurons with Markov refractoriness. Neural Computation, 21:1203–1243, 2009.
[24] A. van der Vaart. Asymptotic Statistics. Cambridge University Press, Cambridge, 1998.
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

Mark Schmidt mark.schmidt@inria.fr
Nicolas Le Roux nicolas@le-roux.name
Francis Bach francis.bach@ens.fr
École Normale Supérieure, Paris
INRIA - SIERRA Project Team

Abstract

We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.

1 Introduction

In recent years the importance of taking advantage of the structure of convex optimization problems has become a topic of intense research in the machine learning community. This is particularly true of techniques for non-smooth optimization, where taking advantage of the structure of non-smooth terms seems to be crucial to obtaining good performance. Proximal-gradient methods and accelerated proximal-gradient methods [1, 2] are among the most important methods for taking advantage of the structure of many of the non-smooth optimization problems that arise in practice. In particular, these methods address composite optimization problems of the form

minimize_{x∈R^d} f(x) := g(x) + h(x),  (1)

where g and h are convex functions but only g is smooth. One of the most well-studied instances of this type of problem is ℓ1-regularized least squares [3, 4],

minimize_{x∈R^d} (1/2)∥Ax − b∥² + λ∥x∥₁,

where we use ∥·∥ to denote the standard ℓ2-norm.
Proximal-gradient methods are an appealing approach for solving these types of non-smooth optimization problems because of their fast theoretical convergence rates and strong practical performance. While classical subgradient methods only achieve an error level on the objective function of O(1/√k) after k iterations, proximal-gradient methods have an error of O(1/k) while accelerated proximal-gradient methods further reduce this to O(1/k²) [1, 2]. That is, accelerated proximal-gradient methods for non-smooth convex optimization achieve the same optimal convergence rate that accelerated gradient methods achieve for smooth optimization. Each iteration of a proximal-gradient method requires the calculation of the proximity operator,

prox_L(y) = arg min_{x∈R^d} (L/2)∥x − y∥² + h(x),  (2)

where L is the Lipschitz constant of the gradient of g. We can efficiently compute an analytic solution to this problem for several notable choices of h, including the case of ℓ1-regularization and disjoint group ℓ1-regularization [5, 6]. However, in many scenarios the proximity operator may not have an analytic solution, or it may be very expensive to compute this solution exactly. This includes important problems such as total-variation regularization and its generalizations like the graph-guided fused-LASSO [7, 8], nuclear-norm regularization and other regularizers on the singular values of matrices [9, 10], and different formulations of overlapping group ℓ1-regularization with general groups [11, 12].
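For concreteness, in the ℓ1-regularization case the proximity operator (2) reduces to elementwise soft-thresholding; a minimal numpy sketch (the function name and example values are our own, chosen for illustration):

```python
import numpy as np

def prox_l1(y, lam, L):
    """Proximity operator of h(x) = lam * ||x||_1 with quadratic weight L:
    argmin_x (L/2)||x - y||^2 + lam*||x||_1, solved elementwise by
    soft-thresholding with threshold lam/L."""
    return np.sign(y) * np.maximum(np.abs(y) - lam / L, 0.0)

y = np.array([3.0, -0.2, 0.5, -2.0])
x = prox_l1(y, lam=1.0, L=2.0)  # threshold lam/L = 0.5 -> [2.5, 0.0, 0.0, -1.5]
```

Entries with magnitude below the threshold are set exactly to zero, which is what produces sparse iterates.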
Despite the difficulty in computing the exact proximity operator for these regularizers, efficient methods have been developed to compute approximate proximity operators in all of these cases; accelerated projected gradient and Newton-like methods that work with a smooth dual problem have been used to compute approximate proximity operators in the context of total-variation regularization [7, 13], Krylov subspace methods and low-rank representations have been used to compute approximate proximity operators in the context of nuclear-norm regularization [9, 10], and variants of Dykstra’s algorithm (and related dual methods) have been used to compute approximate proximity operators in the context of overlapping group ℓ1-regularization [12, 14, 15]. It is known that proximal-gradient methods that use an approximate proximity operator converge under only weak assumptions [16, 17]; we briefly review this and other related work in the next section. However, despite the many recent works showing impressive empirical performance of (accelerated) proximal-gradient methods that use an approximate proximity operator [7, 13, 9, 10, 14, 15], up until recently there was no theoretical analysis on how the error in the calculation of the proximity operator affects the convergence rate of proximal-gradient methods. In this work we show in several contexts that, provided the error in the proximity operator calculation is controlled in an appropriate way, inexact proximal-gradient strategies achieve the same convergence rates as the corresponding exact methods. In particular, in Section 4 we first consider convex objectives and analyze the inexact proximal-gradient (Proposition 1) and accelerated proximal-gradient (Proposition 2) methods. We then analyze these two algorithms for strongly convex objectives (Proposition 3 and Proposition 4). Note that in these analyses, we also consider the possibility that there is an error in the calculation of the gradient of g. 
We then present an experimental comparison of various inexact proximal-gradient strategies in the context of solving a structured sparsity problem (Section 5).

2 Related Work

The algorithm we shall focus on in this paper is the proximal-gradient method

x_k = prox_L[ y_{k−1} − (1/L)(g′(y_{k−1}) + e_k) ],  (3)

where e_k is the error in the calculation of the gradient and the proximity problem (2) is solved inexactly so that x_k has an error of ε_k in terms of the proximal objective function (2). In the basic proximal-gradient method we choose y_k = x_k, while in the accelerated proximal-gradient method we choose y_k = x_k + β_k(x_k − x_{k−1}), where the sequence (β_k) is chosen appropriately. There is a substantial amount of work on methods that use an exact proximity operator but have an error in the gradient calculation, corresponding to the special case where ε_k = 0 but e_k is non-zero. For example, when the e_k are independent, zero-mean, and finite-variance random variables, then proximal-gradient methods achieve the (optimal) error level of O(1/√k) [18, 19]. This is different from the scenario we analyze in this paper since we do not assume unbiased nor independent errors, but instead consider a sequence of errors converging to 0. This leads to faster convergence rates, and makes our analysis applicable to the case of deterministic (and even adversarial) errors. Several authors have recently analyzed the case of a fixed deterministic error in the gradient, and shown that accelerated gradient methods achieve the optimal convergence rate up to some accuracy that depends on the fixed error level [20, 21, 22], while the earlier work of [23] analyzes the gradient method in the context of a fixed error level. This contrasts with our analysis, where by allowing the error to change at every iteration we can achieve convergence to the optimal solution. Also, we can tolerate a large error in early iterations when we are far from the solution, which may lead to substantial computational gains.
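To make recursion (3) concrete, here is a minimal numpy sketch of the inexact accelerated proximal-gradient method on the ℓ1-regularized least-squares example, with an artificial gradient error of norm 1/k² injected at every step (a rate for which the analysis below guarantees the fast rate is preserved). All problem data are made up, and the proximity operator here happens to be exact soft-thresholding; this is an illustration of the iteration, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))
b = rng.normal(size=40)
lam = 0.1
L = np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the gradient of g

def grad_g(x):
    return A.T @ (A @ x - b)

def prox(y, thresh):
    """Exact prox of lam*||.||_1: elementwise soft-thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = y = np.zeros(10)
for k in range(1, 201):
    e_k = rng.normal(size=10)
    e_k *= 1.0 / (k ** 2 * np.linalg.norm(e_k))     # gradient error, ||e_k|| = 1/k^2
    x_new = prox(y - (grad_g(y) + e_k) / L, lam / L)  # recursion (3)
    y = x_new + (k - 1) / (k + 2) * (x_new - x)       # accelerated: beta_k = (k-1)/(k+2)
    x = x_new
```

Setting `y = x_new` instead of the momentum update recovers the basic method.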
Other authors have analyzed the convergence rate of the gradient and projected-gradient methods with a decreasing sequence of errors [24, 25], but this analysis does not consider the important class of accelerated gradient methods. In contrast, the analysis of [22] allows a decreasing sequence of errors (though convergence rates in this context are not explicitly mentioned) and considers the accelerated projected-gradient method. However, the authors of this work only consider the case of an exact projection step, and they assume the availability of an oracle that yields global lower and upper bounds on the function. This non-intuitive oracle leads to a novel analysis of smoothing methods, but leads to slower convergence rates than proximal-gradient methods. The analysis of [21] considers errors in both the gradient and projection operators for accelerated projected-gradient methods, but this analysis requires that the domain of the function is compact. None of these works consider proximal-gradient methods. In the context of proximal-point algorithms, there is a substantial literature on using inexact proximity operators with a decreasing sequence of errors, dating back to the seminal work of Rockafellar [26]. Accelerated proximal-point methods with a decreasing sequence of errors have also been examined, beginning with [27]. However, unlike proximal-gradient methods where the proximity operator is only computed with respect to the non-smooth function h, proximal-point methods require the calculation of the proximity operator with respect to the full objective function. In the context of composite optimization problems of the form (1), this requires the calculation of the proximity operator with respect to g + h. Since it ignores the structure of the problem, this proximity operator may be as difficult to compute (even approximately) as the minimizer of the original problem.
Convergence of inexact proximal-gradient methods can be established with only weak assumptions on the method used to approximately solve (2). For example, we can establish that inexact proximal-gradient methods converge under some closedness assumptions on the mapping induced by the approximate proximity operator, and the assumption that the algorithm used to compute the inexact proximity operator achieves sufficient descent on problem (2) compared to the previous iteration x_{k−1} [16]. Convergence of inexact proximal-gradient methods can also be established under the assumption that the norms of the errors are summable [17]. However, these prior works did not consider the rate of convergence of inexact proximal-gradient methods, nor did they consider accelerated proximal-gradient methods. Indeed, the authors of [7] chose to use the non-accelerated variant of the proximal-gradient algorithm since even convergence of the accelerated proximal-gradient method had not been established under an inexact proximity operator. While preparing the final version of this work, [28] independently gave an analysis of the accelerated proximal-gradient method with an inexact proximity operator and a decreasing sequence of errors (assuming an exact gradient). Further, their analysis leads to a weaker dependence on the errors than in our Proposition 2. However, while we only assume that the proximal problem can be solved up to a certain accuracy, they make the much stronger assumption that the inexact proximity operator yields an ε_k-subdifferential of h [28, Definition 2.1]. Our analysis can be modified to give an improved dependence on the errors under this stronger assumption. In particular, the terms in √ε_i disappear from the expressions of A_k, Ã_k and Â_k appearing in the propositions, leading to the optimal convergence rate with a slower decay of ε_i. More details may be found in [29].
3 Notation and Assumptions

In this work, we assume that the smooth function g in (1) is convex and differentiable, and that its gradient g′ is Lipschitz-continuous with constant L, meaning that for all x and y in R^d we have

∥g′(x) − g′(y)∥ ⩽ L∥x − y∥.

This is a standard assumption in differentiable optimization, see [30, §2.1.1]. If g is twice-differentiable, this corresponds to the assumption that the eigenvalues of its Hessian are bounded above by L. In Propositions 3 and 4 only, we will also assume that g is µ-strongly convex (see [30, §2.1.3]), meaning that for all x and y in R^d we have

g(y) ⩾ g(x) + ⟨g′(x), y − x⟩ + (µ/2)∥y − x∥².

In contrast to these assumptions on g, we will only assume that h in (1) is a lower semi-continuous proper convex function (see [31, §1.2]), but will not assume that h is differentiable or Lipschitz-continuous. This allows h to be any real-valued convex function, but also allows for the possibility that h is an extended real-valued convex function. For example, h could be the indicator function of a convex set, and in this case the proximity operator becomes the projection operator. We will use x_k to denote the parameter vector at iteration k, and x* to denote a minimizer of f. We assume that such an x* exists, but do not assume that it is unique. We use e_k to denote the error in the calculation of the gradient at iteration k, and we use ε_k to denote the error in the proximal objective function achieved by x_k, meaning that

(L/2)∥x_k − y∥² + h(x_k) ⩽ ε_k + min_{x∈R^d} [ (L/2)∥x − y∥² + h(x) ],  (4)

where y = y_{k−1} − (1/L)(g′(y_{k−1}) + e_k). Note that the proximal optimization problem (2) is strongly convex and in practice we are often able to obtain such bounds via a duality gap (e.g., see [12] for the case of overlapping group ℓ1-regularization).
4 Convergence Rates of Inexact Proximal-Gradient Methods

In this section we present the analysis of the convergence rates of inexact proximal-gradient methods as a function of the sequences of solution accuracies to the proximal problems (ε_k), and the sequences of magnitudes of the errors in the gradient calculations (∥e_k∥). We shall use (H) to denote the set of four assumptions which will be made for each proposition:
• g is convex and has L-Lipschitz-continuous gradient;
• h is a lower semi-continuous proper convex function;
• The function f = g + h attains its minimum at a certain x* ∈ R^d;
• x_k is an ε_k-optimal solution to the proximal problem (2) in the sense of (4).
We first consider the basic proximal-gradient method in the convex case:

Proposition 1 (Basic proximal-gradient method - Convexity) Assume (H) and that we iterate recursion (3) with y_k = x_k. Then, for all k ⩾ 1, we have

f( (1/k) Σ_{i=1}^k x_i ) − f(x*) ⩽ (L/2k) ( ∥x_0 − x*∥ + 2A_k + √(2B_k) )²,  (5)

with A_k = Σ_{i=1}^k ( ∥e_i∥/L + √(2ε_i/L) ),  B_k = Σ_{i=1}^k ε_i/L.

The proof may be found in [29]. Note that while we have stated the proposition in terms of the function value achieved by the average of the iterates, it trivially also holds for the iteration that achieves the lowest function value. This result implies that the well-known O(1/k) convergence rate for the gradient method without errors still holds when both (∥e_k∥) and (√ε_k) are summable. A sufficient condition to achieve this is that ∥e_k∥ decreases as O(1/k^{1+δ}) while ε_k decreases as O(1/k^{2+δ′}) for any δ, δ′ > 0. Note that a faster convergence of these two errors will not improve the convergence rate, but will yield a better constant factor. It is interesting to consider what happens if (∥e_k∥) or (√ε_k) is not summable. For instance, if ∥e_k∥ and √ε_k decrease as O(1/k), then A_k grows as O(log k) (note that B_k is always smaller than A_k) and the convergence of the function values is in O(log²k / k).
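As a quick numerical sanity check on these error schedules (a toy illustration with L = 1 and, for simplicity, ∥e_i∥ = √(2ε_i) both set to the same schedule `err`), the partial sums A_k of Proposition 1 stay bounded for a summable schedule but grow like log k for the 1/k schedule:

```python
import numpy as np

def A_k(k, err):
    """Partial sum A_k = sum_{i<=k} (||e_i||/L + sqrt(2*eps_i/L)), with L = 1 and
    both terms set to err(i), so A_k = 2 * sum_{i<=k} err(i)."""
    i = np.arange(1, k + 1)
    return 2 * np.sum(err(i))

summable = lambda i: i ** -1.5   # ||e_i|| ~ 1/i^{1.5}: summable, O(1/k) rate preserved
harmonic = lambda i: 1.0 / i     # ||e_i|| ~ 1/i: A_k grows like log k

growth_summable = A_k(10000, summable) - A_k(100, summable)  # nearly flat
growth_harmonic = A_k(10000, harmonic) - A_k(100, harmonic)  # ~ 2*log(100)
```

The contrast between the two growth figures mirrors the dichotomy in the text: summable error sequences keep the constant in (5) bounded, while a 1/k schedule degrades the rate by a log² k factor.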
Finally, a necessary condition to obtain convergence is that the partial sums A_k and B_k need to be in o(√k). We now turn to the case of an accelerated proximal-gradient method. We focus on a basic variant of the algorithm where β_k is set to (k − 1)/(k + 2) [32, Eq. (19) and (27)]:

Proposition 2 (Accelerated proximal-gradient method - Convexity) Assume (H) and that we iterate recursion (3) with y_k = x_k + ((k − 1)/(k + 2))(x_k − x_{k−1}). Then, for all k ⩾ 1, we have

f(x_k) − f(x*) ⩽ (2L/(k + 1)²) ( ∥x_0 − x*∥ + 2Ã_k + √(2B̃_k) )²,  (6)

with Ã_k = Σ_{i=1}^k i ( ∥e_i∥/L + √(2ε_i/L) ),  B̃_k = Σ_{i=1}^k i²ε_i/L.

In this case, we require the series (k∥e_k∥) and (k√ε_k) to be summable to achieve the optimal O(1/k²) rate, which is an (unsurprisingly) stronger constraint than in the basic case. A sufficient condition is for ∥e_k∥ and √ε_k to decrease as O(1/k^{2+δ}) for any δ > 0. Note that, as opposed to Proposition 1 that is stated for the average iterate, this bound is for the last iterate x_k. Again, it is interesting to see what happens when the summability assumption is not met. First, if ∥e_k∥ or √ε_k decreases at a rate of O(1/k²), then k(∥e_k∥ + √ε_k) decreases as O(1/k) and Ã_k grows as O(log k) (note that B̃_k is always smaller than Ã_k), yielding a convergence rate of O(log²k / k²) for f(x_k) − f(x*). Also, and perhaps more interestingly, if ∥e_k∥ or √ε_k decreases at a rate of O(1/k), Eq. (6) does not guarantee convergence of the function values. More generally, the form of Ã_k and B̃_k indicates that errors have a greater effect on the accelerated method than on the basic method. Hence, as also discussed in [22], unlike in the error-free case the accelerated method may not necessarily be better than the basic method because it is more sensitive to errors in the computation. In the case where g is strongly convex it is possible to obtain linear convergence rates that depend on the ratio γ = µ/L as opposed to the sublinear convergence rates discussed above.
In particular, we obtain the following convergence rate on the iterates of the basic proximal-gradient method:

Proposition 3 (Basic proximal-gradient method - Strong convexity) Assume (H), that g is µ-strongly convex, and that we iterate recursion (3) with y_k = x_k. Then, for all k ⩾ 1, we have:

∥x_k − x*∥ ⩽ (1 − γ)^k ( ∥x_0 − x*∥ + Ā_k ),  (7)

with Ā_k = Σ_{i=1}^k (1 − γ)^{−i} ( ∥e_i∥/L + √(2ε_i/L) ).

A consequence of this proposition is that we obtain a linear rate of convergence even in the presence of errors, provided that ∥e_k∥ and √ε_k decrease linearly to 0. If they do so at a rate of Q′ < (1 − γ), then the convergence rate of ∥x_k − x*∥ is linear with constant (1 − γ), as in the error-free algorithm. If we have Q′ > (1 − γ), then the convergence of ∥x_k − x*∥ is linear with constant Q′. If we have Q′ = (1 − γ), then ∥x_k − x*∥ converges to 0 as O(k(1 − γ)^k) = o( ((1 − γ) + δ′)^k ) for all δ′ > 0. Finally, we consider the accelerated proximal-gradient algorithm when g is strongly convex. We focus on a basic variant of the algorithm where β_k is set to (1 − √γ)/(1 + √γ) [30, §2.2.1]:

Proposition 4 (Accelerated proximal-gradient method - Strong convexity) Assume (H), that g is µ-strongly convex, and that we iterate recursion (3) with y_k = x_k + ((1 − √γ)/(1 + √γ))(x_k − x_{k−1}). Then, for all k ⩾ 1, we have

f(x_k) − f(x*) ⩽ (1 − √γ)^k ( √(2(f(x_0) − f(x*))) + Â_k √(2/µ) + √(B̂_k) )²,  (8)

with Â_k = Σ_{i=1}^k ( ∥e_i∥ + √(2Lε_i) ) (1 − √γ)^{−i/2},  B̂_k = Σ_{i=1}^k ε_i (1 − √γ)^{−i}.

Note that while we have stated the result in terms of function values, we obtain an analogous result on the iterates because by strong convexity of f we have (µ/2)∥x_k − x*∥² ≤ f(x_k) − f(x*). This proposition implies that we obtain a linear rate of convergence in the presence of errors provided that ∥e_k∥² and ε_k decrease linearly to 0. If they do so at a rate Q′ < (1 − √γ), then the constant is (1 − √γ), while if Q′ > (1 − √γ) then the constant will be Q′.
Thus, the accelerated inexact proximal-gradient method will have a faster convergence rate than the exact basic proximal-gradient method provided that Q′ < (1 − γ). Oddly, in our analysis of the strongly convex case, the accelerated method is less sensitive to errors than the basic method. However, unlike the basic method, the accelerated method requires knowing µ in addition to L. If µ is misspecified, then the convergence rate of the accelerated method may be slower than the basic method.

5 Experiments

We tested the basic inexact proximal-gradient and accelerated proximal-gradient methods on the CUR-like factorization optimization problem introduced in [33] to approximate a given matrix W,

min_X (1/2)∥W − WXW∥²_F + λ_row Σ_{i=1}^{n_r} ∥X_i∥_p + λ_col Σ_{j=1}^{n_c} ∥X_j∥_p.

Under an appropriate choice of p, this optimization problem yields a matrix X with sparse rows and sparse columns, meaning that entire rows and columns of the matrix X are set to exactly zero. In [33], the authors used an accelerated proximal-gradient method and chose p = ∞ since under this choice the proximity operator can be computed exactly. However, this has the undesirable effect that it also encourages all values in the same row (or column) to have the same magnitude. The more natural choice of p = 2 was not explored since in this case there is no known algorithm to exactly compute the proximity operator. Our experiments focused on the case of p = 2. In this case, it is possible to very quickly compute an approximate proximity operator using the block coordinate descent (BCD) algorithm presented in [12], which is equivalent to the proximal variant of Dykstra's algorithm introduced by [34]. In our implementation of the BCD method, we alternate between computing the proximity operator with respect to the rows and to the columns.
Since the BCD method allows us to compute a duality gap when solving the proximal problem, we can run the method until the duality gap is below a given error threshold $\varepsilon_k$ to find an $x_{k+1}$ satisfying (4). In our experiments, we used the four data sets examined by [33]¹ and we chose $\lambda_{\mathrm{row}} = .01$ and $\lambda_{\mathrm{col}} = .01$, which yielded approximately 25–40% non-zero entries in X (depending on the data set). Rather than assuming we are given the Lipschitz constant L, on the first iteration we set L to 1 and following [2] we double our estimate anytime $g(x_k) > g(y_{k-1}) + \langle g'(y_{k-1}), x_k - y_{k-1} \rangle + (L/2)\|x_k - y_{k-1}\|^2$. We tested three different ways to terminate the approximate proximal problem, each parameterized by a parameter α:
• $\varepsilon_k = 1/k^\alpha$: Running the BCD algorithm until the duality gap is below $1/k^\alpha$.
• $\varepsilon_k = \alpha$: Running the BCD algorithm until the duality gap is below α.
• n = α: Running the BCD algorithm for a fixed number of iterations α.
Note that all three strategies lead to global convergence in the case of the basic proximal-gradient method, the first two give a convergence rate up to some fixed optimality tolerance, and in this paper we have shown that the first one (for large enough α) yields a convergence rate for an arbitrary optimality tolerance. Note that the iterates produced by the BCD iterations are sparse, so we expected the algorithms to spend the majority of their time solving the proximity problem. Thus, we used the function value against the number of BCD iterations as a measure of performance. We plot the results after 500 BCD iterations for the first two data sets for the proximal-gradient method in Figure 1, and the accelerated proximal-gradient method in Figure 2. The results for the other two data sets are similar, and are included in [29]. In these plots, the first column varies α using the choice $\varepsilon_k = 1/k^\alpha$, the second column varies α using the choice $\varepsilon_k = \alpha$, and the third column varies α using the choice n = α.
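The doubling rule for the Lipschitz estimate can be sketched as follows; the function name is our own, and `prox` is a placeholder for the (approximate) proximity operator:

```python
import numpy as np

def backtrack_step(g, grad_g, y, prox, L0=1.0):
    """Take one proximal-gradient step from y, doubling the estimate of L
    whenever the sufficient-decrease test from [2] fails:
        g(x) <= g(y) + <g'(y), x - y> + (L/2) ||x - y||^2."""
    L = L0
    while True:
        x = prox(y - grad_g(y) / L, 1.0 / L)
        d = x - y
        if g(x) <= g(y) + grad_g(y) @ d + (L / 2.0) * (d @ d) + 1e-12:
            return L, x
        L *= 2.0
```

Since the estimate only grows, the returned L is at most twice the true Lipschitz constant once the test first succeeds.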
We also include one of the best methods from the first column in the second and third columns as a reference. In the context of proximal-gradient methods, the choice of $\varepsilon_k = 1/k^3$, which is one choice that achieves the fastest convergence rate according to our analysis, gives the best performance across all four data sets. However, in these plots we also see that reasonable performance can be achieved by any of the three strategies above, provided that α is chosen carefully. For example, choosing n = 3 or choosing $\varepsilon_k = 10^{-6}$ both give reasonable performance. However, these are only empirical observations for these data sets, and they may be ineffective for other data sets or if we change the number of iterations, while we have given theoretical justification for the choice $\varepsilon_k = 1/k^3$. Similar trends are observed for the case of accelerated proximal-gradient methods, though the choice of $\varepsilon_k = 1/k^3$ (which no longer achieves the fastest convergence rate according to our analysis) no longer dominates the other methods in the accelerated setting. For the SRBCT data set the choice

¹ The datasets are freely available at http://www.gems-system.org.

[Figure 1 plots omitted: objective function against the number of BCD iterations (up to 500); the columns vary $\varepsilon_k = 1/k^\alpha$ (α = 1, ..., 5), n = α (α ∈ {1, 2, 3, 5}), and fixed $\varepsilon_k \in \{10^{-2}, 10^{-4}, 10^{-6}, 10^{-10}\}$, each plotted against the reference $\varepsilon_k = 1/k^3$.]

Figure 1: Objective function against number of proximal iterations for the proximal-gradient method with different strategies for terminating the approximate proximity calculation. The top row is for the 9 Tumors data, the bottom row is for the Brain Tumor1 data.
[Figure 2 plots omitted: same layout as Figure 1, with reference $\varepsilon_k = 1/k^4$.]

Figure 2: Objective function against number of proximal iterations for the accelerated proximal-gradient method with different strategies for terminating the approximate proximity calculation. The top row is for the 9 Tumors data, the bottom row is for the Brain Tumor1 data.

$\varepsilon_k = 1/k^4$, which is a choice that achieves the fastest convergence rate up to a poly-logarithmic factor, yields better performance than $\varepsilon_k = 1/k^3$. Interestingly, the only choice that yields the fastest possible convergence rate ($\varepsilon_k = 1/k^5$) had reasonable performance but did not give the best performance on any data set. This seems to reflect the trade-off between performing inner BCD iterations to achieve a small duality gap and performing outer gradient iterations to decrease the value of f. Also, the constant terms which were not taken into account in the analysis do play an important role here, due to the relatively small number of outer iterations performed.

6 Discussion

An alternative to inexact proximal methods for solving structured sparsity problems are smoothing methods [35] and alternating direction methods [36]. However, a major disadvantage of both these approaches is that the iterates are not sparse, so they cannot take advantage of the sparsity of the problem when running the algorithm. In contrast, the method proposed in this paper has the appealing property that it tends to generate sparse iterates.
Further, the accelerated smoothing method only has a convergence rate of O(1/k), and the performance of alternating direction methods is often sensitive to the exact choice of their penalty parameter. On the other hand, while our analysis suggests using a sequence of errors like $O(1/k^\alpha)$ for α large enough, the practical performance of inexact proximal-gradient methods will be sensitive to the exact choice of this sequence. Although we have illustrated the use of our results in the context of a structured sparsity problem, inexact proximal-gradient methods are also used in other applications such as total-variation [7, 8] and nuclear-norm [9, 10] regularization. This work provides a theoretical justification for using inexact proximal-gradient methods in these and other applications, and suggests some guidelines for practitioners that do not want to lose the appealing convergence rates of these methods. Further, although our experiments and much of our discussion focus on errors in the calculation of the proximity operator, our analysis also allows for an error in the calculation of the gradient. This may also be useful in a variety of contexts. For example, errors in the calculation of the gradient arise when fitting undirected graphical models and using an iterative method to approximate the gradient of the log-partition function [37]. Other examples include using a reduced set of training examples within kernel methods [38] or subsampling to solve semidefinite programming problems [39]. In our analysis, we assume that the smoothness constant L is known, but it would be interesting to extend methods for estimating L in the exact case [2] to the case of inexact algorithms. In the context of accelerated methods for strongly convex optimization, our analysis also assumes that µ is known, and it would be interesting to explore variants that do not make this assumption.
We also note that if the basic proximal-gradient method is given knowledge of µ, then our analysis can be modified to obtain a faster linear convergence rate of $(1-\gamma)/(1+\gamma)$ instead of $(1-\gamma)$ for strongly-convex optimization using a step size of $2/(\mu+L)$, see Theorem 2.1.15 of [30]. Finally, we note that there has been recent interest in inexact proximal Newton-like methods [40], and it would be interesting to analyze the effect of errors on the convergence rates of these methods.

Acknowledgements

Mark Schmidt, Nicolas Le Roux, and Francis Bach are supported by the European Research Council (SIERRA-ERC-239993).

References

[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[2] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Papers, (2007/76), 2007.
[3] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B, 58(1):267–288, 1996.
[4] S.S. Chen, D.L. Donoho, and M.A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[5] S.J. Wright, R.D. Nowak, and M.A.T. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479–2493, 2009.
[6] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In S. Sra, S. Nowozin, and S.J. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
[7] J. Fadili and G. Peyré. Total variation projection with first order schemes. IEEE Transactions on Image Processing, 20(3):657–669, 2011.
[8] X. Chen, S. Kim, Q. Lin, J.G. Carbonell, and E.P. Xing. Graph-structured multi-task regression and an efficient optimization method for general fused Lasso. arXiv:1005.3579v1, 2010.
[9] J.-F. Cai, E.J. Candès, and Z. Shen.
A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4), 2010.
[10] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, 128(1):321–353, 2011.
[11] L. Jacob, G. Obozinski, and J.-P. Vert. Group Lasso with overlap and graph Lasso. ICML, 2009.
[12] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. JMLR, 12:2297–2334, 2011.
[13] A. Barbero and S. Sra. Fast Newton-type methods for total variation regularization. ICML, 2011.
[14] J. Liu and J. Ye. Fast overlapping group Lasso. arXiv:1009.0306v1, 2010.
[15] M. Schmidt and K. Murphy. Convex structure learning in log-linear models: Beyond pairwise potentials. AISTATS, 2010.
[16] M. Patriksson. A unified framework of descent algorithms for nonlinear programs and variational inequalities. PhD thesis, Department of Mathematics, Linköping University, Sweden, 1995.
[17] P.L. Combettes. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization, 53(5-6):475–504, 2004.
[18] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. JMLR, 10:2873–2898, 2009.
[19] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. JMLR, 10:777–801, 2009.
[20] A. d'Aspremont. Smooth optimization with approximate gradient. SIAM Journal on Optimization, 19(3):1171–1183, 2008.
[21] M. Baes. Estimate sequence methods: extensions and approximations. IFOR internal report, ETH Zurich, 2009.
[22] O. Devolder, F. Glineur, and Y. Nesterov. First-order methods of smooth convex optimization with inexact oracle. CORE Discussion Papers, (2011/02), 2011.
[23] A. Nedic and D. Bertsekas. Convergence rate of incremental subgradient algorithms. Stochastic Optimization: Algorithms and Applications, pages 263–304, 2000.
[24] Z.-Q. Luo and P. Tseng.
Error bounds and convergence analysis of feasible descent methods: A general approach. Annals of Operations Research, 46-47(1):157–178, 1993.
[25] M.P. Friedlander and M. Schmidt. Hybrid deterministic-stochastic methods for data fitting. arXiv:1104.2373, 2011.
[26] R.T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898, 1976.
[27] O. Güler. New proximal point algorithms for convex minimization. SIAM Journal on Optimization, 2(4):649–664, 1992.
[28] S. Villa, S. Salzo, L. Baldassarre, and A. Verri. Accelerated and inexact forward-backward algorithms. Optimization Online, 2011.
[29] M. Schmidt, N. Le Roux, and F. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. arXiv:1109.2415v2, 2011.
[30] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2004.
[31] D.P. Bertsekas. Convex Optimization Theory. Athena Scientific, 2009.
[32] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization, 2008.
[33] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Convex and network flow optimization for structured sparsity. JMLR, 12:2681–2720, 2011.
[34] H.H. Bauschke and P.L. Combettes. A Dykstra-like algorithm for two monotone operators. Pacific Journal of Optimization, 4(3):383–391, 2008.
[35] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Prog., 103(1):127–152, 2005.
[36] P.L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In H.H. Bauschke, R.S. Burachik, P.L. Combettes, V. Elser, D.R. Luke, and H. Wolkowicz, editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185–212. Springer, 2011.
[37] M.J. Wainwright, T.S. Jaakkola, and A.S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. AISTATS, 2003.
[38] J. Kivinen, A.J. Smola, and R.C. Williamson. Online learning with kernels.
IEEE Transactions on Signal Processing, 52(8):2165–2176, 2004.
[39] A. d'Aspremont. Subsampling algorithms for semidefinite programming. arXiv:0803.1990v5, 2009.
[40] M. Schmidt, D. Kim, and S. Sra. Projected Newton-type methods in machine learning. In S. Sra, S. Nowozin, and S. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
Co-Training for Domain Adaptation Minmin Chen, Kilian Q. Weinberger Department of Computer Science and Engineering Washington University in St. Louis St. Louis, MO 63130 mc15,kilian@wustl.edu John C. Blitzer Google Research 1600 Amphitheatre Parkway Mountain View, CA 94043 blitzer@google.com

Abstract

Domain adaptation algorithms seek to generalize a model trained in a source domain to a new target domain. In many practical cases, the source and target distributions can differ substantially, and in some cases crucial target features may not have support in the source domain. In this paper we introduce an algorithm that bridges the gap between source and target domains by slowly adding to the training set both the target features and instances in which the current algorithm is the most confident. Our algorithm is a variant of co-training [7], and we name it CODA (Co-training for domain adaptation). Unlike the original co-training work, we do not assume a particular feature split. Instead, for each iteration of co-training, we formulate a single optimization problem which simultaneously learns a target predictor, a split of the feature space into views, and a subset of source and target features to include in the predictor. CODA significantly out-performs the state-of-the-art on the 12-domain benchmark data set of Blitzer et al. [4]. Indeed, over a wide range (65 of 84 comparisons) of target supervision CODA achieves the best performance.

1 Introduction

Domain adaptation addresses the problem of generalizing from a source distribution for which we have ample labeled training data to a target distribution for which we have little or no labels [3, 14, 28]. Domain adaptation is of practical importance in many areas of applied machine learning, ranging from computational biology [17] to natural language processing [11, 19] to computer vision [23]. In this work, we focus primarily on domain adaptation problems that are characterized by missing features.
This is often the case in natural language processing, where different genres often use very different vocabulary to describe similar concepts. For example, in our experiments we use the sentiment data of Blitzer et al. [4], where "a breeze to use" is a way to express positive sentiment about kitchen appliances, but not about books. In this situation, most domain adaptation algorithms seek to eliminate the difference between source and target distributions, either by re-weighting source instances [14, 18] or learning a new feature representation [6, 28]. We present an algorithm which differs from both of these approaches. Our method seeks to slowly adapt its training set from the source to the target domain, using ideas from co-training. We accomplish this in two ways: First, we train on our own output in rounds, where at each round, we include in our training data the target instances we are most confident of. Second, we select a subset of shared source and target features based on their compatibility. Different from most previous work on selecting features for domain adaptation, the compatibility is measured across the training set and the unlabeled set, instead of across the two domains. As more target instances are added to the training set, target specific features become compatible across the two sets, and therefore are included in the predictor. Finally, we build on the pseudo multi-view co-training algorithm of Chen et al. [10] to exploit the unlabeled data efficiently. These three intuitive ideas can be combined in a single optimization problem. We name our algorithm CODA (Co-Training for Domain Adaptation). By allowing us to slowly change our training data from source to target, CODA has an advantage over representation-learning algorithms [6, 28], since they must decide a priori what the best representation is.
In contrast, each iteration of CODA can choose exactly those few target features which can be related to the current (source and pseudo-labeled target) training set. We find that in the sentiment prediction data set of Blitzer et al. [4] CODA improves the state-of-the-art across widely varying amounts of target labeled data in 65 out of 84 settings.

2 Notation and Setting

We assume our data originates from two domains, Source (S) and Target (T). The source data is fully labeled $D_S = \{(x_1, y_1), \ldots, (x_{n_s}, y_{n_s})\} \subset \mathbb{R}^d \times \mathcal{Y}$ and sampled from some distribution $P_S(X, Y)$. The target data is sampled from $P_T(X, Y)$ and is divided into labeled $D^l_T = \{(x_1, y_1), \ldots, (x_{n_t}, y_{n_t})\} \subset \mathbb{R}^d \times \mathcal{Y}$ and unlabeled $D^u_T = \{(x_1, ?), \ldots, (x_{m_t}, ?)\} \subset \mathbb{R}^d \times \mathcal{Y}$ parts, where in the latter the labels are unknown during training time. Both domains are of equal dimensionality d. Our goal is to learn a classifier $h \in \mathcal{H}$ to accurately predict the labels on the unlabeled portion of $D_T$, but also to extend to out-of-sample test points, such that for any $(x, y)$ sampled from $P_T$, we have $h(x) = y$ with high probability. For simplicity we assume that $\mathcal{Y} = \{+1, -1\}$, although our method can easily be adapted to multi-class or regression settings. We assume the existence of a base classifier, which determines the set $\mathcal{H}$. Throughout this paper we simply use logistic regression, i.e. our classifier is parameterized by a weight-vector $w \in \mathbb{R}^d$ and defined as $h_w(x) = (1 + e^{-w^\top x})^{-1}$. The weights w are set to minimize the loss function
$$\ell(w; D) = \frac{1}{|D|} \sum_{(x,y) \in D} \log(1 + \exp(-y w^\top x)). \qquad (1)$$
If trained on data sampled from $P_S(X, Y)$, logistic regression models the distribution $P_S(Y|X)$ [13] through $P_h(Y = y \mid X = x; w) = (1 + e^{-y w^\top x})^{-1}$. In this paper, our goal is to adapt this classifier to the target distribution $P_T(Y|X)$.

3 Method

In this section, we begin with a semi-supervised approach and describe the rote-learning procedure to automatically annotate target domain inputs.
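For concreteness, the base classifier's average log-loss (1) and posterior can be written as a short NumPy sketch (generic code, not the authors'):

```python
import numpy as np

def logistic_loss(w, X, y):
    """l(w; D) = (1/|D|) * sum over (x, y) in D of log(1 + exp(-y w^T x)).
    X is an (n, d) design matrix, y holds labels in {+1, -1}."""
    margins = y * (X @ w)
    # log(1 + exp(-m)) computed stably as logaddexp(0, -m)
    return float(np.mean(np.logaddexp(0.0, -margins)))

def posterior(w, X):
    """P_h(Y = +1 | X = x; w) = (1 + exp(-w^T x))^{-1}."""
    return 1.0 / (1.0 + np.exp(-(X @ w)))
```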
The algorithm maintains and grows a training set that is iteratively adapted to the target domain. We then incorporate feature selection into the optimization, a crucial element of our domain-adaptation algorithm. The feature selection addresses the change in distribution and support from $P_S$ to $P_T$. Further, we introduce pseudo multi-view co-training [7, 10], which improves the rote-learning procedure by adding inputs with features that are still not used effectively by the current classifier. We use automated feature decomposition to artificially split our data into multiple views, explicitly to enable successful co-training.

3.1 Self-training for Domain Adaptation

First, we assume we are given a loss function ℓ (in our case the log-loss from eq. (1)) which provides some estimate of confidence in its predictions. In logistic regression, if $\hat{y} = \mathrm{sign}(h(x))$ is the prediction for an input x, the probability $P_h(Y = \hat{y} \mid X = x; w)$ is a natural metric of certainty (as h(x) can be interpreted as a probability for x to be of label +1), but other methods [22] can be used. Self-training [19] is a simple and intuitive iterative algorithm to leverage unlabeled data. During training one maintains a labeled training set L and an unlabeled test set U, initialized as $L = D_S \cup D^l_T$ and $U = D^u_T$. Each iteration, a classifier $h_w$ is trained to minimize the loss function ℓ over L and is evaluated on all elements of U. The c most confident predictions on U are moved to L for the next iteration, labeled by the prediction of $\mathrm{sign}(h_w)$. The algorithm terminates when U is empty or all predictions are below a pre-defined confidence threshold (and considered unreliable). Algorithm 1 summarizes self-training in pseudo-code with the use of feature selection, described in the following section.

Algorithm 1 SEDA pseudo-code
1: Inputs: L and U.
2: repeat
3:   $w^* = \mathrm{argmin}_w \, \ell(w; L) + \gamma s(L, U, w)$
4:   Apply $h_{w^*}$ on all elements of U.
5:   Move up to c confident inputs $x_i$ from U to L, labeled as $\mathrm{sign}(h(x_i))$.
6: until no more predictions are confident
7: Return $h_{w^*}$

3.2 Feature Selection

So far, we have not addressed that the two data sets U and L are not sampled from the same distribution. In domain adaptation, the training data is no longer representative of the test data. More explicitly, $P_S(Y \mid X = x)$ is different from $P_T(Y \mid X = x)$. For illustration, consider the sentiment analysis problem in section 4, where data consists of unigram and bigram bag-of-words features and the task is to classify if a book-review (source domain) or dvd-review (target domain) is positive or negative. Here, the bigram feature "must read" is indicative of a positive opinion within the source ("books") domain, but rarely appears in the target ("dvd") domain. A classifier, trained on the source-dominated set L, that relies too heavily on such features will not make enough high-confidence predictions on the set $U = D^u_T$. To address this issue, we extend the classifier with a weighted $\ell_1$ regularization for feature selection. The weights are assigned to encourage the classifier to only use features that behave similarly in both L and U. Different from previous work on feature selection for domain adaptation [25], where the goal is to find a new representation to minimize the difference between the distributions of the source and target domain, what we are proposing is to minimize the difference between the distributions of the labeled training set L and the unlabeled set U (which coincides with the testing set in our setting). This difference is crucial, as it makes the empirical distributions of L and U align gradually. For example, after some iterations, the classifier can pick features that are never present in the source domain, but which have entered L through the rote-learning procedure. We perform the feature selection implicitly through w.
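The rote-learning loop of Algorithm 1 from the previous subsection (without the feature-selection regularizer s) can be sketched as follows; `fit` and `predict_proba` are hypothetical stand-ins for the base classifier's training and posterior routines:

```python
import numpy as np

def self_train(fit, predict_proba, L_X, L_y, U_X, c=10, tau=0.8, max_iter=100):
    """Self-training skeleton: refit on the labeled pool L, then move the
    up-to-c most confident unlabeled inputs from U into L, labeled by the
    current prediction; stop when U is empty or nothing is confident."""
    model = fit(L_X, L_y)
    for _ in range(max_iter):
        if len(U_X) == 0:
            break
        p = predict_proba(model, U_X)        # P(Y = +1 | x)
        conf = np.maximum(p, 1.0 - p)        # confidence of the predicted label
        keep = np.argsort(-conf)[:c]
        keep = keep[conf[keep] > tau]        # discard unreliable predictions
        if len(keep) == 0:
            break
        labels = np.where(p[keep] > 0.5, 1, -1)
        L_X = np.vstack([L_X, U_X[keep]])
        L_y = np.concatenate([L_y, labels])
        U_X = np.delete(U_X, keep, axis=0)
        model = fit(L_X, L_y)
    return model, L_X, L_y
```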
For a feature α, let us denote the Pearson correlation coefficient (PCC)¹ between feature value $x_\alpha$ and the label y for all pairs $(x, y) \in L$ as $\rho_L(x_\alpha, y)$. It can be shown that $\rho_L(x_\alpha, y) \in [-1, 1]$, with a value of +1 if a feature is perfectly aligned with the label (i.e. the feature is the label), 0 if it has no correlation, and −1 if it is of opposite polarity (i.e. the inverted label). Similarly, let us define the PCC for all pairs in U as $\rho_{U;w}(x_\alpha, Y)$, where the unknown label Y is a random variable drawn from the conditional probability $P_h(Y|X; w)$. The two PCC values indicate how predictive a feature is of the (estimated) class label in the two respective data sets. Ideally, we would like to choose features that are similarly predictive across the two sets. We measure how similarly a feature behaves across L and U with the product $\rho_L(x_\alpha, y)\,\rho_{U;w}(x_\alpha, Y)$. With this notation, we define the feature weight that reflects the cross-domain incompatibility of a feature as
$$\Delta_{L,U,w}(\alpha) = 1 - \rho_L(x_\alpha, y)\,\rho_{U;w}(x_\alpha, Y). \qquad (2)$$
It is straightforward to show that $\Delta_{L,U,w} \in [0, 2]$. Intuitively, $\Delta_{L,U,w}$ expresses to what degree we would like to remove a feature. A perfect feature, that is the label itself (and the prediction in U), results in a score of 0. A feature that is not correlated with the class label in at least one of the two domains (and therefore is too domain-specific) obtains a score of 1. A feature that switches polarization across domains (and therefore is "malicious") has a score $\Delta_{L,U,w}(\alpha) > 1$ (in the extreme case, if it is the label in L and the inverted label in U, its score would be 2). We incorporate (2) into a weighted $\ell_1$ regularization
$$s(L, U, w) = \sum_{\alpha=1}^{d} \Delta_{L,U,w}(\alpha)\,|w_\alpha|.$$
(3)

Intuitively, (3) encourages feature sparsity with a strong emphasis on features with little or opposite correlation across the domains, whereas good features that are consistently predictive in both domains become cheap. We refer to this version of the algorithm as Self-training for Domain Adaptation (SEDA). The optimization with feature selection, used in Algorithm 1, becomes
$$w = \mathrm{argmin}_w \, \ell(w; L) + \gamma s(L, U, w). \qquad (4)$$
Here, $\gamma \geq 0$ denotes the loss-regularization trade-off parameter. As we have very few labeled inputs from the target domain in the early iterations, stronger regularization is imposed so that only features shared across the two domains are used. When more and more inputs from the target domain are included in the training set, we gradually decrease the regularization to accommodate target specific features. The algorithm is very insensitive to the exact initial choice of γ. The guideline is to start with a relatively large number, and decrease it until the selected feature set is not empty. In our implementation, we set it to $\gamma_0 = 0.1$, and we divide it by a factor of 1.1 during each iteration.

¹ The PCC for two random variables X, Y is defined as $\rho = \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y}$, where $\mu_X$ denotes the mean and $\sigma_X$ the standard deviation of X.

3.3 Co-training for Domain Adaptation

For rote-learning to be effective, we need to move test inputs from U to L that 1) are correctly classified (with high probability) and 2) have potential to improve the classifier in future iterations. The former is addressed by the feature-selecting regularization from the previous section: restricting the classifier to a subset of features that are known to be cross-data-set compatible reduces the generalization error on U. In this section we address the second requirement. We want to add inputs $x_i$ that contain additional features, which were not used to obtain the prediction $h_w(x_i)$ and would enrich the training set L.
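The incompatibility weights (2) used in the regularizer of section 3.2 can be sketched numerically as below; as a simplification of our own (not the paper's exact definition), the random label $Y \sim P_h(Y|X; w)$ on U is replaced by its expectation $2 P_h(Y{=}+1 \mid x; w) - 1$:

```python
import numpy as np

def pcc(x, y):
    """Pearson correlation coefficient between a feature column and labels
    (returns 0 if either column is constant)."""
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.sqrt((xc @ xc) * (yc @ yc))
    return 0.0 if denom == 0 else float(xc @ yc) / denom

def incompatibility(L_X, L_y, U_X, U_soft):
    """Delta(alpha) = 1 - rho_L(x_alpha, y) * rho_U(x_alpha, Y) from eq. (2);
    U_soft holds the expected labels on the unlabeled set."""
    return np.array([1.0 - pcc(L_X[:, a], L_y) * pcc(U_X[:, a], U_soft)
                     for a in range(L_X.shape[1])])
```

A feature that equals the label on both sets gets weight 0 (cheap to use), while a feature uncorrelated with the label on either set gets weight 1.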
If the exact labels of the inputs in U were known, a good active learning [26] strategy would be to move inputs to L on which the current classifier $h_w$ is most uncertain. In our setting, this would be clearly ill-advised, as the uncertain prediction is also used as the label. A natural solution to this dilemma is co-training [7]. Co-training assumes the data set is presented in two separate views and two classifiers are trained, one in each view. Each iteration, only inputs that are confident according to exactly one of the two classifiers are moved to the training set. This way, one classifier provides the (estimated) labels to the inputs on which the other classifier is uncertain. In our setting we do not have multiple views, and which features are selected varies in each iteration. Hence, co-training does not apply out-of-the-box. We can, however, split our features into two mutually exclusive views such that co-training is effective. To this end we follow the pseudo-multiview regularization introduced by Chen et al. [10]. The main intuition is to train two classifiers on a single view X such that: (1) both perform well on the labeled data; (2) both are trained on strictly different features; (3) together they are likely to satisfy Balcan's condition of ϵ-expandability [2], a necessary and sufficient pre-condition for co-training to work². These three aspects can be formulated explicitly as three modifications of our optimization problem (4). We discuss each of them in detail in the following.

Loss. Two classifiers are required for co-training, whose weight vectors we denote by u and v. The performance of each classifier is measured by the log-loss ℓ(·; L) in eq. (1). To ensure that both classifiers perform well on the training set L, i.e. both have a small training loss, we train them jointly while minimizing the soft-maximum³ of the two losses,
$$\log\left(e^{\ell(u;L)} + e^{\ell(v;L)}\right). \qquad (5)$$

Feature Decomposition.
Co-training requires the two classifiers to be trained on different feature spaces. We create those by splitting the feature space into two mutually exclusive sub-sets. More precisely, for each feature α, at least one of the two classifiers must have a zero weight in the αth dimension. We can enforce this across all features with the equality constraint
$$\sum_{\alpha=1}^{d} u_\alpha^2 v_\alpha^2 = 0. \qquad (6)$$

ϵ-Expandability. In the original co-training formulation [7], it is assumed that the two views of the data are class conditionally independent. This assumption is very strong and can easily be violated in practice [20]. Recent work [2] weakens this requirement significantly to a condition of ϵ-expandability. Loosely phrased, for the two classifiers to be able to teach each other, they must make confident predictions on different subsets of the unlabeled set U. For the classifier $h_u$, let $\hat{y} = \mathrm{sign}(u^\top x) \in \{\pm 1\}$ denote the class prediction and $P_h(\hat{y} \mid x; u)$ its confidence. Define $c_u(x)$ as a confidence indicator function (for some confidence threshold τ > 0)⁴
$$c_u(x) = \begin{cases} 1 & \text{if } P_h(\hat{y} \mid x; u) > \tau \\ 0 & \text{otherwise,} \end{cases} \qquad (7)$$
and $c_v$ respectively. Then the ϵ-expanding condition translates to
$$\sum_{x \in U} \left[ c_u(x)\bar{c}_v(x) + \bar{c}_u(x)c_v(x) \right] \geq \epsilon \min\left( \sum_{x \in U} c_u(x)c_v(x), \; \sum_{x \in U} \bar{c}_u(x)\bar{c}_v(x) \right), \qquad (8)$$
for some ϵ > 0. Here, $\bar{c}_u(x) = 1 - c_u(x)$ indicates that classifier $h_u$ is not confident about input x. Intuitively, the constraint in eq. (8) ensures that the total number of inputs in U that can be used for rote-learning because exactly one classifier is confident (LHS) is larger than the set of inputs which cannot be used because both classifiers are already confident or both are not confident (RHS). In summary, the framework splits the feature space into two mutually exclusive sub-sets.

² Provided that the classifiers are never confident and wrong, which can be violated in practice.
³ The soft-max of a set of elements S is a differentiable approximation of the maximum: $\max(S) \approx \log\left(\sum_{s \in S} e^s\right)$.
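As an illustration of how the first two ingredients combine, the soft-max loss (5) and the exclusivity constraint (6) can be folded into a single smooth quantity (a relaxation for exposition only: the paper keeps (6) as a hard constraint and additionally enforces (8)):

```python
import numpy as np

def coda_surrogate(u, v, loss, penalty=1.0):
    """Soft-max of the two view losses, eq. (5), plus the feature-exclusivity
    term sum_alpha u_alpha^2 v_alpha^2, which is zero exactly when the two
    classifiers use disjoint features, eq. (6)."""
    softmax_loss = np.logaddexp(loss(u), loss(v))   # log(e^{l(u)} + e^{l(v)})
    exclusivity = float(np.sum(u**2 * v**2))
    return float(softmax_loss + penalty * exclusivity)
```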
This representation enables us to train two logistic regression classifiers, both with small loss on the labeled data set, while satisfying two constraints to ensure feature decomposition and ϵ-expandability. Our final classifier has the weight vector w = u + v. We refer to the resulting algorithm as CODA (Co-training for Domain Adaptation), which can be stated concisely with the following optimization problem:
$$\min_{w,u,v} \; \log\left(e^{\ell(u;L)} + e^{\ell(v;L)}\right) + \gamma s(L, U, w)$$
subject to:
(1) $\sum_{\alpha=1}^{d} u_\alpha^2 v_\alpha^2 = 0$
(2) $\sum_{x \in U} \left[ c_u(x)\bar{c}_v(x) + \bar{c}_u(x)c_v(x) \right] \geq \epsilon \min\left( \sum_{x \in U} c_u(x)c_v(x), \; \sum_{x \in U} \bar{c}_u(x)\bar{c}_v(x) \right)$
(3) $w = u + v$

The optimization is non-convex. However, as it is not particularly sensitive to initialization, we set u, v randomly and optimize with standard conjugate gradient descent⁵. Due to space constraints we do not include a pseudo-code implementation of CODA. The implementation is essentially identical to that of SEDA (Algorithm 1), where the above optimization problem is solved instead of eq. (4) in line 3. In line 5, we move inputs that one classifier is confident about while the other one is uncertain to the training set L to improve the classifier in future iterations.

4 Results

We evaluate our algorithm together with several other domain adaptation algorithms on the "Amazon reviews" benchmark data sets [6]. The data set contains reviews of four different types of products: books, DVDs, electronics, and kitchen appliances from Amazon.com. In the original dataset, each review is associated with a rating of 1-5 stars. For simplicity, we are only concerned about whether or not a review is positive (higher than 3 stars) or negative (3 stars or lower). That is, $y_i \in \{+1, -1\}$, where $y_i = 1$ indicates that it is a positive review, and −1 otherwise. The data from four domains results in 12 directed adaptation tasks (e.g. books → dvds). Each domain adaptation task consists of 2,000 labeled source inputs and around 4,000 unlabeled target test inputs (varying slightly between tasks).
We let the amount of labeled target data vary from 0 to 1600. For each setting with target labels we ran 10 experiments with different, randomly chosen, labeled instances. The original feature space of unigrams and bigrams has on average approximately 100,000 dimensions across different domains. To reduce the dimensionality, we only use features that appear at least 10 times in a particular domain adaptation task (with approximately 40,000 features remaining). Further, we pre-process the data set with standard tf-idf [24] feature re-weighting.

Figure 1: Relative test-error reduction over logistic regression, averaged across all 12 domain adaptation tasks, as a function of the target training set size. Left: A comparison of the three algorithms from section 3. The graph shows clearly that self-training (Self-training vs. Logistic Regression), feature selection (SEDA vs. Self-training) and co-training (CODA vs. SEDA) each improve the accuracy substantially. Right: A comparison of CODA with four state-of-the-art domain adaptation algorithms. CODA leads to particularly strong improvements under little target supervision.

As a first experiment, we compare the three algorithms from Section 3 and logistic regression as a baseline. The results are in the left plot of figure 1. For logistic regression, we ignore the difference between source and target distribution, and train a classifier on the union of both labeled data sets.

⁴In our implementation, the 0–1 indicator was replaced by a very steep differentiable sigmoid function, and τ was set to 0.8 across different experiments. ⁵We use minimize.m (http://tinyurl.com/minimize-m).
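The preprocessing pipeline (minimum-count feature filtering followed by tf-idf re-weighting) might look as follows. This is a sketch assuming a standard tf-idf variant; the paper cites Salton & Buckley [24] without specifying the exact formula, and the function name is hypothetical:

```python
import numpy as np

def tfidf_with_min_count(counts, min_count=10):
    """Drop rare features and tf-idf re-weight a term-count matrix.

    counts: (n_docs, n_features) raw term counts. Features occurring fewer
    than `min_count` times in total are removed, as in the experiments, and
    then a standard tf-idf weighting is applied.
    """
    counts = np.asarray(counts, dtype=float)
    keep = counts.sum(axis=0) >= min_count        # feature filter
    X = counts[:, keep]
    # Term frequency: normalize each document by its total count.
    tf = X / np.maximum(X.sum(axis=1, keepdims=True), 1.0)
    df = (X > 0).sum(axis=0)                      # document frequency
    idf = np.log(X.shape[0] / np.maximum(df, 1.0))
    return tf * idf, keep
```

The `keep` mask is returned so that the same feature subset can be applied consistently to source, target, and unlabeled inputs within one adaptation task.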
We use ℓ2 regularization, and set the regularization constant with 5-fold cross-validation. In figure 1, all classification errors are shown relative to this baseline. Our second baseline is self-training, which adds self-training to logistic regression, as described in section 3.1. We start with the set of labeled instances from source and target domain, and gradually add confident predictions from the unlabeled target domain to the training set (without regularization). SEDA adds feature selection to the self-training procedure, as described in section 3.2. We optimize over 100 iterations of self-training, at which stage the regularization was effectively zero and the classifier had converged. For CODA we replace self-training with pseudo-multi-view co-training, as described in section 3.3. The left plot in figure 1 shows the relative classification errors of these four algorithms averaged over all 12 domain adaptation tasks, under varying amounts of target labels. We observe two trends: First, there are clear gaps between logistic regression, self-training, SEDA, and CODA. From these three gaps one can conclude that self-training, feature selection and co-training each lead to substantial improvements in classification error. A second trend is that the relative improvement over logistic regression shrinks as more labeled target data becomes available. This is not surprising, as with sufficient target labels the task turns into a classical supervised learning problem and the source data becomes irrelevant. As a second experiment, we compare CODA against three state-of-the-art domain adaptation algorithms. We refer to these as Coupled, the coupled-subspaces approach [6], EasyAdapt [11], and EasyAdapt++ [15]. Details about the respective algorithms are provided in section 5. Coupled subspaces, as described in [6], does not utilize labeled target data and its result is depicted as a single point.
The right plot in figure 1 compares these algorithms, relative to logistic regression. Figure 3 shows the individual results on all 12 adaptation tasks with absolute classification error rates. The error bars show the standard deviation across the 10 runs with different labeled instances. EasyAdapt and EasyAdapt++ both consistently improve over logistic regression once sufficient target data is available. It is noteworthy that, on average, CODA outperforms the other algorithms in almost all settings when 800 labeled target points or fewer are present. With 1600 labeled target points all algorithms perform similarly to the baseline and additional source data is irrelevant. All hyper-parameters of competing algorithms were carefully set by 5-fold cross-validation. Concerning computational requirements, it is fair to say that CODA is significantly slower than the other algorithms, as each iteration is of comparable complexity to logistic regression or EasyAdapt.

Figure 2: The ratio of the average number of used features between source and target inputs (eq. 9), tracked throughout the CODA optimization. The three plots show the same statistic at different amounts of target labels (0, 400, and 1600 target labels). Initially, an input from the source domain has on average 10–35% more features that are used by the classifier than a target input. At around iteration 40, this relation changes and the classifier uses more target-typical features. The graph shows the geometric mean across all adaptation tasks.
With no target data available (left plot), the early spike in source dominance is more pronounced; it decreases when more target labels are available (middle and right plots).

In typical domain adaptation settings this is generally not a problem, as training sets tend to be small. In our experiments, the average training time for CODA⁶ was about 20 minutes. Finally, we investigate the feature-selection process during CODA training. Let us define the indicator function δ(a) ∈ {0, 1} to be δ(a) = 0 if and only if a = 0, which operates element-wise on vectors. The vector δ(w) ∈ {0, 1}^d indicates which features are used in the classifier and δ(x_i) indicates which features are present in input x_i. We denote the ratio between the average number of used features in labeled training inputs and those in unlabeled target inputs as

r(w) = [ (1/|D_S^l|) \sum_{x_s ∈ D_S^l} δ(w)^⊤ δ(x_s) ] / [ (1/|D_T^l|) \sum_{x_t ∈ D_T^l} δ(w)^⊤ δ(x_t) ].  (9)

Figure 2 shows the plot of r(w) for all weight vectors during the 100 iterations of CODA, averaged across all 12 data sets. The three plots show the same statistic under varying amounts of target labels. Two trends can be observed: First, during CODA training, the classifier initially selects more source-specific features. For example, in the case with zero labeled target data, during early iterations the average source input contains 20–35% more used features relative to target inputs. This source-heavy feature distribution changes and eventually turns into a target-heavy distribution as the classifier adapts to the target domain. As a second trend, we observe that with more target labels (right plot), this spike in source features is much less pronounced, whereas the final target-heavy ratio is unchanged but is reached earlier. This indicates that as the target labels increase, the classifier makes less use of the source data and relies sooner and more directly on the target signal.
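The ratio in eq. (9) is straightforward to compute from the weight vector and the two data matrices. A minimal numpy sketch; the function name is hypothetical:

```python
import numpy as np

def used_feature_ratio(w, X_source, X_target):
    """Source/target ratio of average used-feature counts, as in eq. (9).

    delta(.) maps nonzero entries to 1; the numerator averages
    delta(w)^T delta(x) over the source inputs, the denominator over the
    target inputs (assumed to contain at least one used feature overall).
    """
    used = (np.asarray(w) != 0).astype(float)          # delta(w)
    def avg_used(X):
        present = (np.asarray(X) != 0).astype(float)   # delta(x_i), row-wise
        return np.mean(present @ used)
    return avg_used(X_source) / avg_used(X_target)
```

A ratio above 1 means the classifier's active features occur more often in source inputs than in target inputs, which is the "source heavy" regime tracked in Figure 2.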
5 Related Work and Discussion

Domain adaptation algorithms that do not use labeled target domain data are sometimes called unsupervised adaptation algorithms. There are roughly three types of algorithms in this group. The first type, which includes the coupled subspaces algorithm of Blitzer et al. [5], learns a shared representation under which the source and target distributions are closer than under the ambient feature space [28]. The largest disadvantage of these algorithms is that they do not jointly optimize the predictor and the representation, which prevents them from focusing on those features which are both different and predictive. By jointly optimizing the feature selection, the multi-view split and the prediction, CODA allows us to do both. The second type of algorithm attempts to directly minimize the divergence between domains, typically by weighting individual instances [14, 16, 18]. These algorithms do not assume highly divergent domains (e.g. those with unique target features), but they have the advantage over both CODA and representation learning of learning asymptotically optimal target predictors from only

⁶We used a straightforward Matlab™ implementation.
[Figure 3 panels: test error vs. number of labeled target data (0 to 1600) for all 12 adaptation tasks (Dvd→Books, Electronics→Books, Kitchen→Books, Books→Dvd, Electronics→Dvd, Kitchen→Dvd, Books→Electronics, Dvd→Electronics, Kitchen→Electronics, Books→Kitchen, Dvd→Kitchen, Electronics→Kitchen), each comparing Logistic Regression, Coupled, EasyAdapt, EasyAdapt++, and CODA.]

Figure 3:
The individual results on all domain adaptation tasks under varying amounts of labeled target data. The graphs show the absolute classification error rates. All settings with existing labeled target data were averaged over 10 runs (with randomly selected labeled instances). The vertical bars indicate the standard deviation in these cases.

source training data (when their assumptions hold). We did not explore them here because their assumptions are clearly violated for this data set. In natural language processing, a final type of very successful algorithm self-trains on its own target predictions to automatically annotate new target domain features [19]. These methods are most closely related, in spirit, to our own CODA algorithm. Indeed, our self-training baseline is intended to mimic this style of algorithm. The final set of domain adaptation algorithms, which we compared against but did not describe, are those which actively seek to minimize the labeling divergence between domains using multi-task techniques [1, 8, 9, 12, 21, 27]. Most prominently, Daumé [11] trains separate source and target models, but regularizes these models to be close to one another. The EasyAdapt++ variant of this algorithm, which we compared against, generalizes this to the semi-supervised setting by making the assumption that for unlabeled target instances, the tasks should be similar. Although these methods did not significantly outperform our baselines on the sentiment data set, we note that there do exist data sets on which such multi-task techniques are especially important [11], and we hope soon to explore combinations of CODA with multi-task learning on those data sets.

References

[1] R.K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853, 2005.
[2] M.F. Balcan, A. Blum, and K. Yang. Co-training and expansion: Towards bridging theory and practice. NIPS, 17:89–96, 2004.
[3] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. A theory of learning from different domains. Machine Learning, 2009.
[4] J. Blitzer, M. Dredze, and F. Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Association for Computational Linguistics, Prague, Czech Republic, 2007.
[5] J. Blitzer, D. Foster, and S. Kakade. Domain adaptation with coupled subspaces. In Conference on Artificial Intelligence and Statistics, Fort Lauderdale, 2011.
[6] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120–128. Association for Computational Linguistics, 2006.
[7] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, page 100. ACM, 1998.
[8] R. Caruana. Multitask learning. Machine Learning, 28:41–75, 1997.
[9] O. Chapelle, P. Shivaswamy, S. Vadrevu, K.Q. Weinberger, Y. Zhang, and B. Tseng. Multi-task learning for boosting with application to web search ranking. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, pages 1189–1198, New York, NY, USA, 2010. ACM.
[10] M. Chen, K.Q. Weinberger, and Y. Chen. Automatic feature decomposition for single view co-training. In International Conference on Machine Learning, 2011.
[11] H. Daumé III. Frustratingly easy domain adaptation. In Association for Computational Linguistics, 2007.
[12] T. Evgeniou, C.A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6(1):615, 2006.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Verlag, New York, 2009.
[14] J. Huang, A.J. Smola, A. Gretton, K.M. Borgwardt, and B. Schölkopf.
Correcting sample selection bias by unlabeled data. In NIPS 19, pages 601–608. MIT Press, Cambridge, MA, 2007.
[15] H. Daumé III, A. Kumar, and A. Saha. Co-regularization based semi-supervised domain adaptation. In NIPS 23, pages 478–486. MIT Press, 2010.
[16] J. Jiang and C.X. Zhai. Instance weighting for domain adaptation in NLP. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 264–271, Prague, Czech Republic, June 2007. Association for Computational Linguistics.
[17] Q. Liu, A. Mackey, D. Roos, and F. Pereira. Evigan: A hidden variable model for integrating gene evidence for eukaryotic gene prediction. Bioinformatics, 2008.
[18] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation with multiple sources. In NIPS 21, pages 1041–1048. MIT Press, 2009.
[19] D. McClosky, E. Charniak, and M. Johnson. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 337–344. Association for Computational Linguistics, 2006.
[20] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In Proceedings of the Ninth International Conference on Information and Knowledge Management, pages 86–93. ACM, 2000.
[21] S. Parameswaran and K.Q. Weinberger. Large margin multi-task metric learning. In NIPS 23, pages 1867–1875. 2010.
[22] J.C. Platt et al. Probabilities for SV machines. NIPS, pages 61–74, 1999.
[23] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. Computer Vision–ECCV 2010, pages 213–226, 2010.
[24] G. Salton and C. Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523, 1988.
[25] S. Satpal and S. Sarawagi. Domain adaptation of conditional probability models via feature subsetting.
Knowledge Discovery in Databases: PKDD 2007, pages 224–235, 2007.
[26] B. Settles. Active learning literature survey. Machine Learning, 15(2):201–221, 1994.
[27] K.Q. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1113–1120. ACM, 2009.
[28] G. Xue, W. Dai, Q. Yang, and Y. Yu. Topic-bridged PLSA for cross-domain text classification. In SIGIR, 2008.
Learning to Agglomerate Superpixel Hierarchies

Viren Jain, Janelia Farm Research Campus, Howard Hughes Medical Institute
Srinivas C. Turaga, Brain & Cognitive Sciences, Massachusetts Institute of Technology
Kevin L. Briggman, Moritz N. Helmstaedter, Winfried Denk, Department of Biomedical Optics, Max Planck Institute for Medical Research
H. Sebastian Seung, Howard Hughes Medical Institute, Massachusetts Institute of Technology

Abstract

An agglomerative clustering algorithm merges the most similar pair of clusters at every iteration. The function that evaluates similarity is traditionally hand-designed, but there has been recent interest in supervised or semisupervised settings in which ground-truth clustered data is available for training. Here we show how to train a similarity function by regarding it as the action-value function of a reinforcement learning problem. We apply this general method to segment images by clustering superpixels, an application that we call Learning to Agglomerate Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single linkage clustering produced less improvement.

1 Introduction

A clustering is defined as a partitioning of a set of elements into subsets called clusters. Roughly speaking, similar elements should belong to the same cluster and dissimilar ones to different clusters. In the traditional unsupervised formulation of clustering, the true membership of elements in clusters is completely unknown. Recently there has been interest in the supervised or semisupervised setting [5], in which true membership is known for some elements and can serve as training data. The goal is to learn a clustering algorithm that generalizes to new elements and new clusters.
A convenient objective function for learning is the agreement between the output of the algorithm and the true clustering, for which the standard measurement is the Rand index [25]. Clustering is relevant for many application domains. One prominent example is image segmentation, the division of an image into clusters of pixels that correspond to distinct objects in the scene. Traditional approaches treated image segmentation as unsupervised clustering. However, it is becoming popular to utilize a supervised clustering approach in which a segmentation algorithm is trained on a set of images for which ground truth is known [23, 32]. The Rand index has become increasingly popular for evaluating the accuracy of image segmentation [34, 3, 13, 15, 35], and has recently been used as an objective function for supervised learning of this task [32]. This paper focuses on agglomerative algorithms for clustering, which iteratively merge pairs of clusters that maximize a similarity function. Equivalently, the merged pairs may be those that minimize a distance or dissimilarity function, which is like a similarity function up to a change of sign. Speed is a chief advantage of agglomerative algorithms. The number of evaluations of the similarity function is polynomial in the number of elements to be clustered. In contrast, the popular approach of using a Markov random field to partition a graph with nodes that are the elements to be clustered, and edge weights given by their similarities, involves a computation that can be NP-hard [18]. Inefficient inference becomes even more costly for learning, which generally involves many iterations of inference. To deal with this problem, many researchers have developed learning methods for graphical models that depend on efficient approximate inference.
However, once such approximations are introduced, many of the desirable theoretical properties of this framework no longer apply and performance in practice may be arbitrarily poor, as several authors have recently noted [36, 19, 8]. Here we avoid such issues by basing learning on agglomerative clustering, which is an efficient inference procedure in the first place. We show that an agglomerative clustering algorithm can be regarded as a policy for a deterministic Markov decision process (DMDP) in which a state is a clustering, an action is a merging of two clusters, and the immediate reward is the change in the Rand index with respect to the ground truth clustering. In this formulation, the optimal action-value function turns out to be the optimal similarity function for agglomerative clustering. This DMDP formulation is helpful because it enables the application of ideas from reinforcement learning (RL) to find an approximation to the optimal similarity function. Our formalism is generally applicable to any type of clustering, but is illustrated with a specific application to segmenting images by clustering superpixels. These are defined as groups of pixels from an oversegmentation produced by some other algorithm [27]. Recent research has shown that agglomerating superpixels using a hand-designed similarity function can improve segmentation accuracy [3]. It is plausible that it would be even more powerful to learn the similarity function from training data. Here we apply our RL framework to accomplish this, yielding a new method called Learning to Agglomerate Superpixel Hierarchies (LASH). LASH works by iteratively updating an approximation to the optimal similarity function. It uses the current approximation to generate a sequence of clusterings, and then improves the approximation on all possible actions on these clusterings. LASH is an instance of a strategy called on-policy control in RL.
This strategy has seen many empirical successes, but the theoretical guarantees are rather limited. Furthermore, LASH is implemented here for simplicity using infinite temporal discounting, though it could be extended to the case of finite discounting. Therefore we empirically evaluated LASH on the problem of segmenting images of brain tissue from serial electron microscopy, which has recently attracted a great deal of interest [6, 15]. We find that LASH substantially improves upon state-of-the-art convolutional network and random forest boundary-detection methods for this problem, reducing segmentation error (as measured by the Rand error) by 50% as compared to the next best technique. We also tried the simpler strategy of directly training superpixel similarities, and then applying single linkage clustering [2]. This produced less accurate test set segmentations than LASH.

2 Agglomerative clustering as reinforcement learning

A Markov decision process (MDP) is defined by a state s, a set of actions A(s) at each state, a function P(s, a, s′) specifying the probability of the s → s′ transition after taking action a ∈ A(s), and a function R(s, a, s′) specifying the immediate reward. A policy π is a map from states to actions, a = π(s). The goal of reinforcement learning (RL) is to find a policy π that maximizes the expected value of total reward. Total reward is defined as the sum of immediate rewards \sum_{t=0}^{T−1} R(s_t, a_t) up to some time horizon T. Alternatively, it is defined as the sum of discounted immediate rewards, \sum_{t=0}^{∞} γ^t R(s_t, a_t), where 0 ≤ γ ≤ 1 is the discount factor. Many RL methods are based on finding an optimal action-value function Q∗(s, a), which is defined as the sum of discounted rewards obtained by taking action a at state s and following the optimal policy thereafter. An optimal policy can be extracted from this function by π∗(s) = argmax_a Q∗(s, a). We can define agglomerative clustering as an MDP. Its state s is a clustering of a set of objects.
For each pair of clusters in s_t, there is an action a_t ∈ A(s_t) that merges them to yield the clustering s_{t+1} = a_t(s_t). Since the merge action is deterministic, we have the special case of a deterministic MDP, rather than a stochastic one. To define the rewards of the MDP, we make use of the Rand index, a standard measure of agreement between two clusterings of the same set [25]. A clustering is equivalent to classifying all pairs of objects as belonging to the same cluster or to different clusters. The Rand index RI(s, s′) is the fraction of object pairs on which the clusterings s and s′ agree. Therefore, we can define the immediate reward of action a as the resulting increase in the Rand index with respect to a ground truth clustering s∗,

R(s, a) = RI(a(s), s∗) − RI(s, s∗).

An agglomerative clustering algorithm is a policy of this MDP, and the optimal similarity function is given by the optimal action-value function Q∗. The sum of undiscounted immediate rewards "telescopes" to the simple result \sum_{t=0}^{T−1} R(s_t, a_t) = RI(s_T, s∗) − RI(s_0, s∗) [21]. Therefore RL for a finite time horizon T is equivalent to maximizing the Rand index RI(s_T, s∗) of the clustering at time T. We will focus on the simple case of infinite discounting (γ = 0). Then the optimal action-value function Q∗(s, a) is equal to R(s, a). In other words, R(s, a) is the best similarity function. We know R(s, a) exactly for the training data, but we would also like it to apply to data for which ground truth is unknown. Therefore we train a function approximator Q_θ so that Q_θ(s, a) ≈ R(s, a) on the training data, and hope that it generalizes to the test data. The following procedure is a simple way of doing this.

1. Generate an initial sequence of clusterings (s_1, ..., s_T) by using R(s, a) as a similarity function: iterate a_t = argmax_a R(s_t, a) and s_{t+1} = a_t(s_t), terminating when max_a R(s_t, a) ≤ 0.
2. Train the parameters θ so that Q_θ(s_t, a) ≈ R(s_t, a) for all s_t and for all a ∈ A(s_t).
3.
Generate a new sequence of clusterings by using Q_θ(s, a) as a similarity function: iterate a_t = argmax_a Q_θ(s_t, a) and s_{t+1} = a_t(s_t), terminating when max_a Q_θ(s_t, a) ≤ 0.
4. Goto 2.

Here the clustering s_1 is the trivial one in which each element is its own cluster. (The termination of the clustering is equivalent to the continued selection of a "do-nothing" action that leaves the clustering the same, s_{t+1} = s_t.) This is an example of "on-policy" learning, because the function approximator Q_θ is trained on clusterings generated by using it as a policy. It makes intuitive sense to optimize Q_θ for the kinds of clusterings that it actually sees in practice, rather than for all possible clusterings. However, there is no theoretical guarantee that such on-policy learning will converge, since we are using a nonlinear function approximation. Guarantees only exist if the action-value function is represented by a lookup table or a linear approximation. Nevertheless, the nonlinear approach has achieved practical success in a number of problem domains. Later we will present empirical results supporting the effectiveness of on-policy learning in our application. The assumption of infinite discounting removes a major challenge of RL, dealing with temporally delayed reward. Are we losing anything by this assumption? If our approximation to the action-value function were perfect, Q_θ(s, a) = R(s, a), then agglomerative clustering would amount to greedy maximization of the Rand index. It is straightforward to show that this yields the clustering that is the global maximum. In practice, the approximation will be imperfect, and extending the above procedure to finite discounting could be helpful.

3 Agglomerating superpixels for image segmentation

The introduction of the Berkeley segmentation database (BSD) provoked a renaissance of the boundary detection and segmentation literature.
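As a concrete illustration of the agglomeration policy from Section 2, here is a minimal toy sketch in which the Rand-index gain R(s, a) itself serves as the similarity function, standing in for the learned Q_θ on training data; all names are hypothetical and the toy clustering is invented:

```python
import itertools

def rand_index(clustering_a, clustering_b):
    """Fraction of element pairs on which two clusterings agree.
    Each clustering maps element -> cluster id."""
    elems = list(clustering_a)
    agree = total = 0
    for x, y in itertools.combinations(elems, 2):
        same_a = clustering_a[x] == clustering_a[y]
        same_b = clustering_b[x] == clustering_b[y]
        agree += same_a == same_b
        total += 1
    return agree / total

def greedy_agglomerate(elements, similarity):
    """Greedy agglomeration: repeatedly merge the cluster pair with the highest
    similarity, stopping when no pair scores above zero. `similarity(clusters, i, j)`
    plays the role of Q(s, a); under infinite discounting the ideal choice is
    the Rand-index gain R(s, a)."""
    clusters = [{e} for e in elements]
    while len(clusters) > 1:
        best = max(itertools.combinations(range(len(clusters)), 2),
                   key=lambda ij: similarity(clusters, *ij))
        if similarity(clusters, *best) <= 0:
            break  # the "do-nothing" action is preferred
        i, j = best
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

# Toy ground truth: a and b belong together, c is separate.
truth = {'a': 0, 'b': 0, 'c': 1}

def rand_gain(clusters, i, j):
    # R(s, a): Rand-index change from merging clusters i and j.
    def as_map(cl):
        return {e: k for k, part in enumerate(cl) for e in part}
    merged = [c for k, c in enumerate(clusters) if k not in (i, j)]
    merged.append(clusters[i] | clusters[j])
    return rand_index(as_map(merged), truth) - rand_index(as_map(clusters), truth)

final = greedy_agglomerate(['a', 'b', 'c'], rand_gain)
print(sorted(map(sorted, final)))  # [['a', 'b'], ['c']]
```

The loop recovers the ground-truth partition and then stops, because every further merge would decrease the Rand index, which is exactly the termination condition max_a R(s_t, a) ≤ 0.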
The creation of a ground-truth segmentation database enabled learning-driven methods for low-level boundary detection, which were found to outperform classic methods such as Canny's [23, 10]. Global and multi-scale features were added to improve performance even further [26, 22, 29], and recently learning methods have been developed that directly optimize measures of segmentation performance [32, 13].

[Figure 1, left panel: precision-recall of connected pixel pairs for Standard CN, ilastik, BLOTC CN, MALIS CN, Single Linkage, and LASH.]

Method          | Rand Error | Pair Recall | Pair Precision
Baseline        | .0499      | 0           | n/a
CN [14, 33]     | .0084      | 91.85       | 91.33
ilastik [31]    | .0064      | 87.85       | 99.31
BLOTC CN [13]   | .0056      | 94.32       | 94.57
MALIS CN [32]   | .0055      | 94.97       | 93.41
Single Linkage  | .0049      | 96.48       | 93.78
LASH            | .0029      | 94.83       | 99.30

Figure 1: Performance comparison on a one-megavoxel test set; parameters, such as the binarization threshold for the convolutional network (CN) affinity graphs, were determined based on the optimal value on the training set. CNs used a field of view of 16 × 16 × 16, ilastik used a field of view of 23 × 23 × 23, and LASH used a field of view of 50 × 50 × 50. LASH leads to a substantial decrease in Rand error (1 − Rand index), and much higher connected pixel-pair precision at similar levels of recall compared to other state-of-the-art methods. The "connected pixel-pairs" curve measures the accuracy of connected pixel pairs relative to ground truth. This measure corrects for the imbalance in the Rand error for segmentations in which most pixels are disconnected from one another, as in the case of EM reconstruction of dense brain wiring. For example, the trivial baseline above represents the trivial segmentation in which all pixels are disconnected from one another, and achieves relatively low Rand error but of course zero connected-pair recall.
However, boundary detectors alone have so far failed to produce segmentations that rival human levels of accuracy. Therefore many recent studies use boundary detectors to generate an oversegmentation of the image into fragments, and then attempt to cluster the “superpixels” . This approach has been shown to improve the accuracy of segmenting natural images [3, 30]. A similar approach [2, 1, 17, 35, 16] has also been employed to segment 3d nanoscale images from serial electron microscopy [11, 9]. In principle, it should be possible to map the connections between neurons by analyzing these images [20, 12, 28]. Since this analysis is highly laborious, it would be desirable to have automated computer algorithms for doing so [15]. First, each synapse must be identified. Second, the “wires” of the brain, its axons and dendrites, must be traced, i.e., segmented. If these two tasks are solved, it is then possible to establish which pairs of neurons are connected by synapses. For our experiments, images of rabbit retina inner plexiform layer were acquired using Serial Block Face Scanning Electron Microscopy (SBF-SEM) [9, 4]. The tissue was specially stained to enhance cell boundaries while suppressing contrast from intracellular structures (e.g., mitochondria). The image volume was acquired at 22 × 22 × 25 nm resolution, yielding a nearly isotropic 3d dataset with excellent slice-to-slice registration. Two training sets were created by human tracing and proofreading of subsets of the 3d image. The training sets were augmented with their eight 3d orthogonal rotations and reflections to yield 16 training images that contained roughly 80 megavoxels of labeled training data. A separate one megavoxel labeled test set was used to evaluate algorithm performance. 3.1 Boundary Detectors For comparison purposes, as well as to provide supervoxels for LASH, we tested several state of the art boundary detection algorithms on the data. 
A convolutional network (CN) was trained to produce affinity graphs that can be segmented using connected components or watershed [14, 33]. We also trained CNs using MALIS and BLOTC, which are recently proposed machine learning algorithms that optimize true metrics of segmentation performance. MALIS directly optimizes the Rand index [32]. BLOTC, originally introduced for 2d boundary maps and here generalized to 3d affinity graphs, optimizes ‘warping error,’ a measure of topological disagreement derived from concepts introduced in digital topology [13]. Finally, we trained ‘ilastik,’ a random-forest based boundary detector [31]. Unlike the CNs, which operated on the raw image and learned features as part of the training process, ilastik uses a predefined set of image features that represent low-level image structure such as intensity gradients and texture. The CNs used a field of view of 16 × 16 × 16 voxels to make decisions about any particular image location, while ilastik used features from a field of view of up to 23 × 23 × 23 voxels.

[Figure: panels showing an SBF-SEM Z-X reslice, human labeling, BLOTC CN, and LASH segmentations; histogram of supervoxel sizes by % of volume occupied.]

Figure 2: (Left) Visual comparison of output from a state of the art boundary detector, BLOTC CN [13], and Learning to Agglomerate Superpixel Hierarchies (LASH). Image and segmentations are from a Z-X axis resectioning of the 100 × 100 × 100 voxel test set. Segmentations were performed in 3d though only a single 2d 100 × 100 reslice is shown here. White circle shows an example location in which BLOTC CN merged two separate objects due to weak staining in an adjacent image slice; orange ellipse shows an example location in which BLOTC CN split up a single thin object. LASH avoids both of these errors. (Right) Distribution of supervoxel sizes, as measured by percentage of image volume occupied by specific size ranges of supervoxels.
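The field-of-view numbers quoted here follow from simple receptive-field arithmetic for stacked stride-1 convolutions: each layer with kernel size k adds k − 1 voxels per axis. The specific layer counts below are illustrative assumptions, not the architectures actually used.

```python
def field_of_view(kernel_sizes):
    """Receptive field per axis of stacked stride-1 'valid' convolutions:
    1 + sum of (kernel_size - 1) over the layers."""
    fov = 1
    for k in kernel_sizes:
        fov += k - 1
    return fov

# Five layers of 4x4x4 kernels already give a 16-voxel field of view,
# but reaching ~50 voxels with the same kernels needs over three times
# as many layers (hypothetical layer counts, for illustration only).
assert field_of_view([4] * 5) == 16
assert field_of_view([4] * 17) == 52
```

This arithmetic is what makes the later Discussion point plausible: enlarging a CN's field of view from 16³ toward 50³ requires many additional layers and correspondingly more training time.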
To generate segmentations of the test set, we found connected components of the thresholded boundary detector output, and then performed marker-based watershed to grow out these regions until they touched. Figure 1 shows the Rand index attained by the CNs and ilastik. Here we convert the index into an error measure by subtracting it from 1. Segmentation performance is sensitive to the threshold used to binarize boundary detector output, so we used the threshold that minimized Rand error on the training set.

3.2 Supervoxel Agglomeration

Supervoxels were generated from BLOTC convolutional network output, using connected components applied at a high threshold (0.9) to avoid undersegmented regions (in the test set, there was only one supervoxel in the initial oversegmentation which contained more than one ground truth region). Regions were then grown out using marker-based watershed. The size of the supervoxels varied considerably, but the majority of the image volume was assigned to supervoxels larger than 1,000 voxels in size (as shown in Figure 2). For each pair of neighboring supervoxels, we computed a 138 dimensional feature vector, as described in the Appendix. This was used as input to the learned similarity function Qθ, which we represented by a decision-tree boosting classifier [7]. We followed the procedure given in Section 2, but with two modifications. First, the examples used in each training iteration were collected by segmenting all the images in the training set, not only a single image. Second, Qθ was trained to approximate H(R(st, a)) rather than R(st, a), where H is the Heaviside step function and the log-loss was optimized. This was done because our function approximator was suitable for classification, but some other approximator suitable for regression could also be used. The loop in the procedure of Section 2 was terminated when training error stopped decreasing by a significant amount, after 3 cycles.
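The supervoxel generation step described above (connected components of the network output at a high threshold, then growing the regions out with marker-based watershed) might be sketched as follows. The use of `scipy.ndimage` and the exact cost transform are assumptions for illustration, not the paper's code.

```python
import numpy as np
from scipy import ndimage

def supervoxels_from_affinity(pmap, seed_threshold=0.9):
    """pmap: 3d array in [0, 1]; high values mean 'inside an object'.

    Seeds are connected components of pmap > seed_threshold (the high
    threshold avoids undersegmented seeds); remaining voxels are then
    assigned to seeds by marker-based watershed."""
    seeds, n_seeds = ndimage.label(pmap > seed_threshold)
    # watershed_ift expects an unsigned-int cost image; boundaries are costly
    cost = np.round((1.0 - pmap) * 255).astype(np.uint8)
    labels = ndimage.watershed_ift(cost, seeds.astype(np.int32))
    return labels, n_seeds

# Two slabs separated by a low-affinity plane at z = 2.
pmap = np.ones((5, 5, 5))
pmap[2] = 0.0
labels, n_seeds = supervoxels_from_affinity(pmap)
assert n_seeds == 2
assert labels[0, 0, 0] != labels[4, 0, 0]
```

The high seed threshold trades undersegmentation risk for an oversegmentation that the learned agglomeration can later repair, which matches the design choice stated in the text.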
Then the learned similarity function was applied to agglomerate supervoxels in the test set to yield the results in Figure 1. The agglomeration terminated after around 5000 steps. The results show a substantial decrease in Rand error compared to state of the art techniques (MALIS and BLOTC CN). A potential subtlety in interpreting these results is the small absolute values of the Rand error for all of these techniques. The Rand error is the probability that a pair of voxels is classified inconsistently with the ground truth as belonging to the same or to different clusters. This classification task is highly imbalanced, because the vast majority of voxel pairs belong to different ground truth clusters. Hence even a completely trivial segmentation in which every voxel is its own cluster can achieve fairly low Rand error (Figure 1). Precision and recall are better quantifications of performance at imbalanced classification [23]. Figure 1 shows that LASH achieves much higher precision at similar recall. For the task of segmenting neurons in EM images, high precision is especially important as false positives can lead to false positive neuron-pair connections. Visual comparison of segmentation performance is shown in Figure 2. LASH avoids both split and merge errors that result from segmenting BLOTC CN output. BLOTC CN in turn was previously shown to outperform other techniques such as Boosted Edge Learning, multi-scale normalized cut, and gPb-OWT-UCM [13].

3.3 Naive training of the similarity function on superpixel pairs

In the conventional algorithms for agglomerative clustering, the similarity S(A, B) of two clusters A and B can be reduced to the similarities S(x, y) of elements x ∈ A and y ∈ B. For example, single linkage clustering assumes that S(A, B) = max_{x∈A, y∈B} S(x, y). The maximum operation is replaced by the minimum or the average in other common algorithms. LASH does not impose any such constraint of reducibility on the similarity function.
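Single linkage clustering at a fixed threshold is equivalent to taking connected components of the thresholded similarity graph, which gives the naive baseline a very compact implementation. A minimal sketch (the toy similarity matrix is invented for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def single_linkage_segmentation(similarity, threshold):
    """Single linkage clustering at a fixed threshold: keep edges with
    similarity above the threshold and take connected components of the
    resulting graph (the two formulations are equivalent)."""
    adjacency = csr_matrix(similarity > threshold)
    n_clusters, labels = connected_components(adjacency, directed=False)
    return n_clusters, labels

# Toy similarity matrix for 4 superpixels: 0-1 and 2-3 strongly similar,
# 1-2 only weakly similar.
sim = np.array([[1.0, 0.9, 0.0, 0.0],
                [0.9, 1.0, 0.3, 0.0],
                [0.0, 0.3, 1.0, 0.8],
                [0.0, 0.0, 0.8, 1.0]])
n, labels = single_linkage_segmentation(sim, threshold=0.5)
assert n == 2
```

Because every cluster similarity reduces to element similarities, no new similarities ever need to be computed after the initial matrix, which is precisely the reducibility constraint that LASH drops.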
Consequently, LASH must truly compute new similarities after each agglomerative step. In contrast, conventional algorithms can start by computing the matrix of similarities between the elements to be clustered, and all further similarities between clusters follow from trivial computations. Therefore another method of learning agglomerative clustering is to train a similarity function on pairs of superpixels only, and then apply a standard agglomerative algorithm such as single linkage clustering. This has previously been done for images from serial electron microscopy [2]. (Note that single linkage clustering is equivalent to creating a graph in which nodes are superpixels and edge weights are their similarities, and then finding the connected components of the thresholded graph.) As shown in Figure 1, clustering superpixels in this way improves upon boundary detection algorithms. However, the improvement is substantially less than that achieved by LASH.

Discussion

Why did LASH achieve better accuracy than other approaches? One might argue that the comparison is unfair, because the CNs and ilastik detected boundaries using a field of view considerably smaller than that used in the LASH features (up to 50 × 50 × 50 for the SVF feature computation). If these competing methods were allowed to use the same context, perhaps their accuracy would improve dramatically. This is possible, but training time would also increase dramatically. Training a CN with MALIS or BLOTC on 80 megavoxels of training data with a 16 × 16 × 16 field of view already takes on the order of a week, using an optimized GPU implementation [24]. Adding the additional layers to the CN required to achieve a field of view of 50 × 50 × 50 might require months of additional training.1 In contrast, the entire LASH training process is completed within roughly one day. This can be attributed to the efficiency gains associated with computations on supervoxels rather than voxels.
In short, LASH is more accurate because it is efficient enough to utilize more image context in its computations. Why does LASH outperform the naive method of directly training superpixel similarities used in single linkage clustering? The naive method uses the same amount of image context. In this case, LASH is probably superior because it trains the similarities by optimizing the clustering that they actually yield. The naive method resembles LASH, but with the modification that the action-value function is trained only for the actions possible on the clustering s1 rather than on the entire sequence of clusterings (see Step 2 of the procedure in Section 2). We have conceptualized LASH in the framework of reinforcement learning. Previous work has applied reinforcement learning to other structured prediction problems [21]. An additional closely related approach to structured prediction is SEARN, introduced by Daumé et al. [8]. As in our approach, SEARN uses a single classifier repeatedly on a (structured) input to iteratively solve an inference problem. The major difference between our approach and theirs is the way the classifier is trained. In particular, SEARN begins with a manually specified policy (given by ground truth or heuristics) and then iteratively degrades the policy as a classifier is trained and ‘replaces’ the initial policy.

1 Using a much larger field of view with a CN will likely require new architectures that incorporate multiscale capabilities.

Figure 3: Example of SVF feature computation. Blue and red are two different supervoxels. Left panel shows rendering of the objects, right panel shows smoothed vector fields (thin arrows), along with chosen center-of-mass orientation vectors (thick blue/red lines) and the line connecting the two centers of mass (thick green line). The angle between the thick blue/red and green lines is used as a feature during LASH.
In our approach, the initial policy may exhibit poor performance (i.e., for random initial θ), and then improves through training. We have implemented LASH with infinite discounting of future rewards, but extending to finite discounting might produce better results. Generalizing the action space to include splitting of clusters as well as agglomeration might also be advantageous. Finally, the objective function optimized by learning might be tailored to better reflect more task-specific criteria, such as the number of locations that a human might have to correct (‘proofread’) to yield an error-free segmentation by semiautomated means. These directions will be explored in future work.

Appendix

Features of supervoxel pairs used by the similarity function

The similarity function that we trained with LASH required as input a set of features for each supervoxel pair that might be merged. For each supervoxel pair, we first computed a ‘decision point,’ defined as the midpoint of the shortest line that connects any two points of the supervoxels. From this decision point, we computed several types of features that encode information about the underlying affinity graph as well as the shape of the supervoxel objects near the decision point: (1) size of each supervoxel in the pair, (2) distance between the two supervoxels, (3) analog affinity value of the graph edge at which the two supervoxels would merge if grown out using watershed, and the distance from the decision point to this edge, (4) ‘Smoothed Vector Field’ (SVF), a novel shape feature described below, computed at various spatial scales (maximum 50 × 50 × 50). This feature measures the orientation of each supervoxel near the decision point. Finally, for each supervoxel in the pair we also included the above features for the closest 4 other decision points that involve that supervoxel. Overall, this feature set yielded a 138 dimensional feature vector for each supervoxel pair.
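The decision point defined above can be computed brute-force from the voxel coordinates of the two supervoxels. A sketch (names are my own; a real implementation would restrict the search to surface voxels rather than all voxel pairs):

```python
import numpy as np
from scipy.spatial.distance import cdist

def decision_point(voxels_a, voxels_b):
    """Midpoint of the shortest line connecting two supervoxels, given
    their voxel coordinates as (n, 3) arrays. Brute force over all
    voxel pairs; fine for modestly sized supervoxels."""
    d = cdist(voxels_a, voxels_b)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return (voxels_a[i] + voxels_b[j]) / 2.0, d[i, j]

a = np.array([[0, 0, 0], [1, 0, 0]])
b = np.array([[5, 0, 0], [9, 9, 9]])
point, dist = decision_point(a, b)
assert np.allclose(point, [3, 0, 0]) and dist == 4.0
```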
The smoothed vector field (SVF) shape feature attempts to determine the orientation of a supervoxel near some specific location (e.g., the decision point used in reference to some other supervoxel). The main challenge in computing such an orientation is dealing with high-frequency noise and irregularities in the precise shape of the supervoxel. We developed a novel approach that deals with this issue by smoothing a vector field derived from image moments. For a binary 3d image, SVF is computed in the following manner:

1. A spherical mask of radius 5 is selected around each image location $I_{x,y,z}$, and $v_{x,y,z}$ is then computed as the eigenvector with the largest eigenvalue of the 3 × 3 second order image moment matrix for that window.

2. The vector field is smoothed via 3 iterations of ‘Ising-like’ interactions among nearest neighbor vector fields:

$$v_{x,y,z} \leftarrow f\!\left(\sum_{i=x-1}^{x+1}\ \sum_{j=y-1}^{y+1}\ \sum_{k=z-1}^{z+1} v_{i,j,k}\right),$$

where f represents a (non-linear) renormalization such that the magnitude of each vector remains 1.

3. The smoothed vector at the center of mass of the supervoxel is used to compute the angular orientation of the supervoxel (see Figure 3).

References

[1] B. Andres, J. H. Kappes, U. Köthe, C. Schnörr, and F. A. Hamprecht. An empirical comparison of inference algorithms for graphical models with higher order factors using opengm. In M. Goesele, S. Roth, A. Kuijper, B. Schiele, and K. Schindler, editors, Pattern Recognition, volume 6376 of Lecture Notes in Computer Science, pages 353–362. Springer, 2010.

[2] B. Andres, U. Koethe, M. Helmstaedter, W. Denk, and F. Hamprecht. Segmentation of SBFSEM volume data of neural tissue by hierarchical classification. In Proceedings of the 30th DAGM symposium on Pattern Recognition, pages 142–152. Springer, 2008.

[3] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. From contours to regions: An empirical evaluation. Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 0:2294–2301, 2009.

[4] K. L. Briggman and W. Denk.
Towards neural circuit reconstruction with volume electron microscopy techniques. Current Opinion in Neurobiology, 16(5):562–570, 2006.

[5] O. Chapelle, B. Schölkopf, and A. Zien. Semi-supervised learning. The MIT Press, page 528, 2010.

[6] D. Chklovskii, S. Vitaladevuni, and L. Scheffer. Semi-automated reconstruction of neural circuits using electron microscopy. Current Opinion in Neurobiology, 2010.

[7] M. Collins, R. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1):253–285, 2002.

[8] H. Daumé, III, J. Langford, and D. Marcu. Search-based structured prediction. Machine Learning, 75:297–325, 2009. doi:10.1007/s10994-009-5106-x.

[9] W. Denk and H. Horstmann. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol, 2(11):e329, 2004.

[10] P. Dollar, Z. Tu, and S. Belongie. Supervised learning of edges and object boundaries. Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 2:1964–1971, 2006.

[11] K. J. Hayworth, N. Kasthuri, R. Schalek, and J. W. Lichtman. Automating the collection of ultrathin serial sections for large volume TEM reconstructions. Microscopy and Microanalysis, 12(S02):86–87, 2006.

[12] M. Helmstaedter, K. L. Briggman, and W. Denk. 3D structural imaging of the brain with photons and electrons. Current Opinion in Neurobiology, 18(6):633–641, 2008.

[13] V. Jain, B. Bollmann, M. Richardson, D. Berger, M. Helmstaedter, K. Briggman, W. Denk, J. Bowden, J. Mendenhall, W. Abraham, K. Harris, N. Kasthuri, K. Hayworth, R. Schalek, J. Tapia, J. Lichtman, and H. Seung. Boundary Learning by Optimization with Topological Constraints. In Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 2010.

[14] V. Jain, J. F. Murray, F. Roth, S. C. Turaga, V. Zhigulin, K. L. Briggman, M. N. Helmstaedter, W. Denk, and H. S. Seung. Supervised learning of image restoration with convolutional networks.
Computer Vision, IEEE International Conference on, 0:1–8, 2007.

[15] V. Jain, H. Seung, and S. Turaga. Machines that learn to segment images: a crucial technology for connectomics. Current Opinion in Neurobiology, 2010.

[16] E. Jurrus, R. Whitaker, B. W. Jones, R. Marc, and T. Tasdizen. An optimal-path approach for neural circuit reconstruction. In Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE International Symposium on, pages 1609–1612, May 2008.

[17] V. Kaynig, T. Fuchs, and J. M. Buhmann. Neuron geometry extraction by perceptual grouping in ssTEM images. In Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 2010.

[18] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 147–159, 2004.

[19] A. Kulesza, F. Pereira, et al. Structured learning with approximate inference. Advances in Neural Information Processing Systems, 20, 2007.

[20] J. W. Lichtman and J. R. Sanes. Ome sweet ome: what can the genome tell us about the connectome? Curr. Opin. Neurobiol., 18(3):346–353, Jun 2008.

[21] F. Maes, L. Denoyer, and P. Gallinari. Structured prediction with reinforcement learning. Machine Learning, 77(2):271–301, 2009.

[22] M. Maire, P. Arbeláez, C. Fowlkes, and J. Malik. Using contours to detect and localize junctions in natural images. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.

[23] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Patt. Anal. Mach. Intell., pages 530–549, 2004.

[24] J. Mutch, U. Knoblich, and T. Poggio. CNS: a GPU-based framework for simulating cortically-organized networks. Technical report, Massachusetts Institute of Technology, 2010.

[25] W. M. Rand. Objective criteria for the evaluation of clustering methods.
Journal of the American Statistical Association, 66(336):846–850, 1971.

[26] X. Ren. Multi-scale improves boundary detection in natural images. In Proceedings of the 10th European Conference on Computer Vision: Part III, pages 533–545. Springer-Verlag, 2008.

[27] X. Ren and J. Malik. Learning a Classification Model for Segmentation. In Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2, page 10. IEEE Computer Society, 2003.

[28] H. Seung. Reading the Book of Memory: Sparse Sampling versus Dense Mapping of Connectomes. Neuron, 62(1):17–29, 2009.

[29] E. Sharon, A. Brandt, and R. Basri. Fast multiscale image segmentation. In Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, volume 1, pages 70–77. IEEE, 2000.

[30] E. Sharon, M. Galun, D. Sharon, R. Basri, and A. Brandt. Hierarchy and adaptivity in segmenting visual scenes. Nature, 442(7104):810–813, 2006.

[31] C. Sommer, C. Straehle, U. Köthe, and F. A. Hamprecht. ilastik: Interactive learning and segmentation toolkit. In 8th IEEE International Symposium on Biomedical Imaging (ISBI 2011), in press, 2011.

[32] S. C. Turaga, K. L. Briggman, M. Helmstaedter, W. Denk, and H. S. Seung. Maximin affinity learning of image segmentation. In NIPS, 2009.

[33] S. C. Turaga, J. F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H. S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511–538, 2010.

[34] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of image segmentation algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):929, 2007.

[35] S. N. Vitaladevuni and R. Basri. Co-clustering of image segments using convex optimization applied to EM neuronal reconstruction. In Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 2010.

[36] M. Wainwright.
Estimating the wrong graphical model: Benefits in the computation-limited setting. The Journal of Machine Learning Research, 7:1829–1859, 2006.
2011
Structured Learning for Cell Tracking

Xinghua Lou, Fred A. Hamprecht
Heidelberg Collaboratory for Image Processing (HCI)
Interdisciplinary Center for Scientific Computing (IWR)
University of Heidelberg, Heidelberg 69115, Germany
{xinghua.lou,fred.hamprecht}@iwr.uni-heidelberg.de

Abstract

We study the problem of learning to track a large quantity of homogeneous objects, as in cell tracking for cell culture studies and developmental biology. Reliable cell tracking in time-lapse microscopic image sequences is important for modern biomedical research. Existing cell tracking methods are usually kept simple and use only a small number of features to allow for manual parameter tweaking or grid search. We propose a structured learning approach that allows optimal parameters to be learned automatically from a training set. This allows for the use of a richer set of features, which in turn affords improved tracking compared to recently reported methods on two public benchmark sequences.

1 Introduction

One distinguishing property of life is its temporal dynamics, and it is hence only natural that time-lapse experiments play a crucial role in current research on signaling pathways, drug discovery and developmental biology [17]. Such experiments yield a very large number of images, and reliable automated cell tracking emerges naturally as a prerequisite for further quantitative analysis. Even today, cell tracking remains a challenging problem in dense populations, in the presence of complex behavior, or when image quality is poor. Existing cell tracking methods can broadly be categorized as deformable models, stochastic filtering and object association. Deformable models combine detection, segmentation and tracking by initializing a set of models (e.g. active contours) in the first frame and updating them in subsequent frames (e.g. [17, 8]). Large displacements are difficult to capture with this class of techniques and are better handled by state space models, e.g.
in the guise of stochastic filtering. The latter also allows for sophisticated observation models (e.g. [20]). Stochastic filtering builds on a solid statistical foundation, but is often limited in practice due to its high computational demands. Object association methods approximate and simplify the problem by separating the detection and association steps: once object candidates have been detected and characterized, a second step suggests associations between object candidates at different frames. This class of methods scales well [21, 16, 13] and allows the tracking of thousands of cells in 3D [19]. All of the above approaches contain energy terms whose parameters may be tedious or difficult to adjust. Recently, great efforts have been made to produce better energy terms with the help of machine learning techniques. This was first accomplished by casting tracking as a local affinity prediction problem, such as binary classification with either offline [1] or online learning [11, 5, 15], weakly supervised learning with imperfect oracles [27], manifold appearance model learning [25], or ranking [10, 18]. However, these local methods fail to capture the very important dependency among associations, hence the resulting local affinities do not necessarily guarantee a better global association [26]. To address this limitation, [26] extended the RankBoost method from [18] to rank global associations represented as a Conditional Random Field (CRF). Nevertheless, it has two major drawbacks. Firstly, it depends on a set of artificially generated false association samples that can make the training data particularly imbalanced and the training procedure too expensive for large-scale tracking problems. Secondly, RankBoost requires the ranking features to be positively correlated with the final ranking (i.e. the association score) [10]. This in turn requires careful pre-adjustment of the sign of each feature based on some prior knowledge [18].
In practice, this prior knowledge may not always be available or reliable. The contribution of this paper is two-fold. We first present an extended formulation of the object association models proposed in the literature. This generalization improves the expressiveness of the model, but also increases the number of parameters. Secondly, we hence propose to use structured learning to automatically learn optimal parameters from a training set, and to profit fully from this richer description. Our method addresses the limitations of the aforementioned learning approaches in a principled way. The rest of the paper is organized as follows. In section 2, we present the extended object association models and a structured learning approach for global affinity learning. In section 3, an evaluation shows that our framework inherits the runtime advantage of object association while addressing many of its limitations. Finally, section 4 states our conclusions and discusses future work.

2 Structured Learning for Cell Tracking

2.1 Association Hypotheses and Scoring

We assume that a previous detection and segmentation step has identified object candidates in all frames, see Fig. 1. We set out to find the set of object associations that best explains these observations. To this end, we admit the following set E of standard events [21, 13]: a cell can move or divide, and it can appear or disappear. In addition, we allow two cells to (seemingly) merge, to account for occlusion or undersegmentation; and a cell can (seemingly) split, to allow for the lifting of occlusion or oversegmentation. These additional hypotheses are useful to account for the errors that typically occur in the detection and segmentation step in crowded or noisy data. The distinction between division and split is reasonable given that typical fluorescence stains endow the anaphase with a distinctive appearance.
[Figure: toy example showing object candidates in two frames, association hypotheses such as a cell moving, dividing or splitting, their feature vectors f, and the corresponding binary indicator variables z.]

Figure 1: Toy example: two sets of object candidates, and a small subset of the possible association hypotheses. One particular interpretation of the scene is indicated by colored arrows (left) or equivalently by a configuration of binary indicator variables z (rightmost column in table).

Given a pair of object candidate lists x = {C, C′} in two neighboring frames, there is a multitude of possible association hypotheses, see Fig. 1. We have two tasks: firstly, to allow only consistent associations (e.g. making sure that each cell in the second frame is accounted for only once); and secondly to identify, among the multitude of consistent hypotheses, the one that is most compatible with the observations, and with what we have learned from the training data. We express this compatibility of the association between c ∈ P(C) and c′ ∈ P(C′) by event e ∈ E as an inner product $\langle f^e_{c,c'}, w_e \rangle$. Here, $f^e_{c,c'}$ is a feature vector that characterizes the discrepancy (if any) between object candidates c and c′; and $w_e$ is a parameter vector that encodes everything we have learned from the training data. Summing over all object candidates in either of the frames and over all types of events gives the following compatibility function:

$$L(x, z; w) = \sum_{e \in E} \sum_{c \in P(C)} \sum_{c' \in P(C')} \langle f^e_{c,c'}, w_e \rangle \, z^e_{c,c'} \quad (1)$$

s. t.
$$\sum_{e \in E} \sum_{c \in P(C)} z^e_{c,c'} = 1 \quad \text{and} \quad \sum_{e \in E} \sum_{c' \in P(C')} z^e_{c,c'} = 1, \quad \text{with } z^e_{c,c'} \in \{0, 1\} \quad (2)$$

The constraints in the last line involve binary indicator variables z that reflect the consistency requirements: each candidate in the first frame must have a single fate, and each candidate from the second frame a unique history. As an important technical detail, note that $P(C) := C \cup (C \otimes C)$ is a set comprising each object candidate, as well as all ordered pairs of object candidates from a frame.1 This allows us to conveniently subsume cell divisions, splits and mergers in the above equation. Overall, the compatibility function L(x, z; w), i.e. the global affinity measure, states how well a set of associations z matches the observations f(x) computed from the raw data x, given the knowledge w from the training set. The remaining tasks, discussed next, are how to learn the parameters w from the training data (section 2.2); given these, how to find the best possible associations z (section 2.3); and finding useful features (section 2.4).

2.2 Structured Max-Margin Parameter Learning

In learning the parameters automatically from a training set, we pursue two goals: first, to go beyond manual parameter tweaking in obtaining the best possible performance; and second, to make the process as facile as possible for the user. This is under the assumption that most experimentalists find it easier to specify what a correct tracking should look like, rather than what value a more-or-less obscure parameter should have. Given N training frame pairs X = {x_n} and their correct associations Z* = {z*_n}, n = 1, . . . , N, the best set of parameters is the optimizer of

$$\arg\min_w \; R(w; X, Z^*) + \lambda \Omega(w) \quad (3)$$

Here, R(w; X, Z*) measures the empirical loss of the current parametrization w given the training data X, Z*. To prevent overfitting to the training data, this is complemented by the regularizer Ω(w) that favors parsimonious models. We use L1 or L2 regularization ($\Omega(w) = \|w\|_p^p / p$, $p \in \{1, 2\}$), i.e.
a measure of the length of the parameter vector w. The latter is often used for its numerical efficiency, while the former is popular thanks to its potential to induce sparse solutions (i.e., some parameters can become zero). The empirical loss is given by

$$R(w; X, Z^*) = \frac{1}{N} \sum_{n=1}^{N} \Delta(z^*_n, \hat{z}_n(w; x_n)).$$

Here ∆(z*, ẑ) is a loss function that measures the discrepancy between a true association z* and a prediction by specifying the fraction of missed events w.r.t. the ground truth:

$$\Delta(z^*, \hat{z}) = \frac{1}{|z^*|} \sum_{e \in E} \sum_{c \in P(C)} \sum_{c' \in P(C')} z^{*e}_{c,c'} \left(1 - \hat{z}^e_{c,c'}\right). \quad (4)$$

This decomposable function allows for exact inference when solving Eq. 5 [6]. Importantly, both the input (objects from a frame pair) and output (associations between objects) in this learning problem are structured. We hence resort to max-margin structured learning [2] to exploit the structure and dependency within the association hypotheses. In comparison to the other aforementioned learning methods, structured learning allows us to directly learn the global affinity measure, avoid generating many artificial false association samples, and drop any assumptions on the signs of the features. Structured learning has been successfully applied to many complex real world problems such as word/sequence alignment [22, 24], graph matching [6], static analysis of binary executables [14] and segmentation [3].

1 For the example in Fig. 1, P(C) = {c1, c2, c3, {c1, c2}, {c1, c3}, {c2, c3}}.

In particular, we attempt to find the decision boundary that maximizes the margin between the correct association z*_n and the closest runner-up solution. An equivalent formulation is the condition that the score of z*_n be greater than that of any other solution. To allow for regularization, one can relax this constraint by introducing slack variables ξ_n, which finally yields the following objective function for the max-margin structured learning problem from Eq. 3:

$$\arg\min_{w,\, \xi \geq 0} \; \frac{1}{N} \sum_{n=1}^{N} \xi_n + \lambda \Omega(w)$$

s. t.
$$\forall n, \; \forall \hat{z}_n \in Z_n : \quad L(x_n, z^*_n; w) - L(x_n, \hat{z}_n; w) \;\geq\; \Delta(z^*_n, \hat{z}_n) - \xi_n, \quad (5)$$

where Z_n is the set of possible consistent associations and ∆(z*_n, ẑ_n) − ξ_n is known as “margin-rescaling” [24]. Intuitively, it pushes the decision boundary further away from the “bad” solutions with high losses.

2.3 Inference and Implementation

Since Eq. 5 involves an exponential number of constraints, the learning problem cannot be represented explicitly, let alone solved directly. We thus resort to the bundle method [23] which, in turn, is based on the cutting-planes approach [24]. The basic idea is as follows: Start with some parametrization w and no constraints. Iteratively find, first, the optimum associations for the current w by solving, for all n, $\hat{z}_n = \arg\max_z \{L(x_n, z; w) + \Delta(z^*_n, z)\}$. Use all these ẑ_n to identify the most violated constraint, and add it to Eq. 5. Update w by solving Eq. 5 (with the added constraints), then find new best associations, and so on. For a given parametrization, the optimum associations can be found by integer linear programming (ILP) [16, 21, 13]. Our framework has been implemented in Matlab and C++, including a labeling GUI for the generation of training set associations, feature extraction, model inference and the bundle method. To reduce the search space and eliminate hypotheses with no prospect of being realized, we constrain the hypotheses to a k-nearest neighborhood with distance thresholding. We use IBM CPLEX2 as the underlying optimization platform for the ILP, quadratic programming and linear programming as needed for solving Eq. 5 [23].

2.4 Features

To differentiate similar events (e.g. division and split) and resolve ambiguity in model inference, we need rich features to characterize different events. In addition to basic features such as size/position [21] and intensity histogram [16], we also designed new features such as “shape compactness” for oversegmentation and “angle pattern” for division.
Shape compactness relates the summed areas of two object candidates to the area of their union's convex hull. Angle pattern describes the constellation of two daughter cells relative to their mother. Features can be defined on a pair of object candidates or on an individual object candidate only. Our features are categorized in Table 1. Note that the same feature can be used for different events.

Table 1: Categorization of features.

Feature    Description
Position   difference in position, distance to border, overlap with border
Intensity  difference in intensity histogram/sum/mean/deviation, intensity of father cell
Shape      difference in shape, difference in size, shape compactness, shape evenness
Others     division angle pattern, mass evenness, eccentricity of father cell

3 Results

We evaluated the proposed method on two publicly available image sequences provided in conjunction with the DCellIQ project3 [16] and the Mitocheck project4 [12]. The two datasets show a certain degree of variation in illumination, cell density and image compression artifacts (Fig. 2).

2 http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/
3 http://www.cbi-tmhs.org/Dcelliq/files/051606 HeLaMCF10A DMSO 1.rar
4 http://www.mitocheck.org/cgi-bin/mtc?action=show movie;query=243867

The GFP stained cell nuclei were segmented using the method in [19], yielding an F-measure over 99.3% by counting. Full ground truth associations for training and evaluation were generated with a Matlab GUI tool at a rate of approximately 20 frames/hour. Some statistics about these two datasets are shown in Table 2.

Table 2: Some statistics about the datasets in our evaluation.

Name       Image Size    No. of Frames   No. of Cells   Segm. F-Measure   Compressed
DCellIQ    512 × 672     100             10664          99.5%             No
Mitocheck  1024 × 1344   94              24096          99.3%             Yes

Figure 2: Selected raw images from the DCellIQ sequence (top) and the Mitocheck sequence (bottom), at T=25, T=50 and T=75.
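The paper does not spell out the exact formula, but a plausible reading of "shape compactness" — the summed areas of the two candidates relative to the area of their joint convex hull — can be sketched in pure Python as follows. Objects are represented as sets of pixel coordinates, and the hull is computed with Andrew's monotone chain; all names are ours.

```python
def _cross(o, a, b):
    # z-component of (a - o) x (b - o); sign gives the turn direction
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(verts):
    # shoelace formula
    s = 0.0
    for i in range(len(verts)):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % len(verts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def shape_compactness(pixels_a, pixels_b):
    # summed object areas (pixel counts) relative to the joint convex hull area
    hull = convex_hull(list(pixels_a) + list(pixels_b))
    return (len(pixels_a) + len(pixels_b)) / polygon_area(hull)
```

Intuitively, an oversegmented pair (two adjacent fragments) fills its joint hull almost completely, while two genuinely separated daughter cells leave most of the hull empty, so the ratio helps discriminate split from division.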
The Mitocheck sequence exhibits higher cell density, larger intensity variability and "blockiness" artifacts due to image compression.

Task 1: Efficient Tracking for a Given Sequence

We first evaluate our method on a task that is frequently encountered in practice: the user simply wishes to obtain a good tracking for a given sequence with the smallest possible effort. For a fair comparison, we extended Padfield's method [21] to account for the six events described in section 2.1 and used the same features (viz., size and position) and weights as in [21]. Hand-tuning of the parameters results in a high accuracy of 98.4% (i.e. 1 − total loss) as shown in Table 3 (2nd row). A detailed analysis of the error counts for specific events shows that the method accounts well for moves, but has difficulty with disappearance and split events. This is mainly due to the limited descriptive power of the simple features used. To study the difference between manual tweaking and learning of the parameters, we used the learning framework presented here to optimize the model and obtained a reduction of the total loss from 1.64% to 0.65% (3rd row). This can be considered the limit of this model. Note that the learned parametrization actually deteriorates the detection of divisions because the learning aims at minimizing the overall loss across all events. In obtaining these results, one third of the entire sequence was used for training, just as in all subsequent comparisons. With 37 features included and their weights optimized using structured learning, our model fully profits from this richer description and achieves a total loss of only 0.30% (4th row), a significant improvement over [21, 16] (2nd/7th row) and manual tweaking (6th row). Though a certain amount of effort is needed for creating the training set, our method allows experimentalists to contribute their expertise in an intuitive fashion. Some example associations are shown in Fig. 3.
The learned parameters are summarized in Fig. 4 (top). They afford the following observations: Firstly, features on cell size and shape are generally of high importance, which is in line with the assumption in [21]. Secondly, the correlations of the features with the final association score are automatically learned. For example, shape compactness is positively correlated with split but negatively with division. This is in line with the intuition that an oversegmentation conserves compact shape, while a true division seemingly pushes the daughters far away from each other (in the present kind of data, where only DNA is labeled). Finally, in spite of the regularization, many features are associated with large parameter values, which is key to the improved expressive power.

Table 3: Performance comparison on the DCellIQ dataset. The header row shows the number of events occurring for moves, divisions, appearance, disappearance, splits and mergers. The remaining entries give the error counts for each event, summed over the entire sequence.

                                   mov    div   app   dis   spl   mer   total loss
(events)                         10156    104    78    76    54    55
Padfield et al. [21]                71     18    16    26    30    12   1.64%
Padfield et al. w/ learning         21     25     5     5     6    10   0.65%
Ours w/ learning (L2 regula.)       15      6     4     1     2     6   0.30%
Ours w/ learning (L1 regula.)       22      6     9     3     4     9   0.45%
Ours w/ manual tweaking             56     24    16    19     2     5   1.12%
Li et al. [16]                                                          6.18% (a)
Local learning by Random Forest     18     14     2     0    12    13   0.55%

(a) Here we use the best reported error matching rate in [16] (similar to our loss).

Figure 3: Some diverging associations by [21] (top) and our method (bottom). Color code: yellow – move; red – division; green – split; cyan – merger.

Task 2: Tracking for High-Throughput Experiments

The experiment described in the foregoing draws both training and test samples from the same time lapse experiment. However, in high-throughput experiments such as in the Mitocheck project [12], it is more desirable to train on one or a few sequences, and make predictions on many others.
To emulate this situation, we have used the parameters w trained in the foregoing on the DCellIQ sequence [16] to estimate the tracking of the Mitocheck dataset. The main focus of the Mitocheck project is on accurate detection of mitosis (cell division). Despite the difference in illumination and cell density from the training data, and despite the segmentation artifacts caused by the compression of the image sequence, our method shows a high generalization capability and obtains a total loss of 0.78%. In particular, we extract 93.2% of 384 mitosis events, which is a significant improvement over the mitosis detection rate reported in [12] (81.5%, 294 events).

Comparison to Local Affinity Learning

We also developed a local affinity learning approach in the spirit of [1, 15]. Rather than using AdaBoost [9], we chose Random Forest (RF) [4] which provides fairly comparable classification power [7]. We sample positive associations from the ground truth and randomly generate false associations. RF classifiers are built for each event independently. The predicted probabilities by the RF classifiers are used to compute the overall association score as in Eq. 6 (with the same constraints as in Eq. 2). Since we have multiple competing events (one cell can only have a single fate), we also introduce weights {α_e} to capture the dependencies between events. These weights are optimized via a grid search on the training data.

$$L(x, z; w) = \sum_{e \in E} \sum_{c \in P(C)} \sum_{c' \in P(C')} \alpha_e\, \mathrm{Prob}(f^e_{c,c'})\, z^e_{c,c'} \quad (6)$$

Figure 4: Parameters w learned from the training data with L2 (top) or L1 (bottom) regularization. Parameters weighing the features for different events are colored differently. Both parameter vectors are normalized to unit 1-norm, i.e. ∥w∥1 = 1.

Table 4: Performance comparison on the Mitocheck dataset. The method was trained on the DCellIQ dataset. The header row shows the number of events occurring for moves, divisions, appearance, disappearance, splits and mergers. The remaining entries give the error counts for each event, summed over the entire sequence.

                                   mov    div   app   dis   spl   mer   total loss
(events)                         22520    384   310   304   127   132
Padfield et al. w/ learning        171     85    58    47    53    13   1.39%
Ours w/ learning (L2 regula.)       98     26    31    25    43     9   0.78%
Ours w/ learning (L1 regula.)       93     35    54    25    26    48   0.98%
Local learning by Random Forest    214    281   162    10    82    68   2.33%

The results are shown in Table 3 (8th row) and Table 4 (5th row), which afford the following observations. Firstly, a locally strong affinity prediction does not necessarily guarantee a better global association. Secondly, local learning shows particularly weak generalization capability.

Sensitivity to Training Set

The success of supervised learning depends on the representativeness (and hence also size) of the training set. To test the sensitivity of the results to the training data used, we drew different numbers of training image pairs randomly from the entire sequence and used the remaining pairs for testing. For each training set size, this experiment is repeated 10 times. The mean and deviation of the losses on the respective test sets is shown in Fig. 5.
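A minimal sketch of the grid search over the event weights {α_e} entering Eq. 6, assuming the per-event RF probabilities are already available and, for illustration, deciding each cell's fate by the highest weighted score rather than by the global ILP (all names and values are hypothetical):

```python
import itertools

def best_event(probs, alpha):
    # pick the event maximizing alpha_e * Prob_e, cf. the scores entering Eq. 6
    return max(probs, key=lambda e: alpha[e] * probs[e])

def grid_search_alpha(train, events, grid):
    # exhaustively try weight combinations, keep the most accurate one
    best, best_acc = None, -1.0
    for combo in itertools.product(grid, repeat=len(events)):
        alpha = dict(zip(events, combo))
        acc = sum(best_event(p, alpha) == y for p, y in train) / len(train)
        if acc > best_acc:
            best, best_acc = alpha, acc
    return best, best_acc
```

For example, if one event's classifier systematically outputs inflated probabilities, the grid search compensates by assigning it a smaller weight.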
According to the one-standard-error rule, associations between at least 15 or 20 image pairs are desirable, which can be accomplished in well below an hour of annotation work.

L1 vs. L2 Regularization

The results of L1 vs. L2 regularization are comparable (see Table 3 and Table 4). While L1 regularization yields sparser feature selection (Fig. 4, bottom), it has a much slower convergence rate (Fig. 6). The staircase structure shows that, due to sparse feature selection, the bundle method has to find more constraints to escape from a local minimum.

Figure 5: Learning curve of structured learning (with L2 regularization): average test loss (× 10−2) as a function of the number of frame pairs used for training.

Figure 6: Convergence rates of structured learning (L1 vs. L2 regularization): approximation gap ε (× 10−3) as a function of the number of constraints.

4 Conclusion & Future Work

We present a new cell tracking scheme that uses more expressive features and comes with a structured learning framework to train the larger number of parameters involved. Comparison to related methods shows that this learning scheme brings significant improvements in performance and, in our opinion, usability. We are currently working on further improvement of the tracking by considering more than two frames at a time, and on an active learning scheme that should reduce the amount of required training inputs.

Acknowledgement

We are very grateful for partial financial support by CellNetworks Cluster (EXC81), FORSYS-ViroQuant (0313923), SBCancer, DFG (GRK 1653) and the "Enable fund" of University of Heidelberg. We also thank Bjoern Andres, Jing Yuan and Christoph Straehle for their comments on the manuscript.

References

[1] S. Avidan. Ensemble Tracking. In CVPR, 2005.
[2] G. Bakir, T. Hofmann, B. Schoelkopf, A. J. Smola, B. Taskar, and S. Vishwanathan.
Predicting Structured Data. MIT Press, Cambridge, MA, 2006.
[3] L. Bertelli, T. Yu, D. Vu, and B. Gokturk. Kernelized Structural SVM Learning for Supervised Object Segmentation. In CVPR, 2011.
[4] L. Breiman. Random Forests. Mach Learn, 45(1):5–32, 2001.
[5] M. D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier, and L. V. Gool. Robust Tracking-by-Detection using a Detector Confidence Particle Filter. In ICCV, 2009.
[6] T. S. Caetano, J. J. McAuley, L. Cheng, Q. V. Le, and A. J. Smola. Learning Graph Matching. IEEE T Pattern Anal, 31(6):1048–1058, 2009.
[7] R. Caruana and A. Niculescu-Mizil. An Empirical Comparison of Supervised Learning Algorithms. In ICML, pages 161–168, 2006.
[8] O. Dzyubachyk, W. A. van Cappellen, J. Essers, et al. Advanced Level-Set-Based Cell Tracking in Time-Lapse Fluorescence Microscopy. IEEE T Med Imag, 29(3):852, 2010.
[9] Y. Freund. An adaptive version of the boost by majority algorithm. Mach Learn, 43(3):293–318, 2001.
[10] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An Efficient Boosting Algorithm for Combining Preferences. J Mach Learn Res, 4:933–969, 2003.
[11] H. Grabner and H. Bischof. On-line Boosting and Vision. In CVPR, 2006.
[12] M. Held, M. H. A. Schmitz, et al. CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging. Nature Methods, 7(9):747–754, 2010.
[13] T. Kanade, Z. Yin, R. Bise, S. Huh, S. E. Eom, M. Sandbothe, and M. Chen. Cell Image Analysis: Algorithms, System and Applications. In WACV, 2011.
[14] N. Karampatziakis. Static Analysis of Binary Executables Using Structural SVMs. In NIPS, 2010.
[15] C.-H. Kuo, C. Huang, and R. Nevatia. Multi-Target Tracking by On-Line Learned Discriminative Appearance Models. In CVPR, 2010.
[16] F. Li, X. Zhou, J. Ma, and S. Wong. Multiple Nuclei Tracking Using Integer Programming for Quantitative Cancer Cell Cycle Analysis. IEEE T Med Imag, 29(1):96, 2010.
[17] K. Li, E. D. Miller, M. Chen, et al. Cell population tracking and lineage construction with spatiotemporal context. Med Image Anal, 12(5):546–566, 2008.
[18] Y. Li, C. Huang, and R. Nevatia. Learning to Associate: HybridBoosted Multi-Target Tracker for Crowded Scene. In CVPR, 2009.
[19] X. Lou, F. O. Kaster, M. S. Lindner, et al. DELTR: Digital Embryo Lineage Tree Reconstructor. In ISBI, 2011.
[20] E. Meijering, O. Dzyubachyk, I. Smal, and W. A. van Cappellen. Tracking in cell and developmental biology. Semin Cell Dev Biol, 20(8):894–902, 2009.
[21] D. Padfield, J. Rittscher, and B. Roysam. Coupled Minimum-Cost Flow Cell Tracking for High-Throughput Quantitative Analysis. Med Image Anal, 2010.
[22] B. Taskar, S. Lacoste-Julien, and M. I. Jordan. Structured Prediction, Dual Extragradient and Bregman Projections. J Mach Learn Res, 7:1627–1653, 2006.
[23] C. H. Teo, S. V. N. Vishwanthan, A. J. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. J Mach Learn Res, 11:311–365, 2010.
[24] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large Margin Methods for Structured and Interdependent Output Variables. J Mach Learn Res, 6(2):1453, 2006.
[25] X. Wang, G. Hua, and T. X. Han. Discriminative Tracking by Metric Learning. In ECCV, 2010.
[26] B. Yang, C. Huang, and R. Nevatia. Learning Affinities and Dependencies for Multi-Target Tracking using a CRF Model. In CVPR, 2011.
[27] B. Zhong, H. Yao, S. Chen, et al. Visual Tracking via Weakly Supervised Learning from Multiple Imperfect Oracles. In CVPR, 2010.
Hierarchical Topic Modeling for Analysis of Time-Evolving Personal Choices

XianXing Zhang, Duke University, xianxing.zhang@duke.edu
David B. Dunson, Duke University, dunson@stat.duke.edu
Lawrence Carin, Duke University, lcarin@ee.duke.edu

Abstract

The nested Chinese restaurant process is extended to design a nonparametric topic-model tree for representation of human choices. Each tree path corresponds to a type of person, and each node (topic) has a corresponding probability vector over items that may be selected. The observed data are assumed to have associated temporal covariates (corresponding to the time at which choices are made), and we wish to impose that with increasing time it is more probable that topics deeper in the tree are utilized. This structure is imposed by developing a new "change point" stick-breaking model that is coupled with a Poisson and product-of-gammas construction. To share topics across the tree nodes, topic distributions are drawn from a Dirichlet process. As a demonstration of this concept, we analyze real data on course selections of undergraduate students at Duke University, with the goal of uncovering and concisely representing structure in the curriculum and in the characteristics of the student body.

1 Introduction

As time progresses, the choices humans make often change. For example, the types of products one purchases, as well as the types of people one befriends, often change or evolve with time. However, the choices one makes later in life are typically statistically related to choices made earlier. Such behavior is of interest when considering marketing to particular groups of people, at different stages of their lives, and it is also relevant for analysis of time-evolving social networks.
In this paper we seek to develop a hierarchical tree structure for representation of this phenomenon, with each tree path characteristic of a type of person, and a tree node defined by a distribution over choices (characterizing a type of person at a "stage of life"). As time proceeds, each person moves along layers of a tree branch, making choices based on the node at a given layer, thereby yielding a hierarchical representation of behavior with time. Note that as one moves deeper in the tree, the number of nodes at a given tree layer increases as a result of sequential branching; this appears to be well matched to the modeling of choices made by individuals, who often become more distinctive and specialized with increasing time. The number of paths (types of people), nodes (stages of development) and the statistics of the time dependence are to be inferred nonparametrically based on observed data, which are typically characterized by a very sparse binary matrix (most individuals only select a tiny fraction of the choices that are available to them). To demonstrate this concept using real data, we consider selections of classes made by undergraduate students at Duke University, with the goal of uncovering structure in the students and classes, as inferred by time-evolving student choices. For each student, the data presented to the model are a set of indices of selected classes (but not class names or subject matter), as well as the academic year in which each class was selected (e.g., sophomore year). While the student majors and class names are not used by the model, they are known, and this information provides "truth" with which model-inferred structure may be assessed. This study therefore also provides the opportunity to examine the quality of the inferred hierarchical-tree structure in models of the type considered in [1, 4, 5, 8, 7, 12, 13, 6, 21, 15, 22] (such structure is difficult to validate with documents, for which there is no "truth").
We seek to impose that as time progresses it is more probable that an individual's choices are based on nodes deeper in the tree, so that as one moves from the tree root to the leaves, we observe the evolution of choices as people age. Such temporal tree-structure could be meaningful by itself; e.g., in our particular case it will allow university administrators and faculty to examine whether objectives in curriculum design are manifested in the actual usage/choices of students. Further, the results of such an analysis may be of interest to applicants at a given school (e.g., high school students), as the inferred structure concisely describes both the student body and the curriculum. Also, the uncovered structure may be used to aid downstream applications [17, 2, 11]. The basic form of the nonparametric tree developed here is based on the nested Chinese restaurant process (nCRP) topic model [4, 5]. However, to achieve the goals of the unique problem considered, we make the following new modeling contributions. We develop a new "change-point" stick-breaking process (cpSBP), which is a stick-breaking process that induces probabilities that stochastically increase to an unknown change point and then decrease. This construction is conceptually related to the "umbrella" placed on dose response curves [9]. In the proposed model each individual has a unique cpSBP that evolves with time, such that choices at later times are encouraged to be associated with deeper nodes in the tree. Time is a covariate, and within the change-point model a new product-of-gammas construction is developed and coupled to the Poisson distribution. Another novel aspect of the proposed model concerns drawing the node-dependent topics from a Dirichlet process, sharing topics across the tree.
This is motivated by the idea that different types of people (paths) may be characterized by similar choices at different nodes in the respective paths (e.g., person Type A may make certain types of choices early in life, while person Type B may make similar choices later in life). Such sharing of topics allows the inference of relationships between choices different people make over time. 2 Model Formulation 2.1 Nested Chinese Restaurant Process The nested Chinese restaurant process (nCRP) [4, 5] is a generative probabilistic model that defines a prior distribution over a tree-structured hierarchy with infinitely many paths. In an nCRP model of personal choice each individual picks a tree path by walking from the root node down the tree, from node to node. Specifically, when situated at a particular parent node, the child node ci individual i chooses is modeled as a random variable that can be either an existing node or a new node: (i) the probability that ci is an existing child node k is proportional to the number of persons who already chose node k from the same parent, (ii) and a new node can be created and chosen with probability proportional to γ0 > 0, which is the nCRP concentration parameter. This process is defined recursively such that each individual is allocated to one specific path of the tree hierarchy, through a sequence of probabilistic parent-child steps. The tree hierarchy implied by the nCRP provides a natural framework to capture the structure of personal choices, where each node is characterized by a distribution on the items that may be selected (e.g., each node is a “choice topic"). Similar constructions have been considered for document analysis [5, 21, 4], in which the model captures the structure of word usage in documents. 
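A minimal sketch of the nCRP path draw described above: at each node, an individual joins an existing child with probability proportional to its occupancy count, or opens a new child with probability proportional to γ0. Function and variable names are ours, not the paper's.

```python
import random

def choose_child(child_counts, gamma0, rng):
    # join existing child k with prob. n_k / (sum + gamma0),
    # or open a new child with prob. gamma0 / (sum + gamma0)
    total = sum(child_counts) + gamma0
    r = rng.random() * total
    for k, n_k in enumerate(child_counts):
        r -= n_k
        if r < 0:
            return k
    return len(child_counts)  # new child

def sample_path(tree, depth, gamma0, rng):
    # tree: dict mapping a path prefix (tuple of child indices)
    # to the occupancy counts of that node's children
    path = []
    for _ in range(depth):
        counts = tree.setdefault(tuple(path), [])
        k = choose_child(counts, gamma0, rng)
        if k == len(counts):
            counts.append(0)
        counts[k] += 1
        path.append(k)
    return path
```

The recursive rich-get-richer behavior is visible if many individuals are pushed through the same tree: popular branches accumulate occupancy and attract further individuals, while γ0 controls how often new branches open.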
However, there are unique aspects of the time-evolving personal-choice problem, particularly the goal motivated above that one should select topics deeper in the tree as time evolves, to capture the specialized characteristics of people as they age. Hierarchical topic models proposed previously [4, 7] have employed a stick-breaking process (SBP) to guide selection of the tree depth at which a node/topic is selected, with an unbounded number of path layers, but these models do not provide a means of imposing the above temporal dynamics (which were not relevant for the document problems considered there).

2.2 Change Point Stick Breaking Process

In the proposed model, instead of constraining the SBP construction to start at the root node, we model the starting-point depth of the SBP as a random variable and infer it from data, while still maintaining a valid distribution over each layer of any path. To do this we replace the single SBP over path layers with two statistically dependent SBPs: one begins from layer p+1 and moves down the tree away from the root, and the other begins from layer p and moves upward to the root; the latter SBP is truncated when it hits the root, while the former is in principle of infinite length. The tree depth p which relates these two SBPs is modeled as a random variable, drawn from a Poisson distribution, and is denoted the change point.

Figure 1: Illustrative comparison of the stick lengths between the change point stick breaking process (cpSBP) and the stick breaking process (SBP) for different values of α; typical draws from cpSBP and SBP are depicted. aω and bω are both set to 1, the change point is set to p = 30 and the truncation of both stick breaking constructions is set to 60.
In this way we encourage the principal stick weight to be placed heavily around the change point p, instead of restricting it to the top layers as in [4, 7]. To model the time dependence, and encourage use of greater tree depths with increasing time, we seek a formulation that imposes that the Poisson parameter grows (statistically) with increasing time. The temporal information is represented as a covariate t(i, n), denoting the time at which the nth selection/choice is made by individual i; in many applications t(i, n) ∈ {1, . . . , T}, and for the student class-selection problem T = 4, corresponding to the freshman through senior years; below we drop the indices (i, n) on the time, for notational simplicity. When individual i makes selections at time t, she employs a corresponding change point p_{i,t}. To integrate the temporal covariate into the model, we develop a product-of-gammas and Poisson conjugate pair to model p_{i,t}, which encourages p_{i,t} associated with larger t to locate deeper in the tree. Specifically, consider

$$\gamma_{i,l} \sim \mathrm{Ga}(\gamma_{i,l} \mid a_{i,l}, b_{i,l}), \qquad \lambda_{i,t} = \prod_{l=1}^{t} \gamma_{i,l}, \qquad p_{i,t} \sim \mathrm{Poi}(p_{i,t} \mid \lambda_{i,t}) \quad (1)$$

The product-of-gammas construction in (1) is inspired by the multiplicative-gamma process (MGP) developed in [3] for sparse factor analysis. Although each draw of γ_{i,l} from a gamma distribution is not guaranteed to be greater than one, and thus λ_{i,t} will not increase with probability one, in practice we find γ_{i,l} is often inferred to be greater than one when (a_{i,l} − 1)/b_{i,l} > 1. However, an MGP based on a left-truncated gamma distribution can be readily derived. Given the change point p_{i,t} = p, the cpSBP constructs the stick-weight vector θ^p_{i,t} over layers of path b_i by dividing it into two parts, θ̂^p_{i,t} and θ̃^p_{i,t}, modeling them separately as two SBPs, where θ̂^p_{i,t} = {θ̂^p_{i,t}(p), θ̂^p_{i,t}(p−1), . . . , θ̂^p_{i,t}(1)} and θ̃^p_{i,t} = {θ̃^p_{i,t}(p+1), θ̃^p_{i,t}(p+2), . . . , θ̃^p_{i,t}(∞)}.
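Eq. 1 can be simulated directly; the sketch below draws the per-layer gamma variables, forms λ_{i,t} as their running product, and draws a Poisson change point for each time step. The hyperparameter values are illustrative only; with (a − 1)/b > 1 the rates tend to grow with t, pushing later change points deeper.

```python
import numpy as np

def sample_change_points(a, b, rng):
    # gamma_l ~ Ga(a_l, b_l) (shape/rate); lambda_t = prod_{l<=t} gamma_l; p_t ~ Poi(lambda_t)
    gammas = rng.gamma(shape=a, scale=1.0 / b)  # NumPy parametrizes by scale = 1/rate
    lam = np.cumprod(gammas)                    # Eq. 1: product of gammas
    return lam, rng.poisson(lam)

rng = np.random.default_rng(0)
# T = 4 academic years; a = 3, b = 1 gives (a-1)/b = 2 > 1
lam, p = sample_change_points(np.full(4, 3.0), np.full(4, 1.0), rng)
```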
For notational simplicity, we denote V_h = V_{i,t}(h) when constructing θ^p_{i,t}, yielding

$$\hat{\theta}^p_{i,t}(u) = V_u \prod_{h=u+1}^{p} (1 - V_h), \qquad \tilde{\theta}^p_{i,t}(d) = V_d \prod_{h=p+1}^{d-1} (1 - V_h), \qquad V_h \sim \mathrm{Beta}(V_h \mid 1, \alpha) \quad (2)$$

Note that the above SBP contains two constructions: when d > p the stick weight θ̃^p_{i,t}(d) is constructed as a classic SBP, but with the stick-breaking construction starting at layer p + 1. On the other hand, when u ≤ p the stick weight θ̂^p_{i,t}(u) is constructed "backwards" from p to the root node, which is a truncated stick breaking process with truncation level set to p. A thorough discussion of the truncated stick breaking process is found in [10]. We further use a beta distributed latent variable ω_{b_i} to combine the two stick breaking processes while ensuring the elements of θ^p_{i,t} = {θ̂^p_{i,t}, θ̃^p_{i,t}} sum to one. Thus we have the following distribution over layers of a given path, from which the layer allocation variables {l_{i,n} : t(i, n) = t} for a selection at time t by individual i are sampled:

$$l_{i,n} \sim \omega_{i,t} \sum_{l=1}^{p} \hat{\theta}_{i,t}(l)\, \delta_l + (1 - \omega_{i,t}) \sum_{l=p+1}^{\infty} \tilde{\theta}_{i,t}(l)\, \delta_l, \qquad \omega_{i,t} \sim \mathrm{Beta}(\omega_{i,t} \mid a_\omega, b_\omega) \quad (3)$$

Note that the change point stick breaking process (cpSBP) can be treated as a generalization of the stick breaking process for the Dirichlet process, since when p_{i,t} = 0 the cpSBP reduces to the SBP. From the simulation study in Figure 1, we observe that the change point, which is modeled through the temporal covariate t as in (1), corresponds to the layer with large stick weight and thus at which topic draws are most probable. Also note that one may alternatively suggest simply using p_{i,t} directly as the layer from which a topic is drawn, without the subsequent use of a cpSBP. We examined this in the course of the experiments, and it did not work well, likely as a result of the inflexibility of the single-parameter Poisson (with its equal mean and variance). The cpSBP provided the additional necessary modeling flexibility.
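The two-sided construction of Eqs. 2–3 can be sketched as follows. We truncate the forward (infinite) stick at a maximum depth P, and set the fraction at layer 1 to one so that the backward stick is properly truncated at the root (a simplification consistent with the truncated SBP of [10]; p ≥ 1 is assumed, and all names are ours).

```python
import numpy as np

def cpsbp_weights(p, P, alpha, omega, rng):
    # V_h ~ Beta(1, alpha) for layers 1..P (stored at index h-1), Eq. 2
    V = rng.beta(1.0, alpha, size=P)
    V[0] = 1.0                       # truncate the backward stick at the root
    theta = np.zeros(P)
    for u in range(1, p + 1):        # backward stick: layers p, p-1, ..., 1
        theta[u - 1] = omega * V[u - 1] * np.prod(1.0 - V[u:p])
    for d in range(p + 1, P + 1):    # forward stick: layers p+1, p+2, ..., P
        theta[d - 1] = (1 - omega) * V[d - 1] * np.prod(1.0 - V[p:d - 1])
    return theta
```

Layer allocations l_{i,n} would then be drawn from the resulting (truncated) distribution over layers, as in Eq. 3; by construction the backward part carries total mass ω, concentrating weight around the change point.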
2.3 Sharing topics among different nodes One problem with the nCRP-based topic model, implied by the tree structure, is that all descendent sub-topics from parent node pa1 are distinct from the descendants of parent pa2, if pa1 ̸= pa2. Some of these distinct sets of children from different parents may be redundant, and this redundancy can be removed if a child can have more than one parent [7, 13, 6]. In addition to the above problem, in our application there are other potential problems brought by the cpSBP. Since we encourage the later choice selections to be drawn from topics deeper in the tree, redundant topics at multiple layers may be manifested if two types of people tend to make similar choices at different time points (e.g., at different stages of life). Thus it is likely that similar (redundant) topics may be learned on different layers of the tree, and the inability of the original nCRP construction to share these topics misses another opportunity to share statistical strength. In [7, 13, 6] the authors addressed related challenges by replacing the tree structure with a directed acyclic graph (DAG), demonstrating success for document modeling. However, those solutions don’t have the flexibility of sharing topics on nodes among different layers. Here we propose a new and simpler approach, so that the nCRP-based tree hierarchy is retained, while different nodes in the whole tree may share the same topic, resolving the two problems discussed above. To achieve this we draw a set of “global” topics {ˆφk}, and a stick-breaking process is employed to allocate one of these global topics as φj, representing the jth node in the tree (this corresponds to drawing the {φj} from a Dirichlet process [16], with a Dirichlet distribution base). 
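A sketch of the sharing mechanism just described, under a finite truncation: draw global topics from a Dirichlet base, draw stick-breaking weights over them, and let every tree node pick its topic from this single shared pool, so that distinct nodes (possibly on different layers) can reuse the same topic. The truncation level and hyperparameter values are illustrative.

```python
import numpy as np

def draw_node_topics(n_nodes, vocab, beta, eta, rng, trunc=50):
    # shared global topics phi_k ~ Dir(beta), cf. Eq. 4
    global_topics = rng.dirichlet(np.full(vocab, beta), size=trunc)
    # stick-breaking weights pi_k over the global topics, truncated at `trunc`
    d = rng.beta(1.0, eta, size=trunc)
    pi = d * np.cumprod(np.concatenate(([1.0], 1.0 - d[:-1])))
    pi /= pi.sum()                   # absorb the truncation remainder
    # each node's topic is a draw from the shared pool
    assign = rng.choice(trunc, size=n_nodes, p=pi)
    return global_topics[assign], assign
```

Because `assign` typically maps several nodes to the same global topic, two paths can exhibit similar choice distributions at different depths, which is exactly the redundancy the Dirichlet process draw is meant to exploit.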
The SBP defined over the global topics is represented as follows:

$$\pi_k = \delta_k \prod_{i=1}^{k-1} (1 - \delta_i), \qquad \delta_i \sim \mathrm{Beta}(\delta_i \mid 1, \eta), \qquad \hat{\phi}_k \sim \mathrm{Dir}(\hat{\phi}_k \mid \beta), \qquad \phi_j \sim \sum_{k=1}^{\infty} \pi_k\, \delta_{\hat{\phi}_k} \quad (4)$$

Within the generative process, let z_{i,n} denote the assignment of the nth choice of individual i to global topic φ̂_{z_{i,n}}; then the corresponding item chosen is drawn from Mult(1, φ̂_{z_{i,n}}).

3 Model Inference

In the proposed model, we sample the per-individual tree path indicator b_i, the layer allocation of choice topics in those paths l_{i,n}, the change point p_{i,t} for each time interval, the parameters associated with the cpSBP construction γ_{i,t}, ω_{i,t}, θ^p_{i,t}, the stick breaking weight π over the global topics φ̂_k, and the global topic-assignment indicator z_{i,n}. Similar to [4], the per-node topic parameters φ_n are marginalized out. We provide update equations cycling through {l_{i,n}, p_{i,t}, γ_{i,t}, ω_{i,t}, θ^p_{i,t}} that are unique to this model. The update equations for b_i and {π, z_{i,n}} are similar to the ones in [4] and [18], respectively, which we do not reproduce here for brevity.

Sampling for change point p_{i,t}

Due to the non-conjugacy between the Poisson and multinomial distributions, the exact form of its posterior distribution is difficult to compute. Additionally, in order to sample p_{i,t}, we require imputation of an infinite-dimensional process. The implementation of the sampling algorithm either relies on finite approximations [10], which lead to straightforward update equations, or requires an additional Metropolis-Hastings (M-H) step which allows us to obtain samples from the exact posterior distribution of p_{i,t} with no approximation, e.g., the retrospective sampler [14] proposed for Dirichlet process hierarchical models. In this section we first introduce the finite-approximation-based sampler; the retrospective-sampling-based method is described in the supplemental material.
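The finite-approximation sampler just mentioned can be sketched as follows: for each candidate change point p ≤ P we combine a truncated-Poisson prior term with the multinomial likelihood of the observed layer allocations under the corresponding cpSBP weights, and sample p from the normalized result. This is a simplified, renormalized stand-in for the exact construction; function names are ours.

```python
import numpy as np
from math import lgamma

def theta_given_p(p, P, V, omega):
    # cpSBP stick weights for candidate change point p (cf. Eq. 2), renormalized;
    # the backward stick's root fraction is set to 1 (truncation at the root)
    theta = np.zeros(P)
    for u in range(1, p + 1):                     # backward stick: layers 1..p
        Vu = 1.0 if u == 1 else V[u - 1]
        theta[u - 1] = omega * Vu * np.prod(1.0 - V[u:p])
    for d in range(p + 1, P + 1):                 # forward stick: layers p+1..P
        theta[d - 1] = (1 - omega) * V[d - 1] * np.prod(1.0 - V[p:d - 1])
    return theta / theta.sum()

def sample_change_point(counts, lam, V, omega, rng):
    # counts[l-1]: how many of individual i's time-t choices sit at layer l
    P = len(counts)
    logq = np.empty(P + 1)
    for p in range(P + 1):
        th = theta_given_p(p, P, V, omega)
        log_poi = p * np.log(lam) - lgamma(p + 1)   # truncated Poisson; e^{-lam} cancels
        logq[p] = log_poi + counts @ np.log(th)     # multinomial log-likelihood
    q = np.exp(logq - logq.max())
    return int(rng.choice(P + 1, p=q / q.sum()))
```

When all of an individual's choices sit at one layer, the posterior concentrates on change points near that layer, tempered by the Poisson prior.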
Denote by P the truncated maximum value of the change point; then, given the samples of all other latent variables, p_{i,t} can be sampled from the following equation:

$$q(p_{i,t} = p \mid \theta^p_{i,t}, \lambda_{i,t}, \omega_{i,t}, l_{i,t}) \propto p(p_{i,t} = p \mid \lambda_{i,t}, P)\, p(l_{i,t} \mid \theta^p_{i,t}, \omega_{i,t}), \qquad 0 \le p \le P \quad (5)$$

where l_{i,t} = {l_{i,n} : t(i, n) = t} are all layer allocations of choices made by individual i at time t. Here $p(p_{i,t} = p \mid \lambda_{i,t}, P) = \frac{\lambda_{i,t}^p e^{-\lambda_{i,t}}}{p!\, C_P}$ is the Poisson density function truncated at p ≤ P, with $C_P = \sum_{p=1}^{P} \frac{\lambda_{i,t}^p e^{-\lambda_{i,t}}}{p!}$, and $p(l_{i,t} \mid \theta^p_{i,t}, \omega_{i,t}) = \mathrm{Mult}(l_{i,t} \mid \{\omega_{i,t}\hat{\theta}^p_{i,t},\, (1 - \omega_{i,t})\tilde{\theta}^p_{i,t}\})$ is the multinomial density function over the layer allocations l_{i,t}.

Sampling choice layer allocation l_{i,n}

Given all the other variables, we now sample the layer allocation l_{i,n} for the nth choice made by individual i. Denote by c_{i,n} the nth choice made by individual i, and by $M_{z_{i,n}, c_{i,n}} = \#[z_{-(i,n)} = z_{i,n},\, c_{-(i,n)} = c_{i,n}] + \beta$ the smoothed count of seeing choice c_{i,n} allocated to global topic z_{i,n}, excluding the current choice. Parameter l_{i,n} can be sampled from the following equation:

$$p(l_{i,n} = l \mid p_{i,t} = p, z, \omega_{i,t}, \theta^p_{i,t}, c) \propto \begin{cases} \omega_{i,t}\, \hat{\theta}^p_{i,t}(l)\, M_{z_{i,n}, c_{i,n}}, & 0 < l \le p \\ (1 - \omega_{i,t})\, \tilde{\theta}^p_{i,t}(l)\, M_{z_{i,n}, c_{i,n}}, & p < l \le P \end{cases}$$

Sampling for product-of-gammas construction γ_{i,t}

From (1), note that the temporally dependent intensity parameter λ_{i,t} can be reconstructed from the gamma distributed variables γ_{i,t}, which in turn can be sampled directly from their posterior distribution given all other variables, due to the conjugacy of the product-of-gammas variable and the Poisson construction. Denoting $\tau^{(t)}_{i,l} = \prod_{j=1,\, j \ne t}^{l} \gamma_{i,j}$, we have:

$$p(\gamma_{i,t} \mid \{p_{i,t}\}_{t=1}^{T}, a_{i,t}, b_{i,t}) = \mathrm{Ga}\!\left(\gamma_{i,t} \,\Big|\, a_{i,t} + \sum_{l=t}^{T} p_{i,l},\; b_{i,t} + \sum_{l=t}^{T} \tau^{(t)}_{i,l}\right)$$

Sampling for cpSBP parameters {ω_{i,t}, θ^p_{i,t}}

Given the change points p_{i,t} and choice layer allocations l_{i,t} = {l_{i,n} : t(i, n) = t}, the cpSBP parameters θ^p_{i,t} = {θ̂^p_{i,t}, θ̃^p_{i,t}} can be reconstructed based on samples of V_h as defined in (2).
Specifically, we have

$$p(V_h \mid p_{i,t}=p, l_{i,t}) = \begin{cases} \mathrm{Beta}\!\left(V_h \,\middle|\, a_h + N_{h,t},\; b_h + \sum_{l=1}^{h-1} N_{l,t}\right) & \text{if } h \le p \\[4pt] \mathrm{Beta}\!\left(V_h \,\middle|\, a_h + N_{h,t},\; b_h + \sum_{l=h+1}^{\max l_{i,t}} N_{l,t}\right) & \text{if } h > p \end{cases}$$

where $N_{l,t} = \#[l_{i,n}=l,\, t(i,n)=t]$ records the number of times a choice made by individual $i$ in time interval $t$ is allocated to path layer $l$. Given the samples of the other variables, $\omega_{i,t}$ is sampled from its full conditional posterior distribution:

$$p(\omega_{i,t} \mid p_{i,t}=p, \{l_{i,n} : t(i,n)=t\}) = \mathrm{Beta}\!\left(\omega_{i,t} \,\middle|\, 1 + \sum_{l=1}^{p} N_{l,t},\; 1 + \sum_{l=p+1}^{\max l_{i,t}} N_{l,t}\right)$$

Sampling the hyperparameters. The hyperparameters $\alpha, \eta, \gamma_0, \beta$, related to the stick-breaking process and the hierarchical topic model construction, are sampled within the inference process by placing prior distributions over them, similar to the methods in [4]. One may also consider other alternatives for learning the hyperparameters within topic models [19]. For the hyperparameters $a_{i,l}, b_{i,l}$ in the product-of-gammas construction, we sample them as proposed in [3]. Finally, we fix $a_w = 1$ and sample $b_w$ by placing a gamma prior distribution on it. All these steps are done by an M-H step between iterations of the Gibbs sampler.

4 Analysis of student course selection

4.1 Data description and computations

We demonstrate the proposed model on real data by considering selections of classes made by undergraduate students at Duke University, for students in graduating classes 2009 to 2013; the data consist of class selections of all students from Fall 2005 to Spring 2011. For computational reasons, the cpSBP and SBP employed over the tree-path depth are truncated to a maximum of 10 layers (beyond this depth the number of topics employed by the data was minimal), while the number of children of each parent node is allowed to be unbounded.
We ran the model on the class-selection records of students from the classes of 2009 and 2010 (a total of 3382 students and 2756 unique classes), and collected 200 samples after burn-in, taking every fifth sample, to approximate the posterior distribution over the latent tree structure as well as the topic on each node of the tree. We analyze the quality of the learned models using the remaining data (classes of 2011-2013), characterized by 5171 students and 2972 unique classes. Each topic is a probabilistic vector defined over 3015 classes offered across all years. Within the MCMC inference procedure we trained our model as follows: first we fixed the change point $p_{i,t} = t$ and ran the sampler for 100 iterations, then burned in the inference for 5000 iterations with $p_{i,t}$ updated, before drawing 5000 samples from the full posterior.

4.2 Quantitative assessment

              # Topics   # Nodes   Predictive LL (11)   Predictive LL (11-13)
nCRP          492±11     492±11    -293226.8399         -471736.8876
cpSBP-nCRP    973±37     973±37    -290271.3576         -469912.1120
DP-nCRP       318±26     521±41    -292311.3971         -471951.3452
Full model    367±32     961±44    -288511.4298         -468331.2990

Table 1: Predictive log-likelihood comparison on two datasets, given the mean number of topics and nodes learned, with rounded standard deviations. nCRP is the model proposed in [4]. Compared to nCRP, cpSBP-nCRP replaces SBP with the proposed cpSBP, while in DP-nCRP the topic for each node is drawn from a Dirichlet process (DP) instead of a Dirichlet distribution, retaining the SBP construction of nCRP. The full model uses both cpSBP and DP. Results are shown for the class of 2011, and for the classes of 2011-2013.

Figure 2: Histograms of class layer allocations according to their time covariates.
Left: stick-breaking process; Right: change-point stick-breaking process.

In this section we examine the model's ability to explain unseen data. For comparison consistency we computed the predictive log-likelihood based on the collected samples in the same way as [4] (alternative means of evaluating topic models are discussed in [20]). We test the model using two different compositions of the data: the first is based on the class-selection history of students from the class of 2011 (1696 students), for whom all 4 years of records are available. The second is based on the class-selection history of students from the classes of 2011 to 2013 (3475 students), where for the latter two years only partial course-selection information is available; e.g., for students from the class of 2013 only class selections made in freshman year are available. Additionally, we compare the different models with respect to the learned number of topics and the learned number of tree nodes. This comparison is an indicator of the level of "parsimony" of the proposed model, introduced by replacing independent draws of topics from a Dirichlet distribution with draws from a Dirichlet process (with a Dirichlet-distribution base), as explained in Section 2.3. Since the number of tree nodes grows exponentially with the number of tree layers, from a practical viewpoint sharing topics among the nodes saves the memory used to store the topic vectors, whose dimension is typically large (here the number of classes: 3015). In addition, as the experimental results indicate, sharing topics among different nodes can enhance the sharing of statistical strength, which leads to better predictive performance. The results are summarized in Table 1.
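As a generic illustration (not the exact estimator of [4]), a held-out predictive log-likelihood can be computed by averaging per-choice predictive probabilities over the collected posterior samples and summing count-weighted log-probabilities. The function and its arguments are illustrative assumptions.

```python
import numpy as np

def predictive_log_likelihood(heldout_counts, sample_probs):
    """heldout_counts: (individuals, items) array of held-out choice counts.
    sample_probs: (samples, individuals, items) per-sample predictive
    probabilities. Averages over samples, then sums count-weighted logs."""
    mean_probs = sample_probs.mean(axis=0)
    return float((heldout_counts * np.log(np.clip(mean_probs, 1e-300, None))).sum())

counts = np.array([[1, 1]])
probs = np.array([[[0.4, 0.6]], [[0.6, 0.4]]])  # two posterior samples
ll = predictive_log_likelihood(counts, probs)   # sample-averaged probs are 0.5/0.5
```

Averaging probabilities (rather than log-probabilities) across samples is the standard Monte Carlo approximation of the posterior predictive density.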
We hypothesize that the enhanced ability of the proposed model to explain unseen data is also due to its improved ability to capture the latent predictive statistical structure, e.g., to capture the latent temporal dynamics within the data via the change-point stick-breaking process (cpSBP).

Figure 3: Change of the proportion of important majors along the layers of two paths which share the nodes up to the second layer. These two paths correspond to the full versions (all 7 layers) of the top two paths in Figure 4. BME: Biomedical Engineering, POLI: Political Science, ECON: Economics, CS: Computer Science, BIO: Biology, PPS: Public Policy Science, ME: Mechanical Engineering, OTHER: other 73 majors.

To demonstrate this point, in Figure 2 we compare how cpSBP and SBP guided the class layer allocations, which have associated time covariates (e.g., the academic year of each student). From Figure 2 we observe that under cpSBP, as students' academic careers advance, they are more likely to choose classes from topics deeper in the tree, while such a pattern is less evident in the SBP case. Further, cpSBP encouraged the data to utilize more layers in the tree than SBP did.

4.3 Analyzing the learned tree

With the incorporation of time covariates, we examine whether the uncovered hierarchical structure is consistent with the actual curriculum of students from their freshman to senior year. We consider two analyses here. The first is a visualization of a subtree learned from the class-selection history, based on students of the class of 2009, as shown in Figure 4; shown are the most probable classes in each topic, as well as a histogram of the covariates (1 to 4, for freshman through senior) of the students who employed the topic.
For example, the topics on the top two layers correspond to the most popular classes selected by mechanical engineering and computer science students, respectively, while topics located to the right correspond to more advanced classes; at the left-most position, the root topic corresponds to classes required of all students (e.g., academic writing). The tree-structured hierarchy captured the general trend of class selection within and across different majors. In Figure 4 we also highlight a topic in red, shared by two nodes. This topic corresponds to a set of general introductory classes which are popular (high attendance) for two types of students: (i) young students who take these classes early in preparation for future advanced studies, and (ii) students who need to fill elective requirements later in their academic career ("ideally" of an easy/elementary nature, so as not to "distract" from the classes required by the major). It is therefore deemed interesting that these same classes seem to be preferred by young and old students alike, for apparently very different reasons. Note that the sharing of topics between nodes of different layers is a unique aspect of this model, not possible in [7, 13, 6]. In the second analysis we examine how the majors of students are distributed in the learned tree; the ideal case would be that each tree path corresponds to an academic major, and the nodes shared by paths manifest the sharing of topics between different but related majors. In Figure 3 we show the change of proportions of different majors among different layers of the top two paths in Figure 4 (this is a zoom-in of a much larger tree). For clear illustration, we show the seven most popular majors for these paths as a function of time (out of a total of 80 majors), with the remaining 73 majors grouped together.
We observe that the students with mechanical engineering (ME) majors share the node on the second layer with students with a computer science (CS) major, and the layers deeper in the tree become exclusive to students with CS and ME majors, respectively. This phenomenon corresponds to the process of a student determining her major by choosing courses as she walks down a tree path. It also matches the fact that at this university, students declare their major during the sophomore year.
Figure 4: A subtree of topics learned from courses chosen by undergraduate students of the class of 2009; the whole tree has 372 nodes and 252 topics, and a maximum of 7 layers. Each node shows two aggregated statistics: the eight most common classes of the topic on that node and a histogram of the academic year in which the topic was selected by students (1-4, for freshman through senior). The columns in each histogram correspond to freshman through senior year, from left to right. The two highlighted red nodes share the same topic. These results correspond to one (maximum-likelihood) collected sample.

5 Discussion

We have extended hierarchical topic models to an important problem that has received limited attention to date: the evolution of personal choices over time. The proposed approach builds upon the nCRP [4], but introduces novel modeling components to address the problem of interest. Specifically, we develop a change-point stick-breaking process, coupled with a product-of-gammas and Poisson construction, that encourages individuals to be represented by nodes deeper in the tree as time evolves. The Dirichlet process has also been used to design the node-dependent topics, sharing strength and inferring relationships between the choices of different people over time. The framework has been successfully demonstrated on a real-world data set: the selection of courses over many years by students at Duke University. Although we worked on only one specific real-world data set, there are many other examples for which such a model may be of interest, especially when the data correspond to a sparse set of choices over time. For example, it could be useful for companies attempting to understand the purchases (choices) of customers as a function of time (e.g., the clothing choices of people as they advance from their teen years to adulthood). This may be of interest in marketing and targeted advertisement.
Acknowledgements

We would like to thank the anonymous reviewers for their insightful comments. The research reported here was supported by AFOSR, ARO, DARPA, DOE, NGA and ONR.

References

[1] R. P. Adams, Z. Ghahramani, and M. I. Jordan. Tree-structured stick breaking for hierarchical data. In Neural Information Processing Systems (NIPS), 2010.
[2] E. Bart, I. Porteous, P. Perona, and M. Welling. Unsupervised learning of visual taxonomies. In CVPR, 2008.
[3] A. Bhattacharya and D. B. Dunson. Sparse Bayesian infinite factor models. Biometrika, 2011.
[4] D. M. Blei, T. L. Griffiths, and M. I. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57(2), 2010.
[5] D. M. Blei, T. L. Griffiths, M. I. Jordan, and J. B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In Neural Information Processing Systems (NIPS), 2004.
[6] A. Chambers, P. Smyth, and M. Steyvers. Learning concept graphs from text with stick-breaking priors. In Neural Information Processing Systems (NIPS), 2010.
[7] H. Chen, D. B. Dunson, and L. Carin. Topic modeling with nonparametric Markov tree. In Proc. Int. Conf. Machine Learning (ICML), 2011.
[8] T. L. Griffiths, M. Steyvers, and J. B. Tenenbaum. Topics in semantic representation. Psychological Review, 114(2):211–244, 2007.
[9] C. Hans and D. B. Dunson. Bayesian inferences on umbrella orderings. Biometrics, 61:1018–1026, 2005.
[10] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173, 2001.
[11] L. Li, C. Wang, Y. Lim, D. Blei, and L. Fei-Fei. Building and using a semantivisual image hierarchy. In CVPR, 2010.
[12] W. Li and A. McCallum. Pachinko allocation: DAG-structured mixture models of topic correlations. In Proc. Int. Conf. Machine Learning (ICML), 2006.
[13] D. Mimno, W. Li, and A. McCallum.
Mixtures of hierarchical topics with Pachinko allocation. In Proc. Int. Conf. Machine Learning (ICML), 2007.
[14] O. Papaspiliopoulos and G. O. Roberts. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95(1):169–186, 2008.
[15] R. Salakhutdinov, J. Tenenbaum, and A. Torralba. One-shot learning with a hierarchical nonparametric Bayesian model. MIT Technical Report, 2011.
[16] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[17] J. Sivic, B. C. Russell, A. Zisserman, W. T. Freeman, and A. A. Efros. Unsupervised discovery of visual object class hierarchies. In CVPR, 2008.
[18] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[19] H. M. Wallach, D. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In Neural Information Processing Systems (NIPS), 2009.
[20] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In Proc. Int. Conf. Machine Learning (ICML), 2009.
[21] C. Wang and D. M. Blei. Variational inference for the nested Chinese restaurant process. In Neural Information Processing Systems (NIPS), 2009.
[22] X. Zhang, D. B. Dunson, and L. Carin. Tree-structured infinite sparse factor model. In Proc. Int. Conf. Machine Learning (ICML), 2011.
Selecting the State-Representation in Reinforcement Learning

Odalric-Ambrym Maillard, INRIA Lille - Nord Europe, odalricambrym.maillard@gmail.com
Rémi Munos, INRIA Lille - Nord Europe, remi.munos@inria.fr
Daniil Ryabko, INRIA Lille - Nord Europe, daniil@ryabko.net

Abstract

The problem of selecting the right state-representation in a reinforcement learning problem is considered. Several models (functions mapping past observations to a finite set) of the observations are given, and it is known that for at least one of these models the resulting state dynamics are indeed Markovian. Without knowing which of the models is the correct one, or the probabilistic characteristics of the resulting MDP, it is required to obtain as much reward as the optimal policy for the correct model (or for the best of the correct models, if there are several). We propose an algorithm that achieves this, with a regret of order $T^{2/3}$, where T is the horizon time.

1 Introduction

We consider the problem of selecting the right state-representation in an average-reward reinforcement learning problem. Each state-representation is defined by a model $\phi_j$ (to which corresponds a state space $S_{\phi_j}$), and we assume that the number J of available models is finite and that (at least) one model is a weakly communicating Markov decision process (MDP). We make no assumption at all about the other models. This problem is considered in the general reinforcement learning setting, where an agent interacts with an unknown environment in a single stream of repeated observations, actions and rewards. There are no "resets," so all the learning has to be done online. Our goal is to construct an algorithm that performs almost as well as an algorithm that knows both which model is an MDP (knows the "true" model) and the characteristics of this MDP (the transition probabilities and rewards). Consider some examples that help motivate the problem. The first example is high-level feature selection.
Suppose that the space of histories is huge, such as the space of video streams or that of game plays. In addition to these data, we also have some high-level features extracted from them, such as "there is a person present in the video" or "the adversary (in a game) is aggressive." We know that most of the features are redundant, but we also know that some combination of some of the features describes the problem well and exhibits Markovian dynamics. Given a potentially large number of feature combinations of this kind, we want to find a policy whose average reward is as good as that of the best policy for the right combination of features. Another example is bounding the order of an MDP. The process is known to be k-order Markov, where k is unknown but an upper bound K ≫ k is given. The goal is to perform as well as if we knew k. Yet another example is selecting the right discretization. The environment is an MDP with a continuous state space. We have several candidate quantizations of the state space, one of which gives an MDP. Again, we would like to find a policy that is as good as the optimal policy for the right discretization. This example also opens the way for extensions of the proposed approach: we would like to be able to treat an infinite set of possible discretizations, none of which may be perfectly Markovian. The present work can be considered a first step in this direction. It is important to note that we do not make any assumptions on the "wrong" models (those that do not have Markovian dynamics). Therefore, we are not able to test which model is Markovian in the classical statistical sense, since in order to do that we would need a viable alternative hypothesis (such as: the model is not Markov but is K-order Markov). In fact, the constructed algorithm never "knows" which model is the right one; it is "only" able to get the same average level of reward as if it knew. Previous work.
This work builds on previous work on learning average-reward MDPs. Namely, we use as a subroutine in our algorithm the UCRL2 algorithm of [6], which is designed to provide finite-time bounds for undiscounted MDPs. This problem was pioneered in the reinforcement learning literature by [7] and then improved in various ways by [4, 11, 12, 6, 3]; UCRL2 achieves a regret of order $D\,T^{1/2}$ in any weakly communicating MDP with diameter D, with respect to the best policy for this MDP. The diameter D of an MDP is defined in [6] as the expected minimum time required to reach any state starting from any other state. A related result is reported in [3], which improves on constants related to the characteristics of the MDP. A similar approach was considered in [10]; the difference is that in that work the probabilistic characteristics of each model are completely known, but the models are not assumed to be Markovian, and belong to a countably infinite (rather than finite) set. The problem we address can also be viewed as a generalization of the bandit problem (see e.g. [9, 8, 1]): there are finitely many "arms," corresponding to the policies used in each model, and one of the arms is the best, in the sense that the corresponding model is the "true" one. In the usual bandit setting, the rewards are assumed to be i.i.d.; thus one can estimate the mean value of the arms while switching arbitrarily from one arm to the next (the quality of the estimate only depends on the number of pulls of each arm). However, in our setting, estimating the average reward of a policy requires playing it many times consecutively. This can be seen as a bandit problem with dependent arms, with complex costs of switching between arms. Contribution. We show that despite the fact that the true Markov model of states is unknown and that nothing is assumed about the wrong representations, it is still possible to derive a finite-time analysis of the regret for this problem.
This is stated in Theorem 1; the bound on the regret that we obtain is of order $T^{2/3}$. The intuition is that if the "true" model $\phi^\star$ is known, but its probabilistic properties are not, then we still know that there exists an optimal control policy that depends on the observed state $s_{j^\star,t}$ only. Therefore, the optimal rate of rewards can be obtained by a clever exploration/exploitation strategy, such as the UCRL2 algorithm [6]. Since we do not know in advance which model is an MDP, we need to explore them all, each for a sufficiently long time, in order to estimate the rate of rewards that one can get using a good policy in that model.

Outline. In Section 2 we introduce the precise notion of a model and set up the notation. We then present the proposed algorithm in Section 3; it uses UCRL2 of [6] as a subroutine and selects the models φ according to a penalized empirical criterion. In Section 4 we discuss some directions for further development. Finally, Section 5 is devoted to the proof of Theorem 1.

2 Notation and definitions

We consider a space of observations O, a space of actions A, and a space of rewards R (all assumed to be Polish). Moreover, we assume that A is of finite cardinality $A \stackrel{\text{def}}{=} |A|$ and that $0 \in R \subset [0, 1]$. The set of histories up to time t, for all $t \in \mathbb{N} \cup \{0\}$, is denoted by $H_{<t} \stackrel{\text{def}}{=} O \times (A \times R \times O)^{t-1}$, and we define the set of all possible histories by

$$H \stackrel{\text{def}}{=} \bigcup_{t=1}^{\infty} H_{<t}\,.$$

Environments. For a Polish space X, we denote by $\mathcal{P}(X)$ the set of probability distributions over X. Define an environment to be a mapping from the set of histories H to the set of functions that map any action $a \in A$ to a probability distribution $\nu_a \in \mathcal{P}(R \times O)$ over the product space of rewards and observations. We consider the problem of reinforcement learning when the learner interacts with some unknown environment $e^\star$.
The interaction is sequential and goes as follows: first, some $h_{<1} = \{o_0\}$ is generated according to ι; then, at time step $t > 0$, the learner chooses an action $a_t \in A$ according to the current history $h_{<t} \in H_{<t}$. A reward-observation pair $(r_t, o_t)$ is then drawn according to the distribution $(e^\star(h_{<t}))_{a_t} \in \mathcal{P}(R \times O)$. Finally, $h_{<t+1}$ is defined by the concatenation of $h_{<t}$ with $(a_t, r_t, o_t)$. With this notation, at each time step $t > 0$, $o_{t-1}$ is the last observation given to the learner before choosing an action, $a_t$ is the action output at this step, and $r_t$ is the immediate reward received after playing $a_t$.

State representation functions (models). Let $S \subset \mathbb{N}$ be some finite set; intuitively, this is to be thought of as a set of states. A state representation function φ is a function from the set of histories H to S. For a state representation function φ, we use the notation $S_\phi$ for its set of states, and $s_{t,\phi} := \phi(h_{<t})$. In the sequel, when we talk about a Markov decision process, it will be assumed to be weakly communicating, which means that for each pair of states $u_1, u_2$ there exist $k \in \mathbb{N}$ and a sequence of actions $\alpha_1, \dots, \alpha_k \in A$ such that $P(s_{k+1,\phi} = u_2 \mid s_{1,\phi} = u_1, a_1 = \alpha_1, \dots, a_k = \alpha_k) > 0$. With that in mind, we introduce the following definition.

Definition 1 We say that an environment e with a state representation function φ is Markov, or, for short, that φ is a Markov model (of e), if the process $(s_{t,\phi}, a_t, r_t)$, $t \in \mathbb{N}$, is a (weakly communicating) Markov decision process.

For example, consider a state-representation function φ that depends only on the last observation and partitions the observation space into finitely many cells. Then an environment is Markov with this representation function if the probability distribution over the next cells depends only on the last observed cell and action. Note that there may be many state-representation functions with which an environment e is Markov.
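The interaction protocol above can be sketched as a generic loop in which the environment maps the full history to a reward and observation per action. The environment and policy below are toy stand-ins, not part of the paper.

```python
def interact(environment, policy, T, o0):
    """Run T steps of the sequential protocol: the learner chooses a_t from
    the history h_{<t}, then (r_t, o_t) is drawn from e(h_{<t})(a_t)."""
    history = [o0]          # h_{<1} = {o_0}
    rewards = []
    for _ in range(T):
        a = policy(history)
        r, o = environment(history, a)
        history.extend([a, r, o])   # h_{<t+1}: h_{<t} concatenated with (a_t, r_t, o_t)
        rewards.append(r)
    return rewards

# Toy example: reward 1 when the action matches the last observation's parity.
env = lambda h, a: (1.0 if a == (h[-1] % 2) else 0.0, h[-1] + 1)
rewards = interact(env, policy=lambda h: h[-1] % 2, T=5, o0=0)  # → [1.0] * 5
```

Any state-representation function φ would act on exactly this growing history list, mapping it to a finite state before the policy acts.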
3 Main results

Given a set $\Phi = \{\phi_j;\ j \le J\}$ of J state-representation functions (models), one of which is a Markov model of the unknown environment $e^\star$, we want to construct a strategy that performs nearly as well as the best algorithm that knows which $\phi_j$ is Markov and knows all the probabilistic characteristics (transition probabilities and rewards) of the MDP corresponding to this model. For that purpose we define the regret of any strategy at time T, as in [6, 3], by

$$\Delta(T) \stackrel{\text{def}}{=} T\rho^\star - \sum_{t=1}^{T} r_t\,,$$

where $r_t$ are the rewards received when following the proposed strategy and $\rho^\star$ is the average optimal value in the best Markov model, i.e., $\rho^\star = \lim_{T\to\infty} \frac{1}{T}\, \mathbb{E}\big(\sum_{t=1}^{T} r_t(\pi^\star)\big)$, where $r_t(\pi^\star)$ are the rewards received when following the optimal policy for the best Markov model. Note that this definition makes sense since, when the MDP is weakly communicating, the average optimal value of reward does not depend on the initial state. Also, one could replace $T\rho^\star$ with the expected sum of rewards obtained in T steps (following the optimal policy) at the price of an additional $O(\sqrt{T})$ term. In the next subsection, we describe an algorithm that achieves a sub-linear regret of order $T^{2/3}$.

3.1 Best Lower Bound (BLB) algorithm

In this section we introduce the Best-Lower-Bound (BLB) algorithm, described in Figure 1. The algorithm works in stages of doubling length. Each stage consists of two phases: an exploration phase and an exploitation phase. In the exploration phase, BLB plays the UCRL2 algorithm on each model $(\phi_j)_{1\le j\le J}$ successively, as if each model $\phi_j$ were a Markov model, for a fixed number $\tau_{i,1,J}$ of rounds. The exploitation phase consists in first selecting the model with the highest lower bound, according to the empirical rewards obtained in the previous exploration phase.
This model is initially selected for the same time as in the exploration phase, and a test then decides whether to continue playing it (if its performance during exploitation is still above the corresponding lower bound, i.e., if the rewards obtained are still at least as good as if it were playing the best model). If it does not pass the test, another model (with the second-best lower bound) is selected and played, and so on, until the exploitation phase (of fixed length $\tau_{i,2}$) finishes and the next stage starts.

Parameters: f, δ

For each stage i ≥ 1 do
  Set the total length of stage i to be τ_i := 2^i.
  1. Exploration. Set τ_{i,1} = τ_i^{2/3}. For each j ∈ {1, . . . , J} do
     - Run UCRL2 with parameter δ_i(δ) defined in (1) using φ_j during τ_{i,1,J} steps: the state space is assumed to be S_{φ_j} with the transition structure induced by φ_j.
     - Compute the corresponding average empirical reward μ̂_{i,1}(φ_j) received during this exploration phase.
  2. Exploitation. Set τ_{i,2} = τ_i − τ_{i,1} and initialize J := {1, . . . , J}.
     While the current length of the exploitation part is less than τ_{i,2} do
     - Select ĵ = argmax_{j ∈ J} μ̂_{i,1}(φ_j) − 2B(i, φ_j, δ) (using (3)).
     - Run UCRL2 with parameter δ_i(δ) using φ_ĵ: update at each time step t the current average empirical reward μ̂_{i,2,t}(φ_ĵ) from the beginning of the run. Provided that the length of the current run is larger than τ_{i,1,J}, do the test μ̂_{i,2,t}(φ_ĵ) ≥ μ̂_{i,1}(φ_ĵ) − 2B(i, φ_ĵ, δ).
     - If the test fails, then stop UCRL2 and set J := J \ {ĵ}. If J = ∅, then set J := {1, . . . , J}.

Figure 1: The Best-Lower-Bound selection strategy.

The length of stage i is fixed and defined to be $\tau_i \stackrel{\text{def}}{=} 2^i$. Thus, for a total time horizon T, the number of stages I(T) started before time T is $I(T) \stackrel{\text{def}}{=} \lfloor \log_2(T+1) \rfloor$. Each stage i (of length $\tau_i$) is further decomposed into an exploration phase (of length $\tau_{i,1}$) and an exploitation phase (of length $\tau_{i,2}$).

Exploration phase. All the models $\{\phi_j\}_{j \le J}$ are played one after another for the same amount of time $\tau_{i,1,J} \stackrel{\text{def}}{=} \tau_{i,1}/J$.
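The stage lengths and budgets just defined can be sketched as follows; `blb_stages` is an illustrative helper (not from the paper) that enumerates, for each stage, the per-model exploration budget τ_{i,1,J} and the exploitation budget τ_{i,2}.

```python
import math

def blb_stages(num_models, horizon):
    """Stage i has length tau_i = 2^i, split into exploration
    tau_{i,1} = tau_i^{2/3} (tau_{i,1}/num_models per model) and
    exploitation tau_{i,2} = tau_i - tau_{i,1}."""
    stages, elapsed, i = [], 0, 1
    while elapsed < horizon:
        tau_i = 2 ** i
        tau_i1 = tau_i ** (2.0 / 3.0)
        stages.append({
            "stage": i,
            "explore_per_model": tau_i1 / num_models,  # tau_{i,1,J}
            "exploit": tau_i - tau_i1,                 # tau_{i,2}
        })
        elapsed += tau_i
        i += 1
    return stages

stages = blb_stages(num_models=5, horizon=1000)
# The number of stages started before T matches I(T) = floor(log2(T + 1)).
assert len(stages) == math.floor(math.log2(1000 + 1))
```

Because the exploration budget grows only as $\tau_i^{2/3}$, its share of each stage vanishes, which is what drives the overall $T^{2/3}$ regret rate.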
Each episode $1 \le j \le J$ consists in running the UCRL2 algorithm using the model of states and transitions induced by the state-representation function $\phi_j$. Note that UCRL2 does not require the horizon T in advance, but requires a parameter p in order to ensure a near-optimal regret bound with probability higher than $1-p$. We define this parameter p to be $\delta_i(\delta)$ in stage i, where

$$\delta_i(\delta) \stackrel{\text{def}}{=} \big(2^i - (J^{-1}+1)2^{2i/3} + 4\big)^{-1}\, 2^{-i+1}\delta\,. \qquad (1)$$

The average empirical reward received during each episode is written $\hat\mu_{i,1}(\phi_j)$.

Exploitation phase. We use the empirical rewards $\hat\mu_{i,1}(\phi_j)$ received in the exploration part of stage i, together with a confidence bound, in order to select the model to play. Moreover, a model φ is no longer run for a fixed period of time (as in the exploration part of stage i), but for a period $\tau_{i,2}(\phi)$ that depends on a test; we first initialize $\mathcal{J} := \{1, \dots, J\}$ and then choose

$$\hat{j} \stackrel{\text{def}}{=} \operatorname*{argmax}_{j \in \mathcal{J}}\; \hat\mu_{i,1}(\phi_j) - 2B(i, \phi_j, \delta)\,, \qquad (2)$$

where we define

$$B(i, \phi, \delta) \stackrel{\text{def}}{=} 34\, f(\tau_{i-1} + \tau_{i,1})\, |S_\phi| \sqrt{\frac{A \log\!\big(\tfrac{\tau_{i,1,J}}{\delta_i(\delta)}\big)}{\tau_{i,1,J}}}\,, \qquad (3)$$

where δ and the function f are parameters of the BLB algorithm. UCRL2 is then played using the selected model $\phi_{\hat j}$ with the parameter $\delta_i(\delta)$. In parallel we test whether the average empirical reward received during this exploitation phase is high enough; at time t, if the length of the current episode is larger than $\tau_{i,1,J}$, we test whether

$$\hat\mu_{i,2,t}(\phi_{\hat j}) \ge \hat\mu_{i,1}(\phi_{\hat j}) - 2B(i, \phi_{\hat j}, \delta)\,. \qquad (4)$$

If the test is positive, we keep playing UCRL2 using the same model. If the test fails, the model $\hat j$ is discarded (until the end of stage i), i.e., we update $\mathcal{J} := \mathcal{J} \setminus \{\hat j\}$ and select a new model according to (2). We repeat these steps until the total time $\tau_{i,2}$ of the exploitation phase of stage i is over.

Remark. Note that the model selected for exploitation in (2) is the one with the best lower bound. This is a pessimistic (or robust) selection strategy.
We know that if the right model is selected, then with high probability this model will be kept during the whole exploitation phase. If the selected model is not the right one, then either its policy provides good rewards and we should keep playing it, or it does not, in which case it will not pass the test (4) and will be removed from the set of models that can be exploited in this phase.

3.2 Regret analysis
Theorem 1 (Main result) Assume that a finite set of J state-representation functions Φ is given, and that there exists at least one function φ⋆ ∈ Φ such that with φ⋆ as a state-representation function the environment is a Markov decision process. If there are several such models, let φ⋆ be the one with the highest average reward of the optimal policy of the corresponding MDP. Then the regret (with respect to the optimal policy corresponding to φ⋆) of the BLB algorithm run with parameter δ, for any horizon T, with probability higher than 1 − δ, is bounded as follows:

∆(T) ⩽ c f(T) S ( A J log((Jδ)^{-1}) log₂(T) )^{1/2} T^{2/3} + c′ D S ( A log(δ^{-1}) log₂(T) T )^{1/2} + c(f, D) ,   (5)

for some numerical constants c, c′ and a quantity c(f, D). The parameter f(t) can be chosen to be any increasing function; for instance, the choice f(t) := log₂ t + 1 gives c(f, D) ⩽ 2^D. The proof of this result is reported in Section 5.
Remark. Importantly, the algorithm considered here knows in advance neither the diameter D of the true model nor the time horizon T. Due to this lack of knowledge, it uses a guess f(t) (e.g. log t) on this diameter, which results in the additional regret term c(f, D) and the additional factor f(T); knowing D would allow both to be removed, but this is a strong assumption. Choosing f(t) := log₂ t + 1 gives a bound which is of order T^{2/3} in T but exponential in D; taking f(t) := t^ε gives a bound of order T^{2/3+ε} in T but polynomial in D, of order D^{1/ε}.

4 Discussion and outlook
Intuition. The main idea behind why this algorithm works is as follows.
The “wrong” models are used during exploitation stages only as long as they are giving rewards higher than the rewards that could be obtained in the “true” model. All the models are explored sufficiently long so as to be able to estimate the optimal reward level in the true model, and to learn its policy. Thus, nothing has to be known about the “wrong” models. This is in stark contrast to the usual situation in mathematical statistics, where to be able to test a hypothesis about a model (e.g., that the data is generated by a certain model versus some alternative models), one has to make assumptions about the alternative models. This has to be done in order to make sure that the Type II error is small (the power of the test is large): that this error is small has to be proven under the alternative. Here, although we are solving seemingly the same problem, the role of the Type II error is played by the rewards. As long as the rewards are high, we do not care whether the model we are using is correct or not. We only have to ensure that the true model passes the test.
Assumptions. A crucial assumption made in this work is that the “true” model φ⋆ belongs to a known finite set. While passing from a finite to a countably infinite set appears rather straightforward, getting rid of the assumption that this set contains the true model seems more difficult. What one would want to obtain in this setting is sub-linear regret with respect to the performance of the optimal policy in the best model; this, however, seems difficult without additional assumptions on the probabilistic characteristics of the models. Another approach, not discussed here, would be to try to build a good state-representation function, as suggested for instance in [5]. Yet another interesting generalization in this direction would be to consider uncountable (possibly parametric but general) sets of models. This, however, would necessarily require some heavy assumptions on the set of models.
Regret.
The reader familiar with the adversarial bandit literature will notice that our bound of order T^{2/3} is worse than the T^{1/2} that usually appears in this context (see, for example, [2]). The reason is that our notion of regret is different: in the adversarial bandit literature, the regret is measured with respect to the best choice of arm for the given fixed history. In contrast, we measure the regret with respect to the best policy (which knows the correct model and its parameters) and which, in general, would obtain completely different rewards and observations (from what our algorithm gets) right from the beginning.
Estimating the diameter? As previously mentioned, a possibly large additive constant c(f, D) appears in the regret since we do not know a bound on the diameter of the MDP in the “true” model, and use log T instead. Finding a way to properly address this problem by estimating the diameter of the MDP online is an interesting open question. Let us provide some intuition concerning this problem. First, we notice that, as reported in [6], when we compute an optimistic model based on the empirical rewards and transitions of the true model, the span of the corresponding optimistic value function sp(bV⁺) is always smaller than the diameter D. This span increases as we gather more reward and transition samples, which gives a natural empirical lower bound on D. However, it seems quite difficult to compute a tight empirical upper bound on D (or sp(bV⁺)). In [3], the authors derive a regret bound that scales with the span of the true value function sp(V⋆), which is also less than D, and can be significantly smaller in some cases. However, since we do not have the property that sp(bV⁺) ⩽ sp(V⋆), we need to introduce an explicit penalization in order to control the span of the computed optimistic models, and this requires assuming we know an upper bound B on sp(V⋆) in order to guarantee a final regret bound scaling with B.
Unfortunately this does not solve the estimation problem for D, which remains an open question.

5 Proof of Theorem 1
In this section we detail the proof of Theorem 1, in several parts. First we recall a general confidence bound for the UCRL2 algorithm in the true model. Then we decompose the regret into the sum of the regrets in each stage i. After analyzing the contribution to the regret of stage i, we gather all stages and tune the length of each stage and episode in order to get the final regret bound.

5.1 Upper and lower confidence bounds
From the analysis of UCRL2 in [6], we have the property that with probability higher than 1 − δ′, the regret of UCRL2 when run for τ consecutive steps from time t₁ in the true model φ⋆ is upper bounded by

ρ⋆ − (1/τ) Σ_{t=t₁}^{t₁+τ−1} r_t ⩽ 34 D |S_{φ⋆}| √( A log(τ/δ′) / τ ) ,   (6)

where D is the diameter of the MDP. What is interesting is that this diameter does not need to be known by the algorithm. Also, by carefully looking at the proof of UCRL2, it can be shown that the following bound is also valid with probability higher than 1 − δ′:

(1/τ) Σ_{t=t₁}^{t₁+τ−1} r_t − ρ⋆ ⩽ 34 D |S_{φ⋆}| √( A log(τ/δ′) / τ ) .

We now define the following quantity, for every model φ, episode length τ and δ′ ∈ (0, 1):

B_D(τ, φ, δ′) := 34 D |S_φ| √( A log(τ/δ′) / τ ) .   (7)

5.2 Regret of stage i
In this section we analyze the regret of stage i, which we denote ∆_i. Since each stage i ⩽ I is of length τ_i = 2^i, except the last stage I, which may stop earlier, we have

∆(T) = Σ_{i=1}^{I(T)} ∆_i ,   (8)

where I(T) = ⌊log₂(T + 1)⌋. We further decompose ∆_i = ∆_{i,1} + ∆_{i,2} into the regret ∆_{i,1} corresponding to the exploration phase and the regret ∆_{i,2} corresponding to the exploitation phase; τ_{i,1} is the total length of the exploration phase of stage i and τ_{i,2} is the total length of the exploitation phase of stage i.
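As a quick sanity check on this decomposition, the following sketch (illustrative, not part of the paper) enumerates the stages covering a horizon T, with the last stage truncated at T as in (8):

```python
import math

def num_stages(T):
    """I(T) = floor(log2(T + 1)), the number of stages started before time T."""
    return math.floor(math.log2(T + 1))

def stage_plan(T):
    """(start time, effective length) of each stage i = 1..I(T);
    stage i has nominal length 2^i, and the last stage is truncated at T."""
    plan, t = [], 0
    for i in range(1, num_stages(T) + 1):
        length = min(2 ** i, T - t)
        plan.append((t, length))
        t += length
    return plan
```

For any T, the effective stage lengths sum exactly to T, so the per-stage regrets ∆_i in (8) account for every time step.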
For each model φ, we write τ_{i,1,J} := τ_{i,1}/J for the number of consecutive steps during which the UCRL2 algorithm is run with model φ in the exploration phase of stage i, and τ_{i,2}(φ) for the number of consecutive steps during which the UCRL2 algorithm is run with model φ in the exploitation phase of stage i.
Good and bad models. Let us now introduce the two following sets of models, defined after the end of the exploration phase, i.e. at time t_i:

G_i := {φ ∈ Φ ; bµ_{i,1}(φ) − 2B(i, φ, δ) ≥ bµ_{i,1}(φ⋆) − 2B(i, φ⋆, δ)} \ {φ⋆} ,
B_i := {φ ∈ Φ ; bµ_{i,1}(φ) − 2B(i, φ, δ) < bµ_{i,1}(φ⋆) − 2B(i, φ⋆, δ)} .

With this definition, we have the decomposition Φ = G_i ∪ {φ⋆} ∪ B_i.

5.2.1 Regret in the exploration phase
Since in the exploration phase of stage i each model φ is run for τ_{i,1,J} steps, the regret for each model φ ≠ φ⋆ is bounded by τ_{i,1,J} ρ⋆. The regret for the true model is τ_{i,1,J}(ρ⋆ − bµ_{i,1}(φ⋆)), thus the total contribution to the regret of the exploration phase of stage i is upper bounded by

∆_{i,1} ⩽ τ_{i,1,J}(ρ⋆ − bµ_{i,1}(φ⋆)) + (J − 1) τ_{i,1,J} ρ⋆ .   (9)

5.2.2 Regret in the exploitation phase
By definition, all models in G_i ∪ {φ⋆} are selected before any model in B_i is selected.
The good models. Let us consider some φ ∈ G_i and an event Ω_i under which the exploitation phase does not reset. The test (equation (4)) starts after τ_{i,1,J} steps; thus, since there is no reset, either τ_{i,2}(φ) = τ_{i,1,J}, in which case the contribution to the regret is bounded by τ_{i,1,J} ρ⋆, or τ_{i,2}(φ) > τ_{i,1,J}, in which case the regret during the (τ_{i,2}(φ) − 1) steps where the test was successful is bounded by

(τ_{i,2}(φ) − 1)(ρ⋆ − bµ_{i,2,τ_{i,2}(φ)−1}(φ)) ⩽ (τ_{i,2}(φ) − 1)(ρ⋆ − bµ_{i,1}(φ) + 2B(i, φ, δ)) ⩽ (τ_{i,2}(φ) − 1)(ρ⋆ − bµ_{i,1}(φ⋆) + 2B(i, φ⋆, δ)) ,

and since in the last step φ fails to pass the test, this adds a contribution to the regret of at most ρ⋆. We deduce that the total contribution to the regret of all the models φ ∈ G_i in the exploitation phase, on the event Ω_i, is bounded by

∆_{i,2}(G_i) ⩽ Σ_{φ∈G_i} max{ τ_{i,1,J} ρ⋆ , (τ_{i,2}(φ) − 1)(ρ⋆ − bµ_{i,1}(φ⋆) + 2B(i, φ⋆, δ)) + ρ⋆ } .   (10)

The true model.
First, let us note that since the total regret of the true model during the exploitation phase of stage i is given by τ_{i,2}(φ⋆)(ρ⋆ − bµ_{i,2,t}(φ⋆)), the total regret of the exploration and exploitation phases of stage i on Ω_i is bounded by

∆_i ⩽ τ_{i,1,J}(ρ⋆ − bµ_{i,1}(φ⋆)) + τ_{i,1,J}(J − 1)ρ⋆ + τ_{i,2}(φ⋆)(ρ⋆ − bµ_{i,2,t_i+τ_{i,2}}(φ⋆)) + Σ_{φ∈G_i} max{ τ_{i,1,J} ρ⋆ , (τ_{i,2}(φ) − 1)(ρ⋆ − bµ_{i,1}(φ⋆) + 2B(i, φ⋆, δ)) + ρ⋆ } + Σ_{φ∈B_i} τ_{i,2}(φ) ρ⋆ .

Now, from the analysis provided in [6], we know that when we run UCRL2 with the true model φ⋆ and parameter δ_i(δ), there exists an event Ω_{1,i} of probability at least 1 − δ_i(δ) such that on this event

ρ⋆ − bµ_{i,1}(φ⋆) ⩽ B_D(τ_{i,1,J}, φ⋆, δ_i(δ)) ,

and similarly there exists an event Ω_{2,i} of probability at least 1 − δ_i(δ) such that on this event

ρ⋆ − bµ_{i,2,t}(φ⋆) ⩽ B_D(τ_{i,2}(φ⋆), φ⋆, δ_i(δ)) .

We now show that, with high probability, the true model φ⋆ passes all the tests (equation (4)) until the end of stage i, and thus, equivalently, that with high probability no model φ ∈ B_i is selected, so that Σ_{φ∈B_i} τ_{i,2}(φ) = 0.
For the true model, after τ(φ⋆, t) ⩾ τ_{i,1,J} steps, there remain at most (τ_{i,2} − τ_{i,1,J} + 1) possible time steps at which we do the test for the true model φ⋆. For each test we need to control bµ_{i,2,t}(φ⋆), and the event corresponding to bµ_{i,1}(φ⋆) is shared by all the tests. Thus we deduce that, with probability higher than 1 − (τ_{i,2} − τ_{i,1,J} + 2) δ_i(δ), we have simultaneously on all time steps until the end of the exploitation phase of stage i:

bµ_{i,2,t}(φ⋆) − bµ_{i,1}(φ⋆) = bµ_{i,2,t}(φ⋆) − ρ⋆ + ρ⋆ − bµ_{i,1}(φ⋆) ⩾ −B_D(τ(φ⋆, t), φ⋆, δ_i(δ)) − B_D(τ_{i,1,J}, φ⋆, δ_i(δ)) ⩾ −2 B_D(τ_{i,1,J}, φ⋆, δ_i(δ)) .

Now, provided that f(t_i) ⩾ D, we have B_D(τ_{i,1,J}, φ⋆, δ_i(δ)) ⩽ B(i, φ⋆, δ); thus the true model passes all tests until the end of the exploitation part of stage i on an event Ω_{3,i} of probability higher than 1 − (τ_{i,2} − τ_{i,1,J} + 2) δ_i(δ). Since there is then no reset, we can choose Ω_i := Ω_{3,i}. Note that on this event we thus have Σ_{φ∈B_i} τ_{i,2}(φ) = 0.
By a union bound over the events Ω_{1,i}, Ω_{2,i} and Ω_{3,i}, we deduce that with probability higher than 1 − (τ_{i,2} − τ_{i,1,J} + 4) δ_i(δ),

∆_i ⩽ τ_{i,1,J} B_D(τ_{i,1,J}, φ⋆, δ_i(δ)) + [τ_{i,1,J}(J − 1) + |G_i|] ρ⋆ + τ_{i,2}(φ⋆) B_D(τ_{i,2}(φ⋆), φ⋆, δ_i(δ)) + Σ_{φ∈G_i} max{ (τ_{i,1,J} − 1) ρ⋆ , (τ_{i,2}(φ) − 1)(B_D(τ_{i,1,J}, φ⋆, δ_i(δ)) + 2B(i, φ⋆, δ)) } .

Now, using again the fact that f(t_i) ⩾ D, and after some simplifications, we deduce that

∆_i ⩽ τ_{i,1,J} B_D(τ_{i,1,J}, φ⋆, δ_i(δ)) + τ_{i,2}(φ⋆) B_D(τ_{i,2}(φ⋆), φ⋆, δ_i(δ)) + Σ_{φ∈G_i} (τ_{i,2}(φ) − 1) · 3B(i, φ⋆, δ) + τ_{i,1,J}(J + |G_i| − 1) ρ⋆ .

Finally, we use the fact that τ B_D(τ, φ⋆, δ_i(δ)) is increasing with τ to deduce the following rough bound, which holds with probability higher than 1 − (τ_{i,2} − τ_{i,1,J} + 4) δ_i(δ):

∆_i ⩽ τ_{i,2} B(i, φ⋆, δ) + τ_{i,2} B_D(τ_{i,2}, φ⋆, δ_i(δ)) + 2 J τ_{i,1,J} ρ⋆ ,

where we used the fact that τ_{i,2} = τ_{i,2}(φ⋆) + Σ_{φ∈G_i} τ_{i,2}(φ).

5.3 Tuning the parameters of each stage
We now conclude by tuning the parameters of each stage, i.e. the probabilities δ_i(δ) and the lengths τ_i, τ_{i,1} and τ_{i,2}. The total length of stage i is by definition τ_i = τ_{i,1} + τ_{i,2} = τ_{i,1,J} J + τ_{i,2}, where τ_i = 2^i. So we set τ_{i,1} := τ_i^{2/3}, and then we have τ_{i,2} := τ_i − τ_i^{2/3} and τ_{i,1,J} = τ_i^{2/3}/J. Now, using these values and the definition of the bounds B(i, φ⋆, δ) and B_D(τ_{i,2}, φ⋆, δ_i(δ)), we deduce with probability higher than 1 − (τ_{i,2} − τ_{i,1,J} + 4) δ_i(δ) the following upper bound:

∆_i ⩽ 34 f(t_i) S √( A J log( τ_i^{2/3} / (J δ_i(δ)) ) ) τ_i^{2/3} + 34 D S √( A log( τ_i / δ_i(δ) ) τ_i ) + 2 τ_i^{2/3} ρ⋆ ,

with t_i = 2^i − 1 + 2^{2i/3}, and where we used the fact that √(J / τ_i^{2/3}) τ_{i,2} ⩽ √J τ_i^{2/3}. We now define δ_i(δ) such that

δ_i(δ) := (2^i − (J^{-1} + 1) 2^{2i/3} + 4)^{-1} 2^{-i+1} δ .

Since for the stages i ∈ I₀ := {i ⩾ 1 ; f(t_i) < D} the regret is bounded by ∆_i ⩽ τ_i ρ⋆, the total cumulative regret of the algorithm is bounded, with probability higher than 1 − δ (using the definition of the δ_i(δ)), by

∆(T) ⩽ Σ_{i∉I₀} ( [ 34 f(t_i) S √( J A log( 2^{8i/3} / (Jδ) ) ) + 2 ] 2^{2i/3} + 34 D S √( A log( 2^{3i}/δ ) 2^i ) ) + Σ_{i∈I₀} 2^i ρ⋆ ,

where t_i = 2^i − 1 + 2^{2i/3} ⩽ T.
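The choice τ_{i,1} = τ_i^{2/3} is what produces the T^{2/3} leading term: summing the per-stage exploration budgets 2^{2i/3} over the I(T) stages stays within a constant factor of T^{2/3}. A quick numerical check (illustrative, not part of the proof):

```python
import math

def exploration_budget(T):
    """Total exploration time sum_{i <= I(T)} tau_i^(2/3), with tau_i = 2^i."""
    I = math.floor(math.log2(T + 1))
    return sum((2 ** i) ** (2 / 3) for i in range(1, I + 1))

# The ratio of the total exploration time to T^(2/3) stays bounded,
# matching the T^(2/3) term of Theorem 1.
```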
We conclude by using the fact that I(T) ⩽ log₂(T + 1): with probability higher than 1 − δ, the following bound on the regret holds:

∆(T) ⩽ c f(T) S ( A J log((Jδ)^{-1}) log₂(T) )^{1/2} T^{2/3} + c′ D S ( A log(δ^{-1}) log₂(T) T )^{1/2} + c(f, D) ,

for some constants c, c′, and where c(f, D) = Σ_{i∈I₀} 2^i ρ⋆. For the special choice f(T) := log₂(T + 1), i ∈ I₀ means 2^i + 2^{2i/3} < 2^D + 2, thus we must have i < D, and therefore c(f, D) ⩽ 2^D.

Acknowledgements
This research was partially supported by the French Ministry of Higher Education and Research, Nord-Pas-de-Calais Regional Council and FEDER through CPER 2007–2013, ANR projects EXPLO-RA (ANR-08-COSI-004) and Lampada (ANR-09-EMER-007), by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 231495 (project CompLACS), and by Pascal-2.

References
[1] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[2] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science, pages 322–331, October 1995.
[3] Peter L. Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI, pages 35–42, Arlington, Virginia, United States, 2009. AUAI Press.
[4] Ronen I. Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231, March 2003.
[5] Marcus Hutter. Feature reinforcement learning: Part I: Unstructured MDPs. Journal of Artificial General Intelligence, 1:3–24, 2009.
[6] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning.
Journal of Machine Learning Research, 99:1563–1600, August 2010.
[7] Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49:209–232, November 2002.
[8] Tze L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
[9] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
[10] Daniil Ryabko and Marcus Hutter. On the possibility of learning in reactive environments with arbitrary dependence. Theoretical Computer Science, 405:274–284, October 2008.
[11] Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, ICML, pages 881–888, New York, NY, USA, 2006. ACM.
[12] Ambuj Tewari and Peter L. Bartlett. Optimistic linear programming gives logarithmic regret for irreducible MDPs. In Proceedings of Neural Information Processing Systems Conference (NIPS), 2007.
A Reinforcement Learning Theory for Homeostatic Regulation
Mehdi Keramati, Group for Neural Theory, LNC, ENS Paris, France, mohammadmahdi.keramati@ens.fr
Boris Gutkin, Group for Neural Theory, LNC, ENS Paris, France, boris.gutkin@ens.fr

Abstract
Reinforcement learning models address an animal's behavioral adaptation to its changing "external" environment, and are based on the assumption that Pavlovian, habitual and goal-directed responses seek to maximize reward acquisition. Negative-feedback models of homeostatic regulation, on the other hand, are concerned with behavioral adaptation in response to the "internal" state of the animal, and assume that animals' behavioral objective is to minimize deviations of some key physiological variables from their hypothetical setpoints. Building upon the drive-reduction theory of reward, we propose a new analytical framework that integrates learning and regulatory systems, such that the two seemingly unrelated objectives of reward maximization and physiological stability prove to be identical. The proposed theory accounts for behavioral adaptation to both internal and external states in a disciplined way. We further show that the proposed framework allows for a unified explanation of several behavioral patterns, such as the motivational sensitivity of different associative learning mechanisms, anticipatory responses, interactions among competing motivational systems, and risk aversion.

1 Introduction
"Reinforcement learning" and "negative-feedback models of homeostatic regulation" are two control-theory-based computational frameworks that have made major contributions to the learning and motivation literatures in behavioral psychology, respectively. Proposing neurobiologically plausible algorithms, the computational theory of reinforcement learning (RL) explains how animals adapt their behavior to varying contingencies in their "external" environment, by persistently updating their estimates of the rewarding value of feasible choices [1].
The teaching signal required for these updates is suggested to be carried by the phasic activity of midbrain dopamine neurons [2] projecting to the striatum, where stimulus-response associations are supposed to be encoded. Critically, this theory is built upon one hypothetical assumption: animals' behavioral objective is to maximize reward acquisition. In this respect, without addressing the question of how the brain constructs reward, the RL theory gives a normative explanation for how the brain's decision-making circuitry shapes instrumental responses as a function of external cues, so that the animal satisfies its reward-maximization objective. Negative-feedback models of homeostatic regulation (HR), on the other hand, seek to explain regular variabilities in behavior as a function of the internal state of the animal, when the external stimuli are fixed [3, 4]. Homeostasis means maintaining the stability of some physiological variables. Correspondingly, behavioral homeostasis refers to corrective responses that are triggered by the deviation of a regulated variable from its hypothetical setpoint. In fact, regulatory systems operate "as if" they aim at defending variables against perturbations. In this sense, the existence of an "error signal" (also known as "negative feedback"), defined by the discrepancy between the current and desired internal state, is essential for any regulatory mechanism. Even though the presence of this negative feedback in controlling motivated behaviors is argued to be indisputable for explaining some behavioral and physiological facts [3], three major difficulties have raised criticism against negative-feedback models of behavioral homeostasis [3, 5]: (1) Anticipatory eating and drinking can be elicited in the absence of physiological depletion [6], supposedly in order to prevent anticipated deviations in the future. In more general terms, motivated behaviors are observed to be elicited even when no negative feedback is detectable.
(2) Intravenous (and even intragastric, in some cases) injection of food is not motivating, even though it alleviates the deviation of the motivational state from its homeostatic setpoint. For example, rats do not show motivation, after several learning trials, to run down an alley for intragastric feeding with milk, whereas they quickly learn to run for normal drinking of milk [7]. These behavioral patterns are used as evidence to argue that maintaining homeostasis (reducing the negative feedback) is not a behavioral objective. In contrast, the taste (and other sensory) information of stimuli is argued to be the key factor for their reinforcing properties [5]. (3) The traditional homeostatic regulation theory simply assumes that the animal knows how to translate its various physiological deficits into the appropriate behaviors. In fact, without taking into account the contextual state of the animal, negative-feedback models only address the question of whether or not the animal should have motivation toward a certain outcome, without answering how the outcome can be achieved through a series of instrumental actions. The existence of these shortcomings calls for rethinking the traditional view of homeostatic regulation theory. We believe that these shortcomings, as well as the weak spot of RL models in not taking into account the internal drive state of the animal, all arise from the lack of an integrated view of learning and motivation. We show in this paper that a simple unified computational theory of RL and HR allows for explaining a wide range of behavioral patterns, including those mentioned above, without decreasing the explanatory power of the two original theories. We further show that such a unified theory can satisfy the two objectives of reward maximization and deviation minimization at the same time.

2 The model
The term "reward" (with many equivalents like reinforcer, motivational salience, utility, etc.)
has been at the very heart of behavioral psychology since its foundation. In purely behavioral terms, it refers to a stimulus delivered to the animal after a response is made, that increases the probability of making that response in the future. RL theory proposes algorithms for how different learning systems can adapt the agent's responses to varying external conditions in order to maximize the acquisition of reward. For this purpose, RL algorithms try to learn, via experience, the sum of discounted rewards expected to be received after taking a specific action (a_t), in a specific state (s_t), onward:

V(s_t, a_t) = E[ r_t + γ r_{t+1} + γ² r_{t+2} + . . . | s_t, a_t ] = E[ Σ_{i=t}^∞ γ^{i−t} r_i | s_t, a_t ]   (1)

Here 0 ≤ γ ≤ 1 discounts future rewards, and r_t denotes the rewarding value of the outcome the animal receives at time t, which is often set to a "fixed" value that can be either positive or negative, depending on whether the corresponding stimulus is appetitive or aversive, respectively. However, an animal's motivation for outcomes is not fixed, but a function of its internal state: a food pellet is more rewarding to a hungry than to a sated rat. In fact, the internal state (also referred to as the drive, or motivational state) of the animal affects the reinforcing value of a constant external outcome. As an attempt to identify the nature of reward and its relation to drive states, neo-behaviorists like Hull [8], Spence, and Mowrer proposed the "drive reduction" theory of motivation. According to this theory, one primary mechanism underlying reward is drive reduction. In terms of homeostatic regulation theory, reward is defined as a stimulus that reduces the discrepancy between the current and the desired drive state of the animal; i.e., a food pellet is rewarding to a hungry rat because it fulfills a physiological need.
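Returning to equation 1, the bracketed quantity is just a discounted sum for one observed reward sequence; averaging it over many sampled trajectories from (s_t, a_t) gives a Monte-Carlo estimate of V(s_t, a_t). A minimal sketch (the values of γ and the rewards are illustrative):

```python
def discounted_return(rewards, gamma):
    """Sum_{i>=0} gamma^i * r_i for one observed reward sequence (Eq. 1)."""
    return sum((gamma ** i) * r for i, r in enumerate(rewards))
```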
To capture this idea formally, let H_t = {h_{1,t}, h_{2,t}, .., h_{N,t}} denote the physiological state of the animal at time t, and H* = {h*_1, h*_2, .., h*_N} the homeostatic setpoint. As a special case, Figure 1 shows a simplified system where food and water constitute all the physiological needs. This model can obviously be extended, without loss of generality, to cover other homeostatically regulated drives, as well as more detailed aspects of feeding, like differential drives for carbohydrate, sodium, calcium, etc.

Figure 1: An exemplary homeostatic space, with food and water as regulated physiological needs.

A drive function can then be defined on this space as a mapping from physiological state to motivation:

d(H_t) = ( Σ_{i=1}^N |h*_i − h_{i,t}|^n )^{1/m}   (2)

This drive function is in fact a distance function (the Euclidean distance when m = n = 2) that creates quasi-circular iso-drive curves centered around the homeostatic setpoint. The homeostatic space, as a multidimensional metric space, is a hypothetical construct that allows us to explain a wide range of behavioral and physiological evidence in a unified framework. Since animals' physiological states are most often below the setpoint, our focus in this paper is mostly on the quarter of the homeostatic space that is below the homeostatic setpoint. Having defined drive, we can now provide a formal definition of reward, based on the drive-reduction theory. Assume that as the result of some actions, the animal receives an outcome at time t that contains k_{i,t} units of the constituent h_i, for each i ∈ {1, 2, .., N}. K_t can be defined as a row vector with entries (k_{i,t} : i ∈ {1, 2, .., N}). Consumption of that outcome will result in a transition of the physiological state from H_t to H_{t+1} = H_t + K_t, and consequently, a transition of the drive state from d(H_t) to d(H_t + K_t). For example, Figure 1 shows the transition resulting from taking k₁ units of food.
Accordingly, the rewarding value of that outcome can be defined as the consequent reduction in the drive function:

r(H_t, K_t) = d(H_t) − d(H_{t+1}) = d(H_t) − d(H_t + K_t)   (3)

This drive-reduction definition of reward is the central element in our proposed framework that will bridge the gap between regulatory and reward-learning systems.

3 Behavioral plausibility of the reward function
Before discussing how the reward defined in equation 3 can be used by associative learning mechanisms (RL theory), in this section we show that the functional form proposed for the reward function allows several behavioral phenomena to be explained in a unified framework. For all n > m ≥ 2, the rewarding value of an outcome consisting of only one constituent, K_t = (0, 0, .., k_{j,t}, .., 0), when the animal is in the motivational state H_t, has the four properties below. Even though these properties are written for the case where the need state remains below the setpoint, as the drive function is symmetric with respect to the setpoint, similar results can be derived for the other three quarters.

3.1 Reward value increases with the dose of outcome
The reward function is an increasing function of the magnitude of the outcome (e.g., the number of food pellets); i.e., the bigger the outcome is, the more rewarding it will be. It is straightforward to show that:

∂r(H_t, K_t)/∂k_{j,t} > 0 : for k_{j,t} > 0   (4)

Supporting this property of the drive function, it is shown in progressive-ratio schedules that rats maintain higher breakpoints when reinforced with bigger appetitive outcomes, reflecting higher motivation toward them [9, 10].

3.2 Excitatory effect of deprivation level
Increasing the deprivation level of the animal will increase the reinforcing strength of a constant dose of a corresponding outcome; i.e., a single pellet of food is more rewarding to a hungry than to a sated rat.
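A minimal sketch of equations 2 and 3 on the two-dimensional (food, water) space of Figure 1; the setpoint, the exponents n = 4 and m = 2, and the state values below are illustrative choices, not values fixed by the paper:

```python
def drive(H, H_star, n=4, m=2):
    """Drive function of Eq. 2: m-th root of the sum of n-th powers of deficits."""
    return sum(abs(hs - h) ** n for h, hs in zip(H, H_star)) ** (1 / m)

def reward(H, K, H_star, n=4, m=2):
    """Drive-reduction reward of Eq. 3 for consuming outcome K in state H."""
    H_next = [h + k for h, k in zip(H, K)]
    return drive(H, H_star, n, m) - drive(H_next, H_star, n, m)
```

With setpoint H* = (10, 10), the same food outcome K = (3, 0) is worth much more to a food-deprived state (2, 5) than to a nearly sated one (7, 5), and its value also drops when the irrelevant water deficit grows (state (2, 1)), illustrating the excitatory effect of the relevant deprivation and the inhibitory effect of the irrelevant drive.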
It is straightforward to show that:

∂r(H_t, K_t)/∂|h*_j − h_{j,t}| > 0 : for k_{j,t} > 0   (5)

Consistently, the level of food deprivation in rats is demonstrated to increase the breakpoint in progressive-ratio schedules [10].

3.3 Inhibitory effect of the irrelevant drive
A large body of behavioral experiments shows that the deprivation level for one need has an inhibitory effect on the reinforcing value of outcomes that satisfy irrelevant needs (see [11] for a review). It is well known that a high level of irrelevant thirst impairs Pavlovian as well as instrumental responses for food, during both acquisition and extinction. Reciprocally, food deprivation is demonstrated to suppress Pavlovian and instrumental water-related responses. As some other examples, increased calcium appetite is shown to reduce appetite for phosphorus, and an increased level of hunger is demonstrated to inhibit sexual behavior. It is straightforward to show that the specific functional form proposed for the drive function can capture this inhibitory-like interaction between irrelevant drives:

∂r(H_t, K_t)/∂|h*_i − h_{i,t}| < 0 : for all i ≠ j and k_{j,t} > 0   (6)

Intuitively, one does not play chess, or even search for sex, with an empty stomach. This behavioral pattern can be interpreted as competition among different motivational systems (different dimensions of the homeostatic space), and is consistent with the notion of a "complementary" relation between some goods in economics, like between cake and coffee. Each of two complementary goods is highly rewarding only when the other one is also available; taking one good when the other one is lacking is not as rewarding.

3.4 Risk aversion
Risk aversion, a fundamental concept in both psychology and economics, supported by a wealth of behavioral experiments, is defined by the reluctance of individuals to select choices with uncertain outcomes, compared to choices with more certain payoffs, even when the expected payoff of the former is higher than that of the latter.
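Risk aversion here corresponds to concavity of the drive-reduction reward in the outcome magnitude. A one-dimensional numerical check (illustrative assumptions: n = 4, m = 2, setpoint 10, current state 0): a certain medium-sized outcome is preferred to a fair gamble between a small and a large one.

```python
def reward_1d(k, h=0.0, h_star=10.0, n=4, m=2):
    """Drive-reduction reward (Eq. 3) for a single regulated variable."""
    d = lambda x: abs(h_star - x) ** (n / m)   # 1-D version of Eq. 2
    return d(h) - d(h + k)

# Midpoint concavity: the reward of the sure outcome 5 exceeds the
# average reward of the gamble between outcomes 2 and 8.
```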
It is easy to show that a concave reward (utility) function (with respect to the quantity of the outcome) is equivalent to risk aversion. It is due to this feature that concavity of the utility function has become a fundamental assumption in microeconomic theory. The proposed form of the drive function can capture risk aversion:

∂²r(H_t, K_t)/∂k²_{j,t} < 0 : for k_{j,t} > 0   (7)

It is noteworthy that, since K_t consists of substances that fulfil physiological needs, one should be careful in extending this and other features of the model to the case of monetary rewards, or any other type of stimulus that does not seem to have a corresponding regulatory system (like social rewards, novelty-induced reward, etc.). It is clear that the four mentioned properties of the reward function depend on the specific functional form adopted for the drive function (equation 2). In fact, the drive-reduction definition of reward allows the validity of the form of the drive function to be experimentally tested in behavioral tasks.

4 Homeostatic control through learning mechanisms
Despite significant day-to-day variations in energy expenditure and food availability, the homeostatic regulation system involved in the control of food intake and energy balance is a remarkably precise system. This adaptive nature of the homeostatic regulation system can be achieved by employing the animal's learning mechanisms, which are capable of adopting flexible behavioral strategies to cope with changes in environmental conditions. In this section we look at the behavioral and theoretical implications of the interaction between homeostatic regulation and reward-learning systems. One theoretical implication of the proposed definition of reward is that it reconciles the RL and homeostatic regulation theories, in terms of the normative assumptions behind them.
More precisely, if any reward learning system, such as an RL algorithm, uses the drive-reduction definition of reward proposed in equation 3, then a behavioral policy that maximizes the sum of discounted rewards is at the same time minimizing the sum of discounted drives (or the sum of discounted deviations from the setpoint), and vice versa (see Supplementary Methods: Proposition 1). In this respect, reward maximization can be seen as just a means that guides the animal's behavior toward satisfying the more basic objective of maintaining homeostasis. Since the reward function defined by equation 3 depends on the internal state and is thus nonstationary, appropriate adjustments must be made to classical RL algorithms that try to find an optimal policy. One straightforward solution is to define an augmented Markov decision process (MDP) implied by the cross-product of internal and external states, and then use a variant of RL algorithms to learn action-values in that problem space. Consistent with this view, some behavioral studies show that internal state can function in the same way that external state does; that is, animals are able to acquire responses conditioned upon certain motivational states (e.g., the motivational state induced by a benzodiazepine agonist) [11]. Although this approach, in theory, guarantees convergence to the optimal policy, the high dimensionality of the resulting state-space makes learning practically infeasible. Moreover, since the next external state depends only on the current external, but not internal, state, such an augmented MDP will have significant redundant structure. From a machine learning point of view, as argued in [12], an appropriate function approximator specifically designed to take advantage of such structure can be used to reduce state-space dimensionality.
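As a concrete illustration of the augmented-MDP idea, the sketch below runs tabular Q-learning over the cross-product of a tiny external state (at or away from a food source) and an internal state (deviation from a setpoint), with reward defined as drive reduction, r = d(H_t) − d(H_{t+1}). The environment, the drive function, and all parameters are illustrative assumptions, not taken from the paper.

```python
import random

# Assumed convex drive over the deviation |h* - h| (illustrative choice).
def drive(h):
    return abs(h) ** 1.5

# External states: 0 = away from food, 1 = at food. Actions: 0 = move, 1 = eat.
def step(ext, h, action):
    if action == 0:               # moving costs energy: the deviation grows
        return 1 - ext, min(h + 1, 4)
    if ext == 1 and h > 0:        # eating reduces the deviation, but only at the food site
        return ext, h - 1
    return ext, h                 # eating elsewhere does nothing

random.seed(0)
Q = {}
alpha, gamma, eps = 0.1, 0.9, 0.2
ext, h = 0, 2
for _ in range(20000):
    s = (ext, h)
    if random.random() < eps:
        a = random.randrange(2)
    else:
        a = max((0, 1), key=lambda act: Q.get((s, act), 0.0))
    ext2, h2 = step(ext, h, a)
    r = drive(h) - drive(h2)      # drive-reduction reward, in the spirit of equation 3
    best_next = max(Q.get(((ext2, h2), b), 0.0) for b in (0, 1))
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
    ext, h = ext2, h2

# When hungry at the food site, eating should be valued above moving away.
print(Q.get(((1, 3), 1), 0.0) > Q.get(((1, 3), 0), 0.0))
```

Even in this toy setting, the learned values reflect homeostatic control: eating while hungry at the food site is valued above moving away, because only eating reduces the drive.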
Besides the algorithm-independent expansion of the state-space argued above, the proposed definition of reward provides an opportunity for discussing how different associative learning systems in the brain take the animal's internal state into account. Here we discuss motivational sensitivity in the habitual (hypothesized to use model-free RL, such as a temporal difference algorithm [13]), goal-directed (hypothesized to use model-based RL [13]), and Pavlovian learning systems.

A model-based RL algorithm learns the state-transition matrix (action-outcome contingencies) as well as the reward function (the incentive value of each outcome), and then uses them to compute the value of possible actions in a given state via the Bellman optimality equation [1]. In our framework, as long as a model-based algorithm is involved in decision making, all the animal needs to do when its internal state shifts is to update the incentive value (reward function) of each potential outcome. Once the reward function is updated, the animal will be able to follow the optimal policy, provided that the state-transition function has been learned completely. One way to update the reward function is for the animal to re-experience outcomes in its new motivational state. This appears to be how the goal-directed system works, since re-exposure is demonstrated to be necessary for the animal's behavior to be sensitive to changes in motivational state [11]. The second way is to update the reward function without re-exposure, by directly estimating the drive-reduction effect of outcomes in the current motivational state, using equation 3. This appears to be how the Pavlovian system works, since numerous experiments show that Pavlovian conditioned and unconditioned responses are sensitive to the internal state in an outcome-specific way, without re-exposure being required [11].
This Pavlovian-type behavioral adaptation is observed even when the animal has never, in its whole life, experienced the outcome in its current motivational state. This shows that animals are able to directly estimate the drive-reduction effect of at least certain types of outcomes. Furthermore, the fact that this motivational sensitivity is outcome-specific shows that the underlying associative learning structure is model-based (i.e., based on learning the causal contingencies between events). The habitual system (hypothesized to use model-free RL [13]) has been demonstrated, through outcome-devaluation experiments, to be unable to directly (i.e., without new learning) adapt the animal's behavioral response after shifts in motivational state [11]. It might be concluded from this observation that action-values in the habitual system depend only on the internal state of the animal during the course of learning (past experiences), but not during performance. This is consistent with the information storage scheme in model-free RL, where cached action-values lose connection with the identity of the outcomes. Thus, the habitual system does not know which specific action-values should be updated when a novel internal state is experienced. However, it has been argued by some authors [14] that even though habitual responses are insensitive to sensory-specific satiety, they are sensitive to motivational manipulations in a general, outcome-independent way. Note that the previous form of motivational insensitivity concerns the lack of behavioral adaptation in an outcome-specific way (for example, the lack of greater preference for food-seeking behavior, compared to water-seeking behavior, after a shift from satiety to hunger).
Borrowing from the Hullian concept of "generalized drive" [8], it has been proposed that a transition from an outcome-sated to an outcome-deprived motivational state will energize all pre-potent habitual responses, irrespective of whether or not those actions result in that particular outcome [14]. For example, a motivational shift from satiety to hunger will energize both food-seeking and water-seeking habitual responses. Taking a normative perspective, we argue here that this outcome-independent energizing effect is an approximate way of updating the value of state-action pairs when the motivational state shifts instantaneously. Assuming that the animal is trained under the fixed internal state H0 and then tested in a novel internal state H1, the cached values in the habitual system can be approximated in the new motivational state by:

Q1(s, a) = [d(H1)/d(H0)] · Q0(s, a), for all state-action pairs (8)

where Q0(s, a) represents the action-values learned by the habitual system after the training period. According to this update rule, all pre-potent actions will be energized if the deviation from the homeostatic setpoint increases in the new internal state, whether or not the outcomes of those actions are more desired in the new state. Proposition 2 (see Supplementary Methods) shows that this update rule is a perfect approximation only when H1 = c·H0 for some c ≥ 0. This means that the energizing effect will result in rational behavior after a motivational shift only when the new internal state is an amplification or attenuation of the old internal state along all dimensions of the homeostatic space, by the same factor. Since the model-free system does not learn a causal model of the environment, it cannot show motivational sensitivity in an outcome-specific way, and this general energizing effect is the best approximate way to react to motivational manipulations.
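The outcome-independent energizing rule of equation 8 can be illustrated numerically. The drive function below (an m-th root of a sum of n-th powers of deviations from the setpoint) and the cached action-values are assumptions chosen only for illustration:

```python
# Sketch of the "energizing" update (equation 8): cached habitual values are
# rescaled by the ratio of drives, regardless of which outcome each action earns.

def drive(H, setpoint=(5.0, 5.0), n=4, m=3):
    # one assumed instance of a drive function: the m-th root of the sum of
    # n-th powers of deviations from the setpoint (exponents are illustrative)
    return sum(abs(s - h) ** n for s, h in zip(setpoint, H)) ** (1.0 / m)

Q0 = {"press_for_food": 2.0, "press_for_water": 1.5}   # values cached under H0

H0 = (4.0, 4.0)        # training state: mildly deprived on both dimensions
H1 = (1.0, 4.0)        # test state: much hungrier, thirst unchanged

scale = drive(H1) / drive(H0)
Q1 = {a: scale * q for a, q in Q0.items()}

# Both habits are energized by the same factor, even though only food
# deprivation increased -- the model-free system cannot be outcome-specific.
print(round(scale, 3), Q1)
```

As the printout shows, hunger also inflates the water-seeking habit's value, which is exactly the outcome-independent pattern the text describes.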
4.1 Anticipatory responses

The notion of predictive homeostasis [3, 4], as opposed to classical reactive homeostasis in which negative feedback (physiological depletion) is essential, suggests that by anticipating future needs, individuals make anticipatory responses in order to prevent deviation of regulated variables from their setpoints. Anticipatory eating and drinking, as two examples, are defined as taking food or water in the absence of any detectable decrease in the corresponding physiological signals. Although it is quite clear that explaining such behavioral patterns requires integrating homeostatic mechanisms with learning (i.e., predictive) processes, to our knowledge no well-defined mechanism has been proposed so far. We show here, through an example, how the proposed model can reconcile anticipatory responses with homeostatic regulation theory. Temperature regulation provides a clear example of predictive homeostasis. Body temperature, long a topic of interest in the homeostatic regulation literature, is shown to increase back to its normal level by shivering after the animal is placed in a cold environment. Interestingly, cues that predict being placed in a cold environment induce anticipatory shivering and cause the body temperature to rise above the animal's normal temperature [15]. Similarly, cues that predict receiving a drug that decreases body temperature are shown to have the same effect. This behavior is interpreted as an attempt by the animal to alleviate the severity of the deviation from the setpoint. To model this example, let us assume that x* is the normal temperature of the animal, and that putting the animal in the cold will decrease its body temperature by lx units. Furthermore, assume that when presented with the coldness-predicting cue (SC), the animal chooses how much to shiver and thus increases its body temperature by kx units.
In a scenario like this, after observing the coldness-predicting cue, the animal's temperature will shift from x* to x* + kx as a result of anticipatory shivering. Assuming that the animal then experiences the cold after a delay of one time unit, its temperature will transit from x* + kx to x* + kx − lx. The rewarding value of this transition is discounted at rate γ. Finally, assuming that after one more time unit the body temperature returns to the normal level x*, the animal receives another reward, discounted at rate γ². The sum of discounted rewards can be written as:

V(SC, kx) = [d(x*) − d(x* + kx)] + γ·[d(x* + kx) − d(x* + kx − lx)] + γ²·[d(x* + kx − lx) − d(x*)] (9)

Proposition 3 (see Supplementary Methods) shows that the strategy maximizing the sum of discounted rewards in this scenario is kx = lx/2, assuming that the discount factor is not equal to one but is sufficiently close to it. The model thus predicts that the best strategy is to perform anticipatory shivering to the extent that keeps the body temperature as close as possible to the setpoint: staying near the setpoint is preferred to moving far from it and coming back. This example, which can easily be generalized to anticipatory eating and drinking, shows that when learning mechanisms play a regulatory role, it is not only the initial and final motivational states of a policy that matter, but also the trajectory of the motivational state through the homeostatic space. This is a consequence of discounting future rewards. If the discount factor were one, then regardless of the trajectory of the motivational state, the sum of rewards would be equal for all policies that start from a certain homeostatic point and finish at another. In that case, the sum of rewards for all values of kx in the anticipatory shivering example would be zero, and anticipatory strategies would not be preferred to a reactive-homeostasis strategy.
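The anticipatory-shivering computation can be checked numerically. A quadratic drive over the deviation from the setpoint is assumed here purely for illustration (the argument only needs a convex drive); with this choice the optimum is kx = γ·lx/(1+γ), which approaches lx/2 as γ → 1, in line with Proposition 3:

```python
# Numeric sketch of the anticipatory-shivering example (equation 9).
# x_star, l (cold-induced temperature drop), and gamma are illustrative values.

x_star, l, gamma = 37.0, 2.0, 0.99

def d(x):
    return (x - x_star) ** 2          # assumed drive: squared deviation from the setpoint

def V(k):
    x1 = x_star + k                   # temperature after anticipatory shivering
    x2 = x1 - l                       # temperature after the cold hits
    return (d(x_star) - d(x1)) + gamma * (d(x1) - d(x2)) + gamma**2 * (d(x2) - d(x_star))

# Grid search over the shivering amplitude k in [0, l].
ks = [i * l / 400 for i in range(401)]
best_k = max(ks, key=V)

# With quadratic drive the analytic optimum is gamma*l/(1+gamma) -> l/2 as gamma -> 1.
print(best_k, gamma * l / (1 + gamma))
```

Rerunning with a smaller γ shifts the optimum below lx/2, which matches the text's prediction that a steeper discount rate weakens anticipatory responding.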
The model therefore predicts that decreasing the animal's discount rate (e.g., through pharmacological agents known to modulate the discount rate) should increase the probability that the animal shows an anticipatory response, and vice versa.

5 Discussion

Despite considerable differences, some common principles are argued to govern all motivational systems. We have proposed a model that captures some of these commonalities between homeostatically regulated motivational systems. We showed how the rewarding value of an outcome can be computed as a function of the animal's internal state and the constituting elements of the outcome, through the drive-reduction equation. We further demonstrated how this computed reward can be used by different learning mechanisms to form appropriate Pavlovian or instrumental associations. However, it should be noted that, concerning food (and other ingestible) outcomes, the basic form of the drive-reduction definition of reward (equation 2) has four interrelated problems, some of which have traditionally been used to criticize drive-reduction theory: (1) The post-digestive nutritive feedback of food, as defined by the drive-reduction equation, may occur hours after ingestion. Such a long delay between an instrumental response and its consequent drive-reduction effect (reward) would make it difficult for an appropriate association to be established. In fact, according to the temporal contiguity rule of associative learning, unconditioned stimuli must immediately follow conditioned stimuli for an association to be established between them. (2) Dopamine neurons, which are supposed to carry a reward learning signal, are demonstrated to show instantaneous burst activity in response to unexpected food rewards, without waiting for the food to be digested and the drive to be reduced [16].
(3) Intravenous injection (and, in some cases, intragastric intubation) of food is not rewarding, even though its drive-reduction effect equals that of the same food ingested orally. As mentioned before, oral ingestion of the same outcome is shown to have a significant reinforcing effect [7]. (4) Palatable foods have a reinforcing effect even when they have no nutritional value (i.e., they do not reduce any physiological need) [16]. Making the story even more complicated, this taste-dependent rewarding value of food is demonstrated to be modulated not only by the approximated nutritional content of the food, but also by the internal physiological state of the animal [16]. However, assuming that taste and other sensory properties of a food outcome give an estimate, k̂1,t, of its true nutritional content, k1,t, the rewarding effect of food can be approximated by equation 3 as soon as the food is sensed or taken. This association between sensory information and the post-ingestive effects of food might have been established through learning, or through evolution. This simple, plausible assumption resolves the four problems listed above for the classical notion of drive reduction. It explains that gustatory, olfactory, or visual information about food is necessary for its reinforcing effect to be induced, and thus that intravenous injection of food is not reinforcing due to the lack of appropriate sensory information. Moreover, there is no delay in this mechanism between food intake and its drive-reduction effect, and therefore dopamine neurons can respond instantaneously. Finally, as equation 3 predicts, this taste-dependent rewarding value is modulated by the motivational state of the animal. Previous computational accounts in the psychological literature attempting to incorporate internal-state dependence of motivation into RL models use ad hoc addition or multiplication of drive state with the a priori reward magnitude [17, 13].
In the machine learning literature, among others, Bersini [18] uses an RL model in which the agent is punished if its internal state transgresses a predefined viability zone. Simulation results show that such a setting motivates the agent to maintain its internal variables in a bounded zone. A more recent work [12] also uses the RL framework, with reward generated by drive difference; this design is demonstrated to allow the agent to balance the satisfaction of different drives. Apart from physiological and behavioral plausibility, the theoretical novelty of our proposed framework lies in formalizing the hypothetical concept of drive as a mapping from physiological to motivational state. This has allowed us to show analytically that reward maximization and deviation minimization can be seen as two sides of the same coin.

6 Acknowledgements

MK and BG are supported by grants from Frontiers du Vivant, the French MESR, CNRS, INSERM, ANR, ENP and NERF.

References

[1] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, 1998.
[2] W. Schultz, P. Dayan, and P. R. Montague. A neural substrate of prediction and reward. Science, 275(5306):1593–1599, 1997.
[3] F. M. Toates. Motivational Systems. Problems in the Behavioural Sciences. Cambridge University Press, New York, 1986.
[4] J. E. R. Staddon. Adaptive Behavior and Learning. Cambridge University Press, New York, 1983.
[5] K. C. Berridge. Motivation concepts in behavioral neuroscience. Physiol Behav, 81(2):179–209, 2004.
[6] S. C. Woods and R. J. Seeley. Hunger and energy homeostasis. In C. R. Gallistel, editor, Volume 3 of Steven's Handbook of Experimental Psychology: Learning, Motivation, and Emotion, pages 633–668. Wiley, New York, third edition, 2002.
[7] N. E. Miller and M. L. Kessen. Reward effects of food via stomach fistula compared with those of food via mouth. J Comp Physiol Psychol, 45(6):555–564, 1952.
[8] C. L. Hull.
Principles of Behavior: An Introduction to Behavior Theory. The Century Psychology Series. Appleton-Century-Crofts, New York, 1943.
[9] P. Skjoldager, P. J. Pierre, and G. Mittleman. Reinforcer magnitude and progressive ratio responding in the rat: Effects of increased effort, prefeeding, and extinction. Learn Motiv, 24(3):303–343, 1993.
[10] W. Hodos. Progressive ratio as a measure of reward strength. Science, 134:943–944, 1961.
[11] A. Dickinson and B. W. Balleine. The role of learning in motivation. In C. R. Gallistel, editor, Volume 3 of Steven's Handbook of Experimental Psychology: Learning, Motivation, and Emotion, pages 497–533. Wiley, New York, third edition, 2002.
[12] G. Konidaris and A. Barto. An adaptive robot motivational system. In Proceedings of the 9th International Conference on Simulation of Adaptive Behavior: From Animals to Animats 9, pages 346–356, 2006.
[13] N. D. Daw, Y. Niv, and P. Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat Neurosci, 8(12):1704–1711, 2005.
[14] Y. Niv, D. Joel, and P. Dayan. A normative perspective on motivation. Trends Cogn Sci, 10(8):375–381, 2006.
[15] J. G. Mansfield, R. S. Benedict, and S. C. Woods. Response specificity of behaviorally augmented tolerance to ethanol supports a learning interpretation. Psychopharmacology, 79(23):94–98, 1983.
[16] L. H. Schneider. Orosensory self-stimulation by sucrose involves brain dopaminergic mechanisms. Ann N Y Acad Sci, 575:307–319, 1989.
[17] J. Zhang, K. C. Berridge, A. J. Tindell, K. S. Smith, and J. W. Aldridge. A neural computational model of incentive salience. PLoS Comput Biol, 5(7), 2009.
[18] H. Bersini. Reinforcement learning for homeostatic endogenous variables. In Proceedings of the Third International Conference on Simulation of Adaptive Behavior: From Animals to Animats 3, pages 325–333, Brighton, United Kingdom, 1994. MIT Press.
|
2011
|
177
|
4,232
|
Semantic Labeling of 3D Point Clouds for Indoor Scenes

Hema Swetha Koppula*, Abhishek Anand*, Thorsten Joachims, and Ashutosh Saxena
Department of Computer Science, Cornell University. {hema,aa755,tj,asaxena}@cs.cornell.edu

Abstract

Inexpensive RGB-D cameras that give an RGB image together with depth data have become widely available. In this paper, we use this data to build 3D point clouds of full indoor scenes such as an office and address the task of semantic labeling of these 3D point clouds. We propose a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships, and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important, and we address that by using multiple types of edge potentials. The model admits efficient approximate inference, and we train it using a maximum-margin learning approach. In our experiments over a total of 52 3D scenes of homes and offices (composed from about 550 views, having 2495 segments labeled with 27 object classes), we get a performance of 84.06% in labeling 17 object classes for offices, and 73.38% in labeling 17 object classes for home scenes. Finally, we applied these algorithms successfully on a mobile robot for the task of finding objects in large cluttered rooms.1

1 Introduction

Inexpensive RGB-D sensors that augment an RGB image with depth data have recently become widely available. At the same time, years of research on SLAM (Simultaneous Localization and Mapping) now make it possible to reliably merge multiple RGB-D images into a single point cloud, easily providing an approximate 3D model of a complete indoor scene (e.g., a room). In this paper, we explore how this move from part-of-scene 2D images to full-scene 3D point clouds can improve the richness of models for object labeling.
In the past, a significant amount of work has been done on semantic labeling of 2D images. However, a lot of valuable information about the shape and geometric layout of objects is lost when a 2D image is formed from the corresponding 3D world. A classifier that has access to a full 3D model can use important geometric properties in addition to the local shape and appearance of an object. For example, many objects occur in characteristic relative geometric configurations (e.g., a monitor is almost always on a table), and many objects consist of visually distinct parts that occur in a certain relative configuration. More generally, a 3D model makes it easy to reason about a variety of properties based on 3D distances, volume, and local convexity. Some recent works attempt to first infer the geometric layout from 2D images for improving object detection [12, 14, 28]. However, such a geometric layout is not accurate enough to give significant improvement. Other recent work [35] considers labeling a scene using a single 3D view (i.e., a 2.5D representation). In our work, we first use SLAM to compose multiple views from a Microsoft Kinect RGB-D sensor into one 3D point cloud, providing each RGB pixel with an absolute 3D location in the scene. We then (over-)segment the scene and predict semantic labels for each segment (see Fig. 1). We predict not only coarse classes like in [1, 35] (i.e., wall, ground, ceiling, building), but also label individual objects (e.g., printer, keyboard, mouse). Furthermore, we model rich relational information beyond an associative coupling of labels [1].

1 This work was first presented at [16]. * indicates equal contribution.

Figure 1: Office scene (top) and home (bottom) scene with the corresponding label coloring above the images. The left-most is the original point cloud, the middle is the ground-truth labeling, and the right-most is the point cloud with predicted labels.
In this paper, we propose and evaluate the first model and learning algorithm for scene understanding that exploits rich relational information derived from the full-scene 3D point cloud for object labeling. In particular, we propose a graphical model that naturally captures the geometric relationships of a 3D scene. Each 3D segment is associated with a node, and pairwise potentials model the relationships between segments (e.g., co-planarity, convexity, visual similarity, object co-occurrences, and proximity). The model admits efficient approximate inference [25], and we show that it can be trained using a maximum-margin approach [7, 31, 34] that globally minimizes an upper bound on the training loss. We model both associative and non-associative coupling of labels. With a large number of object classes, the model's parsimony becomes important. Some features are better indicators of label similarity, while other features are better indicators of non-associative relations such as geometric arrangement (e.g., on-top-of, in-front-of). We therefore introduce parsimony into the model by using appropriate clique potentials rather than general clique potentials. Our model is highly flexible, and our software is available as a ROS package at: http://pr.cs.cornell.edu/sceneunderstanding

To empirically evaluate our model and algorithms, we perform several experiments over a total of 52 scenes of two types: offices and homes. These scenes were built from about 550 views from the Kinect sensor, and they are also available for public use. We consider labeling each segment (from a total of about 50 segments per scene) into 27 classes (17 for offices and 17 for homes, with 7 classes in common). Our experiments show that our method, which captures several local cues and contextual properties, achieves an overall performance of 84.06% on office scenes and 73.38% on home scenes.
We also consider the problem of labeling 3D segments with multiple attributes meaningful in a robotics context (such as small objects that can be manipulated, furniture, etc.). Finally, we successfully applied these algorithms on mobile robots for the task of finding objects in cluttered office scenes.

2 Related Work

There is a huge body of work in the area of scene understanding and object recognition from 2D images. Previous works focus on several different aspects: designing good local features such as HOG (histogram of oriented gradients) [5] and bag-of-words [4], and designing good global (context) features such as GIST features [33]. However, these approaches do not consider the relative arrangement of the parts of the object or of multiple objects with respect to each other. A number of works propose models that explicitly capture the relations between different parts of an object, e.g., Felzenszwalb et al.'s part-based models [6], and between different objects in 2D images [13, 14]. However, a lot of valuable information about the shape and geometric layout of objects is lost when a 2D image is formed from the corresponding 3D world. In some recent works, 3D layout or depth has been used for improving object detection (e.g., [11, 12, 14, 20, 21, 22, 27, 28]). Here a rough 3D scene geometry (e.g., the main surfaces in the scene) is inferred from a single 2D image or a stereo video stream. However, the estimated geometry is not accurate enough to give significant improvements. With 3D data, we can more precisely determine the shape, size, and geometric orientation of objects, along with several other properties, and therefore capture much stronger context. The recent availability of synchronized videos of both color and depth obtained from RGB-D (Kinect-style) depth cameras has shifted the focus to making use of both visual and shape features for object detection [9, 18, 19, 24, 26] and 3D segmentation (e.g., [3]).
These methods demonstrate that augmenting visual features with 3D information can enhance object detection in cluttered, real-world environments. However, these works do not make use of the contextual relationships between objects, which have been shown to be useful for tasks such as object detection and scene understanding in 2D images. Our goal is to perform semantic labeling of indoor scenes by modeling and learning several contextual relationships. There is also some recent work on labeling outdoor scenes obtained from LIDAR data into a few geometric classes (e.g., ground, building, trees, vegetation). [8, 30] capture context by designing node features, and [36] do so by stacking layers of classifiers; however, these methods do not model the correlation between the labels. Some of these works model certain contextual relationships in the learning model itself. For example, [1, 23] use associative Markov networks in order to favor similar labels for nodes in the cliques. However, many relative features between objects are not associative in nature. For example, the relationship "on top of" does not hold between two ground segments, i.e., a ground segment cannot be "on top of" another ground segment. Therefore, using an associative Markov network is very restrictive for our problem. All of these works [1, 23, 29, 30, 36] were designed for outdoor scenes with LIDAR data (without RGB values) and therefore would not apply directly to RGB-D data in indoor environments. Furthermore, these methods consider only very few geometric classes (between three and five) in outdoor environments, whereas we consider a large number of object classes for labeling indoor RGB-D data. The most related work to ours is [35], where the planar patches in a point cloud of an indoor scene are labeled with four geometric labels (walls, floors, ceilings, clutter). They use a CRF to model geometrical relationships such as orthogonal, parallel, adjacent, and coplanar.
Their learning method for estimating the parameters was based on maximizing the pseudo-likelihood, resulting in a suboptimal learning algorithm. In comparison, our basic representation is a 3D segment (as compared to planar patches), and we consider a much larger number of classes (beyond just the geometric classes). We also capture a much richer set of relationships between pairs of objects, and use a principled max-margin learning method to learn the parameters of our model.

3 Approach

We now outline our approach, including the model, its inference methods, and the learning algorithm. Our input is multiple Kinect RGB-D images of a scene (i.e., a room) stitched into a single 3D point cloud using RGBDSLAM.2 Each such point cloud is then over-segmented based on smoothness (i.e., difference in the local surface normals) and continuity of surfaces (i.e., distance between the points). These segments are the atomic units in our model, and our goal is to label each of them. Before getting into the technical details of the model, the following outlines the properties we aim to capture:

Visual appearance. The reasonable success of object detection in 2D images shows that visual appearance is a good indicator for labeling scenes. We therefore model the local color, texture, gradients of intensities, etc., for predicting the labels. In addition, we also model the property that if nearby segments are similar in visual appearance, they are more likely to belong to the same object.

Local shape and geometry. Objects have characteristic shapes: for example, a table is horizontal, a monitor is vertical, a keyboard is uneven, and a sofa is usually smoothly curved. Furthermore, parts of an object often form a convex shape. We compute 3D shape features to capture this.

Geometrical context. Many sets of objects occur in characteristic relative geometric configurations.
For example, a monitor is always on-top-of a table, chairs are usually found near tables, and a keyboard is in-front-of a monitor. This means that our model needs to capture non-associative relationships (i.e., that neighboring segments differ in their labels in specific patterns). Note that the examples given above are just illustrative. For any particular practical application, there will likely be other properties that could also be included. As demonstrated in the following section, our model is flexible enough to include a wide range of features.

3.1 Model Formulation

We model the three-dimensional structure of a scene using a model isomorphic to a Markov random field with log-linear node and pairwise edge potentials. Given a segmented point cloud x = (x_1, ..., x_N) consisting of segments x_i, we aim to predict a labeling y = (y_1, ..., y_N) for the segments. Each segment label y_i is itself a vector of K binary class labels y_i = (y_i^1, ..., y_i^K), with each y_i^k ∈ {0, 1} indicating whether segment i is a member of class k. Note that multiple y_i^k can be 1 for each segment (e.g., a segment can be both a "chair" and a "movable object"). We use

2 http://openslam.org/rgbdslam.html

Table 1: Node and edge features (excerpt, with feature dimensionalities):
N6. Vertical component of the normal: n_iz (1)
N7. Vertical position of centroid: c_iz (1)
N8. Vertical and horizontal extent of bounding box (2)
N9. Distance from the scene boundary (Fig. 2) (1)
E6. Difference in angle with vertical: cos(n_iz) cos(n_jz) (1)
E8. Distance between closest points: min_{u∈s_i, v∈s_j} d(u, v) (Fig. 2) (1)
E8. Relative position from camera (in front of / behind) (Fig. 2) (1)

location above ground, and its shape. Some features capture the spatial location of an object in the scene (e.g., N9). We connect two segments (nodes) i and j by an edge if there exists a point in segment i and a point in segment j which are less than "context range" distance apart.
such multi-labelings in our attribute experiments, where each segment can have multiple attributes, but not in the segment labeling experiments, where each segment can have only one label.

Figure 2: Illustration of a few features. (Left) Features N11 and E9: segment i is in front of segment j if rh_i < rh_j. (Middle) Two connected segments i and j form a convex shape if (r_i − r_j) · n̂_i ≥ 0 and (r_j − r_i) · n̂_j ≥ 0. (Right) Illustration of feature E8.

For a segmented point cloud x, the prediction ŷ is computed as the argmax of a discriminant function f_w(x, y) that is parameterized by a vector of weights w:

ŷ = argmax_y f_w(x, y)   (1)

The discriminant function captures the dependencies between segment labels as defined by an undirected graph (V, E) with vertices V = {1, ..., N} and edges E ⊆ V × V. We describe in Section 3.2 how this graph is derived from the spatial proximity of the segments. Given (V, E), we define the following discriminant function based on individual segment features φ_n(i) and edge features φ_t(i, j), as further described below.
f_w(y, x) = Σ_{i∈V} Σ_{k=1}^{K} y_i^k (w_n^k · φ_n(i)) + Σ_{(i,j)∈E} Σ_{T_t∈T} Σ_{(l,k)∈T_t} y_i^l y_j^k (w_t^{lk} · φ_t(i, j))   (2)

The node feature map φ_n(i) describes segment i through a vector of features, and there is one weight vector w_n^k for each of the K classes. Examples of such features are the ones capturing local visual appearance, shape and geometry. The edge feature maps φ_t(i, j) describe the relationship between segments i and j. Examples of edge features are the ones capturing similarity in visual appearance and geometric context.3 There may be multiple types t of edge feature maps φ_t(i, j), and each type has a graph over the K classes with edges T_t. If T_t contains an edge between classes l and k, then this feature map and a weight vector w_t^{lk} are used to model the dependencies between classes l and k. If the edge is not present in T_t, then φ_t(i, j) is not used.

We say that a type t of edge features is modeled by an associative edge potential if T_t = {(k, k) | k = 1..K}, by a non-associative edge potential if T_t = {(l, k) | l, k = 1..K}, and by an object-associative edge potential if T_t = {(l, k) | ∃ object : l, k ∈ parts(object)}.

Parsimonious model. In our experiments we distinguished between two types of edge feature maps—"object-associative" features φ_oa(i, j) used between classes that are parts of the same object (e.g., "chair base", "chair back" and "chair back rest"), and "non-associative" features φ_na(i, j) that are used between any pair of classes. Examples of features in the object-associative feature map φ_oa(i, j) include similarity in appearance, co-planarity, and convexity—i.e., features that indicate whether two adjacent segments belong to the same class or object. A key reason for distinguishing between object-associative and non-associative features is parsimony of the model.
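For concreteness, the scoring in Eq. (2) can be sketched in a few lines of Python. This is a minimal sketch with assumed data layouts (dense arrays for φ_n, dictionaries for the edge feature maps and the class-pair sets T_t), not the authors' implementation:

```python
import numpy as np

def discriminant(y, phi_n, phi_t, w_n, w_t, edges, T):
    """Score f_w(x, y) from Eq. (2).

    y      : (N, K) binary label matrix, y[i, k] = y_i^k
    phi_n  : (N, Dn) node feature vectors phi_n(i)
    phi_t  : dict mapping edge type t -> {(i, j): feature vector phi_t(i, j)}
    w_n    : (K, Dn) per-class node weights w_n^k
    w_t    : dict mapping edge type t -> {(l, k): weight vector w_t^{lk}}
    edges  : list of segment pairs (i, j) in E
    T      : dict mapping edge type t -> set of class pairs T_t
    """
    score = 0.0
    N, K = y.shape
    # Node potentials: sum_i sum_k y_i^k (w_n^k . phi_n(i))
    for i in range(N):
        for k in range(K):
            if y[i, k]:
                score += w_n[k] @ phi_n[i]
    # Edge potentials: only class pairs (l, k) in T_t contribute for type t
    for (i, j) in edges:
        for t, pairs in T.items():
            for (l, k) in pairs:
                if y[i, l] and y[j, k]:
                    score += w_t[t][(l, k)] @ phi_t[t][(i, j)]
    return score
```

The prediction of Eq. (1) is then the labeling y maximizing this score, which Section 3.2.1 computes via a mixed-integer formulation.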
In this parsimonious model (referred to as svm mrf parsimon), we model object-associative features using object-associative edge potentials and non-associative features using non-associative edge potentials. As not all edge features are non-associative, we avoid learning weight vectors for relationships which do not exist. Note that |T_na| >> |T_oa| since, in practice, the number of parts of an object is much less than K. Due to this, the model we learn with both types of edge features has far fewer parameters than a model learned with all edge features treated as non-associative.

3.2 Features

Table 1 summarizes the features used in our experiments. λ_{i0}, λ_{i1} and λ_{i2} are the 3 eigenvalues of the scatter matrix computed from the points of segment i, in decreasing order. c_i is the centroid of segment i. r_i is the ray vector to the centroid of segment i from the camera position in which it was captured. rh_i is the projection of r_i on the horizontal plane. n̂_i is the unit normal of segment i, oriented to point towards the camera (r_i · n̂_i < 0).

The node features φ_n(i) consist of visual appearance features based on the histogram of HSV values and the histogram of oriented gradients (HOG), as well as local shape and geometry features that capture properties such as how planar a segment is, its absolute location above ground, and its shape. Some features capture the spatial location of an object in the scene (e.g., N9).

3Even though it is not represented in the notation, note that both the node feature map φ_n(i) and the edge feature maps φ_t(i, j) can compute their features based on the full x, not just x_i and x_j.

Table 1: Node and edge features.

Node features for segment i (Description / Count):
  Visual Appearance (48)
    N1. Histogram of HSV color values (14)
    N2. Average HSV color values (3)
    N3. Average of HOG features of the blocks in the image spanned by the points of a segment (31)
  Local Shape and Geometry (8)
    N4. Linearness (λ_{i0} − λ_{i1}), planarness (λ_{i1} − λ_{i2}) (2)
    N5. Scatter: λ_{i0} (1)
    N6. Vertical component of the normal: n̂_{iz} (1)
    N7. Vertical position of centroid: c_{iz} (1)
    N8. Vertical and horizontal extent of bounding box (2)
    N9. Distance from the scene boundary (Fig. 2) (1)

Edge features for (segment i, segment j) (Description / Count):
  Visual Appearance (associative) (3)
    E1. Difference of average HSV color values (3)
  Local Shape and Geometry (associative) (2)
    E2. Coplanarity and convexity (Fig. 2) (2)
  Geometric context (non-associative) (6)
    E3. Horizontal distance between centroids (1)
    E4. Vertical displacement between centroids: (c_{iz} − c_{jz}) (1)
    E5. Angle between normals (dot product): n̂_i · n̂_j (1)
    E6. Difference in angle with vertical: cos⁻¹(n̂_{iz}) − cos⁻¹(n̂_{jz}) (1)
    E7. Distance between closest points: min_{u∈s_i, v∈s_j} d(u, v) (Fig. 2) (1)
    E8. Relative position from camera (in front of / behind) (Fig. 2) (1)

We connect two segments (nodes) i and j by an edge if there exists a point in segment i and a point in segment j which are less than context range distance apart. This captures the closest distance between two segments (as compared to the distance between their centroids)—we study the effect of the context range in Section 4. The edge features φ_t(i, j) (Table 1, bottom) consist of associative features (E1–E2) based on visual appearance and local shape, as well as non-associative features (E3–E8) that capture the tendencies of two objects to occur in certain configurations.

Note that our features are insensitive to horizontal translation and rotation of the camera. However, our features place a lot of emphasis on the vertical direction because gravity influences the shape and relative positions of objects to a large extent.

3.2.1 Computing Predictions

Solving the argmax in Eq. (1) for the discriminant function in Eq. (2) is NP hard. However, its equivalent formulation as the following mixed-integer program has a linear relaxation with several desirable properties.
ŷ = argmax_y max_z Σ_{i∈V} Σ_{k=1}^{K} y_i^k (w_n^k · φ_n(i)) + Σ_{(i,j)∈E} Σ_{T_t∈T} Σ_{(l,k)∈T_t} z_{ij}^{lk} (w_t^{lk} · φ_t(i, j))   (3)

s.t. ∀ i, j, l, k :  z_{ij}^{lk} ≤ y_i^l,  z_{ij}^{lk} ≤ y_j^k,  y_i^l + y_j^k ≤ z_{ij}^{lk} + 1,  z_{ij}^{lk}, y_i^l ∈ {0, 1}   (4)

Note that the products y_i^l y_j^k have been replaced by auxiliary variables z_{ij}^{lk}. Relaxing the variables z_{ij}^{lk} and y_i^l to the interval [0, 1] leads to a linear program that can be shown to always have half-integral solutions (i.e., y_i^l only takes the values {0, 0.5, 1} at the solution) [10]. Furthermore, this relaxation can also be solved as a quadratic pseudo-Boolean optimization problem using a graph-cut method [25], which is orders of magnitude faster than using a general-purpose LP solver (about 10 seconds for labeling a typical scene in our experiments). We refer to the solution of this relaxation as ŷ_cut.

The relaxation solution ŷ_cut has an interesting property called persistence [2, 10]: any segment for which the value of y_i^l is integral in ŷ_cut (i.e., does not take the value 0.5) is labeled just as it would be in the optimal mixed-integer solution.

Since every segment in our experiments is in exactly one class, we also consider the linear relaxation from above with the additional constraint ∀i : Σ_{j=1}^{K} y_i^j = 1. This problem can no longer be solved via graph cuts and is not half-integral. We refer to its solution as ŷ_LP. Computing ŷ_LP for a scene takes 11 minutes on average.4 Finally, we can also compute the exact mixed-integer solution including the additional constraint ∀i : Σ_{j=1}^{K} y_i^j = 1 using a general-purpose MIP solver,4 for which we set a time limit of 30 minutes; this takes 18 minutes on average per scene. All runtimes are for single-CPU implementations using 17 classes.

When using this algorithm in practice on new scenes (e.g., during our robotic experiments), objects other than the 27 objects we modeled might be present (e.g., coffee mugs).
So we relax the constraint ∀i : Σ_{j=1}^{K} y_i^j = 1 to ∀i : Σ_{j=1}^{K} y_i^j ≤ 1. This increases precision greatly at the cost of some drop in recall, and the relaxed MIP also takes less time to solve.

3.2.2 Learning Algorithm

We take a large-margin approach to learning the parameter vector w of Eq. (2) from labeled training examples (x_1, y_1), ..., (x_n, y_n) [31, 32, 34]. Compared to Conditional Random Field training [17] using maximum likelihood, this has the advantage that the partition function normalizing Eq. (2) does not need to be computed, and that the training problem can be formulated as a convex program for which efficient algorithms exist.

4http://www.tfinley.net/software/pyglpk/readme.html

Our method optimizes a regularized upper bound on the training error

R(h) = (1/n) Σ_{j=1}^{n} Δ(y_j, ŷ_j),   (5)

where ŷ_j is the optimal solution of Eq. (1) and Δ(y, ŷ) = Σ_{i=1}^{N} Σ_{k=1}^{K} |y_i^k − ŷ_i^k|. To simplify notation, note that Eq. (3) can be equivalently written as wᵀΨ(x, y) by appropriately stacking the w_n^k and w_t^{lk} into w, and the y_i^k φ_n(i) and z_{ij}^{lk} φ_t(i, j) into Ψ(x, y), where each z_{ij}^{lk} is consistent with Eq. (4) given y. Training can then be formulated as the following convex quadratic program [15]:

min_{w,ξ}  (1/2) wᵀw + Cξ   (6)
s.t. ∀ ȳ_1, ..., ȳ_n ∈ {0, 0.5, 1}^{N·K} :  (1/n) wᵀ Σ_{i=1}^{n} [Ψ(x_i, y_i) − Ψ(x_i, ȳ_i)] ≥ (1/n) Σ_{i=1}^{n} Δ(y_i, ȳ_i) − ξ

While the number of constraints in this quadratic program is exponential in n, N, and K, it can nevertheless be solved efficiently using the cutting-plane algorithm for training structural SVMs [15]. The algorithm maintains a working set of constraints, and it can be shown to provide an ε-accurate solution after adding at most O(R²C/ε) constraints (ignoring log terms). The algorithm merely needs access to an efficient method for computing

ȳ_i = argmax_{y ∈ {0, 0.5, 1}^{N·K}} [ wᵀΨ(x_i, y) + Δ(y_i, y) ]   (7)

Due to the structure of Δ(·, ·), this problem is identical to the relaxed prediction problem in Eqs. (3)–(4) and can be solved efficiently using graph cuts.
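On tiny instances, the relaxation in Eqs. (3)–(4) and the loss-augmented inference of Eq. (7) can be illustrated by brute force. This is a sketch under assumed precomputed score arrays, not the graph-cut solver: for a fixed y, each z_{ij}^{lk} takes its LP-optimal value (min(y_i^l, y_j^k) for a positive coefficient, max(0, y_i^l + y_j^k − 1) for a negative one), and half-integrality [10] lets us enumerate y over {0, 0.5, 1}^{N·K}:

```python
import itertools
import numpy as np

def relaxed_objective(y, node_scores, edge_scores, edges):
    """Objective of Eqs. (3)-(4) for a fixed y with entries in {0, 0.5, 1}.
    node_scores[i, k]         = w_n^k . phi_n(i)
    edge_scores[(i, j)][l, k] = w_t^{lk} . phi_t(i, j), summed over types."""
    val = float((y * node_scores).sum())
    for (i, j) in edges:
        c = edge_scores[(i, j)]
        for l in range(y.shape[1]):
            for k in range(y.shape[1]):
                # LP-optimal z given y: the upper bounds bind for a
                # positive coefficient, the lower bound for a negative one.
                z = min(y[i, l], y[j, k]) if c[l, k] > 0 \
                    else max(0.0, y[i, l] + y[j, k] - 1.0)
                val += c[l, k] * z
    return val

def solve_relaxation(node_scores, edge_scores, edges):
    """Enumerate y over {0, 0.5, 1}^(N*K); tiny instances only."""
    N, K = node_scores.shape
    best_v, best_y = -np.inf, None
    for flat in itertools.product([0.0, 0.5, 1.0], repeat=N * K):
        y = np.array(flat).reshape(N, K)
        v = relaxed_objective(y, node_scores, edge_scores, edges)
        if v > best_v:
            best_v, best_y = v, y
    return best_y, best_v

def loss_augmented(node_scores, y_true):
    """Eq. (7): since Delta(y_i, y) decomposes over individual labels,
    adding (1 - 2*y_true[i, k]) to each node score (plus a constant that
    does not affect the argmax) folds the Hamming loss into the same
    inference problem used for prediction."""
    return node_scores + (1.0 - 2.0 * y_true)
```

This makes concrete why the cutting-plane algorithm only needs the prediction routine: the most violated constraint is found by running the same solver on loss-augmented node scores.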
Since our training problem is an overgenerating formulation as defined in [7], the value of ξ at the solution is an upper bound on the training error in Eq. (5). Furthermore, [7] observed empirically that the relaxed prediction ŷ_cut after training w via Eq. (6) is typically largely integral, meaning that most labels y_i^k of the relaxed solution are the same as in the optimal mixed-integer solution due to persistence. We made the same observation in our experiments as well.

4 Experiments

4.1 Data

We consider labeling object segments in full 3D scenes (as compared to 2.5D data from a single view). For this purpose, we collected data from 24 office and 28 home scenes (composed from about 550 views). Each scene was reconstructed from about 8-9 RGB-D views from a Kinect sensor and contains about one million colored points. We first over-segment the 3D scene (as described earlier) to obtain the atomic units of our representation. For training, we manually labeled the segments, and we selected the labels which were present in a minimum of 5 scenes in the dataset. Specifically, the office labels are: {wall, floor, tableTop, tableDrawer, tableLeg, chairBackRest, chairBase, chairBack, monitor, printerFront, printerSide, keyboard, cpuTop, cpuFront, cpuSide, book, paper}, and the home labels are: {wall, floor, tableTop, tableDrawer, tableLeg, chairBackRest, chairBase, sofaBase, sofaArm, sofaBackRest, bed, bedSide, quilt, pillow, shelfRack, laptop, book}. This gave us a total of 1108 labeled segments in the office scenes and 1387 segments in the home scenes. Often one object may be divided into multiple segments because of over-segmentation. We have made this data available at: http://pr.cs.cornell.edu/sceneunderstanding/data/data.php.

4.2 Results

Table 2 shows the results, obtained using 4-fold cross-validation and averaging performance across the folds, for models trained separately on the home and office datasets.
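As a concrete reference for how the numbers in Table 2 are aggregated, micro and macro averaging over single-label predictions can be sketched as follows (a hypothetical helper, not our evaluation code; with exactly one predicted label per segment, micro precision equals micro recall equals accuracy):

```python
from collections import Counter

def micro_macro(y_true, y_pred, classes):
    """y_true, y_pred: one class label per segment."""
    correct = Counter()
    true_ct = Counter(y_true)
    pred_ct = Counter(y_pred)
    for t, p in zip(y_true, y_pred):
        if t == p:
            correct[t] += 1
    # Micro: pool all segments; with one label each, P = R = accuracy.
    micro = sum(correct.values()) / len(y_true)
    # Macro: unweighted average of per-class precision / recall.
    prec = [correct[c] / pred_ct[c] if pred_ct[c] else 0.0 for c in classes]
    rec = [correct[c] / true_ct[c] if true_ct[c] else 0.0 for c in classes]
    return micro, sum(prec) / len(classes), sum(rec) / len(classes)
```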
We use both macro and micro averaging to aggregate precision and recall over the various classes. Since our algorithm can predict only one label for each segment, micro precision and recall are the same as the percentage of correctly classified segments. Macro precision and recall are, respectively, the averages of precision and recall over all classes. The optimal C value is determined separately for each of the algorithms by cross-validation.

Figure 1 shows the original point cloud, ground-truth and predicted labels for one office (top) and one home scene (bottom). We see that on the majority of the classes we are able to predict the correct label.

Table 2: Learning experiment statistics. The table shows average micro precision/recall, and average macro precision and recall, for home and office scenes.

                                               Office Scenes                Home Scenes
                                          micro     macro                micro     macro
features               algorithm            P/R  Precision  Recall         P/R  Precision  Recall
None                   max class          26.23      26.23    5.88       29.38      29.38    5.88
Image Only             svm node only      46.67      35.73   31.67       38.00      15.03   14.50
Shape Only             svm node only      75.36      64.56   60.88       56.25      35.90   36.52
Image+Shape            svm node only      77.97      69.44   66.23       56.50      37.18   34.73
Image+Shape & context  single frames      84.32      77.84   68.12       69.13      47.84   43.62
Image+Shape & context  svm mrf assoc      75.94      63.89   61.79       62.50      44.65   38.34
Image+Shape & context  svm mrf nonassoc   81.45      76.79   70.07       72.38      57.82   53.62
Image+Shape & context  svm mrf parsimon   84.06      80.52   72.64       73.38      56.81   54.80

The algorithm makes mistakes in some cases, and these usually tend to be reasonable, such as a pillow getting confused with the bed, or a table-top getting confused with the shelf-rack. One of our goals is to study the effect of various factors, and we therefore compared different versions of the algorithms with various settings. We discuss them in the following.

Do Image and Point-Cloud Features Capture Complementary Information? The RGB-D data contains both image and depth information, and enables us to compute a wide variety of features.
In this experiment, we compare the two kinds of features: Image (RGB) and Shape (point cloud) features. To show the effect of the features independent of the effect of context, we use only the node potentials of our model, referred to as svm node only in Table 2. The svm node only model is equivalent to the multi-class SVM formulation [15]. Table 2 shows that Shape features are more effective than Image features, and that their combination works better on both precision and recall. This indicates that the two types of features offer complementary information and that their combination is better for our classification task.

How Important is Context? Using our svm mrf parsimon model as described in Section 3.1, we show significant improvements in performance over the svm node only model on both datasets. In office scenes, the micro precision increased by 6.09% over the best svm node only model that does not use any context. In home scenes the increase is much higher, 16.88%.

The type of contextual relations we capture depends on the type of edge potentials we model. To study this, we compared our method with models using only associative or only non-associative edge potentials, referred to as svm mrf assoc and svm mrf nonassoc respectively. We observed that modeling all edge features using associative potentials performs poorly compared to our full model. In fact, using only associative potentials showed a drop in performance compared to the svm node only model on the office dataset. This indicates it is important to capture the relations between regions having different labels. Our svm mrf nonassoc model does so by modeling all edge features using non-associative potentials, which can favor or disfavor labels of different classes for nearby segments. It gives higher precision and recall compared to svm node only and svm mrf assoc. This shows that modeling using non-associative potentials is a better choice for our labeling problem.
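The difference between these classes of potentials is just the set of class pairs T_t that an edge-feature type fires on (cf. Section 3.1). A small sketch (function names are ours, chosen for illustration):

```python
def associative_pairs(K):
    """T_t for an associative potential: only same-class pairs (k, k)."""
    return {(k, k) for k in range(K)}

def non_associative_pairs(K):
    """T_t for a non-associative potential: all K^2 class pairs."""
    return {(l, k) for l in range(K) for k in range(K)}

def object_associative_pairs(parts_of):
    """T_t for an object-associative potential: pairs of classes that are
    parts of the same object. parts_of: object name -> list of classes."""
    pairs = set()
    for parts in parts_of.values():
        pairs.update((l, k) for l in parts for k in parts)
    return pairs
```

With K = 17 classes, a non-associative type carries 289 class pairs against 17 for an associative one, which is exactly the parameter-count gap the parsimonious model exploits.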
However, not all the edge features are non-associative in nature; modeling them all using non-associative potentials could be overkill (each non-associative feature adds K² more parameters to be learned). Therefore, using our svm mrf parsimon model to model these relations achieves higher performance on both datasets.

Figure 3: Effect of context range on precision (= recall here).

How Large should the Context Range be? Context relationships between different objects can be meaningful at different spatial distances, and this range may vary depending on the environment as well. For example, in an office, a keyboard and a monitor go together, but they may have little relation with a sofa that is slightly farther away. In a house, a sofa and a table may go together even if they are farther apart. In order to study this, we evaluated our svm mrf parsimon with varying context range for determining the neighborhood (see Figure 3 for the average micro precision vs. range plot). Note that the context range is measured from the boundary of one segment to the boundary of the other, and hence it is somewhat independent of the size of the object. We note that increasing the context range increases the performance up to some level, after which it drops slightly. We attribute this to the fact that increasing the context range can connect irrelevant objects with an edge, and with limited training data, spurious relationships may be learned. We observe that the optimal context range is around 0.3 meters for office scenes and 0.6 meters for home scenes.

How does a Full 3D Model Compare to a 2.5D Model? In Table 2, we compare the performance of our full model with a model that was trained and tested on single views of the same scenes. For this comparison, the training folds were consistent with the other experiments; however, the segmentation of the point clouds was different (because each point cloud is from a single view).
This makes the micro precision values meaningless because the distribution of labels is not the same in the two cases. In particular, many large objects in scenes (e.g., wall, ground) get split into multiple segments in single views. We observed that the macro precision and recall are higher when multiple views are combined to form the scene. We attribute the improvement in macro precision and recall to the fact that larger scenes have more context, and the models are more complete because of the multiple views.

What is the effect of the inference method? The results for the svm mrf algorithms in Table 2 were generated using the MIP solver. We observed that the MIP solver is typically 2-3% more accurate than the LP solver. The graph-cut algorithm, however, gives higher precision and lower recall on both datasets. For example, on the office data, graph-cut inference for our svm mrf parsimon gave a micro precision of 90.25 and a micro recall of 61.74. Here, micro precision and recall are not the same because some segments might not get any label. Since it is orders of magnitude faster, graph-cut inference is ideal for real-time robotic applications.

4.3 Robotic experiments

Figure 4: Cornell's POLAR robot using our classifier for detecting a keyboard in a cluttered room.

The ability to label segments is very useful for robotics applications, for example in detecting objects (so that a robot can find/retrieve an object on request) or for other robotic tasks. We therefore performed two relevant robotic experiments.

Attribute Learning: In some robotic tasks, such as robotic grasping, it is not important to know the exact object category; just knowing a few attributes of an object may be useful. For example, if a robot has to clean a floor, it would help to know which objects it can move and which it cannot. If it has to place an object, it should place it on a horizontal surface, preferably one where humans do not sit.
With this motivation we designed 8 attributes each for the home and office scenes, giving 10 unique attributes in total, comprised of: wall, floor, flat-horizontal-surfaces, furniture, fabric, heavy, seating-areas, small-objects, table-top-objects, electronics. Note that each segment in the point cloud can have multiple attributes, and we can therefore learn these attributes using our model, which naturally allows multiple labels per segment. We compute the precision and recall over the attributes by counting how many attributes were correctly inferred. In the home scenes we obtained a precision of 83.12% and a recall of 70.03%, and in the office scenes we obtained 87.92% precision and 71.93% recall.

Object Detection: We finally use our algorithm on two mobile robots, mounted with Kinects, for the goal of finding objects such as a keyboard in cluttered office scenes. The following video shows our robot successfully finding a keyboard in an office: http://pr.cs.cornell.edu/sceneunderstanding/

In conclusion, we have proposed and evaluated the first model and learning algorithm for scene understanding that exploits rich relational information from the full-scene 3D point cloud. We applied this technique to the object labeling problem, and studied the effects of various factors on a large dataset. Our robotic applications show that such inexpensive RGB-D sensors can be extremely useful for scene understanding for robots. This research was funded in part by NSF Award IIS-0713483.

References

[1] D. Anguelov, B. Taskar, V. Chatalbashev, D. Koller, D. Gupta, G. Heitz, and A. Ng. Discriminative learning of markov random fields for segmentation of 3d scan data. In CVPR, 2005.
[2] E. Boros and P. Hammer. Pseudo-boolean optimization. Discrete Appl. Math., 123(1-3):155–225, 2002.
[3] A. Collet Romea, S. Srinivasa, and M. Hebert. Structure discovery in multi-modal data: a region-based approach. In ICRA, 2011.
[4] G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray.
Visual categorization with bags of keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV, 2004.
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[6] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In CVPR, 2008.
[7] T. Finley and T. Joachims. Training structural svms when exact inference is intractable. In ICML, 2008.
[8] A. Golovinskiy, V. G. Kim, and T. Funkhouser. Shape-based recognition of 3d point clouds in urban environments. In ICCV, 2009.
[9] S. Gould, P. Baumstarck, M. Quigley, A. Y. Ng, and D. Koller. Integrating visual and range data for robotic object detection. In ECCV workshop Multi-camera Multi-modal (M2SFA2), 2008.
[10] P. Hammer, P. Hansen, and B. Simeone. Roof duality, complementation and persistency in quadratic 0–1 optimization. Mathematical Programming, 28(2):121–155, 1984.
[11] V. Hedau, D. Hoiem, and D. Forsyth. Thinking inside the box: Using appearance models and context based on room geometry. In ECCV, 2010.
[12] G. Heitz, S. Gould, A. Saxena, and D. Koller. Cascaded classification models: Combining models for holistic scene understanding. In NIPS, 2008.
[13] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In ECCV, 2008.
[14] D. Hoiem, A. A. Efros, and M. Hebert. Putting objects in perspective. In CVPR, 2006.
[15] T. Joachims, T. Finley, and C. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27–59, 2009.
[16] H. Koppula, A. Anand, T. Joachims, and A. Saxena. Labeling 3d scenes for personal assistant robots. In R:SS workshop on RGB-D cameras, 2011.
[17] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[18] K. Lai, L. Bo, X. Ren, and D. Fox. A large-scale hierarchical multi-view RGB-D object dataset. In ICRA, 2011.
[19] K. Lai, L. Bo, X. Ren, and D. Fox.
Sparse Distance Learning for Object Recognition Combining RGB and Depth Information. In ICRA, 2011. [20] D. C. Lee, A. Gupta, M. Hebert, and T. Kanade. Estimating spatial layout of rooms using volumetric reasoning about objects and surfaces. In NIPS, 2010. [21] B. Leibe, N. Cornelis, K. Cornelis, and L. V. Gool. Dynamic 3d scene analysis from a moving vehicle. In CVPR, 2007. [22] C. Li, A. Kowdle, A. Saxena, and T. Chen. Towards holistic scene understanding: Feedback enabled cascaded classification models. In NIPS, 2010. [23] D. Munoz, N. Vandapel, and M. Hebert. Onboard contextual classification of 3-d point clouds with learned high-order markov random fields. In ICRA, 2009. [24] M. Quigley, S. Batra, S. Gould, E. Klingbeil, Q. V. Le, A. Wellman, and A. Y. Ng. High-accuracy 3d sensing for mobile manipulation: Improving object detection and door opening. In ICRA, 2009. [25] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary mrfs via extended roof duality. In CVPR, 2007. [26] R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz. Towards 3d point cloud based object maps for household environments. Robot. Auton. Syst., 56:927–941, 2008. [27] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In NIPS 18, 2005. [28] A. Saxena, M. Sun, and A. Y. Ng. Make3d: Learning 3d scene structure from a single still image. IEEE PAMI, 31(5):824–840, 2009. [29] R. Shapovalov and A. Velizhev. Cutting-plane training of non-associative markov network for 3d point cloud segmentation. In 3DIMPVT, 2011. [30] R. Shapovalov, A. Velizhev, and O. Barinova. Non-associative markov networks for 3d point cloud classification. In ISPRS Commission III symposium - PCV 2010, 2010. [31] B. Taskar, V. Chatalbashev, and D. Koller. Learning associative markov networks. In ICML. ACM, 2004. [32] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In NIPS, 2003. [33] A. Torralba. Contextual priming for object detection. 
IJCV, 53(2):169–191, 2003. [34] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004. [35] X. Xiong and D. Huber. Using context to create semantic 3d models of indoor environments. In BMVC, 2010. [36] X. Xiong, D. Munoz, J. A. Bagnell, and M. Hebert. 3-d scene analysis via sequenced predictions over points and regions. In ICRA, 2011. 9
Higher-Order Correlation Clustering for Image Segmentation Sungwoong Kim Department of EE, KAIST Daejeon, South Korea sungwoong.kim01@gmail.com Sebastian Nowozin Microsoft Research Cambridge Cambridge, UK Sebastian.Nowozin@microsoft.com Pushmeet Kohli Microsoft Research Cambridge Cambridge, UK pkohli@microsoft.com Chang D. Yoo Department of EE, KAIST Daejeon, South Korea cdyoo@ee.kaist.ac.kr Abstract For many of the state-of-the-art computer vision algorithms, image segmentation is an important preprocessing step. As such, several image segmentation algorithms have been proposed; however, many are adopted with reservation due to their high computational load and many hand-tuned parameters. Correlation clustering, a graph-partitioning algorithm often used in natural language processing and document clustering, has the potential to perform better than previously proposed image segmentation algorithms. We improve the basic correlation clustering formulation by taking into account higher-order cluster relationships. This improves clustering in the presence of local boundary ambiguities. We first apply pairwise correlation clustering to image segmentation over a pairwise superpixel graph and then develop higher-order correlation clustering over a hypergraph that considers higher-order relations among superpixels. Fast inference is possible via linear programming (LP) relaxation, and effective parameter learning is possible via the structured support vector machine (S-SVM). Experimental results on various datasets show that the proposed higher-order correlation clustering outperforms other state-of-the-art image segmentation algorithms. 1 Introduction Image segmentation, a partitioning of an image into disjoint regions such that each region is homogeneous, is an important preprocessing step for many of the state-of-the-art algorithms for high-level image/scene understanding for three reasons.
First, the coherent support of a region, commonly assumed to be of a single label, serves as a good prior for many labeling tasks. Second, these coherent regions allow a more consistent feature extraction that can incorporate surrounding contextual information by pooling many feature responses over the region. Third, compared to pixels, a small number of larger homogeneous regions significantly reduces the computational cost for a successive labeling task. Image segmentation algorithms can be categorized into either non-graph-based or graph-based algorithms. Well-known non-graph-based algorithms are the mode-seeking algorithms such as K-means [1], mean-shift [2], and EM [3], while well-known graph-based algorithms include min-cuts [4], normalized cuts [5], and the Felzenszwalb-Huttenlocher (FH) segmentation algorithm [6]. In comparison to non-graph-based segmentations, graph-based segmentations have been shown to produce consistent segmentations by adaptively balancing local judgements of similarity [7]. Moreover, the graph-based segmentation algorithms with global objective functions such as the min-cuts and normalized cuts have been shown to perform better than the FH algorithm, which is based on a local objective function, since the global-objective algorithms benefit from the global nature of the information [7]. However, in contrast to the min-cuts and normalized cuts, which are node-labeling algorithms, the FH algorithm benefits from edge labeling in that it leads to faster inference and does not require a pre-specified number of segments in each image [7]. Correlation clustering is a graph-partitioning algorithm [8] that simultaneously maximizes intra-cluster similarity and inter-cluster dissimilarity by solving a global objective (discriminant) function.
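As a toy illustration of this objective (a sketch not taken from the paper; the 4-node signed similarity matrix below is invented), one can recover the optimal clustering by brute force, since the objective simply sums the signed similarities of within-cluster pairs:

```python
from itertools import product

def correlation_clustering_bruteforce(n, sim):
    """Exhaustively search all cluster assignments of n nodes for the one
    that maximizes the correlation-clustering objective: the sum of
    sim[j][k] over all within-cluster pairs (j < k)."""
    best_score, best_partition = float("-inf"), None
    # Enumerating assignments over-counts relabelings; fine for a toy.
    for assign in product(range(n), repeat=n):
        score = sum(sim[j][k]
                    for j in range(n) for k in range(j + 1, n)
                    if assign[j] == assign[k])
        if score > best_score:
            best_score, best_partition = score, assign
    return best_score, best_partition

# Toy graph: nodes 0,1 are similar, nodes 2,3 are similar, cross pairs
# are dissimilar, so the optimum is the two clusters {0,1} and {2,3}.
sim = [[0,  5, -2, -3],
       [5,  0, -4, -1],
       [-2, -4, 0,  6],
       [-3, -1, 6,  0]]
score, assign = correlation_clustering_bruteforce(4, sim)
print(score)   # 11 = 5 + 6
```

Brute force is exponential in the number of nodes; the paper instead relies on an LP relaxation, discussed later.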
In comparison with the previous image segmentation algorithms, correlation clustering is a graph-based, global-objective, and edge-labeling algorithm, and therefore has the potential to perform better for image segmentation. Furthermore, correlation clustering leads to a linear discriminant function, which allows for approximate polynomial-time inference by linear programming (LP) and large margin training based on the structured support vector machine (S-SVM) [9]. A framework that uses the S-SVM for training the parameters in correlation clustering was considered previously by Finley et al. [10]; however, that framework was applied to noun-phrase and news-article clustering. Taskar derived a max-margin formulation for learning the edge scores for correlation clustering [11]. However, his learning criterion is different from the S-SVM and is limited to applications involving two different segmentations of a single image; furthermore, Taskar does not provide any experimental comparisons or quantitative results. Even though the previous (pairwise) correlation clustering can consider global aspects of an image using the discriminatively trained discriminant function, it is limited in resolving segment boundary ambiguities because the pairwise graph presents only the pairwise relations of neighboring superpixels. Therefore, to capture long-range dependencies of distant nodes in a global context, this paper proposes a novel higher-order correlation clustering that incorporates higher-order relations. We first apply pairwise correlation clustering to image segmentation over a pairwise superpixel graph and then develop higher-order correlation clustering over a hypergraph that considers higher-order relations among superpixels. The proposed higher-order correlation clustering is defined over a hypergraph in which an edge can connect two or more nodes [12]. Hypergraphs have been previously used to lift certain limitations of conventional pairwise graphs [13, 14, 15].
However, previously proposed hypergraphs for image segmentation are restricted to partitioning based on a generalization of the normalized-cut framework, which suffers from a number of problems. First, inference is slow and difficult, especially with increasing graph size. A number of algorithms to approximate the inference process have been introduced based on the coarsening algorithm [14] and the hypergraph Laplacian matrices [13]; these are heuristic approaches and therefore sub-optimal. Second, incorporating a supervised learning algorithm for parameter estimation under the spectral hypergraph partitioning framework is difficult. This is in line with the difficulties in learning spectral graph partitioning, which requires a complex and unstable eigenvector approximation that must be differentiable [16, 17]. Third, the use of rich region-based features is restricted: almost all previous hypergraph-based image segmentation algorithms are restricted to using only color variances as region features. The proposed higher-order correlation clustering overcomes all of these problems, since it generalizes pairwise correlation clustering and makes it possible to take full advantage of a hypergraph. The proposed higher-order correlation clustering algorithm uses as its input a hypergraph and leads to a linear discriminant function. A rich feature vector is defined based on several visual cues involving higher-order relations among superpixels. For fast inference, the LP relaxation is used to approximately solve the higher-order correlation clustering problem, and for supervised training of the parameter vector by the S-SVM, we apply a decomposable structured loss function to handle unbalanced classes. We incorporate this loss function into the cutting plane procedure for S-SVM training. Experimental results on various datasets show that the proposed higher-order correlation clustering outperforms other state-of-the-art image segmentation algorithms.
The rest of the paper is organized as follows. Section 2 presents the higher-order correlation clustering for image segmentation. Section 3 describes large margin training for supervised image segmentation based on the S-SVM and the cutting plane algorithm. A number of experimental and comparative results are presented and discussed in Section 4, followed by a conclusion in Section 5.
Figure 1: Illustrations of a part of (a) the pairwise graph and (b) the triplet graph built on superpixels.
2 Higher-order correlation clustering The proposed image segmentation is based on superpixels, which are small coherent regions preserving almost all boundaries between different regions, since superpixels significantly reduce computational cost and allow feature extraction to be conducted from a larger homogeneous region. The proposed correlation clustering merges superpixels into disjoint homogeneous regions over a superpixel graph. 2.1 Pairwise correlation clustering over pairwise superpixel graph Define a pairwise undirected graph G = (V, E) where a node corresponds to a superpixel and a link between adjacent superpixels corresponds to an edge (see Figure 1(a)). A binary label y_jk for an edge (j, k) ∈ E between nodes j and k is defined such that y_jk = 1 if nodes j and k belong to the same region, and y_jk = 0 otherwise. (1) A discriminant function, which is the negative energy function, is defined over an image x and the labels y of all edges as F(x, y; w) = Σ_{(j,k)∈E} Sim_w(x, j, k) y_jk = Σ_{(j,k)∈E} ⟨w, φ_jk(x)⟩ y_jk = ⟨w, Σ_{(j,k)∈E} φ_jk(x) y_jk⟩ = ⟨w, Φ(x, y)⟩ (2) where the similarity measure between nodes j and k, Sim_w(x, j, k), is parameterized by w and takes values of both signs, such that a large positive value means strong similarity while a large negative value means a high degree of dissimilarity.
Note that the discriminant function F(x, y; w) is assumed to be linear in both the parameter vector w and the joint feature map Φ(x, y), and φ_jk(x) is a pairwise feature vector which reflects the correspondence between the jth and the kth superpixels. Image segmentation amounts to inferring the edge labels ŷ over the pairwise superpixel graph G by maximizing F such that ŷ = argmax_{y∈Y} F(x, y; w) (3) where Y is the subset of {0, 1}^E that corresponds to valid segmentations, the so-called multicut polytope. However, solving (3) over this Y is generally NP-hard. Therefore, we approximate Y by means of a common multicut LP relaxation [18] with the following two constraints: (1) cycle inequalities and (2) odd-wheel inequalities. When producing segmentation results based on the approximated LP solutions, we round down the fractionally-predicted label of each edge independently, simply obtaining valid integer solutions that may be sub-optimal. Even though this pairwise correlation clustering takes a rich pairwise feature vector and a trained parameter vector (which will be presented later), it often produces incorrectly predicted segments due to segment boundary ambiguities caused by the limited pairwise relations of neighboring superpixels (see Figure 2). Therefore, to incorporate higher-order relations, we develop higher-order correlation clustering by generalizing correlation clustering over a hypergraph. 2.2 Higher-order correlation clustering over hypergraph The proposed higher-order correlation clustering is defined over a hypergraph in which an edge, called a hyperedge, can connect two or more nodes. For example, as shown in Figure 1(b), one
can introduce binary labels for adjacent vertices forming a triplet such that y_ijk = 1 if all vertices in the triplet {i, j, k} are in the same cluster, and y_ijk = 0 otherwise.
Figure 2: Example of a segmentation result by pairwise correlation clustering. (a) Original image. (b) Ground-truth. (c) Superpixels. (d) Segments obtained by pairwise correlation clustering.
Define a hypergraph HG = (V, E) where V is a set of nodes (superpixels) and E is a set of hyperedges (subsets of V) such that ∪_{e∈E} e = V. Here, a hyperedge e has at least two nodes, i.e. |e| ≥ 2. Therefore, the hyperedge set E can be divided into two disjoint subsets: the pairwise edge set E_p = {e ∈ E | |e| = 2} and the higher-order edge set E_h = {e ∈ E | |e| > 2} such that E_p ∪ E_h = E. Note that in the proposed hypergraph for higher-order correlation clustering, all hyperedges containing just two nodes (∀e_p ∈ E_p) link adjacent superpixels. The pairwise superpixel graph is a special hypergraph where all hyperedges contain just two (neighboring) superpixels: E_p = E. A binary label y_e for a hyperedge e ∈ E is defined such that y_e = 1 if all nodes in e belong to the same region, and y_e = 0 otherwise. (4) Similar to pairwise correlation clustering, a linear discriminant function is defined over an image x and the labels y of all hyperedges as F(x, y; w) = Σ_{e∈E} Hom_w(x, e) y_e = Σ_{e∈E} ⟨w, φ_e(x)⟩ y_e = Σ_{e_p∈E_p} ⟨w_p, φ_{e_p}(x)⟩ y_{e_p} + Σ_{e_h∈E_h} ⟨w_h, φ_{e_h}(x)⟩ y_{e_h} = ⟨w, Φ(x, y)⟩ (5) where the homogeneity measure among the nodes in e, Hom_w(x, e), is also the inner product of the parameter vector w and the feature vector φ_e(x), and takes values of both signs, such that a large positive value means strong homogeneity while a large negative value means a high degree of non-homogeneity. Note that the proposed discriminant function for higher-order correlation clustering is decomposed into two terms by assigning different parameter vectors to the pairwise edge set E_p and the higher-order edge set E_h, such that w = [w_p^T, w_h^T]^T.
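As a minimal numeric sketch of the decomposed discriminant in Eq. (5) (all weights, features, and labels below are made-up toy values, not from the paper):

```python
import numpy as np

def discriminant(w_p, w_h, phi_p, y_p, phi_h, y_h):
    """Eq. (5): F(x, y; w) = sum_ep <w_p, phi_ep> y_ep
                           + sum_eh <w_h, phi_eh> y_eh.
    phi_p has shape (num pairwise edges, dim of w_p), phi_h likewise."""
    return float((phi_p @ w_p) @ y_p + (phi_h @ w_h) @ y_h)

# Two pairwise edges with 2-dim features and two higher-order edges with
# 1-dim features; the per-edge scores <w, phi_e> are [1, -3] and [2, 4].
F_val = discriminant(w_p=np.array([1.0, -1.0]),
                     w_h=np.array([2.0]),
                     phi_p=np.array([[2.0, 1.0], [0.0, 3.0]]),
                     y_p=np.array([1.0, 1.0]),
                     phi_h=np.array([[1.0], [2.0]]),
                     y_h=np.array([1.0, 0.0]))
print(F_val)   # 1 - 3 + 2 = 0.0
```

Only edges whose label is 1 contribute their score, which is what makes inference a (relaxed) combinatorial search over the labels y.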
Thus, in addition to the pairwise similarity between neighboring superpixels, the proposed higher-order correlation clustering considers a broad homogeneous region reflecting higher-order relations among superpixels. Now the problem is how to build our hypergraph from a given image. Here, we use unsupervised multiple partitionings (quantizations) from baseline superpixels. We obtain unsupervised multiple partitionings by merging not pixels but superpixels under different image quantizations using the ultrametric contour maps [19]. For example, in Figure 3 there are three region layers, one superpixel (pairwise) layer and two higher-order layers, from which a hypergraph is constructed by defining hyperedges as follows: first, all edges (black line) in the pairwise superpixel graph from the first layer are incorporated into the pairwise edge set E_p, while hyperedges (yellow line) corresponding to regions (groups of superpixels) in the second and third layers are included in the higher-order edge set E_h. Note that we could further decompose the higher-order term in (5) into two terms associated with the second layer and the third layer, respectively, by assigning different parameter vectors; however, for simplicity, this paper aggregates all higher-order edges from all higher-order layers into a single higher-order edge set with a shared parameter vector. 2.2.1 LP relaxation for inference Image segmentation amounts to inferring the hyperedge labels ŷ over the hypergraph HG by maximizing the discriminant function F such that ŷ = argmax_{y∈Y} F(x, y; w) (6)
Figure 3: Hypergraph construction from multiple partitionings. (a) Multiple partitionings from baseline superpixels. (b) Hyperedge (yellow line) corresponding to a region in the second layer. (c) Hyperedge (yellow line) corresponding to a region in the third layer.
where Y is again the set of labelings in {0, 1}^E that correspond to a valid segmentation. Since the inference problem (6) is also NP-hard, we relax Y by (facet-defining) linear inequalities. In addition to the constraints placed on the pairwise labels in pairwise correlation clustering (the cycle and odd-wheel inequalities), we augment constraints on the labels of the higher-order edges, called higher-order inequalities, for a valid segmentation: a region whose higher-order edge is labeled zero (non-homogeneous region) cannot have all of its pairwise labels equal to one, and when a region is labeled one (homogeneous region), all pairwise labels in that region must be one. These higher-order inequalities can be formulated as y_{e_h} ≤ y_{e_p}, ∀e_p ∈ E_p | e_p ⊂ e_h, and (1 − y_{e_h}) ≤ Σ_{e_p∈E_p | e_p⊂e_h} (1 − y_{e_p}). (7) Indeed, the LP relaxation to approximately solve (6) is formulated as argmax_y Σ_{e_p∈E_p} ⟨w_p, φ_{e_p}(x)⟩ y_{e_p} + Σ_{e_h∈E_h} ⟨w_h, φ_{e_h}(x)⟩ y_{e_h} (8) s.t. y_e ∈ [0, 1] ∀e ∈ E (= E_p ∪ E_h); the cycle and odd-wheel inequalities [18] ∀e_p ∈ E_p; and the higher-order inequalities (7) ∀e_h ∈ E_h. Note that the proposed higher-order correlation clustering follows the concept of soft constraints: superpixels within a hyperedge are encouraged to merge if the hyperedge is highly homogeneous. 2.2.2 Feature vector We construct a 771-dimensional feature vector φ_e(x) by concatenating several visual cues with different quantization levels and thresholds. The pairwise feature vector φ_{e_p}(x) reflects the correspondence between neighboring superpixels, and the higher-order feature vector φ_{e_h}(x) characterizes more complex relations among superpixels in a broader region to measure homogeneity. The magnitude of w determines the importance of each feature, and this importance is task-dependent; thus, w is estimated by the supervised training described in Section 3. 1. Pairwise feature vector (611-dim): φ_{e_p} = [φ^c_{e_p}; φ^t_{e_p}; φ^s_{e_p}; φ^e_{e_p}; φ^v_{e_p}; 1].
• Color difference φ^c: The 26 RGB/HSV color distances (absolute differences, χ2-distances, earth mover's distances) between two adjacent superpixels.
• Texture difference φ^t: The 64 texture distances (absolute differences, χ2-distances, earth mover's distances) between two adjacent superpixels using 15 Leung-Malik (LM) filter banks [19].
• Shape/location difference φ^s: The 5-dimensional shape/location feature proposed in [20].
• Edge strength φ^e: The 1-of-15 coding of the quantized edge strength proposed in [19].
• Joint visual word posterior φ^v: The 100-dimensional vector holding the joint visual word posteriors for a pair of neighboring superpixels using 10 visual words, and the 400-dimensional vector holding the joint posteriors based on 20 visual words [21].
2. Higher-order feature vector (160-dim): φ_{e_h} = [φ^va_{e_h}; φ^e_{e_h}; φ^tm_{e_h}; 1].
• Variance φ^va: The 14 color variances and 30 texture variances among the superpixels in a hyperedge.
• Edge strength φ^e: The 1-of-15 coding of the quantized edge strength proposed in [19].
• Template matching score φ^tm: The color/texture and shape/location features of all regions in the training images are clustered using k-means with k = 100 to obtain 100 representative templates of distinct regions. The 100-dimensional template matching feature vector is composed of the matching scores between the region defined by a hyperedge and the templates, using a Gaussian RBF kernel.
In each feature vector, a bias (= 1) is appended so that the similarity/homogeneity measure can be either positive or negative. 3 Structural learning The proposed discriminant function is defined over the superpixel graph, and therefore the ground-truth segmentation needs to be transformed into ground-truth edge labels on the superpixel graph. For this, we first assign a single dominant segment label to each superpixel by majority voting over the superpixel's constituent pixels and then obtain the ground-truth edge labels.
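Returning to the inference LP of Eq. (8): the toy below is a sketch, not the paper's implementation; the edge scores are invented and the cycle and odd-wheel inequalities are omitted. It encodes only the higher-order inequalities (7) for a triangle of three pairwise edges plus one higher-order edge over all three, using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Variables y = [y_p1, y_p2, y_p3, y_h]; toy edge scores <w, phi_e>:
theta = np.array([2.0, 2.0, -1.0, 3.0])

# Higher-order inequalities (7) for the single hyperedge h:
#   y_h <= y_p for each pairwise edge p inside h, and
#   (1 - y_h) <= sum_p (1 - y_p)  <=>  y_p1 + y_p2 + y_p3 - y_h <= 2.
A_ub = [[-1, 0, 0, 1],
        [0, -1, 0, 1],
        [0, 0, -1, 1],
        [1, 1, 1, -1]]
b_ub = [0, 0, 0, 2]

# linprog minimizes, so negate theta to maximize Eq. (8).
res = linprog(c=-theta, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 4)
print(-res.fun)   # optimal objective; merging the whole triangle wins
```

Here the higher-order score 3 outweighs the one negative pairwise score, so the LP merges the whole triangle (objective 2 + 2 - 1 + 3 = 6), and the optimum happens to be integral.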
Using these ground-truth edge labels of the training data, the S-SVM [9] is used to estimate the parameter vector. Given N training samples {x_n, y_n}_{n=1}^N, where y_n is the ground-truth edge labeling for the nth training image, the S-SVM [9] optimizes w by minimizing a quadratic objective function subject to a set of linear margin constraints: min_{w,ξ} (1/2)∥w∥² + C Σ_{n=1}^N ξ_n (9) s.t. ⟨w, ∆Φ(x_n, y)⟩ ≥ ∆(y_n, y) − ξ_n, ∀n, ∀y ∈ Y\{y_n}; ξ_n ≥ 0, ∀n, where ∆Φ(x_n, y) = Φ(x_n, y_n) − Φ(x_n, y), and C > 0 is a constant that controls the trade-off between margin maximization and training error minimization. In the S-SVM, the margin is scaled with a loss ∆(y_n, y), which is the difference measure between a prediction y and the ground-truth labeling y_n of the nth image. The S-SVM offers good generalization ability as well as the flexibility to choose any loss function [9]. The cutting plane algorithm [9, 18] with LP relaxation for loss-augmented inference is used to solve the optimization problem of the S-SVM, since the fast convergence and high robustness of the cutting plane algorithm in handling a large number of margin constraints are well known [22, 23]. A loss function is usually non-negative, and a decomposable loss function is preferred, since it enables the loss-augmented inference in the cutting plane algorithm to be performed efficiently. The most popular decomposable loss function is the Hamming distance, which in this correlation clustering is equivalent to the number of mismatches between y_n and y at the edge level. Unfortunately, in the proposed correlation clustering for image segmentation, the number of edges labeled 1 is considerably higher than the number of edges labeled 0. This imbalance makes other learning methods such as the perceptron algorithm inappropriate, since it leads to clustering the whole image as one segment. This problem due to the imbalance also
occurs when we use the Hamming loss in the S-SVM. Therefore, we use the following loss function: ∆(y_n, y) = Σ_{e_p∈E_p} (R_p y^n_{e_p} + y_{e_p} − (R_p + 1) y^n_{e_p} y_{e_p}) + D Σ_{e_h∈E_h} (R_h y^n_{e_h} + y_{e_h} − (R_h + 1) y^n_{e_h} y_{e_h}) (10) where D is the relative weight of the loss at the higher-order edge level to that at the pairwise edge level. In addition, R_p and R_h control the relative importance between the incorrect merging of superpixels and the incorrect separation of superpixels by imposing different weights on false negatives and false positives. Here, we set both R_p and R_h to be less than 1 to overcome the problem due to the imbalance.
Figure 4: Evaluation measures (PRI, VOI, SCO, BDE) obtained from segmentation results on the SBD, plotted against the average number of regions.
4 Experiments To evaluate segmentations obtained by various algorithms against the ground-truth segmentation, we conducted image segmentations on three benchmark datasets: the Stanford background dataset (SBD) [24], the Berkeley segmentation dataset (BSDS) [25], and the MSRC dataset [26]. For image segmentation based on correlation clustering, we initially obtain baseline superpixels (438 superpixels per image on average) by the gPb contour detector and the oriented watershed transform [19] and then construct a hypergraph. The function parameters are initially set to zero, and then the structured output learning based on the S-SVM is used to estimate the parameter vectors. Note that the relaxed solutions in loss-augmented inference are used during training, while in testing our simple rounding method is used to produce valid segmentation results. Rounding is only necessary in case we obtain fractional solutions from LP-relaxed correlation clustering. We compared the proposed pairwise/higher-order correlation clustering to the following state-of-the-art image segmentation algorithms: multiscale NCut [27], gPb-owt-ucm [19], and gPb-Hoiem [20], which grouped the same superpixels based on pairwise same-label likelihoods.
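A minimal sketch of the decomposable loss in Eq. (10); the toy labelings and the values of R_p, R_h, and D below are chosen arbitrarily for illustration:

```python
import numpy as np

def decomposable_loss(yn_p, y_p, yn_h, y_h, Rp=0.1, Rh=0.5, D=10.0):
    """Eq. (10): per edge the term R*yn + y - (R+1)*yn*y charges R for a
    false negative (ground truth 1, prediction 0), 1 for a false
    positive, and 0 for agreement; higher-order terms are weighted by D."""
    def term(yn, y, R):
        yn, y = np.asarray(yn, float), np.asarray(y, float)
        return float(np.sum(R * yn + y - (R + 1.0) * yn * y))
    return term(yn_p, y_p, Rp) + D * term(yn_h, y_h, Rh)

# One missed pairwise merge and one spurious higher-order merge:
loss = decomposable_loss(yn_p=[1, 0], y_p=[0, 0], yn_h=[0], y_h=[1])
print(loss)   # Rp * 1 + D * 1 = 0.1 + 10.0 = 10.1
```

Choosing R < 1 makes false cuts (missed merges) cheaper than false merges, which is how the loss counteracts the label imbalance described above.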
The pairwise same-label likelihoods were independently learnt from the training data with the same 611-dimensional pairwise feature vector. We consider four performance measures: probabilistic Rand index (PRI) [28], variation of information (VOI) [29], segmentation covering (SCO) [19], and boundary displacement error (BDE) [30]. The closer the predicted segmentation is to the ground-truth segmentation, the higher the PRI and SCO and the lower the VOI and BDE. 4.1 Stanford background dataset The SBD consists of 715 outdoor images with corresponding pixel-wise annotations. We employed 5-fold cross-validation with the dataset randomly split into 572 training images and 143 test images for each fold. Figure 4 shows the four measures obtained from segmentation results as a function of the average number of regions. Note that the performance varies with different numbers of regions, and for this reason we designed each algorithm to produce multiple segmentations (20 to 40 regions). Specifically, multiple segmentations in the proposed algorithm were obtained by varying R_p (0.001∼0.2) and R_h (0.1∼1.0) in the loss function during training (D = 10). Irrespective of the measure, the proposed higher-order correlation clustering (Corr-Cluster-Higher) performed better than the other algorithms, including the pairwise correlation clustering (Corr-Cluster-Pairwise). Figure 5 shows some example segmentations. The proposed higher-order correlation clustering yielded the best segmentation results. In particular, incorrectly predicted segments by pairwise correlation clustering were reduced in the segmentation results obtained by higher-order correlation clustering
owing to the consideration of higher-order relations in broad regions.
Figure 5: Results of image segmentation.
Regarding the runtime of our algorithm, we observed that for test-time inference it took on average around 15 seconds (graph construction and feature extraction: 14 s, LP: 1 s) per image on a 2.67 GHz processor, whereas the overall training took 10 hours on the training set. Note that other segmentation algorithms such as the multiscale NCut and the gPb-owt-ucm took on average a few minutes per image. 4.2 Berkeley segmentation dataset and MSRC dataset The BSDS contains 300 natural images split into 200 training images and 100 test images. Since each image is segmented by multiple human subjects, we defined a single probabilistic (real-valued) ground-truth segmentation of each image for training in the proposed correlation clustering. The MSRC dataset is composed of 591 natural images. We split the data into 45% training, 10% validation, and 45% test sets, following [26]. The performance was evaluated using the clean ground-truth object instance labeling of [31]. On average, all segmentation algorithms were set to produce 30 disjoint regions per image on the BSDS and 15 disjoint regions per image on the MSRC dataset.
Table 1: Quantitative results on the BSDS test set and on the MSRC test set.
                          BSDS                           MSRC
Test set                  PRI    VOI    SCO    BDE       PRI    VOI    SCO    BDE
Multi-NCut                0.728  3.043  0.315  14.257    0.628  2.765  0.341  11.941
gPb-owt-ucm               0.794  1.909  0.571  11.461    0.779  1.675  0.628   9.800
gPb-Hoiem                 0.724  3.194  0.316  14.795    0.614  2.847  0.353  13.533
Corr-Cluster-Pairwise     0.806  1.829  0.585  11.194    0.773  1.648  0.632   9.194
Corr-Cluster-Higher       0.814  1.743  0.599  10.377    0.784  1.594  0.648   9.040
As shown in Table 1, the proposed higher-order correlation clustering gave the best results on both datasets. Notably, the results obtained on the BSDS are similar to or even better than the best results ever reported on the BSDS [32, 19].
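For reference, the plain Rand index underlying the PRI measure can be sketched as follows (toy per-pixel labelings; PRI additionally averages this agreement over multiple human ground truths):

```python
from itertools import combinations

def rand_index(seg_a, seg_b):
    """Rand index between two segmentations given as per-element labels:
    the fraction of element pairs on which the two segmentations agree
    about being in the same segment vs. different segments."""
    pairs = list(combinations(range(len(seg_a)), 2))
    agree = sum((seg_a[i] == seg_a[j]) == (seg_b[i] == seg_b[j])
                for i, j in pairs)
    return agree / len(pairs)

print(rand_index([0, 0, 1, 1], [0, 0, 1, 1]))        # identical: 1.0
print(round(rand_index([0, 0, 1, 1], [0, 1, 0, 1]), 3))  # 2/6 agree: 0.333
```

The quadratic number of pairs makes this naive form expensive on full images; practical implementations use contingency-table counts instead.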
5 Conclusion This paper proposed the higher-order correlation clustering over a hypergraph to merge superpixels into homogeneous regions. The LP relaxation was used to approximately solve the higher-order correlation clustering over a hypergraph, where a rich feature vector was defined based on several visual cues involving higher-order relations among superpixels. The S-SVM was used for supervised training of the parameters in correlation clustering, and the cutting plane algorithm with LP-relaxed inference was applied to solve the optimization problem of the S-SVM. Experimental results showed that the proposed higher-order correlation clustering outperformed other image segmentation algorithms on various datasets. The proposed framework is applicable to a variety of other areas. Acknowledgments This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011-0018249). References [1] T. Kanungo, D. Mount, N. Netanyahu, C. Piatko, R. Silverman, and A. Wu, “An efficient k-means clustering algorithm: Analysis and implementation,” PAMI, vol. 24, pp. 881–892, 2002. [2] D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” PAMI, vol. 24, pp. 603–619, 2002. [3] C. Carson, S. Belongie, H. Greenspan, and J. Malik, “Blobworld: image segmentation using expectation-maximization and its application to image querying,” PAMI, vol. 24, pp. 1026–1038, 2002. [4] F. Estrada and A. Jepson, “Spectral embedding and mincut for image segmentation,” in BMVC, 2004. [5] J. Shi and J. Malik, “Normalized cuts and image segmentation,” PAMI, vol. 22, pp. 888–905, 2000. [6] P. Felzenszwalb and D. Huttenlocher, “Efficient graph-based image segmentation,” IJCV, vol. 59, pp. 167–181, 2004. [7] F. Estrada and A. Jepson, “Benchmarking image segmentation algorithms,” IJCV, vol. 85, 2009. [8] N. Bansal, A. Blum, and S. Chawla, “Correlation clustering,” Machine Learning, vol. 56, 2004. [9] I. Tsochantaridis, T.
Joachims, T. Hofmann, and Y. Altun, “Large margin methods for structured and independent output variables,” JMLR, vol. 6, 2005. [10] T. Finley and T. Joachims, “Supervised clustering with support vector machines,” in ICML, 2005. [11] B. Taskar, “Learning structured prediction models: a large margin approach,” Ph.D. thesis, Stanford University, 2004. [12] C. Berge, Hypergraphs, North-Holland, Amsterdam, 1989. [13] L. Ding and A. Yilmaz, “Image segmentation as learning on hypergraphs,” in Proc. ICMAL, 2008. [14] S. Rital, “Hypergraph cuts and unsupervised representation for image segmentation,” Fundamenta Informaticae, vol. 96, pp. 153–179, 2009. [15] A. Ducournau, S. Rital, A. Bretto, and B. Laget, “A multilevel spectral hypergraph partitioning approach for color image segmentation,” in Proc. ICSIPA, 2009. [16] F. Bach and M. I. Jordan, “Learning spectral clustering,” in NIPS, 2003. [17] T. Cour, N. Gogin, and J. Shi, “Learning spectral graph segmentation,” in AISTATS, 2005. [18] S. Nowozin and S. Jegelka, “Solution stability in linear programming relaxations: Graph partitioning and unsupervised learning,” in ICML, 2009. [19] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” PAMI, vol. 33, pp. 898–916, 2011. [20] D. Hoiem, A. A. Efros, and M. Hebert, “Recovering surface layout from an image,” IJCV, 2007. [21] D. Batra, R. Sukthankar, and T. Chen, “Learning class-specific affinities for image labelling,” in CVPR, 2008. [22] T. Finley and T. Joachims, “Training structural SVMs when exact inference is intractable,” in ICML, 2008. [23] A. Kulesza and F. Pereira, “Structured learning with approximate inference,” in NIPS, 2007. [24] S. Gould, R. Fulton, and D. Koller, “Decomposing a scene into geometric and semantically consistent regions,” in ICCV, 2009. [25] C. Fowlkes, D. Martin, and J. Malik, The Berkeley Segmentation Dataset and Benchmark (BSDB), http://www.cs.berkeley.edu/projects/vision/grouping/segbench/.
[26] J. Shotton, J. Winn, C. Rother, and A. Criminisi, “Textonboost: joint apprearence, shape and context modeling for multi-class object recognition and segmentation,” in ECCV, 2006. [27] T. Cour, F. Benezit, and J. Shi, “Spectral segmentation with multiscale graph decomposition,” in CVPR, 2005. [28] W. M. Rand, “Objective criteria for the evaluation of clustering methods,” Journal of the American Statistical Association, vol. 66, pp. 846–850, 1971. [29] M. Meila, “Computing clusterings: An axiomatic view,” in ICML, 2005. [30] J. Freixenet, X. Munoz, D. Raba, J. Marti, and X. Cufi, “Yet another survey on image segmentation: Region and boundary information integration,” in ECCV, 2002. [31] T. Malisiewicz and A. A. Efros, “Improving spatial support for objects via multiple segmentations,” in BMVC, 2007. [32] T. Kim, K. Lee, and S. Lee, “Learning full pairwise affinities for spectral segmentation,” in CVPR, 2010. 9
|
2011
|
179
|
4,234
|
The Impact of Unlabeled Patterns in Rademacher Complexity Theory for Kernel Classifiers Davide Anguita, Alessandro Ghio, Luca Oneto, Sandro Ridella Department of Biophysical and Electronic Engineering University of Genova Via Opera Pia 11A, I-16145 Genova, Italy {Davide.Anguita,Alessandro.Ghio}@unige.it {Luca.Oneto,Sandro.Ridella}@unige.it Abstract We derive here new generalization bounds, based on Rademacher Complexity theory, for model selection and error estimation of linear (kernel) classifiers, which exploit the availability of unlabeled samples. In particular, two results are obtained: the first one shows that, using the unlabeled samples, the confidence term of the conventional bound can be reduced by a factor of three; the second one shows that the unlabeled samples can be used to obtain much tighter bounds, by building localized versions of the hypothesis class containing the optimal classifier. 1 Introduction Understanding the factors that influence the performance of a statistical procedure is a key step for finding a way to improve it. One of the most explored procedures in the machine learning approach to pattern classification aims at solving the well-known model selection and error estimation problem, which targets the estimation of the generalization error and the choice of the optimal predictor from a set of possible classifiers. For reaching this target, several approaches have been proposed [1, 2, 3, 4], which provide an upper bound on the generalization ability of the classifier and which can be used for model selection purposes as well. Typically, all these bounds consist of three terms: the first one is the empirical error of the classifier (i.e. the error performed on the training data), the second term is a bias that takes into account the complexity of the class of functions which the classifier belongs to, and the third one is a confidence term, which depends on the cardinality of the training set.
These approaches are quite interesting because they investigate the finite sample behavior of a classifier, instead of the asymptotic one, even though their practical applicability has been questioned for a long time¹. One of the most recent methods for obtaining these bounds is to exploit the Rademacher Complexity, which is a powerful statistical tool that has been deeply investigated during the last years [5, 6, 7]. This approach has shown to be of practical use, by outperforming more traditional methods [8, 9] for model selection in the small-sample regime [10, 5, 6], i.e. when the dimensionality of the samples is comparable to, or even larger than, the cardinality of the training set. We show in this work how its performance can be further improved by exploiting some extra knowledge on the problem. In fact, real-world classification problems often are composed of datasets with labeled and unlabeled data [11, 12]: for this reason an interesting challenge is finding a way to exploit the unlabeled data for obtaining tighter bounds and, therefore, better error estimations. In this paper, we present two methods for exploiting the unlabeled data in the Rademacher Complexity theory [2]. First, we show how the unlabeled data can have a role in reducing the confidence term, by obtaining a new bound that takes into account both labeled and unlabeled data. Then, we propose a method, based on [7], which exploits the unlabeled data for selecting a better hypothesis space, which the classifier belongs to, resulting in a much sharper and more accurate bound.

¹See, for example, the NIPS 2004 Workshop (Ab)Use of Bounds or the 2002 Neurocolt Workshop on Bounds less than 0.5.

2 Theoretical framework and results We consider the following prediction problem: based on a random observation of $X \in \mathcal{X} \subseteq R^d$ one has to estimate $Y \in \mathcal{Y} \subseteq \{-1, 1\}$ by choosing a suitable prediction rule $f: \mathcal{X} \to [-1, 1]$.
The generalization error $L(f) = E_{\{X,Y\}}\,\ell(f(X), Y)$ associated to the prediction rule is defined through a bounded loss function $\ell(f(X), Y): [-1, 1] \times \mathcal{Y} \to [0, 1]$. We observe a set of labeled samples $D_{n_l}: (X^l_1, Y^l_1), \ldots, (X^l_{n_l}, Y^l_{n_l})$ and a set of unlabeled ones $D_{n_u}: X^u_1, \ldots, X^u_{n_u}$. The data consist of a sequence of independent, identically distributed (i.i.d.) samples with the same distribution $P(X, Y)$ for $D_{n_l}$ and $D_{n_u}$. The goal is to obtain a bound on $L(f)$ that takes into account both the labeled and unlabeled data. As we do not know the distribution that has generated the data, we do not know $L(f)$ but only its empirical estimation $L_{n_l}(f) = \frac{1}{n_l}\sum_{i=1}^{n_l} \ell(f(X^l_i), Y^l_i)$. In the typical context of Structural Risk Minimization (SRM) [13] we define an infinite sequence of hypothesis spaces of increasing complexity $\{F_i, i = 1, 2, \ldots\}$, then we choose a suitable function space $F_i$ and, consequently, a model $f^* \in F_i$ that fits the data. As we do not know the true data distribution, we can only say that:

$\{L(f) - L_{n_l}(f)\}_{f \in F_i} \le \sup_{f \in F_i} \{L(f) - L_{n_l}(f)\}$   (1)

or, equivalently:

$L(f) \le L_{n_l}(f) + \sup_{f \in F_i} \{L(f) - L_{n_l}(f)\}, \quad \forall f \in F_i$   (2)

In this framework, the SRM procedure brings us to the following choice of the function space and the corresponding optimal classifier:

$f^*, F^*: \arg\min_{F_i \in \{F_1, F_2, \ldots\}} \left[ \min_{f \in F_i} L_{n_l}(f) + \sup_{f \in F_i} \{L(f) - L_{n_l}(f)\} \right]$   (3)

Since the generalization bias $\sup_{f \in F_i}\{L(f) - L_{n_l}(f)\}$ is a random variable, it is possible to analyze it statistically and obtain a bound that holds with high probability [5]. From this point, we will consider two types of prediction rule with the associated loss function:

$f_H(x) = \mathrm{sign}(w^T \phi(x) + b), \qquad \ell_H(f_H(x), y) = \frac{1 - y f_H(x)}{2}$   (4)

$f_S(x) = \begin{cases} \min(1, w^T \phi(x) + b) & \text{if } w^T \phi(x) + b > 0 \\ \max(-1, w^T \phi(x) + b) & \text{if } w^T \phi(x) + b \le 0 \end{cases}, \qquad \ell_S(f_S(x), y) = \frac{1 - y f_S(x)}{2}$   (5)

where $\phi(\cdot): R^d \to R^D$ with $D \gg d$, $w \in R^D$ and $b \in R$.
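The two losses in Eqs. (4)-(5) are straightforward to compute; the sketch below (plain Python with illustrative values, not taken from the paper) evaluates both for a linear rule and checks the boundedness and symmetry properties stated in the text.

```python
def f_hard(w, b, x):
    # hard prediction rule of Eq. (4): sign of the linear score
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 if s > 0 else -1.0

def f_soft(w, b, x):
    # ramp prediction rule of Eq. (5): linear score clipped to [-1, 1]
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(-1.0, min(1.0, s))

def loss(f_val, y):
    # both the hard loss (4) and the soft/ramp loss (5) share this form
    return (1.0 - y * f_val) / 2.0

# illustrative data point and parameters
w, b = [1.0, -0.5], 0.2
x, y = [0.3, 0.8], 1.0
lh = loss(f_hard(w, b, x), y)
ls = loss(f_soft(w, b, x), y)
assert 0.0 <= lh <= 1.0 and 0.0 <= ls <= 1.0            # bounded in [0, 1]
# symmetry: l(f(x), y) = 1 - l(f(x), -y)
assert abs(loss(f_soft(w, b, x), y) - (1.0 - loss(f_soft(w, b, x), -y))) < 1e-12
print(lh, ls)
```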
The function $\phi(\cdot)$ is introduced to allow for a later introduction of kernels, even though, for simplicity, we will focus only on the linear case. Note that both the hard loss $\ell_H(f_H(x), y)$ and the soft loss (or ramp loss) [14] $\ell_S(f_S(x), y)$ are bounded in $[0, 1]$ and symmetric ($\ell(f(x), y) = 1 - \ell(f(x), -y)$). Then, we recall the definition of the Rademacher Complexity ($R$) for a class of functions $F$:

$\hat{R}_{n_l}(F) = E_\sigma \sup_{f \in F} \frac{2}{n_l} \sum_{i=1}^{n_l} \sigma_i \ell(f(x_i), y_i) = E_\sigma \sup_{f \in F} \frac{1}{n_l} \sum_{i=1}^{n_l} \sigma_i f(x_i)$   (6)

where $\sigma_1, \ldots, \sigma_{n_l}$ are $n_l$ independent Rademacher random variables, i.e. independent random variables for which $P(\sigma_i = +1) = P(\sigma_i = -1) = 1/2$, and the last equality holds if we use one of the losses defined before. Note that $\hat{R}$ is a computable realization of the expected Rademacher Complexity $R(F) = E_{(X,Y)} \hat{R}(F)$. The most renowned result in Rademacher Complexity theory states that [2]:

$L(f)_{f \in F} \le L_{n_l}(f)_{f \in F} + \hat{R}_{n_l}(F) + 3\sqrt{\frac{\log\frac{2}{\delta}}{2 n_l}}$   (7)

which holds with probability $(1 - \delta)$ and allows us to solve the problem of Eq. (3).

2.1 Exploiting unlabeled samples for reducing the confidence term

Assuming that the amount of unlabeled data is larger than the number of labeled samples, we split them in blocks of similar size by defining the quantity $m = \lfloor n_u / n_l \rfloor + 1$, so that we can consider a ghost sample $D'_{m n_l}$ composed of $m n_l$ patterns. Then, we can upper bound the expected generalization bias in the following way²:

$E_{\{X,Y\}} \sup_{f \in F} \{L(f) - L_{n_l}(f)\} = E_{\{X,Y\}} \sup_{f \in F} E_{\{X',Y'\}} \left[ \frac{1}{m} \sum_{i=1}^m \frac{1}{n_l} \sum_{k=(i-1) n_l + 1}^{i \cdot n_l} \ell'_k - \frac{1}{n_l} \sum_{i=1}^{n_l} \ell_i \right]$
$\le E_{\{X,Y\}} E_{\{X',Y'\}} \frac{1}{m} \sum_{i=1}^m \sup_{f \in F} \frac{1}{n_l} \sum_{k=(i-1) n_l + 1}^{i \cdot n_l} \left( \ell'_k - \ell_{|k|_{n_l}} \right)$
$= E_{\{X,Y\}} E_{\{X',Y'\}} E_\sigma \frac{1}{m} \sum_{i=1}^m \sup_{f \in F} \frac{1}{n_l} \sum_{k=(i-1) n_l + 1}^{i \cdot n_l} \sigma_{|k|_{n_l}} \left( \ell'_k - \ell_{|k|_{n_l}} \right)$
$\le E_{\{X,Y\}} E_\sigma \frac{1}{m} \sum_{i=1}^m \sup_{f \in F} \frac{2}{n_l} \sum_{k=(i-1) n_l + 1}^{i \cdot n_l} \sigma_{|k|_{n_l}} \ell_k = E_{\{X,Y\}} \frac{1}{m} \sum_{i=1}^m \hat{R}^i_{n_l}(F)$

where $|k|_{n_l} = (k - 1) \bmod n_l + 1$.
The last quantity (that we call the Expected Extended Rademacher Complexity $E_{\{X,Y\}} \hat{R}_{n_u}(F)$) and the expected generalization bias are both deterministic quantities and we know only one realization of them, dependent on the sample. Then, we can use McDiarmid's inequality [15] to obtain:

$P\left[ \sup_{f \in F} \{L(f) - L_{n_l}(f)\} \ge \hat{R}_{n_u}(F) + \epsilon \right] \le$   (8)
$P\left[ \sup_{f \in F} \{L(f) - L_{n_l}(f)\} \ge E_{\{X,Y\}} \sup_{f \in F} \{L(f) - L_{n_l}(f)\} + a\epsilon \right] +$   (9)
$P\left[ E_{\{X,Y\}} \hat{R}_{n_u}(F) \ge \hat{R}_{n_u}(F) + (1 - a)\epsilon \right] \le$   (10)
$e^{-2 n_l a^2 \epsilon^2} + e^{-\frac{m n_l}{2} (1 - a)^2 \epsilon^2}$   (11)

with $a \in [0, 1]$. By choosing $a = \frac{\sqrt{m}}{2 + \sqrt{m}}$, we can write:

$P\left[ \sup_{f \in F} \{L(f) - L_{n_l}(f)\} \ge \frac{1}{m} \sum_{i=1}^m \hat{R}^i_{n_l}(F) + \epsilon \right] \le 2 e^{-\frac{2 m n_l \epsilon^2}{(2 + \sqrt{m})^2}}$   (12)

and obtain an explicit bound which holds with probability $(1 - \delta)$:

$L(f)_{f \in F} \le L_{n_l}(f)_{f \in F} + \frac{1}{m} \sum_{i=1}^m \hat{R}^i_{n_l}(F) + \frac{2 + \sqrt{m}}{\sqrt{m}} \sqrt{\frac{\log\frac{2}{\delta}}{2 n_l}}$   (13)

where $\hat{R}^i_{n_l}(F)$ is the Rademacher Complexity of the class $F$ computed on the $i$-th block of unlabeled data. Note that for $m = 1$ the training set does not contain any unlabeled data and the bound given by Eq. (7) is recovered, while for large $m$ the confidence term is reduced by a factor of 3.

²We define $\ell(f(x_i), y_i) \equiv \ell_i$ to simplify the notation.

At first sight, it would seem impossible to compute the term $\hat{R}^i_{n_l}$ without knowing the labels of the data, but it is easy to show that this is not the case. In fact, let us define $K^+_i = \{k \in \{(i-1) \cdot n_l + 1, \ldots, i \cdot n_l\}: \sigma_{|k|_{n_l}} = +1\}$ and $K^-_i = \{k \in \{(i-1) \cdot n_l + 1, \ldots, i \cdot n_l\}: \sigma_{|k|_{n_l}} = -1\}$; then we have:

$\hat{R}_{n_u}(F) = 1 + \frac{1}{m}\sum_{i=1}^{m} E_\sigma \sup_{f\in F}\left[\frac{2}{n_l}\left(\sum_{k\in K^+_i}\ell(f_k, y_k) - \sum_{k\in K^-_i}\ell(f_k, y_k) - \sum_{k\in K^+_i} 1\right)\right]$
$= 1 + \frac{1}{m}\sum_{i=1}^{m} E_\sigma \sup_{f\in F}\left[-\frac{2}{n_l}\sum_{k\in K^+_i}\ell(f_k, -y_k) - \frac{2}{n_l}\sum_{k\in K^-_i}\ell(f_k, y_k)\right]$
$= 1 + \frac{1}{m}\sum_{i=1}^{m} E_\sigma \sup_{f\in F}\left[-\frac{2}{n_l}\sum_{k=(i-1) n_l + 1}^{i \cdot n_l}\ell(f_k, -\sigma_{|k|_{n_l}} y_k)\right]$
$= 1 - \frac{1}{m}\sum_{i=1}^{m} E_\sigma \inf_{f\in F}\frac{2}{n_l}\sum_{k=(i-1) n_l + 1}^{i \cdot n_l}\ell(f_k, \sigma_{|k|_{n_l}})$

which corresponds to solving a classification problem using all the available data with random labels. The expectation can be easily computed with some Monte Carlo trials.

[Figure 1: The effect of selecting a better center for the hypothesis classes. (a) Conventional function classes; (b) localized function classes.]

2.2 Exploiting the unlabeled data for tightening the bound

Another way of exploiting the unlabeled data is to use them for selecting a more suitable sequence of hypothesis spaces. For this purpose we could use some of the unlabeled samples or, even better, the $n_c = n_u - \lfloor n_u / n_l \rfloor n_l$ samples left over from the procedure of the previous section. The idea is inspired by the work of [3] and [7], which propose to inflate the hypothesis classes by centering them around a 'good' classifier. Usually, in fact, we have no a-priori information on what can be considered a good choice of the class center, so a natural choice is the origin [13], as in Figure 1(a). However, if it happens that the center is 'close' to the optimal classifier, the search for a suitable class will stop very soon and the resulting Rademacher Complexity will be consequently reduced (see Figure 1(b)). We propose here a method for finding two possible 'good' centers for the hypothesis classes. Let us consider $n_c$ unlabeled samples and run a clustering algorithm on them, setting the number of clusters to 2 and obtaining two clusters $C_1$ and $C_2$. We build two distinct labeled datasets by assigning the labels +1 and -1 to $C_1$ and $C_2$, respectively, and then vice-versa. Finally, we build two classifiers $f_{C_1}(x)$ and $f_{C_2}(x) = -f_{C_1}(x)$ by learning the two datasets³. The two classifiers, which have been found using only unlabeled samples, can then be used as centers for searching a better hypothesis class.
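The Monte Carlo computation of the Rademacher Complexity mentioned above can be illustrated on a toy finite function class, where the supremum in Eq. (6) is taken by direct enumeration (this stands in for the paper's SVM-based random-label learning step, which is not reproduced here; all names and values are illustrative).

```python
import random

def rademacher_complexity(F_outputs, n_trials=2000, seed=0):
    """Monte Carlo estimate of Eq. (6): E_sigma sup_{f in F} (1/n) sum_i sigma_i f(x_i).

    F_outputs: one output vector [f(x_1), ..., f(x_n)] per function f in the class.
    """
    rng = random.Random(seed)
    n = len(F_outputs[0])
    total = 0.0
    for _ in range(n_trials):
        sigma = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        total += max(sum(s * fx for s, fx in zip(sigma, f)) / n for f in F_outputs)
    return total / n_trials

# A class containing f and -f can always partially correlate with the random
# signs, so its complexity is strictly positive; a singleton class averages to ~0.
F = [[1.0, 1.0, 1.0, 1.0], [-1.0, -1.0, -1.0, -1.0]]
print(round(rademacher_complexity(F), 2))
```

For this two-function class the estimate converges to $E|\sum_i \sigma_i|/n = 0.375$, while richer classes (more output vectors) yield larger values.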
It is worthwhile noting that any supervised learning algorithm can be used [16], because the centers are only a hint for a better centered hypothesis space: their actual classification performance is not of paramount importance³. The underlying principle that inspired this procedure relies on the reasonable hypothesis that $P(X)$ is correlated with $P(X, Y)$: in fact, in an unlucky scenario, where the two classes are heavily overlapped, the method would obviously fail. Choosing a good center for the SRM procedure can greatly reduce the second term of the bound given by Eq. (13) [7] (the bias or complexity term). Note, however, that the confidence term is not affected, so we propose here an improved bound, which makes this term depend on $\hat{R}^i_{n_l}(F)$ as well. We use a recent concentration result for Self Bounding Functions [17], instead of the looser McDiarmid's inequality. The detailed proof is omitted due to space constraints and we give here only a sketch (it is a more general version of the proof in [18] for Rademacher Complexities):

$P\left[ \sup_{f \in F} \{L(f) - L_{n_l}(f)\} \ge \hat{R}_{n_u}(F) + \epsilon \right] \le e^{-2 n_l a^2 \epsilon^2} + e^{-\frac{(m n_l)(1-a)^2 \epsilon^2}{2 E_{\{X,Y\}} \hat{R}_{n_u}(F)}}$   (14)

with $a \in [0, 1]$. Choosing $a = \frac{\sqrt{m}}{\sqrt{m} + 2\sqrt{E_{\{X,Y\}} \frac{1}{m} \sum_{i=1}^m \hat{R}^i_{n_l}(F)}}$, we obtain:

$P\left[ \sup_{f \in F} \{L(f) - L_{n_l}(f)\} \ge \hat{R}_{n_u}(F) + \epsilon \right] \le 2 e^{-\frac{2 m n_l \epsilon^2}{\left(\sqrt{m} + 2\sqrt{E_{\{X,Y\}} \hat{R}_{n_u}(F)}\right)^2}}$   (15)

so that the following explicit bound holds with probability $(1 - \delta)$:

$L(f)_{f \in F} \le L_{n_l}(f)_{f \in F} + \hat{R}_{n_u}(F) + \frac{2\sqrt{E_{\{X,Y\}} \hat{R}_{n_u}(F)} + \sqrt{m}}{\sqrt{m}} \sqrt{\frac{\log\frac{2}{\delta}}{2 n_l}}$   (16)

Note that, in the worst case, $E_{\{X,Y\}} \hat{R}_{n_u}(F) = 1$ and we obtain again Eq. (13).

³Note that we could build only one classifier by assigning the most probable labels to the $n_c$ samples, according to the $n_l$ labeled ones but, rigorously speaking, this is not allowed by the SRM principle, because it would lead to using the same data for both choosing the space of functions and computing the Rademacher Complexity.
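The center-selection idea from unlabeled data can be sketched as follows (a hypothetical, simplified stand-in: 2-means clustering in plain Python, with the centroid-difference direction used as the center $\hat{w}$ instead of training an actual classifier $f_{C_1}$; the paper's point is that any supervised learner could be plugged in here).

```python
def two_means(points, iters=20):
    # naive 2-means: seed with the first and last points, then alternate
    # assignment and centroid update
    c1, c2 = points[0], points[-1]
    for _ in range(iters):
        g1, g2 = [], []
        for p in points:
            d1 = sum((a - b) ** 2 for a, b in zip(p, c1))
            d2 = sum((a - b) ** 2 for a, b in zip(p, c2))
            (g1 if d1 <= d2 else g2).append(p)
        c1 = [sum(col) / len(g1) for col in zip(*g1)]
        c2 = [sum(col) / len(g2) for col in zip(*g2)]
    return c1, c2

def center_from_unlabeled(points):
    # label cluster 1 as +1 and cluster 2 as -1; the centroid difference acts
    # as the direction of f_C1 (and its negation as f_C2 = -f_C1)
    c1, c2 = two_means(points)
    return [a - b for a, b in zip(c1, c2)]

# illustrative unlabeled sample with two well-separated groups
unlabeled = [(-2.0, -2.1), (-1.8, -2.0), (-2.2, -1.9),
             (2.0, 2.1), (1.9, 2.0), (2.1, 1.8)]
w_hat = center_from_unlabeled(unlabeled)
print(w_hat)
```

As the text notes, this only works when $P(X)$ carries information about $P(X, Y)$; with heavily overlapped classes the recovered direction would be meaningless.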
Unfortunately, the Expected Extended Rademacher Complexity cannot be computed, but we can upper bound it with its empirical version (see, for example, [19], pages 420-422, for a justification of this step) as in Eq. (10) to obtain:

$P\left[ \sup_{f \in F} \{L(f) - L_{n_l}(f)\} \ge \hat{R}_{n_u}(F) + \epsilon \right] \le e^{-2 n_l a^2 \epsilon^2} + e^{-\frac{(m n_l)(1-a)^2 \epsilon^2}{2 (\hat{R}_{n_u}(F) + (1-a)\epsilon)}}$   (17)

with $a \in [0, 1]$. Differently from Eq. (15), the previous expression cannot be put in explicit form, but it can be simply computed numerically by writing it as:

$L(f)_{f \in F} \le L_{n_l}(f)_{f \in F} + \frac{1}{m} \sum_{i=1}^m \hat{R}^i_{n_l}(F) + \epsilon^b_u$   (18)

The value $\epsilon^b_u$ can be obtained by upper bounding the last term of Eq. (17) with $\delta$ and solving the inequality with respect to $a$ and $\epsilon$, so that the bound holds with probability $(1 - \delta)$. We can show the improvements obtained through these new results by plotting the values of the confidence terms and comparing them with the conventional one [2]. Figure 2 shows the value of $\epsilon_l$ in Eq. (7) against $\epsilon_u$, the corresponding term in Eq. (13), and $\epsilon^b_u$, as a function of the number of samples.

3 Performing the Structural Risk Minimization procedure

Computing the values of the bounds described in the previous sections is a straightforward process, at least in theory. The empirical error $L_{n_l}(f)$ is found by learning a classifier with the original labeled dataset, while the (Extended) Rademacher Complexity $\hat{R}^i_{n_l}(F)$ is computed by learning the dataset composed of both labeled and unlabeled samples with random labels.
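The improvement of the Eq. (13) confidence term over the conventional Eq. (7) one is easy to verify numerically; the sketch below compares the two as a function of $m$ (illustrative values of $n_l$ and $\delta$).

```python
import math

def eps_conventional(n_l, delta=0.05):
    # confidence term of Eq. (7)
    return 3.0 * math.sqrt(math.log(2.0 / delta) / (2.0 * n_l))

def eps_unlabeled(n_l, m, delta=0.05):
    # confidence term of Eq. (13), with m = floor(n_u / n_l) + 1
    return (2.0 + math.sqrt(m)) / math.sqrt(m) * math.sqrt(math.log(2.0 / delta) / (2.0 * n_l))

n_l = 100
for m in (1, 2, 10):
    # ratio < 1 means the unlabeled-data bound has a smaller confidence term
    print(m, round(eps_unlabeled(n_l, m) / eps_conventional(n_l), 3))
```

For $m = 1$ the two coincide, and as $m$ grows the ratio $(2 + \sqrt{m})/(3\sqrt{m})$ decreases toward $1/3$, i.e. the factor-of-three reduction claimed in the text.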
In order to apply the results of the previous section in practice and to better control the hypothesis space, we formulate the learning phase of the classifier as the following optimization problem, based on the Ivanov version of the Support Vector Machine (I-SVM) [13]:

[Figure 2: Comparison of the new confidence terms with the conventional one. (a) $\epsilon_l$ vs. $\epsilon_u$ for $m \in [1, 10]$; (b) $\epsilon_u$ vs. $\epsilon^b_u$ for $m = 1$ and $R \in [0, 1]$.]

$\min_{w, b, \xi} \sum_{i=1}^n \eta_i$   (19)
$\|w - \hat{w}\|^2 \le \rho^2$
$y_i (w^T \phi(x_i) + b) \ge 1 - \xi_i$
$\xi_i \ge 0, \quad \eta_i = \min(2, \xi_i)$

where the size of the hypothesis space, centered in $\hat{w}$, is controlled by the hyperparameter $\rho$ and the last constraint is introduced for bounding the SVM loss function, which would otherwise be unbounded and would prevent the application of the theory developed so far. Note that, in practice, two sub-problems must be solved: the first one with $\hat{w} = +\hat{w}_{C_1}$ and the second one with $\hat{w} = -\hat{w}_{C_1}$; then the solution corresponding to the smaller value of the objective function is selected. Unfortunately, solving a classification problem with a bounded loss function is computationally intractable, because the problem is no longer convex and even state-of-the-art solvers like, for example, CPLEX [20] fail to find an exact solution when the training set size exceeds a few tens of samples. Therefore, we propose here to find an approximate solution through well-known algorithms like, for example, the Peeling [6] or the Convex-Concave Constrained Programming (CCCP) technique [14, 21, 22]. Furthermore, we derive a dual formulation of problem (19) that allows us to exploit the well known Sequential Minimal Optimization (SMO) algorithm for SVM learning [23].
Problem (19) can be rewritten in the equivalent Tikhonov formulation:

$\min_{w, b, \xi} \frac{1}{2}\|w - \hat{w}\|^2 + C \sum_{i=1}^n \eta_i$   (20)
$y_i (w^T \phi(x_i) + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad \eta_i = \min(2, \xi_i)$

which gives the same solution as the Ivanov formulation for some value of $C$ [13]. The method for finding the value of $C$ corresponding to a given value of $\rho$ is reported in [10], where it is also shown that $C$ cannot be used directly to control the hypothesis space. Then, it is possible to apply the CCCP technique, which is synthesized in Algorithm 1, by splitting the objective function in its convex and concave parts:

$\min_{w, b, \xi} \underbrace{\frac{1}{2}\|w - \hat{w}\|^2 + C \sum_{i=1}^n \xi_i}_{J_{convex}(\theta)} \underbrace{- C \sum_{i=1}^n \varsigma_i}_{J_{concave}(\theta)}$   (21)
$y_i (w^T \phi(x_i) + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad \varsigma_i = \max(0, \xi_i - 2)$

where $\theta = [w | b]$ is introduced to simplify the notation. Obviously, the algorithm does not guarantee to find the optimal solution, but it converges to a (usually good) solution in a finite number of steps [14]. To apply the algorithm we must compute the derivative of the concave part of the objective function:

$\left. \frac{d J_{concave}(\theta)}{d\theta} \right|_{\theta^t} \theta = \sum_{i=1}^n \left( \left. \frac{d(-C \varsigma_i)}{d\theta} \right|_{\theta^t} \right) \theta = \sum_{i=1}^n \beta_i y_i (w^T \phi(x_i) + b)$   (22)

Then, the learning problem becomes:

$\min_{w, b, \xi} \frac{1}{2}\|w - \hat{w}\|^2 + C \sum_{i=1}^n \xi_i + \sum_{i=1}^n \Delta_i y_i (w^T \phi(x_i) + b)$   (23)
$y_i (w^T \phi(x_i) + b) \ge 1 - \xi_i, \quad \xi_i \ge 0$

where

$\Delta_i = \begin{cases} C & \text{if } y_i f^t(x_i) < -1 \\ 0 & \text{otherwise} \end{cases}$   (24)

Finally, it is possible to obtain the dual formulation (the derivation is omitted due to lack of space):

$\min_\beta \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \beta_i \beta_j y_i y_j K(x_i, x_j) + \sum_{i=1}^n \left( \sum_{j=1}^{n_{C_1}} \hat{\alpha}_j y_i \hat{y}_j K(\hat{x}_j, x_i) - 1 \right) \beta_i$   (25)
$-\Delta_i \le \beta_i \le C - \Delta_i, \quad \sum_{i=1}^n \beta_i y_i = 0$

where we have used the kernel trick [24] $K(\cdot, \cdot) = \phi(\cdot)^T \phi(\cdot)$.

4 A case study

We consider the MNIST dataset [25], which consists of 62000 images representing the numbers from 0 to 9: in particular, we consider the 13074 patterns containing 0's and 1's, allowing us to deal with a binary classification problem.
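A minimal sketch of the CCCP loop for the ramp loss is given below; it is an assumed, simplified setup (linear model without kernels, center $\hat{w} = 0$, and the convex subproblem of Eq. (23) solved by plain batch subgradient descent instead of SMO). It illustrates the mechanics only: freeze the $\Delta_i$ from the current solution, solve the convex part, and repeat, so that points deep on the wrong side ($y_i f^t(x_i) < -1$) stop pulling on the solution.

```python
def predict(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def convex_step(data, delta, C=1.0, lr=0.01, epochs=200):
    # subgradient descent on (1/2)||w||^2 + C sum_i hinge_i + sum_i Delta_i y_i f(x_i)
    d = len(data[0][0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = list(w), 0.0                 # gradient of (1/2)||w||^2 (center w_hat = 0)
        for (x, y), dlt in zip(data, delta):
            margin = y * predict(w, b, x)
            coef = dlt - (C if margin < 1 else 0.0)   # hinge subgradient + linear CCCP term
            for j in range(d):
                gw[j] += coef * y * x[j]
            gb += coef * y
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def cccp(data, C=1.0, rounds=5):
    delta = [0.0] * len(data)
    w, b = convex_step(data, delta, C)
    for _ in range(rounds):
        # Eq. (24): Delta_i = C for points far on the wrong side of the margin
        delta = [C if y * predict(w, b, x) < -1 else 0.0 for x, y in data]
        w, b = convex_step(data, delta, C)
    return w, b

# four clean 1-D points plus one gross outlier with a flipped label
data = [((-2.0,), -1.0), ((-1.5,), -1.0), ((1.5,), 1.0), ((2.0,), 1.0), ((-3.0,), 1.0)]
w, b = cccp(data)
print(all(y * predict(w, b, x) > 0 for x, y in data[:4]))
```

Once the outlier's margin drops below $-1$, its $\Delta_i = C$ cancels its hinge subgradient, which is precisely the robustness the ramp loss buys over the plain hinge.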
We simulate the small-sample regime by randomly sampling a training set with low cardinality ($n_l < 500$), while the remaining $13074 - n_l$ images are used as a test set or as an unlabeled dataset, by simply discarding the labels. In order to build statistically relevant results, this procedure is repeated 30 times. In Table 1 we compare the conventional bound with our proposal. In the first column the number of labeled patterns ($n_l$) is reported, while the second column shows the number of unlabeled ones ($n_u$). The optimal classifier $f^*$ is selected by varying $\rho$ in the range $[10^{-6}, 1]$, and selecting the function corresponding to the minimum of the generalization error estimate provided by each bound. Then, for each case, the selected $f^*$ is tested on the remaining $13074 - (n_l + n_u)$ samples and the classification results are reported in columns three and four, respectively. The results show that the $f^*$ selected by exploiting the unlabeled patterns behaves better than the other and, furthermore, the estimated $L(f)$, reported in columns five and six, shows that the bound is tighter, as expected by theory. The most interesting result, however, derives from the use of the new bound of Eq. (18), as reported in Table 2, where the unlabeled data is exploited for selecting a more suitable center of the hypothesis space. The results are reported analogously to Table 1. Note that, for each experiment, 30% of the data ($n_u$) are used for selecting the hypothesis center and the remaining ones ($n_l$) are used for training the classifier.

Algorithm 1 CCCP procedure
  Initialize $\theta^0$
  repeat
    $\theta^{t+1} = \arg\min_\theta J_{convex}(\theta) + \left. \frac{d J_{concave}(\theta)}{d\theta} \right|_{\theta^t} \theta$
  until $\theta^{t+1} = \theta^t$

Table 1: Model selection and error estimation, exploiting unlabeled data for tightening the bound.

  nl    nu  | Test error of f* (Eq. 7) | Test error of f* (Eq. 13) | Estimated L(f) (Eq. 7) | Estimated L(f) (Eq. 13)
  10    20  | 13.20 ± 0.86 | 12.40 ± 0.82 | 194.00 ± 0.97 | 157.70 ± 0.97
  20    40  |  8.93 ± 1.20 |  8.93 ± 1.29 | 142.00 ± 1.06 | 116.33 ± 1.06
  40    80  |  6.26 ± 0.16 |  6.02 ± 0.17 | 103.00 ± 0.59 |  84.85 ± 0.59
  60   120  |  5.95 ± 0.12 |  5.88 ± 0.13 |  85.50 ± 0.48 |  70.68 ± 0.48
  80   160  |  5.61 ± 0.07 |  5.30 ± 0.07 |  73.70 ± 0.40 |  60.86 ± 0.40
 100   200  |  5.36 ± 0.21 |  5.51 ± 0.22 |  66.10 ± 0.37 |  54.62 ± 0.37
 120   240  |  4.98 ± 0.40 |  5.36 ± 0.40 |  61.30 ± 0.33 |  50.82 ± 0.33
 150   300  |  4.41 ± 0.53 |  4.08 ± 0.51 |  55.10 ± 0.28 |  45.73 ± 0.28
 170   340  |  3.59 ± 0.57 |  3.40 ± 0.64 |  52.40 ± 0.26 |  43.60 ± 0.26
 200   400  |  2.75 ± 0.47 |  2.67 ± 0.48 |  48.10 ± 0.19 |  39.98 ± 0.19
 250   500  |  2.07 ± 0.03 |  2.05 ± 0.03 |  42.70 ± 0.22 |  35.44 ± 0.22
 300   600  |  2.02 ± 0.04 |  1.94 ± 0.04 |  39.20 ± 0.17 |  32.57 ± 0.17
 400   800  |  1.93 ± 0.02 |  1.79 ± 0.02 |  34.90 ± 0.19 |  29.16 ± 0.19

Table 2: Model selection and error estimation, exploiting unlabeled data for selecting a more suitable hypothesis center.

  nl    nu  | Test error of f* (Eq. 7) | Test error of f* (Eq. 18) | Estimated L(f) (Eq. 7) | Estimated L(f) (Eq. 18)
   7     3  | 13.20 ± 0.86 |  8.98 ± 1.12 | 219.15 ± 0.97 | 104.01 ± 1.62
  14     6  |  8.93 ± 1.20 |  5.10 ± 0.67 | 159.79 ± 1.06 |  86.70 ± 0.01
  28    12  |  6.26 ± 0.16 |  3.05 ± 0.23 | 115.58 ± 0.59 |  51.35 ± 0.00
  42    18  |  5.95 ± 0.12 |  2.36 ± 0.23 |  95.77 ± 0.48 |  38.37 ± 0.00
  56    24  |  5.61 ± 0.07 |  1.96 ± 0.14 |  82.59 ± 0.40 |  31.39 ± 0.00
  70    30  |  5.36 ± 0.21 |  1.63 ± 0.11 |  74.05 ± 0.37 |  26.83 ± 0.00
  84    36  |  4.98 ± 0.40 |  1.44 ± 0.11 |  68.56 ± 0.33 |  23.77 ± 0.00
 105    45  |  4.41 ± 0.53 |  1.27 ± 0.09 |  61.59 ± 0.28 |  20.36 ± 0.00
 119    51  |  3.59 ± 0.57 |  1.20 ± 0.08 |  58.50 ± 0.26 |  18.77 ± 0.00
 140    60  |  2.75 ± 0.47 |  1.08 ± 0.09 |  53.72 ± 0.19 |  16.82 ± 0.00
 175    75  |  2.07 ± 0.03 |  0.92 ± 0.05 |  47.73 ± 0.22 |  14.52 ± 0.00
 210    90  |  2.02 ± 0.04 |  0.81 ± 0.07 |  43.79 ± 0.17 |  12.91 ± 0.00
 280   120  |  1.93 ± 0.02 |  0.70 ± 0.06 |  38.88 ± 0.19 |  10.86 ± 0.00
The proposed method consistently selects a better classifier, which registers a threefold classification error reduction on the test set, especially for training sets of smaller cardinality. The estimation of $L(f)$ is largely reduced as well. We have to consider that this very clear performance increase is also favoured by the characteristics of the MNIST dataset, which consists of well-separated classes: this particular data distribution implies that only few samples suffice for identifying a good hypothesis center. Many more experiments with different datasets and varying the ratio between labeled and unlabeled samples are needed, and are currently underway, for establishing the general validity of our proposal but, in any case, these results appear to be very promising.

5 Conclusion

In this paper we have studied two methods which exploit unlabeled samples to tighten the Rademacher Complexity bounds on the generalization error of linear (kernel) classifiers. The first method improves a very well-known result, while the second one aims at changing the entire approach by selecting more suitable hypothesis spaces, not only acting on the bound itself. The recent literature on the theory of bounds attempts to obtain tighter bounds through more refined concentration inequalities (e.g. improving McDiarmid's inequality), but we believe that the idea of reducing the size of the hypothesis space is a more appealing field of research, because it opens the road to possible significant improvements.

References

[1] V.N. Vapnik and A.Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16:264, 1971.
[2] P.L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. The Journal of Machine Learning Research, 3:463–482, 2003.
[3] P.L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. The Annals of Statistics, 33(4):1497–1537, 2005.
[4] O. Bousquet and A. Elisseeff. Stability and generalization. The Journal of Machine Learning Research, 2:499–526, 2002.
[5] P.L. Bartlett, S. Boucheron, and G. Lugosi. Model selection and error estimation. Machine Learning, 48(1):85–113, 2002.
[6] D. Anguita, A. Ghio, and S. Ridella. Maximal discrepancy for support vector machines. Neurocomputing, 74(9):1436–1443, 2011.
[7] D. Anguita, A. Ghio, L. Oneto, and S. Ridella. Selecting the hypothesis space for improving the generalization ability of support vector machines. In The 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, California. IEEE, 2011.
[8] S. Arlot and A. Celisse. A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40–79, 2010.
[9] B. Efron and R. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall/CRC, 1993.
[10] D. Anguita, A. Ghio, L. Oneto, and S. Ridella. In-sample model selection for support vector machines. In The 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, California. IEEE, 2011.
[11] K.P. Bennett and A. Demiriz. Semi-supervised support vector machines. In Advances in Neural Information Processing Systems 11, page 368. The MIT Press, 1999.
[12] O. Chapelle, B. Scholkopf, and A. Zien. Semi-Supervised Learning. The MIT Press, 2010.
[13] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, 2000.
[14] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In Proceedings of the 23rd International Conference on Machine Learning, pages 201–208. ACM, 2006.
[15] C. McDiarmid. On the method of bounded differences. Surveys in Combinatorics, 141(1):148–188, 1989.
[16] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, Upper Saddle River, NJ, USA, 1994.
[17] S. Boucheron, G. Lugosi, and P. Massart. On concentration of self-bounding functions. Electronic Journal of Probability, 14:1884–1899, 2009.
[18] S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities using the entropy method. The Annals of Probability, 31(3):1583–1614, 2003.
[19] G. Casella and R.L. Berger. Statistical Inference. 2001.
[20] ILOG. CPLEX 11.0 User's Manual. ILOG SA, 2008.
[21] J. Wang, X. Shen, and W. Pan. On efficient large margin semisupervised learning: Method and theory. Journal of Machine Learning Research, 10:719–742, 2009.
[22] J. Wang and X. Shen. Large margin semi-supervised learning. Journal of Machine Learning Research, 8:1867–1891, 2007.
[23] J. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Advances in Kernel Methods: Support Vector Learning, 208:1–21, 1998.
[24] J. Shawe-Taylor and N. Cristianini. Margin distribution and soft margin. Advances in Large Margin Classifiers, pages 349–358, 2000.
[25] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In 24th ICML, pages 473–480, 2007.
|
2011
|
18
|
4,235
|
Learning large-margin halfspaces with more malicious noise Philip M. Long Google plong@google.com Rocco A. Servedio Columbia University rocco@cs.columbia.edu Abstract We describe a simple algorithm that runs in time poly(n, 1/γ, 1/ε) and learns an unknown n-dimensional γ-margin halfspace to accuracy 1 − ε in the presence of malicious noise, when the noise rate is allowed to be as high as Θ(εγ√(log(1/γ))). Previous efficient algorithms could only learn to accuracy 1 − ε in the presence of malicious noise of rate at most Θ(εγ). Our algorithm does not work by optimizing a convex loss function. We show that no algorithm for learning γ-margin halfspaces that minimizes a convex proxy for misclassification error can tolerate malicious noise at a rate greater than Θ(εγ); this may partially explain why previous algorithms could not achieve the higher noise tolerance of our new algorithm. 1 Introduction Learning an unknown halfspace from labeled examples that satisfy a margin constraint (meaning that no example may lie too close to the separating hyperplane) is one of the oldest and most intensively studied problems in machine learning, with research going back at least five decades to early seminal work on the Perceptron algorithm [5, 26, 27]. In this paper we study the problem of learning an unknown γ-margin halfspace in the model of Probably Approximately Correct (PAC) learning with malicious noise at rate η. More precisely, in this learning scenario the target function is an unknown origin-centered halfspace f(x) = sign(w · x) over the domain Rⁿ (we may assume w.l.o.g. that w is a unit vector). There is an unknown distribution D over the unit ball Bₙ = {x ∈ Rⁿ : ∥x∥₂ ≤ 1} which is guaranteed to put zero probability mass on examples x that lie within Euclidean distance at most γ from the separating hyperplane w · x = 0; in other words, every point x in the support of D satisfies |w · x| ≥ γ.
The learner has access to a noisy example oracle EX_η(f, D) which works as follows: when invoked, with probability 1 − η the oracle draws x from D and outputs the labeled example (x, f(x)) and with probability η the oracle outputs a "noisy" labeled example which may be an arbitrary element (x′, y) of Bₙ × {−1, 1}. (It may be helpful to think of the noisy examples as being constructed by an omniscient and malevolent adversary who has full knowledge of the state of the learning algorithm and previous draws from the oracle. In particular, note that noisy examples need not satisfy the margin constraint and can lie arbitrarily close to, or on, the hyperplane w · x = 0.) The goal of the learner is to output a hypothesis h : Rⁿ → {−1, 1} which has high accuracy with respect to D: more precisely, with probability at least 1/2 (over the draws from D used to run the learner and any internal randomness of the learner) the hypothesis h must satisfy Pr_{x∼D}[h(x) ≠ f(x)] ≤ ε. (Because the success probability can be improved efficiently using standard repeat-and-test techniques [19], we follow the common practice of excluding this success probability from our analysis.) In particular, we are interested in computationally efficient learning algorithms which have running time poly(n, 1/γ, 1/ε). Introduced by Valiant in 1985 [30], the malicious noise model is a challenging one, as witnessed by the fact that learning algorithms can typically only withstand relatively low levels of malicious noise. Indeed, it is well known that for essentially all PAC learning problems it is information-theoretically possible to learn to accuracy 1 − ε only if the malicious noise rate η is at most ε/(1 + ε) [20], and most computationally efficient algorithms for learning even simple classes of functions can only tolerate significantly lower malicious noise rates (see e.g. [1, 2, 8, 20, 24, 28]).
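The oracle EX_η(f, D) described above can be simulated directly; the sketch below (illustrative names and distributions, and a deliberately simple fixed adversary, since the model allows arbitrary adversarial strategies) draws labeled examples that are clean with probability 1 − η and adversarial otherwise.

```python
import random

def make_malicious_oracle(target_w, sample_D, eta, adversary, rng):
    """EX_eta(f, D): with prob. 1 - eta, a clean example labeled by sign(w.x);
    otherwise whatever the adversary returns (it may violate the margin)."""
    def sign(z):
        return 1.0 if z >= 0 else -1.0
    def oracle():
        if rng.random() < 1.0 - eta:
            x = sample_D()
            return x, sign(sum(wi * xi for wi, xi in zip(target_w, x)))
        return adversary()
    return oracle

rng = random.Random(1)
w = [1.0, 0.0]                                            # unknown unit target vector
# a toy gamma-margin distribution: |w.x| = 0.5 >= gamma = 0.5 always
sample_D = lambda: [rng.choice((-1.0, 1.0)) * 0.5, rng.uniform(-0.5, 0.5)]
adversary = lambda: ([-0.5, 0.0], 1.0)                    # one possible (mislabeled) adversary
oracle = make_malicious_oracle(w, sample_D, eta=0.1, adversary=adversary, rng=rng)

draws = [oracle() for _ in range(1000)]
noisy = sum(1 for x, y in draws if y != (1.0 if x[0] >= 0 else -1.0))
print(noisy)   # roughly eta * 1000 mislabeled draws
```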
Interestingly, the original Perceptron algorithm [5, 26, 27] for learning a γ-margin halfspace can be shown to have relatively high tolerance to malicious noise. Several researchers [14, 17] have established upper bounds on the number of mistakes that the Perceptron algorithm will make when run on a sequence of examples that are linearly separable with a margin except for some limited number of "noisy" data points. Servedio [28] observed that combining these upper bounds with Theorem 6.2 of Auer and Cesa-Bianchi [3] yields a straightforward "PAC version" of the online Perceptron algorithm that can learn γ-margin halfspaces to accuracy 1 − ε in the presence of malicious noise provided that the malicious noise rate η is at most some value Θ(εγ). Servedio [28] also describes a different PAC learning algorithm which uses a "smooth" booster together with a simple geometric real-valued weak learner and achieves essentially the same result: it also learns a γ-margin halfspace to accuracy 1 − ε in the presence of malicious noise at rate at most Θ(εγ). Both the boosting-based algorithm of [28] and the Perceptron-based approach run in time poly(n, 1/γ, 1/ε). Our results. We give a simple new algorithm for learning γ-margin halfspaces in the presence of malicious noise. Like the earlier approaches, our algorithm runs in time poly(n, 1/γ, 1/ε); however, it goes beyond the Θ(εγ) malicious noise tolerance of previous approaches. Our first main result is: Theorem 1 There is a poly(n, 1/γ, 1/ε)-time algorithm that can learn an unknown γ-margin halfspace to accuracy 1 − ε in the presence of malicious noise at any rate η ≤ cεγ√(log(1/γ)) whenever γ < 1/7, where c > 0 is a universal constant. While our Θ(√(log(1/γ))) improvement is not large, it is interesting to go beyond the "natural-looking" Θ(εγ) bound of Perceptron and other simple approaches.
The algorithm of Theorem 1 is not based on convex optimization, and this is not a coincidence: our second main result is, roughly stated, the following. Informal paraphrase of Theorem 2 Let A be any learning algorithm that chooses a hypothesis vector v so as to minimize a convex proxy for the binary misclassification error. Then A cannot learn γ-margin halfspaces to accuracy 1 − ε in the presence of malicious noise at rate η ≥ cεγ, where c > 0 is a universal constant. Our approach. The algorithm of Theorem 1 is a modification of a boosting-based approach to learning halfspaces that is due to Balcan and Blum [7] (see also [6]). [7] considers a weak learner which simply generates a random origin-centered halfspace sign(v · x) by taking v to be a uniform random unit vector. The analysis of [7], which is for a noise-free setting, shows that such a random halfspace has probability Ω(γ) of having accuracy at least 1/2 + Ω(γ) with respect to D. Given this, any boosting algorithm can be used to get a PAC algorithm for learning γ-margin halfspaces to accuracy 1 − ε. Our algorithm is based on a modified weak learner which generates a collection of k = ⌈log(1/γ)⌉ independent random origin-centered halfspaces h1 = sign(v1 · x), . . . , hk = sign(vk · x) and takes the majority vote H = Maj(h1, . . . , hk). The crux of our analysis is to show that if there is no noise, then with probability at least (roughly) γ² the function H has accuracy at least 1/2 + Ω(γ√k) with respect to D (see Section 2, in particular Lemma 1). By using this weak learner in conjunction with a "smooth" boosting algorithm as in [28], we get the overall malicious-noise-tolerant PAC learning algorithm of Theorem 1 (see Section 3). For Theorem 2 we consider any algorithm that draws some number m of samples and minimizes a convex proxy for misclassification error.
If m is too small then well-known sample complexity bounds imply that the algorithm cannot learn γ-margin halfspaces to high accuracy, so we may assume that m is large; but together with the assumption that the noise rate is high, this means that with overwhelmingly high probability the sample will contain many noisy examples. The heart of our analysis deals with this situation; we describe a simple γ-margin data source and adversary strategy which ensures that the convex proxy for misclassification error will achieve its minimum on a hypothesis vector that has accuracy less than 1 − ε with respect to the underlying noiseless distribution of examples. We also establish the same fact about algorithms that use a regularizer from a class that includes the most popular regularizers based on p-norms. Related work. As mentioned above, Servedio [28] gave a boosting-based algorithm that learns γ-margin halfspaces with malicious noise at rates up to η = Θ(εγ). Khardon and Wachman [21] empirically studied the noise tolerance of variants of the Perceptron algorithm. Klivans et al. [22] showed that an algorithm that combines PCA-like techniques with smooth boosting can tolerate relatively high levels of malicious noise provided that the distribution D is sufficiently "nice" (uniform over the unit sphere or isotropic log-concave). We note that γ-margin distributions are significantly less restrictive and can be very far from having the "nice" properties required by [22]. We previously [23] showed that any boosting algorithm that works by stagewise minimization of a convex "potential function" cannot tolerate random classification noise – this is a type of "benign" rather than malicious noise, which independently flips the label of each example with probability η.
A natural question is whether Theorem 2 follows from [23] by having the malicious noise simply simulate random classification noise; the answer is no, essentially because the ordering of quantifiers is reversed in the two results. The construction and analysis from [23] crucially rely on the fact that in the setting of that paper, first the random misclassification noise rate η is chosen to take some particular value in (0, 1/2), and then the margin parameter γ is selected in a way that depends on η. In contrast, in this paper the situation is reversed: in our setting first the margin parameter γ is selected, and then given this value we study how high a malicious noise rate η can be tolerated. 2 The basic weak learner for Theorem 1 Let f(x) = sign(w · x) be an unknown halfspace and D be an unknown distribution over the n-dimensional unit ball that has a γ margin with respect to f as described in Section 1. For odd k ≥ 1 we let Ak denote the algorithm that works as follows: Ak generates k independent uniform random unit vectors v1, . . . , vk in Rn and outputs the hypothesis H(x) = Maj(sign(v1 · x), . . . , sign(vk · x)). Note that Ak does not use any examples (and thus malicious noise does not affect its execution). As the main result of Section 2 we show that if k is not too large then algorithm Ak has a non-negligible chance of outputting a reasonably good weak hypothesis: Lemma 1 For odd k ≤ 1/(16γ²) the hypothesis H generated by Ak has probability at least Ω(γ√k / 2^k) of satisfying Pr_{x∼D}[H(x) ≠ f(x)] ≤ 1/2 − γ√k/(100π). 2.1 A useful tail bound The following notation will be useful in analyzing algorithm Ak: Let vote(γ, k) := Pr[Σ_{i=1}^k Xi < k/2], where X1, . . . , Xk are i.i.d. Bernoulli (0/1) random variables with E[Xi] = 1/2 + γ for all i.
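Since vote(γ, k) is just a binomial lower tail, it can be computed exactly. The sketch below (our own illustration) evaluates it and checks it numerically against the bound vote(γ, k) ≤ 1/2 − γ√k/50 that Lemma 2 below establishes for odd k ≤ 1/(16γ²).

```python
from math import comb, sqrt

def vote(gamma, k):
    """Pr[sum of k i.i.d. Bernoulli(1/2 + gamma) variables < k/2]."""
    p = 0.5 + gamma
    # for odd k, "i < k/2" means i = 0, 1, ..., (k - 1) // 2 successes
    return sum(comb(k, i) * p ** i * (1 - p) ** (k - i)
               for i in range((k + 1) // 2))

gamma = 1 / 16
odd_k = range(1, 17, 2)            # all odd k <= 1/(16 * gamma**2) = 16
tails = {k: vote(gamma, k) for k in odd_k}
```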
Clearly vote(γ, k) is the lower tail of a Binomial distribution, but for our purposes we need an upper bound on vote(γ, k) when k is very small relative to 1/γ² and the value of vote(γ, k) is close to but – crucially – less than 1/2. Standard Chernoff-type bounds [10] do not seem to be useful here, so we give a simple self-contained proof of the bound we need (no attempt has been made to optimize constant factors below). Lemma 2 For 0 < γ < 1/2 and odd k ≤ 1/(16γ²) we have vote(γ, k) ≤ 1/2 − γ√k/50. Proof: The lemma is easily verified for k = 1, 3, 5, 7, so we assume k ≥ 9 below. The value vote(γ, k) equals Σ_{i<k/2} (k choose i) (1/2 − γ)^{k−i} (1/2 + γ)^i, which is easily seen to equal (1/2^k) Σ_{i<k/2} (k choose i) (1 − 4γ²)^i (1 − 2γ)^{k−2i}. Since k is odd, (1/2^k) Σ_{i<k/2} (k choose i) equals 1/2, so it remains to show that (1/2^k) Σ_{i<k/2} (k choose i) [1 − (1 − 4γ²)^i (1 − 2γ)^{k−2i}] ≥ γ√k/50. Consider any integer i ∈ [0, k/2 − √k]. For such an i we have

(1 − 2γ)^{k−2i} ≤ (1 − 2γ)^{2√k} ≤ 1 − (2γ)(2√k) + (2γ)² (2√k choose 2)   (1)
 ≤ 1 − 4γ√k + 8γ√k(γ√k)   (2)
 ≤ 1 − 4γ√k + 2γ√k = 1 − 2γ√k   (3)

where (1) is obtained by truncating the alternating binomial series expansion of (1 − 2γ)^{2√k} after a positive term, (2) uses the upper bound (ℓ choose 2) ≤ ℓ²/2, and (3) uses γ√k ≤ 1/4, which follows from the bound k ≤ 1/(16γ²). So we have (1 − 4γ²)^i (1 − 2γ)^{k−2i} ≤ 1 − 2γ√k and thus 1 − (1 − 4γ²)^i (1 − 2γ)^{k−2i} ≥ 2γ√k. The sum Σ_{i ≤ k/2 − √k} (k choose i) is at least 0.01 · 2^k for all odd k ≥ 9 [13], so we obtain the claimed bound:

(1/2^k) Σ_{i<k/2} (k choose i) [1 − (1 − 4γ²)^i (1 − 2γ)^{k−2i}] ≥ (1/2^k) Σ_{i ≤ k/2 − √k} (k choose i) · 2γ√k ≥ γ√k/50.

2.2 Proof of Lemma 1 Throughout the following discussion it will be convenient to view angles between vectors as lying in the range [−π, π), so acute angles are in the range (−π/2, π/2). Recall that sign(w · x) is the unknown target halfspace (we assume w is a unit vector) and v1, . . . , vk are the random unit vectors generated by algorithm Ak. For j ∈ {1, . . . , k} let Gj denote the "good" event that the angle between vj and w is acute, i.e.
lies in the interval (−π/2, π/2), and let G denote the event G1 ∧ · · · ∧ Gk. Since the vectors vi are selected independently we have Pr[G] = Π_{j=1}^k Pr[Gj] = 2^{−k}. The following claim shows that conditioned on G, any γ-margin point has a noticeably-better-than-1/2 chance of being classified correctly by H (note that the probability below is over the random generation of H by Ak): Claim 3 Fix x ∈ Bn to be any point such that |w · x| ≥ γ. Then we have Pr_H[H(x) ≠ f(x) | G] ≤ vote(γ/π, k) ≤ 1/2 − γ√k/(50π). Proof: Without loss of generality we assume that x is a positive example (an entirely similar analysis goes through for negative examples), so w · x ≥ γ. Let α denote the angle from w to x in the plane spanned by w and x; again without loss of generality we may assume that α lies in [0, π/2] (the case of negative angles is symmetric). In fact, since x is a positive example with margin γ, we have that 0 ≤ α ≤ π/2 − γ. Fix any j ∈ {1, . . . , k} and let us consider the random unit vector vj. Let v′j be the projection of vj onto the plane spanned by x and w. The distribution of v′j/∥v′j∥ is uniform on the unit circle in that plane. We have that sign(vj · x) ≠ f(x) if and only if the magnitude of the angle between v′j and x is at least π/2. Conditioned on Gj, the angle from v′j to w is uniformly distributed over the interval (−π/2, π/2). Since the angle from w to x is α, the angle from v′j to x is the sum of the angle from v′j to w and the angle from w to x, and therefore it is uniformly distributed over the interval (−π/2 + α, π/2 + α). Recalling that α ≥ 0, we have that sign(vj · x) ≠ f(x) if and only if the angle from v′j to x lies in (π/2, π/2 + α). Since the margin condition implies α ≤ π/2 − γ as noted above, we have Pr[sign(vj · x) ≠ f(x) | Gj] ≤ (π/2 − γ)/π = 1/2 − γ/π. Now recall that v1, . . . , vk are chosen independently at random, and G = G1 ∧ · · · ∧ Gk. Thus, after conditioning on G, we have that v1, . . . , vk are still independent and the events sign(v1 · x) ≠ f(x), . . .
, sign(vk · x) ≠ f(x) are independent. It follows that Pr_H[H(x) ≠ f(x) | G] ≤ vote(γ/π, k) ≤ 1/2 − γ√k/(50π), where we used Lemma 2 for the final inequality. Now all the ingredients are in place for us to prove Lemma 1. Since Claim 3 may be applied to every x in the support of D, we have Pr_{x∼D,H}[H(x) ≠ f(x) | G] ≤ 1/2 − γ√k/(50π). Applying Fubini's theorem we get that E_H[Pr_{x∼D}[H(x) ≠ f(x)] | G] ≤ 1/2 − γ√k/(50π). Applying Markov's inequality to the nonnegative random variable Pr_{x∼D}[H(x) ≠ f(x)], we get

Pr_H[ Pr_{x∼D}[H(x) ≠ f(x)] > (1 − γ√k/(50π))/2 | G ] ≤ 2(1/2 − γ√k/(50π)) / (1 − γ√k/(50π)),

which implies

Pr_H[ Pr_{x∼D}[H(x) ≠ f(x)] ≤ (1 − γ√k/(50π))/2 | G ] ≥ Ω(γ√k).

Since Pr_H[G] = 2^{−k} we get

Pr_H[ Pr_{x∼D}[H(x) ≠ f(x)] ≤ (1 − γ√k/(50π))/2 ] ≥ Ω(γ√k / 2^k),

and Lemma 1 is proved (note that (1 − γ√k/(50π))/2 = 1/2 − γ√k/(100π), matching the statement of Lemma 1). 3 Proof of Theorem 1: smooth boosting the weak learner to tolerate malicious noise Our overall algorithm for learning γ-margin halfspaces with malicious noise, which we call Algorithm B, combines a weak learner derived from Section 2 with a "smooth" boosting algorithm. Recall that boosting algorithms [15, 25] work by repeatedly running a weak learner on a sequence of carefully crafted distributions over labeled examples. Given the initial distribution P over labeled examples (x, y), a distribution Pi over labeled examples is said to be κ-smooth if Pi[(x, y)] ≤ κ · P[(x, y)] for every (x, y) in the support of P. Several boosting algorithms are known [9, 16, 28] that generate only 1/ε-smooth distributions when boosting to final accuracy 1 − ε. For concreteness we will use the MadaBoost algorithm of [9], which generates a (1 − ε)-accurate final hypothesis after O(1/(εγ²)) stages of calling the weak learner and runs in time poly(1/ε, 1/γ). At a high level our analysis here is related to previous works [28, 22] that used smooth boosting to tolerate malicious noise.
The basic idea is that since a smooth booster does not increase the weight of any example by more than a 1/ε factor, it cannot "amplify" the malicious noise rate by more than this factor. In [28] the weak learner only achieved advantage O(γ), so as long as the malicious noise rate was initially O(εγ), the "amplified" malicious noise rate of O(γ) could not completely "overcome" the advantage and boosting could proceed successfully. Here we have a weak learner that achieves a higher advantage, so boosting can proceed successfully in the presence of more malicious noise. The rest of this section provides details. The weak learner W that B uses is a slight extension of algorithm Ak from Section 2 with k = ⌈log(1/γ)⌉. When invoked with distribution Pt over labeled examples, algorithm W
• makes ℓ (specified later) calls to algorithm A⌈log(1/γ)⌉, generating candidate hypotheses H1, . . . , Hℓ; and
• evaluates H1, . . . , Hℓ using M (specified later) independent examples drawn from Pt and outputs the Hj that makes the fewest errors on these examples.
The overall algorithm B
• draws a multiset S of m examples (we will argue later that poly(n, 1/γ, 1/ε) many examples suffice) from EXη(f, D);
• sets the initial distribution P over labeled examples to be uniform over S; and
• uses MadaBoost to boost to accuracy 1 − ε/4 with respect to P, using W as a weak learner.
Recall that we are assuming η ≤ cεγ√(log(1/γ)); we will show that under this assumption, algorithm B outputs a final hypothesis h that satisfies Pr_{x∼D}[h(x) = f(x)] ≥ 1 − ε with probability at least 1/2. First, let SN ⊆ S denote the noisy examples in S. A standard Chernoff bound [10] implies that with probability at least 5/6 we have |SN|/|S| ≤ 2η; we henceforth write η′ to denote |SN|/|S|. We will show below that with high probability, every time MadaBoost calls the weak learner W with a distribution Pt, W generates a weak hypothesis (call it ht) that has Pr_{(x,y)∼Pt}[ht(x) = y] ≥ 1/2 + Θ(γ√(log(1/γ))).
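The weak learner W is simple to simulate. The sketch below is our own illustration (all helper names are ours): it draws ℓ candidate majority-vote hypotheses over random halfspaces, as in algorithm Ak, and keeps the one with the fewest errors on M held-out draws.

```python
import random

def random_unit_vector(n, rng):
    v = [rng.gauss(0, 1) for _ in range(n)]
    norm = sum(vi * vi for vi in v) ** 0.5
    return [vi / norm for vi in v]

def majority_vote_hypothesis(n, k, rng):
    """Algorithm A_k: majority vote of k random origin-centered halfspaces."""
    vs = [random_unit_vector(n, rng) for _ in range(k)]
    def H(x):
        votes = sum(1 if sum(vi * xi for vi, xi in zip(v, x)) >= 0 else -1
                    for v in vs)
        return 1 if votes > 0 else -1
    return H

def weak_learner_W(draw, n, k, num_candidates, num_eval, rng):
    """Pick the candidate A_k hypothesis with fewest errors on held-out draws."""
    eval_set = [draw() for _ in range(num_eval)]
    candidates = [majority_vote_hypothesis(n, k, rng)
                  for _ in range(num_candidates)]
    return min(candidates,
               key=lambda H: sum(H(x) != y for x, y in eval_set))

rng = random.Random(2)
gamma, n = 0.3, 3
def draw():                      # gamma-margin distribution for f = sign(x_1)
    while True:
        x = [rng.uniform(-1, 1) for _ in range(n)]
        if abs(x[0]) >= gamma and sum(xi * xi for xi in x) <= 1:
            return x, (1 if x[0] > 0 else -1)

h = weak_learner_W(draw, n, k=3, num_candidates=50, num_eval=200, rng=rng)
test_err = sum(h(x) != y for x, y in (draw() for _ in range(500))) / 500
```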
MadaBoost's boosting guarantee then implies that the final hypothesis (call it h) of Algorithm B satisfies Pr_{(x,y)∼P}[h(x) = y] ≥ 1 − ε/4. Since h is correct on (1 − ε/4) of the points in the sample S and η′ ≤ 2η, h must be correct on at least 1 − ε/4 − 2η of the points in S \ SN, which is a noise-free sample of poly(n, 1/γ, 1/ε) labeled examples generated according to D. Since h belongs to a class of hypotheses with VC dimension at most poly(n, 1/γ, 1/ε) (because the analysis of MadaBoost implies that h is a weighted vote over O(1/(εγ²)) many weak hypotheses, and each weak hypothesis is a vote over O(log(1/γ)) n-dimensional halfspaces), by standard sample complexity bounds [4, 31, 29], with probability 5/6, the accuracy of h with respect to D is at least 1 − ε/2 − 4η > 1 − ε, as desired. Thus it remains to show that with high probability each time W is called on a distribution Pt, it indeed generates a weak hypothesis with advantage at least Ω(γ√(log(1/γ))). Recall the following: Definition 1 The total variation distance between distributions P and Q over finite domain X is dTV(P, Q) := max_{E⊆X} (P[E] − Q[E]). Suppose R is the uniform distribution over the noisy points SN ⊆ S, and P′ is the uniform distribution over the remaining points S \ SN (we may view P′ as the "clean" version of P). Then the distribution P may be written as P = (1 − η′)P′ + η′R, and for any event E we have P[E] − P′[E] ≤ η′R[E] ≤ η′, so dTV(P, P′) ≤ η′. Let Pt denote the distribution generated by MadaBoost during boosting stage t. The smoothness of MadaBoost implies that Pt[SN] ≤ 4η′/ε, so the noisy examples have total probability at most 4η′/ε under Pt. Arguing as for the original distribution, we have that the clean version P′t of Pt satisfies

dTV(P′t, Pt) ≤ 4η′/ε.   (4)

By Lemma 1, each call to algorithm A⌈log(1/γ)⌉ yields a hypothesis (call it g) that satisfies

Pr_g[error_{P′t}(g) ≤ 1/2 − γ√(log(1/γ))/(100π)] ≥ Ω(γ²),   (5)

where for any distribution Q we define error_Q(g) := Pr_{(x,y)∼Q}[g(x) ≠ y].
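Definition 1 and the mixture argument above are easy to check numerically. The sketch below (our own illustration, with made-up distributions) computes dTV over a finite domain via the equivalent half-L1 formula and verifies that dTV(P, P′) ≤ η′ when P = (1 − η′)P′ + η′R.

```python
def tv_distance(p, q):
    """Total variation distance between two finite distributions, given as
    dicts mapping outcomes to probabilities; max_E (P[E] - Q[E]) equals
    half the L1 distance between the probability vectors."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

eta = 0.15                                   # the noise fraction eta'
p_clean = {"a": 0.5, "b": 0.3, "c": 0.2}     # P': clean distribution
r_noisy = {"c": 0.1, "d": 0.9}               # R: distribution of noisy points
# the mixture P = (1 - eta') P' + eta' R
p_mix = {k: (1 - eta) * p_clean.get(k, 0.0) + eta * r_noisy.get(k, 0.0)
         for k in set(p_clean) | set(r_noisy)}
d = tv_distance(p_mix, p_clean)              # guaranteed to be at most eta'
```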
Recalling that η′ ≤ 2η and η < cεγ√(log(1/γ)), for a suitably small absolute constant c > 0 we have that

4η′/ε < γ√(log(1/γ))/(400π).   (6)

Then (4) and (5) imply that Pr_g[error_{Pt}(g) ≤ 1/2 − 3γ√(log(1/γ))/(400π)] ≥ Ω(γ²). This means that by taking the parameters ℓ and M of the weak learner W to be poly(1/γ, log(1/ε)), we can ensure that with overall probability at least 2/3, at each stage t of boosting the weak hypothesis ht that W selects from its ℓ calls to A in that stage will satisfy error_{Pt}(ht) ≤ 1/2 − γ√(log(1/γ))/(200π). This concludes the proof of Theorem 1. 4 Convex optimization algorithms have limited malicious noise tolerance Given a sample S = {(x1, y1), . . . , (xm, ym)} of labeled examples, the number of examples misclassified by the hypothesis sign(v · x) is a nonconvex function of v, and thus it can be difficult to find a v that minimizes this error (see [12, 18] for theoretical results that support this intuition in various settings). In an effort to bring the powerful tools of convex optimization to bear on various halfspace learning problems, a widely used approach is to instead minimize some convex proxy for misclassification error. Definition 2 will define the class of such algorithms analyzed in this section. This definition allows algorithms to use regularization, but by setting the regularizer ψ to be the all-0 function it also covers algorithms that do not. Definition 2 A function φ : R → R+ is a convex misclassification proxy if φ is convex, nonincreasing, differentiable, and satisfies φ′(0) < 0. A function ψ : Rn → [0, ∞) is a componentwise regularizer if ψ(v) = Σ_{i=1}^n τ(vi) for a convex, differentiable τ : R → [0, ∞) for which τ(0) = 0. Given a sample of labeled examples S = {(x1, y1), . . . , (xm, ym)} ∈ (Rn × {−1, 1})^m, the (φ, ψ)-loss of vector v on S is L_{φ,ψ,S}(v) := ψ(v) + Σ_{i=1}^m φ(yi(v · xi)). A (φ, ψ)-minimizer is any learning algorithm that minimizes L_{φ,ψ,S}(v) whenever the minimum exists.
Our main negative result shows that for any sample size, algorithms that minimize a regularized convex proxy for misclassification error will succeed with exponentially small probability for a malicious noise rate that is Θ(εγ), and therefore for any larger malicious noise rate. Theorem 2 Fix φ to be any convex misclassification proxy and ψ to be any componentwise regularizer, and let algorithm A be a (φ, ψ)-minimizer. Fix ε ∈ (0, 1/8] to be any error parameter, γ ∈ (0, 1/8] to be any margin parameter, and m ≥ 1 to be any sample size. Let the malicious noise rate η be 16εγ. Then there is an n, a target halfspace f(x) = sign(w · x) over Rn, a γ-margin distribution D for f (supported on points x ∈ Bn that have |(w/∥w∥) · x| ≥ γ), and a malicious adversary with the following property: If A is given m random examples drawn from EXη(f, D) and outputs a vector v, then the probability (over the draws from EXη(f, D)) that v satisfies Pr_{x∼D}[sign(v · x) ≠ f(x)] ≤ ε is at most e^{−c/γ}, where c > 0 is some universal constant. Proof: The analysis has two cases based on whether or not the number of examples m exceeds m0 := 1/(32εγ²). (We emphasize that Case 2, in which n is taken to be just 2, is the case that is of primary interest, since in Case 1 the algorithm does not have enough examples to reliably learn a γ-margin halfspace even in a noiseless scenario.) Case 1 (m ≤ m0): Let n = ⌊1/γ²⌋ and let e(i) ∈ Rn denote the unit vector with a 1 in the ith component. Then the set of examples E := {e(1), . . . , e(n)} is shattered by the family F which consists of all 2^n halfspaces whose weight vectors are in {−γ, γ}^n, and any distribution whose support is E is a γ-margin distribution for any such halfspace.
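The shattering claim is straightforward to verify in code. The sketch below (our own illustration, for a small n ≤ 1/γ²) checks that every labeling of e(1), . . . , e(n) is realized by a weight vector in {−γ, γ}^n, with normalized margin |(w/∥w∥) · e(i)| = 1/√n ≥ γ.

```python
from itertools import product

def sign(z):
    return 1 if z >= 0 else -1

n, gamma = 4, 0.1                 # any n <= 1/gamma^2 = 100 works
basis = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]

shattered = True
for labeling in product([-1, 1], repeat=n):
    w = [gamma * y for y in labeling]         # weight vector in {-gamma, +gamma}^n
    norm_w = sum(wi * wi for wi in w) ** 0.5
    for e, y in zip(basis, labeling):
        dot = sum(wi * xi for wi, xi in zip(w, e))
        # must give the correct sign and normalized margin at least gamma
        if sign(dot) != y or abs(dot) / norm_w < gamma:
            shattered = False
```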
The proof of the well-known information-theoretic lower bound of [11] (in particular, the last displayed equation in the proof of Lemma 3 of [11]) gives that for any learning algorithm that uses m examples (such as A), there is a distribution D supported on E and a halfspace f ∈ F such that the output h of A satisfies Pr[Pr_{x∼D}[h(x) ≠ f(x)] > ε] ≥ 1 − exp(−c/γ²), where the outer probability is over the random examples drawn by A. This proves the theorem in Case 1. Case 2 (m > m0): We note that it is well known (see e.g. [31]) that O(1/(εγ²)) examples suffice to learn γ-margin n-dimensional halfspaces for any n if there is no noise, so noisy examples will play an important role in the construction in this case. We take n = 2. The target halfspace is f(x) = sign(√(1 − γ²) x1 + γx2). The distribution D is very simple and is supported on only two points: it puts weight 2ε on the point (γ/√(1 − γ²), 0), which is a positive example for f, and weight 1 − 2ε on the point (0, 1), which is also a positive example for f. When the malicious adversary is allowed to corrupt an example, with probability 1/2 it provides the point (1, 0) and mislabels it as negative, and with probability 1/2 it provides the point (0, 1) and mislabels it as negative. Let S = ((x1, y1), . . . , (xm, ym)) be a sample of m examples drawn from EXη(f, D). We define pS,1 := |{t : xt = (γ/√(1 − γ²), 0)}| / |S|, pS,2 := |{t : xt = (0, 1), yt = 1}| / |S|, ηS,1 := |{t : xt = (1, 0)}| / |S|, and ηS,2 := |{t : xt = (0, 1), yt = −1}| / |S|. Using standard Chernoff bounds (see e.g. [10]) and a union bound we get

Pr[pS,1 = 0 or pS,2 = 0 or pS,1 > 3ε or ηS,1 < η/4 or ηS,2 < η/4]
 ≤ (1 − 2ε(1 − η))^m + (1 − (1 − 2ε)(1 − η))^m + exp(−εm/12) + 2 exp(−ηm/24)
 ≤ 2(1 − ε)^m + exp(−εm/12) + 2 exp(−ηm/24)   (since ε ≤ 1/4 and η ≤ 1/2)
 ≤ 2 exp(−1/(32γ²)) + exp(−1/(96γ²)) + 2 exp(−1/(48γ)).

Since the theorem allows for an e^{−c/γ} success probability for A, it suffices to consider the case in which pS,1 and pS,2 are both positive, pS,1 ≤ 3ε, and min{ηS,1, ηS,2} ≥ η/4.
For v = (v1, v2) ∈ R², the value L_{φ,ψ,S}(v) is proportional to

L(v1, v2) := pS,1 φ(γv1/√(1 − γ²)) + pS,2 φ(v2) + ηS,1 φ(−v1) + ηS,2 φ(−v2) + ψ(v)/|S|.

From the bounds stated above on pS,1, pS,2, ηS,1 and ηS,2 we may conclude that L_{φ,ψ,S}(v) does achieve a minimum value. This is because for any z ∈ R the set {v : L_{φ,ψ,S}(v) ≤ z} is bounded, and therefore so is its closure. Since L_{φ,ψ,S}(v) is bounded below by zero and is continuous, this implies that it has a minimum. To see that for any z ∈ R the set {v : L_{φ,ψ,S}(v) ≤ z} is bounded, observe that if either v1 or v2 is fixed and the other one is allowed to take on arbitrarily large magnitude values (either positive or negative), this causes L_{φ,ψ,S}(v) to take on arbitrarily large positive values (this is an easy consequence of the definition of L, the fact that φ is convex, nonnegative and nonincreasing, φ′(0) < 0, and the fact that pS,1, pS,2, ηS,1, ηS,2 are all positive). Taking the derivative with respect to v1 yields

∂L/∂v1 = pS,1 (γ/√(1 − γ²)) φ′(γv1/√(1 − γ²)) − ηS,1 φ′(−v1) + τ′(v1)/|S|.   (7)

When v1 = 0, the derivative (7) is pS,1 (γ/√(1 − γ²)) φ′(0) − ηS,1 φ′(0) (recall that τ is minimized at 0 and thus τ′(0) = 0). Recall that φ′(0) < 0 by assumption. If pS,1 γ/√(1 − γ²) < ηS,1 then (7) is positive at 0, which means that L(v1, v2) is an increasing function of v1 at v1 = 0 for all v2. Since L is convex, this means that for each v2 ∈ R the value v1* that minimizes L(v1*, v2) is a negative value v1* < 0. So, if pS,1 γ/√(1 − γ²) < ηS,1, the linear classifier v output by A has v1 ≤ 0; hence it misclassifies the point (γ/√(1 − γ²), 0), and thus has error rate at least 2ε with respect to D. Combining the fact that γ ≤ 1/8 with the facts that pS,1 ≤ 3ε and ηS,1 > η/4, we get pS,1 γ/√(1 − γ²) < 1.01 × pS,1 γ < 4εγ = η/4 < ηS,1, which completes the proof.
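The mechanism behind Case 2 can be seen concretely with the logistic loss φ(z) = log(1 + e^{−z}), which is one valid convex misclassification proxy (the specific numbers below are our own choices, with ψ ≡ 0 and the expected sample frequencies in place of the empirical ones). Plain gradient descent on L(v1, v2) finds a minimizer with v1 < 0, so the resulting classifier misclassifies the γ-margin point (γ/√(1 − γ²), 0).

```python
from math import exp, sqrt

gamma, eps = 0.1, 0.1
eta = 16 * eps * gamma                 # malicious noise rate from Theorem 2
c = gamma / sqrt(1 - gamma ** 2)

# expected frequencies of the four point/label types in the sample
p1 = 2 * eps * (1 - eta)               # clean (gamma/sqrt(1-gamma^2), 0), label +1
p2 = (1 - 2 * eps) * (1 - eta)         # clean (0, 1), label +1
eta1 = eta / 2                         # adversarial (1, 0), label -1
eta2 = eta / 2                         # adversarial (0, 1), label -1

def phi_prime(z):                      # derivative of log(1 + exp(-z))
    return -1.0 / (1.0 + exp(z))

v1 = v2 = 0.0
for _ in range(5000):                  # gradient descent on L(v1, v2)
    g1 = p1 * c * phi_prime(c * v1) - eta1 * phi_prime(-v1)
    g2 = p2 * phi_prime(v2) - eta2 * phi_prime(-v2)
    v1 -= 1.0 * g1
    v2 -= 1.0 * g2
# the condition p1 * c < eta1 from the proof holds, forcing v1 < 0
```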
5 Conclusion It would be interesting to further improve on the malicious noise tolerance of efficient algorithms for PAC learning γ-margin halfspaces, or to establish computational hardness results for this problem. Another goal for future work is to develop an algorithm that matches the noise tolerance of Theorem 1 but uses a single halfspace as its hypothesis representation. References [1] J. Aslam and S. Decatur. Specification and simulation of statistical query algorithms for efficiency and noise tolerance. Journal of Computer and System Sciences, 56:191–208, 1998. [2] P. Auer. Learning nested differences in the presence of malicious noise. Theor. Comp. Sci., 185(1):159–175, 1997. [3] P. Auer and N. Cesa-Bianchi. On-line learning with malicious noise and the closure algorithm. Annals of Mathematics and Artificial Intelligence, 23:83–99, 1998. [4] E. B. Baum and D. Haussler. What size net gives valid generalization? Neural Comput., 1:151–160, 1989. [5] H. Block. The Perceptron: a model for brain functioning. Reviews of Modern Physics, 34:123–135, 1962. [6] A. Blum. Random Projection, Margins, Kernels, and Feature-Selection. In LNCS Volume 3940, pages 52–68, 2006. [7] A. Blum and M.-F. Balcan. A discriminative model for semi-supervised learning. Journal of the ACM, 57(3), 2010. [8] S. Decatur. Statistical queries and faulty PAC oracles. In Proc. 6th COLT, pages 262–268, 1993. [9] C. Domingo and O. Watanabe. MadaBoost: a modified version of AdaBoost. In Proc. 13th COLT, pages 180–189, 2000. [10] D. Dubhashi and A. Panconesi. Concentration of measure for the analysis of randomized algorithms. Cambridge University Press, Cambridge, 2009. [11] A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant. A general lower bound on the number of examples needed for learning. Information and Computation, 82(3):247–251, 1989. [12] V. Feldman, P. Gopalan, S. Khot, and A. Ponnuswami. On agnostic learning of parities, monomials, and halfspaces. SIAM J.
Comput., 39(2):606–645, 2009. [13] W. Feller. Generalization of a probability limit theorem of Cramér. Trans. Am. Math. Soc., 54:361–372, 1943. [14] Y. Freund and R. Schapire. Large margin classification using the Perceptron algorithm. In Proc. 11th COLT, pages 209–217, 1998. [15] Y. Freund and R. Schapire. A short introduction to boosting. J. Japan. Soc. Artif. Intel., 14(5):771–780, 1999. [16] D. Gavinsky. Optimally-smooth adaptive boosting and application to agnostic learning. JMLR, 4:101–117, 2003. [17] C. Gentile and N. Littlestone. The robustness of the p-norm algorithms. In Proc. 12th COLT, pages 1–11, 1999. [18] V. Guruswami and P. Raghavendra. Hardness of learning halfspaces with noise. SIAM J. Comput., 39(2):742–765, 2009. [19] D. Haussler, M. Kearns, N. Littlestone, and M. Warmuth. Equivalence of models for polynomial learnability. Information and Computation, 95(2):129–161, 1991. [20] M. Kearns and M. Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4):807–837, 1993. [21] R. Khardon and G. Wachman. Noise tolerant variants of the perceptron algorithm. JMLR, 8:227–248, 2007. [22] A. Klivans, P. Long, and R. Servedio. Learning Halfspaces with Malicious Noise. JMLR, 10:2715–2740, 2009. [23] P. Long and R. Servedio. Random classification noise defeats all convex potential boosters. Machine Learning, 78(3):287–304, 2010. [24] Y. Mansour and M. Parnas. Learning conjunctions with noise under product distributions. Information Processing Letters, 68(4):189–196, 1998. [25] R. Meir and G. Rätsch. An introduction to boosting and leveraging. In LNAI Advanced Lectures on Machine Learning, pages 118–183, 2003. [26] A. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on Mathematical Theory of Automata, volume XII, pages 615–622, 1962. [27] F. Rosenblatt. The Perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958. [28] R. Servedio.
Smooth boosting and learning with malicious noise. JMLR, 4:633–648, 2003. [29] J. Shawe-Taylor, P. Bartlett, R. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926–1940, 1998. [30] L. Valiant. Learning disjunctions of conjunctions. In Proc. 9th IJCAI, pages 560–566, 1985. [31] V. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
The Local Rademacher Complexity of ℓp-Norm Multiple Kernel Learning Marius Kloft∗ Machine Learning Laboratory TU Berlin, Germany kloft@tu-berlin.de Gilles Blanchard Department of Mathematics University of Potsdam, Germany gilles.blanchard@math.uni-potsdam.de Abstract We derive an upper bound on the local Rademacher complexity of ℓp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local approaches analyzed the case p = 1 only, while our analysis covers all cases 1 ≤ p ≤ ∞, assuming the different feature mappings corresponding to the different kernels to be uncorrelated. We also show a lower bound that shows that the bound is tight, and derive consequences regarding excess loss, namely fast convergence rates of the order O(n^{−α/(1+α)}), where α is the minimum eigenvalue decay rate of the individual kernels. 1 Introduction Kernel methods [24, 21] allow one to obtain nonlinear learning machines from simpler, linear ones; nowadays they can be applied almost completely out-of-the-box [3]. Nevertheless, after more than a decade of research it still remains an unsolved problem to find the best abstraction or kernel for a problem at hand. Most frequently, the kernel is selected from a candidate set according to its generalization performance on a validation set. Clearly, the performance of such an algorithm is limited by the best kernel in the set. Unfortunately, in the current state of research, there is little hope that in the near future a machine will be able to automatically find—or even engineer—the best kernel for a particular problem at hand [25]. However, by restricting to a less general problem, can we hope to achieve the automatic kernel selection? In the seminal work of Lanckriet et al. [18] it was shown that learning a support vector machine (SVM) [9] and a convex kernel combination at the same time is computationally feasible. This approach was entitled multiple kernel learning (MKL).
Research in the subsequent years focused on speeding up the initially demanding optimization algorithms [22, 26]—ignoring the fact that empirical evidence for the superiority of MKL over trivial baseline approaches (not optimizing the kernel) was missing. In 2008, negative results concerning the accuracy of MKL in practical applications accumulated: at the NIPS 2008 MKL workshop [6] several researchers presented empirical evidence showing that traditional MKL rarely helps in practice and frequently is outperformed by a regular SVM using a uniform kernel combination, see http://videolectures.net/lkasok08_whistler/. Subsequent research (e.g., [10]) revealed further negative evidence and peaked in the provocative question "Can learning kernels help performance?" posed by Corinna Cortes in an invited talk at ICML 2009 [5]. Consequently, despite all the substantial progress in the field of MKL, there remained an unsatisfied need for an approach that is really useful for practical applications: a model that has a good chance of improving the accuracy (over a plain sum kernel). ∗Marius Kloft is also with Friedrich Miescher Laboratory, Max Planck Society, Tübingen. A part of this work was done while Marius Kloft was with UC Berkeley, USA, and Gilles Blanchard was with Weierstraß Institute for Applied Analysis and Stochastics, Berlin. [Figure 1: Result of a typical ℓp-norm MKL experiment in terms of accuracy (LEFT) and kernel weights output by MKL (RIGHT). Two bar plots over the kernels C, H, P, Z, S, V, L1, L4, L14, L30, SW1, SW2 compare SVM (single), 1-norm MKL, 1.07-norm MKL, 1.14-norm MKL, 1.33-norm MKL, and SVM (all): test set accuracy on the left and kernel weights θ on the right.] A first step towards a model of kernel learning that is useful for practical applications was made in [7, 13, 14]: by imposing an ℓq-norm penalty (q > 1) rather than an ℓ1-norm one on the kernel combination coefficients.
This ℓq-norm MKL is an empirical minimization algorithm that operates on the multi-kernel class consisting of functions f : x ↦ ⟨w, φk(x)⟩ with ∥w∥k ≤ D, where φk is the kernel mapping into the reproducing kernel Hilbert space (RKHS) Hk with kernel k and norm ∥·∥k, while the kernel k itself ranges over the set of possible kernels {k = Σ_{m=1}^M θm km : ∥θ∥q ≤ 1, θ ≥ 0}. A conceptual milestone going back to the work of [1] and [20] is that this multi-kernel class can equivalently be represented as a block-norm regularized linear class in the product RKHS:

Hp,D,M = {fw : x ↦ ⟨w, φ(x)⟩ | w = (w(1), . . . , w(M)), ∥w∥2,p ≤ D},   (1)

where there is a one-to-one mapping of q ∈ [1, ∞] to p ∈ [1, 2] given by p = 2q/(q + 1). In Figure 1, we show exemplary results of an ℓp-norm MKL experiment, achieved on the protein fold prediction dataset used in [4] (see supplementary material A for experimental details). We first observe that, as expected, ℓp-norm MKL enforces strong sparsity in the coefficients θm when p = 1 and no sparsity at all otherwise (but various degrees of soft sparsity for intermediate p). Crucially, the performance (as measured by the test error) is not monotonic as a function of p; p = 1 (sparse MKL) yields the same performance as the regular SVM using a uniform kernel combination, but optimal performance is attained for some intermediate value of p—namely, p = 1.14. This is a strong empirical motivation to study theoretically the performance of ℓp-MKL beyond the limiting cases p = 1 and p = ∞. Clearly, the complexity of (1) will be greater than one that is based on a single kernel only. However, it is unclear whether the increase is modest or considerably high and—since there is a free parameter p—how this relates to the choice of p. To this end, the main aim of this paper is to analyze the sample complexity of the hypothesis class (1). An analysis of this model, based on global Rademacher complexities, was developed by [8] for special cases of p.
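The ℓ2,p block norm constraining the class (1), and the mapping q ↦ p = 2q/(q + 1), are easy to compute; a minimal sketch (our own illustration, with helper names that are ours):

```python
def block_norm(w_blocks, p):
    """l_{2,p} block norm: the p-norm of the per-block Euclidean norms."""
    block_l2 = [sum(x * x for x in block) ** 0.5 for block in w_blocks]
    return sum(b ** p for b in block_l2) ** (1.0 / p)

def q_to_p(q):
    """One-to-one mapping of q in [1, inf] to p in [1, 2]: p = 2q/(q+1)."""
    return 2 * q / (q + 1)

# two blocks w^(1), w^(2), each with Euclidean norm 5
w = [[3.0, 4.0], [0.0, 0.0, 5.0]]
```

For q = 1 (sparse MKL) the mapping gives p = 1, and as q → ∞ it approaches p = 2, recovering the two limiting regimes discussed above.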
In the present work, we base our main analysis on the theory of local Rademacher complexities, which allows us to derive improved and more precise rates of convergence that cover the whole range of p ∈ [1, ∞].

Outline of the contributions. This paper makes the following contributions:
• An upper bound on the local Rademacher complexity of ℓp-norm MKL is shown, from which we derive an excess risk bound that achieves a fast convergence rate of the order O(M^{1 + (2/(1+α))(1/p* − 1)} n^{−α/(1+α)}), where α is the minimum eigenvalue decay rate of the individual kernels (previous bounds for ℓp-norm MKL only achieved O(M^{1/p*} n^{−1/2})).
• A lower bound is shown that, apart from absolute constants, matches the upper bounds, showing that our results are tight.
• The generalization performance of ℓp-norm MKL as guaranteed by the excess risk bound is studied for varying values of p, shedding light on the appropriateness of a small/large p in various learning scenarios.
Furthermore, we also present a simpler, more general proof of the global Rademacher bound shown in [8] (at the expense of a slightly worse constant). A comparison of the rates obtained with local and global Rademacher analysis is carried out in Section 3.

Notation. We abbreviate H_p = H_{p,D} = H_{p,D,M} if clear from the context. We denote the (normalized) kernel matrices corresponding to k and k_m by K and K_m, respectively; i.e., the ij-th entry of K is (1/n) k(x_i, x_j). Also, we denote u = (u^{(m)})_{m=1}^M = (u^{(1)}, …, u^{(M)}) ∈ H = H_1 × … × H_M. Furthermore, let P be a probability measure on X generating the i.i.d. data x_1, …, x_n, and denote by E the corresponding expectation operator. We work with operators in Hilbert spaces and will use, instead of the usual vector/matrix notation φ(x)φ(x)^⊤, the tensor notation φ(x) ⊗ φ(x) ∈ HS(H), which is a Hilbert–Schmidt operator H → H defined as (φ(x) ⊗ φ(x))u = ⟨φ(x), u⟩φ(x).
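In finite dimensions, the tensor notation reduces to the familiar outer product; the following small check (our own illustration) verifies that (φ ⊗ φ)u = ⟨φ, u⟩φ and that the trace of an empirical covariance operator equals the mean squared norm:

```python
import numpy as np

# Rank-one operator: (phi ⊗ phi) u = <phi, u> phi, i.e. the outer-product
# matrix applied to u.
phi = np.array([1.0, 2.0, 0.5])
u = np.array([0.3, -1.0, 2.0])
assert np.allclose(np.outer(phi, phi) @ u, np.dot(phi, u) * phi)

# The uncentered covariance operator J = E phi(x) ⊗ phi(x) is estimated by
# averaging rank-one operators, and tr(J) equals E ||phi(x)||^2.
X = np.random.default_rng(0).normal(size=(100, 3))
J_hat = sum(np.outer(x, x) for x in X) / len(X)
assert np.isclose(np.trace(J_hat), np.mean([x @ x for x in X]))
```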
The space HS(H) of Hilbert–Schmidt operators on H is itself a Hilbert space, and the expectation Eφ(x) ⊗ φ(x) is well-defined and belongs to HS(H) as soon as E∥φ(x)∥² is finite, which will always be assumed. We denote by J = Eφ(x) ⊗ φ(x) and J_m = Eφ_m(x) ⊗ φ_m(x) the uncentered covariance operators corresponding to the variables φ(x) and φ_m(x), respectively; it holds that tr(J) = E∥φ(x)∥₂² and tr(J_m) = E∥φ_m(x)∥₂².

Global Rademacher Complexities. We first review global Rademacher complexities (GRC) in multiple kernel learning. Let x_1, …, x_n be an i.i.d. sample drawn from P. The global Rademacher complexity is defined as R(H_p) = E sup_{f_w ∈ H_p} ⟨w, (1/n) Σ_{i=1}^n σ_i φ(x_i)⟩, where (σ_i)_{1≤i≤n} is an i.i.d. family (independent of the φ(x_i)) of Rademacher variables (random signs). Its empirical counterpart is denoted by R̂(H_p) = E[R(H_p) | x_1, …, x_n] = E_σ sup_{f_w ∈ H_p} ⟨w, (1/n) Σ_{i=1}^n σ_i φ(x_i)⟩. In the recent paper of [8] it was shown that

R̂(H_p) ≤ (D/n) ( c p* ∥(tr(K_m))_{m=1}^M∥_{p*/2} )^{1/2}

for p ∈ [1, 2] and p* an integer (where c = 23/44 and p* := p/(p−1) is the conjugate exponent). This bound is tight and improves a series of loose results that were given for p = 1 in the past (see [8] and references therein). In fact, the above result can be extended to the whole range of p ∈ [1, ∞] (in the supplementary material we present a quite simple proof using c = 1):

Proposition 1 (GLOBAL RADEMACHER COMPLEXITY BOUND). For any p ≥ 1, the empirical version of the global Rademacher complexity of the multi-kernel class H_p can be bounded as

R̂(H_p) ≤ min_{t ∈ [p,∞]} D √( (t*/n) ∥((1/n) tr(K_m))_{m=1}^M∥_{t*/2} ).

Interestingly, the above GRC bound is not monotonic in p, and thus the minimum is not always attained for t := p.

2 The Local Rademacher Complexity of Multiple Kernel Learning

Let x_1, …, x_n be an i.i.d. sample drawn from P. We define the local Rademacher complexity (LRC) of H_p as R_r(H_p) = E sup_{f_w ∈ H_p : P f_w² ≤ r} ⟨w, (1/n) Σ_{i=1}^n σ_i φ(x_i)⟩, where P f_w² := E(f_w(x))². Note that it subsumes the global RC as a special case for r = ∞. We will also use the following assumption in the bounds for the case p ∈ [1, 2]:

Assumption (U) (no-correlation). Let x ∼ P. The Hilbert-space-valued random variables φ_1(x), …, φ_M(x) are said to be (pairwise) uncorrelated if for any m ≠ m′ and w ∈ H_m, w′ ∈ H_{m′}, the real variables ⟨w, φ_m(x)⟩ and ⟨w′, φ_{m′}(x)⟩ are uncorrelated.

For example, if X = R^M, the above holds when the input variable x ∈ X has independent coordinates and the kernels k_1, …, k_M each act on a different coordinate. Such a setting was also considered by [23] (for sparse MKL). To state the bounds, note that the covariance operators enjoy discrete eigenvalue–eigenvector decompositions J = Eφ(x) ⊗ φ(x) = Σ_{j=1}^∞ λ_j u_j ⊗ u_j and J_m = Eφ_m(x) ⊗ φ_m(x) = Σ_{j=1}^∞ λ_j^{(m)} u_j^{(m)} ⊗ u_j^{(m)}, where (u_j)_{j≥1} and (u_j^{(m)})_{j≥1} form orthonormal bases of H and H_m, respectively. We are now equipped to state our main results:

Theorem 2 (LOCAL RADEMACHER COMPLEXITY BOUND, p ∈ [1, 2]). Assume that the kernels are uniformly bounded (∥k∥_∞ ≤ B < ∞) and that Assumption (U) holds. The local Rademacher complexity of the multi-kernel class H_p can be bounded for any p ∈ [1, 2] as

R_r(H_p) ≤ min_{t ∈ [p,2]} √( (16/n) ∥( Σ_{j=1}^∞ min( r M^{1−2/t*}, c e D² t*² λ_j^{(m)} ) )_{m=1}^M∥_{t*/2} ) + √(Be) D M^{1/t*} t* / n.

Theorem 3 (LOCAL RADEMACHER COMPLEXITY BOUND, p ∈ [2, ∞]). For any p ∈ [2, ∞],

R_r(H_p) ≤ min_{t ∈ [p,∞]} √( (2/n) Σ_{j=1}^∞ min( r, D² M^{2/t*−1} λ_j ) ).

It is interesting to compare the above bounds for the special case p = 2 with the ones of Bartlett et al. [2]. The main term of the bound of Theorem 2 (taking t = p = 2) is then essentially determined by O( ( (1/n) Σ_{m=1}^M Σ_{j=1}^∞ min(r, λ_j^{(m)}) )^{1/2} ). If the variables φ_m(x) are centered and uncorrelated, this is equivalently of order O( ( (1/n) Σ_{j=1}^∞ min(r, λ_j) )^{1/2} ) because spec(J) = ∪_{m=1}^M spec(J_m); that is, {λ_i, i ≥ 1} = ∪_{m=1}^M {λ_i^{(m)}, i ≥ 1}. This rate is also what we would obtain through Theorem 3, so both bounds on the LRC recover the rate shown in [2] for the special case p = 2. It is also interesting to study the case p = 1: using t = (log M)* in Theorem 2, we obtain the bound

R_r(H_1) ≤ √( (16/n) ∥( Σ_{j=1}^∞ min( rM, e³ D² (log M)² λ_j^{(m)} ) )_{m=1}^M∥_∞ ) + √B e^{3/2} D log(M) / n,

for all M ≥ e². We now turn to proving Theorem 2; the proof of Theorem 3 is straightforward and shown in the supplementary material C.

Proof of Theorem 2. Note that it suffices to prove the result for t = p: trivially ∥w∥_{2,t} ≤ ∥w∥_{2,p} holds for all t ≥ p, so that H_p ⊆ H_t and therefore R_r(H_p) ≤ R_r(H_t).

STEP 1: RELATING THE ORIGINAL CLASS WITH THE CENTERED CLASS. In order to exploit the no-correlation assumption, we will work in large parts of the proof with the centered class H̃_p = { f̃_w : ∥w∥_{2,p} ≤ D }, wherein f̃_w : x ↦ ⟨w, φ̃(x)⟩ and φ̃(x) := φ(x) − Eφ(x). We start the proof by noting that f̃_w(x) = f_w(x) − ⟨w, Eφ(x)⟩ = f_w(x) − E⟨w, φ(x)⟩ = f_w(x) − E f_w(x), so that, by the bias–variance decomposition, it holds that

P f_w² = E f_w(x)² = E( f_w(x) − E f_w(x) )² + ( E f_w(x) )² = P f̃_w² + ( P f_w )².   (2)

Furthermore, we note that by Jensen's inequality
∥Eφ(x)∥_{2,p*} = ( Σ_{m=1}^M ∥Eφ_m(x)∥₂^{p*} )^{1/p*} = ( Σ_{m=1}^M ⟨Eφ_m(x), Eφ_m(x)⟩^{p*/2} )^{1/p*} ≤ ( Σ_{m=1}^M ( E⟨φ_m(x), φ_m(x)⟩ )^{p*/2} )^{1/p*} = √( ∥(tr(J_m))_{m=1}^M∥_{p*/2} ),   (3)

so that we can express the complexity of the centered class in terms of the uncentered one as follows:

R_r(H_p) ≤ E sup_{f_w ∈ H_p, P f_w² ≤ r} ⟨w, (1/n) Σ_{i=1}^n σ_i φ̃(x_i)⟩ + E sup_{f_w ∈ H_p, P f_w² ≤ r} ⟨w, ( (1/n) Σ_{i=1}^n σ_i ) Eφ(x)⟩.

Concerning the first term of the above upper bound, using (2) we have P f̃_w² ≤ P f_w², and thus

E sup_{f_w ∈ H_p, P f_w² ≤ r} ⟨w, (1/n) Σ_i σ_i φ̃(x_i)⟩ ≤ E sup_{f_w ∈ H_p, P f̃_w² ≤ r} ⟨w, (1/n) Σ_i σ_i φ̃(x_i)⟩ = R_r(H̃_p).

Now, to bound the second term, we write

E sup_{f_w ∈ H_p, P f_w² ≤ r} ⟨w, ( (1/n) Σ_i σ_i ) Eφ(x)⟩ ≤ n^{−1/2} sup_{f_w ∈ H_p, P f_w² ≤ r} ⟨w, Eφ(x)⟩.

Now observe that, by Hölder's inequality and (3), ⟨w, Eφ(x)⟩ ≤ ∥w∥_{2,p} ∥Eφ(x)∥_{2,p*} ≤ ∥w∥_{2,p} √( ∥(tr(J_m))_{m=1}^M∥_{p*/2} ), as well as ⟨w, Eφ(x)⟩ = E f_w(x) ≤ √(P f_w²). We finally obtain, putting together the steps above,

R_r(H_p) ≤ R_r(H̃_p) + n^{−1/2} min( √r, D √( ∥(tr(J_m))_{m=1}^M∥_{p*/2} ) ).   (4)

This shows that there is no loss in working with the centered class instead of the uncentered one.

STEP 2: BOUNDING THE COMPLEXITY OF THE CENTERED CLASS. In this step of the proof we generalize the technique of [19] to multi-kernel classes. First we note that, since the (centered) covariance operator Eφ̃_m(x) ⊗ φ̃_m(x) is also a self-adjoint Hilbert–Schmidt operator on H_m, there exists an eigendecomposition Eφ̃_m(x) ⊗ φ̃_m(x) = Σ_{j=1}^∞ λ̃_j^{(m)} ũ_j^{(m)} ⊗ ũ_j^{(m)}, wherein (ũ_j^{(m)})_{j≥1} is an orthogonal basis of H_m. Furthermore, the no-correlation assumption (U) entails Eφ̃_l(x) ⊗ φ̃_m(x) = 0 for all l ≠ m. As a consequence, for all j and m,

P f̃_w² = E( f̃_w(x) )² = E( Σ_{m=1}^M ⟨w_m, φ̃_m(x)⟩ )² = Σ_{m=1}^M Σ_{j=1}^∞ λ̃_j^{(m)} ⟨w_m, ũ_j^{(m)}⟩²,   (5)

E⟨ (1/n) Σ_{i=1}^n σ_i φ̃_m(x_i), ũ_j^{(m)} ⟩² = (1/n) ⟨ ũ_j^{(m)}, ( (1/n) Σ_{i=1}^n E φ̃_m(x_i) ⊗ φ̃_m(x_i) ) ũ_j^{(m)} ⟩ = λ̃_j^{(m)} / n.   (6)

Let now h_1, …, h_M be arbitrary nonnegative integers. We can express the LRC in terms of the eigendecomposition as follows:

R_r(H̃_p) = E sup_{f_w ∈ H̃_p : P f̃_w² ≤ r} ⟨w, (1/n) Σ_i σ_i φ̃(x_i)⟩
= E sup_{f_w ∈ H̃_p : P f̃_w² ≤ r} ⟨ (w^{(m)})_{m=1}^M, ( (1/n) Σ_i σ_i φ̃_m(x_i) )_{m=1}^M ⟩
≤ sup_{P f̃_w² ≤ r} [ √( Σ_{m=1}^M Σ_{j=1}^{h_m} λ̃_j^{(m)} ⟨w^{(m)}, ũ_j^{(m)}⟩² ) · √( Σ_{m=1}^M Σ_{j=1}^{h_m} (λ̃_j^{(m)})^{−1} E⟨ (1/n) Σ_i σ_i φ̃_m(x_i), ũ_j^{(m)} ⟩² ) ]
+ E sup_{f_w ∈ H̃_p} ⟨ w, ( Σ_{j=h_m+1}^∞ ⟨ (1/n) Σ_i σ_i φ̃_m(x_i), ũ_j^{(m)} ⟩ ũ_j^{(m)} )_{m=1}^M ⟩

(by Cauchy–Schwarz and Jensen), so that (5), (6), and Hölder's inequality yield

R_r(H̃_p) ≤ √( r Σ_{m=1}^M h_m / n ) + D E∥( Σ_{j=h_m+1}^∞ ⟨ (1/n) Σ_i σ_i φ̃_m(x_i), ũ_j^{(m)} ⟩ ũ_j^{(m)} )_{m=1}^M∥_{2,p*}.

STEP 3: KHINTCHINE–KAHANE'S AND ROSENTHAL'S INEQUALITIES. We use the Khintchine–Kahane (K.-K.) inequality (see Lemma B.2 in the supplementary material) to further bound the right term in the above expression as

E∥( Σ_{j>h_m} ⟨ (1/n) Σ_i σ_i φ̃_m(x_i), ũ_j^{(m)} ⟩ ũ_j^{(m)} )_{m=1}^M∥_{2,p*} ≤ √(p*/n) ( Σ_{m=1}^M E( Σ_{j>h_m} (1/n) Σ_{i=1}^n ⟨φ̃_m(x_i), ũ_j^{(m)}⟩² )^{p*/2} )^{1/p*}.

Note that for p ≥ 2 it holds that p*/2 ≤ 1, and thus it suffices to employ Jensen's inequality once again to move the expectation operator inside the inner term. In the general case we need a handle on the p*/2-th moments, and to this end we employ Lemma C.1 (Rosenthal + Young; see supplementary material), which yields

( Σ_{m=1}^M E( Σ_{j=h_m+1}^∞ (1/n) Σ_{i=1}^n ⟨φ̃_m(x_i), ũ_j^{(m)}⟩² )^{p*/2} )^{1/p*}
≤ ( Σ_{m=1}^M (e p*)^{p*/2} [ (B/n)^{p*/2} + ( Σ_{j=h_m+1}^∞ (1/n) Σ_{i=1}^n E⟨φ̃_m(x_i), ũ_j^{(m)}⟩² )^{p*/2} ] )^{1/p*}
≤ √( e p* ( B M^{2/p*}/n + ( Σ_{m=1}^M ( Σ_{j=h_m+1}^∞ λ̃_j^{(m)} )^{p*/2} )^{2/p*} ) ),   (∗)

where for (∗) we used the subadditivity of the p*-th root. Note that λ̃_j^{(m)} ≤ λ_j^{(m)} for all j, m by the Lidskii–Mirsky–Wielandt theorem, since Eφ_m(x) ⊗ φ_m(x) = Eφ̃_m(x) ⊗ φ̃_m(x) + Eφ_m(x) ⊗ Eφ_m(x). Thus, by the subadditivity of the root function,

R_r(H̃_p) ≤ √( r Σ_{m=1}^M h_m / n ) + D √( (e p*²/n) ( B M^{2/p*}/n + ∥( Σ_{j=h_m+1}^∞ λ_j^{(m)} )_{m=1}^M∥_{p*/2} ) )
≤ √( r Σ_{m=1}^M h_m / n ) + √( (e p*² D²/n) ∥( Σ_{j=h_m+1}^∞ λ_j^{(m)} )_{m=1}^M∥_{p*/2} ) + √(Be) D M^{1/p*} p* / n.   (7)

STEP 4: BOUNDING THE COMPLEXITY OF THE ORIGINAL CLASS. Now note that for all nonnegative integers h_m we either have
n^{−1/2} min( √r, D √( ∥(tr(J_m))_{m=1}^M∥_{p*/2} ) ) ≤ ( (e p*² D²/n) ∥( Σ_{j=h_m+1}^∞ λ_j^{(m)} )_{m=1}^M∥_{p*/2} )^{1/2}

(in case all h_m are zero), or it holds that

n^{−1/2} min( √r, D √( ∥(tr(J_m))_{m=1}^M∥_{p*/2} ) ) ≤ ( r Σ_{m=1}^M h_m / n )^{1/2}

(in case at least one h_m is nonzero), so that in any case we get

n^{−1/2} min( √r, D √( ∥(tr(J_m))_{m=1}^M∥_{p*/2} ) ) ≤ ( r Σ_{m=1}^M h_m / n )^{1/2} + ( (e p*² D²/n) ∥( Σ_{j=h_m+1}^∞ λ_j^{(m)} )_{m=1}^M∥_{p*/2} )^{1/2}.

Thus the following preliminary bound follows from (4) and (7):

R_r(H_p) ≤ √( 4 r Σ_{m=1}^M h_m / n ) + √( (4 e p*² D²/n) ∥( Σ_{j=h_m+1}^∞ λ_j^{(m)} )_{m=1}^M∥_{p*/2} ) + √(Be) D M^{1/p*} p* / n,   (8)

for all nonnegative integers h_m ≥ 0. Later, we will use the above bound (8) for the computation of the excess loss; however, to gain more insight into the bound's properties, we express it in terms of the truncated spectra of the kernels at the scale r as follows.

STEP 5: RELATING THE BOUND TO THE TRUNCATION OF THE SPECTRA OF THE KERNELS. Next, we notice that for all nonnegative real numbers A₁, A₂ and all nonnegative a₁, a₂ ∈ R^M it holds for all q ≥ 1 that

√A₁ + √A₂ ≤ √( 2(A₁ + A₂) ),   (9)
∥a₁∥_q + ∥a₂∥_q ≤ 2^{1−1/q} ∥a₁ + a₂∥_q ≤ 2 ∥a₁ + a₂∥_q   (10)

(the first statement follows from the concavity of the square root function, and the second one is readily proved; see Lemma C.3 in the supplementary material), and thus

R_r(H_p) ≤ √( (16/n) ∥( r M^{1−2/p*} h_m + e p*² D² Σ_{j=h_m+1}^∞ λ_j^{(m)} )_{m=1}^M∥_{p*/2} ) + √(Be) D M^{1/p*} p* / n,

where we used that for all nonnegative a ∈ R^M and 0 < q < p ≤ ∞ it holds (ℓq-to-ℓp conversion, via Hölder's inequality) that

∥a∥_q = ⟨1, a^q⟩^{1/q} ≤ ( ∥1∥_{(p/q)*} ∥a^q∥_{p/q} )^{1/q} = M^{1/q − 1/p} ∥a∥_p.   (11)

Since the above holds for all nonnegative integers h_m, the result follows, completing the proof.

2.1 Lower and Excess Risk Bounds

To investigate the tightness of the presented upper bounds on the LRC of H_p, we consider the case where φ_1(x), …, φ_M(x) are i.i.d.; for example, this happens if the original input space X is R^M, the original input variable x ∈ X has i.i.d. coordinates, and the kernels k_1, …, k_M are identical and each act on a different coordinate of x.

Theorem 4 (LOWER BOUND). Assume that the kernels are centered and i.i.d. Then there is an absolute constant c such that if λ_1^{(1)} ≥ 1/(nD²), then for all r ≥ 1/n and p ≥ 1,

R_r(H_{p,D,M}) ≥ √( (c/n) Σ_{j=1}^∞ min( rM, D² M^{2/p*} λ_j^{(1)} ) ).   (12)

Comparing the above lower bound with the upper bounds, we observe that the upper bound of Theorem 2 for centered, identical, independent kernels is of the order O( √( (1/n) Σ_{j=1}^∞ min( rM, D² M^{2/p*} λ_j^{(1)} ) ) ), thus matching the rate of the lower bound (the same holds for the bound of Theorem 3). This shows that the upper bounds of the previous section are tight.

As an application of our results to prediction problems such as classification or regression, we also bound the excess loss of empirical risk minimization, f̂ := argmin_f (1/n) Σ_{i=1}^n l(f(x_i), y_i), with respect to a loss function l:

P(l_f̂ − l_{f*}) := E l(f̂(x), y) − E l(f*(x), y),   where f* := argmin_f E l(f(x), y).

We use the analysis of Bartlett et al. [2] to show the following excess risk bound under the assumption of algebraically decreasing eigenvalues of the kernel matrices, i.e., ∃ d > 0, α > 1, ∀m: λ_j^{(m)} ≤ d j^{−α} (proof shown in the supplementary material E):

Theorem 5. Assume that ∥k∥_∞ ≤ B and ∃ d > 0, α > 1, ∀m: λ_j^{(m)} ≤ d j^{−α}.
Let l be a Lipschitz continuous loss with constant L and assume there is a positive constant F such that P(f − f*)² ≤ F P(l_f − l_{f*}) for all f ∈ F. Then for all z > 0, with probability at least 1 − e^{−z}, the excess loss of the multi-kernel class H_p can be bounded for p ∈ [1, 2] as

P(l_f̂ − l_{f*}) ≤ min_{t∈[p,2]} 186 ( ((3−α)/(1−α)) d D² L² t*² )^{1/(1+α)} F^{(α−1)/(α+1)} M^{1 + (2/(1+α))(1/t* − 1)} n^{−α/(1+α)} + 47 √B D L M^{1/t*} t* / n + ( 22 B D L M^{1/t*} + 27 F ) z / n.

We see from the above bound that convergence can be almost as slow as O( p* M^{1/p*} n^{−1/2} ) (if α ≈ 1 is small) and almost as fast as O(n^{−1}) (if α is large).

3 Interpretation of the Bounds

In this section, we discuss the rates obtained in Theorem 5 by the local analysis, that is,

∀t ∈ [p, 2]:  P(l_f̂ − l_{f*}) = O( (t* D)^{2/(1+α)} M^{1 + (2/(1+α))(1/t* − 1)} n^{−α/(1+α)} ).   (13)

On the other hand, the global Rademacher complexity directly leads to a bound of the form [8]

∀t ∈ [p, 2]:  P(l_f̂ − l_{f*}) = O( t* D M^{1/t*} n^{−1/2} ).   (14)

To compare the above rates, we first assume p ≥ (log M)*, so that the best choice is t = p. Clearly, the rate obtained through the local analysis is better in n since α > 1. Regarding the rate in the number of kernels M and the radius D, a straightforward calculation shows that the local analysis improves over the global one whenever M^{1/p}/D = O(√n). Interestingly, this "phase transition" does not depend on α (i.e., the "complexity" of the kernels), but only on p. Second, if p ≤ (log M)*, the best choice in (13) and (14) is t = (log M)*, so that

P(l_f̂ − l_{f*}) ≤ O( min( M n^{−1}, min_{t∈[p,2]} t* D M^{1/t*} n^{−1/2} ) )   (15)

and the phase transition occurs for M/(D log M) = O(√n). Note that when letting α → ∞, the classical case of aggregation of M basis functions is recovered. This situation is to be compared to the sharp analysis of the optimal convergence rate of convex aggregation of M functions obtained by [27] in the framework of squared-error-loss regression, which is shown to be O( min( M/n, ( (1/n) log(M/√n) )^{1/2} ) ).
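The phase transition between the local bound (13) and the global bound (14) can be checked numerically. The sketch below is our own illustration: it drops all absolute constants and uses illustrative parameter values, so it shows only the qualitative crossover around M^{1/p}/D ≈ √n:

```python
def conj(p):
    """Conjugate exponent p* = p / (p - 1), for p > 1."""
    return p / (p - 1.0)

def local_rate(n, M, D, p, alpha):
    """Main term of the local-analysis rate (13), constants dropped."""
    t_star = conj(p)
    return (t_star * D) ** (2.0 / (1 + alpha)) \
        * M ** (1 + (2.0 / (1 + alpha)) * (1.0 / t_star - 1)) \
        * n ** (-alpha / (1.0 + alpha))

def global_rate(n, M, D, p):
    """Global-analysis rate (14), constants dropped."""
    t_star = conj(p)
    return t_star * D * M ** (1.0 / t_star) * n ** (-0.5)

# For fixed M, D, p the local rate wins once n is large enough,
# roughly when M**(1/p) / D = O(sqrt(n)).
M, D, p, alpha = 100, 1.0, 1.5, 2.0
assert local_rate(10**8, M, D, p, alpha) < global_rate(10**8, M, D, p)
assert local_rate(10, M, D, p, alpha) > global_rate(10, M, D, p)
```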
This corresponds to the setting studied here with D = 1, p = 1, and α → ∞, and we see that in this case our bound recovers (up to log factors) this sharp bound and the related phase transition phenomenon. Please note that, by introducing an inequality in Eq. (5), Assumption (U)—a similar assumption was also used in [23]—can be relaxed to a more general, RIP-like assumption as used in [16]; this comes at the expense of an additional factor in the bounds (details omitted here).

When Can Learning Kernels Help Performance? As a practical application of the presented bounds, we analyze the impact of the norm parameter p on the accuracy of ℓp-norm MKL in various learning scenarios, showing why an intermediate p often turns out to be optimal in practical applications. As indicated in the introduction, there is empirical evidence that the performance of ℓp-norm MKL crucially depends on the choice of the norm parameter p (for example, cf. Figure 1 in the introduction).

Figure 2: Illustration of the three analyzed learning scenarios (TOP), differing in the soft sparsity of the Bayes hypothesis w* (parametrized by β), and corresponding values of the bound factor ν_t as a function of p (BOTTOM): a soft sparse (LEFT, β = 2), an intermediate non-sparse (CENTER, β = 1), and an almost uniform w* (RIGHT, β = 0.5).

The aim of this section is to relate the theoretical analysis presented here to this empirically observed phenomenon. To start with, first note that the choice of p only affects the excess risk bound through the factor (cf. Theorem 5 and Equation (13))

ν_t := min_{t∈[p,2]} ( D_p t* )^{2/(1+α)} M^{1 + (2/(1+α))(1/t* − 1)}.

Let us assume that the Bayes hypothesis can be represented by w* ∈ H such that the block components satisfy ∥w*_m∥₂ = m^{−β}, m = 1, …, M, where β ≥ 0 is a parameter parametrizing the "soft sparsity" of the components.
For example, the cases β ∈ {0.5, 1, 2} are shown in Figure 2 for M = 2 and rank-1 kernels. If n is large, the best bias–complexity trade-off for a fixed p will correspond to a vanishing bias, so that the best choice of D will be close to the minimal value such that w* ∈ H_{p,D}, that is, D_p = ∥w*∥_p. Plugging this value for D_p into the above, the bound factor becomes

ν_p := ∥w*∥_p^{2/(1+α)} min_{t∈[p,2]} t*^{2/(1+α)} M^{1 + (2/(1+α))(1/t* − 1)}.

We can now plot the value ν_p as a function of p, fixing α, M, and β. We ran this simulation for α = 2, M = 1000, and β ∈ {0.5, 1, 2}; the results are shown in Figure 2. Note that the soft sparsity of w* decreases from the left-hand to the right-hand side. We observe that in the "soft sparsest" scenario (LEFT) the minimum is attained for a quite small p = 1.2, while for the intermediate case (CENTER) p = 1.4 is optimal, and finally, in the uniformly non-sparse scenario (RIGHT), the choice p = 2, i.e., the regular SVM, is optimal. This means that if the true Bayes hypothesis has an intermediately dense representation (which is frequently encountered in practical applications), our bound gives the strongest generalization guarantees to ℓp-norm MKL using an intermediate choice of p.

4 Conclusion

We derived a sharp upper bound on the local Rademacher complexity of ℓp-norm multiple kernel learning. We also proved a lower bound that matches the upper one and shows that our result is tight. Using the local Rademacher complexity bound, we derived an excess risk bound that attains the fast rate of O(n^{−α/(1+α)}), where α is the minimum eigenvalue decay rate of the individual kernels. In a practical case study, we found that the optimal value of that bound depends on the true Bayes-optimal kernel weights. If the true weights exhibit soft sparsity but are not strongly sparse, then the generalization bound is minimized for an intermediate p.
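The ν_p curves of Figure 2 (BOTTOM) can be re-created with a few lines. The following is our own re-implementation of the simulation, not the authors' code, and the absolute scale of the values need not match the figure's axes:

```python
import numpy as np

def nu(p, alpha, M, beta):
    """Bound factor nu_p = ||w*||_p^{2/(1+alpha)}
    * min_{t in [p,2]} t*^{2/(1+alpha)} * M^{1 + (2/(1+alpha)) (1/t* - 1)}
    for block norms ||w*_m||_2 = m^{-beta}."""
    m = np.arange(1, M + 1)
    w_norm = np.sum(m ** (-beta * p)) ** (1.0 / p)    # ||w*||_{2,p}
    ts = np.linspace(max(p, 1.001), 2.0, 400)         # grid over t in [p, 2]
    t_star = ts / (ts - 1.0)                          # conjugate exponents
    vals = t_star ** (2.0 / (1 + alpha)) \
        * M ** (1 + (2.0 / (1 + alpha)) * (1.0 / t_star - 1))
    return w_norm ** (2.0 / (1 + alpha)) * float(vals.min())

alpha, M = 2.0, 1000
grid = [1.0 + 0.05 * i for i in range(21)]            # p in [1, 2]
best_p = {beta: min(grid, key=lambda p: nu(p, alpha, M, beta))
          for beta in (0.5, 1.0, 2.0)}
# Sparser Bayes hypotheses (larger beta) favor smaller p.
```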
This is not only intuitive but also supports empirical studies showing that sparse MKL (p = 1) rarely works in practice, while some intermediate choice of p can improve performance.

Acknowledgments. We thank Peter L. Bartlett and K.-R. Müller for valuable comments. This work was supported by the German Science Foundation (DFG MU 987/6-1, RA 1894/1-1) and by the European Community's 7th Framework Programme under the PASCAL2 Network of Excellence (ICT-216886) and under the E.U. grant agreement 247022 (MASH Project).

References
[1] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proc. 21st ICML. ACM, 2004.
[2] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497–1537, 2005.
[3] R. R. Bouckaert, E. Frank, M. A. Hall, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. WEKA—experiences with a Java open-source project. Journal of Machine Learning Research, 11:2533–2541, 2010.
[4] C. Campbell and Y. Ying. Learning with Support Vector Machines. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2011.
[5] C. Cortes. Invited talk: Can learning kernels help performance? In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 1:1–1:1, New York, NY, USA, 2009. ACM. Video: http://videolectures.net/icml09_cortes_clkh/.
[6] C. Cortes, A. Gretton, G. Lanckriet, M. Mohri, and A. Rostamizadeh. Proceedings of the NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, 2008. URL http://videolectures.net/lkasok08_whistler/, Video http://www.cs.nyu.edu/learning_kernels.
[7] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2009.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. Generalization bounds for learning kernels.
In Proceedings, 27th ICML, 2010.
[9] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[10] P. V. Gehler and S. Nowozin. Let the kernel figure it out: Principled learning of pre-processing for kernel classifiers. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2009.
[11] R. Ibragimov and S. Sharakhmetov. The best constant in the Rosenthal inequality for nonnegative random variables. Statistics & Probability Letters, 55(4):367–376, 2001.
[12] J.-P. Kahane. Some Random Series of Functions. Cambridge University Press, 2nd edition, 1985.
[13] M. Kloft, U. Brefeld, S. Sonnenburg, P. Laskov, K.-R. Müller, and A. Zien. Efficient and accurate lp-norm multiple kernel learning. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 997–1005. MIT Press, 2009.
[14] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. Lp-norm multiple kernel learning. Journal of Machine Learning Research, 12:953–997, March 2011.
[15] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. Annals of Statistics, 34(6):2593–2656, 2006.
[16] V. Koltchinskii and M. Yuan. Sparsity in multiple kernel learning. Annals of Statistics, 38(6):3660–3695, 2010.
[17] S. Kwapień and W. A. Woyczyński. Random Series and Stochastic Integrals: Single and Multiple. Birkhäuser, Basel and Boston, MA, 1992.
[18] G. Lanckriet, N. Cristianini, L. E. Ghaoui, P. Bartlett, and M. I. Jordan. Learning the kernel matrix with semi-definite programming. JMLR, 5:27–72, 2004.
[19] S. Mendelson. On the performance of kernel classes. J. Mach. Learn. Res., 4:759–771, December 2003.
[20] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099–1125, 2005.
[21] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf.
An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, May 2001.
[22] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491–2521, 2008.
[23] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. CoRR, abs/1008.3654, 2010.
[24] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[25] J. R. Searle. Minds, brains, and programs. Behavioral and Brain Sciences, 3(03):417–424, 1980.
[26] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7:1531–1565, July 2006.
[27] A. Tsybakov. Optimal rates of aggregation. In B. Schölkopf and M. Warmuth, editors, Computational Learning Theory and Kernel Machines (COLT-2003), volume 2777 of Lecture Notes in Artificial Intelligence, pages 303–313. Springer, 2003.
A Global Structural EM Algorithm for a Model of Cancer Progression

Ali Tofigh, School of Computer Science, McGill Centre for Bioinformatics, McGill University, Canada. ali.tofigh@mcgill.ca
Erik Sjölund, Stockholm Bioinformatics Center, Stockholm University, Sweden. erik.sjolund@sbc.su.se
Mattias Höglund, Department of Oncology, Lund University, Sweden. mattias.hoglund@med.lu.se
Jens Lagergren, Science for Life Lab, Swedish e-Science Research Center, Stockholm Bioinformatics Center, School of Computer Science and Communication, KTH Royal Institute of Technology, Sweden. jensl@csc.kth.se

Abstract. Cancer has complex patterns of progression that include converging as well as diverging progressional pathways. Vogelstein's path model of colon cancer was a pioneering contribution to cancer research. Since then, several attempts have been made at obtaining mathematical models of cancer progression, devising learning algorithms, and applying these to cross-sectional data. Beerenwinkel et al. provided, what they coined, EM-like algorithms for Oncogenetic Trees (OTs) and mixtures of such. Given the small size of current and future data sets, it is important to minimize the number of parameters of a model. For this reason, we too focus on tree-based models and introduce Hidden-variable Oncogenetic Trees (HOTs). In contrast to OTs, HOTs allow for errors in the data and thereby provide more realistic modeling. We also design global structural EM algorithms for learning HOTs and mixtures of HOTs (HOT-mixtures). The algorithms are global in the sense that, during the M-step, they find a structure that yields a global maximum of the expected complete log-likelihood rather than merely one that improves it. The algorithm for single HOTs performs very well on reasonable-sized data sets, while that for HOT-mixtures requires data sets of sizes obtainable only with tomorrow's more cost-efficient technologies.
1 Introduction

In the learning literature, there are several previous results on learning probabilistic tree models, including various Expectation Maximization-based inference algorithms. In [1], trees were considered where the vertices were associated with observable variables, and an efficient algorithm for finding a globally optimal Maximum Likelihood (ML) solution was described. Subsequently, [2] presented a structural Expectation Maximization (EM) algorithm for finding the ML mixture of trees as well as MAP solutions with respect to several priors. There are three axes along which it is natural to compare these as well as other results. The first axis is the type of dependency structure allowed. The second axis is the type of variables used—observable only, or hidden and observable—and the type of relations they can have. The third axis is the type of inference algorithms that are known for the model. It is interesting in relation to the present result to ask in what respect the structural EM algorithm of [3] constitutes an improvement when compared with Friedman's earlier structural EM algorithm [4]. In fact, it may seem like the former constitutes no improvement at all, since the latter is concerned with more general dependency structures. Notice, however, that it is customary to distinguish between EM algorithms and generalized EM algorithms for inferring numerical parameters, the difference being that in the M-step of the former, parameters are found that maximize the expected complete log-likelihood, whereas in the latter, parameters are found that merely improve it. As Friedman points out in his article on the Bayesian Structural EM algorithm [4], the same distinction can be made regarding the maximization over structures. Clearly, it would be convenient to use the same terminology for structural EM algorithms as for ordinary EM algorithms.
However, the distinction is often not made for structural EM algorithms, and even researchers who consider themselves experts in the field seem to be unaware of it. For this reason, we define global structural EM algorithms to be EM algorithms that in the M-step find a structure yielding a global maximum of the expected complete log-likelihood (as opposed to a structure that merely improves it). Equipped with this definition, we note that the phylogeny algorithm of [3] is a global structural EM algorithm, in contrast to the earlier algorithm [4]. Another example of a global structural EM algorithm is the learning algorithm for trees with hidden variables presented in [5]. In an effort to provide mathematical models of cancer progression, Desper et al. introduced the Oncogenetic Tree model, where observable variables corresponding to aberrations are associated with the vertices of a tree [6]. They then proceeded to show that an algorithm based on Edmonds's optimum branching algorithm will, with high probability, correctly reconstruct an Oncogenetic Tree T from sufficiently long series of data generated from T. The Oncogenetic Tree model suffers from two problems: monotonicity—an aberration associated with a child cannot occur unless the aberration associated with its parent has occurred—and limited structure—compared to a network, the tree structure severely limits the sets of progressional paths that can be modeled. In an attempt to remedy these problems, the Network Aberration Model was proposed [7, 8]. However, the computational problems associated with these network models are hard; for instance, no efficient EM algorithm for training is yet known. In another attempt, Beerenwinkel et al. used mixtures of Oncogenetic Trees to overcome the problem of limited structure, but without removing the monotonicity, and only obtaining an algorithm with an EM-like structure that has not been proved to deliver a locally optimal maximum-likelihood solution [9, 10, 11].
Beerenwinkel and coworkers used Conjunctive Bayesian Networks (CBNs) to model cancer progression [12, 13]. In order to overcome the limited ability of CBNs to model noisy biological data, [14] introduced the hidden CBN model. A hidden CBN can be obtained from a CBN by considering each variable in the CBN to be hidden and associating an observable variable with each hidden variable. The hidden CBN also has a common error parameter specifying the probability that any individual observable variable differs from its associated hidden variable. In a hidden CBN, values are first generated for the hidden variables, and then, the observable variables obtain values based both on the hidden variables and the error parameter. We present the Hidden-variable Oncogenetic Tree (HOT) model where a hidden and an observable variable are associated with each vertex of a rooted directed tree. The value of the hidden variable indicates whether or not the tumor progression has reached the vertex (a value of one means that cancer progression has reached the vertex and zero that it has not), while the value of the observable variable indicates whether a specific aberration has been detected (a value of one represents detection and zero the opposite). This interpretation provides several relations between the variables in a HOT. An asymmetric relation is required between the hidden variables associated with the two endpoints of an arc of the directed tree. Because of this asymmetry, the global structural EM algorithm that we derive for the HOT ML problem cannot, in contrast to many of the above mentioned algorithms, be based on a maximum spanning tree algorithm and is instead based on the optimal branching algorithm [15, 16, 17]. 
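A generative sketch may make the two layers of a HOT concrete. The sampler below is our own illustration: it propagates the hidden progression variables down the tree and then emits noisy observations. The false-positive rate `eps` for unreached vertices is an assumption of ours, since this excerpt does not specify the emission model for vertices the progression has not reached:

```python
import random

def sample_hot(children, arc_prob, obs_prob, root, eps=0.0, rng=random):
    """Sample (hidden, observed) 0/1 assignments from a HOT.

    arc_prob[(u, v)]: P(hidden_v = 1 | hidden_u = 1); hidden_v = 0 if hidden_u = 0.
    obs_prob[v]:      P(observed_v = 1 | hidden_v = 1).
    eps:              P(observed_v = 1 | hidden_v = 0)  (illustrative noise term).
    """
    hidden, observed = {root: 1}, {}
    stack = [root]
    while stack:                      # preorder traversal from the root
        u = stack.pop()
        for v in children.get(u, []):
            hidden[v] = int(hidden[u] == 1 and rng.random() < arc_prob[(u, v)])
            stack.append(v)
        if u != root:
            observed[u] = int(rng.random() < (obs_prob[u] if hidden[u] else eps))
    return hidden, observed

# The HOT of Figure 1(c): arc probabilities 0.5/0.25 and vertex
# (observation) probabilities 0.9/0.8.
children = {'root': ['-3p'], '-3p': ['+17q', '-4p'], '-4p': ['+Xp']}
arc_prob = {('root', '-3p'): 0.5, ('-3p', '+17q'): 0.25,
            ('-3p', '-4p'): 0.25, ('-4p', '+Xp'): 0.25}
obs_prob = {'-3p': 0.9, '+17q': 0.8, '-4p': 0.9, '+Xp': 0.8}
hidden, observed = sample_hot(children, arc_prob, obs_prob, 'root')
```

By construction the hidden variables are monotone along arcs, while an observation may miss a reached vertex with probability 1 − obs_prob—exactly the kind of measurement error that plain OTs cannot express.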
Having so rectified the monotonicity problem, we proceed to obtain a model allowing for a higher degree of structural variation by introducing mixtures of HOTs (HOT-mixtures) and, in contrast to Beerenwinkel et al., we derive a proper structural EM algorithm for training these. In the near future, multiple types of high-throughput (HTP) data will be available for large collections of tumors, providing great opportunities as well as computational challenges for progression model inference. One of the main motivations for our models and inference methods is that they enable analysis of future HTP data, which most likely will require the ability to handle large numbers of mutational events. In this paper, however, we apply our methods to cytogenetic data for colon and kidney cancer, mostly due to the availability of cytogenetic data for large numbers of tumors provided by the Mitelman database [18].

2 HOTs and the novel global structural EM algorithm

2.1 Hidden-variable Oncogenetic Trees

We will denote the set of observed data points D and an individual data point X. In Section 3, we will apply our methods to copy number aberration (CNA) data, i.e., a data point will be a set of observed copy number aberrations, but in general, more complex events can be used. A rooted directed tree T consists of a set of vertices, denoted V(T), and a set of arcs, denoted A(T). An arc ⟨u, v⟩ is directed from the vertex u, called its tail, towards the vertex v, called its head. If there is an arc with tail p and head u in a directed tree T, then p is called the parent of u in T and denoted p(u) (the tree T will be clear from context). An Oncogenetic Tree (OT) is a rooted directed tree where an aberration is associated with each vertex and a probability with each arc.
One can view an OT as generating a set of aberrations by first visiting the root and then continuing towards the leaves (preorder), visiting each vertex with the probability of its incoming arc if the parent has been visited, and with probability zero if the parent has not been visited. The result of the progression is the set of aberrations associated with the visited vertices.

Figure 1: (a) A rooted directed tree with the root at the top. All arcs are directed downwards, i.e., away from the root. (b) An OT with probabilities associated with arcs and CNAs associated with vertices. (c) A HOT with probabilities associated with arcs (indicating the probability that the hidden variable associated with the head of the arc receives the value 1 given that the hidden variable associated with the tail has this value), and CNAs as well as probabilities associated with vertices (indicating the probability that the observable variable associated with the vertex receives the value 1 given that the hidden variable associated with the vertex has received this value). (d) A HOT-mixture consisting of two HOTs. The mixing probability for T1 is 0.7 and that for T2 is 0.3. So with probability 0.7 a synthetic tumor is generated from T1, and otherwise one is generated from T2.

In Figure 1(b), an OT for CNA is depicted (aberrations are written in the standard notation for CNAs in cytogenetic data, i.e., each represents a duplication (+) or deletion (-) of a specific chromosomal region). Notice that an aberration associated with a vertex cannot occur unless the aberration associated with its parent has occurred. For instance, the set {+Xp, +17q} cannot be generated by the OT in Figure 1(b).
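The preorder generative process just described is easy to simulate. The sketch below uses a toy tree loosely patterned on Figure 1(b); the function names and the exact probabilities are ours, not from the paper.

```python
import random

def sample_ot(children, arc_prob, root):
    """Generate one set of aberrations from an Oncogenetic Tree.

    children: dict vertex -> list of child vertices
    arc_prob: dict (parent, child) -> probability of the arc
    The root is always visited; a child can only be visited if its
    parent was visited, so the generated sets are always 'monotone'.
    """
    visited = set()
    stack = [root]
    while stack:
        u = stack.pop()
        visited.add(u)
        for v in children.get(u, []):
            if random.random() < arc_prob[(u, v)]:
                stack.append(v)
    visited.discard(root)          # the root carries no aberration
    return visited

# Toy tree loosely mimicking Figure 1(b); probabilities are illustrative.
children = {"r": ["-3p"], "-3p": ["+17q", "-4p"], "+17q": ["+Xp"]}
arc_prob = {("r", "-3p"): 0.5, ("-3p", "+17q"): 0.25,
            ("-3p", "-4p"): 0.25, ("+17q", "+Xp"): 0.25}

random.seed(0)
samples = [sample_ot(children, arc_prob, "r") for _ in range(10000)]
# Monotonicity: +Xp never occurs without its ancestors -3p and +17q.
assert all({"-3p", "+17q"} <= s for s in samples if "+Xp" in s)
```

The final assertion illustrates exactly the monotonicity property discussed above: a set such as {+Xp, +17q} without -3p is never produced.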
In a data-modeling context, this is highly undesirable, as data is typically noisy and bound to contain both false positives and false negatives. Our HOT model does not suffer from this problem. A Hidden-variable Oncogenetic Tree (HOT) is a directed tree where, just like in OTs, each vertex represents a specific aberration. Unlike in OTs, however, the progression of cancer is modeled with hidden variables associated with vertices and conditional probabilities associated with the arcs. The observations of the aberrations (the data) are modeled with a different set of random variables whose values are conditioned on the hidden variables.

Formally, a Hidden-variable Oncogenetic Tree (HOT) is a pair T = (T, Θ) where:

1. T is a rooted directed tree and Θ consists of two conditional probability distributions, θX(u) and θZ(u), for each vertex u;
2. two random variables are associated with each vertex: an observable variable X(u) and a hidden variable Z(u), each assuming the value 0 or 1;
3. the hidden variable associated with the root, Z(r), is defined to have the value 1;
4. for each non-root vertex u, θZ(u) is a conditional probability distribution on Z(u) conditioned on Z(p(u)), satisfying Pr[Z(u) = 1 | Z(p(u)) = 0] = ϵZ(u); and
5. for each non-root vertex u, θX(u) is a conditional probability distribution on X(u) conditioned on Z(u), satisfying Pr[X(u) = 1 | Z(u) = 0] = ϵX(u).

With respect to (4), one might argue that Pr[Z(u) = 1 | Z(p(u)) = 0] should be zero, since if the progression has not reached p(u) it should not be able to proceed to u. However, the derivation and implementation of the EM algorithm depend on the non-zero value of this probability, for much the same reasons that people use pseudo-counts [19]: once a parameter receives the value 0 in an EM algorithm for training, it will subsequently not be changed.
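Items (1)-(5) above can be turned into a small forward sampler. This is our own minimal sketch with made-up parameters; for simplicity it treats ϵZ and ϵX as global scalars.

```python
import random

def depth(u, parent, root):
    """Number of arcs from u up to the root."""
    d = 0
    while u != root:
        u = parent[u]
        d += 1
    return d

def sample_hot(parent, theta_z, theta_x, eps_z, eps_x, root):
    """Draw one (Z, X) pair from a HOT.

    parent:  dict non-root vertex -> its parent
    theta_z: dict u -> Pr[Z(u)=1 | Z(p(u))=1]
    theta_x: dict u -> Pr[X(u)=1 | Z(u)=1]
    eps_z:   Pr[Z(u)=1 | Z(p(u))=0]  (spontaneous progression, item 4)
    eps_x:   Pr[X(u)=1 | Z(u)=0]     (false-positive detection, item 5)
    """
    z = {root: 1}                    # item 3: Z(root) = 1
    order = sorted(parent, key=lambda u: depth(u, parent, root))
    for u in order:                  # parents are sampled before children
        p_one = theta_z[u] if z[parent[u]] == 1 else eps_z
        z[u] = int(random.random() < p_one)
    x = {u: int(random.random() < (theta_x[u] if z[u] == 1 else eps_x))
         for u in order}             # the root carries no observable
    return z, x

# Toy HOT on two aberrations; all parameter values are illustrative.
parent = {"-3p": "r", "+17q": "-3p"}
theta_z = {"-3p": 0.5, "+17q": 0.25}
theta_x = {"-3p": 0.9, "+17q": 0.8}
random.seed(1)
z, x = sample_hot(parent, theta_z, theta_x, eps_z=0.01, eps_x=0.01, root="r")
assert z["r"] == 1 and set(x) == {"-3p", "+17q"}
```

Because X is generated from Z through the error probabilities, the observed aberration set need not be monotone, which is precisely how the HOT avoids the monotonicity problem of OTs.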
Moreover, ϵZ has a natural interpretation: it corresponds to a small probability of spontaneous mutations occurring independently of the overall progressional path that the disease is following. Similar arguments apply to (5), where we interpret ϵX as the small probability of falsely detecting an aberration that is not actually present (corresponding to a false-positive test). We note here that it is possible to have CPDs where X(u) and Z(u) depend on both X(p(u)) and Z(p(u)), and even to let X(u) depend on all three of Z(u), X(p(u)), and Z(p(u)); our arguments can easily be extended to cover these cases, although we will not consider them further in the following text. Figure 1(c) shows an example of a HOT where ϵZ and ϵX have been omitted for clarity.

2.2 The novel global structural EM algorithm for HOTs

We have derived a global structural Expectation Maximization (EM) algorithm for inferring HOTs from data. According to standard EM theory [20], such an algorithm is obtained if there is a procedure that, given a HOT T, finds a HOT T' that maximizes the so-called expected complete log-likelihood (also known as the Q-term):

Q(T'; T) = \sum_{X \in D} \sum_{Z} Pr[Z | X, T] log Pr[Z, X | T'].

The likelihood of T' is then guaranteed to be at least as high as that of T, which immediately leads to an iterative procedure. In standard EM, the Q-term is maximized only over the parameters of a model, in our case the conditional probabilities, leaving the structure, i.e., the directed tree, unchanged. Friedman et al. [3] extended the use of EM algorithms from standard parameter estimation to also finding an optimal structure. In their case, the probabilistic model was reversible and the tree that maximized the expected complete log-likelihood could be obtained using a maximum spanning tree algorithm. In our case, the pairwise relations between hidden variables are asymmetric and a maximum spanning tree algorithm cannot be used.
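To see concretely why asymmetric pairwise scores rule out an undirected maximum-spanning-tree step, consider finding a maximum-weight arborescence by brute force over a toy weight matrix. This exponential-time sketch is ours and for illustration only; the actual method uses Edmonds's polynomial-time optimum branching algorithm for the same problem.

```python
from itertools import product

def best_arborescence(weights, root):
    """Exhaustively find a maximum-weight arborescence rooted at `root`.

    weights: dict (u, v) -> weight of the directed arc u -> v; in general
    weights[(u, v)] != weights[(v, u)], which is exactly why an undirected
    maximum-spanning-tree algorithm is not applicable.
    """
    verts = sorted({u for u, v in weights} | {v for u, v in weights})
    non_root = [v for v in verts if v != root]
    best, best_w = None, float("-inf")
    for choice in product(verts, repeat=len(non_root)):
        assign = dict(zip(non_root, choice))       # v -> its parent
        if any(v == p or (p, v) not in weights for v, p in assign.items()):
            continue
        def reaches_root(v):                       # reject cyclic choices
            seen = set()
            while v != root:
                if v in seen:
                    return False
                seen.add(v)
                v = assign[v]
            return True
        if not all(reaches_root(v) for v in non_root):
            continue
        w = sum(weights[(p, v)] for v, p in assign.items())
        if w > best_w:
            best, best_w = assign, w
    return best, best_w

# Asymmetric toy weights: the arc a -> b is attractive, b -> a is not.
w = {("r", "a"): 1.0, ("r", "b"): 0.1,
     ("a", "b"): 2.0, ("b", "a"): -2.0}
tree, total = best_arborescence(w, "r")
assert tree == {"a": "r", "b": "a"} and total == 3.0
```

The chosen tree uses the arc a -> b; with symmetric weights the direction of this arc would be irrelevant, but here reversing it would change the score from 2.0 to -2.0.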
However, as we show below, the Q-term can be maximized by instead using Edmonds's optimal branching algorithm. When dealing with mixtures of HOTs in later sections, we will need to maximize a weighted version of the Q-term, which we introduce already here:

Q_f(T'; T) = \sum_{X \in D} \sum_{Z} f(X) Pr[Z | X, T] log Pr[Z, X | T'],   (1)

where f is a weight function on the data points in D that can be computed in constant time. By expanding and rearranging the terms in (1) (see the appendix), it can be shown that Q_f(T'; T) equals

\sum_{⟨u,v⟩ \in A(T')} \sum_{a,b \in \{0,1\}} \sum_{X \in D} f(X) Pr[Z(v) = a, Z(u) = b | X, T] log Pr[Z(v) = a | Z(u) = b, θ'_{Z(v)}]
+ \sum_{⟨u,v⟩ \in A(T')} \sum_{σ,a \in \{0,1\}} \sum_{X \in D : X(v) = σ} f(X) Pr[Z(v) = a | X, T] log Pr[X(v) = σ | Z(v) = a, θ'_{X(v)}].

As long as the directed tree T' is fixed, the standard EM methodology (see for instance [19]) can be used to find the Θ' that maximizes Q_f(T', Θ'; T) as follows. First, let

A_u(a, b) = \sum_{X \in D} f(X) Pr[Z(u) = a, Z(p'(u)) = b | X, T]   (2)

and

B_u(σ, a) = \sum_{X \in D : X(u) = σ} f(X) Pr[Z(u) = a | X, T].   (3)

Then the Θ' that, for a fixed T', maximizes Q_f(T'; T) (i.e., Q_f(T', Θ'; T)) is given by

Pr[Z(u) = a | Z(p'(u)) = b, θ'_{Z(u)}] = A_u(a, b) / \sum_{a' \in \{0,1\}} A_u(a', b)

and

Pr[X(u) = σ | Z(u) = a, θ'_{X(u)}] = B_u(σ, a) / \sum_{σ' \in \{0,1\}} B_u(σ', a).

The time required for computing the right-hand sides of (2) and (3) is O(n²), where n is the number of aberrations (the probabilities Pr[Z(u) = a, Z(v) = b | X, T] can be computed using techniques analogous to those appearing in [3]). For each arc ⟨p, u⟩ of T', using the CPDs defined above, we define the weight of the arc, specific to this tree, to be

\sum_{a,b \in \{0,1\}} \sum_{X \in D} f(X) Pr[Z(u) = a, Z(p) = b | X, T] log Pr[Z(u) = a | Z(p) = b, θ'_{Z(u)}]
+ \sum_{a \in \{0,1\}} \sum_{X \in D} f(X) Pr[Z(u) = a | X, T] log Pr[X(u) | Z(u) = a, θ'_{X(u)}].

We now make two important observations, from which it follows how to maximize the weighted expected complete log-likelihood over all directed trees.
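Before turning to the structure search, the parameter step above, normalizing the expected counts A_u(a, b) and B_u(σ, a) into CPDs, can be sketched as follows; the function names and the counts are toy choices of ours.

```python
def m_step_cpds(A, B):
    """Turn expected counts into CPDs by normalization.

    A[u][a][b]: weighted posterior count for Z(u)=a given Z(p'(u))=b
    B[u][s][a]: weighted posterior count for X(u)=s given Z(u)=a
    Returns dicts u -> {(a, b): Pr[Z(u)=a | Z(p'(u))=b]} and
    u -> {(s, a): Pr[X(u)=s | Z(u)=a]}.
    """
    theta_z = {u: {(a, b): A[u][a][b] / (A[u][0][b] + A[u][1][b])
                   for a in (0, 1) for b in (0, 1)} for u in A}
    theta_x = {u: {(s, a): B[u][s][a] / (B[u][0][a] + B[u][1][a])
                   for s in (0, 1) for a in (0, 1)} for u in B}
    return theta_z, theta_x

# Toy expected counts for a single vertex u (made-up numbers).
A = {"u": [[2.0, 6.0], [8.0, 4.0]]}
B = {"u": [[1.0, 3.0], [9.0, 7.0]]}
tz, tx = m_step_cpds(A, B)
assert abs(tz["u"][(1, 0)] - 0.8) < 1e-9   # 8 / (2 + 8)
assert abs(tx["u"][(1, 1)] - 0.7) < 1e-9   # 7 / (3 + 7)
```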
First, notice that if two directed trees T' and T'' have a common arc ⟨p, u⟩, then this arc has the same weight in both trees (since the weight of the arc does not depend on any other arc in the tree). Let G be the directed, complete, arc-weighted graph with the same vertex set as the tree T and with arc weights given by the above expression. An optimal arborescence of a directed graph is a rooted directed tree on the same set of vertices as the directed graph, i.e., a subgraph that has exactly one directed path from one specified vertex, called the root, to every other vertex, and that has maximum arc-weight sum among all such rooted directed trees. Second, for any arborescence T' of G, the sum of the arc weights equals, by the construction of G, the maximum value of Q_f(T', Θ'; T) over all Θ'. From this it follows that a (spanning) directed tree T' is an optimal arborescence of G if and only if T' maximizes the Q_f-term. And so, applying Edmonds's algorithm to G gives the desired directed tree. Tarjan's implementation of Edmonds's algorithm runs in quadratic time [15, 16, 17]. Hence, the total running time of the algorithm is O(|D| · n²).

2.3 HOT-mixtures

In this section we extend our model to HOT-mixtures by including an initial random choice of one of several HOTs and letting the final outcome be generated by the chosen HOT. We will also obtain an EM-based model-training algorithm for HOT-mixtures by showing how to optimize the expected complete log-likelihood for HOT-mixtures. Formally, we will use k HOTs T1, ..., Tk and a random mixing variable I that takes values in 1, ..., k. The probability that I = i is denoted λi, and λ = (λ1, ..., λk) is a vector of parameters of the model in addition to those of the HOTs (λ1, ..., λk are constrained to sum to 1). The following notation is convenient:

γ_i(X) = Pr[I = i | X, M] = λ_i Pr[X | T_i] / \sum_{j \in [k]} λ_j Pr[X | T_j].
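The responsibilities γ_i(X) just defined, together with the standard EM update of the mixing weights (λ'_i proportional to the total responsibility of component i), can be sketched as follows; the likelihood values are placeholders of ours.

```python
def responsibilities(lik, lam):
    """lik[i][n] = Pr[X_n | T_i]; lam[i] = mixing weight of component i.
    Returns gamma[i][n] = Pr[I = i | X_n, M]."""
    k, N = len(lik), len(lik[0])
    gamma = [[0.0] * N for _ in range(k)]
    for n in range(N):
        z = sum(lam[j] * lik[j][n] for j in range(k))
        for i in range(k):
            gamma[i][n] = lam[i] * lik[i][n] / z
    return gamma

def update_mixing_weights(gamma):
    """Standard EM update: lambda'_i proportional to sum_n gamma_i(X_n)."""
    N = len(gamma[0])
    return [sum(g) / N for g in gamma]

lik = [[0.9, 0.2], [0.1, 0.8]]        # two components, two data points
gamma = responsibilities(lik, [0.5, 0.5])
lam_new = update_mixing_weights(gamma)
assert abs(gamma[0][0] - 0.9) < 1e-9   # 0.45 / (0.45 + 0.05)
assert abs(sum(lam_new) - 1.0) < 1e-9
```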
For a HOT-mixture, the expected complete log-likelihood can be expressed as follows:

\sum_{X \in D} \sum_{Z,I} Pr[Z, I | X, M] log Pr[Z, I, X | M'].   (4)

Using standard EM methodology, it is possible to show that (4) can be maximized by independently maximizing

\sum_{i \in [k]} \sum_{X \in D} γ_i(X) log λ'_i   (5)

and, for each i = 1, ..., k, maximizing

\sum_{X \in D} \sum_{Z} Pr[Z | X, T_i] γ_i(X) log Pr[Z, X | T'_i].   (6)

Finding a λ' = (λ'_1, ..., λ'_k) maximizing (5) is straightforward (see for instance [19]) and, for each i = 1, ..., k, finding a T'_i that maximizes the weighted Q-term in (6) can be done as described in the previous subsections (with γ_i(X) weighting the data points).

3 Results

In this section, we report results obtained by applying our algorithms to synthetic and cytogenetic cancer data. In the standard version of the EM algorithm, there are four parameters per edge of a HOT. The number of parameters can be reduced by letting some parameters be global, e.g., by letting ϵX(u) = ϵX(u') for all vertices u and u'. There are three parameters whose global estimation is desirable: ϵX, ϵZ, and Pr[X(u) = 0 | Z(u) = 1]. However, for technical reasons, requiring that ϵZ be global makes it impossible to derive an EM algorithm. Therefore, we will distinguish between two different versions of the algorithm: one with free parameters and one with global parameters. The free-parameter version then corresponds to the standard EM algorithm, while the global-parameter version corresponds to letting ϵX and Pr[X(u) = 0 | Z(u) = 1] be global. When evaluating the global-parameter version of the algorithm using synthetic data, we will follow the convention of letting all three error parameters be global when generating data. Other conventions used for all the tests described here include the following. We enforce an upper limit of 0.5 on ϵZ and ϵX. Also, for each data set, we first run the algorithm on a set of randomly generated start HOTs or start HOT-mixtures for 10 iterations.
The HOT or HOT-mixture that results in the best likelihood is then run until convergence. Unless stated otherwise, the number of start trees and mixtures is 100.

3.1 Tests on Synthetic Data Sets

3.1.1 Single HOTs

We generated random HOTs with 10, 25, and 40 vertices, with parameters on the edges chosen uniformly in the intervals

Pr[Z(u) = 1 | Z(p(u)) = 1] ∈ [0.1, 1.0],   (7)
Pr[X(u) = 0 | Z(u) = 1], ϵX, ϵZ ∈ [0.01, q],   (8)

where q ∈ {0.05, 0.10, 0.25, 0.50}. For each combination, we generated 100 HOTs, for a total of 3 × 4 × 100 = 1200 HOTs. Figure 2 shows the results of our experiments on synthetic data. An edge of the generated HOT connecting one specific aberration to another is considered to have been correctly recovered if the HOT obtained from the algorithm connects the same two aberrations in the same direction. We also compared the performance of our algorithms with that of Mtreemix by Beerenwinkel et al. [11]. The data generated from our single HOTs were passed to Mtreemix and the same criteria as above were used to detect correctly recovered edges (no special options were set when running Mtreemix on data generated with global parameters, since no distinction between global and free parameters can be made for oncogenetic trees). Mtreemix outperforms our methods when the HOTs and the error parameters are small, while our algorithms significantly outperform Mtreemix as the size of the HOTs or the error parameters become larger.
[Figure 2 panels: (a) EM algorithm with free parameters vs. Mtreemix; (b) EM algorithm with global parameters vs. Mtreemix. Each panel plots % correctly recovered edges against # data points, with curves for q = 0.05, 0.1, 0.25, 0.5 and for 10, 25, and 40 vertices.]

Figure 2: Histograms showing the mean percentage of edges that were correctly recovered by the algorithm for the free parameter case, together with error bars showing one standard deviation.

[Figure 3 panels: q = 0.05, q = 0.1, and q = 0.25; each plots % correctly recovered edges against # data points for mixing probabilities 0.5 vs 0.5, 0.3 vs 0.7, and 0.1 vs 0.9.]

Figure 3: Histograms showing the proportion of edges correctly recovered by the EM algorithm for HOT-mixtures with global parameters on two HOTs with 25 vertices each. Each bar represents 100 mixtures. Error bars show one standard deviation.

3.1.2 HOT Mixtures

We also tested the ability of the algorithm to recover a mixture of two HOTs. The results are shown in Figure 3. When measuring the number of correctly recovered edges, the following procedure was used. Each HOT produced by the algorithm was compared to each HOT from which the data was generated, and the number of correctly recovered edges was noted. The best way of matching the two HOTs produced by the algorithm with the two original HOTs was then determined.
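The edge-recovery criterion and the tree-matching step can be sketched as follows; the toy arc lists and function names are ours.

```python
from itertools import permutations

def recovered_fraction(true_arcs, est_arcs):
    """Fraction of directed arcs of the generating tree that also appear,
    with the same direction, in the estimated tree."""
    return len(set(true_arcs) & set(est_arcs)) / len(true_arcs)

def best_matching_score(true_trees, est_trees):
    """Try all pairings of estimated trees with generating trees and
    return the highest mean recovered fraction."""
    return max(
        sum(recovered_fraction(t, e) for t, e in zip(true_trees, perm))
        / len(true_trees)
        for perm in permutations(est_trees))

true1 = [("r", "-3p"), ("-3p", "+17q")]
true2 = [("r", "+7"), ("+7", "+17")]
est1 = [("r", "+7"), ("+7", "+17")]       # matches true2 exactly
est2 = [("r", "-3p"), ("+17q", "-3p")]    # one arc reversed vs. true1
assert best_matching_score([true1, true2], [est1, est2]) == 0.75
```

Note that a reversed arc does not count as recovered, consistent with the direction requirement stated above.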
[Figure 4 shows four trees; see caption.]

Figure 4: HOTs obtained from RCC data. (a) shows an adapted version of the pathways for CC data published in [21]. (b) is a figure adapted from [22] showing the pathways obtained from statistical analysis of RCC data. (c) and (d) are the HOTs we obtained from the RCC data using only the aberrations on the left and right pathways in (b), respectively. Notice the high level of agreement between the root-to-leaf paths in the recovered HOTs and those in (b).

Two features can clearly be distinguished: the results improve as the size of the data increases, and the algorithm performs better when the HOTs have equal probability in the mixture.

3.2 Tests on Cancer Data

Our cytogenetic data for colon (CC) and kidney (RCC) cancer consist of 512 and 998 tumors, respectively. The data consist of measurements on 41 common aberrations (18 gains, 23 losses) for CC and 28 (13 gains, 15 losses) for RCC. The data have previously been analyzed in [21] and [22], resulting in suggested pathways of progression. These analyses were based on Principal Component Analysis (PCA) performed on correlations between aberrations and on a statistical measure called time of occurrence (TO), which measures how early or late an aberration occurs during progression. The aberrations were then clustered based on the PCA, and each cluster was manually formed into a pathway (based on PCA and TO). One advantage of our approach is that we are able to replace the manual curation by automated computational steps. Another advantage is that our models assign probabilities to data, and the different models can therefore be compared objectively. We expect ϵZ and ϵX to be small in the type of data that we are using.
We obtained the n most correlated aberrations in our CC data, for n ∈ {4, ..., 11}, and tested different upper limits on ϵZ and ϵX. The best correspondence to previously published analyses of the data was found when ϵZ ≤ 0.25 and ϵX ≤ 0.01, as measured by counting the number of bad edges. A bad edge is one that contradicts the partial ordering given by the pathways described in [21], the relevant part of which is shown in Figure 4(a). Having found upper limits that work well on the CC data, we applied the algorithm with these upper bounds to the RCC data. The earlier analyses in [22] strongly suggest that two HOTs are required to model the RCC data. Given that our mixture model appears, from our tests on synthetic data, to require substantially more data points to recover the underlying HOTs in a satisfactory manner, we used the results of the analysis in [22] to divide the aberrations into two (overlapping) clusters, for which we created HOTs separately. These HOTs can be seen in Figures 4(c) and 4(d), and they show very good agreement with the pathways from [22] shown in Figure 4(b). For instance, each root-to-leaf path in the HOT of Figure 4(c) agrees perfectly with the pathway shown in Figure 4(b).

References

[1] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans Inform Theor, 14(3):462–467, 1968.
[2] M. Meila and M.I. Jordan. Learning with mixtures of trees. J Mach Learn Res, 1(1):1–48, 2000.
[3] N. Friedman, M. Ninio, I. Pe'er, and T. Pupko. A structural EM algorithm for phylogenetic inference. J Comput Biol, 9(2):331–353, 2002.
[4] N. Friedman. The Bayesian structural EM algorithm. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 129–138. Morgan Kaufmann, 1998.
[5] P. Leray and O. François. Bayesian network structural learning and incomplete data.
Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR 2005), pages 33–40, 2005.
[6] R. Desper, F. Jiang, O.P. Kallioniemi, H. Moch, C.H. Papadimitriou, and A.A. Schäffer. Inferring tree models for oncogenesis from comparative genome hybridization data. J Comput Biol, 6(1):37–51, 1999.
[7] M. Hjelm, M. Höglund, and J. Lagergren. New probabilistic network models and algorithms for oncogenesis. J Comput Biol, 13(4):853–865, May 2006.
[8] M.D. Radmacher, R. Simon, R. Desper, R. Taetle, A.A. Schäffer, and M.A. Nelson. Graph models of oncogenesis with an application to melanoma. J Theor Biol, 212(4):535–548, Oct 2001.
[9] N. Beerenwinkel, J. Rahnenführer, M. Däumer, D. Hoffmann, R. Kaiser, J. Selbig, and T. Lengauer. Learning multiple evolutionary pathways from cross-sectional data. J Comput Biol, 12(6):584–598, Jul 2005.
[10] J. Rahnenführer, N. Beerenwinkel, W.A. Schulz, C. Hartmann, A. von Deimling, B. Wullich, and T. Lengauer. Estimating cancer survival and clinical outcome based on genetic tumor progression scores. Bioinformatics, 21(10):2438–2446, May 2005.
[11] N. Beerenwinkel, J. Rahnenführer, R. Kaiser, D. Hoffmann, J. Selbig, and T. Lengauer. Mtreemix: a software package for learning and using mixture models of mutagenetic trees. Bioinformatics, 21(9):2106–2107, 2005.
[12] N. Beerenwinkel, N. Eriksson, and B. Sturmfels. Conjunctive Bayesian networks. Bernoulli, 13(4):893–909, Jan 2007.
[13] N. Beerenwinkel, N. Eriksson, and B. Sturmfels. Evolution on distributive lattices. J Theor Biol, 242(2):409–420, Sep 2006.
[14] M. Gerstung, M. Baudis, H. Moch, and N. Beerenwinkel. Quantifying cancer progression with conjunctive Bayesian networks. Bioinformatics, 25(21):2809–2815, Nov 2009.
[15] R.E. Tarjan. Finding optimum branchings. Networks, 7(1):25–36, 1977.
[16] R.M. Karp. A simple derivation of Edmonds' algorithm for optimum branching. Networks, 1:265–272, 1971.
[17] P. Camerini, L.
Fratta, and F. Maffioli. The k best spanning arborescences of a network. Networks, 10(2):91–110, 1980.
[18] F. Mitelman, B. Johansson, and F. Mertens (Eds.). Mitelman Database of Chromosome Aberrations and Gene Fusions in Cancer, 2010. http://cgap.nci.nih.gov/Chromosomes/Mitelman.
[19] R. Durbin, S.R. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis. Cambridge University Press, Cambridge, 1998.
[20] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc B, 39(1):1–38, 1977.
[21] M. Höglund, D. Gisselsson, G.B. Hansen, T. Säll, F. Mitelman, and M. Nilbert. Dissecting karyotypic patterns in colorectal tumors: Two distinct but overlapping pathways in the adenoma-carcinoma transition. Canc Res, 62:5939–5946, 2002.
[22] M. Höglund, D. Gisselsson, M. Soller, G.B. Hansen, P. Elfving, and F. Mitelman. Dissecting karyotypic patterns in renal cell carcinoma: an analysis of the accumulated cytogenetic data. Canc Genet Cytogenet, 153(1):1–9, 2004.
Infinite Latent SVM for Classification and Multi-task Learning

Jun Zhu†, Ning Chen†, and Eric P. Xing‡
†Dept. of Computer Science & Tech., TNList Lab, Tsinghua University, Beijing 100084, China
‡Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
dcszj@tsinghua.edu.cn; chenn07@mails.thu.edu.cn; epxing@cs.cmu.edu

Abstract

Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations, we study nonparametric Bayesian inference with regularization on the desired posterior distributions. While priors can indirectly affect posterior distributions through Bayes' theorem, imposing posterior regularization is arguably more direct and in some cases can be much easier. We particularly focus on developing infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MT-iLSVM), which explore the large-margin idea in combination with a nonparametric Bayesian model for discovering predictive latent features for classification and multi-task learning, respectively. We present efficient inference methods and report empirical studies on several benchmark datasets. Our results appear to demonstrate the merits inherited from both large-margin learning and Bayesian nonparametrics.

1 Introduction

Nonparametric Bayesian latent variable models have recently gained remarkable popularity in statistics and machine learning, partly owing to their desirable "nonparametric" nature, which allows practitioners to "sidestep" the difficult model selection problem, e.g., figuring out the unknown number of components (or classes) in a mixture model [2] or determining the unknown dimensionality of latent features [12], by using an appropriate prior distribution with a large support. Among the most commonly used priors are the Gaussian process (GP) [24], the Dirichlet process (DP) [2] and the Indian buffet process (IBP) [12].
However, standard nonparametric Bayesian models are limited in that they usually make very strict and unrealistic assumptions on data, such as that observations are homogeneous or exchangeable. A number of recent developments in Bayesian nonparametrics have attempted to alleviate such limitations. For example, to handle heterogeneous observations, predictor-dependent processes [20] have been proposed; and to relax the exchangeability assumption, various correlation structures, such as hierarchical structures [26], temporal or spatial dependencies [5], and stochastic ordering dependencies [13, 10], have been introduced. However, all these methods rely solely on crafting a nonparametric Bayesian prior encoding some special structure, which can indirectly influence the posterior distribution of interest only by trading off against the likelihood model. Since it is the posterior distributions, which capture the latent structures to be learned, that are of our ultimate interest, an arguably more direct way to learn a desirable latent-variable model is to impose posterior regularization (i.e., regularization on posterior distributions), as we will explore in this paper. Another reason for using posterior regularization is that in some cases it is more natural and easier to incorporate domain knowledge, such as the large-margin [15, 31] or manifold constraints [14], directly on posterior distributions rather than through priors, as shown in this paper. Posterior regularization, usually through imposing constraints on the posterior distributions of latent variables or via some information projection, has been widely studied in learning a finite log-linear model from partially observed data, including generalized expectation [21], posterior regularization [11], and alternating projection [6], all of which perform maximum likelihood estimation (MLE) to learn a single set of model parameters by optimizing an objective.
Recent attempts toward learning a posterior distribution of model parameters include the “learning from measurements” [19], maximum entropy discrimination [15] and MedLDA [31]. But again, all these methods are limited to finite parametric models. To our knowledge, very few attempts have been made to impose posterior regularization on nonparametric Bayesian latent variable models. One exception is our recent work of infinite SVM (iSVM) [32], a DP mixture of large-margin classifiers. iSVM is a latent class model that assigns each data example to a single mixture component for classification and the unknown number of mixture components is automatically resolved from data. In this paper, we present a general formulation of performing nonparametric Bayesian inference subject to appropriate posterior constraints. In particular, we concentrate on developing the infinite latent support vector machines (iLSVM) and multi-task infinite latent support vector machines (MT-iLSVM), which explore the discriminative large-margin idea to learn infinite latent feature models for classification and multi-task learning [3, 4], respectively. As such, our methods as well as [32] represent an attempt to push forward the interface between Bayesian nonparametrics and large margin learning, which have complementary advantages but have been largely treated as two separate subfields in the machine learning community. Technically, although it is intuitively natural for MLE-based methods to include a regularization term on the posterior distributions of latent variables, this is not straightforward for Bayesian inference because we do not have an optimization objective to be regularized. We base our work on the interpretation of the Bayes’ theorem by Zellner [29], namely, the Bayes’ theorem can be reformulated as a minimization problem. 
Under this optimization framework, we incorporate posterior constraints to do regularized Bayesian inference, with a penalty term that measures the violation of the constraints. Both iLSVM and MT-iLSVM are special cases that explore the large-margin principle to incorporate supervising information for learning predictive latent features, which are good for classification or multi-task learning. We use the nonparametric IBP prior to allow the models to have an unbounded number of latent features. The regularized inference problem can be efficiently solved with an iterative procedure, which leverages existing high-performance convex optimization techniques.

Related Work: As stated above, both iLSVM and MT-iLSVM generalize the ideas of iSVM to infinite latent feature models. For multi-task learning, nonparametric Bayesian models have been developed in [28, 23] for learning features shared by multiple tasks. But these methods are based on standard Bayesian inference, without the ability to consider posterior regularization, such as the large-margin constraints or the manifold constraints [14]. Finally, MT-iLSVM is a nonparametric Bayesian generalization of the popular multi-task learning methods [1, 16], as explained shortly.

2 Regularized Bayesian Inference with Posterior Constraints

In this section, we present the general framework of regularized Bayesian inference with posterior constraints. We begin with a brief review of the basic results due to Zellner [29].

2.1 Bayesian Inference as a Learning Model

Let M be a model space, containing any variables whose posterior distributions we are trying to infer. Bayesian inference starts with a prior distribution π(M) and a likelihood function p(x|M) indexed by the model M ∈ M. Then, by Bayes' theorem, the posterior distribution is

p(M | x_1, ..., x_N) = π(M) \prod_{n=1}^{N} p(x_n | M) / p(x_1, ..., x_N),   (1)

where p(x_1, ..., x_N) is the marginal likelihood, or evidence, of the observed data.
Zellner [29] first showed that the posterior distribution given by Bayes' theorem is the solution of the problem

min_{p(M)} KL(p(M) ‖ π(M)) − \sum_{n=1}^{N} ∫ log p(x_n | M) p(M) dM   (2)
s.t. p(M) ∈ P_prob,

where KL(p(M) ‖ π(M)) is the Kullback-Leibler (KL) divergence, and P_prob is the space of valid probability distributions with an appropriate dimension.

2.2 Regularized Bayesian Inference with Posterior Constraints

As commented by E.T. Jaynes [29], "this fresh interpretation of Bayes' theorem could make the use of Bayesian methods more attractive and widespread, and stimulate new developments in the general theory of inference". Below, we study how to extend the basic results to incorporate posterior constraints in Bayesian inference. In standard Bayesian inference, the constraints (i.e., p(M) ∈ P_prob) do not have auxiliary free parameters. In general, regularized Bayesian inference solves the constrained optimization problem

min_{p(M), ξ} KL(p(M) ‖ π(M)) − \sum_{n=1}^{N} ∫ log p(x_n | M) p(M) dM + U(ξ)   (3)
s.t. p(M) ∈ P_post(ξ),

where P_post(ξ) is a subspace of distributions that satisfy a set of constraints. The auxiliary parameters ξ are usually nonnegative and interpreted as slack variables. U(ξ) is a convex function, which usually corresponds to a surrogate loss (e.g., the hinge loss) of a prediction rule, as we shall see. We can use an iterative procedure to do the regularized Bayesian inference based on convex optimization techniques. The general recipe is to use the Lagrangian method, introducing Lagrange multipliers ω. Then, we iteratively solve for p(M) with ω and ξ fixed, and solve for ω and ξ with p(M) given. For the first step, we can use sampling or variational methods [9] to do approximate inference; and under certain conditions, such as using constraints based on posterior expectations [21], the second step can be efficiently done using high-performance convex optimization techniques, as we shall see.
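Zellner's variational characterization can be checked numerically for a discrete model space: a grid search over the probability simplex recovers the Bayes posterior as the minimizer of the objective in (2). This self-contained sketch is ours; all numbers are made up.

```python
import math

prior = [0.5, 0.5]                 # pi(M) over two candidate models
lik = [[0.9, 0.7], [0.2, 0.4]]     # lik[m][n] = p(x_n | M = m), N = 2

# Bayes posterior via Bayes' theorem (equation (1)).
joint = [prior[m] * lik[m][0] * lik[m][1] for m in range(2)]
bayes = [j / sum(joint) for j in joint]

def objective(p):
    """KL(p || prior) - sum_n E_p[log p(x_n | M)], the functional in (2)."""
    val = 0.0
    for m in range(2):
        if p[m] > 0:
            val += p[m] * math.log(p[m] / prior[m])
            val -= p[m] * (math.log(lik[m][0]) + math.log(lik[m][1]))
    return val

# Grid search over the simplex p = (t, 1 - t).
best_t = min((i / 10000 for i in range(10001)),
             key=lambda t: objective([t, 1.0 - t]))
assert abs(best_t - bayes[0]) < 1e-3   # minimizer matches the posterior
```

The exact minimizer is the Bayes posterior; the grid search finds it to the grid resolution, which is the point of Zellner's reformulation: the constraints p(M) ∈ P_prob can now be replaced by richer constraint sets.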
3 Infinite Latent Support Vector Machines In this section, we concretize the ideas of regularized Bayesian inference by focusing on developing large-margin classifiers with an unbounded dimension of latent features, which can be used as a representation of examples for single-task classification, or as a common representation that captures relationships among multiple tasks for multi-task learning. We first present the single-task classification model. The basic setup is that we project each data example x ∈ X ⊂ R^D to a latent feature vector z. Here, we consider binary features¹. Given a set of N data examples, let Z be the matrix of which each row is a binary vector z_n associated with data sample n. Instead of pre-specifying a fixed dimension of z, we resort to nonparametric Bayesian methods and let z have an infinite number of dimensions. To make the expected number of active latent features finite, we put the well-studied IBP prior on the binary feature matrix Z. 3.1 Indian Buffet Process The Indian buffet process (IBP) was proposed in [12] and has been successfully applied in various fields, such as link prediction [22] and multi-task learning [23]. We focus on its stick-breaking construction [25], which is convenient for developing efficient inference methods. Let π_k ∈ (0, 1) be a parameter associated with column k of the binary matrix Z. Given π_k, each z_nk in column k is sampled independently from Bernoulli(π_k). The parameters π are generated by a stick-breaking process
$$\pi_1 = \nu_1, \quad \text{and} \quad \pi_k = \nu_k \pi_{k-1} = \prod_{i=1}^{k} \nu_i, \quad (4)$$
where ν_i ∼ Beta(α, 1). This process results in a decreasing sequence of probabilities π_k. Specifically, given a finite dataset, the probability of seeing feature k decreases exponentially with k. 3.2 Infinite Latent Support Vector Machines We consider multi-way classification, where each training example is provided with a categorical label y, where $y \in \mathcal{Y} := \{1, \cdots, L\}$.
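A minimal sketch of the stick-breaking construction in Eq. (4), assuming a finite truncation level K as the paper does in Sec. 3.4 (pure standard library; the parameter values are illustrative):

```python
import random

def sample_ibp_stick_breaking(n_rows, alpha, K, rng=random.Random(0)):
    """Sample a binary feature matrix Z from the truncated stick-breaking IBP.

    pi_1 = nu_1 and pi_k = nu_k * pi_{k-1} = prod_{i<=k} nu_i, nu_i ~ Beta(alpha, 1);
    each z_{nk} ~ Bernoulli(pi_k). K is the truncation level."""
    pi, prod = [], 1.0
    for _ in range(K):
        prod *= rng.betavariate(alpha, 1.0)  # nu_k in (0, 1)
        pi.append(prod)
    Z = [[1 if rng.random() < pi[k] else 0 for k in range(K)]
         for _ in range(n_rows)]
    return pi, Z

pi, Z = sample_ibp_stick_breaking(n_rows=50, alpha=2.0, K=30)
# The stick weights decrease with k, so later features are ever rarer.
assert all(pi[k] >= pi[k + 1] for k in range(len(pi) - 1))
```

Since pi_k is a product of k values in (0, 1), the expected number of active features per row stays finite even as K grows, which is the property the text relies on.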
For binary classification and regression, a similar procedure can be applied to impose large-margin constraints on posterior distributions. Suppose that the latent features z are given; then we can define the latent discriminant function as
$$f(y, x, z; \eta) := \eta^\top g(y, x, z), \quad (5)$$
where g(y, x, z) is a vector stacking of L subvectors², of which the y-th is $z^\top$ and all the others are zero. Since we are doing Bayesian inference, we need to maintain the entire distribution profile of the latent features Z. However, in order to make a prediction on the observed data x, we need to get rid of the uncertainty of Z. Here, we define the effective discriminant function as an expectation³ (i.e., a weighted average considering all possible values of Z) of the latent discriminant function. To make the model fully Bayesian, we also treat η as random and aim to infer the posterior distribution p(Z, η) from the given data. More formally, the effective discriminant function $f : \mathcal{X} \times \mathcal{Y} \mapsto \mathbb{R}$ is
$$f(y, x; p(Z, \eta)) := \mathbb{E}_{p(Z,\eta)}[f(y, x, z; \eta)] = \mathbb{E}_{p(Z,\eta)}[\eta^\top g(y, x, z)]. \quad (6)$$
Note that although the number of latent features is allowed to be infinite, with probability one the number of non-zero features is finite when only a finite number of data are observed, under the IBP prior. Moreover, to make it computationally feasible, we usually set a finite upper bound K on the number of possible features, where K is sufficiently large and known as the truncation level (see Sec 3.4 and Appendix A.2 for details). As shown in [9], the ℓ1-distance truncation error of the marginal distributions decreases exponentially as K increases. ¹Real-valued features can be easily considered as in [12]. ²We can consider the input features x, or certain statistics of them, in combination with the latent features z to define the classifier boundary, by simply concatenating them in the subvectors.
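As a concrete illustration of Eqs. (6) and (9), suppose the posterior factorizes as p(Z, η) = p(Z)p(η), a mean-field assumption of the kind used in Sec. 3.4; then by linearity of expectation the effective score reduces to $\mathbb{E}[\eta_y]^\top \mathbb{E}[z]$. All numbers below are made up:

```python
# Effective discriminant (Eq. 6) under a hypothetical factorized posterior:
# E[eta^T g(y, x, z)] = E[eta_y]^T E[z], where eta_y is the y-th subvector.
K, L = 4, 3                        # truncation level, number of classes
E_z = [0.9, 0.6, 0.2, 0.05]        # E[z_k] (Bernoulli means psi_k)
E_eta = [[0.5, -0.1, 0.3, 0.0],    # E[eta] subvector for class 0
         [-0.2, 0.4, 0.1, 0.7],    # class 1
         [0.1, 0.1, -0.5, 0.2]]    # class 2

def f(y):
    """Effective discriminant score f(y, x; p(Z, eta)) for this toy posterior."""
    return sum(E_eta[y][k] * E_z[k] for k in range(K))

y_star = max(range(L), key=f)      # prediction rule (Eq. 9)
```

Because the expectation is a linear functional of the posterior, this reduction is exact whenever Z and η are posterior-independent, which is precisely why the paper prefers the expectation over, e.g., the mode.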
With the above definitions, we define the $\mathcal{P}_{\mathrm{post}}(\xi)$ in problem (3) using large-margin constraints as
$$\mathcal{P}_{\mathrm{post}}^{c}(\xi) := \left\{ p(Z, \eta) \;:\; \forall n \in I_{tr},\ \forall y:\ f(y_n, x_n; p(Z, \eta)) - f(y, x_n; p(Z, \eta)) \geq \ell(y, y_n) - \xi_n;\ \xi_n \geq 0 \right\} \quad (7)$$
and define the penalty function as $U^c(\xi) := C \sum_{n \in I_{tr}} \xi_n^p$, where p ≥ 1. If p is 1, minimizing $U^c(\xi)$ is equivalent to minimizing the hinge loss (or ℓ1-loss) $R^c_h$ of the prediction rule (9), where $R^c_h = C \sum_{n \in I_{tr}} \max_y \big( f(y, x_n; p(Z, \eta)) + \ell(y, y_n) - f(y_n, x_n; p(Z, \eta)) \big)$; if p is 2, the surrogate loss is the ℓ2-loss. For clarity, we consider the hinge loss. The non-negative cost function ℓ(y, y_n) (e.g., the 0/1-cost) measures the cost of predicting x_n to be y when its true label is y_n. $I_{tr}$ is the index set of the training data. In order to robustly estimate the latent matrix Z, we need a reasonable amount of data. Therefore, we also relate Z to the observed data x by defining a likelihood model to provide as much data as possible. Here, we define the linear-Gaussian likelihood model for real-valued data
$$p(x_n | z_n, W, \sigma_{n0}^2) = \mathcal{N}(x_n | W z_n^\top, \sigma_{n0}^2 I), \quad (8)$$
where W is a random loading matrix and I is an identity matrix with appropriate dimensions. We assume W follows an independent Gaussian prior, i.e., $\pi(W) = \prod_d \mathcal{N}(w_d | 0, \sigma_0^2 I)$. Fig. 1 (a) shows the graphical structure of iLSVM. The hyperparameters $\sigma_0^2$ and $\sigma_{n0}^2$ can be set a priori or estimated from the observed data (see Appendix A.2 for details). Testing: to make predictions on test examples, we put both training and test data together and do the regularized Bayesian inference. For training data, we impose the above large-margin constraints because we are aware of their true labels, while for test data we do the inference without the large-margin constraints, since we do not know their true labels. After inference, we make the prediction via the rule
$$y^\ast := \arg\max_y f(y, x; p(Z, \eta)). \quad (9)$$
The ability to generalize to test data relies on the fact that all the data examples share η and the IBP prior.
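The hinge-loss surrogate $R^c_h$ (the p = 1 case) can be sketched directly from its definition; the scores f(y, x_n) below are invented toy values, not outputs of the model:

```python
# Multiclass hinge loss R_h: for each training point n,
#   xi_n = max_y [ f(y, x_n) + l(y, y_n) - f(y_n, x_n) ],
# with the 0/1 cost l(y, y_n) = 1 if y != y_n else 0.

def hinge(f_vals, y_true):
    """f_vals[y] holds the effective score f(y, x_n) for each class y."""
    return max(f_vals[y] + (1 if y != y_true else 0) - f_vals[y_true]
               for y in range(len(f_vals)))

C = 1.0
f_scores = [[2.0, 0.5, 0.1],    # example 1: confidently correct -> loss 0
            [0.3, 0.2, 0.9]]    # example 2: correct but small margin
labels = [0, 2]
R_h = C * sum(hinge(f, y) for f, y in zip(f_scores, labels))
```

Note that the max over y is always at least 0 (take y = y_n), so each slack is nonnegative, matching the constraint set in Eq. (7).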
We can also cast the problem as a transductive inference problem by imposing additional constraints on the test data [17]. However, the resulting problem would generally be harder to solve. 3.3 Multi-Task Infinite Latent Support Vector Machines Different from classification, which is typically formulated as a single learning task, multi-task learning aims to improve a set of related tasks through sharing statistical strength between these tasks, which are performed jointly. Many different approaches have been developed for multi-task learning (see [16] for a review). In particular, learning a common latent representation shared by all the related tasks has proven to be an effective way to capture task relationships [1, 3, 23]. Below, we present the multi-task infinite latent SVM (MT-iLSVM) for learning a common binary projection matrix Z to capture the relationships among multiple tasks. As in iLSVM, we also put the IBP prior on Z to allow it to have an unbounded number of columns. ³Although other choices such as taking the mode are possible, our choice leads to a computationally easy problem because expectation is a linear functional of the distribution under which the expectation is taken. Moreover, the expectation can be more robust than the mode [18], and it has been used in [31, 32]. [Figure 1: Graphical structures of (a) the infinite latent SVM (iLSVM); and (b) the multi-task infinite latent SVM (MT-iLSVM). For MT-iLSVM, the dashed nodes (i.e., ς_m) are included to illustrate the task relatedness. We have omitted the priors on W and η for notational brevity.] Suppose we have M related tasks. Let $D_m = \{(x_{mn}, y_{mn})\}_{n \in I_{tr}^m}$ be the training data for task m. We consider binary classification tasks, where $\mathcal{Y}_m = \{+1, -1\}$. Extension to multi-way classification or regression tasks can be easily done.
If the latent matrix Z is given, we define the latent discriminant function for task m as
$$f_m(x, Z; \eta_m) := (Z\eta_m)^\top x = \eta_m^\top (Z^\top x). \quad (10)$$
This definition provides two views of how the M tasks are related. If we let $\varsigma_m = Z\eta_m$, then ς_m are the actual parameters of task m, and the ς_m of the different tasks are coupled by sharing the same latent matrix Z. Another view is that each task m has its own parameters η_m, but all the tasks share the same latent features $Z^\top x$, which is a projection of the input features x, with Z being the latent projection matrix. As such, our method can be viewed as a nonparametric Bayesian treatment of alternating structure optimization (ASO) [1], which learns a single projection matrix with a pre-specified latent dimension. Moreover, different from [16], which learns a binary vector with known dimensionality to select features or kernels on x, we learn an unbounded projection matrix Z using nonparametric Bayesian techniques. As in iLSVM, we take the fully Bayesian treatment (i.e., the η_m are also random variables) and define the effective discriminant function for task m as the expectation
$$f_m(x; p(Z, \eta)) := \mathbb{E}_{p(Z,\eta)}[f_m(x, Z; \eta_m)] = \mathbb{E}_{p(Z,\eta)}[Z\eta_m]^\top x. \quad (11)$$
Then, the prediction rule for task m is naturally $y_m^\ast := \mathrm{sign}\, f_m(x)$. Similarly, we do regularized Bayesian inference by defining $U^{MT}(\xi) := C \sum_{m,\, n \in I_{tr}^m} \xi_{mn}$ and imposing the constraints
$$\mathcal{P}_{\mathrm{post}}^{MT}(\xi) := \left\{ p(Z, \eta) \;:\; \forall m,\ \forall n \in I_{tr}^m:\ y_{mn} \mathbb{E}_{p(Z,\eta)}[Z\eta_m]^\top x_{mn} \geq 1 - \xi_{mn};\ \xi_{mn} \geq 0 \right\}. \quad (12)$$
As in iLSVM, minimizing $U^{MT}(\xi)$ is equivalent to minimizing the hinge loss $R^{MT}_h$ of the multiple binary prediction rules, where $R^{MT}_h = C \sum_{m,\, n \in I_{tr}^m} \max(0,\ 1 - y_{mn} \mathbb{E}_{p(Z,\eta)}[Z\eta_m]^\top x_{mn})$. Finally, to obtain more data to estimate the latent Z, we also relate it to the observed data by defining the likelihood model
$$p(x_{mn} | w_{mn}, Z, \lambda_{mn}^2) = \mathcal{N}(x_{mn} | Z w_{mn}, \lambda_{mn}^2 I), \quad (13)$$
where $w_{mn}$ is a vector. We assume W has an independent prior $\pi(W) = \prod_{mn} \mathcal{N}(w_{mn} | 0, \sigma_{m0}^2 I)$. Fig. 1 (b) illustrates the graphical structure of MT-iLSVM.
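The two views of task relatedness around Eq. (10) amount to the matrix identity $(Z\eta_m)^\top x = \eta_m^\top (Z^\top x)$, which this toy check illustrates (all matrices and vectors are made up):

```python
# Two equivalent views of the MT-iLSVM discriminant (Eq. 10) on toy sizes.
def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

Z = [[1, 0], [0, 1], [1, 1]]       # D x K binary projection matrix
eta_m = [0.5, -0.3]                # task-m weights (K-dimensional)
x = [1.0, 2.0, -1.0]               # input features (D-dimensional)

varsigma_m = matvec(Z, eta_m)      # view 1: per-task parameters coupled via Z
view1 = dot(varsigma_m, x)
Zt_x = matvec(list(zip(*Z)), x)    # view 2: shared latent features Z^T x
view2 = dot(eta_m, Zt_x)
assert abs(view1 - view2) < 1e-9
```

View 1 couples the tasks through the shared rows of Z; view 2 makes the ASO-style interpretation explicit, with $Z^\top x$ as a common low-dimensional projection fed to per-task weights.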
For testing, we use the same strategy as in iLSVM and do Bayesian inference on both training and test data. The difference is that the training data are subject to large-margin constraints, while the test data are not. Similarly, the hyperparameters $\sigma_{m0}^2$ and $\lambda_{mn}^2$ can be set a priori or estimated from the data (see Appendix A.1 for details). 3.4 Inference with Truncated Mean-Field Constraints We briefly discuss how to do regularized Bayesian inference (3) with the large-margin constraints for MT-iLSVM; for iLSVM, a similar procedure applies. To make the problem easier to solve, we use the stick-breaking representation of the IBP, which includes the auxiliary variables ν, and infer the posterior p(ν, W, Z, η). Furthermore, we impose the truncated mean-field constraint that
$$p(\nu, W, Z, \eta) = p(\eta) \prod_{k=1}^{K} \Big( p(\nu_k | \gamma_k) \prod_{d=1}^{D} p(z_{dk} | \psi_{dk}) \Big) \prod_{mn} p(w_{mn} | \Phi_{mn}, \sigma_{mn}^2 I), \quad (14)$$
where K is the truncation level; $p(w_{mn} | \Phi_{mn}, \sigma_{mn}^2 I) = \mathcal{N}(w_{mn} | \Phi_{mn}, \sigma_{mn}^2 I)$; $p(z_{dk} | \psi_{dk}) = \mathrm{Bernoulli}(\psi_{dk})$; and $p(\nu_k | \gamma_k) = \mathrm{Beta}(\gamma_{k1}, \gamma_{k2})$. We first turn the constrained problem into the problem of finding a stationary point, using Lagrangian methods: we introduce Lagrange multipliers ω, one for each large-margin constraint as defined in Eq. (12), and u for the nonnegativity constraints on ξ. Let L(p, ξ, ω, u) be the Lagrangian functional. The inference procedure iteratively solves the following two steps (we defer the details to Appendix A.1): Infer p(ν), p(W), and p(Z): for p(W), since the prior is also normal, we can easily derive the update rules for $\Phi_{mn}$ and $\sigma_{mn}^2$. For p(ν), we have the same update rules as in [9]. We defer the details to Appendix A.1. Now, we focus on p(Z) and provide insights on how the large-margin constraints regularize the procedure of inferring the latent matrix Z.
Since the large-margin constraints are linear in p(Z), we can get the mean-field update equation as $\psi_{dk} = \frac{1}{1 + e^{-\vartheta_{dk}}}$, where
$$\vartheta_{dk} = \sum_{j=1}^{k} \mathbb{E}_p[\log v_j] - \mathcal{L}_k^{\nu} - \sum_{mn} \frac{1}{2\lambda_{mn}^2} \Big( (K\sigma_{mn}^2 + (\phi_{mn}^k)^2) - 2 x_{mn}^d \phi_{mn}^k + 2 \sum_{j \neq k} \phi_{mn}^j \phi_{mn}^k \psi_{dj} \Big) + \sum_{m,\, n \in I_{tr}^m} y_{mn} \mathbb{E}_p[\eta_{mk}] x_{mn}^d, \quad (15)$$
where $\mathcal{L}_k^{\nu}$ is a lower bound of $\mathbb{E}_p[\log(1 - \prod_{j=1}^{k} v_j)]$ (see Appendix A.1 for details). The last term of $\vartheta_{dk}$ is due to the large-margin posterior constraints as defined in Eq. (12). Infer p(η) and solve for ω and ξ: we optimize L over p(η) and get $p(\eta) = \prod_m p(\eta_m)$, where $p(\eta_m) \propto \pi(\eta_m) \exp\{\eta_m^\top \mu_m\}$ and $\mu_m = \sum_{n \in I_{tr}^m} y_{mn} \omega_{mn} (\psi^\top x_{mn})$. Here, we assume $\pi(\eta_m)$ is standard normal. Then, we have $p(\eta_m) = \mathcal{N}(\eta_m | \mu_m, I)$. Substituting the solution of p(η) into L, we get M independent dual problems
$$\max_{\omega_m} \; -\frac{1}{2} \mu_m^\top \mu_m + \sum_{n \in I_{tr}^m} \omega_{mn} \quad \mathrm{s.t.}: \; 0 \leq \omega_{mn} \leq 1,\ \forall n \in I_{tr}^m, \quad (16)$$
which (or whose primal form) can be efficiently solved with a binary SVM solver, such as SVM-light. 4 Experiments We present empirical results for both classification and multi-task learning. Our results demonstrate the merits inherited from both Bayesian nonparametrics and large-margin learning. 4.1 Multi-way Classification We evaluate the infinite latent SVM (iLSVM) for classification on the real TRECVID2003 and Flickr image datasets, which have been extensively evaluated in the context of learning finite latent feature models [8]. TRECVID2003 consists of 1078 video key-frames, and each example has two types of features: a 1894-dimensional binary vector of text features and a 165-dimensional HSV color histogram. The Flickr image dataset consists of 3411 natural scene images about 13 types of animals (e.g., tiger, cat, etc.) downloaded from the Flickr website. Again, each example has two types of features, including 500-dimensional SIFT bag-of-words and 634-dimensional real-valued features (e.g., color histogram, edge direction histogram, and block-wise color moments). Here, we consider the real-valued features only, using normal distributions for x.
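The per-task dual in Eq. (16) is a box-constrained quadratic program; the paper solves it (or its primal) with an off-the-shelf binary SVM solver such as SVM-light, but a simple projected coordinate-ascent sketch conveys the structure. Here v_n stands in for the expected latent features $\psi^\top x_{mn}$, and all data values are illustrative:

```python
# Coordinate-ascent sketch for the dual of Eq. (16) for one task m:
#   max_w  -1/2 ||mu||^2 + sum_n w_n,  mu = sum_n y_n w_n v_n,  0 <= w_n <= 1.
# Each coordinate update is an exact Newton step on a quadratic, then clipped.

def solve_dual(V, y, iters=200):
    n, K = len(V), len(V[0])
    w = [0.0] * n
    mu = [0.0] * K
    for _ in range(iters):
        for i in range(n):
            g = 1.0 - y[i] * sum(mu[k] * V[i][k] for k in range(K))  # gradient
            h = sum(v * v for v in V[i])                              # curvature
            new = min(1.0, max(0.0, w[i] + g / h))                    # clip to box
            delta = new - w[i]
            for k in range(K):
                mu[k] += y[i] * delta * V[i][k]
            w[i] = new
    return w, mu

V = [[1.0, 0.2], [0.1, 1.0], [0.9, 0.1]]   # toy expected features psi^T x_n
y = [+1, -1, +1]
w, mu = solve_dual(V, y)
assert all(0.0 <= wi <= 1.0 for wi in w)
```

Since each coordinate step maximizes the dual objective exactly and the iterate stays feasible, the objective is monotonically nondecreasing from the feasible start w = 0.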
We compare iLSVM with the large-margin Harmonium (MMH) [8], which was shown to outperform many other latent feature models [8], and two decoupled approaches: EFH+SVM and IBP+SVM. EFH+SVM uses the exponential family Harmonium (EFH) [27] to discover latent features and then learns a multi-way SVM classifier. IBP+SVM is similar, but uses an IBP factor analysis model [12] to discover latent features. As finite models, both MMH and EFH+SVM need to pre-specify the dimensionality of the latent features. We report their results on classification accuracy and F1 score (i.e., the average F1 score over all possible classes) [32] achieved with the best dimensionality in Table 1. For iLSVM and IBP+SVM, we use the mean-field inference method and present the average performance over 5 randomly initialized runs (see Appendix A.2 for the algorithm and initialization details). We perform 5-fold cross-validation on the training data to select hyperparameters, e.g., α and C (we use the same procedure for MT-iLSVM). We can see that iLSVM achieves performance comparable to the nearly optimal MMH, without needing to pre-specify the latent feature dimension⁴, and is much better than the decoupled approaches (i.e., IBP+SVM and EFH+SVM). ⁴We set the truncation level to 300, which is large enough.

Table 1: Classification accuracy and F1 scores on the TRECVID2003 and Flickr image datasets.

Model    | TRECVID2003 Accuracy | TRECVID2003 F1 score | Flickr Accuracy | Flickr F1 score
EFH+SVM  | 0.565 ± 0.0          | 0.427 ± 0.0          | 0.476 ± 0.0     | 0.461 ± 0.0
MMH      | 0.566 ± 0.0          | 0.430 ± 0.0          | 0.538 ± 0.0     | 0.512 ± 0.0
IBP+SVM  | 0.553 ± 0.013        | 0.397 ± 0.030        | 0.500 ± 0.004   | 0.477 ± 0.009
iLSVM    | 0.563 ± 0.010        | 0.448 ± 0.011        | 0.533 ± 0.005   | 0.510 ± 0.010

4.2 Multi-task Learning 4.2.1 Description of the Data Scene and Yeast Data: These datasets are from the UCI repository, and each data example has multiple labels. As in [23], we treat the multi-label classification as a multi-task learning problem, where each label assignment is treated as a binary classification task. The Yeast dataset consists of 1500 training and 917 test examples, each having 103 features, and the number of labels (or tasks) per example is 14. The Scene dataset consists of 1211 training and 1196 test examples, each having 294 features, and the number of labels (or tasks) per example for this dataset is 6.

Table 2: Multi-label classification performance on the Scene and Yeast datasets.

Model            | Yeast Acc      | Yeast F1-Micro | Yeast F1-Macro | Scene Acc      | Scene F1-Micro | Scene F1-Macro
yaxue [23]       | 0.5106         | 0.3897         | 0.4022         | 0.7765         | 0.2669         | 0.2816
piyushrai-1 [23] | 0.5212         | 0.3631         | 0.3901         | 0.7756         | 0.3153         | 0.3242
piyushrai-2 [23] | 0.5424         | 0.3946         | 0.4112         | 0.7911         | 0.3214         | 0.3226
MT-IBP+SVM       | 0.5475 ± 0.005 | 0.3910 ± 0.006 | 0.4345 ± 0.007 | 0.8590 ± 0.002 | 0.4880 ± 0.012 | 0.5147 ± 0.018
MT-iLSVM         | 0.5792 ± 0.003 | 0.4258 ± 0.005 | 0.4742 ± 0.008 | 0.8752 ± 0.004 | 0.5834 ± 0.026 | 0.6148 ± 0.020

School Data: This dataset comes from the Inner London Education Authority and has been used to study the effectiveness of schools. It consists of examination records from 139 secondary schools in years 1985, 1986 and 1987. It is a random 50% sample with 15362 students. The dataset is publicly available and has been extensively evaluated by various multi-task learning methods [4, 7, 30], where each task is defined as predicting the exam scores of students belonging to a specific school based on four student-dependent features (year of the exam, gender, VR band and ethnic group) and four school-dependent features (percentage of students eligible for free school meals, percentage of students in VR band 1, school gender and school denomination).
In order to compare with the above methods, we follow the same setup described in [3, 4]; similarly, we create dummy variables for the categorical features, forming a total of 19 student-dependent features and 8 school-dependent features. We use the same 10 random splits⁵ of the data, so that 75% of the examples from each school (task) belong to the training set and 25% to the test set. On average, the training set includes about 80 students per school and the test set about 30 students per school. 4.2.2 Results Scene and Yeast Data: We compare with the closely related nonparametric Bayesian methods [23, 28], which were shown to outperform independent Bayesian logistic regression and a single-task pooling approach [23], and with a decoupled method MT-IBP+SVM⁶ that uses an IBP factor analysis model to find shared latent features among multiple tasks and then builds separate SVM classifiers for the different tasks. For MT-iLSVM and MT-IBP+SVM, we use the mean-field inference method in Sec 3.4 and report the average performance over 5 randomly initialized runs (see Appendix A.1 for initialization details). For comparison with [23, 28], we use the overall classification accuracy, F1-Macro and F1-Micro as performance measures. Table 2 shows the results. We can see that the large-margin MT-iLSVM performs much better than the other nonparametric Bayesian methods and than MT-IBP+SVM, which separates the inference of latent features from the learning of the classifiers. School Data: We use the percentage of explained variance [4] as the measure of regression performance; it is defined as the total variance of the data minus the sum-squared error on the test set, as a percentage of the total variance. Since we use the same settings, we can compare with the state-of-the-art results of Bayesian multi-task learning (BMTL) [4], multi-task Gaussian processes (MTGP) [7], convex multi-task relationship learning (MTRL) [30], and single-task learning (STL) as reported in [7, 30].
For MT-iLSVM and MT-IBP+SVM, we also report the results achieved by using both the latent features (i.e., $Z^\top x$) and the original input features x through vector concatenation; we denote the corresponding methods by MT-iLSVMf and MT-IBP+SVMf, respectively. From the results in Table 3, we can see that the multi-task latent SVM (i.e., MT-iLSVM) achieves better results than the existing methods that have been tested in previous studies. Again, the joint MT-iLSVM performs much better than the decoupled method MT-IBP+SVM, which separates the latent feature inference from the training of the large-margin classifiers. Finally, using both the latent features and the original input features boosts the performance slightly for MT-iLSVM, and much more significantly for the decoupled MT-IBP+SVM. ⁵Available at: http://ttic.uchicago.edu/∼argyriou/code/index.html ⁶This decoupled approach is in fact a one-iteration MT-iLSVM, where we first infer the shared latent matrix Z and then learn an SVM classifier for each task.

Table 3: Percentage of explained variance by various models on the School dataset.

STL        | BMTL       | MTGP       | MTRL       | MT-IBP+SVM | MT-iLSVM   | MT-IBP+SVMf | MT-iLSVMf
23.5 ± 1.9 | 29.5 ± 0.4 | 29.2 ± 1.6 | 29.9 ± 1.8 | 20.0 ± 2.9 | 30.9 ± 1.2 | 28.5 ± 1.6  | 31.7 ± 1.1

Table 4: Percentage of explained variance and running time of MT-iLSVM with various training sizes.

Training size          | 50%          | 60%          | 70%          | 80%          | 90%          | 100%
explained variance (%) | 25.8 ± 0.4   | 27.3 ± 0.7   | 29.6 ± 0.4   | 30.0 ± 0.5   | 30.8 ± 0.4   | 30.9 ± 1.2
running time (s)       | 370.3 ± 32.5 | 455.9 ± 18.6 | 492.6 ± 33.2 | 600.1 ± 50.2 | 777.6 ± 73.4 | 918.9 ± 96.5

Figure 2: Sensitivity study of MT-iLSVM: (a) classification accuracy on Yeast with different α; (b) classification accuracy on Yeast with different C; and (c) percentage of explained variance on School with different C.
4.3 Sensitivity Analysis Figure 2 shows how the performance of MT-iLSVM changes with the hyperparameter α and the regularization constant C on the Yeast and School datasets. We can see that on the Yeast dataset, MT-iLSVM is insensitive to α and C. For the School dataset, MT-iLSVM is stable when C is set between 0.3 and 1. MT-iLSVM is insensitive to α on the School data too, which is omitted to save space. Table 4 shows how the training size affects the performance and running time of MT-iLSVM on the School dataset. We use the first b% (b = 50, 60, 70, 80, 90, 100) of the training data in each of the 10 random splits as the training set and use the corresponding test data as the test set. We can see that as the training size increases, the performance and running time generally increase; MT-iLSVM achieves the state-of-the-art performance when using about 70% of the training data. From the running time, we can also see that MT-iLSVM is generally quite efficient when using mean-field inference. Finally, we investigate how the performance of MT-iLSVM changes with the hyperparameters $\sigma_{m0}^2$ and $\lambda_{mn}^2$. We initially set $\sigma_{m0}^2 = 1$ and compute $\lambda_{mn}^2$ from the observed data. If we further estimate them by maximizing the objective function, the performance does not change much (±0.3% in average explained variance on the School dataset). We have similar observations for iLSVM. 5 Conclusions and Future Work We first present a general framework for doing regularized Bayesian inference subject to appropriate constraints, which are imposed directly on the posterior distributions. Then, we concentrate on developing two nonparametric Bayesian models to learn predictive latent features for classification and multi-task learning, respectively, by exploiting the large-margin principle to define posterior constraints. Both models allow the latent dimension to be automatically resolved from the data.
The empirical results on several real datasets appear to demonstrate that our methods inherit the merits from both Bayesian nonparametrics and large-margin learning. Regularized Bayesian inference offers a general framework for considering posterior regularization in performing nonparametric Bayesian inference. For future work, we plan to study other posterior regularization beyond the large-margin constraints, such as posterior constraints defined on manifold structures [14], and investigate how posterior regularization can be used in other interesting nonparametric Bayesian models [5, 26]. Acknowledgments This work was done when JZ was a post-doctoral fellow at CMU. JZ is supported by National Key Project for Basic Research of China (No. 2012CB316300) and the National Natural Science Foundation of China (No. 60805023). EX is supported by AFOSR FA95501010247, ONR N000140910758, NSF Career DBI-0546594 and Alfred P. Sloan Research Fellowship. References [1] R. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR, (6):1817–1853, 2005. [2] C.E. Antoniak. Mixture of Dirichlet process with applications to Bayesian nonparametric problems. Annals of Stats, (273):1152–1174, 1974. [3] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. In NIPS, 2007. [4] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. JMLR, (4):83–99, 2003. [5] M.J. Beal, Z. Ghahramani, and C.E. Rasmussen. The infinite hidden Markov model. In NIPS, 2002. [6] K. Bellare, G. Druck, and A. McCallum. Alternating projections for learning with expectation constraints. In UAI, 2009. [7] E. Bonilla, K.M.A. Chai, and C. Williams. Multi-task Gaussian process prediction. In NIPS, 2008. [8] N. Chen, J. Zhu, and E.P. Xing. Predictive subspace learning for multiview data: a large margin approach. In NIPS, 2010. [9] F. Doshi-Velez, K. Miller, J. Van Gael, and Y.W. Teh.
Variational inference for the Indian buffet process. In AISTATS, 2009. [10] D. Dunson and S. Peddada. Bayesian nonparametric inferences on stochastic ordering. ISDS Discussion Paper, 2, 2007. [11] K. Ganchev, J. Graca, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. JMLR, (11):2001–2094, 2010. [12] T.L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, 2006. [13] D. Hoff. Bayesian methods for partial stochastic orderings. Biometrika, 90:303–317, 2003. [14] S. Huh and S. Fienberg. Discriminative topic modeling based on manifold learning. In KDD, 2010. [15] T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. In NIPS, 1999. [16] T. Jebara. Multitask sparsity via maximum entropy discrimination. JMLR, (12):75–110, 2011. [17] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999. [18] M. E. Khan, B. Marlin, G. Bouchard, and K. Murphy. Variational bounds for mixed-data factor analysis. In NIPS, 2010. [19] P. Liang, M. Jordan, and D. Klein. Learning from measurements in exponential families. In ICML, 2009. [20] S.N. MacEachern. Dependent nonparametric process. In the Section on Bayesian Statistical Science of ASA, 1999. [21] G. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning with weakly labeled data. JMLR, (11):955–984, 2010. [22] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In NIPS, 2009. [23] P. Rai and H. Daume III. Infinite predictor subspace models for multitask learning. In AISTATS, 2010. [24] C.E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In NIPS, 2002. [25] Y.W. Teh, D. Gorur, and Z. Ghahramani. Stick-breaking construction of the Indian buffet process. In AISTATS, 2007. [26] Y.W. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet process. JASA, 101(476):1566–1581, 2006. 
[27] M. Welling, M. Rosen-Zvi, and G. Hinton. Exponential family harmoniums with an application to information retrieval. In NIPS, 2004. [28] Y. Xue, D. Dunson, and L. Carin. The matrix stick-breaking process for flexible multi-task learning. In ICML, 2007. [29] A. Zellner. Optimal information processing and Bayes’ theorem. American Statistician, 42:278–280, 1988. [30] Y. Zhang and D.Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In UAI, 2010. [31] J. Zhu, A. Ahmed, and E.P. Xing. MedLDA: Maximum margin supervised topic models for regression and classification. In ICML, 2009. [32] J. Zhu, N. Chen, and E.P. Xing. Infinite SVM: a Dirichlet process mixture of large-margin kernel machines. In ICML, 2011.
On U-processes and clustering performance Stéphan Clémençon∗ LTCI UMR Telecom ParisTech/CNRS No. 5141 Institut Telecom, Paris, 75634 Cedex 13, France stephan.clemencon@telecom-paristech.fr Abstract Many clustering techniques aim at optimizing empirical criteria that are of the form of a U-statistic of degree two. Given a measure of dissimilarity between pairs of observations, the goal is to minimize the within-cluster point scatter over a class of partitions of the feature space. It is the purpose of this paper to define a general statistical framework, relying on the theory of U-processes, for studying the performance of such clustering methods. In this setup, under adequate assumptions on the complexity of the subsets forming the partition candidates, the excess of clustering risk is proved to be of the order $O_{\mathbb{P}}(1/\sqrt{n})$. Based on recent results related to the tail behavior of degenerate U-processes, it is also shown how to establish tighter rate bounds. Model selection issues, related in particular to the number of clusters forming the data partition, are also considered. 1 Introduction In cluster analysis, the objective is to segment a dataset into subgroups, such that data points in the same subgroup are more similar to each other (in a sense that will be specified) than to those in other subgroups. Given the wide range of applications of the clustering paradigm, numerous data segmentation procedures have been introduced in the machine-learning literature (see Chapter 14 in [HTF09] and Chapter 8 in [CFZ09] for recent overviews of "off-the-shelf" clustering techniques).
Whereas the design of clustering algorithms is still receiving much attention in machine learning (see, for instance, [WT10] and the references therein), the statistical study of their performance (with the notable exception of the celebrated K-means approach; see [Har78, Pol81, Pol82, BD04] and, more recently, [BDL08] in the functional data analysis setting) appears, in contrast, not to be sufficiently well documented, as pointed out in [vLBD05, BvL09]. Indeed, in the K-means situation, the specific form of the criterion (and of its expectation, the clustering risk), as well as that of the cells defining the clusters and forming a partition of the feature space (Voronoi cells), makes it possible to use, in a straightforward manner, results from the theory of empirical processes in order to control the performance of empirical clustering risk minimizers. Unfortunately, this center-based approach does not carry over into more general situations, where the dissimilarity measure is no longer a squared Hilbertian norm, unless one gives up the possibility of interpreting the clustering criterion as a function of pairwise dissimilarities between the observations (cf. K-medians). It is the goal of this paper to establish a general statistical framework for investigating clustering performance. The present analysis is based on the observation that many statistical criteria for measuring clustering accuracy are (symmetric) U-statistics (of degree two): functions of a matrix of dissimilarities between pairs of data points. Such statistics have recently received a good deal of attention in the machine-learning literature, insofar as empirical performance measures of predictive rules in problems such as statistical ranking (when viewed as pairwise classification), see [CLV08], or learning on graphs ([BB06]), are precisely functionals of this type, generalizing sample mean statistics.
By means of uniform deviation results for U-processes, the Empirical Risk Minimization paradigm (ERM) can be extended to situations where natural estimates of the risk are U-statistics. In this way, we establish here a rate bound of order $O_{\mathbb{P}}(1/\sqrt{n})$ for the excess of clustering risk of empirical minimizers under adequate complexity assumptions on the cells forming the partition candidates (the bias term is neglected in the present analysis). A linearization technique, combined with sharper tail results in the case of degenerate U-processes, is also used in order to show that tighter rate bounds can be obtained. Finally, it is shown how to use the upper bounds established in this analysis in order to deal with the problem of automatic model selection, that of selecting the number of clusters in particular, through complexity penalization. The paper is structured as follows. In section 2, the notations are set out, a formal description of cluster analysis, from the "pairwise dissimilarity" perspective, is given and the main theoretical concepts involved in the present analysis are briefly recalled. In section 3, an upper bound for the performance of empirical minimization of the clustering risk is established in the context of general dissimilarity measures. Section 4 shows how to refine the rate bound previously obtained by means of a recent inequality for degenerate U-processes, while section 5 deals with automatic selection of the optimal number of clusters. Technical proofs are deferred to the Appendix section. 2 Theoretical background In this section, after a brief description of the probabilistic framework of the study, the general formulation of the clustering objective, based on the notion of dissimilarity between pairs of observations, is recalled, and the connection of the problem of investigating clustering performance with the theory of U-statistics and U-processes is highlighted. ∗http://www.tsi.enst.fr/∼clemenco/.
Concepts pertaining to this theory and involved in the subsequent analysis are next recalled.

2.1 Probabilistic setup and first notations

Here and throughout, $(X_1, \ldots, X_n)$ denotes a sample of i.i.d. random vectors, valued in a high-dimensional feature space $\mathcal{X}$, typically a subset of the Euclidean space $\mathbb{R}^d$ with $d \gg 1$, with common probability distribution $\mu(dx)$. With no loss of generality, we assume that the feature space $\mathcal{X}$ coincides with the support of the distribution $\mu(dx)$. The indicator function of any event $\mathcal{E}$ will be denoted by $\mathbb{I}\{\mathcal{E}\}$, the usual $l_p$ norm on $\mathbb{R}^d$ by $\|x\|_p = (\sum_{i=1}^d |x_i|^p)^{1/p}$ when $1 \le p < \infty$ and by $\|x\|_\infty = \max_{1 \le i \le d} |x_i|$ in the case $p = \infty$, with $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$. When well-defined, the expectation and the variance of a r.v. $Z$ are denoted by $\mathbb{E}[Z]$ and $\mathrm{Var}(Z)$ respectively. Finally, we denote by $x_+ = \max(0, x)$ the positive part of any real number $x$.

2.2 Cluster analysis

The goal of clustering techniques is to partition the data $(X_1, \ldots, X_n)$ into a given finite number of groups, $K \ll n$ say, so that the observations lying in a same group are more similar to each other than to those in other groups. When equipped with a (Borelian) measure of dissimilarity $D : \mathcal{X}^2 \to \mathbb{R}_+$, the clustering task can be rigorously cast as the problem of minimizing the criterion
$$\widehat{W}_n(\mathcal{P}) = \frac{2}{n(n-1)} \sum_{k=1}^{K} \sum_{1 \le i < j \le n} D(X_i, X_j) \cdot \mathbb{I}\{(X_i, X_j) \in \mathcal{C}_k^2\}, \qquad (1)$$
over all possible partitions $\mathcal{P} = \{\mathcal{C}_k : 1 \le k \le K\}$ of the feature space $\mathcal{X}$. The quantity (1) is generally called the intra-cluster similarity or the within-cluster point scatter. The function $D$ aiming at measuring dissimilarity between pairs of observations, we suppose that it fulfills the following properties:

• (SYMMETRY) For all $(x, x') \in \mathcal{X}^2$, $D(x, x') = D(x', x)$.
• (SEPARATION) For all $(x, x') \in \mathcal{X}^2$: $D(x, x') = 0 \Leftrightarrow x = x'$.

Typical choices for the dissimilarity measure are of the form $D(x, x') = \Phi(\|x - x'\|_p)$, where $p \ge 1$ and $\Phi : \mathbb{R}_+ \to \mathbb{R}_+$ is a nondecreasing function such that $\Phi(0) = 0$ and $\Phi(t) > 0$ for all $t > 0$.
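As a quick illustration (ours, not part of the paper), the criterion (1) can be computed directly from a sample, an integer label vector encoding the cells, and a dissimilarity function; the same loop also yields the between-cluster point scatter, whose sum with (1) does not depend on the partition (cf. Remark 1 below). All names in this sketch are our own.

```python
import numpy as np

def point_scatters(X, labels, dissim):
    """Within-cluster point scatter (1) and its between-cluster counterpart,
    both normalized by n(n-1)/2 pairs; their sum is partition-independent."""
    n = len(X)
    within = between = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = dissim(X[i], X[j])
            if labels[i] == labels[j]:
                within += d
            else:
                between += d
    return 2.0 * within / (n * (n - 1)), 2.0 * between / (n * (n - 1))

# Squared Euclidean dissimilarity: the "standard K-means" choice.
sq_dist = lambda x, y: float(np.sum((np.asarray(x) - np.asarray(y)) ** 2))
```

For the four points 0, 1, 10, 11 on the line with cells {0, 1} and {10, 11}, only the two within-cell pairs contribute (1 each), so (1) equals 2 · 2/12 = 1/3, while the within-plus-between total is the same for any labeling.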
This includes the so-termed "standard K-means" setup, where the dissimilarity measure coincides with the squared Euclidean norm (in this case, $p = 2$ and $\Phi(t) = t^2$ for $t \ge 0$). Notice that the expectation of the r.v. (1) is equal to the following quantity:
$$W(\mathcal{P}) = \sum_{k=1}^{K} \mathbb{E}\left[ D(X, X') \cdot \mathbb{I}\{(X, X') \in \mathcal{C}_k^2\} \right], \qquad (2)$$
where $(X, X')$ denotes a pair of independent r.v.'s drawn from $\mu(dx)$. It will be referred to as the clustering risk of the partition $\mathcal{P}$, while its statistical counterpart (1) will be called the empirical clustering risk. Optimal partitions of the feature space $\mathcal{X}$ are defined as those that minimize $W(\mathcal{P})$.

Remark 1 (MAXIMIZATION FORMULATION) It is well known that minimizing the empirical clustering risk (1) is equivalent to maximizing the between-cluster point scatter, which is given by $\frac{1}{n(n-1)} \sum_{k \ne l} \sum_{i,j} D(X_i, X_j) \cdot \mathbb{I}\{(X_i, X_j) \in \mathcal{C}_k \times \mathcal{C}_l\}$, the sum of these two statistics being independent of the partition $\mathcal{P} = \{\mathcal{C}_k : 1 \le k \le K\}$ considered.

Suppose we are given a (hopefully sufficiently rich) class $\Pi$ of partitions of the feature space $\mathcal{X}$. Here we consider minimizers of the empirical risk $\widehat{W}_n$ over $\Pi$, i.e. partitions $\widehat{\mathcal{P}}^*_n$ in $\Pi$ such that
$$\widehat{W}_n\left(\widehat{\mathcal{P}}^*_n\right) = \min_{\mathcal{P} \in \Pi} \widehat{W}_n(\mathcal{P}). \qquad (3)$$
The design of practical algorithms for computing (approximately) empirical clustering risk minimizers is beyond the scope of this paper (refer to [HTF09] for an overview of "off-the-shelf" clustering methods). Here, focus is on the performance of such empirically defined rules.

2.3 U-statistics and U-processes

The subsequent analysis crucially relies on the fact that the quantity (1) that one seeks to optimize is a U-statistic. For clarity's sake, we recall the definition of this class of statistics, generalizing sample means.

Definition 1 (U-STATISTIC OF DEGREE TWO) Let $X_1, \ldots, X_n$ be independent copies of a random vector $X$ drawn from a probability distribution $\mu(dx)$ on the space $\mathcal{X}$ and let $K : \mathcal{X}^2 \to \mathbb{R}$ be a symmetric function such that $K(X_1, X_2)$ is square integrable.
By definition, the functional
$$U_n = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} K(X_i, X_j) \qquad (4)$$
is a (symmetric) U-statistic of degree two, with kernel $K$. It is said to be degenerate when $K^{(1)}(x) \stackrel{\text{def}}{=} \mathbb{E}[K(x, X)] = 0$ with probability one for all $x \in \mathcal{X}$, and non-degenerate otherwise.

The statistic (4) is a natural (unbiased) estimate of the quantity $\theta = \int\!\!\int K(x, x')\,\mu(dx)\,\mu(dx')$. The class of U-statistics is very large and includes most dispersion measures, such as the unbiased sample variance or the Gini mean difference (with kernels $K(x, x') = (x - x')^2/2$ and $K(x, x') = |x - x'|$ respectively, $(x, x') \in \mathbb{R}^2$), as well as the celebrated Wilcoxon location test statistic (with $K(x, x') = \mathbb{I}\{x + x' > 0\}$ for $(x, x') \in \mathbb{R}^2$ in this case). Although the dependence structure induced by the summation over all pairs of observations makes its study more difficult than that of basic sample means, this estimator has nice properties. It is well-known folklore in mathematical statistics that it is the most efficient estimator among all unbiased estimators of the parameter $\theta$ (i.e. the one with minimum variance), see [vdV98]. Precisely, when non-degenerate, it is asymptotically normal with limiting variance $4 \cdot \mathrm{Var}(K^{(1)}(X))$ (refer to Chapter 5 in [Ser80] for an account of the asymptotic analysis of U-statistics). As shall be seen in Section 4, the reduced variance property of U-statistics is crucial when it comes to establishing tight rate bounds.

Going back to the U-statistic of degree two (1) estimating (2), observe that its symmetric kernel is:
$$\forall (x, x') \in \mathcal{X}^2, \quad K_{\mathcal{P}}(x, x') = \sum_{k=1}^{K} D(x, x') \cdot \mathbb{I}\{(x, x') \in \mathcal{C}_k^2\}. \qquad (5)$$
Assuming that $\mathbb{E}[D^2(X_1, X_2) \cdot \mathbb{I}\{(X_1, X_2) \in \mathcal{C}_k^2\}] < \infty$ for all $k \in \{1, \ldots, K\}$ and placing ourselves in the situation where $K \ge 1$ is less than $\mathcal{X}$'s cardinality, the U-statistic (1) is always non-degenerate, except in the (sole) case where $\mathcal{X}$ is made of exactly $K$ elements and all of $\mathcal{P}$'s cells are singletons. Indeed, for all $x \in \mathcal{X}$, denoting by $k(x)$ the index in $\{1, \ldots, K\}$ such that $x \in \mathcal{C}_{k(x)}$, we have:
$$K^{(1)}_{\mathcal{P}}(x) \stackrel{\text{def}}{=} \mathbb{E}[K_{\mathcal{P}}(x, X)] = \int_{x' \in \mathcal{C}_{k(x)}} D(x, x')\,\mu(dx'). \qquad (6)$$
As $\mu$'s support coincides with $\mathcal{X}$ and the separation property is fulfilled by $D$, the quantity above is zero iff $\mathcal{C}_{k(x)} = \{x\}$. In the non-degenerate case, notice finally that the asymptotic variance of $\sqrt{n}\{\widehat{W}_n(\mathcal{P}) - W(\mathcal{P})\}$ is equal to $4 \cdot \mathrm{Var}(D(X, \mathcal{C}_{k(X)}))$, where we set $D(x, \mathcal{C}) = \int_{x' \in \mathcal{C}} D(x, x')\,\mu(dx')$ for all $x \in \mathcal{X}$ and any measurable set $\mathcal{C} \subset \mathcal{X}$.

By definition, a U-process is a collection of U-statistics; one may refer to [dlPG99] for an account of the theory of U-processes. Echoing the role played by the theory of empirical processes in the study of the ERM principle in binary classification, the control of the fluctuations of the U-process
$$\left\{ \widehat{W}_n(\mathcal{P}) - W(\mathcal{P}) : \mathcal{P} \in \Pi \right\}$$
indexed by a set $\Pi$ of partition candidates will naturally lie at the heart of the present analysis. As shall be seen below, this can be achieved mainly by means of the Hoeffding representations of U-statistics, see [Hoe48].

3 A bound for the excess of clustering risk

Here we establish an upper bound for the performance of an empirical minimizer of the clustering risk over a class $\Pi_K$ of partitions of $\mathcal{X}$ with $K \ge 1$ cells, $K$ being fixed here and supposed to be smaller than $\mathcal{X}$'s cardinality. We denote by $W^*_K$ the clustering risk minimum over all partitions of $\mathcal{X}$ with $K$ cells. The following global suprema of empirical Rademacher averages, characterizing the complexity of the cells forming the partition candidates, shall be involved in the subsequent rate analysis:
$$\forall n \ge 2, \quad A_{K,n} = \sup_{\mathcal{C} \in \mathcal{P},\, \mathcal{P} \in \Pi_K} \left| \frac{1}{\lfloor n/2 \rfloor} \sum_{i=1}^{\lfloor n/2 \rfloor} \epsilon_i\, D(X_i, X_{i+\lfloor n/2 \rfloor}) \cdot \mathbb{I}\{(X_i, X_{i+\lfloor n/2 \rfloor}) \in \mathcal{C}^2\} \right|, \qquad (7)$$
where $\epsilon = (\epsilon_i)_{i \ge 1}$ is a sequence of i.i.d. Rademacher random variables, independent from the $X_i$'s, see [Kol06]. The following theorem reveals that the clustering performance of the empirical minimizer (3) is of the order $O_{\mathbb{P}}(1/\sqrt{n})$, when neglecting the bias term (which depends on the richness of $\Pi_K$ solely).
Theorem 1 Consider a class $\Pi_K$ of partitions with $K \ge 1$ cells and suppose that:

• there exists $B < \infty$ such that for all $\mathcal{P}$ in $\Pi_K$ and any $\mathcal{C}$ in $\mathcal{P}$, $\sup_{(x,x') \in \mathcal{C}^2} D(x, x') \le B$;
• the expectation of the Rademacher average $A_{K,n}$ is of the order $O(n^{-1/2})$.

Let $\delta > 0$. For any empirical clustering risk minimizer $\widehat{\mathcal{P}}^*_n$, we have with probability at least $1 - \delta$: $\forall n \ge 2$,
$$W(\widehat{\mathcal{P}}^*_n) - W^*_K \le 4K\,\mathbb{E}[A_{K,n}] + 2BK\sqrt{\frac{2\log(1/\delta)}{n}} + \left\{ \inf_{\mathcal{P} \in \Pi_K} W(\mathcal{P}) - W^*_K \right\} \le \frac{c(B, \delta) \cdot K}{\sqrt{n}} + \left\{ \inf_{\mathcal{P} \in \Pi_K} W(\mathcal{P}) - W^*_K \right\}, \qquad (8)$$
for some constant $c(B, \delta) < \infty$, independent of $n$ and $K$.

The key to proving (8) is to express the U-statistic $\widehat{W}_n(\mathcal{P})$ in terms of sums of i.i.d. r.v.'s, as that involved in the Rademacher average (7):
$$\widehat{W}_n(\mathcal{P}) = \frac{1}{n!} \sum_{\sigma \in \mathfrak{S}_n} \frac{1}{\lfloor n/2 \rfloor} \sum_{i=1}^{\lfloor n/2 \rfloor} K_{\mathcal{P}}\left(X_{\sigma(i)}, X_{\sigma(i+\lfloor n/2 \rfloor)}\right), \qquad (9)$$
where the average is taken over $\mathfrak{S}_n$, the symmetric group of order $n$. The main point lies in the fact that standard techniques in empirical process theory can then be used to control $\widehat{W}_n(\mathcal{P}) - W(\mathcal{P})$ uniformly over $\Pi_K$ under adequate hypotheses; see the proof in the Appendix for technical details. We underline that, naturally, the complexity assumption is also a crucial ingredient of the result stated above, and more generally of clustering consistency results, see Example 1 in [BvL09]. We also point out that the ERM approach is by no means the sole method to obtain error bounds in the clustering context. Just like in binary classification (see [KN02]), one may use a notion of stability of a clustering algorithm to establish such results, see [vL09, ST09] and the references therein. Refer to [vLBD06, vLBD08] for error bounds proved through the stability approach. Before showing how the bound for the excess of risk stated above can be improved, a few remarks are in order.

Remark 2 (ON THE COMPLEXITY ASSUMPTION) We point out that standard metric entropy arguments can be used in order to bound the expected value of the Rademacher average $A_{K,n}$, see [BBL05] for instance.
In particular, if the set of functions $\mathcal{F}_{\Pi_K} = \{(x, x') \in \mathcal{X}^2 \mapsto D(x, x') \cdot \mathbb{I}\{(x, x') \in \mathcal{C}^2\} : \mathcal{C} \in \mathcal{P},\ \mathcal{P} \in \Pi_K\}$ is a VC major class with finite VC dimension $V$ (see [Dud99]), then $\mathbb{E}[A_{K,n}] \le c\sqrt{V/n}$ for some universal constant $c < \infty$. This covers a wide variety of situations, including the case where $D(x, x') = \|x - x'\|_p^\beta$ and the class of sets $\{\mathcal{C} : \mathcal{C} \in \mathcal{P},\ \mathcal{P} \in \Pi_K\}$ is of finite VC dimension.

Remark 3 (K-MEANS) In the standard K-means approach, the dissimilarity measure is $D(x, x') = \|x - x'\|_2^2$ and partition candidates are indexed by a collection $c$ of distinct "centers" $c_1, \ldots, c_K$ in $\mathcal{X}$: $\mathcal{P}_c = \{\mathcal{C}_1, \ldots, \mathcal{C}_K\}$ with $\mathcal{C}_k = \{x \in \mathcal{X} : \|x - c_k\|_2 = \min_{1 \le l \le K} \|x - c_l\|_2\}$ for $1 \le k \le K$ (with adequate distance-tie breaking). One may easily check that for this specific collection of partitions $\Pi_K$ and this choice for the dissimilarity measure, the class $\mathcal{F}_{\Pi_K}$ is a VC major class with finite VC dimension, see Section 19.1 in [DGL96] for instance. Additionally, it should be noticed that in most practical clustering procedures, center candidates are picked in a data-driven fashion, being taken as the averages of the observations lying in each cluster/cell. In this respect, the M-estimation problem formulated here can be considered, to a certain extent, as closer to what is actually achieved by K-means clustering techniques in practice than the usual formulation of the K-means problem (namely, as an optimization problem over $c = (c_1, \ldots, c_K)$).

Remark 4 (WEIGHTED CLUSTERING CRITERIA) Notice that, in practice, the measure $D$ involved in (1) may depend on the data. For scaling purposes, one could assign data-dependent weights $\omega = (\omega_i)_{1 \le i \le d}$ in a coordinatewise manner, leading to $\widehat{D}(x, x') = \sum_{i=1}^d (x_i - x'_i)^2/\widehat{\sigma}_i^2$ for instance, where $\widehat{\sigma}_i^2$ denotes the sample variance related to the $i$-th coordinate. Although the criterion reflecting the performance is not a U-statistic anymore, the theory we develop here can be straightforwardly used for investigating clustering accuracy in such a case.
Indeed, it is easy to control the difference between the latter and the U-statistic (1) with $D(x, x') = \sum_{i=1}^d (x_i - x'_i)^2/\sigma_i^2$, the $\sigma_i^2$'s denoting the theoretical variances of $\mu$'s marginals, under adequate moment assumptions.

4 Tighter bounds for empirical clustering risk minimizers

We now show that one may refine the rate bound established above by considering another representation of the U-statistic (1), its Hoeffding decomposition (see [Ser80]): for every partition $\mathcal{P}$,
$$\widehat{W}_n(\mathcal{P}) - W(\mathcal{P}) = 2L_n(\mathcal{P}) + M_n(\mathcal{P}), \qquad (10)$$
where $L_n(\mathcal{P}) = \frac{1}{n}\sum_{i=1}^n \sum_{\mathcal{C} \in \mathcal{P}} H^{(1)}_{\mathcal{C}}(X_i)$ is a simple average of i.i.d. r.v.'s, with, for $(x, x') \in \mathcal{X}^2$, $H_{\mathcal{C}}(x, x') = D(x, x') \cdot \mathbb{I}\{(x, x') \in \mathcal{C}^2\}$ and $H^{(1)}_{\mathcal{C}}(x) = D(x, \mathcal{C}) \cdot \mathbb{I}\{x \in \mathcal{C}\} - D(\mathcal{C}, \mathcal{C})$, where $D(\mathcal{C}, \mathcal{C}) = \int_{x \in \mathcal{C}} D(x, \mathcal{C})\,\mu(dx)$ and $\mathbb{E}[H_{\mathcal{C}}(x, X)] = D(x, \mathcal{C}) \cdot \mathbb{I}\{x \in \mathcal{C}\}$; and where $M_n(\mathcal{P})$ is a degenerate U-statistic based on the $X_i$'s with kernel $\sum_{\mathcal{C} \in \mathcal{P}} H^{(2)}_{\mathcal{C}}(x, x')$, where, for all $(x, x') \in \mathcal{X}^2$,
$$H^{(2)}_{\mathcal{C}}(x, x') = H_{\mathcal{C}}(x, x') - H^{(1)}_{\mathcal{C}}(x) - H^{(1)}_{\mathcal{C}}(x') - D(\mathcal{C}, \mathcal{C}).$$
The leading term in (10) is the (centered) sample mean $2L_n(\mathcal{P})$, of the order $O_{\mathbb{P}}(\sqrt{1/n})$, while the second term is of the order $O_{\mathbb{P}}(1/n)$. Hence, provided this holds true uniformly over $\mathcal{P}$, the main contribution to the rate bound should arise from the quantity
$$\sup_{\mathcal{P} \in \Pi_K} |2L_n(\mathcal{P})| \le 2K \sup_{\mathcal{C} \in \mathcal{P},\, \mathcal{P} \in \Pi_K} \left| \frac{1}{n}\sum_{i=1}^n D(X_i, \mathcal{C}) \cdot \mathbb{I}\{X_i \in \mathcal{C}\} - D(\mathcal{C}, \mathcal{C}) \right|,$$
which thus leads to consider the following suprema of empirical Rademacher averages:
$$R_{K,n} = \sup_{\mathcal{C} \in \mathcal{P},\, \mathcal{P} \in \Pi_K} \left| \frac{1}{n}\sum_{i=1}^n \epsilon_i\, D(X_i, \mathcal{C}) \cdot \mathbb{I}\{X_i \in \mathcal{C}\} \right|. \qquad (11)$$
This supremum clearly has smaller mean and variance than (7). We also introduce the quantities:
$$Z_\epsilon = \sup_{\mathcal{C} \in \mathcal{P},\, \mathcal{P} \in \Pi_K} \left| \sum_{i,j} \epsilon_i \epsilon_j H^{(2)}_{\mathcal{C}}(X_i, X_j) \right|, \quad U_\epsilon = \sup_{\mathcal{C} \in \mathcal{P},\, \mathcal{P} \in \Pi_K}\ \sup_{\alpha:\ \sum_j \alpha_j^2 \le 1}\ \sum_{i,j} \epsilon_i \alpha_j H^{(2)}_{\mathcal{C}}(X_i, X_j),$$
$$M = \sup_{\mathcal{C} \in \mathcal{P},\, \mathcal{P} \in \Pi_K}\ \sup_{1 \le j \le n} \left| \sum_i \epsilon_i H^{(2)}_{\mathcal{C}}(X_i, X_j) \right|.$$

Theorem 2 Consider a class $\Pi_K$ of partitions with $K$ cells and suppose that:

• there exists $B < \infty$ such that $\sup_{(x,x') \in \mathcal{C}^2} D(x, x') \le B$ for all $\mathcal{P} \in \Pi_K$, $\mathcal{C} \in \mathcal{P}$.

Let $\delta > 0$.
For any empirical clustering risk minimizer $\widehat{\mathcal{P}}^*_n$, with probability at least $1 - \delta$: $\forall n \ge 2$,
$$W(\widehat{\mathcal{P}}^*_n) - W^*_K \le 4K\,\mathbb{E}[R_{K,n}] + 2BK\sqrt{\frac{\log(2/\delta)}{n}} + K\kappa(n, \delta) + \left\{ \inf_{\mathcal{P} \in \Pi_K} W(\mathcal{P}) - W^*_K \right\}, \qquad (12)$$
where we set, for some universal constant $C < \infty$ independent of $n$ and $K$:
$$\kappa(n, \delta) = C\left( \mathbb{E}[Z_\epsilon] + \sqrt{\log(1/\delta)}\,\mathbb{E}[U_\epsilon] + (n + \mathbb{E}[M])\log(1/\delta) \right)/n^2. \qquad (13)$$
The result above relies on the moment inequality for degenerate U-processes proved in [CLV08].

Remark 5 (LOCALIZATION) The same argument can be used to decompose $\Lambda_n(\mathcal{P}) - \Lambda(\mathcal{P})$, where $\Lambda_n(\mathcal{P}) = \widehat{W}_n(\mathcal{P}) - W^*_K$ is an estimate of the excess of risk $\Lambda(\mathcal{P}) = W(\mathcal{P}) - W^*_K$, and, by means of concentration inequalities, to next obtain a sharp upper bound involving the modulus of continuity of the variance of the Rademacher average indexed by the convex hull of the set of functions $\{\sum_{\mathcal{C} \in \mathcal{P}} D(x, \mathcal{C}) \cdot \mathbb{I}\{x \in \mathcal{C}\} - \sum_{\mathcal{C}^* \in \mathcal{P}^*} D(x, \mathcal{C}^*) \cdot \mathbb{I}\{x \in \mathcal{C}^*\} : \mathcal{P} \in \Pi_K\}$, following in the footsteps of recent advances in binary classification, see [Kol06] and Subsection 5.3 in [BBL05]. Owing to space limitations, this will be dealt with in a forthcoming article.

5 Model selection - choosing the number of clusters

A crucial issue in data segmentation is to determine the number $K$ of cells that best exhibits the clustering phenomenon in the data. A variety of automatic procedures for choosing a good value of $K$ have been proposed in the literature, based on data splitting, resampling or sampling techniques ([PFvN89, TWH01, ST08]). Here we consider a complexity regularization method that avoids resorting to such techniques and uses a data-dependent penalty term based on the analysis carried out above. Suppose that we have a sequence $\Pi_1, \Pi_2, \ldots$ of collections of partitions of the feature space $\mathcal{X}$ such that, for all $K \ge 1$, the elements of $\Pi_K$ are made of $K$ cells and fulfill the assumptions of Theorem 1.
In order to avoid overfitting, consider the (data-driven) complexity penalty given by
$$\mathrm{pen}(n, K) = 3K\,\mathbb{E}_\epsilon[A_{K,n}] + 27BK\frac{\log K}{n} + \sqrt{\frac{2B\log K}{n}}, \qquad (14)$$
and the minimizer $\widehat{\mathcal{P}}_{\widehat{K},n}$ of the penalized empirical clustering risk, with
$$\widehat{K} = \arg\min_{K \ge 1} \left\{ \widehat{W}_n(\widehat{\mathcal{P}}_{K,n}) + \mathrm{pen}(n, K) \right\} \quad \text{and} \quad \widehat{W}_n(\widehat{\mathcal{P}}_{K,n}) = \min_{\mathcal{P} \in \Pi_K} \widehat{W}_n(\mathcal{P}).$$
The next result shows that the partition thus selected nearly achieves the performance that would be obtained with the help of an oracle, revealing the value of the index $K$ that minimizes $\mathbb{E}[W(\widehat{\mathcal{P}}_{K,n})] - W^*$, with $W^* = \inf_{\mathcal{P}} W(\mathcal{P})$.

Theorem 3 (AN ORACLE INEQUALITY) Suppose that, for all $K \ge 1$, the assumptions of Theorem 1 are fulfilled. Then, we have:
$$\mathbb{E}\left[ W(\widehat{\mathcal{P}}_{\widehat{K},n}) \right] - W^* \le \min_{K \ge 1} \left\{ W^*_K - W^* + \mathrm{pen}(n, K) \right\} + \frac{\pi^2}{6}\left( 2B\sqrt{\frac{2}{n}} + \frac{18B}{n} \right). \qquad (15)$$
Of course, the penalty could be slightly refined using the results of Section 4. Due to space limitations, such an analysis is not carried out here and is left to the reader.

6 Conclusion

Whereas, until now, the theoretical analysis of clustering performance was mainly limited to the K-means situation (but not only, cf. [BvL09] for instance), this paper establishes bounds for the success of empirical clustering risk minimization in a general "pairwise dissimilarity" framework, relying on the theory of U-processes. The excess of risk of empirical minimizers of the clustering risk is proved to be of the order $O_{\mathbb{P}}(n^{-1/2})$ under mild assumptions on the complexity of the cells forming the partition candidates. It is also shown how to slightly refine this upper bound through a linearization technique and the use of recent inequalities for degenerate U-processes. Although the improvement displayed here may appear modest at first glance, our approach suggests that much sharper data-dependent bounds could be established this way. To the best of our knowledge, the present analysis is the first to state results of this nature.
As regards complexity regularization, while the focus here is on the choice of the number of clusters, the argument used in this paper also paves the way for investigating more general model selection issues, including choices related to the geometry/complexity of the cells of the partition considered.

Appendix - Technical proofs

Proof of Theorem 1

We may classically write:
$$W(\widehat{\mathcal{P}}^*_n) - W^*_K \le 2\sup_{\mathcal{P} \in \Pi_K} |\widehat{W}_n(\mathcal{P}) - W(\mathcal{P})| + \left\{ \inf_{\mathcal{P} \in \Pi_K} W(\mathcal{P}) - W^*_K \right\} \le 2K \sup_{\mathcal{C} \in \mathcal{P},\, \mathcal{P} \in \Pi_K} |U_n(\mathcal{C}) - u(\mathcal{C})| + \left\{ \inf_{\mathcal{P} \in \Pi_K} W(\mathcal{P}) - W^*_K \right\}, \qquad (16)$$
where $U_n(\mathcal{C})$ denotes the U-statistic with kernel $H_{\mathcal{C}}(x, x') = D(x, x') \cdot \mathbb{I}\{(x, x') \in \mathcal{C}^2\}$ based on the sample $X_1, \ldots, X_n$, and $u(\mathcal{C})$ its expectation. Therefore, mimicking the argument of Corollary 3 in [CLV08], based on the so-termed first Hoeffding representation of U-statistics (see Lemma A.1 in [CLV08]), we may straightforwardly derive the lemma below.

Proposition 1 (UNIFORM DEVIATIONS) Suppose that Theorem 1's assumptions are fulfilled. Let $\delta > 0$. With probability at least $1 - \delta$, we have: $\forall n \ge 2$,
$$\sup_{\mathcal{C} \in \mathcal{P},\, \mathcal{P} \in \Pi_K} |U_n(\mathcal{C}) - u(\mathcal{C})| \le 2\,\mathbb{E}[A_{K,n}] + B\sqrt{\frac{2\log(1/\delta)}{n}}. \qquad (17)$$
PROOF. The argument follows in the footsteps of Corollary 3's proof in [CLV08]. It is based on the so-termed first Hoeffding representation of U-statistics (9), which provides an immediate control of the moment generating function of the supremum $\sup_{\mathcal{C}} |U_n(\mathcal{C}) - u(\mathcal{C})|$ by that of the norm of an empirical process, namely $\sup_{\mathcal{C}} |A_n(\mathcal{C}) - u(\mathcal{C})|$, where, for all $\mathcal{C} \in \mathcal{P}$ and $\mathcal{P} \in \Pi_K$:
$$A_n(\mathcal{C}) = \frac{1}{\lfloor n/2 \rfloor} \sum_{i=1}^{\lfloor n/2 \rfloor} D(X_i, X_{i+\lfloor n/2 \rfloor}) \cdot \mathbb{I}\{(X_i, X_{i+\lfloor n/2 \rfloor}) \in \mathcal{C}^2\}.$$

Lemma 1 (see Lemma A.1 in [CLV08]) Let $\Psi : \mathbb{R} \to \mathbb{R}$ be convex and nondecreasing. We have:
$$\mathbb{E}\left[ \Psi\left( \sup_{\mathcal{C}} |U_n(\mathcal{C}) - u(\mathcal{C})| \right) \right] \le \mathbb{E}\left[ \Psi\left( \sup_{\mathcal{C}} |A_n(\mathcal{C}) - u(\mathcal{C})| \right) \right]. \qquad (18)$$
Applying this with $\Psi(t) = \exp(\lambda t)$ and using standard symmetrization and randomization tricks, one obtains:
$$\forall \lambda > 0, \quad \mathbb{E}\left[ \exp\left( \lambda \sup_{\mathcal{C}} |A_n(\mathcal{C}) - u(\mathcal{C})| \right) \right] \le \mathbb{E}\left[ \exp\left( 2\lambda A_{K,n} \right) \right]. \qquad (19)$$
Observing that the value of $A_{K,n}$ cannot change by more than $2B/n$ when one of the $(\epsilon_i, X_i, X_{i+\lfloor n/2 \rfloor})$'s is changed while the others are kept fixed, the standard bounded differences inequality argument applies and yields:
$$\mathbb{E}\left[ \exp\left( 2\lambda A_{K,n} \right) \right] \le \exp\left( 2\lambda\,\mathbb{E}[A_{K,n}] + \frac{\lambda^2 B^2}{2n} \right). \qquad (20)$$
Next, Markov's inequality with $\lambda = n(t - 2\mathbb{E}[A_{K,n}])/B^2$ gives: $\mathbb{P}\{\sup_{\mathcal{C}} |A_n(\mathcal{C}) - u(\mathcal{C})| > t\} \le \exp(-n(t - 2\mathbb{E}[A_{K,n}])^2/(2B^2))$. The desired result is then immediate. □

The rate bound is finally established by combining bounds (16) and (17).

Proof of Theorem 2 (sketch)

The theorem can be proved by using the decomposition (10), applying the argument above in order to control $\sup_{\mathcal{P}} |L_n(\mathcal{P})|$, and using the lemma below to handle the degenerate part. The latter is based on a recent moment inequality for degenerate U-processes, proved in [CLV08]. Due to space limitations, technical details are left to the reader.

Lemma 2 (see Theorem 11 in [CLV08]) Suppose that Theorem 2's assumptions are fulfilled. There exists a universal constant $C < \infty$ such that for all $\delta \in (0, 1)$, we have with probability at least $1 - \delta$: $\forall n \ge 2$, $\sup_{\mathcal{P} \in \Pi_K} |M_n(\mathcal{P})| \le K\kappa(n, \delta)$.

Proof of Theorem 3

The proof mimics the argument of Theorem 8.1 in [BBL05]. We thus obtain that: $\forall K \ge 1$,
$$\mathbb{E}\left[ W(\widehat{\mathcal{P}}_{\widehat{K},n}) \right] - W^* \le \mathbb{E}\left[ W(\widehat{\mathcal{P}}_{K,n}) \right] - W^* + \mathrm{pen}(n, K) + \sum_{k \ge 1} \mathbb{E}\left[ \left( \sup_{\mathcal{P} \in \Pi_k} \{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} - \mathrm{pen}(n, k) \right)_+ \right].$$
Reproducing the argument of Theorem 1's proof, one may easily show that:
$$\forall k \ge 1, \quad \mathbb{E}\left[ \sup_{\mathcal{P} \in \Pi_k} \{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} \right] \le 2k\,\mathbb{E}[A_{k,n}].$$
Thus, for all $k \ge 1$, the quantity $\mathbb{P}\{\sup_{\mathcal{P} \in \Pi_k} \{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} \ge \mathrm{pen}(n, k) + 2\delta\}$ is bounded by
$$\mathbb{P}\left\{ \sup_{\mathcal{P} \in \Pi_k} \{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} \ge \mathbb{E}\left[ \sup_{\mathcal{P} \in \Pi_k} \{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} \right] + \sqrt{\frac{2B\log k}{n}} + \delta \right\} + \mathbb{P}\left\{ 3k\,\mathbb{E}_\epsilon[A_{k,n}] \le 2k\,\mathbb{E}[A_{k,n}] - 27Bk\frac{\log k}{n} - \delta \right\}.$$
By virtue of the bounded differences inequality (jumps being bounded by $2B/n$), the first term is bounded by $\exp(-n\delta^2/(2B^2))/k^2$, while the second term is bounded by $\exp(-n\delta/(9Bk))/k^3$, as shown by Lemma 8.2 in [BBL05] (see the third inequality therein).
Integrating over $\delta$, one obtains:
$$\mathbb{E}\left[ \left( \sup_{\mathcal{P} \in \Pi_k} \{W(\mathcal{P}) - \widehat{W}_n(\mathcal{P})\} - \mathrm{pen}(n, k) \right)_+ \right] \le \left( 2B\sqrt{2/n} + 18B/n \right)/k^2.$$
Summing the bounds thus obtained over $k$ leads to the oracle inequality stated in the theorem.

References

[BB06] G. Biau and K. Bleakley. Statistical Inference on Graphs. Statistics & Decisions, 24:209-232, 2006.
[BBL05] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of Classification: A Survey of Some Recent Advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
[BD04] S. Ben-David. A framework for statistical clustering with a constant time approximation algorithms for k-median clustering. In Proceedings of COLT'04, Lecture Notes in Computer Science, Volume 3120, pages 415-426, 2004.
[BDL08] G. Biau, L. Devroye, and G. Lugosi. On the Performance of Clustering in Hilbert Spaces. IEEE Trans. Inform. Theory, 54(2):781-790, 2008.
[BvL09] S. Bubeck and U. von Luxburg. Nearest neighbor clustering: A baseline method for consistent clustering with arbitrary objective functions. Journal of Machine Learning Research, 10:657-698, 2009.
[CFZ09] B. Clarke, E. Fokoué, and H. Zhang. Principles and Theory for Data Mining and Machine Learning. Springer, 2009.
[CLV08] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical minimization of U-statistics. The Annals of Statistics, 36(2):844-874, 2008.
[DGL96] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[dlPG99] V. de la Peña and E. Giné. Decoupling: From Dependence to Independence. Springer, 1999.
[Dud99] R.M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, 1999.
[Har78] J.A. Hartigan. Asymptotic distributions for clustering criteria. The Annals of Statistics, 6:117-131, 1978.
[Hoe48] W. Hoeffding. A class of statistics with asymptotically normal distribution. Ann. Math. Stat., 19:293-325, 1948.
[HTF09] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning (2nd ed.), pages 520-528. Springer, 2009.
[KN02] S. Kutin and P. Niyogi. Almost-everywhere algorithmic stability and generalization error. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, 2002.
[Kol06] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization (with discussion). The Annals of Statistics, 34:2593-2706, 2006.
[PFvN89] R. Peck, L. Fisher, and J. van Ness. Bootstrap confidence intervals for the number of clusters in cluster analysis. J. Am. Stat. Assoc., 84:184-191, 1989.
[Pol81] D. Pollard. Strong consistency of k-means clustering. The Annals of Statistics, 9:135-140, 1981.
[Pol82] D. Pollard. A central limit theorem for k-means clustering. The Annals of Probability, 10:919-926, 1982.
[Ser80] R.J. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, 1980.
[ST08] O. Shamir and N. Tishby. Model selection and stability in k-means clustering. In Proceedings of the 21st Annual Conference on Learning Theory, 2008.
[ST09] O. Shamir and N. Tishby. On the reliability of clustering stability in the large sample regime. In Advances in Neural Information Processing Systems 21, 2009.
[TWH01] R. Tibshirani, G. Walther, and T. Hastie. Estimating the number of clusters in a data set via the gap statistic. J. Royal Stat. Soc., 63(2):411-423, 2001.
[vdV98] A. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[vL09] U. von Luxburg. Clustering stability: An overview. Foundations and Trends in Machine Learning, 2(3):235-274, 2009.
[vLBD05] U. von Luxburg and S. Ben-David. Towards a statistical theory of clustering. In PASCAL Workshop on Statistics and Optimization of Clustering, 2005.
[vLBD06] U. von Luxburg and S. Ben-David. A sober look at clustering stability. In Proceedings of the 19th Conference on Learning Theory, 2006.
[vLBD08] U. von Luxburg and S. Ben-David. Relating clustering stability to properties of cluster boundaries. In Proceedings of the 21st Conference on Learning Theory, 2008.
[WT10] D. M. Witten and R. Tibshirani. A framework for feature selection in clustering. J. Amer. Stat. Assoc., 105(490):713-726, 2010.
Robust Lasso with missing and grossly corrupted observations

Nam H. Nguyen, Johns Hopkins University, nam@jhu.edu
Nasser M. Nasrabadi, U.S. Army Research Lab, nasser.m.nasrabadi.civ@mail.mil
Trac D. Tran, Johns Hopkins University, trac@jhu.edu

Abstract

This paper studies the problem of accurately recovering a sparse vector $\beta^\star$ from highly corrupted linear measurements $y = X\beta^\star + e^\star + w$, where $e^\star$ is a sparse error vector whose nonzero entries may be unbounded and $w$ is a bounded noise. We propose a so-called extended Lasso optimization which takes into consideration sparse prior information on both $\beta^\star$ and $e^\star$. Our first result shows that the extended Lasso can faithfully recover both the regression and the corruption vectors. Our analysis relies on a notion of extended restricted eigenvalue for the design matrix $X$. Our second set of results applies to a general class of Gaussian design matrices $X$ with i.i.d. rows $\mathcal{N}(0, \Sigma)$, for which we establish a surprising phenomenon: the extended Lasso can recover the exact signed supports of both $\beta^\star$ and $e^\star$ from only $\Omega(k \log p \log n)$ observations, even when the fraction of corruption is arbitrarily close to one. Our analysis also shows that this amount of observations required to achieve exact signed support recovery is optimal.

1 Introduction

One of the central problems in statistics is linear regression, in which the goal is to accurately estimate a regression vector $\beta^\star \in \mathbb{R}^p$ from the noisy observations
$$y = X\beta^\star + w, \qquad (1)$$
where $X \in \mathbb{R}^{n \times p}$ is the measurement or design matrix, and $w \in \mathbb{R}^n$ is the stochastic observation noise vector. A particular situation that has recently attracted much attention from the research community concerns the model in which the number of regression variables $p$ is larger than the number of observations $n$ ($p \ge n$). In such circumstances, without imposing additional assumptions on this model, it is well known that the problem is ill-posed, and thus the linear regression is not consistent.
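The ill-posedness when $p > n$ is easy to exhibit numerically: any vector in the null space of $X$ can be added to a solution without changing the fit, so least squares admits infinitely many interpolating solutions. A small Python illustration of ours (sizes and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 6                       # more variables than observations
X = rng.normal(size=(n, p))
beta_star = np.array([1.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_star                 # noiseless observations

# Minimum-norm least-squares solution ...
beta_ls = np.linalg.pinv(X) @ y
# ... plus any null-space direction fits the data equally well.
null_basis = np.linalg.svd(X)[2][n:]   # rows of V^T spanning ker(X)
beta_alt = beta_ls + 5.0 * null_basis[0]
```

Both `beta_ls` and `beta_alt` reproduce $y$ exactly while being different vectors, which is precisely why structural assumptions such as sparsity are imposed.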
Accordingly, there have been various lines of work on high-dimensional inference based on imposing different types of structural constraints such as sparsity and group sparsity [15] [5] [21]. Among them, the most popular model focuses on the sparsity assumption on the regression vector. To estimate $\beta$, a standard method, namely the Lasso [15], was proposed, which uses the $l_1$-penalty as a surrogate function to enforce the sparsity constraint:
$$\min_{\beta}\ \frac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1, \qquad (2)$$
where $\lambda$ is a positive regularization parameter and the $l_1$-norm is defined by $\|\beta\|_1 = \sum_{i=1}^p |\beta_i|$. During the past few years, there have been numerous studies of $\ell_1$-regularization for sparse regression models [23] [11] [10] [17] [4] [2] [22]. These works are mainly characterized by the type of the loss functions considered. For instance, some authors [4] seek to obtain a regression estimate $\widehat{\beta}$ that delivers small prediction error, while other authors [2] [11] [22] seek to produce a regressor with minimal parameter estimation error, measured by the $\ell_2$-norm of $(\widehat{\beta} - \beta^\star)$. Another line of work [23] [17] considers variable selection, in which the goal is to obtain an estimate that correctly identifies the support of the true regression vector. To achieve low prediction or parameter estimation loss, it is now well known that it is both sufficient and necessary to impose certain lower bounds on the smallest singular values of the design matrix [10] [2], while a notion of small mutual incoherence for the design matrix [4] [23] [17] is required to achieve accurate variable selection. We notice that all the previous work relies on the assumption that the observation noise has bounded energy. Without this assumption, it is very likely that the estimated regressor is either unreliable or unable to identify the correct support. With this observation in mind, in this paper we extend the linear model (1) by considering noise with unbounded energy.
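The Lasso objective (2) can be minimized by proximal gradient descent (ISTA), whose proximal step is coordinatewise soft-thresholding. The sketch below is ours, not from the paper; for an orthonormal design the minimizer reduces to soft-thresholding of $X^\top y$, which gives a convenient sanity check.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """ISTA for (2): gradient step on the quadratic term with step 1/L,
    L = ||X||_2^2 (squared spectral norm), followed by the l1 prox."""
    L = np.linalg.norm(X, 2) ** 2
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```

With `X = np.eye(3)` the iteration reaches its fixed point `soft_threshold(y, lam)` after one step; for a general design, the objective value is nonincreasing along the iterates.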
It is clear that if all the entries of $y$ are corrupted by large errors, then it is impossible to faithfully recover the regression vector $\beta^\star$. However, in many practical applications such as face and acoustic recognition, only a portion of the observation vector is contaminated by gross errors. Formally, we have the mathematical model
$$y = X\beta^\star + e^\star + w, \qquad (3)$$
where $e^\star \in \mathbb{R}^n$ is the sparse error, whose nonzero entries have unknown locations and arbitrarily large magnitudes, and $w$ is another noise vector with bounded entries. In this paper, we assume that $w$ has a multivariate Gaussian $\mathcal{N}(0, \sigma^2 I_{n \times n})$ distribution. This model also includes as a particular case the missing data problem, in which the observation vector $y$ is not fully observed and some entries are missing. This problem is particularly important in computer vision and biology applications. If some entries of $y$ are missing, the nonzero entries of $e^\star$ whose locations are associated with the missing entries have the same values as the corresponding entries of $y$ but with opposite signs. The problem of recovering data under gross errors has gained increasing attention recently, with many interesting practical applications [18] [6] [7] as well as theoretical investigations [9] [13] [8]. Another recent line of research on recovering data from grossly corrupted measurements has also been studied in the context of robust principal component analysis (RPCA) [3] [20] [1]. Let us consider some examples to illustrate:

• Face recognition. The model (3) was originally proposed by Wright et al. [19] in the context of face recognition. In this problem, a face test sample $y$ is assumed to be represented as a linear combination of training faces in the dictionary $X$, $y = X\beta$, where $\beta$ is the coefficient vector used for classification. However, it is often the case that the face is occluded by unwanted objects such as glasses, hats, etc.
These occlusions, which occupy a portion of the test face, can be considered as the sparse error $e^\star$ in the model (3).

• Subspace clustering. One of the important problems in high-dimensional data analysis is to cluster the data points into multiple subspaces. A recent work of Elhamifar and Vidal [6] showed that this problem can be solved by expressing each data point as a sparse linear combination of all other data points. Coefficient vectors recovered from solving the Lasso problems are then employed for clustering. If the data points are represented as a matrix $X$, then we wish to find a sparse coefficient matrix $B$ such that $X = XB$ and $\mathrm{diag}(B) = 0$. When the data are missing or contaminated with outliers, [6] formulates the problem as $X = XB + E$ and minimizes a sum of two $\ell_1$-norms with respect to both $B$ and $E$.

• Sensor network. In this model, sensors collect measurements of a signal $\beta^\star$ independently by simply projecting $\beta^\star$ onto the row vectors of a sensing matrix $X$: $y_i = \langle X_i, \beta^\star \rangle$. The measurements $y_i$ are then sent to the central hub for analysis. However, it is highly likely that some sensors fail to send the measurements correctly and sometimes report totally irrelevant measurements. Therefore, it is more accurate to employ the observation model (3) than the model (1).

It is worth noticing that in the aforementioned applications, $e^\star$ plays the role of a sparse (undesired) error. However, in many other applications, $e^\star$ can contain meaningful information, and it is thus necessary to recover it. An example of this kind is signal separation, in which $\beta^\star$ and $e^\star$ are two distinct signal components (video or audio). Furthermore, in applications such as classification and clustering, the assumption that the test sample $y$ is a linear combination of a few training samples in the dictionary (design matrix) $X$ might be violated. The sparse component $e^\star$ can thus be seen as a compensation for linear regression model mismatch.
Given the observation model (3) and the sparsity assumptions on both the regression vector β⋆ and the error e⋆, we propose the following convex minimization to estimate the unknown parameter β⋆ as well as the error e⋆: min_{β,e} (1/2)∥y − Xβ − e∥₂² + λβ∥β∥₁ + λe∥e∥₁, (4) where λβ and λe are positive regularization parameters. This optimization, which we call the extended Lasso, can be seen as a generalization of the Lasso program. Indeed, by setting λe = 0, (4) reduces to the standard Lasso. The additional regularization associated with e encourages sparsity of the error, where the parameter λe controls the sparsity level. In this paper, we focus on the following questions: what are necessary and sufficient conditions on the ambient dimension p, the number of observations n, the sparsity index k of the regression vector β⋆, and the fraction of corruption so that (i) the extended Lasso is able (or unable) to recover the exact support sets of both β⋆ and e⋆? (ii) the extended Lasso is able to recover β⋆ and e⋆ with small prediction error and parameter error? We are particularly interested in understanding the asymptotic situation where the fraction of error is arbitrarily close to 100%. Previous work. The problem of recovering the regression vector β⋆ and error e⋆ was originally proposed and analyzed by Wright and Ma [18]. In the absence of the stochastic noise w in the observation model (3), the authors proposed to estimate (β⋆, e⋆) by solving the linear program min_{β,e} ∥β∥₁ + ∥e∥₁ s.t. y = Xβ + e. (5) The result of [18] is asymptotic in nature. They showed that for a class of Gaussian design matrices with i.i.d. entries, the optimization (5) can recover (β⋆, e⋆) precisely with high probability even when the fraction of corruption is arbitrarily close to one. However, the result holds under rather stringent conditions. In particular, they require the number of observations n to grow proportionally with the ambient dimension p, and the sparsity index k to be a very small fraction of n.
These conditions are of course far from the optimal bounds in the compressed sensing (CS) and statistics literature (recall that k ≤ O(n/ log p) is sufficient in conventional analyses [17]). Another line of work has also focused on the optimization (5). In the papers of Laska et al. [7] and Li et al. [9], the authors establish that for a Gaussian design matrix X, if n ≥ C(k + s) log p, where s is the sparsity level of e⋆, then the recovery is exact. This follows from the fact that the combined matrix [X, I] obeys the restricted isometry property, a well-known property used to guarantee exact recovery of sparse vectors via ℓ1-minimization. These results, however, do not allow the fraction of corruption to be close to one. Among the previous work, the most closely related to the current paper are recent results by Li [8] and Nguyen et al. [13], in which a positive regularization parameter λ is employed to control the sparsity of e⋆. Using different methods, both sets of authors show that when λ is deterministically set to 1/√log p and X is a sub-orthogonal matrix, the solution of the following optimization is exact even when a constant fraction of the observations is corrupted. Moreover, [8] establishes a similar result for a Gaussian design matrix in which the number of observations is only of order k log p, an amount that is known to be optimal in CS and statistics. min_{β,e} ∥β∥₁ + λ∥e∥₁ s.t. y = Xβ + e. (6) Our contribution. This paper considers a general setting in which the observations are contaminated by both sparse and dense errors. We allow the corruptions to grow linearly with the number of observations and to have arbitrarily large magnitudes. We establish a general scaling of the quadruplet (n, p, k, s) such that the extended Lasso stably recovers both the regression and corruption vectors. Of particular interest to us are the following questions: (a) First, under what scalings of (n, p, k, s) does the extended Lasso obtain a unique solution with small estimation error?
(b) Second, under what scalings of (n, p, k) does the extended Lasso achieve exact signed support recovery even when almost all the observations are corrupted? (c) Third, under what scalings of (n, p, k, s) does no solution of the extended Lasso specify the correct signed support? To answer the first question, we introduce a notion of extended restricted eigenvalue for the matrix [X, I], where I is an identity matrix. We show that this property is satisfied by a general class of random Gaussian design matrices. The answers to the last two questions require stricter conditions on the design matrix. In particular, for a random Gaussian design matrix with i.i.d. rows N(0, Σ), we rely on two standard assumptions: invertibility and mutual incoherence. If we denote Z = [X, I], where I is an identity matrix, and β = [β⋆T, e⋆T]T, then the observation vector y can be reformulated as y = Zβ + w, which is the same as the standard Lasso model. However, previous results [2] [17] for random Gaussian design matrices are not applicable to this setting, since Z no longer behaves like a Gaussian matrix. To establish the theoretical analysis, we need to further study the interaction between the Gaussian and identity matrices. By exploiting the fact that the matrix Z consists of two components, one of which has special structure, our analysis reveals an interesting phenomenon: the extended Lasso can accurately recover both the regressor β⋆ and the corruption e⋆ even when the fraction of corruption is up to 100%. We measure the recoverability of these variables under two criteria: parameter accuracy and feature selection accuracy. Moreover, our analysis can be extended to the situation in which the identity matrix is replaced by a tight frame D, as well as to other models such as the group Lasso or the matrix Lasso with sparse error. Notation. We summarize here some standard notation used throughout the paper. We reserve T and S for the sparse supports of β⋆ and e⋆, respectively.
Given a design matrix X ∈ Rn×p and subsets S and T, we use XST to denote the |S| × |T| submatrix obtained by extracting the rows indexed by S and the columns indexed by T. We use the notation C1, C2, c1, c2, etc., to refer to positive constants, whose values may change from line to line. Given two functions f and g, the notation f(n) = O(g(n)) means that there exists a constant c < +∞ such that f(n) ≤ cg(n); the notation f(n) = Ω(g(n)) means that f(n) ≥ cg(n); and the notation f(n) = Θ(g(n)) means that f(n) = O(g(n)) and f(n) = Ω(g(n)). The symbol f(n) = o(g(n)) means that f(n)/g(n) → 0. 2 Main results In this section, we provide precise statements of the main results of this paper. In the first subsection, we address parameter estimation and provide a deterministic result based on the notion of extended restricted eigenvalue. We further show that random Gaussian design matrices satisfy this property with high probability. The next subsection considers feature selection. We establish conditions on the design matrix such that the solution of the extended Lasso has the exact signed supports. 2.1 Parameter estimation As in the conventional Lasso, to obtain a low parameter estimation bound, it is necessary to impose conditions on the design matrix X. In this paper, we introduce a notion of extended restricted eigenvalue (extended RE) condition. Let C be a restricted set; we say that the matrix X satisfies the extended RE assumption over the set C if there exists some κl > 0 such that ∥Xh + f∥₂ ≥ κl(∥h∥₂ + ∥f∥₂) for all (h, f) ∈ C, (7) where the restricted set C of interest is defined with λn := λe/λβ as follows: C := {(h, f) ∈ Rp × Rn | ∥hT c∥₁ + λn∥fSc∥₁ ≤ 3∥hT∥₁ + 3λn∥fS∥₁}. (8) This assumption is a natural extension of the restricted eigenvalue condition and restricted strong convexity considered in [2] [14] and [12]. In the absence of the vector f in equation (7) and in the set C, this condition reduces to the restricted eigenvalue defined in [2].
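For concreteness, membership in the restricted set C of (8) is a single inequality on the two index sets and can be checked directly. The helper below is an illustrative sketch of ours (the function name and the toy vectors are not from the paper):

```python
import numpy as np

def in_restricted_set(h, f, T, S, lam_n):
    """Check membership in the restricted set C of Eq. (8):
    ||h_{T^c}||_1 + lam_n * ||f_{S^c}||_1 <= 3 * ||h_T||_1 + 3 * lam_n * ||f_S||_1."""
    p, n = len(h), len(f)
    Tc = np.setdiff1d(np.arange(p), T)     # complement of T in {0, ..., p-1}
    Sc = np.setdiff1d(np.arange(n), S)     # complement of S in {0, ..., n-1}
    lhs = np.abs(h[Tc]).sum() + lam_n * np.abs(f[Sc]).sum()
    rhs = 3.0 * np.abs(h[T]).sum() + 3.0 * lam_n * np.abs(f[S]).sum()
    return lhs <= rhs

T, S = np.array([0, 1]), np.array([3])
# a pair supported exactly on (T, S) is trivially in C (its off-support mass is zero)
h = np.zeros(10); h[[0, 1]] = [2.0, -1.0]
f = np.zeros(6); f[3] = 4.0
ok_pair = in_restricted_set(h, f, T, S, 0.5)

# a pair living entirely off the supports violates the cone inequality
h_bad = np.zeros(10); h_bad[5] = 10.0
bad_pair = in_restricted_set(h_bad, np.zeros(6), T, S, 0.5)
```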
As explained at more length in [2] and [16], the restricted eigenvalue is among the weakest assumptions on the design matrix under which the solution of the Lasso is consistent. With this assumption at hand, we now state the first theorem. Theorem 1. Consider the optimal solution (β̂, ê) of the optimization problem (4) with regularization parameters chosen as λβ ≥ (2/γ)∥X∗w∥∞ and λn := λe/λβ = γ∥w∥∞/∥X∗w∥∞, (9) where γ ∈ (0, 1]. Assuming that the design matrix X obeys the extended RE, the error pair (h, f) = (β̂ − β⋆, ê − e⋆) is bounded by ∥h∥₂ + ∥f∥₂ ≤ 3κl⁻²(λβ√k + λe√s). (10) There are several interesting observations from this theorem: 1) The error bound naturally splits into two components related to the sparsity indices of β⋆ and e⋆. In addition, the error bound involves three quantities: the sparsity indices, the regularization parameters, and the extended RE constant. If the terms related to the corruption e⋆ are omitted, then we obtain a parameter estimation bound similar to that of the standard Lasso [2] [12]. 2) The choice of the regularization parameters λβ and λe can be made explicit: assuming w is a Gaussian random vector whose entries are N(0, σ²) and the design matrix has unit-normed columns, it is clear that with high probability, ∥X∗w∥∞ ≤ 2√(σ² log p) and ∥w∥∞ ≤ 2√(σ² log n). Thus, it is sufficient to select λβ ≥ (4/γ)√(σ² log p) and λe ≥ 4√(σ² log n). 3) At first glance, the parameter γ does not seem to have any meaningful interpretation, and γ = 1 seems to be the best selection due to the smallest estimation error it produces. However, this parameter actually controls the sparsity level of the regression vector relative to the fraction of corruption. This relation is made via the restricted set C. In the following lemma, we show that the extended RE condition holds for a large class of random Gaussian design matrices whose rows are i.i.d. zero mean with covariance Σ.
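The high-probability bound ∥X∗w∥∞ ≤ 2√(σ² log p) quoted in observation 2) is easy to sanity-check by simulation; the dimensions and trial count below are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma, trials = 200, 400, 1.0, 200
hits = 0
for _ in range(trials):
    X = rng.standard_normal((n, p))
    X /= np.linalg.norm(X, axis=0)            # normalize to unit-norm columns
    w = sigma * rng.standard_normal(n)        # noise with N(0, sigma^2) entries
    # each X_j^T w is N(0, sigma^2); a union bound over p columns gives the threshold
    if np.max(np.abs(X.T @ w)) <= 2.0 * np.sqrt(sigma**2 * np.log(p)):
        hits += 1
frac = hits / trials                          # empirical probability the bound holds
```

The Gaussian tail bound combined with a union bound over the p columns predicts a failure probability of at most 2/p per trial, so `frac` should be very close to one.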
Before stating the lemma, let us define some quantities operating on the covariance matrix Σ: Cmin := λmin(Σ) is the smallest eigenvalue of Σ, Cmax := λmax(Σ) is the largest eigenvalue of Σ, and ξ(Σ) := maxi Σii is the maximal entry on the diagonal of Σ. Lemma 1. Consider a random Gaussian design matrix whose rows are i.i.d. N(0, Σ) and assume n²Cmaxξ(Σ) = Θ(1). Select λn := γ√(ξ(Σ)n) · √(log n/log p), (11) then with probability greater than 1 − c1 exp(−c2n), the matrix X satisfies the extended RE with parameter κl = 1/(4√2), provided that n ≥ C(ξ(Σ)/Cmin)k log p and s ≤ min{C1 n/(γ² log n), C2 n} for some small constants C1, C2. We would like to make some remarks: 1) The choice of the parameter λn is nothing special here. When the design matrix is Gaussian and independent of the Gaussian stochastic noise w, we can easily show that ∥X∗w∥∞ ≤ 2√(ξ(Σ)nσ² log p) with probability at least 1 − 2 exp(−log p). Therefore, the selection of λn follows from Theorem 1. 2) The proof of this lemma, shown in the Appendix, boils down to controlling two terms: • Restricted eigenvalue with X. ∥Xh∥₂² + ∥f∥₂² ≥ κr(∥h∥₂² + ∥f∥₂²) for all (h, f) ∈ C. • Mutual incoherence. The column space of the matrix X is incoherent with the column space of the identity matrix. That is, there exists some κm > 0 such that |⟨Xh, f⟩| ≤ κm(∥h∥₂ + ∥f∥₂)² for all (h, f) ∈ C. If the incoherence between these two column spaces is sufficiently small, namely 4κm < κr, then we can conclude that ∥Xh + f∥₂² ≥ (κr − 2κm)(∥h∥₂ + ∥f∥₂)². The small mutual incoherence property is especially important since it quantifies how the regression component separates from the sparse error. 3) To simplify our result, we consider the special case of the uniform Gaussian design, in which Σ = (1/n)Ip×p. In this situation, Cmin = Cmax = ξ(Σ) = 1/n. We have the following result, which is a corollary of Theorem 1 and Lemma 1. Corollary 1 (Standard Gaussian design). Let X be a standard Gaussian design matrix.
Consider the optimal solution (β̂, ê) of the optimization problem (4) with regularization parameters chosen as λβ ≥ (4/γ)√(σ² log p) and λe ≥ 4√(σ² log n), (12) where γ ∈ (0, 1]. Assume also that n ≥ Ck log p and s ≤ min{C1 n/(γ² log n), C2 n} for some small constants C1, C2. Then with probability greater than 1 − c1 exp(−c2n), the error pair (h, f) = (β̂ − β⋆, ê − e⋆) is bounded by ∥h∥₂ + ∥f∥₂ ≤ 384((1/γ)√(σ²k log p) + √(σ²s log n)). (13) Corollary 1 reveals an interesting phenomenon: by setting γ = 1/√log n, even when the fraction of corruption is linearly proportional to the number of samples n, the extended Lasso (4) is still capable of recovering both the coefficient vector β⋆ and the corruption (missing) vector e⋆ within the bounded error (13). Without the dense noise w in the observation model (3) (σ = 0), the extended Lasso recovers the exact solution. This result is impossible to achieve with the standard Lasso. Furthermore, if we know a priori that the number of corrupted observations is of order O(n/ log p), then selecting γ = 1 instead of 1/√log n will minimize the estimation error (13) of Theorem 1. 2.2 Feature selection with random Gaussian design In many applications, the feature selection criterion is preferred [17] [23]. Feature selection refers to the property that the recovered parameter has the same signed support as the true regressor. In general, good feature selection implies good parameter estimation, but the reverse direction does not usually hold. In this part, we investigate conditions on the design matrix and the scaling of (n, p, k, s) such that both the regression and sparse error vectors meet this criterion. Consider the linear model (3), where X is a Gaussian random design matrix whose rows are i.i.d. zero mean with covariance matrix Σ. It is well known for the Lasso that in order to obtain feature selection accuracy, the covariance matrix Σ must obey two properties: invertibility and small mutual incoherence restricted to the set T.
The first property guarantees that (4) is strictly convex, leading to a unique solution of the convex program, while the second requires that the correlation between the components of Σ associated with the set T and those associated with the set T c be sufficiently small.
1. Invertibility. To guarantee uniqueness, we require ΣT T to be invertible. In particular, letting Cmin := λmin(ΣT T ), we require Cmin > 0.
2. Mutual incoherence. For some γ ∈ (0, 1), ∥ΣT cT (ΣT T )⁻¹∥∞ ≤ (1 − γ)/2, (14) where ∥·∥∞ refers to the ℓ∞/ℓ∞ operator norm. It is worth noting that in the standard Lasso the factor 1/2 is omitted; our condition is tighter than the condition used to establish feature selection for the Lasso by a constant factor. In fact, the quantity 1/2 is nothing special here: we can set any value close to one, with the compensation that the number of samples n will increase. We thus use 1/2 for simplicity of the proof.
Toward the end, we will also use three other quantities operating on the restricted covariance matrix ΣT T : Cmax, defined as the maximum eigenvalue of ΣT T , Cmax := λmax(ΣT T ); and D⁻max and D⁺max, the ℓ∞-norms of the matrices (ΣT T )⁻¹ and ΣT T : D⁻max := ∥(ΣT T )⁻¹∥∞ and D⁺max := ∥ΣT T ∥∞. Our result also involves two quantities operating on the conditional covariance matrix of (XT c|XT ), defined as ΣT c|T := ΣT cT c − ΣT cT (ΣT T )⁻¹ΣT T c. (15) We then define ρu(ΣT c|T ) := maxi (ΣT c|T )ii and ρl(ΣT c|T ) := (1/2) mini≠j [(ΣT c|T )ii + (ΣT c|T )jj − 2(ΣT c|T )ij]. In what follows we use the shorthands ρu and ρl. We establish the following result for Gaussian random designs whose covariance matrix Σ obeys the two assumptions.
Theorem 2. (Achievability) Given the linear model (3) with random Gaussian design and covariance matrix Σ satisfying the invertibility and incoherence properties for some γ ∈ (0, 1), suppose we solve the extended Lasso (4) with regularization parameters obeying λβ = (4/γ)√(max{ρu, D⁺max} nσ² log p) and λe = 8√(σ² log n). (16) Also, let η = 1/(32γ² log n), and suppose the sequence (n, p, k, s) and the regularization parameters λβ, λe satisfy s ≤ ηn and n ≥ max{ C1 (1/(1 − η)) (ρu/Cmin) k log(p − k), C2 (η/(1 − η)²) (max{ρu, D⁺max}/Cmin) k log(p − k) log n }, (17) where C1 and C2 are numerical constants. In addition, suppose that mini∈T |β⋆i| > fβ(λβ) and mini∈S |e⋆i| > fe(λβ, λe), where fβ := c1 (λβ/(n − s)) √(k log(p − k)/n) ∥(ΣT T )^(−1/2)∥∞² + 20√(σ² log k/(Cmin(n − s))) (18) and fe := c2 (Cmax(k√s + s√k))^(1/2) (λβ/(n − s)) √(k log(p − k)/n) ∥(ΣT T )^(−1/2)∥∞² + c3λe. (19) Then the following properties hold with probability greater than 1 − c exp(−c′ max{log n, log pk}): 1. The solution pair (β̂, ê) of the extended Lasso (4) is unique and has the exact signed support. 2. ℓ∞-norm bounds: ∥β̂ − β⋆∥∞ ≤ fβ(λβ) and ∥ê − e⋆∥∞ ≤ fe(λβ, λe).
There are several interesting observations from the theorem: 1) The first and most important observation is that the extended Lasso is robust to arbitrarily large, sparse observation error. In this sense, the extended Lasso can be viewed as a generalization of the Lasso. Under the same invertibility and mutual incoherence assumptions on the covariance matrix Σ as the standard Lasso, the extended Lasso program can recover both the regression vector and the error with exact signed supports even when almost all the observations are contaminated by arbitrarily large error with unknown support. What we sacrifice for this corruption robustness is an additional log factor in the number of samples. We note that when the number of corrupted observations is O(n/ log n), only O(k log(p − k)) samples are sufficient to recover the exact signed supports of both the regression and sparse error vectors. 2) We consider the special case of Gaussian random design in which the covariance matrix Σ = (1/n)Ip×p. In this case, the entries of X are i.i.d. N(0, 1/n) and we have Cmin = Cmax = D⁺max = D⁻max = ρu = ρl = 1. In addition, the invertibility and mutual incoherence properties are automatically satisfied. The theorem implies that when the number of errors s is close to n, the number of samples n needed to recover the exact signed supports satisfies n/log n = Ω(k log(p − k)). Furthermore, Theorem 2 guarantees consistency in the element-wise ℓ∞-norm of the estimated regression at the rate ∥β̂ − β⋆∥∞ = O(√(σ² log p) · √(k log(p − k)/(γ²n))). When γ is chosen to be 1/√(32 log n) (equivalently, allowing s close to n), the ℓ∞ error rate is of order O(σ√log p), which is known to be the same as that of the standard Lasso. 3) Corollary 1, though interesting, is not able to guarantee stable recovery when the fraction of corruption converges to one. We show in Theorem 2 that this fraction can come arbitrarily close to one at the price of a factor of log n in the number of samples. Theorem 2 also implies that there is a significant difference between recovery with small parameter estimation error and recovery with correct variable selection. When the number of corrupted observations is linearly proportional to n, recovering the exact signed supports requires an increase from Ω(k log p) samples (Corollary 1) to Ω(k log p log n) samples (Theorem 2). Similar behavior is exhibited by the standard Lasso, as pointed out in [17], Corollary 2. Our next theorem shows that the number of samples needed to recover the correct signed support is optimal. That is, whenever the rescaled sample size satisfies (20), then no matter which regularization parameters λβ and λe are selected, with high probability no solution of the extended Lasso correctly identifies the signed supports.
Theorem 3. (Inachievability) Given the linear model (3) with random Gaussian design and covariance matrix Σ satisfying the invertibility and incoherence properties for some γ ∈ (0, 1), let η = 1/(32γ² log(n − s)) and suppose the sequence (n, p, k, s) satisfies s ≥ ηn and n ≤ min{ C3 (1/(1 − η)) (ρu/Cmin) k log(p − k), C4 (η/(1 − η)²) (min{ρl, D⁺max}/Cmax) k log(p − k) log((1 − η)n) (1 + √(σ² log n)/λe)⁻¹ }, (20) where C3 and C4 are small universal constants. Then with probability tending to one, no solution pair of the extended Lasso (4) has the correct signed support.
3 Illustrative simulations In this section, we provide some simulations to illustrate the ability of the extended Lasso to recover the exact regression signed support when a significant fraction of the observations is corrupted by large error. Simulations are performed for a range of parameters (n, p, k, s), where the design matrix X is standard Gaussian with rows i.i.d. N(0, Ip×p). For each fixed set of (n, p, k, s), we generate sparse vectors β⋆ and e⋆ whose nonzero entries have uniformly random locations and Gaussian-distributed magnitudes. In our experiments, we consider varying problem sizes p = {128, 256, 512} and three types of regression sparsity indices: sublinear sparsity (k = 0.2p/ log(0.2p)), linear sparsity (k = 0.1p), and fractional power sparsity (k = 0.5p^0.75). In all cases, we fix the error support size s = n/2, meaning that half of the observations are corrupted. With this selection, Theorem 2 suggests that n ≥ 2Ck log(p − k) log n samples are needed to guarantee exact signed support recovery. We choose n/log n = 4θk log(p − k), where the parameter θ is the rescaled sample size. This parameter controls the success/failure of the extended Lasso. In the algorithm, we select λβ = 2√(σ² log p log n) and λe = 2√(σ² log n) as suggested by Theorem 2, where the noise level is fixed at σ = 0.1. The algorithm reports a success if the solution pair has the same signed support as (β⋆, e⋆). In Fig. 1, each point on the curves represents the average of 100 trials. As demonstrated by the simulations, the extended Lasso is capable of recovering the exact signed supports of both β⋆ and e⋆ even when 50% of the observations are contaminated. Furthermore, up to unknown constants, Theorems 2 and 3 match the simulation results. As the sample size drops to n/log n ≤ 2k log(p − k), the probability of success starts going to zero, implying the failure of the extended Lasso.
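A miniature version of this experiment can be sketched in numpy. Since the objective (4) is a smooth quadratic plus a separable weighted ℓ1 penalty, proximal gradient descent (ISTA) on the augmented variable [β; e] applies directly. This solver and all problem sizes, seeds, corruption fraction, and regularization values below are illustrative choices of ours, not the paper's setup (in particular the corruption fraction here is 10%, not 50%, to keep the check fast and robust):

```python
import numpy as np

def extended_lasso(X, y, lam_beta, lam_e, n_iter=3000):
    """ISTA sketch of the extended Lasso (4):
    min over (beta, e) of 0.5*||y - X beta - e||_2^2 + lam_beta*||beta||_1 + lam_e*||e||_1."""
    n, p = X.shape
    Z = np.hstack([X, np.eye(n)])                    # augmented design Z = [X, I]
    step = 1.0 / np.linalg.norm(Z, 2) ** 2           # 1 / Lipschitz constant of the gradient
    lam = np.concatenate([np.full(p, lam_beta), np.full(n, lam_e)])
    w = np.zeros(p + n)                              # w = [beta; e]
    for _ in range(n_iter):
        w = w - step * (Z.T @ (Z @ w - y))           # gradient step on the quadratic part
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # prox of weighted l1
    return w[:p], w[p:]

# toy instance: 3-sparse regressor, 8 grossly corrupted observations, small dense noise
rng = np.random.default_rng(1)
n, p, k, s = 80, 30, 3, 8
X = rng.standard_normal((n, p)) / np.sqrt(n)         # roughly unit-norm columns
beta_star = np.zeros(p)
beta_star[:k] = [1.0, -1.0, 1.5]
idx = rng.choice(n, size=s, replace=False)           # unknown corruption locations
e_star = np.zeros(n)
e_star[idx] = 5.0 * rng.choice([-1.0, 1.0], size=s)  # arbitrarily large signed errors
y = X @ beta_star + e_star + 0.01 * rng.standard_normal(n)
beta_hat, e_hat = extended_lasso(X, y, lam_beta=0.08, lam_e=0.08)
```

At this scale both supports are recovered and the parameter errors stay small, in line with the behavior the figure reports for moderate θ.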
Acknowledgments We acknowledge support from the Army Research Office (ARO) under Grant 60291-MA and the National Science Foundation (NSF) under Grant CCF-1117545.
Figure 1: Probability of success in recovering the signed supports, plotted against the rescaled sample size θ for the sublinear, linear, and fractional power sparsity regimes (p = 128, 256, 512).
References [1] A. Agarwal, S. Negahban, and M. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Proc. 28th Inter. Conf. Mach. Learn. (ICML-11), pages 1129–1136, 2011. [2] P. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705–1732, 2009. [3] E. J. Cand`es, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Submitted for publication, 2009. [4] E. J. Cand`es and Y. Plan. Near-ideal model selection by ℓ1 minimization. Annals of Statistics, 37:2145–2177, 2009. [5] E. J. Cand`es and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007. [6] E. Elhamifar and R. Vidal. Sparse subspace clustering. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2790–2797, 2009. [7] J. N. Laska, M. A. Davenport, and R. G. Baraniuk. Exact signal recovery from sparsely corrupted measurements through the pursuit of justice. In Asilomar Conference on Signals, Systems and Computers, pages 1556–1560, 2009. [8] X. Li. Compressed sensing and matrix completion with constant proportion of corruptions. Preprint, 2011. [9] Z. Li, F. Wu, and J. Wright. On the systematic measurement matrix for compressed sensing in the presence of gross error.
In Data Compression Conference (DCC), pages 356–365, 2010. [10] N. Meinshausen and P. Buehlmann. High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3):1436–1462, 2008. [11] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):2246–2270, 2009. [12] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Preprint, 2010. [13] N. H. Nguyen and T. D. Tran. Exact recoverability from dense corrupted observations via ℓ1 minimization. Preprint, 2010. [14] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241–2259, 2010. [15] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996. [16] S. A. van de Geer and P. Buehlmann. On the conditions used to prove oracle results for the lasso. Electronic Journal of Statistics, 3:1360–1392, 2009. [17] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55(5):2183–2202, 2009. [18] J. Wright and Y. Ma. Dense error correction via ℓ1 minimization. IEEE Transactions on Information Theory, 56(7):3540–3560, 2010. [19] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2009. [20] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. Adv. Neural Infor. Proc. Sys. (NIPS), pages 2496–2504, 2010. [21] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49–67, 2006. [22] T. Zhang.
Some sharp performance bounds for least squares regression with ℓ1 regularization. Annals of Statistics, 37(5):2109–2144, 2009. [23] P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
Unifying Non-Maximum Likelihood Learning Objectives with Minimum KL Contraction Siwei Lyu Computer Science Department University at Albany, State University of New York lsw@cs.albany.edu Abstract When used to learn high dimensional parametric probabilistic models, classical maximum likelihood (ML) learning often suffers from computational intractability, which motivates the active development of non-ML learning methods. Yet, because of their divergent motivations and forms, the objective functions of many non-ML learning methods are seemingly unrelated, and a unified framework for understanding them has been lacking. In this work, based on an information geometric view of parametric learning, we introduce a general non-ML learning principle termed minimum KL contraction, where we seek optimal parameters that minimize the contraction of the KL divergence between two distributions after they are transformed with a KL contraction operator. We then show that the objective functions of several important or recently developed non-ML learning methods, including contrastive divergence [12], noise-contrastive estimation [11], partial likelihood [7], non-local contrastive objectives [31], score matching [14], pseudo-likelihood [3], maximum conditional likelihood [17], maximum mutual information [2], maximum marginal likelihood [9], and conditional and marginal composite likelihood [24], can be unified under the minimum KL contraction framework with different choices of the KL contraction operators. 1 Introduction Fitting parametric probabilistic models to data is a basic task in statistics and machine learning. Given a set of training data {x(1), · · · , x(n)}, parameter learning aims to find a member of a parametric distribution family, qθ, that best represents the training data.
In practice, many useful high dimensional parametric probabilistic models, such as Markov random fields [18] or products of experts [12], are defined as qθ(x) = ˜qθ(x)/Z(θ), where ˜qθ is the unnormalized model and Z(θ) = ∫Rd ˜qθ(x)dx is the partition function. Maximum (log) likelihood (ML) estimation is the most commonly used method for parameter learning, where the optimal parameter is obtained by solving argmaxθ (1/n) Σ_{k=1}^{n} log qθ(x(k)). The resulting ML estimator has many desirable properties, such as consistency and asymptotic normality [21]. However, because of the high dimensional integration/summation, the partition function in qθ often makes ML learning computationally intractable. For this reason, non-ML parameter learning methods that use "tricks" to obviate direct computation of the partition function have experienced rapid development, particularly in recent years. While many computationally efficient non-ML learning methods have achieved impressive practical performance, with a few exceptions, their different learning objectives and numerical implementations seem to suggest that they are largely unrelated. In this work, based on the information geometric view of parametric learning, we elaborate on a general non-ML learning principle termed minimum KL contraction (MKC), where we seek optimal parameters that minimize the contraction of the KL divergence between two distributions after they are transformed with a KL contraction operator. A KL contraction operator is a mapping between probability distributions under which the KL divergence of two distributions tends to decrease unless the two distributions are equal.
We then show that the objective functions of a wide range of non-ML learning methods, including contrastive divergence [12], noise-contrastive estimation [11], partial likelihood [7], non-local contrastive objectives [31], score matching [14], pseudo-likelihood [3], maximum conditional likelihood [17], maximum mutual information [2], maximum marginal likelihood [9], and conditional and marginal composite likelihood [24], can all be unified under the MKC framework with different choices of KL contraction operators and MKC objective functions. 2 Related Works Similarities in the parameter updates of non-ML learning methods have been noticed in several recent works. For instance, in [15], it is shown that the parameter update in score matching [14] is equivalent to the parameter update in a version of contrastive divergence [12] that performs Langevin approximation instead of Gibbs sampling, and that both are approximations to the parameter update of pseudo-likelihood [3]. This connection is further generalized in [1], which shows that the parameter update in another variant of contrastive divergence is equivalent to a stochastic parameter update in conditional composite likelihood [24]. However, such similarities in numerical implementations are only tangential to the more fundamental relationship among the objective functions of different non-ML learning methods. On the other hand, energy-based learning [22] presents a general framework that subsumes most non-ML learning objectives, but its broad generality also obscures their specific statistical interpretations. At the objective function level, relations between some non-ML methods are known. For instance, it is known that pseudo-likelihood is a special case of conditional composite likelihood [30]. In [10], several non-ML learning methods are unified under the framework of minimizing Bregman divergence.
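As noted in the introduction, the source of intractability is the partition function Z(θ): for a discrete model on d binary variables it is a sum over 2^d configurations. The brute-force computation below, for a small made-up Ising-style model of our own choosing, makes the exponential cost explicit:

```python
import itertools
import numpy as np

def partition_function(J, b):
    """Brute-force Z(theta) = sum over x in {-1,+1}^d of exp(x'Jx + b'x).
    The sum ranges over 2**d configurations, which is what makes ML learning
    intractable for large d."""
    Z = 0.0
    for x in itertools.product([-1.0, 1.0], repeat=len(b)):
        x = np.asarray(x)
        Z += np.exp(x @ J @ x + b @ x)
    return Z

rng = np.random.default_rng(4)
d = 10                                   # already 2**10 = 1024 terms
J = 0.1 * rng.standard_normal((d, d))    # hypothetical pairwise couplings
b = 0.1 * rng.standard_normal(d)         # hypothetical biases
Z = partition_function(J, b)

# with Z in hand, the normalized model q(x) = exp(x'Jx + b'x) / Z sums to one
total = sum(np.exp(np.asarray(x) @ J @ np.asarray(x) + b @ np.asarray(x)) / Z
            for x in itertools.product([-1.0, 1.0], repeat=d))
```

Doubling d doubles the model size but squares the number of terms in Z, which is exactly the computation the non-ML methods discussed below avoid.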
3 KL Contraction Operator We base our discussion hereafter on continuous variables and probability density functions. Most results can be readily extended to the discrete case by replacing integrations and probability density functions with summations and probability mass functions. We denote by Ωd the set of all probability density functions over Rd. For two probability distributions p, q ∈ Ωd, their Kullback–Leibler (KL) divergence (also known as relative entropy or I-divergence) [6] is defined as KL(p∥q) = ∫Rd p(x) log(p(x)/q(x))dx. The KL divergence is non-negative and equals zero if and only if p = q almost everywhere (a.e.). We define a distribution operator, Φ, as a mapping from a density function p ∈ Ωd to another density function ˜p ∈ Ωd′, and adopt the shorthand notation ˜p = Φ{p}. A distribution p is a fixed point of a distribution operator Φ if p = Φ{p}. Figure 1: Illustration of a KL contraction operator Φ acting on two density functions p and q, mapping them to ˜p = Φ{p} and ˜q = Φ{q} with KL(˜p∥˜q) ≤ KL(p∥q). A KL contraction operator is a distribution operator, Φ : Ωd ↦ Ωd′, such that for any p, q ∈ Ωd, there exists a constant β ≥ 1 for which the following condition holds: KL(p∥q) − βKL(Φ{p}∥Φ{q}) ≥ 0. (1) Subsequently, β is known as the contraction factor, and the LHS of Eq. (1) is the KL contraction of p and q under Φ. Obviously, if p = q (a.e.), their KL contraction, as well as their KL divergence, is zero. In addition, a KL contraction operator is strict if the equality in Eq. (1) holds only when p = q (a.e.). Intuitively, if the KL divergence is regarded as a "distance" metric on probability distributions¹, then it is never increased after both distributions are transformed with a KL contraction operator; a graphical illustration is shown in Fig. 1. Furthermore, under a strict KL contraction operator, the KL divergence is always reduced unless the two distributions are equal (a.e.).
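The defining inequality (1) has a simple discrete analogue that can be checked numerically: mapping two distributions through a column-stochastic matrix T (a conditional distribution T(y|x), the operator family of Section 3.1 below) never increases their KL divergence, i.e. Eq. (1) holds with β = 1. A minimal numpy check with arbitrary randomly drawn p, q, and T:

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence sum_x p(x) log(p(x)/q(x)); assumes strictly positive entries."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(5)
d = 6
p = rng.random(d); p /= p.sum()        # two arbitrary distributions on d points
q = rng.random(d); q /= q.sum()

T = rng.random((d, d))
T /= T.sum(axis=0, keepdims=True)      # columns sum to one: T[y, x] = T(y|x)

p_t, q_t = T @ p, T @ q                # Phi{p}(y) = sum_x T(y|x) p(x)
contraction = kl(p, q) - kl(p_t, q_t)  # the KL contraction of p and q under Phi
```

The non-negativity of `contraction` is the (discrete) data processing inequality; with equality, by Lemma 1 below, only when p = q.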
KL contraction operators are analogous to contraction operators in ordinary metric spaces, with β playing a role similar to that of the Lipschitz constant [19].

¹ Indeed, it is known that the KL divergence behaves like the squared Euclidean distance [6].

Eq.(1) gives the general and abstract definition of KL contraction operators. In the following, we give several examples of KL contraction operators that are constructed from common operations on probability distributions.

3.1 Conditional Distribution

We can form a family of KL contraction operators using conditional distributions. Consider x ∈ ℝ^d with distribution p(x) ∈ Ω_d and y ∈ ℝ^{d′}; from a conditional distribution T(y|x), we can define a distribution operator as
Φ^c_T{p}(y) = ∫_{ℝ^d} T(y|x) p(x) dx = p̃(y). (2)
The following result shows that Φ^c_T is a strict KL contraction operator with β = 1.

Lemma 1 (Cover & Thomas [6])² For two distributions p, q ∈ Ω_d, with the distribution operator defined in Eq.(2), we have
KL(p∥q) − KL(Φ^c_T{p} ∥ Φ^c_T{q}) = ∫_{ℝ^{d′}} p̃(y) KL(T_p(x|y) ∥ T_q(x|y)) dy ≥ 0,
where T_p(x|y) = T(y|x)p(x)/p̃(y) and T_q(x|y) = T(y|x)q(x)/q̃(y) are the conditional distributions induced from p and q with T. Furthermore, the equality holds if and only if p = q (a.e.).

3.2 Marginalization and Marginal Grafting

Two related types of KL contraction operators can be constructed based on marginal distributions. Consider x with distribution p(x) ∈ Ω_d, and a nonempty index subset A ⊂ {1, ..., d}. Let \A denote {1, ..., d} − A. The marginal distribution p_A(x_A) of the sub-vector x_A, formed by the components whose indices are in A, is obtained by integrating p(x) over the sub-vector x_{\A}. This marginalization operation thus defines a distribution operator between p ∈ Ω_d and p_A ∈ Ω_{|A|}, as:
Φ^m_A{p}(x_A) = ∫_{ℝ^{d−|A|}} p(x) dx_{\A} = p_A(x_A). (3)
Another KL contraction operator, termed marginal grafting, can also be defined based on p_A.
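Lemma 1 can be checked numerically in the discrete analogue, where the conditional distribution T becomes a column-stochastic matrix and the integrals become sums. The sketch below, with arbitrary made-up dimensions, verifies both the inequality and the exact expression for the gap:

```python
import numpy as np

def kl(a, b):
    # discrete KL divergence
    return float(np.sum(a * np.log(a / b)))

rng = np.random.default_rng(1)

def rand_dist(k):
    v = rng.random(k) + 1e-3
    return v / v.sum()

d, d2 = 6, 5
p, q = rand_dist(d), rand_dist(d)

# T[y, x] ~ T(y|x): each column is a distribution over y
T = rng.random((d2, d)) + 1e-3
T /= T.sum(axis=0)

pt, qt = T @ p, T @ q            # the mapped distributions Phi{p}, Phi{q}
gap = kl(p, q) - kl(pt, qt)      # should be >= 0 (data-processing inequality)

# Lemma 1: the gap equals E_{pt(y)} [ KL( T_p(.|y) || T_q(.|y) ) ]
Tp = T * p / pt[:, None]         # induced posterior T_p(x|y), one row per y
Tq = T * q / qt[:, None]
rhs = float(sum(pt[y] * kl(Tp[y], Tq[y]) for y in range(d2)))

assert gap >= -1e-12
assert abs(gap - rhs) < 1e-9
```

The identity follows from the chain rule for KL divergence applied to the joint distributions T(y|x)p(x) and T(y|x)q(x), matching the statement of the lemma.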
For a distribution q(x) ∈ Ω_d, the marginal grafting operator is defined as:
Φ^g_{p,A}{q}(x) = q(x) p_A(x_A) / q_A(x_A) = q_{\A|A}(x_{\A}|x_A) p_A(x_A). (4)
Φ^g_{p,A}{q} can be understood as replacing q_A in q(x) with p_A. The last term in Eq.(4) is nonnegative and integrates to one over ℝ^d, and is thus a proper probability distribution in Ω_d. Furthermore, p is a fixed point of the operator Φ^g_{p,A}, as Φ^g_{p,A}{p} = p. The following result shows that both Φ^m_A and Φ^g_{p,A} are KL contraction operators, and that they are in a sense complementary to each other.

Lemma 2 (Huber [13]) For two distributions p, q ∈ Ω_d, with the distribution operators defined in Eq.(3) and Eq.(4), we have
KL(p∥q) − KL(Φ^g_{p,A}{p} ∥ Φ^g_{p,A}{q}) = KL(Φ^m_A{p} ∥ Φ^m_A{q}).
Furthermore,
KL(Φ^g_{p,A}{p} ∥ Φ^g_{p,A}{q}) = ∫ p_A(x_A) KL(p_{\A|A}(·|x_A) ∥ q_{\A|A}(·|x_A)) dx_A,
where p_{\A|A}(·|x_A) and q_{\A|A}(·|x_A) are the conditional distributions induced from p(x) and q(x), and
KL(Φ^m_A{p} ∥ Φ^m_A{q}) = KL(p_A(x_A) ∥ q_A(x_A)).

Lemma 2 also indicates that neither Φ^m_A nor Φ^g_{p,A} is strict: the KL contraction of p(x) and q(x) under the former is zero if p_{\A|A}(x_{\A}|x_A) = q_{\A|A}(x_{\A}|x_A) (a.e.), even though the two may differ on the marginal distribution over x_A; for the latter, having p_A(x_A) = q_A(x_A) is sufficient to make their KL contraction zero.

² We cite the original reference for this and subsequent results, which are recast using the terminology introduced in this work. Due to the limit of space, we defer formal proofs of these results to the supplementary materials.

3.3 Binary Mixture

For two different distributions p(x) and g(x) ∈ Ω_d, we introduce a binary variable c ∈ {0, 1} with Pr(c = i) = π_i, where π_0, π_1 ∈ [0, 1] and π_0 + π_1 = 1. We can then form a joint distribution p̂(x, c = 0) = π_0 g(x) and p̂(x, c = 1) = π_1 p(x) over ℝ^d × {0, 1}. Marginalizing out c from p̂(x, c), we obtain a binary mixture p̃(x), which induces a distribution operator:
Φ^b_g{p}(x) = π_0 g(x) + π_1 p(x) = p̃(x). (5)
The following result shows that Φ^b_g is a strict KL contraction operator with β = 1/π_1.

Lemma 3 For two distributions p, q ∈ Ω_d, with the distribution operator defined in Eq.(5), we have
KL(p∥q) − (1/π_1) KL(Φ^b_g{p} ∥ Φ^b_g{q}) = (1/π_1) ∫_{ℝ^d} p̃(x) KL(p_{c|x}(c|x) ∥ q_{c|x}(c|x)) dx ≥ 0,
where p_{c|x}(c|x) and q_{c|x}(c|x) are the posterior conditional distributions induced from p̂(x, c) and q̂(x, c), respectively. The equality holds if and only if p = q (a.e.).

3.4 Lumping

Let S = (S_1, S_2, ..., S_m) be a partition of ℝ^d such that S_i ∩ S_j = ∅ for i ≠ j and ∪_{i=1}^m S_i = ℝ^d. The lumping [8] of a distribution p(x) ∈ Ω_d over S yields a distribution over i ∈ {1, 2, ..., m}, and subsequently induces a distribution operator Φ^l_S, as:
Φ^l_S{p}(i) = ∫_{x∈S_i} p(x) dx = P^S_i, for i = 1, ..., m. (6)
The following result shows that Φ^l_S is a KL contraction operator with β = 1.

Lemma 4 (Csiszár & Shields [8]) For two distributions p, q ∈ Ω_d, with the distribution operator defined in Eq.(6), we have
KL(p∥q) − KL(Φ^l_S{p} ∥ Φ^l_S{q}) = Σ_{i=1}^m P^S_i KL(p̃_i ∥ q̃_i) ≥ 0,
where p̃_i(x) = p(x) 1[x∈S_i] / ∫_{x′∈S_i} p(x′) dx′ and q̃_i(x) = q(x) 1[x∈S_i] / ∫_{x′∈S_i} q(x′) dx′ are the distributions induced from p and q by restricting to S_i, respectively, with 1[·] being the indicator function.

Note that Φ^l_S is in general not strict, as two distributions whose restricted distributions agree (p̃_i = q̃_i for all i) will have a zero KL contraction.

4 Minimizing KL Contraction for Parametric Learning

In this work, we take the information geometric view of parameter learning: assuming the training data are samples from a distribution p ∈ Ω_d, we seek an optimal distribution on the statistical manifold corresponding to the parametric distribution family q_θ that best approximates p [20]. In this context, maximum (log) likelihood learning is equivalent to finding the parameter θ that minimizes the KL divergence of p and q_θ [8], as argmin_θ KL(p∥q_θ) = argmax_θ ∫_{ℝ^d} p(x) log q_θ(x) dx. The data-based ML objective is obtained when we approximate the expectation with the sample average, ∫_{ℝ^d} p(x) log q_θ(x) dx ≈ (1/n) Σ_{k=1}^n log q_θ(x^{(k)}).

KL contraction operators suggest an alternative approach to parametric learning. In particular, the KL contraction of p and q_θ under a KL contraction operator is always nonnegative and reaches zero when p and q_θ are equal almost everywhere. Therefore, we can minimize their KL contraction under a KL contraction operator to encourage the matching of q_θ to p. We term this general approach to parameter learning minimum KL contraction (MKC). Mathematically, minimum KL contraction may be realized with three different but related types of objective functions.

- Type I: With a KL contraction operator Φ, we can find the optimal θ that directly minimizes the KL contraction between p and q_θ, as:
argmin_θ KL(p∥q_θ) − β KL(Φ{p} ∥ Φ{q_θ}). (7)
In practice, it may be desirable to use Φ with β = 1 that is also a linear operator on L2-bounded functions over ℝ^d [19].
To better see this, consider q_θ(x) = q̃_θ(x)/Z(θ) as the model defined by an unnormalized model q̃_θ and its partition function Z(θ). Furthermore, assuming that we can obtain samples {x^{(1)}, ..., x^{(n)}} and {y^{(1)}, ..., y^{(n′)}} from p and Φ{p}, respectively, the optimization of Eq.(7) can be approximated as
argmin_θ KL(p∥q_θ) − KL(Φ{p} ∥ Φ{q_θ}) ≈ argmax_θ (1/n) Σ_{k=1}^n log q̃_θ(x^{(k)}) − (1/n′) Σ_{k=1}^{n′} log Φ{q̃_θ}(y^{(k)}),
where, due to the linearity of Φ, the two terms involving Z(θ) in q_θ and Φ{q_θ} cancel each other out. Therefore, the optimization does not require the computation of the partition function, a highly desirable property for fitting parameters in high-dimensional probabilistic models with intractable partition functions. Type I MKC objective functions with KL contraction operators induced from conditional distributions, marginalization, marginal grafting, linear transforms, and lumping all fall into this category. However, using nonlinear KL contraction operators, such as the one induced from binary mixtures, may also be able to avoid computing the partition function (e.g., Section 4.4). Furthermore, the KL contraction operator in Eq.(7) can have parameters, which can include the model parameter θ (e.g., Section 4.2); the optimization then becomes more complicated, as Φ{p} cannot be ignored when optimizing θ. Last, note that when using Type I MKC objective functions with a non-strict KL contraction operator, we cannot guarantee p = q_θ even if their corresponding KL contraction is zero.

- Type II: Consider a strict KL contraction operator with β = 1, denoted Φ_t, that is parameterized by an auxiliary parameter t different from θ, such that for any distribution p ∈ Ω_d we have Φ_0{p} = p and Φ_t{p} is continuously differentiable with respect to t. Then, the KL divergence of Φ_t{p} and Φ_t{q_θ} can be regarded as a function of t and θ, as: L(t, θ) = KL(Φ_t{p} ∥ Φ_t{q_θ}).
Thus, the KL contraction in Eq.(7) can be approximated with a Taylor expansion near t = 0, as
KL(p∥q_θ) − KL(Φ_{δt}{p} ∥ Φ_{δt}{q_θ}) = KL(Φ_0{p} ∥ Φ_0{q_θ}) − KL(Φ_{δt}{p} ∥ Φ_{δt}{q_θ}) = L(0, θ) − L(δt, θ) ≈ −δt ∂L(t, θ)/∂t |_{t=0} = −δt ∂/∂t KL(Φ_t{p} ∥ Φ_t{q_θ}) |_{t=0}.
If the derivative of the KL contraction with respect to t is easier to work with than the KL contraction itself (e.g., Section 4.5), we can fix δt and equivalently maximize the derivative, which is the Type II MKC objective function:
argmax_θ ∂/∂t KL(Φ_t{p} ∥ Φ_t{q_θ}) |_{t=0}. (8)

- Type III: In the case where we have access to a set of different KL contraction operators, {Φ_1, ..., Φ_m}, we can implement the minimum KL contraction principle by finding the optimal θ that minimizes their average KL contraction, as
argmin_θ (1/m) Σ_{i=1}^m (KL(p∥q_θ) − β_i KL(Φ_i{p} ∥ Φ_i{q_θ})). (9)
As each KL contraction in the sum is nonnegative, Eq.(9) is zero if and only if each KL contraction is zero. If the consistency of p and q_θ with respect to Φ_i corresponds to certain constraints on q_θ, the objective function Eq.(9) represents the consistency of all such constraints. In some special cases, minimizing Eq.(9) to zero over a sufficient number of certain types of KL contraction operators may indeed ensure equality of p and q_θ (e.g., Section 4.6).

4.1 Fitting a Gaussian Model with a KL Contraction Operator from a Gaussian Distribution

We first describe an instance of MKC learning under a very simple setting, where we approximate a distribution p(x), for x ∈ ℝ, with known mean μ_0 and variance σ_0², by a Gaussian model q_θ whose mean and variance are the parameters to be estimated, θ = (μ, σ²). Using the strict KL contraction operator Φ^c_T constructed with the Gaussian conditional distribution
T(y|x) = (1/√(2πσ_1²)) exp(−(y − x)²/(2σ_1²)),
with known variance σ_1², we form the Type I MKC objective function.
In this simple case, Eq.(7) reduces to a closed-form objective function:
argmin_{μ,σ²} σ_0²/(2σ²) − (σ_0² + σ_1²)/(2(σ² + σ_1²)) + (1/2) log (σ²/(σ² + σ_1²)) + σ_1²(μ − μ_0)²/(2σ²(σ² + σ_1²)),
whose optimal solution, μ = μ_0 and σ² = σ_0², is obtained by direct differentiation. The detailed derivation of this result is omitted due to the limit of space. Note that the optimal parameters do not depend on the parameter in the KL contraction operator (in this case, σ_1²), and are the same as those obtained by minimizing the KL divergence between p and q_θ, or equivalently, maximizing the log likelihood, when samples from p(x) are used to approximate the expectation.

4.2 Relation with Contrastive Divergence [12]

Next, we consider the general strict KL contraction operator Φ^c_{T_θ} constructed from a conditional distribution T_θ(y|x), for x, y ∈ ℝ^d, of which the parametric model q_θ is a fixed point, i.e., q_θ(y) = ∫_{ℝ^d} T_θ(y|x) q_θ(x) dx = Φ^c_{T_θ}{q_θ}(y). In other words, q_θ is the equilibrium distribution of the Markov chain whose transition distribution is given by T_θ(y|x). The Type I minimum KL contraction objective function, Eq.(7), for p, q_θ ∈ Ω_d under Φ^c_{T_θ} is
argmin_θ KL(p∥q_θ) − KL(Φ^c_{T_θ}{p} ∥ Φ^c_{T_θ}{q_θ}) = argmin_θ KL(p∥q_θ) − KL(p_θ∥q_θ),
where p_θ is shorthand for Φ^c_{T_θ}{p}. Note that this is the objective function of contrastive divergence learning [12]. However, the dependency of p_θ on θ makes this objective function difficult to optimize. By ignoring this dependency, the practical parameter update in contrastive divergence only approximately follows the gradient of this objective function [5].

4.3 Relation with Partial Likelihood [7] and Non-local Contrastive Objectives [31]

Next, we consider the Type I MKC objective function, Eq.(7), combined with the KL contraction operator constructed from lumping. Using Lemma 4, we have
argmin_θ KL(p∥q_θ) − KL(Φ^l_S{p} ∥ Φ^l_S{q_θ}) = argmin_θ Σ_{i=1}^m P^S_i KL(p̃_i ∥ q̃^θ_i) = argmax_θ Σ_{i=1}^m P^S_i ∫_{x∈S_i} p̃_i(x) log q̃^θ_i(x) dx ≈ argmax_θ (1/n) Σ_{k=1}^n Σ_{i=1}^m 1[x^{(k)}∈S_i] P^S_i log q̃^θ_i(x^{(k)}),
where {x^{(1)}, ..., x^{(n)}} are samples from p(x). Minimizing the KL contraction in this case is equivalent to maximizing a weighted sum of the log likelihoods of the probability distributions formed by restricting the overall model to subsets of the state space. The last step resembles the partial likelihood objective function [7], which was recently rediscovered in the context of discriminative learning as non-local contrastive objectives [31]. In [31], the partitions are required to overlap with each other, while the above result shows that non-overlapping partitions of ℝ^d can also be used for non-ML parameter learning.

4.4 Relation with Noise-Contrastive Estimation [11]

Next, we consider the Type I MKC objective function, Eq.(7), combined with the strict KL contraction operator constructed from the binary mixture operation (Lemma 3). In particular, we simplify Eq.(7) using the definition of Φ^b_g, as:
argmin_θ KL(p∥q_θ) − (1/π_1) KL(Φ^b_g{p} ∥ Φ^b_g{q_θ})
= argmin_θ (1/π_1) ∫_{ℝ^d} (π_0 g(x) + π_1 p(x)) log (π_0 g(x) + π_1 q_θ(x)) dx − ∫_{ℝ^d} p(x) log q_θ(x) dx
= argmax_θ ∫_{ℝ^d} p(x) log [π_1 q_θ(x)/(π_0 g(x) + π_1 q_θ(x))] dx + (π_0/π_1) ∫_{ℝ^d} g(x) log [π_0 g(x)/(π_0 g(x) + π_1 q_θ(x))] dx.
When the expectations in the above objective function are approximated with averages over samples from p(x) and g(x), {x^{(1)}, ..., x^{(n_+)}} and {y^{(1)}, ..., y^{(n_−)}}, the Type I MKC objective function in this case reduces to
argmax_θ (1/n_+) Σ_{k=1}^{n_+} log [π_1 q_θ(x^{(k)})/(π_0 g(x^{(k)}) + π_1 q_θ(x^{(k)}))] + (π_0/π_1)(1/n_−) Σ_{k=1}^{n_−} log [π_0 g(y^{(k)})/(π_0 g(y^{(k)}) + π_1 q_θ(y^{(k)}))].
If we set π_0 = π_1 = 1/2, and treat {x^{(1)}, ..., x^{(n_+)}} and {y^{(1)}, ..., y^{(n_−)}} as data of interest and noise, respectively, the above objective function can also be interpreted as minimizing the Bayesian classification error between data and noise, which is the objective function of noise-contrastive estimation [11].

4.5 Relation with Score Matching [14]

Next, we consider the strict KL contraction operator Φ^c_{T_t} constructed from an isotropic Gaussian conditional distribution with a time-decaying variance (i.e., a Gaussian diffusion process):
T_t(y|x) = (1/(2πt)^{d/2}) exp(−∥y − x∥²/(2t)),
where t ∈ [0, ∞) is the continuous temporal index. Note that we have Φ^c_{T_0}{p} = p for any p ∈ Ω_d. If both p(x) and q_θ(x) are differentiable with respect to x, it is known that the temporal derivative of the KL contraction of p and q_θ under Φ^c_{T_t} has a closed form, which is formally stated in the following result.

Lemma 5 (Lyu [25]) For any two distributions p, q ∈ Ω_d differentiable with respect to x, we have
d/dt KL(Φ^c_{T_t}{p} ∥ Φ^c_{T_t}{q}) = −(1/2) ∫_{ℝ^d} Φ^c_{T_t}{p}(x) ∥∇_x Φ^c_{T_t}{p}(x)/Φ^c_{T_t}{p}(x) − ∇_x Φ^c_{T_t}{q}(x)/Φ^c_{T_t}{q}(x)∥² dx, (10)
where ∇_x is the gradient operator with respect to x. Setting t = 0 in Eq.(10), we obtain a closed form for the Type II MKC objective function, Eq.(8), which can be further simplified [14], as
argmax_θ d/dt KL(Φ_t{p} ∥ Φ_t{q_θ}) |_{t=0} = argmin_θ ∫_{ℝ^d} p(x) ∥∇_x p(x)/p(x) − ∇_x q_θ(x)/q_θ(x)∥² dx
= argmin_θ ∫_{ℝ^d} p(x) (∥∇_x log q_θ(x)∥² + 2Δ_x log q_θ(x)) dx ≈ argmin_θ (1/n) Σ_{k=1}^n (∥∇_x log q_θ(x^{(k)})∥² + 2Δ_x log q_θ(x^{(k)})),
where {x^{(1)}, ..., x^{(n)}} are samples from p(x), and Δ_x is the Laplacian operator with respect to x. The last step is the objective function of score matching learning [14].

4.6 Relation with Conditional Composite Likelihood [24] and Pseudo-Likelihood [3]

Next, we consider the Type I MKC objective function, Eq.(7), combined with the KL contraction operator Φ^m_A constructed from marginalization. According to Lemma 2, we have
argmin_θ KL(p∥q_θ) − KL(Φ^m_A{p} ∥ Φ^m_A{q_θ}) = argmax_θ ∫_{ℝ^d} p(x) log q_{\A|A}(x_{\A}|x_A) dx ≈ argmax_θ (1/n) Σ_{k=1}^n log q_{\A|A}(x^{(k)}_{\A}|x^{(k)}_A),
where, in the last step, the expectation over p(x) is replaced with an average over samples {x^{(1)}, ..., x^{(n)}} from p(x). This corresponds to the objective function in maximum conditional likelihood [17] or maximum mutual information [2], which are non-ML learning objectives for discriminative learning of high-dimensional probabilistic data models. However, Lemma 2 also shows that KL(p∥q_θ) − KL(Φ^m_A{p} ∥ Φ^m_A{q_θ}) = 0 is not sufficient to guarantee p = q_θ.
Alternatively, we can use the Type III MKC objective function, Eq.(9), to combine KL contraction operators formed from marginalizations over m different index subsets A_1, ..., A_m:
argmin_θ KL(p∥q_θ) − (1/m) Σ_{i=1}^m KL(Φ^m_{A_i}{p} ∥ Φ^m_{A_i}{q_θ}) ≈ argmax_θ (1/m) Σ_{i=1}^m (1/n) Σ_{k=1}^n log q_{\A_i|A_i}(x^{(k)}_{\A_i}|x^{(k)}_{A_i}).
This is the objective function in conditional composite likelihood [24, 30, 23, 1] (also rediscovered under the name piecewise learning in [26]). A special case of conditional composite likelihood arises when A_i = \{i}; the resulting marginalization operator, Φ^m_{\{i}}, is known as the ith singleton marginalization operator. With the d different singleton marginalization operators, we can rewrite the objective function as
KL(p∥q_θ) − (1/d) Σ_{i=1}^d KL(Φ^m_{\{i}}{p} ∥ Φ^m_{\{i}}{q_θ}) = (1/d) Σ_{i=1}^d ∫ p_{\i}(x_{\i}) KL(p_{i|\i}(·|x_{\i}) ∥ q_{i|\i}(·|x_{\i})) dx_{\i}.
Note that in this case, the average KL contraction is zero if and only if p(x) and q_θ(x) agree on every singleton conditional distribution, i.e., p_{i|\i}(x_i|x_{\i}) = q_{i|\i}(x_i|x_{\i}) for all i and x. According to Brook's Lemma [4], the latter condition is sufficient for p(x) = q_θ(x) (a.e.). Furthermore, when approximating the expectations with averages over samples from p(x), we have
argmin_θ KL(p∥q_θ) − (1/d) Σ_{i=1}^d KL(Φ^m_{\{i}}{p} ∥ Φ^m_{\{i}}{q_θ}) ≈ argmax_θ (1/n) Σ_{k=1}^n (1/d) Σ_{i=1}^d log q_{i|\i}(x^{(k)}_i|x^{(k)}_{\i}),
which is the objective function in maximum pseudo-likelihood learning [3, 29].

4.7 Relation with Marginal Composite Likelihood

We now consider combining the Type III MKC objective function, Eq.(9), with the KL contraction operator constructed from the marginal grafting operation. Specifically, with m different KL contraction operators constructed from marginal grafting on index subsets A_1, ..., A_m, using Lemma 2, we can expand the corresponding Type III minimum KL contraction objective function as:
argmin_θ KL(p∥q_θ) − (1/m) Σ_{i=1}^m KL(Φ^g_{p,A_i}{p} ∥ Φ^g_{p,A_i}{q_θ}) = argmin_θ (1/m) Σ_{i=1}^m KL(p_{A_i}(x_{A_i}) ∥ q_{A_i}(x_{A_i})) = argmax_θ (1/m) Σ_{i=1}^m ∫ p_{A_i}(x_{A_i}) log q_{A_i}(x_{A_i}) dx_{A_i} ≈ argmax_θ (1/n) Σ_{k=1}^n (1/m) Σ_{i=1}^m log q_{A_i}(x^{(k)}_{A_i}).
The last step, which maximizes the log likelihood of a set of marginal distributions on the training data, corresponds to the objective function of marginal composite likelihood [30]. With m = 1, the resulting objective is used in maximum marginal likelihood, or Type-II likelihood, learning [9].

5 Discussion

In this work, based on an information geometric view of parameter learning, we have described minimum KL contraction as a unifying principle for non-ML parameter learning, showing that the objective functions of several existing non-ML parameter learning methods can all be understood as instantiations of this principle with different KL contraction operators. There are several directions in which we would like to extend the current work. First, the proposed minimum KL contraction framework may be further generalized using the more general f-divergence [8], of which the KL divergence is a special case. With the more general framework, we hope to reveal further relations among other types of non-ML learning objectives [16, 25, 28, 27]. Second, in the current work, we have focused on the idealization of parametric learning as matching probability distributions. In practice, learning is most often performed on a finite data set with an unknown underlying distribution. In such cases, asymptotic properties of the estimation as data volume increases, such as consistency, become essential. While many non-ML learning methods covered in this work have been shown to be consistent individually, the unification based on minimum KL contraction may provide a general condition for such asymptotic properties.
Last, understanding different existing non-ML learning objectives through minimizing KL contraction also provides a principled approach to devising new non-ML learning methods, by seeking new KL contraction operators or new combinations of existing KL contraction operators.

Acknowledgement The author would like to thank Jascha Sohl-Dickstein, Michael DeWeese and Michael Gutmann for helpful discussions on an early version of this work. This work is supported by the National Science Foundation under the CAREER Award Grant No. 0953373.

References
[1] Arthur U. Asuncion, Qiang Liu, Alexander T. Ihler, and Padhraic Smyth. Learning with blocks: Composite likelihood and contrastive divergence. In AISTATS, 2010.
[2] L. Bahl, P. Brown, P. de Souza, and R. Mercer. Maximum mutual information estimation of hidden Markov model parameters for speech recognition. In ICASSP, 1986.
[3] J. Besag. Statistical analysis of non-lattice data. The Statistician, 24:179–195, 1975.
[4] D. Brook. On the distinction between the conditional probability and the joint probability approaches in the specification of nearest-neighbor systems. Biometrika, 51(3/4):481–483, 1964.
[5] M. Á. Carreira-Perpiñán and G. E. Hinton. On contrastive divergence learning. In AISTATS, 2005.
[6] T. Cover and J. Thomas. Elements of Information Theory. Wiley-Interscience, 2nd edition, 2006.
[7] D. R. Cox. Partial likelihood. Biometrika, 62(2):269–276, 1975.
[8] I. Csiszár and P. C. Shields. Information theory and statistics: A tutorial. Foundations and Trends in Communications and Information Theory, 1(4):417–528, 2004.
[9] I. J. Good. The Estimation of Probabilities: An Essay on Modern Bayesian Methods. MIT Press, 1965.
[10] M. Gutmann and J. Hirayama. Bregman divergence as general framework to estimate unnormalized statistical models. In UAI, 2011.
[11] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 2010.
[12] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[13] P. J. Huber. Projection pursuit. The Annals of Statistics, 13(2):435–475, 1985.
[14] A. Hyvärinen. Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, 6:695–709, 2005.
[15] A. Hyvärinen. Connections between score matching, contrastive divergence, and pseudo-likelihood for continuous-valued variables. IEEE Transactions on Neural Networks, 18(5):1529–1531, 2007.
[16] A. Hyvärinen. Some extensions of score matching. Computational Statistics & Data Analysis, 51:2499–2512, 2007.
[17] T. Jebara and A. Pentland. Maximum conditional likelihood via bound maximization and the CEM algorithm. In NIPS, 1998.
[18] R. Kindermann and J. L. Snell. Markov Random Fields and Their Applications. American Mathematical Society, 1980.
[19] E. Kreyszig. Introductory Functional Analysis with Applications. Wiley, 1989.
[20] S. L. Lauritzen. Statistical manifolds. In Differential Geometry in Statistical Inference, pages 163–216, 1987.
[21] L. Le Cam. Maximum likelihood: an introduction. ISI Review, 58(2):153–171, 1990.
[22] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang. Tutorial on energy-based learning. In Predicting Structured Data. MIT Press, 2006.
[23] P. Liang and M. I. Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In ICML, 2008.
[24] B. G. Lindsay. Composite likelihood methods. Contemporary Mathematics, 80(1):22–39, 1988.
[25] S. Lyu. Interpretation and generalization of score matching. In UAI, 2009.
[26] A. McCallum, C. Pal, G. Druck, and X. Wang. Multi-conditional learning: Generative/discriminative training for clustering and classification. In AAAI, 2006.
[27] M. Pihlaja, M. Gutmann, and A. Hyvärinen. A family of computationally efficient and simple estimators for unnormalized statistical models. In UAI, 2010.
[28] J. Sohl-Dickstein, P. Battaglino, and M. DeWeese. Minimum probability flow learning. In ICML, 2011.
[29] D. Strauss and M. Ikeda. Pseudolikelihood estimation for social networks. Journal of the American Statistical Association, 85:204–212, 1990.
[30] C. Varin and P. Vidoni. A note on composite likelihood inference and model selection. Biometrika, 92(3):519–528, 2005.
[31] D. Vickrey, C. Lin, and D. Koller. Non-local contrastive objectives. In ICML, 2010.
Select and Sample — A Model of Efficient Neural Inference and Learning Jacquelyn A. Shelton, Jörg Bornschein, Abdul-Saboor Sheikh Frankfurt Institute for Advanced Studies Goethe-University Frankfurt, Germany {shelton,bornschein,sheikh}@fias.uni-frankfurt.de Pietro Berkes Volen Center for Complex Systems Brandeis University, Boston, USA berkes@brandeis.edu Jörg Lücke Frankfurt Institute for Advanced Studies Goethe-University Frankfurt, Germany luecke@fias.uni-frankfurt.de

Abstract

An increasing number of experimental studies indicate that perception encodes a posterior probability distribution over possible causes of sensory stimuli, which is used to act close to optimally in the environment. One outstanding difficulty with this hypothesis is that the exact posterior will in general be too complex to be represented directly, and thus neurons will have to represent an approximation of this distribution. Two influential proposals of efficient posterior representation by neural populations are: 1) neural activity represents samples of the underlying distribution, or 2) they represent a parametric representation of a variational approximation of the posterior. We show that these approaches can be combined for an inference scheme that retains the advantages of both: it is able to represent multiple modes and arbitrary correlations, a feature of sampling methods, and it reduces the represented space to regions of high probability mass, a strength of variational approximations.
Neurally, the combined method can be interpreted as a feed-forward preselection of the relevant state space, followed by a neural dynamics implementation of Markov Chain Monte Carlo (MCMC) to approximate the posterior over the relevant states. We demonstrate the effectiveness and efficiency of this approach on a sparse coding model.
In numerical experiments on artificial data and image patches, we compare the performance of the algorithms to that of exact EM, variational state space selection alone, MCMC alone, and the combined select and sample approach. The select and sample approach integrates the advantages of the sampling and variational approximations, and forms a robust, neurally plausible, and very efficient model of processing and learning in cortical networks. For sparse coding we show applications easily exceeding a thousand observed and a thousand hidden dimensions.

1 Introduction

According to the recently quite influential statistical approach to perception, our brain represents not only the most likely interpretation of a stimulus, but also its corresponding uncertainty. In other words, ideally the brain would represent the full posterior distribution over all possible interpretations of the stimulus, which is statistically optimal for inference and learning [1, 2, 3] – a hypothesis supported by an increasing number of psychophysical and electrophysiological results [4, 5, 6, 7, 8, 9].
This interpretation is consistent with a range of experimental observations, including neural variability (which would result from the uncertainty in the posterior) and spontaneous activity (corresponding to samples from the prior in the absence of a stimulus) [3, 9]. The advantage of using sampling is that the number of neurons scales linearly with the number of variables, and it can represent arbitrarily complex posterior distributons given enough samples. The latter part is the issue: collecting a sufficient number of samples to form such a complex, high-dimensional representation is quite time-costly. Modeling studies have shown that a small number of samples are sufficient to perform well on low-dimensional tasks (intuitively, this is because taking a lowdimensional marginal of the posterior accumulates samples over all dimensions) [14, 15]. However, most sensory data is inherently very high-dimensional. As such, in order to faithfully represent visual scenes containing potentially many objects and object parts, one requires a high-dimensional latent space to represent the high number of potential causes, which returns to the problem sampling approaches face in high dimensions. The goal of the line of research pursued here is to address the following questions: 1) can we find a sophisticated representation of the posterior for very high-dimensional hidden spaces? 2) as this goal is believed to be shared by the brain, can we find a biologically plausible solution reaching it? In this paper we propose a novel approach to approximate inference and learning that addresses the drawbacks of sampling as a neural processing model, yet maintains its beneficial posterior representation and neural plausibility. We show that sampling can be combined with a preselection of candidate units. 
Such a selection connects sampling to the influential models of neural processing that emphasize feed-forward processing ([16, 17] and many more), and is consistent with the popular view of neural processing and learning as an interplay between feed-forward and recurrent stages of processing [18, 19, 20, 21, 12]. Our combined approach emerges naturally by interpreting feedforward selection and sampling as approximations to exact inference in a probabilistic framework for perception. 2 A Select and Sample Approach to Approximate Inference Inference and learning in neural circuits can be regarded as the task of inferring the true hidden causes of a stimulus. An example is inferring the objects in a visual scene based on the image projected on the retina. We will refer to the sensory stimulus (the image) as a data point, ⃗y = (y1, . . . , yD), and we will refer to the hidden causes (the objects) as ⃗s = (s1, . . . , sH) with sh denoting hidden variable or hidden unit h. The data distribution can then be modeled by a generative data model: p(⃗y | Θ) = P ⃗s p(⃗y |⃗s, Θ) p(⃗s | Θ) with Θ denoting the parameters of the model1. If we assume that the data distribution can be optimally modeled by the generative distribution for optimal parameters Θ∗, then the posterior probability p(⃗s | ⃗y, Θ∗) represents optimal inference given a data point ⃗y. The parameters Θ∗given a set of N data points Y = {⃗y1, . . . , ⃗yN} are given by the maximum likelihood parameters Θ∗= argmaxΘ{p(Y | Θ)}. A standard procedure to find the maximum likelihood solution is expectation maximization (EM). EM iteratively optimizes a lower bound of the data likelihood by inferring the posterior distribution over hidden variables given the current parameters (the E-step), and then adjusting the parameters to maximize the likelihood of the data averaged over this posterior (the M-step). 
The M-step updates typically depend only on a small number of expectation values of the posterior, given by

$$\langle g(\vec{s}) \rangle_{p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)} = \sum_{\vec{s}} p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)\, g(\vec{s}), \qquad (1)$$

where $g(\vec{s})$ is usually an elementary function of the hidden variables (e.g., $g(\vec{s}) = \vec{s}$ or $g(\vec{s}) = \vec{s}\vec{s}^{\,T}$ in the case of standard sparse coding). For any non-trivial generative model, the computation of the expectation values (1) is the computationally demanding part of EM optimization. Their exact computation is often intractable and many well-known algorithms (e.g., [22, 23]) rely on estimations. The EM iterations can be associated with neural processing by the assumption that neural activity represents the posterior over hidden variables (E-step), and that synaptic plasticity implements changes to model parameters (M-step). Here we will consider two prominent models of neural processing on the ground of approximations to the expectation values (1) and show how they can be combined.

Selection. Feed-forward processing has frequently been discussed as an important component of neural processing [16, 24, 17, 25]. One perspective on this early component of neural activity is as a preselection of candidate units or hypotheses for a given sensory stimulus ([18, 21, 26, 19] and many more), with the goal of reducing the computational demand of an otherwise too complex computation. In the context of probabilistic approaches, it has recently been shown that preselection can be formulated as a variational approximation to exact inference [27].

¹ In the case of continuous variables the sum is replaced by an integral. For a hierarchical model, the prior distribution $p(\vec{s}\,|\,\Theta)$ may be subdivided hierarchically into different sets of variables.
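To make the cost of exact computation concrete, Eqn. (1) can be evaluated by brute force for binary latents by enumerating all $2^H$ states. A minimal sketch with hypothetical names (`log_joint` stands in for $\log p(\vec{s}, \vec{y}\,|\,\Theta)$, `g` for the function whose expectation is needed):

```python
import itertools
import numpy as np

def exact_expectation(log_joint, g, H):
    """Exact posterior expectation <g(s)> of Eqn (1), obtained by
    enumerating all 2^H binary states -- tractable only for small H."""
    states = [np.array(b) for b in itertools.product([0, 1], repeat=H)]
    log_w = np.array([log_joint(s) for s in states])
    w = np.exp(log_w - log_w.max())        # unnormalized p(s, y), stabilized
    w /= w.sum()                           # normalize -> posterior p(s | y)
    return sum(wi * g(s) for wi, s in zip(w, states))
```

With a uniform joint, each unit is on with probability 1/2, so $\langle \vec{s}\,\rangle = (0.5, \ldots, 0.5)$; the exponential sum over states is exactly what the preselection discussed next truncates.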
The variational distribution in this case is given by a truncated sum over possible hidden states:

$$p(\vec{s}\,|\,\vec{y}^{(n)},\Theta) \;\approx\; q_n(\vec{s};\Theta) = \frac{p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)}{\sum_{\vec{s}\,' \in \mathcal{K}_n} p(\vec{s}\,'\,|\,\vec{y}^{(n)},\Theta)}\,\delta(\vec{s} \in \mathcal{K}_n) = \frac{p(\vec{s},\vec{y}^{(n)}\,|\,\Theta)}{\sum_{\vec{s}\,' \in \mathcal{K}_n} p(\vec{s}\,',\vec{y}^{(n)}\,|\,\Theta)}\,\delta(\vec{s} \in \mathcal{K}_n), \qquad (2)$$

where $\delta(\vec{s} \in \mathcal{K}_n) = 1$ if $\vec{s} \in \mathcal{K}_n$ and zero otherwise. The subset $\mathcal{K}_n$ represents the preselected latent states. Given a data point $\vec{y}^{(n)}$, Eqn. 2 results in good approximations to the posterior if $\mathcal{K}_n$ contains most of the posterior mass. Since for many applications the posterior mass is concentrated in small volumes of the state space, the approximation quality can stay high even for relatively small sets $\mathcal{K}_n$. This approximation can be used to efficiently compute the expectation values needed in the M-step (1):

$$\langle g(\vec{s})\rangle_{p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)} \;\approx\; \langle g(\vec{s})\rangle_{q_n(\vec{s};\Theta)} = \frac{\sum_{\vec{s} \in \mathcal{K}_n} p(\vec{s},\vec{y}^{(n)}\,|\,\Theta)\, g(\vec{s})}{\sum_{\vec{s}\,' \in \mathcal{K}_n} p(\vec{s}\,',\vec{y}^{(n)}\,|\,\Theta)}. \qquad (3)$$

Eqn. 3 represents a reduction in required computational resources as it involves only summations (or integrations) over the smaller state space $\mathcal{K}_n$. The requirement is that the set $\mathcal{K}_n$ needs to be selected prior to the computation of the expectation values, and the final improvement in efficiency relies on such selections being efficiently computable. As such, a selection function $\mathcal{S}_h(\vec{y},\Theta)$ needs to be carefully chosen in order to define $\mathcal{K}_n$; $\mathcal{S}_h(\vec{y},\Theta)$ efficiently selects the candidate units $s_h$ that are most likely to have contributed to a data point $\vec{y}^{(n)}$. $\mathcal{K}_n$ can then be defined by:

$$\mathcal{K}_n = \{\vec{s}\,|\,\text{for all } h \notin I : s_h = 0\}, \qquad (4)$$

where $I$ contains the $H'$ indices $h$ with the highest values of $\mathcal{S}_h(\vec{y},\Theta)$ (compare Fig. 1). For sparse coding models, for instance, we can exploit the fact that the posterior mass lies close to low-dimensional subspaces to define the sets $\mathcal{K}_n$ [27, 28], and appropriate $\mathcal{S}_h(\vec{y},\Theta)$ can be found by deriving efficiently computable upper bounds for the probabilities $p(s_h = 1\,|\,\vec{y}^{(n)},\Theta)$ [27, 28] or by derivations based on taking limits for no data noise [27, 29]. For more complex models, see [27] (Sec.
5.3-4) for a discussion of suitable selection functions. Often the precise form of $\mathcal{S}_h(\vec{y},\Theta)$ has limited influence on the final approximation accuracy because a) its values are not used for the approximation (3) itself, and b) the size of the sets $\mathcal{K}_n$ can often be chosen generously enough to easily contain the regions with large posterior mass. The larger $\mathcal{K}_n$, the less precise the selection has to be. For $\mathcal{K}_n$ equal to the entire state space, no selection is required and the approximations (2) and (3) fall back to the case of exact inference.

Sampling. An alternative way to approximate the expectation values in Eqn. 1 is to sample from the posterior distribution and use the samples to compute the average:

$$\langle g(\vec{s})\rangle_{p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)} \;\approx\; \frac{1}{M}\sum_{m=1}^{M} g(\vec{s}^{(m)}) \quad \text{with } \vec{s}^{(m)} \sim p(\vec{s}\,|\,\vec{y},\Theta). \qquad (5)$$

The challenging aspect of this approach is to efficiently draw samples from the posterior. In a high-dimensional sample space, this is mostly done by Markov Chain Monte Carlo (MCMC). This class of methods draws samples from the posterior distribution such that each subsequent sample is drawn relative to the current state, and the resulting sequence of samples forms a Markov chain. In the limit of a large number of samples, Monte Carlo methods are theoretically able to represent any probability distribution. However, the number of samples required in high-dimensional spaces can be very large (Fig. 1A, sampling).

Figure 1: A Simplified illustration of the posterior mass and the respective regions each approximation approach uses to compute the expectation values (panels contrast exact EM, MAP estimation, preselection, sampling, and select and sample).
B Graphical model showing each connection $W_{dh}$ between the observed variables $\vec{y}$ and hidden variables $\vec{s}$, and how $H' = 2$ hidden variables/units are selected to form a set $\mathcal{K}_n$. C Graphical model resulting from the selection of hidden variables and associated weights $W_{dh}$ (black).

Select and Sample. Although preselection is a deterministic approach very different from the stochastic nature of sampling, its formulation as an approximation to the expectation values (3) allows for a straightforward combination of both approaches: given a data point $\vec{y}^{(n)}$, we first approximate the expectation value (3) using the variational distribution $q_n(\vec{s};\Theta)$ as defined by preselection (2). Second, we approximate the expectations w.r.t. $q_n(\vec{s};\Theta)$ using sampling. The combined approach is thus given by:

$$\langle g(\vec{s})\rangle_{p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)} \;\approx\; \langle g(\vec{s})\rangle_{q_n(\vec{s};\Theta)} \;\approx\; \frac{1}{M}\sum_{m=1}^{M} g(\vec{s}^{(m)}) \quad \text{with } \vec{s}^{(m)} \sim q_n(\vec{s};\Theta), \qquad (6)$$

where $\vec{s}^{(m)}$ denote samples from the truncated distribution $q_n$. Instead of drawing from a distribution over the entire state space, approximation (6) requires only samples from a potentially very small subspace $\mathcal{K}_n$ (Fig. 1). In the subspace $\mathcal{K}_n$, most of the original probability mass is concentrated in a smaller volume, so MCMC algorithms perform more efficiently: there is a smaller space to explore, shorter burn-in times, and a reduced number of required samples. Compared to selection alone, the select and sample approach represents an increase in efficiency as soon as the number of samples required for a good approximation is less than the number of states in $\mathcal{K}_n$.

3 Sparse Coding: An Example Application

We systematically investigate the computational efficiency, performance, and biological plausibility of the select and sample approach in comparison with selection and sampling alone, using a sparse coding model of images. The choice of a sparse coding model has numerous advantages.
First, it is a non-trivial model that has been extremely well studied in machine learning research, and for which efficient algorithms exist (e.g., [23, 30]). Second, it has become a standard (albeit somewhat simplistic) model of the organization of receptive fields in primary visual cortex [22, 31, 32]. Here we consider a discrete variant of this model known as Binary Sparse Coding (BSC; [29, 27], also compare [33]), which has binary hidden variables but otherwise the same features as standard sparse coding versions. The generative model for BSC is expressed by

$$p(\vec{s}\,|\,\pi) = \prod_{h=1}^{H} \pi^{s_h} (1-\pi)^{1-s_h}, \qquad p(\vec{y}\,|\,\vec{s},W,\sigma) = \mathcal{N}(\vec{y};\, W\vec{s},\, \sigma^2 \mathbb{1}), \qquad (7)$$

where $W \in \mathbb{R}^{D \times H}$ denotes the basis vectors and $\pi$ parameterizes the sparsity ($\vec{s}$ and $\vec{y}$ as above). The M-step updates of the BSC learning algorithm (see e.g. [27]) are given by:

$$W^{\mathrm{new}} = \Big(\sum_{n=1}^{N} \vec{y}^{(n)} \langle \vec{s}\,\rangle_{q_n}^{T}\Big)\Big(\sum_{n=1}^{N} \langle \vec{s}\vec{s}^{\,T}\rangle_{q_n}\Big)^{-1}, \qquad (8)$$

$$(\sigma^2)^{\mathrm{new}} = \frac{1}{ND}\sum_{n} \big\langle \|\vec{y}^{(n)} - W\vec{s}\,\|^2 \big\rangle_{q_n}, \qquad \pi^{\mathrm{new}} = \frac{1}{N}\sum_{n} |\langle \vec{s}\,\rangle_{q_n}|, \quad \text{where } |\vec{x}| = \frac{1}{H}\sum_h x_h. \qquad (9)$$

The only expectation values needed for the M-step are thus $\langle \vec{s}\,\rangle_{q_n}$ and $\langle \vec{s}\vec{s}^{\,T}\rangle_{q_n}$. We will compare learning and inference between the following algorithms:

BSCexact. An EM algorithm without approximations is obtained if we use the exact posterior for the expectations: $q_n = p(\vec{s}\,|\,\vec{y}^{(n)},\Theta)$. We will refer to this exact algorithm as BSCexact. Although directly computable, the expectation values for BSCexact require sums over the entire state space, i.e., over $2^H$ terms. For large numbers of latent dimensions, BSCexact is thus intractable.

BSCselect. An algorithm that scales more efficiently with the number of hidden dimensions is obtained by applying preselection. For the BSC model we use $q_n$ as given in (3) and $\mathcal{K}_n = \{\vec{s}\,|\,(\text{for all } h \notin I : s_h = 0) \text{ or } \sum_h s_h = 1\}$. Note that in addition to states as in (4) we include all states with one non-zero unit (all singletons). Including them avoids EM iterations in the initial phases of learning that leave some basis functions unmodified (see [27]).
As selection function $\mathcal{S}_h(\vec{y}^{(n)})$ to define $\mathcal{K}_n$ we use:

$$\mathcal{S}_h(\vec{y}^{(n)}) = \big(\vec{W}_h^{\,T} / \|\vec{W}_h\|\big)\, \vec{y}^{(n)}, \quad \text{with } \|\vec{W}_h\| = \sqrt{\textstyle\sum_{d=1}^{D} (W_{dh})^2}. \qquad (10)$$

A large value of $\mathcal{S}_h(\vec{y}^{(n)})$ strongly indicates that $\vec{y}^{(n)}$ contains the basis function $\vec{W}_h$ as a component (see Fig. 1C). Note that (10) can be related to a deterministic ICA-like selection of a hidden state $\vec{s}^{(n)}$ in the limit case of no noise (compare [27]). Further restrictions of the state space are possible but require modified M-step equations (see [27, 29]), which will not be considered here.

BSCsample. An alternative, non-deterministic approach can be derived using Gibbs sampling. Gibbs sampling is an MCMC algorithm which systematically explores the sample space by repeatedly drawing samples from the conditional distributions of the individual hidden dimensions. In other words, the transition probability from the current sample to a new candidate sample is given by $p(s_h^{\mathrm{new}}\,|\,\vec{s}^{\,\mathrm{current}}_{\setminus h})$. In our case of a binary sample space, this equates to selecting one random axis $h \in \{1, \ldots, H\}$ and toggling its bit value (thereby changing the binary state in that dimension), leaving the remaining axes unchanged. Specifically, the posterior probability computed for each candidate sample is expressed by:

$$p(s_h = 1\,|\,\vec{s}_{\setminus h}, \vec{y}) = \frac{p(s_h = 1, \vec{s}_{\setminus h}, \vec{y})^{\beta}}{p(s_h = 0, \vec{s}_{\setminus h}, \vec{y})^{\beta} + p(s_h = 1, \vec{s}_{\setminus h}, \vec{y})^{\beta}}, \qquad (11)$$

where we have introduced a parameter $\beta$ that allows for smoothing of the posterior distribution. To ensure an appropriate mixing behavior of the MCMC chains over a wide range of $\sigma$ (note that $\sigma$ is a model parameter that changes with learning), we define $\beta = \frac{T}{\sigma^2}$, where $T$ is a temperature parameter that is set manually and selected such that good mixing is achieved. The samples drawn in this manner can then be used to approximate the expectation values in (8) to (9) using (5).

BSCs+s. The EM learning algorithm given by combining selection and sampling is obtained by applying (6).
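The two ingredients just defined, the selection function (10) and the smoothed Gibbs update (11), can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' code: `log_joint` stands in for $\log p(\vec{s}, \vec{y}\,|\,\Theta)$, and the sweep visits every unit in order rather than toggling one random axis per step, a common variant of the scheme described in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_units(W, y, H_sel):
    """Selection function (Eqn 10): overlap of each normalized basis
    vector W_h with the data point y; returns the H_sel best units."""
    scores = (W / np.linalg.norm(W, axis=0)).T @ y   # S_h(y) for all h
    return np.argsort(scores)[-H_sel:]               # indices of top H_sel

def gibbs_sweep(s, log_joint, beta=1.0):
    """One Gibbs sweep over binary units (Eqn 11): each bit is resampled
    from the beta-smoothed conditional, with beta = T / sigma^2."""
    s = s.copy()
    for h in range(len(s)):
        lp = np.empty(2)
        for v in (0, 1):
            s[h] = v
            lp[v] = beta * log_joint(s)              # beta * log p(s_h=v, s_\h, y)
        # Eqn (11): p(s_h=1 | ...) = 1 / (1 + exp(lp[0] - lp[1]))
        s[h] = int(rng.random() < 1.0 / (1.0 + np.exp(lp[0] - lp[1])))
    return s
```

In select and sample, `select_units` first fixes the subspace and `gibbs_sweep` is then run only on the selected coordinates.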
First note that inserting the BSC generative model into (2) results in:

$$q_n(\vec{s};\Theta) = \frac{\mathcal{N}(\vec{y};\, W\vec{s},\, \sigma^2\mathbb{1})\; \mathrm{Bernoulli}_{\mathcal{K}_n}(\vec{s};\pi)}{\sum_{\vec{s}\,' \in \mathcal{K}_n} \mathcal{N}(\vec{y};\, W\vec{s}\,',\, \sigma^2\mathbb{1})\; \mathrm{Bernoulli}_{\mathcal{K}_n}(\vec{s}\,';\pi)}\,\delta(\vec{s} \in \mathcal{K}_n), \qquad (12)$$

where $\mathrm{Bernoulli}_{\mathcal{K}_n}(\vec{s};\pi) = \prod_{h \in I} \pi^{s_h}(1-\pi)^{1-s_h}$. The remainder of the Bernoulli distribution cancels out. If we define $\tilde{\vec{s}}$ to be the binary vector consisting of all entries of $\vec{s}$ in the selected dimensions, and if $\tilde{W} \in \mathbb{R}^{D \times H'}$ contains all basis functions of those selected, we observe that the distribution is equal to the posterior w.r.t. a BSC model with $H'$ instead of $H$ hidden dimensions:

$$p(\tilde{\vec{s}}\,|\,\vec{y},\Theta) = \frac{\mathcal{N}(\vec{y};\, \tilde{W}\tilde{\vec{s}},\, \sigma^2\mathbb{1})\; \mathrm{Bernoulli}(\tilde{\vec{s}};\pi)}{\sum_{\tilde{\vec{s}}\,'} \mathcal{N}(\vec{y};\, \tilde{W}\tilde{\vec{s}}\,',\, \sigma^2\mathbb{1})\; \mathrm{Bernoulli}(\tilde{\vec{s}}\,';\pi)}.$$

Instead of drawing samples from $q_n(\vec{s};\Theta)$, we can thus draw samples from the exact posterior w.r.t. the BSC generative model with $H'$ dimensions. The sampling procedure of BSCsample can thus be applied simply by ignoring the non-selected dimensions and their associated parameters. For different data points, different latent dimensions will be selected, such that averaging over data points can update all model parameters. For selection we again use $\mathcal{S}_h(\vec{y},\Theta)$ (10), defining $\mathcal{K}_n$ as in (4), where $I$ now contains the $H'-2$ indices $h$ with the highest values of $\mathcal{S}_h(\vec{y},\Theta)$ and two randomly selected dimensions (drawn from a uniform distribution over all non-selected dimensions). The two randomly selected dimensions fulfill the same purpose as the inclusion of singleton states for BSCselect. Preselection and Gibbs sampling on the selected dimensions define an approximation to the required expectation values (3) and result in an EM algorithm referred to as BSCs+s.

Complexity. Collecting the number of operations necessary to compute the expectation values for all four BSC cases, we arrive at

$$\mathcal{O}\Big(N\, S\, \big(\underbrace{D}_{p(\vec{s},\vec{y})} + \underbrace{1}_{\langle \vec{s}\,\rangle} + \underbrace{H}_{\langle \vec{s}\vec{s}^T\rangle}\big)\Big), \qquad (13)$$

where $S$ denotes the number of hidden states that contribute to the calculation of the expectation values.
For the approaches with preselection (BSCselect, BSCs+s), all the calculations of the expectation values can be performed on the reduced latent space; therefore $H$ is replaced by $H'$. For BSCexact this number scales exponentially in $H$: $S_{\mathrm{exact}} = 2^H$, and in the BSCselect case it scales exponentially in the number of preselected hidden variables: $S_{\mathrm{select}} = 2^{H'}$. However, for the sampling-based approaches (BSCsample and BSCs+s), the number $S$ directly corresponds to the number of samples to be evaluated and is obtained empirically. As we will show later, $S_{\mathrm{s+s}} = 200 \times H'$ is a reasonable choice for the interval of $H'$ that we investigate in this paper ($1 \le H' \le 40$).

4 Numerical Experiments

We compare the select and sample approach with selection and sampling applied individually on different data sets: artificial images and natural image patches. For all experiments using the two sampling approaches, we draw 20 independent chains that are initialized at random states in order to increase the mixing of the samples. Of the samples drawn per chain, 1/3 were used as burn-in samples and 2/3 were retained.

Artificial data. Our first set of experiments investigates the convergence properties of the select and sample approach on artificial data sets where ground truth is available. As the following experiments were run on a small-scale problem, we can compute the exact data likelihood at each EM step for all four algorithms (BSCexact, BSCselect, BSCsample and BSCs+s) to compare convergence against the ground-truth likelihood.

Figure 2: Experiments using artificial bars data with $H = 12$, $D = 6 \times 6$. Dotted line indicates the ground-truth log-likelihood value. A Random selection of the $N = 2000$ training data points $\vec{y}^{(n)}$. B Learned basis functions $W_{dh}$ after a successful training run. C Development of the log-likelihood over a period of 50 EM steps for all 4 investigated algorithms.
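The state counts entering the complexity estimate (13) can be checked numerically. The concrete values $H = 12$, $H' = 6$ and 200 samples per selected dimension are illustrative; they match the small-scale bars setting described below:

```python
# State counts S for each algorithm; H = 12, H' = 6 and 200 samples
# per dimension are illustrative values matching the bars experiment.
H, H_sel, samples = 12, 6, 200

S_exact  = 2 ** H                    # BSCexact: full binary state space
S_select = 2 ** H_sel + (H - H_sel)  # BSCselect: K_n plus the extra singletons
S_sample = samples * H               # BSCsample: 200 samples x H dimensions
S_ss     = samples * H_sel           # BSCs+s:   200 samples x H' dimensions

print(S_exact, S_select, S_sample, S_ss)  # -> 4096 70 2400 1200
```

The gap between `S_exact` and `S_ss` is what makes the combined approach applicable at large $H$.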
Data for these experiments consisted of images generated by creating $H = 12$ basis functions $\vec{W}_h^{\mathrm{gt}}$ in the form of horizontal and vertical bars on a $D = 6 \times 6 = 36$ pixel grid. Each bar was randomly assigned to be either positive ($W^{\mathrm{gt}}_{dh} \in \{0.0, 10.0\}$) or negative ($W^{\mathrm{gt}}_{dh} \in \{-10.0, 0.0\}$). $N = 2000$ data points $\vec{y}^{(n)}$ were generated by linearly combining these basis functions (see e.g., [34]). Using a sparseness value of $\pi^{\mathrm{gt}} = \frac{2}{H}$ resulted in, on average, two active bars per data point. According to the model, we added Gaussian noise ($\sigma^{\mathrm{gt}} = 2.0$) to the data (Fig. 2A). We applied all algorithms to the same dataset and monitored the exact likelihood over a period of 50 EM steps (Fig. 2C). Although the calculation of the exact likelihood requires $\mathcal{O}(N 2^H (D + H))$ operations, this is feasible for such a small-scale problem. For the models using preselection (BSCselect and BSCs+s), we set $H'$ to 6, effectively halving the number of hidden variables participating in the calculation of the expectation values. For BSCsample and BSCs+s we drew 200 samples from the posterior $p(\vec{s}\,|\,\vec{y}^{(n)})$ of each data point, so that the number of evaluated states totaled $S_{\mathrm{sample}} = 200 \times H = 2400$ and $S_{\mathrm{s+s}} = 200 \times H' = 1200$, respectively. To ensure an appropriate mixing behavior, the annealing temperature was set to $T = 50$. In each experiment the basis functions were initialized at the data mean plus Gaussian noise, the prior probability to $\pi^{\mathrm{init}} = \frac{1}{H}$, and the data noise to the variance of the data. All algorithms recover the correct set of basis functions in more than 50% of the trials, and recover the sparseness prior $\pi$ and the data noise $\sigma$ with high accuracy. Comparing the computational costs of the algorithms shows the benefits of preselection already for this small-scale problem: while BSCexact evaluates the expectation values using the full set of $2^H = 4096$ hidden states, BSCselect only considers $2^{H'} + (H - H') = 70$ states. The pure sampling approach performs 2400 evaluations, while BSCs+s requires 1200 evaluations.
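The bars data generation just described can be sketched as follows. This is a hypothetical reimplementation (function names are ours); bar magnitudes, sparsity, and noise level follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bars_data(N=2000, grid=6, sigma=2.0):
    """Generate the artificial bars data set: H = 2*grid bar-shaped
    basis functions, each randomly +10 or -10, combined with
    pi = 2/H (on average two active bars) plus Gaussian noise (Eqn 7)."""
    H, D = 2 * grid, grid * grid
    pi = 2.0 / H
    W_img = np.zeros((H, grid, grid))
    for i in range(grid):
        W_img[i, i, :] = rng.choice([-10.0, 10.0])         # horizontal bar
        W_img[grid + i, :, i] = rng.choice([-10.0, 10.0])  # vertical bar
    W = W_img.reshape(H, D).T                              # D x H basis matrix
    S = (rng.random((N, H)) < pi).astype(float)            # binary causes
    Y = S @ W.T + sigma * rng.standard_normal((N, D))      # noisy observations
    return Y, W, S
```

Each generated image is a superposition of the active bars, matching the setting of Fig. 2A.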
Image patches. We test the select and sample approach on natural image data at a more challenging scale, to include biological plausibility in the demonstration of its applicability to larger-scale problems. We extracted $N = 40{,}000$ patches of size $D = 26 \times 26 = 676$ pixels from the van Hateren image database [31]², and preprocessed them using a Difference of Gaussians (DoG) filter, which approximates the sensitivity of center-on and center-off neurons found in the early stages of mammalian visual processing. Filter parameters were chosen as in [35, 28]. For the following experiments we ran 100 EM iterations to ensure proper convergence. The annealing temperature was set to $T = 20$.

Figure 3: Experiments on image patches with $D = 26 \times 26$, $H = 800$ and $H' = 20$. A Random selection of used patches (after DoG preprocessing). B Random selection of learned basis functions (number of samples set to 200). C Final approximate log-likelihood after 100 EM steps vs. number of samples per data point. D Number of states that had to be evaluated for the different approaches.

The first series of experiments investigates the effect of the number of drawn samples on the performance of the algorithm (as measured by the approximate data likelihood) across the entire range of $H'$ values between 12 and 36. We observe with BSCs+s that 200 samples per hidden dimension (total states $= 200 \times H'$) are sufficient: the final value of the likelihood after 100 EM steps begins to saturate, and increasing the number of samples does not increase the likelihood by more than 1%. In Fig. 3C we report the curve for $H' = 20$, but the same trend is observed for all other values of $H'$. In another set of experiments, we used this number of samples ($200 \times H$) in the pure sampling case (BSCsample) in order to monitor the likelihood behavior.
We observed two consistent trends: 1) the algorithm never converged to a high-likelihood solution, and 2) even when initialized at solutions with high likelihood, the likelihood always decreased. This example demonstrates the gains of select and sample over pure sampling: while BSCs+s only needs $200 \times 20 = 4{,}000$ samples to robustly reach high-likelihood solutions, following the same regime with BSCsample, the algorithm not only converged poorly on a high-likelihood solution, but used $200 \times 800 = 160{,}000$ samples to do so (Fig. 3D).

Large-scale experiment on image patches. Comparison of the above results shows that the most efficient algorithm is obtained by a combination of preselection and sampling, our select and sample approach (BSCs+s), with no or only minimal effect on the performance of the algorithm, as depicted in Figs. 2 and 3. This efficiency allows for applications to much larger-scale problems than would be possible with the individual approximation approaches. To demonstrate the efficiency of the combined approach we applied BSCs+s to the same image dataset, but with a very high number of observed and hidden dimensions. We extracted from the database $N = 500{,}000$ patches of size $D = 40 \times 40 = 1{,}600$ pixels. BSCs+s was applied with the number of hidden units set to $H = 1{,}600$ and with $H' = 34$. Using the same conditions as in the previous experiments (notably $S = 200 \times H' = 6{,}800$ samples and 100 EM iterations) we again obtain a set of Gabor-like basis functions (see Fig. 4A) with relatively few necessary states (Fig. 4B). To our knowledge, the presented results illustrate the largest application of sparse coding with a reasonably complete representation of the posterior.
5 Discussion

We have introduced a novel and efficient method for unsupervised learning in probabilistic models, one which maintains a complex representation of the posterior for problems consistent with real-world scales. Furthermore, our approach is biologically plausible and models how the brain can make sense of its environment for large-scale sensory inputs. Specifically, the method could be implemented in neural networks using two mechanisms, both of which have been independently suggested in the context of a statistical framework for perception: feed-forward preselection [27], and sampling [12, 13, 3]. We showed that the two seemingly contrasting approaches can be combined based on their interpretation as approximate inference methods, resulting in a considerable increase in computational efficiency (e.g., Figs. 3-4). We used a sparse coding model of natural images, a standard model for neural response properties in V1 [22, 31], in order to investigate, both numerically and analytically, the applicability and efficiency of the method. Comparisons of our approach with exact inference, selection alone, and sampling alone showed a very favorable scaling with the number of observed and hidden dimensions.

Figure 4: A Large-scale application of BSCs+s with $H' = 34$ to image patches ($D = 40 \times 40 = 1600$ pixels and $H = 1600$ hidden dimensions). A random selection of the inferred basis functions is shown (see Suppl. for all basis functions and model parameters). B Comparison of the computational complexity: BSCselect scales exponentially with $H'$ whereas BSCs+s scales linearly. Note the large difference at $H' = 34$ as used in A.

² We restricted the set of images to 900 images without man-made structures (see Fig. 3A). The brightest 2% of the pixels were clamped to the maximal value of the remaining 98% (reducing the influence of light reflections).
To the best of our knowledge, the only other sparse coding implementation that reached a comparable problem size ($D = 20 \times 20$, $H = 2{,}000$) assumed a Laplace prior and used a MAP estimation of the posterior [23]. However, with MAP estimations, basis functions have to be rescaled (compare [22]), and data noise or prior parameters cannot be inferred (instead a regularizer is hand-set). Our method does not require any of these artificial mechanisms because of its rich posterior representation. Such representations are, furthermore, crucial for inferring all parameters such as data noise and sparsity (learned in all of our experiments), and to act correctly when faced with uncertain input [2, 8, 3]. Concretely, we used a sparse coding model with binary latent variables. This allowed for a systematic comparison with exact EM for low-dimensional problems, but an extension to the continuous case should be straightforward. In the model, the selection step results in a simple, local and neurally plausible integration of input data, given by (10). We used this in combination with Gibbs sampling, which is also neurally plausible because neurons can individually sample their next state based on the current state of the other neurons, as transmitted through recurrent connections [15]. The idea of combining sampling with feed-forward mechanisms has previously been explored, but in other contexts and with different goals. Work by Beal [36] used variational approximations as proposal distributions within importance sampling, and Tu and Zhu [37] guided a Metropolis-Hastings algorithm by a data-driven proposal distribution. Both approaches are different from selecting subspaces prior to sampling and are more difficult to link to neural feed-forward sweeps [18, 21]. We expect the select and sample strategy to be widely applicable to machine learning models whenever the posterior probability mass can be expected to be concentrated in a small subspace of the whole latent space.
Using more sophisticated preselection mechanisms and sampling schemes could lead to a further reduction in computational effort, although the details will in general depend on the particular model and input data.

Acknowledgements. We acknowledge funding by the German Research Foundation (DFG) in the project LU 1196/4-1 (JL), by the German Federal Ministry of Education and Research (BMBF), project 01GQ0840 (JAS, JB, ASS), and by the Swartz Foundation and the Swiss National Science Foundation (PB). Furthermore, support by the Physics Dept. and the Center for Scientific Computing (CSC) in Frankfurt is acknowledged.

References

[1] P. Dayan and L. F. Abbott. Theoretical Neuroscience. MIT Press, Cambridge, 2001.
[2] R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki. Probabilistic Models of the Brain: Perception and Neural Function. MIT Press, 2002.
[3] J. Fiser, P. Berkes, G. Orban, and M. Lengyel. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14:119–130, 2010.
[4] M. D. Ernst and M. S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415:419–433, 2002.
[5] Y. Weiss, E. P. Simoncelli, and E. H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5:598–604, 2002.
[6] K. P. Kording and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427:244–247, 2004.
[7] J. M. Beck, W. J. Ma, R. Kiani, T. Hanks, A. K. Churchland, J. Roitman, M. N. Shadlen, P. E. Latham, and A. Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6), 2008.
[8] J. Trommershäuser, L. T. Maloney, and M. S. Landy. Decision making, movement planning and statistical decision theory. Trends in Cognitive Sciences, 12:291–297, 2008.
[9] P. Berkes, G. Orban, M. Lengyel, and J. Fiser. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science, 331(6013):83–87, 2011.
[10] W. J. Ma, J. M.
Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9:1432–1438, 2006.
[11] R. Turner, P. Berkes, and J. Fiser. Learning complex tasks with probabilistic population codes. In Frontiers in Neuroscience, 2011. Comp. and Systems Neuroscience 2011.
[12] T. S. Lee and D. Mumford. Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A, 20(7):1434–1448, 2003.
[13] P. O. Hoyer and A. Hyvärinen. Interpreting neural response variability as Monte Carlo sampling from the posterior. In Adv. Neur. Inf. Proc. Syst. 16, pages 293–300. MIT Press, 2003.
[14] E. Vul, N. D. Goodman, T. L. Griffiths, and J. B. Tenenbaum. One and done? Optimal decisions from very few samples. In 31st Annual Meeting of the Cognitive Science Society, 2009.
[15] P. Berkes, R. Turner, and J. Fiser. The army of one (sample): the characteristics of sampling-based probabilistic neural representations. In Frontiers in Neuroscience, 2011. Comp. and Systems Neuroscience 2011.
[16] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 1958.
[17] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, 1999.
[18] V. A. F. Lamme and P. R. Roelfsema. The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11):571–579, 2000.
[19] A. Yuille and D. Kersten. Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences, 10(7):301–308, 2006.
[20] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The 'wake-sleep' algorithm for unsupervised neural networks. Science, 268:1158–1161, 1995.
[21] E. Körner, M. O. Gewaltig, U. Körner, A. Richter, and T. Rodemann. A model of computation in neocortical architecture. Neural Networks, 12:989–1005, 1999.
[22] B. A. Olshausen and D. J. Field.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[23] H. Lee, A. Battle, R. Raina, and A. Ng. Efficient sparse coding algorithms. NIPS, 20:801–808, 2007.
[24] Y. LeCun. Backpropagation applied to handwritten zip code recognition.
[25] M. Riesenhuber and T. Poggio. How visual cortex recognizes objects: The tale of the standard model. 2002.
[26] T. S. Lee and D. Mumford. Hierarchical Bayesian inference in the visual cortex. J Opt Soc Am A Opt Image Sci Vis, 20(7):1434–1448, July 2003.
[27] J. Lücke and J. Eggert. Expectation truncation and the benefits of preselection in training generative models. Journal of Machine Learning Research, 2010.
[28] G. Puertas, J. Bornschein, and J. Lücke. The maximal causes of natural scenes are edge filters. NIPS, 23, 2010.
[29] M. Henniges, G. Puertas, J. Bornschein, J. Eggert, and J. Lücke. Binary sparse coding. Latent Variable Analysis and Signal Separation, 2010.
[30] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11, 2010.
[31] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc Biol Sci, 265(1394):359–366, 1998.
[32] D. L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. J Neurophysiol, 88:455–463, 2002.
[33] M. Haft, R. Hofman, and V. Tresp. Generative binary codes. Pattern Anal Appl, 6(4):269–284, 2004.
[34] P. O. Hoyer. Non-negative sparse coding. Neural Networks for Signal Processing XII: Proceedings of the IEEE Workshop, pages 557–565, 2002.
[35] J. Lücke. Receptive field self-organization in a model of the fine structure in V1 cortical columns. Neural Computation, 2009.
[36] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference.
PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[37] Z. Tu and S. C. Zhu. Image segmentation by data-driven Markov chain Monte Carlo. PAMI, 24(5):657–673, 2002.
Clustered Multi-Task Learning Via Alternating Structure Optimization

Jiayu Zhou, Jianhui Chen, Jieping Ye
Computer Science and Engineering, Arizona State University, Tempe, AZ 85287
{jiayu.zhou, jianhui.chen, jieping.ye}@asu.edu

Abstract

Multi-task learning (MTL) learns multiple related tasks simultaneously to improve generalization performance. Alternating structure optimization (ASO) is a popular MTL method that learns a shared low-dimensional predictive structure on hypothesis spaces from multiple related tasks. It has been applied successfully in many real-world applications. As an alternative MTL approach, clustered multi-task learning (CMTL) assumes that multiple tasks follow a clustered structure, i.e., tasks are partitioned into a set of groups where tasks in the same group are similar to each other, and that such a clustered structure is unknown a priori. The objectives in ASO and CMTL differ in how multiple tasks are related. Interestingly, we show in this paper the equivalence relationship between ASO and CMTL, providing significant new insights into ASO and CMTL as well as their inherent relationship. The CMTL formulation is non-convex, and we adopt a convex relaxation to the CMTL formulation. We further establish the equivalence relationship between the proposed convex relaxation of CMTL and an existing convex relaxation of ASO, and show that the proposed convex CMTL formulation is significantly more efficient, especially for high-dimensional data. In addition, we present three algorithms for solving the convex CMTL formulation. We report experimental results on benchmark datasets to demonstrate the efficiency of the proposed algorithms.

1 Introduction

Many real-world problems involve multiple related classification/regression tasks. A naive approach is to apply single task learning (STL), where each task is solved independently and thus the task relatedness is not exploited.
Recently, there has been growing interest in multi-task learning (MTL), where we learn multiple related tasks simultaneously by extracting appropriate shared information across tasks. In MTL, multiple tasks are expected to benefit from each other, resulting in improved generalization performance. The effectiveness of MTL has been demonstrated empirically [1, 2, 3] and theoretically [4, 5, 6]. MTL has been applied in many applications including biomedical informatics [7], marketing [1], natural language processing [2], and computer vision [3]. Many different MTL approaches have been proposed in the past; they differ in how the relatedness among different tasks is modeled. Evgeniou et al. [8] proposed the regularized MTL which constrained the models of all tasks to be close to each other. The task relatedness can also be modeled by constraining multiple tasks to share a common underlying structure [4, 6, 9, 10]. Ando and Zhang [5] proposed a structural learning formulation, which assumed multiple predictors for different tasks shared a common structure on the underlying predictor space. For linear predictors, they proposed the alternating structure optimization (ASO) that simultaneously performed inference on multiple tasks and discovered the shared low-dimensional predictive structure. ASO has been shown to be effective in many practical applications [2, 11, 12]. One limitation of the original ASO formulation is that it involves a non-convex optimization problem and a globally optimal solution is not guaranteed. A convex relaxation of ASO called CASO was proposed and analyzed in [13]. Many existing MTL formulations are based on the assumption that all tasks are related. In practical applications, the tasks may exhibit a more sophisticated group structure where the models of tasks from the same group are closer to each other than those from a different group. There has been much prior work along this line of research, known as clustered multi-task learning (CMTL).
In [14], the mutual relatedness of tasks was estimated and knowledge of one task could be transferred to other tasks in the same cluster. Bakker and Heskes [15] used clustered multi-task learning in a Bayesian setting by considering a mixture of Gaussians instead of single Gaussian priors. Evgeniou et al. [8] proposed the task clustering regularization and showed how cluster information could be encoded in MTL; however, the group structure was required to be known a priori. Xue et al. [16] introduced the Dirichlet process prior which automatically identified subgroups of related tasks. In [17], a clustered MTL framework was proposed that simultaneously identified clusters and performed multi-task inference. Because the formulation is non-convex, they also proposed a convex relaxation to obtain a global optimum [17]. Wang et al. [18] used a similar idea to consider clustered tasks by introducing an inter-task regularization. The objective in CMTL differs from many MTL formulations (e.g., ASO, which aims to identify a shared low-dimensional predictive structure for all tasks) that are based on the standard assumption that each task can learn equally well from any other task. In this paper, we study the inherent relationship between these two seemingly different MTL formulations. Specifically, we establish the equivalence relationship between ASO and a specific formulation of CMTL, which performs simultaneous multi-task learning and task clustering: First, we show that CMTL performs clustering on the tasks, while ASO performs projection on the features to find a shared low-rank structure. Next, we show that the spectral relaxation of the clustering (on tasks) in CMTL and the projection (on the features) in ASO lead to an identical regularization, related to the negative Ky Fan k-norm of the weight matrix involving all task models, thus establishing their equivalence relationship.
The presented analysis provides significant new insights into ASO and CMTL as well as their inherent relationship. To the best of our knowledge, the clustering view of ASO has not been explored before. One major limitation of the ASO/CMTL formulation is that it involves a non-convex optimization, as the negative Ky Fan k-norm is concave. We propose a convex relaxation of CMTL, and establish the equivalence relationship between the proposed convex relaxation of CMTL and the convex ASO formulation proposed in [13]. We show that the proposed convex CMTL formulation is significantly more efficient especially for high-dimensional data. We further develop three algorithms for solving the convex CMTL formulation, based on block coordinate descent, accelerated projected gradient, and gradient descent, respectively. We have conducted experiments on benchmark datasets including School and Sarcos; our results demonstrate the efficiency of the proposed algorithms. Notation: Throughout this paper, Rd denotes the d-dimensional Euclidean space. I denotes the identity matrix of a proper size. N denotes the set of natural numbers. Sm + denotes the set of symmetric positive semi-definite matrices of size m by m. A ⪯ B means that B − A is positive semi-definite. tr (X) is the trace of X. 2 Multi-Task Learning: ASO and CMTL Assume we are given a multi-task learning problem with m tasks; each task i ∈ Nm is associated with a set of training data {(xi 1, yi 1), . . . , (xi ni, yi ni)} ⊂ Rd × R, and a linear predictive function fi: fi(xi j) = wT i xi j, where wi is the weight vector of the i-th task, d is the data dimensionality, and ni is the number of samples of the i-th task. We denote W = [w1, . . . , wm] ∈ Rd×m as the weight matrix to be estimated. Given a loss function ℓ(·, ·), the empirical risk is given by: L(W) = Σm i=1 (1/ni) Σni j=1 ℓ(wT i xi j, yi j). We study the following multi-task learning formulation: minW L(W) + Ω(W), where Ω encodes our prior knowledge about the m tasks.
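The empirical risk L(W) above is straightforward to compute. A minimal sketch (our own illustrative code, with a squared loss chosen for ℓ, which the paper leaves generic):

```python
import numpy as np

def empirical_risk(W, X_list, y_list):
    """L(W) = sum_i (1/n_i) sum_j loss(w_i^T x_ij, y_ij), with a squared loss."""
    total = 0.0
    for i, (X, y) in enumerate(zip(X_list, y_list)):
        residuals = X @ W[:, i] - y        # task-i predictions minus targets
        total += np.mean(residuals ** 2)   # (1/n_i) * sum of squared losses
    return total

rng = np.random.default_rng(0)
d, m = 5, 3
X_list = [rng.standard_normal((10, d)) for _ in range(m)]
W = rng.standard_normal((d, m))
y_list = [X @ W[:, i] for i, X in enumerate(X_list)]   # noiseless targets
assert empirical_risk(W, X_list, y_list) < 1e-12       # true W has zero risk
```

The full formulation then adds a regularizer Ω(W) to this risk, as in the displayed objective.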
Next, we review ASO and CMTL and explore their inherent relationship. 2.1 Alternating structure optimization In ASO [5], all tasks are assumed to share a common feature space Θ ∈ Rh×d, where h ≤ min(m, d) is the dimensionality of the shared feature space and Θ has orthonormal rows, i.e., ΘΘT = Ih. The predictive function of ASO is: fi(xi j) = wT i xi j = uT i xi j + vT i Θxi j, where the weight wi = ui + ΘT vi consists of two components including the weight ui for the high-dimensional feature space and the weight vi for the low-dimensional space based on Θ. ASO minimizes the following objective function: L(W) + α Σm i=1 ∥ui∥2 2, subject to: ΘΘT = Ih, where α is the regularization parameter for task relatedness. We can further improve the formulation by including a penalty, β Σm i=1 ∥wi∥2 2, to improve the generalization performance as in traditional supervised learning. Since ui = wi − ΘT vi, we obtain the following ASO formulation: min W,{vi},Θ:ΘΘT =Ih L(W) + Σm i=1 (α∥wi − ΘT vi∥2 2 + β∥wi∥2 2). (1) 2.2 Clustered multi-task learning In CMTL, we assume that the tasks are clustered into k < m clusters, and the index set of the j-th cluster is defined as Ij = {v | v ∈ cluster j}. We denote the mean of the j-th cluster to be ¯wj = (1/nj) Σv∈Ij wv. For a given W = [w1, · · · , wm], the sum-of-square error (SSE) function in K-means clustering is given by [19, 20]: Σk j=1 Σv∈Ij ∥wv − ¯wj∥2 2 = tr W T W − tr F T W T WF, (2) where the matrix F ∈ Rm×k is an orthogonal cluster indicator matrix with Fi,j = 1/√nj if i ∈ Ij and Fi,j = 0 otherwise. If we ignore the special structure of F and keep the orthogonality requirement only, the relaxed SSE minimization problem is: min F :F T F =Ik tr W T W − tr F T W T WF, (3) resulting in the following penalty function for CMTL: ΩCMTL0(W, F) = α (tr W T W − tr F T W T WF) + β tr W T W, (4) where the first term is derived from the K-means clustering objective and the second term is to improve the generalization performance. Combining Eq.
(4) with the empirical error term L(W), we obtain the following CMTL formulation: min W,F :F T F =Ik L(W) + ΩCMTL0(W, F). (5) 2.3 Equivalence of ASO and CMTL In the ASO formulation in Eq. (1), it is clear that the optimal vi is given by v∗ i = Θwi. Thus, the penalty in ASO has the following equivalent form: ΩASO(W, Θ) = Σm i=1 (α∥wi − ΘT Θwi∥2 2 + β∥wi∥2 2) = α (tr W T W − tr W T ΘT ΘW) + β tr W T W, (6) resulting in the following equivalent ASO formulation: min W,Θ:ΘΘT =Ih L(W) + ΩASO(W, Θ). (7) The penalty of the ASO formulation in Eq. (7) looks very similar to the penalty of the CMTL formulation in Eq. (5); however, the operations involved are fundamentally different. In the CMTL formulation in Eq. (5), the matrix F operates on the task dimension, as it is derived from the K-means clustering on the tasks; while in the ASO formulation in Eq. (7), the matrix Θ operates on the feature dimension, as it aims to identify a shared low-dimensional predictive structure for all tasks. Although different in their mathematical formulations, we show in the following theorem that the objectives of CMTL and ASO are equivalent. Theorem 2.1. The objectives of CMTL in Eq. (5) and ASO in Eq. (7) are equivalent if the cluster number, k, in K-means equals the size, h, of the shared low-dimensional feature space. Proof. Denote Q(W) = L(W) + (α + β) tr W T W, with α, β > 0. Then, CMTL and ASO solve the following optimization problems: min W,F :F T F =Ik Q(W) − α tr WFF T W T , min W,Θ:ΘΘT =Ih Q(W) − α tr W T ΘT ΘW, respectively. Note that in both CMTL and ASO, the first term Q is independent of F or Θ, for a given W. Thus, the optimal F and Θ for these two optimization problems are given by solving: [CMTL] max F :F T F =Ik tr WFF T W T , [ASO] max Θ:ΘΘT =Ik tr W T ΘT ΘW.
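The two relaxed maximization problems above can be verified numerically: as the proof argues next via the Ky Fan theorem, both attain the sum of the top-k eigenvalues of W T W. A small standalone check (our own code; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 6, 4, 2
W = rng.standard_normal((d, m))

# CMTL side: the optimal F holds the top-k eigenvectors of the m x m matrix W^T W.
evals_m, evecs_m = np.linalg.eigh(W.T @ W)        # eigenvalues in ascending order
F = evecs_m[:, -k:]                                # m x k, satisfies F^T F = I_k
cmtl_val = np.trace(W @ F @ F.T @ W.T)

# ASO side: the rows of the optimal Theta are top-k eigenvectors of the d x d W W^T.
evals_d, evecs_d = np.linalg.eigh(W @ W.T)
Theta = evecs_d[:, -k:].T                          # k x d, satisfies Theta Theta^T = I_k
aso_val = np.trace(W.T @ Theta.T @ Theta @ W)

ky_fan = np.sum(np.sort(evals_m)[-k:])             # sum of the top-k eigenvalues
assert np.allclose([cmtl_val, aso_val], ky_fan)
```

Both values coincide because W W^T and W^T W share the same nonzero eigenvalues, which is exactly the observation the proof uses.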
Since WW T and W T W share the same set of nonzero eigenvalues, by the Ky Fan Theorem [21], both problems above achieve exactly the same maximum objective value: ∥W T W∥(k) = Σk i=1 λi(W T W), where λi(W T W) denotes the i-th largest eigenvalue of W T W and ∥W T W∥(k) is known as the Ky Fan k-norm of the matrix W T W. Plugging the results back into the original objective, the optimization problem for both CMTL and ASO becomes minW Q(W) − α∥W T W∥(k). This completes the proof of this theorem. 3 Convex Relaxation of CMTL The formulation in Eq. (5) is non-convex. A natural approach is to perform a convex relaxation on CMTL. We first reformulate the penalty in Eq. (5) as follows: ΩCMTL0(W, F) = α tr W((1 + η)I − FF T )W T , (8) where η is defined as η = β/α > 0. Since F T F = Ik, the following holds: (1 + η)I − FF T = η(1 + η)(ηI + FF T )−1. Thus, we can reformulate ΩCMTL0 in Eq. (8) in the following equivalent form: ΩCMTL1(W, F) = αη(1 + η) tr W(ηI + FF T )−1W T , (9) resulting in the following equivalent CMTL formulation: min W,F :F T F =Ik L(W) + ΩCMTL1(W, F). (10) Following [13, 17], we obtain the following convex relaxation of Eq. (10), called cCMTL: min W,M L(W) + ΩcCMTL(W, M) s.t. tr (M) = k, M ⪯ I, M ∈ Sm +, (11) where ΩcCMTL(W, M) is defined as: ΩcCMTL(W, M) = αη(1 + η) tr W(ηI + M)−1W T . (12) The optimization problem in Eq. (11) is jointly convex with respect to W and M [9]. 3.1 Equivalence of cASO and cCMTL A convex relaxation (cASO) of the ASO formulation in Eq. (7) has been proposed in [13]: min W,S L(W) + ΩcASO(W, S) s.t. tr (S) = h, S ⪯ I, S ∈ Sd +, (13) where ΩcASO is defined as: ΩcASO(W, S) = αη(1 + η) tr W T (ηI + S)−1W . (14) The cASO formulation in Eq. (13) and the cCMTL formulation in Eq. (11) differ in the regularization components: the respective Hessians of the regularization with respect to W are different. Similar to Theorem 2.1, our analysis shows that cASO and cCMTL are equivalent. Theorem 3.1. The objectives of the cCMTL formulation in Eq.
(11) and the cASO formulation in Eq. (13) are equivalent if the cluster number, k, in K-means equals the size, h, of the shared low-dimensional feature space. Proof. Define the following two convex functions of W: gcCMTL(W) = min M tr W(ηI + M)−1W T , s.t. tr (M) = k, M ⪯ I, M ∈ Sm +, (15) and gcASO(W) = min S tr W T (ηI + S)−1W , s.t. tr (S) = h, S ⪯ I, S ∈ Sd +. (16) The cCMTL and cASO formulations can be expressed as unconstrained optimization problems w.r.t. W: [cCMTL] min W L(W) + c · gcCMTL(W), [cASO] min W L(W) + c · gcASO(W), where c = αη(1 + η). Let h = k ≤ min(d, m). Next, we show that for a given W, gcCMTL(W) = gcASO(W) holds. Let W = Q1ΣQT 2 , M = P1Λ1P T 1 , and S = P2Λ2P T 2 , be the SVD of W, M, and S (M and S are symmetric positive semi-definite), respectively, where Σ = diag{σ1, σ2, . . . , σm}, Λ1 = diag{λ(1) 1 , λ(1) 2 , . . . , λ(1) m }, and Λ2 = diag{λ(2) 1 , λ(2) 2 , . . . , λ(2) m }. Let q < k be the rank of Σ. It follows from the basic properties of the trace that: tr W(ηI + M)−1W T = tr (ηI + Λ1)−1P T 1 Q2Σ2QT 2 P1 . The problem in Eq. (15) is thus equivalent to: min P1,Λ1 tr (ηI + Λ1)−1P T 1 Q2Σ2QT 2 P1 , s.t. P1P T 1 = I, P T 1 P1 = I, Σm i=1 λ(1) i = k. (17) It can be shown that the optimal P ∗ 1 is given by P ∗ 1 = Q2 and the optimal Λ∗ 1 is given by solving the following simple (convex) optimization problem [13]: Λ∗ 1 = argmin Λ1 Σq i=1 σ2 i /(η + λ(1) i ), s.t. Σq i=1 λ(1) i = k, 0 ≤ λ(1) i ≤ 1. (18) It follows that gcCMTL(W) = tr (ηI + Λ∗ 1)−1Σ2 . Similarly, we can show that gcASO(W) = tr (ηI + Λ∗ 2)−1Σ2 , where Λ∗ 2 = argmin Λ2 Σq i=1 σ2 i /(η + λ(2) i ), s.t. Σq i=1 λ(2) i = h, 0 ≤ λ(2) i ≤ 1. It is clear that when h = k, Λ∗ 1 = Λ∗ 2 holds. Therefore, we have gcCMTL(W) = gcASO(W). This completes the proof. Remark 3.2. In the functional of cASO in Eq. (16) the variable to be optimized is S ∈ Sd +, while in the functional of cCMTL in Eq. (15) the optimization variable is M ∈ Sm +.
In many practical MTL problems the data dimensionality d is much larger than the task number m, and in such cases cCMTL is significantly more efficient in terms of both time and space. The equivalence relationship established in Theorem 3.1 thus provides an (equivalent) efficient implementation of cASO, especially for high-dimensional problems. 4 Optimization Algorithms In this section, we propose to employ three different methods, i.e., the Alternating Optimization Method (altCMTL), the Accelerated Projected Gradient Method (apgCMTL), and the Direct Gradient Descent Method (graCMTL), for solving the convex relaxation in Eq. (11). Note that we focus on smooth loss functions in this paper. 4.1 Alternating Optimization Method The Alternating Optimization Method (altCMTL) is similar to the Block Coordinate Descent (BCD) method [22], in which each variable is optimized alternately with the other variables fixed. The pseudo-code of altCMTL is provided in the supplemental material. Note that using techniques similar to the ones from [23], we can show that altCMTL finds the globally optimal solution to Eq. (11). The altCMTL algorithm involves the following two steps in each iteration: Optimization of W For a fixed M, the optimal W can be obtained via solving: min W L(W) + c tr W(ηI + M)−1W T . (19) The problem above is smooth and convex. It can be solved using gradient-type methods [22]. In the special case of a least squares loss function, the problem in Eq. (19) admits an analytic solution. Optimization of M For a fixed W, the optimal M can be obtained via solving: min M tr W(ηI + M)−1W T , s.t. tr (M) = k, M ⪯ I, M ∈ Sm +. (20) From Theorem 3.1, the optimal M to Eq. (20) is given by M = QΛ∗QT , where Λ∗ is obtained from Eq. (18). The problem in Eq. (18) can be efficiently solved using similar techniques in [17]. 4.2 Accelerated Projected Gradient Method The accelerated projected gradient method (APG) has been applied to solve many machine learning formulations [24].
We apply APG to solve the cCMTL formulation in Eq. (11); the resulting algorithm is called apgCMTL. The key component of apgCMTL is to compute the following proximal operator: min WZ,MZ ∥WZ − ˆWS∥2 F + ∥MZ − ˆMS∥2 F , s.t. tr (MZ) = k, MZ ⪯ I, MZ ∈ Sm +, (21) where the details about the construction of ˆWS and ˆMS can be found in [24]. The optimization problem in Eq. (21) is involved in each iteration of apgCMTL, and hence its computation is critical for the practical efficiency of apgCMTL. We show below that the optimal WZ and MZ to Eq. (21) can be computed efficiently. Computation of WZ The optimal WZ to Eq. (21) can be obtained by solving: min WZ ∥WZ − ˆWS∥2 F . (22) Clearly the optimal WZ to Eq. (22) is equal to ˆWS. Computation of MZ The optimal MZ to Eq. (21) can be obtained by solving: min MZ ∥MZ − ˆMS∥2 F , s.t. tr (MZ) = k, MZ ⪯ I, MZ ∈ Sm +, (23) where ˆMS is not guaranteed to be positive semidefinite. Our analysis shows that the optimization problem in Eq. (23) admits an analytical solution via a simple convex projection problem. The main result and the pseudo-code of apgCMTL are provided in the supplemental material.

4.3 Direct Gradient Descent Method In the Direct Gradient Descent Method (graCMTL), as used in [17], the cCMTL problem in Eq. (11) is reformulated as an optimization problem with a single variable W: min W L(W) + c · gcCMTL(W), (24) where gcCMTL(W) is the functional of W defined in Eq. (15). Given the intermediate solution Wk−1 from the (k − 1)-th iteration of graCMTL, we compute the gradient of gcCMTL(W) and then apply the general gradient descent scheme [25] to obtain Wk. Note that at each iterative step in the line search, we need to solve an optimization problem of the form of Eq. (20). The gradient of gcCMTL(·) at Wk−1 is given by [26, 27]: ∇W gcCMTL(Wk−1) = 2Wk−1(ηI + ˆM)−1, where ˆM is obtained by solving Eq. (20) at W = Wk−1. The pseudo-code of graCMTL is provided in the supplemental material.

Figure 1: The correlation matrices of the ground truth model, and the models learnt from RidgeSTL, RegMTL, and cCMTL. Darker color indicates higher correlation. In the ground truth there are 100 tasks clustered into 5 groups. Each task has 200 dimensions. 95 training samples and 5 testing samples are used in each task. The test errors (in terms of nMSE) for RidgeSTL, RegMTL, and cCMTL are 0.8077, 0.6830, 0.0354, respectively.

5 Experiments In this section, we empirically evaluate the effectiveness and the efficiency of the proposed algorithms on synthetic and real-world data sets. The normalized mean square error (nMSE) and the averaged mean square error (aMSE) are used as the performance measures [23].
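For concreteness, the two metrics can be sketched as follows. The paper adopts them from [23]; the normalizations used below (target variance for nMSE, mean squared target for aMSE) are our assumption of the common convention, not definitions taken from the paper:

```python
import numpy as np

def nmse(y_true, y_pred):
    """Mean squared error normalized by the variance of the target (assumed convention)."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def amse(y_true, y_pred):
    """Mean squared error normalized by the mean squared target value (assumed convention)."""
    return np.mean((y_true - y_pred) ** 2) / np.mean(y_true ** 2)

y = np.array([1.0, 2.0, 3.0, 4.0])
assert nmse(y, y) == 0.0 and amse(y, y) == 0.0
# A constant offset of one standard deviation gives an nMSE of 1.
assert np.isclose(nmse(y, y + np.std(y)), 1.0)
```

Either way, smaller values indicate better performance, as stated in the caption of Table 1.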
Note that in this paper we have not developed new MTL formulations; instead, our main focus is on the theoretical understanding of the inherent relationship between ASO and CMTL. Thus, an extensive comparative study of various MTL algorithms is out of the scope of this paper. As an illustration, in the following experiments we only compare cCMTL with two baseline techniques: ridge regression STL (RidgeSTL) and regularized MTL (RegMTL) [28]. Simulation Study We apply the proposed cCMTL formulation in Eq. (11) on a synthetic data set (with a pre-defined cluster structure). We use 5-fold cross-validation to determine the regularization parameters for all methods. We construct the synthetic data set following a procedure similar to the one in [17]: the constructed synthetic data set consists of 5 clusters, where each cluster includes 20 (regression) tasks and each task is represented by a weight vector of length d = 300. Details of the construction are provided in the supplemental material. We apply RidgeSTL, RegMTL, and cCMTL on the constructed synthetic data. The correlation coefficient matrices of the obtained weight vectors are presented in Figure 1. From the result we can observe that (1) cCMTL is able to capture the cluster structure among tasks and achieves a small test error; (2) RegMTL is better than RidgeSTL in terms of test error. It, however, introduces unnecessary correlation among tasks, possibly due to the assumption that all tasks are related; (3) in cCMTL we also notice some ‘noisy’ correlation, which may be due to the spectral relaxation. Table 1: Performance comparison on the School data in terms of nMSE and aMSE. Smaller nMSE and aMSE indicate better performance. All regularization parameters are tuned using 5-fold cross validation. The mean and standard deviation are calculated based on 10 random repetitions.
Measure   Ratio   RidgeSTL          RegMTL            cCMTL
nMSE      10%     1.3954 ± 0.0596   1.0988 ± 0.0178   1.0850 ± 0.0206
nMSE      15%     1.1370 ± 0.0146   1.0636 ± 0.0170   0.9708 ± 0.0145
nMSE      20%     1.0290 ± 0.0309   1.0349 ± 0.0091   0.8864 ± 0.0094
nMSE      25%     0.8649 ± 0.0123   1.0139 ± 0.0057   0.8243 ± 0.0031
nMSE      30%     0.8367 ± 0.0102   1.0042 ± 0.0066   0.8006 ± 0.0081
aMSE      10%     0.3664 ± 0.0160   0.2865 ± 0.0054   0.2831 ± 0.0050
aMSE      15%     0.2972 ± 0.0034   0.2771 ± 0.0045   0.2525 ± 0.0048
aMSE      20%     0.2717 ± 0.0083   0.2709 ± 0.0027   0.2322 ± 0.0022
aMSE      25%     0.2261 ± 0.0033   0.2650 ± 0.0027   0.2154 ± 0.0020
aMSE      30%     0.2196 ± 0.0035   0.2632 ± 0.0028   0.2101 ± 0.0016

Effectiveness Comparison Next, we empirically evaluate the effectiveness of the cCMTL formulation in comparison with RidgeSTL and RegMTL using real-world benchmark datasets including the School data (http://www.cs.ucl.ac.uk/staff/A.Argyriou/code/) and the Sarcos data (http://gaussianprocess.org/gpml/data/). The regularization parameters for all algorithms are determined via 5-fold cross validation; the reported experimental results are averaged over 10 random repetitions.

Figure 2: Sensitivity study of altCMTL, apgCMTL, graCMTL in terms of the computation cost (in seconds) with respect to feature dimensionality (left), sample size (middle), and task number (right).

The School data consists of the exam scores of 15362 students from 139 secondary schools, where each student is described by 27 attributes. We vary the training ratio in the set 5 × {1, 2, · · · , 6}% and record the respective performance. The experimental results are presented in Table 1. We can observe that cCMTL performs the best among all settings. Experimental results on the Sarcos dataset are available in the supplemental material.
Efficiency Comparison We compare the efficiency of the three algorithms, altCMTL, apgCMTL, and graCMTL, for solving the cCMTL formulation in Eq. (11). For the following experiments, we set α = 1, β = 1, and k = 2 in cCMTL. We observe a similar trend in other settings. Specifically, we study how the feature dimensionality, the sample size, and the task number affect the required computation cost (in seconds) for convergence. The experimental setup is as follows: we terminate apgCMTL when the change of objective values in two successive steps is smaller than 10−5 and record the obtained objective value; we then use such a value as the stopping criterion in graCMTL and altCMTL, that is, we stop graCMTL or altCMTL when it attains an objective value equal to or smaller than the one attained by apgCMTL. We use the Yahoo Arts data for the first two experiments. Because in the Yahoo data the task number is very small, we construct a synthetic data set for the third experiment. In the first experiment, we vary the feature dimensionality in the set [500 : 500 : 2500] with the sample size fixed at 4000 and the task number fixed at 17. The experimental result is presented in the left plot of Figure 2. In the second experiment, we vary the sample size in the set [3000 : 1000 : 9000] with the dimensionality fixed at 500 and the task number fixed at 17. The experimental result is presented in the middle plot of Figure 2. From the first two experiments, we observe that a larger feature dimensionality or a larger sample size leads to a higher computation cost. In the third experiment, we vary the task number in the set [10 : 10 : 190] with the feature dimensionality fixed at 600 and the sample size fixed at 2000.
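The altCMTL iteration timed above can be sketched concretely. The following is our own illustrative implementation under a least-squares loss (for which the W-step in Eq. (19) reduces to a linear system), with the M-step of Eq. (20) solved through the eigenvalue problem of Eq. (18); the SLSQP solve and all constants are our choices, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def w_step(X_list, y_list, M, c, eta):
    """Eq. (19) with a least-squares loss: a dm x dm linear system in vec(W)."""
    d, m = X_list[0].shape[1], len(X_list)
    A = np.linalg.inv(eta * np.eye(m) + M)      # (eta*I + M)^{-1}, couples the tasks
    lhs = c * np.kron(A, np.eye(d))
    rhs = np.zeros(d * m)
    for i, (X, y) in enumerate(zip(X_list, y_list)):
        ni = X.shape[0]
        lhs[i*d:(i+1)*d, i*d:(i+1)*d] += X.T @ X / ni
        rhs[i*d:(i+1)*d] = X.T @ y / ni
    return np.linalg.solve(lhs, rhs).reshape(m, d).T   # d x m weight matrix

def m_step(W, k, eta):
    """Eq. (20): M = V diag(lam*) V^T, with lam* from the problem in Eq. (18)."""
    U, sig, Vt = np.linalg.svd(W, full_matrices=True)  # Vt: right singular vectors
    m = W.shape[1]
    s2 = np.zeros(m)
    s2[:len(sig)] = sig ** 2
    res = minimize(lambda lam: np.sum(s2 / (eta + lam)),
                   np.full(m, k / m), bounds=[(0.0, 1.0)] * m,
                   constraints=[{'type': 'eq', 'fun': lambda lam: np.sum(lam) - k}],
                   method='SLSQP')
    return Vt.T @ np.diag(res.x) @ Vt

rng = np.random.default_rng(0)
d, m, k, eta, c = 8, 5, 2, 1.0, 0.1       # c plays the role of alpha*eta*(1+eta)
X_list = [rng.standard_normal((20, d)) for _ in range(m)]
y_list = [X @ rng.standard_normal(d) for X in X_list]

def objective(W, M):
    loss = sum(np.mean((X @ W[:, i] - y) ** 2)
               for i, (X, y) in enumerate(zip(X_list, y_list)))
    return loss + c * np.trace(W @ np.linalg.inv(eta * np.eye(m) + M) @ W.T)

M = (k / m) * np.eye(m)                   # feasible start: tr(M) = k, M <= I
vals = []
for _ in range(3):
    W = w_step(X_list, y_list, M, c, eta)  # exact minimizer for fixed M
    M = m_step(W, k, eta)                  # optimal M for fixed W
    vals.append(objective(W, M))
assert all(b <= a + 1e-5 for a, b in zip(vals, vals[1:]))  # objective is non-increasing
```

With a non-quadratic smooth loss, the W-step would instead be solved by a gradient-type method, as noted in Section 4.1.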
The employed synthetic data set is constructed as follows: for each task, we generate the entries of the data matrix Xi from N(0, 1), and generate the entries of the weight vector wi from N(0, 1); the response vector yi is computed as yi = Xiwi + ξ, where ξ ∼ N(0, 0.01) represents the noise vector. The experimental result is presented in the right plot of Figure 2. We can observe that altCMTL is more efficient than the other two algorithms. 6 Conclusion In this paper we establish the equivalence relationship between two multi-task learning techniques: alternating structure optimization (ASO) and clustered multi-task learning (CMTL). We further establish the equivalence relationship between our proposed convex relaxation of CMTL and an existing convex relaxation of ASO. In addition, we propose three algorithms for solving the convex CMTL formulation and demonstrate their effectiveness and efficiency on benchmark datasets. The proposed algorithms involve the computation of an SVD. In the case of a very large task number, the SVD computation will be expensive. We seek to further improve the efficiency of the algorithms by employing approximation methods. In addition, we plan to apply the proposed algorithms to other real world applications involving multiple (clustered) tasks. Acknowledgments This work was supported in part by NSF IIS-0812551, IIS-0953662, MCB-1026710, CCF-1025177, and NIH R01 LM010730. References [1] T. Evgeniou, M. Pontil, and O. Toubia. A convex optimization approach to modeling consumer heterogeneity in conjoint estimation. Marketing Science, 26(6):805–818, 2007. [2] R.K. Ando. Applying alternating structure optimization to word sense disambiguation. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 77–84, 2006. [3] A. Torralba, K.P. Murphy, and W.T. Freeman. Sharing features: efficient boosting procedures for multiclass object detection.
In Computer Vision and Pattern Recognition, 2004, IEEE Conference on, volume 2, pages 762–769, 2004. [4] J. Baxter. A model of inductive bias learning. J. Artif. Intell. Res., 12:149–198, 2000. [5] R.K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853, 2005. [6] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. Lecture notes in computer science, pages 567–580, 2003. [7] S. Bickel, J. Bogojeska, T. Lengauer, and T. Scheffer. Multi-task learning for hiv therapy screening. In Proceedings of the 25th International Conference on Machine Learning, pages 56–63. ACM, 2008. [8] T. Evgeniou, C.A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6(1):615, 2006. [9] A. Argyriou, C.A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. Advances in Neural Information Processing Systems, 20:25–32, 2008. [10] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997. [11] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on EMNLP, pages 120–128, 2006. [12] A. Quattoni, M. Collins, and T. Darrell. Learning visual representations using images with captions. In Computer Vision and Pattern Recognition, 2007. IEEE Conference on, pages 1–8. IEEE, 2007. [13] J. Chen, L. Tang, J. Liu, and J. Ye. A convex formulation for learning shared structures from multiple tasks. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 137–144. ACM, 2009. [14] S. Thrun and J. O’Sullivan. Clustering learning tasks and the selective cross-task transfer of knowledge. Learning to learn, pages 181–209, 1998. [15] B. Bakker and T. Heskes. Task clustering and gating for bayesian multitask learning. 
The Journal of Machine Learning Research, 4:83–99, 2003. [16] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with dirichlet process priors. The Journal of Machine Learning Research, 8:35–63, 2007. [17] L. Jacob, F. Bach, and J.P. Vert. Clustered multi-task learning: A convex formulation. Arxiv preprint arXiv:0809.2085, 2008. [18] F. Wang, X. Wang, and T. Li. Semi-supervised multi-task learning with task regularizations. In Data Mining, 2009. ICDM’09. Ninth IEEE International Conference on, pages 562–568. IEEE, 2009. [19] C. Ding and X. He. K-means clustering via principal component analysis. In Proceedings of the twenty-first International Conference on Machine learning, page 29. ACM, 2004. [20] H. Zha, X. He, C. Ding, M. Gu, and H. Simon. Spectral relaxation for k-means clustering. Advances in Neural Information Processing Systems, 2:1057–1064, 2002. [21] K. Fan. On a theorem of Weyl concerning eigenvalues of linear transformations I. Proceedings of the National Academy of Sciences of the United States of America, 35(11):652, 1949. [22] J. Nocedal and S.J. Wright. Numerical optimization. Springer Verlag, 1999. [23] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008. [24] Y. Nesterov. Gradient methods for minimizing composite objective function. ReCALL, 76(2007076), 2007. [25] S.P. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2004. [26] J. Gauvin and F. Dubeau. Differential properties of the marginal function in mathematical programming. Optimality and Stability in Mathematical Programming, pages 101–119, 1982. [27] M. Wu, B. Schölkopf, and G. Bakır. A direct method for building sparse kernel learning algorithms. The Journal of Machine Learning Research, 7:603–624, 2006. [28] T. Evgeniou and M. Pontil. Regularized multi-task learning.
In Proceedings of the tenth ACM SIGKDD International Conference on Knowledge discovery and data mining, pages 109–117. ACM, 2004.
|
2011
|
188
|
4,244
|
On Learning Discrete Graphical Models Using Greedy Methods Ali Jalali University of Texas at Austin alij@mail.utexas.edu Christopher C. Johnson University of Texas at Austin cjohnson@cs.utexas.edu Pradeep Ravikumar University of Texas at Austin pradeepr@cs.utexas.edu Abstract In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting. Our first main result studies the sparsistency, or consistency in sparsity pattern recovery, properties of a forward-backward greedy algorithm as applied to general statistical models. As a special case, we then apply this algorithm to learn the structure of a discrete graphical model via neighborhood estimation. As a corollary of our general result, we derive sufficient conditions on the number of samples n, the maximum node-degree d and the problem size p, as well as other conditions on the model parameters, so that the algorithm recovers all the edges with high probability. Our result guarantees graph selection for samples scaling as n = Ω(d2 log(p)), in contrast to existing convex-optimization based algorithms that require a sample complexity of Ω(d3 log(p)). Further, the greedy algorithm only requires a restricted strong convexity condition which is typically milder than irrepresentability assumptions. We corroborate these results using numerical simulations at the end. 1 Introduction Undirected graphical models, also known as Markov random fields, are used in a variety of domains, including statistical physics, natural language processing and image analysis among others. In this paper we are concerned with the task of estimating the graph structure G of a Markov random field (MRF) over a discrete random vector X = (X1, X2, . . . , Xp), given n independent and identically distributed samples {x(1), x(2), . . . , x(n)}.
This underlying graph structure encodes conditional independence assumptions among subsets of the variables, and thus plays an important role in a broad range of applications of MRFs. Existing approaches: Neighborhood Estimation, Greedy Local Search. Methods for estimating such graph structure include those based on constraint and hypothesis testing [22], and those that estimate restricted classes of graph structures such as trees [8], polytrees [11], and hypertrees [23]. A recent class of successful approaches for graphical model structure learning are based on estimating the local neighborhood of each node. One subclass of these, for the special case of bounded degree graphs, involves the use of exhaustive search, so that their computational complexity grows at least as quickly as O(pd), where d is the maximum neighborhood size in the graphical model [1, 4, 9]. Another subclass uses convex programs to learn the neighborhood structure: for instance [20, 17, 16] estimate the neighborhood set for each vertex r ∈ V by optimizing its ℓ1-regularized conditional likelihood; [15, 10] use ℓ1/ℓ2-regularized conditional likelihood. Even these methods, however, which need to solve regularized convex programs with a typically polynomial computational cost of O(p4) or O(p6), are still expensive for large problems. Another popular class of approaches are based on using a score metric and searching for the best scoring structure from a candidate set of graph structures. Exact search is typically NP-hard [7]; indeed for general discrete MRFs, not only is the search space intractably large, but calculation of typical score metrics itself is computationally intractable since they involve computing the partition function associated with the Markov random field [26]. Such methods thus have to use approximations and search heuristics for tractable computation.
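The ℓ1-regularized conditional-likelihood idea used by [20, 17, 16] can be illustrated on a toy chain-structured binary model. The proximal-gradient (ISTA) solver, the chain construction, and all constants below are our own illustrative choices, not any of the cited algorithms verbatim:

```python
import numpy as np

def l1_logistic(X, y, lam, lr=0.1, iters=2000):
    """Minimize (1/n) sum_i log(1 + exp(-y_i * x_i^T w)) + lam * ||w||_1 via ISTA."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        margins = y * (X @ w)
        # Gradient of the logistic loss: mean of -y_i * sigmoid(-margin_i) * x_i.
        grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Toy chain x1 - x2 - x3 - x4 over {-1,+1}: each variable copies its left
# neighbour, flipped with probability 0.2 (a Markov chain, hence a pairwise MRF).
rng = np.random.default_rng(0)
n = 4000
X = np.empty((n, 4))
X[:, 0] = rng.choice([-1.0, 1.0], size=n)
for j in range(1, 4):
    flip = rng.random(n) < 0.2
    X[:, j] = np.where(flip, -X[:, j - 1], X[:, j - 1])

# Estimate the neighbourhood of node 2 (index 1): regress it on all other nodes.
others = [0, 2, 3]
w = l1_logistic(X[:, others], X[:, 1], lam=0.05)
support = [others[j] for j in np.flatnonzero(np.abs(w) > 1e-3)]
```

On this chain, the true neighbours of node 2 are nodes 1 and 3 (indices 0 and 2); node 4 is conditionally independent of node 2 given node 3, so the ℓ1 penalty should zero out its coefficient.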
Question: Can one use local procedures that are as inexpensive as the heuristic greedy approaches, and yet come with the strong statistical guarantees of the regularized convex program based approaches? High-dimensional Estimation; Greedy Methods. There has been an increasing focus in recent years on high-dimensional statistical models, where the number of parameters p is comparable to or even larger than the number of observations n. It is now well understood that consistent estimation is possible even under such high-dimensional scaling if some low-dimensional structure is imposed on the model space. Of relevance to graphical model structure learning is the structure of sparsity, where a sparse set of non-zero parameters entails a sparse set of edges. A surge of recent work [5, 12] has shown that ℓ1-regularization for learning such sparse models can lead to practical algorithms with strong theoretical guarantees. A line of recent work (cf. the paragraph above) has thus leveraged this sparsity-inducing nature of ℓ1-regularization to propose and analyze convex programs based on regularized log-likelihood functions. A related line of recent work on learning sparse models has focused on “stagewise” greedy algorithms. These perform simple forward steps (adding parameters greedily), and possibly also backward steps (removing parameters greedily), and yet provide strong statistical guarantees for the estimate after a finite number of greedy steps. The forward greedy variant, which performs just the forward step, has appeared in various guises in multiple communities: in machine learning as boosting [13], in function approximation [24], and in signal processing as basis pursuit [6].
In the context of statistical model estimation, Zhang [28] analyzed the forward greedy algorithm for the case of sparse linear regression, and showed that the forward greedy algorithm is sparsistent (consistent for model selection recovery) under the same “irrepresentable” condition as that required for sparsistency of the Lasso. Zhang [27] analyzed a more general greedy algorithm for sparse linear regression that performs forward and backward steps, and showed that it is sparsistent under a weaker restricted eigenvalue condition. Here we ask the question: Can we provide an analysis of a general forward-backward algorithm for parameter estimation in general statistical models? Specifically, we need to extend the sparsistency analysis of [28] to general non-linear models, which requires a subtler analysis due to a circular requirement: one has to control the third order terms in the Taylor series expansion of the log-likelihood, which in turn requires the estimate to be well-behaved. Such extensions in the case of ℓ1-regularization occur for instance in [20, 25, 3]. Our Contributions. In this paper, we address both questions above. In the first part, we analyze the forward-backward greedy algorithm [28] for general statistical models. We note that even though we consider the general statistical model case, our analysis is much simpler and more accessible than [28], and would be of use even to a reader interested in just the linear model case of Zhang [28]. In the second part, we use this to show that, when combined with neighborhood estimation, the forward-backward variant applied to local conditional log-likelihoods provides a simple computationally tractable method that adds and deletes edges, but comes with strong sparsistency guarantees. We reiterate that our first result, on the sparsistency of the forward-backward greedy algorithm for general objectives, is of independent interest even outside the context of graphical models.
As we show, the greedy method is better than the ℓ1-regularized counterpart in [20] theoretically, as well as experimentally. The sufficient condition on the parameters imposed by the greedy algorithm is a restricted strong convexity condition [19], which is weaker than the irrepresentable condition required by [20]. Further, the number of samples required for sparsistent graph recovery scales as O(d² log p), where d is the maximum node degree, in contrast to O(d³ log p) for the ℓ1-regularized counterpart. We corroborate this in our simulations, where we find that the greedy algorithm requires fewer observations than [20] for sparsistent graph recovery. 2 Review, Setup and Notation 2.1 Markov Random Fields Let X = (X1, . . . , Xp) be a random vector, each variable Xi taking values in a discrete set X of cardinality m. Let G = (V, E) denote a graph with p nodes, corresponding to the p variables {X1, . . . , Xp}. A pairwise Markov random field over X = (X1, . . . , Xp) is then specified by nodewise and pairwise functions θr : X → R for all r ∈ V , and θrt : X × X → R for all (r, t) ∈ E: P(x) ∝ exp( Σ_{r∈V} θr(xr) + Σ_{(r,t)∈E} θrt(xr, xt) ). (1) In this paper, we largely focus on the case where the variables are binary with X = {−1, +1}, where we can rewrite (1) in the Ising model form [14], for some set of parameters {θr} and {θrt}, as P(x) ∝ exp( Σ_{r∈V} θr xr + Σ_{(r,t)∈E} θrt xr xt ). (2) 2.2 Graphical Model Selection Let D := {x(1), . . . , x(n)} denote the set of n samples, where each p-dimensional vector x(i) ∈ {1, . . . , m}^p is drawn i.i.d. from a distribution Pθ∗ of the form (1), for parameters θ∗ and graph G = (V, E∗) over the p variables. Note that the true edge set E∗ can also be expressed as a function of the parameters as E∗ = {(r, t) ∈ V × V : θ∗rt ≠ 0}. (3) The graphical model selection task consists of inferring this edge set E∗ from the samples D. The goal is to construct an estimator Ên for which P[Ên = E∗] → 1 as n → ∞.
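To make the model (2) and the edge-set identity (3) concrete, here is a small illustrative sketch (not from the paper; all function names and the toy parameters are ours). It evaluates the unnormalized Ising probability on a 3-node chain by brute force, and reads off E∗ from the nonzero pattern of the pairwise parameters:

```python
import itertools
import numpy as np

def ising_unnormalized(x, theta_node, theta_edge):
    # exp( sum_r theta_r x_r + sum_{r<t} theta_rt x_r x_t ), as in Eq. (2)
    p = len(x)
    pairwise = sum(theta_edge[r, t] * x[r] * x[t]
                   for r in range(p) for t in range(r + 1, p))
    return float(np.exp(theta_node @ x + pairwise))

def edge_set(theta_edge, tol=1e-12):
    # E* = {(r, t) : theta*_rt != 0}, as in Eq. (3)
    p = theta_edge.shape[0]
    return {(r, t) for r in range(p) for t in range(r + 1, p)
            if abs(theta_edge[r, t]) > tol}

# Toy 3-node chain 0 - 1 - 2 with couplings 0.5 and no node-wise fields.
theta_node = np.zeros(3)
theta_edge = np.zeros((3, 3))
theta_edge[0, 1] = theta_edge[1, 0] = 0.5
theta_edge[1, 2] = theta_edge[2, 1] = 0.5

# Brute-force partition function over all 2^3 configurations.
configs = list(itertools.product([-1.0, 1.0], repeat=3))
Z = sum(ising_unnormalized(np.array(c), theta_node, theta_edge) for c in configs)
```

Normalizing by Z turns the unnormalized weights into the distribution in (2); graphical model selection is exactly the task of recovering `edge_set(theta_edge)` from samples.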
Denote by N∗(r) the set of neighbors of a vertex r ∈ V , so that N∗(r) = {t : (r, t) ∈ E∗}. Then the graphical model selection problem is equivalent to that of estimating the neighborhoods N̂n(r) ⊂ V , so that P[N̂n(r) = N∗(r); ∀r ∈ V ] → 1 as n → ∞. For any pair of random variables Xr and Xt, the parameter θrt fully characterizes whether there is an edge between them, and can be estimated via its conditional likelihood. In particular, defining Θr := (θr1, . . . , θrp), our goal is to use the conditional likelihood of Xr conditioned on XV\r to estimate Θr and hence its neighborhood N(r). The conditional distribution of Xr given XV\r generated by (2) takes the logistic form P(Xr = xr | XV\r = xV\r) = exp(θr xr + Σ_{t∈V\r} θrt xr xt) / (1 + exp(θr xr + Σ_{t∈V\r} θrt xr xt)). Given the n samples D, the corresponding conditional log-likelihood loss is given by L(Θr; D) = (1/n) Σ_{i=1}^{n} [ log(1 + exp(θr x_r^{(i)} + Σ_{t∈V\r} θrt x_r^{(i)} x_t^{(i)})) − θr x_r^{(i)} − Σ_{t∈V\r} θrt x_r^{(i)} x_t^{(i)} ]. (4) In Section 4, we study a greedy algorithm (Algorithm 2) that finds these node neighborhoods N̂n(r) = Supp(Θ̂r) of each random variable Xr separately, by a greedy stagewise optimization of the conditional log-likelihood of Xr conditioned on XV\r. The algorithm then combines these neighborhoods to obtain a graph estimate Ên using an “OR” rule: Ên = ∪r {(r, t) : t ∈ N̂n(r)}. Other rules, such as the “AND” rule, which adds an edge only if it occurs in each of the respective node neighborhoods, could be used to combine the node neighborhoods into a graph estimate. We show in Theorem 2 that the neighborhood selection by the greedy algorithm succeeds in recovering the exact node neighborhoods with high probability, so that by a union bound, the graph estimates using either the AND or OR rules would be exact with high probability as well. Before we describe this greedy algorithm and its analysis in Section 4, however, we first consider the general statistical model case in the next section.
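As a concrete reading of the loss in (4): writing z_i = θr x_r^(i) + Σ_{t∈V\r} θrt x_r^(i) x_t^(i), each summand is log(1 + e^{z_i}) − z_i, i.e. a logistic loss. A minimal vectorized sketch (our own naming; `theta_rest` holds the pairwise parameters for t ∈ V \ {r}, in column order):

```python
import numpy as np

def conditional_nll(theta_r, theta_rest, X, r):
    """Conditional log-likelihood loss (4) for node r, samples X of shape (n, p) in {-1,+1}."""
    n, p = X.shape
    xr = X[:, r]
    rest = np.delete(np.arange(p), r)
    z = theta_r * xr + xr * (X[:, rest] @ theta_rest)
    # log(1 + exp(z)) - z, computed stably via logaddexp
    return float(np.mean(np.logaddexp(0.0, z) - z))
```

At Θr = 0 every term is log 2, the loss of an uninformative coin flip; minimizing this over (θr, θrt) for each r, with the greedy procedure of Section 3, yields the neighborhood estimate N̂n(r) = Supp(Θ̂r).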
We first describe the forward-backward greedy algorithm of Zhang [28] as applied to general statistical models, followed by a sparsistency analysis for this general case. We then specialize these general results in Section 4 to the graphical model case. The next section is thus of independent interest even outside the context of graphical models. 3 Greedy Algorithm for General Losses Consider a random variable Z with distribution P, and let Z_1^n := {Z1, . . . , Zn} denote n observations drawn i.i.d. according to P. Suppose we are interested in estimating some parameter θ∗ ∈ R^p of the distribution P that is sparse; denote its number of non-zeroes by s∗ := ‖θ∗‖0. Let L : R^p × Z^n → R be some loss function that assigns a cost to any parameter θ ∈ R^p, for a given set of observations Z_1^n. For ease of notation, in the sequel, we adopt the shorthand L(θ) for L(θ; Z_1^n). We assume that θ∗ satisfies E_Z[∇L(θ∗)] = 0.

Algorithm 1: Greedy forward-backward algorithm for finding a sparse optimizer of L(·)
Input: Data D := {x(1), . . . , x(n)}, stopping threshold ǫS, backward step factor ν ∈ (0, 1)
Output: Sparse optimizer θ̂
θ̂^(0) ← 0; Ŝ^(0) ← ∅; k ← 1
while true do {Forward Step}
  (j∗, α∗) ← argmin_{j ∈ (Ŝ^(k−1))^c, α} L(θ̂^(k−1) + α e_j; D)
  Ŝ^(k) ← Ŝ^(k−1) ∪ {j∗}
  δ_f^(k) ← L(θ̂^(k−1); D) − L(θ̂^(k−1) + α∗ e_{j∗}; D)
  if δ_f^(k) ≤ ǫS then break end if
  θ̂^(k) ← argmin_θ L(θ_{Ŝ^(k)}; D)
  k ← k + 1
  while true do {Backward Step}
    j∗ ← argmin_{j ∈ Ŝ^(k−1)} L(θ̂^(k−1) − θ̂_j^(k−1) e_j; D)
    if L(θ̂^(k−1) − θ̂_{j∗}^(k−1) e_{j∗}; D) − L(θ̂^(k−1); D) > ν δ_f^(k) then break end if
    Ŝ^(k−1) ← Ŝ^(k) − {j∗}
    θ̂^(k−1) ← argmin_θ L(θ_{Ŝ^(k−1)}; D)
    k ← k − 1
  end while
end while

We now consider the forward-backward greedy procedure in Algorithm 1, which rewrites the algorithm in [27] to allow for general loss functions. The algorithm starts with an empty set of active variables Ŝ^(0) and gradually adds (and removes) variables to the active set until it meets the stopping criterion.
This algorithm has two major steps: the forward step and the backward step. In the forward step, the algorithm finds the best next candidate and adds it to the active set as long as it improves the loss function by at least ǫS; otherwise the stopping criterion is met and the algorithm terminates. Then, in the backward step, the algorithm checks the influence of all variables in the presence of the newly added variable. If one or more of the previously added variables do not contribute at least νǫS to the loss function, then the algorithm removes them from the active set. This procedure ensures that at each round, the loss function is improved by at least (1 − ν)ǫS, and hence the algorithm terminates within a finite number of steps. We now state the assumptions on the loss function under which sparsistency is guaranteed. Let us first recall the definition of restricted strong convexity from Negahban et al. [18]. Specifically, for a given set S, the loss function is said to satisfy restricted strong convexity (RSC) with parameter κl with respect to the set S if L(θ + ∆; Z_1^n) − L(θ; Z_1^n) − ⟨∇L(θ; Z_1^n), ∆⟩ ≥ (κl/2) ‖∆‖₂² for all ∆ ∈ S. (5) We can now define sparsity restricted strong convexity as follows. Specifically, we say that the loss function L satisfies RSC(k) with parameter κl if it satisfies RSC with parameter κl for the set {∆ ∈ R^p : ‖∆‖0 ≤ k}. Similarly, we say that the loss function satisfies restricted strong smoothness (RSS) with parameter κu with respect to a set S if L(θ + ∆; Z_1^n) − L(θ; Z_1^n) − ⟨∇L(θ; Z_1^n), ∆⟩ ≤ (κu/2) ‖∆‖₂² for all ∆ ∈ S. We can define RSS(k) similarly: the loss function L satisfies RSS(k) with parameter κu if it satisfies RSS with parameter κu for the set {∆ ∈ R^p : ‖∆‖0 ≤ k}. Given any constants κl and κu, and a sample based loss function L, we can typically use concentration based arguments to obtain bounds on the sample size required so that the RSS and RSC conditions hold with high probability.
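To make the forward and backward steps concrete, here is a sketch of the forward-backward procedure specialized to the squared loss L(θ) = ‖y − Aθ‖²/(2n) (the sparse linear regression setting of Zhang [27, 28]), where the one-dimensional forward gains and the restricted refits have closed forms. All names are ours; the refit-based backward check below is a slight simplification of the θ̂j-zeroing check in the pseudocode, and the single-variable guard is an implementation convenience:

```python
import numpy as np

def forward_backward_greedy(A, y, eps_s, nu=0.5):
    """Forward-backward greedy for L(theta) = ||y - A theta||^2 / (2n)."""
    n, p = A.shape

    def refit(S):
        # Restricted minimizer: least squares on the active columns only.
        theta = np.zeros(p)
        if S:
            theta[S], *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        return theta

    def loss(theta):
        r = y - A @ theta
        return float(r @ r) / (2 * n)

    S, theta = [], np.zeros(p)
    while True:
        if len(S) == p:
            break
        # Forward step: gain of the best single coordinate at its optimal step size.
        resid = y - A @ theta
        gains = {j: (A[:, j] @ resid) ** 2 / (2 * n * (A[:, j] @ A[:, j]))
                 for j in range(p) if j not in S}
        j_star = max(gains, key=gains.get)
        delta_f = gains[j_star]
        if delta_f <= eps_s:          # stopping criterion
            break
        S.append(j_star)
        theta = refit(S)
        # Backward step: drop variables whose removal costs at most nu * delta_f.
        while len(S) > 1:
            costs = {j: loss(refit([t for t in S if t != j])) - loss(theta) for j in S}
            j_worst = min(costs, key=costs.get)
            if costs[j_worst] > nu * delta_f:
                break
            S.remove(j_worst)
            theta = refit(S)
    return theta, sorted(S)
```

On noiseless data the procedure recovers the true support exactly once the forward gains of the remaining coordinates fall below ǫS.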
Another property of the loss function that we require is an upper bound λn on the ℓ∞ norm of the gradient of the loss at the true parameter θ∗, i.e., λn ≥ ‖∇L(θ∗)‖∞. This captures the “noise level” of the samples with respect to the loss. Here too, we can typically use concentration arguments to show, for instance, that λn ≤ c_n √(log(p)/n) for some constant c_n > 0 with high probability. Theorem 1 (Sparsistency). Suppose the loss function L(·) satisfies RSC(ηs∗) and RSS(ηs∗) with parameters κl and κu, for some η ≥ 2 + 4ρ²(√((ρ² − ρ)/s∗) + √2)² with ρ = κu/κl. Moreover, suppose that the true parameters θ∗ satisfy min_{j∈S∗} |θ∗_j| > √(32ρǫS/κl). Then if we run Algorithm 1 with stopping threshold ǫS ≥ (8ρη/κl) s∗ λn², the output θ̂ with support Ŝ satisfies: (a) Error Bound: ‖θ̂ − θ∗‖₂ ≤ (2/κl) √s∗ (λn √η + √(2κu ǫS)). (b) No False Exclusions: S∗ − Ŝ = ∅. (c) No False Inclusions: Ŝ − S∗ = ∅. Proof. The proof of the theorem hinges on three main lemmas: Lemmas 1 and 2, which are simple consequences of the forward and backward steps failing when the greedy algorithm stops, and Lemma 3, which uses these two lemmas and extends techniques from [21] and [19] to obtain an ℓ2 bound on the error. Provided these lemmas hold, we then show below that the greedy algorithm is sparsistent. However, these lemmas require a priori that the RSC and RSS conditions hold for sparsity size |S∗ ∪ Ŝ|. Thus, we use the result in Lemma 4 that if RSC(ηs∗) holds, then the solution when the algorithm terminates satisfies |Ŝ| ≤ (η − 1)s∗, and hence |Ŝ ∪ S∗| ≤ ηs∗. We can then apply Lemmas 1, 2 and 3 to complete the proof, as detailed below. (a) The result follows directly from Lemma 3, noting that |Ŝ ∪ S∗| ≤ ηs∗. In this lemma, we show that the upper bound holds by drawing on fixed point techniques in [21] and [19], and by using a simple consequence of the forward step failing when the greedy algorithm stops. (b) We follow the chaining argument in [27].
For any τ ∈ R, we have τ |{j ∈ S∗ − Ŝ : |θ∗_j|² > τ}| ≤ ‖θ∗_{S∗−Ŝ}‖₂² ≤ ‖θ∗ − θ̂‖₂² ≤ (8ηs∗λn²)/κl² + (16κuǫS/κl²) |S∗ − Ŝ|, where the last inequality follows from part (a) and the inequality (a + b)² ≤ 2a² + 2b². Now, setting τ = 32κuǫS/κl² and dividing both sides by τ/2, we get 2 |{j ∈ S∗ − Ŝ : |θ∗_j|² > τ}| ≤ (ηs∗λn²)/(2κuǫS) + |S∗ − Ŝ|. Substituting |{j ∈ S∗ − Ŝ : |θ∗_j|² > τ}| = |S∗ − Ŝ| − |{j ∈ S∗ − Ŝ : |θ∗_j|² ≤ τ}|, we get |S∗ − Ŝ| ≤ |{j ∈ S∗ − Ŝ : |θ∗_j|² ≤ τ}| + (ηs∗λn²)/(2κuǫS) ≤ |{j ∈ S∗ − Ŝ : |θ∗_j|² ≤ τ}| + 1/2, due to the setting of the stopping threshold ǫS. This in turn entails that |S∗ − Ŝ| ≤ |{j ∈ S∗ − Ŝ : |θ∗_j|² ≤ τ}| = 0, by our assumption on the size of the minimum entry of θ∗. (c) From Lemma 2, which provides a simple consequence of the backward step failing when the greedy algorithm stops, for ∆̂ = θ̂ − θ∗ we have (ǫS/κu) |Ŝ − S∗| ≤ ‖∆̂_{Ŝ−S∗}‖₂² ≤ ‖∆̂‖₂², so that using Lemma 3 and the fact that |S∗ − Ŝ| = 0, we obtain |Ŝ − S∗| ≤ (4ηs∗λn²κu)/(ǫSκl²) ≤ 1/2, due to the setting of the stopping threshold ǫS.

Algorithm 2: Greedy forward-backward algorithm for pairwise discrete graphical model learning
Input: Data D := {x(1), . . . , x(n)}, stopping threshold ǫS, backward step factor ν ∈ (0, 1)
Output: Estimated edges Ê
for r ∈ V do
  Run Algorithm 1 with the loss L(·) set as in (4), to obtain Θ̂r with support N̂r
end for
Output Ê = ∪_r {(r, t) : t ∈ N̂r}

3.1 Lemmas for Theorem 1 We list the simple lemmas that characterize the solution obtained when the algorithm terminates, and on which the proof of Theorem 1 hinges. Lemma 1 (Stopping Forward Step). When Algorithm 1 stops with parameter θ̂ supported on Ŝ, we have L(θ̂) − L(θ∗) < √(2 |S∗ − Ŝ| κu ǫS) ‖θ̂ − θ∗‖₂. Lemma 2 (Stopping Backward Step). When Algorithm 1 stops with parameter θ̂ supported on Ŝ, we have, for ∆̂ = θ̂ − θ∗, ‖∆̂_{Ŝ−S∗}‖₂² ≥ (ǫS/κu) |Ŝ − S∗|. Lemma 3 (Stopping Error Bound). When Algorithm 1 stops with parameter θ̂ supported on Ŝ, we have ‖θ̂ − θ∗‖₂ ≤ (2/κl) ( λn √(|S∗ ∪ Ŝ|) + √(2 |S∗ − Ŝ| κu ǫS) ). Lemma 4 (Stopping Size). If ǫS > (λn²/κu) (√(2/(η − 1)) − √(2/η))⁻² and RSC(ηs∗) holds for some η ≥ 2 + 4ρ²(√((ρ² − ρ)/s∗) + √2)², then Algorithm 1 stops with k ≤ (η − 1)s∗. Notice that if ǫS ≥ (8ρη/κl)(η²/(4ρ²)) λn², then the assumption of this lemma is satisfied. Hence for large values of s∗ ≥ 8ρ² > η²/(4ρ²), it suffices to have ǫS ≥ (8ρη/κl) s∗ λn². 4 Greedy Algorithm for Pairwise Graphical Models Suppose we are given a set of n i.i.d. samples D := {x(1), . . . , x(n)}, drawn from a pairwise Ising model as in (2), with parameters θ∗ and graph G = (V, E∗). It will be useful to denote the maximum node degree in the graph E∗ by d. As we will show, our model selection performance depends critically on this parameter d. We propose Algorithm 2 for estimating the underlying graphical model from the n samples D. Theorem 2 (Pairwise Sparsistency). Suppose we run Algorithm 2 with stopping threshold ǫS ≥ c1 d log p / n, where d is the maximum node degree in the graphical model, the true parameters θ∗ satisfy c3 √d > min_{j∈S∗} |θ∗_j| > c2 √ǫS, and further the number of samples scales as n > c4 d² log p, for some constants c1, c2, c3, c4. Then, with probability at least 1 − c′ exp(−c′′n), the output Ê satisfies: (a) No False Exclusions: E∗ − Ê = ∅. (b) No False Inclusions: Ê − E∗ = ∅. Proof. This theorem is a corollary of our general Theorem 1. We first show that the conditions of Theorem 1 hold under the assumptions in this corollary. RSC, RSS. We first note that the conditional log-likelihood loss function in (4) corresponds to a logistic likelihood. Moreover, the covariates are all binary and bounded, and hence also sub-Gaussian. [19, 2] analyze the RSC and RSS properties of generalized linear models, of which logistic models are an instance, and show that the following result holds if the covariates are sub-Gaussian. Let ∂L(∆; θ∗) = L(θ∗ + ∆) − L(θ∗) − ⟨∇L(θ∗), ∆⟩ be the second order Taylor series remainder.
Then Proposition 2 in [19] states that there exist constants κl1 and κl2, independent of n and p, such that with probability at least 1 − c1 exp(−c2n), for some constants c1, c2 > 0, ∂L(∆; θ∗) ≥ κl1 ‖∆‖₂ { ‖∆‖₂ − κl2 √(log(p)/n) ‖∆‖₁ } for all ∆ : ‖∆‖₂ ≤ 1. Thus, if ‖∆‖0 ≤ k := ηd, then ‖∆‖₁ ≤ √k ‖∆‖₂, so that ∂L(∆; θ∗) ≥ ‖∆‖₂² ( κl1 − κl2 √(k log p / n) ) ≥ (κl1/2) ‖∆‖₂², if n > 4(κl2/κl1)² ηd log(p). In other words, with probability at least 1 − c1 exp(−c2n), the loss function L satisfies RSC(k) with parameter κl1 provided n > 4(κl2/κl1)² ηd log(p). Similarly, it follows from [19, 2] that there exist constants κu1 and κu2 such that with probability at least 1 − c′1 exp(−c′2 n), ∂L(∆; θ∗) ≤ κu1 ‖∆‖₂ { ‖∆‖₂ − κu2 ‖∆‖₁ } for all ∆ : ‖∆‖₂ ≤ 1, so that by a similar argument, with probability at least 1 − c′1 exp(−c′2 n), the loss function L satisfies RSS(k) with parameter κu1 provided n > 4(κu2/κu1)² ηd log(p). Noise Level. Next, we obtain a bound on the noise level λn ≥ ‖∇L(θ∗)‖∞, following arguments similar to [20]. Let W denote the gradient ∇L(θ∗) of the loss function (4). Any entry of W has the form Wt = (1/n) Σ_{i=1}^{n} Z^{(i)}_{rt}, where the Z^{(i)}_{rt} = x^{(i)}_t ( x^{(i)}_r − P(xr = 1 | x^{(i)}_{V\r}) ) are zero-mean, i.i.d. and bounded, |Z^{(i)}_{rt}| ≤ 1. Thus, an application of Hoeffding’s inequality yields that P[|Wt| > δ] ≤ 2 exp(−2nδ²). Applying a union bound over the indices of W, we get P[‖W‖∞ > δ] ≤ 2 exp(−2nδ² + log(p)). Thus, if λn = √(log(p)/n), then ‖W‖∞ ≤ λn with probability at least 1 − exp(−nλn² + log(p)). We can now verify that under the assumptions in the corollary, the conditions on the stopping threshold ǫS and on the minimum absolute value of the non-zero parameters min_{j∈S∗} |θ∗_j| are satisfied. Moreover, from the discussion above, under the sample size scaling in the corollary, the required RSC and RSS conditions hold as well. Thus, Theorem 1 yields that each node neighborhood is recovered with no false exclusions or inclusions with probability at least 1 − c′ exp(−c′′n).
An application of a union bound over all nodes completes the proof. Remarks. The sufficient condition on the parameters imposed by the greedy algorithm is a restricted strong convexity condition [19], which is weaker than the irrepresentable condition required by [20]. Further, the number of samples required for sparsistent graph recovery scales as O(d² log p), where d is the maximum node degree, in contrast to O(d³ log p) for the ℓ1-regularized counterpart. We corroborate this in our simulations, where we find that the greedy algorithm requires fewer observations than [20] for sparsistent graph recovery. We also note that the result can be extended to the general pairwise graphical model case, where each random variable takes values in the range {1, . . . , m}. In that case, the conditional likelihood of each node conditioned on the rest of the nodes takes the form of a multiclass logistic model, and the greedy algorithm would take the form of a “group” forward-backward greedy algorithm, which would add or remove all the parameters corresponding to an edge as a group. Our analysis naturally extends to such a group greedy setting as well: the analysis for RSC and RSS remains the same, and for bounds on λn, see equation (12) in [15]. We defer further discussion due to lack of space.
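Algorithm 2 combines the per-node neighborhood estimates with an “OR” rule; Section 2.2 also mentions the “AND” alternative. A toy sketch of the two combination rules (names ours) showing how an asymmetric pair of neighborhood estimates is resolved differently:

```python
def combine_neighborhoods(nbrs, rule="OR"):
    """Combine per-node neighborhood estimates nbrs[r] (sets of nodes) into an edge set.

    OR rule:  keep (r, t) if t is in N(r) or r is in N(t).
    AND rule: keep (r, t) only if t is in N(r) and r is in N(t).
    """
    edges = set()
    for r, neigh in nbrs.items():
        for t in neigh:
            a, b = min(r, t), max(r, t)
            if rule == "OR" or (a in nbrs.get(b, set()) and b in nbrs.get(a, set())):
                edges.add((a, b))
    return edges

# Node 1's estimate sees node 2, but node 2's estimate misses node 1.
nbrs = {0: {1}, 1: {0, 2}, 2: set()}
```

Theorem 2 guarantees that each neighborhood is exact with high probability, in which case the two rules coincide; they differ only on the asymmetric errors illustrated here.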
[Figure 1: Plots of success probability P[N̂(r) = N∗(r), ∀r ∈ V ] versus the control parameter β(n, p, d) = n/[20 d log(p)] for the Ising model on (a) a chain (line graph, d = 2), (b) a 4-nearest-neighbor grid graph (d = 4), and (c) a star graph (d = 0.1p), for both the greedy algorithm and node-wise ℓ1-regularized logistic regression, with p ∈ {36, 64, 100}. The coupling parameters are chosen randomly as θ∗st = ±0.50 for both methods. As our theorem suggests and these figures show, the greedy algorithm requires fewer samples to recover the exact structure of the graphical model.] 5 Experimental Results We now present experimental results that illustrate the power of Algorithm 2 and support our theoretical guarantees. We simulated structure learning of different graph structures and compared the learning rates of our method to that of node-wise ℓ1-regularized logistic regression as outlined in [20]. We performed experiments using 3 different graph structures: (a) chain (line graph), (b) 4-nearest neighbor (grid graph) and (c) star graph. For each experiment, we assumed a pairwise binary Ising model in which each θ∗rt = ±1 randomly. For each graph type, we generated a set of n i.i.d. samples {x(1), . . . , x(n)} using Gibbs sampling. We then attempted to learn the structure of the model using both Algorithm 2 as well as node-wise ℓ1-regularized logistic regression, and compared the actual graph structure with the empirically learned graph structures.
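The Gibbs sampler used to generate such synthetic data follows directly from the conditionals of the Ising model (2): P(xr = +1 | x_{V\r}) = σ(2(θr + Σ_t θrt xt)), with σ the logistic function. The paper does not give sampler details, so the burn-in and thinning below are illustrative choices of ours:

```python
import numpy as np

def gibbs_ising(theta_node, theta_edge, n_samples, burn=200, thin=5, seed=0):
    """Draw approximate samples from the +/-1 Ising model (2) by Gibbs sampling."""
    rng = np.random.default_rng(seed)
    p = len(theta_node)
    x = rng.choice([-1.0, 1.0], size=p)
    samples = []
    for sweep in range(burn + thin * n_samples):
        for r in range(p):
            # Local field theta_r + sum_{t != r} theta_rt x_t (diagonal excluded).
            field = theta_node[r] + theta_edge[r] @ x - theta_edge[r, r] * x[r]
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(x_r = +1 | rest)
            x[r] = 1.0 if rng.random() < p_plus else -1.0
        if sweep >= burn and (sweep - burn) % thin == 0:
            samples.append(x.copy())
    return np.array(samples)

# Chain 0 - 1 - 2 with coupling +0.8: adjacent spins should be positively correlated.
theta_edge = np.zeros((3, 3))
theta_edge[0, 1] = theta_edge[1, 0] = 0.8
theta_edge[1, 2] = theta_edge[2, 1] = 0.8
X = gibbs_ising(np.zeros(3), theta_edge, n_samples=1000)
```

The resulting sample matrix X is exactly the input expected by the neighborhood estimation procedure of Section 4.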
If the graph structures matched completely, then we declared the result a success; otherwise, we declared it a failure. We compared these results over a range of sample sizes n, and averaged the results for each sample size over a batch of size 10. For all greedy experiments we set the stopping threshold ǫS = c log(np)/n, where c is a tuning constant, as suggested by Theorem 2, and set the backward step factor ν = 0.5. For all logistic regression experiments we set the regularization parameter λn = c′ √(log(p)/n), where c′ was set via cross-validation. Figure 1 shows the results for the chain (d = 2), grid (d = 4) and star (d = 0.1p) graphs using both Algorithm 2 and node-wise ℓ1-regularized logistic regression, for three different graph sizes p ∈ {36, 64, 100} with mixed (random sign) couplings. For each sample size, we generated a batch of 10 different graphical models and averaged the probability of success (complete structure learned) over the batch. Each curve then represents the probability of success versus the control parameter β(n, p, d) = n/[20 d log(p)], which increases with the sample size n. These results support our theoretical claims and demonstrate the efficiency of the greedy method in comparison to node-wise ℓ1-regularized logistic regression [20]. 6 Acknowledgements We would like to acknowledge the support of NSF grant IIS-1018426. References [1] P. Abbeel, D. Koller, and A. Y. Ng. Learning factor graphs in polynomial time and sample complexity. Jour. Mach. Learning Res., 7:1743–1788, 2006. [2] A. Agarwal, S. Negahban, and M. Wainwright. Convergence rates of gradient methods for high-dimensional statistical recovery. In NIPS, 2010. [3] F. Bach. Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4:384–414, 2010. [4] G. Bresler, E. Mossel, and A. Sly. Reconstruction of Markov random fields from samples: Some easy observations and algorithms. In RANDOM, 2008. [5] E. Candes and T. Tao.
The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 2006. [6] S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Computing, 20(1):33–61, 1998. [7] D. Chickering. Learning Bayesian networks is NP-complete. Proceedings of AI and Statistics, 1995. [8] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans. Info. Theory, 14(3):462–467, 1968. [9] I. Csiszár and Z. Talata. Consistent estimation of the basic neighborhood structure of Markov random fields. The Annals of Statistics, 34(1):123–145, 2006. [10] C. Dahinden, M. Kalisch, and P. Buhlmann. Decomposition and model selection for large contingency tables. Biometrical Journal, 52(2):233–252, 2010. [11] S. Dasgupta. Learning polytrees. In Uncertainty in Artificial Intelligence, pages 134–141, 1999. [12] D. Donoho and M. Elad. Maximal sparsity representation via ℓ1 minimization. Proc. Natl. Acad. Sci., 100:2197–2202, March 2003. [13] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. Annals of Statistics, 28:337–374, 2000. [14] E. Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik, 31:253–258, 1925. [15] A. Jalali, P. Ravikumar, V. Vasuki, and S. Sanghavi. On learning discrete graphical models using group-sparse regularization. In Inter. Conf. on AI and Statistics (AISTATS) 14, 2011. [16] S.-I. Lee, V. Ganapathi, and D. Koller. Efficient structure learning of Markov networks using ℓ1-regularization. In Neural Information Processing Systems (NIPS) 19, 2007. [17] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3), 2006. [18] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Neural Information Processing Systems (NIPS) 22, 2009. [19] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Arxiv, 2010. [20] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. Annals of Statistics, 38(3):1287–1319, 2010. [21] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008. [22] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. MIT Press, 2000. [23] N. Srebro. Maximum likelihood bounded tree-width Markov networks. Artificial Intelligence, 143(1):123–138, 2003. [24] V. N. Temlyakov. Greedy approximation. Acta Numerica, 17:235–409, 2008. [25] S. van de Geer. High-dimensional generalized linear models and the lasso. The Annals of Statistics, 36:614–645, 2008. [26] D. J. A. Welsh. Complexity: Knots, Colourings, and Counting. LMS Lecture Note Series. Cambridge University Press, Cambridge, 1993. [27] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In Neural Information Processing Systems (NIPS) 21, 2008. [28] T. Zhang. On the consistency of feature selection using greedy least squares regression. Journal of Machine Learning Research, 10:555–568, 2009.
Regularized Laplacian Estimation and Fast Eigenvector Approximation Patrick O. Perry, Information, Operations, and Management Sciences, NYU Stern School of Business, New York, NY 10012, pperry@stern.nyu.edu; Michael W. Mahoney, Department of Mathematics, Stanford University, Stanford, CA 94305, mmahoney@cs.stanford.edu. Abstract: Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized Semi-Definite Programs (SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which ℓ2-regularized or ℓ1-regularized ℓ2-regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-based PageRank procedure for computing an approximation to the first nontrivial eigenvector of the graph Laplacian. Empirical results are also provided to illustrate the manner in which approximate eigenvector computation implicitly performs statistical regularization, relative to running the corresponding exact algorithm. 1 Introduction Approximation algorithms and heuristic approximations are commonly used to speed up the running time of algorithms in machine learning and data analysis.
In some cases, the outputs of these approximate procedures are “better” than the output of the more expensive exact algorithms, in the sense that they lead to more robust results or more useful results for the downstream practitioner. Recently, Mahoney and Orecchia formalized these ideas in the context of computing the first nontrivial eigenvector of a graph Laplacian [1]. Recall that, given a graph G on n nodes or equivalently its n×n Laplacian matrix L, the top nontrivial eigenvector of the Laplacian exactly optimizes the Rayleigh quotient, subject to the usual constraints. This optimization problem can equivalently be expressed as a vector optimization program with the objective function f(x) = xᵀLx, where x is an n-dimensional vector, or as a Semi-Definite Program (SDP) with objective function F(X) = Tr(LX), where X is an n × n symmetric positive semi-definite matrix. This first nontrivial vector is, of course, of widespread interest in applications due to its usefulness for graph partitioning, image segmentation, data clustering, semi-supervised learning, etc. [2, 3, 4, 5, 6, 7]. In this context, Mahoney and Orecchia asked the question: do popular diffusion-based procedures—such as running the Heat Kernel or performing a Lazy Random Walk or computing the PageRank function—to compute a quick approximation to the first nontrivial eigenvector of L solve some other regularized version of the Rayleigh quotient objective function exactly? Understanding this algorithmic-statistical tradeoff is clearly of interest if one is interested in very large-scale applications, where performing statistical analysis to derive an objective and then calling a black-box solver to optimize that objective exactly might be too expensive. Mahoney and Orecchia answered the above question in the affirmative, with the interesting twist that the regularization is on the SDP formulation rather than the usual vector optimization problem.
That is, these three diffusion-based procedures exactly optimize a regularized SDP with objective function F(X) + (1/η)G(X), for some regularization function G(·) to be described below, subject to the usual constraints. In this paper, we extend the Mahoney-Orecchia result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which ℓ2-regularized or ℓ1-regularized ℓ2-regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. In more detail, we will set up a sampling model, whereby the graph Laplacian is interpreted as an observation from a random process; we will posit the existence of a “population Laplacian” driving the random process; and we will then define an estimation problem: find the inverse of the population Laplacian. We will show that the maximum a posteriori probability (MAP) estimate of the inverse of the population Laplacian leads to a regularized SDP, where the objective function is F(X) = Tr(LX) and where the role of the penalty function G(·) is to encode prior assumptions about the population Laplacian. In addition, we will show that when G(·) is the log-determinant function, then the MAP estimate leads to the Mahoney-Orecchia regularized SDP corresponding to running the PageRank heuristic. Said another way, the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Moreover, by Mahoney and Orecchia’s main result, the solution to this regularized SDP can be computed very quickly: rather than solving the SDP with a black-box solver, and rather than explicitly computing the pseudoinverse of the Laplacian, one can simply run the fast diffusion-based PageRank heuristic for computing an approximation to the first nontrivial eigenvector of the Laplacian L.
The next section describes some background. Section 3 then describes a statistical framework for graph estimation; and Section 4 describes prior assumptions that can be made on the population Laplacian. These two sections will shed light on the computational implications associated with these prior assumptions; but more importantly they will shed light on the implicit prior assumptions associated with making certain decisions to speed up computations. Then, Section 5 will provide an empirical evaluation, and Section 6 will provide a brief conclusion. Additional discussion is available in the Appendix of the technical report version of this paper [8]. 2 Background on Laplacians and diffusion-based procedures A weighted symmetric graph G is defined by a vertex set V = {1, . . . , n}, an edge set E ⊂ V × V, and a weight function w : E → R+, where w is assumed to be symmetric (i.e., w(u, v) = w(v, u)). In this case, one can construct a matrix, L0 ∈ R^{V×V}, called the combinatorial Laplacian of G: L0(u, v) = −w(u, v) when u ≠ v, and L0(u, u) = d(u) − w(u, u), where d(u) = Σ_v w(u, v) is called the degree of u. By construction, L0 is positive semidefinite. Note that the all-ones vector, often denoted 1, is an eigenvector of L0 with eigenvalue zero, i.e., L0 1 = 0. For this reason, 1 is often called the trivial eigenvector of L0. Letting D be a diagonal matrix with D(u, u) = d(u), one can also define a normalized version of the Laplacian: L = D^{−1/2} L0 D^{−1/2}. Unless explicitly stated otherwise, when we refer to the Laplacian of a graph, we will mean the normalized Laplacian. In many situations, e.g., to perform spectral graph partitioning, one is interested in computing the first nontrivial eigenvector of a Laplacian. Typically, this vector is computed “exactly” by calling a black-box solver; but it could also be approximated with an iteration-based method (such as the Power Method or Lanczos Method) or by running a random walk-based or diffusion-based method to the asymptotic state.
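To make the definitions above concrete, the following minimal numpy sketch (the function name `laplacians` and the toy path graph are our own illustration, not from the paper) builds the combinatorial Laplacian L0 = D − W and the normalized Laplacian D^{−1/2} L0 D^{−1/2}, and checks the stated facts: L0 annihilates the all-ones vector, the corresponding trivial null vector of L is D^{1/2}1, and L is positive semidefinite.

```python
import numpy as np

def laplacians(W):
    """Given a symmetric weight matrix W, return the combinatorial
    Laplacian L0 = D - W and the normalized Laplacian D^{-1/2} L0 D^{-1/2}."""
    d = W.sum(axis=1)                      # degrees d(u) = sum_v w(u, v)
    L0 = np.diag(d) - W                    # combinatorial Laplacian
    Dh = np.diag(1.0 / np.sqrt(d))
    return L0, Dh @ L0 @ Dh, d

# Path graph on 3 nodes as a toy example.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L0, L, d = laplacians(W)

# The all-ones vector is the trivial eigenvector of L0 (eigenvalue 0),
# and D^{1/2} 1 plays the same role for the normalized Laplacian L.
assert np.allclose(L0 @ np.ones(3), 0)
assert np.allclose(L @ np.sqrt(d), 0)
assert np.all(np.linalg.eigvalsh(L) >= -1e-12)   # positive semidefinite
```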
These random walk-based or diffusion-based methods assign positive and negative “charge” to the nodes, and then they let the distribution of charge evolve according to dynamics derived from the graph structure. Three canonical evolution dynamics are the following: Heat Kernel. Here, the charge evolves according to the heat equation ∂Ht ∂t = −LHt. Thus, the vector of charges evolves as Ht = exp(−tL) = P∞ k=0 (−t)k k! Lk, where t ≥0 is a time parameter, times an input seed distribution vector. PageRank. Here, the charge at a node evolves by either moving to a neighbor of the current node or teleporting to a random node. More formally, the vector of charges evolves as Rγ = γ (I −(1 −γ) M)−1 , (1) 2 where M is the natural random walk transition matrix associated with the graph and where γ ∈(0, 1) is the so-called teleportation parameter, times an input seed vector. Lazy Random Walk. Here, the charge either stays at the current node or moves to a neighbor. Thus, if M is the natural random walk transition matrix associated with the graph, then the vector of charges evolves as some power of Wα = αI + (1 −α)M, where α ∈(0, 1) represents the “holding probability,” times an input seed vector. In each of these cases, there is a parameter (t, γ, and the number of steps of the Lazy Random Walk) that controls the “aggressiveness” of the dynamics and thus how quickly the diffusive process equilibrates; and there is an input “seed” distribution vector. Thus, e.g., if one is interested in global spectral graph partitioning, then this seed vector could be a vector with entries drawn from {−1, +1} uniformly at random, while if one is interested in local spectral graph partitioning [9, 10, 11, 12], then this vector could be the indicator vector of a small “seed set” of nodes. See Appendix A of [8] for a brief discussion of local and global spectral partitioning in this context. 
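As a sanity check on the PageRank dynamics of Eqn. (1), the sketch below applies R_γ = γ(I − (1 − γ)M)^{−1} to an indicator seed vector. We take M column-stochastic, M = W D^{−1}, which is one convention for the natural random walk; the function name and the toy graph are our own.

```python
import numpy as np

def pagerank_operator(W, gamma):
    """R_gamma = gamma * (I - (1 - gamma) M)^{-1}, with M = W D^{-1}
    the (column-stochastic) natural random walk transition matrix."""
    M = W / W.sum(axis=0)                  # normalize each column to sum 1
    n = W.shape[0]
    return gamma * np.linalg.inv(np.eye(n) - (1 - gamma) * M)

# Triangle graph; seed the diffusion at node 0.
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
seed = np.array([1., 0., 0.])
r = pagerank_operator(W, gamma=0.15) @ seed

assert np.isclose(r.sum(), 1.0)            # total charge is conserved
assert r[0] > r[1]                         # charge stays biased to the seed
# As gamma -> 0 the walk fully mixes: r approaches the stationary
# distribution, which is proportional to the degrees.
r0 = pagerank_operator(W, gamma=1e-6) @ seed
assert np.allclose(r0, W.sum(axis=0) / W.sum(), atol=1e-3)
```

The geometric-series view r = γ Σ_k (1 − γ)^k M^k seed makes the charge conservation explicit: each term M^k seed is a probability vector.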
Mahoney and Orecchia showed that these three dynamics arise as solutions to SDPs of the form minimize_X Tr(LX) + (1/η)G(X) subject to X ⪰ 0, Tr(X) = 1, X D^{1/2} 1 = 0, (2) where G is a penalty function (shown to be the generalized entropy, the log-determinant, and a certain matrix-p-norm, respectively [1]) and where η is a parameter related to the aggressiveness of the diffusive process [1]. Conversely, solutions to the regularized SDP of (2) for appropriate values of η can be computed exactly by running one of the above three diffusion-based procedures. Notably, when G = 0, the solution to the SDP of (2) is uu′, where u is the smallest nontrivial eigenvector of L. More generally and in this precise sense, the Heat Kernel, PageRank, and Lazy Random Walk dynamics can be seen as “regularized” versions of spectral clustering and Laplacian eigenvector computation. Intuitively, the function G(·) is acting as a penalty function, in a manner analogous to the ℓ2 or ℓ1 penalty in Ridge regression or Lasso regression, and by running one of these three dynamics one is implicitly making assumptions about the form of G(·). In this paper, we provide a statistical framework to make that intuition precise. 3 A statistical framework for regularized graph estimation Here, we will lay out a simple Bayesian framework for estimating a graph Laplacian. Importantly, this framework will allow for regularization by incorporating prior information. 3.1 Analogy with regularized linear regression It will be helpful to keep in mind the Bayesian interpretation of regularized linear regression. In that context, we observe n predictor-response pairs in R^p × R, denoted (x1, y1), . . . , (xn, yn); the goal is to find a vector β such that β′xi ≈ yi. Typically, we choose β by minimizing the residual sum of squares, i.e., F(β) = RSS(β) = Σ_i (yi − β′xi)², or a penalized version of it. For Ridge regression, we minimize F(β) + λ∥β∥_2^2; while for Lasso regression, we minimize F(β) + λ∥β∥_1.
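The claim that, when G = 0, the SDP of (2) is solved by uu′ with u the smallest nontrivial eigenvector of L can be checked numerically on a small random graph. The sketch below (our own illustration) verifies that uu′ is feasible and that its objective value λ2 lower-bounds Tr(LX) over random feasible rank-one candidates X = vv′.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random connected weighted graph and its normalized Laplacian.
n = 6
W = rng.random((n, n)); W = np.triu(W, 1); W = W + W.T
d = W.sum(axis=1)
Dh = np.diag(1 / np.sqrt(d))
L = Dh @ (np.diag(d) - W) @ Dh

vals, vecs = np.linalg.eigh(L)            # ascending eigenvalues
u = vecs[:, 1]                            # smallest *nontrivial* eigenvector
X_star = np.outer(u, u)                   # candidate solution u u'

# X_star is feasible: PSD by construction, unit trace, and orthogonal
# to the trivial direction D^{1/2} 1.
assert np.isclose(np.trace(X_star), 1.0)
assert abs(u @ np.sqrt(d)) < 1e-8

# Its objective value vals[1] = lambda_2 lower-bounds Tr(L vv') over
# random unit vectors v with the trivial direction projected out.
for _ in range(100):
    v = rng.standard_normal(n)
    v -= (v @ np.sqrt(d)) / d.sum() * np.sqrt(d)
    v /= np.linalg.norm(v)
    assert v @ L @ v >= vals[1] - 1e-9
```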
The additional terms in the optimization criteria (i.e., λ∥β∥_2^2 and λ∥β∥_1) are called penalty functions; and adding a penalty function to the optimization criterion can often be interpreted as incorporating prior information about β. For example, we can model y1, . . . , yn as independent random observations with distributions dependent on β. Specifically, we can suppose yi is a Gaussian random variable with mean β′xi and known variance σ². This induces a conditional density for the vector y = (y1, . . . , yn): p(y | β) ∝ exp{−F(β)/(2σ²)}, (3) where the constant of proportionality depends only on y and σ. Next, we can assume that β itself is random, drawn from a distribution with density p(β). This distribution is called a prior, since it encodes prior knowledge about β. Without loss of generality, the prior density can be assumed to take the form p(β) ∝ exp{−U(β)}. (4) Since the two random variables are dependent, upon observing y, we have information about β. This information is encoded in the posterior density, p(β | y), computed via Bayes’ rule as p(β | y) ∝ p(y | β) p(β) ∝ exp{−F(β)/(2σ²) − U(β)}. (5) The MAP estimate of β is the value that maximizes p(β | y); equivalently, it is the value of β that minimizes −log p(β | y). In this framework, we can recover the solution to Ridge regression or Lasso regression by setting U(β) = (λ/(2σ²))∥β∥_2^2 or U(β) = (λ/(2σ²))∥β∥_1, respectively. Thus, Ridge regression can be interpreted as imposing a Gaussian prior on β, and Lasso regression can be interpreted as imposing a double-exponential prior on β. 3.2 Bayesian inference for the population Laplacian For our problem, suppose that we have a connected graph with n nodes; or, equivalently, that we have L, the normalized Laplacian of that graph. We will view this observed graph Laplacian, L, as a “sample” Laplacian, i.e., as a random object whose distribution depends on a true “population” Laplacian, L.
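The Ridge/MAP equivalence described above can be verified numerically: under the Gaussian prior U(β) = (λ/(2σ²))∥β∥_2^2, the gradient of −log p(β | y) vanishes exactly at the closed-form Ridge solution. The data, the penalty λ, and the helper name below are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam, sigma2 = 50, 3, 2.0, 1.0
X = rng.standard_normal((n, p))
y = X @ np.array([1., -2., 0.5]) + rng.standard_normal(n)

# Ridge estimate: minimizer of F(beta) + lam * ||beta||_2^2.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def neg_log_posterior(b):
    """-log p(b | y) up to a constant: F(b)/(2 sigma^2) + U(b)."""
    F = np.sum((y - X @ b) ** 2)
    U = lam / (2 * sigma2) * (b @ b)       # Gaussian prior on beta
    return F / (2 * sigma2) + U

# The numerical gradient of -log p(beta | y) vanishes at beta_ridge,
# so the MAP estimate under the Gaussian prior is the Ridge solution.
eps = 1e-6
for j in range(p):
    e = np.zeros(p); e[j] = eps
    g = (neg_log_posterior(beta_ridge + e) - neg_log_posterior(beta_ridge - e)) / (2 * eps)
    assert abs(g) < 1e-4
```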
As with the linear regression example, this induces a conditional density for L, to be denoted p(L | L). Next, we can assume prior information about the population Laplacian in the form of a prior density, p(L); and, given the observed Laplacian, we can estimate the population Laplacian by maximizing its posterior density, p(L | L). Thus, to apply the Bayesian formalism, we need to specify the conditional density of L given L. In the context of linear regression, we assumed that the observations followed a Gaussian distribution. A graph Laplacian is not just a single observation—it is a positive semidefinite matrix with a very specific structure. Thus, we will take L to be a random object with expectation L, where L is another normalized graph Laplacian. Although, in general, L can be distinct from L, we will require that the nodes in the population and sample graphs have the same degrees. That is, if d = (d(1), . . . , d(n)) denotes the “degree vector” of the graph, and D = diag(d(1), . . . , d(n)), then we can define X = {X : X ⪰ 0, X D^{1/2} 1 = 0, rank(X) = n − 1}, (6) in which case the population Laplacian and the sample Laplacian will both be members of X. To model L, we will choose a distribution for positive semi-definite matrices analogous to the Gaussian distribution: a scaled Wishart matrix with expectation L. Note that, although it captures the trait that L is positive semi-definite, this distribution does not accurately model every feature of L. For example, a scaled Wishart matrix does not necessarily have ones along its diagonal. However, the mode of the density is at L, a Laplacian; and for large values of the scale parameter, most of the mass will be on matrices close to L. Appendix B of [8] provides a more detailed heuristic justification for the use of the Wishart distribution. To be more precise, let m ≥ n − 1 be a scale parameter, and suppose that L is distributed over X as a (1/m)·Wishart(L, m) random variable.
Then, E[L | L] = L, and L has conditional density p(L | L) ∝ exp{−(m/2) Tr(L L⁺)} / |L|^{m/2}, (7) where | · | denotes pseudodeterminant (product of nonzero eigenvalues). The constant of proportionality depends only on L, d, m, and n; and we emphasize that the density is supported on X. Eqn. (7) is analogous to Eqn. (3) in the linear regression context, with 1/m, the inverse of the sample size parameter, playing the role of the variance parameter σ². Next, suppose we know that L is a random object drawn from a prior density p(L). Without loss of generality, p(L) ∝ exp{−U(L)}, (8) for some function U, supported on a subset X̄ ⊆ X. Eqn. (8) is analogous to Eqn. (4) from the linear regression example. Upon observing L, the posterior distribution for L is p(L | L) ∝ p(L | L) p(L) ∝ exp{−(m/2) Tr(L L⁺) + (m/2) log|L⁺| − U(L)}, (9) with support determined by X̄. Eqn. (9) is analogous to Eqn. (5) from the linear regression example. If we denote by L̂ the MAP estimate of L, then it follows that L̂⁺ is the solution to the program minimize_X Tr(LX) + (2/m)U(X⁺) − log|X| subject to X ∈ X̄ ⊆ X. (10) Note the similarity with the Mahoney-Orecchia regularized SDP of (2). In particular, if X̄ = {X : Tr(X) = 1} ∩ X, then the two programs are identical except for the factor of log|X| in the optimization criterion. 4 A prior related to the PageRank procedure Here, we will present a prior distribution for the population Laplacian that will allow us to leverage the estimation framework of Section 3; and we will show that the MAP estimate of L for this prior is related to the PageRank procedure via the Mahoney-Orecchia regularized SDP. Appendix C of [8] presents priors that lead to the Heat Kernel and Lazy Random Walk in an analogous way; in both of these cases, however, the priors are data-dependent in the strong sense that they explicitly depend on the number of data points.
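The scaled-Wishart model can be simulated directly: a draw of (1/m)·Wishart(L, m) is the average of m outer products of N(0, L) vectors. The sketch below (helper names and the triangle-graph example are ours) checks two properties used above: E[L | L] = L, and every draw stays in the support set X, i.e., it shares the null space spanned by D^{1/2}1.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small population Laplacian: the normalized Laplacian of a triangle.
W = np.ones((3, 3)) - np.eye(3)
d = W.sum(axis=1)
Dh = np.diag(1 / np.sqrt(d))
L_pop = Dh @ (np.diag(d) - W) @ Dh

def scaled_wishart(Sigma, m, rng):
    """One draw of (1/m) Wishart(Sigma, m): the average of m outer
    products of N(0, Sigma) vectors.  Works for rank-deficient Sigma."""
    vals, vecs = np.linalg.eigh(Sigma)
    root = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None)))
    Z = root @ rng.standard_normal((Sigma.shape[0], m))
    return Z @ Z.T / m

m = 20
draws = [scaled_wishart(L_pop, m, rng) for _ in range(2000)]
mean = np.mean(draws, axis=0)

# The sample mean approaches the population Laplacian, and each draw
# annihilates the trivial direction D^{1/2} 1 (up to floating point).
assert np.allclose(mean, L_pop, atol=0.05)
assert np.allclose(draws[0] @ np.sqrt(d), 0, atol=1e-6)
```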
4.1 Prior density The prior we will present will be based on neutrality and invariance conditions; and it will be supported on X, i.e., on the subset of positive-semidefinite matrices that was the support set for the conditional density defined in Eqn. (7). In particular, recall that, in addition to being positive semidefinite, every matrix in the support set has rank n − 1 and satisfies X D^{1/2} 1 = 0. Note that because the prior depends on the data (via the orthogonality constraint induced by D), this is not a prior in the fully Bayesian sense; instead, the prior can be considered as part of an empirical or pseudo-Bayes estimation procedure. The prior we will specify depends only on the eigenvalues of the normalized Laplacian, or equivalently on the eigenvalues of the pseudoinverse of the Laplacian. Let L⁺ = τ O Λ O′ be the spectral decomposition of the pseudoinverse of the normalized Laplacian L, where τ ≥ 0 is a scale factor, O ∈ R^{n×(n−1)} is an orthogonal matrix, and Λ = diag(λ(1), . . . , λ(n − 1)), where Σ_v λ(v) = 1. Note that the values λ(1), . . . , λ(n − 1) are unordered and that the vector λ = (λ(1), . . . , λ(n − 1)) lies in the unit simplex. If we require that the distribution for λ be exchangeable (invariant under permutations) and neutral (λ(v) independent of the vector (λ(u)/(1 − λ(v)) : u ≠ v), for all v), then the only non-degenerate possibility is that λ is Dirichlet-distributed with parameter vector (α, . . . , α) [13]. The parameter α, to which we refer as the “shape” parameter, must satisfy α > 0 for the density to be defined. In this case, p(L) ∝ p(τ) ∏_{v=1}^{n−1} λ(v)^{α−1}, (11) where p(τ) is a prior for τ. Thus, the prior weight on L only depends on τ and Λ. One implication is that the prior is “nearly” rotationally invariant, in the sense that p(P′LP) = p(L) for any rank-(n − 1) projection matrix P satisfying P D^{1/2} 1 = 0. 4.2 Posterior estimation and connection to PageRank To analyze the MAP estimate associated with the prior of Eqn.
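The qualitative effect of the shape parameter α on the Dirichlet prior (studied empirically in Section 5.1) is easy to reproduce: small α spreads mass unevenly across the simplex, so the top order statistics separate from the bulk, while large α concentrates all components near 1/(n − 1). The sketch below (function name, replicate counts, and the two α values are our own choices) averages sorted Dirichlet draws.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_order_stats(alpha, dim, reps, rng):
    """Average sorted components of Dirichlet(alpha, ..., alpha) draws."""
    samples = rng.dirichlet(np.full(dim, alpha), size=reps)
    return np.sort(samples, axis=1).mean(axis=0)

dim = 41                                 # n - 1 for the n = 42 node graph
small = mean_order_stats(0.1, dim, 500, rng)    # small shape parameter
large = mean_order_stats(10.0, dim, 500, rng)   # large shape parameter

# Each draw lies on the simplex, so the mean order statistics sum to 1.
assert np.isclose(small.sum(), 1.0) and np.isclose(large.sum(), 1.0)
# Small alpha: top order statistic well-separated from the second;
# large alpha: order statistics concentrate near 1/(n - 1).
assert small[-1] / small[-2] > large[-1] / large[-2]
assert abs(large[-1] - 1/dim) < abs(small[-1] - 1/dim)
```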
(11) and to explain its connection with the PageRank dynamics, the following proposition is crucial. Proposition 4.1. Suppose the conditional likelihood for L given L is as defined in (7) and the prior density for L is as defined in (11). Define L̂ to be the MAP estimate of L. Then, [Tr(L̂⁺)]⁻¹ L̂⁺ solves the Mahoney-Orecchia regularized SDP (2), with G(X) = −log|X| and η as given in Eqn. (12) below. Proof. For L in the support set of the posterior, define τ = Tr(L⁺) and Θ = τ⁻¹ L⁺, so that Tr(Θ) = 1. Further, rank(Θ) = n − 1. Express the prior in the form of Eqn. (8) with function U given by U(L) = −log{p(τ) |Θ|^{α−1}} = −(α − 1) log|Θ| − log p(τ), where, as before, | · | denotes pseudodeterminant. Using (9) and the relation |L⁺| = τ^{n−1}|Θ|, the posterior density for L given L is p(L | L) ∝ exp{ −(mτ/2) Tr(LΘ) + ((m + 2(α − 1))/2) log|Θ| + g(τ) }, where g(τ) = (m(n − 1)/2) log τ + log p(τ). Suppose L̂ maximizes the posterior likelihood. Define τ̂ = Tr(L̂⁺) and Θ̂ = [τ̂]⁻¹ L̂⁺. In this case, Θ̂ must minimize the quantity Tr(LΘ̂) − (1/η) log|Θ̂|, where η = mτ̂ / (m + 2(α − 1)). (12) Thus Θ̂ solves the regularized SDP (2) with G(X) = −log|X|. Mahoney and Orecchia showed that the solution to (2) with G(X) = −log|X| is closely related to the PageRank matrix, R_γ, defined in Eqn. (1). By combining Proposition 4.1 with their result, we get that the MAP estimate of L satisfies L̂⁺ ∝ D^{−1/2} R_γ D^{1/2}; conversely, R_γ ∝ D^{1/2} L̂⁺ D^{−1/2}. Thus, the PageRank operator of Eqn. (1) can be viewed as a degree-scaled regularized estimate of the pseudoinverse of the Laplacian. Moreover, prior assumptions about the spectrum of the graph Laplacian have direct implications on the optimal teleportation parameter. Specifically, Mahoney and Orecchia’s Lemma 2 shows how η is related to the teleportation parameter γ, and Eqn. (12) shows how the optimal η is related to prior assumptions about the Laplacian.
5 Empirical evaluation In this section, we provide an empirical evaluation of the performance of the regularized Laplacian estimator, compared with the unregularized estimator. To do this, we need a ground truth population Laplacian L and a noisily-observed sample Laplacian L. Thus, in Section 5.1, we construct a family of distributions for L; importantly, this family will be able to represent both low-dimensional graphs and expander-like graphs. Interestingly, the prior of Eqn. (11) captures some of the qualitative features of both of these types of graphs (as the shape parameter is varied). Then, in Section 5.2, we describe a sampling procedure for L which, superficially, has no relation to the scaled Wishart conditional density of Eqn. (7). Despite this model misspecification, the regularized estimator L̂_η outperforms L for many choices of the regularization parameter η. 5.1 Ground truth generation and prior evaluation The ground truth graphs we generate are motivated by the Watts-Strogatz “small-world” model [14]. To generate a ground truth population Laplacian, L—equivalently, a population graph—we start with a two-dimensional lattice of width w and height h, and thus n = wh nodes. Points in the lattice are connected to their four nearest neighbors, making adjustments as necessary at the boundary. We then perform s edge-swaps: for each swap, we choose two edges uniformly at random and then we swap the endpoints. For example, if we sample edges i1 ∼ j1 and i2 ∼ j2, then we replace these edges with i1 ∼ j2 and i2 ∼ j1. Thus, when s = 0, the graph is the original discretization of a low-dimensional space; and as s increases to infinity, the graph becomes more and more like a uniformly chosen 4-regular graph (which is an expander [15] and which bears similarities with an Erdős-Rényi random graph [16]). Indeed, each edge swap is a step of the Metropolis algorithm toward a uniformly chosen random graph with a fixed degree sequence.
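The lattice-plus-edge-swaps construction above can be sketched in a few lines; the helper names are ours, and we do not reject swaps that create duplicate edges or self-loops, which the actual Metropolis procedure may handle differently.

```python
import numpy as np

rng = np.random.default_rng(4)

def grid_edges(w, h):
    """Edges of a w-by-h lattice with 4-nearest-neighbor connectivity."""
    node = lambda x, y: y * w + x
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w: edges.append((node(x, y), node(x + 1, y)))
            if y + 1 < h: edges.append((node(x, y), node(x, y + 1)))
    return edges

def edge_swap(edges, s, rng):
    """Perform s swaps: pick two edges at random and exchange endpoints,
    replacing i1~j1 and i2~j2 with i1~j2 and i2~j1."""
    edges = list(edges)
    for _ in range(s):
        a, b = rng.choice(len(edges), size=2, replace=False)
        (i1, j1), (i2, j2) = edges[a], edges[b]
        edges[a], edges[b] = (i1, j2), (i2, j1)
    return edges

w, h = 6, 7
edges = grid_edges(w, h)
assert len(edges) == 2 * w * h - w - h == 71   # mu = 71, as in the paper
swapped = edge_swap(edges, 32, rng)
assert len(swapped) == 71                      # swaps preserve the edge count
```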
For the empirical evaluation presented here, h = 7 and w = 6; but the results are qualitatively similar for other values. Figure 1 compares the expected order statistics (sorted values) for the Dirichlet prior of Eqn. (11) with the expected eigenvalues of Θ = L⁺/Tr(L⁺) for the small-world model. In particular, in Figure 1(a), we show the behavior of the order statistics of a Dirichlet distribution on the (n − 1)-dimensional simplex with scalar shape parameter α, as a function of α. For each value of the shape α, we generated a random (n − 1)-dimensional Dirichlet vector, λ, with parameter vector (α, . . . , α); we computed the n − 1 order statistics of λ by sorting its components; and we repeated this procedure for 500 replicates and averaged the values. Figure 1(b) shows a corresponding plot for the ordered eigenvalues of Θ. For each value of s (normalized, here, by the number of edges µ, where µ = 2wh − w − h = 71), we generated the normalized Laplacian, L, corresponding to the random s-edge-swapped grid; we computed the n − 1 nonzero eigenvalues of Θ; and we performed 1000 replicates of this procedure and averaged the resulting eigenvalues. Interestingly, the behavior of the spectrum of the small-world model as the edge-swaps increase is qualitatively quite similar to the behavior of the Dirichlet prior order statistics as the shape parameter α increases.
[Figure 1: Analytical and empirical priors. (a) Dirichlet distribution order statistics; (b) spectrum of the inverse Laplacian. 1(a) shows the Dirichlet distribution order statistics versus the shape parameter; and 1(b) shows the spectrum of Θ as a function of the rewiring parameter.]
In particular, note that for small values of the shape parameter α the first few order statistics are well-separated from the rest; and that as α increases, the order statistics become concentrated around 1/(n − 1). Similarly, when the edge-swap parameter s = 0, the top two eigenvalues (corresponding to the width-wise and height-wise coordinates on the grid) are well-separated from the bulk; as s increases, the top eigenvalues quickly merge into the bulk; and eventually, as s goes to infinity, the distribution becomes very close to that of a uniformly chosen 4-regular graph. 5.2 Sampling procedure, estimation performance, and optimal regularization behavior Finally, we evaluate the estimation performance of a regularized estimator of the graph Laplacian and compare it with an unregularized estimate. To do so, we construct the population graph G and its Laplacian L, for a given value of s, as described in Section 5.1. Let µ be the number of edges in G. The sampling procedure used to generate the observed graph G and its Laplacian L is parameterized by the sample size m. (Note that this parameter is analogous to the Wishart scale parameter in Eqn. (7), but here we are sampling from a different distribution.) We randomly choose m edges with replacement from G; and we define the sample graph G and corresponding Laplacian L by setting the weight of i ∼ j equal to the number of times we sampled that edge. Note that the sample graph G over-counts some edges in G and misses others. We then compute the regularized estimate L̂_η, up to a constant of proportionality, by solving (implicitly!) the Mahoney-Orecchia regularized SDP (2) with G(X) = −log|X|. We define the unregularized estimate L̂ to be equal to the observed Laplacian, L. Given a population Laplacian L, we define τ = τ(L) = Tr(L⁺) and Θ = Θ(L) = τ⁻¹ L⁺. We define τ̂_η, τ̂, Θ̂_η, and Θ̂ similarly to the population quantities.
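The edge-sampling procedure and the trace-normalized pseudoinverse Θ are easy to sketch; for simplicity we use a complete graph on 5 nodes rather than the swapped grid, compute only the unregularized estimate (the regularized one would require solving the SDP or running PageRank), and resample in the rare event that a node is isolated, so the normalized Laplacian is defined. These simplifications and all helper names are our own.

```python
import numpy as np

rng = np.random.default_rng(5)

def normalized_laplacian(W):
    d = W.sum(axis=1)
    Dh = np.diag(1.0 / np.sqrt(d))
    return Dh @ (np.diag(d) - W) @ Dh

def theta(L):
    """Theta(L) = L^+ / Tr(L^+): the trace-normalized pseudoinverse."""
    Lp = np.linalg.pinv(L)
    return Lp / np.trace(Lp)

# Population graph: the complete graph on 5 nodes (10 edges).
n = 5
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
Theta_pop = theta(normalized_laplacian(np.ones((n, n)) - np.eye(n)))

# Sample m edges with replacement; the weight of edge i~j in the observed
# graph is the number of times it was sampled, so the sample graph
# over-counts some edges and misses others.
m = 50
while True:
    counts = np.bincount(rng.integers(len(edges), size=m), minlength=len(edges))
    W_obs = np.zeros((n, n))
    for (i, j), c in zip(edges, counts):
        W_obs[i, j] = W_obs[j, i] = c
    if (W_obs.sum(axis=1) > 0).all():      # every node has positive degree
        break

# Unregularized estimate: the observed Laplacian itself; its Theta enters
# the denominator of the relative Frobenius error of Section 5.2.
Theta_obs = theta(normalized_laplacian(W_obs))
err_unreg = np.linalg.norm(Theta_pop - Theta_obs)
assert np.isclose(np.trace(Theta_obs), 1.0)
```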
Our performance criterion is the relative Frobenius error ∥Θ − Θ̂_η∥_F / ∥Θ − Θ̂∥_F, where ∥·∥_F denotes the Frobenius norm (∥A∥_F = [Tr(A′A)]^{1/2}). Appendix D of [8] presents similar results when the performance criterion is the relative spectral norm error. Figures 2(a), 2(b), and 2(c) show the regularization performance when s = 4 (an intermediate value) for three different values of m/µ. In each case, the mean error and one standard deviation around it are plotted as a function of η/τ̄, as computed from 100 replicates; here, τ̄ is the mean value of τ over all replicates. The implicit regularization clearly improves the performance of the estimator for a large range of η values. (Note that the regularization parameter in the regularized SDP (2) is 1/η, and thus smaller values along the X-axis correspond to stronger regularization.) In particular, when the data are very noisy, e.g., when m/µ = 0.2, as in Figure 2(a), improved results are seen only for very strong regularization; for intermediate levels of noise, e.g., m/µ = 1.0, as in Figure 2(b) (in which case m is chosen such that G and G have the same number of edges, counting multiplicity), improved performance is seen for a wide range of values of η; and for low levels of noise, Figure 2(c) illustrates that improved results are obtained for moderate levels of implicit regularization. Figures 2(d) and 2(e) illustrate similar results for s = 0 and s = 32.
[Figure 2: Regularization performance. 2(a) through 2(e) plot the relative Frobenius norm error versus the (normalized) regularization parameter η/τ̄, for various values of the (normalized) number of edges m/µ and the edge-swap parameter s: (a) m/µ = 0.2, s = 4; (b) m/µ = 1.0, s = 4; (c) m/µ = 2.0, s = 4; (d) m/µ = 2.0, s = 0; (e) m/µ = 2.0, s = 32. Recall that the regularization parameter in the regularized SDP (2) is 1/η, and thus smaller values along the X-axis correspond to stronger regularization. 2(f) plots the optimal regularization parameter η∗/τ̄ as a function of sample proportion for different fractions of edge swaps.]
As when regularization is implemented explicitly, in all these cases, we observe a “sweet spot” where there is an optimal value for the implicit regularization parameter. Figure 2(f) illustrates how the optimal choice of η depends on parameters defining the population Laplacians and sample Laplacians. In particular, it illustrates how η∗, the optimal value of η (normalized by τ̄), depends on the sampling proportion m/µ and the swaps per edge s/µ. Observe that as the sample size m increases, η∗ converges monotonically to τ̄; and, further, that higher values of s (corresponding to more expander-like graphs) correspond to higher values of η∗.
Both of these observations are in direct agreement with Eqn. (12). 6 Conclusion We have provided a statistical interpretation for the observation that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve a certain regularized version of the problem. One might be tempted to view our results as “unfortunate,” in that it is not straightforward to interpret the priors presented in this paper. Instead, our results should be viewed as making explicit the implicit prior assumptions associated with making certain decisions (that are already made in practice) to speed up computations. Several extensions suggest themselves. The most obvious might be to try to obtain Proposition 4.1 with a more natural or empirically-plausible model than the Wishart distribution; to extend the empirical evaluation to much larger and more realistic data sets; to apply our methodology to other widely-used approximation procedures; and to characterize when implicitly regularizing an eigenvector leads to better statistical behavior in downstream applications where that eigenvector is used. More generally, though, we expect that understanding the algorithmic-statistical tradeoffs that we have illustrated will become increasingly important in very large-scale data analysis applications.
References
[1] M. W. Mahoney and L. Orecchia. Implementing regularization implicitly via approximate eigenvector computation. In Proceedings of the 28th International Conference on Machine Learning, pages 121–128, 2011.
[2] D. A. Spielman and S.-H. Teng. Spectral partitioning works: Planar graphs and finite element meshes. In FOCS ’96: Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science, pages 96–107, 1996.
[3] S. Guattery and G. L. Miller. On the quality of spectral separators. SIAM Journal on Matrix Analysis and Applications, 19:701–719, 1998.
[4] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[5] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[6] T. Joachims. Transductive learning via spectral graph partitioning. In Proceedings of the 20th International Conference on Machine Learning, pages 290–297, 2003.
[7] J. Leskovec, K. J. Lang, A. Dasgupta, and M. W. Mahoney. Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters. Internet Mathematics, 6(1):29–123, 2009. Also available at: arXiv:0810.1355.
[8] P. O. Perry and M. W. Mahoney. Regularized Laplacian estimation and fast eigenvector approximation. Technical report. Preprint: arXiv:1110.1757 (2011).
[9] D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In STOC ’04: Proceedings of the 36th Annual ACM Symposium on Theory of Computing, pages 81–90, 2004.
[10] R. Andersen, F. R. K. Chung, and K. Lang. Local graph partitioning using PageRank vectors. In FOCS ’06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 475–486, 2006.
[11] F. R. K. Chung. The heat kernel as the PageRank of a graph. Proceedings of the National Academy of Sciences of the United States of America, 104(50):19735–19740, 2007.
[12] M. W. Mahoney, L. Orecchia, and N. K. Vishnoi. A spectral algorithm for improving graph partitions with applications to exploring data graphs locally. Technical report. Preprint: arXiv:0912.0681 (2009).
[13] J. Fabius. Two characterizations of the Dirichlet distribution. The Annals of Statistics, 1(3):583–587, 1973.
[14] D. J. Watts and S. H. Strogatz. Collective dynamics of small-world networks. Nature, 393:440–442, 1998.
[15] S. Hoory, N. Linial, and A. Wigderson. Expander graphs and their applications. Bulletin of the American Mathematical Society, 43:439–561, 2006.
[16] B. Bollobás. Random Graphs. Academic Press, London, 1985.
Learning person-object interactions for action recognition in still images Vincent Delaitre∗ École Normale Supérieure Josef Sivic∗ INRIA Paris - Rocquencourt Ivan Laptev∗ INRIA Paris - Rocquencourt Abstract We investigate a discriminatively trained model of person-object interactions for recognizing common human actions in still images. We build on the locally order-less spatial pyramid bag-of-features model, which was shown to perform extremely well on a range of object, scene and human action recognition tasks. We introduce three principal contributions. First, we replace the standard quantized local HOG/SIFT features with stronger discriminatively trained body part and object detectors. Second, we introduce new person-object interaction features based on spatial co-occurrences of individual body parts and objects. Third, we address the combinatorial problem of a large number of possible interaction pairs and propose a discriminative selection procedure using a linear support vector machine (SVM) with a sparsity inducing regularizer. Learning of action-specific body part and object interactions bypasses the difficult problem of estimating the complete human body pose configuration. Benefits of the proposed model are shown on human action recognition in consumer photographs, outperforming the strong bag-of-features baseline. 1 Introduction Human actions are ubiquitous and represent essential information for understanding the content of many still images such as consumer photographs, news images, sparsely sampled surveillance videos, and street-side imagery. Automatic recognition of human actions and interactions, however, remains a very challenging problem. The key difficulty stems from the fact that the imaged appearance of a person performing a particular action can vary significantly due to many factors such as camera viewpoint, person’s clothing, occlusions, variation of body pose, object appearance and the layout of the scene.
In addition, motion cues often used to disambiguate actions in video [6, 27, 31] are not available in still images. In this work, we seek to recognize common human actions, such as "walking", "running" or "reading a book", in challenging realistic images. As opposed to action recognition in video [6, 27, 31], action recognition in still images has received relatively little attention. A number of previous works [21, 24, 37] focus on exploiting body pose as a cue for action recognition. In particular, several methods address joint modeling of human poses, objects and relations among them [21, 40]. Reliable estimation of body configurations for people in arbitrary poses, however, remains a very challenging research problem. Less structured representations, e.g. [11, 39], have recently emerged as a promising alternative demonstrating state-of-the-art results for action recognition in static images. In this work, we investigate discriminatively trained models of interactions between objects and human body parts. We build on the locally orderless statistical representations based on spatial pyramids [28] and bag-of-features models [9, 16, 34], which have demonstrated excellent performance on a range of scene [28], object [22, 36, 41] and action [11] recognition tasks. Rather than relying on accurate estimation of body part configurations or accurate object detection in the image, we represent human actions as locally orderless distributions over body parts and objects together with their interactions. By opportunistically learning class-specific object and body part interactions (e.g. relative configuration of leg and horse detections for the riding horse action, see Figure 1), we avoid the extremely challenging task of estimating the full body configuration. ∗WILLOW project, Laboratoire d'Informatique de l'École Normale Supérieure, ENS/INRIA/CNRS UMR 8548, Paris, France
Towards this goal, we consider the following challenges: (i) what should be the representation of object and body part appearance; (ii) how to model object and human body part interactions; and (iii) how to choose suitable interaction pairs in the huge space of all possible combinations and relative configurations of objects and body parts. To address these challenges, we introduce the following three contributions. First, we replace the quantized HOG/SIFT features, typically used in bag-of-features models [11, 28, 36], with powerful, discriminatively trained, local object and human body part detectors [7, 25]. This significantly enhances generalization over appearance variation, due to e.g. clothing or viewpoint, while providing a reliable signal on part locations. Second, we develop a part interaction representation, capturing pair-wise relative position and scale between object/body parts, and include this representation in a scale-space spatial pyramid model. Third, rather than choosing interacting parts manually, we select them in a discriminative fashion. Suitable pair-wise interactions are first chosen from a large pool of hundreds of thousands of candidate interactions using a linear support vector machine (SVM) with a sparsity-inducing regularizer. The selected interaction features are then input into a final, more computationally expensive, non-linear SVM classifier based on the locally orderless spatial pyramid representation. 2 Related work Modeling person-object interactions for action recognition has recently attracted significant attention. Gupta et al. [21], Wang et al. [37], and Yao and Fei-Fei [40] develop joint models of body pose configuration and object location within the image.
While great progress has been made on estimating body pose configurations [5, 19, 25, 33], inferring accurate human body pose in images of common actions in consumer photographs remains an extremely challenging problem due to a significant amount of occlusions, partial truncation by image boundaries or objects in the scene, non-upright poses, and large variability in camera viewpoint. While we build on the recent body pose estimation work by using strong pose-specific body part models [7, 25], we explicitly avoid inferring the complete body configuration. In a similar spirit, Desai et al. [13] avoid inferring body configuration by representing a small set of body postures using single HOG templates and represent relative position of the entire person and an object using simple relations (e.g. above, to the left). They do not explicitly model body parts and their interactions with objects as we do in this work. Yang et al. [38] model the body pose as a latent variable for action recognition. Unlike our method, however, they do not attempt to model interactions between people (their body parts) and objects. In a recent work, Maji et al. [30] also represent people by activation responses of body part detectors (rather than inferring the actual body pose); however, they model only interactions between person and object bounding boxes, not considering individual body parts, as we do in this work. Learning spatial groupings of low-level (SIFT) features for recognizing person-object interactions has been explored by Yao and Fei-Fei [39]. While we also learn spatial interactions, we build on powerful body part and object detectors pre-learnt on separate training data, providing a degree of generalization over appearance (e.g. clothing), viewpoint and illumination variation. Unlike [39], we deploy discriminative selection of interactions using an SVM with a sparsity-inducing regularizer.
Spatial-pyramid based bag-of-features models have demonstrated excellent performance on action recognition in still images [1, 11], outperforming body pose based methods [21] or grouplet models [40] on their datasets [11]. We build on these locally orderless representations but replace the low-level features (HOG) with strong pre-trained detectors. Similarly, the object-bank representation [29], where natural scenes are represented by response vectors of densely applied pre-trained object detectors, has shown great promise for scene recognition. The work in [29], however, does not attempt to model people, body parts and their interactions with objects. Related work also includes models of contextual spatial and co-occurrence relationships between objects [12, 32] as well as objects and the scene [22, 23, 35]. Object part detectors trained from labelled data also form a key ingredient of attribute-based object representations [15, 26]. While we build on this body of work, these approaches do not model interactions of people and their body parts with objects and focus on object/scene recognition rather than recognition of human actions.
Figure 1: Representing person-object interactions by pairs of body part (cyan) and object (blue) detectors. To get a strong interaction response, the pair of detectors (here visualized at positions p_i and p_j) must fire in a particular relative 3D scale-space displacement (given by the vector v) with a scale-space displacement uncertainty (deformation cost) given by a diagonal 3×3 covariance matrix C (the spatial part of C is visualized as a yellow dotted ellipse). Our image representation is defined by the max-pooling of interaction responses over the whole image, solved efficiently by the distance transform.
3 Representing person-object interactions This section describes our image representation in terms of body parts, objects and interactions among them.
3.1 Representing body parts and objects We assume we have a set of n available detectors d_1, ..., d_n which have been pre-trained for different body parts and object classes. Each detector i produces a map of dense 3D responses d_i(I, p) over locations and scales of a given image I. We express the positions of detections p in terms of scale-space coordinates p = (x, y, σ), where (x, y) corresponds to the spatial location and σ = log σ̃ is an additive scale parameter, log-related to the image scale factor σ̃, making the addition in the position vector space meaningful. In this paper we use two types of detectors. For objects we use the LSVM detector [17] trained on PASCAL VOC images for ten object classes (bicycle, car, chair, cow, dining table, horse, motorbike, person, sofa, tv/monitor). For body parts we implement the method of [25] and train ten body part detectors (head, torso, and {left, right} × {forearm, upper arm, lower leg, thigh}) for each of sixteen pose clusters, giving 160 body part detectors in total (see [25] for further details). Both of our detectors use Histograms of Oriented Gradients (HOG) [10] as an underlying low-level image representation. 3.2 Representing pairwise interactions We define interactions by the pairs of detectors (d_i, d_j) as well as by the spatial and scale relations among them. Each pair of detectors constitutes a two-node tree where the position and the scale of the leaf are related to the root by a scale-space offset and a spatial deformation cost. More precisely, an interaction pair is defined by a quadruplet q = (i, j, v, C) ∈ N × N × R^3 × M_{3×3}, where i and j are the indices of the detectors at the root and leaf, v is the offset of the leaf relative to the root, and C is a 3 × 3 diagonal matrix defining the displacement cost of the leaf with respect to its expected position.
Figure 1 illustrates an example of an interaction between a horse and the left thigh for the horse riding action. We measure the response of the interaction q located at the root position p_1 by:
r(I, q, p_1) = max_{p_2} [ d_i(I, p_1) + d_j(I, p_2) − u^T C u ],   (1)
where u = p_2 − (p_1 + v) is the displacement vector corresponding to the drift of the leaf node with respect to its expected position (p_1 + v). Maximizing over p_2 in (1) provides localization of the leaf node with the optimal trade-off between the detector score and the displacement cost. For any interaction q we compute its responses for all pairs of node positions p_1, p_2. We do this efficiently in linear time with respect to p using the distance transform [18]. 3.3 Representing images by response vectors of pair-wise interactions Given a set of M interaction pairs q_1, · · · , q_M, we wish to aggregate their responses (1) over an image region A. Here A can be (i) an (extended) person bounding box, as used for selecting discriminative interaction features (Section 4.2), or (ii) a cell of the scale-space pyramid representation, as used in the final non-linear classifier (Section 4.3). We define the score s(I, q, A) of an interaction pair q within a region A of an image I by max-pooling, i.e. as the maximum response of the interaction pair within A:
s(I, q, A) = max_{p ∈ A} r(I, q, p).   (2)
An image region A is then represented by an M-vector of interaction pair scores
z = (s_1, · · · , s_M) with s_i = s(I, q_i, A).   (3)
4 Learning person-object interactions Given the object and body part interaction pairs q introduced in the previous section, we wish to use them for action classification in still images. A brute-force approach of analyzing all possible interactions, however, is computationally prohibitive since the space of all possible interactions is combinatorial in the number of detectors and scale-space relations among them. To address this problem, we aim in this paper to select a set of M action-specific interaction pairs q_1, . . .
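To make Eq. (1) and the max-pooling in Eq. (2) concrete, here is a brute-force NumPy sketch (our own illustration, not the authors' code; all names are hypothetical). The paper evaluates the inner maximization in linear time with the distance transform [18]; the quadratic-time loop below trades efficiency for clarity.

```python
import numpy as np

def interaction_response(d_i, d_j, v, C, positions):
    """r(I, q, p1) of Eq. (1) for every root position p1 (brute force).

    d_i, d_j  : dicts mapping scale-space positions p = (x, y, sigma) to scores
    v         : expected scale-space offset of the leaf relative to the root
    C         : 3x3 diagonal displacement-cost matrix
    positions : candidate scale-space positions to search over
    """
    v = np.asarray(v, float)
    r = {}
    for p1 in positions:
        best = -np.inf
        for p2 in positions:
            # u is the drift of the leaf from its expected position p1 + v
            u = np.asarray(p2, float) - (np.asarray(p1, float) + v)
            best = max(best, d_i[p1] + d_j[p2] - u @ C @ u)
        r[p1] = best
    return r

def pooled_score(r, region):
    """s(I, q, A) of Eq. (2): max response over root positions in region A."""
    return max(r[p] for p in region)
```

A pair scores highly only when both detectors fire and their relative displacement matches v up to the quadratic cost C.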
, q_M, which are both representative and discriminative for a given action class. Our learning procedure consists of three main steps. First, for each action we generate a large pool of candidate interactions, each comprising a pair of (body part / object) detectors and their relative scale-space displacement. This step is data-driven and selects candidate detection pairs which frequently occur for a particular action in a consistent relative scale-space configuration. Next, from this initial pool of candidate interactions we select a set of M discriminative interactions which best separate the particular action class from other classes in our training set. This is achieved using a linear Support Vector Machine (SVM) classifier with a sparsity-inducing regularizer. Finally, the discriminative interactions are combined across classes and used as interaction features in our final non-linear spatial-pyramid-like SVM classifier. The three steps are detailed below. 4.1 Generating a candidate pool of interaction pairs To initialize our model, we first generate a large pool of candidate interactions in a data-driven manner. Following the suggestion in [17] that the accurate selection of the deformation cost C may not be that important, we set C to a reasonable fixed value for all pairs, and focus on finding clusters of frequently co-occurring detectors (d_i, d_j) in specific relative configurations. For each detector i and an image I, we first collect the set of positions of all positive detector responses P_i^I = {p | d_i(I, p) > 0}, where d_i(I, p) is the response of detector i at position p in image I. We then apply a standard non-maxima suppression (NMS) step to eliminate multiple responses of a detector in local image neighbourhoods and then limit P_i^I to the L top-scoring detections. The intuition behind this step is that a part/object interaction is not likely to occur many times in an image.
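A minimal sketch of this filtering step (our own, with hypothetical names; the paper does not specify the exact NMS criterion, so a Euclidean scale-space radius is assumed here): greedy non-maxima suppression followed by keeping the L best detections.

```python
import numpy as np

def top_detections(positions, scores, radius, L):
    """Greedy NMS in scale-space, then keep at most the L top-scoring detections."""
    order = np.argsort(scores)[::-1]          # strongest detections first
    kept = []
    for i in order:
        # suppress i if it lies within `radius` of an already-kept detection
        if all(np.linalg.norm(positions[i] - positions[j]) > radius for j in kept):
            kept.append(i)
        if len(kept) == L:
            break
    return [int(i) for i in kept]
```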
For each pair of detectors (d_i, d_j) we then gather the relative displacements between their detections from all the training images I_k: D_ij = ∪_k { p_j − p_i | p_i ∈ P_i^{I_k} and p_j ∈ P_j^{I_k} }. To discover potentially interesting interaction pairs, we perform a mean-shift clustering over D_ij using a window of radius R ∈ R^3 (2D image space and scale) equal to the inverse of the square root of the deformation cost: R = diag(C^{−1/2}). We also discard clusters which contribute to less than η percent of the training images. The set of m resulting candidate pairs (i, j, v_1, C), · · · , (i, j, v_m, C) is built from the centers v_1, · · · , v_m of the remaining clusters. By applying this procedure to all pairs of detectors, we generate a large pool (hundreds of thousands) of potentially interesting candidate interactions. 4.2 Discriminative selection of interaction pairs The initialization described above produces a large number of candidate interactions. Many of them, however, may not be informative, resulting in unnecessary computational load at training and classification time. For this reason we wish to select a smaller number, M, of discriminative interactions. Given a set of N training images, each represented by an interaction response vector z_i, described in eq. (3), where A is the extended person bounding box given for each image, and a binary label y_i (in a 1-vs-all setup for each class), the learning problem for each action class can be formulated using the binary SVM cost function:
J(w, b) = λ Σ_{i=1}^{N} max{0, 1 − y_i(w^⊤ z_i + b)} + ∥w∥_1,   (4)
where w, b are the parameters of the classifier and λ is the weighting factor between the (hinge) loss on the training examples and the L1 regularizer of the classifier. By minimizing (4) in a one-versus-all setting for each action class, we search (by binary search) for the value of the regularization parameter λ resulting in a sparse weight vector w with M nonzero elements.
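The sparsity-driven binary search over λ can be sketched as follows (our own illustration, not the authors' solver: we use a proximal-gradient method on a squared-hinge surrogate of Eq. (4), omit the bias b, and all names are hypothetical). Larger λ weights the loss more and thus tends to leave more weights non-zero, which is what makes the binary search work.

```python
import numpy as np

def l1_svm(Z, y, lam, iters=400):
    """Minimise lam * sum_i max(0, 1 - y_i w.z_i)^2 + ||w||_1 by ISTA."""
    n, d = Z.shape
    w = np.zeros(d)
    # step size from the Lipschitz constant of the smooth (squared-hinge) part
    t = 1.0 / (2.0 * lam * np.linalg.norm(Z, 2) ** 2 + 1e-12)
    for _ in range(iters):
        slack = np.maximum(0.0, 1.0 - y * (Z @ w))
        grad = -2.0 * lam * Z.T @ (y * slack)
        w = w - t * grad
        w = np.sign(w) * np.maximum(np.abs(w) - t, 0.0)  # prox of ||.||_1
    return w

def select_by_sparsity(Z, y, M, lo=1e-3, hi=10.0, steps=25):
    """Binary-search lambda so that w has (approximately) M non-zeros."""
    best_w, best_gap = None, None
    for _ in range(steps):
        lam = np.sqrt(lo * hi)            # geometric midpoint
        w = l1_svm(Z, y, lam)
        nnz = np.count_nonzero(w)
        gap = abs(nnz - M)
        if best_gap is None or gap < best_gap:
            best_w, best_gap = w, gap
        if nnz < M:
            lo = lam                      # too sparse: weight the loss more
        else:
            hi = lam
    return best_w
```

The geometric midpoint is a natural choice because useful λ values span several orders of magnitude.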
Selection of the M interaction pairs corresponding to non-zero elements of w gives the M most discriminative (according to (4)) interaction pairs per action class. Note that other discriminative feature selection strategies such as boosting [20] can also be used. However, the proposed approach is able to jointly search the entire set of candidate feature pairs by minimizing the convex cost given in (4), whereas boosting implements a greedy feature selection procedure, which may be sub-optimal. 4.3 Using interaction pairs for classification Given a set of M discriminative interactions for each action class obtained as described above, we wish to train a final non-linear action classifier. We use a spatial-pyramid-like representation [28], aggregating responses in each cell of the pyramid using max-pooling as described by eq. (2), where A is one cell of the spatial pyramid. We extend the standard 2D pyramid representation to scale-space, resulting in a 3D pyramid with D = 1 + 2^3 + 4^3 = 73 cells. Using the scale-space pyramid with D cells, we represent each image by concatenating M features from each of the K classes into an MKD-dimensional vector. We train a non-linear SVM with an RBF kernel and L2 regularizer for each action class, using 5-fold cross-validation for the regularization and kernel bandwidth parameters. We found that using this final non-linear classifier consistently improves classification performance over the linear SVM given by equation (4). Note that feature selection (Section 4.2) is necessary in this case, as applying the non-linear spatial pyramid classifier to the entire pool of all candidate interactions would be computationally infeasible. 5 Experiments We test our model on the Willow-action dataset downloaded from [4] and the PASCAL VOC 2010 action classification dataset [14].
The Willow-action dataset contains more than 900 images with more than 1100 labelled person detections from 7 human action classes: Interaction with Computer, Photographing, Playing Music, Riding Bike, Riding Horse, Running and Walking. The training set contains 70 examples of each action class and the rest (at least 39 examples per class) is left for testing. The PASCAL VOC 2010 dataset contains the 7 above classes together with 2 other actions: Phoning and Reading. It contains a similar number of images. Each training and testing image in both datasets is annotated with the smallest bounding box containing each person and by the performed action(s). We follow the same experimental setup for both datasets. Implementation details: We use our implementation of the body part detectors described in [25] with 16 pose clusters, trained on the publicly available 2000-image database [3], and 10 pre-trained PASCAL 2007 Latent SVM object detectors [2]: bicycle, car, chair, cow, dining table, horse, motorbike, person, sofa, tvmonitor. In the human action training/test data, we extend each given person bounding box by 50% and resize the image so that the bounding box has a maximum size of 300 pixels. We run the detectors over the transformed bounding boxes and consider the image scales s_k = 2^{k/10} for k ∈ {−10, · · · , 10}. At each scale we extract the detector response every 4 pixels and 8 pixels for the body part and object detectors, respectively. The outputs of each detector are then normalized by subtracting the mean of the maximum responses within the training bounding boxes and then normalizing the variance to 1. We generate the candidate interaction pairs by taking the mean-shift radius R = (30, 30, log(2)/2), L = 3 and η = 8%. The covariance of the pair deformation cost C is fixed in all experiments to R^{−2}. We select M = 310 discriminative interaction pairs to compute the final spatial pyramid representation of each image.
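The representation sizes quoted above can be sanity-checked with a short sketch (our own, with hypothetical helper names): the 21 image scales s_k = 2^{k/10}, the per-detector normalisation by training-box maxima, and the 73-cell scale-space pyramid of Section 4.3.

```python
import numpy as np
from itertools import product

# Image scales s_k = 2^(k/10) for k in {-10, ..., 10}: 21 scales spanning one
# octave down to one octave up.
scales = [2.0 ** (k / 10.0) for k in range(-10, 11)]

def normalise_detector(responses, train_maxima):
    """Shift by the mean of the training-box maxima, scale to unit variance."""
    mu, sd = np.mean(train_maxima), np.std(train_maxima)
    return (np.asarray(responses, float) - mu) / sd

def pyramid_cells(levels=(1, 2, 4)):
    """Cells of the 3D (x, y, scale) pyramid: 1^3 + 2^3 + 4^3 = 73 in total."""
    return [(L, ix, iy, isig)
            for L in levels
            for ix, iy, isig in product(range(L), repeat=3)]
```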
Results: Table 1 summarizes per-class action classification results (reported using average precision for each class) for the proposed method (d. Interactions) and three baselines. The first baseline (a. BOF) is the bag-of-features classifier [11], aggregating quantized responses of densely sampled HOG features in a spatial pyramid representation, using a (non-linear) intersection kernel. Note that this is a strong baseline, which was shown [11] to outperform the recent person-object interaction models of [39] and [21] on their own datasets. The second baseline (b. LSVM) is the latent SVM classifier [17] trained in a 1-vs-all fashion for each class. To obtain a single classification score for each person bounding box, we take the maximum LSVM detection score from the detections overlapping the extended bounding box with the standard overlap score [14] higher than 0.5. The final baseline (c. Detectors) is an SVM classifier with an RBF kernel trained on max-pooled responses of the entire bank of body part and object detectors in a spatial pyramid representation, but without interactions. This baseline is similar in spirit to the object bank representation [29], but here targeted to action classification by including a bank of pose-specific body part detectors as well as object detectors. On average, the proposed method (d.) outperforms all baselines, obtaining the best result on 4 out of 7 classes. The largest improvements are obtained on the Riding Bike and Riding Horse actions, for which reliable object detectors are available. The improvement of the proposed method d. with respect to using the plain bank of object and body part detectors c. directly demonstrates the benefit of modeling interactions. Example detections of interaction pairs are shown in Figure 2. Table 2 shows the performance of the proposed interaction model (d. Interactions) and its combination with the baselines (e. BOF+LSVM+Inter.) on the Pascal VOC 2010 data.
Interestingly, the proposed approach is complementary to both the BOF (51.25 mAP) and LSVM (44.08 mAP) methods, and by combining all three approaches (following [11]) the overall performance improves to 60.66 mAP. We also report results of the "Poselet" method [30], which, similar to our method, is trained from external non-Pascal data. Our combined approach achieves better overall performance and also outperforms the "Poselet" approach on 6 out of 9 classes. Finally, our combined approach also obtains competitive performance compared to the overall best reported result on the Pascal VOC 2010 data, "SURREY MK KDA" [1], and outperforms this method on the "Riding Horse" and "Walking" classes. 6 Conclusion We have developed person-object interaction features based on non-rigid relative scale-space displacement of pairs of body part and object detectors. Further, we have shown that such features can be learnt in a discriminative fashion and can improve action classification performance over a strong bag-of-features baseline in challenging realistic images of common human actions. In addition, the learnt interaction features in some cases correspond to visually meaningful configurations of body parts, and body parts with objects.
Figure 2: Example detections of discriminative interaction pairs. These body part interaction pairs are chosen as discriminative (high positive weight w_i) for the action classes indicated on the left: Interaction with Computer (blue: screen; cyan: left leg), Photographing (blue: head; cyan: left thigh), Playing Instrument (blue: left forearm; cyan: left forearm), Riding Bike (blue: right forearm; cyan: motorbike), Riding Horse (blue: horse; cyan: left thigh), Running (blue: left arm; cyan: right leg), Walking (blue: left arm; cyan: head). In each row, the first three images show detections on the correct action class. The last image shows a high-scoring detection on an incorrect action class.
In the examples shown, the interaction features capture either a body part and an object, or two body part interactions. Note that while these interaction pairs are found to be discriminative, due to detection noise they do not necessarily localize the correct body parts in all images. However, they may still fire at consistent locations across many images, as illustrated in the second row, where the head detector consistently detects the camera lens, and the thigh detector fires consistently at the edge of the head. Similarly, the leg detector seems to consistently fire on keyboards (see the third image in the first row for an example), thus improving the confidence of the computer detections for the "Interacting with computer" action.

Action / Method | a. BOF [11] | b. LSVM | c. Detectors | d. Interactions
(1) Inter. w/ Comp. | 58.15 | 30.21 | 45.64 | 56.60
(2) Photographing | 35.39 | 28.12 | 36.35 | 37.47
(3) Playing Music | 73.19 | 56.34 | 68.35 | 72.00
(4) Riding Bike | 82.43 | 68.70 | 86.69 | 90.39
(5) Riding Horse | 69.60 | 60.12 | 71.44 | 75.03
(6) Running | 44.53 | 51.99 | 57.65 | 59.73
(7) Walking | 54.18 | 55.97 | 57.68 | 57.64
Average (mAP) | 59.64 | 50.21 | 60.54 | 64.12
Table 1: Per-class average-precision for different methods on the Willow-actions dataset.

Action / Method | d. Interactions | e. BOF+LSVM+Inter. | Poselets [30] | MK-KDA [1]
(1) Phoning | 42.11 | 48.61 | 49.6 | 52.6
(2) Playing Instr. | 30.78 | 53.07 | 43.2 | 53.5
(3) Reading | 28.70 | 28.56 | 27.7 | 35.9
(4) Riding Bike | 84.93 | 80.05 | 83.7 | 81.0
(5) Riding Horse | 89.61 | 90.67 | 89.4 | 89.3
(6) Running | 81.28 | 85.81 | 85.6 | 86.5
(7) Taking Photo | 26.89 | 33.53 | 31.0 | 32.8
(8) Using Computer | 52.31 | 56.10 | 59.1 | 59.2
(9) Walking | 70.12 | 69.56 | 67.9 | 68.6
Average (mAP) | 56.30 | 60.66 | 59.7 | 62.2
Table 2: Per-class average-precision on the Pascal VOC 2010 action classification dataset.
We use only a small set of object detectors available at [2]; however, we are now in a position to include many more additional object (camera, computer, laptop) or texture (grass, road, trees) detectors, trained from additional datasets such as ImageNet or LabelMe. Currently, we consider detections of entire objects, but the proposed model can be easily extended to represent interactions between body parts and parts of objects [8]. Acknowledgements. This work was partly supported by the Quaero, OSEO, MSR-INRIA, ANR DETECT (ANR-09-JCJC-0027-01) and the EIT-ICT labs. References [1] http://pascallin.ecs.soton.ac.uk/challenges/voc/voc2010/results/index.html. [2] http://people.cs.uchicago.edu/∼pff/latent/. [3] http://www.comp.leeds.ac.uk/mat4saj/lsp.html. [4] http://www.di.ens.fr/willow/research/stillactions/. [5] M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009. [6] A. Bobick and J. Davis. The recognition of human movement using temporal templates. IEEE PAMI, 23(3):257–276, 2001. [7] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3D human pose annotations. In ICCV, 2009. [8] T. Brox, L. Bourdev, S. Maji, and J. Malik. Object segmentation by alignment of poselet activations to image contours. In CVPR, 2011. [9] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. In WS-SLCV, ECCV, 2004. [10] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, pages I:886–893, 2005. [11] V. Delaitre, I. Laptev, and J. Sivic. Recognizing human actions in still images: a study of bag-of-features and part-based representations. In Proc. BMVC., 2010. Updated version available at http://www.di.ens.fr/willow/research/stillactions/. [12] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. In ICCV, 2009. [13] C. Desai, D. Ramanan, and C. Fowlkes.
Discriminative models for static human-object interactions. In SMiCV, CVPR, 2010. [14] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. IJCV, 2010. In press. [15] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009. [16] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, Jun 2005. [17] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE PAMI, 2009. [18] P. Felzenszwalb and D. Huttenlocher. Distance transforms of sampled functions. Technical report, Cornell University CIS, Tech. Rep. 2004-1963, 2004. [19] V. Ferrari, M. Marin-Jimenez, and A. Zisserman. Pose search: retrieving people using their pose. In CVPR, 2009. [20] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. [21] A. Gupta, A. Kembhavi, and L. Davis. Observing human-object interactions: Using spatial and functional compatibility for recognition. IEEE PAMI, 31(10):1775–1789, 2009. [22] H. Harzallah, F. Jurie, and C. Schmid. Combining efficient object localization and image classification. In ICCV, 2009. [23] D. Hoiem, A. Efros, and M. Hebert. Putting objects in perspective. In CVPR, 2006. [24] N. Ikizler, R. G. Cinbis, S. Pehlivan, and P. Duygulu. Recognizing actions from still images. In Proc. ICPR, 2008. [25] S. Johnson and M. Everingham. Learning effective human pose estimation from inaccurate annotation. In CVPR, 2011. [26] C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009. [27] I. Laptev, M. Marszałek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In CVPR, 2008. [28] S. Lazebnik, C. Schmid, and J. Ponce.
Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In CVPR, pages II: 2169–2178, 2006. [29] L. Li, H. Su, E. Xing, and L. Fei-Fei. Object bank: A high-level image representation for scene classification and semantic feature sparsification. In NIPS, 2010. [30] S. Maji, L. Bourdev, and J. Malik. Action recognition from a distributed representation of pose and appearance. In CVPR, 2011. [31] T. B. Moeslund, A. Hilton, and V. Krüger. A survey of advances in vision-based human motion capture and analysis. CVIU, 103(2-3):90–126, 2006. [32] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In ICCV, 2007. [33] B. Sapp, A. Toshev, and B. Taskar. Cascaded models for articulated pose estimation. In ECCV, 2010. [34] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, 2003. [35] A. Torralba. Contextual priming for object detection. IJCV, 53(2):169–191, July 2003. [36] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In ICCV, 2009. [37] Y. Wang, H. Jiang, M. S. Drew, Z. N. Li, and G. Mori. Unsupervised discovery of action classes. In CVPR, pages II: 1654–1661, 2006. [38] W. Yang, Y. Wang, and G. Mori. Recognizing human actions from still images with latent poses. In CVPR, 2010. [39] B. Yao and L. Fei-Fei. Grouplet: A structured image representation for recognizing human and object interactions. In CVPR, 2010. [40] B. Yao and L. Fei-Fei. Modeling mutual context of object and human pose in human-object interaction activities. In CVPR, 2010. [41] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: a comprehensive study. IJCV, 73(2):213–238, 2007.
2011
Efficient inference in matrix-variate Gaussian models with iid observation noise Oliver Stegle1 Max Planck Institutes Tübingen, Germany stegle@tuebingen.mpg.de Christoph Lippert1 Max Planck Institutes Tübingen, Germany clippert@tuebingen.mpg.de Joris Mooij Institute for Computing and Information Sciences Radboud University Nijmegen, The Netherlands j.mooij@cs.ru.nl Neil Lawrence Department of Computer Science University of Sheffield Sheffield, UK N.Lawrence@sheffield.ac.uk Karsten Borgwardt Max Planck Institutes & Eberhard Karls Universität Tübingen, Germany karsten.borgwardt@tuebingen.mpg.de Abstract Inference in matrix-variate Gaussian models has major applications for multi-output prediction and joint learning of row and column covariances from matrix-variate data. Here, we discuss an approach for efficient inference in such models that explicitly account for iid observation noise. Computational tractability can be retained by exploiting the Kronecker product between row and column covariance matrices. Using this framework, we show how to generalize the Graphical Lasso in order to learn a sparse inverse covariance between features while accounting for a low-rank confounding covariance between samples. We show practical utility on applications to biology, where we model covariances with more than 100,000 dimensions. We find greater accuracy in recovering biological network structures and are able to better reconstruct the confounders. 1 Introduction Matrix-variate normal (MVN) models have important applications in various fields. These models have been used as regularizers for multi-output prediction, jointly modeling the similarity between tasks and samples [1]. In related work in Gaussian processes (GPs), generalizations of MVN distributions have been used for inference of vector-valued functions [2, 3].
These models with Kronecker factored covariance have applications in geostatistics [4], statistical testing on matrix-variate data [5] and statistical genetics [6]. In prior work, different covariance functions for rows and columns have been combined in a flexible manner. For example, Dutilleul and Zhang et al. [7, 1] have performed estimation of free-form covariances with different norm penalties. In other applications for prediction [2] and dimension reduction [8], combinations of free-form covariances with squared exponential covariances have been used. 1These authors contributed equally to this work. 1 In the absence of iid observation noise, an efficient inference scheme also known as the “flip-flop algorithm” can be derived. In this iterative approach, estimation of the respective covariances is decoupled by rotating the data with respect to one of the covariances to optimize parameters of the other [7, 1]. While this simplifying assumption of noise-free matrix-variate data has been used with some success, there are clear motivations for including iid noise in the model. For example, Bonilla et al. [2] have shown that in multi-task regression a noise free GP with Kronecker structure leads to a cancelation of information sharing between the various prediction tasks. This effect, also known from the geostatistics literature [4], eliminates any benefit from multivariate prediction compared to na¨ıve approaches. Alternatively, when including observation noise in the model, computational tractability has been limited to smaller datasets. The covariance matrix no longer directly factorizes into a Kronecker product, thus rendering simple approaches such as the “flip-flop algorithm” inappropriate. Here, we address these shortcomings and propose a general framework for efficient inference in matrix-variate normal models that include iid observation noise. 
Although in this model the covariance matrix no longer factorizes into a Kronecker product, we show how efficient parameter inference can still be done. To this end, we provide derivations of both the log-likelihood and gradients with respect to hyperparameters that can be computed in the same asymptotic runtime as iterations of the “flip-flop algorithm” on a noise-free model. This allows for parameter learning of covariance matrices of size 105 × 105, or even bigger, which would not be possible if done na¨ıvely. First, we show how for any combination of covariances, evaluation of model likelihood and gradients with respect to individual covariance parameters is tractable. Second, we apply this framework to structure learning in Gaussian graphical models, while accounting for a confounding non-iid sample structure. This generalization of the Graphical Lasso [9, 10] (GLASSO) allows to jointly learn and account for a sparse inverse covariance matrix between features and a structured (nondiagonal) sample covariance. The low rank component of the sample covariance is used to account for confounding effects, as is done in other models for genomics [11, 12]. We illustrate this generalization called “Kronecker GLASSO” on synthetic datasets and heterogeneous protein signaling and gene expression data, where the aim is to recover the hidden network structures. We show that our approach is able to recover the confounding structure, when it is known, and reveals sparse biological networks that are in better agreement with known components of the latent network structure. 2 Efficient inference in Kronecker Gaussian processes Assume we are given a data matrix Y ∈RN×D with N rows and D columns, where N is the number of samples with D features each. As an example, think of N as a number of micro-array experiments, where in each experiment the expression levels of the same D genes are measured; here, yrc would be the expression level of gene c in experiment r. 
Alternatively, Y could represent multi-variate targets in a multi-task prediction setting, with rows corresponding to tasks and columns to features. This setting occurs in geostatistics, where the entries yrc correspond to ecological measurements taken on a regular grid. First we introduce some notation. For any L × M matrix A, we define vec(A) to be the vector obtained by concatenating the columns of A; further, let A ⊗ B denote the Kronecker product (or tensor product) between matrices A and B:

$$\mathrm{vec}(A) = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{LM} \end{pmatrix}; \qquad A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \dots & a_{1M}B \\ a_{21}B & a_{22}B & \dots & a_{2M}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{L1}B & a_{L2}B & \dots & a_{LM}B \end{pmatrix}.$$

For modeling Y as a matrix-variate normal distribution with iid observation noise, we first introduce N × D additional latent variables Z, which can be thought of as the noise-free observations. The data Y is then given by Z plus iid Gaussian observation noise:

$$p(\mathbf{Y} \mid \mathbf{Z}, \sigma^2) = \mathcal{N}\left(\mathrm{vec}(\mathbf{Y}) \mid \mathrm{vec}(\mathbf{Z}),\ \sigma^2 \mathbf{I}_{N \cdot D}\right). \quad (1)$$

If the covariance between rows and columns of the noise-free observations Z factorizes, we may assume a zero-mean matrix-variate normal model for Z:

$$p(\mathbf{Z} \mid \mathbf{C}, \mathbf{R}) = \frac{\exp\{-\tfrac{1}{2}\,\mathrm{Tr}[\mathbf{C}^{-1}\mathbf{Z}^{\top}\mathbf{R}^{-1}\mathbf{Z}]\}}{(2\pi)^{ND/2}\,|\mathbf{R}|^{D/2}\,|\mathbf{C}|^{N/2}},$$

which can be equivalently formulated as a multivariate normal distribution:

$$p(\mathbf{Z} \mid \mathbf{C}, \mathbf{R}) = \mathcal{N}\left(\mathrm{vec}(\mathbf{Z}) \mid \mathbf{0}_{N \cdot D},\ \mathbf{C}(\Theta_C) \otimes \mathbf{R}(\Theta_R)\right). \quad (2)$$

(Note the determinant exponents |R|^{D/2}|C|^{N/2}, consistent with |C ⊗ R| = |C|^N |R|^D.) Here, the matrix C is a D × D column covariance matrix and R is an N × N row covariance matrix that may depend on hyperparameters ΘC and ΘR respectively. Marginalizing over the noise-free observations Z results in the Kronecker Gaussian process model of the observed data Y:

$$p(\mathbf{Y} \mid \mathbf{C}, \mathbf{R}, \sigma^2) = \mathcal{N}\left(\mathrm{vec}(\mathbf{Y}) \mid \mathbf{0}_{N \cdot D},\ \mathbf{C}(\Theta_C) \otimes \mathbf{R}(\Theta_R) + \sigma^2 \mathbf{I}_{N \cdot D}\right). \quad (3)$$

For notational convenience we will drop the dependency on hyperparameters ΘC, ΘR and σ2. Note that for σ2 = 0, the likelihood model in Equation (3) reduces to the matrix-variate normal distribution in Equation (2).
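The stated equivalence between the trace form of the matrix-variate normal density (Equation (2)) and its vec formulation with covariance C ⊗ R can be verified numerically. The following numpy/scipy sketch is illustrative (toy sizes, not the authors' code); it also uses the determinant exponents |C|^{N/2}|R|^{D/2} implied by |C ⊗ R| = |C|^N |R|^D:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
N, D = 4, 3  # toy sizes: N samples (rows), D features (columns)

# Random positive-definite row covariance R (N x N) and column covariance C (D x D).
A = rng.standard_normal((N, N)); R = A @ A.T + N * np.eye(N)
B = rng.standard_normal((D, D)); C = B @ B.T + D * np.eye(D)

Z = rng.standard_normal((N, D))
vec = lambda M: M.reshape(-1, order="F")  # column-stacking vec(.)

# Trace form of the matrix-variate normal log-density (Eq. 2);
# determinant exponents |C|^(N/2) |R|^(D/2) match |C (x) R| = |C|^N |R|^D.
logp_trace = (-0.5 * np.trace(np.linalg.inv(C) @ Z.T @ np.linalg.inv(R) @ Z)
              - 0.5 * N * D * np.log(2 * np.pi)
              - 0.5 * D * np.linalg.slogdet(R)[1]
              - 0.5 * N * np.linalg.slogdet(C)[1])

# Equivalent multivariate normal on vec(Z) with Kronecker covariance C (x) R.
logp_vec = multivariate_normal(mean=np.zeros(N * D), cov=np.kron(C, R)).logpdf(vec(Z))

assert np.allclose(logp_trace, logp_vec)
```

Both evaluations agree to floating-point precision, confirming the two parameterizations describe the same distribution.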
2.1 Efficient parameter estimation

For efficient optimization of the log likelihood, L = ln p(Y | C, R, σ2), with respect to the hyperparameters, we exploit an identity that allows us to write a matrix product with a Kronecker product matrix in terms of ordinary matrix products:

$$(\mathbf{C} \otimes \mathbf{R})\,\mathrm{vec}(\mathbf{Y}) = \mathrm{vec}(\mathbf{R}^{\top}\mathbf{Y}\mathbf{C}). \quad (4)$$

We also exploit the compatibility of a Kronecker product plus a constant diagonal term with the eigenvalue decomposition:

$$(\mathbf{C} \otimes \mathbf{R} + \sigma^2\mathbf{I}) = (\mathbf{U}_C \otimes \mathbf{U}_R)(\mathbf{S}_C \otimes \mathbf{S}_R + \sigma^2\mathbf{I})(\mathbf{U}_C^{\top} \otimes \mathbf{U}_R^{\top}), \quad (5)$$

where C = U_C S_C U_C^T is the eigenvalue decomposition of C, and similarly for R.

Likelihood evaluation. Using these identities, the log of the likelihood in Equation (3) follows as

$$\mathcal{L} = -\frac{N \cdot D}{2}\ln(2\pi) - \frac{1}{2}\ln\left|\mathbf{S}_C \otimes \mathbf{S}_R + \sigma^2\mathbf{I}\right| - \frac{1}{2}\mathrm{vec}(\mathbf{U}_R^{\top}\mathbf{Y}\mathbf{U}_C)^{\top}(\mathbf{S}_C \otimes \mathbf{S}_R + \sigma^2\mathbf{I})^{-1}\mathrm{vec}(\mathbf{U}_R^{\top}\mathbf{Y}\mathbf{U}_C). \quad (6)$$

This term can be interpreted as a multivariate normal distribution with diagonal covariance matrix (S_C ⊗ S_R + σ2I) on rotated data vec(U_R^T Y U_C), similar to an approach that is used to speed up mixed models in genetics [13].

Gradient evaluation. Derivatives of the log marginal likelihood with respect to a particular covariance parameter θR ∈ ΘR can be expressed as

$$\frac{d}{d\theta_R}\mathcal{L} = -\frac{1}{2}\mathrm{diag}\left((\mathbf{S}_C \otimes \mathbf{S}_R + \sigma^2\mathbf{I})^{-1}\right)^{\top}\mathrm{diag}\left(\mathbf{S}_C \otimes \left(\mathbf{U}_R^{\top}\tfrac{d}{d\theta_R}\mathbf{R}\,\mathbf{U}_R\right)\right) + \frac{1}{2}\mathrm{vec}(\tilde{\mathbf{Y}})^{\top}\mathrm{vec}\left(\mathbf{U}_R^{\top}\tfrac{d}{d\theta_R}\mathbf{R}\,\mathbf{U}_R\,\tilde{\mathbf{Y}}\,\mathbf{S}_C\right), \quad (7)$$

where vec(Ỹ) = (S_C ⊗ S_R + σ2I)^{−1} vec(U_R^T Y U_C). Analogous expressions follow for partial derivatives with respect to θC ∈ ΘC and the noise level σ2. Full details of all derivations, including derivatives wrt. σ2, can be found in the supplementary material.

Runtime and memory complexity. A naïve implementation for optimizing the likelihood (3) with respect to the hyperparameters would have runtime complexity O(N3D3) and memory complexity O(N2D2). Using the likelihood and derivative as expressed in Equations (6) and (7), each evaluation with new kernel parameters involves solving the symmetric eigenvalue problems of both R and C, together having a runtime complexity of O(N3 + D3).
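The identities (4)–(6) can be checked numerically against a naive O((ND)^3) evaluation of Equation (3). A minimal numpy/scipy sketch (toy sizes and variable names are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
N, D, sigma2 = 5, 4, 0.3

# Random positive-definite row covariance R and column covariance C.
A = rng.standard_normal((N, N)); R = A @ A.T + N * np.eye(N)
B = rng.standard_normal((D, D)); C = B @ B.T + D * np.eye(D)
Y = rng.standard_normal((N, D))
vec = lambda M: M.reshape(-1, order="F")  # column-stacking vec(.)

# Kronecker identity (4): (C (x) R) vec(Y) = vec(R^T Y C).
assert np.allclose(np.kron(C, R) @ vec(Y), vec(R.T @ Y @ C))

# Efficient log-likelihood (6): only two small eigendecompositions, O(N^3 + D^3).
sC, UC = np.linalg.eigh(C)
sR, UR = np.linalg.eigh(R)
d = np.kron(sC, sR) + sigma2            # eigenvalues of C (x) R + sigma2*I, Eq. (5)
ytil = vec(UR.T @ Y @ UC)               # rotated data
loglik_fast = (-0.5 * N * D * np.log(2 * np.pi)
               - 0.5 * np.sum(np.log(d))
               - 0.5 * np.sum(ytil ** 2 / d))

# Naive O((ND)^3) evaluation of Eq. (3) for comparison.
K = np.kron(C, R) + sigma2 * np.eye(N * D)
loglik_naive = multivariate_normal(mean=np.zeros(N * D), cov=K).logpdf(vec(Y))

assert np.allclose(loglik_fast, loglik_naive)
```

The fast path never forms the ND × ND covariance explicitly; only the eigenvalue vectors sC, sR and the rotated data enter the computation, which is what makes covariances of size 10^5 × 10^5 tractable.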
Explicit evaluation of any matrix Kronecker products is not necessary, resulting in a low memory complexity of O(N 2 + D2). 3 3 Graphical Lasso in the presence of confounders Estimation of sparse inverse covariance matrices is widely used to identify undirected network structures from observational data. However, non-iid observations due to hidden confounding variables may hinder accurate recovery of the true network structure. If not accounted for, confounders may lead to a large number of false positive edges. This is of particular relevance in biological applications, where observational data are often heterogeneous, combining measurements from different labs, data obtained under various perturbations or from a range of measurement platforms. As an application of the framework described in Section 2, we here propose an approach to learning sparse inverse covariance matrices between features, while accounting for covariation between samples due to confounders. First, we briefly review the “orthogonal” approaches that account for the corresponding types of sample and feature covariance we set out to model. 3.1 Explaining feature dependencies using the Graphical Lasso A common approach to model relationships between variables in a graphical model is the GLASSO. It has been used in the context of biological studies to recover the hidden network structure of gene-gene interrelationships [14], for instance. The GLASSO assumes a multivariate Gaussian distribution on features with a sparse precision (inverse covariance) matrix. The sparsity is induced by an L1 penalty on the entries of C−1, the inverse of the feature covariance matrix. Under the simplifying assumption of iid samples, the posterior distribution of Y under this model is proportional to p(Y, C−1) = p(C−1) N Y r=1 N (Yr,: | 0D, C) . (8) Here, the prior on the precision matrix C−1 is p(C−1) ∝exp −λ
∥C−1∥
1 [C−1 ≻0], (9) with ∥A∥1 defined as the sum over all absolute values of the matrix entries. Note that this prior is only nonzero for positive-definite C−1. 3.2 Modeling confounders using the Gaussian process latent variable model Confounders are unobserved variables that can lead to spurious associations between observed variables and to covariation between samples. A possible approach to identify such confounders is dimensionality reduction. Here we briefly review two dimensionality reduction methods, dual probabilistic PCA and its generalization, the Gaussian process latent variable model (GPLVM) [15]. In the context of applications, these methods have previously been applied to identify regulatory processes [16], and to recover confounding factors with broad effects on many features [11, 12]. In dual probabilistic PCA [15], the observed data Y is explained as a linear combination of K latent variables (“factors”), plus independent observation noise. The model is as follows: Y = XW + E, where X ∈RN×K contains the values of K latent variables (“factors”), W ∈RK×D contains independent standard-normally distributed weights that specify the mapping between latent and observed variables. Finally, E ∈RN×D contains iid Gaussian noise with Erc ∼N(0, σ2). Marginalizing over the weights W yields the data likelihood: p(Y | X) = D Y c=1 N Y:,c 0N, XXT + σ2IN . (10) Learning the latent factors X and the observation noise variance σ2 can be done by maximum likelihood. The more general GPLVM [15] is obtained by replacing XXT in (10) with a more general Gram matrix R, with Rrs = κ (xr1, . . . , xrK), (xs1, . . . , xsK) for some covariance function κ : RK × RK →R. 4 3.3 Combining the two models We propose to combine these two different explanations of the data into one coherent model. Instead of treating either the samples or the features as being (conditionally) independent, we aim to learn a joint covariance for the observed data matrix Y. 
This model, called Kronecker GLASSO, is a special instance of the Kronecker Gaussian process model introduced in Section 2, as the data likelihood can be written as: p(Y | R, C−1) = N vec(Y) 0N·D, C ⊗R + σ2IN·D . (11) Here, we build on the model components introduced in Section 3.2 and Section 3.1. We use the sparse L1 penalty (9) for the feature inverse covariance C−1 and use a linear kernel for the covariance on rows R = XXT + ρ2IN. Learning the model parameters proceeds via MAP inference, optimizing the log likelihood implied by Equation (11) with respect to X and C−1, and the hyperparameters σ2, ρ2. By combining the GLASSO and GPLVM in this way, we can recover network structure in the presence of confounders. An equivalent generative model can be obtained in a similar way as in dual probabilistic PCA. The main difference is that now, the rows of the weight matrix W are sampled from a N(0D, C) distribution instead of a N(0D, ID) distribution. This generative model for Y given latent variables X ∈RN×K and feature covariance C ∈RD×D is of the form Y = XW + ρV + E, where W ∈RK×D, V ∈RN×D and E ∈RN×D are jointly independent with distributions vec(W) ∼ N(0KD, C ⊗IK), vec(V) ∼N(0ND, C ⊗IN) and vec(E) ∼N(0ND, σ2IND). 3.4 Inference in the joint model As already mentioned in Section 2, parameter inference in the Kronecker GLASSO model implied by Equation (11), when done na¨ıvely, is intractable for all but very low dimensional data matrices Y. Even using the tricks discussed in Section 2, free-form sparse inverse covariance updates for C−1 are intractable under the L1 penalty when depending on gradient updates. Similar as in Section 2, the first step towards efficient inference is to introduce N × D additional latent variables Z, which can be thought of as the noise-free observations: p(Y|Z, σ2) = N vec(Y) vec(Z), σ2IN·D (12) p(Z|R, C) = N (vec(Z) | 0N·D, C ⊗R) . (13) We consider the latent variables Z as additional model parameters. 
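The stated equivalence between the generative model Y = XW + ρV + E and the marginal covariance in Equation (11) can be checked by propagating covariances: Cov(vec(XW)) = C ⊗ XXᵀ, Cov(ρ vec(V)) = ρ²(C ⊗ I_N), and Cov(vec(E)) = σ²I, which sum to C ⊗ (XXᵀ + ρ²I_N) + σ²I. A numpy sketch with illustrative toy sizes (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, K = 6, 4, 2
rho2, sigma2 = 0.5, 0.1

B = rng.standard_normal((D, D)); C = B @ B.T + D * np.eye(D)  # feature covariance
X = rng.standard_normal((N, K))                               # latent confounders

# Covariance implied by model (11): C (x) R + sigma2*I with R = X X^T + rho2*I.
R = X @ X.T + rho2 * np.eye(N)
cov_model = np.kron(C, R) + sigma2 * np.eye(N * D)

# Covariance propagated through the generative model Y = X W + rho*V + E:
#   vec(XW) = (I_D (x) X) vec(W) with vec(W) ~ N(0, C (x) I_K)  =>  Cov = C (x) X X^T
ID_X = np.kron(np.eye(D), X)
cov_XW = ID_X @ np.kron(C, np.eye(K)) @ ID_X.T
cov_V = rho2 * np.kron(C, np.eye(N))   # Cov(rho * vec(V)), vec(V) ~ N(0, C (x) I_N)
cov_E = sigma2 * np.eye(N * D)
cov_generative = cov_XW + cov_V + cov_E

assert np.allclose(cov_model, cov_generative)
```

The check confirms that learning X, C and the noise variances under Equation (11) is the same as fitting the latent-confounder generative model directly.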
We now optimize the distribution p(Y, C−1 | Z, R, σ2) = p(Y | Z, σ2) p(Z | R, C) p(C−1) with respect to the unknown parameters Z, C−1, σ2, and R (which depends on X and kernel parameters ΘR) by iterating through the following steps:

1. Optimize for σ2 and R after integrating out Z, for fixed C:
$$\operatorname*{argmax}_{\sigma^2, \Theta_R, \mathbf{X}} p(\mathbf{Y} \mid \mathbf{C}, \mathbf{R}(\Theta_R, \mathbf{X}), \sigma^2) = \operatorname*{argmax}_{\sigma^2, \Theta_R, \mathbf{X}} \mathcal{N}\left(\mathrm{vec}(\mathbf{Y}) \mid \mathbf{0}_{N \cdot D},\ \mathbf{C} \otimes \mathbf{R}(\Theta_R, \mathbf{X}) + \sigma^2\mathbf{I}_{N \cdot D}\right). \quad (14)$$

2. Calculate the expectation of Z for fixed R, C, and σ2:
$$\mathrm{vec}(\hat{\mathbf{Z}}) = (\mathbf{C} \otimes \mathbf{R})(\mathbf{C} \otimes \mathbf{R} + \sigma^2\mathbf{I}_{N \cdot D})^{-1}\mathrm{vec}(\mathbf{Y}).$$

3. Optimize Ĉ−1 for fixed R and Ẑ:
$$\operatorname*{argmax}_{\hat{\mathbf{C}}^{-1}} p(\hat{\mathbf{C}}^{-1} \mid \hat{\mathbf{Z}}, \mathbf{R}) = \operatorname*{argmax}_{\hat{\mathbf{C}}^{-1}} \mathcal{N}\left(\mathrm{vec}(\hat{\mathbf{Z}}) \mid \mathbf{0},\ \hat{\mathbf{C}} \otimes \mathbf{R}\right) p(\hat{\mathbf{C}}^{-1}),$$
and set C = Ĉ.

As a stopping criterion we consider the relative reduction of the negative log-marginal likelihood (Equation (11)) plus the regularizer on C−1. The choice to optimize Ĉ−1 for fixed Ẑ is motivated by computational considerations, as this subproblem then reduces to conventional GLASSO; a full EM approach with latent variables Z does not seem feasible. Step 1 can be done using the efficient likelihood evaluations and gradients described in Section 2. We will now discuss step 3 in more detail.

Figure 1: Network reconstruction on the simulated example. (a) Precision-recall curve, when varying the sparsity penalty λ. Compared are the standard GLASSO, our algorithm with Kronecker structure (Kronecker GLASSO) and, as a reference, an idealized setting applying standard GLASSO to a similar dataset without confounding influences (Ideal GLASSO).
The model that accounts for confounders approaches the performance of an idealized model, while standard GLASSO finds a large fraction of false positive edges. (b) Ground truth network. (c-e) Recovered networks for GLASSO, Kronecker GLASSO and Ideal GLASSO at 40% recall (star in (a)). False positive predicted edges are colored in red. Because of the effect of confounders, standard GLASSO predicted an excess of edges to 4 of the nodes.

Optimizing for Ĉ−1. The third step, optimizing with respect to Ĉ−1, can be done efficiently, using similar ideas as in Section 2. First consider:

$$\ln \mathcal{N}\left(\mathrm{vec}(\hat{\mathbf{Z}}) \mid \mathbf{0}_{N \cdot D},\ \hat{\mathbf{C}} \otimes \mathbf{R}\right) = -\frac{N \cdot D}{2}\ln(2\pi) - \frac{1}{2}\ln\left|\hat{\mathbf{C}} \otimes \mathbf{R}\right| - \frac{1}{2}\mathrm{vec}(\hat{\mathbf{Z}})^{\top}(\hat{\mathbf{C}} \otimes \mathbf{R})^{-1}\mathrm{vec}(\hat{\mathbf{Z}}).$$

Now, using the Kronecker identity (4) and $\ln|A \otimes B| = \mathrm{rank}(B)\ln|A| + \mathrm{rank}(A)\ln|B|$, we can rewrite the log likelihood as:

$$\ln\left[\mathcal{N}\left(\mathrm{vec}(\hat{\mathbf{Z}}) \mid \mathbf{0},\ \hat{\mathbf{C}} \otimes \mathbf{R}\right) p(\hat{\mathbf{C}}^{-1})\right] = -\frac{N \cdot D}{2}\ln(2\pi) - \frac{1}{2}D\ln|\mathbf{R}| + \frac{1}{2}N\ln\left|\hat{\mathbf{C}}^{-1}\right| - \frac{1}{2}\mathrm{Tr}(\hat{\mathbf{Z}}^{\top}\mathbf{R}^{-1}\hat{\mathbf{Z}}\hat{\mathbf{C}}^{-1}) - \lambda\left\|\hat{\mathbf{C}}^{-1}\right\|_1 + \mathrm{const}.$$

Thus we obtain a standard GLASSO problem with covariance matrix $\hat{\mathbf{Z}}^{\top}\mathbf{R}^{-1}\hat{\mathbf{Z}}$:

$$\operatorname*{argmax}_{\hat{\mathbf{C}}^{-1}} p(\hat{\mathbf{C}}^{-1} \mid \hat{\mathbf{Z}}, \mathbf{R}) = \operatorname*{argmax}_{\hat{\mathbf{C}}^{-1} \succ 0} \ -\frac{1}{2}\mathrm{Tr}(\hat{\mathbf{Z}}^{\top}\mathbf{R}^{-1}\hat{\mathbf{Z}}\hat{\mathbf{C}}^{-1}) + \frac{1}{2}N\ln\left|\hat{\mathbf{C}}^{-1}\right| - \lambda\left\|\hat{\mathbf{C}}^{-1}\right\|_1. \quad (15)$$

The inverse sample covariance R−1 in Equation (15) rotates the data covariance, similar to the established flip-flop algorithm for inference in matrix-variate normal distributions [7, 1].

4 Experiments

In this section, we describe three experiments with the generalized GLASSO.

4.1 Simulation study

First, we considered an artificial dataset to illustrate the effect of confounding factors on the solution quality of sparse inverse covariance estimation. We created synthetic data, with N = 100 samples and D = 50 features according to the generative model described in Section 3.3. We generated the sparse inverse column covariance C−1 choosing edges at random with a sparsity level of 1%. Non-zero entries of the inverse covariance were drawn from a Gaussian with mean 1 and variance 2. The row covariance matrix R was created from K = 3 random factors xk, each drawn from unit variance iid Gaussian variables. The weighting between the confounders and the iid component ρ2 was set such that the factors explained equal variance, which corresponds to a moderate extent of confounding influences. Finally, we added independent Gaussian observation noise, choosing a signal-to-noise ratio of 10%.

Figure 2: Network reconstruction of a protein signaling network from Sachs et al. (a) Precision-recall curve, when varying the sparsity penalty λ. Compared are the standard GLASSO, and our algorithm with Kronecker structure (Kronecker GLASSO). Standard GLASSO, not accounting for confounders, found more false positive edges for a wide range of recall rates. (b) Ground truth network. (c-d) Recovered networks for GLASSO and Kronecker GLASSO at 40% recall (star in (a)). False positive edge predictions are colored in red.
Next, we applied different methods to reconstruct the true simulated network. We considered standard GLASSO and our Kronecker model that accounts for the confounding influence (Kronecker GLASSO). For reference, we also considered an idealized setting, applying GLASSO to a similar dataset without the confounding effects (Ideal GLASSO), obtained by setting X = 0N·K in the generative model. To determine an appropriate latent dimensionality of Kronecker GLASSO, we used the BIC criterion on multiple restarts with K = 1 to K = 5 latent factors. For all models we varied the sparsity parameter of the graphical lasso, setting λ = 5x, with x linearly interpolated between −8 and 3. The solution set of lasso-based algorithms is typically unstable and depends on slight variation of the data. To improve the stability of all methods, we employed stability selection [17], applying each algorithm for all regularization parameters 100 times to randomly drawn subsets containing 90% of the data. We then considered edges that were found in at least 50% of all 100 restarts. Figure 1a shows the precision-recall curve for each algorithm. Kronecker GLASSO performed considerably better than standard GLASSO, approaching the performance of the ideal model without confounders. Figures 1b-d show the reconstructed networks at 40% recall. While Kronecker GLASSO reconstructed the same network as the ideal model, standard GLASSO found an excess of false positive edges. 4.2 Network reconstruction of protein-signaling networks Important practical applications of the GLASSO include the reconstruction of gene and protein networks. Here, we revisit the extensively studied protein signaling data from Sachs et al. [18]. The dataset provides observational data of the activations of 11 proteins under various external stimuli. We combined measurements from the first 3 experiments, yielding a heterogeneous mix of 2,666 samples that are not expected to be an iid sample set. 
To make the inference more difficult, we selected a random fraction of 10% of the samples, yielding a final data matrix of size 266 × 11. We used the directed ground truth network and moralized the graph structure to obtain an undirected ground truth network. Parameter choice and stability selection were done as in the simulation study. Figure 2 shows the results. Analogous to the simulation setting, the Kronecker GLASSO model found true network links with greater accuracy than standard graphical lasso. These results suggest that our model is suitable to account for confounding variation as it occurs in real settings.

4.3 Large-scale application to yeast gene expression data

Next, we considered an application to large-scale gene expression profiling data from yeast. We revisited the dataset from Smith et al. [19], consisting of 109 genetically diverse yeast strains, each of which has been expression profiled in two environmental conditions (glucose and ethanol).

Figure 3: (a) Correlation coefficient between learned confounding factor and true environmental condition for different subsets of all features (genes). Compared are the standard GPLVM model with a linear covariance and our proposed model that accounts for low-rank confounders and sparse gene-gene relationships (Kronecker GLASSO). Kronecker GLASSO is able to better recover the hidden confounder by accounting for the covariance structure between genes. (b,c) Consistency of edges (GLASSO: 68%; Kronecker GLASSO: 74%) on the largest network with 1,000 nodes learnt on the joint dataset, comparing the results when combining both conditions with those for a single condition (glucose).

Because
Because of missing complete ground truth information, we could not evaluate the network reconstruction quality directly. An appropriate regularization parameter was selected by means of cross validation, evaluating the marginal likelihood on a test set (analogous to the procedure described in [10]). To simplify the comparison to the known confounding factor, we chose a fixed number of confounders that we set to K = 1. Recovery of the known confounder Figure 3a shows the r2 correlation coefficient between the inferred factor and the true environmental condition for increasing number of features (genes) that were used for learning. In particular for small numbers of genes, accounting for the network structure between genes improved the ability to recover the true confounding effect. Consistency of obtained networks Next, we tested the consistency when applying GLASSO and Kronecker GLASSO to data that combines both conditions, glucose and ethanol, comparing to the recovered network from a single condition alone (glucose). The respective networks are shown in Figures 3b and 3c. The Kronecker GLASSO model identifies more consistent edges, which shows the susceptibility of standard GLASSO to the confounder, here the environmental influence. 5 Conclusions and Discussion We have shown an efficient scheme for parameter learning in matrix-variate normal distributions with iid observation noise. By exploiting some linear algebra tricks, we have shown how hyperparameter optimization for the row and column covariances can be carried out without evaluating the prohibitive full covariance, thereby greatly reducing computational and memory complexity. To the best of our knowledge, these measures have not previously been proposed, despite their general applicability. As an application of our framework, we have proposed a method that accounts for confounding influences while estimating a sparse inverse covariance structure. 
Our approach extends the Graphical Lasso, generalizing the rigid assumption of iid samples to more general sample covariances. For this purpose, we employ a Kronecker product covariance structure and learn a low-rank covariance between samples, thereby accounting for potential confounding influences. We provided synthetic and real world examples where our method is of practical use, reducing the number of false positive edges learned. Acknowledgments This research was supported by the FP7 PASCAL II Network of Excellence. OS received funding from the Volkswagen Foundation. JM was supported by NWO, the Netherlands Organization for Scientific Research (VENI grant 639.031.036). 8 References [1] Y. Zhang and J. Schneider. Learning multiple tasks with a sparse matrix-normal penalty. In Advances in Neural Information Processing Systems, 2010. [2] E. Bonilla, K.M. Chai, and C. Williams. Multi-task gaussian process prediction. Advances in Neural Information Processing Systems, 20:153–160, 2008. [3] M.A. Alvarez and N.D. Lawrence. Computationally efficient convolved multiple output gaussian processes. Journal of Machine Learning Research, 12:1425–1466, 2011. [4] H. Wackernagel. Multivariate geostatistics: an introduction with applications. Springer Verlag, 2003. [5] G.I. Allen and R. Tibshirani. Inference with transposable data: Modeling the effects of row and column correlations. Arxiv preprint arXiv:1004.0209, 2010. [6] M. Lynch and B. Walsh. Genetics and Analysis of Quantitative Traits. Sinauer Associates Inc., U.S., 1998. [7] P. Dutilleul. The MLE algorithm for the matrix normal distribution. Journal of Statistical Computation and Simulation, 64(2):105–123, 1999. [8] K. Zhang, B. Sch¨olkopf, and D. Janzing. Invariant gaussian process latent variable models and application in causal discovery. In Uncertainty in Artificial Intelligence, 2010. [9] O. Banerjee, L. El Ghaoui, and A. d’Aspremont. 
Model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. Journal of Machine Learning Research, 9:485–516, 2008. [10] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432, 2008. [11] J.T. Leek and J.D. Storey. Capturing heterogeneity in gene expression studies by surrogate variable analysis. PLoS Genetics, 3(9):e161, 2007. [12] O. Stegle, L. Parts, R. Durbin, and J. Winn. A bayesian framework to account for complex non-genetic factors in gene expression levels greatly increases power in eqtl studies. PLoS Computational Biology, 6(5):e1000770, 2010. [13] C. Lippert, J. Listgarten, Y. Liu, C.M. Kadie, R.I. Davidson, and D. Heckerman. FaST linear mixed models for genome-wide association studies. Nature Methods, 8:833–835, 2011. [14] P. Men´endez, Y.A.I. Kourmpetis, C.J.F. Ter Braak, and F.A. van Eeuwijk. Gene regulatory networks from multifactorial perturbations using graphical lasso: Application to the dream4 challenge. PLoS One, 5(12):e14147, 2010. [15] N. Lawrence. Probabilistic non-linear principal component analysis with gaussian process latent variable models. Journal of Machine Learning Research, 6:1783–1816, 2005. [16] K.Y. Yeung and W.L. Ruzzo. Principal component analysis for clustering gene expression data. Bioinformatics, 17(9):763, 2001. [17] N. Meinshausen and P. B¨uhlmann. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4):417–473, 2010. [18] K. Sachs, O. Perez, D. Pe’er, D.A. Lauffenburger, and G.P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523, 2005. [19] E.N. Smith and L. Kruglyak. Gene–environment interaction in yeast gene expression. PLoS Biology, 6(4):e83, 2008. 9
|
2011
|
191
|
4,248
|
Confidence Sets for Network Structure David S. Choi School of Engineering and Applied Sciences Harvard University Cambridge, MA 02138 dchoi@seas.harvard.edu Patrick Wolfe School of Engineering and Applied Sciences Harvard University Cambridge, MA 02138 patrick@seas.harvard.edu Edoardo M. Airoldi Department of Statistics Harvard University Cambridge, MA 02138 airoldi@fas.harvard.edu Abstract Latent variable models are frequently used to identify structure in dichotomous network data, in part because they give rise to a Bernoulli product likelihood that is both well understood and consistent with the notion of exchangeable random graphs. In this article we propose conservative confidence sets that hold with respect to these underlying Bernoulli parameters as a function of any given partition of network nodes, enabling us to assess estimates of residual network structure, that is, structure that cannot be explained by known covariates and thus cannot be easily verified by manual inspection. We demonstrate the proposed methodology by analyzing student friendship networks from the National Longitudinal Survey of Adolescent Health that include race, gender, and school year as covariates. We employ a stochastic expectation-maximization algorithm to fit a logistic regression model that includes these explanatory variables as well as a latent stochastic blockmodel component and additional node-specific effects. Although maximumlikelihood estimates do not appear consistent in this context, we are able to evaluate confidence sets as a function of different blockmodel partitions, which enables us to qualitatively assess the significance of estimated residual network structure relative to a baseline, which models covariates but lacks block structure. 
1 Introduction Network datasets comprising edge measurements Aij ∈{0, 1} of a binary, symmetric, and antireflexive relation on a set of n nodes, 1 ≤i < j ≤n, are fast becoming of paramount interest in the statistical analysis and data mining literatures [1]. A common aim of many models for such data is to test for and explain the presence of network structure, primary examples being communities and blocks of nodes that are equivalent in some formal sense. Algorithmic formulations of this problem take varied forms and span many literatures, touching on subjects such as statistical physics [2, 3], theoretical computer science [4], economics [5], and social network analysis [6]. One popular modeling assumption for network data is to assume dyadic independence of the edge measurements when conditioned on a set of latent variables [7, 8, 9, 10]. The number of latent parameters in such models generally increases with the size of the graph, however, meaning that computationally intensive fitting algorithms may be required and standard consistency results may not always hold. As a result, it can often be difficult to assess statistical significance or quantify the uncertainty associated with parameter estimates. This issue is evident in literatures focused 1 on community detection, where common practice is to examine whether algorithmically identified communities agree with prior knowledge or intuition [11, 12]; this practice is less useful if additional confirmatory information is unavailable, or if detailed uncertainty quantification is desired. Confidence sets are a standard statistical tool for uncertainty quantification, but they are not yet well developed for network data. In this paper, we propose a family of confidence sets for network structure that apply under the assumption of a Bernoulli product likelihood. 
The form of these sets stems from a stochastic blockmodel formulation which reflects the notion of latent nodal classes, and they provide a new tool for the analysis of estimated or algorithmically determined network structure. We demonstrate usage of the confidence sets by analyzing a sample of 26 adolescent friendship networks from the National Longitudinal Survey of Adolescent Health (available at http://www.cpc.unc.edu/addhealth), using a baseline model that only includes explanatory covariates and heterogeneity in the nodal degrees. We employ these confidence sets to validate departures from this baseline model taking the form of residual community structure. Though the confidence sets we employ are conservative, we show that they are effective in identifying putative residual structure in these friendship network data. 2 Model Specification and Inference We represent network data via a sociomatrix A ∈{0, 1}N×N that reflects the adjacency structure of a simple, undirected graph on N nodes. In keeping with the latent variable network analysis literature, we assume entries {Aij} for i < j to be independent Bernoulli random variables with associated success probabilities {Pij}i<j, and complete A as a symmetric matrix with zeros along its main diagonal. The corresponding data log-likelihood is given by L(A; P) = X i<j Aij log(Pij) + (1 −Aij) log(1 −Pij), (1) where each Pij can itself be modeled as a function of latent as well as explanatory variables. Given an instantiation of A and a latent variable model for the probabilities {Pij}i<j, it is natural to seek a quantification of the uncertainty associated with estimates of these Bernoulli parameters. A standard approach in non-network settings is to posit a parametric model and then compute confidence intervals, for example by appealing to standard maximum-likelihood asymptotics. 
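Equation (1) is a sum over unordered node pairs i < j. A minimal numpy sketch of its evaluation (the function name `network_loglik` and the toy example are illustrative, not from the paper):

```python
import numpy as np

def network_loglik(A, P):
    """Bernoulli log-likelihood L(A; P) of Eq. (1), summed over pairs i < j."""
    iu = np.triu_indices(A.shape[0], k=1)   # upper-triangle indices, i < j
    a, p = A[iu], P[iu]
    return np.sum(a * np.log(p) + (1 - a) * np.log(1 - p))

# Toy check: with P_ij = 1/2 for all pairs, every graph on n = 4 nodes has
# log-likelihood (4 choose 2) * ln(1/2) = -6 ln 2, regardless of its edges.
n = 4
A = np.zeros((n, n)); A[0, 1] = A[1, 0] = 1.0   # one edge, symmetric sociomatrix
P = np.full((n, n), 0.5)
assert np.allclose(network_loglik(A, P), -6 * np.log(2))
```

Only the upper triangle enters the sum, matching the convention that A is symmetric with a zero diagonal.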
However, as mentioned earlier, the formulation of most latent variable network models dictates an increasing number of parameters as the number of network nodes grows; this amount of expressive power appears necessary to capture many salient characteristics of network data. As a result, standard asymptotic results do not necessarily apply, leaving open questions for inference.

2.1 A Logistic Regression Model for Network Structure

To illustrate the complexities that can arise in this inferential setting, we adopt a latent variable network model with a standard flavor: a logistic regression model that simultaneously incorporates aspects of blockmodels, additive effects, and explanatory variables (see [10] for a more general formulation). Specifically, we incorporate a K-class stochastic blockmodel component parameterized in terms of a symmetric matrix Θ ∈ R^{K×K} and a membership vector z ∈ {1, . . . , K}^N whose values denote the class of each node, with P_ij depending on Θ_{z_i z_j}. A vector of additional node-specific latent variables α is included to account for heterogeneity in the observed nodal degrees, along with a vector of regression coefficients β corresponding to explanatory variables x(i, j). Thus we obtain the log-odds parameterization

  \log \frac{P_{ij}}{1 − P_{ij}} = \Theta_{z_i z_j} + \alpha_i + \alpha_j + x(i, j)′\beta,   (2)

where we further enforce the identifiability constraint that \sum_i \alpha_i = 0.

2.2 Likelihood-Based Inference

Exact maximization of the log-likelihood L(A; z, Θ, α, β, x) is computationally demanding even for moderately large K and N, owing to the total number of nodal partitions induced by z. Algorithm 1 details a stochastic expectation-maximization (EM) algorithm to explore the likelihood space.

Algorithm 1: Stochastic Expectation-Maximization Fitting of model (2)
1. Set t = 0 and initialize (z^(0), Θ^(0), α^(0), β^(0)).
2.
For iteration t, do:
  E-step: Sample z^(t) ∝ exp{L(z | A; Θ^(t), α^(t), β^(t), x)} (e.g., via Gibbs sampling)
  M-step: Set (Θ^(t), α^(t), β^(t)) = argmax_{Θ,α,β} L(Θ, α, β | z^(t); A, x) (convex optimization)
3. Set t ← t + 1 and return to Step 2.

When α and β are fixed to zero, model (2) reduces to a re-parameterization of the standard stochastic blockmodel. Consistency results for this model have been developed for a range of conditions [7, 13, 14, 15, 16]. However, it is not clear how uncertainty in z and Θ should be quantified or even concisely expressed. In this vein, previous efforts to assess the robustness of fitted structure include [17], in which community partitions are analyzed under perturbations of the network, and [18], in which the behavior of local minima resulting from simulated annealing is examined; a likelihood-based test is proposed in [19] to compare sets of community divisions.

Without the blockmodel components z and Θ, the model of Eq. (2) reduces to a generalized linear model whose likelihood can be maximized by standard methods. If α is further constrained to equal 0, the model is finite-dimensional and standard asymptotic results for inference can be applied. Otherwise, the increasing dimensionality of α brings consistency into question, and in fact certain negative results are known for a related model, the p1 exponential random graph model [20]. Specifically, [21] reports that the maximum-likelihood estimator for the p1 model exhibits bias with magnitude equal to its variance. Although estimation error does converge asymptotically to zero for the p1 model, it is not known how to generate general confidence intervals or hypothesis tests; [22] prescribes reporting standard errors only as summary statistics, with no association to p-values. The predictions of [21] were replicated (reported below) when fitting simulated data drawn from the model of Eq.
(2) with parameters matched to observed characteristics of the Adolescent Health friendship networks.

Model selection techniques, such as out-of-sample prediction, are sometimes used to validate statistical network models. For example, [23] uses out-of-sample prediction to compare the stochastic blockmodel to other network models. We note that model selection techniques and the confidence estimates presented here are complementary. To choose the best model for the data, a model selection method should be used; however, if the parameter will be interpreted to draw conclusions about the data, a confidence estimate may be desired as well.

2.3 Confidence Sets for Network Structure

Instead of quantifying the uncertainty associated with estimates of the model parameters (z, Θ, α, β), we directly find confidence sets for the Bernoulli likelihood parameters {P_ij}_{i<j}. To this end, for any fixed K and class assignment z, define symmetric matrices \bar{Φ}, \hat{Φ} in [0, 1]^{K×K} element-wise for 1 ≤ a ≤ b as

  \bar{\Phi}^{(z)}_{ab} = \frac{1}{n_{ab}} \sum_{i<j} P_{ij} 1\{z_i = a, z_j = b\},    \hat{\Phi}^{(z)}_{ab} = \frac{1}{n_{ab}} \sum_{i<j} A_{ij} 1\{z_i = a, z_j = b\},

with n_ab denoting the maximum number of possible edges between classes a and b (i.e., the corresponding number of Bernoulli trials). Thus \bar{Φ}^{(z)}_{ab} is the expected proportion of edges between (or within, if a = b) classes a and b, under class assignment z, and \hat{Φ}^{(z)}_{ab} is its corresponding sample proportion estimator.

Intuitively, \bar{Φ}^{(z)} measures assortativity by z; whenever the sociomatrix A is unstructured, elements of \bar{Φ}^{(z)} should be nearly uniform for any choice of partition z. When strong community structure is present in A, however, these elements should instead be well separated for corresponding values of z. Thus, it is of interest to examine a confidence set that relates \hat{Φ}^{(z)}_{ab} to its expected value \bar{Φ}^{(z)}_{ab} for a range of partitions z.
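The sample proportion matrix \hat{Φ}^{(z)} (and, given P, its expectation \bar{Φ}^{(z)}) can be tabulated directly from the definitions above; a minimal sketch, using zero-indexed class labels for convenience (names are ours):

```python
import numpy as np

def phi_hat(M, z, K):
    """Class-pair proportion matrix: for each pair of classes a <= b,
    the average of M[i, j] over dyads i < j with {z_i, z_j} = {a, b},
    together with the dyad counts n_ab.  Passing the adjacency matrix A
    gives Phi-hat; passing the probability matrix P gives Phi-bar."""
    N = M.shape[0]
    total = np.zeros((K, K))
    n = np.zeros((K, K))
    for i in range(N):
        for j in range(i + 1, N):          # dyads i < j only
            a, b = sorted((z[i], z[j]))
            total[a, b] += M[i, j]
            n[a, b] += 1                   # number of Bernoulli trials
    phi = np.where(n > 0, total / np.maximum(n, 1), 0.0)
    return phi, n

# Toy example: 4 nodes, two classes of two nodes each
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
z = np.array([0, 0, 1, 1])
Phat, n = phi_hat(A, z, K=2)
```

Only the upper triangle of the returned matrices (a ≤ b) is meaningful, matching the element-wise definition in the text.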
To this end, we may define such a set by considering a weighted sum of the form \sum_{a≤b} n_{ab} D(\hat{Φ}^{(z)}_{ab} || \bar{Φ}^{(z)}_{ab}), where D(p||p′) = p log(p/p′) + (1 − p) log[(1 − p)/(1 − p′)] denotes the (nonnegative) Kullback–Leibler divergence of a Bernoulli random variable with parameter p′ from that of one with parameter p. A confidence set is then obtainable via direct application of the following theorem.

Theorem 1 ([14]) Let {A_ij}_{i<j} be comprised of \binom{N}{2} independent Bernoulli(P_ij) trials, and let Z = {1, . . . , K}^N. Then with probability at least 1 − δ,

  \sup_{z \in Z} \sum_{a \le b} n_{ab} D(\hat{\Phi}^{(z)}_{ab} \,\|\, \bar{\Phi}^{(z)}_{ab}) \le N \log K + (K^2 + K) \log\left(\frac{N}{K} + 1\right) + \log\frac{1}{\delta}.   (3)

Because Eq. (3) holds uniformly over all class assignments, we may choose to apply it directly to the value of z obtained from Algorithm 1; and because it does not assume any particular form of latent structure, we are able to avoid the difficulties associated with determining confidence sets directly for the parameters of latent variable models such as Eq. (2). However, it is important to note that this generality comes at a price: In simulation studies undertaken in [14] as well as those detailed below, the bound of Eq. (3) is observed to be loose by a multiplicative factor ranging from 3 to 7 on average.

2.4 Estimator Consistency and Confidence Sets

Recalling our above discussion of estimator consistency for the related p1 model, we undertook a small simulation study to investigate the consistency of maximum-likelihood (ML) estimation in a “baseline” version of model (2) with K = 1 and the corresponding (scalar) value of Θ set equal to zero.

Table 1: Empirical bias (with standard errors) of ML-estimated components of β under a baseline model, for the cases α = 0 versus α unconstrained. Note the change in estimated bias when α is included in the model.

Element of β | Θ = 0, α = 0 | Θ = 0 (α unconstrained)
Intercept | −0.001 (0.004) | 2.26 (0.070)
Gender | 0.003 (0.004) | −0.005 (0.004)
Race | −0.001 (0.004) | −0.03 (0.005)
Grade | 0.006 (0.003) | 0.04 (0.003)
We compared estimates for the cases α = 0 versus α unconstrained, using 500 graphs generated randomly from a model of the form specified in Eq. (2), based on School 8 of the Add Health dataset. The number of nodes N = 204 and the covariates x(i, j) matched those of School 8 in the Adolescent Health friendship network dataset, and the regression coefficient vector β = (−2.6, 0.025, 0.9, −1.6)′ was set to match the ML estimate of β for School 8, fitted via logistic regression with Θ = 0, α = 0. The covariates x(i, j) consisted of an intercept term, an indicator for whether students i and j shared the same gender, an indicator for shared race, and their difference in school grade. The inclusion of α in the model of Eq. (2) appears to give rise to a loss of estimator consistency, as shown in Table 1, where the empirical bias of each component of β is reported. This suggests, as we alluded to above, that inferential conclusions based on parameter estimates from latent variable models should be interpreted with caution.

To explore the tightness of the confidence sets given by the bound in Eq. (3), we fitted the full model specified in Eq. (2) with K in the range 2–6 to 50 draws from a restricted version of the model corresponding to each of the 26 schools in our dataset. In the same manner described above, each simulated graph shared the same size and covariates as its corresponding school in the dataset, with β fixed to its ML-fitted value with Θ = 0, α = 0. The empirical divergence term \sum_{a≤b} n_{ab} D(\hat{Φ}^{(z)}_{ab} || \bar{Φ}^{(z)}_{ab}) under the approximate ML partition determined via Algorithm 1 was then tabulated for each of these 1300 fits, and compared to its 95% confidence upper bound given by Eq. (3). The empirical divergences are reported in the histogram of Fig. 1 as a fraction of the upper bound. It may be seen from Fig. 1 that the largest divergence observed was less than 41% of its corresponding bound, with 95% of all divergences less than 22% of their corresponding bound.
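The divergence statistic and the right-hand side of Eq. (3) are simple to compute; a sketch under our reading of the bound (the clipping constant `eps` is ours, added for numerical stability, and is not part of the theorem):

```python
import numpy as np

def bern_kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between Bernoulli
    parameters, clipped away from {0, 1} for numerical stability."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def divergence_stat(phi_hat, phi_bar, n):
    """Weighted sum over class pairs a <= b of n_ab * D(phi_hat || phi_bar)."""
    iu = np.triu_indices(phi_hat.shape[0])  # pairs with a <= b
    return float(np.sum(n[iu] * bern_kl(phi_hat[iu], phi_bar[iu])))

def theorem1_bound(N, K, delta=0.05):
    """Right-hand side of Eq. (3) at confidence level 1 - delta."""
    return N * np.log(K) + (K**2 + K) * np.log(N / K + 1) + np.log(1 / delta)
```

The observed statistic can then be reported as a fraction of the bound, as in Fig. 1, e.g. `divergence_stat(Phat, Pbar, n) / theorem1_bound(204, 2)`.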
Figure 1: Divergence terms \sum_{a≤b} n_{ab} D(\hat{Φ}^{(z)}_{ab} || \bar{Φ}^{(z)}_{ab}) as fractions of 95% confidence set values, shown for approximate maximum-likelihood fits to 1300 randomly generated graphs matched to the 26-school friendship network dataset.

This analysis provides an indication of how inflated the confidence set sizes are expected to be in practice; while conservative in nature, they seem usable for practical situations.

3 Analysis of Adolescent Health Networks

The National Longitudinal Study of Adolescent Health (Add Health) is a study of adolescents in the United States. To date, four waves of surveys have been collected over the course of fifteen years. Many statistical studies have been performed using the data to explore a variety of social and health issues.¹ For example, [24, 25] discuss effects of racial diversity on community formation across schools. Here we examine the schools individually to find residual block structure not explained by gender, race, or grade. Since we will be unable to verify such blocks by checking against explanatory variables, we rely on the confidence sets developed above to assess the significance of the discovered block structure.

Our approach is as follows. As discussed in Section 2.3, Eq. (3) enables us to calculate confidence sets with respect to the Bernoulli parameters {P_ij} for any class membership vector z, in terms of the corresponding sample proportion matrices \hat{Φ}^{(z)}. Then, by comparing values of \hat{Φ}^{(z)} to a baseline model obtained by fitting K = 1, Θ = 0 (thus removing the stochastic block component from Eq. (2)), we may evaluate whether or not the observed sample counts are consistent with the structure predicted by the baseline model. This procedure provides a kind of notional p-value to qualitatively assess the significance of the residual structure induced by any choice of z.
3.1 Model Checking

We first fit model (2) with Θ = 0 and α = 0, since it reduces to a logistic regression with explanatory variables x(i, j), for which standard asymptotic results apply. The parameter fits were examined and an analysis of deviance was conducted. The fits were observed to be well behaved in this setting; estimates of β and their corresponding standard errors indicate a clustering effect by grade that is stronger than that of either shared gender or race. An analysis of deviance, in which each variable was withheld from the model in turn, led to similar conclusions: Average deviances across the 26 schools were −69, −238, and −3760 for gender, race, and grade respectively, with p-values below 0.05 indicating significance in all but 3, 7, and 0 of the schools for each of the respective covariates; these schools had small numbers of students, with a maximum N of 108.

When α was re-introduced into the model of Eq. (2), its components were observed to correlate highly with the sequence of observed nodal degrees in the data, as expected. (Recall that consistency results are not known for this model, so that p-values cannot be associated with deviances or standard errors; however, in our simulations the maximum-likelihood estimates showed moderate errors, as discussed in Section 2.4.) For two of the schools, the resulting model was degenerate, whereas for the remaining schools the α-degree correlation had a range of 0.78–0.94 and a median value of 0.89.

¹For a bibliography, see http://www.cpc.unc.edu/projects/addhealth/pubs.

Figure 2 (panels: (a) K = 2, (b) K = 4, (c) K = 6): Student counts resulting from a stochastic blockmodel fit, arranged by latent block and school year (grade) for School 6. The inferred block structure approximately aligns with the grade covariate (which was not included in this model).

Estimates of β did not undergo qualitatively significant changes from their earlier values when the restriction α = 0 was lifted.
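With Θ = 0 and α = 0, fitting the baseline model is an ordinary logistic regression over dyads. A self-contained gradient-ascent sketch on hypothetical synthetic data (the covariate and coefficient values here are illustrative, not the paper's):

```python
import numpy as np

def fit_dyad_logistic(X, y, iters=2000, lr=0.5):
    """Baseline model of Eq. (2) with Theta = 0, alpha = 0: logistic
    regression of edge indicators y on dyad covariates X (one row per
    dyad i < j), fitted by full-batch gradient ascent on Eq. (1)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # current P_ij
        beta += lr * X.T @ (y - p) / len(y)   # mean log-likelihood gradient
    return beta

# Illustrative data: edges are more likely when two students share a grade.
rng = np.random.default_rng(0)
same_grade = rng.integers(0, 2, size=2000).astype(float)
X = np.column_stack([np.ones(2000), same_grade])  # intercept + covariate
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 1.5 * same_grade)))
y = rng.binomial(1, p_true).astype(float)
beta_hat = fit_dyad_logistic(X, y)  # approximately recovers (-2.0, 1.5)
```

In practice one would use a standard GLM routine; the point is only that the baseline likelihood is concave and poses no special fitting difficulty.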
A “pure” stochastic blockmodel (α = 0, β = 0) was fitted to our data over the range K ∈ {2, . . . , 6}, to observe whether the resulting block structure replicates that of any of the known covariates. Figure 2 shows counts of students by latent class (under the approximate maximum-likelihood estimate of z) and grade for School 6; it can be seen that the recovered grouping of students by latent class is closely aligned with the grade covariate, particularly for grades 7–10.

3.2 Residual Block Structure

We now report on the assessment of residual block structure in the Adolescent Health friendship network data. Recalling that the confidence sets obtained with Eq. (3) hold uniformly over all partitions of equal size, independently of how they are chosen, we may freely modify the fitting procedure of Algorithm 1 to obtain partitions that exhibit the greatest degree of structure. Bearing in mind the high observed α-degree correlation discussed above, we replaced the latent variable vector α in the model of Eq. (2) with a more parsimonious categorical covariate determined by grouping the observed network degrees according to the ranges 0–3, 4–7, and 8–∞. We also expanded the covariates by giving each race and grade pairing its own indicator function. These modifications would be inappropriate for the baseline model, as dyadic independence conditioned on the covariates would be lost and standard errors for β would be larger; however, the changes were useful for improving the output of Algorithm 1 without invalidating Eq. (3). Fig. 3 depicts partitions for which the observed \hat{Φ}^{(z)}, fitted for various K > 1 using the modified version of Algorithm 1 detailed above, is “far” from its nominal value under the baseline model fitted with K = 1, in the sense that the corresponding 95% Bonferroni-corrected confidence set bound is exceeded.
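The degree-binning step described above (replacing α by a categorical covariate over the ranges 0–3, 4–7, and 8–∞) is a one-liner; a minimal sketch:

```python
import numpy as np

def degree_bins(A):
    """Categorical degree covariate used in place of alpha: bin each
    node's observed degree into the ranges 0-3, 4-7, and 8+."""
    deg = A.sum(axis=1)
    # np.digitize returns 0 for deg <= 3, 1 for 4 <= deg <= 7, 2 for deg >= 8
    return np.digitize(deg, bins=[4, 8])

# Example: an empty graph (all degrees 0) falls in the lowest bin, and
# a complete graph on 9 nodes (all degrees 8) in the highest.
empty = np.zeros((5, 5), dtype=int)
K9 = np.ones((9, 9), dtype=int) - np.eye(9, dtype=int)
```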
We observe that in each partition, the number of apparently visible communities exceeds K, and these communities comprise small numbers of students. This effect is due to the intersection of grade and z-induced clustering. We take as our definition of nominal value the quantity \bar{Φ}^{(z)} computed under the baseline model, which we denote by Φ^{(z)}. Table 2 lists the normalized divergence terms \binom{N}{2}^{-1} \sum_{a≤b} n_{ab} D(\hat{Φ}^{(z)}_{ab} || Φ^{(z)}_{ab}), Bonferroni-corrected 95% confidence bounds, and measures of alignment between the corresponding partitions z and the explanatory variables. The alignment with the covariates is small, as measured by the Jaccard similarity coefficient and the ratio of within-class to total variance², signifying the residual quality of the partitions, while the relatively large divergence terms signify that the Bonferroni-corrected confidence set bounds for each school have been met or exceeded.

²The alignment scores are defined as follows. The Jaccard similarity coefficient is defined as |A ∩ B|/|A ∪ B|, where A, B ⊂ [N]² are the student pairings sharing the same latent class or the same covariate value, respectively. See [12] for further network-related discussion. Variance ratio denotes the within-class degree variance divided by the total variance, averaged over all classes.

School | Students | Edges | K | Div. (Bound) | Jaccard: Gender | Jaccard: Race | Jaccard: Grade | Variance ratio: Degree
10 | 678 | 2795 | 6 | 0.0064 (0.0062) | 0.14 | 0.16 | 0.097 | 0.93
18 | 284 | 1189 | 5 | 0.0150 (0.0150) | 0.17 | 0.19 | 0.14 | 0.88
21 | 377 | 1531 | 6 | 0.0140 (0.0120) | 0.15 | 0.16 | 0.12 | 0.95
22 | 614 | 2450 | 5 | 0.0064 (0.0061) | 0.18 | 0.14 | 0.11 | 0.99
26 | 551 | 2066 | 3 | 0.0049 (0.0045) | 0.25 | 0.21 | 0.13 | 0.99
29 | 569 | 2534 | 6 | 0.0091 (0.0075) | 0.15 | 0.16 | 0.10 | 0.88
38 | 521 | 1925 | 5 | 0.0073 (0.0073) | 0.17 | 0.18 | 0.17 | 0.86
55 | 336 | 803 | 4 | 0.0100 (0.0100) | 0.20 | 0.18 | 0.21 | 0.97
56 | 446 | 1394 | 6 | 0.0120 (0.0099) | 0.15 | 0.14 | 0.15 | 0.98
66 | 644 | 2865 | 6 | 0.0069 (0.0066) | 0.15 | 0.16 | 0.099 | 0.91
67 | 456 | 926 | 3 | 0.0055 (0.0055) | 0.25 | 0.23 | 0.25 | 1.00
72 | 352 | 1398 | 4 | 0.0099 (0.0095) | 0.21 | 0.21 | 0.12 | 0.96
78 | 432 | 1334 | 6 | 0.0100 (0.0100) | 0.15 | 0.12 | 0.15 | 0.98
80 | 594 | 1745 | 4 | 0.0054 (0.0053) | 0.20 | 0.19 | 0.15 | 0.99

Table 2: Block structure assessments corresponding to Fig. 3. Small Jaccard coefficient values (for gender, race, and grade) and variance ratios approaching 1 for degree indicate a lack of alignment with covariates, and hence the identification of residual structure in the corresponding partition.

We note that the usage of covariate information was necessary to detect small student groups; without the incorporation of grade effects, we would require a much larger value of K for Algorithm 1 to detect the observed network structure (a concern noted by [23] in the absence of covariates), which in turn would inflate the confidence set, leading to an inability to validate the observed structure against that predicted by a baseline model.

4 Concluding Remarks

In this article we have developed confidence sets for assessing inferred network structure, by leveraging our result derived in [14]. We explored the use of these confidence sets with an application to the analysis of Adolescent Health survey data comprising friendship networks from 26 schools. Our methodology can be summarized as follows. In lieu of a parametric model, we assume dyadic independence with Bernoulli parameters {P_ij}.
We introduced a baseline model (K = 1) that incorporates degree and covariate effects, without block structure. Algorithm 1 was then used to find highly assortative partitions of students which are also far from partitions induced by the explanatory covariates in the baseline model. Differences in assortativity were quantified by an empirical divergence statistic, which was compared to an upper bound computed from Eq. (3) to check for significance and to generate confidence sets for {Pij}. While the upper bound in Eq. (3) is known to be loose, simulation results in Figure 1 suggest that the slack is moderate, leading to useful confidence sets in practice. In our procedure, we cannot quantify the uncertainty associated with the estimated baseline model, since the parameter estimates lack consistency. As a result, we cannot conduct a formal hypothesis test for Θ = 0. However, for a baseline model where the MLE is known to be consistent, we conjecture that such a hypothesis test should be possible by incorporating the confidence set associated with the MLE. Despite concerns regarding estimator consistency in this and other latent variable models, we were able to show that the notion of confidence sets may instead be used to provide a (conservative) measure of residual block structure. We note that many open questions remain, and are hopeful that this analysis may help to shed light on some important current issues facing practitioners and theorists alike in statistical network analysis. 
Figure 3 (panels (a)–(n): School 10, K = 6; School 18, K = 5; School 21, K = 6; School 22, K = 5; School 26, K = 3; School 29, K = 6; School 38, K = 5; School 55, K = 4; School 56, K = 6; School 66, K = 6; School 67, K = 3; School 72, K = 4; School 78, K = 6; School 80, K = 4): Adjacency matrices for schools exhibiting residual block structure as described in Section 3.2, with nodes ordered by grade (solid lines) and corresponding latent classes (dotted lines).

References

[1] A. Goldenberg, A. X. Zheng, S. E. Fienberg, and E. M. Airoldi, “A survey of statistical network models”, Foundations and Trends in Machine Learning, vol. 2, pp. 1–117, Feb. 2010.
[2] R. Albert and A. L. Barabasi, “Statistical mechanics of complex networks”, Reviews of Modern Physics, vol. 74, no. 47, Jan. 2002.
[3] M. E. J. Newman, “The structure and function of complex networks”, SIAM Review, vol. 45, pp. 167–256, June 2003.
[4] C. Cooper and A. M. Frieze, “A general model of web graphs”, Random Structures and Algorithms, vol. 22, no. 3, pp. 311–335, Mar. 2003.
[5] M. O. Jackson, Social and Economic Networks, Princeton University Press, 2008.
[6] S. Wasserman and K. Faust, Social Network Analysis: Methods and Applications, Cambridge University Press, Cambridge, U.K., 1994.
[7] T. A. B. Snijders and K. Nowicki, “Estimation and prediction for stochastic blockmodels for graphs with latent block structure”, J. Classif., vol. 14, pp. 75–100, Jan. 1997.
[8] M. S. Handcock, A. E. Raftery, and J. M. Tantrum, “Model-based clustering for social networks”, J. R. Stat. Soc. A, vol. 170, pp. 301–354, Mar. 2007.
[9] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing, “Mixed membership stochastic blockmodels”, J. Mach. Learn. Res., vol. 9, pp. 1981–2014, June 2008.
[10] P. D. Hoff, “Multiplicative latent factor models for description and prediction of social networks”, Computational Math. Organization Theory, vol. 15, pp. 261–272, Dec. 2009.
[11] M. E. J. Newman, “Modularity and community structure in networks”, Proc. Natl Acad. Sci. U.S.A., vol. 103, pp. 8577–8582, June 2006.
[12] A. L. Traud, E. D. Kelsic, P. J. Mucha, and M. A. Porter, “Comparing community structure to characteristics in online collegiate social networks”, SIAM Rev., 2011, to appear.
[13] P. J. Bickel and A. Chen, “A nonparametric view of network models and Newman–Girvan and other modularities”, Proc. Natl Acad. Sci. U.S.A., vol. 106, pp. 21068–21073, Dec. 2009.
[14] D. S. Choi, P. J. Wolfe, and E. M. Airoldi, “Stochastic blockmodels with growing numbers of classes”, Biometrika, 2011, to appear.
[15] K. Rohe, S. Chatterjee, and B. Yu, “Spectral clustering and the high-dimensional stochastic blockmodel”, Ann. Stat., 2011, to appear.
[16] A. Celisse, J. J. Daudin, and L. Pierre, “Consistency of maximum-likelihood and variational estimators in the stochastic block model”, arXiv preprint 1105.3288, 2011.
[17] B. Karrer, E. Levina, and M. E. J. Newman, “Robustness of community structure in networks”, Phys. Rev. E, vol. 77, pp. 46119–46128, Apr. 2008.
[18] C. P. Massen and J. P. K. Doye, “Thermodynamics of community structure”, arXiv preprint cond-mat/0610077, 2006.
[19] J. Copic, M. O. Jackson, and A. Kirman, “Identifying community structures from network data via maximum likelihood methods”, B.E. J. Theoretical Economics, vol. 9, Sept. 2009.
[20] P. W. Holland and S. Leinhardt, “An exponential family of probability distributions for directed graphs”, J. Am. Stat. Assoc., vol. 76, pp. 33–50, Mar. 1981.
[21] S. J. Haberman, “Comment on Holland and Leinhardt”, J. Am. Stat. Assoc., vol. 76, pp. 60–62, Mar. 1981.
[22] S. Wasserman and S. O. L. Weaver, “Statistical analysis of binary relational data: parameter estimation”, J. Math. Psychol., vol. 29, pp. 406–427, Dec. 1985.
[23] P. D. Hoff, “Modeling homophily and stochastic equivalence in symmetric relational data”, in Adv. in Neural Information Processing Systems, pp. 657–664, MIT Press, 2008.
[24] S. M. Goodreau, J. A. Kitts, and M. Morris, “Birds of a feather, or friend of a friend? Using exponential random graph models to investigate adolescent social networks”, Demography, vol. 46, pp. 103–125, Feb. 2009.
[25] M. C. González, H. J. Herrmann, J. Kertész, and T. Vicsek, “Community structure and ethnic preferences in school friendship networks”, Physica A, vol. 379, no. 1, pp. 307–316, 2007.
Structural equations and divisive normalization for energy-dependent component analysis

Jun-ichiro Hirayama, Dept. of Systems Science, Graduate School of Informatics, Kyoto University, 611-0011 Uji, Kyoto, Japan
Aapo Hyvärinen, Dept. of Mathematics and Statistics, Dept. of Computer Science and HIIT, University of Helsinki, 00560 Helsinki, Finland

Abstract

Components estimated by independent component analysis and related methods are typically not independent in real data. A very common form of nonlinear dependency between the components is correlations in their variances or energies. Here, we propose a principled probabilistic model for the energy-correlations between the latent variables. Our two-stage model includes a linear mixing of latent signals into the observed ones, as in ICA. The main new feature is a model of the energy-correlations based on the structural equation model (SEM), in particular a Linear Non-Gaussian SEM. The SEM is closely related to divisive normalization, which effectively reduces energy correlation. Our new two-stage model enables estimation of both the linear mixing and the interactions related to energy-correlations, without resorting to approximations of the likelihood function or other non-principled approaches. We demonstrate the applicability of our method with a synthetic dataset, natural images, and brain signals.

1 Introduction

Statistical models of natural signals have provided a rich framework for describing how sensory neurons process and adapt to ecologically valid stimuli [28, 12]. In early studies, independent component analysis (ICA) [2, 31, 13] and sparse coding [22] successfully showed that V1 simple-cell-like edge filters, or receptive fields, emerge as optimal inference on latent quantities under linear generative models trained on natural image patches. In the subsequent developments over the last decade, many studies (e.g.
[10, 32, 11, 14, 23, 17]) have focused explicitly or implicitly on modeling a particular type of nonlinear dependency between the responses of the linear filters, namely correlations in their variances or energies. Some of them showed that models of energy-correlation could account for, e.g., response properties of V1 complex cells [10, 15], cortical topography [11, 23], and contrast gain control [26]. Interestingly, such energy correlations are also prominent in other kinds of data, including brain signals [33] and presumably even financial time series, which have strong heteroscedasticity. Thus, developing a general model for energy-correlations of linear latent variables is an important problem in the theory of machine learning, and such models are likely to have a wide domain of applicability.

Here, we propose a new statistical model incorporating energy-correlations within the latent variables. Our two-stage model includes a linear mixing of latent signals into the observed ones, as in ICA, and a model of the energy-correlations based on the structural equation model (SEM) [3], in particular the Linear Non-Gaussian (LiNG) SEM [27, 18] developed recently. As a model of natural signals, an important feature of our model is its connection to “divisive normalization” (DN) [7, 4, 26], which effectively reduces the energy-correlations of linearly-transformed natural signals [32, 26, 29, 19, 21] and is now part of a well-accepted model of V1 single-cell responses [12]. We provide a new generative interpretation of DN based on the SEM, which is an important contribution of this work. Also, from a machine learning perspective, causal analysis using SEMs has recently become very popular; our model could extend the applicability of LiNG-SEM to blindly mixed signals. As a two-stage extension of ICA, our model is also closely related to both the scale-mixture-based models, e.g. [11, 30, 14] (see also [32]), and the energy-based models, e.g. [23, 17].
An advantage of our new model is its tractability: it requires neither an approximation of the likelihood function nor non-canonical principles for modeling and estimation, unlike previous models.

2 Structural equation model and divisive normalization

A structural equation model (SEM) [3] of a random vector y = (y_1, y_2, . . . , y_d)⊤ is formulated as simultaneous equations of random variables, such that

  y_i = κ_i(y_i, y_{−i}, r_i),   i = 1, 2, . . . , d,   (1)

or y = κ(y, r), where the function κ_i describes how each single variable y_i is related to the other variables y_{−i}, possibly including itself, and a corresponding stochastic disturbance or external input r_i which is independent of y. These equations, called structural equations, specify the distribution of y, as y is an implicit function (assuming the system is invertible) of the random vector r = (r_1, r_2, . . . , r_d)⊤. If there exists a permutation Π : y ↦ y′ such that each y′_i only depends on the preceding ones {y′_j | j < i}, an SEM is called recursive or acyclic, and is associated with a directed acyclic graph (DAG); the model is then a cascade of (possibly) nonlinear regressions of the y_i's on the preceding variables on the graph, and can also be seen as a Bayesian network. Otherwise, the SEM is called non-recursive or cyclic, in which case the structural equations cannot simply be decomposed into regression models. In a standard interpretation, a cyclic SEM instead describes the distribution of equilibrium points of a dynamical system, y(t) = κ(y(t − 1), r) (t = 0, 1, . . .), where every realized input r is held fixed until y(t) converges to y [24, 18]; some conditions are usually needed to make this interpretation valid.

2.1 Divisive normalization as non-linear SEM

Now, we briefly point out the connection of the SEM to DN, which strongly motivated us to explore the application of SEMs to natural signal statistics. Let s_1, s_2, . . . , s_d be the scalar-valued outputs of d linear filters applied to a multivariate input, collectively written as s = (s_1, s_2, . . .
, s_d)⊤. The linear filters may either be derived/designed from mathematical principles (e.g. wavelets) or be learned from data (e.g. ICA). The outputs of linear filters often have the property that their energies ϕ(|s_i|) (i = 1, 2, . . . , d) have non-negligible dependencies or correlations with each other, even when the outputs themselves are linearly uncorrelated. The nonlinear function ϕ is any appropriate measure of energy, typically given by the squaring function, i.e. ϕ(|s|) = s², though other choices are not excluded; we assume ϕ is continuously differentiable and strictly increasing over [0, ∞), with ϕ(0) = 0.

Divisive Normalization (DN) [26] is an effective nonlinear transformation for eliminating the energy-dependencies that remain in the filtered outputs. Although several variants have been proposed, a basic form can be formulated as follows: given the d outputs, their energies are normalized (divided) by a linear combination of the energies of the other signals, such that

  z_i = \frac{ϕ(|s_i|)}{\sum_j h_{ij} ϕ(|s_j|) + h_{i0}},   i = 1, 2, . . . , d,   (2)

where h_ij and h_i0 are real-valued parameters of this transform. Now, it is straightforward to see that the following structural equations in the log-energy domain,

  y_i := \ln ϕ(|s_i|) = \ln\Big( \sum_j h_{ij} \exp(y_j) + h_{i0} \Big) + r_i,   i = 1, 2, . . . , d,   (3)

correspond to Eq. (2), where z_i = exp(r_i) is another representation of the disturbance. The SEM will typically be cyclic, since the coefficients h_ij in Eq. (2) are seldom constrained to satisfy acyclicity; Eq. (3) thus implies a nonlinear dynamical system, and this can be interpreted as the data-generating process underlying DN. Interestingly, Eq. (3) also implies a linear system with multiplicative input, e^{y_i} = (\sum_j h_{ij} e^{y_j} + h_{i0}) z_i, in the energy domain, with e^{y_i} := ϕ(|s_i|). The DN transform of Eq. (2) gives the optimal mapping under the SEM to infer the disturbance from given s_i's; if the true disturbances are independent, it optimally reduces the energy-dependencies.
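The basic DN transform of Eq. (2) is easy to state in code; a minimal sketch with illustrative (not fitted) parameters:

```python
import numpy as np

def divisive_normalization(s, H, h0, phi=np.square):
    """Eq. (2): each energy phi(|s_i|) is divided by a linear
    combination of the energies plus a constant h_i0."""
    e = phi(np.abs(s))
    return e / (H @ e + h0)

# Illustrative parameters for d = 2 with the squaring energy measure
s = np.array([1.0, 2.0])
H = np.array([[0.0, 0.5],
              [0.5, 0.0]])
h0 = np.array([1.0, 1.0])
z = divisive_normalization(s, H, h0)  # energies [1, 4] -> [1/3, 8/3]
```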
This is consistent with the redundancy reduction view of DN [29, 19]. Note also that the SEM above implies e^y = (I − diag(z)H)^{−1} diag(h0) z, with H = (h_ij) and h0 = (h_i0), as shown in [20] in the context of DN.¹ Although mathematically equivalent, such a complicated dependence [20] on the disturbance z does not provide an elegant model of the underlying data-generating process, compared to the relatively simple form of Eq. (3).

3 Energy-dependent ICA using structural equation model

Now, we define a new generative model which captures energy dependencies of linear latent components using an SEM.

3.1 Scale-mixture model

Let s now be a random vector of d source signals underlying an observation x = (x_1, x_2, ..., x_d)^T, which for simplicity has the same dimensionality. They follow a standard linear generative model:

x = A s,  (4)

where A is a square mixing matrix. We assume E[x] = E[s] = 0 without loss of generality, by always subtracting the sample mean from every observation. Then, assuming A is invertible, each transposed row w_i of the demixing (filtering) matrix W = A^{−1} gives the optimal filter to recover s_i from x; it is constrained to have unit norm, ‖w_i‖₂² = 1, to fix the scaling ambiguity. To introduce energy correlations into the sources, a classic approach is a scale-mixture representation of the sources, s_i = u_i σ_i, where u_i is a normalized signal having zero mean and constant variance, and σ_i is a positive factor that is independent of u_i and modulates the variance (energy) of s_i [32, 11, 30, 14, 16]. In vector notation, we write

s = u ⊙ σ,  (5)

where ⊙ denotes component-wise multiplication. Here, u and σ are mutually independent, and the u_i are also independent of each other. Then E[s | σ] = 0 and E[s s^T | σ] = diag(σ_1², σ_2², ..., σ_d²) for any given σ, where the σ_i may be dependent on each other and thereby introduce energy correlations.
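The scale-mixture construction of Eq. (5) is easy to simulate. The sketch below, with illustrative parameter values not taken from the paper, generates two sources whose variance modulators σ_i share a common factor: the sources come out linearly uncorrelated, yet their energies are clearly correlated.

```python
import numpy as np

# Scale-mixture sources s = u * sigma (Eq. 5): the modulators sigma_i share a
# common log-variance factor, so the sources have correlated energies even
# though they are linearly uncorrelated. Parameter values are illustrative.
rng = np.random.default_rng(1)
n = 200_000
common = rng.standard_normal(n)                       # shared log-variance factor
sigma = np.exp(0.5 * common + 0.3 * rng.standard_normal((2, n)))
u = rng.choice([-1.0, 1.0], size=(2, n))              # normalized signals (binary, as in Sec. 3.2)
s = u * sigma

corr_linear = np.corrcoef(s)[0, 1]                    # approximately zero
corr_energy = np.corrcoef(s ** 2)[0, 1]               # clearly positive
assert abs(corr_linear) < 0.05
assert corr_energy > 0.2
```

Because the signs u_i are independent of everything else, E[s_1 s_2] = E[u_1]E[u_2]E[σ_1 σ_2] = 0 exactly, while the shared factor in σ induces the energy correlation.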
A drawback of this approach is that to learn the model effectively from the likelihood, some approximation is usually needed to deal with the marginalization over u.

3.2 Linear Non-Gaussian SEM

Here, we simplify the above scale-mixture model by restricting u_i to be binary, i.e. u_i ∈ {−1, 1}, and uniformly distributed. Although this simplification reduces the flexibility of the source distribution, the resulting model is tractable, i.e. no approximation is needed for likelihood computation, as will be shown below. It also implies that u_i = sign(s_i) and σ_i = |s_i|, so the log-energy above now has a simple deterministic relation to σ_i, namely y_i = ln ϕ(σ_i), which can be inverted to σ_i = ϕ^{−1}(exp(y_i)). We particularly assume the log-energies y_i follow the Linear Non-Gaussian (LiNG) [27, 18] SEM:

y_i = Σ_j h_ij y_j + h_i0 + r_i,  i = 1, 2, ..., d,  (6)

where the disturbances are zero-mean and in particular assumed to be non-Gaussian and independent of each other, which has been shown to greatly improve the identifiability of linear SEMs [27]; the interaction structure in Eq. (6) can be represented by a directed graph for which the matrix H = (h_ij) serves as the weighted adjacency matrix. In the energy domain, Eq. (6) is equivalent to e^{y_i} = (∏_j (e^{y_j})^{h_ij}) e^{h_i0} z_i (i = 1, 2, ..., d), and interestingly, these SEMs further imply a novel form of DN transform, given by

z_i = ϕ(|s_i|) / ( e^{h_i0} ∏_j ϕ(|s_j|)^{h_ij} ),  i = 1, 2, ..., d,  (7)

where the denominator is now multiplicative rather than additive. It provides an interesting alternative to the original DN. To recapitulate the new generative model proposed here: 1) the log-energies y are generated according to the SEM in Eq. (6); 2) the sources are generated according to Eq. (5) with σ_i = ϕ^{−1}(exp(y_i)) and random signs u_i; and 3) the observation x is obtained by linearly mixing the sources as in Eq. (4). In our model, the optimal mapping to infer z_i = exp(r_i) from x is the linear filtering W followed by the new DN transform, Eq. (7). On the other hand, it would also be possible to define energy-dependent ICA using the nonlinear SEM in Eq. (3) instead; the optimal inference would then be given by the divisive normalization in Eq. (2). However, estimation and other theoretical issues (e.g. identifiability) related to nonlinear SEMs, particularly with non-Gaussian disturbances, are quite involved and still under development, e.g. [8].

¹ To be precise, [20] showed the invertibility of the entire mapping s ↦ z in the case of a “signed” DN transform that keeps the signs of z_i and s_i the same.

3.3 Identifiability issues

Both the theory and algorithms related to LiNG largely coincide with those of ICA, since Eq. (6) with non-Gaussian r implies the generative model of ICA, y = B r + b0, where B = (I − H)^{−1} and b0 = B h0 with h0 = (h_i0). Like ICA [13], Eq. (6) is not completely identifiable, due to ambiguities related to scaling (with signs) and permutation [27, 18]. To fix the scaling, we set E[r r^T] = I here. The permutation ambiguity is more serious than in ICA, because a row-permutation of H completely changes the structure of the corresponding directed graph; it is typically addressed by constraining the graph structure, as discussed next. Two classes of LiNG-SEM have been proposed, corresponding to different constraints on the graph structure. One is LiNGAM [27], which ensures full identifiability by the DAG constraint. The other is generally referred to as LiNG [18], which allows general cyclic graphs; the “LiNG discovery” algorithm in [18] dealt with the non-identifiability of cyclic SEMs by finding multiple solutions that give the same distribution. Here we define two variants of our model. One is the acyclic model, using LiNGAM.
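The three-step generative process above, together with its inverse mapping (linear filtering followed by the multiplicative DN transform of Eq. (7)), can be sketched end to end. The following assumes ϕ(|s|) = |s| (so σ = exp(y), as in the paper's simulations); the matrices H, h0, and A are illustrative choices, not values from the paper:

```python
import numpy as np

# Sampling from the proposed generative model (Sec. 3.2) with phi(|s|) = |s|.
rng = np.random.default_rng(2)
d, n = 3, 1000
H = np.array([[0.0, 0.3, 0.0],
              [0.3, 0.0, 0.3],
              [0.0, 0.3, 0.0]])                     # symmetric interaction weights
h0 = np.zeros(d)
A = rng.standard_normal((d, d))                     # square mixing matrix

r = rng.laplace(size=(d, n)) / np.sqrt(2)           # non-Gaussian, zero-mean, unit-variance
y = np.linalg.solve(np.eye(d) - H, r + h0[:, None]) # step 1: log-energies (Eq. 6)
u = rng.choice([-1.0, 1.0], size=(d, n))            # random signs
s = u * np.exp(y)                                   # step 2: sources (Eq. 5), sigma = exp(y)
x = A @ s                                           # step 3: observations (Eq. 4)

# Inference: linear filtering W = A^{-1}, then the multiplicative DN of Eq. (7),
# computed in the log domain: ln z = ln e - H ln e - h0.
W = np.linalg.inv(A)
log_e = np.log(np.abs(W @ x))
z = np.exp(log_e - H @ log_e - h0[:, None])
assert np.allclose(np.log(z), r)                    # recovers z_i = exp(r_i)
```

The final check holds because (I − H) y − h0 = r by construction, and with the true filters the recovered log-energies equal y.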
In contrast to the original LiNGAM, our targets are (linear) latent variables, not observed ones. The ordering of the latent variables is not meaningful, because the rows of the filter matrix W can be arbitrarily permuted; the acyclic constraint can thus be simplified into a lower-triangular constraint on H. The other is the symmetric model, which uses a special case of cyclic SEM, namely one with a symmetric constraint on H. Such a constraint would be relatively new in the context of SEM, although it is a well-known setting in the ICA literature (e.g. [5]). The SEM is then identifiable using only first- and second-order statistics, based on the relations h0 = V E[y] and V := I − H = Cov[y]^{−1/2} [5], provided that V is positive definite.² This implies that non-Gaussianity is not essential for identifiability, in contrast to the acyclic model, which is not identifiable without non-Gaussianity [27]. The above relations also suggest moment-based estimators of h0 and V, which can be used either as the final estimates or as initial conditions for the maximum likelihood algorithm below.

3.4 Maximum likelihood

Let ψ(s) := ln ϕ(|s|) for notational simplicity, and denote ψ′(s) := sign(s)(ln ϕ)′(|s|) as a convention, e.g. (ln |s|)′ := 1/s. Also, following the basic theory of ICA, we assume the disturbances have a joint probability density function (pdf) p_r(r) = ∏_i ρ(r_i) with a common fixed marginal pdf ρ. Then, we have the following pdf of s without any approximation (see Appendix for the derivation):

p_s(s) = (1/2^d) |det V| ∏_{i=1}^d ρ(v_i^T ψ(s) − h_i0) |ψ′(s_i)|,  (8)

where v_i is the i-th transposed row vector of V (= I − H).

² Under the dynamical system interpretation, the matrix H should have absolute eigenvalues smaller than one for stability [18], in which case V = I − H is naturally positive definite because its eigenvalues are all positive.

Figure 1: Estimation performance of the mixing matrix, measured by the “Amari Index” [1] (nonnegative; zero denotes perfect estimation up to unavoidable indeterminacies) versus sample size, shown on log-log scales. Each panel corresponds to a particular value of α (from −0.4 to 0.4), which determined the relative connection strength between sources; the curves compare FastICA, NoDep, and the proposed method. The solid lines denote the median of ten runs.

The pdf of x is given by p_x(x) = |det W| p_s(Wx), and the corresponding loss function, l = −ln p_x(x) + const., is given by

l(x, W, V, h0) = f̄(V ψ(Wx) − h0) + ḡ(Wx) − ln |det W| − ln |det V|,  (9)

where f̄(r) = Σ_i f(r_i), f(r_i) = −ln ρ(r_i), ḡ(s) = Σ_i g(s_i), and g(s_i) = −ln |ψ′(s_i)|. Note that the loss function above is closely related to those of previous studies, such as energy-based models [23, 17]. Our model is less flexible than these models, since it is limited to the case where A is square, but the exact likelihood is available. It is also interesting that the loss function includes an additional second term that has not appeared in previous models, arising from the formal derivation of the pdf via the transformation of random variables. To obtain the maximum likelihood estimates of W, V, and h0, we minimize the negative log-likelihood (i.e. the empirical average of the losses) by the projected gradient method (for the unit-norm constraints ‖w_i‖₂² = 1). The required first derivatives are given by

∂l/∂h0 = −f′(r),    ∂l/∂V = f′(Vy − h0) y^T − V^{−T},  (10a)
∂l/∂W = { diag(ψ′(Wx)) V^T f′(Vy − h0) + g′(Wx) } x^T − W^{−T}.  (10b)

In both the acyclic and symmetric cases, only the lower-triangular elements of V are free parameters. If acyclic, the upper-triangular elements are fixed at zero; if symmetric, they are dependent on the lower-triangular elements, and thus ∂l/∂v_ij (i > j) should be replaced with ∂l/∂v_ij + ∂l/∂v_ji.
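The per-sample loss of Eq. (9) can be written out directly for the setting used in the simulations, ϕ(|s|) = |s| and ρ(r) = (1/2)sech(πr/2), in which case ψ(s) = ln|s|, g(s) = ln|s|, and f(r) = ln(2 cosh(πr/2)). The sketch below, with illustrative (not paper-provided) parameter values, also checks the analytic gradient with respect to h0 against finite differences:

```python
import numpy as np

# Negative log-likelihood of Eq. (9) for phi(|s|) = |s| and
# rho(r) = (1/2) sech(pi r / 2); a sketch, not the authors' code.
def f(r):                                   # f(r) = -ln rho(r) = ln(2 cosh(pi r / 2))
    a = np.pi * r / 2.0
    return np.logaddexp(a, -a)              # numerically stable ln(e^a + e^-a)

def loss(x, W, V, h0):
    y = np.log(np.abs(W @ x))               # psi(Wx), one column per sample
    r = V @ y - h0[:, None]
    per_sample = f(r).sum(0) + y.sum(0)     # f-bar term + g-bar term
    _, logdet_W = np.linalg.slogdet(W)      # slogdet gives ln|det .| directly
    _, logdet_V = np.linalg.slogdet(V)
    return per_sample.mean() - logdet_W - logdet_V

# Check dl/dh0 = -f'(r) with f'(r) = (pi/2) tanh(pi r / 2) by finite differences.
rng = np.random.default_rng(3)
d, n = 3, 50
x = rng.standard_normal((d, n))
W = np.linalg.qr(rng.standard_normal((d, d)))[0]
V = np.eye(d) + 0.1 * rng.standard_normal((d, d))
h0 = rng.standard_normal(d)

r = V @ np.log(np.abs(W @ x)) - h0[:, None]
grad_analytic = (-(np.pi / 2) * np.tanh(np.pi * r / 2)).mean(1)
eps = 1e-6
grad_fd = np.array([(loss(x, W, V, h0 + eps * np.eye(d)[i])
                     - loss(x, W, V, h0 - eps * np.eye(d)[i])) / (2 * eps)
                    for i in range(d)])
assert np.allclose(grad_analytic, grad_fd, atol=1e-5)
```

The gradient here is averaged over samples because the loss is defined as the empirical mean of the per-sample losses.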
4 Simulations

To demonstrate the applicability of our method, we conducted the following simulation experiments. In all experiments below, we set ϕ(|s|) = |s| and ρ(r) = (1/2) sech(πr/2), corresponding to the standard tanh nonlinearity in ICA: f′(r) = (π/2) tanh((π/2)r). In our projected gradient algorithm, the matrix W was first initialized by FastICA [9]; the SEM parameters, H and h0, were initialized by the moment-based estimator described above (symmetric model) or by the LiNGAM algorithm [27] (acyclic model). The algorithm was terminated when the decrease in objective value was smaller than 10^{−6}; the learning rate was adjusted in each step by repeatedly multiplying it by a factor of 0.9 until the new point did not increase the objective value.

4.1 Synthetic dataset

First, we examined how the energy dependence learned in the SEM affects the estimation of linear filters. We artificially sampled datasets with d = 10 from our generative model, setting the matrix V to be tridiagonal, with all the main and first diagonal entries set at 10 and 10α, respectively. Figure 1 shows the “Amari Index” [1] of the estimated W for three methods, at several factors α and sample sizes, with ten runs for every condition. In each run, the true mixing matrix was given by inverting a W randomly generated from a standard Gaussian and then row-normalized to unit norms.

Figure 2: Connection weights versus pairwise differences of four properties of the linear basis functions (position, orientation mod ±π/2, frequency, and phase mod ±π), estimated by fitting 2D Gabor functions. The curves were fit by local Gaussian smoothing.
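The Amari index itself is not written out in the text; one common formulation of this performance index, sketched below on the product P = W_est A_true (the exact normalization may differ from the one used in [1]), is zero exactly when the filters are recovered up to the unavoidable permutation and scaling indeterminacies:

```python
import numpy as np

# One common formulation of the Amari performance index for P = W_est @ A_true:
# nonnegative, and zero iff P is a scaled permutation matrix.
def amari_index(P):
    P = np.abs(P)
    row = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    col = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return (row.sum() + col.sum()) / (2.0 * P.shape[0])

# A permutation with arbitrary scaling and signs gives exactly zero.
perm = np.array([[0.0, 2.0, 0.0],
                 [0.0, 0.0, -0.5],
                 [3.0, 0.0, 0.0]])
assert np.isclose(amari_index(perm), 0.0)
assert amari_index(np.ones((3, 3))) > 0.5   # far from a permutation: large index
```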
The three methods were: 1) FastICA³ with the tanh nonlinearity; 2) our method (symmetric model) without energy dependence (NoDep), initialized by FastICA; and 3) our full method (symmetric model), initialized by NoDep. NoDep was the same as the full method except that the off-diagonal elements of H were kept at zero. Note that our two algorithms used exactly the same termination criterion, while FastICA used a different one; this could cause the relatively poor performance of FastICA in this figure. The comparison between the full method and NoDep showed that the energy dependence learned in the SEM could improve the estimation of the filter matrix, especially when the dependence was relatively strong.

4.2 Natural images

The dataset consisted of 50,000 image patches of 16 × 16 pixels randomly taken from the original gray-scale pictures of natural scenes.⁴ As preprocessing, the sample mean was subtracted and the dimensionality was reduced to 160 by principal component analysis (PCA), retaining 99% of the variance. We constrained the SEM to be symmetric. Both the obtained basis functions and the filters were qualitatively very similar to those reported in many previous studies, and are given in the Supplementary Material. Figure 2 shows the values of the connection weights h_ij (after a row-wise re-scaling of V to set every h_ii = 1 − v_ii to zero, a standard convention in SEM [18]) for all d(d − 1) pairs, compared with the pairwise differences of four properties of the learned features (i.e. basis functions), estimated by fitting 2D Gabor functions: spatial positions, frequencies, orientations, and phases. As is clearly seen, the connection weights tended to be large when the features were similar to each other, except for their phases; the phases were not strongly correlated with the weights, as suggested by the fitted curve, though they exhibited a weak tendency to be the same or opposite (shifted ±π) to each other.
We can also see a weak tendency for the negative weights to have large magnitudes when the pairs have near-orthogonal orientations or different frequencies. Figure 3 illustrates how the learned features are associated with one another, using iconified representations. We can see that: 1) associations with positive weights between features were quite spatially localized and occurred particularly between similar orientations; and 2) those with negative weights occurred especially from cross-oriented features to a target, and were sometimes non-localized and overlapped with the target feature. Notice that in the DN transform (7), these positive weights learned in the SEM act as inhibitory and will suppress the energies of filters having similar properties.

4.3 Magnetoencephalography (MEG)

Brain activity was recorded in a single healthy subject who received alternating visual, auditory, and tactile stimulation interspersed with rest periods [25]. The original signals were measured in 204 channels (sensors) for several minutes at a 75 Hz sampling rate; the total number of measurements, i.e. the sample size, was N = 73,760. As preprocessing, we applied a band-pass filter (8–30 Hz) and removed some outliers. We also subtracted the sample mean and then reduced the dimensionality by PCA to d = 24, with 90% of the variance still retained.

³ The Matlab package is available at http://research.ics.tkk.fi/ica/fastica/. We used the following options: g=tanh, approach=symm, epsilon=10^{−6}, MaxNumIterations=10^4, finetune=tanh.
⁴ Available in the Imageica Toolbox by Patrik Hoyer, at http://www.cs.helsinki.fi/u/phoyer/software.html

Figure 3: Depiction of connection properties between learned basis functions, in a manner similar to that used in e.g. [6]. In each small panel, the black bar depicts the position, orientation, and length of a single Gabor-like basis function obtained by our method; the red (resp. blue) pattern of superimposed bars is a linear combination of the bars of the other basis functions, weighted by the absolute values of the positive (resp. negative) connection weights to the target one. The intensities of the red and blue colors were adjusted separately in each panel; the ratio of the maximum positive and negative connection strengths is depicted at the bottom of each small panel by the relative lengths of the horizontal color bars.

Figure 4: Estimated interaction graph (DAG) for the MEG data. The red and blue edges denote positive and negative connections, respectively. Only edges with strong connections are drawn, with the same absolute threshold value for positive and negative weights. The two manually inserted contours denote possible clusters of sources (see text).

Figure 4 shows an interaction graph under the DAG constraint. One cluster of components, highlighted in the figure by the manually inserted yellow contour, seems to consist of components related to auditory processing. The components are located in the temporal cortex, all but one in the left hemisphere. The direction of influence, which we can estimate in the acyclic model, seems to be from the anterior areas to the posterior ones. This may be related to top-down influence, since the primary auditory cortex appears to be included in the posterior areas of the left hemisphere; at the end of the chain, the signal goes to the right hemisphere. Such temporal components are typically quite difficult to find because the modulation of their energies is quite weak. Our method may help in grouping such components together by analyzing the energy correlations. Another cluster of components consists of low-level visual areas, highlighted by the green contour. It is more difficult to interpret these interactions because the areas corresponding to the components are very close to each other.
It seems, however, that here the influences are mainly from the primary visual areas to the higher-order visual areas.

5 Conclusion

We proposed a new statistical model that uses an SEM to model energy dependencies of latent variables in a standard linear generative model. In particular, with a simplified form of scale-mixture model, the likelihood function was derived without any approximation. The SEM has both acyclic and cyclic variants. In the acyclic case, non-Gaussianity is essential for identifiability, while in the cyclic case we introduced a symmetry constraint which also guarantees identifiability. We also provided a new generative interpretation of the DN transform based on a nonlinear SEM. Our method demonstrated high applicability in three experiments, with a synthetic dataset, natural images, and brain signals.

Appendix: Derivation of Eq. (8)

From the uniformity of the signs, we have p_s(s) = p_s(Ds) for any D = diag(±1, ..., ±1); in particular, let D_k correspond to the signs of the k-th orthant S_k of R^d, with S_1 = (0, ∞)^d. Then the relation ∫_{S_1} dσ p_σ(σ) = Σ_{k=1}^K ∫_{S_k} ds p_s(s) = Σ_{k=1}^K ∫_{S_1} dσ p_s(D_k σ) = 2^d ∫_{S_1} dσ p_s(σ) implies p_s(s) = (1/2^d) p_σ(s) for any s ∈ S_1; thus p_s(s) = (1/2^d) p_σ(|s|) for any s ∈ R^d. Now, y = ln ϕ(σ) (component-wise), and thus p_σ(σ) = p_y(y) ∏_i |(ln ϕ)′(σ_i)|, where we assume ϕ is differentiable. Let ψ(s) := ln ϕ(|s|) and ψ′(s) := sign(s)(ln ϕ)′(|s|). It then follows that p_s(s) = (1/2^d) p_y(ψ(s)) ∏_i |ψ′(s_i)|, where ψ(s) acts component-wise. Since y maps linearly to r with absolute Jacobian |det V|, we have p_y(y) = |det V| ∏_i ρ(r_i); combining this with p_s above, we obtain Eq. (8).

Acknowledgements

We would like to thank Jesús Malo and Valero Laparra for inspiring this work, and Michael Gutmann and Patrik Hoyer for helpful discussions and for providing code for fitting Gabor functions and visualization. The MEG data was kindly provided by Pavan Ramkumar and Riitta Hari. J.H.
was partially supported by JSPS Research Fellowships for Young Scientists.

References

[1] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind signal separation. In Advances in Neural Information Processing Systems, volume 8, 1996.
[2] A. J. Bell and T. J. Sejnowski. The ‘independent components’ of natural scenes are edge filters. Vision Res., 37:3327–3338, 1997.
[3] K. A. Bollen. Structural Equations with Latent Variables. Wiley, New York, 1989.
[4] M. Carandini, D. J. Heeger, and J. A. Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17:8621–8644, 1997.
[5] A. Cichocki and P. Georgiev. Blind source separation algorithms with matrix constraints. IEICE Trans. Fundamentals, E86-A(3):522–531, 2003.
[6] P. Garrigues and B. A. Olshausen. Learning horizontal connections in a sparse coding model of natural images. In Advances in Neural Information Processing Systems, volume 20, pages 505–512, 2008.
[7] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9:181–197, 1992.
[8] P. O. Hoyer, D. Janzing, J. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. In Advances in Neural Information Processing Systems, volume 21, pages 689–696, 2009.
[9] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[10] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Comput., 12(7):1705–1720, 2000.
[11] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Comput., 13(7):1527–1558, 2001.
[12] A. Hyvärinen, J. Hurri, and P. O. Hoyer. Natural Image Statistics – A Probabilistic Approach to Early Computational Vision. Springer-Verlag, 2009.
[13] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[14] Y. Karklin and M. S. Lewicki. A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Comput., 17:397–423, 2005.
[15] Y. Karklin and M. S. Lewicki. Emergence of complex cell properties by learning to generalize in natural scenes. Nature, 457:83–86, January 2009.
[16] M. Kawanabe and K.-R. Müller. Estimating functions for blind separation when sources have variance dependencies. Journal of Machine Learning Research, 6:453–482, 2005.
[17] U. Köster and A. Hyvärinen. A two-layer model of natural stimuli estimated with score matching. Neural Comput., 22:2308–2333, 2010.
[18] G. Lacerda, P. Spirtes, J. Ramsey, and P. Hoyer. Discovering cyclic causal models by independent components analysis. In Proceedings of the Twenty-Fourth Annual Conference on Uncertainty in Artificial Intelligence (UAI’08), pages 366–374, 2008.
[19] S. Lyu. Divisive normalization: Justification and effectiveness as efficient coding transform. In Advances in Neural Information Processing Systems 23, pages 1522–1530, 2010.
[20] J. Malo, I. Epifanio, R. Navarro, and E. P. Simoncelli. Nonlinear image representation for efficient perceptual coding. IEEE Trans. Image Process., 15(1):68–80, 2006.
[21] J. Malo and V. Laparra. Psychophysically tuned divisive normalization approximately factorizes the PDF of natural images. Neural Comput., 22(12):3179–3206, 2010.
[22] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[23] S. Osindero, M. Welling, and G. E. Hinton. Topographic product models applied to natural scene statistics. Neural Comput., 18:381–414, 2006.
[24] J. Pearl. On the statistical interpretation of structural equations. Technical Report R-200, UCLA Cognitive Systems Laboratory, 1993.
[25] P. Ramkumar, L. Parkkonen, R. Hari, and A. Hyvärinen. Characterization of neuromagnetic brain rhythms over time scales of minutes using spatial independent component analysis. Human Brain Mapping, 2011. In press.
[26] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8), 2001.
[27] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003–2030, 2006.
[28] E. P. Simoncelli and B. A. Olshausen. Natural image statistics and neural representation. Annu. Rev. Neurosci., 24:1193–1216, 2001.
[29] R. Valerio and R. Navarro. Optimal coding through divisive normalization models of V1 neurons. Network: Computation in Neural Systems, 14:579–593, 2003.
[30] H. Valpola, M. Harva, and J. Karhunen. Hierarchical models of variance sources. Signal Processing, 84(2):267–282, 2004.
[31] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359–366, 1998.
[32] M. J. Wainwright and E. P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Advances in Neural Information Processing Systems, volume 12, pages 855–861, 2000.
[33] K. Zhang and A. Hyvärinen. Source separation and higher-order causal analysis of MEG and EEG. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010), pages 709–716, 2010.
Gaussian Process Training with Input Noise

Andrew McHutchon
Department of Engineering, Cambridge University, Cambridge, CB2 1PZ
ajm257@cam.ac.uk

Carl Edward Rasmussen
Department of Engineering, Cambridge University, Cambridge, CB2 1PZ
cer54@cam.ac.uk

Abstract

In standard Gaussian Process regression input locations are assumed to be noise free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable we use a local linear expansion about each input point. This allows the input noise to be recast as output noise proportional to the squared gradient of the GP posterior mean. The input noise variances are inferred from the data as extra hyperparameters. They are trained alongside other hyperparameters by the usual method of maximisation of the marginal likelihood. Training uses an iterative scheme, which alternates between optimising the hyperparameters and calculating the posterior gradient. Analytic predictive moments can then be found for Gaussian distributed test points. We compare our model to others over a range of different regression problems and show that it improves over current methods.

1 Introduction

Over the last decade the use of Gaussian Processes (GPs) as non-parametric regression models has grown significantly. They have been successfully used to learn mappings between inputs and outputs in a wide variety of tasks. However, many authors have highlighted a limitation in the way GPs handle noisy measurements. Standard GP regression [1] makes two assumptions about the noise in datasets: firstly, that measurements of input points, x, are noise-free, and, secondly, that output points, y, are corrupted by constant-variance Gaussian noise. For some datasets this makes intuitive sense: for example, an application in Rasmussen and Williams (2006) [1] is that of modelling CO2 concentration in the atmosphere over the last forty years.
One can viably assume that the date is available noise-free and the CO2 sensors are affected by signal-independent sensor noise. However, in many datasets one or both of these assumptions are not valid, which leads to poor modelling performance. In this paper we look at datasets where the input measurements, as well as the output, are corrupted by noise. Unfortunately, in the GP framework, considering each input location to be a distribution is intractable. If, as an approximation, we treat the input measurements as if they were deterministic, and inflate the corresponding output variance to compensate, this leads to the output noise variance varying across the input space, a feature often called heteroscedasticity. One method for modelling datasets with input noise is, therefore, to hold the input measurements to be deterministic and then use a heteroscedastic GP model. This approach has been strengthened by the breadth of research published recently on extending GPs to heteroscedastic data. However, referring the input noise to the output in this way results in heteroscedasticity with a very particular structure. This structure can be exploited to improve upon current heteroscedastic GP models for datasets with input noise. One can imagine that in regions where a process is changing its output value rapidly, corrupted input measurements will have a much greater effect than in regions where the output is almost constant. In other words, the effect of the input noise is related to the gradient of the function mapping input to output. This is the intuition behind the model we propose in this paper. We fit a local linear model to the GP posterior mean about each training point. The input noise variance can then be referred to the output, proportional to the square of the posterior mean function’s gradient. This approach is particularly powerful in the case of time-series data where the output at time t becomes the input at time t + 1.
In this situation, input measurements are clearly not noise-free: the noise on a particular measurement is the same whether it is considered an input or an output. By also assuming the inputs are noisy, our model is better able to fit datasets of this type. Furthermore, we can estimate the noise variance on each input dimension, which is often very useful for analysis. Related work lies in the field of heteroscedastic GPs. A common approach to modelling changing variance with a GP, as proposed by Goldberg et al. [2], is to make the noise variance a random variable and attempt to estimate its form at the same time as estimating the posterior mean. Goldberg et al. suggested using a second GP to model the noise level as a function of the input location. Kersting et al. [3] improved upon Goldberg et al.’s Monte Carlo training method with a “most likely” training scheme and demonstrated its effectiveness; related work includes Yuan and Wahba [4], and Le et al. [5], who proposed a scheme to find the variance via a maximum-a-posteriori estimate set in the exponential family. Snelson and Ghahramani [6] suggest a different approach whereby the importance of points in a pseudo-training set can be varied, allowing the posterior variance to vary as well. Recently, Wilson and Ghahramani broadened the scope still further and proposed Copula and Wishart Process methods [7, 8]. Although all of these methods could be applied to datasets with input noise, they are designed for a more general class of heteroscedastic problems, and so none of them exploits the structure inherent in input noise datasets. Our model also has a further advantage in that training is by marginal likelihood maximisation rather than by an approximate inference method, or one such as maximum likelihood, which is more susceptible to overfitting. Dallaire et al. [9] train on Gaussian distributed input points by calculating the expected covariance matrix.
However, their method requires prior knowledge of the noise variance, rather than inferring it as we do in this paper.

2 The Model

In this section we formally derive our model, which we refer to as NIGP (noisy input GP). Let x and y be a pair of measurements from a process, where x is a D-dimensional input to the process and y is the corresponding scalar output. In standard GP regression we assume that y is a noisy measurement of the actual output of the process, ỹ,

y = ỹ + ε_y,  (1)

where ε_y ∼ N(0, σ_y²). In our model, we further assume that the inputs are also noisy measurements of the actual input x̃,

x = x̃ + ε_x,  (2)

where ε_x ∼ N(0, Σ_x). We assume that each input dimension is independently corrupted by noise; thus Σ_x is diagonal. Under a model f(·), we can write the output as a function of the input in the following form,

y = f(x̃ + ε_x) + ε_y.  (3)

For a GP model the posterior distribution based on equation 3 is intractable. We therefore consider a Taylor expansion about the latent state x̃,

f(x̃ + ε_x) = f(x̃) + ε_x^T ∂f(x̃)/∂x̃ + ... ≃ f(x) + ε_x^T ∂f(x)/∂x + ...  (4)

We don’t have access to the latent variable x̃, so we approximate it with the noisy measurements. Now the derivative of a Gaussian Process is another Gaussian Process [10]. Thus, the exact treatment would require the consideration of a distribution over Taylor expansions. Although the resulting distribution is not Gaussian, its first and second moments can be calculated analytically. However, these calculations carry a high computational load, and previous experiments showed this exact treatment provided no significant improvement over the much quicker approximate method we now describe. Instead we take the derivative of the mean of the GP function, which we will denote ∂f̄, a D-dimensional vector, for the derivative of one GP function value w.r.t. the D-dimensional input, and Δf̄, an N by D matrix, for the derivatives of N function values.
Differentiating the mean function corresponds to ignoring the uncertainty about the derivative. If we expand up to the first-order terms, we get a linear model for the input noise,

y = f(x) + \epsilon_x^T \partial \bar{f} + \epsilon_y   (5)

The probability of an observation y is therefore,

P(y \mid f) = \mathcal{N}(f, \; \sigma_y^2 + \partial \bar{f}^T \Sigma_x \, \partial \bar{f})   (6)

We keep the usual Gaussian process prior, P(f \mid X) = \mathcal{N}(0, K(X, X)), where K(X, X) is the N-by-N training data covariance matrix and X is an N-by-D matrix of input observations. Combining these probabilities gives the predictive posterior mean and variance as,

E[f_* \mid X, y, x_*] = k(x_*, X) \left[K(X, X) + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}} \Sigma_x \Delta_{\bar{f}}^T\}\right]^{-1} y
V[f_* \mid X, y, x_*] = k(x_*, x_*) - k(x_*, X) \left[K(X, X) + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}} \Sigma_x \Delta_{\bar{f}}^T\}\right]^{-1} k(X, x_*)   (7)

This is equivalent to treating the inputs as deterministic and adding a corrective term, \mathrm{diag}\{\Delta_{\bar{f}} \Sigma_x \Delta_{\bar{f}}^T\}, to the output noise. The notation \mathrm{diag}\{\cdot\} results in a diagonal matrix whose elements are the diagonal elements of its matrix argument. Note that if the posterior mean gradient is constant across the input space, the heteroscedasticity is removed and our model is essentially identical to a standard GP. An advantage of our approach can be seen in the case of multiple output dimensions. As the input noise levels are the same for each of the output dimensions, our model can use data from all of the outputs when learning the input noise variances. Not only does this give more information about the noise variances without needing further input measurements, but it also reduces over-fitting, as the learnt noise variances must agree with all E output dimensions. For time-series datasets (where the model has to predict the next state given the current one), each dimension's input and output noise variance can be constrained to be the same, since the noise level on a measurement is independent of whether it is an input or output. This further constraint increases the ability of the model to recover the actual noise variances.
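A minimal NumPy sketch of the corrected predictions in equation 7, for 1-D inputs and the squared exponential kernel of equation 12 (the function and variable names are our own, not from any released NIGP code; the posterior mean slope is taken from a standard GP fit, as described in Section 3):

```python
import numpy as np

def se_kernel(a, b, ell=1.0, sf2=1.0):
    """Squared exponential kernel for 1-D inputs (equation 12 with D = 1)."""
    return sf2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def nigp_predict(X, y, Xs, ell=1.0, sf2=1.0, sy2=0.01, sx2=0.0):
    """Predictive mean/variance with the corrective term diag{Delta_f Sigma_x Delta_f^T}."""
    n = len(X)
    K = se_kernel(X, X, ell, sf2)
    # slope of the standard-GP posterior mean at each training input:
    # d/dx_i sum_j k(x_i, x_j) alpha_j, with dk/dx_i = -(x_i - x_j)/ell^2 * k
    alpha = np.linalg.solve(K + sy2 * np.eye(n), y)
    slope = (-(X[:, None] - X[None, :]) / ell ** 2 * K) @ alpha
    # corrected covariance: input noise referred to the output via the squared slope
    A = K + np.diag(sy2 + sx2 * slope ** 2)
    Ks = se_kernel(Xs, X, ell, sf2)
    mean = Ks @ np.linalg.solve(A, y)
    var = sf2 - np.sum(Ks * np.linalg.solve(A, Ks.T).T, axis=1)
    return mean, var
```

Because the corrective term only inflates the noise, the predictive variance can never fall below that of the uncorrected GP.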
The model is thus ideally suited to the common task of multivariate time series modelling.

3 Training

Our model introduces an extra D hyperparameters compared to the standard GP - one noise variance hyperparameter per input dimension. A major advantage of our model is that these hyperparameters can be trained alongside any others by maximisation of the marginal likelihood. This approach automatically includes regularisation of the noise parameters and reduces the effect of over-fitting. In order to calculate the marginal likelihood of the training data we need the posterior distribution, and the slope of its mean, at each of the training points. However, evaluating the posterior mean from equation 7 with x_* \in X results in an analytically unsolvable differential equation: \bar{f} is a complicated function of \Delta_{\bar{f}}, its own derivative. Therefore, we define a two-step approach: first we evaluate a standard GP with the training data, using our initial hyperparameter settings and ignoring the input noise. We then find the slope of the posterior mean of this GP at each of the training points and use it to add in the corrective variance term, \mathrm{diag}\{\Delta_{\bar{f}} \Sigma_x \Delta_{\bar{f}}^T\}. This process is summarised in figures 1a and 1b. The marginal likelihood of the GP with the corrected variance is then computed, along with its derivatives with respect to the initial hyperparameters, which include the input noise variances. This step involves chaining the derivatives of the marginal likelihood back through the slope calculation. Gradient descent can then be used to improve the hyperparameters. Figure 1c shows the GP posterior for the trained hyperparameters and shows how NIGP can reduce output noise level estimates by taking input noise into account. Figure 1d shows the NIGP fit for the trained hyperparameters.
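Under the same 1-D squared-exponential assumptions (names ours, not the authors' code), the two-step likelihood evaluation can be sketched as: fit a standard GP, take the posterior mean slopes at the training inputs, then score the data under the slope-corrected noise. Gradient descent on the hyperparameters would wrap around this function.

```python
import numpy as np

def nigp_neg_log_marglik(X, y, ell, sf2, sy2, sx2):
    """Negative log marginal likelihood with the NIGP corrective variance term.

    Step 1: a standard GP (input noise ignored) gives posterior mean slopes.
    Step 2: the data are scored with sx2 * slope^2 added to the output noise.
    """
    n = len(X)
    d = X[:, None] - X[None, :]
    K = sf2 * np.exp(-0.5 * (d / ell) ** 2)
    alpha = np.linalg.solve(K + sy2 * np.eye(n), y)      # step 1
    slope = (-(d / ell ** 2) * K) @ alpha
    A = K + np.diag(sy2 + sx2 * slope ** 2)              # step 2
    L = np.linalg.cholesky(A)
    beta = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ beta + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)
```

In practice the derivatives with respect to all hyperparameters, chained back through the slope calculation as the text describes, drive the gradient descent; finite differences of this function would also work for a rough check.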
[Figure 1 shows four panels: (a) initial hyperparameters & training data define a GP fit; (b) extra variance added proportional to squared slope; (c) standard GP with NIGP-trained hyperparameters; (d) the NIGP fit including variance from input noise.]

Figure 1: Training with NIGP. (a) A standard GP posterior distribution can be computed from an initial set of hyperparameters and a training data set, shown by the blue crosses. The gradients of the posterior mean at each training point can then be found analytically. (b) The NIGP method increases the posterior variance by the square of the posterior mean slope multiplied by the current setting of the input noise variance hyperparameter. The marginal likelihood of this fit is then calculated, along with its derivatives w.r.t. the initial hyperparameter settings. Gradient descent is used to train the hyperparameters. (c) This plot shows the standard GP posterior using the newly trained hyperparameters. Comparing to plot (a) shows that the output noise hyperparameter has been greatly reduced. (d) This plot shows the NIGP fit - plot (c) with the input noise corrective variance term, \mathrm{diag}\{\Delta_{\bar{f}} \Sigma_x \Delta_{\bar{f}}^T\}. Plot (d) is related to plot (c) in the same way that plot (b) is related to plot (a).

To improve the fit further we can iterate this procedure: we use the slopes of the current trained NIGP, instead of a standard GP, to calculate the effect of the input noise, i.e. replace the fit in figure 1a with the fit from figure 1d and re-train.

4 Prediction

We turn now to the task of making predictions at noisy input locations with our model. To be true to our model, we must use the same process in making predictions as we did in training. We therefore use the trained hyperparameters and the training data to define a GP posterior mean, which we differentiate at each test point and each training point. The calculated gradients are then used to add in the corrective variance terms.
The posterior mean slope at the test points is only used to calculate the variance over observations, where we increase the predictive variance by the noise variances. There is an alternative option, however. If a single test point is considered to have a Gaussian distribution and all the training points are certain then, although the GP posterior is unknown, its mean and variance can be calculated exactly [11]. As our model estimates the input noise variance \Sigma_x during training, we can consider a test point to be Gaussian distributed: x'_* \sim \mathcal{N}(x_*, \Sigma_x). [11] then gives the mean and variance of the posterior distribution, for a squared exponential kernel (equation 12), to be,

\bar{f}_* = q^T \left[K + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}} \Sigma_x \Delta_{\bar{f}}^T\}\right]^{-1} y   (8)

where,

q_i = \sigma_f^2 \, |\Sigma_x \Lambda^{-1} + I|^{-1/2} \exp\left(-\tfrac{1}{2}(x_i - x_*)^T (\Sigma_x + \Lambda)^{-1} (x_i - x_*)\right)   (9)

where \Lambda is a diagonal matrix of the squared lengthscale hyperparameters, and

V[f_*] = \sigma_f^2 - \mathrm{tr}\left(\left[K + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}} \Sigma_x \Delta_{\bar{f}}^T\}\right]^{-1} Q\right) + \alpha^T Q \alpha - \bar{f}_*^2   (10)

where \alpha is the weight vector from equation 8, \alpha = [K + \sigma_y^2 I + \mathrm{diag}\{\Delta_{\bar{f}} \Sigma_x \Delta_{\bar{f}}^T\}]^{-1} y, with,

Q_{ij} = \frac{k(x_i, x_*)\, k(x_j, x_*)}{|2 \Sigma_x \Lambda^{-1} + I|^{1/2}} \exp\left((z - x_*)^T \left(\Lambda + \tfrac{1}{2}\Lambda \Sigma_x^{-1} \Lambda\right)^{-1} (z - x_*)\right)   (11)

with z = \tfrac{1}{2}(x_i + x_j). This method is computationally slower than using equation 7 and is vulnerable to worse results if the learnt input noise variance \Sigma_x is very different from the true value. However, it gives proper consideration to the uncertainty surrounding the test point and exactly computes the moments of the correct posterior distribution. This often leads it to outperform predictions based on equation 7.
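For 1-D inputs, the stochastic test-point mean of equations 8-9 reduces to a simple closed form. The sketch below (our own names, and with the corrective training term of equation 7 omitted for brevity) shows that as Σx → 0 the q-vector collapses to the ordinary kernel vector k(X, x*), recovering the deterministic prediction:

```python
import numpy as np

def stp_mean(X, y, xs, ell=1.0, sf2=1.0, sy2=0.01, sx2=0.0):
    """Posterior mean for a Gaussian test input x* ~ N(xs, sx2), 1-D SE kernel.

    q_i = sf2 * (sx2/ell^2 + 1)^{-1/2} * exp(-0.5 * (x_i - xs)^2 / (sx2 + ell^2)),
    i.e. equation 9 with D = 1; the diag corrective term is dropped here.
    """
    n = len(X)
    K = sf2 * np.exp(-0.5 * ((X[:, None] - X[None, :]) / ell) ** 2)
    q = sf2 / np.sqrt(sx2 / ell ** 2 + 1.0) * np.exp(
        -0.5 * (X - xs) ** 2 / (sx2 + ell ** 2))
    return q @ np.linalg.solve(K + sy2 * np.eye(n), y)
```

With sx2 > 0 the q-vector is a widened, flattened kernel, so predictions near sharp peaks are pulled towards the local average, exactly as expected when the test location itself is uncertain.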
[Figure 2 shows three panels over inputs x ∈ [−10, 10]: Standard GP (left), Kersting et al. (middle), and this paper (right).]

Figure 2: Posterior distribution for a near-square wave with σy = 0.05, σx = 0.3, and 60 data points. The solid line represents the predictive mean and the dashed lines are two standard deviations either side. Also shown are the training points and the underlying function. The left image is for standard GP regression, the middle uses Kersting et al.'s MLHGP algorithm, the right image shows our model. While the predictive means are similar, both our model and MLHGP pinch in the variance around the low noise areas. Our model correctly expands the variance around all steep areas whereas MLHGP can only do so where high noise is observed (see areas around x = −6 and x = 1).

Figure 2 shows an example comparison between standard GP regression, Kersting et al.'s MLHGP, and our model for a simple near-square wave function. This function was chosen as it has areas of steep gradient and near flat gradient and thus suffers from the heteroscedastic problems we are trying to solve. The posterior means are very similar for the three models; however, the variances are quite different. The standard GP model has to take into account the large noise seen around the steep sloped areas by assuming large noise everywhere, which leads to the much larger error bars. Our model can recover the actual noise levels by taking the input noise into account. Both our model and MLHGP pinch the variance in around the flat regions of the function and expand it around the steep areas. For the example shown in figure 2, the standard GP estimated an output noise standard deviation of 0.16 (much too large) compared to our estimate of 0.052, which is very close to the correct value of 0.050. Our model also learnt an input noise standard deviation of 0.305, very close to the real value of 0.300. MLHGP does not produce a single estimate of noise levels.
Predictions for 1000 noisy measurements were made using each of the models and the log probability of the test set was calculated. The standard GP model had a log probability per data point of 0.419, MLHGP 0.740, and our model 0.885, a significant improvement. Part of the reason for our improvement over MLHGP can be seen around x = 1: our model has near-symmetric ‘horns’ in the variance around the corners of the square wave, whereas MLHGP only has one ‘horn’. This is because in our model, the amount of noise expected is proportional to the derivative of the mean squared, which is the same for both sides of the square wave. In Kersting et al.’s model the noise is estimated from the training points themselves. In this example the training points around x = 1 happen to have low noise and so the learnt variance is smaller. The same problem can be seen around x = −6 where MLHGP has much too small variance. This illustrates an important aspect of our model: the accuracy in plotting the varying effect of noise is only dependent on the accuracy of the mean posterior function and not on an extra, learnt noise model. This means that our model typically requires fewer data points to achieve the same accuracy as MLHGP on input noise datasets. To test the models further, we trained them on a suite of six functions. The functions were again chosen to have varying gradients across the input space. The training set consisted of twenty five points in the interval [-10, 10] and the test set one thousand points in the same interval. Trials were run for different levels of input noise. For each trial, ten different initialisations of the hyperparameters were tried. In order to remove initialisation effects the best initialisations for each model were chosen at each step. The entire experiment was run on twenty different random seeds. 
For our model, NIGP, we trained both a single model for all output dimensions and separate models for each of the outputs, to see what the effect of using the cross-dimension information was. Figure 3 shows the results for this experiment. The figure shows that NIGP performs very well on all the functions, always outperforming the standard GP when there is input noise and nearly always MLHGP; wherever there is a significant difference our model is favoured. Training on all the outputs at once only gives an improvement for some of the functions, which suggests that, for the others, the input noise levels could be estimated from the individual functions alone. The predictions using stochastic test-points, equations 8 and 10, generally outperformed the predictions made using deterministic test-points, equation 7. The RMSEs are quite similar to each other for most of the functions, as the posterior means are very similar, although where they do differ significantly, again, it is to favour our model. These results show our model consistently calculates a more accurate predictive posterior variance than either a standard GP or a state-of-the-art heteroscedastic GP model. As previously mentioned, our model can be adapted to work more effectively with time-series data, where the outputs become subsequent inputs. In this situation the input and output noise variance will be the same. We therefore combine these two parameters into one. We tested NIGP on a time-series dataset and compared the two modes (with separate input and output noise hyperparameters and with combined), and also to standard GP regression (MLHGP was not available for multiple input dimensions). The dataset is a simulated pendulum without friction and with added noise. There are two variables: pendulum angle and angular velocity.
The choice of time interval between observations is important: for very small time intervals, and hence small changes in the angle, the dynamics are approximately linear, as sin θ ≈ θ. As discussed before, our model will not bring any benefit to linear dynamics, so in order to see the difference in performance a much longer time interval was chosen. The range of initial angular velocities was chosen to allow the pendulum to spin multiple times at the extremes, which adds extra non-linearity. Ten different initialisations were tried, with the one achieving the highest training set marginal likelihood chosen, and the whole experiment was repeated fifty times with different random seeds. The plots show the difference in log probability of the test set between four versions of NIGP and a standard GP model trained on the same data. All four versions of our model perform better than the standard GP. Once again, the stochastic test point version outperforms the deterministic test points. There was a slight improvement in RMSE using our model, but the differences were within two standard deviations of each other. There is also a slight improvement using the combined noise levels although, again, the difference is contained within the error bars. A better comparison between the two modes is to look at the input noise variance values recovered.

[Figure 3 comprises two rows of six panels, one per test function - sin(x), a near-square wave, exp(-0.2x)·sin(x), tan(0.15x)·sin(x), 0.2x²·tanh(cos(x)) and 0.5·log(x²(sin(2x)+2)+1) - plotting negative log predictive posterior (top row) and normalised test set RMSE (bottom row) against input noise standard deviation, for NIGP (DTP/STP, all/individual outputs), MLHGP (Kersting et al.) and a standard GP.]

Figure 3: Comparison of models for a suite of 6 test functions. The solid line is our model with 'deterministic test-point' predictions; the solid line with triangles is our model with 'stochastic test-point' predictions. Both these models were trained on all 6 functions at once; the respective dashed lines were trained on the functions individually. The dash-dot line is a standard GP regression model and the dotted line is MLHGP. RMSE has been normalised by the RMS value of the function. In both plots lower values indicate better performance. The plots show our model has lower negative log posterior predictive than the standard GP on all the functions, particularly the exponentially decaying sine wave and the multiplication between tan and sin.
The real noise standard deviations used were 0.2 and 0.4 for the angle and angular velocity respectively. The model which learnt the variances separately found standard deviations of 0.3265 and 0.8026 averaged over the trials, whereas the combined model found 0.2429 and 0.8948. This is a significant improvement on the first dimension. Both modes struggle to recover the correct noise level on the second dimension, and this is probably why the angular velocity prediction performance shown in figure 4 is worse than the angle prediction performance. Training with more data significantly improved the recovered noise value, although the difference between the two NIGP modes then shrank as there was sufficient information to correctly deduce the noise levels separately.

Figure 4: The difference between four versions of NIGP and a standard GP model on a pendulum prediction task. DTP stands for deterministic test point and STP is stochastic test point. Comb. and sep. indicate whether the model combined the input and output noise parameters or treated them separately. The error bars indicate plus/minus two standard deviations.

6 Conclusion

The correct way of training on input points corrupted by Gaussian noise is to consider every input point as a Gaussian distribution. This model is intractable, however, and so approximations must be made. In our model, we refer the input noise to the output by passing it through a local linear expansion. This adds a term to the likelihood which is proportional to the squared posterior mean gradient. Not only does this lead to tractable computations but it also makes intuitive sense - input noise has a larger effect in areas where the function is changing its output rapidly. The model, although simple in its approach, has been shown to be very effective, outperforming Kersting et al.'s model and a standard GP model in a variety of different regression tasks.
It can make use of multiple outputs and can recover a noise variance parameter for each input dimension, which is often useful for analysis. In our approximate model, exact inference can be performed as the model hyperparameters can be trained simultaneously by marginal likelihood maximisation. A proper handling of time-series data would constrain the specific noise levels on each training point to be the same for when they are considered inputs and outputs. This would be computationally very expensive however. By allowing input noise and fixing the input and output noise variances to be identical, our model is a computationally efficient alternative. Our results showed that NIGP gives a substantial improvement over the often-used standard GP for modelling time-series data. It is important to state that this model has been designed to tackle a particular situation, that of constant-variance input noise, and would not perform so well on a general heteroscedastic problem. It could not be expected to improve over a standard GP on problems where noise levels are proportional to the function or input value for example. We do not see this limitation as too restricting however, as we maintain that constant input noise situations (including those where this is a sufficient approximation) are reasonably common. Throughout the paper we have taken particular care to avoid functions or systems which are linear, or approximately linear, as in these cases our model can be reduced to standard GP regression. However, for the problems for which NIGP has been designed, such as the various non-linear problems we have presented in this paper, our model outperforms current methods. This paper considers a first order Taylor expansion of the posterior mean function. We would expect this to be a good approximation for any function providing the input noise levels are not too large (i.e. small perturbations around the point we linearised about). 
In practice, we could require that the input noise level is not larger than the input characteristic length scale. A more accurate model could use a second order Taylor series, which would still be analytic, although computationally the algorithm would then scale with D^3 rather than the current D^2. Another refinement could be achieved by doing a Taylor series for the full posterior distribution (not just its mean, as we have done here), again at considerably higher computational cost. These are interesting areas for future research, which we are actively pursuing.

References

[1] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[2] Paul W. Goldberg, Christopher K. I. Williams, and Christopher M. Bishop. Regression with input-dependent noise: A Gaussian Process treatment. NIPS-98, 1998.
[3] Kristian Kersting, Christian Plagemann, Patrick Pfaff, and Wolfram Burgard. Most likely heteroscedastic Gaussian Process regression. ICML-07, 2007.
[4] Ming Yuan and Grace Wahba. Doubly penalized likelihood estimator in heteroscedastic regression. Statistics and Probability Letters, 69:11–20, 2004.
[5] Quoc V. Le, Alex J. Smola, and Stephane Canu. Heteroscedastic Gaussian Process regression. Proceedings of ICML-05, pages 489–496, 2005.
[6] Edward Snelson and Zoubin Ghahramani. Variable noise and dimensionality reduction for sparse Gaussian processes. Proceedings of UAI-06, 2006.
[7] A. G. Wilson and Z. Ghahramani. Copula processes. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 2460–2468. 2010.
[8] Andrew Wilson and Zoubin Ghahramani. Generalised Wishart Processes. In Proceedings of the Twenty-Seventh Annual Conference on Uncertainty in Artificial Intelligence (UAI-11), pages 736–744, Corvallis, Oregon, 2011. AUAI Press.
[9] P. Dallaire, C. Besse, and B. Chaib-draa.
Learning Gaussian Process Models from Uncertain Data. 16th International Conference on Neural Information Processing, 2008.
[10] E. Solak, R. Murray-Smith, W. E. Leithead, D. J. Leith, and C. E. Rasmussen. Derivative observations in Gaussian Process models of dynamic systems. NIPS-03, pages 1033–1040, 2003.
[11] Agathe Girard, Carl Edward Rasmussen, Joaquin Quinonero Candela, and Roderick Murray-Smith. Gaussian Process priors with uncertain inputs - application to multiple-step ahead time series forecasting. Advances in Neural Information Processing Systems 16, 2003.
On the Universality of Online Mirror Descent

Nathan Srebro (TTIC, nati@ttic.edu), Karthik Sridharan (TTIC, karthik@ttic.edu), Ambuj Tewari (University of Texas at Austin, ambuj@cs.utexas.edu)

Abstract

We show that for a general class of convex online learning problems, Mirror Descent can always achieve a (nearly) optimal regret guarantee.

1 Introduction

Mirror Descent is a first-order optimization procedure which generalizes the classic Gradient Descent procedure to non-Euclidean geometries by relying on a "distance generating function" specific to the geometry (the squared ℓ2 norm in the case of standard Gradient Descent) [14, 4]. Mirror Descent is also applicable, and has been analyzed, in a stochastic optimization setting [9] and in an online setting, where it can ensure bounded online regret [20]. In fact, many classical online learning algorithms can be viewed as instantiations or variants of Online Mirror Descent, generally either with the Euclidean geometry (e.g. the Perceptron algorithm [5] and Online Gradient Descent [27]), or in the simplex (ℓ1 geometry), using an entropic distance generating function (Winnow [13] and the Multiplicative Weights / Online Exponentiated Gradient algorithm [11]). More recently, the Online Mirror Descent framework has been applied, with appropriate distance generating functions derived, to a variety of new learning problems like multi-task learning and other matrix learning problems [10], online PCA [26], etc. In this paper, we show that Online Mirror Descent is, in a sense, universal. That is, for any convex online learning problem of a general form (specified in Section 2), if the problem is online learnable, then it is online learnable, with a nearly optimal regret rate, using Online Mirror Descent with an appropriate distance generating function. Since Mirror Descent is a first-order method and often has simple and computationally efficient update rules, this makes the result especially attractive.
Viewing online learning as a sequentially repeated game, this means that Online Mirror Descent is a near optimal strategy, guaranteeing an outcome very close to the value of the game. In order to show such universality, we first generalize and refine the standard Mirror Descent analysis to situations where the constraint set is not the dual of the data domain, obtaining a general upper bound on the regret of Online Mirror Descent in terms of the existence of an appropriate uniformly convex distance generating function (Section 3). We then extend the notion of a martingale type of a Banach space to be sensitive to both the constraint set and the data domain, and building on results of [24], we relate the value of the online learning repeated game to this generalized notion of martingale type (Section 4). Finally, again building on and generalizing the work of [16], we show how having appropriate martingale type guarantees the existence of a good uniformly convex function (Section 5), which in turn establishes the desired nearly-optimal guarantee on Online Mirror Descent (Section 6). We mainly build on the analysis of [24], who related the value of the online game to the notion of martingale type of a Banach space and uniform convexity when the constraint set and data domain are dual to each other. The main technical advance here is a non-trivial generalization of their analysis (as well as the Mirror Descent analysis) to the more general situation where the constraint set and data domain are chosen independently of each other. In Section 7 several examples are provided that demonstrate the use of our analysis. Mirror Descent was initially introduced as a first-order deterministic optimization procedure, with an ℓ_p constraint and a matching ℓ_q Lipschitz assumption (1 ≤ p ≤ 2, 1/q + 1/p = 1), and was shown to be optimal in terms of the number of exact gradient evaluations [15].
Shalev-Shwartz and Singer later observed that the online version of Mirror Descent, again with an ℓ_p bound and matching ℓ_q Lipschitz assumption (1 ≤ p ≤ 2, 1/q + 1/p = 1), is also optimal in terms of the worst-case (adversarial) online regret. In fact, in such scenarios stochastic Mirror Descent is also optimal in terms of the number of samples used. We emphasize that although in most, if not all, settings known to us these three notions of optimality coincide, here we focus only on the worst-case online regret. Sridharan and Tewari [24] generalized the optimality of online Mirror Descent (w.r.t. regret) to scenarios where the learner is constrained to a unit ball of an arbitrary Banach space (not necessarily an ℓ_p space) and the objective functions have sub-gradients that lie in the dual ball of the space (for reasons that will become clear shortly, we refer to this as the data domain). However, often we encounter problems where the constraint set and data domain are not dual balls, but rather are arbitrary convex subsets. In this paper, we explore this more general, "non-dual", variant, and show that also in such scenarios online Mirror Descent is (nearly) optimal in terms of the (asymptotic) worst-case online regret.

2 Online Convex Learning Problem

An online convex learning problem can be viewed as a multi-round repeated game where on round t, the learner first picks a vector (predictor) w_t from some fixed set W, which is a closed convex subset of a vector space B. Next, the adversary picks a convex cost function f_t : W → R from a class of convex functions F. At the end of the round, the learner pays instantaneous cost f_t(w_t). We refer to the strategy used by the learner to pick the w_t's as an online learning algorithm. More formally, an online learning algorithm A for the problem is specified by the mapping A : \bigcup_{n \in \mathbb{N}} F^{n-1} → W. The regret of the algorithm A for a given sequence of cost functions f_1, . . . , f_n is given by R_n(A, f_1, . . .
, f_n) = \frac{1}{n} \sum_{t=1}^{n} f_t(A(f_{1:t-1})) - \inf_{w \in W} \frac{1}{n} \sum_{t=1}^{n} f_t(w).

The goal of the learner (or the online learning algorithm) is to minimize the regret for any n. In this paper, we consider cost function classes F specified by a convex subset X ⊂ B* of the dual space B*. We consider various types of classes, where for all of them, subgradients¹ of the functions in F lie inside X (we use the notation ⟨x, w⟩ to mean applying the linear functional x ∈ B* to w ∈ B):

F_Lip(X) = {f : f is convex, ∀w ∈ W, ∇f(w) ∈ X},
F_lin(X) = {w ↦ ⟨x, w⟩ : x ∈ X},
F_sup(X) = {w ↦ |⟨x, w⟩ − y| : x ∈ X, y ∈ [−b, b]}.

The value of the game is then the best possible worst-case regret guarantee an algorithm can enjoy. Formally:

V_n(F, X, W) = \inf_A \sup_{f_{1:n} \in F(X)} R_n(A, f_{1:n})   (1)

It is well known that the value of the game for all the above sets F is the same. More generally:

Proposition 1. If for a convex function class F we have that ∀f ∈ F, w ∈ W, ∇f(w) ∈ X, then

V_n(F, X, W) ≤ V_n(F_lin, X, W).

Furthermore,

V_n(F_Lip, X, W) = V_n(F_sup, X, W) = V_n(F_lin, X, W).

That is, the value for any class F for which subgradients are in X is upper bounded by the value of the class of linear functionals on W; see e.g. [1]. In particular, this includes the class F_Lip, which is the class of all convex functions with subgradients in X, and since F_lin(X) ⊂ F_Lip(X) we get the first equality. The second equality is shown in [18]. The class F_sup(X) corresponds to linear prediction with the absolute-difference loss, and thus its value is the best possible guarantee for online supervised learning with this loss. We can define more generally a class F_ℓ = {w ↦ ℓ(⟨x, w⟩, y) : x ∈ X, y ∈ [−b, b]} for any 1-Lipschitz loss ℓ, and this class would also be of the desired type, with its value upper bounded by V_n(F_lin, X, W). In fact, this setting includes supervised learning fairly generally, including problems such as multitask learning and matrix completion, where in all cases X specifies the data domain².
The equality in the above proposition can also be extended to other commonly occurring convex loss function classes, like the hinge loss class, with some extra constant factors.

¹Throughout we commit to a slight abuse of notation, with ∇f(w) indicating some sub-gradient of f at w and ∇f(w) ∈ X meaning that at least one of the sub-gradients is in X.
²Any convex supervised learning problem can be viewed as linear classification with some convex constraint W on predictors.

Owing to Proposition 1, we can focus our attention on the class F_lin (as the other two behave similarly), and use the shorthand

V_n(W, X) := V_n(F_lin, X, W)   (2)

Henceforth, the term value without any qualification refers to the value of the linear game. Further, for any p ∈ [1, 2] let

V_p := \inf \{ V : ∀n ∈ \mathbb{N}, V_n(W, X) ≤ V n^{-(1-1/p)} \}   (3)

Most prior work on online learning and optimization considers the case when W is the unit ball of some Banach space and X is the unit ball of the dual space, i.e. W and X are related to each other through duality. In this work, however, we analyze the general problem where X ⊂ B* is not necessarily the dual ball of W. It will be convenient for us to relate the notions of a convex set and a corresponding norm. The Minkowski functional of a subset K of a vector space V is defined as ‖v‖_K := \inf \{α > 0 : v ∈ αK\}. If K is convex and centrally symmetric (i.e. K = −K), then ‖·‖_K is a semi-norm. Throughout this paper, we will require that W and X are convex and centrally symmetric. Further, if the set K is bounded then ‖·‖_K is a norm. Although not strictly required for our results, for simplicity we will assume W and X are such that ‖·‖_W and ‖·‖_X (the Minkowski functionals of the sets W and X) are norms. Even though we do this for simplicity, we remark that all the results go through for semi-norms. We use X* and W* to represent the duals of the balls X and W respectively, i.e. the unit balls of the dual norms ‖·‖*_X and ‖·‖*_W.
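The Minkowski functional is easy to evaluate numerically given only a membership oracle for K. A small sketch (names ours) recovers ‖v‖_K by bisection, using the fact that for a convex set K containing 0, membership of v/a is monotone in a; for the ℓ1 ball, ‖·‖_K is exactly the ℓ1 norm:

```python
import numpy as np

def minkowski(v, in_K, hi=1e6, tol=1e-9):
    """||v||_K = inf { a > 0 : v/a in K } for convex, symmetric, bounded K."""
    v = np.asarray(v, dtype=float)
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if in_K(v / mid):   # shrinking v into K: feasible for all a >= ||v||_K
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return hi
```

The same oracle view is what makes the paper's setting natural: W and X enter the analysis only through their Minkowski-functional norms, not through any explicit parametrization.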
3 Mirror Descent and Uniform Convexity

A key tool in the analysis of mirror descent is the notion of strong convexity, or more generally uniform convexity:

Definition 1. Ψ : B → R is q-uniformly convex w.r.t. ∥·∥ if for any w, w′ ∈ B and any α ∈ [0, 1]:

Ψ(αw + (1 − α)w′) ≤ αΨ(w) + (1 − α)Ψ(w′) − (α(1 − α)/q) ∥w − w′∥^q.

We emphasize that in the definition above, the norm ∥·∥ and the subset W need not be related, and we only require uniform convexity inside W. This allows us to relate a norm with a non-matching "ball". To this end define

D_p := inf { (sup_{w∈W} Ψ(w))^{(p−1)/p} : Ψ : W → R+ is p/(p−1)-uniformly convex w.r.t. ∥·∥*_X, Ψ(0) = 0 }.

Given a function Ψ, the Mirror Descent algorithm A_MD is given by

w_{t+1} = argmin_{w∈W} ∆_Ψ(w | w_t) + η ⟨∇f_t(w_t), w − w_t⟩ (4)

or equivalently

w̃_{t+1} = ∇Ψ*(∇Ψ(w_t) − η∇f_t(w_t)), w_{t+1} = argmin_{w∈W} ∆_Ψ(w | w̃_{t+1}) (5)

where ∆_Ψ(w | w′) := Ψ(w) − Ψ(w′) − ⟨∇Ψ(w′), w − w′⟩ is the Bregman divergence and Ψ* is the convex conjugate of Ψ. As an example, notice that when Ψ(w) = (1/2)∥w∥²_2 we get the gradient descent algorithm, and when W is the d-dimensional simplex and Ψ(w) = ∑_{i=1}^d w_i log w_i we get the multiplicative weights update algorithm.

Lemma 2. Let Ψ : B → R be non-negative and q-uniformly convex w.r.t. the norm ∥·∥*_X. For the Mirror Descent algorithm with this Ψ, using w_1 = argmin_{w∈W} Ψ(w) and η = (sup_{w∈W} Ψ(w)/n)^{1/p}, we can guarantee that for any f_1, . . . , f_n s.t. (1/n) ∑_{t=1}^n ∥∇f_t∥^p_X ≤ 1 (where p = q/(q−1)),

R(A_MD, f_1, . . . , f_n) ≤ 2 (sup_{w∈W} Ψ(w)/n)^{1/q}.

Note that in our case we have ∇f ∈ X, i.e. ∥∇f∥_X ≤ 1, and so certainly (1/n) ∑_{t=1}^n ∥∇f_t∥^p_X ≤ 1. Similarly to the value of the game, for any p ∈ [1, 2] we define:

MD_p := inf { D : ∃Ψ, η s.t. ∀n ∈ N, sup_{f_1:n∈F(X)} R_n(A_MD, f_1:n) ≤ D n^{−(1−1/p)} } (6)

where the Mirror Descent algorithm in the above definition is run with the corresponding Ψ and η. The constant MD_p is a characterization of the best guarantee the Mirror Descent algorithm can provide. Lemma 2 therefore implies:

Corollary 3. V_p ≤ MD_p ≤ 2D_p.

Proof. The first inequality is by the definitions of V_p and MD_p.
The second inequality follows from the previous lemma.

The Mirror Descent bound suggests that as long as we can find an appropriate function Ψ that is uniformly convex w.r.t. ∥·∥*_X, we can get a diminishing regret guarantee. This suggests constructing the following function:

Ψ̃_q := argmin_{ψ : ψ is q-uniformly convex w.r.t. ∥·∥*_X on W and ψ ≥ 0} sup_{w∈W} ψ(w). (7)

If no q-uniformly convex function exists, then Ψ̃_q = ∞ is assumed by default. The above function is in a sense the best choice for the Mirror Descent bound of Lemma 2. The question then is: when can we find such appropriate functions, and what is the best rate we can guarantee using Mirror Descent?

4 Martingale Type and Value

In [24], it was shown that the concept of the Martingale type (also sometimes called the Haar type) of a Banach space and optimal rates for the online convex optimization problem, where X and W are duals of each other, are closely related. In this section we extend the classic notion of the Martingale type of a Banach space (see for instance [16]) to one that accounts for the pair (W, X). Before we proceed with the definitions, we introduce a few necessary notations. First, throughout we shall use ε ∈ {±1}^N to represent an infinite sequence of signs drawn uniformly at random (i.e. each ε_i has equal probability of being +1 or −1). Also, throughout, (x_n)_{n∈N} represents a sequence of mappings where each x_n : {±1}^{n−1} → B*. We shall commit to an abuse of notation and use x_n(ε) to represent x_n(ε_1, . . . , ε_{n−1}) (i.e. although we use the entire ε as argument, x_n only depends on the first n − 1 signs). We are now ready to give the extended definition of the Martingale type (or M-type) of a pair (W, X).

Definition 2.
A pair (W, X) of subsets of a vector space B is said to be of M-type p if there exists a constant C ≥ 1 such that for every sequence of mappings (x_n)_{n≥1}, where each x_n : {±1}^{n−1} → B*, and any x_0 ∈ B*:

sup_n E ∥x_0 + ∑_{i=1}^n ε_i x_i(ε)∥^{*p}_W ≤ C^p ( ∥x_0∥^p_X + ∑_{n≥1} E[∥x_n(ε)∥^p_X] ). (8)

The concept is called Martingale type because (ε_n x_n(ε))_{n∈N} is a martingale difference sequence, and it can be shown that the rate of convergence of martingales in Banach spaces is governed by the rate of convergence of martingales of the form Z_n = x_0 + ∑_{i=1}^n ε_i x_i(ε) (which are incidentally called Walsh–Paley martingales). We point the reader to [16, 17] for more details. Further, for any p ∈ [1, 2] we also define

C_p := inf { C : ∀x_0 ∈ B*, ∀(x_n)_{n∈N}, sup_n E ∥x_0 + ∑_{i=1}^n ε_i x_i(ε)∥^{*p}_W ≤ C^p ( ∥x_0∥^p_X + ∑_{n≥1} E[∥x_n(ε)∥^p_X] ) }.

C_p is useful in determining whether the pair (W, X) has Martingale type p. The results of [24, 18] showing that Martingale type implies low regret actually apply also for "non-matching" W and X and, in our notation, imply that V_p ≤ 2C_p. Specifically, we have the following theorem from [24, 18]:

Theorem 4. [24, 18] For any W ⊂ B, any X ⊂ B* and any n ≥ 1,

sup_x E ∥(1/n) ∑_{i=1}^n ε_i x_i(ε)∥*_W ≤ V_n(W, X) ≤ 2 sup_x E ∥(1/n) ∑_{i=1}^n ε_i x_i(ε)∥*_W

where the suprema above are over sequences of mappings (x_n)_{n≥1} with each x_n : {±1}^{n−1} → X.

Our main interest here is in establishing that low regret implies Martingale type. To do so, we start with the above theorem to relate the value of the online convex optimization game to the rate of convergence of martingales in the Banach space. We then extend the result of Pisier in [16] to the "non-matching" setting, combining it with the above theorem to finally get:

Lemma 5. If for some r ∈ (1, 2] there exists a constant D > 0 such that for any n, V_n(W, X) ≤ D n^{−(1−1/r)}, then for all p < r we can conclude that any x_0 ∈ B* and any sequence of mappings (x_n)_{n≥1}, where each x_n : {±1}^{n−1} → B*, will satisfy:

sup_n E ∥x_0 + ∑_{i=1}^n ε_i x_i(ε)∥^{*p}_W ≤ ( 1104 D / (r − p)² )^p ( ∥x_0∥^p_X + ∑_{i≥1} E[∥x_i(ε)∥^p_X] ).

That is, the pair (W, X) is of Martingale type p.
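To make Definition 2 concrete: in a Euclidean (Hilbert) space the differences ε_i x_i(ε) are pairwise orthogonal in expectation, since ε_i is independent of x_i(ε) and of the past, so the Euclidean pair has M-type 2 with C = 1 and (8) holds with equality for p = 2. The sketch below enumerates all sign sequences for a small horizon; the predictable mappings x_i(ε) are arbitrary illustrative choices:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 3
x0 = rng.standard_normal(d)
V = rng.standard_normal((n, d))   # base vectors for the predictable maps
U = rng.standard_normal((n, d))

def x(i, eps):
    """x_i may depend only on the first i-1 signs (a predictable mapping)."""
    prev = eps[i - 2] if i >= 2 else 1.0
    return V[i - 1] + 0.5 * prev * U[i - 1]

# lhs: E ||x0 + sum_i eps_i x_i(eps)||^2, exact average over all 2^n sequences
lhs = 0.0
for eps in itertools.product([-1.0, 1.0], repeat=n):
    Z = x0 + sum(eps[i - 1] * x(i, eps) for i in range(1, n + 1))
    lhs += np.sum(Z ** 2) / 2 ** n
# rhs: ||x0||^2 + sum_i E ||x_i(eps)||^2
rhs = np.sum(x0 ** 2)
for i in range(1, n + 1):
    rhs += sum(np.sum(x(i, eps) ** 2) for eps in
               itertools.product([-1.0, 1.0], repeat=n)) / 2 ** n
assert abs(lhs - rhs) < 1e-9      # equality: Euclidean M-type 2 with C = 1
```

For non-Hilbertian pairs (W, X) the cross terms no longer vanish, and the best achievable constant C in (8) is exactly what C_p measures.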
The following corollary is an easy consequence of the above lemma.

Corollary 6. For any p ∈ [1, 2] and any p′ < p: C_{p′} ≤ 1104 V_p / (p − p′)².

5 Uniform Convexity and Martingale Type

The classical notion of Martingale type plays a central role in the study of the geometry of Banach spaces. In [16], it was shown that a Banach space has Martingale type p (in the classical sense) if and only if uniformly convex functions with certain properties exist on that space (w.r.t. the norm of that Banach space). In this section, we extend this result and show how the Martingale type of a pair (W, X) is related to the existence of certain uniformly convex functions. Specifically, the following lemma shows that the notion of Martingale type of a pair (W, X) is equivalent to the existence of a non-negative function that is uniformly convex w.r.t. the norm ∥·∥*_X.

Lemma 7. If, for some p ∈ (1, 2], there exists a constant C > 0 such that for all sequences of mappings (x_n)_{n≥1}, where each x_n : {±1}^{n−1} → B*, and any x_0 ∈ B*:

sup_n E ∥x_0 + ∑_{i=1}^n ε_i x_i(ε)∥^{*p}_W ≤ C^p ( ∥x_0∥^p_X + ∑_{n≥1} E[∥x_n(ε)∥^p_X] )

(i.e. (W, X) has Martingale type p), then there exists a convex function Ψ : B → R+ with Ψ(0) = 0 that is q-uniformly convex w.r.t. the norm ∥·∥*_X and satisfies, for all w ∈ B, (1/q) ∥w∥^{*q}_X ≤ Ψ(w) ≤ (C^q/q) ∥w∥^q_W.

The following corollary follows directly from the above lemma.

Corollary 8. For any p ∈ [1, 2], D_p ≤ C_p.

The proof of Lemma 7 goes further and gives a specific uniformly convex function Ψ satisfying the desired requirement (i.e. establishing D_p ≤ C_p) under the assumptions of the previous lemma:

Ψ*_q(x) := sup { (1/C^p) sup_n E ∥x + ∑_{i=1}^n ε_i x_i(ε)∥^{*p}_W − ∑_{i≥1} E[∥x_i(ε)∥^p_X] }, Ψ_q := (Ψ*_q)*, (9)

where the supremum above is over sequences (x_n)_{n∈N} and p = q/(q−1).

6 Optimality of Mirror Descent

In Section 3 we saw that if we can find an appropriate uniformly convex function to use in the mirror descent algorithm, we can guarantee diminishing regret. However, the pending questions there were when we can find such a function and what rate we can guarantee.
In Section 4 we introduced the extended notion of the Martingale type of a pair (W, X) and related it to the value of the game. Then, in Section 5, we saw how the concept of M-type relates to the existence of certain uniformly convex functions. We can now combine these results to show that the mirror descent algorithm is a universal online learning algorithm for convex learning problems. Specifically, we show that whenever a problem is online learnable, the mirror descent algorithm can guarantee near-optimal rates:

Theorem 9. If for some constant V > 0 and some q ∈ [2, ∞), V_n(W, X) ≤ V n^{−1/q} for all n, then for any n > e^{q−1} there exist a regularizer function Ψ and a step-size η such that the regret of the mirror descent algorithm using Ψ against any f_1, . . . , f_n chosen by the adversary is bounded as:

R_n(A_MD, f_1:n) ≤ 6002 V log²(n) n^{−1/q}. (10)

Proof. Combining the Mirror Descent guarantee of Lemma 2, Lemma 7 and the lower bound of Lemma 5 with p = q/(q−1) − 1/log(n), we get the above statement.

The above theorem tells us that, with appropriate Ψ and learning rate η, mirror descent obtains regret at most a factor of 6002 log²(n) from the best possible worst-case upper bound. We would like to point out that the constant V in the value of the game appears linearly, and there are no other problem- or space-related hidden constants in the bound. Figure 1 summarizes the relationship between the various constants; for any n, all the quantities are within a log²(n) factor of each other: for p′ < p,

C_{p′} ≲ V_p ≤ MD_p ≤ 2D_p ≤ 2C_p,

where C_{p′} ≲ V_p follows from Lemma 5 (extending Pisier's result [16]), V_p ≤ MD_p from the definition of V_p and the generalized MD guarantee, MD_p ≤ 2D_p from Lemma 2, and D_p ≤ C_p from the construction of Ψ in Lemma 7 (again extending Pisier's result [16]).

Figure 1: Relationship between the various constants.

We now provide some general guidelines that help in picking an appropriate function Ψ for mirror descent.
First we note that, though the function Ψ_q in the construction (9) need not be such that (qΨ_q(w))^{1/q} is a norm, a simple modification noted in [17] makes it a norm. This basically tells us that the pair (W, X) is online learnable if and only if we can sandwich a q-uniformly convex norm in between ∥·∥*_X and a scaled version of ∥·∥_W (for some q < ∞). Also note that, by the definition of uniform convexity, if a function Ψ is q-uniformly convex w.r.t. some norm ∥·∥ and we have ∥·∥ ≥ c∥·∥*_X, then Ψ(·)/c^q is q-uniformly convex w.r.t. the norm ∥·∥*_X. These two observations together suggest that, given the pair (W, X), what we need to do is find a norm ∥·∥ in between ∥·∥*_X and C∥·∥_W (C < ∞; the smaller the C, the better the bound) such that ∥·∥^q is q-uniformly convex w.r.t. ∥·∥.

7 Examples

We demonstrate our results on several online learning problems, specified by W and X.

ℓp non-dual pairs. It is usual in the literature to consider the case when W is the unit ball of the ℓp norm in some finite dimension d while X is taken to be the unit ball of the dual norm ℓq, where p, q are Hölder conjugate exponents. Using the machinery developed in this paper, it becomes effortless to consider the non-dual case where W is the unit ball B_{p1} of some ℓ_{p1} norm while X is the unit ball B_{p2}, for arbitrary p1, p2 ∈ [1, ∞]. We shall use q1 and q2 to denote the Hölder conjugates of p1 and p2. Before we proceed, we first note that for any r ∈ (1, 2], ψ_r(w) := (1/(2(r−1)))∥w∥²_r is 2-uniformly convex w.r.t. the norm ∥·∥_r (see for instance [25]). On the other hand, by Clarkson's inequality, for r ∈ (2, ∞), ψ_r(w) := (2^r/r)∥w∥^r_r is r-uniformly convex w.r.t. ∥·∥_r. Putting these together, we see that for any r ∈ (1, ∞) the function ψ_r defined above is Q-uniformly convex w.r.t. ∥·∥_r for Q = max{r, 2}. The basic idea is then to select ψ_r based on the guidelines at the end of the previous section. Finally, we show that using ψ̃_r := d^{Q max{1/q2 − 1/r, 0}} ψ_r in the Mirror Descent bound of Lemma 2 yields the bound that for any f_1, . . .
, f_n ∈ F:

R_n(A_MD, f_1:n) ≤ 2 max{2, 1/√(2(r−1))} d^{max{1/q2 − 1/r, 0} + max{1/r − 1/p1, 0}} / n^{1/max{r,2}}.

The following table summarizes the scenarios where a value of r = 2, i.e. a rate of D2/√n, is possible, and lists the corresponding values of D2 (up to a numeric constant of at most 16):

p1 range      | q2 = p2/(p2 − 1) range | D2
1 ≤ p1 ≤ 2    | q2 > 2                 | 1
1 ≤ p1 ≤ 2    | p1 ≤ q2 ≤ 2            | √(p2 − 1)
1 ≤ p1 ≤ 2    | 1 ≤ q2 < p1            | d^{1/q2 − 1/p1} √(p2 − 1)
p1 > 2        | q2 > 2                 | d^{1/2 − 1/p1}
p1 > 2        | 1 ≤ q2 ≤ 2             | d^{1/q2 − 1/p1}
1 ≤ p1 ≤ 2    | q2 = ∞                 | log(d)

Note that the first two rows are dimension-free, and so apply also in infinite-dimensional settings, whereas in the other scenarios D2 is finite only when the dimension is finite. An interesting phenomenon occurs when d = ∞, p1 > 2 and q2 ≥ p1. In this case D2 = ∞, and so one cannot expect a rate of O(1/√n). However, we have D_{p2} < 16, and so we can still get a rate of n^{−1/q2}. Ball et al. [3] tightly calculate the constants of strong convexity of squared ℓp norms, establishing the tightness of D2 when p1 = p2. By extending their constructions, it is also possible to show tightness (up to a factor of 16) for all other values in the table. Also, Agarwal et al. [2] recently showed lower bounds on the sample complexity of stochastic optimization when p1 = ∞ and p2 is arbitrary; their lower bounds match the last two rows of the table.

Non-dual Schatten norm pairs in finite dimensions. Exactly the same analysis as above can be carried out for Schatten p-norms, i.e. when W = B_{S(p1)} and X = B_{S(p2)} are the unit balls of the Schatten p-norm (the ℓp norm of the singular values) for matrices of dimensions d1 × d2. We get the same results as in the table above (as upper bounds on D2), with d = min{d1, d2}. These results again follow using arguments similar to the ℓp case, together with the tight constants for the strong convexity parameters of Schatten norms from [3].
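As a sanity check on Lemma 2, consider the Euclidean instance Ψ(w) = (1/2)∥w∥²_2 with W = X = the unit ℓ2 ball, so q = p = 2 and the table gives D2 = √(p2 − 1) = 1. The updates (4)–(5) reduce to projected gradient descent, and the realized average regret must stay below 2(sup_{w∈W} Ψ(w)/n)^{1/2} = √(2/n). The random linear losses below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 5
eta = np.sqrt(0.5 / n)            # eta = (sup_W Psi / n)^{1/p}, sup_W Psi = 1/2

def project(w):                   # Bregman projection onto W for Psi = ||.||^2/2
    nrm = np.linalg.norm(w)
    return w / nrm if nrm > 1 else w

w = np.zeros(d)                   # w_1 = argmin_W Psi
xs, loss = [], 0.0
for _ in range(n):
    x = rng.standard_normal(d)
    x /= max(1.0, np.linalg.norm(x))   # loss gradient lies in X (unit ball)
    loss += x @ w                      # suffer linear loss <x_t, w_t>
    w = project(w - eta * x)           # updates (4)-(5) with Psi = ||.||^2/2
    xs.append(x)
best = -np.linalg.norm(np.sum(xs, axis=0))  # inf over W of <sum_t x_t, w>
avg_regret = (loss - best) / n
assert avg_regret <= np.sqrt(2.0 / n) + 1e-9
```

For non-dual pairs, the same loop applies with Ψ replaced by ψ̃_r and the corresponding mirror map; only the projection and gradient-map steps change.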
Non-dual group norm pairs in finite dimensions. In applications such as multitask learning, group norms such as ∥w∥_{q,1} are often used on matrices w ∈ R^{k×d}, where the (q, 1) norm means taking the ℓ1 norm of the ℓq norms of the columns of w. Popular choices include q = 2 and q = ∞. Here it may be quite unnatural to use the dual norm (p, ∞) to define the space X where the data lives. For instance, we might want to consider W = B_{(q,1)} and X = B_{(∞,∞)} = B_∞. In such a case we can calculate that D2(W, X) = Θ(k^{1−1/q} log(d)), using Ψ(w) = (1/(q+r−2))∥w∥²_{q,r} where r = log d / (log d − 1).

Max norm. The max norm has been proposed as a convex matrix regularizer for applications such as matrix completion [21]. In the online version of the matrix completion problem, at each time step one element of the matrix is revealed, corresponding to X being the set of all matrices with a single entry equal to 1 and the rest 0. Since we need X to be convex, we can take the absolute convex hull of this set and use X to be the unit element-wise ℓ1 ball. Its dual norm is ∥W∥*_X = max_{i,j} |W_{i,j}|. On the other hand, given a matrix W, its max norm is given by ∥W∥_max = min_{U,V : W=UV⊤} (max_i ∥U_i∥_2)(max_j ∥V_j∥_2). The set W is the unit ball under the max norm. As noted in [22], the max-norm ball is equivalent, up to a factor of two, to the convex hull of all rank-one sign matrices.

Let us now make a more general observation. Consider any set W = abscvx({w_1, . . . , w_K}), the absolute convex hull of K points w_1, . . . , w_K ∈ B. In this case, the Minkowski norm for this W is given by ∥w∥_W := inf_{α_1,...,α_K : w = ∑_{i=1}^K α_i w_i} ∑_{i=1}^K |α_i|. For any q ∈ (1, 2], if we define the norm ∥w∥_{W,q} := inf_{α_1,...,α_K : w = ∑_{i=1}^K α_i w_i} (∑_{i=1}^K |α_i|^q)^{1/q}, then the function Ψ(w) = (1/(2(q−1)))∥w∥²_{W,q} is 2-uniformly convex w.r.t. ∥·∥_{W,q} (similarly to the ℓq case). Further, if we use q = log K / (log K − 1), then sup_{w∈W} Ψ(w) = O(log K) and so D2 = O(√(log K)). For the max norm, the norm is equivalent to the one obtained by taking the absolute convex hull of the set of all rank-one sign matrices.
The cardinality of this set is of course 2^{N+M}. Hence, using the above observation and noting that X is the unit ball of the element-wise ℓ1 norm, we see that Ψ is 2-uniformly convex w.r.t. ∥·∥*_X, and so we get a regret bound of O(√((M+N)/n)). This matches the stochastic (PAC) learning guarantee [22], and is the first guarantee we are aware of for the max-norm matrix completion problem in the online setting.

8 Conclusion and Discussion

In this paper we showed that for a general class of convex online learning problems, there always exists a distance generating function Ψ such that Mirror Descent using this function achieves a near-optimal regret guarantee. This shows that a fairly simple first-order method, in which each iteration requires a gradient computation and a prox-map computation, is sufficient for online learning in a very general sense. Of course, the main challenge is deriving distance generating functions appropriate for specific problems: although we give two mathematical expressions for such functions, in equations (7) and (9), neither is particularly tractable in general. At the end of Section 6 we do give some general guidelines for choosing the right distance generating function; obtaining a more explicit and simple procedure, at least for reasonable Banach spaces, is a very interesting question. Furthermore, for the Mirror Descent procedure to be efficient, the prox-map of the distance generating function must be efficiently computable, which means that even though a Mirror Descent procedure is always theoretically possible, in practice we might choose to use a non-optimal distance generating function, or even a non-MD procedure. We might also find other properties of w desirable, such as sparsity, which would bias us toward alternative methods [12, 7]. Nevertheless, in most instances that we are aware of, Mirror Descent, or a slight variation of it, is truly an optimal procedure, and this is formalized and rigorously established here.
In terms of the generality of the problems we handle, we required that the constraint set W be convex, but this seems unavoidable if we wish to obtain efficient algorithms (at least in general). Furthermore, we know that in terms of worst-case behavior, both in the stochastic and in the online setting, for convex cost functions the value is unchanged when a non-convex constraint set is replaced by its convex hull [18]. The requirement that the data domain X be convex is perhaps more restrictive, since even with a non-convex data domain the objective is still convex. Such non-convex X are certainly relevant in many applications, e.g. when the data is sparse, or when x ∈ X is an indicator, as in matrix completion problems and total variation regularization. In the total variation regularization problem, W is the set of all functions on the interval [0, 1] with total variation bounded by 1, which is in fact a Banach space. However, the set X we consider here is not the entire dual ball, and in fact is neither convex nor symmetric: it consists only of evaluations of the functions in W at points of the interval [0, 1], and one can consider a supervised learning problem where the goal is to use the set of all functions with bounded variation to predict targets taking values in [−1, 1]. Although the total-variation problem is not learnable, the matrix completion problem certainly is of much interest. In the matrix completion case, taking the convex hull of X does not seem to change the value, but we are aware of neither a guarantee that the value of the game is unchanged when a non-convex X is replaced by its convex hull, nor an example where the value does change; it would certainly be useful to understand this issue. We view the requirement that W and X be symmetric around the origin as less restrictive and mostly a matter of convenience.
We also focused on a specific form of the cost class F, which, beyond the almost unavoidable assumption of convexity, is taken to be constrained through the cost sub-gradients. This is general enough for considering supervised learning with an arbitrary convex loss in a worst-case setting, as the sub-gradients in this case exactly correspond to the data points, and so restricting F through its sub-gradients corresponds to restricting the data domain. Following Proposition 1, any optimality result for F_Lip also applies to F_sup, and this statement can also be easily extended to any other reasonable loss function, including the hinge loss, smooth loss functions such as the logistic loss, and even strongly convex loss functions such as the squared loss (in this context, note that a strongly convex scalar loss for supervised learning does not translate to a strongly convex optimization problem in the worst case). Going beyond a worst-case formulation of supervised learning, one might consider online repeated games with other constraints on F, such as strong convexity, or even constraints on {f_t} as a sequence, such as requiring low average error or conditions on the covariance of the data; these are beyond the scope of the current paper. Even in the statistical learning setting, online methods along with online-to-batch conversion are often preferred due to their efficiency, especially in high-dimensional problems. In fact, for ℓp spaces in the dual case, using lower bounds on the sample complexity for statistical learning of these problems, one can show that for high-dimensional problems mirror descent is an optimal procedure even for the statistical learning problem. We would like to consider the question of whether Mirror Descent is optimal for the stochastic convex optimization (convex statistical learning) setting [9, 19, 23] in general.
Establishing such universality would have significant implications, as it would indicate that any learnable (convex) problem is learnable using a one-pass first-order online method (i.e. a Stochastic Approximation approach).

References
[1] J. Abernethy, P. L. Bartlett, A. Rakhlin, and A. Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, 2008.
[2] Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, and Martin J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization.
[3] Keith Ball, Eric A. Carlen, and Elliott H. Lieb. Sharp uniform convexity and smoothness inequalities for trace norms. Invent. Math., 115:463–482, 1994.
[4] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[5] H. D. Block. The perceptron: A model for brain functioning. Reviews of Modern Physics, 34:123–135, 1962. Reprinted in "Neurocomputing" by Anderson and Rosenfeld.
[6] V. Chandrasekaran, S. Sanghavi, P. Parrilo, and A. Willsky. Sparse and low-rank matrix decompositions. In IFAC Symposium on System Identification, 2009.
[7] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[8] Ali Jalali, Pradeep Ravikumar, Sujay Sanghavi, and Chao Ruan. A dirty model for multi-task learning. In NIPS, December 2010.
[9] A. Juditsky, G. Lan, A. Nemirovski, and A. Shapiro. Stochastic approximation approach to stochastic programming. SIAM J. Optim., 19(4):1574–1609, 2009.
[10] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization, 2010.
[11] J. Kivinen and M. Warmuth.
Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–64, January 1997.
[12] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. In Advances in Neural Information Processing Systems 21, pages 905–912, 2009.
[13] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.
[14] A. Nemirovski and D. Yudin. On Cesaro's convergence of the gradient descent method for finding saddle points of convex-concave functions. Doklady Akademii Nauk SSSR, 239(4), 1978.
[15] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Nauka Publishers, Moscow, 1978.
[16] G. Pisier. Martingales with values in uniformly convex spaces. Israel Journal of Mathematics, 20(3–4):326–350, 1975.
[17] G. Pisier. Martingales in Banach spaces (in connection with type and cotype). Winter School/IHP Graduate Course, 2011.
[18] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Random averages, combinatorial parameters, and learnability. NIPS, 2010.
[19] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In COLT, 2009.
[20] S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. Advances in Neural Information Processing Systems, 19:1265, 2007.
[21] Nathan Srebro, Jason D. M. Rennie, and Tommi S. Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems 17, pages 1329–1336. MIT Press, 2005.
[22] Nathan Srebro and Adi Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545–560. Springer-Verlag, 2005.
[23] Nathan Srebro and Ambuj Tewari. Stochastic optimization for machine learning. ICML 2010 tutorial, 2010.
[24] K. Sridharan and A. Tewari. Convex games in Banach spaces. In Proceedings of the 23rd Annual Conference on Learning Theory, 2010.
[25] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, Hebrew University of Jerusalem, 2007.
[26] Manfred K. Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension, 2007.
[27] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
[28] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.
Unifying Framework for Fast Learning Rate of Non-Sparse Multiple Kernel Learning

Taiji Suzuki
Department of Mathematical Informatics, The University of Tokyo, Tokyo 113-8656, Japan
s-taiji@stat.t.u-tokyo.ac.jp

Abstract

In this paper, we give a new generalization error bound of Multiple Kernel Learning (MKL) for a general class of regularizations. Our main target in this paper is dense type regularizations, including ℓp-MKL, which imposes ℓp-mixed-norm regularization instead of ℓ1-mixed-norm regularization. According to recent numerical experiments, sparse regularization does not necessarily show good performance compared with dense type regularizations. Motivated by this fact, this paper gives a general theoretical tool to derive fast learning rates that is applicable to arbitrary mixed-norm-type regularizations in a unifying manner. As a by-product of our general result, we show a fast learning rate of ℓp-MKL that is the tightest among existing bounds. We also show that our general learning rate achieves the minimax lower bound. Finally, we show that, when the complexities of the candidate reproducing kernel Hilbert spaces are inhomogeneous, dense type regularization shows a better learning rate than sparse ℓ1 regularization.

1 Introduction

Multiple Kernel Learning (MKL), proposed by [20], is one of the most promising methods for adaptively selecting the kernel function in supervised kernel learning. Kernel methods are widely used and several studies have supported their usefulness [25]. However, the performance of kernel methods critically relies on the choice of the kernel function. Many methods have been proposed to deal with the issue of kernel selection. [23] studied hyperkernels as a kernel of kernel functions. [2] considered a DC programming approach to learn a mixture of kernels with continuous parameters. Some studies tackled the problem of learning non-linear combinations of kernels, as in [4, 9, 34].
Among them, learning a linear combination of finitely many candidate kernels with non-negative coefficients is the most basic, fundamental and commonly used approach. The seminal work on MKL by [20] considered learning a convex combination of candidate kernels. This work opened up a sequence of MKL studies. [5] showed that MKL can be reformulated as a kernel version of the group lasso [36]. This formulation gives the insight that MKL can be described as an ℓ1-mixed-norm regularized method. As a generalization of MKL, ℓp-MKL, which imposes ℓp-mixed-norm regularization, has been proposed [22, 14]; it includes the original MKL as the special case p = 1. Another direction of generalizing MKL is elasticnet-MKL [26, 31], which imposes a mixture of ℓ1-mixed-norm and ℓ2-mixed-norm regularizations. Recently, numerical studies have shown that ℓp-MKL with p > 1 and elasticnet-MKL show better performances than ℓ1-MKL in several situations [14, 8, 31]. An interesting observation here is that both ℓp-MKL and elasticnet-MKL produce denser estimators than the original ℓ1-MKL while showing favorable performances. One motivation of this paper is to give a theoretical justification of these generalized dense type MKL methods in a unifying manner.

In the pioneering paper of [20], a convergence rate of MKL is given as √(M/n), where M is the number of given kernels and n is the number of samples. [27] gave an improved learning bound utilizing the pseudo-dimension of the given kernel class. [35] gave a convergence bound utilizing Rademacher chaos and gave some upper bounds on the Rademacher chaos utilizing the pseudo-dimension of the kernel class. [8] presented a convergence bound for a learning method with L2 regularization on the kernel weight. [10] gave the convergence rate of ℓp-MKL as (M^{1−1/p} ∨ √(log M))/√n for 1 ≤ p ≤ 2. [15] gave a similar convergence bound with improved constants.
[16] generalized this bound to a variant of the elasticnet type regularization and widened the effective range of p to all of p ≥ 1, while the existing bounds imposed 1 ≤ p ≤ 2. One concern about these bounds is that all of the bounds introduced above are "global" bounds, in the sense that they are applicable to all candidate estimators. Consequently, all the convergence rates presented above are of order 1/√n with respect to the number n of samples. However, by utilizing localization techniques, including the so-called local Rademacher complexity [6, 17] and the peeling device [32], we can derive faster learning rates. Instead of uniformly bounding all candidate estimators, a localized inequality focuses on a particular estimator, such as the empirical risk minimizer, and thus can give a sharp convergence rate. Localized bounds for MKL have been given mainly in sparse learning settings [18, 21, 19], and there are only a few studies for non-sparse settings in which sparsity of the ground truth is not assumed. Recently, [13] gave a localized convergence bound for ℓp-MKL. However, their analysis assumed a strong condition where the RKHSs have no correlation with each other. In this paper, we show a unified framework to derive fast convergence rates of MKL with various regularization types. The framework is applicable to arbitrary mixed-norm regularizations, including ℓp-MKL and elasticnet-MKL. Our learning rate utilizes the localization technique and is thus tighter than global type learning rates. Moreover, our analysis does not require the no-correlation assumption of [13]. We apply our general framework to some examples and show that our bound achieves the minimax-optimal rate. As a by-product, we obtain a tighter convergence rate of ℓp-MKL than existing results. Finally, we show that dense type regularizations can outperform sparse ℓ1 regularization when the complexities of the RKHSs are not uniformly the same.
2 Preliminary

In this section, we give the problem formulation, the notations and the assumptions required for the convergence analysis.

2.1 Problem Formulation

Suppose that we are given n i.i.d. samples {(x_i, y_i)}_{i=1}^n distributed from a probability distribution P on X × R, where X is an input space. We denote by Π the marginal distribution of P on X. We are given M reproducing kernel Hilbert spaces (RKHSs) {H_m}_{m=1}^M, each of which is associated with a kernel k_m. We consider a mixed-norm type regularization with respect to an arbitrary given norm ∥·∥_ψ, that is, the regularization is given by the norm ∥(∥f_m∥_{H_m})_{m=1}^M∥_ψ of the vector (∥f_m∥_{H_m})_{m=1}^M for f_m ∈ H_m (m = 1, . . . , M).* For notational simplicity, we write ∥f∥_ψ = ∥(∥f_m∥_{H_m})_{m=1}^M∥_ψ for f = ∑_{m=1}^M f_m (f_m ∈ H_m). The general formulation of MKL that we consider in this paper fits a function f = ∑_{m=1}^M f_m (f_m ∈ H_m) to the data by solving the following optimization problem:

f̂ = ∑_{m=1}^M f̂_m = argmin_{f_m∈H_m (m=1,...,M)} (1/n) ∑_{i=1}^n ( y_i − ∑_{m=1}^M f_m(x_i) )² + λ_1^{(n)} ∥f∥²_ψ. (1)

We call this "ψ-norm MKL". This formulation covers many practically used MKL methods (e.g., ℓp-MKL, elasticnet-MKL, and variable sparsity kernel learning; see below for their definitions), and is solvable by a finite dimensional optimization procedure due to the representer theorem [12]. In this paper, we focus on the regression problem (the squared loss). However, the discussion presented here can be generalized to Lipschitz continuous and strongly convex losses [6].

* We assume that the mixed norm ∥(∥f_m∥_{H_m})_{m=1}^M∥_ψ satisfies the triangle inequality with respect to (f_m)_{m=1}^M, that is, ∥(∥f_m + f′_m∥_{H_m})_{m=1}^M∥_ψ ≤ ∥(∥f_m∥_{H_m})_{m=1}^M∥_ψ + ∥(∥f′_m∥_{H_m})_{m=1}^M∥_ψ. For this it is sufficient that the norm is monotone, i.e., ∥a∥_ψ ≤ ∥a + b∥_ψ for all a, b ≥ 0.

Example 1: ℓp-MKL. The first motivating example of ψ-norm MKL is ℓp-MKL [14], which employs the ℓp norm for 1 ≤ p ≤ ∞ as the regularizer: ∥f∥_ψ = ∥(∥f_m∥_{H_m})_{m=1}^M∥_{ℓp} = (∑_{m=1}^M ∥f_m∥^p_{H_m})^{1/p}.
If p is strictly greater than 1 (p > 1), the solution of ℓp-MKL becomes dense. In particular, p = 2 corresponds to averaging candidate kernels with uniform weight [22]. It is reported that ℓp-MKL with p greater than 1, say p = 4/3, often shows better performance than the original sparse ℓ1-MKL [10]. Example 2: Elasticnet-MKL The second example is elasticnet-MKL [26, 31], which employs a mixture of ℓ1 and ℓ2 norms as the regularizer: ∥f∥ψ = τ∥f∥ℓ1 + (1 −τ)∥f∥ℓ2 = τ ∑M m=1 ∥fm∥Hm + (1 −τ)(∑M m=1 ∥fm∥2 Hm)1/2 with τ ∈[0, 1]. Elasticnet-MKL shares the same spirit as ℓp-MKL in the sense that it bridges sparse ℓ1-regularization and dense ℓ2-regularization. An efficient optimization method for elasticnet-MKL is proposed in [30]. Example 3: Variable Sparsity Kernel Learning Variable Sparsity Kernel Learning (VSKL), proposed by [1], divides the RKHSs into M′ groups {Hj,k}Mj k=1, (j = 1, . . . , M′) and imposes a mixed-norm regularization ∥f∥ψ = ∥f∥(p,q) = {∑M′ j=1(∑Mj k=1 ∥fj,k∥p Hj,k)q/p}1/q, where 1 ≤p, q, and fj,k ∈Hj,k. An advantage of VSKL is that, by adjusting the parameters p and q, various levels of sparsity can be introduced; that is, the parameters can control the level of sparsity within and between groups. This is beneficial especially for multi-modal tasks like object categorization. 2.2 Notation and Assumptions Here, we prepare the notation and assumptions used in the analysis. Let H⊕M = H1 ⊕· · · ⊕HM. Throughout the paper, we assume the following technical conditions (see also [3]). Assumption 1. (Basic Assumptions) (A1) There exists f ∗= (f ∗ 1 , . . . , f ∗ M) ∈H⊕M such that E[Y |X] = f ∗(X) = ∑M m=1 f ∗ m(X), and the noise ϵ := Y −f ∗(X) is bounded as |ϵ| ≤L. (A2) For each m = 1, . . . , M, Hm is separable (with respect to the RKHS norm) and supX∈X |km(X, X)| < 1. The first assumption in (A1) ensures that the model H⊕M is correctly specified, and the technical assumption |ϵ| ≤L allows ϵf to be Lipschitz continuous with respect to f. 
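Stepping back to Examples 1–3: all three regularizers are norms of the vector of block norms (∥f1∥H1, . . . , ∥fM∥HM). A minimal sketch evaluating them on a given vector of block norms (the helper names are ours, not from the paper):

```python
import numpy as np

def lp_norm(block_norms, p):
    """l_p-MKL regularizer: (sum_m ||f_m||^p)^(1/p); p = inf gives max_m ||f_m||."""
    b = np.asarray(block_norms, dtype=float)
    return np.max(b) if np.isinf(p) else np.sum(b ** p) ** (1.0 / p)

def elasticnet_norm(block_norms, tau):
    """Elasticnet-MKL regularizer: tau * l1-norm + (1 - tau) * l2-norm."""
    return tau * lp_norm(block_norms, 1) + (1 - tau) * lp_norm(block_norms, 2)

def vskl_norm(block_norm_groups, p, q):
    """VSKL regularizer: l_q-norm across groups of within-group l_p-norms."""
    return lp_norm([lp_norm(g, p) for g in block_norm_groups], q)

b = [3.0, 0.0, 4.0]
print(lp_norm(b, 2))                              # 5.0
print(elasticnet_norm(b, 0.5))                    # 0.5*7 + 0.5*5 = 6.0
print(vskl_norm([[3.0, 4.0], [0.0]], p=2, q=1))   # 5.0
```

All three satisfy the monotonicity condition of the footnote (∥a∥ψ ≤ ∥a + b∥ψ for a, b ≥ 0), which is what guarantees the triangle inequality used throughout.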
The noise boundedness can be relaxed to the unbounded situation as in [24], but we do not pursue that direction for simplicity. Let the integral operator Tkm : L2(Π) →L2(Π) corresponding to a kernel function km be Tkmf = ∫km(·, x)f(x)dΠ(x). It is known that this operator is compact, positive, and self-adjoint (see Theorem 4.27 of [28]); thus it has at most countably many non-negative eigenvalues. We denote by µℓ,m the ℓ-th largest eigenvalue (with possible multiplicity) of the integral operator Tkm. We then make the following assumption on the rate of decrease of µℓ,m. Assumption 2. (Spectral Assumption) There exist 0 < sm < 1 and 0 < c such that (A3) µℓ,m ≤cℓ−1/sm, (∀ℓ≥1, 1 ≤∀m ≤M), where {µℓ,m}∞ ℓ=1 is the spectrum of the operator Tkm corresponding to the kernel km. It was shown that the spectral assumption (A3) is equivalent to the classical covering number assumption [29]. Recall that the ϵ-covering number N(ϵ, BHm, L2(Π)) with respect to L2(Π) is the minimal number of balls with radius ϵ needed to cover the unit ball BHm in Hm [33]. If the spectral assumption (A3) holds, there exists a constant C that depends only on sm and c such that log N(ε, BHm, L2(Π)) ≤Cε−2sm, (2) and the converse is also true (see [29, Theorem 15] and [28] for details). Therefore, if sm is large, the RKHSs are regarded as “complex”, and if sm is small, the RKHSs are “simple”. An important class of RKHSs for which sm is known is the Sobolev spaces: (A3) holds with sm = d/(2α) for the Sobolev space of α-times continuously differentiable functions on the Euclidean ball of Rd [11]. Moreover, for α-times continuously differentiable kernels on a closed Euclidean ball in Rd, it holds with sm = d/(2α) [28, Theorem 6.26]. Table 1: Summary of the constants we use in this article. n: the number of samples. M: the number of candidate kernels. sm: the spectral decay coefficient (see (A3)). κM: the smallest eigenvalue of the design matrix (see Eq. (3)). 
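The spectrum {µℓ,m} of the integral operator can be approximated by the eigenvalues of the normalized Gram matrix K/n, which makes the decay condition (A3) easy to inspect empirically. A hedged sketch (the kernel, its bandwidth, and the value of s below are our own illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0.0, 1.0, size=n)

# Gaussian kernel Gram matrix; the eigenvalues of K/n approximate the
# spectrum {mu_l} of the integral operator T_k under Pi = Uniform[0, 1].
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.2 ** 2))
mu = np.sort(np.linalg.eigvalsh(K / n))[::-1]  # descending

# Smallest c such that mu_l <= c * l^(-1/s) holds over the first 20 indices,
# for an illustrative s = 0.5 (i.e. a decay of at least l^(-2)).
s = 0.5
c = max(mu[l] * (l + 1) ** (1.0 / s) for l in range(20))
print(mu[:5], c)
```

As expected for a Gaussian kernel, the empirical eigenvalues drop off very quickly, consistent with (A3) holding for arbitrarily small s.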
According to Theorem 7.34 of [28], for Gaussian kernels with a compactly supported distribution, (A3) holds for arbitrarily small sm > 0. The covering number of Gaussian kernels with an unbounded support distribution is also described in Theorem 7.34 of [28]. Let κM be defined as follows: κM := sup { κ ≥0 | κ ≤∥∑M m=1 fm∥2 L2(Π) / ∑M m=1 ∥fm∥2 L2(Π), ∀fm ∈Hm (m = 1, . . . , M) } . (3) κM represents the correlation of the RKHSs. We assume that the RKHSs are not completely correlated with each other. Assumption 3. (Incoherence Assumption) κM is strictly bounded from below; that is, there exists a constant C0 > 0 such that (A4) 0 < C−1 0 < κM. This condition is motivated by the incoherence condition [18, 21] considered in sparse MKL settings. It ensures the uniqueness of the decomposition f ∗= ∑M m=1 f ∗ m of the ground truth. [3] also assumed this condition to show the consistency of ℓ1-MKL. Finally, we give a technical assumption with respect to the sup-norm. Assumption 4. (Embedded Assumption) Under the Spectral Assumption, there exists a constant C1 > 0 such that (A5) ∥fm∥∞≤C1∥fm∥1−sm Hm ∥fm∥sm L2(Π). This condition is met when the input distribution Π has a density with respect to the uniform distribution on X that is bounded away from 0 and the RKHSs are continuously embedded in a Sobolev space W α,2(X), where sm = d/(2α), d is the dimension of the input space X, and α is the “smoothness” of the Sobolev space. Many practically used kernels satisfy condition (A5); for example, the RKHSs of Gaussian kernels can be embedded in all Sobolev spaces. Therefore the condition (A5) is rather common and practical. More generally, there is a clear characterization of the condition (A5) in terms of real interpolation of spaces. One can find detailed and formal discussions of interpolation in [29], and Proposition 2.10 of [7] gives the necessary and sufficient condition for the assumption (A5). The constants we use later are summarized in Table 1. 
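For intuition about κM: if each Hm is one-dimensional, Hm = span{gm} with ∥gm∥L2(Π) = 1, the infimum in Eq. (3) is attained at the smallest eigenvalue of the L2(Π) correlation matrix Cmm′ = ⟨gm, gm′⟩L2(Π). A sketch estimating κM by Monte Carlo (the particular functions gm are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20_000
x = rng.uniform(0.0, 1.0, size=N)

# One-dimensional RKHSs H_m = span{g_m}; kappa_M is then the smallest
# eigenvalue of the L2(Pi) correlation matrix C[m, m'] = <g_m, g_m'>.
G = np.stack([np.sin(2 * np.pi * (m + 1) * x) for m in range(3)])
G /= np.sqrt(np.mean(G ** 2, axis=1, keepdims=True))  # normalize ||g_m||_{L2} = 1
C = (G @ G.T) / N
kappa_M = np.linalg.eigvalsh(C)[0]
print(kappa_M)  # close to 1: sines of distinct frequencies are uncorrelated under Pi
```

Perfectly uncorrelated spaces give κM = 1 (the setting assumed in [13]); Assumption 3 only requires κM to stay bounded away from 0.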
3 Convergence Rate Analysis of ψ-norm MKL Here we derive the learning rate of ψ-norm MKL in the most general setting. We suppose that the number of kernels M can increase with the number of samples n. The motivation of our analysis is summarized as follows: • Give a unifying framework for deriving a sharp convergence rate of ψ-norm MKL. • (homogeneous complexity) Show the convergence rates of some examples using our general framework, and prove their minimax-optimality under the condition that the complexities sm of all RKHSs are the same. • (inhomogeneous complexity) Discuss how dense-type regularization outperforms sparse-type regularization when the complexities sm of the RKHSs are not uniformly the same. Now we define η(t) := ηn(t) = max(1, √t, t/√n) for t > 0, and, for given positive reals {rm}M m=1 and given n, we define α1, α2, β1, β2 as follows: α1 := α1({rm}) = 3 (∑M m=1 r−2sm m / n)1/2, α2 := α2({rm}) = 3
∥(sm r1−sm m / √n)M m=1∥ψ∗, β1 := β1({rm}) = 3 (∑M m=1 r−2sm(3−sm)/(1+sm) m / n2/(1+sm))1/2, β2 := β2({rm}) = 3 ∥(sm r(1−sm)2/(1+sm) m / n1/(1+sm))M m=1∥
ψ∗ , (4) (note that α1, α2, β1, β2 implicitly depends on the reals {rm}M m=1). Then the following theorem gives the general form of the learning rate of ψ-norm MKL. Theorem 1. Suppose Assumptions 1-4 are satisfied. Let {rm}M m=1 be arbitrary positive reals that can depend on n, and assume λ(n) 1 = ( α2 α1 )2 + ( β2 β1 )2 . Then for all n and t′ that satisfy log(M) √n ≤1 and 4ϕ√n κM max{α2 1, β2 1, M log(M) n }η(t′) ≤ 1 12 and for all t ≥1, we have ∥ˆf −f ∗∥2 L2(Π) ≤24η(t)2ϕ2 κM ( α2 1 + β2 1 + M log(M) n ) + 4 [(α2 α1 )2 + (β2 β1 )2] ∥f ∗∥2 ψ. (5) with probability 1 −exp(−t) −exp(−t′). The proof will be given in Appendix D in the supplementary material. One can also find an outline of the proof in Appendix A in the supplementary material. The statement of Theorem 1 itself is complicated. Thus we will show later concrete learning rates on some examples such as ℓp-MKL. The convergence rate (5) depends on the positive reals {rm}M m=1, but the choice of {rm}M m=1 are arbitrary. Thus by minimizing the right hand side of Eq. (5), we obtain tight convergence bound as follows: ∥ˆf −f ∗∥2 L2(Π) =Op ( min {rm}M m=1: rm>0 { α2 1 + β2 1 + [(α2 α1 )2 + (β2 β1 )2] ∥f ∗∥2 ψ + M log(M) n }) . (6) There is a trade-off between the first two terms (a) := α2 1 + β2 1 and the third term (b) := [( α2 α1 )2 + ( β2 β1 )2] ∥f ∗∥2 ψ, that is, if we take {rm}m large, then the term (a) becomes small and the term (b) becomes large, on the other hand, if we take {rm}m small, then it results in large (a) and small (b). Therefore we need to balance the two terms (a) and (b) to obtain the minimum in Eq. (6). We discuss the obtained learning rate in two situations, (i) homogeneous complexity situation, and (ii) inhomogeneous complexity situation: (i) (homogeneous) All sms are same: there exists 0 < s < 1 such that sm = s (∀m) (Sec.3.1). (ii) (inhomogeneous) All sms are not same: there exist m, m′ such that sm ̸= sm′ (Sec.3.2). 
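To make the trade-off concrete, the following sketch evaluates the terms of Eq. (4) and the right-hand side of the bound (6) in the homogeneous case sm = s with rm = r for all m, taking ψ to be the self-dual ℓ2-norm and dropping the constant factors ϕ, η(t), and κM (all numeric values are illustrative, not from the paper):

```python
import numpy as np

n, M, s, R_psi = 10_000, 8, 0.5, 1.0  # illustrative sample size, kernels, complexity, ||f*||_psi

def bound(r):
    # Terms of Eq. (4) with s_m = s, r_m = r, psi = l2 (so ||.||_{psi*} = l2).
    a1 = 3 * np.sqrt(M * r ** (-2 * s) / n)
    a2 = 3 * np.sqrt(M) * s * r ** (1 - s) / np.sqrt(n)
    b1 = 3 * np.sqrt(M * r ** (-2 * s * (3 - s) / (1 + s)) / n ** (2 / (1 + s)))
    b2 = 3 * np.sqrt(M) * s * r ** ((1 - s) ** 2 / (1 + s)) / n ** (1 / (1 + s))
    # Right-hand side of Eq. (6), constants dropped.
    return a1**2 + b1**2 + ((a2 / a1) ** 2 + (b2 / b1) ** 2) * R_psi**2 + M * np.log(M) / n

rs = np.logspace(-3, 3, 601)
vals = np.array([bound(r) for r in rs])
r_star = rs[np.argmin(vals)]
print(r_star, vals.min())
```

Small r inflates the terms (a) = α1² + β1² while large r inflates the term (b) multiplying ∥f∗∥ψ², so the minimum sits at an interior r, exactly the balancing that Lemma 2 carries out in closed form.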
3.1 Analysis on Homogeneous Settings Here we assume all sms are same, say sm = s for all m (homogeneous setting). If we further restrict the situation as all rms are same (rm = r (∀m) for some r), then the minimization in Eq. (6) can be easily carried out as in the following lemma. Let 1 be the M-dimensional vector each element of which is 1: 1 := (1, . . . , 1)⊤∈RM, and ∥· ∥ψ∗be the dual norm of the ψ-norm†. Lemma 2. When sm = s (∀m) with some 0 < s < 1 and n ≥(∥1∥ψ∗∥f ∗∥ψ/M) 4s 1−s , the bound (6) indicates that ∥ˆf −f ∗∥2 L2(Π) = Op ( M 1−2s 1+s n− 1 1+s (∥1∥ψ∗∥f ∗∥ψ) 2s 1+s + M log(M) n ) . (7) †The dual of the norm ∥· ∥ψ is defined as ∥b∥ψ∗:= supa{b⊤a | ∥a∥ψ ≤1}. 5 The proof is given in Appendix G.1 in the supplementary material. Lemma 2 is derived by assuming rm = r (∀m), which might make the bound loose. However, when the norm ∥· ∥ψ is isotropic (whose definition will appear later), that restriction (rm = r (∀m)) does not make the bound loose, that is, the upper bound obtained in Lemma 2 is tight and achieves the minimax optimal rate (the minimax optimal rate is the one that cannot be improved by any estimator). In the following, we investigate the general result of Lemma 2 through some important examples. Convergence Rate of ℓp-MKL Here we derive the convergence rate of ℓp-MKL (1 ≤p ≤∞) where ∥f∥ψ = ∑M m=1(∥fm∥p Hm) 1 p (for p = ∞, it is defined as maxm ∥fm∥Hm). It is well known that the dual norm of ℓp-norm is given as ℓq-norm where q is the real satisfying 1 p + 1 q = 1. For notational simplicity, let Rp := (∑M m=1 ∥f ∗ m∥p Hm ) 1 p . Then substituting ∥f ∗∥ψ = Rp and ∥1∥ψ∗= ∥1∥ℓq = M 1 q = M 1−1 p into the bound (7), the learning rate of ℓp-MKL is given as ∥ˆf −f ∗∥2 L2(Π) =Op ( n− 1 1+s M 1− 2s p(1+s) R 2s 1+s p + M log(M) n ) . (8) If we further assume n is sufficiently large so that n ≥M 2 p R−2 p (log M) 1+s s , the leading term is the first term, and thus we have ∥ˆf −f ∗∥2 L2(Π) = Op ( n− 1 1+s M 1− 2s p(1+s) R 2s 1+s p ) . 
(9) Note that as the complexity s of RKHSs becomes small the convergence rate becomes fast. It is known that n− 1 1+s is the minimax optimal learning rate for single kernel learning. The derived rate of ℓp-MKL is obtained by multiplying a coefficient depending on M and Rp to the optimal rate of single kernel learning. To investigate the dependency of Rp to the learning rate, let us consider two extreme settings, i.e., sparse setting (∥f ∗ m∥Hm)M m=1 = (1, 0, . . . , 0) and dense setting (∥f ∗ m∥Hm)M m=1 = (1, . . . , 1) as in [15]. • (∥f ∗ m∥Hm)M m=1 = (1, 0, . . . , 0): Rp = 1 for all p. Therefore the convergence rate n− 1 1+s M 1− 2s p(1+s) is fast for small p and the minimum is achieved at p = 1. This means that ℓ1 regularization is preferred for sparse truth. • (∥f ∗ m∥Hm)M m=1 = (1, . . . , 1): Rp = M 1 p , thus the convergence rate is Mn− 1 1+s for all p. Interestingly for dense ground truth, there is no dependency of the convergence rate on the parameter p (later we will show that this is not the case in inhomogeneous settings (Sec.3.2)). That is, the convergence rate is M times the optimal learning rate of single kernel learning (n− 1 1+s ) for all p. This means that for the dense settings, the complexity of solving MKL problem is equivalent to that of solving M single kernel learning problems. Comparison with Existing Bounds Here we compare the bound for ℓp-MKL we derived above with the existing bounds. Let Hℓp(R) be the ℓp-mixed norm ball with radius R: Hℓp(R) := {f = ∑M m=1 fm | (∑M m=1 ∥fm∥p Hm) 1 p ≤R}. [10, 16, 15] gave “global” type bounds for ℓp-MKL as R(f) ≤bR(f) + C M 1−1 p ∨√ log(M) √n R for all f ∈Hℓp(R), (10) where R(f) and bR(f) is the population risk and the empirical risk. First observation is that the bounds by [10] and [15] are restricted to the situation 1 ≤p ≤2. On the other hand, our analysis and that of [16] covers all p ≥1. Second, since our bound is specialized to the regularized risk minimizer ˆf defined at Eq. 
(1) while the existing bound (10) is applicable to all f ∈Hℓp(R), our bound is sharper than theirs for sufficiently large n. To see this, suppose n ≥M 2 p R−2 p , then we have n− 1 1+s M 1− 2s p(1+s) ≤n−1 2 M 1−1 p . Moreover we should note that s can be large as long as Spectral Assumption (A3) is satisfied. Thus the bound (10) is formally recovered by our analysis by approaching s to 1. Recently [13] gave a tighter convergence rate utilizing the localization technique as ∥ˆf −f ∗∥2 L2(Π) = Op ( minp′≥p { p′ p′−1n− 1 1+s M 1− 2s p′(1+s) R 2s 1+s p′ }) , under a strong condition κM = 1 that imposes all 6 RKHSs are completely uncorrelated to each other. Comparing our bound with their result, there are not minp′≥p and p′ p′−1 in our bound (if there is not the term p′ p′−1, then the minimum of minp′≥p is attained at p′ = p, thus our bound is tighter), moreover our analysis doesn’t need the strong assumption κM = 1. Convergence Rate of Elasticnet-MKL Elasticnet-MKL employs a mixture of ℓ1 and ℓ2 norm as the regularizer: ∥f∥ψ = τ∥f∥ℓ1 + (1 −τ)∥f∥ℓ2 where τ ∈[0, 1]. Then its dual norm is given by ∥b∥ψ∗= mina∈RM { max ( ∥a∥ℓ∞ τ , ∥a−b∥ℓ2 1−τ )} . Therefore by a simple calculation, we have ∥1∥ψ∗= √ M 1−τ+τ √ M . Hence Eq. (7) gives the convergence rate of elasticnet-MKL as ∥ˆf −f ∗∥2 L2(Π) = Op ( n− 1 1+s M 1− s 1+s (1−τ+τ √ M) 2s 1+s (τ∥f ∗∥ℓ1 + (1 −τ)∥f ∗∥ℓ2) 2s 1+s + M log(M) n ) . Note that, when τ = 0 or τ = 1, this rate is identical to that of ℓ2-MKL or ℓ1-MKL obtained in Eq. (8) respectively. 3.1.1 Minimax Lower Bound In this section, we show that the derived learning rate (7) achieves the minimax-learning rate on the ψ-norm ball Hψ(R) := { f = ∑M m=1 fm ∥f∥ψ ≤R } , when the norm is isotropic. We say the ψ-norm ∥· ∥ψ is isotropic when there exits a universal constant ¯c such that ¯cM = ¯c∥1∥ℓ1 ≥∥1∥ψ∗∥1∥ψ, ∥b∥ψ ≤∥b′∥ψ (if 0 ≤bm ≤b′ m (∀m)), (11) (note that the inverse inequality M ≤∥1∥ψ∗∥1∥ψ of the first condition always holds by the definition of the dual norm). 
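The sparse/dense dichotomy for the ℓp-MKL rate (9) can be checked numerically. The sketch below evaluates the leading term n−1/(1+s) M1−2s/(p(1+s)) Rp2s/(1+s) for both extreme ground truths (the values of n, M, s are illustrative):

```python
import numpy as np

n, M, s = 10_000, 16, 0.5  # illustrative sample size, kernel count, complexity

def rate(p, f_norms):
    # Leading term of Eq. (9): n^{-1/(1+s)} M^{1-2s/(p(1+s))} R_p^{2s/(1+s)}.
    R_p = np.sum(np.asarray(f_norms, dtype=float) ** p) ** (1.0 / p)
    return n ** (-1 / (1 + s)) * M ** (1 - 2 * s / (p * (1 + s))) * R_p ** (2 * s / (1 + s))

sparse = [1.0] + [0.0] * (M - 1)   # (||f*_m||)_m = (1, 0, ..., 0)
dense = [1.0] * M                  # (||f*_m||)_m = (1, ..., 1)
print([rate(p, sparse) for p in (1, 4/3, 2, 8)])  # increasing in p: l1 wins for sparse truth
print([rate(p, dense) for p in (1, 4/3, 2, 8)])   # identical for all p for dense truth
```

For the sparse truth Rp = 1 for every p and the M-exponent grows with p, so p = 1 is best; for the dense truth Rp = M1/p exactly cancels the p-dependence, giving M·n−1/(1+s) for all p, as stated above.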
Practically used regularizations usually satisfy this isotropic property. In fact, ℓp-MKL, elasticnet-MKL and VSKL satisfy the isotropic property with ¯c = 1. We derive the minimax learning rate in a simpler situation. First we assume that each RKHS is same as others. That is, the input vector is decomposed into M components like x = (x(1), . . . , x(M)) where {x(m)}M m=1 are M i.i.d. copies of a random variable ˜X, and Hm = {fm | fm(x) = fm(x(1), . . . , x(M)) = ˜fm(x(m)), ˜fm ∈ e H} where e H is an RKHS shared by all Hm. Thus f ∈H⊕M is decomposed as f(x) = f(x(1), . . . , x(M)) = ∑M m=1 ˜fm(x(m)) where each ˜fm is a member of the common RKHS e H. We denote by ek the kernel associated with the RKHS e H. In addition to the condition about the upper bound of spectrum (Spectral Assumption (A3)), we assume that the spectrum of all the RKHSs Hm have the same lower bound of polynomial rate. Assumption 5. (Strong Spectral Assumption) There exist 0 < s < 1 and 0 < c, c′ such that (A6) c′ℓ−1 s ≤˜µℓ≤cℓ−1 s , (1 ≤∀ℓ), where {˜µℓ}∞ ℓ=1 is the spectrum of the integral operator T˜k corresponding to the kernel ˜k. In particular, the spectrum of Tkm also satisfies µℓ,m ∼ℓ−1 s (∀ℓ, m). Without loss of generality, we may assume that E[f( ˜X)] = 0 (∀f ∈e H). Since each fm receives i.i.d. copy of ˜X, Hms are orthogonal to each other: E[fm(X)fm′(X)] = E[ ˜fm(X(m)) ˜fm′(X(m′))] = 0 (∀fm ∈Hm, ∀fm′ ∈Hm′, ∀m ̸= m′). We also assume that the noise {ϵi}n i=1 is an i.i.d. normal sequence with standard deviation σ > 0. Under the assumptions described above, we have the following minimax L2(Π)-error. Theorem 3. Suppose R > 0 is given and n > ¯c2M 2 R2∥1∥2 ψ∗is satisfied. Then the minimax-learning rate on Hψ(R) for isotropic norm ∥· ∥ψ is lower bounded as min ˆ f max f ∗∈Hψ(R) E [ ∥ˆf −f ∗∥2 L2(Π) ] ≥CM 1−2s 1+s n− 1 1+s (∥1∥ψ∗R) 2s 1+s , (12) where inf is taken over all measurable functions of n samples {(xi, yi)}n i=1. 7 The proof will be given in Appendix F in the supplementary material. 
One can see that the convergence rate derived in Eq. (7) achieves the minimax rate on the ψ-norm ball (Theorem 3) up to M log(M) n that is negligible when the number of samples is large. This means that the ψ-norm regularization is well suited to make the estimator included in the ψ-norm ball. 3.2 Analysis on Inhomogeneous Settings In the previous section (analysis on homogeneous settings), we have not seen any theoretical justification supporting the fact that dense MKL methods like ℓ4 3 -MKL can outperform the sparse ℓ1-MKL [10]. In this section, we show dense type regularizations can outperform the sparse regularization in inhomogeneous settings (there exists m, m′ such that sm ̸= sm′). For simplicity, we focus on ℓp-MKL, and discuss the relation between the learning rate and the norm parameter p. Let us consider an extreme situation where s1 = s for some 0 < s < 1 and sm = 0 (m > 1)‡. In this situation, we have α1 = 3 ( r−2s 1 +M−1 n ) 1 2 , α2 = 3 sr1−s 1 √n , β1 = 3 ( r −2s(3−s) 1+s 1 +M−1 n 2 1+s ) 1 2 , β2 = 3 sr (1−s)2 1+s 1 n 1 1+s . for all p. Note that these α1, α2, β1 and β2 have no dependency on p. Therefore the learning bound (6) is smallest when p = ∞because ∥f ∗∥ℓ∞≤∥f ∗∥ℓp for all 1 ≤p < ∞. In particular, when (∥f ∗ m∥Hm)M m=1 = 1, we have ∥f ∗∥ℓ1 = M∥f ∗∥ℓ∞and thus obviously the learning rate of ℓ∞-MKL given by Eq. (6) is faster than that of ℓ1-MKL. In fact, through a bit cumbersome calculation, one can check that ℓ∞-MKL can be M 2s 1+s times faster than ℓ1-MKL in a worst case. This indicates that, when the complexities of RKHSs are inhomogeneous, the generalization abilities of dense type regularizations (e.g., ℓ∞-MKL) can be better than the sparse type regularization (ℓ1-MKL). In real settings, it is likely that one uses various types of kernels and the complexities of RKHSs become inhomogeneous. As mentioned above, it has been often reported that ℓ1-MKL is outperformed by dense type MKL such as ℓ4 3 -MKL in numerical experiments [10]. 
Our theoretical analysis explains well this experimental results. 4 Conclusion We have shown a unified framework to derive the learning rate of MKL with arbitrary mixed-normtype regularization. To analyze the general result, we considered two situations: homogeneous settings and inhomogeneous settings. We have seen that the convergence rate of ℓp-MKL obtained in homogeneous settings is tighter and require less restrictive condition than existing results. We have also shown the convergence rate of elasticnet-MKL, and proved the derived learning rate is minimax optimal. Furthermore, we observed that our bound well explains the favorable experimental results for dense type MKL by considering the inhomogeneous settings. This is the first result that strongly justifies the effectiveness of dense type regularizations in MKL. Acknowledgement This work was partially supported by MEXT Kakenhi 22700289 and the Aihara Project, the FIRST program from JSPS, initiated by CSTP. References [1] J. Aflalo, A. Ben-Tal, C. Bhattacharyya, J. S. Nath, and S. Raman. Variable sparsity kernel learning. Journal of Machine Learning Research, 12:565–592, 2011. [2] A. Argyriou, R. Hauser, C. A. Micchelli, and M. Pontil. A DC-programming algorithm for kernel selection. In the 23st ICML, pages 41–48, 2006. [3] F. R. Bach. Consistency of the group lasso and multiple kernel learning. Journal of Machine Learning Research, 9:1179–1225, 2008. [4] F. R. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Advances in Neural Information Processing Systems 21, pages 105–112, 2009. [5] F. R. Bach, G. Lanckriet, and M. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In the 21st ICML, pages 41–48, 2004. [6] P. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. The Annals of Statistics, 33:1487–1537, 2005. ‡In our assumption sm should be greater than 0. However we formally put sm = 0 (m > 1) for simplicity of discussion. 
For rigorous discussion, one might consider arbitrary small sm ≪s. 8 [7] C. Bennett and R. Sharpley. Interpolation of Operators. Academic Press, Boston, 1988. [8] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In UAI 2009, 2009. [9] C. Cortes, M. Mohri, and A. Rostamizadeh. Learning non-linear combinations of kernels. In Advances in Neural Information Processing Systems 22, pages 396–404, 2009. [10] C. Cortes, M. Mohri, and A. Rostamizadeh. Generalization bounds for learning kernels. In the 27th ICML, pages 247–254, 2010. [11] D. E. Edmunds and H. Triebel. Function Spaces, Entropy Numbers, Differential Operators. Cambridge University Press, Cambridge, 1996. [12] G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33:82–95, 1971. [13] M. Kloft and G. Blanchard. The local rademacher complexity of ℓp-norm multiple kernel learning, 2011. arXiv:1103.0790. [14] M. Kloft, U. Brefeld, S. Sonnenburg, P. Laskov, K.-R. M¨uller, and A. Zien. Efficient and accurate ℓp-norm multiple kernel learning. In Advances in Neural Information Processing Systems 22, pages 997–1005, 2009. [15] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. lp-norm multiple kernel learning. Journal of Machine Learning Research, 12:953–997, 2011. [16] M. Kloft, U. R¨uckert, and P. L. Bartlett. A unifying view of multiple kernel learning. In ECML/PKDD, 2010. [17] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34:2593–2656, 2006. [18] V. Koltchinskii and M. Yuan. Sparse recovery in large ensembles of kernel machines. In COLT, pages 229–238, 2008. [19] V. Koltchinskii and M. Yuan. Sparsity in multiple kernel learning. The Annals of Statistics, 38(6):3660– 3695, 2010. [20] G. Lanckriet, N. Cristianini, L. E. Ghaoui, P. Bartlett, and M. Jordan. Learning the kernel matrix with semi-definite programming. 
Journal of Machine Learning Research, 5:27–72, 2004. [21] L. Meier, S. van de Geer, and P. B¨uhlmann. High-dimensional additive modeling. The Annals of Statistics, 37(6B):3779–3821, 2009. [22] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099–1125, 2005. [23] C. S. Ong, A. J. Smola, and R. C. Williamson. Learning the kernel with hyperkernels. Journal of Machine Learning Research, 6:1043–1071, 2005. [24] G. Raskutti, M. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. Technical report, 2010. arXiv:1008.3654. [25] B. Sch¨olkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002. [26] J. Shawe-Taylor. Kernel learning for novelty detection. In NIPS 2008 Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, Whistler, 2008. [27] N. Srebro and S. Ben-David. Learning bounds for support vector machines with learned kernels. In COLT, pages 169–183, 2006. [28] I. Steinwart. Support Vector Machines. Springer, 2008. [29] I. Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In COLT, 2009. [30] T. Suzuki and R. Tomioka. Spicymkl: A fast algorithm for multiple kernel learning with thousands of kernels. Machine Learning, 85(1):77–108, 2011. [31] R. Tomioka and T. Suzuki. Sparsity-accuracy trade-off in MKL. In NIPS 2009 Workshop: Understanding Multiple Kernel Learning Methods, Whistler, 2009. [32] S. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000. [33] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, New York, 1996. [34] M. Varma and B. R. Babu. More generality in efficient multiple kernel learning. In the 26th ICML, pages 1065–1072, 2009. [35] Y. Ying and C. Campbell. Generalization bounds for learning the kernel. In COLT, 2009. [36] M. Yuan and Y. Lin. 
Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49–67, 2006.
Speedy Q-Learning Mohammad Gheshlaghi Azar Radboud University Nijmegen Geert Grooteplein 21N, 6525 EZ Nijmegen, Netherlands m.azar@science.ru.nl Remi Munos INRIA Lille, SequeL Project 40 avenue Halley 59650 Villeneuve d’Ascq, France r.munos@inria.fr Mohammad Ghavamzadeh INRIA Lille, SequeL Project 40 avenue Halley 59650 Villeneuve d’Ascq, France m.ghavamzadeh@inria.fr Hilbert J. Kappen Radboud University Nijmegen Geert Grooteplein 21N, 6525 EZ Nijmegen, Netherlands b.kappen@science.ru.nl Abstract We introduce a new convergent variant of Q-learning, called speedy Q-learning (SQL), to address the problem of slow convergence in the standard form of the Q-learning algorithm. We prove a PAC bound on the performance of SQL, which shows that for an MDP with n state-action pairs and discount factor γ, only T = O(log(n)/(ǫ2(1 −γ)4)) steps are required for the SQL algorithm to converge to an ǫ-optimal action-value function with high probability. This bound has a better dependency on 1/ǫ and 1/(1−γ), and thus is tighter than the best available result for Q-learning. Our bound is also superior to the existing results for both model-free and model-based instances of batch Q-value iteration, which are considered to be more efficient than incremental methods like Q-learning. 1 Introduction Q-learning [20] is a well-known model-free reinforcement learning (RL) algorithm that finds an estimate of the optimal action-value function. Q-learning is a combination of dynamic programming, more specifically the value iteration algorithm, and stochastic approximation. In finite state-action problems, it has been shown that Q-learning converges to the optimal action-value function [5,10]. However, it suffers from slow convergence, especially when the discount factor γ is close to one [8, 17]. 
The main reason for the slow convergence of Q-learning is the combination of the sample-based stochastic approximation (which makes use of a decaying learning rate) and the fact that the Bellman operator propagates information throughout the whole space (especially when γ is close to 1). In this paper, we focus on RL problems that are formulated as finite state-action discounted infinite horizon Markov decision processes (MDPs), and propose an algorithm, called speedy Q-learning (SQL), that addresses the problem of slow convergence of Q-learning. At each time step, SQL uses two successive estimates of the action-value function, which makes its space complexity twice that of standard Q-learning. However, this allows SQL to use a more aggressive learning rate for one of the terms in its update rule and eventually achieve a faster convergence rate than standard Q-learning (see Section 3.1 for a more detailed discussion). We prove a PAC bound on the performance of SQL, which shows that only T = O(log(n)/((1 −γ)4ǫ2)) samples are required for SQL in order to guarantee an ǫ-optimal action-value function with high probability. This is superior to the best result for standard Q-learning by [8], both in terms of 1/ǫ and 1/(1 −γ). The rate for SQL is even better than that for the Phased Q-learning algorithm, a model-free batch Q-value iteration algorithm proposed and analyzed by [12]. In addition, SQL’s rate is slightly better than the rate of the model-based batch Q-value iteration algorithm in [12] and has better computational and memory requirements (computational and space complexity); see Section 3.3.2 for more detailed comparisons. Similar to Q-learning, SQL may be implemented in synchronous and asynchronous fashions. For the sake of simplicity in the analysis, we only report and analyze its synchronous version in this paper. 
However, it can easily be implemented in an asynchronous fashion and our theoretical results can also be extended to this setting by following the same path as [8]. The idea of using previous estimates of the action-values has already been used to improve the performance of Q-learning. A popular algorithm of this kind is Q(λ) [14, 20], which incorporates the concept of eligibility traces in Q-learning, and has been empirically shown to have a better performance than Q-learning, i.e., Q(0), for suitable values of λ. Another recent work in this direction is Double Q-learning [19], which uses two estimators for the action-value function to alleviate the over-estimation of action-values in Q-learning. This over-estimation is caused by a positive bias introduced by using the maximum action value as an approximation for the expected action value [19]. The rest of the paper is organized as follows. After introducing the notations used in the paper in Section 2, we present our Speedy Q-learning algorithm in Section 3. We first describe the algorithm in Section 3.1, then state our main theoretical result, i.e., a high-probability bound on the performance of SQL, in Section 3.2, and finally compare our bound with the previous results on Q-learning in Section 3.3. Section 4 contains the detailed proof of the performance bound of the SQL algorithm. Finally, we conclude the paper and discuss some future directions in Section 5. 2 Preliminaries In this section, we introduce some concepts and definitions from the theory of Markov decision processes (MDPs) that are used throughout the paper. We start by the definition of supremum norm. For a real-valued function g : Y 7→R, where Y is a finite set, the supremum norm of g is defined as ∥g∥≜maxy∈Y |g(y)|. We consider the standard reinforcement learning (RL) framework [5,16] in which a learning agent interacts with a stochastic environment and this interaction is modeled as a discrete-time discounted MDP. 
A discounted MDP is a quintuple (X, A, P, R, γ), where X and A are the set of states and actions, P is the state transition distribution, R is the reward function, and γ ∈(0, 1) is a discount factor. We denote by P(·|x, a) and r(x, a) the probability distribution over the next state and the immediate reward of taking action a at state x, respectively. To keep the representation succinct, we use Z for the joint state-action space X × A. Assumption 1 (MDP Regularity). We assume Z and, subsequently, X and A are finite sets with cardinalities n, |X| and |A|, respectively. We also assume that the immediate rewards r(x, a) are uniformly bounded by Rmax and define the horizon of the MDP β ≜1/(1−γ) and Vmax ≜βRmax. A stationary Markov policy π(·|x) is the distribution over the control actions given the current state x. It is deterministic if this distribution concentrates over a single action. The value and the action-value functions of a policy π, denoted respectively by V π : X 7→R and Qπ : Z 7→R, are defined as the expected sum of discounted rewards that are encountered when the policy π is executed. Given a MDP, the goal is to find a policy that attains the best possible values, V ∗(x) ≜supπ V π(x), ∀x ∈X. Function V ∗is called the optimal value function. Similarly the optimal action-value function is defined as Q∗(x, a) = supπ Qπ(x, a), ∀(x, a) ∈Z. The optimal action-value function Q∗is the unique fixed-point of the Bellman optimality operator T defined as (TQ)(x, a) ≜r(x, a) + γ P y∈X P(y|x, a) maxb∈A Q(y, b), ∀(x, a) ∈Z. It is important to note that T is a contraction with factor γ, i.e., for any pair of action-value functions Q and Q′, we have ∥TQ −TQ′∥≤γ ∥Q −Q′∥[4, Chap. 1]. Finally for the sake of readability, we define the max operator M over action-value functions as (MQ)(x) = maxa∈A Q(x, a), ∀x ∈X. 
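The γ-contraction of the Bellman optimality operator T can be checked directly on a small random MDP (the construction below is our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# Random MDP: P[x, a] is a distribution over next states, r a bounded reward table.
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=-1, keepdims=True)
r = rng.random((n_states, n_actions))

def T(Q):
    """Bellman optimality operator: (TQ)(x, a) = r(x, a) + gamma * E_y[max_b Q(y, b)]."""
    return r + gamma * P @ Q.max(axis=1)

Q1 = rng.random((n_states, n_actions))
Q2 = rng.random((n_states, n_actions))
lhs = np.abs(T(Q1) - T(Q2)).max()   # ||TQ1 - TQ2||
rhs = gamma * np.abs(Q1 - Q2).max() # gamma * ||Q1 - Q2||
print(lhs, rhs)  # lhs <= rhs: T is a gamma-contraction in the supremum norm
```

The contraction property is what guarantees Q∗ is the unique fixed point of T and underlies the convergence analysis in Section 4.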
3 Speedy Q-Learning In this section, we introduce our RL algorithm, called speedy Q-learning (SQL), derive a performance bound for this algorithm, and compare this bound with similar results on standard Q-learning. The derived performance bound shows that SQL has a rate of convergence of order O(√(1/T)), which is better than all the existing results for Q-learning. 3.1 Speedy Q-Learning Algorithm The pseudo-code of the SQL algorithm is shown in Algorithm 1. As can be seen, this is the synchronous version of the algorithm, which is the one analyzed in the paper. Similar to standard Q-learning, SQL may be implemented either synchronously or asynchronously. In the asynchronous version, at each time step, the action-value of the observed state-action pair is updated, while the rest of the state-action pairs remain unchanged. For the convergence of this instance of the algorithm, it is required that all the states and actions be visited infinitely many times, which makes the analysis slightly more complicated. On the other hand, given a generative model, the algorithm may also be formulated in a synchronous fashion, in which we first generate a next state y ∼ P(·|x, a) for each state-action pair (x, a), and then update the action-values of all the state-action pairs using these samples. We chose to include only the synchronous version of SQL in the paper for the sake of simplicity in the analysis. However, the algorithm can be implemented in an asynchronous fashion (similar to the more familiar instance of Q-learning) and our theoretical results can also be extended to the asynchronous case under some mild assumptions.1

Algorithm 1: Synchronous Speedy Q-Learning (SQL)
Input: initial action-value function Q0, discount factor γ, and number of iterations T
Q−1 := Q0;                                        // initialization
for k := 0, 1, 2, . . . , T − 1 do                // main loop
    αk := 1/(k + 1);
    for each (x, a) ∈ Z do
        generate the next-state sample yk ∼ P(·|x, a);
        TkQk−1(x, a) := r(x, a) + γMQk−1(yk);
        TkQk(x, a) := r(x, a) + γMQk(yk);         // empirical Bellman operator
        Qk+1(x, a) := Qk(x, a) + αk(TkQk−1(x, a) − Qk(x, a)) + (1 − αk)(TkQk(x, a) − TkQk−1(x, a));   // SQL update rule
    end
end
return QT

As can be seen from Algorithm 1, at each time step k, SQL keeps track of the action-value functions of the two time steps k and k − 1, and its main update rule has the following form:

Qk+1(x, a) = Qk(x, a) + αk(TkQk−1(x, a) − Qk(x, a)) + (1 − αk)(TkQk(x, a) − TkQk−1(x, a)),   (1)

where TkQ(x, a) = r(x, a) + γMQ(yk) is the empirical Bellman optimality operator for the sampled next state yk ∼ P(·|x, a). At each time step k and for each state-action pair (x, a), SQL works as follows: (i) it generates a next state yk by drawing a sample from P(·|x, a), (ii) it computes two sample estimates TkQk−1(x, a) and TkQk(x, a) of the Bellman optimality operator (for the state-action pair (x, a), using the next state yk) applied to the estimates Qk−1 and Qk of the action-value function at the previous and current time steps, and finally (iii) it updates the action-value of (x, a) to Qk+1(x, a) using the update rule of Eq. 1. Moreover, we let the learning rate αk decay with time as αk = 1/(k + 1).2 The update rule of Eq. 1 may be rewritten in the following more compact form:

Qk+1(x, a) = (1 − αk)Qk(x, a) + αkDk[Qk, Qk−1](x, a),   (2)

where Dk[Qk, Qk−1](x, a) ≜ kTkQk(x, a) − (k − 1)TkQk−1(x, a). This compact form will come in handy in the analysis of the algorithm in Section 4. Let us consider the update rule of Q-learning,

Qk+1(x, a) = Qk(x, a) + αk(TkQk(x, a) − Qk(x, a)),

1See [2] for the convergence analysis of the asynchronous variant of SQL. 2Note that other (polynomial) learning steps can also be used with speedy Q-learning. However, one can show that the rate of convergence of SQL is optimized for αk = 1/(k + 1).
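The synchronous update above can be sketched as follows on a small randomly generated MDP. This is a hedged illustration: the MDP instance, its sizes, and the iteration count are arbitrary choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_a, gamma = 4, 2, 0.8          # small hypothetical MDP; V_max = R_max / (1 - gamma) = 5

P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # transition kernel P[x, a]
r = rng.uniform(0.0, 1.0, size=(n_s, n_a))         # rewards in [0, 1]

def empirical_T(Q, y):
    """Empirical Bellman operator T_k using one sampled next state y[x, a] per pair."""
    return r + gamma * Q.max(axis=1)[y]

def speedy_q_learning(T_steps):
    Q_prev = np.zeros((n_s, n_a))
    Q = np.zeros((n_s, n_a))
    for k in range(T_steps):
        alpha = 1.0 / (k + 1)
        # synchronous step: one next-state sample y ~ P(.|x, a) for every pair (x, a)
        y = np.array([[rng.choice(n_s, p=P[x, a]) for a in range(n_a)]
                      for x in range(n_s)])
        tq_prev, tq = empirical_T(Q_prev, y), empirical_T(Q, y)
        # SQL update rule, Eq. (1): small rate on the first term, rate 1 - alpha on the second
        Q_next = Q + alpha * (tq_prev - Q) + (1.0 - alpha) * (tq - tq_prev)
        Q_prev, Q = Q, Q_next
    return Q

# Reference Q* via exact value iteration, then compare.
Q_star = np.zeros((n_s, n_a))
for _ in range(2000):
    Q_star = r + gamma * P @ Q_star.max(axis=1)

Q_sql = speedy_q_learning(10000)
err = np.abs(Q_sql - Q_star).max()   # shrinks roughly like 1/sqrt(T), per Theorem 1
```

Note how the code mirrors Eq. 1 directly: the conservative rate αk multiplies TkQk−1 − Qk, while the aggressive rate 1 − αk multiplies TkQk − TkQk−1.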
This is in contrast to the standard Q-learning algorithm, for which the rate of convergence is optimized by a polynomial learning step [8]. which may be rewritten as

Qk+1(x, a) = Qk(x, a) + αk(TkQk−1(x, a) − Qk(x, a)) + αk(TkQk(x, a) − TkQk−1(x, a)).   (3)

Comparing the Q-learning update rule of Eq. 3 with the one for SQL in Eq. 1, we first notice that the same two terms, TkQk−1 − Qk and TkQk − TkQk−1, appear on the RHS of both update rules. However, while Q-learning uses the same conservative learning rate αk for both terms, SQL uses αk for the first term and the larger learning rate 1 − αk = k/(k + 1) for the second. Since the term TkQk − TkQk−1 goes to zero as Qk approaches its optimal value Q∗, it is not necessary for its learning rate to approach zero; using the rate αk, which vanishes with k, is therefore too conservative for this term. This may explain why SQL, which uses the more aggressive learning rate 1 − αk for this term, converges faster than Q-learning.

3.2 Main Theoretical Result

The main theoretical result of the paper is a high-probability bound on the performance of the SQL algorithm.

Theorem 1. Let Assumption 1 hold and T be a positive integer. Then, at iteration T of SQL, with probability at least 1 − δ, we have

∥Q∗ − QT∥ ≤ 2β²Rmax ( γ/T + √( (2 log(2n/δ)) / T ) ).

We report the proof of Theorem 1 in Section 4. This result, combined with the Borel-Cantelli lemma [9], guarantees that QT converges almost surely to Q∗ at the rate √(1/T). Further, the following result, which quantifies the number of steps T required to reach an error ε > 0 in estimating the optimal action-value function, w.p. 1 − δ, is an immediate consequence of Theorem 1.

Corollary 1 (Finite-time PAC ("probably approximately correct") performance bound for SQL). Under Assumption 1, for any ε > 0, after

T = 11.66 β⁴R²max log(2n/δ) / ε²

steps of SQL, the uniform approximation error ∥Q∗ − QT∥ ≤ ε, with probability at least 1 − δ.
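For concreteness, Corollary 1 can be turned into a small helper that computes the number of synchronous iterations T for a target accuracy ε. This is a sketch: the constant 11.66 is taken directly from the corollary, while the example parameter values below are arbitrary.

```python
import math

def sql_pac_steps(eps, delta, gamma, r_max, n):
    """Iteration count prescribed by Corollary 1:
    T = 11.66 * beta^4 * R_max^2 * log(2n / delta) / eps^2, with beta = 1 / (1 - gamma)."""
    beta = 1.0 / (1.0 - gamma)
    return math.ceil(11.66 * beta ** 4 * r_max ** 2 * math.log(2 * n / delta) / eps ** 2)

# e.g. a problem with n = |X||A| = 100 state-action pairs (illustrative numbers)
steps = sql_pac_steps(eps=0.1, delta=0.05, gamma=0.9, r_max=1.0, n=100)
```

As expected from the 1/ε² dependence, halving the target accuracy roughly quadruples the required number of iterations, and the discount factor enters through β⁴, which dominates as γ approaches 1.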
3.3 Relation to Existing Results In this section, we first compare our results for SQL with the existing results on the convergence of standard Q-learning. This comparison indicates that SQL accelerates the convergence of Q-learning, especially for γ close to 1 and small ε. We then compare SQL with batch Q-value iteration (QI) in terms of sample and computational complexities, i.e., the number of samples and the computational cost required to achieve an ε-optimal solution w.p. 1 − δ, as well as space complexity, i.e., the memory required at each step of the algorithm. 3.3.1 A Comparison with the Convergence Rate of Standard Q-Learning There are not many studies in the literature concerning the convergence rate of incremental model-free RL algorithms such as Q-learning. [17] provided the asymptotic convergence rate of Q-learning under the assumption that all the states have the same next-state distribution. This result shows that the asymptotic convergence rate of Q-learning depends exponentially on 1 − γ, i.e., the rate of convergence is of order Õ(1/t^{1−γ}) for γ ≥ 1/2. The finite-time behavior of Q-learning has been thoroughly investigated in [8] for different time scales. Their main result indicates that, by using the polynomial learning step αk = 1/(k + 1)^ω with 0.5 < ω < 1, Q-learning achieves ε-optimal performance w.p. at least 1 − δ after

T = O( [ β⁴R²max log( nβRmax/(δε) ) / ε² ]^{1/ω} + [ β log( βRmax/ε ) ]^{1/(1−ω)} )   (4)

steps. When γ ≈ 1, one can argue that β = 1/(1 − γ) becomes the dominant term in the bound of Eq. 4, and thus the bound optimized w.r.t. ω is obtained for ω = 4/5 and is of order Õ(β⁵/ε^{2.5}). On the other hand, SQL is guaranteed to achieve the same precision in only O(β⁴/ε²) steps. The difference between these two bounds is significant for large values of β, i.e., γ close to 1. 3.3.2 SQL vs. Q-Value Iteration Finite-sample bounds for both model-based and model-free (phased Q-learning) QI have been derived in [12] and [7].
These algorithms can be considered as batch versions of Q-learning. They show that, to obtain ε-optimal action-value functions with high probability, we need O( (nβ⁵/ε²) log(1/ε) (log(nβ) + log log(1/ε)) ) and O( (nβ⁴/ε²) (log(nβ) + log log(1/ε)) ) samples in model-free and model-based QI, respectively. A comparison between their results and the main result of this paper suggests that the sample complexity of SQL, which is of order O( (nβ⁴/ε²) log n ),3 is better than that of model-free QI in terms of β and log(1/ε). Although the sample complexity of SQL is only slightly tighter than that of model-based QI, SQL has a significantly better computational and space complexity than model-based QI: SQL needs only 2n memory space, while the space complexity of model-based QI is either Õ(nβ⁴/ε²) or n(|X| + 1), depending on whether the learned state transition matrix is sparse or not [12]. Also, SQL improves the computational complexity by a factor of Õ(β) compared to both model-free and model-based QI.4 Table 1 summarizes the comparisons between SQL and the other RL methods discussed in this section.

Table 1: Comparison between SQL, Q-learning, model-based and model-free Q-value iteration in terms of sample complexity (SC), computational complexity (CC), and space complexity (SPC).

Method   SQL           Q-learning (optimized)   Model-based QI   Model-free QI
SC       Õ(nβ⁴/ε²)     Õ(nβ⁵/ε^{2.5})           Õ(nβ⁴/ε²)        Õ(nβ⁵/ε²)
CC       Õ(nβ⁴/ε²)     Õ(nβ⁵/ε^{2.5})           Õ(nβ⁵/ε²)        Õ(nβ⁵/ε²)
SPC      Θ(n)          Θ(n)                     Õ(nβ⁴/ε²)        Θ(n)

4 Analysis

In this section, we give some intuition about the convergence of SQL and provide the full proof of the finite-time analysis reported in Theorem 1. We start by introducing some notation. Let Fk be the filtration generated by the sequence of all random samples {y1, y2, . . . , yk} drawn from the distribution P(·|x, a), for all state-action pairs (x, a), up to round k.
We define the operator D[Qk, Qk−1] as the expected value of the empirical operator Dk conditioned on Fk−1:

D[Qk, Qk−1](x, a) ≜ E( Dk[Qk, Qk−1](x, a) | Fk−1 ) = kTQk(x, a) − (k − 1)TQk−1(x, a).

The update rule of SQL can thus be written as

Qk+1(x, a) = (1 − αk)Qk(x, a) + αk( D[Qk, Qk−1](x, a) − εk(x, a) ),   (5)

where the estimation error εk is defined as the difference between the operator D[Qk, Qk−1] and its sample estimate Dk[Qk, Qk−1] for all (x, a) ∈ Z:

εk(x, a) ≜ D[Qk, Qk−1](x, a) − Dk[Qk, Qk−1](x, a).

We have the property that E[εk(x, a)|Fk−1] = 0, which means that for all (x, a) ∈ Z the sequence of estimation errors {ε1(x, a), ε2(x, a), . . . , εk(x, a)} is a martingale difference sequence w.r.t. the filtration Fk. Let us define the martingale Ek(x, a) to be the sum of the estimation errors:

Ek(x, a) ≜ Σ_{j=0}^{k} εj(x, a),   ∀(x, a) ∈ Z.   (6)

The proof of Theorem 1 follows these steps: (i) Lemma 1 shows the stability of the algorithm (i.e., the sequence of Qk stays bounded). (ii) Lemma 2 states the key property that the SQL iterate Qk+1 is very close to the Bellman operator T applied to the previous iterate Qk, plus an estimation error term of order Ek/k. (iii) By induction, Lemma 3 provides a performance bound on ∥Q∗ − Qk∥ in terms of a discounted sum of the cumulative estimation errors {Ej}j=0:k−1.

3Note that at each round of SQL, n new samples are generated. Combined with the result of Corollary 1, this yields a sample complexity of order O( (nβ⁴/ε²) log(n/δ) ). 4SQL has sample and computational complexities of the same order since it performs only one Q-value update per sample, whereas, in the case of model-based QI, the algorithm needs to iterate the action-value function of all state-action pairs at least Õ(β) times using the Bellman operator, which leads to a computational complexity bound of order Õ(nβ⁵/ε²), given that only Õ(nβ⁴/ε²) entries of the estimated transition matrix are non-zero [12].
Finally (iv) we use a maximal Azuma’s inequality (see Lemma 4) to bound Ek and deduce the finite time performance for SQL. For simplicity of the notations, we remove the dependence on (x, a) (e.g., writing Q for Q(x, a), Ek for Ek(x, a)) when there is no possible confusion. Lemma 1 (Stability of SQL). Let Assumption 1 hold and assume that the initial action-value function Q0 = Q−1 is uniformly bounded by Vmax, then we have, for all k ≥0, ∥Qk∥≤Vmax, ∥ǫk∥≤2Vmax, and ∥Dk[Qk, Qk−1]∥≤Vmax. Proof. We first prove that ∥Dk[Qk, Qk−1]∥≤Vmax by induction. For k = 0 we have: ∥D0[Q0, Q−1]∥≤∥r∥+ γ∥MQ−1∥≤Rmax + γVmax = Vmax. Now for any k ≥0, let us assume that the bound ∥Dk[Qk, Qk−1]∥≤Vmax holds. Thus ∥Dk+1[Qk+1, Qk]∥≤∥r∥+ γ ∥(k + 1)MQk+1 −kMQk∥ = ∥r∥+ γ
∥(k + 1) M( (k/(k + 1)) Qk + (1/(k + 1)) Dk[Qk, Qk−1] ) − k MQk∥
≤∥r∥+ γ ∥M(kQk + Dk[Qk, Qk−1] −kQk)∥ ≤∥r∥+ γ ∥Dk[Qk, Qk−1]∥≤Rmax + γVmax = Vmax, and by induction, we deduce that for all k ≥0, ∥Dk[Qk, Qk−1]∥≤Vmax. Now the bound on ǫk follows from ∥ǫk∥= ∥E(Dk[Qk, Qk−1]|Fk−1) −Dk[Qk, Qk−1]∥≤2Vmax, and the bound ∥Qk∥≤Vmax is deduced by noticing that Qk = 1/k Pk−1 j=0 Dj[Qj, Qj−1]. The next lemma shows that Qk is close to TQk−1, up to a O(1/k) term plus the average cumulative estimation error 1 kEk−1. Lemma 2. Under Assumption 1, for any k ≥1: Qk = 1 k (TQ0 + (k −1)TQk−1 −Ek−1) . (7) Proof. We prove this result by induction. The result holds for k = 1, where (7) reduces to (5). We now show that if the property (7) holds for k then it also holds for k + 1. Assume that (7) holds for k. Then, from (5) we have: Qk+1 = k k + 1Qk + 1 k + 1(kTQk −(k −1)TQk−1 −ǫk) = k k + 1 1 k (TQ0 + (k −1)TQk−1 −Ek−1) + 1 k + 1(kTQk −(k −1)TQk−1 −ǫk) = 1 k + 1(TQ0 + kTQk −Ek−1 −ǫk) = 1 k + 1(TQ0 + kTQk −Ek). Thus (7) holds for k + 1, and is thus true for all k ≥1. 6 Now we bound the difference between Q∗and Qk in terms of the discounted sum of cumulative estimation errors {E0, E1, . . . , Ek−1}. Lemma 3 (Error Propagation of SQL). Let Assumption 1 hold and assume that the initial actionvalue function Q0 = Q−1 is uniformly bounded by Vmax, then for all k ≥1, we have ∥Q∗−Qk∥≤2γβVmax k + 1 k k X j=1 γk−j ∥Ej−1∥. (8) Proof. Again we prove this lemma by induction. The result holds for k = 1 as: ∥Q∗−Q1∥= ∥TQ∗−T0Q0∥= ||TQ∗−TQ0 + ǫ0|| ≤||TQ∗−TQ0|| + ||ǫ0|| ≤2γVmax + ||ǫ0|| ≤2γβVmax + ∥E0∥ We now show that if the bound holds for k, then it also holds for k + 1. Thus, assume that (8) holds for k. By using Lemma 2:
∥Q∗ − Qk+1∥ = ∥Q∗ − (1/(k + 1))(TQ0 + kTQk − Ek)∥ = ∥(1/(k + 1))(TQ∗ − TQ0) + (k/(k + 1))(TQ∗ − TQk) + (1/(k + 1))Ek∥
≤ (γ/(k + 1)) ∥Q∗ − Q0∥ + (kγ/(k + 1)) ∥Q∗ − Qk∥ + (1/(k + 1)) ∥Ek∥
≤ (2γ/(k + 1)) Vmax + (kγ/(k + 1)) ( (2γβVmax)/k + (1/k) Σ_{j=1}^{k} γ^{k−j} ∥Ej−1∥ ) + (1/(k + 1)) ∥Ek∥
= (2γβVmax)/(k + 1) + (1/(k + 1)) Σ_{j=1}^{k+1} γ^{k+1−j} ∥Ej−1∥.

Thus (8) holds for k + 1, and hence for all k ≥ 1 by induction.

Now, based on Lemmas 3 and 1, we prove the main theorem of this paper.

Proof of Theorem 1. We begin by recalling the result of Lemma 3 at round T:

∥Q∗ − QT∥ ≤ (2γβVmax)/T + (1/T) Σ_{k=1}^{T} γ^{T−k} ∥Ek−1∥.

Note that the difference between this bound and the result of Theorem 1 lies only in the second term, so we only need to show that the following inequality holds with probability at least 1 − δ:

(1/T) Σ_{k=1}^{T} γ^{T−k} ∥Ek−1∥ ≤ 2βVmax √( (2 log(2n/δ)) / T ).   (9)

We first notice that

(1/T) Σ_{k=1}^{T} γ^{T−k} ∥Ek−1∥ ≤ (1/T) ( Σ_{k=1}^{T} γ^{T−k} ) max_{1≤k≤T} ∥Ek−1∥ ≤ (β/T) max_{1≤k≤T} ∥Ek−1∥.   (10)

Therefore, in order to prove (9) it suffices to bound max_{1≤k≤T} ∥Ek−1∥ = max_{(x,a)∈Z} max_{1≤k≤T} |Ek−1(x, a)| with high probability. We start by providing a high-probability bound on max_{1≤k≤T} |Ek−1(x, a)| for a given (x, a). First notice that

P( max_{1≤k≤T} |Ek−1(x, a)| > ε )
= P( max( max_{1≤k≤T} Ek−1(x, a), max_{1≤k≤T} (−Ek−1(x, a)) ) > ε )
= P( { max_{1≤k≤T} Ek−1(x, a) > ε } ∪ { max_{1≤k≤T} (−Ek−1(x, a)) > ε } )
≤ P( max_{1≤k≤T} Ek−1(x, a) > ε ) + P( max_{1≤k≤T} (−Ek−1(x, a)) > ε ),   (11)

and each term is now bounded by a maximal Azuma inequality, recalled below (see, e.g., [6]).

Lemma 4 (Maximal Hoeffding-Azuma Inequality). Let V = {V1, V2, . . . , VT} be a martingale difference sequence w.r.t. a sequence of random variables {X1, X2, . . . , XT} (i.e., E(Vk+1|X1, . . . , Xk) = 0 for all 0 < k ≤ T) such that V is uniformly bounded by L > 0. If we define Sk = Σ_{i=1}^{k} Vi, then for any ε > 0 we have

P( max_{1≤k≤T} Sk > ε ) ≤ exp( −ε² / (2TL²) ).

As mentioned earlier, the sequence of random variables {ε0(x, a), ε1(x, a), . . . , εk(x, a)} is a martingale difference sequence w.r.t. the filtration Fk (generated by the random samples {y0, y1, . . . , yk}(x, a) for all (x, a)), i.e., E[εk(x, a)|Fk−1] = 0.
It follows from Lemma 4 that for any ε > 0 we have:

P( max_{1≤k≤T} Ek−1(x, a) > ε ) ≤ exp( −ε² / (8TV²max) ),
P( max_{1≤k≤T} (−Ek−1(x, a)) > ε ) ≤ exp( −ε² / (8TV²max) ).   (12)

By combining (12) with (11) we deduce that P( max_{1≤k≤T} |Ek−1(x, a)| > ε ) ≤ 2 exp( −ε² / (8TV²max) ), and by a union bound over the state-action space we deduce that

P( max_{1≤k≤T} ∥Ek−1∥ > ε ) ≤ 2n exp( −ε² / (8TV²max) ).   (13)

This bound can be rewritten as: for any δ > 0,

P( max_{1≤k≤T} ∥Ek−1∥ ≤ Vmax √( 8T log(2n/δ) ) ) ≥ 1 − δ,   (14)

which, using (10), proves (9) and Theorem 1.

5 Conclusions and Future Work

In this paper, we introduced a new Q-learning algorithm, called speedy Q-learning (SQL). We analyzed the finite-time behavior of this algorithm as well as its asymptotic convergence to the optimal action-value function. Our result is in the form of a high-probability bound on the performance loss of SQL, which shows that the algorithm converges to the optimal action-value function at a faster rate than standard Q-learning. Overall, SQL is a simple, efficient and theoretically well-founded reinforcement learning algorithm that improves on existing RL algorithms such as Q-learning and model-based value iteration. In this work, we are only interested in the estimation of the optimal action-value function and not in the problem of exploration. Therefore, we did not compare our result to the PAC-MDP methods [15, 18] and the upper-confidence-bound based algorithms [3, 11], in which the choice of the exploration policy impacts the behavior of the learning algorithms. However, we believe that it would be possible to improve on the state of the art in PAC-MDPs by combining the asynchronous version of SQL with a smart exploration strategy. This is mainly because the bound for SQL has been shown to be tighter than those of the RL algorithms used for estimating the value function in PAC-MDP methods, especially in the model-free case. We consider this a subject for future research.
Another possible direction for future work is to scale up SQL to large (possibly continuous) state and action spaces, where function approximation is needed. We believe that it would be possible to extend our current SQL analysis to the continuous case along the same path as in the fitted value iteration analyses of [13] and [1]. This would require extending the error propagation result of Lemma 3 to an ℓ2-norm analysis and combining it with standard regression bounds.

Acknowledgments The authors appreciate support from the PASCAL2 Network of Excellence Internal Visit Programme and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 231495. We also thank Peter Auer for helpful discussions and the anonymous reviewers for their valuable comments.

References
[1] A. Antos, R. Munos, and Cs. Szepesvári. Fitted Q-iteration in continuous action-space MDPs. In Proceedings of the 21st Annual Conference on Neural Information Processing Systems, 2007.
[2] M. Gheshlaghi Azar, R. Munos, M. Ghavamzadeh, and H. J. Kappen. Reinforcement learning with a near optimal rate of convergence. Technical Report inria-00636615, INRIA, 2011.
[3] P. L. Bartlett and A. Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 2009.
[4] D. P. Bertsekas. Dynamic Programming and Optimal Control, volume II. Athena Scientific, Belmont, Massachusetts, third edition, 2007.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996.
[6] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[7] E. Even-Dar, S. Mannor, and Y. Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In 15th Annual Conference on Computational Learning Theory, pages 255–270, 2002.
[8] E. Even-Dar and Y. Mansour.
Learning rates for Q-learning. Journal of Machine Learning Research, 5:1–25, 2003.
[9] W. Feller. An Introduction to Probability Theory and Its Applications, volume 1. Wiley, 1968.
[10] T. Jaakkola, M. I. Jordan, and S. Singh. On the convergence of stochastic iterative dynamic programming. Neural Computation, 6(6):1185–1201, 1994.
[11] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
[12] M. Kearns and S. Singh. Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems 12, pages 996–1002. MIT Press, 1999.
[13] R. Munos and Cs. Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9:815–857, 2008.
[14] J. Peng and R. J. Williams. Incremental multi-step Q-learning. Machine Learning, 22(1-3):283–290, 1996.
[15] A. L. Strehl, L. Li, and M. L. Littman. Reinforcement learning in finite MDPs: PAC analysis. Journal of Machine Learning Research, 10:2413–2444, 2009.
[16] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, Massachusetts, 1998.
[17] Cs. Szepesvári. The asymptotic convergence-rate of Q-learning. In Advances in Neural Information Processing Systems 10, Denver, Colorado, USA, 1997.
[18] I. Szita and Cs. Szepesvári. Model-based reinforcement learning with nearly tight exploration complexity bounds. In Proceedings of the 27th International Conference on Machine Learning, pages 1031–1038. Omnipress, 2010.
[19] H. van Hasselt. Double Q-learning. In Advances in Neural Information Processing Systems 23, pages 2613–2621, 2010.
[20] C. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, England, 1989.
High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity Po-Ling Loh Department of Statistics University of California, Berkeley Berkeley, CA 94720 ploh@berkeley.edu Martin J. Wainwright Departments of Statistics and EECS University of California, Berkeley Berkeley, CA 94720 wainwrig@stat.berkeley.edu Abstract Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data, possibly involving dependencies. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing, and/or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently non-convex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing non-convex programs, we are able to both analyze the statistical error associated with any global optimum, and prove that a simple projected gradient descent algorithm will converge in polynomial time to a small neighborhood of the set of global minimizers. On the statistical side, we provide non-asymptotic bounds that hold with high probability for the cases of noisy, missing, and/or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm will converge at geometric rates to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing agreement with the predicted scalings. 1 Introduction In standard formulations of prediction problems, it is assumed that the covariates are fully-observed and sampled independently from some underlying distribution. 
However, these assumptions are not realistic for many applications, in which covariates may be observed only partially, observed with corruption, or exhibit dependencies. Consider the problem of modeling the voting behavior of politicians: in this setting, votes may be missing due to abstentions, and temporally dependent due to collusion or "tit-for-tat" behavior. Similarly, surveys often suffer from the missing data problem, since users fail to respond to all questions. Sensor network data also tends to be both noisy due to measurement error, and partially missing due to failures or drop-outs of sensors. There are a variety of methods for dealing with noisy and/or missing data, including various heuristic methods, as well as likelihood-based methods involving the expectation-maximization (EM) algorithm (e.g., see the book [1] and references therein). A challenge in this context is the possible non-convexity of associated optimization problems. For instance, in applications of EM, problems in which the negative likelihood is a convex function often become non-convex with missing or noisy data. Consequently, although the EM algorithm will converge to a local minimum, it is difficult to guarantee that the local optimum is close to a global minimum. In this paper, we study these issues in the context of high-dimensional sparse linear regression, in the case when the predictors or covariates are noisy, missing, and/or dependent. Our main contribution is to develop and study some simple methods for handling these issues, and to prove theoretical results about both the associated statistical error and the optimization error. Like EM-based approaches, our estimators are based on solving optimization problems that may be non-convex; however, despite this non-convexity, we are still able to prove that a simple form of projected gradient descent will produce an output that is "sufficiently close"—meaning as small as the statistical error—to any global optimum.
As a second result, we bound the size of this statistical error, showing that it has the same scaling as the minimax rates for the classical cases of perfectly observed and independently sampled covariates. In this way, we obtain estimators for noisy, missing, and/or dependent data with guarantees similar to the usual fully-observed and independent case. The resulting estimators allow us to solve the problem of high-dimensional Gaussian graphical model selection with missing data. There is a large body of work on the problem of corrupted covariates or errors-in-variables for regression problems (see the papers and books [2, 3, 4, 5] and references therein). Much of the earlier theoretical work is classical in nature, where the sample size n diverges with the dimension p held fixed. Most relevant to this paper is more recent work that has examined issues of corrupted and/or missing data in the context of highdimensional sparse linear models, allowing for n ≪p. St¨adler and B¨uhlmann [6] developed an EM-based method for sparse inverse covariance matrix estimation in the missing data regime, and used this result to derive an algorithm for sparse linear regression with missing data. As mentioned above, however, it is difficult to guarantee that EM will converge to a point close to a global optimum of the likelihood, in contrast to the methods studied here. Rosenbaum and Tsybakov [7] studied the sparse linear model when the covariates are corrupted by noise, and proposed a modified form of the Dantzig selector, involving a convex program. This convexity produces a computationally attractive method, but the statistical error bounds that they establish scale proportionally with the size of the additive perturbation, hence are often weaker than the bounds that can be proved using our methods. The remainder of this paper is organized as follows. We begin in Section 2 with background and a precise description of the problem. 
We then introduce the class of estimators we will consider and the form of the projected gradient descent algorithm. Section 3 is devoted to a description of our main results, including a pair of general theorems on the statistical and optimization error, and then a series of corollaries applying our results to the cases of noisy, missing, and dependent data. In Section 4, we demonstrate simulations to confirm that our methods work in practice. For detailed proofs, we refer the reader to the technical report [8]. Notation. For a matrix M, we write ∥M∥max := maxi,j |mij| to be the elementwise ℓ∞-norm of M. Furthermore, |||M|||1 denotes the induced ℓ1-operator norm (maximum absolute column sum) of M, and |||M|||op is the induced ℓ2-operator norm (spectral norm) of M. We write κ(M) := λmax(M) λmin(M) , the condition number of M. 2 Background and problem set-up In this section, we provide a formal description of the problem and motivate the class of estimators studied in the paper. We then describe a class of projected gradient descent algorithms to be used in the sequel. 2.1 Observation model and high-dimensional framework Suppose we observe a response variable yi ∈R that is linked to a covariate vector xi ∈Rp via the linear model yi = ⟨xi, β∗⟩+ ϵi, for i = 1, 2, . . . , n. (1) Here, the regression vector β∗∈Rp is unknown, and ϵi ∈R is observation noise, independent of xi. Rather than directly observing each xi ∈Rp, we observe a vector zi ∈Rp linked to xi via some conditional distribution: zi ∼Q(· | xi), for i = 1, 2, . . . , n. (2) This setup allows us to model various types of disturbances to the covariates, including (a) Additive noise: We observe zi = xi + wi, where wi ∈Rp is a random vector independent of xi, say zero-mean with known covariance matrix Σw. (b) Missing data: For a fraction ρ ∈[0, 1), we observe a random vector zi ∈Rp such that independently for each component j, we observe zij = xij with probability 1 −ρ, and zij = ∗with probability ρ. 
This model can also be generalized to allow for different missing probabilities for each covariate. Our first set of results is deterministic, depending on specific instantiations of the observed variables {(yi, zi)}n i=1. However, we are also interested in proving results that hold with high probability when the xi’s and zi’s are drawn at random from some distribution. We develop results for both the i.i.d. setting and the case of dependent covariates, where the xi’s are generated according to a stationary vector autoregressive (VAR) process. Furthermore, we work within a high-dimensional framework where n ≪p, and assume β∗has at most k non-zero parameters, where the sparsity k is also allowed to increase to infinity with the sample size n. We assume the scaling ∥β∗∥2 = O(1), which is reasonable in order to have a non-diverging signal-to-noise ratio. 2 2.2 M-estimators for noisy and missing covariates We begin by examining a simple deterministic problem. Let Cov(X) = Σx ≻0, and consider the program bβ ∈arg min ∥β∥1≤R 1 2βT Σxβ −⟨Σxβ∗, β⟩ . (3) As long as the constraint radius R is at least ∥β∗∥1, the unique solution to this convex program is bβ = β∗. This idealization suggests various estimators based on the plug-in principle. We form unbiased estimates of Σx and Σxβ∗, denoted by bΓ and bγ, respectively, and consider the modified program and its regularized version: bβ ∈arg min ∥β∥1≤R 1 2βT bΓβ −⟨bγ, β⟩ , (4) bβ ∈arg min β∈Rp 1 2βT bΓβ −⟨bγ, β⟩+ λn∥β∥1 , (5) where λn > 0 is the regularization parameter. The Lasso [9, 10] is a special case of these programs, where bΓLas := 1 nXT X and bγLas := 1 nXT y, (6) and we have introduced the shorthand y = (y1, . . . , yn)T ∈Rn, and X ∈Rn×p, with xT i as its ith row. In this paper, we focus on more general instantiations of the programs (4) and (5), involving different choices of the pair (bΓ, bγ) that are adapted to the cases of noisy and/or missing data. 
Note that the matrix bΓLas is positive semidefinite, so the Lasso program is convex. In sharp contrast, for the cases of noisy or missing data, the most natural choice of the matrix bΓ is not positive semidefinite, hence the loss functions appearing in the problems (4) and (5) are non-convex. It is generally impossible to provide a polynomial-time algorithm that converges to a (near) global optimum of a non-convex problem. Remarkably, we prove that a simple projected gradient descent algorithm still converges with high probability to a vector close to any global optimum in our setting. Let us illustrate these ideas with some examples: Example 1 (Additive noise). Suppose we observe the n × p matrix Z = X + W, where W is a random matrix independent of X, with rows wi drawn i.i.d. from a zero-mean distribution with known covariance Σw. Consider the pair bΓadd := 1 nZT Z −Σw and bγadd := 1 nZT y, (7) which correspond to unbiased estimators of Σx and Σxβ∗, respectively. Note that when Σw = 0 (corresponding to the noiseless case), the estimators reduce to the standard Lasso. However, when Σw ̸= 0, the matrix bΓadd is not positive semidefinite in the high-dimensional regime (n ≪p) of interest. Indeed, since the matrix 1 nZT Z has rank at most n, the subtracted matrix Σw may cause bΓadd to have a large number of negative eigenvalues. Example 2 (Missing data). Suppose each entry of X is missing independently with probability ρ ∈[0, 1), and we observe the matrix Z ∈Rn×p with entries Zij = Xij with probability 1 −ρ, 0 otherwise. Given the observed matrix Z ∈Rn×p, consider an estimator of the general form (4), based on the choices bΓmis := eZT eZ n −ρ diag eZT eZ n and bγmis := 1 n eZT y, (8) where eZij = Zij/(1−ρ). It is easy to see that the pair (bΓmis, bγmis) reduces to the pair (bΓLas, bγLas) for the standard Lasso when ρ = 0, corresponding to no missing data. 
In the more interesting case when $\rho \in (0, 1)$, the matrix $\tfrac{1}{n} \tilde Z^T \tilde Z$ in equation (8) has rank at most $n$, so the subtracted diagonal matrix may cause the matrix $\hat\Gamma_{\mathrm{mis}}$ to have a large number of negative eigenvalues when $n \ll p$, and the associated quadratic function is not convex.

2.3 Restricted eigenvalue conditions

Given an estimate $\hat\beta$, there are various ways to assess its closeness to $\beta^*$. We focus on the $\ell_2$-norm $\|\hat\beta - \beta^*\|_2$, as well as the closely related $\ell_1$-norm $\|\hat\beta - \beta^*\|_1$. When the covariate matrix $X$ is fully observed (so that the Lasso can be applied), it is well understood that a sufficient condition for $\ell_2$-recovery is that the matrix $\hat\Gamma_{\mathrm{Las}} = \tfrac{1}{n} X^T X$ satisfy a restricted eigenvalue (RE) condition (e.g., [11, 12, 13]). In this paper, we use the following condition:

Definition 1 (Lower-RE condition). The matrix $\hat\Gamma$ satisfies a lower restricted eigenvalue condition with curvature $\alpha_\ell > 0$ and tolerance $\tau_\ell(n, p) > 0$ if

$\theta^T \hat\Gamma \theta \ge \alpha_\ell \|\theta\|_2^2 - \tau_\ell(n, p) \|\theta\|_1^2$ for all $\theta \in \mathbb{R}^p$.  (9)

It can be shown that when the Lasso matrix $\hat\Gamma_{\mathrm{Las}}$ satisfies this RE condition (9), the Lasso estimate has low $\ell_2$-error for any vector $\beta^*$ supported on any subset of size at most $k \lesssim \tfrac{1}{\tau_\ell(n, p)}$. Moreover, it is known that for various random choices of the design matrix $X$, the Lasso matrix $\hat\Gamma_{\mathrm{Las}}$ will satisfy such an RE condition with high probability (e.g., [14]). We also make use of the analogous upper restricted eigenvalue condition:

Definition 2 (Upper-RE condition). The matrix $\hat\Gamma$ satisfies an upper restricted eigenvalue condition with smoothness $\alpha_u > 0$ and tolerance $\tau_u(n, p) > 0$ if

$\theta^T \hat\Gamma \theta \le \alpha_u \|\theta\|_2^2 + \tau_u(n, p) \|\theta\|_1^2$ for all $\theta \in \mathbb{R}^p$.  (10)

In recent work on high-dimensional projected gradient descent, Agarwal et al. [15] use a more general form of the bounds (9) and (10), called the restricted strong convexity (RSC) and restricted smoothness (RSM) conditions.
2.4 Projected gradient descent

In addition to proving results about the global minima of the programs (4) and (5), we are also interested in polynomial-time procedures for approximating such optima. We show that the simple projected gradient descent algorithm can be used to solve the program (4). The algorithm generates a sequence of iterates $\beta^t$ according to

$\beta^{t+1} = \Pi\!\left( \beta^t - \tfrac{1}{\eta} (\hat\Gamma \beta^t - \hat\gamma) \right)$,  (11)

where $\eta > 0$ is a stepsize parameter, and $\Pi$ denotes the $\ell_2$-projection onto the $\ell_1$-ball of radius $R$. This projection can be computed rapidly in $O(p)$ time, for instance using a procedure due to Duchi et al. [16]. Our analysis shows that under a reasonable set of conditions, the iterates for the family of programs (4) converge to a point extremely close to any global optimum in both $\ell_1$-norm and $\ell_2$-norm, even for the non-convex program.

3 Main results and consequences

We provide theoretical guarantees for both the constrained estimator (4) and the regularized variant

$\hat\beta \in \arg\min_{\|\beta\|_1 \le b_0 \sqrt{k}} \left\{ \tfrac{1}{2}\beta^T \hat\Gamma \beta - \langle \hat\gamma, \beta \rangle + \lambda_n \|\beta\|_1 \right\}$,  (12)

for a constant $b_0 \ge \|\beta^*\|_2$; this is a hybrid between the constrained program (4) and the regularized program (5).

3.1 Statistical error

In controlling the statistical error, we assume that the matrix $\hat\Gamma$ satisfies a lower-RE condition with curvature $\alpha_\ell$ and tolerance $\tau_\ell(n, p)$, as previously defined (9). In addition, recall that the matrix $\hat\Gamma$ and vector $\hat\gamma$ serve as surrogates to the deterministic quantities $\Sigma_x \in \mathbb{R}^{p \times p}$ and $\Sigma_x \beta^* \in \mathbb{R}^p$, respectively. We assume there is a function $\phi(Q, \sigma_\epsilon)$, depending on the standard deviation $\sigma_\epsilon$ of the observation noise vector $\epsilon$ from equation (1) and the conditional distribution $Q$ from equation (2), such that the following deviation conditions are satisfied:

$\|\hat\gamma - \Sigma_x \beta^*\|_\infty \le \phi(Q, \sigma_\epsilon) \sqrt{\tfrac{\log p}{n}}$ and $\|(\hat\Gamma - \Sigma_x)\beta^*\|_\infty \le \phi(Q, \sigma_\epsilon) \sqrt{\tfrac{\log p}{n}}$.  (13)

The following result applies to any global optimum $\hat\beta$ of the program (12) with $\lambda_n \ge 4 \phi(Q, \sigma_\epsilon) \sqrt{\tfrac{\log p}{n}}$:

Theorem 1 (Statistical error).
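The update (11) can be sketched in NumPy as follows. This is our own illustration: the paper cites Duchi et al. [16] for an expected-$O(p)$ projection, while for simplicity we use the standard $O(p \log p)$ sort-based projection, which computes the same map.

```python
import numpy as np

def project_l1_ball(v, R):
    """Euclidean projection of v onto {x : ||x||_1 <= R}.
    Sort-based O(p log p) version of the soft-threshold projection."""
    if np.abs(v).sum() <= R:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]            # sorted magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.max(ks[u - (css - R) / ks > 0])
    theta = (css[rho - 1] - R) / rho        # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(Gamma, gamma, R, eta, T=500):
    """Iterate beta <- Pi(beta - (1/eta) * (Gamma beta - gamma)), as in (11).
    Runs even when Gamma is indefinite; the constraint keeps iterates bounded."""
    beta = np.zeros(len(gamma))
    for _ in range(T):
        beta = project_l1_ball(beta - (Gamma @ beta - gamma) / eta, R)
    return beta
```

For example, on the idealized convex program (3) with $\hat\Gamma = I$ and $\hat\gamma = \beta^*$, the iterates converge to $\beta^*$ whenever $R \ge \|\beta^*\|_1$, matching the discussion of program (3) above.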
Suppose the surrogates $(\hat\Gamma, \hat\gamma)$ satisfy the deviation bounds (13), and the matrix $\hat\Gamma$ satisfies the lower-RE condition (9) with parameters $(\alpha_\ell, \tau_\ell)$ such that

$\sqrt{k} \, \tau_\ell(n, p) \le \min\left\{ \tfrac{\alpha_\ell}{128 \sqrt{k}}, \; \tfrac{\phi(Q, \sigma_\epsilon)}{2 b_0} \sqrt{\tfrac{\log p}{n}} \right\}$.  (14)

Then for any vector $\beta^*$ with sparsity at most $k$, there is a universal positive constant $c_0$ such that any global optimum $\hat\beta$ satisfies the bounds

$\|\hat\beta - \beta^*\|_2 \le \tfrac{c_0 \sqrt{k}}{\alpha_\ell} \max\left\{ \phi(Q, \sigma_\epsilon) \sqrt{\tfrac{\log p}{n}}, \; \lambda_n \right\}$, and  (15a)

$\|\hat\beta - \beta^*\|_1 \le \tfrac{8 c_0 k}{\alpha_\ell} \max\left\{ \phi(Q, \sigma_\epsilon) \sqrt{\tfrac{\log p}{n}}, \; \lambda_n \right\}$.  (15b)

The same bounds (without $\lambda_n$) also apply to the constrained program (4) with radius choice $R = \|\beta^*\|_1$.

Remarks: Note that for the standard Lasso pair $(\hat\Gamma_{\mathrm{Las}}, \hat\gamma_{\mathrm{Las}})$, bounds of the form (15) for sub-Gaussian noise are well known from past work (e.g., [12, 17, 18, 19]). The novelty of Theorem 1 is in allowing for general pairs of such surrogates, which can lead to non-convexity in the underlying M-estimator.

3.2 Optimization error

Although Theorem 1 provides guarantees that hold uniformly for any choice of global minimizer, it does not provide any guidance on how to approximate such a global minimizer using a polynomial-time algorithm. Nonetheless, we are able to show that for the family of programs (4), under reasonable conditions on $\hat\Gamma$ satisfied in various settings, a simple projected gradient method will converge geometrically fast to a very good approximation of any global optimum.

Theorem 2 (Optimization error). Consider the program (4) with any choice of radius $R$ for which the constraint is active. Suppose that the surrogate matrix $\hat\Gamma$ satisfies the lower-RE (9) and upper-RE (10) conditions with $\tau_u, \tau_\ell \asymp \tfrac{\log p}{n}$, and that we apply projected gradient descent (11) with constant stepsize $\eta = 2\alpha_u$.
Then as long as $n \gtrsim k \log p$, there is a contraction coefficient $\gamma \in (0, 1)$ independent of $(n, p, k)$ and universal positive constants $(c_1, c_2)$ such that for any global optimum $\hat\beta$, the gradient descent iterates $\{\beta^t\}_{t=0}^\infty$ satisfy the bound

$\|\beta^t - \hat\beta\|_2^2 \le \gamma^t \|\beta^0 - \hat\beta\|_2^2 + c_1 \tfrac{\log p}{n} \|\hat\beta - \beta^*\|_1^2 + c_2 \|\hat\beta - \beta^*\|_2^2$ for all $t = 0, 1, 2, \ldots$.  (16)

In addition, we have the $\ell_1$-bound

$\|\beta^t - \hat\beta\|_1 \le 2\sqrt{k} \, \|\beta^t - \hat\beta\|_2 + 2\sqrt{k} \, \|\hat\beta - \beta^*\|_2 + 2 \|\hat\beta - \beta^*\|_1$ for all $t = 0, 1, 2, \ldots$.  (17)

Note that the bound (16) controls the $\ell_2$-distance between the iterate $\beta^t$ at time $t$, which is easily computed in polynomial time, and any global optimum $\hat\beta$ of the program (4), which may be difficult to compute. Since $\gamma \in (0, 1)$, the first term in the bound vanishes as $t$ increases. Together with Theorem 1, equations (16) and (17) imply that the $\ell_2$- and $\ell_1$-optimization errors are bounded as $O(\tfrac{k \log p}{n})$ and $O\!\left( k \sqrt{\tfrac{\log p}{n}} \right)$, respectively.

3.3 Some consequences

Both Theorems 1 and 2 are deterministic results; applying them to specific models requires additional work to establish the stated conditions. We turn to the statements of some consequences of these theorems for different cases of noisy, missing, and dependent data. A zero-mean random variable $Z$ is sub-Gaussian with parameter $\sigma > 0$ if $\mathbb{E}(e^{\lambda Z}) \le \exp(\lambda^2 \sigma^2 / 2)$ for all $\lambda \in \mathbb{R}$. We say that a random matrix $X \in \mathbb{R}^{n \times p}$ is sub-Gaussian with parameters $(\Sigma, \sigma^2)$ if each row $x_i^T \in \mathbb{R}^p$ is sampled independently from a zero-mean distribution with covariance $\Sigma$, and for any unit vector $u \in \mathbb{R}^p$, the random variable $u^T x_i$ is sub-Gaussian with parameter at most $\sigma$. We begin with the case of i.i.d. samples with additive noise, as described in Example 1.

Corollary 1. Suppose we observe $Z = X + W$, where the random matrices $X, W \in \mathbb{R}^{n \times p}$ are sub-Gaussian with parameters $(\Sigma_x, \sigma_x^2)$ and $(\Sigma_w, \sigma_w^2)$, respectively, and the sample size is lower-bounded as $n \gtrsim \max\left\{ \left( \tfrac{\sigma_x^2 + \sigma_w^2}{\lambda_{\min}(\Sigma_x)} \right)^2, \; 1 \right\} k \log p$.
Then for the M-estimator based on the surrogates $(\hat\Gamma_{\mathrm{add}}, \hat\gamma_{\mathrm{add}})$, the results of Theorems 1 and 2 hold with parameters

$\alpha_\ell = \tfrac{1}{2} \lambda_{\min}(\Sigma_x)$ and $\phi(Q, \sigma_\epsilon) = c_0 \left( \sigma_x^2 + \sigma_w^2 + \sigma_\epsilon \sqrt{\sigma_x^2 + \sigma_w^2} \right)$,

with probability at least $1 - c_1 \exp(-c_2 \log p)$.

For i.i.d. samples with missing data, we have the following:

Corollary 2. Suppose $X \in \mathbb{R}^{n \times p}$ is a sub-Gaussian matrix with parameters $(\Sigma_x, \sigma_x^2)$, and $Z$ is the missing-data matrix with parameter $\rho$. If $n \gtrsim \max\left\{ \tfrac{1}{(1 - \rho)^4} \tfrac{\sigma_x^4}{\lambda_{\min}^2(\Sigma_x)}, \; 1 \right\} k \log p$, then Theorems 1 and 2 hold with probability at least $1 - c_1 \exp(-c_2 \log p)$ for

$\alpha_\ell = \tfrac{1}{2} \lambda_{\min}(\Sigma_x)$ and $\phi(Q, \sigma_\epsilon) = c_0 \tfrac{\sigma_x}{1 - \rho} \left( \sigma_\epsilon + \tfrac{\sigma_x}{1 - \rho} \right)$.

Now consider the case where the rows of $X$ are drawn from a vector autoregressive (VAR) process according to

$x_{i+1} = A x_i + v_i$, for $i = 1, 2, \ldots, n - 1$,  (18)

where $v_i \in \mathbb{R}^p$ is a zero-mean noise vector with covariance matrix $\Sigma_v$, and $A \in \mathbb{R}^{p \times p}$ is a driving matrix with spectral norm $|||A|||_2 < 1$. We assume the rows of $X$ are drawn from a Gaussian distribution with covariance $\Sigma_x$, such that $\Sigma_x = A \Sigma_x A^T + \Sigma_v$, so the process is stationary. Corollary 3 corresponds to the case of additive noise for a Gaussian VAR process. A similar result can be derived in the missing-data setting.

Corollary 3. Suppose the rows of $X$ are drawn according to a Gaussian VAR process with driving matrix $A$. Suppose the additive noise matrix $W$ is i.i.d. with Gaussian rows. If $n \gtrsim \max\left\{ \tfrac{\zeta^4}{\lambda_{\min}^2(\Sigma_x)}, \; 1 \right\} k \log p$, with $\zeta^2 = |||\Sigma_w|||_{\mathrm{op}} + \tfrac{2 |||\Sigma_x|||_{\mathrm{op}}}{1 - |||A|||_{\mathrm{op}}}$, then Theorems 1 and 2 hold with probability at least $1 - c_1 \exp(-c_2 \log p)$ for $\alpha_\ell = \tfrac{1}{2} \lambda_{\min}(\Sigma_x)$ and $\phi(Q, \sigma_\epsilon) = c_0 (\sigma_\epsilon \zeta + \zeta^2)$.

3.4 Application to graphical model inverse covariance estimation

The problem of inverse covariance estimation for a Gaussian graphical model is closely related to the Lasso. Meinshausen and Bühlmann [20] prescribed a way to recover the support of the precision matrix $\Theta$ when each column of $\Theta$ is $k$-sparse, via linear regression and the Lasso.
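A stationary Gaussian VAR sample of the form (18) can be generated as follows. This is our own sketch (the function name is ours): the stationary covariance solves the discrete Lyapunov equation $\Sigma_x = A \Sigma_x A^T + \Sigma_v$, which SciPy can solve directly.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def gaussian_var_sample(n, A, Sigma_v, rng):
    """Draw n rows from the stationary Gaussian VAR process
    x_{i+1} = A x_i + v_i, starting from the stationary distribution.
    Requires a stable driving matrix (spectral norm < 1, as in (18),
    implies spectral radius < 1)."""
    p = A.shape[0]
    assert np.max(np.abs(np.linalg.eigvals(A))) < 1, "need a stable A"
    Sigma_x = solve_discrete_lyapunov(A, Sigma_v)   # Sigma_x = A Sigma_x A^T + Sigma_v
    L = np.linalg.cholesky(Sigma_x)
    Lv = np.linalg.cholesky(Sigma_v)
    X = np.empty((n, p))
    X[0] = L @ rng.standard_normal(p)               # stationary start
    for i in range(n - 1):
        X[i + 1] = A @ X[i] + Lv @ rng.standard_normal(p)
    return X, Sigma_x
```

Combined with the additive-noise surrogate of Example 1, this is the kind of data to which Corollary 3 applies.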
More recently, Yuan [21] proposed a method for estimating $\Theta$ using linear regression and the Dantzig selector, and obtained error bounds on $|||\hat\Theta - \Theta|||_1$ when the columns of $\Theta$ are bounded in $\ell_1$. Both of these results assume the rows of $X$ are observed noiselessly and independently.

Suppose we are given a matrix $X \in \mathbb{R}^{n \times p}$ of samples from a multivariate Gaussian distribution, where each row is distributed according to $N(0, \Sigma)$. We assume the rows of $X$ are either i.i.d. or sampled from a Gaussian VAR process (18). Based on the modified Lasso, we devise a method to estimate $\Theta$ based on a corrupted observation matrix $Z$. Let $X^j$ denote the $j$th column of $X$, and let $X^{-j}$ denote the matrix $X$ with the $j$th column removed. By standard results on Gaussian graphical models, there exists a vector $\theta^j \in \mathbb{R}^{p-1}$ such that

$X^j = X^{-j} \theta^j + \epsilon^j$,  (19)

where $\epsilon^j$ is a vector of i.i.d. Gaussians with $\epsilon^j \perp\!\!\!\perp X^{-j}$. Defining $a_j := -(\Sigma_{jj} - \Sigma_{j,-j} \theta^j)^{-1}$, we have $\Theta_{j,-j} = a_j \theta^j$. Our algorithm estimates $\hat\theta^j$ and $\hat a_j$ for each $j$ and combines the estimates to obtain $\hat\Theta_{j,-j} = \hat a_j \hat\theta^j$.

In the additive noise case, we observe $Z = X + W$. Equation (19) yields $Z^j = X^{-j} \theta^j + (\epsilon^j + W^j)$. Note that $\delta^j = \epsilon^j + W^j$ is a vector of i.i.d. Gaussians, and since $X \perp\!\!\!\perp W$, we have $\delta^j \perp\!\!\!\perp X^{-j}$. Hence, our results on covariates with additive noise produce an estimate of $\theta^j$ by solving the program (4) or (12) with the pair $(\hat\Gamma^{(j)}, \hat\gamma^{(j)}) = (\hat\Sigma_{-j,-j}, \tfrac{1}{n} (Z^{-j})^T Z^j)$, where $\hat\Sigma = \tfrac{1}{n} Z^T Z - \Sigma_w$. When $Z$ is a missing-data version of $X$, we similarly estimate the vectors $\theta^j$ with suitable corrections. We arrive at the following algorithm:

Algorithm 3.1.
(1) Perform $p$ linear regressions of the variables $Z^j$ upon the remaining variables $Z^{-j}$, using the modified Lasso program (4) or (12) with the estimators $(\hat\Gamma^{(j)}, \hat\gamma^{(j)})$, to obtain estimates $\hat\theta^j$.
(2) Estimate the scalars $a_j$ using $\hat a_j := -(\hat\Sigma_{jj} - \hat\Sigma_{j,-j} \hat\theta^j)^{-1}$. Set $\tilde\Theta_{j,-j} = \hat a_j \hat\theta^j$ and $\tilde\Theta_{jj} = -\hat a_j$.
(3) Construct the matrix $\hat\Theta = \arg\min_{\Theta \in S^p} |||\Theta - \tilde\Theta|||_1$, where $S^p$ is the set of symmetric $p \times p$ matrices.
Note that the minimization in step (3) is a linear program, so it is easily solved with standard methods. We have:

Corollary 4. Suppose the columns of the matrix $\Theta$ are $k$-sparse, and suppose the condition number $\kappa(\Theta)$ is nonzero and finite. Suppose the deviation conditions

$\|\hat\gamma^{(j)} - \Sigma_{-j,-j} \theta^j\|_\infty \le \phi(Q, \sigma_\epsilon) \sqrt{\tfrac{\log p}{n}}$ and $\|(\hat\Gamma^{(j)} - \Sigma_{-j,-j}) \theta^j\|_\infty \le \phi(Q, \sigma_\epsilon) \sqrt{\tfrac{\log p}{n}}$  (20)

hold for all $j$, and suppose we have the following additional deviation condition on $\hat\Sigma$:

$\|\hat\Sigma - \Sigma\|_{\max} \le c \, \phi(Q, \sigma_\epsilon) \sqrt{\tfrac{\log p}{n}}$.  (21)

Finally, suppose the lower-RE condition holds uniformly over the matrices $\hat\Gamma^{(j)}$ with the scaling (14). Then under the estimation procedure of Algorithm 3.1, there exists a universal constant $c_0$ such that

$|||\hat\Theta - \Theta|||_{\mathrm{op}} \le \tfrac{c_0 \kappa^2(\Sigma)}{\lambda_{\min}(\Sigma)} \left( \tfrac{\phi(Q, \sigma_\epsilon)}{\lambda_{\min}(\Sigma)} + \tfrac{\phi(Q, \sigma_\epsilon)}{\alpha_\ell} \right) k \sqrt{\tfrac{\log p}{n}}$.

4 Simulations

In this section, we provide simulation results to confirm that the scalings predicted by our theory are sharp. In Figure 1, we plot the results of simulations under the additive noise model described in Example 1, using $\Sigma_x = I$ and $\Sigma_w = \sigma_w^2 I$ with $\sigma_w = 0.2$. Panel (a) provides plots of $\ell_2$-error versus the sample size $n$, for $p \in \{128, 256, 512\}$. For all three choices of dimensions, the error decreases to zero as the sample size $n$ increases, showing consistency of the method. If we plot the $\ell_2$-error versus the rescaled sample size $n/(k \log p)$, as depicted in panel (b), the curves roughly align for different values of $p$, agreeing with Theorem 1. Panel (c) shows analogous curves for VAR data with additive noise, using a driving matrix $A$ with $|||A|||_{\mathrm{op}} = 0.2$.

[Figure 1: three panels plotting $\ell_2$-norm error for $p \in \{128, 256, 512\}$; panel (a) error versus $n$ for i.i.d. data with additive noise, panel (b) error versus the rescaled sample size $n/(k \log p)$, panel (c) error versus $n/(k \log p)$ for additive noise with autoregressive data.]

Figure 1.
Plots of the error $\|\hat\beta - \beta^*\|_2$ after running projected gradient descent on the non-convex objective, with sparsity $k \approx \sqrt{p}$. Plot (a) is an error plot for i.i.d. data with additive noise, and plot (b) shows $\ell_2$-error versus the rescaled sample size $\tfrac{n}{k \log p}$. Plot (c) depicts a similar (rescaled) plot for VAR data with additive noise. As predicted by Theorem 1, the curves align for different values of $p$ in the rescaled plot.

We also verified the results of Theorem 2 empirically. Figure 2 shows the results of applying projected gradient descent to solve the optimization problem (4) in the cases of additive noise and missing data. We first applied projected gradient descent to obtain an initial estimate $\hat\beta$, then reapplied projected gradient descent 10 times, tracking the optimization error $\|\beta^t - \hat\beta\|_2$ (in blue) and statistical error $\|\beta^t - \beta^*\|_2$ (in red). As predicted by Theorem 2, the iterates exhibit geometric convergence to roughly the same fixed point, regardless of starting point.

Finally, we simulated the inverse covariance matrix estimation algorithm on three types of graphical models:
(a) Chain-structured graphs. In this case, all nodes are arranged in a line. The diagonal entries of $\Theta$ equal 1, and entries corresponding to links in the chain equal 0.1. Then $\Theta$ is rescaled so $|||\Theta|||_{\mathrm{op}} = 1$.
(b) Star-structured graphs. In this case, all nodes are connected to a central node, which has degree $k \approx 0.1p$. All other nodes have degree 1. The diagonal entries of $\Theta$ are set equal to 1, and all entries corresponding to edges in the graph are set equal to 0.1. Then $\Theta$ is rescaled so $|||\Theta|||_{\mathrm{op}} = 1$.
(c) Erdős–Rényi graphs. As in Rothman et al. [22], we first generate a matrix $B$ with diagonal entries 0, and all other entries independently equal to 0.5 with probability $k/p$, and 0 otherwise. Then $\delta$ is chosen so that $\Theta = B + \delta I$ has condition number $p$, and $\Theta$ is rescaled so $|||\Theta|||_{\mathrm{op}} = 1$.
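The chain-graph construction in item (a) can be sketched in a few lines (the helper name is ours, not from the paper): unit diagonal, 0.1 on the first off-diagonals, then a rescaling so the operator norm equals 1.

```python
import numpy as np

def chain_precision(p, edge=0.1):
    """Chain-structured precision matrix: 1 on the diagonal, `edge` on the
    sub/super-diagonal, rescaled so that |||Theta|||_op = 1."""
    Theta = np.eye(p) + edge * (np.eye(p, k=1) + np.eye(p, k=-1))
    return Theta / np.linalg.norm(Theta, 2)   # ord=2 gives the spectral norm

Theta = chain_precision(64)
Sigma_x = np.linalg.inv(Theta)   # covariance used to draw the samples X
```

With `edge = 0.1` the unscaled matrix is diagonally dominant, hence positive definite, so the inverse covariance $\Sigma_x = \Theta^{-1}$ is well defined.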
[Figure 2: two panels plotting $\log(\|\beta^t - \cdot\|_2)$ versus iteration count, each showing a statistical-error curve and an optimization-error curve; panel (a) for the additive noise case, panel (b) for the missing data case.]

Figure 2. Plots of the optimization error $\log(\|\beta^t - \hat\beta\|_2)$ and statistical error $\log(\|\beta^t - \beta^*\|_2)$ versus iteration number $t$, generated by running projected gradient descent on the non-convex objective. As predicted by Theorem 2, the optimization error decreases geometrically.

After generating the matrix $X$ of $n$ i.i.d. samples from the appropriate graphical model, with covariance matrix $\Sigma_x = \Theta^{-1}$, we generated the corrupted matrix $Z = X + W$ with $\Sigma_w = (0.2)^2 I$. Figure 3 shows the rescaled $\ell_2$-error $\tfrac{1}{\sqrt{k}} |||\hat\Theta - \Theta|||_{\mathrm{op}}$ plotted against the sample size $n$ for a chain-structured graph, with panel (a) showing the original plot and panel (b) plotting against the rescaled sample size. We obtained qualitatively similar results for the star and Erdős–Rényi graphs, in the presence of missing and/or dependent data.

[Figure 3: two panels plotting $\tfrac{1}{\sqrt{k}} |||\hat\Theta - \Theta|||_{\mathrm{op}}$ for a chain graph with $p \in \{64, 128, 256\}$; panel (a) versus $n$, panel (b) versus $n/(k \log p)$.]

Figure 3. (a) Plots of the rescaled error $\tfrac{1}{\sqrt{k}} |||\hat\Theta - \Theta|||_{\mathrm{op}}$ after running projected gradient descent for a chain-structured Gaussian graphical model with additive noise. As predicted by Theorems 1 and 2, all curves align when the rescaled error is plotted against the ratio $\tfrac{n}{k \log p}$, as shown in (b). Each point represents the average over 50 trials.

5 Discussion

In this paper, we formulated an $\ell_1$-constrained minimization problem for sparse linear regression on corrupted data.
The source of corruption may be additive noise or missing data, and although the resulting objective is not generally convex, we showed that projected gradient descent is guaranteed to converge to a point within statistical precision of the optimum. In addition, we established $\ell_1$- and $\ell_2$-error bounds that hold with high probability when the data are drawn i.i.d. from a sub-Gaussian distribution, or drawn from a Gaussian VAR process. Finally, we used our results on linear regression to perform sparse inverse covariance estimation for a Gaussian graphical model, based on corrupted data. The bounds we obtain for the spectral norm of the error are of the same order as existing bounds for inverse covariance matrix estimation with uncorrupted, i.i.d. data.

Acknowledgments

PL acknowledges support from a Hertz Foundation Fellowship and an NDSEG Fellowship; MJW and PL were also partially supported by grants NSF-DMS-0907632 and AFOSR-09NL184. The authors thank Alekh Agarwal, Sahand Negahban, and John Duchi for discussions and guidance.

References
[1] R. Little and D. B. Rubin. Statistical Analysis with Missing Data. Wiley, New York, 1987.
[2] J. T. Hwang. Multiplicative errors-in-variables models with applications to recent data released by the U.S. Department of Energy. Journal of the American Statistical Association, 81(395):680–688, 1986.
[3] R. J. Carroll, D. Ruppert, and L. A. Stefanski. Measurement Error in Nonlinear Models. Chapman and Hall, 1995.
[4] S. J. Iturria, R. J. Carroll, and D. Firth. Polynomial regression and estimating functions in the presence of multiplicative measurement error. Journal of the Royal Statistical Society, Series B, 61:547–561, 1999.
[5] Q. Xu and J. You. Covariate selection for linear errors-in-variables regression models. Communications in Statistics – Theory and Methods, 36(2):375–386, 2007.
[6] N. Städler and P. Bühlmann.
Missing values: Sparse inverse covariance estimation and an extension to sparse regression. Statistics and Computing, pages 1–17, 2010.
[7] M. Rosenbaum and A. B. Tsybakov. Sparse recovery under matrix uncertainty. Annals of Statistics, 38:2620–2651, 2010.
[8] P. Loh and M. J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. Technical report, UC Berkeley, September 2011. Available at http://arxiv.org/abs/1109.3714.
[9] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[10] S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[11] S. van de Geer. The deterministic Lasso. In Proceedings of the Joint Statistical Meeting, 2007.
[12] P. J. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705–1732, 2009.
[13] S. van de Geer and P. Bühlmann. On the conditions used to prove oracle results for the Lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.
[14] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research, 11:2241–2259, 2010.
[15] A. Agarwal, S. Negahban, and M. J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. Technical report, UC Berkeley, April 2011. Available at http://arxiv.org/abs/1104.4824.
[16] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In International Conference on Machine Learning, pages 272–279, 2008.
[17] C. H. Zhang and J. Huang. The sparsity and bias of the Lasso selection in high-dimensional linear regression. Annals of Statistics, 36(4):1567–1594, 2008.
[18] N. Meinshausen and B. Yu.
Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):246–270, 2009.
[19] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for the analysis of regularized M-estimators. In Advances in Neural Information Processing Systems, 2009.
[20] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006.
[21] M. Yuan. High-dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research, 11:2261–2286, August 2010.
[22] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008.
Greedy Model Averaging

Dong Dai
Department of Statistics, Rutgers University, New Jersey, 08816
dongdai916@gmail.com

Tong Zhang
Department of Statistics, Rutgers University, New Jersey, 08816
tzhang@stat.rutgers.edu

Abstract

This paper considers the problem of combining multiple models to achieve a prediction accuracy not much worse than that of the best single model for least squares regression. It is known that if the models are mis-specified, model averaging is superior to model selection. Specifically, let $n$ be the sample size; then the worst-case regret of the former decays at the rate of $O(1/n)$ while the worst-case regret of the latter decays at the rate of $O(1/\sqrt{n})$. In the literature, the most important and widely studied model averaging method that achieves the optimal $O(1/n)$ average regret is the exponential weighted model averaging (EWMA) algorithm. However, this method suffers from several limitations. The purpose of this paper is to present a new greedy model averaging procedure that improves on EWMA. We prove strong theoretical guarantees for the new procedure and illustrate our theoretical results with empirical examples.

1 Introduction

This paper considers the model combination problem, where the goal is to combine multiple models in order to achieve improved accuracy. This problem is important for practical applications because it is often the case that single learning models do not perform as well as their combinations. In practice, model combination is often achieved through the so-called "stacking" procedure, where multiple models $\{f_1(x), \ldots, f_M(x)\}$ are first learned based on a shared "training dataset". Then these models are combined on a separate "validation dataset". This paper is motivated by this scenario. In particular, we assume that $M$ models $\{f_1(x), \ldots, f_M(x)\}$ are given a priori (e.g., we may regard them as being obtained with a separate training set), and we are provided with $n$ labeled data points (validation data) $\{(X_1, Y_1), \ldots$
, (X_n, Y_n)\}$ to combine these models. For simplicity and clarity, our analysis focuses on least squares regression in fixed design, although a similar analysis can be extended to random design and to other loss functions. In this setting, for notational convenience, we represent the $k$-th model on the validation data as a vector $f_k = [f_k(X_1), \ldots, f_k(X_n)] \in \mathbb{R}^n$, and we let the observation vector be $y = [Y_1, \ldots, Y_n] \in \mathbb{R}^n$. Let $g = \mathbb{E} y$ be the mean. Our goal (in the fixed design or denoising setting) is to estimate the mean vector $g$ from $y$ using the $M$ existing models $F = \{f_1, \ldots, f_M\}$. Here, we can write $y = g + \xi$, where we assume that $\xi$ is i.i.d. Gaussian noise: $\xi \sim N(0, \sigma^2 I_{n \times n})$ for simplicity. This i.i.d. Gaussian assumption isn't critical, and the results remain the same for independent sub-Gaussian noise. We assume that the models may be mis-specified. That is, let $k^*$ be the best single model, defined as

$k^* = \arg\min_k \|f_k - g\|_2^2$;  (1)

then $f_{k^*} \ne g$. We are interested in an estimator $\hat f$ of $g$ that achieves a small regret

$R(\hat f) = \tfrac{1}{n} \|\hat f - g\|_2^2 - \tfrac{1}{n} \|f_{k^*} - g\|_2^2$.

This paper considers a special class of model combination methods, which we refer to as model averaging, with combined estimators of the form $\hat f = \sum_{k=1}^M \hat w_k f_k$, where $\hat w_k \ge 0$ and $\sum_k \hat w_k = 1$. A standard method for "model averaging" is model selection, where we choose the model $\hat k$ with the smallest least squares error:

$\hat f_{MS} = f_{\hat k}$, where $\hat k = \arg\min_k \|f_k - y\|_2^2$.

This corresponds to the choice $\hat w_{\hat k} = 1$ and $\hat w_k = 0$ for $k \ne \hat k$. However, it is well known that the worst-case regret this procedure can achieve is $R(\hat f_{MS}) = O(\sqrt{\ln M / n})$ [1]. Another standard model averaging method is the exponential weighted model averaging (EWMA) estimator, defined as

$\hat f_{EWMA} = \sum_{k=1}^M \hat w_k f_k$, with $\hat w_k = \frac{q_k e^{-\lambda \|f_k - y\|_2^2}}{\sum_{j=1}^M q_j e^{-\lambda \|f_j - y\|_2^2}}$,  (2)

with a tuned parameter $\lambda \ge 0$. The extra parameters $\{q_j\}_{j=1,\ldots,M}$ are priors that impose bias favoring some models over others. Here we assume that $q_j \ge 0$ and $\sum_j q_j = 1$. In this setting, the most common prior choice is the flat prior $q_j = 1/M$. It should be pointed out that a progressive variant of (2), which returns the average of $n + 1$ EWMA estimators with $S_i = \{(X_1, Y_1), \ldots, (X_i, Y_i)\}$ for $i = 0, 1, \ldots, n$, was often analyzed in the earlier literature [2, 9, 5, 1]. Nevertheless, the non-progressive version presented in (2) is clearly a more natural estimator, and this is the form that has been studied in more recent work [3, 6, 8]. Our current paper does not differentiate between these two versions of EWMA because they have similar theoretical properties. In particular, our experiments only compare to the non-progressive version (2), which performs better in practice. It is known that exponential model averaging leads to an average regret of $O(\ln M / n)$, which achieves the optimal rate; however, it was pointed out in [1] that the rate does not hold with large probability. Specifically, EWMA only leads to a sub-optimal deviation bound of $O(\sqrt{\ln M / n})$ with large probability.
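The fixed-design regret and the model selection baseline can be sketched in a few lines of NumPy (the helper names `regret` and `select_model` are ours, not from the paper):

```python
import numpy as np

def regret(f_hat, models, g):
    """R(f_hat) = ||f_hat - g||^2 / n  -  min_k ||f_k - g||^2 / n."""
    n = len(g)
    best = min(np.sum((f - g) ** 2) for f in models) / n
    return np.sum((f_hat - g) ** 2) / n - best

def select_model(models, y):
    """Model selection: return the single model closest to y in squared error."""
    errs = [np.sum((f - y) ** 2) for f in models]
    return models[int(np.argmin(errs))]
```

Note that `select_model` sees only the noisy observation `y`, while `regret` is defined against the unobserved mean `g`; this gap is exactly why model selection incurs the $O(\sqrt{\ln M / n})$ worst-case regret discussed above.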
To remedy this sub-optimality, an empirical star algorithm (which we will refer to as STAR from now on) was then proposed in [1]; it was shown that the algorithm gives an $O(\ln M / n)$ deviation bound with large probability under the flat prior $q_i = 1/M$. One major issue with the STAR algorithm is that its average performance is often inferior to EWMA, as we can see from our empirical examples. Therefore, although theoretically interesting, it is not an algorithm that can be regarded as a replacement for EWMA in practice. Partly for this reason, a more recent study [7] re-examined the problem of improving EWMA, where different estimators were proposed in order to achieve optimal deviation for model averaging. However, the proposed algorithms are rather complex and difficult to implement. The purpose of this paper is to present a simple greedy model averaging (GMA) algorithm that gives the optimal $O(\ln M / n)$ deviation bound with large probability and can be applied with an arbitrary prior $q_i$. Moreover, unlike STAR, whose average performance is inferior to EWMA, the average performance of the GMA algorithm is generally superior to EWMA, as we shall illustrate with examples. It also has some other advantages, which we will discuss in more detail later in the paper.

2 Greedy Model Averaging

This paper studies a new model averaging procedure, presented in Algorithm 1. The procedure has $L$ stages, and each stage adds an additional model $f_{\hat k^{(\ell)}}$ into the ensemble. It is based on a simple but important modification of a classical sequential greedy approximation procedure in the literature [4], which corresponds to setting $\mu^{(\ell)} = 0$, $\lambda = 0$ in Algorithm 1 with $\alpha^{(\ell)}$ optimized over $[0, 1]$. The STAR algorithm corresponds to the stage-2 estimator $\hat f^{(2)}$ with the above-mentioned classical greedy procedure of [4]. However, in order to prove the desired deviation bound, our analysis critically depends on the extra term $\mu^{(\ell)} \|\hat f^{(\ell-1)} - f_j\|_2^2$, which isn't present in the classical procedure (that is, our proof does not apply to the procedure of [4]). As we will see in Section 4, this extra term does have a positive impact under suitable conditions that correspond to Theorem 1 and Theorem 2 below; thus the term is not only of theoretical interest, but also leads to practical benefits under the right conditions. Another difference between GMA and the greedy algorithm in [4] is that our procedure allows the use of non-flat priors through the extra penalty term $\lambda c^{(\ell)} \ln(1/q_j)$. This generality can be useful for some applications. Moreover, it is useful to notice that if we choose the flat prior $q_j = 1/M$, then the term $\lambda c^{(\ell)} \ln(1/q_j)$ is identical for all models, and thus this term can be removed from the optimization. In this case, the proposed method has the advantage of being parameter-free (with the default choice of $\nu = 0.5$). This advantage is also shared by the STAR algorithm.

Algorithm 1: Greedy Model Averaging (GMA)
input: noisy observation $y$ and static models $f_1, \ldots, f_M$
output: averaged model $\hat f^{(\ell)}$
parameters: prior $\{q_j\}_{j=1,\ldots,M}$ and regularization parameters $\nu$ and $\lambda$
let $\hat f^{(0)} = 0$
for $\ell = 1, 2, \ldots, L$ do
  let $\alpha^{(\ell)} = (\ell - 1)/\ell$
  let $\mu^{(1)} = 0$; $\mu^{(2)} = 0.05$; $\mu^{(\ell)} = \nu(\ell - 1)/\ell^2$ if $\ell > 2$
  let $c^{(1)} = 1$; $c^{(2)} = 0.25$; $c^{(\ell)} = [20\nu(1 - \nu)(\ell - 1)]^{-1}$ if $\ell > 2$
  let $\hat k^{(\ell)} = \arg\min_j Q^{(\ell)}(j)$, where
    $Q^{(\ell)}(j) := \|\alpha^{(\ell)} \hat f^{(\ell-1)} + (1 - \alpha^{(\ell)}) f_j - y\|_2^2 + \mu^{(\ell)} \|\hat f^{(\ell-1)} - f_j\|_2^2 + \lambda c^{(\ell)} \ln(1/q_j)$
  let $\hat f^{(\ell)} = \alpha^{(\ell)} \hat f^{(\ell-1)} + (1 - \alpha^{(\ell)}) f_{\hat k^{(\ell)}}$
end

Observe that the first stage of GMA corresponds to the standard model selection procedure:

$\hat k^{(1)} = \arg\min_j \left[ \|f_j - y\|_2^2 + \lambda \ln(1/q_j) \right]$, $\hat f^{(1)} = f_{\hat k^{(1)}}$.

As we have pointed out earlier, it is well known that only $O(1/\sqrt{n})$ regret can be achieved by any model selection procedure (that is, any procedure that returns a single model $f_{\hat k}$ for some $\hat k$). However, a combination of only two models allows us to achieve the optimal $O(1/n)$ rate. In fact, $\hat f^{(2)}$ achieves this rate. For clarity, we rewrite this stage-2 estimator as

$\hat k^{(2)} = \arg\min_j \left[ \left\| \tfrac{1}{2}(f_{\hat k^{(1)}} + f_j) - y \right\|_2^2 + \tfrac{1}{20} \|f_{\hat k^{(1)}} - f_j\|_2^2 + \tfrac{\lambda}{4} \ln(1/q_j) \right]$, $\hat f^{(2)} = \tfrac{1}{2}(f_{\hat k^{(1)}} + f_{\hat k^{(2)}})$.

Theorem 1 shows that this simple stage-2 estimator achieves $O(1/n)$ regret. A similar result was shown in [1] for the STAR algorithm under the flat prior $q_j = 1/M$, which corresponds to the stage-2 estimator of the classical greedy algorithm in [4]. Theoretically, our result has several advantages over that of the classical EWMA method. First, it produces a sparse estimator, while the exponential averaging estimator is dense; second, the performance bound is scale-free in the sense that it depends only on the noise variance and not on the magnitude of $\max_j \|f_j\|$; third, the optimal bound holds with high probability, while EWMA only achieves the optimal bound on average but not with large probability; and finally, if we choose a flat prior $q_j = 1/M$, the estimator is parameter-free because we can exclude the term $\lambda \ln(1/q_j)$ from the estimators. This result also improves on the recent work of [7] in that the resulting bound is scale-free while the algorithm itself is significantly simpler.

One disadvantage of this stage-2 estimator (and similarly the STAR estimator of [1]) is that its average performance is generally inferior to that of EWMA, mainly due to the relatively large constant in Theorem 1 (the same issue holds for the STAR algorithm). For this reason, the stage-2 estimator is not a practical replacement for EWMA. This is the main reason why it is necessary to run GMA for $L > 2$ stages, which leads to reduced constants (see Theorem 2 below). Our empirical experiments show that in order to compete with EWMA in average performance, it is important to take $L > 2$. However, a relatively small $L$ (as small as $L = 5$) is often sufficient, and in that case the resulting estimator is still quite sparse.

Theorem 1. Given $q_j \ge 0$ such that $\sum_{j=1}^M q_j = 1$. If $\lambda \ge 40\sigma^2$, then with probability $1 - 2\delta$ we have

$R(\hat f^{(2)}) \le \tfrac{\lambda}{n} \left[ \tfrac{3}{4} \ln(1/q_{k^*}) + \tfrac{1}{2} \ln(1/\delta) \right]$.

While the stage-2 estimator $\hat f^{(2)}$ achieves the optimal rate, running GMA for more stages can further improve the performance. The following theorem shows that similar bounds can be obtained for GMA at stages larger than 2. However, the constant before $\tfrac{\sigma^2}{n} \ln \tfrac{1}{q_{k^*} \delta}$ approaches 8 as $\ell \to \infty$ (with the default $\nu = 0.5$), which is smaller than the constant of Theorem 1, which is about 30. This implies a potential improvement when we run more stages, and this improvement is confirmed in our empirical study.
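As a concrete illustration, here is a minimal NumPy sketch of Algorithm 1 (GMA); this is our own rendering, not the authors' code. The $\mu^{(\ell)}$ and $c^{(\ell)}$ schedules follow the algorithm box, and with the flat-prior default the $\lambda c^{(\ell)} \ln(1/q_j)$ penalty can be disabled by taking $\lambda = 0$:

```python
import numpy as np

def gma(models, y, L=5, nu=0.5, lam=0.0, q=None):
    """Greedy Model Averaging: L greedy stages, each mixing in one model.
    Returns the averaged estimate and the implied model weights."""
    M = len(models)
    F = np.asarray(models, dtype=float)          # shape (M, n)
    q = np.full(M, 1.0 / M) if q is None else np.asarray(q, dtype=float)
    f_hat = np.zeros_like(y, dtype=float)
    weights = np.zeros(M)
    for ell in range(1, L + 1):
        alpha = (ell - 1) / ell
        if ell == 1:
            mu, c = 0.0, 1.0
        elif ell == 2:
            mu, c = 0.05, 0.25
        else:
            mu = nu * (ell - 1) / ell**2
            c = 1.0 / (20 * nu * (1 - nu) * (ell - 1))
        # Q^(ell)(j), vectorized over all M candidate models
        Q = (np.sum((alpha * f_hat + (1 - alpha) * F - y) ** 2, axis=1)
             + mu * np.sum((f_hat - F) ** 2, axis=1)
             + lam * c * np.log(1.0 / q))
        k = int(np.argmin(Q))
        f_hat = alpha * f_hat + (1 - alpha) * F[k]
        weights *= alpha
        weights[k] += 1 - alpha
    return f_hat, weights
```

The output is a convex combination of at most $L$ distinct models, which is the sparsity advantage over the dense EWMA weights (2).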
In fact, with relatively large $\ell$, the GMA method not only has the theoretical advantage of achieving smaller regret in deviation (that is, the regret bound holds with high probability) but also achieves better average performance in practice.

Theorem 2 Given $q_j \ge 0$ such that $\sum_{j=1}^{M} q_j = 1$. If $\lambda \ge 40\sigma^2$ and $0 < \nu < 1$ in Algorithm 1, then with probability $1 - 2\delta$ we have
$$R(\hat f^{(\ell)}) \le \frac{\lambda}{n}\cdot\frac{(\ell-2) + \ln(\ell-1) + 30\nu(1-\nu)}{20\nu(1-\nu)\,\ell}\,\ln\frac{1}{q_{k^*}\delta}.$$

Another important advantage of running GMA for $\ell > 2$ stages is that the resulting estimator not only competes with the best single estimator $f_{k^*}$, but also competes with the best estimator in the convex hull $\mathrm{cov}(F)$ (with the parameter $\nu$ appropriately tuned). Note that the latter can be significantly better than the former. Define the convex hull of $F$ as
$$\mathrm{cov}(F) = \left\{\sum_{j=1}^{M} w_j f_j : w_j \ge 0;\ \sum_j w_j = 1\right\}.$$
The following theorem shows that as $\ell \to \infty$, the prediction error of $\hat f^{(\ell)}$ is no more than $O(1/\sqrt{n})$ worse than that of the optimal $\bar f \in \mathrm{cov}(F)$ when we choose a sufficiently small $\nu = O(1/\sqrt{n})$ in Algorithm 1. Note that in this case it is beneficial to use a parameter $\nu$ smaller than the default choice of $\nu = 0.5$. This phenomenon is also confirmed by our experiments.

Theorem 3 Given $q_j \ge 0$ such that $\sum_{j=1}^{M} q_j = 1$. Consider any $\{w_j : j = 1, \ldots, M\}$ such that $\sum_j w_j = 1$ and $w_j \ge 0$, and let $\bar f = \sum_j w_j f_j$. If $\lambda \ge 40\sigma^2$ and $0 < \nu < 1$ in Algorithm 1, then with probability $1 - 2\delta$, when $\ell \to \infty$:
$$\frac{1}{n}\big\|\hat f^{(\ell)} - g\big\|_2^2 \le \frac{1}{n}\big\|\bar f - g\big\|_2^2 + \frac{\nu}{n}\sum_k w_k \big\|f_k - \bar f\big\|_2^2 + \frac{\lambda}{20\nu(1-\nu)n}\sum_k w_k \ln\frac{1}{\delta q_k} + O\!\left(\frac{1}{\ell}\right).$$

3 Experiments

The point of these experiments is to show that the consequences of our theoretical analysis can be observed in practice, which supports the main conclusions we reach. For this purpose, we consider the model $g = Xw + 0.5\Delta g$, where $X = (f_1, \ldots, f_M)$ is an $n \times M$ matrix with independent standard Gaussian entries, and $\Delta g \sim N(0, I_{n\times n})$ implies that the model is mis-specified. The noise vector is $\xi \sim N(0, \sigma^2 I_{n\times n})$, generated independently of $X$. The coefficient vector $w = (w_1, \ldots, w_M)^\top$ is given by $w_i = |u_i| / \sum_{j=1}^{s} |u_j|$ for $i = 1, \ldots, s$, where $u_1, \ldots, u_s$ are independent standard uniform random variables, for some fixed $s$. The performance of an estimator $\hat f$ measured here is the mean squared error (MSE), defined as
$$\mathrm{MSE}(\hat f) = \frac{1}{n}\big\|\hat f - g\big\|_2^2.$$
We run the Greedy Model Averaging (GMA) algorithm for $L$ stages, up to $L = 40$. The EWMA parameter is tuned via 10-fold cross-validation. Moreover, we also list the performance of EWMA with projection, which is the method that runs EWMA but with each model $f_k$ replaced by the model $\tilde f_k = \alpha_k f_k$, where $\alpha_k = \arg\min_{\alpha\in\mathbb{R}} \|\alpha f_k - y\|_2^2$. That is, $\tilde f_k$ is the best linear scaling of $f_k$ to predict $y$. Note that this is a special case of the class of methods studied in [6] (which considers more general projections) that leads to non-progressive regret bounds, and this is a method of significant current interest [3, 8]. However, at least for the scenario considered in our paper, the projected EWMA method never improves performance in our experiments. Finally, for reference purposes, we also report the MSE of the best single model (BSM) $f_{k^*}$, where $k^*$ is given by (1). The model $f_{k^*}$ is clearly not a valid estimator because it depends on the unobserved $g$; however, its performance is informative and is thus included in the tables. For simplicity, all algorithms use the flat prior $q_k = 1/M$.

4 Illustration of Theorem 1 and Theorem 2

The first set of experiments is performed with the parameters $n = 50$, $M = 200$, $s = 1$ and $\sigma = 2$. Five hundred replications are run, and the MSE performance of the different algorithms is reported in Table 1 using the "mean ± standard deviation" format. Note that with $s = 1$, the target is $g = f_1 + 0.5\Delta g$. Since $f_1$ and $\Delta g$ are random Gaussian vectors, the best single model is likely $f_1$. The noise $\sigma = 2$ is relatively large. This is thus a situation in which model averaging does not achieve as good a performance as that of the best single model. This corresponds to the scenario considered in Theorem 1 and Theorem 2. The results indicate that for GMA, from $L = 1$ (corresponding to model selection) to $L = 2$ (stage-2 model averaging of Theorem 1), there is a significant reduction of error. The performance of GMA with $L = 2$ is comparable to that of the STAR algorithm.
This is not surprising, because STAR can be regarded as the stage-2 estimator based on the more classical greedy algorithm of [4]. We also observe that the error keeps decreasing (but at a slower pace) when $L > 2$, which is consistent with Theorem 2. It means that in order to achieve good performance, it is necessary to use more stages than $L = 2$ (although this does not change the $O(1/n)$ rate for the regret, it can significantly reduce the constant). GMA becomes better than EWMA when $L$ is as small as 5, which still gives a relatively sparse averaged model. EWMA with projection does not perform as well as the standard EWMA method in this setting. Moreover, we note that in this scenario, the standard choice of $\nu = 0.5$ in Theorem 2 is superior to choosing the smaller values $\nu = 0.1$ or $\nu = 0.01$. This again is consistent with Theorem 2, which shows that the new term we added to the greedy algorithm is indeed useful in this scenario.

5 Illustration of Theorem 3

The second set of experiments is performed with the parameters $n = 50$, $M = 200$, $s = 10$ and $\sigma = 0.5$. Five hundred replications are run, and the MSE performance of the different algorithms is reported in Table 2 using the "mean ± standard deviation" format.

Table 1: MSE of different algorithms: best single model is superior to averaged models

                STAR           EWMA          EWMA (with projection)   BSM
                0.663 ± 0.4    0.645 ± 0.5   0.744 ± 0.5              0.252 ± 0.05

    GMA         L = 1          L = 2         L = 5          L = 20         L = 40
    ν = 0.5     0.735 ± 0.74   0.689 ± 0.4   0.58 ± 0.39    0.566 ± 0.37   0.567 ± 0.38
    ν = 0.1     0.735 ± 0.74   0.689 ± 0.4   0.645 ± 0.31   0.623 ± 0.29   0.622 ± 0.29
    ν = 0.01    0.735 ± 0.74   0.689 ± 0.4   0.663 ± 0.3    0.638 ± 0.28   0.639 ± 0.28

Note that with $s = 10$, the target is $g = \bar f + 0.5\Delta g$ for some $\bar f \in \mathrm{cov}(F)$. The noise $\sigma = 0.5$ is relatively small, which makes it beneficial to compete with the best model $\bar f$ in the convex hull even though GMA has a larger regret of $O(1/\sqrt{n})$ when competing with $\bar f$.
This is thus the situation considered in Theorem 3, which means that model averaging can achieve better performance than that of the best single model. The results again show that for GMA, from $L = 1$ (corresponding to model selection) to $L = 2$ (stage-2 model averaging of Theorem 1), there is a significant reduction of error. The performance of GMA with $L = 2$ is again comparable to that of the STAR algorithm. Again we observe that even with the standard choice of $\nu = 0.5$, the error keeps decreasing (but at a slower pace) when $L > 2$, which is consistent with Theorem 2. GMA becomes better than EWMA when $L$ is as small as 5, which still gives a relatively sparse averaged model. EWMA with projection again does not perform as well as the standard EWMA method in this setting. Moreover, we note that in this scenario, the standard choice of $\nu = 0.5$ in Theorem 2 is inferior to choosing the smaller parameter values $\nu = 0.1$ or $\nu = 0.01$. This is consistent with Theorem 3, where it is beneficial to use a smaller value of $\nu$ in order to compete with the best model in the convex hull.

Table 2: MSE of different algorithms: best single model is inferior to averaged model

                STAR           EWMA           EWMA (with projection)   BSM
                0.443 ± 0.08   0.316 ± 0.087  0.364 ± 0.078            0.736 ± 0.083

    GMA         L = 1          L = 2          L = 5           L = 20          L = 40
    ν = 0.5     0.809 ± 0.12   0.456 ± 0.081  0.305 ± 0.062   0.266 ± 0.057   0.265 ± 0.057
    ν = 0.1     0.809 ± 0.12   0.456 ± 0.081  0.269 ± 0.056   0.214 ± 0.046   0.211 ± 0.045
    ν = 0.01    0.809 ± 0.12   0.456 ± 0.081  0.268 ± 0.053   0.211 ± 0.045   0.207 ± 0.045

6 Conclusion

This paper presents a new model averaging scheme which we call greedy model averaging (GMA). It is shown that the new method can achieve a regret bound of $O(\ln M / n)$ with high probability when competing with the single best model. Moreover, it can also compete with the best combined model in the convex hull. Both our theory and our experimental results suggest that the proposed GMA algorithm is superior to the standard EWMA procedure.
Due to the simplicity of our proposal, GMA may be regarded as a valid alternative to the more widely studied EWMA procedure, both for practical applications and for theoretical purposes. Finally, we point out that while this work only considers static model averaging where the model class $F$ is finite, similar results can be obtained for affine estimators or infinite models considered in recent work [3, 6, 8]. Such an extension is left to the extended report.

A Proof Sketches

We only include proof sketches, and leave the details to the supplemental material that accompanies the submission. First we need the following standard Gaussian tail bounds; the proofs can be found in the supplemental material.

Proposition 1 Let $f_j \in \mathbb{R}^n$ be a set of fixed vectors ($j = 1, \ldots, M$), and assume that $q_j \ge 0$ with $\sum_j q_j = 1$. Let $k^*$ be a fixed integer between 1 and $M$. Define the event $E_1$ as
$$E_1 = \left\{\forall j : (f_j - f_{k^*})^\top \xi \le \sigma \|f_j - f_{k^*}\|_2 \sqrt{2\ln(1/(\delta q_j))}\right\}$$
and define the event $E_2$ as
$$E_2 = \left\{\forall j, k : (f_j - f_k)^\top \xi \le \sigma \|f_j - f_k\|_2 \sqrt{2\ln(1/(\delta q_j q_k))}\right\};$$
then $P(E_1) \ge 1 - \delta$ and $P(E_2) \ge 1 - \delta$.

A.1 Proof Sketch of Theorem 1

A more detailed proof can be found in the supplemental material. Note that with probability $1 - 2\delta$, both event $E_1$ and event $E_2$ of Proposition 1 hold. Moreover we have
$$\big\|\hat f^{(2)} - g\big\|_2^2 = \big\|\alpha^{(2)}\hat f^{(1)} + (1-\alpha^{(2)})f_{\hat k(2)} - g\big\|_2^2 \le \big\|\alpha^{(2)}\hat f^{(1)} + (1-\alpha^{(2)})f_{k^*} - g\big\|_2^2 + 2(1-\alpha^{(2)})\,\xi^\top\big(f_{\hat k(2)} - f_{k^*}\big) + \mu^{(2)}\Big[\big\|\hat f^{(1)} - f_{k^*}\big\|_2^2 - \big\|\hat f^{(1)} - f_{\hat k(2)}\big\|_2^2\Big] + \lambda c^{(2)}\big(\ln(1/q_{k^*}) - \ln(1/q_{\hat k(2)})\big).$$
In the above derivation, the inequality is equivalent to $Q^{(2)}(\hat k(2)) \le Q^{(2)}(k^*)$, which is a simple consequence of the definition of $\hat k(\ell)$ in the algorithm. Also, we can rewrite the fact that $Q^{(1)}(\hat k(1)) \le Q^{(1)}(k^*)$ as
$$\big\|\hat f^{(1)} - g\big\|_2^2 - \big\|f_{k^*} - g\big\|_2^2 \le 2\xi^\top\big(f_{\hat k(1)} - f_{k^*}\big) + \lambda c^{(1)} \ln\big(q_{\hat k(1)}/q_{k^*}\big).$$
By combining the above two inequalities, we obtain
$$\big\|\hat f^{(2)} - g\big\|_2^2 - \big\|f_{k^*} - g\big\|_2^2 \le \alpha^{(2)}\Big[2\xi^\top\big(f_{\hat k(1)} - f_{k^*}\big) + \lambda c^{(1)} \ln\big(q_{\hat k(1)}/q_{k^*}\big)\Big] + 2(1-\alpha^{(2)})\,\xi^\top\big(f_{\hat k(2)} - f_{k^*}\big) + \Big[\mu^{(2)} - \alpha^{(2)}(1-\alpha^{(2)})\Big]\big\|f_{\hat k(1)} - f_{k^*}\big\|_2^2 - \mu^{(2)}\big\|f_{\hat k(1)} - f_{\hat k(2)}\big\|_2^2 + \lambda c^{(2)}\big(\ln(1/q_{k^*}) - \ln(1/q_{\hat k(2)})\big).$$
Since $\alpha^{(2)} = 1/2$, we obtain
$$\big\|\hat f^{(2)} - g\big\|_2^2 - \big\|f_{k^*} - g\big\|_2^2 \le \Big(\tfrac12\lambda c^{(1)} + \lambda c^{(2)}\Big)\ln(1/q_{k^*}) - \tfrac12\lambda c^{(1)}\ln(1/q_{\hat k(1)}) - \lambda c^{(2)}\ln(1/q_{\hat k(2)}) + 2\big\|f_{\hat k(1)} - f_{k^*}\big\|_2\,\sigma\sqrt{2\ln\tfrac{1}{q_{\hat k(1)}\delta}} + 2\cdot\tfrac12\big\|f_{\hat k(2)} - f_{\hat k(1)}\big\|_2\,\sigma\sqrt{2\ln\tfrac{1}{q_{\hat k(1)}q_{\hat k(2)}\delta}} + \big(\mu^{(2)} - 1/4\big)\big\|f_{\hat k(1)} - f_{k^*}\big\|_2^2 - \mu^{(2)}\big\|f_{\hat k(1)} - f_{\hat k(2)}\big\|_2^2 \le \Big(\tfrac12\lambda c^{(1)} + \lambda c^{(2)}\Big)\ln(1/q_{k^*}) + (2r_1 + 2r_2)\ln(1/\delta).$$
The first inequality above uses the tail probability bounds of the events $E_1$ and $E_2$. We then use the algebraic inequalities $2a_1 b_1 \le a_1^2/r_1 + r_1 b_1^2$ and $2a_2 b_2 \le a_2^2/r_2 + r_2 b_2^2$ to obtain the last inequality, which implies the desired bound.

A.2 Proof Sketch of Theorem 2

Again, a more detailed proof can be found in the supplemental material. With probability $1 - 2\delta$, both event $E_1$ and event $E_2$ of Proposition 1 hold. This implies that the claim of Theorem 1 also holds. Now consider any $\ell \ge 3$. We have
$$\big\|\hat f^{(\ell)} - g\big\|_2^2 \le \big\|\alpha^{(\ell)}\hat f^{(\ell-1)} + (1-\alpha^{(\ell)})f_{k^*} - g\big\|_2^2 + 2\xi^\top\Big[(1-\alpha^{(\ell)})f_{\hat k(\ell)} - (1-\alpha^{(\ell)})f_{k^*}\Big] + \lambda c^{(\ell)}\big(\ln(1/q_{k^*}) - \ln(1/q_{\hat k(\ell)})\big) + \mu^{(\ell)}\Big[\big\|\hat f^{(\ell-1)} - f_{k^*}\big\|_2^2 - \big\|\hat f^{(\ell-1)} - f_{\hat k(\ell)}\big\|_2^2\Big].$$
The inequality is equivalent to $Q^{(\ell)}(\hat k(\ell)) \le Q^{(\ell)}(k^*)$, which is a simple consequence of the definition of $\hat k(\ell)$ in the algorithm. We can rewrite the above inequality as
$$\big\|\hat f^{(\ell)} - g\big\|_2^2 - \big\|f_{k^*} - g\big\|_2^2 \le \alpha^{(\ell)}\Big[\big\|\hat f^{(\ell-1)} - g\big\|_2^2 - \big\|f_{k^*} - g\big\|_2^2\Big] - \lambda c^{(\ell)}\big(\ln q_{k^*} - \ln q_{\hat k(\ell)}\big) + 2(1-\alpha^{(\ell)})\,\xi^\top\big(f_{\hat k(\ell)} - f_{k^*}\big) - \mu^{(\ell)}\big\|f_{\hat k(\ell)} - \hat f^{(\ell-1)}\big\|_2^2 + \Big[\mu^{(\ell)} - \alpha^{(\ell)}(1-\alpha^{(\ell)})\Big]\big\|\hat f^{(\ell-1)} - f_{k^*}\big\|_2^2$$
$$\le \alpha^{(\ell)}\Big[\big\|\hat f^{(\ell-1)} - g\big\|_2^2 - \big\|f_{k^*} - g\big\|_2^2\Big] + \lambda c^{(\ell)}\big(\ln(1/q_{k^*}) - \ln(1/q_{\hat k(\ell)})\big) + \frac{2}{\ell}\big\|f_{\hat k(\ell)} - f_{k^*}\big\|_2\,\sigma\sqrt{2\ln\frac{1}{q_{\hat k(\ell)}\delta}} - \frac{\mu^{(\ell)}\big[\alpha^{(\ell)}(1-\alpha^{(\ell)}) - \mu^{(\ell)}\big]}{\alpha^{(\ell)}(1-\alpha^{(\ell)})}\big\|f_{\hat k(\ell)} - f_{k^*}\big\|_2^2$$
$$\le \frac{\ell-1}{\ell}\Big[\big\|\hat f^{(\ell-1)} - g\big\|_2^2 - \big\|f_{k^*} - g\big\|_2^2\Big] + \lambda c^{(\ell)}\big(\ln(1/q_{k^*}) - \ln(1/q_{\hat k(\ell)})\big) + \left[-\frac{\ell-1}{\ell^2}\nu(1-\nu) + \frac{\sigma^2}{\ell^2 r_\ell}\right]\big\|f_{\hat k(\ell)} - f_{k^*}\big\|_2^2 + 2r_\ell\ln\frac{1}{q_{\hat k(\ell)}\delta}.$$
The second inequality uses the fact that $-p\|a\|^2 - q\|b\|^2 \le -\frac{pq}{p+q}\|a+b\|^2$, which implies that
$$\Big[\mu^{(\ell)} - \alpha^{(\ell)}(1-\alpha^{(\ell)})\Big]\big\|\hat f^{(\ell-1)} - f_{k^*}\big\|_2^2 - \mu^{(\ell)}\big\|f_{\hat k(\ell)} - \hat f^{(\ell-1)}\big\|_2^2 \le -\frac{\mu^{(\ell)}\big[\alpha^{(\ell)}(1-\alpha^{(\ell)}) - \mu^{(\ell)}\big]}{\alpha^{(\ell)}(1-\alpha^{(\ell)})}\big\|f_{\hat k(\ell)} - f_{k^*}\big\|_2^2,$$
and uses the Gaussian tail bound of the event $E_1$. The last inequality uses $2ab \le a^2/r_\ell + r_\ell b^2$, where $r_\ell > 0$ is $r_\ell = \lambda c^{(\ell)}/2$. Denote $R^{(\ell)} = \|\hat f^{(\ell)} - g\|_2^2 - \|f_{k^*} - g\|_2^2$; then, with the choice of parameters $c^{(\ell)} = [20\nu(1-\nu)(\ell-1)]^{-1}$, we obtain
$$R^{(\ell)} \le \frac{\ell-1}{\ell}R^{(\ell-1)} + \lambda c^{(\ell)} \ln\big(1/(q_{k^*}\delta)\big).$$
Solving this recursion for $R^{(\ell)}$ leads to the desired bound.

A.3 Proof Sketch of Theorem 3

Again, a more detailed proof can be found in the supplemental material. Consider any $\ell \ge 3$. We have
$$\big\|\hat f^{(\ell)} - g\big\|_2^2 \le \sum_k w_k \big\|\alpha^{(\ell)}\hat f^{(\ell-1)} + (1-\alpha^{(\ell)})f_k - g\big\|_2^2 + \mu^{(\ell)}\left(\sum_k w_k \big\|\hat f^{(\ell-1)} - f_k\big\|_2^2 - \big\|\hat f^{(\ell-1)} - f_{\hat k(\ell)}\big\|_2^2\right) + \lambda c^{(\ell)}\left(\sum_k w_k \ln(1/q_k) - \ln(1/q_{\hat k(\ell)})\right) + 2\xi^\top\left[(1-\alpha^{(\ell)})f_{\hat k(\ell)} - (1-\alpha^{(\ell)})\sum_k w_k f_k\right].$$
The inequality is equivalent to $Q^{(\ell)}(\hat k(\ell)) \le \sum_k w_k Q^{(\ell)}(k)$, which is a simple consequence of the definition of $\hat k(\ell)$ in the algorithm. Denote $R^{(\ell)} = \|\hat f^{(\ell)} - g\|_2^2 - \|\bar f - g\|_2^2$; then the same derivation as that of Theorem 2 implies that
$$R^{(\ell)} \le \frac{\ell-1}{\ell}R^{(\ell-1)} + \lambda c^{(\ell)} \sum_k w_k \ln\big(1/(\delta q_k)\big) + \Big[\mu^{(\ell)} + (1-\alpha^{(\ell)})^2\Big]\sum_k w_k \big\|f_k - \bar f\big\|_2^2.$$
Now by solving the recursion, we obtain the theorem.

References

[1] Jean-Yves Audibert. Progressive mixture rules are deviation suboptimal. In NIPS'07, 2008.
[2] Olivier Catoni. Statistical Learning Theory and Stochastic Optimization. Springer-Verlag, 2004.
[3] Arnak Dalalyan and Joseph Salmon. Optimal aggregation of affine estimators. In COLT'11, 2011.
[4] L. K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Ann. Statist., 20(1):608–613, 1992.
[5] Anatoli Juditsky, Philippe Rigollet, and Alexandre Tsybakov. Learning by mirror averaging. The Annals of Statistics, 36:2183–2206, 2008.
[6] Gilbert Leung and A. R. Barron. Information theory and mixing least-squares regressions. IEEE Transactions on Information Theory, 52(8):3396–3410, Aug. 2006.
[7] Philippe Rigollet. Kullback-Leibler aggregation and misspecified generalized linear models. arXiv:0911.2919, November 2010.
[8] Philippe Rigollet and Alexandre Tsybakov. Exponential Screening and optimal rates of sparse estimation. The Annals of Statistics, 39:731–771, 2011.
[9] Yuhong Yang. Adaptive regression by mixing. Journal of the American Statistical Association, 96:574–588, 2001.
Action-Gap Phenomenon in Reinforcement Learning

Amir-massoud Farahmand∗
School of Computer Science, McGill University
Montreal, Quebec, Canada

Abstract

Many practitioners of reinforcement learning problems have observed that oftentimes the performance of the agent reaches very close to the optimal performance even though the estimated (action-)value function is still far from the optimal one. The goal of this paper is to explain and formalize this phenomenon by introducing the concept of the action-gap regularity. As a typical result, we prove that for an agent following the greedy policy $\hat\pi$ with respect to an action-value function $\hat Q$, the performance loss $\mathbb{E}\big[V^*(X) - V^{\hat\pi}(X)\big]$ is upper bounded by $O(\|\hat Q - Q^*\|_\infty^{1+\zeta})$, in which $\zeta \ge 0$ is the parameter quantifying the action-gap regularity. For $\zeta > 0$, our results indicate smaller performance loss compared to what previous analyses had suggested. Finally, we show how this regularity affects the performance of the family of approximate value iteration algorithms.

1 Introduction

This paper introduces a new type of regularity in reinforcement learning (RL) and planning problems with finite action spaces that suggests that the convergence rate of the performance loss to zero is faster than what previous analyses had indicated. The effect of this regularity, which we call the action-gap regularity, is that oftentimes the performance of the RL agent reaches very close to the optimal performance (e.g., it always solves the mountain-car problem with the optimal number of steps) even though the estimated action-value function is still far from the optimal one. Figure 1 illustrates the effect of this regularity in a simple problem. We use value iteration to solve a stochastic 1D chain walk problem (a slight modification of the example in Section 9.1 of [1]).
The behavior of the supremum of the difference between the estimate after $k$ iterations and the optimal action-value function is $O(\gamma^k)$, in which $0 \le \gamma < 1$ is the discount factor (notation is introduced in Section 2). The current theoretical results suggest that the convergence of the performance loss, which is defined as the average difference between the value of the optimal policy and the value of the greedy policy w.r.t. (with respect to) the estimated action-value function, should have the same $O(\gamma^k)$ behavior (cf. Proposition 6.1 of Bertsekas and Tsitsiklis [2]). However, the decay of the performance loss is often considerably faster, e.g., it is approximately $O(\gamma^{1.85k})$ in this example. To gain a better understanding of the action-gap regularity, focus on a single state and suppose that there are only two actions available. When the estimated action-value function has a large error, the greedy policy w.r.t. it can possibly choose the suboptimal action. However, when the error becomes smaller than (half of) the gap between the value of the optimal action and that of the other one, the selected greedy action is the optimal action. After passing this threshold, the size of the error in the estimate of the action-value function at that state does not have any effect on the quality of the selected action. The larger the gap is, the more inaccurate the estimate can be while the selected greedy action is still the optimal one.
On the other hand, if the estimated action-value function does not suggest a correct ordering of actions but the gap is negligibly small, the performance loss of not choosing the optimal action is small as well. The presence of this gap in the optimal action-value function is what we call the action-gap regularity of the problem, and the described behavior is called the action-gap phenomenon.

∗www.SoloGen.net

Figure 1: Comparison of the action-value estimation error $\|\hat Q - Q^*\|_\infty$ and the performance loss $\|V^* - V^{\hat\pi}\|_1$ ($\hat\pi$ is the greedy policy with respect to $\hat Q$) at different iterations of the value iteration algorithm. The rate of decrease for the performance loss is considerably faster than that of the estimation error. The problem is a 1D stochastic chain walk with 500 states and $\gamma = 0.95$.

Action-gap regularity is similar to the low-noise (or margin) condition in the classification literature. The low-noise condition is the assumption that the conditional probability of the class label given the input is "far" from the critical decision point. If this condition holds, a "fast" convergence rate is obtainable, as was shown by Mammen and Tsybakov [3], Tsybakov [4], Audibert and Tsybakov [5]. The low-noise condition is believed to be one reason that many high-dimensional classification problems can be solved with efficient sample complexity (cf. Rinaldo and Wasserman [6]). We borrow techniques developed in the classification literature, in particular by Audibert and Tsybakov [5], in our analysis. It is notable that there have been some works that used classification algorithms to solve reinforcement learning (e.g., Lagoudakis and Parr [7], Lazaric et al. [8]) or the related problem of apprenticeship learning (e.g., Syed and Schapire [9]).
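The phenomenon is easy to reproduce numerically. The sketch below uses a small random two-action MDP rather than the paper's chain walk (whose exact specification is in [1]); it runs exact value iteration and tracks both the sup-norm error of the iterates and the performance loss of their greedy policies:

```python
import numpy as np

def vi_gap_demo(S=30, gamma=0.95, iters=60, seed=1):
    """Exact value iteration on a random two-action MDP; returns per-iteration
    sup-norm Q-errors and mean performance losses of the greedy policies."""
    rng = np.random.default_rng(seed)
    P = rng.dirichlet(np.ones(S), size=(2, S))   # P[a, s, :] = P(.|s, a)
    r = rng.uniform(size=(2, S))                 # rewards r(s, a), layout [a, s]

    def bellman(Q):                              # dynamic-programming update
        return r + gamma * P @ Q.max(axis=0)

    Q_star = np.zeros((2, S))
    for _ in range(3000):                        # near-exact Q*
        Q_star = bellman(Q_star)
    V_star = Q_star.max(axis=0)

    def policy_value(pi):                        # V^pi = (I - gamma P^pi)^-1 r^pi
        idx = np.arange(S)
        return np.linalg.solve(np.eye(S) - gamma * P[pi, idx], r[pi, idx])

    Q, errs, losses = np.zeros((2, S)), [], []
    for _ in range(iters):
        Q = bellman(Q)
        errs.append(np.abs(Q - Q_star).max())
        losses.append(float((V_star - policy_value(Q.argmax(axis=0))).mean()))
    return np.array(errs), np.array(losses)
```

Once the greedy policy of the iterate coincides with the optimal policy, the loss drops to exactly zero, while $\|Q_k - Q^*\|_\infty$ keeps shrinking only at the $\gamma^k$ rate; this is the gap between the two curves of Figure 1.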
Nevertheless, the connection of this work to the classification literature is only in borrowing theoretical ideas from that literature, and not in using any particular algorithm. The focus of this work is indeed on value-based approaches, though one might expect that similar behavior can be observed in classification-based approaches as well. In the rest of this paper, we formalize the action-gap phenomenon and prove that whenever the MDP has a favorable action-gap regularity, a fast convergence rate is achievable. Theorem 1 upper bounds the performance loss of the greedy policy w.r.t. the estimated action-value function by a function of the $L_p$-norm of the difference between the estimated action-value function and the optimal one. Our result complements previous theoretical analyses of RL/Planning problems such as those by Antos et al. [10], Munos and Szepesvári [11], Farahmand et al. [12, 13], Maillard et al. [14], who mainly focused on the quality of the (action-)value function estimate and ignored the action-gap regularity. This synergy provides a clearer picture of what makes an RL/Planning problem easy or difficult. Finally, as an example of Theorem 1, we address the question of how the errors caused at each iteration of the Approximate Value Iteration (AVI) algorithm affect the quality of the outcome policy, and show that the AVI procedure benefits from the action-gap regularity of the problem (Theorem 2).

2 Notations

In this section, we provide a brief summary of some of the concepts and definitions from the theory of MDPs and RL. For more information, the reader is referred to Bertsekas and Tsitsiklis [2], Sutton and Barto [15], Szepesvári [16]. For a space $\Omega$ with $\sigma$-algebra $\sigma_\Omega$, we define $\mathcal{M}(\Omega)$ as the set of all probability measures over $\sigma_\Omega$. $B(\Omega)$ denotes the space of bounded measurable functions w.r.t. (with respect to) $\sigma_\Omega$, and $B(\Omega, L)$ denotes the subset of $B(\Omega)$ with bound $0 < L < \infty$.
A finite-action discounted MDP is a 5-tuple $(X, A, P, R, \gamma)$, where $X$ is a measurable state space, $A$ is a finite set of actions, $P : X \times A \to \mathcal{M}(X)$ is the transition probability kernel, $R : X \times A \to \mathbb{R}$ is the reward distribution, and $0 \le \gamma < 1$ is a discount factor. We denote $r(x,a) = \mathbb{E}[R(\cdot|x,a)]$. A measurable mapping $\pi : X \to A$ is called a deterministic Markov stationary policy, or just a policy in short. An agent following a policy $\pi$ in an MDP means that at each time step $A_t = \pi(X_t)$.

A policy $\pi$ induces two transition probability kernels $P^\pi : X \to \mathcal{M}(X)$ and $P^\pi : X \times A \to \mathcal{M}(X \times A)$. For a measurable subset $A'$ of $X$ and a measurable subset $B$ of $X \times A$, we define $(P^\pi)(A'|x) \triangleq \int P(dy|x, \pi(x))\, \mathbb{I}\{y \in A'\}$ and $(P^\pi)(B|x,a) \triangleq \int P(dy|x,a)\, \mathbb{I}\{(y, \pi(y)) \in B\}$. The $m$-step transition probability kernels $(P^\pi)^m : X \times A \to \mathcal{M}(X \times A)$ for $m = 2, 3, \cdots$ are inductively defined as $(P^\pi)^m(B|x,a) \triangleq \int_X P(dy|x,a)\,(P^\pi)^{m-1}(B|y, \pi(y))$ (similarly for $(P^\pi)^m : X \to \mathcal{M}(X)$).

Given a transition probability kernel $P : X \to \mathcal{M}(X)$, define the right-linear operator $P\cdot : B(X) \to B(X)$ by $(PV)(x) \triangleq \int_X P(dy|x)V(y)$. For a probability measure $\rho \in \mathcal{M}(X)$ and a measurable subset $A'$ of $X$, define the left-linear operator $\cdot P : \mathcal{M}(X) \to \mathcal{M}(X)$ by $(\rho P)(A') = \int \rho(dx)P(dy|x)\, \mathbb{I}\{y \in A'\}$. A typical choice of $P$ is $(P^\pi)^m : \mathcal{M}(X) \to \mathcal{M}(X)$. These operators for $P : X \times A \to \mathcal{M}(X \times A)$ are defined similarly.

The value function $V^\pi$ and the action-value function $Q^\pi$ of a policy $\pi$ are defined as follows: Let $(R_t;\ t \ge 1)$ be the sequence of rewards when the Markov chain is started from a state $X_1$ (state-action $(X_1, A_1)$ for the action-value function) drawn from a positive probability distribution over $X$ ($X \times A$) and the agent follows the policy $\pi$. Then $V^\pi(x) \triangleq \mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} R_t \,\big|\, X_1 = x\big]$ and $Q^\pi(x,a) \triangleq \mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} R_t \,\big|\, X_1 = x, A_1 = a\big]$. For a discounted MDP, we define the optimal value and optimal action-value functions by $V^*(x) = \sup_\pi V^\pi(x)$ for all states $x \in X$ and $Q^*(x,a) = \sup_\pi Q^\pi(x,a)$ for all state-action pairs $(x,a) \in X \times A$.
We say that a policy $\pi^*$ is optimal if it achieves the best values in every state, i.e., if $V^{\pi^*} = V^*$. We say that a policy $\pi$ is greedy w.r.t. an action-value function $Q$, and write $\pi = \hat\pi(\cdot; Q)$, if $\pi(x) = \operatorname{argmax}_{a\in A} Q(x,a)$ holds for all $x \in X$ (if there exist multiple maximizers, a maximizer is chosen in an arbitrary deterministic manner). Greedy policies are important because a greedy policy w.r.t. the optimal action-value function $Q^*$ is an optimal policy.

For a fixed policy $\pi$, the Bellman operators $T^\pi : B(X) \to B(X)$ and $T^\pi : B(X\times A) \to B(X\times A)$ (for the action-value functions) are defined as $(T^\pi V)(x) \triangleq r(x, \pi(x)) + \gamma \int_X V(y)\, P(dy|x, \pi(x))$ and $(T^\pi Q)(x,a) \triangleq r(x,a) + \gamma \int_X Q(y, \pi(y))\, P(dy|x,a)$. The fixed point of the Bellman operator is the (action-)value function of the policy $\pi$, i.e., $T^\pi Q^\pi = Q^\pi$ and $T^\pi V^\pi = V^\pi$. Similarly, the Bellman optimality operators $T^* : B(X) \to B(X)$ and $T^* : B(X\times A) \to B(X\times A)$ (for the action-value functions) are defined as $(T^* V)(x) \triangleq \max_a \big\{ r(x,a) + \gamma \int_{\mathbb{R}\times X} V(y)\, P(dr, dy|x,a) \big\}$ and $(T^* Q)(x,a) \triangleq r(x,a) + \gamma \int_{\mathbb{R}\times X} \max_{a'} Q(y,a')\, P(dr, dy|x,a)$. Again, these operators enjoy a fixed-point property similar to that of the Bellman operators: $T^* Q^* = Q^*$ and $T^* V^* = V^*$.

For a probability measure $\rho \in \mathcal{M}(X)$ and a measurable function $V \in B(X)$, we define the $L_p(\rho)$-norm ($1 \le p < \infty$) of $V$ as $\|V\|_{p,\rho} \triangleq \big(\int_X |V(x)|^p \, d\rho(x)\big)^{1/p}$. The $L_\infty(X)$-norm is defined as $\|V\|_\infty \triangleq \sup_{x\in X}|V(x)|$. For $\rho \in \mathcal{M}(X\times A)$ and $Q \in B(X\times A)$, we define $\|Q\|_{p,\rho}$ ($1 \le p < \infty$) by $\|Q\|_{p,\rho} \triangleq \big[\frac{1}{|A|}\sum_{a=1}^{|A|} \|Q(\cdot,a)\|_{p,\rho}^p\big]^{1/p}$ and $\|Q\|_\infty \triangleq \sup_{(x,a)\in X\times A} |Q(x,a)|$.

3 Action-Gap Theorem

In this section, we present the action-gap theorem for an MDP $(X, A, P, R, \gamma)$. To simplify the analysis, we assume that the number of actions $|A|$ is only 2. We denote by $\rho^* \in \mathcal{M}(X)$ the stationary distribution induced by $\pi^*$, and we let $\rho \in \mathcal{M}(X)$ be a user-specified evaluation distribution. This distribution indicates the relative importance of regions of the state space to the user.

Figure 2: The action-gap function $g_{Q^*}(x)$ and the relative ordering of the optimal and the estimated action-value functions for a single state $x$. Depending on the ordering of the estimates, the greedy action is the same as (✓) or different from (✗) the optimal action. This figure does not show all possible configurations.

Suppose that algorithm $\mathcal{A}$ receives a dataset $D_n = \{(X_1, A_1, R_1, X'_1), \ldots, (X_n, A_n, R_n, X'_n)\}$ (with $R_t$ drawn from $R(\cdot|X_t, A_t)$ and $X'_t$ drawn from $P(\cdot|X_t, A_t)$) and outputs $\hat Q$ as an estimate of the optimal action-value function, i.e., $\hat Q \leftarrow \mathcal{A}(D_n)$. The exact nature of this algorithm is not important, and it can be any online or offline, batch or incremental algorithm of choice, such as Q-learning, SARSA [15] and their variants [17], LSPI [1] or LARS-TD [18] in a policy iteration procedure, REG-LSPI [13], various Fitted Q-Iteration algorithms [19, 20, 12], or Linear Programming-based approaches [21, 22]. The only relevant aspect of $\hat Q$ is how well it approximates $Q^*$. We quantify the quality of the approximation by the $L_p$-norm $\|\hat Q - Q^*\|_{p,\rho^*}$ ($p \in [1, \infty]$).

The performance loss (or regret) of a policy $\pi$ is the expected difference between the value of the optimal policy $\pi^*$ and the value of $\pi$ when the initial state distribution is selected according to $\rho$, i.e.,
$$\mathrm{Loss}(\pi; \rho) \triangleq \int_X \big(V^*(x) - V^\pi(x)\big)\, d\rho(x). \tag{1}$$
The value of $\mathrm{Loss}(\hat\pi; \rho)$, in which $\hat\pi$ is the greedy policy w.r.t. $\hat Q$, is the main quantity of interest and indicates how much worse the agent following policy $\hat\pi$ would perform, on average, compared to the optimal one. The choice of $\rho$ enables the user to specify the relative importance of regions in the state space. We define the action(-value)-gap function $g_{Q^*} : X \to \mathbb{R}$ as
$$g_{Q^*}(x) \triangleq |Q^*(x,1) - Q^*(x,2)|.$$
This gap is shown in Figure 2. The following assumption quantifies the action-gap regularity.

Assumption A1 (Action-Gap).
For a fixed MDP $(X, A, P, R, \gamma)$ with $|A| = 2$, there exist constants $c_g > 0$ and $\zeta \ge 0$ such that for all $t > 0$ we have
$$P_{\rho^*}\big(0 < g_{Q^*}(X) \le t\big) \triangleq \int_X \mathbb{I}\{0 < g_{Q^*}(x) \le t\}\, d\rho^*(x) \le c_g\, t^\zeta.$$

The value of $\zeta$ controls the distribution of the action-gap $g_{Q^*}(X)$. A large value of $\zeta$ indicates that the probability of $Q^*(X,1)$ being very close to $Q^*(X,2)$ is small, and vice versa. The smallness of this probability implies that the estimated action-value function $\hat Q$ might be rather inaccurate in a large subset of the state space (measured according to $\rho^*$) but its corresponding greedy policy would still be the same as the optimal one. The case of $\zeta = 0$ and $c_g = 1$ is equivalent to not having any assumption on the action-gap. This assumption is inspired by the low-noise condition in the classification literature [5]. As an example of the typical behavior of an action-gap function, Figure 3 depicts $P_{\rho^*}(0 < g_{Q^*}(X) \le t)$ for the same 1D stochastic chain walk problem as mentioned in the Introduction. It is seen that the probability of the action-gap function $g_{Q^*}$ being close to zero is very small. Note that the specific polynomial form of the upper bound in Assumption A1 is only a modeling assumption that captures the essence of the action-gap regularity without trying to be so general as to lead to unnecessarily complicated analyses.

Figure 3: The probability distribution $P_{\rho^*}(0 < g_{Q^*}(X) \le t)$ for a 1D stochastic chain walk with 500 states and $\gamma = 0.95$. Here the probability of the action-gap being close to zero is small.

As a result of the dynamical nature of the MDP, the performance loss depends not only on the choice of $\rho$ and $\rho^*$, but also on the transition probability kernel $P$. To analyze this dependence, we define a concentrability coefficient and use a change of measure argument similar to the work of Munos [23, 24], Antos et al. [10].

Definition 1 (Concentrability of the Future-State Distribution).
Given $\rho, \rho^* \in \mathcal{M}(X)$, a policy $\pi$, and an integer $m \ge 0$, let $\rho(P^\pi)^m \in \mathcal{M}(X)$ denote the future-state distribution obtained when the first state is distributed according to $\rho$ and we then follow the policy $\pi$ for $m$ steps. Denote the supremum of the Radon-Nikodym derivative of $\rho(P^\pi)^m$ w.r.t. $\rho^*$ by $c(m; \pi)$, i.e.,
$$c(m;\pi) \triangleq \left\| \frac{d\big(\rho(P^\pi)^m\big)}{d\rho^*} \right\|_\infty.$$
If $\rho(P^\pi)^m$ is not absolutely continuous w.r.t. $\rho^*$, we set $c(m;\pi) = \infty$. The concentrability of the future-state distribution coefficient is defined as
$$C(\rho, \rho^*) \triangleq \sup_\pi \sum_{m \ge 0} \gamma^m c(m; \pi).$$

The following theorem upper bounds the performance loss $\mathrm{Loss}(\hat\pi; \rho)$ as a function of $\|Q^* - \hat Q\|_{p,\rho^*}$, the action-gap distribution, and the concentrability coefficient.

Theorem 1. Consider an MDP $(X, A, P, R, \gamma)$ with $|A| = 2$ and an estimate $\hat Q$ of the optimal action-value function. Let Assumption A1 hold and $C(\rho, \rho^*) < \infty$. Denote by $\hat\pi$ the greedy policy w.r.t. $\hat Q$. We then have
$$\mathrm{Loss}(\hat\pi; \rho) \le \begin{cases} 2^{1+\zeta}\, c_g\, C(\rho, \rho^*)\, \big\|\hat Q - Q^*\big\|_\infty^{1+\zeta}, & \\[4pt] 2^{1+\frac{p(1+\zeta)}{p+\zeta}}\, c_g^{\frac{p-1}{p+\zeta}}\, C(\rho, \rho^*)\, \big\|\hat Q - Q^*\big\|_{p,\rho^*}^{\frac{p(1+\zeta)}{p+\zeta}}. & (1 \le p < \infty) \end{cases}$$

Proof. Let the function $F : X \to \mathbb{R}$ be defined as $F(x) = V^*(x) - V^{\hat\pi}(x) = Q^{\pi^*}(x, \pi^*(x)) - Q^{\hat\pi}(x, \hat\pi(x))$ for any $x \in X$. Note that $\mathrm{Loss}(\hat\pi; \rho) = \rho F$. Decompose $F(x)$ as
$$F(x) = Q^{\pi^*}(x, \pi^*(x)) - Q^{\pi^*}(x, \hat\pi(x)) + Q^{\pi^*}(x, \hat\pi(x)) - Q^{\hat\pi}(x, \hat\pi(x)) = F_1(x) + F_2(x).$$
We have
$$F_2(x) = r(x, \hat\pi(x)) + \gamma \int_X P(dy|x, \hat\pi(x))\, Q^{\pi^*}(y, \pi^*(y)) - \left[ r(x, \hat\pi(x)) + \gamma \int_X P(dy|x, \hat\pi(x))\, Q^{\hat\pi}(y, \hat\pi(y)) \right] = \gamma P^{\hat\pi}(\cdot|x) F(\cdot).$$
Therefore, $F = (I - \gamma P^{\hat\pi})^{-1} F_1 = \sum_{m\ge 0} (\gamma P^{\hat\pi})^m F_1$. Thus,
$$\rho F = \sum_{m\ge 0} \rho (\gamma P^{\hat\pi})^m F_1 = \sum_{m\ge 0} \gamma^m \int_X \big(\rho (P^{\hat\pi})^m\big)(dy)\, F_1(y) = \sum_{m\ge 0} \gamma^m \int_X \frac{d\big(\rho (P^{\hat\pi})^m\big)}{d\rho^*}(y)\, d\rho^*(y)\, F_1(y) \le \sum_{m\ge 0} \gamma^m c(m; \hat\pi)\, \rho^* F_1 \le C(\rho, \rho^*)\, \rho^* F_1, \tag{2}$$
in which we used the Radon-Nikodym theorem and the definition of the concentrability coefficient.

Let us turn to $F_1$ and provide an upper bound for it. We use techniques similar to [5].

$L_\infty$ result: Note that for any given $x \in X$, if for some value of $\varepsilon > 0$ we have $\hat\pi(x) \ne \pi^*(x)$ and $|Q^{\pi^*}(x,a) - \hat Q(x,a)| \le \varepsilon$ (for both $a = 1, 2$), then it holds that $g_{Q^*}(x) = |Q^{\pi^*}(x,1) - Q^{\pi^*}(x,2)| \le 2\varepsilon$. To show this, suppose instead that $g_{Q^*}(x) = |Q^{\pi^*}(x,1) - Q^{\pi^*}(x,2)| > 2\varepsilon$. Then, because of the assumption $|Q^{\pi^*}(x,a) - \hat Q(x,a)| \le \varepsilon$ ($a = 1, 2$), the ordering of $\hat Q(x,1)$ and $\hat Q(x,2)$ is the same as the ordering of $Q^*(x,1)$ and $Q^*(x,2)$, which contradicts the assumption that $\hat\pi(x) \ne \pi^*(x)$ (see Figure 2).

Denote $\varepsilon_0 = \|Q^{\pi^*} - \hat Q\|_\infty$. Whenever $\hat\pi(x) = \pi^*(x)$, the value of $F_1(x)$ is zero, so we get
$$F_1(x) = \big[Q^{\pi^*}(x, \pi^*(x)) - Q^{\pi^*}(x, \hat\pi(x))\big]\big[\mathbb{I}\{\hat\pi(x) = \pi^*(x)\} + \mathbb{I}\{\hat\pi(x) \ne \pi^*(x)\}\big] = \big[Q^{\pi^*}(x, \pi^*(x)) - Q^{\pi^*}(x, 1 - \pi^*(x))\big]\,\mathbb{I}\{\hat\pi(x) \ne \pi^*(x)\} \times \big[\mathbb{I}\{g_{Q^*}(x) = 0\} + \mathbb{I}\{0 < g_{Q^*}(x) \le 2\varepsilon_0\} + \mathbb{I}\{g_{Q^*}(x) > 2\varepsilon_0\}\big] \le 0 + 2\varepsilon_0\, \mathbb{I}\{0 < g_{Q^*}(x) \le 2\varepsilon_0\} + 0.$$
Here we used the definition of $g_{Q^*}(x)$ and the fact that $g_{Q^*}(x)$ is no larger than $2\varepsilon_0$. This result together with Assumption A1 shows that
$$\rho^* F_1 \le 2\varepsilon_0\, P_{\rho^*}\big(0 < g_{Q^*}(X) \le 2\varepsilon_0\big) \le 2\varepsilon_0\, c_g\, (2\varepsilon_0)^\zeta.$$
Plugging this result into (2) finishes the proof of the first part.

$L_p$ result: For any given $x \in X$, let $D(x) = |Q^{\pi^*}(x,1) - \hat Q(x,1)| + |Q^{\pi^*}(x,2) - \hat Q(x,2)|$. Whenever $\hat\pi(x) \ne \pi^*(x)$, we have $g_{Q^*}(x) \le D(x)$.
Similar to the previous case, we have
$$F_1(x) = \big[Q^{\pi^*}(x,\pi^*(x)) - Q^{\pi^*}(x,1-\pi^*(x))\big]\, \mathbb{I}\{\hat\pi(x)\ne\pi^*(x)\} \times \big[\mathbb{I}\{g_{Q^*}(x)=0\} + \mathbb{I}\{0<g_{Q^*}(x)\le t\} + \mathbb{I}\{g_{Q^*}(x)>t\}\big] \le D(x)\,\big[\mathbb{I}\{0<g_{Q^*}(x)\le t\} + \mathbb{I}\{g_{Q^*}(x)>t\}\big].$$
Take the expectation w.r.t. $\rho^*$ and use Hölder's inequality to get
$$\rho^* F_1 \le \|D\|_{p,\rho^*}\big[\mathbb{P}_{\rho^*}(0<g_{Q^*}(X)\le t)\big]^{\frac{p-1}{p}} + \|D\|_{p,\rho^*}\big[\mathbb{P}_{\rho^*}(g_{Q^*}(X)>t)\big]^{\frac{p-1}{p}} \le \|D\|_{p,\rho^*}\big(c_g t^\zeta\big)^{\frac{p-1}{p}} + \|D\|_{p,\rho^*}\big[\mathbb{P}_{\rho^*}(D(X)>t)\big]^{\frac{p-1}{p}} \le \|D\|_{p,\rho^*}\big(c_g t^\zeta\big)^{\frac{p-1}{p}} + \frac{\|D\|_{p,\rho^*}^p}{t^{p-1}},$$
where we used Assumption A1 and the definition of $D(\cdot)$ in the second inequality, and Markov's inequality in the last one. Minimizing the upper bound over $t$ gives $t = c_g^{-\frac{1}{p+\zeta}}\, \|D\|_{p,\rho^*}^{\frac{p}{p+\zeta}}$. This leads to
$$\rho^* F_1 \le 2\, c_g^{\frac{p-1}{p+\zeta}}\, \|D\|_{p,\rho^*}^{\frac{p(1+\zeta)}{p+\zeta}},$$
which, alongside inequality (2) and $\|D\|_{p,\rho^*}^p \le 2^p \|Q^{\pi^*} - \hat{Q}\|_{p,\rho^*}^p$, proves the second part of this result.

This theorem indicates that if $\|\hat{Q}-Q^*\|_p$ ($1 < p \le \infty$) has an error upper bound of $O(n^{-\beta})$ (with $\beta$ typically in the range $(0,1/2]$, depending on the properties of the MDP and the estimator), we obtain faster convergence upper bounds on the performance loss $\mathrm{Loss}(\hat\pi;\rho)$ whenever the problem has an action-gap regularity ($\zeta > 0$).

One might compare Theorem 1 with classical upper bounds such as $\|V^{\hat\pi} - V^{\pi^*}\|_\infty \le \frac{2\gamma}{1-\gamma}\|\hat{V} - V^*\|_\infty$ (Proposition 6.1 of Bertsekas and Tsitsiklis [2]). In order to make these two bounds comparable, we slightly modify the proof of our theorem to get the $L_\infty$-norm on the left-hand side. The result would be
$$\|V^* - V^{\hat\pi}\|_\infty \le \frac{2^{1+\zeta} c_g}{1-\gamma}\, \|\hat{Q} - Q^*\|_\infty^{1+\zeta}.$$
If there is no action-gap assumption ($\zeta = 0$ and $c_g = 1$), the results are similar (except for a factor of $\gamma$, and that we measure the error by the difference in the action-value function as opposed to the value function), but when $\zeta > 0$ the error bound significantly improves.

4 Application of the Action-Gap Theorem in Approximate Value Iteration

The goal of this section is to show how an analysis based on the action-gap phenomenon can lead to a tighter upper bound on the performance loss for the family of AVI algorithms.
There are various AVI algorithms (Riedmiller [19], Ernst et al. [20], Munos and Szepesvári [11], Farahmand et al. [12]) that work by generating a sequence of action-value function estimates $(\hat{Q}_k)_{k=0}^{K}$, in which $\hat{Q}_{k+1}$ is the result of approximately applying the Bellman optimality operator to the previous estimate $\hat{Q}_k$, i.e., $\hat{Q}_{k+1} \approx T^* \hat{Q}_k$. Let us denote the error incurred at each iteration by
$$\varepsilon_k \triangleq T^* \hat{Q}_k - \hat{Q}_{k+1}. \qquad (3)$$
The following theorem, which is based on Theorem 3 of Farahmand et al. [25], relates the performance loss $\|Q^{\hat\pi(\cdot;\hat{Q}_K)} - Q^*\|_{1,\rho}$ of the obtained greedy policy $\hat\pi(\cdot;\hat{Q}_K)$ to the error sequence $(\varepsilon_k)_{k=0}^{K-1}$ and the action-gap assumption on the MDP. Before stating the theorem, we define the following sequence:
$$\alpha_k = \begin{cases} \dfrac{(1-\gamma)\,\gamma^{K-k-1}}{1-\gamma^{K+1}} & 0 \le k < K, \\[2mm] \dfrac{(1-\gamma)\,\gamma^{K}}{1-\gamma^{K+1}} & k = K. \end{cases}$$
This sequence behaves as $\alpha_k \propto \gamma^{K-k}$ and satisfies $\sum_{k=0}^{K} \alpha_k = 1$.

Theorem 2 (Error Propagation for AVI). Consider an MDP $(\mathcal{X}, \mathcal{A}, P, R, \gamma)$ with $|\mathcal{A}| = 2$ that satisfies Assumption A1 and has $C(\rho,\rho^*) < \infty$. Let $p \ge 1$ be a real number and $K$ a positive integer. Then for any sequence $(\hat{Q}_k)_{k=0}^{K} \subset B(\mathcal{X}\times\mathcal{A}, Q_{\max})$ and the corresponding sequence $(\varepsilon_k)_{k=0}^{K-1}$ defined in (3), we have
$$\mathrm{Loss}(\hat\pi(\cdot;\hat{Q}_K);\rho) \le 2 \left(\frac{2}{1-\gamma}\right)^{\frac{p(1+\zeta)}{p+\zeta}} c_g^{\frac{p-1}{p+\zeta}}\, C(\rho,\rho^*) \left[ \sum_{k=0}^{K-1} \alpha_k \|\varepsilon_k\|_{p,\rho^*}^p + \alpha_K (2Q_{\max})^p \right]^{\frac{1+\zeta}{p+\zeta}}.$$

Proof. Similar to Lemma 4.1 of Munos [24], one may derive
$$Q^* - \hat{Q}_{k+1} = T^{\pi^*}Q^* - T^{\pi^*}\hat{Q}_k + T^{\pi^*}\hat{Q}_k - T^*\hat{Q}_k + \varepsilon_k \le \gamma P^{\pi^*}(Q^* - \hat{Q}_k) + \varepsilon_k,$$
where we used the property $T^*\hat{Q}_k \ge T^{\pi^*}\hat{Q}_k$ of the Bellman optimality operator and the definition (3) of $\varepsilon_k$. By induction, we get
$$Q^* - \hat{Q}_K \le \sum_{k=0}^{K-1} \gamma^{K-k-1} (P^{\pi^*})^{K-k-1} \varepsilon_k + \gamma^K (P^{\pi^*})^K (Q^* - \hat{Q}_0).$$
Therefore, for any $p \ge 1$, the value of $\|Q^* - \hat{Q}_K\|_{p,\rho^*}^p = \rho^* |Q^* - \hat{Q}_K|^p$ is upper bounded by
$$\rho^*|Q^* - \hat{Q}_K|^p \le \left(\frac{1-\gamma^{K+1}}{1-\gamma}\right)^p \left[ \sum_{k=0}^{K-1} \alpha_k\, \rho^*(P^{\pi^*})^{K-k-1} |\varepsilon_k| + \alpha_K\, \rho^*(P^{\pi^*})^K |Q^* - \hat{Q}_0| \right]^p \le \left(\frac{1-\gamma^{K+1}}{1-\gamma}\right)^p \left[ \sum_{k=0}^{K-1} \alpha_k \|\varepsilon_k\|_{p,\rho^*}^p + \alpha_K (2Q_{\max})^p \right],$$
where we used $\rho^*(P^{\pi^*})^m = \rho^*$ (for any $m \ge 0$) and Jensen's inequality. The application of Theorem 1 and noting that $(1-\gamma^{K+1})/(1-\gamma) \le 1/(1-\gamma)$ lead to the desired result.
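As a quick numerical sanity check on the weight sequence used above (an illustrative sketch, not code from the paper), the $\alpha_k$ defined before Theorem 2 can be computed directly; they form a probability distribution that places geometrically more weight on errors from later iterations:

```python
# Weights alpha_k from the AVI error-propagation bound (Theorem 2):
# alpha_k = (1 - gamma) / (1 - gamma^(K+1)) * gamma^(K-k-1) for 0 <= k < K,
# alpha_K = (1 - gamma) / (1 - gamma^(K+1)) * gamma^K.
def avi_weights(gamma: float, K: int) -> list:
    """Return [alpha_0, ..., alpha_K]; alpha_k grows like gamma^(K-k)."""
    c = (1.0 - gamma) / (1.0 - gamma ** (K + 1))
    return [c * gamma ** (K - k - 1) for k in range(K)] + [c * gamma ** K]

alphas = avi_weights(gamma=0.9, K=10)
assert abs(sum(alphas) - 1.0) < 1e-12  # the alpha_k sum to one
# Errors at later iterations are discounted less, so they matter more:
assert all(alphas[k] < alphas[k + 1] for k in range(len(alphas) - 2))
assert alphas[-1] < alphas[-2]  # the final term carries an extra gamma^K
```

The values gamma = 0.9 and K = 10 are arbitrary illustrative choices.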
Comparing this theorem with Theorem 3 of Farahmand et al. [25] is instructive. Denoting $E = \sum_{k=0}^{K-1} \alpha_k \|\varepsilon_k\|_{2,\rho^*}^2$, this paper's result indicates that the effect of the size of $\varepsilon_k$ on $\mathrm{Loss}(\hat\pi(\cdot;\hat{Q}_K);\rho)$ depends on $E^{\frac{1+\zeta}{2+\zeta}}$, while [25], which does not consider the action-gap regularity, suggests that the effect depends on $E^{1/2}$. For $\zeta > 0$, this indicates a faster convergence rate for the performance loss, while for $\zeta = 0$ the two are the same.

5 Conclusion

This work introduced the action-gap regularity in reinforcement learning and planning problems and analyzed the action-gap phenomenon for two-action discounted MDPs. We showed that when the problem has a favorable action-gap regularity, quantified by the parameter $\zeta$, the performance loss is much smaller than the error of the estimated optimal action-value function. The action-gap regularity, among other regularities such as the smoothness of the action-value function [13], is a step toward a better understanding of which properties of a sequential decision-making problem make learning and planning easy or difficult.

Several issues deserve further study. Among them is the extension of the current framework to multi-action discounted MDPs. It is also important to study the relation between the parameter $\zeta$ of the action-gap regularity assumption and properties of the MDP such as the transition probability kernel and the reward distribution.

Acknowledgments

I thank the anonymous reviewers for their useful comments. This work was partly supported by AICML and NSERC.

References

[1] Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107-1149, 2003.
[2] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3). Athena Scientific, 1996.
[3] Enno Mammen and Alexander B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808-1829, 1999.
[4] Alexander B. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135-166, 2004.
[5] Jean-Yves Audibert and Alexander B. Tsybakov. Fast learning rates for plug-in classifiers. The Annals of Statistics, 35(2):608-633, 2007.
[6] Alessandro Rinaldo and Larry Wasserman. Generalized density clustering. The Annals of Statistics, 38(5):2678-2722, 2010.
[7] Michail G. Lagoudakis and Ronald Parr. Reinforcement learning as classification: Leveraging modern classifiers. In ICML '03: Proceedings of the 20th International Conference on Machine Learning, pages 424-431, 2003.
[8] Alessandro Lazaric, Mohammad Ghavamzadeh, and Rémi Munos. Analysis of a classification-based policy iteration algorithm. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 607-614. Omnipress, 2010.
[9] Umar Syed and Robert E. Schapire. A reduction from apprenticeship learning to classification. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems (NIPS-23), pages 2253-2261, 2010.
[10] András Antos, Csaba Szepesvári, and Rémi Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71:89-129, 2008.
[11] Rémi Munos and Csaba Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9:815-857, 2008.
[12] Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian Decision Problems. In Proceedings of the American Control Conference (ACC), pages 725-730, June 2009.
[13] Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor. Regularized policy iteration. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems (NIPS-21), pages 441-448. MIT Press, 2009.
[14] Odalric Maillard, Rémi Munos, Alessandro Lazaric, and Mohammad Ghavamzadeh. Finite-sample analysis of Bellman residual minimization. In Proceedings of the Second Asian Conference on Machine Learning (ACML), 2010.
[15] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning). The MIT Press, 1998.
[16] Csaba Szepesvári. Algorithms for Reinforcement Learning. Morgan & Claypool Publishers, 2010.
[17] Hamid Reza Maei, Csaba Szepesvári, Shalabh Bhatnagar, and Richard S. Sutton. Toward off-policy learning control with function approximation. In Johannes Fürnkranz and Thorsten Joachims, editors, Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 719-726, Haifa, Israel, June 2010. Omnipress.
[18] J. Zico Kolter and Andrew Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 521-528. ACM, 2009.
[19] Martin Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In 16th European Conference on Machine Learning, pages 317-328, 2005.
[20] Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503-556, 2005.
[21] Daniela Pucci de Farias and Benjamin Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51(6):850-865, 2003.
[22] Marek Petrik and Shlomo Zilberstein. Constraint relaxation in approximate linear programs. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 809-816, New York, NY, USA, 2009. ACM.
[23] Rémi Munos. Error bounds for approximate policy iteration. In ICML 2003: Proceedings of the 20th Annual International Conference on Machine Learning, pages 560-567, 2003.
[24] Rémi Munos. Performance bounds in Lp norm for approximate value iteration. SIAM Journal on Control and Optimization, pages 541-561, 2007.
[25] Amir-massoud Farahmand, Rémi Munos, and Csaba Szepesvári. Error propagation for approximate policy and value iteration. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems (NIPS-23), pages 568-576, 2010.
The Doubly Correlated Nonparametric Topic Model

Dae Il Kim and Erik B. Sudderth
Department of Computer Science
Brown University, Providence, RI 02906
daeil@cs.brown.edu, sudderth@cs.brown.edu

Abstract

Topic models are learned via a statistical model of variation within document collections, but designed to extract meaningful semantic structure. Desirable traits include the ability to incorporate annotations or metadata associated with documents; the discovery of correlated patterns of topic usage; and the avoidance of parametric assumptions, such as manual specification of the number of topics. We propose a doubly correlated nonparametric topic (DCNT) model, the first model to simultaneously capture all three of these properties. The DCNT models metadata via a flexible, Gaussian regression on arbitrary input features; correlations via a scalable square-root covariance representation; and nonparametric selection from an unbounded series of potential topics via a stick-breaking construction. We validate the semantic structure and predictive performance of the DCNT using a corpus of NIPS documents annotated by various metadata.

1 Introduction

The contemporary problem of exploring huge collections of discrete data, from biological sequences to text documents, has prompted the development of increasingly sophisticated statistical models. Probabilistic topic models represent documents via a mixture of topics, which are themselves distributions on the discrete vocabulary of the corpus. Latent Dirichlet allocation (LDA) [3] was the first hierarchical Bayesian topic model, and remains influential and widely used. However, it suffers from three key limitations which are jointly addressed by our proposed model.

The first assumption springs from LDA's Dirichlet prior, which implicitly neglects correlations¹ in document-specific topic usage.
In diverse corpora, true semantic topics may exhibit strong (positive or negative) correlations; neglecting these dependencies may distort the inferred topic structure. The correlated topic model (CTM) [2] uses a logistic-normal prior to express correlations via a latent Gaussian distribution. However, its usage of a "soft-max" (multinomial logistic) transformation requires a global normalization, which in turn presumes a fixed, finite number of topics.

The second assumption is that each document is represented solely by an unordered "bag of words". However, text data is often accompanied by a rich set of metadata such as author names, publication dates, relevant keywords, etc. Topics that are consistent with such metadata may also be more semantically relevant. The Dirichlet multinomial regression (DMR) [11] model conditions LDA's Dirichlet parameters on feature-dependent linear regressions; this allows metadata-specific topic frequencies but retains other limitations of the Dirichlet. Recently, the Gaussian process topic model [1] incorporated correlations at the topic level via a topic covariance, and at the document level via an appropriate GP kernel function. This model remains parametric in its treatment of the number of topics, and computational scaling to large datasets is challenging since learning scales superlinearly with the number of documents.

¹One can exactly sample from a Dirichlet distribution by drawing a vector of independent gamma random variables, and normalizing so they sum to one. This normalization induces slight negative correlations.

The third assumption is the a priori choice of the number of topics. The most direct nonparametric extension of LDA is the hierarchical Dirichlet process (HDP) [17]. The HDP allows an unbounded set of topics via a latent stochastic process, but nevertheless imposes a Dirichlet distribution on any finite subset of these topics.
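The footnote's observation about Dirichlet sampling is easy to check empirically. The sketch below (illustrative code, not from the paper) draws Dirichlet samples by normalizing independent gamma variables and estimates the correlation between two coordinates, which comes out slightly negative (for a symmetric Dirichlet the exact value is -1/(K-1)):

```python
import random

# Sample a symmetric Dirichlet by normalizing independent Gamma(alpha, 1)
# draws, then estimate the correlation between two coordinates.
def dirichlet_sample(alpha: float, K: int) -> list:
    gammas = [random.gammavariate(alpha, 1.0) for _ in range(K)]
    total = sum(gammas)
    return [g / total for g in gammas]

def coord_correlation(alpha: float, K: int, n: int) -> float:
    """Monte Carlo estimate of Corr(x_0, x_1) under Dirichlet(alpha,...,alpha)."""
    samples = [dirichlet_sample(alpha, K) for _ in range(n)]
    x = [s[0] for s in samples]
    y = [s[1] for s in samples]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

random.seed(0)
assert abs(sum(dirichlet_sample(1.0, 5)) - 1.0) < 1e-12  # valid probability vector
assert coord_correlation(1.0, K=5, n=20000) < 0          # slightly negative, as claimed
```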
Alternatively, the nonparametric Bayes pachinko allocation [9] model captures correlations within an unbounded topic collection via an inferred, directed acyclic graph. More recently, the discrete infinite logistic normal [13] (DILN) model of topic correlations used an exponentiated Gaussian process (GP) to rescale the HDP. This construction is based on the gamma process representation of the DP [5]. While our goals are similar, we propose a rather different model based on the stick-breaking representation of the DP [16]. This choice leads to arguably simpler learning algorithms, and also facilitates our modeling of document metadata.

In this paper, we develop a doubly correlated nonparametric topic (DCNT) model which captures between-topic correlations, as well as between-document correlations induced by metadata, for an unbounded set of potential topics. As described in Sec. 2, the global soft-max transformation of the DMR and CTM is replaced by a stick-breaking transformation, with inputs determined via both metadata-dependent linear regressions and a square-root covariance representation. Together, these choices lead to a well-posed nonparametric model which allows tractable MCMC learning and inference (Sec. 3). In Sec. 4, we validate the model using a toy dataset, as well as a corpus of NIPS documents annotated by author and year of publication.

2 A Doubly Correlated Nonparametric Topic Model

The DCNT is a hierarchical, Bayesian nonparametric generalization of LDA. Here we give an overview of the model structure (see Fig. 1), focusing on our three key innovations.

2.1 Document Metadata

Consider a collection of D documents. Let $\phi_d \in \mathbb{R}^F$ denote a feature vector capturing the metadata associated with document d, and $\phi$ an $F \times D$ matrix of corpus metadata. When metadata is unavailable, we assume $\phi_d = 1$.
For each of an unbounded sequence of topics k, let $\eta_{fk} \in \mathbb{R}$ denote an associated significance weight for feature f, and $\eta_{:k} \in \mathbb{R}^F$ a vector of these weights.² We place a Gaussian prior $\eta_{:k} \sim N(\mu, \Lambda^{-1})$ on each topic's weights, where $\mu \in \mathbb{R}^F$ is a vector of mean feature responses, and $\Lambda$ is an $F \times F$ diagonal precision matrix. In a hierarchical Bayesian fashion [6], these parameters have priors $\mu_f \sim N(0, \gamma_\mu)$, $\lambda_f \sim \mathrm{Gam}(a_f, b_f)$. Appropriate values for the hyperparameters $\gamma_\mu$, $a_f$, and $b_f$ are discussed later.

Given $\eta$ and $\phi_d$, the document-specific "score" for topic k is sampled as $u_{kd} \sim N(\eta_{:k}^T \phi_d, 1)$. These real-valued scores are mapped to document-specific topic frequencies $\pi_{kd}$ in subsequent sections.

2.2 Topic Correlations

For topic k in the ordered sequence of topics, we define a sequence of k linear transformation weights $A_{k\ell}$, $\ell = 1, \ldots, k$. We then sample a variable $v_{kd}$ as follows:
$$v_{kd} \sim N\Big(\sum_{\ell=1}^{k} A_{k\ell}\, u_{\ell d},\; \lambda_v^{-1}\Big). \qquad (1)$$
Let A denote a lower triangular matrix containing these values $A_{k\ell}$, padded by zeros. Slightly abusing notation, we can then compactly write this transformation as $v_{:d} \sim N(A u_{:d}, L^{-1})$, where $L = \lambda_v I$ is an infinite diagonal precision matrix. Critically, note that the distribution of $v_{kd}$ depends only on the first k entries of $u_{:d}$, not the infinite tail of scores for subsequent topics. Marginalizing $u_{:d}$, the covariance of $v_{:d}$ equals $\mathrm{Cov}[v_{:d}] = A A^T + L^{-1} \triangleq \Sigma$. As in the classical factor analysis model, A encodes a square-root representation of an output covariance matrix. Our integration of input metadata has close connections to the semiparametric latent factor model [18], but we replace their kernel-based GP covariance representation with a feature-based regression.

²For any matrix $\eta$, we let $\eta_{:k}$ denote a column vector indexed by k, and $\eta_{f:}$ a row vector indexed by f.

[Figure 1: Directed graphical representation of a DCNT model for D documents containing N words. Each of the unbounded set of topics has a word distribution $\Omega_k$. The topic assignment $z_{dn}$ for word $w_{dn}$ depends on document-specific topic frequencies $\pi_d$, which have a correlated dependence on the metadata $\phi_d$ produced by A and $\eta$. The Gaussian latent variables $u_d$ and $v_d$ implement this mapping, and simplify MCMC methods.]

Given similar lower triangular representations of factorized covariance matrices, conventional Bayesian factor analysis models place a symmetric Gaussian prior $A_{k\ell} \sim N(0, \lambda_A^{-1})$. Under this prior, however, $E[\Sigma_{kk}] = k \lambda_A^{-1}$ grows linearly with k. This can produce artifacts for standard factor analysis [10], and is disastrous for the DCNT, where k is unbounded. We instead propose an alternative prior $A_{k\ell} \sim N(0, (k\lambda_A)^{-1})$, so that the variance of entries in the kth row is reduced by a factor of k. This shrinkage is carefully chosen so that $E[\Sigma_{kk}] = \lambda_A^{-1}$ remains constant. If we constrain A to be a diagonal matrix, with $A_{kk} \sim N(0, \lambda_A^{-1})$ and $A_{k\ell} = 0$ for $k \ne \ell$, we recover a simplified singly correlated nonparametric topic (SCNT) model which captures metadata but not topic correlations. For either model, the precision parameters are assigned conjugate gamma priors $\lambda_v \sim \mathrm{Gam}(a_v, b_v)$, $\lambda_A \sim \mathrm{Gam}(a_A, b_A)$.

2.3 Logistic Mapping to Stick-Breaking Topic Frequencies

Stick-breaking representations are widely used in applications of nonparametric Bayesian models, and lead to convenient sampling algorithms [8]. Let $\pi_{kd}$ be the probability of choosing topic k in document d, where $\sum_{k=1}^{\infty} \pi_{kd} = 1$. The DCNT constructs these probabilities as follows:
$$\pi_{kd} = \psi(v_{kd}) \prod_{\ell=1}^{k-1} \psi(-v_{\ell d}), \qquad \psi(v_{kd}) = \frac{1}{1 + \exp(-v_{kd})}. \qquad (2)$$
Here, $0 < \psi(v_{kd}) < 1$ is the classic logistic function, which satisfies $\psi(-v_{\ell d}) = 1 - \psi(v_{\ell d})$. This same transformation is part of the so-called logistic stick-breaking process [14], but that model is motivated by different applications, and thus employs a very different prior distribution for $v_{kd}$. Given the distribution $\pi_{:d}$, the topic assignment indicator for word n in document d is drawn according to $z_{dn} \sim \mathrm{Mult}(\pi_{:d})$.
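The stick-breaking map of Eq. (2) is easy to state in code. The sketch below (an illustration under the finite truncation used later in Sec. 3, not the authors' implementation) confirms that the resulting topic probabilities form a valid distribution:

```python
import math

def logistic(v: float) -> float:
    return 1.0 / (1.0 + math.exp(-v))

def stick_breaking_probs(v: list) -> list:
    """Map K-1 real activations v_1..v_{K-1} to K topic probabilities.

    pi_k = psi(v_k) * prod_{l<k} psi(-v_l); under the truncation, the final
    topic takes the leftover stick mass prod_k psi(-v_k), so the K
    probabilities sum to one.
    """
    probs, remaining = [], 1.0
    for vk in v:
        psi_v = logistic(vk)
        probs.append(psi_v * remaining)
        remaining *= 1.0 - psi_v   # psi(-v) = 1 - psi(v)
    probs.append(remaining)        # pi_K = prod_k psi(-v_k)
    return probs

pi = stick_breaking_probs([0.5, -1.0, 2.0, 0.0])
assert len(pi) == 5
assert all(0.0 < p < 1.0 for p in pi)
assert abs(sum(pi) - 1.0) < 1e-12
```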
Finally, $w_{dn} \sim \mathrm{Mult}(\Omega_{z_{dn}})$, where $\Omega_k \sim \mathrm{Dir}(\beta)$ is the word distribution for topic k, sampled from a Dirichlet prior with symmetric hyperparameters $\beta$.

3 Monte Carlo Learning and Inference

We use a Markov chain Monte Carlo (MCMC) method to approximately sample from the posterior distribution of the DCNT. For most parameters, our choice of conditionally conjugate priors leads to closed-form Gibbs sampling updates. Due to the logistic stick-breaking transformation, closed-form resampling of v is intractable; we instead use a Metropolis independence sampler [6].

Our sampler is based on a finite truncation of the full DCNT model, which has proven useful with other stick-breaking priors [8, 14, 15]. Let K be the maximum number of topics. As our experiments demonstrate, K is not the number of topics that will be utilized by the learned model, but rather a (possibly loose) upper bound on that number. For notational convenience, let $\bar{K} = K - 1$.

Under the truncated model, $\eta$ is an $F \times \bar{K}$ matrix of regression coefficients, and u is a $\bar{K} \times D$ matrix satisfying $u_{:d} \sim N(\eta^T \phi_d, I_{\bar{K}})$. Similarly, A is a $\bar{K} \times \bar{K}$ lower triangular matrix, and $v_{:d} \sim N(A u_{:d}, \lambda_v^{-1} I_{\bar{K}})$. The probabilities $\pi_{kd}$ for the first $\bar{K}$ topics are set as in Eq. (2), with the final topic set so that a valid distribution is ensured: $\pi_{Kd} = 1 - \sum_{k=1}^{K-1} \pi_{kd} = \prod_{k=1}^{K-1} \psi(-v_{kd})$.

3.1 Gibbs Updates for Topic Assignments, Correlation Parameters, and Hyperparameters

The precision parameter $\lambda_f$ controls the variability of the feature weights associated with each topic. As in many regression models, the gamma prior is conjugate, so that
$$p(\lambda_f \mid \eta, a_f, b_f) \propto \mathrm{Gam}(\lambda_f \mid a_f, b_f) \prod_{k=1}^{\bar{K}} N(\eta_{fk} \mid \mu_f, \lambda_f^{-1}) \propto \mathrm{Gam}\Big(\lambda_f \,\Big|\, \tfrac{1}{2}\bar{K} + a_f,\; \tfrac{1}{2}\sum_{k=1}^{\bar{K}} (\eta_{fk} - \mu_f)^2 + b_f\Big). \qquad (3)$$
Similarly, the precision parameter $\lambda_v$ has a gamma prior and posterior:
$$p(\lambda_v \mid v, a_v, b_v) \propto \mathrm{Gam}(\lambda_v \mid a_v, b_v) \prod_{d=1}^{D} N(v_{:d} \mid A u_{:d}, L^{-1}) \propto \mathrm{Gam}\Big(\lambda_v \,\Big|\, \tfrac{1}{2}\bar{K} D + a_v,\; \tfrac{1}{2}\sum_{d=1}^{D} (v_{:d} - A u_{:d})^T (v_{:d} - A u_{:d}) + b_v\Big). \qquad (4)$$
Entries of the regression matrix A have a rescaled Gaussian prior $A_{k\ell} \sim N(0, (k\lambda_A)^{-1})$. With a gamma prior, the precision parameter $\lambda_A$ nevertheless has the following gamma posterior:
$$p(\lambda_A \mid A, a_A, b_A) \propto \mathrm{Gam}(\lambda_A \mid a_A, b_A) \prod_{k=1}^{\bar{K}} \prod_{\ell=1}^{k} N(A_{k\ell} \mid 0, (k\lambda_A)^{-1}) \propto \mathrm{Gam}\Big(\lambda_A \,\Big|\, \tfrac{1}{2}\bar{K}(\bar{K}-1) + a_A,\; \tfrac{1}{2}\sum_{k=1}^{\bar{K}} \sum_{\ell=1}^{k} k A_{k\ell}^2 + b_A\Big). \qquad (5)$$
Conditioning on the feature regression weights $\eta$, the mean weight $\mu_f$ in our hierarchical prior for each feature f has a Gaussian posterior:
$$p(\mu_f \mid \eta) \propto N(\mu_f \mid 0, \gamma_\mu) \prod_{k=1}^{\bar{K}} N(\eta_{fk} \mid \mu_f, \lambda_f^{-1}) \propto N\Big(\mu_f \,\Big|\, \frac{\gamma_\mu}{\bar{K}\gamma_\mu + \lambda_f^{-1}} \sum_{k=1}^{\bar{K}} \eta_{fk},\; (\gamma_\mu^{-1} + \bar{K}\lambda_f)^{-1}\Big). \qquad (6)$$
To sample $\eta_{:k}$, the linear function relating metadata to topic k, we condition on all documents $u_{k:}$ as well as $\phi$, $\mu$, and $\Lambda$. Columns of $\eta$ are conditionally independent, with Gaussian posteriors:
$$p(\eta_{:k} \mid u, \phi, \mu, \Lambda) \propto N(\eta_{:k} \mid \mu, \Lambda^{-1})\, N(u_{k:}^T \mid \phi^T \eta_{:k}, I_D) \propto N\big(\eta_{:k} \mid (\Lambda + \phi\phi^T)^{-1} (\phi u_{k:}^T + \Lambda\mu),\; (\Lambda + \phi\phi^T)^{-1}\big). \qquad (7)$$
Similarly, the scores $u_{:d}$ for each document are conditionally independent, with Gaussian posteriors:
$$p(u_{:d} \mid v_{:d}, \eta, \phi_d, L) \propto N(u_{:d} \mid \eta^T \phi_d, I_{\bar{K}})\, N(v_{:d} \mid A u_{:d}, L^{-1}) \propto N\big(u_{:d} \mid (I_{\bar{K}} + A^T L A)^{-1} (A^T L v_{:d} + \eta^T \phi_d),\; (I_{\bar{K}} + A^T L A)^{-1}\big). \qquad (8)$$
To resample A, we note that its rows are conditionally independent. The posterior of the k entries $A_{k:}$ in row k depends on $v_{k:}$ and $\hat{U}_k \triangleq u_{1:k,:}$, the first k entries of $u_{:d}$ for each document d:
$$p(A_{k:}^T \mid v_{k:}, \hat{U}_k, \lambda_A, \lambda_v) \propto \prod_{j=1}^{k} N(A_{kj} \mid 0, (k\lambda_A)^{-1})\, N(v_{k:}^T \mid \hat{U}_k^T A_{k:}^T, \lambda_v^{-1} I_D) \propto N\big(A_{k:}^T \mid (k\lambda_A \lambda_v^{-1} I_k + \hat{U}_k \hat{U}_k^T)^{-1} \hat{U}_k v_{k:}^T,\; (k\lambda_A I_k + \lambda_v \hat{U}_k \hat{U}_k^T)^{-1}\big). \qquad (9)$$
For the SCNT model, there is a related but simpler update (see supplemental material).

As in collapsed sampling algorithms for LDA [7], we can analytically marginalize the word distribution $\Omega_k$ for each topic. Let $M_{kw}^{\setminus dn}$ denote the number of instances of word w assigned to topic k, excluding token n in document d, and $M_{k\cdot}^{\setminus dn}$ the number of total tokens assigned to topic k.
For a vocabulary with W unique word types, the posterior distribution of topic indicator $z_{dn}$ is then
$$p(z_{dn} = k \mid \pi_{:d}, z^{\setminus dn}) \propto \pi_{kd} \left( \frac{M_{kw}^{\setminus dn} + \beta}{M_{k\cdot}^{\setminus dn} + W\beta} \right). \qquad (10)$$
Recall that the topic probabilities $\pi_{:d}$ are determined from $v_{:d}$ via Equation (2).

3.2 Metropolis Independence Sampler Updates for Topic Activations

The posterior distribution of $v_{:d}$ does not have a closed analytical form due to the logistic nonlinearity underlying our stick-breaking construction. We instead employ a Metropolis-Hastings independence sampler, where proposals $q(v_{:d}^* \mid v_{:d}, A, u_{:d}, \lambda_v) = N(v_{:d}^* \mid A u_{:d}, \lambda_v^{-1} I_{\bar{K}})$ are drawn from the prior. Combining this with the likelihood of the $N_d$ word tokens, the proposal is accepted with probability $\min(A(v_{:d}^*, v_{:d}), 1)$, where
$$A(v_{:d}^*, v_{:d}) = \frac{p(v_{:d}^* \mid A, u_{:d}, \lambda_v) \prod_{n=1}^{N_d} p(z_{dn} \mid v_{:d}^*)\, q(v_{:d} \mid v_{:d}^*, A, u_{:d}, \lambda_v)}{p(v_{:d} \mid A, u_{:d}, \lambda_v) \prod_{n=1}^{N_d} p(z_{dn} \mid v_{:d})\, q(v_{:d}^* \mid v_{:d}, A, u_{:d}, \lambda_v)} = \prod_{n=1}^{N_d} \frac{p(z_{dn} \mid v_{:d}^*)}{p(z_{dn} \mid v_{:d})} = \prod_{k=1}^{K} \left( \frac{\pi_{kd}^*}{\pi_{kd}} \right)^{\sum_{n=1}^{N_d} \delta(z_{dn}, k)}. \qquad (11)$$
Because the proposal cancels with the prior distribution in the acceptance ratio $A(v_{:d}^*, v_{:d})$, the final probability depends only on a ratio of likelihood functions, which can be easily evaluated from counts of the number of words assigned to each topic by $z_d$.

4 Experimental Results

4.1 Toy Bars Dataset

Following related validations of the LDA model [7], we ran experiments on a toy corpus of "images" designed to validate the features of the DCNT. The dataset consisted of 1,500 images (documents), each containing a vocabulary of 25 pixels (word types) arranged in a 5x5 grid. Documents can be visualized by displaying pixels with intensity proportional to the number of corresponding words (see Figure 2). Each training document contained 300 word tokens. Ten topics were defined, corresponding to all possible horizontal and vertical 5-pixel "bars". We consider two toy datasets.
In the first, a random number of topics is chosen for each document, and then a corresponding subset of the bars is picked uniformly at random. In the second, we induce topic correlations by generating documents that contain a combination of either only horizontal (topics 1-5) or only vertical (topics 6-10) bars. For these datasets, there was no associated metadata, so the input features were simply set as $\phi_d = 1$.

Using these toy datasets, we compared the LDA model to several versions of the DCNT. For LDA, we set the number of topics to the true value of K = 10. Similar to previous toy experiments [7], we set the parameters of its Dirichlet prior over topic distributions to $\alpha = 50/K$, and the topic smoothing parameter to $\beta = 0.01$. For the DCNT model, we set $\gamma_\mu = 10^6$, and all gamma prior hyperparameters as $a = b = 0.01$, corresponding to a mean of 1 and a variance of 100. To initialize the sampler, we set the precision parameters to their prior mean of 1, and sample all other variables from their prior. We compared three variants of the DCNT model: the singly correlated SCNT (A constrained to be diagonal) with K = 10, the DCNT with K = 10, and the DCNT with K = 20. The final case explores whether our stick-breaking prior can successfully infer the number of topics.

For the toy dataset with correlated topics, the results of running all sampling algorithms for 10,000 iterations are illustrated in Figure 2.

[Figure 2: A dataset of correlated toy bars (example document images in bottom left). Top: From left to right, the true counts of words generated by each topic, and the recovered counts for LDA (K = 10), SCNT (K = 10), DCNT (K = 10), and DCNT (K = 20). Note that the true topic order is not identifiable. Bottom: Inferred topic covariance matrices for the four corresponding models. Note that LDA assumes all topics have a slight negative correlation, while the DCNT infers more pronounced positive correlations. With K = 20 potential DCNT topics, several are inferred to be unused with high probability, and thus have low variance.]

On this relatively clean data, all models limited to K = 10 topics recover the correct topics. With K = 20 topics, the DCNT recovers the true topics, as well as a redundant copy of one of the bars. This is typical behavior for sampling runs of this length; more extended runs usually merge such redundant bars. The development of more rapidly mixing MCMC methods is an interesting area for future research.

To determine the topic correlations corresponding to a set of learned model parameters, we use a Monte Carlo estimate (details in the supplemental material). To make these matrices easier to visualize, the Hungarian algorithm was used to reorder topic labels for best alignment with the ground truth topic assignments. Note the significant blocks of positive correlations recovered by the DCNT, reflecting the true correlations used to create this toy data.

4.2 NIPS Corpus

The NIPS corpus that we used consisted of publications from previous NIPS conferences 0-12 (1987-1999), including various metadata (year of publication, authors, and section categories). We compared four variants of the DCNT model: a model which ignored metadata, a model with indicator features for the year of publication, a model with indicator features for year of publication and the presence of highly prolific authors (those with more than 10 publications), and a model with features for year of publication and additional authors (those with more than 5 publications). In all cases, the feature matrix $\phi$ is binary. All models were truncated to use at most K = 50 topics, and the sampler initialized as in Sec. 4.1.

4.2.1 Conditioning on Metadata

A learned DCNT model provides predictions for how topic frequencies change given particular metadata associated with a document. In Figure 3, we show how predicted topic frequencies change over time, conditioning also on one of three authors (Michael Jordan, Geoffrey Hinton, or Terrence Sejnowski).
For each, words from a relevant topic illustrate how conditioning on a particular author can change the predicted document content. For example, the visualization associated with Michael Jordan shows that the frequency of the topic associated with probabilistic models gradually increases over the years, while the topic associated with neural networks decreases. Conditioning on Geoffrey Hinton puts larger mass on a topic which focuses on models developed by his research group. Finally, conditioning on Terrence Sejnowski dramatically increases the probability of topics related to neuroscience.

[Figure 3: The DCNT predicts topic frequencies over the years (1987-1999) for documents with (a) none of the most prolific authors, (b) the Michael Jordan feature, (c) the Geoffrey Hinton feature, and (d) the Terrence Sejnowski feature. The stick-breaking distribution at the top shows the frequencies of each topic, averaging over all years; note some are unused. The middle row illustrates the word distributions for the topics highlighted by red dots in their respective columns. Larger words are more probable.]

4.2.2 Correlations between Topics

The DCNT model can also capture correlations between topics. In Fig. 4, we visualize this using a diagram where the size of a colored grid is proportional to the magnitude of the correlation coefficients between two topics. The results displayed in this figure are for a model trained without metadata.

[Figure 4: A Hinton diagram of correlations between all pairs of topics, where the sizes of squares indicate the magnitude of dependence, and red and blue squares indicate positive and negative correlations, respectively. To the right are the top six words from three strongly correlated topic pairs. This visualization, along with others in this paper, is interactive and can be downloaded from this page: http://www.cs.brown.edu/~daeil.]

We can see that the model learned strong positive correlations between function and learning topics, which have strong semantic similarities but are not identical. Another positive correlation that the model discovered was between the topics visual and neuron; of course there are many papers at NIPS which study the brain's visual cortex. A strong negative correlation was found between the network and model topics, which might reflect an ideological separation between papers studying neural networks and probabilistic models.

4.3 Predictive Likelihood

In order to quantitatively measure the generalization power of our DCNT model, we tested several variants on two versions of the toy bars dataset (correlated & uncorrelated). We also compared models on the NIPS corpus, to explore more realistic data where metadata is available. The test data for the toy dataset consisted of 500 documents generated by the same process as the training data, while the NIPS corpus was split into training and test subsets containing 80% and 20% of the full corpus, respectively.

[Figure 5: Perplexity scores (lower is better) computed via Chib-style estimators for several topic models. Left: Test performance for the toy datasets with uncorrelated bars (-A) and correlated bars (-B): LDA-A 10.5, HDP-A 10.52, DCNT-A 9.79, SCNT-A 10.14; LDA-B 12.08, HDP-B 12.13, DCNT-B 11.51, SCNT-B 11.75. Right: Test performance on the NIPS corpus with various metadata: LDA 1975.46, HDP 2060.43, and the DCNT with no features (-noF) 1926.42, year features (-Y) 1925.56, year and prolific author features (over 10 publications, -YA1) 1923.1, and year and additional author features (over 5 publications, -YA2) 1932.26.]
In a previous comparison [19], the Chib-style estimator was found to be far more accurate than alternatives like the harmonic mean estimator. Note that there is some subtlety in correctly implementing the Chib-style estimator for our DCNT model, due to the possibility of rejection of our Metropolis-Hastings proposals. Predictive negative log-likelihood estimates were normalized by word counts to determine perplexity scores [3]. We tested several models, including the SCNT and DCNT, LDA with α = 1 and β = 0.01, and the HDP with full resampling of its concentration parameters. For the toy bars data, we set the number of topics to K = 10 for all models except the HDP, which learned K = 15. For the NIPS corpus, we set K = 50 for all models except the HDP, which learned K = 86. For the toy datasets, the LDA and HDP models perform similarly. The SCNT and DCNT are both superior, apparently due to their ability to capture non-Dirichlet distributions on topic occurrence patterns. For the NIPS data, all of the DCNT models are substantially more accurate than LDA and the HDP. Including metadata encoding the year of publication, and possibly also the most prolific authors, provides slight additional improvements in DCNT accuracy. Interestingly, when a larger set of author features is included, accuracy becomes slightly worse. This appears to be an overfitting issue: there are 125 authors with over 5 publications, and only a handful of training examples for each one. While it is pleasing that the DCNT and SCNT models seem to provide improved predictive likelihoods, a recent study on the human interpretability of topic models showed that such scores do not necessarily correlate with more meaningful semantic structures [4]. In many ways, the interactive visualizations illustrated in Sec. 4.2 provide more assurance that the DCNT can capture useful properties of real corpora. 
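The perplexity normalization mentioned above (exponentiated per-word negative log-likelihood) can be sketched in a few lines; the function name and numbers are illustrative only, not from the paper:

```python
import math

def perplexity(total_log_likelihood, n_words):
    """Perplexity = exp(-(log-likelihood of test words) / (#test words));
    lower is better."""
    return math.exp(-total_log_likelihood / n_words)

# A model assigning every one of 10,000 test words probability 1/1975
# would score a perplexity of exactly 1975.
ll = 10000 * math.log(1.0 / 1975)
print(round(perplexity(ll, 10000)))  # -> 1975
```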
5 Discussion

The doubly correlated nonparametric topic model flexibly allows the incorporation of arbitrary features associated with documents, captures correlations that might exist within a dataset's latent topics, and can learn an unbounded set of topics. The model uses a set of efficient MCMC techniques for learning and inference, and is supported by a set of web-based tools that allow users to visualize the inferred semantic structure.

Acknowledgments

This research was supported in part by IARPA under AFRL contract number FA8650-10-C-7059. Dae Il Kim was supported in part by an NSF Graduate Fellowship. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government.

References

[1] A. Agovic and A. Banerjee. Gaussian process topic models. In UAI, 2010.
[2] D. M. Blei and J. D. Lafferty. A correlated topic model of science. AAS, 1(1):17–35, 2007.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March 2003.
[4] J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. M. Blei. Reading tea leaves: How humans interpret topic models. In NIPS, 2009.
[5] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. An. Stat., 1(2):209–230, 1973.
[6] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall, 2004.
[7] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 2004.
[8] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173, Mar. 2001.
[9] W. Li, D. Blei, and A. McCallum. Nonparametric Bayes Pachinko allocation. In UAI, 2008.
[10] H. F. Lopes and M. West. Bayesian model assessment in factor analysis. Stat. Sinica, 14:41–67, 2004.
[11] D. Mimno and A. McCallum. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In UAI, 2008.
[12] I. Murray and R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In NIPS 21, pages 1137–1144. 2009.
[13] J. Paisley, C. Wang, and D. Blei. The discrete infinite logistic normal distribution for mixed-membership modeling. In AISTATS, 2011.
[14] L. Ren, L. Du, L. Carin, and D. B. Dunson. Logistic stick-breaking process. JMLR, 12, 2011.
[15] A. Rodriguez and D. B. Dunson. Nonparametric Bayesian models through probit stick-breaking processes. J. Bayesian Analysis, 2011.
[16] J. Sethuraman. A constructive definition of Dirichlet priors. Stat. Sin., 4:639–650, 1994.
[17] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[18] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. In AISTATS 10, 2005.
[19] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In ICML, 2009.
2011
Hierarchical Multitask Structured Output Learning for Large-Scale Sequence Segmentation

Nico Görnitz¹, Technical University Berlin, Franklinstr. 28/29, 10587 Berlin, Germany, Nico.Goernitz@tu-berlin.de
Christian Widmer¹, FML of the Max Planck Society, Spemannstr. 39, 72070 Tübingen, Germany, Christian.Widmer@tue.mpg.de
Georg Zeller, European Molecular Biology Laboratory, Meyerhofstr. 1, 69117 Heidelberg, Germany, Georg.Zeller@gmail.com
André Kahles, FML of the Max Planck Society, Spemannstr. 39, 72070 Tübingen, Germany, Andre.Kahles@tue.mpg.de
Sören Sonnenburg², TomTom, An den Treptowers 1, 12435 Berlin, Germany, Soeren.Sonnenburg@tomtom.com
Gunnar Rätsch, FML of the Max Planck Society, Spemannstr. 39, 72070 Tübingen, Germany, Gunnar.Raetsch@tue.mpg.de

Abstract

We present a novel regularization-based Multitask Learning (MTL) formulation for Structured Output (SO) prediction for the case of hierarchical task relations. Structured output prediction often leads to difficult inference problems and hence requires large amounts of training data to obtain accurate models. We propose to use MTL to exploit additional information from related learning tasks by means of hierarchical regularization. Training SO models on the combined set of examples from multiple tasks can easily become infeasible for real-world applications. To be able to solve the optimization problems underlying multitask structured output learning, we propose an efficient algorithm based on bundle methods. We demonstrate the performance of our approach in applications from the domain of computational biology addressing the key problem of gene finding. We show that 1) our proposed solver achieves much faster convergence than previous methods and 2) the hierarchical SO-MTL approach outperforms the considered non-MTL methods.

1 Introduction

In Machine Learning, model quality is most often limited by the lack of sufficient training data.
When data from different but related tasks is available, it is possible to exploit it to boost the performance of each task by transferring relevant information. Multitask learning (MTL) considers the problem of inferring models for several tasks simultaneously, while imposing regularity criteria or shared representations in order to allow learning across tasks. This has been an active research focus and various methods (e.g., [5, 8]) have been explored, providing empirical findings [16] and theoretical foundations [3, 4]. Recently, the relationships between tasks have also been studied (e.g., [1]), assuming a cluster relationship [11] or a hierarchy [6, 23, 13] between tasks. Our proposed method follows this line of research in that it exploits externally provided hierarchical task relations. The generality of regularization-based MTL approaches makes it possible to extend them beyond the simple cases of classification or regression to Structured Output (SO) learning problems [14, 2, 21, 10]. Here, the output is not in the form of a discrete class label or a real-valued number, but a structured entity such as a label sequence, a tree, or a graph. One of the main contributions of this paper is to explicitly extend a regularization-based MTL formulation to the SVM-struct formulation for SO prediction [2, 21]. SO learning methods can be computationally demanding, and combining information from several tasks leads to even larger problems, which renders many interesting applications infeasible. Hence, our second main contribution is to provide an efficient solver for SO problems which is based on bundle methods [18, 19, 7]. It achieves much faster convergence and is therefore an essential tool to cope with the demands of the MTL setting. SO learning has been successfully applied in the analysis of images, natural language, and sequences.

¹These authors contributed equally.
²This work was done while SS was at Technical University Berlin.
The latter is of particular interest in computational biology for the analysis of DNA, RNA or protein sequences. This field moreover constitutes an excellent application area for MTL [12, 22]. In computational biology, one often uses supervised learning methods to model biological processes in order to predict their outcomes and ultimately understand them better. Due to the complexity of many biological mechanisms, rich computational models have to be developed, which in turn require a reasonable amount of training data. However, especially in the biomedical domain, obtaining labeled training examples through experiments can be costly. Thus, combining information from several related tasks can be a cost-effective approach to best exploit the available label data. When transferring label information across tasks, it often makes sense to assume hierarchical task relations; this is particularly true in computational biology, where evolutionary processes often impose a task hierarchy [22]. For instance, we might be interested in modeling a common biological mechanism in several organisms such that each task corresponds to one organism. In this setting, we expect that the longer the common evolutionary history between two organisms, the more beneficial it is to share information between the corresponding tasks. In this work, we chose a challenging problem from genome biology to demonstrate that our approach is practically feasible in terms of speed and accuracy. In ab initio gene finding [17], the task is to build an accurate model of a gene and subsequently use it to predict the gene content of newly sequenced genomes or to refine existing annotations. Despite many commonalities between sequence features of genes across organisms, sequence differences have made it very difficult to build universal gene finders that achieve high accuracy in cross-organism prediction. This problem is hence ideally suited for the application of the proposed SO-MTL approach.
2 Methods

Regularization-based supervised learning methods, such as the SVM or logistic regression, play a central role in many applications. In its most general form, such a method consists of a loss function L that captures the error with respect to the training data S = {(x_1, y_1), ..., (x_n, y_n)} and a regularizer R that penalizes model complexity:

J(w) = Σ_{i=1}^{n} L(w, x_i, y_i) + R(w).

In the case of Multitask Learning (MTL), one is interested in obtaining several models w_1, ..., w_T based on T associated sets of examples S_t = {(x_1, y_1), ..., (x_{n_t}, y_{n_t})}, t = 1, ..., T. To couple the individual tasks, an additional regularization term R_MTL is introduced that penalizes the disagreement between the individual models (e.g., [1, 8]):

J(w_1, ..., w_T) = Σ_{t=1}^{T} ( Σ_{i=1}^{n_t} L(w_t, x_i, y_i) + R(w_t) ) + R_MTL(w_1, ..., w_T).

Special cases include T = 2 and R_MTL(w_1, w_2) = γ ||w_1 − w_2|| (e.g., [8, 16]), where γ is a hyper-parameter controlling the strength of coupling of the solutions for both tasks. For more than two tasks, the number of coupling terms and hyper-parameters can rise quadratically, leading to a difficult model-selection problem.

2.1 Hierarchical Multitask Learning (HMTL)

We consider the case where tasks correspond to leaves of a tree and are related by its inner nodes. In [22], the case of taxonomically organized two-class classification tasks was investigated, where each task corresponds to a species (taxon). The idea was to mimic biological evolution, which is assumed to generate more specialized molecular processes with each speciation event from root to leaf. This is implemented by training on examples available for nodes in the current subtree (i.e., the tasks below the current node), while similarity to the parent classifier is induced through regularization.
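As a concrete illustration of the coupled objective above, here is a minimal pure-Python sketch for T = 2 with hinge loss; the squared coupling term and the toy data are our own illustrative choices, not the paper's implementation:

```python
def mtl_objective(w1, w2, data1, data2, gamma, C=1.0):
    """Two-task MTL objective: per-task L2 regularizer + hinge loss,
    plus a coupling penalty gamma * ||w1 - w2||^2 (a squared variant of
    the gamma * ||w1 - w2|| coupling term in the text)."""
    def task(w, data):
        reg = 0.5 * sum(wi * wi for wi in w)
        hinge = sum(max(0.0, 1.0 - y * sum(wi * xi for wi, xi in zip(w, x)))
                    for x, y in data)
        return reg + C * hinge
    coupling = gamma * sum((a - b) ** 2 for a, b in zip(w1, w2))
    return task(w1, data1) + task(w2, data2) + coupling

d1, d2 = [([1.0, 0.0], +1)], [([0.0, 1.0], -1)]
w1, w2 = [2.0, 0.0], [0.0, -2.0]
print(mtl_objective(w1, w2, d1, d2, gamma=0.0))  # gamma = 0: tasks decouple
print(mtl_objective(w1, w2, d1, d2, gamma=1.0))  # coupling adds ||w1 - w2||^2
```

With gamma = 0 the objective is simply the sum of two independent single-task objectives, matching the decoupling discussed in the text.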
Thus, for each node n, one solves the following optimization problem:

(w*_n, b*_n) = argmin_{w,b} (1/2) [ (1 − γ) ||w||² + γ ||w − w*_p||² ] + C Σ_{(x,y)∈S} ℓ(⟨x, w⟩ + b, y),   (1)

where p is the parent node of n (with the special case w*_p = 0 for the root node) and ℓ is an appropriate loss function (e.g., the hinge loss). The hyper-parameter γ ∈ [0, 1] determines the contribution of regularization toward the origin vs. the parent node's parameters (i.e., the strength of coupling between the node and its parent). The above problem can be equivalently rewritten as:

(w*_n, b*_n) = argmin_{w,b} (1/2) ||w||² − γ ⟨w, w*_p⟩ + C Σ_{(x,y)∈S} ℓ(⟨x, w⟩ + b, y).   (2)

For γ = 0, the tasks completely decouple and can be learnt independently. The parameters for the root node correspond to the globally best model. We will refer to these two cases as baseline methods for comparisons in the experimental section.

2.2 Structured Output Learning and Extensions for HMTL

In contrast to binary classification, elements from the output space Υ (e.g., sequences, trees, or graphs) of structured output problems have an inherent structure, which makes more sophisticated, problem-specific loss functions desirable. The loss between the true label y ∈ Υ and the predicted label ŷ ∈ Υ is measured by a loss function ∆ : Υ × Υ → ℝ₊. A widely used approach to predict ŷ ∈ Υ is the use of a linearly parametrized model, given an input vector x ∈ X and a joint feature map Ψ : X × Υ → H that captures the dependencies between input and output (e.g., [21]):

ŷ_w(x) = argmax_{ȳ∈Υ} ⟨w, Ψ(x, ȳ)⟩.

The most common approaches to estimate the model parameters w are based on structured output SVMs (e.g., [2, 21]) and conditional random fields (e.g., [14]; see also [10]). Here we follow the approach taken in [21, 15], where estimating the parameter vector w amounts to solving the following optimization problem:

min_{w∈H} { R(w) + C Σ_{i=1}^{n} ℓ( max_{ȳ∈Υ} [ ⟨w, Ψ(x_i, ȳ)⟩ + ∆(y_i, ȳ) ] − ⟨w, Ψ(x_i, y_i)⟩ ) },   (3)

where R(w) is a regularizer and ℓ is a loss function.
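The inner term of (3) is the margin-rescaled structured hinge. A toy sketch, assuming an output space small enough to enumerate (the two-label "structure" below is purely illustrative):

```python
def structured_hinge(w, psi, delta, x, y_true, outputs):
    """Margin-rescaled structured hinge from Eq. (3):
    max(0, max_y [ <w, psi(x, y)> + delta(y_true, y) ] - <w, psi(x, y_true)>)."""
    dot = lambda a, b: sum(u * v for u, v in zip(a, b))
    augmented = max(dot(w, psi(x, y)) + delta(y_true, y) for y in outputs)
    return max(0.0, augmented - dot(w, psi(x, y_true)))

# Hypothetical two-label output space: psi puts the input mass on the
# coordinate of the predicted label; delta is a 0/1 mismatch loss.
psi = lambda x, y: [x, 0.0] if y == "A" else [0.0, x]
delta = lambda y, z: 0.0 if y == z else 1.0

assert structured_hinge([2.0, 0.0], psi, delta, 1.0, "A", ["A", "B"]) == 0.0
assert structured_hinge([0.0, 0.0], psi, delta, 1.0, "A", ["A", "B"]) == 1.0
```

The first case scores the true label with a margin exceeding the rescaled loss, so the hinge is zero; the second (a zero weight vector) pays exactly the mislabeling loss ∆ = 1.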
For ℓ(a) = max(0, a) and R(w) = ||w||²₂, we obtain the structured output support vector machine [21, 2] with margin rescaling and hinge loss. It turns out that we can combine the structured output formulation with hierarchical multitask learning in a straightforward way. We extend the regularizer R(w) in (3) to a γ-parametrized convex combination of a multitask regularizer (1/2) ||w − w_p||²₂ with the original term. For R(w) = (1/2) ||w||²₂ and omitting constant terms, we arrive at

R_{p,γ}(w) = (1/2) ||w||²₂ − γ ⟨w, w_p⟩.

Thus we can apply the described hierarchical multitask learning approach and solve, for every node, the following optimization problem:

min_{w∈H} { R_{p,γ}(w) + C Σ_{i=1}^{n} ℓ( max_{ȳ∈Υ} [ ⟨w, Ψ(x_i, ȳ)⟩ + ∆(y_i, ȳ) ] − ⟨w, Ψ(x_i, y_i)⟩ ) }.   (4)

A major difficulty remains: solving the resulting optimization problems, which can now become considerably larger than in the single-task case.

2.3 A Bundle Method for Efficient Optimization

A common approach to obtain a solution to (3) is to use so-called cutting-plane or column-generation methods. Here one considers growing subsets of all possible structures and solves restricted optimization problems. An algorithm implementing a variant of this strategy based on primal optimization is given in the appendix (similar to [21]). Cutting-plane and column-generation techniques often converge slowly. Moreover, the size of the restricted optimization problems grows steadily, and solving them becomes more expensive in each iteration. Simple gradient descent or second-order methods cannot be directly applied as alternatives, because (4) is continuous but non-smooth. Our approach is instead based on bundle methods for regularized risk minimization as proposed in [18, 19] and [7]. In the case of SVMs, this further relates to the OCAS method introduced in [9]. In order to achieve fast convergence, we use a variant of these methods, adapted to structured output learning, that is suitable for hierarchical multitask learning.
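The "omitting constant terms" step can be checked numerically: the γ-convex combination of regularizers from Section 2.1 differs from R_{p,γ}(w) only by the w-independent constant (γ/2)||w_p||², so both have the same minimizer. A quick sketch with arbitrary vectors (function names are ours):

```python
def coupled_reg(w, wp, gamma):
    # (1/2) [ (1 - gamma) ||w||^2 + gamma ||w - wp||^2 ], as in Eq. (1)
    sq = lambda v: sum(t * t for t in v)
    return 0.5 * ((1 - gamma) * sq(w) + gamma * sq([a - b for a, b in zip(w, wp)]))

def dropped_const_reg(w, wp, gamma):
    # (1/2) ||w||^2 - gamma * <w, wp>, as in Eq. (2) and R_{p,gamma}
    return 0.5 * sum(t * t for t in w) - gamma * sum(a * b for a, b in zip(w, wp))

# The two forms differ only by the w-independent constant (gamma/2) ||wp||^2.
w, wp, gamma = [1.0, -2.0], [0.5, 3.0], 0.4
const = 0.5 * gamma * sum(t * t for t in wp)
assert abs(coupled_reg(w, wp, gamma) - (dropped_const_reg(w, wp, gamma) + const)) < 1e-9
```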
We consider the objective function J(w) = R_{p,γ}(w) + L(w), where

L(w) := C Σ_{i=1}^{n} ℓ( max_{ȳ∈Υ} { ⟨w, Ψ(x_i, ȳ)⟩ + ∆(y_i, ȳ) } − ⟨w, Ψ(x_i, y_i)⟩ )

and R_{p,γ}(w) is as defined in Section 2.2. Direct optimization of J is very expensive, as computing L involves computing a maximum over the output space. Hence, we propose to optimize an estimate of the empirical loss, L̂(w), which can be computed efficiently. We define the estimated empirical loss L̂(w) as

L̂(w) := C Σ_{i=1}^{n} ℓ( max_{(Ψ,∆)∈Γ_i} { ⟨w, Ψ⟩ + ∆ } − ⟨w, Ψ(x_i, y_i)⟩ ).

Accordingly, we define the estimated objective function as Ĵ(w) = R_{p,γ}(w) + L̂(w). It is easy to verify that J(w) ≥ Ĵ(w). Γ_i is a set of pairs (Ψ(x_i, y), ∆(y_i, y)) defined by a suitably chosen, growing subset of Υ, such that L̂(w) → L(w) (cf. Algorithm 1). In general, bundle methods are extensions of cutting-plane methods that use a prox-function to stabilize the solution of the approximated function. In the framework of regularized risk minimization, a natural prox-function is given by the regularizer. We apply this approach to the objective Ĵ(w) and solve

min_w R_{p,γ}(w) + max_{i∈I} { ⟨a_i, w⟩ + b_i },   (5)

where the cutting planes (a_i, b_i) lower-bound L̂. As proposed in [7, 19], we use a set I of limited size. Moreover, we calculate an aggregation cutting plane (ā, b̄) that lower-bounds the estimated empirical loss L̂. To be able to solve the primal optimization problem (5) in the dual space as proposed by [7, 19], we adopt an elegant strategy described in [7] to obtain the aggregated cutting plane (ā′, b̄′) using the dual solution α of (5):

ā′ = Σ_{i∈I} α_i a_i   and   b̄′ = Σ_{i∈I} α_i b_i.   (6)

The following two formulations reach the same minimum when optimized with respect to w:

min_{w∈H} R_p(w) + max_{i∈I} { ⟨a_i, w⟩ + b_i }  =  min_{w∈H} { R_p(w) + ⟨ā′, w⟩ + b̄′ }.

This new aggregated plane can be used as an additional cutting plane in the next iteration step.
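Each cutting plane (a_k, b_k) with b_k = L̂(w^(k)) − ⟨w^(k), a_k⟩ linearizes the convex loss at w^(k) and therefore lower-bounds it everywhere. A one-dimensional sketch with an illustrative piecewise-linear convex loss (not the paper's structured loss):

```python
def loss(w):
    # A simple convex, piecewise-linear surrogate loss in one dimension.
    return max(0.0, 1.0 - w) + max(0.0, 1.0 + 0.5 * w)

def subgradient(w):
    # Sum of subgradients of the two active hinge pieces.
    g = 0.0
    if 1.0 - w > 0:
        g += -1.0
    if 1.0 + 0.5 * w > 0:
        g += 0.5
    return g

def cutting_plane(w_k):
    a = subgradient(w_k)
    b = loss(w_k) - a * w_k          # b_k = L(w_k) - <w_k, a_k>
    return a, b

a, b = cutting_plane(0.0)
# The plane touches the loss at w_k and lower-bounds it everywhere.
assert abs((a * 0.0 + b) - loss(0.0)) < 1e-12
assert all(a * w + b <= loss(w) + 1e-9 for w in [-4.0, -1.0, 0.0, 1.0, 2.0, 5.0])
```

Collecting such planes and taking their pointwise maximum, as in (5), yields a monotonically improving piecewise-linear underestimate of the loss.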
We therefore have a monotonically increasing lower bound on the estimated empirical loss and can remove previously generated cutting planes without compromising convergence (see [7] for details). The algorithm is able to handle any (non-)smooth convex loss function ℓ, since only the subgradient needs to be computed. This can be done efficiently for the hinge loss, squared hinge loss, Huber loss, and logistic loss. The resulting optimization algorithm is outlined in Algorithm 1. Several improvements are possible: for instance, one can bypass updating the empirical risk estimates in line 6 when L(w^(k)) − L̂(w^(k)) ≤ ϵ. Finally, while Algorithm 1 was formulated in primal space, it is easy to reformulate it in dual variables, making it independent of the dimensionality of w ∈ H.

Algorithm 1: Bundle Method for Structured Output Learning
 1: S ≥ 1: maximal size of the bundle set
 2: θ > 0: linesearch trade-off (cf. [9] for details)
 3: w^(1) = w_p
 4: k = 1 and ā = 0, b̄ = 0, Γ_i = ∅ for all i
 5: repeat
 6:   for i = 1, ..., n do
 7:     y* = argmax_{y∈Υ} { ⟨w^(k), Ψ(x_i, y)⟩ + ∆(y_i, y) }
 8:     if ℓ( max_{y∈Υ} { ⟨w, Ψ(x_i, y)⟩ + ∆(y_i, y) } ) > ℓ( max_{(Ψ,∆)∈Γ_i} ⟨w, Ψ⟩ + ∆ ) then
 9:       Γ_i = Γ_i ∪ (Ψ(x_i, y*), ∆(y_i, y*))
10:     end if
11:     Compute a_k ∈ ∂_w L̂(w^(k))
12:     Compute b_k = L̂(w^(k)) − ⟨w^(k), a_k⟩
13:     w* = argmin_w R_{p,γ}(w) + max( max_{(k−S)₊ < i ≤ k} { ⟨a_i, w⟩ + b_i }, ⟨ā, w⟩ + b̄ )
14:     Update ā, b̄ according to (6)
15:     η* = argmin_{η∈ℝ} Ĵ(w* + η(w* − w^(k)))
16:     w^(k+1) = (1 − θ) w* + θ η* (w* − w^(k))
17:     k = k + 1
18:   end for
19: until L(w^(k)) − L̂(w^(k)) ≤ ϵ and J(w^(k)) − J_k(w^(k)) ≤ ϵ

2.4 Taxonomically Constrained Model Selection

Model selection for multitask learning is particularly difficult, as it requires hyper-parameter selection for several different, but related, tasks in a dependent manner. For the described approach, each node n in the given taxonomy corresponds to solving an optimization problem that is subject to hyper-parameters γ_n and C_n (except for the root node, where only C_n is relevant).
Hence, the direct optimization of all combinations of dependent hyper-parameters in model selection is not feasible in many cases. Therefore, we propose to perform a local model selection and optimize the current C_n and γ_n at each node n, from top to bottom, independently. This corresponds to using the taxonomy to reduce the parameter search space. To clarify this point, assume a perfect binary tree for n tasks. The length of the path from root to leaf is log₂(n). The parameters along one path are dependent, e.g., the values chosen at the root will influence the optimal choice further down the tree. Given k candidate values for parameter γ_n, jointly optimizing all interdependent parameters along one path corresponds to optimizing over a grid of size k^(log₂ n), in contrast to k · log₂(n) when using our proposed local strategy.

3 Results

3.1 Background

To demonstrate the validity of our approach, we applied it to the computational biology problem of gene finding. Here, the task is to identify genomic regions encoding genes (from which RNAs and/or proteins are produced). Genomic sequence can be represented by long strings of the four letters A, C, G, and T (genome sizes range from a few megabases to several gigabases). In prokaryotes (mostly bacteria and archaea), gene structures are comparably simple (cf. Figure 1A): the protein coding region starts with a start codon (one out of three specific 3-mers in many prokaryotes), is followed by a number of codon triplets (of three nucleotides each), and is terminated by a stop codon (one out of five specific 3-mers in many prokaryotes). Genic regions are first transcribed to RNA; subsequently, the contained coding region is translated into a protein. Parts of the RNA that are not translated are called untranslated regions (UTRs). Genes are separated from one another by intergenic regions. The protein coding segment is depleted of stop codons, making the computational problem of identifying coding regions relatively straightforward.
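Returning briefly to the model selection strategy of Section 2.4: the search-space reduction of the local strategy is simple arithmetic, sketched below (function names are ours):

```python
import math

def joint_grid_size(k, n_tasks):
    """Grid points when jointly tuning gamma along one root-to-leaf path
    of a perfect binary tree over n_tasks leaves: k ** log2(n)."""
    depth = int(math.log2(n_tasks))
    return k ** depth

def local_grid_size(k, n_tasks):
    """Grid points with the local, top-down strategy: k * log2(n)."""
    depth = int(math.log2(n_tasks))
    return k * depth

# With k = 8 candidate values and n = 8 tasks (path length 3):
assert joint_grid_size(8, 8) == 512   # 8^3
assert local_grid_size(8, 8) == 24    # 8 * 3
```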
In higher eukaryotes (animals, plants, etc.), however, the coding region can be interrupted by introns, which are removed from the RNA before it is translated into protein. Introns are flanked by specific sequence signals, so-called splice sites (cf. Figure 1B). The presence of introns substantially complicates the identification of the transcribed and coding regions. In particular, it is usually insufficient to identify regions depleted of stop codons to determine the encoded protein sequence. To accurately detect the transcribed regions in eukaryotic genomes, it is therefore often necessary to use additional experimental data (e.g., sequencing of RNA fragments). Here, we consider two key problems in computational gene finding: (i) predicting (only) the coding regions for prokaryotes and (ii) predicting the exon-intron structure (but not the coding region) for eukaryotes.

Figure 1: Panel A shows the structure of a prokaryotic gene. The protein coding region is flanked by a start and a stop codon and contains a multiple of three nucleotides. UTR denotes the untranslated region. Panel B shows the structure of a eukaryotic gene. The transcribed region contains introns and exons. Introns are flanked by splice sites and are removed from the RNA. The remaining sequence contains the UTRs and coding region.

The problem of identifying genes can be posed as a label sequence learning task, where one assigns a label (out of intergenic, transcript start, untranslated region, coding start, coding exon, intron, coding stop, transcript stop) to each position in the genome.
The labels have to follow a grammar dictated by the biological processes of transcription and translation (see Figure 1), making it suitable to apply structured output learning techniques to identify genes. Because the biological processes and cellular machineries which recognize genes have slowly evolved over time, genes of closely related species tend to exhibit similar sequence characteristics. Therefore these problems are very well suited for the application of multitask learning: sharing information among species is expected to lead to more accurate gene predictions compared to approaching the problem for each species in isolation. Currently, the genomes of many prokaryotic and eukaryotic species are being sequenced, but often very little is known about the genes encoded, and standard methods are typically used to infer them without systematically exploiting reliable information on related species. In the following we consider two different aspects of the described problem. First, focusing on eukaryotic gene finding for a single species, we show that the proposed optimization algorithm converges very quickly to the optimal solution. Second, for the problem of prokaryotic gene finding in several species, we demonstrate that hierarchical multitask structured output learning significantly improves gene prediction accuracy. The supplement, data and code can be found on the project website³.

3.2 Eukaryotic Gene Finding Based on RNA-Seq

We first consider the problem of detecting exonic, intronic and intergenic regions in a single eukaryotic genome. We use experimental data from RNA sequencing (RNA-seq), which provides evidence for exonic and intronic regions. For simplicity, we assume that for each position in the genome we are given numbers on how often this position was experimentally determined to be exonic and intronic, respectively.
Ideally, exons and introns belonging to the same gene should have a constant number of confirmations, whereas these values may vary greatly between different genes. In reality, however, these measurements are typically incomplete and noisy, so that inference techniques greatly help to reconstruct complete gene structures. As any HMM or HMSVM, our method employs a state model defining allowed transitions between states. It consists of five basic states: intergenic, exonic, intron start (donor splice site), intronic, and intron end (acceptor splice site). These states are duplicated Q = 5 times to model different levels of confirmation, and the whole model is mirrored for simultaneous predictions of genes from both strands of the genome (see supplement for details). In total, we have 41 states, each of which is associated with several parameters scoring features derived from the exon and intron confirmation and computational splice site predictions (see supplement for details). Overall, the model has almost 1000 parameters. We trained the model using 700 training regions with known exon/intron structures and a total length of ca. 5.2 million nucleotides (data from the nematode C. elegans). We used the column generation-based algorithm (see Appendix) and the bundle method-based algorithm (Algorithm 1) and recorded upper and lower bounds of the objective during run time (cf. Figure 2). Whereas both algorithms need a similar amount of computation per iteration (mostly decoding steps), the bundle method showed much faster convergence. We assessed prediction accuracy in a three-fold cross-validation procedure where individual test sequences consisted of large genomic regions (of several Mbp), each containing many genes. This evaluation procedure is expected to yield unbiased estimates that are very similar to whole-genome predictions. Prediction accuracy was compared to another recently proposed, widely used method called Cufflinks [20].

³http://bioweb.me/so-mtl
We observed that our method detects introns and transcripts more accurately than Cufflinks on the data set analyzed here (cf. Figure 2).

Figure 2: Left panel: Convergence of the bundle method-based solver versus column generation (log scale). Right panel: Prediction accuracy of our eukaryotic gene finding method in comparison to a state-of-the-art method, Cufflinks [20]. The F-score (harmonic mean of precision and recall) was assessed based on two metrics: correctly predicted introns, as well as transcripts for which all introns were correct.

3.3 Gene Finding in Multiple Prokaryotic Genomes

In a second series of experiments we evaluated the benefit of applying SO-MTL to prokaryotic gene prediction.

SO prediction method. We modeled prokaryotic genes as a Markov chain on the nucleotide level. To nonetheless account for the biological fact that genetic information is encoded in triplets, the model contains a 3-cycle of exon states; details are given in Figure 3.

Figure 3: Simple state model for prokaryotic gene finding. A suitable model for prokaryotic gene prediction needs to consider 1) that a gene starts with a start codon (i.e., a certain triplet of nucleotides), 2) ends with a stop codon, and 3) has a length divisible by 3. Properties 1) and 2) are enforced by allowing only transitions into and out of the exonic states on start and stop codons, respectively. Property 3) is enforced by only allowing transitions from exon state Exonic3 to the stop codon state.

Data generation. We selected a subset of organisms with publicly available genomes to broadly cover the spectrum of prokaryotic organisms.
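The constraints encoded by the state model of Figure 3 can be sketched as a transition table; the state names follow the figure, while the table itself is our reconstruction:

```python
# Transition table mirroring Figure 3: the 3-cycle Exonic1 -> Exonic2 ->
# Exonic3 -> Exonic1 between start- and stop-codon states forces every
# coding region to have a length divisible by 3.
TRANSITIONS = {
    "Intergenic": {"Intergenic", "Start"},
    "Start":      {"Exonic1"},
    "Exonic1":    {"Exonic2"},
    "Exonic2":    {"Exonic3"},
    "Exonic3":    {"Exonic1", "Stop"},
    "Stop":       {"Intergenic"},
}

def is_valid_path(states):
    return all(t in TRANSITIONS[s] for s, t in zip(states, states[1:]))

gene = ["Intergenic", "Start",
        "Exonic1", "Exonic2", "Exonic3",
        "Exonic1", "Exonic2", "Exonic3",
        "Stop", "Intergenic"]
assert is_valid_path(gene)
assert sum(s.startswith("Exonic") for s in gene) % 3 == 0
# Leaving the exon cycle early is forbidden by the transition table:
assert not is_valid_path(["Exonic1", "Stop"])
```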
In order to show that MTL is beneficial even for relatively distant species, we selected representatives from two different domains: bacteria and archaea. The relationship between these organisms is captured by the taxonomy shown in Figure 4, which was created based on the information available on the NCBI website⁴. For each organism, we generated one training example per annotated gene. The genomic sequences were cut between neighboring genes (splitting intergenic regions equally), such that a minimum distance of 6 nucleotides between genes was maintained. Features for SO learning were derived from the nucleotide sequence by transcoding it to a numerical representation of triplets. This resulted in binary vectors of size 4³ = 64 with exactly one non-zero entry. We sub-sampled from the complete dataset of N_i examples for each organism i and created new datasets with 20 training examples, 40 evaluation examples, and 200 test examples.

⁴ftp://ftp.ncbi.nlm.nih.gov/genomes/Bacteria/

Figure 4: Species and their taxonomic hierarchy used for prokaryotic gene finding.

Experimental setup. For model selection we used a grid over the two parameter ranges C = [100, 250] and γ = [0, 0.025, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0] for each node in the taxonomy (cf. Figure 4). Sub-sampling of the dataset was performed 3 times and results were subsequently averaged. We compared our MTL algorithm to two baseline methods: one where predictors for all tasks were trained without information transfer (independent), and the other extreme case, where one global model was fitted for all tasks based on the union of all data sets (union). Performance was measured by the F-score, the harmonic mean of precision and recall, where precision and recall were determined on the nucleotide level (e.g., whether or not an exonic nucleotide was correctly predicted) in single-gene regions.
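The triplet features and the evaluation metric just described are easy to sketch; the index ordering of the one-hot encoding is our assumption:

```python
from itertools import product

# One-hot triplet features: 4^3 = 64 dimensions, exactly one non-zero
# entry per nucleotide triplet.
TRIPLETS = ["".join(t) for t in product("ACGT", repeat=3)]

def encode_triplet(triplet):
    vec = [0] * 64
    vec[TRIPLETS.index(triplet)] = 1
    return vec

def f_score(precision, recall):
    """Harmonic mean of precision and recall, as used for evaluation."""
    return 2 * precision * recall / (precision + recall)

v = encode_triplet("ATG")   # e.g. a common start codon
assert len(v) == 64 and sum(v) == 1
assert abs(f_score(0.5, 1.0) - 2 / 3) < 1e-12
```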
(Note that due to its per-nucleotide Markov restriction, however, our method is not able to exploit the fact that there is only one gene per example sequence.)

Results. Figure 5 shows the results for our proposed MTL method and the two baseline methods described above (see Appendix for a table). We observe that it generally pays off to combine information from different organisms, as union always performs better than independent. Indeed, MTL improves over the naive combination method union, with F-score increases of up to 4.05 percentage points in A. tumefaciens. On average, we observe an improvement of 13.99 percentage points for MTL over independent and 1.13 percentage points for MTL over union, confirming the value of MTL in transferring information across tasks. In addition, the new bundle method converges at least twice as fast as the originally proposed cutting plane method.

Figure 5: Evaluation of MTL and the baseline methods independent and union (per-organism F-scores for E. coli, E. fergusonii, A. tumefaciens, H. pylori, B. anthracis, B. subtilis, M. smithii, S. islandicus, and their mean).

4 Discussion

We have introduced a regularization-based approach to SO learning in the setting of hierarchical task relations and have empirically shown its validity on an application from computational biology. To cope with the increased problem size usually encountered in the MTL setting, we have developed an efficient solver based on bundle methods and demonstrated its improved convergence behavior compared to column generation techniques. Applying our SO-MTL algorithm to the problem of prokaryotic gene finding, we showed that sharing information across tasks indeed results in improved accuracy over learning tasks in isolation. Additionally, the taxonomy, which relates individual tasks to each other, proved useful in that it led to more accurate predictions than were obtained when simply training on all examples together.
We have previously shown that MTL algorithms excel in scenarios where there is limited training data relative to the complexity of the problem and model [23]. As this experiment was carried out on a relatively small data set, more work is required to turn our approach into a state-of-the-art prokaryotic gene finder.

Acknowledgments

We would like to thank the anonymous reviewers for insightful comments. Moreover, we are grateful to Jonas Behr, Jose Leiva, Yasemin Altun and Klaus-Robert Müller. This work was supported by the German Research Foundation (DFG) under the grant RA 1894/1-1.

References

[1] A. Agarwal, S. Gerber, and H. Daumé III. Learning multiple tasks using manifold regularization. In Advances in Neural Information Processing Systems 23, 2010.
[2] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In Proc. ICML, 2003.
[3] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. Lecture Notes in Computer Science, pages 567–580, 2003.
[4] J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. Learning bounds for domain adaptation. Advances in Neural Information Processing Systems, 20, 2007.
[5] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[6] H. Daumé III. Bayesian multitask learning with latent hierarchies. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009.
[7] T.-M.-T. Do. Regularized Bundle Methods for Large-scale Learning Problems with an Application to Large Margin Training of Hidden Markov Models. PhD thesis, l'Université Pierre & Marie Curie, 2010.
[8] T. Evgeniou, C. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615–637, 2005.
[9] V. Franc and S. Sonnenburg. OCAS optimized cutting plane algorithm for support vector machines. In Proc. ICML, 2008.
[10] T. Hazan and R. Urtasun.
A primal-dual message-passing algorithm for approximated large scale structured prediction. In Advances in Neural Information Processing Systems 23, 2010.
[11] L. Jacob, F. Bach, and J. Vert. Clustered multi-task learning: A convex formulation. arXiv preprint arXiv:0809.2085, 2008.
[12] L. Jacob and J. Vert. Efficient peptide-MHC-I binding prediction for alleles with few known binders. Bioinformatics, 24(3):358–66, 2008.
[13] S. Kim and E. P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. Proc. ICML, 2010.
[14] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, 2001.
[15] G. Rätsch and S. Sonnenburg. Large scale hidden semi-Markov SVMs. In Advances in Neural Information Processing Systems 18, 2006.
[16] G. Schweikert, C. Widmer, B. Schölkopf, and G. Rätsch. An empirical analysis of domain adaptation algorithms for genomic sequence analysis. In Advances in Neural Information Processing Systems 21, 2009.
[17] G. Schweikert, A. Zien, G. Zeller, J. Behr, C. Dieterich, C. Ong, P. Philips, F. De Bona, L. Hartmann, A. Bohlen, N. Krüger, S. Sonnenburg, and G. Rätsch. mGene: accurate SVM-based gene finding with an application to nematode genomes. Genome Research, 19(11):2133–43, 2009.
[18] A. Smola, S. Vishwanathan, and Q. Le. Bundle methods for machine learning. In Advances in Neural Information Processing Systems 20, 2008.
[19] C. Teo, S. Vishwanathan, A. Smola, and Q. Le. Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 11:311–365, 2010.
[20] C. Trapnell, B. A. Williams, G. Pertea, A. Mortazavi, G. Kwan, M. J. van Baren, S. L. Salzberg, B. J. Wold, and L. Pachter. Transcript assembly and quantification by RNA-seq reveals unannotated transcripts and isoform switching during cell differentiation. Nature Biotechnology, 28:511–515, 2010.
[21] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun.
Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484, 2005.
[22] C. Widmer, J. Leiva, Y. Altun, and G. Rätsch. Leveraging sequence classification by taxonomy-based multitask learning. In Research in Computational Molecular Biology, 2010.
[23] C. Widmer, N. Toussaint, Y. Altun, and G. Rätsch. Inferring latent task structure for multitask learning by multiple kernel learning. BMC Bioinformatics, 11(Suppl 8):S5, 2010.
Generalized Beta Mixtures of Gaussians

Artin Armagan, Dept. of Statistical Science, Duke University, Durham, NC 27708, artin@stat.duke.edu
David B. Dunson, Dept. of Statistical Science, Duke University, Durham, NC 27708, dunson@stat.duke.edu
Merlise Clyde, Dept. of Statistical Science, Duke University, Durham, NC 27708, clyde@stat.duke.edu

Abstract

In recent years, a rich variety of shrinkage priors have been proposed that have great promise in addressing massive regression problems. In general, these new priors can be expressed as scale mixtures of normals, but have more complex forms and better properties than traditional Cauchy and double exponential priors. We first propose a new class of normal scale mixtures through a novel generalized beta distribution that encompasses many interesting priors as special cases. This encompassing framework should prove useful in comparing competing priors, considering properties and revealing close connections. We then develop a class of variational Bayes approximations through the new hierarchy presented that will scale more efficiently to the types of truly massive data sets that are now encountered routinely.

1 Introduction

Penalized likelihood estimation has evolved into a major area of research, with ℓ1 [22] and other regularization penalties now used routinely in a rich variety of domains. Often minimizing a loss function subject to a regularization penalty leads to an estimator that has a Bayesian interpretation as the mode of a posterior distribution [8, 11, 1, 2], with different prior distributions inducing different penalties. For example, it is well known that Gaussian priors induce ℓ2 penalties, while double exponential priors induce ℓ1 penalties [8, 19, 13, 1].
Viewing massive-dimensional parameter learning and prediction problems from a Bayesian perspective naturally leads one to design new priors that have substantial advantages over the simple normal or double exponential choices and that induce rich new families of penalties. For example, in high-dimensional settings it is often appealing to have a prior that is concentrated at zero, favoring strong shrinkage of small signals and potentially a sparse estimator, while having heavy tails to avoid over-shrinkage of the larger signals. The Gaussian and double exponential priors are insufficiently flexible in having a single scale parameter and relatively light tails; in order to shrink many small signals strongly towards zero, the double exponential must be concentrated near zero and hence will over-shrink signals not close to zero. This phenomenon has motivated a rich variety of new priors such as the normal-exponential-gamma, the horseshoe and the generalized double Pareto [11, 14, 1, 6, 20, 7, 12, 2]. An alternative and widely applied Bayesian framework relies on variable selection priors and Bayesian model selection/averaging [18, 9, 16, 15]. Under such approaches the prior is a mixture of a mass at zero, corresponding to the coefficients to be set equal to zero and hence excluded from the model, and a continuous distribution, providing a prior for the size of the non-zero signals. This paradigm is very appealing in fully accounting for uncertainty in parameter learning and the unknown sparsity structure through a probabilistic framework. One obtains a posterior distribution over the model space corresponding to all possible subsets of predictors, and one can use this posterior for model-averaged predictions that take into account uncertainty in subset selection, and to obtain marginal inclusion probabilities for each predictor, providing a weight of evidence that a specific signal is non-zero while allowing for uncertainty in the inclusion of the other signals.
Unfortunately, the computational complexity is exponential in the number of candidate predictors (2^p, with p the number of predictors). Some recently proposed continuous shrinkage priors may be considered competitors to the conventional mixture priors [15, 6, 7, 12], yielding computationally attractive alternatives to Bayesian model averaging. Continuous shrinkage priors lead to several advantages. The ones represented as scale mixtures of Gaussians allow conjugate block updating of the regression coefficients in linear models and hence lead to substantial improvements in Markov chain Monte Carlo (MCMC) efficiency through more rapid mixing and convergence rates. Under certain conditions these will also yield sparse estimates, if desired, via maximum a posteriori (MAP) estimation and approximate inferences via variational approaches [17, 24, 5, 8, 11, 1, 2]. The class of priors that we consider in this paper encompasses many interesting priors as special cases and reveals interesting connections among different hierarchical formulations. Exploiting an equivalent conjugate hierarchy of this class of priors, we develop a class of variational Bayes approximations that can scale up to truly massive data sets. This conjugate hierarchy also allows for conjugate modeling of some previously proposed priors which have rather complex yet advantageous forms, and facilitates straightforward computation via Gibbs sampling. We also argue intuitively that by adjusting a global shrinkage parameter that controls the overall sparsity level, we may control the number of non-zero parameters to be estimated, enhancing results if there is an underlying sparse structure. This global shrinkage parameter is inherent to the structure of the priors we discuss, as in [6, 7], with close connections to the conventional variable selection priors.
2 Background

We provide a brief background on shrinkage priors, focusing primarily on the priors studied by [6, 7] and [11, 12] as well as the Strawderman-Berger (SB) prior [7]. These priors possess some very appealing properties in contrast to the double exponential prior, which leads to the Bayesian lasso [19, 13]. They may be much heavier-tailed, biasing large signals less drastically while shrinking noise-like signals heavily towards zero. In particular, the priors by [6, 7], along with the Strawderman-Berger prior [7], have a very interesting and intuitive representation later given in (2), yet are not formed in a conjugate manner, potentially leading to analytical and computational complexity.

[6, 7] propose a useful class of priors for the estimation of multiple means. Suppose a p-dimensional vector y|θ ∼ N(θ, I) is observed. The independent hierarchical prior for θ_j is given by

θ_j | τ_j ∼ N(0, τ_j),  τ_j^{1/2} ∼ C+(0, φ^{1/2}),  (1)

for j = 1, . . . , p, where N(µ, ν) denotes a normal distribution with mean µ and variance ν and C+(0, s) denotes a half-Cauchy distribution on ℜ+ with scale parameter s. With an appropriate transformation ρ_j = 1/(1 + τ_j), this hierarchy also can be represented as

θ_j | ρ_j ∼ N(0, 1/ρ_j − 1),  π(ρ_j | φ) ∝ ρ_j^{−1/2} (1 − ρ_j)^{−1/2} · 1/{1 + (φ − 1)ρ_j}.  (2)

A special case where φ = 1 leads to ρ_j ∼ B(1/2, 1/2) (beta distribution), from which the name of the prior, horseshoe (HS), arises [6, 7]. Here the ρ_j are referred to as the shrinkage coefficients, as they determine the magnitude with which the θ_j are pulled toward zero. A prior of the form ρ_j ∼ B(1/2, 1/2) is natural to consider in the estimation of a signal θ_j, as this yields a very desirable behavior both at the tails and in the neighborhood of zero. That is, the resulting prior has heavy tails as well as being unbounded at zero, which creates a strong pull towards zero for those values close to zero. [7] further discuss priors of the form ρ_j ∼ B(a, b) for a > 0, b > 0 to elaborate more on their focus on the choice a = b = 1/2.
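The transformation ρ_j = 1/(1 + τ_j) linking (1) and (2) can be checked empirically. The sketch below (illustrative NumPy code, not from the paper) draws τ_j^{1/2} from the half-Cauchy in (1) with φ = 1 and confirms that the resulting shrinkage coefficients behave like B(1/2, 1/2) draws, which have mean 1/2 and are supported on (0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 1.0
n = 100_000

# tau^{1/2} ~ C+(0, phi^{1/2}): absolute value of a standard Cauchy draw, rescaled
tau_sqrt = np.abs(rng.standard_cauchy(n)) * np.sqrt(phi)
tau = tau_sqrt ** 2
rho = 1.0 / (1.0 + tau)  # shrinkage coefficients

# With phi = 1, rho ~ Beta(1/2, 1/2): mass piles up near 0 and 1, mean 1/2
print(rho.mean())
```

The U-shaped histogram of `rho` is exactly the "horseshoe" shape that gives the prior its name.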
A similar formulation dates back to [21]. [7] refer to the prior of the form ρ_j ∼ B(1, 1/2) as the Strawderman-Berger prior due to [21] and [4]. The same hierarchical prior is also referred to as the quasi-Cauchy prior in [16]. Hence, the tail behavior of the Strawderman-Berger prior remains similar to the horseshoe (when φ = 1), while the behavior around the origin changes. The hierarchy in (2) is much more intuitive than the one in (1), as it explicitly reveals the behavior of the resulting marginal prior on θ_j. This intuitive representation makes these hierarchical priors interesting despite their relatively complex forms. On the other hand, what the prior in (1) or (2) lacks is a more trivial hierarchy that yields recognizable conditional posteriors in linear models.

[11, 12] consider the normal-exponential-gamma (NEG) and normal-gamma (NG) priors respectively, which are formed in a conjugate manner yet lack the intuition the Strawderman-Berger and horseshoe priors provide in terms of the behavior of the density around the origin and at the tails. Hence the implementation of these priors may be more user-friendly, but they are very implicit in how they behave. In what follows we will see that these two forms are not far from one another. In fact, we may unite these two distinct hierarchical formulations under the same class of priors through a generalized beta distribution and the proposed equivalence of hierarchies in the following section. This is rather important to be able to compare the behavior of priors emerging from different hierarchical formulations. Furthermore, this equivalence in the hierarchies will allow for a straightforward Gibbs sampling update in posterior inference, as well as making variational approximations possible in linear models.
3 Equivalence of Hierarchies via a Generalized Beta Distribution

In this section we propose a generalization of the beta distribution to form a flexible class of scale mixtures of normals with very appealing behavior. We then formulate our hierarchical prior in a conjugate manner and reveal similarities and connections to the priors given in [16, 11, 12, 6, 7]. As the name generalized beta has previously been used, we refer to our generalization as the three-parameter beta (TPB) distribution. In the forthcoming text Γ(.) denotes the gamma function, G(µ, ν) denotes a gamma distribution with shape and rate parameters µ and ν, W(ν, S) denotes a Wishart distribution with ν degrees of freedom and scale matrix S, U(α1, α2) denotes a uniform distribution over (α1, α2), GIG(µ, ν, ξ) denotes a generalized inverse Gaussian distribution with density function (ν/ξ)^{µ/2} {2K_µ(√(νξ))}^{−1} x^{µ−1} exp{−(νx + ξ/x)/2}, and K_µ(.) is a modified Bessel function of the second kind.

Definition 1. The three-parameter beta (TPB) distribution for a random variable X is defined by the density function

f(x; a, b, φ) = [Γ(a + b) / {Γ(a)Γ(b)}] φ^b x^{b−1} (1 − x)^{a−1} {1 + (φ − 1)x}^{−(a+b)},  (3)

for 0 < x < 1, a > 0, b > 0 and φ > 0, and is denoted by TPB(a, b, φ).

It can be easily shown by a change of variable x = 1/(y + 1) that the above density integrates to 1. The kth moment of the TPB distribution is given by

E(X^k) = [Γ(a + b)Γ(b + k) / {Γ(b)Γ(a + b + k)}] 2F1(a + b, b + k; a + b + k; 1 − φ),  (4)

where 2F1 denotes the hypergeometric function. In fact it can be shown that TPB is a subclass of the Gauss hypergeometric (GH) distribution proposed in [3] and the compound confluent hypergeometric (CCH) distribution proposed in [10].
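As a quick sanity check, the density in (3) can be transcribed directly and its normalization verified numerically. This is an illustrative sketch; the parameter choice a = 2, b = 3, φ = 0.5 is ours and keeps the density bounded, so a crude midpoint rule suffices.

```python
import math

def tpb_pdf(x, a, b, phi):
    """Three-parameter beta density f(x; a, b, phi) from (3)."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b)) * phi ** b
    return const * x ** (b - 1) * (1 - x) ** (a - 1) * (1 + (phi - 1) * x) ** (-(a + b))

# Midpoint-rule check that the density integrates to 1 over (0, 1)
n = 100_000
total = sum(tpb_pdf((i + 0.5) / n, 2.0, 3.0, 0.5) for i in range(n)) / n
print(total)
```

For a < 1 or b < 1 the density is unbounded at an endpoint and a quadrature rule that handles endpoint singularities would be needed instead.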
The density functions of the GH and CCH distributions are given by

f_GH(x; a, b, r, ζ) = x^{b−1} (1 − x)^{a−1} (1 + ζx)^{−r} / {B(b, a) 2F1(r, b; a + b; −ζ)},  (5)

f_CCH(x; a, b, r, s, ν, θ) = ν^b x^{b−1} (1 − x)^{a−1} {θ + (1 − θ)νx}^{−r} / {B(b, a) exp(−s/ν) Φ1(a, r, a + b, s/ν, 1 − θ)},  (6)

for 0 < x < 1 and 0 < x < 1/ν, respectively, where B(b, a) = Γ(a)Γ(b)/Γ(a + b) denotes the beta function and Φ1 is the degenerate hypergeometric function of two variables [10]. Letting ζ = φ − 1, r = a + b and noting that 2F1(a + b, b; a + b; 1 − φ) = φ^{−b}, (5) becomes a TPB density. Also note that (6) becomes (5) for s = 1, ν = 1 and ζ = (1 − θ)/θ [10]. [20] considered an alternative special case of the CCH distribution for the shrinkage coefficients, ρ_j, by letting ν = r = 1 in (6). [20] refer to this special case as the hypergeometric-beta (HB) distribution. TPB and HB generalize the beta distribution in two distinct directions, with one practical advantage of the TPB being that it allows for a straightforward conjugate hierarchy, leading to potentially substantial analytical and computational gains.

Now we move on to the hierarchical modeling of a flexible class of shrinkage priors for the estimation of a potentially sparse p-vector. Suppose a p-dimensional vector y|θ ∼ N(θ, I) is observed where θ = (θ_1, . . . , θ_p)′ is of interest. Now we define a shrinkage prior that is obtained by mixing a normal distribution over its scale parameter with the TPB distribution.

Definition 2. The TPB normal scale mixture representation for the distribution of random variable θ_j is given by

θ_j | ρ_j ∼ N(0, 1/ρ_j − 1),  ρ_j ∼ TPB(a, b, φ),  (7)

where a > 0, b > 0 and φ > 0. The resulting marginal distribution on θ_j is denoted by TPBN(a, b, φ).

Figure 1 illustrates the density on ρ_j for varying values of a, b and φ. Note that the special case a = b = 1/2 in Figure 1(a) gives the horseshoe prior. Also when a = φ = 1 and b = 1/2, this representation yields the Strawderman-Berger prior.
For a fixed value of φ, smaller a values yield a density on θ_j that is more peaked at zero, while smaller values of b yield a density on θ_j that is heavier-tailed. For fixed values of a and b, decreasing φ shifts the mass of the density on ρ_j from left to right, suggesting more support for stronger shrinkage; at the same time, the density assigned in the neighborhood of θ_j = 0 increases, making the overall density lighter-tailed. We next propose the equivalence of three hierarchical representations, revealing a wide class of priors encompassing many of those mentioned earlier.

Proposition 1. If θ_j ∼ TPBN(a, b, φ), then
1) θ_j ∼ N(0, τ_j), τ_j ∼ G(a, λ_j) and λ_j ∼ G(b, φ);
2) θ_j ∼ N(0, τ_j), π(τ_j) = [Γ(a + b) / {Γ(a)Γ(b)}] φ^{−a} τ_j^{a−1} (1 + τ_j/φ)^{−(a+b)}, which implies that τ_j/φ ∼ β′(a, b), the inverted beta distribution with parameters a and b.

The equivalence given in Proposition 1 is significant, as it makes the work in Section 4 possible under the TPB normal scale mixtures as well as further revealing connections among previously proposed shrinkage priors. It provides a rich class of priors leading to great flexibility in terms of the induced shrinkage and makes it clear that this new class of priors can be considered a simultaneous extension of the work by [11, 12] and [6, 7]. It is worth mentioning that the hierarchical prior(s) given in Proposition 1 are different from the approach taken by [12] in how we handle the mixing. In particular, the first hierarchy presented in Proposition 1 is identical to the NG prior up to the first-stage mixing. While fixing the values of a and b, we further mix over λ_j (rather than a global λ) and further over φ if desired, as will be discussed later. φ acts as a global shrinkage parameter in the hierarchy. On the other hand, [12] choose to further mix over a and a global λ while fixing the values of b and φ. By doing so, they forfeit a complete conjugate structure and an explicit control over the tail behavior of π(θ_j).
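Proposition 1 can be probed numerically. The sketch below (illustrative; the parameter values are our own) draws τ_j from hierarchy 1 (a gamma mixed over a gamma rate) and from hierarchy 2 (via the inverted-beta representation, using the standard fact that X/(1 − X) ∼ β′(a, b) when X ∼ B(a, b)), and compares the implied shrinkage coefficients ρ_j = 1/(1 + τ_j).

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, phi, n = 1.0, 0.5, 2.0, 200_000

# Hierarchy 1: tau ~ G(a, lambda_j), lambda_j ~ G(b, phi)
# (rate parameterization, so numpy's scale argument is 1/rate)
lam = rng.gamma(b, 1.0 / phi, n)
tau1 = rng.gamma(a, 1.0 / lam, n)

# Hierarchy 2: tau/phi ~ beta-prime(a, b), sampled via a Beta(a, b) transform
u = rng.beta(a, b, n)
tau2 = phi * u / (1.0 - u)

rho1 = 1.0 / (1.0 + tau1)
rho2 = 1.0 / (1.0 + tau2)
print(rho1.mean(), rho2.mean())
```

With matched seeds and large n, the two empirical distributions of ρ_j agree to within Monte Carlo error, as the proposition predicts.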
As a direct corollary to Proposition 1, we observe a possible equivalence between the SB and the NEG priors.

Corollary 1. If a = 1 in Proposition 1, then TPBN ≡ NEG. If (a, b, φ) = (1, 1/2, 1) in Proposition 1, then TPBN ≡ SB ≡ NEG.

An interesting, yet expected, observation on Proposition 1 is that a half-Cauchy prior can be represented as a scale mixture of gamma distributions, i.e. if τ_j ∼ G(1/2, λ_j) and λ_j ∼ G(1/2, φ), then τ_j^{1/2} ∼ C+(0, φ^{1/2}). This makes sense, as τ_j^{1/2} | λ_j has a half-normal distribution and the mixing distribution on the precision parameter is gamma with shape parameter 1/2. [7] further place a half-Cauchy prior on φ^{1/2} to complete the hierarchy. The aforementioned observation helps us formulate the complete hierarchy proposed in [7] in a conjugate manner. This should bring analytical and computational advantages, as well as making the application of the procedure much easier for the average user without the need for a relatively more complex sampling scheme.

Corollary 2. If θ_j ∼ N(0, τ_j), τ_j^{1/2} ∼ C+(0, φ^{1/2}) and φ^{1/2} ∼ C+(0, 1), then θ_j ∼ TPBN(1/2, 1/2, φ), φ ∼ G(1/2, ω) and ω ∼ G(1/2, 1).

Hence, disregarding the different treatments of the higher-level hyper-parameters, we have shown that the class of priors given in Definition 1 unites the priors in [16, 11, 12, 6, 7] under one family and reveals their close connections through the equivalence of hierarchies given in Proposition 1. The first hierarchy in Proposition 1 makes much of the work possible in the following sections.

Figure 1: (a, b) = {(1/2, 1/2), (1, 1/2), (1, 1), (1/2, 2), (2, 2), (5, 2)} for (a)-(f) respectively. φ = {1/10, 1/9, 1/8, 1/7, 1/6, 1/5, 1/4, 1/3, 1/2, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10} considered for all pairs of a and b. The line corresponding to the lowest value of φ is drawn with a dashed line.
4 Estimation and Posterior Inference in Regression Models

4.1 Fully Bayes and Approximate Inference

Consider the linear regression model, y = Xβ + ϵ, where y is an n-dimensional vector of responses, X is the n × p design matrix and ϵ is an n-dimensional vector of independent residuals which are normally distributed, N(0, σ²I_n), with variance σ². We place the hierarchical prior given in Proposition 1 on each β_j, i.e. β_j ∼ N(0, σ²τ_j), τ_j ∼ G(a, λ_j), λ_j ∼ G(b, φ). φ is used as a global shrinkage parameter common to all β_j, and may be inferred using the data. Thus we follow the hierarchy by letting φ ∼ G(1/2, ω), ω ∼ G(1/2, 1), which implies φ^{1/2} ∼ C+(0, 1), identical to what was used in [7] at this level of the hierarchy. However, we do not believe that the choice of prior at this level of the hierarchy will have a huge impact on the results. Although treating φ as unknown may be reasonable, when there exists some prior knowledge it is appropriate to fix a φ value to reflect our prior belief in terms of the underlying sparsity of the coefficient vector. This sounds rather natural as soon as one starts seeing φ as a parameter that governs the multiplicity adjustment, as discussed in [7]. Note also that here we form the dependence on the error variance at a lower level of the hierarchy, rather than forming it in the prior of φ as done in [7]. If we let a = b = 1/2, we will have formulated the hierarchical prior given in [7] in a completely conjugate manner. We also let σ^{−2} ∼ G(c_0/2, d_0/2). Under a normal likelihood, an efficient Gibbs sampler may be obtained, as the full conditional posteriors can be extracted:

β | y, X, σ², τ_1, . . . , τ_p ∼ N(µ_β, V_β),
σ^{−2} | y, X, β, τ_1, . . . , τ_p ∼ G(c*, d*),
τ_j | β_j, σ², λ_j ∼ GIG(a − 1/2, 2λ_j, β_j²/σ²),
λ_j | τ_j, φ ∼ G(a + b, τ_j + φ),
φ | λ_1, . . . , λ_p, ω ∼ G(pb + 1/2, Σ_{j=1}^p λ_j + ω),
ω | φ ∼ G(1, φ + 1),

where µ_β = (X′X + T^{−1})^{−1} X′y, V_β = σ²(X′X + T^{−1})^{−1}, c* = (n + p + c_0)/2, d* = {(y − Xβ)′(y − Xβ) + β′T^{−1}β + d_0}/2 and T = diag(τ_1, . . . , τ_p).
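To illustrate the conjugate block update for β above, here is a minimal sketch (our own construction with made-up data; it is not the paper's code): with a very diffuse prior (large τ_j, so T^{−1} is nearly zero), the posterior mean µ_β = (X′X + T^{−1})^{−1}X′y approaches the least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 5
X = rng.standard_normal((n, p))
y = X @ np.ones(p) + rng.standard_normal(n)

tau = np.full(p, 1e6)                     # very diffuse prior on beta
sigma2 = 1.0
A = X.T @ X + np.diag(1.0 / tau)          # X'X + T^{-1}
mu_beta = np.linalg.solve(A, X.T @ y)     # posterior mean
V_beta = sigma2 * np.linalg.inv(A)        # posterior covariance
beta_draw = rng.multivariate_normal(mu_beta, V_beta)
```

In the Gibbs sampler, this draw is alternated with the τ_j, λ_j, φ and ω updates listed above.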
As an alternative to MCMC and Laplace approximations [23], a lower bound on marginal likelihoods may be obtained via variational methods [17], yielding approximate posterior distributions on the model parameters. Using a similar approach to [5, 1], the approximate marginal posterior distributions of the parameters are given by

β ∼ N(µ_β, V_β),
σ^{−2} ∼ G(c*, d*),
τ_j ∼ GIG(a − 1/2, 2⟨λ_j⟩, ⟨σ^{−2}⟩⟨β_j²⟩),
λ_j ∼ G(a + b, ⟨τ_j⟩ + ⟨φ⟩),
φ ∼ G(pb + 1/2, ⟨ω⟩ + Σ_{j=1}^p ⟨λ_j⟩),
ω ∼ G(1, ⟨φ⟩ + 1),

where µ_β = ⟨β⟩ = (X′X + T^{−1})^{−1} X′y, V_β = ⟨σ^{−2}⟩^{−1} (X′X + T^{−1})^{−1}, T^{−1} = diag(⟨τ_1^{−1}⟩, . . . , ⟨τ_p^{−1}⟩), c* = (n + p + c_0)/2, d* = (y′y − 2y′X⟨β⟩ + Σ_{i=1}^n x_i′⟨ββ′⟩x_i + Σ_{j=1}^p ⟨β_j²⟩⟨τ_j^{−1}⟩ + d_0)/2, ⟨ββ′⟩ = V_β + ⟨β⟩⟨β′⟩, ⟨σ^{−2}⟩ = c*/d*, ⟨λ_j⟩ = (a + b)/(⟨τ_j⟩ + ⟨φ⟩), ⟨φ⟩ = (pb + 1/2)/(⟨ω⟩ + Σ_{j=1}^p ⟨λ_j⟩), ⟨ω⟩ = 1/(⟨φ⟩ + 1) and

⟨τ_j⟩ = (⟨σ^{−2}⟩⟨β_j²⟩)^{1/2} K_{a+1/2}({2⟨λ_j⟩⟨σ^{−2}⟩⟨β_j²⟩}^{1/2}) / [(2⟨λ_j⟩)^{1/2} K_{a−1/2}({2⟨λ_j⟩⟨σ^{−2}⟩⟨β_j²⟩}^{1/2})],

⟨τ_j^{−1}⟩ = (2⟨λ_j⟩)^{1/2} K_{3/2−a}({2⟨λ_j⟩⟨σ^{−2}⟩⟨β_j²⟩}^{1/2}) / [(⟨σ^{−2}⟩⟨β_j²⟩)^{1/2} K_{1/2−a}({2⟨λ_j⟩⟨σ^{−2}⟩⟨β_j²⟩}^{1/2})].

This procedure consists of initializing the moments and iterating through them until some convergence criterion is reached. The deterministic nature of these approximations makes them attractive as a quick alternative to MCMC. The conjugate modeling approach we have taken allows for a very straightforward implementation of Strawderman-Berger and horseshoe priors or, more generally, TPB normal scale mixture priors in regression models, without the need for a more sophisticated sampling scheme, which may ultimately attract more audiences towards the use of these more flexible and carefully defined normal scale mixture priors.

4.2 Sparse Maximum a Posteriori Estimation

Although not our main focus, many readers are interested in sparse solutions; hence we give the following brief discussion. Given a, b and φ, maximum a posteriori (MAP) estimation is rather straightforward via a simple expectation-maximization (EM) procedure.
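The Bessel-function moment used in the variational updates above can be checked numerically. The sketch below (our own, assuming SciPy is available) verifies that for τ ∼ GIG(a − 1/2, ν, ξ), the mean is (ξ/ν)^{1/2} K_{a+1/2}(s)/K_{a−1/2}(s) with s = (νξ)^{1/2}, by comparing it against direct numerical integration of the GIG kernel.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

a, nu, xi = 1.0, 2.0, 3.0        # arbitrary illustrative values
s = np.sqrt(nu * xi)

# Closed form: E[tau] for tau ~ GIG(a - 1/2, nu, xi)
mean_bessel = np.sqrt(xi / nu) * kv(a + 0.5, s) / kv(a - 0.5, s)

# Unnormalized GIG kernel x^{p-1} exp{-(nu x + xi/x)/2} with p = a - 1/2
def gig_kernel(x):
    return x ** (a - 1.5) * np.exp(-(nu * x + xi / x) / 2.0)

num, _ = quad(lambda x: x * gig_kernel(x), 0.0, np.inf)
den, _ = quad(gig_kernel, 0.0, np.inf)
mean_numeric = num / den
```

The same pattern, with shifted Bessel orders, yields ⟨τ_j^{−1}⟩.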
This is accomplished in a similar manner to [8], by obtaining the joint MAP estimates of the error variance and the regression coefficients having taken the expectation with respect to the conditional posterior distribution of τ_j^{−1}, using the second hierarchy given in Proposition 1. The kth expectation step then consists of calculating

⟨τ_j^{−1}⟩^{(k)} = [∫_0^∞ τ_j^{a−1/2} (1 + τ_j/φ)^{−(a+b)} exp{−β_j^{2(k−1)}/(2σ²_{(k−1)}τ_j)} dτ_j^{−1}] / [∫_0^∞ τ_j^{a+1/2} (1 + τ_j/φ)^{−(a+b)} exp{−β_j^{2(k−1)}/(2σ²_{(k−1)}τ_j)} dτ_j^{−1}],  (8)

where β_j^{2(k−1)} and σ²_{(k−1)} denote the modal estimates of the jth component of β and the error variance σ² at iteration (k − 1). The solution to (8) may be expressed in terms of some special function(s) for changing values of a, b and φ. b < 1 is a good choice, as it will keep the tails of the marginal density on β_j heavy. A careful choice of a, on the other hand, is essential to sparse estimation. The admissible values of a for sparse estimation are apparent from the representation in Definition 2, noting that for any a > 1, π(ρ_j = 1) = 0, i.e. β_j may never be shrunk exactly to zero. Hence for sparse estimation it is essential that 0 < a ≤ 1. Figure 2(a) and (b) give the prior densities on ρ_j for b = 1/2, φ = 1 and a = {1/2, 1, 3/2}, and the resulting marginal prior densities on β_j. These marginal densities are given by

π(β_j) =
{1/(√2 π^{3/2})} e^{β_j²/2} Γ(0, β_j²/2),  for a = 1/2,
1/√(2π) − (|β_j|/2) e^{β_j²/2} + (β_j/2) e^{β_j²/2} Erf(β_j/√2),  for a = 1,
{√2/π^{3/2}} {1 − (1/2) e^{β_j²/2} β_j² Γ(0, β_j²/2)},  for a = 3/2,

where Erf(.) denotes the error function and Γ(s, z) = ∫_z^∞ t^{s−1} e^{−t} dt is the incomplete gamma function. Figure 2 clearly illustrates that while all three cases have very similar tail behavior, their behavior around the origin differs drastically.

Figure 2: Prior densities of (a) ρ_j and (b) β_j for a = 1/2 (solid), a = 1 (dashed) and a = 3/2 (long dash).
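The closed-form marginal for a = 1/2 (with b = 1/2 and φ = 1, the horseshoe case) can be verified against the mixture representation in Definition 2. The sketch below is our own check, assumes SciPy is available, and uses the identity Γ(0, x) = E1(x); it compares the closed form at β = 1 with direct numerical mixing over ρ ∼ B(1/2, 1/2).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def marginal_closed(beta):
    """pi(beta) = (2 pi^3)^{-1/2} exp(beta^2/2) Gamma(0, beta^2/2) for a = 1/2."""
    x = beta ** 2 / 2.0
    return np.exp(x) * exp1(x) / (np.sqrt(2.0) * np.pi ** 1.5)

def mixture_integrand(rho, beta):
    """N(beta; 0, 1/rho - 1) weighted by the Beta(1/2, 1/2) density of rho."""
    var = 1.0 / rho - 1.0
    normal = np.exp(-beta ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    beta_half = rho ** -0.5 * (1.0 - rho) ** -0.5 / np.pi
    return normal * beta_half

closed = marginal_closed(1.0)
mixed, _ = quad(mixture_integrand, 0.0, 1.0, args=(1.0,), limit=200)
print(closed, mixed)
```

The agreement confirms the heavy-tailed, pole-at-zero marginal plotted in Figure 2(b).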
5 Experiments

Throughout this section we use the Jeffreys' prior on the error precision by setting c_0 = d_0 = 0. We generate data for two cases, (n, p) = {(50, 20), (250, 100)}, from y_i = x_i′β* + ϵ_i, for i = 1, . . . , n, where β* is a p-vector that on average contains 20q non-zero elements, indexed by the set A = {j : β*_j ≠ 0} for some random q ∈ (0, 1). We randomize the procedure in the following manner: (i) C ∼ W(p, I_{p×p}), (ii) x_i ∼ N(0, C), (iii) q ∼ B(1, 1) for the first case and q ∼ B(1, 4) for the second, (iv) I(j ∈ A) ∼ Bernoulli(q) for j = 1, . . . , p, where I(.) denotes the indicator function, (v) for j ∈ A, β_j ∼ U(0, 6) and for j ∉ A, β_j = 0, and finally (vi) ϵ_i ∼ N(0, σ²) where σ ∼ U(0, 6). We generated 1000 data sets for each case, resulting in median signal-to-noise ratios of approximately 3.3 and 4.5. We obtain the estimate of the regression coefficients, β̂, using the variational Bayes procedure and measure the performance by the model error, which is calculated as (β* − β̂)′C(β* − β̂). Figure 3(a) and (b) display the median relative model error (RME) values (with their distributions obtained via bootstrapping), obtained by dividing the model error observed from our procedures by that of ℓ1 regularization (lasso) tuned by 10-fold cross-validation. The boxplots in Figure 3(a) and (b) correspond to different (a, b, φ) values, where C+ signifies that φ is treated as unknown with a half-Cauchy prior as given earlier in Section 4.1. It is worth mentioning that we attain a clearly superior performance compared to the lasso, particularly in the second case, despite the fact that the estimator resulting from the variational Bayes procedure is not a thresholding rule. Note that the b = 1 choice leads to much better performance under Case 2 than Case 1. This is due to the fact that Case 2 involves a much sparser underlying setup on average than Case 1, and the lighter tails attained by setting b = 1 lead to stronger shrinkage.
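The simulation design of Case 1 can be re-created along the following lines. This is an illustrative sketch of steps (i)-(vi): since NumPy has no Wishart sampler, we use the equivalent construction C = G′G with G a p × p standard normal matrix, which gives C ∼ W(p, I_{p×p}).

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 20

G = rng.standard_normal((p, p))
C = G.T @ G                                          # (i)  C ~ W(p, I_p)
X = rng.multivariate_normal(np.zeros(p), C, size=n)  # (ii) rows x_i ~ N(0, C)

q = rng.beta(1.0, 1.0)                               # (iii) Case 1: q ~ B(1, 1)
active = rng.random(p) < q                           # (iv)  I(j in A) ~ Bernoulli(q)
beta_star = np.where(active, rng.uniform(0.0, 6.0, p), 0.0)  # (v)
sigma = rng.uniform(0.0, 6.0)
y = X @ beta_star + rng.normal(0.0, sigma, n)        # (vi)

def model_error(beta_hat):
    """Model error (beta* - beta_hat)' C (beta* - beta_hat)."""
    d = beta_star - beta_hat
    return d @ C @ d
```

Dividing `model_error` of a fitted estimator by that of a cross-validated lasso gives the relative model error plotted in Figure 3.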
To give a high-dimensional example, we also generate a data set from the model y_i = x_i′β* + ϵ_i, for i = 1, . . . , 100, where β* is a 10000-dimensional very sparse vector with 10 randomly chosen components set to 3, ϵ_i ∼ N(0, 3²) and x_ij ∼ N(0, 1) for j = 1, . . . , p. This β* choice leads to a signal-to-noise ratio of 3.16. For the particular data set we generated, the randomly chosen components of β* set to be non-zero were indexed by 1263, 2199, 2421, 4809, 5530, 7483, 7638, 7741, 7891 and 8187. We set (a, b, φ) = (1, 1/2, 10^{−4}), which implies that a priori P(ρ_j > 0.5) = 0.99, placing much more density in the neighborhood of ρ_j = 1 (total shrinkage). This choice is due to the fact that n/p = 0.01 and roughly reflects that we do not want any more than 100 predictors in the resulting model. Hence φ is used, a priori, to limit the number of predictors in the model in relation to the sample size. Also note that with a = 1, the conditional posterior distribution of τ_j^{−1} reduces to an inverse Gaussian. Since we are adjusting the global shrinkage parameter, φ, a priori, and it is chosen such that P(ρ_j > 0.5) = 0.99, whether a = 1/2 or a = 1 should not matter. We first run the Gibbs sampler for 100000 iterations (2.4 hours on a computer with a 2.8 GHz CPU and 12 GB of RAM using Matlab), discard the first 20000, and thin the rest by keeping every 5th sample to obtain the posteriors of the parameters. We observed that the chain converged by the 10000th iteration. For comparison purposes, we also ran the variational Bayes procedure using the values from the converged chain as the initial points (80 seconds). Figure 4 gives the posterior means attained by sampling and the variational approximation.
Figure 3: Relative ME at different (a, b, φ) values for (a) Case 1 and (b) Case 2. Boxplots are shown for (a, b, φ) ∈ {(.5, .5, C+), (1, .5, C+), (.5, .5, 1), (1, .5, 1), (.5, 1, C+), (1, 1, C+), (.5, 1, 1), (1, 1, 1)}.

Figure 4: Posterior mean of β by sampling (square) and by approximate inference (circle). The estimates corresponding to the zero elements of β* are plotted with smaller shapes to prevent clutter.

We see that in both cases the procedure is able to pick up the larger signals and shrink a significantly large portion of the rest towards zero. The approximate inference results are in accordance with the results from the Gibbs sampler. It should be noted that using a good informed guess on φ, rather than treating it as an unknown in this high-dimensional setting, improves the performance drastically.

6 Discussion

We conclude that the proposed hierarchical prior formulation constitutes a useful encompassing framework for understanding the behavior of different scale mixtures of normals and connecting them under a broader family of hierarchical priors. While ℓ1 regularization, namely the lasso, arising from a double exponential prior in the Bayesian framework, yields certain computational advantages, it demonstrates much inferior estimation performance relative to the more carefully formulated scale mixtures of normals. The proposed equivalence of the hierarchies in Proposition 1 makes computation much easier for the TPB scale mixtures of normals. As for the choice of hyper-parameters, we recommend a ∈ (0, 1] and b ∈ (0, 1); in particular (a, b) = {(1/2, 1/2), (1, 1/2)}.
These choices guarantee that the resulting prior has a kink at zero, which is essential for sparse estimation, and leads to heavy tails that avoid unnecessary bias in large signals (recall that a choice of b = 1/2 yields Cauchy-like tails). In problems where oracle knowledge on sparsity exists or when p ≫ n, we recommend that φ be fixed at a reasonable value to reflect an appropriate sparsity constraint, as mentioned in Section 5.

Acknowledgments

This work was supported by Award Number R01ES017436 from the National Institute of Environmental Health Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Environmental Health Sciences or the National Institutes of Health.

References

[1] A. Armagan. Variational bridge regression. JMLR: W&CP, 5:17–24, 2009.
[2] A. Armagan, D. B. Dunson, and J. Lee. Generalized double Pareto shrinkage. arXiv:1104.0861v2, 2011.
[3] C. Armero and M. J. Bayarri. Prior assessments for prediction in queues. The Statistician, 43(1):139–153, 1994.
[4] J. Berger. A robust generalized Bayes estimator and confidence region for a multivariate normal mean. The Annals of Statistics, 8(4):716–761, 1980.
[5] C. M. Bishop and M. E. Tipping. Variational relevance vector machines. In UAI '00: Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 46–53, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
[6] C. M. Carvalho, N. G. Polson, and J. G. Scott. Handling sparsity via the horseshoe. JMLR: W&CP, 5, 2009.
[7] C. M. Carvalho, N. G. Polson, and J. G. Scott. The horseshoe estimator for sparse signals. Biometrika, 97(2):465–480, 2010.
[8] M. A. T. Figueiredo. Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25:1150–1159, 2003.
[9] E. I. George and R. E. McCulloch. Variable selection via Gibbs sampling.
Journal of the American Statistical Association, 88, 1993.
[10] M. Gordy. A generalization of generalized beta distributions. Finance and Economics Discussion Series 1998-18, Board of Governors of the Federal Reserve System (U.S.), 1998.
[11] J. E. Griffin and P. J. Brown. Bayesian adaptive lassos with non-convex penalization. Technical Report, 2007.
[12] J. E. Griffin and P. J. Brown. Inference with normal-gamma prior distributions in regression problems. Bayesian Analysis, 5(1):171–188, 2010.
[13] C. Hans. Bayesian lasso regression. Biometrika, 96:835–845, 2009.
[14] C. J. Hoggart, J. C. Whittaker, M. De Iorio, and D. J. Balding. Simultaneous analysis of all SNPs in genome-wide and re-sequencing association studies. PLoS Genetics, 4(7), 2008.
[15] H. Ishwaran and J. S. Rao. Spike and slab variable selection: Frequentist and Bayesian strategies. The Annals of Statistics, 33(2):730–773, 2005.
[16] I. M. Johnstone and B. W. Silverman. Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences. Annals of Statistics, 32(4):1594–1649, 2004.
[17] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. MIT Press, Cambridge, MA, USA, 1999.
[18] T. J. Mitchell and J. J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[19] T. Park and G. Casella. The Bayesian lasso. Journal of the American Statistical Association, 103:681–686, 2008.
[20] N. G. Polson and J. G. Scott. Alternative global-local shrinkage rules using hypergeometric-beta mixtures. Discussion Paper 2009-14, Department of Statistical Science, Duke University, 2009.
[21] W. E. Strawderman. Proper Bayes minimax estimators of the multivariate normal mean. The Annals of Mathematical Statistics, 42(1):385–388, 1971.
[22] R. Tibshirani. Regression shrinkage and selection via the lasso.
Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[23] L. Tierney and J. B. Kadane. Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81(393):82–86, 1986.
[24] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 2001.
Variational Gaussian Process Dynamical Systems

Andreas C. Damianou∗, Department of Computer Science, University of Sheffield, UK. andreas.damianou@sheffield.ac.uk
Michalis K. Titsias, School of Computer Science, University of Manchester, UK. mtitsias@gmail.com
Neil D. Lawrence∗, Department of Computer Science, University of Sheffield, UK. N.Lawrence@dcs.shef.ac.uk

Abstract

High dimensional time series are endemic in applications of machine learning such as robotics (sensor data), computational biology (gene expression data), vision (video sequences) and graphics (motion capture data). Practical nonlinear probabilistic approaches to this data are required. In this paper we introduce the variational Gaussian process dynamical system. Our work builds on recent variational approximations for Gaussian process latent variable models to allow for nonlinear dimensionality reduction simultaneously with learning a dynamical prior in the latent space. The approach also allows for the appropriate dimensionality of the latent space to be automatically determined. We demonstrate the model on a human motion capture data set and a series of high resolution video sequences.

1 Introduction

Nonlinear probabilistic modeling of high dimensional time series data is a key challenge for the machine learning community. A standard approach is to simultaneously apply a nonlinear dimensionality reduction to the data whilst governing the latent space with a nonlinear temporal prior. The key difficulty for such approaches is that analytic marginalization of the latent space is typically intractable. Markov chain Monte Carlo approaches can also be problematic, as latent trajectories are strongly correlated, making efficient sampling a challenge. One promising approach to these time series has been to extend the Gaussian process latent variable model [1, 2] with a dynamical prior for the latent space and seek a maximum a posteriori (MAP) solution for the latent points [3, 4, 5].
Ko and Fox [6] further extend these models for fully Bayesian filtering in a robotics setting. We refer to this class of dynamical models based on the GP-LVM as Gaussian process dynamical systems (GPDS). However, the use of a MAP approximation for training these models presents key problems. Firstly, since the latent variables are not marginalised, the parameters of the dynamical prior cannot be optimized without the risk of overfitting. Further, the dimensionality of the latent space cannot be determined by the model: adding further dimensions always increases the likelihood of the data. In this paper we build on recent developments in variational approximations for Gaussian processes [7, 8] to introduce a variational Gaussian process dynamical system (VGPDS) where latent variables are approximately marginalized through optimization of a rigorous lower bound on the marginal likelihood. As well as providing a principled approach to handling uncertainty in the latent space, this allows both the parameters of the latent dynamical process and the dimensionality of the latent space to be determined. The approximation enables the application of our model to time series containing millions of dimensions and thousands of time points. We illustrate this by modeling human motion capture data and high dimensional video sequences.

∗Also at the Sheffield Institute for Translational Neuroscience, University of Sheffield, UK.

2 The Model

Assume a multivariate time series dataset {y_n, t_n}, n = 1, . . . , N, where y_n ∈ R^D is a data vector observed at time t_n ∈ R_+. We are especially interested in cases where each y_n is a high dimensional vector and, therefore, we assume that there exists a low dimensional manifold that governs the generation of the data.
Specifically, a temporal latent function x(t) ∈ R^Q (with Q ≪ D) governs an intermediate hidden layer when generating the data, and the dth feature of the data vector y_n is then produced from x_n = x(t_n) according to

y_nd = f_d(x_n) + ϵ_nd, ϵ_nd ∼ N(0, β^−1), (1)

where f_d(x) is a latent mapping from the low dimensional space to the dth dimension of the observation space and β is the inverse variance of the white Gaussian noise. We do not want to make strong assumptions about the functional form of the latent functions (x, f).¹ Instead we would like to infer them in a fully Bayesian non-parametric fashion using Gaussian processes [9]. Therefore, we assume that x is a multivariate Gaussian process indexed by time t and f is a different multivariate Gaussian process indexed by x, and we write

x_q(t) ∼ GP(0, k_x(t_i, t_j)), q = 1, . . . , Q, (2)
f_d(x) ∼ GP(0, k_f(x_i, x_j)), d = 1, . . . , D. (3)

Here, the individual components of the latent function x are taken to be independent sample paths drawn from a Gaussian process with covariance function k_x(t_i, t_j). Similarly, the components of f are independent draws from a Gaussian process with covariance function k_f(x_i, x_j). These covariance functions, parametrized by θ_x and θ_f respectively, play very distinct roles in the model. More precisely, k_x determines the properties of each temporal latent function x_q(t). For instance, the use of an Ornstein–Uhlenbeck covariance function yields a Gauss–Markov process for x_q(t), while the squared-exponential covariance function gives rise to very smooth and non-Markovian processes. In our experiments, we will focus on the squared exponential covariance function (RBF), the Matérn 3/2 which is only once differentiable, and a periodic covariance function [9, 10] which can be used when data exhibit strong periodicity. These covariance functions take the form:

k_x(rbf)(t_i, t_j) = σ²_rbf exp(−(t_i − t_j)² / (2l_t²)),
k_x(mat)(t_i, t_j) = σ²_mat (1 + √3 |t_i − t_j| / l_t) exp(−√3 |t_i − t_j| / l_t),
k_x(per)(t_i, t_j) = σ²_per exp(−(1/2) sin²((2π/T)(t_i − t_j)) / l_t). (4)

The covariance function k_f determines the properties of the latent mapping f that maps each low dimensional variable x_n to the observed vector y_n. We wish this mapping to be non-linear but smooth, and thus a suitable choice is the squared exponential covariance function

k_f(x_i, x_j) = σ²_ard exp(−(1/2) Σ_{q=1}^{Q} w_q (x_i,q − x_j,q)²), (5)

which assumes a different scale w_q for each latent dimension. This, as in the variational Bayesian formulation of the GP-LVM [8], enables an automatic relevance determination (ARD) procedure, i.e. it allows Bayesian training to "switch off" unnecessary dimensions by driving the values of the corresponding scales to zero. The matrix Y ∈ R^{N×D} will collectively denote all observed data, so that its nth row corresponds to the data point y_n. Similarly, the matrix F ∈ R^{N×D} will denote the mapping latent variables, i.e. f_nd = f_d(x_n), associated with observations Y from (1). Analogously, X ∈ R^{N×Q} will store all low dimensional latent variables x_nq = x_q(t_n). Further, we will refer to columns of these matrices by the vectors y_d, f_d, x_q ∈ R^N. Given the latent variables we assume independence over the data features, and given time we assume independence over latent dimensions, to give

p(Y, F, X|t) = p(Y|F) p(F|X) p(X|t) = Π_{d=1}^{D} p(y_d|f_d) p(f_d|X) Π_{q=1}^{Q} p(x_q|t), (6)

where t ∈ R^N and p(y_d|f_d) is a Gaussian likelihood term defined from (1). Further, p(f_d|X) is a marginal GP prior such that

p(f_d|X) = N(f_d|0, K_NN), (7)

¹To simplify our notation, we often write x instead of x(t) and f instead of f(x). Later we also use a similar convention for the covariance functions, often writing them as k_f and k_x.
where K_NN = k_f(X, X) is the covariance matrix defined by the covariance function k_f, and similarly p(x_q|t) is the marginal GP prior associated with the temporal function x_q(t),

p(x_q|t) = N(x_q|0, K_t), (8)

where K_t = k_x(t, t) is the covariance matrix obtained by evaluating the covariance function k_x on the observed times t. Bayesian inference using the above model poses a huge computational challenge as, for instance, marginalization of the variables X, which appear non-linearly inside the covariance matrix K_NN, is troublesome. Practical approaches considered until now (e.g. [5, 3]) marginalise out only F and seek a MAP solution for X. In the next section we describe how efficient variational approximations can be applied to marginalize X by extending the framework of [8].

2.1 Variational Bayesian training

The key difficulty with the Bayesian approach is propagating the prior density p(X|t) through the nonlinear mapping. This mapping gives the model its expressive power, but simultaneously renders the associated marginal likelihood,

p(Y|t) = ∫ p(Y|F) p(F|X) p(X|t) dX dF, (9)

intractable. We now invoke the variational Bayesian methodology to approximate the integral. Following a standard procedure [11], we introduce a variational distribution q(Θ) and compute Jensen's lower bound F_v on the logarithm of (9),

F_v(q, θ) = ∫ q(Θ) log [p(Y|F) p(F|X) p(X|t) / q(Θ)] dX dF, (10)

where θ denotes the model's parameters. However, the above form of the lower bound is problematic because X (in the GP term p(F|X)) appears non-linearly inside the covariance matrix K_NN, making the integration over X difficult. As shown in [8], this intractability is removed by applying the "data augmentation" principle. More precisely, we augment the joint probability model in (6) by including M extra samples of the GP latent mapping f, known as inducing points, so that u_m ∈ R^D is such a sample. The inducing points are evaluated at a set of pseudo-inputs X̃ ∈ R^{M×Q}.
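Before introducing the augmented model, the covariance functions of Eqs. (4) and (5) are straightforward to write out; the following is a minimal numpy sketch (function names and default hyper-parameters are ours, not from the paper's released code):

```python
import numpy as np

def k_rbf(t1, t2, sigma2=1.0, lt=1.0):
    # Squared-exponential (RBF) kernel over time, Eq. (4).
    d = t1[:, None] - t2[None, :]
    return sigma2 * np.exp(-d**2 / (2.0 * lt**2))

def k_matern32(t1, t2, sigma2=1.0, lt=1.0):
    # Matern 3/2 kernel, Eq. (4): sample paths are only once differentiable.
    a = np.sqrt(3.0) * np.abs(t1[:, None] - t2[None, :]) / lt
    return sigma2 * (1.0 + a) * np.exp(-a)

def k_periodic(t1, t2, sigma2=1.0, lt=1.0, T=1.0):
    # Periodic kernel with period T, Eq. (4).
    d = t1[:, None] - t2[None, :]
    return sigma2 * np.exp(-0.5 * np.sin(2.0 * np.pi * d / T)**2 / lt)

def k_ard_se(X1, X2, sigma2=1.0, w=None):
    # ARD squared-exponential mapping kernel, Eq. (5): one weight per latent dim.
    w = np.ones(X1.shape[1]) if w is None else np.asarray(w)
    d2 = (w * (X1[:, None, :] - X2[None, :, :])**2).sum(axis=-1)
    return sigma2 * np.exp(-0.5 * d2)
```

Driving an ARD weight w_q to zero makes the kernel, and hence the model, ignore latent dimension q, which is the mechanism behind the automatic dimensionality selection described above.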
The augmented joint probability density takes the form

p(Y, F, U, X, X̃|t) = Π_{d=1}^{D} p(y_d|f_d) p(f_d|u_d, X) p(u_d|X̃) p(X|t), (11)

where p(u_d|X̃) is a zero-mean Gaussian with a covariance matrix K_MM constructed using the same function as for the GP prior (7). By dropping X̃ from our expressions, we write the augmented GP prior analytically (see [9]) as

p(f_d|u_d, X) = N(f_d | K_NM K_MM^−1 u_d, K_NN − K_NM K_MM^−1 K_MN). (12)

A key result in [8] is that a tractable lower bound (computed analogously to (10)) can be obtained through the variational density

q(Θ) = q(F, U, X) = q(F|U, X) q(U) q(X) = [Π_{d=1}^{D} p(f_d|u_d, X) q(u_d)] q(X), (13)

where q(X) = Π_{q=1}^{Q} N(x_q|μ_q, S_q) and q(u_d) is an arbitrary variational distribution. Titsias and Lawrence [8] assume full independence for q(X), with diagonal variational covariances. Here, in contrast, the posterior over the latent variables will have strong correlations, so S_q is taken to be an N × N full covariance matrix. Optimization of the variational lower bound provides an approximation to the true posterior p(X|Y) by q(X). In the augmented probability model, the "difficult" term p(F|X) appearing in (10) is now replaced with (12) and, eventually, cancels out with the first factor of the variational distribution (13), so that F can be marginalised out analytically. Given the above, and after breaking the logarithm in (10), we obtain the final form of the lower bound (see supplementary material for more details)

F_v(q, θ) = F̂_v − KL(q(X) ∥ p(X|t)), (14)

with F̂_v = ∫ q(X) log p(Y|F) p(F|X) dX dF. Both terms in (14) are now tractable. Note that the first of the above terms involves the data while the second one only involves the prior. All the information regarding data point correlations is captured in the KL term, and the connection with the observations comes through the variational distribution. Therefore, the first term in (14) has the same analytical solution as the one derived in [8].
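The conditional in Eq. (12) is the standard sparse-GP conditional on inducing values. Its moments can be sketched in a few lines of numpy (names are ours; a small jitter is added to K_MM for numerical stability):

```python
import numpy as np

def augmented_gp_conditional(K_NN, K_NM, K_MM, u_d, jitter=1e-8):
    # Moments of p(f_d | u_d, X) in Eq. (12):
    #   mean = K_NM K_MM^{-1} u_d
    #   cov  = K_NN - K_NM K_MM^{-1} K_MN
    M = K_MM.shape[0]
    K_MM_j = K_MM + jitter * np.eye(M)
    L = np.linalg.cholesky(K_MM_j)
    A = np.linalg.solve(L, K_NM.T)   # A = L^{-1} K_MN, so A^T A = K_NM K_MM^{-1} K_MN
    mean = K_NM @ np.linalg.solve(K_MM_j, u_d)
    cov = K_NN - A.T @ A
    return mean, cov
```

A quick sanity check: when the inducing inputs coincide with the data inputs, K_NM = K_MM = K_NN, so the conditional collapses onto u_d with vanishing covariance.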
Equation (14) can be maximized using gradient-based methods.² However, not factorizing q(X) across data points yields O(N²) variational parameters to optimize. This issue is addressed in the next section.

2.2 Reparametrization and Optimization

The optimization involves the model parameters θ = (β, θ_f, θ_x), the variational parameters {μ_q, S_q}, q = 1, . . . , Q, from q(X), and the inducing points³ X̃. Optimization of the variational parameters appears challenging, due to their large number and the correlations between them. However, by reparametrizing our O(N²) variational parameters according to the framework described in [12], we can obtain a set of O(N) less correlated variational parameters. Specifically, we first take the derivatives of the variational bound (14) w.r.t. S_q and μ_q and set them to zero to find the stationary points,

S_q = (K_t^−1 + Λ_q)^−1 and μ_q = K_t μ̄_q, (15)

where Λ_q = −2 ∂F̂_v(q, θ)/∂S_q is an N × N diagonal, positive matrix and μ̄_q = ∂F̂_v/∂μ_q is an N-dimensional vector. The above stationary conditions tell us that, since S_q depends on a diagonal matrix Λ_q, we can reparametrize it using only the N-dimensional diagonal of that matrix, denoted by λ_q. Then, we can optimise the 2(Q × N) parameters (λ_q, μ̄_q) and obtain the original parameters using (15).

2.3 Learning from Multiple Sequences

Our objective is to model multivariate time series. A given data set may consist of a group of independently observed sequences, each with a different length (e.g. in human motion capture data, several walks from a subject). Let, for example, the dataset be a group of S independent sequences Y^(1), . . . , Y^(S). We would like our model to capture the underlying commonality of these data. We handle this by allowing a different temporal latent function for each of the independent sequences, so that X^(s) is the set of latent variables corresponding to sequence s. These sets are a priori assumed to be independent, since they correspond to separate sequences, i.e.
p(X^(1), X^(2), . . . , X^(S)) = Π_{s=1}^{S} p(X^(s)), where we dropped the conditioning on time for simplicity. This factorisation leads to a block-diagonal structure for the time covariance matrix K_t, where each block corresponds to one sequence. In this setting, each block of observations Y^(s) is generated from its corresponding X^(s) according to Y^(s) = F^(s) + ϵ, where the latent function which governs this mapping is shared across all sequences and ϵ is Gaussian noise.

3 Predictions

Our algorithm models the temporal evolution of a dynamical system. It should be capable of generating completely new sequences or reconstructing missing observations from partially observed data. For generating a novel sequence given training data, the model requires a time vector t_* as input and computes a density p(Y_*|Y, t, t_*). For reconstruction of partially observed data, the time-stamp information is additionally accompanied by a partially observed sequence Y_*^p ∈ R^{N_*×D_p} from the whole Y_* = (Y_*^p, Y_*^m), where p and m are set indices indicating the present (i.e. observed) and missing dimensions of Y_* respectively, so that p ∪ m = {1, . . . , D}. We reconstruct the missing dimensions by computing the Bayesian predictive distribution p(Y_*^m|Y_*^p, Y, t_*, t). The predictive densities can also be used as estimators for tasks like generative Bayesian classification. Whilst time-stamp information is always provided, in the next section we drop the dependence on it to avoid notational clutter.

²See supplementary material for a more detailed derivation of (14) and for the equations for the gradients.
³We will use the term "variational parameters" to refer only to the parameters of q(X), although the inducing points are also variational parameters.

3.1 Predictions Given Only the Test Time Points

To approximate the predictive density, we will need to introduce the underlying latent function values F_* ∈ R^{N_*×D} (the noise-free version of Y_*) and the latent variables X_* ∈ R^{N_*×Q}.
We write the predictive density as

p(Y_*|Y) = ∫ p(Y_*, F_*, X_*|Y) dF_* dX_* = ∫ p(Y_*|F_*) p(F_*|X_*, Y) p(X_*|Y) dF_* dX_*. (16)

The term p(F_*|X_*, Y) is approximated by the variational distribution

q(F_*|X_*) = ∫ Π_{d∈D} p(f_{*,d}|u_d, X_*) q(u_d) du_d = Π_{d∈D} q(f_{*,d}|X_*), (17)

where q(f_{*,d}|X_*) is a Gaussian that can be computed analytically, since in our variational framework the optimal setting for q(u_d) is also found to be a Gaussian (see suppl. material for complete forms). As for the term p(X_*|Y) in eq. (16), it is approximated by a Gaussian variational distribution q(X_*),

q(X_*) = Π_{q=1}^{Q} q(x_{*,q}) = Π_{q=1}^{Q} ∫ p(x_{*,q}|x_q) q(x_q) dx_q = Π_{q=1}^{Q} ⟨p(x_{*,q}|x_q)⟩_{q(x_q)}, (18)

where p(x_{*,q}|x_q) is a Gaussian found from the conditional GP prior (see [9]) and q(X) is also Gaussian. We can thus work out analytically the mean and variance for (18), which turn out to be:

μ_{x_{*,q}} = K_{*N} μ̄_q, (19)
var(x_{*,q}) = K_{**} − K_{*N} (K_t + Λ_q^−1)^−1 K_{N*}, (20)

where K_{*N} = k_x(t_*, t), K_{N*} = K_{*N}^⊤ and K_{**} = k_x(t_*, t_*). Notice that these equations have exactly the same form as found in standard GP regression problems. Once we have analytic forms for the posteriors in (16), the predictive density is approximated as

p(Y_*|Y) = ∫ p(Y_*|F_*) q(F_*|X_*) q(X_*) dF_* dX_* = ∫ p(Y_*|F_*) ⟨q(F_*|X_*)⟩_{q(X_*)} dF_*, (21)

which is a non-Gaussian integral that cannot be computed analytically. However, following the same argument as in [9, 13], we can calculate its mean and covariance analytically:

E(F_*) = B^⊤ Ψ_1^*, (22)
Cov(F_*) = B^⊤ (Ψ_2^* − Ψ_1^*(Ψ_1^*)^⊤) B + Ψ_0^* I − Tr[(K_MM^−1 − (K_MM + βΨ_2)^−1) Ψ_2^*] I, (23)

where B = β (K_MM + βΨ_2)^−1 Ψ_1^⊤ Y, Ψ_0^* = ⟨k_f(X_*, X_*)⟩, Ψ_1^* = ⟨K_{M*}⟩ and Ψ_2^* = ⟨K_{M*} K_{*M}⟩. All expectations are taken w.r.t. q(X_*) and can be calculated analytically, while K_{M*} denotes the cross-covariance matrix between the training inducing inputs X̃ and X_*. The Ψ quantities are calculated analytically (see suppl. material). Finally, since Y_* is just a noisy version of F_*, the mean and covariance of (21) are computed as E(Y_*) = E(F_*) and Cov(Y_*) = Cov(F_*) + β^−1 I_{N_*}.
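Equations (19) and (20) have the form of GP regression formulas in which Λ_q^−1 plays the role of per-point noise. A minimal numpy sketch (names ours, with an RBF kernel standing in for k_x):

```python
import numpy as np

def k_rbf(t1, t2, lt=1.0):
    d = t1[:, None] - t2[None, :]
    return np.exp(-d**2 / (2.0 * lt**2))

def predict_latent_q(t_star, t, lam_q, mu_bar_q, lt=1.0):
    # Eq. (19): mean = K_{*N} mu_bar_q
    # Eq. (20): var  = K_{**} - K_{*N} (K_t + Lambda_q^{-1})^{-1} K_{N*}
    K_t = k_rbf(t, t, lt)
    K_sN = k_rbf(t_star, t, lt)
    K_ss = k_rbf(t_star, t_star, lt)
    A = K_t + np.diag(1.0 / lam_q)   # Lambda_q^{-1} acts like heteroscedastic noise
    mean = K_sN @ mu_bar_q
    cov = K_ss - K_sN @ np.linalg.solve(A, K_sN.T)
    return mean, cov
```

As in standard GP regression, the predictive variance shrinks near observed time stamps and reverts to the prior variance far from them.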
3.2 Predictions Given the Test Time Points and Partially Observed Outputs

The expression for the predictive density p(Y_*^m|Y_*^p, Y) is similar to (16),

p(Y_*^m|Y_*^p, Y) = ∫ p(Y_*^m|F_*^m) p(F_*^m|X_*, Y_*^p, Y) p(X_*|Y_*^p, Y) dF_*^m dX_*, (24)

and is analytically intractable. To obtain an approximation, we first need to apply variational inference and approximate p(X_*|Y_*^p, Y) with a Gaussian distribution. This requires the optimisation of a new variational lower bound that accounts for the contribution of the partially observed data Y_*^p. This lower bound approximates the true marginal likelihood p(Y_*^p, Y) and has an exactly analogous form to the lower bound computed only on the training data Y. Moreover, the variational optimisation requires the definition of the variational distribution q(X_*, X), which needs to be optimised and is fully correlated across X and X_*. After the optimisation, the approximation to the true posterior p(X_*|Y_*^p, Y) is given by the marginal q(X_*). A much faster but less accurate method would be to decouple the test from the training latent variables by imposing the factorisation q(X_*, X) = q(X) q(X_*). This is not used, however, in our current implementation.

4 Handling Very High Dimensional Datasets

Our variational framework avoids the typical cubic complexity of Gaussian processes, allowing relatively large training sets (thousands of time points, N). Further, the model scales only linearly with the number of dimensions D. Specifically, the number of dimensions only matters when performing calculations involving the data matrix Y. In the final form of the lower bound (and consequently in all of the derived quantities, such as gradients) this matrix only appears in the form Y Y^⊤, which can be precomputed. This means that, when N ≪ D, we can calculate Y Y^⊤ only once and then substitute Y with the SVD (or Cholesky decomposition) of Y Y^⊤. In this way, we can work with an N × N instead of an N × D matrix.
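This substitution can be checked numerically; a sketch under our own (much smaller) dimensions, using a Cholesky factor of Y Y^⊤:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 20, 50_000                  # far more features than time points
Y = rng.standard_normal((N, D))

# The bound depends on the data only through the N x N matrix Y Y^T,
# so we precompute it once and replace Y by an N x N square factor of it.
YYt = Y @ Y.T
L = np.linalg.cholesky(YYt)        # N x N surrogate "data" matrix

# Any data term of the form tr(B Y Y^T) is unchanged under Y -> L.
B = rng.standard_normal((N, N))
lhs = np.trace(B @ (L @ L.T))
rhs = np.trace(B @ YYt)
```

Since L L^⊤ = Y Y^⊤ exactly (up to round-off), every quantity in the bound that touches the data is preserved while all subsequent algebra runs on N × N matrices.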
Practically speaking, this allows us to work with data sets involving millions of features. In our experiments we model directly the pixels of HD quality video, exploiting this trick.

5 Experiments

We consider two different types of high dimensional time series: a human motion capture data set consisting of different walks, and high resolution video sequences. The experiments are intended to explore the various properties of the model and to evaluate its performance in different tasks (prediction, reconstruction, generation of data). Matlab source code for repeating the following experiments and links to the video files are available on-line from http://staffwww.dcs.shef.ac.uk/people/N.Lawrence/vargplvm/.

5.1 Human Motion Capture Data

We followed [14, 15] in considering motion capture data of walks and runs taken from subject 35 in the CMU motion capture database. We treated each motion as an independent sequence. The data set was constructed and preprocessed as described in [15]. This results in 2,613 separate 59-dimensional frames split into 31 training sequences with an average length of 84 frames each. The model is jointly trained, as explained in Section 2.3, on both walks and runs, i.e. the algorithm learns a common latent space for these motions. At test time we investigate the ability of the model to reconstruct test data from a previously unseen sequence given partial information for the test targets. This is tested once by providing only the dimensions which correspond to the body of the subject and once by providing those that correspond to the legs. We compare with results in [15], which used MAP approximations for the dynamical models, and against nearest neighbour. We can also indirectly compare with the binary latent variable model (BLV) of [14], which used a slightly different data preprocessing.
We assess the performance using the cumulative error per joint in the scaled space defined in [14] and by the root mean square error in the angle space suggested by [15]. Our model was initialized with nine latent dimensions. We performed two runs, once using the Matérn covariance function for the dynamical prior and once using the RBF. From Table 1 we see that the variational Gaussian process dynamical system considerably outperforms the other approaches. The appropriate latent space dimensionality for the data was automatically inferred by our models. The model which employed an RBF covariance to govern the dynamics retained four dimensions, whereas the model that used the Matérn kept only three. The other latent dimensions were completely switched off by the ARD parameters. The best performance for the legs and the body reconstruction was achieved by the VGPDS model that used the Matérn and the RBF covariance function respectively.

Table 1: Errors obtained for the motion capture dataset considering nearest neighbour in the angle space (NN) and in the scaled space (NN sc.), GPLVM, BLV and VGPDS. CL / CB are the leg and body datasets as preprocessed in [14], L and B the corresponding datasets from [15]. SC corresponds to the error in the scaled space, as in Taylor et al., while RA is the error in the angle space. The best error per column is in bold.

Data                CL     CB     L      L      B      B
Error Type          SC     SC     SC     RA     SC     RA
BLV                 11.7   8.8    -      -      -      -
NN sc.              22.2   20.5   -      -      -      -
GPLVM (Q = 3)       -      -      11.4   3.40   16.9   2.49
GPLVM (Q = 4)       -      -      9.7    3.38   20.7   2.72
GPLVM (Q = 5)       -      -      13.4   4.25   23.4   2.78
NN sc.              -      -      13.5   4.44   20.8   2.62
NN                  -      -      14.0   4.11   30.9   3.20
VGPDS (RBF)         -      -      8.19   3.57   10.73  1.90
VGPDS (Matérn 3/2)  -      -      6.99   2.88   14.22  2.23

5.2 Modeling Raw High Dimensional Video Sequences

For our second set of experiments we considered video sequences. Such sequences are typically preprocessed before modeling to extract informative features and reduce the dimensionality of the problem. Here we work directly with the raw pixel values to demonstrate the ability of the VGPDS to model data with a vast number of features. This also allows us to directly sample video from the learned model. Firstly, we used the model to reconstruct partially observed frames from test video sequences.⁴ For the first video discussed here we gave as partial information approximately 50% of the pixels, while for the other two we gave approximately 40% of the pixels in each frame. The mean squared error per pixel was measured to compare with the k-nearest neighbour (NN) method, for k ∈ {1, . . . , 5} (we only present the error achieved for the best choice of k in each case). The datasets considered are the following: firstly, the 'Missa' dataset, a standard benchmark used in image processing. This is a 103,680-dimensional video showing a woman talking for 150 frames. The data is challenging as there are translations in pixel space. We also considered an HD video of dimensionality 9 × 10^5 that shows an artificially created scene of ocean waves, as well as a 230,400-dimensional video showing a dog running for 60 frames. The latter is approximately periodic in nature, containing several paces from the dog. For the first two videos we used the Matérn and RBF covariance functions respectively to model the dynamics and interpolated to reconstruct blocks of frames chosen from the whole sequence. For the 'dog' dataset we constructed a compound kernel k_x = k_x(rbf) + k_x(periodic), where the RBF term is employed to capture any divergence from the approximately periodic pattern. We then used our model to reconstruct the last 7 frames, extrapolating beyond the original video. As can be seen in Table 2, our method outperformed NN in all cases.

⁴'Missa' dataset: cipr.rpi.edu. 'Ocean': cogfilms.com. 'Dog': fitfurlife.com. See details in the supplementary material. The logo appearing in the 'dog' images in the experiments that follow has been added with post-processing.
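The evaluation protocol (mean squared error per pixel on the missing entries only, against a k-nearest-neighbour baseline) can be sketched as follows; this is our simplified illustration, not the paper's evaluation code:

```python
import numpy as np

def masked_mse(Y_true, Y_pred, missing):
    # Mean squared error per pixel, measured only on the missing entries.
    return float(((Y_true - Y_pred)[:, missing] ** 2).mean())

def knn_reconstruct(Y_train, Y_test_obs, observed, missing, k=3):
    # For each test frame, find the k training frames closest on the
    # observed pixels and average their missing pixels.
    out = np.empty((Y_test_obs.shape[0], missing.size))
    for i, y in enumerate(Y_test_obs):
        d = ((Y_train[:, observed] - y) ** 2).sum(axis=1)
        nn = np.argsort(d)[:k]
        out[i] = Y_train[nn][:, missing].mean(axis=0)
    return out
```

Unlike VGPDS, this baseline copies pixels from training frames, which is why its reconstructions need not be smoothly connected to the observed part of a test image.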
The results are also demonstrated visually in Figure 1 and the reconstructed videos are available in the supplementary material.

Table 2: The mean squared error per pixel for VGPDS and NN for the three datasets (measured only on the missing inputs). The number of latent dimensions selected by our model is in parentheses.

        Missa           Ocean          Dog
VGPDS   2.52 (Q = 12)   9.36 (Q = 9)   4.01 (Q = 6)
NN      2.63            9.53           4.15

As can be seen in Figure 1, VGPDS predicts pixels which are smoothly connected with the observed part of the image, whereas the NN method cannot fit the predicted pixels into the overall context. As a second task, we used our generative model to create new samples and generate a new video sequence. This is most effective for the 'dog' video, as the training examples were approximately periodic in nature. The model was trained on 60 frames (time-stamps [t_1, t_60]) and we generated new frames corresponding to the next 40 time points in the future. The only input given for this generation of future frames was the time-stamp vector [t_61, t_100]. The results show a smooth transition from training to test and amongst the test video frames. The resulting video of the dog continuing to run is sharp and of high quality. This experiment demonstrates the ability of the model to reconstruct massively high dimensional images without blurring. Frames from the result are shown in Figure 2. The full video is available in the supplementary material.

6 Discussion and Future Work

We have introduced a fully Bayesian approach for modeling dynamical systems through probabilistic nonlinear dimensionality reduction.
Marginalizing the latent space and reconstructing data using Gaussian processes results in a very generic model for capturing complex, non-linear correlations even in very high dimensional data, without having to perform any data preprocessing or exhaustive search for defining the model's structure and parameters. Our method's effectiveness has been demonstrated in two tasks: firstly, in modeling human motion capture data and, secondly, in reconstructing and generating raw, very high dimensional video sequences. A promising future direction would be to enhance our formulation with domain-specific knowledge encoded, for example, in more sophisticated covariance functions or in the way that data are preprocessed. Thus, we can obtain application-oriented methods to be used for tasks in areas such as robotics, computer vision and finance.

Figure 1: (a) and (c) demonstrate the reconstruction achieved by VGPDS and NN respectively for the most challenging frame (b) of the 'missa' video, i.e. when translation occurs. (d) shows another example of the reconstruction achieved by VGPDS given the partially observed image. (e) (VGPDS) and (f) (NN) depict the reconstruction achieved for a frame of the 'ocean' dataset. Finally, we demonstrate the ability of the model to automatically select the latent dimensionality by showing the initial lengthscales (g) of the ARD covariance function and the values obtained after training (h) on the 'dog' data set.

Figure 2: The last frame of the training video (a) is smoothly followed by the first frame (b) of the generated video. A subsequent generated frame can be seen in (c).
Acknowledgments

Research was partially supported by the University of Sheffield Moody endowment fund and the Greek State Scholarships Foundation (IKY). We also thank Colin Litster and “Fit Fur Life” for allowing us to use their video files as datasets. Finally, we thank the reviewers for their insightful comments.
Statistical Tests for Optimization Efficiency

Levi Boyles, Anoop Korattikara, Deva Ramanan, Max Welling
Department of Computer Science
University of California, Irvine
Irvine, CA 92697-3425
{lboyles},{akoratti},{dramanan},{welling}@ics.uci.edu

Abstract

Learning problems, such as logistic regression, are typically formulated as pure optimization problems defined on some loss function. We argue that this view ignores the fact that the loss function depends on stochastically generated data, which in turn determines an intrinsic scale of precision for statistical estimation. By considering the statistical properties of the update variables used during the optimization (e.g. gradients), we can construct frequentist hypothesis tests to determine the reliability of these updates. We utilize subsets of the data for computing updates, and use the hypothesis tests for determining when the batch-size needs to be increased. This provides computational benefits and avoids overfitting by stopping when the batch-size has become equal to the size of the full dataset. Moreover, the proposed algorithms depend on a single interpretable parameter – the probability for an update to be in the wrong direction – which is set to a single value across all algorithms and datasets. In this paper, we illustrate these ideas on three L1-regularized coordinate descent algorithms: L1-regularized L2-loss SVMs, L1-regularized logistic regression, and the Lasso, but we emphasize that the underlying methods are much more generally applicable.

1 Introduction

There is an increasing tendency to consider machine learning as a problem in optimization: define a loss function, add constraints and/or regularizers, and formulate it as a (preferably convex) program. Then, solve this program using some of the impressive tools from the optimization literature.
The main purpose of this paper is to point out that this “reduction to optimization” ignores certain important statistical features that are unique to statistical estimation. The most important feature we will exploit is the fact that the statistical properties of an estimation problem determine an intrinsic scale of precision (that is usually much larger than machine precision). This immediately implies that optimizing parameter values beyond that scale is pointless and may even have an adverse effect on generalization when the underlying model is complex. Besides a natural stopping criterion, it also leads to much faster optimization before we reach that scale, by realizing that far away from optimality we need much less precision to determine a parameter update than when close to optimality. These observations can be incorporated in many off-the-shelf optimizers and are often orthogonal to speed-up tricks in the optimization toolbox. The intricate relationship between computation and estimation has been pointed out before in [1] and [2], where asymptotic learning rates were provided. One of the important conclusions was that a not so impressive optimization algorithm such as stochastic gradient descent (SGD) can nevertheless be a very good learning algorithm because it can process more data per unit time. Also in [3] (sec. 5.5) the intimate relationship between computation and model fitting is pointed out. [4] gives bounds on the generalization risk for online algorithms, and [5] shows how additional data can be used to reduce running time for a fixed target generalization error. Regret-minimizing algorithms ([6], [7]) are another way to account for the interplay between learning and computation. Hypothesis testing has been exploited for computational gains before in [8].

Our method exploits the fact that loss functions are random variables subject to uncertainty.
In a frequentist world we may ask how different the value of the loss would have been had we sampled another dataset of the same size from a single shared underlying distribution. The role of an optimization algorithm is then to propose parameter updates that will be accepted or rejected on statistical grounds. The test we propose determines whether the direction of a parameter update is correct with high probability. If we do not pass our tests when using all the available data-cases, then we stop learning (or alternatively we switch to sampling or bagging), because we have reached the intrinsic scale of precision set by the statistical properties of the estimation problem. However, we can use the same tests to speed up the optimization process itself, that is, before we reach the above stopping criterion. To see this, imagine one is faced with an infinite dataset. In batch mode, using the whole (infinite) dataset, one would not take a single optimization step in finite time. Thus, one should really be concerned with making as much progress as possible per computational unit. Hence, one should only use a subset of the total available dataset. Importantly, the optimal batch-size depends on where we are in the learning process: far away from convergence we only need a rough idea of where to move, which requires very few data-cases. On the other hand, the closer we get to the true parameter value, the more resolution we need. Thus, the computationally optimal batch-size is a function of the residual estimation error. Our algorithm adaptively grows a subset of the data by requiring that we have just enough precision to confidently move in the correct direction. Again, when we have exhausted all our data we stop learning. Our algorithm heavily relies on the central limit tendencies of large sums of random variables. Fortunately, many optimization algorithms are based on averages over data-cases.
For instance, gradient descent falls in this class, as the gradient is defined by an average (or sum). As in [11], with large enough batch sizes we can use the Central Limit Theorem to claim that the average gradients are normally distributed and estimate their variance without actually seeing more data (this assumption is empirically verified in section 5.2). We have furthermore implemented methods to avoid testing updates for parameters which are likely to fail their test. This ensures that we approximately visit the features with their correct frequency (i.e. important features may require more updates than unimportant ones). In summary, the main contribution of this paper is to introduce a class of algorithms with the following properties.

• They depend on a single interpretable parameter ǫ – the probability to update parameters in the wrong direction. Moreover, the performance of the algorithms is relatively insensitive to the exact value we choose.
• They have a natural, inbuilt stopping criterion. The algorithms terminate when the probability to update the parameters in the wrong direction cannot be made smaller than ǫ.
• They are applicable to a wide range of loss functions. The only requirement is that the updates depend on sums of random variables.
• They inherit the convergence guarantees of the optimization method under consideration. This follows because the algorithms will eventually consider all the data.
• They achieve very significant speedups in learning models from data. Throughout the learning process they determine the size of the data subset required to perform updates that move in the correct direction with probability at least 1 − ǫ.

We emphasize that our framework is generally applicable. In this paper we show how these considerations can be applied to L1-regularized coordinate descent algorithms: L1-regularized L2-loss SVMs, L1-regularized logistic regression, and Lasso [9].
Coordinate descent algorithms are convenient because they do not require any tuning of hyper-parameters to be effective, and are still efficient when training sparse models. Our methodology extends these algorithms to be competitive for dense models and for N ≫ p. In section 2 we review the coordinate descent algorithms. Then, in section 3.2 we formulate our hypothesis testing framework, followed by a heuristic for predicting hypothesis test failures in section 4. We report experimental results in section 5 and we end with conclusions.

2 Coordinate Descent

We consider L1-regularized learning problems where the loss is defined as a statistical average over N datapoints:

f(\beta) = \gamma \|\beta\|_1 + \frac{1}{2N} \sum_{i=1}^{N} \mathrm{loss}(\beta, x_i, y_i), \qquad \beta, x_i \in \mathbb{R}^p \qquad (1)

We will consider continuously differentiable loss functions (squared hinge-loss, log-loss, and squared-loss) that allow for the use of efficient coordinate-descent optimization algorithms, where each parameter is updated as β_j^new ← β_j + d_j with:

d_j = \arg\min_d f(\beta + d e_j), \qquad f(\beta + d e_j) = \gamma|\beta_j + d| + L_j(d; \beta) + \mathrm{const} \qquad (2)

where L_j(d; \beta) = \frac{1}{2N} \sum_{i=1}^{N} \mathrm{loss}(\beta + d e_j, x_i, y_i) and e_j is the jth standard basis vector. To solve the above, we perform a second-order Taylor expansion of the partial loss L_j(d; β):

f(\beta + d e_j) \approx \gamma|\beta_j + d| + L'_j(0; \beta)\, d + \tfrac{1}{2} L''_j(0; \beta)\, d^2 + \mathrm{const} \qquad (3)

[10] show that the minimum of the approximate objective (3) is obtained with:

d_j = \begin{cases} -\dfrac{L'_j(0,\beta)+\gamma}{L''_j(0,\beta)} & \text{if } L'_j(0,\beta)+\gamma \le L''_j(0,\beta)\,\beta_j \\ -\dfrac{L'_j(0,\beta)-\gamma}{L''_j(0,\beta)} & \text{if } L'_j(0,\beta)-\gamma \ge L''_j(0,\beta)\,\beta_j \\ -\beta_j & \text{otherwise} \end{cases} \qquad (4)

For quadratic loss functions, the approximation in (3) is exact. For general convex loss functions, one can optimize (2) by repeatedly linearizing and applying the above update. We perform a single update per parameter during the cyclic iteration over parameters. Notably, the partial derivatives are functions of statistical averages computed over N training points.
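The per-coordinate minimizer (4) can be sketched as a small helper, assuming L'_j(0, β) and L''_j(0, β) have already been computed (the function name is ours, not the paper's):

```python
def l1_newton_step(beta_j, L1, L2, gamma):
    """Minimize the second-order model gamma*|beta_j + d| + L1*d + 0.5*L2*d^2
    over d, as in equation (4).

    beta_j : current value of the j-th parameter
    L1, L2 : first and second partial derivatives L'_j(0, beta), L''_j(0, beta)
    gamma  : L1-regularization strength
    Returns the step d_j.
    """
    if L1 + gamma <= L2 * beta_j:
        return -(L1 + gamma) / L2
    elif L1 - gamma >= L2 * beta_j:
        return -(L1 - gamma) / L2
    else:
        # neither branch applies: the minimizer sits at beta_j + d = 0
        return -beta_j
```

Note that when the regularizer dominates the gradient, the step lands exactly at zero, which is what produces sparse solutions.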
We show that one can use frequentist hypothesis tests to elegantly manage the amount of data N needed to reliably compute these quantities.

2.1 L1-regularized L2-loss SVM

Using a squared hinge-loss function in (1), we obtain an L1-regularized L2-loss SVM:

\mathrm{loss}_{SVM} = \max(0, 1 - y_i \beta^T x_i)^2 \qquad (5)

Appendix F of [10] derives the corresponding partial derivatives, where the second-order statistic is approximate because the squared hinge-loss is not twice differentiable:

L'_j(0, \beta) = -\frac{1}{N} \sum_{i \in I(\beta)} y_i x_{ij} b_i(\beta), \qquad L''_j(0, \beta) = \frac{1}{N} \sum_{i \in I(\beta)} x_{ij}^2 \qquad (6)

where b_i(β) = 1 − y_i β^T x_i and I(β) = {i | b_i(β) > 0}. We write x_ij for the jth element of datapoint x_i. In [10], each parameter is updated until convergence, using a line-search for each update, whereas we simply check that the L'' term is not ill-formed rather than performing a line search.

2.2 L1-regularized Logistic Regression

Using a log-loss function in (1), we obtain an L1-regularized logistic regression model:

\mathrm{loss}_{log} = \log(1 + e^{-y_i \beta^T x_i}) \qquad (7)

Appendix G of [10] derives the corresponding partial derivatives:

L'_j(0, \beta) = \frac{1}{2N} \sum_{i=1}^{N} \frac{-x_{ij}}{1 + e^{y_i \beta^T x_i}}, \qquad L''_j(0, \beta) = \frac{1}{2N} \sum_{i=1}^{N} \left( \frac{x_{ij}}{1 + e^{y_i \beta^T x_i}} \right)^2 e^{y_i \beta^T x_i} \qquad (8)

2.3 L1-regularized Linear Regression (Lasso)

Using a quadratic loss function in (1), we obtain an L1-regularized linear regression, or LASSO, model:

\mathrm{loss}_{quad} = (y_i - \beta^T x_i)^2 \qquad (9)

The corresponding partial derivatives [9] are:

L'_j(0, \beta) = -\frac{1}{N} \sum_{i=1}^{N} (y_i - \beta^T x_i) x_{ij}, \qquad L''_j(0, \beta) = \frac{1}{N} \sum_{i=1}^{N} x_{ij}^2 \qquad (10)

Because the Taylor expansion is exact for quadratic loss functions, we can directly write the closed-form solution for parameter β_j^new = S(α_j, γ), where

\alpha_j = \frac{1}{N} \sum_{i=1}^{N} x_{ij} \left( y_i - \tilde{y}_i^{(j)} \right), \qquad S(\alpha, \gamma) = \begin{cases} \alpha - \gamma & \alpha > 0, \ \gamma < |\alpha| \\ \alpha + \gamma & \alpha < 0, \ \gamma < |\alpha| \\ 0 & \gamma \ge |\alpha| \end{cases} \qquad (11)

where \tilde{y}_i^{(j)} = \sum_{k \ne j} x_{ik} \beta_k is the prediction made with all parameters except β_j, and S is a “soft-threshold” function that is zero for an interval of 2γ about the origin and shrinks the magnitude of the input α by γ outside of this interval.
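The closed-form Lasso update (11) can be sketched as follows; the function names are ours, and the sketch assumes standardized columns as in the text:

```python
import numpy as np

def soft_threshold(alpha, gamma):
    """S(alpha, gamma) of equation (11): shrink alpha toward zero by gamma,
    mapping the interval [-gamma, gamma] exactly to zero."""
    if alpha > gamma:
        return alpha - gamma
    elif alpha < -gamma:
        return alpha + gamma
    return 0.0

def lasso_coordinate_update(X, y, beta, j, gamma):
    """Closed-form update for beta_j on standardized data:
    alpha_j = (1/N) * sum_i x_ij * (y_i - prediction-without-feature-j)."""
    N = X.shape[0]
    # y_i - tilde{y}_i^{(j)}: remove feature j's contribution from the fit
    residual_wo_j = y - X @ beta + X[:, j] * beta[j]
    alpha_j = (X[:, j] @ residual_wo_j) / N
    return soft_threshold(alpha_j, gamma)
```

Cycling this update over j is exactly one sweep of Lasso coordinate descent; the hypothesis tests of section 3 decide whether each proposed update is applied.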
We can use this expression as an estimator for β from a dataset {x_i, y_i}. The above update rule assumes standardized data (\frac{1}{N}\sum_i x_{ij} = 0, \frac{1}{N}\sum_i x_{ij}^2 = 1), but it is straightforward to extend to the general case.

3 Hypothesis Testing

Each update β_j^new = β_j + d_j is computed using a statistical average over a batch of N training points. We wish to estimate the reliability of an update as a function of N. To do so, we model the current β vector as a fixed constant and the N training points as random variables drawn from an underlying joint density p(x, y). This also makes the proposed updates d_j and β_j^new random variables, because they are functions of the training points. In the following we will make an explicit distinction between random variables, e.g. β_j^new, d_j, x_ij, y_i, and their instantiations, β̂_j^new, d̂_j, x̂_ij, ŷ_i. We would like to determine whether or not a particular update is statistically justified. To this end, we use hypothesis tests: if there is high uncertainty in the direction of the update, we say the update is not justified and it is not performed. For example, if our proposed update β̂_j^new is positive, we want to ensure that P(β_j^new < 0) is small.

3.1 Algorithm Overview

We propose a “growing batch” algorithm for handling very large or infinite datasets: first we select a very small subsample of the data of size N_b ≪ N, and optimize until the entire set of parameters is failing its hypothesis tests (described in more detail below). We then query more data points and include them in our batch, reducing the variance of our estimates and making it more likely that they will pass their tests. We continue adding data to our batch until we are using the full dataset of size N. Once all of the parameters are failing their hypothesis tests on the full batch of data, we stop training.
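The growing-batch schedule just outlined can be sketched as a control loop; all names here are ours, and the per-sweep work (updates plus their hypothesis tests) is abstracted into a callable:

```python
def growing_batch_descent(data, update_and_test, batch0=100, growth=10):
    """Sketch of the growing-batch schedule (hypothetical names).

    update_and_test(batch) should sweep once over all coordinates, apply
    every update whose hypothesis test passes, and return the number of
    updates accepted.  The batch grows by a factor `growth` whenever a
    full sweep accepts nothing, and training stops once that happens on
    the complete dataset.  Returns the final batch size.
    """
    N = len(data)
    batch_size = min(batch0, N)
    while True:
        accepted = update_and_test(data[:batch_size])
        if accepted == 0:                      # every coordinate failed its test
            if batch_size == N:
                break                          # natural stopping criterion
            batch_size = min(batch_size * growth, N)
    return batch_size
```

In this sketch the loop only terminates through the stopping criterion; a real implementation would also cap the number of sweeps per batch size.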
The reasoning behind this is, as argued in the introduction, that at this point we do not have enough evidence even to determine the direction in which to update, which implies that further optimization would result in overfitting. Thus, our algorithm behaves like a stochastic online algorithm during early stages and like a batch algorithm during later stages, equipped with a natural stopping condition. In our experiments, we increase the batch size N_b by a factor of 10 once all parameters fail their hypothesis tests for a given batch. Values in the range 2–100 also worked well; however, we chose 10 as it works very well for our implementation.

3.2 Lasso

For quadratic loss functions with standardized variables, we can directly analyze the densities of d_j and β_j^new. We accept an update if the sign of d_j can be estimated with sufficient probability. Central to our analysis is α_j (11), which is equivalent to β_j^new in the unregularized case γ = 0. We rewrite it as:

\alpha_j = \frac{1}{N} \sum_{i=1}^{N} z_{ij}(\beta), \qquad \text{where } z_{ij}(\beta) = x_{ij} \left( y_i - \tilde{y}_i^{(j)} \right) \qquad (12)

Because the z_ij(β) are given by a fixed transformation of the iid training points, they themselves are iid. As N → ∞, we can appeal to the Central Limit Theorem and model α_j as normally distributed: α_j ∼ N(μ_{α_j}, σ_{α_j}), where μ_{α_j} = E[z_ij] ∀i and σ²_{α_j} = \frac{1}{N} Var(z_ij) ∀i. Empirical justification of the normality of these quantities is given in section 5.2.
So, for any given α_j, we can provide the estimates

E[z_{ij}] \approx \hat{z}_j = \frac{1}{N} \sum_i \hat{z}_{ij}, \qquad \mathrm{Var}(z_{ij}) \approx \sigma^2_{\hat{z}_j} = \frac{1}{N-1} \sum_i (\hat{z}_{ij} - \hat{z}_j)^2 \qquad (13)

which in turn provide estimates for μ_{α_j} and σ_{α_j}.

Figure 1: (left) A Gaussian distribution and the distribution resulting from applying the transformation S, with γ = .1. The interval that is “squashed” is shown by the dash-dotted blue lines. (middle) Q–Q plot demonstrating the normality of the gradients on the L1-regularized L2-loss SVM, computed at various stages of the algorithm (i.e. at different batch-sizes N_b and models β). Straight lines provide evidence that the empirical distribution is close to normality. (right) Plot showing the behavior of our algorithm with respect to ǫ, using logistic regression on the INRIA dataset. ǫ = 0 corresponds to an algorithm which never updates, and ǫ = 0.5 corresponds to an algorithm which always updates (with no stopping criterion), so for these experiments ǫ was chosen in the range [.01, .49]. Error bars denote a single standard deviation.

We next apply the soft-threshold function S to α_j to obtain β_j^new, a random variable whose pdf is a Gaussian which has a section of width 2γ “squashed” into a single point of probability mass at zero, with the remaining density shifted towards zero by a magnitude γ. This is illustrated in Figure 1. Our criterion for accepting an update is that it moves towards the true solution with high probability. Let d̂_j be the realization of the random variable d_j = β_j^new − β_j, computed from the sample batch of N training points.
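The batch estimates (13), together with σ²_α = Var(z)/N, amount to a few lines of code; this is a minimal sketch with our own function name:

```python
import math

def mean_and_sigma_alpha(z):
    """Estimates of equation (13): the sample mean of the z_ij, and the
    standard deviation of their average, sigma_alpha = sqrt(Var(z_ij)/N)."""
    N = len(z)
    z_bar = sum(z) / N
    var_z = sum((zi - z_bar) ** 2 for zi in z) / (N - 1)   # unbiased variance
    return z_bar, math.sqrt(var_z / N)
```

Both quantities come from the same pass over the current batch, so the test adds essentially no cost on top of computing the update itself.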
If d̂_j > 0, then we want P(d_j ≤ 0) to be small, and vice versa. Specifically, for d̂_j > 0, we want P(d_j ≤ 0) < ǫ, where

P(d_j \le 0) = P(\beta_j^{new} \le \beta_j) = \begin{cases} \Phi\!\left( \frac{\beta_j - (\mu_{\alpha_j} + \gamma)}{\sigma_{\alpha_j}} \right) & \text{if } \beta_j < 0 \\ \Phi\!\left( \frac{\beta_j - (\mu_{\alpha_j} - \gamma)}{\sigma_{\alpha_j}} \right) & \text{if } \beta_j \ge 0 \end{cases} \qquad (14)

where Φ(·) denotes the cdf of the standard Normal. This distribution can be derived from its two underlying Gaussians, one with mean μ_{α_j} + γ and one with mean μ_{α_j} − γ. Similarly, one can define an analogous test of P(d_j ≥ 0) < ǫ for d̂_j < 0. These are the hypothesis test equations for a single coordinate, so this test is performed once for each coordinate at its iteration in the coordinate descent algorithm. If a coordinate update fails its test, then we assume that we do not have enough evidence to perform an update on the coordinate, and do not update. Note that, since we are potentially rejecting many updates, significant computation could be going to “waste,” as we are computing updates without using them. Methods to address this follow in section 4.

3.3 Gradient-Based Hypothesis Tests

For general convex loss functions, it is difficult to construct a pdf for d_j and β_j^new. Instead, we accept an update β_j^new if the sign of the partial derivative ∂f(β)/∂β_j can be estimated with sufficient reliability. Because f(β) may be nondifferentiable, we define ∂_j f(β) to be the set of 1D subgradients, or lower tangent planes, at β along direction j. The minimal (in magnitude) subgradient g_j, associated with the flattest lower tangent, is:

g_j = \begin{cases} \alpha_j - \gamma & \text{if } \beta_j < 0 \\ \alpha_j + \gamma & \text{if } \beta_j > 0 \\ S(\alpha_j, \gamma) & \text{otherwise} \end{cases} \qquad \text{where } \alpha_j = L'_j(0, \beta) = \frac{1}{N} \sum_{i=1}^{N} z_{ij} \qquad (15)

where z_ij(β) = −2 y_i x_ij b_i(β) for the squared hinge-loss and z_{ij}(\beta) = \frac{x_{ij}}{1 + e^{y_i \beta^T x_i}} for log-loss. Appealing to the same arguments as in Sec. 3.2, one can show that α_j ∼ N(μ_{α_j}, σ_{α_j}) where μ_{α_j} = E[z_ij] ∀i and σ²_{α_j} = \frac{1}{N} Var(z_ij) ∀i. Thus the pdf of the subgradient g is a Normal shifted by γ sign(β_j) in the case where β_j ≠ 0, or a Normal transformed by the function S(α_j, γ) in the case β_j = 0.
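The direction test (14) for a proposed positive Lasso update can be sketched with the standard-library normal distribution; the function name is ours, and the symmetric case d̂_j < 0 would be handled by the analogous test:

```python
from statistics import NormalDist

def accept_positive_update(beta_j, mu_alpha, sigma_alpha, gamma, eps=0.05):
    """Hypothesis test of equation (14) for a proposed update d_hat > 0
    (sketch): accept only if the probability that the true update d_j
    actually points the other way, P(d_j <= 0), is below eps."""
    # beta_j^new derives from two Gaussians for alpha_j: mean mu_alpha + gamma
    # applies when beta_j < 0, mean mu_alpha - gamma when beta_j >= 0.
    mean = mu_alpha + gamma if beta_j < 0 else mu_alpha - gamma
    p_wrong = NormalDist(mean, sigma_alpha).cdf(beta_j)   # P(beta_j^new <= beta_j)
    return p_wrong < eps
```

With a tight estimate (small σ_α) the update passes easily; as σ_α grows the same proposed step is rejected, which is exactly what triggers batch growth.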
To formulate our hypothesis test, we write ĝ_j as the realization of the random variable g_j, computed from the batch of N training points. We want to take an update only if our update is in the correct direction with high probability: for ĝ_j > 0, we want P(g_j ≤ 0) < ǫ, where

P(g_j \le 0) = \begin{cases} \Phi\!\left( \frac{0 - (\mu_{\alpha_j} - \gamma)}{\sigma_{\alpha_j}} \right) & \text{if } \beta_j \le 0 \\ \Phi\!\left( \frac{0 - (\mu_{\alpha_j} + \gamma)}{\sigma_{\alpha_j}} \right) & \text{if } \beta_j > 0 \end{cases} \qquad (16)

We can likewise define a test of P(g_j ≥ 0) < ǫ which we use to accept updates given a negative estimated gradient ĝ_j < 0.

Figure 2: Plot comparing various algorithms for the L1-regularized L2-loss SVM on the INRIA dataset (left) and the VOC dataset (middle), and for the L1-regularized logistic regression on INRIA (right), using ǫ = 0.05. “CD-Full” denotes our method using all applicable heuristic speedups, “CD-Hyp Test” does not use the shrinking heuristic, while “vanilla CD” simply performs coordinate descent without any speedup methods. “SGD” is stochastic gradient descent with an annealing schedule. Optimization of the hyper-parameters of the annealing schedule (on train data) was not included in the total runtime. Note that our method achieves the optimal precision faster than SGD and also stops learning approximately when overfitting sets in.

4 Additional Speedups

It often occurs that many coordinates will fail their respective hypothesis tests for several consecutive iterations, so predicting these consecutive failures and skipping computations on these coordinates could potentially save computation.
We employ a simple heuristic based on a few observations (where for simplified notation we drop the subscript j):

1. If the set of parameters that are updating remains constant between updates, then for a particular coordinate, the change in the gradient from one iteration to the next is roughly constant. This is an empirical observation.
2. When close to the solution, σ_α remains roughly constant.

Our heuristic is a complicated instance of a simple idea: if the value a(0) of a variable of interest is changing at a constant rate r, we can predict its value at time t with a(t) = a(0) + rt. In our case, we wish to predict when the gradient will have moved to a point where the associated hypothesis test will pass. First, we consider the unregularized case (γ = 0), wherein g = α. We wish to detect when the gradient will result in the hypothesis test passing; that is, we want to find the values μ_α ≈ α̂, where α̂ is a realization of the random variable α, such that P(g ≥ 0) = ǫ or P(g ≤ 0) = ǫ. For this purpose, we need to draw the distinction between an update which was taken, and one which was proposed but for which the hypothesis test failed. Let the set of accepted updates be indexed by t, as in ĝ_t, and let the set of potential updates after an accepted update at time t be indexed by s, as in ĝ_t(s). Thus the algorithm described in the previous section will compute ĝ_t(1), ..., ĝ_t(s*) until the hypothesis test passes for ĝ_t(s*); we then set ĝ_{t+1}(0) = ĝ_t(s*) and perform an update to β using ĝ_{t+1}(0). Ideally, we would prefer not to compute ĝ_t(1), ..., ĝ_t(s*−1) at all, and instead only compute the gradient when we know the hypothesis test will pass, s* iterations after the last accept. Given that we have some scheme for skipping k iterations, we estimate a “velocity” at which ĝ_t(s) = α̂_t(s) changes:

\Delta_\alpha \equiv \frac{\hat{\alpha}_t(s) - \hat{\alpha}_t(s-k-1)}{k+1}
If, for instance, Δ_α > 0, we can compute the value of α̂ at which the hypothesis test will pass, assuming σ_α remains constant, by setting P(g ≤ 0 | μ_α = α_pass) = ǫ, and subsequently we can approximate the number of iterations to skip next (in practice we cap k_skip at some maximum number of iterations, say 40):

\alpha_{pass} = -\sigma_\alpha \Phi^{-1}(\epsilon), \qquad k_{skip} \leftarrow \frac{\alpha_{pass} - \hat{\alpha}_t(s)}{\Delta_\alpha} \qquad (17)

The regularized case with β_j > 0 is equivalent to the unregularized case where g = α + γ, and we solve for the value of α that will allow the test to pass via P(g ≤ 0 | μ_α = α_pass) = ǫ:

\alpha_{pass} = -\sigma_\alpha \Phi^{-1}(\epsilon) - \gamma \qquad (18)

Similarly, the case with β_j ≤ 0 is equivalent to the unregularized case where g = α − γ: α_pass = −σ_α Φ⁻¹(ǫ) + γ. For the case where Δ_α < 0, we solve for P(g ≥ 0 | μ_α = α_pass) = ǫ. This gives α_pass = −σ_α Φ⁻¹(1 − ǫ) + γ if β_j < 0 and α_pass = −σ_α Φ⁻¹(1 − ǫ) − γ otherwise. A similar heuristic for the Lasso case can also be derived.

Figure 3: Comparison of our Lasso algorithm against SGD across various hyper-parameter settings for the exponential annealing schedule. Our algorithm is marked by the horizontal lines, with ǫ ∈ {0.05, 0.2, 0.4}. Note that all algorithms have very similar precision scores, in the interval [0.75, 0.76]. For values of λ = {0.8, 0.9, 0.96, 0.99, 0.999}, SGD gives a good score; however, picking η0 > 1 had an adverse effect on the optimization speed. Our method converged faster than SGD with the best annealing schedule.

4.1 Shrinking Strategy

It is common in SVM algorithms to employ a “shrinking” strategy in which datapoints which do not contribute to the loss are removed from future computations.
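Equations (17)-(18) and their sign variants can be collected into one small routine; this is a sketch under the paper's assumptions (constant drift Δ_α, constant σ_α), with our own names:

```python
import math
from statistics import NormalDist

def iterations_to_skip(alpha_hat, delta_alpha, sigma_alpha, gamma, beta_j,
                       eps=0.05, k_max=40):
    """Skip heuristic of equations (17)-(18) (sketch): predict how many
    sweeps until the hypothesis test will pass, assuming the gradient
    drifts at constant rate delta_alpha with sigma_alpha fixed."""
    Phi_inv = NormalDist().inv_cdf
    if delta_alpha > 0:
        # solve P(g <= 0 | mu_alpha = alpha_pass) = eps
        alpha_pass = -sigma_alpha * Phi_inv(eps) - (gamma if beta_j > 0 else -gamma)
    else:
        # solve P(g >= 0 | mu_alpha = alpha_pass) = eps
        alpha_pass = -sigma_alpha * Phi_inv(1 - eps) + (gamma if beta_j < 0 else -gamma)
    k = (alpha_pass - alpha_hat) / delta_alpha       # equation (17)
    return min(max(int(math.ceil(k)), 0), k_max)     # capped, as in the paper
```

For example, with ǫ = 0.05, σ_α = 1, γ = 0 and a drift of 0.1 per sweep from α̂ = 0, the heuristic skips 17 sweeps (α_pass ≈ 1.645); a drift ten times slower hits the cap.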
Specifically, if a data point (x_i, y_i) has the property that b_i = 1 − y_i β^T x_i < ǫ_shrink < 0, for some ǫ_shrink, then the data point is removed from the current batch. Data points removed from earlier batches in the optimization are still candidates for future batches. We employ this heuristic in our SVM implementation, and Figure 2 shows the relative performance with and without it.

5 Experiments

5.1 Datasets

We provide experimental results for the task of visual object detection, building on recent successful approaches that learn linear scanning-window classifiers defined on Histograms of Oriented Gradients (HOG) descriptors [12, 13]. We train and evaluate a pedestrian detector using the INRIA dataset [12], where (N, p) = (5e6, 1100). We also train and evaluate a car detector using the 2007 PASCAL VOC dataset [13], where (N, p) = (6e7, 1400). For both datasets, we measure performance using the standard PASCAL evaluation protocol of average precision (with 50% overlap of predicted/ground-truth bounding boxes). On such large training sets, one would expect delicately tuned stochastic online algorithms (such as SGD) to outperform standard batch optimization (such as coordinate descent). We show that our algorithm exhibits the speed of the former with the reliability of the latter.

5.2 Normality Tests

In this section we empirically verify the normality claims on the INRIA dataset. Because the negative examples in this data are comprised of many overlapping windows from images, we might expect this non-iid property to damage the normality properties of our updates. For these experiments, we focus on the gradients of the L1-regularized, L2-loss SVM computed during various stages of the optimization process. Figure 1 shows quantile–quantile plots of the average gradient, computed over different subsamples of the data of fixed size N_b, versus the standard Normal. Experiments for smaller N (≈ 100) and random β give similar curves.
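The shrinking rule for the squared hinge-loss can be sketched in a few lines; names are ours, and the sketch just filters the current batch:

```python
import numpy as np

def shrink_batch(X, y, beta, eps_shrink=-1.0):
    """Shrinking heuristic (sketch): drop points whose margin slack
    b_i = 1 - y_i * beta^T x_i falls below eps_shrink < 0, since such
    points are far on the correct side of the margin and contribute
    nothing to the squared hinge loss or its derivatives."""
    b = 1.0 - y * (X @ beta)
    keep = b >= eps_shrink          # boolean mask of points to retain
    return X[keep], y[keep]
```

Because dropped points re-enter consideration when the batch grows, the filter only needs to be correct for the current β, not permanently.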
We conclude that the presence of straight lines provides strong evidence for our assumption that the distribution of gradients is in fact close to normally distributed.

5.3 Algorithm Comparisons

We compared our algorithm to the stochastic gradient method for L1-regularized log-linear models in [14], adapted for the L1-regularized methods above. We use the decay schedule \eta = \eta_0 \frac{1}{t_0 + t} for all curves over time labeled “SGD”. In addition to this schedule, we also tested against SGD using the regret-minimizing schedule of [6] on the INRIA dataset: \eta = \eta_0 \frac{1}{\sqrt{t_0 + t}}. After spending a significant amount of time hand-optimizing the hyper-parameters η0, t0, we found that the settings η0 ≈ 1 for both rate schedules, and t0 ≈ N/10 (standard SGD) and t0 ≈ (N/10)² (regret-minimizing SGD), worked well on our datasets. We ran all our algorithms – Lasso, logistic regression and SVM – with a value of ǫ = 0.05 for both the INRIA and VOC datasets. Figure 2 shows a comparison between our method and stochastic gradient descent on the INRIA and VOC datasets. Our method including the shrinking strategy is faster for the SVM, while methods without a data shrinking strategy, such as logistic regression, are still competitive (see Figure 2). In comparing our methods to the coordinate descent upon which ours are based, we see that our framework provides a considerable speedup over standard coordinate descent. We do this with a method which eventually uses the entire batch of data, so the tricks that enable SGD to converge in an L1-regularized problem are not necessary. In terms of performance, our models are equivalent or close to published state-of-the-art results for linear models [13, 15]. We also performed a comparison against SGD with an exponential decay schedule \eta = \eta_0 e^{-\lambda t} on the Lasso problem (see Fig. 3). Exponential decay schedules are known to work well in practice [14], but do not give the theoretical convergence guarantees of other schedules.
For a range of values of η0 and λ, we compare SGD against our algorithm with ǫ ∈ {0.05, 0.2, 0.4}. From these experiments we conclude that changing ǫ from its standard value of 0.05 all the way to 0.4 (recall that ǫ < 0.5) has very little effect on accuracy and speed. This is in contrast to SGD, which required hyper-parameter tuning to achieve comparable performance. To further demonstrate the robustness of our method to ǫ, we performed 5 trials of logistic regression on the INRIA dataset with a wide range of values of ǫ, with random initializations, shown in Figure 1. All choices of ǫ give a reasonable average precision, and the algorithm begins to become significantly slower only with ǫ > .3.

6 Conclusions

We have introduced a new framework for optimization problems from a statistical, frequentist point of view. Every phase of the learning process has its own optimal batch-size. That is to say, we need only few data-cases early on in learning but many data-cases close to convergence. In fact, we argue that when we are using all of our data and cannot determine with statistical confidence that our update is in the correct direction, we should stop learning to avoid overfitting. These ideas are absent in the usual frequentist (a.k.a. maximum likelihood) and learning-theory approaches, which formulate learning as the optimization of some loss function. A meaningful smallest length scale based on statistical considerations is present in Bayesian analysis through the notion of a posterior distribution. However, the most common inference technique in that domain, MCMC sampling, does not make use of the fact that less precision is needed during the first phases of learning (a.k.a. “burn-in”), because any accept/reject rule requires all data-cases to be seen. Hence, our approach can be thought of as a middle ground that borrows from both learning philosophies.
Our approach also leverages the fact that some features are more predictive than others, and may deserve more attention during optimization. By predicting when updates will pass their statistical tests, we can update each feature with approximately the correct frequency. The proposed algorithms feature a single variable that needs to be set. However, this variable has a clear meaning – the allowed probability that an update moves in the wrong direction. We have used ε = 0.05 in all our experiments to showcase the robustness of the method. Our method is not limited to L1 methods or linear models; our framework can be used on any algorithm in which we take updates which are simple functions of averages over the data. Relative to vanilla coordinate descent, our algorithms can handle dense datasets with N ≫ p. Relative to SGD² our method can be thought of as "self-annealing" in the sense that it increases its precision by adaptively increasing the dataset size. The advantages over SGD are therefore that we avoid tuning the hyper-parameters of an annealing schedule and that we have an automated stopping criterion.

²Recent benchmarks [16] show that a properly tuned SGD solver is highly competitive for large-scale problems [17].

References
[1] L. Bottou and O. Bousquet. Learning using large datasets. In Mining Massive DataSets for Security, NATO ASI Workshop Series. IOS Press, Amsterdam, 2008.
[2] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. Advances in Neural Information Processing Systems, 20:161–168, 2008.
[3] B. Yu. Embracing statistical challenges in the information technology age. Technometrics, American Statistical Association and the American Society for Quality, 49:237–248, 2007.
[4] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050–2057, 2004.
[5] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928–935. ACM, 2008.
[6] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Twentieth International Conference on Machine Learning, 2003.
[7] P.L. Bartlett, E. Hazan, and A. Rakhlin. Adaptive online gradient descent. Advances in Neural Information Processing Systems, 21, 2007.
[8] A. Korattikara, L. Boyles, M. Welling, J. Kim, and H. Park. Statistical optimization of non-negative matrix factorization. AISTATS, 2011.
[9] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302–332, 2007.
[10] R.E. Fan, K.W. Chang, C.J. Hsieh, X.R. Wang, and C.J. Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[11] N. Le Roux, P.A. Manzagol, and Y. Bengio. Topmoumoute online natural gradient algorithm. In Neural Information Processing Systems (NIPS), 2007.
[12] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, page 886, 2005.
[13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html.
[14] Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ananiadou. Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 477–485, Suntec, Singapore, August 2009. Association for Computational Linguistics.
[15] Navneet Dalal. Finding People in Images and Video. PhD thesis, Institut National Polytechnique de Grenoble / INRIA Grenoble, July 2006.
[16] Pascal large scale learning challenge. http://largescale.ml.tu-berlin.de/workshop/, 2008.
[17] A. Bordes, L. Bottou, and P. Gallinari. SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent. Journal of Machine Learning Research, 10:1737–1754, 2009.
|
2011
|
203
|
4,262
|
Statistical Performance of Convex Tensor Decomposition Ryota Tomioka† Taiji Suzuki† †Department of Mathematical Informatics, The University of Tokyo Tokyo 113-8656, Japan tomioka@mist.i.u-tokyo.ac.jp s-taiji@stat.t.u-tokyo.ac.jp Kohei Hayashi‡ ‡Graduate School of Information Science, Nara Institute of Science and Technology Nara 630-0192, Japan kohei-h@is.naist.jp Hisashi Kashima†,∗ ∗Basic Research Programs PRESTO, Synthesis of Knowledge for Information Oriented Society, JST Tokyo 102-8666, Japan kashima@mist.i.u-tokyo.ac.jp Abstract We analyze the statistical performance of a recently proposed convex tensor decomposition algorithm. Conventionally tensor decomposition has been formulated as non-convex optimization problems, which hindered the analysis of their performance. We show under some conditions that the mean squared error of the convex method scales linearly with the quantity we call the normalized rank of the true tensor. The current analysis naturally extends the analysis of convex low-rank matrix estimation to tensors. Furthermore, we show through numerical experiments that our theory can precisely predict the scaling behaviour in practice. 1 Introduction Tensors (multi-way arrays) generalize matrices and naturally represent data having more than two modalities. For example, multi-variate time-series, for instance, electroencephalography (EEG), recorded from multiple subjects under various conditions naturally form a tensor. Moreover, in collaborative filtering, users’ preferences on products, conventionally represented as a matrix, can be represented as a tensor when the preferences change over time or context. For the analysis of tensor data, various models and methods for the low-rank decomposition of tensors have been proposed (see Kolda & Bader [12] for a recent survey). These techniques have recently become increasingly popular in data-mining [1, 14] and computer vision [25, 26]. 
They have also proven useful in chemometrics [4], psychometrics [24], and signal processing [20, 7, 8]. Despite empirical success, the statistical performance of tensor decomposition algorithms has not been fully elucidated. The difficulty lies in the non-convexity of the conventional tensor decomposition algorithms (e.g., alternating least squares [6]). In addition, studies have revealed many discrepancies (see [12]) between matrix rank and tensor rank, which make extension of studies on the performance of low-rank matrix models (e.g., [9]) challenging. Recently, several authors [21, 10, 13, 23] have focused on the notion of tensor mode-k rank (instead of tensor rank), which is related to the Tucker decomposition [24]. They discovered that regularized estimation based on the Schatten 1-norm, which is a popular technique for recovering low-rank matrices via convex optimization, can also be applied to tensor decomposition. In particular, the study in [23] showed that there is a clear transition at a certain number of samples where the error drops dramatically from no generalization to perfect generalization (see Figure 1).

[Figure 1: Result of estimation of a rank-(7, 8, 9) tensor of dimensions 50 × 50 × 20 from partial measurements; see [23] for the details. The estimation error |||Ŵ − W*|||_F is plotted against the fraction of observed elements m = M/N. Error bars over 10 repetitions are also shown. "Convex" refers to the convex tensor decomposition based on the minimization problem (7). "Tucker (exact)" refers to the conventional (non-convex) Tucker decomposition [24] at the correct rank. The gray dashed line shows the optimization tolerance 10^{-3}. The question is how we can predict the point where the generalization begins (roughly m = 0.35 in this plot).]
In this paper, motivated by the above recent work, we mathematically analyze the performance of convex tensor decomposition. The new convex formulation for tensor decomposition allows us to generalize recent results on Schatten 1-norm-regularized estimation of matrices (see [17, 18, 5, 19]). Under a general setting we show how the estimation error scales with the mode-k ranks of the true tensor. Furthermore, we analyze the specific settings of (i) noisy tensor decomposition and (ii) random Gaussian design. In the first setting, we assume that all the elements of a low-rank tensor are observed with noise and the goal is to recover the underlying low-rank structure. This is the most common setting in which a tensor decomposition algorithm is used. In the second setting, we assume that the unknown tensor is the coefficient of a tensor-input scalar-output regression problem and the input tensors (design) are randomly drawn from independent Gaussian distributions. Surprisingly, it turns out that the random Gaussian setting can precisely predict the phase-transition-like behaviour in Figure 1. To the best of our knowledge, this is the first paper that rigorously studies the performance of a tensor decomposition algorithm.

2 Notation

In this section, we introduce the notation we use in this paper. Moreover, we introduce a Hölder-like inequality (3) and the notion of mode-k decomposability (5), which play central roles in our analysis. Let X ∈ R^{n1×···×nK} be a K-way tensor. We denote the number of elements in X by N = ∏_{k=1}^K nk. The inner product between two tensors is defined as ⟨W, X⟩ = vec(W)ᵀ vec(X), where vec denotes vectorization. In addition, we define the Frobenius norm of a tensor as |||X|||_F = √⟨X, X⟩. The mode-k unfolding X_(k) is the nk × n̄_{\k} (n̄_{\k} := ∏_{k'≠k} n_{k'}) matrix obtained by concatenating the mode-k fibers (the vectors obtained by fixing every index of X but the kth index) of X as column vectors.
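In NumPy terms, the quantities just defined can be computed as below. This is a sketch: the column ordering of the unfolding is convention-dependent, but the ranks and norms used in the paper do not depend on it.

```python
import numpy as np

def unfold(X, k):
    # mode-k unfolding X_(k): an n_k x (product of the other dimensions)
    # matrix whose columns are the mode-k fibers of X
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def inner(W, X):
    # <W, X> = vec(W)^T vec(X)
    return float(np.vdot(W, X))

def fro_norm(X):
    # |||X|||_F = sqrt(<X, X>)
    return float(np.sqrt(np.vdot(X, X)))
```

The mode-k rank defined in the next paragraph is then just `np.linalg.matrix_rank(unfold(X, k))`.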
The mode-k rank of a tensor X, denoted by rank_k(X), is the rank of the mode-k unfolding X_(k) (as a matrix). Note that when K = 2, X is actually a matrix and X_(2) = X_(1)ᵀ. We say a tensor X is rank (r1, ..., rK) when rk = rank_k(X) for k = 1, ..., K. Note that the mode-k rank can be computed in polynomial time, because it boils down to computing a matrix rank, whereas computing the tensor rank is NP-complete [11]. See [12] for more details. Since for each k the convex envelope of the mode-k rank is given by the Schatten 1-norm [18] (known as the trace norm [22] or the nuclear norm [3]), it is natural to consider the following overlapped Schatten 1-norm |||W|||_{S1} of a tensor W ∈ R^{n1×···×nK} (see also [21]):

|||W|||_{S1} = (1/K) Σ_{k=1}^K ∥W_(k)∥_{S1},   (1)

where W_(k) is the mode-k unfolding of W. Here ∥·∥_{S1} is the Schatten 1-norm for a matrix, ∥W∥_{S1} = Σ_{j=1}^r σ_j(W), where σ_j(W) is the jth largest singular value of W. The dual norm of the Schatten 1-norm is the Schatten ∞-norm (known as the spectral norm), ∥X∥_{S∞} = max_{j=1,...,r} σ_j(X). Since the two norms ∥·∥_{S1} and ∥·∥_{S∞} are dual to each other, we have the following inequality:

|⟨W, X⟩| ≤ ∥W∥_{S1} ∥X∥_{S∞},   (2)

where ⟨W, X⟩ is the inner product of W and X. The same inequality holds for the overlapped Schatten 1-norm (1) and its dual norm. The dual norm of the overlapped Schatten 1-norm can be characterized by the following lemma.

Lemma 1. The dual norm of the overlapped Schatten 1-norm, denoted |||·|||_{S1*}, is the infimum of the maximum mode-k spectral norm over tensors whose average equals the given tensor X:

|||X|||_{S1*} = inf_{(1/K)(Y^(1)+Y^(2)+···+Y^(K))=X} max_{k=1,...,K} ∥Y^(k)_(k)∥_{S∞},

where Y^(k)_(k) is the mode-k unfolding of Y^(k). Moreover, the following upper bound on the dual norm |||·|||_{S1*} is valid:

|||X|||_{S1*} ≤ |||X|||_mean := (1/K) Σ_{k=1}^K ∥X_(k)∥_{S∞}.

Proof.
The first part can be shown by solving the dual of the maximization problem |||X|||_{S1*} := sup ⟨W, X⟩ s.t. |||W|||_{S1} ≤ 1. The second part is obtained by setting Y^(k) = (K / Σ_{k'=1}^K (1/c_{k'})) · X/c_k, where c_k = ∥X_(k)∥_{S∞}, and using Jensen's inequality.

According to Lemma 1, we have the following Hölder-like inequality:

|⟨W, X⟩| ≤ |||W|||_{S1} |||X|||_{S1*} ≤ |||W|||_{S1} |||X|||_mean.   (3)

Note that the above bound is tighter than the more intuitive relation |⟨W, X⟩| ≤ |||W|||_{S1} |||X|||_{S∞} (with |||X|||_{S∞} := max_{k=1,...,K} ∥X_(k)∥_{S∞}), which one might come up with as an analogy to the matrix case (2).

Finally, let W* ∈ R^{n1×···×nK} be the low-rank tensor that we wish to recover. We assume that W* is rank (r1, ..., rK). Thus, for each k we have W*_(k) = U_k S_k V_kᵀ (k = 1, ..., K), where U_k ∈ R^{nk×rk} and V_k ∈ R^{n̄_{\k}×rk} are orthogonal and S_k ∈ R^{rk×rk} is diagonal. Let Δ ∈ R^{n1×···×nK} be an arbitrary tensor. We define the mode-k orthogonal complement Δ''_k of an unfolding Δ_(k) ∈ R^{nk×n̄_{\k}} of Δ with respect to the true low-rank tensor W* as follows:

Δ''_k = (I_{nk} − U_k U_kᵀ) Δ_(k) (I_{n̄_{\k}} − V_k V_kᵀ).   (4)

In addition, Δ'_k := Δ_(k) − Δ''_k is the component having overlapped row/column space with the unfolding of the true tensor W*_(k). Note that the decomposition Δ_(k) = Δ'_k + Δ''_k is defined for each mode; thus we use the subscript k instead of (k). Using the decomposition defined above, we have the following equality, which we call the mode-k decomposability of the Schatten 1-norm:

∥W*_(k) + Δ''_k∥_{S1} = ∥W*_(k)∥_{S1} + ∥Δ''_k∥_{S1}   (k = 1, ..., K).   (5)

The above decomposition is defined for each mode and thus is weaker than the notion of decomposability discussed by Negahban et al. [15].

3 Theory

In this section, we first present a deterministic result that holds under a certain choice of regularization constant λM and an assumption called the restricted strong convexity.
Then, we focus on special cases to justify the choice of regularization constant and the restricted strong convexity assumption. We analyze the settings of (i) noisy tensor decomposition and (ii) random Gaussian design in Section 3.2 and Section 3.3, respectively.

3.1 Main result

Our goal is to estimate an unknown rank-(r1, ..., rK) tensor W* ∈ R^{n1×···×nK} from observations

yi = ⟨Xi, W*⟩ + ϵi (i = 1, ..., M).   (6)

Here the noise ϵi follows an independent zero-mean Gaussian distribution with variance σ². We employ the regularized empirical risk minimization problem proposed in [21, 10, 13, 23] for the estimation of W:

minimize_{W ∈ R^{n1×···×nK}}  (1/2M) ∥y − X(W)∥₂² + λM |||W|||_{S1},   (7)

where y = (y1, ..., yM)ᵀ is the collection of observations; X : R^{n1×···×nK} → R^M is a linear operator that maps W to the M-dimensional output vector X(W) = (⟨X1, W⟩, ..., ⟨XM, W⟩)ᵀ ∈ R^M. The Schatten 1-norm term penalizes every mode of W to be jointly low-rank (see Equation (1)); λM > 0 is the regularization constant. Accordingly, the solution of the minimization problem (7) is typically a low-rank tensor when λM is sufficiently large. In addition, we denote the adjoint operator of X by X* : R^M → R^{n1×···×nK}; that is, X*(ϵ) = Σ_{i=1}^M ϵi Xi ∈ R^{n1×···×nK}. The first step in our analysis is to characterize the particularity of the residual tensor Δ := Ŵ − W* as in the following lemma.

Lemma 2. Let Ŵ be the solution of the minimization problem (7) with λM ≥ 2|||X*(ϵ)|||_mean/M, and let Δ := Ŵ − W*, where W* is the true low-rank tensor. Let Δ_(k) = Δ'_k + Δ''_k be the decomposition defined in Equation (4). Then we have the following inequalities:
1. rank(Δ'_k) ≤ 2rk for each k = 1, ..., K.
2. Σ_{k=1}^K ∥Δ''_k∥_{S1} ≤ 3 Σ_{k=1}^K ∥Δ'_k∥_{S1}.

Proof. The proof uses the mode-k decomposability (5) and is analogous to that of Lemma 1 in [17].

The second ingredient of our analysis is the restricted strong convexity.
Although "strong" may sound like a strong assumption, the point is that we require this assumption to hold only for the particular residual tensor we characterized in Lemma 2. The assumption can be stated as follows.

Assumption 1 (Restricted strong convexity). We suppose that there is a positive constant κ(X) such that the operator X satisfies the inequality

(1/M) ∥X(Δ)∥₂² ≥ κ(X) |||Δ|||_F²,   (8)

for all Δ ∈ R^{n1×···×nK} such that, for each k = 1, ..., K, rank(Δ'_k) ≤ 2rk and Σ_{k=1}^K ∥Δ''_k∥_{S1} ≤ 3 Σ_{k=1}^K ∥Δ'_k∥_{S1}, where Δ'_k and Δ''_k are defined through the decomposition (4).

Now using the above two ingredients, we are ready to prove the following deterministic guarantee on the performance of the estimation procedure (7).

Theorem 1. Let Ŵ be the solution of the minimization problem (7) with λM ≥ 2|||X*(ϵ)|||_mean/M. Suppose that the operator X satisfies the restricted strong convexity condition. Then the following bound is true:

|||Ŵ − W*|||_F ≤ 32 λM Σ_{k=1}^K √rk / (κ(X) K).   (9)

Proof. Let Δ = Ŵ − W*. Combining the fact that the objective value for Ŵ is smaller than that for W*, the Hölder-like inequality (3), the triangle inequality |||W*|||_{S1} − |||Ŵ|||_{S1} ≤ |||Δ|||_{S1}, and the assumption |||X*(ϵ)/M|||_mean ≤ λM/2, we obtain

(1/2M) ∥X(Δ)∥₂² ≤ |||X*(ϵ)/M|||_mean |||Δ|||_{S1} + λM |||Δ|||_{S1} ≤ 2λM |||Δ|||_{S1}.   (10)

Now the left-hand side can be lower-bounded using the restricted strong convexity (8). On the other hand, using Lemma 2, the right-hand side can be upper-bounded as follows:

|||Δ|||_{S1} ≤ (1/K) Σ_{k=1}^K (∥Δ'_k∥_{S1} + ∥Δ''_k∥_{S1}) ≤ (4/K) Σ_{k=1}^K ∥Δ'_k∥_{S1} ≤ (4 |||Δ|||_F / K) Σ_{k=1}^K √(2rk),   (11)

where the last inequality follows because |||Δ|||_F = ∥Δ_(k)∥_F for k = 1, ..., K. Combining inequalities (8), (10), and (11), we obtain our claim (9).

Negahban et al.
[15] (see also [17]) pointed out that the key properties for establishing a sharp convergence result for a regularized M-estimator are the decomposability of the regularizer and the restricted strong convexity. What we have shown suggests that the weaker mode-k decomposability (5) suffices to obtain the above convergence result for the overlapped Schatten 1-norm (1) regularization.

3.2 Noisy Tensor Decomposition

In this subsection, we consider the setting where all the elements are observed (with noise) and the goal is to recover the underlying low-rank tensor without noise. Since all the elements are observed only once, X is simply a vectorization (M = N), and the left-hand side of inequality (10) gives the quantity of interest ∥X(Δ)∥₂² = |||Ŵ − W*|||_F². Therefore, the remaining task is to bound |||X*(ϵ)|||_mean, as in the following lemma.

Lemma 3. Suppose that X : R^{n1×···×nK} → R^N is a vectorization of a tensor. With high probability the quantity |||X*(ϵ)|||_mean is concentrated around its mean, which can be bounded as follows:

E|||X*(ϵ)|||_mean ≤ (σ/K) Σ_{k=1}^K (√nk + √n̄_{\k}).   (12)

Setting the regularization constant as λM = c0 E|||X*(ϵ)|||_mean/N, we obtain the following theorem.

Theorem 2. Suppose that X : R^{n1×···×nK} → R^N is a vectorization of a tensor. There are universal constants c0 and c1 such that, with high probability, any solution of the minimization problem (7) with regularization constant λM = c0 σ Σ_{k=1}^K (√nk + √n̄_{\k})/(KN) satisfies the following bound:

|||Ŵ − W*|||_F² ≤ c1 σ² ((1/K) Σ_{k=1}^K (√nk + √n̄_{\k}))² ((1/K) Σ_{k=1}^K √rk)².

Proof. Combining Equations (10)–(11) with the fact that X is simply a vectorization and M = N, we have (1/N)|||Ŵ − W*|||_F ≤ (16√2 λM/K) Σ_{k=1}^K √rk. Substituting the choice of regularization constant λM and squaring both sides, we obtain our claim. □

We can simplify the result of Theorem 2 by noting that n̄_{\k} = N/nk ≫ nk when the dimensions are of the same order.
Introducing the notation ∥r∥_{1/2} = ((1/K) Σ_{k=1}^K √rk)² and n^{-1} := (1/n1, ..., 1/nK), we have

|||Ŵ − W*|||_F² / N ≤ O_p(σ² ∥n^{-1}∥_{1/2} ∥r∥_{1/2}).   (13)

We call the quantity r̄ = ∥n^{-1}∥_{1/2} ∥r∥_{1/2} the normalized rank, because r̄ = r/n when the dimensions are balanced (nk = n and rk = r for all k = 1, ..., K).

3.3 Random Gaussian Design

In this subsection, we consider the case where the elements of the input tensors Xi (i = 1, ..., M) in the observation model (6) are distributed according to independent identical standard Gaussian distributions. We call this setting random Gaussian design. First we show an upper bound on the norm |||X*(ϵ)|||_mean, which we use to specify the scaling of the regularization constant λM in Theorem 1.

Lemma 4. Let X : R^{n1×···×nK} → R^M be a random Gaussian design. In addition, we assume that the noise ϵi is sampled independently from N(0, σ²). Then with high probability the quantity |||X*(ϵ)|||_mean is concentrated around its mean, which can be bounded as follows:

E|||X*(ϵ)|||_mean ≤ (σ√M / K) Σ_{k=1}^K (√nk + √n̄_{\k}).

Next, the following lemma, which is a generalization of a result presented in Negahban and Wainwright [17, Proposition 1], provides a ground for the restricted strong convexity assumption (8).

Lemma 5. Let X : R^{n1×···×nK} → R^M be a random Gaussian design. Then it satisfies

∥X(Δ)∥₂ / √M ≥ (1/4) |||Δ|||_F − (1/K) Σ_{k=1}^K (√(nk/M) + √(n̄_{\k}/M)) |||Δ|||_{S1},

with probability at least 1 − 2 exp(−N/32).

Proof. The proof is analogous to that of Proposition 1 in [17], except that we use the Hölder-like inequality (3) for tensors instead of inequality (2) for matrices.

Finally, we obtain the following convergence bound. Theorem 3.
Under the random Gaussian design setup, there are universal constants c0, c1, and c2 such that for a sample size M ≥ c1 ((1/K) Σ_{k=1}^K (√nk + √n̄_{\k}))² ((1/K) Σ_{k=1}^K √rk)², any solution of the minimization problem (7) with regularization constant λM = c0 σ Σ_{k=1}^K (√nk + √n̄_{\k})/(K√M) satisfies the following bound:

|||Ŵ − W*|||_F² ≤ c2 σ² ((1/K) Σ_{k=1}^K (√nk + √n̄_{\k}))² ((1/K) Σ_{k=1}^K √rk)² / M,

with high probability.

Again we can simplify the result of Theorem 3 as follows: for sample size M ≥ c1 N r̄, we have

|||Ŵ − W*|||_F² ≤ O_p(σ² N ∥n^{-1}∥_{1/2} ∥r∥_{1/2} / M),   (14)

where r̄ = ∥n^{-1}∥_{1/2} ∥r∥_{1/2} is the normalized rank. Note that the condition on the number of samples M does not depend on the noise variance σ². Therefore, in the limit σ² → 0, the bound (14) is arbitrarily small but only valid for a sample size M that exceeds c1 N r̄, which implies a threshold behavior as in Figure 1. Note also that in the matrix case (K = 2), r1 = r2 = r and N∥n^{-1}∥_{1/2} = O(n1 + n2). Therefore we can restate the above result as: for sample size M ≥ c1 r(n1 + n2), we have ∥Ŵ − W*∥_F² ≤ O_p(r(n1 + n2)/M), which is compatible with the results in [17, 18].

4 Experiments

In this section, we conduct two numerical experiments to confirm our analysis in Section 3.2 and Section 3.3.

[Figure 2: Result of noisy tensor decomposition for tensors of size 50 × 50 × 20 and 100 × 100 × 50, for several values of the regularization constant λM. (a) Small noise (σ = 0.01); (b) Large noise (σ = 0.1). Mean squared error is plotted against the normalized rank.]
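The quantities appearing in the bounds of Theorems 2 and 3 — the overlapped Schatten 1-norm (1), the upper bound |||·|||_mean on its dual norm from Lemma 1, and the normalized rank r̄ — can be computed numerically via SVDs of the unfoldings. The sketch below is illustrative, not the paper's code.

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def overlapped_schatten1(W):
    # |||W|||_{S1} = (1/K) * sum_k ||W_(k)||_{S1}   (Equation (1))
    K = W.ndim
    return sum(np.linalg.svd(unfold(W, k), compute_uv=False).sum()
               for k in range(K)) / K

def mean_spectral(X):
    # |||X|||_mean = (1/K) * sum_k ||X_(k)||_{S_inf}  (Lemma 1 upper bound)
    K = X.ndim
    return sum(np.linalg.svd(unfold(X, k), compute_uv=False)[0]
               for k in range(K)) / K

def normalized_rank(n, r):
    # r_bar = ||n^{-1}||_{1/2} * ||r||_{1/2}, where
    # ||v||_{1/2} = ((1/K) sum_k sqrt(v_k))^2; equals r/n when balanced
    half = lambda v: float(np.mean(np.sqrt(np.asarray(v, float))) ** 2)
    return half([1.0 / nk for nk in n]) * half(r)
```

For instance, nk = 50 and rk = 10 for all k gives r̄ = 0.2, the case discussed in Section 4.2.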
4.1 Noisy Tensor Decomposition

We randomly generated low-rank tensors of dimensions n^(1) = (50, 50, 20) and n^(2) = (100, 100, 50) for various ranks (r1, ..., rK). For a specific rank, we generated the true tensor by drawing the elements of the r1 × ··· × rK "core tensor" from the standard normal distribution and multiplying each of its modes by an orthonormal factor randomly drawn from the Haar measure. As described in Section 3.2, the observation y consists of all the elements of the original tensor observed once (M = N) with additive independent Gaussian noise with variance σ². We used the alternating direction method of multipliers (ADMM) for the "constraint" approach described in [23, 10] to solve the minimization problem (7). The whole experiment was repeated 10 times and averaged. The results are shown in Figure 2. The mean squared error |||Ŵ − W*|||_F²/N is plotted against the normalized rank r̄ = ∥n^{-1}∥_{1/2}∥r∥_{1/2} (of the true tensor) defined in Equation (13). Since the choice of the regularization constant λM in Theorem 2 depends only on the size of the tensor and not on the ranks of the underlying tensor, we fix the regularization constant to several different values and report the dependency of the estimation error on the normalized rank r̄ of the true tensor. Figure 2(a) shows the result for small noise (σ = 0.01) and Figure 2(b) shows the result for large noise (σ = 0.1). As predicted by Theorem 2, the squared error |||Ŵ − W*|||_F² grows linearly with the normalized rank r̄. This behaviour is consistently observed not only around the preferred regularization constant value (triangles) but also in the over-fitting case (circles) and the under-fitting case (crosses). Moreover, as predicted by Theorem 2, the preferred regularization constant value scales linearly and the squared error scales quadratically with the noise standard deviation σ.
As predicted by Lemma 3, the curves for the smaller 50 × 50 × 20 tensor and those for the larger 100 × 100 × 50 tensor seem to agree when the regularization constant is scaled by a factor of two. Note that the dominant term in inequality (12) is the second term √n̄_{\k}, which roughly scales by a factor of two from 50 × 50 × 20 to 100 × 100 × 50.

4.2 Tensor completion from partial observations

In this subsection, we repeat the simulation originally done by Tomioka et al. [23] and demonstrate that our results in Section 3.3 can precisely predict the empirical scaling behaviour with respect to both the size and rank of a tensor. We present results for both matrix completion (K = 2) and tensor completion (K = 3). For the matrix case, we randomly generated low-rank matrices of dimensions 50 × 20, 100 × 40, and 250 × 200. For the tensor case, we randomly generated low-rank tensors of dimensions 50 × 50 × 20 and 100 × 100 × 50. We generated the matrices or tensors as in the previous subsection for various ranks. We randomly selected some elements of the true matrix/tensor for training and kept the remaining elements for testing. No observation noise is added. We used the ADMM for the "as a matrix" and "constraint" approaches described in [23] to solve the minimization problem (7) for matrix completion and tensor completion, respectively. Since there is no observation noise, we chose the regularization constant λ → 0. A single experiment for a specific size and rank can be visualized as in Figure 1.

[Figure 3: Scaling behaviour of matrix/tensor completion with respect to the size n and the rank r. The fraction of observations needed to reach error ≤ 0.01 is plotted against the normalized rank ∥n^{-1}∥_{1/2}∥r∥_{1/2}. (a) Matrix completion (K = 2), sizes 50 × 20, 100 × 40, 250 × 200; (b) Tensor completion (K = 3), sizes 50 × 50 × 20 and 100 × 100 × 50.]
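The data generation used in these experiments (a Gaussian core tensor with random orthonormal mode factors, plus a random training mask for the completion setting) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; `seed` arguments are placeholders.

```python
import numpy as np

def random_lowrank_tensor(n, r, seed=None):
    """Gaussian core of size r1 x ... x rK, each mode multiplied by a
    random n_k x r_k orthonormal factor (QR of a Gaussian matrix, which
    is Haar-distributed up to sign conventions). Requires n_k >= r_k."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal(r)
    for k in range(len(n)):
        Q, _ = np.linalg.qr(rng.standard_normal((n[k], r[k])))
        Wk = np.moveaxis(W, k, 0).reshape(W.shape[k], -1)   # mode-k unfolding
        folded = (Q @ Wk).reshape((n[k],) + W.shape[:k] + W.shape[k + 1:])
        W = np.moveaxis(folded, 0, k)                       # fold back
    return W

def random_mask(shape, m, seed=None):
    """Boolean mask selecting a random fraction m of the N entries
    for training; the rest are held out for testing."""
    rng = np.random.default_rng(seed)
    N = int(np.prod(shape))
    mask = np.zeros(N, dtype=bool)
    mask[rng.choice(N, size=int(round(m * N)), replace=False)] = True
    return mask.reshape(shape)
```

By construction, the returned tensor has mode-k rank r_k for every k (almost surely), as required for the rank-(r1, ..., rK) setup above.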
In Figure 3, we plot the minimum fraction of observations m = M/N that achieved error |||Ŵ − W*|||_F smaller than 0.01 against the normalized rank r̄ = ∥n^{-1}∥_{1/2}∥r∥_{1/2} (of the true tensor) defined in Equation (13). The matrix case is plotted in Figure 3(a) and the tensor case is plotted in Figure 3(b). Each series (blue crosses or red circles) corresponds to a different matrix/tensor size and each data point corresponds to a different core size (rank). We can see that the fraction of observations m = M/N scales linearly with the normalized rank r̄, which agrees with the condition M/N ≥ c1∥n^{-1}∥_{1/2}∥r∥_{1/2} = c1 r̄ in Theorem 3 (see Equation (14)). The agreement is especially good for tensor completion (Figure 3(b)), where the two series almost overlap. Interestingly, we can see that when compared at the same normalized rank, tensor completion is easier than matrix completion. For example, when nk = 50 and rk = 10 for each k = 1, ..., K, the normalized rank is 0.2. From Figure 3, we can see that we only need to observe 30% of the entries in the tensor case to achieve error smaller than 0.01, whereas we need about 60% of the entries in the matrix case.

5 Conclusion

We have analyzed the statistical performance of a tensor decomposition algorithm based on the overlapped Schatten 1-norm regularization (7). Numerical experiments show that our theory can predict the empirical scaling behaviour well. The fraction of observations m = M/N at the threshold predicted by our theory is proportional to the quantity we call the normalized rank, which refines the conjecture (sum of the mode-k ranks) in [23]. There are numerous directions in which the current study can be extended. In this paper, we have focused on the convergence of the estimation error; it would be meaningful to also analyze the condition for the consistency of the estimated rank as in [2].
Second, although we have succeeded in predicting the empirical scaling behaviour, the setting of random Gaussian design does not match the tensor completion setting in Section 4.2. In order to analyze the latter setting, the notion of incoherence in [5] or spikiness in [16] might be useful. This might also explain why tensor completion is easier than matrix completion at the same normalized rank. Moreover, when the target tensor is only low-rank in a certain mode, Schatten 1-norm regularization fails badly (as predicted by the high normalized rank). It would be desirable to analyze the “Mixture” approach that aims at this case [23]. In a broader context, we believe that the current paper could serve as a basis for re-examining the concept of tensor rank and low-rank approximation of tensors based on convex optimization. Acknowledgments. We would like to thank Franz Kir´aly and Hiroshi Kajino for their valuable comments and discussions. This work was supported in part by MEXT KAKENHI 22700138, 23240019, 23120004, 22700289, and NTT Communication Science Laboratories. 8 References [1] E. Acar and B. Yener. Unsupervised multiway data analysis: A literature survey. IEEE T. Knowl. Data. En., 21(1):6–20, 2009. [2] F.R. Bach. Consistency of trace norm minimization. J. Mach. Learn. Res., 9:1019–1048, 2008. [3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004. [4] R. Bro. PARAFAC. Tutorial and applications. Chemometr. Intell. Lab., 38(2):149–171, 1997. [5] E. J. Candes and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717–772, 2009. [6] J.D. Carroll and J.J. Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of “Eckart-Young” decomposition. Psychometrika, 35(3):283–319, 1970. [7] P. Comon. Tensor decompositions. In J. G. McWhirter and I. K. Proudler, editors, Mathematics in signal processing V. Oxford University Press, 2002. [8] L. De Lathauwer and J. 
Vandewalle. Dimensionality reduction in higher-order signal processing and rank-(r1, r2, . . . , rn) reduction in multilinear algebra. Linear Algebra Appl., 391:31–55, 2004. [9] K. Fukumizu. Generalization error of linear neural networks in unidentifiable cases. In Algorithmic Learning Theory, pages 51–62. Springer, 1999. [10] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27:025010, 2011. [11] J. H˚astad. Tensor rank is NP-complete. Journal of Algorithms, 11(4):644–654, 1990. [12] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009. [13] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. In Prof. ICCV, 2009. [14] M. Mørup. Applications of tensor (multiway array) factorizations and decompositions in data mining. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):24–40, 2011. [15] S. Negahban, P. Ravikumar, M. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in NIPS 22, pages 1348–1356. 2009. [16] S. Negahban and M.J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Technical report, arXiv:1009.2118, 2010. [17] S. Negahban and M.J. Wainwright. Estimation of (near) low-rank matrices with noise and highdimensional scaling. Ann. Statist., 39(2), 2011. [18] B. Recht, M. Fazel, and P.A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010. [19] A. Rohde and A.B. Tsybakov. Estimation of high-dimensional low-rank matrices. Ann. Statist., 39(2):887–930, 2011. [20] N.D. Sidiropoulos, R. Bro, and G.B. Giannakis. Parallel factor analysis in sensor array processing. IEEE T. 
Signal Proces., 48(8):2377–2388, 2000.
[21] M. Signoretto, L. De Lathauwer, and J.A.K. Suykens. Nuclear norms for tensors and their use for convex multilinear estimation. Technical Report 10-186, ESAT-SISTA, K.U.Leuven, 2010.
[22] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in NIPS 17, pages 1329–1336. MIT Press, Cambridge, MA, 2005.
[23] R. Tomioka, K. Hayashi, and H. Kashima. Estimation of low-rank tensors via convex optimization. Technical report, arXiv:1010.0789, 2011.
[24] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
[25] M. Vasilescu and D. Terzopoulos. Multilinear analysis of image ensembles: Tensorfaces. Computer Vision—ECCV 2002, pages 447–460, 2002.
[26] H. Wang and N. Ahuja. Facial expression decomposition. In Proc. 9th ICCV, pages 958–965, 2003.
A Machine Learning Approach to Predict Chemical Reactions
Matthew A. Kayala, Pierre Baldi∗
Institute of Genomics and Bioinformatics, School of Information and Computer Sciences, University of California, Irvine, Irvine, CA 92697
{mkayala,pfbaldi}@ics.uci.edu
Abstract
Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Previous approaches are not high-throughput, are not generalizable or scalable, or lack sufficient data to be effective. We describe single mechanistic reactions as concerted electron movements from an electron orbital source to an electron orbital sink. We use an existing rule-based expert system to derive a dataset consisting of 2,989 productive mechanistic steps and 6.14 million non-productive mechanistic steps. We then pose identifying productive mechanistic steps as a ranking problem: rank potential orbital interactions such that the top ranked interactions yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.0% of non-productive reactions with less than a 0.1% false negative rate. Then, we train an ensemble of ranking models on pairs of interacting orbitals to learn a relative productivity function over single mechanistic reactions in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanisms at the top 89.1% of the time, rising to 99.9% of the time when top ranked lists with at most four non-productive reactions are considered. The final system allows multi-step reaction prediction. Furthermore, it is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert system does not handle.
1 Introduction
Determining the major products of chemical reactions given the input reactants and conditions is a fundamental problem in organic chemistry.
Reaction prediction is a necessary component of retro-synthetic analysis or virtual library generation for drug design[1, 2] and has the potential to increase our understanding of biochemical catalysis and metabolism[3]. There is a broad range of approaches to reaction prediction falling around at least three main poles: physical simulations of transition states using various quantum mechanical and other approximations[4, 5, 6], rule-based expert systems[2, 7, 8, 9, 10, 11], and inductive machine learning methods[12]. However, none of these approaches can successfully emulate the remarkable abilities of a human chemist. (∗To whom correspondence should be addressed.)
1.1 Previous approaches and representations
The very concept of a “reaction” can be ambiguous, as it corresponds to a macroscopic abstraction, hence simplification, of a very complex underlying microscopic reality, ultimately driven by the laws of quantum and statistical mechanics. However, even for relatively small systems, it is impossible to find exact solutions to the Schrödinger equation. Thus in practice, energies are calculated with approximations of varying accuracy, ranging from ab-initio Hartree-Fock approaches or density functional theory to semi-empirical methods or mechanical force fields[6]. This leads to modeling reactions as minimum energy paths between stable atom configurations on a high-dimensional potential energy surface, where the path through the lowest energy transition state, i.e., saddle point, is the most favorable[4, 5]. By explicitly modeling energies, these approaches can be highly accurate and generalize to a diverse range of chemistries, but they require careful initialization and are computationally expensive (see [13] for a representative example).
This branch of computational chemistry provides invaluable tools for in-depth analysis but is currently not suitable for high-throughput reactivity tasks and is far from being able to recapitulate the knowledge and ability of human experts. In contrast, most rule-based expert systems for high-throughput reactivity tasks use a much more abstract representation, in the form of general transformations over molecular graphs[2, 7, 8, 9, 10]. Reactions are predicted when a match is found in a library of allowable graph transformations. These general transformations model only net molecular changes for processes that in reality involve a sequence of transition states, as shown in Figure 1. These rule-based approaches suffer from at least four drawbacks: (1) they use a representation that is too high-level, in that an overall transformation obfuscates the underlying physical reality; (2) they require the manual curation of large amounts of expert knowledge; (3) they become unmanageable at larger scales, in that adding a new graph pattern often involves having to update a large proportion of existing transformations with exceptions; and (4) they lack generality, in that particular chemistries must explicitly be encoded to be predicted.
[C;X3H0:1]=[C;X3:2].[H:3][Br:4]>>[Br:4][C:1][C:2][H:3]
Figure 1: Overall transformation of an alkene (hydrocarbon with double bond) with hydrobromic acid (HBr) and corresponding mechanistic reactions. (a) shows the overall transform as a SMIRKS[14] string pattern and as a graph representation. In a molecular graph, vertices represent atoms, with carbons at unlabeled vertices. The number of edges between two vertices represents bond order. +/− symbols represent formal charge. Standard valences are filled using implicit hydrogens. (b) shows the two mechanistic reactions composing the overall transformation as arrow-pushing diagrams[15, 16]. Dots represent non-bonded (lone pair) electrons, while arrows represent concerted electron movement.
In the first step, electrons in the electron-rich carbon-carbon double bond attack the hydrogen and break the electron-poor hydrogen-bromine single bond, producing an anionic bromide (Br−) and a carbocation (C+). In the second step, electrons from the charged, electron-rich bromide attack the electron-poor carbocation, yielding the final alkyl halide.
Somewhere between low-level QM treatment and abstract graph-based overall transformations, one can consider reactions at the mechanistic level. A mechanistic, or elementary, reaction is a concerted electron movement through a single transition state[15, 16]. These mechanistic reactions can be composed to yield overall transformations. For example, Figure 1 shows the overall transformation of an alkene interacting with hydrobromic acid to yield an alkyl halide, along with the two elementary reactions which compose the transformation. A mechanistic reaction is described as an idealized molecular orbital (MO) interaction between an electron source (donor) MO and an electron sink (acceptor) MO. MOs represent regions of the molecule with high (source) or low (sink) electron density. In general, potential electron sources are composed of lone pairs of electrons and bonds, and potential electron sinks are composed of empty atomic orbitals and bonds. Bonds can act as either a source or a sink depending on the context. Because of space constraints, we cannot fully describe subtle chemical details that must be handled, such as chaining for resonance rearrangement. For details, see texts[15, 16] on mechanisms. Note that by considering all possible pairings of source and sink MOs, this representation allows the exhaustive enumeration of all potential mechanistic reactions over an arbitrary set of molecules. Recent work by Chen and Baldi[11] introduces a rule-based expert system (Reaction Explorer) in which each rearrangement pattern encompasses an elementary reaction.
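The exhaustive pairing of source and sink MOs described above is easy to make concrete in code. The sketch below is illustrative only: the `MO` record and its fields are our own assumption, not a data structure from the paper.

```python
from itertools import product
from typing import NamedTuple

class MO(NamedTuple):
    """Idealized molecular orbital (fields are illustrative)."""
    atom: str   # main atom anchoring the orbital
    kind: str   # e.g. "lone_pair", "bond", "empty"

def enumerate_reactions(sources, sinks):
    """Every (source MO, sink MO) pairing is one candidate elementary reaction."""
    return list(product(sources, sinks))

# The paper's average molecule has 44 source and 50 sink MOs; pairing it
# with a second copy of itself yields 44 * 50 candidate reactions.
sources = [MO(f"a{i}", "bond") for i in range(44)]
sinks = [MO(f"a{i}", "empty") for i in range(50)]
print(len(enumerate_reactions(sources, sinks)))  # 2200
```

This brute-force enumeration is exactly what makes the later filtering stage necessary: the candidate set grows as the product of the source and sink counts.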
Here, the elementary reactions represent “productive” mechanistic steps, i.e. those reactions which lead to the overall major products. Thus, elementary reactions which are not the most kinetically favorable, but which eventually lead to the overall thermodynamic transformation product may be considered “productive”. This approach is a marked change from previous approaches using overall transformations, but as a rule-based system still suffers from the problems of curation, scale, and generality. While mechanistic reaction representations are approximations quite far from the Schr¨odinger equation, we expect them to be closer to the underlying reality and therefore more useful than overall transformations. Furthermore, we expect them also to be easier to predict than overall transformations due to their more elementary nature and mechanistic interpretation. In combination, these arguments suggest that working with mechanistic steps may facilitate the application of statistical machine learning approaches, and take advantage of their capability to generalize. Thus, in this work, reactions are modeled as mechanisms, and for the remainder of the paper, we consider the term “reaction” to denote a single elementary reaction. Furthermore, we consider the problem of reaction prediction to be precisely that of identifying the “productive” reactions over a given set of reactants under particular conditions. There has been very little work on machine learning approaches to reaction prediction. The sole example is a paper from 1990 on inductively extracting overall transformation patterns from reaction databases[12], a method which was never actually incorporated into a full reaction prediction system. This situation is surprising. Given improvements in both computing power and machine learning methods over the past 20 years, one could imagine a machine learning system that mines reaction information to learn the grammar of chemistry, e.g., in terms of graph grammars[17]. 
One potential reason behind the lack of progress in this area is the paucity of available data. Chemical publishing is dominated by closed models, making literature information difficult to access. Furthermore, parsing scientific text and extracting relevant chemical information from text and image data is an open problem of research[18, 19]. While commercial reaction databases exist, e.g., Reaxys[20] or SPRESI[21], the reactions in these databases are mostly unbalanced, not atom-mapped, and lack mechanistic detail[22]. This is in addition to suffering from a severe lack of openness; the databases are exorbitantly priced or provided with a restrictive query interface which precludes serious statistical data mining. As a result, and to the best of our knowledge, effective machine learning approaches to reaction prediction still need to be developed.
1.2 A new approach
The limitations of previous work motivate a new, fresh approach to reaction prediction combining machine learning with mechanistic representations. The key idea is to first enumerate all potential source and sink MOs, and thus all possible reactions by their pairing, and then use classification and ranking techniques to identify productive reactions. There are multiple benefits resulting from such an approach. By using very general rules to enumerate possible reactions, the approach is not restricted to manually curated reaction patterns. By detailing individual reactions at the mechanistic level, the system may be able to statistically learn efficient predictive models based on physicochemical attributes rather than abstract overall transformations. And by ranking possible reactions instead of making binary decisions, the system may provide results amenable to flexible interpretation.
However, the new approach also faces three key challenges: (1) the development of appropriate training datasets of productive reactions; (2) the development of a machine learning approach to control the combinatorial complexity resulting from considering all possible pairs of electron donors and acceptors among the reacting molecules; and (3) the development of machine learning solutions to the problem of predictively ranking the possible mechanisms. These challenges are addressed one-by-one in the following sections.
2 The data challenge
A mechanistically defined dataset of reactions to use with the proposed approach does not currently exist. To derive a dataset, we use a mechanistically defined rule-based expert system (Reaction Explorer) together with its validation suite[11]. The validation suite is a manually composed set of reactants, reagents, and products covering a complete undergraduate organic chemistry curriculum. Entering a set of reactants and a reagent model into Reaction Explorer yields the complete sequence of mechanistic steps leading to the final products, where all reactions in this sequence share the conditions encoded by the corresponding reagent model. Each one of these mechanistic steps is considered to be a distinct productive elementary reaction. For a given set of reactants and conditions, which we call an (r, c) query tuple, the Reaction Explorer system labels a small set of reactions productive, while all other reactions enumerated by pairing source and sink MOs over the reactants are considered non-productive. We then define two {0, 1} labels for each atom (up to symmetries) and conditions (a, c) tuple over all (r, c) queries. An (a, c) tuple has label srcreact = 1 if it is the main atom of a source MO in a productive reaction over any corresponding (r, c) query, and has label srcreact = 0 otherwise. The label sinkreact is defined similarly using sink MOs.
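The labelling rule can be read as a simple set-membership computation: an (a, c) tuple is positive as soon as atom a anchors a source MO in some productive reaction under conditions c. A minimal sketch follows; the tuple layout is a hypothetical simplification of the paper's data, and the sinkreact label would be derived the same way from the sink atoms.

```python
def derive_srcreact(productive, atom_cond_tuples):
    """srcreact[(a, c)] = 1 iff a is the main atom of a source MO in some
    productive reaction run under conditions c; 0 otherwise."""
    positives = {(src_atom, cond) for src_atom, _sink_atom, cond in productive}
    return {ac: int(ac in positives) for ac in atom_cond_tuples}

# Toy data: productive reactions given as (source atom, sink atom, conditions).
productive = [("C1", "Br1", "c1"), ("O2", "C3", "c1")]
tuples = [("C1", "c1"), ("O2", "c1"), ("Br1", "c1"), ("C1", "c2")]
labels = derive_srcreact(productive, tuples)
print(labels[("C1", "c1")], labels[("Br1", "c1")])  # 1 0
```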
Reaction conditions are described with three parameters: temperature, anion solvation potential, and cation solvation potential. Temperature is listed in Kelvin. The solvation potentials are unitless numbers between 0 and 1 representing ease of cation or anion solvation, thus providing a quantitative scale to describe polar protic, polar aprotic, and nonpolar solvents. Note that any mechanistic interaction with the solvent or reagent is explicitly modeled, e.g. as in Figure 1. As an initial validation of the method, we consider general ionic reactions from the Reaction Explorer validation suite involving C, H, N, O, Li, Mg, and the halides. Extensions to include stereoselective, pericyclic, and radical reactions are discussed in Section 5. The dataset consists of 6.14 million reactions composed of 84,825 source and 74,725 sink MOs from 2,752 distinct reactants and reaction conditions, i.e., (r, c) queries. Of these 6.14 million reactions, the Reaction Explorer system labels 2,989 of them as productive. There are 22,894 atom symmetry classes, which when paired with reaction condition yields 29,104 (a, c) tuples. Of these 29,104 (a, c) tuples, 1,262 have label srcreact = 1, and 1,786 have label sinkreact = 1. Atom and MO interaction data is available at our chemoinformatics portal (http://cdb.ics.uci.edu) under Supplements.
3 The combinatorial complexity challenge
In the dataset, the average molecule has 44 source MOs and 50 sink MOs. For this average molecule, considering only intermolecular reactions with a second copy of the same molecule gives 44 × 50 = 2200 potential elementary reactions. Thus, the number of possible reactions is very large, motivating a two-stage approach to identifying productive reactions given an (r, c) query. In the first stage, we train filters using classification techniques on the source and sink reactivity labels.
The idea is to train highly sensitive classifiers which reduce the breadth of possible reactions without erroneously filtering productive reactions. Then only those source and sink MOs where the main atom passes the respective atom level filter are considered when enumerating reactions to consider in the second ranking stage for predicting reaction productivity. Here, we train two separate classifiers to predict the source and sink atom level reactivity labels, each using the same feature descriptions and machine learning implementations. To assess the performance of the reactive site filter training, we perform full 10-fold cross-validation (CV) over all distinct tuples of molecules and conditions (m, c).
3.1 Feature representation
Each (a, c) tuple is represented as a vector of physicochemical and topological features. There are 14 real-valued physicochemical features such as the reaction conditions, the molecular weight of the molecule, and the charge at and around the atom. Topological features are meant to capture the neighboring context of a in the molecular graph, for example counts over vertex-and-edge labeled paths and trees rooted at a. We compute paths to length 4 and trees to depth 2, producing 743 molecular graph features. In addition to standard molecular graph features, we also include similar topological features over a restricted alphabet pharmacophore point graph, where pharmacophore point graph definitions are adapted from Hähnke et al.[23]. Using paths of length 4 and trees of depth 2 in the pharmacophore point graph yields another 759 features. This results in a total of 1,516 features.
3.2 Training
Before training, all features are normalized to [0, 1] using the minimum and maximum values of the training set. We oversample (a, c) tuples with label 1 to ensure approximately balanced classes. We experimented with a variety of architectures.
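The normalization step just described has one subtlety worth showing in code: the per-feature minimum and maximum must come from the training set only, so test-set values can legitimately fall outside [0, 1]. A NumPy sketch (function names are ours, not the paper's):

```python
import numpy as np

def fit_minmax(X_train):
    """Per-feature min/max taken from the TRAINING data only."""
    return X_train.min(axis=0), X_train.max(axis=0)

def apply_minmax(X, lo, hi):
    """Scale to [0, 1] using training statistics; guard constant features."""
    span = np.where(hi > lo, hi - lo, 1.0)
    return (X - lo) / span

X_train = np.array([[0.0, 10.0], [2.0, 30.0], [4.0, 20.0]])
X_test = np.array([[1.0, 40.0]])
lo, hi = fit_minmax(X_train)
print(apply_minmax(X_train, lo, hi).max())  # 1.0
print(apply_minmax(X_test, lo, hi))         # second feature exceeds 1
```

Fitting the scaler on the pooled train-plus-test data instead would leak test information into training.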
Here we report the results obtained using artificial neural networks with sigmoidal activation functions, a single hidden layer, and a single output node with a cross-entropy error function. Grid search using internal three-fold CV on a single training set is used to fit the architecture size (converging to 10 hidden nodes) and the L2-regularization (weight decay) parameter shared by all folds of the overall 10-fold CV. Weights are optimized by stochastic gradient descent with per-weight adaptive learning rates[24]. Optimization is stopped after 100 epochs as this is observed to be sufficient for convergence. As highly sensitive classifiers are desired, the choice of a decision threshold is important. We perform internal three-fold CV on the training set to find decision thresholds yielding a false negative rate of 0 on each respective internal test set. The decision threshold for the overall CV fold is taken as the average of these internal CV fold thresholds.
3.3 Results
We report the true negative rate (TNR) and the false negative rate (FNR) for both the source and sink classification problems as well as for the actual reaction filtering problem, as shown in Table 1. In a CV regime, we are able to filter 94.0% of the 6.14 million non-productive reactions with less than 0.1% false negatives, effectively reducing the ranking problem imbalance by an order of magnitude with minimal error. Having established excellent filtering results with rigorous CV, we then train classifiers with all available data in order to independently assess the ranking method. The results of these classifiers are shown in the last column of Table 1.
Table 1: Reactive site classification results. Source reactive and sink reactive rows show results on the respective classification problems. The reaction row shows results of using the two atom classifiers for an initial reaction filtering. CV columns indicate results of full 10-fold cross-validation over (m, c) tuples.
CV results show the mean and standard deviation over folds. The best TNR column shows results when trained with all available data.

Problem           CV TNR % (SD)   CV FNR % (SD)    Best TNR %
Source Reactive   87.7 (2.0)      0.1 (0.2)        92.1
Sink Reactive     75.6 (5.8)      0.2 (0.4)        85.6
Reaction          94.0 (1.5)      < 0.1 (< 0.1)    97.2

4 The ranking challenge
We pose the task of identifying the productive reactions as a ranking problem. To assess performance, we perform full 10-fold CV over the 2,752 distinct (r, c) queries. With the overall filtered set of reactions, there are, on average, 1.1 productive and 62.5 non-productive reactions per (r, c) query.
4.1 Feature representation
Each reaction is composed of a source and sink MO. The reaction feature vector is the concatenation of the corresponding source and sink atom level feature vectors with some modifications. To keep the size reasonable, only real-valued and pharmacophore (path length 3 and tree depth 2) atom level features are included. 124 features are calculated to describe the net difference between reactants and products, such as counts over bond types, rings, and formal charges. And finally, 450 features describing the forward and inverse reactions are calculated, including atoms and bonds involved and implied transition state geometry. This leads to a total of 1,677 reaction features.
4.2 Training
We use a pairwise approach to ranking similar to [25], using two identical shared-weight artificial neural networks linked to a single comparator output node with fixed ±1 weights. The general architecture is shown in Figure 2. Each shared network receives as an input a potential reaction, i.e. a source-sink pair. Training is performed via back-propagation with weight-sharing.
Figure 2: Shared-weight artificial neural network architecture for pairwise ranking (diagram of the two (source, sink) input pairs, A and B, not reproduced). The goal is to determine a productivity order between the (source, sink) A and (source, sink) B pairs.
This is done with a pair of shared-weight artificial neural networks with sigmoidal hidden nodes and a linear output node. The output of these internal networks are tied to a single sigmoidal output node with fixed weights. The final output will approach 1 if the (source, sink) A pair is predicted to be relatively more productive than the (source, sink) B pair, and 0 otherwise.
Training details are similar to the reactive site classification. All features are normalized to [0, 1] and grid search with internal three-fold CV on a single training set is used to fit the architecture size (converging to 20 hidden nodes) and L2-regularization (weight decay) parameter shared by all folds of the overall 10-fold CV. Weights are optimized using stochastic gradient descent with the same per-weight adaptive learning rate scheme[24]. Optimization is stopped after 25 epochs as this is observed to be sufficient for convergence. An ensemble consisting of five separate pairwise ranking machines (as described in Figure 2) is used for each training set. Each machine in the ensemble is trained with all the productive reactions (from the training set) and a random partition of the non-productive reactions (from the training set). Final ranking on the test set is determined by either simple majority vote or by ranking the average scores from the linear output node of the inner shared-weight network for each machine in the ensemble. The latter yields a minute performance increase and is reported.
4.3 Results
We consider two measures for evaluating rankings, Normalized Discounted Cumulative Gain at list size i (NDCG@i) and Percent Within-n. NDCG@i is a common information retrieval metric[26] that sums the overall usefulness (or gain) of productive reactions in a given list of the top-i results, where individual gain decays exponentially with lower position. The measure is normalized such that the best possible ranking of a size i list has NDCG@i = 1.
For example, NDCG@1 is the fraction of (r, c) queries in which the top ranked reaction is a productive reaction. Percent Within-n is simply how many (r, c) queries have at most n non-productive reactions in the smallest ranked list containing all productive reactions. For example, Percent Within-0 measures the percent of (r, c) queries with perfect rank, and Percent Within-4 measures how often all productive reactions are recovered with at most 4 mis-ranked non-productive reactions. Note that the NDCG@1 and Percent Within-0 will differ because roughly 10% of (r, c) queries have more than one productive reaction. The non-productive MO interactions vastly outnumber the productive interactions. In spite of this imbalance, our approach gives excellent ranking results, shown in Table 2. The NDCG results show, for example, that in 89.5% of the queries, the top ranked reaction is productive. The Percent Within-n results show that 89.1% of queries have perfect ranking, while 99.9% of queries recover all productive reactions by considering lists with at most four non-productive reactions.
Table 2: Reaction ranking results. We show Normalized Discounted Cumulative Gain at different list sizes i (NDCG@i) and Percent Within-n. See text for description of the measures. We report mean (standard deviation) results over CV folds.

i   Mean NDCG@i (SD)   n   Percent Within-n (SD)
1   0.895 (0.016)      0   89.1 (1.7)
2   0.939 (0.011)      1   96.8 (1.0)
3   0.952 (0.008)      2   98.9 (0.6)
4   0.954 (0.007)      3   99.5 (0.4)
5   0.956 (0.007)      4   99.9 (0.3)

4.4 Chemical applications
The strong performance of the ranking system is exhibited by its ability to make accurate multi-step reaction predictions. An example, shown in the first row of Table 3, is an intramolecular Claisen condensation reaction with conditions (room temperature, polar aprotic solvent) requiring three elementary steps. The ranking method correctly predicts the given reaction as the highest ranked reaction at each step.
Table 3: Chemical reactions of interest.
The first row shows an example of full multi-step reaction prediction by the ranking system, a three step intramolecular Claisen condensation (room temp., polar aprotic). At each stage, the reaction shown is the top ranked when all possible reactions are considered by the two stage machine learning system. The second row shows two macrocyclizations which the rule-based system (Reaction Explorer) is unable to predict, but the machine learning approach effectively generalizes and ranks correctly. These reactions lead to the formation of a seven-membered homo-cycle (7 carbons) on the left and a seven-membered hetero-cycle (6 carbons, 1 oxygen) on the right. The third row shows an intelligible error of the machine learning approach (see text).
(Table 3 row labels: Multi-Step Reaction Prediction; Generality; Reasonable Errors. Reaction diagrams not reproduced.)
A generalizable system should be able to make reasonable predictions about reactants and reaction types with which it has only had implicit, rather than explicit, experience. Reaction Explorer, as a rule-based expert system without explicit rules about larger ring forming reactions, does not make any predictions about seven and eight atom cyclizations. In reality though, larger ring forming reactions are possible. The second row of Table 3 shows the top two ranked reactions over a set of bromo-hept-1-en-2-olate reactants, leading to seven-member ring formation. The ranking model, without ever being trained with seven or eight-member ring forming reactions, returns the enolate attack as the most favorable, but also returns the lone pair nucleophilic substitution as the second most favorable. Similar results are obtained for similar eight-membered ring systems (not shown). Thus the ranking model is able to generalize and make reasonable suggestions, while the rule-based system is limited by hard-coded transformation patterns. Finally, the vast majority of errors are close errors, as exhibited by the 99.9% Within-4 measure.
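The two evaluation measures can be stated compactly in code. The sketch below assumes binary relevance (productive = 1) and uses a standard logarithmic DCG discount; the exact gain function of [26] is not restated in the text, so this is an illustrative reading, not the authors' evaluation script.

```python
import math

def ndcg_at(relevance, i):
    """NDCG@i for a ranked 0/1 relevance list (1 = productive).
    Uses the common logarithmic position discount (an assumption here)."""
    def dcg(rels):
        return sum(r / math.log2(pos + 2) for pos, r in enumerate(rels))
    denom = dcg(sorted(relevance, reverse=True)[:i])
    return dcg(relevance[:i]) / denom if denom > 0 else 0.0

def within_n(relevance, n):
    """True iff the smallest prefix of the ranking containing every
    productive reaction holds at most n non-productive ones.
    Assumes at least one productive reaction is present."""
    last = max(pos for pos, r in enumerate(relevance) if r == 1)
    return (last + 1) - sum(relevance) <= n

ranked = [1, 0, 1, 0, 0]  # reaction labels, best-ranked first
print(ndcg_at(ranked, 1))   # 1.0: the top-ranked reaction is productive
print(within_n(ranked, 0))  # False: one non-productive reaction intrudes
print(within_n(ranked, 1))  # True
```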
Furthermore, upon examination of these errors, they are largely intelligible and not unreasonable predictions. For example, the third row of Table 3 shows two reactions involving an oxonium compound and a bromide anion. Our ranking models return these two reactions as the highest, ranking the deprotonation slightly ahead of the substitution. This is considered a Within-1 ranking because the Reaction Explorer system labels only the substitution reaction as productive. However, the immediate precursor reaction in the sequence of Reaction Explorer mechanisms leading to these reactants is the inverse of the deprotonation reaction, i.e., the protonation of the alcohol. Hydrogen transfer reactions like this are reversible, and thus the deprotonation is likely the kinetically favored mechanism, i.e., it is reasonable to rank the deprotonation highly. It is just not productive, in that it does not lead to the final overall product. In a prediction system working with multi-step syntheses, such reversals of previous steps are easily discarded.
5 Conclusion
Being able to predict the outcome of chemical reactions is a fundamental scientific problem. The ultimate goal of a reaction prediction system is to recapitulate and eventually surpass the ability of human chemists. In this work, we take a significant step in this direction, showing how to formulate reaction prediction as a machine learning problem and building an accurate implementation for a large and key subset of organic chemistry. There are a number of immediate applications of our system, including validating retro-synthetic suggestions, generating virtual libraries of molecules, and mechanistically annotating existing reaction databases. Reaction prediction is a largely untapped area for machine learning approaches. As such, there is of course room for improvements. The first is increasing the breadth of chemistry captured, e.g. radical, pericyclic, and stereoselective chemistry.
Augmenting the MO description with number of electrons, allowing cyclic chained MO interactions, and including face orientations are plausible extensions to attack each of these additional areas of chemical reactivity. A second area of improvement is the curation of larger mechanistically defined datasets. We can approach this manually, by further use of expert systems to construct data with the required level of detail, or by carefully crafted crowdsourcing approaches. Other ongoing areas of research include improving the features, performing systematic feature selection, and experimenting with different statistical ranking techniques. As an untapped research problem for the machine learning community, we hope that the current work and our publicly available data will spark continued and open research in this important area.
Acknowledgments
Work supported by NIH grants LM010235-01A1 and 5T15LM007743 and NSF grant MRI EIA0321390 to PB. We acknowledge OpenEye Scientific Software and ChemAxon for academic software licenses. We wish to thank Profs. James Nowick, David Van Vranken, and Gregory Weiss for useful discussions.
References
[1] E.J. Corey and W.T. Wipke. Computer-assisted design of complex organic syntheses. Science, 166(3902):178–92, 1969.
[2] M.H. Todd. Computer-aided organic synthesis. Chem. Soc. Rev., 34(3):247–266, 2005.
[3] P. Rydberg, D.E. Gloriam, J. Zaretzki, C. Breneman, and L. Olsen. SMARTCyp: A 2D method for prediction of cytochrome P450-mediated drug metabolism. ACS Med. Chem. Lett., 1(3):96–100, 2010.
[4] G. Henkelman, B.P. Uberuaga, and H. Jónsson. A climbing image nudged elastic band method for finding saddle points and minimum energy paths. J. Chem. Phys., 113(22):9901–9904, 2000.
[5] B. Peters, A. Heyden, A.T. Bell, and A. Chakraborty. A growing string method for determining transition states: comparison to the nudged elastic band and string methods. J. Chem. Phys., 120(17):7877–7886, 2004.
[6] C.J. Cramer.
Essentials of Computational Chemistry: Theories and Models. Wiley, West Sussex, England, 2 edition, 2004.
[7] W.L. Jorgensen, E.R. Laird, A.J. Gushurst, J.M. Fleischer, S.A. Gothe, H.E. Helson, G.D. Paderes, and S. Sinclair. CAMEO: a program for the logical prediction of the products of organic reactions. Pure Appl. Chem., 62:1921–1932, 1990.
[8] R. Hollering, J. Gasteiger, L. Steinhauer, K.-P. Schulz, and A. Herwig. Simulation of organic reactions: from the degradation of chemicals to combinatorial synthesis. J. Chem. Inf. Model., 40(2):482–494, 2000.
[9] G. Benkö, C. Flamm, and P.F. Stadler. A graph-based toy model of chemistry. J. Chem. Inf. Model., 43(4):1085–1093, 2003.
[10] I.M. Socorro, K. Taylor, and J.M. Goodman. ROBIA: a reaction prediction program. Org. Lett., 7(16):3541–3544, 2005.
[11] J. Chen and P. Baldi. No electron left behind: a rule-based expert system to predict chemical reactions and reaction mechanisms. J. Chem. Inf. Model., 49(9):2034–2043, 2009.
[12] P. Röse and J. Gasteiger. Automated derivation of reaction rules for the EROS 6.0 system for reaction prediction. Anal. Chim. Acta, 235:163–168, 1990.
[13] B. Wang and Z. Cao. Mechanism of acid-catalyzed hydrolysis of formamide from cluster-continuum model calculations: concerted versus stepwise pathway. J. Phys. Chem. A, 114(49):12918–12927, 2010.
[14] C.A. James, D. Weininger, and J. Delany. Daylight theory manual. http://www.daylight.com/dayhtml/doc/theory/index.html, 2008. Last accessed January 2011.
[15] C.K. Ingold. Structure and Mechanism in Organic Chemistry. Cornell University Press, Ithaca, NY, 1953.
[16] R. Grossman. The Art of Writing Reasonable Organic Reaction Mechanisms. Springer-Verlag, New York, NY, 2 edition, 2003.
[17] G. Rozenberg, editor. Handbook of Graph Grammars and Computing by Graph Transformation: Volume I. Foundations. World Scientific Publishing, River Edge, NJ, 1997.
[18] D.L. Banville. Mining chemical structural information from the drug literature.
Drug Discovery Today, 11:35–42, 2006. [19] J. Park, G.R. Rosania, and K. Saitou. Tunable machine vision-based strategy for automated annotation of chemical databases. J. Chem. Inf. Model., 49(8):1993–2001, 2009. [20] D.D. Ridley. Searching for chemical reaction information. In S.R. Heller, editor, The Beilstein Online Database, volume 436 of ACS Symposium Series, pages 88–112. American Chemical Society, Washington, DC, 1990. [21] D.L. Roth. SPRESIweb 2.1, a selective chemical synthesis and reaction database. J. Chem. Inf. Model., 45(5):1470–1473, 2005. [22] J. Gasteiger and T. Engel, editors. Chemoinformatics: A Textbook. Wiley-VCH, Weinheim, Germany, 2003. [23] V. H¨ahnke, B. Hofmann, T. Grgat, E. Proschak, D. Steinhilber, and G. Schneider. PhAST: pharmacophore alignment search tool. J. Comput. Chem., 30(5):761–71, 2009. [24] R. Neuneier and H.-G. Zimmermann. How to train neural networks. In G.B. Orr and K.-R. M¨uller, editors, Neural Networks: Tricks of the Trade, pages 373–423. Springer-Verlag, Heidelberg, Germany, 1998. [25] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning (ICML05), pages 89–96. ACM Press, Bonn, Germany, 2005. [26] K. J¨arvelin and J. Kek¨al¨ainen. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst., 20(4):422–446, 2002. 9
|
2011
|
205
|
4,264
|
Sparse Features for PCA-Like Linear Regression Christos Boutsidis Mathematical Sciences Department IBM T. J. Watson Research Center Yorktown Heights, New York cboutsi@us.ibm.com Petros Drineas Computer Science Department Rensselaer Polytechnic Institute Troy, NY 12180 drinep@cs.rpi.edu Malik Magdon-Ismail Computer Science Department Rensselaer Polytechnic Institute Troy, NY 12180 magdon@cs.rpi.edu
Abstract
Principal Components Analysis (PCA) is often used as a feature extraction procedure. Given a matrix X ∈ R^{n×d}, whose rows represent n data points with respect to d features, the top k right singular vectors of X (the so-called eigenfeatures) are arbitrary linear combinations of all available features. The eigenfeatures are very useful in data analysis, including the regularization of linear regression. Enforcing sparsity on the eigenfeatures, i.e., forcing them to be linear combinations of only a small number of actual features (as opposed to all available features), can promote better generalization error and improve the interpretability of the eigenfeatures. We present deterministic and randomized algorithms that construct such sparse eigenfeatures while provably achieving in-sample performance comparable to regularized linear regression. Our algorithms are relatively simple and practically efficient, and we demonstrate their performance on several data sets.
1 Introduction
Least-squares analysis was introduced by Gauss in 1795 and has since bloomed into a staple of the data analyst. Assume the usual setting with n tuples (x_1, y_1), . . . , (x_n, y_n) in R^d, where the x_i are points and the y_i are targets. The vector of regression weights w^* ∈ R^d minimizes (over all w ∈ R^d) the RMS in-sample error $E(w) = \sqrt{\sum_{i=1}^{n} (x_i \cdot w - y_i)^2} = \|Xw - y\|_2.$ In the above, X ∈ R^{n×d} is the data matrix whose rows are the vectors x_i (i.e., $X_{ij} = x_i[j]$), and y ∈ R^n is the target vector (i.e., y[i] = y_i).
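As a concrete illustration of this setup (random data, not from the paper), the minimizing weights and the in-sample error E(w) can be computed with numpy; the Moore-Penrose pseudo-inverse gives the same minimal-norm solution, a standard fact the paper uses next:

```python
import numpy as np

# Generic illustration with random data (not from the paper).
rng = np.random.default_rng(0)
n, d = 50, 8
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Least-squares weights minimizing E(w) = ||Xw - y||_2.
w_star = np.linalg.lstsq(X, y, rcond=None)[0]
E_star = np.linalg.norm(X @ w_star - y)

# The Moore-Penrose pseudo-inverse yields the same (minimal-norm) solution.
w_pinv = np.linalg.pinv(X) @ y

# Any other weight vector does at least as badly in-sample.
w_other = w_star + rng.standard_normal(d)
```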
We will use the more convenient matrix formulation: given X and y, we seek a vector w^* that minimizes $\|Xw - y\|_2$. (For the sake of simplicity, we assume d ≤ n and rank(X) = d in our exposition; neither assumption is necessary.) The minimal-norm vector w^* can be computed via the Moore-Penrose pseudo-inverse of X: $w^* = X^+y$. Then, the optimal in-sample error is equal to $E(w^*) = \|y - XX^+y\|_2$. When the data is noisy and X is ill-conditioned, X^+ becomes unstable to small perturbations and overfitting can become a serious problem. Practitioners deal with such situations by regularizing the regression. Popular regularization methods include, for example, the Lasso [28], Tikhonov regularization [17], and top-k PCA regression or truncated SVD regularization [21]. In general, such methods encourage some form of parsimony, thereby reducing the number of effective degrees of freedom available to fit the data. Our focus is on top-k PCA regression, which can be viewed as regression onto the top-k principal components or, equivalently, the top-k eigenfeatures. The eigenfeatures are the top-k right singular vectors of X and are arbitrary linear combinations of all available input features. The question we tackle is whether one can efficiently extract sparse eigenfeatures (i.e., eigenfeatures that are linear combinations of only a small number of the available features) that have nearly the same performance as the top-k eigenfeatures. Basic notation. A, B, . . . are matrices; a, b, . . . are vectors; i, j, . . . are integers; I_n is the n × n identity matrix; 0_{m×n} is the m × n matrix of zeros; e_i is the standard basis vector (whose dimensionality will be clear from the context). For vectors, we use the Euclidean norm $\|\cdot\|_2$; for matrices, the Frobenius and the spectral norms: $\|X\|_F^2 = \sum_{i,j} X_{ij}^2$ and $\|X\|_2 = \sigma_1(X)$, i.e., the largest singular value of X. Top-k PCA Regression. Let $X = U\Sigma V^T$ be the singular value decomposition of X, where U (resp.
right) singular vectors of X with singular values in the diagonal matrix Σ. For k ≤ d, let U_k, Σ_k, and V_k contain only the top-k singular vectors and associated singular values. The best rank-k reconstruction of X in the Frobenius norm can be obtained from this truncated singular value decomposition as $X_k = U_k\Sigma_kV_k^T$. The k right singular vectors in V_k are called the top-k eigenfeatures. The projections of the data points onto the top-k eigenfeatures are obtained by projecting the x_i's onto the columns of V_k to obtain $F_k = XV_k = U\Sigma V^TV_k = U_k\Sigma_k$. Now, each data point (row) in F_k only has k dimensions. Each column of F_k contains a particular eigenfeature's value for every data point and is a linear combination of the columns of X. Top-k PCA regression uses F_k as the data matrix and y as the target vector to produce regression weights $w_k^* = F_k^+y$. The in-sample error of this k-dimensional regression is equal to $\|y - F_kw_k^*\|_2 = \|y - F_kF_k^+y\|_2 = \|y - U_k\Sigma_k\Sigma_k^{-1}U_k^Ty\|_2 = \|y - U_kU_k^Ty\|_2.$ The weights w_k^* are k-dimensional and cannot be applied to X, but the equivalent weights $V_kw_k^*$ can be applied to X and have the same in-sample error with respect to X: $E(V_kw_k^*) = \|y - XV_kw_k^*\|_2 = \|y - F_kw_k^*\|_2 = \|y - U_kU_k^Ty\|_2.$ Hence, we will refer to both w_k^* and V_kw_k^* as the top-k PCA regression weights and, for simplicity, overload w_k^* to refer to both weight vectors (the dimension will make it clear which one we mean). In practice, k is chosen to measure the "effective dimension" of the data, and, typically, k ≪ rank(X) = d. One way to choose k is so that $\|X - X_k\|_F \ll \sigma_k(X)$ (the "energy" in the k-th principal component is large compared to the energy in all smaller principal components). We do not argue the merits of top-k PCA regression; we just note that it is a common tool for regularizing regression. Problem Formulation.
Given X ∈ R^{n×d}, k (the number of target eigenfeatures for top-k PCA regression), and r > k (the sparsity parameter), we seek to extract a set of at most k sparse eigenfeatures V̂_k which use at most r of the actual dimensions. Let $\hat{F}_k = X\hat{V}_k \in R^{n\times k}$ denote the matrix whose columns are the k extracted sparse eigenfeatures, which are a linear combination of a set of at most r actual features. Our goal is to obtain sparse features for which the vector of sparse regression weights $\hat{w}_k = \hat{F}_k^+y$ results in an in-sample error $\|y - \hat{F}_k\hat{F}_k^+y\|_2$ that is close to the top-k PCA regression error $\|y - F_kF_k^+y\|_2$. Just as with top-k PCA regression, we can define the equivalent d-dimensional weights $\hat{V}_k\hat{w}_k$; we will overload ŵ_k to refer to these weights as well. Finally, we conclude by noting that while our discussion above has focused on simple linear regression, the problem can also be defined for multiple regression, where the vector y is replaced by a matrix Y ∈ R^{n×ω}, with ω ≥ 1. The weight vector w becomes a weight matrix, W, where each column of W contains the weights from the regression of the corresponding column of Y onto the features. All our results hold in this general setting as well, and we will actually present our main contributions in the context of multiple regression.
2 Our contributions
Recall from our discussion at the end of the introduction that we will present all our results in the general setting, where the target vector y is replaced by a matrix Y ∈ R^{n×ω}. Our first theorem argues that there exists a polynomial-time deterministic algorithm that constructs a feature matrix $\hat{F}_k \in R^{n\times k}$, such that each feature (column of F̂_k) is a linear combination of at most r actual features (columns) from X and results in small in-sample error. Again, this should be contrasted with top-k PCA regression, which constructs a feature matrix F_k such that each feature (column of F_k) is a linear combination of all features (columns) in X.
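For reference, the dense top-k PCA regression baseline can be sketched in numpy; the in-sample-error identity $\|y - U_kU_k^Ty\|_2$ derived above is checked at the end (random data; the sizes n, d, k are arbitrary illustrative choices):

```python
import numpy as np

# Random data; n, d, k are arbitrary illustrative choices.
rng = np.random.default_rng(1)
n, d, k = 60, 10, 5
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Thin SVD X = U diag(s) V^T, singular values in descending order.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uk, Vk = U[:, :k], Vt[:k, :].T

Fk = X @ Vk                      # dense top-k eigenfeatures, F_k = X V_k = U_k Sigma_k
wk = np.linalg.pinv(Fk) @ y      # k-dimensional top-k PCA regression weights w_k^*
wk_d = Vk @ wk                   # equivalent d-dimensional weights V_k w_k^*

err = np.linalg.norm(y - X @ wk_d)
```

Note that each column of Fk mixes all d input features, which is exactly the density this paper seeks to avoid.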
Our theorems argue that the in-sample error of our features is almost as good as the in-sample error of top-k PCA regression, which uses dense features.
Theorem 1 (Deterministic Feature Extraction). Let X ∈ R^{n×d} and Y ∈ R^{n×ω} be the input matrices in a multiple regression problem. Let k > 0 be a target rank for top-k PCA regression on X and Y. For any r > k, there exists an algorithm that constructs a feature matrix $\hat{F}_k = X\hat{V}_k \in R^{n\times k}$, such that every column of F̂_k is a linear combination of (the same) at most r columns of X, and $\|Y - X\hat{W}_k\|_F = \|Y - \hat{F}_k\hat{F}_k^+Y\|_F \le \|Y - XW_k^*\|_F + \left(1 + \sqrt{\frac{9k}{r}}\right)\frac{\|X - X_k\|_F}{\sigma_k(X)}\|Y\|_2,$ where σ_k(X) is the k-th singular value of X. The running time of the proposed algorithm is $T(V_k) + O(ndk + nrk^2)$, where T(V_k) is the time required to compute the matrix V_k, the top-k right singular vectors of X.
Theorem 1 says that one can construct k features with sparsity O(k) and obtain a comparable regression error to that attained by the dense top-k PCA features, up to an additive term that is proportional to $\Delta_k = \|X - X_k\|_F/\sigma_k(X)$. To construct the features satisfying the guarantees of the above theorem, we first employ the Algorithm DSF-Select (see Table 1 and Section 4.3) to select r columns of X and form the matrix C ∈ R^{n×r}. Now, let Π_{C,k}(Y) denote the best rank-k approximation (with respect to the Frobenius norm) to Y in the column span of C. In other words, Π_{C,k}(Y) is the matrix of rank at most k that minimizes $\|Y - Z\|_F$ over all rank-k matrices Z in the column span of C. Efficient algorithms for computing such projections are known and have been described in [2]. Given Π_{C,k}(Y), the sparse eigenfeatures can be computed efficiently as follows: first, set $\Psi = C^+\Pi_{C,k}(Y)$. Observe that $C\Psi = CC^+\Pi_{C,k}(Y) = \Pi_{C,k}(Y)$. The last equality follows because CC^+ projects onto the column span of C and Π_{C,k}(Y) is already in the column span of C. Ψ has rank at most k because Π_{C,k}(Y) has rank at most k. Let the SVD of Ψ be $\Psi = U_\psi\Sigma_\psi V_\psi^T$ and set $\hat{F}_k = CU_\psi\Sigma_\psi \in R^{n\times k}$. Clearly, each column of F̂_k is a linear combination of (the same) at most r columns of X (the columns in C). The sparse features themselves can also be obtained, because $\hat{F}_k = X\hat{V}_k$, so $\hat{V}_k = X^+\hat{F}_k$. To prove that F̂_k is a good set of sparse features, we first relate the regression error from using F̂_k to how well Π_{C,k}(Y) approximates Y: $\|Y - \Pi_{C,k}(Y)\|_F = \|Y - C\Psi\|_F = \|Y - CU_\psi\Sigma_\psi V_\psi^T\|_F = \|Y - \hat{F}_kV_\psi^T\|_F \ge \|Y - \hat{F}_k\hat{F}_k^+Y\|_F.$ The last inequality follows because $\hat{F}_k^+Y$ are the optimal regression weights for the features F̂_k.
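The construction just described can be sketched in numpy. Random data is used, a uniformly random column subset stands in for DSF-Select's output, and Π_{C,k}(Y) is computed via an orthonormal basis Q of span(C), using the standard fact that the best rank-k Frobenius-norm fit to Y inside span(C) is Q(QᵀY)_k:

```python
import numpy as np

def best_rank_k(M, k):
    # Best rank-k approximation of M in the Frobenius norm (truncated SVD).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def pi_C_k(C, Y, k):
    # Best rank-k approximation of Y within the column span of C.
    Q, _ = np.linalg.qr(C)
    return Q @ best_rank_k(Q.T @ Y, k)

rng = np.random.default_rng(2)
n, d, omega, k, r = 40, 12, 4, 3, 6
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, omega))
C = X[:, rng.choice(d, size=r, replace=False)]   # stand-in for DSF-Select

Pi = pi_C_k(C, Y, k)
Psi = np.linalg.pinv(C) @ Pi                     # C @ Psi == Pi, rank(Psi) <= k
U_psi, s_psi, _ = np.linalg.svd(Psi, full_matrices=False)
Fk_hat = C @ (U_psi[:, :k] * s_psi[:k])          # sparse eigenfeatures C U_psi Sigma_psi

err_feat = np.linalg.norm(Y - Fk_hat @ (np.linalg.pinv(Fk_hat) @ Y))
err_pi = np.linalg.norm(Y - Pi)
```

Numerically, the two errors coincide, matching the chain of (in)equalities given in the text.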
The reverse inequality also holds because Π_{C,k}(Y) is the best rank-k approximation to Y in the column span of C. Thus, $\|Y - \hat{F}_k\hat{F}_k^+Y\|_F = \|Y - \Pi_{C,k}(Y)\|_F$. The upshot of the above discussion is that if we can find a matrix C consisting of columns of X for which $\|Y - \Pi_{C,k}(Y)\|_F$ is small, then we immediately have good sparse eigenfeatures. Indeed, all that remains to complete the proof of Theorem 1 is to bound $\|Y - \Pi_{C,k}(Y)\|_F$ for the columns C returned by the Algorithm DSF-Select. Our second result employs the Algorithm RSF-Select (see Table 2 and Section 4.4) to select r columns of X and again form the matrix C ∈ R^{n×r}. One then proceeds to construct Π_{C,k}(Y) and F̂_k as described above. The advantage of this approach is simplicity, better efficiency, and a slightly better error bound, at the expense of logarithmically worse sparsity.
Theorem 2 (Randomized Feature Extraction). Let X ∈ R^{n×d} and Y ∈ R^{n×ω} be the input matrices in a multiple regression problem. Let k > 0 be a target rank for top-k PCA regression on X and Y. For any r > 144k ln(20k), there exists a randomized algorithm that constructs a feature matrix $\hat{F}_k = X\hat{V}_k \in R^{n\times k}$, such that every column of F̂_k is a linear combination of at most r columns of X, and, with probability at least 0.7 (over the random choices made in the algorithm), $\|Y - X\hat{W}_k\|_F = \|Y - \hat{F}_k\hat{F}_k^+Y\|_F \le \|Y - XW_k^*\|_F + \sqrt{\frac{36k\ln(20k)}{r}}\,\frac{\|X - X_k\|_F}{\sigma_k(X)}\|Y\|_2.$ The running time of the proposed algorithm is $T(V_k) + O(dk + r\log r)$.
3 Connections with prior work
A variant of our problem is the identification of a matrix C consisting of a small number (say r) of columns of X such that the regression of Y onto C (as opposed to k features derived from C) gives small in-sample error. This is the sparse approximation problem, where the number of non-zero weights in the regression vector is restricted to r. This problem is known to be NP-hard [25]. Sparse approximation has important applications, and many approximation algorithms have been presented [29, 9, 30]; proposed algorithms are typically either greedy or based on convex optimization relaxations of the objective. An important difference between sparse approximation and sparse PCA regression is that our goal is not to minimize the error under a sparsity constraint, but to match the top-k PCA regularized regression under a sparsity constraint. We argue that it is possible to achieve a provably accurate sparse PCA regression, i.e., use sparse features instead of dense ones. If X = Y (approximating X using the columns of X), then this is the column-based matrix reconstruction problem, which has received much attention in the existing literature [16, 18, 14, 26, 5, 12, 20]. In this paper, we study the more general problem where X ≠ Y, which turns out to be considerably more difficult. Input sparseness is closely related to feature selection and automatic relevance determination. Research in this area is vast, and we refer the reader to [19] for a high-level view of the field. Again, the goal in this area is different from ours: those methods seek to reduce dimensionality and improve out-of-sample error. Our goal is to provide sparse PCA features that are almost as good as the exact principal components.
While it is definitely the case that many methods outperform top-k PCA regression, especially for d ≫ n, this discussion is orthogonal to our work. The closest result to ours in prior literature is the so-called rank-revealing QR (RRQR) factorization [8]. The authors use a QR-like decomposition to select exactly k columns of X and compare their sparse solution vector ŵ_k with the top-k PCA regularized solution w_k^*. They show that $\|w_k^* - \hat{w}_k\|_2 \le \sqrt{k(n-k)+1}\,\frac{\|X - X_k\|_2}{\sigma_k(X)}\,\Delta$, where $\Delta = 2\|\hat{w}_k\|_2 + \|y - Xw_k^*\|_2/\sigma_k(X)$. This bound is similar to our bound in Theorem 1, but only applies to r = k and is considerably weaker. For example, $\sqrt{k(n-k)+1}\,\|X - X_k\|_2 \ge \sqrt{k}\,\|X - X_k\|_F$; note also that the dependence of the above bound on 1/σ_k(X) is generally worse than ours. The importance of the right singular vectors in matrix reconstruction problems (including PCA) has been heavily studied in prior literature, going back to work by Jolliffe in 1972 [22]. The idea of sampling columns from a matrix X with probabilities that are derived from V_k^T (as we do in Theorem 2) was introduced in [15] in order to construct coresets for regression problems by sampling data points (rows of the matrix X) as opposed to features (columns of the matrix X). Other prior work including [15, 13, 27, 6, 4] has employed variants of this sampling scheme; indeed, we borrow proof techniques from the above papers in our work. Finally, we note that our deterministic feature selection algorithm (Theorem 1) uses a sparsification tool developed in [2] for column-based matrix reconstruction. This tool is a generalization of algorithms originally introduced in [1].
4 Our algorithms
Our algorithms emerge from the constructive proofs of Theorems 1 and 2. Both algorithms necessitate access to the right singular vectors of X, namely the matrix V_k ∈ R^{d×k}. In our experiments, we used PROPACK [23] in order to compute V_k iteratively; PROPACK is a fast alternative to the exact SVD.
Our first algorithm (DSF-Select) is deterministic, while the second algorithm (RSF-Select) is randomized, requiring logarithmically more columns to guarantee the theoretical bounds. Prior to describing our algorithms in detail, we will introduce useful notation on sampling and rescaling matrices as well as a matrix factorization lemma (Lemma 3) that will be critical in our proofs.
4.1 Sampling and rescaling matrices
Let C ∈ R^{n×r} contain r columns of X ∈ R^{n×d}. We can express the matrix C as C = XΩ, where the sampling matrix Ω ∈ R^{d×r} is equal to $[e_{i_1}, \ldots, e_{i_r}]$ and the e_i are standard basis vectors in R^d. In our proofs, we will make use of S ∈ R^{r×r}, a diagonal rescaling matrix with positive entries on the diagonal. Our column selection algorithms return a sampling and a rescaling matrix, so that XΩS contains a subset of rescaled columns from X. The rescaling is benign, since it does not affect the span of the columns of C = XΩ and thus the quantity of interest, namely Π_{C,k}(Y).
4.2 A structural result using matrix factorizations
We now present a matrix reconstruction lemma that will be the starting point for our algorithms. Let Y ∈ R^{n×ω} be a target matrix and let X ∈ R^{n×d} be the basis matrix that we will use in order to reconstruct Y. More specifically, we seek a sparse reconstruction of Y from X; in other words, we would like to choose r ≪ d columns from X and form a matrix C ∈ R^{n×r} such that $\|Y - \Pi_{C,k}(Y)\|_F$ is small. Let Z ∈ R^{d×k} be an orthogonal matrix (i.e., $Z^TZ = I_k$), and express the matrix X as follows: $X = HZ^T + E$, where H is some matrix in R^{n×k} and E ∈ R^{n×d} is the residual error of the factorization. It is easy to prove that the Frobenius or spectral norm of E is minimized when H = XZ. Let Ω ∈ R^{d×r} and S ∈ R^{r×r} be a sampling and a rescaling matrix, respectively, as defined in the previous section, and let C = XΩ ∈ R^{n×r}. Then, the following lemma holds (see [3] for a detailed proof). Lemma 3 (Generalized Column Reconstruction).
Using the above notation, if the rank of the matrix Z^TΩS is equal to k, then $\|Y - \Pi_{C,k}(Y)\|_F \le \|Y - HH^+Y\|_F + \|E\Omega S(Z^T\Omega S)^+H^+Y\|_F.$ (1) We now parse the above lemma carefully in order to understand its implications in our setting. For our goals, the matrix C essentially contains a subset of r features from the data matrix X. Recall that Π_{C,k}(Y) is the best rank-k approximation to Y within the column space of C, and the difference Y − Π_{C,k}(Y) measures the error from performing regression using sparse eigenfeatures that are constructed as linear combinations of the columns of C. Moving to the right-hand side of eqn. (1), the two terms reflect a tradeoff between the accuracy of the reconstruction of Y using H and the error E in approximating X by the product HZ^T. Ideally, we would like to choose H so that Y can be accurately approximated and, at the same time, the matrix X is approximated by the product HZ^T with small residual error E. In general, these two goals might be competing and a balance must be struck. Here, we focus on one extreme of this trade-off, namely choosing Z so that the (Frobenius) norm of the matrix E is minimized. More specifically, since Z has rank k, the best choice for HZ^T in order to minimize $\|E\|_F$ is X_k; then, E = X − X_k. Using the SVD of X_k, namely $X_k = U_k\Sigma_kV_k^T$, we apply Lemma 3 setting H = U_kΣ_k and Z = V_k. The following corollary is immediate. Lemma 4 (Generalization of Lemma 7 in [2]). Using the above notation, if the rank of the matrix V_k^TΩS is equal to k, then $\|Y - \Pi_{C,k}(Y)\|_F \le \|Y - U_kU_k^TY\|_F + \|(X - X_k)\Omega S(V_k^T\Omega S)^+\Sigma_k^{-1}U_k^TY\|_F.$ Our main results will follow by carefully choosing Ω and S in order to control the right-hand side of the above inequality.
Algorithm: DSF-Select. 1: Input: X, k, r. 2: Output: r columns of X in C. 3: Compute V_k and $E = X - X_k = X - XV_kV_k^T$. 4: Run DetSampling to construct sampling and rescaling matrices Ω and S: [Ω, S] = DetSampling(V_k^T, E, r). 5: Return C = XΩ.
Algorithm: DetSampling. 1: Input: V^T = [v_1, . . .
, v_d], A = [a_1, . . . , a_d], r. 2: Output: Sampling and rescaling matrices [Ω, S]. 3: Initialize $B_0 = 0_{k\times k}$, $\Omega = 0_{d\times r}$, and $S = 0_{r\times r}$. 4: for τ = 1 to r do 5: Set $L_\tau = \tau - \sqrt{rk}$. 6: Pick index i ∈ {1, 2, . . . , d} and t such that $U(a_i) \le \tfrac{1}{t} \le L(v_i, B_{\tau-1}, L_\tau)$. 7: Update $B_\tau = B_{\tau-1} + t\,v_iv_i^T$. 8: Set $\Omega_{i\tau} = 1$ and $S_{\tau\tau} = 1/\sqrt{t}$. 9: end for 10: Return Ω and S. (Table 1: DSF-Select: Deterministic Sparse Feature Selection.)
4.3 DSF-Select: Deterministic Sparse Feature Selection
DSF-Select deterministically selects r columns of the matrix X to form the matrix C (see Table 1, and note that the matrix C = XΩ might contain duplicate columns, which can be removed without any loss in accuracy). The heart of DSF-Select is the subroutine DetSampling, a near-greedy algorithm which selects columns of V_k^T iteratively to satisfy two criteria: the selected columns should form an approximately orthogonal basis for the columns of V_k^T, so that $(V_k^T\Omega S)^+$ is well-behaved, and EΩS should also be well-behaved. These two properties will allow us to prove our results via Lemma 4. The implementation of the proposed algorithm is quite simple, since it relies only on standard linear algebraic operations. DetSampling takes as input two matrices: V^T ∈ R^{k×d} (satisfying $V^TV = I_k$) and A ∈ R^{n×d}. In order to describe the algorithm, it is convenient to view these two matrices as two sets of column vectors, V^T = [v_1, . . . , v_d] (satisfying $\sum_{i=1}^{d} v_iv_i^T = I_k$) and A = [a_1, . . . , a_d]. In DSF-Select we set V^T = V_k^T and A = E = X − X_k. Given k and r, the algorithm runs for r iterations, τ = 1, . . . , r, and its main operation is to compute the functions φ(L, B) and L(v, B, L), defined as follows: $\varphi(L, B) = \sum_{i=1}^{k}\frac{1}{\lambda_i - L}, \qquad L(v, B, L) = \frac{v^T\left(B - (L+1)I_k\right)^{-2}v}{\varphi(L+1, B) - \varphi(L, B)} - v^T\left(B - (L+1)I_k\right)^{-1}v.$ In the above, B ∈ R^{k×k} is a symmetric matrix with eigenvalues λ_1, . . . , λ_k, and L ∈ R is a parameter. We also define the function U(a) for a vector a ∈ R^n as follows: $U(a) = \left(1 - \sqrt{k/r}\right)\frac{a^Ta}{\|A\|_F^2}.$
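The three functions translate directly into numpy. This is an illustrative transcription (the constant factor in U is reproduced as printed above), with `lower_fn` standing in for the function L to avoid clashing with the barrier parameter L:

```python
import numpy as np

def phi(L, B):
    # phi(L, B) = sum_{i=1}^k 1 / (lambda_i - L), lambda_i = eigenvalues of B.
    lam = np.linalg.eigvalsh(B)
    return float(np.sum(1.0 / (lam - L)))

def lower_fn(v, B, L):
    # L(v, B, L) from the text; B is a k x k symmetric matrix, L a scalar barrier.
    k = B.shape[0]
    Minv = np.linalg.inv(B - (L + 1.0) * np.eye(k))
    num = v @ (Minv @ Minv) @ v
    den = phi(L + 1.0, B) - phi(L, B)
    return num / den - v @ Minv @ v

def upper_fn(a, A_fro_sq, k, r):
    # U(a) = (1 - sqrt(k/r)) * a^T a / ||A||_F^2, as printed above.
    return (1.0 - np.sqrt(k / r)) * (a @ a) / A_fro_sq
```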
At every step τ, the algorithm selects a column a_i such that $U(a_i) \le L(v_i, B_{\tau-1}, L_\tau)$; note that B_{τ−1} is a k × k matrix which is updated at every step of the algorithm (see Table 1). The existence of such a column is guaranteed by results in [1, 2]. It is worth noting that in practical implementations of the proposed algorithm, there might exist multiple columns which satisfy the above requirement. In our implementation we chose to break such ties arbitrarily. However, more careful and informed choices, such as breaking the ties in a way that makes maximum progress towards our objective, might result in considerable savings. This is indeed an interesting open problem. The running time of our algorithm is dominated by the search for a column which satisfies $U(a_i) \le L(v_i, B_{\tau-1}, L_\tau)$. To compute the function L, we first need to compute $\varphi(L_\tau, B_{\tau-1})$ (which necessitates the eigenvalues of B_{τ−1}) and then we need to compute the inverse of $B_{\tau-1} - (L+1)I_k$. These computations need O(k^3) time per iteration, for a total of O(rk^3) time over all r iterations. Now, in order to compute the function L for each vector v_i, for all i = 1, . . . , d, we need an additional O(dk^2) time per iteration; the total time for all r iterations is O(drk^2).
Table 2 (RSF-Select: Randomized Sparse Feature Selection). Algorithm: RSF-Select. 1: Input: X, k, r. 2: Output: r columns of X in C. 3: Compute V_k. 4: Run RandSampling to construct sampling and rescaling matrices Ω and S: [Ω, S] = RandSampling(V_k^T, r). 5: Return C = XΩ. Algorithm: RandSampling. 1: Input: V^T = [v_1, . . . , v_d] and r. 2: Output: Sampling and rescaling matrices [Ω, S]. 3: For i = 1, . . . , d compute probabilities $p_i = \tfrac{1}{k}\|v_i\|_2^2$. 4: Initialize $\Omega = 0_{d\times r}$ and $S = 0_{r\times r}$. 5: for τ = 1 to r do 6: Select an index $i_\tau \in \{1, 2, \ldots, d\}$, where the probability of selecting index i is equal to p_i. 7: Set $\Omega_{i_\tau\tau} = 1$ and $S_{\tau\tau} = 1/\sqrt{r\,p_{i_\tau}}$. 8: end for 9: Return Ω and S.
Next, in order to compute the function U, we need to compute $a_i^Ta_i$ (for all i = 1, .
. . , d), which necessitates O(nnz(A)) time, where nnz(A) is the number of non-zero elements of A. In our setting, A = E ∈ R^{n×d}, so the overall running time is O(drk^2 + nd). In order to get the final running time we also need to account for the computation of V_k and E. The theoretical properties of DetSampling were analyzed in detail in [2], building on the original analysis of [1]. The following lemma from [2] summarizes important properties of Ω. Lemma 5 ([2]). DetSampling with inputs V^T and A returns a sampling matrix Ω ∈ R^{d×r} and a rescaling matrix S ∈ R^{r×r} satisfying $\|(V^T\Omega S)^+\|_2 \le \left(1 - \sqrt{k/r}\right)^{-1}$ and $\|A\Omega S\|_F \le \|A\|_F$. We apply Lemma 5 with V^T = V_k^T and A = E, and we combine it with Lemma 4 to conclude the proof of Theorem 1; see [3] for details.
4.4 RSF-Select: Randomized Sparse Feature Selection
RSF-Select is a randomized algorithm that selects r columns of the matrix X in order to form the matrix C (see Table 2). There are two main differences between RSF-Select and DSF-Select: first, RSF-Select only needs access to V_k^T and, second, RSF-Select uses a simple sampling procedure in order to select the columns of X to include in C. This sampling procedure is described in algorithm RandSampling and essentially selects columns of X with probabilities that depend on the norms of the columns of V_k^T. Thus, RandSampling first computes a set of probabilities that are proportional to the norms of the columns of V_k^T and then samples r columns of X in r independent identical trials with replacement, where in each trial a column is sampled according to the computed probabilities. Note that a column could be selected multiple times. In terms of running time, and assuming that the matrix V_k that contains the top-k right singular vectors of X has already been computed, the proposed algorithm needs O(dk) time to compute the sampling probabilities and an additional O(d + r log r) time to sample r columns from X.
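RandSampling itself is a few lines of numpy (variable names are ours); the probabilities $p_i = \|v_i\|_2^2/k$ sum to one because V_k has orthonormal columns:

```python
import numpy as np

def rand_sampling(Vk, r, rng):
    # Vk: d x k matrix of top-k right singular vectors (orthonormal columns).
    d, k = Vk.shape
    p = np.sum(Vk**2, axis=1) / k            # p_i = ||v_i||_2^2 / k, sums to 1
    idx = rng.choice(d, size=r, p=p)         # r i.i.d. trials, with replacement
    Omega = np.zeros((d, r))
    Omega[idx, np.arange(r)] = 1.0           # sampling matrix
    S = np.diag(1.0 / np.sqrt(r * p[idx]))   # rescaling matrix
    return Omega, S, idx

rng = np.random.default_rng(3)
n, d, k, r = 30, 10, 3, 6
X = rng.standard_normal((n, d))
Vk = np.linalg.svd(X, full_matrices=False)[2][:k, :].T

Omega, S, idx = rand_sampling(Vk, r, rng)
C = X @ Omega                                # r (possibly repeated) columns of X
```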
Similar to Lemma 5, we can prove analogous properties for the matrices Ω and S that are returned by algorithm RandSampling. Again, combining with Lemma 4 we can prove Theorem 2; see [3] for details.
5 Experiments
The goal of our experiments is to illustrate that our algorithms produce sparse features which perform as well in-sample as the top-k PCA regression. It turns out that the out-of-sample performance is comparable (if not better in many cases, perhaps due to the sparsity) to top-k PCA regression.

Data (n; d)          |     |     k = 5, r = k+1            |     k = 5, r = 2k
                     |     | w*_k  DSF   RSF   rnd         | w*_k  DSF   RSF   rnd
Arcene (100;10,000)  | in  | 0.93  0.88  0.91  1.0         | 0.93  0.89  0.86  1.0
                     | out | 0.99  0.94  0.98  1.0         | 1.0   0.97  0.98  1.0
I-sphere (351;34)    | in  | 0.57  0.52  0.55  0.57        | 0.57  0.51  0.52  0.56
                     | out | 0.58  0.53  0.57  0.57        | 0.58  0.54  0.55  0.56
LibrasMov (45;90)    | in  | 2.9   2.9   3.1   3.7         | 2.9   2.4   2.6   3.6
                     | out | 3.3   3.6   3.7   3.7         | 3.3   3.3   3.6   3.6
Madelon (2,000;500)  | in  | 0.98  0.98  0.98  1.0         | 0.98  0.97  0.97  1.0
                     | out | 0.98  0.98  0.98  1.0         | 0.98  0.98  0.98  1.0
HillVal (606;100)    | in  | 0.68  0.66  0.67  0.68        | 0.68  0.65  0.67  0.69
                     | out | 0.68  0.67  0.68  0.68        | 0.68  0.67  0.69  0.69
Spambase (4601;57)   | in  | 0.30  0.30  0.31  0.28        | 0.3   0.3   0.3   0.25
                     | out | 0.30  0.30  0.30  0.38        | 0.3   0.3   0.3   0.35

Table 3: Comparison of DSF-select and RSF-select with top-k PCA. The columns w*_k, DSF, RSF, and rnd correspond to top-k PCA regression (w_k^*), r-sparse regression via DSF-select (ŵ_k^DSF), r-sparse regression via RSF-select (ŵ_k^RSF), and r-sparse regression via r random columns (ŵ_k^rnd). The top entry in each cell ("in") is the in-sample error, and the bottom entry ("out") is the out-sample error. Compared to top-k PCA, our algorithms are efficient and work well in practice, even better than the theoretical bounds suggest. We present our findings in Table 3 using data sets from the UCI machine learning repository. We used a five-fold cross-validation design with 1,000 random splits: we computed regression weights using 80% of the data and estimated the out-sample error on the remaining 20% of the data. We set k = 5 in the experiments (no attempt was made to optimize k).
Table 3 shows the in- and out-sample errors for four methods: top-k PCA regression, w_k^*; r-sparse features regression using DSF-select, ŵ_k^DSF; r-sparse features regression using RSF-select, ŵ_k^RSF; and r-sparse features regression using r random columns, ŵ_k^rnd.
6 Discussion
The top-k PCA regression constructs "features" without looking at the targets; it is target-agnostic. So are all the algorithms we discussed here, as our goal was to compare with top-k PCA. However, there is unexplored potential in Lemma 3. We only explored one extreme choice for the factorization, namely the minimization of some norm of the matrix E. Other choices, in particular non-target-agnostic choices, could prove considerably better. Such investigations are left for future work. As mentioned when we discussed our deterministic algorithm, it will often be the case that in some steps of the greedy selection process, multiple columns could satisfy the criterion for selection. In such a situation, we are free to choose any one; we broke ties arbitrarily in our implementation, and even as is, the algorithm performed as well as or better than top-k PCA. However, we expect that breaking the ties so as to optimize the ultimate objective would yield considerable additional benefit; this would also be non-target-agnostic.
Acknowledgments This work has been supported by two NSF CCF and DMS grants to Petros Drineas and Malik Magdon-Ismail.
References
[1] J. Batson, D. Spielman, and N. Srivastava. Twice-Ramanujan sparsifiers. In Proceedings of ACM STOC, pages 255–262, 2009.
[2] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Near-optimal column-based matrix reconstruction. In Proceedings of IEEE FOCS, 2011.
[3] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Sparse features for PCA-like linear regression. Manuscript, 2011.
[4] C. Boutsidis and M. Magdon-Ismail. Deterministic feature selection for k-means clustering. arXiv:1109.5664v1, 2011.
[5] C. Boutsidis, M. W. Mahoney, and P. Drineas.
An improved approximation algorithm for the column subset selection problem. In Proceedings of ACM-SIAM SODA, pages 968–977, 2009.
[6] C. Boutsidis, M. W. Mahoney, and P. Drineas. Unsupervised feature selection for the k-means clustering problem. In Proceedings of NIPS, 2009.
[7] J. Cadima and I. Jolliffe. Loadings and correlations in the interpretation of principal components. Applied Statistics, 22:203–214, 1995.
[8] T. Chan and P. Hansen. Some applications of the rank revealing QR factorization. SIAM Journal on Scientific and Statistical Computing, 13:727–741, 1992.
[9] A. Das and D. Kempe. Algorithms for subset selection in linear regression. In Proceedings of ACM STOC, 2008.
[10] A. Dasgupta, P. Drineas, B. Harb, R. Kumar, and M. W. Mahoney. Sampling algorithms and coresets for Lp regression. In Proceedings of ACM-SIAM SODA, 2008.
[11] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. In Proceedings of NIPS, 2004.
[12] A. Deshpande and L. Rademacher. Efficient volume sampling for row/column subset selection. In Proceedings of ACM STOC, 2010.
[13] P. Drineas, R. Kannan, and M. Mahoney. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36(1):132–157, 2006.
[14] P. Drineas, M. Mahoney, and S. Muthukrishnan. Polynomial time algorithm for column-row based relative-error low-rank matrix approximation. Technical Report 2006-04, DIMACS, March 2006.
[15] P. Drineas, M. Mahoney, and S. Muthukrishnan. Sampling algorithms for ℓ2 regression and applications. In Proceedings of ACM-SIAM SODA, pages 1127–1136, 2006.
[16] G. Golub. Numerical methods for solving linear least squares problems. Numerische Mathematik, 7:206–216, 1965.
[17] G. Golub, P. Hansen, and D. O'Leary. Tikhonov regularization and total least squares. SIAM Journal on Matrix Analysis and Applications, 21(1):185–194, 2000.
[18] M. Gu and S. Eisenstat.
Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM Journal on Scientific Computing, 17:848–869, 1996. [19] I. Guyon and A. Elisseeff. Special issue on variable and feature selection. Journal of Machine Learning Research, 3, 2003. [20] N. Halko, P. Martinsson, and J. Tropp. Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 2011. [21] P. Hansen. The truncated SVD as a method for regularization. BIT Numerical Mathematics, 27(4):534– 553, 1987. [22] I. Jolliffe. Discarding variables in Principal Component Analysis: asrtificial data. Applied Statistics, 21(2):160–173, 1972. [23] R. Larsen. PROPACK: A software package for the symmetric eigenvalue problem and singular value problems on Lanczos and Lanczos bidiagonalization with partial reorthogonalization. http://soi.stanford.edu/∼rmunk/∼PROPACK/. [24] B. Moghaddam, Y. Weiss, and S. Avidan. Spectral bounds for sparse PCA: exact and greedy algorithms. In Proceedings of NIPS, 2005. [25] B. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24(2):227– 234, 1995. [26] M. Rudelson and R. Vershynin. Sampling from large matrices: An approach through geometric functional analysis. Journal of the ACM, 54, 2007. [27] N. Srivastava and D. Spielman. Graph sparsifications by effective resistances. In Proceedings of ACM STOC, pages 563–568, 2008. [28] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, pages 267–288, 1996. [29] J. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231–2242, 2004. [30] T. Zhang. Generating a d-dimensional linear subspace efficiently. In Adaptive forward-backward greedy algorithm for sparse learning with linear models, 2008. 9
Spike and Slab Variational Inference for Multi-Task and Multiple Kernel Learning

Michalis K. Titsias, University of Manchester, mtitsias@gmail.com
Miguel Lázaro-Gredilla, Univ. de Cantabria & Univ. Carlos III de Madrid, miguel@tsc.uc3m.es

Abstract

We introduce a variational Bayesian inference algorithm which can be widely applied to sparse linear models. The algorithm is based on the spike and slab prior which, from a Bayesian perspective, is the gold standard for sparse inference. We apply the method to a general multi-task and multiple kernel learning model in which a common set of Gaussian process functions is linearly combined with task-specific sparse weights, thus inducing relation between tasks. This model unifies several sparse linear models, such as generalized linear models, sparse factor analysis and matrix factorization with missing values, so that the variational algorithm can be applied to all these cases. We demonstrate our approach in multi-output Gaussian process regression, multi-class classification, image processing applications and collaborative filtering.

1 Introduction

Sparse inference has found numerous applications in statistics and machine learning [1, 2, 3]. It is a generic idea that can be combined with popular models, such as linear regression, factor analysis and, more recently, multi-task and multiple kernel learning models. In the regularization theory literature, sparse inference is tackled via ℓ1 regularization [2], which requires expensive cross-validation for model selection. From a Bayesian perspective, the spike and slab prior [1, 4, 5], also called the two-groups prior [6], is the gold standard for sparse linear models. However, the discrete nature of the prior makes Bayesian inference a very challenging problem. Specifically, for M linear weights, inference under a spike and slab prior distribution on those weights requires a combinatorial search over 2^M possible models.
The problems found when working with the spike and slab prior led several researchers to consider soft-sparse or shrinkage priors such as the Laplace and other related scale mixtures of normals [3, 7, 8, 9, 10]. However, such priors are not ideal since they assign zero probability mass to events associated with weights having zero value. In this paper, we introduce a simple and efficient variational inference algorithm based on the spike and slab prior which can be widely applied to sparse linear models. The novel characteristic of this algorithm is that the variational distribution over sparse weights has a factorial nature, i.e., it can be written as a mixture of 2^M components, where M is the number of weights. Unlike the standard mean field approximation, which uses a unimodal variational distribution, our variational algorithm can more precisely match the combinatorial nature of the posterior distribution over the weights. We will show that the proposed variational approach is more accurate and robust to unfavorable initializations than the standard mean field variational approximation. We apply the variational method to a general multi-task and multiple kernel learning model that expresses the correlation between tasks by letting them share a common set of Gaussian process latent functions. Each task is modeled by linearly combining these latent functions with task-specific weights which are given a spike and slab prior distribution. This model is a spike and slab Bayesian reformulation of previous Gaussian process-based single-task multiple kernel learning methods [11, 12, 13] and multi-task Gaussian processes (GPs) [14, 15, 16, 17]. Further, this model unifies several sparse linear models, such as generalized linear models, factor analysis, probabilistic PCA and matrix factorization with missing values.
In the experiments, we apply the variational inference algorithms to all the above models and present results in multi-output regression, multi-class classification, image denoising, image inpainting and collaborative filtering.

2 Spike and slab multi-task and multiple kernel learning

Section 2.1 discusses the spike and slab multi-task and multiple kernel learning (MTMKL) model that linearly combines Gaussian process latent functions. Spike and slab factor analysis and probabilistic PCA are discussed in Section 2.2, while missing values are dealt with in Section 2.3.

2.1 The model

Let D = {X, Y}, with X ∈ ℝ^{N×D} and Y ∈ ℝ^{N×Q}, be a dataset such that the n-th row of X is an input vector x_n and the n-th row of Y is the set of Q corresponding tasks or outputs. We use y_q to refer to the q-th column of Y and y_{nq} to the (n, q) entry. Outputs Y are then assumed to be generated according to the following hierarchical Bayesian model:

$$y_{nq} \sim \mathcal{N}(y_{nq} \mid f_q(\mathbf{x}_n), \sigma_q^2), \quad \forall n,q \qquad (1a)$$
$$f_q(\mathbf{x}) = \sum_{m=1}^{M} w_{qm}\, \phi_m(\mathbf{x}) = \mathbf{w}_q^\top \boldsymbol{\phi}(\mathbf{x}), \quad \forall q \qquad (1b)$$
$$w_{qm} \sim \pi\, \mathcal{N}(w_{qm} \mid 0, \sigma_w^2) + (1 - \pi)\, \delta_0(w_{qm}), \quad \forall q,m \qquad (1c)$$
$$\phi_m(\mathbf{x}) \sim \mathcal{GP}(\mu_m(\mathbf{x}), k_m(\mathbf{x}_i, \mathbf{x}_j)), \quad \forall m. \qquad (1d)$$

Here, each μ_m(x) is a mean function, k_m(x_i, x_j) a covariance function, w_q = [w_{q1}, ..., w_{qM}]^⊤, φ(x) = [φ_1(x), ..., φ_M(x)]^⊤ and δ_0(w_{qm}) denotes the Dirac delta function centered at zero. Since each of the Q tasks is a linear combination of the same set of latent functions {φ_m(x)}_{m=1}^M (where typically M < Q), correlation is induced in the outputs. Sharing a common set of features means that "knowledge transfer" between tasks can occur and latent functions are inferred more accurately, since data belonging to all tasks are used. Several linear models can be expressed as special cases of the above. For instance, a generalized linear model is obtained when the GPs are Dirac delta measures (with zero covariance functions) that deterministically assign each φ_m(x) to its mean function μ_m(x).
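As a concrete illustration, the generative process (1) can be sampled in a few lines. The sketch below is not the authors' code; it assumes a squared-exponential kernel for the latent GPs, zero mean functions, and illustrative sizes and hyperparameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, M = 50, 4, 6                 # data points, tasks, latent functions
pi, sigma_w, sigma_y = 0.3, 1.0, 0.1   # illustrative hyperparameters

# Inputs and a squared-exponential kernel K for the latent GPs (1d)
x = np.linspace(-3.0, 3.0, N)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) + 1e-8 * np.eye(N)

# Draw M latent functions phi_m ~ GP(0, K), stored as columns of Phi
Phi = rng.multivariate_normal(np.zeros(N), K, size=M).T        # N x M

# Spike-and-slab weights (1c): w_qm = 0 w.p. 1 - pi, else N(0, sigma_w^2)
S = rng.random((Q, M)) < pi
W = np.where(S, rng.normal(0.0, sigma_w, size=(Q, M)), 0.0)

# Tasks (1a)-(1b): y_nq = sum_m w_qm phi_m(x_n) + Gaussian noise
Y = Phi @ W.T + rng.normal(0.0, sigma_y, size=(N, Q))
```

Because all tasks share the columns of Phi, any weight set to zero simply switches the corresponding latent function off for that task.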
However, the model in (1) has a number of additional features not present in standard linear models. Firstly, the basis functions are no longer deterministic; they are instead drawn from different GPs, so an extra layer of flexibility is added to the model. Thus, a posterior distribution over the basis functions of the generalized linear model can be inferred from data. Secondly, a truly sparse prior, the spike and slab prior (1c), is placed over the weights of the model. Specifically, with probability 1 − π each w_{qm} is zero, and with probability π it is drawn from a Gaussian. This contrasts with previous approaches [3, 7, 8, 9, 13] in which soft-sparse priors that assign zero probability mass to the weights being exactly zero were used. Hyperparameters π and σ_w² are learnable in order to determine the amount of sparsity and the discrepancy of nonzero weights, respectively. Thirdly, the number of basis functions M can be inferred from data, since the sparse prior on the weights allows basis functions to be "switched off" as necessary by setting the corresponding weights to zero. Further, the model in (1) can be considered as a spike and slab Bayesian reformulation of previous multi-task [14, 15] and multiple kernel learning [11, 12] methods that learn the weights using maximum likelihood. Assuming the weights w_q are given, each output function y_q(x) is a GP with covariance function

$$\mathrm{Cov}[y_q(\mathbf{x}_i), y_q(\mathbf{x}_j)] = \sum_{m=1}^{M} w_{qm}^2\, k_m(\mathbf{x}_i, \mathbf{x}_j),$$

which clearly consists of a conic combination of kernel functions. Therefore, the proposed model can be reinterpreted as multiple kernel learning in which the weights of each kernel are assigned spike and slab priors in a full Bayesian formulation.

2.2 Sparse factor and principal component analysis

An interesting case arises when μ_m(x) = 0 and k_m(x_i, x_j) = δ_{ij} ∀m, where δ_{ij} is the Kronecker delta.
This says that each latent function is drawn from a white process, so that it consists of independent values each following the standard normal distribution. We first define matrices Φ ∈ ℝ^{N×M} and W ∈ ℝ^{Q×M}, whose elements are, respectively, φ_{nm} = φ_m(x_n) and w_{qm}. Then, the model in (1) reduces to

$$\mathbf{Y} = \boldsymbol{\Phi} \mathbf{W}^\top + \boldsymbol{\xi}, \qquad (2a)$$
$$w_{qm} \sim \pi\, \mathcal{N}(w_{qm} \mid 0, \sigma_w^2) + (1 - \pi)\, \delta_0(w_{qm}), \quad \forall q,m \qquad (2b)$$
$$\phi_{nm} \sim \mathcal{N}(\phi_{nm} \mid 0, 1), \quad \forall n,m \qquad (2c)$$
$$\xi_{nq} \sim \mathcal{N}(\xi_{nq} \mid 0, \sigma_q^2), \quad \forall n,q, \qquad (2d)$$

where ξ is an N × Q noise matrix with entries ξ_{nq}. The resulting model thus corresponds to sparse factor analysis or sparse probabilistic PCA (when the noise is homoscedastic, i.e., σ_q² is constant for all q). Observe that the sparse spike and slab prior is placed on the factor loadings W.

2.3 Missing values

The method can easily handle missing values and can thus be applied to problems involving matrix completion and collaborative filtering. More precisely, in the presence of missing values we have a binary matrix Z ∈ {0, 1}^{N×Q} that indicates the observed elements of Y. Using Z, the likelihood in (1a) is modified according to y_{nq} ∼ N(y_{nq} | f_q(x_n), σ_q²), ∀ n, q s.t. [Z]_{nq} = 1. In the experiments we consider missing values in applications such as image inpainting and collaborative filtering.

3 Efficient variational inference

The presence of the Dirac delta mass function makes the application of variational approximate inference algorithms in spike and slab Bayesian models troublesome. However, there exists a simple reparameterization of the spike and slab prior that is more amenable to approximate inference methods. Specifically, assume a Gaussian random variable w̃_{qm} ∼ N(w̃_{qm} | 0, σ_w²) and a Bernoulli random variable s_{qm} ∼ π^{s_{qm}}(1 − π)^{1−s_{qm}}. The product s_{qm} w̃_{qm} forms a new random variable that follows the probability distribution in eq. (1c). This allows us to reparameterize w_{qm} according to w_{qm} = s_{qm} w̃_{qm} and assign the above prior distributions to s_{qm} and w̃_{qm}.
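The product reparameterization can be checked numerically: multiplying a Bernoulli(π) indicator by an independent N(0, σ_w²) draw reproduces the spike and slab distribution (1c). A small sanity check with illustrative values (π = 0.25, σ_w = 1):

```python
import numpy as np

rng = np.random.default_rng(1)
pi, sigma_w, n = 0.25, 1.0, 200_000

w_tilde = rng.normal(0.0, sigma_w, n)   # slab: Gaussian draws
s = rng.random(n) < pi                  # spike indicator: Bernoulli(pi)
w = np.where(s, w_tilde, 0.0)           # w = s * w_tilde

frac_zero = np.mean(w == 0.0)           # should approach 1 - pi = 0.75
slab_var = np.var(w[w != 0.0])          # should approach sigma_w^2 = 1
```

The empirical spike mass and slab variance match the prior, confirming that the pair (w̃, s) carries the same information as w.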
Thus, the reparameterized spike and slab prior takes the form

$$p(\tilde{w}_{qm}, s_{qm}) = \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)\, \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}}, \quad \forall q,m. \qquad (3)$$

Notice that the presence of w_{qm} in the likelihood function in (1a) is now replaced by the product s_{qm} w̃_{qm}. After the above reparameterization, a standard mean field variational method that uses a factorized variational distribution over W̃ = {w̃_q}_{q=1}^Q and S = {s_q}_{q=1}^Q takes the form q(W̃, S) = ∏_{q=1}^Q q(w̃_q, s_q), where

$$q(\tilde{\mathbf{w}}_q, \mathbf{s}_q) = q(\tilde{\mathbf{w}}_q)\, q(\mathbf{s}_q) = \mathcal{N}(\tilde{\mathbf{w}}_q \mid \boldsymbol{\mu}_{w_q}, \boldsymbol{\Sigma}_{w_q}) \prod_{m=1}^{M} \gamma_{qm}^{s_{qm}} (1 - \gamma_{qm})^{1 - s_{qm}} \qquad (4)$$

and where (μ_{w_q}, Σ_{w_q}, γ_q) are variational parameters. Such an approach has been extensively used in [18] and was also considered in [19]. However, the above variational distribution leads to a very inefficient approximation. This is because (4) is a unimodal distribution, and therefore has limited capacity when approximating the factorial true posterior distribution, which can have exponentially many modes. To analyze the nature of the true posterior distribution, we consider the following two properties, derived by assuming for simplicity a single output (Q = 1), so that the index q is dropped.

Property 1: The true marginal posterior p(w̃ | Y) can be written as a mixture distribution having 2^M components. This is an obvious fact, since p(w̃ | Y) = Σ_s p(w̃ | s, Y) p(s | Y), where the summation runs over all 2^M possible values of the binary vector s.

The second property characterizes the nature of each conditional p(w̃ | s, Y) in the above sum.

Property 2: Consider the conditional distribution p(w̃ | s, Y). We can write s = s_1 ∪ s_0, where s_1 denotes the elements of s with value one and s_0 the elements with value zero. Using the correspondence between s and w̃, we have w̃ = w̃_1 ∪ w̃_0. Then, p(w̃ | s, Y) factorizes as p(w̃ | s, Y) = p(w̃_1 | Y) N(w̃_0 | 0, σ_w² I_{|w̃_0|}), which says that the posterior over w̃_0 given s_0 = 0 is equal to the prior over w̃_0.
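For small M, Property 1 can be made concrete: in a toy spike and slab linear regression, the exact posterior over the 2^M support patterns s can be enumerated directly, using the Gaussian marginal likelihood p(y | s) obtained by integrating out the active weights. A hedged sketch; all sizes and values below are illustrative, not from the paper:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
M, N = 4, 30
pi, sigma_w, sigma = 0.25, 1.0, 0.3

X = rng.normal(size=(N, M))
w_true = np.array([1.5, 0.0, -2.0, 0.0])     # features 0 and 2 are active
y = X @ w_true + rng.normal(0.0, sigma, N)

# p(s | y) ∝ p(y | s) p(s), one Gaussian marginal likelihood per pattern
patterns = list(product([0, 1], repeat=M))
log_post = np.empty(len(patterns))
for i, s in enumerate(patterns):
    idx = [m for m in range(M) if s[m]]
    Xs = X[:, idx]
    C = sigma**2 * np.eye(N) + sigma_w**2 * (Xs @ Xs.T)
    _, logdet = np.linalg.slogdet(C)
    loglik = -0.5 * (logdet + y @ np.linalg.solve(C, y))
    logprior = sum(s) * np.log(pi) + (M - sum(s)) * np.log(1 - pi)
    log_post[i] = loglik + logprior

post = np.exp(log_post - log_post.max())
post /= post.sum()        # exact posterior over the 2^M support patterns
```

With strong signal the posterior mass concentrates on patterns that include the truly active features, while the marginal over w̃ remains a 2^M-component mixture, exactly the structure a unimodal q(w̃) cannot represent.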
Property 2 holds because w̃_0 and s_0 appear in the likelihood only through the elementwise product w̃_0 ◦ s_0; thus, when s_0 = 0, w̃_0 becomes disconnected from the data. The standard variational distribution in (4) ignores these properties and approximates the marginal p(w̃ | Y), which is a mixture with 2^M components, with a single Gaussian distribution. Next we present an alternative variational approximation that takes these properties into account.

3.1 The proposed variational method

In the reparameterized spike and slab prior, each pair of variables {w̃_{qm}, s_{qm}} is strongly correlated, since their product is the underlying variable that interacts with the data. Thus, a sensible approximation must treat each pair {w̃_{qm}, s_{qm}} as a unit, so that w̃_{qm} and s_{qm} are placed in the same factor of the variational distribution. The simplest factorization that achieves this is

$$q(\tilde{\mathbf{w}}_q, \mathbf{s}_q) = \prod_{m=1}^{M} q(\tilde{w}_{qm}, s_{qm}). \qquad (5)$$

This variational distribution yields a marginal q(w̃_q) which has 2^M components. This can be seen by writing q(w̃_q) = ∏_{m=1}^M [q(w̃_{qm}, s_{qm} = 1) + q(w̃_{qm}, s_{qm} = 0)]; multiplying out the terms gives a mixture of 2^M components. Therefore, Property 1 is satisfied by (5). It turns out that Property 2 is also satisfied. This can be shown by taking the stationary condition for the factor q(w̃_{qm}, s_{qm}) when maximizing the variational lower bound (on the true marginal likelihood)

$$\left\langle \log \frac{p(\mathbf{Y}, \tilde{w}_{qm}, s_{qm}, \boldsymbol{\Theta})\, p(\boldsymbol{\Theta})\, \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)\, \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}}}{q(\tilde{w}_{qm}, s_{qm})\, q(\boldsymbol{\Theta})} \right\rangle_{q(\tilde{w}_{qm}, s_{qm})\, q(\boldsymbol{\Theta})}, \qquad (6)$$

where Θ are the remaining random variables in the model (i.e., excluding {w̃_{qm}, s_{qm}}) and q(Θ) is their variational distribution. The stationary condition for q(w̃_{qm}, s_{qm}) is

$$q(\tilde{w}_{qm}, s_{qm}) = \frac{1}{Z}\, e^{\langle \log p(\mathbf{Y}, \tilde{w}_{qm}, s_{qm}, \boldsymbol{\Theta}) \rangle_{q(\boldsymbol{\Theta})}}\, \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)\, \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}}, \qquad (7)$$

where Z is a normalizing constant that does not depend on {w̃_{qm}, s_{qm}}.
Therefore, we have

$$q(\tilde{w}_{qm} \mid s_{qm} = 0) \propto q(\tilde{w}_{qm}, s_{qm} = 0) = \frac{C}{Z}\, \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)(1 - \pi),$$

where C = e^{⟨log p(Y, w̃_{qm}, s_{qm}=0, Θ)⟩_{q(Θ)}} is a constant that does not depend on w̃_{qm}. From the last expression we obtain q(w̃_{qm} | s_{qm} = 0) = N(w̃_{qm} | 0, σ_w²), which implies that Property 2 is satisfied. The above remarks regarding the variational distribution (5) are general and hold for many spike and slab probability models, as long as the weights w̃ and the binary variables s interact inside the likelihood function through w̃ ◦ s.

3.2 Application to the multi-task and multiple kernel learning model

Here we briefly discuss the variational method applied to the multi-task and multiple kernel model described in Section 2.1 and refer to the supplementary material for the variational EM update equations. The explicit form of the joint probability density function on the training data of model (1) is

$$p(\mathbf{Y}, \widetilde{\mathbf{W}}, \mathbf{S}, \boldsymbol{\Phi}) = \mathcal{N}(\mathbf{Y} \mid \boldsymbol{\Phi} (\widetilde{\mathbf{W}} \circ \mathbf{S})^\top, \boldsymbol{\Sigma}) \prod_{q,m} \mathcal{N}(\tilde{w}_{qm} \mid 0, \sigma_w^2)\, \pi^{s_{qm}} (1 - \pi)^{1 - s_{qm}} \prod_{m=1}^{M} \mathcal{N}(\boldsymbol{\phi}_m \mid \boldsymbol{\mu}_m, \mathbf{K}_m),$$

where {W̃, S, Φ} is the whole set of random variables that need to be marginalized out to compute the marginal likelihood. The marginal likelihood is analytically intractable, so we lower bound it using the following variational distribution:

$$q(\widetilde{\mathbf{W}}, \mathbf{S}, \boldsymbol{\Phi}) = \prod_{q=1}^{Q} \prod_{m=1}^{M} q(\tilde{w}_{qm}, s_{qm}) \prod_{m=1}^{M} q(\boldsymbol{\phi}_m). \qquad (8)$$

The stationary conditions of the lower bound result in analytical updates for all the factors above. More precisely, q(φ_m) is an N-dimensional Gaussian distribution, and each factor q(w̃_{qm}, s_{qm}) leads to a marginal q(w̃_{qm}) which is a mixture of two Gaussians, one component of which is q(w̃_{qm} | s_{qm} = 0) = N(w̃_{qm} | 0, σ_w²), as shown in the previous section. The optimization proceeds using an EM algorithm that at the E-step updates the factors in (8) and at the M-step updates the hyperparameters {{σ_q}_{q=1}^Q, σ_w², π, {θ_m}_{m=1}^M}, where θ_m parameterizes the kernel matrix K_m. There is, however, one surprise in these updates.
The GP hyperparameters θ_m are strongly dependent on the factor q(φ_m) of the corresponding GP latent vector, so updating θ_m while keeping the factor q(φ_m) fixed exhibits slow convergence. This problem is efficiently resolved by applying a marginalized variational step [20] which jointly updates the pair (q(φ_m), θ_m). This more advanced update, together with all remaining updates of the EM algorithm, is discussed in detail in the supplementary material.

4 Assessing the accuracy of the approximation

In this section we compare the proposed variational inference method, in the following called paired mean field (PMF), against the standard mean field (MF) approximation. For simplicity, we consider a single-output linear regression problem where the data are generated according to y = (w̃ ◦ s)^⊤ x + ξ. Moreover, to remove the effect of hyperparameter learning from the comparison, (σ², π, σ_w²) are fixed to known values. The objective of the comparison is to measure the accuracy when approximating the true posterior mean value of the parameter vector, w^{tr} = E[w̃ ◦ s], where the expectation is under the true posterior distribution. w^{tr} is obtained from a very long run of Gibbs sampling. PMF and MF provide alternative approximations w^{PMF} and w^{MF}, and the absolute errors between these approximations and w^{tr} are used to measure accuracy. Since initialization is crucial for non-convex variational algorithms, the accuracy of PMF and MF is averaged over many random initializations of their respective variational distributions.

        soft-error            soft-bound               extreme-error         extreme-bound
MF      0.917 [0.002, 1.930]  -628.9 [-554.6, -793.5]  1.880 [0.965, 2.561]  -895.0 [-618.9, -1483.3]
PMF     0.208 [0.002, 0.454]  -560.7 [-557.8, -564.1]  0.204 [0.002, 0.454]  -560.6 [-557.8, -564.0]

Table 1: Comparison of MF and PMF on the Boston-housing data in terms of approximating the ground truth.
Average errors ($\sum_{m=1}^{13} |w_m^{tr} - w_m^{appr}|$), together with 95% confidence intervals (given by percentiles), are shown for soft and extreme initializations. Average values of the variational lower bound are also shown.

For the purpose of the comparison we also derived an efficient paired Gibbs sampler that follows exactly the same principle as PMF. This Gibbs sampler iteratively samples the pair (w̃_m, s_m) from the conditional p(w̃_m, s_m | w̃_{\m}, s_{\m}, y) and has been observed to mix much faster than the standard Gibbs sampler that samples w̃ and s separately. More details about the paired Gibbs sampler are given in the supplementary material. We considered the Boston-housing dataset, which consists of 456 training examples and 13 inputs. Hyperparameters were fixed to the values (σ² = 0.1 × var(y), π = 0.25, σ_w² = 1), where var(y) denotes the variance of the data. We performed two types of experiments, each repeated 300 times. Each repetition of the first type uses a soft random initialization of each q(s_m = 1) = γ_m from the range (0, 1). The second type uses an extreme random initialization, so that each γ_m is initialized to either 0 or 1. For each run, PMF and MF are initialized to the same variational parameters. Table 1 reports average absolute errors and also average values of the variational lower bounds. Clearly, PMF is more accurate than MF, achieves significantly higher values for the lower bound and exhibits smaller variance under different initializations. Further, for the more difficult case of extreme initializations the performance of MF becomes worse, while the performance of PMF remains unchanged. This shows that optimization in PMF, although non-convex, is very robust to unfavorable initializations. Similar experiments on other datasets have confirmed the above remarks.

5 Experiments

Toy multi-output regression dataset. To illustrate the capabilities of the proposed model, we first apply it to a toy multi-output dataset with missing observations.
Toy data is generated as follows. Ten random latent functions are generated by sampling i.i.d. from zero-mean GPs with the non-stationary covariance function

$$k(x_i, x_j) = \exp\!\left(\frac{-x_i^2 - x_j^2}{20}\right)\bigl(4\cos(0.5(x_i - x_j)) + \cos(2(x_i - x_j))\bigr),$$

at 201 evenly spaced points in the interval x ∈ [−10, 10]. Ten tasks are then generated by adding Gaussian noise with standard deviation 0.2 to those random latent functions, and two additional tasks consist only of Gaussian noise with standard deviations 0.1 and 0.4. Finally, for each of the 12 tasks, we artificially simulate missing data by removing 41 contiguous observations, as shown in Figure 1. Missing data are not available to any learning algorithm and are used only to test performance. Note that the above covariance function is rank-4, so ten of the twelve tasks will be related, though we do not know how, or which ones. All tasks are then learned using both independent GPs with the squared exponential (SE) covariance function k_SE(x_i, x_j) = exp(−(x_i − x_j)²/(2ℓ)) and the proposed MTMKL with M = 7 latent functions, each of them also using the SE prior. The hyperparameter ℓ, as well as the noise levels, are learned independently for each latent function. Figure 1 shows the inferred posterior means.
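The rank-4 claim can be verified numerically: each cos(a(x_i − x_j)) term expands as cos(ax_i)cos(ax_j) + sin(ax_i)sin(ax_j), so the covariance is a sum of four separable terms and the Gram matrix on the 201-point grid has exactly four non-negligible eigenvalues. A quick check (illustrative sketch, not the authors' code):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 201)
d = x[:, None] - x[None, :]
env = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 20.0)  # separable envelope
K = env * (4.0 * np.cos(0.5 * d) + np.cos(2.0 * d))

# Eigenvalues in descending order; only 4 carry mass, the rest are
# numerically zero because K is a sum of four rank-one terms.
eig = np.sort(np.linalg.eigvalsh(K))[::-1]
```

This matches the observation in the text that exactly 4 of the 7 available latent functions remain active after learning.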
Figure 1: Twelve related tasks and predictions according to independent GPs (blue, continuous line) and MTMKL (red, dashed line). Missing data for each task are represented using green circles.

The mean square error (MSE) between predictions and missing observations for each task is displayed in Table 2. MTMKL is able to infer how tasks are related and then exploit that information to make much better predictions. After learning, only 4 of the 7 available latent functions remain active; the others are pruned by setting the corresponding weights to zero. This is in correspondence with the generating covariance function, which has only 4 eigenfunctions, showing that model order selection is automatic.

Task #            1     2      3     4     5     6      7     8     9     10    11     12
Independent GPs   6.51  11.70  7.52  2.49  1.53  18.25  0.41  7.43  2.73  1.81  19.93  93.80
MTMKL             1.97  4.57   7.71  1.94  1.98  2.09   0.41  1.96  1.90  1.57  1.20   2.83

Table 2: MSE performance of independent GPs vs. MTMKL on the missing observations for each task.

Inferred noise standard deviations for the noise-only tasks are 0.10 and 0.45, and the average for the remaining tasks is 0.22, which agrees well with the stated actual values.

The flowers dataset.
Though the proposed model has been designed as a tool for regression, it can also be used, approximately, to solve classification problems by using output values to identify class membership. In this section we apply it to the challenging flower identification problem posed in [21]. There are 2040 instances of flowers for training and 6149 for testing, mainly acquired from the web, with varying scales, resolutions, etc., which are labeled into 102 categories. In [21], four relevant features are identified: color, histogram of gradient orientations, and the scale-invariant feature transform sampled on both the foreground region and its boundary. More information is available at http://www.robots.ox.ac.uk/~vgg/data/flowers/. For this type of dataset, state-of-the-art performance has been achieved using a weighted linear combination of kernels (one per feature) in a support vector machine (SVM) classifier, with a different set of weights learned for each class. In [22] it is shown that these weights can be learned by solving a convex optimization problem. That is, the standard approach to the flower classification problem would correspond to solving 102 independent binary classification problems, each using a linear combination of 4 kernels. We take a different approach: since all 102 binary classification tasks are related, we learn them all at once as a multi-task multiple-kernel problem, hoping that knowledge transfer between them will enhance performance. For each training instance, we set the corresponding output to +1 for the desired task, whereas the outputs for the remaining tasks are set to −1. Then we consider using both 10 and 13 latent functions per feature (i.e., M = 40 and M = 52).
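The one-vs-rest output coding just described can be sketched as follows (toy labels with Q = 3 classes; the actual dataset uses Q = 102):

```python
import numpy as np

labels = np.array([0, 2, 1, 2, 0])      # hypothetical class ids for 5 instances
Q = 3                                    # number of classes = number of tasks

Y = -np.ones((labels.size, Q))           # -1 everywhere ...
Y[np.arange(labels.size), labels] = 1.0  # ... except +1 for the true class
```

Each column of Y then plays the role of one binary task, and all columns are fitted jointly by the multi-task model.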
We measure performance in terms of the recognition rate (RR), which is the average of break-even points (where precision equals recall) for each class; the average area under the curve (AUC); and the multi-class accuracy (MA), which is the rate of correctly classified instances. As a baseline, recall that a random classifier would yield an RR and AUC of 0.5 and an MA of 1/102 = 0.0098. Results are reported in Table 3.

Method           Latent functions   AUC on test set   RR on test set   MA on test set
MTMKL            M = 40             0.944             0.889            0.329
MTMKL            M = 52             0.952             0.893            0.400
MKL from [21]    M = 408            —                 0.728            —
MKL from [13]    M = 408            0.957             —                —

Table 3: Performance of the different multiple kernel learning algorithms on the flowers dataset.

MTMKL significantly outperforms the state-of-the-art method in [21], yielding a performance in line with [13], due to its ability to share information across tasks.

Image denoising and dictionary learning. Here we illustrate denoising on the 256 × 256 "house" image used in [19]. Three noise levels (standard deviations 15, 25 and 50) are considered. Following [19], we partition the noisy image into 62,001 overlapping 8 × 8 blocks and regard each block as a different task. MTMKL is then run using M = 64 "latent blocks", also known as "dictionary elements" (bigger dictionaries do not result in a significant performance increase). For the covariance of the latent functions, we consider two possible choices: either a white covariance function (as in [19]) or an exponential covariance of the form k_EXP(x_i, x_j) = e^{−|x_i − x_j|/ℓ}, where x are the pixel coordinates within each block. The first option is equivalent to placing an independent standard normal prior on each pixel of the dictionary. The second one, on the other hand, introduces correlation between neighboring pixels in the dictionary. Results are shown in Table 4. The exponential covariance clearly enhances performance and produces a more structured dictionary, as can be seen in Figure 3(a).
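The partitioning into overlapping blocks and the exponential covariance over pixel coordinates can be sketched as follows; for a 256 × 256 image there are (256 − 8 + 1)² = 62,001 blocks. The lengthscale ℓ = 2.0 and the Euclidean distance between 2-D pixel coordinates are illustrative assumptions, not values from the paper:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

img = np.zeros((256, 256))      # stand-in for the noisy image
B = 8                           # block size

# One 8x8 patch per valid top-left corner: shape (249, 249, 8, 8)
blocks = sliding_window_view(img, (B, B))
tasks = blocks.reshape(-1, B * B)   # one row (task) per overlapping block

# Exponential covariance k_EXP over the 2-D pixel coordinates of a block,
# assuming Euclidean distance and an illustrative lengthscale ell = 2.0
ii, jj = np.meshgrid(np.arange(B), np.arange(B), indexing="ij")
coords = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)   # 64 x 2
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
K_exp = np.exp(-dist / 2.0)     # 64 x 64 prior covariance per dictionary block
```

Using K_exp as the prior covariance of each latent block is what introduces the neighboring-pixel correlation described above; the white alternative is simply the 64 × 64 identity.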
The peak signal-to-noise ratio (PSNR) obtained using the proposed approach is comparable to the state-of-the-art results obtained in [19].

Figure 2: Noisy "house" image with σ = 25 and restored version using the exponential covariance function.

              PSNR (dB)
Noise std     Noisy image   White   Expon.
σ = 15        24.66         33.98   34.29
σ = 25        20.22         30.98   31.88
σ = 50        14.20         26.14   28.08

Table 4: PSNR for the noisy and restored image using several noise levels and covariance functions.

Image inpainting and dictionary learning. We now address the inpainting problem in color images. Following [19], we consider a color image in which a random 80% of the RGB components are missing. Using a partitioning scheme analogous to that of the previous section, we obtain 148,836 blocks of size 8 × 8 × 3, each of which is regarded as a different task. A dictionary size of M = 100 and a white covariance function (which is used in [19]) are selected. Note that we do not apply any other preprocessing to the data or any specific initialization, as is done in [19]. The PSNR of the image after it is restored using MTMKL is 28.94 dB; see Figure 3(b). This result is similar to the results reported in [19] and close to the state-of-the-art result of 29.65 dB achieved in [23].

Figure 3: (a) House: dictionaries inferred from the noisy (σ = 25) "house" image for the white and exponential covariances. (b) Castle: inpainting results showing missing values, restored and original images.

Collaborative filtering. Finally, we performed an experiment on the 10M MovieLens dataset, which consists of 10 million ratings for 71,567 users and 10,681 films, with ratings in {0.5, 1, 1.5, . . . , 4.5, 5}. We followed the setup in [24] and used the ra and rb partitions provided with the database, which split the data into training and testing sets so that there are 10 ratings per user in the test set. We applied the sparse factor analysis model (i.e.
sparse PCA but with heteroscedastic noise for the columns of the observation matrix Y, which correspond to films) with M = 20 latent dimensions. The RMSE was 0.88 for the ra partition and 0.85 for the rb partition, i.e., 0.865 on average. This result is slightly better than the 0.8740 RMSE reported in [24] using GP-LVM.

6 Discussion

In this work we have proposed a spike and slab multi-task and multiple kernel learning model, and derived a novel variational algorithm to perform inference in it. The key contribution in this regard, which explains the good performance of the algorithm, is the choice of a joint distribution over w̃_{qm} and s_{qm} in the variational posterior, as opposed to the usual independence assumption. This has the effect of using exponentially many modes to approximate the posterior, thus rendering it more accurate and much more robust to poor initializations of the variational parameters. The relevance and wide applicability of the proposed model have been illustrated by using it on very diverse tasks: multi-output regression, multi-class classification, image denoising, image inpainting and collaborative filtering. Prior structure beliefs were introduced in image dictionaries, which, to the best of our knowledge, is also a novel contribution. Finally, an interesting topic for future research is to optimize the variational distribution proposed here within alternative approximate inference frameworks such as belief propagation or expectation propagation. This could allow extending current methodologies within such frameworks that assume unimodal approximations [25, 26].

Acknowledgments

We thank the reviewers for insightful comments. MKT was supported by EPSRC Grant No EP/F005687/1 "Gaussian Processes for Systems Identification with Applications in Systems Biology". MLG gratefully acknowledges funding from CAM project CCG10-UC3M/TIC-5511 and CONSOLIDER-INGENIO 2010 CSD2008-00010 (COMONSENS).

References

[1] T.J. Mitchell and J.J. Beauchamp.
Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988. [2] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1996. [3] M.E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001. [4] E.I. George and R.E. McCulloch. Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423):881–889, 1993. [5] M. West. Bayesian factor regression models in the "large p, small n" paradigm. In Bayesian Statistics, pages 723–732. Oxford University Press, 2003. [6] B. Efron. Microarrays, empirical Bayes and the two-groups model. Statistical Science, 23:1–22, 2008. [7] C. Archambeau and F. Bach. Sparse probabilistic projections. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 73–80, 2009. [8] F. Caron and A. Doucet. Sparse Bayesian nonparametric regression. In 25th International Conference on Machine Learning (ICML). ACM, 2008. [9] M.W. Seeger and H. Nickisch. Compressed sensing and Bayesian experimental design. In ICML, pages 912–919, 2008. [10] C.M. Carvalho, N.G. Polson, and J.G. Scott. The horseshoe estimator for sparse signals. Biometrika, 97:465–480, 2010. [11] T. Damoulas and M.A. Girolami. Probabilistic multi-class multi-kernel learning: on protein fold recognition and remote homology detection. Bioinformatics, 24:1264–1270, 2008. [12] M. Christoudias, R. Urtasun, and T. Darrell. Bayesian localized multiple kernel learning. Technical report, EECS Department, University of California, Berkeley, Jul 2009. [13] C. Archambeau and F. Bach. Multiple Gaussian process models. In NIPS 23 Workshop on New Directions in Multiple Kernel Learning, 2010. [14] Y.W. Teh, M. Seeger, and M.I. Jordan. Semiparametric latent factor models.
In Proceedings of the International Workshop on Artificial Intelligence and Statistics, volume 10, 2005. [15] E.V. Bonilla, K.M.A. Chai, and C.K.I. Williams. Multi-task Gaussian process prediction. In Advances in Neural Information Processing Systems 20, 2008. [16] P. Boyle and M. Frean. Dependent Gaussian processes. In Advances in Neural Information Processing Systems 17, pages 217–224. MIT Press, 2005. [17] M. Alvarez and N.D. Lawrence. Sparse convolved Gaussian processes for multi-output regression. In Advances in Neural Information Processing Systems 20, pages 57–64, 2008. [18] R. Yoshida and M. West. Bayesian learning in sparse graphical factor models via variational mean-field annealing. Journal of Machine Learning Research, 11:1771–1798, 2010. [19] M. Zhou, H. Chen, J. Paisley, L. Ren, G. Sapiro, and L. Carin. Non-parametric Bayesian dictionary learning for sparse image representations. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 2295–2303. 2009. [20] M. Lázaro-Gredilla and M. Titsias. Variational heteroscedastic Gaussian process regression. In 28th International Conference on Machine Learning (ICML-11), pages 841–848, New York, NY, USA, June 2011. ACM. [21] M.E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008. [22] M. Varma and D. Ray. Learning the discriminative power invariance trade-off. In International Conference on Computer Vision. 2007. [23] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Trans. Image Processing, 17, 2008. [24] N.D. Lawrence and R. Urtasun. Non-linear matrix factorization with Gaussian processes. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 601–608, 2009. [25] K. Sharp and M. Rattray.
Dense message passing for sparse principal component analysis. In 13th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 725–732, 2010. [26] J.M. Hernández-Lobato, D. Hernández-Lobato, and A. Suárez. Network-based sparse Bayesian classification. Pattern Recognition, 44(4):886–900, 2011.
Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability David P. Reichert, Peggy Seriès, and Amos J. Storkey School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh, EH8 9AB {d.p.reichert@sms., pseries@inf., a.storkey@} ed.ac.uk Abstract It has been argued that perceptual multistability reflects probabilistic inference performed by the brain when sensory input is ambiguous. Alternatively, more traditional explanations of multistability refer to low-level mechanisms such as neuronal adaptation. We employ a Deep Boltzmann Machine (DBM) model of cortical processing to demonstrate that these two different approaches can be combined in the same framework. Based on recent developments in machine learning, we show how neuronal adaptation can be understood as a mechanism that improves probabilistic, sampling-based inference. Using the ambiguous Necker cube image, we analyze the perceptual switching exhibited by the model. We also examine the influence of spatial attention, and explore how binocular rivalry can be modeled with the same approach. Our work joins earlier studies in demonstrating how the principles underlying DBMs relate to cortical processing, and offers novel perspectives on the neural implementation of approximate probabilistic inference in the brain. 1 Introduction Bayesian accounts of cortical processing posit that the brain implements a probabilistic model to learn and reason about the causes underlying sensory inputs. The nature of the potential cortical model and its means of implementation are hotly debated. Of particular interest in this context is bistable perception, where the percept switches over time between two interpretations in the case of an ambiguous stimulus such as the Necker cube, or two different images that are presented to either eye in binocular rivalry [1]. 
In these cases, ambiguous or conflicting sensory input could result in a bimodal posterior over image interpretations in a probabilistic model, and perceptual bistability could reflect the specific way the brain explores and represents this posterior [2, 3, 4, 5, 6]. Unlike more classic accounts that explain bistability with low-level mechanisms such as neuronal fatigue (e.g. [7, 8]), perhaps making it more of an epiphenomenon, the probabilistic approaches see bistability as a fundamental aspect of how the brain implements probabilistic inference. Recently, it has been suggested that the cortex could employ approximate inference schemes, e.g. by estimating probability distributions with a set of samples, and studies show how electrophysiological [9] and psychophysical [10] data can be interpreted in that light. Gershman et al. [6] focus on binocular rivalry and point out how in particular Markov Chain Monte Carlo (MCMC) algorithms, where correlated samples are drawn over time to approximate distributions, might naturally account for aspects of perceptual bistability, such as its stochasticity and the fact that perception at any point in time only reflects an individual interpretation of the image rather than a full distribution over possibilities. Gershman et al. do not provide a concrete neural model, however. In earlier work, we considered Deep Boltzmann Machines (DBMs) as models of cortical perception, and related hierarchical inference in these generative models to hallucinations [11] and attention [12].
However, it is well known that MCMC methods in general and Gibbs sampling in particular can be problematic in practice for complex, multi-modal distributions, as the sampling algorithm can get stuck in individual modes (‘the chain does not mix’). In very recent machine learning work, Breuleux et al. [13] introduced a heuristic algorithm called Rates Fast Persistent Contrastive Divergence (rates-FPCD) that aims to improve sampling performance in a Boltzmann machine model by dynamically changing the model parameters, such as the connection strengths. In closely related work, Welling [14] suggested a potential connection to dynamic synapses in the brain. Hence, neuronal adaptation, here meant to be temporary changes to neuronal excitability and synaptic efficacy, could actually be seen as a means of enhancing sampling-based inference [2]. We thus aim to demonstrate how the low-level and probabilistic accounts of bistable perception can be combined. We present a biological interpretation of rates-FPCD in terms of neuronal adaptation, or neuronal fatigue and synaptic depression specifically. Using a DBM that was trained on the two interpretations of the Necker cube, we show how such adaptation leads to bistable switching of the internal representations when the model is presented with the actual ambiguous Necker cube. Moreover, we model the role of spatial attention in biasing the perceptual switching. Finally, we explore how the same approach can also be applied to binocular rivalry.

2 Neuronal adaptation in a Deep Boltzmann Machine

In this section we briefly introduce the DBM and the rates-FPCD algorithm as it was motivated from a machine learning perspective, and then explain the latter’s relation to biology. A DBM [15] consists of stochastic binary units arranged hierarchically in several layers, with symmetric connections between layers and no connections within a layer.
The first layer contains the visible units that are clamped to data, such as images, during inference, whereas the higher layers contain hidden units that learn representations from which they can generate the data in the visibles. With the states in layer k denoted by $x^{(k)}$, connection weights $W^{(k)}$ and biases $b^{(k)}$, the probability for a unit to switch on is determined by the input it gets from adjacent layers, using a sigmoid activation function:

$$P(x_i^{(k)} = 1 \mid x^{(k-1)}, x^{(k+1)}) = \left(1 + \exp\left(-\sum_l w_{li}^{(k-1)} x_l^{(k-1)} - \sum_m w_{im}^{(k)} x_m^{(k+1)} - b_i^{(k)}\right)\right)^{-1}. \quad (1)$$

Running the network by switching units on and off in this manner implements Gibbs sampling on a probability distribution determined by an energy function E,

$$P(x) \propto \exp(-E(x)) \quad \text{with} \quad E(x) = \sum_k \left(-x^{(k)T} W^{(k)} x^{(k+1)} - x^{(k)T} b^{(k)}\right). \quad (2)$$

Intuitively speaking, when run the model performs a random walk in the energy landscape shaped during learning, where it is attracted to ravines. Jumping between high-probability modes of the distribution corresponds to traversing from one ravine to another.

2.1 Rates-FPCD, neuronal fatigue and synaptic depression

Unfortunately, for many realistically complex inference tasks MCMC methods such as Gibbs sampling are prone to get stuck in individual modes, resulting in an incomplete exploration of the distribution, and there is much work in machine learning on improving sampling methods. One recently introduced algorithm is rates-FPCD (Rates Fast Persistent Contrastive Divergence) [13], which was utilized to sample from Restricted Boltzmann Machines (RBMs), the two-layer building blocks of DBMs. Rates-FPCD is based on FPCD [16], which is used for training. Briefly, in FPCD one contribution to the weight training updates requires the model to be run continuously and independently of the data to explore the probability distribution as it is currently learned. Here it is important that the model does not get stuck in individual modes.
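To make equation 1 concrete, the per-layer sampling step can be sketched in NumPy (our own minimal illustration with hypothetical shapes, not the authors' code):

```python
import numpy as np

def sample_layer(x_below, x_above, W_below, W_above, b, rng):
    """One Gibbs step for an intermediate DBM layer (cf. equation 1).

    x_below, x_above: binary state vectors of the adjacent layers.
    W_below: weights from the layer below, shape (n_below, n_this).
    W_above: weights to the layer above, shape (n_this, n_above).
    b: biases of this layer, shape (n_this,).
    """
    pre = x_below @ W_below + W_above @ x_above + b   # total input from both layers
    p_on = 1.0 / (1.0 + np.exp(-pre))                 # sigmoid firing probability
    x_new = (rng.random(p_on.shape) < p_on).astype(float)
    return x_new, p_on
```

Sweeping this update over all layers (one up pass and one down pass) performs Gibbs sampling on the distribution defined by equation 2.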
It was found that introducing a fast-changing component to the weights (and biases) to dynamically and temporarily change the energy landscape can alleviate this problem. These fast weights $W_f$, which are added to the actual weights $W$, and the analogous fast biases $b_f^{(k)}$ are updated according to

$$W_f \leftarrow \alpha W_f + \epsilon\left(x^{(0)} p(x^{(1)} \mid x^{(0)})^T - x'^{(0)} x'^{(1)T}\right), \quad (3)$$
$$b_f^{(0)} \leftarrow \alpha b_f^{(0)} + \epsilon\left(x^{(0)} - x'^{(0)}\right), \quad (4)$$
$$b_f^{(1)} \leftarrow \alpha b_f^{(1)} + \epsilon\left(p(x^{(1)} \mid x^{(0)}) - x'^{(1)}\right). \quad (5)$$

Here, the visibles $x^{(0)}$ are clamped to the current data item.¹ $x'^{(0)}$ and $x'^{(1)}$ are current samples from the freely run model. $\epsilon$ is a parameter determining the rate of adaptation, and $\alpha \leq 1$ is a decay parameter that limits the amount of weight change contributed by the fast weights. The second term in each of the parentheses has the effect of changing the weights and biases such that whatever states are currently being sampled by the model are made less likely in the following. Hence, this will eventually ‘push’ the model out of a mode it is stuck in. The first terms in the parentheses are computed over the data and lead to the model being drawn to states supported by the current input. Computation of the first terms in the parentheses in equations 3-5 requires the training data. To turn FPCD into a general sampling algorithm applicable outside of training, when the training data is no longer around, rates-FPCD simply replaces the first terms with the so-called rates, which are the pairwise and unitary statistics averaged over all training data:

$$W_f \leftarrow \alpha W_f + \epsilon\left(\mathbb{E}[x^{(0)} x^{(1)T}] - x'^{(0)} x'^{(1)T}\right), \quad (6)$$
$$b_f^{(0)} \leftarrow \alpha b_f^{(0)} + \epsilon\left(\mathbb{E}[x^{(0)}] - x'^{(0)}\right), \quad (7)$$
$$b_f^{(1)} \leftarrow \alpha b_f^{(1)} + \epsilon\left(\mathbb{E}[x^{(1)}] - x'^{(1)}\right) \quad (8)$$

($x^{(1)}$ is sampled conditioned on the data). The rates are to be computed during training, but can then be used for sampling afterwards. It was found that these terms sufficiently serve to stabilize the sampling scheme, and that rates-FPCD yielded improved performance over Gibbs sampling [13].
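The rates-FPCD update of equations 6-8 amounts to a simple rule; a minimal sketch (our own illustration, with the rate statistics assumed to be precomputed arrays):

```python
import numpy as np

def rates_fpcd_step(Wf, bf0, bf1, rate_W, rate_b0, rate_b1,
                    x0_model, x1_model, alpha=0.95, eps=0.001):
    """One rates-FPCD update of the fast parameters (cf. equations 6-8).

    rate_W, rate_b0, rate_b1: pairwise/unitary statistics averaged over
    the training data (the 'rates').
    x0_model, x1_model: current samples from the freely running model.
    """
    Wf = alpha * Wf + eps * (rate_W - np.outer(x0_model, x1_model))
    bf0 = alpha * bf0 + eps * (rate_b0 - x0_model)
    bf1 = alpha * bf1 + eps * (rate_b1 - x1_model)
    return Wf, bf0, bf1
```

During sampling, the fast parameters are added to the actual weights and biases; the decay parameter alpha bounds how far they can drift from zero.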
Let us consider equations 6-8 from a biological perspective, interpreting the weight parameters as synaptic strengths and the biases as some overall excitability level of a neuron. The equations suggest that the capability of the network to explore the state space is improved by dynamically adjusting the neuron’s parameters (cf. e.g. [17]) depending on the current states of the neuron and its connected partners (second terms in parentheses), drawing them towards some set values (first terms, the rate statistics). All that is needed for the latter is that the neuron stores its average firing activity during learning (for the bias statistics) and the synapses remember some average firing correlation between connected neurons (for the weight statistics). In particular, if activation patterns in the network are sparse and neurons are off most of the time, then these average terms will be rather low. During inference,² the neuron will fire strongly for its preferred stimulus (or stimulus interpretation), but then its firing probability will decrease as its excitability and synaptic efficacy drop, allowing the network to discover potential alternative interpretations of the stimulus. Thus, in the case of sparse activity, equations 6-8 implement a form of neuronal fatigue and synaptic depression. Preceding the introduction of rates-FPCD as a sampling algorithm, we also utilized the same mechanism (but only applied to the biases) in a biological model of hallucinations [11] to model homeostatic [18] regulation of neuronal firing. We showed how it helps to make the system more robust against noise corruption in the input, though it can lead to hallucinations under total sensory deprivation. Hence, the same underlying mechanisms could either be understood as short-term neuronal adaptation or longer-term homeostatic regulation, depending on the time scales involved.
¹In practice, minibatches are used.
²Applied in a DBM, not an RBM; see next section.

3 Experiments: Necker cube

We trained a DBM on binary images of cubes at various locations, representing the two unambiguous interpretations of the Necker cube, and then tested the model on the actual, ambiguous Necker cube (Figure 1a). We use a similar setup³ to that described in [11, 12], with localized receptive fields the size of which increased from lower to higher hidden layers, and sparsity encouraged simply by initializing the biases to negative values in training. As in the aforementioned studies, we are interested in what is inferred in the hidden layers as the image is presented in the visibles, and ‘decode’ the hidden states by computing a reconstructed image for each hidden layer. To this end, starting with the states of the hidden layer of interest, the activations (i.e. firing probabilities) in each subsequent lower layer are computed deterministically in a single top-down pass, doubling the weights to compensate for the lack of bottom-up input, until a reconstructed image is obtained in the visibles. In this way, the reconstructed image is determined by the states in the initial layer alone, independently of the actual current states in the other layers.

Figure 1: (a): Examples of the unambiguous training images (left) and the ambiguous test images (right). (b): During inference on an ambiguous image, the decoded hidden states reveal perceptual switching resulting from neuronal adaptation. Four consecutive sampling cycles are shown.

When presented with a Necker cube image, the hidden states were found to converge within a few sampling cycles (each consisting of one up and one down pass of sampling all hidden layers) to one of the unambiguous interpretations and remained therein, exhibiting no perceptual switching to the respective alternative interpretation.⁴ We then employed rates-FPCD to model neuronal adaptation.⁵ It should be noted that unlike in [13], we utilize it in a DBM rather than an RBM, and during inference instead of when generating data samples (i.e. in our case the visibles are always clamped to an image). The rate statistics were computed by measuring unit activities and pairwise correlations when the trained model was run on the training data. With neuronal adaptation, the internal representations as decoded from the hidden layer were found to switch over time between the two image interpretations; thus the model exhibited perceptual bistability. An example of the switching of internal representations is displayed in Figure 1b. It can be observed that the perceptual state is most distinct in higher layers.

For quantitative analysis, we computed the squared reconstruction error of the image decoded from the topmost layer with regard to either of the two image interpretations. Plotted against time (Figure 2a), this shows how the internal representations evolve during a trial. The representations match one of the two image interpretations in a relatively stable manner over several sampling cycles, with some degradation before and a short transition phase during a perceptual switch.

³Images of 28x28 pixels, three hidden layers with 26x26 units each. Pretraining of the layers with CD-1, no training of the full DBM.
⁴It should be noted that the behavior of the network will depend heavily on the specifics of the training and the data set used. We employed only the most simple training methods – layer-wise pre-training with CD-1 and no tuning of the full DBM – and do not claim that more advanced methods could not lead to better sampling behavior, especially for this simple toy data. Indeed, using PCD instead we found some spontaneous switching, though reconstructions were noisy. But for the argument at hand it is more important that in general, bad mixing with these models can be a problem that might be alleviated by methods such as rates-FPCD, hence using a setup that exhibits this problem is useful to make the point.
⁵α = 0.95, ϵ = 0.001 for the Necker cube; α = 0.9, ϵ = 0.002 for binocular rivalry (Section 4).

Figure 2: (a): Time course of squared reconstruction errors of the decoded topmost hidden states w.r.t. either of the two image interpretations. Apart from during the transition periods, the percept at any point matches one (close to zero error) but not the other interpretation (high error). (b): Activation (i.e. firing probability) and mean synaptic strength (arbitrary origin and units) of a top layer unit that participates in coding for one but not the other interpretation (dashed line marks the currently active interpretation). Depression and recovery of synaptic efficacy during instantiation of the preferred and non-preferred interpretations, respectively, lead to changes in activation that precede the next perceptual switch.

To examine the effects of adaptation on an individual neuron, we picked a unit in the top layer that showed high variance in both its activity levels and neuronal parameters as they changed over the trial, indicating that this unit was involved in coding for one but not the other image interpretation. In Figure 2b are plotted the time course of its activity levels (i.e.
firing probability according to equation 1) and the mean synaptic efficacy, i.e. weight strength, of connections to this unit.6 As expected, the firing probability of this unit is close to one for one of the interpretations and close to zero for the other, especially in the initial time period after a perceptual switch. However, as the neuron’s firing rate and synaptic activity deviate from their low average levels, the synaptic efficacy changes as shown in the plot. For example, during instantiation of the preferred stimulus interpretation, the drop of neuronal excitability ultimately leads to a waning of activity that precedes and, together with the changes in the overall network, subsequently triggers the next perceptual switch. For another trial where we used an image of the Necker cube in a different position, the same unit showed constant low firing rates, indicating that it was not involved in representing that image. The neuronal parameters were then found to be stable throughout the trial, after a slight initial monotonic change that would allow the neuron to assume its low baseline activity as determined by the rate statistics. Moreover, other units were found to have relatively stable high firing rate for a given image throughout the trial, coding for features of the stimulus that were common to both image interpretations, even though their neuronal parameters equally adapted due to their elevated activity. This is due to the extent of adaptation being limited by the decay parameter α (equations 6-8), and shows that the adaptation can be set to be sufficiently strong to allow for exploration of the posterior, without overwhelming the representations of unambiguous image features. Similarly, we note that internal representations of the model when presented with the unambiguous images from the training set were stable under adaptation with our setting of parameter values. 
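The top-down decoding used in this section (a single deterministic pass with doubled weights for intermediate layers) might be sketched as follows, under our own storage conventions: weights[k] connects layer k to layer k+1, layer 0 being the visibles.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def decode(x_hidden, layer_index, weights, biases):
    """Deterministically decode the states of one hidden layer into an image.

    Intermediate layers would normally also receive bottom-up input, so
    their top-down weights are doubled to compensate; the visible layer
    receives top-down input only and needs no doubling.
    """
    act = x_hidden
    for k in range(layer_index - 1, 0, -1):
        act = sigmoid(2.0 * (weights[k] @ act) + biases[k])
    return sigmoid(weights[0] @ act + biases[0])   # reconstructed image
```

The reconstruction thus depends only on the states of the chosen layer, not on the current states of any other layer.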
We also quantified the statistics of perceptual switching by measuring the length of time the model’s state would stay in either of the two interpretations for one of the test images. The resulting histograms of percept durations, i.e. time intervals between switches, are displayed in Figure 3a separately for the two interpretations. They are shaped like gamma or log-normal distributions, qualitatively in agreement with experimental results in human subjects [19]. There is a bias apparent in the model towards one of the interpretations (different for different images). Some biases are observed in humans (as visible in the data in [4]), potentially induced by statistical properties of the environment. However, our data set did not involve any biases, so this seems to be merely an artifact produced by the (basic) training procedure used.

⁶The changes to weights and biases are equivalent, so we show only the former.

Figure 3: (a): Histograms over percept durations between perceptual switches, for either interpretation (left and right, respectively) of one of the test images. Ignoring the peaks at small interval lengths, which stem from fluctuations during transitions, the histograms are very well fitted by log-normal distributions (black curves, omitted in the right figure to avoid clutter). Also plotted in both figures are histograms with spatial attention employed (see Section 3.1) on one of the interior corners of the Necker cube (as shown in (b)). The distributions shift or remain unchanged depending on whether the attended corner is salient or not for the image interpretation in question.
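Extracting percept durations from the per-cycle reconstruction errors amounts to run-length encoding the winning interpretation; a small sketch (our own, not the authors' analysis code):

```python
import numpy as np

def percept_durations(err_a, err_b):
    """Durations (in sampling cycles) of each percept.

    At each cycle the percept is taken to be the interpretation with the
    smaller reconstruction error; durations are lengths of constant runs.
    Returns (durations of percept A, durations of percept B).
    """
    percept = (np.asarray(err_a) < np.asarray(err_b)).astype(int)
    switches = np.flatnonzero(np.diff(percept)) + 1   # indices where percept flips
    bounds = np.concatenate(([0], switches, [len(percept)]))
    runs = np.diff(bounds)                            # run lengths
    labels = percept[bounds[:-1]]                     # which percept each run is
    return runs[labels == 1], runs[labels == 0]
```

Histogramming the returned durations for each interpretation separately would yield plots of the kind shown in Figure 3a.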
3.1 The role of spatial attention

The statistics of multistable perception can be influenced voluntarily by human subjects [20]. For the Necker cube, overtly directing one’s gaze to corners of the cube, especially the interior ones, can have a biasing effect [21]. This could be explained by these features being in some way more salient for either of the two interpretations. An explanation matching our (simplified) setup would be that opaque cubes (as used in training) uniquely match one of the interpretations and lack one of the two interior corners. In the following, we model not eye movements but covert attention, involving only the shifting of an internal attentional ‘spotlight’, which has also been shown to affect perceptual switching in the Necker cube [22].⁷ The presented image remained unchanged, and a spatial spotlight that biased the internal representations of the model was employed in the first hidden layer. To implement the spotlight, we made use of the fact that receptive fields were topographically organized, and that sparsity in a DBM breaks the symmetry between units being on and off and makes it possible to suppress represented information by suppressing the activity of specific hidden units [12]. We used a Gaussian-shaped spotlight that was centered on one of the salient interior corners of the Necker cube (Figure 3b) and applied it to the hidden units as additional negative biases, attenuating activity further away from the focus. The effect of attention on the percept durations for one of the test images is displayed in Figure 3a, together with the data obtained without attention for comparison. For the interpretation that matched the corner that was attended, we found a shift towards longer percept durations (Figure 3a, left), whereas the distribution for the other interpretation was relatively unchanged (Figure 3a, right).
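The Gaussian spotlight applied as additional negative biases could be sketched like this (grid shape, focus, and parameter values are our own assumptions, not those used in the paper):

```python
import numpy as np

def spotlight_biases(grid_shape, focus, sigma=4.0, strength=2.0):
    """Additional negative biases implementing a spatial attention spotlight.

    Hidden units (arranged topographically on a grid) far from the focus
    receive strong negative biases, attenuating their activity; units at
    the focus are unaffected.
    """
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    d2 = (ys - focus[0]) ** 2 + (xs - focus[1]) ** 2
    spotlight = np.exp(-d2 / (2.0 * sigma ** 2))   # 1 at the focus, -> 0 far away
    return -strength * (1.0 - spotlight)
```

The returned array would simply be added to the biases of the first hidden layer during sampling.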
Averaged over all test images, the mean interval spent representing the interpretation favored by spatial attention saw a 25% increase, vs. approximately no change for the other interpretation. Hence, in the model spatial attention prolongs the percept whose salient feature is being attended. This seems to be qualitatively in line with experimental data, at least in terms of voluntary attention having an effect, although specifics can depend on the nature of the stimulus and the details of the instructions given to experimental subjects [23].

⁷We did not find an experimental study examining covert attention on the interior corners in unmodified Necker cubes, which is what we simulate.

4 Experiments: binocular rivalry

Several related studies that considered perceptual multistability in the light of probabilistic inference focused on binocular rivalry [2, 5, 6]. There, human observers are presented with a different image to each eye, and their perception is found to switch between the two images. Depending on specifics such as size and content of the images, perception can switch completely between the two images, fuse them, or do either to varying degrees over time [24, 25]. We demonstrate with a simple experiment that the phenomenon of binocular rivalry can be addressed in our framework as well. To this end, the same model architecture as before was used, but the number of visible units was doubled and the units were separated into left and right ‘eyes’. During training, both sets of visibles simply received the same images. During testing however, the left and right halves were set to independently drawn training images to simulate the binocular rivalry experiment. The units in the first hidden layer were set to be monocular in the sense that their receptive fields covered visible units only in either the left or right half, whereas higher layers did not make this distinction. As a data set we used images containing either vertical or horizontal bars (Figure 4).

Figure 4: Example images for the binocular rivalry experiment. Training images (left) contained either horizontal or vertical bars, and the left and right image halves were identical (corresponding to the left and right ‘eyes’). For the test images (right), the left and right halves are drawn independently. They could come from the same category (top and bottom examples) or from conflicting categories (middle example).

Figure 5: For binocular rivalry, displayed are the squared reconstruction errors for decoded top layer representations computed against either of the two input images. (a): The input images came from the same category (here, vertical bars), and fusing of the percept was prominent, resulting in modest, similar errors for both images. (b): For input images from conflicting categories, the percept alternated more strongly between the images, although intermediate, fused states were still more prevalent than was the case for the Necker cube. The step-like changes in the error were found to result from individual bars appearing and disappearing in the percept.

As with the Necker cube, perceptual switching was observed with adaptation but not without. Generally, the perceptual state was found to be biased to one of the two images for some periods, while fusing the images to some extent during transition phases (Figure 5).
Interestingly, whether fusing or alternation was more prominent depended on the nature of the conflict in the two input images: For images from the same category (both vertical or horizontal lines), fusing occurred more often (Figure 5a), whereas for images from conflicting categories, the percept represented more distinctly either image and fusing happened primarily in transition periods (Figure 5b). We quantified this by computing the reconstruction errors from the decoded hidden states with regard to the two images, and taking the absolute difference averaged over the trial as a measure of how much the internal states were representing both images individually rather than fused versions. We found that this measure was more than two times higher for conflicting categories. This result is qualitatively in line with psychophysical experiments that showed fusing for differing but compatible images (e.g. different patches of the same source image) [24, 25].

5 Related work and discussion

Our study contributes to the emerging trend in computational neuroscience to consider approximate probabilistic inference in the brain (e.g. [9, 10]), and complements several recent papers that examine perceptual multistability in this light. Gershman et al. [6] argued for interpreting multistability as inference based on MCMC, focusing on binocular rivalry only. Importantly, they use Markov random fields as a high-level description of the perceptual problem itself (two possible ‘causes’ generating the image, with a topology matching the stimulus). They argue that the brain might implement MCMC inference over these external variables, but do not make any statement w.r.t. the underlying neural mechanisms. In contrast, in our model MCMC is performed over the internal, neurally embodied latent variables that were learned from data. Bistability results from bimodality in the learned high-dimensional hidden representations, rather than directly from the problem formulation.
In another study, Sundareswara and Schrater [4] model perceptual switching for the Necker cube, including the influence of image context, which we could explore in future work. Similar to [6], they start from a high-level description of the problem. They design a custom abstract inference process that makes different predictions from our model: In their model, samples are drawn i.i.d. from the two posterior modes representing the two interpretations and are accumulated over time, with older samples being exponentially discounted. A separate decision process selects from the samples and determines what interpretation reaches awareness. In our model, the current conscious percept is simply determined by the current overall state of the network, and the switching dynamics are a direct result of how this state evolves over time (as in [6]). Hohwy et al. [5] explain binocular rivalry descriptively in their predictive coding framework. They identify switching with exploration in an energy landscape, and suggest the contribution of stochasticity or adaptation, but they do not make the connection to sampling and do not provide a computational model. The work by Grossberg and Swaminathan [8] is an example of a non-probabilistic model of, among other things, Necker cube bistability, providing much biological detail, and considering the role of spatial attention. Their study is also an instance of an approach that bases the switching on neuronal adaptation, but does not see a functional role for multistability as such, relegating instead the functional relevance of adaptation to a role it plays during learning only. Similarly, in earlier work Dayan [2] utilizes an ad-hoc adaptation process in a deterministic probabilistic model of binocular rivalry. He suggests sampling could provide stochasticity, wondering about the relation between sampling and adaptation. This is what we have addressed here.
Indeed, our approach is supported by recent psychophysics results [26], which indicate that both noise and neuronal adaptation are necessary to explain binocular rivalry. We note that our setup is of course a simplification and abstraction in that we do not explicitly model depth. Indeed, in perceiving the Necker cube one does not see the actually opaque cubes we used in training, but rather a 3D wireframe cube. Peculiarly, this is actually contrary to the depth information available, as a (2D) image of a cube is not actually a 3D cube, but a collection of lines on a flat surface. How is a paradoxically ‘flat 3D cube’ represented in the brain? In a hierarchical architecture consisting of specialized areas, this might be realized by having a high-level area that codes for objects (e.g. area IT in the cortex) represent a 3D cube, whereas another area that is primarily involved with depth as such represents a flat surface. Our work here and earlier [11, 12] showed that in a DBM, different hidden layers can represent different and partially conflicting information (cf. Figure 1b). Finally, we also note that in preliminary experiments with depth information (using real-valued visibles) perceptual switching did still occur. In conclusion, we provided a biological interpretation of rates-FPCD, and thus showed how two seemingly distinct explanations for perceptual multistability, probabilistic inference and neuronal adaptation, can be merged in one framework. Unlike other approaches, our account combines sampling-based inference and adaptation in a concrete neural architecture utilizing learned representations of images. Moreover, our study further demonstrates the relevance of DBMs as cortical models [11, 12]. We believe that further developing hybrid approaches – combining probabilistic models, dynamical systems, and classic connectionist networks – will help identify the neural substrate of the Bayesian brain hypothesis.
Acknowledgments Supported by the EPSRC, MRC and BBSRC. We thank N. Heess and the reviewers for comments. References [1] Leopold and Logothetis (1999) Multistable phenomena: changing views in perception. Trends in Cognitive Sciences, 3, 254–264, PMID: 10377540. [2] Dayan, P. (1998) A hierarchical model of binocular rivalry. Neural Computation, 10, 1119–1135. [3] van Ee, R., Adams, W. J., and Mamassian, P. (2003) Bayesian modeling of cue interaction: bistability in stereoscopic slant perception. Journal of the Optical Society of America A, 20, 1398–1406. [4] Sundareswara, R. and Schrater, P. R. (2008) Perceptual multistability predicted by search model for Bayesian decisions. Journal of Vision, 8, 1–19. [5] Hohwy, J., Roepstorff, A., and Friston, K. (2008) Predictive coding explains binocular rivalry: An epistemological review. Cognition, 108, 687–701. [6] Gershman, S., Vul, E., and Tenenbaum, J. (2009) Perceptual multistability as Markov chain Monte Carlo inference. Advances in Neural Information Processing Systems 22. [7] Blake, R. (1989) A neural theory of binocular rivalry. Psychological Review, 96, 145–167, PMID: 2648445. [8] Grossberg, S. and Swaminathan, G. (2004) A laminar cortical model for 3D perception of slanted and curved surfaces and of 2D images: development, attention, and bistability. Vision Research, 44, 1147–1187. [9] Fiser, J., Berkes, P., Orban, G., and Lengyel, M. (2010) Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14, 119–130. [10] Vul, E., Goodman, N. D., Griffiths, T. L., and Tenenbaum, J. B. (2009) One and done? Optimal decisions from very few samples. Proceedings of the 31st Annual Conference of the Cognitive Science Society. [11] Reichert, D. P., Seriès, P., and Storkey, A. J. (2010) Hallucinations in Charles Bonnet Syndrome induced by homeostasis: a Deep Boltzmann Machine model. Advances in Neural Information Processing Systems 23, 2020–2028.
[12] Reichert, D. P., Seriès, P., and Storkey, A. J. (2011) A hierarchical generative model of recurrent ObjectBased attention in the visual cortex. Proceedings of the International Conference on Artificial Neural Networks (ICANN-11). [13] Breuleux, O., Bengio, Y., and Vincent, P. (2011) Quickly generating representative samples from an RBM-Derived process. Neural Computation, pp. 1–16. [14] Welling, M. (2009) Herding dynamical weights to learn. Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, Quebec, Canada, pp. 1121–1128, ACM. [15] Salakhutdinov, R. and Hinton, G. (2009) Deep Boltzmann machines. Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 5, pp. 448–455. [16] Tieleman, T. and Hinton, G. (2009) Using fast weights to improve persistent contrastive divergence. Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, Quebec, Canada, pp. 1033–1040, ACM. [17] Maass, W. and Zador, A. M. (1999) Dynamic stochastic synapses as computational units. Neural Computation, 11, 903–917. [18] Turrigiano, G. G. (2008) The self-tuning neuron: synaptic scaling of excitatory synapses. Cell, 135, 422– 435, PMID: 18984155. [19] Zhou, Y. H., Gao, J. B., White, K. D., Yao, K., and Merk, I. (2004) Perceptual dominance time distributions in multistable visual perception. Biological Cybernetics, 90, 256–263. [20] Meng, M. and Tong, F. (2004) Can attention selectively bias bistable perception? differences between binocular rivalry and ambiguous figures. Journal of Vision, 4. [21] Toppino, T. C. (2003) Reversible-figure perception: mechanisms of intentional control. Perception & Psychophysics, 65, 1285–1295, PMID: 14710962. [22] Peterson, M. A. and Gibson, B. S. (1991) Directing spatial attention within an object: Altering the functional equivalence of shape descriptions. Journal of Experimental Psychology: Human Perception and Performance, 17, 170–182. 
[23] van Ee, R., Noest, A. J., Brascamp, J. W., and van den Berg, A. V. (2006) Attentional control over either of the two competing percepts of ambiguous stimuli revealed by a two-parameter analysis: means do not make the difference. Vision Research, 46, 3129–3141, PMID: 16650452. [24] Tong, F., Meng, M., and Blake, R. (2006) Neural bases of binocular rivalry. Trends in Cognitive Sciences, 10, 502–511. [25] Knapen, T., Kanai, R., Brascamp, J., van Boxtel, J., and van Ee, R. (2007) Distance in feature space determines exclusivity in visual rivalry. Vision Research, 47, 3269–3275, PMID: 17950397. [26] Kang, M. and Blake, R. (2010) What causes alternations in dominance during binocular rivalry? Attention, Perception, & Psychophysics, 72, 179–186.
Reinforcement Learning using Kernel-Based Stochastic Factorization André M. S. Barreto School of Computer Science McGill University Montreal, Canada amsb@cs.mcgill.ca Doina Precup School of Computer Science McGill University Montreal, Canada dprecup@cs.mcgill.ca Joelle Pineau School of Computer Science McGill University Montreal, Canada jpineau@cs.mcgill.ca Abstract Kernel-based reinforcement learning (KBRL) is a method for learning a decision policy from a set of sample transitions which stands out for its strong theoretical guarantees. However, the size of the approximator grows with the number of transitions, which makes the approach impractical for large problems. In this paper we introduce a novel algorithm to improve the scalability of KBRL. We resort to a special decomposition of a transition matrix, called stochastic factorization, to fix the size of the approximator while at the same time incorporating all the information contained in the data. The resulting algorithm, kernel-based stochastic factorization (KBSF), is much faster but still converges to a unique solution. We derive a theoretical upper bound for the distance between the value functions computed by KBRL and KBSF. The effectiveness of our method is illustrated with computational experiments on four reinforcement-learning problems, including a difficult task in which the goal is to learn a neurostimulation policy to suppress the occurrence of seizures in epileptic rat brains. We empirically demonstrate that the proposed approach is able to compress the information contained in KBRL’s model. Also, on the tasks studied, KBSF outperforms two of the most prominent reinforcement-learning algorithms, namely least-squares policy iteration and fitted Q-iteration. 1 Introduction Recent years have witnessed the emergence of several reinforcement-learning techniques that make it possible to learn a decision policy from a batch of sample transitions.
Among them, Ormoneit and Sen’s kernel-based reinforcement learning (KBRL) stands out for two reasons [1]. First, unlike other approximation schemes, KBRL always converges to a unique solution. Second, KBRL is consistent in the statistical sense, meaning that adding more data always improves the quality of the resulting policy and eventually leads to optimal performance. Despite its nice theoretical properties, KBRL has not been widely adopted by the reinforcement-learning community. One possible explanation for this is its high computational complexity. As discussed by Ormoneit and Glynn [2], KBRL can be seen as the derivation of a finite Markov decision process whose number of states coincides with the number of sample transitions collected to perform the approximation. This gives rise to a dilemma: on the one hand one wants as much data as possible to describe the dynamics of the decision problem, but on the other hand the number of transitions should be small enough to allow for the numerical solution of the resulting model. In this paper we describe a practical way of weighing the relative importance of these two conflicting objectives. We rely on a special decomposition of a transition matrix, called stochastic factorization, to rewrite it as the product of two stochastic matrices of smaller dimension. As we will see, the stochastic factorization possesses a very useful property: if we swap its factors, we obtain another transition matrix which retains some fundamental characteristics of the original one. We exploit this property to fix the size of KBRL’s model. The resulting algorithm, kernel-based stochastic factorization (KBSF), is much faster than KBRL but still converges to a unique solution. We derive a theoretical bound on the distance between the value functions computed by KBRL and KBSF.
We also present experiments on four reinforcement-learning domains, including the double pole-balancing task, a difficult control problem representative of a wide class of unstable dynamical systems, and a model of epileptic rat brains in which the goal is to learn a neurostimulation policy to suppress the occurrence of seizures. We empirically show that the proposed approach is able to compress the information contained in KBRL’s model, outperforming both the least-squares policy iteration algorithm and fitted Q-iteration on the tasks studied [3, 4]. 2 Background The KBRL algorithm solves a continuous state-space Markov decision process (MDP) using a finite model approximation. A finite MDP is defined by a tuple M ≡ (S, A, Pa, ra, γ) [5]. The finite sets S and A are the state and action spaces. The matrix Pa ∈ R^{|S|×|S|} gives the transition probabilities associated with action a ∈ A and the vector ra ∈ R^{|S|} stores the corresponding expected rewards. The discount factor γ ∈ [0, 1) gives smaller weights to rewards received further in the future. In the case of a finite MDP, we can use dynamic programming to find an optimal decision policy π* ∈ A^{|S|} in polynomial time [5]. As is well known, this is done using the concept of a value function. Throughout the paper, we use v ∈ R^{|S|} to denote the state-value function and Q ∈ R^{|S|×|A|} to refer to the action-value function. Let the operator Γ : R^{|S|×|A|} → R^{|S|} be given by ΓQ = v, with v_i = max_j q_ij, and define Δ : R^{|S|} → R^{|S|×|A|} as Δv = Q, where the a-th column of Q is given by qa = ra + γPav. A fundamental result in dynamic programming states that, starting from v^(0) = 0, the expression v^(t) = ΓΔv^(t−1) gives the optimal t-step value function, and as t → ∞ the vector v^(t) approaches v*, from which any optimal decision policy π* can be derived [5].
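As a concrete illustration, the dynamic-programming recursion v^(t) = ΓΔv^(t−1) can be sketched for a tiny finite MDP; the two-state, two-action transition probabilities and rewards below are invented purely for illustration:

```python
# Minimal sketch of value iteration v(t) = Gamma Delta v(t-1) on a tiny
# finite MDP; transition numbers and rewards are made up for illustration.

GAMMA = 0.9
P = {  # P[a][i][j]: probability of moving from state i to j under action a
    0: [[0.8, 0.2], [0.3, 0.7]],
    1: [[0.1, 0.9], [0.5, 0.5]],
}
R = {0: [1.0, 0.0], 1: [0.0, 2.0]}  # expected reward in state i under action a

def bellman_step(v):
    # Delta: build Q from v; Gamma: collapse Q back to v by maximizing over a
    q = {a: [R[a][i] + GAMMA * sum(P[a][i][j] * v[j] for j in range(len(v)))
             for i in range(len(v))] for a in P}
    return [max(q[a][i] for a in q) for i in range(len(v))], q

v = [0.0, 0.0]
for _ in range(1000):
    v, q = bellman_step(v)
policy = [max(q, key=lambda a: q[a][i]) for i in range(2)]  # greedy policy
```

Because γ < 1 the update is a contraction, so the iterates approach the fixed point v* regardless of the starting vector, and the greedy policy with respect to the final Q is optimal.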
Consider now an MDP with continuous state space S ⊂ R^d and let Sa = {(sa_k, ra_k, ŝa_k) | k = 1, 2, ..., na} be a set of sample transitions associated with action a ∈ A, where sa_k, ŝa_k ∈ S and ra_k ∈ R. The model constructed by KBRL has the following transition and reward functions: P̂a(sj | si) = κa(si, sa_k) if sj = ŝa_k, and 0 otherwise; R̂a(si, sj) = ra_k if sj = ŝa_k, and 0 otherwise, where κa(·, sa_k) is a weighting kernel centered at sa_k and defined in such a way that ∑_{k=1}^{na} κa(si, sa_k) = 1 for all si ∈ S (for example, κa can be a normalized Gaussian function; see [1] and [2] for a formal definition and other examples of valid kernels). Since only transitions ending in the states ŝa_k have a non-zero probability of occurrence, one can solve a finite MDP M̂ whose state space is composed solely of these n = ∑_a na states [2, 6]. After the optimal value function of M̂ has been found, the value of any state si ∈ S can be computed as Q(si, a) = ∑_{k=1}^{na} κa(si, sa_k) [ra_k + γ V̂*(ŝa_k)]. Ormoneit and Sen [1] proved that, if na → ∞ for all a ∈ A and the widths of the kernels κa shrink at an “admissible” rate, the probability of choosing a suboptimal action based on Q(si, a) converges to zero. As discussed in the introduction, the problem with the practical application of KBRL is that, as n increases, so does the cost of solving the MDP derived by this algorithm. To alleviate this problem, Jong and Stone [6] propose growing the set of sample transitions incrementally, using a prioritized-sweeping approach to guide the exploration of the state space. In this paper we present a new method for addressing this problem, using stochastic factorization. 3 Stochastic factorization A stochastic matrix has only non-negative elements and each of its rows sums to 1.
That said, we can introduce the concept that will serve as a cornerstone for the rest of the paper: Definition 1 Given a stochastic matrix P ∈ R^{n×p}, the relation P = DK is called a stochastic factorization of P if D ∈ R^{n×m} and K ∈ R^{m×p} are also stochastic matrices. The integer m > 0 is the order of the factorization. This mathematical concept has been explored before. For example, Cohen and Rothblum [7] briefly discuss it as a special case of non-negative matrix factorization, while Cutler and Breiman [8] focus on slightly modified versions of the stochastic factorization for statistical data analysis. However, in this paper we will focus on a useful property of this type of factorization that seems to have passed unnoticed thus far. We call it the “stochastic-factorization trick”: Given a stochastic factorization of a square matrix, P = DK, swapping the factors yields another transition matrix P̄ = KD, potentially much smaller than the original, which retains the basic topology and properties of P. The stochasticity of P̄ follows immediately from the same property of D and K. What is perhaps more surprising is the fact that this matrix shares some fundamental characteristics with the original matrix P. Specifically, it is possible to show that: (i) for each recurrent class in P there is a corresponding class in P̄ with the same period and, given some simple assumptions about the factorization, (ii) P is irreducible if and only if P̄ is irreducible and (iii) P is regular if and only if P̄ is regular (see [9] for details). Given the strong connection between P ∈ R^{n×n} and P̄ ∈ R^{m×m}, the idea of replacing the former by the latter comes almost inevitably. The motivation for this would be, of course, to save computational resources when m < n. In this paper we are interested in exploiting the stochastic-factorization trick to reduce the computational cost of dynamic programming.
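A quick numerical sketch of the trick (the matrices below are invented for illustration): with a stochastic D (n × m) and K (m × n), both the product P = DK and the swapped product P̄ = KD are again stochastic matrices:

```python
# Sketch of the stochastic-factorization trick with made-up matrices:
# P = D K is n x n (here n = 3); swapping the factors gives the smaller
# m x m (here m = 2) matrix K D, which is again a transition matrix.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

D = [[1.0, 0.0], [0.2, 0.8], [0.0, 1.0]]  # 3 x 2, rows sum to 1
K = [[0.5, 0.5, 0.0], [0.0, 0.3, 0.7]]    # 2 x 3, rows sum to 1

P = matmul(D, K)      # 3 x 3 transition matrix
P_bar = matmul(K, D)  # 2 x 2 "swapped" transition matrix

for row in P + P_bar:
    assert all(x >= 0 for x in row)          # non-negative entries
    assert abs(sum(row) - 1.0) < 1e-12       # rows sum to 1
```

Here n = 3 and m = 2, so P̄ describes a smaller chain; this is exactly the size reduction the paper exploits for dynamic programming.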
The idea is straightforward: given stochastic factorizations of the transition matrices Pa, we can apply our trick to obtain a reduced MDP that will be solved in place of the original one. In the most general scenario, we would have one independent factorization Pa = DaKa for each action a ∈ A. However, in the current work we will focus on a particular case which will prove to be convenient both mathematically and computationally. When all factorizations share the same matrix D, it is easy to derive theoretical guarantees regarding the quality of the solution of the reduced MDP:

Proposition 1 Let M ≡ (S, A, Pa, ra, γ) be a finite MDP with |S| = n and 0 ≤ γ < 1. Let DKa = Pa be |A| stochastic factorizations of order m and let r̄a be vectors in R^m such that D r̄a = ra for all a ∈ A. Define the MDP M̄ ≡ (S̄, A, P̄a, r̄a, γ), with |S̄| = m and P̄a = KaD. Then,

∥v* − ṽ∥∞ ≤ [C̄ / (1 − γ)²] max_i (1 − max_j d_ij),   (1)

where ṽ = ΓD Q̄*, C̄ = max_{a,k} r̄a_k − min_{a,k} r̄a_k, and ∥·∥∞ is the maximum norm.

Proof. Since ra = D r̄a and D P̄a = DKaD = PaD for all a ∈ A, the stochastic matrix D satisfies Sorg and Singh’s definition of a soft homomorphism between M and M̄ (see equations (25)–(28) in [10]). Applying Theorem 1 by the same authors, we know that ∥Γ(Q* − D Q̄*)∥∞ ≤ (1 − γ)^{−1} sup_{i,t} (1 − max_j d_ij) δ̄^(t)_i, where δ̄^(t)_i = max_{j: d_ij > 0, k} q̄^(t)_jk − min_{j: d_ij > 0, k} q̄^(t)_jk and the q̄^(t)_jk are elements of the optimal t-step action-value function of M̄, Q̄^(t) = Δv̄^(t−1). In order to obtain our bound, we note that ∥ΓQ* − ΓD Q̄*∥∞ ≤ ∥Γ(Q* − D Q̄*)∥∞ and, for all t > 0, δ̄^(t)_i ≤ (1 − γ)^{−1} (max_{a,k} r̄a_k − min_{a,k} r̄a_k). □

Proposition 1 elucidates the basic mechanism through which one can exploit the stochastic-factorization trick to reduce the number of states in an MDP. However, in order to apply this idea in practice, one must actually compute the factorizations. This computation can be expensive, exceeding the computational effort necessary to calculate v* [11, 9]. In the next section we discuss a strategy to reduce the computational cost of the proposed approach. 4 Kernel-based stochastic factorization In Section 2 we presented KBRL, an approximation scheme for reinforcement learning whose main drawback is its high computational complexity. In Section 3 we discussed how the stochastic-factorization trick can in principle be useful to reduce an MDP, as long as one circumvents the computational burden imposed by the calculation of the matrices involved in the process. We now show how to leverage these two components to produce an algorithm called kernel-based stochastic factorization (KBSF) that overcomes these computational limitations. As outlined in Section 2, KBRL defines the probability of a transition from state ŝb_i to state ŝa_k via kernel averaging, formally denoted κa(ŝb_i, sa_k), where a, b ∈ A. So for each action a ∈ A, the state ŝb_i has an associated stochastic vector p̂a_j ∈ R^{1×n} whose non-zero entries correspond to the function κa(ŝb_i, ·) evaluated at sa_k, k = 1, 2, ..., na. Recall that we are dealing with a continuous state space, so it is possible to compute an analogous vector for any si ∈ S. Therefore, we can link each state of the original MDP with |A| n-dimensional stochastic vectors. The core strategy of KBSF is to find a set of m representative states associated with vectors ka_i ∈ R^{1×n} whose convex combination can approximate the rows of the corresponding P̂a.
KBRL’s matrices P̂a have a very specific structure, since only transitions ending in states ŝa_i associated with action a have a non-zero probability of occurrence. Suppose now we want to apply the stochastic-factorization trick to KBRL’s MDP. Assuming that the matrices Ka have the same structure as P̂a, when computing P̄a = KaD we only have to look at the submatrices of Ka and D corresponding to the na non-zero columns of Ka. We call these matrices K̇a ∈ R^{m×na} and Ḋa ∈ R^{na×m}. Let {s̄1, s̄2, ..., s̄m} be a set of representative states in S. KBSF computes matrices Ḋa and K̇a with elements ḋa_ij = κ̄(ŝa_i, s̄j) and k̇a_ij = κa(s̄i, sa_j), where κ̄ is also a kernel. Obviously, once we have Ḋa and K̇a it is trivial to compute D and Ka. Depending on how the states s̄i and the kernels κ̄ are defined, we have DKa ≈ P̂a for all a ∈ A. The important point here is that the matrices Pa = DKa are never actually computed; instead we solve an MDP with m states whose dynamics are given by P̄a = KaD = K̇aḊa. Algorithm 1 gives a step-by-step description of KBSF.

Algorithm 1 KBSF
Input: Sa for all a ∈ A, m
Select a set of representative states {s̄1, s̄2, ..., s̄m}
for each a ∈ A do
  Compute matrix Ḋa: ḋa_ij = κ̄(ŝa_i, s̄j)
  Compute matrix K̇a: k̇a_ij = κa(s̄i, sa_j)
  Compute vector r̄a: r̄a_i = ∑_j k̇a_ij ra_j
end for
Solve M̄ ≡ (S̄, A, P̄a, r̄a, γ), with P̄a = K̇aḊa
Return ṽ = ΓD Q̄*, where D⊺ = [Ḋa1⊺ Ḋa2⊺ ... Ḋa|A|⊺]

Observe that we did not describe how to define the representative states s̄i. Ideally, these states would be linked to vectors ka_i forming a convex hull which contains the rows of P̂a. In practice, we can often resort to simple methods to pick states s̄i in strategic regions of S. In Section 5 we give an example of how to do so. Also, the reader might have noticed that the stochastic factorizations computed by KBSF are in fact approximations of the matrices P̂a.
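The steps above can be sketched for a minimal one-dimensional, single-action case; the sample transitions, kernel width, and representative states below are invented, and the paper's full algorithm handles multiple actions and does not prescribe these particular choices:

```python
# Minimal 1-D, single-action sketch of the KBSF construction; the data,
# kernel width, and representative states are invented for illustration.
import math
import random

GAMMA, WIDTH = 0.9, 0.5
random.seed(0)

# Transitions (s, r, s_hat) on the line: moving right is rewarded.
# n = 20 sample transitions, m = 3 representative states.
S = [(s, s + 0.1, min(s + 0.1, 1.0))
     for s in [random.random() for _ in range(20)]]
REP = [0.1, 0.5, 0.9]

def kernel_row(x, centers):
    # Normalized Gaussian weights: non-negative and summing to 1.
    w = [math.exp(-((x - c) / WIDTH) ** 2) for c in centers]
    z = sum(w)
    return [wi / z for wi in w]

D = [kernel_row(s_hat, REP) for (_, _, s_hat) in S]       # n x m
K = [kernel_row(c, [s for (s, _, _) in S]) for c in REP]  # m x n
r_bar = [sum(K[i][k] * S[k][1] for k in range(len(S))) for i in range(len(REP))]
P_bar = [[sum(K[i][k] * D[k][j] for k in range(len(S)))   # m x m = K D
          for j in range(len(REP))] for i in range(len(REP))]

# Solve the reduced m-state MDP by value iteration (single action here).
v = [0.0] * len(REP)
for _ in range(2000):
    v = [r_bar[i] + GAMMA * sum(P_bar[i][j] * v[j] for j in range(len(REP)))
         for i in range(len(REP))]

# Lift back: values at the sampled end states via v_tilde = D v_bar.
v_tilde = [sum(D[k][j] * v[j] for j in range(len(REP))) for k in range(len(S))]
```

The reduced chain P̄ = K̇Ḋ has only m = 3 states, yet its solution is lifted back to all n = 20 sampled states through D, which is the size reduction KBSF is built on.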
The following proposition extends the result of the previous section to the approximate case:

Proposition 2 Let M̂ ≡ (Ŝ, A, P̂a, r̂a, γ) be the finite MDP derived by KBRL and let D, Ka, and r̄a be the matrices and vectors computed by KBSF. Then,

∥v̂* − ṽ∥∞ ≤ [1/(1 − γ)] max_a ∥r̂a − D r̄a∥∞ + [1/(1 − γ)²] ( C̄ max_i (1 − max_j d_ij) + (Ĉγ/2) max_a ∥P̂a − DKa∥∞ ),   (2)

where Ĉ = max_{a,i} r̂a_i − min_{a,i} r̂a_i.

Proof. Let M ≡ (S, A, DKa, D r̄a, γ). It is obvious that

∥v̂* − ṽ∥∞ ≤ ∥v̂* − v*∥∞ + ∥v* − ṽ∥∞.   (3)

In order to provide a bound for ∥v̂* − v*∥∞, we apply Whitt’s Theorem 3.1 and Corollary (b) of his Theorem 6.1 [12], with all mappings between M̂ and M taken to be identities, to obtain

∥v̂* − v*∥∞ ≤ [1/(1 − γ)] max_a ∥r̂a − D r̄a∥∞ + [Ĉγ/(2(1 − γ))] max_a ∥P̂a − DKa∥∞.   (4)

Resorting to Proposition 1, we can substitute (1) and (4) in (3) to obtain (2). □

Notice that when D is deterministic (that is, when all its non-zero elements are 1), expression (2) reduces to Whitt’s classical result regarding state aggregation in dynamic programming [12]. On the other hand, when the stochastic factorizations are exact, we recover (1), which is a computable version of Sorg and Singh’s bound for soft homomorphisms [10]. Finally, if we have exact deterministic factorizations, the right-hand side of (2) reduces to zero. This also makes sense, since in this case the stochastic-factorization trick gives rise to an exact homomorphism [13]. As shown in Algorithm 1, KBSF is very simple to understand and to implement. It is also fast, requiring only O(nm²|A|) operations to build a reduced version of an MDP. Finally, and perhaps most importantly, KBSF always converges to a unique solution whose distance to the optimal one is bounded. In the next section we show how all these qualities turn into practical benefits. 5 Experiments We now present a series of computational experiments designed to illustrate the behavior of KBSF in a variety of challenging domains. We start with a simple problem showing that KBSF is indeed capable of compressing the information contained in KBRL’s model. We then move to more difficult tasks, and compare KBSF with other state-of-the-art reinforcement-learning algorithms. All problems considered in this section have a continuous state space and a finite number of actions and were modeled as discounted tasks with γ = 0.99. The algorithms’ results correspond to the performance of the greedy decision policy derived from the final value function computed. In all cases, the decision policies were evaluated on a set of test states from which the tasks cannot be easily solved.
This makes the tasks considerably harder, since the algorithms must provide a good approximation of the value function over a larger region of the state space. The experiments were carried out in the same way for all tasks: first, we collected a set of n sample transitions (sa_k, ra_k, ŝa_k) using a uniformly-random exploration policy. Then the states ŝa_k were grouped by the k-means algorithm into m clusters and a Gaussian kernel κ̄ was positioned at the center of each resulting cluster [14]. These kernels defined the models used by KBSF to approximate the value function. This process was repeated 50 times for each task. We adopted the same width for all kernels. The algorithms were executed on each task with the following values for this parameter: {1, 0.1, 0.01}. The results reported represent the best performance of the algorithms over the 50 runs; that is, for each n and each m we picked the width that generated the maximum average return. Throughout this section we use the following convention to refer to specific instances of each method: the first number enclosed in parentheses after an algorithm’s name is n, the number of sample transitions used in the approximation, and the second one is m, the size of the model used to approximate the value function. Note that for KBRL n and m coincide. Figure 1 shows the results obtained by KBRL and KBSF on the puddle-world task [15]. In Figures 1a and 1b we observe the effect of fixing the number of transitions n and varying the number of representative states m. As expected, the results of KBSF improve as m → n. More surprising is the fact that KBSF has essentially the same performance as KBRL using models one order of magnitude smaller. This indicates that KBSF summarizes the information contained in the data well. Depending on the values of n and m, this compression may represent a significant reduction of computational resources.
For example, by replacing KBRL(8000) with KBSF(8000, 90), we obtain a decrease of more than 99% in the number of operations performed to find a policy, as shown in Figure 1b (the cost of constructing KBSF’s MDP is included in all reported run times). In Figures 1c and 1d we fix m and vary n. Observe in Figure 1c how KBRL and KBSF have similar performances, and both improve as n → ∞. However, since KBSF uses a model of fixed size, its computational cost depends only linearly on n, whereas KBRL’s cost grows with n³. This explains the huge difference in the algorithms’ run times shown in Figure 1d.

Figure 1: Results on the puddle-world task averaged over 50 runs ((a) performance as a function of m; (b) run time as a function of m; (c) performance as a function of n; (d) run time as a function of n). The algorithms were evaluated on two sets of states distributed over the region of the state space surrounding the “puddles”. The first set was a 3×3 grid over [0.1, 0.3] × [0.3, 0.5] and the second one was composed of four states: {0.1, 0.3} × {0.9, 1.0}. The shadowed regions represent 99% confidence intervals.

Next we evaluate how KBSF compares to other reinforcement-learning approaches. We first contrast our method with Lagoudakis and Parr’s least-squares policy iteration algorithm (LSPI) [3]. LSPI is a natural candidate here because it also builds an approximator of fixed size out of a batch of sample transitions. In all experiments LSPI used the same data and approximation architectures as KBSF (to be fair, we fixed the width of KBSF’s kernel κa at 1 in the comparisons). Figure 2 shows the results of LSPI and KBSF on the single and double pole-balancing tasks [16]. We call attention to the fact that the version of the problems used here is significantly harder than the more commonly-used variants in which the decision policies are evaluated on a single state close to the origin. This is probably the reason why LSPI achieves a success rate of no more than 60% on the single pole-balancing task, as shown in Figure 2a. In contrast, KBSF’s decision policies are able to balance the pole in 90% of the attempts, on average, using as few as m = 30 representative states. The results of KBSF on the double pole-balancing task are still more impressive. As Wieland [17] rightly points out, this version of the problem is considerably more difficult than its single pole variant, and previous attempts to apply reinforcement-learning techniques to this domain resulted in disappointing performance [18]. As shown in Figure 2c, KBSF(10^6, 200) is able to achieve a success rate of more than 80%. To put this number in perspective, recall that some of the test states are quite challenging, with the two poles inclined and falling in opposite directions. The good performance of KBSF comes at a relatively low computational cost. A conservative estimate reveals that, were KBRL(10^6) run on the same computer used for these experiments, we would have to wait for more than 6 months to see the results. KBSF(10^6, 200) delivers a decision policy in less than 7 minutes. KBSF’s computational cost also compares well with that of LSPI, as shown in Figures 2b and 2d. LSPI’s policy-evaluation step involves the update and solution of a linear system of equations, which take O(nm²) and O(m³|A|³), respectively. In addition, the policy-update stage requires the definition of π(ŝa_k) for all n states in the set of sample transitions.
In contrast, KBSF only performs O(m³) operations to evaluate a decision policy and O(m²|A|) operations to update it.

Figure 2: Results on the pole-balancing tasks averaged over 50 runs ((a) performance on single pole-balancing; (b) run time on single pole-balancing; (c) performance on double pole-balancing; (d) run time on double pole-balancing). The values correspond to the fraction of episodes initiated from the test states in which the pole(s) could be balanced for 3000 steps (one minute of simulated time). The test sets were regular grids defined over the hypercube centered at the origin and covering 50% of the state-space axes in each dimension (we used a resolution of 3 and 2 cells per dimension for the single and double versions of the problem, respectively). Shadowed regions represent 99% confidence intervals.

We conclude our empirical evaluation of KBSF by using it to learn a neurostimulation policy for the treatment of epilepsy. In order to do so, we use a generative model developed by Bush et al. [19] based on real data collected from epileptic rat hippocampus slices. This model was shown to reproduce the seizure pattern of the original dynamical system and was later validated through the deployment of a learned treatment policy on a real brain slice [20]. The associated decision problem has a five-dimensional continuous state space and extremely non-linear dynamics. At each state the agent must choose whether or not to apply an electrical pulse. The goal is to suppress seizures while minimizing the total amount of stimulation needed to do so.
We use as a baseline for our comparisons the fixed-frequency stimulation policies usually adopted in standard in vitro clinical studies [20]. In particular, we considered policies that apply electrical pulses at frequencies of 0 Hz, 0.5 Hz, 1 Hz, and 1.5 Hz. For this task we ran LSPI and KBSF with sparse kernels, that is, we only computed the value of the Gaussian function at the 6 nearest neighbors of a given state (thus defining a simplex with the same dimension as the state space). This modification made it possible to use m = 50,000 representative states with KBSF. Since for LSPI the reduction in the computational cost was not very significant, we fixed m = 50 to keep its run time within reasonable bounds. We compare the decision policies returned by KBSF and LSPI with those computed by fitted Q-iteration using Ernst et al.’s extra-trees algorithm [4]. This approach has shown excellent performance on several reinforcement-learning tasks [4]. We used the extra-trees algorithm to build an ensemble of 30 trees. The algorithm was run for 50 iterations, with the structure of the trees fixed after the 10th one. The number of cut-directions evaluated at each node was fixed at dim(S) = 5, and the minimum number of elements required to split a node, denoted here by ηmin, was selected from the set {20, 30, ..., 50, 100, 150, ..., 200}. In general, we observed that the performance of the tree-based method improved with smaller values for ηmin, with an expected increase in the computational cost. Thus, in order to give an overall characterization of the performance of fitted Q-iteration, we report the results obtained with the extreme values of ηmin. The respective instances of the tree-based approach are referred to as T20 and T200. Figure 3 shows the results on the epilepsy-suppression task.
In order to obtain different compromises between the problem's two conflicting objectives, we varied the relative magnitudes of the penalties associated with the occurrence of seizures and with the application of an electrical pulse [19, 20]. In particular, we fixed the latter at −1 and varied the former over the values {−10, −20, −40}. This appears in the plots as subscripts next to the algorithms' names. As shown in Figure 3a, LSPI's policies seem to prioritize reduction of stimulation at the expense of higher seizure occurrence, which is clearly sub-optimal from a clinical point of view. T200 also performs poorly, with solutions representing no advance over the fixed-frequency stimulation strategies. In contrast, T20 and KBSF are both able to generate decision policies superior to the 1 Hz policy, which is the most efficient stimulation regime known to date in the clinical literature [21]. However, as shown in Figure 3b, KBSF is able to do it at least 100 times faster than the tree-based method.

[Figure 3: Results on the epilepsy-suppression problem averaged over 50 runs. (a) Performance: fraction of stimulation vs. fraction of seizures for the fixed-frequency baselines (0 Hz, 0.5 Hz, 1 Hz, 1.5 Hz) and for T20, T200, KBSF, and LSPI with penalties −10, −20, and −40; the lengths of the rectangles' edges represent 99% confidence intervals. (b) Run times (confidence intervals do not show up in logarithmic scale). The algorithms used n = 500,000 sample transitions to build the approximations. The decision policies were evaluated on episodes of 10^5 transitions starting from a fixed set of 10 test states drawn uniformly at random.]

6 Conclusions

We presented KBSF, a reinforcement-learning algorithm that emerges from the application of the stochastic-factorization trick to KBRL.
As discussed, our algorithm is simple, fast, has good theoretical guarantees, and always converges to a unique solution. Our empirical results show that KBSF is able to learn very good decision policies with relatively low computational cost. It also has predictable behavior, generally improving its performance as the number of sample transitions or the size of its approximation model increases. In the future, we intend to investigate more principled strategies to select the representative states, based on the large body of literature available on kernel methods. We also plan to extend KBSF to the on-line scenario, where the intermediate decision policies generated during the learning process guide the collection of new sample transitions.

Acknowledgments

The authors would like to thank Keith Bush for making the epilepsy simulator available and also Yuri Grinberg for helpful discussions regarding this work. Funding for this research was provided by the National Institutes of Health (grant R21 DA019800) and the NSERC Discovery Grant program.

References

[1] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49(2–3):161–178, 2002.
[2] D. Ormoneit and P. Glynn. Kernel-based reinforcement learning in average-cost problems. IEEE Transactions on Automatic Control, 47(10):1624–1636, 2002.
[3] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[4] D. Ernst, P. Geurts, and L. Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503–556, 2005.
[5] M. L. Puterman. Markov Decision Processes—Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 1994.
[6] N. Jong and P. Stone. Kernel-based models for reinforcement learning in continuous state spaces. In Proceedings of the International Conference on Machine Learning—Workshop on Kernel Machines and Reinforcement Learning, 2006.
[7] J. E. Cohen and U. G. Rothblum. Nonnegative ranks, decompositions and factorizations of nonnegative matrices. Linear Algebra and its Applications, 190:149–168, 1991.
[8] A. Cutler and L. Breiman. Archetypal analysis. Technometrics, 36(4):338–347, 1994.
[9] A. M. S. Barreto and M. D. Fragoso. Computing the stationary distribution of a finite Markov chain through stochastic factorization. SIAM Journal on Matrix Analysis and Applications. In press.
[10] J. Sorg and S. Singh. Transfer via soft homomorphisms. In Autonomous Agents & Multiagent Systems / Agent Theories, Architectures, and Languages, pages 741–748, 2009.
[11] S. A. Vavasis. On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization, 20:1364–1377, 2009.
[12] W. Whitt. Approximations of dynamic programs, I. Mathematics of Operations Research, 3(3):231–243, 1978.
[13] B. Ravindran. An Algebraic Approach to Abstraction in Reinforcement Learning. PhD thesis, University of Massachusetts, Amherst, MA, 2004.
[14] L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley and Sons, 1990.
[15] R. S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems, volume 8, pages 1038–1044, 1996.
[16] F. J. Gomez. Robust Non-linear Control Through Neuroevolution. PhD thesis, The University of Texas at Austin, 2003.
[17] A. P. Wieland. Evolving neural network controllers for unstable systems. In Proceedings of the International Joint Conference on Neural Networks, volume 2, pages 667–673, 1991.
[18] F. Gomez, J. Schmidhuber, and R. Miikkulainen. Efficient non-linear control through neuroevolution. In Proceedings of the 17th European Conference on Machine Learning, pages 654–662, 2006.
[19] K. Bush, J. Pineau, and M. Avoli. Manifold embeddings for model-based reinforcement learning of neurostimulation policies. In Proceedings of the ICML / UAI / COLT Workshop on Abstraction in Reinforcement Learning, 2009.
[20] K. Bush and J. Pineau. Manifold embeddings for model-based reinforcement learning under partial observability. In Advances in Neural Information Processing Systems, volume 22, pages 189–197, 2009.
[21] K. Jerger and S. J. Schiff. Periodic pacing an in vitro epileptic focus. Journal of Neurophysiology, (2):876–879, 1995.
Generalized Lasso based Approximation of Sparse Coding for Visual Recognition

Nobuyuki Morioka, The University of New South Wales & NICTA, Sydney, Australia, nmorioka@cse.unsw.edu.au
Shin'ichi Satoh, National Institute of Informatics, Tokyo, Japan, satoh@nii.ac.jp

Abstract

Sparse coding, a method of explaining sensory data with as few dictionary bases as possible, has attracted much attention in computer vision. For visual object category recognition, ℓ1 regularized sparse coding is combined with the spatial pyramid representation to obtain state-of-the-art performance. However, because of its iterative optimization, applying sparse coding to every local feature descriptor extracted from an image database can become a major bottleneck. To overcome this computational challenge, this paper presents "Generalized Lasso based Approximation of Sparse coding" (GLAS). By representing the distribution of sparse coefficients with the slice transform, we fit a piece-wise linear mapping function with the generalized lasso. We also propose an efficient post-refinement procedure to perform mutual inhibition between bases, which is essential in an overcomplete setting. The experiments show that GLAS obtains performance comparable to ℓ1 regularized sparse coding, yet achieves a significant speed-up, demonstrating its effectiveness for large-scale visual recognition problems.

1 Introduction

Recently, sparse coding [3, 18] has attracted much attention in computer vision research. Its applications range from image denoising [23] to image segmentation [17] and image classification [10, 24], achieving state-of-the-art results. Sparse coding interprets an input signal x ∈ R^{D×1} with a sparse vector u ∈ R^{K×1} whose linear combination with an overcomplete set of K bases (i.e., D ≪ K), also known as the dictionary B ∈ R^{D×K}, reconstructs the input as precisely as possible.
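To make the iterative-optimization bottleneck concrete, here is a minimal ℓ1 regularized sparse coding solver in the ISTA (proximal gradient) style. This is an illustration only and an assumption on our part: the experiments in this paper use the feature-sign search algorithm [14], and GLAS exists precisely to avoid running such a loop for every local descriptor.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(x, B, lam, n_iter=2000):
    """Minimize ||x - B u||_2^2 + lam * ||u||_1 by proximal gradient steps."""
    L = 2.0 * np.linalg.norm(B, 2) ** 2     # Lipschitz constant of the smooth part
    u = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * B.T @ (B @ u - x)      # gradient of the reconstruction term
        u = soft_threshold(u - grad / L, lam / L)
    return u

rng = np.random.default_rng(0)
D, K = 16, 64                               # overcomplete: D << K
B = rng.standard_normal((D, K))
B /= np.linalg.norm(B, axis=0)              # unit-norm bases b_k
u_true = np.zeros(K)
u_true[[3, 17]] = [1.0, -0.5]
x = B @ u_true                              # a signal with a 2-sparse code
u = ista(x, B, lam=0.01)                    # thousands of iterations per descriptor
```

Running this inner loop once per local descriptor over a whole image database is exactly the cost GLAS is designed to remove.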
To enforce sparseness on u, the ℓ1 norm is a popular choice due to its computational convenience and its interesting connection with the NP-hard ℓ0 norm in compressed sensing [2]. Several efficient ℓ1 regularized sparse coding algorithms have been proposed [4, 14] and are adopted in visual recognition [10, 24]. In particular, Yang et al. [24] compute the sparse codes of many local feature descriptors with sparse coding. However, because the ℓ1 norm is convex but non-smooth, the sparse coding algorithm must optimize iteratively until convergence. Therefore, the local feature descriptor coding step becomes a major bottleneck for large-scale problems like visual recognition. The goal of this paper is to achieve state-of-the-art performance on large-scale visual recognition, comparable to the work of Yang et al. [24], but with a significant improvement in efficiency. To this end, we propose "Generalized Lasso based Approximation of Sparse coding", GLAS for short. Specifically, we encode the distribution of each dimension of the sparse codes with the slice transform representation [9] and learn a piece-wise linear mapping function with the generalized lasso [21] to best approximate ℓ1 regularized sparse coding. We further propose an efficient post-refinement procedure to capture the dependency between overcomplete bases. The effectiveness of our approach is demonstrated on several challenging object and scene category datasets, showing performance comparable to Yang et al. [24] and better than other fast algorithms that obtain sparse codes [22]. While several supervised dictionary learning methods have been proposed to obtain more discriminative sparse representations [16, 25], they have not been evaluated on visual recognition with many object categories due to the computational challenges involved. Furthermore, Ranzato et al.
[19] have empirically shown that unsupervised learning of visual features can obtain a more general and effective representation. Therefore, in this paper, we focus on learning a fast approximation of sparse coding in an unsupervised manner. The paper is organized as follows: Section 2 reviews related work, including the linear spatial pyramid combined with sparse coding and other fast algorithms for obtaining sparse codes. Section 3 presents GLAS. This is followed by the experimental results on several challenging categorization datasets in Section 4. Section 5 concludes the paper with discussion and future work.

2 Related Work

2.1 Linear Spatial Pyramid Matching Using Sparse Coding

This section reviews the linear spatial pyramid matching based on sparse coding by Yang et al. [24]. Given a collection of N local feature descriptors randomly sampled from training images, X = [x_1, x_2, ..., x_N] ∈ R^{D×N}, an overcomplete dictionary B = [b_1, b_2, ..., b_K] ∈ R^{D×K} is learned by

\min_{B,U} \sum_{i=1}^{N} \|x_i - B u_i\|_2^2 + \lambda \|u_i\|_1 \quad \text{s.t.} \quad \|b_k\|_2^2 \le 1,\ k = 1, 2, \dots, K. \qquad (1)

The cost function above combines the reconstruction error with the ℓ1 sparsity penalty, whose weight is controlled by λ. The ℓ2 norm of each b_k is constrained to be at most 1 to avoid a trivial solution. Since both B and [u_1, u_2, ..., u_N] are unknown a priori, an alternating optimization technique is often used [14] to optimize over the two parameter sets. Under the spatial pyramid matching framework, each image is divided into a set of sub-regions r = [r_1, r_2, ..., r_R]. For example, if 1×1, 2×2 and 4×4 partitions are used on an image, we have 21 sub-regions. Then, we compute the sparse solutions of all local feature descriptors appearing in each sub-region r_j, denoted U_{r_j}, by

\min_{U_{r_j}} \|X_{r_j} - B U_{r_j}\|_2^2 + \lambda \|U_{r_j}\|_1. \qquad (2)

The sparse solutions are max-pooled within each sub-region and concatenated across sub-regions to build a statistic of the image:

h = [\max(|U_{r_1}|)^\top, \max(|U_{r_2}|)^\top, \dots, \max(|U_{r_R}|)^\top]^\top, \qquad (3)

where max(·) finds the maximum value in each row of a matrix and returns a column vector. Finally, a linear SVM is trained on the set of image statistics for classification. The main advantage of using sparse coding is that state-of-the-art results can be achieved with a simple linear classifier, as reported in [24]. Compared to kernel-based methods, this dramatically speeds up training and testing of the classifier. However, the step of finding a sparse code for each local descriptor now becomes a major bottleneck. Using the efficient sparse coding algorithm based on feature-sign search [14], the time to compute the solution u for one local descriptor is O(KZ), where Z is the number of non-zeros in u. This paper proposes an approximation method whose time complexity reduces to O(K). With the post-refinement procedure, the time complexity is O(K + Z^2), which is still much lower than O(KZ).

2.2 Predictive Sparse Decomposition

Predictive sparse decomposition (PSD), described in [10, 11], is a feedforward network that applies a non-linear mapping function to linearly transformed input data to match the optimal sparse coding solution as accurately as possible. The feedforward network is defined as û_i = G g(W x_i, θ), where g(z, θ) denotes a non-linear parametric mapping function which can be of any form; common choices are the hyperbolic tangent, tanh(z + θ), and soft shrinkage, sign(z) max(|z| − θ, 0). The function is applied to the linearly transformed data W x_i and subsequently scaled by a diagonal matrix G. Given training samples {x_i}_{i=1}^N, the parameters can be estimated either jointly with or separately from the dictionary B. When learning jointly, we minimize the cost function

\min_{B,G,W,\theta,U} \sum_{i=1}^{N} \|x_i - B u_i\|_2^2 + \lambda \|u_i\|_1 + \gamma \|u_i - G g(W x_i, \theta)\|_2^2. \qquad (4)

When learning separately, B and U are obtained with Eqn. (1) first.
Then, the remaining parameters G, W and θ are estimated by solving the last term of Eqn. (4) only. Gregor and LeCun [7] later proposed a better, but iterative, approximation scheme for ℓ1 regularized sparse coding. One downside of the parametric approach is that its accuracy largely depends on how well the parametric function fits the target statistical distribution, as argued by Hel-Or and Shaked [9]. This paper explores a non-parametric approach which can fit any distribution as long as the available data samples are representative. The advantage of our approach over the parametric approach is that we do not need to seek an appropriate parametric function for each distribution. This is particularly useful in visual recognition with multiple feature types, as the function form for each feature type is estimated automatically from data. We demonstrate this with two different local descriptor types in our experiments.

2.3 Locality-constrained Linear Coding

Another notable work that overcomes the bottleneck of the local descriptor coding step is locality-constrained linear coding (LLC), proposed by Wang et al. [22] as a fast version of local coordinate coding [26]. Given a local feature descriptor x_i, LLC searches for the M nearest dictionary bases of x_i; these nearest bases, stacked in columns, are denoted B_{φ_i} ∈ R^{D×M}, where φ_i is the index list of the selected bases. Then, the coefficients u_{φ_i} ∈ R^{M×1} whose linear combination with B_{φ_i} reconstructs x_i are found by solving

\min_{u_{\varphi_i}} \|x_i - B_{\varphi_i} u_{\varphi_i}\|_2^2 \quad \text{s.t.} \quad \mathbf{1}^\top u_{\varphi_i} = 1. \qquad (5)

This is a constrained least squares problem which can be solved quite efficiently. The final sparse code u_i is obtained by setting its elements indexed by φ_i to u_{φ_i}. The time complexity of LLC is O(K + M^2), excluding the time required to find the M nearest neighbours. While LLC is fast, the resulting sparse solutions are not as discriminative as the ones obtained by sparse coding.
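The constrained least squares step of Eqn. (5) admits a simple closed form: since the coefficients sum to one, the residual can be rewritten with shifted bases, giving u ∝ C^{-1} 1 where C is the covariance of the shifted neighbour bases. A minimal sketch with synthetic data (the dictionary and query here are illustrative assumptions):

```python
import numpy as np

def llc_code(x, B, M=5, reg=1e-4):
    """Sketch of LLC (Eqn. (5)): code x with its M nearest dictionary bases."""
    K = B.shape[1]
    # Nearest-neighbour search over dictionary columns.
    phi = np.argsort(np.linalg.norm(B - x[:, None], axis=0))[:M]
    Bphi = B[:, phi]
    # With 1' u = 1, ||x - Bphi u||^2 = ||(Bphi - x 1') u||^2, so minimizing
    # u' C u with C = (Bphi - x 1')' (Bphi - x 1') gives u proportional to C^{-1} 1.
    Z = Bphi - x[:, None]
    C = Z.T @ Z + reg * np.eye(M)     # small ridge for numerical stability
    w = np.linalg.solve(C, np.ones(M))
    u_phi = w / w.sum()               # enforce the sum-to-one constraint
    u = np.zeros(K)
    u[phi] = u_phi                    # scatter back into the full code
    return u

rng = np.random.default_rng(1)
B = rng.standard_normal((16, 64))
B /= np.linalg.norm(B, axis=0)        # unit-norm bases
x = 0.9 * B[:, 7] + 0.1 * rng.standard_normal(16)
u = llc_code(x, B)
```

Only the M selected entries of u are non-zero, which is what makes the O(K + M^2) cost possible.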
LLC's lower discriminative power may be due to the fact that M is fixed across all local feature descriptors. Some descriptors may need more bases for an accurate representation, while others may need fewer bases for more distinctiveness. In contrast, the number of bases selected by our post-refinement procedure to handle mutual inhibition differs for each local descriptor.

3 Generalized Lasso based Approximation of Sparse Coding

This section describes GLAS. We first learn a dictionary from a collection of local feature descriptors as in Eqn. (1). Then, based on the slice transform representation, we fit a piece-wise linear mapping function with the generalized lasso to approximate the optimal sparse solutions of the local feature descriptors under ℓ1 regularized sparse coding. Finally, we propose an efficient post-refinement procedure to perform the mutual inhibition.

3.1 Slice Transform Representation

The slice transform representation was introduced by Hel-Or and Shaked [9] as a way to discretize a function space so as to fit a piece-wise linear function for the purpose of image denoising. It was later adopted by Adler et al. [1] for single-image super-resolution. In this paper, we utilise the representation to approximate sparse coding, so as to obtain sparse codes for local feature descriptors as fast as possible. Given a local descriptor x, we linearly combine it with B^⊤ to obtain z. For the moment, we consider one dimension of z, denoted z, which is a real value lying in a half-open interval [a, b). The interval is divided into Q − 1 equal-sized bins whose boundaries form a vector q = [q_1, q_2, ..., q_Q]^⊤ such that a = q_1 < q_2 < ... < q_Q = b.

Figure 1: Different approaches to fit a piece-wise linear mapping function, panels (a)–(c). Regularized least squares (RLS) in red (see Eqn. (8)).
ℓ1-regularized sparse coding (L1-SC) in magenta (see Eqn. (9)). GLAS in green (see Eqn. (10)). (a) All three methods achieve a good fit. (b) A case where L1-SC fails to extrapolate well at the ends and RLS tends to align itself to q (in black). (c) A case where data samples around 0.25 are removed artificially, illustrating that RLS fails to interpolate because no neighboring prior is used. In contrast, GLAS can both interpolate and extrapolate well in the case of missing or noisy data.

The interval into which the value of z falls is indexed by π(z) = j if z ∈ [q_{j−1}, q_j), and the corresponding residue is

r(z) = \frac{z - q_{\pi(z)-1}}{q_{\pi(z)} - q_{\pi(z)-1}}.

Based on the above, we can re-express z as

z = (1 - r(z)) \, q_{\pi(z)-1} + r(z) \, q_{\pi(z)} = S_q(z) \, q, \qquad (6)

where S_q(z) = [0, ..., 0, 1 − r(z), r(z), 0, ..., 0]. Returning to the multivariate case z = B^⊤ x, we have z = [S_q(z_1) q, S_q(z_2) q, ..., S_q(z_K) q]^⊤, where z_k denotes the kth dimension of z. We then replace the boundary vector q with per-dimension vectors p = {p_1, p_2, ..., p_K} such that the resulting vector approximates, as closely as possible, the optimal sparse solution of x obtained by ℓ1 regularized sparse coding. This is written as

\hat{u} = [S_q(z_1) p_1, S_q(z_2) p_2, \dots, S_q(z_K) p_K]^\top. \qquad (7)

Hel-Or and Shaked [9] formulated the problem of learning each p_k as regularized least squares, either independently in a transform domain or jointly in a spatial domain. Unlike their setting, we have a significantly larger number of bases, which makes joint optimization of all p_k difficult. Moreover, since we are interested in approximating the sparse solutions, which live in the transform domain, we learn each p_k independently. Given N local descriptors X = [x_1, x_2, ..., x_N] ∈ R^{D×N} and their corresponding sparse solutions U = [u_1, u_2, ..., u_N] = [y_1, y_2, ..., y_K]^⊤ ∈ R^{K×N} obtained with ℓ1 regularized sparse coding, we have the optimization problem

\min_{p_k} \|y_k - S_k p_k\|_2^2 + \alpha \|q - p_k\|_2^2, \qquad (8)

where S_k = S_q(z_k).
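Concretely, π(z), r(z) and the slice-transform row S_q(z) of Eqn. (6) can be sketched as follows; the boundary vector p at the end is a hypothetical stand-in for a learnt table, shown only to illustrate how swapping q for p turns reconstruction into a piece-wise linear map:

```python
import numpy as np

def slice_transform(z, q):
    """S_q(z) from Eqn. (6): a row vector with weights (1 - r(z), r(z)) at the
    boundaries of the bin containing z, and zeros elsewhere."""
    Q = len(q)
    j = np.searchsorted(q, z, side='right')   # pi(z): z lies in [q_{j-1}, q_j)
    j = min(max(j, 1), Q - 1)                 # clamp to the valid bin range
    r = (z - q[j - 1]) / (q[j] - q[j - 1])    # residue r(z)
    s = np.zeros(Q)
    s[j - 1], s[j] = 1.0 - r, r
    return s

q = np.linspace(-0.5, 1.0, 10)                # Q = 10 boundaries over [a, b)
z = 0.37
s = slice_transform(z, q)
print(abs(s @ q - z))                         # Eqn. (6): S_q(z) q recovers z exactly
# Replacing q with a learnt boundary vector p gives the piece-wise linear map:
p = np.maximum(q, 0.0)                        # hypothetical p, for illustration only
u_hat = s @ p
```

With q, the transform is the identity on [a, b); all of the modeling power comes from learning p.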
The regularization in the second term is essential to avoid singularity when computing the inverse; its consequence is that p_k is encouraged to align itself to q when few data samples are available. This might be a reasonable prior for image denoising [9], but it is not desirable for the purpose of approximating sparse coding, since we would like to suppress most of the coefficients in u to zero. Figure 1 shows the distribution of one dimension of the sparse coefficients z obtained from a collection of SIFT descriptors, and q does not resemble this distribution. This motivates us to consider the generalized lasso [21] as an alternative for obtaining a better fit to the distribution of the coefficients.

3.2 Generalized Lasso

In the previous section, we argued that the regularized least squares of Eqn. (8) does not give the desired result: most intervals should instead be set to zero. This naturally leads us to ℓ1 regularization, also known as the lasso, formulated as

\min_{p_k} \|y_k - S_k p_k\|_2^2 + \alpha \|p_k\|_1. \qquad (9)

However, the drawback is that the learnt piece-wise linear function may become unstable when training data are noisy or missing, as illustrated in Figure 1 (b) and (c). It turns out that ℓ1 trend filtering [12], an instance of the generalized lasso [21], overcomes this problem. It is expressed as

\min_{p_k} \|y_k - S_k p_k\|_2^2 + \alpha \|D p_k\|_1, \qquad (10)

where D ∈ R^{(Q−2)×Q} is referred to as a penalty matrix and is the second-difference operator

D = \begin{pmatrix} -1 & 2 & -1 & & & \\ & -1 & 2 & -1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & -1 & 2 & -1 \end{pmatrix}. \qquad (11)

To solve the above optimization problem, we can turn it into a sparse coding problem [21]. Since D is not invertible, the key is to augment D with A ∈ R^{2×Q} to build a square matrix D̃ = [D; A] ∈ R^{Q×Q} such that rank(D̃) = Q and the rows of A are orthogonal to the rows of D. To satisfy these constraints, A can for example be set to [1, 2, ..., Q; 2, 3, ..., Q + 1]. If we let θ = [θ_1; θ_2] = D̃ p_k, where θ_1 = D p_k and θ_2 = A p_k, then S_k p_k = S_k D̃^{-1} θ = S_{k1} θ_1 + S_{k2} θ_2.
After some substitutions, we see that, given θ_1, we can solve for θ_2 as θ_2 = (S_{k2}^⊤ S_{k2})^{-1} S_{k2}^⊤ (y_k − S_{k1} θ_1). To solve for θ_1, we have the sparse coding problem

\min_{\theta_1} \|(I - P) y_k - (I - P) S_{k1} \theta_1\|_2^2 + \alpha \|\theta_1\|_1, \qquad (12)

where P = S_{k2} (S_{k2}^⊤ S_{k2})^{-1} S_{k2}^⊤. Having computed both θ_1 and θ_2, we recover p_k as D̃^{-1} θ. Further details can be found in [21]. Given the learnt p, we can approximate the sparse solution of x by Eqn. (7). However, explicitly computing S_q(z) and multiplying it by p is somewhat redundant. Instead, we compute each component of û as

\hat{u}_k = (1 - r(z_k)) \, p_k(\pi(z_k) - 1) + r(z_k) \, p_k(\pi(z_k)), \qquad (13)

whose time complexity is O(K). In Eqn. (13), since we essentially use p_k as a lookup table, the complexity is independent of Q. This is followed by ℓ1 normalization of û. While û can readily be used for the spatial max pooling stated in Eqn. (3), it does not yet capture any "explaining away" effect, in which the coefficients of correlated bases are mutually inhibited to remove redundancy. This is because each p_k is estimated independently in the transform domain [9]. In the next section, we propose an efficient post-refinement technique to mutually inhibit between the bases.

3.3 Capturing Dependency Between Bases

To handle the mutual inhibition between overcomplete bases, this section explains how to refine the sparse codes by solving regularized least squares on a significantly smaller active basis set. Given a local descriptor x and its initial sparse code û estimated with the above method, we set the non-zero components of the code to be active. Denoting the set of active components by φ, we have û_φ and B_φ, the corresponding subsets of the sparse code and the dictionary bases. The goal is to compute a refined code v̂_φ such that B_φ v̂_φ reconstructs x as accurately as possible.
We formulate this as the regularized least squares problem

\min_{\hat{v}_\varphi} \|x - B_\varphi \hat{v}_\varphi\|_2^2 + \beta \|\hat{v}_\varphi - \hat{u}_\varphi\|_2^2, \qquad (14)

where β is the weight of the regularization. This problem is convex and has the analytical solution v̂_φ = (B_φ^⊤ B_φ + βI)^{-1} (B_φ^⊤ x + β û_φ). The intuition behind this formulation is that the initial sparse code û is a good starting point for refinement: the reconstruction error is further reduced by allowing redundant bases to compete against each other. Empirically, the number of active components in each û is substantially small compared to the whole basis set. Hence, the linear system to be solved becomes much smaller, which is computationally cheap. We also make sure that we do not deviate too much from the initial solution through the regularization on v̂_φ. This refinement procedure may appear similar to LLC [22]. However, in our case, we do not preset the number of active bases; it is determined by the non-zero components of û. More importantly, we base our final solution on û and do not perform a nearest neighbor search. With this refinement procedure, the total time complexity becomes O(K + Z^2). We refer to GLAS with this post-refinement procedure as GLAS+.

Table 1: Recognition accuracy on Caltech-101. The dictionary sizes for all methods are set to 1024. We also report the time taken to process 1000 local descriptors with each method.

SIFT (128 Dim.) [15]
Methods      KM        LLC [22]  PSD [11]  SC [24]   GLAS      GLAS+
15 Train     55.5±1.2  62.7±1.0  64.0±1.2  65.2±1.2  64.4±1.2  65.1±1.1
30 Train     63.0±1.2  69.6±0.8  70.6±0.9  71.6±0.7  71.6±1.0  72.3±0.7
Time (sec)   0.06      0.25      0.06      3.53      0.15      0.23

Local Self-Similarity (30 Dim.) [20]
Methods      KM        LLC [22]  PSD [11]  SC [24]   GLAS      GLAS+
15 Train     60.1±1.3  62.4±0.8  59.7±0.8  64.8±0.9  62.3±1.2  63.8±0.9
30 Train     63.0±1.2  69.7±1.3  67.2±0.9  72.5±1.6  69.8±1.4  71.0±1.1
Time (sec)   0.05      0.24      0.05      1.97      0.13      0.18

4 Experimental Results

This section evaluates GLAS and GLAS+ on several challenging categorization datasets.
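Putting Eqns. (13) and (14) together, GLAS+ inference reduces to a table lookup plus one small linear solve. A minimal sketch follows; the lookup table p below is a hypothetical shrinkage-shaped stand-in for the one learnt with the generalized lasso, used only so the example is self-contained:

```python
import numpy as np

def glas_plus(x, B, p, q, beta=0.25):
    """Sketch of GLAS+ inference: the O(K) lookup of Eqn. (13) followed by
    the post-refinement of Eqn. (14) on the active components only."""
    K, Q = p.shape
    z = B.T @ x
    # Eqn. (13): piece-wise linear lookup, using each row of p as a table.
    j = np.clip(np.searchsorted(q, z, side='right'), 1, Q - 1)
    r = (z - q[j - 1]) / (q[j] - q[j - 1])
    rows = np.arange(K)
    u = (1.0 - r) * p[rows, j - 1] + r * p[rows, j]
    u /= np.abs(u).sum() + 1e-12                  # l1 normalization
    # Eqn. (14): refine the Z active components by regularized least squares.
    phi = np.flatnonzero(u)
    Bphi = B[:, phi]
    v_phi = np.linalg.solve(Bphi.T @ Bphi + beta * np.eye(len(phi)),
                            Bphi.T @ x + beta * u[phi])
    v = np.zeros(K)
    v[phi] = v_phi
    return v

rng = np.random.default_rng(2)
D, K, Q = 16, 64, 10
B = rng.standard_normal((D, K))
B /= np.linalg.norm(B, axis=0)                    # unit-norm bases, as in Eqn. (1)
q = np.linspace(-1.0, 1.0, Q)                     # z_k = b_k' x lies in [-1, 1] here
# Hypothetical table: every row a soft-shrinkage shape (illustration only).
p = np.tile(np.sign(q) * np.maximum(np.abs(q) - 0.3, 0.0), (K, 1))
x = rng.standard_normal(D)
x /= np.linalg.norm(x)
v = glas_plus(x, B, p, q)
```

Only the Z active bases enter the linear solve, which is where the O(K + Z^2) total complexity comes from; the inactive components stay exactly zero.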
To learn the mapping function, we use 50,000 local descriptors as data samples. The parameters Q, α and β are fixed to 10, 0.1 and 0.25 respectively for all experiments, unless otherwise stated. For comparison, we have implemented the methods discussed in Section 2. SC is our re-implementation of Yang et al. [24]. LLC is locality-constrained linear coding proposed by Wang et al. [22]; the number of nearest neighbors to consider is set to 5. PSD is predictive sparse decomposition [11], with the shrinkage function as its parametric mapping. We also include KM, which builds its codebook with k-means clustering and adopts hard assignment as its local descriptor coding. All methods use exactly the same local feature descriptors, spatial max pooling technique and linear SVM, so that only the local feature descriptor coding techniques differ. As for the descriptors, SIFT [15] and Local Self-Similarity [20] are used. SIFT is a histogram of gradient directions computed over an image patch, capturing appearance information; we sampled a 16×16 patch every 8 pixels. In contrast, Local Self-Similarity computes the correlation between a small image patch of interest and its surrounding region, capturing the geometric layout of a local region. Spatial max pooling with 1×1, 2×2 and 4×4 image partitions is used. The implementation is all done in MATLAB for fair comparison.

4.1 Caltech-101

The Caltech-101 dataset [5] consists of 9144 images divided into 101 object categories. The images are scaled down to 300×300, preserving their aspect ratios. We train with 15/30 images per class and test with 15 images per class. The dictionary size of each method is set to 1024 for both SIFT and Local Self-Similarity. The results are averaged over eight random training and testing splits and are reported in Table 1.
For SIFT, GLAS+ is consistently better than GLAS, demonstrating the effectiveness of the mutual inhibition performed by the post-refinement procedure. Both GLAS and GLAS+ perform better than the other fast algorithms that produce sparse codes. In addition, GLAS and GLAS+ perform competitively against SC; in fact, GLAS+ is slightly better when 30 training images per class are used. While the sparse codes for both GLAS and GLAS+ are learned from the solutions of SC, the approximated codes are not exactly the same as those of SC. Moreover, SC sometimes produces unstable codes due to the non-smooth convexity of the ℓ1 norm, as previously observed in [6]. In contrast, GLAS+ approximates its sparse codes with a relatively smooth piece-wise linear mapping function learned with the generalized lasso (note that the penalty in Eqn. (10) discourages changes in the shape of the function) and performs a smooth post-refinement. We suspect these differences may contribute to the slightly better results of GLAS+ on this dataset. Although PSD performs quite close to GLAS for SIFT, this is not the case for Local Self-Similarity, where GLAS outperforms PSD, probably because the distribution of sparse codes is not captured well by a simple shrinkage function. Therefore, GLAS might be more effective for a wide range of distributions. This is useful for recognition using multiple feature types where speed is critical. GLAS performs worse than SC, but GLAS+ closes the gap between GLAS and SC.

[Figure 2: Average recognition on Caltech-101 for SC, GLAS and GLAS+ as a function of (a) Q, the number of bins used to quantize the interval of each sparse code component, and (b) α, the weight of the penalty norm in the generalized lasso. (c) When some data samples are missing, GLAS is more robust than the regularized least squares of Eqn. (8).]

We suspect that because Local Self-Similarity (30 dim.) is lower-dimensional than SIFT (128 dim.), the mutual inhibition becomes more important. This might also explain why LLC performs reasonably well for this descriptor. Table 1 also reports the computational time taken to process 1000 local descriptors with each method. GLAS and GLAS+ are slower than KM and PSD, but are slightly faster than LLC and significantly faster than SC. This demonstrates the practical importance of our approach, where competitive recognition results are achieved with fast computation. Different values of Q, α and β are evaluated one parameter at a time. Figure 2 (a) shows the results for different Q; the results are very stable beyond 10 bins. Since sparse codes are computed by Eqn. (13), the time complexity is not affected by the choice of Q. Figure 2 (b) shows the results for different α, which are also very stable, and we observe similar stability for β. We also validate whether the generalized lasso of Eqn. (10) is more robust than the regularized least squares of Eqn. (8) when some data samples are missing. When learning each p_k, we artificially remove data samples from an interval centered around a randomly sampled point, as illustrated in Figure 1 (c). We evaluate with different amounts of removed data, expressed as percentages of the whole data sample set. The results are shown in Figure 2 (c): the performance of RLS drops significantly as the amount of missing data increases, whereas both GLAS and GLAS+ are not affected much.

4.2 Caltech-256

Caltech-256 [8] contains 30,607 images and 256 object categories in total. As with Caltech-101, we scale the images down to 300×300, preserving their aspect ratios. The results are averaged over eight random training and testing splits and are reported in Table 2. We use 25 testing images per class.
This time, for SIFT, GLAS performs slightly worse than SC, but GLAS+ outperforms SC, probably for the same reasons given for the Caltech-101 experiments. For Local Self-Similarity, results similar to Caltech-101 are obtained. The performance of PSD is close to that of KM and is outperformed by GLAS, suggesting an inadequate fit of the sparse codes. LLC performs slightly better than GLAS, but not better than GLAS+. While SC performs best, the performance of GLAS+ is quite close to it. We also plot the computational time taken by each method against its achieved accuracy on SIFT and Local Self-Similarity in Figure 3 (a) and (b) respectively.

Table 2: Recognition accuracy on Caltech-256. The dictionary sizes are all set to 2048 for SIFT and 1024 for Local Self-Similarity.

SIFT (128 Dim.) [15]
Methods    KM        LLC [22]  PSD [11]  SC [24]   GLAS      GLAS+
15 Train   22.7±0.4  28.1±0.5  30.4±0.6  30.7±0.4  30.4±0.4  32.1±0.4
30 Train   27.4±0.5  34.0±0.6  36.3±0.5  36.8±0.4  36.1±0.4  38.2±0.4

Local Self-Similarity (30 Dim.) [20]
Methods    KM        LLC [22]  PSD [11]  SC [24]   GLAS      GLAS+
15 Train   23.7±0.4  26.3±0.5  24.3±0.6  28.7±0.5  26.0±0.5  27.6±0.6
30 Train   28.5±0.4  31.9±0.5  29.3±0.5  34.7±0.4  31.2±0.5  33.3±0.5

[Figure 3: Computational time vs. average recognition. (a) and (b) are SIFT and Local Self-Similarity respectively, evaluated on Caltech-256 with 30 training images and a dictionary size of 2048. (c) is SIFT evaluated on 15 Scenes with a dictionary size of 1024.]

4.3 15 Scenes

The 15 Scenes [13] dataset contains 4485 images divided into 15 scene classes, ranging from indoor to outdoor scenes.
100 training images per class are used for training and the rest for testing. We used SIFT to learn 1024 dictionary bases for each method. The results are plotted against computational time in Figure 3 (c). The result of GLAS+ (80.6%) is very close to that of SC (80.7%), yet the former is significantly faster. In summary, we have shown that our approach works well on three different challenging datasets.
5 Conclusion
This paper has presented an approximation of ℓ1 sparse coding based on the generalized lasso, called GLAS. This is further extended with a post-refinement procedure to handle mutual inhibition between bases, which is essential in an overcomplete setting. The experiments have shown competitive performance of GLAS against SC, with a significant computational speed-up. We have also demonstrated the effectiveness of GLAS on two local descriptor types, namely SIFT and Local Self-Similarity, where LLC and PSD each perform well on only one type. GLAS is not restricted to approximating ℓ1 sparse coding, but should be applicable to other variations of sparse coding in general. For example, it may be interesting to try GLAS on Laplacian sparse coding [6], which achieves smoother sparse codes than ℓ1 sparse coding.
Acknowledgment
NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.
References
[1] A. Adler, Y. Hel-Or, and M. Elad. A Shrinkage Learning Approach for Single Image Super-Resolution with Overcomplete Representations. In ECCV, 2010.
[2] D.L. Donoho. For Most Large Underdetermined Systems of Linear Equations the Minimal L1-norm Solution is also the Sparse Solution. Communications on Pure and Applied Mathematics, 2006.
[3] D.L. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via L1 minimization. PNAS, 100(5):2197–2202, 2003.
[4] B. Efron, T.
Hastie, I. Johnstone, and R. Tibshirani. Least Angle Regression. Annals of Statistics, 2004.
[5] L. Fei-Fei, R. Fergus, and P. Perona. Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories. In CVPR Workshop, 2004.
[6] S. Gao, W. Tsang, L. Chia, and P. Zhao. Local Features Are Not Lonely - Laplacian Sparse Coding for Image Classification. In CVPR, 2010.
[7] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In ICML, 2010.
[8] G. Griffin, A. Holub, and P. Perona. Caltech-256 Object Category Dataset. Technical Report, California Institute of Technology, 2007.
[9] Y. Hel-Or and D. Shaked. A Discriminative Approach for Wavelet Denoising. TIP, 2008.
[10] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the Best Multi-Stage Architecture for Object Recognition? In ICCV, 2009.
[11] K. Kavukcuoglu, M. Ranzato, and Y. LeCun. Fast inference in sparse coding algorithms with applications to object recognition. Technical Report CBLL-TR-2008-12-01, Computational and Biological Learning Lab, Courant Institute, NYU, 2008.
[12] S.-J. Kim, K. Koh, S. Boyd, and D. Gorinevsky. L1 trend filtering. SIAM Review, 2009.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories. In CVPR, 2006.
[14] H. Lee, A. Battle, R. Raina, and A.Y. Ng. Efficient sparse coding algorithms. In NIPS, 2006.
[15] D.G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. IJCV, 2004.
[16] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised Dictionary Learning. In NIPS, 2008.
[17] J. Mairal, M. Leordeanu, F. Bach, M. Hebert, and J. Ponce. Discriminative Sparse Image Models for Class-Specific Edge Detection and Image Interpretation. In ECCV, 2008.
[18] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37, 1997.
[19] M. Ranzato, F.J.
Huang, Y. Boureau, and Y. LeCun. Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition. In CVPR, 2007.
[20] E. Shechtman and M. Irani. Matching Local Self-Similarities across Images and Videos. In CVPR, 2007.
[21] R. Tibshirani and J. Taylor. The Solution Path of the Generalized Lasso. The Annals of Statistics, 2010.
[22] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained Linear Coding for Image Classification. In CVPR, 2010.
[23] J. Yang, J. Wright, T. Huang, and Y. Ma. Image Super-Resolution via Sparse Representation. TIP, 2010.
[24] J. Yang, K. Yu, Y. Gong, and T.S. Huang. Linear spatial pyramid matching using sparse coding for image classification. In CVPR, 2009.
[25] J. Yang, K. Yu, and T. Huang. Supervised Translation-Invariant Sparse Coding. In CVPR, 2010.
[26] K. Yu, T. Zhang, and Y. Gong. Nonlinear Learning using Local Coordinate Coding. In NIPS, 2009.
|
2011
|
21
|
4,269
|
Better Mini-Batch Algorithms via Accelerated Gradient Methods Andrew Cotter Toyota Technological Institute at Chicago cotter@ttic.edu Ohad Shamir Microsoft Research, NE ohadsh@microsoft.com Nathan Srebro Toyota Technological Institute at Chicago nati@ttic.edu Karthik Sridharan Toyota Technological Institute at Chicago karthik@ttic.edu
Abstract
Mini-batch algorithms have been proposed as a way to speed up stochastic convex optimization problems. We study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up, and propose a novel accelerated gradient algorithm which deals with this deficiency, enjoys a uniformly superior guarantee, and works well in practice.
1 Introduction
We consider a stochastic convex optimization problem of the form min_{w∈W} L(w), where L(w) = E_z[ℓ(w, z)], based on an empirical sample of instances z_1, ..., z_m. We assume that W is a convex subset of some Hilbert space (which in this paper we will take to be Euclidean space), and that ℓ is non-negative, convex and smooth in its first argument (i.e., has a Lipschitz-continuous gradient). The classical learning application is when z = (x, y) and ℓ(w, (x, y)) is a prediction loss. In recent years, there has been much interest in developing efficient first-order stochastic optimization methods for these problems, such as mirror descent [2, 6] and dual averaging [9, 16]. These methods are characterized by incremental updates based on subgradients ∂ℓ(w, z_i) of individual instances, and enjoy the advantages of being highly scalable and simple to implement. An important limitation of these methods is that they are inherently sequential, and thus problematic to parallelize.
A popular way to speed up these algorithms, especially in a parallel setting, is via mini-batching, where the incremental update is performed on an average of the subgradients with respect to several instances at a time, rather than a single instance (i.e., (1/b) Σ_{j=1}^{b} ∂ℓ(w, z_{i+j})). The gradient computations for each mini-batch can be parallelized, allowing these methods to run faster in a distributed framework (see for instance [11]). Recently, [10] has shown that a mini-batching distributed framework is capable of attaining asymptotically optimal speed-up in general (see also [1]). A parallel development has been the popularization of accelerated gradient descent methods [7, 8, 15, 5]. In a deterministic optimization setting and for general smooth convex functions, these methods enjoy a rate of O(1/n²) (where n is the number of iterations), as opposed to O(1/n) using standard methods. However, in a stochastic setting (which is the relevant one for learning problems), the rates of both approaches have an O(1/√n) dominant term in general, so the benefit of using accelerated methods for learning problems is not obvious.
Algorithm 1: Stochastic Gradient Descent with Mini-Batching (SGD)
Parameters: step size η, mini-batch size b. Input: sample z_1, ..., z_m
  w_1 = 0
  for i = 1 to n = m/b do
    ℓ_i(w) := (1/b) Σ_{t=b(i−1)+1}^{bi} ℓ(w, z_t)
    w'_{i+1} := w_i − η ∇ℓ_i(w_i)
    w_{i+1} := P_W(w'_{i+1})
  end for
  Return w̄ = (1/n) Σ_{i=1}^{n} w_i
Algorithm 2: Accelerated Gradient Method (AG)
Parameters: step sizes (γ_i, β_i), mini-batch size b. Input: sample z_1, ..., z_m
  w_1 = w^ag_1 = 0
  for i = 1 to n = m/b do
    ℓ_i(w) := (1/b) Σ_{t=b(i−1)+1}^{bi} ℓ(w, z_t)
    w^md_i := β_i^{−1} w_i + (1 − β_i^{−1}) w^ag_i
    w'_{i+1} := w^md_i − γ_i ∇ℓ_i(w^md_i)
    w_{i+1} := P_W(w'_{i+1})
    w^ag_{i+1} := β_i^{−1} w_{i+1} + (1 − β_i^{−1}) w^ag_i
  end for
  Return w^ag_n
In this paper, we study the application of accelerated methods to mini-batch algorithms, and provide theoretical results, a novel algorithm, and empirical experiments.
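The two procedures above can be sketched on a toy problem. The sketch below uses the 1-D squared loss ℓ(w, z) = ½(w − z)², projection onto the interval [−D, D], and step sizes chosen by hand for illustration; it does not follow the tuned choices of Theorems 1 and 2 (the problem, the constants and the seed are all invented for the example):

```python
import random

def proj(w, D):                     # P_W: projection onto the ball {w : |w| <= D}
    return max(-D, min(D, w))

def grad(w, batch):                 # gradient of (1/b) * sum 0.5*(w - z)^2 over a mini-batch
    return sum(w - z for z in batch) / len(batch)

def sgd(zs, b, eta, D):             # Algorithm 1: mini-batch SGD, returns the averaged iterate
    w, avg, n = 0.0, 0.0, len(zs) // b
    for i in range(n):
        w = proj(w - eta * grad(w, zs[i*b:(i+1)*b]), D)
        avg += w / n
    return avg

def ag(zs, b, gamma, D):            # Algorithm 2: accelerated gradient, returns w^ag_n
    w, wag, n = 0.0, 0.0, len(zs) // b
    for i in range(1, n + 1):
        beta = (i + 1) / 2.0        # beta_i = (i+1)/2, as in Theorem 2
        wmd = w / beta + (1 - 1 / beta) * wag
        w = proj(wmd - gamma * i * grad(wmd, zs[(i-1)*b:i*b]), D)  # gamma_i = gamma * i (p = 1)
        wag = w / beta + (1 - 1 / beta) * wag
    return wag

random.seed(1)
zs = [1.5 + random.gauss(0, 1) for _ in range(4096)]  # L(w) = E[0.5*(w-z)^2], minimized at w* = E[z]
D = 4.0
w_sgd = sgd(zs, b=16, eta=0.5, D=D)
w_ag = ag(zs, b=16, gamma=0.0075, D=D)
print(w_sgd, w_ag)                  # both should land near w* = 1.5
```

Note how each iteration touches b = 16 instances through a single averaged gradient; in a distributed implementation those b gradient evaluations are the part that parallelizes.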
The main resulting message is that by using an appropriate accelerated method, we obtain significantly better stochastic optimization algorithms in terms of convergence speed. Moreover, in certain regimes acceleration is actually necessary in order to allow significant speedups. The potential benefit of acceleration for mini-batching has been briefly noted in [4], but here we study this issue in much more depth. In particular, we make the following contributions:
• We develop novel convergence bounds for the standard gradient method, which refine the results of [10, 4] by depending on L(w*), the expected loss of the best predictor in our class. For example, we show that in the regime where the desired suboptimality is comparable to or larger than L(w*), including the separable case L(w*) = 0, mini-batching does not lead to significant speed-ups with standard gradient methods.
• We develop a novel variant of the stochastic accelerated gradient method [5], which is optimized for a mini-batch framework and implicitly adaptive to L(w*).
• We provide an analysis of our accelerated algorithm, refining the analysis of [5] by depending on L(w*), and show how it always allows for significant speedups via mini-batching, in contrast to standard gradient methods. Moreover, its performance is uniformly superior, at least in terms of theoretical upper bounds.
• We provide an empirical study, validating our theoretical observations and the efficacy of our new method.
2 Preliminaries
As discussed in the introduction, we focus on a stochastic convex optimization problem, where we wish to minimize L(w) = E_z[ℓ(w, z)] over some convex domain W, using an i.i.d. sample z_1, ..., z_m. Throughout this paper we assume that the instantaneous loss ℓ(·, z) is convex, non-negative and H-smooth for each z ∈ Z. Also, in this paper we take W to be the set W = {w : ‖w‖ ≤ D}, although our results can be generalized.
We discuss two stochastic optimization approaches to this problem: stochastic gradient descent (SGD) and accelerated gradient methods (AG). In a mini-batch setting, both approaches iteratively average sub-gradients with respect to several instances and use this average to update the predictor. However, the update is done in different ways. The stochastic gradient descent algorithm (which in more general settings is known as mirror descent, e.g. [6]) is summarized as Algorithm 1. In the pseudocode, P_W refers to the projection onto the ball W, which amounts to rescaling w to have norm at most D. The accelerated gradient method (e.g., [5]) is summarized as Algorithm 2. In terms of existing results, for the SGD algorithm we have [4, Section 5.1]
E[L(w̄)] − L(w*) ≤ O(1/√m + b/m),
whereas for an accelerated gradient algorithm we have [5]
E[L(w^ag_n)] − L(w*) ≤ O(1/√m + b²/m²).
Thus, as long as b = o(√m), both methods allow us to use a large mini-batch size b without significantly degrading performance. This allows the number of iterations n = m/b to be smaller, potentially resulting in faster convergence. However, these bounds do not show that accelerated methods have a significant advantage over the SGD algorithm, at least when b = o(√m), since both have the same first-order term 1/√m. To understand the differences between these two methods better, we will need a more refined analysis, to which we now turn.
3 Convergence Guarantees
The following theorems provide refined convergence guarantees for the SGD algorithm and the AG algorithm, improving on the analyses of [10, 4, 5] by depending explicitly on L(w*), the expected loss of the best predictor w* in W. Due to lack of space, the proofs are only sketched; the full proofs are deferred to the supplementary material. Theorem 1.
For the stochastic gradient descent algorithm with
η = min{ 1/(2H), √(bD²/(L(w*)Hn)) / (1 + √(HD²/(L(w*)bn))) },
assuming L(0) ≤ HD², we get that
E[L(w̄)] − L(w*) ≤ √(HD²L(w*)/(2bn)) + 2HD²/n + 9HD²/(bn).
Theorem 2. For the accelerated descent algorithm with β_i = (i+1)/2 and γ_i = γ·i^p, where
γ = min{ 1/(4H), √(bD²/(412·H·L(w*)·(n−1)^{2p+1})), (b/(1044·H·(n−1)^{2p}))^{(p+1)/(2p+1)} · (D²/(4HD² + √(4HD²L(w*))))^{p/(2p+1)} }   (1)
and
p = min{ max{ log(b)/(2·log(n−1)), log log(n)/(2·(log(b(n−1)) − log log(n))) }, 1 },   (2)
as long as n ≥ 904, we have that
E[L(w^ag_n)] − L(w*) ≤ 358·√(HD²L(w*)/(b(n−1))) + 1545·HD²/(√b·(n−1)) + 1428·HD²·√(log n)/(b(n−1)) + 4HD²/(n−1)².
We emphasize that Theorem 2 gives more than a theoretical bound: it actually specifies a novel accelerated gradient strategy, where the step size γ_i scales polynomially in i, in a way dependent on the mini-batch size b and L(w*). While L(w*) may not be known in advance, this does have the practical implication that choosing γ_i ∝ i^p for some p < 1, as opposed to just choosing γ_i ∝ i as in [5], might yield superior results. The key observation used for analyzing the dependence on L(w*) is that for any non-negative H-smooth convex function f : W → R, we have [13]:
‖∇f(w)‖ ≤ √(4H·f(w)).   (3)
This self-bounding property tells us that the norm of the gradient is small at a point if the loss itself is small at that point. It has been used in [14] in the online setting and in [13] in the stochastic setting to obtain better (faster) rates of convergence for non-negative smooth losses. The implication of this observation is that for any w ∈ W, ‖∇L(w)‖ ≤ √(4H·L(w)), and for all z ∈ Z, ‖∇ℓ(w, z)‖ ≤ √(4H·ℓ(w, z)).
Proof sketch for Theorem 1. The proof of the stochastic gradient descent bound is mainly based on the proof techniques of [5] and their extension to the mini-batch case in [10].
Following the line of analysis in [5], one can show that
E[(1/n) Σ_{i=1}^{n} L(w_i)] − L(w*) ≤ (η/(n−1)) Σ_{i=1}^{n−1} E[‖∇L(w_i) − ∇ℓ_i(w_i)‖²] + D²/(2η(n−1)).
In [5], E[‖∇L(w_i) − ∇ℓ_i(w_i)‖²] is bounded by the variance, and that leads to the final bound provided in [5] (by setting η appropriately). As noticed in [10], in the mini-batch setting we have ∇ℓ_i(w_i) = (1/b) Σ_{t=b(i−1)+1}^{bi} ∇ℓ(w_i, z_t), and so one can further show that
E[(1/n) Σ_{i=1}^{n} L(w_i)] − L(w*) ≤ (η/(b²(n−1))) Σ_{i=1}^{n−1} Σ_{t=(i−1)b+1}^{ib} E[‖∇L(w_i) − ∇ℓ(w_i, z_t)‖²] + D²/(2η(n−1)).   (4)
In [10], each term ‖∇L(w_i) − ∇ℓ(w_i, z_t)‖ is bounded by σ₀, and setting η accordingly yields the mini-batch bound provided there. In our analysis we instead apply the self-bounding property to (4) and get that
E[(1/n) Σ_{i=1}^{n} L(w_i)] − L(w*) ≤ (16Hη/(b(n−1))) Σ_{i=1}^{n−1} E[L(w_i)] + D²/(2η(n−1));
rearranging and setting η appropriately gives the final bound.
Proof sketch for Theorem 2. The proof for the accelerated method starts in a similar way as in [5]. For the γ_i's and β_i's given in the theorem, following similar lines of analysis as in [5], we get the preliminary bound
E[L(w^ag_n)] − L(w*) ≤ (2γ/(n−1)^{p+1}) Σ_{i=1}^{n−1} i^{2p} E[‖∇L(w^md_i) − ∇ℓ_i(w^md_i)‖²] + D²/(γ(n−1)^{p+1}).
In [5] the step sizes are γ_i = γ(i+1)/2 and β_i = (i+1)/2, which effectively amounts to p = 1, and the analysis then proceeds similarly to the stochastic gradient descent case. Furthermore, each E[‖∇L(w^md_i) − ∇ℓ_i(w^md_i)‖²] is assumed to be bounded by some constant, which leads to the final bound provided in [5] by setting γ appropriately.
On the other hand, we first notice that due to the mini-batch setting, just as in the proof for stochastic gradient descent,
E[L(w^ag_n)] − L(w*) ≤ (2γ/(b²(n−1)^{p+1})) Σ_{i=1}^{n−1} i^{2p} Σ_{t=b(i−1)+1}^{ib} E[‖∇L(w^md_i) − ∇ℓ(w^md_i, z_t)‖²] + D²/(γ(n−1)^{p+1}).
Using smoothness, the self-bounding property and some manipulations, we can further get the bound
E[L(w^ag_n)] − L(w*) ≤ (64Hγ/(b(n−1)^{1−p})) Σ_{i=1}^{n−1} (E[L(w^ag_i)] − L(w*)) + 64HγL(w*)(n−1)^p/b + D²/(γ(n−1)^{p+1}) + 32HD²/(b(n−1)).
Notice that the above recursively bounds E[L(w^ag_n)] − L(w*) in terms of Σ_{i=1}^{n−1} (E[L(w^ag_i)] − L(w*)). While unrolling the recursion all the way down does not help, we notice that for any w ∈ W, L(w) − L(w*) ≤ 12HD² + 3L(w*). Hence we unroll the recursion for M steps and use this inequality for the remaining sum. Optimizing over the number of steps up to which we unroll, and also over the choice of γ, we get the bound
E[L(w^ag_n)] − L(w*) ≤ √(1648·HD²L(w*)/(b(n−1))) + (348(6HD² + 2L(w*))/(b(n−1)))·(b(n−1))^{p/(p+1)} + 32HD²/(b(n−1)) + 4HD²/(n−1)^{p+1} + (36HD²/(b(n−1)))·log(n)·(b(n−1))^{p/(2p+1)}.
Using the p given in the theorem statement, and a few simple manipulations, gives the final bound.
4 Optimizing with Mini-Batches
To compare our two theorems and understand their implications, it will be convenient to treat H and D as constants and focus on the more interesting parameters: the sample size m, the mini-batch size b, and the optimal expected loss L(w*). Also, we will ignore the logarithmic factor in Theorem 2, since we will mostly be interested in significant (i.e. polynomial) differences between the two algorithms, and it is quite possible that this logarithmic factor is merely an artifact of our analysis. Using m = nb, we get that the bound for the SGD algorithm is
E[L(w̄)] − L(w*) ≤ Õ(√(L(w*)/(bn)) + 1/n) = Õ(√(L(w*)/m) + b/m),   (5)
and the bound for the accelerated gradient method we propose is
E[L(w^ag_n)] − L(w*) ≤ Õ(√(L(w*)/(bn)) + 1/(√b·n) + 1/n²) = Õ(√(L(w*)/m) + √b/m + b²/m²).
(6)
To understand the implications of these bounds, we follow the approach described in [3, 12] for analyzing large-scale learning algorithms. First, we fix a desired suboptimality parameter ε, which measures how close to L(w*) we want to get. Then, we assume that both algorithms are run until the suboptimality of their outputs is at most ε. Our goal is to understand the runtime each algorithm needs to attain suboptimality ε, as a function of L(w*), ε and b. To measure this runtime, we need to distinguish two settings: a parallel setting, where we assume that the mini-batch gradient computations are performed in parallel, and a serial setting, where the gradient computations are performed one after the other. In the parallel setting, we can take the number of iterations n as a rough measure of the runtime (note that in both algorithms, the runtime of a single iteration is comparable). In the serial setting, the relevant parameter is m, the number of data accesses. To analyze the dependence on m and n, we upper bound (5) and (6) by ε and invert them to get bounds on m and n. Ignoring logarithmic factors, for the SGD algorithm we get
n ≤ (1/ε)·(L(w*)/(εb) + 1),   m ≤ (1/ε)·(L(w*)/ε + b),   (7)
and for the AG algorithm we get
n ≤ (1/ε)·(L(w*)/(εb) + 1/√b) + 1/√ε,   m ≤ (1/ε)·(L(w*)/ε + √b) + b/√ε.   (8)
First, let us compare the performance of the two algorithms in the parallel setting, where the relevant runtime measure is n. Analyzing which of the terms in each bound dominates, we get that for the SGD algorithm there are two regimes, while for the AG algorithm there are two to three regimes, depending on the relationship between L(w*) and ε.
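The inverted iteration bounds (7) and (8) can also be compared numerically; a small sketch (H and D absorbed into the hidden constants, which are all set to 1, and the particular values of L(w*) and ε invented for illustration):

```python
def n_sgd(L, eps, b):
    # bound (7): n <~ (1/eps) * (L/(eps*b) + 1)
    return (1.0 / eps) * (L / (eps * b) + 1.0)

def n_ag(L, eps, b):
    # bound (8): n <~ (1/eps) * (L/(eps*b) + 1/sqrt(b)) + 1/sqrt(eps)
    return (1.0 / eps) * (L / (eps * b) + b ** -0.5) + eps ** -0.5

L, eps = 0.01, 0.01   # regime eps = Omega(L(w*)): SGD exhausts its parallel speedup early
for b in (1, 10, 100, 1000, 10000):
    print(b, n_sgd(L, eps, b), n_ag(L, eps, b))
```

For these values the SGD bound flattens at roughly 1/ε = 100 iterations once b exceeds L(w*)/ε = 1, while the AG bound keeps shrinking roughly as 1/√b until its 1/√ε term takes over, matching the regime discussion above.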
The following two tables summarize the situation (again, ignoring constants):
SGD Algorithm:
  Regime b ≤ √(L(w*)m):               n = L(w*)/(ε²b)
  Regime b ≥ √(L(w*)m):               n = 1/ε
AG Algorithm:
  Case ε ≤ L(w*)²:
    Regime b ≤ L(w*)^{1/4}·m^{3/4}:   n = L(w*)/(ε²b)
    Regime b ≥ L(w*)^{1/4}·m^{3/4}:   n = 1/√ε
  Case ε ≥ L(w*)²:
    Regime b ≤ √(L(w*)m):             n = L(w*)/(ε²b)
    Regime √(L(w*)m) ≤ b ≤ m^{2/3}:   n = 1/(ε√b)
    Regime b ≥ m^{2/3}:               n = 1/√ε
From the tables, we see that for both methods there is an initial linear speedup as a function of the mini-batch size b. However, for the AG algorithm this linear speedup regime holds for much larger mini-batch sizes.¹ Even beyond the linear speedup regime, the AG algorithm still maintains a √b speedup, for the reasonable case ε ≥ L(w*)². Finally, in all regimes, the runtime bound of the AG algorithm is equal to or significantly smaller than that of the SGD algorithm. We now turn to the serial setting, where the runtime is measured in terms of m. Inspecting (7) and (8), we see that a larger b actually requires m to increase for both algorithms. This is to be expected, since mini-batching does not lead to large gains in a serial setting. However, using mini-batching in a serial setting might still be beneficial for implementation reasons, resulting in constant-factor improvements in runtime (e.g. saving overhead and loop control, and via pipelining, concurrent memory accesses, etc.). In that case, we can at least ask what is the largest mini-batch size that won't degrade the runtime guarantee by more than a constant. Using our bounds, the mini-batch size b for the SGD algorithm can scale as L(w*)/ε, vs. a larger value of L(w*)/ε^{3/2} for the AG algorithm. Finally, an interesting point is that the AG algorithm is sometimes actually necessary to obtain significant speed-ups via a mini-batch framework (according to our bounds). Based on the tables above, this happens when the desired suboptimality ε is not much bigger than L(w*), i.e. ε = Ω(L(w*)).
This includes the "separable" case, L(w*) = 0, and in general a regime where the "estimation error" ε and "approximation error" L(w*) are roughly the same—an arguably very relevant one in machine learning. For the SGD algorithm, the critical mini-batch value √(L(w*)m) can be shown to equal L(w*)/ε, which is O(1) in this case. So with SGD we get no non-constant parallel speedup. With AG, however, we still enjoy a speedup of at least Θ(√b), all the way up to mini-batch size b = m^{2/3}.
5 Experiments
We implemented both the SGD algorithm (Algorithm 1) and the AG algorithm (Algorithm 2, using step sizes of the form γ_i = γ·i^p as suggested by Theorem 2) on two publicly available binary classification problems, astro-physics and CCAT. We used the smoothed hinge loss ℓ(w; x, y), defined as 0.5 − y⟨w, x⟩ if y⟨w, x⟩ ≤ 0; 0 if y⟨w, x⟩ > 1; and 0.5(1 − y⟨w, x⟩)² otherwise. While both datasets are relatively easy to classify, we also wished to understand the algorithms' performance in the "separable" case L(w*) = 0, to see if the theory in Section 4 holds in practice. To this end, we created an additional version of each dataset with L(w*) = 0, by training a classifier on the entire dataset and removing margin violations. In all of our experiments, we used up to half of the data for training, and one-quarter each for validation and testing.
¹ Since it is easily verified that √(L(w*)m) is generally smaller than both L(w*)^{1/4}·m^{3/4} and m^{2/3}.
[Figure 1: Left: test smoothed hinge loss as a function of p, after training with the AG algorithm on 6361 examples from astro-physics, for various batch sizes (b = 4, 16, 64). Right: the same for 18578 examples from CCAT (b = 16, 64, 256). In both datasets, margin violations were removed before training so that L(w*) = 0. The circled points are the theoretically derived values p = ln b / (2 ln(n−1)) (see Theorem 2).]
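The smoothed hinge loss above is 1-smooth as a function of the margin s = y⟨w, x⟩, so the self-bounding property (3) can be checked directly in scalar form, ℓ'(s)² ≤ 4Hℓ(s) with H = 1. The following is a numerical spot check of that inequality, not part of the paper:

```python
def smoothed_hinge(s):
    """Smoothed hinge loss as a function of the margin s = y * <w, x>."""
    if s <= 0.0:
        return 0.5 - s
    if s > 1.0:
        return 0.0
    return 0.5 * (1.0 - s) ** 2

def smoothed_hinge_grad(s):
    """Derivative of the smoothed hinge loss with respect to the margin."""
    if s <= 0.0:
        return -1.0
    if s > 1.0:
        return 0.0
    return s - 1.0

H = 1.0
for k in range(-200, 301):          # margins s on a grid over [-2, 3]
    s = k / 100.0
    # self-bounding property (3) in scalar form: grad(s)^2 <= 4 * H * loss(s)
    assert smoothed_hinge_grad(s) ** 2 <= 4.0 * H * smoothed_hinge(s) + 1e-12
print("self-bounding check passed")
```

Note the smoothness constant with respect to w itself is H = ‖x‖² rather than 1; the scalar check above is the margin-space version of the same inequality.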
The validation set was used to determine the step sizes η and γ_i. We justify this by noting that our goal is to compare the performance of the SGD and AG algorithms, independently of the difficulties in choosing their step sizes. In the implementation, we neglected the projection step, as we found it does not significantly affect performance when the step sizes are properly selected. In our first set of experiments, we attempted to determine the relationship between the performance of the AG algorithm and the parameter p, which determines the rate of increase of the step sizes γ_i. These experiments are summarized in Figure 1. Perhaps the most important conclusion to draw from these plots is that neither the "traditional" choice p = 1 nor the constant-step-size choice p = 0 gives the best performance in all circumstances. Instead, there is a complicated data-dependent relationship between p and the final classifier's performance. Furthermore, there appears to be a weak trend towards higher p performing better for larger mini-batch sizes b, which corresponds neatly with our theoretical predictions. In our next experiment, we directly compared the performance of SGD and AG. To do so, we varied the mini-batch size b while holding the total amount of training data (m = nb) fixed. When L(w*) > 0 (top row of Figure 2), where the total sample size m is high and the suboptimality ε is low (red and black plots), we see that for small mini-batch sizes, both methods do not degrade as we increase b, corresponding to a linear parallel speedup. In fact, SGD is actually better overall, but as b increases, its performance degrades more quickly, eventually becoming worse than AG. That is, even in the least favorable scenario for AG (high L(w*) and small ε; see the tables in Sec. 4), it does give benefits with large enough mini-batch sizes. Further, we see that once the suboptimality is roughly equal to L(w*), AG significantly outperforms SGD, even with small mini-batches, in agreement with the theory.
Turning to the case L(w*) = 0 (bottom two rows of Figure 2), which is theoretically more favorable to AG, we see that it is indeed mostly better, in terms of retaining linear parallel speedups for larger mini-batch sizes, even for large dataset sizes corresponding to small suboptimality values, and it might even be advantageous with small mini-batch sizes.
6 Summary
In this paper, we presented novel contributions to the theory of first-order stochastic convex optimization (Theorems 1 and 2, generalizing results of [4] and [5] to be sensitive to L(w*)),
[Figure 2: Test loss on astro-physics and CCAT as a function of mini-batch size b (in log scale), where the total amount of training data m = nb is held fixed (legend values T = 31884, 7796, 1949 for astro-physics and T = 402207, 25137, 1571 for CCAT). Solid lines are SGD and dashed lines are AG (for AG, we used p = ln b / (2 ln(n−1)) as in Theorem 2). The upper row shows the smoothed hinge loss on the test set, using the original (uncensored) data. The bottom rows show the smoothed hinge loss and misclassification rate on the test set, using the modified data where L(w*) = 0. All curves are averaged over three runs.]
developed a novel step-size strategy for the accelerated method, which we used to obtain our results and which we saw works well in practice, and provided a more refined analysis of the effects of mini-batching, which paints a different picture than previous analyses [4, 1] and highlights the benefit of accelerated methods.
A remaining open practical and theoretical question is whether the bound of Theorem 2 is tight. Following [5], the bound is tight for b = 1 and b → ∞, i.e. the first and third terms are tight, but it is not clear whether the 1/(√b·n) dependence is indeed necessary. It would be interesting to understand whether, with a more refined analysis or perhaps different step sizes, we can avoid this term; whether an altogether different algorithm is needed; or whether this term represents the optimal behavior for any method based on b-aggregated stochastic gradient estimates.
References
[1] A. Agarwal and J. Duchi. Distributed delayed stochastic optimization. Technical report, arXiv, 2011.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[3] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In NIPS, 2007.
[4] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Technical report, arXiv, 2010.
[5] G. Lan. An optimal method for stochastic convex optimization. Technical report, Georgia Institute of Technology, 2009.
[6] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[7] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR, 269:543–547, 1983.
[8] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Program., 103(1):127–152, 2005.
[9] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, August 2009.
[10] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction. In ICML, 2011.
[11] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: primal estimated sub-gradient solver for SVM. Math.
Program., 127(1):3–30, 2011.
[12] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In ICML, 2008.
[13] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In NIPS, 2010.
[14] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, Hebrew University of Jerusalem, 2007.
[15] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM Journal on Optimization, 2008.
[16] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543–2596, 2010.
|
2011
|
210
|
4,270
|
Generalizing from Several Related Classification Tasks to a New Unlabeled Sample Gilles Blanchard Universität Potsdam blanchard@math.uni-potsdam.de Gyemin Lee, Clayton Scott University of Michigan {gyemin,clayscot}@umich.edu
Abstract
We consider the problem of assigning class labels to an unlabeled test data set, given several labeled training data sets drawn from similar distributions. This problem arises in several applications where data distributions fluctuate because of biological, technical, or other sources of variation. We develop a distribution-free, kernel-based approach to the problem. This approach involves identifying an appropriate reproducing kernel Hilbert space and optimizing a regularized empirical risk over the space. We present generalization error analysis, describe universal kernels, and establish universal consistency of the proposed methodology. Experimental results on flow cytometry data are presented.
1 Introduction
Is it possible to leverage the solution of one classification problem to solve another? This is a question that has received increasing attention in recent years from the machine learning community, and has been studied in a variety of settings, including multi-task learning, covariate shift, and transfer learning. In this work we study a new setting for this question, one that incorporates elements of the three aforementioned settings and is motivated by many practical applications. To state the problem, let X be a feature space and Y a space of labels to predict; to simplify the exposition, we assume the setting of binary classification, Y = {−1, 1}, although the methodology and results presented here are valid for general output spaces. For a given distribution P_XY, we refer to the X-marginal distribution P_X as simply the marginal distribution, and to the conditional P_XY(Y | X) as the posterior distribution. There are N similar but distinct distributions P^(i)_XY on X × Y, i = 1, ..., N.
For each i, there is a training sample S_i = (X_ij, Y_ij)_{1≤j≤n_i} of iid realizations of P^(i)_XY. There is also a test distribution P^T_XY that is similar to, but again distinct from, the "training distributions" P^(i)_XY. Finally, there is a test sample (X^T_j, Y^T_j)_{1≤j≤n_T} of iid realizations of P^T_XY, but in this case the labels Y^T_j are not observed. The goal is to correctly predict these unobserved labels. Essentially, given a random sample from the marginal test distribution P^T_X, we would like to predict the corresponding labels. Thus, when we say that the training and test distributions are "similar," we mean that there is some pattern making it possible to learn a mapping from marginal distributions to labels. We refer to this learning problem as learning marginal predictors. A concrete motivating application is given below. This problem may be contrasted with other learning problems. In multi-task learning, only the training distributions are of interest, and the goal is to use the similarity among distributions to improve the training of individual classifiers [1, 2, 3]. In our context, we view these distributions as "training tasks," and seek to generalize to a new distribution/task. In the covariate shift problem, the marginal test distribution is different from the marginal training distribution(s), but the posterior distribution is assumed to be the same [4]. In our case, both the marginal and posterior test distributions can differ from their training counterparts [5]. Finally, in transfer learning, it is typically assumed that at least a few labels are available for the test data, and the training data sets are used to improve the performance of a standard classifier, for example by learning a metric or embedding which is appropriate for all data sets [6, 7].
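The paper's kernel-based method comes later; as a hedged toy illustration of why a mapping from marginal distributions to labels can exist at all, consider tasks indexed by a shift θ, with X ~ N(θ, 1) and Y = sign(X − θ). Here the unlabeled test sample alone reveals θ through its sample mean, so a predictor that first "reads" the marginal and then thresholds generalizes to a brand-new task with no test labels. The generative model is invented for the example, and the marginal-to-label rule is hand-coded here rather than learned from the N training tasks, which is what the paper's method actually does:

```python
import random

random.seed(0)

def make_task(theta, n):
    """One task P_XY: X ~ N(theta, 1), Y = sign(X - theta)."""
    xs = [theta + random.gauss(0, 1) for _ in range(n)]
    ys = [1 if x > theta else -1 for x in xs]
    return xs, ys

def marginal_predictor(test_xs):
    # Estimate the task parameter from the unlabeled marginal sample,
    # then threshold. No test labels are used anywhere.
    theta_hat = sum(test_xs) / len(test_xs)
    return [1 if x > theta_hat else -1 for x in test_xs]

theta_T = 7.3                        # a new test task, unseen during "training"
xs, ys = make_task(theta_T, 2000)
preds = marginal_predictor(xs)
acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
print("test-task accuracy:", acc)    # close to 1 despite zero test labels
```

By contrast, any fixed classifier of the form sign(x − c) with a constant c fails on tasks whose θ is far from c, which is the sense in which the posterior, not just the marginal, differs across tasks.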
In our case, no test labels are available, but we hope that through access to multiple training data sets, it is still possible to obtain collective knowledge about the “labeling process” that may be transferred to the test distribution. Some authors have considered transductive transfer learning, which is similar to the problem studied here in that no test labels are available. However, existing work has focused on the case N = 1 and typically relies on the covariate shift assumption [8]. We propose a distribution-free, kernel-based approach to the problem of learning marginal predictors. Our methodology is shown to yield a consistent learning procedure, meaning that the generalization error tends to the best possible as the sample sizes N, {ni}, nT tend to infinity. We also offer a proof-of-concept experimental study validating the proposed approach on flow cytometry data, including comparisons to multi-task kernels and a simple pooling approach. 2 Motivating Application: Automatic Gating of Flow Cytometry Data Flow cytometry is a high-throughput measurement platform that is an important clinical tool for the diagnosis of many blood-related pathologies. This technology allows for quantitative analysis of individual cells from a given population, derived for example from a blood sample from a patient. We may think of a flow cytometry data set as a set of d-dimensional attribute vectors (Xj)1≤j≤n, where n is the number of cells analyzed, and d is the number of attributes recorded per cell. These attributes pertain to various physical and chemical properties of the cell. Thus, a flow cytometry data set is a random sample from a patient-specific distribution. Now suppose a pathologist needs to analyze a new (“test”) patient with data (XT j )1≤j≤nT . Before proceeding, the pathologist first needs the data set to be “purified” so that only cells of a certain type are present. 
For example, lymphocytes are known to be relevant for the diagnosis of leukemia, whereas non-lymphocytes may potentially confound the analysis. In other words, it is necessary to determine the label Y T j ∈{−1, 1} associated to each cell, where Y T j = 1 indicates that the j-th cell is of the desired type. In clinical practice this is accomplished through a manual process known as “gating.” The data are visualized through a sequence of two-dimensional scatter plots, where at each stage a line segment or polygon is manually drawn to eliminate a portion of the unwanted cells. Because of the variability in flow cytometry data, this process is difficult to quantify in terms of a small subset of simple rules. Instead, it requires domain-specific knowledge and iterative refinement. Modern clinical laboratories routinely see dozens of cases per day, so it would be desirable to automate this process. Since clinical laboratories maintain historical databases, we can assume access to a number (N) of historical patients that have already been expert-gated. Because of biological and technical variations in flow cytometry data, the distributions P (i) XY of the historical patients will vary. For example, Fig. 1 shows exemplary two-dimensional scatter plots for two different patients, where the shaded cells correspond to lymphocytes. Nonetheless, there are certain general trends that are known to hold for all flow cytometry measurements. For example, lymphocytes are known to exhibit low levels of the “side-scatter” (SS) attribute, while expressing high levels of the attribute CD45 (see column 2 of Fig. 1). More generally, virtually every cell type of interest has a known tendency (e.g., high or low) for most measured attributes. Therefore, it is reasonable to assume that there is an underlying distribution (on distributions) governing flow cytometry data sets, that produces roughly similar distributions thereby making possible the automation of the gating process. 
3 Formal Setting Let X denote the observation space and Y = {−1, 1} the output space. Let P_{X×Y} denote the set of probability distributions on X × Y, P_X the set of probability distributions on X, and P_{Y|X} the set of conditional probabilities of Y given X (also known as Markov transition kernels from X to Y), which we also call "posteriors" in this work.

Figure 1: Two-dimensional projections of multi-dimensional flow cytometry data. Each row corresponds to a single patient. The distribution of cells differs from patient to patient. Lymphocytes, a type of white blood cell, are marked dark (blue) and others are marked bright (green). These were manually selected by a domain expert.

The disintegration theorem (see for instance [9], Theorem 6.4) tells us that (under suitable regularity properties, e.g., X is a Polish space) any element P_XY ∈ P_{X×Y} can be written as a product P_XY = P_X • P_{Y|X}, with P_X ∈ P_X and P_{Y|X} ∈ P_{Y|X}. The space P_{X×Y} is endowed with the topology of weak convergence and the associated Borel σ-algebra. It is assumed that there exists a distribution µ on P_{X×Y}, where P^(1)_XY, ..., P^(N)_XY are i.i.d. realizations from µ, and the sample S_i is made of n_i i.i.d. realizations of (X, Y) following the distribution P^(i)_XY. Now consider a test distribution P^T_XY and test sample S^T = (X^T_j, Y^T_j)_{1≤j≤n_T}, whose labels are not observed. A decision function is a function f : P_X × X → R that predicts Ŷ_i = f(P̂_X, X_i), where P̂_X is the associated empirical X distribution. If ℓ : R × Y → R_+ is a loss, then the average loss incurred on the test sample is (1/n_T) Σ_{i=1}^{n_T} ℓ(Ŷ^T_i, Y^T_i). Based on this, we define the average generalization error of a decision function over test samples of size n_T,

  E(f, n_T) := E_{P^T_XY ~ µ} E_{S^T ~ (P^T_XY)^⊗n_T} [ (1/n_T) Σ_{i=1}^{n_T} ℓ(f(P̂^T_X, X^T_i), Y^T_i) ].  (1)

An important point of the analysis is that, at training time as well as at test time, the marginal distribution P_X for a sample is only known through the sample itself, that is, through the empirical marginal P̂_X. As is clear from equation (1), because of this the generalization error also depends on the test sample size n_T. As n_T grows, P̂^T_X will converge to P^T_X. This motivates the following generalization error when we have an infinite test sample, where we then assume that the true marginal P^T_X is observed:

  E(f, ∞) := E_{P^T_XY ~ µ} E_{(X^T, Y^T) ~ P^T_XY} [ ℓ(f(P^T_X, X^T), Y^T) ].  (2)

To gain some insight into this risk, let us decompose µ into two parts, µ_X which generates the marginal distribution P_X, and µ_{Y|X} which, conditioned on P_X, generates the posterior P_{Y|X}. Denote X̃ = (P_X, X). We then have

  E(f, ∞) = E_{P_X ~ µ_X} E_{P_{Y|X} ~ µ_{Y|X}} E_{X ~ P_X} E_{Y|X ~ P_{Y|X}} [ ℓ(f(X̃), Y) ]
          = E_{P_X ~ µ_X} E_{X ~ P_X} E_{P_{Y|X} ~ µ_{Y|X}} E_{Y|X ~ P_{Y|X}} [ ℓ(f(X̃), Y) ]
          = E_{(X̃, Y) ~ Q_µ} [ ℓ(f(X̃), Y) ].

Here Q_µ is the distribution that generates X̃ by first drawing P_X according to µ_X, and then drawing X according to P_X. Similarly, Y is generated, conditioned on X̃, by first drawing P_{Y|X} according to µ_{Y|X}, and then drawing Y from P_{Y|X}. From this last expression, we see that the risk is like a standard binary classification risk based on (X̃, Y) ~ Q_µ. Thus, we can deduce several properties that are known to hold for binary classification risks. For example, if the loss is the 0/1 loss, then f*(X̃) = 2η̃(X̃) − 1 is an optimal predictor, where η̃(X̃) = E_{Y ~ Q_µ(Y|X̃)} [ 1_{Y=1} ]. More generally,

  E(f, ∞) − E(f*, ∞) = E_{X̃ ~ Q_µ(X̃)} [ 1_{sign(f(X̃)) ≠ sign(f*(X̃))} |2η̃(X̃) − 1| ].

Our goal is a learning rule that asymptotically predicts as well as the global minimizer of (2), for a general loss ℓ. By the above observations, consistency with respect to a general ℓ (thought of as a surrogate) will imply consistency for the 0/1 loss, provided ℓ is classification calibrated [10].
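To make the two-stage sampling behind Q_µ concrete, the following toy simulation (an illustration only, not the paper's algorithm; the distributions here are invented) first draws a marginal P_X, then data from it, with the posterior a deterministic function of the marginal as in the Lemma 3.1 setting below. A rule that reads the decision boundary off the empirical marginal beats any fixed, marginal-blind rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_task(n):
    """One task from a toy mu: the marginal is N(mu, 1) with random center mu,
    and the label rule Y = sign(X - mu) depends on the marginal only
    (the posterior is a deterministic function of P_X)."""
    mu = rng.normal(0.0, 2.0)           # mu indexes the marginal P_X
    X = rng.normal(mu, 1.0, size=n)     # X_i ~ P_X
    Y = np.where(X > mu, 1, -1)         # Y_i determined by (P_X, X_i)
    return X, Y

def average_risks(n_tasks=500, n=200):
    """Compare a marginal-blind threshold at 0 with a marginal-aware rule
    that estimates mu from the unlabeled sample itself."""
    blind = aware = 0.0
    for _ in range(n_tasks):
        X, Y = draw_task(n)
        blind += np.mean(np.where(X > 0, 1, -1) != Y)
        aware += np.mean(np.where(X > X.mean(), 1, -1) != Y)
    return blind / n_tasks, aware / n_tasks
```

Averaged over tasks, the marginal-aware rule approaches the Bayes risk (zero in this toy), while the fixed rule pays for ignoring P_X.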
Despite the similarity to standard binary classification in the infinite sample case, we emphasize that the learning task here is different, because the realizations (X̃_ij, Y_ij) are neither independent nor identically distributed. Finally, we note that there is a condition under which, for µ-almost all test distributions P^T_XY, the classifier f*(P^T_X, ·) (where f* is the global minimizer of (2)) coincides with the optimal Bayes classifier for P^T_XY, although no labels from this test distribution are observed. This condition is simply that the posterior P_{Y|X} is (µ-almost surely) a function of P_X. In other words, with the notation introduced above, µ_{Y|X}(P_X) is a Dirac delta for µ-almost all P_X. Although we will not be assuming this condition throughout the paper, it is implicitly assumed in the motivating application presented in Section 2, where an expert labels the data points by just looking at their marginal distribution.

Lemma 3.1. For a fixed distribution P_XY and a decision function f : X → R, let us denote R(f, P_XY) = E_{(X,Y) ~ P_XY} [ℓ(f(X), Y)] and

  R*(P_XY) := min_{f : X → R} R(f, P_XY) = min_{f : X → R} E_{(X,Y) ~ P_XY} [ℓ(f(X), Y)]

the corresponding optimal (Bayes) risk for the loss function ℓ. Assume that µ is a distribution on P_{X×Y} such that µ-a.s. it holds that P_{Y|X} = F(P_X) for some deterministic mapping F. Let f* be a minimizer of the risk (2). Then we have, for µ-almost all P_XY:

  R(f*(P_X, ·), P_XY) = R*(P_XY)  and  E(f*, ∞) = E_{P_XY ~ µ} [R*(P_XY)].

Proof. Straightforward. Obviously, for any f : P_X × X → R, one has for all P_XY: R(f(P_X, ·), P_XY) ≥ R*(P_XY). For any fixed P_X ∈ P_X, consider P_XY := P_X • F(P_X) and let g*(P_X) be a Bayes classifier for this joint distribution. Set f(P_X, x) := g*(P_X)(x). Then f coincides for µ-almost all P_XY with a Bayes classifier for P_XY, achieving equality in the above inequality. The second equality follows by taking expectation over P_XY ~ µ.
4 Learning Algorithm We consider an approach based on positive semi-definite kernels, or simply kernels for short. Background information on kernels, including the definition, normalized kernels, universal kernels, and reproducing kernel Hilbert spaces (RKHSs), may be found in [11]. Several well-known learning algorithms, such as support vector machines and kernel ridge regression, may be viewed as minimizers of a norm-regularized empirical risk over the RKHS of a kernel. A similar development also exists for multi-task learning [3]. Inspired by this framework, we consider a general kernel algorithm as follows. Consider the loss function ℓ : R × Y → R_+. Let k be a kernel on P_X × X, and let H_k be the associated RKHS. For the sample S_i, let P̂^(i)_X denote the corresponding empirical distribution of the X_ij. Also consider the extended input space P_X × X and the extended data X̃_ij = (P̂^(i)_X, X_ij). Note that P̂^(i)_X plays a role similar to the task index in multi-task learning. Now define

  f̂_λ = argmin_{f ∈ H_k} (1/N) Σ_{i=1}^N (1/n_i) Σ_{j=1}^{n_i} ℓ(f(X̃_ij), Y_ij) + λ ||f||².  (3)

For the hinge loss, by the representer theorem [12] this optimization problem reduces to a quadratic program equivalent to the dual of a kind of cost-sensitive SVM, and therefore can be solved using existing software packages. The final predictor has the form

  f̂_λ(P̂_X, x) = Σ_{i=1}^N Σ_{j=1}^{n_i} α_ij Y_ij k((P̂^(i)_X, X_ij), (P̂_X, x)),

where the α_ij are nonnegative and mostly zero. See [11] for details. In the rest of the paper we will consider a kernel k on P_X × X of the product form

  k((P_1, x_1), (P_2, x_2)) = k_P(P_1, P_2) k_X(x_1, x_2),  (4)

where k_P is a kernel on P_X and k_X a kernel on X. Furthermore, we will consider kernels on P_X of a particular form. Let k'_X denote a kernel on X (which might be different from k_X) that is measurable and bounded. We define the following mapping Ψ : P_X → H_{k'_X}:

  P_X ↦ Ψ(P_X) := ∫_X k'_X(x, ·) dP_X(x).  (5)

This mapping has been studied in the framework of "characteristic kernels" [13], and it has been proved that there are important links between universality of k'_X and injectivity of Ψ [14, 15]. Note that the mapping Ψ is linear. Therefore, if we consider the kernel k_P(P_X, P'_X) = ⟨Ψ(P_X), Ψ(P'_X)⟩, it is a linear kernel on P_X and cannot be a universal kernel. For this reason, we introduce yet another kernel K on H_{k'_X} and consider the kernel on P_X given by

  k_P(P_X, P'_X) = K(Ψ(P_X), Ψ(P'_X)).  (6)

Note that particular kernels inspired by the finite dimensional case are of the form

  K(v, v') = F(||v − v'||),  (7)

or

  K(v, v') = G(⟨v, v'⟩),  (8)

where F, G are real functions of a real variable such that they define a kernel. For example, F(t) = exp(−t²/(2σ²)) yields a Gaussian-like kernel, while G(t) = (1 + t)^d yields a polynomial-like kernel. Kernels of the above form on the space of probability distributions over a compact space X have been introduced and studied in [16]. Below we apply their results to deduce that k is a universal kernel for certain choices of k_X, k'_X, and K.

5 Learning Theoretic Study Although the regularized estimation formula (3) defining f̂_λ is standard, the generalization error analysis is not, since the X̃_ij are neither identically distributed nor independent. We begin with a generalization error bound that establishes uniform estimation error control over functions belonging to a ball of H_k. We then discuss universal kernels, and finally deduce universal consistency of the algorithm. To simplify somewhat the analysis, we assume below that all training samples have the same size n_i = n. Also let B_k(r) denote the closed ball of radius r, centered at the origin, in the RKHS of the kernel k. We consider the following assumptions on the loss and kernels:

(Loss) The loss function ℓ : R × Y → R_+ is L_ℓ-Lipschitz in its first variable and bounded by B_ℓ.
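A minimal numerical sketch of this construction (assuming Gaussian choices for k_X, k'_X, and K; the function names and bandwidths below are ours): the embedding Ψ(P̂_X) of an empirical sample never needs to be formed explicitly, since the squared RKHS distance ||Ψ(P̂_1) − Ψ(P̂_2)||² expands, by the kernel trick, into three Gram-matrix means.

```python
import numpy as np

def gaussian_gram(A, B, sigma):
    """Gaussian kernel matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def embedding_sqdist(X1, X2, sigma_prime):
    """||Psi(P1_hat) - Psi(P2_hat)||^2 in the RKHS of k'_X, via the kernel trick:
    mean k'(x,x') over each sample with itself, minus twice the cross term."""
    return (gaussian_gram(X1, X1, sigma_prime).mean()
            + gaussian_gram(X2, X2, sigma_prime).mean()
            - 2 * gaussian_gram(X1, X2, sigma_prime).mean())

def k_product(X1, x1, X2, x2, sigma_X, sigma_prime, sigma):
    """k((P1, x1), (P2, x2)) = kP(P1, P2) * kX(x1, x2) as in (4), with kP the
    Gaussian-like kernel on mean embeddings as in (6)-(7)."""
    kP = np.exp(-embedding_sqdist(X1, X2, sigma_prime) / (2 * sigma ** 2))
    kX = gaussian_gram(x1[None, :], x2[None, :], sigma_X)[0, 0]
    return kP * kX
```

Identical samples give kP = 1, so the product kernel reduces to kX there; distinct marginals shrink kP, which is what lets the predictor depend on the test distribution.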
(Kernels-A) The kernels k_X, k'_X and K are bounded respectively by constants B_k², B_{k'}² ≥ 1, and B_K². In addition, the canonical feature map Φ_K : H_{k'_X} → H_K associated to K satisfies a Hölder condition of order α ∈ (0, 1] with constant L_K on B_{k'_X}(B_{k'}):

  ∀ v, w ∈ B_{k'_X}(B_{k'}) : ||Φ_K(v) − Φ_K(w)|| ≤ L_K ||v − w||^α.  (9)

Sufficient conditions for (9) are described in [11]. As an example, the condition is shown to hold with α = 1 when K is the Gaussian-like kernel on H_{k'_X}. The boundedness assumptions are also clearly satisfied for Gaussian kernels.

Theorem 5.1 (Uniform estimation error control). Assume conditions (Loss) and (Kernels-A) hold. If P^(1)_XY, ..., P^(N)_XY are i.i.d. realizations from µ, and for each i = 1, ..., N the sample S_i = (X_ij, Y_ij)_{1≤j≤n} is made of i.i.d. realizations from P^(i)_XY, then for any R > 0, with probability at least 1 − δ:

  sup_{f ∈ B_k(R)} | (1/N) Σ_{i=1}^N (1/n) Σ_{j=1}^n ℓ(f(X̃_ij), Y_ij) − E(f, ∞) |
    ≤ c ( R B_k L_ℓ ( B_{k'} L_K ((log N + log δ⁻¹)/n)^{α/2} + B_K / √N ) + B_ℓ √(log δ⁻¹ / N) ),  (10)

where c is a numerical constant, and B_k(R) denotes the ball of radius R of H_k.

Proof sketch. The full proofs of this and other results are given in [11]. We give here a brief overview. We use the decomposition

  sup_{f ∈ B_k(R)} | (1/N) Σ_i (1/n_i) Σ_j ℓ(f(X̃_ij), Y_ij) − E(f, ∞) |
    ≤ sup_{f ∈ B_k(R)} | (1/N) Σ_i (1/n_i) Σ_j [ ℓ(f(P̂^(i)_X, X_ij), Y_ij) − ℓ(f(P^(i)_X, X_ij), Y_ij) ] |
    + sup_{f ∈ B_k(R)} | (1/N) Σ_i (1/n_i) Σ_j ℓ(f(P^(i)_X, X_ij), Y_ij) − E(f, ∞) |
    =: (I) + (II).

Bounding (I), using the Lipschitz property of the loss function, can be reduced to controlling
||f(P̂^(i)_X, ·) − f(P^(i)_X, ·)||_∞, conditional to P^(i)_X, uniformly for i = 1, ..., N. This can be obtained using the reproducing property of the kernel k, the convergence of Ψ(P̂^(i)_X) to Ψ(P^(i)_X) as a consequence of Hoeffding's inequality in a Hilbert space, and the other assumptions (boundedness/Hölder property) on the kernels. Concerning the control of the term (II), it can be decomposed in turn into the convergence conditional to (P^(i)_X), and the convergence of the conditional generalization error. In both cases, a standard approach using the Azuma-McDiarmid inequality [17] followed by symmetrization and Rademacher complexity analysis on a kernel space [18, 19] can be applied. For the first part, the random variables are the (X_ij, Y_ij) (which are independent conditional to (P^(i)_X)); for the second part, the i.i.d. variables are the (P^(i)_X) (the (X_ij, Y_ij) being integrated out).

To establish that k is universal on P_X × X, the following lemma is useful.

Lemma 5.2. Let Ω, Ω' be two compact spaces and k, k' be kernels on Ω, Ω', respectively. If k, k' are both universal, then the product kernel k((x, x'), (y, y')) := k(x, y) k'(x', y') is universal on Ω × Ω'.

Several examples of universal kernels are known on Euclidean space. We also need universal kernels on P_X. Fortunately, this was recently investigated [16]. Some additional assumptions on the kernels and feature space are required:

(Kernels-B) k_X, k'_X, K, and X satisfy the following: X is a compact metric space; k_X is universal on X; k'_X is continuous and universal on X; K is universal on any compact subset of H_{k'_X}.

Adapting the results of [16], we have the following.

Theorem 5.3 (Universal kernel). Assume condition (Kernels-B) holds. Then, for k_P defined as in (6), the product kernel k in (4) is universal on P_X × X.
Furthermore, the assumption on K is fulfilled if K is of the form (8), where G is an analytical function with positive Taylor series coefficients, or if K is the normalized kernel associated to such a kernel. As an example, suppose that X is a compact subset of R^d. Let k_X and k'_X be Gaussian kernels on X. Taking G(t) = exp(t), it follows that K(P_X, P'_X) = exp(⟨Ψ(P_X), Ψ(P'_X)⟩_{H_{k'_X}}) is universal on P_X. By similar reasoning as in the finite dimensional case, the Gaussian-like kernel K(P_X, P'_X) = exp(−(1/2σ²) ||Ψ(P_X) − Ψ(P'_X)||²_{H_{k'_X}}) is also universal on P_X. Thus the product kernel is universal.

Corollary 5.4 (Universal consistency). Assume the conditions (Loss), (Kernels-A), and (Kernels-B) are satisfied. Assume that N, n grow to infinity in such a way that N = O(n^γ) for some γ > 0. Then, if λ_j is a sequence such that λ_j → 0 and λ_j √(j / log j) → ∞, it holds that

  E(f̂_{λ_{min(N, n^α)}}, ∞) → inf_{f : P_X × X → R} E(f, ∞)

in probability.

6 Experiments We demonstrate the proposed methodology for flow cytometry data auto-gating, described above. Peripheral blood samples were obtained from 35 normal patients, and lymphocytes were classified by a domain expert. The corresponding flow cytometry data sets have sample sizes ranging from 10,000 to 100,000, and the proportion of lymphocytes in each data set ranges from 10 to 40%. We took N = 10 of these data sets for training, and the remaining 25 for testing. To speed training time, we subsampled the 10 training data sets to have 1000 data points (cells) each. Adopting the hinge loss, we used the SVMlight [20] package to solve the quadratic program characterizing the solution.

  kP                Train   Test
  Pooling (τ = 1)    1.41    2.32
  MTL (τ = 0.01)     1.59    2.64
  MTL (τ = 0.5)      1.34    2.36
  Proposed           1.32    2.29

Table 1: The misclassification rates (%) on training data sets and test data sets for different kP. The proposed method adapts the decision function to the test data (through the marginal-dependent kernel), accounting for its improved performance.
The kernels kX, k′ X, and K are all taken to be Gaussian kernels with respective bandwidths σX, σ′ X, and σ. We set σX such that σ2 X equals 10 times the average distance of a data point to its nearest neighbor within the same data set. The second bandwidth was defined similarly, while the third was set to 1. The regularization parameter λ was set to 1. For comparison, we also considered three other options for kP . These kernels have the form kP (P1, P2) = 1 if P1 = P2, and kP (P1, P2) = τ otherwise. When τ = 1, the method is equivalent to pooling all of the training data together in one data set, and learning a single SVM classifier. This idea has been previously studied in the context of flow cytometry by [21]. When 0 < τ < 1, we obtain a kernel like what was used for multi-task learning (MTL) by [3]. Note that these kernels have the property that if P1 is a training data set, and P2 a test data set, then P1 ̸= P2 and so kP (P1, P2) is simply a constant. This implies that the learning rules produced by these kernels do not adapt to the test distribution, unlike the proposed kernel. In the experiments, we take τ = 1 (pooling), 0.01, and 0.5 (MTL). The results are shown in Fig. 2 and summarized in Table 1. The middle column of the table reports the average misclassification rate on the training data sets. Here we used those data points that were not part of the 1000-element subsample used for training. The right column shows the average misclassification rate on the test data sets. 7 Discussion Our approach to learning marginal predictors relies on the extended input pattern ˜X = (PX, X). Thus, we study the natural algorithm of minimizing a regularized empirical loss over a reproducing 7 Figure 2: The misclassification rates (%) on training data sets and test data sets for different kP . The last 25 data sets separated by dotted line are not used during training. kernel Hilbert space associated with the extended input domain PX ×X. 
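The baseline kernels kP above have a simple Gram structure; the following sketch (our construction of the comparison kernels just described, with an added positive-semidefiniteness check) makes the pooling/MTL family explicit:

```python
import numpy as np

def task_kernel_gram(task_ids, tau):
    """kP(Pi, Pj) = 1 if the two points come from the same data set, tau
    otherwise: pooling when tau = 1, an MTL-style kernel when 0 < tau < 1."""
    t = np.asarray(task_ids)
    return np.where(t[:, None] == t[None, :], 1.0, tau)

# Five points from three tasks: a block-constant Gram matrix.
G = task_kernel_gram([0, 0, 1, 1, 2], tau=0.5)
# Between any training task and an unseen test task the value is the
# constant tau, so these kernels cannot adapt to the test distribution.
```

The decomposition G = τ·J + (1−τ)·blockdiag(ones) shows the matrix is positive semidefinite for τ ∈ [0, 1].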
We also establish universal consistency, using a novel generalization error analysis under the inherent non-iid sampling plan, and a construction of a universal kernel on PX × X. For the hinge loss, the algorithm may be implemented using standard techniques for SVMs. The algorithm is applied to flow cytometry autogating, and shown to improve upon kernels that do not adapt to the test distribution. Several future directions exist. From an application perspective, the need for adaptive classifiers arises in many applications, especially in biomedical applications involving biological and/or technical variation in patient data. For example, when electrocardiograms are used to monitor cardiac patients, it is desirable to classify each heartbeat as irregular or not. However, irregularities in a test patient’s heartbeat will differ from irregularities of historical patients, hence the need to adapt to the test distribution [22]. We can also ask how the methodology and analysis can be extended to the context where a small number of labels are available for the test distribution, as is commonly assumed in transfer learning. In this setting, two approaches are possible. The simplest one is to use the same optimization problem (3), wherein we include additionally the labeled examples of the test distribution. However, if several test samples are to be treated in succession, and we want to avoid a full, resource-consuming re-training using all the training samples each time, an interesting alternative is the following: learn once a function f0(PX, x) using the available training samples via (3); then, given a partially labeled test sample, learn a decision function on this sample only via the usual kernel norm regularized empirical loss minimization method, but replace the usual regularizer term ∥f∥2 by ∥f −f0(Px, .)∥2 (note that f0(Px, .) ∈Hk). 
In this sense, the marginal-adaptive decision function learned from the training samples would serve as a “prior” for learning on the test data. It would also be of interest to extend the proposed methodology to a multi-class setting. In this case, the problem has an interesting interpretation in terms of “learning to cluster.” Each training task may be viewed as a data set that has been clustered by a teacher. Generalization then entails the ability to learn the clustering process, so that clusters may be assigned to a new unlabeled data set. Future work may consider other asymptotic regimes, e.g., where {ni}, nT do not tend to infinity, or they tend to infinity much slower than N. It may also be of interest to develop implementations for differentiable losses such as the logistic loss, allowing for estimation of posterior probabilities. Finally, we would like to specify conditions on µ, the distribution-generating distribution, that are favorable for generalization (beyond the simple condition discussed in Lemma 3.1). Acknowledgments G. Blanchard was supported by the European Community’s 7th Framework Programme under the PASCAL2 Network of Excellence (ICT-216886) and under the E.U. grant agreement 247022 (MASH Project). G. Lee and C. Scott were supported in part by NSF Grant No. 0953135. 8 References [1] S. Thrun, “Is learning the n-th thing any easier than learning the first?,” Advances in Neural Information Processing Systems, pp. 640–646, 1996. [2] R. Caruana, “Multitask learning,” Machine Learning, vol. 28, pp. 41–75, 1997. [3] T. Evgeniou and M. Pontil, “Learning multiple tasks with kernel methods,” J. Machine Learning Research, pp. 615–637, 2005. [4] S. Bickel, M. Br¨uckner, and T. Scheffer, “Discriminative learning under covariate shift,” J. Machine Learning Research, pp. 2137–2155, 2009. [5] J. Quionero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, Dataset Shift in Machine Learning, The MIT Press, 2009. [6] R. K. Ando and T. 
Zhang, "A high-performance semi-supervised learning method for text chunking," Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL 05), pp. 1–9, 2005. [7] A. Rettinger, M. Zinkevich, and M. Bowling, "Boosting expert ensembles for rapid concept recall," Proceedings of the 21st National Conference on Artificial Intelligence (AAAI 06), vol. 1, pp. 464–469, 2006. [8] A. Arnold, R. Nallapati, and W.W. Cohen, "A comparative study of methods for transductive transfer learning," Seventh IEEE International Conference on Data Mining Workshops, pp. 77–82, 2007. [9] O. Kallenberg, Foundations of Modern Probability, Springer, 2002. [10] P. Bartlett, M. Jordan, and J. McAuliffe, "Convexity, classification, and risk bounds," J. Amer. Stat. Assoc., vol. 101, no. 473, pp. 138–156, 2006. [11] G. Blanchard, G. Lee, and C. Scott, "Supplemental material," NIPS 2011. [12] I. Steinwart and A. Christmann, Support Vector Machines, Springer, 2008. [13] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola, "A kernel approach to comparing distributions," in Proceedings of the 22nd AAAI Conference on Artificial Intelligence, R. Holte and A. Howe, Eds., 2007, pp. 1637–1641. [14] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola, "A kernel method for the two-sample-problem," in Advances in Neural Information Processing Systems 19, B. Schölkopf, J. Platt, and T. Hoffman, Eds., 2007, pp. 513–520. [15] B. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet, "Hilbert space embeddings and metrics on probability measures," Journal of Machine Learning Research, vol. 11, pp. 1517–1561, 2010. [16] A. Christmann and I. Steinwart, "Universal kernels on non-standard input spaces," in Advances in Neural Information Processing Systems 23, J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, Eds., 2010, pp. 406–414. [17] C. McDiarmid, "On the method of bounded differences," Surveys in Combinatorics, vol.
141, pp. 148–188, 1989. [18] V. Koltchinskii, "Rademacher penalties and structural risk minimization," IEEE Transactions on Information Theory, vol. 47, no. 5, pp. 1902–1914, 2001. [19] P. Bartlett and S. Mendelson, "Rademacher and Gaussian complexities: Risk bounds and structural results," Journal of Machine Learning Research, vol. 3, pp. 463–482, 2002. [20] T. Joachims, "Making large-scale SVM learning practical," in Advances in Kernel Methods – Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola, Eds., chapter 11, pp. 169–184. MIT Press, Cambridge, MA, 1999. [21] J. Toedling, P. Rhein, R. Ratei, L. Karawajew, and R. Spang, "Automated in-silico detection of cell populations in flow cytometry readouts and its application to leukemia disease monitoring," BMC Bioinformatics, vol. 7, p. 282, 2006. [22] J. Wiens, Machine Learning for Patient-Adaptive Ectopic Beat Classification, Master's Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 2010.
Energetically Optimal Action Potentials Martin Stemmler BCCN and LMU Munich Grosshadernerstr. 2, Planegg, 82125 Germany Biswa Sengupta, Simon Laughlin, Jeremy Niven Department of Zoology, University of Cambridge, Downing Street, Cambridge CB2 3EJ, UK Abstract Most action potentials in the nervous system take on the form of strong, rapid, and brief voltage deflections known as spikes, in stark contrast to other action potentials, such as in the heart, that are characterized by broad voltage plateaus. We derive the shape of the neuronal action potential from first principles, by postulating that action potential generation is strongly constrained by the brain’s need to minimize energy expenditure. For a given height of an action potential, the least energy is consumed when the underlying currents obey the bang-bang principle: the currents giving rise to the spike should be intense, yet short-lived, yielding spikes with sharp onsets and offsets. Energy optimality predicts features in the biophysics that are not per se required for producing the characteristic neuronal action potential: sodium currents should be extraordinarily powerful and inactivate with voltage; both potassium and sodium currents should have kinetics that have a bell-shaped voltage-dependence; and the cooperative action of multiple ‘gates’ should start the flow of current. 1 The paradox Nerve cells communicate with each other over long distances using spike-like action potentials, which are brief electrical events traveling rapidly down axons and dendrites. Each action potential is caused by an accelerating influx of sodium or calcium ions, depolarizing the cell membrane by forty millivolts or more, followed by repolarization of the cell membrane caused by an efflux of potassium ions. As different species of ions are swapped across the membrane during the action potential, ion pumps shuttle the excess ions back and restore the ionic concentration gradients. 
If we label each ionic species by α, the work ∆E done to restore the ionic concentration gradients is

  ∆E = R T V Σ_α ∆[α]_in ln([α]_out / [α]_in),  (1)

where R is the gas constant, T is the temperature in Kelvin, V is the cell volume, [α]_in|out is the concentration of ion α inside or outside the cell, and ∆[α]_in is the concentration change inside the cell, which is assumed to be small relative to the total concentration. The sum Σ_α z_α ∆[α] = 0, where z_α is the charge on ion α, as no net charge accumulates during the action potential and no net work is done by or on the electric field. Often, sodium (Na+) and potassium (K+) play the dominant role in generating action potentials, in which case ∆E = ∆[Na]_in F V (E_Na − E_K), where F is Faraday's constant, E_Na = (RT/F) ln([Na]_out/[Na]_in) is the reversal potential for Na+, at which no net sodium current flows, and E_K = (RT/F) ln([K]_out/[K]_in). This estimate of the work done does not include heat (due to loss through the membrane resistance) or the work done by the ion channel proteins in changing their conformational state during the action potential. Hence, the action potential's energetic cost to the cell is directly proportional to ∆[Na]_in; taking into account that each Na+ ion carries one elementary charge, the cost is also proportional to the charge Q_Na that accumulates inside the cell. A maximally efficient cell reduces the charge per spike to a minimum. If a cell fires action potentials at an average rate f, the cell's Na/K pumps must move Na+ and K+ ions in opposite directions, against their respective concentration gradients, to counteract an average inward Na+ current of f Q_Na. Exhaustive measurements on myocytes in the heart, which expend tremendous amounts of energy to keep the heart beating, indicate that Na/K pumps expel ∼0.5 µA/cm² of Na+ current at membrane potentials close to rest [1].
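Plugging in typical mammalian ion concentrations (the numbers below are textbook-style assumptions, not values taken from this paper) gives a feel for the scale of ∆E = ∆[Na]_in F V (E_Na − E_K):

```python
import math

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday's constant, C/mol
T = 310.0      # temperature, K (about 37 C)

def nernst(c_out, c_in, z=1):
    """Reversal potential E = (RT / zF) ln(c_out / c_in), in volts."""
    return R * T / (z * F) * math.log(c_out / c_in)

# Assumed concentrations in mM (numerically equal to mol/m^3):
E_Na = nernst(145.0, 12.0)    # roughly +67 mV
E_K = nernst(4.0, 155.0)      # roughly -98 mV

def pump_work(delta_na_mM, volume_m3):
    """Delta E = Delta[Na]_in * F * V * (E_Na - E_K), in joules,
    with Delta[Na]_in in mol/m^3 and the cell volume in m^3."""
    return delta_na_mM * F * volume_m3 * (E_Na - E_K)
```

Since E_Na − E_K is on the order of 160 mV, the cost per spike scales directly with the sodium load ∆[Na]_in, as stated above.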
Most excitable cells, even when spiking, spend most of their time close to resting potential, and yet standard models for action potentials can easily lead to accumulating an ionic charge of up to 5 µC/cm2 [2]; most of this accumulation occurs during a very brief time interval. If one were to take an isopotential nerve cell with the same density of ion pumps as in the heart, then such a cell would not be able to produce more than an action potential once every ten seconds on average. The brain should be effectively silent. Clearly, this conflicts with what is known about the average firing rates of neurons in the brainstem or even the neocortex, which can sustain spiking up to at least 7 Hz [3]. Part of the discrepancy can be resolved by noting that nerve cells are not isopotential and that action potential generation occurs within a highly restricted area of the membrane. Even so, standard models of action potential generation waste extraordinary amounts of energy; recent evidence [4] points out that many mammalian cortical neurons are much more efficient. As nature places a premium on energy consumption, we will argue that one can predict both the shape of the action potential and the underlying biophysics of the nonlinear, voltage-dependent ionic conductances from the principle of minimal energy consumption. After reviewing the ionic basis of action potentials, we first sketch how to compute the minimal energy cost for an arbitrary spike shape, and then solve for the optimal action potential shape with a given height. Finally, we show how minimal energy consumption explains all the dynamical features in the standard HodgkinHuxley (HH) model for neuronal dynamics that distinguish the brain’s action potentials from other highly nonlinear oscillations in physics and chemistry. 
2 Ionic basis of the action potential

In an excitable cell, synaptic drive forces the membrane permeability to different ions to change rapidly in time, producing the dynamics of the action potential. The current density I_α carried by an ion species α is given by the Goldman-Hodgkin-Katz (GHK) current equation [5, 6, 2], which assumes that ions are driven independently across the membrane under the influence of a constant electric field. I_α depends upon the ion's membrane permeability, P_α, its concentrations on either side of the membrane, [α]_out and [α]_in, and the voltage across the membrane, V, according to:

I_α = P_α (z_α² V F²/RT) ([α]_out − [α]_in exp(z_α V F/RT)) / (1 − exp(z_α V F/RT)).    (2)

To produce the fast currents that generate APs, a subset of the membrane's ionic permeabilities P_α are gated by voltage. Changes in the permeability P_α are not instantaneous; the voltage-gated permeability is scaled mathematically by gating variables m(t) and h(t) with their own time dependence. After separating constant from time-dependent components in the permeability, the voltage-gated permeability obeys P_α(t) = m(t)^r h(t)^s such that 0 ≤ P_α(t) ≤ P̄_α, where r and s are positive, and P̄_α is the peak permeability to ion α when all channels for ion α are open. Gating is also referred to as activation, and the associated nonlinear permeabilities are called active. There are also passive, voltage-insensitive permeabilities that maintain the resting potential and depolarise the membrane to trigger action potentials. The simplest possible kinetics for the gating variables are first order, involving only a single derivative in time. The steady state of each gating variable at a given voltage is determined by a Boltzmann function, to which the gating variables evolve:

τ_m dm/dt = (P̄_α)^{1/r} m_∞(V) − m(t)   and   τ_h dh/dt = h_∞(V) − h(t),

with m_∞(V) = {1 + exp((V − V_m)/s_m)}⁻¹ the Boltzmann function described by the slope s_m > 0 and the midpoint V_m; similarly, h_∞(V) = {1 + exp((V − V_h)/s_h)}⁻¹, but with s_h < 0.
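A direct transcription of the GHK current equation (2) is short; the only subtlety is the removable singularity at V = 0, handled by a small-|u| guard. Units and the example use below are my assumptions for illustration.

```python
import math

R, T, F = 8.314, 310.0, 96485.0  # SI units, T = 310 K

def ghk_current(P, z, V, c_out, c_in):
    """GHK current density, eq. (2):
    I = P z^2 V F^2/(RT) * (c_out - c_in e^{zVF/RT}) / (1 - e^{zVF/RT}).
    P in m/s, V in volts, concentrations in mol/m^3; returns A/m^2
    (negative = inward current)."""
    u = z * V * F / (R * T)
    if abs(u) < 1e-9:
        # limit V -> 0 of the expression above
        return -P * z * F * (c_out - c_in)
    return (P * z**2 * V * F**2 / (R * T)
            * (c_out - c_in * math.exp(u)) / (1.0 - math.exp(u)))
```

At a holding potential of −70 mV a sodium permeability carries inward (negative) current while potassium carries outward current, and the current vanishes exactly at the ion's reversal potential, which is a quick sanity check on any implementation.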
Scaling m_∞(V) by the r-th root of the peak permeability P̄_α is a matter of mathematical convenience. We will consider both voltage-independent and voltage-dependent time constants, either setting τ_j = τ_{j,0} to be constant, where j ∈ {m, h}, or imposing a bell-shaped voltage dependence τ_j(V) = τ_{j,0} sech[s_j (V − V_j)]. The synaptic, leak, and voltage-dependent currents drive the rate of change in the voltage across the membrane,

C dV/dt = I_syn + I_leak + Σ_α I_α,

where the synaptic permeability and leak permeability are held constant.

3 Resistive and capacitive components of the energy cost

By treating the action potential as the charging and discharging of the cell membrane capacitance, the action potentials measured at the mossy fibre synapse in rats [4] or in mouse thalamocortical neurons [7] were found to be highly energy-efficient: the nonlinear, active conductances inject only slightly more current than is needed to charge a capacitor to the peak voltage of the action potential. The implicit assumption made here is that one can neglect the passive loss of current through the membrane resistance, known as the leak. Any passive loss must be compensated by additional charge, making this loss the primary target of the selection pressure that has shaped the dynamics of action potentials. On the other hand, the membrane capacitance at the site of AP initiation is generally modelled and experimentally confirmed [8] as being fairly constant around 1 µF/cm²; in contrast, the propagation, but not generation, of APs can be assisted by a reduction in the capacitance achieved by the myelin sheath that wraps some axons. As myelin would block the flow of ions, we posit that the specific capacitance cannot yield to selection pressure to minimise the work W = Q_Na(E_Na − E_K) needed for AP generation.
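To make the resistive-loss argument concrete: given a fixed waveform V(t), the total current C dV/dt + I_leak + I_syn can be split by sign into a minimal inward (Na+) and a minimal outward (K+) component, as formalized in Figure 1. The numerical sketch below uses an ohmic leak g_leak (V − E_leak) in place of the paper's GHK-form leak; this substitution and all parameter values are illustrative assumptions.

```python
import numpy as np

def minimal_currents(V, t, C=1.0, g_leak=0.3, E_leak=-65.0, I_syn=0.0):
    """Minimal non-overlapping currents for a fixed waveform V(t):
    LV = C dV/dt + g_leak (V - E_leak) + I_syn,
    I_Na_min = -LV * theta(LV)   (inward, <= 0),
    I_K_min  = -LV * theta(-LV)  (outward, >= 0).
    Units assumed: mV, ms, uF/cm^2, mS/cm^2 -> uA/cm^2."""
    LV = C * np.gradient(V, t) + g_leak * (V - E_leak) + I_syn
    I_na = -LV * (LV > 0)
    I_k = -LV * (LV < 0)
    return I_na, I_k

# A synthetic Gaussian "spike" riding on a -65 mV baseline (illustrative only).
t = np.linspace(0.0, 10.0, 2001)
V = -65.0 + 100.0 * np.exp(-((t - 5.0) / 0.5) ** 2)
I_na, I_k = minimal_currents(V, t)
```

By construction the two minimal currents never overlap in time and sum to −LV, mirroring the dotted curves discussed in Figure 1.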
To address how the shape and dynamics of action potentials might have evolved to consume less energy, we first fix the action potential's shape and solve for the minimum charge Q_Na ab initio, without treating the cell membrane as a pure capacitor. Regardless of the action potential's particular time-course V(t), voltage-dependent ionic conductances must transfer Na+ and K+ charge to elicit an action potential. Figure 1 shows a generic action potential and the associated ionic currents, comparing the latter to the minimal currents required. The passive equivalent circuit for the neuron consists of a resistor in parallel with a capacitor, driven by a synaptic current. To charge the membrane to the peak voltage, a neuron in a high-conductance state [9, 10] may well lose more charge through the resistor than is stored on the capacitor. For neurons in a low-conductance state and for rapid voltage deflections from the resting potential, membrane capacitance will be the primary determinant of the charge.

4 The norm of spikes

How close can voltage-gated channels with realistic properties come to the minimal currents? What time-course for the action potential leads to the smallest minimal currents? To answer these questions, we must solve a constrained optimization problem on the solutions to the nonlinear differential equations for the neuronal dynamics. To separate action potentials from mere small-amplitude oscillations in the voltage, we need to introduce a metric. Smaller action potentials consume less energy, provided the underlying currents are optimal, yet signalling between neurons depends on the action potential's voltage deflection reaching a minimum amplitude. Given the importance of the action potential's amplitude, we define an L_p norm on the voltage wave-form V(t) to emphasize the maximal voltage deflection:

‖V(t) − ⟨V⟩‖_p = ( ∫_0^T |V(t) − ⟨V⟩|^p dt )^{1/p}.
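A discrete version of this norm (my own sketch, on a uniform time grid) shows why a large exponent behaves like a smoothed peak detector: the norm is dominated by the maximal deviation from the mean voltage, while remaining differentiable in the waveform.

```python
import numpy as np

def spike_norm(V, t, p=16):
    """Discrete L_p norm ( integral |V - <V>|^p dt )^(1/p) on a uniform grid."""
    dt = t[1] - t[0]
    dev = np.abs(V - V.mean())
    return (np.sum(dev ** p) * dt) ** (1.0 / p)

# Test waveform whose peak deviation from its mean is exactly 1 (illustrative).
t = np.linspace(0.0, 1.0, 4001)
V = np.sin(2 * np.pi * t)
```

Already at p = 16 the norm lies close to the peak deviation, and increasing p pushes it closer, which is the trade-off between a sup-norm-like height constraint and differentiability discussed in the text.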
For a fixed action potential waveform V(t), the minimal currents are

Minimum I_Na(t) = −LV(t) θ(LV(t)),
Minimum I_K(t) = −LV(t) θ(−LV(t)),
with LV(t) ≡ C V̇(t) + I_leak[V(t)] + I_syn[V(t)],

where θ is the Heaviside step function.

Figure 1: To generate an action potential with an arbitrary time-course V(t), the nonlinear, time-dependent permeabilities must deliver more charge than just to load the membrane capacitance—resistive losses must be compensated. (a) The action potential's time-course in a generic HH model for a neuron, represented by the circuit diagram on the right. The peak of the action potential is ∼50 mV above the average potential. (b) The inward Na+ current, shown in green going in the negative direction, rapidly depolarizes the potential V(t) and yields the upstroke of the action potential. Concurrently, the K+ current activates, displayed as a positive deflection, and leads to the downstroke in the potential V(t). Inward and outward currents overlap significantly in time. The dotted lines within the region bounded by the solid lines represent the minimal Na+ current and the minimal K+ current needed to produce the V(t) spike waveform in (a). By the law of current conservation, the sum of capacitive, resistive, and synaptic currents, denoted by LV(t) ≡ C V̇(t) + I_leak[V(t)] + I_syn[V(t)], must be balanced by the active currents. If the cell's passive properties, namely its capacitance and (leak) resistance, and the synaptic conductance are constant, we can deduce the minimal active currents needed to generate a specified V(t). The minimal currents, by definition, do not overlap in time.
Taking into account passive current flow, restoring the concentration gradients after the action potential requires 29 nJ/cm². By contrast, if the active currents were optimal, the cost would be 8.9 nJ/cm². (c) To depolarize from the minimum to the maximum of the AP, the synaptic voltage-gated currents must deliver a charge Q_capacitive to charge the membrane capacitance and a charge Q_resistive to compensate for the loss of current through leak channels. For a large leak conductance in the cell membrane, Q_resistive can be larger than Q_capacitive.

Here ⟨V⟩ is the average voltage. In the limit as p → ∞, the norm simply becomes the difference between the action potential's peak voltage and the mean voltage, whereas a finite p ensures that the norm is differentiable. In parameter space, we will focus our attention on the manifold of action potentials with constant L_p norm with 2 ≪ p < ∞, which entails that the optimal action potential will have a finite, though possibly narrow, width. To be close to the supremum norm, yet still have a norm that is well-behaved under differentiation, we decided to use p = 16.

5 Poincaré-Lindstedt perturbation of periodic dynamical orbits

Standard (secular) perturbation theory diverges for periodic orbits, so we apply the Poincaré-Lindstedt technique of expanding both in the period and the dynamics of the asymptotic orbit and then derive a set of adjoint sensitivity equations for the differential-algebraic system. Solving once for the adjoint functions, we can easily compute the parameter gradient of any functional on the orbit, even for thousands of parameters. We start with a set of ordinary differential equations ẋ = F(x; p) for the neuron's dynamics, an asymptotically periodic orbit x_γ(t) that describes the action potential, and a functional G(x; p) on the orbit, representing the energy consumption, for instance. The functional can be written as an integral

G(x_γ; p) = ∫_0^{ω(p)⁻¹} g(x_γ(t); p) dt,

over some source term g(x_γ(t); p).
Assume that locally perturbing a parameter p in the parameter vector induces a smooth change in the stable limit cycle, preserving its existence. Generally, a perturbation changes not only the limit cycle's path in state space, but also the average speed with which this orbit is traversed; as a consequence, the value of the functional depends on this change in speed, to lowest order. For simplicity, consider a single, scalar parameter p. G(x_γ; p) is the solution to ω(p) ∂_τ[G(x_γ; p)] = g(x_γ; p), where we have normalised time via τ = ω(p)t. Denoting partial derivatives by subscripts, we expand p ↦ p + ε to get the O(ε¹) equation

d_τ[G_p(x_γ; p)] + ω_p g(x_γ; p) = g_x(x_γ; p) x_p + g_p(x_γ; p)

in a procedure known as the Poincaré-Lindstedt method. Hence,

dG/dp = ∫_0^{ω⁻¹} (g_p + g_x x_p − ω_p g) dt,

where, once again by the Poincaré-Lindstedt method, x_p is the solution to

ẋ_p = F_x(x_γ) x_p + F_p(x_γ) − ω_p F(x_γ).

Following the approach described by Cao, Li, Petzold, and Serban (2003), introduce a Lagrange vector A^G(x) and consider the augmented objective function

I(x_γ; p) = G(x_γ; p) − ∫_0^{ω⁻¹} A^G(x_γ) · (F(x_γ) − ẋ_γ) dt,

which is identical to G(x_γ; p) as F(x) − ẋ = 0. Then

dI(x_γ; p)/dp = ∫_0^{ω⁻¹} (g_p + g_x x_p − ω_p g) dt − ∫_0^{ω⁻¹} A^G · (F_p + F_x x_p − ω_p F − ẋ_p) dt.

Integrating the A^G(x) · ẋ_p term by parts and using periodicity, we get

dI(x_γ; p)/dp = ∫_0^{ω⁻¹} [ g_p − ω_p g − A^G · (F_p − ω_p F) ] dt − ∫_0^{ω⁻¹} [ −g_x + Ȧ^G + A^G · F_x ] x_p dt.

We can let the second term vanish by making the vector A^G(x) obey

Ȧ^G(x) = −F_xᵀ(x; p) A^G(x) + g_x(x; p).

Label the homogeneous solution (obtained by setting g_x(x_γ; p) = 0) as Z(x). It is known that the term ω_p is given by

ω_p = ω ∫_0^{ω⁻¹} Z(x) · F_p(x) dt,

provided Z(x) is normalised to satisfy Z(x) · F(x) = 1.

Parameter                           minimum     maximum
peak permeability P̄_Na             0.24 fm/s   0.15 µm/s
peak permeability P̄_K              6.6 fm/s    11 µm/s
midpoint voltage V_m ∨ V_h          −72 mV      70 mV
slope s_m ∨ (−s_h)                  3.33 mV     200 mV
time constant τ_{m,0} ∨ τ_{h,0}     5 µs        200 ms
gating exponent r ∨ s               0.2         5.0

Table 1: Parameter limits.
We can add any multiple of the homogeneous solution Z(x) to the inhomogeneous solution, so we can always make

∫_0^{ω⁻¹} A^G(x) · F(x) dt = G

by taking

A^G(x) ↦ A^G(x) − Z(x) ( ∫_0^{ω⁻¹} A^G(x) · F(x) dt − ω G ).    (3)

This condition will make A^G(x) unique. Finally, with eq. (3) we get

dG(x_γ; p)/dp = dI(x_γ; p)/dp = ∫_0^{ω⁻¹} (g_p − A^G · F_p) dt.

The first term in the integral gives rise to the partial derivative ∂G(x_γ; p)/∂p. In many cases, this term is either zero, can be made zero, or at least made independent of the dynamical variables. The parameters for the neuron models are listed in Table 1 together with their minimum and maximum allowed values. For each parameter in the neuron model, an auxiliary parameter on the entire real line is introduced, and a mapping from the real line onto the finite range set by the biophysical limits is defined. Gradient descent on this auxiliary parameter space is performed by orthogonalizing the gradient dQ_α/dp to the gradient dL/dp of the norm. To correct for drift off the constraint manifold of constant norm, illustrated in Fig. 3, steps of gradient ascent or descent on the L_p norm are performed while keeping Q_α constant. The step size during gradient descent is adjusted to assure that ∆Q_α < 0 and that a periodic solution x_γ exists after adapting the parameters. The energy landscape is locally convex (Fig. 3).

6 Predicting the Hodgkin-Huxley model

We start with a single-compartment Goldman-Hodgkin-Katz model neuron containing voltage-gated Na+ and leak conductances (Figure 1). A tonic synaptic input to the model evokes repetitive firing of action potentials. We seek those parameters that minimize the ionic load for an action potential of constant norm—in other words, spikes whose height relative to the average voltage is fairly constant, subject to a trade-off with the spike width. The ionic load is directly proportional to the work W performed by the ion flux.
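The constrained descent just described (unbounded auxiliary parameters plus projection of the charge gradient onto the tangent space of the constant-norm manifold) can be sketched in a few lines. The logistic reparameterization is an assumption of mine, since the paper does not specify which mapping onto the biophysical limits it uses.

```python
import numpy as np

def to_bounded(x, lo, hi):
    """Map an unconstrained auxiliary parameter x in R onto the interval (lo, hi)."""
    return lo + (hi - lo) / (1.0 + np.exp(-x))

def to_unbounded(p, lo, hi):
    """Inverse map: recover the auxiliary parameter for p in (lo, hi)."""
    q = (p - lo) / (hi - lo)
    return np.log(q / (1.0 - q))

def project_out(gQ, gL):
    """Component of the charge gradient dQ/dp orthogonal to the norm
    gradient dL/dp, so a descent step leaves the L_p norm unchanged
    to first order."""
    return gQ - (gQ @ gL) / (gL @ gL) * gL
```

Any finite gradient step in the auxiliary variable keeps the parameter strictly inside its biophysical range (e.g. the gating exponent limits 0.2 to 5.0 of Table 1), so no explicit projection onto the box constraints is needed.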
All parameters governing the ion channels' voltage dependence and kinetics, including their time constants, mid-points, slopes, and peak values, are subject to change. The simplest model capable of generating an action potential must have two dynamical variables and two time scales: one for the upstroke and another for the downstroke. If both Na+ and K+ currents are persistent, current flows in opposite directions at the same time, so that, even at the optimum, the ionic load is 1200 nC/cm². On the other hand, no voltage-gated K+ channels are even required for a spike, as long as Na+ channels activate on a fast time scale and inactivate on a slower time scale and the leak is powerful enough to repolarize the neuron. Even so, the load is still 520 nC/cm². While spikes require dynamics on two time scales, suppressing the overlap between inward and outward currents calls for a third time scale.

Figure 2: Optimal spike shapes and currents for neuron models with different biophysical features. During optimization, the spikes were constrained to have constant norm ‖V(t) − ⟨V⟩‖₁₆ = 92 mV, which controls the height of the spike. Insets in the left column display the voltage-dependence of the optimized time constants for sodium inactivation and potassium activation; sodium activation is modeled as occurring instantaneously. (a) Transient Na current model (P_Na = m(t)h(t), P_K = n(t); Q = 239 nC/cm²): voltage-dependent inactivation of Na+; time constants for the first order permeability kinetics are voltage-independent (inset). Inactivation turns off the Na+ current on the downstroke, but not completely: as the K+ current activates to repolarize the membrane, the inward Na+ current reactivates and counteracts the K+ current; the peak of the resurgent Na+ current is marked by a triangle. (b) Voltage-dependent (in)activation model (P_Na = m(t)h(t), P_K = n(t), τ_i = τ_i(V); Q = 169 nC/cm²): voltage-dependent time constants for the first order kinetics of activation and inactivation. The voltage dependence minimizes the resurgence of the Na+ current. (c) Cooperative gating model (P_Na = m(t)h(t), P_K = n(t)^s, τ_i = τ_i(V); Q = 156 nC/cm²): power-law gating model with an inwardly rectifying potassium current replacing the leak current. The power law dependence introduces an effective delay in the onset of the K+ current, which further minimizes the overlap of Na+ and K+ currents in time.

Figure 3: The energy required for an action potential as a function of three parameters governing potassium activation: the midpoint voltage V_K, the slope s_K, and the (maximum) time constant τ_K. The energy is the minimum work required to restore the ionic concentration gradients, as given by Eq. (1). Note that the energy within the constrained manifold of constant norm spikes is locally convex.
The resulting dynamics are higher-dimensional and reduce the load to 239 nC/cm². Making the activation and inactivation time constants voltage-dependent permits ion channels to latch to an open or closed state during the rising and falling phase of the spike, reducing the ionic load to 169 nC/cm² (Fig. 2). The minimal Na+ and K+ currents are separated in time, yet dynamics that are linear in the activation variables cannot enforce a true delay between the offset of the Na+ current and the onset of the K+ current. If current flow depends on multiple gates that need to be activated simultaneously, optimization can use the nonlinearity of multiplication to introduce a delay in the rise of the K+ current that abolishes the overlap, and the ionic load drops to 156 nC/cm². Any number of kinetic schemes for the nonlinear permeabilities P_α can give rise to the same spike waveform V(t), including the simplest two-dimensional one. Yet only the full Hodgkin-Huxley (HH) model, with its voltage-dependent kinetics that prevent the premature resurgence of inward current and cooperative gating that delays the onset of the outward current, minimizes the energetic cost. More complex models, in which voltage-dependent ion channels make transitions between multiple closed, inactivated, and open states, instantiate the energy-conserving features of the HH system at the molecular level. Furthermore, features that are eliminated during optimization, such as a voltage-dependent inactivation of the outward potassium current, are also not part of the delayed rectifier potassium current in the Hodgkin-Huxley framework.

References

[1] Paul De Weer, David C. Gadsby, and R. F. Rakowski. Voltage dependence of the Na-K pump. Ann. Rev. Physiol., 50:225–241, 1988.
[2] B. Frankenhaeuser and A. F. Huxley. The action potential in the myelinated nerve fibre of Xenopus laevis as computed on the basis of voltage clamp data. J. Physiol., 171:302–315, 1964.
[3] Samuel S.-H. Wang, Jennifer R.
Shultz, Mark J. Burish, Kimberly H. Harrison, Patrick R. Hof, Lex C. Towns, Matthew W. Wagers, and Krysta D. Wyatt. Functional trade-offs in white matter axonal scaling. J. Neurosci., 28(15):4047–4056, 2008.
[4] Henrik Alle, Arnd Roth, and Jörg R. P. Geiger. Energy-efficient action potentials in hippocampal mossy fibers. Science, 325(5946):1405–1408, 2009.
[5] D. E. Goldman. Potential, impedance and rectification in membranes. J. Gen. Physiol., 27:37–60, 1943.
[6] A. L. Hodgkin and B. Katz. The effect of sodium ions on the electrical activity of the giant axon of the squid. J. Physiol., 108:37–77, 1949.
[7] Brett C. Carter and Bruce P. Bean. Sodium entry during action potentials of mammalian neurons: Incomplete inactivation and reduced metabolic efficiency in fast-spiking neurons. Neuron, 64(6):898–909, 2009.
[8] Luc J. Gentet, Greg J. Stuart, and John D. Clements. Direct measurement of specific membrane capacitance in neurons. Biophys. J., 79:314–320, 2000.
[9] Alain Destexhe, Michael Rudolph, and Denis Paré. The high-conductance state of neocortical neurons in vivo. Nature Neurosci. Rev., 4:739–751, 2003.
[10] Bilal Haider and David A. McCormick. Rapid neocortical dynamics: Cellular and network mechanisms. Neuron, 62:171–189, 2009.
Global Solution of Fully-Observed Variational Bayesian Matrix Factorization is Column-Wise Independent

Shinichi Nakajima, Nikon Corporation, Tokyo, 140-8601, Japan, nakajima.s@nikon.co.jp
Masashi Sugiyama, Tokyo Institute of Technology, Tokyo 152-8552, Japan, sugi@cs.titech.ac.jp
Derin Babacan, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA, dbabacan@illinois.edu

Abstract

Variational Bayesian matrix factorization (VBMF) efficiently approximates the posterior distribution of factorized matrices by assuming matrix-wise independence of the two factors. A recent study on fully-observed VBMF showed that, under a stronger assumption that the two factorized matrices are column-wise independent, the global optimal solution can be analytically computed. However, it was not clear how restrictive the column-wise independence assumption is. In this paper, we prove that the global solution under matrix-wise independence is actually column-wise independent, implying that the column-wise independence assumption is harmless. A practical consequence of our theoretical finding is that the global solution under matrix-wise independence (which is a standard setup) can be obtained analytically in a computationally very efficient way without any iterative algorithms. We experimentally illustrate advantages of using our analytic solution in probabilistic principal component analysis.

1 Introduction

The goal of matrix factorization (MF) is to approximate an observed matrix by a low-rank one. In this paper, we consider fully-observed MF where the observed matrix has no missing entry¹. This formulation includes classical multivariate analysis techniques based on singular-value decomposition such as principal component analysis (PCA) [9] and canonical correlation analysis [10]. In the framework of probabilistic MF [20, 17, 19], posterior distributions of factorized matrices are considered.
Since exact inference is computationally intractable, the Laplace approximation [3], the Markov chain Monte Carlo sampling [3, 18], and the variational Bayesian (VB) approximation [4, 13, 16, 15] were used for approximate inference in practice. Among them, the VB approximation seems to be a popular choice due to its high accuracy and computational efficiency. In the original VBMF [4, 13], factored matrices are assumed to be matrix-wise independent, and a local optimal solution is computed by an iterative algorithm. A simplified variant of VBMF (simpleVBMF) was also proposed [16], which assumes a stronger constraint that the factored matrices are column-wise independent. (Footnote 1: This excludes the collaborative filtering setup, which is aimed at imputing missing entries of an observed matrix [12, 7].) A notable advantage of simpleVBMF is that the global optimal solution can be computed analytically in a computationally very efficient way [15]. Intuitively, it is suspected that simpleVBMF only possesses weaker approximation ability due to its stronger column-wise independence assumption. However, it was reported that no clear performance degradation was observed in experiments [14]. Thus, simpleVBMF would be a practically useful approach. Nevertheless, the influence of the stronger column-wise independence assumption was not elucidated beyond this empirical evaluation. The main contribution of this paper is to theoretically show that the column-wise independence assumption does not degrade the performance. More specifically, we prove that a global optimal solution of the original VBMF is actually column-wise independent. Thus, a global optimal solution of the original VBMF can be obtained by the analytic-form solution of simpleVBMF—no computationally-expensive iterative algorithm is necessary. We show the usefulness of the analytic-form solution through experiments on probabilistic PCA.
2 Formulation

In this section, we first formulate the problem of probabilistic MF, and then introduce the VB approximation and its simplified variant.

2.1 Probabilistic Matrix Factorization

The probabilistic MF model is given as follows [19]:

p(Y|A,B) ∝ exp( −(1/2σ²) ‖Y − BAᵀ‖²_Fro ),    (1)
p(A) ∝ exp( −(1/2) tr(A C_A⁻¹ Aᵀ) ),   p(B) ∝ exp( −(1/2) tr(B C_B⁻¹ Bᵀ) ),    (2)

where Y ∈ R^{L×M} is an observed matrix, A ∈ R^{M×H} and B ∈ R^{L×H} are parameter matrices to be estimated, and σ² is the noise variance. Here, we denote by ᵀ the transpose of a matrix or vector, by ‖·‖_Fro the Frobenius norm, and by tr(·) the trace of a matrix. We assume that the prior covariance matrices C_A and C_B are diagonal and positive definite, i.e., C_A = diag(c²_{a1}, …, c²_{aH}), C_B = diag(c²_{b1}, …, c²_{bH}) for c_{ah}, c_{bh} > 0, h = 1, …, H. Without loss of generality, we assume that the diagonal entries of the product C_A C_B are arranged in non-increasing order, i.e., c_{ah} c_{bh} ≥ c_{ah′} c_{bh′} for any pair h < h′. Throughout the paper, we denote a column vector of a matrix by a bold smaller letter, and a row vector by a bold smaller letter with a tilde, namely,

A = (a_1, …, a_H) = (ã_1, …, ã_M)ᵀ ∈ R^{M×H},   B = (b_1, …, b_H) = (b̃_1, …, b̃_L)ᵀ ∈ R^{L×H}.

2.2 Variational Bayesian Approximation

The Bayes posterior is written as

p(A,B|Y) = p(Y|A,B) p(A) p(B) / Z(Y),    (3)

where Z(Y) = ⟨p(Y|A,B)⟩_{p(A)p(B)} is the marginal likelihood. Here, ⟨·⟩_p denotes the expectation over the distribution p. Since the Bayes posterior (3) is computationally intractable, the VB approximation was proposed [4, 13, 16, 15]. Let r(A,B), or r for short, be a trial distribution. The following functional with respect to r is called the free energy:

F(r|Y) = ⟨ log( r(A,B) / (p(Y|A,B)p(A)p(B)) ) ⟩_{r(A,B)} = ⟨ log( r(A,B) / p(A,B|Y) ) ⟩_{r(A,B)} − log Z(Y).    (4)

In the last equation, the first term is the Kullback-Leibler (KL) distance from the trial distribution to the Bayes posterior, and the second term is a constant.
Therefore, minimizing the free energy (4) amounts to finding the distribution closest to the Bayes posterior in the sense of the KL distance. In the VB approximation, the free energy (4) is minimized over some restricted function space. A standard constraint for the MF model is matrix-wise independence [4, 13], i.e.,

r^VB(A,B) = r^VB_A(A) r^VB_B(B).    (5)

This constraint breaks off the entanglement between the parameter matrices A and B, and leads to a computationally-tractable iterative algorithm. Using the variational method, we can show that, under the constraint (5), the VB posterior minimizing the free energy (4) is written as

r^VB(A,B) = ∏_{m=1}^{M} N_H(ã_m; ˆã_m, Σ_A) ∏_{l=1}^{L} N_H(b̃_l; ˆb̃_l, Σ_B),

where the parameters satisfy

Â = (ˆã_1, …, ˆã_M)ᵀ = Yᵀ B̂ Σ_A / σ²,   Σ_A = σ² ( B̂ᵀB̂ + L Σ_B + σ² C_A⁻¹ )⁻¹,    (6)
B̂ = (ˆb̃_1, …, ˆb̃_L)ᵀ = Y Â Σ_B / σ²,   Σ_B = σ² ( ÂᵀÂ + M Σ_A + σ² C_B⁻¹ )⁻¹.    (7)

Here, N_d(·; µ, Σ) denotes the d-dimensional Gaussian distribution with mean µ and covariance matrix Σ. Iteratively updating the parameters Â, Σ_A, B̂, and Σ_B by Eqs. (6) and (7) until convergence gives a local minimum of the free energy (4). When the noise variance σ² is unknown, it can also be estimated based on the free energy minimization. The update rule for σ² is given by

σ² = ( ‖Y‖²_Fro − tr(2YᵀB̂Âᵀ) + tr( (ÂᵀÂ + MΣ_A)(B̂ᵀB̂ + LΣ_B) ) ) / (LM).    (8)

Furthermore, in the empirical Bayesian scenario, the hyperparameters C_A and C_B are also estimated from data. In this scenario, C_A and C_B are updated in each iteration by the following formulas:

c²_{ah} = ‖â_h‖²/M + (Σ_A)_{hh},   c²_{bh} = ‖b̂_h‖²/L + (Σ_B)_{hh}.    (9)

2.3 SimpleVB Approximation

A simplified variant, called the simpleVB approximation, assumes column-wise independence of each matrix [16, 15], i.e.,

r^simpleVB(A,B) = ∏_{h=1}^{H} r^simpleVB_{a_h}(a_h) ∏_{h=1}^{H} r^simpleVB_{b_h}(b_h).    (10)

This constraint restricts the covariances Σ_A and Σ_B to be diagonal, and thus necessary memory storage and computational cost are substantially reduced [16].
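For concreteness, the matrix-wise VB updates (6)-(7) above translate directly into a few lines of NumPy. This is an illustrative sketch, not the authors' code; the random initialization, iteration count, and function name are my choices.

```python
import numpy as np

def vbmf(Y, H, ca2, cb2, sigma2, n_iter=200, seed=0):
    """Iterate the VB updates (6)-(7) for Y ~ B A^T + noise.
    Y: (L, M) observed matrix; ca2, cb2: length-H prior variances c_ah^2, c_bh^2.
    Returns posterior means A_hat (M, H), B_hat (L, H) and covariances SA, SB."""
    L, M = Y.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((M, H))
    B = rng.standard_normal((L, H))
    SB = np.eye(H)
    CAinv = np.diag(1.0 / np.asarray(ca2, dtype=float))
    CBinv = np.diag(1.0 / np.asarray(cb2, dtype=float))
    for _ in range(n_iter):
        SA = sigma2 * np.linalg.inv(B.T @ B + L * SB + sigma2 * CAinv)  # eq. (6)
        A = Y.T @ B @ SA / sigma2
        SB = sigma2 * np.linalg.inv(A.T @ A + M * SA + sigma2 * CBinv)  # eq. (7)
        B = Y @ A @ SB / sigma2
    return A, B, SA, SB
```

On a noiseless rank-H matrix with weak priors and small assumed noise variance, the converged B Aᵀ recovers Y up to a mild shrinkage of the singular values.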
The simpleVB posterior can be written as

r^simpleVB(A,B) = ∏_{h=1}^{H} N_M(a_h; â_h, σ²_{ah} I_M) N_L(b_h; b̂_h, σ²_{bh} I_L),

where the parameters satisfy

â_h = (σ²_{ah}/σ²) ( Y − Σ_{h′≠h} b̂_{h′} â_{h′}ᵀ )ᵀ b̂_h,   σ²_{ah} = σ² ( ‖b̂_h‖² + L σ²_{bh} + σ² c_{ah}⁻² )⁻¹,    (11)
b̂_h = (σ²_{bh}/σ²) ( Y − Σ_{h′≠h} b̂_{h′} â_{h′}ᵀ ) â_h,   σ²_{bh} = σ² ( ‖â_h‖² + M σ²_{ah} + σ² c_{bh}⁻² )⁻¹.    (12)

Here, I_d denotes the d-dimensional identity matrix. Iterating Eqs. (11) and (12) until convergence, we can obtain a local minimum of the free energy. Eqs. (8) and (9) are similarly applied if the noise variance σ² is unknown and in the empirical Bayesian scenario, respectively. A recent study has derived the analytic solution for simpleVB when the observed matrix has no missing entry [15]. This work made simpleVB more attractive, because it not only provided a substantial reduction of computation costs, but also guaranteed the global optimality of the solution. However, it was not clear how restrictive the column-wise independence assumption is, beyond its experimental success [14]. In the next section, we theoretically show that the column-wise independence assumption is actually harmless.

3 Analytic Solution of VBMF under Matrix-wise Independence

Under the matrix-wise independence constraint (5), the free energy (4) can be written as

F = ⟨ log r(A) + log r(B) − log p(Y|A,B)p(A)p(B) ⟩_{r(A)r(B)}
  = (LM/2) log σ² + (M/2) log(|C_A|/|Σ_A|) + (L/2) log(|C_B|/|Σ_B|) + ‖Y‖²/(2σ²) + const.
    + (1/2) tr{ C_A⁻¹(ÂᵀÂ + MΣ_A) + C_B⁻¹(B̂ᵀB̂ + LΣ_B) + σ⁻²( −2ÂᵀYᵀB̂ + (ÂᵀÂ + MΣ_A)(B̂ᵀB̂ + LΣ_B) ) }.    (13)

Note that Eqs. (6) and (7) together form the stationarity condition of Eq. (13) with respect to Â, B̂, Σ_A, and Σ_B. Below, we show that a global solution of Σ_A and Σ_B is diagonal. When the product C_A C_B is non-degenerate (i.e., c_{ah} c_{bh} > c_{ah′} c_{bh′} for any pair h < h′), the global solution is unique and diagonal.
On the other hand, when C_A C_B is degenerate, the global solutions are not unique because arbitrary rotation in the degenerate subspace is possible without changing the free energy. However, still one of the equivalent solutions is always diagonal.

Theorem 1  Diagonal Σ_A and Σ_B minimize the free energy (13).

The basic idea of our proof is that, since minimizing the free energy (13) with respect to Â, B̂, Σ_A, and Σ_B is too complicated, we focus on a restricted space written in a particular form that includes the optimal solution. From necessary conditions for optimality, we can deduce that the solutions Σ_A and Σ_B are diagonal. Below, we describe the outline of the proof for non-degenerate C_A C_B. The complete proof for general cases is omitted because of the page limit.

(Sketch of proof of Theorem 1)  Assume that (A*, B*, Σ*_A, Σ*_B) is a minimizer of the free energy (13), and consider the following set of parameters specified by an H × H orthogonal matrix Ω:

Â = A* C_A^{−1/2} Ωᵀ C_A^{1/2},   Σ_A = C_A^{1/2} Ω C_A^{−1/2} Σ*_A C_A^{−1/2} Ωᵀ C_A^{1/2},
B̂ = B* C_A^{1/2} Ωᵀ C_A^{−1/2},   Σ_B = C_A^{−1/2} Ω C_A^{1/2} Σ*_B C_A^{1/2} Ωᵀ C_A^{−1/2}.

Note that B̂Âᵀ is invariant with respect to Ω, and (Â, B̂, Σ_A, Σ_B) = (A*, B*, Σ*_A, Σ*_B) holds if Ω = I_H. Then, as a function of Ω, the free energy (13) can be simplified as

F(Ω) = (1/2) tr{ C_A⁻¹ C_B⁻¹ Ω C_A^{1/2} (B*ᵀB* + LΣ*_B) C_A^{1/2} Ωᵀ } + const.

This is necessarily minimized at Ω = I_H, because we assumed that (A*, B*, Σ*_A, Σ*_B) is a minimizer. We can show that F(Ω) is minimized at Ω = I_H only if B*ᵀB* + LΣ*_B is diagonal. This implies that Σ*_A (see Eq. (6)) should be diagonal. Similarly, we consider another set of parameters specified by an H × H orthogonal matrix Ω′:

Â = A* C_B^{1/2} Ω′ᵀ C_B^{−1/2},   Σ_A = C_B^{−1/2} Ω′ C_B^{1/2} Σ*_A C_B^{1/2} Ω′ᵀ C_B^{−1/2},
B̂ = B* C_B^{−1/2} Ω′ᵀ C_B^{1/2},   Σ_B = C_B^{1/2} Ω′ C_B^{−1/2} Σ*_B C_B^{−1/2} Ω′ᵀ C_B^{1/2}.

Then, as a function of Ω′, the free energy (13) can be expressed as

F(Ω′) = (1/2) tr{ C_A⁻¹ C_B⁻¹ Ω′ C_B^{1/2} (A*ᵀA* + MΣ*_A) C_B^{1/2} Ω′ᵀ } + const.
Similarly, this is minimized at $\Omega'=I_H$ only if $A^{*\top}A^*+M\Sigma^*_A$ is diagonal. Thus, $\Sigma^*_B$ should be diagonal (see Eq.(7)). $\Box$

The result that $\Sigma_A$ and $\Sigma_B$ become diagonal would be natural because we assumed the independent Gaussian prior on $A$ and $B$: the fact that any $Y$ can be decomposed into orthogonal components may imply that the observation $Y$ cannot convey any preference for singular-component-wise correlation. Note, however, that Theorem 1 does not necessarily hold when the observed matrix has missing entries.

Theorem 1 implies that the stronger column-wise independence constraint (10) does not degrade approximation accuracy, and the VB solution under matrix-wise independence (5) essentially agrees with the simpleVB solution. Consequently, we can obtain a global analytic solution for VB, by combining Theorem 1 above with Theorem 1 in [15]:

Corollary 1 Let $\gamma_h\ (\geq 0)$ be the $h$-th largest singular value of $Y$, and let $\omega_{a_h}$ and $\omega_{b_h}$ be the associated right and left singular vectors:
\[
Y=\sum_{h=1}^{L}\gamma_h\,\omega_{b_h}\omega_{a_h}^\top.
\]
Let $\hat\gamma_h$ be the second largest real solution of the following quartic equation with respect to $t$:
\[
f_h(t):=t^4+\xi_3t^3+\xi_2t^2+\xi_1t+\xi_0=0,
\tag{14}
\]
where the coefficients are defined by
\[
\xi_3=\frac{(L-M)^2\gamma_h}{LM},\qquad
\xi_2=-\Big(\xi_3\gamma_h+\frac{(L^2+M^2)\eta_h^2}{LM}+\frac{2\sigma^4}{c_{a_h}^2c_{b_h}^2}\Big),\qquad
\xi_1=\xi_3\sqrt{\xi_0},\qquad
\xi_0=\Big(\eta_h^2-\frac{\sigma^4}{c_{a_h}^2c_{b_h}^2}\Big)^2,
\]
\[
\eta_h^2=\Big(1-\frac{\sigma^2L}{\gamma_h^2}\Big)\Big(1-\frac{\sigma^2M}{\gamma_h^2}\Big)\gamma_h^2.
\]
Let
\[
\tilde\gamma_h=\sqrt{\frac{(L+M)\sigma^2}{2}+\frac{\sigma^4}{2c_{a_h}^2c_{b_h}^2}
+\sqrt{\Big(\frac{(L+M)\sigma^2}{2}+\frac{\sigma^4}{2c_{a_h}^2c_{b_h}^2}\Big)^2-LM\sigma^4}}.
\tag{15}
\]
Then, the global VB solution under matrix-wise independence (5) can be expressed as
\[
\hat U^{\mathrm{VB}}\equiv\langle BA^\top\rangle_{r^{\mathrm{VB}}(A,B)}=\hat B\hat A^\top=\sum_{h=1}^{H}\hat\gamma_h^{\mathrm{VB}}\,\omega_{b_h}\omega_{a_h}^\top,
\qquad\text{where}\qquad
\hat\gamma_h^{\mathrm{VB}}=\begin{cases}\hat\gamma_h&\text{if }\gamma_h>\tilde\gamma_h,\\[2pt]0&\text{otherwise.}\end{cases}
\]
Theorem 1 holds also in the empirical Bayesian scenario, where the hyperparameters $(C_A, C_B)$ are also estimated from observation. Accordingly, the empirical VB solution also agrees with the empirical simpleVB solution, whose analytic form is given in Corollary 5 in [15].
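Corollary 1 translates directly into a short computation: evaluate the threshold (15), and if the observed singular value survives it, solve the quartic (14) and take its second largest real root. The sketch below is our own illustration (the function name and the root-filtering tolerance are our choices, not the paper's):

```python
import numpy as np

def vb_shrinkage(gamma_h, L, M, sigma2, ca2cb2):
    """Global VB estimate of the h-th singular value via Eqs.(14)-(15).
    `ca2cb2` denotes the hyperparameter product c_{a_h}^2 c_{b_h}^2."""
    # threshold gamma~_h of Eq.(15)
    s = (L + M) * sigma2 / 2 + sigma2**2 / (2 * ca2cb2)
    gamma_tilde = np.sqrt(s + np.sqrt(s**2 - L * M * sigma2**2))
    if gamma_h <= gamma_tilde:
        return 0.0          # singular value is truncated to zero
    # coefficients of the quartic f_h(t) in Eq.(14)
    eta2 = (1 - sigma2 * L / gamma_h**2) * (1 - sigma2 * M / gamma_h**2) * gamma_h**2
    xi3 = (L - M)**2 * gamma_h / (L * M)
    xi0 = (eta2 - sigma2**2 / ca2cb2)**2
    xi2 = -(xi3 * gamma_h + (L**2 + M**2) * eta2 / (L * M) + 2 * sigma2**2 / ca2cb2)
    xi1 = xi3 * np.sqrt(xi0)
    roots = np.roots([1.0, xi3, xi2, xi1, xi0])
    # keep (numerically) real roots, sorted in decreasing order
    real = np.sort(roots[np.abs(roots.imag) < 1e-4].real)[::-1]
    return float(real[1])   # second largest real solution
```

For example, with $L=M$ the quartic degenerates into a biquadratic and the estimate reduces to a simple shrinkage of $\gamma_h$.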
Thus, we obtain the global analytic solution for empirical VB: Corollary 2 The global empirical VB solution under matrix-wise independence (5) is given by bU EVB = H X h=1 bγEVB h ωbhω⊤ ah, where bγEVB h = ( ˘γVB h if γh > γh and ∆h ≤0, 0 otherwise. Here, γh = ( √ L + √ M)σ, (16) ˘c2 h = 1 2LM µ γ2 h −(L + M)σ2 + q (γ2 h −(L + M)σ2)2 −4LMσ4 ¶ , (17) ∆h = M log ³ γh Mσ2 ˘γVB h + 1 ´ + L log ³ γh Lσ2 ˘γVB h + 1 ´ + 1 σ2 ¡ −2γh˘γVB h + LM˘c2 h ¢ , (18) and ˘γVB h is the VB solution for cahcbh = ˘ch. 5 When we calculate the empirical VB solution, we first check if γh > γh holds. If it holds, we compute ˘γVB h by using Eq.(17) and Corollary 1. Otherwise, bγEVB h = 0. Finally, we check if ∆h ≤0 holds by using Eq.(18). When the noise variance σ2 is unknown, it is optimized by a naive 1-dimensional search to minimize the free energy [15]. To evaluate the free energy (13), we need the covariances ΣA and ΣB, which neither Corollary 1 nor Corollary 2 provides. The following corollary, which gives the complete information on the VB posterior, is obtained by combining Theorem 1 above with Corollary 2 in [15]: Corollary 3 The VB posteriors under matrix-wise independence (5) are given by rVB A (A) = H Y h=1 NM(ah; bah, σ2 ahIM), rVB B (B) = H Y h=1 NL(bh;bbh, σ2 bhIL), where, for bγVB h being the solution given by Corollary 1, bah = ± q bγVB h bδh · ωah, bbh = ± q bγVB h bδ−1 h · ωbh, σ2 ah = − ¡ bη2 h −σ2(M −L) ¢ + p (bη2 h −σ2(M −L))2 + 4Mσ2bη2 h 2M(bγVB h bδ−1 h + σ2c−2 ah ) , σ2 bh = − ¡ bη2 h + σ2(M −L) ¢ + p (bη2 h + σ2(M −L))2 + 4Lσ2bη2 h 2L(bγVB h bδh + σ2c−2 bh ) , bδh = (M −L)(γh −bγVB h ) + q (M −L)2(γh −bγVB h )2 + 4σ4LM c2ahc2 bh 2σ2Mc−2 ah , bη2 h = ( η2 h if γh > eγh, σ4 c2 ahc2 bh otherwise. Note that the ratio cah/cbh is arbitrary in empirical VB, so we can fix it to, e.g., cah/cbh = 1 without loss of generality [15]. 4 Experimental Results In this section, we first introduce probabilistic PCA as a probabilistic MF model. 
Then, we show experimental results on artificial and benchmark datasets, which illustrate practical advantages of using our analytic solution.

4.1 Probabilistic PCA

In probabilistic PCA [20], the observation $y\in\mathbb{R}^L$ is assumed to be driven by a latent vector $\tilde a\in\mathbb{R}^H$ in the following form:
\[
y=B\tilde a+\varepsilon.
\]
Here, $B\in\mathbb{R}^{L\times H}$ specifies the linear relationship between $\tilde a$ and $y$, and $\varepsilon\in\mathbb{R}^L$ is a Gaussian noise subject to $\mathcal{N}_L(0,\sigma^2I_L)$. Suppose that we are given $M$ observed samples $\{y_1,\ldots,y_M\}$ generated from the latent vectors $\{\tilde a_1,\ldots,\tilde a_M\}$, and each latent vector is subject to $\tilde a\sim\mathcal{N}_H(0,I_H)$. Then, the probabilistic PCA model is written as Eqs.(1) and (2) with $C_A=I_H$. If we apply Bayesian inference, the intrinsic dimension $H$ is automatically selected without predetermination [4, 14]. This useful property is called automatic dimensionality selection (ADS).

4.2 Experiment on Artificial Data

We compare the iterative algorithm and the analytic solution in the empirical VB scenario with unknown noise variance, i.e., the hyperparameters $(C_A,C_B)$ and the noise variance $\sigma^2$ are also estimated from observation.

[Figure 1: Experimental results for Artificial1 dataset, where the data dimension is L = 100, the number of samples is M = 300, and the true rank is H* = 20. Panels plot (a) free energy F/(LM), (b) computation time (sec), and (c) estimated rank against iterations, for the analytic and iterative methods.]

[Figure 2: Experimental results for Artificial2 dataset (L = 70, M = 300, and H* = 40); same panels as Figure 1.]
We use the full-rank model (i.e., H = min(L, M)), and expect the ADS effect to automatically find the true rank H∗. Figure 1 shows the free energy, the computation time, and the estimated rank over iterations for an artificial (Artificial1) dataset with L = 100, M = 300, and H∗= 20. We randomly created true matrices A∗∈RM×H∗and B∗∈RL×H∗so that each entry of A∗and B∗follows N1(0, 1). An observed matrix Y was created by adding a noise subject to N1(0, 1) to each entry of B∗A∗⊤. The iterative algorithm consists of the update rules (6)–(9). Initial values were set in the following way: bA and bB are randomly created so that each entry follows N1(0, 1). Other variables are set to ΣA = ΣB = CA = CB = IH and σ2 = 1. Note that we rescale Y so that ∥Y ∥2 Fro/(LM) = 1, before starting iteration. We ran the iterative algorithm 10 times, starting from different initial points, and each trial is plotted by a solid line in Figure 1. The analytic solution consists of applying Corollary 2 combined with a naive 1-dimensional search for noise variance σ2 estimation [15]. The analytic solution is plotted by the dashed line. We see that the analytic solution estimates the true rank bH = H∗= 20 immediately (∼0.1 sec on average over 10 trials), while the iterative algorithm does not converge in 60 sec. Figure 2 shows experimental results on another artificial dataset (Artificial2) where L = 70, M = 300, and H∗= 40. In this case, all the 10 trials of the iterative algorithm are trapped at local minima. We empirically observed a tendency that the iterative algorithm suffers from the local minima problem when H∗is large (close to H). 4.3 Experiment on Benchmark Data Figures 3 and 4 show experimental results on the Satellite and the Spectf datasets available from the UCI repository [1], showing similar tendencies to Figures 1 and 2. 
We also conducted experiments on various benchmark datasets, and found that the iterative algorithm typically converges slowly, and sometimes suffers from the local minima problem, while our analytic form gives the global solution immediately.

[Figure 3: Experimental results for the Sat dataset (L = 36, M = 6435). Panels plot (a) free energy F/(LM), (b) computation time (sec), and (c) estimated rank against iterations, for the analytic and iterative methods.]

[Figure 4: Experimental results for the Spectf dataset (L = 44, M = 267); same panels as Figure 3.]

5 Conclusion and Discussion

In this paper, we have analyzed fully-observed variational Bayesian matrix factorization (VBMF) under matrix-wise independence. We have shown that the VB solution under matrix-wise independence essentially agrees with the simplified VB (simpleVB) solution under column-wise independence. As a consequence, we can obtain the global VB solution under matrix-wise independence analytically in a computationally very efficient way.

Our analysis assumed uncorrelated priors. With correlated priors, the posterior is no longer uncorrelated, and thus it is not straightforward to obtain a global solution analytically. Nevertheless, there exists a situation where an analytic solution can be easily obtained: suppose there exists an $H\times H$ non-singular matrix $T$ such that both $C'_A=TC_AT^\top$ and $C'_B=(T^{-1})^\top C_BT^{-1}$ are diagonal. We can show that the free energy (13) is invariant under the following transformation for any such $T$:
\[
A\to AT^\top,\quad \Sigma_A\to T\Sigma_AT^\top,\quad C_A\to TC_AT^\top,\qquad
B\to BT^{-1},\quad \Sigma_B\to (T^{-1})^\top\Sigma_BT^{-1},\quad C_B\to (T^{-1})^\top C_BT^{-1}.
\]
Accordingly, the following procedure gives the global solution analytically: the analytic solution for the diagonal $(C'_A, C'_B)$ is first computed, and the above transformation is then applied.

We have demonstrated the usefulness of our analytic solution in probabilistic PCA. On the other hand, robust PCA has recently gathered a great deal of attention [5], and its Bayesian variant has been proposed [2]. We expect that our analysis can handle more structured sparsity, in addition to the current low-rank-inducing sparsity. Extending the current work along this line will allow us to give more theoretical insight into robust PCA and to provide computationally efficient algorithms. Finally, a more challenging direction is to handle priors correlated over the rows of $A$ and $B$. This would allow us to model correlations in the observation space and capture, e.g., short-term correlation in time-series data and neighboring-pixel correlation in image data. Analyzing such a situation, as well as missing-value imputation and tensor factorization [11, 6, 8, 21], is important future work.

Acknowledgments

The authors thank anonymous reviewers for helpful comments. Masashi Sugiyama was supported by the FIRST program. Derin Babacan was supported by a Beckman Postdoctoral Fellowship.

References

[1] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[2] D. Babacan, M. Luessi, R. Molina, and A. Katsaggelos. Sparse Bayesian methods for low-rank matrix estimation. arXiv:1102.5288v1 [stat.ML], 2011.
[3] C. M. Bishop. Bayesian principal components. In Advances in NIPS, volume 11, pages 382–388, 1999.
[4] C. M. Bishop. Variational principal components. In Proc. of ICANN, volume 1, pages 509–514, 1999.
[5] E. J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? CoRR, abs/0912.3599, 2009.
[6] J. D. Carroll and J. J. Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of 'Eckart-Young' decomposition.
Psychometrika, 35:283–319, 1970. [7] S. Funk. Try this at home. http://sifter.org/˜simon/journal/20061211.html, 2006. [8] R. A. Harshman. Foundations of the parafac procedure: Models and conditions for an ”explanatory” multimodal factor analysis. UCLA Working Papers in Phonetics, 16:1–84, 1970. [9] H. Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24:417–441, 1933. [10] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3–4):321–377, 1936. [11] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009. [12] J. A. Konstan, B. N. Miller, D. Maltz, J. L. Herlocker, L. R. Gordon, and J. Riedl. Grouplens: Applying collaborative filtering to Usenet news. Communications of the ACM, 40(3):77–87, 1997. [13] Y. J. Lim and T. W. Teh. Variational Bayesian approach to movie rating prediction. In Proceedings of KDD Cup and Workshop, 2007. [14] S. Nakajima, M. Sugiyama, and D. Babacan. On Bayesian PCA: Automatic dimensionality selection and analytic solution. In Proceedings of 28th International Conference on Machine Learning (ICML2011), Bellevue, WA, USA, Jun. 28–Jul.2 2011. [15] S. Nakajima, M. Sugiyama, and R. Tomioka. Global analytic solution for variational Bayesian matrix factorization. In J. Lafferty, C. K. I. Williams, R. Zemel, J. Shawe-Taylor, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1759–1767, 2010. [16] T. Raiko, A. Ilin, and J. Karhunen. Principal component analysis for large scale problems with lots of missing values. In J. Kok, J. Koronacki, R. Lopez de Mantras, S. Matwin, D. Mladenic, and A. Skowron, editors, Proceedings of the 18th European Conference on Machine Learning, volume 4701 of Lecture Notes in Computer Science, pages 691–698, Berlin, 2007. Springer-Verlag. [17] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11:305–345, 1999. 
[18] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In International Conference on Machine Learning, 2008.
[19] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1257–1264, Cambridge, MA, 2008. MIT Press.
[20] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, 61:611–622, 1999.
[21] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31:279–311, 1966.
Algorithms and hardness results for parallel large margin learning Philip M. Long Google plong@google.com Rocco A. Servedio Columbia University rocco@cs.columbia.edu Abstract We study the fundamental problem of learning an unknown large-margin halfspace in the context of parallel computation. Our main positive result is a parallel algorithm for learning a large-margin halfspace that is based on interior point methods from convex optimization and fast parallel algorithms for matrix computations. We show that this algorithm learns an unknown γ-margin halfspace over n dimensions using poly(n, 1/γ) processors and runs in time ˜O(1/γ) + O(log n). In contrast, naive parallel algorithms that learn a γ-margin halfspace in time that depends polylogarithmically on n have Ω(1/γ2) runtime dependence on γ. Our main negative result deals with boosting, which is a standard approach to learning large-margin halfspaces. We give an information-theoretic proof that in the original PAC framework, in which a weak learning algorithm is provided as an oracle that is called by the booster, boosting cannot be parallelized: the ability to call the weak learner multiple times in parallel within a single boosting stage does not reduce the overall number of successive stages of boosting that are required. 1 Introduction In this paper we consider large-margin halfspace learning in the PAC model: there is a target halfspace f(x) = sign(w·x), where w is an unknown unit vector, and an unknown probability distribution D over the unit ball Bn = {x ∈Rn : ∥x∥2 ≤1} which has support on {x ∈Bn : |w·x| ≥γ}. (Throughout this paper we refer to such a combination of target halfspace f and distribution D as a γ-margin halfspace.) The learning algorithm is given access to labeled examples (x, f(x)) where each x is independently drawn from D, and it must with high probability output a hypothesis h : Rn →{−1, 1} that satisfies Prx∼D[h(x) ̸= f(x)] ≤ε. 
Learning a large-margin halfspace is a fundamental problem in machine learning; indeed, one of the most famous algorithms in machine learning is the Perceptron algorithm [25] for this problem. PAC algorithms based on the Perceptron [17] run in poly(n, 1/γ, 1/ε) time, use O(1/(εγ²)) labeled examples in Rⁿ, and learn an unknown n-dimensional γ-margin halfspace to accuracy 1 − ε.

A motivating question: achieving Perceptron's performance in parallel? The last few years have witnessed a resurgence of interest in highly efficient parallel algorithms for a wide range of computational problems in many areas including machine learning [33, 32]. So a natural goal is to develop an efficient parallel algorithm for learning γ-margin halfspaces that matches the performance of the Perceptron algorithm. A well-established theoretical notion of efficient parallel computation is that an efficient parallel algorithm for a problem with input size N is one that uses poly(N) processors and runs in parallel time polylog(N), see e.g. [12]. Since the input to the Perceptron algorithm is a sample of poly(1/ε, 1/γ) labeled examples in Rⁿ, we naturally arrive at the following:

  Algorithm                                   Number of processors   Running time
  naive parallelization of Perceptron         poly(n, 1/γ)           Õ(1/γ²) + O(log n)
  naive parallelization of [27]               poly(n, 1/γ)           Õ(1/γ²) + O(log n)
  polynomial-time linear programming [2]      1                      poly(n, log(1/γ))
  This paper                                  poly(n, 1/γ)           Õ(1/γ) + O(log n)

Table 1: Bounds on various parallel algorithms for learning a γ-margin halfspace over Rⁿ.

Main Question: Is there a learning algorithm that uses poly(n, 1/γ, 1/ε) processors and runs in time poly(log n, log 1/γ, log 1/ε) to learn an unknown n-dimensional γ-margin halfspace to accuracy 1 − ε? (See [31] for a detailed definition of parallel learning algorithms; here we only recall that an efficient parallel learning algorithm's hypothesis must be efficiently evaluatable in parallel.)
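The sequential Perceptron baseline that the Main Question tries to match admits a very short implementation. The sketch below is our own illustration, not the paper's parallel algorithm; the classical mistake bound guarantees at most (R/γ)² updates on data separable with margin γ and norm bound R:

```python
import numpy as np

def perceptron(X, y, max_passes=1000):
    """Classical Perceptron: scan the sample, update w on each mistake.
    Returns the final weight vector and the number of updates made."""
    w = np.zeros(X.shape[1])
    mistakes = 0
    for _ in range(max_passes):
        clean_pass = True
        for x, label in zip(X, y):
            if label * (w @ x) <= 0:   # mistake (or zero margin): update
                w += label * x
                mistakes += 1
                clean_pass = False
        if clean_pass:                 # converged: sample fully separated
            break
    return w, mistakes
```

Each pass is trivially parallelizable across examples, but the updates themselves are sequential, which is exactly the Ω(1/γ²)-stage bottleneck discussed below.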
As Freund [10] has largely settled how the resources required by parallel algorithms scale with the accuracy parameter ε (see Lemma 6 below), our focus in this paper is on γ and n, leading to the following:

Main Question (simplified): Is there a learning algorithm that uses poly(n, 1/γ) processors and runs in time poly(log n, log 1/γ) to learn an unknown n-dimensional γ-margin halfspace to accuracy 9/10?

This question, which we view as a fundamental open problem, inspired the research reported here.

Prior results. Table 1 summarizes the running time and number of processors used by various parallel algorithms to learn a γ-margin halfspace over Rⁿ. The naive parallelization of Perceptron in the first line of the table is an algorithm that runs for O(1/γ²) stages; in each stage it processes all of the O(1/γ²) examples simultaneously in parallel, identifies one that causes the Perceptron algorithm to update its hypothesis vector, and performs this update. We do not see how to obtain parallel time bounds better than O(1/γ²) from recent analyses of other algorithms based on gradient descent (such as [7, 8, 4]), some of which use assumptions incomparable in strength to the γ-margin condition studied here. The second line of the table corresponds to a similar naive parallelization of the boosting-based algorithm of [27] that achieves Perceptron-like performance for learning a γ-margin halfspace. It boosts for O(1/γ²) stages over an O(1/γ²)-size sample; using one processor for each coordinate of each example, the running time bound is Õ(1/γ²) · log n, using poly(n, 1/γ) processors. (For both this algorithm and the Perceptron, the time bound can be improved to Õ(1/γ²) + O(log n) as claimed in the table by using an initial random projection step; we explain how to do this in Section 2.)
The third line of the table, included for comparison, is simply a standard sequential algorithm for learning a halfspace based on polynomial-time linear programming executed on one processor, see e.g. [2, 14]. Efficient parallel algorithms have been developed for some simpler PAC learning problems such as learning conjunctions, disjunctions, and symmetric Boolean functions [31]. [6] gave efficient parallel PAC learning algorithms for some geometric constant-dimensional concept classes.

In terms of negative results for parallel learning, [31] shows that (under a complexity-theoretic assumption) there is no parallel algorithm using poly(n) processors and polylog(n) time that constructs a halfspace hypothesis that is consistent with a given linearly separable data set of n-dimensional labeled examples. This does not give a negative answer to the Main Question for several reasons: the Main Question allows any hypothesis representation (that can be efficiently evaluated in parallel), allows the number of processors to grow inverse polynomially with the margin parameter γ, and allows the final hypothesis to err on up to (say) 5% of the points in the data set.

Our results. Our main positive result is a parallel algorithm that uses poly(n, 1/γ) processors to learn γ-margin halfspaces in parallel time Õ(1/γ) + O(log n) (see Table 1). We believe ours is the first algorithm that runs in time polylogarithmic in n and subquadratic in 1/γ. Our analysis can be modified to establish similar positive results for other formulations of the large-margin learning problem, including ones (see [28]) that have been tied closely to weak learnability (these modifications are not presented due to space constraints). In contrast, our main negative result is an information-theoretic argument that suggests that such positive parallel learning results cannot be obtained by boosting alone.
We show that if the weak learner must be called as an oracle, boosting cannot be parallelized: any parallel booster must perform Ω(1/γ2) sequential stages of boosting a “black-box” γ-advantage weak learner in the worst case. This extends an earlier lower bound of Freund [10] for standard (sequential) boosters that can only call the weak learner once per stage. 2 A parallel algorithm for learning γ-margin halfspaces over Bn Our parallel algorithm is an amalgamation of existing tools from high-dimensional geometry, convex optimization, parallel algorithms for linear algebra, and learning theory. Roughly speaking the algorithm works as follows: given a data set of m = ˜O(1/γ2) labeled examples from Bn×{−1, 1}, it begins by randomly projecting the examples down to d = ˜O(1/γ2) dimensions. This essentially preserves the geometry so the resulting d-dimensional labeled examples are still linearly separable with margin Θ(γ). The algorithm then uses a variant of a linear programming algorithm of Renegar [24, 21] which, roughly speaking, solves linear programs with m constraints to high accuracy using (essentially) √m stages of Newton’s method. Within Renegar’s algorithm we employ fast parallel algorithms for linear algebra [22] to carry out each stage of Newton’s method in polylog(1/γ) parallel time steps. This suffices to learn the unknown halfspace to high constant accuracy (say 9/10); to get a 1 −ε-accurate hypothesis we combine the above procedure with Freund’s approach [10] for boosting accuracy that was mentioned in the introduction. The above sketch omits many details, including crucial issues of precision in solving the linear programs to adequate accuracy. In the rest of this section we address the necessary details in full and prove the following theorem: Theorem 1 There is a parallel algorithm with the following performance guarantee: Let f, D define an unknown γ-margin halfspace over Bn as described in the introduction. 
The algorithm is given as input ε, δ > 0 and access to labeled examples (x, f(x)) that are drawn independently from D. It runs in O(((1/γ)polylog(1/γ) + log(n)) log(1/ε) + log log(1/δ)) time, uses poly(n, 1/γ, 1/ε, log(1/δ)) processors, and with probability 1 − δ it outputs a hypothesis h satisfying Pr_{x∼D}[h(x) ≠ f(x)] ≤ ε.

We assume that the value of γ is "known" to the algorithm, since otherwise the algorithm can use a standard "guess and check" approach trying γ = 1, 1/2, 1/4, etc., until it finds a value that works. We first describe the tools from the literature that are used in the algorithm.

Random projection. We say that a random projection matrix is a matrix A chosen uniformly from {−1, 1}^{n×d}. Given such an A and a unit vector w ∈ Rⁿ (recall that the target halfspace f is f(x) = sign(w · x)), let w′ denote (1/√d)wA. After transformation by A the distribution D over Bₙ is transformed to a distribution D′ over R^d in the natural way: a draw x′ from D′ is obtained by making a draw x from D and setting x′ = (1/√d)xA. We will use the following lemma from [1]:

Lemma 1 [1] Let f(x) = sign(w · x) and D define a γ-margin halfspace as described in the introduction. For d = O((1/γ²) log(1/γ)), a random n × d projection matrix A will with probability 99/100 induce D′ and w′ as described above such that
\[
\Pr_{x'\sim D'}\Big[\tfrac{w'}{\|w'\|}\cdot x'<\gamma/2\ \text{ or }\ \|x'\|_2>2\Big]\le\gamma^4.
\]

Convex optimization. We recall some tools we will use from convex optimization over R^d [24, 3]. Let F be the convex barrier function
\[
F(u)=\sum_{i=1}^{d}\log\frac{1}{(u_i-a_i)(b_i-u_i)}
\]
(we specify the values aᵢ < bᵢ below). Let g(u) be the gradient of F at u; note that g(u)ᵢ = 1/(bᵢ − uᵢ) − 1/(uᵢ − aᵢ). Let H(u) be the Hessian of F at u, let ‖v‖ᵤ = √(vᵀH(u)v), and let n(u) = −H(u)⁻¹g(u) be the Newton step at u. For a linear subspace L of R^d, let F|_L be the restriction of F to L, i.e. the function that evaluates to F on L and ∞ everywhere else.
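The barrier quantities just defined can be sketched numerically. This illustration covers only the box barrier itself (it ignores the −ηu₁ objective term and the restriction to the subspace L, which the algorithm below adds):

```python
import numpy as np

def barrier_newton_step(u, a, b):
    """Gradient, Hessian, Newton step n(u), and Newton decrement ||n(u)||_u
    of the log-barrier F(u) = sum_i log(1 / ((u_i - a_i)(b_i - u_i)))."""
    g = 1.0 / (b - u) - 1.0 / (u - a)                  # gradient g(u)
    H = np.diag(1.0 / (b - u)**2 + 1.0 / (u - a)**2)   # Hessian (diagonal for a box)
    n_u = -np.linalg.solve(H, g)                       # Newton step n(u) = -H^{-1} g
    decrement = np.sqrt(n_u @ H @ n_u)                 # local norm ||n(u)||_u
    return n_u, decrement
```

At the analytic center of the box the gradient vanishes and the decrement is zero; away from the center the Newton step points back toward it, which is the quantity the progress measure ‖n(u)‖ᵤ ≤ 1/9 below controls.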
We will apply interior point methods to approximately solve problems of the following form, where a1, ..., ad, b1, ..., bd ∈[−2, 2], |bi −ai| ≥2 for all i, and L is a subspace of Rd: minimize −u1 such that u ∈L and ai ≤xi ≤bi for all i. (1) Let z ∈Rd be the minimizer, and let opt be the optimal value of (1). 3 The algorithm we analyze minimizes Fη(u) def = −ηu1 + F|L(u) for successively larger values of η. Let z(η) be the minimizer of Fη , let optη = Fη(z(η)), and let nη(u) be its Newton step. (To keep the notation clean, the dependence on L is suppressed from the notation.) As in [23], we periodically round intermediate solutions to keep the bit complexity under control. The analysis of such rounding in [23] requires a problem transformation which does not preserve the large-margin condition that we need for our analysis, so we give a new analysis, using tools from [24], and a simpler algorithm. It is easier to analyze the effect of the rounding on the quality of the solution than on the progress measure used in [24]. Fortunately, [3] describes an algorithm that can go from an approximately optimal solution to a solution with a good measure of progress while controlling the bit complexity of the output. The algorithm repeatedly finds the direction of the Newton step, and then performs a line search to find the approximately optimal step size. Lemma 2 ([3, Section 9.6.4]) There is an algorithm Abt with the following property. Suppose for any η > 0, Abt is given u with rational components such that Fη(u) −optη ≤2. Then after constantly many iterations of Newton’s method and back-tracking line search, Abt returns an u+ that (i) satisfies ||nη(u+)||u+ ≤1/9; and (ii) has rational components that have bit-length bounded by a polynomial in d, the bit length of u, and the bit length of the matrix A such that L = {v : Av = 0}.1 We analyze the following variant of the usual central path algorithm for linear programming, which we call Acpr. 
It takes as input a precision parameter α and outputs the final u(k). • Set η1 = 1, β = 1 + 1 8 √ 2d and ϵ = 1 ⌈2 √ d(5d/α+ √ d210d/α+2d+1)⌉. • Given u as input, run Abt starting with u to obtain u(1) such that ||nη1(u(1))||u(1) ≤1/9. • For k from 2 to 1 + ⌈log(4d/α) log(β) ⌉perform the following steps (i)–(iv): (i) set ηk = βηk−1; (ii) set w(k) = u(k−1) +nηk(u(k−1)) (i.e. do one step of Newton’s method); (iii) form r(k) by rounding each component of w(k) to the nearest multiple of ϵ, and then projecting back onto L; (iv) Run Abt starting with r(k) to obtain u(k) such that ||nηk(u(k))||u(k) ≤1/9. The following lemma, implicit2 in [3, 24], bounds the quality of the solutions in terms of the progress measure ||nηk(u)||u. Lemma 3 If u ∈L and ||nηk(u)||u ≤1/9, then Fηk(u)−optηk ≤||nηk(u)||2 u and −u1−opt ≤4d ηk . The following key lemma shows that rounding intermediate solutions does not do too much harm: Lemma 4 For any k, if Fηk(w(k)) ≤optηk + 1/9, then Fηk(r(k)) ≤optηk + 1. Proof: Fix k, and note that ηk = βk−1 ≤5d/α. We henceforth drop k from all notation. First, we claim that κ = min i {|ai −wi|, |bi −wi|} ≥2−2η−2d−1/9. (2) Let m = ((a1 + b1)/2, ..., (ad + bd)/2). Since Fη(w) ≤optη + 1/9, we have Fη(w) ≤Fη(m) + 1/9 ≤η + 1/9. But minimizing each term of Fη separately, we get Fη(w) ≥log 1 κ −2d −η. Combining this with the previous inequality and solving for κ yields (2). Since ||w −r|| ≤ϵ √ d, recalling that ϵ ≤ 1 2 √ d(5d/α+ √ d210d/α+2d+1), we have min i {|ai −ri|, |bi −ri|} ≥2−2η−2d−1/9 −ϵ √ d ≥2−2η−2d−1. (3) 1We note for the reader’s convenience that λ(u) in [3] is the same as our ||n(u+)||u+. The analysis on pages 503-505 of [3] shows that a constant number of iterations suffice. Each step is a projection of H(u)−1g(u) onto L, which can be seen to have bit-length bounded by a polynomial in the bit-length of u. Composing polynomials constantly many times yields a polynomial, which gives the claimed bit-length bound for u+. 
2The first inequality is (9.50) from [3]. The last line of p. 46 of [24] proves that ||nηk(u)||u ≤1/9 implies ||u −z(η)||z(η) ≤1/5 from which the second inequality follows by (2.14) of [24], using the fact that ϑ = 2d (proved on page 35 of [24]). 4 Now, define ψ : R →R by ψ(t) = Fη w + t r−w ||r−w|| . We have Fη(r) −Fη(w) = ψ(||r −w||) −ψ(0) = Z ||r−w|| 0 ψ′(t)dt ≤||r −w|| max t |ψ′(t)|. (4) Let S be the line segment between w and r. Since for each t ∈[0, ||r −w||] the value ψ′(t) is a directional derivative of Fη at some point of S, (4) implies that, for the gradient gη of Fη, Fη(r) −Fη(w) ≤||w −r|| max{||gη(s)|| : s ∈S}. (5) However (3) and (2) imply that min{|ai −si|, |bi −si|} ≥2−2η−2d−1 for all s ∈S. Recalling that g(u)i = 1 bi−ui − 1 ui−ai , this means that ||gη(s)|| ≤η + √ d22η+2d+1 so that applying (5) we get Fη(r) −Fη(w) ≤||w −r||(η + √ d22η+2d+1). Since ||w −r|| ≤ϵ √ d, we have Fη(r) −Fη(w) ≤ ϵ √ d(η + √ d22η+2d+1) ≤ϵ √ d(5d/α + √ d210d/α+2d+1) ≤1/2, and the lemma follows. Fast parallel linear algebra: inverting matrices. We will use an algorithm due to Reif [22]: Lemma 5 ([22]) There is a polylog(d, L)-time, poly(d, L)-processor parallel algorithm which, given as input a d × d matrix A with rational entries of total bit-length L, outputs A−1. Learning theory: boosting accuracy. The following is implicit in the analysis of Freund [10]. Lemma 6 ([10]) Let D be a distribution over (unlabeled) examples. Let A be a parallel learning algorithm such that for all D′ with support(D′) ⊆support(D), given draws (x, f(x)) from D′, with probability 9/10 A outputs a hypothesis with accuracy 9/10 (w.r.t. D′) using P processors in T time. Then there is a parallel algorithm B that with probability 1 −δ constructs a (1 −ε)-accurate hypothesis (w.r.t. D) in O(T log(1/ϵ)+log log(1/δ)) time using poly(P, 1/ϵ, log(1/δ)) processors. 
2.1 Proof of Theorem 1 As described at the start of this section, due to Lemma 6, it suffices to prove the lemma in the case that ϵ = 1/10 and δ = 1/10. We assume w.l.o.g. that γ = 1/integer. The algorithm first selects an n × d random projection matrix A where d = O(log(1/γ)/γ2). This defines a transformation ΦA : Bn →Rd as follows: given x ∈Bn, the vector ΦA(x) ∈ Rd is obtained by (i) rounding each xi to the nearest integer multiple of 1/(4⌈ p n/γ⌉); then (ii) setting x′ = (1/2 √ d)xA; and finally (iii) rounding each x′ i to the nearest multiple of 1/(8⌈d/γ⌉). Given x it is easy to compute ΦA(x) using O(n log(1/γ)/γ2) processors in O(log(n/γ)) time. Let D′ denote the distribution over Rd obtained by applying ΦA to D. Across all coordinates D′ is supported on rational numbers with the same poly(1/γ) common denominator. By Lemma 1, with probability 99/100 over A, the target-distribution pair (w′ = (1/ √ d)wA, D′) satisfies Pr x′∼D′ h |x′ · (w′/∥w′∥)| < γ′ def = γ/8 or ∥x′∥2 > 1 i ≤γ4. (6) The algorithm next draws m = c log(1/γ)/γ2 labeled training examples (ΦA(x), f(x)) from D′; this can be done in O(log(n/γ)) time using O(n) · poly(1/γ) processors as noted above. It then applies Acpr to find a d-dimensional halfspace h that classifies all m examples correctly (more on this below). By (6), with probability at least (say) 29/30 over the random draw of (ΦA(x1), ym), ..., (ΦA(xm), ym), we have that yt(w′ · ΦA(xt)) ≥γ and ||ΦA(xt)|| ≤1 for all t = 1, . . . , m. Now the standard VC bound for halfspaces [30] applied to h and D′ implies that since h classifies all m examples correctly, with overall probability at least 9/10 its accuracy is at least 9/10 with respect to D′, i.e. Prx∼D[h(ΦA(x)) ̸= f(x)] ≤1/10. So the hypothesis h ◦ΦA has accuracy 9/10 with respect to D with probability 9/10 as required by Lemma 6. It remains to justify the above claim about Acpr classifying all examples correctly, and analyze the running time. 
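The margin preservation that the projection step relies on (Lemma 1) is easy to check empirically. The following sketch uses a toy distribution of our own and omits the rounding steps of ΦA; it verifies that a random ±1 projection keeps almost all normalized margins above γ/2:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, gamma, m = 500, 1000, 0.3, 200

w = np.zeros(n)
w[0] = 1.0                                   # target halfspace direction
# points with y * (w . x) = gamma and ||x|| <= 1
noise = rng.normal(size=(m, n))
noise[:, 0] = 0.0                            # noise orthogonal to w
noise = 0.9 * noise / np.linalg.norm(noise, axis=1, keepdims=True)
signs = rng.choice([-1.0, 1.0], size=m)      # labels
X = gamma * signs[:, None] * w + noise

A = rng.choice([-1.0, 1.0], size=(n, d))     # random +-1 projection matrix
Xp = X @ A / np.sqrt(d)                      # x' = (1/sqrt(d)) x A
wp = w @ A / np.sqrt(d)                      # w' = (1/sqrt(d)) w A
margins = signs * (Xp @ wp) / np.linalg.norm(wp)
frac = np.mean(margins >= gamma / 2)         # fraction keeping margin gamma/2
```

With d on the order of (1/γ²)log(1/γ) the distortion concentrates at O(1/√d), so essentially every point retains margin γ/2, as Lemma 1 asserts.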
More precisely, we show that given m = O(log(1/γ)/γ^2) training examples in B_d with rational components that all have coordinates with a common denominator that is poly(1/γ), and that are separable with a margin γ′ = γ/8, A_cpr can be used to construct a d-dimensional halfspace that classifies them all correctly in ˜O(1/γ) parallel time using poly(1/γ) processors.
Given (x′_1, y_1), . . . , (x′_m, y_m) ∈ B_d × {−1, 1} satisfying the above conditions, we will apply algorithm A_cpr to the following linear program, called LP, with α = γ′/2: “minimize −s such that y_t(v · x′_t) − s_t = s and 0 ≤ s_t ≤ 2 for all t ∈ [m]; −1 ≤ v_i ≤ 1 for all i ∈ [d]; and −2 ≤ s ≤ 2.” Intuitively, s is the minimum margin over all examples, and s_t is the difference between each example’s margin and s. The subspace L is defined by the equality constraints y_t(v · x′_t) − s_t = s. Our analysis will conclude by applying the following lemma, with an initial solution of s = −1, v = 0, and s_t = 1 for all t. (Note that u_1 corresponds to s.)
Lemma 7 Given any d-dimensional linear program in the form (1), and an initial solution u ∈ L such that min{|u_i − a_i|, |u_i − b_i|} ≥ 1 for all i, Algorithm A_cpr approximates the optimal solution to an additive ±α. It runs in √d · polylog(d/α) parallel time and uses poly(1/α, d) processors.
The LP constraints enforce that all examples are classified correctly with a margin of at least s. The feasible solution in which v is w′/||w′||, s equals γ′, and s_t = y_t(v · x′_t) − s shows that the optimum solution of LP has value at most −γ′. So approximating the optimum to an additive ±α = ±γ′/2 ensures that all examples are classified correctly, and it is enough to prove Lemma 7.
Proof of Lemma 7: First, we claim that, for all k, ||n_{η_k}(u^{(k)})||_{u^{(k)}} ≤ 1/9; given this, since the final value of η_k is at least 4d/α, Lemma 3 implies that the solution is α-close to optimal. We induct on k.
For k = 1, since initially min_i{|u_i − a_i|, |u_i − b_i|} ≥ 1, we have F(u) ≤ 0, and, since η_1 = 1 and u_1 ≥ −1, we have F_{η_1}(u) ≤ 1 and opt_{η_1} ≥ −1. So we can apply Lemma 2 to get the base case. Now, for the induction step, suppose ||n_{η_k}(u^{(k)})||_{u^{(k)}} ≤ 1/9. It then follows3 from [24, page 46] that ||n_{η_{k+1}}(w^{(k+1)})||_{w^{(k+1)}} ≤ 1/9. Next, Lemmas 3 and 4 imply that F_{η_{k+1}}(r^{(k+1)}) − opt_{η_{k+1}} ≤ 1. Then Lemma 2 gives ||n_{η_{k+1}}(u^{(k+1)})||_{u^{(k+1)}} ≤ 1/9, as required.
Next, we claim that the bit-length of all intermediate solutions is at most poly(d, 1/γ). This holds for r^{(k)}, and follows for u^{(k)} and w^{(k)} because each of them is obtained from some r^{(k)} by performing a constant number of operations, each of which blows up the bit length at most polynomially (see Lemma 2). Since each intermediate solution has polynomial bit length, the matrix inverses can be computed in polylog(d, 1/γ) time using poly(d, 1/γ) processors, by Lemma 5. The time bound then follows from the fact that there are at most O(√d log(d/α)) iterations.
3 Lower bound for parallel boosting in the oracle model
Boosting is a widely used method for learning large-margin halfspaces. In this section we consider the question of whether boosting algorithms can be efficiently parallelized. We work in the original PAC learning setting [29, 16, 26] in which a weak learning algorithm is provided as an oracle that is called by the boosting algorithm, which must simulate a distribution over labeled examples for the weak learner. Our main result for this setting is that boosting is inherently sequential; being able to call the weak learner multiple times in parallel within a single boosting stage does not reduce the overall number of sequential boosting stages that are required. In fact we show this in a very strong sense, by proving that a boosting algorithm that runs arbitrarily many copies of the weak learner in parallel in each stage cannot save even one stage over a sequential booster that runs the weak learner just once in each stage.
This lower bound is unconditional and information-theoretic. Below we first define the parallel boosting framework and give some examples of parallel boosters. We then state and prove our lower bound on the number of stages required by parallel boosters. A consequence of our lower bound is that Ω(log(1/ε)/γ^2) stages of parallel boosting are required in order to boost a γ-advantage weak learner to achieve classification accuracy 1 − ε, no matter how many copies of the weak learner are used in parallel in each stage. Our definition of weak learning is standard in PAC learning, except that for our discussion it suffices to consider a single target function f : X → {−1, 1} over a domain X. (3Noting that ϑ ≤ 2d [24, page 35].)
Definition 1 A γ-advantage weak learner L is an algorithm that is given access to a source of independent random labeled examples drawn from an (unknown and arbitrary) probability distribution P over labeled examples {(x, f(x))}_{x∈X}. L must4 return a weak hypothesis h : X → {−1, 1} that satisfies Pr_{(x,f(x))←P}[h(x) = f(x)] ≥ 1/2 + γ. Such an h is said to have advantage γ w.r.t. P.
We fix P to henceforth denote the initial distribution over labeled examples, i.e. P is a distribution over {(x, f(x))}_{x∈X} where the marginal distribution P_X may be an arbitrary distribution over X. Intuitively, a boosting algorithm runs the weak learner repeatedly on a sequence of carefully chosen distributions to obtain a sequence of weak hypotheses, and combines the weak hypotheses to obtain a final hypothesis that has high accuracy under P. We give a precise definition below, but first we give some intuition to motivate our definition. In stage t of a parallel booster the boosting algorithm may run the weak learner many times in parallel using different probability distributions.
The probability weight of a labeled example (x, f(x)) under a distribution constructed at the t-th stage of boosting may depend on the values of all the weak hypotheses from previous stages and on the value of f(x), but may not depend on any of the weak hypotheses generated by any of the calls to the weak learner in stage t. No other dependence on x is allowed, since intuitively the only interface that the boosting algorithm should have with each data point is through its label and the values of the weak hypotheses from earlier stages. We further observe that since the distribution P is the only source of labeled examples, a booster should construct the distributions at each stage by somehow “filtering” examples (x, f(x)) drawn from P based only on the value of f(x) and the values of the weak hypotheses from previous stages. We thus define a parallel booster as follows:
Definition 2 (Parallel booster) A T-stage parallel boosting algorithm with N-fold parallelism is defined by TN functions {α_{t,k}}_{t∈[T],k∈[N]} and a (randomized) Boolean function h, where α_{t,k} : {−1, 1}^{(t−1)N+1} → [0, 1] and h : {−1, 1}^{TN} → {−1, 1}. In the t-th stage of boosting the weak learner is run N times in parallel. For each k ∈ [N], the distribution P_{t,k} over labeled examples that is given to the k-th run of the weak learner is as follows: a draw from P_{t,k} is made by drawing (x, f(x)) from P and accepting (x, f(x)) as the output of the draw from P_{t,k} with probability p_x = α_{t,k}(h_{1,1}(x), . . . , h_{t−1,N}(x), f(x)) (and rejecting it and trying again otherwise). In stage t, for each k ∈ [N] the booster gives the weak learner access to P_{t,k} as defined above and the weak learner generates a hypothesis h_{t,k} that has advantage at least γ w.r.t. P_{t,k}. After T stages, TN weak hypotheses {h_{t,k}}_{t∈[T],k∈[N]} have been obtained from the weak learner. The final hypothesis of the booster is H(x) := h(h_{1,1}(x), . . .
, h_{T,N}(x)), and its accuracy is min Pr_{(x,f(x))←P}[H(x) = f(x)], where the min is taken over all sequences of TN weak hypotheses subject to the condition that each h_{t,k} has advantage at least γ w.r.t. P_{t,k}.
The parameter N above corresponds to the number of processors that the parallel booster is using; we get a sequential booster when N = 1. Many of the most common PAC-model boosters in the literature are sequential boosters, such as [26, 10, 9, 11, 27, 5] and others. In [10] Freund gave a boosting algorithm and showed that after T stages of boosting, his algorithm generates a final hypothesis that is guaranteed to have error at most
vote(γ, T) := Σ_{j=0}^{⌊T/2⌋} C(T, j) (1/2 + γ)^j (1/2 − γ)^{T−j}
(see Theorem 2.1 of [10]). Freund also gave a matching lower bound by showing (see his Theorem 2.4) that any T-stage sequential booster must have error at least as large as vote(γ, T), and so consequently any sequential booster that generates a (1 − ε)-accurate final hypothesis must run for Ω(log(1/ε)/γ^2) stages. Our Theorem 2 below extends this lower bound to parallel boosters. Several parallel boosting algorithms have been given in the literature, including branching program [20, 13, 18, 19] and decision tree [15] boosters. All of these boosters take Ω(log(1/ε)/γ^2) stages to learn to accuracy 1 − ε; our theorem below implies that any parallel booster must run for Ω(log(1/ε)/γ^2) stages no matter how many parallel calls to the weak learner are made per stage.
Theorem 2 Let B be any T-stage parallel boosting algorithm with N-fold parallelism. Then for any 0 < γ < 1/2, when B is used to boost a γ-advantage weak learner the resulting final hypothesis may have error as large as vote(γ, T) (see the discussion after Definition 2).
We emphasize that Theorem 2 holds for any γ and any N that may depend on γ in an arbitrary way.
4The usual definition of a weak learner would allow L to fail with probability δ.
This probability can be made exponentially small by running L multiple times, so for simplicity we assume there is no failure probability.
The theorem is proved as follows: fix any 0 < γ < 1/2 and fix B to be any T-stage parallel boosting algorithm. We will exhibit a target function f and a distribution P over {(x, f(x))}_{x∈X}, and describe a strategy that a weak learner W can use to generate weak hypotheses h_{t,k} that each have advantage at least γ with respect to the distributions P_{t,k}. We show that with this weak learner W, the resulting final hypothesis H that B outputs will have accuracy at most 1 − vote(γ, T).
We begin by describing the desired f and P. The domain X of f is X = Z × Ω, where Z = {−1, 1} and Ω is the set of all ω = (ω_1, ω_2, . . . ) where each ω_i belongs to {−1, 1}. The target function f is simply f(z, ω) = z. The distribution P = (P_X, P_Y) over {(x, f(x))}_{x∈X} is defined as follows. A draw from P is obtained by drawing x = (z, ω) from P_X and returning (x, f(x)). A draw of x = (z, ω) from P_X is obtained by first choosing a uniform random value in {−1, 1} for z, and then choosing ω_i ∈ {−1, 1} to equal z with probability 1/2 + γ, independently for each i. Note that under P, given the label z = f(x) of a labeled example (x, f(x)), each coordinate ω_i of x is correct in predicting the value of f(x) with probability 1/2 + γ, independently of all other ω_j’s.
We next describe a way that a weak learner W can generate a γ-advantage weak hypothesis each time it is invoked by B. Fix any t ∈ [T] and any k ∈ [N]. When W is invoked with P_{t,k} it replies as follows (recall that for x ∈ X we have x = (z, ω) as described above): (i) if Pr_{(x,f(x))←P_{t,k}}[ω_t = f(x)] ≥ 1/2 + γ then the weak hypothesis h_{t,k}(x) is the function “ω_t,” i.e. the (t + 1)-st coordinate of x. Otherwise, (ii) the weak hypothesis h_{t,k}(x) is “z,” i.e. the first coordinate of x. (Note that since f(x) = z for all x, this weak hypothesis has zero error under any distribution.)
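The bound vote(γ, T) defined above and the biased-coordinate structure of P can be checked numerically: for odd T (so no ties), the exact error of a majority vote over T coordinates that each agree with the label with probability 1/2 + γ equals vote(γ, T). A small self-contained check:

```python
from math import comb
from itertools import product

def vote(gamma, T):
    # vote(gamma, T) = sum_{j=0}^{floor(T/2)} C(T, j) (1/2+gamma)^j (1/2-gamma)^{T-j}
    return sum(comb(T, j) * (0.5 + gamma) ** j * (0.5 - gamma) ** (T - j)
               for j in range(T // 2 + 1))

def majority_error(gamma, T):
    # Exact error of Maj(omega_1, ..., omega_T) when each omega_i independently
    # agrees with the label z with probability 1/2 + gamma (take T odd: no ties).
    err = 0.0
    for agree in product([0, 1], repeat=T):   # 1 means omega_i == z
        p = 1.0
        for b in agree:
            p *= (0.5 + gamma) if b else (0.5 - gamma)
        if sum(agree) <= T // 2:              # the majority vote is wrong
            err += p
    return err
```

For example, vote(0.1, 3) = 0.4^3 + 3 · 0.6 · 0.4^2 = 0.352, matching the exhaustive enumeration.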
It is clear that each weak hypothesis ht,k generated as described above indeed has advantage at least γ w.r.t. Pt,k, so the above is a legitimate strategy for W. The following lemma will play a key role: Lemma 8 If W never uses option (ii) then Pr(x,f(x))←P[H(x) ̸= f(x)] ≥vote(γ, T). Proof: If the weak learner never uses option (ii) then H depends only on variables ω1, . . . , ωT and hence is a (randomized) Boolean function over these variables. Recall that for (x = (z, ω), f(x) = z) drawn from P, each coordinate ω1, . . . , ωT independently equals z with probability 1/2 + γ. Hence the optimal (randomized) Boolean function H over inputs ω1, . . . , ωT that maximizes the accuracy Pr(x,f(x))←P[H(x) = f(x)] is the (deterministic) function H(x) = Maj(ω1, . . . , ωT ) that outputs the majority vote of its input bits. (This can be easily verified using Bayes’ rule in the usual “Naive Bayes” calculation.) The error rate of this H is precisely the probability that at most ⌊T/2⌋ “heads” are obtained in T independent (1/2 + γ)-biased coin tosses, which equals vote(γ, T). Thus it suffices to prove the following lemma, which we prove by induction on t: Lemma 9 W never uses option (ii) (i.e. Pr(x,f(x))←Pt,k[ωt = f(x)] ≥1/2 + γ always). Proof: Base case (t = 1). For any k ∈[N], since t = 1 there are no weak hypotheses from previous stages, so the value of px is determined by the bit f(x) = z (see Definition 2). Hence P1,k is a convex combination of two distributions which we call D1 and D−1. For b ∈{−1, 1}, a draw of (x = (z, ω); f(x) = z) from Db is obtained by setting z = b and independently setting each coordinate ωi equal to z with probability 1/2 + γ. Thus in the convex combination P1,k of D1 and D−1, we also have that ω1 equals z (i.e. f(x)) with probability 1/2 + γ. So the base case is done. Inductive step (t > 1). Fix any k ∈[N]. 
The inductive hypothesis and the weak learner’s strategy together imply that for each labeled example (x = (z, ω), f(x) = z), since hs,ℓ(x) = ωs for s < t, the rejection sampling parameter px = αt,k(h1,1(x), . . . , ht−1,N(x), f(x)) is determined by ω1, . . . , ωt−1 and z and does not depend on ωt, ωt+1, .... Consequently the distribution Pt,k over labeled examples is some convex combination of 2t distributions which we denote Db, where b ranges over {−1, 1}t corresponding to conditioning on all possible values for ω1, . . . , ωt−1, z. For each b = (b1, . . . , bt) ∈{−1, 1}t, a draw of (x = (z, ω); f(x) = z) from Db is obtained by setting z = bt, setting (ω1, . . . , ωt−1) = (b1, . . . , bt−1), and independently setting each other coordinate ωj (j ≥t) equal to z with probability 1/2+γ. In particular, because ωt is conditionally independent of ω1, ..., ωt−1 given z, Pr(ωt = z|ω1 = b1, ..., ωt−1 = bt−1) = Pr(ωt = z) = 1/2 + γ. Thus in the convex combination Pt,k of the different Db’s, we also have that ωt equals z (i.e. f(x)) with probability 1/2 + γ. This concludes the proof of the lemma and the proof of Theorem 2. 8 References [1] R. Arriaga and S. Vempala. An algorithmic theory of learning: Robust concepts and random projection. In Proc. 40th FOCS, pages 616–623, 1999. [2] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, 1989. [3] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge, 2004. [4] J. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for l1-regularized loss minimization. ICML, 2011. [5] Joseph K. Bradley and Robert E. Schapire. Filterboost: Regression and classification on large datasets. In NIPS, 2007. [6] N. Bshouty, S. Goldman, and H.D. Mathias. Noise-tolerant parallel learning of geometric concepts. Inf. and Comput., 147(1):89 – 110, 1998. [7] Michael Collins, Robert E. Schapire, and Yoram Singer. 
Logistic regression, adaboost and bregman distances. Machine Learning, 48(1-3):253–285, 2002. [8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction. ICML, 2011. [9] C. Domingo and O. Watanabe. MadaBoost: a modified version of AdaBoost. In Proc. 13th COLT, pages 180–189, 2000. [10] Y. Freund. Boosting a weak learning algorithm by majority. Inf. and Comput., 121(2):256–285, 1995. [11] Y. Freund. An adaptive version of the boost-by-majority algorithm. Mach. Learn., 43(3):293–318, 2001. [12] R. Greenlaw, H.J. Hoover, and W.L. Ruzzo. Limits to Parallel Computation: P-Completeness Theory. Oxford University Press, New York, 1995. [13] A. Kalai and R. Servedio. Boosting in the presence of noise. Journal of Computer & System Sciences, 71(3):266–290, 2005. [14] N. Karmarkar. A new polynomial time algorithm for linear programming. Combinat., 4:373–395, 1984. [15] M. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. In Proceedings of the Twenty-Eighth Annual Symposium on Theory of Computing, pages 459–468, 1996. [16] M. Kearns and U. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1994. [17] N. Littlestone. From online to batch learning. In Proc. 2nd COLT, pages 269–284, 1989. [18] P. Long and R. Servedio. Martingale boosting. In Proc. 18th Annual COLT, pages 79–94, 2005. [19] P. Long and R. Servedio. Adaptive martingale boosting. In Proc. 22nd NIPS, pages 977–984, 2008. [20] Y. Mansour and D. McAllester. Boosting using branching programs. Journal of Computer & System Sciences, 64(1):103–112, 2002. [21] Y. Nesterov and A. Nemirovskii. Interior Point Polynomial Methods in Convex Programming: Theory and Applications. Society for Industrial and Applied Mathematics, Philadelphia, 1994. [22] John H. Reif. O(log2 n) time efficient parallel factorization of dense, sparse separable, and banded matrices. SPAA, 1994. [23] J. Renegar. 
A polynomial-time algorithm, based on Newton’s method, for linear programming. Mathematical Programming, 40:59–93, 1988. [24] James Renegar. A mathematical view of interior-point methods in convex optimization. Society for Industrial and Applied Mathematics, 2001. [25] F. Rosenblatt. The Perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958. [26] R. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990. [27] R. Servedio. Smooth boosting and learning with malicious noise. JMLR, 4:633–648, 2003. [28] S. Shalev-Shwartz and Y. Singer. On the equivalence of weak learnability and linear separability: New relaxations and efficient boosting algorithms. Machine Learning, 80(2):141–163, 2010. [29] L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984. [30] V. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998. [31] J. S. Vitter and J. Lin. Learning in parallel. Inf. Comput., 96(2):179–202, 1992. [32] DIMACS 2011 Workshop. Parallelism: A 2020 Vision. 2011. [33] NIPS 2009 Workshop. Large-Scale Machine Learning: Parallelism and Massive Datasets. 2009. 9
Semi-supervised Regression via Parallel Field Regularization
Binbin Lin Chiyuan Zhang Xiaofei He
State Key Lab of CAD&CG, College of Computer Science, Zhejiang University, Hangzhou 310058, China
{binbinlinzju, chiyuan.zhang.zju, xiaofeihe}@gmail.com
Abstract
This paper studies the problem of semi-supervised learning from the vector field perspective. Much of the existing work uses the graph Laplacian to ensure the smoothness of the prediction function on the data manifold. However, beyond smoothness, recent theoretical work suggests that we should ensure second order smoothness to achieve faster rates of convergence for semi-supervised regression problems. To achieve this goal, we show that the second order smoothness measures the linearity of the function, and that the gradient field of a linear function has to be a parallel vector field. Consequently, we propose to find a function which minimizes the empirical error while requiring its gradient field to be as parallel as possible. We give a continuous objective function on the manifold and discuss how to discretize it using random points. The discretized optimization problem turns out to be a sparse linear system which can be solved very efficiently. The experimental results demonstrate the effectiveness of our proposed approach.
1 Introduction
In many machine learning problems, one is often confronted with very high dimensional data. There is a strong intuition that the data may have a lower dimensional intrinsic representation. Various researchers have considered the case when the data is sampled from a submanifold embedded in the ambient Euclidean space. Consequently, learning with the low dimensional manifold structure, or specifically the intrinsic topological and geometrical properties of the data manifold, becomes a crucial problem. In the past decade, many geometrically motivated approaches have been developed.
The early work mainly considers the problem of dimensionality reduction. One hopes that the manifold structure can be preserved in the much lower dimensional Euclidean space. For example, ISOMAP [1] is a global approach which tries to preserve the pairwise geodesic distances on the manifold. Different from ISOMAP, Hessian Eigenmaps (HLLE, [2]) is a local approach for a similar purpose. Locally Linear Embedding (LLE, [3]) and Laplacian Eigenmaps (LE, [4]) can be viewed as Laplacian operator based methods which mainly consider the local neighborhood structure of the manifold. Besides dimensionality reduction, Laplacian based regularization has also been widely employed in semi-supervised learning. These methods construct a nearest neighbor graph over the labeled and unlabeled data to model the underlying manifold structure, and use the graph Laplacian [5] to measure the smoothness of the learned function on the manifold. A variety of semi-supervised learning approaches using the graph Laplacian have been proposed [6, 7, 8]. In semi-supervised regression, some recent theoretical analysis [9] shows that using the graph Laplacian regularizer does not lead to faster minimax rates of convergence. [9] also states that the Laplacian regularizer is too general for measuring the smoothness of the function. It is further suggested that we should ensure second order smoothness to achieve faster rates of convergence for semi-supervised regression problems. The Laplacian regularizer is the integral of the norm of the gradient of the function, which is the first order derivative of the function. In this paper, we design regularization terms that penalize the second order smoothness of the function, i.e., the linearity of the function. Estimating the second order covariant derivative of the function is a very challenging problem. We try to address this problem from the vector field perspective.
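The graph Laplacian regularizer mentioned above penalizes first-order smoothness through the standard identity f^T L f = (1/2) Σ_{ij} w_ij (f_i − f_j)^2, where L = D − W. A minimal numeric illustration (toy weights, not from the paper):

```python
import numpy as np

# A 3-node graph: node 0 connected to nodes 1 and 2 with unit weights.
W = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
L = np.diag(W.sum(axis=1)) - W           # graph Laplacian L = D - W
f = np.array([0.0, 1.0, 3.0])
smoothness = f @ L @ f
# identity: f^T L f = (1/2) * sum_ij w_ij (f_i - f_j)^2
check = 0.5 * np.sum(W * (f[:, None] - f[None, :]) ** 2)
```

Here the two edges contribute (0 − 1)^2 + (0 − 3)^2 = 10, so both quantities equal 10; a smoother f gives a smaller penalty.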
We show that the gradient field of a linear function has to be a parallel vector field (or parallel field in short). Consequently, we propose a novel approach called Parallel Field Regularization (PFR) to simultaneously find the function and its gradient field, while requiring the gradient field to be as parallel as possible. Specifically, we propose to compute a function and a vector field which satisfy three conditions simultaneously: 1) the function minimizes the empirical error on the labeled data, 2) the vector field is close to the gradient field of the function, 3) the vector field should be as parallel as possible. A novel regularization framework from the vector field perspective is developed. We give both the continuous and discrete forms of the objective function, and develop an efficient optimization scheme to solve this problem.
2 Regularization on the Vector Field
We first briefly introduce semi-supervised learning methods with regularization on the function. Let M be a d-dimensional submanifold in R^m. Given l labeled data points (x_i, y_i)_{i=1}^l on M, we aim to learn a function f : M → R based on the manifold M and the labeled points (x_i, y_i)_{i=1}^l. A framework of semi-supervised learning based on differential operators can be formulated as follows:
arg min_{f∈C^∞(M)} E(f) = (1/l) Σ_{i=1}^{l} R_0(f(x_i), y_i) + λ_1 R_1(f)
where C^∞(M) denotes the smooth functions on M, R_0 : R × R → R is the loss function and R_1(f) : C^∞(M) → R is a regularization functional. R_1 is often written as a functional norm associated with a differential operator, i.e., R_1(f) = ∫_M ||Df||^2 where D is a differential operator. If D is the covariant derivative ∇ on the manifold, then R_1(f) = ∫_M ||∇f||^2 = ∫_M f L(f) becomes the Laplacian regularizer. If D is the Hessian operator on the manifold, then R_1(f) = ∫_M ||Hess f||^2 becomes the Hessian regularizer.
2.1 Parallel Fields and Linear Functions
We first show the relationship between a parallel field and a linear function on the manifold.
Definition 2.1 (Parallel Field [10]). A vector field X on manifold M is a parallel field if ∇X ≡ 0, where ∇ is the covariant derivative on M.
Definition 2.2 (Linear Function [10]). A continuous function f : M → R is said to be linear if
(f ◦ γ)(t) = f(γ(0)) + ct   (1)
for each geodesic γ.
That a function f is linear means that it varies linearly along the geodesics of the manifold. It is a natural extension of linear functions on Euclidean space.
Proposition 2.1. [10] Let V be a parallel field on the manifold. If it is also a gradient field for a function f, V = ∇f, then f is a linear function on the manifold.
This proposition tells us the relationship between a parallel field and a linear function on the manifold.
2.2 Objective Function
We aim to design regularization terms that penalize the second order smoothness of the function. Following the above analysis, we first approximate the gradient field of the prediction function by a vector field, and then require the vector field to be as parallel as possible. Therefore, we try to learn the function f and its gradient field ∇f simultaneously.
Figure 1: Covariant derivative demonstration. Let V, Y be two vector fields on manifold M. Given a point x ∈ M, we show how to compute the vector ∇_Y V|_x. Let γ(t) be a curve on M, γ : I → M, which satisfies γ(0) = x and γ′(0) = Y_x. Then the covariant derivative along the direction dγ(t)/dt|_{t=0} can be computed by projecting dV/dt|_{t=0} to the tangent space T_xM at x. In other words, ∇_{γ′(0)}V|_x = P_x(dV/dt|_{t=0}), where P_x : v ∈ R^m → P_x(v) ∈ T_xM is the projection matrix. It is not difficult to check that the computation of ∇_Y V|_x is independent of the choice of the curve γ.
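Figure 1's recipe, differentiating V along a curve and projecting onto the tangent space, can be illustrated on the unit circle, where the unit tangent field is parallel (its covariant derivative vanishes). The finite-difference step is an illustrative approximation of dV/dt:

```python
import numpy as np

theta = 0.7                                   # base point x = gamma(0) on S^1
x = np.array([np.cos(theta), np.sin(theta)])

def V(t):
    # unit tangent field along the curve gamma(t) = (cos(theta+t), sin(theta+t))
    return np.array([-np.sin(theta + t), np.cos(theta + t)])

T = V(0.0).reshape(2, 1)                      # tangent basis at x
P = T @ T.T                                   # projection P_x onto T_x M
eps = 1e-6
dV = (V(eps) - V(-eps)) / (2 * eps)           # dV/dt at t = 0 (finite difference)
cov_deriv = P @ dV                            # nabla_{gamma'(0)} V = P_x(dV/dt)
```

Here dV/dt = −x points along the normal, so its tangential projection is (numerically) zero: the field is parallel.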
Formally, we propose to learn a function f and a vector field V on the manifold with two constraints:
• The vector field V should be close to the gradient field ∇f of f, which can be formulated as follows:
min_{f∈C^∞, V} R_1(f, V) = ∫_M ||∇f − V||^2   (2)
• The vector field V should be as parallel as possible:
min_V R_2(V) = ∫_M ||∇V||_F^2   (3)
where ∇ is the covariant derivative on the manifold and ||·||_F denotes the Frobenius norm. In the following, we provide some detailed explanation of R_2(V). ∇V measures the change of the vector field V. If ∇V vanishes, then V is a parallel field. For a given direction Y_x at x ∈ M, the geometrical meaning of ∇_Y V|_x is demonstrated in Fig. 1. For a fixed point x ∈ M, ∇V|_x is a linear map on the tangent space T_xM. According to the definition of the Frobenius norm, we have
||∇V||_F^2 = Σ_{i,j=1}^{d} (g(∇_{∂_i}V, ∂_j))^2 = Σ_{i=1}^{d} g(∇_{∂_i}V, ∇_{∂_i}V)   (4)
where g is the Riemannian metric on M and ∂_1, . . . , ∂_d is an orthonormal basis of T_xM. Naturally, we propose the following objective function based on vector field regularization:
arg min_{f∈C^∞(M), V} E(f, V) = (1/l) Σ_{i=1}^{l} R_0(x_i, y_i, f) + λ_1 R_1(f, V) + λ_2 R_2(V)   (5)
For the loss function R_0, we use the squared loss R_0(f(x_i), y_i) = (f(x_i) − y_i)^2 for simplicity.
3 Implementation
Since the manifold M is unknown, the function f which minimizes (5) cannot be solved for directly. In this section, we discuss how to discretize the continuous objective function (5).
3.1 Vector Field Representation
We are given l labeled data points (x_i, y_i)_{i=1}^l and n − l unlabeled points x_{l+1}, . . . , x_n in R^m. Let f_i = f(x_i), i = 1, . . . , n; our goal is to learn a function f = (f_1, . . . , f_n)^T. We first construct a nearest neighbor graph by either ε-neighborhood or k nearest neighbors. Let x_i ∼ x_j denote that x_i and x_j are neighbors. For each point x_i, we estimate its tangent space T_{x_i}M by performing PCA on its neighborhood. We choose the largest d eigenvectors as the bases since T_{x_i}M is d-dimensional.
Let T_i ∈ R^{m×d} be the matrix whose columns constitute an orthonormal basis for T_{x_i}M. It is easy to show that P_i = T_i T_i^T is the unique orthogonal projection from R^m onto the tangent space T_{x_i}M [11]. That is, for any vector a ∈ R^m, we have P_i a ∈ T_{x_i}M and (a − P_i a) ⊥ P_i a.
Let V be a vector field on the manifold. For each point x_i, let V_{x_i} denote the value of the vector field V at x_i, and ∇V|_{x_i} denote the value of ∇V at x_i. According to the definition of a vector field, V_{x_i} should be a vector in the tangent space T_{x_i}M. Therefore, it can be represented by the local coordinates of the tangent space, V_{x_i} = T_i v_i, where v_i ∈ R^d. We define V = (v_1^T, . . . , v_n^T)^T ∈ R^{dn}. That is, V is a dn-dimensional column vector which concatenates all the v_i’s. In the following, we first discretize our objective function E(f, V), and then minimize it to obtain f and V.
3.2 Gradient Field Computation
In order to discretize R_1(f, V), we first discuss the Taylor expansion of f on the manifold. Let exp_x denote the exponential map at x. The exponential map exp_x : T_xM → M maps the tangent space T_xM to the manifold M. Let a ∈ T_xM be a tangent vector. Then there is a unique geodesic γ_a satisfying γ_a(0) = x with the initial tangent vector γ′_a(0) = a. The corresponding exponential map is defined as exp_x(ta) = γ_a(t), t ∈ [0, 1]. Locally, the exponential map is a diffeomorphism. Note that f ◦ exp_x : T_xM → R is a smooth function on T_xM. Then the following Taylor expansion of f holds:
f(exp_x(a)) ≈ f(x) + ⟨∇f(x), a⟩,   (6)
where a ∈ T_xM is a sufficiently small tangent vector. In the discrete case, let exp_{x_i} denote the exponential map at x_i. Since exp_{x_i} is a diffeomorphism, there exists a tangent vector a_{ij} ∈ T_{x_i}M such that exp_{x_i}(a_{ij}) = x_j. According to the definition of the exponential map, ||a_{ij}|| equals the geodesic distance between x_i and x_j, which we denote by d_{ij}. Let e_{ij} be the unit vector in the direction of a_{ij}, i.e., e_{ij} = a_{ij}/d_{ij}.
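The tangent-space estimation step, PCA on a neighborhood, and the resulting projection P_i = T_i T_i^T can be sketched as follows (the function name and the toy data are illustrative):

```python
import numpy as np

def tangent_basis(X, i, nbr_idx, d):
    # Estimate T_i: the top-d principal directions of x_i's neighborhood.
    D = X[nbr_idx] - X[i]
    D = D - D.mean(axis=0)                 # center before PCA
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[:d].T                        # m x d matrix, orthonormal columns

rng = np.random.default_rng(0)
# toy data: points lying exactly on the xy-plane embedded in R^3
X = np.concatenate([rng.normal(size=(20, 2)), np.zeros((20, 1))], axis=1)
T0 = tangent_basis(X, 0, np.arange(1, 8), d=2)
P0 = T0 @ T0.T                             # projection onto the tangent space
```

For this flat toy data the estimated tangent space is the xy-plane itself, so P0 projects out the third coordinate exactly.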
We approximate a_{ij} by projecting the vector x_j − x_i to the tangent space, i.e., a_{ij} = P_i(x_j − x_i). Therefore, Eq. (6) can be rewritten as follows:
f(x_j) = f(x_i) + ⟨∇f(x_i), P_i(x_j − x_i)⟩   (7)
Since f is unknown, ∇f is also unknown. In the following, we discuss how to compute ||∇f(x_i) − V_{x_i}||^2 discretely. We first show that the vector norm can be computed by an integral on a unit sphere, where the unit sphere can be discretely approximated by a neighborhood. Let u be a unit vector on the tangent space T_xM; then we have (see the exercise 1.12 in [12])
(1/ω_d) ∫_{S^{d−1}} ⟨X, u⟩^2 dδ(X) = 1   (8)
where S^{d−1} is the unit (d − 1)-sphere, dω_d is its volume, and dδ is its volume form. Let ∂_i, i = 1, . . . , d, be an orthonormal basis of T_xM. Then any vector b ∈ T_xM can be written as b = Σ_{i=1}^{d} b^i ∂_i. Furthermore, we have
||b||^2 = Σ_{i=1}^{d} (b^i)^2 = Σ_{i=1}^{d} (b^i)^2 (1/ω_d) ∫_{S^{d−1}} ⟨X, ∂_i⟩^2 dδ(X) = (1/ω_d) ∫_{S^{d−1}} ⟨X, b⟩^2 dδ(X)
From Eq. (7), we can see that ⟨∇f(x_i), P_i(x_j − x_i)⟩ = f(x_j) − f(x_i). Thus, we have
||∇f(x_i) − V_{x_i}||^2 = (1/ω_d) ∫_{S^{d−1}} ⟨X, ∇f(x_i) − V_{x_i}⟩^2 dδ(X)
≈ Σ_{j∼i} ⟨e_{ij}, ∇f(x_i) − V_{x_i}⟩^2 = Σ_{j∼i} w_{ij} ⟨a_{ij}, ∇f(x_i) − V_{x_i}⟩^2
= Σ_{j∼i} w_{ij} ⟨P_i(x_j − x_i), ∇f(x_i) − V_{x_i}⟩^2 = Σ_{j∼i} w_{ij} ((P_i(x_j − x_i))^T V_{x_i} − f(x_j) + f(x_i))^2   (9)
where w_{ij} = d_{ij}^{−2}. The weight w_{ij} can be approximated either by the heat kernel weight exp(−||x_i − x_j||^2/δ) or simply by the 0–1 weight. Then R_1 reduces to the following:
R_1(f, V) = Σ_i Σ_{j∼i} w_{ij} ((x_j − x_i)^T T_i v_i − f_j + f_i)^2   (10)
3.3 Parallel Field Computation
As discussed before, we hope the vector field to be as parallel as possible on the manifold. In the discrete case, R_2 becomes
R_2(V) = Σ_{i=1}^{n} ||∇V|_{x_i}||_F^2   (11)
In the following, we discuss how to approximate ||∇V|_{x_i}||_F^2 for a given point x_i. Since we do not know ∇_{∂_i}V for a given basis ∂_i, ||∇V|_{x_i}||_F^2 cannot be computed according to Eq. (4). We define a (0, 2) symmetric tensor α as α(X, Y) = g(∇_X V, ∇_Y V), where X and Y are vector fields on the manifold. We have Trace(α) = Σ_{i=1}^{d} g(∇_{∂_i}V, ∇_{∂_i}V) = ||∇V||_F^2, where ∂_1, . . .
, ∂_d is an orthonormal basis of the tangent space. For the trace of α, we have the following geometric interpretation (see the exercise 1.12 in [12]):
Trace(α) = (1/ω_d) ∫_{S^{d−1}} α(X, X) dδ(X)   (12)
where S^{d−1} is the unit (d − 1)-sphere, dω_d its volume, and dδ its volume form. So for a given point x_i, we can approximate ||∇V|_{x_i}||_F^2 by the following:
||∇V|_{x_i}||_F^2 = Trace(α)|_{x_i} = (1/ω_d) ∫_{S^{d−1}} α(X, X)|_{x_i} dδ(X) ≈ Σ_{j∼i} α(e_{ij}, e_{ij}) = Σ_{j∼i} ||∇_{e_{ij}}V||^2   (13)
Then we discuss how to discretize ∇_{e_{ij}}V. Given e_{ij} ∈ T_{x_i}M, there exists a unique geodesic γ(t) which satisfies γ(0) = x_i and γ′(0) = e_{ij}. Then the covariant derivative of the vector field V along e_{ij} is given by (please see Fig. 1)
∇_{e_{ij}}V = P_i (dV/dt)|_{t=0} = P_i lim_{t→0} (V(γ(t)) − V(γ(0)))/t ≈ P_i (V_{x_j} − V_{x_i})/d_{ij} = √w_{ij} (P_i V_{x_j} − V_{x_i})
Combining with Eq. (13), R_2 becomes:
R_2(V) = Σ_i Σ_{j∼i} w_{ij} ||P_i T_j v_j − T_i v_i||^2   (14)
3.4 Objective Function in the Discrete Form
Let I denote the n × n diagonal matrix where I_ii = 1 if x_i is labeled and I_ii = 0 otherwise. And let y ∈ R^n be the column vector whose i-th element is y_i if x_i is labeled and 0 otherwise. Then
R_0(f) = (1/l)(f − y)^T I (f − y)   (15)
Combining R_1 in Eq. (10) and R_2 in Eq. (14), the final objective function in the discrete form can be written as follows:
E(f, V) = (1/l)(f − y)^T I (f − y) + λ_1 Σ_i Σ_{j∼i} w_{ij} ((x_j − x_i)^T T_i v_i − f_j + f_i)^2 + λ_2 Σ_i Σ_{j∼i} w_{ij} ||P_i T_j v_j − T_i v_i||^2   (16)
3.5 Optimization
In this subsection, we discuss how to solve the optimization problem (16). Let L denote the Laplacian matrix of the graph with weights w_{ij}. Then we can rewrite R_1 as follows:
R_1(f, V) = 2f^T L f + Σ_i Σ_{j∼i} w_{ij} ((x_j − x_i)^T T_i v_i)^2 − 2 Σ_i Σ_{j∼i} w_{ij} (x_j − x_i)^T T_i v_i s_{ij}^T f
where s_{ij} ∈ R^n is a selection vector of all zero elements except for the i-th element being −1 and the j-th element being 1.
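The discrete parallelism penalty of Eq. (14) can be sketched on a toy flat patch (points in the xy-plane inside R^3, tangent bases taken as known rather than estimated); a constant tangent field is parallel and incurs zero penalty, while a varying field does not:

```python
import numpy as np

n, d = 10, 2
T = [np.eye(3)[:, :2] for _ in range(n)]       # T_i: basis of the xy-plane
P = [Ti @ Ti.T for Ti in T]                    # P_i = T_i T_i^T
W = np.zeros((n, n))                           # chain graph, 0-1 weights
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

def R2(V):
    # Eq. (14): sum_i sum_{j~i} w_ij || P_i T_j v_j - T_i v_i ||^2
    total = 0.0
    for i in range(n):
        for j in range(n):
            if W[i, j] > 0:
                diff = P[i] @ T[j] @ V[j] - T[i] @ V[i]
                total += W[i, j] * np.sum(diff ** 2)
    return total

parallel = [np.array([1.0, 2.0])] * n          # constant field => parallel
varying = [np.array([float(i), 0.0]) for i in range(n)]
```

Each directed edge of the varying field contributes ||v_j − v_i||^2 = 1, so its penalty is 2(n − 1) = 18, while the constant field scores exactly zero.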
Then the partial derivative of $R_1$ with respect to the variable $v_i$ is

$$\frac{\partial R_1(f, V)}{\partial v_i} = 2 \sum_{j \sim i} w_{ij} T_i^T (x_j - x_i)(x_j - x_i)^T T_i v_i - 2 \sum_{j \sim i} w_{ij} T_i^T (x_j - x_i) s_{ij}^T f$$

Thus we get

$$\frac{\partial R_1(f, V)}{\partial V} = 2 G V - 2 C f \qquad (17)$$

where $G$ is a $dn \times dn$ block diagonal matrix, and $C = [C_1^T, \ldots, C_n^T]^T$ is a $dn \times n$ block matrix. Denoting the $i$-th $d \times d$ diagonal block of $G$ by $G_{ii}$ and the $i$-th $d \times n$ block of $C$ by $C_i$, we have

$$G_{ii} = \sum_{j \sim i} w_{ij} T_i^T (x_j - x_i)(x_j - x_i)^T T_i \qquad (18)$$

$$C_i = \sum_{j \sim i} w_{ij} T_i^T (x_j - x_i) s_{ij}^T \qquad (19)$$

The partial derivative of $R_1$ with respect to the variable $f$ is

$$\frac{\partial R_1(f, V)}{\partial f} = 4 L f - 2 C^T V \qquad (20)$$

Similarly, we can compute the partial derivative of $R_2$ with respect to the variable $v_i$:

$$\frac{\partial R_2(V)}{\partial v_i} = 2 \sum_{j \sim i} w_{ij} \big( (T_i^T T_j T_j^T T_i + I) v_i - 2 T_i^T T_j v_j \big) = 2 \sum_{j \sim i} w_{ij} \big( (Q_{ij} Q_{ij}^T + I) v_i - 2 Q_{ij} v_j \big)$$

where $Q_{ij} = T_i^T T_j$. Thus we obtain

$$\frac{\partial R_2}{\partial V} = 2 B V \qquad (21)$$

where $B$ is a $dn \times dn$ sparse block matrix. Indexing each $d \times d$ block by $B_{ij}$, for $i, j = 1, \ldots, n$ we have

$$B_{ii} = \sum_{j \sim i} w_{ij} (Q_{ij} Q_{ij}^T + I) \qquad (22)$$

$$B_{ij} = \begin{cases} -2 w_{ij} Q_{ij}, & \text{if } x_i \sim x_j \\ 0, & \text{otherwise} \end{cases} \qquad (23)$$

Notice that $\frac{\partial R_0}{\partial f} = \frac{2}{l} I (f - y)$. Combining Eq. (17), Eq. (20) and Eq. (21), we have

$$\frac{\partial E(f, V)}{\partial f} = \frac{\partial R_0}{\partial f} + \lambda_1 \frac{\partial R_1}{\partial f} + \lambda_2 \frac{\partial R_2}{\partial f} = 2 \Big( \frac{1}{l} I + 2 \lambda_1 L \Big) f - 2 \lambda_1 C^T V - \frac{2}{l} y \qquad (24)$$

$$\frac{\partial E(f, V)}{\partial V} = \frac{\partial R_0}{\partial V} + \lambda_1 \frac{\partial R_1}{\partial V} + \lambda_2 \frac{\partial R_2}{\partial V} = -2 \lambda_1 C f + 2 (\lambda_1 G + \lambda_2 B) V \qquad (25)$$

Requiring that the derivatives vanish, we finally get the following linear system:

$$\begin{pmatrix} \frac{1}{l} I + 2 \lambda_1 L & -\lambda_1 C^T \\ -\lambda_1 C & \lambda_1 G + \lambda_2 B \end{pmatrix} \begin{pmatrix} f \\ V \end{pmatrix} = \begin{pmatrix} \frac{1}{l} y \\ 0 \end{pmatrix} \qquad (26)$$

(a) Ground truth (b) Laplacian (3.65) (c) Hessian (1.35) (d) PFR (1.14)
Figure 2: Global temperature prediction. Regression on the satellite measurement of temperatures in the middle troposphere. 1% of the samples are randomly selected as training data. The ground truth is shown in (a). The colors indicate temperature values (in Kelvin). The regression results are visualized in (b)-(d). The numbers in the panel titles are the mean absolute prediction errors.
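Concretely, Section 3.5 reduces to assembling $L$, $G$, $C$ and $B$ from Eqs. (18), (19), (22), (23) and solving the linear system of Eq. (26). Below is a dense numpy sketch for small $n$ (the paper's matrices are sparse in practice; all function and variable names are ours):

```python
import numpy as np

def solve_pfr(X, T, w, labeled, y, lam1, lam2):
    """Assemble and solve the linear system of Eq. (26).
    X: (n, m) points; T[i]: (m, d) tangent basis; w: symmetric (n, n)
    weight matrix (zero where points are not neighbors); labeled: list of
    labeled indices; y: (n,) with labels at labeled positions, 0 elsewhere.
    A dense sketch for tiny problems, not the authors' implementation."""
    n, d = len(X), T[0].shape[1]
    l = len(labeled)
    W = np.asarray(w, dtype=float)
    Lap = np.diag(W.sum(1)) - W                        # graph Laplacian L
    Ilab = np.zeros((n, n)); Ilab[labeled, labeled] = 1.0
    G = np.zeros((n * d, n * d)); B = np.zeros((n * d, n * d))
    C = np.zeros((n * d, n))
    for i in range(n):
        bi = slice(i * d, (i + 1) * d)
        for j in np.nonzero(W[i])[0]:
            e = X[j] - X[i]
            te = T[i].T @ e                            # T_i^T (x_j - x_i)
            G[bi, bi] += W[i, j] * np.outer(te, te)    # Eq. (18)
            s = np.zeros(n); s[i] = -1.0; s[j] = 1.0   # selection vector s_ij
            C[bi] += W[i, j] * np.outer(te, s)         # Eq. (19)
            Q = T[i].T @ T[j]                          # Q_ij = T_i^T T_j
            B[bi, bi] += W[i, j] * (Q @ Q.T + np.eye(d))      # Eq. (22)
            B[bi, j * d:(j + 1) * d] += -2.0 * W[i, j] * Q    # Eq. (23)
    A = np.block([[Ilab / l + 2 * lam1 * Lap, -lam1 * C.T],
                  [-lam1 * C, lam1 * G + lam2 * B]])
    rhs = np.concatenate([y / l, np.zeros(n * d)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:].reshape(n, d)              # f and V (rows v_i)
```

On a toy 1D chain with labels at the two ends, the recovered $f$ is the linear interpolant and $V$ the constant unit field, which makes the objective $E(f, V)$ exactly zero.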
4 Related Work and Discussion

The approximation of the Laplacian operator by the graph Laplacian [5] has enjoyed great success in the last decade. Theoretical results [13, 14] also establish the consistency of the approximation. One of the most important features of the graph Laplacian is that it is coordinate free; that is, it does not depend on any special coordinate system. Estimating the Hessian is much more difficult, and there is little work on it. Previous approaches [2, 15] first estimate normal coordinates in the tangent space, and then estimate the first order derivative at each point, which is a matrix pseudo-inversion problem. One major limitation is that when the number of nearest neighbors $k$ is larger than $d + \frac{d(d+1)}{2}$, where $d$ is the dimension of the manifold, the estimation becomes inaccurate and unstable [15]. This conflicts with the asymptotic regime, since it is not desirable for $k$ to be bounded by a finite number when the data is sufficiently dense. In contrast, our method is coordinate free. Also, we directly estimate the norm of the second order derivative instead of trying to estimate its coefficients, which turns out to be an integration problem over the neighboring points. We only need simple matrix multiplications to approximate the integral at each point, and we do not have to solve matrix inversion problems. Therefore, asymptotically, we expect our method to be much more accurate and robust for approximating the norm of the second order derivative.

5 Experiments

In this section, we compare our proposed Parallel Field Regularization (PFR) algorithm with two state-of-the-art semi-supervised regression methods: Laplacian regularized transduction (Laplacian) [8] and Hessian regularized transduction (Hessian)1 [15]. Our experiments are carried out on two real-world data sets. Regularization parameters for all algorithms are chosen via cross-validation.
5.1 Global Temperature

In this test, we perform regression on the earth's surface, which is a 2D sphere manifold. We try to predict the satellite measurement of temperatures in the middle troposphere in Dec. 20042, which contains 9504 valid temperature measurements. The coordinates (latitude, longitude) of the measurements are used as features and the corresponding temperature values are the responses. The dimension of the manifold is set to 2 and the number of nearest neighbors is set to 6 in graph construction. We randomly select 1% of the samples as labeled data, and compare the predicted temperature values with the ground truth on the rest of the data.

The regression results are shown in Fig. 2. The numbers in the captions indicate the mean absolute prediction errors generated by the different algorithms. It can be seen from the visualization that Hessian and PFR perform better than Laplacian. Furthermore, from the prediction errors, we can see that PFR outperforms Hessian.

1We use the code from the authors, downloadable from http://www.ml.uni-saarland.de/code/HessianSSR/HessianSSR.html.
2http://www.remss.com/msu/.

Figure 3: Results on the moving hand dataset (MAE against the number of labels, for PFR, Hessian and Laplacian).

Figure 4: Examples of regression results on the moving hand data set (frames 016 and 300; panels show Laplacian, Hessian and PFR). 60 labeled samples are used for training. Each row shows the results obtained by the three algorithms for a frame. In each image, the red dots indicate the ground truth positions we labeled manually, and the blue arrows show the positions predicted by the different algorithms.

5.2 Positions of Moving Hands

In this subsection, we perform experiments using a video of a subject sitting on a sofa and waving his arms1. Our goal is to predict the positions of the (left and right) elbows and wrists. We extract the first 500 frames of the video and manually label the positions of the elbows and wrists.
We scale each frame to a size of 120 × 90 and use the raw pixels (10800-dimensional vectors) as features. The response for each frame is an 8-dimensional vector whose elements are the 2D coordinates of the elbows and wrists. Since there are 8 free parameters, we set the dimension of the manifold to 8. We use 18 nearest neighbors in graph construction. We run the experiments with different numbers of labeled frames. For each given number of labeled frames, we perform 10 tests with randomly selected labeled sets and calculate the average of the mean absolute error (MAE) over the tests. The final result is shown in Fig. 3. As can be seen, PFR consistently outperforms the other two algorithms. Laplacian yields a high MAE. Hessian is very unstable on this dataset, and its results vary drastically with different numbers of labels. We also show some example frames in Fig. 4. The red dots in the figures indicate the ground truth positions and the blue arrows are drawn by connecting the positions of the elbows and wrists predicted by the different algorithms. Again we can verify that PFR performs better than the other two algorithms.

6 Conclusion

In this paper, we propose a novel semi-supervised learning algorithm from the vector field perspective. We show the relationship between vector fields and functions on the manifold. The parallelism of the vector field is used to measure the linearity of the target prediction function. Parallel fields are a special kind of vector field on the manifold with very nice properties, and it would be interesting to explore other kinds of vector fields to facilitate learning on manifolds. Moreover, vector fields can also be used to study the geometry and topology of the manifold. For example, the Poincaré-Hopf theorem tells us that the sum of the indices over all the isolated zeros of a vector field equals the Euler characteristic of the manifold, a very important topological invariant.
Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61125203, the National Basic Research Program of China (973 Program) under Grant 2012CB316404, and the National Natural Science Foundation of China under Grants 90920303 and 60875044.

1The video is obtained from http://www.csail.mit.edu/~rahimi/manif.

References
[1] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[2] D. L. Donoho and C. E. Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences of the United States of America, 100(10):5591–5596, 2003.
[3] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems 14, pages 585–591, 2001.
[5] F. R. K. Chung. Spectral Graph Theory, volume 92 of Regional Conference Series in Mathematics. AMS, 1997.
[6] X. Zhu and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proc. of the 20th International Conference on Machine Learning, 2003.
[7] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, 2003.
[8] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. In Conference on Learning Theory, pages 624–638, 2004.
[9] J. Lafferty and L. Wasserman. Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems 20, pages 801–808, 2007.
[10] P. Petersen. Riemannian Geometry. Springer, New York, 1998.
[11] G. H. Golub and C. F. Van Loan. Matrix Computations.
Johns Hopkins University Press, 3rd edition, 1996.
[12] B. Chow, P. Lu, and L. Ni. Hamilton's Ricci Flow. AMS, Providence, Rhode Island, 2006.
[13] M. Belkin and P. Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. In Conference on Learning Theory, pages 486–500, 2005.
[14] M. Hein, J.-Y. Audibert, and U. von Luxburg. From graphs to manifolds: weak and strong pointwise consistency of graph Laplacians. In Conference on Learning Theory, pages 470–485, 2005.
[15] K. I. Kim, F. Steinke, and M. Hein. Semi-supervised regression using Hessian energy with an application to semi-supervised dimensionality reduction. In Advances in Neural Information Processing Systems 22, pages 979–987, 2009.
Thinning Measurement Models and Questionnaire Design Ricardo Silva Department of Statistical Science University College London Gower Street, London WC1E 6BT ricardo@stats.ucl.ac.uk Abstract Inferring key unobservable features of individuals is an important task in the applied sciences. In particular, an important source of data in fields such as marketing, social sciences and medicine is questionnaires: answers in such questionnaires are noisy measures of target unobserved features. While comprehensive surveys help to better estimate the latent variables of interest, aiming at a high number of questions comes at a price: refusal to participate in surveys can go up, as well as the rate of missing data; quality of answers can decline; costs associated with applying such questionnaires can also increase. In this paper, we cast the problem of refining existing models for questionnaire data as follows: solve a constrained optimization problem of preserving the maximum amount of information found in a latent variable model using only a subset of existing questions. The goal is to find an optimal subset of a given size. For that, we first define an information theoretical measure for quantifying the quality of a reduced questionnaire. Three different approximate inference methods are introduced to solve this problem. Comparisons against a simple but powerful heuristic are presented. 1 Contribution A common goal in the applied sciences is to measure concepts of interest that are not directly observable (Bartholomew et al., 2008). Such is the case in the social sciences, medicine, economics and other fields, where quantifying key attributes such as “consumer satisfaction,” “anxiety” and “recession” requires the development of indicators: observable variables that are postulated to measure the target latent variables up to some measurement error (Bollen, 1989; Carroll et al., 1995). 
In a probabilistic framework, this often boils down to a latent variable model (Bishop, 1998). One common setup is to assume that each observed indicator Yi is generated independently given the set of latent variables X. Conditioning on any given observed data point Y gives information about the distribution of the latent vector X, which can then be used for ranking, clustering, visualization or smoothing, among other tasks. Figure 1 provides an illustration. Questionnaires from large surveys are sometimes used to provide such indicators, each Yi recording an answer that typically corresponds to a Bernoulli or ordinal variable. For instance, experts can be given questions concerning whether there is freedom of press in a particular nation, as a way of measuring its democratization level (Bollen, 1989; Palomo et al., 2007). Nations can then be clustered or ranked within an interpretable latent space. Long questionnaires nevertheless have drawbacks, as summarized by Stanton et al. (2002) in the context of psychometric studies: longer surveys take more time to complete, tend to have more missing data, and have higher refusal rates than short surveys. Arguably, then, techniques for reducing the length of scales while maintaining psychometric quality are worthwhile.

[Figure 1 panels: (a) a graphical model with latent variables X1 (Industrialization) and X2 (Democratization) and indicators Y1, ..., Y5; (b) "Factor scores: countries in the latent space", barplots of democratization (Dem1960, Dem1965) against country, ordered by industrialization factor.]
Figure 1: (a) A graphical representation of a latent variable model. Notice that in general latent variables will be dependent. Here, the question is how to quantify democratization and industrialization levels of nations given observed indicators Y such as freedom of press and gross national product, among others (Bollen, 1989; Palomo et al., 2007).
(b) An example of a result implied by the model (adapted from Palomo et al. (2007)): barplots of the conditional distribution of democratization levels given the observed indicators at two time points, ordered by the posterior mean industrialization level. The distribution of the latent variables given the observations is the basis of the analysis. Our contribution is a methodology for choosing which indicators to preserve (e.g., which items to keep in a questionnaire) given: i.) a latent variable model specification of the domain of interest; ii.) a target number of indicators that should be preserved. To accomplish this, we provide: i.) a target objective function that quantifies the amount of information preserved by a choice of a subset of indicators, with respect to the full set; ii.) algorithms for optimizing this choice of subset with respect to the objective function. The general idea is to start with a target posterior distribution of latent variables, defined by some latent variable measurement model M (i.e., PM(X | Y)). We want to choose a subset Yz ⊂Y so that the resulting conditional distribution PM(X | Yz) is as close as possible to the original one according to some metric. Model M is provided either by expertise or by numerous standard approaches that can be applied to learn it from data (e.g., methods in Bishop, 2009). We call this task measurement model thinning. Notice that the size of Yz is a domain-dependent choice. Assuming M is a good model for the data, choosing a subset of indicators will incur some information loss. It is up to the analyst to choose a trade-off between loss of information and the design of simpler, cheaper ways of measuring latent variables. Even if a shorter questionnaire is not to be deployed, the outcome of measurement model thinning provides a formal sensitivity analysis of the target latent distribution with respect to the available indicators. The result is useful to generate different insights into the domain. 
This paper is organized as follows: Section 2 defines a formal criterion to quantify how appropriate a subset Yz is. Section 3 describes different approaches in which this criterion can be optimized. Related work is briefly discussed in Section 4. Experiments with synthetic and real data are discussed in Section 5, followed by the conclusion.

2 An Information-Theoretical Criterion

Our focus is on domains where latent variables are not a by-product of a dimensionality reduction technique, but the target of the analysis, as in the example of Figure 1. That is, measurement error problems where the variables to be recorded are designed specifically to obtain information concerning such unknowns (Carroll et al., 1995; Bartholomew et al., 2008). As such, we postulate that the outcome of any analysis should be a functional of $P_M(X \mid Y)$, the conditional distribution of unobservables $X$ given observables $Y$ within a model $M$. It is assumed that $M$ specifies the joint $P_M(X, Y)$. We further assume that observed variables are conditionally independent given $X$, i.e. $P_M(X, Y) = P_M(X) \prod_{i=1}^p P_M(Y_i \mid X)$, with $p$ being the number of observed indicators.

If $z \equiv (z_1, z_2, \ldots, z_p)$ is a binary vector of the same dimensionality as $Y$, and $Y_z$ is the subset of $Y$ corresponding to the non-zero entries of $z$, we can assess $z$ by the KL divergence

$$\mathrm{KL}(P_M(X \mid Y) \,\|\, P_M(X \mid Y_z)) \equiv \int P_M(X \mid Y) \log \frac{P_M(X \mid Y)}{P_M(X \mid Y_z)} \, dX$$

This is well-defined, since both distributions lie in the same sample space despite the difference of dimensionality between $Y$ and $Y_z$. Moreover, since $Y$ is itself a random vector, our criterion becomes the expected KL divergence

$$\big\langle \mathrm{KL}(P_M(X \mid Y) \,\|\, P_M(X \mid Y_z)) \big\rangle_{P_M(Y)}$$

where $\langle \cdot \rangle$ denotes expectation. Our goal is to minimize this function with respect to $z$.
Rearranging this expression to drop all constants that do not depend on $z$, and multiplying it by $-1$ to get a maximization problem, we obtain the problem of finding $z^\star$ such that

$$z^\star = \operatorname*{argmax}_z \Big\{ \langle \log P_M(Y_z \mid X) \rangle_{P_M(X, Y_z)} - \langle \log P_M(Y_z) \rangle_{P_M(Y_z)} \Big\} = \operatorname*{argmax}_z \Big\{ \sum_{i=1}^p z_i \langle \log P_M(Y_i \mid X) \rangle_{P_M(X, Y_i)} + H_M(Y_z) \Big\} \equiv \operatorname*{argmax}_z F_M(z)$$

subject to $\sum_{i=1}^p z_i = K$ for a choice of $K$, and $z_i \in \{0, 1\}$. Here $H_M(\cdot)$ denotes the entropy of a distribution parameterized by $M$. Notice we used the assumption that indicators are mutually independent given $X$. There is an intuitive appeal in having a joint entropy term to reward not only marginal relationships between indicators and latent variables, but also selections that are jointly diverse. Notice that optimizing this objective function turns out to be equivalent to minimizing the conditional entropy of the latent variables given $Y_z$. Motivating conditional entropy from a more fundamental principle illustrates that other functions can be obtained by changing the divergence.

3 Approaches for Approximate Optimization

The problem of optimizing $F_M(z)$ subject to the constraints $\sum_{i=1}^p z_i = K$, $z_i \in \{0, 1\}$, is hard not only because of its combinatorial nature, but also because of the entropy term. This term needs to be approximated, and the nature of the approximation should depend on the form taken by $M$. We will assume that it is possible to efficiently compute any marginals of $P_M(Y)$ of modest dimensionality (say, 10 dimensions). This is the case, for instance, in the probit model for binary data:

$$X \sim N(0, \Sigma), \qquad Y_i^\star \sim N(\Lambda_i^T X + \lambda_{i;0},\, 1), \qquad Y_i = 1 \text{ if } Y_i^\star > 0, \text{ and } 0 \text{ otherwise}$$

where $N(m, S)$ is the multivariate Gaussian distribution with mean $m$ and covariance matrix $S$. The probit model is one of the most common latent variable models for questionnaire data (Bartholomew et al., 2008), with a straightforward extension to ordinal data.
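For very small $p$, the objective $F_M(z)$ can be maximized exactly by enumeration, which is useful as a reference point for the approximations developed in the rest of this section. A toy sketch over binary indicators with a tabulated joint (the setup and all names are ours, not the paper's code):

```python
import itertools
import numpy as np

def entropy_marginal(p_joint, subset):
    """Entropy H(Y_z) of the marginal over an indicator subset, for a
    joint probability table p_joint of shape (2,)*p over binary Y."""
    axes = tuple(i for i in range(p_joint.ndim) if i not in subset)
    marg = p_joint.sum(axis=axes).ravel()
    marg = marg[marg > 0]
    return -(marg * np.log(marg)).sum()

def best_subset(p_joint, loglik_terms, K):
    """Exhaustive maximization of F_M(z) = sum_i z_i E[log P(Y_i|X)]
    + H(Y_z) over subsets of size K. Feasible only for small p; the
    paper's Section 3 develops scalable approximations instead."""
    p = p_joint.ndim
    def F(s):
        return sum(loglik_terms[i] for i in s) + entropy_marginal(p_joint, s)
    return max(itertools.combinations(range(p), K), key=F)
```

With two perfectly correlated indicators and one independent one (and equal likelihood terms), the entropy term makes the maximizer prefer a jointly diverse pair, as the discussion above anticipates.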
In this model, marginals for a few dozen variables can be obtained efficiently, since this corresponds to calculating multivariate Gaussian probabilities (Genz, 1992). Parameters can be fit by a variety of methods (Hahn et al., 2010). We also assume that $M$ allows for the computation of $\langle \log P_M(Y_i \mid X) \rangle_{P_M(X, Y_i)}$ at little cost. Again, in the binary probit model this is simple, since it only requires integrating away a single binary variable $Y_i$ and a univariate Gaussian $\Lambda_i^T X$.

3.1 Gaussian Entropy

One approximation to $F_M(z)$ is to replace its entropy term by the corresponding entropy from some Gaussian distribution $P_N(Y_z)$. The entropy of a Gaussian distribution is proportional to the logarithm of the determinant of its covariance matrix, and hence can be computed in $O(p^3)$ steps. This Gaussian can be chosen as the one closest to $P_M(Y_z)$ in a $\mathrm{KL}(P_M \,\|\, P_N)$ sense: that is, the one with the same first and second moments as $P_M(Y_z)$. In our case, computing these moments can be done deterministically (up to numerical error) using standard bivariate quadrature methods; no expectation propagation (Minka, 2001) is necessary. The corresponding objective function is

$$F_{M;N}(z) \equiv \sum_{i=1}^p z_i \langle \log P_M(Y_i \mid X) \rangle_{P_M(X, Y_i)} + 0.5 \log |\Sigma_z|$$

where $\Sigma_z$ is the covariance matrix of $Y_z$, which for binary and ordinal data has a sensible interpretation. This function is also an upper bound on the exact function $F_M(z)$, since the Gaussian is the distribution with the largest entropy for a given mean vector and covariance matrix. The resulting function is non-linear in $z$. In our experiments, we optimize for $z$ using a greedy scheme: for all possible pairs $(i, j)$ such that $z_i = 1$ and $z_j = 0$, we swap their values (so that $\sum_i z_i$ is always $K$). We choose the pair with the highest increase in $F_{M;N}(z)$ and repeat the process until convergence.
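The Gaussian-entropy objective and the greedy swap scheme can be sketched as follows, with `c` standing for the vector of precomputed terms $\langle \log P_M(Y_i \mid X) \rangle$ (all names are ours):

```python
import numpy as np

def gaussian_entropy_objective(z, c, Sigma):
    """F_{M;N}(z): linear likelihood terms plus 0.5*log|Sigma_z|,
    the Gaussian entropy of the selected block up to constants."""
    idx = np.flatnonzero(z)
    _, logdet = np.linalg.slogdet(Sigma[np.ix_(idx, idx)])
    return c[idx].sum() + 0.5 * logdet

def greedy_swaps(z0, c, Sigma):
    """Greedy scheme of Section 3.1: repeatedly swap one selected and
    one unselected indicator (keeping sum(z) = K) while the objective
    improves. An illustrative sketch, not the paper's code."""
    z = z0.copy()
    improved = True
    while improved:
        improved = False
        best, best_val = None, gaussian_entropy_objective(z, c, Sigma)
        for i in np.flatnonzero(z):
            for j in np.flatnonzero(1 - z):
                z[i], z[j] = 0, 1                 # try the swap
                val = gaussian_entropy_objective(z, c, Sigma)
                if val > best_val + 1e-12:
                    best, best_val = (i, j), val
                z[i], z[j] = 1, 0                 # undo
        if best is not None:
            z[best[0]], z[best[1]] = 0, 1
            improved = True
    return z
```

Starting from two nearly collinear indicators, a single swap moves the selection to a better-conditioned covariance block, since $\log|\Sigma_z|$ penalizes redundant pairs.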
3.2 Entropy with Bounded Neighborhoods

An alternative bound can be derived from a standard fact in information theory: $H(Y \mid S) \le H(Y \mid S')$ for $S' \subseteq S$, where $H(\cdot \mid \cdot)$ denotes conditional entropy. This was exploited by Globerson and Jaakkola (2007) to define an upper bound on the entropy of a distribution as follows: consider a permutation $e$ of the set $\{1, 2, \ldots, p\}$, with $e(i)$ being the $i$-th element of $e$. Denote by $e(1:i)$ the first $i$ elements of this permutation (an empty set if $i < 1$). Moreover, let $N(e, i)$ be a subset of $e(1:i-1)$. For a given set of variables $Y = \{Y_1, Y_2, \ldots, Y_p\}$ the following bound holds:

$$H(Y_1, Y_2, \ldots, Y_p) = \sum_{i=1}^p H(Y_{e(i)} \mid Y_{e(1:i-1)}) \le \sum_{i=1}^p H(Y_{e(i)} \mid Y_{N(e,i)}) \qquad (1)$$

If each set $N(e, i)$ is no larger than some constant $D$, then this bound can be computed in $O(p \cdot 2^D)$ steps for binary probit models. The bound holds for any choice of $e$, but we want it to be as tight as possible so that it gets weighted in a reasonable way against the other terms in $F_M(\cdot)$. Since the entropy function is decomposable as a sum of functions that depend on $i$ and $N(e, i)$ only, one can minimize this bound with respect to $e$ by using permutation optimization methods such as (Jaakkola et al., 2010). In our implementation, we use a method similar to Teyssier and Koller (2005) that shuffles neighboring entries of $e$ to generate candidates, chooses the optimal $N(e, i)$ for each $i$ given the candidate $e$, and picks as the next permutation the candidate $e$ with the greatest decrease in the bound. Notice that a permutation choice $e$ and neighborhood choices $N(e, i)$ define a Bayesian network where $N(e, i)$ are the parents of $Y_{e(i)}$. Therefore, if this Bayesian network model provides a good approximation to $P_M(Y)$, the bound will be reasonably tight. Given $e$, we will further relax this bound with the goal of obtaining an integer programming formulation for the problem of optimizing an upper bound to $F_M(z)$.
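The bound of Eq. (1) itself is straightforward to evaluate from a tabulated joint over a few binary indicators; an illustrative sketch (not the paper's implementation):

```python
import numpy as np

def cond_entropy(p_joint, target, given):
    """H(Y_target | Y_given) from a joint table over binary variables,
    computed as H(target, given) - H(given)."""
    keep = sorted(set(given) | {target})
    axes = tuple(i for i in range(p_joint.ndim) if i not in keep)
    marg = p_joint.sum(axis=axes)
    def H(t):
        t = np.asarray(t).ravel()
        t = t[t > 0]
        return -(t * np.log(t)).sum()
    marg_given = marg.sum(axis=keep.index(target))
    return H(marg) - H(marg_given)

def neighborhood_bound(p_joint, e, N):
    """Upper bound of Eq. (1): sum_i H(Y_e(i) | Y_N(e,i)), with each
    N[i] a subset of the predecessors e[:i]. Names are ours."""
    return sum(cond_entropy(p_joint, e[i], N[i]) for i in range(len(e)))
```

When the neighborhoods recover the true parents the bound is tight; dropping them relaxes it upward, which is the trade-off the permutation search tries to control.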
For any given $z$, we define the local term $H_L(z, i)$ as

$$H_L(z, i) \equiv H_M(Y_{e(i)} \mid Y_{z \cap N(e,i)}) = \sum_{S \in \mathcal{P}(N(e,i))} \Big( \prod_{j \in S} z_j \Big) \Big( \prod_{k \in N(e,i) \setminus S} (1 - z_k) \Big) \, H_M(Y_{e(i)} \mid S) \qquad (2)$$

where $\mathcal{P}(\cdot)$ denotes the power set of a set. The new approximate objective function becomes

$$F_{M;D}(z) \equiv \sum_{i=1}^p z_i \langle \log P_M(Y_i \mid X) \rangle_{P_M(X, Y_i)} + \sum_{i=1}^p z_{e(i)} H_L(z, i) \qquad (3)$$

Notice that $H_L(z, i)$ is still an upper bound on $H_M(Y_{e(i)} \mid Y_{e(1:i-1)})$. The intuition is that we are bounding $H_M(Y_z)$ by the entropy of a Bayesian network where a vertex $Y_{e(i)}$ is included if $z_{e(i)} = 1$, with corresponding parents given by $Y_z \cap Y_{N(e,i)}$. This is a well-defined Bayesian network for any choice of $z$. The shortcoming is that ideally we would like this Bayesian network to be the actual marginal of the model given by $e$ and $N(e, i)$. It is not: if the network implied by $e$ and $N(e, i)$ was, for instance, $Y_1 \to Y_2 \to Y_3$, the choice of $z = (1, 0, 1)$ would result in the entropy of the disconnected graph $\{Y_1, Y_3\}$, while the true marginal would correspond instead to the graph $Y_1 \to Y_3$. However, our simplified marginalization has the advantage of avoiding an intractable problem. Moreover, it allows us to redefine the problem as an integer linear program (ILP). Each product $z_{e(i)} \prod_j z_j \prod_k (1 - z_k)$ appearing in (3) results in a sum of $O(2^D)$ terms, each of which has (up to a sign) the form $q_M \equiv \prod_{m \in M} z_m$ for some set $M$. It is still the case that $q_M \in \{0, 1\}$. Therefore, objective function (3) can be interpreted as being linear on a set of binary variables $\{\{z\}, \{q\}\}$. We further need to enforce the constraints coming from

$$q_M = 1 \Rightarrow \{\forall m \in M,\ z_m = 1\}; \qquad q_M = 0 \Rightarrow \{\exists m \in M \text{ s.t. } z_m = 0\}$$
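A quick numerical check of the standard linear encoding of these implications (illustrative only; the actual constraints are stated next):

```python
import itertools

def glover_woolsey_feasible(z, q):
    """Check the linear constraints that linearize q = prod(z):
    q <= z_m for all m, and sum(z) - q <= len(z) - 1."""
    return all(q <= zm for zm in z) and sum(z) - q <= len(z) - 1

# For binary z and q, the constraints hold exactly when q equals the
# product of the z_m, so the linearization is exact:
for z in itertools.product([0, 1], repeat=3):
    prod = int(all(z))
    assert glover_woolsey_feasible(z, prod)
    assert not glover_woolsey_feasible(z, 1 - prod)
```

This exactness is what lets each product term in (3) be replaced by a fresh binary variable without changing the optimum of the ILP.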
In our experiments in Section 5, we were able to solve essentially all of such ILPs exactly using linear programming relaxations with branch-and-bound. 3.3 Entropy with Tree-Structured Bounds The previous bound simplifies marginalization, which might badly overestimate entropies where the corresponding Yz are uniformly spread out in permutation e. We now propose a different type of bound which treats different marginalizations on an equal footing. It comes from the following observation: since H(Ye(i) | Ye(1:i−1)) is less than or equal to any conditional entropy H(Ye(i) | Yj) for j ∈e(1 : i −1), we have that the tighest bound given by singleton conditioning sets is H(Ye(i) | Ye(1:i−1)) ≤ min j∈e(1:i−1) HM(Ye(i) | Yj), resulting in the objective function FM;tree(z) ≡ p X i=1 zi ⟨log(PM(Yi | X))⟩PM(X,Yi)+ p X i=1 ze(i)· min {Yj∈Ye(1:i−1)∩Yz} H(Ye(i) | Yj) (4) where min{Yj∈Ye(1:i−1)∩Yz} H(Ye(i) | Yj) ≡H(Ye(i)) if Ye(1:i−1) ∩Yz = ∅. The intuition is that we are bounding the exact entropy using the entropy of a directed tree rooted at Yez(1), the first element of Yz according to e. That is, all variables are marginally dependent in the approximation regardless of what z is, and for a fixed z the tree is, by construction, the one obtained by the usual greedy algorithm of adding edges corresponding to the next legal pair of vertices with maximum mutual information (following an ordering, in this case). It turns out we can also write (4) as a linear objective function of a polynomial number of 0\1 variables and constraints. Let ¯zi ≡1 −zi. Let H(1) i , H(2) i , . . . , H(i−1) i be the values of set {HM(Ye(i) | Ye(1)), . . . , HM(Ye(i) | Ye(i−1))} sorted in ascending order, with z(1) i , . . . , z(i−1) i being the corresponding permutation of {ze(1), . . . , ze(i−1)}. We have min{Yj∈Ye(1:i−1)∩Yz} H(Ye(i) | Yj) = z(1) i H(1) i + ¯z(1) i z(2) i H(2) i + ¯z(1) i ¯z(2) i z(3) i H(3) i + . . . ¯z(1) i . . . 
¯z(i−2) i z(i−1) i H(i−1) i + Qi−1 j=1 ¯z(j) i HM(Ye(i)) ≡ Pi−1 j=1 q(j) i H(j) i + q(i) i HM(Ye(i)) where q(j) i ≡z(j) i Qj−1 k=1 ¯z(k) i , and also a binary 0\1 variable. Plugging this expression into (4) gives a linear objective function in this extended variable space. The corresponding constraints are q(j) i = 1 ⇒{∀zm ∈{¯z(1) i , . . . , ¯z(j−1) i , z(j) i }, zm = 1} q(j) i = 0 ⇒{∃zm ∈{¯z(1) i , . . . , ¯z(j−1) i , z(j) i } s.t. zm = 0} which, as shown in the previous section, can be written as linear constraints (substituting each ¯zi by 1 −zi). The total number of constraints is however O(p3), which can be expensive, and often a linear relaxation procedure with branch-and-bound fails to provide guarantees of optimality. 3.4 The Reliability Score Finally, it is important to design cheap, effective criteria whose maxima correlate with the maxima of FM(·). Empirically, we have found high quality selections in binary probit models using the solution to the problem maximize FM;R(z) = p X i=1 wizi, subject to zi ∈{0, 1}, p X i=1 zi = K 5 where wi = ΛT i ΣΛi. This can be solved by picking the corresponding indicators with the highest K weights wi. Assuming a probit model where the measurement error for each Y ⋆ i has the same variance of 1, this score is related to the “reliability” of an indicator. Simply put, the reliability Ri of an indicator is the proportion of its variance that is due to the latent variables (Bollen, 1989, Chapter 6): Ri = wi/(wi + 1) for each Y ⋆ i . There is no current theory linking this solution to the problem of maximizing FM(·): since there is no entropy term, we can set an adversarial problem to easily defeat this method. For instance, this happens in a model where the K indicators of highest reliability all measure the same latent variable Xi and nothing else – much information about Xi would be preserved, but little about other variables. 
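The reliability-score heuristic amounts to computing $w_i = \Lambda_i^T \Sigma \Lambda_i$ and keeping the top $K$ indicators. A sketch (function and variable names are ours):

```python
import numpy as np

def reliability_selection(Lambda, Sigma, K):
    """Reliability-score heuristic of Section 3.4.
    Lambda: (p, q) loading matrix with rows Lambda_i;
    Sigma: (q, q) latent covariance. Returns the 0/1 selection
    vector z and the weights w_i = Lambda_i^T Sigma Lambda_i."""
    w = np.einsum('iq,qr,ir->i', Lambda, Sigma, Lambda)
    keep = np.argsort(-w)[:K]          # K largest weights
    z = np.zeros(len(Lambda), dtype=int)
    z[keep] = 1
    return z, w

# Under unit error variance, the reliability of indicator i is
# R_i = w_i / (w_i + 1), monotone in w_i, so ranking by w_i and
# ranking by reliability select the same indicators.
```

As the text notes, this ignores redundancy between indicators, so it can be defeated when the most reliable indicators all measure the same latent variable.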
In any case, we found this criterion to be fairly competitive even if at times it produces extreme failures. An honest account of more sophisticated selection mechanisms cannot be performed without including it, as we do in Section 5.

4 Related Work

The literature on survey analysis, in the context of latent variable models, contains several examples of guidelines on how to simplify questionnaires (sometimes described as providing "shortened versions" of scales). Much of the literature, however, consists of general guidelines and rules-of-thumb for accomplishing this task (e.g., Richins, 2004; Stanton et al., 2002). One possible exception is Leite et al. (2008), which uses different model fitness criteria with respect to a given dataset to score candidate solutions, along with an expensive combinatorial optimization method. This conflates model selection and questionnaire thinning, and there is no theory linking the score functions to the amount of information preserved. In the machine learning and statistics literature, there is a large body of research in active learning, which is related to our task. One of the closest approaches is that of Liang et al. (2009), which casts the classical problem of measurement selection within a Bayesian graphical model perspective. In that work, one has to choose which measurements to add. This is done sequentially, partially motivated by problems where collecting new measurements can be done relatively quickly and cheaply (say, by paying graduate students to annotate text data), so that the choice of the next measurement can make use of fresh data. In our case, it might not be realistic to expect that we can perform a large number of iterations of data collection, and as such the task of reducing the number of measurements from a large initial collection might be more relevant in practice. Liang et al. also focus on (multivariate) supervised learning instead of purely unsupervised learning.
In statistics there is also a considerable body of literature on sufficient dimension reduction and its sparse variants (e.g., Chen et al., 2010). Such techniques create a bottleneck between two sets of variables in a regression problem (say, the mapping from Y to X) while eliminating some of the input variables. In principle one might want to adapt such models to take a latent variable model M as the target mapping. Besides some loss of interpretability, though, the computational implications might be problematic. Moreover, this framework has another free parameter, corresponding to the dimensionality of the bottleneck, that has to be set. It is not clear how this parameter, along with a choice of sparsity level, would interact with a fixed choice K of indicators to be kept.

5 Experiments

In this section, we first describe some synthetic experiments to provide insights about the different methods, followed by a brief description of a case study. In all of the experiments, the target models M are binary probit. We set the neighborhood parameter D for $F_{M;D}(\cdot)$ to 9. The ordering e for the tree-structured method is obtained by the same greedy search of Section 3.2, where now the score is the average of all H(Yi | Yj) for all Yj preceding Yi. Finally, all ordering optimization methods were initialized by sorting indicators in descending order of their reliability scores, and the initial solution for all entropy-based optimization methods was given by the reliability score solution of Section 3.4. The integer program solver GUROBI 4.02 was used in all experiments.

5.1 Synthetic studies

We start with a batch of synthetic experiments. We generated 80 models with 40 indicators and 10 latent variables1.
We further split such models into two groups: in 40 of them, we select a target reliability r_i for each indicator Y_i, uniformly at random from the interval [0.4, 0.7]. We then rescale the coefficients Λ_i such that the reliability (defined in Section 3.4) of the respective Y*_i becomes r_i. For the remaining 40 models, we sample r_i uniformly at random from the interval [0.2, 0.4]. We perform two choices of subsets: sets Y_z of size 20 and of size 32 (50% and 80% of the total number of indicators). Our evaluation is as follows: since the expected value is perhaps the most common functional of the posterior distribution P_M(X | Y), we calculate the expected value of the latent variables for a sample {y^(1), y^(2), ..., y^(1000)} of size 1000 taken from the respective synthetic models.

¹Details on the model generation: we generate 40 models by sampling the latent covariance matrix from an inverse Wishart distribution with 10 degrees of freedom and scale matrix 10I, I being the identity matrix.

Figure 2: (a) A comparison of the bounded neighborhood (N), tree-based (T) and Gaussian (G) methods with respect to a random solution (R) and the reliability score (S). (b) A similar comparison for models where indicators are more weakly correlated to the latent variables than in (a). (c) and (d) Scatterplots of the average absolute deviance for the tree-based method (horizontal axis) against the reliability method (vertical axis). The bottom-left clouds correspond to the K = 32 trials.
This is done for the full set of 40 indicators, and for each set chosen by our four criteria: for each data point i and each objective function F, we evaluate the average distance

d_F^{(i)} ≡ (1/10) Σ_{j=1}^{10} | x̂_j^{(i)} − x̂_{j;F}^{(i)} |.

In this expression, x̂_j^{(i)} is the expected value of X_j obtained by conditioning on all indicators, while x̂_{j;F}^{(i)} is the one obtained with the subset selected by optimizing F. We denote by m_F the average of {d_F^{(1)}, d_F^{(2)}, ..., d_F^{(1000)}}. Finally, we compare the three main methods with respect to the reliability score method using the improvement ratio statistic s_F = 1 − m_F / m_{F_{M;R}}, the proportion of average error decrease with respect to the reliability score. In order to provide a sense of scale for the difficulty of each problem, we compute the same ratios with respect to a random selection, obtained by choosing K = 20 and K = 32 indicators uniformly at random. Figure 2 provides a summary of the results. In Figure 2(a), each boxplot shows the distribution over the 40 probit models where reliabilities were sampled between [0.4, 0.7] (the "high signal" models). The first three boxplots show the scores s_F of the bounded neighborhood, tree-structured and Gaussian methods, respectively, compared against random selections. The last three boxplots are comparisons against the reliability heuristic. The tree-based method easily beats the Gaussian method, with about 75% of its outcomes being better than the median Gaussian outcome. The Gaussian approach is also less reliable, with results showing a long lower tail. Although the reliability score is on average a good approach, in only a handful of cases was it better than the tree-based method, and by considerably smaller magnitudes compared to the upper tails in the tree-based outcome distribution. A separate panel (Figure 2(b)) is shown for the 40 models with lower reliabilities.
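The deviance and improvement-ratio statistics above are straightforward to compute once the posterior means are in hand. The sketch below is illustrative only; the array names and the explicit baseline argument are our assumptions, not the authors' code.

```python
import numpy as np

def mean_deviance(xhat_full, xhat_subset):
    """m_F: average over data points of d_F^(i), the mean absolute
    difference between posterior means of the latent variables computed
    from all indicators (xhat_full) and from a selected subset
    (xhat_subset). Both arrays have shape (n_points, n_latent)."""
    d_F = np.abs(xhat_full - xhat_subset).mean(axis=1)  # d_F^(i), one per data point
    return d_F.mean()                                    # m_F

def improvement_ratio(m_F, m_reliability):
    """s_F = 1 - m_F / m_{F_{M;R}}: proportion of average error decrease
    relative to the reliability-score baseline m_reliability."""
    return 1.0 - m_F / m_reliability
```

A positive s_F means the candidate criterion reconstructs the latent posterior means more faithfully than the reliability-score baseline; the negative s_{F;random} entries in the case-study table arise exactly this way.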
In this case, all methods show stronger improvements over the reliability score, although the difference between the tree method and the Gaussian one is now less clear. Finally, in panels (c) and (d) we present scatterplots of the average deviances m_F for the tree-based method against the reliability score. The two clouds correspond to the solutions with 20 and 32 indicators. Notice that in the vast majority of cases the tree-based method does better.

¹(continued) We then rescale the matrix to make all variances equal to 1. We also generate 40 models using as the inverse Wishart scale matrix the correlation matrix with all off-diagonal entries set to 0.5. Coefficients linking indicators to latent variables were set to zero with probability 0.8, and sampled from a standard Gaussian otherwise. If some latent variable ends up with no child, or an indicator ends up with no parent, we uniformly choose one child/parent to be linked to it. Code to fully replicate the synthetic experiments is available at HTTP://WWW.HOMEPAGES.UCL.AC.UK/∼UCGTRBD/.

5.2 Case study

The National Health Service (NHS) is the public health system of the United Kingdom. In 2009, a major survey called the National Health Service National Staff Survey was deployed with the goal of "collect(ing) staff views about working in their local NHS trust" (Care Quality Commission and Aston University, 2010). A questionnaire of 206 items was filled in by 156,951 respondents. We designed a measurement model based on the structure of the questionnaire and fit it by the posterior expected value estimator. Gaussian and inverse Wishart priors were used, along with Gibbs sampling and a random subset of 50,000 respondents. See the Supplementary Material for more details. Several items in this survey asked the NHS staff member to provide degrees of agreement on a Likert scale (Bartholomew et al., 2008) to questions such as

• ...have you ever come to work despite not feeling well enough to perform ...?
• Have you felt pressure from your manager to come to work?
• Have you felt pressure from colleagues to come to work?
• Have you put yourself under pressure to come to work?

as different probes into an unobservable, self-assessed level of work pressure. We preprocessed and binarized the data to first narrow it down to 63 questions. We compare selections of 32 (50%) and 50 (80%) items using the same statistics as in the previous section.

         s_{F;D}   s_{F;tree}   s_{F;N}   s_{F;random}   m_{F;tree}   m_{F;R}
K = 32    7.8%       6.3%        0.01%      −16.0%         0.238       0.255
K = 50   10.5%      11.9%        7.6%       −0.05%         0.123       0.140

Although the gains were relatively small (as measured by the difference between the reconstruction errors m_{F;tree} − m_{F;R} and the good performance of a random selection), we showed that: (i) we do improve results over a popular measure of indicator quality; (ii) we do provide some guarantees about the diversity of the selected items via an information-theoretic measure with an entropy term, with theoretically sound approximations to such a measure. For more details on the preprocessing, and for more insight into the different selections, please refer to the Supplementary Material.

6 Conclusion

There are problems where one posits that the relevant information is encoded in the posterior distribution of a set of latent variables. Questionnaires (and other instruments) can be used as evidence to generate this posterior, but there is a cost associated with complex questionnaires. One problem is how to simplify such instruments of measurement. To the best of our knowledge, we provide the first formal account of how to solve it. Nevertheless, we would like to stress that there is no substitute for common sense. While the tools we provide here can be used for a variety of analyses, from deploying simpler questionnaires to sensitivity analysis, the value and cost of keeping particular indicators can go much beyond the information contained in the latent posterior distribution.
How to combine this criterion with other domain-dependent criteria is a matter of future research. Another problem of importance is how to deal with model specification and transportability across studies. A measurement model built for a very specific population of respondents might transfer poorly to another group, and therefore taking model uncertainty into account will be important. The Bayesian setup discussed by Liang et al. (2009) might provide some directions on this issue. Also, there is further structure in real-world questionnaires that we are not exploiting in the current work. Namely, it is not uncommon to have questionnaires with branching questions and other dynamic behaviour more commonly associated with Web-based surveys and/or longitudinal studies. Finally, hybrid approaches combining the bounded neighborhood and the tree-structured methods, along with more sophisticated ordering optimization procedures and the use of other divergence measures and determinant-based criteria (e.g., Kulesza and Taskar, 2011), will also be studied in the future.

Acknowledgments

The author would like to thank James Cussens and Simon Lacoste-Julien for helpful discussions, as well as the anonymous reviewers for further comments.

References

D. Bartholomew, F. Steele, I. Moustaki, and J. Galbraith. Analysis of Multivariate Social Science Data, 2nd edition. Chapman & Hall, 2008.
C. Bishop. Latent variable models. In M. Jordan (editor), Learning in Graphical Models, pages 371–403, 1998.
C. Bishop. Pattern Recognition and Machine Learning. Springer, 2009.
K. Bollen. Structural Equations with Latent Variables. John Wiley & Sons, 1989.
R. Carroll, D. Ruppert, and L. Stefanski. Measurement Error in Nonlinear Models. Chapman & Hall, 1995.
X. Chen, C. Zou, and R. Cook. Coordinate-independent sparse sufficient dimension reduction and variable selection. Annals of Statistics, 38:3696–3723, 2010.
Care Quality Commission and Aston University.
Aston Business School, National Health Service National Staff Survey, 2009 [computer file]. Colchester, Essex: UK Data Archive [distributor], October 2010. Available at HTTPS://WWW.ESDS.AC.UK, SN: 6570, 2010.
A. Genz. Numerical computation of multivariate normal probabilities. Journal of Computational and Graphical Statistics, 1:141–149, 1992.
A. Globerson and T. Jaakkola. Approximate inference using conditional entropy decompositions. Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AISTATS 2007), pages 141–149, 2007.
F. Glover and E. Woolsey. Converting the 0-1 polynomial programming problem to a 0-1 linear program. Operations Research, 22:180–182, 1974.
P. Hahn, J. Scott, and C. Carvalho. A sparse factor-analytic probit model for congressional voting patterns. Duke University Department of Statistical Science, Discussion Paper 2009-22, 2010.
T. Jaakkola, D. Sontag, A. Globerson, and M. Meila. Learning Bayesian network structure using LP relaxations. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS 2010), pages 366–373, 2010.
A. Kulesza and B. Taskar. k-DPPs: fixed-size determinantal point processes. Proceedings of the 28th International Conference on Machine Learning (ICML), pages 1193–1200, 2011.
W. Leite, I-C. Huang, and G. Marcoulides. Item selection for the development of short forms of scales using an ant colony optimization algorithm. Multivariate Behavioral Research, 43:411–431, 2008.
P. Liang, M. Jordan, and D. Klein. Learning from measurements in exponential families. Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), 2009.
T. Minka. A family of algorithms for approximate Bayesian inference. PhD Thesis, Massachusetts Institute of Technology, 2001.
J. Palomo, D. Dunson, and K. Bollen. Bayesian structural equation modeling. In Sik-Yum Lee (ed.), Handbook of Latent Variable and Related Models, pages 163–188, 2007.
M. Richins.
The material values scale: Measurement properties and development of a short form. The Journal of Consumer Research, 31:209–219, 2004.
J. Stanton, E. Sinar, W. Balzer, and P. Smith. Issues and strategies for reducing the length of self-reported scales. Personnel Psychology, 55:167–194, 2002.
M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. Proceedings of the Twenty-first Conference on Uncertainty in AI (UAI '05), pages 584–590, 2005.
Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials

Philipp Krähenbühl, Computer Science Department, Stanford University, philkr@cs.stanford.edu
Vladlen Koltun, Computer Science Department, Stanford University, vladlen@cs.stanford.edu

Abstract

Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.

1 Introduction

Multi-class image segmentation and labeling is one of the most challenging and actively studied problems in computer vision. The goal is to label every pixel in the image with one of several predetermined object categories, thus concurrently performing recognition and segmentation of multiple object classes. A common approach is to pose this problem as maximum a posteriori (MAP) inference in a conditional random field (CRF) defined over pixels or image patches [8, 12, 18, 19, 9]. The CRF potentials incorporate smoothness terms that maximize label agreement between similar pixels, and can integrate more elaborate terms that model contextual relationships between object classes. Basic CRF models are composed of unary potentials on individual pixels or image patches and pairwise potentials on neighboring pixels or patches [19, 23, 7, 5].
The resulting adjacency CRF structure is limited in its ability to model long-range connections within the image and generally results in excessive smoothing of object boundaries. In order to improve segmentation and labeling accuracy, researchers have expanded the basic CRF framework to incorporate hierarchical connectivity and higher-order potentials defined on image regions [8, 12, 9, 13]. However, the accuracy of these approaches is necessarily restricted by the accuracy of unsupervised image segmentation, which is used to compute the regions on which the model operates. This limits the ability of region-based approaches to produce accurate label assignments around complex object boundaries, although significant progress has been made [9, 13, 14].

Figure 1: Pixel-level classification with a fully connected CRF. (a) Input image from the MSRC-21 dataset. (b) The response of unary classifiers used by our models. (c) Classification produced by the Robust P^n CRF [9]. (d) Classification produced by MCMC inference [17] in a fully connected pixel-level CRF model; the algorithm was run for 36 hours and only partially converged for the bottom image.

In this paper, we explore a different model structure for accurate semantic segmentation and labeling. We use a fully connected CRF that establishes pairwise potentials on all pairs of pixels in the image. Fully connected CRFs have been used for semantic image labeling in the past [18, 22, 6, 17], but the complexity of inference in fully connected models has restricted their application to sets of hundreds of image regions or fewer. The segmentation accuracy achieved by these approaches is again limited by the unsupervised segmentation that produces the regions. In contrast, our model connects all
(e) Classification produced by our inference algorithm in the fully connected model in 0.2 seconds. (Figure 1, continued)

pairs of individual pixels in the image, enabling greatly refined segmentation and labeling. The main challenge is the size of the model, which has tens of thousands of nodes and billions of edges even on low-resolution images. Our main contribution is a highly efficient inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels in an arbitrary feature space. The algorithm is based on a mean field approximation to the CRF distribution. This approximation is iteratively optimized through a series of message passing steps, each of which updates a single variable by aggregating information from all other variables. We show that a mean field update of all variables in a fully connected CRF can be performed using Gaussian filtering in feature space. This allows us to reduce the computational complexity of message passing from quadratic to linear in the number of variables by employing efficient approximate high-dimensional filtering [16, 2, 1]. The resulting approximate inference algorithm is sublinear in the number of edges in the model. Figure 1 demonstrates the benefits of the presented algorithm on two images from the MSRC-21 dataset for multi-class image segmentation and labeling. Figure 1(d) shows the results of approximate MCMC inference in fully connected CRFs on these images [17]. The MCMC procedure was run for 36 hours and only partially converged for the bottom image. We have also experimented with graph cut inference in the fully connected models [11], but it did not converge within 72 hours. In contrast, a single-threaded implementation of our algorithm produces a detailed pixel-level labeling in 0.2 seconds, as shown in Figure 1(e). A quantitative evaluation on the MSRC-21 and the PASCAL VOC 2010 datasets is provided in Section 6.
To the best of our knowledge, we are the first to demonstrate efficient inference in fully connected CRF models at the pixel level.

2 The Fully Connected CRF Model

Consider a random field X defined over a set of variables {X_1, ..., X_N}. The domain of each variable is a set of labels L = {l_1, l_2, ..., l_k}. Consider also a random field I defined over variables {I_1, ..., I_N}. In our setting, I ranges over possible input images of size N and X ranges over possible pixel-level image labelings. I_j is the color vector of pixel j and X_j is the label assigned to pixel j. A conditional random field (I, X) is characterized by a Gibbs distribution

P(X | I) = (1 / Z(I)) exp( − Σ_{c ∈ C_G} φ_c(X_c | I) ),

where G = (V, E) is a graph on X and each clique c in a set of cliques C_G in G induces a potential φ_c [15]. The Gibbs energy of a labeling x ∈ L^N is E(x | I) = Σ_{c ∈ C_G} φ_c(x_c | I). The maximum a posteriori (MAP) labeling of the random field is x* = argmax_{x ∈ L^N} P(x | I). For notational convenience we will omit the conditioning in the rest of the paper and use ψ_c(x_c) to denote φ_c(x_c | I). In the fully connected pairwise CRF model, G is the complete graph on X and C_G is the set of all unary and pairwise cliques. The corresponding Gibbs energy is

E(x) = Σ_i ψ_u(x_i) + Σ_{i<j} ψ_p(x_i, x_j),    (1)

where i and j range from 1 to N. The unary potential ψ_u(x_i) is computed independently for each pixel by a classifier that produces a distribution over the label assignment x_i given image features. The unary potential used in our implementation incorporates shape, texture, location, and color descriptors and is described in Section 5. Since the output of the unary classifier for each pixel is produced independently from the outputs of the classifiers for other pixels, the MAP labeling produced by the unary classifiers alone is generally noisy and inconsistent, as shown in Figure 1(b).
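For a small instance, the Gibbs energy in Equation 1 can be evaluated directly. The sketch below is a naive illustration under simplifying assumptions (a Potts compatibility and a single isotropic Gaussian kernel); the function name and arguments are ours, not the paper's.

```python
import numpy as np

def gibbs_energy(unary, labels, feats, w=1.0, prec=1.0):
    """Gibbs energy E(x) = sum_i psi_u(x_i) + sum_{i<j} psi_p(x_i, x_j)
    of a labeling for a small fully connected pairwise CRF, with a Potts
    compatibility mu(a, b) = [a != b] and a single isotropic Gaussian
    kernel k(f_i, f_j) = exp(-0.5 * prec * |f_i - f_j|^2).
    unary: (N, L) unary potentials psi_u; labels: (N,) integer labeling x;
    feats: (N, d) feature vectors."""
    N = labels.shape[0]
    E = unary[np.arange(N), labels].sum()      # unary term
    for i in range(N):
        for j in range(i + 1, N):              # pairwise term over i < j
            if labels[i] != labels[j]:         # Potts: only disagreements cost
                d2 = np.sum((feats[i] - feats[j]) ** 2)
                E += w * np.exp(-0.5 * prec * d2)
    return E
```

The double loop makes the O(N²) cost of the dense pairwise term explicit, which is exactly what the filtering machinery of Section 3 is designed to avoid.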
The pairwise potentials in our model have the form

ψ_p(x_i, x_j) = μ(x_i, x_j) Σ_{m=1}^{K} w^{(m)} k^{(m)}(f_i, f_j),    (2)

where the sum over kernels is denoted k(f_i, f_j), each k^{(m)} is a Gaussian kernel

k^{(m)}(f_i, f_j) = exp( − ½ (f_i − f_j)^T Λ^{(m)} (f_i − f_j) ),

the vectors f_i and f_j are feature vectors for pixels i and j in an arbitrary feature space, the w^{(m)} are linear combination weights, and μ is a label compatibility function. Each kernel k^{(m)} is characterized by a symmetric, positive-definite precision matrix Λ^{(m)}, which defines its shape. For multi-class image segmentation and labeling we use contrast-sensitive two-kernel potentials, defined in terms of the color vectors I_i and I_j and positions p_i and p_j:

k(f_i, f_j) = w^{(1)} exp( − |p_i − p_j|² / (2θ_α²) − |I_i − I_j|² / (2θ_β²) ) + w^{(2)} exp( − |p_i − p_j|² / (2θ_γ²) ),    (3)

where the first term is the appearance kernel and the second is the smoothness kernel. The appearance kernel is inspired by the observation that nearby pixels with similar color are likely to be in the same class. The degrees of nearness and similarity are controlled by the parameters θ_α and θ_β. The smoothness kernel removes small isolated regions [19]. The parameters are learned from data, as described in Section 4. A simple label compatibility function μ is given by the Potts model, μ(x_i, x_j) = [x_i ≠ x_j]. It introduces a penalty for nearby similar pixels that are assigned different labels. While this simple model works well in practice, it is insensitive to compatibility between labels. For example, it penalizes a pair of nearby pixels labeled "sky" and "bird" to the same extent as pixels labeled "sky" and "cat". We can instead learn a general symmetric compatibility function μ(x_i, x_j) that takes interactions between labels into account, as described in Section 4.

3 Efficient Inference in Fully Connected CRFs

Our algorithm is based on a mean field approximation to the CRF distribution. This approximation yields an iterative message passing algorithm for approximate inference.
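The two-kernel potential of Equation 3 can be sketched as a plain pairwise evaluation. The default parameter values below are illustrative only (θ_α = 61 and θ_β = 11 are borrowed from the long-range experiment in Section 6, not the learned values), and the function name is ours.

```python
import numpy as np

def pairwise_kernel(p_i, p_j, I_i, I_j,
                    w1=1.0, w2=1.0, theta_a=61.0, theta_b=11.0, theta_g=1.0):
    """k(f_i, f_j) from Eq. 3: appearance kernel (position + color)
    plus smoothness kernel (position only)."""
    dp2 = np.sum((np.asarray(p_i, float) - np.asarray(p_j, float)) ** 2)
    dI2 = np.sum((np.asarray(I_i, float) - np.asarray(I_j, float)) ** 2)
    appearance = w1 * np.exp(-dp2 / (2 * theta_a ** 2) - dI2 / (2 * theta_b ** 2))
    smoothness = w2 * np.exp(-dp2 / (2 * theta_g ** 2))
    return appearance + smoothness
```

Note that the kernel peaks at w1 + w2 for identical pixels and decays with either spatial or color distance; the Potts factor μ then decides whether this weight is actually paid as a penalty.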
Our key observation is that message passing in the presented model can be performed using Gaussian filtering in feature space. This enables us to utilize highly efficient approximations for high-dimensional filtering, which reduce the complexity of message passing from quadratic to linear, resulting in an approximate inference algorithm for fully connected CRFs that is linear in the number of variables N and sublinear in the number of edges in the model.

3.1 Mean Field Approximation

Instead of computing the exact distribution P(X), the mean field approximation computes a distribution Q(X) that minimizes the KL-divergence D(Q ∥ P) among all distributions Q that can be expressed as a product of independent marginals, Q(X) = Π_i Q_i(X_i) [10]. Minimizing the KL-divergence, while constraining Q(X) and Q_i(X_i) to be valid distributions, yields the following iterative update equation:

Q_i(x_i = l) = (1 / Z_i) exp{ − ψ_u(x_i) − Σ_{l′ ∈ L} μ(l, l′) Σ_{m=1}^{K} w^{(m)} Σ_{j ≠ i} k^{(m)}(f_i, f_j) Q_j(l′) }.    (4)

A detailed derivation of Equation 4 is given in the supplementary material. This update equation leads to the following inference algorithm:

Algorithm 1: Mean field in fully connected CRFs
  Initialize Q:  Q_i(x_i) ← (1 / Z_i) exp{ − φ_u(x_i) }
  while not converged do    (see Section 6 for convergence analysis)
    Q̃_i^{(m)}(l) ← Σ_{j ≠ i} k^{(m)}(f_i, f_j) Q_j(l) for all m    (message passing from all X_j to all X_i)
    Q̂_i(x_i) ← Σ_{l ∈ L} μ^{(m)}(x_i, l) Σ_m w^{(m)} Q̃_i^{(m)}(l)    (compatibility transform)
    Q_i(x_i) ← exp{ − ψ_u(x_i) − Q̂_i(x_i) }    (local update)
    normalize Q_i(x_i)
  end while

Each iteration of Algorithm 1 performs a message passing step, a compatibility transform, and a local update. Both the compatibility transform and the local update run in linear time and are highly efficient. The computational bottleneck is message passing. For each variable, this step requires evaluating a sum over all other variables. A naive implementation thus has quadratic complexity in the number of variables N.
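Algorithm 1 can be sketched as a naive O(N²) loop, useful for understanding the updates before the filtering speed-up is introduced. The sketch assumes a single precomputed kernel matrix and folds w^{(1)} into it; these simplifications and all names are ours.

```python
import numpy as np

def mean_field(unary, K, mu, n_iters=10):
    """Naive mean field for a fully connected pairwise CRF (Algorithm 1
    without the filtering acceleration).
    unary: (N, L) unary potentials psi_u;
    K: (N, N) precomputed kernel matrix with zero diagonal (j != i);
    mu: (L, L) label compatibility function."""
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)       # initialize Q_i from the unaries
    for _ in range(n_iters):
        Qtilde = K @ Q                      # message passing: sum_j k(f_i,f_j) Q_j(l)
        Qhat = Qtilde @ mu.T                # compatibility transform: sum_l' mu(x_i,l') Qtilde_i(l')
        Q = np.exp(-unary - Qhat)           # local update
        Q /= Q.sum(axis=1, keepdims=True)   # normalize
    return Q
```

The `K @ Q` product is exactly the quadratic-cost message passing step; Section 3.2 replaces it with a Gaussian convolution in feature space.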
Next, we show how approximate high-dimensional filtering can be used to reduce the computational cost of message passing to linear.

3.2 Efficient Message Passing Using High-Dimensional Filtering

From a signal processing standpoint, the message passing step can be expressed as a convolution with a Gaussian kernel G_{Λ^{(m)}} in feature space:

Q̃_i^{(m)}(l) = Σ_{j ∈ V} k^{(m)}(f_i, f_j) Q_j(l) − Q_i(l) = [ G_{Λ^{(m)}} ⊗ Q(l) ](f_i) − Q_i(l),    (5)

where we write Q̄_i^{(m)}(l) = [G_{Λ^{(m)}} ⊗ Q(l)](f_i) for the convolved function. We subtract Q_i(l) from Q̄_i^{(m)}(l) because the convolution sums over all variables, while message passing does not sum over Q_i. This convolution performs a low-pass filter, essentially band-limiting Q̄_i^{(m)}(l). By the sampling theorem, this function can be reconstructed from a set of samples whose spacing is proportional to the standard deviation of the filter [20]. We can thus perform the convolution by downsampling Q(l), convolving the samples with G_{Λ^{(m)}}, and upsampling the result at the feature points [16].

Algorithm 2: Efficient message passing, computing Q̄_i^{(m)}(l) = Σ_{j ∈ V} k^{(m)}(f_i, f_j) Q_j(l)
  Q_↓(l) ← downsample(Q(l))    (downsample)
  for all i ∈ V_↓:  Q̄_↓i^{(m)}(l) ← Σ_{j ∈ V_↓} k^{(m)}(f_↓i, f_↓j) Q_↓j(l)    (convolution on samples f_↓)
  Q̄^{(m)}(l) ← upsample(Q̄_↓^{(m)}(l))    (upsample)

A common approximation to the Gaussian kernel is a truncated Gaussian, where all values beyond two standard deviations are set to zero. Since the spacing of the samples is proportional to the standard deviation, the support of the truncated kernel contains only a constant number of sample points. Thus the convolution can be approximately computed at each sample by aggregating values from only a constant number of neighboring samples. This implies that approximate message passing can be performed in O(N) time [16]. High-dimensional filtering algorithms that follow this approach can still have computational complexity exponential in d. However, a clever filtering scheme can reduce the complexity of the convolution operation to O(Nd).
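For the position-only smoothness kernel, the convolution view of Equation 5 can be sketched with an off-the-shelf separable Gaussian filter. This is a simplified stand-in, not the paper's method: scipy's filter uses a normalized kernel, so it matches the unnormalized sum only up to a constant factor (a scale the normalization in Section 5 would absorb), the subtraction of Q_i itself is left to the caller, and the full appearance kernel requires genuine high-dimensional (e.g. permutohedral) filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_message_pass(Q, theta_gamma):
    """Linear-time stand-in for message passing under the position-only
    smoothness kernel: convolve each label plane of Q (shape H x W x L)
    with a spatial Gaussian of standard deviation theta_gamma."""
    out = np.empty_like(Q)
    for l in range(Q.shape[2]):     # filter each label plane independently
        out[:, :, l] = gaussian_filter(Q[:, :, l], sigma=theta_gamma,
                                       mode='nearest')
    return out
```

Because the separable filter touches each pixel a constant number of times per axis, the cost is O(N) per label, versus O(N²) for the explicit pairwise sum.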
We use the permutohedral lattice, a highly efficient convolution data structure that tiles the feature space with simplices arranged along d+1 axes [1]. The permutohedral lattice exploits the separability of unit-variance Gaussian kernels. We therefore need to apply a whitening transform f̃ = U f to the feature space in order to use it. The whitening transform is found using the Cholesky decomposition of Λ^{(m)} into U U^T. In the transformed space, the high-dimensional convolution can be separated into a sequence of one-dimensional convolutions along the axes of the lattice. The resulting approximate message passing procedure is highly efficient even with a fully sequential implementation that does not make use of parallelism or the streaming capabilities of graphics hardware, which can provide further acceleration if desired.

4 Learning

We learn the parameters of the model by piecewise training. First, the boosted unary classifiers are trained using the JointBoost algorithm [21], using the features described in Section 5. Next we learn the appearance kernel parameters w^{(1)}, θ_α, and θ_β for the Potts model. w^{(1)} can be found efficiently by a combination of expectation maximization and high-dimensional filtering. Unfortunately, the kernel widths θ_α and θ_β cannot be computed effectively with this approach, since their gradient involves a sum of non-Gaussian kernels, which are not amenable to the same acceleration techniques. We found it more efficient to use grid search on a holdout validation set for all three kernel parameters w^{(1)}, θ_α and θ_β. The smoothness kernel parameters w^{(2)} and θ_γ do not significantly affect classification accuracy, but yield a small visual improvement. We found w^{(2)} = θ_γ = 1 to work well in practice. The compatibility parameters μ(a, b) = μ(b, a) are learned using L-BFGS to maximize the log-likelihood ℓ(μ : I, T) of the model for a validation set of images I with corresponding ground truth labelings T.
L-BFGS requires the computation of the gradient of ℓ, which is intractable to estimate exactly, since it requires computing the gradient of the partition function Z. Instead, we use the mean field approximation described in Section 3 to estimate the gradient of Z. This leads to a simple approximation of the gradient for each training image:

∂ℓ(μ : I^{(n)}, T^{(n)}) / ∂μ(a, b) ≈ − Σ_i T_i^{(n)}(a) Σ_{j ≠ i} k(f_i, f_j) T_j^{(n)}(b) + Σ_i Q_i(a) Σ_{j ≠ i} k(f_i, f_j) Q_j(b),    (6)

where (I^{(n)}, T^{(n)}) is a single training image with its ground truth labeling, and T^{(n)}(a) is a binary image in which the i-th pixel T_i^{(n)}(a) has value 1 if the ground truth label at the i-th pixel of T^{(n)} is a, and 0 otherwise. A detailed derivation of Equation 6 is given in the supplementary material. The sums Σ_{j ≠ i} k(f_i, f_j) T_j(b) and Σ_{j ≠ i} k(f_i, f_j) Q_j(b) are both computationally expensive to evaluate directly. As in Section 3.2, we use high-dimensional filtering to compute both sums efficiently. The runtime of the final learning algorithm is linear in the number of variables N.

5 Implementation

The unary potentials used in our implementation are derived from TextonBoost [19, 13]. We use the 17-dimensional filter bank suggested by Shotton et al. [19], and follow Ladický et al. [13] by adding color, histogram of oriented gradients (HOG), and pixel location features. Our evaluation on the MSRC-21 dataset uses this extended version of TextonBoost for the unary potentials. For the VOC 2010 dataset we include the responses of bounding box object detectors [4] for each object class as 20 additional features. This increases the performance of the unary classifiers on the VOC 2010 from 13% to 22%. We gain an additional 5% by training a logistic regression classifier on the responses of the boosted classifier. For efficient high-dimensional filtering, we use a publicly available implementation of the permutohedral lattice [1]. We found a downsampling rate of one standard deviation to work best for all our experiments.
Sampling-based filtering algorithms underestimate the edge strength k(f_i, f_j) for very similar feature points. Proper normalization can cancel out most of this error. The permutohedral lattice allows for two types of normalization. A global normalization by the average kernel strength k̂ = (1/N) Σ_{i,j} k(f_i, f_j) can correct for constant error. A pixelwise normalization by k̂_i = Σ_j k(f_i, f_j) handles regional errors as well, but slightly violates the CRF symmetry assumption ψ_p(x_i, x_j) = ψ_p(x_j, x_i). We found the pixelwise normalization to work better in practice.

6 Evaluation

We evaluate the presented algorithm on two standard benchmarks for multi-class image segmentation and labeling. The first is the MSRC-21 dataset, which consists of 591 color images of size 320 × 213 with corresponding ground truth labelings of 21 object classes [19]. The second is the PASCAL VOC 2010 dataset, which contains 1928 color images of size approximately 500 × 400, with a total of 20 object classes and one background class [3]. The presented approach was evaluated alongside the adjacency (grid) CRF of Shotton et al. [19] and the Robust P^n CRF of Kohli et al. [9], using publicly available reference implementations. To ensure a fair comparison, all models used the unary potentials described in Section 5. All experiments were conducted on an Intel i7-930 processor clocked at 2.80 GHz. Eight CPU cores were used for training; all other experiments were performed on a single core. The inference algorithm was implemented in a single CPU thread.

Convergence. We first evaluate the convergence of the mean field approximation by analyzing the KL-divergence between Q and P. Figure 2 shows the KL-divergence between Q and P over successive iterations of the inference algorithm. The KL-divergence was estimated up to a constant as described in the supplementary material. Results are shown for different standard deviations θ_α and θ_β of the kernels.
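The two normalizers above can be sketched with a naive O(N²) computation, which is purely illustrative (in the actual pipeline these quantities come out of the lattice-based filtering itself); the function name and single-kernel assumption are ours.

```python
import numpy as np

def kernel_normalizers(feats, prec=1.0):
    """Global and pixelwise kernel-strength normalizers for a single
    Gaussian kernel k(f_i, f_j) = exp(-0.5 * prec * |f_i - f_j|^2).
    Returns (k_hat, k_hat_i) with, as in the text,
    k_hat = (1/N) sum_{i,j} k(f_i, f_j) and k_hat_i = sum_j k(f_i, f_j)."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-0.5 * prec * d2)
    k_hat_i = K.sum(axis=1)          # pixelwise: one sum per point
    k_hat = K.sum() / feats.shape[0] # global: constant scale correction
    return k_hat, k_hat_i
```

Dividing each filtered message by k̂_i corrects the locally underestimated edge strengths, at the cost of the slight symmetry violation noted above.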
The graphs were aligned at 20 iterations for visual comparison. The number of iterations was set to 10 in all subsequent experiments.

MSRC-21 dataset. We use the standard split of the dataset into 45% training, 10% validation and 45% test images [19]. The unary potentials were learned on the training set, while the parameters of all CRF models were learned using holdout validation. The total CRF training time was 40 minutes. The learned label compatibility function performed on par with the Potts model on this dataset. Figure 3 provides qualitative and quantitative results on the dataset. We report the standard measures of multi-class segmentation accuracy: "global" denotes the overall percentage of correctly classified image pixels and "average" is the unweighted average of per-category classification accuracy [19, 9]. The presented inference algorithm on the fully connected CRF significantly outperforms the other models, evaluated against the standard ground truth data provided with the dataset. The ground truth labelings provided with the MSRC-21 dataset are quite imprecise. In particular, regions around object boundaries are often left unlabeled. This makes it difficult to quantitatively evaluate the performance of algorithms that strive for pixel-level accuracy. Following Kohli et al. [9], we manually produced accurate segmentations and labelings for a set of images from the MSRC-21 dataset. Each image was fully annotated at the pixel level, with careful labeling around complex boundaries. This labeling was performed by hand for 94 representative images from the MSRC-21 dataset. Labeling a single image took 30 minutes on average. A number of images from this "accurate ground truth" set are shown in Figure 3. Figure 3 reports segmentation accuracy against this ground truth data alongside the evaluation against the standard ground truth.
The results were obtained using 5-fold cross validation, where 4/5 of the 94 images were used to train the CRF parameters. The unary potentials were learned on a separate training set that did not include the 94 accurately annotated images.

[Figure 2: Convergence analysis. (a) KL-divergence of the mean field approximation during successive iterations of the inference algorithm, averaged across 94 images from the MSRC-21 dataset, for kernel standard deviations θα = θβ ∈ {10, 30, 50, 70, 90}. (b) Visualization of convergence on the distributions Q(Xi = "bird") and Q(Xi = "sky") for two class labels over an image from the dataset, after 0, 1, 2 and 10 iterations.]

[Figure 3: Qualitative and quantitative results on the MSRC-21 dataset, showing example labelings produced by the Grid CRF, the Robust P^n CRF and our approach alongside the accurate ground truth. The accompanying table:

Method               Runtime  Standard ground truth  Accurate ground truth
                              Global   Average       Global       Average
Unary classifiers    −        84.0     76.6          83.2 ± 1.5   80.6 ± 2.3
Grid CRF             1s       84.6     77.2          84.8 ± 1.5   82.4 ± 1.8
Robust P^n CRF       30s      84.9     77.5          86.5 ± 1.0   83.1 ± 1.5
Fully connected CRF  0.2s     86.0     78.3          88.2 ± 0.7   84.7 ± 0.7]

We also adopt the methodology proposed by Kohli et al. [9] for evaluating segmentation accuracy around boundaries. Specifically, we count the relative number of misclassified pixels within a narrow band ("trimap") surrounding actual object boundaries, obtained from the accurate ground truth images. As shown in Figure 4, our algorithm outperforms previous work across all trimap widths.

PASCAL VOC 2010. Due to the lack of a publicly available ground truth labeling for the test set in the PASCAL VOC 2010, we use the training and validation data for all our experiments. We randomly partitioned the images into 3 groups: 40% training, 15% validation, and 45% test set.
Segmentation accuracy was measured using the standard VOC measure [3]. The unary potentials were learned on the training set and yielded an average classification accuracy of 27.6%. The parameters for the Potts potentials in the fully connected CRF model were learned on the validation set. The fully connected model with Potts potentials yielded an average classification accuracy of 29.1%. The label compatibility function, learned on the validation set, further increased the classification accuracy to 30.2%. For comparison, the grid CRF achieves 28.3%. Training time was 2.5 hours and inference time is 0.5 seconds. Qualitative results are provided in Figure 5.

[Figure 4: Segmentation accuracy around object boundaries. (a) Visualization of the "trimap" measure at widths of 4 and 8 pixels. (b) Percent of misclassified pixels within trimaps of different widths, comparing the unary classifiers, Grid CRF, Robust P^n CRF and the fully connected CRF.]

[Figure 5: Qualitative results on the PASCAL VOC 2010 dataset. Average segmentation accuracy was 30.2%.]

Long-range connections. We have examined the value of long-range connections in our model by varying the spatial and color ranges θα and θβ of the appearance kernel and analyzing the resulting classification accuracy. For this experiment, w^(1) was held constant and w^(2) was set to 0. The results are shown in Figure 6. Accuracy steadily increases as longer-range connections are added, peaking at a spatial standard deviation of θα = 61 pixels and a color standard deviation of θβ = 11. At this setting, more than 50% of the pairwise potential energy in the model was assigned to edges of length 35 pixels or higher.
However, long-range connections can also propagate misleading information, as shown in Figure 7.

[Figure 6: Influence of long-range connections on classification accuracy. (a) Global classification accuracy on the 94 MSRC images with accurate ground truth, as a function of kernel parameters θα and θβ. (b) Results for one image across two slices in parameter space, shown as black lines in (a).]

Discussion. We have presented a highly efficient approximate inference algorithm for fully connected CRF models. Our results demonstrate that dense pixel-level connectivity leads to significantly more accurate pixel-level classification performance. Our single-threaded implementation processes benchmark images in a fraction of a second, and the algorithm can be parallelized for further performance gains.

Acknowledgements. We thank Daphne Koller for helpful discussions. Philipp Krähenbühl was supported in part by a Stanford Graduate Fellowship.

[Figure 7: Failure cases on images from the PASCAL VOC 2010 (left) and the MSRC-21 (right). Long-range connections propagated misleading information, eroding the bird wing in the left image and corrupting the legs of the cat on the right.]

References

[1] A. Adams, J. Baek, and M. A. Davis. Fast high-dimensional filtering using the permutohedral lattice. Computer Graphics Forum, 29(2), 2010.
[2] A. Adams, N. Gelfand, J. Dolson, and M. Levoy. Gaussian kd-trees for fast high-dimensional filtering. ACM Transactions on Graphics, 28(3), 2009.
[3] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) challenge. IJCV, 88(2), 2010.
[4] P. F. Felzenszwalb, R. B. Girshick, and D. A. McAllester. Cascade object detection with deformable part models. In Proc. CVPR, 2010.
[5] B. Fulkerson, A. Vedaldi, and S. Soatto. Class segmentation and object localization with superpixel neighborhoods. In Proc. ICCV, 2009.
[6] C. Galleguillos, A. Rabinovich, and S. Belongie. Object categorization using co-occurrence, location and appearance. In Proc. CVPR, 2008.
[7] S. Gould, J. Rodgers, D. Cohen, G. Elidan, and D. Koller. Multi-class segmentation with relative location prior. IJCV, 80(3), 2008.
[8] X. He, R. S. Zemel, and M. A. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In Proc. CVPR, 2004.
[9] P. Kohli, L. Ladický, and P. H. S. Torr. Robust higher order potentials for enforcing label consistency. IJCV, 82(3), 2009.
[10] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[11] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? PAMI, 26(2), 2004.
[12] S. Kumar and M. Hebert. A hierarchical field framework for unified context-based classification. In Proc. ICCV, 2005.
[13] L. Ladický, C. Russell, P. Kohli, and P. H. S. Torr. Associative hierarchical CRFs for object class image segmentation. In Proc. ICCV, 2009.
[14] L. Ladický, C. Russell, P. Kohli, and P. H. S. Torr. Graph cut based inference with co-occurrence statistics. In Proc. ECCV, 2010.
[15] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, 2001.
[16] S. Paris and F. Durand. A fast approximation of the bilateral filter using a signal processing approach. IJCV, 81(1), 2009.
[17] N. Payet and S. Todorovic. (RF)^2 – random forest random field. In Proc. NIPS, 2010.
[18] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In Proc. ICCV, 2007.
[19] J. Shotton, J. M. Winn, C. Rother, and A. Criminisi. TextonBoost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context. IJCV, 81(1), 2009.
[20] S. W. Smith. The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing, 1997.
[21] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing visual features for multiclass and multiview object detection. PAMI, 29(5), 2007.
[22] T. Toyoda and O. Hasegawa. Random field model for integration of local information and global information. PAMI, 30, 2008.
[23] J. J. Verbeek and B. Triggs. Scene segmentation with CRFs learned from partially labeled images. In Proc. NIPS, 2007.
Simultaneous Sampling and Multi-Structure Fitting with Adaptive Reversible Jump MCMC

Trung Thanh Pham, Tat-Jun Chin, Jin Yu and David Suter
School of Computer Science, The University of Adelaide, South Australia
{trung,tjchin,jin.yu,dsuter}@cs.adelaide.edu.au

Abstract

Multi-structure model fitting has traditionally taken a two-stage approach: first, sample a (large) number of model hypotheses, then select the subset of hypotheses that optimise a joint fitting and model selection criterion. This disjoint two-stage approach is arguably suboptimal and inefficient: if the random sampling did not retrieve a good set of hypotheses, the optimised outcome will not represent a good fit. To overcome this weakness we propose a new multi-structure fitting approach based on Reversible Jump MCMC. Instrumental in raising the effectiveness of our method is an adaptive hypothesis generator, whose proposal distribution is learned incrementally and online. We prove that this adaptive proposal satisfies the diminishing adaptation property crucial for ensuring ergodicity in MCMC. Our method effectively conducts hypothesis sampling and optimisation simultaneously, and yields superior computational efficiency over previous two-stage methods.

1 Introduction

Multi-structure model fitting is concerned with estimating the multiple instances (or structures) of a geometric model embedded in the input data. The task manifests in applications such as mixture regression [21], motion segmentation [27, 10], and multi-projective estimation [29]. Such a problem is known for its "chicken-and-egg" nature: both data-to-structure assignments and structure parameters are unavailable, but given the solution of one subproblem, the solution of the other can be easily derived. In practical settings the number of structures is usually unknown beforehand, so model selection is required in conjunction with fitting. This makes the problem very challenging.
A common framework is to optimise a robust goodness-of-fit function jointly with a model selection criterion. For tractability most methods [25, 19, 17, 26, 18, 7, 31] take a "hypothesise-then-select" approach: first, randomly sample from the parameter space a large number of putative model hypotheses, then select a subset of the hypotheses (structures) that optimise the combined objective function. The hypotheses are typically fitted on minimal subsets [9] of the input data. Depending on the specific definition of the cost functions, a myriad of strategies have been proposed to select the best structures, namely tabu search [25], branch-and-bound [26], linear programming [19], Dirichlet mixture clustering [17], message passing [18], graph cut [7], and quadratic programming [31].

While sampling is crucial for tractability, a disjoint two-stage approach raises an awkward situation: if the sampled hypotheses are inaccurate, or worse, if not all valid structures are sampled, the selection or optimisation step will be affected. The concern is palpable especially for higher-order geometric models (e.g., fundamental matrices in motion segmentation [27]), where enormous sampling effort is required before hitting good hypotheses (those fitted on all-inlier minimal subsets). Thus two-stage approaches are highly vulnerable to sampling inadequacies, even with theoretical assurances on the optimisation step (e.g., global optimality over the sampled hypotheses [19, 7, 31]).

The issue above can be viewed as the lack of a stopping criterion for the sampling stage. If there is only one structure, we can easily evaluate the sample quality (e.g., consensus size) on-the-fly and stop as soon as the prospect of obtaining a better sample becomes insignificant [9]. Under multi-structure data, it is unknown what a suitable stopping criterion is (apart from solving the overall fitting and model selection problem itself).
One can consider iterative local refinement of the structures or re-sampling after data assignment [7], but the fact remains that if the initial hypotheses are inaccurate, the results of the subsequent fitting and refinement will be affected. Clearly, an approach that simultaneously samples and optimises is more appropriate.

To this end we propose a new method for multi-structure fitting and model selection based on Reversible Jump Markov Chain Monte Carlo (RJMCMC) [12]. By design, MCMC techniques directly optimise via sampling. Despite their popular use [3] the method has not been fully explored in multi-structure fitting (a few authors have applied Monte Carlo techniques for robust estimation [28, 8], but mostly to enhance hypothesis sampling on single-structure data). We show how to exploit the reversible jump mechanism to provide a simple and effective framework for multi-structure model selection.

The bane of MCMC, however, is the difficulty in designing efficient proposal distributions. Adaptive MCMC techniques [4, 24] promise to alleviate this difficulty by learning the proposal distribution on-the-fly. Instrumental in raising the efficiency of our RJMCMC approach is a recently proposed hypothesis generator [6] that progressively updates the proposal distribution using generated hypotheses. Care must be taken in introducing such adaptive schemes, since a chain propagated based on a non-stationary proposal is non-Markovian, and unless the proposal satisfies certain properties [4, 24], this generally means a loss of asymptotic convergence to the target distribution. Clearing these technical hurdles is one of our major contributions: using emerging theory from adaptive MCMC [23, 4, 24, 11], we prove that the adaptive proposal, despite its origins in robust estimation [6], satisfies the properties required for convergence, most notably diminishing adaptation.

The rest of the paper is organised as follows: Sec.
2 formulates our goal within a clear optimisation framework, and outlines our RJMCMC approach. Sec. 3 describes the adaptive hypothesis proposal used in our method, and develops proof that it is a valid adaptive MCMC sampler. We present our experimental results in Sec. 4 and draw conclusions in Sec. 5.

2 Multi-Structure Fitting and Model Selection

Given input data $X = \{x_i\}_{i=1}^N$, usually with outliers, our goal is to recover the instances or structures $\theta_k = \{\theta_c\}_{c=1}^k$ of a geometric model M embedded in X. The number of valid structures k is unknown beforehand and must also be estimated from the data. The problem domain is therefore the joint space of structure quantity and parameters $\{k, \theta_k\}$. Such a problem is typically solved by jointly minimising fitting error and model complexity. Similar to [25, 19, 26], we use the AIC [1]

$$\{k^*, \theta^*_{k^*}\} = \arg\min_{\{k, \theta_k\}} \; -2 \log L(\theta_k) + 2\alpha\, n(\theta_k).$$

Here $L(\theta_k)$ is the robust data likelihood and $n(\theta_k)$ the number of parameters required to define $\theta_k$. We include a positive constant $\alpha$ to allow reweighting of the two components. Assuming i.i.d. Gaussian noise with known variance $\sigma$, the above problem is equivalent to minimising the function

$$f(k, \theta_k) = \sum_{i=1}^{N} \rho\!\left(\frac{\min_c r_{ic}}{1.96\sigma}\right) + \alpha\, n(\theta_k), \qquad (1)$$

where $r_{ic} = g(x_i, \theta_c)$ is the absolute residual of $x_i$ to the c-th structure $\theta_c$ in $\theta_k$. The residuals are subjected to a robust loss function $\rho(\cdot)$ to limit the influence of outliers; we use the biweight function [16]. Minimising a function like (1) over the vast domain $\{k, \theta_k\}$ is a formidable task.

2.1 A reversible jump simulated annealing approach

Simulated annealing has proven to be effective for difficult model selection problems [2, 5]. The idea is to propagate a Markov chain for the Boltzmann distribution encapsulating (1),

$$b_T(k, \theta_k) \propto \exp(-f(k, \theta_k)/T), \qquad (2)$$

where the temperature T is progressively lowered until the samples from $b_T(k, \theta_k)$ converge to the global minima of $f(k, \theta_k)$. Algorithm 1 shows the main body of the algorithm.
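Objective (1) can be evaluated directly once the residual matrix is available. A minimal sketch, using the standard Tukey biweight as the robust loss ρ (the text names the biweight function [16] but not its tuning constant, so `c` below is an assumption, as are the function names):

```python
import numpy as np

def biweight(r, c=1.0):
    """Tukey biweight loss: quadratic near zero, saturating at c^2/6
    for |r| >= c, so gross outliers have bounded influence."""
    r = np.minimum(np.abs(r) / c, 1.0)
    return (c ** 2 / 6.0) * (1.0 - (1.0 - r ** 2) ** 3)

def fitting_cost(residuals, n_params, sigma, alpha):
    """Objective (1): residuals is an (N, k) array of |g(x_i, theta_c)|,
    n_params the total parameter count n(theta_k)."""
    r_min = residuals.min(axis=1)                       # min_c r_ic
    return biweight(r_min / (1.96 * sigma)).sum() + alpha * n_params
```

With all residuals at zero the cost reduces to the model-complexity penalty alpha * n(theta_k), and each gross outlier adds at most the saturated loss c²/6, which is the point of the robust ρ.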
Under weak regularity assumptions, there exist cooling schedules [5] that guarantee that as T tends to zero the samples from the chain will concentrate around the global minima. To simulate $b_T(k, \theta_k)$ we adopt a mixture-of-kernels MCMC approach [2]. This involves in each iteration the execution of a randomly chosen type of move to update $\{k, \theta_k\}$. Algorithm 2 summarises the idea. We make available 3 types of moves: birth, death and local update. Birth and death moves change the number of structures k. These moves effectively cause the chain to jump across parameter spaces $\theta_k$ of different dimensions. It is crucial that these trans-dimensional jumps are reversible to produce correct limiting behaviour of the chain. The following subsections explain.

Algorithm 1 Simulated annealing for multi-structure fitting and model selection
1: Initialise temperature T and state {k, θ_k}.
2: Simulate Markov chain for b_T(k, θ_k) until convergence.
3: Lower temperature T and repeat from Step 2 until T ≈ 0.

Algorithm 2 Reversible jump mixture of kernels MCMC to simulate b_T(k, θ_k)
Require: Last visited state {k, θ_k} of previous chain, probability β (Sec. 4 describes setting β).
1: Sample a ∼ U[0,1].
2: if a ≤ β then
3:   With probability r_B(k), attempt birth move, else attempt death move.
4: else
5:   Attempt local update.
6: end if
7: Repeat from Step 1 until convergence (e.g., last V moves all rejected).

2.1.1 Birth and death moves

The birth move propagates $\{k, \theta_k\}$ to $\{k', \theta'_{k'}\}$, with $k' = k + 1$. Applying Green's [12, 22] seminal theorems on RJMCMC, the move is reversible if it is accepted with probability $\min\{1, A\}$, where

$$A = \frac{b_T(k', \theta'_{k'})\,[1 - r_B(k')]/k'}{b_T(k, \theta_k)\, r_B(k)\, q(u)} \left| \frac{\partial \theta'_{k'}}{\partial(\theta_k, u)} \right|. \qquad (3)$$

The probability of proposing the birth move is $r_B(k)$, where $r_B(k) = 1$ for $k = 1$, $r_B(k) = 0.5$ for $k = 2, \ldots, k_{max} - 1$, and $r_B(k_{max}) = 0$. In other words, any move that attempts to take k beyond the range $[1, k_{max}]$ is disallowed in Step 3 of Algorithm 2.
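With the unit Jacobian of the minimal-subset parameterisation (Sec. 2.1.1), the birth acceptance of Eq. (3) reduces to a simple ratio of Boltzmann factors and proposal terms. A hedged sketch, with function and argument names of our own choosing:

```python
import math

def rB(k, kmax):
    """Birth-proposal probability: always birth at k = 1, never at kmax."""
    if k == 1:
        return 1.0
    if k >= kmax:
        return 0.0
    return 0.5

def birth_accept_prob(f_new, f_old, T, k_new, kmax, q_u):
    """Metropolis-Hastings acceptance min{1, A} for a birth move, Eq. (3),
    assuming a unit Jacobian. f_* are objective values f(k, theta_k) so
    b_T(.) ratios become exp(-(f_new - f_old)/T); q_u is the proposal
    density of the new minimal subset u. (A birth from kmax is never
    proposed, so rB(k_new - 1, kmax) > 0 here.)"""
    target_ratio = math.exp(-(f_new - f_old) / T)
    A = target_ratio * (1.0 - rB(k_new, kmax)) / k_new \
        / (rB(k_new - 1, kmax) * q_u)
    return min(1.0, A)
```

The 1/k' factor accounts for choosing which of the k' structures the reverse death move would delete; a death move uses the reciprocal ratio min{1, A⁻¹}, as stated after Eq. (3).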
The death move is proposed with probability $1 - r_B(k)$. An existing structure is chosen randomly and deleted from $\theta_k$. The death move is accepted with probability $\min\{1, A^{-1}\}$, with the obvious changes to the notations in $A^{-1}$.

In the birth move, the extra degrees of freedom required to specify the new item in $\theta'_{k'}$ are given by auxiliary variables u, which are in turn proposed by q(u). Following [18, 7, 31], we estimate the parameters of the new item by fitting the geometric model M onto a minimal subset of the data. Thus u is a minimal subset of X. The size p of u is the minimum number of data required to instantiate M, e.g., p = 4 for planar homographies, and p = 7 or 8 for fundamental matrices [15]. Our approach is equivalently minimising (1) over collections $\{k, \theta_k\}$ of minimal subsets of X, where now $\theta_k \equiv \{u_c\}_{c=1}^k$. Taking this view, the Jacobian $\partial \theta'_{k'} / \partial(\theta_k, u)$ is simply the identity matrix.

Considering only minimal subsets somewhat simplifies the problem, but there are still a colossal number of possible minimal subsets. Obtaining good overall performance thus hinges on the ability of the proposal q(u) to propose minimal subsets that are relevant, i.e., those fitted purely on inliers of valid structures in the data. One way is to learn q(u) incrementally using generated hypotheses. We describe such a scheme [6] in Sec. 3 and prove that the adaptive proposal preserves ergodicity.

2.1.2 Local update

A local update does not change the model complexity k. The move involves randomly choosing a structure $\theta_c$ in $\theta_k$ to update, making only local adjustments to its minimal subset $u_c$. The outcome is a revised minimal subset $u'_c$, and the move is accepted with probability $\min\{1, A\}$, where

$$A = \frac{b_T(k, \theta'_k)\, q(u_c \mid \theta'_c)}{b_T(k, \theta_k)\, q(u'_c \mid \theta_c)}. \qquad (4)$$

As shown above, our local update is also accomplished with the adaptive proposal $q(u \mid \theta)$, but this time conditioned on the selected structure $\theta_c$. Sec. 3 describes and analyses $q(u \mid \theta)$.
3 Adaptive MCMC for Multi-Structure Fitting

Our work capitalises on the hypothesis generation scheme of Chin et al. called Multi-GS [6], originally proposed for robust geometric fitting. The algorithm maintains a series of sampling weights which are revised incrementally as new hypotheses are generated. This bears similarity to the pioneering Adaptive Metropolis (AM) method of Haario et al. [13]. Here, we prove that our adaptive proposals q(u) and q(u|θ) based on Multi-GS satisfy the conditions required to preserve ergodicity.

3.1 The Multi-GS algorithm

Let $\{\theta_m\}_{m=1}^M$ aggregate the set of hypotheses fitted on the minimal subsets proposed thus far in all birth and local update moves in Algorithm 1. To build the sampling weights, first for each $x_i \in X$ we compute its absolute residuals as measured to the M hypotheses, yielding the residual vector $\mathbf{r}^{(i)} := [\, r^{(i)}_1 \; r^{(i)}_2 \; \cdots \; r^{(i)}_M \,]$. We then find the permutation $\mathbf{a}^{(i)} := [\, a^{(i)}_1 \; a^{(i)}_2 \; \cdots \; a^{(i)}_M \,]$ that sorts the elements in $\mathbf{r}^{(i)}$ in non-descending order. The permutation $\mathbf{a}^{(i)}$ essentially ranks the M hypotheses according to the preference of $x_i$; the higher a hypothesis is ranked, the more likely $x_i$ is an inlier to it. The weight $w_{i,j}$ between the pair $x_i$ and $x_j$ is obtained as

$$w_{i,j} = I_h(x_i, x_j) := \frac{1}{h} \left| \mathbf{a}^{(i)}_h \cap \mathbf{a}^{(j)}_h \right|, \qquad (5)$$

where $|\mathbf{a}^{(i)}_h \cap \mathbf{a}^{(j)}_h|$ is the number of identical elements shared by the first h elements of $\mathbf{a}^{(i)}$ and $\mathbf{a}^{(j)}$. Clearly $w_{i,j}$ is symmetric with respect to the input pair $x_i$ and $x_j$, and $w_{i,i} = 1$ for all i. To ensure technical consistency in our later proofs, we add a small positive offset γ to the weight¹, i.e.,

$$w_{i,j} = \max(I_h(x_i, x_j), \gamma), \qquad (6)$$

hence $\gamma \le w_{i,j} \le 1$. The weight $w_{i,j}$ measures the correlation of the top-h preferences of $x_i$ and $x_j$, and this value is typically high iff $x_i$ and $x_j$ are inliers from the same structure; Figs. 1(c)–(g) illustrate. Parameter h controls the discriminative power of $w_{i,j}$, and is typically set as a fixed ratio k of M, i.e., $h = \lceil kM \rceil$.
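Eqs. (5)–(6) can be computed directly from the residual matrix; a naive O(N²h) sketch (the paper computes only a subset of the weights, and the identifier names here are illustrative):

```python
import numpy as np

def preference_weights(residuals, k_ratio=0.1, gamma=1e-3):
    """Pairwise weights of Eqs. (5)-(6).

    residuals: (N, M) array of |g(x_i, theta_m)| over the M hypotheses
    generated so far; k_ratio is the fixed ratio setting h = ceil(k * M).
    """
    N, M = residuals.shape
    h = max(1, int(np.ceil(k_ratio * M)))
    # a^(i): hypothesis indices sorted by increasing residual; keep top h
    top_h = np.argsort(residuals, axis=1)[:, :h]
    W = np.empty((N, N))
    for i in range(N):
        si = set(top_h[i])
        for j in range(N):
            overlap = len(si.intersection(top_h[j]))
            W[i, j] = max(overlap / h, gamma)   # offset of Eq. (6)
    return W
```

By construction the returned matrix is symmetric with unit diagonal and entries in [γ, 1], matching the properties stated after Eq. (6).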
Experiments suggest that k = 0.1 provides generally good performance [6].

Multi-GS exploits the preference correlations to sample the next minimal subset $u = \{x_{s_t}\}_{t=1}^p$, where $x_{s_t} \in X$ and $s_t \in \{1, \ldots, N\}$ indexes the particular datum from X; henceforth we regard $u \equiv \{s_t\}_{t=1}^p$. The first datum $s_1$ is chosen purely randomly. Beginning from t = 2, the selection of the t-th member $s_t$ considers the weights related to the data $s_1, \ldots, s_{t-1}$ already present in u. More specifically, the index $s_t$ is sampled according to the probabilities

$$P_t(i) \propto \prod_{z=1}^{t-1} w_{s_z, i}, \quad \text{for } i = 1, \ldots, N, \qquad (7)$$

i.e., if $P_t(i) > P_t(j)$ then i is more likely than j to be chosen as $s_t$. A new hypothesis $\theta_{M+1}$ is then fitted on u and the weights are updated in consideration of $\theta_{M+1}$. Experiments comparing sampling efficiency (e.g., all-inlier minimal subsets produced per unit time) show that Multi-GS is superior over previous guided sampling schemes, especially on multi-structure data; see [6] for details.

3.2 Is Multi-GS a valid adaptive MCMC proposal?

Our RJMCMC scheme in Algorithm 2 depends on the Multi-GS-inspired adaptive proposals $q_M(u)$ and $q_M(u|\theta)$, where we now add the subscript M to make explicit their dependency on the set of aggregated hypotheses $\{\theta_m\}_{m=1}^M$ as well as the weights $\{w_{i,j}\}_{i,j=1}^N$ they induce. The probability of proposing a minimal subset $u = \{s_t\}_{t=1}^p$ from $q_M(u)$ can be calculated as

$$q_M(u) = \frac{1}{N} \prod_{\substack{a < b \\ b \le p}} w_{s_a, s_b} \left[ \prod_{d=1}^{p-1} \mathbf{1}^T \bigodot_{e=1}^{d} \mathbf{w}_{s_e} \right]^{-1}, \qquad (8)$$

where $\mathbf{w}_i$ is the column vector $[\, w_{i,1} \; \ldots \; w_{i,N} \,]^T$ and $\bigodot$ is the sequential Hadamard product over the given multiplicands. The term with the inverse in Eq. (8) relates to the normalising constants for Eq. (7).

¹It can be shown that if both $x_i$ and $x_j$ are uniformly distributed outliers, the expected value of $w_{i,j}$ is h/M, i.e., a given pair $x_i$ and $x_j$ will likely have non-zero preference correlation.
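The conditional sampling of Eq. (7) amounts to multiplying weight rows and renormalising. A sketch assuming a precomputed N×N weight matrix W (the function name and interface are ours):

```python
import numpy as np

def sample_minimal_subset(W, p, rng):
    """Draw a minimal subset via Eq. (7): first index uniform, each later
    index with probability proportional to the product of its weights to
    the members already selected, sampling without replacement.

    W: (N, N) weight matrix with entries in (0, 1]; p: subset size;
    rng: a numpy.random.Generator.
    """
    N = W.shape[0]
    chosen = [rng.integers(N)]                 # s_1 chosen purely randomly
    for _ in range(1, p):
        probs = np.ones(N)
        for s in chosen:
            probs *= W[s]                      # prod_z w_{s_z, i}
        probs[chosen] = 0.0                    # without replacement
        probs /= probs.sum()                   # gamma-offset keeps sum > 0
        chosen.append(rng.choice(N, p=probs))
    return chosen
```

Because within-structure weights dominate cross-structure ones, subsets drawn this way tend to stay inside one structure, which is exactly the behaviour that makes all-inlier minimal subsets likely.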
As an example, the probability of selecting the minimal subset $u = \{s_1, s_2, s_3, s_4\}$ is

$$q_M(u) = \frac{1}{N} \cdot \frac{w_{s_1,s_2}\, w_{s_1,s_3}\, w_{s_2,s_3}\, w_{s_1,s_4}\, w_{s_2,s_4}\, w_{s_3,s_4}}{\mathbf{1}^T \mathbf{w}_{s_1} \; \mathbf{1}^T (\mathbf{w}_{s_1} \odot \mathbf{w}_{s_2}) \; \mathbf{1}^T (\mathbf{w}_{s_1} \odot \mathbf{w}_{s_2} \odot \mathbf{w}_{s_3})}.$$

The local update proposal $q_M(u|\theta)$ differs only in the manner in which the first datum $x_{s_1}$ is selected. Instead of being chosen purely randomly, the first index $s_1$ is sampled according to

$$P_{s_1}(i) \propto \exp\!\left( -\frac{O(g(x_i, \theta))}{n} \right), \quad \text{for } i = 1, \ldots, N, \qquad (9)$$

where $O(g(x_i, \theta))$ is the order statistic of the absolute residual $g(x_i, \theta)$ as measured to θ; to define $q_M(u|\theta)$ the 1/N term in Eq. (8) is simply replaced with the appropriate probability from Eq. (9). For local updates an index i is more likely to be chosen as $s_1$ if $x_i$ is close to θ. Parameter n relates to our prior belief of the minimum number of inliers per structure; we fix this to n = 0.1N.

Since our proposal distributions are updated with the arrival of new hypotheses, the corresponding transition probabilities are inhomogeneous (they change with time) and the chain is non-Markovian (the transition to a future state depends on all previous states). We aim to show that such continual adaptation with Multi-GS will still lead to the correct target distribution (2). First we restate Theorem 1 in [11], which is distilled from other work on adaptive MCMC [23, 4, 24].

Theorem 1. Let $Z = \{Z_n : n > 0\}$ be a stochastic process on a compact state space Ξ evolving according to a collection of transition kernels $T_n(z, z') = \mathrm{pr}(Z_{n+1} = z' \mid Z_n = z, Z_{n-1} = z_{n-1}, \ldots, Z_0 = z_0)$, and let $p(z)$ be the distribution of $Z_n$. Suppose for every n and $z_0, \ldots, z_{n-1} \in \Xi$ and for some distribution $\pi(z)$ on Ξ,

$$\sum_{z_n} \pi(z_n)\, T_n(z_n, z_{n+1}) = \pi(z_{n+1}), \qquad (10)$$

$$|T_{n+k}(z, z') - T_n(z, z')| \le a_n c_k, \quad a_n = O(n^{-r_1}), \; c_k = O(k^{-r_2}), \; r_1, r_2 > 0, \qquad (11)$$

$$T_n(z, z') \ge \epsilon\, \pi(z'), \quad \epsilon > 0, \qquad (12)$$

where ϵ does not depend on $n, z_0, \ldots, z_{n-1}$. Then, for any initial distribution $p(z_0)$ for $Z_0$,

$$\sup_{z_n} |p(z_n) - \pi(z_n)| \to 0 \quad \text{as } n \to \infty.$$

Diminishing adaptation. Eq.
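The rank-based first-index distribution of Eq. (9) can be sketched as follows, where `n_tilde` plays the role of the prior minimum inlier count n (0.1N in the paper); the helper name is ours:

```python
import numpy as np

def first_index_probs(residuals_to_theta, n_tilde):
    """Eq. (9): probability of picking the first datum of a local-update
    minimal subset, favouring points whose residual rank to the selected
    structure theta is small.

    residuals_to_theta: (N,) array of |g(x_i, theta)|.
    """
    N = len(residuals_to_theta)
    order = np.empty(N)
    # O(g(x_i, theta)): rank of x_i's residual (1 = smallest)
    order[np.argsort(residuals_to_theta)] = np.arange(1, N + 1)
    p = np.exp(-order / n_tilde)
    return p / p.sum()
```

Using the rank rather than the raw residual makes the distribution invariant to the residual scale, so no extra threshold is needed at this step.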
(11) dictates that the transition kernel, and thus the proposal distribution in the Metropolis-Hastings updates in Eqs. (3) and (4), must converge to a fixed distribution, i.e., the adaptation must diminish. To see that this occurs naturally in $q_M(u)$, first we show that $w_{i,j}$ for all i, j converges as M increases. Without loss of generality assume that b new hypotheses are generated between successive weight updates $w_{i,j}$ and $w'_{i,j}$. Then,

$$\lim_{M\to\infty} \left| w'_{i,j} - w_{i,j} \right| = \lim_{M\to\infty} \left| \frac{|\mathbf{a}'^{(i)}_{k(M+b)} \cap \mathbf{a}'^{(j)}_{k(M+b)}|}{k(M+b)} - \frac{|\mathbf{a}^{(i)}_{kM} \cap \mathbf{a}^{(j)}_{kM}|}{kM} \right|$$
$$\le \lim_{M\to\infty} \left| \frac{|\mathbf{a}^{(i)}_{kM} \cap \mathbf{a}^{(j)}_{kM}| \pm b(k+1)}{k(M+b)} - \frac{|\mathbf{a}^{(i)}_{kM} \cap \mathbf{a}^{(j)}_{kM}|}{kM} \right|$$
$$= \lim_{M\to\infty} \left| \frac{|\mathbf{a}^{(i)}_{kM} \cap \mathbf{a}^{(j)}_{kM}|/M \pm b(k+1)/M}{k + kb/M} - \frac{|\mathbf{a}^{(i)}_{kM} \cap \mathbf{a}^{(j)}_{kM}|/M}{k} \right| = 0,$$

where $\mathbf{a}'^{(i)}$ is the revised preference of $x_i$ in consideration of the b new hypotheses. The result is based on the fact that the extension of b hypotheses will only perturb the overlap between the top-k percentile of any two preference vectors by at most b(k+1) items. It should also be noted that the result is not due to $w'_{i,j}$ and $w_{i,j}$ simultaneously vanishing with increasing M; in general $\lim_{M\to\infty} |\mathbf{a}^{(i)}_{kM} \cap \mathbf{a}^{(j)}_{kM}|/M \ne 0$, since $\mathbf{a}^{(i)}$ and $\mathbf{a}^{(j)}$ are extended and revised as M increases and this may increase their mutual overlap. Figs. 1(c)–(g) illustrate the convergence of $w_{i,j}$ as M increases.

Using the above result, it can be shown that the product of any two weights also converges:

$$\lim_{M\to\infty} \left| w'_{i,j} w'_{p,q} - w_{i,j} w_{p,q} \right| = \lim_{M\to\infty} \left| w'_{i,j}(w'_{p,q} - w_{p,q}) + w_{p,q}(w'_{i,j} - w_{i,j}) \right|$$
$$\le \lim_{M\to\infty} w'_{i,j} \left| w'_{p,q} - w_{p,q} \right| + w_{p,q} \left| w'_{i,j} - w_{i,j} \right| = 0.$$

This result is readily extended to the product of any number of weights. To show the convergence of the normalisation terms in (8), we first observe that the sum of weights is bounded away from 0, i.e., $\forall i, \; \mathbf{1}^T \mathbf{w}_i \ge L$ for some $L > 0$, due to the offsetting (6) and the constant element $w_{i,i} = 1$ in $\mathbf{w}_i$ (although $w_{i,i}$ will be set to zero to enforce sampling without replacement [6]).
It can thus be established that

$$\lim_{M\to\infty} \left| \frac{1}{\mathbf{1}^T \mathbf{w}'_i} - \frac{1}{\mathbf{1}^T \mathbf{w}_i} \right| = \lim_{M\to\infty} \frac{\left| \mathbf{1}^T \mathbf{w}'_i - \mathbf{1}^T \mathbf{w}_i \right|}{(\mathbf{1}^T \mathbf{w}'_i)(\mathbf{1}^T \mathbf{w}_i)} \le \lim_{M\to\infty} \frac{\left| \mathbf{1}^T \mathbf{w}'_i - \mathbf{1}^T \mathbf{w}_i \right|}{L^2} = 0,$$

since the sum of the weights also converges. The result is readily extended to the inverse of the sum of any number of Hadamard products of weights, since we have also previously established that the product of any number of weights converges. Finally, since Eq. (8) involves only multiplications of convergent quantities, $q_M(u)$ will converge to a fixed distribution as the update progresses.

Invariance. Eq. (10) requires that transition probabilities based on $q_M(u)$ permit an invariant distribution individually for all M. Since we propose and accept based on the Metropolis-Hastings algorithm, detailed balance is satisfied by construction [3], which means that a Markov chain propagated based on $q_M(u)$ will asymptotically sample from the target distribution.

Uniform ergodicity. Eq. (12) requires that $q_M(u)$ for all M be individually ergodic, i.e., the resulting chain using $q_M(u)$ is aperiodic and irreducible. Again, since we simulate the target using Metropolis-Hastings, every proposal has a chance of being rejected, thus implying aperiodicity [3]. Irreducibility is satisfied by the offsetting in (6) and renormalising [20], since this implies that there is always a non-zero probability of reaching any state (minimal subset) from the current state.

The above results apply for the local update proposal $q_M(u|\theta)$, which differs from $q_M(u)$ only in the (stationary) probability to select the first index $s_1$. Hence $q_M(u|\theta)$ is also a valid adaptive proposal.

4 Experiments

We compare our approach (ARJMC) against state-of-the-art methods: message passing [18] (FLOSS), energy minimisation with graph cut [7] (ENERGY), and quadratic programming based on a novel preference feature [31] (QP-MF).
We exclude older methods with known weaknesses, e.g., computational inefficiency [19, 17, 26], low accuracy due to greedy search [25], or vulnerability to outliers [17]. All methods are run in MATLAB except ENERGY, which is available in C++². For ARJMC, the standard deviation σ in (1) is set as t/1.96, where t is the inlier threshold [9] obtained using ground truth model fitting results; the same t is provided to the competitors. In Algorithm 1 the temperature T is initialised as 1 and we apply the geometric cooling schedule $T_{next} = 0.99\,T$. In Algorithm 2, the probability β is set equal to the current temperature T, thus allowing more global exploration in the parameter space initially before concentrating on local refinement subsequently. Such a helpful strategy is not naturally practicable in disjoint two-stage approaches.

4.1 Two-view motion segmentation

The goal is to segment point trajectories X matched across two views into distinct motions [27]. Trajectories of a particular motion can be related by a distinct fundamental matrix $F \in \mathbb{R}^{3\times 3}$ [15]. Our task is thus to estimate the number of motions k and the fundamental matrices $\{F_c\}_{c=1}^k$ corresponding to the motions embedded in data X. Note that X may contain false trajectories (outliers). We estimate fundamental matrix hypotheses from minimal subsets of size p = 8 using the 8-point method [14]. The residual $g(x_i, F)$ is computed as the Sampson distance [15].

We test the methods on publicly available two-view motion segmentation datasets [30]. In particular we test on the 3- and 4-motion datasets provided, namely breadtoycar, carchipscube, toycubecar, breadcubechips, biscuitbookbox, cubebreadtoychips and breadcartoychips; see the dataset homepage for more details. Correspondences were established via SIFT matching and manual filtering was done to obtain ground truth segmentation. Examples are shown in Figs. 1(a) and 1(b).
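The cooling and β schedule described above (Algorithm 1 with T_next = 0.99T and β tied to the current temperature) can be sketched as a small outer loop; `step_chain` is a hypothetical stand-in for running the inner RJMCMC chain of Algorithm 2 to convergence and reporting the best objective value found:

```python
def anneal(step_chain, f0, T0=1.0, cool=0.99, T_min=1e-3):
    """Outer simulated-annealing loop, a sketch of Algorithm 1.

    step_chain(T, beta) is assumed to simulate the inner chain at
    temperature T with move-mixture probability beta, returning the best
    objective value f(k, theta_k) it visited (hypothetical interface).
    """
    T, best = T0, f0
    while T > T_min:
        # beta = T: mostly trans-dimensional (birth/death) moves early,
        # mostly local refinement as the temperature drops
        best = min(best, step_chain(T, beta=T))
        T *= cool                 # geometric cooling: T <- 0.99 T
    return best
```

Tying β to T is what lets a single run both explore model orders and polish the final fit, which the text notes has no natural analogue in disjoint two-stage pipelines.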
²http://vision.csd.uwo.ca/code/#Multi-label optimization

[Figure 1: (a) the breadtoycar dataset with 3 motions (37, 39 and 34 inliers, 56 outliers) and (b) the cubebreadtoychips dataset with 4 motions (71, 49, 38 and 81 inliers, 88 outliers); colours show ground truth labelling, and lines joining false matches are not drawn to minimise clutter. (c)–(g) show the evolution of the matrix of pairwise weights (5) computed from (b) as the number of hypotheses M is increased through 50, 100, 1000, 5000 and 10000. For presentation the data are arranged according to their structure membership, which gives rise to a 4-block pattern. Observe that the block pattern, hence the weights, converge as M increases. (h) and (i) respectively show the value of the objective function f(k, θ_k) and the segmentation error (see text) of the four methods on the dataset in (b), each with random and Multi-GS hypothesis generation. (j)–(m) show the evolution of the labelling result of ARJMC as M increases through 100, 200, 500 and 1000 (only one view is shown).]

Figs. 1(c)–(g) show the evolution of the pairwise weights (5) as M increases to 10,000 for the data in Fig. 1(b). The matrices exhibit a four-block pattern, indicating strong mutual preference among inliers from the same structure.
This phenomenon allows accurate selection of minimal subsets in Multi-GS [6]. More pertinently, as we predicted in Sec. 3.2, the weights converge as M increases, as evidenced by the stabilising block pattern. Note that only a small number of weights are actually computed in Multi-GS [6]; the full matrix of weights is calculated here for illustration only. We run ARJMC and record the following performance measures: the value of the objective function f(k, θk) in Eq. (1), and the segmentation error. The latter involves assigning each datum xi ∈ X to the nearest structure in θk if the residual is less than the threshold t; otherwise xi is labelled as an outlier. The overall labelling error is then obtained. The measures are recorded at time intervals corresponding to the instances when M = 100, 200, …, 1000 hypotheses have been generated so far in Algorithm 1. Median results over 20 repetitions on the data in Fig. 1(b) are shown in Figs. 1(h) and 1(i). Figs. 1(j)–1(m) depict the evolution of the segmentation result of ARJMC as M increases. For objective comparisons the competing two-stage methods were tested as follows: first, M = 100, 200, …, 1000 hypotheses are cumulatively generated (using both uniform random sampling [9] and Multi-GS [6]). A new instance of each method is invoked on each set of M hypotheses. We ensure that each method returns the true number of structures for all M; this represents an advantage over ARJMC, since the "online learning" nature of ARJMC means the number of structures is not discovered until closer to convergence. Results are also shown in Figs. 1(h) and 1(i). Firstly, it is clear that the performance of the two-stage methods on both measures is improved dramatically with the application of Multi-GS for hypothesis generation. From Fig. 1(h), ARJMC is the most efficient in minimising the function f(k, θk); it converges to a low value in significantly less time.
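The segmentation-error measure described above (assign each datum to the nearest structure if its residual is below the inlier threshold t, otherwise label it an outlier, then count mislabellings) can be sketched as follows. The interface is hypothetical, and for simplicity the sketch compares labels directly rather than over the best label permutation:

```python
# Sketch of the segmentation-error measure: each datum is assigned to
# its nearest structure if the residual is below the threshold t, and
# labelled an outlier (-1) otherwise; the error is the percentage of
# data whose label disagrees with ground truth (outliers also -1).
def segment_and_score(residuals, t, true_labels):
    labels = []
    for r in residuals:   # r: residuals of one datum to each structure
        best = min(range(len(r)), key=lambda c: r[c])
        labels.append(best if r[best] < t else -1)
    wrong = sum(1 for a, b in zip(labels, true_labels) if a != b)
    return labels, 100.0 * wrong / len(labels)

labels, err = segment_and_score(
    [[0.1, 2.0], [1.5, 0.2], [3.0, 4.0], [0.4, 0.9]],
    t=1.0, true_labels=[0, 1, -1, 1])
```

In the example, the last datum is nearest to structure 0 but its ground-truth label is 1, so one of four data is mislabelled.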
It should be noted, however, that the other methods are not directly minimising AIC or f(k, θk). The segmentation error (which no method here directly minimises) thus represents a more objective performance measure. From Fig. 1(i), it can be seen that the initial error of ARJMC is much higher than that of all the other methods, a direct consequence of not yet having estimated the true number of structures. The error is eventually minimised as ARJMC converges. Table 1, which summarises the results on the other datasets (all using Multi-GS), conveys a similar picture. Further results on multi-homography detection also yield similar outcomes (see supplementary material).

Dataset:             breadtoycar (3 structures)          carchipscube (3 structures)         toycubecar (3 structures)
# inliers, outliers: 37, 39 and 34 inliers, 56 outliers  19, 33 and 53 inliers, 60 outliers  45, 69 and 14 inliers, 72 outliers

M        FLOSS  ENERGY  QP-MF  ARJMC | FLOSS  ENERGY  QP-MF  ARJMC | FLOSS  ENERGY  QP-MF  ARJMC
100      25.22  31.74   24.78  68.70 | 21.82  29.70   23.64  52.73 | 31.75  26.25   29.00  81.50
200      14.13  26.74   18.91  61.96 | 15.76  36.97   30.30  58.18 | 23.00  27.25   19.25  75.75
300      10.43  33.48   18.70  54.13 | 12.73  24.24   26.67  49.09 | 22.75  25.25   18.00  65.00
400       9.57  27.83   18.26  48.48 | 10.30  32.73   28.48  24.24 | 22.00  26.25   22.50  52.75
500       9.57  27.39   26.30  10.87 | 10.30  30.91   27.27  13.33 | 22.50  22.50   23.00  45.75
600       8.70  25.87   20.43   8.48 |  9.09  28.48   23.03   9.70 | 21.75  26.50   20.75  37.75
700       8.91  30.43   21.30   7.17 |  8.48  22.42   27.88   9.70 | 17.50  26.50   23.00  23.50
800       7.83  21.09   22.17   6.52 | 10.30  26.67   25.45   9.70 | 21.50  26.50   20.00  18.50
900       7.39  25.22   26.74   6.52 |  8.48  36.36   26.06   9.70 | 18.75  20.75   15.75  19.75
1000      7.17  20.43   25.22   6.52 |  9.09  28.48   23.64   9.70 | 15.50  23.00   18.25  19.50
Time (s) 12.88   9.40   21.57   5.44 |  9.57   7.02   16.23   5.16 | 11.73   8.14   18.94   4.95

Dataset:             breadcubechip (3 structures)        breadcartoychip (4 structures)          biscuitbookbox (3 structures)
# inliers, outliers: 34, 57 and 58 inliers, 81 outliers  33, 23, 41 and 58 inliers, 82 outliers  67, 41 and 54 inliers, 97
outliers

M        FLOSS  ENERGY  QP-MF  ARJMC | FLOSS  ENERGY  QP-MF  ARJMC | FLOSS  ENERGY  QP-MF  ARJMC
100      23.49  21.08   24.10  81.93 | 36.92  35.86   32.07  54.01 | 17.57  25.87   18.15  49.03
200      16.27  13.25   15.06  78.92 | 28.90  27.00   20.04  61.60 | 11.00  17.95   17.76  31.85
300      12.65  10.84   18.07  70.48 | 19.41  21.30   17.09  61.18 |  7.92  17.95    9.27   6.95
400      13.86  11.45   14.46  48.80 | 17.51  20.88   15.19  56.54 |  8.49  14.86   13.51   6.37
500      12.05  13.25   13.25  37.95 | 13.92  18.56   13.50  21.94 |  7.92  18.73   10.04   4.44
600      12.05  12.05   12.05  11.45 | 11.81  19.83   13.92  18.99 |  5.79  17.18   11.39   5.21
700      10.84  11.45    9.04   9.64 | 10.76  15.18   12.66  18.14 |  5.79  18.92   14.67   4.83
800      10.84  12.05   11.45   9.64 | 10.55  18.56   12.24  10.97 |  5.79  16.60   13.51   5.21
900      10.84  10.24   10.24   7.83 | 10.34  14.55   11.39   9.70 |  5.79  18.53   12.36   5.21
1000     10.84  10.84   10.84   8.43 |  9.70  15.18   11.60   9.70 |  5.79  13.71   13.13   5.79
Time (s)  9.57   6.96   16.38   4.47 | 13.40   9.86   22.46   5.39 | 15.46  10.66   24.36   5.47

Table 1: Median segmentation error (%) at different numbers of hypotheses M. The time elapsed at M = 1000 is shown at the bottom. The lowest error and time achieved on each dataset are boldfaced.

5 Conclusions

By design, since our algorithm conducts hypothesis sampling, geometric fitting and model selection simultaneously, it minimises waste in the sampling process and converges faster than previous two-stage approaches. This is evident from the experimental results. Underpinning our novel Reversible Jump MCMC method is an efficient hypothesis generator whose proposal distribution is learned online. Drawing on new theory on Adaptive MCMC, we prove that our efficient hypothesis generator satisfies the properties crucial to ensuring convergence to the correct target distribution. Our work thus links the latest developments in MCMC optimisation and geometric model fitting.

Acknowledgements. The authors would like to thank Anders Eriksson for his insightful comments. This work was partly supported by the Australian Research Council grant DP0878801.

References

[1] H. Akaike.
A new look at the statistical model identification. IEEE Trans. on Automatic Control, 19(6):716–723, 1974.
[2] C. Andrieu, N. de Freitas, and A. Doucet. Robust full Bayesian learning for radial basis networks. Neural Computation, 13:2359–2407, 2001.
[3] C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50:5–43, 2003.
[4] C. Andrieu and J. Thoms. A tutorial on adaptive MCMC. Statistics and Computing, 18(4), 2008.
[5] S. P. Brooks, N. Friel, and R. King. Classical model selection via simulated annealing. J. R. Statist. Soc. B, 65(2):503–520, 2003.
[6] T.-J. Chin, J. Yu, and D. Suter. Accelerated hypothesis generation for multi-structure robust fitting. In European Conf. on Computer Vision, 2010.
[7] A. Delong, A. Osokin, H. Isack, and Y. Boykov. Fast approximate energy minimization with label costs. In Computer Vision and Pattern Recognition, 2010.
[8] L. Fan and T. Pylvänäinen. Adaptive sample consensus for efficient random optimisation. In Int. Symposium on Visual Computing, 2009.
[9] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM, 24:381–395, 1981.
[10] S. Gaffney and P. Smyth. Trajectory clustering with mixtures of regression models. In ACM SIG on Knowledge Discovery and Data Mining, 1999.
[11] P. Giordani and R. Kohn. Adaptive independent Metropolis-Hastings by fast estimation of mixtures of normals. Journal of Computational and Graphical Statistics, 19(2):243–259, 2010.
[12] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.
[13] H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7(2):223–242, 2001.
[14] R. Hartley. In defense of the eight-point algorithm. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(6):580–593, 1997.
[15] R. Hartley and A.
Zisserman. Multiple View Geometry. Cambridge University Press, 2004.
[16] P. J. Huber. Robust Statistics. John Wiley & Sons Inc., 2009.
[17] Y.-D. Jian and C.-S. Chen. Two-view motion segmentation by mixtures of Dirichlet process with model selection and outlier removal. In International Conference on Computer Vision, 2007.
[18] N. Lazic, I. Givoni, B. Frey, and P. Aarabi. FLoSS: Facility location for subspace segmentation. In IEEE Int. Conf. on Computer Vision, 2009.
[19] H. Li. Two-view motion segmentation from linear programming relaxation. In Computer Vision and Pattern Recognition, 2007.
[20] D. Nott and R. Kohn. Adaptive sampling for Bayesian variable selection. Biometrika, 92:747–763, 2005.
[21] N. Quadrianto, T. S. Caetano, J. Lim, and D. Schuurmans. Convex relaxation of mixture regression with efficient algorithms. In Advances in Neural Information Processing Systems, 2010.
[22] S. Richardson and P. J. Green. On Bayesian analysis of mixtures with an unknown number of components. J. R. Statist. Soc. B, 59(4):731–792, 1997.
[23] G. O. Roberts and J. S. Rosenthal. Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. Journal of Applied Probability, 44:458–475, 2007.
[24] G. O. Roberts and J. S. Rosenthal. Examples of adaptive MCMC. Journal of Computational and Graphical Statistics, 18(2):349–367, 2009.
[25] K. Schindler and D. Suter. Two-view multibody structure-and-motion with outliers through model selection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 28(6):983–995, 2006.
[26] N. Thakoor and J. Gao. Branch-and-bound hypothesis selection for two-view multiple structure and motion segmentation. In Computer Vision and Pattern Recognition, 2008.
[27] P. H. S. Torr. Motion segmentation and outlier detection. PhD thesis, Dept. of Engineering Science, University of Oxford, 1995.
[28] P. H. S. Torr and C. H. Davidson. IMPSAC: Synthesis of importance sampling and random sample consensus. IEEE Trans.
on Pattern Analysis and Machine Intelligence, 25(3):354–364, 2003.
[29] E. Vincent and R. Laganière. Detecting planar homographies in an image pair. In International Symposium on Image and Signal Processing and Analysis, 2001.
[30] H. S. Wong, T.-J. Chin, J. Yu, and D. Suter. Dynamic and hierarchical multi-structure geometric model fitting. In International Conference on Computer Vision, 2011.
[31] J. Yu, T.-J. Chin, and D. Suter. A global optimization approach to robust multi-model fitting. In Computer Vision and Pattern Recognition, 2011.
Active dendrites: adaptation to spike-based communication

Balázs B Ujfalussy 1,2 (ubi@rmki.kfki.hu)
Máté Lengyel 1 (m.lengyel@eng.cam.ac.uk)
1 Computational & Biological Learning Lab, Dept. of Engineering, University of Cambridge, UK
2 Computational Neuroscience Group, Dept. of Biophysics, MTA KFKI RMKI, Budapest, Hungary

Abstract

Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity.

1 Introduction

The operation of neural circuits fundamentally depends on the capacity of neurons to perform complex, nonlinear mappings from their inputs to their outputs. Since the vast majority of synaptic inputs impinge on the dendritic membrane, its morphology and its passive as well as active electrical properties play important roles in determining the functional capabilities of a neuron. Indeed, both theoretical and experimental studies suggest that active, nonlinear processing in dendritic trees can significantly enhance the repertoire of single neuron operations [1, 2].
However, previous functional approaches to dendritic processing were limited because they studied dendritic computations in a firing rate-based framework [3, 4], essentially requiring both the inputs and the output of a cell to have stationary firing rates for hundreds of milliseconds. Thus, they ignored the effects and consequences of temporal variations in neural activities at the time scale of inter-spike intervals characteristic of in vivo states [5]. Conversely, studies of spiking network dynamics [6, 7] have ignored the complex and highly nonlinear effects of the dendritic tree. Here we develop a computational theory that aims at explaining some of the morphological and electrophysiological properties of dendritic trees as adaptations towards spike-based communication. In line with the vast majority of theories about neural network computations, the starting point of our theory is that each neuron needs to compute some function of the membrane potential (or, equivalently, the instantaneous firing rate) of its presynaptic partners. However, as the postsynaptic neuron does not have direct access to the presynaptic membrane potentials, only to the spikes emitted by its presynaptic partners based on those potentials, computing the required function becomes a non-trivial inference problem. That is, neurons need to perform computations on their inputs in the face of significant uncertainty as to what those inputs exactly are, and so as to what their required output might be. In section 2 we formalize the problem of inferring some required output based on incomplete spiking-based information about inputs and derive an optimal online estimator for some simple but tractable cases. In section 3 we show that the optimal estimator exhibits highly nonlinear behavior closely matching aspects of active dendritic processing, even when the function of inputs to be computed is purely linear.
We also present predictions about how the statistics of presynaptic inputs should be matched by the clustering patterns of synaptic inputs onto active subunits of the dendritic tree. In section 4 we discuss our findings and ways to test our predictions experimentally.

2 Estimation from correlated spike trains

2.1 The need for nonlinear dendritic operations

Ideally, the (subthreshold) dynamics of the somatic membrane potential, v(t), should implement some nonlinear function, f(u(t)), of the presynaptic membrane potentials, u(t):1

τ dv(t)/dt = f(u(t)) − v(t)   (1)

However, the presynaptic membrane potentials cannot be observed directly, only the presynaptic spike trains s0:t, which are stochastic functions of the presynaptic membrane potential trajectories. Therefore, to minimise squared error, the postsynaptic membrane potential should represent the mean of the posterior over the possible output function values it should be computing, given the input spike trains:

τ dv(t)/dt ≈ ∫ f(u(t)) P(u(t)|s0:t) du(t) − v(t)   (2)

Biophysically, to a first approximation, the somatic membrane potential of the postsynaptic neuron can be described as some function(al), f̃, of the local dendritic membrane potentials, vd(t):

τ dv(t)/dt = f̃[vd(t)] − v(t)   (3)

This is interesting because Pfister et al. [11, 12] have recently suggested that short-term synaptic plasticity arranges for each local dendritic postsynaptic potential, vdi, to (approximately) represent the posterior mean of the corresponding presynaptic membrane potential:

vdi(t) ≈ ∫ ui(t) P(ui(t)|si,0:t) dui   (4)

Thus, it would be tempting to say that in order to achieve the computational goal of Eq. 2, the way the dendritic tree (together with the soma) should integrate these local potentials, as given by f̃, should be directly determined by the function that needs to be computed: f̃ = f. However, it is easy to see that in general this is going to be incorrect:

f( ∫ u(t) ∏i P(ui(t)|si,0:t) du(t) )
≠ ∫ f(u(t)) P(u(t)|s0:t) du(t)   (5)

where the l.h.s. is what the neuron implements (Eqs. 3-4) and the r.h.s. is what it should compute (Eq. 2). The equality does not hold in general when f is non-linear or P(u(t)|s0:t) does not factorise. In the following, we consider the case when the function f(u) is a purely linear combination of synaptic inputs, f(u) = Σi ci ui. Such linear transformations seem to suggest linear dendritic operations and, in combination with a single global 'somatic' nonlinearity, they are often assumed in neural network models and descriptive models of neuronal signal processing [10]. However, as we will show below, estimation from the spike trains of multiple correlated presynaptic neurons requires a non-linear integration of inputs even in this case.

1 Dynamics of this form are assumed by many neural network models, though the variables u and v are usually interpreted as instantaneous firing rates rather than membrane potentials [10]. However, just as in our case (Eq. 8), the two are often taken to be related through a simple non-linear function, which makes the two frameworks essentially isomorphic.

2.2 The mOU-NP model

We assume that the hidden dynamics of the presynaptic membrane potentials are described by a multivariate Ornstein–Uhlenbeck (mOU) process (discretised in time into δt → 0 time bins, thus formally yielding an AR(1) process):

ut = ut−δt + (1/τ)(u0 − ut−δt) δt + qt √δt,   qt ~iid N(0, Q)   (6)
   = α ut−δt + qt √δt + (δt/τ) u0   (7)

where all neurons share the same parameters: u0, the resting potential, and τ, the membrane time constant (with α = 1 − δt/τ). Importantly, Q is the covariance matrix parametrising the correlations between the subthreshold membrane potential fluctuations of the presynaptic neurons.
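A minimal simulation of the discretised mOU process of Eqs. 6-7; all parameter values below are illustrative assumptions. The empirical covariance of the trajectory should approach the stationary value Qτ/2:

```python
import numpy as np

# Simulate Eq. 6: u_t = u_{t-dt} + (u0 - u_{t-dt}) dt/tau + q_t sqrt(dt),
# with q_t ~ N(0, Q); Q sets the correlations between the neurons.
rng = np.random.default_rng(0)
dt, tau, u0 = 1e-3, 0.02, 0.0        # illustrative: 1 ms bins, 20 ms tau
Q = np.array([[1.0, 0.6],
              [0.6, 1.0]])           # positively correlated pair
L = np.linalg.cholesky(Q)

n_steps = 50000
u = np.full(2, u0)
traj = np.empty((n_steps, 2))
for t in range(n_steps):
    q = L @ rng.standard_normal(2)                   # q_t ~ N(0, Q)
    u = u + (u0 - u) * dt / tau + q * np.sqrt(dt)    # Eq. 6
    traj[t] = u

emp_cov = np.cov(traj.T)   # should approach the stationary value Q*tau/2
```

With these numbers Qτ/2 has 0.01 on the diagonal and 0.006 off-diagonal, so the simulated potentials fluctuate with ~0.1 standard deviation and a clearly positive pairwise covariance.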
Spiking is described by a nonlinear-Poisson (NP) process in which the instantaneous firing rate, r, is an exponential function of u with exponent β and "baseline rate" g:

r(u) = g e^(βu)   (8)

and the number of spikes emitted in a time bin, s, is Poisson with this rate:

P(s|u) = Poisson(s; δt r(u))   (9)

The spiking process itself is independent, i.e., the likelihood is factorised across cells:

P(s|u) = ∏i P(si|ui)   (10)

2.3 Assumed density filtering in the mOU-NP model

Our goal is to derive the time evolution of the posterior distribution of the membrane potentials, P(ut|s0:t), given a particular observed spiking pattern. Ultimately, we will need to compute some function of u under this distribution. For linear computations (see above), the final quantity of interest depends on Σi ci ui, which in the limit (of many presynaptic cells) is going to be Gaussian-distributed, and as such depends only on the first two moments of the posterior. This motivates us to perform assumed density filtering, by which we substitute the true posterior with a moment-matched multivariate Gaussian in each time step, P(ut|s0:t) ≈ N(ut; μt, Σt). After some algebra (see Appendix for details) we obtain the following equations for the time evolution of the mean and covariance of the posterior under the generative process defined by Eqs. 7-10:

μ̇ = (1/τ)(u0 − μ) + βΣ(s(t) − γ)   (11)
Σ̇ = (2/τ)(ΣOU − Σ) − β²ΣΓΣ   (12)

where si(t) is the spike train of presynaptic neuron i represented as a sum of Dirac delta functions, γ (Γ) is a vector (diagonal matrix) whose elements γi = Γii = g e^(βμi + β²Σii/2) are the estimated firing rates of the neurons, and ΣOU = Qτ/2 is the prior covariance matrix of the presynaptic membrane potentials in the absence of any observation.

2.4 Modelling correlated up and down states

The mOU-NP process is a convenient and analytically tractable way to model correlations between presynaptic neurons, but it obviously falls short of the dynamical complexity of cortical ensembles in many respects.
Following and expanding on [12], here we considered one extension that allowed us to model coordinated changes between more hyper- and depolarised states across the presynaptic neurons, such as those brought about by cortical up and down states. In this extension, the 'resting' potential of each presynaptic neuron, u0, could switch between two different values, uup and udown, and followed first-order Markovian dynamics. Up and down states in cortical neurons are not independent but occur synchronously [13]. To reproduce these correlations we introduced a global, binary state variable, x, that influenced the Markovian dynamics of the resting potential of the individual neurons (see Appendix and Fig. 2A). Unfortunately, an analytical solution to the optimal estimator was out of reach in this case, so we resorted to particle filtering [14] to compute the output of the optimal estimator.

Figure 1: Simulation of the optimal estimator in the case of two presynaptic spikes with different time delays (∆t). A: The posterior means (Aa), variances, Σii, and the covariance, Σ12 (Ab). The dynamics of the postsynaptic membrane potential, v (Ad), is described by Eq. 1, where f(u) = u1 + u2 (Ac). B: The same as A on an extended time scale. C: The nonlinear summation of two EPSPs, characterised by the ratio of the actual EPSP (cyan on Ad) and the linear sum of the two individual EPSPs (grey on Ad), shown for different delays and correlations between the presynaptic neurons. The summation is sublinear if the presynaptic neurons are positively correlated, whereas negative correlations imply supralinear summation.

3 Nonlinear dendrites as near-optimal estimators

3.1 Correlated Ornstein-Uhlenbeck process

First, we analysed the estimation problem in the case of mOU dynamics, for which we could derive the optimal estimator of the membrane potentials. The postsynaptic dynamics needed to follow the linear sum of the presynaptic membrane potentials.
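The filter of Eqs. 11-12 can be run numerically with a simple Euler scheme; the sketch below uses illustrative parameter values (all assumptions, not the paper's) and reads off the jump a single spike induces in the posterior means:

```python
import numpy as np

# Euler integration of the assumed-density filter (Eqs. 11-12).
# mu, Sigma: posterior mean and covariance of the presynaptic potentials;
# spike_counts: number of spikes per neuron in the current time bin.
def adf_step(mu, Sigma, spike_counts, dt, tau, u0, g, beta, Sigma_OU):
    gamma = g * np.exp(beta * mu + 0.5 * beta**2 * np.diag(Sigma))
    dmu = (u0 - mu) / tau - beta * Sigma @ gamma           # drift of Eq. 11
    dSig = (2.0 / tau) * (Sigma_OU - Sigma) \
           - beta**2 * Sigma @ np.diag(gamma) @ Sigma      # Eq. 12
    mu = mu + dmu * dt + beta * Sigma @ spike_counts       # spike-driven jump
    return mu, Sigma + dSig * dt

# Two correlated neurons at steady state; neuron 1 fires a single spike.
dt, tau, u0, g, beta = 1e-4, 0.02, 0.0, 5.0, 1.0
Sigma_OU = 0.5 * tau * np.array([[1.0, 0.6], [0.6, 1.0]])  # Q*tau/2
mu, Sigma = np.zeros(2), Sigma_OU.copy()
mu, Sigma = adf_step(mu, Sigma, np.array([1.0, 0.0]),
                     dt, tau, u0, g, beta, Sigma_OU)
# The spike raises the mean estimates of BOTH neurons (by beta*Sigma[:, 0]),
# the correlated neuron 2 by a smaller amount than neuron 1 itself.
```

The jump term βΣs(t) makes the cross-neuron effect explicit: with a positive off-diagonal in Σ, a spike from one cell lifts the estimate of its correlated partner as well.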
Figure 1 shows the optimal postsynaptic response (Eqs. 11-12) after observing a pair of spikes from two correlated presynaptic neurons with different time delays. When one of the cells (black) emits a spike, this causes an instantaneous increase not only in the membrane potential estimate of the neuron itself but also in those of all correlated neurons (red neuron in Fig. 1Aa and Ba). Consequently, the estimated firing rate, γ, of both cells increases. Albeit indirectly, a spike also influences the uncertainty about the presynaptic membrane potentials – quantified by the posterior covariance matrix. A spike itself does not change this covariance directly, but since it increases estimated firing rates, the absence of even more spikes in the subsequent period becomes more informative. This increased information rate following a spike decreases estimator uncertainty about the true membrane potential values for a short period (Fig. 1Ab and Bb). However, as the estimated firing rate decreases back to its resting value nearly exponentially after the spike, the estimated uncertainty also returns to its steady state. Importantly, the instantaneous increase of the posterior means in response to a spike is proportional to the estimated uncertainty about the membrane potentials and to the estimator's current belief about the correlations between the neurons. As each spike influences not only the mean estimate of the membrane potentials of other correlated neurons but also the uncertainty of these estimates, the effect of a spike from one cell on the posterior mean depends on the spiking history of all other correlated neurons (Fig. 1Ac-Ad). In the example shown in Fig. 1, the postsynaptic dynamics is required to compute a purely linear sum of two presynaptic membrane potentials, f(u) = u1 + u2.
However, depending on the prior correlation between the two presynaptic neurons and the time delay between the two spikes, the amplitude of the postsynaptic membrane potential change evoked by the pair of spikes can be either larger or smaller than the linear sum of the individual excitatory postsynaptic potentials (EPSPs) (Fig. 1Ad, C). EPSPs from independent neurons are additive, but if the presynaptic neurons are positively correlated then their spikes convey redundant information and they are integrated sublinearly. Conversely, simultaneous spikes from negatively correlated presynaptic neurons are largely unexpected and induce supralinear summation. The deviation from linear summation is proportional to the magnitude of the correlation between the presynaptic neurons (Fig. 1C). We compared the nonlinear integration of inputs in the optimal estimator with experiments measuring synaptic integration in the dendritic tree of neurons. For a passive membrane, cable theory [15] implies that inputs are integrated linearly only if they are on electrotonically separated dendritic branches, whereas reduction of the driving force entails a sublinear interaction between co-localised inputs. Moreover, it has been found that active currents, the IA potassium current in particular, also contribute to sublinear integration within the dendritic tree [16, 17]. Our model predicts that inputs that are integrated sublinearly are positively correlated (Fig. 1C). In sum, we can already see that correlated inputs imply nonlinear integration in the postsynaptic neuron, and that the form of the nonlinearity needs to be matched to the degree and sign of the correlations between inputs. However, the finding that supralinear interactions are only expected from anticorrelated inputs defeats biological intuition. Another shortcoming of the mOU model is related to the second-order effects of spikes on the posterior covariance.
As the covariance matrix does not change instantaneously after observing a presynaptic spike (Fig. 1B), two spikes arriving simultaneously are summed linearly (not shown). At the other extreme, two spikes separated by long delays again do not influence each other. Therefore the nonlinearity of the integration of two spikes has a non-monotonic shape, which again is unlike the monotonic dependence of the degree of nonlinearity on interspike intervals found in experiments [18, 19]. In order to overcome these limitations, we extended the model to incorporate correlated changes in the activity levels of presynaptic neurons [13].

3.2 Correlated up and down states

While the statistics of the presynaptic membrane potentials exhibit more complex temporal dependencies in the extended model (Fig. 2A), importantly, the task is still assumed to be the same simple linear computation as before: f(u) = u1 + u2. However, the more complex P(u) distribution means that we need to sum over the possible values of the hidden variables: P(u) = Σ_u0 P(u|u0) P(u0). The observation of a spike changes both the conditional distributions, P(u|u0), and the probability of being in the up state, P(u0 = uup), by causing an upward shift in both. A second spike causes a further increase in the membrane potential estimate and, more importantly, in the probability of being in the up state for both neurons. Since the probability of leaving the up state is low, the membrane potential estimate decays back to its steady state more slowly if the probability of being in the up state is high (Fig. 2B). This causes a supralinear increase in the membrane potential of the postsynaptic neuron which again depends on the interspike interval, but this time supralinearity is predicted for positively correlated presynaptic neurons (Fig. 2C,E).
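The switching resting-potential dynamics can be illustrated with a toy simulation; the transition probabilities, the catch-up rule, and all numeric values below are illustrative assumptions (the paper's exact parametrisation is given in its Appendix):

```python
import random

# Toy version of the switching model: a global binary state x flips
# occasionally, and each neuron's resting level u0 is pulled towards the
# level dictated by x (u_down or u_up), producing synchronised up/down
# states across neurons. All numbers here are illustrative assumptions.
def simulate_up_down(n_neurons, n_steps, p_flip=0.01, p_follow=0.2,
                     u_down=-1.0, u_up=1.0, seed=1):
    rng = random.Random(seed)
    x = 0                               # global up(1)/down(0) state
    state = [0] * n_neurons             # per-neuron states
    traj = []
    for _ in range(n_steps):
        if rng.random() < p_flip:
            x = 1 - x                   # global switch
        for i in range(n_neurons):
            if state[i] != x and rng.random() < p_follow:
                state[i] = x            # neuron catches up with x
        traj.append([u_up if s else u_down for s in state])
    return traj

traj = simulate_up_down(2, 2000)
agree = sum(row[0] == row[1] for row in traj) / len(traj)
# neurons agree most of the time because both track the same global state
```

Because every neuron tracks the same hidden x, the resting potentials switch in near-synchrony, which is the correlation structure the particle filter has to exploit.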
Note that while in the mOU model supralinear integration arises due to dynamical changes in uncertainty (of the membrane potential estimates), in the extended model it is associated with a change in a hypothesis (about the hidden up-down states). This is qualitatively similar to what was found in pyramidal neurons in the neocortex [19] and in the hippocampus [18, 20], which are able to switch from (sub)linear to supralinear integration of synaptic inputs through the generation of dendritic spikes [21]. Specifically, in neocortical pyramidal neurons Polsky et al. [19] found that nearly synchronous inputs arriving at the same dendritic branch evoke substantially larger postsynaptic responses than expected from the linear sum of the individual responses (Fig. 2D-E). While there is a good qualitative match between model and experiments, the time scales of integration are off by a factor of 2. Nevertheless, given that we did not perform exhaustive parameter fitting in our model, but simply set parameters to values that produced realistic presynaptic membrane potential trajectories (cf. our Fig. 2A with [13]), we regard the match as acceptable and are confident that with further fine tuning of the parameters the match would also improve quantitatively.

Figure 2: A: Example voltage traces and spikes from the modelled presynaptic neurons (black and red) with correlated up and down states. The green line indicates the value of the global up-down state variable. B: Inference in the model: the posterior probability of being in the up state (left) and the posterior mean of Σi ui after observing two spikes (grey) from different neurons with ∆t = 8 ms latency. C: Supralinear summation in the switching mOU-NP model.
D: Supralinear summation by dendritic spikes in a cortical pyramidal neuron. E: Peak amplitude of the response (red) and the linear sum (black squares), shown for different delays in experiments (left) and in the model (right). (D and the left panel in E are reproduced from [19].)

3.3 Nonlinear dendritic trees are necessary for purely linear computations

In the previous sections we demonstrated that optimal inference based on correlated spike trains requires nonlinear interactions within the postsynaptic neuron, and we showed that the dynamics of the optimal estimator is qualitatively similar to the dynamics of the somatic membrane potential of a postsynaptic neuron with nonlinear dendritic processing. In this section we build a simplified model of dendritic signal processing and compare its performance directly to several alternative models (see below) on a purely linear task, for which the neuron needs to compute the sum of the presynaptic membrane potentials: f(u) = Σ(i=1..10) ui. We model the dendritic estimator as a two-layer feed-forward network of simple units (Fig. 3A) that has been proposed to closely mimic the repertoire of input-output transformations achievable by active dendritic trees [22]. In this model, synaptic inputs impinge on units in the first layer, corresponding to dendritic branches, where the nonlinear integration of inputs arriving at a dendritic branch is modelled by a sigmoidal input-output function, and the outputs of the dendritic branch units are in turn summed linearly in the single (somatic) unit of the second layer. We trained the model to estimate f by changing the connection weights of the two layers, corresponding to synaptic weights (wji) and branch coupling strengths (c̃j; see Appendix, Fig. 3A). We compared the performance of the dendritic estimator to four alternative models (Figure 3B):
1. The linear estimator, which is similar to the dendritic estimator except that the dendrites are linear.
2.
The independent estimator, in which the individual synapses are independently optimal estimators of the corresponding presynaptic membrane potentials (Eq. 4) [11, 12], and the cell combines these estimates linearly. Note that the only difference between the independent estimator and the optimal estimator is the assumption, implicit in the former, that the presynaptic cells are independent.
3. The scaled independent estimator, which still combines the synaptic potentials linearly, but the weights of each synapse are rescaled to partially correct for the wrong assumption of independence.
4. Finally, the optimal estimator, which is represented by the differential equations 11-12.

The performance of the different estimators was quantified by the estimation error normalised by the variance of the signal, ⟨(Σi ui − ṽestimator)²⟩ / var[Σi ui]. Figure 3C shows the estimation error of the five different models in the case of 10 uniformly correlated presynaptic neurons. If the presynaptic neurons were independent, all three estimators that used dynamical synapses (ṽind, ṽsind and ṽopt) were optimal, whereas the linear estimator had a substantially larger error. Interestingly, the performance of the dendritic estimator (yellow) was nearly optimal even though the individual synapses were not optimal estimators of the corresponding presynaptic membrane potentials. In fact, adding depressing synapses to the dendritic model degraded its performance, because the sublinear effect introduced by the saturation of the sigmoidal dendritic nonlinearity interfered with that implied by synaptic depression.

Figure 3: Performance of 5 different estimators compared on the task of estimating f(u) = Σ(i=1..N) ui. A: Model of the dendritic estimator. B: The different estimators (see text for more details). C: Estimation error, normalised by the variance of the signal. The number of presynaptic neurons was N = 10. Error bars show standard deviations.
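A minimal sketch of the two-layer dendritic estimator and the normalised-error measure described above; the random weights stand in for the trained values and are purely illustrative:

```python
import numpy as np

# Two-layer dendritic estimator: layer 1 = sigmoidal dendritic-branch
# subunits with synaptic weights w; layer 2 = linear somatic summation
# with branch coupling strengths c_tilde. The weights here are random
# placeholders; in the paper they are trained to minimise estimation error.
def dendritic_estimator(x, w, c_tilde):
    branch_out = 1.0 / (1.0 + np.exp(-(w @ x)))   # nonlinear branch subunits
    return float(c_tilde @ branch_out)            # linear somatic combination

def normalised_error(signal, estimate):
    # estimation error normalised by the signal variance, as in the text
    signal, estimate = np.asarray(signal), np.asarray(estimate)
    return float(np.mean((signal - estimate) ** 2) / np.var(signal))

rng = np.random.default_rng(0)
w = rng.normal(size=(2, 10))        # 2 branches, 10 synapses each
c_tilde = rng.normal(size=2)
v = dendritic_estimator(rng.normal(size=10), w, c_tilde)
```

The saturating sigmoid is what gives the branch subunits their sublinear regime; with suitable signs of w and c_tilde the same architecture can also express supralinear input combinations.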
When the correlation between the presynaptic neurons increased, the performance of the estimators assuming independence (black and orange) became severely suboptimal, whereas the dendritic estimator (yellow) remained close to optimal. Finally, in order to investigate the synaptic mechanisms underlying the remarkably high performance of the dendritic estimator, we trained a dendritic estimator on a task in which the presynaptic neurons formed two groups. Neurons from different groups were independent or negatively correlated with each other, cor(u_i, u_k) ∈ {−0.6, −0.3, 0}, while there were positive correlations between neurons from the same group, cor(u_i, u_j) ∈ {0.3, 0.6, 0.9} (Fig. 4A). The postsynaptic neuron had two dendritic branches, each of which initially received input from every presynaptic neuron. After tuning synaptic weights and branch coupling strengths to minimize the estimation error, and pruning synapses with weights below threshold, the model achieved near-optimal performance as before (Fig. 4C). More importantly, we found that the structure of the presynaptic correlations was reflected in the pattern of synaptic connections on the dendritic branches: most neurons developed stable synaptic weights on only one of the two dendritic branches, and synapses originating from neurons within the same group tended to cluster on the same branch (Fig. 4B).

4 Discussion

In the present paper we introduced a normative framework for single neuron computation that sheds new light on nonlinear dendritic information processing. Following [12], we observe that spike-based communication causes information loss in the nervous system, and neurons must infer the variables relevant for the computation [23–25]. As a consequence of this spiking bottleneck, signal processing in single neurons can be conceptually divided into two parts: the inference of the relevant variables and the computation itself.
When the presynaptic neurons are independent, synapses with short-term plasticity can optimally solve the inference problem [12], and nonlinear processing in the dendrites serves only the computation. However, neurons in a population often tend to be correlated [5, 13], and so the postsynaptic neuron should combine spike trains from such correlated neurons in order to find the optimal estimate of its output. We demonstrated that the solution of this inference problem requires nonlinear interactions between synaptic inputs in the postsynaptic cell even if the computation itself is purely linear. Of course, actual neurons are usually faced with both problems: they need to compute nonlinear functions of correlated inputs, and thus their nonlinearities serve both estimation and computation.

Figure 4: Synaptic connectivity reflects the correlation structure of the input. A: The presynaptic covariance matrix is block-diagonal, with two groups (neurons 1–4 and 5–8). Initially, each presynaptic neuron innervates both dendritic branches, and the weights, w, of the static synapses are then tuned to minimize the estimation error. B: Synaptic weights after training and pruning of the weakest synapses. Columns correspond to solutions of the error-minimization task with different presynaptic correlations and/or initial conditions, and rows are different synapses. The detailed connectivity patterns differ across solutions, but neurons from the same group usually all innervate the same dendritic branch. Below: fraction of neurons in each solution innervating 0, 1 or 2 branches. The height of the yellow (blue, green) bar indicates the proportion of presynaptic neurons innervating two (one, zero, respectively) branches of the postsynaptic neuron. C: After training, the nonlinear dendritic estimator performs close to optimally and much better than the linear neuron.
In such cases our approach allows dissecting the respective contributions of active dendritic processing to estimation and computation. We demonstrated that the optimal estimator of the presynaptic membrane potentials can be closely approximated by a nonlinear dendritic tree in which the connectivity from the presynaptic cells to the dendritic branches and the nonlinearities in the dendrites are tuned according to the dependency structure of the input. Our theory predicts that independent neurons will innervate distant dendritic domains, whereas neurons with correlated membrane potentials will impinge on nearby dendritic locations, preferentially on the same dendritic branches, where synaptic integration is nonlinear [19, 26]. More specifically, the theory predicts sublinear integration between positively correlated neurons and supralinear integration, through dendritic spiking, between neurons with correlated changes in their activity levels. To test this prediction directly, the membrane potentials of several neurons need to be recorded under naturalistic in vivo conditions [5, 13], and the subcellular topography of their connectivity with a common postsynaptic target then needs to be determined. Similar approaches have been used recently to characterize the connectivity between neurons with different receptive field properties in vivo [27, 28]. Our model suggests that the postsynaptic neuron should store information about the dependency structure of its presynaptic partners within its dendritic membrane. Online learning of this information based on the observed spiking patterns requires new, presumably non-associative forms of plasticity, such as branch strength potentiation [29, 30] or activity-dependent structural plasticity [31].

Acknowledgments

We thank J-P Pfister for valuable insights and comments on earlier versions of the manuscript, and P Dayan, B Gutkin, and Sz Káli for useful discussions.
This work has been supported by the Hungarian Scientific Research Fund (OTKA, grant number: 84471, BU) and the Wellcome Trust (ML).

References
1. Koch, C. Biophysics of computation (Oxford University Press, 1999).
2. Stuart, G., Spruston, N. & Hausser, M. Dendrites (Oxford University Press, 2007).
3. Poirazi, P. & Mel, B.W. Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron 29, 779–96 (2001).
4. Poirazi, P., Brannon, T. & Mel, B.W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 37, 977–87 (2003).
5. Crochet, S., Poulet, J.F., Kremer, Y. & Petersen, C.C. Synaptic mechanisms underlying sparse coding of active touch. Neuron 69, 1160–75 (2011).
6. Maass, W. & Bishop, C. Pulsed Neural Networks (MIT Press, 1998).
7. Gerstner, W. & Kistler, W. Spiking Neuron Models (Cambridge University Press, 2002).
8. Rieke, F., Warland, D., de Ruyter van Steveninck, R. & Bialek, W. Spikes (MIT Press, 1996).
9. Deneve, S. Bayesian spiking neurons I: inference. Neural Comput. 20, 91–117 (2008).
10. Dayan, P. & Abbott, L.F. Theoretical neuroscience (The MIT Press, 2001).
11. Pfister, J., Dayan, P. & Lengyel, M. Know thy neighbour: a normative theory of synaptic depression. Adv. Neural Inf. Proc. Sys. 22, 1464–1472 (2009).
12. Pfister, J., Dayan, P. & Lengyel, M. Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nat. Neurosci. 13, 1271–1275 (2010).
13. Poulet, J.F. & Petersen, C.C. Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature 454, 881–5 (2008).
14. Doucet, A., De Freitas, N. & Gordon, N. Sequential Monte Carlo Methods in Practice (Springer, New York, 2001).
15. Rall, W. Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol. 1, 491–527 (1959).
16. Hoffman, D.A., Magee, J.C., Colbert, C.M. & Johnston, D. K+ channel regulation of signal propagation in dendrites of hippocampal pyramidal neurons. Nature 387, 869–75 (1997).
17. Cash, S. & Yuste, R. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron 22, 383–94 (1999).
18. Gasparini, S., Migliore, M. & Magee, J.C. On the initiation and propagation of dendritic spikes in CA1 pyramidal neurons. J. Neurosci. 24, 11046–56 (2004).
19. Polsky, A., Mel, B.W. & Schiller, J. Computational subunits in thin dendrites of pyramidal cells. Nat. Neurosci. 7, 621–7 (2004).
20. Margulis, M. & Tang, C.M. Temporal integration can readily switch between sublinear and supralinear summation. J. Neurophysiol. 79, 2809–13 (1998).
21. Hausser, M., Spruston, N. & Stuart, G.J. Diversity and dynamics of dendritic signaling. Science 290, 739–44 (2000).
22. Poirazi, P., Brannon, T. & Mel, B.W. Pyramidal neuron as two-layer neural network. Neuron 37, 989–99 (2003).
23. Huys, Q.J., Zemel, R.S., Natarajan, R. & Dayan, P. Fast population coding. Neural Comput. 19, 404–41 (2007).
24. Natarajan, R., Huys, Q.J.M., Dayan, P. & Zemel, R.S. Encoding and decoding spikes for dynamic stimuli. Neural Computation 20, 2325–2360 (2008).
25. Gerwinn, S., Macke, J. & Bethge, M. Bayesian population decoding with spiking neurons. Frontiers in Computational Neuroscience 3 (2009).
26. Losonczy, A. & Magee, J.C. Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron 50, 291–307 (2006).
27. Bock, D.D. et al. Network anatomy and in vivo physiology of visual cortical neurons. Nature 471, 177–82 (2011).
28. Ko, H. et al. Functional specificity of local synaptic connections in neocortical networks. Nature (2011).
29. Losonczy, A., Makara, J.K. & Magee, J.C. Compartmentalized dendritic plasticity and input feature storage in neurons. Nature 452, 436–41 (2008).
30. Makara, J.K., Losonczy, A., Wen, Q. & Magee, J.C. Experience-dependent compartmentalized dendritic plasticity in rat hippocampal CA1 pyramidal neurons. Nat. Neurosci. 12, 1485–7 (2009).
31. Butz, M., Worgotter, F. & van Ooyen, A. Activity-dependent structural plasticity. Brain Res. Rev. 60, 287–305 (2009).
SpaRCS: Recovering Low-Rank and Sparse Matrices from Compressive Measurements

Andrew E. Waters, Aswin C. Sankaranarayanan, Richard G. Baraniuk
Rice University
{andrew.e.waters, saswin, richb}@rice.edu

Abstract

We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) = A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. Empirically, SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.

1 Introduction

The explosion of digital sensing technology has unleashed a veritable data deluge that has pushed current signal processing algorithms to their limits. Not only are traditional sensing and processing algorithms increasingly overwhelmed by the sheer volume of sensor data, but storage and transmission of the data itself are also increasingly prohibitive without first employing costly compression techniques. This reality has driven much of the recent research on compressive data acquisition, in which data are acquired directly in a compressed format [1]. Recovery of the data typically requires solving an underdetermined linear system, which becomes feasible when the underlying data possess special structure. Within this general paradigm, three important problem classes have received significant recent attention: compressive sensing, affine rank minimization, and robust principal component analysis (PCA).
Compressive sensing (CS): CS is concerned with the recovery of a vector x that is sparse in some transform domain [1]. Data measurements take the form y = A(x), where A is an underdetermined linear operator. To recover x, one would ideally solve

min ‖x‖₀ subject to y = A(x), (1)

where ‖x‖₀ is the number of non-zero components of x. Since this problem formulation is non-convex, CS recovery is typically accomplished either via convex relaxation or via greedy approaches.

Affine rank minimization: The CS concept extends naturally to low-rank matrices. In the affine rank minimization problem [14, 23], we observe the linear measurements y = A(L), where L is a low-rank matrix. One important sub-problem is matrix completion [3, 5, 22], where A takes the form of a sampling operator. To recover L, one would ideally solve

min rank(L) subject to y = A(L). (2)

As with CS, this problem is non-convex, and so several algorithms based on convex relaxation and greedy methods have been developed for finding solutions.

Robust PCA: In the robust PCA problem [2, 8], we wish to decompose a matrix M into a low-rank matrix L and a sparse matrix S such that M = L + S. This problem is known to have a stable solution provided L and S are sufficiently incoherent [2]. To date, this problem has been studied only in the non-compressive setting, i.e., when M is fully available. A variety of convex relaxation methods have been proposed for solving this case.

The work of this paper stands at the intersection of these three problems. Specifically, we aim to recover the entries of a matrix M in terms of a low-rank matrix L and a sparse matrix S from a small set of compressive measurements y = A(L + S). This problem is relevant in several application settings. A first application is the recovery of a video sequence obtained from a static camera observing a dynamic scene under changing illumination. Here, each column of M corresponds to a vectorized image frame of the video.
The changing illumination has low-rank properties, while the foreground innovations exhibit sparse structure [2]. In such a scenario, neither sparse nor low-rank models alone are sufficient for capturing the underlying information in the signal. Models that combine low-rank and sparse components, however, are well suited for capturing such phenomena. A second application is hyperspectral imaging, where each column of M is the vectorized image of a particular spectral band; a low-rank plus sparse model arises naturally due to material properties [7]. A third application is robust matrix completion [11], which can be cast as a compressive low-rank and sparse recovery problem. The natural optimization problem that unites the three problem classes above is

(P1) min ‖y − A(L + S)‖₂ subject to rank(L) ≤ r, ‖vec(S)‖₀ ≤ K. (3)

The main contribution of this paper is a novel greedy algorithm for solving (P1), which we dub SpaRCS, for SPArse and low Rank decomposition via Compressive Sensing. To the best of our knowledge, we are the first to propose a computationally efficient algorithm for solving a problem like (P1). SpaRCS combines the best aspects of CoSaMP [20] for sparse vector recovery and ADMiRA [17] for low-rank matrix recovery.

2 Background

Here we introduce the relevant background on signal recovery from CS measurements, where our definition of signal is broadened to include both vectors and matrices. We further provide background on incoherence between low-rank and sparse matrices.

Restricted isometry and rank-restricted isometry properties: Recovery of a K-sparse vector from CS measurements is possible when the measurement operator A obeys the so-called restricted isometry property (RIP) [4] with constant δ_K:

(1 − δ_K)‖x‖₂² ≤ ‖A(x)‖₂² ≤ (1 + δ_K)‖x‖₂², ∀ ‖x‖₀ ≤ K. (4)

This property implies that the information in x is nearly preserved after being measured by A.
Analogously to CS, it has been shown that a low-rank matrix can be recovered from a set of CS measurements when the measurement operator A obeys the rank-restricted isometry property (RRIP) [23] with constant δ*_r:

(1 − δ*_r)‖L‖_F² ≤ ‖A(L)‖_F² ≤ (1 + δ*_r)‖L‖_F², ∀ rank(L) ≤ r. (5)

Recovery algorithms: Recovery of sparse vectors and low-rank matrices can be accomplished when the measurement operator A satisfies the appropriate RIP or RRIP condition. Recovery algorithms typically fall into one of two broad classes: convex optimization and greedy iteration. Convex optimization techniques recast (1) or (2) in a form that can be solved efficiently using convex programming [2, 27]. In the case of CS, the ℓ₀ norm is relaxed to the ℓ₁ norm; for low-rank matrices, the rank operator is relaxed to the nuclear norm. In contrast, greedy algorithms [17, 20] operate iteratively on the signal measurements, constructing a basis for the signal and attempting signal recovery restricted to that basis. Compared to convex approaches, these algorithms often have superior speed and scale better to large problems. We highlight the CoSaMP algorithm [20] for sparse vector recovery and the ADMiRA algorithm [17] for low-rank matrix recovery in this paper. Both algorithms have strong convergence guarantees when the measurement operator A satisfies the appropriate RIP or RRIP condition, most notably exponential convergence to the true signal.

Matrix incoherence: For matrix decomposition problems such as robust PCA or the problem defined in (3) to have unique solutions, there must exist a degree of incoherence between the low-rank matrix L and the sparse matrix S. It is known that the decomposition of a matrix into its low-rank and sparse components makes sense only when the low-rank matrix is not sparse and, similarly, when the sparse matrix is not low-rank. A simple deterministic condition can be found in the work of Chandrasekaran et al. [9].
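The need for incoherence can be seen in a toy example of our own: a matrix with a single nonzero entry is simultaneously 1-sparse and rank 1, so without an incoherence assumption the split M = L + S is not identifiable, and any measurement y = A(L + S) is consistent with multiple decompositions.

```python
import numpy as np

# A single-spike matrix is both 1-sparse and rank-1, so assigning it
# entirely to S or entirely to L gives two equally valid decompositions.
N = 8
M = np.zeros((N, N))
M[0, 0] = 5.0

L1, S1 = np.zeros((N, N)), M.copy()   # all of M in the sparse component
L2, S2 = M.copy(), np.zeros((N, N))   # all of M in the low-rank component
assert np.linalg.matrix_rank(L2) == 1 and np.count_nonzero(S1) == 1

# Any linear measurement operator sees the two decompositions identically.
Phi = np.random.default_rng(0).normal(size=(20, N * N))
y1 = Phi @ (L1 + S1).ravel()
y2 = Phi @ (L2 + S2).ravel()
```

The uniform-boundedness and random-support models introduced next rule out exactly this kind of degenerate case.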
For our purposes, we assume the following model for non-sparse low-rank matrices.

Definition 2.1 (Uniformly bounded matrix [5]): An N × N matrix L of rank r is uniformly bounded if its singular vectors {u_j, v_j, 1 ≤ j ≤ r} obey ‖u_j‖_∞, ‖v_j‖_∞ ≤ √(μ_B/N) with μ_B = O(1), where ‖x‖_∞ denotes the largest entry of x in magnitude.

When μ_B is small (note that μ_B ≥ 1), this model for the low-rank matrix L ensures that its singular vectors are not sparse. This can be seen in the case of a singular vector u by noting that 1 = ‖u‖₂² = Σ_{k=1}^{N} u_k² ≤ ‖u‖₀ ‖u‖_∞². Rearranging terms enables us to write ‖u‖₀ ≥ 1/‖u‖_∞² ≥ N/μ_B. Thus, μ_B controls the sparsity of the matrix L by bounding the sparsity of its singular vectors. A sufficient model for a sparse matrix that is not low-rank is to assume that the support set Ω is uniformly distributed. As shown in the work of Candès et al. [2], this model is equivalent to defining the sparse support set Ω = {(i, j) : δ_{ij} = 1} with each δ_{ij} an i.i.d. Bernoulli variable with sufficiently small parameter ρ_S.

3 SpaRCS: CS recovery of low-rank and sparse matrices

We now present the SpaRCS algorithm for solving (P1) and discuss its empirical properties. Assume that we are interested in a matrix M ∈ R^(N₁×N₂) such that M = L + S, with rank(L) ≤ r, L uniformly bounded with constant μ_B, and ‖S‖₀ ≤ K with uniformly distributed support. Further assume that a known linear operator A : R^(N₁×N₂) → R^p provides us with p compressive measurements y of M. Let A* denote the adjoint of the operator A and, given an index set T ⊂ {1, ..., N₁N₂}, let A|_T denote the restriction of the operator to T. Given y = A(M) + e, where e denotes measurement noise, our goal is to estimate a low-rank matrix L̂ and a sparse matrix Ŝ such that y ≈ A(L̂ + Ŝ).

3.1 Algorithm

SpaRCS iteratively estimates L and S; the estimation of L is closely related to ADMiRA [17], while the estimation of S is closely related to CoSaMP [20].
At each iteration, SpaRCS computes a signal proxy and then proceeds through four steps to update its estimates of L and S. These steps are laid out in Algorithm 1. We use the notation supp(X; K) to denote the largest K-term support set of the matrix X; this forms a natural basis for sparse signal approximation. We further use the notation svd(X; r) to denote computation of the rank-r singular value decomposition (SVD) of X and the arrangement of its singular vectors into a set of up to r rank-1 matrices. This set of rank-1 matrices serves as a natural basis for approximating uniformly bounded low-rank matrices.

3.2 Performance characterization

Empirically, SpaRCS produces a series of estimates L̂_k and Ŝ_k that converge exponentially towards the true values L and S. This performance is inherited largely from the behavior of the CoSaMP and ADMiRA algorithms, with one noteworthy modification. The key difference is that, for SpaRCS, the sparse and low-rank estimation problems are coupled. While CoSaMP and ADMiRA operate solely in the presence of the measurement noise, SpaRCS must estimate L in the presence of the residual error of S, and vice versa. Proving convergence for the algorithm in the presence of the additional residual terms is non-trivial; simply lumping these additional residual errors together with the measurement noise e is insufficient for the analysis.
As a concrete example, consider the support identification step Ψ̂_S ← supp(P; 2K), with

P = A*(w_{k−1}) = A*(A(S − Ŝ_{k−1}) + A(L − L̂_{k−1}) + e),

that estimates the support set of S. CoSaMP relies on high correlation between supp(P; 2K) and supp(S − Ŝ_{k−1}; 2K); to achieve the same in SpaRCS, (L − L̂_{k−1}) must be well behaved. We are currently preparing a full theoretical characterization of the SpaRCS algorithm along with the conditions that guarantee this exponential convergence property. We reserve the presentation of the convergence proof for an extended version of this work.

Algorithm 1: (L̂, Ŝ) = SpaRCS(y, A, A*, K, r, ε)
Initialization: k ← 1, L̂₀ ← 0, Ŝ₀ ← 0, Ψ_L ← ∅, Ψ_S ← ∅, w₀ ← y
while ‖w_{k−1}‖₂ ≥ ε do
    Compute signal proxy: P ← A*(w_{k−1})
    Support identification: Ψ̂_L ← svd(P; 2r); Ψ̂_S ← supp(P; 2K)
    Support merger: Ψ̃_L ← Ψ̂_L ∪ Ψ_L; Ψ̃_S ← Ψ̂_S ∪ Ψ_S
    Least squares estimation: B_L ← Ψ̃_L†(y − A(Ŝ_{k−1})); B_S ← Ψ̃_S†(y − A(L̂_{k−1}))
    Support pruning: (L̂_k, Ψ_L) ← svd(B_L; r); (Ŝ_k, Ψ_S) ← supp(B_S; K)
    Update residue: w_k ← y − A(L̂_k + Ŝ_k)
    k ← k + 1
end
L̂ = L̂_{k−1}; Ŝ = Ŝ_{k−1}

Phase transition: The empirical performance of SpaRCS can be charted using phase transition plots, which predict sufficient and necessary conditions on its success or failure. Figure 1 shows phase transition results on a problem of size N₁ = N₂ = 512 for various values of p, r, and K. As expected, SpaRCS degrades gracefully as we decrease p or increase r and K.

Figure 1: Phase transitions for a recovery problem of size N₁ = N₂ = N = 512, for r = 5, 10, 15, 20, 25. Shown are aggregate results over 20 Monte Carlo runs at each specification of r, K, and p. Black indicates recovery failure, while white indicates recovery success.

Computational cost: SpaRCS is highly computationally efficient and scales well as N₁ and N₂ grow large. The largest computational cost is that of computing the two truncated SVDs per iteration. The SVDs can be performed efficiently via the Lanczos algorithm or a similar method.
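The iteration of Algorithm 1 can be sketched compactly in NumPy. This is a simplified sketch, not the authors' implementation: we assume a dense Gaussian measurement matrix acting on vec(M) (the experiments use noiselets), solve the restricted least-squares steps with direct `lstsq` calls rather than conjugate gradients, and all helper names are ours.

```python
import numpy as np

def sparcs(y, Phi, shape, K, r, eps=1e-8, max_iter=30):
    """Sketch of Algorithm 1 (SpaRCS). Measurements: y = Phi @ vec(L + S)."""
    N1, N2 = shape
    L_hat = np.zeros(shape)
    S_hat_vec = np.zeros(N1 * N2)
    supp_S = np.array([], dtype=int)                     # current sparse support
    U_L = np.zeros((N1, 0)); V_L = np.zeros((N2, 0))     # current rank-1 basis
    w = y.copy()
    for _ in range(max_iter):
        if np.linalg.norm(w) < eps:
            break
        P = (Phi.T @ w).reshape(shape)                   # signal proxy
        # Support identification: top-2K entries and top-2r singular pairs of P.
        cand = np.argsort(np.abs(P).ravel())[-2 * K:]
        Up, _, Vpt = np.linalg.svd(P, full_matrices=False)
        # Support merger with the supports kept from the previous iteration.
        T_S = np.union1d(supp_S, cand)
        U_m = np.hstack([U_L, Up[:, :2 * r]]); V_m = np.hstack([V_L, Vpt[:2 * r].T])
        # Least-squares estimation on the merged supports (previous estimates
        # of the *other* component are subtracted from y, as in Algorithm 1).
        B_S = np.zeros(N1 * N2)
        B_S[T_S] = np.linalg.lstsq(Phi[:, T_S], y - Phi @ L_hat.ravel(), rcond=None)[0]
        basis = np.array([np.outer(U_m[:, j], V_m[:, j]).ravel()
                          for j in range(U_m.shape[1])])
        coef = np.linalg.lstsq(Phi @ basis.T, y - Phi @ S_hat_vec, rcond=None)[0]
        B_L = (coef @ basis).reshape(shape)
        # Support pruning back to K entries and rank r.
        supp_S = np.argsort(np.abs(B_S))[-K:]
        S_hat_vec = np.zeros(N1 * N2); S_hat_vec[supp_S] = B_S[supp_S]
        Ub, sb, Vbt = np.linalg.svd(B_L, full_matrices=False)
        L_hat = (Ub[:, :r] * sb[:r]) @ Vbt[:r]
        U_L, V_L = Ub[:, :r], Vbt[:r].T
        # Residue update.
        w = y - Phi @ (L_hat.ravel() + S_hat_vec)
    return L_hat, S_hat_vec.reshape(shape)

rng = np.random.default_rng(3)
N, r, K = 12, 1, 6
L = np.outer(rng.normal(size=N), rng.normal(size=N))
S = np.zeros(N * N); S[rng.choice(N * N, K, replace=False)] = 3.0
M = L + S.reshape(N, N)
p = 110                                                  # ~0.76 N^2 measurements
Phi = rng.normal(size=(p, N * N)) / np.sqrt(p)
y = Phi @ M.ravel()
L_hat, S_hat = sparcs(y, Phi, (N, N), K, r)
```

The least-squares steps here are exact solves over at most 3K entries and 3r rank-1 basis matrices, which mirrors the restriction-to-support structure of the algorithm while staying self-contained.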
The least squares estimation can be solved efficiently using conjugate gradient or Richardson iterations. Support estimation for the sparse component merely entails sorting the signal proxy magnitudes and choosing the largest 2K elements.

Figure 2 compares the performance of SpaRCS with two alternate recovery algorithms. We implement CS versions of the IT [18] and APG [19] algorithms, which solve the problems

min τ(‖L‖* + λ‖vec(S)‖₁) + ½‖L‖_F² + ½‖S‖_F² s.t. y = A(L + S)

and

min ‖L‖* + λ‖vec(S)‖₁ s.t. y = A(L + S),

respectively. We endeavor to tune the parameters of these algorithms (which we refer to as CS IT and CS APG, respectively) to optimize their performance. Details of our implementation can be found in [26]. In all experiments, we consider matrices of size N × N with rank(L) = 2 and ‖S‖₀ = 0.02N², and use permuted noiselets [12] for the measurement operator A. As a first experiment, we generate convergence plots for matrices with N = 128, varying the measurement ratio p/N² from 0.05 to 0.5. We then recover L̂ and Ŝ and measure the recovered signal-to-noise ratio (RSNR) for M̂ = L̂ + Ŝ via 20 log₁₀(‖M‖_F / ‖M − L̂ − Ŝ‖_F). These results are displayed in Figure 2(a), where we see that SpaRCS provides the best recovery. As a second experiment, we vary the problem size N ∈ {128, 256, 512, 1024} while holding the number of measurements constant at p = 0.2N². We measure the recovery time required by each algorithm to reach a residual error ‖y − A(L̂ + Ŝ)‖₂/‖y‖₂ ≤ 5 × 10⁻⁴. These results are displayed in Figure 2(b), which demonstrates that SpaRCS converges significantly faster than the two other recovery methods.

Figure 2: Performance and run-time comparisons between SpaRCS, CS IT, and CS APG.
Shown are average results over 10 Monte Carlo runs for problems of size N₁ = N₂ = N with rank(L) = 2 and ‖S‖₀ = 0.02N². (a) Performance for a problem with N = 128 for various values of the measurement ratio p/N². SpaRCS exhibits superior recovery over the alternate approaches. (b) Timing plot for problems of various sizes N. SpaRCS converges in time several orders of magnitude faster than the alternate approaches.

4 Applications

We now present several experiments that validate SpaRCS and showcase its performance in several applications. In all experiments, we use permuted noiselets for the measurement operator A; these provide both a fast transform and memory savings, since we do not have to store A explicitly.

Video compressive sensing: The video CS problem is concerned with recovering multiple image frames of a video sequence from CS measurements [6, 21, 24]. We consider a 128 × 128 × 201 video sequence consisting of a static background with a number of people moving in the foreground. We aim not only to recover the original video but also to separate the background and foreground. We resize the data cube into a 128² × 201 matrix M, where each column corresponds to a (vectorized) image frame. The measurement operator A operates on each column of M independently, simulating acquisition using a single pixel camera [13]. We acquire p = 0.15 × 128² measurements per image frame. We recover with SpaRCS using r = 1 and K = 20,000. The results are displayed in Figure 3, where it can be seen that SpaRCS accurately estimates and separates the low-rank background and the sparse foreground. Figure 4 shows recovery results on a more challenging sequence

Figure 3: SpaRCS recovery results on a 128 × 128 × 201 video sequence. The video sequence is reshaped into an N₁ × N₂ matrix with N₁ = 128² and N₂ = 201.
(a) Ground truth for several frames. (b) Estimated low-rank component L. (c) Estimated sparse component S. The recovery SNR is 31.2 dB at the measurement ratio p/(N₁N₂) = 0.15. The recovery is accurate in spite of the measurement operator A working independently on each frame.

Figure 4: SpaRCS recovery results on a 64 × 64 × 234 video sequence. The video sequence is reshaped into an N₁ × N₂ matrix with N₁ = 64² and N₂ = 234. (a) Ground truth for several frames. (b) Recovered frames. The recovery SNR is 23.9 dB at the measurement ratio of p/(N₁N₂) = 0.33. The recovery is accurate in spite of the changing illumination conditions.

with changing illumination. In contrast to SpaRCS, existing video CS algorithms do not work well with dramatically changing illumination.

Hyperspectral compressive sensing: Low-rank/sparse decomposition has an important physical relevance in hyperspectral imaging [7]. Here we consider a hyperspectral cube, which contains a vector of spectral information at each image pixel. A measurement device such as [25] can provide compressive measurements of such a hyperspectral cube. We employ SpaRCS on a hyperspectral cube of size 128 × 128 × 128, rearranged as a matrix of size 128² × 128 such that each column corresponds to a different spectral band. Figure 5 demonstrates recovery using p = 0.15 × 128² × 128 total measurements of the entire data cube with r = 8 and K = 3000. SpaRCS performs well in terms of residual error (Figure 5(c)) despite the number of rows being much larger than the number of columns. Figure 5(d) emphasizes the utility of the sparse component. Using only a low-rank approximation (corresponding to traditional PCA) causes a significant increase in residual error over what is achieved by SpaRCS.

Parameter mismatch: In Figure 6, we analyze the influence of an incorrect selection of the parameter r, using the hyperspectral data as an example.
We plot the recovered SNR obtained at various levels of the measurement ratio p/(N₁N₂) for both r = 8 and r = 4. There are interesting tradeoffs associated with the choice of parameters. Larger values of r and K enable better approximation of the unknown signals. However, by increasing r and K, we also increase the number of independent parameters in the problem, which is given by 2 max(N₁, N₂)r − r² + 2K. An empirical rule of thumb for greedy recovery algorithms is that the number of measurements p should be 2–5 times the number of independent parameters. Consequently, there exists a tradeoff between the values of r, K, and p to ensure stable recovery.

Figure 5: SpaRCS recovery results on a 128 × 128 × 128 hyperspectral data cube. The hyperspectral data is reshaped into an N₁ × N₂ matrix with N₁ = 128² and N₂ = 128. Each image pane corresponds to a different spectral band. (a) Ground truth. (b) Recovered images. (c) Residual error using both the low-rank and sparse components. (d) Residual error using only the low-rank component. The measurement ratio is p/(N₁N₂) = 0.15.

Figure 6: Hyperspectral data recovery for various values of the rank r of the low-rank matrix L. The data used is the same as in Figure 5. (a) r = 1, SNR = 12.81 dB. (b) r = 2, SNR = 19.42 dB. (c) r = 4, SNR = 27.46 dB. (d) Comparison of compression ratio (N₁N₂)/p and recovery SNR using r = 4 and r = 8. All results were obtained with K = 3000.

Robust matrix completion: We apply SpaRCS to the robust matrix completion problem [11]

min ‖L‖* + λ‖s‖₁ subject to L_Ω + s = y, (6)

where s models outlier noise and Ω denotes the set of observed entries.
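As an aside, the parameter-counting rule of thumb quoted above (p roughly 2–5 times the degrees of freedom) is easy to make concrete. The helper below is our own; the example numbers come from the hyperspectral configuration in the text.

```python
def independent_params(N1, N2, r, K):
    """Degrees of freedom of a rank-r plus K-sparse model, as quoted in the
    text: 2 max(N1, N2) r - r^2 + 2K."""
    return 2 * max(N1, N2) * r - r ** 2 + 2 * K

def suggested_measurements(N1, N2, r, K, factor=3):
    """Rule-of-thumb measurement count: 2-5x the independent parameters."""
    return factor * independent_params(N1, N2, r, K)

# Hyperspectral example from the text: N1 = 128^2, N2 = 128, r = 8, K = 3000.
dof = independent_params(128 ** 2, 128, 8, 3000)
p_used = int(0.15 * 128 ** 2 * 128)       # the 15% measurement ratio actually used
```

Comparing `p_used` against `suggested_measurements(...)` for r = 8 versus r = 4 gives a quick sense of why the larger rank is closer to the edge of stable recovery at a fixed measurement budget.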
This problem can be cast as a compressive low-rank and sparse matrix recovery problem by using a sparse matrix S in place of the outlier noise s and realizing that the support of S is a subset of Ω. This enables recovery of both L and S from samples of their sum L + S. Matrix completion under outlier noise [10, 11] has received some attention and, in many ways, is the work that is closest to this paper. There are, however, several important distinctions. Chen et al. [11] analyze the convex problem (6) to provide performance guarantees. Yet convex optimization methods often do not scale well with the size of the problem. SpaRCS, by contrast, is computationally efficient and scales well as the problem size increases. Furthermore, [10] is tied to the case where A is a sampling operator; it is not immediately clear whether this analysis extends to the more general case of (P1), where the sparse component cannot be modeled as outlier noise in the measurements.

Figure 7: Comparison of several algorithms for the robust matrix completion problem. (a) RSNR averaged over 10 Monte Carlo runs for an N × N matrix completion problem with N = 128, r = 1, and p/N² = 0.2. Non-robust formulations, such as OptSpace, fail. SpaRCS achieves performance close to that of the convex solver (CVX). (b) Comparison of convergence times for the various algorithms. SpaRCS converges in only a fraction of the time required by the other algorithms.

In our robust matrix completion experiments we compare SpaRCS with CS SVT, OptSpace [16] (a non-robust matrix completion algorithm), and a convex solution using CVX [15]. Figure 7 shows the performance of these algorithms. OptSpace, being non-robust, fails as expected.
The accuracy of SpaRCS is closest to that of CVX, although the convergence time of SpaRCS is several orders of magnitude faster.

5 Conclusion

We have considered the problem of recovering low-rank and sparse matrices given only a few linear measurements. Our proposed greedy algorithm, SpaRCS, is both fast and accurate even for large matrix sizes and enjoys strong empirical performance in its convergence to the true solution. We have demonstrated the applicability of SpaRCS to video compressive sensing, hyperspectral imaging, and robust matrix completion. There are many avenues for future work. Model-based extensions of SpaRCS are important directions. Both low-rank and sparse matrices exhibit rich structure in practice, including low-rank Hankel matrices in system identification and group sparsity in background subtraction. The use of models could significantly enhance the performance of the algorithm. This would be especially useful in applications such as video CS, where the measurement operator is typically constrained to operate on each image frame individually.

Acknowledgements

This work was partially supported by the grants NSF CCF-0431150, CCF-0728867, CCF-0926127, CCF-1117939, ARO MURI W911NF-09-1-0383, W911NF-07-1-0185, DARPA N66001-11-14090, N66001-11-C-4092, N66001-08-1-2065, AFOSR FA9550-09-1-0432, and LLNL B593154. Additionally, the authors wish to thank Prof. John Wright for his helpful comments and corrections to a previous version of this manuscript.

References

[1] E. J. Candès. Compressive sampling. In Intl. Cong. of Math., Madrid, Spain, Aug. 2006.
[2] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(1):1–37, 2009.
[3] E. J. Candès and Y. Plan. Matrix completion with noise. Proc. IEEE, 98(6):925–936, 2010.
[4] E. J. Candès and J. Romberg. Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math., 6(2):227–254, 2006.
[5] E. J. Candès and T. Tao.
The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. on Info. Theory, 56(5):2053–2080, 2010.
[6] V. Cevher, A. C. Sankaranarayanan, M. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa. Compressive sensing for background subtraction. In European Conf. Comp. Vision, Marseilles, France, Oct. 2008.
[7] A. Chakrabarti and T. Zickler. Statistics of real-world hyperspectral images. In IEEE Int. Conf. Comp. Vis., Colorado Springs, CO, June 2011.
[8] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Sparse and low-rank matrix decompositions. In Allerton Conf. on Comm., Contr., and Comp., Monticello, IL, Sep. 2009.
[9] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. arXiv preprint arXiv:0906.2220, 2009.
[10] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis. Low-rank matrix recovery from errors and erasures. arXiv preprint arXiv:1104.0354, 2011.
[11] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi. Robust matrix completion with corrupted columns. arXiv preprint arXiv:1102.2254, 2011.
[12] R. Coifman, F. Geshwind, and Y. Meyer. Noiselets. Appl. Comput. Harmon. Anal., 10:27–44, 2001.
[13] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk. Single pixel imaging via compressive sampling. IEEE Signal Processing Mag., 25(2):83–91, 2008.
[14] M. Fazel, E. Candès, B. Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2008.
[15] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, Apr. 2011.
[16] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. J. Mach. Learn. Res., 11:2057–2078, 2010.
[17] K. Lee and Y. Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. IEEE Trans. on Info. Theory, 56(9):4402–4416, 2010.
[18] Z.
Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical report, University of Illinois at Urbana-Champaign, Urbana-Champaign, IL.
[19] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. In Intl. Workshop on Comp. Adv. in Multi-Sensor Adapt. Processing, Aruba, Dutch Antilles, Dec. 2009.
[20] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal., 26(3):301–321, 2009.
[21] J. Y. Park and M. B. Wakin. A multiscale framework for compressive sensing of video. In Picture Coding Symp., Chicago, IL, May 2009.
[22] B. Recht. A simpler approach to matrix completion. J. Mach. Learn. Res., posted Oct. 2009, to appear.
[23] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471–501, 2010.
[24] A. C. Sankaranarayanan, P. Turaga, R. G. Baraniuk, and R. Chellappa. Compressive acquisition of dynamic scenes. In European Conf. Comp. Vision, Crete, Greece, Sep. 2010.
[25] T. Sun and K. Kelly. Compressive sensing hyperspectral imager. In Comput. Opt. Sensing and Imaging, San Jose, CA, Oct. 2009.
[26] A. E. Waters, A. C. Sankaranarayanan, and R. G. Baraniuk. SpaRCS: Recovering low-rank and sparse matrices from compressive measurements. Technical report, Rice University, Houston, TX, 2011.
[27] W. Yin, S. Osher, D. Goldfarb, and J. Darbon. Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imag. Sci., 1(1):143–168, 2008.
Shaping Level Sets with Submodular Functions

Francis Bach
INRIA - Sierra Project-team
Laboratoire d'Informatique de l'Ecole Normale Supérieure, Paris, France
francis.bach@ens.fr

Abstract

We consider a class of sparsity-inducing regularization terms based on submodular functions. While previous work has focused on non-decreasing functions, we explore symmetric submodular functions and their Lovász extensions. We show that the Lovász extension may be seen as the convex envelope of a function that depends on level sets (i.e., the set of indices whose corresponding components of the underlying predictor are greater than a given constant): this leads to a class of convex structured regularization terms that impose prior knowledge on the level sets, and not only on the supports of the underlying predictors. We provide unified optimization algorithms, such as proximal operators, and theoretical guarantees (allowed level sets and recovery conditions). By selecting specific submodular functions, we give a new interpretation to known norms, such as the total variation; we also define new norms, in particular ones that are based on order statistics with application to clustering and outlier detection, and on noisy cuts in graphs with application to change point detection in the presence of outliers.

1 Introduction

The concept of parsimony is central in many scientific domains. In the context of statistics, signal processing or machine learning, it may take several forms. Classically, in a variable or feature selection problem, a sparse solution with many zeros is sought so that the model is either more interpretable, cheaper to use, or simply matches available prior knowledge (see, e.g., [1, 2, 3] and references therein). In this paper, we instead consider sparsity-inducing regularization terms that will lead to solutions with many equal values.
A classical example is the total variation in one or two dimensions, which leads to piecewise constant solutions [4, 5] and can be applied to various image labelling problems [6, 5], or change point detection tasks [7, 8, 9]. Another example is the "Oscar" penalty, which induces automatic grouping of the features [10]. In this paper, we follow the approach of [3], who designed sparsity-inducing norms based on non-decreasing submodular functions, as a convex approximation to imposing a specific prior on the supports of the predictors. Here, we show that a similar parallel holds for another class of submodular functions, namely non-negative set-functions which are equal to zero for the full and empty set. Our main instance of such functions are symmetric submodular functions. We make the following contributions:

- We provide in Section 3 explicit links between priors on level sets and certain submodular functions: we show that the Lovász extensions (see, e.g., [11] and a short review in Section 2) associated to these submodular functions are the convex envelopes (i.e., tightest convex lower bounds) of specific functions that depend on all level sets of the underlying vector.
- In Section 4, we reinterpret existing norms such as the total variation and design new norms, based on noisy cuts or order statistics. We propose applications to clustering and outlier detection, as well as to change point detection in the presence of outliers.
- We provide unified algorithms in Section 5, such as proximal operators, which are based on a sequence of submodular function minimizations (SFMs) when such SFMs are efficient, or on an adaptation of the generic slower approach of [3] otherwise.
- We derive unified theoretical guarantees for level set recovery in Section 6, showing that even in the absence of correlation between predictors, level set recovery is not always guaranteed, a situation which is to be contrasted with traditional support recovery situations [1, 3].

Notation.
For w ∈ Rp and q ∈ [1, ∞], we denote by ∥w∥q the ℓq-norm of w. Given a subset A of V = {1, . . . , p}, 1A ∈ {0, 1}p is the indicator vector of the subset A. Moreover, given a vector w and a matrix Q, wA and QAA denote the corresponding subvector and submatrix of w and Q. Finally, for w ∈ Rp and A ⊂ V, w(A) = Σ_{k∈A} wk = w⊤1A (this defines a modular set-function). In this paper, for a certain vector w ∈ Rp, we call level sets the sets of indices which are larger (or smaller) or equal to a certain constant α, which we denote {w ⩾ α} (or {w ⩽ α}), while we call constant sets the sets of indices which are equal to a constant α, which we denote {w = α}.

2 Review of Submodular Analysis

In this section, we review relevant results from submodular analysis. For more details, see, e.g., [12], and, for a review with proofs derived from classical convex analysis, see, e.g., [11].

Definition. Throughout this paper, we consider a submodular function F defined on the power set 2V of V = {1, . . . , p}, i.e., such that ∀A, B ⊂ V, F(A) + F(B) ⩾ F(A ∪ B) + F(A ∩ B). Unless otherwise stated, we consider functions which are non-negative (i.e., such that F(A) ⩾ 0 for all A ⊂ V), and that satisfy F(∅) = F(V) = 0. Usual examples are symmetric submodular functions, i.e., such that ∀A ⊂ V, F(V\A) = F(A), which are known to always have non-negative values. We give several examples in Section 4; for illustrating the concepts introduced in this section and Section 3, we will consider the cut in an undirected chain graph, i.e., F(A) = Σ_{j=1}^{p−1} |(1A)j − (1A)j+1|.

Lovász extension. Given any set-function F such that F(V) = F(∅) = 0, one can define its Lovász extension f : Rp → R as f(w) = ∫_R F({w ⩾ α}) dα (see, e.g., [11] for this particular formulation). The Lovász extension is convex if and only if F is submodular. Moreover, f is piecewise-linear and for all A ⊂ V, f(1A) = F(A), that is, it is indeed an extension from 2V (which can be identified to {0, 1}p through indicator vectors) to Rp.
Finally, it is always positively homogeneous. For the chain graph, we obtain the usual total variation f(w) = Σ_{j=1}^{p−1} |wj − wj+1|.

Base polyhedron. We denote by B(F) = {s ∈ Rp : ∀A ⊂ V, s(A) ⩽ F(A), s(V) = F(V)} the base polyhedron [12], where we use the notation s(A) = Σ_{k∈A} sk. One important result in submodular analysis is that if F is a submodular function, then we have a representation of f as a maximum of linear functions [12, 11], i.e., for all w ∈ Rp, f(w) = max_{s∈B(F)} w⊤s. Moreover, instead of solving a linear program with 2^p − 1 constraints, a solution s may be obtained by the following "greedy algorithm": order the components of w in decreasing order wj1 ⩾ · · · ⩾ wjp, and then take for all k ∈ {1, . . . , p}, sjk = F({j1, . . . , jk}) − F({j1, . . . , jk−1}).

Tight and inseparable sets. The polyhedra U = {w ∈ Rp : f(w) ⩽ 1} and B(F) are polar to each other (see, e.g., [13] for definitions and properties of polar sets). Therefore, the facial structure of U may be obtained from that of B(F). Given s ∈ B(F), a set A ⊂ V is said to be tight if s(A) = F(A). It is known that the set of tight sets is a distributive lattice, i.e., if A and B are tight, then so are A ∪ B and A ∩ B [12, 11]. The faces of B(F) are thus intersections of hyperplanes {s(A) = F(A)} for A belonging to certain distributive lattices (see Prop. 3). A set A is said to be separable if there exists a non-trivial partition A = B ∪ C such that F(A) = F(B) + F(C). A set is said to be inseparable if it is not separable. For the cut in an undirected graph, inseparable sets are exactly connected sets.

3 Properties of the Lovász Extension

In this section, we derive properties of the Lovász extension for submodular functions, which go beyond convexity and homogeneity. Throughout this section, we assume that F is a non-negative submodular set-function that is equal to zero at ∅ and V.
This immediately implies that f is invariant by addition of any constant vector (that is, f(w + α1V) = f(w) for all w ∈ Rp and α ∈ R), and that f(1V) = F(V) = 0. Thus, contrary to the non-decreasing case [3], our regularizers are not norms. However, they are norms on the hyperplane {w⊤1V = 0} as soon as F(A) > 0 for all A ≠ ∅ and A ≠ V (which we assume for the rest of this paper). We now show that the Lovász extension is the convex envelope of a certain combinatorial function which does depend on all level sets {w ⩾ α} of w ∈ Rp (see proof in [14]):

Proposition 1 (Convex envelope) The Lovász extension f(w) is the convex envelope of the function w ↦ max_{α∈R} F({w ⩾ α}) on the set [0, 1]p + R1V = {w ∈ Rp : max_{k∈V} wk − min_{k∈V} wk ⩽ 1}.

Figure 1: Top: Polyhedral level set of f (projected on the set w⊤1V = 0), for 2 different submodular symmetric functions of three variables, with different inseparable sets leading to different sets of extreme points; changing values of F may make some of the extreme points disappear. The various extreme points cut the space into polygons where the ordering of the components is fixed. Left: F(A) = 1_{|A|∈{1,2}}, leading to f(w) = max_k wk − min_k wk (all possible extreme points); note that the polygon need not be symmetric in general. Right: one-dimensional total variation on three nodes, i.e., F(A) = |1_{1∈A} − 1_{2∈A}| + |1_{2∈A} − 1_{3∈A}|, leading to f(w) = |w1 − w2| + |w2 − w3|, for which the extreme points corresponding to the separable set {1, 3} and its complement disappear.
Note the difference with the result of [3]: we consider here a different set on which we compute the convex envelope ([0, 1]p + R1V instead of [−1, 1]p), and not a function of the support of w, but of all its level sets.¹ Moreover, the Lovász extension is a convex relaxation of a function of level sets (of the form {w ⩾ α}) and not of constant sets (of the form {w = α}). It would have been perhaps more intuitive to consider for example ∫_R F({w = α}) dα, since it does not depend on the ordering of the values that w may take; however, to the best of our knowledge, the latter function does not lead to a convex function amenable to polynomial-time algorithms. This definition through level sets will generate some potentially undesired behavior (such as the well-known staircase effect for the one-dimensional total variation), as we show in Section 6. The next proposition describes the set of extreme points of the "unit ball" U = {w : f(w) ⩽ 1}, giving a first illustration of sparsity-inducing effects (see example in Figure 1, in particular for the one-dimensional total variation).

Proposition 2 (Extreme points) The extreme points of the set U ∩ {w⊤1V = 0} are the projections of the vectors 1A/F(A) on the plane {w⊤1V = 0}, for A such that A is inseparable for F and V\A is inseparable for B ↦ F(A ∪ B) − F(A).

Partially ordered sets and distributive lattices. A subset D of 2V is a (distributive) lattice if it is invariant by intersection and union. We assume in this paper that all lattices contain the empty set ∅ and the full set V, and we endow the lattice with the inclusion order. Such lattices may be represented as a partially ordered set (poset) Π(D) = {A1, . . . , Am} (with order relationship ≽), where the sets Aj, j = 1, . . . , m, form a partition of V (we always assume a topological ordering of the sets, i.e., Ai ≽ Aj ⇒ i ⩾ j). As illustrated in Figure 2, we go from D to Π(D) by considering all maximal chains in D and the differences between consecutive sets.
We go from Π(D) to D by constructing all ideals of Π(D), i.e., sets J such that if an element of Π(D) is lower than an element of J, then it has to be in J (see [12] for more details, and an example in Figure 2). Distributive lattices and posets are thus in one-to-one correspondence. Throughout this section, we go back and forth between these two representations. The distributive lattice D will correspond to all authorized level sets {w ⩾ α} for w in a single face of U, while the elements of the poset Π(D) are the constant sets (over which w is constant), with the order between the subsets giving partial constraints between the values of the corresponding constants.

¹ Note that the support {w = 0} is a constant set which is the intersection of two level sets.

Figure 2: Left: distributive lattice with 7 elements in 2^{1,2,3,4,5,6}, represented with the Hasse diagram corresponding to the inclusion order (for a partial order, a Hasse diagram connects A to B if A is smaller than B and there is no C such that A is smaller than C and C is smaller than B). Right: corresponding poset, with 4 elements that form a partition of {1, 2, 3, 4, 5, 6}, represented with the Hasse diagram corresponding to the order ≽ (a node points to its immediate smaller node according to ≽). Note that this corresponds to an "allowed" lattice (see Prop. 3) for the one-dimensional total variation.

Faces of U. The faces of U are characterized by lattices D, with their corresponding posets Π(D) = {A1, . . . , Am}. We denote by U°D (and by UD its closure) the set of w ∈ Rp such that (a) f(w) ⩽ 1, (b) w is piecewise constant with respect to Π(D), with value vi on Ai, and (c) for all pairs (i, j),
For certain lattices D, these will be exactly the relative interiors of all faces of U (see proof in [14]): Proposition 3 (Faces of U) The (non-empty) relative interiors of all faces of U are exactly of the form U◦ D, where D is a lattice such that: (i) the restriction of F to D is modular, i.e., for all A, B ∈D, F(A)+F(B) = F(A∪B)+F(A∩B), (ii) for all j ∈{1, . . ., m}, the set Aj is inseparable for the function Cj 7→F(Bj−1 ∪Cj) − F(Bj−1), where Bj−1 is the union of all ancestors of Aj in Π(D), (iii) among all lattices corresponding to the same unordered partition, D is a maximal element of the set of lattices satisfying (i) and (ii). Among the three conditions, the second one is the easiest to interpret, as it reduces to having constant sets which are inseparable for certain submodular functions, and for cuts in an undirected graph, these will exactly be connected sets. Note also that extreme points from Prop. 2 are recovered with D = {∅, A, V }. Since we are able to characterize all faces of U (of all dimensions) with non-empty relative interior, we have a partition of the space and any w ∈Rp which is not proportional to 1V , will be, up to the strictly positive constant f(w), in exactly one of these relative interiors of faces; we refer to this lattice as the lattice associated to w. Note that from the face w belongs to, we have strong constraints on the constant sets, but we may not be able to determine all level sets of w, because only partial constraints are given by the order on Π(D). For example, in Figure 2 for the one-dimensional total variation, w2 may be larger or smaller than w5 = w6 (and even potentially equal, but with zero probability, see Section 6). 4 Examples of Submodular Functions In this section, we provide examples of submodular functions and of their Lov´asz extensions. 
Some are well-known (such as cut functions and total variations), some are new in the context of supervised learning (regular functions), while some have interesting effects in terms of clustering or outlier detection (cardinality-based functions).

Symmetrization. From any submodular function G, one may define F(A) = G(A) + G(V\A) − G(∅) − G(V), which is symmetric. Potentially interesting examples which are beyond the scope of this paper are mutual information, or functions of eigenvalues of submatrices [3].

Cut functions. Given a set of nonnegative weights d : V × V → R+, define the cut F(A) = Σ_{k∈A, j∈V\A} d(k, j). The Lovász extension is equal to f(w) = Σ_{k,j∈V} d(k, j)(wk − wj)+ (which shows submodularity because f is convex), and is often referred to as the total variation. If the weight function d is symmetric, then the submodular function is also symmetric. In this case, it can be shown that inseparable sets for the functions A ↦ F(A ∪ B) − F(B) are exactly connected sets. Hence, by Props. 3 and 6, constant sets are connected sets, which is the usual justification behind the total variation. Note however that some configurations of connected sets are not allowed due to the other conditions in Prop. 3 (see examples in Section 6). In Figure 5 (right plot), we give an example of the usual chain graph, leading to the one-dimensional total variation [4, 5]. Note that these functions can be extended to cuts in hypergraphs, which may have interesting applications in computer vision [6]. Moreover, directed cuts may be interesting to favor increasing or decreasing jumps along the edges of the graph.
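The cut/total-variation correspondence is easy to verify numerically with the greedy algorithm of Section 2, which evaluates f(w) = max_{s∈B(F)} w⊤s for any set-function F with F(∅) = F(V) = 0 (a small sketch; the function names are ours):

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovász extension f(w) via the greedy algorithm:
    sort w in decreasing order and accumulate marginal gains of F."""
    order = np.argsort(-np.asarray(w))
    f, prev, prefix = 0.0, 0.0, []
    for j in order:
        prefix.append(int(j))
        cur = F(set(prefix))
        f += w[j] * (cur - prev)  # s_{j_k} = F({j_1..j_k}) - F({j_1..j_{k-1}})
        prev = cur
    return f

p = 6
def chain_cut(A):
    """Cut function of the chain graph on {0, ..., p-1} with unit weights."""
    ind = [1.0 if j in A else 0.0 for j in range(p)]
    return sum(abs(ind[j] - ind[j + 1]) for j in range(p - 1))

rng = np.random.default_rng(1)
w = rng.standard_normal(p)
tv = sum(abs(w[j] - w[j + 1]) for j in range(p - 1))
# lovasz_extension(chain_cut, w) coincides with the total variation tv,
# and lovasz_extension(chain_cut, 1_A) recovers F(A) for any subset A.
```

The same `lovasz_extension` routine works for any of the set-functions of this section, since the greedy algorithm only needs oracle access to F.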
Figure 3: Three left plots: Estimation of noisy piecewise constant 1D signal with outliers (indices 5 and 15 in the chain of 20 nodes). Left: original signal. Middle: best estimation with total variation (level sets are not correctly estimated). Right: best estimation with the robust total variation based on noisy cut functions (level sets are correctly estimated, with less bias and with detection of outliers). Right plot: clustering estimation error vs. noise level, in a sequence of 100 variables, with a single jump, where noise of variance one is added, with 5% of outliers (averaged over 20 replications).

Regular functions and robust total variation. By partial minimization, we obtain so-called regular functions [6, 5]. One application is "noisy cut functions": for a given weight function d : W × W → R+, where each node in W is uniquely associated with a node in V, we consider the submodular function obtained as the minimum cut adapted to A in the augmented graph (see an example in the right plot of Figure 5): F(A) = min_{B⊂W} Σ_{k∈B, j∈W\B} d(k, j) + λ|AΔB|. This allows for robust versions of cuts, where some gaps may be tolerated; indeed, compared to having directly a small cut for A, B needs to have a small cut and be close to A, thus allowing some elements to be removed from or added to A in order to lower the cut. See examples in Figure 3, illustrating the behavior of the type of graph displayed in the bottom-right plot of Figure 5, where the performance of the robust total variation is significantly more stable in the presence of outliers.

Cardinality-based functions.
For F(A) = h(|A|) where h is such that h(0) = h(p) = 0 and h is concave, we obtain a submodular function, and a Lovász extension that depends on the order statistics of w, i.e., if wj1 ⩾ · · · ⩾ wjp, then f(w) = Σ_{k=1}^{p−1} h(k)(wjk − wjk+1). While these examples do not provide significantly different behaviors for the non-decreasing submodular functions explored by [3] (i.e., in terms of support), they lead to interesting behaviors here in terms of level sets, i.e., they will make the components of w cluster together in specific ways. Indeed, as shown in Section 6, allowed constant sets A are such that A is inseparable for the function C ↦ h(|B ∪ C|) − h(|B|) (where B ⊂ V is the set of components with higher values than the ones in A), which imposes that the concave function h is not linear on [|B|, |B| + |A|]. We consider the following examples:

1. F(A) = |A| · |V\A|, leading to f(w) = Σ_{i<j} |wi − wj|. This function can thus also be seen as the cut in the fully connected graph. All patterns of level sets are allowed as the function h is strongly concave (see left plot of Figure 4). This function has been extended in [15] by considering situations where each wj is a vector, instead of a scalar, and replacing the absolute value |wi − wj| by any norm ∥wi − wj∥, leading to convex formulations for clustering.

2. F(A) = 1 if A ≠ ∅ and A ≠ V, and 0 otherwise, leading to f(w) = max_{i,j} |wi − wj|. Two large level sets at the top and bottom; all the rest of the variables are in-between and separated (Figure 4, second plot from the left).

3. F(A) = max{|A|, |V\A|}. This function is piecewise affine, with only one kink, thus only one level set of cardinality greater than one (in the middle) is possible, which is observed in Figure 4 (third plot from the left). This may have applications to multivariate outlier detection by considering extensions similar to [15].
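The order-statistic form of f for cardinality-based functions is easy to check against the pairwise-difference form of example 1, where h(k) = k(p − k) (our own sketch):

```python
import numpy as np

def f_cardinality(h, w):
    """Lovász extension of F(A) = h(|A|): sort w in decreasing order and
    take f(w) = sum_{k=1}^{p-1} h(k) * (w_(k) - w_(k+1))."""
    ws = np.sort(w)[::-1]
    p = len(ws)
    return sum(h(k) * (ws[k - 1] - ws[k]) for k in range(1, p))

p = 7
rng = np.random.default_rng(2)
w = rng.standard_normal(p)
pairwise = sum(abs(w[i] - w[j]) for i in range(p) for j in range(i + 1, p))
# For h(k) = k(p - k), i.e., F(A) = |A| |V\A|, the order-statistic value
# f_cardinality(lambda k: k * (p - k), w) agrees with `pairwise`.
```

Swapping in the other two choices of h from the examples above only changes the lambda, which makes the different clustering behaviors easy to explore numerically.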
5 Optimization Algorithms

In this section, we present optimization methods for minimizing convex objective functions regularized by the Lovász extension of a submodular function. These lead to convex optimization problems, which we tackle using proximal methods (see, e.g., [16, 17] and references therein). We first start by mentioning that subgradients may easily be derived (but subgradient descent is here rather inefficient, as shown in Figure 5). Moreover, note that with the square loss, the regularization paths are piecewise affine, as a direct consequence of regularizing by a polyhedral function.

Figure 4: Left: Piecewise linear regularization paths of proximal problems (Eq. (1)) for different functions of cardinality. From left to right: quadratic function (all level sets allowed), second example in Section 4 (two large level sets at the top and bottom), piecewise linear with two pieces (a single large level set in the middle). Right: Same plot for the one-dimensional total variation. Note that in all these particular cases the regularization paths for orthogonal designs are agglomerative (see Section 5), while for general designs, they would still be piecewise affine but not agglomerative.

Subgradient. From f(w) = max_{s∈B(F)} s⊤w and the greedy algorithm² presented in Section 2, one can easily get in polynomial time one subgradient as one of the maximizers s. This allows the use of subgradient descent, with slow convergence compared to proximal methods (see Figure 5).

Proximal problems through sequences of submodular function minimizations (SFMs). Given regularized problems of the form min_{w∈Rp} L(w) + λf(w), where L is differentiable with Lipschitz-continuous gradient, proximal methods have been shown to be particularly efficient first-order methods (see, e.g., [16]).
In this paper, we use the method "ISTA" and its accelerated variant "FISTA" [16]. To apply these methods, it suffices to be able to solve efficiently

min_{w∈Rp} ½∥w − z∥₂² + λf(w),   (1)

which we refer to as the proximal problem. It is known that solving the proximal problem is related to submodular function minimization (SFM). More precisely, the minimum of A ↦ λF(A) − z(A) may be obtained by selecting the negative components of the solution of a single proximal problem [12, 11]. Alternatively, the solution of the proximal problem may be obtained by a sequence of at most p submodular function minimizations of the form A ↦ λF(A) − z(A), by a decomposition algorithm adapted from [18] and described in [11]. Thus, computing the proximal operator has polynomial complexity since SFM has polynomial complexity. However, it may be too slow for practical purposes, as the best generic algorithm has complexity O(p^6) [19]³. Nevertheless, this strategy is efficient for families of submodular functions for which dedicated fast algorithms exist:

- Cuts: Minimizing the cut or the partially minimized cut, plus a modular function, may be done with a min-cut/max-flow algorithm [see, e.g., 6, 5]. For proximal methods, we in fact need to solve an instance of a parametric max-flow problem, which may be done using other efficient dedicated algorithms [21, 5] than the decomposition algorithm derived from [18].
- Functions of cardinality: minimizing functions of the form A ↦ λF(A) − z(A) can be done in closed form by sorting the elements of z.

Proximal problems through minimum-norm-point algorithm. In the generic case (i.e., beyond cuts and cardinality-based functions), we can follow [12, 3]: since f(w) is expressed as a maximum of linear functions, the problem reduces to the projection on the polytope B(F), for which we happen to be able to easily maximize linear functions (using the greedy algorithm described in Section 2).
This can be tackled efficiently by the minimum-norm-point algorithm [12], which iterates between orthogonal projections on affine subspaces and the greedy algorithm for the submodular function⁴. We compare all optimization methods on synthetic examples in Figure 5.

² The greedy algorithm to find extreme points of the base polyhedron should not be confused with the greedy algorithm (e.g., forward selection) that is common in supervised learning/statistics.
³ Note that even in the case of symmetric submodular functions, where more efficient algorithms in O(p³) for submodular function minimization (SFM) exist [20], the minimization of functions of the form λF(A) − z(A) is provably as hard as general SFM [20].
⁴ Interestingly, when used for submodular function minimization (SFM), the minimum-norm-point algorithm has no complexity bound but is empirically faster than algorithms with such bounds [12].

Figure 5: Left: Matlab running times of different optimization methods on 20 replications of a least-squares regression problem with p = 1000 for a cardinality-based submodular function (best seen in color). Proximal methods with the generic algorithm (using the minimum-norm-point algorithm) are faster than subgradient descent (with two schedules for the learning rate, 1/t or 1/√t). Using the dedicated algorithm (which is not available in all situations) is significantly faster. Right: Examples of graphs (top: chain graph, bottom: hidden chain graph, with sets W and V and examples of a set A in light red, and B in blue, see text for details).

Proximal path as agglomerative clustering. When λ varies from zero to +∞, the unique optimal solution of Eq. (1) goes from z to a constant.
We now provide conditions under which the regularization path of the proximal problem may be obtained by agglomerative clustering (see examples in Figure 4):

Proposition 4 (Agglomerative clustering) Assume that for all sets A, B such that B ∩ A = ∅ and A is inseparable for D ↦ F(B ∪ D) − F(B), we have:

∀C ⊂ A, (|C|/|A|) [F(B ∪ A) − F(B)] ⩽ F(B ∪ C) − F(B).   (2)

Then the regularization path for Eq. (1) is agglomerative, that is, if two variables are in the same constant set for a certain µ ∈ R+, so are they for all larger λ ⩾ µ.

As shown in [14], the assumptions required by Prop. 4 are satisfied (a) by all submodular set-functions that only depend on the cardinality, and (b) by the one-dimensional total variation; we thus recover and extend known results from [7, 22, 15].

Adding an ℓ1-norm. Following [4], we may add the ℓ1-norm ∥w∥1 for additional sparsity of w (on top of shaping its level sets). The following proposition extends the result for the one-dimensional total variation [4, 23] to all submodular functions and their Lovász extensions:

Proposition 5 (Proximal problem for ℓ1-penalized problems) The unique minimizer of ½∥w − z∥₂² + f(w) + λ∥w∥1 may be obtained by soft-thresholding the minimizers of ½∥w − z∥₂² + f(w). That is, the proximal operator for f + λ∥·∥1 is equal to the composition of the proximal operator for f and the one for λ∥·∥1.

6 Sparsity-inducing Properties

Going from the penalization of supports to the penalization of level sets introduces some complexity, and for simplicity in this section we only consider the analysis in the context of orthogonal design matrices, which is often referred to as the denoising problem, and which in the context of level set estimation already leads to interesting results. That is, we study the unique global minimum ŵ of the proximal problem in Eq. (1), make some assumption regarding z (typically z = w* + noise), and provide guarantees related to the recovery of the level sets of w*.
We start by characterizing the allowed level sets, showing that the partial constraints defined in Section 3 on faces of {f(w) ⩽ 1} do not by chance create further groupings of variables (see proof in [14]).

Proposition 6 (Stable constant sets) Assume z ∈ Rᵖ has an absolutely continuous density with respect to the Lebesgue measure. Then, with probability one, the unique minimizer ŵ of Eq. (1) has constant sets that define a partition corresponding to a lattice D defined in Prop. 3.

We now show that under certain conditions the recovered constant sets are the correct ones:

Theorem 1 (Level set recovery) Assume that z = w* + σε, where ε ∈ Rᵖ is a standard Gaussian random vector, and w* is consistent with the lattice D and its associated poset Π(D) = (A₁, . . . , A_m), with values v*_j on A_j, for j ∈ {1, . . . , m}. Denote B_j = A₁ ∪ · · · ∪ A_j for j ∈ {1, . . . , m}. Assume that there exist constants η_j > 0 and ν > 0 such that:

∀C_j ⊂ A_j,  F(B_{j−1} ∪ C_j) − F(B_{j−1}) − (|C_j|/|A_j|) [F(B_{j−1} ∪ A_j) − F(B_{j−1})] ⩾ η_j min{ |C_j|/|A_j|, 1 − |C_j|/|A_j| },  (3)

∀i, j ∈ {1, . . . , m},  A_i ≽ A_j ⇒ v*_i − v*_j ⩾ ν,  (4)

∀j ∈ {1, . . . , m},  λ [F(B_j) − F(B_{j−1})] / |A_j| ⩽ ν/4.  (5)

Then the unique minimizer ŵ of Eq. (1) is associated with the same lattice D as w*, with probability greater than

1 − ∑_{j=1}^m exp( −ν² |A_j| / (32σ²) ) − 2 ∑_{j=1}^m |A_j| exp( −λ² η_j² / (2σ² |A_j|²) ).

We now discuss the three main assumptions of Theorem 1, as well as the probability estimate:
– Eq. (3) is the equivalent of the support recovery condition for the Lasso [1] or its extensions [3]. The main difference is that for support recovery this assumption is always met for orthogonal designs, while here it is not always met. Interestingly, the validity of level set recovery implies the agglomerativity of proximal paths (Eq. (2) in Prop. 4). Note that if Eq. (3) is satisfied only with η_j ⩾ 0 (it is then exactly Eq. (2) in Prop.
4), then, even with infinitesimal noise, one can show that in some cases the wrong level sets may be obtained with non-vanishing probability, while if η_j is strictly negative, one can show that in some cases we never get the correct level sets. Eq. (3) is thus essentially sufficient and necessary.
– Eq. (4) corresponds to having distinct values of w* far enough from each other.
– Eq. (5) is a constraint on λ which controls the bias of the estimator: if λ is too large, then two clusters may be merged.
– In the probability estimate, the second term is small if all σ²|A_j|⁻¹ are small enough (i.e., given the noise, there is enough data to correctly estimate the values of the constant sets), and the third term is small if λ is large enough, to prevent clusters from splitting.

One-dimensional total variation. In this situation, we always get η_j = 0, but in some cases it cannot be improved (i.e., the best possible η_j is equal to zero). As shown in [14], this occurs as soon as there is a "staircase", i.e., a piecewise-constant vector with a sequence of at least two consecutive increases, or two consecutive decreases; in the presence of such staircases, one cannot have consistent support recovery, which is a well-known issue in signal processing (typically, more steps are created). If there is no staircase effect, we have η_j = 1 and Eq. (5) becomes λ ⩽ (ν/8) min_j |A_j|. If we take λ equal to the limiting value in Eq. (5), then we obtain a probability greater than 1 − 4p exp( −ν² min_j |A_j|² / (128σ² max_j |A_j|²) ). Note that we could also derive general results when an additional ℓ1-penalty is used, thus extending results from [24]. Finally, similar (more) negative results may be obtained for the two-dimensional total variation [25, 14].

Clustering with F(A) = |A| · |V \ A|. In this case, we have η_j = |A_j|/2, and Eq. (5) becomes λ ⩽ ν/(4p), leading to a probability of correct support estimation greater than 1 − 4p exp( −ν² / (128 p σ²) ).
This indicates that the noise variance σ² should be small compared to 1/p, which is not satisfactory; this would be corrected by the weighting schemes proposed in [15].

7 Conclusion

We have presented a family of sparsity-inducing norms dedicated to incorporating prior knowledge or structural constraints on the level sets of linear predictors. We have provided a set of common algorithms and theoretical results, as well as simulations on synthetic examples illustrating the behavior of these norms. Several avenues are worth investigating: first, we could follow current practice in sparse methods, e.g., by considering related adapted concave penalties to enhance sparsity-inducing capabilities, or by extending some of the concepts to norms of matrices, with potential applications in matrix factorization [26] or multi-task learning [27].

Acknowledgements. This paper was partially supported by the Agence Nationale de la Recherche (MGA Project), the European Research Council (SIERRA Project) and Digiteo (BIOVIZ project).

References
[1] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
[2] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Adv. NIPS, 2009.
[3] F. Bach. Structured sparsity-inducing norms through submodular functions. In Adv. NIPS, 2010.
[4] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused Lasso. J. Roy. Stat. Soc. B, 67(1):91–108, 2005.
[5] A. Chambolle and J. Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84(3):288–307, 2009.
[6] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans. PAMI, 23(11):1222–1239, 2001.
[7] Z. Harchaoui and C. Lévy-Leduc. Catching change-points with Lasso. Adv.
NIPS, 20, 2008.
[8] J.-P. Vert and K. Bleakley. Fast detection of multiple change-points shared by many signals using group LARS. Adv. NIPS, 23, 2010.
[9] M. Kolar, L. Song, and E. Xing. Sparsistent learning of varying-coefficient models with structural changes. Adv. NIPS, 22, 2009.
[10] H. D. Bondell and B. J. Reich. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics, 64(1):115–123, 2008.
[11] F. Bach. Convex analysis and optimization with submodular functions: a tutorial. Technical Report 00527714, HAL, 2010.
[12] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[13] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1997.
[14] F. Bach. Shaping level sets with submodular functions. Technical Report 00542949-v2, HAL, 2011.
[15] T. Hocking, A. Joulin, F. Bach, and J.-P. Vert. Clusterpath: an algorithm for clustering using convex fusion penalties. In Proc. ICML, 2011.
[16] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[17] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Technical Report 00613125, HAL, 2011.
[18] H. Groenevelt. Two algorithms for maximizing a separable concave function over a polymatroid feasible region. European Journal of Operational Research, 54(2):227–236, 1991.
[19] J. B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Mathematical Programming, 118(2):237–251, 2009.
[20] M. Queyranne. Minimizing symmetric submodular functions. Mathematical Programming, 82(1):3–12, 1998.
[21] G. Gallo, M. D. Grigoriadis, and R. E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM Journal on Computing, 18(1):30–55, 1989.
[22] H. Hoefling. A path algorithm for the fused Lasso signal approximator. Technical Report 0910.0526v1, arXiv, 2009.
[23] J.
Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11:19–60, 2010.
[24] A. Rinaldo. Properties and refinements of the fused Lasso. Ann. Stat., 37(5):2922–2952, 2009.
[25] V. Duval, J.-F. Aujol, and Y. Gousseau. The TVL1 model: A geometric point of view. Multiscale Modeling and Simulation, 8(1):154–189, 2009.
[26] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Adv. NIPS 17, 2005.
[27] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
Probabilistic amplitude and frequency demodulation

Richard E. Turner∗
Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK
ret26@cam.ac.uk

Maneesh Sahani
Gatsby Computational Neuroscience Unit, University College London, Alexandra House, 17 Queen Square, London, WC1N 3AR, UK
maneesh@gatsby.ucl.ac.uk

Abstract

A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although signal processing provides algorithms for so-called amplitude and frequency demodulation (AFD), there are well-known problems with all of the existing methods. Motivated by the fact that AFD is ill-posed, we approach the problem using probabilistic inference. The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form of expectation propagation is used for inference. We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing-data settings.

1 Introduction

Amplitude and frequency demodulation (AFD) is the process by which a signal (y_t) is decomposed into the product of a slowly varying envelope or amplitude component (a_t) and a quickly varying sinusoidal carrier (cos(φ_t)), that is y_t = a_t cos(φ_t). In its general form this is an ill-posed problem [1], and so algorithms must impose implicit or explicit assumptions about the form of carrier and envelope to realise a solution.
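As a concrete illustration of this signal model, the sketch below synthesizes such a signal directly: a slowly varying positive envelope modulating a carrier whose instantaneous frequency drifts slowly around a mean. All parameter values here are invented for the sketch, not taken from the paper.

```python
import math

# Illustrative synthesis of the AFD model y_t = a_t * cos(phi_t):
# slow positive envelope, carrier IF drifting slowly around a mean value.
N = 500
f_carrier, f_env, f_mod = 0.1, 0.004, 0.002   # cycles per sample (made up)
a, phi, y = [], [0.0], []
for t in range(N):
    a.append(1.0 + 0.5 * math.sin(2 * math.pi * f_env * t))   # envelope > 0
    omega = 2 * math.pi * (f_carrier
                           + 0.02 * math.sin(2 * math.pi * f_mod * t))
    phi.append(phi[-1] + omega)      # phase integrates the slow IF
    y.append(a[t] * math.cos(phi[t]))
assert min(a) > 0 and len(y) == N
```

Demodulation is the inverse problem: recover a_t and phi_t from y alone, which is why extra assumptions (slowness, positivity) are needed.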
In this paper we make the standard assumption that the amplitude variables are slowly varying positive variables, and that the derivatives of the carrier phase, ω_t = φ_t − φ_{t−1}, called the instantaneous frequencies (IFs), are also slowly varying. It has been argued that the subbands of speech are well characterised by such a representation [2, 3], and so AFD has found a range of applications in audio processing, including audio coding [4, 2], speech enhancement [5] and source separation [6], and it is used in hearing devices [5]. AFD has been used as a scientific tool to investigate the perception of sounds [7]. AFD is also of importance in neural signal processing applications. Aggregate field measurements, such as those collected at the scalp by electroencephalography (EEG) or within tissue as local field potentials, often exhibit transient sharp spectral lines at characteristic frequencies. Within each such band, both the amplitude of the oscillation and the precise center frequencies may vary with time, and both of these phenomena may reveal important elements of the mechanism by which the field oscillation arises.

∗Richard Turner would like to thank the Laboratory for Computational Vision, New York University, New York, NY 10003-6603, USA, where he carried out this research.

Despite the fact that AFD has found a wide range of important applications, there are well-known problems with existing AFD algorithms [8, 1, 9, 10, 5]. Because of these problems, the Hilbert method, which recovers an amplitude from the magnitude of the analytic signal, is still considered to be the benchmark despite a number of limitations [11, 12]. In this paper, we show examples of demodulation of synthetic, audio, and hippocampal theta rhythm signals using various AFD techniques that highlight some of the anomalies associated with existing methods. Motivated by the deficiencies in the existing methods, this paper develops a probabilistic form of AFD.
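For reference, the Hilbert baseline mentioned above can be sketched in a few lines: build the analytic signal by zeroing the negative frequencies of the spectrum and doubling the positive ones, then read the envelope off its magnitude. This is a minimal pure-Python illustration using a naive DFT (even N assumed), not the implementation used in the paper.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def hilbert_afd(y):
    """Hilbert AFD: zero negative frequencies, double positive ones, and
    read amplitude/phase off the analytic signal a_t * exp(i*phi_t)."""
    N = len(y)
    Y = dft(y)
    H = ([Y[0]] + [2 * Y[k] for k in range(1, N // 2)] + [Y[N // 2]]
         + [0j] * (N - N // 2 - 1))
    z = idft(H)                       # analytic signal
    a = [abs(zt) for zt in z]         # envelope estimate
    phi = [cmath.phase(zt) for zt in z]
    return a, phi

# A unit-amplitude sinusoid should give an envelope of one everywhere.
N, f = 64, 5
y = [math.cos(2 * math.pi * f * t / N) for t in range(N)]
a, phi = hilbert_afd(y)
assert all(abs(at - 1.0) < 1e-8 for at in a)
```

In practice an FFT would be used in place of the O(N²) DFT; the spectral construction is the same.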
This development begins in the next section, where we reinterpret two existing probabilistic algorithms in the context of AFD. The limitations of these methods suggest an improved model (section 2), which we demonstrate on a range of synthetic and natural signals (sections 4 and 5).

1.1 Simple models for probabilistic amplitude and frequency demodulation

In this paper, we view demodulation as an estimation problem in which a signal is fit with a sinusoid of time-varying amplitude and phase,

y_t = ℜ(a_t exp(iφ_t)) + ε_t.  (1)

The expression also includes a noise term, which will be modeled as a zero-mean Gaussian with variance σ_y², that is p(ε_t) = Norm(ε_t; 0, σ_y²). We are interested in the situation where the IF of the sinusoid varies slowly around a mean value ω̄. In this case, the phase can be expressed in terms of the integrated mean frequency and a small perturbation, φ_t = ω̄t + θ_t. Clearly, the problem of inferring a_t and θ_t from y_t is ill-posed, and results will depend on the specification of prior distributions over the amplitude and phase-perturbation variables. Our goal in this paper is to specify such prior distributions directly, but this will require the development of new techniques to handle the resulting non-linearities. A simpler alternative is to generate the sinusoidal signal from a rotating two-dimensional phasor. For example, re-parametrizing the likelihood in terms of the components x_{1,t} = a_t cos(θ_t) and x_{2,t} = a_t sin(θ_t) yields a linear likelihood function,

y_t = a_t (cos(ω̄t) cos(θ_t) − sin(ω̄t) sin(θ_t)) + ε_t = cos(ω̄t) x_{1,t} − sin(ω̄t) x_{2,t} + ε_t = w_tᵀ x_t + ε_t.

Here the phasor components, which have been collected into a vector x_tᵀ = [x_{1,t}, x_{2,t}], are multiplied by time-varying weights w_tᵀ = [cos(ω̄t), −sin(ω̄t)]. To complete the model, prior distributions can now be specified over x_t. One choice that results in a particularly simple inference algorithm is a Gaussian one-step auto-regressive (AR(1)) prior,

p(x_{k,t} | x_{k,t−1}) = Norm(x_{k,t}; λ x_{k,t−1}, σ_x²).
(2)

When the dynamical parameter tends to unity (λ → 1) and the dynamical noise variance to zero (σ_x² → 0), the dynamics become very slow, and this slowness is inherited by the phase perturbations and amplitudes. This model is an instance of the Bayesian Spectrum Estimation (BSE) model [13] (when λ = 1), but re-interpreted in terms of amplitude- and frequency-modulated sinusoids rather than fixed-frequency basis functions. As the model is a linear Gaussian state-space model, exact inference proceeds via the Kalman smoothing algorithm. Before discussing the properties of BSE in the context of fitting amplitude- and frequency-modulated sinusoids, we derive an equivalent model by returning to the likelihood function (eq. 1). Now the full complex representation of the sinusoid is retained. As before, the real part corresponds to the observed data, but the imaginary part is now treated explicitly as missing data,

y_t = ℜ(x_{1,t} cos(ω̄t) − x_{2,t} sin(ω̄t) + i x_{1,t} sin(ω̄t) + i x_{2,t} cos(ω̄t)) + ε_t.  (3)

The new form of the likelihood function can be expressed in vector form, y_t = [1, 0] z_t + ε_t, using a new set of variables z_t which are rotated versions of the original variables,

z_t = R(ω̄t) x_t,  where  R(θ) = [cos(θ)  −sin(θ); sin(θ)  cos(θ)].  (4)

An auto-regressive expression for the new variables z_t can now be found using the fact that rotation matrices commute, R(θ₁ + θ₂) = R(θ₁)R(θ₂) = R(θ₂)R(θ₁), together with the expression for the dynamics of the original variables x_t (eq. 2),

z_t = λ R(ω̄) R(ω̄(t − 1)) x_{t−1} + R(ω̄t) ε_t = λ R(ω̄) z_{t−1} + ε′_t,  (5)

where the noise is a zero-mean Gaussian with covariance ⟨ε′_t ε′_tᵀ⟩ = R(ω̄t) ⟨ε_t ε_tᵀ⟩ R(ω̄t)ᵀ = σ_x² I. This equivalent formulation of the BSE model is called the Probabilistic Phase Vocoder (PPV) [14]. Again, exact inference is possible using the Kalman smoothing algorithm.

1.2 Problems with simple models for probabilistic amplitude and frequency demodulation

BSE-PPV is used to demodulate synthetic and natural signals in Figs. 1, 2 and 7.
The decomposition is compared to the Hilbert method. These examples immediately reveal several problems with BSE-PPV. Perhaps most unsatisfactory is the fact that the IF estimates are often ill-behaved, to the extent that they go negative, especially in regions where the amplitude of the signal is low. It is easy to understand why this occurs by considering the prior distribution over amplitude and phase implied by our choice of prior distribution over x_t (or equivalently over z_t),

p(a_t, φ_t | a_{t−1}, φ_{t−1}) = (a_t / (2π σ_x²)) exp( −(1/(2σ_x²)) (a_t² + λ² a_{t−1}²) + (λ/σ_x²) a_t a_{t−1} cos(φ_t − φ_{t−1} − ω̄) ).  (6)

Phase and amplitude are dependent in the implied distribution, which is conditionally a uniform distribution over phase when the amplitude is zero, and a strongly peaked von Mises distribution [15] when the amplitude is large. Consequently, the model favors more highly variable IFs at low amplitudes. In some applications this may be desirable, but for signals like sounds it presents a problem. First, it may assign substantial probability to unphysical negative IFs. Second, the same noiseless signal at different intensities will yield different estimated IF content. Third, the complex coupling makes it difficult to select domain-appropriate time-scale parameters. Consideration of IF reveals yet another problem. When the phase perturbations vary slowly (λ → 1), there is no correlation between successive IFs (⟨ω_t ω_{t−1}⟩ − ⟨ω_t⟩⟨ω_{t−1}⟩ → 0). One of the main goals of the model was to capture correlated IFs through time, and the solution is to move to priors with higher-order temporal dependencies. In the next section we will propose a new model for PAFD which addresses these problems, retaining the same likelihood function but modifying the prior to include independent distributions over the phase and amplitude variables.
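The amplitude-phase coupling in eq. (6) is easy to see numerically: conditioned on the amplitudes, the phase increment follows a von Mises distribution with concentration κ = λ a_t a_{t−1}/σ_x², so the mean resultant length (a standard measure of phase concentration) grows with amplitude. A small sketch with invented parameter values:

```python
import math

def mean_resultant_length(kappa, n=2000):
    """R = E[cos(theta)] under vonMises(theta; kappa, 0), by numerical
    integration; weights are stabilized by subtracting the max exponent."""
    ts = [-math.pi + 2 * math.pi * (i + 0.5) / n for i in range(n)]
    w = [math.exp(kappa * (math.cos(t) - 1.0)) for t in ts]
    return sum(wi * math.cos(t) for wi, t in zip(w, ts)) / sum(w)

lam, sigma2_x = 0.99, 0.01          # illustrative dynamics parameters
Rs = []
for amp in [0.05, 0.5, 2.0]:        # amplitude levels (take a_t = a_{t-1})
    kappa = lam * amp * amp / sigma2_x   # effective von Mises concentration
    Rs.append(mean_resultant_length(kappa))
# R grows with amplitude: at low amplitude the phase increment is nearly
# uniform, which is why IF estimates misbehave in low-amplitude regions.
assert Rs[0] < Rs[1] < Rs[2]
```

This is exactly the pathology described above: the same dynamics imply almost no phase constraint when the envelope is small.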
Figure 1: Comparison of AFD methods on a sinusoidally amplitude- and frequency-modulated sinusoid in broad-band noise. Estimated values are shown in red. The gray areas show the region where the true amplitude falls below the noise floor (a < σ_y) and the estimates become less accurate. See section 4 for details.

2 PAFD using Auto-regressive and generalized von Mises distributions

We have argued that the amplitude and phase variables in a model for PAFD should be independently parametrized, but that this introduces difficulties as the likelihood is highly non-linear in these variables. This section and the next develop the tools necessary to handle this non-linearity.
It is therefore necessary to work with wrapped phases and a sensible starting point for a prior is thus the von Mises distribution, p(θ|k, µ) = 1 2πI0(k) exp(k cos(θ −µ)) = vonMises(θ; k, µ). (7) The two parameters, the concentration (k) and the mean (µ), determine the circular variance and mean of the distribution respectively. The normalizing constant is given by a modified Bessel function of the second kind, I0(k). Crucially for our purposes, the von Mises distribution can be obtained by taking a bivariate isotropic Gaussian with an arbitrary mean, and conditioning onto the unit-circle (this connects with BSE-PPV, see eq. 6). The Generalized von Mises distribution is formed in an identical way when the bivariate Gaussian is anisotropic [16]. These constructions suggest a simple extension to time-series data by conditioning a temporal bivariate Gaussian time-series onto the unit circle at all sample times. For example, when two independent Gaussian AR(2) distributions are used to construct the prior we have, p(x1:2,1:T ) ∝ T Y t=1 1(x2 1,t + x2 2,t = 1) 2 Y m=1 Norm(xm,t; λ1xm,t−1 + λ2xm,t−2, σ2 x). (8) where 1(x2 1,t + x2 2,t = 1) is an indicator function representing the unit circle constraint. Upon a change of variables x1,t = cos(θt), x2,t = sin(θt) this yields, p(θ1:T |k1, k2) ∝ T Y t=1 exp (k1 cos(θt −θt−1) + k2 cos(θt −θt−2)) , (9) 4 where k1 = λ1(1 −λ2)/σ2 x and k2 = λ2/σ2 x. One of the attractive features of this prior is that when it is combined with the likelihood (eq. 1) the resulting posterior distribution over phase variables is a temporal version of the Generalized von Mises distribution. That is, it can be expressed as a bivariate anisotropic Gaussian, which is constrained to the unit circle. It is this representation which will prove essential for inference. Having established a candidate prior over phases, we turn to the amplitude variables. 
With one eye upon the fact that the prior over phases can be interpreted as the product of a Gaussian and a constraint, we employ a prior of a similar form for the amplitude variables: a truncated Gaussian AR(τ) process,

p(a_{1:T} | λ_{1:τ}, σ²) ∝ ∏_{t=1}^T 1(a_t ⩾ 0) Norm( a_t; ∑_{t′=1}^τ λ_{t′} a_{t−t′}, σ² ).  (10)

The model formed from equations 1, 9 and 10 will be termed Probabilistic Amplitude and Frequency Demodulation. PAFD is closely related to the BSE-PPV model [13, 14]. Moreover, when the phase variables are drawn from a uniform distribution (k₁ = k₂ = 0), it reduces to the convex amplitude demodulation model [17], which is itself a form of probabilistic amplitude demodulation [18, 19, 20]. The AR prior over phases has also been used in a regression setting [21].

3 Inference via expectation propagation

The PAFD model introduced in the last section contains three separate types of non-linearity: the multiplicative interaction in the likelihood, the unit-circle constraint, and the positivity constraint. Of these, the circular constraint is the most challenging, as the development of general-purpose machine-learning methods for handling hard, non-convex constraints is an open research problem. Following [22], we propose a novel method which uses expectation propagation (EP) [23] to replace the hard constraints with soft, local Gaussian approximations which are iteratively refined. In order to apply EP, the model is first rewritten in a simpler form. Making use of the fact that an AR(τ) process can be rewritten as an equivalent multi-dimensional AR(1) model with τ states, we concatenate the latent variables into an augmented state vector, s_tᵀ = [a_t, a_{t−1}, . . .
, a_{t−τ+1}, x_{1,t}, x_{2,t}, x_{1,t−1}, x_{2,t−1}], and express the model as a product of clique potentials in terms of this variable,

p(y_{1:T}, s_{1:T}) ∝ ∏_{t=1}^T π_t(s_t, s_{t−1}) ψ_t(s_{1,t}, s_{1+τ,t}, s_{2+τ,t}),

where

π_t(s_t, s_{t−1}) = Norm(s_t; Λ_s s_{t−1}, Σ_s),
ψ_t(a_t, x_{1,t}, x_{2,t}) = Norm( y_t; a_t (cos(ω̄t) x_{1,t} − sin(ω̄t) x_{2,t}), σ_y² ) 1(a_t ⩾ 0) 1(x_{1,t}² + x_{2,t}² = 1).

(See the supplementary material for details of the dynamical matrices Λ_s and Σ_s.) In this new form the constraints have been incorporated with the non-linear likelihood into the potential ψ_t, leaving a standard Gaussian dynamical potential π_t(s_t, s_{t−1}). Using EP, we approximate the posterior distribution using a product of forward, backward and constrained-likelihood messages [24],

q(s_{1:T}) = ∏_{t=1}^T α_t(s_t) β_t(s_t) ψ̃_t(a_t, x_{1,t}, x_{2,t}) = ∏_{t=1}^T q_t(s_t).  (11)

The messages should be interpreted as follows: α_t(s_t) is the effect of π_t(s_{t−1}, s_t) and q(s_{t−1}) on the belief q(s_t), whilst β_t(s_t) is the effect of π_{t+1}(s_t, s_{t+1}) and q(s_{t+1}) on the belief q(s_t). Finally, ψ̃_t(a_t, x_{1,t}, x_{2,t}) is the effect of the likelihood and the constraints on the belief q(s_t). All of these messages will be un-normalized Gaussians. The updates for the messages can be found by removing from q(s_{1:T}) the messages that correspond to the effect of a particular potential, replacing them with the corresponding potential, and then updating the deleted messages by moment matching the two distributions. The updates for the forward and backward messages are a straightforward application of EP and result in updates that are nearly identical to those used for Kalman smoothing. The update for the constrained-likelihood potential is more complicated: update ψ̃_t such that the moments of q(s_t) match those of

p̂_ψ(s_t) = α_t(s_t) β_t(s_t) ψ_t(a_t, x_{1,t}, x_{2,t}).  (12)

The difficulty is the moment computation, which we evaluate in two stages. First, we integrate over the amplitude variable, which involves computing the moments of a truncated Gaussian and is therefore computationally efficient.
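The first stage relies on standard moment formulas for a Gaussian truncated to the positive half-line. A small self-contained sketch (not the paper's code), checked against brute-force numerical integration:

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def trunc_gauss_moments(mu, sigma):
    """Mean and variance of N(mu, sigma^2) truncated to [0, inf)."""
    alpha = -mu / sigma
    h = norm_pdf(alpha) / (1.0 - norm_cdf(alpha))   # inverse Mills ratio
    mean = mu + sigma * h
    var = sigma ** 2 * (1.0 + alpha * h - h * h)
    return mean, var

# Check against brute-force numerical integration on a midpoint grid.
mu, sigma = 0.3, 1.2
xs = [(i + 0.5) * 1e-3 for i in range(20000)]       # grid over [0, 20)
w = [norm_pdf((x - mu) / sigma) for x in xs]
Z = sum(w)
m_num = sum(wi * x for wi, x in zip(w, xs)) / Z
v_num = sum(wi * (x - m_num) ** 2 for wi, x in zip(w, xs)) / Z
m, v = trunc_gauss_moments(mu, sigma)
assert abs(m - m_num) < 1e-3 and abs(v - v_num) < 1e-3
```

Because these moments are available in closed form, only the phase integral in the second stage requires numerical quadrature.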
Second, we numerically integrate over the one-dimensional phase variable. For the details we again refer the reader to the supplementary material. A standard forward-backward message-update schedule was used. Adaptive damping improved the numerical stability of the algorithm substantially. The computational complexity of PAFD is O(T(N + τ³)), where N is the number of points used to compute the integral over the phase variable. For the experiments we used a second-order process over the amplitude variables (τ = 2) and N = 1000 integration points. In this case, the 16–32 forward-backward passes required for convergence took one minute on a modern laptop computer for signals of length T = 1000.

4 Application to synthetic signals

One of the main challenges posed by the evaluation of AFD algorithms is that the ground truth for real-world signals is unknown. This means that a quantitative comparison of different schemes must take an indirect approach. The first set of evaluations presented here uses synthetic signals, for which the ground truth is known. In particular, we consider amplitude- and frequency-modulated sinusoids, y_t = a_t cos(θ_t), where a_t = 1 + sin(2π f_a t) and (1/2π) dθ/dt = f̄ + ∆f sin(2π f_f t), which have been corrupted by Gaussian noise. Fig. 1 compares AFD of one such signal (f̄ = 50 Hz, f_a = 8 Hz, f_f = 5 Hz and ∆f = 25 Hz) by the Hilbert, BSE-PPV and PAFD methods. Fig. 3 summarizes the results at different noise levels in terms of the signal-to-noise ratio (SNR) of the estimated variables and the reconstructed signal, i.e.

SNR(a) = 10 log₁₀ ∑_{t=1}^T a_t² − 10 log₁₀ ∑_{t=1}^T (a_t − â_t)².

PAFD consistently outperforms the other methods by this measure. Furthermore, Fig. 4 demonstrates that PAFD can be used to accurately reconstruct missing sections of this signal, outperforming BSE-PPV.
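The evaluation metric above is just a log ratio of energies; a minimal sketch (example values are made up):

```python
import math

def snr_db(true_vals, est_vals):
    """SNR(a) = 10*log10(sum a_t^2) - 10*log10(sum (a_t - a_hat_t)^2)."""
    signal = sum(a * a for a in true_vals)
    error = sum((a - b) ** 2 for a, b in zip(true_vals, est_vals))
    return 10.0 * math.log10(signal / error)

a = [1.0, 2.0, 3.0]        # hypothetical true envelope samples
a_hat = [1.1, 1.9, 3.1]    # hypothetical estimate
print(f"{snr_db(a, a_hat):.2f} dB")
```

The same formula is applied to the IF estimates and the reconstructed signal to produce the three panels of Fig. 3.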
Figure 3: Noisy synthetic data. SNR of estimated variables as a function of the SNR of the signal. Envelopes (left), IFs (center) and denoised signal (right). Solid markers denote examples in Fig. 1.

5 Application to real world signals

Having validated PAFD on simple synthetic examples, we now consider real-world signals. Birdsong is used as a prototypical signal as it has strong frequency-modulation content. We isolate a 300 ms component of a starling song using a bandpass filter and apply AFD. Fig. 2 shows that PAFD can track the underlying frequency modulation even though there is noise in the signal which causes the other methods to fail. This example forms the basis of two important robustness and consistency tests. In the first, spectrally matched noise is added to the signal and the IFs and amplitudes are re-estimated and compared to those derived from the clean signal. Fig. 5 shows that the PAFD method is considerably more robust to this manipulation than both the Hilbert and BSE-PPV methods. In the second test, regions of the signal are removed and the model's predictions for the missing regions are compared to the estimates derived from the clean signal (see Fig. 6). Once again PAFD is more accurate. As a final test of PAFD we consider the important neuroscientific task of estimating the phase, equivalently the IF, of theta oscillations in an EEG signal. The EEG signal typically contains broadband noise, and so a conventional analysis applies a band-pass filter before using the Hilbert method to estimate the IF. Although this improves the estimates markedly, the noise component cannot be completely eradicated, which leads to artifacts in the IF estimates (see Fig. 7). In contrast,
In contrast 6 time /ms 0 20 40 60 −50 0 50 100 PAFD BSE−PPV time /ms 0 20 40 60 −20 0 20 40 60 80 time /ms 0 20 40 60 −20 0 20 40 60 80 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 −2 0 2 freq. /Hz 0 0.2 0.4 0 50 100 time /s 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 −2 0 2 0 0.2 0.4 0 50 100 freq. /Hz time /s SNR ˆω /dB SNR ˆa /dB SNR ˆy /dB y ˆya ˆa ωˆω Figure 4: Missing synthetic data experiments. TOP: SNR of estimated variables as a function of gap duration in the input signal. Envelopes (left), IFs (center) and denoised signal (right). Solid markers indicate the examples shown in the bottom rows of the figure. BOTTOM: Two examples of PAFD reconstruction. Light gray regions indicate missing sections of the signal. 10 15 20 25 30 35 10 15 20 25 30 35 40 SNR signal /dB Hilbert BSE−PPV PAFD 10 15 20 25 30 35 −10 −5 0 5 10 15 20 SNR signal /dB SNR ˆω /dB SNR ˆa /dB Figure 5: Noisy bird song experiments. SNR of estimated variables as compared to those estimated from the clean signal, as a function of the SNR of the input signal. Envelopes (left), IFs (right). PAFD returns sensible estimates from both the filtered and original signal. Critically, both estimates are similar to one another suggesting the new estimation scheme is reliable. 6 Conclusion Amplitude and frequency demodulation is a difficult, ill-posed estimation problem. We have developed a new inferential solution called probabilistic amplitude and frequency demodulation which employs a von Mises time-series prior over phase, constructed by conditioning a bivariate Gaussian auto-regressive distribution onto the unit circle. The construction naturally leads to an expectation propagation inference scheme which approximates the hard constraints using soft local Gaussians. 7 time /ms 2.8 12.5 0 20 40 60 PAFD BSE−PPV time /ms 2.8 12.5 0 20 40 60 time /ms 2.8 12.5 0 20 40 0 0.005 0.01 0.015 0.02 0.025 0.03 −2 0 2 freq. 
Figure 6: Missing natural data experiments. Top: SNR of estimated variables as a function of gap duration in the input signal. Envelopes (left), IFs (center) and denoised signal (right). Solid markers indicate the examples shown in the bottom rows of the figure. Bottom: Two examples of PAFD reconstruction. Light gray regions indicate missing sections of the signal.

Figure 7: Comparison of AFD methods on EEG data. The left hand side shows estimates derived from the raw EEG signal, whilst the right shows estimates derived from a band-pass filtered version. The gray areas show the region where the true amplitude falls below the noise floor (a < σ_y), where conventional methods fail.

We have demonstrated the utility of the new method on synthetic and natural signals, where it outperformed conventional approaches. Future research will consider extensions of the model to multiple sinusoids, and learning the model parameters so that the algorithm can adapt to novel signals.

Acknowledgments

Richard Turner was funded by the EPSRC, and Maneesh Sahani by the Gatsby Charitable Foundation.

References
[1] P. J. Loughlin and B. Tacer. On the amplitude- and frequency-modulation decomposition of signals. The Journal of the Acoustical Society of America, 100(3):1594–1601, 1996.
[2] J. L. Flanagan. Parametric coding of speech spectra. The Journal of the Acoustical Society of America, 68:412–419, 1980.
[3] P. Clark and L. E. Atlas. Time-frequency coherent modulation filtering of nonstationary signals.
Multi-Bandit Best Arm Identification

Victor Gabillon, Mohammad Ghavamzadeh, Alessandro Lazaric
INRIA Lille - Nord Europe, Team SequeL
{victor.gabillon,mohammad.ghavamzadeh,alessandro.lazaric}@inria.fr

Sébastien Bubeck
Department of Operations Research and Financial Engineering, Princeton University
sbubeck@princeton.edu

Abstract

We study the problem of identifying the best arm in each of the bandits in a multi-bandit multi-armed setting. We first propose an algorithm called Gap-based Exploration (GapE) that focuses on the arms whose mean is close to the mean of the best arm in the same bandit (i.e., small gap). We then introduce an algorithm, called GapE-V, which takes into account the variance of the arms in addition to their gap. We prove an upper-bound on the probability of error for both algorithms. Since GapE and GapE-V need to tune an exploration parameter that depends on the complexity of the problem, which is often unknown in advance, we also introduce variations of these algorithms that estimate this complexity online. Finally, we evaluate the performance of these algorithms and compare them to other allocation strategies on a number of synthetic problems.

1 Introduction

Consider a clinical problem with M subpopulations, in which one should decide between Km options for treating subjects from each subpopulation m. A subpopulation may correspond to patients with a particular gene biomarker (or other risk categories) and the treatment options are the available treatments for a disease. The main objective here is to construct a rule which recommends the best treatment for each of the subpopulations. These rules are usually constructed using data from clinical trials that are generally costly to run. Therefore, it is important to distribute the trial resources wisely so that the devised rule yields a good performance.
Since it may take significantly more resources to find the best treatment for one subpopulation than for the others, the common strategy of enrolling patients as they arrive may not yield a good overall performance. Moreover, applying treatment options uniformly at random within a subpopulation would not only waste trial resources, but might also run the risk of finding a bad treatment for that subpopulation. This problem can be formulated as best arm identification over M multi-armed bandits [1], which itself can be seen as the problem of pure exploration [4] over multiple bandits. In this formulation, each subpopulation is considered as a multi-armed bandit, each treatment as an arm, and trying a medication on a patient as a pull, and we are asked to recommend an arm for each bandit after a given number of pulls (the budget). The evaluation can be based on 1) the average over the bandits of the reward of the recommended arms, 2) the average probability of error (not selecting the best arm), or 3) the maximum probability of error. Note that this setting is different from the standard multi-armed bandit problem, in which the goal is to maximize the cumulative sum of rewards (see e.g., [13, 3]). The pure exploration problem is about designing strategies that make the best use of the limited budget (e.g., the total number of patients that can be admitted to the clinical trial) in order to optimize the performance in a decision-making task. Audibert et al. [1] proposed two algorithms to address this problem: 1) a highly exploring strategy based on upper confidence bounds, called UCB-E, in which the optimal value of its parameter depends on a measure of the complexity of the problem, and 2) a parameter-free method based on progressively rejecting the arms which seem to be suboptimal, called Successive Rejects. They showed that both algorithms are nearly optimal, since their probability of returning the wrong arm decreases exponentially with the budget.
Racing algorithms (e.g., [10, 12]) and action-elimination algorithms [7] address this problem under a constraint on the accuracy in identifying the best arm, and they minimize the budget needed to achieve that accuracy. However, UCB-E and Successive Rejects are designed for a single bandit problem, and as we will discuss later, cannot be easily extended to the multi-bandit case studied in this paper. Deng et al. have recently proposed an active learning algorithm for resource allocation over multiple bandits [5]. However, they do not provide any theoretical analysis for their algorithm and only evaluate its performance empirically. Moreover, the target of their proposed algorithm is to minimize the maximum uncertainty in estimating the value of the arms for each bandit. Note that this is different from our target, which is to maximize the quality of the arms recommended for each bandit. In this paper, we study the problem of best-arm identification in a multi-armed multi-bandit setting under a fixed budget constraint, and propose an algorithm, called Gap-based Exploration (GapE), to solve it. The allocation strategy implemented by GapE focuses on the gaps of the arms, i.e., the difference between the mean of an arm and the mean of the best arm in the same bandit. The GapE-variance (GapE-V) algorithm extends this approach by also taking into account the variance of the arms. For both algorithms, we prove an upper-bound on the probability of error that decreases exponentially with the budget. Since both GapE and GapE-V need to tune an exploration parameter that depends on the complexity of the problem, which is rarely known in advance, we also introduce their adaptive versions. Finally, we evaluate the performance of these algorithms and compare them with the Uniform and Uniform+UCB-E strategies on a number of synthetic problems.
Our empirical results indicate that 1) GapE and GapE-V have a better performance than Uniform and Uniform+UCB-E, and 2) the adaptive versions of these algorithms match the performance of their non-adaptive counterparts.

2 Problem Setup

In this section, we introduce the notation used throughout the paper and formalize the multi-bandit best arm identification problem. Let M be the number of bandits and K be the number of arms for each bandit (we use indices m, p, q for the bandits and k, i, j for the arms). Each arm k of a bandit m is characterized by a distribution $\nu_{mk}$ bounded in $[0, b]$ with mean $\mu_{mk}$ and variance $\sigma^2_{mk}$. In the following, we assume that each bandit has a unique best arm. We denote by $\mu^*_m$ and $k^*_m$ the mean and the index of the best arm of bandit m (i.e., $\mu^*_m = \max_{1\le k\le K} \mu_{mk}$ and $k^*_m = \arg\max_{1\le k\le K} \mu_{mk}$). In each bandit m, we define the gap of each arm as $\Delta_{mk} = |\max_{j\ne k} \mu_{mj} - \mu_{mk}|$. The clinical trial problem described in Sec. 1 can be formalized as a game between a stochastic multi-bandit environment and a forecaster, where the distributions $\{\nu_{mk}\}$ are unknown to the forecaster. At each round $t = 1, \ldots, n$, the forecaster pulls a bandit-arm pair $I(t) = (m, k)$ and observes a sample drawn from the distribution $\nu_{I(t)}$, independent from the past. The forecaster estimates the expected value of each arm by computing the average of the samples observed over time. Let $T_{mk}(t)$ be the number of times that arm k of bandit m has been pulled by the end of round t; then the mean of this arm is estimated as $\hat\mu_{mk}(t) = \frac{1}{T_{mk}(t)} \sum_{s=1}^{T_{mk}(t)} X_{mk}(s)$, where $X_{mk}(s)$ is the s-th sample observed from $\nu_{mk}$. Given the previous definitions, we define the estimated gaps as $\hat\Delta_{mk}(t) = |\max_{j\ne k} \hat\mu_{mj}(t) - \hat\mu_{mk}(t)|$. At the end of round n, the forecaster returns for each bandit m the arm with the highest estimated mean, i.e., $J_m(n) = \arg\max_k \hat\mu_{mk}(n)$, and incurs a regret

$$r(n) = \frac{1}{M} \sum_{m=1}^{M} r_m(n) = \frac{1}{M} \sum_{m=1}^{M} \big(\mu^*_m - \mu_{m J_m(n)}\big).$$
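To make the notation concrete, here is a small NumPy sketch (our own illustrative code, not from the paper) that computes the gaps $\Delta_{mk}$ and the simple regret of a recommendation, using the Bernoulli means of Problem 1 from Sec. 4:

```python
import numpy as np

# Illustrative sketch (assumptions: Problem 1 means from Sec. 4, b = 1).
mu = np.array([[0.5, 0.45, 0.4, 0.3],   # bandit 1
               [0.5, 0.3, 0.2, 0.1]])   # bandit 2
M, K = mu.shape

# Gap of arm k in bandit m: Delta_mk = |max_{j != k} mu_mj - mu_mk|.
gaps = np.array([[abs(np.delete(mu[m], k).max() - mu[m, k])
                  for k in range(K)] for m in range(M)])

def simple_regret(J):
    """Regret r(n) of recommending arm J[m] in each bandit m."""
    return np.mean([mu[m].max() - mu[m, J[m]] for m in range(M)])
```

Note that the best arm of each bandit has the same gap as the second-best arm (here 0.05 in bandit 1 and 0.2 in bandit 2), a point that matters in Sec. 3.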
As discussed in the introduction, other performance measures can be defined for this problem. In some applications, returning the wrong arm is considered as an error independently from its regret, and thus, the objective is to minimize the average probability of error

$$e(n) = \frac{1}{M} \sum_{m=1}^{M} e_m(n) = \frac{1}{M} \sum_{m=1}^{M} \mathbb{P}\big(J_m(n) \ne k^*_m\big).$$

Finally, in problems similar to the clinical trial, a reasonable objective is to return the right treatment for all the genetic profiles and not just to have a small average probability of error. In this case, the global performance of the forecaster can be measured as

$$\ell(n) = \max_m \ell_m(n) = \max_m \mathbb{P}\big(J_m(n) \ne k^*_m\big).$$

It is interesting to note the relationship between these three performance measures: $\min_m \Delta_m \cdot e(n) \le \mathbb{E}[r(n)] \le b\, e(n) \le b\, \ell(n)$, where the expectation in the regret is w.r.t. the random samples. As a result, any algorithm minimizing the worst-case probability of error $\ell(n)$ also controls the average probability of error $e(n)$ and the simple regret $\mathbb{E}[r(n)]$. Note that the algorithms introduced in this paper directly target the problem of minimizing $\ell(n)$.

Parameters: number of rounds n, exploration parameter a, maximum range b
Initialize: $T_{mk}(0) = 0$, $\hat\Delta_{mk}(0) = 0$ for all bandit-arm pairs (m, k)
for $t = 1, 2, \ldots, n$ do
  Compute $B_{mk}(t) = -\hat\Delta_{mk}(t-1) + b \sqrt{a / T_{mk}(t-1)}$ for all bandit-arm pairs (m, k)
  Draw $I(t) \in \arg\max_{m,k} B_{mk}(t)$
  Observe $X_{I(t)}\big(T_{I(t)}(t-1) + 1\big) \sim \nu_{I(t)}$
  Update $T_{I(t)}(t) = T_{I(t)}(t-1) + 1$ and $\hat\Delta_{mk}(t)$ for all k of the selected bandit
end for
Return $J_m(n) \in \arg\max_{k \in \{1,\ldots,K\}} \hat\mu_{mk}(n)$ for all $m \in \{1, \ldots, M\}$

Figure 1: The pseudo-code of the Gap-based Exploration (GapE) algorithm.

3 The Gap-based Exploration Algorithm

Fig. 1 contains the pseudo-code of the gap-based exploration (GapE) algorithm. GapE flattens the bandit-arm structure and reduces it to a single-bandit problem with MK arms.
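The loop of Fig. 1 can be sketched in a few lines of NumPy. This is our own illustrative implementation, not the authors' code: rewards are assumed Bernoulli for the simulation, unpulled arms are given an infinite index so that each arm is tried once, and ties are broken by first index.

```python
import numpy as np

def gape(mu, n, a, b=1.0, seed=0):
    """Sketch of the GapE allocation of Fig. 1 on simulated Bernoulli arms."""
    rng = np.random.default_rng(seed)
    M, K = mu.shape
    T = np.zeros((M, K))   # pull counts T_mk(t)
    S = np.zeros((M, K))   # reward sums
    for _ in range(n):
        mu_hat = np.divide(S, T, out=np.zeros_like(S), where=T > 0)
        # estimated gaps: |max_{j != k} mu_hat_mj - mu_hat_mk|
        gap_hat = np.array([[abs(np.delete(mu_hat[m], k).max() - mu_hat[m, k])
                             for k in range(K)] for m in range(M)])
        # B-index of Fig. 1; arms never pulled get an infinite index
        B = np.where(T > 0,
                     -gap_hat + b * np.sqrt(a / np.maximum(T, 1)),
                     np.inf)
        m, k = np.unravel_index(np.argmax(B), B.shape)
        S[m, k] += rng.random() < mu[m, k]   # simulated Bernoulli pull
        T[m, k] += 1
    mu_hat = np.divide(S, T, out=np.zeros_like(S), where=T > 0)
    return mu_hat.argmax(axis=1)   # J_m(n), one recommended arm per bandit

# Problem 1 of Sec. 4, with a = (4/9)(n - MK)/H as in Theorem 1
mu = np.array([[0.5, 0.45, 0.4, 0.3], [0.5, 0.3, 0.2, 0.1]])
gaps = np.array([[abs(np.delete(mu[m], k).max() - mu[m, k])
                  for k in range(4)] for m in range(2)])
H = (1.0 / gaps**2).sum()
J = gape(mu, n=700, a=(4 / 9) * (700 - 8) / H)
```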
At each time step t, the algorithm relies on the observations up to time t − 1 to build an index $B_{mk}(t)$ for each bandit-arm pair, and then selects the pair I(t) with the highest index. The index $B_{mk}$ consists of two terms. The first term is the negative of the estimated gap for arm k in bandit m. Similar to other upper-confidence bound (UCB) methods [3], the second term is an exploration term which forces the algorithm to pull arms that have been less explored. As a result, the algorithm tends to pull arms with a small estimated gap and a small number of pulls. The exploration parameter a tunes the level of exploration of the algorithm. As shown by the theoretical analysis of Sec. 3.1, if the time horizon n is known, a should be set to $a = \frac{4}{9}\frac{n - MK}{H}$, where $H = \sum_{m,k} b^2/\Delta^2_{mk}$ is the complexity of the problem (see Sec. 3.1 for further discussion). Note that GapE differs from most standard bandit strategies in the sense that the B-index of an arm depends explicitly on the statistics of the other arms. This feature makes the analysis of this algorithm much more involved. As we may notice from Fig. 1, GapE resembles the UCB-E algorithm [1] designed to solve the pure exploration problem in the single-bandit setting. Nonetheless, the use of the negative estimated gap $-\hat\Delta_{mk}$ instead of the estimated mean $\hat\mu_{mk}$ (used by UCB-E) is crucial in the multi-bandit setting. In the single-bandit problem, since the best and second best arms have the same gap ($\Delta_{mk^*_m} = \min_{k \ne k^*_m} \Delta_{mk}$), GapE considers them equivalent and tends to pull them the same number of times, while UCB-E tends to pull the best arm more often than the second best. Despite this difference, the performance of the two algorithms in predicting the best arm after n pulls would be the same.
This is due to the fact that the probability of error depends on the capability of the algorithm to distinguish between optimal and suboptimal arms, and this is not affected by a different allocation over the best and second best arms, as long as the number of pulls allocated to that pair is large enough w.r.t. their gap. Despite this similarity, the two approaches become completely different in the multi-bandit case. In this case, if we run UCB-E on all the MK arms, it tends to pull the arm with the highest mean over all the bandits, i.e., $k^* = \arg\max_{m,k} \mu_{mk}$, more often. As a result, it would be accurate in predicting the best arm $k^*$ over all bandits, but may have an arbitrarily bad performance in predicting the best arm of each bandit, and thus may incur a large error $\ell(n)$. On the other hand, GapE focuses on the arms with the smallest gaps. This way, it assigns more pulls to bandits whose optimal arms are difficult to identify (i.e., bandits with arms with small gaps) and, as shown in the next section, it achieves a high probability of identifying the best arm in each bandit.

3.1 Theoretical Analysis

In this section, we derive an upper-bound on the probability of error $\ell(n)$ for the GapE algorithm.

Theorem 1. If we run GapE with parameter $0 < a \le \frac{4}{9}\frac{n - MK}{H}$, then its probability of error satisfies

$$\ell(n) \le \mathbb{P}\big(\exists m : J_m(n) \ne k^*_m\big) \le 2MKn \exp\Big(-\frac{a}{64}\Big);$$

in particular, for $a = \frac{4}{9}\frac{n - MK}{H}$, we have $\ell(n) \le 2MKn \exp\big(-\frac{1}{144}\frac{n - MK}{H}\big)$.

Remark 1 (Analysis of the bound). If the time horizon n is known in advance, it is possible to set the exploration parameter a as a linear function of n, and as a result, the probability of error of GapE decreases exponentially with the time horizon. The other interesting aspect of the bound is the
If we denote by Hmk = b2/∆2 mk, the complexity of arm k in bandit m, it is clear from the definition of H that each arm has an additive impact on the overall complexity of the multi-bandit problem. Moreover, if we define the complexity of each bandit m as Hm = P k b2/∆2 mk (similar to the definition of complexity for UCB-E in [1]), the GapE complexity may be rewritten as H = P m Hm. This means that the complexity of GapE is simply the sum of the complexities of all the bandits. Remark 2 (Comparison with the static allocation strategy). The main objective of GapE is to tradeoff between allocating pulls according to the gaps (more precisely, according to the complexities Hmk) and the exploration needed to improve the accuracy of their estimates. If the gaps were known in advance, a nearly-optimal static allocation strategy assigns to each bandit-arm pair a number of pulls proportional to its complexity. Let us consider a strategy that pulls each arm a fixed number of times over the horizon n. The probability of error for this strategy may be bounded as ℓStatic(n) ≤P ∃m : Jm(n) ̸= k∗ m ≤ M X m=1 P Jm(n) ̸= k∗ m ≤ M X m=1 X k̸=k∗m P ˆµmk∗m(n) ≤ˆµmk(n) ≤ M X m=1 X k̸=k∗m exp −Tmk(n)∆2 mk b2 = M X m=1 X k̸=k∗m exp −Tmk(n)H−1 mk . (1) Given the constraint P mk Tmk(n) = n, the allocation minimizing the last term in Eq. 1 is T ∗ mk(n) = nHmk/H. We refer to this fixed strategy as StaticGap. Although this is not necessarily the optimal static strategy (T ∗ mk(n) minimizes an upper-bound), this allocation guarantees a probability of error smaller than MK exp(−n/H). Theorem 1 shows that, for n large enough, GapE achieves the same performance as the static allocation StaticGap. Remark 3 (Comparison with other allocation strategies). At the beginning of Sec. 3, we discussed the difference between GapE and UCB-E. Here we compare the bound reported in Theorem 1 with the performance of the Uniform and combined Uniform+UCB-E allocation strategies. 
In the uniform allocation strategy, the total budget n is uniformly split over all the bandits and arms. As a result, each bandit-arm pair is pulled $T_{mk}(n) = n/(MK)$ times. Using the same derivation as in Remark 2, the probability of error $\ell(n)$ of this strategy may be bounded as

$$\ell_{\text{Unif}}(n) \le \sum_{m=1}^{M} \sum_{k \ne k^*_m} \exp\Big(-\frac{n}{MK}\frac{\Delta^2_{mk}}{b^2}\Big) \le MK \exp\Big(-\frac{n}{MK \max_{m,k} H_{mk}}\Big).$$

In the Uniform+UCB-E allocation strategy, i.e., a two-level algorithm that first selects a bandit uniformly and then pulls arms within each bandit using UCB-E, the total number of pulls for each bandit m is $\sum_k T_{mk}(n) = n/M$, while the number of pulls $T_{mk}(n)$ over the arms in bandit m is determined by UCB-E. Thus, the probability of error of this strategy may be bounded as

$$\ell_{\text{Unif+UCB-E}}(n) \le \sum_{m=1}^{M} 2nK \exp\Big(-\frac{n/M - K}{18\, H_m}\Big) \le 2nMK \exp\Big(-\frac{n/M - K}{18 \max_m H_m}\Big),$$

where the first inequality follows from Theorem 1 in [1] (recall that $H_m = \sum_k b^2/\Delta^2_{mk}$). Let b = 1 (i.e., all the arms have distributions bounded in [0, 1]); up to constants and multiplicative factors in front of the exponentials, and if n is large enough compared to M and K (so as to approximate n/M − K and n − K by n), the probability of error of the three algorithms may be bounded as

$$\ell_{\text{Unif}}(n) \le \exp\Big(O\Big(-\frac{n}{MK \max_{m,k} H_{mk}}\Big)\Big), \qquad \ell_{\text{U+UCBE}}(n) \le \exp\Big(O\Big(-\frac{n}{M \max_m H_m}\Big)\Big), \qquad \ell_{\text{GapE}}(n) \le \exp\Big(O\Big(-\frac{n}{\sum_{m,k} H_{mk}}\Big)\Big).$$

By comparing the arguments of the exponential terms, we have the trivial sequence of inequalities $MK \max_{m,k} H_{mk} \ge M \max_m \sum_k H_{mk} \ge \sum_{m,k} H_{mk}$, which implies that the upper bound on the probability of error of GapE is usually significantly smaller. This relationship, which is confirmed by the experiments reported in Sec. 4, shows that GapE is able to adapt to the complexity H of the overall multi-bandit problem better than the other two allocation strategies.
In fact, while the performance of the Uniform strategy depends on the most complex arm over the bandits and the strategy Unif+UCB-E is affected by the most complex bandit, the performance of GapE depends on the sum of the complexities of all the arms involved in the pure exploration problem.

Proof of Theorem 1. Step 1. Let us consider the following event:

$$\mathcal{E} = \Big\{ \forall m \in \{1, \ldots, M\},\ \forall k \in \{1, \ldots, K\},\ \forall t \in \{1, \ldots, n\}: \big|\hat\mu_{mk}(t) - \mu_{mk}\big| < bc \sqrt{\frac{a}{T_{mk}(t)}} \Big\}.$$

From the Chernoff–Hoeffding inequality and a union bound, we have $\mathbb{P}(\mathcal{E}) \ge 1 - 2MKn \exp(-2ac^2)$. Now we would like to prove that on the event $\mathcal{E}$ we find the best arm for all the bandits, i.e., $J_m(n) = k^*_m$ for all $m \in \{1, \ldots, M\}$. Since $J_m(n)$ is the empirical best arm of bandit m, we should prove that for any $k \in \{1, \ldots, K\}$, $\hat\mu_{mk}(n) \le \hat\mu_{mk^*_m}(n)$. By upper-bounding the LHS and lower-bounding the RHS of this inequality, we note that it is enough to prove $bc\sqrt{a/T_{mk}(n)} \le \Delta_{mk}/2$ on the event $\mathcal{E}$, or equivalently, to prove that for any bandit-arm pair (m, k) we have $T_{mk}(n) \ge \frac{4ab^2c^2}{\Delta^2_{mk}}$.

Step 2. In this step, we show that in GapE, for any bandits (m, q) and arms (k, j), and for any $t \ge MK$, the following dependence between the numbers of pulls of the arms holds:

$$-\Delta_{mk} + (1+d)\, b \sqrt{\frac{a}{\max\big(T_{mk}(t) - 1,\ 1\big)}} \ \ge\ -\Delta_{qj} + (1-d)\, b \sqrt{\frac{a}{T_{qj}(t)}}, \quad (2)$$

where $d \in [0, 1]$. We prove this inequality by induction.

Base step. We know that after the first MK rounds of the GapE algorithm, all the arms have been pulled once, i.e., $T_{mk}(t) = 1$ for all m, k; thus, if $a \ge 1/(4d^2)$, the inequality (2) holds for t = MK.

Inductive step. Let us assume that (2) holds at time t − 1 and that we pull arm i of bandit p at time t, i.e., I(t) = (p, i). At time t, the inequality (2) then trivially holds for every choice of m, q, k, and j, except when (m, k) = (p, i). As a result, in the inductive step, we only need to prove that the following holds for any $q \in \{1, \ldots, M\}$ and $j \in \{1, \ldots, K\}$:

$$-\Delta_{pi} + (1+d)\, b \sqrt{\frac{a}{\max\big(T_{pi}(t) - 1,\ 1\big)}} \ \ge\ -\Delta_{qj} + (1-d)\, b \sqrt{\frac{a}{T_{qj}(t)}}. \quad (3)$$

Since arm i of bandit p has been pulled at time t, we have that for any bandit-arm pair (q, j),

$$-\hat\Delta_{pi}(t-1) + b \sqrt{\frac{a}{T_{pi}(t-1)}} \ \ge\ -\hat\Delta_{qj}(t-1) + b \sqrt{\frac{a}{T_{qj}(t-1)}}. \quad (4)$$

To prove (3), we first prove an upper-bound on $-\hat\Delta_{pi}(t-1)$ and a lower-bound on $-\hat\Delta_{qj}(t-1)$:

$$-\hat\Delta_{pi}(t-1) \le -\Delta_{pi} + \frac{2bc}{1-c} \sqrt{\frac{a}{T_{pi}(t) - 1}} \qquad \text{and} \qquad -\hat\Delta_{qj}(t-1) \ge -\Delta_{qj} - \frac{2\sqrt{2}\, bc}{1-d} \sqrt{\frac{a}{T_{qj}(t)}}. \quad (5)$$

We report the proofs of the inequalities in (5) in App. B of [8]. The inequality (3), and as a result the inductive step, is proved by replacing $-\hat\Delta_{pi}(t-1)$ and $-\hat\Delta_{qj}(t-1)$ in (4) with the bounds in (5), under the conditions $d \ge \frac{2c}{1-c}$ and $d \ge \frac{2\sqrt{2}\, c}{1-d}$. These conditions are satisfied by $d = 1/2$ and $c = \sqrt{2}/16$.

Step 3. In order to prove the condition on $T_{mk}(n)$ in Step 1, we need a lower-bound on the number of pulls of all the arms at time t = n. Let us assume that arm k of bandit m has been pulled fewer than $\frac{ab^2(1-d)^2}{\Delta^2_{mk}}$ times, which implies $-\Delta_{mk} + (1-d)\, b \sqrt{a/T_{mk}(n)} > 0$. From this and (2), we have $-\Delta_{qj} + (1+d)\, b \sqrt{\frac{a}{T_{qj}(n) - 1}} > 0$, or equivalently $T_{qj}(n) < \frac{ab^2(1+d)^2}{\Delta^2_{qj}} + 1$, for any pair (q, j). We also know that $\sum_{q,j} T_{qj}(n) = n$. From these, we deduce that $n - MK < ab^2(1+d)^2 \sum_{q,j} \frac{1}{\Delta^2_{qj}}$. So, if we select a such that $n - MK \ge ab^2(1+d)^2 \sum_{q,j} \frac{1}{\Delta^2_{qj}}$, we contradict the assumption that $T_{mk}(n) < \frac{ab^2(1-d)^2}{\Delta^2_{mk}}$, which means that $T_{mk}(n) \ge \frac{4ab^2c^2}{\Delta^2_{mk}}$ for any pair (m, k) whenever $1 - d \ge 2c$. This concludes the proof. The condition on a in the statement of the theorem follows from our choice of a in this step and the values of c and d from the inductive step.

3.2 Extensions

In this section, we propose two variants of the GapE algorithm, with the objective of extending its applicability and improving its performance.

GapE with variance (GapE-V). The allocation strategy implemented by GapE focuses only on the arms with small gaps and does not take their variance into consideration.
However, it is clear that arms with small variance, even if their gap is small, need only a few pulls to be correctly estimated. In order to take into account both the gaps and the variances of the arms, we introduce the GapE-variance (GapE-V) algorithm. Let $\hat\sigma^2_{mk}(t) = \frac{1}{T_{mk}(t)-1} \sum_{s=1}^{T_{mk}(t)} \big(X_{mk}(s) - \hat\mu_{mk}(t)\big)^2$ be the estimated variance of arm k of bandit m at the end of round t. GapE-V uses the following B-index for each arm:

$$B_{mk}(t) = -\hat\Delta_{mk}(t-1) + \sqrt{\frac{2a\, \hat\sigma^2_{mk}(t-1)}{T_{mk}(t-1)}} + \frac{7ab}{3\big(T_{mk}(t-1) - 1\big)}.$$

Note that the exploration term in the B-index now has two components: the first depends on the empirical variance and the second decreases as $O(1/T_{mk})$. As a result, arms with low variance are explored much less than in the GapE algorithm. Similar to the difference between UCB [3] and UCB-V [2], while the B-index of GapE is motivated by Hoeffding's inequality, the one of GapE-V is obtained using an empirical Bernstein inequality [11, 2]. The following performance bound can be proved for the GapE-V algorithm; we report the proof of Theorem 2 in App. C of [8].

Theorem 2. If GapE-V is run with parameter $0 < a \le \frac{8}{9}\frac{n - 2MK}{H^\sigma}$, then it satisfies

$$\ell(n) \le \mathbb{P}\big(\exists m : J_m(n) \ne k^*_m\big) \le 6nMK \exp\Big(-\frac{9a}{64 \times 64}\Big);$$

in particular, for $a = \frac{8}{9}\frac{n - 2MK}{H^\sigma}$, we have $\ell(n) \le 6nMK \exp\big(-\frac{1}{64 \times 8}\frac{n - 2MK}{H^\sigma}\big)$.

In Theorem 2, $H^\sigma$ is the complexity of the GapE-V algorithm, defined as

$$H^\sigma = \sum_{m=1}^{M} \sum_{k=1}^{K} \frac{\Big(\sigma_{mk} + \sqrt{\sigma^2_{mk} + (16/3)\, b\, \Delta_{mk}}\Big)^2}{\Delta^2_{mk}}.$$

Although the variance-complexity $H^\sigma$ could be larger than the complexity H used in GapE, whenever the variances of the arms are small compared to the range b of the distributions, we expect $H^\sigma$ to be smaller than H. Furthermore, if the arms have very different variances, GapE-V is expected to better capture the complexity of each arm and allocate the pulls accordingly.
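For one bandit-arm pair, the GapE-V index can be written down directly from the formula above. The function below is our own sketch (variable names are ours; it assumes at least two samples so that the unbiased variance is defined):

```python
import numpy as np

def gapev_index(gap_hat, x, a, b=1.0):
    """GapE-V B-index for one arm, given its estimated gap `gap_hat`,
    its observed samples `x` (len(x) >= 2), and the exploration parameter a."""
    T = len(x)
    var_hat = np.var(x, ddof=1)   # unbiased empirical variance
    return (-gap_hat
            + np.sqrt(2 * a * var_hat / T)
            + 7 * a * b / (3 * (T - 1)))
```

An arm whose samples are nearly constant (var_hat close to 0) keeps only the O(1/T) exploration term, so it is quickly abandoned once its gap is estimated.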
For instance, in the case where all the gaps are the same, GapE tends to allocate pulls proportionally to the complexities $H_{mk}$, and it would thus perform an almost uniform allocation over bandits and arms. On the other hand, the variances of the arms could be very heterogeneous, and GapE-V would adapt the allocation strategy by pulling more often the arms whose values are more uncertain.

Adaptive GapE and GapE-V. A drawback of GapE and GapE-V is that the exploration parameter a should be tuned according to the complexities H and $H^\sigma$ of the multi-bandit problem, which are rarely known in advance. A straightforward solution to this issue is to move to adaptive versions of these algorithms by substituting H and $H^\sigma$ with suitable estimates $\hat H$ and $\hat H^\sigma$. At each step t of the adaptive GapE and GapE-V algorithms, we estimate these complexities as

$$\hat H(t) = \sum_{i} \frac{b^2}{\text{UCB}_{\Delta i}(t)^2}, \qquad \hat H^\sigma(t) = \sum_{i} \frac{\Big(\text{LCB}_{\sigma i}(t) + \sqrt{\text{LCB}_{\sigma i}(t)^2 + (16/3)\, b\, \text{UCB}_{\Delta i}(t)}\Big)^2}{\text{UCB}_{\Delta i}(t)^2},$$

where the index i runs over all the MK bandit-arm pairs (m, k), and

$$\text{UCB}_{\Delta i}(t) = \hat\Delta_i(t-1) + \sqrt{\frac{1}{2\, T_i(t-1)}} \qquad \text{and} \qquad \text{LCB}_{\sigma i}(t) = \max\Big(0,\ \hat\sigma_i(t-1) - \sqrt{\frac{2}{T_i(t-1) - 1}}\Big).$$

Similar to the adaptive version of UCB-E in [1], $\hat H$ and $\hat H^\sigma$ are lower-confidence bounds on the true complexities H and $H^\sigma$. Note that the GapE and GapE-V bounds, written for the optimal value of a, indicate an inverse relation between complexity and exploration. By using a lower-bound on the true H and $H^\sigma$, the algorithms tend to explore arms more uniformly, and this allows them to increase the accuracy of their estimated complexities. Although we do not analyze these adaptive algorithms theoretically, we empirically show in Sec. 4 that they are in fact able to match the performance of the GapE and GapE-V algorithms.
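These online estimates are straightforward to compute. The sketch below (our own code; arrays are flattened over the MK bandit-arm pairs and $T_i \ge 2$ is assumed) returns both $\hat H(t)$ and $\hat H^\sigma(t)$:

```python
import numpy as np

def estimated_complexities(gap_hat, sigma_hat, T, b=1.0):
    """Online lower-confidence estimates H_hat and H_sigma_hat of the
    complexities, from flattened per-pair gap/std estimates and pull counts."""
    ucb_gap = gap_hat + np.sqrt(1.0 / (2 * T))
    lcb_sig = np.maximum(0.0, sigma_hat - np.sqrt(2.0 / (T - 1)))
    H_hat = np.sum(b**2 / ucb_gap**2)
    H_sigma_hat = np.sum(
        (lcb_sig + np.sqrt(lcb_sig**2 + (16 / 3) * b * ucb_gap))**2
        / ucb_gap**2)
    return H_hat, H_sigma_hat
```

Since the gaps are over-estimated (UCB) and the standard deviations under-estimated (LCB), both quantities under-estimate the true complexities, which pushes the algorithms toward more uniform exploration early on.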
4 Numerical Simulations

In this section, we report numerical simulations of the gap-based algorithms presented in this paper, GapE and GapE-V, and their adaptive versions A-GapE and A-GapE-V, and compare them with the Unif and Unif+UCB-E algorithms introduced in Sec. 3.1.

Figure 2: (left) Problem 1: Comparison between GapE, adaptive GapE, and the uniform strategies. (right) Problem 2: Comparison between GapE, GapE-V, and adaptive GapE-V algorithms.

Figure 3: Performance of the algorithms in Problem 3.

The results of our experiments, both those in the paper and those in App. A of [8], indicate that 1) GapE successfully adapts its allocation strategy to the complexity of each bandit and outperforms the uniform allocation strategies, 2) the use of the empirical variance in GapE-V can significantly improve the performance over GapE, and 3) the adaptive versions of GapE and GapE-V, which estimate the complexities H and $H^\sigma$ online, attain the same performance as the basic algorithms, which receive H and $H^\sigma$ as an input.

Experimental setting. We use the following three problems in our experiments. Note that b = 1 and that a Rademacher distribution with parameters (x, y) takes value x or y with probability 1/2.

• Problem 1. n = 700, M = 2, K = 4. The arms have Bernoulli distributions with parameters: bandit 1 = (0.5, 0.45, 0.4, 0.3), bandit 2 = (0.5, 0.3, 0.2, 0.1).

• Problem 2. n = 1000, M = 2, K = 4.
The arms have Rademacher distributions with parameters (x, y): bandit 1 = {(0, 1.0), (0.45, 0.45), (0.25, 0.65), (0, 0.9)} and bandit 2 = {(0.4, 0.6), (0.45, 0.45), (0.35, 0.55), (0.25, 0.65)}.

• Problem 3. n = 1400, M = 4, K = 4. The arms have Rademacher distributions with parameters (x, y): bandit 1 = {(0, 1.0), (0.45, 0.45), (0.25, 0.65), (0, 0.9)}, bandit 2 = {(0.4, 0.6), (0.45, 0.45), (0.35, 0.55), (0.25, 0.65)}, bandit 3 = {(0, 1.0), (0.45, 0.45), (0.25, 0.65), (0, 0.9)}, and bandit 4 = {(0.4, 0.6), (0.45, 0.45), (0.35, 0.55), (0.25, 0.65)}.

All the algorithms, except the uniform allocation, have an exploration parameter a. The theoretical analysis suggests that a should be proportional to $\frac{n}{H}$. Although a could be optimized according to the bound, since the constants in the analysis are not accurate, we run the algorithms with $a = \eta \frac{n}{H}$, where η is a parameter that is empirically tuned (in the experiments we report four different values of η). If H correctly defines the complexity of the exploration problem (i.e., the number of samples needed to find the best arms with high probability), η should simply correct the inaccuracy of the constants in the analysis, and thus the range of its nearly-optimal values should be constant across different problems. In Unif+UCB-E, UCB-E is run with a budget of n/M and the same parameter η for all the bandits. Finally, we set $n \simeq H^\sigma$, since we expect $H^\sigma$ to roughly capture the number of pulls necessary to solve the pure exploration problem with high probability. In Figs. 2 and 3, we report the performance $\ell(n)$, i.e., the probability of failing to identify the best arm in all the bandits after n rounds, of the gap-based algorithms as well as the Unif and Unif+UCB-E strategies. The results are averaged over $10^5$ runs, and the error bars correspond to three times the estimated standard deviation. In all the figures, the performance of Unif is reported as a horizontal dashed line. The left panel of Fig.
2 displays the performance of Unif+UCB-E, GapE, and A-GapE in Problem 1. As expected, Unif+UCB-E has a better performance (23.9% probability of error) than Unif (29.4% probability of error), since it adapts the allocation within each bandit so as to pull the nearly-optimal arms more often. However, the two bandit problems are not equally difficult. In fact, their complexities are very different ($H_1 \simeq 925$ and $H_2 \simeq 67$), and thus many fewer samples are needed to identify the best arm in the second bandit than in the first one. Unlike Unif+UCB-E, GapE adapts its allocation strategy to the complexities of the bandits (on average, only 19% of the pulls are allocated to the second bandit), and at the same time to the arm complexities within each bandit (in the first bandit, the average allocation of GapE is (37%, 36%, 20%, 7%)). As a result, GapE has a probability of error of 15.7%, which represents a significant improvement over Unif+UCB-E. The right panel of Fig. 2 compares the performance of GapE, GapE-V, and A-GapE-V in Problem 2. In this problem, all the gaps are equal ($\Delta_{mk} = 0.05$), thus all the arms (and bandits) have the same complexity $H_{mk} = 400$. As a result, GapE tends to implement a nearly uniform allocation, which results in a small difference between Unif and GapE (28% and 25% probability of error, respectively). The reason why GapE is still able to improve over Unif may be explained by the difference between static and dynamic allocation strategies, and it is further investigated in App. A of [8]. Unlike the gaps, the variances of the arms are extremely heterogeneous. In fact, the variances of the arms of bandit 1 are bigger than those of bandit 2, thus making it harder to solve. This difference is captured by the definition of $H^\sigma$ ($H^\sigma_1 \simeq 1400 > H^\sigma_2 \simeq 600$). Note also that $H^\sigma \le H$. As discussed in Sec. 3.2, since GapE-V takes into account the empirical variance of the arms, it is able to adapt to the complexity $H^\sigma_{mk}$ of each bandit-arm pair and to focus more on uncertain arms.
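As a sanity check, the variance-complexities just quoted can be recomputed directly from the Rademacher parameters of Problem 2 (our own code; a Rademacher(x, y) arm has mean (x + y)/2 and standard deviation (y − x)/2, and b = 1). The computed values, roughly 1443 and 648, match the $H^\sigma_1 \simeq 1400$ and $H^\sigma_2 \simeq 600$ reported above.

```python
import numpy as np

# Recompute H^sigma_m for the two bandits of Problem 2 from the definition
# in Theorem 2 (b = 1; Rademacher(x, y): mean (x+y)/2, std (y-x)/2).
b = 1.0
bandits = [
    [(0, 1.0), (0.45, 0.45), (0.25, 0.65), (0, 0.9)],        # bandit 1
    [(0.4, 0.6), (0.45, 0.45), (0.35, 0.55), (0.25, 0.65)],  # bandit 2
]
H_sigma = []
for arms in bandits:
    mean = np.array([(x + y) / 2 for x, y in arms])
    sig = np.array([(y - x) / 2 for x, y in arms])
    gap = np.array([abs(np.delete(mean, k).max() - mean[k])
                    for k in range(len(arms))])
    H_sigma.append(np.sum((sig + np.sqrt(sig**2 + (16 / 3) * b * gap))**2
                          / gap**2))
```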
GapE-V improves the final accuracy by almost 10% w.r.t. GapE. From both panels of Fig. 2, we also notice that the adaptive algorithms achieve performance similar to their non-adaptive counterparts. Finally, we notice that a good choice of the parameter η for GapE-V always lies between 2 and 4 (see also [8] for additional experiments), while GapE needs η to be tuned more carefully, particularly in Problem 2, where large values of η try to compensate for the fact that H does not successfully capture the real complexity of the problem. This further strengthens the intuition that Hσ is a more accurate measure of the complexity of the multi-bandit pure exploration problem. While Problems 1 and 2 are relatively simple, we report the results of the more complicated Problem 3 in Fig. 3. The experiment is designed so that the complexity w.r.t. the variance of each bandit and within each bandit is strongly heterogeneous. In this experiment, we also introduce UCBE-V, which extends UCB-E by taking into account the empirical variance, similarly to GapE-V. The results confirm the previous findings and show the improvement achieved by introducing empirical estimates of the variance and allocating non-uniformly over bandits. 5 Conclusion In this paper, we studied the problem of best arm identification in a multi-bandit multi-armed setting. We introduced a gap-based exploration algorithm, called GapE, and proved an upper bound on its probability of error. We extended the basic algorithm to also take into account the variance of the arms and proved an upper bound on its probability of error. We also introduced adaptive versions of these algorithms that estimate the complexity of the problem online. The numerical simulations confirmed the theoretical findings that GapE and GapE-V outperform other allocation strategies, and that their adaptive counterparts are able to estimate the complexity without worsening the global performance.
Although GapE does not know the gaps, the experimental results reported in [8] indicate that it might outperform a static allocation strategy that knows the gaps in advance, thus suggesting that an adaptive strategy could perform better than a static one. This observation calls for further investigation. Moreover, we plan to apply the algorithms introduced in this paper to the problem of rollout allocation for classification-based policy iteration in reinforcement learning [9, 6], where the goal is to identify the greedy action (arm) in each of the states (bandit) in a training set. Acknowledgments Experiments presented in this paper were carried out using the Grid’5000 experimental testbed (https://www.grid5000.fr). This work was supported by Ministry of Higher Education and Research, Nord-Pas de Calais Regional Council and FEDER through the “contrat de projets état région 2007–2013”, French National Research Agency (ANR) under project LAMPADA no. ANR-09-EMER-007, European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 231495, and PASCAL2 European Network of Excellence. References [1] J.-Y. Audibert, S. Bubeck, and R. Munos. Best arm identification in multi-armed bandits. In Proceedings of the Twenty-Third Annual Conference on Learning Theory, pages 41–53, 2010. [2] J.-Y. Audibert, R. Munos, and Cs. Szepesvári. Tuning bandit algorithms in stochastic environments. In Algorithmic Learning Theory, volume 4754 of Lecture Notes in Computer Science, pages 150–165. Springer Berlin/Heidelberg, 2007. [3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235–256, 2002. [4] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandit problems. In Proceedings of the Twentieth International Conference on Algorithmic Learning Theory, pages 23–37, 2009. [5] K. Deng, J. Pineau, and S. Murphy.
Active learning for personalizing treatment. In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, 2011. [6] C. Dimitrakakis and M. Lagoudakis. Rollout sampling approximate policy iteration. Machine Learning Journal, 72(3):157–171, 2008. [7] E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079–1105, 2006. [8] V. Gabillon, M. Ghavamzadeh, A. Lazaric, and S. Bubeck. Multi-bandit best arm identification. Technical Report 00632523, INRIA, 2011. [9] M. Lagoudakis and R. Parr. Reinforcement learning as classification: Leveraging modern classifiers. In Proceedings of the Twentieth International Conference on Machine Learning, pages 424–431, 2003. [10] O. Maron and A. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In Proceedings of Advances in Neural Information Processing Systems 6, 1993. [11] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In Proceedings of the Twenty-Second Annual Conference on Learning Theory, 2009. [12] V. Mnih, Cs. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In Proceedings of the Twenty-Fifth International Conference on Machine Learning, pages 672–679, 2008. [13] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
Nonlinear Inverse Reinforcement Learning with Gaussian Processes Sergey Levine Stanford University svlevine@cs.stanford.edu Zoran Popović University of Washington zoran@cs.washington.edu Vladlen Koltun Stanford University vladlen@cs.stanford.edu Abstract We present a probabilistic algorithm for nonlinear inverse reinforcement learning. The goal of inverse reinforcement learning is to learn the reward function in a Markov decision process from expert demonstrations. While most prior inverse reinforcement learning algorithms represent the reward as a linear combination of a set of features, we use Gaussian processes to learn the reward as a nonlinear function, while also determining the relevance of each feature to the expert’s policy. Our probabilistic algorithm allows complex behaviors to be captured from suboptimal stochastic demonstrations, while automatically balancing the simplicity of the learned reward structure against its consistency with the observed actions. 1 Introduction Inverse reinforcement learning (IRL) methods learn a reward function in a Markov decision process (MDP) from expert demonstrations, allowing the expert’s policy to be generalized to unobserved situations [7]. Each task is consistent with many reward functions, but not all rewards provide a compact, portable representation of the task, so the central challenge in IRL is to find a reward with meaningful structure [7]. Many prior methods impose structure by describing the reward as a linear combination of hand-selected features [1, 12]. In this paper, we extend the Gaussian process model to learn highly nonlinear reward functions that still compactly capture the demonstrated behavior. GP regression requires input-output pairs [11], and was previously used for value function approximation [10, 4, 2].
Our Gaussian Process Inverse Reinforcement Learning (GPIRL) algorithm only observes the expert’s actions, not the rewards, so we extend the GP model to account for the stochastic relationship between actions and underlying rewards. This allows GPIRL to balance the simplicity of the learned reward function against its consistency with the expert’s actions, without assuming the expert to be optimal. The learned GP kernel hyperparameters capture the structure of the reward, including the relevance of each feature. Once learned, the GP can recover the reward for the current state space, and can predict the reward for any unseen state space within the domain of the features. Previous IRL algorithms generally learn the reward as a linear combination of features, either by finding a reward under which the expert’s policy has a higher value than all other policies by a margin [7, 1, 12, 15], or else by maximizing the probability of the reward under a model of near-optimal expert behavior [6, 9, 17, 3]. While several margin-based methods learn nonlinear reward functions through feature construction [13, 14, 5], such methods assume optimal expert behavior. To the best of our knowledge, GPIRL is the first method to combine probabilistic reasoning about stochastic expert behavior with the ability to learn the reward as a nonlinear function of features, allowing it to outperform prior methods on tasks with inherently nonlinear rewards and suboptimal examples. 2 Inverse Reinforcement Learning Preliminaries A Markov decision process is defined as a tuple M = {S, A, T, γ, r}, where S is the state space, A is the set of actions, T^a_{ss′} is the probability of a transition from s ∈ S to s′ ∈ S under action a ∈ A, γ ∈ [0, 1) is the discount factor, and r is the reward function. The optimal policy π⋆ maximizes the expected discounted sum of rewards E[Σ_{t=0}^∞ γ^t r_{s_t} | π⋆].
In inverse reinforcement learning, the algorithm is presented with M\r, as well as expert demonstrations, denoted D = {ζ_1, ..., ζ_N}, where ζ_i is a path ζ_i = {(s_{i,0}, a_{i,0}), ..., (s_{i,T}, a_{i,T})}. The algorithm is also presented with features of the form f : S → R that can be used to represent the unknown reward r. IRL aims to find a reward function r under which the optimal policy matches the expert’s demonstrations. To this end, we could assume that the examples D are drawn from the optimal policy π⋆. However, real human demonstrations are rarely optimal. One approach to learning from a suboptimal expert is to use a probabilistic model of the expert’s behavior. We employ the maximum entropy IRL (MaxEnt) model [17], which is closely related to linearly-solvable MDPs [3], and has been used extensively to learn from human demonstrations [16, 17]. Under this model, the probability of taking a path ζ is proportional to the exponential of the rewards encountered along that path. This model is convenient for IRL, because its likelihood is differentiable [17], and a complete stochastic policy uniquely determines the reward function [3]. Intuitively, such a stochastic policy is more deterministic when the stakes are high, and more random when all choices have similar value. Under this policy, the probability of an action a in state s can be shown to be proportional to the exponential of the expected total reward after taking the action, denoted P(a|s) ∝ exp(Q^r_{sa}), where Q^r = r + γT V^r. The value function V^r is computed with a “soft” version of the familiar Bellman backup operator: V^r_s = log Σ_a exp Q^r_{sa}. The probability of a in state s is therefore normalized by exp(V^r_s), giving P(a|s) = exp(Q^r_{sa} − V^r_s). Detailed derivations of these equations can be found in prior work [16].
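The "soft" Bellman backup above is straightforward to implement for a tabular MDP. The following sketch is our illustration, not code from the paper: the array layout T[a, s, s'], the fixed iteration count, and the function name are assumptions. It iterates Q^r = r + γTV^r and V^r_s = log Σ_a exp Q^r_{sa}, then returns the stochastic policy P(a|s) = exp(Q^r_{sa} − V^r_s).

```python
import numpy as np

def soft_value_iteration(T, r, gamma=0.9, iters=200):
    """Soft Bellman backups for the MaxEnt model.
    T[a, s, s'] are transition probabilities, r[s] is a state reward.
    Returns Q (states x actions), V, and the policy P(a|s)."""
    A, S, _ = T.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = r[None, :] + gamma * T @ V        # Q[a, s] = r_s + γ Σ_s' T^a_{ss'} V_s'
        V = np.log(np.exp(Q).sum(axis=0))     # soft max (log-sum-exp) over actions
    policy = np.exp(Q - V[None, :])           # P(a|s) = exp(Q_sa - V_s)
    return Q.T, V, policy.T
```

Because V is a log-sum-exp of Q over actions, the policy rows sum to one by construction, and the policy becomes nearly deterministic where one action's Q-value dominates, matching the intuition in the text.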
The complete log likelihood of the data under r can be written as log P(D|r) = Σ_i Σ_t log P(a_{i,t}|s_{i,t}) = Σ_i Σ_t (Q^r_{s_{i,t} a_{i,t}} − V^r_{s_{i,t}}) (1) While we can maximize Equation 1 directly to obtain r, such a reward is unlikely to exhibit meaningful structure, and would not be portable to novel state spaces. Prior methods address this problem by representing r as a linear combination of a set of provided features [17]. However, if r is not linear in the features, such methods are not sufficiently expressive. In the next section, we describe how Gaussian processes can be used to learn r as a general nonlinear function of the features. 3 The Gaussian Process Inverse Reinforcement Learning Algorithm GPIRL represents the reward as a nonlinear function of feature values. This function is modeled as a Gaussian process, and its structure is determined by its kernel function. The Bayesian GP framework provides a principled method for learning the hyperparameters of this kernel, thereby learning the structure of the unknown reward. Since the reward is not known, we use Equation 1 to specify a distribution over GP outputs, and learn both the output values and the kernel function. In GP regression, we use noisy observations y of the true underlying outputs u. GPIRL directly learns the true outputs u, which represent the rewards associated with feature coordinates X_u. These coordinates may simply be the feature values of all states or, as discussed in Section 5, a subset of all states. The rewards at states that are not included in this subset are inferred by the GP. We also learn the kernel hyperparameters θ in order to recover the structure of the reward.
The most likely values of u and θ are found by maximizing their probability under the expert demonstrations D: P(u, θ|D, X_u) ∝ P(D, u, θ|X_u) = [ ∫_r P(D|r) P(r|u, θ, X_u) dr ] P(u, θ|X_u) (2) where the three factors are, respectively, the IRL term, the GP posterior, and the GP probability. The log of P(D|r) is given by Equation 1, the GP posterior P(r|u, θ, X_u) is the probability of a reward function under the current values of u and θ, and P(u, θ|X_u) is the prior probability of a particular assignment to u and θ. The log of P(u, θ|X_u) is the GP log marginal likelihood, which favors simple kernel functions and values of u that conform to the current kernel matrix [11]: log P(u, θ|X_u) = −(1/2) u^T K_{u,u}^{−1} u − (1/2) log |K_{u,u}| − (n/2) log 2π + log P(θ) (3) The last term log P(θ) is a hyperparameter prior, which is discussed in Section 4. The entries of the covariance matrix K_{u,u} are given by the kernel function. In order to determine the relevance of each feature, we use the automatic relevance determination (ARD) kernel, with hyperparameters θ = {β, Λ}: k(x_i, x_j) = β exp( −(1/2) (x_i − x_j)^T Λ (x_i − x_j) ) The hyperparameter β is the overall variance, and the diagonal matrix Λ specifies the weight on each feature. When Λ is learned, less relevant features receive low weights, and more relevant features receive high weights. States distinguished by highly-weighted features can take on different reward values, while those that have similar values for all highly-weighted features take on similar rewards. The GP posterior P(r|u, θ, X_u) is a Gaussian distribution with mean K_{r,u}^T K_{u,u}^{−1} u and covariance K_{r,r} − K_{r,u}^T K_{u,u}^{−1} K_{r,u}. K_{r,u} is the covariance of the rewards at all states with the inducing point values u, located respectively at X_r and X_u [11]. Due to the complexity of P(D|r), the integral in Equation 2 cannot be computed in closed form. Instead, we can consider this problem as analogous to sparse approximation for GP regression [8], where a small set of inducing points u acts as the support for the full set of training points r.
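The ARD kernel and the mean of the GP posterior can be sketched concretely. This is our illustration under stated assumptions (function names are ours, and a tiny diagonal jitter is added for numerical stability, which the paper does not mention): the kernel is k(x_i, x_j) = β exp(−½ (x_i − x_j)^T Λ (x_i − x_j)) with Λ diagonal, and the mean reward extension is K_{r,u}^T K_{u,u}^{−1} u.

```python
import numpy as np

def ard_kernel(Xi, Xj, beta, lam):
    """ARD kernel with per-feature weights lam (the diagonal of Λ)."""
    d = Xi[:, None, :] - Xj[None, :, :]           # pairwise differences
    return beta * np.exp(-0.5 * (d ** 2 * lam).sum(axis=2))

def gp_mean_reward(Xr, Xu, u, beta, lam, jitter=1e-8):
    """Mean posterior r = K_ru^T K_uu^{-1} u, extending the inducing-point
    rewards u (at Xu) to all states Xr. `jitter` is our addition."""
    Kuu = ard_kernel(Xu, Xu, beta, lam) + jitter * np.eye(len(Xu))
    Kur = ard_kernel(Xu, Xr, beta, lam)           # K_{u,r} = K_{r,u}^T
    return Kur.T @ np.linalg.solve(Kuu, u)
```

Note how the ARD weights realize the feature-relevance behavior described in the text: a feature whose weight in Λ is zero contributes nothing to the weighted distance, so states differing only in that feature receive identical rewards.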
In this context, the Gaussian posterior distribution over r is called the training conditional. One approximation is to assume that the training conditional is deterministic – that is, has variance zero [8]. This approximation is particularly appropriate in our case, because if the learned GP is used to predict a reward for a novel state space, the most likely reward would have the same form as the mean of the training conditional. Under this approximation, the integral disappears, and r is set to K_{r,u}^T K_{u,u}^{−1} u. The resulting log likelihood is simply log P(D, u, θ|X_u) = log P(D|r = K_{r,u}^T K_{u,u}^{−1} u) + log P(u, θ|X_u) (4) where the first term is the IRL log likelihood and the second is the GP log likelihood. Once the likelihood is optimized, the reward r = K_{r,u}^T K_{u,u}^{−1} u can be used to recover the expert’s policy on the entire state space. The GP can also predict the reward function for any novel state space in the domain of the features. The most likely reward for a novel state space is the mean posterior K_{⋆,u}^T K_{u,u}^{−1} u, where K_{⋆,u} is the covariance of the new states and the inducing points. In our implementation, the likelihood is optimized with the L-BFGS method, with derivatives provided in the supplement. When the hyperparameters are learned, the likelihood is generally not convex. While this is not unusual for GP methods, it does mean that the method can suffer from local optima. In the supplement, we also describe a simple restart procedure we used to mitigate this problem. 4 Regularization and Hyperparameter Priors In GP regression, a noise term is often added to the diagonal of the kernel matrix to account for noisy observations. Since GPIRL learns the noiseless underlying outputs u, there is no cause to add a noise term, which means that the kernel matrix K_{u,u} can become singular. Intuitively, this indicates that two or more inducing points are deterministically covarying, and therefore redundant.
To ensure that no inducing point is redundant, we assume that their positions in feature space X_u, rather than their values, are corrupted by white noise with variance σ^2. The expected squared difference in the kth feature values between two points x_i and x_j is then given by (x_{ik} − x_{jk})^2 + 2σ^2, and the new, regularized kernel function is given by k(x_i, x_j) = β exp( −(1/2) (x_i − x_j)^T Λ (x_i − x_j) − 1_{i≠j} σ^2 tr(Λ) ) (5) The regularization ensures that k(x_i, x_j) < k(x_i, x_i) so long as at least one feature is relevant – that is, tr(Λ) > 0. While the regularized kernel prevents singular covariance matrices when many features become irrelevant, the log likelihood can still increase to infinity as Λ → 0 or β → 0: in both cases, −(1/2) log |K_{u,u}| → ∞ and, so long as u → 0, all other terms remain finite. To prevent such degeneracies, we use a hyperparameter prior that discourages kernels under which two inducing points become deterministically covarying. As two points u_i and u_j become deterministically related, the magnitude of their partial correlation [K_{u,u}^{−1}]_{ij} becomes infinite. We can therefore prevent degeneracies with a prior term of the form −(1/2) Σ_{ij} [K_{u,u}^{−1}]_{ij}^2 = −(1/2) tr(K_{u,u}^{−2}), which discourages large partial correlations between inducing points. Such a prior is dependent on X_u. However, unlike in GP regression, X_u and u are parameters of the algorithm rather than data, and since the inducing point positions are fixed in advance, it is possible to condition the prior on X_u. To encourage sparse feature weights Λ, we also use a sparsity-inducing penalty φ(Λ), resulting in the prior log P(θ|X_u) = −(1/2) tr(K_{u,u}^{−2}) − φ(Λ). A variety of penalties are suitable, but we obtained the best results with φ(Λ) = Σ_i log(Λ_{ii} + 1). Although we can also optimize for the noise variance σ^2, we did not observe that this significantly altered the results, and instead fixed 2σ^2 to 10^{−2}.
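A short sketch makes the effect of the regularization in Equation 5 tangible. This is our illustration, restricted to the Gram matrix of a single point set so that the indicator 1_{i≠j} can be an index test; the function names and the use of a direct matrix inverse are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def reg_ard_gram(X, beta, lam, sigma2):
    """Regularized ARD Gram matrix of Eq. 5: off-diagonal entries are damped
    by exp(-sigma2 * tr(Lambda)), so k(x_i, x_j) < k(x_i, x_i) when tr(Lambda) > 0."""
    n = len(X)
    d = X[:, None, :] - X[None, :, :]
    sq = 0.5 * (d ** 2 * lam).sum(axis=2)
    off = 1.0 - np.eye(n)                    # indicator 1_{i != j}
    return beta * np.exp(-(sq + off * sigma2 * lam.sum()))

def log_hyper_prior(Kuu, lam):
    """log P(theta|X_u) = -(1/2) tr(K_uu^{-2}) - sum_i log(Lambda_ii + 1)."""
    Kinv = np.linalg.inv(Kuu)
    return -0.5 * np.trace(Kinv @ Kinv) - np.log(lam + 1.0).sum()
```

Even with two identical inducing points, the damped off-diagonal keeps the Gram matrix nonsingular, which is exactly the redundancy problem the regularization addresses.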
5 Inducing Points and Large State Spaces A straightforward choice for the inducing points X_u is the feature values of all states in the state space S. Unfortunately, the kernel matrix K_{u,u} is constructed and inverted at each iteration of the optimization in order to compute the gradient. This is a costly procedure: constructing the matrix has running time O(d_X |X_u|^2) and inverting it is O(|X_u|^3), where d_X is the number of features. To make GPIRL tractable on large state spaces, we can instead choose X_u to be a small subset of S, so that only the construction of K_{r,u} depends on |S|, and this dependence is linear. In principle, the minimum size of X_u corresponds to the complexity of the reward function. For example, if the true reward has two constant regions, it can be represented by just two properly placed inducing points. In practice, X_u must cover the space of feature values well enough to represent an unknown reward function, but we can nonetheless use many fewer points than there are states in S. In our implementation, we chose X_u to contain the feature values of all states visited in the example paths, as well as additional random states added to raise |X_u| to a desired size. While this heuristic worked well in our experiments, we can also view the choice of X_u as analogous to the choice of the active set in sparse GP approximation. A number of methods have been proposed for selecting these sets [8], and applying such methods to GPIRL is a promising avenue for future work. 6 Alternative Kernels The particular choice of kernel function influences the structure of the learned reward. The stationary kernel in Equation 5 favors rewards that are smooth with respect to feature values. Other kernels can be used to learn other types of structure. For example, a reward function might have wide regions with uniform values, punctuated by regions of high-frequency variation, as is the case for piecewise constant rewards.
A stationary kernel would have difficulty representing such structure. Instead, we can warp each coordinate x_{ik} of x_i by a function w_k(x_{ik}) to give high resolution to one region, and low resolution everywhere else. One such function is a sigmoid centered at m_k and scaled by ℓ_k: w_k(x_{ik}) = 1 / (1 + exp(−(x_{ik} − m_k)/ℓ_k)) Replacing x_i by w(x_i) in Equation 5, we get a regularized warped kernel of the form k(x_i, x_j) = β exp( −(1/2) Σ_k Λ_{kk} [ (w_k(x_{ik}) − w_k(x_{jk}))^2 + 1_{i≠j} σ^2 (w^σ_k(x_{ik}) + w^σ_k(x_{jk})) ] ) The second term in the sum is the contribution of the noise to the expected distance. Assuming σ^2 is small, this value can be approximated to first order by setting w^σ_k(x_{ik}) = ∂w_k/∂x_{ik} + s_k, where s_k is an additional parameter that increases the noise in the tails of the sigmoid to prevent degeneracies. The parameters m, ℓ, and s are added to θ and jointly optimized with u and the other hyperparameters, using unit-variance Gaussian priors for ℓ and s and gamma priors for m. Note that this procedure is not equivalent to merely fitting a sigmoid to the reward function, since the reward can still vary nonlinearly in the high resolution regions around each sigmoid center m_k. The accompanying supplement includes details about the priors placed on the warp parameters in our implementation, a complete derivation of w^σ_k, and the derivatives of the warped kernel function. During the optimization, as the sigmoid scales ℓ become small, the derivatives with respect to the sigmoid centers m fall to zero. If the centers have not yet converged to the correct values, the optimization will end in a local optimum. It is therefore more important to address local optima when using the warped kernel. As mentioned in Section 3, we mitigate the effects of local optima with a small number of random restarts. Details of the particular random restart technique we used can also be found in the supplement. We presented just one example of how an alternative kernel allows us to learn a reward with a particular structure.
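The warp and its first-order noise term can be sketched directly from the formulas above. This is our illustration (function names are ours): w_k is the sigmoid 1/(1 + exp(−(x − m_k)/ℓ_k)), and w^σ_k = ∂w_k/∂x + s_k, where the sigmoid derivative has the closed form w(1 − w)/ℓ.

```python
import numpy as np

def warp(x, m, ell):
    """Coordinate warp w_k(x) = 1 / (1 + exp(-(x - m)/ell)): high resolution
    near the center m, low resolution in the tails."""
    return 1.0 / (1.0 + np.exp(-(x - m) / ell))

def warp_noise(x, m, ell, s):
    """First-order noise term w_sigma(x) = dw/dx + s; the offset s keeps some
    noise in the sigmoid tails to prevent degeneracies."""
    w = warp(x, m, ell)
    return w * (1.0 - w) / ell + s            # sigmoid derivative is w(1-w)/ell
```

The derivative peaks at the center m, which is why the warped kernel gives that region high resolution, while the floor s keeps the contribution strictly positive far from the center.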
Many kernels have been proposed for GPs [11], and this variety of kernel functions can be used to apply GPIRL to new domains and to extend its generality and flexibility. 7 Experiments We compared GPIRL with prior methods on several IRL tasks, using examples sampled from the stochastic MaxEnt policy (see Section 2) as well as human demonstrations. Examples drawn from the stochastic policy can intuitively be viewed as noisy samples of an underlying optimal policy, while the human demonstrations contain the stochasticity inherent in human behavior. GPIRL was compared with the MaxEnt IRL algorithm [17] and FIRL [5], as well as a variant of MaxEnt with a sparsity-inducing Laplace prior, which we refer to as MaxEnt/Lp. We evaluated a variety of other margin-based methods, including Abbeel and Ng’s projection algorithm, MMP, MWAL, MMPBoost and LEARCH [1, 12, 15, 13, 14]. Since GPIRL, FIRL, and MaxEnt consistently produced better results, the other algorithms are not shown here, but are included in the supplementary result tables. We compare the algorithms using the “expected value difference” score, which is a measure of how suboptimal the learned policy is under the true reward. To compute this score, we find the optimal deterministic policy under each learned reward, measure its expected sum of discounted rewards under the true reward function, and subtract this quantity from the expected sum of discounted rewards under the true policy. While we could also evaluate the optimal stochastic policies, this would unfairly penalize margin-based methods, which are unaware of the MaxEnt model. To determine how well each algorithm captured the structure of the reward function, we evaluated the learned reward on the environment on which it was learned, and on 4 additional random environments (denoted “transfer”). 
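The "expected value difference" score described above can be sketched for a tabular MDP. This is our reconstruction under assumptions, not the paper's evaluation code: we extract the optimal deterministic policy under each reward with standard value iteration, evaluate it exactly under the true reward, and average the per-state difference (the paper does not specify the state distribution used).

```python
import numpy as np

def value_iteration(T, r, gamma=0.95, iters=500):
    """Standard (hard-max) value iteration; T[a, s, s'] transitions, r[s]."""
    V = np.zeros(T.shape[1])
    for _ in range(iters):
        V = (r[None, :] + gamma * T @ V).max(axis=0)
    return (r[None, :] + gamma * T @ V).argmax(axis=0)   # greedy policy

def policy_value(T, r, policy, gamma=0.95):
    """Exact evaluation of a deterministic policy: V = (I - gamma T_pi)^{-1} r."""
    S = T.shape[1]
    Tpi = T[policy, np.arange(S), :]
    return np.linalg.solve(np.eye(S) - gamma * Tpi, r)

def expected_value_difference(T, r_true, r_learned, gamma=0.95):
    """Suboptimality, under the true reward, of the learned reward's policy."""
    v_opt = policy_value(T, r_true, value_iteration(T, r_true, gamma), gamma)
    v_lrn = policy_value(T, r_true, value_iteration(T, r_learned, gamma), gamma)
    return (v_opt - v_lrn).mean()
```

The score is zero when the learned reward induces an optimal policy, and grows as the induced policy forfeits true reward, which is why lower values are better in the plots.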
Algorithms that do not express the reward function in terms of the correct features are expected to perform poorly on the transfer environments, even if they perform well on the training environment. Methods that correctly identify relevant features should perform well on both. For each environment, we evaluated the algorithms with both discrete and continuous-valued features. In the latter case, GPIRL used the warped kernel in Section 6, and FIRL, which requires discrete features, was not tested. Each test was repeated 8 times with different random environments. 7.1 Objectworld Experiments The objectworld is an N × N grid of states with five actions per state, corresponding to steps in each direction and staying in place. Each action has a 30% chance of moving in a different random direction. Randomly placed objects populate the objectworld, and each is assigned one of C inner and outer colors. Object placement is randomized in the transfer environments, while N and C remain the same. There are 2C continuous features, each giving the Euclidean distance to the nearest object with a specific inner or outer color. In the discrete feature case, there are 2CN binary features, each one an indicator for a corresponding continuous feature being less than d ∈ {1, ..., N}. The true reward is positive in states that are both within 3 cells of outer color 1 and 2 cells of outer color 2, negative within 3 cells of outer color 1, and zero otherwise. Inner colors and all other outer colors are distractors. The algorithms were provided example paths of length 8, and the number of examples and colors was varied to determine their ability to handle limited data and distractors.
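The discretization described above maps each continuous distance feature to N threshold indicators. A minimal sketch of this construction, with the function name and array layout as our assumptions:

```python
import numpy as np

def discretize_features(cont, N):
    """Objectworld discretization: each continuous feature value c becomes
    N binary indicators 1{c < d} for d = 1, ..., N, so a (states x 2C)
    continuous matrix becomes a (states x 2CN) binary matrix."""
    cont = np.asarray(cont, dtype=float)
    d = np.arange(1, N + 1)
    return (cont[:, :, None] < d).reshape(len(cont), -1).astype(int)
```

Thresholding turns the distance-to-object features into the 2CN binary features used in the discrete-feature experiments.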
Figure 1: Results for 32×32 objectworlds with C = 2 and varying numbers of examples. Shading shows standard error. GPIRL learned accurate rewards that generalized well to new state spaces. Figure 2: Objectworld evaluation with 32 examples and varying numbers of colors C. GPIRL was able to perform well even as the number of distractor features increased.
Because of the large number of irrelevant features and the nonlinearity of the reward, this example is particularly challenging for methods that learn linear reward functions. With 16 or more examples, GPIRL consistently learned reward functions that performed as well as the true reward, as shown in Figure 1, and was able to sustain this performance as the number of distractors increased, as shown in Figure 2. While the performance of MaxEnt and FIRL also improved with additional examples, they were consistently outperformed by GPIRL. In the case of FIRL, this was likely due to the suboptimal expert examples. In the case of MaxEnt, although the Laplace prior improved the results, the inability to represent nonlinear rewards limited the algorithm’s accuracy. These issues are evident in Figure 3, which shows part of a reward function learned by each method. When using continuous features, the performance of MaxEnt suffered even more from the increased nonlinearity of the reward function, while GPIRL maintained a similar level of accuracy. Figure 3: Part of a reward function learned by each algorithm on an objectworld. While GPIRL learned the correct reward function, MaxEnt was unable to represent the nonlinearities, and FIRL learned an overly complex reward under which the suboptimal expert would have been optimal.
Figure 4: Results for 64-car-length highways with varying example counts. While GPIRL achieved only modest improvement over prior methods on the training environment, the large improvement in the transfer tests indicates that the underlying reward structure was captured more accurately. Figure 5: Evaluation on the highway environment with human demonstrations. GPIRL learned a reward function that more accurately reflected the true policy the expert was attempting to emulate.
7.2 Highway Driving Behavior In addition to the objectworld environment, we evaluated the algorithms on more concrete behaviors in the context of a simple highway driving simulator, modeled on the experiment in [5] and similar evaluations in other work [1]. The task is to navigate a car on a three-lane highway, where all other vehicles move at a constant speed. The agent can switch lanes and drive at up to four times the speed of traffic. Other vehicles are either civilian or police, and each vehicle can be a car or motorcycle. Continuous features indicate the distance to the nearest vehicle of a specific class (car or motorcycle) or category (civilian or police) in front of the agent, either in the same lane, the lane to the right, the lane to the left, or any lane. Another set of features gives the distance to the nearest such vehicle in a given lane behind the agent. There are also features to indicate the current speed and lane. Discrete features again discretize the continuous features, with distances discretized in the same way as in the objectworld. In this section, we present results from synthetic and human demonstrations of a policy that drives as fast as possible, but avoids driving more than double the speed of traffic within two car-lengths of a police vehicle. Due to the connection between the police and speed features, the reward for this policy is nonlinear. We also evaluated a second policy that instead avoids driving more than double the speed of traffic in the rightmost lane. The results for this policy were similar to the first, and are included in the supplementary result tables. Figure 4 shows a comparison of GPIRL and prior algorithms on highways with varying numbers of 32-step synthetic demonstrations of the “police” task. GPIRL only modestly outperformed prior methods on the training environments with discrete features, but achieved a large improvement in the transfer experiment.
This indicates that, while prior algorithms learned a reasonable reward, this reward was not expressed in terms of the correct features, and did not generalize correctly. With continuous features, the nonlinearity of the reward was further exacerbated, making it difficult for linear methods to represent it even on the training environment. In Figure 5, we also evaluate how GPIRL and prior methods were able to learn the “police” behavior from human demonstrations. [Figure 6 panels: True Reward, GPIRL, MaxEnt/Lp, FIRL.] Figure 6: Highway reward functions learned from human demonstration. Road color indicates the reward at the highest speed, when the agent should be penalized for driving fast near police vehicles. The reward learned by GPIRL most closely resembles the true one. Although the human demonstrations were suboptimal, GPIRL was still able to learn a reward function that reflected the true policy more accurately than prior methods. Furthermore, the similarity of GPIRL’s performance with the human and synthetic demonstrations suggests that its model of suboptimal expert behavior is a reasonable reflection of actual human suboptimality. An example of rewards learned from human demonstrations is shown in Figure 6. Example videos of the learned policies and human demonstrations, as well as source code for our implementation of GPIRL, can be found at http://graphics.stanford.edu/projects/gpirl/index.htm 8 Discussion and Future Work We presented an algorithm for inverse reinforcement learning that represents nonlinear reward functions with Gaussian processes. Using a probabilistic model of a stochastic expert with a GP prior on reward values, our method is able to recover both a reward function and the hyperparameters of a kernel function that describes the structure of the reward. The learned GP can be used to predict a reward function consistent with the expert on any state space in the domain of the features.
In experiments with nonlinear reward functions, GPIRL consistently outperformed prior methods, especially when generalizing the learned reward to new state spaces. However, like many GP models, the GPIRL log likelihood is multimodal. When using the warped kernel function, a random restart procedure was needed to consistently find a good optimum. More complex kernels might suffer more from local optima, potentially requiring more robust optimization methods. It should also be noted that our experiments were intentionally chosen to be challenging for algorithms that construct rewards as linear combinations. When good features that form a linear basis for the reward are already known, prior methods such as MaxEnt would be expected to perform comparably to GPIRL. However, it is often difficult to ensure this is the case in practice, and previous work on margin-based methods suggests that nonlinear methods often outperform linear ones [13, 14]. When presented with a novel state space, GPIRL currently uses the mean posterior of the GP to estimate the reward function. In principle, we could leverage the fact that GPs learn distributions over functions to account for the uncertainty about the reward in states that are different from any of the inducing points. For example, such an approach could be used to learn a “conservative” policy that aims to achieve high rewards with some degree of certainty, avoiding regions where the reward distribution has high variance. In an interactive training setting, such a method could also inform the expert about states that have high reward variance and require additional demonstrations. More generally, by introducing Gaussian processes into inverse reinforcement learning, GPIRL can benefit from the wealth of prior work on Gaussian process regression. For instance, we apply ideas from sparse GP approximation in the use of a small set of inducing points to learn the reward function in time linear in the number of states. 
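To illustrate this prediction step, here is a minimal sketch (not the paper's implementation; the RBF kernel, the hyperparameter values, and the function names are assumptions) of computing the GP posterior mean and variance of the reward at novel states, conditioned on learned reward values at a small set of inducing points:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential kernel between row-wise feature matrices A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * sq / length_scale**2)

def gp_reward_posterior(X_induce, u, X_new, length_scale=1.0, signal_var=1.0, noise=1e-6):
    """Posterior mean and variance of the reward at novel states X_new,
    conditioned on learned reward values u at the inducing points X_induce."""
    K = rbf_kernel(X_induce, X_induce, length_scale, signal_var)
    K += noise * np.eye(len(X_induce))            # jitter for numerical stability
    K_star = rbf_kernel(X_new, X_induce, length_scale, signal_var)
    mean = K_star @ np.linalg.solve(K, u)
    var = signal_var - np.sum(K_star * np.linalg.solve(K, K_star.T).T, axis=1)
    return mean, var
```

States near an inducing point recover (approximately) the learned reward with near-zero variance, while distant states revert to the prior; these high-variance regions are exactly what a "conservative" policy, as discussed above, would avoid.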
A substantial body of prior work discusses techniques for automatically choosing or optimizing these inducing points [8], and such methods could be incorporated into GPIRL to learn reward functions with even smaller active sets. We also demonstrate how different kernels can be used to learn different types of reward structure, and further investigation into the kinds of kernel functions that are useful for IRL is another exciting avenue for future work. Acknowledgments. We thank Andrew Y. Ng and Krishnamurthy Dvijotham for helpful feedback and discussion. This work was supported by NSF Graduate Research Fellowship DGE-0645962.
References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML ’04: Proceedings of the 21st International Conference on Machine Learning, 2004.
[2] M. P. Deisenroth, C. E. Rasmussen, and J. Peters. Gaussian process dynamic programming. Neurocomputing, 72(7–9):1508–1524, 2009.
[3] K. Dvijotham and E. Todorov. Inverse optimal control with linearly-solvable MDPs. In ICML ’10: Proceedings of the 27th International Conference on Machine Learning, pages 335–342, 2010.
[4] Y. Engel, S. Mannor, and R. Meir. Reinforcement learning with Gaussian processes. In ICML ’05: Proceedings of the 22nd International Conference on Machine Learning, pages 201–208, 2005.
[5] S. Levine, Z. Popović, and V. Koltun. Feature construction for inverse reinforcement learning. In Advances in Neural Information Processing Systems 23, 2010.
[6] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Uncertainty in Artificial Intelligence (UAI), 2007.
[7] A. Y. Ng and S. J. Russell. Algorithms for inverse reinforcement learning. In ICML ’00: Proceedings of the 17th International Conference on Machine Learning, pages 663–670, 2000.
[8] J. Quiñonero Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
[9] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In IJCAI ’07: Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 2586–2591, 2007.
[10] C. E. Rasmussen and M. Kuss. Gaussian processes in reinforcement learning. In Advances in Neural Information Processing Systems 16, 2003.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2005.
[12] N. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In ICML ’06: Proceedings of the 23rd International Conference on Machine Learning, pages 729–736, 2006.
[13] N. Ratliff, D. Bradley, J. A. Bagnell, and J. Chestnutt. Boosting structured prediction for imitation learning. In Advances in Neural Information Processing Systems 19, 2007.
[14] N. Ratliff, D. Silver, and J. A. Bagnell. Learning to search: Functional gradient techniques for imitation learning. Autonomous Robots, 27(1):25–53, 2009.
[15] U. Syed and R. Schapire. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems 20, 2008.
[16] B. D. Ziebart. Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy. PhD thesis, Carnegie Mellon University, 2010.
[17] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI Conference on Artificial Intelligence (AAAI 2008), pages 1433–1438, 2008.
EigenNet: A Bayesian hybrid of generative and conditional models for sparse learning Yuan Qi Computer Science and Statistics Depts. Purdue University West Lafayette, IN 47907, USA Feng Yan Computer Science Dept. Purdue University West Lafayette, IN 47907, USA Abstract For many real-world applications, we often need to select correlated variables—such as genetic variations and imaging features associated with Alzheimer’s disease—in a high dimensional space. The correlation between variables presents a challenge to classical variable selection methods. To address this challenge, the elastic net has been developed and successfully applied to many applications. Despite its great success, the elastic net does not exploit the correlation information embedded in the data to select correlated variables. To overcome this limitation, we present a novel hybrid model, EigenNet, that uses the eigenstructures of data to guide variable selection. Specifically, it integrates a sparse conditional classification model with a generative model capturing variable correlations in a principled Bayesian framework. We develop an efficient active-set algorithm to estimate the model via evidence maximization. Experimental results on synthetic data and imaging genetics data demonstrate the superior predictive performance of the EigenNet over the lasso, the elastic net, and automatic relevance determination. 1 Introduction In this paper we consider the problem of selecting correlated variables in a high dimensional space. Among many variable selection methods, the lasso and the elastic net are two popular choices (Tibshirani, 1994; Zou and Hastie, 2005). The lasso uses an l1 regularizer on model parameters. This regularizer shrinks the parameters towards zero, removing irrelevant variables and yielding a sparse model (Tibshirani, 1994). However, the l1 penalty may lead to over-sparsification: given many correlated variables, the lasso often selects only a few of them.
This not only degrades its prediction accuracy but also affects the interpretability of the estimated model. For example, based on high-throughput biological data such as gene expression and RNA-seq data, it is highly desirable to select multiple correlated genes associated with a phenotype, since this may reveal underlying biological pathways. Due to its over-sparsification, the lasso may not be suitable for this task. To address this issue, the elastic net has been developed to encourage a grouping effect, where strongly correlated variables tend to be in or out of the model together (Zou and Hastie, 2005). However, the grouping effect is just the result of its composite l1 and l2 regularizer; the elastic net does not explicitly incorporate correlation information among variables in its model. In this paper, we propose a new sparse Bayesian hybrid model to utilize the eigen-information extracted from data for the selection of correlated variables. Specifically, it integrates a sparse conditional classification model with a generative model in a principled Bayesian framework (Lasserre et al., 2006): the conditional model achieves sparsity via automatic relevance determination (ARD) (MacKay, 1991), an empirical Bayesian approach for model sparsification; and the generative model is a latent variable model in which the observations are the eigenvectors of the unlabeled data, capturing correlations between variables. By integrating these two models together, the hybrid model enables identification of groups of correlated variables guided by the eigenstructures. At the same time, the model passes the information from its conditional part to its generative part, selecting informative eigenvectors for the classification task. Furthermore, using the Bayesian hybrid model, we can automate the estimation of model hyperparameters.
From the regularization perspective, the new hybrid model naturally generalizes the elastic net using a composite regularizer adaptive to the data eigenstructures. It contains a sparsity regularizer and a directional regularizer that encourages selecting variables associated with eigenvectors chosen by the model. When the variables are independent of each other, the eigenvectors are parallel to the axes and this composite regularizer reduces to the combination of the ARD and an l2 regularizer (similar to the composite regularizer of the elastic net). But when some of the input variables are strongly correlated, the regularizer will encourage the classifier to align with eigenvectors selected by the model. On one hand, like the elastic net, our model retains ‘all the big fish’. On the other hand, our model differs from the elastic net in being guided by the eigen-information. Hence the name EigenNet. Experiments on synthetic data are presented in Section 5. Our results demonstrate that the EigenNet significantly outperforms the lasso and the elastic net in terms of prediction accuracy. We applied this new approach to two tasks in imaging genetics: i) predicting cognitive function of healthy subjects and AD patients based on brain imaging markers, and ii) classifying the healthy and AD subjects based on single-nucleotide polymorphism (SNP) data. Compared to the lasso, the elastic net and the ARD, our approach achieves improved prediction accuracy. 2 Background: lasso and elastic net We denote n independent and identically distributed samples as D = {(x1, y1), . . . , (xn, yn)}, where xi is a p-dimensional input feature vector (i.e., explanatory variables) and yi is a scalar label (i.e., response). Also, we denote [x1, . . . , xn] by X and (y1, . . . , yn) by y.
Although our presentation focuses on the binary classification problem (yi ∈ {−1, 1}), our approach can be readily applied to other problems such as regression and survival analysis by choosing appropriate likelihood functions. For classification, we use a probit model as the data likelihood:

p(y|X, w) = \prod_{i=1}^{n} \sigma(y_i w^T x_i)   (1)

where σ(z) is the Gaussian cumulative distribution function and w denotes the classifier. To identify relevant variables for high dimensional problems, the lasso (Tibshirani, 1994) uses an l1 penalty, effectively shrinking w and b towards zero and pruning irrelevant variables. In a probabilistic framework this penalty corresponds to a Laplace prior distribution:

p(w) = \prod_j \lambda \exp(-\lambda |w_j|)   (2)

where λ is a hyperparameter that controls the sparsity of the estimated model. The larger the hyperparameter λ, the sparser the model. As described in Section 1, the lasso may over-penalize relevant variables and hurt its predictive performance, especially when there are strongly correlated variables. To address this issue, the elastic net (Zou and Hastie, 2005) combines l1 and l2 regularizers to avoid the over-penalization. The combined regularizer corresponds to the following prior distribution, p(w) ∝ \prod_j \exp(-\lambda_1 |w_j| - \lambda_2 w_j^2), where λ1 and λ2 are hyperparameters. While it is well known that the elastic net tends to select strongly correlated variables together, it does not use correlation information embedded in the unlabeled data. The selection of correlated variables is merely the result of a less aggressive regularizer for sparsity. Besides the elastic net, there are many variants (and extensions) of the lasso, such as the bridge (Frank and Friedman, 1993) and smoothly clipped absolute deviation (Fan and Li, 2001). These variants modify the l1 penalty to improve variable selection, but do not explicitly use the correlation information embedded in data.
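To make these pieces concrete, the following sketch (function name and argument layout are assumptions, not from the paper) evaluates the unnormalized log posterior combining the probit likelihood (1) with the Laplace prior (2):

```python
import numpy as np
from scipy.stats import norm

def log_posterior(w, X, y, lam):
    """Unnormalized log posterior of the classifier w:
    probit log likelihood (1) plus the Laplace log prior (2)."""
    log_lik = np.sum(norm.logcdf(y * (X @ w)))    # sum_i log sigma(y_i w^T x_i)
    log_prior = len(w) * np.log(lam) - lam * np.sum(np.abs(w))
    return log_lik + log_prior
```

Increasing lam makes the prior term dominate, which is exactly the mechanism by which the lasso prunes variables.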
[Figure 1 panels: (a) Independent variables, (b) Correlated variables, each showing the two classes and the variables selected by the lasso and the EigenNet.] Figure 1: Toy examples. (a) When the variables x1 and x2 are independent of each other, both the lasso and the EigenNet select only x1. (b) When the variables x1 and x2 are correlated, the lasso selects only one variable. By contrast, guided by the major eigenvector of the data, the EigenNet selects both variables. 3 EigenNet: eigenstructure-guided variable selection In this section, we propose to use the covariance structure in data to guide the sparse estimation of model parameters. First, let us consider the following toy examples. 3.1 Toy examples Figure 1(a) shows samples from two classes. Clearly the variables x1 and x2 are not correlated. The lasso or the elastic net can successfully select the relevant variable x1 to classify the data. For the samples in Figure 1(b), the variables x1 and x2 are strongly correlated. Despite the strong correlation, the lasso would select only x1 and ignore x2. The elastic net may select both x1 and x2 if the regularization weight λ1 is small and λ2 is big, so that the elastic net behaves like an l2-regularized classifier. The elastic net, however, does not explore the fact that x1 and x2 are correlated. Since the eigenstructure of the data covariance matrix captures correlation information between variables, we propose to not only regularize the classifier to be sparse, but also encourage it to be aligned with certain eigenvector(s) that are helpful for the classification task. Note that although classical Fisher linear discriminant also uses the data covariance matrix to learn the classifier, it generally does not provide a sparse solution, thus not suitable for the task of selecting correlated variables and removing irrelevant ones.
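The eigen-information used throughout is the standard eigendecomposition of the empirical data covariance; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def data_eigenstructure(X):
    """Eigenvectors and eigenvalues of the empirical covariance of X (rows are
    samples), sorted by decreasing eigenvalue; these play the roles of the
    v_j and eta_j that guide variable selection."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / (len(X) - 1)           # empirical covariance
    eta, V = np.linalg.eigh(cov)             # eigh returns ascending eigenvalues
    order = np.argsort(eta)[::-1]
    return V[:, order], eta[order]
```

For data like Figure 1(b), the principal eigenvector has roughly equal weight on both correlated variables, which is what lets it pull both into the model together.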
For the data in Figure 1(a), since the two eigenvectors are parallel to the horizontal and vertical axes, the EigenNet essentially reduces to the elastic net and selects x1. For the data in Figure 1(b), the principal eigenvector can guide the EigenNet to select both x1 and x2. The minor eigenvector is, however, not useful for the classification task (in general, we need to select which eigenvectors are relevant to classification). We use a Bayesian framework to materialize the above ideas as described in the following section. 3.2 Bayesian hybrid of conditional and generative models The EigenNet is a hybrid of conditional and generative models. The conditional component allows us to learn the classifier via “discriminative” training; the generative component captures the correlations between variables; and these two models are glued together via a joint prior distribution, so that the correlation information is used to guide the estimation of the classifier and the classification task is used to choose or scale relevant eigenvectors. Our approach is based on the general Bayesian framework proposed by Lasserre et al. (2006), which allows one to combine conditional and generative models in an elegant, principled way. Specifically, for the conditional model we have the same likelihood as (1), p(y|X, w) = \prod_i \sigma(y_i w^T x_i). For the classifier w, we use a Gaussian prior: p(w) = \prod_{j=1}^{p} N(w_j | 0, \beta_j^{-1}). We will describe later how to efficiently learn the precision parameter βj from the data to obtain a sparse classifier. [Figure 2: The graphical model of the EigenNet, with nodes x_i, y_i (i = 1, . . . , n), w, \tilde{w}, v_j, s_j (j = 1, . . . , m), β, λ_v, and λ_s.] To encourage the classifier to align with certain eigenvectors, we introduce \tilde{w}—a latent vector (tightly) linked to the classifier w—in the generative model:

p(V|s, \tilde{w}) \propto \prod_{j=1}^{m} N(v_j | s_j \tilde{w}, (\lambda_v \eta_j)^{-1} I)   (3)

where v_j and η_j are the j-th eigenvector and eigenvalue of the data covariance matrix, λ_v is a hyperparameter, and s = [s1, . . .
, sm] are scaling factors for the parameter \tilde{w}. To combat overfitting, we assign a Gamma prior Gam(λ_v | c_0, d_0) over λ_v. Note that this generative model encourages \tilde{w} to align with the major eigenvectors with bigger eigenvalues. However, eigenvectors are noisy and not all of them are relevant to the classification task—we need to select relevant eigenvectors (i.e., the relevant sub-eigenspace) and remove irrelevant ones. To enable the selection of the relevant eigenvectors, we assign a Laplace prior on s_j:

p(s) \propto \prod_{j=1}^{m} \lambda_s \exp(-\lambda_s |s_j|)   (4)

where λ_s is a hyperparameter. Finally, to link the conditional and generative models together, we use a prior for \tilde{w} conditional on w:

p(\tilde{w}|w) \propto N(\tilde{w} | w, rI)   (5)

Note that the variance parameter r controls how similar w and \tilde{w} are in our joint model. For simplicity, we set r = 0 here so that p(\tilde{w}|w) = \delta(\tilde{w} - w), where \delta(a) = 1 if a = 0 and \delta(a) = 0 otherwise. The graphical model representation of the EigenNet is given in Figure 2. 3.3 Model estimation In this section we present how to estimate the model based on an empirical Bayesian approach. Specifically, we will use expectation propagation (EP) (Minka, 2001) to estimate the posterior of the classifier w (and \tilde{w}) and optimize the marginal likelihood of the joint model over the scaling variables s and the precision parameters β. First, given the hyperparameter λ_v and the latent variables s, the posterior distribution of w is

p(w|y, X, V, s) \propto N(w | 0, diag(β)^{-1}) \prod_i \sigma(y_i w^T x_i) \prod_j N(v_j | s_j w, (\lambda_v \eta_j)^{-1} I)   (6)
\propto N(w | m_p, V_p) \prod_i \sigma(y_i w^T x_i)   (7)

where V_p = (diag(β) + \lambda_v \sum_j \eta_j s_j^2 I)^{-1} and m_p = V_p \lambda_v \sum_j \eta_j s_j v_j. Then we initialize the EP updates by p(w) = N(w | m_p, V_p) and iteratively approximate each likelihood factor \sigma(y_i w^T x_i) by a factor with the Gaussian form N(t_i | x_i^T w, h_i^{-1}). In other words, EP maps the nonlinear, non-Gaussian factor to a Gaussian factor with the virtual observation t_i and the noise variance h_i^{-1}. After the convergence of EP, we obtain both the mean m_w and the covariance V_w.
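The effective Gaussian prior in (7) follows from completing the square over w; a small sketch (assuming V holds the eigenvectors as columns and eta the eigenvalues, with names not taken from the paper):

```python
import numpy as np

def effective_prior(V, eta, s, beta, lam_v):
    """Effective prior N(w | m_p, V_p) as in (7): the ARD prior with precisions
    beta combined with the eigenvector factors N(v_j | s_j w, (lam_v eta_j)^{-1} I)."""
    p = len(beta)
    precision = np.diag(beta) + lam_v * np.sum(eta * s**2) * np.eye(p)
    V_p = np.linalg.inv(precision)
    m_p = V_p @ (lam_v * (V @ (eta * s)))    # V @ (eta * s) = sum_j eta_j s_j v_j
    return m_p, V_p
```

When all scaling factors are zero, the eigenvector factors vanish and the prior reduces to the plain ARD prior, as expected.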
Given the approximate posterior q(w), we maximize the variational lower bound over λ_v:

L(λ_v) = E_{q(w)}[\sum_j \ln N(v_j | s_j w, (\lambda_v \eta_j)^{-1} I) + \ln Gam(\lambda_v | c_0, d_0)]   (8)
       = (pm/2) \ln \lambda_v - (F/2) \lambda_v + (c_0 - 1) \ln \lambda_v - d_0 \lambda_v + constant

where F = \sum_j \eta_j - 2 (\sum_j \eta_j s_j v_j)^T m_w + \sum_j \eta_j s_j^2 \sum_i ((m_w)_i^2 + (V_w)_{i,i}). As a result, we have

\lambda_v = (c_0 - 1 + pm/2) / (d_0 + F/2).   (9)

Similarly, we maximize the variational lower bound over s:

L(s) = \sum_j E_{q(w)}[\ln N(v_j | s_j w, (\lambda_v \eta_j)^{-1} I)] - \lambda_s |s_j| + constant.   (10)

Consequently we have, for each j: if |v_j^T m_w| > \lambda_s / (\eta_j \lambda_v),

s_j = Sign(v_j^T m_w) (|v_j^T m_w| - \lambda_s / (\eta_j \lambda_v)) / \sum_i ((m_w)_i^2 + (V_w)_{i,i});   (11)

otherwise, s_j = 0.

Algorithm 1: The empirical Bayesian estimation algorithm
1. Initialize the model to contain a small fraction of features and initialize the parameters: s = 0, λ_v = 1, t = 0, h = ∞.
2. Run EP to obtain the initial mean m_w and covariance V_w.
3. Loop until convergence or reaching the maximum number of iterations:
4. Loop over the j-th active set:
   a. Update β via (12) and (13).
   b. If u_j^2 < r_j, remove the features in the j-th active set from the model.
   c. Update the posterior mean m_w and the covariance V_w based on EP.
   d. Optimize the precision parameter λ_v via (9).
   e. Optimize the scaling factors s via (11).

To estimate β, we develop an active-set method to iteratively maximize the model marginal likelihood over elements of β. In particular, we use a strategy similar to Tipping and Faul (2003)'s approach: given the approximation factors N(t | X^T w, diag(h)^{-1}), the distribution over eigenvectors N(v_j | s_j w, (\lambda_v \eta_j)^{-1} I), and the prior distribution N(w | 0, diag(β)^{-1}), we can compute and decompose the log marginal likelihood L(β) = log p(y | X, s, \lambda_v) into two parts: L(β_j) and L(β_{\setminus j}), where j and \setminus j index the elements of β in the active set and the remaining elements, respectively. Note that because the effective prior over w becomes N(w | m_p, V_p) as in (7)—instead of the zero-mean prior N(w | 0, diag(β)^{-1})—we cannot apply the algorithm proposed by Tipping and Faul (2003).
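The closed-form updates (9) and (11) amount to a rescaled Gamma-posterior mode and a soft-thresholding step, respectively; a minimal sketch (function and argument names are assumptions, not from the paper):

```python
import numpy as np

def update_lambda_v(F, p, m, c0=1.0, d0=1.0):
    """Update of lambda_v via (9)."""
    return (c0 - 1.0 + p * m / 2.0) / (d0 + F / 2.0)

def update_s(V, eta, m_w, V_w, lam_v, lam_s):
    """Soft-thresholding update of the scaling factors via (11): eigenvectors
    whose projection onto the posterior mean classifier falls below the
    threshold are pruned (s_j = 0)."""
    denom = np.sum(m_w**2) + np.trace(V_w)   # sum_i ((m_w)_i^2 + (V_w)_{i,i})
    proj = V.T @ m_w                          # v_j^T m_w for each j
    thresh = lam_s / (eta * lam_v)
    return np.sign(proj) * np.maximum(np.abs(proj) - thresh, 0.0) / denom
```

The thresholding makes s sparse, which is how irrelevant eigenvectors drop out of the model.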
Instead, we decompose L(β) into L(β_j) and L(β_{\setminus j}) as follows. First let us define

U_j = t^T diag(h) x^j + \lambda_v \sum_{k=1}^{m} \eta_k s_k v_k^j - b^T m_w,
R_j = (x^j)^T diag(h) x^j + \lambda_v \sum_{k=1}^{m} \eta_k s_k^2 - b^T V_w b,
u_j = \beta_j U_j / (\beta_j - R_j),   r_j = \beta_j R_j / (\beta_j - R_j)   (12)

where b = (x^j)^T diag(h) X_a + \lambda_v e_a^j \sum_{k=1}^{m} \eta_k s_k^2, x^j is the j-th column of the data matrix X, v_k^j is the j-th element of the vector v_k, X_a are the columns of X associated with currently selected features (indexed by a), and e_a^j are the a-th elements of the j-th row of the identity matrix. Then we have L(β) = L(β_{\setminus j}) + (1/2)(\ln \beta_j - \ln(\beta_j + u_j) + r_j^2 / (\beta_j + u_j)), where L(β_{\setminus j}) does not depend on β_j. Therefore, we can directly optimize over β_j without updating β_{\setminus j}. Setting the gradient of L(β) over β_j to zero, we easily obtain the following optimality condition:

if u_j^2 \geq r_j,   \beta_j = r_j^2 / (u_j^2 - r_j);   (13)
if u_j^2 < r_j,   \beta_j = \infty.

In the latter case we remove the j-th feature if it is currently in the model. [Figure 3 panels: (a) Lasso, (b) Elastic net, (c) EigenNet, (d) True, each showing the 40 classifier weights.] Figure 3: Visualization of the lasso, the elastic net, the EigenNet, and the true classifier weights. We used 80 training samples with 40 features. The test error rates of the lasso, the elastic net, and the EigenNet on 2000 test samples are 0.297, 0.245, and 0.137, respectively. The above active-set updates are very efficient, because during each iteration we only deal with a reduced model defined on the currently selected features. This approach significantly reduces the computational cost of EP from O(np^2) to O(nl^2), where l is the biggest model size during the active-set iterations. The empirical Bayesian estimation algorithm of the EigenNet is summarized in Algorithm 1. 4 Related work The EigenNet is related to the classical eigenface approaches (Turk and Pentland, 1991; Sirovich and Kirby, 1987). The eigenface approach learns a model in the subspace spanned by the major eigenvectors of the data covariance matrix.
The EigenNet also uses the eigen-subspace to guide the model estimation. However, unlike the eigenface approach, the EigenNet adaptively selects eigenvectors and learns a sparse classifier. There are Bayesian versions of the lasso and the elastic net. Bayesian lasso (Park et al., 2008) puts a hyper-prior on the regularization coefficient and uses a Gibbs sampler to jointly sample both the regression weights and the regularization coefficient. Using a similar treatment to Bayesian lasso, Bayesian elastic net (Li and Lin, 2010) samples the two regularization coefficients simultaneously, potentially avoiding the “double shrinkage” problem described in the original elastic net paper (Zou and Hastie, 2005). Like the EigenNet, these methods are grounded in a Bayesian framework, sharing the benefits of obtaining posterior distributions for handling estimation uncertainty. However, Bayesian lasso and Bayesian elastic net are presented to handle regression problems (though they can certainly be generalized to classification problems) and do not use the eigen-information embedded in data. The EigenNet, by contrast, selects the eigen-subspace and uses it to guide classification. Group lasso (Jacob et al., 2009) enforces sparsity on groups of predictors—an entire group of correlated predictors may be retained or pruned off. However, applying the idea of group lasso to the EigenNet faces several difficulties. First, this approach would not give (approximately) sparse classifiers unless we truncate eigenvectors; if we use truncation, we need to decide what threshold to use for each eigenvector—again a difficult task. Second, it would be hard to tune all the regularization coefficients associated with all major eigenvectors—cross-validation would not suffice. By contrast, our classifier is sparse because of the ARD effect.
More importantly, the latent variables s_j in our model are automatically estimated from data, deciding how important each eigenvector is for the classification task in a principled Bayesian framework. 5 Experimental results We evaluated the new sparse Bayesian model, the EigenNet, on both synthetic and real data and compared it with three representative variable selection methods: the lasso, the elastic net, and an ARD approach (Qi et al., 2004). For the lasso and the elastic net, we used the Glmnet software package, which uses cyclical coordinate descent in a pathwise fashion (http://www-stat.stanford.edu/~tibs/glmnet-matlab/). Like the EigenNet, the ARD approach also uses EP to approximate the model marginal likelihood. For the lasso and the elastic net, we used cross-validation to tune the hyperparameters; for the EigenNet, we estimated λ_v from data and tuned λ_s by cross-validation. [Figure 4 panels: (a) classification with independent features, (b) classification with correlated features, (c) regression with independent features, (d) regression with correlated features; test error rate or RMSE against the number of training examples (20 to 80) for the lasso, the elastic net, and the EigenNet.] Figure 4: Predictive performance on synthetic datasets. (a) and (b): classification; (c) and (d): regression. The results were averaged over 10 runs. For the data with independent features, the EigenNet outperforms the alternative methods when the number of training samples is small; for data with correlated features, the EigenNet outperforms the alternative methods consistently. 5.1 Visualization of estimated classifiers First, we tested these methods on synthetic data that contain correlated features. We sampled 40 dimensional data points, each of which contains two groups of correlated variables.
The correlation coefficient between variables in each group is 0.81 and there are 4 variables in each group. We set the values of the classifier weights in one group as 5 and in the other group as -5. We also generated the bias term randomly from a standard Gaussian distribution. We set the number of training points to 80. Figure 3 shows the estimated classifiers and the true classifier we used to produce the data labels. Unlike the lasso and the elastic net, the EigenNet clearly identifies two groups of correlated variables, very close to the ground truth. As a result, on 2000 test points, the EigenNet achieves the lowest prediction error rate, 0.137, while the test error rates of the lasso and the elastic net are 0.297 and 0.245, respectively. 5.2 Experiments on synthetic data Now we systematically compared these methods for classification and regression on synthetic datasets containing correlated features and containing independent features (Although this presentation so far has been focused on classification, we can easily implement the EigenNet for regression; since we can compute the marginal likelihood exactly, the EP approximation is not needed for regression.) To generate data with correlated variables we used a similar procedure as in the visualization example: we sampled 40 dimensional data points, each of which contains two groups of correlated variables. The correlation coefficient between variables in each group is 0.81 and there are 4 variables in each group. However, unlike for the previous example where the classifier weights are the same for the correlated variables, now we set the weights within the same group to have the same sign, but with different random values. We varied the number of training points, ranging from 10 to 80, and tested all these methods. For the datasets with independent features, we followed the same procedure except that the features are independently sampled. We ran the experiments 10 times. 
Figure 4 shows the results averaged over 10 runs. We did not report the standard errors since they are very small. For the datasets with independent features, the EigenNet outperforms the alternative methods when the number of training examples is small (probably because in this case the eigenspace has a smaller dimension than that of the classifier, effectively controlling the model flexibility); with more training examples, it is not surprising to see all these methods perform quite similarly. For the data with correlated features, although the results of the elastic net appear to overlap with those of the lasso in the figure, the elastic net often outperforms the lasso by a small margin; also, the EigenNet consistently and significantly outperforms the lasso and the elastic net. The improved predictive performance of the EigenNet reflects the benefit of using the valuable correlation information to help the model estimation. 5.3 Application to imaging genetics Imaging genetics is an emerging research area where imaging markers and genetic variations (e.g., SNPs) are used to study neurodegenerative diseases, in particular, Alzheimer’s disease (AD). [Figure 5 panels: (a) root-mean-square error for regression of the ADAS-Cog score (lasso, elastic net, EigenNet); (b) classification error rate for healthy vs. AD subjects (lasso, elastic net, ARD, EigenNet).] Figure 5: Imaging genetics applications: (a) prediction of the ADAS-Cog score based on 14 imaging features and (b) AD classification based on 2000 SNPs. The error bars represent the standard errors. We applied the EigenNet to two critical problems in imaging genetics and compared its performance with that of alternative sparse learning methods. First, we considered a regression problem where the predictors are imaging features, which were generated by Holland et al. (2009) for ADNI and include volume measured in 14 brain regions of interest (ROI)—including the whole brain, ventricles, hippocampus, etc.
We used these imaging features to predict the ADAS-Cog score, which is widely used to assess the cognitive function of AD patients. It is hypothesized that the brain ROI volumes are associated with the ADAS-Cog score, but this association has not been rigorously studied with statistical learning methods. After removing missing entries, we obtained data for 726 subjects, including healthy people, people with mild cognitive impairment (MCI), and AD patients. We then applied the lasso, the elastic net, and the EigenNet to this prediction task, randomly splitting the data into 508 training and 218 test samples 50 times. The results are shown in Figure 5(a). Second, we used SNP data to classify a subject into the healthy group or the AD group. We chose the top 2000 SNPs associated with AD based on a simple statistical test. There are 374 subjects in total (with roughly equal class sizes). We compared the EigenNet with the lasso and the elastic net, as well as the ARD approach, since it corresponds to the EigenNet's conditional component. We randomly split the dataset into 262 training and 112 test samples 10 times. The results are summarized in Figure 5(b). As shown in the figure, for both the regression and classification problems, the EigenNet outperforms the alternative methods significantly.

6 Conclusions

In this paper, we have presented a novel sparse Bayesian hybrid model to select correlated variables for regression and classification. It integrates the sparse conditional ARD model with a latent variable model for eigenvectors. For this hybrid model, we could explore other latent variable models, such as sparse projection methods (Guan and Dy, 2009; Archambeau and Bach, 2009); these models can better deal with noise in the unlabeled data and improve the selection of interdependent features (i.e., predictors).
Furthermore, if we have certain prior knowledge about the interdependence between features, such as linkage disequilibrium between SNPs, we could easily incorporate it into our model. Thus, our model provides an elegant framework for integrating complex data generation processes and domain knowledge in sparse learning.

7 Acknowledgments

The authors thank the anonymous reviewers and T. S. Jaakkola for constructive suggestions. This work was supported by NSF IIS-0916443, NSF CAREER award IIS-1054903, and the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF-0939370.

References

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1994.
Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301-320, 2005.
Julia A. Lasserre, Christopher M. Bishop, and Thomas P. Minka. Principled hybrids of generative and discriminative models. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pages 87-94, 2006.
David J. C. MacKay. Bayesian interpolation. Neural Computation, 4:415-447, 1991.
Ildiko E. Frank and Jerome H. Friedman. A statistical view of some chemometrics regression tools. Technometrics, 35(2):109-135, 1993.
Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348-1360, 2001.
Thomas P. Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362-369, 2001.
Michael E. Tipping and Anita C. Faul. Fast marginal likelihood maximisation for sparse Bayesian models. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.
Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3:71-86, 1991.
L. Sirovich and M. Kirby. Low-dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. A, 4(3):519-524, 1987.
Trevor Park and George Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681-686, 2008.
Qing Li and Nan Lin. The Bayesian elastic net. Bayesian Analysis, 5(1):151-170, 2010.
Laurent Jacob, Guillaume Obozinski, and Jean-Philippe Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
Yuan Qi, Thomas P. Minka, Rosalind W. Picard, and Zoubin Ghahramani. Predictive automatic relevance determination by expectation propagation. In Proceedings of the Twenty-first International Conference on Machine Learning, pages 671-678, 2004.
Dominic Holland, James B. Brewer, Donald J. Hagler, Christine Fenema-Notestine, and Anders M. Dale. Subregional neuroanatomical change as a biomarker for Alzheimer's disease. Proceedings of the National Academy of Sciences, 106(49):20954-20959, 2009.
Yue Guan and Jennifer Dy. Sparse probabilistic principal component analysis. JMLR W&CP: AISTATS, 5, 2009.
Cédric Archambeau and Francis Bach. Sparse probabilistic projections. In Advances in Neural Information Processing Systems 21, 2009.
Iterative Learning for Reliable Crowdsourcing Systems

David R. Karger, Sewoong Oh, Devavrat Shah
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology

Abstract

Crowdsourcing systems, in which tasks are electronically distributed to numerous "information piece-workers", have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker.

1 Introduction

Background. Crowdsourcing systems have emerged as an effective paradigm for human-powered problem solving and are now in widespread use for large-scale data-processing tasks such as image classification, video annotation, form data entry, optical character recognition, translation, recommendation, and proofreading. Crowdsourcing systems such as Amazon Mechanical Turk provide a market where a "taskmaster" can submit batches of small tasks to be completed for a small fee by any worker choosing to pick them up.
For example, a worker may be able to earn a few cents by indicating which images from a set of 30 are suitable for children (one of the benefits of crowdsourcing is its applicability to such highly subjective questions). Since typical crowdsourced tasks are tedious and the reward is small, errors are common even among workers who make an effort. At the extreme, some workers are "spammers", submitting arbitrary answers independent of the question in order to collect their fee. Thus, all crowdsourcers need strategies to ensure the reliability of answers. Because the worker crowd is large, anonymous, and transient, it is generally difficult to build up a trust relationship with particular workers. (For certain high-value tasks, crowdsourcers can use entrance exams to "prequalify" workers and block spammers, but this increases the cost and still provides no guarantee that the prequalified workers will try hard.) It is also difficult to condition payment on correct answers, as the correct answer may never truly be known, and delaying payment can annoy workers and make it harder to recruit them for future tasks. Instead, most crowdsourcers resort to redundancy, giving each task to multiple workers, paying them all irrespective of their answers, and aggregating the results by some method such as majority voting. For such systems there is a natural core optimization problem to be solved: assuming the taskmaster wishes to achieve a certain reliability in their answers, how can they do so at minimum cost (which is equivalent to asking how they can do so while asking the fewest possible questions)? Several characteristics of crowdsourcing systems make this problem interesting.

(E-mail: {karger,swoh,devavrat}@mit.edu. This work was supported in part by the AFOSR complex networks project, MURI on network tomography, and the National Science Foundation.)
Workers are neither persistent nor identifiable; each batch of tasks will be solved by a worker who may be completely new and whom you may never see again. Thus one cannot identify and reuse particularly reliable workers. Nonetheless, by comparing one worker's answer to others' on the same question, it is possible to draw conclusions about a worker's reliability, which can be used to weight their answers to other questions in their batch. However, batches must be of manageable size, obeying limits on the number of tasks that can be given to a single worker. Another interesting aspect of this problem is the choice of task assignments. Unlike many inference problems, which make inferences based on a fixed set of signals, our algorithm can choose which signals to measure by deciding which questions to ask which workers. In the following, we first define a formal model that captures these aspects of the problem. We then describe a scheme for deciding which tasks to assign to which workers and introduce a novel iterative algorithm to infer the correct answers from the workers' responses.

Setup. We model a set of m tasks {t_i}_{i∈[m]} as each being associated with an unobserved 'correct' answer s_i ∈ {±1}. Here and after, we use [N] to denote the set of the first N integers. In the earlier image categorization example, each task corresponds to labeling an image as suitable for children (+1) or not (−1). We assign these tasks to n workers from the crowd, which we denote by {w_j}_{j∈[n]}. When a task is assigned to a worker, we get a possibly inaccurate answer from the worker. We use A_ij ∈ {±1} to denote the answer if task t_i is assigned to worker w_j. Some workers are more diligent or have more expertise than others, while other workers might be spammers. We choose a simple model to capture this diversity in workers' reliability: we assume that each worker w_j is characterized by a reliability p_j ∈ [0, 1], and that they make errors randomly on each question they answer.
Precisely, if task t_i is assigned to worker w_j, then A_ij = s_i with probability p_j and A_ij = −s_i with probability 1 − p_j; A_ij = 0 if t_i is not assigned to w_j. The random variable A_ij is independent of any other event given p_j. (Throughout this paper, we use boldface characters to denote random variables and random matrices unless it is clear from the context.) The underlying assumption here is that the error probability of a worker does not depend on the particular task and all the tasks share an equal level of difficulty; hence, each worker's performance is consistent across different tasks. We further assume that the worker reliabilities {p_j}_{j∈[n]} are independent and identically distributed random variables with a given distribution on [0, 1]. One example is the spammer-hammer model, where each worker is either a 'hammer' with probability q or a 'spammer' with probability 1 − q. A hammer answers all questions correctly, in which case p_j = 1, and a spammer gives random answers, in which case p_j = 1/2. Given this random variable p_j, we define an important parameter q ∈ [0, 1], which captures the 'average quality' of the crowd: q ≡ E[(2p_j − 1)²]. A value of q close to one indicates that a large proportion of the workers are diligent, whereas q close to zero indicates that there are many spammers in the crowd. This definition of q is consistent with the use of q in the spammer-hammer model. We will see later that our bound on the error rate of our inference algorithm holds for any distribution of p_j, but depends on the distribution only through this parameter q. It is quite realistic to assume the existence of a prior distribution for p_j. The model is therefore quite general: in particular, it is met if we simply randomize the order in which we upload our task batches, since this will have the effect of randomizing which workers perform which batches, yielding a distribution that meets our requirements.
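For concreteness, a minimal sketch (names are ours, not the paper's) of sampling worker reliabilities from the spammer-hammer model and empirically estimating the crowd-quality parameter q = E[(2p_j − 1)²]:

```python
import random

def sample_spammer_hammer(q_hammer, rng):
    """Draw one worker reliability p_j from the spammer-hammer model:
    a 'hammer' (p_j = 1) with probability q_hammer, else a 'spammer' (p_j = 1/2)."""
    return 1.0 if rng.random() < q_hammer else 0.5

def crowd_quality(ps):
    """Empirical estimate of q = E[(2 p_j - 1)^2], the 'average quality' of the crowd."""
    return sum((2 * p - 1) ** 2 for p in ps) / len(ps)

rng = random.Random(0)
ps = [sample_spammer_hammer(0.3, rng) for _ in range(100000)]
# Under the spammer-hammer model, q equals the hammer probability:
# (2*1 - 1)^2 = 1 for hammers and (2*0.5 - 1)^2 = 0 for spammers.
print(crowd_quality(ps))  # close to 0.3
```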
On the other hand, it is not realistic to assume that we know what the prior is. To execute our inference algorithm for a given number of iterations, we do not require any knowledge of the distribution of the reliability. However, q is necessary in order to determine how many times a task should be replicated and how many iterations we need to run to achieve a certain reliability. Under this crowdsourcing model, a taskmaster first decides which tasks should be assigned to which workers, and then estimates the correct solutions {s_i}_{i∈[m]} once all the answers {A_ij} are submitted. We assume a one-shot scenario in which all questions are asked simultaneously and the estimation is performed after all the answers are obtained; in particular, we do not allow allocating tasks adaptively based on the answers received thus far. Assigning tasks to workers then amounts to designing a bipartite graph G({t_i}_{i∈[m]} ∪ {w_j}_{j∈[n]}, E) with m task nodes and n worker nodes, where each edge (i, j) ∈ E indicates that task t_i was assigned to worker w_j.

Prior Work. A naive approach to identifying the correct answer from multiple workers' responses is majority voting, which simply chooses what the majority of workers agree on. When there are many spammers, majority voting is error-prone since it weights all the workers equally. We will show that majority voting is provably sub-optimal and can be significantly improved upon. To infer the answers of the tasks and also the reliability of workers, Dawid and Skene [1, 2] proposed an algorithm based on expectation maximization (EM) [3]. This approach has also been applied to classification problems where the training data is annotated by low-cost noisy 'labelers' [4, 5]. In [6] and [7], this EM approach has been applied to more complicated probabilistic models for image labeling tasks. However, the performance of these approaches is only empirically evaluated, and there is no analysis that proves performance guarantees.
In particular, EM algorithms require an initial starting point, which is typically guessed randomly. The algorithm is highly sensitive to this initialization, making it difficult to predict the quality of the resulting estimate. The advantage of using low-cost noisy 'labelers' has been studied in the context of supervised learning, where a set of labels on a training set is used to find a good classifier. Given a fixed budget, there is a tradeoff between acquiring a larger training dataset and acquiring a smaller dataset with more labels per data point. Through extensive experiments, Sheng, Provost and Ipeirotis [8] show that repeated labeling can give a considerable advantage.

Contributions. In this work, we provide a rigorous treatment of designing a crowdsourcing system with the aim of minimizing the budget needed to complete a set of tasks with a certain reliability. We provide both an asymptotically optimal graph construction (random regular bipartite graphs) and an asymptotically optimal inference algorithm (an iterative algorithm) on that graph. As the main result, we show that our algorithm performs as well as the best possible algorithm. The surprise lies in the fact that the optimality of our algorithm is established by comparing it with the best algorithm, one that is free to choose any graph, regular or irregular, and performs optimal estimation based on information provided by an oracle about the reliability of the workers. Previous approaches focus on developing inference algorithms assuming that a graph is already given; none of the prior work on crowdsourcing provides any systematic treatment of the graph construction. To the best of our knowledge, we are the first to study both aspects of crowdsourcing together and, more importantly, establish optimality. Another novel contribution of our work is the analysis technique.
The iterative algorithm we introduce operates on real-valued messages whose distribution is a priori difficult to analyze. To overcome this challenge, we develop a novel technique for establishing that these messages are sub-Gaussian (see Section 3 for a definition) using recursion, and we compute the parameters in closed form. This allows us to prove a sharp result on the error rate, and the technique could be of independent interest for analyzing a more general class of algorithms.

2 Main result

Under the crowdsourcing model introduced above, we want to design algorithms to assign tasks and estimate the answers. In what follows, we explain how to assign tasks using a random regular graph and introduce a novel iterative algorithm to infer the correct answers. We state the performance guarantees for our algorithm and provide comparisons to majority voting and an oracle estimator.

Task allocation. Assigning tasks amounts to designing a bipartite graph G({t_i}_{i∈[m]} ∪ {w_j}_{j∈[n]}, E), where each edge corresponds to a task-worker assignment. The taskmaster chooses how many workers to assign to each task (the left degree l) and how many tasks to assign to each worker (the right degree r). Since the total number of edges has to be consistent, the number of workers n follows directly from ml = nr. To generate an (l, r)-regular bipartite graph we use a random graph generation scheme known as the configuration model in the random graph literature [9, 10]. In principle, one could use an arbitrary bipartite graph G for task allocation; however, as we show later in this paper, random regular graphs are sufficient to achieve order-optimal performance.

Inference algorithm. We introduce a novel iterative algorithm which operates on real-valued task messages {x_{i→j}}_{(i,j)∈E} and worker messages {y_{j→i}}_{(i,j)∈E}. The worker messages are initialized as independent Gaussian random variables.
At each iteration, the messages are updated according to the update rule below, where ∂i denotes the neighborhood of t_i. Intuitively, a worker message y_{j→i} represents our belief on how 'reliable' worker j is, so that our final estimate is a weighted sum of the answers, weighted by each worker's reliability: ŝ_i = sign(Σ_{j∈∂i} A_ij y_{j→i}).

Iterative Algorithm
Input: E, {A_ij}_{(i,j)∈E}, k_max
Output: estimate ŝ({A_ij})
1: For all (i, j) ∈ E: initialize y^(0)_{j→i} with random Z_ij ~ N(1, 1)
2: For k = 1, ..., k_max:
   For all (i, j) ∈ E: x^(k)_{i→j} ← Σ_{j'∈∂i\j} A_{ij'} y^(k−1)_{j'→i}
   For all (i, j) ∈ E: y^(k)_{j→i} ← Σ_{i'∈∂j\i} A_{i'j} x^(k)_{i'→j}
3: For all i ∈ [m]: x_i ← Σ_{j∈∂i} A_ij y^(k_max−1)_{j→i}
4: Output the estimate vector ŝ({A_ij}) = [sign(x_i)]

While our algorithm is inspired by the standard belief propagation (BP) algorithm for approximating max-marginals [11, 12], it is original and overcomes a few critical limitations of standard BP. First, the iterative algorithm does not require any knowledge of the prior distribution of p_j, whereas standard BP requires this knowledge. Second, there is no efficient way to implement standard BP, since we would need to pass sufficient statistics (or messages) which, under our general model, are distributions over the reals. On the other hand, the iterative algorithm only passes messages that are real numbers, regardless of the prior distribution of p_j, which is easy to implement. Third, the iterative algorithm is provably asymptotically order-optimal. Density evolution, explained in detail in Section 3, is a standard technique to analyze the performance of BP. Although we can write down the density evolution for standard BP, we cannot analyze the densities, analytically or numerically. It is also very simple to write down the density evolution equations for the iterative algorithm, but it is not trivial to analyze the densities in this case either.
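The pseudocode above can be transcribed almost line for line into a short Python sketch (helper names are ours; parallel edges that the configuration model may produce are collapsed for simplicity, so this is illustrative rather than a reference implementation):

```python
import random
from collections import defaultdict

def assign_tasks(m, l, r, rng):
    """Random (l, r)-regular bipartite assignment via the configuration model:
    pair l edge stubs per task with r stubs per worker (n = m*l/r workers).
    Parallel edges are collapsed, so degrees are only approximately regular."""
    assert (m * l) % r == 0
    n = m * l // r
    task_stubs = [i for i in range(m) for _ in range(l)]
    worker_stubs = [j for j in range(n) for _ in range(r)]
    rng.shuffle(worker_stubs)
    return sorted(set(zip(task_stubs, worker_stubs)))

def iterative_estimate(m, edges, A, k_max, rng):
    """Message passing on the assignment graph. A[(i, j)] in {+1, -1} is
    worker j's answer to task i; returns the estimates [sign(x_i)]."""
    nbr_workers = defaultdict(list)  # task i  -> workers answering it
    nbr_tasks = defaultdict(list)    # worker j -> tasks it answered
    for (i, j) in edges:
        nbr_workers[i].append(j)
        nbr_tasks[j].append(i)
    # Step 1: initialize worker messages y_{j->i} ~ N(1, 1).
    y = {e: rng.gauss(1.0, 1.0) for e in edges}
    for _ in range(k_max - 1):
        # x_{i->j} <- sum over j' in di\j of A_{ij'} y_{j'->i}
        x = {(i, j): sum(A[(i, jp)] * y[(i, jp)]
                         for jp in nbr_workers[i] if jp != j)
             for (i, j) in edges}
        # y_{j->i} <- sum over i' in dj\i of A_{i'j} x_{i'->j}
        y = {(i, j): sum(A[(ip, j)] * x[(ip, j)]
                         for ip in nbr_tasks[j] if ip != i)
             for (i, j) in edges}
    # Final decision: sign of sum over j in di of A_{ij} y_{j->i}.
    return [1 if sum(A[(i, j)] * y[(i, j)] for j in nbr_workers[i]) >= 0 else -1
            for i in range(m)]
```

With perfectly reliable workers this recovers the true answers; with a mixed crowd, the worker messages act as implicit reliability weights on each worker's answers.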
We develop a novel technique to analyze the densities and prove optimality of our algorithm.

2.1 Performance guarantee

We state the main analytical result of this paper: for random (l, r)-regular bipartite graph based task assignments with our iterative inference algorithm, the probability of error decays exponentially in lq, up to a universal constant and for a broad range of the parameters l, r and q. With a reasonable choice of l = r, both scaling like (1/q) log(1/ϵ), the proposed algorithm is guaranteed to achieve error less than ϵ for any ϵ ∈ (0, 1/2). Further, an algorithm-independent lower bound that we establish suggests that such a dependence of the error on lq is unavoidable. Hence, in terms of the task allocation budget, our algorithm is order-optimal. The precise statements follow next.

Let µ = E[2p_j − 1] and recall q = E[(2p_j − 1)²]. To lighten the notation, let l̂ ≡ l − 1 and r̂ ≡ r − 1. Define

  ρ_k² ≡ 2q / (µ²(q²l̂r̂)^(k−1)) + (3 + 1/(qr̂)) · (1 − (1/(q²l̂r̂))^(k−1)) / (1 − 1/(q²l̂r̂)).

For q²l̂r̂ > 1, let ρ_∞² ≡ lim_{k→∞} ρ_k², so that

  ρ_∞² = (3 + 1/(qr̂)) · q²l̂r̂ / (q²l̂r̂ − 1).

Then we can show the following bound on the probability of making an error.

Theorem 2.1. For fixed l > 1 and r > 1, assume that m tasks are assigned to n = ml/r workers according to a random (l, r)-regular graph drawn from the configuration model. If the distribution of the worker reliability satisfies µ ≡ E[2p_j − 1] > 0 and q² > 1/(l̂r̂), then for any s ∈ {±1}^m, the estimates from k iterations of the iterative algorithm achieve

  lim sup_{m→∞} (1/m) Σ_{i=1}^m P(s_i ≠ ŝ_i({A_ij}_{(i,j)∈E})) ≤ exp(−lq/(2ρ_k²)).   (1)

As we increase k, the above bound converges to a non-trivial limit.

Corollary 2.2. Under the hypotheses of Theorem 2.1,

  lim sup_{k→∞} lim sup_{m→∞} (1/m) Σ_{i=1}^m P(s_i ≠ ŝ_i({A_ij}_{(i,j)∈E})) ≤ exp(−lq/(2ρ_∞²)).   (2)

One implication of this corollary is that, under the mild assumption that r ≥ l, the probability of error is upper bounded by exp(−(1/8)(lq − 1)).
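To get a feel for the bound, one can evaluate ρ_∞² and the resulting error bound numerically (a sketch in the paper's notation, with l̂ = l − 1 and r̂ = r − 1; function names are ours):

```python
import math

def rho_inf_sq(l, r, q):
    """rho_infinity^2 = (3 + 1/(q*r_hat)) * a / (a - 1), with
    a = q^2 * l_hat * r_hat, valid when a > 1."""
    lh, rh = l - 1, r - 1
    a = q * q * lh * rh
    assert a > 1, "bound requires q^2 (l-1)(r-1) > 1"
    return (3 + 1 / (q * rh)) * a / (a - 1)

def error_bound(l, r, q):
    """Corollary 2.2: per-task error probability <= exp(-l*q / (2*rho_inf^2))."""
    return math.exp(-l * q / (2 * rho_inf_sq(l, r, q)))

print(error_bound(25, 25, 0.3))  # roughly 0.31 for l = r = 25, q = 0.3
```

As expected from the exponent −lq/(2ρ_∞²), the bound tightens as either the redundancy l or the crowd quality q grows.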
Even if we fix the value of q = E[(2p_j − 1)²], different distributions of p_j can have different values of µ in the range [q, √q]. Surprisingly, the asymptotic bound on the error rate does not depend on µ. Instead, as long as q is fixed, µ only affects how fast the algorithm converges (cf. Lemma 2.3). Notice that the bound in (2) is only meaningful when it is less than a half, whence l̂r̂q² > 1 and lq > 6 log(2) > 4. While as a taskmaster the case l̂r̂q² < 1 may not be of interest, for completeness we comment on the performance of our algorithm in this regime. Specifically, we empirically observe that the error rate increases as the number of iterations k increases; therefore, it makes sense to use k = 1, in which case the algorithm essentially boils down to the majority rule. We can prove the following error bound (the proof is omitted due to a space constraint):

  lim sup_{m→∞} (1/m) Σ_{i=1}^m P(s_i ≠ ŝ_i({A_ij}_{(i,j)∈E})) ≤ exp(−lµ²/4).   (3)

2.2 Discussion

Here we make a few comments relating to the execution of the algorithm and the interpretation of the main results. First, the iterative algorithm is efficient, with runtime comparable to simple majority voting, which requires O(ml) operations.

Lemma 2.3. Under the hypotheses of Theorem 2.1, the total computational cost sufficient to achieve the bound in Corollary 2.2, up to any constant factor in the exponent, is O(ml log(q/µ²)/log(q²l̂r̂)).

By definition, we have q ≤ µ ≤ √q. The runtime is worst when µ = q, which happens under the spammer-hammer model, and best when µ = √q, which happens if p_j = (1 + √q)/2 deterministically. There exists a (non-iterative) polynomial-time algorithm, with runtime independent of q, for computing an estimate that achieves (2), but in practice we expect that the number of iterations needed is small enough that the iterative algorithm will outperform this non-iterative algorithm. The detailed proof of Lemma 2.3 is skipped here due to a space constraint.
Second, the assumption that µ > 0 is necessary. Without any assumption on µ, we cannot distinguish whether the responses came from tasks with {s_i}_{i∈[m]} and workers with {p_j}_{j∈[n]}, or from tasks with {−s_i}_{i∈[m]} and workers with {1 − p_j}_{j∈[n]}: statistically, both give the same output. In the case when we know that µ < 0, we can use the same algorithm, changing the sign of the final output, and get the same performance guarantee. Third, our algorithm does not require any information on the distribution of p_j. Further, unlike other EM-based algorithms, the iterative algorithm is not sensitive to initialization and, with random initialization, converges to a unique estimate with high probability. This follows from the fact that the algorithm is essentially computing a leading eigenvector of a particular linear operator.

2.3 Relation to singular value decomposition

The leading singular vectors are often used to capture the important aspects of datasets in matrix form. In our case, the leading left singular vector of A can be used to estimate the correct answers, where A ∈ {0, ±1}^(m×n) is the m × n adjacency matrix of the graph G weighted by the submitted answers. We can compute it using power iteration: for u ∈ R^m and v ∈ R^n, starting with a randomly initialized v, power iteration iteratively updates u and v according to

  u_i = Σ_{j∈∂i} A_ij v_j for all i,  and  v_j = Σ_{i∈∂j} A_ij u_i for all j.

It is known that normalized u converges exponentially to the leading left singular vector. This update rule is very similar to that of our iterative algorithm, but there is one difference that is crucial in the analysis: in our algorithm we follow the framework of the celebrated belief propagation algorithm [11, 12] and exclude the incoming message from node j when computing the outgoing message to j. This extrinsic nature of our algorithm and the locally tree-like structure of sparse random graphs [9, 13] allow us to perform an asymptotic analysis of the average error rate.
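For comparison, the power-iteration estimate based on the leading left singular vector can be sketched as follows (an illustrative plain-Python version; note it keeps the 'intrinsic' term that the message-passing update excludes):

```python
import math
import random

def power_iteration_estimate(A, iters, rng):
    """Estimate s_i = sign(u_i), where u is proportional to the leading left
    singular vector of A (an m x n list of lists with entries in {0, +1, -1}),
    computed by alternating u = A v and v = A^T u with normalization."""
    m, n = len(A), len(A[0])
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    # The sign of a singular vector is arbitrary, so the estimate may come out
    # globally flipped; this mirrors the mu > 0 ambiguity discussed above.
    return [1 if x >= 0 else -1 for x in u]
```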
In particular, if we use the leading singular vector of A to estimate s, setting s_i = sign(u_i), then existing analysis techniques from random matrix theory do not give the strong performance guarantee we have. These techniques typically focus on understanding how the subspace spanned by the top singular vector behaves. To get a sharp bound, we need to analyze how each entry of the leading singular vector is distributed. We introduce the iterative algorithm in order to precisely characterize how each of the decision variables x_i is distributed. Since the iterative algorithm introduced in this paper is quite similar to the power iteration used to compute the leading singular vectors, our analysis may shed light on how to analyze the top singular vectors of a sparse random matrix.

2.4 Optimality of our algorithm

As a taskmaster, the natural core optimization problem of our concern is how to achieve a certain reliability in our answers with minimum cost. Since we pay an equal amount for every task assignment, the cost is proportional to the total number of edges of the graph G. Here we compute the total budget sufficient to achieve a target error rate using our algorithm and show that it is within a constant factor of the budget necessary to achieve the given target error rate using any graph and the best possible inference algorithm. The order-optimality is established with respect to all algorithms that operate in one shot, i.e., all task assignments are done simultaneously and an estimation is performed after all the answers are obtained. The proofs of the claims in this section are skipped here due to space limitations. Formally, consider a scenario where there are m tasks to complete and a target accuracy ϵ ∈ (0, 1/2). To measure accuracy, we use the average probability of error per task, denoted by d_m(s, ŝ) ≡
We will show that Ω (1/q) log(1/ϵ) assignments per task is necessary and sufficient to achieve the target error rate: dm(s, ˆs) ≤ϵ. To establish this fundamental limit, we use the following minimax bound on error rate. Consider the case where nature chooses a set of correct answers s ∈{±1}m and a distribution of the worker reliability pj ∼f. The distribution f is chosen from a set of all distributions on [0, 1] which satisfy Ef[(2pj −1)2] = q. We use F(q) to denote this set of distributions. Let G(m, l) denote the set of all bipartite graphs, including irregular graphs, that have m task nodes and ml total number of edges. Then the minimax error rate achieved by the best possible graph G ∈G(m, l) using the best possible inference algorithm is at least inf ALGO,G∈G(m,l) sup s,f∈F(q) dm(s, ˆsG,ALGO) ≥(1/2)e−(lq+O(lq2)) , (4) where ˆsG,ALGO denotes the estimate we get using graph G for task allocation and algorithm ALGO for inference. This minimax bound is established by computing the error rate of an oracle esitimator, which makes an optimal decision based on the information provided by an oracle who knows how reliable each worker is. Next, we show that the error rate of majority voting decays significantly slower: the leading term in the error exponent scales like −lq2. Let ˆsMV be the estimate produced by majority voting. Then, for q ∈(0, 1), there exists a numerical constant C1 such that inf G∈G(m,l) sup s,f∈F(q) dm(s, ˆsMV) = e−(C1lq2+O(lq4+1)) . (5) The lower bound in (4) does not depend on how many tasks are assigned to each worker. However, our main result depends on the value of r. We show that for a broad range of parameters l, r, and q our algorithm achieves optimality. Let ˆsIter be the estimate given by random regular graphs and the iterative algorithm. For ˆlq ≥C2, ˆrq ≥C3 and C2C3 > 1, Corollary 2.2 gives lim m→∞ sup s,f∈F(q) dm(s, ˆsIter) ≤ e−C4lq . (6) This is also illustrated in Figure 1. 
We ran numerical experiments with 1000 tasks and 1000 workers from the spammer-hammer model, assigned according to random graphs with l = r from the configuration model. For the left panel of Figure 1 we fixed q = 0.3, and for the right panel we fixed l = 25.

Figure 1: The iterative algorithm improves over majority voting and the EM algorithm [8]. (Probability of error versus l at q = 0.3, left, and versus q at l = 25, right; each panel also shows the lower bound.)

Now, let ∆_LB be the minimum cost per task necessary to achieve a target accuracy ϵ ∈ (0, 1/2) using any graph and any possible algorithm. Then (4) implies ∆_LB ∼ (1/q) log(1/ϵ), where x ∼ y indicates that x scales as y. Let ∆_Iter be the minimum cost per task sufficient to achieve a target accuracy ϵ using our proposed algorithm. Then from (6) we get ∆_Iter ∼ (1/q) log(1/ϵ). This establishes the order-optimality of our algorithm. It is indeed surprising that regular graphs are sufficient to achieve this optimality. Further, let ∆_Majority be the minimum cost per task necessary to achieve a target accuracy ϵ using majority voting. Then ∆_Majority ∼ (1/q²) log(1/ϵ), which is significantly more costly than the optimal scaling (1/q) log(1/ϵ) of our algorithm.

3 Proof of Theorem 2.1

By symmetry, we can assume all s_i's are +1. If I is a random integer drawn uniformly from [m], then (1/m) Σ_{i∈[m]} P(s_i ≠ ŝ_i) ≤ P(x^(k)_I ≤ 0), where x^(k)_i denotes the decision variable for task i after k iterations of the iterative algorithm. Asymptotically, for fixed k, l and r, the local neighborhood of x_I converges to a regular tree. To analyze lim_{m→∞} P(x^(k)_I ≤ 0), we use a standard probabilistic analysis technique known as 'density evolution' in coding theory or 'recursive distributional equations' in probabilistic combinatorics [9, 13]. Precisely, we use the following equality.
lim_{m→∞} P(x_I^(k) ≤ 0) = P(x̂^(k) ≤ 0) ,  (7)

where x̂^(k) is defined through the density evolution equations (8) and (9) below.

Density evolution. In the large-system limit as m → ∞, the (l, r)-regular random graph locally converges in distribution to an (l, r)-regular tree. Therefore, for a randomly chosen edge (i, j), the messages x_{i→j} and y_{j→i} converge in distribution to x and y_p defined by the density evolution equations (8). Here and in what follows, we drop the superscript k denoting the iteration number whenever it is clear from the context. We initialize y_p with a Gaussian distribution independent of p: y_p^(0) ∼ N(1, 1). Let =^d denote equality in distribution. Then, for k ∈ {1, 2, . . .},

x^(k) =^d Σ_{i∈[l−1]} z_{p_i,i} y_{p_i,i}^(k−1) ,   y_p^(k) =^d Σ_{j∈[r−1]} z_{p,j} x_j^(k) ,  (8)

where the x_j's, p_i's, and y_{p,i}'s are independent copies of x, p, and y_p, respectively. Also, the z_{p,i}'s and z_{p,j}'s are independent copies of z_p. Here p ∈ [0, 1] is a random variable distributed according to the distribution of the worker's quality. The z_{p,j}'s and x_j's are independent, and the z_{p_i,i}'s and y_{p_i,i}'s are conditionally independent given p_i. Finally, z_p is a random variable which is +1 with probability p and −1 with probability 1 − p. Then, for a randomly chosen I, the decision variable x_I^(k) converges in distribution to

x̂^(k) =^d Σ_{i∈[l]} z_{p_i,i} y_{p_i,i}^(k−1) .  (9)

Analyzing the density. Our strategy for upper-bounding P(x̂^(k) ≤ 0) is to show that x̂^(k) is sub-Gaussian with appropriate parameters and then apply the Chernoff bound. A random variable z with mean m is said to be sub-Gaussian with parameter σ if for all λ ∈ R the following inequality holds: E[e^{λz}] ≤ e^{mλ + (1/2)σ²λ²}. Define

σ²_k ≡ 2l̂(l̂r̂)^{k−1} + µ²l̂³r̂(3qr̂ + 1)(ql̂r̂)^{2k−4} (1 − (1/(q²l̂r̂))^{k−1}) / (1 − 1/(q²l̂r̂))

and m_k ≡ µl̂(ql̂r̂)^{k−1} for k ∈ Z. We will first show that x^(k) is sub-Gaussian with mean m_k and parameter σ²_k for the regime of λ we are interested in. Precisely, we will show that for |λ| ≤ 1/(2m_{k−1}r̂),

E[e^{λx^(k)}] ≤ e^{m_k λ + (1/2)σ²_k λ²} .
(10)

By definition and distributional independence, we have E[e^{λx̂^(k)}] = E[e^{λx^(k)}]^{l/l̂}. Therefore, it follows from (10) that x̂^(k) satisfies E[e^{λx̂^(k)}] ≤ e^{(l/l̂)m_k λ + (l/2l̂)σ²_k λ²}. Applying the Chernoff bound with λ = −m_k/σ²_k, we get

P(x̂^(k) ≤ 0) ≤ E[e^{λx̂^(k)}] ≤ e^{−l m²_k/(2 l̂ σ²_k)} ,  (11)

Since m_k m_{k−1}/σ²_k ≤ µ²l̂²(ql̂r̂)^{2k−3}/(3µ²ql̂³r̂²(ql̂r̂)^{2k−4}) = 1/(3r̂), it is easy to check that |λ| ≤ 1/(2m_{k−1}r̂). Substituting (11) in (7) finishes the proof of Theorem 2.1. Now we are left to prove that x^(k) is sub-Gaussian with appropriate parameters. We can write down a recursive formula for the evolution of the moment generating functions of x and y_p:

E[e^{λx^(k)}] = E_p[ p E[e^{λy_p^(k−1)} | p] + p̄ E[e^{−λy_p^(k−1)} | p] ]^{l̂} ,  (12)
E[e^{λy_p^(k)}] = ( p E[e^{λx^(k)}] + p̄ E[e^{−λx^(k)}] )^{r̂} ,  (13)

where p̄ = 1 − p. We can prove that these are sub-Gaussian using induction. First, for k = 1, we show that x^(1) is sub-Gaussian with mean m_1 = µl̂ and parameter σ²_1 = 2l̂, where µ ≡ E[2p − 1]. Since y_p is initialized as Gaussian with unit mean and variance, we have E[e^{λy_p^(0)}] = e^{λ + (1/2)λ²} regardless of p. Substituting this into (12), we get for any λ, E[e^{λx^(1)}] = (E[p]e^λ + (1 − E[p])e^{−λ})^{l̂} e^{(1/2)l̂λ²} ≤ e^{l̂µλ + l̂λ²}, where the inequality follows from the fact that ae^z + (1 − a)e^{−z} ≤ e^{(2a−1)z + (1/2)z²} for any z ∈ R and a ∈ [0, 1] (cf. [14, Lemma A.1.5]). Next, assuming E[e^{λx^(k)}] ≤ e^{m_k λ + (1/2)σ²_k λ²} for |λ| ≤ 1/(2m_{k−1}r̂), we show that E[e^{λx^(k+1)}] ≤ e^{m_{k+1}λ + (1/2)σ²_{k+1}λ²} for |λ| ≤ 1/(2m_k r̂), and compute appropriate m_{k+1} and σ²_{k+1}. Substituting the bound E[e^{λx^(k)}] ≤ e^{m_k λ + (1/2)σ²_k λ²} in (13), we get E[e^{λy_p^(k)}] ≤ (pe^{m_k λ} + p̄e^{−m_k λ})^{r̂} e^{(1/2)r̂σ²_k λ²}. Further applying this bound in (12), we get

E[e^{λx^(k+1)}] ≤ E_p[ p(pe^{m_k λ} + p̄e^{−m_k λ})^{r̂} + p̄(pe^{−m_k λ} + p̄e^{m_k λ})^{r̂} ]^{l̂} e^{(1/2)l̂r̂σ²_k λ²} .  (14)

To bound the first term on the right-hand side, we use the next key lemma.

Lemma 3.1. For any |z| ≤ 1/(2r̂) and p ∈ [0, 1] such that q = E[(2p − 1)²], we have

E_p[ p(pe^z + p̄e^{−z})^{r̂} + p̄(p̄e^z + pe^{−z})^{r̂} ] ≤ e^{qr̂z + (1/2)(3qr̂² + r̂)z²} .
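As an informal numerical sanity check (not a substitute for the proof), the inequality of Lemma 3.1 can be verified on a grid for a concrete reliability distribution. Here we take the spammer-hammer distribution (p = 1 or p = 1/2 with probability 1/2 each, so q = 1/2) and an arbitrary r̂ = 9, scanning z over the allowed range |z| ≤ 1/(2r̂):

```python
import numpy as np

q = 0.5           # E[(2p-1)^2] for the spammer-hammer distribution
r_hat = 9         # arbitrary choice for illustration
z = np.linspace(-1 / (2 * r_hat), 1 / (2 * r_hat), 1001)

def inner(p, z):
    # p*(p*e^z + (1-p)*e^-z)^r_hat + (1-p)*((1-p)*e^z + p*e^-z)^r_hat
    return (p * (p * np.exp(z) + (1 - p) * np.exp(-z)) ** r_hat
            + (1 - p) * ((1 - p) * np.exp(z) + p * np.exp(-z)) ** r_hat)

lhs = 0.5 * inner(1.0, z) + 0.5 * inner(0.5, z)            # E_p[ ... ]
rhs = np.exp(q * r_hat * z + 0.5 * (3 * q * r_hat**2 + r_hat) * z**2)
print(bool(np.all(lhs <= rhs + 1e-12)))                    # True
```

Both sides equal 1 at z = 0, and the quadratic term on the right dominates throughout the admissible range.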
For the proof, we refer to the journal version of this paper. Applying this inequality to (14) gives E[e^{λx^(k+1)}] ≤ e^{ql̂r̂m_k λ + (1/2)((3ql̂r̂² + l̂r̂)m²_k + l̂r̂σ²_k)λ²}, for |λ| ≤ 1/(2m_k r̂). In the regime where ql̂r̂ ≥ 1, as per our assumption, m_k is non-decreasing in k. At iteration k, the above recursion holds for |λ| ≤ min{1/(2m_1 r̂), . . . , 1/(2m_{k−1} r̂)} = 1/(2m_{k−1} r̂). Hence, we get a recursion for m_k and σ_k such that (10) holds for |λ| ≤ 1/(2m_{k−1} r̂):

m_{k+1} = ql̂r̂ m_k ,   σ²_{k+1} = (3ql̂r̂² + l̂r̂)m²_k + l̂r̂ σ²_k .

With the initialization m_1 = µl̂ and σ²_1 = 2l̂, we have m_k = µl̂(ql̂r̂)^{k−1} for k ∈ {1, 2, . . .} and σ²_k = aσ²_{k−1} + bc^{k−2} for k ∈ {2, 3, . . .}, with a = l̂r̂, b = µ²l̂²(3ql̂r̂² + l̂r̂), and c = (ql̂r̂)². After some algebra, it follows that σ²_k = σ²_1 a^{k−1} + bc^{k−2} Σ_{ℓ=0}^{k−2} (a/c)^ℓ. For l̂r̂q² ≠ 1, we have a/c ≠ 1, whence σ²_k = σ²_1 a^{k−1} + bc^{k−2}(1 − (a/c)^{k−1})/(1 − a/c). This finishes the proof of (10).

References

[1] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):20–28, 1979.
[2] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of Venus images. In Advances in Neural Information Processing Systems, pages 1085–1092, 1995.
[3] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1–38, 1977.
[4] R. Jin and Z. Ghahramani. Learning with multiple labels. In Advances in Neural Information Processing Systems, pages 921–928, 2003.
[5] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297–1322, August 2010.
[6] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise.
In Advances in Neural Information Processing Systems, volume 22, pages 2035–2043, 2009.
[7] P. Welinder, S. Branson, S. Belongie, and P. Perona. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems, pages 2424–2432, 2010.
[8] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, pages 614–622. ACM, 2008.
[9] T. Richardson and R. Urbanke. Modern Coding Theory. Cambridge University Press, March 2008.
[10] B. Bollobás. Random Graphs. Cambridge University Press, January 2001.
[11] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, California, 1988.
[12] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations, pages 239–269. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2003.
[13] M. Mézard and A. Montanari. Information, Physics, and Computation. Oxford University Press, New York, NY, USA, 2009.
[14] N. Alon and J. H. Spencer. The Probabilistic Method. John Wiley, 2008.
Minimax Localization of Structural Information in Large Noisy Matrices Mladen Kolar†⇤ mladenk@cs.cmu.edu Sivaraman Balakrishnan†⇤ sbalakri@cs.cmu.edu Alessandro Rinaldo†† arinaldo@stat.cmu.edu Aarti Singh† aarti@cs.cmu.edu † School of Computer Science and †† Department of Statistics, Carnegie Mellon University Abstract We consider the problem of identifying a sparse set of relevant columns and rows in a large data matrix with highly corrupted entries. This problem of identifying groups from a collection of bipartite variables such as proteins and drugs, biological species and gene sequences, malware and signatures, etc is commonly referred to as biclustering or co-clustering. Despite its great practical relevance, and although several ad-hoc methods are available for biclustering, theoretical analysis of the problem is largely non-existent. The problem we consider is also closely related to structured multiple hypothesis testing, an area of statistics that has recently witnessed a flurry of activity. We make the following contributions 1. We prove lower bounds on the minimum signal strength needed for successful recovery of a bicluster as a function of the noise variance, size of the matrix and bicluster of interest. 2. We show that a combinatorial procedure based on the scan statistic achieves this optimal limit. 3. We characterize the SNR required by several computationally tractable procedures for biclustering including element-wise thresholding, column/row average thresholding and a convex relaxation approach to sparse singular vector decomposition. 1 Introduction Biclustering is the problem of identifying a (typically) sparse set of relevant columns and rows in a large, noisy data matrix. This problem along with the first algorithm to solve it were proposed by Hartigan [14] as a way to directly cluster data matrices to produce clusters with greater interpretability. 
Biclustering routinely arises in several applications such as discovering groups of proteins and drugs that interact with each other [19], learning phylogenetic relationships between different species based on alignments of snippets of their gene sequences [30], identifying malware that have similar signatures [7] and identifying groups of users with similar tastes for commercial products [29]. In these applications, the data matrix is often indexed by (object, feature) pairs and the goal is to identify clusters in this set of bipartite variables. In standard clustering problems, the goal is only to identify meaningful groups of objects and the methods typically use the entire feature vector to define a notion of similarity between the objects. ⇤These authors contributed equally to this work 1 Biclustering can be thought of as high-dimensional clustering where only a subset of the features are relevant for identifying similar objects, and the goal is to identify not only groups of objects that are similar, but also which features are relevant to the clustering task. Consider, for instance gene expression data where the objects correspond to genes, and the features correspond to their expression levels under a variety of experimental conditions. Our present understanding of biological systems leads us to expect that subsets of genes will be co-expressed only under a small number of experimental conditions. Although, pairs of genes are not expected to be similar under all experimental conditions it is critical to be able to discover local expression patterns, which can for instance correspond to joint participation in a particular biological pathway or process. Thus, while clustering aims to identify global structure in the data, biclustering take a more local approach by jointly clustering both objects and features. Prevalent techniques for finding biclusters are typically heuristic procedures with little or no theoretical underpinning. 
In order to study, understand and compare biclustering algorithms we consider a simple theoretical model of biclustering [18, 17, 26]. This model is akin to the spiked covariance model of [15] widely used in the study of PCA in high dimensions. We will focus on the following simple observation model for the matrix A ∈ R^{n1×n2}:

A = βuv′ + ∆  (1)

where ∆ = {∆_ij}_{i∈[n1], j∈[n2]} is a random matrix whose entries are i.i.d. N(0, σ²) with σ² > 0 known, u = {u_i : i ∈ [n1]} and v = {v_i : i ∈ [n2]} are unknown deterministic unit vectors in R^{n1} and R^{n2}, respectively, and β > 0 is a constant. To simplify the presentation, we assume that u ∝ {0, 1}^{n1} and v ∝ {0, 1}^{n2}. Let K1 = {i : u_i ≠ 0} and K2 = {i : v_i ≠ 0} be the sets indexing the non-zero components of the vectors u and v, respectively. We assume that u and v are sparse, that is, k1 := |K1| ≪ n1 and k2 := |K2| ≪ n2. While the sets (K1, K2) are unknown, we assume that their cardinalities are known. Notice that the magnitude of the signal for all the coordinates in the bicluster K1 × K2 is β/√(k1k2). The parameter β measures the strength of the signal, and is the key quantity we will be studying. We focus on the case of a single bicluster that appears as an elevated sub-matrix of size k1 × k2 with signal strength β embedded in a large n1 × n2 data matrix with entries corrupted by additive Gaussian noise with variance σ². Under this model, the biclustering problem is formulated as the problem of estimating the sets K1 and K2, based on a single noisy observation A of the unknown signal matrix βuv′. Biclustering is most subtle when the matrix is large with several irrelevant variables, the entries are highly noisy, and the bicluster is small, as defined by a sparse set of rows/columns. We provide a sharp characterization of the tuples (β, n1, n2, k1, k2, σ²) under which it is possible to recover the bicluster and study several common methods and establish the regimes under which they succeed.
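A minimal sketch (not code from the paper) of how one might simulate model (1) with binary-support u and v; all sizes and the value of β below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bicluster(n1, n2, k1, k2, beta, sigma=1.0):
    # Model (1): A = beta * u v' + Delta, with u, v unit vectors supported
    # on the first k1 rows / k2 columns and Delta_ij i.i.d. N(0, sigma^2).
    # Every entry of the planted block is elevated by beta / sqrt(k1 * k2).
    u = np.zeros(n1); u[:k1] = 1.0 / np.sqrt(k1)
    v = np.zeros(n2); v[:k2] = 1.0 / np.sqrt(k2)
    A = beta * np.outer(u, v) + sigma * rng.normal(size=(n1, n2))
    return A, set(range(k1)), set(range(k2))

A, K1, K2 = make_bicluster(n1=50, n2=60, k1=5, k2=6, beta=20.0)
print(A.shape)              # (50, 60)
print(A[:5, :6].mean())     # close to beta / sqrt(k1 * k2), i.e. ~3.65
```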
We establish minimax lower and upper bounds for the following class of models. Let

Θ(β0, k1, k2) := {(β, K1, K2) : β ≥ β0, |K1| = k1, K1 ⊂ [n1], |K2| = k2, K2 ⊂ [n2]}  (2)

be a set of parameters. For a parameter θ ∈ Θ, let P_θ denote the joint distribution of the entries of A = {a_ij}_{i∈[n1], j∈[n2]}, whose density with respect to the Lebesgue measure is

∏_{ij} N(a_ij; β(k1k2)^{−1/2} 1I{i ∈ K1, j ∈ K2}, σ²),  (3)

where the notation N(z; µ, σ²) denotes the density at z of a Gaussian random variable with mean µ and variance σ², and 1I denotes the indicator function. We derive a lower bound that identifies tuples of (β, n1, n2, k1, k2, σ²) under which we can recover the true biclustering from a noisy high-dimensional matrix. We show that a combinatorial procedure based on the scan statistic achieves this minimax optimal limit; however, it is impractical as it requires enumerating all possible sub-matrices of a given size in a large matrix. We analyze the scalings (i.e. the relation between β and (n1, n2, k1, k2, σ²)) under which some computationally tractable procedures for biclustering, including element-wise thresholding, column/row average thresholding and sparse singular vector decomposition (SSVD), succeed with high probability. We consider the detection of both small and large biclusters of weak activation, and show that at the minimax scaling the problem is surprisingly subtle (e.g., even detecting big clusters is quite hard). In Table 1, we describe our main findings and compare the scalings under which the various algorithms succeed.

Table 1:
  Algorithm            | SNR scaling  | Bicluster size                            | Result
  Combinatorial        | Minimax      | Any                                       | Theorem 2
  Thresholding         | Weak         | Any                                       | Theorem 3
  Row/Column Averaging | Intermediate | n1^{1/2+α} × n2^{1/2+α}, α ∈ (0, 1/2)     | Theorem 4
  Sparse SVD           | Weak         | Any                                       | Theorem 5

where the scalings are:
1. Minimax: β ∼ σ max( √(k1 log(n1 − k1)), √(k2 log(n2 − k2)) )
2. Weak: β ∼ σ max( √(k1k2 log(n1 − k1)), √(k1k2 log(n2 − k2)) )
3.
Intermediate (for large clusters): β ∼ σ max( √(k1k2 log(n1 − k1))/n2^α, √(k1k2 log(n2 − k2))/n1^α )

Element-wise thresholding does not take advantage of any structure in the data matrix and hence does not achieve the minimax scaling for any bicluster size. If the clusters are big enough, row/column averaging performs better than element-wise thresholding since it can take advantage of structure. We also study a convex relaxation for sparse SVD, based on the DSPCA algorithm proposed by [11], that encourages the singular vectors of the matrix to be supported over a sparse set of variables. However, despite the increasing popularity of this method, we show that it is only guaranteed to yield a sparse set of singular vectors when the SNR is quite high, equivalent to element-wise thresholding, and fails for stronger scalings of the SNR.

1.1 Related work

Due to its practical importance and difficulty, biclustering has attracted considerable attention (for some recent surveys see [9, 27, 20, 22]). Broadly, algorithms for biclustering can be categorized as either score-based searches or spectral algorithms. Many of the proposed algorithms for identifying relevant clusters are based on heuristic searches whose goal is to identify large average sub-matrices or sub-matrices that are well fit by a two-way ANOVA model. Sun et al. [26] provide some statistical backing for these exhaustive search procedures. In particular, they show how to construct a test via exhaustive search to distinguish when there is a small sub-matrix of weak activation from the "null" case when there is no bicluster. The premise behind the spectral algorithms is that if there were a sub-matrix embedded in a large matrix, then this sub-matrix could be identified from the left and right singular vectors of A. In the case when exactly one of u and v is random, the model (1) can be related to the spiked covariance model of [15]. In the case when v is random, the matrix A has independent columns and dependent rows.
Therefore, A′A is a spiked covariance matrix and it is possible to use the existing theoretical results on the first eigenvalue to characterize the left singular vector of A. A lot of recent work has dealt with estimation of sparse eigenvectors of A′A; see for example [32, 16, 24, 31, 2]. For biclustering applications, the assumption that exactly one of u and v is random is not justifiable; therefore, theoretical results for the spiked covariance model do not translate directly. Singular vectors of the model (1) have been analyzed by [21], improving on earlier results of [6]. These results, however, are asymptotic and do not consider the case when u and v are sparse. Our setup for the biclustering problem also falls in the framework of structured normal means multiple hypothesis testing problems, where for each entry in the matrix the hypotheses are that the entry has mean 0 versus an elevated mean. The presence of a bicluster (sub-matrix), however, imposes structure on which elements are elevated concurrently. Recently, several papers have investigated the structured normal means setting for ordered domains. For example, [5] consider the detection of elevated intervals and other parametric structures along an ordered line or grid, [4] consider detection of elevated connected paths in tree and lattice topologies, and [3] considers nonparametric cluster structures in a regular grid. In addition, [1] consider testing of different elevated structures in a general but known graph topology. Our setup for the biclustering problem requires identification of an elevated submatrix in an unordered matrix. At a high level, all these results suggest that it is possible to leverage the structure to improve the SNR threshold at which the hypothesis testing problem is feasible. However, computationally efficient procedures that achieve the minimax SNR thresholds are only known for a few of these problems.
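To make the scaling gaps concrete, the three SNR levels from Table 1 can be evaluated numerically (constants and σ suppressed; the problem size below is hypothetical):

```python
import math

def snr_scalings(n1, n2, k1, k2, alpha=0.1):
    # Evaluate, up to constants and with sigma = 1, the three scalings of
    # Table 1; alpha is the exponent in the averaging (large-cluster) regime.
    minimax = max(math.sqrt(k1 * math.log(n1 - k1)),
                  math.sqrt(k2 * math.log(n2 - k2)))
    weak = max(math.sqrt(k1 * k2 * math.log(n1 - k1)),
               math.sqrt(k1 * k2 * math.log(n2 - k2)))
    intermediate = max(math.sqrt(k1 * k2 * math.log(n1 - k1)) / n2**alpha,
                       math.sqrt(k1 * k2 * math.log(n2 - k2)) / n1**alpha)
    return minimax, weak, intermediate

# Hypothetical problem size, for illustration only.
print(snr_scalings(10000, 10000, 200, 200))
```

At this size the ordering minimax < intermediate < weak holds by a wide margin, illustrating how far the tractable procedures sit above the information-theoretic limit.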
Our results for biclustering have a similar flavor, in that the minimax threshold requires a combinatorial procedure whereas the computationally efficient procedures we investigate are often sub-optimal. The rest of this paper is organized as follows. In Section 2, we provide a lower bound on the minimum signal strength needed for successfully identifying the bicluster. Section 3 presents a combinatorial procedure which achieves the lower bound and hence is minimax optimal. We investigate some computationally efficient procedures in Section 4. Simulation results are presented in Section 5 and we conclude in Section 6. All proofs are deferred to the Appendix.

2 Lower bound

In this section, we derive a lower bound for the problem of identifying the correct bicluster, indexed by K1 and K2, in model (1). In particular, we derive conditions on (β, n1, n2, k1, k2, σ²) under which any method will make an error when estimating the correct cluster. Intuitively, if either the signal-to-noise ratio β/σ or the cluster size is small, the minimum signal strength needed will be high, since it is harder to distinguish the bicluster from the noise.

Theorem 1. Let α ∈ (0, 1/8) and

βmin = βmin(n1, n2, k1, k2, σ) = σ√α max( √(k1 log(n1 − k1)), √(k2 log(n2 − k2)), √( k1k2 log((n1 − k1)(n2 − k2)) / (k1 + k2 − 1) ) ) .  (4)

Then for all β0 ≤ βmin,

inf_Φ sup_{θ∈Θ(β0,k1,k2)} P_θ[Φ(A) ≠ (K1(θ), K2(θ))] ≥ (√M / (1 + √M)) ( 1 − 2α − √(2α / log M) ) → 1 − 2α as n1, n2 → ∞,  (5)

where M = min(n1 − k1, n2 − k2), Θ(β0, k1, k2) is given in (2), and the infimum is over all measurable maps Φ : R^{n1×n2} → 2^{[n1]} × 2^{[n2]}.

The result can be interpreted in the following way: for any biclustering procedure Φ, if β0 ≤ βmin, then there exists some element in the model class Θ(β0, k1, k2) such that the probability of incorrectly identifying the sets K1 and K2 is bounded away from zero. The proof is based on a standard technique described in Chapter 2.6 of [28].
We start by identifying a subset of parameter tuples that are hard to distinguish. Once a suitable finite set is identified, tools for establishing lower bounds on the error in multiple-hypothesis testing can be directly applied. These tools only require computing the Kullback-Leibler (KL) divergence between two distributions P✓1 and P✓2, which in the case of model (1) are two multivariate normal distributions. These constructions and calculations are described in detail in the Appendix. 3 Minimax optimal combinatorial procedure We now investigate a combinatorial procedure achieving the lower bound of Theorem 1, in the sense that, for any ✓2 ⇥(βmin, k1, k2), the probability of recovering the true bicluster (K1, K2) tends to one as n1 and n2 grow unbounded. This scan procedure consists in enumerating all possible pairs of subsets of the row and column indexes of size k1 and k2, respectively, and choosing the one whose corresponding submatrix has the largest overall sum. In detail, for an observed matrix A and two candidate subsets ˜K1 ⇢[n1] and ˜K2 ⇢[n2], we define the associated score S( ˜K1, ˜K2) := P i2 ˜ K1 P j2 ˜ K2 aij. The estimated bicluster is the pair of subsets of sizes k1 and k2 achieving the highest score: (A) := argmax ( ˜ K1, ˜ K2) S( ˜K1, ˜K2) subject to | ˜K1| = k1, | ˜K2| = k2. (6) The following theorem determines the signal strength β needed for the decoder to find the true bicluster. 4 Theorem 2. Let A ⇠P✓with ✓2 ⇥(β, k1, k2) and assume that k1 n1/2 and k2 n2/2. If β ≥4σ max 0 @p k1 log(n1 −k1), p k2 log(n2 −k2), s 2k1k2 log(n1 −k1)(n2 −k2) k1 + k2 1 A (7) then P[ (A) 6= (K1, K2)] 2[(n1 −k1)−1 + (n2 −k2)−1] where is the decoder defined in (6). Comparing to the lower bound in Theorem 1, we observe that the combinatorial procedure using the decoder that looks for all possible clusters and chooses the one with largest score achieves the lower bound up to constants. 
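The scan decoder (6) is straightforward to implement for toy sizes. The sketch below (an illustration, not the paper's code) exploits the fact that, once the rows are fixed, the optimal columns are simply the k2 columns with the largest sums over those rows:

```python
import numpy as np
from itertools import combinations

def scan_decoder(A, k1, k2):
    # Exhaustive scan decoder (6): maximize the submatrix sum over all row
    # subsets of size k1; for fixed rows, the best k2 columns are those with
    # the largest column sums over the chosen rows.  Cost is O(n1 choose k1),
    # so this is only viable for tiny matrices.
    n1, n2 = A.shape
    best, best_sets = -np.inf, None
    for R in combinations(range(n1), k1):
        col_sums = A[list(R), :].sum(axis=0)
        C = np.argsort(col_sums)[-k2:]
        score = col_sums[C].sum()
        if score > best:
            best, best_sets = score, (set(R), set(C.tolist()))
    return best_sets

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 12))
A[:3, :4] += 5.0                      # planted 3x4 bicluster (hypothetical signal)
K1, K2 = scan_decoder(A, 3, 4)
print(sorted(K1), sorted(K2))
```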
Unfortunately, this procedure is not practical for data sets commonly encountered in practice, as it requires enumerating all -n1 k1 .-n2 k2 . possible sub-matrices of size k1 ⇥ k2. The combinatorial procedure requires the signal to be positive, but not necessarily constant throughout the bicluster. In fact it is easy to see that provided the average signal in the bicluster is larger than that stipulated by the theorem this procedure succeeds with high probability irrespective of how the signal is distributed across the bicluster. Finally, we remark that the estimation of the cluster is done under the assumption that k1 and k2 are known. Establishing minimax lower bounds and a procedure that adapts to unknown k1 and k2 is an open problem. 4 Computationally efficient biclustering procedures In this section we investigate the performance of various procedures for biclustering, that, unlike the optimal scan statistic procedure studied in the previous section, are computationally tractable. For each of these procedures however, computational ease comes at the cost of suboptimal performance: recovery of the true bicluster is only possible if the β is much larger than the minimax signal strength of Theorem 1. 4.1 Element-wise thresholding The simplest procedure that we analyze is based on element-wise thresholding. The bicluster is estimated as thr(A, ⌧) := {(i, j) 2 [n1] ⇥[n2] : |aij| ≥⌧} (8) where ⌧> 0 is a parameter. The following theorem characterizes the signal strength β required for the element-wise thresholding to succeed in recovering the bicluster. Theorem 3. Let A ⇠P✓with ✓2 ⇥(β, k1, k2) and fix δ > 0. Set the threshold ⌧as ⌧= σ r 2 log (n1 −k1)(n2 −k2) + k1(n2 −k2) + k2(n1 −k1) δ . If β ≥ p k1k2σ r 2 log k1k2 δ + r 2 log (n1 −k1)(n2 −k2) + k1(n2 −k2) + k2(n1 −k1) δ ! then P[ thr(A, ⌧) 6= K1 ⇥K2] = o(δ/(k1k2)). Comparing Theorem 3 with the lower bound in Theorem 1, we observe that the signal strength β needs to be O(max(pk1, pk2)) larger than the lower bound. 
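A minimal sketch of the element-wise thresholding decoder (8), with the threshold of Theorem 3 simplified (δ = 0.01; the sizes and signal level below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, k1, k2, sigma = 40, 40, 3, 3, 1.0

def threshold_decoder(A, tau):
    # Element-wise thresholding (8): keep entries with |a_ij| >= tau.
    return {(i, j) for i, j in zip(*np.where(np.abs(A) >= tau))}

A = sigma * rng.normal(size=(n1, n2))
A[:k1, :k2] += 10.0                   # per-entry signal beta / sqrt(k1 * k2)
# Threshold in the spirit of Theorem 3, with constants simplified:
tau = sigma * np.sqrt(2 * np.log(n1 * n2 / 0.01))
est = threshold_decoder(A, tau)
truth = {(i, j) for i in range(k1) for j in range(k2)}
print(est == truth)
```

Note that the per-entry signal must clear a √log(n1 n2)-sized threshold, which is exactly why the required β carries the extra √(k1 k2) factor relative to the minimax rate.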
This is not surprising, since the element-wise thresholding is not exploiting the structure of the problem, but is assuming that the large elements of the matrix A are positioned randomly. From the proof it is not hard to see that this upper bound is tight up to constants, i.e. if β cpk1k2σ ✓q 2 log k1k2 δ + q 2 log (n1−k1)(n2−k2)+k1(n2−k2)+k2(n1−k1) δ ◆ for a small enough constant c then thresholding will no longer recover the bicluster with probability at least 1 −δ. It is also worth noting that thresholding neither requires the signal in the bicluster to be constant nor positive provided it is larger in magnitude, at every entry, than the threshold specified in the theorem. 5 4.2 Row/Column averaging Next, we analyze another a procedure based on column and row averaging. When the bicluster is large this procedure exploits the structure of the problem and outperforms the simple elementwise thresholding and the sparse SVD, which is discussed in the following section. The averaging procedure works only well if the bicluster is “large”, as specified below, since otherwise the row or column average is dominated by the noise. More precisely, the averaging procedure computes the average of each row and column of A and outputs the k1 rows and k2 columns with the largest average. Let {rr,i}i2[n1] and {rc,j}j2[n2] denote the positions of rows and columns when they are ordered according to row and column averages in descending order. The bicluster is estimated then as avg(A) := {i 2 [n1] : rr,i k1} ⇥{j 2 [n2] : rc,j k2}. (9) The following theorem characterizes the signal strength β required for the averaging procedure to succeed in recovering the bicluster. Theorem 4. Let A ⇠P✓with ✓2 ⇥(β, k1, k2). If k1 = ⌦(n1/2+↵ 1 ) and k2 = ⌦(n1/2+↵ 2 ), where ↵2 (0, 1/2) is a constant and, β ≥4σ max p k1k2 log(n1 −k1) n↵ 2 , p k1k2 log(n2 −k2) n↵ 1 ! then P[ (A) 6= (K1, K2)] [n−1 1 + n−1 2 ]. 
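A sketch of the averaging decoder (9) on a large planted bicluster (hypothetical sizes); the per-entry signal here is kept well below what element-wise thresholding would need:

```python
import numpy as np

rng = np.random.default_rng(0)

def averaging_decoder(A, k1, k2):
    # Row/column averaging (9): keep the k1 rows and k2 columns with the
    # largest averages.
    rows = set(np.argsort(A.mean(axis=1))[-k1:].tolist())
    cols = set(np.argsort(A.mean(axis=0))[-k2:].tolist())
    return rows, cols

n, k = 400, 100                       # "large" bicluster: k >> sqrt(n)
A = rng.normal(size=(n, n))
A[:k, :k] += 2.0                      # per-entry signal, far below the
                                      # element-wise threshold ~ sqrt(2 log n^2)
rows, cols = averaging_decoder(A, k, k)
print(rows == set(range(k)), cols == set(range(k)))
```

Each entry is only about 2σ above the noise, so thresholding would miss essentially all of them, yet the row and column averages separate cleanly because the cluster occupies a quarter of each dimension.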
Comparing to Theorem 3, we observe that the averaging requires lower signal strength than the element-wise thresholding when the bicluster is large, that is, k1 = ⌦(pn1) and k2 = ⌦(pn2). Unless both k1 = O(n1) and k2 = O(n2), the procedure does not achieve the lower bound of Theorem 1, however, the procedure is simple and computationally efficient. It is also not hard to show that this theorem is sharp in its characterization of the averaging procedure. Further, unlike thresholding, averaging requires the signal to be positive in the bicluster. It is interesting to note that a large bicluster can also be identified without assuming the normality of the noise matrix ∆. This non-parametric extension is based on a simple sign-test, and the details are provided in Appendix. 4.3 Sparse singular value decomposition (SSVD) An alternate way to estimate K1 and K2 would be based on the singular value decomposition (SVD), i.e. finding ˜u and ˜v that maximize h˜u, A˜vi, and then threshold the elements of ˜u and ˜v. Unfortunately, such a method would perform poorly when the signal β is weak and the dimensionality is high, since, due to the accumulation of noise, ˜u and ˜v are poor estimates of u and v and and do not exploit the fact that u and v are sparse. In fact, it is now well understood [8] that SVD is strongly inconsistent when the signal strength is weak, i.e. \(˜u, u) ! ⇡/2 (and similarly for v) almost surely. See [26] for a clear exposition and discussion of this inconsistency in the SVD setting. To properly exploit the sparsity in the singular vectors, it seems natural to impose a cardinality constraint to obtain a sparse singular vector decomposition (SSVD): max u2Sn1−1,v2Sn2−1hu, Avi subject to ||u||0 k1, ||v||0 k2, which can be further rewritten as max Z2Rn2⇥n1 tr AZ subject to Z = vu0, ||u||2 = 1, ||v||2 = 1, ||u||0 k1, ||v||0 k2. (10) The above problem is non-convex and computationally intractable. 
Inspired by the convex relaxation methods for sparse principal component analysis proposed by [11], we consider the following relaxation the SSVD: max X2R(n1+n2)⇥(n1+n2) tr AX21 −λ10|X21|1 subject to X ⌫0, tr X11 = 1, tr X22 = 1, (11) 6 where X is the block matrix X11 X12 X21 X22 4 with the block X21 corresponding to Z in (10). If the optimal solution bX is of rank 1, then, necessarily, bX = -bu bv . (bu0 bv0). Based on the sparse singular vectors bu and bv, we estimate the bicluster as bK1 = {j 2 [n1] : buj 6= 0} and bK2 = {j 2 [n2] : bvj 6= 0}. (12) The user defined parameter λ controls the sparsity of the solution bX21, and, therefore, provided the solution is of rank one, it also controls the sparsity of the vectors bu and bv and of the estimated bicluster. The following theorem provides sufficient conditions for the solution bX to be rank one and to recover the bicluster. Theorem 5. Consider the model in (1). Assume k1 ⇣k2 and k1 n1/2 and k2 n2/2. If β ≥2σ p k1k2 log(n1 −k1)(n2 −k2) (13) then the solution bX of the optimization problem in (11) with λ = β 2pk1k2 is of rank 1 with probability 1 −O(k−1 1 ). Furthermore, we have that ( bK1, bK2) = (K1, K2) with probability 1 −O(k−1 1 ). It is worth noting that SSVD correctly recovers signed vectors bu and bv under this signal strength. In particular, the procedure works even if the u and v in Equation 1 are signed. The following theorem establishes necessary conditions for the SSVD to have a rank 1 solution that correctly identifies the bicluster. Theorem 6. Consider the model in (1). Fix c 2 (0, 1/2). Assume that k1 ⇣k2 and k1 = o(n1/2−c) and k2 = o(n1/2−c 2 ). If β 2σ p ck1k2 log max(n1 −k1, n2 −k2), (14) with λ = β 2pk1k2 then the optimization problem (11) does not have a rank 1 solution that correctly recovers the sparsity pattern with probability at least 1 −O(exp(−(pk1 + pk2)2) for sufficiently large n1 and n2. From Theorem 6 observe that the sufficient conditions of Theorem 5 are sharp. 
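Solving the SDP (11) at scale is costly. As an illustrative stand-in (a heuristic, not the relaxation analyzed in Theorems 5 and 6), one can enforce the cardinality constraints of (10) greedily with a thresholded power iteration:

```python
import numpy as np

def truncated_power_svd(A, k1, k2, iters=50):
    # Alternating power iteration for (10): after each matrix-vector
    # multiply, keep only the k largest-magnitude coordinates and
    # renormalize.  A heuristic stand-in for SSVD, not the SDP (11).
    n1, n2 = A.shape
    v = np.ones(n2) / np.sqrt(n2)
    u = np.zeros(n1)
    for _ in range(iters):
        u = A @ v
        u[np.argsort(np.abs(u))[:-k1]] = 0.0   # zero all but the top k1
        u /= np.linalg.norm(u)
        v = A.T @ u
        v[np.argsort(np.abs(v))[:-k2]] = 0.0   # zero all but the top k2
        v /= np.linalg.norm(v)
    return set(np.flatnonzero(u).tolist()), set(np.flatnonzero(v).tolist())

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 80))
A[:4, :5] += 8.0                      # planted 4x5 bicluster (hypothetical signal)
print(truncated_power_svd(A, 4, 5))
```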
In particular, the two theorems establish that the SSVD does not establish the lower bound given in Theorem 1. The signal strength needs to be of the same order as for the element-wise thresholding, which is somewhat surprising since from the formulation of the SSVD optimization problem it seems that the procedure uses the structure of the problem. From numerical simulations in Section 5 we observe that although SSVD requires the same scaling as thresholding, it consistently performs slightly better at a fixed signal strength. 5 Simulation results We test the performance of the three computationally efficient procedures on synthetic data: thresholding, averaging and sparse SVD. For sparse SVD we use an implementation posted online by [11]. We generate data from (1) with n = n1 = n2, k = k1 = k2, σ2 = 1 and u = v / (10 k, 00 n−k)0. For each algorithm we plot the Hamming fraction (i.e. the Hamming distance between sbu and su rescaled to be between 0 and 1) against the rescaled sample size. In each case we average the results over 50 runs. For thresholding and sparse SVD the rescaled scaling (x-axis) is β kp log(n−k) and for averaging the rescaled scaling (x-axis) is βn↵ kp log(n−k). We observe that there is a sharp threshold between success and failure of the algorithms, and the curves show good agreement with our theory. The vertical line shows the point after which successful recovery happens for all values of n. We can make a direct comparison between thresholding and sparse SVD (since the curves are identically rescaled) to see that at least empirically sparse SVD succeeds at a smaller scaling constant than thresholding even though their asymptotic rates are identical. 
[Figure 1: Thresholding: Hamming fraction versus rescaled signal strength, for $k = \log(n)$, $k = n^{1/3}$ and $k = 0.2n$, with $n = 100, 200, 300, 400, 500$.]

[Figure 2: Averaging: Hamming fraction versus rescaled signal strength, for $k = n^{1/2+\alpha}$ ($\alpha = 0.1$) and $k = 0.2n$, with $n = 100, 200, 300, 400, 500$.]

[Figure 3: Sparse SVD: Hamming fraction versus rescaled signal strength, for $k = \log(n)$, $k = n^{1/3}$ and $k = 0.2n$, with $n = 100, 200, 300, 400, 500$.]

6 Discussion

In this paper, we analyze biclustering using a simple statistical model (1), where a sparse rank one matrix is perturbed with noise. Using this model, we have characterized the minimal signal strength below which no procedure can succeed in recovering the bicluster. This lower bound can be matched using an exhaustive search technique. However, it is still an open problem to find a computationally efficient procedure that is minimax optimal. Amini et al. [2] analyze the convex relaxation procedure proposed in [11] for high-dimensional sparse PCA. Under the minimax scaling for this problem they show that, provided a rank-1 solution exists, it has the desired sparsity pattern (they were, however, not able to show that a rank-1 solution exists with high probability).
Somewhat surprisingly, we show that in the SVD case a rank-1 solution with the desired sparsity pattern does not exist with high probability. The two settings, however, are not identical, since the noise in the spiked covariance model is Wishart rather than Gaussian, and has correlated entries. It would be interesting to analyze whether our negative result has similar implications for the sparse PCA setting. The focus of our paper has been on a model with one cluster, which, although simple, provides several interesting theoretical insights. In practice, data often contains multiple clusters which need to be estimated. Many existing algorithms (see e.g. [17] and [18]) try to estimate multiple clusters and it would be useful to analyze these theoretically. Furthermore, the algorithms that we have analyzed assume knowledge of the size of the cluster, which is used to select the tuning parameters. It is a challenging problem of great practical relevance to find data-driven methods to select these tuning parameters.

7 Acknowledgments

We would like to thank Arash Amini and Martin Wainwright for fruitful discussions, and Larry Wasserman for his ideas, indispensable advice and wise guidance. This research is supported in part by AFOSR under grant FA9550-10-1-0382 and NSF under grant IIS-1116458. SB would also like to thank Jaime Carbonell and Srivatsan Narayanan for several valuable comments and thought-provoking discussions.

8 References

[1] Louigi Addario-Berry, Nicolas Broutin, Luc Devroye, and Gábor Lugosi. On combinatorial testing problems. Ann. Statist., 38(5):3063–3092, 2010.
[2] A.A. Amini and M.J. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. The Annals of Statistics, 37(5B):2877–2921, 2009.
[3] Ery Arias-Castro, Emmanuel J. Candès, and Arnaud Durand. Detection of an anomalous cluster in a network. Ann. Stat., 39(1):278–304, 2011.
[4] Ery Arias-Castro, Emmanuel J. Candès, Hannes Helgason, and Ofer Zeitouni.
Searching for a trail of evidence in a maze. Ann. Statist., 36(4):1726–1757, 2008.
[5] Ery Arias-Castro, David L. Donoho, and Xiaoming Huo. Adaptive multiscale detection of filamentary structures in a background of uniform random points. Ann. Statist., 34(1):326–349, 2006.
[6] Jushan Bai. Inferential theory for factor models of large dimensions. Econometrica, 71(1):135–171, 2003.
[7] Ulrich Bayer, Paolo Milani Comparetti, Clemens Hlauscheck, Christopher Kruegel, and Engin Kirda. Scalable, behavior-based malware clustering. In 16th Symposium on Network and Distributed System Security (NDSS), 2009.
[8] F. Benaych-Georges and R. Rao Nadakuditi. The singular values and vectors of low rank perturbations of large rectangular random matrices. ArXiv e-prints, March 2011.
[9] S. Busygin, O. Prokopyev, and P.M. Pardalos. Biclustering in data mining. Computers & Operations Research, 35(9):2964–2987, 2008.
[10] Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? CoRR, abs/0912.3599, 2009.
[11] Alexandre d'Aspremont, Laurent El Ghaoui, Michael I. Jordan, and Gert R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 49:434–448, 2007.
[12] K.R. Davidson and S.J. Szarek. Local operator theory, random matrices and Banach spaces. Handbook of the Geometry of Banach Spaces, 1:317–366, 2001.
[13] R. Fletcher. Semi-definite matrix constraints in optimization. SIAM Journal on Control and Optimization, 23:493, 1985.
[14] J. A. Hartigan. Direct clustering of a data matrix. Journal of the American Statistical Association, 67(337):123–129, 1972.
[15] I.M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis. The Annals of Statistics, 29(2):295–327, 2001.
[16] I.M. Johnstone and A.Y. Lu. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 104(486):682–693, 2009.
[17] L.
Lazzeroni and A. Owen. Plaid models for gene expression data. Statistica Sinica, 12:61–86, 2002.
[18] Mihee Lee, Haipeng Shen, Jianhua Z. Huang, and J. S. Marron. Biclustering via sparse singular value decomposition. Biometrics, 66(4):1087–1095, 2010.
[19] Jinze Liu and Wei Wang. OP-Cluster: Clustering by tendency in high dimensional space. In Proceedings of the Third IEEE International Conference on Data Mining, ICDM '03, pages 187–, Washington, DC, USA, 2003. IEEE Computer Society.
[20] S.C. Madeira and A.L. Oliveira. Biclustering algorithms for biological data analysis: a survey. IEEE Transactions on Computational Biology and Bioinformatics, pages 24–45, 2004.
[21] A. Onatski. Asymptotics of the principal components estimator of large factor models with weak factors. Economics Department, Columbia University, 2009.
[22] L. Parsons, E. Haque, and H. Liu. Subspace clustering for high dimensional data: a review. ACM SIGKDD Explorations Newsletter, 6(1):90–105, 2004.
[23] R.T. Rockafellar. The Theory of Subgradients and its Applications to Problems of Optimization. Convex and Nonconvex Functions. Heldermann, 1981.
[24] H. Shen and J.Z. Huang. Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99(6):1015–1034, 2008.
[25] G.W. Stewart. Perturbation theory for the singular value decomposition. Computer Science Technical Report Series, Vol. CS-TR-2539, 1990.
[26] X. Sun and A. B. Nobel. On the maximal size of large-average and ANOVA-fit submatrices in a Gaussian random matrix. ArXiv e-prints, September 2010.
[27] A. Tanay, R. Sharan, and R. Shamir. Biclustering algorithms: A survey. Handbook of Computational Molecular Biology, 2004.
[28] A.B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[29] Lyle Ungar and Dean P. Foster. A formal statistical approach to collaborative filtering. In CONALD, 98.
[30] S. Wang, R. R. Gutell, and D. P. Miranker.
Biclustering as a method for RNA local multiple sequence alignment. Bioinformatics, 23:3289–3296, December 2007.
[31] D.M. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 10(3):515, 2009.
[32] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2006.
Crowdclustering Ryan Gomes∗ Caltech Peter Welinder Caltech Andreas Krause ETH Zurich & Caltech Pietro Perona Caltech Abstract Is it possible to crowdsource categorization? Amongst the challenges: (a) each worker has only a partial view of the data, (b) different workers may have different clustering criteria and may produce different numbers of categories, (c) the underlying category structure may be hierarchical. We propose a Bayesian model of how workers may approach clustering and show how one may infer clusters / categories, as well as worker parameters, using this model. Our experiments, carried out on large collections of images, suggest that Bayesian crowdclustering works well and may be superior to single-expert annotations. 1 Introduction Outsourcing information processing to large groups of anonymous workers has been made easier by the internet. Crowdsourcing services, such as Amazon’s Mechanical Turk, provide a convenient way to purchase Human Intelligence Tasks (HITs). Machine vision and machine learning researchers have begun using crowdsourcing to label large sets of data (e.g., images and video [1, 2, 3]) which may then be used as training data for AI and computer vision systems. In all the work so far categories are defined by a scientist, while categorical labels are provided by the workers. Can we use crowdsourcing to discover categories? I.e., is it possible to use crowdsourcing not only to classify data instances into established categories, but also to define the categories in the first place? This question is motivated by practical considerations. If we have a large number of images, perhaps several tens of thousands or more, it may not be realistic to expect a single person to look at all images and form an opinion as to how to categorize them. Additionally, individuals, whether untrained or expert, might not agree on the criteria used to define categories and may not even agree on the number of categories that are present. 
In some domains unsupervised clustering by machine may be of great help; however, unsupervised categorization of images and video is unfortunately a problem that is far from solved. Thus, it is an interesting question whether it is possible to collect and combine the opinion of multiple human operators, each one of which is able to view a (perhaps small) subset of a large image collection. We explore the question of crowdsourcing clustering in two steps: (a) Reduce the problem to a number of independent HITs of reasonable size and assign them to a large pool of human workers (Section 2). (b) Develop a model of the annotation process, and use the model to aggregate the human data automatically (Section 3) yielding a partition of the dataset into categories. We explore the properties of our approach and algorithms on a number of real world data sets, and compare against existing methods in Section 4. 2 Eliciting Information from Workers How shall we enable human operators to express their opinion on how to categorize a large collection of images? Whatever method we choose, it should be easy to learn and it should be implementable by means of a simple graphical user interface (GUI). Our approach (Figure 1) is based on displaying small subsets of M images and asking workers to group them by means of mouse clicks. We provide instructions that may cue workers to certain attributes but we do not provide the worker with category definitions or examples. The worker groups the M items into clusters of his choosing, as many as he sees fit. An item may be placed in its own cluster if it is unlike the others in the HIT. 
The choice of M trades off between the difficulty of the task (worker time required for a HIT increases super-linearly with the number of items), the resolution of the images (more images on the screen means that they will be smaller), and contextual information that may guide the worker to make more global category decisions (more images give a better context, see Section 4.1).

∗Corresponding author, e-mail: gomes@vision.caltech.edu

[Figure 1: Schematic of Bayesian crowdclustering. A large image collection is explored by workers. In each HIT (Section 2), the worker views a small subset of images on a GUI. By associating (arbitrarily chosen) colors with sets of images the worker proposes a (partial) local clustering. Each HIT thus produces multiple binary pairwise labels: each pair of images shown in the same HIT is placed by the worker either in the same category or in different categories. Each image is viewed by multiple workers in different contexts. A model of the annotation process (Sec. 3.1) is used to compute the most likely set of categories from the binary labels. Worker parameters are estimated as well.]

Partial clusterings on many M-sized subsets of the data from many different workers are thus the raw data on which we compute clustering. An alternative would have been to use pairwise distance judgments or three-way comparisons. A large body of work exists in the social sciences that makes use of human-provided similarity values defined between pairs of data items (e.g., Multidimensional Scaling [4]). After obtaining pairwise similarity ratings from workers, and producing a Euclidean embedding, one could conceivably proceed with unsupervised clustering of the data in the Euclidean space. However, accurate distance judgments may be more laborious to specify than partial clusterings.
We chose to explore what we can achieve with partial clusterings alone. We do not expect workers to agree on their definitions of categories, or to be consistent in categorization when performing multiple HITs. Thus, we avoid explicitly associating categories across HITs. Instead, we represent the results of each HIT as a series of $\binom{M}{2}$ binary labels (see Figure 1). We assume that there are N total items (indexed by i), J workers (indexed by j), and H HITs (indexed by h). The information obtained from workers is a set of binary variables L, with elements $l_t \in \{-1, +1\}$ indexed by a positive integer $t \in \{1, \dots, T\}$. Associated with the t-th label is a quadruple $(a_t, b_t, j_t, h_t)$, where $j_t \in \{1, \dots, J\}$ indicates the worker that produced the label, and $a_t \in \{1, \dots, N\}$ and $b_t \in \{1, \dots, N\}$ indicate the two data items compared by the label. $h_t \in \{1, \dots, H\}$ indicates the HIT from which the t-th pairwise label was derived. The number of labels is $T = H\binom{M}{2}$.

Sampling Procedure. We have chosen to structure HITs as clustering tasks of M data items, so we must specify them. If we simply separate the items into disjoint sets, then it will be impossible to infer a clustering over the entire data set: we will not know whether two items in different HITs are in the same cluster or not. There must be some overlap or redundancy: data items must be members of multiple HITs. In the other extreme, we could construct HITs such that each pair of items may be found in at least one HIT, so that every possible pairwise category relation is sampled. This would be quite expensive for a large number of items N, since the number of labels scales asymptotically as $T \in \Omega(N^2)$. However, we expect a noisy transitive property to hold: if items a and b are likely to be in the same cluster, and items b and c are (not) likely to be in the same cluster, then items a and c are (not) likely to be in the same cluster as well.
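Concretely, one worker's grouping of the M items in a HIT unpacks into its $\binom{M}{2}$ binary labels as follows (a minimal sketch; the function and variable names are ours, not the paper's):

```python
from itertools import combinations

def hit_to_labels(grouping, worker, hit_id):
    """Convert one worker's partial clustering of the M items in a HIT into
    binary pairwise labels. `grouping` maps item id -> the worker's
    (arbitrary, HIT-local) cluster id for that item.

    Returns a list of (l, a, b, j, h) tuples: l = +1 if the worker placed
    items a and b in the same cluster, -1 otherwise."""
    labels = []
    for a, b in combinations(sorted(grouping), 2):
        l = +1 if grouping[a] == grouping[b] else -1
        labels.append((l, a, b, worker, hit_id))
    return labels

# A worker splits 4 items into clusters {0, 2} and {1, 3}:
labels = hit_to_labels({0: "red", 1: "blue", 2: "red", 3: "blue"},
                       worker=7, hit_id=0)
```

Here two pairs receive l = +1 (items 0,2 and items 1,3) and the remaining four receive l = −1; the HIT-local cluster ids ("red", "blue") are arbitrary and are never compared across HITs, exactly as described above.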
The transitive nature of binary cluster relations should allow sparse sampling, especially when the number of clusters is relatively small. As a baseline sampling method, we use the random sampling scheme outlined by Strehl and Ghosh [5], developed for the problem of object distributed clustering, in which a partition of a complete data set is learned from a number of clusterings restricted to subsets of the data. (We compare our aggregation algorithm to this work in Section 4.) Their scheme controls the level of sampling redundancy with a single parameter V, which in our problem is interpreted as the expected number of HITs to which a data item belongs. The N items are first distributed deterministically among the HITs, so that there are $\lceil M/V \rceil$ items in each HIT. Then the remaining $M - \lceil M/V \rceil$ items in each HIT are filled by sampling without replacement from the $N - \lceil M/V \rceil$ items that are not yet allocated to the HIT. There are a total of $\lceil NV/M \rceil$ unique HITs. We introduce an additional parameter R, which is the number of different workers that perform each constructed HIT. The total number of HITs distributed to the crowdsourcing service is therefore $H = R\lceil NV/M \rceil$, and we impose the constraint that a worker cannot perform the same HIT more than once. This sampling scheme generates $T = R\lceil NV/M \rceil \binom{M}{2} \in O(RNVM)$ binary labels. With this exception, we find a dearth of ideas in the literature pertaining to sampling methods for distributed clustering problems. Iterative schemes that adaptively choose maximally informative HITs may be preferable to random sampling. We are currently exploring ideas in this direction.
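Our reading of this construction can be sketched as follows (names are ours; rounding at the tail of the item list is handled by truncating the last deterministic slice, a detail the description leaves open):

```python
import math
import random

def sample_hits(N, M, V, R, seed=0):
    """Random HIT construction in the Strehl & Ghosh style described above:
    ceil(N*V/M) unique HITs, each seeded deterministically with about
    ceil(M/V) items and filled to size M by sampling without replacement
    from the items not yet in that HIT; each unique HIT is then replicated
    to R distinct workers."""
    rng = random.Random(seed)
    n_unique = math.ceil(N * V / M)
    base = math.ceil(M / V)
    hits = []
    for h in range(n_unique):
        # Deterministic part: a distinct slice of the item list.
        seed_items = list(range(min(h * base, N), min((h + 1) * base, N)))
        pool = [i for i in range(N) if i not in seed_items]
        hits.append(seed_items + rng.sample(pool, M - len(seed_items)))
    return [hit for hit in hits for _ in range(R)]   # H = R * ceil(N*V/M)

hits = sample_hits(N=20, M=6, V=2, R=3)
```

For N = 20, M = 6, V = 2, R = 3 this yields H = R⌈NV/M⌉ = 21 HITs of 6 distinct items each.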
While some of these methods can work with partial input clusterings, most have not been demonstrated in situations where the input clusterings involve only a small subset of the total data items (M << N), which is the case in our problem. In addition, existing approaches focus on producing a single “average” clustering from a set of input clusterings. In contrast, we are not merely interested in the average clustering produced by a crowd of workers. Instead, we are interested in understanding the ways in which different individuals may categorize the data. We seek a master clustering of the data that may be combined in order to describe the tendencies of individual workers. We refer to these groups of data as atomic clusters. For example, suppose one worker groups objects into a cluster of tall objects and another of short objects, while a different worker groups the same objects into a cluster of red objects and another of blue objects. Then, our method should recover four atomic clusters: tall red objects, short red objects, tall blue objects, and short blue objects. The behavior of the two workers may then be summarized using a confusion table of the atomic clusters (see Section 3.3). The first worker groups the first and third atomic cluster into one category and the second and fourth atomic cluster into another category. The second worker groups the first and second atomic clusters into a category and the third and fourth atomic clusters into another category. 3.1 Generative Model We propose an approach in which data items are represented as points in a Euclidean space and workers are modeled as pairwise binary classifiers in this space. Atomic clusters are then obtained by clustering these inferred points using a Dirichlet process mixture model, which estimates the number of clusters [8]. The advantage of an intermediate Euclidean representation is that it provides a compact way to capture the characteristics of each data item. 
Certain items may be inherently more difficult to categorize, in which case they may lie between clusters. Items may be similar along one axis but different along another (e.g., object height versus object color). A similar approach was proposed by Welinder et al. [3] for the analysis of classification labels obtained from crowdsourcing services. This method does not apply to our problem, since it involves binary labels applied to single data items rather than to pairs, and therefore requires that categories be defined a priori and agreed upon by all workers, which is incompatible with the crowdclustering problem. We propose a probabilistic latent variable model that relates pairwise binary labels to hidden variables associated with both workers and images. The graphical model is shown in Figure 1. $x_i$ is a D-dimensional vector, with components $[x_i]_d$, that encodes item i's location in the embedding space $\mathbb{R}^D$. A symmetric matrix $W_j \in \mathbb{R}^{D \times D}$ with entries $[W_j]_{d_1 d_2}$ and a bias $\tau_j \in \mathbb{R}$ are used to define a pairwise binary classifier, explained in the next paragraph, that represents worker j's labeling behavior. Because $W_j$ is symmetric, we need only specify its upper triangular portion: $\mathrm{vecp}\{W_j\}$, a vector formed by "stacking" the partial columns of $W_j$ according to the ordering $[\mathrm{vecp}\{W_j\}]_1 = [W_j]_{11}$, $[\mathrm{vecp}\{W_j\}]_2 = [W_j]_{12}$, $[\mathrm{vecp}\{W_j\}]_3 = [W_j]_{22}$, etc. $\Phi_k = \{\mu_k, \Sigma_k\}$ are the mean and covariance parameters associated with the k-th Gaussian atomic cluster, and $U_k$ are stick-breaking weights associated with a Dirichlet process. The key term is the pairwise quadratic logistic regression likelihood that captures worker j's tendency to label the pair of images $a_t$ and $b_t$ with $l_t$:
$$p(l_t \mid x_{a_t}, x_{b_t}, W_{j_t}, \tau_{j_t}) = \frac{1}{1 + \exp(-l_t A_t)}, \tag{1}$$
where we define the pairwise quadratic activity $A_t = x_{a_t}^{\top} W_{j_t} x_{b_t} + \tau_{j_t}$. Symmetry of $W_j$ ensures that $p(l_t \mid x_{a_t}, x_{b_t}, W_{j_t}, \tau_{j_t}) = p(l_t \mid x_{b_t}, x_{a_t}, W_{j_t}, \tau_{j_t})$.
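Evaluating the likelihood (1) is a one-liner once the activity A is formed. A stdlib sketch (names are ours) that also exercises the symmetry property just noted:

```python
import math

def pairwise_logistic(l, xa, xb, W, tau):
    """Likelihood (1): p(l | xa, xb, W, tau) = 1 / (1 + exp(-l * A)),
    with pairwise quadratic activity A = xa' W xb + tau. l is +1 (same
    category) or -1 (different); W is a symmetric D x D matrix."""
    D = len(xa)
    A = sum(xa[d1] * W[d1][d2] * xb[d2]
            for d1 in range(D) for d2 in range(D)) + tau
    return 1.0 / (1.0 + math.exp(-l * A))

xa, xb = [0.5, -1.0], [1.5, 0.25]
W = [[0.8, -0.3], [-0.3, 0.2]]     # symmetric, as the model requires
p_same = pairwise_logistic(+1, xa, xb, W, tau=0.1)
p_diff = pairwise_logistic(-1, xa, xb, W, tau=0.1)
```

As expected, p_same + p_diff = 1, and swapping xa and xb leaves both values unchanged because W is symmetric.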
This form of likelihood yields a compact and tractable method of representing classifiers defined over pairs of points in Euclidean space. Pairs of vectors with large pairwise activity tend to be classified as being in the same category, and in different categories otherwise. We find that this form of likelihood leads to tightly grouped clusters of points $x_i$ that are then easily discovered by mixture model clustering. The joint distribution is
$$p(\Phi, U, Z, X, W, \tau, L) = \prod_{k=1}^{\infty} p(U_k \mid \alpha)\, p(\Phi_k \mid m_0, \beta_0, J_0, \eta_0) \prod_{i=1}^{N} p(z_i \mid U)\, p(x_i \mid \Phi_{z_i}) \prod_{j=1}^{J} p(\mathrm{vecp}\{W_j\} \mid \sigma_0^w)\, p(\tau_j \mid \sigma_0^{\tau}) \prod_{t=1}^{T} p(l_t \mid x_{a_t}, x_{b_t}, W_{j_t}, \tau_{j_t}). \tag{2}$$
The conditional distributions are defined as follows:
$$\begin{aligned}
p(U_k \mid \alpha) &= \mathrm{Beta}(U_k; 1, \alpha) & p(z_i = k \mid U) &= U_k \prod_{l=1}^{k-1} (1 - U_l) \\
p(x_i \mid \Phi_{z_i}) &= \mathrm{Normal}(x_i; \mu_{z_i}, \Sigma_{z_i}) & p(x_i \mid \sigma_0^x) &= \prod_d \mathrm{Normal}([x_i]_d; 0, \sigma_0^x) \\
p(\mathrm{vecp}\{W_j\} \mid \sigma_0^w) &= \prod_{d_1 \le d_2} \mathrm{Normal}([W_j]_{d_1 d_2}; 0, \sigma_0^w) & p(\tau_j \mid \sigma_0^{\tau}) &= \mathrm{Normal}(\tau_j; 0, \sigma_0^{\tau}) \\
p(\Phi_k \mid m_0, \beta_0, J_0, \eta_0) &= \text{Normal-Wishart}(\Phi_k; m_0, \beta_0, J_0, \eta_0) &&
\end{aligned} \tag{3}$$
where $(\sigma_0^x, \sigma_0^{\tau}, \sigma_0^w, \alpha, m_0, \beta_0, J_0, \eta_0)$ are fixed hyper-parameters. Our model is similar to that of [9], which is used to model binary relational data. Salient differences include our use of a logistic rather than a Gaussian likelihood, and our enforcement of the symmetry of $W_j$. In the next section, we develop an efficient deterministic inference algorithm to accommodate much larger data sets than the sampling algorithm used in [9].

3.2 Approximate Inference

Exact posterior inference in this model is intractable, since computing it involves integrating over variables with complex dependencies. We therefore develop an inference algorithm based on the Variational Bayes method [10]. The high-level idea is to work with a factorized proxy posterior distribution that does not model the full complexity of interactions between variables; it instead represents a single mode of the true posterior. Because this distribution is factorized, integrations involving it become tractable.
We define the proxy distribution
$$q(\Phi, U, Z, X, W, \tau) = \prod_{k=K+1}^{\infty} p(U_k \mid \alpha)\, p(\Phi_k \mid m_0, \beta_0, J_0, \eta_0) \prod_{k=1}^{K} q(U_k)\, q(\Phi_k) \prod_{i=1}^{N} q(z_i)\, q(x_i) \prod_{j=1}^{J} q(\mathrm{vecp}\{W_j\})\, q(\tau_j) \tag{4}$$
using parametric distributions of the following form:
$$\begin{aligned}
q(U_k) &= \mathrm{Beta}(U_k; \xi_{k,1}, \xi_{k,2}) & q(\Phi_k) &= \text{Normal-Wishart}(m_k, \beta_k, J_k, \eta_k) \\
q(x_i) &= \prod_d \mathrm{Normal}([x_i]_d; [\mu_i^x]_d, [\sigma_i^x]_d) & q(\tau_j) &= \mathrm{Normal}(\tau_j; \mu_j^{\tau}, \sigma_j^{\tau}) \\
q(z_i = k) &= q_{ik} & q(\mathrm{vecp}\{W_j\}) &= \prod_{d_1 \le d_2} \mathrm{Normal}([W_j]_{d_1 d_2}; [\mu_j^w]_{d_1 d_2}, [\sigma_j^w]_{d_1 d_2})
\end{aligned} \tag{5}$$
To handle the infinite number of mixture components, we follow the approach of [11], where we define variational distributions for the first K components and fix the remainder to their corresponding priors. $\{\xi_{k,1}, \xi_{k,2}\}$ and $\{m_k, \beta_k, J_k, \eta_k\}$ are the variational parameters associated with the k-th mixture component. $q(z_i = k) = q_{ik}$ form the factorized assignment distribution for item i. $\mu_i^x$ and $\sigma_i^x$ are variational mean and variance parameters associated with data item i's embedding location. $\mu_j^w$ and $\sigma_j^w$ are symmetric matrix variational mean and variance parameters associated with worker j, and $\mu_j^{\tau}$ and $\sigma_j^{\tau}$ are variational mean and variance parameters for the bias $\tau_j$ of worker j. We use diagonal covariance Normal distributions over $W_j$ and $x_i$ to reduce the number of parameters that must be estimated.

Next, we define a utility function which allows us to determine the variational parameters. We use Jensen's inequality to develop a lower bound to the log evidence:
$$\log p(L \mid \sigma_0^x, \sigma_0^{\tau}, \sigma_0^w, \alpha, m_0, \beta_0, J_0, \eta_0) \ge \mathbb{E}_q \log p(\Phi, U, Z, X, W, \tau, L) + H\{q(\Phi, U, Z, X, W, \tau)\}, \tag{6}$$
where $H\{\cdot\}$ is the entropy of the proxy distribution, and the lower bound is known as the Free Energy. However, the Free Energy still involves intractable integration, because the normal distributions over the variables $W_j$, $x_i$, and $\tau_j$ are not conjugate [12] to the logistic likelihood term.
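This non-conjugacy is resolved in (7) below by the classic quadratic lower bound on the logistic function (the Jaakkola–Jordan bound). A quick numerical check of that bound in our own code; the Δ → 0 limit of λ is −1/8, which we special-case:

```python
import math

def g(x):
    """Logistic function g(x) = (1 + e^{-x})^{-1}."""
    return 1.0 / (1.0 + math.exp(-x))

def lam(d):
    """lambda(Delta) = [1/2 - g(Delta)] / (2 Delta); Delta -> 0 limit is -1/8."""
    return -0.125 if d == 0 else (0.5 - g(d)) / (2.0 * d)

def logistic_bound(l, A, d):
    """Left-hand side of (7): a Gaussian-form lower bound on
    p(l | A) = g(l * A), valid for every Delta = d and tight at d = |A|."""
    return g(d) * math.exp((l * A - d) / 2.0 + lam(d) * (A * A - d * d))

p = g(1.3)                              # p(l = +1) at activity A = 1.3
tight = logistic_bound(+1, 1.3, 1.3)    # exact at Delta = |A| (up to rounding)
loose = logistic_bound(+1, 1.3, 3.0)    # strictly below p for other Delta
```

Optimizing each Δt therefore amounts to closing the gap between `loose` and the true likelihood, which is what tightening the Free Energy in (8) does.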
We therefore locally approximate the logistic likelihood with an unnormalized Gaussian function lower bound, which is the left-hand side of the following inequality:
$$g(\Delta_t) \exp\{(l_t A_t - \Delta_t)/2 + \lambda(\Delta_t)(A_t^2 - \Delta_t^2)\} \le p(l_t \mid x_{a_t}, x_{b_t}, W_{j_t}, \tau_{j_t}). \tag{7}$$
This was adapted from [13] to our case of quadratic pairwise logistic regression. Here $g(x) = (1 + e^{-x})^{-1}$ and $\lambda(\Delta) = [1/2 - g(\Delta)]/(2\Delta)$. This expression introduces an additional variational parameter $\Delta_t$ for each label; these are optimized in order to tighten the lower bound. Our utility function is therefore
$$F = \mathbb{E}_q \log p(\Phi, U, Z, X, W, \tau) + H\{q(\Phi, U, Z, X, W, \tau)\} + \sum_t \Big[ \log g(\Delta_t) + \frac{l_t}{2} \mathbb{E}_q\{A_t\} - \frac{\Delta_t}{2} + \lambda(\Delta_t)\big(\mathbb{E}_q\{A_t^2\} - \Delta_t^2\big) \Big], \tag{8}$$
which is a tractable lower bound to the log evidence. Optimization of variational parameters is carried out in a coordinate ascent procedure, which exactly maximizes each variational parameter in turn while holding all others fixed. This is guaranteed to converge to a local maximum of the utility function. The update equations are given in an extended technical report [14]. We initialize the variational parameters by carrying out a layerwise procedure: first, we substitute a zero mean isotropic normal prior for the mixture model and perform variational updates over $\{\mu_i^x, \sigma_i^x, \mu_j^w, \sigma_j^w, \mu_j^{\tau}, \sigma_j^{\tau}\}$. Then we use $\mu_i^x$ as point estimates for $x_i$, update $\{m_k, \beta_k, J_k, \eta_k, \xi_{k,1}, \xi_{k,2}\}$, and determine the initial number of clusters K as in [11]. Finally, full joint inference updates are performed. Their computational complexity is $O(D^4 T + D^2 K N) = O(D^4 N V R M + D^2 K N)$.

3.3 Worker Confusion Analysis

As discussed in Section 3, we propose to understand a worker's behavior in terms of how he groups atomic clusters into his own notion of categories.
We are interested in the predicted confusion matrix $C_j$ for worker j, where
$$[C_j]_{k_1 k_2} = \mathbb{E}_q \Big\{ \int p(l = 1 \mid x_a, x_b, W_j, \tau_j)\, p(x_a \mid \Phi_{k_1})\, p(x_b \mid \Phi_{k_2})\, dx_a\, dx_b \Big\}, \tag{9}$$
which expresses the probability that worker j assigns data items sampled from atomic clusters $k_1$ and $k_2$ to the same cluster, as predicted by the variational posterior. This integration is intractable. We use the expected values $\mathbb{E}\{\Phi_{k_1}\} = \{m_{k_1}, J_{k_1}/\eta_{k_1}\}$ and $\mathbb{E}\{\Phi_{k_2}\} = \{m_{k_2}, J_{k_2}/\eta_{k_2}\}$ as point estimates in place of the variational distributions over $\Phi_{k_1}$ and $\Phi_{k_2}$. We then use Jensen's inequality and Eq. 7 again to yield a lower bound. Maximizing this bound over $\Delta$ yields
$$[\hat C_j]_{k_1 k_2} = g(\hat\Delta_{k_1 k_2 j}) \exp\{(m_{k_1}^{\top} \mu_j^w m_{k_2} + \mu_j^{\tau} - \hat\Delta_{k_1 k_2 j})/2\}, \tag{10}$$
which we use as our approximate confusion matrix, where $\hat\Delta_{k_1 k_2 j}$ is given in [14].

4 Experiments

We tested our method on four image data sets that have established "ground truth" categories, which were provided by a single human expert. These categories do not necessarily reflect the uniquely valid way to categorize the data set; however, they form a convenient baseline for the purpose of quantitative comparison. We used 1000 images from the Scenes data set from [15] to illustrate our approach (Figures 2, 3, and 4). We used 1354 images of birds from 10 species in the CUB-200 data set [16] (Table 1) and the 3845 images in the Stonefly9 data set [17] (Table 1) in order to compare our method quantitatively to other cluster aggregation methods. We used the 37794 images from the Attribute Discovery data set [18] in order to demonstrate our method on a large scale problem. We set the dimensionality of $x_i$ to D = 4 (since higher dimensionality yielded no additional clusters) and we iterated the update equations 100 times, which was enough for convergence. Hyperparameters were tuned once on synthetic pairwise labels that simulated 100 data points drawn from 4 clusters, and fixed during all experiments.
[Figure 2: Scene Dataset. Left: Mean locations $\mu_i^x$ projected onto the first two Fisher discriminant vectors, along with cluster labels superimposed at cluster means $m_k$; data items are colored according to their MAP label $\arg\max_k q_{ik}$ (average assignment entropy: 0.0029653 bits). Center: High confidence example images from the largest five clusters (rows correspond to clusters). Right: Confusion table between ground truth scene categories (bedroom, suburb, kitchen, living room, coast, forest, highway, inside city, mountain, open country, street, tall building, office) and inferred clusters. The first cluster includes three indoor ground truth categories, the second includes the forest and open country categories, and the third includes two urban categories. See Section 4.1 for a discussion and potential solution of this issue.]

[Figure 3: (Left of line) Worker confusion matrices for the 40 most active workers. (Right of line) Selected worker confusion matrices for the Scenes experiment. Worker 9 (left; 74 HITs) makes distinctions that correspond closely to the atomic clustering. Worker 45 (center; 15 HITs) makes coarser distinctions, often combining atomic clusters. Right: Worker 29's single HIT was largely random and does not align with the atomic clusters.]

Figure 2 (left) shows the mean locations of the data items $\mu_i^x$ learned from the Scene data set, visualized as points in Euclidean space. We find well separated clusters whose labels k are displayed at their mean locations $m_k$.
The points are colored according to $\arg\max_k q_{ik}$, which is item i's MAP cluster assignment. The cluster labels are sorted according to the number of assigned items, with cluster 1 being the largest. The axes are the first two Fisher discriminant directions (derived from the MAP cluster assignments). The clusters are well separated in the four dimensional space (the average assignment entropy $-\frac{1}{N}\sum_{ik} q_{ik} \log q_{ik}$, given in the figure title, shows little cluster overlap). Figure 2 (center) shows six high confidence examples from clusters 1 through 5. Figure 2 (right) shows the confusion table between the ground truth categories and the MAP clustering. We find that the MAP clusters often correspond to single ground truth categories, but they sometimes combine ground truth categories in reasonable ways. See Section 4.1 for a discussion and potential solution of this issue. Figure 3 (left of line) shows the predicted confusion matrices (Section 3.3) associated with the 40 workers that performed the most HITs. Each matrix captures the worker's tendency to label items from different atomic clusters as being in the same or different category. Figure 3 (right of line) shows in detail the predicted confusion matrices for three workers. We have sorted the MAP cluster indices to yield approximately block diagonal matrices, for ease of interpretation. Worker 9 makes relatively fine grained distinctions, including separating clusters 1 and 9, which correspond to the indoor categories and the bedroom scenes, respectively. Worker 45 combines clusters 5 and 8, which correspond to city street and highway scenes, in addition to grouping together all indoor scene categories. The finer grained distinctions made by worker 9 may be a result of performing more HITs (74) and seeing a larger number of images than worker 45, who performed 15 HITs. Finally (far right), we find a worker whose labels do not align with the atomic clustering.
Inspection of his labels shows that they were entered largely at random. Figure 4 (top left) shows the number of HITs performed by each worker according to descending rank. Figure 4 (bottom left) is a Pareto curve that indicates the percentage of the HITs performed by the most active workers. The Pareto principle (i.e., the law of the vital few) [19] roughly holds: the top 20% most active workers perform nearly 80% of the work. We wish to understand the extent to which the most active workers contribute to the results. For the purpose of quantitative comparisons, we use Variation of Information (VI) [20] to measure the discrepancy between the inferred MAP clustering and the ground truth categorization. VI is a metric with strong information theoretic justification that is defined between two partitions (clusterings) of a data set; smaller values indicate a closer match, and a VI of 0 means that two clusterings are identical.
[Figure 4: plot residue omitted; panels show completed HITs by worker rank, the Pareto curve, VI as workers are removed, and VI versus the sampling parameter V.]
Figure 4: Scene Data set. Left Top: Number of completed HITs by worker rank. Left Bottom: Pareto curve. Center: Variation of Information on the Scene data set as we incrementally remove top (blue) and bottom (red) ranked workers. The top workers are removed one at a time; bottom ranked workers are removed in groups so that both curves cover roughly the same domain. The most active workers do not dominate the results. Right: Variation of Information between the inferred clustering and the ground truth categories on the Scene data set, as a function of sampling parameter V. R is fixed at 5.

                         Bayes Crowd      Bayes Consensus   NMF [21]         Strehl & Ghosh [5]
    Birds [16] (VI)      1.103 ± 0.082    1.721 ± 0.07      1.500 ± 0.26     1.256 ± 0.001
    Birds (time)         18.5 min         18.1 min          27.9 min         0.93 min
    Stonefly9 [17] (VI)  2.448 ± 0.063    2.735 ± 0.037     4.571 ± 0.158    3.836 ± 0.002
    Stonefly9 (time)     100.1 min        98.5 min          212.6 min        46.5 min
Table 1: Quantitative comparison on Bird and Stonefly species categorization data sets. Quality is measured using Variation of Information between the inferred clustering and ground truth. Bayesian Crowdclustering outperforms the alternatives.

In Figure 4 (center) we incrementally remove the most active (blue) and least active (red) workers. Removal of workers corresponds to moving from right to left on the x-axis, which indicates the number of HITs used to learn the model. The results show that removing the large number of workers that do fewer HITs is more detrimental to performance than removing the relatively few workers that do a large number of HITs (given the same number of total HITs), indicating that the atomic clustering is learned from the crowd at large. In Figure 4 (right), we judge the impact of the sampling redundancy parameter V described in Section 2. We compare our approach (Bayesian crowdclustering) to two existing clustering aggregation methods from the literature: consensus clustering by nonnegative matrix factorization (NMF) [21] and the cluster ensembles method of Strehl and Ghosh (S&G) [5]. NMF and S&G require the number of inferred clusters to be provided as a parameter, and we set this to the number of ground truth categories.
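Variation of Information can be computed directly from the contingency table of two partitions; a minimal sketch (the label vectors below are illustrative, not the paper's data):

```python
import numpy as np

def variation_of_information(labels_a, labels_b):
    """VI(A, B) = H(A) + H(B) - 2 I(A; B), in nats.

    labels_a, labels_b: integer cluster labels for the same N items.
    VI is a metric on partitions; it is 0 iff the partitions are identical
    (up to relabeling of the clusters).
    """
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    # Joint distribution over (cluster in A, cluster in B).
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    p = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(p, (ia, ib), 1.0 / n)
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))  # Shannon entropy
    mi = h(pa) + h(pb) - h(p.ravel())
    return h(pa) + h(pb) - 2.0 * mi

print(variation_of_information([0, 0, 1, 1], [0, 0, 1, 1]))  # identical: 0.0
print(variation_of_information([0, 0, 1, 1], [0, 1, 0, 1]))  # independent partitions
```

For two independent uniform two-way partitions of four items the VI equals 2 ln 2, the sum of the two partition entropies.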
Even without the benefit of this additional information, our method (which automatically infers the number of clusters) outperforms the alternatives. To judge the benefit of modeling the characteristics of individual workers, we also compare against a variant of our model in which all HITs are treated as if they are performed by a single worker (Bayesian consensus). We find a significant improvement. We fix R = 5 in this experiment, but we find a similar ranking of methods at other values of R. However, the performance benefit of the Bayesian methods over the existing methods increases with R. We compare the four methods quantitatively on two additional data sets, with the results summarized in Table 1. In both cases, we instruct workers to categorize based on species. This is known to be a difficult task for non-experts. We set V = 6 and R = 5 for these experiments. Again, we find that Bayesian Crowdclustering outperforms the alternatives. A run time comparison is also given in Table 1. Bayesian Crowdclustering results on the Bird and Stonefly data sets are summarized in [14]. Finally, we demonstrate Bayesian crowdclustering on the large scale Attribute Discovery data set. This data set has four image categories: bags, earrings, ties, and women's shoes. In addition, each image is a member of one of 27 sub-categories (e.g., the bags category includes backpacks and totes as sub-categories). See [14] for summary figures.
We find that our method easily discovers the four categories. The subcategories are not discovered, likely due to limited context associated with HITs with size M = 36, as discussed in the next section. Runtime was approximately 9.5 hours on a six core Intel Xeon machine.
[Figure 5: image and confusion-table residue omitted; panels correspond to original clusters 1, 4, and 8.]
Figure 5: Divisive Clustering on the Scenes data set. Left: Confusion matrix and high confidence examples when running our method on images assigned to cluster one in the original experiment (Figure 2). The three indoor scene categories are correctly recovered. Center: Workers are unable to subdivide mountain scenes consistently and our method returns a single cluster. Right: Workers may find perceptually relevant distinctions not present in the ground truth categories. Here, the highway category is subdivided according to the number of cars present.

4.1 Divisive Clustering
As indicated by the confusion matrix in Figure 2 (right), our method results in clusters that correspond to reasonable categories. However, it is clear that the data often has finer categorical distinctions that go undiscovered. We conjecture that this is a result of the limited context presented to the worker in each HIT. When shown a set of M = 36 images consisting mostly of different types of outdoor scenes and a few indoor scenes, it is reasonable for a worker to consider the indoor scenes as a unified category.
However, if a HIT is composed purely of indoor scenes, a worker might draw finer distinctions between images of offices, kitchens, and living rooms. To test this conjecture, we developed a hierarchical procedure in which we run Bayesian crowdclustering independently on images that are MAP assigned to the same cluster in the original Scenes experiment. Figure 5 (left) shows the results on the indoor scenes assigned to original cluster 1. We find that when restricted to indoor scenes, the workers do find the relevant distinctions and our algorithm accurately recovers the kitchen, living room, and office ground truth categories. In Figure 5 (center) we ran the procedure on images from original cluster 4, which is composed predominantly of mountain scenes. The algorithm discovers one subcluster. In Figure 5 (right) the workers divide a cluster into three subclusters that are perceptually relevant: they have organized them according to the number of cars present.

5 Conclusions
We have proposed a method for clustering a large set of data by distributing small tasks to a large group of workers. It is based on a novel model of human clustering, as well as a novel machine learning method to aggregate worker annotations. Modeling both data item properties and the workers' annotation process and parameters appears to produce performance that is superior to existing clustering aggregation methods. Our study poses a number of interesting questions for further research: Can adaptive sampling methods (as opposed to our random sampling) reduce the number of HITs that are necessary to achieve high quality clustering? Is it possible to model the workers' tendency to learn over time as they perform HITs, rather than treating HITs independently as we do here? Can we model contextual effects, perhaps by modeling the way that humans "regularize" their categorical decisions depending on the number and variety of items present in the task?
Acknowledgements
This work was supported by ONR MURI grant 1015-G-NA-127, ARL grant W911NF-10-2-0016, and NSF grants IIS-0953413 and CNS-0932392.

References
[1] A. Sorokin and D. A. Forsyth. Utility data annotation with Amazon Mechanical Turk. In Internet Vision, pages 1–8, 2008.
[2] Sudheendra Vijayanarasimhan and Kristen Grauman. Large-scale live active learning: Training object detectors with crawled data and crowds. In CVPR, 2011.
[3] Peter Welinder, Steve Branson, Serge Belongie, and Pietro Perona. The multidimensional wisdom of crowds. In Neural Information Processing Systems Conference (NIPS), 2010.
[4] J. B. Kruskal. Multidimensional scaling by optimizing goodness-of-fit to a nonmetric hypothesis. Psychometrika, 29:1–29, 1964.
[5] Alexander Strehl and Joydeep Ghosh. Cluster ensembles—a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3:583–617, 2002.
[6] Stefano Monti, Pablo Tamayo, Jill Mesirov, and Todd Golub. Consensus clustering: A resampling-based method for class discovery and visualization of gene expression microarray data. Machine Learning, 52(1–2):91–118, 2003.
[7] A. Gionis, H. Mannila, and P. Tsaparas. Clustering aggregation. In ACM Transactions on Knowledge Discovery from Data, volume 1, 2007.
[8] A. Y. Lo. On a class of Bayesian nonparametric estimates: I. Density estimates. The Annals of Statistics, pages 351–357, 1984.
[9] I. Sutskever, R. Salakhutdinov, and J. B. Tenenbaum. Modelling relational data using Bayesian clustered tensor factorization. Advances in Neural Information Processing Systems (NIPS), 2009.
[10] Hagai Attias. A variational Bayesian framework for graphical models. In NIPS, pages 209–215, 1999.
[11] Kenichi Kurihara, Max Welling, and Nikos Vlassis. Accelerated variational Dirichlet process mixtures. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[12] J. M. Bernardo and A. F. M. Smith. Bayesian Theory.
Wiley, 1994.
[13] Tommi S. Jaakkola and Michael I. Jordan. A variational approach to Bayesian logistic regression models and their extensions, August 13, 1996.
[14] Ryan Gomes, Peter Welinder, Andreas Krause, and Pietro Perona. Crowdclustering. Technical Report CaltechAUTHORS:20110628-202526159, June 2011.
[15] Li Fei-Fei and Pietro Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, pages 524–531. IEEE Computer Society, 2005.
[16] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[17] G. Martinez-Munoz, N. Larios, E. Mortensen, W. Zhang, A. Yamamuro, R. Paasch, N. Payet, D. Lytle, L. Shapiro, S. Todorovic, et al. Dictionary-free categorization of very similar objects via stacked evidence trees. 2009.
[18] T. Berg, A. Berg, and J. Shih. Automatic attribute discovery and characterization from noisy web data. In Computer Vision–ECCV 2010, pages 663–676, 2010.
[19] V. Pareto. Cours d'économie politique. 1896.
[20] M. Meila. Comparing clusterings by the variation of information. In Learning Theory and Kernel Machines (COLT/Kernel 2003), volume 2777, page 173. Springer Verlag, 2003.
[21] Tao Li, Chris H. Q. Ding, and Michael I. Jordan. Solving consensus and semi-supervised clustering problems using nonnegative matrix factorization. In ICDM, pages 577–582. IEEE Computer Society, 2007.
Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs
Armen Allahverdyan* (Yerevan Physics Institute, Yerevan, Armenia; aarmen@yerphi.am)
Aram Galstyan (USC Information Sciences Institute, Marina del Rey, CA, USA; galstyan@isi.edu)

Abstract
We present an asymptotic analysis of Viterbi Training (VT) and contrast it with the more conventional Maximum Likelihood (ML) approach to parameter estimation in Hidden Markov Models. While the ML estimator works by (locally) maximizing the likelihood of the observed data, VT seeks to maximize the probability of the most likely hidden state sequence. We develop an analytical framework based on a generating-function formalism and illustrate it on an exactly solvable model of an HMM with one unambiguous symbol. For this particular model the ML objective function is continuously degenerate. The VT objective, in contrast, is shown to have only finite degeneracy. Furthermore, VT converges faster and results in sparser (simpler) models, thus realizing an automatic Occam's razor for HMM learning. In a more general scenario VT can be worse than ML, but it is still capable of correctly recovering most of the parameters.

1 Introduction
Hidden Markov Models (HMMs) provide one of the simplest examples of structured data observed through a noisy channel. The inference problems of HMMs naturally divide into two classes [20, 9]: i) recovering the hidden sequence of states given the observed sequence, and ii) estimating the model parameters (transition probabilities of the hidden Markov chain and/or conditional probabilities of observations) from the observed sequence. The first class of problems is usually solved via the maximum a posteriori (MAP) method and its computational implementation known as the Viterbi algorithm [20, 9]. For the parameter estimation problem, the prevailing method is maximum likelihood (ML) estimation, which finds the parameters by maximizing the likelihood of the observed data.
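The MAP decoding (Viterbi algorithm) mentioned above can be sketched as a short dynamic program; a minimal, illustrative implementation (the toy parameter values below are not from the paper):

```python
import numpy as np

def viterbi(x, P, Pi, p0):
    """MAP hidden path argmax_s P(s, x) for a discrete HMM, by dynamic programming.

    P[s, s2] : transition probability p(s|s2); Pi[o, s]: emission pi(o|s);
    p0       : distribution of the initial hidden state s_0.
    Returns the hidden path s_0, ..., s_N for observations x_1, ..., x_N.
    """
    logP, logPi = np.log(P), np.log(Pi)
    delta = np.log(p0)          # best log-probability of partial paths per end state
    back = []
    for o in x:
        cand = logPi[o][:, None] + logP + delta[None, :]  # [next state, prev state]
        back.append(np.argmax(cand, axis=1))
        delta = np.max(cand, axis=1)
    path = [int(np.argmax(delta))]   # backtrack the optimal path
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]

# Illustrative 2-state, 2-symbol HMM (columns of P and Pi sum to 1).
P = np.array([[0.9, 0.2], [0.1, 0.8]])
Pi = np.array([[0.7, 0.1], [0.3, 0.9]])
p0 = np.array([0.5, 0.5])
print(viterbi([0, 0, 1, 1, 1], P, Pi, p0))
```

Working in log space avoids underflow for long sequences; the cost is O(N L^2) for N observations and L hidden states.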
Since global optimization is generally intractable, in practice it is implemented through an expectation–maximization (EM) procedure known as the Baum–Welch algorithm [20, 9]. An alternative approach to parameter learning is Viterbi Training (VT), also known in the literature as segmental K-means, the Baum–Viterbi algorithm, classification EM, hard EM, etc. Instead of maximizing the likelihood of the observed data, VT seeks to maximize the probability of the most likely hidden state sequence. Maximizing the VT objective function is hard [8], so in practice it is implemented via EM-style iterations that alternate between calculating the MAP sequence and adjusting the model parameters based on the sequence statistics. It is known that VT lacks some of the desired features of ML estimation, such as consistency [17], and in fact can produce biased estimates [9]. However, it has been shown to perform well in practice, which explains its widespread use in applications such as speech recognition [16], unsupervised dependency parsing [24], grammar induction [6], and ion channel modeling [19]. It is generally assumed that VT is more robust and faster but usually less accurate, although for certain tasks it outperforms conventional EM [24].
* Currently at: Laboratoire de Physique Statistique et Systemes Complexes, ISMANS, Le Mans, France.
The current understanding of when and under what circumstances one method should be preferred over the other is not well established. For HMMs with continuous observations, Ref. [18] established an upper bound on the difference between the ML and VT objective functions, and showed that both approaches produce asymptotically similar estimates when the dimensionality of the observation space is very large. Note, however, that this asymptotic limit is not very interesting, as it makes the structure imposed by the Markovian process irrelevant. A similar attempt to compare both approaches on discrete models (for stochastic context-free grammars) was presented in [23].
However, the established bound was very loose. Our goal here is to understand, both qualitatively and quantitatively, the difference between the two estimation methods. We develop an analytical approach based on generating functions for examining the asymptotic properties of both approaches. Previously, a similar approach was used for calculating the entropy rate of a hidden Markov process [1]. Here we provide a non-trivial extension of the methods that allows us to perform a comparative asymptotic analysis of ML and VT estimation. It is shown that both estimation methods correspond to certain free-energy minimization problems at different temperatures. Furthermore, we demonstrate the approach on a particular class of HMMs with one unambiguous symbol and obtain a closed-form solution to the estimation problem. This class of HMMs is sufficiently rich so as to include models where not all parameters can be determined from the observations, i.e., the model is not identifiable [7, 14, 9]. We find that for the considered model VT is the better option if the ML objective is degenerate (i.e., not all parameters can be obtained from observations). Namely, VT not only recovers the identifiable parameters but also provides a simple (in the sense that non-identifiable parameters are set to zero) and optimal (in the sense of MAP performance) solution. Hence, VT realizes an automatic Occam's razor for HMM learning. In addition, we show that the VT algorithm for this model converges faster than the conventional EM approach. Whenever the ML objective is not degenerate, VT leads to generally inferior results that, nevertheless, may be partially correct in the sense of recovering certain (though not all) parameters.

2 Hidden Markov Process
Let S = {S_0, S_1, S_2, ...} be a discrete-time, stationary Markov process with conditional probability

Pr[S_{k+l} = s_k | S_{k−1+l} = s_{k−1}] = p(s_k|s_{k−1}),  (1)

where l is an integer. Each realization s_k of the random variable S_k takes values 1, ..., L.
We assume that S is mixing: it has a unique stationary distribution p_st(s), Σ_{r=1}^{L} p(s|r) p_st(r) = p_st(s), that is established from any initial probability in the long-time limit. Let random variables X_i, with realizations x_i = 1, ..., M, be noisy observations of S_i: the (time-invariant) conditional probability of observing X_i = x_i given the realization S_i = s_i of the Markov process is π(x_i|s_i). Defining x ≡ (x_N, ..., x_1), s ≡ (s_N, ..., s_0), the joint probability of S, X reads

P(s, x) = T_{s_N s_{N−1}}(x_N) ... T_{s_1 s_0}(x_1) p_st(s_0),  (2)

where the L × L transfer matrix T(x), with matrix elements T_{s_i s_{i−1}}(x), is defined as

T_{s_i s_{i−1}}(x) = π(x|s_i) p(s_i|s_{i−1}).  (3)

X = {X_1, X_2, ...} is called a hidden Markov process. Generally, it is not Markov, but it inherits stationarity and mixing from S [9]. The probabilities for X can be represented as follows:

P(x) = Σ_{s s′} [T(x)]_{s s′} p_st(s′),  T(x) ≡ T(x_N) T(x_{N−1}) ... T(x_1),  (4)

where T(x) is a product of transfer matrices.

3 Parameter Estimation
3.1 Maximum Likelihood Estimation
The unknown parameters of an HMM are the transition probabilities p(s|s′) of the Markov process and the observation probabilities π(x|s); see (2). They have to be estimated from the observed sequence x. This is standardly done via the maximum-likelihood approach: one starts with some trial values p̂(s|s′), π̂(x|s) of the parameters and calculates the (log-)likelihood ln P̂(x), where P̂ means the probability (4) calculated at the trial values of the parameters. Next, one maximizes ln P̂(x) over p̂(s|s′) and π̂(x|s) for the given observed sequence x (in practice this is done via the Baum–Welch algorithm [20, 9]). The rationale of this approach is as follows. Provided that the length N of the observed sequence is long, and recalling that X is mixing (due to the analogous feature of S), we get probability-one convergence (law of large numbers) [9]:

ln P̂(x) → Σ_y P(y) ln P̂(y),  (5)

where the average is taken over the true probability P(...) that generated x.
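The transfer-matrix representation (4) is directly computable: the likelihood of an observed string is a product of the matrices T(x) applied to the stationary distribution. A minimal numeric sketch for a 2-state, 2-symbol HMM (parameter values are illustrative, not from the paper):

```python
import numpy as np
from itertools import product

# Illustrative 2-state, 2-symbol HMM; columns of P and Pi sum to 1.
P = np.array([[0.9, 0.2],   # P[s, s2] = p(s|s2)
              [0.1, 0.8]])
Pi = np.array([[0.7, 0.1],  # Pi[o, s] = pi(o|s)
               [0.3, 0.9]])

def transfer(o):
    # T_{s s2}(o) = pi(o|s) p(s|s2), Eq. (3).
    return Pi[o][:, None] * P

# Stationary distribution: P pst = pst (Perron eigenvector of P).
w, v = np.linalg.eig(P)
pst = np.real(v[:, np.argmax(np.real(w))])
pst /= pst.sum()

def likelihood(xs):
    # P(x) = sum_{s s2} [T(x_N) ... T(x_1)]_{s s2} pst(s2), Eq. (4).
    vec = pst.copy()
    for o in xs:
        vec = transfer(o) @ vec
    return float(vec.sum())

# Sanity check: likelihoods over all strings of length 3 sum to 1.
total = sum(likelihood(xs) for xs in product((0, 1), repeat=3))
print(total)
```

Since Σ_x T(x) equals the transition matrix P itself, the likelihoods of all strings of a fixed length sum to 1, which the check above confirms.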
Since the relative entropy is non-negative, Σ_x P(x) ln[P(x)/P̂(x)] ≥ 0, the global maximum of Σ_x P(x) ln P̂(x) as a function of p̂(s|s′) and π̂(x|s) is reached for p̂(s|s′) = p(s|s′) and π̂(x|s) = π(x|s). This argument is silent on how unique this global maximum is and how difficult it is to reach.

3.2 Viterbi Training
An alternative approach to parameter learning employs maximum a posteriori (MAP) estimation and proceeds as follows. Instead of maximizing the likelihood of the observed data (5), one tries to maximize the probability of the most likely sequence [20, 9]. Given the joint probability P̂(s, x) at trial values of the parameters, and given the observed sequence x, one estimates the generating state sequence s via maximizing the a posteriori probability

P̂(s|x) = P̂(s, x)/P̂(x)  (6)

over s. Since P̂(x) does not depend on s, one can maximize ln P̂(s, x). If the number of observations is sufficiently large, N → ∞, one can substitute max_s ln P̂(s, x) by its average over P(...) [see (5)] and instead maximize (over model parameters)

Σ_x P(x) max_s ln P̂(s, x).  (7)

To relate (7) to the free energy concept (see e.g. [2, 4]), we define an auxiliary (Gibbsian) probability

ρ̂_β(s|x) = P̂^β(s, x) / Σ_{s′} P̂^β(s′, x),  (8)

where β > 0 is a parameter. As a function of s (and for a fixed x), ρ̂_{β→∞}(s|x) concentrates on those s that maximize ln P̂(s, x):

ρ̂_{β→∞}(s|x) → (1/N) Σ_j δ[s, s̃^[j](x)],  (9)

where δ(s, s′) is the Kronecker delta, the s̃^[j](x) are the equivalent outcomes of the maximization, and N is the number of such outcomes. Further, define

F_β ≡ −(1/β) Σ_x P(x) ln Σ_s P̂^β(s, x).  (10)

Within statistical mechanics, Eqs. (8) and (10) refer to, respectively, the Gibbs distribution and free energy of a physical system with Hamiltonian H = −ln P(s, x) coupled to a thermal bath at inverse temperature β = 1/T [2, 4]. It is then clear that ML and Viterbi Training correspond to minimizing the free energy (10) at β = 1 and β = ∞, respectively.
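The finite-β distribution (8) interpolates between the posterior used by ML (β = 1) and the Viterbi point mass (β → ∞), and the free energy (10) grows with β. A small illustrative sketch (toy numbers for a single fixed observation, not the paper's model):

```python
import numpy as np

# Toy joint probabilities P(s, x) for one fixed observation x: four candidate
# hidden sequences s with these weights (illustrative only).
p_sx = np.array([0.05, 0.10, 0.25, 0.60])

def rho_beta(p, beta):
    # Gibbs distribution of Eq. (8): rho_beta(s|x) proportional to P(s, x)^beta.
    w = p ** beta
    return w / w.sum()

def free_energy(p, beta):
    # Per-observation term of Eq. (10): -(1/beta) * ln sum_s P(s, x)^beta.
    return -np.log((p ** beta).sum()) / beta

for beta in (1.0, 2.0, 10.0, 100.0):
    print(beta, rho_beta(p_sx, beta), free_energy(p_sx, beta))
# As beta grows, rho_beta concentrates on argmax_s P(s, x), and the free
# energy increases monotonically toward -ln max_s P(s, x).
```

The monotone growth of F_β in β is exactly why the ML value F_1 lower-bounds the Viterbi value F_∞.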
Note that β² ∂_β F_β = −Σ_x P(x) Σ_s ρ_β(s|x) ln ρ_β(s|x) ≥ 0, which yields F_1 ≤ F_∞.

3.3 Local Optimization
As we mentioned, global maximization of neither objective is feasible in the general case. Instead, in practice this maximization is locally implemented via an EM-type algorithm [20, 9]: for a given observed sequence x, and for some initial values of the parameters, one calculates the expected value of the objective function under the trial parameters (E-step), and adjusts the parameters to maximize this expectation (M-step). The resulting estimates of the parameters are now employed as new trial parameters, and the previous step is repeated. This recursion continues until convergence. For our purposes, this procedure can be understood as calculating certain statistics of the hidden sequence averaged over the Gibbs distribution (8). Indeed, let us introduce f_γ(s) ≡ e^{βγ Σ_{i=1}^{N} δ(s_{i+1}, a) δ(s_i, b)} and define

βF_β(γ) ≡ −Σ_x P(x) ln Σ_s P̂^β(s, x) f_γ(s).  (11)

Then, for instance, the (iterative) Viterbi estimates of the transition probabilities are given as follows:

P̃(S_{k+1} = a, S_k = b) = −∂_γ[F_∞(γ)]|_{γ→0}.  (12)

Conditional probabilities for observations are calculated similarly, via a different indicator function.

4 Generating Function
Note from (4) that both P(x) and P̂(x) are obtained as matrix products. For a large number of multipliers the behavior of such products is governed by the multiplicative law of large numbers. We now recall its formulation from [10]: for N → ∞ and x generated by the mixing process X there is probability-one convergence:

(1/N) ln ||T(x)|| → (1/N) Σ_y P(y) ln λ[T(y)],  (13)

where ||...|| is a matrix norm in the linear space of L × L matrices, and λ[T(x)] is the maximal eigenvalue of T(x). Note that (13) does not depend on the specific norm chosen, because all norms in a finite-dimensional linear space are equivalent; they differ by a multiplicative factor that disappears for N → ∞ [10]. Eqs. (4, 13) also imply Σ_x λ[T(x)] → 1.
Altogether, we calculate (5) via its probability-one limit

(1/N) Σ_x P(x) ln P̂(x) → (1/N) Σ_x λ[T(x)] ln λ[T̂(x)].  (14)

Note that the multiplicative law of large numbers is normally formulated for the maximal singular value. Its reformulation in terms of the maximal eigenvalue needs additional arguments [1]. Introducing the generating function

Λ^N(n, N) = Σ_x λ[T(x)] λ^n[T̂(x)],  (15)

where n is a non-negative number, and where Λ^N(n, N) denotes Λ(n, N) raised to the power N, one represents (14) as

(1/N) Σ_x λ[T(x)] ln λ[T̂(x)] = ∂_n Λ(n, N)|_{n=0},  (16)

where we took into account Λ(0, N) = 1, as follows from (15). The behavior of Λ^N(n, N) is better understood after expressing it via the zeta-function ξ(z, n) [1]:

ξ(z, n) = exp(−Σ_{m=1}^{∞} (z^m/m) Λ^m(n, m)),  (17)

where Λ^m(n, m) ≥ 0 is given by (15). Since for large N, Λ^N(n, N) → Λ^N(n) [this is the content of (13)], the zeta-function ξ(z, n) has a zero at z = 1/Λ(n):

ξ(1/Λ(n), n) = 0.  (18)

Indeed, for z close to (but smaller than) 1/Λ(n), the series Σ_{m=1}^{∞} (z^m/m) Λ^m(n, m) → Σ_{m=1}^{∞} [zΛ(n)]^m/m almost diverges, and one has ξ(z, n) → 1 − zΛ(n). Recalling that Λ(0) = 1 and taking n → 0 in 0 = (d/dn) ξ(1/Λ(n), n), we get from (16)

(1/N) Σ_x λ[T(x)] ln λ[T̂(x)] = ∂_n ξ(1, 0) / ∂_z ξ(1, 0).  (19)

For calculating −βF_β in (10) we have, instead of (19),

−βF_β/N = ∂_n ξ^[β](1, 0) / ∂_z ξ^[β](1, 0),  (20)

where ξ^[β](z, n) employs T̂^β_{s_i s_{i−1}}(x) = π̂^β(x|s_i) p̂^β(s_i|s_{i−1}) instead of T̂_{s_i s_{i−1}}(x) in (19). Though in this paper we restrict ourselves to the limit N → ∞, we stress that knowledge of the generating function Λ^N(n, N) allows one to analyze the learning algorithms for any finite N.
[Figure 1: state diagram residue omitted.]
Figure 1: The hidden Markov process (21–22) for ε = 0. Gray circles and arrows indicate the realizations and transitions of the internal Markov process; see (21). The light circles and black arrows indicate the realizations of the observed process.
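The multiplicative law of large numbers (13) can be checked numerically: for a long sampled observation sequence, the per-symbol growth rate of the top eigenvalue of the transfer-matrix product matches the log-likelihood rate. A sketch on an illustrative 2-state, 2-symbol HMM (parameters not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.2], [0.1, 0.8]])    # p(s|s2); columns sum to 1
Pi = np.array([[0.7, 0.1], [0.3, 0.9]])   # pi(o|s); columns sum to 1
T = [Pi[o][:, None] * P for o in (0, 1)]  # transfer matrices T(o), Eq. (3)
pst = np.array([2.0 / 3.0, 1.0 / 3.0])    # stationary distribution of P

# Sample a realization of the hidden chain and its observations.
N = 5000
s = rng.choice(2, p=pst)
x = []
for _ in range(N):
    s = rng.choice(2, p=P[:, s])
    x.append(rng.choice(2, p=Pi[:, s]))

# Accumulate the product T(x_N) ... T(x_1), rescaling to avoid underflow.
M = np.eye(2)
logscale = 0.0
for xi in x:
    M = T[xi] @ M
    c = M.sum()
    M /= c
    logscale += np.log(c)

# Per-symbol rates: top eigenvalue of the product vs the likelihood P(x).
rate_eig = (logscale + np.log(np.max(np.abs(np.linalg.eigvals(M))))) / N
rate_lik = (logscale + np.log((M @ pst).sum())) / N
print(rate_eig, rate_lik)
```

The two rates differ only by an O(1/N) boundary term, which is the content of replacing the norm (or likelihood) by the maximal eigenvalue in (13)–(14).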
5 Hidden Markov Model with One Unambiguous Symbol
5.1 Definition
Given an L-state Markov process S, the observed process X has two states, 1 and 2; see Fig. 1. All internal states besides one are observed as 2, while the internal state 1 produces, respectively, 1 and 2 with probabilities 1 − ε and ε. For L = 3 we obtain from (1): π(1|1) = 1 − π(2|1) = 1 − ε, π(1|2) = π(1|3) = 0, π(2|2) = π(2|3) = 1. Hence 1 is unambiguous: if it is observed, the unobserved process S was certainly in 1; see Fig. 1. The simplest example of such an HMM exists already for L = 2; see [12] for analytical features of the entropy in this case. We, however, describe in detail the L = 3 situation, since this case will be seen to be generic (in contrast to L = 2) and it allows straightforward generalizations to L > 3. The transition matrix (1) of a general L = 3 Markov process reads

P ≡ { p(s|s′) }_{s,s′=1}^{3} = ( p_0  q_1  r_1
                                 p_1  q_0  r_2
                                 p_2  q_2  r_0 ),   with ( p_0          ( 1 − p_1 − p_2
                                                            q_0     =     1 − q_1 − q_2
                                                            r_0 )         1 − r_1 − r_2 ),  (21)

where, e.g., q_1 = p(1|2) is the transition probability 2 → 1; see Fig. 1. The corresponding transfer matrices read from (3):

T(1) = (1 − ε) ( p_0  q_1  r_1
                 0    0    0
                 0    0    0  ),   T(2) = P − T(1).  (22)

Eq. (22) makes the reconstruction of the transfer matrices for L ≥ 4 straightforward. It should also be obvious that for all L only the first row of T(1) consists of non-zero elements. For ε = 0 we get from (22) the simplest example of an aggregated HMM, where several Markov states are mapped into one observed state. This model plays a special role in HMM theory, since it was employed in the pioneering study of the non-identifiability problem [7].

5.2 Solution of the Model
For this model ξ(z, n) can be calculated exactly, because T(1) has only one non-zero row.
Using the method outlined in the supplementary material (see also [1, 3]) we get

ξ(z, n) = 1 − z(t_0 t̂_0^n + τ τ̂^n) + Σ_{k=2}^{∞} [τ τ̂^n t̂_{k−2}^n t_{k−2} − t̂_{k−1}^n t_{k−1}] z^k,  (23)

where τ and τ̂ are the largest eigenvalues of T(2) and T̂(2), respectively, and

t_k ≡ ⟨1| T(1) T(2)^k |1⟩ = Σ_{α=1}^{L} τ_α^k ψ_α,  (24)
ψ_α ≡ ⟨1| T(1) |R_α⟩ ⟨L_α |1⟩,  ⟨1| ≡ (1, 0, ..., 0).  (25)

Here |R_α⟩ and ⟨L_α| are, respectively, the right and left eigenvectors of T(2), while τ_1, ..., τ_L (τ_L ≡ τ) are the eigenvalues of T(2). Eqs. (24, 25) obviously extend to the hatted quantities. We get from (23, 19):

ξ(1, n) = (1 − τ̂^n τ)(1 − Σ_{k=0}^{∞} t̂_k^n t_k),  (26)
∂_n ξ(1, 0)/∂_z ξ(1, 0) = Σ_{k=0}^{∞} t_k ln t̂_k / Σ_{k=0}^{∞} (k + 1) t_k.  (27)

Note that for ε = 0 the t_k are return probabilities to the state 1 of the L-state Markov process. For ε > 0 this interpretation does not hold, but t_k still has the meaning of a probability, as Σ_{k=0}^{∞} t_k = 1. Turning to equations (19, 27) for the free energy, we note that as a function of the trial values it depends on the following 2L parameters:

(τ̂_1, ..., τ̂_L, ψ̂_1, ..., ψ̂_L).  (28)

As a function of the true values, the free energy depends on the same 2L parameters (28) [without hats], though the concrete dependencies are different. For the studied class of HMMs there are at most L(L − 1) + 1 unknown parameters: L(L − 1) transition probabilities of the unobserved Markov chain, and one parameter ε coming from the observations. We checked numerically that the Jacobian of the transformation from the unknown parameters to the parameters (28) has rank 2L − 1. Any 2L − 1 parameters among (28) can be taken as independent ones. For L > 2 the number of effective independent parameters that affect the free energy is smaller than the number of parameters. So if the number of unknown parameters is larger than 2L − 1, none of them can be found explicitly. One can only determine the values of the effective parameters.
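The quantities t_k of Eq. (24) and the normalization Σ_k t_k = 1 are easy to check numerically for ε = 0; a sketch for L = 3 with illustrative transition probabilities (not from the paper):

```python
import numpy as np

# Illustrative L = 3 chain, eps = 0; see Eqs. (21-22). Columns of P sum to 1.
p1, p2 = 0.3, 0.2
q1, q2 = 0.4, 0.1
r1, r2 = 0.25, 0.35
P = np.array([[1 - p1 - p2, q1,           r1          ],
              [p1,          1 - q1 - q2,  r2          ],
              [p2,          q2,           1 - r1 - r2]])
T1 = np.zeros((3, 3))
T1[0] = P[0]            # for eps = 0, only the first row of T(1) is non-zero
T2 = P - T1

e1 = np.array([1.0, 0.0, 0.0])
def t(k):
    # t_k = <1| T(1) T(2)^k |1>, Eq. (24): probability of first return to state 1.
    return float(e1 @ T1 @ np.linalg.matrix_power(T2, k) @ e1)

total = sum(t(k) for k in range(200))
print(total, t(0), 1 - p1 - p2)   # total is close to 1; t(0) equals p_0
```

Since T(2) is strictly substochastic for an irreducible chain, t_k decays geometrically and the truncated sum converges quickly to 1.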
6 The Simplest Non-Trivial Scenario
The following example allows a full analytical treatment, but is generic in the sense that it contains all the key features of the more general situation given above (21). Assume that L = 3 and

q_0 = q̂_0 = r_0 = r̂_0 = 0,  ε = ε̂ = 0.  (29)

Note the following explicit expressions:

t_0 = p_0,  t_1 = p_1 q_1 + p_2 r_1,  t_2 = p_1 r_1 q_2 + p_2 q_1 r_2,  (30)
τ = τ_3 = √(q_2 r_2),  τ_2 = −τ,  τ_1 = 0,  (31)
ψ_3 − ψ_2 = t_1/τ,  ψ_3 + ψ_2 = t_2/τ².  (32)

Eqs. (30–32), with the obvious changes s_i → ŝ_i for every symbol s_i, hold for t̂_k, τ̂_k and ψ̂_k. Note a consequence of Σ_{k=0}^{2} p_k = Σ_{k=0}^{2} q_k = Σ_{k=0}^{2} r_k = 1:

τ²(1 − t_0) = 1 − t_0 − t_1 − t_2.  (33)

6.1 Optimization of F_1
Eqs. (27) and (30–32) imply

Σ_{k=0}^{∞} (k + 1) t_k = µ/(1 − τ²),  µ ≡ 1 − τ² + t_2 + (1 − t_0)(1 + τ²) > 0,  (34)
−µF_1/N = t_1 ln t̂_1 + t_2 ln t̂_2 + (1 − τ²) t_0 ln t̂_0 + (1 − t_0) τ² ln τ̂².  (35)

The free energy F_1 depends on three independent parameters t̂_0, t̂_1, t̂_2 [recall (33)]. Hence, minimizing F_1 we get t̂_i = t_i (i = 0, 1, 2), but we do not obtain a definite solution for the unknown parameters: any four numbers p̂_1, p̂_2, q̂_1, r̂_1 satisfying the three equations t_0 = 1 − p̂_1 − p̂_2, t_1 = p̂_1 q̂_1 + p̂_2 r̂_1, t_2 = p̂_1 r̂_1 (1 − q̂_1) + p̂_2 q̂_1 (1 − r̂_1) minimize F_1.

6.2 Optimization of F_∞
In deriving (35) we used no particular feature of {p̂_k}_{k=0}^{2}, {q̂_k}_{k=1}^{2}, {r̂_k}_{k=1}^{2}. Hence, as seen from (20), the free energy at β > 0 is recovered from (35) by equating its LHS to −βF_β/N and by taking in its RHS: t̂_0 → p̂_0^β, τ̂² → q̂_2^β r̂_2^β, t̂_1 → p̂_1^β q̂_1^β + p̂_2^β r̂_1^β, t̂_2 → p̂_1^β r̂_1^β q̂_2^β + p̂_2^β q̂_1^β r̂_2^β. The zero-temperature free energy reads from (35)

−µF_∞/N = (1 − τ²) t_0 ln t̂_0 + (1 − t_0) τ² ln τ̂² + t_1 ln max[p̂_1 q̂_1, p̂_2 r̂_1] + t_2 ln max[p̂_2 q̂_1 r̂_2, p̂_1 r̂_1 q̂_2].  (36)

We now minimize F_∞ over the trial parameters p̂_1, p̂_2, q̂_1, r̂_1. This is not what is done by the VT algorithm; see the discussion after (12). But at any rate both procedures aim to minimize the same target.
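The identity (33) follows from the normalization of the transition probabilities under the constraint (29) and is easy to confirm numerically; a minimal sketch with illustrative true parameters (not from the paper):

```python
import numpy as np

# Illustrative true parameters satisfying the constraints (29): q0 = r0 = 0.
p1, p2 = 0.3, 0.2
q1, r1 = 0.4, 0.25
q2, r2 = 1.0 - q1, 1.0 - r1   # q0 = 0 and r0 = 0 imply q2 = 1 - q1, r2 = 1 - r1

t0 = 1.0 - p1 - p2            # Eq. (30)
t1 = p1 * q1 + p2 * r1
t2 = p1 * r1 * q2 + p2 * q1 * r2
tau = np.sqrt(q2 * r2)        # Eq. (31)

# Identity (33): tau^2 (1 - t0) = 1 - t0 - t1 - t2.
lhs = tau**2 * (1.0 - t0)
rhs = 1.0 - t0 - t1 - t2
print(lhs, rhs)
```

Algebraically, both sides reduce to (p_1 + p_2)(1 − q_1)(1 − r_1), which is why the three quantities t_0, t_1, t_2 suffice as independent effective parameters in (35).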
The VT recursion for this model will be studied in Section 6.3; it leads to the same result. Minimizing F_∞ over the trial parameters produces four distinct solutions:

{σ̂_i}_{i=1}^{4} = {p̂_1 = 0, p̂_2 = 0, q̂_1 = 0, r̂_1 = 0}.  (37)

For each of these four solutions, t̂_i = t_i (i = 0, 1, 2) and F_1 = F_∞. The easiest way to get these results is to minimize F_∞ under the conditions t̂_i = t_i (for i = 0, 1, 2), obtain F_1 = F_∞, and then conclude that, due to the inequality F_1 ≤ F_∞, the conditional minimization led to the global minimization. The logic of (37) is that the unambiguous state tends to get detached from the ambiguous ones, since the probabilities nullified in (37) refer to transitions from or to the unambiguous state. Note that although minimizing either F_∞ or F_1 produces correct values of the independent variables t_0, t_1, t_2, in the present situation minimizing F_∞ is preferable, because it leads to the four-fold degenerate set of solutions (37) instead of a continuously degenerate set. For instance, if the solution with p̂_1 = 0 is chosen, we get for the other parameters

p̂_2 = 1 − t_0,  q̂_1 = t_2/(1 − t_0 − t_1),  r̂_1 = t_1/(1 − t_0).  (38)

Furthermore, a more elaborate analysis reveals that for each fixed set of correct parameters only one among the four solutions (37) provides the best value for the quality of the MAP reconstruction, i.e., for the overlap between the original and MAP-decoded sequences. Finally, we note that minimizing F_∞ allows one to get the correct values t_0, t_1, t_2 of the independent variables t̂_0, t̂_1 and t̂_2 only if their number is less than the number of unknown parameters. This is not a drawback, since once the number of unknown parameters is sufficiently small [less than four for the present case (29)] their exact values are obtained by minimizing F_1. Even then, the minimization of F_∞ can provide partially correct answers. Assume in (36) that the parameter r̂_1 is known, r̂_1 = r_1. Now F_∞ has three local minima, given by p̂_1 = 0, p̂_2 = 0 and q̂_1 = 0; cf. (37).
The minimum with p̂2 = 0 is the global one, and it allows one to obtain the exact values of two effective parameters: t̂0 = 1 − p̂1 = t0 and t̂1 = p̂1 q̂1 = t1. These effective parameters are recovered because they do not depend on the known parameter r̂1 = r1. The two other minima have greater values of F∞, and they allow one to recover only one effective parameter: t̂0 = 1 − p̂1 = t0. If, in addition to r̂1, q̂1 is also known, the two local minima of F∞ (p̂1 = 0 and p̂2 = 0) allow one to recover t̂0 = t0 only. In contrast, if p̂1 = p1 (or p̂2 = p2) is known exactly, there are three local minima again (p̂2 = 0, q̂1 = 0, r̂1 = 0), but now none of the effective parameters is equal to its true value: t̂i ≠ ti (i = 0, 1, 2).

6.3 Viterbi EM

Recall the description of the VT algorithm given after (12). For calculating P̃(Sk+1 = a, Sk = b) via (11, 12) we modify the transfer-matrix element in (15, 17) as T̂ab(k) → T̂ab(k) e^γ, which produces from (11, 12) for the MAP estimates of the transition probabilities

p̃1 = (t1 χ̂1 + t2 χ̂2) / (t1 + t2 + t0(1 − τ²)), p̃2 = 1 − t0 − p̃1, (39)
q̃1 = (t1 χ̂1 + t2(1 − χ̂2)) / (t1 χ̂1 + t2 + (1 − t0)τ²), q̃2 = 1 − q̃1, (40)
r̃1 = (t1(1 − χ̂1) + t2 χ̂2) / (t2 + t1(1 − χ̂1) + (1 − t0)τ²), r̃2 = 1 − r̃1, (41)

where χ̂1 ≡ p̂1^β q̂1^β / (p̂1^β q̂1^β + p̂2^β r̂1^β) and χ̂2 ≡ p̂1^β r̂1^β q̂2^β / (p̂1^β r̂1^β q̂2^β + p̂2^β r̂2^β q̂1^β). The β → ∞ limit of χ̂1 and χ̂2 is obvious: each of them is equal to 0 or 1, depending on the ratios p̂1 q̂1 / (p̂2 r̂1) and p̂1 r̂1 q̂2 / (p̂2 r̂2 q̂1). The EM approach amounts to starting with some trial values p̂1, p̂2, q̂1, r̂1 and using p̃1, p̃2, q̃1, r̃1 as new trial parameters (and so on). We see from (39–41) that the algorithm converges in just one step: (39–41) are equal to the parameters given by one of the four solutions (37); which one among the solutions (37) is selected depends on the initial trial parameters in (39–41). This recovers the correct effective parameters (30–32); e.g. cf. (38) with (39, 41) under χ̂1 = χ̂2 = 0.
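The one-step convergence of the VT update (39)–(41) can be illustrated numerically in the β → ∞ limit with χ̂1 = χ̂2 = 0: a single update lands exactly on the fixed point (38) with p̂1 = 0. A minimal sketch (the true-model values are ours, purely illustrative):

```python
import math

# One VT update, Eqs. (39)-(41), in the beta -> infinity limit with
# chi_hat1 = chi_hat2 = 0. The effective parameters t0, t1, t2 are
# generated from an arbitrary true model via Eq. (30).
p0, p1, p2 = 0.2, 0.5, 0.3
q1, r1 = 0.6, 0.4
t0 = p0
t1 = p1 * q1 + p2 * r1
t2 = p1 * r1 * (1 - q1) + p2 * q1 * (1 - r1)
tau2 = (1 - q1) * (1 - r1)           # tau^2 = q2 r2, Eq. (31)

chi1, chi2 = 0.0, 0.0                # beta -> infinity limit of chi_hat1, chi_hat2

# One VT step, Eqs. (39)-(41)
p_t1 = (t1 * chi1 + t2 * chi2) / (t1 + t2 + t0 * (1 - tau2))
q_t1 = (t1 * chi1 + t2 * (1 - chi2)) / (t1 * chi1 + t2 + (1 - t0) * tau2)
r_t1 = (t1 * (1 - chi1) + t2 * chi2) / (t2 + t1 * (1 - chi1) + (1 - t0) * tau2)

# The update lands exactly on the fixed point (38) with p_hat1 = 0:
assert math.isclose(p_t1, 0.0)
assert math.isclose(q_t1, t2 / (1 - t0 - t1))
assert math.isclose(r_t1, t1 / (1 - t0))
```

The equalities follow from the identity (33), which turns the denominators of (40) and (41) into 1 − t0 − t1 and 1 − t0 respectively when χ̂1 = χ̂2 = 0.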
Hence, VT converges in one step, in contrast to the Baum-Welch algorithm (which uses EM to locally minimize F1) and which, for the present model, clearly does not converge in one step. There may be a deeper point behind the one-step convergence that explains why in practice VT converges faster than the Baum-Welch algorithm [9, 21]: recall that, e.g., the Newton method for local optimization converges in exactly one step for quadratic functions, but more generally there is a class of functions on which it performs faster than (say) the steepest-descent method. Further research should show whether our situation is similar: VT works in just one step for this exactly solvable HMM model, which belongs to a class of models where VT generally performs faster than ML. We conclude this section by noting that the solvable case (29) is generic: its key results extend to the general situation defined above (21). We checked this fact numerically for several values of L. In particular, the minimization of F∞ nullifies as many trial parameters as necessary to express the remaining parameters via the independent effective parameters t0, t1, . . . . Hence for L = 3 and ϵ = 0 two such trial parameters are nullified; cf. the discussion around (28). If the true error probability ϵ ≠ 0, the trial value ϵ̂ is among the nullified parameters. Again, there is a discrete degeneracy in the solutions provided by minimizing F∞.

7 Summary

We presented a method for analyzing two basic techniques for parameter estimation in HMMs, and illustrated it on a specific class of HMMs with one unambiguous symbol. The virtue of this class of models is that it is exactly solvable, hence the sought quantities can be obtained in closed form via generating functions. This is a rare occasion, because characteristics of HMMs such as the likelihood or the entropy are notoriously difficult to calculate explicitly [1].
An important feature of the example considered here is that the set of unknown parameters is not completely identifiable in the maximum-likelihood sense [7, 14]. This corresponds to a zero eigenvalue of the Hessian of the ML (maximum-likelihood) objective function. In practice, one can have weaker degeneracies of the objective function, resulting in very small Hessian eigenvalues. This scenario occurs often in various models of physics and computational biology [11]. Hence, it is a drawback that the theory of HMM learning was developed assuming complete identifiability [5]. One of our main results is that, in contrast to the ML approach, which produces continuously degenerate solutions, VT results in a finitely degenerate solution that is sparse, i.e., some [non-identifiable] parameters are set to zero; furthermore, it converges faster. Note that sparsity can be a desirable feature in many practical applications. For instance, imposing sparsity on conventional EM-type learning has been shown to produce better results in part-of-speech tagging applications [25]. Whereas [25] had to impose sparsity via an additional penalty term in the objective function, in our case sparsity is a natural outcome of maximizing the likelihood of the best sequence. While our results were obtained for a class of exactly solvable models, it is plausible that they hold more generally. The fact that VT provides simpler and more definite solutions, among all choices of the parameters compatible with the observed data, can be viewed as a type of Occam's razor for parameter learning. Note finally that the statistical-mechanics intuition behind these results is that the a posteriori likelihood is the (negative) zero-temperature free energy of a certain physical system. Minimizing this free energy makes physical sense: this is the premise of the second law of thermodynamics, which ensures relaxation towards a more equilibrated state.
In that zero-temperature equilibrium state certain types of motion are frozen, which means nullifying the corresponding transition probabilities. In that way the second law relates to Occam's razor. Other connections of this type are discussed in [15].

Acknowledgments

This research was supported in part by the US ARO MURI grant No. W911NF0610094 and US DTRA grant HDTRA1-10-1-0086.

References

[1] A. E. Allahverdyan, Entropy of Hidden Markov Processes via Cycle Expansion, J. Stat. Phys. 133, 535 (2008).
[2] A. E. Allahverdyan and A. Galstyan, On Maximum a Posteriori Estimation of Hidden Markov Processes, Proc. of UAI (2009).
[3] R. Artuso, E. Aurell and P. Cvitanovic, Recycling of strange sets, Nonlinearity 3, 325 (1990).
[4] P. Baldi and S. Brunak, Bioinformatics, MIT Press, Cambridge, USA (2001).
[5] L. E. Baum and T. Petrie, Statistical inference for probabilistic functions of finite state Markov chains, Ann. Math. Stat. 37, 1554 (1966).
[6] J. M. Benedi and J. A. Sanchez, Estimation of stochastic context-free grammars and their use as language models, Comp. Speech and Lang. 19, pp. 249-274 (2005).
[7] D. Blackwell and L. Koopmans, On the identifiability problem for functions of finite Markov chains, Ann. Math. Statist. 28, 1011 (1957).
[8] S. B. Cohen and N. A. Smith, Viterbi training for PCFGs: Hardness results and competitiveness of uniform initialization, Procs. of ACL (2010).
[9] Y. Ephraim and N. Merhav, Hidden Markov processes, IEEE Trans. Inf. Th. 48, 1518-1569 (2002).
[10] L. Y. Goldsheid and G. A. Margulis, Lyapunov indices of a product of random matrices, Russ. Math. Surveys 44, 11 (1989).
[11] R. N. Gutenkunst et al., Universally Sloppy Parameter Sensitivities in Systems Biology Models, PLoS Computational Biology 3, 1871 (2007).
[12] G. Han and B. Marcus, Analyticity of entropy rate of hidden Markov chains, IEEE Trans. Inf. Th. 52, 5251 (2006).
[13] R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, New Jersey, USA, 1985).
[14] H. Ito, S. Amari, and K. Kobayashi, Identifiability of Hidden Markov Information Sources, IEEE Trans. Inf. Th. 38, 324 (1992).
[15] D. Janzing, On causally asymmetric versions of Occam's Razor and their relation to thermodynamics, arXiv:0708.3411 (2007).
[16] B. H. Juang and L. R. Rabiner, The segmental k-means algorithm for estimating parameters of hidden Markov models, IEEE Trans. Acoustics, Speech, and Signal Processing ASSP-38, no. 9, pp. 1639-1641 (1990).
[17] B. G. Leroux, Maximum-Likelihood Estimation for Hidden Markov Models, Stochastic Processes and Their Applications 40, 127 (1992).
[18] N. Merhav and Y. Ephraim, Maximum likelihood hidden Markov modeling using a dominant sequence of states, IEEE Transactions on Signal Processing, vol. 39, no. 9, pp. 2111-2115 (1991).
[19] F. Qin, Restoration of single-channel currents using the segmental k-means method based on hidden Markov modeling, Biophys J 86, 1488-1501 (2004).
[20] L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proc. IEEE 77, 257 (1989).
[21] L. J. Rodriguez and I. Torres, Comparative Study of the Baum-Welch and Viterbi Training Algorithms, Pattern Recognition and Image Analysis, Lecture Notes in Computer Science 2652/2003, 847 (2003).
[22] D. Ruelle, Statistical Mechanics, Thermodynamic Formalism (Reading, MA: Addison-Wesley, 1978).
[23] J. Sanchez, J. Benedi, and F. Casacuberta, Comparison between the inside-outside algorithm and the Viterbi algorithm for stochastic context-free grammars, in Adv. in Struct. and Synt. Pattern Recognition (1996).
[24] V. I. Spitkovsky, H. Alshawi, D. Jurafsky, and C. D. Manning, Viterbi Training Improves Unsupervised Dependency Parsing, in Proc. of the 14th Conference on Computational Natural Language Learning (2010).
[25] A. Vaswani, A. Pauls, and D. Chiang, Efficient optimization of an MDL-inspired objective function for unsupervised part-of-speech tagging, in Proc. ACL (2010).
Environmental statistics and the trade-off between model-based and TD learning in humans

Dylan A. Simon, Department of Psychology, New York University, New York, NY 10003, dylex@nyu.edu
Nathaniel D. Daw, Center for Neural Science and Department of Psychology, New York University, New York, NY 10003, nathaniel.daw@nyu.edu

Abstract

There is much evidence that humans and other animals utilize a combination of model-based and model-free RL methods. Although it has been proposed that these systems may dominate according to their relative statistical efficiency in different circumstances, there is little specific evidence, especially in humans, as to the details of this trade-off. Accordingly, we examine the relative performance of different RL approaches under situations in which the statistics of reward are differentially noisy and volatile. Using theory and simulation, we show that model-free TD learning is relatively most disadvantaged in cases of high volatility and low noise. We present data from a decision-making experiment manipulating these parameters, showing that humans shift learning strategies in accord with these predictions. The statistical circumstances favoring model-based RL are also those that promote a high learning rate, which helps explain why, in psychology, the distinction between these strategies is traditionally conceived in terms of rule-based vs. incremental learning.

1 Introduction

There are many suggestions that humans and other animals employ multiple approaches to learned decision making [1]. Precisely delineating these approaches is key to understanding human decision systems, especially since many problems of behavioral control such as addiction have been attributed to partial failures of one component [2]. In particular, understanding the trade-offs between approaches in order to bring them under experimental control is critical for isolating their unique contributions and ultimately correcting maladaptive behavior.
Psychologists primarily distinguish between declarative rule learning and more incremental learning of stimulus-response (S–R) habits across a broad range of tasks [3, 4]. They have shown that large problem spaces, probabilistic feedback (as in the weather prediction task), and difficult-to-verbalize rules (as in information integration tasks from category learning) all seem to promote the use of a habit learning system [5, 6, 7, 8, 9]. The alternative strategies, which these same manipulations disfavor, are often described as imputing (inherently deterministic) ‘rules’ or ‘maps’, and are potentially supported by dissociable neural systems also involved in memory for one-shot episodes [10]. Neuroscientists studying rats have focused on more specific tasks that test whether animals are sensitive to changes in the outcome contingency or value of actions. For instance, under different task circumstances or following different brain lesions, rats are more or less willing to continue working for a devalued food reward [11]. In terms of reinforcement learning (RL) theories, such evidence has been proposed to reflect a distinction between parallel systems for model-based vs. model-free RL [12, 13]: a world model permits updating a policy following a change in food value, while model-free methods preclude this. Intuitively, S–R habits correspond well to the policies learned by TD methods such as actor/critic [14, 15], and rule-based cognitive planning strategies seem to mirror model-based algorithms. However, the implication that this distinction fundamentally concerns the use or non-use of a world model in representation and algorithm seems somewhat at odds with the conception in psychology. Specifically, neither the gradation of update (i.e., incremental vs. abrupt) nor the nature of representation (i.e., verbalizable rules) posited in the declarative system seems obviously related to the model-use distinction.
Although there have been some suggestions about how episodic memory may support TD learning [16], a world model as conceived in RL is typically inherently probabilistic, so as to support computing expected action values in stochastic environments, and thus must be learned by incrementally composing multiple experiences. It has also been suggested that episodic memory supports yet a third decision strategy distinct from both model-based and model-free [17], although there is no experimental evidence for such a triple dissociation or in particular for a separation between the putative episodic and model-based controllers. Here we suggest that an explanation for this mismatch may follow from the circumstances under which each RL approach dominates. It has previously been proposed that model-free and model-based reasoning should be traded off according to their relative statistical efficiency (proxied by uncertainty) in different circumstances [13]. In fact, what ultimately matters to a decision-maker is relative advantage in terms of reward [18]. Focusing specifically on task statistics, we extend the uncertainty framework to investigate under what circumstances the performance of a model-based system excels sufficiently to make it worthwhile. When the environment is completely static, TD is well known to converge to the optimal policy almost as quickly as model-based approaches [19], and so environmental change must be key to understanding its computational disadvantages. Primarily, model-free Monte Carlo (MC) methods such as TD are unable to propagate learned information around the state space efficiently, and in particular to generalize to states not observed in the current trajectory. This is not the only way in which MC methods learn slowly, however: they must also take samples of outcomes and average over them.
This process introduces additional noise which must be averaged over, as observational deviations resulting from the learner's own choice variability or transition stochasticity in the environment are confounded with variability in immediate rewards. In effect, this averaging imposes an upper bound on the learning rate needed to achieve reasonable performance and, correspondingly, on how well TD can keep up with task volatility. Conversely, the key benefit of model-based reasoning lies in its ability to react quickly to change, applying single-trial experience flexibly in order to construct values. We give a more formal argument for this observation in MDPs with dynamic rewards and static transitions, and find that the environments in which TD is most impaired are those with frequent changes and little noise. This suggests a strategy by which these two approaches should optimally trade off, which we test empirically using a decision task in humans while manipulating reward statistics. The high-volatility environments in which model-based learning dominates are also those in which a learning rate near one optimally applies. This may explain why a model-based system is associated with, or perhaps specialized for, rapid, declarative rule learning.

2 Theory

Model-free and model-based methods differ in their strategies for estimating action values from samples. One key disadvantage of Monte Carlo sampling of long-run values in an MDP, relative to model-based RL (in which immediate rewards are sampled and aggregated according to the sampled transition dynamics), is the need to average samples over both reward and state-transition stochasticity. This impairs its ability to track changes in the underlying MDP, with the disadvantage most pronounced in situations of high volatility and low noise.
Below, we develop the intuition for this disadvantage by applying Kalman filter analysis [20] to examine uncertainties in the simplest possible MDP that exhibits the issue. Specifically, consider a state with two actions, each associated with a pair of terminal states. Each action leads to one of the two states with equal probability, and each of the four terminal states is associated with a reward. The rewards are stochastic and diffusing, according to a Gaussian process, and the transitions are fixed. We consider the uncertainty and reward achievable as a function of the volatility and observation noise. We have made some simplifications here in order to make the intuition as clear as possible: that each trajectory has only a single state transition and reward; that in the steady state the static transition matrix has been fully learned; and that all analyzed distributions are Gaussian. We test some of these assumptions empirically in section 3 by showing that the same pattern holds in more complex tasks.

2.1 Model

In general, Xt(i), or just X, will refer to an actual sample of the ith variable (e.g., reward or value) at time t; X̄ refers to the (latent) true mean of X; and X̂ refers to estimates of X̄ made by the learning process. Given i.i.d. Gaussian diffusion processes on each value, Xt(i), described by

σ² = ⟨(X̄t+1(i) − X̄t(i))²⟩   (diffusion or volatility),   (1)
ε² = ⟨(Xt(i) − X̄t(i))²⟩   (observation noise),   (2)

the optimal learning rate that achieves the minimal uncertainty (from the Kalman gain) is

α* = (σ√(σ² + 4ε²) − σ²) / (2ε²).   (3)

Note that this function is monotonically increasing in σ and decreasing in ε (in particular, α* → 1 as ε → 0). When using this learning rate, the resulting asymptotic uncertainty (variance of estimates) is

U_X(α*) = ⟨(X̂ − X̄)²⟩ = (σ√(σ² + 4ε²) + σ²) / 2.   (4)

This, as expected, increases monotonically in both parameters. What often matters, however, is identifying the highest of multiple values, e.g., X̄(i) and X̄(j).
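Equations (3) and (4) are the steady state of a scalar Kalman filter for a Gaussian random walk, and this can be checked numerically by iterating the filter recursion. A minimal sketch (the σ and ε values are illustrative, not from the paper):

```python
import math

# Numerical check of Eqs. (3)-(4): iterate the scalar Kalman recursion for
# a Gaussian random walk (diffusion sigma, observation noise eps) to its
# steady state and compare with the closed forms.
sigma, eps = 0.1, 0.3

s = math.sqrt(sigma**2 + 4 * eps**2)
alpha_star = sigma * (s - sigma) / (2 * eps**2)   # Eq. (3), optimal learning rate
U_star = sigma * (s + sigma) / 2                  # Eq. (4), asymptotic uncertainty

V = 1.0                          # predictive (prior) variance of the estimate
for _ in range(10000):
    K = V / (V + eps**2)         # Kalman gain = effective learning rate
    P = (1 - K) * V              # posterior variance after one observation
    V = P + sigma**2             # diffuse forward to the next time step

assert math.isclose(K, alpha_star, rel_tol=1e-9)
assert math.isclose(V, U_star, rel_tol=1e-9)
```

The loop's fixed point solves V² = σ²V + σ²ε², whose positive root is exactly Eq. (4), and the corresponding gain is Eq. (3); setting ε → 0 in the closed form recovers α* → 1, as noted in the text.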
If X̄(i) − X̄(j) = d, the marginal value of the choice will be ±d. Given some uncertainty U, the probability of the correct choice, i.e., X̂(i) > X̂(j), compared to chance is

c(U) = 2 ∫_{−∞}^{∞} φ(x − d/√U) Φ(x) dx − 1,   (5)

where φ and Φ are the density and distribution functions of the standard normal. The resulting value of the choice is thus c(U)d. While c is flat at 1 as U → 0, it shrinks as Θ(1/√U) (since φ′(0) = 0). Our goal is now to determine c(U_Q) for each algorithm.

2.2 Value estimation

Consider the value of one of the actions in our two-action MDP, which leads to state A or B. Here, the true expected value of the choice is Q̄ = (R̄(A) + R̄(B))/2. If each reward is changing according to the Gaussian diffusion process described above, this will induce a change process on Q. A model-based system that has fully learned the transition dynamics will be able to estimate R̂(A) and R̂(B) separately, and thus take the expectation to produce Q̂. By assuming each reward is sampled equally often and adopting the appropriate effective σ, the resulting uncertainty of this expectation, U_MB, follows Equation 4, with X = Q. On the other hand, a Monte Carlo system that must take samples over transitions will observe Q = R(A) or Q = R(B). If |R̄(A) − R̄(B)| = d, it will observe an additional variance of d²/4 from the mixture of the two reward distributions. Treating this noise as Gaussian and adding it to the noise of the rewards, this decreases the optimal learning rate and increases the minimal uncertainty to

U_MC = ⟨(Q̂ − Q̄)²⟩ = (σ√(σ² + d² + 4ε²) + σ²)/2.   (6)

Other forms of stochasticity, whether from changing policies or more complex MDPs, will similarly inflate the effective noise term, albeit with a different form. Clearly U_MC ≥ U_MB. However, the more relevant measure is how these uncertainties translate into values [18]. For this we want to compare their relative success rates, c(U) from Equation 5, which scale directly to outcome.
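The comparison between U_MB and U_MC can be made concrete with a short numerical sketch. The closed form for c(U) below uses the standard Gaussian identity ∫φ(x − a)Φ(x)dx = Φ(a/√2), which the integral in Eq. (5) reduces to; the parameter values are illustrative, not from the paper:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def c(U, d=1.0):
    # Eq. (5) in closed form: with a = d / sqrt(U), the Gaussian identity
    # int phi(x - a) Phi(x) dx = Phi(a / sqrt(2)) gives
    # c(U) = 2 Phi(d / sqrt(2 U)) - 1.
    return 2.0 * Phi(d / math.sqrt(2.0 * U)) - 1.0

def U_min(sigma, noise2):
    # Minimal asymptotic uncertainty, Eqs. (4) and (6): noise2 is 4 eps^2
    # for the model-based learner and d^2 + 4 eps^2 for Monte Carlo.
    return (sigma * math.sqrt(sigma**2 + noise2) + sigma**2) / 2.0

sigma, eps, d = 0.2, 0.1, 1.0         # illustrative values
U_MB = U_min(sigma, 4 * eps**2)
U_MC = U_min(sigma, d**2 + 4 * eps**2)
assert U_MC > U_MB                    # sampling over transitions inflates uncertainty
advantage = c(U_MB, d) - c(U_MC, d)   # model-based success-rate advantage
assert advantage > 0.0
```

Sweeping σ and ε in this sketch reproduces the qualitative pattern described next: the advantage peaks at moderate volatility and is largest when the observation noise is small.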
The relative advantage of the model-based (MB) approach, c(U_MB) − c(U_MC), is plotted in Figure 1 for an arbitrary reward deviation d = 1.

[Figure 1: Difference in theoretical success rate between MB and MC, as a function of σ and ε.]

As expected, as either the volatility or noise parameter gets very large and the task gets harder, the uncertainty increases, performance approaches chance, and the relative advantage vanishes. However, for reasonable sizes of σ, the model-based advantage first increases to a peak as σ increases, and this peak is largest for small values of ε. No comparable increasing advantage is seen for model-based valuation as ε increases. While these techniques may also be extended more generally to other MDPs (see Supplemental Materials), the core observation presented above should illuminate the remainder of our discussion.

3 Simulation

To examine our qualitative predictions in a more realistic setting, we simulated randomly generated MDPs with 8 states, 2 actions, and transition and reward functions following the assumptions given in the previous section, with the addition of a contractive factor on rewards, φ, to prevent divergence:

R̄0(s, a) ∼ N(0, 1)   (stationary distribution, var R̄ = 1)
R̄t(s, a) = φ R̄t−1(s, a) + wt(s, a),   wt(s, a) ∼ N(0, σ²),   φ = √(1 − σ²)
Rt(s, a) = R̄t(s, a) + vt,   vt ∼ N(0, ε²)

Each transition had (at most) three possible outcomes, with probabilities 0.6, 0.3, and 0.1, assigned randomly with replacement from the 8 states. In order to avoid bias related to the exploration policy, each learning algorithm observed the same set of 1000 choices (chosen according to the objectively optimal policy, plus softmax decision noise), and the greedy policy resulting from its learned values was assessed according to the true R̄ values at that point.
The entire process was repeated 5000 times for each setting of the σ and ε parameters. We compared the performance of a model-based approach using value iteration with a fixed, optimal reward learning rate and transition counting (MB) against various model-free algorithms including Q(0), SARSA(0), and SARSA(1) (with fixed optimal learning rates), all using a discount factor of γ = 0.9. As expected, all learners showed a decrement in reward as σ increased. Figure 2 shows the difference in mean reward obtained between MB and SARSA(0). Q(0) and SARSA(1) showed the same pattern of results. The correspondence between the theoretical results and the simulation confirms that the theoretical findings do hold more generally, and we claim that the same underlying effects drive these results.

[Figure 2: Difference in reward obtained between MB and SARSA(0), as a function of σ and ε.]

4 Human behavior

Human subjects performed a decision task that represented an MDP with 4 states and 2 actions. The rewards followed the same contractive Gaussian diffusion process used in section 3, with the σ and ε parameters varied across subjects. We sought changes in the reliance on model-based and model-free strategies via regressions of past events onto current choices [21]. We hypothesized that model-based RL would be uniquely favored for large σ and small ε.

4.1 Methods

4.1.1 Participants

55 individuals from the undergraduate subject pool and the surrounding community participated in the experiment. Twelve received monetary compensation based on performance, and the remainder received credit fulfilling course requirements. All participants gave informed consent and the study was approved by the human subjects ethics board of the institute.

4.1.2 Task

Subjects viewed a graphical representation of a rotating disc with four pairs of colored squares equally spaced around the edge.
Each pair of squares constituted a state (s ∈ S = {N, E, S, W}) and had a unique distinguishable color and icon indicating direction (an arrow of some type). Each of the two squares in a state represented an action (a ∈ A = {L, R}), and had a left- or right-directed icon. During the task, only the top quadrant of the disc was visible at any time, and at decision time subjects could select the left or right action by pressing the left or right arrow button on a keyboard. Immediately after selecting an action, between zero and five coins (including a pie-fraction of a coin) appeared under the selected action square, representing a reward (R ∈ [0, 5]). After 600 ms, the disc began rotating and the reward became slowly obscured over the next 1150 ms, until a new pair of squares was at the top of the disc and the next decision could be entered, as seen in Figure 3. The state dynamics were determined by a fixed transition function (T : S × A → S) such that each action was most likely to lead to the next adjacent state along the edge of the disc (e.g., T(N, L) = W). To this, additional uniform outcome noise was added with probability 0.4. The reward distribution followed the same Gaussian process given in the previous sections, except shifted and trimmed. The parameters σ and ε were varied by condition.

T : S × A × S → [0, 1],   T(s, a, s′) = 0.7 if s′ = T(s, a), and 0.1 otherwise
Rt : S × A → [0, 5],   Rt(s, a) = min(max(R̄t(s, a) + vt + 2.5, 0), 5)

Figure 3: Abstract task layout and screen shot shortly after a choice is made (yellow box indicates visible display): Each state has two actions, right (red) and left (blue), which lead to the indicated state with 70% probability, and otherwise to another state at random. Each action also results in a reward of 0–5 coins.
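The reward process of the task can be sketched in a few lines. This is a hedged reconstruction of the generative process described above (the function name, seed, and choice of condition are ours):

```python
import math
import random

def simulate_rewards(sigma, eps, n_steps, seed=1):
    """Diffusing reward for a single (state, action) pair, following the
    task description: the latent mean contracts by phi = sqrt(1 - sigma^2)
    and diffuses with std sigma; the observed reward adds noise of std eps,
    is shifted by 2.5 and clipped to [0, 5]."""
    rng = random.Random(seed)
    phi = math.sqrt(1.0 - sigma**2)
    r_bar = rng.gauss(0.0, 1.0)      # draw from the stationary distribution
    observed = []
    for _ in range(n_steps):
        r_bar = phi * r_bar + rng.gauss(0.0, sigma)
        r = min(max(r_bar + rng.gauss(0.0, eps) + 2.5, 0.0), 5.0)
        observed.append(r)
    return observed

# One plausible experimental condition (sigma = 0.1225, eps = 0.158)
rs = simulate_rewards(sigma=0.1225, eps=0.158, n_steps=500)
assert len(rs) == 500
assert all(0.0 <= r <= 5.0 for r in rs)
```

The contraction φ keeps the stationary variance of the latent mean at 1, so the 2.5 shift centers typical rewards in the displayable 0–5 coin range.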
Each subject was first trained on the transition and reward dynamics of the task, including 16 observations of reward samples in which the latent value R̄ was shown, so as to get a feeling for both the change and noise processes. They then performed 500 choice trials in a single condition. Each subject was randomly assigned to one of 12 conditions, made up of σ ∈ {0.03, 0.0462, 0.0635, 0.0882, 0.1225, 0.1452} partially crossed with ε ∈ {0, 0.126, 0.158, 0.316, 0.474, 0.506}.

4.1.3 Analysis

Because they use different sampling strategies to estimate action values, TD and model-based RL differ in their predictions of how experience with states and rewards should affect subsequent choices. Here, we use a regression analysis to measure the extent to which choices at a state are influenced by recent previous events characteristic of either approach [21]. This approach has the advantage of making only very coarse assumptions about the learning process, as opposed to likelihood-based model fits, which may be biased by the specific learning equations assumed. By confining our analyses to the most recent samples we remain agnostic about free parameters with non-linear effects, such as learning rates and discount factors, and instead measure the relative strength of reliance on either sort of evidence directly using a general linear model. Regardless of the actual learning process, the most recent sample should have the strongest effect [22]. Accordingly, below we define explanatory variables that capture the most recently experienced reward sample that would be relevant to a choice under either Q(1) TD or model-based planning. The data for each subject were considered to be the sequence of states visited, St, actions taken, At, and rewards received, Rt. We define additional vector time sequences a, j, r, q, and p, each indexed by time and state and referred to generically as xt(s), with all x0 initially undefined.
For each observation we perform the following updates:

wt = [At = at(St)]   ('stay' vs. 'switch', boolean indicator)
at+1(St) = At   (last action)
jt+1(St) = [St+1 ≠ T(St, At)]   ('jump', unexpected transition)
rt+1(St) = Rt   (immediate reward)
qt+1(St−1) = Rt   (subsequent reward)
pt+1(St) = rt+1(T(St, At))   (expected reward)
xt+1(s) = xt(s) for all s ≠ St, for x = a, j, r, q, and p
dt+1 = |Rt − rt|   (change)

For convenience, we use xt to mean xt(St). Note that these vectors are step functions, such that each value is updated (xt ≠ xt−1) only when a relevant observation is made. They thus always represent the most recent relevant sample. Given the task dynamics, we can consider how a TD-based Q-learning system and a model-based planning system would compute values. Both take into account the last sample of the immediate reward, rt. They differ in how they account for the reward from the "next state": either, for Q(1), as qt (the last reward received from the state visited after the last visit to St) or, for model-based RL, as pt (the last sample of the reward at the true successor state). That is, while TD(1) will incorporate the reward observed following Rt, regardless of the state, a model-based system will instead consider the expected successor state [21]. While the latter two reward observations will be the same in some cases, they can disagree either after a jump trial (j, where the model-based and sample successor states differ), or when the successor state has more recently been visited from a different predecessor state (providing a reward sample known to the model-based system but not to TD). Given this, we can separate the effects of model-based and model-free learning by defining additional explanatory variables:

r′t = qt if qt = pt, and 0 otherwise (after mean correction)   (common)
q*t = qt − r′t   (unique to TD)
p*t = pt − r′t   (unique to model-based)

While r′ represents the cases where the two systems use the same reward observation, q* and p* are the residual rewards unique to each learning system.
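The bookkeeping above can be sketched in a few lines. This is a hedged reconstruction (the function name, trial format, and the omission of mean correction are ours; trials with any undefined variable are simply skipped, as in the analysis described below):

```python
# Sketch of the regressor bookkeeping: each vector x_t(s) holds the most
# recent relevant sample and is updated only at the state just visited.
def build_regressors(trials, T):
    """trials: list of (S, A, R, S_next) tuples; T(s, a): nominal successor."""
    a, j, r, q, p = {}, {}, {}, {}, {}
    rows = []
    prev_state = None
    for S, A, R, S_next in trials:
        if S in a and S in q and S in p:
            w = int(A == a[S])                        # 'stay' indicator
            r_common = q[S] if q[S] == p[S] else 0.0  # r' (common sample)
            rows.append((w, r[S], r_common,
                         q[S] - r_common,             # q*: unique to TD
                         p[S] - r_common))            # p*: unique to model-based
        a[S] = A                                      # last action
        j[S] = int(S_next != T(S, A))                 # 'jump' indicator
        r[S] = R                                      # immediate reward
        if prev_state is not None:
            q[prev_state] = R                         # subsequent reward
        if T(S, A) in r:
            p[S] = r[T(S, A)]                         # expected successor reward
        prev_state = S
    return rows
```

For example, cycling through four states with a deterministic successor function yields a first complete row only once the visited state has defined samples for a, r, q, and p, mirroring the omission of trials with undefined variables.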
We applied a mixed-effects logistic regression model using glmer [23] to predict 'stay' (wt = 0) trials. Any regressors of interest were mean-corrected before being entered into the design. Any trial in which one of the variables was undefined (e.g., the first visit to a state) was omitted. Also, we required that subjects have at least 50 (10%) switch trials to be included. First, we examined the main effects with a regression including fixed effects of interest for r, r′, q*, p*, and random effects of no interest for r, q, and p (without covariances). Next, we ran a regression adding all the interactions between the condition variables (σ, ε) and the specific reward effects (q*, p*). Finally, we additionally included the interaction between the change in reward on the previous trial (d) and the specific reward effects.

4.2 Results

A total of 5 subjects failed to meet the inclusion criterion of 50 switch trials (in each case because they pressed the same button on almost all trials), leaving 500 decision trials from each of 50 subjects. Subjects were observed to switch on 143 ± 55 trials (mean ± 1 SD). As designed, there were an average of 151 ± 17 'jump' trials per subject. The number of trials in which TD and model-based learning disagreed as to the most recent relevant sample of the next-state reward (r′ = 0) was 243 ± 26, and for 181 ± 19 of these, it was due to a more recent visit to the next state. The results of the regressions are shown in Table 1. Beyond the trivial effects of perseveration and reward, subjects showed a substantial amount of TD-type learning (q* > 0), and a smaller but significant amount of model-based lookahead (p* > 0). The interactions of these effects by condition demonstrated that subjects in higher drift conditions showed significantly less TD (σ×q* < 0) but unreduced model-based learning (σ×p*), possibly due to the relative disadvantage of TD with increased drift.
Similarly, higher noise conditions showed decreased model-based effects (ε × p* < 0) and no change in TD, which may be driven by the decreasing advantage of model-based learning. Note that, since the (nonsignificant) trend on the unaffected variable is positive, it is unlikely that either interaction effect results from a nonspecific change in performance or the "noisiness" of choices. Both of these effects are consistent with the pattern of differential reliance predicted by the theoretical analysis. The effect of change on the previous trial (d) provides one hint as to how subjects may adjust their reliance on either system dynamically: higher changes are indicative of noisier environments, which are thus expected to promote TD learning.

5 Discussion

We have shown that humans systematically adjust their reliance on learning approaches according to the statistics of the task, in a way qualitatively consistent with the theoretical considerations presented.

Table 1: Behavioral effects from nested regressions (each including preceding groups)

variable | effects | z     | p       | description
constant | mixed   | 11.61 | 10^-29  | perseveration
r        | mixed   | 14.99 | 10^-49  | last immediate r
r′       | mixed   | 5.55  | 10^-7   | common next r
q*       | mixed   | 6.40  | 10^-9   | TD(1) next-step r
p*       | mixed   | 2.51  | 0.012   | model predicted r
σ × q*   | fixed   | -4.07 | 0.00005 | TD with change
σ × p*   | fixed   | 0.67  | 0.50    | model with change
ε × q*   | fixed   | 0.99  | 0.32    | TD with noise
ε × p*   | fixed   | -2.11 | 0.035   | model with noise
d × q*   | mixed   | 1.63  | 0.10    | TD after change
d × p*   | mixed   | -3.06 | 0.0022  | model after change

Model-based methods, while always superior to TD in terms of performance, have the largest advantage in the presence of change paired with low environmental noise, because the Monte Carlo sampling strategy of TD interferes with tracking fast change. If the additional costs of model-based computation are fixed, this would motivate employing the system only in the regime where its advantage was most pronounced [18].
Consistent with this, human behavior exhibited relatively larger use of model-based RL with increased reward volatility and lesser use of it with increased observation noise. Of course, increasing either the volatility or noise parameters makes the task harder, and a decline in the marker for either sort of learning, as we observed, implies an overall decrement in performance. However, as the decrement was specific to one or the other explanatory variable, this may also be interpreted as a relative increase in use of the unaffected strategy. It is also worth noting that the linearized regression analysis examines only the effect of the most recent rewards, and the weighting of those relative to earlier samples will depend on the learning rate [22]. Thus a decrease in learning rate for either system may be confounded with a decrease in the strength of its effect in our analysis. However, while the optimal learning rates are also predicted to differ between conditions, these predictions are common to both systems, and it seems unlikely that each would differentially adjust its learning rate in response to a different manipulation. The characteristics associated with these learning systems in psychology can be seen as consequences of the relative strengths of model-based and model-free learning. If the model-based system is most useful in conditions of low noise and high volatility, then the appropriate learning rates for such a system are large: there is less need and utility to take multiple samples for the purpose of averaging. In this case of a high learning rate, model-based learning is closely aligned with single-shot episodic encoding, possibly subsuming such a system [17], as well as with learning categorical, verbalizable rules in the psychological sense, rather than averages. This may also explain the selective engagement of putatively model-based brain regions such as the dorsolateral prefrontal cortex in tasks with less stochastic outcomes [24].
Finally, this relates indirectly to the well-known phenomenon whereby dominance shifts from the model-based to the model-free controller with overtraining: a model-based system dominates early not simply because it learns faster, but because it is capable of better learning with fewer trials. The specific advantage of high learning rates may well motivate the brain to use a restricted model-based system, such as one with learning rate fixed to 1. Indeed (see Supplemental materials), this restriction has little detriment on the system's advantage over TD in the circumstances where it would be expected to be used, but causes drastic performance problems as observation noise increases, since averaging over samples is then required. Such a limitation might have useful computational advantages. Transition matrices learned this way, for instance, will be sparse: just records of trajectories. Such matrices admit both compressed representations and more efficient planning algorithms (e.g., tree search) as, in the fully deterministic case, only one trajectory must be examined. Conversely, evaluations in a model-based system are extremely costly when transitions are highly stochastic, since averages must be computed over exponentially many paths, while they add no cost to model-free learning.

Acknowledgments This work was supported by Award Number R01MH087882 from NIMH as part of the NSF/NIH CRCNS Program, and by a Scholar Award from the McKnight Foundation.

References
[1] Bernard W. Balleine, Nathaniel D. Daw, and John P. O'Doherty. Multiple forms of value learning and the function of dopamine. In Paul W. Glimcher, Colin F. Camerer, Ernst Fehr, and Russell A. Poldrack, editors, Neuroeconomics: Decision Making and the Brain, chapter 24, pages 367–387. Academic Press, London, 2008.
[2] Antoine Bechara. Decision making, impulse control and loss of willpower to resist drugs: a neurocognitive perspective. Nat Neurosci, 8(11):1458–63, 2005.
[3] Frederick Toates.
The interaction of cognitive and stimulus-response processes in the control of behaviour. Neuroscience & Biobehavioral Reviews, 22(1):59–83, 1997.
[4] Peter Dayan. Goal-directed control and its antipodes. Neural Netw, 22:213–219, 2009.
[5] Neal Schmitt, Bryan W. Coyle, and Larry King. Feedback and task predictability as determinants of performance in multiple cue probability learning tasks. Organ Behav Hum Perform, 16(2):388–402, 1976.
[6] Berndt Brehmer and Jan Kuylenstierna. Task information and performance in probabilistic inference tasks. Organ Behav Hum Perform, 22:445–464, 1978.
[7] B. J. Knowlton, L. R. Squire, and M. A. Gluck. Probabilistic classification learning in amnesia. Learn Mem, 1(2):106–120, 1994.
[8] W. Todd Maddox and F. Gregory Ashby. Dissociating explicit and procedural-learning based systems of perceptual category learning. Behavioural Processes, 66(3):309–332, 2004.
[9] W. Todd Maddox, J. Vincent Filoteo, Kelli D. Hejl, and A. David Ing. Category number impacts rule-based but not information-integration category learning: Further evidence for dissociable category-learning systems. J Exp Psychol Learn Mem Cogn, 30(1):227–245, 2004.
[10] R. A. Poldrack, J. Clark, E. J. Paré-Blagoev, D. Shohamy, J. Creso Moyano, C. Myers, and M. A. Gluck. Interactive memory systems in the human brain. Nature, 414(6863):546–550, 2001.
[11] Bernard W. Balleine and Anthony Dickinson. Goal-directed instrumental action: contingency and incentive learning and their cortical substrates. Neuropharmacology, 37(4–5):407–419, 1998.
[12] Kenji Doya. What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Netw, 12(7–8):961–974, 1999.
[13] Nathaniel D. Daw, Yael Niv, and Peter Dayan. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat Neurosci, 8(12):1704–1711, 2005.
[14] Ben Seymour, John P. O'Doherty, Peter Dayan, Martin Koltzenburg, Anthony K. Jones, Raymond J. Dolan, Karl J.
Friston, and Richard S. Frackowiak. Temporal difference models describe higher-order learning in humans. Nature, 429(6992):664–667, 2004.
[15] John P. O'Doherty, Peter Dayan, Johannes Schultz, Ralf Deichmann, Karl Friston, and Raymond J. Dolan. Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science, 304(5669):452–454, 2004.
[16] Adam Johnson and A. David Redish. Hippocampal replay contributes to within session learning in a temporal difference reinforcement learning model. Neural Netw, 18(9):1163–1171, 2005.
[17] Máté Lengyel and Peter Dayan. Hippocampal contributions to control: The third way. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 889–896. MIT Press, Cambridge, MA, 2008.
[18] Mehdi Keramati, Amir Dezfouli, and Payam Piray. Speed/accuracy trade-off between the habitual and the goal-directed processes. PLoS Comput Biol, 7(5):e1002055, 2011.
[19] Michael Kearns and Satinder Singh. Finite-sample convergence rates for q-learning and indirect algorithms. In Michael S. Kearns, Sara A. Solla, and David A. Cohn, editors, Advances in Neural Information Processing Systems 11, volume 11, pages 996–1002. MIT Press, Cambridge, MA, 1999.
[20] R. E. Kalman. A new approach to linear filtering and prediction problems. J Basic Eng, 82(1):35–45, 1960.
[21] Nathaniel D. Daw, S. J. Gershman, B. Seymour, P. Dayan, and R. J. Dolan. Model-based influences on humans' choices and striatal prediction errors. Neuron, 69(6):1204–1215, 2011.
[22] Brian Lau and Paul W. Glimcher. Dynamic response-by-response models of matching behavior in rhesus monkeys. J Exp Anal Behav, 84(3):555–579, 2005.
[23] Douglas Bates, Martin Maechler, and Ben Bolker. lme4: Linear mixed-effects models using S4 classes, 2011. R package version 0.999375-39.
[24] Saori C. Tanaka, Kazuyuki Samejima, Go Okada, Kazutaka Ueda, Yasumasa Okamoto, Shigeto Yamawaki, and Kenji Doya.
Brain mechanism of reward prediction under predictable and unpredictable environmental dynamics. Neural Netw, 19(8):1233–1241, 2006.
Learning Patient-Specific Cancer Survival Distributions as a Sequence of Dependent Regressors Chun-Nam Yu, Russell Greiner, Hsiu-Chin Lin Department of Computing Science University of Alberta Edmonton, AB T6G 2E8 {chunnam,rgreiner,hsiuchin}@ualberta.ca Vickie Baracos Department of Oncology University of Alberta Edmonton, AB T6G 1Z2 vickie.baracos@ualberta.ca Abstract An accurate model of patient survival time can help in the treatment and care of cancer patients. The common practice of providing survival time estimates based only on population averages for the site and stage of cancer ignores many important individual differences among patients. In this paper, we propose a local regression method for learning patient-specific survival time distribution based on patient attributes such as blood tests and clinical assessments. When tested on a cohort of more than 2000 cancer patients, our method gives survival time predictions that are much more accurate than popular survival analysis models such as the Cox and Aalen regression models. Our results also show that using patient-specific attributes can reduce the prediction error on survival time by as much as 20% when compared to using cancer site and stage only. 1 Introduction When diagnosed with cancer, most patients ask about their prognosis: “how long will I live”, and “what is the success rate of each treatment option”. Many doctors provide patients with statistics on cancer survival based only on the site and stage of the tumor. Commonly used statistics include the 5-year survival rate and median survival time, e.g., a doctor can tell a specific patient with early stage lung cancer that s/he has a 50% 5-year survival rate. 
In general, today's cancer survival rates and median survival times are estimated from a large group of cancer patients; while these estimates do apply to the population in general, they are not particularly accurate for individual patients, as they do not include patient-specific information such as age and general health conditions. While doctors can make adjustments to their survival time predictions based on these individual differences, it is better to directly incorporate these important factors explicitly in the prognostic models – e.g. by incorporating the clinical information, such as blood tests and performance status assessments [1], that doctors collect during the diagnosis and treatment of cancer. These data reveal important information about the state of the immune system and organ functioning of the patient, and therefore are very useful for predicting how well a patient will respond to treatments and how long s/he will survive. In this work, we develop machine learning techniques to incorporate this wealth of healthcare information to learn a more accurate prognostic model that uses patient-specific attributes. With improved prognostic models, cancer patients and their families can make more informed decisions on treatments, lifestyle changes, and sometimes end-of-life care. In survival analysis [2], the Cox proportional hazards model [3] and other parametric survival distributions have long been used to fit the survival time of a population. Researchers and clinicians usually apply these models to compare the survival time of two populations or to test for significant risk factors affecting survival; n.b., these models are not designed for the task of predicting survival time for individual patients. Also, as these models work with the hazard function instead of the survival function (see Section 2), they might not give well-calibrated predictions on survival rates for individuals.
In this work we propose a new method, multi-task logistic regression (MTLR), to learn patient-specific survival distributions. MTLR directly models the survival function by combining multiple local logistic regression models in a dependent manner. This allows it to handle censored observations and the time-varying effects of features naturally. Compared to survival regression methods such as the Cox and Aalen regression models, MTLR gives significantly more accurate predictions on survival rates over several datasets, including a large cohort of more than 2000 cancer patients. MTLR also reduces the prediction error on survival time by 20% when compared to the common practice of using the median survival time based on cancer site and stage. Section 2 surveys basic survival analysis and related works. Section 3 introduces our method for learning patient-specific survival distributions. Section 4 evaluates our learned models on a large cohort of cancer patients, and also provides additional experiments on two other datasets. 2 Survival Time Prediction for Cancer Patients In most regression problems, we know both the covariates and “outcome” values for all individuals. By contrast, it is typical to not know many of the outcome values in survival data. In many medical studies, the event of interest for many individuals (death, disease recurrence) might not have occurred within the fixed period of study. In addition, other subjects could move out of town or decide to drop out any time. Here we know only the date of the final visit, which provides a lower bound on the survival time. We refer to the time recorded as the “event time”, whether it is the true survival time, or just the time of the last visit (censoring time). Such datasets are considered censored. Survival analysis provides many tools for modeling the survival time T of a population, such as a group of stage-3 lung cancer patients. 
A basic quantity of interest is the survival function S(t) = P(T ≥ t), which is the probability that an individual within the population will survive longer than time t. Given the survival times of a set of individuals, we can plot the proportion of surviving individuals against time, as a way to visualize S(t). The plot of this empirical survival distribution is called the Kaplan-Meier curve [4] (Figure 1(left)). This is closely related to the hazard function λ(t), which describes the instantaneous rate of failure at time t:

λ(t) = lim_{∆t→0} P(t ≤ T < t + ∆t | T ≥ t) / ∆t,  and  S(t) = exp(−∫_0^t λ(u) du).

2.1 Regression Models in Survival Analysis

One of the most well-known regression models in survival analysis is Cox's proportional hazards model [3]. It assumes the hazard function λ(t) depends multiplicatively on a set of features ⃗x: λ(t | ⃗x) = λ0(t) exp(⃗θ · ⃗x). It is called the proportional hazards model because the hazard rates of two individuals with features ⃗x1 and ⃗x2 differ by a ratio exp(⃗θ · (⃗x1 − ⃗x2)). The function λ0(t), called the baseline hazard, is usually left unspecified in Cox regression. The regression coefficients ⃗θ are estimated by maximizing a partial likelihood objective, which depends only on the relative ordering of the survival times of individuals but not on their actual values. Cox regression is mostly used for identifying important risk factors associated with survival in clinical studies. It is typically not used to predict survival time since the hazard function is incomplete without the baseline hazard λ0. Although we can fit a non-parametric survival function for λ0(t) after the coefficients of Cox regression are determined [2], this requires a cumbersome 2-step procedure. Another weakness of the Cox model is its proportional hazards assumption, which restricts the effect of each feature on survival to be constant over time.
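The proportional hazards restriction can be seen concretely in a few lines. This is a toy numeric sketch of ours (with an arbitrary baseline hazard), not code from the paper:

```python
import numpy as np

# Under the Cox model, lambda(t|x) = lambda0(t) * exp(theta . x), so the
# hazards of two individuals differ by the constant factor exp(theta.(x1-x2)),
# and their survival curves S(t) = exp(-integral of hazard) can never cross.
t = np.linspace(0.0, 10.0, 200)
dt = t[1] - t[0]
lam0 = 0.1 + 0.02 * t                          # arbitrary positive baseline hazard
theta = np.array([0.8, -0.5])
x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
h1 = lam0 * np.exp(theta @ x1)                 # hazard of patient 1
h2 = lam0 * np.exp(theta @ x2)                 # hazard of patient 2
S1 = np.exp(-np.cumsum(h1) * dt)               # crude Riemann-sum integration
S2 = np.exp(-np.cumsum(h2) * dt)
```

Whatever λ0(t) is, the hazard ratio h1/h2 is the constant exp(⃗θ · (⃗x1 − ⃗x2)), so the higher-hazard patient's survival curve stays below the other's everywhere.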
There are alternatives to the Cox model that avoid the proportional hazards restriction, including the Aalen additive hazards model [5] and other time-varying extensions to the Cox model [6]. The Aalen linear hazard model assumes the hazard function has the form

λ(t | ⃗x) = ⃗θ(t) · ⃗x.   (1)

Figure 1: (Left) Kaplan-Meier curve: each point (x, y) means proportion y of the patients are alive at time x. Vertical line separates those who have died versus those who survive at t = 20 months. (Middle) Example binary encoding for patient 1 (uncensored) with survival time 21.3 months and for patient 2 (censored), with last visit time at 21.3 months. (Right) Example discrete survival function for a single patient predicted by MTLR.

While there are now many estimation techniques, goodness-of-fit tests, and hypothesis tests for these survival regression models, they are rarely evaluated on the task of predicting the survival time of individual patients. Moreover, it is not easy to choose between the various assumptions imposed by these models, such as whether the hazard rate should be a multiplicative or additive function of the features. In this paper we will test our MTLR method, which directly models the survival function, against Cox regression and Aalen regression as representatives of these survival analysis models. In machine learning, there are a few recently proposed regression techniques for survival prediction [7, 8, 9, 10].
These methods attempt to optimize specific loss functions or performance measures, which usually involves modifying the common regression loss functions to handle censored data. For example, Shivaswamy et al. [7] modified the support vector regression (SVR) loss function from max{|y − ⃗θ · ⃗x| − ϵ, 0} to max{(y − ⃗θ · ⃗x) − ϵ, 0}, where y is the time of censoring and ϵ is a tolerance parameter. In this way any prediction ⃗θ · ⃗x above the censoring time y is deemed consistent with observation and is not penalized. This class of direct regression methods usually gives very good results on the particular loss functions they optimize over, but could fail if the loss function is non-convex or difficult to optimize. Moreover, these methods only predict a single survival time value (a real number) without an associated confidence on the prediction, which is a serious drawback in clinical applications. Our MTLR model below is closely related to local regression models [11] and varying coefficient models [12] in statistics. Hastie and Tibshirani [12] described a very general class of regression models that allow the coefficients to change with another set of variables called "effect modifiers"; they also discussed an application of their model to overcome the proportional hazards assumption in Cox models. While we focus on predicting survival time, they instead focused on evaluating the time-varying effect of prognostic factors and worked with the rank-based partial likelihood objective.

3 Survival Distribution Modeling via a Sequence of Dependent Regressors

Consider a simpler classification task of predicting whether an individual will survive for more than t months. A common approach for this classification task is the logistic regression model [13], where we model the probability of surviving more than t months as

P_⃗θ(T ≥ t | ⃗x) = (1 + exp(⃗θ · ⃗x + b))^{−1}.

The parameter vector ⃗θ describes how the features ⃗x affect the chance of survival, with the threshold b.
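In code, this single-threshold model is a one-liner (our own sketch, not the authors' code):

```python
import numpy as np

def p_survive(theta, b, x):
    # P(T >= t | x) = (1 + exp(theta . x + b))^{-1}, as printed in the model
    # above; note the sign convention: a larger score theta . x + b means a
    # *lower* probability of surviving past the fixed threshold t.
    return 1.0 / (1.0 + np.exp(np.dot(theta, x) + b))
```

With ⃗θ = 0 and b = 0 every patient gets survival probability 1/2; features that push ⃗θ · ⃗x + b up push the survival probability toward 0.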
This task corresponds to a specific time point on the Kaplan-Meier curve, which attempts to discriminate those who survive against those who have died, based on the features ⃗x (Figure 1(left)). Equivalently, the logistic regression model can be seen as modeling the individual survival probabilities of cancer patients at the time snapshot t. Taking this idea one step further, consider modeling the probability of survival of patients at each of a vector of time points τ = (t1, t2, . . . , tm) – e.g., τ could be the 60 monthly intervals from 1 month up to 60 months. We can set up a series of logistic regression models for each of these:

P_⃗θi(T ≥ ti | ⃗x) = (1 + exp(⃗θi · ⃗x + bi))^{−1},  1 ≤ i ≤ m,   (2)

where ⃗θi and bi are time-specific parameter vectors and thresholds. The input features ⃗x stay the same for all these classification tasks, but the binary labels yi = [T ≥ ti] can change depending on the threshold ti. This particular setup allows us to answer queries about the survival probability of individual patients at each of the time snapshots {ti}, getting close to our goal of modeling a personal survival time distribution for individual patients. The use of time-specific parameter vectors naturally allows us to capture the effect of time-varying covariates, similar to many dynamic regression models [14, 12]. However, the outputs of these logistic regression models are not independent, as a death event at or before time ti implies death at all subsequent time points tj for all j > i. MTLR enforces the dependency of the outputs by predicting the survival status of a patient at each of the time snapshots ti jointly instead of independently. We encode the survival time s of a patient as a binary sequence y = (y1, y2, . . . , ym), where yi ∈ {0, 1} denotes the survival status of the patient at time ti, so that yi = 0 (no death event yet) for all i with ti < s, and yi = 1 (death) for all i with ti ≥ s (see Figure 1(middle)).
We denote such an encoding of the survival time s as y(s), and let yi(s) be the value at its ith position. Here there are m + 1 possible legal sequences of the form (0, 0, . . . , 0, 1, 1, . . . , 1), including the sequence of all '0's and the sequence of all '1's. The probability of observing the survival status sequence y = (y1, y2, . . . , ym) can be represented by the following generalization of the logistic regression model:

P_Θ(Y = (y1, y2, . . . , ym) | ⃗x) = exp(Σ_{i=1}^{m} yi (⃗θi · ⃗x + bi)) / Σ_{k=0}^{m} exp(f_Θ(⃗x, k)),

where Θ = (⃗θ1, . . . , ⃗θm), and f_Θ(⃗x, k) = Σ_{i=k+1}^{m} (⃗θi · ⃗x + bi) for 0 ≤ k ≤ m is the score of the sequence with the event occurring in the interval [tk, tk+1), before taking the logistic transform, with the boundary case f_Θ(⃗x, m) = 0 being the score for the sequence of all '0's. This is similar to the objective of conditional random fields [15] for sequence labeling, where the labels at each node are scored and predicted jointly. Therefore the log-likelihood of a set of uncensored patients with survival times s1, s2, . . . , sn and feature vectors ⃗x1, ⃗x2, . . . , ⃗xn is

Σ_{i=1}^{n} [ Σ_{j=1}^{m} yj(si)(⃗θj · ⃗xi + bj) − log Σ_{k=0}^{m} exp f_Θ(⃗xi, k) ].

Instead of directly maximizing this log-likelihood, we solve the following optimization problem:

min_Θ  (C1/2) Σ_{j=1}^{m} ∥⃗θj∥² + (C2/2) Σ_{j=1}^{m−1} ∥⃗θj+1 − ⃗θj∥² − Σ_{i=1}^{n} [ Σ_{j=1}^{m} yj(si)(⃗θj · ⃗xi + bj) − log Σ_{k=0}^{m} exp f_Θ(⃗xi, k) ]   (3)

The first regularizer over ∥⃗θj∥² ensures the norm of the parameter vector is bounded to prevent overfitting. The second regularizer ∥⃗θj+1 − ⃗θj∥² ensures the parameters vary smoothly across consecutive time points, and is especially important for controlling the capacity of the model when the time points become dense. The regularization constants C1 and C2, which control the amount of smoothing for the model, can be estimated via cross-validation.
As the above optimization problem is convex and differentiable, optimization algorithms such as Newton's method or quasi-Newton methods can be applied to solve it efficiently. Since we model the survival distribution as a series of dependent prediction tasks, we call this model multi-task logistic regression (MTLR). Figure 1(right) shows an example survival distribution predicted by MTLR for a test patient.

Table 1: Left: number of cancer patients for each site and stage in the cancer registry dataset. Right: features used in learning survival distributions

site \ stage     | 1  | 2   | 3   | 4
Bronchus & Lung  | 61 | 44  | 186 | 390
Colorectal       | 15 | 157 | 233 | 545
Head and Neck    | 6  | 8   | 14  | 206
Esophagus        | 0  | 1   | 1   | 63
Pancreas         | 1  | 3   | 0   | 134
Stomach          | 0  | 0   | 1   | 128
Other Digestive  | 0  | 1   | 0   | 77
Misc             | 1  | 0   | 3   | 123

basic: age, sex, weight gain/loss, BMI, cancer site, cancer stage
general wellbeing: no appetite, nausea, sore mouth, taste funny, constipation, pain, dental problem, dry mouth, vomit, diarrhea, performance status
blood test: granulocytes, LDH-serum, HGB, lymphocytes, platelet, WBC count, calcium-serum, creatinine, albumin

3.1 Handling Censored Data

Our multi-task logistic regression model can handle censoring naturally by marginalizing over the unobserved variables in a survival status sequence (y1, y2, . . . , ym). For example, suppose a patient with features ⃗x is censored at time sc, and tj is the closest time point after sc. Then all the sequences y = (y1, y2, . . . , ym) with yi = 0 for i < j are consistent with this censored observation (see Figure 1(middle)). The likelihood of this censored patient is

P_Θ(T ≥ tj | ⃗x) = Σ_{k=j}^{m} exp(f_Θ(⃗x, k)) / Σ_{k=0}^{m} exp(f_Θ(⃗x, k)),   (4)

where the numerator is the sum over all consistent sequences. While the sum in the numerator makes the log-likelihood non-concave, we can still learn the parameters effectively using EM or gradient descent with suitable initialization.
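Putting the pieces together, the score f_Θ(⃗x, k), the softmax over the m + 1 legal sequences, the implied survival curve, and the censored likelihood of Eq. (4) can all be sketched in a few lines. This is our own illustration, with our names and shapes (Theta an m × d weight matrix, b the m thresholds), not the authors' implementation:

```python
import numpy as np

def sequence_scores(Theta, b, x):
    # f(x, k) = sum_{i=k+1..m} (theta_i . x + b_i) for k = 0..m, with f(x, m) = 0.
    node = Theta @ x + b
    return np.concatenate([np.cumsum(node[::-1])[::-1], [0.0]])

def death_interval_probs(Theta, b, x):
    # Softmax over the m+1 legal sequences (0,..,0,1,..,1): entry k is the
    # probability that the death event falls in the interval [t_k, t_{k+1}).
    f = sequence_scores(Theta, b, x)
    e = np.exp(f - f.max())                     # shift for numerical stability
    return e / e.sum()

def survival_curve(Theta, b, x):
    # S(t_j) = P(T >= t_j | x) = sum_{k >= j} P(death in [t_k, t_{k+1})), j = 1..m.
    probs = death_interval_probs(Theta, b, x)
    return np.cumsum(probs[::-1])[::-1][1:]

def censored_log_likelihood(Theta, b, x, j):
    # Eq. (4): marginalize over all sequences consistent with censoring,
    # i.e. those with y_i = 0 for i < j, where t_j is the first time point
    # after the censoring time s_c.
    f = sequence_scores(Theta, b, x)
    m = f.max()                                 # log-sum-exp trick
    return (m + np.log(np.exp(f[j:] - m).sum())) - (m + np.log(np.exp(f - m).sum()))
```

With all weights zero, the m + 1 death intervals are equally likely and the survival curve is the uniform discrete staircase (m/(m+1), . . . , 1/(m+1)).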
In summary, the proposed MTLR model holds several advantages over classical regression models in survival analysis for survival time prediction. First, it directly models the more intuitive survival function rather than the hazard function (conditional rate of failure/death), avoiding the difficulties of choosing between different forms of hazards. Second, by modeling the survival distribution as the joint output of a sequence of dependent local regressors, we can capture the time-varying effects of features and handle censored data easily and naturally. Third, we will see that our model can give more accurate predictions on survival and better calibrated probabilities (see Section 4), which are important in clinical applications. Our goal here is not to replace these tried-and-tested models in survival analysis, which are very effective for hypothesis testing and prognostic factor discovery. Instead, we want a tool that can accurately and effectively predict an individual's survival time.

3.2 Relations to Other Machine Learning Models

The objective of our MTLR model is of the same form as a general CRF [15], but there are several important differences from typical applications of CRFs for sequence labeling. First, MTLR has no transition features (edge potentials) (Eq (3)); instead the dependencies between labels in the sequence are enforced implicitly by only allowing a linear number (m+1) of legal labelings. Second, in most sequence labeling applications of CRFs, the weights for the node potentials are shared across nodes to share statistical strength and improve generalization. Instead, MTLR uses a different weight vector ⃗θi at each node to capture the time-varying effects of input features. Unlike typical sequence labeling problems, the sequence construction of our model might be better viewed as a device to obtain a flexible discrete approximation of the survival distribution of individual patients.
Our approach can also be seen as an instance of multi-task learning [16], where the prediction of individual survival status at each time snapshot tj can be regarded as a separate task. The smoothing penalty ∥⃗θj − ⃗θj+1∥² is used by many multi-task regularizers to encourage weight sharing between related tasks. However, unlike typical multi-task learning problems, in our model the outputs of different tasks are dependent, to satisfy the monotone condition of a survival function.

4 Experiments

Our main dataset comes from the Alberta Cancer Registry obtained through the Cross Cancer Institute at the University of Alberta, which included 2402 cancer patients with tumors at different sites. About one third of the patients have censored survival times. Table 1 shows the groupings of cancer patients in the dataset and the patient-specific attributes for learning survival distributions. All these measurements are taken before the first chemotherapy. In all experiments we report five-fold cross validation (5CV) results, where MTLR's regularization parameters C1 and C2 are selected by another 5CV within the training fold, based on log likelihood. We pick the set of time points τ in these experiments to be the 100 points from the 1st percentile up to the 100th percentile of the event time (true survival time or censoring time) over all patients. Since all the datasets contain censored data, we first train an MTLR model using the event time (survival/censoring) as regression targets (no hidden variables). Then the trained model is used as the initial weights in the EM procedure in Eq (4) to train the final model. The Cox proportional hazards model is trained using the survival package in R, followed by the fitting of the baseline hazard λ0(t) using the Kalbfleisch-Prentice estimator [2]. The Aalen linear hazards model is trained using the timereg package. Both the Cox and the Aalen models are trained using the same set of 25 features.
As a baseline for this cancer registry dataset, we also provide a prediction based on the median survival time and survival probabilities of the subgroup of patients with cancer at a specific site and at a specific stage, estimated from the training fold.

4.1 Survival Rate Prediction

Our first evaluation focuses on the classification accuracy and calibration of predicted survival probabilities at different time thresholds. In addition to giving a binary prediction on whether a patient would survive beyond a certain time period, say 2 years, it is very useful to give an associated confidence of the prediction in terms of probabilities (survival rate). We use mean square error (MSE), also called the Brier score in this setting [17], to measure the quality of probability predictions. Previous work [18] showed that MSE can be decomposed into two components, one measuring calibration and one measuring discriminative power (i.e., classification accuracy) of the probability predictions. Table 2 shows the classification accuracy and MSE on the predicted probabilities of different models at 5, 12, and 22 months, which correspond to the 25% lower quantile, median, and 75% upper quantile of the survival time of all the cancer patients in the dataset. Our MTLR models produce predictions on survival status and survival probability that are much more accurate than the Cox and Aalen regression models. This shows the advantage of directly modeling the survival function instead of going through the hazard function when predicting survival probabilities. The Cox model and the Aalen model have classification accuracies and MSE that are similar to one another on this dataset. All regression models (MTLR, Cox, Aalen) beat the baseline prediction using median survival time based on cancer stage and site only, indicating that there is a substantial advantage to employing extra clinical information to improve survival time predictions given to cancer patients.
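As a quick reference, the Brier score at a fixed time threshold is just a mean squared error between predicted probabilities and 0/1 outcomes. This is a sketch of ours, not the authors' evaluation code:

```python
import numpy as np

def brier_score(p_survive, survived):
    # Mean squared error between predicted survival probabilities at one time
    # threshold and the observed 0/1 survival outcomes at that threshold.
    p = np.asarray(p_survive, dtype=float)
    y = np.asarray(survived, dtype=float)
    return float(np.mean((p - y) ** 2))
```

A perfectly calibrated and perfectly discriminating predictor scores 0; always predicting 0.5 scores 0.25 regardless of the outcomes.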
4.2 Visualization

Figure 2 visualizes the MTLR, Cox and Aalen regression models for two patients in a test fold. Patient 1 is a short survivor who lives for only 3 months from diagnosis, while patient 2 is a long survivor whose survival time is censored at 46 months. All three regression models (correctly) give a poor prognosis for patient 1 and a good prognosis for patient 2, but there are a few interesting differences when we examine the plots. The MTLR model is able to produce smooth survival curves of different shapes for the two patients (one convex, the other slightly concave), while the Cox model always predicts survival curves of similar shapes because of the proportional hazards assumption. Indeed, it is well known that the survival curves of two individuals never cross under a Cox model. For the Aalen model, we observe that the survival function is not (locally) monotonically decreasing. This is a consequence of the linear hazards assumption (Eq (1)), which allows the hazard to become negative and therefore the survival function to increase. This problem is less common when predicting survival curves at the population level, but can be more frequent for individual survival distribution predictions.

Table 2: Classification accuracy and MSE of survival probability predictions on the cancer registry dataset (standard error of 5CV shown in brackets). Bold numbers indicate significance with a paired t-test at the p = 0.05 level (this applies to all subsequent tables).

Accuracy    5 month       12 month      22 month
MTLR        86.5 (0.7)    76.1 (0.9)    74.5 (1.3)
Cox         74.5 (0.9)    59.3 (1.1)    62.8 (3.5)
Aalen       73.3 (1.2)    61.0 (1.7)    59.6 (3.6)
Baseline    69.2 (0.3)    56.2 (2.0)    57.0 (1.4)

MSE         5 month        12 month       22 month
MTLR        0.101 (0.005)  0.158 (0.004)  0.170 (0.007)
Cox         0.196 (0.009)  0.270 (0.008)  0.232 (0.016)
Aalen       0.198 (0.004)  0.278 (0.008)  0.288 (0.020)
Baseline    0.227 (0.012)  0.299 (0.011)  0.243 (0.012)

Figure 2: Predicted survival functions for two patients in the test set: MTLR (left), Cox (center), Aalen (right). Each panel plots P(survival) against months for patients 1 and 2. Patient 1 lives for 3 months while patient 2 has survival time censored at 46 months.

4.3 Survival Time Predictions Optimizing Different Loss Functions

Our third evaluation of the predicted survival distributions involves applying them to make predictions that minimize different clinically relevant loss functions. For example, if the patient is interested in knowing whether s/he has weeks, months, or years to live, then measuring errors in terms of the logarithm of the survival time is appropriate. In this case we can measure the loss using the absolute error (AE) over the log survival time

l_AE-log(p, t) = |log p − log t|,   (5)

where p and t are the predicted and true survival time respectively. In other scenarios, we might be more concerned about the difference between the predicted and true survival time. For example, as the cost of hospital stays and medication scales linearly with the survival time, the AE loss on the survival time could be appropriate, i.e.,

l_AE(p, t) = |p − t|.   (6)

We also consider an error measure called the relative absolute error (RAE):

l_RAE(p, t) = min{|(p − t)/p|, 1},   (7)

which is essentially AE scaled by the predicted survival time p, since p is known at prediction time in clinical applications. The loss is truncated at 1 to prevent large penalizations for small predicted survival times. Knowing that the average RAE of a predictor is 0.3 means we can expect the true survival time to be within 30% of the predicted time.
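The three evaluation losses in Eqs (5)-(7) are straightforward to implement; a minimal sketch (function names are ours, not from the paper's code):

```python
import math

def ae_log(p, t):
    """Absolute error on log survival time, Eq (5)."""
    return abs(math.log(p) - math.log(t))

def ae(p, t):
    """Absolute error on survival time, Eq (6)."""
    return abs(p - t)

def rae(p, t):
    """Relative absolute error, Eq (7): AE scaled by the prediction p,
    truncated at 1 to avoid huge penalties for tiny predictions."""
    return min(abs((p - t) / p), 1.0)
```

For instance, a prediction of 10 months against a true time of 12 months gives AE 2 and RAE 0.2.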
Given any of these loss models l above, we can make a point prediction h_l(⃗x) of the survival time for a patient with features ⃗x using the survival distribution P_Θ estimated by our MTLR model:

h_l(⃗x) = argmin_{p ∈ {t_1, ..., t_m}} Σ_{k=0}^{m} l(p, t_k) P_Θ(Y = y(t_k) | ⃗x),   (8)

where y(t_k) is the survival time encoding defined in Section 3. Table 3 shows the results of optimizing the three proposed loss functions using the individual survival distributions learned with MTLR, against other methods. For this evaluation, we also implemented the censored support vector regression (CSVR) proposed in [7, 8]. We train two CSVR models, one using the survival time and the other using the logarithm of the survival time as regression targets, which correspond to minimizing the AE and AE-log loss functions. For RAE we report the best result from the linear and log-scale CSVR in the table, since this non-convex loss is not minimized by either of them. As we do not know the true survival time for censored patients, we adopt the approach of not penalizing a prediction p for a patient with censoring time t if p > t, i.e., l(p, t) = 0 for the loss functions defined in Eqs (5) to (7) above. This is exactly the same censored training loss used in CSVR. Note that it is undesirable to test on uncensored patients only, as the survival time distributions are very different for censored and uncensored patients. For the Cox and Aalen models we report results using predictions based on the median, as optimizing for different loss functions using Eq (8) with the distributions predicted by these models gives inferior results.
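Eq (8) can be sketched as a direct minimization of the expected loss over the candidate time points; `times` and `probs` are hypothetical inputs standing for the t_k and P_Θ(Y = y(t_k) | ⃗x), and for simplicity the sketch sums over the supplied (time, probability) pairs:

```python
# Sketch of the point prediction in Eq (8): choose the candidate time p
# that minimizes the expected loss under the predicted distribution.
def point_prediction(times, probs, loss):
    return min(times,
               key=lambda p: sum(loss(p, t) * w
                                 for t, w in zip(times, probs)))
```

With the absolute-error loss this selects (a discretized version of) the median of the predicted distribution, which is why the median is a natural summary for AE.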
The results in Table 3 show that, although CSVR has the advantage of optimizing the loss function directly during training, our MTLR model is still able to make predictions that improve on CSVR, sometimes significantly. Moreover, MTLR is able to make survival time predictions with improved RAE, which is difficult for CSVR to optimize directly. MTLR also beats the Cox and Aalen models on all three loss functions. Compared to the baseline of predicting the median survival time by cancer site and stage, MTLR is able to employ extra clinical features to reduce the absolute error on survival time from 11.73 months to 9.58 months, and the error ratio between true and predicted survival time from being off by exp(0.70) ≈ 2.01 times to exp(0.56) ≈ 1.75 times. Both error measures are reduced by about 20%.

Table 3: Results of optimizing different loss functions on the cancer registry dataset

         MTLR          Cox           Aalen         CSVR          Baseline
AE       9.58 (0.11)   10.76 (0.12)  19.06 (2.04)  9.96 (0.32)   11.73 (0.62)
AE-log   0.56 (0.02)   0.61 (0.02)   0.76 (0.06)   0.56 (0.02)   0.70 (0.05)
RAE      0.40 (0.01)   0.44 (0.02)   0.44 (0.02)   0.44 (0.03)   0.53 (0.02)

Table 4: (Top) MSE of survival probability predictions on SUPPORT2 (left) and RHC (right). (Bottom) Results of optimizing different loss functions: SUPPORT2 (left), RHC (right)

SUPPORT2   14 day         58 day         252 day
MTLR       0.102 (0.002)  0.162 (0.002)  0.189 (0.004)
Cox        0.152 (0.003)  0.213 (0.004)  0.199 (0.006)
Aalen      0.141 (0.003)  0.195 (0.004)  0.195 (0.008)

RHC        8 day          27 day         163 day
MTLR       0.121 (0.002)  0.175 (0.005)  0.201 (0.004)
Cox        0.180 (0.005)  0.239 (0.004)  0.223 (0.004)
Aalen      0.176 (0.004)  0.229 (0.006)  0.221 (0.006)

SUPPORT2   AE            AE-log       RAE
MTLR       11.74 (0.35)  1.19 (0.03)  0.53 (0.01)
Cox        14.08 (0.49)  1.35 (0.03)  0.71 (0.01)
Aalen      14.61 (0.66)  1.28 (0.04)  0.65 (0.01)
CSVR       11.62 (0.15)  1.18 (0.02)  0.65 (0.01)

RHC        AE           AE-log       RAE
MTLR       2.90 (0.09)  1.07 (0.02)  0.49 (0.01)
Cox        3.08 (0.09)  1.10 (0.02)  0.53 (0.01)
Aalen      3.55 (0.85)  1.10 (0.06)  0.54 (0.01)
CSVR       2.96 (0.07)  1.09 (0.02)  0.58 (0.01)
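The censored evaluation rule described above (a prediction that exceeds a censoring time incurs no loss) can be sketched as a wrapper around any of the base losses (names are ours):

```python
# Sketch: for a censored patient, a prediction p beyond the censoring
# time t is consistent with the observation, so it incurs zero loss.
def censored_loss(loss, p, t, censored):
    if censored and p > t:
        return 0.0
    return loss(p, t)
```

This is the same treatment of censoring used for the CSVR training loss, applied here at evaluation time.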
4.4 Evaluation on Other Datasets

As additional evaluation, we also tested our model on the SUPPORT2 and RHC datasets (available at http://biostat.mc.vanderbilt.edu/wiki/Main/DataSets), which record the survival times of patients hospitalized with severe illnesses. SUPPORT2 contains over 9000 patients (32% censored) while RHC contains over 5000 patients (35% censored). Table 4 (top) shows the MSE of survival probability prediction over the SUPPORT2 and RHC datasets (we omit classification accuracy due to lack of space). The thresholds are again chosen at the 25% lower quantile, median, and 75% upper quantile of the population survival time. The MTLR model again produces significantly more accurate probability predictions than the Cox and Aalen regression models. Table 4 (bottom) shows the results of optimizing different loss functions for SUPPORT2 and RHC. The results are consistent with the cancer registry dataset, with MTLR beating the Cox and Aalen regressions while tying with CSVR on AE and AE-log.

5 Conclusions

We have presented a new method for learning patient-specific survival distributions. Experiments on a large cohort of cancer patients show that our model gives much more accurate predictions of survival rates than the Cox or Aalen survival regression models. Our results demonstrate that incorporating patient-specific features can significantly improve the accuracy of survival prediction over using cancer site and stage alone, with prediction errors reduced by as much as 20%. We plan to extend our model to an online system that can update survival predictions with new measurements. Our current data come from measurements taken when cancers are first diagnosed; it would be useful to update survival predictions for patients incrementally, based on new blood tests or physicians' assessments.

Acknowledgments

This work is supported by the Alberta Innovates Centre for Machine Learning (AICML) and NSERC.
We would also like to thank the Alberta Cancer Registry for the datasets used in this study.

References

[1] M.M. Oken, R.H. Creech, D.C. Tormey, J. Horton, T.E. Davis, E.T. McFadden, and P.P. Carbone. Toxicity and response criteria of the Eastern Cooperative Oncology Group. American Journal of Clinical Oncology, 5(6):649, 1982.
[2] J.D. Kalbfleisch and R.L. Prentice. The Statistical Analysis of Failure Time Data. Wiley, New York, 1980.
[3] D.R. Cox. Regression models and life-tables. Journal of the Royal Statistical Society, Series B (Methodological), 34(2):187–220, 1972.
[4] E.L. Kaplan and P. Meier. Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282):457–481, 1958.
[5] O.O. Aalen. A linear regression model for the analysis of life times. Statistics in Medicine, 8(8):907–925, 1989.
[6] T. Martinussen and T.H. Scheike. Dynamic Regression Models for Survival Data. Springer Verlag, 2006.
[7] P.K. Shivaswamy, W. Chu, and M. Jansche. A support vector approach to censored targets. In ICDM 2007, pages 655–660. IEEE, 2008.
[8] A. Khosla, Y. Cao, C.C.Y. Lin, H.K. Chiu, J. Hu, and H. Lee. An integrated machine learning approach to stroke prediction. In KDD, pages 183–192. ACM, 2010.
[9] V. Raykar, H. Steck, B. Krishnapuram, C. Dehing-Oberije, and P. Lambin. On ranking in survival analysis: Bounds on the concordance index. NIPS, 20, 2007.
[10] G.C. Cawley, N.L.C. Talbot, G.J. Janacek, and M.W. Peck. Sparse Bayesian kernel survival analysis for modeling the growth domain of microbial pathogens. IEEE Transactions on Neural Networks, 17(2):471–481, 2006.
[11] W.S. Cleveland and S.J. Devlin. Locally weighted regression: an approach to regression analysis by local fitting. Journal of the American Statistical Association, 83(403):596–610, 1988.
[12] T. Hastie and R. Tibshirani. Varying-coefficient models. Journal of the Royal Statistical Society, Series B (Methodological), 55(4):757–796, 1993.
[13] B. Efron. Logistic regression, survival analysis, and the Kaplan-Meier curve. Journal of the American Statistical Association, 83(402):414–425, 1988.
[14] D. Gamerman. Dynamic Bayesian models for survival data. Applied Statistics, 40(1):63–79, 1991.
[15] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289, 2001.
[16] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[17] G.W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.
[18] M.H. DeGroot and S.E. Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society, Series D (The Statistician), 32(1):12–22, 1983.
A Model for Temporal Dependencies in Event Streams

Asela Gunawardana, Microsoft Research, One Microsoft Way, Redmond, WA 98052, aselag@microsoft.com
Christopher Meek, Microsoft Research, One Microsoft Way, Redmond, WA 98052, meek@microsoft.com
Puyang Xu, ECE Dept. & CLSP, Johns Hopkins University, Baltimore, MD 21218, puyangxu@jhu.edu

Abstract

We introduce the Piecewise-Constant Conditional Intensity Model, a model for learning temporal dependencies in event streams. We describe a closed-form Bayesian approach to learning these models, and describe an importance sampling algorithm for forecasting future events using these models, with a proposal distribution based on Poisson superposition. We then use synthetic data, supercomputer event logs, and web search query logs to illustrate that our learning algorithm can efficiently learn nonlinear temporal dependencies, and that our importance sampling algorithm can effectively forecast future events.

1 Introduction

The problem of modeling temporal dependencies in temporal streams of discrete events arises in a wide variety of applications. For example, system error logs [14], web search query logs, the firing patterns of neurons [18], and gene expression data [8] can all be viewed as streams of events over time. Events carry both information about their timing and their type (e.g., the web query issued or the type of error logged), and the dependencies between events can be due to both their timing and their types. Modeling these dependencies is valuable for forecasting future events in applications such as system failure prediction for preemptive maintenance or forecasting web users' future interests for targeted advertising. We introduce the Piecewise-Constant Conditional Intensity Model (PCIM), a class of marked point processes [4] that can model the types and timing of events.
This model captures the dependence of each type of event on events in the past through a set of piecewise-constant conditional intensity functions. We use decision trees to represent these dependencies and give a conjugate prior for this model, allowing for closed-form computation of the marginal likelihood and parameter posteriors. Model selection then becomes a problem of choosing a decision tree. Decision tree induction can be done efficiently because of the closed form for the marginal likelihood. Forecasting can be carried out using forward sampling for arbitrary finite-duration queries. For episodic sequence queries, that is, queries that specify particular sequences of events in given future time intervals, we develop a novel approach for estimating the probability of rare queries, which we call the Poisson Superposition Importance Sampler (PSIS). We validate our learning and inference procedures empirically. Using synthetic data we show that PCIMs can correctly learn the underlying dependency structure of event streams, and that the PSIS leads to effective forecasting. We then use real supercomputer event log data to show that PCIMs can be learned more than an order of magnitude faster than Poisson Networks [15, 18], and that they have better test set likelihood. Finally, we show that PCIMs and the PSIS are useful in forecasting the future interests of real web search users.

2 Related Work

While graphical models such as Bayesian networks [2] and dependency networks [10] are widely used to model the dependencies between variables, they do not model temporal dependencies (see e.g., [8]). Dynamic Bayesian Networks (DBNs) [5, 9] allow modeling of temporal dependencies in discrete time. It is not clear how the timestamps in our data should be discretized in order to apply the DBN approach.
At a minimum, too slow a sampling rate results in a poor representation of the data, while too fast a sampling rate increases the number of samples, making learning and inference more costly. In addition, allowing long-term dependencies requires conditioning on multiple steps into the past, and choosing too fast a sampling rate increases the number of such steps that need to be conditioned on. Recent progress in modeling continuous-time processes includes Continuous Time Bayesian Networks (CTBNs) [12, 13], Continuous Time Noisy-Or (CT-NOR) [16], Poisson Cascades [17], and Poisson Networks [15, 18]. CTBNs are homogeneous Markov models of the joint trajectories of discrete finite variables, rather than models of event streams in continuous time [15]. In contrast, CT-NOR and Poisson Cascades model event streams, but require the modeler to choose a parametric form for the temporal dependencies. Simma et al. [16, 17] describe how this choice significantly impacts model performance, and depends strongly on the domain. In particular, the problem of model selection for CT-NOR and Poisson Cascades is unaddressed. PCIMs, in contrast to CT-NOR and Poisson Cascades, perform structure learning to discover how different events in the past affect future events. Poisson Networks, described in more detail below, are closely related to PCIMs, but PCIMs are over an order of magnitude faster to learn and can model nonlinear temporal dependencies.

3 Conditional Intensity Models

In this section, we define Conditional Intensity Models, introduce the class of Piecewise-Constant Conditional Intensity Models, and describe Poisson Networks. We assume that events of different types are distinguished by labels l drawn from a finite set L. An event is then composed of a non-negative time-stamp t and a label l. An event sequence is x = {(t_i, l_i)}_{i=1}^{n}, where 0 < t_1 < · · · < t_n. The history at time t of an event sequence x is the sub-sequence h(t, x) = {(t_i, l_i) | (t_i, l_i) ∈ x, t_i ≤ t}.
We write h_i for h(t_{i−1}, x) when it is clear from context which x is meant. By convention t_0 = 0. We define the ending time t(x) of an event sequence x as the time of the last event in x: t(x) = max({t : (t, l) ∈ x}), so that t(h_i) = t_{i−1}. A Conditional Intensity Model (CIM) is a set of non-negative conditional intensity functions indexed by label, {λ_l(t|x; θ)}_{l∈L}. The data likelihood for this model is

p(x|θ) = ∏_{l∈L} ∏_{i=1}^{n} λ_l(t_i|h_i; θ)^{1_l(l_i)} e^{−Λ_l(t_i|h_i; θ)}   (1)

where Λ_l(t|x; θ) = ∫_{−∞}^{t} λ_l(τ|x; θ) dτ for each event sequence x, and the indicator function 1_l(l′) is one if l′ = l and zero otherwise. The conditional intensities are assumed to satisfy λ_l(t|x; θ) = 0 for t ≤ t(x) to ensure that t_i > t_{i−1} = t(h_i). These modeling assumptions are quite weak. In fact, any distribution for x in which the timestamps are continuous random variables can be written in this form. For more details see [4, 6]. Despite the fact that the modeling assumptions are weak, these models offer a powerful approach for decomposing the dependencies of different event types on the past. In particular, this per-label conditional factorization allows one to model detailed label-specific dependence on past events.

3.1 Piecewise-Constant Conditional Intensity Models

Piecewise-Constant Conditional Intensity Models (PCIMs) are Conditional Intensity Models in which the conditional intensity functions are assumed to be piecewise-constant. As described below, this assumption allows efficient learning and inference. PCIMs are defined in terms of local structures S_l for each label l, which specify regions in time where the corresponding conditional intensity function is constant, and local parameters θ_l for each label, which specify the values taken in those regions.
Formally, PCIMs are defined by local structures S_l = (Σ_l, σ_l(t, x)) and local parameters θ_l = {λ_{ls}}_{s∈Σ_l}, where Σ_l denotes a set of discrete states, the λ_{ls} are non-negative constants, and σ_l denotes a state function that maps a time and an event sequence to Σ_l and is piecewise-constant in time for every event sequence. The conditional intensity functions are defined as λ_l(t|x) = λ_{ls} with s = σ_l(t, x), and thus are piecewise-constant. The resulting data likelihood can be written as

p(x|S, θ) = ∏_{l∈L} ∏_{s∈Σ_l} λ_{ls}^{c_{ls}(x)} e^{−λ_{ls} d_{ls}(x)}   (2)

where S = {S_l}_{l∈L}, θ = {θ_l}_{l∈L}, c_{ls}(x) is the number of times label l occurs in x when the state function for l maps to state s (i.e., Σ_i 1_l(l_i) 1_s(σ_l(t_i, h_i))), and d_{ls}(x) is the total duration during which the state function for l maps to state s in the data x (i.e., ∫_0^{t(x)} 1_s(σ_l(τ, h(τ, x))) dτ).

3.2 Poisson Networks

Poisson Networks [15, 18] are closely related to PCIMs. Given a basis set B of piecewise-constant real-valued feature functions f(t, x), a feature vector σ_l(t, x) is defined for each l by selecting component feature functions from B. The resulting σ_l(t, x) are piecewise-constant in time. The conditional intensity for l is given by the regression λ_l(t|x, θ) = e^{w_l · σ_l(t, x)} with parameter vector w_l. By convention, the component σ_{l,0}(t, x) = 1, so that w_{l,0} is a bias parameter. The resulting likelihood does not have a conjugate prior, and in our experiments we use iterative MAP parameter estimates under a Gaussian prior, and a Laplace approximation of the marginal likelihood for structure learning (i.e., feature selection) [15]. In our experiments, each f ∈ B is specified by a label l and a pair of time offsets 0 ≤ d_1 < d_2, and takes on the value log(1 + c_{l,d1,d2}(t, x)/(d_2 − d_1)), where c_{l,d1,d2}(t, x) is the number of times l occurs in x in the interval [t − d_2, t − d_1).

4 Learning PCIMs

In this section, we present an efficient learning algorithm for PCIMs.
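The sufficient statistics c_{ls}(x) and d_{ls}(x) of Eq (2), which the learning algorithm relies on, can be computed by a sweep over the change points of the state function. A minimal sketch for a single label with a one-window binary state s(t) = 1 iff an event occurred in [t − w, t) (this toy setup is ours, not the paper's code):

```python
def sufficient_stats(times, w, T):
    """Counts c_s and durations d_s over [0, T] for the binary state
    'an event occurred in [t - w, t)'. The state can only change at an
    event time or w after one, so we integrate segment by segment."""
    points = sorted({0.0, T} | {t for t in times if t < T}
                    | {t + w for t in times if t + w < T})

    def state(t):
        return int(any(t - w <= s < t for s in times))

    counts = {0: 0, 1: 0}
    durations = {0: 0.0, 1: 0.0}
    for a, b in zip(points, points[1:]):
        durations[state((a + b) / 2.0)] += b - a  # state is constant here
    for t in times:
        counts[state(t)] += 1  # state just before the event, per sigma_l(t_i, h_i)
    return counts, durations
```

With a full basis of window features the same sweep is done per decision-tree leaf, but the breakpoint structure is identical.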
We give a conjugate prior for the parameters θ which yields closed-form formulas for the parameter posteriors and the marginal likelihood of the data given a structure S. We then give a decision-tree-based learning algorithm that uses the closed-form marginal likelihood formula to learn the local structure S_l for each label.

4.1 Closed-Form Parameter Posterior and Marginal Likelihood

In general, computing parameter posteriors for likelihoods of the form of equation (1) is complicated. However, in the case of PCIMs, the Gamma distribution is a conjugate prior for λ_{ls}, despite the fact that the data likelihood of equation (2) is not a product of exponential densities (i.e., when c_{ls}(x) ≠ 1). The corresponding prior and posterior densities are given by

p(λ_{ls}|α_{ls}, β_{ls}) = (β_{ls}^{α_{ls}} / Γ(α_{ls})) λ_{ls}^{α_{ls}−1} e^{−β_{ls} λ_{ls}};   p(λ_{ls}|α_{ls}, β_{ls}, x) = p(λ_{ls}|α_{ls} + c_{ls}(x), β_{ls} + d_{ls}(x)).

Assuming the prior over θ is a product of such p(λ_{ls}|α_{ls}, β_{ls}), the marginal likelihood is

p(x|S) = ∏_{l∈L} ∏_{s∈Σ_l} γ_{ls}(x);   γ_{ls}(x) = (β_{ls}^{α_{ls}} / Γ(α_{ls})) · Γ(α_{ls} + c_{ls}(x)) / (β_{ls} + d_{ls}(x))^{α_{ls} + c_{ls}(x)}.

In our experiments, we use the point estimate λ̂_{ls} = (α_{ls} + c_{ls}(x)) / (β_{ls} + d_{ls}(x)), which is E[λ_{ls} | x].

4.2 Structure Learning with Decision Trees

In this section, we specify the set of possible structures in terms of a set of basis state functions, a set of decision trees built from them, and a greedy Bayesian model selection procedure for learning a structure. Finally, we describe the particular set of basis state functions we use in our experiments. We use B to denote the set of basis state functions f(t, x), each taking values in a basis state set Σ_f. Given B, we specify S_l through a decision tree whose interior nodes each have an associated f ∈ B and a child corresponding to each value in Σ_f. The per-label state set Σ_l is then the set of leaves in the tree. The state function σ_l(t, x) is computed by recursively applying the basis state functions in the tree until a leaf is reached.
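The marginal-likelihood factor γ_{ls}(x) from Section 4.1 is conveniently computed in log space for numerical stability; a sketch using the Gamma-function identities above (the function name is ours):

```python
import math

def log_gamma_ls(alpha, beta, c, d):
    """log of gamma_ls(x) = (beta^alpha / Gamma(alpha))
       * Gamma(alpha + c) / (beta + d)^(alpha + c),
    given prior (alpha, beta) and sufficient statistics (c, d)."""
    return (alpha * math.log(beta) - math.lgamma(alpha)
            + math.lgamma(alpha + c)
            - (alpha + c) * math.log(beta + d))
```

Working in log space matters here because c_{ls}(x) can reach the thousands in the experiments below, where Γ(α + c) overflows any floating-point representation.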
Note that the resulting mapping is a valid state function by construction. In order to carry out Bayesian model selection, we use a factored structural prior p(S) ∝ ∏_{l∈L} ∏_{s∈Σ_l} κ_{ls}. Since the prior and the marginal likelihood both factor over l, the local structures S_l can be chosen independently. We search for each S_l as follows. We begin with S_l being the trivial decision tree that maps all event sequences and times to the root, in which case λ_l(t|x) = λ_l. Given the current S_l, we consider candidates S′_l specified by choosing a leaf s ∈ Σ_l and a basis state function f ∈ B, and assigning f to s to obtain a set of new child leaves {s_1, · · · , s_m}, where m = |Σ_f|. Because the marginal likelihood factors over states, the gain in the posterior of the structure due to this split is

p(S′_l|x) / p(S_l|x) = κ_{ls_1} γ_{ls_1}(x) · · · κ_{ls_m} γ_{ls_m}(x) / (κ_{ls} γ_{ls}(x)).

The next structure S′_l is chosen by selecting the s and f with the largest gain. The search terminates when there is no gain larger than one. We note that the local structure representation and search can be extended from decision trees to decision graphs in a manner analogous to [3]. In our experiments, we wish to learn how events depend on the timing and type of prior events. We therefore use a set of time- and label-specific basis state functions. In particular, we use binary basis state functions f_{l′,d1,d2,τ} indexed by a label l′ ∈ L, two time offsets 0 ≤ d_1 < d_2, and a threshold τ > 0. Such an f encodes whether or not the event sequence x contains at least τ events with label l′ with timestamps in the window [t − d_2, t − d_1). Examples of decision trees that use such basis state functions are shown in Figure 1.

5 Forecasting

In this section, we describe how to use PCIMs to forecast whether a sequence of target labels will occur in a given order and in given time intervals.
For example, we may wish to know the probability that a computer system will experience a system failure in the next week and again in the following week, or that an internet user will be shown a particular display ad and then visit the advertising merchant's website in the next month. We call such a sequence of labels with associated intervals an episodic sequence, and denote it by e = {(l*_j, [a*_j, b*_j))}_{j=1}^{k}. We call (l*_j, [a*_j, b*_j)) the j-th episode. We say that the episodic sequence e occurs in an event sequence x if there exist i_1 < · · · < i_k such that (t_{i_j}, l_{i_j}) ∈ x, l_{i_j} = l*_j, and t_{i_j} ∈ [a*_j, b*_j). The set of event sequences x in which e occurs is denoted X_e. Given an event sequence h and a time t* ≥ t(h), we term any event sequence x whose history up to t* agrees with h (i.e., h(t*, x) = h) an extension of h from t*. Our forecasting problem is, given an observed sequence h at time t* ≥ t(h), to compute the probability that e occurs in extensions of h from t*. This probability is p(X ∈ X_e | h(t*, X) = h), and will be denoted using the shorthand p(X_e|h, t*). Computing p(X_e|h, t*) is hard in general because the probability of episodes of interest can depend on arbitrary numbers of intervening events. We therefore give Monte Carlo estimates for p(X_e|h, t*), first describing a forward sampling procedure for forecasting episodic sequences (also applicable to other forecasting problems), and then introducing an importance sampling scheme specifically designed for forecasting episodic sequences.

5.1 Forward Sampling

The probability of an episodic sequence can be estimated using a forward sampling approach, by sampling M extensions {x^(m)}_{m=1}^{M} of h from t* and using the estimate p̂_Fwd(X_e|h, t*; M) = (1/M) Σ_{m=1}^{M} 1_{X_e}(x^(m)). By Hoeffding's inequality, P(|p̂_Fwd(X_e|h, t*; M) − p(X_e|h, t*)| > ε) ≤ 2e^{−2ε²M}. Thus, the error in p̂_Fwd(X_e|h, t*; M) falls as O(1/√M).
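The occurrence test 1_{X_e}(x) above can be implemented by greedy earliest-feasible matching: match each episode in turn to the earliest unused event that satisfies it. A sketch (names are ours); if a complete matching exists, the greedy one succeeds too, since taking the earliest feasible event for episode j leaves the largest possible suffix for the remaining episodes:

```python
def occurs(episodes, x):
    """episodes: list of (label, a, b) meaning label in [a, b);
    x: list of (t, label) sorted by time. True iff the episodic
    sequence occurs in x in order."""
    i = 0
    for label, a, b in episodes:
        # Advance to the earliest event satisfying this episode.
        while i < len(x) and not (x[i][1] == label and a <= x[i][0] < b):
            i += 1
        if i == len(x):
            return False
        i += 1  # consume the matched event
    return True
```

This is the indicator evaluated on each sampled extension in the estimators below.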
It is important to note that 1_{X_e}(x) depends on x only up to b*_k, and thus we need only sample finite extensions x such that t(x) < b*_k from p(x | h(t*, x) = h, t_{|x|+1} ≥ b*_k). The forward sampling algorithm for Poisson Networks [15] can easily be adapted to PCIMs. Here we outline how to forward sample an extension x of h from t* to b*_k given a general CIM. Forward sampling consists of iteratively obtaining a sample sequence x_i of length i by sampling (t_i, l_i) and appending it to a previously sampled sequence x_{i−1} of length i − 1. The CIM likelihood (Equation 1) of an arbitrary event sequence x can be written as ∏_{i=1}^{n} p(t_i, l_i|h_i; θ). Thus, we begin with x_{|h|} = h, and iteratively sample (t_i, l_i) from p(t_i, l_i|h_i = x_{i−1}; θ) and append it to x_{i−1} to obtain x_i. Note that one needs to use rejection sampling during the first iteration to ensure t_{|h|+1} > t*. The finite extension up to b*_k is obtained by terminating when t_i > b*_k and rejecting t_i. To sample (t_i, l_i) we note that

p(t_i, l_i|h_i; θ) = λ_{l_i}(t_i|h_i; θ) e^{−Λ_{l_i}(t_i|h_i; θ)} ∏_{l≠l_i} e^{−Λ_l(t_i|h_i; θ)}

has a competing-risks form [1, 11], so that we can sample |L| candidate times t_i^l independently from the non-homogeneous exponential densities λ_l(t_i^l|h_i; θ) e^{−Λ_l(t_i^l|h_i; θ)}, and then let t_i be the smallest of these candidate times and l_i be the corresponding l. A more detailed description of sampling t_i^l from piecewise-constant conditional intensities is given in [15]. Finally, we note that the basic sampling procedure can be made more efficient using the techniques described in [15] and [7].

5.2 Importance Sampling

When using a forward sampling approach to forecast unlikely episodic sequences, the episodes of interest will not occur in most of the sampled extensions, and our estimate of p(X_e|h, t*) will be noisy.
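For constant intensities, one competing-risks step reduces to drawing an exponential candidate time per label and keeping the earliest. A simplified sketch (ours; the paper's piecewise-constant case needs the refinement described in [15]):

```python
import math
import random

def next_event(t_now, rates, rng=random):
    """One competing-risks step: for each label with constant rate
    lambda_l, draw a candidate t_now + Exp(lambda_l); the earliest
    candidate wins and determines the next event's label."""
    best_t, best_l = math.inf, None
    for label, rate in rates.items():
        if rate <= 0.0:
            continue  # zero intensity: this label can never fire
        cand = t_now + rng.expovariate(rate)
        if cand < best_t:
            best_t, best_l = cand, label
    return best_t, best_l
```

Repeating this step, updating the rates from the (piecewise-constant) state function after each event, yields the forward sampler described above.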
In fact, because the absolute error in p̂_Fwd falls as the square root of the number of sequences sampled, we would need O(1/p(X_e|h, t*)²) sample sequences to get non-trivial lower bounds on p(X_e|h, t*) using a forward sampling approach. To mitigate this problem we develop an importance sampling approach, in which sequences are drawn from a proposal distribution q(·) that has an increased likelihood of generating extensions in which X_e occurs, and a weighted empirical estimate is used. In particular, we sample extensions x^(m) of h from t* from q(x | h(t*, x) = h, t_{|x|+1} ≥ b*_k) instead of p(x | h(t*, x) = h, t_{|x|+1} ≥ b*_k), and estimate p(X_e|h, t*) through

p̂_Imp(X_e|h, t*; M) = (1 / Σ_{m=1}^{M} w(x^(m))) Σ_{m=1}^{M} w(x^(m)) 1_{X_e}(x^(m)),   w(x) = p(x | h(t*, x) = h, t_{|x|+1} ≥ b*_k) / q(x | h(t*, x) = h, t_{|x|+1} ≥ b*_k).

The Poisson Superposition Importance Sampler (PSIS) is an importance sampler whose proposal distribution q is based on Poisson superposition. This proposal distribution is defined to be a CIM whose conditional intensity functions are given by λ_l(t|x; θ) + λ*_l(t|x), where λ_l(t|x; θ) is the conditional intensity function of l under the model and λ*_l(t|x) is given by

λ*_l(t|x) = 1 / (b*_{j(x)} − a_{j(x)}(x))   if l = l*_{j(x)}, t ∈ [a_{j(x)}(x), b*_{j(x)}), and j(x) ≠ 0;
λ*_l(t|x) = 0   otherwise,

where the active episode j(x) is 0 if t(x) ≥ b_j(x) for all j = 1, · · · , k, and is min({j : b_j(x) > t(x)}) otherwise. The time b_j(x) when the j-th episode ceases to be active is the time at which the j-th episode occurs in x, or b*_j if it does not occur. If the episodic intervals [a*_j, b*_j) do not overlap, a_j(x) = a*_j. In general, a_j(x) and b_j(x) are given by the recursion

a_j(x) = max{a*_j, b_{j−1}(x)},   b_j(x) = min({b*_j} ∪ {t_i : (t_i, l_i) ∈ x, l_i = l*_j, t_i ∈ [a_j(x), b*_j)}).

This choice of q makes it likely that the j-th episode will occur after the (j−1)-th episode. As the proposal distribution is also a CIM, importance sampling can be done using the forward sampling procedure above.
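The self-normalized estimate p̂_Imp is simply a weighted average of the occurrence indicators; a minimal sketch (the function name is ours):

```python
def importance_estimate(weights, indicators):
    """Self-normalized importance-sampling estimate:
    sum_m w_m * 1[episode occurred in sample m] / sum_m w_m."""
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * ind for w, ind in zip(weights, indicators)) / total
```

With all weights equal this reduces to the plain forward-sampling average p̂_Fwd, as expected.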
If the model is a PCIM, the proposal distribution is also a PCIM, since the λ*_l(t|x) are piecewise-constant in t. In practice, the computation of j(x), a_j(x), and b_j(x) can be done during forward sampling. The importance weight corresponding to our proposal distribution is

w(x) = ∏_{j=1}^{k} exp((b_j(x) − a_j(x)) / (b*_j − a_j(x))) ∏_{(t_i, l_i)∈x : t_i = b_j(x), l_i = l*_j} λ_{l*_j}(t_i|x_i) / (λ_{l*_j}(t_i|x_i) + 1/(b*_j − a_j(x))).

In many problems, the importance weight w(x) of a sequence x of length n is a product of n small terms. When n is large, this can cause the importance weights to become degenerate, a problem often solved using particle filtering [7]. Note that the second product in w(x) above has at most one term for each j, so that w(x) has k terms corresponding to the k episodes, independent of n. Thus, we do not experience the problem of degenerate weights when k is small, regardless of the number of events sampled.

6 Experimental Results

We first validate that PCIMs can learn temporal dependencies and that the PSIS gives faster forecasting than forward sampling, using a synthetic data set. We then show that PCIMs are more than an order of magnitude faster to train than Poisson Networks, and better model unseen test data, using real supercomputer log data. Finally, we show that PCIMs and the PSIS allow forecasting of the future interests of web search users, using real log data from a major commercial search engine.

6.1 Validation on Synthetic Data

In order to evaluate the ability of PCIMs to learn nonlinear temporal dependencies, we sampled data from a known model and verified that the dependencies learned were correct. Data was sampled from a PCIM with L = {A, B, C}. The known model is shown in Figure 1.
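The recursion for the effective episode windows a_j(x) and b_j(x), which both the proposal intensity λ*_l and the importance weight depend on, can be sketched directly (names are ours; we assume non-negative times, so the j = 1 base case uses b_0 = 0):

```python
def episode_windows(episodes, x):
    """episodes: list of (label, a_star, b_star); x: list of (t, label).
    Returns [(a_j, b_j)]: a_j = max(a*_j, b_{j-1}); b_j is the first
    matching event time in [a_j, b*_j), or b*_j if the episode fails."""
    windows = []
    b_prev = 0.0
    for label, a_star, b_star in episodes:
        a_j = max(a_star, b_prev)
        hits = [t for t, l in x if l == label and a_j <= t < b_star]
        b_j = min([b_star] + hits)
        windows.append((a_j, b_j))
        b_prev = b_j
    return windows
```

Because the windows only depend on the events matched so far, they can be maintained incrementally during forward sampling, as noted above.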
[Figure 1: Decision trees representing S and θ for events of type A, B and C; panels (a), (b), and (c) show the trees for event types A, B, and C. Each tree tests for label occurrences in windows such as [t−2, t−1) and [t−1, t) (and [t−5, t) for types B and C), with piecewise-constant leaf rates λ ∈ {0.0, 0.002, 0.1, 0.2, 10.0}.] We sampled 100 time units of data, observing 97 instances of A, 58 instances of B, and 71 instances of C. We then learned a PCIM from the sampled data. We used basis state functions that tested for the presence of each label in windows with boundaries at t − 0, 1, 2, ..., 10, and +∞ time units. We used a common prior with a mean rate of 0.1 and an equivalent sample size of one time unit for all λl, and the structural prior described above with κls = 0.1 for all s. The learned PCIM perfectly recovered the correct model structure. We repeated the experiment by sampling data from a model with fifteen labels, consisting of five independent copies of the model above. That is, L = {A1, B1, C1, ..., A5, B5, C5}, with each triple Ai, Bi, Ci independent of the other labels, and dependent on each other as specified by Figure 1. Once again, the model structure was recovered perfectly. We evaluated the PSIS in forecasting event sequences with the model shown in Figure 1. The convergence of importance sampling is compared with that of forward sampling in Figure 2. We give results for forecasting three different episodic sequences, consisting of the label sequences {C}, {C, B}, and {C, B, A}, all in the interval [0, 1], given an empty history. The three queries are given in order of decreasing probability, so that inference becomes harder. We show how the estimates of the probabilities of the given episodic sequences vary as a function of the number of sequences sampled, giving the mean and variance of the trajectories of the estimates computed over ten runs.
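A PCIM's per-label rate function is just a decision tree over history tests. The sketch below encodes one plausible reading of the tree for event type A in Figure 1 (the exact branch order and rate assignment are assumptions made for illustration, since the figure itself is only summarized here):

```python
def occurred(history, label, t, w_lo, w_hi):
    """Basis state function: did `label` occur in the window [t - w_hi, t - w_lo)?"""
    return any(l == label and t - w_hi <= ti < t - w_lo for ti, l in history)

def rate_a(history, t):
    """Piecewise-constant rate for label A (an assumed reading of Figure 1a):
    a recent A suppresses further As, while an A one step back excites them."""
    if occurred(history, "A", t, 0.0, 1.0):   # A in [t-1, t): refractory
        return 0.0
    if occurred(history, "A", t, 1.0, 2.0):   # A in [t-2, t-1): excited
        return 10.0
    return 0.1                                 # baseline rate

x = [(0.5, "A")]
# rate_a(x, 1.0) -> 0.0 (refractory), rate_a(x, 2.0) -> 10.0 (excited),
# rate_a(x, 3.0) -> 0.1 (baseline)
```

Because the rate is constant between window-boundary crossings and event times, sampling the next event reduces to drawing exponentials segment by segment, which is the property the forward sampler exploits.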
For all three queries, importance sampling converges faster and has lower variance. Since exact inference is infeasible for this model, we forward sample 4,000,000 event sequences and display this estimate. Note that despite the large sample size, the Hoeffding bound gives a 95% confidence interval of ±0.0006 for this estimate, which is large relative to the probabilities estimated. [Figure 2: Trajectories of p̂Imp and p̂Fwd vs. the number of sequences sampled for three different queries: (a) label C in [0, 1]; (b) labels C, B in [0, 1]; (c) labels C, B, A in [0, 1]. The dashed and dotted lines show the empirical mean and standard deviation over ten runs of p̂Imp and p̂Fwd. The solid line shows p̂Fwd based on 4 million event sequences.] This further suggests the need for importance sampling for rare label sequences. 6.2 Modeling Supercomputer Event Logs We compared PCIM and Poisson Nets on the task of modeling system event logs from the BlueGene/L supercomputer at Lawrence Livermore National Laboratory [14], available at the USENIX Computer Failure Data Repository. We filtered out informational (non-alert) messages from the logs, and randomly split the events by node into a training set with 311,060 alerts from 21,962 nodes and a test set with 68,502 alerts from 9,412 nodes. We learned dependencies between the 38 alert types in the data. We treat the events from each node as separate sequences, and use a product of the per-sequence likelihoods given in equation (1). For both models, we used window boundaries at t − 1/60, 1, 60, 3600, and ∞ seconds. The PCIM used count threshold basis state functions with thresholds of 1, 4, 16 and 64, while the Poisson Net used log count feature vectors as described above. Both models used priors with a mean rate of an event every 100 days, no dependencies, and an equivalent sample size of one second. Both used a structural prior with κls = 0.1. Table 1 shows the test set likelihood and the run time for the two approaches.
PCIM achieves better test set likelihood and is more than an order of magnitude faster to train.

                Test Log Likelihood    Training Time
  PCIM          -85.3                  11 min
  Poisson Net   -88.8                  3 hr 33 min

Table 1: A comparison of the PCIM and the Poisson Net in modeling supercomputer event logs. The test set log likelihood reported has been divided by the number of test nodes (9,412). The training times for the PCIM and the Poisson Net are also shown.

6.3 Forecasting Future Interests of Web Search Users We used the query logs of a major internet search engine to investigate the use of PCIMs in forecasting the future interests of web search users. All queries are mapped to one of 36 different interest categories using an automatic classifier. Thus, L contains 36 labels, such as “Travel” or “Health & Wellness.” Our training set contains event sequences for approximately 23k users, consisting of about 385k timestamped labels recorded over a two-month period. The test set contains event sequences for approximately 11k users, with about 160k timestamped labels recorded over the next month. We trained a PCIM on the training data using window boundaries at t − 1 hour, t − 1 day, and t − 1 week, and basis state functions that tested for the presence of one or more instances of each label in each window, treating users as i.i.d. The prior had a mean rate of an event every year and an equivalent sample size of one day. The structural prior had κls = 0.1. The model took 1 day and 18 hours to train on a 3 GHz workstation. We did not compare to a Poisson network on this data since, as shown above, Poisson networks take an order of magnitude longer to learn. [Figure 3: Precision-recall curves for forecasting future Health & Wellness queries using a full PCIM, a restricted PCIM that conditions only on past Health & Wellness queries, a baseline that takes into account only past Health & Wellness queries and not their timing, and random guessing.]
Given the first week of each test user's event sequence, we forecasted whether they would issue a query in a chosen target category in the second week. We used the PSIS with 100 sample sequences for forecasting. Figure 3 shows the precision-recall curve for one target category label. Also shown is the result for a restricted PCIM that only models dependencies on prior occurrences of the target category. This is compared to a baseline where the conditional intensity depends only on whether the target label appeared in the history. This shows that modeling the temporal aspect of dependencies does provide a large improvement. Modeling dependencies on past occurrences of other labels also provides an improvement in the right-hand region of the precision-recall curve. To better understand the performance of PCIMs, we also examined the problem of predicting the first occurrence of the target label. As Figure 3 suggests (but does not show), the PCIM can model cross-label dependencies to forecast the first occurrence of the target label. Forecasting new interests is valuable in a variety of applications, including advertising, so the fact that PCIMs are able to forecast first occurrences is promising. Results similar to Figure 3 were obtained for other target labels. 7 Discussion We presented the Piecewise-Constant Conditional Intensity Model, a model of temporal dependencies in continuous-time event streams. We gave a conjugate prior and a greedy tree-building procedure that allow for efficient learning of these models. Dependencies on the history are represented through automatically learned combinations of a given set of basis state functions. One of the key benefits of PCIMs is that they allow domain knowledge to be encoded in these basis state functions. This domain knowledge is incorporated into the model during structure search in situations where it is supported by the data.
The fact that we use decision trees allows us to easily interpret the learned dependencies. In this paper, we focused on basis state functions indexed by a fixed set of time windows and labels. Exploring alternative types of basis state functions is an area for future research. For example, basis state functions could encode the most recent events that have occurred in the history, rather than the events that occurred in windows of interest. The capacity of the resulting model class depends on the set of basis state functions chosen. Understanding how to choose the basis state functions, and how to adapt our learning procedure to control the resulting capacity, is another open topic. We also presented the Poisson Superposition Importance Sampler for forecasting episodic sequences with PCIMs. Developing forecasting algorithms for more general queries is of interest. Finally, we demonstrated the value of PCIMs in modeling the temporal behavior of web search users and of supercomputer nodes. In many applications, we have access to richer event streams, such as spatio-temporal event streams and event streams with structured labels. It would be interesting to extend PCIMs to handle such rich event streams.

References
[1] Simeon M. Berman. Note on extreme values, competing risks and semi-Markov processes. Ann. Math. Stat., 34(3):1104–1106, 1963.
[2] W. Buntine. Theory refinement on Bayesian networks. In UAI, 1991.
[3] David Maxwell Chickering, David Heckerman, and Christopher Meek. A Bayesian approach to learning Bayesian networks with local structure. In UAI, 1997.
[4] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Elementary Theory and Methods, volume I. Springer, 2nd edition, 2003.
[5] Thomas Dean and Keiji Kanazawa. Probabilistic temporal reasoning. In AAAI, 1988.
[6] Vanessa Didelez. Graphical models for marked point processes based on local independence. J. Roy. Stat. Soc., Ser. B, 70(1):245–264, 2008.
[7] Yu Fan and Christian R. Shelton. Sampling for approximate inference in continuous time Bayesian networks. In AI & M, 2008.
[8] N. Friedman, I. Nachman, and D. Pe'er. Using Bayesian networks to analyze expression data. J. Comp. Bio., 7:601–620, 2000.
[9] Nir Friedman, Kevin Murphy, and Stuart Russell. Learning the structure of dynamic probabilistic networks. In UAI, 1998.
[10] David Heckerman, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, and Carl Kadie. Dependency networks for inference, collaborative filtering, and data visualization. JMLR, 1:49–75, October 2000.
[11] A. A. J. Marley and Hans Colonius. The “horse race” random utility model for choice probabilities and reaction times, and its competing risks interpretation. J. Math. Psych., 36:1–20, 1992.
[12] Uri Nodelman, Christian R. Shelton, and Daphne Koller. Continuous time Bayesian networks. In UAI, 2002.
[13] Uri Nodelman, Christian R. Shelton, and Daphne Koller. Expectation maximization and complex duration distributions for continuous time Bayesian networks. In UAI, 2005.
[14] Adam Oliner and Jon Stearley. What supercomputers say: an analysis of five system logs. In IEEE/IFIP Conf. Dep. Sys. Net., 2007.
[15] Shyamsundar Rajaram, Thore Graepel, and Ralf Herbrich. Poisson-networks: A model for structured point processes. In AISTATS, 2005.
[16] Aleksandr Simma, Moises Goldszmidt, John MacCormick, Paul Barham, Richard Black, Rebecca Isaacs, and Richard Mortier. CT-NOR: Representing and reasoning about events in continuous time. In UAI, 2008.
[17] Aleksandr Simma and Michael I. Jordan. Modeling events with cascades of Poisson processes. In UAI, 2010.
[18] Wilson Truccolo, Uri T. Eden, Matthew R. Fellows, John P. Donoghue, and Emery N. Brown. A point process framework relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J. Neurophysiol., 93:1074–1089, 2005.
Inferring Interaction Networks using the IBP applied to microRNA Target Prediction Hai-Son Le Machine Learning Department Carnegie Mellon University Pittsburgh, PA, USA hple@cs.cmu.edu Ziv Bar-Joseph Machine Learning Department Carnegie Mellon University Pittsburgh, PA, USA zivbj@cs.cmu.edu Abstract Determining interactions between entities and the overall organization and clustering of nodes in networks is a major challenge when analyzing biological and social network data. Here we extend the Indian Buffet Process (IBP), a nonparametric Bayesian model, to integrate noisy interaction scores with properties of individual entities for inferring interaction networks and clustering nodes within these networks. We present an application of this method to study how microRNAs regulate mRNAs in cells. Analysis of synthetic and real data indicates that the method improves upon prior methods, correctly recovers interactions and clusters, and provides accurate biological predictions. 1 Introduction Determining interactions between entities based on observations is a major challenge when analyzing biological and social network data [1, 12, 15]. In most cases we can obtain information regarding each of the entities (individuals in social networks and proteins in biological networks) and some information about possible relationships between them (friendships or conversation data for social networks and motif or experimental data for biology). The goal is then to integrate these datasets to recover the interaction network between the entities being studied. To simplify the analysis of the data it is also beneficial to identify groups, or clusters, within these interaction networks. Such groups can then be mapped to specific demographics or interests in the case of social networks or to modules and pathways in biological networks [2]. A large number of generative models were developed to represent entities as members of a number of classes. 
Many of these models are based on the stochastic blockmodel introduced in [19]. While the number of classes in such models can be fixed, or provided by the user, nonparametric Bayesian methods have been applied to allow this number to be inferred from the observed data [9]. The stochastic blockmodel was further extended in [1] to allow mixed membership of entities within these classes. An alternative approach is to use latent features to describe entities. [10] proposed a nonparametric Bayesian matrix factorization method to learn the latent factors in relational data, whereas [12] presented a nonparametric model to study binary link data. All of these methods rely on the pairwise link and interaction data and, in most cases, do not utilize the properties of the individual entities when determining interactions. Here we present a model that extends the Indian Buffet Process (IBP) [7], a nonparametric Bayesian prior over infinite binary matrices, to learn the interactions between entities with an unbounded number of groups. Specifically, we represent each group as a latent feature and define interactions between entities within each group. Such a latent feature representation has been used in the past to describe entities [7, 10, 12], and the IBP is an appropriate nonparametric prior for inferring the number of latent features. However, unlike the IBP, our model utilizes interaction scores as priors, and so the model is no longer exchangeable. We thus extend the IBP by integrating it with Markov random field (MRF) constraints, specifically pairwise potentials as in the Ising model. MRF priors have been combined with Dirichlet Process mixture models for image segmentation in related work by Orbanz and Buhmann [13]. Pairwise information is also used in the distance dependent Chinese restaurant process [4] to encourage similar objects to be clustered.
Our model is well suited for cases in which we are provided with information on both the link structure and the outcome of the underlying interactions. In social networks, such data can come from observations of conversations between individuals followed by actions of the specific individuals (for example, travel), whereas in biology it is suited for regulatory networks, as discussed below. We apply our model to study the microRNA (miRNA) target prediction problem. miRNAs were recently discovered as a class of regulatory RNA molecules that regulate the levels of messenger RNAs (mRNAs) (which are later translated to proteins) by binding and inhibiting their specific targets [15]. They were shown to play an important role in a number of diseases, including cancer, and determining the set of genes targeted by each miRNA is an important question when studying these diseases. Several methods have been proposed to predict the targets of miRNAs based on their sequence.¹ While these predictions are useful, due to the short length of miRNAs they lead to many false positives and some false negatives [8]. In addition to sequence information, it is now possible to obtain the expression levels of miRNAs and their predicted mRNA targets using microarrays. Since miRNAs inhibit their direct targets, integrating sequence and expression data can improve predictions regarding the interactions between miRNAs and their targets. A number of methods based on regression analysis have been suggested for this task [8, 17]. While methods utilizing expression data improved upon methods that only used sequence data, they often treated each target mRNA in isolation. In contrast, it has now been shown that each miRNA often targets hundreds of genes, and that miRNAs often work in groups to achieve a larger impact [14].
Thus, rather than trying to infer a separate regression model for each mRNA, we use our extended IBP model to infer a joint regression model for a cluster of mRNAs and the set of miRNAs that regulate them. Such a model provides statistical confidence (since it combines several observations) while adhering more closely to the underlying biology. In addition to inferring the interactions in the dataset, such a model also provides a grouping of genes and miRNAs, which can be used to improve function prediction. 2 Computational model First, we derive a distribution on infinite binary matrices, starting with a finite model and taking the limit as the number of features goes to infinity. Second, we describe the application of our model to the miRNA target prediction problem using a Gaussian additive model. 2.1 Interaction model Let zik denote the (i, k) entry of a matrix Z, and let zk denote the kth column of Z. The group membership of N entities is defined by a (latent) binary matrix Z, where zik = 1 if entity i belongs to group k. Given Z, we say that entity i interacts with entity j if zikzjk = 1 for some k. Note that two entities can interact through many groups, where each group represents one type of interaction. In many cases, a prior on such interactions can be obtained. Assume we have an N × N symmetric matrix W, where wij indicates the degree to which we believe that entities i and j interact: wij > 0 if entities i and j are more likely to interact, and wij < 0 if they are less likely to do so. Nonparametric prior for Z: Griffiths and Ghahramani [7] proposed the Indian Buffet Process (IBP) as a nonparametric prior distribution on sparse binary matrices Z. The IBP can be derived from a simple stochastic process, described by a culinary metaphor. In this metaphor, there are N customers (entities) entering a restaurant and choosing from an infinite array of dishes (groups). The first customer tries Poisson(α) dishes, where α is a parameter.
The remaining customers enter one after the other. The ith customer tries a previously sampled dish k with probability mk/i, where mk is the number of previous customers who have sampled this dish. He then samples a Poisson(α/i) number of new dishes. This process defines an exchangeable distribution on the equivalence classes of Z, which are the sets of binary matrices that map to the same left-ordered binary matrices [7]. Exchangeability means that the order of the customers does not affect the distribution and that permutation of the data does not change the resulting likelihood. (¹Genes that are targets of miRNAs contain the reverse complement of part of the miRNA sequence.) The prior knowledge on interactions discussed above (encoded by W) violates the exchangeability of the IBP, since the group membership probability depends on the identities of the entities, whereas exchangeability means that a permutation of the entities does not change the probability. In [11], Miller et al. presented the phylogenetic Indian Buffet Process (pIBP), where they used a tree representation to express non-exchangeability. In their model, the relationships among customers are encoded as a tree, allowing them to exploit the sum-product algorithm in defining the updates for an MCMC sampler without significantly increasing the computational burden of inference. We combine the IBP with pairwise potentials using W, constraining the dish selection of the customers. Similar to the pIBP, the entries in zk are not chosen independently given πk, but rather depend on the particular assignment of the remaining entries. In the following sections, we start with a model with a finite number of groups and consider the limit as the number of groups grows to derive the nonparametric prior. Note that in our model, as in the original IBP [7], while the number of rows is finite, the number of columns (features) can be infinite.
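The culinary metaphor above translates directly into a sampler for the standard IBP prior. The pairwise-potential variant of this paper would modify the dish-selection probability, but the plain process is a useful baseline; a minimal sketch:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's inversion sampler for a Poisson draw (stdlib-only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_ibp(n, alpha, rng):
    """Draw a binary feature matrix Z from the IBP prior with concentration alpha."""
    counts = []                # m_k: how many customers have taken dish k
    rows = []
    for i in range(1, n + 1):
        row = [0] * len(counts)
        for k, m_k in enumerate(counts):
            if rng.random() < m_k / i:               # old dish k taken w.p. m_k / i
                row[k], counts[k] = 1, m_k + 1
        for _ in range(sample_poisson(alpha / i, rng)):  # Poisson(alpha/i) new dishes
            counts.append(1)
            row.append(1)
        rows.append(row)
    width = len(counts)
    return [r + [0] * (width - len(r)) for r in rows]  # pad earlier rows with zeros

Z = sample_ibp(10, 2.0, random.Random(0))
```

Every sampled dish is taken by the customer who created it, so each column of Z contains at least one 1, matching the fact that only features with m_k > 0 need to be stored.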
We can thus define a prior on interactions between entities (since their number is known in advance) while still allowing for an infinite number of groups. This flexibility allows the group parameters to be drawn from an infinite mixture of priors, which may lead to identical groups of entities, each with a different set of parameters. 2.1.1 Prior on finite matrices Z We have an N × K binary matrix Z, where N is the number of entities and K is a fixed, finite number of groups. In the IBP, each group/column k is associated with a parameter πk, chosen from a Beta(α/K, 1) prior distribution, where α is a hyperparameter:
$$\pi_k \mid \alpha \sim \mathrm{Beta}\!\left(\tfrac{\alpha}{K}, 1\right), \qquad P(\mathbf{z}_k \mid \pi_k) = \exp\Big(\sum_i (1 - z_{ik}) \log(1 - \pi_k) + z_{ik} \log \pi_k\Big).$$
The joint probability of a column k and πk in the IBP is
$$P(\mathbf{z}_k, \pi_k \mid \alpha) = \frac{1}{B(\frac{\alpha}{K}, 1)} \exp\Big(\sum_i (1 - z_{ik}) \log(1 - \pi_k) + z_{ik} \log \pi_k + \big(\tfrac{\alpha}{K} - 1\big) \log \pi_k\Big), \tag{1}$$
where B(·) is the Beta function. For our model, we add the new pairwise potentials on the memberships of entities. Defining $\Phi_{\mathbf{z}_k} = \exp\big(\sum_{i<j} w_{ij} z_{ik} z_{jk}\big)$, the joint probability of a column k and πk is
$$P(\mathbf{z}_k, \pi_k \mid \alpha) = \frac{1}{Z'} \Phi_{\mathbf{z}_k} \exp\Big(\sum_i (1 - z_{ik}) \log(1 - \pi_k) + z_{ik} \log \pi_k + \big(\tfrac{\alpha}{K} - 1\big) \log \pi_k\Big), \tag{2}$$
where Z′ is the partition function. Note that the IBP is a special case of our model when all the w's are zero (W = 0). Following [7], we define the lof-equivalence classes [Z] as the sets of binary matrices mapped to the same left-ordered binary matrices. The history hi of a feature k at an entity i is defined as (z1k, . . . , z(i−1)k). When no entity is specified, h refers to the full history. mk and mh denote the number of non-zero entries of a feature k and a history h, respectively. Kh is the number of features possessing the history h, while K0 is the number of features having mk = 0. $K_+ = \sum_{h=1}^{2^N - 1} K_h$ is the number of features for which mk > 0. By integrating over all values of πk, we get the marginal probability of a binary matrix Z.
$$P(\mathbf{Z}) = \prod_{k=1}^K \int_0^1 P(\mathbf{z}_k, \pi_k \mid \alpha)\, d\pi_k \tag{3}$$
$$= \prod_{k=1}^K \frac{1}{Z'} \Phi_{\mathbf{z}_k} \int_0^1 \exp\Big(\big(\tfrac{\alpha}{K} + m_k - 1\big) \log \pi_k + (N - m_k) \log(1 - \pi_k)\Big)\, d\pi_k \tag{4}$$
$$= \prod_{k=1}^K \frac{1}{Z'} \Phi_{\mathbf{z}_k} B\!\left(\tfrac{\alpha}{K} + m_k,\ N - m_k + 1\right) \tag{5}$$
The partition function Z′ can be written as $Z' = \sum_{h=0}^{2^N - 1} \Phi_h B\!\left(\tfrac{\alpha}{K} + m_h,\ N - m_h + 1\right)$. 2.1.2 Taking the infinite limit The probability of a particular lof-equivalence class of binary matrices, [Z], is
$$P([\mathbf{Z}]) = \sum_{\mathbf{Z}} P(\mathbf{Z}) = \frac{K!}{\prod_{h=0}^{2^N-1} K_h!} \prod_{k=1}^K \frac{1}{Z'} \Phi_{\mathbf{z}_k} B\!\left(m_k + \tfrac{\alpha}{K},\ N - m_k + 1\right) \tag{6}$$
Taking the limit as K → ∞, we can show that, with $\Psi = \sum_{h=1}^{2^N-1} \Phi_h \frac{(N - m_h)!\,(m_h - 1)!}{N!}$:
$$\lim_{K\to\infty} P([\mathbf{Z}]) = \lim_{K\to\infty} \frac{K!}{\prod_{h=0}^{2^N-1} K_h!} \prod_{k=1}^{K_+} \Phi_{\mathbf{z}_k} \frac{B(m_k + \frac{\alpha}{K},\ N - m_k + 1)}{B(\frac{\alpha}{K},\ N + 1)} \prod_{k=1}^K \frac{1}{Z'} B\!\left(\tfrac{\alpha}{K},\ N + 1\right) \tag{7}$$
$$= \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N-1} K_h!} \prod_{k=1}^{K_+} \Phi_{\mathbf{z}_k} \frac{(N - m_k)!\,(m_k - 1)!}{N!} \exp(-\alpha \Psi) \tag{8}$$
The detailed derivations are shown in the Appendix. 2.1.3 The generative process We now describe a generative stochastic process for Z. It can be understood by a culinary metaphor, where each row of Z corresponds to a customer and each column corresponds to a dish. We denote by h(i) the value of zik in the complete history h. With $\bar{\Phi}_h = \Phi_h \frac{(N - m_h)!\,(m_h - 1)!}{N!}$, we define $\Psi_i = \sum_{h : h_i = 0,\ h(i) = 1} \bar{\Phi}_h$, so that $\Psi = \sum_{i=1}^N \Psi_i$. Finally, let z<ik be entries 1, . . . , (i − 1) of zk. Assume that we are provided with a compatibility score between pairs of customers. That is, we have a value wij for the food-preference similarity between customer i and customer j. Higher values of wij indicate similar preferences, and customers with such values are more likely to select the same dish. Therefore, the dishes a customer selects may depend on the choices of previous customers. The first customer tries Poisson(αΨ1) dishes. The remaining customers enter one after the other. The ith customer selects dishes with a probability that partially depends on the selections of the previous customers. The probability that a dish would be selected is $\sum_{h : h_i = \mathbf{z}_{<ik},\ h(i) = 1} \bar{\Phi}_h \big/ \sum_{h : h_i = \mathbf{z}_{<ik}} \bar{\Phi}_h$. He then samples a Poisson(αΨi) number of new dishes.
This process repeats until all customers have made their selections. Although this process is not exchangeable, the sequential order of customers is not important. This means that we get the same marginal distribution for any particular order of customers. Letting $K_1^{(i)}$ denote the number of new dishes sampled by customer i, the probability of a particular matrix generated by this process is
$$P(\mathbf{Z}) = \frac{\alpha^{K_+}}{\prod_{i=1}^N K_1^{(i)}!} \prod_{k=1}^{K_+} \bar{\Phi}_{\mathbf{z}_k} \exp(-\alpha \Psi) \tag{9}$$
If we only pay attention to the lof-equivalence classes [Z], then since there are $\prod_{i=1}^N K_1^{(i)}! \big/ \prod_{h=1}^{2^N-1} K_h!$ matrices generated by this process that map to the same equivalence class, multiplying P(Z) by this quantity recovers Equation (8). We show in the Appendix that in the case of the IBP, where Φh = 1 for all histories h (when W = 0), this generative process simplifies to the Indian Buffet Process. 2.2 Regression model for mRNA expression In this section, we describe the application of the nonparametric prior to the miRNA target prediction problem. However, the method is applicable in general settings where there is a way to model the properties of one entity from the properties of its interacting entities. Our input data are expression profiles of M messenger RNA (mRNA) transcripts and R miRNA transcripts across T samples. Let $\mathbf{X} = (\mathbf{x}_1^T, \dots, \mathbf{x}_M^T)^T$, where each row vector xi is the expression profile of mRNA i in all samples. Similarly, let $\mathbf{Y} = (\mathbf{y}_1^T, \dots, \mathbf{y}_R^T)^T$ represent the expression profiles of the R miRNAs. Furthermore, suppose we are given an M × R matrix C, where cij is the prior likelihood score for the interaction of mRNA i and miRNA j. Such a matrix C can be obtained from sequence-based miRNA target predictions, as discussed above. Applying our interaction model to this problem, the set of N = M + R entities is divided into two disjoint sets of mRNAs and miRNAs. Let $\mathbf{Z} = (\mathbf{U}^T, \mathbf{V}^T)^T$, where U and V are the group membership matrices for mRNAs and miRNAs respectively, and W is given by the block matrix
$$\mathbf{W} = \begin{pmatrix} 0 & \mathbf{C} \\ \mathbf{C}^T & 0 \end{pmatrix}.$$
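Assembling W from a sequence-based score matrix C is a simple block construction; a sketch with plain lists (the scores in C are hypothetical):

```python
def build_w(C):
    """Build the (M+R) x (M+R) symmetric prior matrix W = [[0, C], [C^T, 0]]
    from an M x R mRNA-by-miRNA score matrix C."""
    M, R = len(C), len(C[0])
    n = M + R
    W = [[0.0] * n for _ in range(n)]
    for i in range(M):
        for j in range(R):
            W[i][M + j] = C[i][j]    # mRNA i  <-> miRNA j
            W[M + j][i] = C[i][j]    # symmetric entry
    return W

C = [[0.2, -1.8], [1.0, 0.5], [-0.3, 0.0]]   # hypothetical 3 mRNAs x 2 miRNAs
W = build_w(C)
```

The zero diagonal blocks encode that mRNA-mRNA and miRNA-miRNA pairs carry no prior interaction score; only mRNA-miRNA pairs are scored.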
Therefore, mRNA i and miRNA j interact through all groups k such that uikvjk = 1. 2.2.1 Gaussian additive model In the interaction model suggested by GenMiR++ [8], each miRNA expression profile is used to explain the downregulation of the expression of its targeted mRNAs. Our model uses group-specific and miRNA-specific coefficients ($\mathbf{s} = [s_1, \dots, s_\infty]^T$ with sk > 0 for the groups, and $\mathbf{r} = [r_1, \dots, r_R]^T$ for the miRNAs) to model the downregulation effect. These coefficients represent the baseline effect of group membership and the strength of specific miRNAs, respectively. Using these parameters, the expression level of a specific mRNA can be explained by summing over the expression profiles of all miRNAs targeting it:
$$\mathbf{x}_i \sim \mathcal{N}\Big(\boldsymbol{\mu} - \sum_j \big(r_j + \sum_{k : u_{ik} v_{jk} = 1} s_k\big) \mathbf{y}_j,\ \sigma^2 \mathbf{I}\Big) \tag{10}$$
where µ represents the baseline expression for this mRNA and σ represents measurement noise. Thus, under this model, the expression values of an mRNA are reduced from their baseline values by a linear combination of the expression values of the miRNAs that target it. The probability of the observed data given Z is $P(\mathbf{X}, \mathbf{Y} \mid \mathbf{Z}, \Theta) \propto \exp\big(-\frac{1}{2\sigma^2} \sum_i (\mathbf{x}_i - \bar{\mathbf{x}}_i)^T (\mathbf{x}_i - \bar{\mathbf{x}}_i)\big)$, with $\Theta = \{\boldsymbol{\mu}, \sigma^2, \mathbf{s}, \mathbf{r}\}$ and $\bar{\mathbf{x}}_i = \boldsymbol{\mu} - \sum_j (r_j + \sum_{k : u_{ik} v_{jk} = 1} s_k)\, \mathbf{y}_j$. 2.2.2 Priors for model variables We use the following prior distributions for the variables in our model:
$$s_k \sim \mathrm{Gamma}(\alpha_s, \beta_s), \quad \mathbf{r} \sim \mathcal{N}(0, \sigma_r^2 \mathbf{I}), \quad \boldsymbol{\mu} \sim \mathcal{N}(0, \sigma_\mu^2 \mathbf{I}), \quad 1/\sigma^2 \sim \mathrm{Gamma}(\alpha_v, \beta_v) \tag{11}$$
where the α's and β's are shape and scale parameters. The parameters are given hyperpriors: $1/\sigma_r^2 \sim \mathrm{Gamma}(a_r, b_r)$ and $1/\sigma_\mu^2 \sim \mathrm{Gamma}(a_\mu, b_\mu)$; αs, βs, αv, and βv are also given Gamma hyperpriors. 3 Inference by MCMC As with many nonparametric Bayesian models, exact inference is intractable. Instead, we use a Markov chain Monte Carlo (MCMC) method to sample from the posterior distribution of Z and Θ. Although our model allows Z to have an infinite number of columns, we only need to keep track of the non-zero columns of Z, an important property exploited by several nonparametric Bayesian models [7].
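The mean $\bar{\mathbf{x}}_i$ defined in the Gaussian additive model of Section 2.2.1 can be computed directly from U, V, s, and r. The sketch below transcribes Equation (10) literally (the outer sum runs over all miRNAs, so $r_j$ contributes even without a shared group, as the formula is printed); the dimensions and values are made up for illustration:

```python
def mean_expression(i, mu, U, V, s, r, Y):
    """x_bar_i = mu - sum_j (r_j + sum_{k: u_ik v_jk = 1} s_k) * y_j  (Eq. 10)."""
    T = len(mu)
    xbar = list(mu)
    for j, y_j in enumerate(Y):
        coeff = r[j] + sum(s[k] for k in range(len(s)) if U[i][k] and V[j][k])
        for t in range(T):
            xbar[t] -= coeff * y_j[t]
    return xbar

# Hypothetical toy: 1 mRNA, 2 miRNAs, 2 groups, T = 2 samples
U = [[1, 0]]                 # mRNA 0 belongs to group 0
V = [[1, 0], [0, 1]]         # miRNA 0 in group 0, miRNA 1 in group 1
s, r = [0.5, 0.7], [0.1, 0.2]
Y = [[1.0, 2.0], [3.0, 1.0]]
xbar = mean_expression(0, [0.0, 0.0], U, V, s, r, Y)
# miRNA 0 shares group 0 with mRNA 0 -> coefficient 0.1 + 0.5 = 0.6
# miRNA 1 shares no group           -> coefficient 0.2
# xbar = [-(0.6*1.0 + 0.2*3.0), -(0.6*2.0 + 0.2*1.0)] = [-1.2, -1.4]
```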
Our sampling algorithm involves a mix of Gibbs and Metropolis-Hastings steps, which are used to generate each new sample. 3.1 Sampling from populated columns of Z Let m−ik be the number of one-entries of zk, not counting zik. Also, let z−ik denote the entries of zk except zik, and let Z−(ik) be the entire matrix Z except zik. The probability of an entry given the remaining entries in a column can be derived by considering an ordering of customers in which customer i is the last person in line, and using the generative process in Section 2.1.3:
$$P(z_{ik} = 1 \mid \mathbf{z}_{-ik}) = \frac{\bar{\Phi}_{\mathbf{z}_{<ik},\, z_{ik}=1}}{\bar{\Phi}_{\mathbf{z}_{<ik},\, z_{ik}=1} + \bar{\Phi}_{\mathbf{z}_{<ik},\, z_{ik}=0}} = \frac{\exp\big(\sum_{j \ne i} w_{ij} z_{jk}\big)\, (N - m_{-ik} - 1)!\, m_{-ik}!}{\exp\big(\sum_{j \ne i} w_{ij} z_{jk}\big)\, (N - m_{-ik} - 1)!\, m_{-ik}! + (N - m_{-ik})!\, (m_{-ik} - 1)!} = \frac{\exp\big(\sum_{j \ne i} w_{ij} z_{jk}\big)\, m_{-ik}}{\exp\big(\sum_{j \ne i} w_{ij} z_{jk}\big)\, m_{-ik} + (N - m_{-ik})}$$
We could also obtain this result using the limiting probability in Equation (8). The probability of each zik given all the other variables is $P(z_{ik} \mid \mathbf{X}, \mathbf{Y}, \mathbf{Z}_{-(ik)}) \propto P(\mathbf{X}, \mathbf{Y} \mid \mathbf{Z}_{-(ik)}, z_{ik})\, P(z_{ik} \mid \mathbf{z}_{-ik})$. We need only condition on z−ik, since the columns of Z are generated independently. 3.2 Sampling other variables Sampling a new column of Z: New columns are columns that do not yet have any entries equal to 1 (empty groups). When sampling for an entity i, we assume it is the last customer in line. Therefore, based on the generative process described in Section 2.1.3, the number of new features is Poisson(α/N). For each new column, we need to sample a new group-specific coefficient variable sk. We can simply sample from the prior distribution given in Equation (11), since the probability P(X, Y|Z, Θ) is not affected by these new columns: no interactions are yet represented by them. Sampling sk for populated columns: Since we do not have a conjugate prior on s, we cannot compute the conditional likelihood directly. We turn to Metropolis-Hastings to sample s. The proposal distribution of a new value s∗k given the old value sk is q(s∗k | sk) = Gamma(h, sk/h), where h is the shape parameter.
The mean of this proposal distribution is the old value sk. The acceptance ratio is
$$A(s_k \to s^*_k) = \min\left[1,\ \frac{P(\mathbf{X}, \mathbf{Y} \mid \mathbf{Z}, \Theta \setminus \{s_k\}, s^*_k)\; p(s^*_k \mid \alpha_s, \beta_s)\; q(s_k \mid s^*_k)}{P(\mathbf{X}, \mathbf{Y} \mid \mathbf{Z}, \Theta)\; p(s_k \mid \alpha_s, \beta_s)\; q(s^*_k \mid s_k)}\right]$$
In our experiments, h is selected so that the average acceptance rate is around 0.25 [5]. Sampling r, µ, σ² and the prior parameters: Closed-form formulas for the posterior distributions of r, µ and σ² can be derived thanks to conjugacy. For example, the posterior distribution of 1/σ² given the other variables is $\mathrm{Gamma}\Big(\alpha_v + \frac{MT}{2},\ \big(\frac{1}{\beta_v} + \frac{\sum_i (\mathbf{x}_i - \bar{\mathbf{x}}_i)^T (\mathbf{x}_i - \bar{\mathbf{x}}_i)}{2}\big)^{-1}\Big)$. Equations for the updates of r and µ are omitted due to lack of space. Gibbs sampling steps are used for σr² and σµ², since we can compute their posterior distributions with conjugate priors. For the prior parameters {αs, βs, αv, βv}, we use Metropolis-Hastings steps as discussed previously. 4 Results We name our method GroupMiR (Group MiRNA target prediction). In this section, we compare the performance of GroupMiR with GenMiR++ [8], which is one of the popular methods for predicting miRNA-mRNA interactions. Unlike our method, however, it does not group mRNAs and attempts to predict each one separately. There are two other important differences between GenMiR++ and our method: 1) GenMiR++ only considers interactions in the candidate set, while our method considers all possible interactions; 2) GenMiR++ accepts a binary matrix as a candidate set, while our method allows continuous-valued scores. To the best of our knowledge, GenMiR++, which uses a regression model for the interaction between entities, is the only appropriate method for comparison.² 4.1 Synthetic data We generated 9 synthetic datasets. Each dataset contains 20 miRNAs and 200 mRNAs. We set the number of groups to K = 5 and T = 10 for all datasets. The miRNA membership matrix V is a random matrix with at most 5 ones in each column. The mRNA membership matrix U is a random matrix with density 0.1. The expression values of the mRNAs are generated from the model in Equation (10) with σ² = 1.
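The Metropolis-Hastings move for sk described in Section 3.2 (a Gamma proposal with mean at the current value) can be sketched in isolation. In this sketch the likelihood term is omitted, so the chain targets only an assumed Gamma(2, 1) prior; this is an illustrative simplification, not the full sampler:

```python
import math
import random

def gamma_logpdf(x, shape, scale):
    """Log density of Gamma(shape, scale) at x > 0."""
    return ((shape - 1) * math.log(x) - x / scale
            - math.lgamma(shape) - shape * math.log(scale))

def mh_step(s, log_target, h, rng):
    """One MH move with proposal q(s*|s) = Gamma(shape=h, scale=s/h), mean s."""
    s_new = rng.gammavariate(h, s / h)
    log_a = (log_target(s_new) - log_target(s)
             + gamma_logpdf(s, h, s_new / h)     # q(s  | s*)
             - gamma_logpdf(s_new, h, s / h))    # q(s* | s)
    if log_a >= 0 or rng.random() < math.exp(log_a):
        return s_new
    return s

rng = random.Random(0)
alpha_s, beta_s = 2.0, 1.0   # assumed hyperparameters, for illustration only
log_target = lambda x: gamma_logpdf(x, alpha_s, beta_s)
s, samples = 1.0, []
for it in range(20000):
    s = mh_step(s, log_target, h=5.0, rng=rng)
    if it >= 2000:                    # discard burn-in
        samples.append(s)
# the chain's mean should be near the Gamma(2, 1) mean of 2
```

Because the Gamma proposal is asymmetric, the two q terms in the log acceptance ratio do not cancel; dropping them would bias the chain.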
The remaining random variables are sampled as follows: y ∼ N(0, 1), s ∼ N(1, 0.1) and r ∼ N(0, 0.1). Because sequence-based predictions of miRNA-mRNA interactions rely on short complementary regions, they often produce many more false positives than false negatives. We thus introduce noise into the true binary interaction matrix C′ by probabilistically flipping each zero entry to 1. We tested noise probabilities of 0.1, 0.2, 0.4 and 0.8. We use C = 2C′ − 1.8, α = 1, and the hyperprior parameters are set to generic values. Our sampler is run for 2000 iterations, with the first 1000 discarded as burn-in.

²We also tested the original IBP (by setting W = 0). The results for both the synthetic and real data were too weak to be comparable with GenMiR++. See Appendix.

Figure 1: The posterior distribution of K. Figure 2: An example synthetic dataset ((a) Truth; (b)–(e) noise levels 0.1, 0.2, 0.4 and 0.8).

Figure 1 plots the estimated posterior distribution of K from the samples of the 9 datasets at all noise levels. When the noise level is small (0.1), the distributions are correctly centered around K = 5. With increasing noise, the number of groups is overestimated; however, GroupMiR still does very well at a noise level of 0.4, and estimates at the highest noise level remain within a reasonable range. We estimated the interaction matrix Z by first ordering the columns of each sampled Z and then selecting the mode of the resulting set of matrices. GenMiR++ returns a score in [0, 1] for each potential interaction; to convert these to binary interactions we tested several threshold cutoffs: 0.5, 0.7 and 0.9. Figure 3 presents several quality measures for the interactions recovered by the two methods. GroupMiR achieves the best F1 score across all noise levels, greatly improving upon GenMiR++ at high noise levels (a reasonable biological scenario).
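The false-positive noise injection used for the synthetic candidate sets can be sketched as follows (the function name is ours; C_true is the binary ground-truth interaction matrix):

```python
import numpy as np

def noisy_candidates(C_true, p_fp, rng):
    """Flip each zero entry of the binary matrix C' to 1 with probability p_fp
    (false positives only), then map to scores C = 2*C' - 1.8 as in the text."""
    C = C_true.astype(float).copy()
    zeros = (C == 0)
    C[zeros] = (rng.random(zeros.sum()) < p_fp).astype(float)
    return 2.0 * C - 1.8
```

True interactions always map to 0.2, while retained zeros map to −1.8, so the candidate scores stay continuous-valued as the model requires.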
In general, precision is very high at all noise levels, while recall drops as noise increases. From a biological point of view, precision is probably more important than recall, since each prediction needs to be experimentally tested, a process that is often time-consuming and expensive. In addition to accurately recovering interactions between miRNAs and mRNAs, GroupMiR also correctly recovers the groupings of mRNAs and miRNAs. Figure 2 presents a graphical view of the group membership in both the true model and the model recovered by GroupMiR for one of the synthetic datasets. As can be seen, our method accurately recovers the groupings of both miRNAs and mRNAs at moderate noise levels (up to 0.4). At the highest noise level (0.8) the method assigns more groups than in the underlying model; however, most interactions are still correctly recovered. These results hold for all datasets we tested (not shown due to lack of space).

Figure 3: Performance of GroupMiR versus GenMiR++, measured by (a) Recall, (b) Accuracy, (c) Precision and (d) F1 Score as a function of the noise rate; each data point is a synthetic dataset.

4.2 Application to mouse lung development

To test our method on real biological data, we used a mouse lung development dataset [6]. In this study, the authors used microarrays to profile both miRNAs and mRNAs at 7 time points covering all recognized stages of lung development. We downloaded the log-ratio normalized data collected in this study. Duplicate samples were averaged, and the median values of all probes were assigned to genes. As suggested in the paper, we used ratios to the last time point, resulting in 6 values for each mRNA and miRNA. Priors for interactions between miRNAs and mRNAs were downloaded from the MicroCosm Target³ database.
Selecting genes with variance in the top 10% led to 219 miRNAs and 1498 mRNAs, which were used for further analysis. We collected 5000 samples of the interaction matrix Z following a 5000-iteration burn-in period. Convergence of the MCMC chain was assessed by monitoring trace plots of K in multiple chains.

³http://www.ebi.ac.uk/enright-srv/microcosm/

Figure 4: Interaction network recovered by GroupMiR (panels (a)–(f), one per group): each node is a pie chart showing its expression values at the 6 time points (red: up-regulation, green: down-regulation).

Since the real data contains many more entries than the synthetic data, we computed a consensus for Z by reordering the columns in each sample and averaging the entries across all matrices. We further analyzed the network constructed from groups with at least 90% posterior probability. The network recovered by GroupMiR is more connected (89 nodes and 208 edges) than the network recovered by GenMiR++ (using an equivalent 0.9 threshold), which has 37 nodes and 34 edges (Appendix). We used Cytoscape [16] to visualize the 6 groups of interactions in Figure 4. The network contains several groups of co-expressed miRNAs controlling sets of mRNAs, in agreement with previous biological studies [20].
To test the function of the identified clusters, we performed Gene Ontology (GO) enrichment analysis for the mRNAs using GOstat [3]. The full (Bonferroni-corrected) results are presented in the Appendix. As can be seen, several cell-division categories are enriched in cluster (b), which is expected in a developing organ (which undergoes several rounds of cell division). Other significant functions include organelle organization and apoptosis, which are also associated with development (cluster (c)). We performed a similar GO enrichment analysis for the GenMiR++ results and for K-means on the same set of mRNAs (setting k = 6, as in our model). In both cases we did not find any significant enrichment, indicating that only by integrating sets of miRNAs with the mRNAs can functional biological groupings be found for this data. See Appendix for details. We also examined the miRNAs controlling the different clusters and found that in a number of cases they agreed with prior knowledge. Cluster (a) includes 2 members of the miR-17-92 cluster, which is known to be critical to lung organogenesis [18]. The miRNA families miR-30, miR-29, miR-20 and miR-16, all identified by our method, were also reported to play roles in the early stages of lung organogenesis [6]. It is important to point out that we did not filter miRNAs explicitly based on expression; these miRNAs appeared in the results because of their strong effect on mRNA expression.

5 Conclusions

We have described an extension to IBP that allows us to integrate priors on interactions between entities with measured properties of the individual entities when constructing interaction networks. The method was successfully applied to predict miRNA-mRNA interactions, and we have shown that it works well on both synthetic and real data. While our focus in this paper was on a biological problem, several other types of datasets, including social networking data, provide similar information.
Our method is appropriate for such datasets and can help when attempting to construct interaction networks from observations.

Acknowledgments

This work is supported in part by NIH grants 1RO1GM085022 and 1U01HL108642 and NSF grant DBI-0965316 to Z.B.J.

References
[1] E.M. Airoldi, D.M. Blei, S.E. Fienberg, and E.P. Xing. Mixed membership stochastic blockmodels. The Journal of Machine Learning Research, 9:1981–2014, 2008.
[2] Z. Bar-Joseph, G.K. Gerber, T.I. Lee, et al. Computational discovery of gene modules and regulatory networks. Nature Biotechnology, 21(11):1337–1342, 2003.
[3] T. Beißbarth and T.P. Speed. GOstat: find statistically overrepresented Gene Ontologies within a group of genes. Bioinformatics, 20(9):1464, 2004.
[4] David M. Blei and Peter Frazier. Distance dependent Chinese restaurant processes. In Johannes Fürnkranz and Thorsten Joachims, editors, ICML, pages 87–94. Omnipress, 2010.
[5] S. Chib and E. Greenberg. Understanding the Metropolis-Hastings algorithm. American Statistician, 49(4):327–335, 1995.
[6] J. Dong, G. Jiang, Y.W. Asmann, S. Tomaszek, et al. MicroRNA networks in mouse lung organogenesis. PLoS ONE, 5(5):4645–4652, 2010.
[7] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. Advances in Neural Information Processing Systems, 18:475, 2006.
[8] J.C. Huang, T. Babak, T.W. Corson, et al. Using expression profiling data to identify human microRNA targets. Nature Methods, 4(12):1045–1049, 2007.
[9] C. Kemp, J.B. Tenenbaum, T.L. Griffiths, et al. Learning systems of concepts with an infinite relational model. In Proc. 21st Natl Conf. Artif. Intell. (1), page 381, 2006.
[10] E. Meeds, Z. Ghahramani, R.M. Neal, and S.T. Roweis. Modeling dyadic data with binary latent factors. Advances in NIPS, 19:977, 2007.
[11] K.T. Miller, T.L. Griffiths, and M.I. Jordan. The phylogenetic Indian buffet process: a nonexchangeable nonparametric prior for latent features. In UAI, 2008.
[12] K.T. Miller, T.L.
Griffiths, and M.I. Jordan. Nonparametric latent feature models for link prediction. In Advances in Neural Information Processing Systems, 2009.
[13] P. Orbanz and J.M. Buhmann. Nonparametric Bayesian image segmentation. International Journal of Computer Vision, 77(1):25–45, 2008.
[14] M.E. Peter. Targeting of mRNAs by multiple miRNAs: the next step. Oncogene, 29(15):2161–2164, 2010.
[15] N. Rajewsky. microRNA target predictions in animals. Nature Genetics, 38:S8–S13, 2006.
[16] P. Shannon, A. Markiel, O. Ozier, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Research, 13(11):2498, 2003.
[17] F. Stingo, Y. Chen, M. Vannucci, et al. A Bayesian graphical modeling approach to microRNA regulatory network inference. Annals of Applied Statistics, 2010.
[18] A. Ventura, A.G. Young, M.M. Winslow, et al. Targeted deletion reveals essential and overlapping functions of the miR-17-92 family of miRNA clusters. Cell, 132:875–886, 2008.
[19] Y.J. Wang and G.Y. Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397):8–19, 1987.
[20] C. Xiao and K. Rajewsky. MicroRNA control in the immune system: basic principles. Cell, 136(1):26–36, 2009.
Learning unbelievable probabilities

Xaq Pitkow, Department of Brain and Cognitive Science, University of Rochester, Rochester, NY 14607, xaq@neurotheory.columbia.edu
Yashar Ahmadian, Center for Theoretical Neuroscience, Columbia University, New York, NY 10032, ya2005@columbia.edu
Ken D. Miller, Center for Theoretical Neuroscience, Columbia University, New York, NY 10032, ken@neurotheory.columbia.edu

Abstract

Loopy belief propagation performs approximate inference on graphical models with loops. One might hope to compensate for the approximation by adjusting model parameters. Learning algorithms for this purpose have been explored previously, and the claim has been made that every set of locally consistent marginals can arise from belief propagation run on a graphical model. On the contrary, here we show that many probability distributions have marginals that cannot be reached by belief propagation using any set of model parameters or any learning algorithm. We call such marginals ‘unbelievable.’ This problem occurs whenever the Hessian of the Bethe free energy is not positive-definite at the target marginals. All learning algorithms for belief propagation necessarily fail in these cases, producing beliefs or sets of beliefs that may even be worse than the pre-learning approximation. We then show that averaging inaccurate beliefs, each obtained from belief propagation using model parameters perturbed about some learned mean values, can achieve the unbelievable marginals.

1 Introduction

Calculating marginal probabilities for a graphical model generally requires summing over exponentially many states, and is NP-hard in general [1]. A variety of approximate methods have been used to circumvent this problem. One popular technique is belief propagation (BP), in particular the sum-product rule, which is a message-passing algorithm for performing inference on a graphical model [2].
Though exact and efficient on trees, it is merely an approximation when applied to graphical models with loops. A natural question is whether one can compensate for the shortcomings of the approximation by setting the model parameters appropriately. In this paper, we prove that some sets of marginals simply cannot be achieved by belief propagation. For these cases we provide a new algorithm that can achieve much better results by using an ensemble of parameters rather than a single instance.

We are given a set of variables x with a given probability distribution P(x) of some data. We would like to construct a model that reproduces certain of its marginal probabilities, in particular those over individual variables, p_i(x_i) = Σ_{x\x_i} P(x) for nodes i ∈ V, and those over some relevant clusters of variables, p_α(x_α) = Σ_{x\x_α} P(x) for α = {i_1, . . . , i_{d_α}}. We will write the collection of all these marginals as a vector p. We assume a model distribution Q_0(x) in the exponential family, taking the form

Q_0(x) = e^{−E(x)}/Z   (1)

with normalization constant Z = Σ_x e^{−E(x)} and energy function

E(x) = −Σ_α θ_α · φ_α(x_α)   (2)

Here, α indexes sets of interacting variables (factors in the factor graph [3]), and x_α is a subset of variables whose interaction is characterized by a vector of sufficient statistics φ_α(x_α) and corresponding natural parameters θ_α. We assume without loss of generality that each φ_α(x_α) is irreducible, meaning that its elements are linearly independent functions of x_α. We collect all these sufficient statistics and natural parameters in the vectors φ and θ. Normally when learning a graphical model, one would fit its parameters so the marginal probabilities match the target. Here, however, we will not use exact inference to compute the marginals. Instead we will use approximate inference via loopy belief propagation to match the target.
2 Learning in Belief Propagation

2.1 Belief propagation

The sum-product algorithm for belief propagation on a graphical model with energy function (2) uses the following equations [4]:

m_{i→α}(x_i) ∝ Π_{β∈N_i\α} m_{β→i}(x_i)
m_{α→i}(x_i) ∝ Σ_{x_α\x_i} e^{θ_α·φ_α(x_α)} Π_{j∈N_α\i} m_{j→α}(x_j)   (3)

where N_i and N_α are the neighbors of node i or factor α in the factor graph. Once these messages converge, the single-node and factor beliefs are given by

b_i(x_i) ∝ Π_{α∈N_i} m_{α→i}(x_i)
b_α(x_α) ∝ e^{θ_α·φ_α(x_α)} Π_{i∈N_α} m_{i→α}(x_i)   (4)

where the beliefs must each be normalized to one. For tree graphs, these beliefs exactly equal the marginals of the graphical model Q_0(x). For loopy graphs, the beliefs at stable fixed points are often good approximations of the marginals. While they are guaranteed to be locally consistent, Σ_{x_α\x_i} b_α(x_α) = b_i(x_i), they are not necessarily globally consistent: there may not exist a single joint distribution B(x) of which the beliefs are the marginals [5]. This is why the resultant beliefs are called pseudomarginals rather than simply marginals. We use a vector b to refer to the set of both node and factor beliefs produced by belief propagation.

2.2 Bethe free energy

Despite its limitations, BP is found empirically to work well in many circumstances. Some theoretical justification for loopy belief propagation emerged with proofs that its stable fixed points are local minima of the Bethe free energy [6, 7]. Free energies are important quantities in machine learning because the Kullback-Leibler divergence between the data and model distributions can be expressed in terms of free energies, so models can be optimized by minimizing free energies appropriately.
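For concreteness, here is a minimal sketch of these sum-product updates specialized to a binary pairwise model (the model class used in Section 3). This is our own toy implementation, not the authors' code; messages are stored per ordered node pair and initialized uniformly:

```python
import numpy as np

def bp_pairwise(h, J, iters=200, damp=0.5):
    """Loopy sum-product for E(x) = -sum_i h_i x_i - sum_(ij) J_ij x_i x_j,
    x_i in {-1,+1}. Returns node beliefs b_i(x_i = +1), cf. Eqs. (3)-(4)."""
    N = len(h)
    x = np.array([-1.0, 1.0])
    m = np.full((N, N, 2), 0.5)          # m[i, j] = message i -> j, indexed by x_j
    for _ in range(iters):
        new = np.full_like(m, 0.5)
        for i in range(N):
            for j in range(N):
                if i == j or J[i, j] == 0.0:
                    continue
                inc = np.exp(h[i] * x)   # local evidence at i, over x_i
                for k in range(N):       # incoming messages to i, excluding j
                    if k != i and k != j and J[k, i] != 0.0:
                        inc = inc * m[k, i]
                msg = np.array([(inc * np.exp(J[i, j] * x * xj)).sum() for xj in x])
                new[i, j] = msg / msg.sum()
        m = damp * m + (1 - damp) * new  # damped update, as in Section 3
    b = np.exp(np.outer(h, x))
    for i in range(N):
        for k in range(N):
            if k != i and J[k, i] != 0.0:
                b[i] = b[i] * m[k, i]
    return (b / b.sum(axis=1, keepdims=True))[:, 1]
```

On trees this recovers exact marginals; on loopy graphs the returned beliefs are the pseudomarginals discussed above.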
Given an energy function E(x) from (2), the Gibbs free energy of a distribution Q(x) is

F[Q] = U[Q] − S[Q]   (5)

where U is the average energy of the distribution,

U[Q] = Σ_x E(x)Q(x) = −Σ_α θ_α · Σ_{x_α} φ_α(x_α) q_α(x_α)   (6)

which depends on the marginals q_α(x_α) of Q(x), and S is the entropy,

S[Q] = −Σ_x Q(x) log Q(x)   (7)

Minimizing the Gibbs free energy F[Q] recovers the distribution Q_0(x) for the graphical model (1). The Bethe free energy F^β is an approximation to the Gibbs free energy,

F^β[Q] = U[Q] − S^β[Q]   (8)

in which the average energy U is exact, but the true entropy S is replaced by an approximation, the Bethe entropy S^β, which is a sum over the factor and node entropies [6]:

S^β[Q] = Σ_α S_α[q_α] + Σ_i (1 − d_i) S_i[q_i]   (9)
S_α[q_α] = −Σ_{x_α} q_α(x_α) log q_α(x_α),  S_i[q_i] = −Σ_{x_i} q_i(x_i) log q_i(x_i)   (10)

The coefficients d_i = |N_i| are the number of factors neighboring node i, and compensate for the overcounting of single-node marginals due to overlapping factor marginals. For tree-structured graphical models, which factorize as Q(x) = Π_α q_α(x_α) Π_i q_i(x_i)^{1−d_i}, the Bethe entropy is exact, and hence so is the Bethe free energy. On loopy graphs, the Bethe entropy S^β isn’t really even an entropy (e.g. it may be negative) because it neglects all statistical dependencies other than those present in the factor marginals. Nonetheless, the Bethe free energy is often close enough to the Gibbs free energy that its minima approximate the true marginals [8]. Since stable fixed points of BP are minima of the Bethe free energy [6, 7], this helped explain why belief propagation is often so successful. To emphasize that the Bethe free energy directly depends only on the marginals and not the joint distribution, we will write F^β[q], where q is a vector of pseudomarginals q_α(x_α) for all α and all x_α.
Pseudomarginal space is the convex set [5] of all q that satisfy the positivity and local consistency constraints,

0 ≤ q_α(x_α) ≤ 1,  Σ_{x_α\x_i} q_α(x_α) = q_i(x_i),  Σ_{x_i} q_i(x_i) = 1   (11)

2.3 Pseudo-moment matching

We now wish to correct for the deficiencies of belief propagation by identifying parameters θ so that BP produces beliefs b matching the true marginals p of the target distribution P(x). Since the fixed points of BP are stationary points of F^β [6], one may simply try to find parameters θ that produce a stationary point in pseudomarginal space at p, which is a necessary condition for BP to reach a stable fixed point there: evaluate the gradient at p, set it to zero, and solve for θ. Note that in principle this gradient could be used to directly minimize the Bethe free energy, but F^β[q] is a complicated function of q that usually cannot be minimized analytically [8]. In contrast, here we are using it to solve for the parameters needed to move beliefs to a target location. This is much easier, since the Bethe free energy is linear in θ. This approach to learning parameters has been described as ‘pseudo-moment matching’ [9, 10, 11]. The L_q-element vector q is an overcomplete representation of the pseudomarginals because it must obey the local consistency constraints (11). It is convenient to express the pseudomarginals in terms of a minimal set of parameters η with the smaller dimensionality L_θ of θ and φ, using an affine transform

q = Wη + k   (12)

where W is an L_q × L_θ rectangular matrix. One example is the expectation parameters η_α = Σ_{x_α} q_α(x_α) φ_α(x_α) [5], giving the energy simply as U = −θ · η.
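One standard way to realize pseudo-moment matching for pairwise models (equivalent, up to reparameterization, to solving for θ from the zero-gradient condition) is to set the potentials directly from the target marginals, ψ_i = p_i and ψ_ij = p_ij/(p_i p_j), so that the target is a BP fixed point. A sketch under that formulation, with our own function name and data layout:

```python
import numpy as np

def pseudo_moment_match(p_i, p_ij):
    """Given target node marginals p_i[i] (length-2) and pairwise marginals
    p_ij[(i, j)] (2x2), return potentials for which the targets are a BP
    fixed point: psi_i = p_i, psi_ij = p_ij / (p_i p_j)."""
    psi_i = {i: np.asarray(v, float) for i, v in p_i.items()}
    psi_ij = {e: np.asarray(joint, float) / np.outer(psi_i[e[0]], psi_i[e[1]])
              for e, joint in p_ij.items()}
    return psi_i, psi_ij
```

If the variables are independent, ψ_ij is all ones; stronger target correlations tilt ψ_ij further from uniform. As the next subsection shows, nothing here guarantees that the resulting fixed point is stable.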
The gradient with respect to those minimal parameters is

∂F^β/∂η = ∂U/∂η − (∂S^β/∂q)(∂q/∂η) = −θ − (∂S^β/∂q) W   (13)

The Bethe entropy gradient is simplest in the overcomplete representation q,

∂S^β/∂q_α(x_α) = −1 − log q_α(x_α),  ∂S^β/∂q_i(x_i) = (−1 − log q_i(x_i))(1 − d_i)   (14)

Setting the gradient (13) to zero, we have a simple linear equation for the parameters θ that tilt the Bethe free energy surface (Figure 1A) enough to place a stationary point at the desired marginals p:

θ = −(∂S^β/∂q)|_p W   (15)

Figure 1: Landscape of the Bethe free energy for the binary graphical model with pairwise interactions. (A) A slice through the Bethe free energy (solid lines) along one axis v_1 of pseudomarginal space, for three different values of the parameters θ. The energy U is linear in the pseudomarginals (dotted lines), so varying the parameters only changes the tilt of the free energy. This can add or remove local minima. (B) The second derivatives of the free energies in (A) are all identical. Where the second derivative is positive, a local minimum can exist (cyan); where it is negative (yellow), no parameters can produce a local minimum. (C) A two-dimensional slice of the Bethe free energy, colored according to the minimum eigenvalue λ_min of the Bethe Hessian. During a run of Bethe wake-sleep learning, the beliefs (blue dots) proceed along v_2 toward the target marginals p. Stable fixed points of BP can exist only in the believable region (cyan), but the target p resides in an unbelievable region (yellow). As learning equilibrates, the stable fixed points jump between believable regions on either side of the unbelievable zone.

2.4 Unbelievable marginals

It is well known that BP may converge on stable fixed points that cannot be realized as marginals of any joint distribution.
In this section we show that the converse is also true: there are some distributions whose marginals cannot be realized as beliefs for any set of couplings. In these cases, existing methods for learning often yield poor results, sometimes even worse than performing no learning at all. This is surprising in view of claims to the contrary: [9, 5] state that belief propagation run after pseudo-moment matching can always reach a fixed point that reproduces the target marginals. While BP does technically have such fixed points, they are not always stable and thus may not be reachable by running belief propagation.

Definition 1. A set of marginals are ‘unbelievable’ if belief propagation cannot converge to them for any set of parameters.

For belief propagation to converge to the target, namely the marginals p, a zero gradient is not sufficient: the Bethe free energy must also be a local minimum [7].¹ This requires a positive-definite Hessian of F^β (the ‘Bethe Hessian’ H) in the subspace of pseudomarginals that satisfies the local consistency constraints. Since the energy U is linear in the pseudomarginals, the Hessian is given by the second derivative of the Bethe entropy,

H = ∂²F^β/∂η² = −Wᵀ (∂²S^β/∂q²) W   (16)

where projection by W constrains the derivatives to the subspace spanned by the minimal parameters η. If this Hessian is positive definite when evaluated at p, then the parameters θ given by (15) give F^β a minimum at the target p. If not, then the target cannot be a stable fixed point of loopy belief propagation. In Section 3, we calculate the Bethe Hessian explicitly for a binary model with pairwise interactions.

Theorem 1. Unbelievable marginal probabilities exist.

Proof. By example. The simplest unbelievable example is a binary graphical model with pairwise interactions between four nodes, x ∈ {−1, +1}⁴, and the energy E(x) = −J Σ_{(ij)} x_i x_j.

¹Even this is not sufficient, but it is necessary.
By symmetry and (1), the marginals of this target P(x) are the same for all nodes and pairs: p_i(x_i) = 1/2 and p_ij(x_i = x_j) = ρ = (2 + 4/(1 + e^{2J} − e^{4J} + e^{6J}))^{−1}. Substituting these marginals into the appropriate Bethe Hessian (22) gives a matrix that has a negative eigenvalue for all ρ > 3/8, or J > 0.316. The associated eigenvector u has the same symmetry as the marginals, with single-node components u_i = ½(−2 + 7ρ − 8ρ² + √(10 − 28ρ + 81ρ² − 112ρ³ + 64ρ⁴)) and pairwise components u_ij = 1. Thus the Bethe free energy does not have a minimum at the marginals of these P(x). Stable fixed points of BP occur only at local minima of the Bethe free energy [7], and so BP cannot reproduce the marginals p for any parameters. Hence these marginals are unbelievable.

Not only do unbelievable marginals exist, but they are actually quite common, as we will see in Section 3. Graphical models with multinomial or gaussian variables and at least two loops always have some pseudomarginals for which the Hessian is not positive definite [12]. On the other hand, all marginals with sufficiently small correlations are believable because they are guaranteed to have a positive-definite Bethe Hessian [12]. Stronger conditions have not yet been described.

2.5 Bethe wake-sleep algorithm

When pseudo-moment matching fails to reproduce unbelievable marginals, an alternative is to use a gradient descent procedure for learning, analogous to the wake-sleep algorithm used to train Boltzmann machines [13]. That original rule can be derived as gradient descent of the Kullback-Leibler divergence D_KL between the target P(x) and the Boltzmann distribution Q_0(x) (1),

D_KL[P||Q_0] = Σ_x P(x) log(P(x)/Q_0(x)) = F[P] − F[Q_0] ≥ 0   (17)

where F is the Gibbs free energy (5). Note that this free energy depends on the same energy function E (2) that defines the Boltzmann distribution Q_0 (1), and achieves its minimal value of −log Z for that distribution.
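The threshold in this proof is easy to check numerically; the sketch below (helper names are ours) evaluates ρ(J) and the unbelievability condition ρ > 3/8:

```python
import numpy as np

def pair_agreement(J):
    """rho = p_ij(x_i = x_j) for the fully connected 4-node model in Theorem 1."""
    return 1.0 / (2.0 + 4.0 / (1.0 + np.exp(2 * J) - np.exp(4 * J) + np.exp(6 * J)))

def is_unbelievable(J):
    """True when the Bethe Hessian at these marginals has a negative eigenvalue,
    i.e. rho > 3/8."""
    return pair_agreement(J) > 3.0 / 8.0
```

At J = 0 this gives ρ = 1/4 (independent spins), and the boundary ρ = 3/8 is crossed just above J = 0.316, matching the value quoted in the proof.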
The Kullback-Leibler divergence is therefore bounded by zero, with equality if and only if P = Q_0. By changing the energy E, and thus Q_0, to decrease this divergence, the graphical model moves closer to the target distribution. Here we use a new cost function, the ‘Bethe divergence’ D^β[p||b], obtained by replacing these free energies with Bethe free energies [14] evaluated at the true marginals p and at the beliefs b obtained from BP stable fixed points,

D^β[p||b] = F^β[p] − F^β[b]   (18)

We use gradient descent to optimize this cost, with gradient

dD^β/dθ = ∂D^β/∂θ + (∂D^β/∂b)(∂b/∂θ)   (19)

The data’s free energy does not depend on the beliefs, so ∂F^β[p]/∂b = 0, and fixed points of belief propagation are stationary points of the Bethe free energy, so ∂F^β[b]/∂b = 0. Consequently ∂D^β/∂b = 0. Furthermore, the entropy terms of the free energies do not depend explicitly on θ, so

dD^β/dθ = ∂U(p)/∂θ − ∂U(b)/∂θ = −η(p) + η(b)   (20)

where η(q) = Σ_x q(x)φ(x) are the expectations of the sufficient statistics φ(x) under the pseudomarginals q. This gradient forms the basis of a simple learning algorithm. At each step in learning, belief propagation is run, obtaining beliefs b for the current parameters θ. The parameters are then changed in the opposite direction of the gradient,

Δθ = −ε dD^β/dθ = ε(η(p) − η(b))   (21)

where ε is a learning rate. This generally increases the Bethe free energy for the beliefs while decreasing that of the data, hopefully allowing BP to draw closer to the data marginals. We call this learning rule the Bethe wake-sleep algorithm. Within this algorithm, there is still the freedom of how to choose initial messages for BP at each learning iteration. The result depends on these initial conditions because BP can have several stable fixed points. One might re-initialize the messages to a fixed starting point for each run of BP, choose random initial messages for each run, or restart the messages where they stopped on the previous learning step.
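The resulting loop is a few lines; the sketch below treats BP as a caller-supplied oracle run_bp(θ) returning the moments η(b) (an assumption about the interface, not the paper's code). On a believable toy problem, a single ±1 spin with η(θ) = tanh(θ), it converges to the exact parameter:

```python
import numpy as np

def bethe_wake_sleep(eta_p, run_bp, theta0, eps=0.2, iters=500):
    """Iterate Eq. (21): theta <- theta + eps * (eta(p) - eta(b)).
    Returns the parameter trajectory; its tail is the ensemble used by eBP."""
    theta = np.asarray(theta0, float).copy()
    traj = []
    for _ in range(iters):
        theta = theta + eps * (eta_p - run_bp(theta))
        traj.append(theta.copy())
    return np.array(traj)
```

For unbelievable targets this trajectory does not settle but keeps cycling, which is exactly the behavior exploited by the ensemble method of the next subsection.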
In our experiments we use the first approach, initializing to constant messages at the beginning of each BP run. The Bethe wake-sleep learning rule sometimes places a minimum of F^β at the true data distribution, such that belief propagation can give the true marginals as one of its (possibly multiple) stable fixed points. However, for the reasons given above, this cannot occur where the Bethe Hessian is not positive definite.

2.6 Ensemble belief propagation

When the Bethe wake-sleep algorithm attempts to learn unbelievable marginals, the parameters and beliefs do not reach a fixed point but instead continue to vary over time (Figure 2A,B). Still, if learning reaches equilibrium, then the temporal average of the beliefs is equal to the unbelievable marginals.

Theorem 2. If the Bethe wake-sleep algorithm reaches equilibrium, then unbelievable marginals are matched by the belief propagation stable fixed points averaged over the equilibrium ensemble of parameters.

Proof. At equilibrium, the time average of the parameter changes is zero by definition, ⟨Δθ⟩_t = 0. Substitution of the Bethe wake-sleep update, Δθ = ε(η(p) − η(b(t))) (21), directly implies that ⟨η(b(t))⟩_t = η(p). The deterministic mapping (12) from the minimal representation to the pseudomarginals gives ⟨b(t)⟩_t = p.

After learning has equilibrated, stable fixed points of belief propagation occur with just the right frequency so that they can be averaged together to reproduce the target distribution exactly (Figure 2C). Note that none of the individual stable fixed points may be close to the true marginals. We call this inference algorithm ensemble belief propagation (eBP). Ensemble BP produces perfect marginals by exploiting a constant, small-amplitude learning, and thus assumes that the correct marginals are perpetually available.
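Inference with the learned ensemble then amounts to averaging fixed-point beliefs; a sketch (helper names ours, with run_bp again a stand-in for a full BP solver):

```python
import numpy as np

def ensemble_bp(theta_samples, run_bp):
    """eBP: average BP beliefs over an ensemble of parameters, e.g. the tail
    of a Bethe wake-sleep trajectory."""
    return np.mean([run_bp(th) for th in theta_samples], axis=0)

def gaussian_parameter_ensemble(theta_samples, n_draws, rng):
    """Replace the stored trajectory by draws from N(theta_bar, Sigma_theta)
    with matched mean and covariance (the learning-free variant)."""
    th = np.asarray(theta_samples, float)
    cov = np.atleast_2d(np.cov(th, rowvar=False))
    return rng.multivariate_normal(th.mean(axis=0), cov, size=n_draws)
```

By Theorem 2 the first function reproduces the target marginals exactly when the samples come from the equilibrium ensemble; the second trades a little accuracy for not needing the target marginals at inference time.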
Yet it also works well when learning is turned off, if parameters are drawn randomly from a gaussian distribution with mean and covariance matched to the equilibrium distribution, θ ∼ N(θ̄, Σ_θ). In the simulations below (Figures 2C–D, 3B–C), Σ_θ was always low-rank, and only one or two principal components were needed for good performance. The gaussian ensemble is not quite as accurate as continued learning (Figure 3B,C), but the performance is still markedly better than any of the available stable fixed points. If the target is not within the convex hull of believable pseudomarginals, then learning cannot reach equilibrium: eventually BP gets as close as it can, but there remains a consistent difference η(p) − η(b), so θ must increase without bound. Though possible in principle, we did not observe this effect in any of our experiments. There may also be no equilibrium if belief propagation at each learning iteration fails to converge.

3 Experiments

The experiments in this section concentrate on the Ising model: N binary variables, x ∈ {−1, +1}^N, with factors comprising individual variables x_i and pairs x_i, x_j. The energy function is E(x) = −Σ_i h_i x_i − Σ_{(ij)} J_ij x_i x_j. The sufficient statistics are then the various first and second moments, x_i and x_i x_j, and the natural parameters are h_i, J_ij. We use this model both for the target distributions and for the model. We parameterize the pseudomarginals as {q_i^+, q_ij^{++}}, where q_i^+ = q_i(x_i = +1) and q_ij^{++} = q_ij(x_i = x_j = +1) [8]. The remaining probabilities are linear functions of these values. The positivity and local consistency constraints then appear as 0 ≤ q_i^+ ≤ 1 and max(0, q_i^+ + q_j^+ − 1) ≤ q_ij^{++} ≤ min(q_i^+, q_j^+). If all the interactions are finite, then the inequality constraints are not active [15].
In this parameterization, the elements of the Bethe Hessian (16) are

−∂²S^β/∂q_i^+ ∂q_j^+ = δ_{i,j}(1 − d_i)[(q_i^+)^{−1} + (1 − q_i^+)^{−1}] + δ_{j∈N_i}[(1 − q_i^+ − q_j^+ + q_ij^{++})^{−1}] + δ_{i,j} Σ_{k∈N_i}[(q_i^+ − q_ik^{++})^{−1} + (1 − q_i^+ − q_k^+ + q_ik^{++})^{−1}]   (22)

Figure 2: Averaging over variable couplings can produce marginals otherwise unreachable by belief propagation. (A) As learning proceeds, the Bethe wake-sleep algorithm causes the parameters θ to converge on a discrete limit cycle when attempting to learn unbelievable marginals. (B) The same limit cycle, projected onto the first two principal components u_1 and u_2 of θ during the cycle. (C) The corresponding beliefs b during the limit cycle (blue circles), projected onto the first two principal components v_1 and v_2 of the trajectory through pseudomarginal space. Believable regions of pseudomarginal space are colored cyan, unbelievable regions yellow, and inconsistent pseudomarginals black. Over the limit cycle, the average beliefs b̄ are precisely equal to the target marginals p. The average b̄ (red +) over many stable fixed points of BP (red dots) generated from randomly perturbed parameters θ̄ + δθ still produces a better approximation of the target marginals than any of the individual believable stable fixed points. (D) Even the best amongst several BP stable fixed points cannot match unbelievable marginals (black and grey); ensemble BP leads to much improved performance (red and pink).
−∂²S_β/∂q+_i ∂q++_jk = −δ_ij [(q+_i − q++_ik)⁻¹ + (1 − q+_i − q+_k + q++_ik)⁻¹]
                       −δ_ik [(q+_i − q++_ij)⁻¹ + (1 − q+_i − q+_j + q++_ij)⁻¹]

−∂²S_β/∂q++_ij ∂q++_kl = δ_{ij,kl} [(q++_ij)⁻¹ + (q+_i − q++_ij)⁻¹ + (q+_j − q++_ij)⁻¹ + (1 − q+_i − q+_j + q++_ij)⁻¹]

Figure 3A shows the fraction of marginals that are unbelievable for 8-node, fully connected Ising models with random coupling parameters h_i ∼ N(0, 1/3) and J_ij ∼ N(0, σ_J). For σ_J ≳ 1/4, most marginals cannot be reproduced by belief propagation with any parameters, because the Bethe Hessian (22) has a negative eigenvalue.

Figure 3: Performance in learning unbelievable marginals. (A) Fraction of marginals that are unbelievable. Marginals were generated from fully connected, 8-node binary models with random biases and pairwise couplings, h_i ∼ N(0, 1/3) and J_ij ∼ N(0, σ_J). (B, C) Performance of five models on 370 unbelievable random target marginals (Section 3), measured with the Bethe divergence D_β[p||b] (B) and the Euclidean distance |p − b| (C). Targets were generated as in (A) with σ_J = 1/3, and selected for unbelievability. Bars represent central quartiles, and the white line indicates the median. The five models are: (i) BP on the graphical model that generated the target distribution, (ii) BP after parameters are set by pseudomoment matching, (iii) the beliefs with the best performance encountered during Bethe wake-sleep learning, (iv) eBP using exact parameters from the last 100 iterations of learning, and (v) eBP with gaussian-distributed parameters with the same first- and second-order statistics as (iv).

We generated 500 Ising model targets using σ_J = 1/3, selected the unbelievable ones, and evaluated the performance of BP and ensemble BP for various methods of choosing the parameters θ.
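To make the Hessian formulas concrete, here is a small illustrative sketch of our own (not the authors' code): it assembles the 3×3 Bethe Hessian for a single Ising pair, nodes i and j with one edge, so d_i = d_j = 1, from equation (22) and its companions, and checks the eigenvalues. For a tree like this the Bethe free energy is convex, so the Hessian should be positive definite at interior pseudomarginals.

```python
import numpy as np

def bethe_hessian_pair(qi, qj, qij):
    """3x3 Bethe Hessian of a two-node Ising model, in variables
    (q+_i, q+_j, q++_ij). Each node has degree d = 1, so the
    (1 - d_i) diagonal term in Eq. (22) vanishes."""
    a = 1.0 / (qi - qij)             # (q+_i - q++_ij)^-1
    b = 1.0 / (qj - qij)             # (q+_j - q++_ij)^-1
    c = 1.0 / (1.0 - qi - qj + qij)  # (1 - q+_i - q+_j + q++_ij)^-1
    return np.array([
        [a + c,     c,        -(a + c)],
        [c,         b + c,    -(b + c)],
        [-(a + c), -(b + c),  1.0 / qij + a + b + c],
    ])

H = bethe_hessian_pair(0.5, 0.5, 0.25)  # independent uniform marginals
print(np.linalg.eigvalsh(H).min() > 0)  # True: positive definite on a tree
```

At (0.5, 0.5, 0.25) the matrix is [[8, 4, −8], [4, 8, −8], [−8, −8, 16]], whose leading minors are all positive; for loopy graphs, by contrast, the analogous Hessian can acquire negative eigenvalues, which is exactly the unbelievability criterion used in Figure 3A.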
Each run of BP used exponential temporal message damping with a time constant of 5 time steps [16], m_{t+1} = a·m_t + (1 − a)·m_undamped with a = e^{−1/5}. Fixed points were declared when messages changed by less than 10⁻⁹ on a single time step. We evaluated BP performance for the actual parameters that generated the target (1), for pseudomoment matching (15), and for the best-matching beliefs obtained at any time during Bethe wake-sleep learning. We also measured eBP performance for two parameter ensembles: the last 100 iterations of Bethe wake-sleep learning, and parameters sampled from a gaussian N(θ̄, Σ_θ) with the same mean and covariance as that ensemble. Belief propagation gave a poor approximation of the target marginals, as expected for a model with many strong loops. Even with learning, BP could never reach the correct marginals, which was guaranteed by our selection of unbelievable targets. Yet ensemble belief propagation gave excellent results. Using the exact parameter ensemble gave orders-of-magnitude improvement, limited by the number of beliefs being averaged. The gaussian parameter ensemble also did much better than even the best results of BP.

4 Discussion

Other studies have also made use of the Bethe Hessian to draw conclusions about belief propagation. For instance, the Hessian reveals that the Ising model's paramagnetic state becomes unstable in BP for large enough couplings [17]. For another example, when the Hessian is positive definite throughout pseudomarginal space, the Bethe free energy is convex and thus BP has a unique stable fixed point [18]. Yet the stronger interpretation appears to be underappreciated: when the Hessian is not positive definite for some pseudomarginals, then BP can never have a stable fixed point there, for any parameters. One might hope that by adjusting the parameters of belief propagation in some systematic way, θ → θ_BP, one could fix the approximation and so perform exact inference.
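The damping and convergence rules just described are easy to state in code. The sketch below is our own illustration with hypothetical message arrays: it applies m_{t+1} = a·m_t + (1 − a)·m_undamped with a = e^{−1/5} and tests the per-step message change against the 10⁻⁹ threshold.

```python
import numpy as np

DAMPING = np.exp(-1.0 / 5.0)   # time constant of 5 BP iterations
TOL = 1e-9                     # fixed-point declaration threshold

def damped_update(m_old, m_undamped, a=DAMPING):
    """One exponentially damped message update; also reports whether a
    fixed point can be declared on this step."""
    m_new = a * m_old + (1.0 - a) * m_undamped
    converged = np.max(np.abs(m_new - m_old)) < TOL
    return m_new, converged

# Toy usage: with a constant undamped message, damping pulls m toward it
# geometrically (deviation shrinks by a factor a per step).
m = np.zeros(3)
target = np.ones(3)
for _ in range(200):
    m, done = damped_update(m, target)
    if done:
        break
print(np.allclose(m, target, atol=1e-6))  # True
```

In a real BP loop, m_undamped would be recomputed from the current messages at each step; here it is held fixed purely to show the damping dynamics.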
In this paper we proved that this is a futile hope, because belief propagation simply can never converge to certain marginals. However, we also provided an algorithm that does work: ensemble belief propagation runs BP with several different parameters, each with different stable fixed points, and averages the results. This approach preserves the locality and scalability which make BP so popular, but corrects for some of its defects at the cost of running the algorithm a few times. Additionally, it raises the possibility that a systematic compensation for the flaws of BP might exist, but only as a mapping from individual parameters to an ensemble of parameters θ → {θ_eBP} that could be used in eBP. An especially clear application of eBP is to discriminative models like Conditional Random Fields [19]. These models are trained so that known inputs produce known inferences, and then generalize to draw novel inferences from novel inputs. When belief propagation is used during learning, the model will fail even on known training examples if they happen to be unbelievable, and overall performance will suffer. Ensemble BP can remedy those training failures and thus allow better performance and more reliable generalization. This paper addressed learning in fully observed models only, where marginals for all variables were available during training. Yet unbelievable marginals exist for models with hidden variables as well. Ensemble BP should work as in the fully observed case, but training will require inference over the hidden variables during both wake and sleep phases. One important inference engine is the brain. When inference is hard, neural computations may resort to approximations, perhaps including belief propagation [20, 21, 22, 23, 24]. It would be undesirable for neural circuits to have big blind spots, i.e. reasonable inferences they cannot draw, yet that is precisely what occurs in BP. By averaging over models with eBP, this blind spot can be eliminated.
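The core of ensemble BP is just the averaging step θ → {θ_k} → b̄ = mean_k b(θ_k). The toy sketch below is ours and substitutes brute-force exact marginals for BP fixed points, purely to keep the example self-contained and runnable; real eBP would run (damped) BP for each ensemble member instead.

```python
import itertools
import numpy as np

def ising_marginals(h, J):
    """Exact q_i(x_i = +1) for a small Ising model, by enumerating all 2^n
    states. (Stands in for a per-parameter BP fixed point in this sketch.)"""
    n = len(h)
    states = list(itertools.product([-1, 1], repeat=n))
    weights = []
    for s in states:
        x = np.array(s, dtype=float)
        weights.append(np.exp(h @ x + x @ J @ x / 2.0))  # J symmetric, zero diagonal
    probs = np.array(weights) / np.sum(weights)
    return np.array([sum(p for p, s in zip(probs, states) if s[i] == 1)
                     for i in range(n)])

rng = np.random.default_rng(0)
h_bar = np.array([0.2, -0.1, 0.3])
J = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])

# Ensemble step: perturb the parameters, infer per member, average the beliefs.
ensemble = [h_bar + 0.3 * rng.standard_normal(3) for _ in range(50)]
b_bar = np.mean([ising_marginals(h_k, J) for h_k in ensemble], axis=0)
print(b_bar.shape)
```

The averaged b̄ can lie outside the set of marginals any single member produces, which is the mechanism by which eBP reaches otherwise unbelievable targets.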
In the brain, synaptic weights fluctuate due to a variety of mechanisms. Perhaps such fluctuations allow averaging over models and thereby reach conclusions unattainable by a deterministic mechanism.

Note added in proof: After submission of this work, [25] presented partially overlapping results showing that some marginals cannot be achieved by belief propagation.

Acknowledgments

The authors thank Greg Wayne for helpful conversations.

References

[1] Cooper G (1990) The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence 42: 393–405.
[2] Pearl J (1988) Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Mateo, CA.
[3] Kschischang F, Frey B, Loeliger H (2001) Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory 47: 498–519.
[4] Bishop C (2006) Pattern Recognition and Machine Learning. Springer, New York.
[5] Wainwright M, Jordan M (2008) Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning 1: 1–305.
[6] Yedidia JS, Freeman WT, Weiss Y (2000) Generalized belief propagation. In: Advances in Neural Information Processing Systems 13. MIT Press, pp. 689–695.
[7] Heskes T (2003) Stable fixed points of loopy belief propagation are minima of the Bethe free energy. Advances in Neural Information Processing Systems 15: 343–350.
[8] Welling M, Teh Y (2001) Belief optimization for binary networks: A stable alternative to loopy belief propagation. In: Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., pp. 554–561.
[9] Wainwright MJ, Jaakkola TS, Willsky AS (2003) Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. In: Artificial Intelligence and Statistics.
[10] Welling M, Teh Y (2003) Approximate inference in Boltzmann machines. Artificial Intelligence 143: 19–50.
[11] Parise S, Welling M (2005) Learning in Markov random fields: An empirical study. In: Joint Statistical Meeting, volume 4.
[12] Watanabe Y, Fukumizu K (2011) Loopy belief propagation, Bethe free energy and graph zeta function. arXiv:1103.0605 [cs.AI].
[13] Hinton G, Sejnowski T (1983) Analyzing cooperative computation. Proceedings of the Fifth Annual Cognitive Science Society, Rochester, NY.
[14] Welling M, Sutton C (2005) Learning in Markov random fields with contrastive free energies. In: Cowell RG, Ghahramani Z, editors, Artificial Intelligence and Statistics. pp. 397–404.
[15] Yedidia J, Freeman W, Weiss Y (2005) Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory 51: 2282–2312.
[16] Mooij J, Kappen H (2005) On the properties of the Bethe approximation and loopy belief propagation on binary networks. Journal of Statistical Mechanics: Theory and Experiment 11: P11012.
[17] Mooij J, Kappen H (2005) Validity estimates for loopy belief propagation on binary real-world networks. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, pp. 945–952.
[18] Heskes T (2004) On the uniqueness of loopy belief propagation fixed points. Neural Computation 16: 2379–2413.
[19] Lafferty J, McCallum A, Pereira F (2001) Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Proceedings of the 18th International Conference on Machine Learning: 282–289.
[20] Litvak S, Ullman S (2009) Cortical circuitry implementing graphical models. Neural Computation 21: 3010–3056.
[21] Steimer A, Maass W, Douglas R (2009) Belief propagation in networks of spiking neurons. Neural Computation 21: 2502–2523.
[22] Ott T, Stoop R (2007) The neurodynamics of belief propagation on binary Markov random fields. In: Advances in Neural Information Processing Systems 19. Cambridge, MA: MIT Press, pp. 1057–1064.
[23] Shon A, Rao R (2005) Implementing belief propagation in neural circuits. Neurocomputing 65–66: 393–399.
[24] George D, Hawkins J (2009) Towards a mathematical theory of cortical micro-circuits. PLoS Computational Biology 5: 1–26.
[25] Heinemann U, Globerson A (2011) What cannot be learned with Bethe approximations. In: Uncertainty in Artificial Intelligence. Corvallis, Oregon: AUAI Press, pp. 319–326.
Relative Density-Ratio Estimation for Robust Distribution Comparison

Makoto Yamada, Tokyo Institute of Technology, yamada@sg.cs.titech.ac.jp
Taiji Suzuki, The University of Tokyo, s-taiji@stat.t.u-tokyo.ac.jp
Takafumi Kanamori, Nagoya University, kanamori@is.nagoya-u.ac.jp
Hirotaka Hachiya and Masashi Sugiyama, Tokyo Institute of Technology, {hachiya@sg. sugi@}cs.titech.ac.jp

Abstract

Divergence estimators based on direct approximation of density-ratios, without going through separate approximation of the numerator and denominator densities, have been successfully applied to machine learning tasks that involve distribution comparison, such as outlier detection, transfer learning, and two-sample homogeneity testing. However, since density-ratio functions often possess high fluctuation, divergence estimation is still a challenging task in practice. In this paper, we propose to use relative divergences for distribution comparison, which involves approximation of relative density-ratios. Since relative density-ratios are always smoother than the corresponding ordinary density-ratios, our proposed method is favorable in terms of the non-parametric convergence speed. Furthermore, we show that the proposed divergence estimator has asymptotic variance independent of the model complexity under a parametric setup, implying that the proposed estimator hardly overfits even with complex models. Through experiments, we demonstrate the usefulness of the proposed approach.

1 Introduction

Comparing probability distributions is a fundamental task in statistical data processing. It can be used for, e.g., outlier detection [1, 2], two-sample homogeneity testing [3, 4], and transfer learning [5, 6]. A standard approach to comparing probability densities p(x) and p′(x) would be to estimate a divergence from p(x) to p′(x), such as the Kullback-Leibler (KL) divergence [7]:

KL[p(x), p′(x)] := E_{p(x)}[log r(x)], where r(x) := p(x)/p′(x),

and E_{p(x)} denotes the expectation over p(x).
A naive way to estimate the KL divergence is to separately approximate the densities p(x) and p′(x) from data and plug the estimated densities into the above definition. However, since density estimation is known to be a hard task [8], this approach does not work well unless a good parametric model is available. Recently, a divergence estimation approach which directly approximates the density-ratio r(x), without going through separate approximation of the densities p(x) and p′(x), has been proposed [9, 10]. Such density-ratio approximation methods were proved to achieve the optimal non-parametric convergence rate in the minimax sense. However, KL divergence estimation via density-ratio approximation is computationally rather expensive due to the non-linearity introduced by the 'log' term. To cope with this problem, another divergence called the Pearson (PE) divergence [11] is useful. The PE divergence is defined as

PE[p(x), p′(x)] := (1/2) E_{p′(x)}[(r(x) − 1)²].

The PE divergence is a squared-loss variant of the KL divergence, and both belong to the class of Ali-Silvey-Csiszár divergences (also known as f-divergences; see [12, 13]). Thus, the PE and KL divergences share similar properties, e.g., they are non-negative and vanish if and only if p(x) = p′(x). Similarly to KL divergence estimation, the PE divergence can also be accurately estimated based on density-ratio approximation [14]: the density-ratio approximator called unconstrained least-squares importance fitting (uLSIF) gives the PE divergence estimator analytically, which can be computed just by solving a system of linear equations. The practical usefulness of the uLSIF-based PE divergence estimator was demonstrated in various applications such as outlier detection [2], two-sample homogeneity testing [4], and dimensionality reduction [15].
In this paper, we first establish the non-parametric convergence rate of the uLSIF-based PE divergence estimator, which elucidates its superior theoretical properties. However, it also reveals that its convergence rate is actually governed by the 'sup'-norm of the true density-ratio function: max_x r(x). This implies that, in regions where the denominator density p′(x) takes small values, the density-ratio r(x) = p(x)/p′(x) tends to take large values and therefore the overall convergence speed becomes slow. More critically, density-ratios can even diverge to infinity under a rather simple setting, e.g., when the ratio of two Gaussian functions is considered [16]. This makes the paradigm of divergence estimation based on density-ratio approximation unreliable. In order to overcome this fundamental problem, we propose an alternative approach to distribution comparison called α-relative divergence estimation. In the proposed approach, we estimate the α-relative divergence, which is the divergence from p(x) to the α-mixture density q_α(x) = αp(x) + (1 − α)p′(x) for 0 ≤ α < 1. For example, the α-relative PE divergence is given by

PE_α[p(x), p′(x)] := PE[p(x), q_α(x)] = (1/2) E_{q_α(x)}[(r_α(x) − 1)²],   (1)

where r_α(x) is the α-relative density-ratio of p(x) and p′(x):

r_α(x) := p(x)/q_α(x) = p(x) / (αp(x) + (1 − α)p′(x)).   (2)

We propose to estimate the α-relative divergence by direct approximation of the α-relative density-ratio. A notable advantage of this approach is that the α-relative density-ratio is always bounded above by 1/α when α > 0, even when the ordinary density-ratio is unbounded. Based on this feature, we theoretically show that the α-relative PE divergence estimator based on α-relative density-ratio approximation is more favorable than the ordinary density-ratio approach in terms of the non-parametric convergence speed.
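The boundedness claim is easy to check numerically. The short sketch below is ours: it evaluates r_α(x) = p(x)/(αp(x) + (1 − α)p′(x)) for two Gaussians whose ordinary ratio blows up in the tails, and confirms r_α ≤ 1/α on a grid.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 2001)
p = gauss_pdf(x, 0.0, 2.0)    # wider numerator density
pp = gauss_pdf(x, 0.0, 1.0)   # narrower denominator density

r = p / pp                    # ordinary ratio: explodes in the tails
alpha = 0.1
r_alpha = p / (alpha * p + (1 - alpha) * pp)

print(r.max() > 1e3)                 # True: effectively unbounded
print(r_alpha.max() <= 1.0 / alpha)  # True: relative ratio bounded by 1/alpha
```

The bound follows algebraically from r_α = (α + (1 − α)/r)⁻¹: as r → ∞ the expression tends to 1/α, so mixing in even a small fraction α of the numerator density smooths away the divergence of the ratio.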
We further prove that, under a correctly specified parametric setup, the asymptotic variance of our α-relative PE divergence estimator does not depend on the model complexity. This means that the proposed α-relative PE divergence estimator hardly overfits even with complex models. Through experiments on outlier detection, two-sample homogeneity testing, and transfer learning, we demonstrate that our proposed α-relative PE divergence estimator compares favorably with alternative approaches.

2 Estimation of Relative Pearson Divergence via Least-Squares Relative Density-Ratio Approximation

Suppose we are given independent and identically distributed (i.i.d.) samples {x_i}_{i=1}^n from a d-dimensional distribution P with density p(x), and i.i.d. samples {x′_j}_{j=1}^{n′} from another d-dimensional distribution P′ with density p′(x). Our goal is to compare the two underlying distributions P and P′ using only the two sets of samples {x_i}_{i=1}^n and {x′_j}_{j=1}^{n′}. In this section, we give a method for estimating the α-relative PE divergence based on direct approximation of the α-relative density-ratio.

Direct Approximation of α-Relative Density-Ratios: Let us model the α-relative density-ratio r_α(x) (2) by the following kernel model:

g(x; θ) := Σ_{ℓ=1}^n θ_ℓ K(x, x_ℓ),

where θ := (θ_1, . . . , θ_n)ᵀ are parameters to be learned from data samples, ᵀ denotes the transpose of a matrix or a vector, and K(x, x′) is a kernel basis function. In the experiments, we use the Gaussian kernel. The parameters θ in the model g(x; θ) are determined so that the following expected squared error J is minimized:

J(θ) := (1/2) E_{q_α(x)}[(g(x; θ) − r_α(x))²]
      = (α/2) E_{p(x)}[g(x; θ)²] + ((1 − α)/2) E_{p′(x)}[g(x; θ)²] − E_{p(x)}[g(x; θ)] + Const.,

where we used r_α(x) q_α(x) = p(x) in the third term.
Approximating the expectations by empirical averages, we obtain the following optimization problem:

θ̂ := argmin_{θ∈R^n} [ (1/2) θᵀĤθ − ĥᵀθ + (λ/2) θᵀθ ],   (3)

where the penalty term λθᵀθ/2 is included for regularization and λ (≥ 0) denotes the regularization parameter. Ĥ and ĥ are defined as

Ĥ_{ℓ,ℓ′} := (α/n) Σ_{i=1}^n K(x_i, x_ℓ) K(x_i, x_ℓ′) + ((1 − α)/n′) Σ_{j=1}^{n′} K(x′_j, x_ℓ) K(x′_j, x_ℓ′),
ĥ_ℓ := (1/n) Σ_{i=1}^n K(x_i, x_ℓ).

It is easy to confirm that the solution of Eq.(3) can be obtained analytically as θ̂ = (Ĥ + λI_n)⁻¹ĥ, where I_n denotes the n-dimensional identity matrix. Finally, a density-ratio estimator is given as

r̂_α(x) := g(x; θ̂) = Σ_{ℓ=1}^n θ̂_ℓ K(x, x_ℓ).

When α = 0, the above method reduces to a direct density-ratio estimator called unconstrained least-squares importance fitting (uLSIF) [14]. Thus, the above method can be regarded as an extension of uLSIF to the α-relative density-ratio; for this reason, we refer to our method as relative uLSIF (RuLSIF). The performance of RuLSIF depends on the choice of the kernel function (the kernel width in the case of the Gaussian kernel) and the regularization parameter λ. Model selection for RuLSIF is possible based on cross-validation (CV) with respect to the squared-error criterion J. Using an estimator of the α-relative density-ratio r_α(x), we can construct estimators of the α-relative PE divergence (1). After a few lines of calculation, we can show that the α-relative PE divergence (1) is equivalently expressed as

PE_α = −(α/2) E_{p(x)}[r_α(x)²] − ((1 − α)/2) E_{p′(x)}[r_α(x)²] + E_{p(x)}[r_α(x)] − 1/2
     = (1/2) E_{p(x)}[r_α(x)] − 1/2.

Note that the middle expression can also be obtained via Legendre-Fenchel convex duality of the divergence functional [17]. Based on these expressions, we consider the following two estimators:

P̂E_α := −(α/2n) Σ_{i=1}^n r̂_α(x_i)² − ((1 − α)/2n′) Σ_{j=1}^{n′} r̂_α(x′_j)² + (1/n) Σ_{i=1}^n r̂_α(x_i) − 1/2,   (4)

P̃E_α := (1/2n) Σ_{i=1}^n r̂_α(x_i) − 1/2.   (5)
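The closed-form solution θ̂ = (Ĥ + λI)⁻¹ĥ and the estimator (4) can be sketched compactly. This is our illustrative implementation, not the authors' released code; the Gaussian kernel width sigma and the regularization parameter lam are fixed by hand here rather than chosen by cross-validation.

```python
import numpy as np

def gauss_kernel(A, C, sigma):
    """Gaussian kernel matrix K[i, l] = exp(-||A_i - C_l||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def rulsif_fit(X, Xp, alpha=0.5, sigma=1.0, lam=0.1):
    """Closed-form RuLSIF, Eq. (3): theta_hat = (H_hat + lam I)^-1 h_hat."""
    n = X.shape[0]
    Kx = gauss_kernel(X, X, sigma)    # numerator samples vs. centers x_1..x_n
    Kxp = gauss_kernel(Xp, X, sigma)  # denominator samples vs. the same centers
    H = alpha * Kx.T @ Kx / n + (1 - alpha) * Kxp.T @ Kxp / Xp.shape[0]
    h = Kx.mean(axis=0)
    return np.linalg.solve(H + lam * np.eye(n), h)

def pe_alpha_hat(X, Xp, theta, alpha, sigma):
    """PE divergence estimator (4), from fitted ratio values r_hat = K theta."""
    r_x = gauss_kernel(X, X, sigma) @ theta
    r_xp = gauss_kernel(Xp, X, sigma) @ theta
    return (-alpha / 2 * np.mean(r_x ** 2)
            - (1 - alpha) / 2 * np.mean(r_xp ** 2)
            + np.mean(r_x) - 0.5)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 1))    # samples from p
Xp = rng.normal(0.5, 1.0, size=(120, 1))   # samples from p'
theta = rulsif_fit(X, Xp, alpha=0.5)
print("PE_alpha estimate:", pe_alpha_hat(X, Xp, theta, 0.5, 1.0))
```

Because the objective (3) is a ridge-regularized quadratic, no iterative optimization is needed; the entire fit is one linear solve, which is the practical appeal of the (Ru)LSIF family.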
We note that the α-relative PE divergence (1) admits further expressions beyond the above ones, and corresponding estimators can be constructed similarly. However, the above two expressions will be particularly useful: the first estimator P̂E_α has superior theoretical properties (see Section 3), and the second one P̃E_α is simple to compute.

3 Theoretical Analysis

In this section, we analyze theoretical properties of the proposed PE divergence estimators. Since our theoretical analysis is highly technical, we focus here on the practical insights that can be gained from the theoretical results; all the mathematical details are described in the supplementary material. For the theoretical analysis, let us consider a rather abstract form of our relative density-ratio estimator, described as

argmin_{g∈G} [ (α/2n) Σ_{i=1}^n g(x_i)² + ((1 − α)/2n′) Σ_{j=1}^{n′} g(x′_j)² − (1/n) Σ_{i=1}^n g(x_i) + (λ/2) R(g)² ],   (6)

where G is some function space (i.e., a statistical model) and R(·) is some regularization functional.

Non-Parametric Convergence Analysis: First, we elucidate the non-parametric convergence rate of the proposed PE estimators. Here, we practically regard the function space G as an infinite-dimensional reproducing kernel Hilbert space (RKHS) [18] such as the Gaussian kernel space, and R(·) as the associated RKHS norm. Let us represent the complexity of the function space G by γ (0 < γ < 2); the larger γ is, the more complex the function class G is (see the supplementary material for its precise definition). We analyze the convergence rate of our PE divergence estimators as n̄ := min(n, n′) tends to infinity, for λ = λ_n̄ under λ_n̄ = o(1) and λ_n̄⁻¹ = o(n̄^{2/(2+γ)}). The first condition means that λ_n̄ tends to zero, while the second means that its shrinking speed should not be too fast.
Under several technical assumptions detailed in the supplementary material, we have the following asymptotic convergence results for the two PE divergence estimators P̂E_α (4) and P̃E_α (5):

P̂E_α − PE_α = O_p( n̄^{−1/2} c ∥r_α∥_∞ + λ_n̄ max(1, R(r_α)²) ),   (7)

P̃E_α − PE_α = O_p( λ_n̄^{1/2} ∥r_α∥_∞^{1/2} max{1, R(r_α)} + λ_n̄ max{1, ∥r_α∥_∞^{(1−γ/2)/2}, R(r_α) ∥r_α∥_∞^{(1−γ/2)/2}, R(r_α)} ),   (8)

where O_p denotes the asymptotic order in probability,

c := (1 + α) √(V_{p(x)}[r_α(x)]) + (1 − α) √(V_{p′(x)}[r_α(x)]),

and V_{p(x)} denotes the variance over p(x): V_{p(x)}[f(x)] = ∫ ( f(x) − ∫ f(x)p(x)dx )² p(x)dx. In both Eq.(7) and Eq.(8), the coefficients of the leading terms (i.e., the first terms) of the asymptotic convergence rates become smaller as ∥r_α∥_∞ gets smaller. Since

∥r_α∥_∞ = ∥ (α + (1 − α)/r(x))⁻¹ ∥_∞ < 1/α for α > 0,

a larger α would be preferable in terms of the asymptotic approximation error. Note that when α = 0, ∥r_α∥_∞ can tend to infinity even under the simple setting where the ratio of two Gaussian functions is considered [16]. Thus, our proposed approach of estimating the α-relative PE divergence (with α > 0) would be more advantageous than the naive approach of estimating the plain PE divergence (which corresponds to α = 0) in terms of the non-parametric convergence rate. The above results also show that P̂E_α and P̃E_α have different asymptotic convergence rates. The leading term in Eq.(7) is of order n̄^{−1/2}, while the leading term in Eq.(8) is of order λ_n̄^{1/2}, which is slightly slower (depending on the complexity γ) than n̄^{−1/2}. Thus, P̂E_α would be more accurate than P̃E_α in large-sample cases. Furthermore, when p(x) = p′(x), we have V_{p(x)}[r_α(x)] = 0 and thus c = 0. The leading term in Eq.(7) then vanishes, so P̂E_α has the even faster convergence rate of order λ_n̄, which is slightly slower (depending on the complexity γ) than n̄⁻¹. Similarly, if α is close to 1, then r_α(x) ≈ 1 and thus c ≈ 0.
When n̄ is not large enough for the terms of o(n̄^{−1/2}) to be negligible, the terms of O(λ_n̄) matter. If ∥r_α∥_∞ and R(r_α) are large (this can happen, e.g., when α is close to 0), the coefficient of the O(λ_n̄) term in Eq.(7) can be larger than that in Eq.(8). In that case P̃E_α would be more favorable than P̂E_α in terms of the approximation accuracy. See the supplementary material for numerical examples illustrating the above theoretical results.

Parametric Variance Analysis: Next, we analyze the asymptotic variance of the PE divergence estimator P̂E_α (4) under a parametric setup. As the function space G in Eq.(6), we consider the following parametric model: G = {g(x; θ) | θ ∈ Θ ⊂ R^b} for a finite b. We assume that this parametric model is correctly specified, i.e., it includes the true relative density-ratio function r_α(x): there exists θ* such that g(x; θ*) = r_α(x). Here, we use RuLSIF without regularization, i.e., λ = 0 in Eq.(6). Let us denote the variance of P̂E_α (4) by V[P̂E_α], where the randomness comes from the draw of samples {x_i}_{i=1}^n and {x′_j}_{j=1}^{n′}. Then, under a standard regularity condition for asymptotic normality [19], V[P̂E_α] can be expressed and upper-bounded as

V[P̂E_α] = V_{p(x)}[ r_α − α r_α(x)²/2 ]/n + V_{p′(x)}[ (1 − α) r_α(x)²/2 ]/n′ + o(n⁻¹, n′⁻¹)   (9)
         ≤ ∥r_α∥_∞²/n + α²∥r_α∥_∞⁴/(4n) + (1 − α)²∥r_α∥_∞⁴/(4n′) + o(n⁻¹, n′⁻¹).   (10)

Let us denote the variance of P̃E_α by V[P̃E_α]. Then, under a standard regularity condition for asymptotic normality [19], the variance of P̃E_α is asymptotically expressed as

V[P̃E_α] = V_{p(x)}[ ( r_α + (1 − α r_α) E_{p(x)}[∇g]ᵀ H_α⁻¹ ∇g )/2 ]/n + V_{p′(x)}[ ( (1 − α) r_α E_{p(x)}[∇g]ᵀ H_α⁻¹ ∇g )/2 ]/n′ + o(n⁻¹, n′⁻¹),   (11)

where ∇g is the gradient vector of g with respect to θ at θ = θ*, and

H_α = α E_{p(x)}[∇g ∇gᵀ] + (1 − α) E_{p′(x)}[∇g ∇gᵀ].

Eq.(9) shows that, up to O(n⁻¹, n′⁻¹), the variance of P̂E_α depends only on the true relative density-ratio r_α(x), not on the estimator of r_α(x).
This means that the model complexity does not affect the asymptotic variance; therefore, overfitting would hardly occur in the estimation of the relative PE divergence even when complex models are used. We note that this superior property applies only to relative PE divergence estimation, not to relative density-ratio estimation. That is, overfitting can occur in relative density-ratio estimation, but the approximation error cancels out in relative PE divergence estimation. On the other hand, Eq.(11) shows that the variance of P̃E_α is affected by the model G, since the factor E_{p(x)}[∇g]ᵀ H_α⁻¹ ∇g depends on the model in general. When the equality E_{p(x)}[∇g]ᵀ H_α⁻¹ ∇g(x; θ*) = r_α(x) holds, the variances of P̃E_α and P̂E_α are asymptotically the same; in general, however, the use of P̂E_α would be more recommended. Eq.(10) shows that the variance V[P̂E_α] can be upper-bounded by a quantity depending on ∥r_α∥_∞, which is monotonically lowered if ∥r_α∥_∞ is reduced. Since ∥r_α∥_∞ monotonically decreases as α increases, our proposed approach of estimating the α-relative PE divergence (with α > 0) would be more advantageous than the naive approach of estimating the plain PE divergence (which corresponds to α = 0) in terms of the parametric asymptotic variance. See the supplementary material for numerical examples illustrating the above theoretical results.

4 Experiments

In this section, we experimentally evaluate the performance of the proposed method on two-sample homogeneity testing, outlier detection, and transfer learning tasks.

Two-Sample Homogeneity Test: First, we apply the proposed divergence estimator to two-sample homogeneity testing. Given two sets of i.i.d. samples X = {x_i}_{i=1}^n ∼ P and X′ = {x′_j}_{j=1}^{n′} ∼ P′, the goal of the two-sample homogeneity test is to test the null hypothesis that the probability distributions P and P′ are the same against its complementary alternative (i.e., that the distributions are different).
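A two-sample permutation test of this kind can be sketched generically. The version below is our simplified stand-in for the procedure of [20], not the authors' exact implementation: it takes an arbitrary divergence estimator `div_est` as the test statistic, and for illustration plugs in a squared mean-difference statistic where a real test would use the RuLSIF-based PE divergence estimator.

```python
import numpy as np

def permutation_test(X, Xp, div_est, n_perm=200, seed=0):
    """Permutation p-value for H0: P = P'. div_est(A, B) -> scalar divergence."""
    rng = np.random.default_rng(seed)
    stat = div_est(X, Xp)
    pooled = np.vstack([X, Xp])
    n = X.shape[0]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled.shape[0])
        # Re-split the pooled sample at random, as if H0 were true.
        if div_est(pooled[perm[:n]], pooled[perm[n:]]) >= stat:
            count += 1
    return (count + 1) / (n_perm + 1)  # small-sample-corrected p-value

# Placeholder statistic: squared distance between sample means.
mean_diff = lambda A, B: float(np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(0, 1, size=(50, 2))
Xp = rng.normal(0, 1, size=(50, 2))
p_value = permutation_test(X, Xp, mean_diff)
print(0.0 < p_value <= 1.0)  # True
```

The permutation step is valid because under the null the pooled sample is exchangeable, so any re-split is as likely as the observed one; the test then rejects when the observed divergence is extreme relative to the permutation distribution.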
By using an estimator D̂iv of some divergence between the two distributions P and P′, the homogeneity of the two distributions can be tested based on the permutation test procedure [20].

Table 1: Experimental results of the two-sample test. The mean (and standard deviation in brackets) of the rate of accepting the null hypothesis (i.e., P = P′) on the IDA benchmark repository under the significance level 5% is reported. Left: both sets of samples are taken from the positive training set (i.e., the null hypothesis is correct); methods whose mean acceptance rate is comparable to 0.95 according to the one-sample t-test at the significance level 5% are shown in bold face. Right: the set of samples corresponding to the numerator of the density-ratio is taken from the positive training set, while the set corresponding to the denominator is taken from the positive and negative training sets (i.e., the null hypothesis is not correct); the best method, having the lowest mean acceptance rate, and methods comparable to it according to the two-sample t-test at the significance level 5% are shown in bold face.
                    |                 P = P′                     |                 P ≠ P′
Dataset    d  n=n′  | MMD       LSTT      LSTT      LSTT         | MMD       LSTT      LSTT      LSTT
                    |           (α=0.0)   (α=0.5)   (α=0.95)     |           (α=0.0)   (α=0.5)   (α=0.95)
banana     2  100   | .96 (.20) .93 (.26) .92 (.27) .92 (.27)    | .52 (.50) .10 (.30) .02 (.14) .17 (.38)
thyroid    5   19   | .96 (.20) .95 (.22) .95 (.22) .88 (.33)    | .52 (.50) .81 (.39) .65 (.48) .80 (.40)
titanic    5   21   | .94 (.24) .86 (.35) .92 (.27) .89 (.31)    | .87 (.34) .86 (.35) .87 (.34) .88 (.33)
diabetes   8   85   | .96 (.20) .87 (.34) .91 (.29) .82 (.39)    | .31 (.46) .42 (.50) .47 (.50) .57 (.50)
b-cancer   9   29   | .98 (.14) .91 (.29) .94 (.24) .92 (.27)    | .87 (.34) .75 (.44) .80 (.40) .79 (.41)
f-solar    9  100   | .93 (.26) .91 (.29) .95 (.22) .93 (.26)    | .51 (.50) .81 (.39) .55 (.50) .66 (.48)
heart     13   38   | 1.00(.00) .85 (.36) .91 (.29) .93 (.26)    | .53 (.50) .28 (.45) .40 (.49) .62 (.49)
german    20  100   | .99 (.10) .91 (.29) .92 (.27) .89 (.31)    | .56 (.50) .55 (.50) .44 (.50) .68 (.47)
ringnorm  20  100   | .97 (.17) .93 (.26) .91 (.29) .85 (.36)    | .00 (.00) .00 (.00) .00 (.00) .02 (.14)
waveform  21   66   | .98 (.14) .92 (.27) .93 (.26) .88 (.33)    | .00 (.00) .00 (.00) .02 (.14) .00 (.00)

When an asymmetric divergence such as the KL divergence [7] or the PE divergence [11] is adopted for the two-sample test, the test results depend on the choice of direction: a divergence from P to P′ or from P′ to P. [4] proposed to choose the direction that gives the smaller p-value. It was experimentally shown that, when the uLSIF-based PE divergence estimator is used for the two-sample test (which is called the least-squares two-sample test; LSTT), this heuristic contributes to reducing the type-II error (the probability of accepting an incorrect null hypothesis, i.e., judging two distributions to be the same when they are actually different), while the increase of the type-I error (the probability of rejecting a correct null hypothesis, i.e., judging two distributions to be different when they are actually the same) is kept moderate.
We apply the proposed method to the binary classification datasets taken from the IDA benchmark repository [21]. We test LSTT with the RuLSIF-based PE divergence estimator for α = 0, 0.5, and 0.95; we also test the maximum mean discrepancy (MMD) [22], a kernel-based two-sample test method. The performance of MMD depends on the choice of the Gaussian kernel width; here we adopt the version proposed by [23], which automatically optimizes the Gaussian kernel width. The p-values of MMD are computed in the same way as for LSTT, based on the permutation test procedure. First, we investigate the rate of accepting the null hypothesis when the null hypothesis is correct (i.e., the two distributions are the same). We split all the positive training samples into two sets and perform the two-sample test on them. The experimental results are summarized in the left half of Table 1, showing that LSTT with α = 0.5 compares favorably with α = 0, α = 0.95, and MMD in terms of the type-I error. Next, we consider the situation where the null hypothesis is not correct (i.e., the two distributions are different). The numerator samples are generated in the same way as above, but half of the denominator samples are replaced with negative training samples. Thus, while the numerator sample set contains only positive training samples, the denominator sample set includes both positive and negative training samples. The experimental results are summarized in the right half of Table 1, showing that LSTT with α = 0.5 again compares favorably with α = 0 and α = 0.95. Furthermore, LSTT with α = 0.5 tends to outperform MMD in terms of the type-II error. Overall, LSTT with α = 0.5 is shown to be a useful method for two-sample homogeneity testing. See the supplementary material for more experimental evaluation.

Inlier-Based Outlier Detection: Next, we apply the proposed method to outlier detection.
Let us consider an outlier detection problem of finding irregular samples in a dataset (called an "evaluation dataset") based on another dataset (called a "model dataset") that contains only regular samples. Defining the density-ratio over the two sets of samples, we can see that the density-ratio values for regular samples are close to one, while those for outliers tend to deviate significantly from one. Thus, density-ratio values can be used as an index of the degree of outlyingness [1, 2]. Since the evaluation dataset usually has a wider support than the model dataset, we regard the evaluation dataset as samples corresponding to the denominator density p′(x), and the model dataset as samples corresponding to the numerator density p(x). Then outliers tend to have small density-ratio values (i.e., close to zero), so density-ratio approximators can be used for outlier detection.

We evaluate the proposed method on various datasets: the IDA benchmark repository [21], an in-house French speech dataset, the 20 Newsgroups dataset, and the USPS hand-written digit dataset (detailed specifications of the datasets are given in the supplementary material). We compare the area under the ROC curve (AUC) [24] of RuLSIF with α = 0, 0.5, and 0.95 against a one-class support vector machine (OSVM) with the Gaussian kernel [25], using the LIBSVM implementation of OSVM [26]. The Gaussian width is set to the median distance between samples, which has been shown to be a useful heuristic [25]. Since there is no systematic method to determine the tuning parameter ν in OSVM, we report results for ν = 0.05 and 0.1.

Table 2: Experimental results of outlier detection. Mean AUC score (and standard deviation in brackets) over 100 trials. The best method (highest mean AUC score) and methods comparable to it according to the two-sample t-test at the significance level 5% are specified in bold face. The datasets are sorted in ascending order of the input dimensionality d.

Datasets            d    OSVM        OSVM        RuLSIF      RuLSIF      RuLSIF
                         (ν=0.05)    (ν=0.1)     (α=0)       (α=0.5)     (α=0.95)
IDA:banana          2    .668 (.105) .676 (.120) .597 (.097) .619 (.101) .623 (.115)
IDA:thyroid         5    .760 (.148) .782 (.165) .804 (.148) .796 (.178) .722 (.153)
IDA:titanic         5    .757 (.205) .752 (.191) .750 (.182) .701 (.184) .712 (.185)
IDA:diabetes        8    .636 (.099) .610 (.090) .594 (.105) .575 (.105) .663 (.112)
IDA:breast-cancer   9    .741 (.160) .691 (.147) .707 (.148) .737 (.159) .733 (.160)
IDA:flare-solar     9    .594 (.087) .590 (.083) .626 (.102) .612 (.100) .584 (.114)
IDA:heart           13   .714 (.140) .694 (.148) .748 (.149) .769 (.134) .726 (.127)
IDA:german          20   .612 (.069) .604 (.084) .605 (.092) .597 (.101) .605 (.095)
IDA:ringnorm        20   .991 (.012) .993 (.007) .944 (.091) .971 (.062) .992 (.010)
IDA:waveform        21   .812 (.107) .843 (.123) .879 (.122) .875 (.117) .885 (.102)
Speech              50   .788 (.068) .830 (.060) .804 (.101) .821 (.076) .836 (.083)
20News ('rec')      100  .598 (.063) .593 (.061) .628 (.105) .614 (.093) .767 (.100)
20News ('sci')      100  .592 (.069) .589 (.071) .620 (.094) .609 (.087) .704 (.093)
20News ('talk')     100  .661 (.084) .658 (.084) .672 (.117) .670 (.102) .823 (.078)
USPS (1 vs. 2)      256  .889 (.052) .926 (.037) .848 (.081) .878 (.088) .898 (.051)
USPS (2 vs. 3)      256  .823 (.053) .835 (.050) .803 (.093) .818 (.085) .879 (.074)
USPS (3 vs. 4)      256  .901 (.044) .939 (.031) .950 (.056) .961 (.041) .984 (.016)
USPS (4 vs. 5)      256  .871 (.041) .890 (.036) .857 (.099) .874 (.082) .941 (.031)
USPS (5 vs. 6)      256  .825 (.058) .859 (.052) .863 (.078) .867 (.068) .901 (.049)
USPS (6 vs. 7)      256  .910 (.034) .950 (.025) .972 (.038) .984 (.018) .994 (.010)
USPS (7 vs. 8)      256  .938 (.030) .967 (.021) .941 (.053) .951 (.039) .980 (.015)
USPS (8 vs. 9)      256  .721 (.072) .728 (.073) .721 (.084) .728 (.083) .761 (.096)
USPS (9 vs. 0)      256  .920 (.037) .966 (.023) .982 (.048) .989 (.022) .994 (.011)
The mean and standard deviation of the AUC scores over 100 runs with random sample choice are summarized in Table 2, showing that RuLSIF overall compares favorably with OSVM. Among the RuLSIF methods, small α tends to perform well for low-dimensional datasets, and large α tends to work well for high-dimensional datasets.

Transfer Learning: Finally, we apply the proposed method to transfer learning. Let us consider a transductive transfer learning setup where labeled training samples {(x_j^tr, y_j^tr)}_{j=1}^{n_tr} drawn i.i.d. from p(y|x) p_tr(x) and unlabeled test samples {x_i^te}_{i=1}^{n_te} drawn i.i.d. from p_te(x) (which is generally different from p_tr(x)) are available. The use of exponentially-weighted importance weighting was shown to be useful for adaptation from p_tr(x) to p_te(x) [5]:

\min_{f \in \mathcal{F}} \frac{1}{n_{\mathrm{tr}}} \sum_{j=1}^{n_{\mathrm{tr}}} \left( \frac{p_{\mathrm{te}}(x_j^{\mathrm{tr}})}{p_{\mathrm{tr}}(x_j^{\mathrm{tr}})} \right)^{\tau} \mathrm{loss}(y_j^{\mathrm{tr}}, f(x_j^{\mathrm{tr}})),

where f(x) is a learned function and 0 ≤ τ ≤ 1 is the exponential flattening parameter. τ = 0 corresponds to plain empirical-error minimization, which is statistically efficient, while τ = 1 corresponds to importance-weighted empirical-error minimization, which is statistically consistent; 0 < τ < 1 gives an intermediate estimator that balances the trade-off between statistical efficiency and consistency. τ can be determined by importance-weighted cross-validation [6] in a data-dependent fashion. However, a potential drawback is that estimation of r(x) (i.e., τ = 1) is rather hard, as shown in this paper. Here we propose to use relative importance weights instead:

\min_{f \in \mathcal{F}} \frac{1}{n_{\mathrm{tr}}} \sum_{j=1}^{n_{\mathrm{tr}}} \frac{p_{\mathrm{te}}(x_j^{\mathrm{tr}})}{(1-\alpha)\, p_{\mathrm{te}}(x_j^{\mathrm{tr}}) + \alpha\, p_{\mathrm{tr}}(x_j^{\mathrm{tr}})} \, \mathrm{loss}(y_j^{\mathrm{tr}}, f(x_j^{\mathrm{tr}})).

We apply the above transfer learning technique to human activity recognition using accelerometer data. Subjects were asked to perform a specific task such as walking, running, or bicycle riding; the data were collected with an iPod touch. The duration of each task was arbitrary and the sampling rate was 20Hz with small variations (the detailed experimental setup is explained in the supplementary material). Let us consider a situation where a new user wants to use the activity recognition system but is not willing to label his/her accelerometer data due to troublesomeness, so no labeled sample is available for the new user. On the other hand, unlabeled samples for the new user and labeled data obtained from existing users are available. Let the labeled training data {(x_j^tr, y_j^tr)}_{j=1}^{n_tr} be the labeled accelerometer data for 20 existing users; each user has at most 100 labeled samples for each action. Let the unlabeled test data {x_i^te}_{i=1}^{n_te} be unlabeled accelerometer data obtained from the new user. The experiments are repeated 100 times with different sample choices for n_tr = 500 and n_te = 200.

Table 3: Experimental results of transfer learning in human activity recognition. Mean classification accuracy (and standard deviation in brackets) over 100 runs for activity recognition of a new user. We compare plain kernel logistic regression (KLR) without importance weights, KLR with relative importance weights (RIW-KLR), KLR with exponentially-weighted importance weights (EIW-KLR), and KLR with plain importance weights (IW-KLR). The method with the highest mean classification accuracy and methods comparable to it according to the two-sample t-test at the significance level 5% are specified in bold face.

Task               KLR (α=0, τ=0)   RIW-KLR (α=0.5)   EIW-KLR (τ=0.5)   IW-KLR (α=1, τ=1)
Walks vs. run      0.803 (0.082)    0.889 (0.035)     0.882 (0.039)     0.882 (0.035)
Walks vs. bicycle  0.880 (0.025)    0.892 (0.035)     0.867 (0.054)     0.854 (0.070)
Walks vs. train    0.985 (0.017)    0.992 (0.008)     0.989 (0.011)     0.983 (0.021)
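To make the relative importance-weighted objective concrete, here is a minimal sketch that plugs the weight w(x) = p_te(x) / ((1−α) p_te(x) + α p_tr(x)) into a weighted logistic regression under a synthetic one-dimensional covariate shift. The densities are known in this toy example; in practice they would be replaced by a RuLSIF-style estimate, and the learner, step size, and shift parameters are all hypothetical choices.

```python
import numpy as np

def relative_weights(pte, ptr, alpha):
    # w = pte / ((1 - alpha) * pte + alpha * ptr), as in the text:
    # alpha = 0 gives uniform weights (plain KLR), alpha = 1 gives
    # the plain importance weight pte / ptr (IW-KLR).
    return pte / ((1 - alpha) * pte + alpha * ptr)

def weighted_logreg(x, y, w, lr=0.1, steps=500):
    # Minimise sum_j w_j * loss(y_j, f(x_j)) for the logistic loss
    # by plain gradient descent; labels y are in {0, 1}.
    xb = np.hstack([x, np.ones((len(x), 1))])
    theta = np.zeros(xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-xb @ theta))
        theta -= lr * xb.T @ (w * (p - y)) / len(x)
    return theta

rng = np.random.default_rng(0)
x_tr = rng.normal(0.0, 1.0, size=(300, 1))             # training inputs at 0
y_tr = (x_tr[:, 0] + 0.3 * rng.normal(size=300) > 0.75).astype(float)
gauss = lambda x, m: np.exp(-(x - m) ** 2 / 2) / np.sqrt(2 * np.pi)
w = relative_weights(gauss(x_tr[:, 0], 1.5),           # test density at 1.5
                     gauss(x_tr[:, 0], 0.0), alpha=0.5)
theta = weighted_logreg(x_tr, y_tr, w)
```

Intermediate α values interpolate between the unweighted and the plain importance-weighted fits, up-weighting training points that lie where the test density is high without letting the weights blow up.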
The classification accuracies on 800 test samples from the new user (which are different from the 200 unlabeled samples) are summarized in Table 3, showing that the proposed method using relative importance weights with α = 0.5 works better than the other methods.

5 Conclusion

In this paper, we proposed to use a relative divergence for robust distribution comparison. We gave a computationally efficient method for estimating the relative Pearson divergence based on direct relative density-ratio approximation. We theoretically elucidated the convergence rate of the proposed divergence estimator in the non-parametric setup, which showed that the proposed approach of estimating the relative Pearson divergence is preferable to the existing approach of estimating the plain Pearson divergence. Furthermore, we proved that the asymptotic variance of the proposed divergence estimator is independent of the model complexity under a correctly-specified parametric setup; thus, the proposed divergence estimator hardly overfits even with complex models. Experimentally, we demonstrated the practical usefulness of the proposed divergence estimator in two-sample homogeneity testing, inlier-based outlier detection, and transfer learning tasks. Beyond these three applications, density-ratios can be useful for tackling various machine learning problems, for example, multi-task learning, independence testing, feature selection, causal inference, independent component analysis, dimensionality reduction, unpaired data matching, clustering, conditional density estimation, and probabilistic classification. It would thus be promising to explore further applications of the proposed relative density-ratio approximator.
Acknowledgments

MY was supported by the JST PRESTO program; TS was partially supported by MEXT KAKENHI 22700289 and the Aihara Project, the FIRST program from JSPS, initiated by CSTP; TK was partially supported by a Grant-in-Aid for Young Scientists (20700251); HH was supported by the FIRST program; and MS was partially supported by SCAT, AOARD, and the FIRST program.

References

[1] A. J. Smola, L. Song, and C. H. Teo. Relative novelty detection. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS 2009), pages 536–543, 2009.
[2] S. Hido, Y. Tsuboi, H. Kashima, M. Sugiyama, and T. Kanamori. Statistical outlier detection using direct density ratio estimation. Knowledge and Information Systems, 26(2):309–336, 2011.
[3] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems 19, pages 513–520. MIT Press, Cambridge, MA, 2007.
[4] M. Sugiyama, T. Suzuki, Y. Itoh, T. Kanamori, and M. Kimura. Least-squares two-sample test. Neural Networks, 24(7):735–751, 2011.
[5] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
[6] M. Sugiyama, M. Krauledat, and K.-R. Müller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8:985–1005, 2007.
[7] S. Kullback and R. A. Leibler. On information and sufficiency. Annals of Mathematical Statistics, 22:79–86, 1951.
[8] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, NY, 1998.
[9] M. Sugiyama, T. Suzuki, S. Nakajima, H. Kashima, P. von Bünau, and M. Kawanabe. Direct importance estimation for covariate shift adaptation. Annals of the Institute of Statistical Mathematics, 60:699–746, 2008.
[10] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847–5861, 2010.
[11] K. Pearson. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philosophical Magazine, 50:157–175, 1900.
[12] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. Journal of the Royal Statistical Society, Series B, 28:131–142, 1966.
[13] I. Csiszár. Information-type measures of difference of probability distributions and indirect observation. Studia Scientiarum Mathematicarum Hungarica, 2:229–318, 1967.
[14] T. Kanamori, S. Hido, and M. Sugiyama. A least-squares approach to direct importance estimation. Journal of Machine Learning Research, 10:1391–1445, 2009.
[15] T. Suzuki and M. Sugiyama. Sufficient dimension reduction via squared-loss mutual information estimation. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), pages 804–811, 2010.
[16] C. Cortes, Y. Mansour, and M. Mohri. Learning bounds for importance weighting. In Advances in Neural Information Processing Systems 23, pages 442–450, 2010.
[17] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970.
[18] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.
[19] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 2000.
[20] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall, New York, NY, 1993.
[21] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287–320, 2001.
[22] K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Schölkopf, and A. J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49–e57, 2006.
[23] B. Sriperumbudur, K. Fukumizu, A. Gretton, G. Lanckriet, and B. Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems 22, pages 1750–1758, 2009.
[24] A. P. Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30:1145–1159, 1997.
[25] B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[26] C.-C. Chang and C.-J. Lin. LIBSVM: A Library for Support Vector Machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
The Manifold Tangent Classifier

Salah Rifai, Yann N. Dauphin, Pascal Vincent, Yoshua Bengio, Xavier Muller
Department of Computer Science and Operations Research, University of Montreal, Montreal, H3C 3J7
{rifaisal, dauphiya, vincentp, bengioy, mullerx}@iro.umontreal.ca

Abstract

We combine three important ideas present in previous work for building classifiers: the semi-supervised hypothesis (the input distribution contains information about the classifier), the unsupervised manifold hypothesis (data density concentrates near low-dimensional manifolds), and the manifold hypothesis for classification (different classes correspond to disjoint manifolds separated by low density). We exploit a novel algorithm for capturing manifold structure (high-order contractive auto-encoders) and we show how it builds a topological atlas of charts, each chart being characterized by the principal singular vectors of the Jacobian of a representation mapping. This representation learning algorithm can be stacked to yield a deep architecture, and we combine it with a domain-knowledge-free version of the TangentProp algorithm to encourage the classifier to be insensitive to local direction changes along the manifold. Record-breaking classification results are obtained.

1 Introduction

Much of machine learning research can be viewed as an exploration of ways to compensate for scarce prior knowledge about how to solve a specific task by extracting (usually implicit) knowledge from vast amounts of data. This is especially true of the search for generic learning algorithms that are to perform well on a wide range of domains for which they were not specifically tailored. While such an outlook precludes using much domain-specific knowledge in designing the algorithms, it can however be beneficial to leverage what might be called "generic" prior hypotheses that appear likely to hold for a wide range of problems. The approach studied in the present work exploits three such prior hypotheses: 1.
The semi-supervised learning hypothesis, according to which learning aspects of the input distribution p(x) can improve models of the conditional distribution of the supervised target p(y|x), i.e., p(x) and p(y|x) share something (Lasserre et al., 2006). This hypothesis underlies not only the strict semi-supervised setting, where one has many more unlabeled examples at one's disposal than labeled ones, but also the successful unsupervised pre-training approach for learning deep architectures, which has been shown to significantly improve supervised performance even without using additional unlabeled examples (Hinton et al., 2006; Bengio, 2009; Erhan et al., 2010). 2. The (unsupervised) manifold hypothesis, according to which real-world data presented in high-dimensional spaces is likely to concentrate in the vicinity of non-linear sub-manifolds of much lower dimensionality (Cayton, 2005; Narayanan and Mitter, 2010). 3. The manifold hypothesis for classification, according to which points of different classes are likely to concentrate along different sub-manifolds, separated by low-density regions of the input space.

The recently proposed Contractive Auto-Encoder (CAE) algorithm (Rifai et al., 2011a), based on the idea of encouraging the learned representation to be robust to small variations of the input, was shown to be very effective for unsupervised feature learning. Its successful application in the pre-training of deep neural networks is yet another illustration of what can be gained by adopting hypothesis 1. In addition, Rifai et al. (2011a) propose, and show empirical evidence for, the hypothesis that the trade-off between reconstruction error and the pressure to be insensitive to variations in input space has an interesting consequence: it yields a mostly contractive mapping that, locally around each training point, remains substantially sensitive only to a few input directions (with different directions of sensitivity for different training points).
This is taken as evidence that the algorithm indirectly exploits hypothesis 2 and models a lower-dimensional manifold. Most of the directions to which the representation is substantially sensitive are thought to be directions tangent to the data-supporting manifold (those that locally define its tangent space). The present work follows through on this interpretation, and investigates whether it is possible to use this information, presumably captured about manifold structure, to further improve classification performance by leveraging hypothesis 3. To that end, we extract a set of basis vectors for the local tangent space at each training point from the Contractive Auto-Encoder's learned parameters. This is obtained with a Singular Value Decomposition (SVD) of the Jacobian of the encoder that maps each input to its learned representation. Based on hypothesis 3, we then adopt the "generic prior" that class labels are likely to be insensitive to most directions within these local tangent spaces (e.g., small translations, rotations, or scalings usually do not change an image's class). Supervised classification algorithms that have been devised to efficiently exploit tangent directions given as domain-specific prior knowledge (Simard et al., 1992, 1993) can readily be used instead with our learned tangent spaces. In particular, we will show record-breaking improvements by using TangentProp for fine-tuning CAE-pre-trained deep neural networks. To the best of our knowledge, this is the first time that the implicit relationship between an unsupervised learned mapping and the tangent space of a manifold is rendered explicit and successfully exploited for the training of a classifier. This showcases a unified approach that simultaneously leverages all three "generic" prior hypotheses considered. Our experiments (see Section 6) show that this approach sets new records for domain-knowledge-free performance on several real-world classification problems.
Remarkably, in some cases it even outperformed methods that use weak or strong domain-specific prior knowledge (e.g., convolutional networks and tangent distance based on a priori known transformations). Naturally, this approach is even more likely to be beneficial for datasets where no prior knowledge is readily available.

2 Contractive auto-encoders (CAE)

We consider the problem of the unsupervised learning of a non-linear feature extractor from a dataset D = {x_1, . . . , x_n}. Examples x_i ∈ R^d are i.i.d. samples from an unknown distribution p(x).

2.1 Traditional auto-encoders

The auto-encoder framework is one of the oldest and simplest techniques for the unsupervised learning of non-linear feature extractors. It learns an encoder function h, which maps an input x ∈ R^d to a hidden representation h(x) ∈ R^{d_h}, jointly with a decoder function g, which maps h back to the input space as r = g(h(x)), the reconstruction of x. The encoder and decoder's parameters θ are learned by stochastic gradient descent to minimize the average reconstruction error L(x, g(h(x))) over the examples of the training set. The objective being minimized is

J_{AE}(θ) = \sum_{x ∈ D} L(x, g(h(x))).   (1)

We will use the most common forms of encoder, decoder, and reconstruction error:

Encoder: h(x) = s(Wx + b_h), where s is the element-wise logistic sigmoid s(z) = 1 / (1 + e^{-z}). Parameters are a d_h × d weight matrix W and a bias vector b_h ∈ R^{d_h}.

Decoder: r = g(h(x)) = s_2(W^T h(x) + b_r). Parameters are W^T (tied weights, shared with the encoder) and a bias vector b_r ∈ R^d. The activation function s_2 is either a logistic sigmoid (s_2 = s) or the identity (linear decoder).

Loss function: either the squared error L(x, r) = ||x − r||^2, or the Bernoulli cross-entropy L(x, r) = − \sum_{i=1}^{d} [ x_i log(r_i) + (1 − x_i) log(1 − r_i) ].

The set of parameters of such an auto-encoder is θ = {W, b_h, b_r}. Historically, auto-encoders were primarily viewed as a technique for dimensionality reduction, where a narrow bottleneck (i.e.
d_h < d) was in effect acting as a capacity control mechanism. By contrast, recent successes (Bengio et al., 2007; Ranzato et al., 2007a; Kavukcuoglu et al., 2009; Vincent et al., 2010; Rifai et al., 2011a) tend to rely on rich, oftentimes over-complete representations (d_h > d), so that more sophisticated forms of regularization are required to pressure the auto-encoder into extracting relevant features and avoiding trivial solutions. Several successful techniques aim at sparse representations (Ranzato et al., 2007a; Kavukcuoglu et al., 2009; Goodfellow et al., 2009). Alternatively, denoising auto-encoders (Vincent et al., 2010) change the objective from mere reconstruction to that of denoising.

2.2 First-order and higher-order contractive auto-encoders

More recently, Rifai et al. (2011a) introduced the Contractive Auto-Encoder (CAE), which encourages robustness of the representation h(x) to small variations of a training input x by penalizing its sensitivity to that input, measured as the Frobenius norm of the encoder's Jacobian J(x) = ∂h/∂x (x). The regularized objective minimized by the CAE is

J_{CAE}(θ) = \sum_{x ∈ D} L(x, g(h(x))) + λ ||J(x)||^2,   (2)

where λ is a non-negative regularization hyper-parameter that controls how strongly the norm of the Jacobian is penalized. Note that, with the traditional sigmoid encoder form given above, one can easily obtain the Jacobian of the encoder. Its jth row is obtained from the jth row of W as

J(x)_j = ∂h_j(x)/∂x = h_j(x) (1 − h_j(x)) W_j.   (3)

Computing the extra penalty term (and its contribution to the gradient) is similar to computing the reconstruction error term (and its contribution to the gradient), and thus relatively cheap. It is also possible to penalize higher-order derivatives (Hessian) with a simple stochastic technique that eschews computing them explicitly, which would be prohibitive: it suffices to penalize differences between the Jacobian at x and the Jacobian at nearby points x̃ = x + ε (stochastic corruptions of x).
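Equation (3) is what makes the contractive penalty of Eq. (2) cheap: the full Jacobian follows from a single forward pass. A minimal numpy sketch (random weights and hypothetical layer sizes) that also checks Eq. (3) against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
d, dh = 4, 6                       # hypothetical input / hidden sizes
W = rng.normal(scale=0.5, size=(dh, d))
bh = rng.normal(scale=0.1, size=dh)

def h(x):
    # Encoder h(x) = s(Wx + b_h) with the logistic sigmoid s.
    return 1.0 / (1.0 + np.exp(-(W @ x + bh)))

def jacobian(x):
    # Eq. (3): row j of J(x) is h_j(x) (1 - h_j(x)) W_j,
    # so J(x) costs little beyond the forward pass itself.
    hx = h(x)
    return (hx * (1.0 - hx))[:, None] * W

x = rng.normal(size=d)
J = jacobian(x)
penalty = np.sum(J ** 2)           # the ||J(x)||^2 term of Eq. (2)

# Sanity check of Eq. (3) against central finite differences.
eps = 1e-6
J_num = np.stack([(h(x + eps * np.eye(d)[i]) - h(x - eps * np.eye(d)[i])) / (2 * eps)
                  for i in range(d)], axis=1)
err = np.max(np.abs(J - J_num))
```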
This yields the CAE+H (Rifai et al., 2011b) variant with the following optimization objective:

J_{CAE+H}(θ) = \sum_{x ∈ D} L(x, g(h(x))) + λ ||J(x)||^2 + γ E_{ε ∼ N(0, σ^2 I)} [ ||J(x) − J(x + ε)||^2 ],   (4)

where γ is an additional regularization hyper-parameter that controls how strongly we penalize local variations of the Jacobian, i.e., higher-order derivatives. The expectation E is over the Gaussian noise variable ε; in practice, stochastic samples thereof are used for each stochastic gradient update. The CAE+H is the variant used for our experiments.

3 Characterizing the tangent bundle captured by a CAE

Rifai et al. (2011a) reason that, while the regularization term encourages insensitivity of h(x) in all input-space directions, this pressure is counterbalanced by the need for accurate reconstruction, resulting in h(x) being substantially sensitive only to the few input directions required to distinguish close-by training points. The geometric interpretation is that these directions span the local tangent space of the underlying manifold that supports the data. The tangent bundle of a smooth manifold is the manifold along with the set of tangent planes taken at all points on it. Each such tangent plane can be equipped with a local Euclidean coordinate system, or chart. In topology, an atlas is a collection of such charts (like the locally Euclidean map on each page of a geographic atlas). Even though the set of charts may form a non-Euclidean manifold (e.g., a sphere), each chart is Euclidean.

3.1 Conditions for the feature mapping to define an atlas on a manifold

In order to obtain a proper atlas of charts, h must be a diffeomorphism: it must be smooth (C^∞) and invertible on open Euclidean balls on the manifold M around the training points. Smoothness is guaranteed by our choice of parametrization (affine + sigmoid).
Injectivity (different values of h(x) correspond to different values of x) on the training examples is encouraged by minimizing reconstruction error (otherwise we could not distinguish training examples x_i and x_j by looking only at h(x_i) and h(x_j)). Since h(x) = s(Wx + b_h) and s is invertible, composing h(x_i) = h(x_j) with s^{-1} and using the definition of injectivity gives, for all i, j:

h(x_i) = h(x_j)  ⟺  W Δ_ij = 0,  where Δ_ij = x_i − x_j.

In order to preserve the injectivity of h, the rows W_k of W must span all training-set differences: for all i, j there must exist α ∈ R^{d_h} such that Δ_ij = \sum_{k=1}^{d_h} α_k W_k. With this condition satisfied, the mapping h is injective on the subspace spanned by the variations in the training set. If we restrict the codomain of h to h(X) ⊂ (0, 1)^{d_h}, the values obtainable by applying h to some set X, then we obtain surjectivity by definition, hence bijectivity of h between the training set D and h(D). Let M_x be an open ball on the manifold M around training example x. By smoothness of the manifold M and of the mapping h, we obtain bijectivity locally around the training examples (on the manifold) as well, i.e., between ∪_{x∈D} M_x and h(∪_{x∈D} M_x).

3.2 Obtaining an atlas from the learned feature mapping

Now that we have necessary conditions for local invertibility of h(x) for x ∈ D, let us consider how to define the local chart around x from the nature of h. Because h must be sensitive to changes from an example x_i to one of its neighbors x_j, but insensitive to other changes (because of the CAE penalty), we expect this to be reflected in the spectrum of the Jacobian matrix J(x) = ∂h(x)/∂x at each training point x. In the ideal case where J(x) has rank k, h(x + εv) differs from h(x) only if v is in the span of the singular vectors of J(x) with non-zero singular value. In practice, J(x) has many tiny singular values. Hence, we define a local chart around x using the Singular Value Decomposition J^T(x) = U(x) S(x) V^T(x) (where U(x) and V(x) are orthogonal and S(x) is diagonal).
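In code, extracting the local chart from the Jacobian is a thin wrapper around `numpy.linalg.svd`: keep the left singular vectors of J(x)^T whose singular values exceed a threshold ε. The toy Jacobian and the threshold below are hypothetical, chosen only to make the rank structure visible.

```python
import numpy as np

def tangent_basis(J, eps=1e-3):
    # B_x: left singular vectors of J(x)^T whose singular value
    # exceeds eps; their span approximates the local tangent plane.
    U, S, _ = np.linalg.svd(J.T, full_matrices=False)
    return U[:, S > eps]

# Toy Jacobian sensitive to only 2 of d = 5 input directions.
rng = np.random.default_rng(0)
directions = rng.normal(size=(2, 5))           # the "true" tangents
J = rng.normal(size=(8, 2)) @ directions       # rank-2, shape (d_h, d)
B = tangent_basis(J)
```

The columns of B are orthonormal, so projecting a vector v onto the chart is simply `B @ (B.T @ v)`, and the tangent plane is H_x = {x + v : v ∈ span(B)}.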
The tangent plane H_x at x is given by the span of the set of principal singular vectors B_x:

B_x = {U_{·k}(x) | S_{kk}(x) > ε}  and  H_x = {x + v | v ∈ span(B_x)},

where U_{·k}(x) is the k-th column of U(x), and span({z_k}) = {x | x = \sum_k w_k z_k, w_k ∈ R}. We can thus define an atlas A captured by h, based on the local linear approximation around each example:

A = {(M_x, φ_x) | x ∈ D, φ_x(x̃) = B_x (x̃ − x)}.   (5)

Note that this way of obtaining an atlas can also be applied to subsequent layers of a deep network. It is thus possible to use a greedy layer-wise strategy to initialize a network with CAEs (Rifai et al., 2011a) and obtain an atlas that corresponds to the nonlinear features computed at any layer.

4 Exploiting the learned tangent directions for classification

Using the previously defined charts for every point of the training set, we propose to use this additional information provided by unsupervised learning to improve the performance of the supervised task. In this we adopt the manifold hypothesis for classification mentioned in the introduction.

4.1 CAE-based tangent distance

One way of achieving this is to use a nearest neighbor classifier with a similarity criterion defined as the shortest distance between two hyperplanes (Simard et al., 1993). The tangents extracted at each point allow us to shrink the distance between two samples when they can approximate each other by a linear combination of their local tangents. Following Simard et al. (1993), we define the tangent distance between two points x and y as the distance between the two hyperplanes H_x, H_y ⊂ R^d spanned respectively by B_x and B_y. Using the usual definition of distance between two spaces, d(H_x, H_y) = inf{ ||z − w||_2 : (z, w) ∈ H_x × H_y }, we obtain the solution of this convex
Their distance is then evaluated on the new coordinates where the distance is minimal. We can then use a nearest neighbor classifier based on this distance. 4.2 CAE-based tangent propagation Nearest neighbor techniques are often impractical for large scale datasets because their computational requirements scale linearly with n for each test case. By contrast, once trained, neural networks yield fast responses for test cases. We can also leverage the extracted local charts when training a neural network. Following the tangent propagation approach of Simard et al. (1992), but exploiting our learned tangents, we encourage the output o of a neural network classifier to be insensitive to variations in the directions of the local chart of x by adding the following penalty to its supervised objective function: Ω(x) = X u∈Bx ∂o ∂x(x) u 2 (6) Contribution of this term to the gradients of network parameters can be computed in O(Nw), where Nw is the number of neural network weights. 4.3 The Manifold Tangent Classifier (MTC) Putting it all together, here is the high level summary of how we build and train a deep network: 1. Train (unsupervised) a stack of K CAE+H layers (Eq. 4). Each is trained in turn on the representation learned by the previous layer. 2. For each xi ∈D compute the Jacobian of the last layer representation J(K)(xi) = ∂h(K) ∂x (xi) and its SVD1. Store the leading dM singular vectors in set Bxi. 3. On top of the K pre-trained layers, stack an output layer of size the number of classes. Finetune the whole network for supervised classification2 with an added tangent propagation penalty (Eq. 6), using for each xi, tangent directions Bxi. We call this deep learning algorithm the Manifold Tangent Classifier (MTC). Alternatively, instead of step 3, one can use the tangent vectors in Bxi in a tangent distance nearest neighbors classifier. 
5 Related prior work Many Non-Linear Manifold Learning algorithms (Roweis and Saul, 2000; Tenenbaum et al., 2000) have been proposed which can automatically discover the main directions of variation around each training point, i.e., the tangent bundle. Most of these algorithms are non-parametric and local, i.e., they explicitly parametrize the tangent plane around each training point (with a separate set of parameters for each, or derived mostly from the set of training examples in every neighborhood), as most explicitly seen in Manifold Parzen Windows (Vincent and Bengio, 2003) and Manifold Charting (Brand, 2003). See Bengio and Monperrus (2005) for a critique of local non-parametric manifold algorithms: they might require a number of training examples which grows exponentially with manifold dimension and curvature (more crooks and valleys in the manifold will require more examples). One attempt to generalize the manifold shape non-locally (Bengio et al., 2006) is based on explicitly predicting the tangent plane associated to any given point x, as a parametrized function of x. Note that these algorithms all explicitly exploit training set neighborhoods (see Figure 2), i.e., they use pairs or tuples of points, with the goal of explicitly modeling the tangent space, while it is modeled implicitly by the CAE’s objective function (that is not based on pairs of points). (Footnote 1: J(K) is the product of the Jacobians of each encoder (see Eq. 3) in the stack. It suffices to compute its leading dM SVD vectors and singular values. This is achieved in O(dM × d × dh) per training example. For comparison, the cost of a forward propagation through a single MLP layer is O(d × dh) per example. Footnote 2: A sigmoid output layer is preferred because computing its Jacobian is straightforward and efficient (Eq. 3). The supervised cost used is the cross entropy. Training is by stochastic gradient descent.)
More recently, the Local Coordinate Coding (LCC) algorithm (Yu et al., 2009) and its Local Tangent LCC variant (Yu and Zhang, 2010) were proposed to build a local chart around each training example (with a local low-dimensional coordinate system around it) and use it to define a representation for each input x: the responsibility of each local chart/anchor in explaining input x and the coordinates of x in each local chart. That representation is then fed to a classifier and yields better generalization than x itself. The tangent distance (Simard et al., 1993) and TangentProp (Simard et al., 1992) algorithms were initially designed to exploit prior domain knowledge of directions of invariance (e.g., knowledge that the class of an image should be invariant to small translations, rotations or scalings in the image plane). However, any algorithm able to output a chart for a training point might potentially be used, as we do here, to provide directions to a tangent distance or TangentProp (Simard et al., 1992) based classifier. Our approach is nevertheless unique in that the CAE’s unsupervised feature learning capabilities are used simultaneously to provide a good initialization of deep network layers and a coherent non-local predictor of tangent spaces. TangentProp is itself closely related to the Double Backpropagation algorithm (Drucker and LeCun, 1992), in which one instead adds a penalty that is the sum of squared derivatives of the prediction error (with respect to the network input). Whereas TangentProp attempts to make the output insensitive to selected directions of change, the double backpropagation penalty term attempts to make the error at a training example invariant to changes in all directions. Since one is also trying to minimize the error at the training example, this amounts to making that minimization more robust, i.e., extending it to the neighborhood of the training examples. Also related is the Semi-Supervised Embedding algorithm (Weston et al., 2008).
In addition to minimizing a supervised prediction error, it encourages each layer of representation of a deep architecture to be invariant when the training example is changed from x to a near neighbor of x in the training set. This algorithm works implicitly under the hypothesis that the variable y to predict from x is invariant to the local directions of change present between nearest neighbors. This is consistent with the manifold hypothesis for classification (hypothesis 3 mentioned in the introduction). Instead of removing variability along the local directions of variation, the Contractive Auto-Encoder (Rifai et al., 2011a) initially finds a representation which is most sensitive to them, as we explained in section 2. 6 Experiments We conducted experiments to evaluate our approach and the quality of the manifold tangents learned by the CAE, using a range of datasets from different domains: MNIST is a dataset of 28 × 28 images of handwritten digits. The learning task is to predict the digit contained in the images. Reuters Corpus Volume I is a popular benchmark for document classification. It consists of 800,000 real-world news wire stories made available by Reuters. We used the 2000 most frequent words calculated on the whole dataset to create a bag-of-words vector representation. We used the LYRL2004 split to separate between a train and a test set. CIFAR-10 is a dataset of 70,000 32 × 32 RGB real-world images. It contains images of real-world objects (e.g., cars, animals) with all the variations present in natural images (e.g., backgrounds). Forest Cover Type is a large-scale database of cartographic variables for the prediction of forest cover types made available by the US Forest Service.
We investigate whether leveraging the CAE learned tangents leads to better classification performance on these problems, using the following methodology: Optimal hyper-parameters for (a stack of) CAEs are selected by cross-validation on a disjoint validation set extracted from the training set. The quality of the feature extractor and tangents captured by the CAEs is evaluated by initializing a neural network (MLP) with the same parameters and fine-tuning it by backpropagation on the supervised classification task. The optimal strength of the supervised TangentProp penalty and the number of tangents dM are also cross-validated. Results Figure 1 shows a visualization of the tangents learned by the CAE. On MNIST, the tangents mostly correspond to small geometrical transformations like translations and rotations. On CIFAR-10, the model also learns sensible tangents, which seem to correspond to changes in the parts of objects. The tangents on RCV1-v2 correspond to the addition or removal of similar words and the removal of irrelevant words. We also note that extracting the tangents of the model is a way to visualize what the model has learned about the structure of the manifold. Interestingly, we see that hypothesis 3 holds for these datasets because most tangents do not change the class of the example. Figure 1: Visualization of the tangents learned by the CAE for MNIST, CIFAR-10 and RCV1 (top to bottom). The left-most column is the example and the following columns are its tangents. On RCV1, we show the tangents of a document with the topic “Trading & Markets” (MCAT), with the negative terms in red (-) and the positive terms in green (+). Figure 2: Tangents extracted by local PCA on CIFAR-10. This shows the limitation of approaches that rely on training set neighborhoods. Table 1: Classification accuracy on several datasets using KNN variants, measured on 10,000 test examples with 1,000 training examples.
The KNN is trained on the raw input vector using the Euclidean distance, while the K-layer CAE+KNN is computed on the representation learned by a K-layer CAE. KNN+Tangents uses, at every sample, the local charts extracted from the 1-layer CAE to compute the tangent distance.

            KNN    KNN+Tangents   1-Layer CAE+KNN   2-Layer CAE+KNN
MNIST       86.9   88.7           90.55             91.15
CIFAR-10    25.4   26.5           25.1
COVERTYPE   70.2   70.98          69.54             67.45

We use KNN with the tangent distance to evaluate the quality of the learned tangents more objectively. Table 1 shows that using the tangents extracted from a CAE always leads to better performance than a traditional KNN. As described in section 4.2, the tangents extracted by the CAE can be used for fine-tuning the multilayer perceptron using tangent propagation, yielding our Manifold Tangent Classifier (MTC). As it is a semi-supervised approach, we evaluate its effectiveness with a varying amount of labeled examples on MNIST. Following Weston et al. (2008), the unsupervised feature extractor is trained on the full training set and the supervised classifier is trained on a restricted labeled set. Table 2 shows our results for a single hidden layer MLP initialized with CAE+H pretraining (denoted CAE for brevity) and for the same classifier fine-tuned with tangent propagation (i.e., the manifold tangent classifier of section 4.3, denoted MTC). The methods that do not leverage the semi-supervised learning hypothesis (Support Vector Machines, traditional Neural Networks and Convolutional Neural Networks) give very poor performance when the amount of labeled data is low. In some cases, the methods that can learn from unlabeled data can reduce the classification error by half. The CAE gives better results than other approaches across almost the whole range considered. It shows that the features extracted Table 2: Semi-supervised classification error on the MNIST test set with 100, 600, 1000 and 3000 labeled training examples.
from the rich unlabeled data distribution give a good inductive prior for the classification task. We compare our method with results from (Weston et al., 2008; Ranzato et al., 2007b; Salakhutdinov and Hinton, 2007).

       NN      SVM    CNN    TSVM   DBN-rNCA   EmbedNN   CAE     MTC
100    25.81   23.44  22.98  16.81             16.86     13.47   12.03
600    11.44   8.85   7.68   6.16   8.7        5.97      6.3     5.13
1000   10.7    7.77   6.45   5.38              5.73      4.77    3.64
3000   6.04    4.21   3.35   3.45   3.3        3.59      3.22    2.57

Note that the MTC consistently outperforms the CAE on this benchmark. Table 3: Classification error on the MNIST test set with the full training set.

        K-NN    NN      SVM     DBN     CAE     DBM     CNN     MTC
Error   3.09%   1.60%   1.40%   1.17%   1.04%   0.95%   0.95%   0.81%

Table 3 shows our results on the full MNIST dataset, with some results taken from (LeCun et al., 1999; Hinton et al., 2006). The CAE in this table is a two-layer deep network with 2000 units per layer, pretrained with the CAE+H objective. The MTC uses the same stack of CAEs trained with tangent propagation using 15 tangents. The prior state of the art for the permutation-invariant version of the task was set by the Deep Boltzmann Machines (Salakhutdinov and Hinton, 2009) at 0.95%. Using our approach, we reach 0.81% error on the test set. Remarkably, the MTC also outperforms the basic Convolutional Neural Network (CNN), even though the CNN exploits prior knowledge about vision through convolution and pooling. Table 4: Classification error on the Forest CoverType dataset.

        SVM     Distributed SVM   MTC
Error   4.11%   3.46%             3.13%

We also trained a 4-layer MTC on the Forest CoverType dataset. Following Trebar and Steele (2008), we use the data split DS2-581, which contains over 500,000 training examples. The MTC yields the best performance for the classification task, beating the previous state of the art held by the distributed SVM (a mixture of several non-linear SVMs).
7 Conclusion In this work, we have shown a new way to characterize a manifold, by extracting a local chart at each data point based on the unsupervised feature mapping built with a deep learning approach. The developed Manifold Tangent Classifier successfully leverages three common “generic prior hypotheses” in a unified manner. It learns a meaningful representation that captures the structure of the manifold, and can leverage this knowledge to reach superior classification performance. On datasets from different domains, it successfully achieves state-of-the-art performance. Acknowledgments The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, FQRNT, Calcul Québec and CIFAR. References Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127. Also published as a book, Now Publishers, 2009. Bengio, Y. and Monperrus, M. (2005). Non-local manifold tangent learning. In NIPS’04, pages 129–136. MIT Press. Bengio, Y., Larochelle, H., and Vincent, P. (2006). Non-local manifold Parzen windows. In NIPS’05, pages 115–122. MIT Press. Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In Advances in NIPS 19. Brand, M. (2003). Charting a manifold. In NIPS’02, pages 961–968. MIT Press. Cayton, L. (2005). Algorithms for manifold learning. Technical Report CS2008-0923, UCSD. Drucker, H. and LeCun, Y. (1992). Improving generalisation performance using double back-propagation. IEEE Transactions on Neural Networks, 3(6), 991–997. Erhan, D., Bengio, Y., Courville, A., Manzagol, P.-A., Vincent, P., and Bengio, S. (2010). Why does unsupervised pre-training help deep learning? JMLR, 11, 625–660. Goodfellow, I., Le, Q., Saxe, A., and Ng, A. (2009). Measuring invariances in deep networks. In NIPS’09, pages 646–654. Hinton, G. E., Osindero, S., and Teh, Y. (2006).
A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554. Kavukcuoglu, K., Ranzato, M., Fergus, R., and LeCun, Y. (2009). Learning invariant features through topographic filter maps. pages 1605–1612. IEEE. Lasserre, J. A., Bishop, C. M., and Minka, T. P. (2006). Principled hybrids of generative and discriminative models. pages 87–94, Washington, DC, USA. IEEE Computer Society. LeCun, Y., Haffner, P., Bottou, L., and Bengio, Y. (1999). Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision, pages 319–345. Springer. Narayanan, H. and Mitter, S. (2010). Sample complexity of testing the manifold hypothesis. In Advances in Neural Information Processing Systems 23, pages 1786–1794. Ranzato, M., Poultney, C., Chopra, S., and LeCun, Y. (2007a). Efficient learning of sparse representations with an energy-based model. In NIPS’06. Ranzato, M., Huang, F., Boureau, Y., and LeCun, Y. (2007b). Unsupervised learning of invariant feature hierarchies with applications to object recognition. IEEE Press. Rifai, S., Vincent, P., Muller, X., Glorot, X., and Bengio, Y. (2011a). Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the Twenty-eighth International Conference on Machine Learning (ICML’11). Rifai, S., Mesnil, G., Vincent, P., Muller, X., Bengio, Y., Dauphin, Y., and Glorot, X. (2011b). Higher order contractive auto-encoder. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD). Roweis, S. and Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326. Salakhutdinov, R. and Hinton, G. E. (2007). Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS’2007, San Juan, Puerto Rico. Omnipress. Salakhutdinov, R. and Hinton, G. E. (2009).
Deep Boltzmann machines. In AISTATS’2009, volume 5, pages 448–455. Simard, P., Victorri, B., LeCun, Y., and Denker, J. (1992). Tangent prop - A formalism for specifying selected invariances in an adaptive network. In NIPS’91, pages 895–903, San Mateo, CA. Morgan Kaufmann. Simard, P. Y., LeCun, Y., and Denker, J. (1993). Efficient pattern recognition using a new transformation distance. In NIPS’92, pages 50–58. Morgan Kaufmann, San Mateo. Tenenbaum, J., de Silva, V., and Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500), 2319–2323. Trebar, M. and Steele, N. (2008). Application of distributed SVM architectures in classifying forest data cover types. Computers and Electronics in Agriculture, 63(2), 119–130. Vincent, P. and Bengio, Y. (2003). Manifold Parzen windows. In NIPS’02. MIT Press. Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11, 3371–3408. Weston, J., Ratle, F., and Collobert, R. (2008). Deep learning via semi-supervised embedding. In ICML 2008, pages 1168–1175, New York, NY, USA. Yu, K. and Zhang, T. (2010). Improved local coordinate coding using local tangents. Yu, K., Zhang, T., and Gong, Y. (2009). Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems 22, pages 2223–2231.
Manifold Précis: An Annealing Technique for Diverse Sampling of Manifolds Nitesh Shroff †, Pavan Turaga ‡, Rama Chellappa † †Department of Electrical and Computer Engineering, University of Maryland, College Park ‡School of Arts, Media, Engineering and ECEE, Arizona State University {nshroff,rama}@umiacs.umd.edu, pturaga@asu.edu Abstract In this paper, we consider the Précis problem of sampling K representative yet diverse data points from a large dataset. This problem arises frequently in applications such as video and document summarization, exploratory data analysis, and pre-filtering. We formulate a general theory which encompasses not just traditional techniques devised for vector spaces, but also non-Euclidean manifolds, thereby extending these techniques to shapes, human activities, textures and many other image- and video-based datasets. We propose intrinsic manifold measures for measuring the quality of a selection of points with respect to their representative power and their diversity. We then propose efficient algorithms to optimize the cost function using a novel annealing-based iterative alternation algorithm. The proposed formulation is applicable to manifolds of known geometry as well as to manifolds whose geometry needs to be estimated from samples. Experimental results show the strength and generality of the proposed approach. 1 Introduction The problem of sampling K representative data points from a large dataset arises frequently in various applications. Consider analyzing large datasets of shapes, objects, documents, or large video sequences. Analysts spend a large amount of time sifting through the acquired data to familiarize themselves with the content before using it for their application-specific tasks. This has made the optimal selection of a few representative exemplars from the dataset an important step in exploratory data analysis.
Other applications include Internet-based video summarization, where providing a quick overview of a video is important for improving the browsing experience. Similarly, in medical image analysis, picking a subset of K anatomical shapes from a large population helps in identifying the variations within and across shape classes, providing an invaluable tool for analysts. Depending upon the application, several subset selection criteria have been proposed in the literature. However, there seems to be a consensus on selecting exemplars that are representative of the dataset while minimizing the redundancy between the exemplars. Liu et al. [1] proposed that the summary of a document should satisfy the ‘coverage’ and ‘orthogonality’ criteria. Shroff et al. [2] extended this idea to selecting exemplars from videos that maximize ‘coverage’ and ‘diversity’. Simon et al. [3] formulated scene summarization as one of picking interesting and important scenes with minimal redundancy. Similarly, in statistics, stratified sampling techniques sample the population by dividing the dataset into mutually exclusive and exhaustive ‘strata’ (sub-groups), followed by a random selection of representatives from each stratum [4]. The splitting of the population into strata ensures that a diverse selection is obtained. The need to select diverse subsets has also been emphasized in information retrieval applications [5, 6]. Column Subset Selection (CSS) [7, 8, 9] has been one of the popular techniques to address this problem. The goal of CSS is to select the K most “well-conditioned” columns from the matrix of data points. One of the key assumptions behind this and other techniques is that the objects, or their representations, lie in a Euclidean space. Unfortunately, this assumption is not valid in many cases. In applications like computer vision, images and videos are represented by features/models like shapes [10], bags-of-words, linear dynamical systems (LDS) [11], etc.
Many of these features/models have been shown to lie in non-Euclidean spaces, implying that the underlying distance metric of the space is not the usual ℓ2/ℓp norm. Since these feature/model spaces have a non-trivial manifold structure, the distance metrics are highly non-linear functions. Examples of features/models - manifold pairs include: shapes - complex spherical manifold [10], linear subspaces - Grassmann manifold, covariance matrices - tensor space, histograms - simplex in Rn, etc. Even the familiar bag-of-words representation, used commonly in document analysis, is more naturally considered as a statistical manifold than as a vector space [12]. The geometric properties of the non-Euclidean manifolds allow one to develop accurate inference and classification algorithms [13, 14]. In this paper, we focus on the problem of selecting a subset of K exemplars from a dataset of N points when the dataset has an underlying manifold structure to it. We formulate the notion of representational error and diversity measure of exemplars while utilizing the non-Euclidean structure of the data points followed by the proposal of an efficient annealing-based optimization algorithm. Related Work: The problem of subset selection has been studied by the communities of numerical linear algebra and theoretical computer science. Most work in the former community is related to the Rank Revealing QR factorization (RRQR) [7, 15, 16]. Given a data matrix Y , the goal of RRQR factorization is to find a permutation matrix Π such that the QR factorization of Y Π reveals the numerical rank of the matrix. The resultant matrix Y Π has as its first K columns the most “well-conditioned” columns of the matrix Y . On the other hand, the latter community has focused on Column Subset Selection (CSS). The goal of CSS is to pick K columns forming a matrix C ∈ Rm×K such that the residual || Y −PCY ||ζ is minimized over all possible choices for the matrix C. 
Here PC = CC† denotes the projection onto the K-dimensional space spanned by the columns of C and ζ can represent the spectral or Frobenius norm. C† indicates the pseudo inverse of matrix C. Along these lines, different randomized algorithms have been proposed [17, 18, 9, 8]. Various approaches include a two-stage approach [9], subspace sampling methods [8], etc. Clustering techniques [19] have also been applied for subset selection [20, 21]. In order to select K exemplars, data points are clustered into ℓclusters with (ℓ≤K) followed by the selection of one or multiple exemplars from each cluster to obtain the best representation or low-rank approximation of each cluster. Affinity Propagation [21], is a clustering algorithm that takes similarity measures as input and recursively passes message between nodes until a set of exemplars emerges. As we discuss in this paper, the problems with these approaches are that (a) the objective functions optimized by the clustering functions do not incorporate the diversity of the exemplars, hence can be biased towards denser clusters, and also by outliers, and (b) seeking low-rank approximation of the data matrix or clusters individually is not always an appropriate subset selection criterion. Furthermore, these techniques are largely tuned towards addressing the problem in an Euclidean setting and cannot be applied for datasets in non-Euclidean spaces. Recently, advances have been made in utilizing non-Euclidean structure for statistical inferences and pattern recognition [13, 14, 22, 23]. These works have addressed inferences, clustering, dimensionality reduction, etc. in non-Euclidean spaces. To the best of our knowledge, the problem of subset selection for analytic manifolds remains largely unaddressed. While one could try to solve the problem by obtaining an embedding of a given manifold into a larger ambient Euclidean space, it is desirable to have a solution that is more intrinsic in nature. 
This is because the chosen embedding is often arbitrary, and introduces peculiarities that result from such extrinsic approaches. Further, manifolds such as the Grassmannian or the manifold of infinite-dimensional diffeomorphisms do not admit a natural embedding into a vector space. Contributions: 1) We present the first formal treatment of subset selection for the general case of manifolds; 2) We propose a novel annealing-based alternation algorithm to efficiently solve the optimization problem; 3) We present an extension of the algorithm for data manifolds, and demonstrate the favorable properties of the algorithm on real data. 2 Subset Selection on Analytic Manifolds In this section, we formalize the subset selection problem on manifolds and propose an efficient algorithm. First, we briefly touch upon the necessary basic concepts. Geometric Computations on Manifolds: Let M be an m-dimensional manifold and, for a point p ∈ M, consider a differentiable curve γ : (−ϵ, ϵ) → M such that γ(0) = p. The velocity ˙γ(0) denotes the velocity of γ at p. This vector is an example of a tangent vector to M at p. The set of all such tangent vectors is called the tangent space to M at p, denoted Tp(M). If M is a Riemannian manifold, then the exponential map expp : Tp(M) → M is defined by expp(v) = αv(1), where αv is the unique geodesic with αv(0) = p and initial velocity v. The inverse exponential map (logarithmic map) logp : M → Tp(M) takes a point on the manifold and returns a point on the tangent space – which is a Euclidean space. Representational error on manifolds: Let us assume that we are given a set of points X = {x1, x2, . . . xn} which belong to a manifold M. The goal is to select a few exemplars E = {e1, . . . eK} from the set X, such that the exemplars provide a good representation of the given data points, and are minimally redundant. For the special case of vector spaces, two common approaches for measuring representational error are in terms of linear spans and nearest-exemplar error.
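The two manifold primitives introduced above (the exponential and logarithmic maps) are all the later algorithms assume. As an illustration only, here they are instantiated for the unit sphere, where the geodesic distance is the arc length; this specific manifold is our choice for the sketch, not one singled out by the paper.

```python
import numpy as np

def exp_map(p, v):
    """Follow the geodesic from p with initial velocity v for unit time (sphere)."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p
    return np.cos(t) * p + np.sin(t) * (v / t)

def log_map(p, x):
    """Tangent vector at p pointing to x, with norm equal to d_g(p, x) (sphere)."""
    w = x - (p @ x) * p                  # project x onto the tangent space at p
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return np.zeros_like(p)
    theta = np.arccos(np.clip(p @ x, -1.0, 1.0))   # geodesic (arc) distance
    return theta * (w / nw)

p = np.array([1.0, 0.0, 0.0])
x = np.array([0.0, 1.0, 0.0])
v = log_map(p, x)
assert np.isclose(np.linalg.norm(v), np.pi / 2)    # quarter great circle
assert np.allclose(exp_map(p, v), x)               # exp inverts log
```

Replacing these two functions (and the induced distance) is all that is needed to port the algorithms below to another manifold of known geometry.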
The linear span error is given by: min_z ∥X − Ez∥²_F, where X is the matrix form of the data, and E is a matrix of chosen exemplars. The nearest-exemplar error is given by: Σi Σxk∈Φi ∥xk − ei∥², where ei is the ith exemplar and Φi corresponds to its Voronoi region. Of these two measures, the notion of linear span, while appropriate for matrix approximation, is not particularly meaningful for general dataset approximation problems, since the ‘span’ of a dataset item does not carry much perceptually meaningful information. For example, the linear span of a vector x ∈ Rn is the set of points αx, α ∈ R. However, if x were an image, the linear span of x would be the set of images obtained by varying the global contrast level. All elements of this set are perceptually equivalent, and one does not obtain any representational advantage from considering the span of x. Further, points sampled from the linear span of a few images would not be meaningful images. This situation is further complicated for manifold-valued data such as shapes, where the notion of linear span does not exist. One could attempt to define the notion of linear spans on the manifold as the set of points lying on the geodesic shot from some fixed pole toward the given dataset item. But points sampled from this linear span might not be very meaningful, e.g., samples from the linear span of a few shapes would give physically meaningless shapes. Hence, it is natural to consider the representational error of a set X with respect to a set of exemplars E as follows: Jrep(E) = Σi Σxj∈Φi d²g(xj, ei) (1) Here, dg is the geodesic distance on the manifold and Φi is the Voronoi region of the ith exemplar. This boils down to the familiar K-means or K-medoids cost function for Euclidean spaces. In order to avoid the combinatorial optimization involved in solving this problem, we use efficient approximations: we first find the mean of each cluster and then select ei as the data point closest to that mean.
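The cost of Eq. (1) and the cluster-then-snap-to-nearest-point approximation can be sketched directly from a matrix of pairwise geodesic distances. Two simplifications are assumed here for illustration: the "mean" step is replaced by a medoid (the member minimizing within-cluster squared distance), and d_g is a plain Euclidean distance matrix standing in for a true geodesic one.

```python
import numpy as np

def j_rep(D, exemplars):
    """Eq. (1): every point charged its squared distance to its nearest exemplar."""
    return float(np.sum(np.min(D[:, exemplars] ** 2, axis=1)))

def medoid_step(D, exemplars):
    """One refinement pass: form Voronoi regions Phi_i, re-pick each exemplar."""
    labels = np.argmin(D[:, exemplars], axis=1)
    new = []
    for i in range(len(exemplars)):
        members = np.where(labels == i)[0]
        # medoid: region member with the smallest summed squared distance
        new.append(int(members[np.argmin((D[np.ix_(members, members)] ** 2).sum(axis=1))]))
    return new

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # Euclidean stand-in for d_g
e = medoid_step(D, [0, 1])
assert j_rep(D, e) <= j_rep(D, [0, 1]) + 1e-9          # the update never hurts
```

The monotonicity asserted at the end is the usual K-medoids argument: each new exemplar minimizes its region's cost, and reassignment can only decrease the total further.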
The algorithm for optimizing Jrep is given in Algorithm 1. Similar to K-means clustering, a cluster label is assigned to each xj, followed by the computation of the mean µi for each cluster; the representative exemplar ei is then selected as the data point closest to µi. Diversity measures on manifolds: The next question we consider is how to define the notion of diversity of a selection of points on a manifold. We first begin by examining equivalent constructions for Rn. One way to measure diversity is simply to use the sample variance of the points. This is similar to the construction used recently in [2]. For the case of manifolds, the sample variance can be replaced by the sample Karcher variance, given by the function: ρ(E) = (1/K) Σ_{i=1}^{K} d²g(µ, ei), where µ is the Karcher mean [24], and the function value is the Karcher variance. However, this construction leads to highly inefficient optimization routines, essentially boiling down to a combinatorial search over all possible K-sized subsets of X. An alternate formulation for vector spaces that results in highly efficient optimization routines is via Rank-Revealing QR (RRQR) factorizations. For vector spaces, given a set of vectors X = {xi}, written in matrix form X, RRQR [7] aims to find Q, R and a permutation matrix Π ∈ Rn×n such that XΠ = QR reveals the numerical rank of the matrix X. This permutation XΠ = (XK Xn−K) gives XK, the K most linearly independent columns of X. This factorization is achieved by seeking the Π which maximizes Λ(XK) = ∏i σi(XK), the product of the singular values of the matrix XK. For the case of manifolds, we adopt an approximate approach in order to measure diversity in terms of the ‘well-conditioned’ nature of the set of exemplars projected on the tangent space at the mean.
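The diversity score Λ and RRQR-style column selection can be sketched as follows. The true algorithm uses the RRQR factorization of [7]; the greedy column-pivoting pass below (repeatedly pick the column with the largest residual norm, then orthogonalize it out of the rest) is a deliberate simplification assumed only for illustration.

```python
import numpy as np

def lam(T_E):
    """Lambda(T_E): product of the singular values of the selected-column matrix."""
    return float(np.prod(np.linalg.svd(T_E, compute_uv=False)))

def greedy_pivot(T, K):
    """Greedily pick K 'well-conditioned' (most linearly independent) columns."""
    T = T.astype(float).copy()
    chosen = []
    for _ in range(K):
        j = int(np.argmax(np.linalg.norm(T, axis=0)))
        q = T[:, j] / np.linalg.norm(T[:, j])
        chosen.append(j)
        T -= np.outer(q, q @ T)       # remove the chosen direction from all columns
    return chosen

rng = np.random.default_rng(3)
T_X = rng.normal(size=(5, 30))        # tangent vectors of all data points
T_X[:, 7] = T_X[:, 3]                 # plant a redundant (duplicate) column
chosen = greedy_pivot(T_X, 3)
assert not {3, 7} <= set(chosen)      # a duplicate pair is never both chosen
assert lam(T_X[:, chosen]) > lam(T_X[:, [3, 7, 0]])  # duplicates collapse Lambda
```

The final comparison is the point of the measure: a selection containing a duplicated direction has a (near-)zero singular value, so its product Λ collapses, while the pivoted selection keeps Λ large.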
Algorithm 1: Algorithm to minimize Jrep
Input: X ∈ M, k, index vector ω, Γ
Output: Permutation matrix Π
Initialize Π ← In×n
for γ ← 1 to Γ do
  Initialize Π(γ) ← In×n
  ei ← xωi for i = {1, 2, . . . , k}
  for i ← 1 to k do
    Φi ← {xp : arg minj dg(xp, ej) = i}
    µi ← mean of Φi
    ˆj ← arg minj dg(xj, µi)
    Update: Π(γ) ← Π(γ) Πi↔ˆj
  end
  Update: Π ← Π Π(γ), ω ← ωΠ(γ)
  if Π(γ) = In×n then break
end

Algorithm 2: Algorithm for Diversity Maximization
Input: Matrix V ∈ Rd×n, k, tolerance tol
Output: Permutation matrix Π
Initialize Π ← In×n
repeat
  Compute the QR decomposition of V to obtain R11, R12 and R22 such that V = Q [ R11 R12 ; 0 R22 ]
  βij ← sqrt( ((R11⁻¹ R12)ij)² + ∥R22 αj∥₂² ∥αiᵀ R11⁻¹∥₂² )
  βm ← maxij βij ; (ˆi, ˆj) ← arg maxij βij
  Update: Π ← Π Πˆi↔(ˆj+k), V ← V Πˆi↔(ˆj+k)
until βm < tol

In particular, for the dataset {xi} ⊆ M, with intrinsic mean µ and a given selection of exemplars {ej}, we measure the diversity of the exemplars as follows: the matrix TE = [logµ(ej)] is obtained by projecting the exemplars {ej} on the tangent space at the mean µ. Here, log() is the inverse exponential map on the manifold and gives tangent vectors at µ that point towards ej. Diversity can then be quantified as Jdiv(E) = Λ(TE), where Λ(TE) represents the product of the singular values of the matrix TE. For vector spaces, this measure is related to the sample variance of the chosen exemplars. For manifolds, this measure is related to the sample Karcher variance. If we denote by TX = [logµ(xi)] the matrix of tangent vectors corresponding to all data points, and if Π is the permutation matrix that orders the columns such that the first K columns of TX correspond to the most diverse selection, then Jdiv(E) = Λ(TE) = det(R11), where TXΠ = QR = Q [ R11 R12 ; 0 R22 ] (2) Here, R11 ∈ RK×K is the upper-triangular block of R ∈ Rn×n, R12 ∈ RK×(n−K) and R22 ∈ R(n−K)×(n−K).
Algorithm 3: Annealing-based Alternation Algorithm for Subset Selection on Manifolds
  Input: Data points X = {x_1, x_2, . . . , x_n} ⊆ M, number of exemplars k, tolerance step δ
  Output: E = {e_1, . . . , e_k} ⊆ X
  Initial setup:
    Compute the intrinsic mean µ of X
    Compute tangent vectors v_i ← log_µ(x_i)
    V ← [v_1, v_2, . . . , v_n]
    ω ← [1, 2, . . . , n], the 1×n index vector of X
    tol ← 1
  Initialize: Π ← randomly permute the columns of I_{n×n}
  Update: V ← VΠ, ω ← ωΠ
  while Π ≠ I_{n×n} do
    Diversity: Π ← Div(V, k, tol) as in algorithm 2
    Update: V ← VΠ, ω ← ωΠ
    Representative error: Π ← Rep(X, k, ω, 1) as in algorithm 1
    Update: V ← VΠ, ω ← ωΠ
    tol ← tol + δ
  end
  e_i ← x_{ω_i} for i = 1, 2, . . . , k

The advantage of viewing the required quantity as the determinant of a sub-matrix on the right-hand side of equation (2) is that one can obtain efficient techniques for optimizing this cost function. The algorithm for optimizing Jdiv is adopted from [7] and described in algorithm 2. The input to the algorithm is the matrix V created by the tangent-space projection of X, and the output is the K most 'well-conditioned' columns of V. This is achieved by first decomposing V into QR and computing β_ij, which indicates the benefit of swapping the i-th and j-th columns [7]. The algorithm then selects the pair (î, ĵ) corresponding to the maximum-benefit swap β_m, and if β_m > tol, this swap is accepted. This is repeated until either β_m < tol or the maximum number of iterations is reached. Representation and Diversity Trade-offs for Subset Selection: From (1) and (2), it can be seen that we seek a solution that represents a trade-off between two conflicting criteria. As an example, figure 1(a) shows two cases where Jrep and Jdiv are individually optimized; the solutions look quite different in each case. One way to write a global cost function is as a weighted combination of the two. However, such a formulation does not lend itself to efficient optimization routines (c.f. [2]).
Further, the choice of weights is often left unjustified. Instead, we propose an annealing-based alternating technique for optimizing the conflicting criteria Jrep and Jdiv. Optimization algorithms for Jrep and Jdiv individually are given in algorithms 1 and 2 respectively. We first optimize Jdiv to obtain an initial set of exemplars, and use this set as an initialization for optimizing Jrep. The output of this stage is used as the current solution to further optimize Jdiv. With each iteration, however, we increase the tolerance parameter tol in algorithm 2. This has the effect of accepting only those permutations that increase the diversity by a larger factor as the iterations progress, which is done to ensure that the algorithm is guided towards convergence. If the tol value is not increased at each iteration, then optimizing Jdiv will continue to produce a new solution at each iteration that modifies the cost function only marginally. This is illustrated in figure 1(c), which shows how the cost functions Jrep and Jdiv exhibit an oscillatory behavior when annealing is not used.

Table 1: Notations used in Algorithms 1-3
  Γ: maximum number of iterations
  I_{n×n}: identity matrix
  Φ_i: Voronoi region of the i-th exemplar
  Π_{i↔j}: permutation matrix that swaps columns i and j
  Π^(γ): Π in the γ-th iteration
  V: matrix obtained by the tangent-space projection of X
  H_{ij}: (i, j) element of a matrix H
  α_j: j-th column of the identity matrix
  Hα_j, α_j^T H: j-th column and row of a matrix H, respectively

Table 2: Complexity of the various computational steps
  Exponential map on M (assumed): O(ν)
  Inverse exponential map on M (assumed): O(χ)
  Intrinsic mean of X: O((nχ + ν)Γ)
  Projection of X to the tangent space: O(nχ)
  Geodesic distances in algorithm 1: O(nKχ)
  K intrinsic means: O((nχ + Kν)Γ)
  Algorithm 2: O(mnK log n)
  Exponential map on G_{m,p}: O(p³)
  Inverse exponential map on G_{m,p}: O(p³)
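The annealed alternation described above can be sketched in the Euclidean special case (a hedged sketch, not the paper's implementation: `rep_step` and `div_step` are simplified stand-ins for algorithms 1 and 2, and the swap-acceptance rule uses the multiplicative tolerance described above):

```python
import numpy as np

def rep_step(X, idx):
    """One representation pass (algorithm 1 flavour): snap each exemplar to
    the data point nearest its cluster mean (Euclidean stand-ins)."""
    d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
    labels = d.argmin(axis=1)
    new_idx = idx.copy()
    for i in range(len(idx)):
        members = X[labels == i]
        if len(members):
            mu = members.mean(axis=0)
            new_idx[i] = int(np.linalg.norm(X - mu, axis=1).argmin())
    return new_idx

def div_step(X, idx, tol):
    """One diversity pass (algorithm 2 flavour): take the best swap of an
    exemplar for an outside point, but only if it multiplies J_div by more
    than the current tolerance `tol`."""
    j_div = lambda s: float(np.prod(np.linalg.svd(X[s].T, compute_uv=False)))
    base = max(j_div(idx), 1e-12)
    best_gain, best_cand = tol, None
    for i in range(len(idx)):
        for j in range(len(X)):
            if j in idx:
                continue
            cand = idx.copy()
            cand[i] = j
            gain = j_div(cand) / base
            if gain > best_gain:
                best_gain, best_cand = gain, cand
    return idx if best_cand is None else best_cand

def subset_select(X, k, delta=0.05, seed=0):
    """Annealed alternation (algorithm 3 flavour): alternate diversity and
    representation passes while raising the swap-acceptance tolerance."""
    idx = np.random.default_rng(seed).choice(len(X), size=k, replace=False)
    tol = 1.0
    for _ in range(100):
        new_idx = rep_step(X, div_step(X, idx, tol))
        if np.array_equal(new_idx, idx):
            break
        idx, tol = new_idx, tol + delta
    return idx
```

With tol = 1 the first diversity pass accepts any improving swap, matching the behavior of algorithm 2 in the first iteration; raising tol by δ each round progressively freezes the selection.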
As seen in figure 1(b), the convergence of Jdiv and Jrep is obtained very quickly using the proposed annealing alternation technique. The complete annealing-based alternation algorithm is described in algorithm 3. A technical detail to note is that for algorithm 2, the input matrix V ∈ R^{d×n} should have d ≥ k. For cases where d < k, algorithm 2 can be replaced by its extension proposed in [9]. Table 1 shows the notations introduced in algorithms 1-3. Π_{i↔j} is obtained by permuting the i-th and j-th columns of the identity matrix.
3 Complexity, Special Cases and Limitations
In this section, we discuss how the proposed method relates to the special case of M = R^n, and to sub-manifolds of R^n specified by a large number of samples. For the case of R^n, the cost functions Jrep and Jdiv boil down to the familiar notions of clustering and low-rank matrix approximation, respectively. In this case, algorithm 3 reduces to an alternation between clustering and matrix approximation, with the annealing ensuring that the algorithm converges. This results in a new algorithm for subset selection in vector spaces. For the case of manifolds implicitly specified using samples, one can approach the problem in one of two ways. The first is to obtain an embedding of the space into a Euclidean space and apply the special case of the algorithm for M = R^n. The embedding here needs to preserve the geodesic distances between all pairs of points; multi-dimensional scaling can be used for this purpose. However, recent methods have also focused on estimating logarithmic maps numerically from sampled data points [25], which would make the algorithm directly applicable to such cases, without the need for a separate embedding. Thus the proposed formalism can accommodate manifolds with both known and unknown geometries. However, the formalism is limited to manifolds of finite dimension. Infinite-dimensional manifolds, such as diffeomorphisms [26] and spaces of closed curves [27],
pose problems in formulating the diversity cost function. While Jdiv could have been framed purely in terms of pairwise geodesics, making it extensible to infinite-dimensional manifolds, doing so would have made the optimization a significant bottleneck, as already discussed in section 2. Computational Complexity: The computational complexity of the exponential map and its inverse is specific to each manifold. Let n be the number of data points and K the number of exemplars to be selected. Table 2 enumerates the complexity of the different computational steps of the algorithm. The last two rows show the complexity of an efficient algorithm proposed by [28] to compute the exponential map and its inverse for the case of the Grassmann manifold G_{m,p}.
4 Experiments
Baselines: We compare the proposed algorithm with two baselines. The first baseline is a clustering-based solution to subset selection, where we cluster the dataset into K clusters and pick as exemplar the data point closest to each cluster centroid. Since clustering optimizes only the representation cost function, we do not expect it to have the diversity of the proposed algorithm.

Figure 1: Subset selection for a simple dataset consisting of unbalanced classes in R^4. (a) Data projected on R^2 for visualization using PCA. While trying to minimize the representational error, Jrep picks two exemplars from the dominant class; Jdiv picks diverse exemplars but from the boundaries. The proposed approach strikes a balance between the two and picks one 'representative' exemplar from each class. Convergence analysis of algorithm 3: (b) with annealing and (c) without annealing.
This corresponds to the special case of optimizing only Jrep. The second baseline is to apply a tangent-space approximation to the entire dataset at the mean of the dataset, and then apply a subset-selection algorithm such as RRQR. This corresponds to optimizing only Jdiv, where the input matrix is the matrix of tangent vectors. Since minimization of Jrep is not explicitly enforced, we do not expect the exemplars to be the best representatives, even though the set is diverse. A Simple Dataset: To gain some intuition, we first perform experiments on a simple synthetic dataset. For easy visualization and understanding, we generated a dataset with 3 unbalanced classes in the Euclidean space R^4. The individual cost functions Jrep and Jdiv were first optimized to pick three exemplars using algorithms 1 and 2, respectively. The selected exemplars are shown in figure 1(a), where the four-dimensional dataset has been projected into two dimensions for visualization using Principal Component Analysis (PCA). When optimized individually, Jdiv seeks to select exemplars from diverse classes but tends to pick them from the class boundaries, while the unbalanced class sizes cause Jrep to pick 2 exemplars from the dominant cluster. Algorithm 3 iteratively optimizes both of these cost functions and picks an exemplar from every class; these exemplars are closer to the centroids of the individual classes. Figure 1(b) shows the convergence of the algorithm for this simple dataset and compares it with the case when no annealing is applied (figure 1(c)). The Jrep and Jdiv plots are shown as the iterations of algorithm 3 progress. When annealing is applied, the tolerance value (tol) is increased by 0.05 in each iteration. In this case the algorithm converges to a steady state in 7 iterations (tol = 1.35). If no annealing is applied, the algorithm does not converge.
Shape sampling/summarization: We conducted a real shape summarization experiment on the MPEG dataset [29]. This dataset has 70 shape classes with 20 shapes per class. For our experiments, we created a smaller dataset of 10 shape classes with 10 shapes per class; figure 2(a) shows the shapes used in our experiments. We use an affine-invariant representation of shapes based on landmarks. Shape boundaries are uniformly sampled to obtain m landmark points, which are concatenated to form the landmark matrix L ∈ R^{m×2}. The left singular vectors U ∈ R^{m×2}, obtained by the singular value decomposition L = UΣV^T, give the affine-invariant representation of the shape [30]. This affine shape space of m landmark points is thus a 2-dimensional subspace of R^m; such p-dimensional subspaces of R^m constitute the Grassmann manifold G_{m,p}. Details of the algorithms for computing the exponential and inverse exponential maps on G_{m,p} can be found in [28] and have also been included in the supplemental material. In the experiment, the cardinality of the subset was set to 10. As the number of shape classes is also 10, one would ideally seek one exemplar from each class. Algorithms 1 and 2 were first individually optimized to select the optimal subset. Algorithm 1 was applied intrinsically on the manifold with multiple initializations; figure 2(b) shows the output with the least cost among these initializations. For algorithm 2, data points were projected onto the tangent space at the mean using the inverse exponential map, and the selected subset is shown in figure 2(c). Individual optimization of Jrep results in 1 exemplar each from 6 classes, 2 each from 2 classes ('apple' and 'flower'), and misses 2 classes ('bell' and 'chopper'). Individual optimization of Jdiv alone picks 1 each from 8 classes, 2 from the class 'car', and none from the class 'bell'.
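The affine-invariant representation described above can be sketched directly from its definition (assuming the landmarks have already been centered to remove translation; `subspace_dist` is one common choice of Grassmann geodesic distance, computed via principal angles, and is not taken from the paper):

```python
import numpy as np

def affine_shape(L):
    """Affine-invariant shape representation [30]: the left singular vectors
    of the (centered) m x 2 landmark matrix L. The column space of U is a
    2-dimensional subspace of R^m, i.e. a point on the Grassmann manifold
    G_{m,2}; any invertible linear deformation of the landmarks maps to the
    same subspace."""
    U, _, _ = np.linalg.svd(L, full_matrices=False)
    return U

def subspace_dist(U1, U2):
    """Geodesic distance on the Grassmann manifold via principal angles."""
    s = np.clip(np.linalg.svd(U1.T @ U2, compute_uv=False), -1.0, 1.0)
    return float(np.linalg.norm(np.arccos(s)))

rng = np.random.default_rng(0)
L = rng.normal(size=(20, 2))                 # 20 centered landmarks
A = np.array([[2.0, 0.3], [-0.1, 1.5]])      # invertible linear part of an affine map
```

Since L and LA span the same column space, `subspace_dist(affine_shape(L), affine_shape(L @ A))` is numerically zero, which is exactly the invariance the representation is designed for.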
It can be observed that the exemplars chosen by Jdiv for the classes 'glass', 'heart', 'flower' and 'apple' tend to be unusual members of the class; it also picks up the flipped car. Optimizing for both Jdiv and Jrep using algorithm 3 picks one 'representative' exemplar from each class, as shown in figure 2(d).

Figure 2: (a) 10 classes from the MPEG dataset with 10 shapes per class. Comparison of the 10 exemplars selected by (b) Jrep, (c) Jdiv and (d) the proposed approach. Jrep picks 2 exemplars each from 2 classes ('apple' and 'flower') and misses the 'bell' and 'chopper' classes. Jdiv picks 1 exemplar from each of 8 classes, 2 from the class 'car' and none from the class 'bell'. The exemplars chosen by Jdiv for the classes 'glass', 'heart', 'flower' and 'apple' tend to be unusual members of the class, and it also picks up the flipped car, while the proposed approach picks one representative exemplar from each class, as desired.

The exemplars picked by the three algorithms can further be used to label data points. Table 3 shows the confusion table thus obtained: for each data point, we find the nearest exemplar and label the data point with the ground-truth label of that exemplar. For example, consider the row labeled 'bell'. All data points of the class 'bell' were labeled 'pocket' by Jrep, while Jdiv labeled 7 data points from this class as 'chopper' and 3 as 'pocket'. This confusion is largely due to both Jrep and Jdiv having missed picking exemplars from this class. The proposed approach labels all data points correctly, as it picks exemplars from every class.

Table 3: Confusion table. Entries are tuples (Jrep, Jdiv, Proposed); columns are in the order Glass, Heart, Apple, Bell, Baby, Chopper, Flower, Car, Pocket, Teddy, and only the non-zero entries of each row are listed, in column order.
  Glass: (10,10,10)
  Heart: (10,10,10)
  Apple: (0,1,0) (8,7,10) (2,0,0) (0,2,0)
  Bell: (0,0,10) (0,7,0) (10,3,0)
  Baby: (10,10,10)
  Chopper: (2,0,0) (8,0,0) (0,10,10)
  Flower: (10,10,10)
  Car: (10,10,10)
  Pocket: (10,10,10)
  Teddy: (10,10,10)
Row labels correspond to the ground-truth labels of the shapes and column labels to the label of the nearest exemplar; only non-zero entries are shown in the table. KTH human action dataset: The next experiment was conducted on the KTH human action dataset [31], which consists of videos of 6 actions performed by 25 persons in 4 different scenarios. For our experiment, we created a smaller dataset of 30 videos, with the first 5 human subjects performing the 6 actions in the s4 (indoor) scenario. Figure 3(a) shows sample frames from each video. This dataset mainly consists of videos captured under constrained settings, which makes it difficult to identify 'usual' or 'unusual' members of a class. To better understand the performance of the three algorithms, we therefore synthetically added occlusion to the last video of each class; these occluded videos serve as the 'unusual' members. Histograms of Oriented Optical Flow (HOOF) [32] were extracted from each frame to obtain a normalized time series for each video. A Linear Dynamical System (LDS) was then estimated from this time series using the approach in [11]. The model is described by the state-transition equation x(t+1) = Ax(t) + w(t) and the observation equation z(t) = Cx(t) + v(t), where x ∈ R^d is the hidden state vector, z ∈ R^p is the observation vector, and w(t) ∼ N(0, Θ) and v(t) ∼ N(0, Ξ) are the noise components. Here, A is the state-transition matrix and C is the observation matrix.

Figure 3: (a) Sample frames from the KTH action dataset [31]. From top to bottom, the action classes are {box, run, walk, hand-clap, hand-wave, jog}. The 5 exemplars selected by: (b) Jrep, (c) Jdiv and (d) the proposed approach. The exemplars picked by Jrep correspond to the actions {box, run, run, hand-clap, hand-wave}, while Jdiv selects {box, walk, hand-clap, hand-wave, jog}. The proposed approach picks {box, run, walk, hand-clap, hand-wave}.
The expected observation sequence of the model (A, C) lies in the column space of the infinite extended 'observability' matrix, which is commonly approximated by the finite matrix O_m, with O_m^T = [C^T, (CA)^T, (CA²)^T, . . . , (CA^{m−1})^T]. The column space of O_m ∈ R^{mp×d} is a d-dimensional subspace and hence lies on the Grassmann manifold. In this experiment, we consider the scenario in which the number of classes in a dataset is unknown: we asked the algorithm to pick 5 exemplars when the actual number of classes in the dataset is 6. Figure 3(b) shows one frame from each of the videos selected when Jrep was optimized alone. It picks 1 exemplar each from 3 classes ('box', 'hand-clap' and 'hand-wave') and 2 from the class 'run', while missing 'walk' and 'jog'. On the other hand, Jdiv (when optimized alone) picks 1 each from 5 different classes and misses the class 'run'; it can be seen that Jdiv picks 2 exemplars that are 'unusual' members (occluded videos) of their respective classes. The proposed approach picks 1 representative exemplar from each of 5 classes and none from the class 'jog'. It thus achieves a diverse selection of exemplars while also avoiding outlying exemplars. Effect of Parameters and Initialization: In our experiments, smaller values of the tolerance step (δ < 0.1) have very little effect; after a few attempts, we fixed this value to 0.05 for all our experiments. In the first iteration, we start with tol = 1. With this value, algorithm 2 accepts any swap that increases Jdiv, which makes the output of algorithm 2 after the first iteration almost insensitive to initialization. In later iterations, swaps are accepted only if they increase the value of Jdiv significantly, and hence the input to algorithm 2 becomes more important as tol increases.
5 Conclusion and Discussion
In this paper, we addressed the problem of selecting K exemplars from a dataset when the dataset has an underlying manifold structure to it.
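The finite observability matrix can be built directly from its definition (a minimal sketch; the dimensions d, p, m and the system matrices below are illustrative, not taken from the experiments):

```python
import numpy as np

def observability(A, C, m):
    """Finite observability matrix O_m with block rows C, CA, ..., CA^{m-1};
    its d-dimensional column space represents the LDS (A, C) as a point on
    the Grassmann manifold."""
    blocks, M = [], C
    for _ in range(m):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)                  # shape (m*p, d)

d, p, m = 3, 4, 5                             # illustrative dimensions
rng = np.random.default_rng(0)
A = 0.5 * rng.normal(size=(d, d))             # state-transition matrix
C = rng.normal(size=(p, d))                   # observation matrix
O = observability(A, C, m)
U, _, _ = np.linalg.svd(O, full_matrices=False)  # orthonormal basis, a point on G_{mp,d}
```

An orthonormal basis U of the column space (e.g. from the SVD, as above) is what would be fed to the Grassmann distance and tangent-space computations used elsewhere in the paper.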
We utilized the geometric structure of the manifold to formulate the notion of picking exemplars that minimize the representational error while maximizing the diversity of the exemplars. An iterative alternation optimization technique based on annealing has been proposed. We discussed its convergence and complexity, and showed its extension to data manifolds and Euclidean spaces. We showed summarization experiments with real shape and human action datasets. Future work includes formulating subset selection for infinite-dimensional manifolds and efficient approximations for this case. Also, several special cases of the proposed approach point to new directions of research, such as the cases of vector spaces and data manifolds. Acknowledgement: This research was funded (in part) by grant N00014-09-1-0044 from the Office of Naval Research. The first author would like to thank Dikpal Reddy and Sima Taheri for helpful discussions and their valuable comments.
References
[1] K. Liu, E. Terzi, and T. Grandison, "ManyAspects: a system for highlighting diverse concepts in documents," in Proceedings of the VLDB Endowment, 2008.
[2] N. Shroff, P. Turaga, and R. Chellappa, "Video Précis: Highlighting diverse aspects of videos," IEEE Transactions on Multimedia, vol. 12, no. 8, pp. 853-868, Dec. 2010.
[3] I. Simon, N. Snavely, and S. Seitz, "Scene summarization for online image collections," in ICCV, 2007.
[4] W. Cochran, Sampling Techniques. Wiley, 1977.
[5] Y. Yue and T. Joachims, "Predicting diverse subsets using structural SVMs," in ICML, 2008.
[6] J. Carbonell and J. Goldstein, "The use of MMR, diversity-based reranking for reordering documents and producing summaries," in SIGIR, 1998.
[7] M. Gu and S. Eisenstat, "Efficient algorithms for computing a strong rank-revealing QR factorization," SIAM Journal on Scientific Computing, vol. 17, no. 4, pp. 848-869, 1996.
[8] P. Drineas, M. Mahoney, and S.
Muthukrishnan, “Relative-error CUR matrix decompositions,” SIAM Journal on Matrix Analysis and Applications, vol. 30, pp. 844–881, 2008. [9] C. Boutsidis, M. Mahoney, and P. Drineas, “An improved approximation algorithm for the column subset selection problem,” in SODA, 2009. [10] D. Kendall, “Shape manifolds, Procrustean metrics and complex projective spaces,” Bulletin of London Mathematical society, vol. 16, pp. 81–121, 1984. [11] S. Soatto, G. Doretto, and Y. N. Wu, “Dynamic textures,” ICCV, 2001. [12] J. D. Lafferty and G. Lebanon, “Diffusion kernels on statistical manifolds,” Journal of Machine Learning Research, vol. 6, pp. 129–163, 2005. [13] P. T. Fletcher, C. Lu, S. M. Pizer, and S. C. Joshi, “Principal geodesic analysis for the study of nonlinear statistics of shape,” IEEE Transactions on Medical Imaging, vol. 23, no. 8, pp. 995–1005, August 2004. [14] A. Srivastava, S. H. Joshi, W. Mio, and X. Liu, “Statistical shape analysis: Clustering, learning, and testing,” IEEE Transactions on pattern analysis and machine intelligence, vol. 27, no. 4, 2005. [15] G. Golub, “Numerical methods for solving linear least squares problems,” Numerische Mathematik, vol. 7, no. 3, pp. 206–216, 1965. [16] T. Chan, “Rank revealing QR factorizations,” Linear Algebra and Its Applications, vol. 88, pp. 67–82, 1987. [17] A. Frieze, R. Kannan, and S. Vempala, “Fast Monte-Carlo algorithms for finding low-rank approximations,” Journal of the ACM (JACM), vol. 51, no. 6, pp. 1025–1041, 2004. [18] A. Deshpande and L. Rademacher, “Efficient volume sampling for row/column subset selection,” in Foundations of Computer Science (FOCS), 2010. [19] G. Gan, C. Ma, and J. Wu, Data clustering: theory, algorithms, and applications. Society for Industrial and Applied Mathematics, 2007. [20] I. Dhillon and D. Modha, “Concept decompositions for large sparse text data using clustering,” Machine learning, vol. 42, no. 1, pp. 143–175, 2001. [21] B. J. Frey and D. 
Dueck, "Clustering by passing messages between data points," Science, vol. 315, pp. 972-976, Feb. 2007.
[22] R. Subbarao and P. Meer, "Nonlinear mean shift for clustering over analytic manifolds," in CVPR, 2006.
[23] A. Goh and R. Vidal, "Clustering and dimensionality reduction on Riemannian manifolds," in CVPR, 2008.
[24] H. Karcher, "Riemannian center of mass and mollifier smoothing," Communications on Pure and Applied Mathematics, vol. 30, no. 5, pp. 509-541, 1977.
[25] T. Lin and H. Zha, "Riemannian manifold learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 796-809, 2008.
[26] A. Trouvé, "Diffeomorphisms groups and pattern matching in image analysis," International Journal of Computer Vision, vol. 28, pp. 213-221, July 1998.
[27] W. Mio, A. Srivastava, and S. Joshi, "On shape of plane elastic curves," International Journal of Computer Vision, vol. 73, no. 3, pp. 307-324, 2007.
[28] K. Gallivan, A. Srivastava, X. Liu, and P. Van Dooren, "Efficient algorithms for inferences on Grassmann manifolds," in IEEE Workshop on Statistical Signal Processing, 2003.
[29] L. Latecki, R. Lakamper, and T. Eckhardt, "Shape descriptors for non-rigid shapes with a single closed contour," in CVPR, 2000.
[30] E. Begelfor and M. Werman, "Affine invariance revisited," in CVPR, 2006.
[31] C. Schuldt, I. Laptev, and B. Caputo, "Recognizing human actions: a local SVM approach," in ICPR, 2004.
[32] R. Chaudhry, A. Ravichandran, G. Hager, and R. Vidal, "Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions," in CVPR, 2009.
Facial Expression Transfer with Input-Output Temporal Restricted Boltzmann Machines Matthew D. Zeiler1, Graham W. Taylor1, Leonid Sigal2, Iain Matthews2, and Rob Fergus1 1Department of Computer Science, New York University, New York, NY 10012 2Disney Research, Pittsburgh, PA 15213 Abstract We present a type of Temporal Restricted Boltzmann Machine that defines a probability distribution over an output sequence conditional on an input sequence. It shares the desirable properties of RBMs: efficient exact inference, an exponentially more expressive latent state than HMMs, and the ability to model nonlinear structure and dynamics. We apply our model to a challenging real-world graphics problem: facial expression transfer. Our results demonstrate improved performance over several baselines modeling high-dimensional 2D and 3D data. 1 Introduction Modeling temporal dependence is an important consideration in many learning problems. One can capture temporal structure either explicitly in the model architecture, or implicitly through latent variables which can act as a “memory”. Feedforward neural networks which incorporate fixed delays into their architecture are an example of the former. A limitation of these models is that temporal context is fixed by the architecture instead of inferred from the data. To address this shortcoming, recurrent neural networks incorporate connections between the latent variables at different time steps. This enables them to capture arbitrary dynamics, yet they are more difficult to train [2]. Another family of dynamical models that has received much attention are probabilistic models such as Hidden Markov Models and more general Dynamic Bayes nets. Due to their statistical structure, they are perhaps more interpretable than their neural-network counterparts. 
Such models can be separated into two classes [19]: tractable models, which permit an exact and efficient procedure for inferring the posterior distribution over latent variables, and intractable models, which require approximate inference. Tractable models such as Linear Dynamical Systems and HMMs are widely applied and well understood. However, they are limited in the types of structure that they can capture, and these limitations are exactly what permit simple exact inference. Intractable models, such as Switching LDS, Factorial HMMs, and other more complex variants of DBNs, permit more complex regularities to be learned from data. This comes at the cost of using approximate inference schemes, for example Gibbs sampling or variational inference, which either introduce a computational burden or poorly approximate the true posterior. In this paper we focus on Temporal Restricted Boltzmann Machines [19, 20], a family of models that permits tractable inference but allows much more complicated structure to be extracted from time series data. Models of this class have a number of attractive properties: 1) they employ a distributed state space where multiple factors interact to explain the data; 2) they permit nonlinear dynamics and multimodal predictions; and 3) although maximum likelihood is intractable for these models, there exists a simple and efficient approximate learning algorithm that works well in practice. We concentrate on modeling the distribution of an output sequence conditional on an input sequence. Recurrent neural networks address this problem, though in a non-probabilistic sense. The Input-Output HMM [3] extends HMMs by conditioning both their dynamics and emission model on an input sequence. However, the IOHMM is representationally limited by its simple discrete state in the same way as an HMM. Therefore we extend TRBMs to cope with input-output sequence pairs.
Given the conditional nature of a TRBM (its hidden states and observations are conditioned on short histories of these variables), conditioning on an external input is a natural extension to this model. Several real-world problems involve sequence-to-sequence mappings. This includes motion-style transfer [9], economic forecasting with external indicators [13], and various tasks in natural language processing [6]. Sequence classification is a special case of this setting, where a scalar target is conditioned on an input sequence. In this paper, we consider facial expression transfer, a well-known problem in computer graphics. Current methods considered by the graphics community are typically linear (e.g., methods based on blendshape mapping) and they do not take into account dynamical aspects of the facial motion itself. This makes it difficult to retarget the facial articulations involved in speech. We propose a model that can encode a complex nonlinear mapping from the motion of one individual to another which captures facial geometry and dynamics of both source and target. 2 Related work In this section we discuss several latent variable models which can map an input sequence to an output sequence. We also briefly review our application field: facial expression transfer. 2.1 Temporal models Among probabilistic models, the Input-Output HMM [3] is most similar to the architecture we propose. Like the HMM, the IOHMM is a generative model of sequences but it models the distribution of an output sequence conditional on an input, while the HMM simply models the distribution of an output sequence. The IOHMM is also trained with a more discriminative-style EM-based learning paradigm than HMMs. A similarity between IOHMMs and TRBMs is that in both models, the dynamics and emission distributions are formulated as neural networks. However, the IOHMM state space is a multinomial while TRBMs have binary latent states. 
A K-state TRBM can thus represent the history of a time series using 2^K state configurations, while IOHMMs are restricted to K settings. The Continuous Profile Model [12] is a rich and robust extension of dynamic time warping that can be applied to many time series in parallel. The CPM has a discrete state space and requires an input sequence; it is therefore a type of conditional HMM. However, unlike the IOHMM and our proposed model, the input is unobserved, making learning completely unsupervised. Our approach is also related to the many proposed techniques for supervised learning with structured outputs. The problem of simultaneously predicting multiple, correlated variables has received a great deal of recent attention [1]. Many of these models, including the one we propose, are formally defined as undirected graphs whose potential functions are functions of some input. In Graph Transformer Networks [11] the dependency structure on the outputs is chosen to be sequential, which decouples the graph into pairwise potentials. Conditional Random Fields [10] are a special case of this model with linear potential functions. These models are trained discriminatively, typically with gradient descent, whereas our model is trained generatively using an approximate algorithm. 2.2 Facial expression transfer Facial expression transfer, also called motion retargeting or cross-mapping, is the act of adapting the motion of an actor to a target character. It, as well as the related fields of facial performance capture and performance-driven animation, has been a very active research area over the last several years. According to a review by Pighin [15], the two most important considerations for this task are the facial model parameterization (called "the rig" in the graphics industry) and the nature of the chosen cross-mapping. A popular parameterization is "blendshapes", where a rig is a set of linearly combined facial expressions, each controlled by a scalar weight.
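As a concrete illustration of the blendshape parameterization, the per-frame weights can be recovered by least squares (a hedged sketch, not the method of any cited paper; the neutral face and the blendshape offset matrix B are assumed given):

```python
import numpy as np

def blend_weights(frame, neutral, B):
    """Per-frame linear retargeting: least-squares blendshape weights w
    minimizing ||neutral + B w - frame||_2 (columns of B are blendshape
    offsets from the neutral face). All names here are illustrative."""
    w, *_ = np.linalg.lstsq(B, frame - neutral, rcond=None)
    return w

rng = np.random.default_rng(0)
neutral = rng.normal(size=60)        # e.g. 20 3-D vertices, flattened
B = rng.normal(size=(60, 4))         # 4 blendshape offset vectors
w_true = np.array([0.3, 0.0, 0.7, 0.1])
frame = neutral + B @ w_true         # a synthetic captured frame
```

Solving this independently per frame is exactly the static, linear cross-mapping whose limitations (no dynamics, no nonlinearity) motivate the model proposed in this paper.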
Retargeting amounts to estimating, at each frame of the source data, a set of blending weights that accurately reconstructs the target frame. There are many different ways of selecting blendshapes, from simply selecting a set of sufficient frames from the data to creating models based on principal components analysis. Another common parameterization is to simply represent the face by its vertex, polygon or spline geometry; the downside of this approach is that the representation has many more degrees of freedom than are present in an actual facial expression. A linear function is the most common choice for the cross-mapping. While it is simple to estimate from data, it cannot produce the subtle nonlinear motion required for realistic graphics applications. An example of this approach is [5], which uses a parametric model based on eigen-points to reliably synthesize simple facial expressions but ultimately fails to capture more subtle details. Vlasic et al. [23] have proposed a multilinear mapping where variation in appearance across the source and target is explicitly separated from the variation in facial expression. None of these models explicitly incorporates dynamics into the mapping, which is a limitation addressed by our approach. Finally, we note that Susskind et al. [18] have used RBMs for facial expression generation, but not retargeting; their work is focused on static rather than temporal data. 3 Modeling dynamics with Temporal Restricted Boltzmann Machines In this section we review the Temporal Restricted Boltzmann Machine. We then introduce the Input-Output Temporal Restricted Boltzmann Machine, which extends the architecture to model an output sequence conditional on an input sequence. 3.1 Temporal Restricted Boltzmann Machines A Restricted Boltzmann Machine [17] is a bipartite Markov Random Field consisting of a layer of stochastic observed variables ("visible units") connected to a layer of stochastic latent variables ("hidden units").
The absence of connections between hidden units ensures they are conditionally independent given a setting of the visible units, and vice versa. This simplifies inference and learning. The RBM can be extended to model temporal data by conditioning its visible units and/or hidden units on a short history of their activations. This model is called a Temporal Restricted Boltzmann Machine (TRBM) [19]. Conditioning the model on the previous settings of the hidden units complicates inference. Although one can approximate the posterior distribution with the filtering distribution (treating the past settings of the hidden units as fixed), we choose to use a simplified form of the model which conditions only on previous visible states [20]. This model inherits the most important computational properties of the standard RBM: simple, exact inference and efficient approximate learning. RBMs typically have binary observed variables and binary latent variables, but to model real-valued data (e.g., the parameterization of a face) we can use a modified form of the TRBM with conditionally independent linear-Gaussian observed variables [7]. The model, depicted in Fig. 1 (left), defines a joint probability distribution over a real-valued representation of the current frame of data, v_t, and a collection of binary latent variables, h_t, with h_{j,t} \in \{0, 1\}:

p(v_t, h_t \mid v_{<t}) = \exp(-E(v_t, h_t \mid v_{<t})) / Z(v_{<t}).    (1)

For notational simplicity, we concatenate a short history of data at t-1, ..., t-N into a vector which we call v_{<t}. The distribution specified by Eq. 1 is conditional on this history and normalized by a quantity Z which is intractable to compute exactly¹ but needed for neither inference nor learning. The joint distribution is characterized by an "energy function":

E(v_t, h_t \mid v_{<t}) = \sum_i \frac{1}{2}(v_{i,t} - \hat{a}_{i,t})^2 - \sum_j h_{j,t} \hat{b}_{j,t} - \sum_{ij} W_{ij} v_{i,t} h_{j,t}    (2)

which captures pairwise interactions between variables, assigning high energy to improbable configurations and low energy to probable configurations.
In the first term, each visible unit contributes a quadratic penalty that depends on its deviation from a "dynamic mean" determined by the history:

\hat{a}_{i,t} = a_i + \sum_k A_{ki} v_{k,<t}    (3)

where k indexes the history vector. Weight matrix A and offset vector a (with elements a_i) parameterize the autoregressive relationship between the history and current frame of data. Each hidden unit h_j contributes a linear offset to the energy which is also a function of the history:

\hat{b}_{j,t} = b_j + \sum_k B_{kj} v_{k,<t}.    (4)

[Footnote 1: To compute Z exactly we would need to integrate over the joint space of all possible output configurations and all settings of the binary latent variables.]

Weight matrix B and offset vector b (with elements b_j) parameterize the relationship between the history and the latent variables. The final term of Eq. 2 is a bilinear constraint on the interaction between the current setting of the visible units and hidden units, characterized by matrix W. The density for observation v_t conditioned on the past can be expressed by marginalizing out the binary hidden units in Eq. 1:

p(v_t \mid v_{<t}) = \sum_{h_t} p(v_t, h_t \mid v_{<t}) = \sum_{h_t} \exp(-E(v_t, h_t \mid v_{<t})) / Z(v_{<t}),    (5)

while the probability of observing a sequence, v_{(N+1):T}, given an N-frame history v_{1:N}, is simply the product of all the local conditional probabilities up to time T, the length of the sequence:

p(v_{(N+1):T} \mid v_{1:N}) = \prod_{t=N+1}^{T} p(v_t \mid v_{<t}).    (6)

The TRBM has been used to generate and denoise sequences [19, 20], as well as serving as a prior in multi-view person tracking [22]. In all cases, it requires an initialization, v_{1:N}, to perform these tasks. Alternatively, by learning a prior model of v_{1:N} it could easily be extended to model sequences non-conditionally, i.e., defining p(v_{1:T}). 3.2 Input-Output Temporal Restricted Boltzmann Machines Ultimately we are interested in learning a probabilistic mapping from an input sequence, s_{1:T}, to an output sequence, v_{1:T}. In other words, we seek a model that defines p(v_{1:T} \mid s_{1:T}).
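As a concrete illustration of the dynamic biases of Eqs. 3 and 4, the following is a minimal NumPy sketch. All dimensions and weight values here are made up for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: D visible units, J hidden units, N history frames.
D, J, N = 5, 8, 3
A = rng.normal(scale=0.01, size=(N * D, D))  # history-to-visible weights (Eq. 3)
B = rng.normal(scale=0.01, size=(N * D, J))  # history-to-hidden weights (Eq. 4)
a = np.zeros(D)                              # static visible offsets a_i
b = np.zeros(J)                              # static hidden offsets b_j

def dynamic_biases(v_hist):
    """Compute the dynamic biases of Eqs. 3 and 4 from the flattened
    history vector v_<t (the N previous frames concatenated)."""
    a_hat = a + v_hist @ A   # per-visible-unit "dynamic mean"
    b_hat = b + v_hist @ B   # per-hidden-unit dynamic offset
    return a_hat, b_hat

v_hist = rng.normal(size=N * D)
a_hat, b_hat = dynamic_biases(v_hist)
```

The point of the sketch is that conditioning on history is just a linear shift of the biases; the bilinear W term of Eq. 2 is untouched by the history.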
However, the TRBM only defines a distribution over an output sequence, p(v_{1:T}). Extending this model to learn an input-output mapping is the primary contribution of this paper. Without loss of generality, we will assume that in addition to having access to the complete history of the input, we also have access to the first N frames of the output. Therefore we seek to model p(v_{(N+1):T} \mid v_{1:N}, s_{1:T}). By placing an Nth-order Markov assumption on the current output, v_t (that is, assuming conditional independence of all other variables given an N-frame history of v_t and an (N+1)-frame history of the input, up to and including time t), we can operate in an online setting:

p(v_{(N+1):T} \mid v_{1:N}, s_{1:T}) = \prod_{t=N+1}^{T} p(v_t \mid v_{<t}, s_{\le t}),    (7)

where we have used the shorthand s_{\le t} to describe a vector that concatenates a window over the input at times t, t-1, ..., t-N. Note that in an offline setting, it is simple to generalize the model by conditioning the term inside the product on an arbitrary window of the source (which may include source observations past time t).

Figure 1: Left: A Temporal Restricted Boltzmann Machine. Middle: An Input-Output Temporal Restricted Boltzmann Machine. Right: A factored third-order IOTRBM (FIOTRBM).

We can easily adapt the TRBM to model p(v_t \mid v_{<t}, s_{\le t}) by modifying its energy function to incorporate the input. The general form of the energy function remains the same as Eq. 2, but it is now also conditioned on s_{\le t} by redefining the dynamic biases (Eq.
3 and 4) as follows:

\hat{a}_{i,t} = a_i + \sum_k A_{ki} v_{k,<t} + \sum_l P_{li} s_{l,\le t}    (8)

\hat{b}_{j,t} = b_j + \sum_k B_{kj} v_{k,<t} + \sum_l Q_{lj} s_{l,\le t}    (9)

where l is an index over elements of the input vector. Therefore the matrix P ties the input linearly to the output (much like existing simple models), but the matrix Q also allows the input to interact nonlinearly with the output through the latent variables h. We call this model an Input-Output Temporal Restricted Boltzmann Machine (IOTRBM). It is depicted in Fig. 1 (middle). A desirable criterion for training the model is to maximize the conditional log-likelihood of the data:

L = \sum_{t=N+1}^{T} \log p(v_t \mid v_{<t}, s_{\le t}).    (10)

However, the gradient of Eq. 10 with respect to the model parameters \theta = \{W, A, B, P, Q, a, b\} is difficult to compute analytically due to the normalization constant Z. Therefore, Contrastive Divergence (CD) learning is typically used in place of maximum likelihood. It follows the approximate gradient of an objective function that is the difference between two Kullback-Leibler divergences [8]. It is widely used in practice and tends to produce good generative models [4]. The CD updates for the IOTRBM have a common form (see the supplementary material for details):

\Delta\theta_i \propto \sum_{t=N+1}^{T} \left( \left\langle \frac{\partial E(v_t, h_t \mid v_{<t}, s_{\le t})}{\partial \theta_i} \right\rangle_{\text{data}} - \left\langle \frac{\partial E(v_t, h_t \mid v_{<t}, s_{\le t})}{\partial \theta_i} \right\rangle_{\text{recon}} \right)    (11)

where \langle \cdot \rangle_{\text{data}} is an expectation with respect to the training data distribution, and \langle \cdot \rangle_{\text{recon}} is the M-step reconstruction distribution obtained by alternating Gibbs sampling, starting with the visible units clamped to the training data. The input and output history stay fixed during Gibbs sampling. CD requires two main operations: 1) sampling the latent variables given a window of the input and output,

p(h_{j,t} = 1 \mid v_t, v_{<t}, s_{\le t}) = \left(1 + \exp\left(-\sum_i W_{ij} v_{i,t} - \hat{b}_{j,t}\right)\right)^{-1},    (12)

and 2) reconstructing the output data given the latent variables:

v_{i,t} \mid h_t, v_{<t}, s_{\le t} \sim \mathcal{N}\left(v_{i,t};\ \sum_j W_{ij} h_{j,t} + \hat{a}_{i,t},\ 1\right).    (13)

Eqs. 12 and 13 are alternated M times to arrive at the M-step quantities used in the weight updates.
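The alternation of Eqs. 12 and 13 can be sketched as follows. This is a minimal illustration of one CD reconstruction pass, not the authors' implementation; it assumes the dynamic biases of Eqs. 8 and 9 have already been computed, and all sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_reconstruction(v_t, W, a_hat, b_hat, n_gibbs=1):
    """Alternate Eq. 12 (sample binary hiddens) and Eq. 13 (unit-variance
    Gaussian reconstruction of visibles) n_gibbs times. The input and
    output history enter only through the fixed biases a_hat, b_hat."""
    v = v_t.copy()
    for _ in range(n_gibbs):
        p_h = sigmoid(v @ W + b_hat)                      # Eq. 12
        h = (rng.random(p_h.shape) < p_h).astype(float)
        v = W @ h + a_hat + rng.normal(size=a_hat.shape)  # Eq. 13
    return v, p_h

# Toy dimensions: 4 visible units, 6 hidden units.
W = rng.normal(scale=0.1, size=(4, 6))
v_rec, p_h = cd_reconstruction(rng.normal(size=4), W, np.zeros(4), np.zeros(6))
```

In a full CD update, the data-side and reconstruction-side statistics of Eq. 11 would be accumulated from v_t and v_rec respectively.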
More details are given in Sec. 4. 3.3 Factored Third-order Input-Output Temporal Restricted Boltzmann Machines In an IOTRBM the input and target history can only modify the hidden units and current output through additive biases. There has been recent interest in exploring higher-order RBMs in which variables interact multiplicatively [14, 16, 21]. Fig. 1 (right) shows an IOTRBM whose parameters W, Q and P have been replaced by a three-way weight tensor defining a multiplicative interaction between the three sets of variables. The introduction of the tensor makes the number of model parameters cubic, and therefore we factor the tensor into three matrices: W^s, W^h, and W^v. These parameters connect the input, hidden units, and current target, respectively, to a set of deterministic units which modulate the connections between variables. The introduction of these factors corresponds to a kind of low-rank approximation to the original interaction tensor, using O(K^2) parameters instead of O(K^3). The energy function of this model is:

E(v_t, h_t \mid v_{<t}, s_{\le t}) = \sum_i \frac{1}{2}(v_{i,t} - \hat{a}_{i,t})^2 - \sum_j h_{j,t} \hat{b}_{j,t} - \sum_f \sum_{ijl} W^v_{if} W^h_{jf} W^s_{lf} v_{i,t} h_{j,t} s_{l,\le t}    (14)

where f indexes the factors and \hat{a}_{i,t} and \hat{b}_{j,t} are defined by Eqs. 3 and 4 respectively. Weight updates all have the same form as Eq. 11 (see the supplementary material for details). The conditional distribution of the latent variables given the other variables becomes

p(h_{j,t} = 1 \mid v_t, v_{<t}, s_{\le t}) = \left(1 + \exp\left(-\sum_f W^h_{jf} \sum_i W^v_{if} v_{i,t} \sum_l W^s_{lf} s_{l,\le t} - \hat{b}_{j,t}\right)\right)^{-1}    (15)

and the reconstruction distribution becomes

v_{i,t} \mid h_t, v_{<t}, s_{\le t} \sim \mathcal{N}\left(v_{i,t};\ \sum_f W^v_{if} \sum_j W^h_{jf} h_{j,t} \sum_l W^s_{lf} s_{l,\le t} + \hat{a}_{i,t},\ 1\right).    (16)

4 Experiments We evaluate the IOTRBM on two facial expression transfer datasets, one based on 2D motion capture and the other on 3D motion capture.
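A sketch of the factored hidden-unit activation of Eq. 15, showing how the three-way interaction reduces to a product of per-factor projections (the dimensions are illustrative placeholders, not the paper's settings):

```python
import numpy as np

def factored_hidden_probs(v_t, s_win, Wv, Wh, Ws, b_hat):
    """Eq. 15: project the current visibles through W^v and the input
    window through W^s, multiply factor-wise, then project the result
    back through W^h to drive each hidden unit."""
    factor_drive = (v_t @ Wv) * (s_win @ Ws)   # one scalar per factor f
    return 1.0 / (1.0 + np.exp(-(Wh @ factor_drive + b_hat)))

rng = np.random.default_rng(0)
# Toy sizes: 5 visibles, 7 input elements, 8 hiddens, 3 factors.
Wv = rng.normal(scale=0.1, size=(5, 3))
Ws = rng.normal(scale=0.1, size=(7, 3))
Wh = rng.normal(scale=0.1, size=(8, 3))
p_h = factored_hidden_probs(rng.normal(size=5), rng.normal(size=7),
                            Wv, Wh, Ws, np.zeros(8))
```

Note that the factor projections cost only three matrix-vector products, which is the practical payoff of the low-rank factorization described above.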
On both datasets we compare our model against three baselines: Linear regression (LR): We perform a regularized linear regression between each frame of the input and each frame of the output. The model is solved analytically by least squares. The regularization parameter is set by cross-validation on the training set. Nth-order Autoregressive² model (AR): This model improves on linear regression by also considering linear dynamics through the history of the input and output. Again through regularized least squares, we fit a matrix that maps a concatenation of the (N+1)-frame input window s_{\le t} and the N-frame target window v_{<t} to the current target frame. Multilayer perceptron (MLP): A nonlinear model with one deterministic hidden layer of the same cardinality as the IOTRBM's. The input is the concatenation of the source and target history; the output is the current target frame. We train with a nonlinear conjugate gradient method. These baselines were chosen to highlight the main differences of our approach over the majority of techniques proposed for this application, namely the consideration of dynamics and the use of a nonlinear mapping through latent variables. We also tried an IORBM, that is, an IOTRBM with no target history. It consistently performed worse than the IOTRBM, so we do not report its results. Details of learning All models saw a window of 4 input frames (3 previous + 1 current) and 6 previous output frames, with the exception of linear regression, which only saw the current input. For the IOTRBM models, we found that initializing the parameters A and P to the solution found by the autoregressive model gave slightly better results. All other parameters were initialized to small random values. For CD learning we set the learning rates for A and P to 10^{-6} and for all other parameters to 10^{-3}. This was done to prevent strong correlations from dominating early in learning. All parameters used a fixed weight decay of 0.02 and momentum of 0.75.
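The linear regression and AR baselines both reduce to a regularized least-squares fit; a minimal sketch follows. The regularizer value and data shapes are placeholders (the paper sets the regularization parameter by cross-validation):

```python
import numpy as np

def fit_ridge(X, Y, lam=1e-2):
    """Closed-form regularized least squares: M = (X'X + lam*I)^{-1} X'Y.
    For the AR baseline, each row of X would concatenate the (N+1)-frame
    input window and the N-frame output history; each row of Y is the
    current output frame."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
M_true = rng.normal(size=(10, 3))
X = rng.normal(size=(500, 10))
Y = X @ M_true + 0.01 * rng.normal(size=(500, 3))
M = fit_ridge(X, Y)  # recovers M_true up to noise and a small shrinkage bias
```

Usage at test time is a single matrix product per frame, `y_pred = x @ M`, which is why these baselines are cheap but limited to linear cross-mappings.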
As suggested by [21], we added a small amount of Gaussian noise (σ = 0.1) to the output history to make the model more robust to unseen outputs during prediction (recall that the model sees true outputs at training time, but is fed back its own predictions at test time). 4.1 2D facial expression transfer The first dataset we consider consists of facial motion capture of two subjects who were asked to speak the same phrases. It has 186 trials, totaling 10414 frames per subject. Each frame is 180-dimensional, representing the x and y positions of 90 facial markers. Each pair of sequences has been manually time-aligned based on a phonetic transcription so that they are synchronized between subjects.

[Footnote 2: This model considers the history of the source when predicting the target, so it is not purely autoregressive.]

Table 1: 2D dataset. Mean RMS marker error (mm) on test output sequences.

Model             | S1   | S2   | S3   | S4   | S5   | S6   | Mean
Linear regression | 6.19 | 6.18 | 6.19 | 5.85 | 6.13 | 6.34 | 6.15 ± 0.15
Autoregressive    | 5.43 | 5.22 | 5.67 | 5.37 | 5.37 | 5.76 | 5.47 ± 0.20
MLP               | 5.30 | 5.28 | 5.76 | 5.31 | 5.28 | 5.31 | 5.37 ± 0.19
IOTRBM            | 5.31 | 5.27 | 5.71 | 5.14 | 5.17 | 5.08 | 5.28 ± 0.22
FIOTRBM           | 5.41 | 5.43 | 5.76 | 5.42 | 5.45 | 5.46 | 5.49 ± 0.13

Table 2: 2D dataset. Mean RMS error (in mm) under noisy input and output history (Split S6). Each cell lists errors at noise levels 0.01 / 0.1 / 1.

Model             | Input noise          | Output noise         | Input & output noise
Linear regression | 6.48 / 15.05 / 136.2 | N/A                  | N/A
Autoregressive    | 5.83 / 10.48 / 84.40 | 5.78 / 7.24 / 36.19  | 5.85 / 11.26 / 94.35
MLP               | 5.40 / 5.42 / 6.80   | 5.40 / 5.43 / 6.37   | 5.40 / 5.43 / 7.55
IOTRBM            | 5.06 / 5.07 / 5.39   | 5.07 / 5.18 / 8.48   | 5.07 / 5.17 / 8.57
FIOTRBM           | 5.46 / 5.46 / 5.66   | 5.46 / 5.46 / 5.56   | 5.46 / 5.46 / 5.82

Preprocessing We found the original data to exhibit significant random relative motion between the two faces throughout the sequences which could not reasonably be modeled.
Therefore, we transformed the data with an affine transform on all markers in each frame such that a select few nose and skull markers (stationary facial locations) were approximately fixed relative to the first frame of the source sequences. Both the input and output were reduced to 30 dimensions by retaining only their first 30 principal components. This maintained 99.9% of the variance in the data. Finally, the data was normalized to have zero mean and scaled by the average standard deviation of all the elements in the training set. We evaluate the various methods on 6 random splits of the dataset. In each case, 150 complete sequences are retained for training and the remaining 36 sequences are used for testing. Each model is presented with the first 6 frames of the true test output and successive 4-frame windows of the true test input. The exception is the linear regression model, which only sees the current input. Therefore prediction is measured from the 7th frame onward. The IOTRBM produces its final output by initializing its visible units with the most recent output frame plus a small amount of Gaussian noise and then performing 30 alternating Gibbs steps. At the last step, we do not sample the hidden units. This predicted output frame then becomes the most recent frame in the output history, and we iterate forward. The results shown are for an IOTRBM with 30 hidden units. We also tried a model with 100 hidden units, which performed slightly worse. Finally, we include the performance of a factored, third-order IOTRBM. This model used 30 hidden units and 50 factors. We report RMS marker error in mm, where the mean is taken over all markers, frames and test sequences (Table 1). Not surprisingly, the IOTRBM consistently outperforms linear regression. In all but two splits (where performance is comparable) the IOTRBM outperforms the AR model. Mean performance over the splits shows an advantage to our approach.
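The sliding-window prediction procedure described above can be sketched generically. The model call here is a stand-in for any of the compared models (e.g., 30 Gibbs steps in the IOTRBM); the window lengths follow the setup in the text, while the frame sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

def retarget(model_step, s_seq, v_init, n_hist=6, noise=0.01):
    """Seed the output with the true first n_hist frames, then repeatedly
    predict the next output frame and feed it back as history.
    `model_step(s_win, v_hist, v_prev)` is a hypothetical stand-in for
    one call to the trained model."""
    v_out = [v for v in v_init]
    for t in range(n_hist, len(s_seq)):
        s_win = s_seq[max(0, t - 3):t + 1]          # 4-frame input window
        v_hist = np.concatenate(v_out[-n_hist:])    # 6 previous output frames
        # Small noise on the most recent output, as used during training.
        v_prev = v_out[-1] + noise * rng.normal(size=v_out[-1].shape)
        v_out.append(model_step(s_win, v_hist, v_prev))
    return np.array(v_out)

# Trivial stand-in model: copy the current input frame to the output.
def model_step(s_win, v_hist, v_prev):
    return s_win[-1]

s_seq = [np.ones(2) * t for t in range(10)]
out = retarget(model_step, s_seq, [np.zeros(2)] * 6)
```

The loop makes explicit why output-history noise matters: from frame 7 onward the history consists entirely of the model's own predictions.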
This is also qualitatively apparent in videos, attached as supplementary material, that superimpose the true target with predictions from the model. We encourage the reader to view the attached videos, as certain aesthetic properties, such as the tradeoff between smoothness and responsiveness, are not captured by RMS error. We observed that on the 2D dataset, the FIOTRBM had no advantage over the simpler IOTRBM. To compare the robustness of each model to corrupted inputs or outputs, we added various amounts of white Gaussian noise to the input window, the output history initialization, or both during retargeting with a trained model. This was performed for data split S6 (though we observed similar results for other splits). The performance of each model is given in Table 2. The IOTRBM generally outperforms the baseline models in the presence of noise. This is most apparent in the case of input noise: the scenario we would most likely find in practice. However, under low to moderate output noise, we note that the IOTRBM is robust, to the point that it does not even require a valid N-frame output initialization to produce a sensible retargeting. Interestingly, we also observe the FIOTRBM performing well under high-noise conditions.

Table 3: 3D dataset. Mean RMS marker error (mm) on test output sequences.

Model          | S1   | S2   | S3   | S4   | S5   | Mean
Autoregressive | 2.12 | 2.98 | 2.44 | 2.26 | 2.46 | 2.45 ± 0.33
MLP            | 1.98 | 1.58 | 1.69 | 1.51 | 1.39 | 1.63 ± 0.22
IOTRBM         | 1.98 | 2.62 | 2.37 | 2.11 | 2.27 | 2.27 ± 0.25
FIOTRBM        | 1.70 | 1.54 | 1.55 | 1.42 | 1.48 | 1.54 ± 0.10

4.2 3D facial expression transfer The second dataset we consider consists of facial motion capture data of two subjects who were asked to perform a set of isolated facial movements based on FACS. The movements are more exaggerated than the speech performed in the 2D set. The dataset consists of two trials, totaling 1050 frames per subject. In contrast to the 2D set, the marker set used differs between subjects.
The first subject has 313 markers (939 dimensions per frame) and the second subject has 332 markers (996 dimensions per frame). There is no correspondence between marker sets. Preprocessing The 3D data was not spatially aligned. Both the input and output were PCA-reduced to 50 dimensions (99.9% of variance). We then normalized in the same way as for the 2D data. We evaluate performance on 5 random splits of the 3D dataset, shown in Table 3. The IOTRBM and FIOTRBM models considered have identical architectures to the ones used for 2D data. We found empirically that increasing the noise level of the output history to σ = 1 improved generalization on the smaller dataset. Figure 2: Retargeting with the third-order factored TRBM. We show every 30th frame. The top row shows the input. The bottom row shows the true target (circles) and the prediction from our model (crosses). This figure is best viewed in electronic form and zoomed. Similar to the experiments with 2D data, the IOTRBM consistently outperforms the autoregressive model. However, it does not outperform the MLP. Interestingly, the factored, third-order model considerably improves on the performance of the standard IOTRBM and the MLP. Fig. 2 visualizes the predictions made by the FIOTRBM. We also refer the reader to videos included as supplementary material. These demonstrate a qualitative improvement of our models over the baselines considered. 5 Conclusion We have introduced the Input-Output Temporal Restricted Boltzmann Machine, a probabilistic model for learning mappings between sequences. We presented two variants of the model, one with pairwise and one with third-order multiplicative interactions. Our experiments so far are limited to dynamic facial expression transfer, but nothing restricts the model to this domain. Current methods for facial expression transfer are unable to factor out style in the retargeted motion, making it difficult to adjust the emotional content of the resulting facial animation. 
We are therefore interested in exploring extensions of our model that include style-based contextual variables (cf. [21]). Acknowledgements The authors thank Rafael Tena and Sarah Hilder for assisting with data collection and annotation. Matlab code is available at: http://www.matthewzeiler.com/pubs/nips2011/.
References
[1] G. H. Bakir, T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, and S. V. N. Vishwanathan. Predicting Structured Data. MIT Press, 2007.
[2] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[3] Y. Bengio and P. Frasconi. An input/output HMM architecture. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Proc. NIPS 7, pages 427–434, 1995.
[4] M. Carreira-Perpinan and G. Hinton. On contrastive divergence learning. In AISTATS, pages 59–66, 2005.
[5] E. Chuang and C. Bregler. Performance driven facial animation using blendshape interpolation. Technical report, Stanford University, 2002.
[6] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, pages 160–167, 2008.
[7] Y. Freund and D. Haussler. Unsupervised learning of distributions of binary vectors using 2-layer networks. In Proc. NIPS 4, 1992.
[8] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[9] E. Hsu, K. Pulli, and J. Popović. Style translation for human motion. ACM Trans. Graph., 24(3):1082–1089, 2005.
[10] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, pages 282–289, 2001.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[12] J. Listgarten, R. Neal, S. Roweis, and A. Emili.
Multiple alignment of continuous time series. In Proc. NIPS 17, 2005.
[13] A. Mateo, A. Muñoz, and J. García-González. Modeling and forecasting electricity prices with input/output hidden Markov models. IEEE Trans. on Power Systems, 20(1):13–24, 1995.
[14] R. Memisevic and G. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 22(6):1473–1492, 2010.
[15] F. Pighin and J. P. Lewis. Facial motion retargeting. In ACM SIGGRAPH 2006 Courses, SIGGRAPH '06, New York, NY, USA, 2006. ACM.
[16] M. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In Proc. CVPR, pages 2551–2558, 2010.
[17] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart, J. L. McClelland, et al., editors, Parallel Distributed Processing: Volume 1: Foundations, pages 194–281. MIT Press, Cambridge, MA, 1986.
[18] J. Susskind, G. Hinton, J. Movellan, and A. Anderson. Generating facial expressions with deep belief nets. In Affective Computing, Focus on Emotion Expression, Synthesis and Recognition. I-TECH Education and Publishing, 2008.
[19] I. Sutskever and G. Hinton. Learning multilevel distributed representations for high-dimensional sequences. In Proc. AISTATS, 2007.
[20] G. W. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. In Proc. NIPS 19, 2007.
[21] G. Taylor and G. Hinton. Factored conditional restricted Boltzmann machines for modeling motion style. In Proc. ICML, pages 1025–1032, 2009.
[22] G. Taylor, L. Sigal, D. Fleet, and G. Hinton. Dynamical binary latent variable models for 3D human pose tracking. In Proc. CVPR, 2010.
[23] D. Vlasic, M. Brand, H. Pfister, and J. Popović. Face transfer with multilinear models. In ACM SIGGRAPH 2005, pages 426–433, 2005.
Committing Bandits Loc Bui∗ MS&E Department Stanford University Ramesh Johari† MS&E Department Stanford University Shie Mannor‡ EE Department Technion Abstract We consider a multi-armed bandit problem where there are two phases. The first phase is an experimentation phase where the decision maker is free to explore multiple options. In the second phase the decision maker has to commit to one of the arms and stick with it. Cost is incurred during both phases with a higher cost during the experimentation phase. We analyze the regret in this setup, and both propose algorithms and provide upper and lower bounds that depend on the ratio of the duration of the experimentation phase to the duration of the commitment phase. Our analysis reveals that if given the choice, it is optimal to experiment Θ(ln T ) steps and then commit, where T is the time horizon. 1 Introduction In a range of applications, a dynamic decision making problem exhibits two distinctly different kinds of phases: experimentation and commitment. In the first phase, the decision maker explores multiple options, to determine which might be most suitable for the task at hand. However, eventually the decision maker must commit to a choice, and use that decision for the duration of the problem horizon. A notable feature of these phases in the models we study is that costs are incurred during both phases; that is, experimentation is not carried out “offline,” but rather is run “live” in the actual system. For example, consider the design of a recommendation engine for an online retailer (such as Amazon). Experimentation amounts to testing different recommendation strategies on arriving customers. However, such testing is not carried out without consequences; the retailer might lose potential rewards if experimentation leads to suboptimal recommendations. 
Eventually, the recommendation engine must be stabilized (both from a software development standpoint and a customer expectation standpoint), and when this happens the retailer has effectively committed to one strategy moving forward. As another example, consider product design and delivery (e.g., tapeouts in semiconductor manufacturing, or major releases in software engineering). The process of experimentation during design entails costs to the producer, but eventually the experimentation must stop and the design must be committed. Another example is that of dating followed by marriage to, hopefully, the best possible mate. In this paper we consider a class of multi-armed bandit problems (which we call committing bandit problems) that mix these two features: the decision maker is allowed to try different arms in each period until commitment, at which point a final choice is made ("committed") and the chosen arm is used until the end of the horizon. Of course, models that investigate each phase in isolation have been extensively studied. If the problem consists of only experimentation, then we have the classical multi-armed bandit problem, where the decision maker is interested in minimizing the expected total regret against the best arm [9, 2]. At the other extreme, several papers have studied the pure exploration or budgeted learning problem, where the goal is to output the best arm at the end of an experimentation phase [13, 6, 4]; no costs are incurred for experimentation, but after finite time a single decision must be chosen (see [12] for a review).

[∗Email: locbui@stanford.edu. †Email: ramesh.johari@stanford.edu. ‡Email: shie@ee.technion.ac.il.]

Formally, in a committing bandit problem, the decision maker can experiment without constraints for the first N of T periods, but must commit to a single decision for the last T − N periods, where T is the problem horizon.
We first consider the soft deadline setting, where the experimentation deadline N can be chosen by the decision maker but a cost is incurred per experimentation period. We divide this setting into two regimes depending on how N is chosen: the non-adaptive regime (Section 3), in which the decision maker has to choose N before the algorithm begins running, and the adaptive regime (Section 4), in which N can be chosen adaptively as the algorithm runs. We obtain two main results for the soft deadline setting. First, in both regimes, we find that the best tradeoff between experimentation and commitment (in terms of expected regret performance) is essentially obtained by experimenting for N = Θ(ln T) periods and then committing to the empirical best action for the remaining T − Θ(ln T) periods; this yields an expected average regret of Θ(ln T / T). Second, and somewhat surprisingly, we find that if the algorithm has access to distributional information about the arms, then adaptivity provides no additional benefit (at least in terms of expected regret performance); however, as we observe via simulations, on a sample-path basis adaptive algorithms can outperform non-adaptive algorithms due to the additional flexibility. Finally, we demonstrate that if the algorithm has no initial distributional information, adaptivity is beneficial: we demonstrate an adaptive algorithm that achieves Θ(ln T / T) regret in this case. We then study the hard deadline regime, where the value of N is given to the decision maker in advance (Section 5). This is a sensible assumption for problems where the decision maker cannot control how long the experimentation period is; for example, in the product design example above, the release date is often fixed well in advance, and the engineers are not generally free to alter it. We propose the UCB-poly(δ) algorithm for this setting, where the parameter δ ∈ (0, 1) reflects the tradeoff between experimentation and commitment.
We show how to tune the algorithm to optimally choose δ, based on the relative values of N and T. We mention in passing that the celebrated exploration-exploitation dilemma is also a major issue in our setup. During the first N periods the tradeoff between exploration and exploitation exists, bearing in mind that the last T − N periods will be used solely for exploitation. This changes the standard setup so that exploration in the first N periods becomes more important, as we shall see in our results. 2 The committing bandit problem We first describe the setup of the classical stochastic multi-armed bandit problem, as it will serve as background for the committing bandit problem. In a stochastic multi-armed bandit problem, there are K independent arms; each arm i, when pulled, returns a reward which is independently and identically drawn from a fixed Bernoulli distribution¹ with unknown parameter θ_i ∈ [0, 1]. Let I_t denote the index of the arm pulled at time t (I_t ∈ {1, 2, ..., K}), and let X_t denote the associated reward. Note that E[X_t] = θ_{I_t}. Also, we define the following notation:

θ^* := \max_{1 \le i \le K} θ_i,  i^* := \arg\max_{1 \le i \le K} θ_i,  ∆_i := θ^* − θ_i,  ∆ := \min_{i: ∆_i > 0} ∆_i.

An allocation policy is an algorithm that chooses the next arm to pull based on the sequence of past pulled arms and obtained rewards. The cumulative regret of an allocation policy A after time n is:

R_n = \sum_{t=1}^{n} (X^*_t − X_t),

where X^*_t is the reward that the algorithm would have received at time t if it had pulled the optimal arm i^*. In other words, R_n is the cumulative loss due to the fact that the allocation policy does not always pull the optimal arm. Let T_i(n) be the number of times that arm i is pulled up to time n.

[Footnote 1: We assume Bernoulli distributions throughout the paper. Our results hold with minor modification for any distribution with bounded support.]

Then:
E[R_n] = θ^* n − \sum_{i=1}^{K} θ_i E[T_i(n)] = \sum_{i \ne i^*} ∆_i E[T_i(n)].

The reader is referred to the supplementary material for some well-known allocation policies, e.g., Unif (Uniform allocation) and UCB (Upper Confidence Bound) [2]. A recommendation policy is an algorithm that tries to recommend the "best" arm based on the sequence of past pulled arms and obtained rewards. Suppose that after time n, a recommendation policy R recommends the arm J_n as the "best" arm. Then the regret of recommendation policy R after time n, called the simple regret in [4], is defined as r_n = θ^* − θ_{J_n} = ∆_{J_n}. The reader is also referred to the supplementary material for some natural recommendation policies, e.g., EBA (Empirical Best Arm) and MPA (Most Played Arm). The committing bandit problem considered in this paper is a version of the stochastic multi-armed bandit problem in which the algorithm is forced to commit to only one arm after some period of time. More precisely, the problem setting is as follows. Let T be the time horizon of the problem. From time 1 to some time N (N < T), the algorithm can pull any arm in {1, 2, ..., K}. Then, from time N + 1 to the end of the horizon (time T), it must commit to pulling only one arm. The first phase (time 1 to N) is called the experimentation phase, and the second phase (time N + 1 to T) is called the commitment phase. We refer to time N as the experimentation deadline. An algorithm for the committing bandit problem is a combination of an allocation and a recommendation policy. That is, the algorithm has to decide which arm to pull during the first N slots, and then choose an arm to commit to during the remaining T − N slots. Because we consider settings where the algorithm designer can choose the experimentation deadline, we also assume a cost is imposed during the experimentation phase; otherwise, it is never optimal to be forced to commit. In particular, we assume that the reward earned during the experimentation phase is reduced by a constant factor γ ∈ [0, 1).
Thus the expected regret E[Reg] of such an algorithm is the average regret across both phases, i.e.:
$$\mathbb{E}[Reg] = \frac{1}{T}\left(\sum_{t=1}^{T}\theta^* - \gamma\sum_{t=1}^{N}\mathbb{E}[\theta_{I_t}] - \sum_{t=N+1}^{T}\mathbb{E}[\theta_{J_N}]\right) = \gamma\,\frac{\mathbb{E}[R_N]}{T} + \frac{T-N}{T}\,\mathbb{E}[r_N] + \frac{(1-\gamma)N\theta^*}{T}.$$

2.1 Committing bandit regimes

We focus on three distinct regimes, which differ in the level of control given to the algorithm designer in choosing the experimentation deadline.

Regime 1: Soft experimentation deadline, non-adaptive. In this regime, the value of T is given to the algorithm. For a given value of T, the value of N can be chosen freely between 1 and T − 1, but the choice must be made before the process begins.

Regime 2: Soft experimentation deadline, adaptive. The setting in this regime is the same as the previous one, except for the fact that the algorithm can choose the value of N adaptively as outcomes of past pulls are observed.

Regime 3: Hard experimentation deadline. In this regime, both N and T are fixed and given to the algorithm. That is, the algorithm cannot control the experimentation deadline N. We are mainly interested in the asymptotic behavior of the algorithm when both N and T go to infinity.

2.2 Known lower bounds

As mentioned in the Introduction, the experimentation and commitment phases have each been extensively studied in isolation. In this subsection, we only summarize briefly the known lower bounds on cumulative regret and simple regret that will be used in the paper.

Result 1 (Distribution-dependent lower bound on cumulative regret [9]). For any allocation policy, and for any set of reward distributions such that their parameters θi are not all equal, there exists an ordering of (θ1, . . . , θK) such that
$$\mathbb{E}[R_n] \ge \left(\sum_{i\ne i^*}\frac{\Delta_i}{D(p_i\|p^*)} + o(1)\right)\ln n,$$
where $D(p_i\|p^*) = p_i\log\frac{p_i}{p^*} + (1-p_i)\log\frac{1-p_i}{1-p^*}$ is the Kullback-Leibler divergence between the Bernoulli reward distributions pi (of arm i) and p∗ (of the optimal arm), and o(1) → 0 as n → ∞.

Result 2 (Distribution-free lower bound on cumulative regret [13]).
There exist positive constants c and N0 such that for any allocation policy, there exists a set of Bernoulli reward distributions such that
$$\mathbb{E}[R_n] \ge cK(\ln n - \ln K), \qquad \forall n \ge N_0.$$
The difference between Result 1 and Result 2 is that the lower bound in the former depends on the parameters of the reward distributions (hence called distribution-dependent), while the lower bound in the latter does not (hence called distribution-free). That is, in the latter case, the reward distributions can be chosen adversarially. Therefore, it should be clear that the distribution-free lower bound is always at least as high as the distribution-dependent lower bound.

Result 3 (Distribution-dependent lower bound on simple regret [4]). For any pair of allocation and recommendation policies, if the allocation policy achieves an upper bound such that for all (Bernoulli) reward distributions θ1, . . . , θK there exists a constant C ≥ 0 with E[Rn] ≤ Cf(n), then for all sets of K ≥ 3 Bernoulli reward distributions with parameters θi that are all distinct and all different from 1, there exists an ordering (θ1, . . . , θK) such that
$$\mathbb{E}[r_n] \ge \frac{\Delta}{2}\,e^{-Df(n)},$$
where D is a constant which can be calculated in closed form from C and θ1, . . . , θK. In particular, since E[Rn] ≤ θ∗n for any allocation policy, there exists a constant ξ depending only on θ1, . . . , θK such that $\mathbb{E}[r_n] \ge (\Delta/2)e^{-\xi n}$.

Result 4 (Distribution-free lower bound on simple regret [4]). For any pair of allocation and recommendation policies, there exists a set of Bernoulli reward distributions such that
$$\mathbb{E}[r_n] \ge \frac{1}{20}\sqrt{\frac{K}{n}}.$$
In the subsequent sections we analyze each of the committing bandit regimes in detail; in particular, we provide constructive upper bounds and matching lower bounds on the regret in each regime. The detailed proofs of all the results in this paper are presented in the supplementary material.
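Before analyzing the regimes, it is worth sanity-checking the regret decomposition E[Reg] = γE[R_N]/T + ((T−N)/T)E[r_N] + (1−γ)Nθ∗/T from Section 2. The sketch below (our own; function names are invented) computes both sides from a fixed sequence of expected rewards E[θ_{I_t}] during experimentation and the mean of the committed arm:

```python
def committing_regret(theta_star, mus_exp, mu_commit, T, gamma):
    """Direct definition: average per-period expected regret over both phases."""
    N = len(mus_exp)
    return (T * theta_star
            - gamma * sum(mus_exp)          # discounted experimentation reward
            - (T - N) * mu_commit) / T      # commitment-phase reward

def decomposed(theta_star, mus_exp, mu_commit, T, gamma):
    """Same quantity via the decomposition gamma*R_N/T + (T-N)/T*r_N + (1-gamma)*N*theta*/T."""
    N = len(mus_exp)
    R_N = N * theta_star - sum(mus_exp)     # cumulative regret of experimentation
    r_N = theta_star - mu_commit            # simple regret of the committed arm
    return (gamma * R_N / T
            + (T - N) / T * r_N
            + (1 - gamma) * N * theta_star / T)
```

For any inputs, the two functions agree up to floating-point error, confirming the algebra of the decomposition.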
3 Regime 1: Soft experimentation deadline, non-adaptive

In this regime, for a given value of T, the value of N can be chosen freely between 1 and T − 1, but only before the algorithm begins pulling arms. Our main insight is that there exist matching upper and lower bounds of order Θ(ln T/T); further, we propose an algorithm that achieves this performance.

Theorem 1. (1) Distribution-dependent lower bound: In Regime 1, for any algorithm, and any set of K ≥ 3 Bernoulli reward distributions such that the θi are all distinct and all different from 1, there exists an ordering (θ1, . . . , θK) such that
$$\mathbb{E}[Reg] \ge \max\left\{\frac{(1-\gamma)\theta^*}{\xi},\ \sum_{i\ne i^*}\frac{\Delta_i}{D(p_i\|p^*)} + o(1)\right\}\frac{\ln T}{T},$$
where o(1) → 0 as T → ∞, and ξ is the constant discussed in Result 3. (2) Distribution-free lower bound: Also, for any algorithm in Regime 1, there exists a set of Bernoulli reward distributions such that
$$\mathbb{E}[Reg] \ge cK\left(1 - \frac{\ln K}{\ln T}\right)\frac{\ln T}{T},$$
where c is the constant in Result 2.

We now show that the Non-adaptive Unif-EBA algorithm (Algorithm 1) achieves the matching upper bound, as stated in the following theorem.

Algorithm 1 Non-adaptive Unif-EBA
Input: a set of arms {1, 2, . . . , K}, T, ∆
repeat
  Sample each arm in {1, 2, . . . , K} in round-robin fashion.
until each arm has been chosen ⌈ln T/∆²⌉ times.
Commit to the arm with maximum empirical average reward for the remaining periods.

Theorem 2. For the Non-adaptive Unif-EBA algorithm (Algorithm 1),
$$\mathbb{E}[Reg] \le \frac{K}{\Delta^2}\left((1-\gamma)\theta^* + \frac{\gamma}{K}\sum_{i\ne i^*}\Delta_i + \frac{2\Delta^2}{\ln T}\right)\frac{\ln T}{T}.$$
This matches the lower bounds in Theorem 1 to the correct order in T. Observe that in this regime, both the distribution-dependent and distribution-free lower bounds have the same asymptotic order of ln T/T. However, the preceding algorithm requires knowing the value of ∆. If ∆ is unknown, a low-regret algorithm that matches the lower bound does not seem to be possible in this regime, because of the relative nature of the regret.
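Non-adaptive Unif-EBA fits in a few lines of Python. The following is our own sketch (the function name and the caller-supplied `pull(i)` reward callback are invented); for i.i.d. rewards, pulling each arm n times in a row is statistically equivalent to the round-robin schedule of Algorithm 1:

```python
import math

def unif_eba(pull, K, T, Delta):
    """Pull each arm ceil(ln T / Delta^2) times, then commit to the arm with
    the highest empirical mean. Returns (committed arm, experimentation length)."""
    n = math.ceil(math.log(T) / Delta ** 2)
    means = [sum(pull(i) for _ in range(n)) / n for i in range(K)]
    best = max(range(K), key=lambda i: means[i])
    return best, n * K
```

With K = 3, T = 100 and ∆ = 0.5, the experimentation phase lasts 3·⌈ln 100/0.25⌉ = 57 periods, illustrating why the known-∆ requirement matters: a smaller ∆ lengthens experimentation quadratically.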
An algorithm may be unable to choose an N that explores sufficiently long when arms are difficult to distinguish, and yet commits quickly when arms are easy to distinguish.

4 Regime 2: Soft experimentation deadline, adaptive

The setting in this regime is the same as the previous one, except that the algorithm is not required to choose N before it runs, i.e., N can be chosen adaptively. Thus, in particular, it is possible for the algorithm to reject bad arms or to estimate ∆ as it runs. We first present the lower bounds on regret for any algorithm in this regime.

Theorem 3. (1) Distribution-dependent lower bound: In Regime 2, for any algorithm, and any set of K ≥ 3 Bernoulli reward distributions such that the θi are all distinct and all different from 1, there exists an ordering (θ1, . . . , θK) such that
$$\mathbb{E}[Reg] \ge \left(\sum_{i\ne i^*}\frac{\Delta_i}{D(p_i\|p^*)} + o(1)\right)\frac{\ln T}{T},$$
where o(1) → 0 as T → ∞. (2) Distribution-free lower bound: Also, for any algorithm in Regime 2, there exists a set of Bernoulli reward distributions such that
$$\mathbb{E}[Reg] \ge cK\left(1 - \frac{\ln K}{\ln T}\right)\frac{\ln T}{T},$$
where c is the constant in Result 2.

Next, we derive several sequential algorithms with matching upper bounds on regret. The first algorithm is called Sequential Elimination & Commitment 1 (SEC1) (Algorithm 2); this algorithm requires the values of ∆ and θ∗.

Theorem 4. For the SEC1 algorithm (Algorithm 2),
$$\mathbb{E}[Reg] \le \frac{K}{\Delta^2}\left((1-\gamma)\theta^* + \frac{\gamma}{K}\sum_{i\ne i^*}\Delta_i + b\right)\frac{\ln T}{T},$$
where $b = \left(2 + \frac{\Delta^2(K+2)}{(1-e^{-\Delta^2/2})^2}\right)\frac{1}{\ln T} \to 0$ as T → ∞.

Algorithm 2 Sequential Elimination & Commitment 1 (SEC1)
Input: a set of arms {1, 2, . . . , K}, T, ∆, θ∗
Initialization: set m = 0, B0 = {1, 2, . . . , K}, α = 1/∆², ε1 = 1/∆, ε2 = ∆/2.
repeat
  Sample each arm in B_m once. Let S_i^m be the total reward obtained from arm i so far.
  Set B_{m+1} = B_m, m = m + 1.
  for i ∈ B_m do
    if m ≤ ⌈α ln T⌉ and |mθ∗ − S_i^m| > ε1 ln T then
      Delete arm i from B_m.
    end if
    if m > ⌈α ln T⌉ and |mθ∗ − S_i^m| > ε2 m then
      Delete arm i from B_m.
    end if
  end for
until there is only one arm in B_m (then commit to that arm) or the horizon T is reached.

Observe that this algorithm matches the lower bounds in Theorem 3 to the correct order in T. We note that when N can be chosen adaptively, both the distribution-dependent and distribution-free lower bounds have the same asymptotic order of ln T/T as the ones in the non-adaptive regime. In the distribution-dependent case, therefore, we obtain the surprising conclusion that adaptivity does not reduce the optimal expected regret. Indeed, the regret bound of SEC1 in Theorem 4 is exactly the same as for Non-adaptive Unif-EBA in Theorem 2. We conjecture that the constant 1/∆² is actually the best achievable constant on expected regret. What is the benefit of adaptivity then? As the simulation results in Section 6 suggest, SEC1 performs much better than Non-adaptive Unif-EBA in practice. The reason is rather intuitive: due to its adaptive nature, SEC1 is able to eliminate poor arms much earlier than the ⌈ln T/∆²⌉ threshold, while Non-adaptive Unif-EBA has to wait until that point to make decisions.

Remark 1. Although SEC1 requires the value of θ∗, that requirement can be relaxed, as θ∗ can be estimated by the maximum empirical average reward across arms. In fact, as we will see in the simulations (Section 6), another version of SEC1 (called SEC2), in which mθ∗ is replaced by max_{j∈B_m} S_j^m, achieves nearly identical performance.

Now, if the value of ∆ is unknown, we have the following Sequential Committing UCB (SC-UCB) algorithm, which is based on the improved UCB algorithm in [3]. The idea is to maintain an estimate of ∆ and reduce it over time.

Algorithm 3 Sequential Committing UCB (SC-UCB)
Input: a set of arms {1, 2, . . . , K}, T
Initialization: set m = 0, ∆̃₀ = 1, B0 = {1, 2, . . . , K}.
for m = 0, 1, 2, . . . , ⌊log₂(T/e)/2⌋ do
  if |B_m| > 1 then
    Sample each arm in B_m until each arm has been chosen $n_m = \lceil 2\ln(T\tilde\Delta_m^2)/\tilde\Delta_m^2\rceil$ times.
    Let S_i^m be the total reward obtained from arm i so far.
    Delete all arms i from B_m for which $\max_{j\in B_m} S_j^m - S_i^m > 2\sqrt{n_m\ln(T\tilde\Delta_m^2)/2}$ to obtain B_{m+1}.
    Set ∆̃_{m+1} = ∆̃_m/2.
  else
    Commit to the single arm in B_m.
  end if
end for
Commit to any arm in B_m.

Theorem 5. For the SC-UCB algorithm (Algorithm 3),
$$\mathbb{E}[Reg] \le \sum_{i\ne i^*}\left(\gamma\Delta_i + \frac{(1-\gamma)\theta^*}{\Delta_i^2}\right)\frac{\ln(T\Delta_i^2)}{T}\left(32 + \Delta_i^2 + \frac{96}{\ln(T\Delta_i^2)}\right).$$
This matches the lower bounds in Theorem 3 to the correct order in T.

5 Regime 3: Hard experimentation deadline

We now investigate the third regime where, in contrast to the previous two, the experimentation deadline N is fixed exogenously together with T. We consider the asymptotic behavior of regret as T and N approach infinity together. Note that since in this case the experimentation deadline is outside the algorithm designer's control, we set the cost of experimentation γ = 1 for this section. Because both T and N are given, the main challenge in this context is choosing an algorithm that optimally balances the cumulative and simple regrets. We design and tune an algorithm that achieves this balance.

We know from Result 3 that for any pair of allocation and recommendation policies, if E[R_N] ≤ C1 f(N), then E[r_N] ≥ (∆/2)e^{−Df(N)}. In other words, given an allocation policy A that has a cumulative regret bound C1 f(N) (for some constant C1), the best (distribution-dependent) upper bound that any recommendation policy can achieve is C2 e^{−C3 f(N)} (for some constants C2 and C3). Assuming that there exists a recommendation policy R_A that achieves such an upper bound, we have the following upper bound on regret when applying [A, R_A] to the committing bandit problem:
$$\mathbb{E}[Reg] \le C_1\frac{f(N)}{T} + \frac{T-N}{T}\,C_2 e^{-C_3 f(N)}. \qquad (1)$$
One can clearly see the trade-off between experimentation and commitment in (1): the smaller the first term, the larger the second term, and vice versa.
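Both adaptive algorithms can be sketched compactly in Python. This is our own illustration, not code from the paper: the function names, the caller-supplied `pull(i)` reward callback, and the guard in `sec1` against eliminating every surviving arm in the same round are all our additions:

```python
import math

def sec1(pull, K, T, Delta, theta_star):
    """Sketch of SEC1: eliminate arm i once its total reward S_i drifts too far
    from m * theta_star (threshold eps1*ln T early, eps2*m later)."""
    B, S = set(range(K)), [0.0] * K
    alpha, eps1, eps2 = 1.0 / Delta ** 2, 1.0 / Delta, Delta / 2.0
    cutoff = math.ceil(alpha * math.log(T))
    m, used = 0, 0
    while len(B) > 1 and used + len(B) <= T:
        for i in B:
            S[i] += pull(i)
        used += len(B)
        m += 1
        thresh = eps1 * math.log(T) if m <= cutoff else eps2 * m
        survivors = {i for i in B if abs(m * theta_star - S[i]) <= thresh}
        if survivors:                # guard: never empty the surviving set
            B = survivors
    return max(B, key=lambda i: S[i])        # the committed arm

def sc_ucb(pull, K, T):
    """Sketch of SC-UCB: epoch-based elimination with a halving gap estimate;
    requires no knowledge of Delta."""
    B, S, counts, delta = set(range(K)), [0.0] * K, [0] * K, 1.0
    for _ in range(int(math.log2(T / math.e) / 2) + 1):
        if len(B) == 1:
            break
        n_m = math.ceil(2 * math.log(T * delta ** 2) / delta ** 2)
        for i in B:
            while counts[i] < n_m:           # bring every survivor up to n_m pulls
                S[i] += pull(i)
                counts[i] += 1
        radius = 2 * math.sqrt(n_m * math.log(T * delta ** 2) / 2)
        best = max(S[i] for i in B)
        B = {i for i in B if best - S[i] <= radius}
        delta /= 2
    return max(B, key=lambda i: S[i] / max(counts[i], 1))
```

With a deterministic `pull`, both sketches commit to the best arm; in a simulation one would pass a callback drawing Bernoulli rewards instead.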
Note that ln N ≤ f(N) ≤ N, and we have algorithms that give us only either one of the extremes (e.g., Unif has f(N) = N, while UCB [2] has f(N) = ln N). On the other hand, it would be useful to have an algorithm that can balance between these two extremes. In particular, we focus on finding a pair of allocation and recommendation policies which can simultaneously achieve the allocation bound $C_1 N^\delta$ and the recommendation bound $C_2 e^{-C_3 N^\delta}$, where 0 < δ < 1. Let us consider a modification of the UCB allocation policy called UCB-poly(δ) (for 0 < δ < 1), where for t > K, with $\hat\theta_{i,T_i(t-1)}$ being the empirical average of rewards from arm i so far,
$$I_t = \arg\max_{1\le i\le K}\left\{\hat\theta_{i,T_i(t-1)} + \sqrt{\frac{2(t-1)^\delta}{T_i(t-1)}}\right\}.$$
Then we have the following result on the upper bound of its cumulative regret.

Theorem 6. The cumulative regret of UCB-poly(δ) is upper-bounded by
$$\mathbb{E}[R_n] \le \left(\sum_{i:\Delta_i>0}\frac{8}{\Delta_i} + o(1)\right)n^\delta,$$
where o(1) → 0 as n → ∞. Moreover, the simple regret of the pair [UCB-poly(δ), EBA] is upper-bounded by
$$\mathbb{E}[r_n] \le \left(2\sum_{i\ne i^*}\Delta_i\right)e^{-\chi n^\delta}, \qquad \text{where } \chi = \min_i \frac{\sigma\Delta_i^2}{2}.$$
In the supplementary material (see Theorem 7 there) we show that in the limit, as T and N increase to infinity, the optimal value of δ can be chosen as $\lim_{N\to\infty}\ln(\ln(T(N)-N))/\ln N$ if that limit exists. In particular, if T(N) is super-exponential in N we get an optimal δ of 1, representing pure exploration in the experimentation phase. If T(N) is sub-exponential we get an optimal δ of 0, representing a standard UCB during the experimentation phase. If T(N) is exponential we obtain a δ in between.

Figure 1: Numerical performance for K = 20, γ = 0.75, and ∆ = 0.02.

6 Simulations

In this section, we present numerical results on the performance of the Non-adaptive Unif-EBA, SEC1, SEC2, and SC-UCB algorithms. (Recall that the SEC2 algorithm is a version of SEC1 in which mθ∗ is replaced by max_{j∈B_m} S_j^m, as discussed in Remark 1.) The simulation setting includes K arms with Bernoulli reward distributions, the time horizon T, and the values of γ and ∆.
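Before turning to the simulation results, the UCB-poly(δ) index from Regime 3 can be sketched in one short helper (our own function name; the paper only specifies the index formula):

```python
import math

def ucb_poly_index(emp_means, counts, t, delta):
    """UCB-poly(delta) index at time t: empirical mean plus exploration bonus
    sqrt(2 (t-1)^delta / T_i(t-1)); larger delta means heavier exploration."""
    return [m + math.sqrt(2.0 * (t - 1) ** delta / n)
            for m, n in zip(emp_means, counts)]
```

The arm pulled next is the argmax of this list; sweeping δ between 0 and 1 interpolates between UCB-like and uniform-like exploration, which is exactly the knob tuned by the limit expression for the optimal δ above.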
The arm configurations are generated as follows. For each experiment, θ∗ is generated independently and uniformly in the interval [0.5, 1], and the second-best arm reward is set as θ∗₂ = θ∗ − ∆. These two values are then assigned to two randomly chosen arms, and the remaining arm rewards are generated independently and uniformly in [0, θ∗₂]. Figure 1 shows the regrets of the above algorithms for various values of T (in logarithmic scale) with parameters K = 20, γ = 0.75, and ∆ = 0.02 (we omitted error bars because the variation was small). Observe that the performances of SEC1 and SEC2 are nearly identical, which suggests that the requirement of knowing θ∗ in SEC1 can be relaxed (see Remark 1). Moreover, SEC1 (or equivalently, SEC2) performs much better than Non-adaptive Unif-EBA due to its adaptive nature (see the discussion before Remark 1). In particular, the performance of Non-adaptive Unif-EBA is quite poor when the experimentation deadline is roughly equal to T, since the algorithm does not commit before the experimentation deadline. Finally, SC-UCB does not perform as well as the others when T is large; however, this algorithm does not need to know ∆, and thus suffers a performance loss due to the additional effort required to estimate ∆. Additional simulation results can be found in the supplementary material.

7 Extensions and future directions

Our work is a first step in the study of the committing bandit setup. There are several extensions that call for future research, which we outline below. First, an extension of the basic committing bandit setup to the case of contextual bandits [10, 11] is natural. In this setup, before choosing an arm, an additional "context" is provided to the decision maker. The problem is to choose a decision rule from a given class that prescribes what arm to choose for every context. This setup is more realistic when the decision maker has to commit to such a rule after some exploration time.
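The arm-configuration procedure above is straightforward to reproduce; the following sketch (our own function name) generates one configuration:

```python
import random

def make_arm_config(K, Delta, rng=random):
    """One experiment's arms: theta* ~ U[0.5, 1], second best = theta* - Delta,
    remaining K-2 rewards ~ U[0, theta* - Delta]; positions shuffled."""
    theta_star = rng.uniform(0.5, 1.0)
    theta_2 = theta_star - Delta
    thetas = ([theta_star, theta_2]
              + [rng.uniform(0.0, theta_2) for _ in range(K - 2)])
    rng.shuffle(thetas)          # assign rewards to randomly chosen arms
    return thetas
```

By construction the gap between the best and second-best arm is exactly ∆, which is what makes the comparison across algorithms at a fixed ∆ meaningful.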
Second, models with many arms (structured as in [8, 5]) or even infinitely many arms (as in [1, 7, 14]) are of interest, as they may lead to different regimes and results. Third, our models assumed that the commitment time is either predetermined or chosen at the decision maker's will. There are other models of interest, such as the case where some stochastic process determines the commitment time. Finally, a situation where the exploration and commitment phases alternate (randomly, according to a given schedule, or at a cost) is of practical interest. This can represent the situation where there are a few releases of a product, where exploration can be done until the time of a release, when the product is "frozen" until a new exploration period followed by a new release.

References

[1] R. Agrawal. The continuum-armed bandit problem. SIAM Journal on Control and Optimization, 33(6):1926–1951, 1995.
[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning Journal, 47(2-3):235–256, 2002.
[3] P. Auer and R. Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010.
[4] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in finitely-armed and continuous-armed bandits. Theoretical Computer Science, 412(19):1832–1852, 2011.
[5] P. A. Coquelin and R. Munos. Bandit algorithms for tree search. CoRR, abs/cs/0703062, 2007.
[6] E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079–1105, 2006.
[7] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In STOC, pages 681–690, 2008.
[8] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In ECML, pages 282–293, 2006.
[9] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules.
Advances in Applied Mathematics, 6:4–22, 1985.
[10] J. Langford and T. Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Advances in Neural Information Processing Systems (NIPS), 2008.
[11] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661–670, 2010.
[12] S. Mannor. k-armed bandit. In Encyclopedia of Machine Learning, pages 561–563. 2010.
[13] S. Mannor and J. Tsitsiklis. The sample complexity of exploration in the multi-armed bandit problem. Journal of Machine Learning Research, 5:623–648, 2004.
[14] P. Rusmevichientong and J. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395–411, 2010.
Large-Scale Category Structure Aware Image Categorization

Bin Zhao, School of Computer Science, Carnegie Mellon University, binzhao@cs.cmu.edu
Li Fei-Fei, Computer Science Department, Stanford University, feifeili@cs.stanford.edu
Eric P. Xing, School of Computer Science, Carnegie Mellon University, epxing@cs.cmu.edu

Abstract

Most previous research on image categorization has focused on medium-scale data sets, while large-scale image categorization with millions of images from thousands of categories remains a challenge. With the emergence of structured large-scale datasets such as ImageNet, rich information about the conceptual relationships between images, such as a tree hierarchy among various image categories, becomes available. As human cognition of the complex visual world benefits from underlying semantic relationships between object classes, we believe a machine learning system can and should leverage such information as well for better performance. In this paper, we employ such semantic relatedness among image categories for large-scale image categorization. Specifically, a category hierarchy is utilized to properly define the loss function and select a common set of features for related categories. An efficient optimization method based on proximal approximation and an accelerated parallel gradient method is introduced. Experimental results on a subset of ImageNet containing 1.2 million images from 1000 categories demonstrate the effectiveness and promise of our proposed approach.

1 Introduction

Image categorization / object recognition has been one of the most important research problems in the computer vision community. While most previous research on image categorization has focused on medium-scale data sets, involving objects from dozens of categories, there is recently a growing consensus that it is necessary to build general-purpose object recognizers that are able to recognize many more different classes of objects.
(A human being has little problem recognizing tens of thousands of visual categories, even with very little "training" data.) The Caltech 101/256 [14, 18] is a pioneering benchmark data set on that front. LabelMe [31] provides 30k labeled and segmented images, covering around 200 image categories. Moreover, the newly released ImageNet [12] data set goes a big step further, in that it increases the number of classes to over 15,000, with more than 1000 images per class on average. Similarly, TinyImage [36] contains 80 million 32 × 32 low-resolution images, with each image loosely labeled with one of 75,062 English nouns. Clearly, these are no longer artificial visual categorization problems created for machine learning, but instead more like a human-level cognition problem for real-world object recognition with a much bigger set of objects. A natural way to formulate this problem is multi-way or multi-task classification, but the seemingly standard formulation on such a gigantic data set poses a completely new challenge both to computer vision and machine learning. Unfortunately, despite the well-known advantages and recent advancements of multi-way classification techniques [1, 19, 4] in machine learning, complexity concerns have driven most research on such super large-scale data sets back to simple methods such as nearest neighbor search [6], least squares regression [16], or learning thousands of binary classifiers [24].

Figure 1: (a) Image category hierarchy in ImageNet; (b) overlapping group structure; (c) semantic relatedness measure between image categories.

The hierarchical semantic structure stemming from WordNet over image categories makes the ImageNet data distinctive among existing large-scale datasets, and it resembles how the human cognitive system stores visual knowledge.
Figure 1(a) shows an example such as a tree hierarchy, where leaf nodes are individual categories, and each internal node denotes the cluster of categories corresponding to the leaf nodes in the subtree rooted at the given node. As human cognition of complex visual world benefits from underlying semantic relationships between object classes, we believe a machine learning system can and should leverage such information as well for better performance. Specifically, we argue that instead of formulating the recognition task as a flat classification problem, where each category is treated equally and independently, a better strategy is to utilize the rich information residing in the concept hierarchy among image categories to train a system that couples all different recognition tasks over different categories. It should be noted that our proposed method is applicable to any tree structure for image category, such as the category structure learned to capture visual appearance similarities between image classes [32, 17, 13]. To the best of our knowledge, our attempt in this paper represents an initial foray to systematically utilizing information residing in concept hierarchy, for multi-way classification on super large-scale image data sets. More precisely, our approach utilizes the concept hierarchy in two aspects: loss function and feature selection. First, the loss function used in our formulation weighs differentially for different misclassification outcomes: misclassifying an image to a category that is close to its true identity should receive less penalty than misclassifying it to a totally unrelated one. Second, in an image classification problem with thousands of categories, it is not realistic to assume that all of the classes share the same set of relevant features. That is to say, a subset of highly related categories may share a common set of relevant features, whereas weakly related categories are less likely to be affected by the same features. 
Consequently, the image categorization problem is formulated as augmented logistic regression with overlapping-group-lasso regularization. The corresponding optimization problem involves a non-smooth convex objective function represented as summation over all training examples. To solve this optimization problem, we introduce the Accelerated Parallel ProximaL gradiEnT (APPLET) method, which tackles the non-smoothness of overlapping-group-lasso penalty via proximal gradient [20, 9], and the huge number of training samples by Map-Reduce parallel computing [10]. Therefore, the contributions made in this paper are: (1) We incorporate the semantic relationships between object classes, into an augmented multi-class logistic regression formulation, regularized by the overlapping-group-lasso penalty. The sheer size of the ImageNet data set that our formulation is designed to tackle singles out our work from previous attempts on multi-class classification, or transfer learning. (2) We propose a proximal gradient based method for solving the resulting non-smooth optimization problem, where the super large scale of the problem is tackled by map-reduce parallel computation. The rest of this paper is organized as follows. Detailed explanation of the formulation is provided in Section 2. Section 3 introduces the Accelerated Parallel ProximaL gradiEnT (APPLET) method for solving the corresponding large-scale non-smooth optimization problem. Section 4 briefly reviews several related works. Section 5 demonstrates the effectiveness of the proposed algorithm using millions of training images from 1000 categories, followed by conclusions in Section 6. 2 Category Structure Aware Image Categorization 2.1 Motivation ImageNet organizes the different classes of images in a densely populated semantic hierarchy. 
Specifically, image categories in ImageNet are interlinked by several types of relations, with the 2 “IS-A” relation being the most comprehensive and useful [11], resulting in a tree hierarchy over image categories. For example, the ’husky’ category follows a path in the tree composed of ’working dog’, ’dog’, ’canine’, etc. The distance between two nodes in the tree depicts the difference between the two corresponding image categories. Consequently, in the category hierarchy in ImageNet, each internal node near the bottom of the tree shows that the image categories of its subtree are highly correlated, whereas the internal node near the root represents relatively weaker correlations among the categories in its subtree. The class hierarchy provides a measure of relatedness between image classes. Misclassifying an image to a category that is close to its true identity should receive less penalty than misclassifying it to a totally unrelated one. For example, although horses are not exactly ponies, we expect the loss for classifying a “pony” as a “horse” to be lower than classifying it as a “car”. Instead of using 0-1 loss as in conventional image categorization, which treats image categories equally and independently, our approach utilizes a loss function that is aware of the category hierarchy. Moreover, highly related image categories are more likely to share common visual patterns. For example, in Figure 1(a), husky and shepherd share similar object shape and texture. Consequently, recognition of these related categories are more likely to be affected by the same features. In this work, we regularize the sparsity pattern of weight vectors for related categories. This is equivalent to learning a low dimensional representation that is shared across multiple related categories. 2.2 Logistic Regression with Category Structure Given N training images, each represented as a J-dimensional input vector and belonging to one of the K categories. 
Let X denote the J × N input matrix, where each column corresponds to an instance. Similarly, let Y denote the N × 1 output vector, where each element corresponds to the label of an image. Multi-class logistic regression defines a weight vector wk for each class k ∈ {1, . . . , K} and classifies a sample x by y∗ = arg max_{y∈{1,...,K}} P(y|x, W), with the conditional likelihood computed as
$$P(y_i|x_i, W) = \frac{\exp(w_{y_i}^T x_i)}{\sum_k \exp(w_k^T x_i)} \qquad (1)$$
The optimal weight vectors W∗ = [w∗₁, . . . , w∗_K] are
$$W^* = \arg\min_W\ -\sum_{i=1}^{N}\log P(y_i|x_i, W) + \lambda\,\Omega(W) \qquad (2)$$
where Ω(W) is a regularization term defined on W and λ is the regularization parameter.

2.2.1 Augmented Soft-Max Loss Function

Using the tree hierarchy over image categories, we can calculate a semantic relatedness (a.k.a. similarity) matrix S ∈ R^{K×K} over all categories, where Sij measures the semantic relatedness of classes i and j. Using the semantic relatedness measure, the likelihood of xi belonging to category yi can be modified as follows:
$$\hat P(y_i|x_i, W) \propto \sum_{r=1}^{K} S_{y_i,r}\,P(r|x_i, W) \propto \sum_{r=1}^{K} S_{y_i,r}\,\frac{\exp(w_r^T x_i)}{\sum_k \exp(w_k^T x_i)} \propto \sum_{r=1}^{K} S_{y_i,r}\exp(w_r^T x_i) \qquad (3)$$
Since $\sum_{r=1}^{K}\hat P(r|x_i, W) = 1$, consequently,
$$\hat P(y_i|x_i, W) = \frac{\sum_{r=1}^{K} S_{y_i,r}\exp(w_r^T x_i)}{\sum_{r=1}^{K}\sum_{k=1}^{K} S_{k,r}\exp(w_r^T x_i)} \qquad (4)$$
For the special case where the semantic relatedness matrix S is an identity matrix, meaning each class is only related to itself, Eq. (4) simplifies to Eq. (1). Using this modified softmax loss function, the image categorization problem can be formulated as
$$\min_W \sum_{i=1}^{N}\left[\log\left(\sum_r\sum_k S_{k,r}\exp(w_r^T x_i)\right) - \log\left(\sum_r S_{y_i,r}\exp(w_r^T x_i)\right)\right] + \lambda\,\Omega(W) \qquad (5)$$

2.2.2 Semantic Relatedness Matrix

To compute the semantic relatedness matrix S in the above formulation, we first define a metric measuring the semantic distance between image categories. A simple way to compute semantic distance in a structure such as the one provided by ImageNet is to utilize the paths connecting the two corresponding nodes to the root node.
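The augmented likelihood of Eq. (4) reduces to a few lines once the per-class scores $w_r^T x_i$ are computed. The sketch below is our own illustration (function name invented; pure-Python lists stand in for the matrix operations); note that with S equal to the identity it recovers the plain softmax of Eq. (1):

```python
import math

def augmented_softmax(scores, S):
    """Eq. (4): mix the exponentiated scores z_r = exp(w_r^T x) by the semantic
    relatedness matrix S, then normalize; S = identity gives plain softmax."""
    K = len(scores)
    z = [math.exp(s) for s in scores]
    num = [sum(S[y][r] * z[r] for r in range(K)) for y in range(K)]
    total = sum(num)                       # = sum_r sum_k S_{k,r} z_r
    return [n / total for n in num]
```

Because the denominator of Eq. (4) is exactly the sum of the numerators over all classes, the returned values always form a probability distribution, whatever S is.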
Following [7], we define the semantic distance Dij between class i and class j as the number of nodes shared by their two parent branches, divided by the length of the longer of the two branches:
$$D_{ij} = \frac{\mathrm{intersect}(\mathrm{path}(i),\ \mathrm{path}(j))}{\max(\mathrm{length}(\mathrm{path}(i)),\ \mathrm{length}(\mathrm{path}(j)))} \qquad (6)$$
where path(i) is the path from the root node to node i and intersect(p1, p2) counts the number of nodes shared by two paths p1 and p2. We construct the semantic relatedness matrix S = exp(−κ(1 − D)), where κ is a constant controlling the decay of semantic relatedness with respect to semantic distance. Figure 1(c) shows the semantic relatedness matrix computed with κ = 5.

2.3 Tree-Guided Sparse Feature Coding

In ImageNet, image categories are grouped at multiple granularities as a tree hierarchy. As illustrated in Section 2.1, the image categories in each internal node are likely to be influenced by a common set of features. In order to achieve this type of structured sparsity at multiple levels of the hierarchy, we utilize an overlapping-group-lasso penalty recently proposed in [21] for the genetic association mapping problem, where the goal is to identify a small number of SNPs (inputs), out of millions of SNPs, that influence phenotypes (outputs) such as gene expression measurements. Specifically, given the tree hierarchy T = (V, E) over image categories, each node v ∈ V of tree T is associated with a group Gv composed of all leaf nodes in the subtree rooted at v, as illustrated in Figure 1(b). Clearly, each group Gv is an element of the power set of {1, . . . , K}, i.e., a subset of the categories. Given these groups G = {Gv}_{v∈V} of categories, we define the following overlapping-group-lasso penalty [21]:
$$\Omega(W) = \sum_j \sum_{v\in V} \gamma_v \|w_{jG_v}\|_2 \qquad (7)$$
where $w_{jG_v}$ denotes the weight coefficients {wjk : k ∈ Gv} for input j ∈ {1, . . . , J} associated with the categories in Gv, and each group Gv is associated with a weight γv that reflects the strength of correlation within the group.
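The semantic distance of Eq. (6) and the relatedness matrix S can be sketched directly from root-to-leaf paths. This is our own illustration (function name and toy paths invented); it assumes node identifiers are unique, so a set intersection counts the shared prefix correctly:

```python
import math

def semantic_relatedness(paths, kappa=5.0):
    """S_ij = exp(-kappa (1 - D_ij)) with D_ij from Eq. (6); paths[i] is the
    list of node ids from the root down to leaf category i."""
    K = len(paths)
    S = [[0.0] * K for _ in range(K)]
    for i in range(K):
        for j in range(K):
            shared = len(set(paths[i]) & set(paths[j]))   # intersect(path(i), path(j))
            D = shared / max(len(paths[i]), len(paths[j]))
            S[i][j] = math.exp(-kappa * (1.0 - D))
    return S
```

With toy paths such as root→dog→husky and root→dog→shepherd, the two dog breeds come out more related to each other than either is to root→car, matching the intuition behind Figure 1(c).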
It should be noted that we do not require the groups in G to be mutually exclusive; consequently, each leaf node may belong to multiple groups at various granularities. Inserting the above overlapping-group-lasso penalty into (5), we formulate category structure aware image categorization as follows:
$$\min_W \sum_{i=1}^{N}\left[\log\left(\sum_r\sum_k S_{k,r}\exp(w_r^T x_i)\right) - \log\left(\sum_r S_{y_i,r}\exp(w_r^T x_i)\right)\right] + \lambda\sum_j\sum_{v\in V}\gamma_v\|w_{jG_v}\|_2 \qquad (8)$$

3 Accelerated Parallel ProximaL gradiEnT (APPLET) Method

The challenge in solving problem (8) lies in two facts: the non-separability of W in the non-smooth overlapping-group-lasso penalty Ω(W), and the huge number N of training samples. Conventionally, to handle the non-smoothness of Ω(W), we could reformulate the problem as either a second-order cone program (SOCP) or a quadratic program (QP) [35]. However, the state-of-the-art approach for solving SOCP and QP, based on the interior point method, requires solving a Newton system to find the search direction, and is computationally very expensive even for moderate-sized problems. Moreover, due to the huge number of samples in the training set, off-the-shelf optimization solvers are too slow to be used. In this work, we adopt a proximal-gradient method to handle the non-smoothness of Ω(W). Specifically, we first reformulate the overlapping-group-lasso penalty Ω(W) into a max problem over auxiliary variables using the dual norm, and then introduce its smooth lower bound [20, 9]. Instead of optimizing the original non-smooth penalty, we run the accelerated gradient descent method [27] under a Map-Reduce framework [10] to optimize the smooth lower bound. The proposed approach enjoys a fast convergence rate and low per-iteration complexity.

3.1 Reformulate the Penalty

For convenience of reference, we number the elements of the set G = {Gv}_{v∈V} as G = {g1, . . . , g|G|} according to an arbitrary order, where |G| denotes the total number of elements in G.
For each input j and group g_i associated with w_{jg_i}, we introduce a vector of auxiliary variables α_{jg_i} ∈ R^{|g_i|}. Since the dual norm of the L2 norm is again the L2 norm, we can reformulate ||w_{jg_i}||_2 as ||w_{jg_i}||_2 = max_{||α_{jg_i}||_2 ≤ 1} α_{jg_i}^T w_{jg_i}. Moreover, define the following (Σ_{g∈G} |g|) × J matrix

A = [ α_{1g_1}      · · ·  α_{Jg_1}
        ⋮             ⋱       ⋮
      α_{1g_{|G|}}  · · ·  α_{Jg_{|G|}} ]    (9)

in the domain O = {A : ||α_{jg_i}||_2 ≤ 1, ∀ j ∈ {1, . . . , J}, g_i ∈ G}. Following [9], the overlapping-group-lasso penalty in (8) can be equivalently reformulated as

Ω(W) = Σ_j Σ_i γ_i max_{||α_{jg_i}||_2 ≤ 1} α_{jg_i}^T w_{jg_i} = max_{A∈O} ⟨C W^T, A⟩    (10)

where i = 1, . . . , |G|, j = 1, . . . , J, C ∈ R^{Σ_{g∈G}|g| × K}, and ⟨U, V⟩ = Tr(U^T V) is the inner product of two matrices. The matrix C has rows indexed by (s, g_i) such that s ∈ g_i and i ∈ {1, . . . , |G|}, columns indexed by k ∈ {1, . . . , K}, and the element at row (s, g_i) and column k set to C_{(s,g_i),k} = γ_i if s = k and 0 otherwise.

After the above reformulation, (10) is still a non-smooth function of W, which makes the optimization challenging. To tackle this problem, we introduce an auxiliary function [20, 9] to construct a smooth approximation of (10). Specifically, our smooth approximation is defined as

f_µ(W) = max_{A∈O} ⟨C W^T, A⟩ − µ d(A)    (11)

where µ is a positive smoothness parameter and d(A) is an arbitrary smooth, strongly-convex function defined on O. The original penalty can be viewed as f_µ(W) with µ = 0. Since our algorithm will utilize the optimal solution A* to (11), we choose d(A) = (1/2)||A||_F^2 so that we can obtain a closed-form solution for A*. Clearly, f_µ(W) is a lower bound of f_0(W), with the gap computed as D = max_{A∈O} d(A) = max_{A∈O} (1/2)||A||_F^2 = (1/2) J|G|.

Theorem 1 For any µ > 0, f_µ(W) is a convex and continuously differentiable function of W, and the gradient of f_µ(W) can be computed as ∇f_µ(W) = A*^T C, where A* is the optimal solution to (11).
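The dual-norm identity behind Eq. (10) — ||w||_2 = max_{||α||_2 ≤ 1} α^T w — can be checked numerically; the random test vectors below are arbitrary illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)

# The maximizer on the L2 unit ball is alpha* = w / ||w||_2 ...
alpha_star = w / np.linalg.norm(w)
assert abs(alpha_star @ w - np.linalg.norm(w)) < 1e-12

# ... and any other feasible alpha achieves at most ||w||_2 (Cauchy-Schwarz).
for _ in range(100):
    alpha = rng.normal(size=5)
    alpha = alpha / max(1.0, np.linalg.norm(alpha))  # project into the unit ball
    assert alpha @ w <= np.linalg.norm(w) + 1e-12
```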
According to Theorem 1, f_µ(W) is a smooth function for any µ > 0, with a simple form of gradient, and can be viewed as a smooth approximation of f_0(W) with a maximum gap of µD. Finally, the optimal solution A* of (11) is composed of α*_{jg_i} = S(γ_i w_{jg_i} / µ), where S is the shrinkage operator defined as

S(u) = u / ||u||_2  if ||u||_2 > 1,    S(u) = u  if ||u||_2 ≤ 1.    (12)

3.2 Accelerated Parallel Gradient Method

Given the smooth approximation of Ω(W) in (11) and the corresponding gradient presented in Theorem 1, we can apply a gradient descent method to solve the problem. Specifically, we replace the overlapping-group-lasso penalty in (8) with its smooth approximation f_µ(W) to obtain the following optimization problem

min_W  f̃(W) = g(W) + λ f_µ(W)    (13)

where g(W) = Σ_{i=1}^N [ log( Σ_r Σ_k S_{k,r} exp(w_r^T x_i) ) − log( Σ_r S_{y_i,r} exp(w_r^T x_i) ) ] is the augmented logistic regression loss function. The gradient of g(W) w.r.t. w_k can be calculated as

∂g(W)/∂w_k = Σ_{i=1}^N x_i [ Σ_q S_{k,q} exp(w_k^T x_i) / ( Σ_r Σ_q S_{r,q} exp(w_r^T x_i) ) − S_{y_i,k} exp(w_k^T x_i) / ( Σ_r S_{y_i,r} exp(w_r^T x_i) ) ]    (14)

Therefore, the gradient of g(W) w.r.t. W is ∇g(W) = [ ∂g(W)/∂w_1, . . . , ∂g(W)/∂w_K ]. According to Theorem 1, the gradient of f̃(W) is given by

∇f̃(W) = ∇g(W) + λ A*^T C    (15)

Although f̃(W) is a smooth function of W, it is a summation over all training samples; consequently, ∇f̃(W) can only be computed by summing over all N training samples. Due to the huge number of samples in the training set, we adopt a Map-Reduce parallel framework [10] to compute ∇g(W) as in Eq. (14). While standard gradient schemes have a slow convergence rate, they can often be accelerated. This stems from the pioneering work of Nesterov [27], a deterministic algorithm for smooth optimization. In this paper, we adopt this accelerated gradient method; the whole procedure is shown in Algorithm 1.
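The shrinkage operator S in Eq. (12) is simply the projection onto the L2 unit ball; this sketch assumes a NumPy vector input:

```python
import numpy as np

def shrink(u):
    """Eq. (12): project u onto the L2 unit ball."""
    norm = np.linalg.norm(u)
    return u / norm if norm > 1.0 else u

print(shrink(np.array([3.0, 4.0])))   # norm 5 > 1: rescaled to [0.6, 0.8]
print(shrink(np.array([0.3, 0.4])))   # norm 0.5 <= 1: returned unchanged
```

Applying shrink(γ_i w_{jg_i} / µ) for every input j and group g_i stacks into the optimal auxiliary matrix A* used in the gradient ∇f_µ(W) = A*^T C.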
Algorithm 1 Accelerated Parallel ProximaL gradiEnT method (APPLET)
Input: X, Y, C, desired accuracy ϵ, step parameters {η_t}
Initialization: B^0 = 0
for t = 1, 2, . . ., until convergence do
    Map step: distribute the data to M cores {X_1, . . . , X_M}; compute ∇g_m(B^{t−1}) for each X_m in parallel
    Reduce step:
        (1) ∇f̃(B^{t−1}) = Σ_{m=1}^M ∇g_m(B^{t−1}) + λ A*^T C
        (2) W^t = B^{t−1} − η_t ∇f̃(B^{t−1})
        (3) B^t = W^t + ((t − 1)/(t + 2)) (W^t − W^{t−1})
end for
Output: Ŵ = W^t

4 Related Works

Various attempts at sharing information across related image categories have been explored. Early approaches stem from neural networks, where the hidden layers are shared across different classes [8, 23]. Recent approaches transfer information across classes by regularizing the parameters of the classifiers across classes [37, 28, 15, 33, 34, 2, 26, 30]. Common to all these approaches is that experiments are always performed with relatively few classes [16]. It is unclear how these approaches would perform on super-large-scale data sets containing thousands of image categories; some of them would encounter severe computational bottlenecks when scaling up to thousands of classes [16]. Another line of research is the ImageNet Large Scale Visual Recognition Challenge 2010 (ILSVRC10) [3], where the best-performing approaches use techniques such as spatial pyramid matching [22], locality-constrained linear coding [38], the Fisher vector [29], and linear SVMs trained using stochastic gradient descent. Success has been witnessed in ILSVRC10 even with simple machine learning techniques. However, none of these approaches utilize the semantic relationships defined among image categories in ImageNet, which we argue is a crucial source of information for further improvement on such a super-large-scale classification problem.
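Steps (2)–(3) of the Reduce step in Algorithm 1 (Section 3) are Nesterov's accelerated update. The sketch below runs them on a toy quadratic objective; the objective, step size, and iteration count are illustrative assumptions, and the parallel Map step is omitted:

```python
import numpy as np

W_opt = np.array([1.0, -2.0, 3.0])
grad = lambda W: W - W_opt   # gradient of the toy loss 0.5 * ||W - W_opt||^2

B = np.zeros(3)              # B^0 = 0, as in Algorithm 1
W_prev = np.zeros(3)
eta = 0.5                    # constant step size eta_t
for t in range(1, 101):
    W = B - eta * grad(B)                      # step (2): gradient step at lookahead B^{t-1}
    B = W + (t - 1) / (t + 2) * (W - W_prev)   # step (3): momentum extrapolation
    W_prev = W
# W has now converged to W_opt (up to numerical precision)
```

The (t − 1)/(t + 2) momentum weight is what lifts the plain-gradient O(1/t) rate to Nesterov's O(1/t²) guarantee on smooth convex problems.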
5 Experiments

In this section, we test the performance of APPLET on the subset of ImageNet used in ILSVRC10, containing 1.2 million images from 1000 categories, divided into distinct portions for training, validation and test. The number of images per category ranges from 668 to 3047. We use the provided validation set for parameter selection, and the final results are obtained on the test set. Before presenting the classification results, we would like to make clear that the goal and contributions of this work are different from those of the aforementioned approaches proposed in ILSVRC10. Those approaches were designed to enter a performance competition, where heavy feature engineering and post-processing (such as ad hoc voting over multiple algorithms) were used to achieve high accuracy. Our work, on the other hand, looks at this problem from a different angle, focusing on a principled methodology that explores the benefit of utilizing class structure in image categorization, and proposing a model and the corresponding optimization technique to properly incorporate such information. We did not use the full scope of features and post-processing schemes to boost our classification results as the ILSVRC10 competition teams did. Therefore, the results of our work are not directly comparable with those of the ILSVRC10 competition.

5.1 Image Features

Each image is resized to have a maximum side length of 300 pixels. SIFT [25] descriptors are computed on 20 × 20 overlapping patches with a spacing of 10 pixels. Images are further downsized to 1/2 and then 1/4 of the side length, and more descriptors are computed. We then perform k-means clustering on a random subset of 10 million SIFT descriptors to form a visual vocabulary of 1000 visual words. Using this learned vocabulary, we employ Locality-constrained Linear Coding (LLC) [38], which has shown state-of-the-art performance on several benchmark data sets, to construct a vector representation for each image.
Finally, a single feature vector is computed for each image using max pooling on a spatial pyramid [22]. The pooled features from various locations and scales are then concatenated to form a spatial pyramid representation of the image. Consequently, each image is represented as a vector in a 21,000-dimensional space.

5.2 Evaluation Criteria

We adopt the same performance measures used in ILSVRC10. Specifically, for every image, each tested algorithm produces a list of 5 object categories in descending order of confidence. Performance is measured using the top-n error rate, with n = 1, . . . , 5 in our case, and two error measures are reported. The first is a flat error, which equals 1 if the true class is not within the n most confident predictions, and 0 otherwise. The second is a hierarchical error, reporting the minimum height of the lowest common ancestor between the true and predicted classes. For each of the above two criteria, the overall error score of an algorithm is the average error over all test images.

Table 1: Classification results (both flat and hierarchical errors) of various algorithms.

                        Flat Error                       Hierarchical Error
Algorithm   Top 1  Top 2  Top 3  Top 4  Top 5    Top 1  Top 2  Top 3  Top 4  Top 5
LR          0.797  0.726  0.678  0.639  0.607    8.727  6.974  5.997  5.355  4.854
ALR         0.796  0.723  0.668  0.624  0.587    8.259  6.234  5.061  4.269  3.659
GroupLR     0.786  0.699  0.642  0.600  0.568    7.620  5.460  4.322  3.624  3.156
APPLET      0.779  0.698  0.634  0.589  0.565    7.208  4.985  3.798  3.166  3.012

Figure 2: Left: image classes with highest accuracy. Right: image classes with lowest accuracy.

5.3 Comparisons & Classification Results

We have conducted comprehensive performance evaluations by testing our method under different circumstances.
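The flat top-n error of Section 5.2 can be sketched as follows; the confidence scores and labels below are toy data:

```python
import numpy as np

def flat_topn_error(scores, labels, n):
    """Fraction of images whose true class is outside the n most confident predictions."""
    topn = np.argsort(-scores, axis=1)[:, :n]     # indices of the n highest scores
    hits = (topn == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 1, 1])
print(flat_topn_error(scores, labels, 1))  # 2/3: only the first image is right at top-1
print(flat_topn_error(scores, labels, 2))  # 0.0: every true class is within the top 2
```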
Specifically, to better understand the effects of augmenting logistic regression with semantic relatedness and of using the overlapping-group-lasso penalty to enforce group-level feature selection, we study models that add only the augmented logistic regression loss or only the overlapping-group-lasso penalty, and compare them with the full APPLET method. We use conventional L2-regularized logistic regression [5] as the baseline. The evaluated algorithms are listed below:

(1) L2-regularized logistic regression (LR) [5];
(2) Augmented logistic regression with L2 regularization (ALR);
(3) Logistic regression with overlapping-group-lasso regularization (GroupLR);
(4) Augmented logistic regression with overlapping-group-lasso regularization (APPLET).

Table 1 presents the classification results of the various algorithms. From these results, we can clearly see the advantage of APPLET over conventional logistic regression, especially on the top-5 error rate. Specifically, comparing the top-5 error rates, APPLET outperforms LR by a margin of 0.04 on the flat loss and by a margin of 1.84 on the hierarchical loss. It should be noted that the hierarchical error is measured by the height of the lowest common ancestor in the hierarchy, and moving up a level can more than double the number of descendants. Table 1 also compares the performance of ALR with LR. Specifically, ALR outperforms LR slightly on the top-1 predictions; on the top-5 predictions, however, ALR performs clearly better than LR. A similar phenomenon is observed when comparing the classification results of GroupLR with LR. Moreover, Figure 2 shows the image categories with the highest and lowest classification accuracy. One key reason for introducing the augmented loss function is to ensure that the predicted image class does not fall too far from its true class on the semantic hierarchy.
Results in Table 2 demonstrate that even though APPLET cannot guarantee a correct prediction on every image, it produces labels that are closer to the true ones than LR, which often generates labels far from the correct ones.

True class   laptop     linden       Gordon setter    gourd     bullfrog        volcano     odometer     earthworm
APPLET       laptop(0)  live oak(3)  Irish setter(2)  acorn(2)  woodfrog(2)     volcano(0)  odometer(0)  earthworm(0)
LR           laptop(0)  log wood(3)  alp(11)          olive(2)  water snake(9)  geyser(4)   odometer(0)  slug(8)

Table 2: Example prediction results of APPLET and LR. Numbers in parentheses indicate the hierarchical error of the misclassification, defined in Section 5.2.

As shown in Table 1, the systematic reduction in classification error achieved by APPLET shows that acknowledging semantic relationships between image classes enables the system to discriminate at more informative semantic levels. Moreover, the results in Table 2 demonstrate that the classification results of APPLET can be significantly more informative: labeling a "bullfrog" as a "woodfrog" gives a more useful answer than "water snake", as it is still correct at the "frog" level.

5.4 Effects of λ and κ on the Performance of APPLET

We present in Figure 3 how categorization performance scales with λ and κ. According to Figure 3, APPLET achieves its lowest categorization error around λ = 0.01. Moreover, the error rate increases when λ is larger than 0.1, where excessive regularization hampers the algorithm from differentiating semantically related categories.

Figure 3: Classification results (flat error and hierarchical error) of APPLET for various λ and κ; each panel plots top-1 through top-5 error.
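The hierarchical error of Section 5.2 — the height of the lowest common ancestor (LCA) of the true and predicted classes — can be sketched as below. The toy tree paths are illustrative assumptions, so the magnitudes differ from the ImageNet numbers in Table 2:

```python
# Hypothetical root-to-leaf paths standing in for the ImageNet/WordNet tree.
paths = {
    "bullfrog":    ["root", "animal", "amphibian", "frog", "bullfrog"],
    "woodfrog":    ["root", "animal", "amphibian", "frog", "woodfrog"],
    "water snake": ["root", "animal", "reptile", "snake", "water snake"],
}

def hierarchical_error(true_cls, pred_cls):
    """Height of the LCA of true and predicted class above the true leaf (0 if correct)."""
    pt, pp = paths[true_cls], paths[pred_cls]
    depth = 0
    for a, b in zip(pt, pp):
        if a != b:
            break
        depth += 1          # depth of the shared prefix, i.e. of the LCA
    return len(pt) - depth

print(hierarchical_error("bullfrog", "bullfrog"))     # 0: correct prediction
print(hierarchical_error("bullfrog", "woodfrog"))     # 1: still correct at the "frog" level
print(hierarchical_error("bullfrog", "water snake"))  # 3: only shared ancestor is "animal"
```

This is why the penalty grows quickly for semantically distant mistakes: as noted after Table 1, moving the LCA up one level can more than double the number of descendants it covers.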
Similarly, APPLET achieves its best performance with κ = 5. When κ is too small, a large number of categories are mixed together, resulting in a much higher flat loss. On the other hand, when κ ≥ 50, the semantic relatedness matrix is close to diagonal, which treats all categories independently, and the categorization performance becomes similar to LR.

6 Conclusions

In this paper, we demonstrate the positive effect of incorporating category hierarchy information in super-large-scale image categorization. The sheer size of the problem considered here singles out our work from previous works on multi-way classification or transfer learning. An empirical study using 1.2 million training images from 1000 categories demonstrates the effectiveness and promise of our proposed approach.

Acknowledgments

E. P. Xing is supported by NSF IIS-0713379, DBI-0546594, a Career Award, ONR N000140910758, DARPA NBCH1080007 and the Alfred P. Sloan Foundation. L. Fei-Fei is partially supported by an NSF CAREER grant (IIS-0845230) and an ONR MURI grant.

References

[1] B. Bakker and T. Heskes. Task clustering and gating for bayesian multitask learning. JMLR, 4:83–99, 2003.
[2] E. Bart and S. Ullman. Cross-generalization: learning novel classes from a single example by feature replacement. In CVPR, 2005.
[3] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge 2010. http://www.imagenet.org/challenges/LSVRC/2010/, 2010.
[4] A. Binder, K.-R. Müller, and M. Kawanabe. On taxonomies for multi-class image categorization. IJCV, pages 1–21, 2011.
[5] C. Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., 2006.
[6] O. Boiman, E. Shechtman, and M. Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[7] A. Budanitsky and G. Hirst. Evaluating wordnet-based measures of lexical semantic relatedness. Comput. Linguist., 32:13–47, March 2006.
[8] R. Caruana. Multitask learning. Machine Learning, 28:41–75, 1997.
[9] X. Chen, Q. Lin, S.
Kim, J. Carbonell, and E. P. Xing. Smoothing proximal gradient method for general structured sparse learning. In UAI, 2011.
[10] C. Chu, S. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, and K. Olukotun. Map-reduce for machine learning on multicore. In NIPS, 2007.
[11] J. Deng, A. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us? In ECCV, 2010.
[12] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
[13] J. Deng, S. Satheesh, A. Berg, and L. Fei-Fei. Fast and balanced: Efficient label tree learning for large scale object recognition. In NIPS, 2011.
[14] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental bayesian approach tested on 101 object categories. In CVPR Workshop on Generative-Model Based Vision, 2004.
[15] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. PAMI, 28:594–611, 2006.
[16] R. Fergus, H. Bernal, Y. Weiss, and A. Torralba. Semantic label sharing for learning with many categories. In ECCV, 2010.
[17] T. Gao and D. Koller. Discriminative learning of relaxed hierarchy for large-scale visual recognition. In ICCV, 2011.
[18] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.
[19] L. Jacob, F. Bach, and J.-P. Vert. Clustered multi-task learning: A convex formulation. In NIPS, 2008.
[20] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for sparse hierarchical dictionary learning. In ICML, 2010.
[21] S. Kim and E. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In ICML, 2010.
[22] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner.
Gradient-based learning applied to document recognition. Proc. IEEE, 86:2278–2324, 1998.
[24] Y. Lin, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, L. Cao, and T. Huang. Large-scale image classification: fast feature extraction and svm training. In CVPR, 2011.
[25] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60:91–110, 2004.
[26] E. Miller, N. Matsakis, and P. Viola. Learning from one example through shared densities on transforms. In CVPR, 2000.
[27] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady AN SSSR (translated as Soviet. Math. Docl.), 269:543–547, 1983.
[28] A. Opelt, A. Pinz, and A. Zisserman. Incremental learning of object detectors using a visual shape alphabet. In CVPR, 2006.
[29] F. Perronnin, J. Sanchez, and T. Mensink. Improving the fisher kernel for large-scale image classification. In ECCV, 2010.
[30] A. Quattoni, M. Collins, and T. Darrell. Transfer learning for image classification with sparse prototype representations. In CVPR, 2008.
[31] B. Russell, A. Torralba, K. Murphy, and W. Freeman. Labelme: A database and web-based tool for image annotation. IJCV, 77:157–173, 2008.
[32] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR, 2011.
[33] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Learning hierarchical models of scenes, objects, and parts. In CVPR, 2005.
[34] J. Tenenbaum and W. Freeman. Separating style and content with bilinear models. Neural Computation, 12:1247–1283, 2000.
[35] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society Series B, pages 91–108, 2005.
[36] A. Torralba, R. Fergus, and W. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. PAMI, 30:1958–1970, 2008.
[37] A. Torralba, K. Murphy, and W. Freeman.
Sharing features: efficient boosting procedures for multiclass object detection. In CVPR, 2004.
[38] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In CVPR, 2010.