Entropy Estimations Using Correlated Symmetric Stable Random Projections

Ping Li, Department of Statistical Science, Cornell University, Ithaca, NY 14853, pingli@cornell.edu
Cun-Hui Zhang, Department of Statistics and Biostatistics, Rutgers University, New Brunswick, NJ 08901, czhang@stat.rutgers.edu

Abstract

Methods for efficiently estimating the Shannon entropy of data streams have important applications in learning, data mining, and network anomaly detection (e.g., DDoS attacks). For nonnegative data streams, the method of Compressed Counting (CC) [11, 13], based on maximally-skewed stable random projections, can provide accurate estimates of the Shannon entropy using small storage. However, CC is no longer applicable when entries of data streams can be below zero, which is a common scenario when comparing two streams. In this paper, we propose an algorithm for entropy estimation in general data streams that allow negative entries. In our method, the Shannon entropy is approximated by the finite difference of two correlated frequency moments estimated from correlated samples of symmetric stable random variables. Interestingly, the moment estimator we recommend for entropy estimation barely has bounded variance itself, whereas the common geometric mean estimator (which has bounded higher-order moments) is not sufficient for entropy estimation. Our experiments confirm that this method approximates the Shannon entropy well using small storage.

1 Introduction

Computing the Shannon entropy of massive data has important applications in neural computation [17], graph estimation [5], query log analysis in Web search [14], network anomaly detection [21], etc. (See the NIPS 2003 workshop on entropy estimation, www.menem.com/~ilya/pages/NIPS03.) In modern applications, as massive datasets are often generated in a streaming fashion, entropy estimation in data streams has become a challenging and interesting problem.
1.1 Data Streams

Massive data generated in a streaming fashion are difficult to transmit and store [15], so the processing is often done on the fly in one pass of the data. The problem of "scaling up for high dimensional data and high speed data streams" is among the "ten challenging problems in data mining research" [20]. Mining data streams at petabyte scale has become an important research area [1], as network data can easily reach that scale [20]. In the standard turnstile model [15], a data stream is a vector A_t of length D, where D = 2^{64} or even D = 2^{128} is possible in network applications, e.g., (a pair of) IP addresses + port numbers. At time t, an input stream element a_t = (i_t, I_t), i_t ∈ [1, D], updates A_t by a linear rule:

A_t[i_t] = A_{t-1}[i_t] + I_t,   (1)

where I_t is the increment/decrement of packet size at time t. For network traffic, normally A_t[i] ≥ 0, which is called the strict turnstile model and suffices for describing certain natural phenomena. On the other hand, the general turnstile model (which allows A_t[i] < 0) is often used for comparing two streams, e.g., in network OD (origin-destination) flow analysis [21]. An important task is to compute the α-th frequency moment F_{(α)} and the Shannon entropy H:

F_{(\alpha)} = \sum_{i=1}^{D} |A_t[i]|^{\alpha}, \qquad H = -\sum_{i=1}^{D} \frac{|A_t[i]|}{F_{(1)}} \log \frac{|A_t[i]|}{F_{(1)}}.   (2)

The exact computation of these summary statistics is not feasible because one would have to store the entire vector A_t of length D, as the entries are time-varying. Also, many applications (such as anomaly detection in network traffic) require computing the summary statistics in real time.

1.2 Network Measurement, Monitoring, and Anomaly Detection

Network traffic is a typical example of high-rate data streams. Industries are now prepared to move to 100 Gbit/second or Terabit/second Ethernet.
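As a minimal illustration (our own sketch; the function names are not from the paper), the turnstile update (1) and the summary statistics in (2) can be written directly for a small in-memory vector:

```python
import math

def process_stream(stream, D):
    """Apply turnstile updates A_t[i_t] = A_{t-1}[i_t] + I_t and return A_t."""
    A = [0.0] * D
    for i_t, I_t in stream:
        A[i_t] += I_t  # the linear update rule (1)
    return A

def frequency_moment(A, alpha):
    """F_(alpha) = sum_i |A[i]|^alpha, as in (2)."""
    return sum(abs(a) ** alpha for a in A)

def shannon_entropy(A):
    """H = -sum_i (|A[i]|/F_(1)) log(|A[i]|/F_(1)), skipping zero entries."""
    F1 = frequency_moment(A, 1.0)
    return -sum(abs(a) / F1 * math.log(abs(a) / F1) for a in A if a != 0)
```

Of course, this stores the entire vector A_t, which is exactly what becomes infeasible when D = 2^{64}; the point of the paper's approach is to avoid that storage.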
An effective and reliable measurement of network traffic in real time is crucial for anomaly detection and network diagnosis; one such measurement metric is the Shannon entropy [4, 8, 19, 2, 9, 21]. Exact entropy measurement in real time on high-speed links is, however, computationally prohibitive.

[Figure 1 plot omitted; x-axis: packet counts (thousands), y-axis: source IP address entropy value]
Figure 1: This plot is reproduced from a DARPA conference [4]. One can view the x-axis as a surrogate for time. The y-axis is the measured Shannon entropy, which exhibited a sudden sharp change at the time when an attack occurred.

The Distributed Denial of Service (DDoS) attack is a representative example of network anomalies. A DDoS attack attempts to make computers unavailable to intended users, either by forcing users to reset the computers or by exhausting the resources of service-hosting sites. For example, hackers may maliciously saturate the victim machines by sending many external communication requests. DDoS attacks typically target sites such as banks, credit card payment gateways, or military sites. A DDoS attack normally changes the statistical distribution of network traffic, which can be reliably captured by abnormal variations in the measurements of Shannon entropy [4]. See Figure 1 for an illustration. The entropy measurements do not have to be "perfect" for detecting attacks. It is, however, crucial that the algorithms be computationally efficient (i.e., real-time and one-pass) at low memory cost, because the traffic data generated by large high-speed networks are enormous and transient.

1.3 Symmetric Stable Random Projections and Entropy Estimation Using Moments

It turns out that, for 0 < α ≤ 2, one can use stable random projections to compute F_{(α)} efficiently, because the turnstile model (1) is linear and the random projection operation is also linear (i.e., a vector-matrix multiplication) [7].
Conceptually, we multiply the data stream vector A_t ∈ R^D by a random matrix R ∈ R^{D×k}, resulting in a vector X = A_t × R ∈ R^k with entries

x_j = [A_t \times R]_j = \sum_{i=1}^{D} r_{ij} A_t[i], \quad j = 1, 2, ..., k,

where r_{ij} ~ S(α, 1) is a symmetric α-stable random variable with unit scale [3, 22]: E\left(e^{\sqrt{-1}\, r_{ij} t}\right) = e^{-|t|^{\alpha}}. The standard normal (or Cauchy) distribution is a special case with α = 2 (or α = 1). In data stream computations, the matrix R is not materialized. The standard procedure is to (re)generate entries of R on demand [7] using pseudo-random numbers [16]. Thus, we only need to store X ∈ R^k. When a stream element a_t = (i_t, I_t) arrives, one updates the entries of X:

x_j \leftarrow x_j + I_t\, r_{i_t j}, \quad j = 1, 2, ..., k.   (3)

By the properties of stable distributions, the samples x_j, j = 1 to k, are also i.i.d. stable:

x_j = \sum_{i=1}^{D} r_{ij} A_t[i] \sim S\left(\alpha, \; F_{(\alpha)} = \sum_{i=1}^{D} |A_t[i]|^{\alpha}\right).   (4)

Therefore, the task boils down to estimating the scale parameter from k i.i.d. stable samples. Because the Shannon entropy is essentially the derivative of the frequency moment at α = 1, the popular approach is to approximate the Shannon entropy by the Tsallis entropy [18]:

T_{\alpha} = \frac{1}{\alpha - 1}\left(1 - \frac{F_{(\alpha)}}{F_{(1)}^{\alpha}}\right),   (5)

which approaches the Shannon entropy H as α → 1. [21] used a slight variant of (5), but the difference is not essential (see footnote 1). In their approach, F_{(α)} and F_{(1)} are first estimated separately from two independent sets of samples. The estimated moments are then plugged into (5) to estimate the Shannon entropy H. Immediately, we can see the problem here: the variance of the estimated T_α might be proportional to 1/(α − 1)^2 = 1/∆^2. (Recall Var(cX) = c^2 Var(X).) One question is how to choose α (i.e., ∆). [6] proposed a conservative criterion by choosing α according to the worst-case bias |H − T_α|.

Footnote 1: [21] used (F_{(1+∆)} − F_{(1−∆)})/(2∆) and estimated the two frequency moments independently. The subtle difference between the finite-difference approximations is not essential. It is the correlation that plays the crucial role.
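A minimal sketch (ours; the integer seeding scheme below is a toy stand-in for a proper pseudo-random generator such as [16]) of maintaining X incrementally via (3), shown for α = 1, where r_{ij} is standard Cauchy, tan(U) with U ~ uniform(−π/2, π/2):

```python
import math
import random

def cauchy_row(i, k, master_seed=0):
    """Regenerate the i-th row of R on demand: k i.i.d. S(1, 1) (Cauchy) entries."""
    rng = random.Random(master_seed * 1_000_003 + i)  # toy deterministic seeding
    return [math.tan(rng.uniform(-math.pi / 2, math.pi / 2)) for _ in range(k)]

def process_element(x, i_t, I_t, k):
    """On arrival of (i_t, I_t), apply the update (3): x_j <- x_j + I_t * r_{i_t j}."""
    r = cauchy_row(i_t, k)
    for j in range(k):
        x[j] += I_t * r[j]
```

Because each row of R is regenerated deterministically, processing the stream incrementally yields the same X as multiplying the final vector A_t by R at once.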
One can verify that the worst-case criterion of [6] likely requires ∆ = |1 − α| < 10^{-7}. In other words, the required sample size could be O(10^{14}). In practice, [21] exploited the bias-variance tradeoff, but they still had to use an excessive number of samples, e.g., 10^6. In comparison, using our proposed approach, it appears that 100 ~ 1000 samples might be sufficient.

1.4 Our Proposal

We have made two key contributions. First, instead of estimating F_{(α)} and F_{(1)} separately using two independent sets of samples, we make them highly positively correlated. Intuitively, if the two consistent estimators, denoted by F̂_{(α)} and F̂_{(1)} respectively, are highly positively correlated, then their ratio F̂_{(α)} / F̂_{(1)}^α can possibly be close to 1 with small variance. Ideally, if Var(F̂_{(α)} / F̂_{(1)}^α) = O(∆^2), the variance of the estimated Tsallis entropy T̂_α = (1/(α − 1))(1 − F̂_{(α)} / F̂_{(1)}^α) will be essentially independent of ∆.

It turns out that finding an estimator with Var(F̂_{(α)} / F̂_{(1)}^α) = O(∆^2) was not straightforward. It is known that around α = 1, the geometric mean estimator [10] is nearly statistically optimal. Interestingly, our analysis and simulations show that using the geometric mean estimator, we can essentially only achieve Var(F̂_{(α)} / F̂_{(1)}^α) = O(∆), which, albeit a large improvement, is not sufficiently small to cancel the O(1/∆^2) term. Therefore, our second key component is a new estimator of T_α using a moment estimator which does not have (or barely has) finite variance. Even though such an estimator is not good for estimating a single moment compared to the geometric mean, due to the high correlation the ratio F̂_{(α)} / F̂_{(1)}^α is still very well-behaved, and its variance is essentially O(∆^2), as shown in our theoretical analysis and experiments.

1.5 Compressed Counting (CC) for Nonnegative Data Streams

The recent work [13] on Compressed Counting (CC) [11] provides an ideal solution to the problem of entropy estimation in nonnegative data streams.
Basically, for nonnegative data streams, i.e., A_t[i] ≥ 0 at all times and all locations, we can compute the first moment easily, because

F_{(1)} = \sum_{i=1}^{D} |A_t[i]| = \sum_{i=1}^{D} A_t[i] = \sum_{s=0}^{t} I_s,   (6)

where I_s is the increment/decrement at time s. In other words, we just need a single counter to accumulate all the increments I_s. This observation led to the conjecture that estimating F_{(α)} should also be easy if α ≈ 1, which consequently led to the development of Compressed Counting, which uses maximally-skewed stable random projections instead of symmetric stable projections. The most recent work on CC [13] provided a new moment estimator that achieves variance ∝ O(∆^2). Unfortunately, for general data streams where entries can be negative, we have to resort to symmetric stable random projections. Fundamentally, the reason that skewed projections work well on nonnegative data streams is that the data themselves are skewed. However, when we compare two streams, the data become more or less symmetric, and hence we must use symmetric projections.

1.6 Why Compare the Difference of Two Streams?

In machine learning research and practice, people routinely use the difference between feature vectors. [21] used the difference between data streams with a slightly different motivation. The goal of [21] is to measure the entropies of all OD (origin-destination) pairs in a network, because entropy measurements are crucial for detecting anomalous events such as DDoS attacks and network failures. They argued that the change of entropy of the traffic distribution may be invisible (i.e., too small to be detected) in the traditional volume matrix even during the time when an attack occurs. Instead, they proposed to measure the entropy from a number of locations across the network, i.e., by examining the entropy of every OD flow in the network. By a similar argument, a DDoS attack may be invisible in terms of the traffic volume change if the attack is launched outside the network.
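In code, (6) amounts to nothing more than a running sum (a trivial sketch of ours):

```python
def f1_counter(stream):
    """F_(1) = sum_s I_s for a nonnegative data stream: one counter suffices."""
    total = 0.0
    for _i_t, I_t in stream:
        total += I_t
    return total
```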
While [21] successfully demonstrated that measuring the Shannon entropy of OD flows is effective for detecting anomalous events, at that time they did not have the tools for efficiently estimating the entropy. Using symmetric stable random projections and independent samples, they needed a large number of samples (e.g., 10^6) because the variance blows up at the rate of O(1/∆^2). For anomaly detection, reducing the sample size k is crucial, because k determines the storage and estimation speed, and it is often required to detect events in real time. In addition, the pseudo-random numbers have to be (re)generated on the fly, at a cost proportional to k.

2 Our Proposed Algorithm

Recall that a data stream is a long vector A_t[i], i = 1 to D. At time t, an incoming element a_t = (i_t, I_t) updates one entry: A_t[i_t] ← A_{t−1}[i_t] + I_t. Conceptually, we generate a random matrix R ∈ R^{D×k} whose entries are sampled from a stable distribution and multiply it with A_t: X = A_t × R. The matrix multiplication is linear and can be conducted incrementally as new stream elements arrive. R is not materialized; its entries are regenerated on demand using pseudo-random numbers, as is standard practice in data stream computations [7]. Our method does not require A_t[i] ≥ 0, and hence it can handle the difference between two streams (e.g., the OD flows).

2.1 The Symmetric Stable Law

Our work utilizes the symmetric stable distribution. We adopt the standard approach [3] to sample from the stable law S(α, 1) with index α and unit scale. We generate two independent random variables, w ~ exp(1) and u ~ uniform(−π/2, π/2), and feed them to a nonlinear transformation:

Z(\alpha) = g(w, u, \alpha) = \frac{\sin(\alpha u)}{(\cos u)^{1/\alpha}} \left[\frac{\cos(u - \alpha u)}{w}\right]^{(1-\alpha)/\alpha} \sim S(\alpha, 1),   (7)

to obtain a sample from S(α, 1). An important property is that, for −1 < γ < α, the moment exists: E|Z|^γ = (2/π) Γ(1 − γ/α) Γ(γ) sin(γπ/2).
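The transformation (7) can be transcribed directly; a sketch in Python (`sample_stable` is our own wrapper name):

```python
import math
import random

def g(w, u, alpha):
    """The Chambers-Mallows-Stuck transform (7): maps w ~ Exp(1) and
    u ~ Uniform(-pi/2, pi/2) to a sample from the symmetric stable law S(alpha, 1)."""
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def sample_stable(alpha, rng=random):
    """Draw one sample from S(alpha, 1)."""
    w = rng.expovariate(1.0)
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    return g(w, u, alpha)
```

At α = 1 the second factor has exponent 0 and g reduces to tan(u), the standard Cauchy; at α = 2 it simplifies to 2 sin(u)√w, a normal variable (matching the characteristic function e^{−t^2}).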
For convenience, we define

G(\alpha, \gamma) = E|g(w, u, \alpha)|^{\gamma} = \frac{2}{\pi} \Gamma(1 - \gamma/\alpha) \Gamma(\gamma) \sin(\gamma \pi / 2).   (8)

2.2 Our Recommended Estimator

Conceptually, we have two matrices of i.i.d. random numbers:

w_{ij} \sim \exp(1), \quad u_{ij} \sim \text{uniform}(-\pi/2, \pi/2), \quad i = 1, 2, ..., D, \; j = 1, 2, ..., k.   (9)

As new stream elements arrive, we incrementally maintain two sets of samples: for j = 1 to k,

x_j = \sum_{i=1}^{D} A_t[i]\, g(w_{ij}, u_{ij}, 1), \qquad y_j = \sum_{i=1}^{D} A_t[i]\, g(w_{ij}, u_{ij}, \alpha).   (10)

Note that x_j and y_j are highly correlated because they are generated using the same random numbers (with different α). However, x_i and y_j are independent if i ≠ j. Our recommended estimator of the Tsallis entropy T_α is

\hat{T}_{\alpha, 0.5} = \frac{1}{\alpha - 1} \left[ 1 - \left( \frac{\sqrt{\pi}}{\Gamma\left(1 - \frac{1}{2\alpha}\right)} \frac{\sum_{j=1}^{k} \sqrt{|y_j|}}{\sum_{j=1}^{k} \sqrt{|x_j|}} \right)^{2\alpha} \right],   (11)

where α = 1 + ∆ > 1, and the meaning of the subscript 0.5 will soon be clear. When ∆ is sufficiently small, the estimated Tsallis entropy will be sufficiently close to the Shannon entropy. A nice property is that its variance is free of 1/∆ or 1/∆^2 terms. While it is intuitively clear that it is beneficial to make x_j and y_j highly correlated for the sake of reducing the variance, it might not be as intuitive why T̂_{α,0.5} in (11) is a good estimator for the entropy. We will explain why the obvious geometric mean estimator [10] is not sufficient for entropy estimation.

3 The Geometric Mean Estimator

For estimating F_{(α)}, the geometric mean estimator [10] is close to statistically optimal (efficiency ≈ 80%) at α ≈ 1. Thus, it was our first attempt to test the following estimator of the Tsallis entropy:

\hat{T}_{\alpha,gm} = \frac{1}{\alpha - 1} \left( 1 - \frac{\hat{F}_{(\alpha),gm}}{\hat{F}_{(1),gm}^{\alpha}} \right), \quad \text{where} \quad \hat{F}_{(\alpha),gm} = \frac{\prod_{j=1}^{k} |y_j|^{\alpha/k}}{G^k(\alpha, \alpha/k)}, \quad \hat{F}_{(1),gm} = \frac{\prod_{j=1}^{k} |x_j|^{1/k}}{G^k(1, 1/k)},

where G() is defined in (8). After simplification, we obtain:

\hat{T}_{\alpha,gm} = \frac{1}{\alpha - 1} \left( 1 - \prod_{j=1}^{k} \left[ \left| \frac{y_j}{x_j} \right|^{\alpha/k} \frac{G(1, 1/k)}{G(\alpha, \alpha/k)} \right] \right).   (12)

3.1 Theoretical Analysis

The theoretical analysis of T̂_{α,gm}, however, turns out to be difficult, as it requires computing

E \left| \frac{y_j}{x_j} \right|^{s\alpha/k} = E \left| \frac{\sum_{i=1}^{D} A_t[i]\, g(w_{ij}, u_{ij}, \alpha)}{\sum_{i=1}^{D} A_t[i]\, g(w_{ij}, u_{ij}, 1)} \right|^{s\alpha/k}, \quad s = 1, 2,   (13)

where g() is defined in (7). We first provide the following lemma:

Lemma 1. Let w ~ exp(1) and u ~ uniform(−π/2, π/2) be two independent variables. Let α = 1 + ∆ > 1, for small ∆ > 0. Then, for γ > −1,

E \left| \frac{g(w, u, \alpha)}{g(w, u, 1)} \right|^{\gamma} = 1 - 0.5772\gamma\Delta + 0.5772\gamma\Delta^2 - 1.6386\gamma\Delta^3 + 1.6822\gamma^2\Delta^2 + O(\gamma\Delta^4) + O(\gamma^2\Delta^3). □   (14)

Note that we need to keep higher-order terms in order to prove Lemma 2, which shows the properties of the geometric mean estimator when D = 1 (i.e., a stream with only one element).

Lemma 2. If D = 1, then

E\left(\hat{T}_{\alpha,gm}\right) = \frac{1}{k} \frac{\pi^2}{2} - \frac{2.0935\Delta}{k} + 1.0614\Delta^2 + O(\Delta^3) + O\left(\frac{\Delta^2}{k}\right) + O\left(\frac{1}{k^2}\right),   (15)

Var\left(\hat{T}_{\alpha,gm}\right) = \frac{3.3645}{k} + O\left(\frac{\Delta}{k}\right) + O\left(\frac{1}{k^2}\right). □   (16)

When D = 1, we know T_α = H = 0. In this case, the geometric mean estimator T̂_{α,gm} is asymptotically unbiased, with variance essentially free of 1/∆, which is very encouraging. Will this result in Lemma 2 extend to general D? The answer is no, even for D = 2, i.e.,

\frac{y_j}{x_j} = \frac{A_t[1]\, g(w_{1j}, u_{1j}, \alpha) + A_t[2]\, g(w_{2j}, u_{2j}, \alpha)}{A_t[1]\, g(w_{1j}, u_{1j}, 1) + A_t[2]\, g(w_{2j}, u_{2j}, 1)}.

Because g() is symmetric, it is possible that the denominator A_t[1] g(w_{1j}, u_{1j}, 1) + A_t[2] g(w_{2j}, u_{2j}, 1) is very small while the numerator A_t[1] g(w_{1j}, u_{1j}, α) + A_t[2] g(w_{2j}, u_{2j}, α) is not. In other words, there is more variation when D > 1. In fact, our experiments in Sec. 3.2 and the theoretical analysis of a more general estimator in Sec. 4 both reveal that the variance of T̂_{α,gm} is essentially O(1/∆), which is of course still a substantial improvement over the previous O(1/∆^2) solution.

3.2 Experiments on the Geometric Mean Estimator (Correlated vs.
Independent samples)

We present some experimental results evaluating T̂_{α,gm}, to demonstrate that (i) using correlation does substantially reduce the variance and hence the required sample size, and (ii) the variance (or MSE, the mean square error) of T̂_{α,gm} is roughly O(1/∆).

We follow [13] in using static data to evaluate the accuracy of the estimators. The projected vector X = A_t × R is the same at the end of the stream regardless of whether it is computed at once (i.e., static) or incrementally (i.e., dynamic). Following [13], we selected 4 word vectors from a chunk of Web crawl data. For example, the entries of the vector "REVIEW" are the numbers of occurrences of the word "REVIEW" in each document. We group these 4 vectors into 2 pairs, "THIS-HAVE" and "DO-REVIEW", and we estimate the Shannon entropies of the two resultant difference vectors. Figure 2 presents the mean square errors (MSE) of the estimated Shannon entropy, i.e., E(T̂_α − H)^2, normalized by the truth (H^2). The left panels contain the results using independent sampling (i.e., the prior work [21]) and the geometric mean estimator. The middle panels contain the results using correlated sampling (i.e., this paper) and the geometric mean estimator (12). The right panels multiply the results of the middle panels by ∆ to illustrate that the variance of the geometric mean estimator for entropy, T̂_{α,gm}, is essentially O(1/∆). See more experiments in Figure 3.

[Figure 2 panels omitted; axes: ∆ = α − 1 vs. normalized MSE, for k = 10, 100, 1000]
Figure 2: Two pairs of word vectors were selected. We conducted symmetric random projections using both independent sampling (left panels, as in [21]) and correlated sampling (middle panels, as in our proposal). The Tsallis entropy (of the difference vector) is estimated using the geometric mean estimator (12) with three sample sizes k = 10, 100, and 1000. The normalized mean square errors (MSE: E|T̂_{α,gm} − H|^2 / H^2) verify that correlated sampling reduces the errors substantially.

4 The General Estimator

Since the geometric mean estimator could not satisfactorily solve the entropy estimation problem, we resort to estimators which behave dramatically differently from the geometric mean. Our recommended estimator T̂_{α,0.5} in (11) is a special case (γ = 0.5) of a more general family of estimators [12], parameterized by γ ∈ (0, 1):

\hat{T}_{\alpha,\gamma} = \frac{1}{\alpha - 1} \left( 1 - \frac{\hat{F}_{(\alpha),\gamma}}{\hat{F}_{(1),\gamma}^{\alpha}} \right), \quad \hat{F}_{(\alpha),\gamma} = \left( \frac{\sum_{j=1}^{k} |y_j|^{\gamma}}{k\, G(\alpha, \gamma)} \right)^{\alpha/\gamma}, \quad \hat{F}_{(1),\gamma} = \left( \frac{\sum_{j=1}^{k} |x_j|^{\gamma}}{k\, G(1, \gamma)} \right)^{1/\gamma},

which, after simplification, becomes

\hat{T}_{\alpha,\gamma} = \frac{1}{\alpha - 1} \left[ 1 - \left( \frac{\sum_{j=1}^{k} |y_j|^{\gamma}}{\sum_{j=1}^{k} |x_j|^{\gamma}} \frac{G(1, \gamma)}{G(\alpha, \gamma)} \right)^{\alpha/\gamma} \right].   (17)

Recall that G(α, γ) is defined in (8), and G(1, 0.5)/G(α, 0.5) = \sqrt{\pi} / \Gamma\left(1 - \frac{1}{2\alpha}\right). To better understand F̂_{(α),γ}, recall that if Z ~ S(α, 1), then E|Z|^γ = G(α, γ) < ∞ if −1 < γ < α. Therefore, \left(\sum_{j=1}^{k} |y_j|^{\gamma}\right) / (k\, G(\alpha, \gamma)) is an unbiased estimate of F_{(\alpha)}^{\gamma/\alpha}. To recover F_{(α)}, we need to apply the power α/γ. Thus, it is clear that, as long as 0 < γ < 1, F̂_{(α),γ} is a consistent estimator of F_{(α)} and E(F̂_{(α),γ}) is finite.
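A sketch (ours) of the general estimator (17); with γ = 0.5 it reduces to the recommended estimator (11):

```python
import math

def G(alpha, gamma):
    """G(alpha, gamma) = E|Z|^gamma for Z ~ S(alpha, 1), as in (8)."""
    return (2.0 / math.pi) * math.gamma(1.0 - gamma / alpha) \
        * math.gamma(gamma) * math.sin(gamma * math.pi / 2.0)

def tsallis_hat(x, y, alpha, gamma=0.5):
    """The general estimator (17): x holds the alpha = 1 projections,
    y the correlated alpha = 1 + Delta projections."""
    ratio = (sum(abs(v) ** gamma for v in y) / sum(abs(v) ** gamma for v in x)
             * G(1.0, gamma) / G(alpha, gamma))
    return (1.0 - ratio ** (alpha / gamma)) / (alpha - 1.0)
```

Note that the variance expression below involves G(α, 2γ); at γ = 0.5 and α = 1 this is G(1, 1) = ∞, which is the precise sense in which the recommended estimator "barely has" bounded variance.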
In particular, the variance of F̂_{(α),γ} is bounded if 0 < γ < 0.5:

E\left(\hat{F}_{(\alpha),\gamma}\right) = F_{(\alpha)} + O\left(\frac{1}{k}\right), \qquad Var\left(\hat{F}_{(\alpha),\gamma}\right) = \frac{F_{(\alpha)}^2}{k} \frac{\alpha^2}{\gamma^2} \frac{G(\alpha, 2\gamma) - G^2(\alpha, \gamma)}{G^2(\alpha, \gamma)} + O\left(\frac{1}{k^2}\right).

The variance is unbounded if γ = 0.5 and α = 1, because G(1, 1) = ∞ (as Γ(0) = ∞). Interestingly, when γ → 0 and α = 1, the asymptotic variance reaches its minimum. In fact, when γ → 0, F̂_{(α),γ} converges to the geometric mean estimator F̂_{(α),gm}. A variant of F̂_{(α),γ} was discussed in [12].

4.1 Theoretical Analysis

Based on Lemma 3 and Lemma 4 (the latter a fairly technical proof), we know that the variance of the general estimator is essentially Var(T̂_{α,γ}) = O(∆^{2γ−1}/k) for fixed γ ∈ (0, 1/2). In other words, when γ is close to 0, the variance of the entropy estimator is essentially on the order of O(1/(k∆)), while when γ is close to 1/2, the variance is essentially O(1/k), as desired.

Lemma 3. For any fixed γ ∈ (0, 1),

Var\left(\hat{T}_{\alpha,\gamma}\right) = \frac{1}{\Delta^2} \frac{O\left(E\left(|x_1|^{\gamma} - |y_1|^{\gamma}\right)^2\right)}{k} + O\left(\frac{1}{k^2}\right). □

Lemma 4. Let 0 < ∆ ≤ 1/2 and α = 1 + ∆. Let γ ∈ (0, 1/2) and let m be a positive integer no smaller than 1/γ. Then there exists a universal constant M such that

E\left(|x_1|^{\gamma} - |y_1|^{\gamma}\right)^2 \le M F_{(1)}^{2\gamma}\, \Delta^{1+2\gamma-1/m}\, m^2\, \frac{m + \tilde{H}_{(2m)}^2 + (1 - 2\gamma)^{-2}}{1 - 2\gamma},

where \tilde{H}_{(2m)} = \left( \sum_{i=1}^{D} \frac{|A_t[i]|}{F_{(1)}} \left( \log \frac{|A_t[i]|}{F_{(1)}} \right)^{2m} \right)^{1/(2m)}. □

We should clarify that our theoretical analysis is only applicable for fixed γ ∈ (0, 1/2). When γ = 0.5, the estimator T̂_{α,0.5} is still well-behaved, except that we are unable to precisely analyze this case. Also, since we do not compute the exact constant, it is possible that for some carefully chosen (data-dependent) α, T̂_{α,γ} with γ < 0.5 may exhibit smaller variance than T̂_{α,0.5}. We recommend T̂_{α,0.5} for convenience, because it essentially frees practitioners from carefully choosing α.

4.2 Experimental Results

Figure 3 presents some empirical results testing the general estimator T̂_{α,γ} (17), using more word vector pairs (including the same 2 pairs in Figure 2).
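As a sanity check (our own toy simulation, not one of the paper's experiments), for D = 1, where T_α = H = 0, one can simulate T̂_{α,0.5} with correlated samples and observe that the estimates stay bounded even for small ∆, despite the 1/∆ amplification:

```python
import math
import random

def estimate_half(alpha, k, rng):
    """hat{T}_{alpha,0.5} of (11) for a single-entry stream (D = 1, H = 0),
    with correlated samples: the same (w, u) is used at both alpha and 1."""
    sx = sy = 0.0
    for _ in range(k):
        w = rng.expovariate(1.0)
        u = rng.uniform(-math.pi / 2, math.pi / 2)
        g_alpha = (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
                   * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))
        g_one = math.tan(u)  # g(w, u, 1)
        sy += math.sqrt(abs(g_alpha))
        sx += math.sqrt(abs(g_one))
    c = math.sqrt(math.pi) / math.gamma(1.0 - 1.0 / (2.0 * alpha))
    return (1.0 - (c * sy / sx) ** (2.0 * alpha)) / (alpha - 1.0)

rng = random.Random(7)
# Delta = 0.01 would inflate an uncorrelated estimate by 1/Delta^2 = 10^4;
# the correlated estimates should instead concentrate near the truth H = 0.
estimates = [estimate_half(1.01, 500, rng) for _ in range(15)]
```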
We can see that when γ = 0.5, the (normalized) MSEs become flat (as desired) as ∆ = α − 1 → 0. When γ > 1/2, the MSEs increase, although the curves remain flat. When γ < 1/2, the MSEs blow up as ∆ decreases. Note that, when γ < 1/2, it is possible to achieve smaller MSEs if we carefully choose α.

How many samples (k) are needed? If the goal is to estimate the Shannon entropy within a few percent of the true value, then k = 100 ~ 1000 should be sufficient, because √MSE / H < 0.1 when k ≥ 100, as shown in Figure 3.

5 Conclusion

Entropy estimation is an important task in machine learning, data mining, network measurement, anomaly detection, neural computation, etc. In modern applications, the data are often generated in a streaming fashion, and many operations on the streams can only be conducted in one pass of the data. It has been a challenging problem to estimate the Shannon entropy of data streams. The prior work [21] achieved some success in entropy estimation using symmetric stable random projections. However, even after aggressively exploiting the bias-variance tradeoff, they still needed a large number of samples, e.g., 10^6, which is prohibitive in both time and space, especially considering that in streaming applications the pseudo-random numbers have to be regenerated on the fly, at a cost directly proportional to the sample size.

In our approach, we approximate the Shannon entropy using two highly correlated estimates of the frequency moments. The positive correlation can substantially reduce the variance of the Shannon entropy estimate. However, finding the appropriate estimator of the frequency moment is another challenging task. We found such an estimator and showed that the variance of the resulting Shannon entropy estimate is very small. Experimental results demonstrate that about 100 ~ 1000 samples should be sufficient for achieving high accuracy.
Acknowledgement

The research of Ping Li is partially supported by NSF-IIS-1249316, NSF-DMS-0808864, NSF-SES-1131848, and ONR-YIP-N000140910911. The research of Cun-Hui Zhang is partially supported by NSF-DMS-0906420, NSF-DMS-1106753, NSF-DMS-1209014, and NSA-H98230-11-1-0205.

[Figure 3 panels omitted; axes: ∆ = α − 1 vs. normalized MSE, for k = 10, 100, 1000; word pairs: THIS-HAVE, DO-REVIEW, UNITED-STATES, A-THE, FOOD-LOVE, DATA-PAPER, NEWS-WASHINGTON, MACHINE-LEARN]
Figure 3: The first two rows are the normalized MSEs for the same two vectors used in Figure 2, estimating the Shannon entropy with the general estimator T̂_{α,γ} for γ = 0.3, 0.4, 0.5, 0.6, 0.7. For the remaining rows, the leftmost panels are the results using independent samples (i.e., the prior work [21]) and the geometric mean estimator. The second column of panels gives the results using correlated samples and the geometric mean estimator. The right three columns of panels are for the proposed general estimator T̂_{α,γ} with γ = 0.3, 0.5, 0.7. We recommend γ = 0.5.

References

[1] Brian Babcock, Shivnath Babu, Mayur Datar, Rajeev Motwani, and Jennifer Widom. Models and issues in data stream systems. In PODS, pages 1–16, Madison, WI, 2002.
[2] Daniela Brauckhoff, Bernhard Tellenbach, Arno Wagner, Martin May, and Anukool Lakhina. Impact of packet sampling on anomaly detection metrics. In IMC, pages 159–164, Rio de Janeiro, Brazil, 2006.
[3] John M. Chambers, C. L. Mallows, and B. W. Stuck. A method for simulating stable random variables. Journal of the American Statistical Association, 71(354):340–344, 1976.
[4] Laura Feinstein, Dan Schnackenberg, Ravindra Balupari, and Darrell Kindred. Statistical approaches to DDoS attack detection and response. In DARPA Information Survivability Conference and Exposition, pages 303–314, 2003.
[5] Anupam Gupta, John D. Lafferty, Han Liu, Larry A. Wasserman, and Min Xu. Forest density estimation. In COLT, pages 394–406, Haifa, Israel, 2010.
[6] Nicholas J. A. Harvey, Jelani Nelson, and Krzysztof Onak. Streaming algorithms for estimating entropy. In ITW, 2008.
[7] Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of the ACM, 53(3):307–323, 2006.
[8] Anukool Lakhina, Mark Crovella, and Christophe Diot. Mining anomalies using traffic feature distributions. In SIGCOMM, pages 217–228, Philadelphia, PA, 2005.
[9] Ashwin Lall, Vyas Sekar, Mitsunori Ogihara, Jun Xu, and Hui Zhang. Data streaming algorithms for estimating entropy of network traffic. In SIGMETRICS, pages 145–156, Saint Malo, France, 2006.
[10] Ping Li. Estimators and tail bounds for dimension reduction in l_α (0 < α ≤ 2) using stable random projections. In SODA, pages 10–19, San Francisco, CA, 2008.
[11] Ping Li. Compressed counting. In SODA, New York, NY, 2009.
[12] Ping Li and Trevor J. Hastie. A unified near-optimal estimator for dimension reduction in l_α (0 < α ≤ 2) using stable random projections. In NIPS, Vancouver, BC, Canada, 2007.
[13] Ping Li and Cun-Hui Zhang. A new algorithm for compressed counting with applications in Shannon entropy estimation in dynamic data. In COLT, 2011.
[14] Qiaozhu Mei and Kenneth Church. Entropy of search logs: How hard is search? with personalization? with backoff? In WSDM, pages 45–54, Palo Alto, CA, 2008.
[15] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2):117–236, 2005.
[16] Noam Nisan. Pseudorandom generators for space-bounded computations. In Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, STOC, pages 204–212, 1990.
[17] Liam Paninski. Estimation of entropy and mutual information. Neural Comput., 15(6):1191–1253, 2003.
[18] Constantino Tsallis. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52:479–487, 1988.
[19] Kuai Xu, Zhi-Li Zhang, and Supratik Bhattacharyya. Profiling internet backbone traffic: behavior models and applications. In SIGCOMM '05: Proceedings of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 169–180, Philadelphia, Pennsylvania, USA, 2005.
[20] Qiang Yang and Xindong Wu. 10 challenging problems in data mining research. International Journal of Information Technology and Decision Making, 5(4):597–604, 2006.
[21] Haiquan Zhao, Ashwin Lall, Mitsunori Ogihara, Oliver Spatscheck, Jia Wang, and Jun Xu. A data streaming algorithm for estimating entropies of OD flows. In IMC, San Diego, CA, 2007.
[22] Vladimir M. Zolotarev. One-dimensional Stable Distributions. American Mathematical Society, Providence, RI, 1986.
|
2012
|
143
|
4,501
|
Coding efficiency and detectability of rate fluctuations with non-Poisson neuronal firing

Shinsuke Koyama∗
Department of Statistical Modeling, The Institute of Statistical Mathematics
10-3 Midori-cho, Tachikawa, Tokyo 190-8562, Japan
skoyama@ism.ac.jp

Abstract

Statistical features of neuronal spike trains are known to be non-Poisson. Here, we investigate the extent to which the non-Poissonian feature affects the efficiency of transmitting information on fluctuating firing rates. For this purpose, we introduce the Kullback-Leibler (KL) divergence as a measure of the efficiency of information encoding, and assume that spike trains are generated by time-rescaled renewal processes. We show that the KL divergence determines the lower bound of the degree of rate fluctuations below which the temporal variation of the firing rates is undetectable from sparse data. We also show that the KL divergence, as well as the lower bound, depends not only on the variability of spikes in terms of the coefficient of variation, but also significantly on the higher-order moments of interspike interval (ISI) distributions. We examine three specific models that are commonly used for describing the stochastic nature of spikes (the gamma, inverse Gaussian (IG) and lognormal ISI distributions), and find that the time-rescaled renewal process with the IG distribution achieves the largest KL divergence, followed by the lognormal and gamma distributions.

1 Introduction

Characterizing the statistical features of spike time sequences in the brain is important for understanding how the brain represents information about stimuli or actions in the sequences of spikes. Although the spike trains recorded from in vivo cortical neurons are known to be highly irregular [20, 24], a recent non-stationary analysis has revealed that individual neurons signal with non-Poisson firing, the characteristics of which are strongly correlated with the function of the cortical area [21].
This raises the question of what the neural coding advantages of non-Poisson spiking are. It could be that the precise timing of spikes carries additional information about the stimuli or actions [6, 15]. It is also possible that the efficiency of transmitting fluctuating rates might be enhanced by non-Poisson firing [5, 17]. Here, we explore the latter possibility. In the problem of estimating firing rates, there is a minimum degree of rate fluctuation below which a rate estimator cannot detect the temporal variation of the firing rate [23]. If, for instance, the degree of temporal variation of the rate is on the same order as that of the noise, a constant rate might be chosen as the most likely estimate for a given spike train. It is, therefore, interesting to see how the minimum degree of rate fluctuation depends on the non-Poissonian feature of spike trains. In this study, we investigate the extent to which the non-Poissonian feature of spike trains affects the encoding efficiency of rate fluctuations. In addition, we address the question of how the detectability of rate fluctuations depends on the encoding efficiency. For this purpose, we introduce the Kullback-Leibler (KL) divergence to measure the encoding efficiency, and assume that spike sequences are generated by time-rescaled renewal processes. With the aid of analytical and numerical studies, we suggest that the lower bound of detectable rate fluctuations, below which the empirical Bayes decoder cannot detect the rate fluctuations, is uniquely determined by the KL divergence. By examining three specific models (the time-rescaled renewal process with the gamma, inverse Gaussian (IG) and lognormal interspike interval (ISI) distributions), it is shown that the KL divergence, as well as the lower bound, depends not only on the first- and second-order moments, but also significantly on the higher-order moments of the ISI distributions.

∗http://skoyama.blogspot.jp
We also find that among the three ISI distributions, the IG distribution achieves the highest efficiency of coding information on rate fluctuations.

2 Encoding rate fluctuations using time-rescaled renewal processes

Definitions of time-rescaled renewal processes and KL divergence

We introduce time-rescaled renewal processes as a model of neuronal spike trains, constructed in the following way. Let f_κ(y) be a family of ISI distributions with unit mean (i.e., ∫_0^∞ y f_κ(y) dy = 1), where κ controls the shape of the distribution, and let λ(t) be a fluctuating firing rate. A sequence of spikes {t_i} := {t_1, t_2, . . . , t_n} is generated in the following steps: (i) Draw ISIs {y_1, y_2, . . . , y_n} independently from f_κ(y), and arrange the ISIs sequentially to form a spike train of unit rate; the ith spike is given by summing the previous ISIs, s_i = ∑_{j=1}^{i} y_j. (ii) Transform {s_1, s_2, . . . , s_n} to {t_1, t_2, . . . , t_n} according to t_i = Λ^{−1}(s_i), where Λ^{−1} is the inverse of the function Λ(t) = ∫_0^t λ(u) du. This transformation ensures that the instantaneous firing rate of {t_i} corresponds to λ(t), while the shape of the ISI distribution f_κ(y), which characterizes the firing irregularity, is unchanged in time. This is in agreement with the empirical fact that the degree of irregularity in neuronal firing is generally maintained in cortical processing [21, 22], while the firing rate λ(t) changes in time. The probability density of the occurrence of spikes at {t_i} is then given by

p_κ({t_i}|{λ(t)}) = ∏_{i=1}^{n} λ(t_i) f_κ(Λ(t_i) − Λ(t_{i−1})),  (1)

where t_0 = 0. We next introduce the KL divergence for measuring the encoding efficiency of fluctuating rates. For this purpose, we assume that λ(t) is ergodic with a stationary distribution p(λ), the mean of which is given by µ:

⟨λ⟩_λ := ∫_0^∞ λ p(λ) dλ = lim_{T→∞} (1/T) ∫_0^T λ(t) dt = µ.  (2)

Consider the probability density of a renewal process that has the same ISI density f_κ and the constant rate µ:

p_κ({t_i}|µ) = ∏_{i=1}^{n} µ f_κ(µ(t_i − t_{i−1})).
(3)

The KL divergence between p_κ({t_i}|{λ(t)}) and p_κ({t_i}|µ) is then defined as

D_κ(λ(t)||µ) := lim_{T→∞} ∑_{n=0}^{∞} (1/T) ∫_0^T ∫_{t_1}^T · · · ∫_{t_{n−1}}^T p_κ({t_i}|{λ(t)}) × log [ p_κ({t_i}|{λ(t)}) / p_κ({t_i}|µ) ] dt_1 dt_2 · · · dt_n.  (4)

Since it is defined as the entropy of a renewal process with the fluctuating rate λ(t) relative to that with the constant rate µ, D_κ(λ(t)||µ) can be interpreted as the amount of information on the rate fluctuations encoded into spike trains. Note that a similar quantity has been introduced in [3], where the quantity was computed only under a Poisson model.

Substituting Eqs. (1) and (3) into Eq. (4) and further assuming ergodicity of spike trains, the KL divergence can be expressed as

D_κ(λ(t)||µ) = lim_{n→∞} 1/(t_n − t_0) log [ p_κ({t_i}|{λ(t)}) / p_κ({t_i}|µ) ]
= lim_{n→∞} 1/(t_n − t_0) ∑_{i=1}^{n} [ log λ(t_i) + log f_κ(Λ(t_i) − Λ(t_{i−1})) − log µ − log f_κ(µ(t_i − t_{i−1})) ].  (5)

This expression can be used for computing the KL divergence numerically by simulating a large number of spikes n ≫ 1.

Three ISI distributions and their KL divergence

In order to examine the behavior of the KL divergence, we use three specific ISI distributions for f_κ(y) (the gamma, inverse Gaussian (IG) and lognormal distributions), which have been used to describe the stochastic nature of ISIs [9, 10, 14]. These distributions and their coefficients of variation (C_V = √Var(X)/E(X)) are given by

gamma: f_κ(y) = κ^κ y^{κ−1} e^{−κy} / Γ(κ),  C_V = 1/√κ,  (6)
IG: f_κ(y) = √(κ/(2πy³)) exp[ −κ(y − 1)² / (2y) ],  C_V = 1/√κ,  (7)
lognormal: f_κ(y) = 1/(y√(2πκ)) exp[ −(log y + κ/2)² / (2κ) ],  C_V = √(e^κ − 1),  (8)

where Γ(κ) = ∫_0^∞ x^{κ−1} e^{−x} dx is the gamma function. Figure 1a illustrates the shape of the three distributions for three different values of C_V. The KL divergence for the three models is analytically solvable when the rate fluctuation has a long time scale relative to the mean ISI. Here, we show the derivation for the gamma distribution. (The derivations for the IG and lognormal distributions are essentially the same.) Inserting Eq.
(6) into Eq. (5) leads to

D_κ(λ(t)||µ) = lim_{n→∞} 1/(t_n − t_0) ∑_{i=1}^{n} [ log λ(t_i) + (κ − 1) log[Λ(t_i) − Λ(t_{i−1})] − (κ − 1) log(t_i − t_{i−1}) ] − κµ log µ,  (9)

where we used 1/(t_n − t_0) ∫_{t_0}^{t_n} λ(t) dt → µ and n/(t_n − t_0) → µ as n → ∞. By introducing the "averaged" firing rate in the ith ISI, λ̄_i := [Λ(t_i) − Λ(t_{i−1})] / (t_i − t_{i−1}), we obtain log[Λ(t_i) − Λ(t_{i−1})] = log λ̄_i + log(t_i − t_{i−1}). Assuming that the time scale of the rate fluctuation is longer than the mean ISI, so that λ̄_i is approximated by λ(t_i), Eq. (9) becomes

D_κ(λ(t)||µ) = κ lim_{n→∞} 1/(t_n − t_0) ∑_{i=1}^{n} log λ(t_i) − κµ log µ = κ [ lim_{T→∞} (1/T) ∫_0^T ∑_i δ(t − t_i) log λ(t) dt − µ log µ ].  (10)

The fluctuation in the apparent spike count is given by the variance-to-mean ratio, as represented by the Fano factor [8]. For a renewal process in which ISIs are drawn from a given distribution function, it is proven that the Fano factor is related to the ISI variability by F ≈ C_V² [4]. Thus, on long time scales over which the serial correlation of spikes is negligible, the spike train in Eq. (10) can be approximated as

∑_{i=1}^{n} δ(t − t_i) ≈ λ(t) + √(λ(t)/κ) ξ(t),  (11)

where ξ(t) is a fluctuating process such that ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = δ(t − t′). Using this, the first term on the rhs of (10) can be evaluated as

lim_{T→∞} (1/T) ∫_0^T λ(t) log λ(t) dt + lim_{T→∞} (1/T) ∫_0^T √(λ(t)/κ) log λ(t) ξ(t) dt = ⟨λ log λ⟩_λ,  (12)

where the second term on the lhs vanishes due to a property of stochastic integrals. Therefore, the KL divergence for the gamma distribution is obtained as

D_κ(λ(t)||µ) = κ [ ⟨λ log λ⟩_λ − µ log µ ].  (13)

In the same way, the KL divergences for the IG and lognormal distributions are, respectively, derived as

D_κ(λ(t)||µ) = (µ/2) log µ − (1/2) ⟨λ log λ⟩_λ + [(κ + 1)/(2µ)] ⟨(λ − µ)²⟩_λ,  (14)

and

D_κ(λ(t)||µ) = [µ/(2κ)] (log µ)² − (log µ / κ) ⟨λ log λ⟩_λ + [1/(2κ)] ⟨λ (log λ)²⟩_λ.  (15)

See the supplementary material for the details of their derivations.

Results

We compute the KL divergence for the three models, in which the rate fluctuates according to the Ornstein-Uhlenbeck process.
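Eqs. (13)-(15) depend on λ(t) only through stationary averages over p(λ), so they can be evaluated exactly for any simple rate distribution. A sketch using a two-point distribution for λ (our illustrative choice, not from the paper), with µ = 1 and C_V = 1 for all three ISI models:

```python
import numpy as np

# Two-point stationary rate distribution (illustrative assumption)
lam = np.array([0.7, 1.3])          # rate values, mean mu = 1
p = np.array([0.5, 0.5])            # probabilities

mu = float(p @ lam)                            # <lambda>
m1 = float(p @ (lam * np.log(lam)))            # <lambda log lambda>
m2 = float(p @ (lam * np.log(lam) ** 2))       # <lambda (log lambda)^2>
var = float(p @ (lam - mu) ** 2)               # <(lambda - mu)^2>

CV = 1.0
k_gam = 1.0 / CV**2                  # gamma and IG: C_V = 1/sqrt(kappa)
k_ig = 1.0 / CV**2
k_ln = np.log(1.0 + CV**2)           # lognormal: C_V = sqrt(e^kappa - 1)

D_gam = k_gam * (m1 - mu * np.log(mu))                               # Eq. (13)
D_ig = mu / 2 * np.log(mu) - m1 / 2 + (k_ig + 1) / (2 * mu) * var    # Eq. (14)
D_ln = (mu / (2 * k_ln) * np.log(mu) ** 2
        - np.log(mu) / k_ln * m1 + m2 / (2 * k_ln))                  # Eq. (15)

print(D_ig, D_ln, D_gam)   # IG largest, then lognormal, then gamma
```

Even with this toy rate distribution, the ordering reported in Figure 1 (IG above lognormal above gamma at equal C_V) already emerges from the closed-form expressions.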
Formally, the rate process is given by λ(t) = [x(t)]_+, where [·]_+ is the rectification function

[x]_+ = x if x > 0, and 0 otherwise,  (16)

and x(t) is derived from the Ornstein-Uhlenbeck process

dx(t)/dt = −(x(t) − µ)/τ + σ √(2/τ) ξ(t),  (17)

where ξ(t) is Gaussian white noise. Figure 1b depicts the KL divergence as a function of σ for C_V = 0.6, 1 and 1.5. The analytical results (the solid lines) are in good agreement with the numerical results (the error bars). The KL divergence for the three models increases as σ is increased and as C_V is decreased, which is rather obvious, since larger σ and smaller C_V imply lower noise entropy of spike trains. One nontrivial result is that, even if the three models share the same values of σ and C_V, the KL divergence of each model significantly differs from that of the others: the IG distribution achieves the largest KL divergence, followed by the lognormal and gamma distributions. The difference in the KL divergence among the three models becomes larger as C_V grows larger. Since the three models share the same firing rate λ(t) and C_V, it can be concluded that the higher-order (beyond second-order) moments of the ISI distributions strongly affect the KL divergence. In order to confirm this result for another rate process, we examine a sinusoidal rate process, λ(t) = µ + σ sin(t/τ), and observe the same behavior as for the Ornstein-Uhlenbeck rate process (Figure 1c).

3 Decoding fluctuating rates using the empirical Bayes method

In this section, we show that the KL divergence (4) determines the lower bound of the degree of rate fluctuation below which the empirical Bayes estimator cannot detect rate fluctuations.

The empirical Bayes method

We consider decoding a fluctuating rate λ(t) from a given spike train {t_i} := {t_1, . . . , t_n} in an observation interval [0, T] by the empirical Bayes method.
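The synthetic spike trains analyzed in this section are generated exactly as in Section 2: a rectified Ornstein-Uhlenbeck rate (Eqs. (16)-(17)) is integrated to Λ(t), and unit-mean gamma ISIs are mapped through Λ^{−1}. A minimal sketch (µ = 1 and τ = 10 follow the text; the step size, horizon, and σ, κ values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau, sigma, kappa = 1.0, 10.0, 0.3, 4.0   # kappa = 4 -> C_V = 0.5
dt, T = 0.01, 2000.0                          # grid step / horizon (our choice)
n = int(T / dt)

# Eq. (17): Ornstein-Uhlenbeck latent process via Euler-Maruyama
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = mu
for i in range(1, n):
    x[i] = (x[i-1] - (x[i-1] - mu) / tau * dt
            + sigma * np.sqrt(2.0 / tau) * np.sqrt(dt) * noise[i])
lam = np.maximum(x, 1e-12)   # Eq. (16): rectification (eps keeps Lambda increasing)

# Step (ii): cumulative intensity Lambda(t) on the grid
Lam = np.concatenate(([0.0], np.cumsum(lam) * dt))
t_grid = np.arange(n + 1) * dt

# Step (i): unit-mean gamma ISIs -> unit-rate spike times s_i
s = np.cumsum(rng.gamma(shape=kappa, scale=1.0 / kappa, size=5000))
s = s[s < Lam[-1]]

# t_i = Lambda^{-1}(s_i), by linear interpolation on the grid
spikes = np.interp(s, Lam, t_grid)

print(len(spikes) / T)   # empirical mean rate, close to mu = 1
```

The time-rescaling step preserves the ISI shape (and hence C_V) while imposing the instantaneous rate λ(t), which is exactly the property the decoder below is tested against.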
[Figure 1: (a) The gamma (blue), IG (green) and lognormal (red) ISI distribution functions for C_V = 0.6, 1 and 1.5. (b) The KL divergence as a function of σ for C_V = 0.6, 1 and 1.5, when the rate fluctuates according to the Ornstein-Uhlenbeck process (17) with µ = 1 and τ = 10. The blue, green and red curves indicate the KL divergence for the gamma, IG and lognormal distributions, respectively. The lines represent the theoretical values obtained from Eqs. (13), (14) and (15), and the error bars represent the average and standard deviation numerically computed according to Eq. (5) with n = 50,000 and 10 trials. (c) The KL divergence for the sinusoidally modulated rate, λ(t) = µ + σ sin(t/τ), with µ = 1 and τ = 10.]

Let x(t) ∈ R be a latent variable that is transformed from λ(t) via the log-link function x(t) = log λ(t). For the inference of λ(t) from {t_i}, we use a prior distribution of x(t) in which a large gradient of x(t) is penalized:

p_γ({x(t)}) ∝ exp[ −(1/(2γ²)) ∫_0^T (dx(t)/dt)² dt ],  (18)

where the hyperparameter γ controls the roughness of the latent process x(t): with a small γ, the model favors a nearly constant latent process, and vice versa. By inverting the conditional probability distribution with Bayes' theorem, the posterior distribution of {x(t)} is obtained as

p_{κ,γ}({x(t)}|{t_i}) = p_κ({t_i}|{x(t)}) p_γ({x(t)}) / p_{κ,γ}({t_i}).  (19)

The hyperparameters γ and κ, which represent the roughness of the latent process and the shape of the ISI density function, can be determined by maximizing the marginal likelihood [16], defined by

p_{κ,γ}({t_i}) = ∫ p_κ({t_i}|{x(t)}) p_γ({x(t)}) D{x(t)},  (20)

where ∫ D{x(t)} represents the integration over all possible latent process paths.
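In practice the posterior (19) is evaluated on a discrete path with one latent value per ISI (the discretization is spelled out in the Appendix). A sketch of the unnormalized log posterior for the gamma ISI model, combining the gamma density (6) with the discretized random-walk prior; the function name and interface are our own:

```python
import numpy as np
from math import lgamma, pi

def log_posterior(x, t, kappa, gam):
    """Unnormalized log posterior of the latent path x_i = log(lambda_i),
    given spike times t, for the gamma ISI model (a sketch; one latent
    value per ISI, diffuse prior on x_1)."""
    T = np.diff(t)                      # ISIs T_i = t_i - t_{i-1}
    lam = np.exp(x)
    # likelihood: prod_i lam_i * f_kappa(lam_i * T_i), gamma density (6)
    ll = np.sum(np.log(lam) + kappa * np.log(kappa)
                + (kappa - 1.0) * np.log(lam * T)
                - kappa * lam * T - lgamma(kappa))
    # prior: Gaussian increments with variance gam^2 (T_i + T_{i-1}) / 2
    v = gam**2 * (T[1:] + T[:-1])
    lp = np.sum(-(x[1:] - x[:-1])**2 / v - 0.5 * np.log(pi * v))
    return ll + lp
```

For a regular spike train generated at unit rate, a constant path at the true log-rate (x ≡ 0) scores higher than a shifted constant path, as expected.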
Under a set of hyperparameters ˆγ and ˆκ determined by the marginal likelihood maximization, we can determine the maximum a posteriori (MAP) estimate of the latent process, ˆx(t). The method for implementing the empirical Bayes analysis is summarized in the Appendix.

Detectability of rate fluctuations

We first examine the gamma distribution (6). For synthetic spike trains (n = 1,000) generated by the time-rescaled renewal process with the gamma ISI distribution, in which the rate fluctuates according to the Ornstein-Uhlenbeck process (17) with µ = 1 and τ = 10, we attempt to decode λ(t) using the empirical Bayes decoder. Depending on the amplitude of the rate fluctuation σ and the C_V of f_κ(y), the empirical Bayes decoder provides two qualitatively distinct rate estimations: (I) a fluctuating rate estimation (ˆγ > 0) for large σ and small C_V, or (II) a constant rate estimation (ˆγ = 0) for small σ and large C_V (Figure 2a). When σ is increased or C_V is decreased, the empirical Bayes estimator exhibits a phase transition corresponding to the switch of the most likely rate estimation from (II) to (I) (Figure 2b). Note that below the critical point of this phase transition, the empirical Bayes method provides a constant rate as the most likely estimation even if the true rate process fluctuates. The critical point thus gives the lower bound for the degree of detectable rate fluctuations. It is also confirmed, using numerical simulations, that the phase transition occurs not only with the gamma distribution, but also with the IG and lognormal distributions (Figure 2c,d). For the time-rescaled renewal process with the gamma ISI distribution, we could analytically derive the formula that the lower bound satisfies:

D_κ(λ(t)||µ) = φ(0) / [ 4 max_η ∫_0^∞ φ(u) e^{−ηu} du ],  (21)

where φ(u) is the correlation function of λ(t). (See the supplementary material for the derivation.) Eq.
(21) is in good agreement with the simulation results over the entire parameter space (the solid line in Figure 2a). The expression in Eq. (21) itself does not depend on the gamma distribution. We investigated whether this formula is also applicable to the IG and lognormal distributions, and found that the theoretical lower bounds (the solid lines in Figure 2c,d) indeed correspond to those obtained by the numerical simulations; this result implies that Eq. (21) is applicable to more general time-rescaled renewal processes. Figure 2e compares the lower bounds among the three distributions. The lower bound of the IG distribution is the lowest, followed by the lognormal and gamma distributions, which is expected from the result in Figure 1b, as the lower bound is uniquely determined by the KL divergence via Eq. (21). We also examined the sinusoidally modulated rate, λ(t) = µ + σ sin(t/τ); the qualitative result remains the same (Figure 2f-h).

4 Discussion

In this study, we first examined the extent to which spike trains derived from time-rescaled renewal processes encode information on fluctuating rates. The encoding efficiency is measured by the KL divergence between two renewal processes with fluctuating and constant rates. We showed that the KL divergence significantly differs among the gamma, IG and lognormal ISI distributions, even if these three processes share the same rate fluctuation λ(t) and C_V (Figure 1b). This suggests that the higher-order moments of ISIs play an important role in encoding information on fluctuating rates. Among the three distributions, the IG distribution achieves the largest KL divergence, followed by the lognormal and gamma distributions. A similar result has been reported for stationary renewal processes [12]. Since the KL divergence gives the distance between two probability distributions, Eq. (4) is naturally related to the ability to discriminate between a fluctuating rate and a constant rate.
In fact, the lower bound of the degree of rate fluctuation, below which the empirical Bayes decoder cannot discriminate the underlying fluctuating rate from a constant rate, satisfies the formula (21). There commonly exists a lower bound below which the underlying rate fluctuations are undetectable, not only in the empirical Bayes method with the above prior distribution (18), but also with other prior distributions, and in other rate estimators such as a time histogram. The lower bound in these methods has been derived for inhomogeneous Poisson processes as τσ²/µ ∼ O(1), where τ, σ and µ are the time scale, amplitude and mean of the rate fluctuation, respectively [23]. Thus, Eq. (21), or equivalently τD_κ(λ(t)||µ) ∼ O(1), is regarded as a generalization to non-Poisson processes. Here, the crucial step for this generalization is incorporating the KL divergence into the formula. Note that the formula (21) was derived analytically under the assumption of the gamma ISI distribution, and then was shown to hold for the IG and lognormal ISI distributions with numerical simulations. The analytical tractability of the gamma family lies in the fact that it is the only scale family that admits the mean as a sufficient statistic. We conjecture, from our results with the three specific models, that Eq. (21) is applicable to more general time-rescaled renewal processes (even to "non-renewal" processes), which is open to future research.

Figure 2: (a) Left: the phase diagram for sequences generated by the time-rescaled renewal process with the gamma ISI distribution.
The ordinate represents the amplitude of the rate fluctuation σ, and the abscissa represents the C_V of the gamma ISI distribution. The dots represent the results of numerical simulations in which the empirical Bayes decoder provides a fluctuating rate estimation (ˆγ > 0). Each dot is plotted if ˆγ > 0 in more than 20 out of 40 identical trials. The solid line represents the theoretical lower bound obtained by the formula (21). Right: raster plots of sample spike trains and the estimated rates. The dotted lines and the solid lines represent the underlying rates and the estimated rates, respectively. The parameters (C_V, σ) of the top (ˆγ > 0) and bottom (ˆγ = 0) panels are (0.6, 0.3) and (1.5, 0.15), respectively. (b) The optimal hyperparameter ˆγ as a function of σ for C_V = 0.6. The solid line represents the theoretical value, and the error bars represent the average and standard deviation of ˆγ determined by applying the empirical Bayes algorithm to 40 trials. (c, d) The phase diagrams for the IG and lognormal ISI distributions. (e) Comparison of the lower bounds among the three models. (f-h) The phase diagrams for the gamma, IG and lognormal ISI distributions, when the rate process is given by λ(t) = µ + σ sin(t/τ) with µ = 1 and τ = 10.

A recent non-stationary analysis has revealed that individual neurons in the cortex signal with non-Poisson firing, which has empirically been characterized by measures based on the second-order moment of ISIs, such as C_V and L_V [21, 22]. Our results, however, suggest that it may be important to take into account the higher-order moments of ISIs when characterizing the "irregularity" of cortical firing, in order to gain information on fluctuating firing rates. It has also been demonstrated that using non-Poisson spiking models enhances the performance of neural decoding [2, 11, 19]. Our results provide theoretical support for this as well.
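The bound (21) is easy to evaluate for the Ornstein-Uhlenbeck rate used in the experiments, whose correlation function is φ(u) = σ² e^{−u/τ} (ignoring rectification). A sketch; the small-fluctuation approximation ⟨λ log λ⟩ − µ log µ ≈ σ²/(2µ) used for the critical σ is our own, not from the paper:

```python
import numpy as np

tau, mu = 10.0, 1.0
sigma = 0.3
phi = lambda u: sigma**2 * np.exp(-u / tau)   # OU correlation function

# max over eta >= 0 of the Laplace transform of phi (numerical sum)
u = np.arange(0.0, 200.0, 0.01)
etas = np.linspace(0.0, 2.0, 201)
L = np.array([(phi(u) * np.exp(-e * u)).sum() * 0.01 for e in etas])

# Eq. (21): the detection threshold on the KL divergence. For the OU
# correlation the maximum is attained at eta = 0 (value sigma^2 * tau),
# so the threshold reduces to 1/(4 tau), independent of sigma.
D_bound = phi(0.0) / (4.0 * L.max())
print(D_bound, 1.0 / (4.0 * tau))

# Critical sigma for the gamma model via Eq. (13) and the quadratic
# approximation <lambda log lambda> - mu log mu ~ sigma^2/(2 mu):
CV = 0.6
kappa = 1.0 / CV**2
sigma_c = np.sqrt(mu / (2.0 * kappa * tau))
print(sigma_c)   # roughly 0.13, of the same order as the Fig. 2a boundary
```

This also makes the statement τD_κ(λ(t)||µ) ∼ O(1) concrete: for the OU rate the right-hand side of (21) is exactly 1/(4τ).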
Appendix: Implementation of the empirical Bayes method

Discretization

To construct a practical algorithm for performing empirical Bayes decoding, we first divide the time axis into a set of intervals (t_{i−1}, t_i] (i = 1, . . . , n). We assume that the firing rate within each interval (t_{i−1}, t_i] does not change drastically (which is a reasonable assumption in practice), so that it can be approximated by a constant value λ_i. Letting T_i = t_i − t_{i−1} be the ith ISI, the probability density of {T_i} ≡ {T_1, T_2, . . . , T_n}, given the rate process {λ_i} ≡ {λ_1, λ_2, . . . , λ_n}, is obtained from Eq. (1) as

p_κ({T_i}|{λ_i}) = ∏_{i=1}^{n} λ_i f_κ(λ_i T_i).

The rate process is linked with the latent process via x_i = log λ_i. With the same time-discretization, the prior distribution of the latent process {x_i} ≡ {x_1, x_2, . . . , x_n}, which corresponds to Eq. (18), is derived as p_γ({x_i}) = p(x_1) ∏_{i=2}^{n} p_γ(x_i|x_{i−1}), where

p_γ(x_i|x_{i−1}) = 1/√(πγ²(T_i + T_{i−1})) exp[ −(x_i − x_{i−1})² / (γ²(T_i + T_{i−1})) ],  (22)

and p(x_1) is the probability density function of the initial latent rate variable. p_κ({T_i}|{λ_i}) and p_γ({x_i}) define a discrete-time state-space model. We note that this provides a good approximation to the original continuous-time model if the time scale of the rate fluctuation is larger than the mean ISI.

EM algorithm

We assume that the ISI density function can be rewritten in the form of an exponential family distribution with respect to the shape parameter κ:

p_κ(T_i|φ_i) := λ_i f_κ(λ_i T_i) = exp[κ S(T_i, φ_i) − ϕ(κ) + c(T_i, φ_i)],  (23)

with an appropriate parameter representation φ_i = φ(λ_i, κ). Here, κ is the natural parameter of the exponential family and S(T_i, φ_i) is its sufficient statistic. Suppose that the potential ϕ(κ) is a convex function. The expectation of S(T_i, φ_i) is then given by

η = ∫ S(T_i, φ_i) p_κ(T_i|φ_i) dT_i = dϕ(κ)/dκ.  (24)

Since ϕ(κ) is convex, there is a one-to-one correspondence between κ and η, and thus η provides an alternative parametrization to κ [1].
The gamma (6), IG (7) and lognormal (8) distributions are included in this family. With the parametrization η, the EM algorithm for the state-space model is derived as follows. Suppose that we have estimates ˆη_(m) and ˆγ_(m) at the mth iteration. The estimates at the (m+1)th iteration are given by

ˆη_(m+1) = (1/n) ∑_{i=1}^{n} ⟨S(T_i, φ(x_i))⟩_(m),  (25)

and

ˆγ²_(m+1) = 2/(n − 1) ∑_{i=2}^{n} ⟨(x_i − x_{i−1})²⟩_(m) / (T_i + T_{i−1}),  (26)

where ⟨·⟩_(m) denotes the expectation with respect to the posterior probability of {x_i}, given {T_i}, ˆη_(m) and ˆγ_(m). The posterior probability is computed by the Laplace approximation, introduced below. We update ˆη and ˆγ until the estimates converge. The estimate of κ is then obtained from ˆη via Eq. (24).

Laplace approximation

We employ Laplace's method to compute an approximate posterior distribution of {x_i}. Let x = (x_1, x_2, . . . , x_n)^t be the column vector of the latent process, (·)^t being the transpose of a vector. The MAP estimate of the latent process is obtained by maximizing the log posterior distribution

l(x) = log p(x_1) + ∑_{i=2}^{n} log p_γ(x_i|x_{i−1}) + ∑_{i=1}^{n} log p_κ(T_i|x_i) + const.,  (27)

with respect to x. We use a diffuse prior for p(x_1) so that its contribution vanishes [7]. If p_γ(x_i|x_{i−1}) is log-concave in x_i and x_{i−1}, and p_κ(T_i|x_i) is also log-concave in x_i, computing the MAP estimate is a concave optimization problem [18], which can be solved efficiently by a Newton method. Due to the Markovian structure of the state-space model, the Hessian matrix J(x) ≡ ∇∇_x l(x) is tridiagonal, which allows us to compute the Newton step in O(n) time [13]. Let ˆx denote the MAP estimate. The posterior probability is then approximated by a Gaussian whose mean vector and covariance matrix are given by ˆx and −J(ˆx)^{−1}, respectively.

Acknowledgments

This work was supported by JSPS KAKENHI Grant Number 24700287.

References

[1] S. Amari and H. Nagaoka. Methods of Information Geometry.
Oxford University Press, 2000.
[2] R. Barbieri, M. C. Quirk, L. M. Frank, M. A. Wilson, and E. N. Brown. Construction and analysis of non-Poisson stimulus-response models of neural spiking activity. Journal of Neuroscience Methods, 105:25–37, 2001.
[3] N. Brenner, S. P. Strong, R. Koberle, and W. Bialek. Synergy in a neural code. Neural Computation, 12:1531–1552, 2000.
[4] D. R. Cox. Renewal Theory. Chapman and Hall, 1962.
[5] J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Inferring neural firing rates from spike trains using Gaussian processes. In Neural Information Processing Systems, volume 20, pages 329–336, 2008.
[6] R. M. Davies, G. L. Gerstein, and S. N. Baker. Measurement of time-dependent changes in the irregularity of neural spiking. Journal of Neurophysiology, 96:906–918, 2006.
[7] J. Durbin and S. J. Koopman. Time Series Analysis by State Space Methods. Oxford University Press, 2001.
[8] U. Fano. Ionization yield of radiations. II. The fluctuations of the number of ions. Physical Review, 72:26–29, 1947.
[9] G. L. Gerstein and B. Mandelbrot. Random walk models for the spike activity of a single neuron. Biophysical Journal, 4:41–68, 1964.
[10] S. Ikeda and J. H. Manton. Capacity of a single spiking neuron channel. Neural Computation, 21:1714–1748, 2009.
[11] A. L. Jacobs, G. Fridman, R. M. Douglas, N. M. Alam, P. E. Latham, G. T. Prusky, and S. Nirenberg. Ruling out and ruling in neural codes. Proceedings of the National Academy of Sciences, 106:5936–5941, 2009.
[12] K. Kang and S. Amari. Discrimination with spike times and ISI distributions. Neural Computation, 20:1411–1426, 2008.
[13] S. Koyama and L. Paninski. Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models. Journal of Computational Neuroscience, 29:89–105, 2009.
[14] M. W. Levine. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.
Biological Cybernetics, 65:459–467, 1991.
[15] B. N. Lundstrom and A. L. Fairhall. Decoding stimulus variance from a distributional neural code of interspike intervals. Journal of Neuroscience, 26:9030–9037, 2006.
[16] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4:415–447, 1992.
[17] T. Omi and S. Shinomoto. Optimizing time histograms for non-Poisson spike trains. Neural Computation, 23:3125–3144, 2011.
[18] L. Paninski. Log-concavity results on Gaussian process methods for supervised and unsupervised learning. In Neural Information Processing Systems, volume 17, pages 1025–1032, 2005.
[19] J. W. Pillow, L. Paninski, V. J. Uzzell, E. P. Simoncelli, and E. J. Chichilnisky. Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. Journal of Neuroscience, 23:11003–11013, 2005.
[20] M. N. Shadlen and W. T. Newsome. The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. Journal of Neuroscience, 18:3870–3896, 1998.
[21] S. Shinomoto, H. Kim, T. Shimokawa, N. Matsuno, S. Funahashi, K. Shima, I. Fujita, H. Tamura, T. Doi, K. Kawano, N. Inaba, K. Fukushima, S. Kurkin, K. Kurata, M. Taira, K. Tsutsui, H. Komatsu, T. Ogawa, K. Koida, J. Tanji, and K. Toyama. Relating neuronal firing patterns to functional differentiation of cerebral cortex. PLoS Computational Biology, 5:e1000433, 2009.
[22] S. Shinomoto, K. Shima, and J. Tanji. Differences in spiking patterns among cortical neurons. Neural Computation, 15:2823–2842, 2003.
[23] T. Shintani and S. Shinomoto. Detection limit for rate fluctuations in inhomogeneous Poisson processes. Physical Review E, 85:041139, 2012.
[24] W. R. Softky and C. Koch. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience, 13:334–350, 1993.
|
2012
|
144
|
4,502
|
Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison

Tianbao Yang†, Yu-Feng Li‡, Mehrdad Mahdavi♮, Rong Jin♮, Zhi-Hua Zhou‡
†Machine Learning Lab, GE Global Research, San Ramon, CA 94583
♮Michigan State University, East Lansing, MI 48824
‡National Key Laboratory for Novel Software Technology, Nanjing University, 210023, China
tyang@ge.com,mahdavim,rongjin@msu.edu,liyf,zhouzh@lamda.nju.edu.cn

Abstract

Both random Fourier features and the Nyström method have been successfully applied to efficient kernel learning. In this work, we investigate the fundamental difference between these two approaches, and how the difference could affect their generalization performances. Unlike approaches based on random Fourier features, where the basis functions (i.e., cosine and sine functions) are sampled from a distribution independent of the training data, basis functions used by the Nyström method are randomly sampled from the training examples and are therefore data dependent. By exploring this difference, we show that when there is a large gap in the eigen-spectrum of the kernel matrix, approaches based on the Nyström method can yield an impressively better generalization error bound than the random Fourier features based approach. We empirically verify our theoretical findings on a wide range of large data sets.

1 Introduction

Kernel methods [16], such as support vector machines, are among the most effective learning methods. These methods project data points into a high-dimensional or even infinite-dimensional feature space and find the optimal hyperplane in that feature space with strong generalization performance. One limitation of kernel methods is their high computational cost, which is at least quadratic in the number of training examples, due to the calculation of the kernel matrix.
Although low-rank decomposition approaches (e.g., incomplete Cholesky decomposition [3]) have been used to alleviate the computational challenge of kernel methods, they still require computing the kernel matrix. Other approaches such as online learning [9] and budget learning [7] have also been developed for large-scale kernel learning, but they tend to yield worse performance than batch learning. To avoid computing the kernel matrix, one common approach is to approximate a kernel learning problem with a linear prediction problem. This is often achieved by generating a vector representation of data that approximates the kernel similarity between any two data points. The most well-known approaches in this category are random Fourier features [13, 14] and the Nyström method [20, 8]. Although both approaches have been found effective, it is not clear what their essential differences are, and which method is preferable in which situations. The objective of this work is to understand the difference between these two approaches, both theoretically and empirically.

The theoretical foundation for random Fourier features is that a shift-invariant kernel is the Fourier transform of a non-negative measure [15]. Using this property, the authors of [13] proposed to represent each data point by random Fourier features. The analysis in [14] shows that the generalization error bound for kernel learning based on random Fourier features is O(N^{-1/2} + m^{-1/2}), where N is the number of training examples and m is the number of sampled Fourier components. An alternative approach for large-scale kernel classification is the Nyström method [20, 8], which approximates the kernel matrix by a low-rank matrix. It randomly samples a subset of training examples and computes a kernel matrix K̂ for the random samples. It then represents each data point by a vector based on its kernel similarity to the random samples and the sampled kernel matrix K̂.
Most analysis of the Nyström method follows [8] and bounds the error in approximating the kernel matrix. According to [8], the approximation error of the Nyström method, measured in spectral norm¹, is O(m^{-1/2}), where m is the number of sampled training examples. Using the arguments in [6], we expect an additional error of O(m^{-1/2}) in the generalization performance caused by the approximation of the Nyström method, similar to random Fourier features.

Contributions. In this work, we first establish a unified framework for both methods from the viewpoint of functional approximation. This is important because random Fourier features and the Nyström method address large-scale kernel learning very differently: random Fourier features aim to approximate the kernel function directly, while the Nyström method is designed to approximate the kernel matrix. The unified framework allows us to see a fundamental difference between the two methods: the basis functions used by random Fourier features are randomly sampled from a distribution independent of the training data, leading to a data-independent vector representation; in contrast, the Nyström method randomly selects a subset of training examples to form its basis functions, leading to a data-dependent vector representation. By exploring this difference, we show that the additional error caused by the Nyström method in the generalization performance can be improved to O(1/m) when there is a large gap in the eigen-spectrum of the kernel matrix. Empirical studies on a synthetic data set and a broad range of real data sets verify our analysis.

2 A Unified Framework for Approximate Large-Scale Kernel Learning

Let D = {(x_1, y_1), ..., (x_N, y_N)} be a collection of N training examples, where x_i ∈ X ⊆ R^d and y_i ∈ Y. Let κ(·, ·) be a kernel function, let H_κ denote the endowed Reproducing Kernel Hilbert Space, and let K = [κ(x_i, x_j)]_{N×N} be the kernel matrix for the samples in D. Without loss of generality, we assume κ(x, x) ≤ 1, ∀x ∈ X.
Let (λ_i, v_i), i = 1, ..., N, be the eigenvalues and eigenvectors of K ranked in descending order of the eigenvalues, and let V = [V_{ij}]_{N×N} = (v_1, ..., v_N) denote the eigenvector matrix. For the Nyström method, let D̂ = {x̂_1, ..., x̂_m} denote the randomly sampled examples and K̂ = [κ(x̂_i, x̂_j)]_{m×m} the corresponding kernel matrix. Similarly, let {(λ̂_i, v̂_i), i ∈ [m]} denote the eigenpairs of K̂ ranked in descending order of the eigenvalues, and let V̂ = [V̂_{ij}]_{m×m} = (v̂_1, ..., v̂_m). We introduce two linear operators induced by the examples in D and D̂, i.e.,

L_N[f] = (1/N) ∑_{i=1}^N κ(x_i, ·) f(x_i),   L_m[f] = (1/m) ∑_{i=1}^m κ(x̂_i, ·) f(x̂_i).   (1)

It can be shown that both L_N and L_m are self-adjoint operators. According to [18], the eigenvalues of L_N and L_m are λ_i/N, i ∈ [N], and λ̂_i/m, i ∈ [m], respectively, and their corresponding normalized eigenfunctions φ_j, j ∈ [N], and φ̂_j, j ∈ [m], are given by

φ_j(·) = (1/√λ_j) ∑_{i=1}^N V_{i,j} κ(x_i, ·), j ∈ [N],   φ̂_j(·) = (1/√λ̂_j) ∑_{i=1}^m V̂_{i,j} κ(x̂_i, ·), j ∈ [m].   (2)

To make our discussion concrete, we focus on the RBF kernel², i.e., κ(x, x̄) = exp(−∥x − x̄∥²₂ / (2σ²)), whose inverse Fourier transform is given by a Gaussian distribution p(u) = N(0, σ^{−2} I) [15]. Our goal is to efficiently learn a kernel prediction function by solving the following optimization problem:

min_{f ∈ H_D} (λ/2) ∥f∥²_{H_κ} + (1/N) ∑_{i=1}^N ℓ(f(x_i), y_i),   (3)

where H_D = span(κ(x_1, ·), ..., κ(x_N, ·)) is the span over all the training examples³, and ℓ(z, y) is a convex loss function with respect to z. To facilitate our analysis, we assume max_{y∈Y} ℓ(0, y) ≤ 1 and that ℓ(z, y) has a bounded gradient |∇_z ℓ(z, y)| ≤ C. The high computational cost of kernel learning arises from the fact that we have to search for an optimal classifier f(·) in a large space H_D.

¹We choose the bound based on spectral norm according to the discussion in [6].
²The improved bound obtained in the paper for the Nyström method is valid for any kernel matrix that satisfies the eigengap condition.
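The RBF kernel matrix K assumed in this setup can be made concrete in a few lines. The sketch below is illustrative only; the function name and test data are our own, not from the paper:

```python
import numpy as np

def rbf_kernel_matrix(X, Y, sigma=1.0):
    """Compute K[i, j] = exp(-||x_i - y_j||_2^2 / (2 * sigma^2))."""
    sq_dist = (np.sum(X ** 2, axis=1)[:, None]
               + np.sum(Y ** 2, axis=1)[None, :]
               - 2.0 * X @ Y.T)
    # clip tiny negative round-off before exponentiating
    return np.exp(-np.maximum(sq_dist, 0.0) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
K = rbf_kernel_matrix(X, X)  # kappa(x, x) = 1, so diag(K) is all ones
```

The resulting matrix is symmetric positive semi-definite with unit diagonal, matching the normalization κ(x, x) ≤ 1 assumed above.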
Given this observation, to alleviate the computational cost of kernel classification, we can reduce the space H_D to a smaller space H_a, and only search for the solution f(·) ∈ H_a. The main challenge is how to construct such a space H_a. On the one hand, H_a should be small enough to make efficient computation possible; on the other hand, H_a should be rich enough to provide a good approximation for most bounded functions in H_D. Below we show that the difference between random Fourier features and the Nyström method lies in the construction of the approximate space H_a. For each method, we begin with a description of the vector representation of data, and then connect the vector representation to the approximate large-scale kernel machine by functional approximation.

Random Fourier Features. The random Fourier features are constructed by first sampling Fourier components u_1, ..., u_m from p(u), projecting each example x onto u_1, ..., u_m separately, and then passing the projections through sine and cosine functions, i.e., z_f(x) = (sin(u_1^⊤ x), cos(u_1^⊤ x), ..., sin(u_m^⊤ x), cos(u_m^⊤ x)). Given the random Fourier features, we then learn a linear machine f(x) = w^⊤ z_f(x) by solving the following optimization problem:

min_{w ∈ R^{2m}} (λ/2) ∥w∥²₂ + (1/N) ∑_{i=1}^N ℓ(w^⊤ z_f(x_i), y_i).   (4)

To connect the linear machine (4) to the kernel machine in (3) by functional approximation, we construct a functional space H_a^f = span(s_1(·), c_1(·), ..., s_m(·), c_m(·)), where s_k(x) = sin(u_k^⊤ x) and c_k(x) = cos(u_k^⊤ x). If we approximate H_D in (3) by H_a^f, we have

min_{f ∈ H_a^f} (λ/2) ∥f∥²_{H_κ} + (1/N) ∑_{i=1}^N ℓ(f(x_i), y_i).   (5)

The following proposition connects the approximate kernel machine in (5) to the linear machine in (4). Proofs can be found in the supplementary file.

Proposition 1. The approximate kernel machine in (5) is equivalent to the following linear machine:

min_{w ∈ R^{2m}} (λ/2) w^⊤(w ◦ γ) + (1/N) ∑_{i=1}^N ℓ(w^⊤ z_f(x_i), y_i),   (6)

where γ = (γ_1^s, γ_1^c, ..., γ_m^s, γ_m^c)^⊤ and γ_i^{s/c} = exp(σ²∥u_i∥²₂ / 2).
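As a concrete sketch of the construction above, the following code (with our own function name; a 1/√m scaling is added so that inner products estimate the kernel) draws u_1, ..., u_m from p(u) = N(0, σ^{-2} I) and builds z_f(x):

```python
import numpy as np

def random_fourier_features(X, m, sigma=1.0, seed=0):
    """z_f(x) = (sin(u_1.x), cos(u_1.x), ..., sin(u_m.x), cos(u_m.x)) / sqrt(m),
    with u_k ~ N(0, sigma^{-2} I); z_f(x) . z_f(y) then estimates kappa(x, y)."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=1.0 / sigma, size=(m, X.shape[1]))
    P = X @ U.T                      # P[i, k] = u_k^T x_i
    Z = np.empty((X.shape[0], 2 * m))
    Z[:, 0::2] = np.sin(P)
    Z[:, 1::2] = np.cos(P)
    return Z / np.sqrt(m)            # scale so that Z @ Z.T approximates K

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
sigma = 2.0
Z = random_fourier_features(X, m=2000, sigma=sigma)
K_approx = Z @ Z.T                   # Monte Carlo estimate of the RBF kernel
```

Since sin² + cos² = 1, the diagonal of K_approx is exactly 1, while each off-diagonal entry concentrates around exp(−∥x − y∥²/(2σ²)) at the O(m^{−1/2}) rate quoted above.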
Comparing (6) to the linear machine based on random Fourier features in (4), we can see that, other than the weights {γ_i^{s/c}}_{i=1}^m, random Fourier features can be viewed as approximating (3) by restricting the solution f(·) to H_a^f.

The Nyström Method. The Nyström method approximates the full kernel matrix K by first sampling m examples, denoted by x̂_1, ..., x̂_m, and then constructing a low-rank matrix K̂_r = K_b K̂^† K_b^⊤, where K_b = [κ(x_i, x̂_j)]_{N×m}, K̂ = [κ(x̂_i, x̂_j)]_{m×m}, K̂^† is the pseudo-inverse of K̂, and r denotes the rank of K̂. In order to train a linear machine, we can derive a vector representation of data by z_n(x) = D̂_r^{−1/2} V̂_r^⊤ (κ(x, x̂_1), ..., κ(x, x̂_m))^⊤, where D̂_r = diag(λ̂_1, ..., λ̂_r) and V̂_r = (v̂_1, ..., v̂_r). It is straightforward to verify that z_n(x_i)^⊤ z_n(x_j) = [K̂_r]_{ij}. Given the vector representation z_n(x), we then learn a linear machine f(x) = w^⊤ z_n(x) by solving the following optimization problem:

min_{w ∈ R^r} (λ/2) ∥w∥²₂ + (1/N) ∑_{i=1}^N ℓ(w^⊤ z_n(x_i), y_i).   (7)

In order to see how the Nyström method can be cast into the unified framework of approximating the large-scale kernel machine by functional approximation, we construct the functional space H_a^n = span(φ̂_1, ..., φ̂_r), where φ̂_1, ..., φ̂_r are the first r normalized eigenfunctions of the operator L_m. The following proposition shows that the linear machine in (7) using the vector representation of the Nyström method is equivalent to the approximate kernel machine in (3) with the solution f(·) restricted to the approximate functional space H_a^n.

Proposition 2. The linear machine in (7) is equivalent to the following approximate kernel machine:

min_{f ∈ H_a^n} (λ/2) ∥f∥²_{H_κ} + (1/N) ∑_{i=1}^N ℓ(f(x_i), y_i).   (8)

Although both random Fourier features and the Nyström method can be viewed as variants of the unified framework, they differ significantly in the construction of the approximate functional space H_a.

³We use H_D, instead of H_κ in (3), owing to the representer theorem [16].
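A sketch of the Nyström representation z_n(x) defined above (the function name and the eigenvalue cutoff used to pick the numerical rank r are our own choices):

```python
import numpy as np

def nystrom_features(X, m, sigma=1.0, seed=0):
    """z_n(x) = D_r^{-1/2} V_r^T (kappa(x, xh_1), ..., kappa(x, xh_m))^T,
    so that z_n(x_i) . z_n(x_j) recovers the low-rank approximation
    [K_b Kh^+ K_b^T]_{ij}."""
    def kernel(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=m, replace=False)
    K_hat = kernel(X[idx], X[idx])          # m x m sampled kernel matrix
    lam, V = np.linalg.eigh(K_hat)          # eigh returns ascending order
    lam, V = lam[::-1], V[:, ::-1]          # re-rank in descending order
    r = int(np.sum(lam > 1e-10))            # numerical rank of K_hat
    Z = kernel(X, X[idx]) @ V[:, :r] / np.sqrt(lam[:r])
    return Z, idx

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
Z, idx = nystrom_features(X, m=40, sigma=1.5)
K_approx = Z @ Z.T                          # the low-rank Nystrom approximation
```

On the sampled points themselves the approximation is exact (K̂ K̂† K̂ = K̂), and K − K̂_r is positive semi-definite, so the Nyström approximation never overshoots the unit diagonal.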
In particular, the basis functions used by random Fourier features are sampled from a Gaussian distribution that is independent of the training examples. In contrast, the basis functions used by the Nyström method are sampled from the training examples and are therefore data dependent. This difference, although subtle, can have a significant impact on the classification performance. In the case of a large eigengap, i.e., when the first few eigenvalues of the full kernel matrix are much larger than the remaining eigenvalues, the classification performance is mostly determined by the top eigenvectors. Since the Nyström method uses a data-dependent sampling method, it is able to discover the subspace spanned by the top eigenvectors using a small number of samples. In contrast, since random Fourier features are drawn from a distribution independent of the training data, they may require a large number of samples before discovering this subspace. As a result, we expect a significantly lower generalization error for the Nyström method.

To illustrate this point, we generate a synthetic data set consisting of two balanced classes with a total of N = 10,000 data points generated from uniform distributions in two balls of radius 0.5 centered at (−0.5, 0.5) and (0.5, 0.5), respectively. The σ value in the RBF kernel is chosen by cross-validation and is set to 6 for the synthetic data. To avoid a trivial task, 100 redundant features, each drawn from a uniform distribution on the unit interval, are added to each example. The data points in the first two dimensions are plotted in Figure 1(a)⁴, and the eigenvalue distribution is shown in Figure 1(b). According to the results shown in Figure 1(c), it is clear that the Nyström method performs significantly better than random Fourier features. By using only 100 samples, the Nyström method is able to make a perfect prediction, while the decisions made by the random Fourier features based method are close to random guessing.
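The synthetic data set just described can be reproduced as follows. This is a sketch; the function name and the polar sampling trick for uniform points in a disk are our own:

```python
import numpy as np

def make_synthetic(N=10_000, n_redundant=100, seed=0):
    """Two balanced classes drawn uniformly from balls of radius 0.5 centered
    at (-0.5, 0.5) and (0.5, 0.5), plus redundant uniform noise features."""
    rng = np.random.default_rng(seed)
    centers = np.array([[-0.5, 0.5], [0.5, 0.5]])
    y = rng.integers(0, 2, size=N)
    # uniform sampling inside a disk: r = R * sqrt(u) keeps the density flat
    theta = rng.uniform(0.0, 2.0 * np.pi, size=N)
    radius = 0.5 * np.sqrt(rng.uniform(0.0, 1.0, size=N))
    core = centers[y] + np.c_[radius * np.cos(theta), radius * np.sin(theta)]
    noise = rng.uniform(0.0, 1.0, size=(N, n_redundant))
    return np.hstack([core, noise]), y

X, y = make_synthetic(N=1000, n_redundant=10)
```

The first two dimensions carry all the class information, while the redundant uniform features dilute the kernel similarity, which is what makes the task non-trivial.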
To evaluate the approximation error of the functional space, we plot in Figures 1(e) and 1(f), respectively, the first two eigenvectors of the approximate kernel matrix computed by the Nyström method and by random Fourier features using 100 samples. Compared to the eigenvectors computed from the full kernel matrix (Figure 1(d)), we can see that the Nyström method achieves a significantly better approximation of the first two eigenvectors than random Fourier features. Finally, we note that although the concept of eigengap has been exploited in many studies of kernel learning [2, 12, 1, 17], to the best of our knowledge, this is the first time it has been incorporated in the analysis for approximate large-scale kernel learning.

⁴Note that the scales of the two axes in Figure 1(a) are different.

[Figure 1: An illustration example. (a) Synthetic data: the first two dimensions. (b) Eigenvalues (in logarithmic scale) vs. rank; N is the total number of data points. (c) Classification accuracy vs. the number of samples. (d) The first two eigenvectors of the full kernel matrix. (e) The first two eigenvectors computed by the Nyström method. (f) The first two eigenvectors computed by random Fourier features.]

3 Main Theoretical Result

Let f*_m be the optimal solution to the approximate kernel learning problem in (8), and let f*_N be the solution to the full version of kernel learning in (3). Let f* be the optimal solution to

min_{f ∈ H_κ} F(f) = (λ/2) ∥f∥²_{H_κ} + E[ℓ(f(x), y)],

where E[·] takes the expectation over the joint distribution P(x, y). Following [10], we define the excess risk of any classifier f ∈ H_κ as

Λ(f) = F(f) − F(f*).   (9)

Unlike [6], in this work we aim to bound the generalization performance of f*_m by the generalization performance of f*_N, which better reflects the impact of approximating H_D by H_a^n. In order to obtain a tight bound, we exploit the local Rademacher complexity [10]. Define ψ(δ) = ((2/N) ∑_{i=1}^N min(δ², λ_i))^{1/2}. Let ε̃ be the solution to ε̃² = ψ(ε̃), where the existence and uniqueness of ε̃ are determined by the sub-root property of ψ(δ) [4], and let ε = max(ε̃, √(6 ln N / N)). According to [10], we have ε² = O(N^{−1/2}), and when the eigenvalues of the kernel function follow a p-power law, this is improved to ε² = O(N^{−p/(p+1)}). The following theorem bounds Λ(f*_m) by Λ(f*_N); Section 4 will be devoted to the proof of this theorem.

Theorem 1. For 16ε²e^{−2N} ≤ λ ≤ 1, λ_{r+1} = O(N/m), and

(λ_r − λ_{r+1})/N = Ω(1) ≥ 3( 2 ln(2N³)/m + √(2 ln(2N³)/m) ),

with probability 1 − 3N^{−3} we have

Λ(f*_m) ≤ 3Λ(f*_N) + (1/λ) Õ(ε² + 1/m),

where Õ(·) suppresses polynomial terms of ln N.

Theorem 1 shows that the additional error caused by the approximation of the Nyström method is improved to O(1/m) when there is a large gap between λ_r and λ_{r+1}.
Note that the improvement from O(1/√m) to O(1/m) is very significant from the theoretical viewpoint, because it is well known that the generalization error for kernel learning is O(N^{−1/2}) [4]⁵. As a result, to achieve a performance similar to standard kernel learning, the number of required samples has to be O(N) if the additional error caused by the kernel approximation is bounded by O(1/√m), leading to a high computational cost. On the other hand, with an O(1/m) bound for the additional error caused by the kernel approximation, the number of required samples is reduced to √N, making it more practical for large-scale kernel learning. We also note that the improvement made for the Nyström method relies on the property that H_a^n ⊂ H_D and therefore requires data-dependent basis functions. As a result, it does not carry over to random Fourier features.

⁵It is possible to achieve a better generalization error bound of O(N^{−p/(p+1)}) by assuming the eigenvalues of the kernel matrix follow a p-power law [10]. However, a large eigengap does not immediately imply a power-law distribution for the eigenvalues, and consequently a better generalization error.

4 Analysis

In this section, we present the analysis that leads to Theorem 1. Most of the proofs can be found in the supplementary materials. We first present a theorem showing that the excess risk bound of f*_m is related to the matrix approximation error ∥K − K̂_r∥₂.

Theorem 2. For 16ε²e^{−2N} ≤ λ ≤ 1, with probability 1 − 2N^{−3} we have

Λ(f*_m) ≤ 3Λ(f*_N) + C₂( ε²/λ + ∥K − K̂_r∥₂/(Nλ) + e^{−N} ),

where C₂ is a numerical constant.

In the sequel, we let K_r be the best rank-r approximation matrix for K. By the triangle inequality,

∥K − K̂_r∥₂ ≤ ∥K − K_r∥₂ + ∥K_r − K̂_r∥₂ ≤ λ_{r+1} + ∥K_r − K̂_r∥₂,

so we proceed to bound ∥K_r − K̂_r∥₂. Using the eigenfunctions of L_m and L_N, we define two linear operators H_r and Ĥ_r as

H_r[f](·) = ∑_{i=1}^r φ_i(·)⟨φ_i, f⟩_{H_κ},   Ĥ_r[f](·) = ∑_{i=1}^r φ̂_i(·)⟨φ̂_i, f⟩_{H_κ},   (10)

where f ∈ H_κ.
The following theorem shows that ∥K_r − K̂_r∥₂ is related to the linear operator ΔH = H_r − Ĥ_r.

Theorem 3. For λ̂_r > 0 and λ_r > 0, we have

∥K̂_r − K_r∥₂ ≤ N ∥L_N^{1/2} ΔH L_N^{1/2}∥₂,

where ∥L∥₂ stands for the spectral norm of a linear operator L.

Given the result in Theorem 3, we move on to bound the spectral norm of L_N^{1/2} ΔH L_N^{1/2}. To this end, we assume a sufficiently large eigengap Δ = (λ_r − λ_{r+1})/N. The theorem below bounds ∥L_N^{1/2} ΔH L_N^{1/2}∥₂ using matrix perturbation theory [19].

Theorem 4. For Δ = (λ_r − λ_{r+1})/N > 3∥L_N − L_m∥_{HS}, we have

∥L_N^{1/2} ΔH L_N^{1/2}∥₂ ≤ η · 4∥L_N − L_m∥_{HS} / (Δ − ∥L_N − L_m∥_{HS}),

where η = max( √(λ_{r+1}/N), 2∥L_N − L_m∥_{HS} / (Δ − ∥L_N − L_m∥_{HS}) ).

Remark. To utilize the result in Theorem 4, we consider the case when λ_{r+1} = O(N/m) and Δ = Ω(1). We then have

∥L_N^{1/2} ΔH L_N^{1/2}∥₂ ≤ O( max( (1/√m)∥L_N − L_m∥_{HS}, ∥L_N − L_m∥²_{HS} ) ).

Obviously, in order to achieve an O(1/m) bound for ∥L_N^{1/2} ΔH L_N^{1/2}∥₂, we need an O(1/√m) bound for ∥L_N − L_m∥_{HS}, which is given by the following theorem.

Theorem 5. For κ(x, x) ≤ 1, ∀x ∈ X, with probability 1 − N^{−3} we have

∥L_N − L_m∥_{HS} ≤ 2 ln(2N³)/m + √(2 ln(2N³)/m).

Theorem 5 directly follows from Lemma 2 of [18]. Therefore, by assuming the conditions in Theorem 1 and combining the results of Theorems 3, 4, and 5, we immediately have ∥K − K̂_r∥₂ ≤ O(N/m). Combining this bound with the result in Theorem 2 and using the union bound, we have, with probability 1 − 3N^{−3},

Λ(f*_m) ≤ 3Λ(f*_N) + (C/λ)( ε² + 1/m + e^{−N} ).

We complete the proof of Theorem 1 by using the fact that e^{−N} < 1/N ≤ 1/m.

5 Empirical Studies

To verify our theoretical findings, we evaluate the empirical performance of the Nyström method and random Fourier features for large-scale kernel learning. Table 1 summarizes the statistics of the six data sets used in our study, including two for regression and four for classification. Note that the datasets CPU, CENSUS, ADULT and FOREST were originally used in [13] to verify the effectiveness of random Fourier features.
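The eigengap assumption underlying Theorem 1 can be checked directly on a subsample by inspecting the normalized spectrum λ_i/N of the kernel matrix, as is done for the data sets later in this section. A minimal sketch (our own function name and toy clustered data, not the paper's datasets):

```python
import numpy as np

def normalized_spectrum(X, sigma=1.0):
    """Eigenvalues of the RBF kernel matrix divided by N, in descending order."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    return np.linalg.eigvalsh(K)[::-1] / X.shape[0]

# two tight clusters: expect a large gap after the first two eigenvalues
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.05, size=(100, 2)) for c in (-1.0, 1.0)])
spec = normalized_spectrum(X, sigma=1.0)
gap_ratio = spec[1] / spec[2]   # crude eigengap indicator at r = 2
```

Since the kernel has unit diagonal, the normalized eigenvalues sum to one, so a few dominant values directly signal a large eigengap.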
We evaluate classification performance by accuracy, and regression performance by the mean square error on the testing data. We use uniform sampling in the Nyström method owing to its simplicity; we note that the empirical performance of the Nyström method may be improved by using a different implementation [21, 11]. We downloaded the code for the implementation of random Fourier features from http://berkeley.intel-research.net/arahimi/c/random-features. An RBF kernel is used for both methods and for all the datasets. A ridge regression package from [13] is used for the two regression tasks, and LIBSVM [5] is used for the classification tasks. All parameters are selected by 5-fold cross-validation. All experiments are repeated ten times, and the prediction performance averaged over the ten trials is reported.

Figure 2 shows the performance of both methods with a varied number of random samples. Note that for the large datasets (i.e., COVTYPE and FOREST), we restrict the maximum number of random samples to 200 because of the high computational cost. We observe that for all the data sets, the Nyström method outperforms random Fourier features⁶. Moreover, except for COVTYPE with 10 random samples, the Nyström method performs significantly better than random Fourier features, according to t-tests at the 95% significance level. We finally evaluate whether the large eigengap condition, the key assumption for our main theoretical result, holds for these data sets. Due to the large sizes, except for CPU, we compute the eigenvalues of the kernel matrix based on 10,000 randomly selected examples from each dataset. As shown in Figure 3 (eigenvalues are in logarithmic scale), we observe that the eigenvalues drop very quickly as the rank increases, leading to a significant gap between the top eigenvalues and the remaining eigenvalues.

6 Conclusion and Discussion

We study two methods for large-scale kernel learning, i.e., the Nyström method and random Fourier features.
One key difference between these two approaches is that the Nyström method uses data-dependent basis functions while random Fourier features introduce data-independent basis functions. This difference leads to an improved analysis for kernel learning approaches based on the Nyström method. We show that when there is a large eigengap in the kernel matrix, the approximation error of the Nyström method can be improved to O(1/m), leading to significantly better generalization performance than random Fourier features. We verify this claim by an empirical study. As implied by our study, it is important to develop data-dependent basis functions for large-scale kernel learning. One direction we plan to explore is to improve random Fourier features by making the sampling data dependent. This can be achieved by introducing a rejection procedure that rejects the sampled Fourier components when they do not align well with the top eigenfunctions estimated from the sampled data.

⁶We note that the classification performance on the ADULT data set reported in Figure 2 does not match the performance reported in [13]. Given the fact that we use the code provided by [13] and follow the same cross-validation procedure, we believe our result is correct. We did not use the KDDCup dataset because of the problem of oversampling, as pointed out in [13].

Table 1: Statistics of data sets
TASK    DATA      # TRAIN    # TEST     # Attr.
Reg.    CPU       6,554      819        21
Reg.    CENSUS    18,186     2,273      119
Class.  ADULT     32,561     16,281     123
Class.  COD-RNA   59,535     271,617    8
Class.  COVTYPE   464,810    116,202    54
Class.  FOREST    522,910    58,102     54

[Figure 2: Comparison of the Nyström method and random Fourier features. For regression tasks, the mean square error (with std.) is reported; for classification tasks, accuracy (with std.) is reported.]

[Figure 3: The eigenvalue distributions of kernel matrices. N is the number of examples used to compute the eigenvalues.]

Acknowledgments

This work was partially supported by ONR Award N00014-09-1-0663, NSF IIS-0643494, NSFC (61073097) and 973 Program (2010CB327903).

References

[1] A. Azran and Z. Ghahramani. Spectral methods for automatic multiscale data clustering. In CVPR, pages 190–197, 2006.
[2] F. R. Bach and M. I. Jordan. Learning spectral clustering. Technical Report UCB/CSD-03-1249, EECS Department, University of California, Berkeley, 2003.
[3] F. R. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods.
In ICML, pages 33–40, 2005.
[4] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, pages 44–58, 2002.
[5] C. Chang and C. Lin. LIBSVM: a library for support vector machines. TIST, 2(3):27, 2011.
[6] C. Cortes, M. Mohri, and A. Talwalkar. On the impact of kernel approximation on learning accuracy. In AISTATS, pages 113–120, 2010.
[7] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: A kernel-based perceptron on a fixed budget. In NIPS, 2005.
[8] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. JMLR, 6:2153–2175, 2005.
[9] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, pages 2165–2176, 2004.
[10] V. Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. Springer, 2011.
[11] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström method. In NIPS, pages 1060–1068, 2009.
[12] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[13] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177–1184, 2007.
[14] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In NIPS, pages 1313–1320, 2009.
[15] W. Rudin. Fourier Analysis on Groups. Wiley-Interscience, 1990.
[16] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[17] T. Shi, M. Belkin, and B. Yu. Data spectroscopy: eigenspace of convolution operators and clustering. The Annals of Statistics, 37(6B):3960–3984, 2009.
[18] S. Smale and D.-X. Zhou. Geometry on probability spaces. Constructive Approximation, 30(3):311–323, 2009.
[19] G. W. Stewart and J. Sun. Matrix Perturbation Theory. Academic Press, 1990.
[20] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, pages 682–688, 2001.
[21] K. Zhang, I. W. Tsang, and J. T. Kwok. Improved Nyström low-rank approximation and error analysis. In ICML, pages 1232–1239, 2008.
Spiking and saturating dendrites differentially expand single neuron computation capacity

Romain Cazé INSERM U960, Paris Diderot, Paris 7, ENS 29 rue d'Ulm, 75005 Paris romain.caze@ens.fr
Mark Humphries INSERM U960; University of Manchester 29 rue d'Ulm, 75005 Paris; UK mark.humphries@manchester.ac.uk
Boris Gutkin INSERM U960, CNRS, ENS 29 rue d'Ulm, 75005 Paris boris.gutkin@ens.fr

Abstract

The integration of excitatory inputs in dendrites is non-linear: multiple excitatory inputs can produce a local depolarization departing from the arithmetic sum of each input's response taken separately. If this depolarization is bigger than the arithmetic sum, the dendrite is spiking; if the depolarization is smaller, the dendrite is saturating. Decomposing a dendritic tree into independent dendritic spiking units greatly extends its computational capacity, as the neuron then maps onto a two-layer neural network, enabling it to compute linearly non-separable Boolean functions (lnBFs). How can these lnBFs be implemented by dendritic architectures in practice? And can saturating dendrites equally expand computational capacity? To address these questions we use a binary neuron model and Boolean algebra. First, we confirm that spiking dendrites enable a neuron to compute lnBFs using an architecture based on the disjunctive normal form (DNF). Second, we prove that saturating dendrites, as well as spiking dendrites, enable a neuron to compute lnBFs using an architecture based on the conjunctive normal form (CNF). Contrary to a DNF-based architecture, in a CNF-based architecture dendritic unit tunings do not imply the neuron tuning, as has been observed experimentally. Third, we show that one cannot use a DNF-based architecture with saturating dendrites.
Consequently, we show that an important family of lnBFs implemented with a CNF architecture can require an exponential number of saturating dendritic units, whereas the same family implemented with either a DNF architecture or a CNF architecture always requires a linear number of spiking dendritic units. This minimization could explain why a neuron spends energetic resources to make its dendrites spike.

1 Introduction

Recent progress in voltage clamp techniques has enabled the recording of local membrane voltage in dendritic branches, and this has greatly changed our view of the potential for single neuron computation. Experiments have shown that when the local dendritic membrane potential reaches a given threshold, a dendritic spike can be elicited [4, 13]. Based on this type of local dendritic non-linearity, it has been suggested that a CA1 hippocampal pyramidal neuron comprises multiple independent non-linear spiking units, summating at the soma, and is thus equivalent to a two-layer artificial neural network [12]. This idea is attractive, because this type of feed-forward network can implement any Boolean function, in particular linearly non-separable Boolean functions (lnBFs), and thus radically extends the computational power of a single neuron. By contrast, a seminal neuron model, the McCulloch & Pitts unit [10], is restricted to linearly separable Boolean functions. However attractive this idea, it requires additional investigation. Indeed, spiking dendritic units may enable the computation of lnBFs using an architecture, suggested in [9], where the dendritic tuning implies the neuron tuning (see also Proposition 1).
This relation between dendritic and neuron tuning has not been confirmed experimentally; on the contrary, it has been shown in vivo that dendritic tuning does not imply the neuron tuning [6]: calcium imaging in vivo has shown that the local calcium signal in dendrites can maximally increase for visual inputs that do not trigger somatic spiking. We resolve this first issue here by showing how one can implement lnBFs with spiking dendritic units whose tunings do not formally imply the somatic tuning. Moreover, the idea of a neuron implementing a two-layer network is based on spiking dendrites. Dendritic non-linearities have a variety of shapes, and many neuron types may not have the capacity to generate dendritic spikes. By contrast, all dendrites can saturate [1, 16, 2]. For instance, glutamate uncaging on cerebellar stellate cell dendrites and simultaneous somatic voltage recording of these interneurons show that multiple excitatory inputs on the same dendrite result in a somatic depolarization smaller than the arithmetic sum of the quantal depolarizations [1]. This type of non-linearity was predicted by Rall's work [7], a model which explains saturation by an increase in membrane conductance and a decrease in driving force. It is unknown whether local dendritic saturation can also enhance the general computational capacity of a single neuron in the same way as local dendritic spiking; if so, this would make the implementation of lnBFs plausible in potentially any type of neuron. In the present study we show that saturating dendritic units also enable the computation of lnBFs (see Proposition 2). One can wonder why some dendrites support metabolically expensive spiking if dendritic saturation is sufficient to compute all Boolean functions. We tackle this issue in the second part of our study.
We show that a family of positive lnBFs may require an exponentially growing number of saturating dendritic units when the number of input variables grows linearly, whereas the same family of Boolean functions requires only a linearly growing number of spiking dendritic units. Consequently, dendritic spikes may minimize the number of units necessary to implement all Boolean functions. Thus, as the number of independent units – spiking or saturating – in a dendrite remains an open question [5], but is potentially small [14], it may turn out that certain Boolean functions are only implementable using spiking dendrites.

2 Definitions

2.1 The binary two stage neuron

We introduce here a neuron model analogous to [12]. Our model is a binary two stage neuron, where X is a binary input vector of length n and y is a binary variable modelling the neuron output. First, inputs sum locally within each dendritic unit j given a local weight vector W_j; then they pass through a local transfer function F_j accounting for the dendritic non-linear behavior. Second, the outputs of the d dendritic subunits sum at the soma and pass through the somatic transfer function F_0. F_0 is a spiking transfer function whereas the F_j are either spiking or saturating transfer functions; these functions are described in the next section and are displayed in Figure 1A. Formally, the output y is computed with the following equation:

y = F_0( Σ_{j=1}^{d} F_j(W_j · X) )

2.2 Sub-linear and supra-linear transfer functions

A transfer function F takes as input a local weighted linear sum x and outputs F(x); this output depends on the type of transfer function, spiking or saturating, and on a single positive parameter Θ, the threshold of the transfer function. The two types of transfer functions are defined as follows:

Definition 1.
Spiking transfer function:

F_spk(x) = 1 if x ≥ Θ, and 0 otherwise.

Table 1: Two examples of positive Boolean functions of 4 variables

x1                | 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
x2                | 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
x3                | 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
x4                | 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
g(x1, x2, x3, x4) | 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 1
h(x1, x2, x3, x4) | 0 0 0 0 0 1 1 1 0 1 1 1 0 1 1 1

Definition 2. Saturating transfer function:

F_sat(x) = 1 if x ≥ Θ, and x/Θ otherwise.

The difference between a spiking and a saturating transfer function is that F_spk(x) = 0 whereas F_sat(x) = x/Θ when x is below Θ. To formally characterize this difference we define here sub-linearity and supra-linearity of a transfer function F on a given interval I. These definitions are similar to the well-known notions of concavity and convexity:

Definition 3. F is supra-linear on I if and only if F(x1 + x2) > F(x1) + F(x2) for at least one (x1, x2) ∈ I². F is sub-linear on I if and only if F(x1 + x2) < F(x1) + F(x2) for at least one (x1, x2) ∈ I². F is strictly sub-linear (resp. supra-linear) on I if it is sub-linear (resp. supra-linear) but not supra-linear (resp. sub-linear) on I.

Note that these definitions also work when using n-tuples instead of couples on the interval (useful in Lemma 3). Note that whenever Θ > 0, F_spk is both supra- and sub-linear on I = [0, +∞[ whereas F_sat is strictly sub-linear on the same interval. F_sat is not supra-linear on I because F_sat(x1 + x2) ≤ F_sat(x1) + F_sat(x2) for all (x1, x2) ∈ I², by definition of F_sat. Moreover, F_sat is sub-linear on I because F_sat(a + b) = 1 and F_sat(a) + F_sat(b) = 2 for at least one (a, b) ∈ I² such that a ≥ Θ and b ≥ Θ. All in all, F_sat is strictly sub-linear on I. Similarly to F_sat, F_spk is sub-linear on I because F_spk(a + b) = 1 and F_spk(a) + F_spk(b) = 2 for at least one (a, b) ∈ I² such that a ≥ Θ and b ≥ Θ. Moreover, F_spk is supra-linear because F_spk(c + d) = 1 and F_spk(c) + F_spk(d) = 0 for at least one (c, d) such that c < Θ and d < Θ but c + d ≥ Θ.
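These sub-/supra-linearity properties can be reproduced numerically; a minimal sketch (the probe values a, b, c, d and Θ = 1 are illustrative choices, not taken from the paper):

```python
def f_spk(x, theta):
    """Spiking transfer function (Definition 1): all-or-none output."""
    return 1.0 if x >= theta else 0.0

def f_sat(x, theta):
    """Saturating transfer function (Definition 2): linear below threshold."""
    return 1.0 if x >= theta else x / theta

theta = 1.0

# F_sat is sub-linear: for a, b >= theta, F_sat(a+b) = 1 < 2 = F_sat(a) + F_sat(b).
a = b = 1.5
assert f_sat(a + b, theta) < f_sat(a, theta) + f_sat(b, theta)

# F_sat is never supra-linear (sub-additivity, checked on a grid over [0, 3]).
grid = [0.1 * k for k in range(31)]
assert all(f_sat(x1 + x2, theta) <= f_sat(x1, theta) + f_sat(x2, theta)
           for x1 in grid for x2 in grid)

# F_spk is sub-linear for the same (a, b) ...
assert f_spk(a + b, theta) < f_spk(a, theta) + f_spk(b, theta)

# ... and also supra-linear: two sub-threshold inputs can jointly cross theta.
c = d = 0.6
assert f_spk(c + d, theta) > f_spk(c, theta) + f_spk(d, theta)
```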
All in all, F_spk is both sub-linear and supra-linear.

2.3 Boolean Algebra

In order to study the range of possible input-output mappings implementable by a two stage neuron we use Boolean functions, which can efficiently and formally describe all binary input-output mappings. Let us recall the definition of this extensively studied mathematical object [3, 17]:

Definition 4. A Boolean function of n variables is a function from {0, 1}^n into {0, 1}, where n is a positive integer.

In Table 1 the truth tables for two Boolean functions g and h are presented. These Boolean functions are fully and uniquely defined by their truth tables. Both g and h are positive lnBFs (see chapter 9 of [3] for an extensive study of linear separability); because of its importance we recall the definition of positive Boolean functions:

Definition 5. Let f be a Boolean function on {0, 1}^n. f is positive if and only if f(X) ≥ f(Z) for all (X, Z) ∈ {0, 1}^n such that X ≥ Z (meaning that x_i ≥ z_i for all i).

We also recall the notion of implication, as it is important to observe that a dendritic input-output function (or tuning) may or may not imply the neuron's input-output function:

Definition 6. Let f and g be two Boolean functions. f implies g ⟺ (f(X) = 1 ⟹ g(X) = 1 for all X ∈ {0, 1}^n).

As will become clear, we can treat each dendritic unit as computing its own Boolean function on its inputs: for a unit's output to imply the whole neuron's output then means that if a unit outputs a 1, then the neuron outputs a 1. In order to describe positive Boolean functions, it is useful to decompose them into positive terms and positive clauses:

Definition 7. Let X(j) be a tuple of k < n positive integers referencing the different variables present in a term or a clause. A positive term j is a conjunction of variables written as T_j(X) = ⋀_{i∈X(j)} x_i. A positive clause j is a disjunction of variables written as C_j(X) = ⋁_{i∈X(j)} x_i. A term (resp. clause) is prime if it is not implied by (resp. does not imply) any other term (resp. clause) in a disjunction (resp. conjunction) of multiple terms (resp. clauses).

These terms and clauses can then define the Disjunctive or Conjunctive Normal Form (DNF or CNF) expression of a Boolean function f, particularly:

Definition 8. A complete positive DNF is a disjunction of prime positive terms T: DNF(f) := ⋁_{T_j∈T} ⋀_{i∈X(j)} x_i

Definition 9. A complete positive CNF is a conjunction of prime positive clauses C: CNF(f) := ⋀_{C_j∈C} ⋁_{i∈X(j)} x_i

It has been shown that all positive Boolean functions can be expressed as a positive complete DNF ([3], Theorem 1.24); similarly, all positive Boolean functions can be expressed as a positive complete CNF. These complete positive DNFs or CNFs are the shortest possible DNF or CNF descriptions of positive Boolean functions. To clarify all these definitions let us introduce a series of examples built around g and h.

Example 1. Let us take X(1) = (1, 2) and X(2) = (3, 4). These tuples define two positive terms: T_1(X) = x1 ∧ x2, where T_1(X) = 1 only when x1 = 1 and x2 = 1, and T_1(X) = 0 otherwise; similarly T_2(X) = x3 ∧ x4, where T_2(X) = 1 only when x3 = 1 and x4 = 1. These tuples can also define two positive clauses: C_1(X) = x1 ∨ x2, where C_1(X) = 1 as soon as x1 = 1 or x2 = 1, and similarly C_2(X) = x3 ∨ x4, where C_2(X) = 1 as soon as x3 = 1 or x4 = 1. In the disjunction of terms T_1 ∨ T_2, the terms are prime because T_1(X) = 1 is not implied by T_2(X) = 1 for all X (and vice-versa). Similarly, in the conjunction of clauses C_1 ∧ C_2, the clauses are prime because C_1(X) = 1 does not imply that C_2(X) = 1 for all X (and vice-versa). T_1 ∨ T_2 is the complete positive DNF expression of g; alternatively, C_1 ∧ C_2 is the complete positive CNF expression of h.
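For the four-variable instances in Table 1, the DNF expression of g and the CNF expression of h can be checked against the truth table by brute force; a quick sketch (function names are ours):

```python
from itertools import product

def g(x1, x2, x3, x4):
    """Complete positive DNF of g: (x1 AND x2) OR (x3 AND x4)."""
    return int((x1 and x2) or (x3 and x4))

def h(x1, x2, x3, x4):
    """Complete positive CNF of h: (x1 OR x2) AND (x3 OR x4)."""
    return int((x1 or x2) and (x3 or x4))

def is_positive(f):
    """Definition 5: raising any input never lowers the output."""
    points = list(product([0, 1], repeat=4))
    return all(f(*x) >= f(*z) for x in points for z in points
               if all(xi >= zi for xi, zi in zip(x, z)))

assert is_positive(g) and is_positive(h)

# Truth-table rows in the column order of Table 1 (x1 varies fastest).
order = [(x1, x2, x3, x4) for x4 in (0, 1) for x3 in (0, 1)
         for x2 in (0, 1) for x1 in (0, 1)]
assert [g(*p) for p in order] == [0,0,0,1, 0,0,0,1, 0,0,0,1, 1,1,1,1]
assert [h(*p) for p in order] == [0,0,0,0, 0,1,1,1, 0,1,1,1, 0,1,1,1]
```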
The truth tables of g and h are displayed in Table 1.

3 Results

We first prove here that a two stage neuron with a sufficient number of only spiking or only saturating dendritic units can implement all positive Boolean functions, particularly lnBFs like g and h, whereas a classic McCulloch & Pitts unit is restricted to linearly separable Boolean functions. Moreover, we present two construction architectures for building a two stage neuron implementing a positive Boolean function based on its complete DNF or CNF expression. Finally, we show that the DNF-based architecture is only possible with spiking dendritic units and not with saturating dendritic units.

Figure 1: Modeling dendritic spikes, dendritic saturations, and their impact on computation capacity. (A) Two types of transfer functions for a unit j with height normalized to 1 and a variable threshold Θ_j; the input is the local weighted sum W_j·X and the output is y_j. (A1) A spiking transfer function models somatic spikes and dendritic spikes. (A2) A saturating transfer function models dendritic saturations. (B) From left to right: a unit implementing the term T(X) = x1 ∧ x2, and two units implementing the clause C(X) = x3 ∨ x4; circles show synaptic weights, squares show the threshold and the type of transfer function (spk: spiking, sat: saturating). (C) Two architectures to implement all positive Boolean functions in a two stage neuron: the d dendritic units correspond to all the terms of a DNF (left) or to all the clauses of a CNF (right); the somatic unit respectively implements an AND or an OR logic operation.

3.1 Computation of positive Boolean functions using non-linear dendritic units

Lemma 1. A two stage neuron with non-negative synaptic weights and increasing transfer functions necessarily implements positive Boolean functions.

Proof. Let f be the Boolean function representing the input-output mapping of a two stage neuron, and let X and Z be two binary vectors such that X ≥ Z. For all j ∈ {1, 2, . . . , d} the local weights are non-negative, w_{i,j} ≥ 0, thus for a given dendritic unit j we have w_{i,j} x_i ≥ w_{i,j} z_i. We can sum these inequalities over all i, and the F_j are increasing transfer functions, thus F_j(W_j·X) ≥ F_j(W_j·Z). We can sum the d inequalities corresponding to every dendritic unit, and F_0 is an increasing transfer function, thus f(X) ≥ f(Z).

Lemma 2. A term (resp. a clause) can be implemented by a unit with a supra-linear (resp. sub-linear) transfer function.

Proof. We need to provide the parameter sets of a transfer function implementing a term (resp. a clause) under the constraint that the transfer function is supra-linear (resp. sub-linear). Indeed, a supra-linear transfer function (like the spiking transfer function) with the parameter set w_i = 1 if i ∈ X(j), w_i = 0 otherwise, and Θ = card(X(j)) implements the term T_j. A sub-linear transfer function (like the saturating transfer function) with the parameter set w_i = 1 if i ∈ X(j), w_i = 0 otherwise, and Θ = 1 implements the clause C_j. These implementations are illustrated by examples in Figure 1B.

Lemma 3. A term (resp. a clause) cannot be implemented by a unit with a strictly sub-linear (resp. supra-linear) transfer function.

Proof. We prove this lemma for a term; the proof is similar for a clause. Let T_j be the term defined by X(j), with card(X(j)) ≥ 2. First, for every input vector X such that x_i = 1 for a single i ∈ X(j) and x_k = 0 for all k ≠ i, we have T_j(X) = 0, implying that F(W·X) = F(w_i x_i) = 0. Summing all these elements yields the equality Σ_{i∈X(j)} F(w_i x_i) = 0. Second, for the input vector X such that x_i = 1 for all i ∈ X(j), we have T_j(X) = 1, implying that F(Σ_{i∈X(j)} w_i x_i) = 1. Putting the two pieces together we obtain:

F(Σ_{i∈X(j)} w_i x_i) > Σ_{i∈X(j)} F(w_i x_i)

This inequality shows that the tuple of points (w_i x_i | i ∈ X(j)) defining a term must have F supra-linear; therefore, by Definition 3, F cannot be both strictly sub-linear and implement a term.
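Lemma 2's constructive parameter sets can be verified exhaustively for a small example, here the term x1 ∧ x2 ∧ x3 and the clause x1 ∨ x2 ∨ x3 (a sketch; note that with 0/1 inputs and unit weights the saturating unit's output is exactly the clause's value):

```python
from itertools import product

def f_spk(x, theta):
    return 1 if x >= theta else 0

def f_sat(x, theta):
    return 1 if x >= theta else x / theta

w = [1, 1, 1]                                  # w_i = 1 for i in X(j), per Lemma 2
for X in product([0, 1], repeat=3):
    s = sum(wi * xi for wi, xi in zip(w, X))   # local weighted sum W_j . X
    term, clause = int(all(X)), int(any(X))
    # A spiking unit with theta = card(X(j)) implements the term ...
    assert f_spk(s, theta=3) == term
    # ... and either unit type with theta = 1 implements the clause.
    assert f_spk(s, theta=1) == clause
    assert f_sat(s, theta=1) == clause
```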
Using these lemmas, we show the possible and impossible implementation architectures of positive Boolean functions in two-layer neuron models using either spiking or saturating dendritic units.

Proposition 1. A two stage neuron with non-negative synaptic weights and a sufficient number of dendritic units with spiking transfer functions can implement only and all positive Boolean functions, based on their positive complete DNF.

Proof. A two stage neuron can only compute positive Boolean functions (Lemma 1). All positive Boolean functions can be expressed as a positive complete DNF; because a spiking dendritic unit has a supra-linear transfer function, it can implement all possible terms (Lemma 2). Therefore a two stage neuron model without inhibition can implement only and all positive Boolean functions, with as many dendritic units as there are terms in the function's positive complete DNF. This architecture is represented in Figure 1C (left).

Informally, this simply means that a dendrite is a pattern detector: if a pattern is present in the input then the dendritic unit elicits a dendritic spike. This architecture has been repeatedly invoked by theoreticians [8] and experimentalists ([9], supplementary material) to suggest that dendritic spikes increase a neuron's computational capacity. With this architecture, however, the dendritic transfer function, if it is viewed as a Boolean function, formally implies the neuron's input-output mapping. This has not been confirmed experimentally yet.

Proposition 2. A two stage neuron with non-negative synaptic weights and a sufficient number of dendritic units with spiking or saturating transfer functions can implement only and all positive Boolean functions, based on their positive complete CNF.

Proof. A two stage neuron can only compute positive Boolean functions (Lemma 1).
All positive Boolean functions can be expressed as a positive complete CNF; because both a spiking and a saturating dendritic unit have a sub-linear transfer function, both can implement all possible clauses (Lemma 2). Therefore a two stage neuron model without inhibition can implement only and all positive Boolean functions, with as many dendritic units as there are clauses in the function's positive complete CNF. This architecture is represented in Figure 1C (right).

To our knowledge, this implementation architecture has not yet been proposed in the neuroscience literature. It shows that saturations can increase the computational power of a neuron as much as dendritic spikes. It also shows that another implementation architecture is possible using spiking dendritic units. Using this architecture, the dendritic units' transfer functions do not imply the somatic output. This independence of dendritic and somatic responses to inputs has been observed in Layer 2/3 neurons [6].

Proposition 3. A two stage neuron with non-negative synaptic weights and only dendritic units with saturating transfer functions cannot implement a positive Boolean function based on its complete DNF.

Proof. The transfer function of a saturating dendritic unit is strictly sub-linear, therefore this unit cannot implement a term (Lemma 3).

This result suggests that spiking dendritic units are more flexible than saturating dendritic units; they allow the computation of Boolean functions through either DNF- or CNF-based architectures (illustrated in Figure 2), whereas saturating units are restricted to CNF-based architectures.

3.2 Implementation of a family of positive lnBFs using either spiking or saturating dendrites

Figure 2: Implementation of two linearly non-separable Boolean functions using CNF-based or DNF-based architectures. Four parameter sets of two-stage neuron models: circles show synaptic weights, squares show the threshold and the unit type (spk: spiking, sat: saturating).
These parameter sets implement (A1/A2) g or (B1/B2) h, the two lnBFs depicted in Table 1, using: (A1/B1) a DNF-based architecture and spiking dendritic units only; (A2/B2) a CNF-based architecture and saturating dendritic units only.

The Boolean functions g and h form a family of Boolean functions we call feature binding problems, in reference to [8]. In this section we show how this family can be implemented using either a DNF-based or a CNF-based architecture. For some Boolean functions, the DNF and CNF grow at different rates as a function of the number of variables [3, 11]. This is the case when g and h are defined for n input variables.

Example 2. Let's define g by the complete positive DNF expression φ:

φ(g(x1, z1, . . . , xn, zn)) := x1z1 ∨ x2z2 ∨ · · · ∨ xnzn

The same function g has a unique complete positive CNF expression; let's call it ψ. The clauses of ψ are exactly those elementary disjunctions of n variables that involve one variable out of each of the pairs {x1, z1}, {x2, z2}, . . . , {xn, zn}. Thus ψ has 2^n clauses.

Example 3. Let's define h by the complete positive CNF expression ψ:

ψ(h(x1, z1, . . . , xn, zn)) := (x1 ∨ z1)(x2 ∨ z2) . . . (xn ∨ zn)

The same function h has a unique complete positive DNF expression; let's call it φ. The terms of φ are exactly those elementary conjunctions of n variables that involve one variable out of each of the pairs {x1, z1}, {x2, z2}, . . . , {xn, zn}. Thus φ has 2^n terms.

Table 2 shows the number of necessary units for g and h depending on the chosen architecture.

Table 2: Number of necessary units

Boolean function | # of terms in DNF | # of clauses in CNF
g                | n                 | 2^n
h                | 2^n               | n

From Propositions 1 and 2, it is immediately clear that spiking dendritic units always give access to the minimal possible two-stage neuron implementation.
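The exponential gap for g can be confirmed by brute force for small n: the complete CNF of g, whose clauses take one variable from each pair {x_i, z_i}, has 2^n clauses yet agrees with the n-term DNF everywhere. A sketch (function names are ours):

```python
from itertools import product

def g_dnf(xs, zs):
    """phi: x1 z1 OR x2 z2 OR ... OR xn zn  (n terms)."""
    return int(any(x and z for x, z in zip(xs, zs)))

def g_cnf(xs, zs):
    """psi: conjunction over all 2^n clauses choosing one variable per pair."""
    n = len(xs)
    choices = list(product((0, 1), repeat=n))   # 0: take x_i, 1: take z_i
    return int(all(any(zs[i] if c[i] else xs[i] for i in range(n))
                   for c in choices))

n = 3
assert len(list(product((0, 1), repeat=n))) == 2 ** n   # the CNF has 2^n clauses
# The two expressions agree on every input, so they define the same function g.
for bits in product((0, 1), repeat=2 * n):
    xs, zs = bits[:n], bits[n:]
    assert g_dnf(xs, zs) == g_cnf(xs, zs)
```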
A neuron with spiking dendritic units can thus implement g with n units using a DNF-based architecture and h with n units using a CNF-based architecture; but saturating units, restricted to CNF-based architectures, can only implement g with 2^n units.

4 Discussion

The main result of our study is that dendritic saturations can play a computational role that is as important as dendritic spikes: saturating dendritic units enable a neuron to compute lnBFs (as shown in Proposition 2). The same proposition shows that a neuron can compute lnBFs decomposed according to the CNF using spiking dendritic units; with this architecture, dendritic tuning does not imply the somatic tuning to inputs. Moreover, we demonstrated that an important family of lnBFs, formed by g and h, can be implemented in a two stage neuron using either spiking or saturating dendritic units. We also showed that lnBFs cannot be implemented in a two stage neuron using a DNF-based architecture with only saturating dendritic units (Proposition 3). These results nicely separate the implications of saturating and spiking dendritic units in single neuron computation. On the one hand, spiking dendritic units are a more flexible basis for computation, as they can be employed in two different implementation architectures (Propositions 1 and 2) where the dendritic tunings – the dendritic unit transfer functions – may or may not imply the tuning of the whole neuron. The latter may explain why dendrites can have a tuning different from the whole neuron, as has been observed in Layer 2/3 pyramidal cells of the visual cortex [6]. On the other hand, saturating dendritic units can enhance single neuron computation through implementing all positive Boolean functions (Proposition 2), while reducing the energetic costs associated with the active ion channels required for dendritic spikes [4, 13].
For an infinite number of dendritic units, saturating and spiking units lead to the same increase in computation capacity; for a finite number of dendritic units, our results suggest that spiking dendritic units could have advantages over saturating dendritic units. In the second part of our study we showed that a family of lnBFs can be described by an expression containing an exponential or a linear number of elements. Namely, the lnBFs defined by g or h can be implemented with a linear number of spiking dendritic units, whereas for g a neuronal implementation using only saturations requires an exponential number of saturating dendritic units. Consequently, spiking dendritic units may minimize the number of dendritic units necessary to implement this family of Boolean functions. The Boolean functions g and h formalize feature binding problems [8], which are important and challenging computations (see [15] for a review). Some single neuron solutions to feature binding problems have been proposed in [8], but restricted to DNF-based architectures; our results thus generalize and extend this study by proposing alternative CNF-based solutions. Moreover, we show that this alternative architecture enables the solution of an important family of binding problems with a linear number of spiking dendritic units. Thus we have proposed more efficient solutions to a family of challenging computations. Because of their elegance and simplicity, stemming from Boolean algebra, we believe our results are applicable to more complex situations. They can be extended to continuous transfer functions, which are more biologically plausible; in this case the notions of sub-linearity and supra-linearity are replaced by concavity and convexity. Moreover, all the parameters used here for proofs and examples are integer-valued, but the same proofs and examples are easily extendable to continuous steady-state rate models where parameters are real-valued.
In conclusion, our results have a solid formal basis; moreover, they both explain recent experimental findings and suggest a new way to implement Boolean functions using saturating as well as spiking dendritic units.

References

[1] T. Abrahamsson, L. Cathala, K. Matsui, R. Shigemoto, and D.A. DiGregorio. Thin dendrites of cerebellar interneurons confer sublinear synaptic integration and a gradient of short-term plasticity. Neuron, 73(6):1159–1172, March 2012.
[2] S. Cash and R. Yuste. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron, 22(2):383–394, February 1999.
[3] Y. Crama and P.L. Hammer. Boolean Functions: Theory, Algorithms, and Applications (Encyclopedia of Mathematics and its Applications). Cambridge University Press, 2011.
[4] S. Gasparini, M. Migliore, and J.C. Magee. On the initiation and propagation of dendritic spikes in CA1 pyramidal neurons. The Journal of Neuroscience, 24(49):11046–11056, December 2004.
[5] M. Hausser and B.W. Mel. Dendrites: bug or feature? Current Opinion in Neurobiology, 13(3):372–383, June 2003.
[6] H. Jia, N.L. Rochefort, X. Chen, and A. Konnerth. Dendritic organization of sensory input to cortical neurons in vivo. Nature, 464(7293):1307–1312, 2010.
[7] C. Koch. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, New York, 1999.
[8] R. Legenstein and W. Maass. Branch-specific plasticity enables self-organization of nonlinear computation in single neurons. Journal of Neuroscience, 31(30):10787–10802, July 2011.
[9] A. Losonczy, J.K. Makara, and J.C. Magee. Compartmentalized dendritic plasticity and input feature storage in neurons. Nature, 452(7186):436–441, March 2008.
[10] W.S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biology, 52(1-2):99–115; discussion 73–97, January 1943.
[11] P.B. Miltersen, J. Radhakrishnan, and I. Wegener. On converting CNF to DNF. Theoretical Computer Science, 347:325–335, November 2005.
[12] P. Poirazi, T. Brannon, and B.W. Mel. Pyramidal neuron as two-layer neural network. Neuron, 37(6):989–999, March 2003.
[13] A. Polsky, B.W. Mel, and J. Schiller. Computational subunits in thin dendrites of pyramidal cells. Nature Neuroscience, 7(6):621–627, June 2004.
[14] M.W.H. Remme, M. Lengyel, and B.S. Gutkin. Democracy-independence trade-off in oscillating dendrites and its implications for grid cells. Neuron, 66(3):429–437, May 2010.
[15] A.L. Roskies. The binding problem. Neuron, 24:7–9, 1999.
[16] K. Vervaeke, A. Lorincz, Z. Nusser, and R.A. Silver. Gap junctions compensate for sublinear dendritic integration in an inhibitory network. Science, 335(6076):1624–1628, March 2012.
[17] I. Wegener. Complexity of Boolean Functions. Wiley–Teubner, 1987.
Active Learning of Model Evidence Using Bayesian Quadrature

Michael A. Osborne, University of Oxford, mosb@robots.ox.ac.uk
David Duvenaud, University of Cambridge, dkd23@cam.ac.uk
Roman Garnett, Carnegie Mellon University, rgarnett@cs.cmu.edu
Carl E. Rasmussen, University of Cambridge, cer54@cam.ac.uk
Stephen J. Roberts, University of Oxford, sjrob@robots.ox.ac.uk
Zoubin Ghahramani, University of Cambridge, zoubin@eng.cam.ac.uk

Abstract

Numerical integration is a key component of many problems in scientific computing, statistical modelling, and machine learning. Bayesian Quadrature is a model-based method for numerical integration which, relative to standard Monte Carlo methods, offers increased sample efficiency and a more robust estimate of the uncertainty in the estimated integral. We propose a novel Bayesian Quadrature approach for numerical integration when the integrand is non-negative, such as the case of computing the marginal likelihood, predictive distribution, or normalising constant of a probabilistic model. Our approach approximately marginalises the quadrature model's hyperparameters in closed form, and introduces an active learning scheme to optimally select function evaluations, as opposed to using Monte Carlo samples. We demonstrate our method on both a number of synthetic benchmarks and a real scientific problem from astronomy.

1 Introduction

The fitting of complex models to big data often requires computationally intractable integrals to be approximated. In particular, machine learning applications often require integrals over probabilities

Z = ⟨ℓ⟩ = ∫ ℓ(x) p(x) dx,    (1)

where ℓ(x) is non-negative. Examples include computing marginal likelihoods, partition functions, predictive distributions at test points, and integrating over (latent) variables or parameters in a model. While the methods we will describe are applicable to all such problems, we will explicitly consider computing model evidences, where ℓ(x) is the unnormalised likelihood of some parameters x_1, . . . , x_D. This is a particular challenge in modelling big data, where evaluating the likelihood over the entire dataset is extremely computationally demanding. There exist several standard randomised methods for computing model evidence, such as annealed importance sampling (AIS) [1], nested sampling [2], and bridge sampling. For a review, see [3]. These methods estimate Z given the value of the integrand on a set of sample points, whose size is limited by the expense of evaluating ℓ(x). It is well known that convergence diagnostics are often unreliable for Monte Carlo estimates of partition functions [4, 5, 6]. Most such algorithms also have parameters which must be set by hand, such as proposal distributions or annealing schedules. An alternative, model-based, approach is Bayesian Quadrature (BQ) [7, 8, 9, 10], which specifies a distribution over likelihood functions, using observations of the likelihood to infer a distribution for Z (see Figure 1).

Figure 1: Model-based integration computes a posterior for the integral Z = ∫ ℓ(x)p(x)dx, conditioned on sampled values of the function ℓ(x). For this plot, we assume a Gaussian process model for ℓ(x) and a broad Gaussian prior p(x). The variously probable integrands permitted under the model will give different possible values for Z, with associated differing probabilities.

This approach offers improved sample efficiency [10], crucial for expensive samples computed on big data. We improve upon this existing work in three ways:

Log-GP: [10] used a GP prior on the likelihood function; this is a poor model in this case, unable to express the non-negativity and high dynamic range of most likelihood functions. [11] introduced an approximate means of exploiting a GP on the logarithm of a function (henceforth, a log-GP), which better captures these properties of likelihood functions.
We apply this method to estimate Z, and extend it to compute Z's posterior variance and expected variance after adding a sample.

Active Sampling: Previous work on BQ has used randomised or a priori fixed sampling schedules. We use active sampling, selecting locations which minimise the expected uncertainty in Z.

Hyperparameter Marginalisation: Uncertainty in the hyperparameters of the model used for quadrature has previously been ignored, leading to overconfidence in the estimate of Z. We introduce a tractable approximate marginalisation of input scale hyperparameters.

From a Bayesian perspective, numerical integration is fundamentally an inference and sequential decision making problem: given a set of function evaluations, what can we infer about the integral, and how do we decide where to next evaluate the function? Monte Carlo methods, including MCMC, provide simple but generally suboptimal and non-adaptive answers: compute a sample mean, and evaluate randomly. Our approach attempts to learn about the integrand as it evaluates the function at different points, and decides based on information gain where to evaluate next. We compare our approach against standard Monte Carlo techniques and previous Bayesian approaches on both simulated and real problems.

2 Bayesian Quadrature

Bayesian quadrature [8, 10] is a means of performing Bayesian inference about the value of a potentially nonanalytic integral, ⟨f⟩ := ∫ f(x) p(x) dx. For clarity, we henceforth assume the domain of integration X = ℝ, although all results generalise to ℝ^n. We assume a Gaussian density p(x) := N(x; ν_x, λ_x), although other convenient forms, or, if necessary, the use of an importance re-weighting trick (∫ q(x) dx = ∫ (q(x)/p(x)) p(x) dx for any q(x)), allow any other integral to be approximated. Quadrature involves evaluating f(x) at a vector of sample points x_s, giving f_s := f(x_s).
Often this evaluation is computationally expensive; the consequent sparsity of samples introduces uncertainty about the function f between them, and hence uncertainty about the integral ⟨f⟩. Previous work on BQ chooses a Gaussian process (GP) [12] prior for f, with mean µ_f and Gaussian covariance function

K(x1, x2) := h² N(x1; x2, w).    (2)

Here the hyperparameter h specifies the output scale, while the hyperparameter w defines a (squared) input scale over x. These scales are typically fitted using type two maximum likelihood (MLII); we will later introduce an approximate means of marginalising them in Section 4. We'll use the following dense notation for the standard GP expressions for the posterior mean m, covariance C, and variance V, respectively: m_{f|s}(x⋆) := m(f⋆|f_s), C_{f|s}(x⋆, x′⋆) := C(f⋆, f′⋆|f_s) and V_{f|s}(x⋆) := V(f⋆|f_s). Note that this notation assumes implicit conditioning on hyperparameters. Where required for disambiguation, we'll make this explicit, as per m_{f|s,w}(x⋆) := m(f⋆|f_s, w) and so forth.

Figure 2: A GP fitted to a peaked log-likelihood function is typically a better model than a GP fitted to the likelihood function (which is non-negative and has high dynamic range). The former GP also usually has the longer input scale, allowing it to generalise better to distant parts of the function.

Variables possessing a multivariate Gaussian distribution are jointly Gaussian distributed with any affine transformations of those variables. Because integration is affine, we can hence use computed samples f_s to perform analytic Gaussian process inference about the value of integrals over f(x), such as ⟨f⟩. The mean estimate for ⟨f⟩ given f_s is

m(⟨f⟩|f_s) = ∬ ⟨f⟩ p(⟨f⟩|f) p(f|f_s) d⟨f⟩ df
           = ∬ ⟨f⟩ δ(⟨f⟩ − ∫ f(x) p(x) dx) N(f; m_{f|s}, C_{f|s}) d⟨f⟩ df
           = ∫ m_{f|s}(x) p(x) dx,    (3)

which is expressible in closed form due to standard Gaussian identities [10].
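For the Gaussian covariance (2) and Gaussian prior p(x) = N(x; ν_x, λ_x), the closed form of (3) is m(⟨f⟩|f_s) = zᵀ K⁻¹ f_s, where z_i = ∫ K(x, x_i) p(x) dx = h² N(x_i; ν_x, w + λ_x) by Gaussian convolution. A sketch (hyperparameters set by hand rather than by MLII, a jitter term added for numerical conditioning, and an arbitrary illustrative integrand):

```python
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

f = lambda x: np.exp(-x ** 2 / 2)           # integrand to be estimated
nu, lam = 0.0, 1.0                          # prior p(x) = N(x; nu, lam)
h2, w = 1.0, 0.5                            # output scale h^2, input scale w

xs = np.linspace(-5.0, 5.0, 25)             # sample locations x_s
fs = f(xs)                                  # observations f_s

# Gram matrix K_ij = h^2 N(x_i; x_j, w), with jitter for conditioning.
K = h2 * normal_pdf(xs[:, None], xs[None, :], w) + 1e-8 * np.eye(len(xs))
# z_i = integral of K(x, x_i) p(x) dx = h^2 N(x_i; nu, w + lam).
z = h2 * normal_pdf(xs, nu, w + lam)

estimate = z @ np.linalg.solve(K, fs)       # m(<f>|f_s) = z^T K^{-1} f_s

# Here the truth is available analytically: this integral equals 1/sqrt(2).
assert abs(estimate - 1 / np.sqrt(2)) < 1e-2
```

With enough samples the GP mean interpolates f closely, so the quadrature estimate approaches the true integral while also providing a posterior variance (not shown) as a convergence diagnostic.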
The corresponding closed-form expression for the posterior variance of ⟨f⟩ lends itself as a natural convergence diagnostic. Similarly, we can compute the posteriors for integrals over the product of multiple, independent functions. For example, we can calculate the posterior mean m(⟨fg⟩|f_s, g_s) for an integral ∫ f(x) g(x) p(x) dx. In the following three sections, we will expand upon the improvements this paper introduces in the use of Bayesian Quadrature for computing model evidences.

3 Modelling Likelihood Functions

We wish to evaluate the evidence (1), an integral over non-negative likelihoods, ℓ(x). Assigning a standard GP prior to ℓ(x) ignores prior information about the range and non-negativity of ℓ(x), leading to pathologies such as potentially negative evidences (as observed in [10]). A much better prior would be a GP prior on log ℓ(x) (see Figure 2). However, the resulting integral is intractable,

m(Z|log ℓ_s) = ∫ [ ∫ exp(log ℓ(x)) p(x) dx ] N(log ℓ; m_{log ℓ|s}, C_{log ℓ|s}) d log ℓ,    (4)

as (4) does not possess the affine property exploited in (3). To progress, we adopt an approximate inference method inspired by [11] to tractably integrate under a log-GP prior.¹ Specifically, we linearise the problematic exponential term around some point log ℓ_0(x), as

exp(log ℓ(x)) ≃ exp(log ℓ_0(x)) + exp(log ℓ_0(x)) (log ℓ(x) − log ℓ_0(x)).    (5)

The integral (4) consists of the product of Z and a GP for log ℓ. The former is ∼ exp(log ℓ); the latter is ∼ exp(−(log ℓ − m)²), effectively permitting only a small range of log ℓ functions. Over this narrow region, it is reasonable to assume that Z does not vary too dramatically, and can be approximated as linear in log ℓ, as is assumed by (5). Using this approximation, and making the definition ∆_{log ℓ|s} := m_{log ℓ|s} − log ℓ_0, we arrive at

m(Z|log ℓ_s) ≃ m(Z|log ℓ_0, log ℓ_s) := ∫ ℓ_0(x) p(x) dx + ∫ ℓ_0(x) ∆_{log ℓ|s}(x) p(x) dx.    (6)

¹In practice, we use the transform log(ℓ(x) + 1), allowing us to assume the transformed quantity has zero mean.
For the sake of simplicity, we omit this detail in the following derivations. 3 m ∆log ℓ|s(x)|∆c ∆c m log ℓ(x)|log ℓ(xs) log m ℓ(x)|ℓ(xs) log ℓ(x) log ℓ(xs) log ℓ(x) x final approx m ℓ(x)|ℓ(xs) ℓ(x) ℓ(xs) ℓ(x) x 0 2 4 0 20 40 Figure 3: Our approximate use of a GP for log ℓ(x) improves upon the use of a GP for ℓ(x) alone. Here the ‘final approx’ is mℓ|s(1 + ∆log ℓ|s), from (5) and (6). We now choose ℓ0 to allow us to resolve the first integral in (6). First, we introduce a secondary GP model for ℓ, the non-log space, and choose ℓ0 := mℓ|s, where mℓ|s is the standard GP conditional mean for ℓgiven observations ℓ(xs). For both GPs2 (over both log and non-log spaces), we take zero prior means and Gaussian covariances of the form (2). It is reasonable to use zero prior means: ℓ(x) is expected to be negligible except at a small number of peaks. If a quantity is dependent upon the GP prior for ℓ, it will be represented as conditional on ℓs; if dependent upon the former GP prior over log ℓ, it will be conditional upon log ℓs. We expect ∆log ℓ|s(x) to be small everywhere relative to the magnitude of log ℓ(x) (see Figure 3). Hence log ℓ0 is close to the peaks of the Gaussian over log ℓ, rendering our linearisation appropriate. For ℓ0, the first integral in (6) becomes tractable. Unfortunately, the second integral in (6) is non-analytic due to the log ℓ0 term within ∆log ℓ|s. As such, we perform another stage of Bayesian quadrature by treating ∆log ℓ|s as an unknown function of x. For tractability, we assume this prior is independent of the prior for log ℓ. We use another GP for ∆log ℓ|s, with zero prior mean and Gaussian covariance (2). A zero prior mean here is reasonable: ∆log ℓ|s is exactly zero at xs, and tends to zero far away from xs, where both mlog ℓ|s and log ℓ0 are given by the compatible prior means for log ℓand ℓ. 
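The quality of the linearisation (5) depends only on ∆ = log ℓ − log ℓ₀ being small; a quick numerical check (with arbitrary illustrative values, not from the paper) confirms the expected quadratic error behaviour:

```python
import numpy as np

y0 = np.log(2.0)                      # log l0(x) at some x (illustrative)
dy = np.linspace(-0.1, 0.1, 201)      # Delta = log l - log l0, assumed small
exact = np.exp(y0 + dy)
linear = np.exp(y0) * (1.0 + dy)      # eq. (5): first-order Taylor expansion
rel_err = np.max(np.abs(exact - linear) / exact)
print(rel_err)                        # about dy^2 / 2 at the interval endpoints
```

For |∆| ≤ 0.1 the worst relative error is about half a percent, consistent with the claim that the approximation is good wherever ∆_{log ℓ|s} is small.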
We must now choose candidate points x_c at which to evaluate the ∆_{log ℓ|s} function (note we do not need to evaluate ℓ(x_c) in order to compute ∆_c := ∆_{log ℓ|s}(x_c)). x_c should firstly include x_s, where we know that ∆_{log ℓ|s} is equal to zero. We select the remainder of x_c at random on the hyper-ellipses (whose axes are defined by the input scales for ℓ) surrounding existing observations; we expect ∆_{log ℓ|s} to be extremised at such x_c. We limit ourselves to a number of candidates that scales linearly with the dimensionality of the integral for all experiments. Given these candidates, we can now marginalise (6) over ∆_{log ℓ|s} to give

    m(Z|log ℓ_s) ≃ m(Z|log ℓ₀, log ℓ_s, ∆_c) = m(Z|ℓ_s) + m(⟨ℓ ∆_{log ℓ|s}⟩ | ℓ_s, ∆_c),  (7)

where both terms are analytic as per Section 2; m(Z|ℓ_s) is of the form (3). The correction factor, the second term in (7), is expected to be small, since ∆_{log ℓ|s} is small. We extend the work of [11] to additionally calculate the variance in the evidence,

    V(Z|log ℓ₀, log ℓ_s, ∆_c) = S(Z|log ℓ₀, log ℓ_s) − m(Z|log ℓ₀, log ℓ_s, ∆_c)²,  (8)

where the second moment is

    S(Z|log ℓ₀, log ℓ_s) := m(⟨ℓ C_{log ℓ|s} ℓ⟩ | log ℓ_s) + m(Z|log ℓ₀, log ℓ_s, ∆_c)²,  (9)

and hence

    V(Z|log ℓ₀, log ℓ_s, ∆_c) = m(⟨ℓ C_{log ℓ|s} ℓ⟩ | log ℓ_s)
                              := ∫∫ m_{ℓ|s}(x) m_{ℓ|s}(x′) C_{log ℓ|s}(x, x′) p(x) p(x′) dx dx′,  (10)

which is expressible in closed form, although space precludes us from doing so. This variance can be employed as a convergence diagnostic; it describes our uncertainty in the model evidence Z.

²Note that separately modelling ℓ and log ℓ is not inconsistent: we use the posterior mean of the GP for ℓ only as a convenient parameterisation for ℓ₀; we do not treat this GP as a full probabilistic model. While this modelling choice may seem excessive, this approach provides significant advantages in the sampling efficiency of the overall algorithm by approximately capturing the non-negativity of our integrand and allowing active sampling.

Figure 4: (a) Integrating over hyperparameters increases the marginal posterior variance (in regions whose mean varies as the input scales change) to more closely match the true posterior marginal variance. (b) An example showing the expected uncertainty in the evidence after observing the likelihood function at that location. p(x) and ℓ(x) are plotted at the top in green and black respectively, the next sample location in red. Note the model discovering a new mode on the right hand side, sampling around it, then moving on to other regions of high uncertainty on the left hand side.

In summary, we have described a linearisation approach to exploiting a GP prior over log-likelihoods; this permitted the calculation of the analytic posterior mean (7) and variance (10) of Z. Note that our approximation will improve with increasing numbers of samples: ∆_{log ℓ|s} will eventually be small everywhere, since it is clamped to zero at each observation. The quality of the linearisation can also be improved by increasing the number of candidate locations, at the cost of slower computation.

4 Marginalising hyperparameters

We now present a novel means of approximately marginalising the hyperparameters of the GP used to model the log-integrand, log ℓ. In previous approaches to Bayesian Quadrature, hyperparameters were estimated using MLII, which approximates the likelihood as a delta function. However, ignoring the uncertainty in the hyperparameters can lead to pathologies. In particular, the reliability of the variance for Z depends crucially upon marginalising over all unknown quantities. The hyperparameters of most interest are the input scales w for the GP over the log-likelihood; these hyperparameters can have a powerful influence on the fit to a function. We use MLII to fit all hyperparameters other than w.
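Returning to the candidate points x_c of Section 3, the selection heuristic can be sketched as follows. This is a hypothetical illustration (the helper `candidates` and all parameters are our own): candidates comprise the samples x_s themselves, where ∆ = 0, plus random points on the hyper-ellipse around each sample whose per-dimension radii are the input scales w.

```python
import numpy as np

rng = np.random.default_rng(0)

def candidates(x_samples, w, n_per_point=2):
    """Samples x_s (where Delta = 0) plus random points on the hyper-ellipse
    with per-dimension radii w around each sample."""
    d = x_samples.shape[1]
    out = [x_samples]
    for x in x_samples:
        u = rng.normal(size=(n_per_point, d))
        u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit directions
        out.append(x + u * w)                           # stretch by input scales
    return np.vstack(out)

x_samples = rng.normal(size=(5, 3))
w = np.array([0.5, 1.0, 2.0])       # illustrative input scales, one per axis
xc = candidates(x_samples, w)
print(xc.shape)
```

Each non-sample candidate satisfies the ellipse equation Σ_d ((x_c − x_s)_d / w_d)² = 1, i.e. it sits at one input scale's distance from its parent observation.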
Marginalisation of w is confounded by the complex dependence of our predictions upon these input scales. We make the following essential approximations:

Flat prior: We assume that the prior for w is broad, so that our posterior is the normalised likelihood.

Laplace approximation: p(log ℓ_s|w) is taken as Gaussian with mean equal to the MLII value ŵ and with diagonal covariance C_w, whose diagonal elements are fitted using the second derivatives of the likelihood. We represent the posterior mean for log ℓ conditioned on ŵ as m̂ := m_{log ℓ|s,ŵ}.

GP mean affine in w: Given the narrow width of the likelihood for w, p(log ℓ|log ℓ_s, w) is approximated as having a GP mean which is affine in w around the MLII values, and a constant covariance: m_{log ℓ|s,w} ≃ m̂ + (∂m̂/∂w)(w − ŵ) and C_{log ℓ|s,w} ≃ C_{log ℓ|s,ŵ}.

The implication of these approximations is that the marginal posterior mean over log ℓ is simply m̃_{log ℓ|s} := m_{log ℓ|s,ŵ}. The marginal posterior variance is C̃_{log ℓ|s} := C_{log ℓ|s,ŵ} + (∂m̂/∂w) C_w (∂m̂/∂w)ᵀ. An example of our approximate posterior is depicted in Figure 4a. Our approximations give the marginal posterior mean for Z:

    m̃(Z|log ℓ₀, log ℓ_s, ∆_c) := m(Z|log ℓ₀, log ℓ_s, ∆_c, ŵ),  (11)

of the form (7). The marginal posterior variance

    Ṽ(Z|log ℓ₀, log ℓ_s, ∆_c) = ∫∫ dx dx′ m_{ℓ|s}(x) m_{ℓ|s}(x′) [ C_{log ℓ|s}(x, x′) + (∂m̂(x)/∂w) C_w (∂m̂(x′)/∂w)ᵀ ] p(x) p(x′)  (12)

is possible, although laborious, to express analytically, as with (10).

5 Active Sampling

One major benefit of model-based integration is that samples can be chosen by any method, in contrast to Monte Carlo methods, which typically must sample from a specific distribution. In this section, we describe a scheme to select samples x_s sequentially, by minimising the expected uncertainty in the evidence that remains after taking each additional sample.³ We take the variance in the evidence as our loss function, and proceed according to Bayesian decision theory.
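The Laplace step above can be illustrated in one dimension with a stand-in likelihood surface (the function `loglik` below is hypothetical, not the GP marginal likelihood): find ŵ by MLII, then take C_w from the curvature of the log-likelihood at ŵ.

```python
import numpy as np

def loglik(w):
    """Stand-in log-likelihood surface, peaked at w = 1.2 (hypothetical)."""
    return -10.0 * (np.log(w) - np.log(1.2)) ** 2

ws = np.linspace(0.5, 3.0, 20001)
w_hat = ws[np.argmax(loglik(ws))]        # MLII point estimate of the scale

h = 1e-4                                 # finite-difference curvature at w_hat
curv = (loglik(w_hat + h) - 2.0 * loglik(w_hat) + loglik(w_hat - h)) / h ** 2
C_w = -1.0 / curv                        # Laplace variance for w
print(w_hat, C_w)                        # analytic C_w here is 1.2^2/20 = 0.072
```

C_w then inflates the posterior variance over log ℓ via the gradient term (∂m̂/∂w) C_w (∂m̂/∂w)ᵀ, as in (12).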
Surprisingly, the posterior variance of a GP model with fixed hyperparameters does not depend on the function values at sampled locations at all; only the location of those samples matters. In traditional Bayesian quadrature, the evidence is an affine transformation of the sampled likelihood values, hence its estimate for the variance in the evidence is also independent of likelihood values. As such, active learning with fixed hyperparameters is pointless, and the optimal sampling design can be found in advance [13]. In Section 3, we took Z as an affine transform of the log-likelihood, which we model with a GP. As the affine transformation (5) itself depends on the function values (via the dependence of log ℓ₀), the conclusions of the previous paragraph do not apply, and active learning is desirable. The uncertainty over the hyperparameters of the GP further motivates active learning: without assuming a priori knowledge of the hyperparameters, we cannot evaluate the GP to precompute a sampling schedule. The approximate marginalisation of hyperparameters permits an approach to active sampling that acknowledges the influence new samples may have on the posterior over hyperparameters.

Active sampling selects a new sample x_a so as to minimise the expected variance in the evidence after adding the sample to the model of ℓ. The objective is therefore to choose the x_a that minimises the expected loss,

    x_a = argmin_{x_a} ⟨ V(Z|log ℓ₀, log ℓ_{s,a}) | log ℓ₀, log ℓ_s ⟩

(note x_a is implicitly conditioned on, as usual for function inputs), where the expected loss is

    ⟨ V(Z|log ℓ₀, log ℓ_{s,a}) | log ℓ₀, log ℓ_s ⟩ = S(Z|log ℓ₀, log ℓ_s)
        − ∫ m(Z|log ℓ₀, log ℓ_{a,s}, ∆_c)² N( log ℓ_a; m̂_a, Ĉ_a + (∂m̂_a/∂w) C_w (∂m̂_a/∂w)ᵀ ) d log ℓ_a,  (13)

and we define m̂_a := m(log ℓ_a|log ℓ_s, ŵ) and Ĉ_a := V(log ℓ_a|log ℓ_s, ŵ). The first term in (13), the second moment, is independent of the selection of x_a and can hence be safely ignored for active sampling (this is true regardless of the model chosen for the likelihood).
The second term, the negative expected squared mean, can be resolved analytically⁴ for any trial x_a (we omit the laborious details). Importantly, we do not have to make a linearisation approximation for this new sample. That is, the GP posterior over log ℓ_a can be fully exploited when performing active sampling. In order to minimise the expected variance, the objective in (13) encourages the maximisation of the expected squared mean of Z. Due to our log-GP model, one means the method can use to do this is to seek points where the log-likelihood is predicted to be large: which we call exploitation. The objective in (13) naturally balances exploitation against exploration: the choice of points where our current variance in the log-likelihood is significant (see Figure 4b). Note that the variance for log ℓ_a is increased by approximate integration over hyperparameters, encouraging exploration.

6 Experiments

We now present empirical evaluation of our algorithm in a variety of different experiments.

Metrics: We judged our methods according to three metrics, all averages over N similar experiments indexed by i. Define Z_i as the ground truth evidence for the i-th experiment, m(Z_i) as its estimated mean and V(Z_i) as its predicted variance. Firstly, we computed the average log error,

    ALE := (1/N) Σ_{i=1}^{N} |log m(Z_i) − log Z_i|.

Next we computed the negative log-density of the truth, assuming experiments are independent,

    −log p(Z) = −Σ_{i=1}^{N} log N(Z_i; m(Z_i), V(Z_i)),

which quantifies the accuracy of our variance estimates.

³We also expect such samples to be useful not just for estimating the evidence, but also for any other related expectations, such as would be required to perform prediction using the model.

⁴Here we use the fact that ∫ exp(c y) N(y; m, σ²) dy = exp(c m + ½ c²σ²). We assume that ∆_{log ℓ|s} does not depend on log ℓ_a, only on its location x_a: we know ∆(x_a) = 0 and assume ∆_{log ℓ|s} elsewhere remains unchanged.
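The identity in footnote 4, ∫ exp(c y) N(y; m, σ²) dy = exp(c m + ½c²σ²), is what makes the expected squared mean in (13) tractable. A quick numerical sanity check, with arbitrary illustrative constants:

```python
import numpy as np

c, m, s2 = 0.7, -1.3, 0.4                 # arbitrary constants for the check
sd = np.sqrt(s2)
ys = np.linspace(m - 12 * sd, m + 12 * sd, 200001)
pdf = np.exp(-0.5 * (ys - m) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
lhs = np.sum(np.exp(c * ys) * pdf) * (ys[1] - ys[0])   # numerical integral
rhs = np.exp(c * m + 0.5 * c ** 2 * s2)                # closed form
print(lhs, rhs)
```

The two values agree to quadrature precision; the integrand decays to zero well inside the integration window, so the simple Riemann sum is extremely accurate here.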
We also computed the calibration C, defined as the fraction of experiments in which the ground truth lay within our 50% confidence interval [m(Z_i) − 0.6745 √V(Z_i), m(Z_i) + 0.6745 √V(Z_i)]. Ideally, C would be 50%: any higher, and a method is under-confident; any lower, and it is over-confident.

Methods: We first compared against simple Monte Carlo (SMC). SMC generates samples x₁, . . . , x_N from p(x), and estimates Z by Ẑ = (1/N) Σ_{n=1}^{N} ℓ(x_n). An estimate of the variance of Ẑ is given by the standard error of ℓ(x). As an alternative Monte Carlo technique, we implemented Annealed Importance Sampling (AIS) using a Metropolis-Hastings sampler. The inverse temperature schedule was linear as in [10], and the proposal width was adjusted to attain approximately a 50% acceptance rate. Note that a single AIS chain provides no ready means of determining the posterior variance for its estimate of Z. Our first model-based method was Bayesian Monte Carlo (BMC), the algorithm used in [10]. Here samples were drawn from the AIS chain above, and a GP was fit to the likelihood samples. For this and other methods, where not otherwise mentioned, GP hyperparameters were selected using MLII. We then tested four novel methods. Firstly, Bayesian Quadrature (BQ), which employed the linearisation approach of Section 3 to modelling the log-transformed likelihood values. The samples supplied to it were drawn from the same AIS chain as used above, and 400 candidate points were permitted. BQ* is the same algorithm as BQ but with hyperparameters approximately marginalised, as per Section 4. Note that this influences only the variance of the estimate; the means for BQ and BQ* are identical. The performance of these methods allows us to quantify to what extent our innovations improve estimation given a fixed set of samples. Next, we tested a novel algorithm, Doubly Bayesian Quadrature (BBQ).
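The SMC baseline is a one-liner. On a toy problem of our own construction with analytic evidence (prior p = N(0, 1), likelihood ℓ(x) = N(x; 0.5, 0.3²), so Z = N(0.5; 0, 1 + 0.3²)), it recovers Z with the quoted standard error:

```python
import numpy as np

rng = np.random.default_rng(1)

def lik(x):
    """Toy likelihood l(x) = N(x; 0.5, 0.3^2)."""
    return np.exp(-0.5 * (x - 0.5) ** 2 / 0.09) / np.sqrt(2 * np.pi * 0.09)

x = rng.normal(size=200_000)              # x_n ~ p(x) = N(0, 1)
l = lik(x)
Z_hat = l.mean()                          # SMC estimate of the evidence
se = l.std(ddof=1) / np.sqrt(len(l))      # standard error of the estimate

# Analytic evidence: Z = N(0.5; 0, 1 + 0.3^2).
Z_true = np.exp(-0.5 * 0.25 / 1.09) / np.sqrt(2 * np.pi * 1.09)
print(Z_hat, Z_true, se)
```

The point of the comparison in the paper is that SMC needs many such samples to drive the standard error down, whereas the model-based methods aim to do well with very few.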
The method is so named for the fact that we use not only Bayesian inference (with a GP over the log-transformed likelihood) to compute the posterior for the evidence, but also Bayesian decision theory to select our samples actively, as described in Section 5. BBQ* is identical, but with hyperparameters approximately marginalised. Both algorithms demonstrate the influence of active sampling on our performance. Problems: We used these methods to evaluate evidences given Gaussian priors and a variety of likelihood functions. As in [10] and [11], we focus on low numbers of samples; we permitted tested methods 150 samples on synthetic integrands, and 300 when using real data. We are motivated by real-world, big-data, problems where evaluating likelihood samples is expensive, making it desirable to determine the techniques for evidence estimation that can operate best when permitted only a small number of samples. Ground truth Z is available for some integrals; for the non-analytic integrals, Z was estimated by a run of SMC with 105 samples. We considered seven synthetic examples. We firstly tested using single Gaussians, in one, four, ten and twenty dimensions. We also tested on mixtures of two Gaussians in one dimension (two examples, alternately widely separated and overlapping) and four dimensions (a single example). We additionally tested methods on a real scientific problem: detecting a damped Lyman-α absorber (DLA) between the Earth and an observed quasar from spectrographic readings of the quasar. DLAs are large objects consisting primarily of neutral hydrogen gas. The statistical properties of DLAs inform us about the distribution of neutral hydrogen in the universe, which is of fundamental cosmological importance. We model the quasar spectra using a GP; the presence of a DLA is represented as an observation fault with known dynamics [14]. 
This model has five hyperparameters to be marginalised, to which we assign priors drawn from the large corpus of data obtained from the Sloan Digital Sky Survey (SDSS) [15]. We tested over four datasets; the expense of evaluating a GP likelihood sample on the large datasets available from the SDSS (140TB of data have been released in total) motivates the small sample sizes considered.

Evaluation: Table 1 shows combined performance on the synthetic integrands listed above. The calibration scores C show that all methods⁵ are systematically overconfident, although our approaches are at least as well calibrated as alternatives. On average, BBQ* provides an estimate of Z which is closer to the truth than the other methods given the same number of samples, and assigns much higher likelihood to the true value of Z.

⁵Because a single AIS chain gives no estimate of uncertainty, it has no likelihood or calibration scores.

Figure 5: (a) The posterior distribution over Z for several methods on a one-dimensional example as the number of samples increases. Shaded regions denote ±2 standard deviations from the mean. The shaded regions for SMC and BMC are off the vertical scale of this figure. (b) The log density of the true evidence for different methods (colours identical to those in (a)), compared to the true Z (in black). The integrand is the same as that in Figure 4b.

Table 1: Combined Synthetic Results

Method | −log p(Z) | ALE   | C
SMC    | > 1000    | 1.101 | 0.286
AIS    | N/A       | 1.695 | N/A
BMC    | > 1000    | 2.695 | 0.143
BQ     | > 1000    | 6.760 | 0.429
BQ*    | > 1000    | 6.760 | 0.429
BBQ    | 13.597    | 0.919 | 0.286
BBQ*   | −11.909   | 0.271 | 0.286

Table 2: Combined Real Results

Method | −log p(Z) | ALE   | C
SMC    | 5.001     | 0.632 | 0.250
AIS    | N/A       | 2.146 | N/A
BMC    | 9.536     | 1.455 | 0.500
BQ     | 37.017    | 0.635 | 0.000
BQ*    | 33.040    | 0.635 | 0.000
BBQ    | 3.734     | 0.400 | 0.000
BBQ*   | 74.242    | 1.732 | 0.250
BBQ* also achieved the lowest error on five, and the best likelihood on six, of the seven problems, including the twenty-dimensional problem for both metrics. Figure 5a shows a case where both SMC and BBQ* are relatively close to the true value; however, BBQ*'s posterior variance is much smaller. Figure 5b demonstrates the typical behaviour of the active sampling of BBQ*, which quickly concentrates the posterior distribution at the true Z. The negative likelihoods of BQ* are for every problem slightly lower than for BQ (−log p(Z) is on average 0.2 lower), indicating that the approximate marginalisation of hyperparameters grants a small improvement in the variance estimate. Table 2 shows results for the various methods on the real integration problems. Here BBQ is clearly the best performer; the additional exploration induced by the hyperparameter marginalisation of BBQ* may have led to local peaks being incompletely exploited. Exploration in a relatively high-dimensional, multi-modal space is inherently risky; nonetheless, BBQ* achieved lower error than BBQ on two of the problems.

7 Conclusions

In this paper, we have made several advances to the BQ method for evidence estimation. These are: approximately imposing a positivity constraint⁶, approximately marginalising hyperparameters, and using active sampling to select the location of function evaluations. Of these contributions, the active learning approach yielded the most significant gains for integral estimation.

Acknowledgements

M.A.O. was funded by the ORCHID project (http://www.orchid.ac.uk/).

⁶Our approximations mean that we cannot guarantee non-negativity, but our approach improves upon alternatives that make no attempt to enforce the non-negativity constraint.

References

[1] R.M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[2] J. Skilling. Nested sampling. Bayesian Inference and Maximum Entropy Methods in Science and Engineering, 735:395–405, 2004.
[3] M.H. Chen, Q.M. Shao, and J.G. Ibrahim. Monte Carlo Methods in Bayesian Computation. Springer, 2000.
[4] R.M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, University of Toronto, 1993.
[5] S.P. Brooks and G.O. Roberts. Convergence assessment techniques for Markov chain Monte Carlo. Statistics and Computing, 8(4):319–335, 1998.
[6] M.K. Cowles, G.O. Roberts, and J.S. Rosenthal. Possible biases induced by MCMC convergence diagnostics. Journal of Statistical Computation and Simulation, 64(1):87, 1999.
[7] P. Diaconis. Bayesian numerical analysis. In S.S. Gupta and J.O. Berger, editors, Statistical Decision Theory and Related Topics IV, volume 1, pages 163–175. Springer-Verlag, New York, 1988.
[8] A. O'Hagan. Bayes-Hermite quadrature. Journal of Statistical Planning and Inference, 29:245–260, 1991.
[9] M. Kennedy. Bayesian quadrature with non-normal approximating functions. Statistics and Computing, 8(4):365–375, 1998.
[10] C.E. Rasmussen and Z. Ghahramani. Bayesian Monte Carlo. In S. Becker and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15. MIT Press, Cambridge, MA, 2003.
[11] M.A. Osborne, R. Garnett, S.J. Roberts, C. Hart, S. Aigrain, and N.P. Gibson. Bayesian quadrature for ratios. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2012), 2012.
[12] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[13] T.P. Minka. Deriving quadrature rules from Gaussian processes. Technical report, Statistics Department, Carnegie Mellon University, 2000.
[14] R. Garnett, M.A. Osborne, S. Reece, A. Rogers, and S.J. Roberts. Sequential Bayesian prediction in the presence of changepoints and faults. The Computer Journal, 53(9):1430, 2010.
[15] Sloan Digital Sky Survey, 2011. http://www.sdss.org/.
Diffusion Decision Making for Adaptive k-Nearest Neighbor Classification

Yung-Kyun Noh, Frank Chongwoo Park
School of Mechanical and Aerospace Engineering, Seoul National University, Seoul 151-744, Korea
{nohyung,fcp}@snu.ac.kr

Daniel D. Lee
Dept. of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
ddlee@seas.upenn.edu

Abstract

This paper sheds light on some fundamental connections between the diffusion decision making model of neuroscience and cognitive psychology and k-nearest neighbor classification. We show that conventional k-nearest neighbor classification can be viewed as a special problem of the diffusion decision model in the asymptotic situation. By applying the optimal strategy associated with the diffusion decision model, an adaptive rule is developed for determining appropriate values of k in k-nearest neighbor classification. Making use of the sequential probability ratio test (SPRT) and Bayesian analysis, we propose five different criteria for adaptively acquiring nearest neighbors. Experiments with both synthetic and real datasets demonstrate the effectiveness of our classification criteria.

1 Introduction

The recent interest in understanding human perception and behavior from the perspective of neuroscience and cognitive psychology has spurred a revival of interest in mathematical decision theory. One of the standard interpretations of this theory is that when there is a continuous input of noisy information, a decision becomes certain only after accumulating sufficient information. It is also typically understood that early decisions save resources. Among the many theoretical explanations for this phenomenon, the diffusion decision model offers a particularly appealing explanation of how information is accumulated and how the time involved in making a decision affects overall accuracy.
The diffusion decision model considers the diffusion of accumulated evidence toward one of the competing choices, and reaches a decision when the evidence meets a pre-defined confidence level. The diffusion decision model successfully explains the distribution of decision times for humans [13, 14, 15]. More recently, this model offers a compelling explanation of the neuronal decision making process in the lateral intraparietal (LIP) area of the brain for perceptual decision making based on visual evidence [2, 11, 16]. The fundamental premise behind this model is that there is a tradeoff between decision times and accuracy, and that both are controlled by the confidence level. As described in Bogacz et al. [3], the sequential probability ratio test (SPRT) is one mathematical model that explains this tradeoff. More recent studies also demonstrate how SPRT can be used to explain evidence emanating from Poisson processes [6, 21].

Now shifting our attention to machine learning, the well-known k-nearest neighbor classification uses a simple majority voting strategy that, at least in the asymptotic case, implicitly involves a similar tradeoff between time and accuracy. According to Cover and Hart [4], the expected accuracy of k-nearest neighbor classification always increases with respect to k when there is sufficient data. At the same time, there is a natural preference to use fewer resources, or equivalently, a fewer number of nearest neighbors. If one seeks to maximize the accuracy for a given number of total nearest neighbors, this naturally leads to the idea of using different ks for different data.

Figure 1: Diffusion decision model. The evidence for decision making is accumulated, and it diffuses over time (to the right). Once the accumulated evidence reaches one of the confidence levels of either choice, z or −z, the model stops collecting any more evidence and makes a decision.
At a certain level, this adaptive idea can be anticipated, but methods described in the existing literature are almost exclusively heuristic-based, without offering a thorough understanding of under what situations heuristics are effective [1, 12, 19]. In this work, we present a set of simple, theoretically sound criteria for adaptive k-nearest neighbor classification. We first show that the conventional majority voting rule is identical to the diffusion decision model when applied to data from two different Poisson processes. Depending on how the accumulating evidence is defined, it is possible to construct five different criteria based on different statistical tests. First, we derive three different criteria using the SPRT statistical test. Second, using standard Bayesian analysis, we derive two probabilities for the case where one density function is greater than the other. Our five criteria are then used as diffusing evidence; once the evidence exceeds a certain confidence level, collection of information can cease and a decision can be made immediately. Despite the complexity of the derivations involved, the resulting five criteria have a particularly simple and appealing form. This feature can be traced to the memoryless property of Poisson processes. In particular, all criteria can be cast as a function of the information of only one nearest neighbor in each class. Using our derivation, we consider this property to be the result of the assumption that we have sufficient data; the criteria are not guaranteed to work in the event that there is insufficient data. We present experimental results involving real and synthetic data to verify this conjecture. The remainder of the paper is organized as follows. In Section 2, a particular form of the diffusion decision model is reviewed for Poisson processes, and two simple tests based on SPRT are derived. 
The relationship between k-nearest neighbor classification and diffusion decision making is explained in Section 3. In Section 4, we describe the adaptive k-nearest neighbor classification procedure in terms of the diffusion decision model, and we introduce five different criteria within this context. Experiments for synthetic and real datasets are presented in Section 5, and the main conclusions are summarized in Section 6.

2 Diffusion Decision Model for Two Poisson Processes

The diffusion decision model is a stochastic model for decision making. The model considers the diffusion of evidence in favor of either of two possible choices by continuously accumulating information. After initial wavering between the two choices, the evidence finally reaches a level of confidence where a decision is made, as in Fig. 1. In mathematical modeling of this diffusion process, Gaussian noise has been predominantly used as a model for zigzagging upon a constant drift toward a choice [3, 13]. However, when we consider two competing Poisson signals, a simpler statistical test can be used instead of estimating the direction of the drift. In the studies of decision making in the lateral intraparietal (LIP) area of the brain [2, 11], two Poisson processes are assumed to have rate parameters of either λ+ or λ−, where we know that λ+ > λ−, but the exact values are unknown. When it should be determined which Poisson process has the larger rate λ+, a sequential probability ratio test (SPRT) can be used to explain a diffusion decision model [6, 21].

The Poisson distribution we use has the form

    p(N|λ, T) = ((λT)^N / N!) exp(−λT),

and we consider two Poisson distributions for N₁ and N₂ at times T₁ and T₂, respectively: p(N₁|λ₁, T₁) and p(N₂|λ₂, T₂). Here, λ₁ and λ₂ are the rate parameters, and one of these parameters is λ+ while the other is λ−.
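The Poisson form above can be checked by simulation: counting the events of a rate-λ process (exponential inter-arrival times) over a window T should reproduce p(N|λ, T). A small self-contained check, with arbitrary λ and T of our choosing:

```python
import math
import random

random.seed(4)

lam, T, trials = 2.0, 1.5, 20000
counts = []
for _ in range(trials):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)   # exponential inter-arrival times
        if t > T:
            break
        n += 1
    counts.append(n)

# Compare the empirical frequency of N = 3 against (lam*T)^3 e^{-lam*T} / 3!.
emp_p3 = sum(c == 3 for c in counts) / trials
true_p3 = (lam * T) ** 3 * math.exp(-lam * T) / math.factorial(3)
print(emp_p3, true_p3)
```

With 20,000 trials the empirical frequency matches the Poisson probability to within Monte Carlo error (a few tenths of a percent here).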
Now, we apply the statistical test of Wald [18] for a confidence α (> 1):

    p(N₁|λ₁ = λ+) p(N₂|λ₂ = λ−) / [ p(N₁|λ₁ = λ−) p(N₂|λ₂ = λ+) ] > α  or  < 1/α  (1)

for the situation where there are N₁ signals at time T₁ for the first Poisson process and N₂ signals at time T₂ for the second process. We determine that λ₁ is λ+ once the left term is greater than α, and that λ₂ is λ+ once it is less than 1/α; otherwise, we must collect more information. According to Wald and Wolfowitz [18], this test is optimal in that it requires the fewest average observations for the same probability of error. By taking the log on both sides, we can rewrite the test as

    log(λ+/λ−) (N₁ − N₂) − (λ+ − λ−)(T₁ − T₂) > log α  or  < −log α.  (2)

Considering two special situations, this equation can be reduced to two different, simple tests. First, we can consider observation of the numbers N₁ and N₂ at a certain time T = T₁ = T₂. Then the test in Eq. (2) reduces to one previously proposed in [21]:

    |N₁ − N₂| > z_N  (3)

where z_N is a constant satisfying z_N = log α / log(λ+/λ−). Another simple test can be made by using the observation times T₁ and T₂ when we find the same number of signals N = N₁ = N₂:

    |T₁ − T₂| > z_T  (4)

where z_T satisfies z_T = log α / (λ+ − λ−). Here, we can consider ∆N = N₁ − N₂ and ∆T = T₁ − T₂ as two different evidences in the diffusion decision model. The evidence diffuses as we collect more information, and we come to make a decision once the evidence reaches the confidence levels: ±z_N for ∆N, and ±z_T for ∆T. In this work, we refer to the first model, using the criterion ∆N, as the ∆N rule, and the second model, using ∆T, as the ∆T rule. Although the ∆N rule has been previously derived and used [21], we propose four more test criteria in this paper, including Eq. (4).
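The ∆N rule of Eq. (3) can be simulated directly: two Bernoulli-thinned Poisson counters race until their count gap exceeds z_N = log α / log(λ+/λ−). This is our own toy sketch, not the paper's experimental code; rates, α, and the time discretisation are illustrative choices.

```python
import math
import random

random.seed(0)

def delta_n_rule(lam1, lam2, lam_plus, lam_minus, alpha, dt=1e-3, max_t=100.0):
    """Decide which process has rate lam_plus via the Delta-N rule, eq. (3)."""
    z_n = math.log(alpha) / math.log(lam_plus / lam_minus)
    n1 = n2 = 0
    t = 0.0
    while t < max_t:
        t += dt
        n1 += random.random() < lam1 * dt   # Bernoulli-thinned Poisson events
        n2 += random.random() < lam2 * dt
        if n1 - n2 > z_n:
            return 1                        # decide lambda_1 = lambda_+
        if n2 - n1 > z_n:
            return 2                        # decide lambda_2 = lambda_+
    return 0                                # undecided within max_t

# Process 1 truly has the larger rate; alpha = 100 targets an error rate
# of roughly 1/alpha per Wald's bound.
decisions = [delta_n_rule(3.0, 1.0, 3.0, 1.0, alpha=100.0) for _ in range(50)]
accuracy = sum(d == 1 for d in decisions) / len(decisions)
print(accuracy)
```

The simulation illustrates the time/accuracy tradeoff controlled by α: raising α raises z_N, so decisions take longer but err less often.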
3 Equivalence of Diffusion Decision Model and k-Nearest Neighbor Classification

A conventional k-nearest neighbor (k-NN) classification takes a majority voting strategy using k nearest neighbors. According to Cover and Hart [4], in the limit of infinite sampling, this simple majority voting rule can produce a fairly low expected error, and furthermore, this error decreases even more as a bigger k is used. This theoretical result is obtained from the relationship between the k-NN classification error and the optimal Bayes error: the expected error with one nearest neighbor is always less than twice the Bayes error, and the error decreases with k asymptotically to the Bayes error [4]. In this situation, we can claim that k-NN classification actually performs the aforementioned diffusion decision making for Poisson processes. The identity comes from two equivalence relationships: first, the logical equivalence between two decision rules; second, the equivalence of the distribution of nearest neighbors to the Poisson distribution in an asymptotic situation.

3.1 Equivalent Strategy of Majority Voting

Here, we first show an equivalence between conventional k-NN classification and a novel comparison algorithm:

Theorem: For two-class data, we consider the N-th nearest datum of each class from the testing point. With an odd number k, the majority voting rule in k-NN classification is equivalent to the rule of picking the class to which the datum with the smaller distance to the testing point belongs, for k = 2N − 1.

Proof: Among the k-NNs of a test point, if there are at least N data having label C, for C ∈ {1, 2}, the test point is classified as class C according to majority voting, because N = (k + 1)/2 > k/2.
If we consider three distances, dk to the k-th nearest neighbor among all data, dN,C to the N-th nearest neighbor in class C, and dN,¬C to the N-th nearest neighbor in class ¬C, then both dN,C ≤ dk and dN,¬C > dk are satisfied in this case. This completes one direction of the proof: the selection of class C by majority voting implies dN,C < dN,¬C. The opposite direction can be proved similarly.

Therefore, instead of counting the number of nearest neighbors, we can classify a test point by taking the N-th nearest neighbor of each of the two classes and comparing their distances. This logical equivalence applies regardless of the underlying density functions.

3.2 Nearest neighbors as Poisson processes

The random generation of data from a particular underlying density function induces a density function over the distance to the nearest neighbors. When the density function is λ(x) for x ∈ R^D and we consider a D-dimensional hypersphere of volume V with the N-th nearest neighbor on its surface, the random variable u = MV, the volume of the sphere V multiplied by the number of data M, asymptotically converges in distribution to the Erlang density function [10]:

p(u|λ) = λ^N u^{N−1} exp(−λu) / Γ(N)    (5)

with a large amount of data. Here, the volume element is a function of the distance d and can be written as V = γ d^D with γ = π^{D/2} / Γ(D/2 + 1), the proportionality constant for the volume of a hypersphere. The Erlang density is the special case of the Gamma density in which the parameter N is an integer. This Erlang density also implies a Poisson distribution with respect to N [20], so we can write the distribution of N as follows:

p(N|λ) = λ^N exp(−λ) / Γ(N + 1).    (6)

This equation shows that the appearance of nearest neighbors can be approximated by Poisson processes. In other words, as a hypersphere grows at a constant rate in volume, the occurrence of new points within it follows a Poisson distribution. This Erlang function in Eq.
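The Poisson approximation in Eq. (6) can be sanity-checked by Monte Carlo: for uniform data on the unit square (density λ = 1, D = 2), the number of points falling in a small disk of volume (area) V should have mean and variance both close to λ·M·V. The sample sizes and tolerances below are arbitrary choices for illustration.

```python
import random

def count_in_ball(rng, M, radius):
    """Number of uniform points in [0,1]^2 falling in a disk of the given
    radius around the centre (0.5, 0.5)."""
    c = 0
    for _ in range(M):
        x, y = rng.random(), rng.random()
        c += (x - 0.5) ** 2 + (y - 0.5) ** 2 <= radius ** 2
    return c

rng = random.Random(0)
M, r = 2000, 0.05
V = 3.141592653589793 * r * r   # gamma * d^D with D = 2 (disk area)
lam = 1.0                       # uniform density on the unit square
counts = [count_in_ball(rng, M, r) for _ in range(400)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
expected = lam * M * V          # Poisson mean and variance, about 15.7
```

For a Poisson distribution the mean and variance coincide, which is what the two moments computed above should exhibit.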
(5) comes from the asymptotic convergence in distribution of the true distribution, the binomial distribution with a finite number N of samples [10]. We note that, with a finite number of samples, the memoryless property of the Poisson process disappears. This breaks the independence assumption between the posterior probabilities of the classes, which Cover and Hart used implicitly when they derived the expected error of k-NN classification [4]. On the other hand, once we have enough data, so that the density functions in Eq. (5) and Eq. (6) describe the data correctly, we can expect the equivalence between diffusion decision making and k-NN classification to hold. In this case, the nearest neighbors are samples of a Poisson process whose rate parameter λ is the probability density at the test point.

Now, we can turn back to the conventional k-NN classification. By Theorem 1 and the arguments in this section, the k-NN classification strategy is the same as the strategy of comparing two Poisson processes using the N-th samples of each class. This connection naturally extends the conventional k-NN classification to an adaptive method that uses different values of k depending on the confidence level in the diffusion decision model.

4 Criteria for Adaptive k-NN Classification

Using the equivalence between the diffusion decision model and k-NN classification, we can extend the conventional majority voting strategy to more sophisticated adaptive strategies. First, the SPRT criteria of the previous section, the ∆N rule and the ∆T rule, can be used. For the ∆N rule in Eq. (3), we can use the numbers of nearest neighbors N1 and N2 within a fixed distance d, and compare |∆N| = |N1 − N2| with a pre-defined confidence level zN. Instead of making an immediate decision, we can collect more nearest neighbors by increasing d until Eq. (3) is satisfied. This is the "∆N rule" for adaptive k-NN classification. In terms of the ∆T rule in Eq.
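A minimal sketch of the ∆N rule described above, assuming the neighbors of each class are given as distance lists: growing the radius d corresponds to walking down the merged, sorted neighbor list until the count gap exceeds zN. This is our own illustration, not the authors' implementation.

```python
def delta_n_rule(dists1, dists2, z_n):
    """Sketch of the Delta-N rule: grow the neighbourhood radius over the
    merged, sorted neighbour list and stop as soon as |N1 - N2| > z_n.
    Returns (predicted class, number of neighbours consumed)."""
    merged = sorted([(d, 1) for d in dists1] + [(d, 2) for d in dists2])
    n1 = n2 = 0
    for used, (_, c) in enumerate(merged, start=1):
        if c == 1:
            n1 += 1
        else:
            n2 += 1
        if abs(n1 - n2) > z_n:
            return (1 if n1 > n2 else 2), used
    # Fallback: no decision reached, majority over everything seen.
    return (1 if n1 >= n2 else 2), len(merged)
```

For example, with class-1 distances [0.1, 0.2, 0.3, 0.9], class-2 distances [0.5, 0.6, 0.7, 0.8], and zN = 2, the rule stops after three neighbors and predicts class 1.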
(4), using the correspondence between time in the original SPRT and the volume of the hypersphere in k-NN classification, we can construct two different criteria for adaptive k-NN classification. First, we consider the two volumes V1 and V2 of the hyperspheres containing the N-th nearest neighbors of each class, and the criterion becomes |V1 − V2| > zV. We refer to this rule as the "∆V rule". An additional criterion for the ∆T rule uses the volume of the (N + 1)-th nearest neighbor hypersphere in a more conservative way. Since a hypersphere slightly smaller than this one still contains N nearest neighbors, we can make the same test harder to stop by replacing the smaller volume in the ∆V rule with the volume of the (N + 1)-th nearest neighbor hypersphere of that class. We refer to this rule as the "Conservative ∆V rule" because it is more cautious in making a decision.

In addition to the SPRT method, from which we derive three different criteria, we can also derive several stopping criteria using a Bayesian approach. If we treat λ as a random variable and apply an appropriate prior, we can obtain a posterior distribution over λ, and hence the probability P(λ1 > λ2) or P(λ1 < λ2). In the following section, we show how these probabilities can be derived and how they can be used as evidence in the diffusion decision making model.

4.1 Bayesian Criteria

For both Eq. (5) and Eq. (6), we consider λ as a random variable and apply a conjugate prior for λ:

p(λ) = b^a λ^{a−1} exp(−λb) / Γ(a)    (7)

with constants a and b, where a is an integer satisfying a ≥ 1 and b is a real number. With the prior in Eq. (7), the posteriors for the two likelihoods in Eq. (5) and Eq. (6) are obtained easily:

p(λ|u) = (u + b)^{N+a} λ^{N+a−1} exp(−λ(u + b)) / Γ(N + a)    (8)

p(λ|N) = (b + 1)^{N+a} λ^{N+a−1} exp(−λ(b + 1)) / Γ(N + a)    (9)

First, we derive P(λ1 > λ2 | u1, u2) for u1 and u2 obtained using the N-th nearest neighbors in class 1 and class 2.
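The conjugate updates in Eqs. (7)-(9) amount to simple parameter arithmetic on a Gamma(shape, rate) prior; a sketch (our own helper names):

```python
def gamma_posterior_from_u(a, b, N, u):
    """Posterior Eq. (8): a Gamma(shape=a, rate=b) prior combined with the
    Erlang likelihood of Eq. (5) gives Gamma(shape=N+a, rate=u+b)."""
    return N + a, u + b

def gamma_posterior_from_counts(a, b, N):
    """Posterior Eq. (9): the Poisson likelihood of Eq. (6) over a unit
    observation window gives Gamma(shape=N+a, rate=b+1)."""
    return N + a, b + 1

def gamma_mean(shape, rate):
    """Posterior mean of the rate lambda."""
    return shape / rate

# The posterior mean concentrates on the true rate as N grows; e.g. with
# N = 50 events in "mass-volume" u = 100 (true lambda about 0.5):
shape, rate = gamma_posterior_from_u(a=1, b=1.0, N=50, u=100.0)
```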
Because the posterior functions of the two classes are independent of each other, the probability of λ1 > λ2 is obtained by a double integration:

P(λ1 > λ2 | u1, u2) = ∫_0^∞ p(λ2|u2) ∫_{λ2}^∞ p(λ1|u1) dλ1 dλ2.    (10)

After some calculation, the integration yields an extremely simple analytic solution:

P(λ1 > λ2 | u1, u2) = Σ_{m=0}^{N+a−1} C(2N + 2a − 1, m) (u1 + b)^m (u2 + b)^{2N+2a−1−m} / (u1 + u2 + 2b)^{2N+2a−1}    (11)

where C(n, m) denotes the binomial coefficient. Here, we consider only the case a = 1, and it is interesting to note that this probability equals the probability of flipping a biased coin 2N + 1 times and observing at most N heads. This probability from the Bayesian approach can be efficiently computed in an incremental fashion, and the nearest neighbor computation can be adaptively stopped once there is enough confidence in the evidence probability.

Figure 2: Decision making process for nearest neighbor classification with (a) 80% and (b) 90% confidence levels. Sample data are generated from the probability densities λ1 = 0.8 and λ2 = 0.2. For successive N-th nearest neighbors of the two classes, the criterion probabilities P(λ1 > λ2|u1, u2) and P(λ1 < λ2|u1, u2) are calculated and compared with the confidence level. Unless the probability exceeds the confidence level, the next (N + 1)-th nearest neighbors are collected and the criterion probabilities are calculated again. The figure displays the diffusion of the criterion probability P(λ1 > λ2|u1, u2) for different realizations, where the evidence stops diffusing once the criterion passes the threshold, at which point enough evidence has accumulated. The bars represent the number of points that are correctly (red, upward bars) and incorrectly (blue, downward bars) classified at each stage of the computation. Using a larger confidence level results in less error, but with a concomitant increase in the number of nearest neighbors used.
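Eq. (11) is a finite sum and can be evaluated exactly with integer binomial coefficients; the following sketch is our own illustration. By construction the symmetric case (u1 = u2) gives 1/2, and the probabilities of the two orderings sum to one.

```python
from math import comb

def p_lambda1_greater(N, a, b, u1, u2):
    """Eq. (11): P(lambda1 > lambda2 | u1, u2) for Gamma(N+a, u_i+b)
    posteriors, evaluated in closed form."""
    n = 2 * N + 2 * a - 1
    s = u1 + u2 + 2 * b
    total = 0.0
    for m in range(N + a):  # m = 0 .. N+a-1
        total += comb(n, m) * (u1 + b) ** m * (u2 + b) ** (n - m) / s ** n
    return total

# Symmetric evidence: both orderings are equally likely.
p = p_lambda1_greater(N=1, a=1, b=1.0, u1=1.0, u2=1.0)
```

A small u1 (the N-th neighbor of class 1 arrives within a small mass-volume, i.e. class 1 is dense) pushes the probability toward 1, matching the coin-flipping interpretation in the text.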
The second probability, P(λ1 > λ2 | N1, N2), for the numbers of nearest neighbors N1 and N2 within a given distance, can be derived similarly. Using the double integration of Eq. (9), we again obtain an analytic result:

P(λ1 > λ2 | N1, N2) = (1 / 2^{N1+N2+2a−1}) Σ_{m=0}^{N1+a−1} C(N1 + N2 + 2a − 1, m).    (12)

Both probabilities, Eq. (11) and Eq. (12), can be used as evidence that diffuses as information arrives, and stopping criteria for the diffusion can be derived from them.

4.2 Adaptive k-NN Classification

Of interest in the diffusion decision model is the relationship between the accuracy and the amount of resources needed to obtain it. In a diffusion decision setting for k-NN classification, we can control the amount of resources through the confidence level. For example, in Fig. 2, we generated data from two uniform density functions, λ1 = 0.8 and λ2 = 0.2, one for each class, and applied the confidence levels 0.8 and 0.9 in Fig. 2(a) and (b), respectively. Using the P(λ1 > λ2|u1, u2) criterion in Eq. (11), we applied adaptive k-NN classification with increasing N for the two classes. Fig. 2 shows the decision results of the classification with incrementing N for 1000 realizations, along with a few example diffusions of the evidence probability in Eq. (11). The average number of nearest neighbors used differs with the confidence level. In Fig. 2(a), where the confidence level is lower than in Fig. 2(b), the evidence reaches the confidence level at an earlier stage, while the decisions in Fig. 2(b) select the first class more often than in Fig. 2(a). Since the optimal Bayes classifier chooses class 1 when λ1 > λ2, the decisions for class 2 can be considered errors. In this sense, with a higher confidence level, decisions are made more correctly while using more resources.
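Eq. (12) likewise reduces to a partial sum of binomial coefficients scaled by a power of two; a direct sketch (our own illustration):

```python
from math import comb

def p_lambda1_greater_counts(N1, N2, a=1):
    """Eq. (12): P(lambda1 > lambda2 | N1, N2) when both posteriors share
    the same rate parameter (equal observation windows)."""
    n = N1 + N2 + 2 * a - 1
    return sum(comb(n, m) for m in range(N1 + a)) / 2 ** n
```

Equal counts give probability 1/2 by symmetry, and more neighbors of class 1 within the same distance push the probability toward 1.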
Therefore, the efficiencies of the strategies can be compared using the accuracies as well as the average number of nearest neighbors used.

Figure 3: Classification accuracy (vertical axis) versus the average number of nearest neighbors used (horizontal axis) for adaptive k-NN classification, comparing PN, DN, PV, CDV, DV, kNN, CNN, Race, minRace, MinMaxRatio, and Jigang. (a) Uniform probability densities for λ1 = 0.8 and λ2 = 0.2 in 100-dimensional space, (b) CIFAR-10, (c) 2 × 10^5 data per class for 5-dimensional Gaussians, and (d) 2 × 10^6 data per class for the same Gaussians as in (c).

5 Experiments

In the experiments, we compare the accuracy of the algorithms against the number of nearest neighbors used, for various confidence levels of the criteria. We used the conventional k-NN classification as well as the proposed adaptive methods. The adaptive methods include the comparison rule of N-th nearest neighbors using three criteria (the ∆V rule (DV), the Conservative ∆V rule (CDV), and the Bayesian probability in Eq. (11) (PV)), as well as the comparison rule of N1 and N2 at a given volume using two criteria (the ∆N rule (DN) and the Bayesian probability in Eq. (12) (PN)). We present the average accuracies of the k-NN classification and the five adaptive rules with respect to the average number of nearest neighbors used. We first show the results on synthetic datasets. In Fig.
3(a), we used two uniform probability densities λ1 = 0.8 and λ2 = 0.2 in 100-dimensional space, and we classified a test point based on its nearest neighbors. In this figure, all algorithms are expected to approach the Bayes performance, following Cover and Hart's analysis, as the average number of nearest neighbors increases. In this experiment, we observe that all five proposed adaptive algorithms approach the Bayes error more quickly than the other methods, at rates similar to one another. We also present results for other adaptive algorithms: CNN [12], Race, minRace, MinMaxRatio, and Jigang [19]. These perform majority voting with increasing k; CNN stops collecting more nearest neighbors once more than a certain number of consecutive neighbors with the same label are found; Race stops when the total number of neighbors of one class exceeds a certain level; minRace stops when every class has at least a predefined number of neighbors; MinMaxRatio considers the ratio between the numbers of nearest neighbors in different classes; lastly, Jigang is a probability criterion slightly different from Eq. (12). Except for Jigang's method, these algorithms perform poorly, while our five algorithms perform equally well even though they use different information, probably because the performance produced by diffusion decision making algorithms is optimal.

Fig. 3(b) shows the experiments on CIFAR-10, a subset of the tiny images dataset [17]. CIFAR-10 consists of 10 classes of 32 × 32 color images. Each class has 6000 images, separated into one testing set and five training sets. With these 10-class data, we first performed Fisher Discriminant Analysis to obtain a 9-dimensional subspace, and then applied all the adaptive algorithms in this subspace. The result is the average accuracy over the five training sets and over all possible pairs of the 10 classes.
Because the underlying density is non-uniform here, the results show a performance decrease when the algorithms use nearest neighbors that are not close. Except for the DV and PV criteria, all of our adaptive algorithms outperform all other methods. The k-NN classification in the original data space achieves a maximal average performance of 0.721 at k = 3, far below the overall accuracies in the figure, because distance information is poor in the high-dimensional space.

Fig. 3(c) and (d) clearly show that our algorithms are not guaranteed to work with insufficient data. We generated data from two different Gaussian functions and classified a datum located at one of the modes in order to determine its label. The number of generated data is 2 × 10^5 per class for (c) and 2 × 10^6 per class for (d), in 5-dimensional space. We present the average result of 5000 realizations; comparing the two figures shows that our adaptive algorithms work as expected when Cover and Hart's asymptotic data condition holds. The Poisson process assumption also holds when this condition is satisfied.

6 Conclusions

In this work, we showed that k-NN classification in the asymptotic limit is equivalent to the diffusion decision model for decision making. Nearest neighbor classification and the diffusion decision model are both very well known models, in machine learning and cognitive science respectively, but the intimate connection between them has not been studied before. Using an analysis of Poisson processes, we showed how classification using incrementally increasing numbers of nearest neighbors can be mapped to a simple threshold-based decision model. In the diffusion decision model, the confidence level plays a key role in determining the tradeoff between speed and accuracy. The notion of confidence can likewise be applied to nearest neighbor classification to adapt the number of nearest neighbors used in making the classification decision.
We presented several different criteria for choosing the appropriate number of nearest neighbors, based on the sequential probability ratio test as well as Bayesian inference. We demonstrated the utility of these methods in modulating speed versus accuracy on both simulated and benchmark datasets. It is straightforward to extend these methods to other datasets and algorithms that utilize neighborhood information. Future work will investigate how our results scale with dataset size and feature representations. Potential benefits of this work include a well-grounded approach to speeding up classification using parallel computation on very large datasets.

Acknowledgments

This research is supported in part by the US Office of Naval Research, Intel Science and Technology Center, AIM Center, KIST-CIR, ROSAEC-ERC, SNU-IAMD, and the BK21.

References

[1] A. F. Atiya. Estimating the posterior probabilities using the k-nearest neighbor rule. Neural Computation, 17(3):731–740, 2005.
[2] J. M. Beck, W. J. Ma, R. Kiani, T. Hanks, A. K. Churchland, J. Roitman, M. N. Shadlen, P. E. Latham, and A. Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6):1142–1152, 2008.
[3] R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J. D. Cohen. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4):700–765, 2006.
[4] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, 1967.
[5] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Applications of Mathematics. Springer, 1996.
[6] M. A. Girshick. Contributions to the theory of sequential analysis I. The Annals of Mathematical Statistics, 17:123–143, 1946.
[7] M. Goldstein. kn-nearest neighbor classification. IEEE Transactions on Information Theory, IT-18(5):627–630, 1972.
[8] C. C. Holmes and N. M. Adams.
A probabilistic nearest neighbour method for statistical pattern recognition. Journal of the Royal Statistical Society, Series B, 64(2):295–306, 2002.
[9] M. D. Lee, I. G. Fuss, and D. J. Navarro. A Bayesian approach to diffusion models of decision-making and response time. In Advances in Neural Information Processing Systems 19, pages 809–816, 2007.
[10] N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. Annals of Statistics, 36:2153–2182, 2008.
[11] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9(11):1432–1438, 2006.
[12] S. Ougiaroglou, A. Nanopoulos, A. N. Papadopoulos, Y. Manolopoulos, and T. Welzer-Druzovec. Adaptive k-nearest-neighbor classification using a dynamic number of nearest neighbors. In Proceedings of the 11th East European Conference on Advances in Databases and Information Systems, pages 66–82, 2007.
[13] R. Ratcliff and G. McKoon. The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation, 20(4):873–922, 2008.
[14] R. Ratcliff and J. N. Rouder. A diffusion model account of masking in two-choice letter identification. Journal of Experimental Psychology: Human Perception and Performance, 26(1):127–140, 2000.
[15] M. N. Shadlen, A. K. Hanks, A. K. Churchland, R. Kiani, and T. Yang. The speed and accuracy of a simple perceptual decision: a mathematical primer. In Bayesian Brain: Probabilistic Approaches to Neural Coding, 2006.
[16] M. N. Shadlen and W. T. Newsome. The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. Journal of Neuroscience, 18:3870–3896, 1998.
[17] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970, 2008.
[18] A. Wald and J. Wolfowitz.
Optimum character of the sequential probability ratio test. Annals of Mathematical Statistics, 19:326–339, 1948.
[19] J. Wang, P. Neskovic, and L. N. Cooper. Neighborhood size selection in the k-nearest-neighbor rule using statistical confidence. Pattern Recognition, 39(3):417–423, 2006.
[20] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer Texts in Statistics. Springer, 2003.
[21] J. Zhang and R. Bogacz. Optimal decision making on the basis of evidence represented in spike trains. Neural Computation, 22(5):1113–1148, 2010.
Bayesian Hierarchical Reinforcement Learning

Feng Cao, Department of EECS, Case Western Reserve University, Cleveland, OH 44106, fxc100@case.edu
Soumya Ray, Department of EECS, Case Western Reserve University, Cleveland, OH 44106, sray@case.edu

Abstract

We describe an approach to incorporating Bayesian priors in the MAXQ framework for hierarchical reinforcement learning (HRL). We define priors on the primitive environment model and on task pseudo-rewards. Since models for composite tasks can be complex, we use a mixed model-based/model-free learning approach to find an optimal hierarchical policy. We show empirically that (i) our approach results in improved convergence over non-Bayesian baselines, (ii) using both task hierarchies and Bayesian priors is better than either alone, (iii) taking advantage of the task hierarchy reduces the computational cost of Bayesian reinforcement learning and (iv) in this framework, task pseudo-rewards can be learned instead of being manually specified, leading to hierarchically optimal rather than recursively optimal policies.

1 Introduction

Reinforcement learning (RL) is a well known framework that formalizes decision making in unknown, uncertain environments. RL agents learn policies that map environment states to available actions while optimizing some measure of long-term utility. While various algorithms have been developed for RL [1] and applied successfully to a variety of tasks [2], the standard RL setting suffers from at least two drawbacks. First, it is difficult to scale standard RL approaches to large state spaces with many factors (the well-known "curse of dimensionality"). Second, vanilla RL approaches do not incorporate prior knowledge about the environment and good policies. Hierarchical reinforcement learning (HRL) [3] attempts to address the scaling problem by simplifying the overall decision making problem in different ways. For example, one approach introduces macro-operators for sequences of primitive actions.
Planning at the level of these operators may result in simpler policies [4]. Another idea is to decompose the task's overall value function, for example by defining task hierarchies [5] or partial programs with choice points [6]. The structure of the decomposition provides several benefits. First, for the "higher level" subtasks, policies are defined by calling "lower level" subtasks (which may themselves be quite complex); as a result, policies for higher level subtasks may be expressed compactly. Second, a task hierarchy or partial program can impose constraints on the space of policies by encoding knowledge about the structure of good policies, and thereby reduce the search space. Third, learning within subtasks allows state abstraction; that is, some state variables can be ignored because they do not affect the policy within that subtask. This also simplifies the learning problem. While HRL attempts to address the scalability issue, it does not take into account probabilistic prior knowledge the agent may have about the task. For example, the agent may have some idea about where high/low utility states may be located and what their utilities may be, or some idea about the approximate shape of the value function or policy. Bayesian reinforcement learning addresses this issue by incorporating priors on models [7], value functions [8, 9] or policies [10]. Specifying good priors leads to many benefits, including initial good policies, directed exploration towards regions of uncertainty, and faster convergence to the optimal policy. In this paper, we propose an approach that incorporates Bayesian priors in hierarchical reinforcement learning. We use the MAXQ framework [5], which decomposes the overall task into subtasks so that value functions of the individual subtasks can be combined to recover the value function of the overall task. We extend this framework by incorporating priors on the primitive environment model and on task pseudo-rewards.
In order to avoid building models for composite tasks (which can be very complex), we adopt a mixed model-based/model-free learning approach. We empirically evaluate our algorithm to understand the effect of the priors in addition to the task hierarchy. Our experiments indicate that: (i) taking advantage of probabilistic prior knowledge can lead to faster convergence, even for HRL, (ii) task hierarchies and Bayesian priors can be complementary sources of information, and using both sources is better than either alone, (iii) taking advantage of the task hierarchy can reduce the computational cost of Bayesian RL, which generally tends to be very high, and (iv) task pseudo-rewards can be learned instead of being manually specified, leading to automatic learning of hierarchically optimal rather than recursively optimal policies. In this way Bayesian RL and HRL are synergistic: Bayesian RL improves convergence of HRL and can learn hierarchy parameters, while HRL can reduce the significant computational cost of Bayesian RL. Our work assumes the probabilistic priors to be given in advance and focuses on learning with them. Other work has addressed the issue of obtaining these priors. For example, one source of prior information is multi-task reinforcement learning [11, 12], where an agent uses the solutions of previous RL tasks to build priors over models or policies for future tasks. We also assume the task hierarchy is given. Other work has explored learning MAXQ hierarchies in different settings [13].

2 Background and Related Work

In the MAXQ framework, each composite subtask Ti defines a semi-Markov decision process with parameters ⟨Si, Xi, Ci, Gi⟩. Si defines the set of "non-terminal" states for Ti, where Ti may be called by its parent. Gi defines a set of "goal" states for Ti. The actions available within Ti are described by the set of "child tasks" Ci. Finally, Xi denotes the set of "relevant state variables" for Ti.
Often, we unify the non-Si states and Gi into a single “termination” predicate, Pi. An (s, a, s′) triple where Pi(s) is false, Pi(s′) is true, a ∈Ci, and the transition probability P(s′|s, a) > 0 is called an exit of the subtask Ti. A pseudo-reward function ˜R(s, a) can be defined over exits to express preferences over the possible exits of a subtask. A hierarchical policy π for the overall task is an assignment of a local policy to each SMDP Ti. A hierarchically optimal policy is a hierarchical policy that has the maximum expected reward. A hierarchical policy is said to be recursively optimal if the local policy for each subtask is optimal given that all its subtask policies are optimal. Given a task graph, model-free [5] or model-based [14] methods can be used to learn value functions for each task-subtask pair. In the model-free method, a policy is produced by maintaining a value and a completion function for each subtask. For a task i, the value V (a, s) denotes the expected value of calling child task a in state s. This is (recursively) estimated as the expected reward obtained while executing a. The completion function C(i, s, a) denotes the expected reward obtained while completing i after having called a in s. The central idea behind MAXQ is that the value of i, V (i, s), can be (recursively) decomposed in terms of V (a, s) and C(i, s, a). The model-based RMAXQ [14] algorithm extends RMAX [15] to MAXQ by learning models for all primitive and composite tasks. Value iteration is used with these models to learn a policy for each subtask. An optimistic exploration strategy is used together with a parameter m that determines how often a transition or reward needs to be seen to be usable in the planning step. In the MAXQ framework, pseudo-rewards must be manually specified to learn hierarchically optimal policies. 
Recent work has attempted to directly learn hierarchically optimal policies for ALisp partial programs, which generalize MAXQ task hierarchies [6, 16], using a model-free approach. Here, along with task value and completion functions, an "external" Q function QE is maintained for each subtask. This function stores the reward obtained after the parent of a subtask exits. A problem here is that this hurts state abstraction, since QE is no longer "local" to a subtask. In later work [16], this is addressed by recursively representing QE in terms of task value and completion functions, linked by conditional probabilities of parent exits given child exits. The conditional probabilities and recursive decomposition are used to compute QE as needed to select actions.

Bayesian reinforcement learning methods incorporate probabilistic prior knowledge on models [7], value functions [8, 9], policies [10] or combinations [17]. One Bayesian model-based RL algorithm proceeds as follows. A distribution over model parameters is maintained. At each step, a model is sampled from this distribution (Thompson sampling [18, 19]). This model is then solved and actions are taken according to the policy obtained. This yields observations that are used to update the parameters of the current distribution to create a posterior distribution over models. This procedure is then iterated to convergence. Variations of this idea have been investigated; for example, some work converts the distribution over models to an empirical distribution over Q-functions, and produces policies by sampling from this distribution instead [7]. Relatively little work exists that attempts to incorporate probabilistic priors into HRL. We have found one preliminary attempt [20] that builds on the RMAX+MAXQ [14] method. This approach adds priors to each subtask model and performs (separate) Bayesian model-based learning for each subtask.¹
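The sampling loop just described (maintain a posterior over models, Thompson-sample a model, act on it, update the posterior) can be illustrated on a toy two-armed Bernoulli bandit. This stand-in example is ours, not from the paper, and uses per-action Beta posteriors as the "model" distribution.

```python
import random

def thompson_bandit(true_probs, steps, seed=0):
    """Minimal Thompson-sampling sketch of the Bayesian model-based loop:
    keep a Beta posterior per action, sample one plausible model, act
    greedily on the sample, then update the posterior with the observed
    reward. Returns how often each action was chosen."""
    rng = random.Random(seed)
    wins = [1] * len(true_probs)     # Beta(1, 1) priors
    losses = [1] * len(true_probs)
    pulls = [0] * len(true_probs)
    for _ in range(steps):
        # Sample a model (one success probability per action) ...
        samples = [rng.betavariate(wins[i], losses[i])
                   for i in range(len(true_probs))]
        a = samples.index(max(samples))   # ... and act greedily on the draw
        r = rng.random() < true_probs[a]  # observe a Bernoulli reward
        wins[a] += r                      # posterior update
        losses[a] += 1 - r
        pulls[a] += 1
    return pulls

pulls = thompson_bandit([0.8, 0.2], steps=500)
```

As the posteriors sharpen, exploration concentrates on the better action, which is the "lifting of exploration to parameter space" that the Bayesian MAXQ approach below exploits for models and pseudo-rewards.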
In our approach, we do not construct models for subtasks, which can be very complex in general. Instead, we only maintain distributions over primitive actions, and use a mixed model-based/model-free learning algorithm that is naturally integrated with the standard MAXQ learning algorithm. Further, we show how to learn pseudo-rewards for MAXQ in the Bayesian framework.

3 Bayesian MAXQ Algorithm

In this section, we describe our approach to incorporating probabilistic priors into MAXQ. We use priors over primitive models and pseudo-rewards. As we explain below, pseudo-rewards are value functions; thus our approach uses priors both on models and value functions. While such an integration may not be needed for standard Bayesian RL, it appears naturally in our setting. We first describe our approach to incorporating priors on environment models alone (assuming pseudo-rewards are fixed). We do this following the Bayesian model-based RL framework. At each step we have a distribution over environment models (initially the prior). The algorithm has two main subroutines: the main BAYESIAN MAXQ routine (Algorithm 1) and an auxiliary RECOMPUTE VALUE routine (Algorithm 2). In this description, the value V and completion C functions are assumed to be global. At the start of each episode, the BAYESIAN MAXQ routine is called with the Root task and the initial state for the current episode. The MAXQ execution protocol is then followed, where each task chooses an action based on its current value function (initially random). When a primitive action is reached and executed, it updates the posterior over model parameters (Line 3) and its own value estimate (which is just the reward function for primitive actions). When a task exits and returns to its parent, the parent subsequently updates its completion function based on the current estimates of the value of the exit state (Lines 14 and 15).
Note that in MAXQ, the value function of a composite task can be (recursively) computed using the completion functions of subtasks and the rewards obtained by executing primitive actions, so we do not need to separately store or update the value functions (except for the primitive actions, where the value function is the reward). Finally, each primitive action maintains a count of how many times it has been executed and each composite task maintains a count of how many child actions have been taken. When k (an algorithm parameter) steps have been executed in a composite task, BAYESIAN MAXQ calls RECOMPUTE VALUE to re-estimate the value and completion functions (the check on k is shown in RECOMPUTE VALUE, Line 2). When activated, this function recursively re-estimates the value/completion functions for all subtasks of the current task. At the level of a primitive action, this simply involves resampling the reward and transition parameters from the current posterior over models. For a composite task, we use the MAXQ-Q algorithm (Table 4 in [5]). We run this algorithm for Sim episodes, starting with the current subtask as the root, with the current pseudo-reward estimates (we explain below how these are obtained). This algorithm recursively updates the completion function of the task graph below the current task. Note that in this step, the subtasks with primitive actions use model-based updates. That is, when a primitive action is "executed" in such tasks, the currently sampled transition function (part of Θ in Line 5) is used to find the next state, and then the associated reward is used to update the completion function. This is similar to Lines 12, 14 and 15 in BAYESIAN MAXQ, except that it uses the sampled model Θ instead of the real environment.

1 While we believe this description is accurate, unfortunately, due to language issues and some missing technical and experimental details in the cited article, we have been unable to replicate this work.

Algorithm 1 BAYESIAN MAXQ
Input: Task i, State s, Update Interval k, Simulation Episodes Sim
Output: Next state s′, steps taken N, cumulative reward CR
1: if i is primitive then
2:   Execute i, observe r, s′
3:   Update current posterior parameters Ψ using (s, i, r, s′)
4:   Update current value estimate: V(i, s) ← (1 − α) · V(i, s) + α · r
5:   Count(i) ← Count(i) + 1
6:   return (s′, 1, r)
7: else {i is composite}
8:   N ← 0, CR ← 0, taskStack ← Stack()
9:   while i is not terminated do
10:    RECOMPUTE VALUE(i, k, Sim)
11:    a ← ϵ-greedy action from V(i, s)
12:    ⟨s′, Na, cr⟩ ← BAYESIAN MAXQ(a, s)
13:    taskStack.push(⟨a, s′, Na, cr⟩)
14:    a∗s′ ← arg max_a′ [ ˜C(i, s′, a′) + V(a′, s′) ]
15:    C(i, s, a) ← (1 − α) · C(i, s, a) + α · γ^Na · [ C(i, s′, a∗s′) + V(a∗s′, s′) ]
16:    ˜C(i, s, a) ← (1 − α) · ˜C(i, s, a) + α · γ^Na · [ ˜R(i, s′) + ˜C(i, s′, a∗s′) + V(a∗s′, s′) ]
17:    s ← s′, CR ← CR + γ^N · cr, N ← N + Na, Count(i) ← Count(i) + 1
18:  end while
19:  UPDATE PSEUDO REWARD(taskStack, ˜R(i, s′))
20:  return (s′, N, CR)
21: end if

Algorithm 2 RECOMPUTE VALUE
Input: Task i, Update Interval k, Simulation Episodes Sim
Output: Recomputed value and completion functions for the task graph below and including i
1: if Count(i) < k then
2:   return
3: end if
4: if i is primitive then
5:   Sample new transition and reward parameters Θ from current posterior Ψ
6: else
7:   for all child tasks a of i do
8:     RECOMPUTE VALUE(a, k, Sim)
9:   end for
10:  for Sim episodes do
11:    s ← random nonterminal state of i
12:    Run MAXQ-Q(i, s, Θ, ˜R)
13:  end for
14: end if
15: Count(i) ← 0

After RECOMPUTE VALUE terminates, a new set of value/completion functions is available for BAYESIAN MAXQ to use to select actions. Next we discuss task pseudo-rewards (PRs). A PR is a value associated with a subtask exit that defines how "good" that exit is for that subtask.
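The model-based "execution" described above, where the sampled model Θ stands in for the real environment during simulated episodes, can be sketched as a single step function. The tabular array layouts and names here are illustrative assumptions, not the paper's:

```python
import numpy as np

def simulated_primitive_step(theta, reward, state, action, rng):
    """One simulated 'execution' of a primitive action inside MAXQ-Q:
    the next state is drawn from the currently sampled transition
    function theta (shape: states x actions x states), and the sampled
    reward model supplies r, instead of querying the real environment."""
    next_state = rng.choice(theta.shape[2], p=theta[state, action])
    return next_state, reward[state, action]

rng = np.random.default_rng(1)
# Toy 2-state, 1-action model: both states move deterministically to state 1.
theta = np.zeros((2, 1, 2))
theta[0, 0, 1] = 1.0
theta[1, 0, 1] = 1.0
reward = np.array([[-1.0], [0.0]])
s_next, r = simulated_primitive_step(theta, reward, 0, 0, rng)
print(s_next, r)  # → 1 -1.0
```

The completion-function updates in the simulation then use this (s, a, r, s′) tuple exactly as Lines 12, 14 and 15 of BAYESIAN MAXQ use a real transition.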
The ideal PR for an exit is the expected reward under the hierarchically optimal policy after exiting the subtask, until the global task (Root) ends; thus the PR is a value function. This PR would enable the subtask to choose the "right" exit in the context of what the rest of the task hierarchy is doing. In standard MAXQ, these have to be set manually. This is problematic because it presupposes (quite detailed) knowledge of the hierarchically optimal policy. Further, setting the wrong PRs can result in non-convergence or highly suboptimal policies. Sometimes this problem is sidestepped simply by setting all PRs to zero, resulting in recursively optimal policies. However, it is easy to construct examples where a recursively optimal policy is arbitrarily worse than the hierarchically optimal policy. For all these reasons, PRs are major "nuisance parameters" in the MAXQ framework. What makes learning PRs tricky is that they are not only value functions, but also function as parameters of MAXQ. That is, setting different PRs essentially results in a new learning problem. For this reason, simply trying to learn PRs in a standard temporal difference (TD) way fails (as we show in our experiments). Fortunately, Bayesian RL allows us to address both these issues. First, we can treat value functions as probabilistic unknown parameters. Second, and more importantly, a key idea in Bayesian RL is the "lifting" of exploration to the space of task parameters. That is, instead of exploration through action selection, Bayesian RL can perform exploration by sampling task parameters.

Algorithm 3 UPDATE PSEUDO REWARD
Input: taskStack, Parent's pseudo-reward ˜Rp
1: tempCR ← ˜Rp, Na′ ← 0, cr′ ← 0
2: while taskStack is not empty do
3:   ⟨a, s, Na, cr⟩ ← taskStack.pop()
4:   tempCR ← γ^Na′ · tempCR + cr′
5:   Update pseudo-reward posterior Φ for ˜R(a, s) using (a, s, tempCR)
6:   Resample ˜R(a, s) from Φ
7:   Na′ ← Na, cr′ ← cr
8: end while
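Algorithm 3 builds each child's PR observation from the discounted reward received after that child's exit, seeded with the parent's current PR estimate, then updates and resamples a per-exit posterior. A minimal sketch, assuming a sliding-window Normal posterior with known observation precision in place of the paper's Gaussian-Gamma prior (all names are illustrative):

```python
from collections import deque
import numpy as np

class WindowedPRPosterior:
    """Sliding-window Normal posterior (known observation precision, for
    brevity) over one exit's pseudo-reward. When a new observation arrives
    the oldest is evicted and the posterior is recomputed from the window."""

    def __init__(self, mu0=0.0, tau0=1.0, obs_tau=1.0, window=20):
        self.mu0, self.tau0, self.obs_tau = mu0, tau0, obs_tau
        self.obs = deque(maxlen=window)   # full deque evicts oldest on append

    def update(self, value):
        self.obs.append(value)

    def mean(self):
        tau_n = self.tau0 + len(self.obs) * self.obs_tau
        return (self.tau0 * self.mu0 + self.obs_tau * sum(self.obs)) / tau_n

    def sample(self, rng):
        tau_n = self.tau0 + len(self.obs) * self.obs_tau
        return rng.normal(self.mean(), 1.0 / np.sqrt(tau_n))

def update_pseudo_reward(task_stack, parent_pr, gamma, posteriors, rng):
    """Mirror of Algorithm 3: pop children most-recent-first, accumulate the
    discounted cumulative reward received after each child exited (seeded
    with the parent's PR), update that exit's posterior, and resample a
    fresh PR estimate for it."""
    temp_cr, prev_n, prev_cr = parent_pr, 0, 0.0
    new_prs = {}
    while task_stack:
        a, s, n_a, cr = task_stack.pop()
        temp_cr = (gamma ** prev_n) * temp_cr + prev_cr
        posteriors[(a, s)].update(temp_cr)
        new_prs[(a, s)] = posteriors[(a, s)].sample(rng)
        prev_n, prev_cr = n_a, cr
    return new_prs
```

The fixed-size deque implements the windowing discussed in the text: appending to a full deque drops the oldest observation automatically, keeping the posterior anchored to recent, less noisy value estimates.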
Thus treating a PR as an unknown Bayesian parameter also leads to exploration over the value of this parameter, until an optimal value is found. In this way, hierarchically optimal policies can be learned from scratch—a major advantage over the standard MAXQ setting. To learn PRs, we again maintain a distribution over all such parameters, Φ, initially a prior. For simplicity, we only focus on tasks with multiple exits, since otherwise a PR has no effect on the policy (though the value function changes). When a composite task executes, we keep track of each child task's execution in a stack. When the parent itself exits, we obtain a new observation of the PR of each child by computing the discounted cumulative reward received after it exited, added to the current estimate of the parent's PR (Algorithm 3). This observation is used to update the current posterior over the child's PR. Since this is a value function estimate, the estimates are noisy early in the learning process. Following prior work [8], we use a window containing the most recent observations. When a new observation arrives, the oldest observation is removed, the new one is added, and a new posterior estimate is computed. After updating the posterior, it is sampled to obtain a new PR estimate for the associated exit. This estimate is used where needed (in Algorithms 1 and 2) until the next posterior update. Combined with the model-based priors above, we hypothesize that this procedure, iterated till convergence, will produce a hierarchically optimal policy.

4 Empirical Evaluation

In this section, we evaluate our approach and test four hypotheses: First, does incorporating model-based priors help speed up the convergence of MAXQ to the optimal policy? Second, does the task hierarchy still matter if very good priors are available for primitive actions? Third, how does Bayesian MAXQ compare to standard (flat) Bayesian RL?
Does Bayesian RL perform better (in terms of computational time) if a task hierarchy is available? Finally, can our approach effectively learn PRs and policies that are hierarchically optimal? We first focus on evaluating the first three hypotheses using domains where a zero PR results in hierarchical optimality. To evaluate these hypotheses, we use two domains: the fickle version of Taxi-World [5] (625 states) and Resource-collection [13] (8265 states).2 In Taxi-World, the agent controls a taxi in a grid-world that has to pick up a passenger from a source location and drop them off at their destination. The state variables consist of the location of the taxi and the source and destination of the passenger. The actions available to the agent consist of navigation actions and actions to pickup and putdown the passenger. The agent gets a reward of +20 upon completing the task, a constant −1 reward for every action and a −10 penalty for an erroneous action. Further, each navigation action has a 15% chance of moving in each direction orthogonal to the intended move.

2 Task hierarchies for all domains are available in the supplementary material.

Figure 1: Performance on Taxi-World (top row) and Resource-collection (bottom). The x-axis shows episodes; the y-axis shows average cumulative reward per episode. The prefix "B-" denotes Bayesian, "Uninformed/Good" denotes the prior and "MB" denotes model-based. Left column: Bayesian methods, right: non-Bayesian methods, with Bayesian MAXQ for reference.
In the Resource-collection domain, the agent collects resources (gold and wood) from a grid-world map. Here the state variables consist of the location of the agent, what the agent is carrying, whether a goldmine or forest is adjacent to its current location and whether a desired gold or wood quota has been met. The actions available to the agent are to move to a specific location, chop gold or harvest wood, and to deposit the item it is carrying (if any). For each navigation action, the agent has a 30% chance of moving to a random location. In our experiments, the map contains two goldmines and two forests, each containing two units of gold and two units of wood, and the gold and wood quota is set to three each. The agent gets a +50 reward when it meets the gold/wood quota, a constant −1 reward for every action and an additional −1 for erroneous actions (such as trying to deposit when it is not carrying anything). For the Bayesian methods, we use Dirichlet priors for the transition function parameters and Normal-Gamma priors for the reward function parameters. We use two priors: an uninformed prior, set to approximate a uniform distribution, and a "good" prior where a previously computed model posterior is used as the "prior." The prior distributions we use are conjugate to the likelihood, so we can compute the posterior distributions in closed form. In general, this is not necessary; more complex priors could be used as long as we can sample from the posterior distribution. The methods we evaluate are: (i) Flat Q, the standard Q-learning algorithm, (ii) MAXQ-0, the standard Q-learning algorithm for MAXQ with no PR, (iii) Bayesian model-based Q-learning with an uninformed prior and (iv) a "good" prior, (v) Bayesian MAXQ (our proposed approach) with an uninformed prior and (vi) a "good" prior, and (vii) RMAXQ [14].
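The Normal-Gamma conjugate update for the Gaussian reward parameters has a standard closed form; a sketch, using the usual textbook hyperparameter names rather than anything from the paper:

```python
import numpy as np

def normal_gamma_update(mu0, kappa0, alpha0, beta0, data):
    """Closed-form posterior update for a Normal-Gamma prior on the
    unknown mean and precision of a Gaussian reward model (standard
    conjugate formulas)."""
    x = np.asarray(data, dtype=float)
    n = x.size
    xbar = x.mean()
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = (beta0
              + 0.5 * np.sum((x - xbar) ** 2)
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    return mu_n, kappa_n, alpha_n, beta_n

# Posterior after observing a single reward of 1.0 under a weak prior:
print(normal_gamma_update(0.0, 1.0, 1.0, 1.0, [1.0]))
# → (0.5, 2.0, 1.5, 1.25)
```

Because the update is closed form, each observed reward costs only arithmetic; this is the property the text appeals to when noting that the chosen priors make posterior computation cheap.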
In our implementation, the Bayesian model-based Q-learning uses the same code as the Bayesian MAXQ algorithm, with a "trivial" hierarchy consisting of the Root task with only the primitive actions as children. For the Bayesian methods, the update frequency k was set to 50 for Taxi-World and 25 for Resource-collection. Sim was set to 200 for Bayesian MAXQ for Taxi-World and 1000 for Bayesian model-based Q, and to 1000 for both for Resource-collection. For RMAXQ, the threshold sample size m was set to 5 following prior work [14]. The value iteration was terminated either after 300 loops or when the successive difference between iterations was less than 0.001. The theoretical version of RMAXQ requires updating and re-solving the model every step. In practice, for the larger problems this is too time-consuming, so we re-solve the models every 10 steps. This is similar to the update frequency k for Bayesian MAXQ. The results are shown in Figure 1 (episodes on x-axis). From these results, comparing the Bayesian versions of MAXQ to standard MAXQ, we observe that for Taxi-World, the Bayesian version converges faster to the optimal policy even with the uninformed prior, while for Resource-collection, the convergence rates are similar. When a good prior is available, convergence is very fast (almost immediate) in both domains. Thus, the availability of model priors can help speed up convergence in many cases for HRL. We further observe that RMAXQ converges more slowly than MAXQ or Bayesian MAXQ, though it is much better than Flat Q. This is different from prior work [14]. This may be because our domains are more stochastic than the Taxi-World on which prior results [14] were obtained. We conjecture that, as the environment becomes more stochastic, errors in primitive model estimates may propagate into subtask models and hurt the performance of this algorithm.
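The value-iteration stopping rule described above (at most 300 sweeps, or a successive difference below 0.001) can be sketched in tabular form; the array layout is an illustrative assumption:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, max_loops=300, tol=1e-3):
    """Tabular value iteration with the stopping rule used for RMAXQ above:
    stop after max_loops sweeps or when the max successive difference drops
    below tol. P has shape (S, A, S); R has shape (S, A)."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_loops):
        Q = R + gamma * P @ V          # (S, A): one-step lookahead
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V

# Toy 2-state, 1-action chain: state 0 -> 1 with reward 1; state 1 absorbs.
P = np.zeros((2, 1, 2)); P[0, 0, 1] = 1.0; P[1, 0, 1] = 1.0
R = np.array([[1.0], [0.0]])
print(value_iteration(P, R, gamma=0.5))  # → [1. 0.]
```

The tolerance check on the max-norm difference is what allows re-solving the model only every 10 steps without paying the full 300-sweep cost each time.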
In their analysis [14], the authors noted that the error in the transition function for a composite task is a function of the total number of terminal states in the subtask. The error is also compounded as we move up the task hierarchy. This could be countered by increasing m, the sample size used to estimate model parameters. This would improve the accuracy of the primitive model, but would further hurt the convergence rate of the algorithm. Next, we compare the Bayesian MAXQ approach to “flat” Bayesian model-based Q learning. We note that in Taxi-World, with uninformed priors, though the “flat” method initially does worse, it soon catches up to standard MAXQ and then to Bayesian MAXQ. This is probably because in this domain, the primitive models are relatively easy to acquire, and the task hierarchy provides no additional leverage. For Resource-collection, however, even with a good prior, “flat” Bayesian model-based Q does not converge. The difference is that in this case, the task hierarchy encodes extra information that cannot be deduced just from the models. In particular, the task hierarchy tells the agent that good policies consist of gold/wood collection moves followed by deposit moves. Since the reward structure in this domain is very sparse, it is difficult to deduce this even if very good models are available. Taken together, these results show that task hierarchies and model priors can be complementary: in general, Bayesian MAXQ outperforms both flat Bayesian RL and MAXQ (in speed of convergence, since here MAXQ can learn the hierarchically optimal policy). Table 1: Time for 500 episodes, Taxi-World. 
Method                                                        Time (s)
Bayesian MaxQ, Uninformed Prior                                    205
Bayesian Model-based Q, Uninformed Prior                          4684
Bayesian MaxQ, Good Prior                                           96
Bayesian Model-based Q, Good Prior                                3089
Bayesian Model-based Q, Good Prior & Comparable Simulations       4006
RMAXQ                                                              229
MAXQ                                                              2.06
Flat Q                                                            1.77

Next, we compare the time taken by the different approaches in our experiments in Taxi-World (Table 1). As expected, the Bayesian RL approaches are significantly slower than the non-Bayesian approaches. Further, among non-Bayesian approaches, the hierarchical approaches (MAXQ and RMAXQ) are slower than the non-hierarchical flat Q. Out of the Bayesian methods, however, the Bayesian MAXQ approaches are significantly faster than the flat Bayesian model-based approaches. This is because for the flat case, during the simulation in RECOMPUTE VALUE, a much larger task needs to be solved, while the Bayesian MAXQ approach is able to take into account the structure of the hierarchy and only simulate subtasks as needed, which ends up being much more efficient. However, we note that we allowed the flat Bayesian model-based approach 1000 episodes of simulation as opposed to 200 for Bayesian MAXQ. Clearly this increases the time taken for the flat cases. But at the same time, this is necessary: the "Comparable Simulations" row (and curve in Figure 1, top left) shows that, if the simulations are reduced to 250 episodes for this approach, the resulting values are no longer reliable and the performance of the Bayesian flat approach drops sharply. Notice that while Flat Q runs faster than MAXQ (because of the additional "bookkeeping" overhead due to the task hierarchy), Bayesian MAXQ runs much faster than Bayesian model-based Q. Thus, taking advantage of the hierarchical task decomposition helps reduce the computational cost of Bayesian RL. Finally, we evaluate how well our approach estimates PRs. Here we use two domains: a Modified-Taxi-World and a Hallway domain [5, 21] (4320 states).
In Modified-Taxi-World, we allow dropoffs at any one of the four locations and do not provide a reward for task termination. Thus the Navigate subtask needs a PR (corresponding to the correct dropoff location) to learn a good policy. The Hallway domain consists of a maze with a large-scale structure of hallways and intersections. The agent has stochastic movement actions. For these experiments, we use uninformed priors on the environment model. The PR Gaussian-Gamma priors are set to prefer each exit from a subtask equally.

Figure 2: Performance on Modified-Taxi-World (top row) and Hallway (bottom). "B-": Bayesian, "PR": Pseudo Reward. Left: Bayesian methods, right: non-Bayesian methods, with Bayesian MAXQ as reference. The x-axis is episodes; the y-axis is average cumulative reward per episode. The bottom right figure has the same legend as the top right.

The baselines we use are: (i) Bayesian MAXQ and MAXQ with fixed zero PR, (ii) Bayesian MAXQ and MAXQ with fixed manually set PR, (iii) flat Q, (iv) ALISPQ [6] and (v) MAXQ with a non-Bayesian PR update. This last method tracks PRs just as our approach does; however, instead of a Bayesian update, it updates the PR using a temporal difference update, treating it as a simple value function. The results are shown in Figure 2 (episodes on x-axis). From these results, we first observe that the methods with zero PR always do worse than those with "proper" PR, indicating that in these cases the recursively optimal policy is not the hierarchically optimal policy.
When a PR is manually set, in both domains, MAXQ converges to better policies. We observe that in each case, the Bayesian MAXQ approach is able to learn a policy that is as good, starting with no pseudo-rewards; further, its convergence rates are often better. Further, we observe that the simple TD update strategy (MAXQ Non-Bayes PR in Figure 2) fails in both cases—in Modified-Taxi-World, it is able to learn a policy that is approximately as good as a recursively optimal policy, but in the Hallway domain, it completely fails to converge, indicating that this strategy cannot generally learn PRs. Finally, we observe that the tripartite Q-decomposition of ALISPQ is also able to correctly learn hierarchically optimal policies; however, it converges slowly compared to Bayesian MAXQ or MAXQ with manual PRs. This is especially visible in the Hallway domain, where there are not many opportunities for state abstraction. We believe this is likely because it is estimating entire Q-functions rather than just the PRs. In a sense, it is doing more work than is needed to capture the hierarchically optimal policy, because an exact Q-function may not be needed to capture the preference for the best exit; rather, a value that assigns it a sufficiently high reward compared to the other exits would suffice. Taken together, these results indicate that incorporating Bayesian priors into MAXQ can successfully learn PRs from scratch and produce hierarchically optimal policies.

5 Conclusion

In this paper, we have proposed an approach to incorporating probabilistic priors on environment models and task pseudo-rewards into HRL by extending the MAXQ framework. Our experiments indicate that several synergies exist between HRL and Bayesian RL, and combining them is fruitful. In future work, we plan to investigate approximate model and value representations, as well as multi-task RL to learn the priors.

References

[1] R.S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction.
MIT Press, 1998.
[2] Leslie Pack Kaelbling, Michael L. Littman, and Andrew W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[3] Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341–379, 2003.
[4] Martin Stolle and Doina Precup. Learning options in reinforcement learning. Volume 2371 of Lecture Notes in Computer Science, pages 212–223. Springer, 2002.
[5] Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000.
[6] D. Andre and S. Russell. State abstraction for programmable reinforcement learning agents. In Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI), 2002.
[7] R. Dearden, N. Friedman, and D. Andre. Model based Bayesian exploration. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann, 1999.
[8] R. Dearden, N. Friedman, and S. Russell. Bayesian Q-learning. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, 1998.
[9] Y. Engel, S. Mannor, and R. Meir. Bayes meets Bellman: the Gaussian process approach to temporal difference learning. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[10] Mohammad Ghavamzadeh and Yaakov Engel. Bayesian policy gradient algorithms. In Advances in Neural Information Processing Systems 19. MIT Press, 2007.
[11] Alessandro Lazaric and Mohammad Ghavamzadeh. Bayesian multi-task reinforcement learning. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[12] Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical Bayesian approach. In Proceedings of the 24th International Conference on Machine Learning, pages 1015–1022, New York, NY, USA, 2007. ACM.
[13] N. Mehta, S.
Ray, P. Tadepalli, and T. Dietterich. Automatic discovery and transfer of MAXQ hierarchies. In Andrew McCallum and Sam Roweis, editors, Proceedings of the 25th International Conference on Machine Learning, pages 648–655. Omnipress, 2008.
[14] Nicholas K. Jong and Peter Stone. Hierarchical model-based reinforcement learning: R-MAX + MAXQ. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[15] Ronen I. Brafman, Moshe Tennenholtz, and Pack Kaelbling. R-MAX: a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 2001.
[16] B. Marthi, S. Russell, and D. Andre. A compact, hierarchically optimal Q-function decomposition. In 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
[17] M. Ghavamzadeh and Y. Engel. Bayesian actor-critic algorithms. In Zoubin Ghahramani, editor, Proceedings of the 24th Annual International Conference on Machine Learning, pages 297–304. Omnipress, 2007.
[18] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933.
[19] M. J. A. Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, 2000.
[20] Zhaohui Dai, Xin Chen, Weihua Cao, and Min Wu. Model-based learning with Bayesian and MAXQ value function decomposition for hierarchical task. In Proceedings of the 8th World Congress on Intelligent Control and Automation, 2010.
[21] Ronald Edward Parr. Hierarchical Control and Learning for Markov Decision Processes. PhD thesis, 1998.
Provable ICA with Unknown Gaussian Noise, with Implications for Gaussian Mixtures and Autoencoders

Sanjeev Arora∗  Rong Ge∗  Ankur Moitra†  Sushant Sachdeva∗

Abstract

We present a new algorithm for Independent Component Analysis (ICA) which has provable performance guarantees. In particular, suppose we are given samples of the form y = Ax + η, where A is an unknown n × n matrix, x is a random variable whose components are independent and have a fourth moment strictly less than that of a standard Gaussian random variable, and η is an n-dimensional Gaussian random variable with unknown covariance Σ. We give an algorithm that provably recovers A and Σ up to an additive ϵ and whose running time and sample complexity are polynomial in n and 1/ϵ. To accomplish this, we introduce a novel "quasi-whitening" step that may be useful in other contexts in which the covariance of Gaussian noise is not known in advance. We also give a general framework for finding all local optima of a function (given an oracle for approximately finding just one); this is a crucial step in our algorithm, one that has been overlooked in previous attempts, and it allows us to control the accumulation of error when we find the columns of A one by one via local search.

1 Introduction

We present an algorithm (with rigorous performance guarantees) for a basic statistical problem. Suppose η is an independent n-dimensional Gaussian random variable with an unknown covariance matrix Σ and A is an unknown n × n matrix. We are given samples of the form y = Ax + η, where x is a random variable whose components are independent and have a fourth moment strictly less than that of a standard Gaussian random variable. The most natural case is when x is chosen uniformly at random from {+1, −1}^n, although our algorithm works even in the more general case above.
Our goal is to reconstruct an additive approximation to the matrix A and the covariance matrix Σ, running in time and using a number of samples that are polynomial in n and 1/ϵ, where ϵ is the target precision (see Theorem 1.1). This problem arises in several research directions within machine learning: Independent Component Analysis (ICA), Deep Learning, Gaussian Mixture Models (GMM), etc. We describe these connections next, along with known results (focusing on algorithms with provable performance guarantees, since that is our goal). Most obviously, the above problem can be seen as an instance of Independent Component Analysis (ICA) with unknown Gaussian noise. ICA has an illustrious history with applications ranging from econometrics, to signal processing, to image segmentation. The goal generally involves finding a linear transformation of the data so that the coordinates are as independent as possible [1, 2, 3]. This is often accomplished by finding directions in which the projection is "non-Gaussian" [4]. Clearly, if the datapoint y is generated as Ax (i.e., with no noise η added), then applying the linear transformation A^{-1} to the data results in samples A^{-1}y whose coordinates are independent. This restricted case was considered by Comon [1] and Frieze, Jerrum and Kannan [5], and their goal was to recover an additive approximation to A efficiently and using a polynomial number of samples. (We will later note a gap in their reasoning, albeit fixable by our methods.

∗ {arora, rongge, sachdeva}@cs.princeton.edu. Department of Computer Science, Princeton University, Princeton NJ 08540. Research supported by the NSF grants CCF-0832797, CCF-1117309 and a Simons Investigator Grant.
† moitra@ias.edu. School of Mathematics, Institute for Advanced Study, Princeton NJ 08540. Research supported in part by NSF grant No. DMS-0835373 and by an NSF Computing and Innovation Fellowship.
See also recent papers by Anandkumar et al. and Hsu and Kakade [6, 7], which do not use local search and avoid this issue.) To the best of our knowledge, there are currently no known algorithms with provable guarantees for the more general case of ICA with Gaussian noise (this is especially true if the covariance matrix is unknown, as in our problem), although many empirical approaches are known (e.g., [8]; the issue of "empirical" vs. "rigorous" is elaborated upon after Theorem 1.1). The second view of our problem is as a concisely described Gaussian Mixture Model. Our data is generated as a mixture of 2^n identical Gaussian components (with an unknown covariance matrix) whose centers are the points {Ax : x ∈ {−1, 1}^n}, and all mixing weights are equal. Notice that this mixture of 2^n Gaussians admits a concise description using O(n^2) parameters. The problem of learning Gaussian mixtures has a long history, and the popular approach in practice is to use the EM algorithm [9], though it has no worst-case guarantees (the method may take a very long time to converge, and worse, may not always converge to the correct solution). An influential paper of Dasgupta [10] initiated the program of designing algorithms with provable guarantees, which was improved in a sequence of papers [11, 12, 13, 14]. But in the current setting, it is unclear how to apply any of the above algorithms (including EM), since the trivial application would keep track of exponentially many parameters – one for each component. Thus, new ideas seem necessary to achieve polynomial running time. The third view of our problem is as a simple form of autoencoding [15]. This is a central notion in Deep Learning, where the goal is to obtain a compact representation of a target distribution using a multilayered architecture, where a complicated function (the target) can be built up by composing layers of a simple function (called the autoencoder [16]).
The main tenet is that there are interesting functions which can be represented concisely using many layers, but would need a very large representation if a "shallow" architecture were used instead. This is most useful for functions that are "highly varying" (i.e., cannot be compactly described by piecewise linear functions or other "simple" local representations). Formally, it is possible to represent, using just (say) n^2 parameters, some distributions with 2^n "varying parts" or "interesting regions." The Restricted Boltzmann Machine (RBM) is an especially popular autoencoder in Deep Learning, though many others have been proposed. However, to the best of our knowledge, there has been no successful attempt to give a rigorous analysis of Deep Learning. Concretely, if the data is indeed generated using the distribution represented by an RBM, then do the popular algorithms for Deep Learning [17] learn the model parameters correctly and in polynomial time? Clearly, if the running time were actually found to be exponential in the number of parameters, then this would erode some of the advantages of the compact representation. How is Deep Learning related to our problem? As noted by Freund and Haussler [18] many years ago, an RBM with real-valued visible units (the version that seems more amenable to theoretical analysis) is precisely a mixture of exponentially many standard Gaussians. It is parametrized by an n × m matrix A and a vector θ ∈ R^n. It encodes a mixture of n-dimensional standard Gaussians centered at the points {Ax : x ∈ {−1, 1}^m}, where the mixing weight of the Gaussian centered at Ax is proportional to exp(∥Ax∥_2^2 + θ · x). This is of course reminiscent of our problem. Formally, our algorithm can be seen as a nonlinear autoencoding scheme analogous to an RBM but with uniform mixing weights. Interestingly, the algorithm that we present here looks nothing like the approaches favored traditionally in Deep Learning, and may provide an interesting new perspective.
1.1 Our results and techniques

We give a provable algorithm for ICA with unknown Gaussian noise. We have not made an attempt to optimize the quoted running time of this algorithm, but we emphasize that this is in fact the first algorithm with provable guarantees for this problem, and moreover we believe that in practice our algorithm will run almost as fast as the usual ICA algorithms, which are its close relatives.

Theorem 1.1 (Main, Informally). There is an algorithm that recovers the unknown A and Σ up to additive error ϵ in each entry in time that is polynomial in n, ∥A∥_2, ∥Σ∥_2, 1/ϵ, 1/λ_min(A), where ∥·∥_2 denotes the operator norm and λ_min(·) denotes the smallest eigenvalue.

The classical approach for ICA (initiated in Comon [1] and Frieze, Jerrum and Kannan [5]) is for the noiseless case in which y = Ax. The first step is whitening, which applies a suitable linear transformation that makes the variance the same in all directions, thus reducing to the case where A is a rotation matrix. Given samples y = Rx where R is a rotation matrix, the rows of R can be found in principle by computing the vectors u that are local minima of E[(u · y)^4]. Subsequently, a number of works (see e.g. [19, 20]) have focused on giving algorithms that are robust to noise. A popular approach is to use the fourth order cumulant (as an alternative to the fourth order moment) as a method for "denoising," or any one of a number of other functionals whose local optima reveal interesting directions. However, theoretical guarantees of these algorithms are not well understood. The above procedures in the noise-free model can almost be made rigorous (i.e., provably polynomial running time and number of samples), except for one subtlety: it is unclear how to use local search to find all optima in polynomial time. In practice, one finds a single local optimum, projects to the subspace orthogonal to it and continues recursively on a lower-dimensional problem.
However, a naive implementation of this idea is unstable, since approximation errors can accumulate badly, and to the best of our knowledge no rigorous analysis has been given prior to our work. (This is not a technicality: in some similar settings the errors are known to blow up exponentially [21].) One of our contributions is a modified local search that avoids this potential instability and finds all local optima in this setting (Section 4.2). Our major new contribution, however, is dealing with noise that is an unknown Gaussian. This is an important generalization, since many methods used in ICA are quite unstable to noise (and a wrong estimate for the covariance could lead to bad results). Here, we no longer need to assume we know even rough estimates for the covariance. Moreover, in the context of Gaussian Mixture Models, this generalization corresponds to learning a mixture of many Gaussians where the covariance of the components is not known in advance. We design new tools for denoising and especially whitening in this setting. Denoising uses the fourth order cumulant instead of the fourth moment used in [5], and whitening involves a novel use of the Hessian of the cumulant. Even then, we cannot reduce to the simple case $y = Rx$ as above, and are left with a more complicated functional form (see “quasi-whitening” in Section 2). Nevertheless, we can reduce to an optimization problem that can be solved via local search, and which remains amenable to a rigorous analysis. The results of the local optimization step can then be used to simplify the complicated functional form and recover $A$ as well as the noise $\Sigma$. We defer many of our proofs to the supplementary material due to space constraints.
In order to avoid cluttered notation, we have focused on the case in which $x$ is chosen uniformly at random from $\{-1, +1\}^n$, although our algorithm and analysis work under the more general conditions that the coordinates of $x$ are (i) independent and (ii) have a fourth moment that is less than three (the fourth moment of a Gaussian random variable). In this case, the functional $P(u)$ (see Lemma 2.2) takes the same form but with weights depending on the exact value of the fourth moment of each coordinate. Since we already carry an unknown diagonal matrix $D$ throughout our analysis, this generalization only changes the entries on the diagonal, and the same algorithm and proof apply.

2 Denoising and quasi-whitening

As mentioned, our approach is based on the fourth order cumulant. The cumulants of a random variable are the coefficients of the Taylor expansion of the logarithm of its characteristic function [22]. Let $\kappa_r(X)$ be the $r$th cumulant of a random variable $X$. We make use of:

Fact 2.1. (i) If $X$ has mean zero, then $\kappa_4(X) = E[X^4] - 3E[X^2]^2$. (ii) If $X$ is Gaussian with mean $\mu$ and variance $\sigma^2$, then $\kappa_1(X) = \mu$, $\kappa_2(X) = \sigma^2$ and $\kappa_r(X) = 0$ for all $r > 2$. (iii) If $X$ and $Y$ are independent, then $\kappa_r(X + Y) = \kappa_r(X) + \kappa_r(Y)$.

The crux of our technique is to look at the following functional, where $y$ is the random variable $Ax + \eta$ whose samples are given to us. Let $u \in \mathbb{R}^n$ be any vector. Then
$$P(u) = -\kappa_4(u^T y).$$
Note that for any $u$ we can compute $P(u)$ reasonably accurately by drawing a sufficient number of samples of $y$ and taking an empirical average. Furthermore, since $x$ and $\eta$ are independent and $\eta$ is Gaussian, the next lemma is immediate. We call it “denoising” since it allows us empirical access to information about $A$ that is uncorrupted by the noise $\eta$.

Lemma 2.2 (Denoising Lemma). $P(u) = 2 \sum_{i=1}^n (u^T A)_i^4$.

The intuition is that $P(u) = -\kappa_4(u^T A x)$, since the fourth cumulant does not depend on the additive Gaussian noise; the lemma then follows from expanding the cumulant coordinate-wise, since each coordinate of $x$ satisfies $\kappa_4(x_i) = 1 - 3 = -2$.
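As a concrete illustration (ours, not from the paper), the empirical estimate of $P(u)$ and Lemma 2.2 can be checked in simulation; the matrix $A$, the noise level, and the sample size below are arbitrary choices for the demo:

```python
import numpy as np

def empirical_P(u, Y):
    """Estimate P(u) = -kappa_4(u^T y); for mean-zero z,
    kappa_4(z) = E[z^4] - 3 E[z^2]^2 (Fact 2.1(i))."""
    z = Y @ u
    return -(np.mean(z ** 4) - 3.0 * np.mean(z ** 2) ** 2)

rng = np.random.default_rng(0)
n, N = 3, 200_000
A = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.8, 0.3],
              [0.0, 0.2, 0.7]])
X = rng.choice([-1.0, 1.0], size=(N, n))     # x uniform on {-1,+1}^n
eta = 0.2 * rng.standard_normal((N, n))      # Gaussian noise (Sigma = 0.04 I here)
Y = X @ A.T + eta                            # samples of y = A x + eta

u = np.array([2.0, -2.0, 1.0]) / 3.0         # an arbitrary unit vector
truth = 2.0 * np.sum((A.T @ u) ** 4)         # Lemma 2.2: 2 * sum_i (u^T A)_i^4
# empirical_P(u, Y) approaches `truth` as N grows, despite the noise eta
```

Note that the Gaussian noise drops out of the estimate only in the limit; for finite $N$ the estimator concentrates around the noiseless value, which is exactly the “denoising” property.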
2.1 Quasi-whitening via the Hessian of $P(u)$

In prior works on ICA, whitening refers to reducing to the case where $y = Rx$ for some rotation matrix $R$. Here we give a technique to reduce to the case where $y = RDx + \eta'$, where $\eta'$ is some other Gaussian noise (still unknown), $R$ is a rotation matrix, and $D$ is a diagonal matrix that depends upon $A$. We call this quasi-whitening. Quasi-whitening suffices for us since local search using the objective function $\kappa_4(u^T y)$ will give us (approximations to) the rows of $RD$, from which we will be able to recover $A$. Quasi-whitening involves computing the Hessian of $P(u)$, which recall is the matrix of all second order partial derivatives of $P(u)$. Throughout this section, we will denote the Hessian operator by $H$. In matrix form, the Hessian of $P(u)$ is
$$\frac{\partial^2}{\partial u_i \partial u_j} P(u) = 24 \sum_{k=1}^n A_{i,k} A_{j,k} (A_k \cdot u)^2; \qquad H(P(u)) = 24 \sum_{k=1}^n (A_k \cdot u)^2 A_k A_k^T = A D_A(u) A^T,$$
where $A_k$ is the $k$th column of the matrix $A$ (we use subscripts to denote the columns of matrices throughout the paper) and $D_A(u)$ is the following diagonal matrix:

Definition 2.3. Let $D_A(u)$ be the diagonal matrix whose $k$th entry is $24 (A_k \cdot u)^2$.

Of course, the exact Hessian of $P(u)$ is unavailable, and we will instead compute an empirical approximation $\hat{P}(u)$ to $P(u)$ (given many samples from the distribution); we will show that the Hessian of $\hat{P}(u)$ is a good approximation to the Hessian of $P(u)$.

Definition 2.4. Given $2N$ samples $y_1, y'_1, y_2, y'_2, \dots, y_N, y'_N$ of the random variable $y$, let
$$\hat{P}(u) = -\frac{1}{N} \sum_{i=1}^N (u^T y_i)^4 + \frac{3}{N} \sum_{i=1}^N (u^T y_i)^2 (u^T y'_i)^2.$$

Our first step is to show that the expectation of the Hessian of $\hat{P}(u)$ is exactly the Hessian of $P(u)$. In fact, since the expectation of $\hat{P}(u)$ is exactly $P(u)$ (and since $\hat{P}(u)$ is an analytic function of the samples and of the vector $u$), we can interchange the Hessian operator and the expectation operator.
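The closed form $H(P(u)) = A D_A(u) A^T$ can be sanity-checked against a finite-difference Hessian of the exact $P(u) = 2\sum_i (u^T A)_i^4$. This is our illustration (not the paper's code), with an arbitrary small $A$:

```python
import numpy as np

def P_exact(u, A):
    # Lemma 2.2: P(u) = 2 * sum_k (A_k . u)^4, with A_k the k-th column of A
    return 2.0 * np.sum((A.T @ u) ** 4)

def hessian_closed_form(u, A):
    # H(P(u)) = 24 * sum_k (A_k . u)^2 A_k A_k^T = A D_A(u) A^T  (Definition 2.3)
    w = 24.0 * (A.T @ u) ** 2        # diagonal of D_A(u)
    return (A * w) @ A.T

def hessian_finite_diff(u, A, h=1e-3):
    # mixed central differences of P_exact, entry by entry
    n = len(u)
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (P_exact(u + h * I[i] + h * I[j], A)
                       - P_exact(u + h * I[i] - h * I[j], A)
                       - P_exact(u - h * I[i] + h * I[j], A)
                       + P_exact(u - h * I[i] - h * I[j], A)) / (4 * h * h)
    return H

A = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.8, 0.3],
              [0.0, 0.2, 0.7]])
u = np.array([2.0, -2.0, 1.0]) / 3.0
# the two Hessians agree up to O(h^2) truncation error
```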
Roughly, one can imagine the expectation operator as an integral over the possible values of the random samples, and as is well known in analysis, one can differentiate under the integral provided that all functions are suitably smooth over the domain of integration.

Claim 2.5. $E_{y,y'}[-(u^T y)^4 + 3(u^T y)^2 (u^T y')^2] = P(u)$.

This claim follows immediately from the definition of $P(u)$ and the fact that $y$ and $y'$ are independent.

Lemma 2.6. $H(P(u)) = E_{y,y'}[H(-(u^T y)^4 + 3(u^T y)^2 (u^T y')^2)]$.

Next, we compute the two terms inside the expectation:

Claim 2.7. $H((u^T y)^4) = 12 (u^T y)^2 \, y y^T$.

Claim 2.8. $H((u^T y)^2 (u^T y')^2) = 2 (u^T y')^2 y y^T + 2 (u^T y)^2 y'(y')^T + 4 (u^T y)(u^T y')\big(y (y')^T + y' y^T\big)$.

Let $\lambda_{\min}(A)$ denote the smallest eigenvalue of $A$. Our analysis also requires bounds on the entries of $D_A(u_0)$:

Claim 2.9. If $u_0$ is chosen uniformly at random, then with high probability for all $i$,
$$\min_{i=1}^n \|A_i\|_2^2 \, n^{-4} \le (D_A(u_0))_{i,i} \le \max_{i=1}^n \|A_i\|_2^2 \, \frac{\log n}{n}.$$

Lemma 2.10. If $u_0$ is chosen uniformly at random and furthermore we are given $2N = \mathrm{poly}(n, 1/\epsilon, 1/\lambda_{\min}(A), \|A\|_2, \|\Sigma\|_2)$ samples of $y$, then with high probability
$$(1-\epsilon)\, A D_A(u_0) A^T \preceq H(\hat{P}(u_0)) \preceq (1+\epsilon)\, A D_A(u_0) A^T.$$

Lemma 2.11. Suppose that $(1-\epsilon)\, A D_A(u_0) A^T \preceq \hat{M} \preceq (1+\epsilon)\, A D_A(u_0) A^T$, and let $\hat{M} = BB^T$. Then there is a rotation matrix $R^*$ such that $\|B^{-1} A D_A(u_0)^{1/2} - R^*\|_F \le \sqrt{n}\,\epsilon$.

The intuition is: if any of the singular values of $B^{-1} A D_A(u_0)^{1/2}$ are outside the range $[1-\epsilon, 1+\epsilon]$, we can find a unit vector $x$ where the quadratic forms $x^T A D_A(u_0) A^T x$ and $x^T \hat{M} x$ are too far apart (which contradicts the condition of the lemma). Hence the singular values of $B^{-1} A D_A(u_0)^{1/2}$ can all be set to one without changing the Frobenius norm of $B^{-1} A D_A(u_0)^{1/2}$ too much, and this yields a rotation matrix.

3 Our algorithm (and notation)

In this section we describe our overall algorithm. It uses as a blackbox the denoising and quasi-whitening already described above, as well as a routine for computing all local maxima of certain “well-behaved” functions, which is described later in Section 4.
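Before the algorithm itself, here is an illustration (ours, with arbitrary simulation constants) of Lemma 2.10: the Hessian of $\hat P(u_0)$, assembled term by term from Claims 2.7 and 2.8, concentrates around $A D_A(u_0) A^T$, and a Cholesky factor then provides a $B$ with $BB^T = H(\hat P(u_0))$:

```python
import numpy as np

def empirical_hessian(u, Y, Yp):
    """Hessian of hat-P(u) = -(1/N) sum (u^T y_i)^4 + (3/N) sum (u^T y_i)^2 (u^T y'_i)^2,
    assembled from Claims 2.7 and 2.8."""
    N = Y.shape[0]
    a, b = Y @ u, Yp @ u
    T1 = -12.0 * (Y * (a ** 2)[:, None]).T @ Y                 # Claim 2.7 term
    T2 = 2.0 * (Y * (b ** 2)[:, None]).T @ Y \
        + 2.0 * (Yp * (a ** 2)[:, None]).T @ Yp                # Claim 2.8, first two terms
    C = (Y * (a * b)[:, None]).T @ Yp
    T2 += 4.0 * (C + C.T)                                      # Claim 2.8, cross terms
    return (T1 + 3.0 * T2) / N

rng = np.random.default_rng(1)
n, N = 3, 200_000
A = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.8, 0.3],
              [0.0, 0.2, 0.7]])

def sample(N):
    X = rng.choice([-1.0, 1.0], size=(N, n))
    return X @ A.T + 0.2 * rng.standard_normal((N, n))

Y, Yp = sample(N), sample(N)
u0 = np.ones(n) / np.sqrt(n)          # a fixed unit vector for the demo
H_hat = empirical_hessian(u0, Y, Yp)
H_true = (A * (24.0 * (A.T @ u0) ** 2)) @ A.T     # A D_A(u0) A^T
B = np.linalg.cholesky((H_hat + H_hat.T) / 2)     # B B^T = H(hat-P(u0)), as in Step 1 below
```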
Notation: placing a hat over a function denotes an empirical approximation obtained from random samples. This approximation introduces error, which we will keep track of.

Step 1: Pick a random $u_0 \in \mathbb{R}^n$ and estimate the Hessian $H(\hat{P}(u_0))$. Compute $B$ such that $H(\hat{P}(u_0)) = BB^T$. Let $D = D_A(u_0)$ be the diagonal matrix defined in Definition 2.3.

Step 2: Take $2N$ samples $y_1, \dots, y_N, y'_1, \dots, y'_N$, and let
$$\hat{P}'(u) = -\frac{1}{N} \sum_{i=1}^N (u^T B^{-1} y_i)^4 + \frac{3}{N} \sum_{i=1}^N (u^T B^{-1} y_i)^2 (u^T B^{-1} y'_i)^2,$$
which is an empirical estimate of $P'(u)$.

Step 3: Use the procedure ALLOPT($\hat{P}'(u), \beta, \delta, \beta', \delta'$) of Section 4 to compute all $n$ local maxima of the function $\hat{P}'(u)$.

Step 4: Let $\hat{R}$ be the matrix whose rows are the $n$ local optima recovered in the previous step. Use procedure RECOVER of Section 5 to find $A$ and $\Sigma$.

Explanation: Step 1 uses the transformation $B^{-1}$ computed in the previous section to quasi-whiten the data. Namely, we consider the sequence of samples $z = B^{-1} y$, which are therefore of the form $R' D^{-1/2} x + \eta'$, where $\eta' = B^{-1}\eta$, $D = D_A(u_0)$, and $R'$ is close to a rotation matrix $R^*$ (by Lemma 2.11). In Step 2 we look at $-\kappa_4(u^T z)$, which effectively denoises the new samples (see Lemma 2.2). Let $P'(u) = -\kappa_4(u^T z) = -\kappa_4(u^T B^{-1} y)$, which is easily seen to be $2 \sum_{i=1}^n (u^T R' D^{-1/2})_i^4$. Step 2 estimates this function, obtaining $\hat{P}'(u)$. Then Step 3 tries to find local optima via local search. Ideally we would have liked access to the functional $P^*(u)$ defined from the true rotation $R^*$, since the procedure for local optima works only for true rotations. But since $R'$ and $R^*$ are close, we can make it work approximately with $\hat{P}'(u)$, and then in Step 4 use these local optima to finally recover $A$.

Theorem 3.1. Suppose we are given samples of the form $y = Ax + \eta$, where $x$ is uniform on $\{+1, -1\}^n$, $A$ is an $n \times n$ matrix, and $\eta$ is an $n$-dimensional Gaussian random variable independent of $x$ with unknown covariance matrix $\Sigma$.
There is an algorithm that with high probability recovers $\hat{A}$ with $\|\hat{A} - A\Pi\,\mathrm{diag}(k_i)\|_F \le \epsilon$, where $\Pi$ is some permutation matrix and each $k_i \in \{+1, -1\}$, and also recovers $\hat{\Sigma}$ with $\|\hat{\Sigma} - \Sigma\|_F \le \epsilon$. Furthermore, the running time and number of samples needed are $\mathrm{poly}(n, 1/\epsilon, \|A\|_2, \|\Sigma\|_2, 1/\lambda_{\min}(A))$.

Note that here we recover $A$ up to a permutation of the columns and sign-flips. In general, this is all we can hope for, since the distribution of $x$ is invariant under these same operations. Also, the dependence of our algorithm on the various norms (of $A$ and $\Sigma$) seems inherent, since our goal is to recover an additive approximation, and as we scale up $A$ and/or $\Sigma$, this goal becomes a stronger relative guarantee on the error.

4 Framework for iteratively finding all local maxima

In this section, we first describe a fairly standard procedure (based upon Newton’s method) for finding a single local maximum of a function $f^*: \mathbb{R}^n \to \mathbb{R}$ among all unit vectors, together with an analysis of its rate of convergence. Such a procedure is a common tool in statistical algorithms, but here we state it rather carefully since we later give a general method to convert any local search algorithm (that meets certain criteria) into one that finds all local maxima (see Section 4.2). Given that we can only ever hope for an additive approximation to a local maximum, one should be concerned about how the error accumulates when our goal is to find all local maxima. In fact, a naive strategy is to project onto the subspace orthogonal to the directions found so far, and continue in this subspace. However, such an approach seems to accumulate errors badly (the additive error of the last local maximum found is exponentially larger than the error of the first). Rather, the crux of our analysis is a novel method for bounding how much the error can accumulate (by refining old estimates).

Algorithm 1. LOCALOPT. Input: $f(u), u_s, \beta, \delta$. Output: vector $v$.
1. Set $u \leftarrow u_s$.
2.
Maximize (via Lagrangian methods)
$$\mathrm{Proj}_{\perp u}(\nabla f(u))^T \xi + \frac{1}{2} \xi^T\, \mathrm{Proj}_{\perp u}(H(f(u)))\, \xi - \frac{1}{2} \frac{\partial}{\partial u} f(u) \cdot \|\xi\|_2^2$$
subject to $\|\xi\|_2 \le \beta$ and $u^T \xi = 0$.
3. Let $\xi$ be the solution, and set $\tilde{u} = \frac{u + \xi}{\|u + \xi\|}$.
4. If $f(\tilde{u}) \ge f(u) + \delta/2$, set $u \leftarrow \tilde{u}$ and repeat Step 2.
5. Else return $u$.

Our strategy is to first find a local maximum in the orthogonal subspace, then run the local optimization algorithm again (in the original $n$-dimensional space) to “refine” the local maximum we have found. The intuition is that since we are already close to a particular local maximum, the local search algorithm cannot jump to some other local maximum (since this would entail going through a valley).

4.1 Finding one local maximum

Throughout this section, we will assume that we are given oracle access to a function $f(u)$ and its gradient and Hessian. The procedure is also given a starting point $u_s$, a search range $\beta$, and a step size $\delta$. For simplicity of notation we define the following projection operators.

Definition 4.1. $\mathrm{Proj}_{\perp u}(v) = v - (u^T v)u$, and $\mathrm{Proj}_{\perp u}(M) = M - (u^T M u)\, u u^T$.

The basic step of the algorithm is a modification of Newton’s method: it finds a local improvement that makes progress so long as the current point $u$ is far from a local maximum. Notice that if we add a small vector to $u$, we do not necessarily preserve the norm of $u$. In order to control how the norm of $u$ changes, during the local optimization step the algorithm projects the gradient $\nabla f$ and Hessian $H(f)$ onto the space perpendicular to $u$. There is also an additional correction term $-\frac{\partial}{\partial u} f(u) \cdot \|\xi\|_2^2 / 2$. This correction term is necessary because the new vector we obtain is $(u + \xi)/\|u + \xi\|_2$, which is close to $u - (\|\xi\|_2^2/2)\, u + \xi + O(\beta^3)$. Step 2 of the algorithm is just maximizing a quadratic function and can be solved exactly using the method of Lagrange multipliers. To increase efficiency it is also acceptable to perform an approximate maximization step by taking $\xi$ to be aligned either with the gradient $\mathrm{Proj}_{\perp u}(\nabla f(u))$ or with the largest eigenvector of $\mathrm{Proj}_{\perp u}(H(f(u)))$.
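As a much-simplified sketch of this idea (our illustration, not Algorithm 1 itself): taking $\xi$ aligned with the projected gradient, as the approximate variant above allows, gives plain projected gradient ascent on the unit sphere. For $f(u) = \sum_i d_i u_i^4$ the local maxima on the sphere are exactly the signed coordinate axes, which the iteration finds. The step size and iteration count are arbitrary choices:

```python
import numpy as np

def local_opt_sphere(grad, u0, step=0.05, iters=500):
    """Ascend f on the unit sphere: move along the gradient projected
    orthogonally to u (cf. Definition 4.1), then renormalize."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(iters):
        g = grad(u)
        xi = g - (u @ g) * u     # Proj_{perp u}(grad f(u))
        u = u + step * xi
        u /= np.linalg.norm(u)
    return u

# f(u) = sum_i d_i u_i^4 on the sphere: its local maxima are the +/- coordinate axes
d = np.array([1.0, 2.0, 3.0])
u = local_opt_sphere(lambda v: 4.0 * d * v ** 3, np.array([0.9, 0.3, 0.3]))
# starting near e_1, the iterates converge to +/- e_1
```

Unlike this sketch, the full LOCALOPT also uses the projected Hessian and the norm-correction term, which is what makes the polynomial-time analysis in Theorem 4.4 go through.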
The algorithm is guaranteed to succeed in polynomial time when the function is Locally Improvable and Locally Approximable:

Definition 4.2 (($\gamma, \beta, \delta$)-Locally Improvable). A function $f(u): \mathbb{R}^n \to \mathbb{R}$ is ($\gamma, \beta, \delta$)-Locally Improvable if, for any $u$ that is at least $\gamma$ far from every local maximum, there is a $u'$ such that $\|u' - u\|_2 \le \beta$ and $f(u') \ge f(u) + \delta$.

Definition 4.3 (($\beta, \delta$)-Locally Approximable). A function $f(u)$ is locally approximable if its third order derivatives exist and, for any $u$ and any direction $v$, the third order derivative of $f$ at the point $u$ in the direction $v$ is bounded by $0.01\,\delta/\beta^3$.

The analysis of the running time of the procedure comes from a local Taylor expansion. When a function is Locally Approximable, it is well approximated by its gradient and Hessian within a $\beta$ neighborhood. The following theorem from [5] shows that the two properties above are enough to guarantee the success of a local search algorithm even when the function is only approximated.

Theorem 4.4 ([5]). If $|f(u) - f^*(u)| \le \delta/8$, the function $f^*(u)$ is ($\gamma, \beta, \delta$)-Locally Improvable, and $f(u)$ is ($\beta, \delta$)-Locally Approximable, then Algorithm 1 will find a vector $v$ that is $\gamma$ close to some local maximum. The running time is at most $O((n^2 + T)\,\max f^*/\delta)$, where $T$ is the time to evaluate the function $f$ and its gradient and Hessian.

4.2 Finding all local maxima

Now we consider how to find all local maxima of a given function $f^*(u)$. The crucial condition that we need is that all local maxima are orthogonal (which is indeed true in our problem, and is morally true when using local search more generally in ICA). Note that this condition implies that there are at most $n$ local maxima.¹ In fact we will assume that there are exactly $n$ local maxima. If we are given an exact oracle for $f^*$ and can compute exact local maxima, then we can find all local maxima

Algorithm 2. ALLOPT. Input: $f(u), \beta, \delta, \beta', \delta'$. Output: $v_1, v_2, \dots, v_n$ with $\|v_i - v^*_i\| \le \gamma$ for all $i$.
1. Let $v_1 = \mathrm{LOCALOPT}(f, e_1, \beta, \delta)$.
2. FOR $i = 2$ TO $n$ DO
3.
Let $g_i$ be the projection of $f$ to the orthogonal subspace of $v_1, v_2, \dots, v_{i-1}$.
4. Let $u' = \mathrm{LOCALOPT}(g_i, e_1, \beta', \delta')$.
5. Let $v_i = \mathrm{LOCALOPT}(f, u', \beta, \delta)$.
6. END FOR
7. Return $v_1, v_2, \dots, v_n$.

easily: find one local maximum, project the function into the orthogonal subspace, and continue to find more local maxima.

Definition 4.5. The projection of a function $f$ to a linear subspace $S$ is the function on that subspace whose values agree with $f$. More explicitly, if $\{v_1, v_2, \dots, v_d\}$ is an orthonormal basis of $S$, the projection of $f$ to $S$ is the function $g: \mathbb{R}^d \to \mathbb{R}$ such that $g(w) = f(\sum_{i=1}^d w_i v_i)$.

The following theorem gives sufficient conditions under which the above algorithm finds all local maxima, making precise the intuition given at the beginning of this section.

Theorem 4.6. Suppose the function $f^*(u): \mathbb{R}^n \to \mathbb{R}$ satisfies the following properties:
1. Orthogonal Local Maxima: The function has $n$ local maxima $v^*_i$, and they are orthogonal to each other.
2. Locally Improvable: $f^*$ is ($\gamma, \beta, \delta$)-Locally Improvable.
3. Improvable Projection: The projection of the function to any subspace spanned by a subset of the local maxima is ($\gamma', \beta', \delta'$)-Locally Improvable, with step size $\delta' \ge 10\delta$.
4. Lipschitz: If $\|u - u'\|_2 \le 3\sqrt{n}\gamma$, then $|f^*(u) - f^*(u')| \le \delta'/20$.
5. Attraction Radius: Let $\mathrm{Rad} \ge 3\sqrt{n}\gamma + \gamma'$. For any local maximum $v^*_i$, let $T$ be $\min f^*(u)$ over $\|u - v^*_i\|_2 \le \mathrm{Rad}$. Then there exists a set $U$ that contains $\{u : \|u - v^*_i\|_2 \le 3\sqrt{n}\gamma + \gamma'\}$ and contains no other local maxima, such that for every $u$ that is not in $U$ but is $\beta$ close to $U$, $f^*(u) < T$.

If we are given a function $f$ such that $|f(u) - f^*(u)| \le \delta/8$ and $f$ is both ($\beta, \delta$)- and ($\beta', \delta'$)-Locally Approximable, then Algorithm 2 can find all local maxima of $f^*$ within distance $\gamma$.

To prove this theorem, we first note that the projection of the function $f$ in Step 3 of the algorithm should be close to the projection of $f^*$ onto the span of the remaining local maxima. This is implied by the Lipschitz condition and is formally shown in the following two lemmas.
First we prove a “coupling” between the orthogonal complements of two close subspaces:

Lemma 4.7. Given $v_1, v_2, \dots, v_k$, each $\gamma$-close respectively to local maxima $v^*_1, v^*_2, \dots, v^*_k$ (this is without loss of generality because we can permute the indices of the local maxima), there is an orthonormal basis $v_{k+1}, v_{k+2}, \dots, v_n$ for the orthogonal complement of $\mathrm{span}\{v_1, v_2, \dots, v_k\}$ such that for any unit vector $w \in \mathbb{R}^{n-k}$, $\sum_{i=1}^{n-k} w_i v_{k+i}$ is $3\sqrt{n}\gamma$ close to $\sum_{i=1}^{n-k} w_i v^*_{k+i}$.

We prove this lemma using a modification of the Gram–Schmidt orthonormalization procedure. Using this lemma we see that the projected function is close to the projection of $f^*$ onto the span of the remaining local maxima:

Lemma 4.8. Let $g^*$ be the projection of $f^*$ onto the space spanned by the remaining local maxima. Then $|g^*(w) - g(w)| \le \delta/8 + \delta'/20 \le \delta'/8$.

5 Local search on the fourth order cumulant

Next, we prove that the fourth order cumulant $P^*(u)$ satisfies the properties above. Then the algorithm given in the previous section will find all of the local maxima, which is the missing step in our main goal: learning a noisy linear transformation $Ax + \eta$ with unknown Gaussian noise.

¹Technically, there are $2n$ local maxima, since for each direction $u$ that is a local maximum, so too is $-u$; but this is an unimportant detail for our purposes.

Algorithm 3. RECOVER. Input: $B, \hat{P}'(u), \hat{R}, \epsilon$. Output: $\hat{A}, \hat{\Sigma}$.
1. Let $\hat{D}_A(u)$ be a diagonal matrix whose $i$th entry is $\big(\frac{1}{2}\hat{P}'(\hat{R}_i)\big)^{-1/2}$.
2. Let $\hat{A} = B \hat{R}\, \hat{D}_A(u)^{-1/2}$.
3. Estimate $C = E[y y^T]$ by taking $O((\|A\|_2 + \|\Sigma\|_2)^4 n^2 \epsilon^{-2})$ samples and letting $\hat{C} = \frac{1}{N}\sum_{i=1}^N y_i y_i^T$.
4. Let $\hat{\Sigma} = \hat{C} - \hat{A}\hat{A}^T$.
5. Return $\hat{A}, \hat{\Sigma}$.

We first use a theorem from [5] to show that the properties for finding one local maximum are satisfied. For notational convenience we set $d_i = 2\, D_A(u_0)^{-2}_{i,i}$ and let $d_{\min}$ and $d_{\max}$ denote the minimum and maximum values (bounds on these and their ratio follow from Claim 2.9). Using this notation, $P^*(u) = \sum_{i=1}^n d_i (u^T R^*_i)^4$.

Theorem 5.1 ([5]).
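Lemma 4.7 needs an orthonormal basis of the orthogonal complement of $\mathrm{span}\{v_1, \dots, v_k\}$. The paper's proof uses a modified Gram–Schmidt procedure; as a quick illustration (our sketch, not the paper's construction), such a basis can also be read off from an SVD:

```python
import numpy as np

def orth_complement_basis(V):
    """Rows of V are the vectors v_1..v_k found so far (k < n).
    Returns, as rows, an orthonormal basis of the orthogonal complement of their span:
    the right singular vectors with (numerically) zero singular value."""
    V = np.atleast_2d(V)
    _, s, Vt = np.linalg.svd(V, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:]

v1 = np.array([[1.0, 1.0, 0.0]]) / np.sqrt(2)
W = orth_complement_basis(v1)   # two orthonormal vectors, each orthogonal to v1
```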
When $\beta < d_{\min}/(10\, d_{\max} n^2)$, the function $P^*(u)$ is $(3\sqrt{n}\beta,\ \beta,\ P^*(u)\beta^2/100)$-Locally Improvable and $(\beta,\ d_{\min}\beta^2/100n)$-Locally Approximable. Moreover, the local maxima of the function are exactly $\{\pm R^*_i\}$.

We then observe that, given enough samples, the empirical mean $\hat{P}'(u)$ is close to $P^*(u)$. For concentration we require that every degree-four term $z_i z_j z_k z_l$ have variance at most $Z$.

Claim 5.2. $Z = O(d_{\min}^2 \lambda_{\min}(A)^8 \|\Sigma\|_2^4 + d_{\min}^2)$.

Lemma 5.3. Given $2N$ samples $y_1, \dots, y_N, y'_1, \dots, y'_N$, suppose the columns of $R' = B^{-1} A D_A(u_0)^{1/2}$ are $\epsilon$ close to the corresponding columns of $R^*$. Then with high probability the function $\hat{P}'(u)$ is $O(d_{\max} n^{1/2}\epsilon + n^2 (N/(Z \log n))^{-1/2})$ close to the true function $P^*(u)$.

The other properties required by Theorem 4.6 are also satisfied:

Lemma 5.4. For any $\|u - u'\|_2 \le r$, $|P^*(u) - P^*(u')| \le 5\, d_{\max} n^{1/2} r$. Every local maximum of $P^*$ has attraction radius $\mathrm{Rad} \ge d_{\min}/(100\, d_{\max})$.

Applying Theorem 4.6, we obtain the following lemma (the parameters are chosen so that all required properties are satisfied):

Lemma 5.5. Let $\beta' = \Theta((d_{\min}/d_{\max})^2)$ and $\beta = \min\{\gamma n^{-1/2},\ \Omega((d_{\min}/d_{\max})^4 n^{-3.5})\}$. Then the procedure ALLOPT($f,\ \beta,\ d_{\min}\beta^2/100n,\ \beta',\ d_{\min}\beta'^2/100n$) finds vectors $v_1, v_2, \dots, v_n$ such that there is a permutation matrix $\Pi$ and $k_i \in \{\pm 1\}$ with, for all $i$, $\|v_i - (R^*\Pi\,\mathrm{Diag}(k_i))_i\|_2 \le \gamma$.

After obtaining $\hat{R} = [v_1, v_2, \dots, v_n]$ we can use Algorithm 3 to find $A$ and $\Sigma$:

Theorem 5.6. Given a matrix $\hat{R}$ such that there is a permutation matrix $\Pi$ and $k_i \in \{\pm 1\}$ with $\|\hat{R}_i - k_i (R^*\Pi)_i\|_2 \le \gamma$ for all $i$, Algorithm 3 returns a matrix $\hat{A}$ such that $\|\hat{A} - A\Pi\,\mathrm{Diag}(k_i)\|_F \le O(\gamma \|A\|_2^2 n^{3/2}/\lambda_{\min}(A))$. If $\gamma \le O(\epsilon\,\lambda_{\min}(A)/(\|A\|_2^2 n^{3/2})) \cdot \min\{1/\|A\|_2, 1\}$, we also have $\|\hat{\Sigma} - \Sigma\|_F \le \epsilon$.

Recall that the diagonal matrix $D_A(u)$ is unknown (since it depends on $A$), but if we are given $R^*$ (or an approximation to it), then since $P^*(u) = \sum_{i=1}^n d_i (u^T R^*_i)^4$, we can recover the matrix $D_A(u)$ approximately by computing $P^*(R^*_i)$. Then given $D_A(u)$, we can recover $A$ and $\Sigma$, and this completes the analysis of our algorithm.
Conclusions

ICA is a vast field with many successful techniques. Most rely on heuristic nonlinear optimization. An exciting question is: can we give a rigorous analysis of those techniques as well, just as we did for local search on cumulants? A rigorous analysis of deep learning (say, an algorithm that provably learns the parameters of an RBM) is another problem that is wide open, and a plausible special case involves subtle variations on the problem we considered here.

References
[1] P. Comon. Independent component analysis: a new concept? Signal Processing, pp. 287–314, 1994.
[2] A. Hyvarinen, J. Karhunen, E. Oja. Independent Component Analysis. Wiley: New York, 2001.
[3] A. Hyvarinen, E. Oja. Independent component analysis: algorithms and applications. Neural Networks, pp. 411–430, 2000.
[4] P. J. Huber. Projection pursuit. Annals of Statistics, pp. 435–475, 1985.
[5] A. Frieze, M. Jerrum, R. Kannan. Learning linear transformations. FOCS, pp. 359–368, 1996.
[6] A. Anandkumar, D. Foster, D. Hsu, S. Kakade, Y. Liu. Two SVDs suffice: spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. arXiv:abs/1203.0697, 2012.
[7] D. Hsu, S. Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. arXiv:abs/1206.5766, 2012.
[8] L. De Lathauwer, J. Castaing, J.-F. Cardoso. Fourth-order cumulant-based blind identification of underdetermined mixtures. IEEE Transactions on Signal Processing, 55(6):2965–2973, June 2007.
[9] A. P. Dempster, N. M. Laird, D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, pp. 1–38, 1977.
[10] S. Dasgupta. Learning mixtures of Gaussians. FOCS, pp. 634–644, 1999.
[11] S. Arora and R. Kannan. Learning mixtures of separated nonspherical Gaussians. Annals of Applied Probability, pp. 69–92, 2005.
[12] M. Belkin and K. Sinha. Polynomial learning of distribution families. FOCS, pp. 103–112, 2010.
[13] A. T. Kalai, A. Moitra, and G. Valiant. Efficiently learning mixtures of two Gaussians. STOC, pp. 553–562, 2010.
[14] A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of Gaussians. FOCS, pp. 93–102, 2010.
[15] G. Hinton, R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, pp. 504–507, 2006.
[16] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, pp. 1–127, 2009.
[17] G. E. Hinton. A practical guide to training restricted Boltzmann machines, version 1. UTML TR 2010-003, Department of Computer Science, University of Toronto, August 2010.
[18] Y. Freund, D. Haussler. Unsupervised learning of distributions on binary vectors using two layer networks. University of California at Santa Cruz, Santa Cruz, CA, 1994.
[19] S. Cruces, L. Castedo, A. Cichocki. Robust blind source separation algorithms using cumulants. Neurocomputing, 49:87–118, 2002.
[20] L. De Lathauwer, B. De Moor, J. Vandewalle. Independent component analysis based on higher-order statistics only. Proceedings of the 8th IEEE Signal Processing Workshop on Statistical Signal and Array Processing, 1996.
[21] S. Vempala, Y. Xiao. Structure from local optima: learning subspace juntas via higher order PCA. arXiv:abs/1108.3329, 2011.
[22] M. Kendall, A. Stuart. The Advanced Theory of Statistics. Charles Griffin and Company, 1958.
Compressive Sensing MRI with Wavelet Tree Sparsity

Chen Chen and Junzhou Huang
Department of Computer Science and Engineering, University of Texas at Arlington
cchen@mavs.uta.edu, jzhuang@uta.edu

Abstract

In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct an MR image of good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the number of measurements can be further reduced to $O(K + \log n)$ for tree-sparse data, instead of $O(K + K\log n)$ for standard $K$-sparse data of length $n$. However, few existing algorithms have utilized this for CS-MRI; most model the problem with total variation and wavelet-sparse regularization. On the other hand, some algorithms have been proposed for tree-sparse regularization, but few of them have validated the benefit of wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems, and each subproblem can then be solved efficiently with an iterative scheme. Extensive experiments show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and achieves better reconstruction results on real MR images than general tree-based solvers and algorithms.

1 Introduction

Magnetic Resonance Imaging (MRI) is widely used for observing tissue changes in patients in a non-invasive manner. One limitation of MRI is its imaging speed, including both scanning speed and reconstruction speed. Long waiting times and slow scanning may cause patient discomfort and image blur due to local motion such as breathing and heartbeat.
According to compressive sensing (CS) theory [1,2], only a small number of measurements is enough to recover an image of good quality. This extends the Nyquist–Shannon sampling theorem to the setting where the data is sparse or can be sparsely represented. Compressive Sensing Magnetic Resonance Imaging (CS-MRI) has become one of the most successful applications of compressive sensing, since MR scanning time is directly related to the number of sampled measurements [3]. As most images can be transformed into some sparse domain (wavelet, etc.), only $O(K + K\log n)$ samples are enough to obtain robust MR image reconstruction. Actually, this result can be improved: recent work on structured sparsity shows that the required number of sampling measurements can be further reduced to $O(K + \log n)$ by exploiting tree structure [4-6]. A typical relationship in tree sparsity is that if a parent coefficient has a large/small value, its children also tend to be large/small. Several methods have been proposed to improve standard CS reconstruction by utilizing this prior. Specifically, two convex models are proposed in [7] to handle the tree-based reconstruction problem; they apply SpaRSA [11] to solve their models, with a relatively slow convergence rate. In Bayesian compressive sensing, Markov Chain Monte Carlo (MCMC) and variational Bayesian (VB) methods are used to solve tree-based hierarchical models [8][9]. Turbo AMP [10] also exploits tree sparsity for compressive sensing, with an iterative approximate message passing approach. However, none of these methods has been extensively validated on MR images. In existing CS-MRI models, the linear combination of total variation and wavelet-sparse regularization is very popular [3,12-15]. The classical conjugate gradient descent method was first used to solve this problem [3]. TVCMRI [12] and RecPF [13] use an operator-splitting method and a variable-splitting method, respectively, to solve this problem.
FCSA [14,15] decomposes the original problem into two easy subproblems and solves each of them with FISTA [16,17]. These are the state-of-the-art algorithms for CS-MRI, but none of them utilizes the tree-sparsity prior to enhance performance. In this paper, we propose a new model for CS-MRI which combines wavelet sparsity, gradient sparsity and tree sparsity seamlessly. In our tree-structure modeling, we assign each pair of parent-child wavelet coefficients to one group, which forces them to be zero or non-zero simultaneously. This is an overlapping-group problem and is hard to solve directly. A new variable is introduced to decompose the problem into three simpler subproblems; each subproblem then has a closed form solution or can be solved efficiently by existing techniques. We conduct extensive experiments to compare the proposed algorithm with the state-of-the-art CS-MRI algorithms and several tree-sparsity algorithms. The proposed algorithm always achieves the best results in terms of SNR and computational time. Our contributions can be summarized as follows: (1) We introduce wavelet tree sparsity to CS-MRI, and provide a convex formulation that models the tree structure in combination with total variation and wavelet sparsity. (2) An efficient algorithm with fast convergence is proposed to solve this model; each iteration only costs $O(n\log n)$ time. (3) Extensive experiments compare the proposed algorithm with the state-of-the-art CS-MRI algorithms and several general tree-based algorithms and solvers; the results show that the proposed algorithm outperforms all others on real MR images.

2 Related work

2.1 Tree-based compressive sensing

If a signal is sparse or can be sparsely represented, the number of samples necessary to reconstruct it can be significantly smaller than that required by the Nyquist–Shannon sampling theorem.
Moreover, if we know some prior about the structure of the original signal, such as group or graph structure, the measurements can be further reduced [4,5]. Several previous algorithms have utilized the tree structure of wavelet coefficients to improve CS reconstruction [7-10]. OGL [7] is a convex approach to model the tree structure:
$$\hat{\theta} = \arg\min_{\theta}\Big\{F(\theta) = \frac{1}{2}\|b - A\Phi^T\theta\|_2^2 + \lambda_g \sum_{g\in G}\|\tilde{\theta}_g\|_2 + \frac{1}{2\tau^2}\sum_{i=1}^n \sum_{j\in J_i}(\theta_i - \theta_i^j)^2\Big\} \qquad (1)$$
where $\theta$ is the set of wavelet coefficients, $A$ represents a partial Fourier transform for the MR reconstruction problem, and $b$ is the measurement data. $\Phi^T$ denotes the inverse wavelet transform. $G$ denotes the set of all parent-child groups and $g$ is one such group. $\tilde{\theta}$ is an extended vector of $\theta$ with replicates, and the last term is a penalty forcing the replicates to be equal. Once the wavelet coefficients are recovered, they can be transformed into the recovered image by an inverse wavelet transform. This method explores the tree-structure assumption well, but may be slow in general for the following reasons: (a) the parent-child relationship in the model is hard to maintain; (b) it applies SpaRSA [11] to solve (1). Overall, the method can only achieve a convergence rate of $F(\theta^k) - F(\theta^*) \simeq O(1/k)$ [16], where $k$ is the iteration number and $\theta^*$ is the optimal solution. In statistical learning, AMP [10], MCMC [8], and VB [9] all solve (2) with probabilistic inference. In (2), $x$ is the original image to be reconstructed and $w$ is Gaussian white noise. In these approaches, graphical models are used to represent the wavelet tree structure, and the distribution of each coefficient is determined by its parent’s value.
$$y = Ax + w = A\Phi^T\theta + w \qquad (2)$$

2.2 Efficient MR image reconstruction algorithms

In existing CS-MRI algorithms, the linear combination of total variation and wavelet sparsity constraints has shown good properties for MR images. The recent fastest algorithms all attempt to solve (3) in less computational time.
Here α and β are two positive parameters, and Φ denotes the wavelet transform. The total variation is defined as ‖x‖_TV = Σ_i Σ_j √((∇₁x_ij)² + (∇₂x_ij)²), where ∇₁ and ∇₂ denote the forward finite difference operators on the first and second coordinates:

x̂ = argmin_x { (1/2)‖Ax − b‖₂² + α‖x‖_TV + β‖Φx‖₁ }    (3)

TVCMRI [12] and RecPF [13] use an operator-splitting method and a variable-splitting method, respectively, to solve this problem. FCSA [14,15] decomposes the problem into two simpler subproblems and solves each with FISTA; the convergence rate of FISTA is O(1/k²). These approaches are very effective on real MR image reconstruction, but none of them utilizes the wavelet tree structure for further improvement. 2.3 Convex overlapped group sparsity solvers SLEP [18] (Sparse Learning with Efficient Projections) provides a package for tree-structured group lasso (4). Its main function iteratively solves the tree-structured denoising problem; for the reconstruction problem, it applies FISTA to reduce the problem to denoising:

x̂ = argmin_x { (1/2)‖Ax − b‖₂² + β‖Φx‖_tree }    (4)

YALL1 [19] (Your ALgorithms for L1) can solve the general overlapping group sparse problem efficiently, so we include it in the comparisons as well. It first relaxes the constrained overlapping group minimization to an unconstrained problem by the Lagrangian method. The minimization over the x and z subproblems can then be written as:

(x̂, ẑ) = argmin_{x,z} { (β₂/2)‖Ax − b‖₂² + λ₁ᵀGΦx + (β₁/2)‖z − GΦx‖₂² − λ₂ᵀAx + Σ_{i=1}^{s} w_i‖z_i‖₂ }    (5)

where G indicates the grouping index with all its elements being 1 or 0, s is the total number of groups, λ₁, λ₂ are multipliers, and β₁, β₂ are positive parameters. 3 Algorithm Observation tells us that the wavelet coefficients of real MR images tend to be quadtree structured [20], although not strictly. Moreover, they are generally sparse in the wavelet and gradient domains, so we utilize all of these sparsity priors in our model.
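The objective in (3) can be evaluated directly. The sketch below leaves A and Phi abstract as callables (for MRI these would be a partial Fourier transform and a wavelet transform) and implements the isotropic TV term from its definition above; the replicated-boundary convention for the forward differences is an assumption.

```python
import numpy as np

def tv_norm(x):
    # isotropic TV: sum_ij sqrt((grad1 x)_ij^2 + (grad2 x)_ij^2),
    # forward differences with a replicated last row/column (one convention)
    d1 = np.diff(x, axis=0, append=x[-1:, :])
    d2 = np.diff(x, axis=1, append=x[:, -1:])
    return np.sqrt(d1**2 + d2**2).sum()

def objective(x, A, b, Phi, alpha, beta):
    # Eq. (3): 0.5 ||Ax - b||_2^2 + alpha ||x||_TV + beta ||Phi x||_1
    residual = A(x) - b
    return 0.5 * np.sum(residual**2) + alpha * tv_norm(x) + beta * np.abs(Phi(x)).sum()

x = np.array([[0.0, 1.0], [0.0, 1.0]])
print(tv_norm(x))  # 2.0: one unit jump per row, zero vertical variation
```

With A and Phi set to the identity and b = x, the data term vanishes and the objective reduces to the two regularizers, which is a convenient correctness check.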
A new algorithm called Wavelet Tree Sparsity MRI (WaTMRI) is proposed to solve this model efficiently. The tree-based MRI problem can be formulated as follows:

min_x { F(x) = (1/2)‖Ax − b‖₂² + α‖x‖_TV + β(‖Φx‖₁ + Σ_{g∈G} ‖(Φx)_g‖₂) }    (6)

The total variation and L1 terms in fact complement the tree structure assumption, which makes our model more robust on real MR images. This is a main difference from previous tree-structured algorithms and solvers. However, this problem cannot be solved efficiently as stated. We introduce a variable z to constrain x with the overlapping structure; the problem then becomes a non-overlapping convex optimization. Letting GΦx = z, (6) can be rewritten as:

min_{x,z} { F(x, z) = (1/2)‖Ax − b‖₂² + α‖x‖_TV + β(‖Φx‖₁ + Σ_{i=1}^{s} ‖z_{g_i}‖₂) + (λ/2)‖z − GΦx‖₂² }    (7)

The z subproblem has a closed-form solution by group-wise soft thresholding. For the x subproblem, we can combine the first and last quadratic penalties on the right side; the rest then has a similar form to FCSA and can be solved efficiently with an iterative scheme. 3.1 Solution As mentioned above, the z-subproblem in (7) can be written as:

z_{g_i} = argmin_{z_{g_i}} { β‖z_{g_i}‖₂ + (λ/2)‖z_{g_i} − (GΦx)_{g_i}‖₂² },  i = 1, 2, ..., s    (8)

where g_i is the i-th group and s is the total number of groups. It has a closed-form solution by soft thresholding:

z_{g_i} = max(‖r_i‖₂ − β/λ, 0) · r_i / ‖r_i‖₂,  i = 1, 2, ..., s    (9)

where r_i = (GΦx)_{g_i}. For the x-subproblem,

x = argmin_x { (1/2)‖Ax − b‖₂² + α‖x‖_TV + β‖Φx‖₁ + (λ/2)‖z − GΦx‖₂² }    (10)

Let f(x) = (1/2)‖Ax − b‖₂² + (λ/2)‖z − GΦx‖₂², which is a convex and smooth function with Lipschitz constant L_f, and let g₁(x) = α‖x‖_TV and g₂(x) = β‖Φx‖₁, which are convex but non-smooth. This x problem can then be solved efficiently by FCSA. For convenience, we denote (9) by z = shrinkgroup(GΦx, β/λ).
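The closed-form z-update (9) is a standard group soft-thresholding. A sketch follows; the group indices into the stacked vector r = GΦx are assumed to be supplied externally (e.g. built from the parent-child pairs of the tree model):

```python
import numpy as np

def shrinkgroup(r, groups, thresh):
    """Group-wise soft thresholding, Eq. (9):
    z_{g_i} = max(||r_i||_2 - thresh, 0) * r_i / ||r_i||_2,
    applied independently to each group of entries of r."""
    z = np.zeros_like(r)
    for idx in groups:
        ri = r[list(idx)]
        nrm = np.linalg.norm(ri)
        if nrm > 0:
            z[list(idx)] = max(nrm - thresh, 0.0) / nrm * ri
    return z

r = np.array([3.0, 4.0])
print(shrinkgroup(r, [(0, 1)], 1.0))  # [2.4 3.2]: norm 5 shrinks to 4
```

Groups whose norm falls below the threshold β/λ are set to zero as a whole, which is what forces parent-child pairs to be zero or non-zero simultaneously.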
Now we can summarize our method as Algorithm 1:

Algorithm 1 WaTMRI
Input: ρ = 1/L_f, r¹ = x⁰, t¹ = 1, α, β, λ
for k = 1 to N do
  1) z = shrinkgroup(GΦx^{k−1}, β/λ)
  2) x_g = r^k − ρ∇f(r^k)
  3) x₁ = prox_ρ(2α‖x‖_TV)(x_g)
  4) x₂ = prox_ρ(2β‖Φx‖₁)(x_g)
  5) x^k = (x₁ + x₂)/2
  6) t^{k+1} = [1 + √(1 + 4(t^k)²)]/2
  7) r^{k+1} = x^k + ((t^k − 1)/t^{k+1})(x^k − x^{k−1})
end for

Here the proximal map is defined for any scalar ρ > 0 as

prox_ρ(g)(x) := argmin_u { g(u) + (1/(2ρ))‖u − x‖² }    (11)

and ∇f(r^k) = Aᵀ(Ar^k − b) + λΦᵀGᵀ(GΦr^k − z), where Aᵀ denotes the inverse partial Fourier transform. 3.2 Algorithm analysis Suppose x represents an image with n pixels and z contains n′ elements. Although G is an n′ × n matrix, it is sparse with only n′ non-zero elements, so a multiplication by G can be implemented in O(n′) time. Step 1, shrinkgroup, takes O(n′ + n log n) time. Step 2 takes O(n log n) time in total. Step 4 takes O(n log n) when the fast wavelet transform is applied. Steps 3 and 5 each cost O(n). Note that n′ ≤ 2n, since we assign every parent-child coefficient pair to one group and leave each wavelet scaling coefficient in a group of its own. So the total computational complexity per iteration is O(n log n), the same as that of TVCMRI, RecPF and FCSA: we introduce the wavelet tree structure constraint into our model without increasing the overall complexity. The x-subproblem is accelerated by FISTA, and the whole algorithm shows a very fast convergence rate in the following experiments. 4 Experiments 4.1 Experimental setup Numerous experiments have been conducted to show the superiority of the proposed algorithm for CS-MRI. In the MR imaging problem, A is a partial Fourier transform with m rows and n columns; we define the sampling ratio as m/n. The fewer measurements we sample, the less MR scanning time is needed, so low sampling ratio cases are of particular interest in MR imaging.
We follow the sampling strategy of previous works [12,14,15], which randomly chooses more Fourier coefficients at low frequencies and fewer at high frequencies. All measurements are corrupted with 0.01 Gaussian white noise. Signal-to-Noise Ratio (SNR) is used for result evaluation. All experiments are run on a laptop with a 2.4GHz Intel Core i5 2430M CPU, using Matlab 7.8 (2009a). We conduct experiments on four MR images: "Cardiac", "Brain", "Chest" and "Shoulder" (Figure 1). We first compare our algorithm with the classical and fastest MR image reconstruction algorithms, CG [3], TVCMRI [12], RecPF [13] and FCSA [14,15], and then with general tree-based algorithms or solvers: AMP [10], VB [9], YALL1 [19] and SLEP [18]. For fair comparisons, all codes were downloaded from the authors' websites. We do not include MCMC [8] in the experiments because it has slow execution speed and intractable convergence [9][10]. OGL [7] solves its model by SpaRSA [11] with only an O(1/k) convergence rate, which cannot be competitive with recent FISTA [16,17] algorithms with O(1/k²) convergence rate; moreover, the authors have not yet published their code, so we do not include OGL in the comparisons either. We use the same setting α = 0.001, β = 0.035 as in previous works [12,14,15] for all convex models; λ = 0.2β is used for our model. Figure 1: MR images: Cardiac; Brain; Chest; Shoulder and the sampling mask. 4.2 Comparisons with MR image reconstruction algorithms We first compare our method with the state-of-the-art MR image reconstruction algorithms. For convenience, all test images are resized to 256×256. Figure 2 shows the performance comparison on the "Brain" image; all algorithms terminate after 50 iterations. We decompose the wavelet coefficients into 4 levels, since more levels would increase the computational cost while fewer levels would weaken the tree-structure benefit. One can observe that the visual result recovered by the proposed algorithm is the closest to the original with only a 20% sampling ratio.
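For reference, SNR in dB can be computed as sketched below. The exact formula used in the compared papers is not spelled out here, so this power-ratio convention is an assumption:

```python
import numpy as np

def snr_db(x_true, x_rec):
    # SNR = 10 log10( signal power / error power ); one common convention
    err = x_true - x_rec
    return 10.0 * np.log10(np.sum(x_true**2) / np.sum(err**2))

x = np.ones((4, 4))
print(round(snr_db(x, 0.9 * x), 6))  # 20.0: error is 10% of the signal
```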
Although exploiting the tree structure inevitably costs a little more time per iteration, the proposed method always achieves the best performance in terms of SNR versus CPU time. We have conducted experiments on other images and obtained similar results: the proposed algorithm always performs best in terms of SNR and CPU time. This result is reasonable because we exploit the wavelet tree structure in our model, which reduces the required number of measurements, or equivalently increases the accuracy of the solution for the same measurements. Figure 2: Brain image reconstruction with 20% sampling. Visual results from left to right, top to bottom: original image, and images reconstructed by CG [3], TVCMRI [12], RecPF [13], FCSA [14,15], and the proposed algorithm, with SNRs of 10.26, 13.50, 14.29, 15.69 and 16.88. The right side shows the average SNR versus iterations and SNR versus CPU time. 4.3 Comparisons with general algorithms for tree structure We also compare our algorithm with existing tree sparsity algorithms based on statistical inference and convex optimization. For the statistical algorithms AMP [10] and VB [9], we use the default settings in their code. For SLEP [18], we set the same parameters α and β as in the previous experiments. For YALL1 [19], we set both β₁ and β₂ equal to β. VB needs every column of A explicitly, which slows down the whole algorithm; due to the higher space requirement and time complexity of VB, we resize all images to 128×128 and set the wavelet decomposition level to 3. Figure 3 shows the reconstruction results on the "Brain" image with only 20% measurements. All algorithms terminate after 50 iterations. Due to the high computational complexity of VB, we do not show its performance in the bottom-right panel. As AMP and VB can converge within a small number of iterations but are much slower per iteration, we run them for 10 iterations in all later experiments. The proposed algorithm always achieves the highest SNR per unit of CPU time among all tree-based algorithms and solvers.
These results are reasonable because none of the other algorithms uses the sparsity priors of MR images in the wavelet and gradient domains simultaneously.

Table 1: Comparisons of SNR (dB) on four MR images
Algorithms    Iterations   Cardiac       Brain         Chest         Shoulder
AMP [10]      10           11.36±0.95    11.56±0.60    11.00±0.30    14.49±1.04
VB [9]        10           9.62±1.82     9.23±1.39     8.93±0.79     13.81±0.44
SLEP [18]     50           12.24±1.08    12.28±0.78    12.34±0.28    15.65±1.78
YALL1 [19]    50           9.56±0.13     7.73±0.15     7.76±0.56     13.14±0.22
Proposed      50           14.80±0.51    14.11±0.41    12.90±0.13    18.93±0.73

Table 2: Comparisons of execution time (sec) on four MR images
Algorithms    Iterations   Cardiac       Brain         Chest         Shoulder
AMP [10]      10           2.30±0.06     2.36±0.33     2.37±0.41     2.29±0.22
VB [9]        10           13.95±0.11    14.25±0.29    14.11±0.40    14.15±0.42
SLEP [18]     50           1.44±0.08     1.52±0.06     1.41±0.05     1.45±0.08
YALL1 [19]    50           1.02±0.04     1.04±0.01     0.98±0.04     1.00±0.02
Proposed      50           1.54±0.04     1.61±0.03     1.56±0.07     1.62±0.14

Tables 1 and 2 show the results on the four MR images. Although the statistical algorithms are slow in general, they have the convenience of not requiring parameter tuning, since all parameters are learned from the data; fortunately, good parameters for MR image reconstruction are easy to tune in our model. Except for the proposed algorithm, all other algorithms rely on a strong assumption of tree structure. However, many real MR images do not strictly follow this assumption, and for this reason these tree-based algorithms cannot do their best on real MR images.

Figure 3: Brain image reconstruction with 20% sampling. Visual results from left to right, top to bottom: original image, and images reconstructed by AMP [10], VB [9], SLEP [18], YALL1 [19], and the proposed algorithm, with SNRs of 11.56, 8.81, 12.28, 7.73 and 14.11. The right side shows the average SNR versus iterations and versus CPU time. Note that the bottom-right panel only shows the time for the first 10 iterations of AMP.
To show the benefit of the proposed model, we design another experiment on a toy MR image that more strictly follows the tree structure assumption. First we set the wavelet coefficients carrying the smallest 0.1% of the energy to zero. Then, if a coefficient's parent or child is zero, we set it to zero as well, so that the coefficients within a group are either both zero or both non-zero. Figure 4 shows the original toy brain image and the corresponding results of the different algorithms. We found that all algorithms improved substantially and that their performance became much closer. Comparing Figures 4 and 3, the proposed algorithm shows a clear advantage on real MR images, because we combine TV and wavelet sparsity, which "soften" and complement the tree structure assumption for real MR data. The other tree-based algorithms depend on the "hard" tree structure only, which makes it difficult for them to perform well in CS-MRI. Finally, we show the results at different sampling ratios in Figure 5. For the same algorithm, the SNR of the solution tends to be higher when more measurements are used, and on the same test image the relative ordering of the methods tends to be stable. This coincides with the conclusion of previous papers [14,15] that FCSA is better than TVCMRI and RecPF, and far better than the classical method CG. Across all these experiments, the proposed algorithm always achieves higher SNR than all other algorithms on real MR images. Figure 4: Toy image reconstruction with 20% sampling. Visual results from left to right, top to bottom: original image, and images reconstructed by AMP [10], VB [9], SLEP [18], YALL1 [19], and the proposed algorithm, with SNRs of 12.99, 10.12, 13.53, 13.19 and 15.29. The right side shows the average SNR versus iterations and versus CPU time. Figure 5: Average SNR at different sampling ratios on the 4 MR images. All algorithms terminate after 50 iterations, except AMP [10] and VB [9], which terminate after 10 iterations.
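The toy-image construction described above can be sketched as follows. The helper below is hypothetical (not the authors' code): it zeroes the lowest-energy coefficients per the 0.1% rule and then propagates zeros through the parent-child groups until the support is tree-consistent.

```python
import numpy as np

def enforce_tree_support(theta, groups, energy_frac=0.001):
    """Toy-data construction (sketch): zero the coefficients carrying
    the smallest fraction of total energy, then propagate zeros so that
    parent and child in a group are both zero or both non-zero."""
    theta = theta.copy()
    order = np.argsort(theta**2)
    csum = np.cumsum(theta[order]**2)
    theta[order[csum <= energy_frac * csum[-1]]] = 0.0
    # if one member of a parent-child group is zero, zero the other
    changed = True
    while changed:
        changed = False
        for p, c in groups:
            if (theta[p] == 0) != (theta[c] == 0):
                theta[p] = theta[c] = 0.0
                changed = True
    return theta

theta = np.array([5.0, 0.0, 3.0, 4.0])
print(enforce_tree_support(theta, [(0, 2), (1, 3)]))  # [5. 0. 3. 0.]
```

In the example, coefficient 3 is non-zero but its parent (index 1) is zero, so it is zeroed too, while the pair (0, 2) survives intact.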
From left to right, the results are on "Cardiac", "Brain", "Chest" and "Shoulder". 5 Conclusions Real MR images not only tend to be tree-structured sparse, but are also sparse in the wavelet and gradient domains. In this paper, we incorporate all of these priors in our model, and all terms in the model are convex. To solve the model, we decompose the original problem into three simpler ones and solve each of them very efficiently. Numerous experiments have been conducted to validate our method; all of them demonstrate that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms as well as general tree-based algorithms and solvers. Compared with the state-of-the-art CS-MRI algorithms, the tree structure in our model helps reduce the required measurements and leads to better performance. Compared with general tree sparsity algorithms, our algorithm obtains more robust results on real MR data. Future work will combine the proposed algorithm with nonlocal total variation [22] for multi-contrast MRI [21]. References [1] Donoho, D. (2006) Compressed sensing. IEEE Trans. on Information Theory 52(4):1289-1306. [2] Candes, E., Romberg, J. & Tao, T. (2006) Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. on Information Theory 52(2):489-509. [3] Lustig, M., Donoho, D. & Pauly, J. (2007) Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58(6):1182-1195. [4] Huang, J., Zhang, T. & Metaxas, D. (2011) Learning With Structured Sparsity. Journal of Machine Learning Research 12:3371-3412. [5] Baraniuk, R.G., Cevher, V., Duarte, M.F. & Hegde, C. (2010) Model-based compressive sensing. IEEE Trans. on Information Theory 56:1982-2001. [6] Bach, F., Jenatton, R., Mairal, J. & Obozinski, G. (2012) Structured sparsity through convex optimization. Technical report, HAL 00621245-v2, to appear in Statistical Science. [7] Rao, N., Nowak, R., Wright, S.
& Kingsbury, N. (2011) Convex approaches to model wavelet sparsity patterns. In IEEE International Conference on Image Processing, ICIP'11. [8] He, L. & Carin, L. (2009) Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing. IEEE Trans. on Signal Processing 57(9):3488-3497. [9] He, L., Chen, H. & Carin, L. (2010) Tree-Structured Compressive Sensing with Variational Bayesian Analysis. IEEE Signal Processing Letters 17(3):233-236. [10] Som, S., Potter, L.C. & Schniter, P. (2010) Compressive Imaging using Approximate Message Passing and a Markov-Tree Prior. In Proceedings of the Asilomar Conference on Signals, Systems, and Computers. [11] Wright, S.J., Nowak, R.D. & Figueiredo, M.A.T. (2009) Sparse reconstruction by separable approximation. IEEE Trans. on Signal Processing 57:2479-2493. [12] Ma, S., Yin, W., Zhang, Y. & Chakraborty, A. (2008) An efficient algorithm for compressed MR imaging using total variation and wavelets. In Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'08. [13] Yang, J., Zhang, Y. & Yin, W. (2010) A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. IEEE Journal of Selected Topics in Signal Processing, Special Issue on Compressive Sensing 4(2):288-297. [14] Huang, J., Zhang, S. & Metaxas, D. (2011) Efficient MR Image Reconstruction for Compressed MR Imaging. Medical Image Analysis 15(5):670-679. [15] Huang, J., Zhang, S. & Metaxas, D. (2010) Efficient MR Image Reconstruction for Compressed MR Imaging. In Proc. of the 13th Annual International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI'10. [16] Beck, A. & Teboulle, M. (2009) A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1):183-202. [17] Beck, A. & Teboulle, M. (2009) Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans.
on Image Processing 18(11):2419-2434. [18] Liu, J., Ji, S. & Ye, J. (2009) SLEP: Sparse Learning with Efficient Projections. Arizona State University. http://www.public.asu.edu/ jye02/Software/SLEP. [19] Deng, W., Yin, W. & Zhang, Y. (2011) Group Sparse Optimization by Alternating Direction Method. Rice CAAM Report TR11-06. [20] Manduca, A. & Said, A. (1996) Wavelet Compression of Medical Images with Set Partitioning in Hierarchical Trees. In Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. [21] Huang, J., Chen, C. & Axel, L. (2012) Fast Multi-contrast MRI Reconstruction. In Proc. of the 15th Annual International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI'12. [22] Huang, J. & Yang, F. (2012) Compressed Magnetic Resonance Imaging Based on Wavelet Sparsity and Nonlocal Total Variation. IEEE International Symposium on Biomedical Imaging, ISBI'12.
Mixability in Statistical Learning Tim van Erven (Université Paris-Sud, France, tim@timvanerven.nl), Peter D. Grünwald (CWI and Leiden University, the Netherlands, pdg@cwi.nl), Mark D. Reid (ANU and NICTA, Australia, Mark.Reid@anu.edu.au), Robert C. Williamson (ANU and NICTA, Australia, Bob.Williamson@anu.edu.au) Abstract Statistical learning and sequential prediction are two different but related formalisms to study the quality of predictions. Mapping out their relations and transferring ideas is an active area of investigation. We provide another piece of the puzzle by showing that an important concept in sequential prediction, the mixability of a loss, has a natural counterpart in the statistical setting, which we call stochastic mixability. Just as ordinary mixability characterizes fast rates for the worst-case regret in sequential prediction, stochastic mixability characterizes fast rates in statistical learning. We show that, in the special case of log-loss, stochastic mixability reduces to a well-known (but usually unnamed) martingale condition, which is used in existing convergence theorems for minimum description length and Bayesian inference. In the case of 0/1-loss, it reduces to the margin condition of Mammen and Tsybakov, and in the case that the model under consideration contains all possible predictors, it is equivalent to ordinary mixability. 1 Introduction In statistical learning (also called batch learning) [1] one obtains a random sample (X₁, Y₁), ..., (Xₙ, Yₙ) of independent pairs of observations, all distributed according to the same distribution P*. The goal is to select a function f̂ that maps X to a prediction f̂(X) of Y for a new pair (X, Y) from the same P*. The quality of f̂ is measured by its excess risk, which is the expectation of its loss ℓ(Y, f̂(X)) minus the expected loss of the best prediction function f* in a given class of functions F.
Analysis in this setting usually involves giving guarantees about the performance of f̂ in the worst case over the choice of the distribution of the data. In contrast, the setting of sequential prediction (also called online learning) [2] makes no probabilistic assumptions about the source of the data. Instead, pairs of observations (x_t, y_t) are assumed to become available one at a time, in rounds t = 1, ..., n, and the goal is to select a function f̂_t just before round t, which maps x_t to a prediction of y_t. The quality of the predictions f̂₁, ..., f̂ₙ is evaluated by their regret, which is the sum of their losses ℓ(y₁, f̂₁(x₁)), ..., ℓ(yₙ, f̂ₙ(xₙ)) on the actual observations minus the total loss of the best fixed prediction function f* in a class of functions F. In sequential prediction the usual analysis gives guarantees about the performance of f̂₁, ..., f̂ₙ in the worst case over all possible realisations of the data. When stating rates of convergence, we will divide the worst-case regret by n, which makes the rates comparable to rates in the statistical learning setting. Mapping out the relations between statistical learning and sequential prediction is an active area of investigation, and several connections are known. For example, using any of a variety of online-to-batch conversion techniques [3], any sequential predictions f̂₁, ..., f̂ₙ may be converted into a single statistical prediction f̂, and the statistical performance of f̂ is bounded by the sequential prediction performance of f̂₁, ..., f̂ₙ. Moreover, a deep understanding of the relation between worst-case rates in both settings is provided by Abernethy, Agarwal, Bartlett and Rakhlin [4]. Amongst others, their results imply that for many loss functions the worst-case rate in sequential prediction exceeds the worst-case rate in statistical learning.
Fast Rates In sequential prediction with a finite class F, it is known that the worst-case regret can be bounded by a constant if and only if the loss ℓ has the property of being mixable [5, 6] (subject to mild regularity conditions on the loss). Dividing by n, this corresponds to O(1/n) rates, which is fast compared to the usual O(1/√n) rates. In statistical learning, there are two kinds of conditions that are associated with fast rates. First, for 0/1-loss, fast rates (faster than O(1/√n)) are associated with Mammen and Tsybakov's margin condition [7, 8], which depends on a parameter κ. In the nicest case, κ = 1, and then O(1/n) rates are possible. Second, for log(arithmic) loss there is a single supermartingale condition that is essential to obtain fast rates in all convergence proofs of two-part minimum description length (MDL) estimators, and in many convergence proofs of Bayesian estimators. This condition, used by e.g. [9, 10, 11, 12, 13, 14], sometimes remains implicit (see Example 1 below) and usually goes unnamed. A special case has been called the 'supermartingale property' by Chernov, Kalnishkan, Zhdanov and Vovk [15]. Audibert [16] also introduced a closely related condition, which does however seem subtly different. Our Contribution We define the notion of stochastic mixability of a loss ℓ, set of predictors F, and distribution P*, which we argue to be the natural analogue of mixability for the statistical setting, on two grounds: first, we show that it is closely related to both the supermartingale condition and the margin condition, the two properties that are known to be related to fast rates; second, we show that it shares various essential properties with ordinary mixability and in specific cases is even equivalent to ordinary mixability.
To support the first part of our argument, we show the following: (a) for bounded losses (including 0/1-loss), stochastic mixability is equivalent to the best case (κ = 1) of a generalization of the margin condition; other values of κ may be interpreted in terms of a slightly relaxed version of stochastic mixability; (b) for log-loss, stochastic mixability reduces to the supermartingale condition; (c) in general, stochastic mixability allows uniform O(log |Fₙ|/n) statistical learning rates to be achieved, where |Fₙ| is the size of a sub-model Fₙ ⊂ F considered at sample size n. Finally, (d) if stochastic mixability does not hold, then in general O(log |Fₙ|/n) statistical learning rates cannot be achieved, at least not for 0/1-loss or for log-loss. To support the second part of our argument, we show: (e) if the set F is 'full', i.e. it contains all prediction functions for the given loss, then stochastic mixability turns out to be formally equivalent to ordinary mixability (if F is not full, then either condition may hold without the other). We choose to call our property stochastic mixability rather than, say, 'generalized margin condition for κ = 1' or 'generalized supermartingale condition', because (f) we also show that the general condition can be formulated in an alternative way (Theorem 2) that directly indicates a strong relation to ordinary mixability, and (g) just like ordinary mixability, it can be interpreted as the requirement that a set of so-called pseudo-likelihoods is (effectively) convex. We note that special cases of results (a)-(e) already follow from existing work of many other authors; we provide a detailed comparison in Section 7. Our contributions are to generalize these results, to relate them to each other and to the notion of mixability from sequential prediction, and to give the interpretation in terms of convexity of a set of pseudo-likelihoods.
This leads to our central conclusion: the concept of stochastic mixability is closely related to mixability and plays a fundamental role in achieving fast rates in the statistical learning setting. Outline In §2 we define both ordinary mixability and stochastic mixability. We show that two of the standard ways to express mixability have natural analogues that express stochastic mixability (leading to (f)). In Example 1 we specialize the definition to log-loss and explain its importance in the literature on MDL and Bayesian inference, leading to (b). A third interpretation of stochastic mixability and standard mixability in terms of sets (g) is described in §3. The equivalence between mixability and stochastic mixability when F is full is presented in §4, where we also show that the equivalence need not hold if F is not full (e). In §5, we turn our attention to a version of the margin condition that does not assume that F contains the Bayes optimal predictor, and we show that (a slightly relaxed version of) stochastic mixability is equivalent to the margin condition, taking care of (a). We show (§6) that if stochastic mixability holds, O(log |Fₙ|/n) rates can always be achieved (c), and that in some cases in which it does not hold, O(log |Fₙ|/n) rates cannot be achieved (d). Finally (§7) we connect our results to previous work in the literature. Proofs omitted from the main body of the paper are in the supplementary material. 2 Mixability and Stochastic Mixability We now introduce the notions of mixability and stochastic mixability, showing two equivalent formulations of the latter. 2.1 Mixability A loss function ℓ: Y × A → [0, ∞] is a nonnegative function that measures the quality of a prediction a ∈ A when the true outcome is y ∈ Y by ℓ(y, a). We will assume that all spaces come equipped with appropriate σ-algebras, so we may define distributions on them, and that the loss function ℓ is measurable. Definition 1 (Mixability).
For η > 0, a loss ℓ is called η-mixable if for any distribution π on A there exists a single prediction a_π such that

ℓ(y, a_π) ≤ −(1/η) ln ∫ e^{−ηℓ(y,a)} π(da)  for all y.    (1)

It is called mixable if there exists an η > 0 such that it is η-mixable. Let A be a random variable with distribution π. Then (1) may be rewritten as

E_π[ e^{−ηℓ(y,A)} / e^{−ηℓ(y,a_π)} ] ≤ 1  for all y.    (2)

2.2 Stochastic Mixability Let F be a set of predictors f: X → A, which are measurable functions that map any input x ∈ X to a prediction f(x). For example, if A = Y = {0, 1} and the loss is the 0/1-loss, ℓ_{0/1}(y, a) = 1{y ≠ a}, then the predictors are classifiers. Let P* be the distribution of a pair of random variables (X, Y) with values in X × Y. Most expectations in the paper are with respect to P*. Whenever this is not the case we will add a subscript to the expectation operator, as in (2). Definition 2 (Stochastic Mixability). For any η ≥ 0, we say that (ℓ, F, P*) is η-stochastically mixable if there exists an f* ∈ F such that

E[ e^{−ηℓ(Y,f(X))} / e^{−ηℓ(Y,f*(X))} ] ≤ 1  for all f ∈ F.    (3)

We call (ℓ, F, P*) stochastically mixable if there exists an η > 0 such that it is η-stochastically mixable. By Jensen's inequality, (3) implies

1 ≥ E[ e^{−ηℓ(Y,f(X))} / e^{−ηℓ(Y,f*(X))} ] ≥ e^{E[η(ℓ(Y,f*(X)) − ℓ(Y,f(X)))]},

so that E[ℓ(Y, f*(X))] ≤ E[ℓ(Y, f(X))] for all f ∈ F; hence the definition of stochastic mixability presumes that f* minimizes E[ℓ(Y, f(X))] over all f ∈ F. We will assume throughout the paper that such an f* exists, and that E[ℓ(Y, f*(X))] < ∞. The larger η, the stronger the requirement of η-stochastic mixability: Proposition 1. Any triple (ℓ, F, P*) is 0-stochastically mixable. And if 0 < γ < η, then η-stochastic mixability implies γ-stochastic mixability. Example 1 (Log-loss). Let F be a set of conditional probability densities and let ℓ_log be log-loss, i.e. A is the set of densities on Y, f(x)(y) is written, as usual, as f(y | x), and ℓ_log(y, f(x)) := −ln f(y | x).
For log-loss, statistical learning becomes equivalent to conditional density estimation with random design (see, e.g., [14]). Equation (3) now becomes equivalent to

A_η(f* ∥ f) := E[ (f(Y | X) / f*(Y | X))^η ] ≤ 1.    (4)

A_η has been called the generalized Hellinger affinity [12] in the literature. If the model is correct, i.e. it contains the true conditional density p*(y | x), then, because the log-loss is a proper loss [17], we must have f* = p*, and then, for η = 1, trivially A_η(f* ∥ f) = 1 for all f ∈ F. Thus if the model F is correct, then the log-loss is η-stochastically mixable for η = 1. In that case, for η = 1/2, A_η turns into the standard definition of Hellinger affinity [10]. Equation (4), which just expresses 1-stochastic mixability for log-loss, is used in all previous convergence theorems for two-part MDL density estimation [10, 12, 11, 18], and, more implicitly, in various convergence theorems for Bayesian procedures, including the pioneering paper by Doob [9]. All these results assume that the model F is correct, but, if one studies the proofs, one finds that the assumption is only needed to establish that (4) holds for η = 1. For example, as first noted by [12], if F is a convex set of densities, then (4) also holds for η = 1 even if the model is incorrect, and, indeed, two-part MDL converges at fast rates in such cases (see [14] for a precise definition of what this means, as well as a more general treatment of (4)). Kleijn and Van der Vaart [13], in their extensive analysis of Bayesian nonparametric inference when the model is wrong, also use the fact that (4) holds with η = 1 for convex models to show that fast posterior concentration rates hold for such models even if they do not contain the true p*. The definition of stochastic mixability looks similar to (2), but whereas π is a distribution on predictions, P* is a distribution on outcomes (X, Y). Thus at first sight the resemblance appears to be only superficial.
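As a quick numerical sanity check of Example 1 (not from the paper), the identity A₁(p* ∥ f) = 1 for a correct model can be verified on a toy finite alphabet with no covariate X; the densities below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
p_star = rng.dirichlet(np.ones(5))   # true density p*(y) on a 5-letter alphabet
f = rng.dirichlet(np.ones(5))        # an arbitrary other density in the model

# With f* = p*, A_1(f* || f) = E_{Y~p*}[ f(Y)/p*(Y) ] = sum_y f(y) = 1,
# so Eq. (4) holds with equality at eta = 1.
A1 = np.sum(p_star * (f / p_star))
print(round(A1, 10))  # 1.0
```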
It is therefore quite surprising that stochastic mixability can also be expressed in a way that looks like (1), which provides a first hint that the relation goes deeper. Theorem 2. Let η > 0. Then (ℓ, F, P*) is η-stochastically mixable if and only if for any distribution π on F there exists a single predictor f* ∈ F such that

E[ℓ(Y, f*(X))] ≤ E[ −(1/η) ln ∫ e^{−ηℓ(Y,f(X))} π(df) ].    (5)

Notice that, without loss of generality, we can always choose f* to be the minimizer of E[ℓ(Y, f(X))]; then f* does not depend on π. 3 The Convexity Interpretation There is a third way to express mixability, as the convexity of a set of so-called pseudo-likelihoods. We will now show that stochastic mixability can also be interpreted as convexity of the corresponding set in the statistical learning setting. Following Chernov et al. [15], we first note that the essential feature of a loss ℓ with corresponding set of predictions A is the set of achievable losses they induce:

L = {l: Y → [0, ∞] | ∃a ∈ A: l(y) = ℓ(y, a) for all y ∈ Y}.

If we were to reparametrize the loss by a different set of predictions A′ while keeping L the same, then essentially nothing would change. For example, for 0/1-loss, standard ways to parametrize predictions are by A = {0, 1}, by A = {−1, +1}, or by A = R with the interpretation that predicting a ≥ 0 maps to the prediction 1 and a < 0 maps to the prediction 0. These are all equivalent, because L is the same. It will be convenient to consider the set of functions that lie above the achievable losses in L:

S = S_ℓ = {l: Y → [0, ∞] | ∃l′ ∈ L: l(y) ≥ l′(y) for all y ∈ Y}.

Chernov et al. call this the super prediction set. It plays a role similar to the role of the epigraph of a function in convex analysis. Let η > 0.
Then with each element l ∈ S in the super prediction set, we associate a pseudo-likelihood p(y) = e^{−η l(y)}.

[Figure 1: The relation between convexity and stochastic mixability for log-loss, η = 1 and X = {x} a singleton, in which case P* and the elements of P_F(η) can all be interpreted as distributions on Y.]

Note that 0 ≤ p(y) ≤ 1, but it is generally not the case that ∫ p(y) µ(dy) = 1 for some reference measure µ on Y, so p(y) is not normalized. Let e^{−ηS} = {e^{−ηl} | l ∈ S} denote the set of all such pseudo-likelihoods. By multiplying (1) by −η and exponentiating, it can be shown that η-mixability is exactly equivalent to the requirement that e^{−ηS} is convex [2, 15]. And as for the first two expressions of mixability, there is an analogous convexity interpretation for stochastic mixability. In order to define pseudo-likelihoods in the statistical setting, we need to take into account that the predictions f(X) of the predictors in F are not deterministic, but depend on X. Hence we define conditional pseudo-likelihoods p(Y|X) = e^{−η ℓ(Y, f(X))}. (See also Example 1.) There is no need to introduce a conditional analogue of the super prediction set. Instead, let P_F(η) = {e^{−η ℓ(Y, f(X))} | f ∈ F} denote the set of all conditional pseudo-likelihoods. For λ ∈ [0, 1], a convex combination of any two p₀, p₁ ∈ P_F(η) can be defined as p_λ(Y|X) = (1 − λ)p₀(Y|X) + λp₁(Y|X). Consequently, we may speak of the convex hull co P_F(η) = {p_λ | p₀, p₁ ∈ P_F(η), λ ∈ [0, 1]} of P_F(η).

Corollary 3. Let η > 0. Then η-stochastic mixability of (ℓ, F, P*) is equivalent to the requirement that

min_{p ∈ P_F(η)} E[ −(1/η) ln p(Y|X) ] = min_{p ∈ co P_F(η)} E[ −(1/η) ln p(Y|X) ].  (6)

Proof. This follows directly from Theorem 2 after rewriting it in terms of conditional pseudo-likelihoods.

Notice that the left-hand side of (6) equals E[ℓ(Y, f*(X))], which does not depend on η.
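A small numerical illustration of (6), with a hypothetical two-point model of our own choosing: for log-loss with η = 1 the pseudo-likelihoods are the densities themselves, and for a non-convex model the convex hull can achieve strictly smaller risk, so the two minima in (6) differ and this (ℓ, F, P*) is not 1-stochastically mixable:

```python
import numpy as np

p_star = np.array([0.7, 0.3])    # true distribution on Y = {0, 1}
p0 = np.array([0.9, 0.1])        # a two-element (non-convex) model; for log-loss with
p1 = np.array([0.4, 0.6])        # eta = 1 these densities are their own pseudo-likelihoods

def risk(p):                     # E[-(1/eta) ln p(Y)] with eta = 1
    return float(-np.dot(p_star, np.log(p)))

lams = np.linspace(0.0, 1.0, 1001)
hull_risks = [risk((1 - lam) * p0 + lam * p1) for lam in lams]

lhs = min(risk(p0), risk(p1))    # left side of (6): best risk inside P_F(eta)
rhs = min(hull_risks)            # right side of (6): best risk over the convex hull
print(lhs, rhs)                  # rhs is strictly smaller, so (6) fails here
```

At λ = 0.4 the mixture equals p* itself, so the hull attains the entropy of p*, strictly below both endpoint risks; closing the model under mixtures would restore equality in (6), in line with the convexity discussion above.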
Equation 6 expresses that the convex hull operator has no effect, which means that P_F(η) looks convex from the perspective of P*. See Figure 1 for an illustration for log-loss. Thus we obtain an interpretation of η-stochastic mixability as effective convexity of the set of pseudo-likelihoods P_F(η) with respect to P*. Figure 1 suggests that f* should be unique if the loss is stochastically mixable, which is almost right. It is in fact the loss ℓ(Y, f*(X)) of f* that is unique (almost surely):

Corollary 4. If (ℓ, F, P*) is stochastically mixable and there exist f*, g* ∈ F such that E[ℓ(Y, f*(X))] = E[ℓ(Y, g*(X))] = min_{f∈F} E[ℓ(Y, f(X))], then ℓ(Y, f*(X)) = ℓ(Y, g*(X)) almost surely.

Proof. Let π(f*) = π(g*) = 1/2. Then, by Theorem 2 and (strict) convexity of −ln,

min_{f∈F} E[ℓ(Y, f(X))] ≤ E[ −(1/η) ln( ½ e^{−η ℓ(Y, f*(X))} + ½ e^{−η ℓ(Y, g*(X))} ) ] ≤ E[ ½ ℓ(Y, f*(X)) + ½ ℓ(Y, g*(X)) ] = min_{f∈F} E[ℓ(Y, f(X))].

Hence both inequalities must hold with equality. For the second inequality this is only the case if ℓ(Y, f*(X)) = ℓ(Y, g*(X)) almost surely, which was to be shown.

4 When Mixability and Stochastic Mixability Are the Same

Having observed that mixability and stochastic mixability of a loss share several common features, we now show that in specific cases the two concepts even coincide. More specifically, Theorem 5 below shows that a loss ℓ (meeting two requirements) is η-mixable if and only if it is η-stochastically mixable relative to F_full, the set of all functions from X to A, and all distributions P*. To avoid measurability issues, we will assume that X is countable throughout this section. The two conditions we assume of ℓ are both related to its set of pseudo-likelihoods Φ := e^{−ηS}, which was defined in Section 3. The first condition is that Φ is closed. When Y is infinite, we mean closed relative to the topology for the supremum norm ‖p‖_∞ = sup_{y∈Y} |p(y)|. The second, more technical condition is that Φ is pre-supportable.
That is, for every pseudo-likelihood p ∈ Φ, its pre-image s ∈ S (defined for each y ∈ Y by s(y) := −(1/η) ln p(y)) is supportable. Here, a point s ∈ S is supportable if it is optimal for some distribution P*_Y over Y — that is, if there exists a distribution P*_Y over Y such that E_{P*_Y}[s(Y)] ≤ E_{P*_Y}[t(Y)] for all t ∈ S. This is the case, for example, for all proper losses [17]. We say (ℓ, F) is η-stochastically mixable if (ℓ, F, P*) is η-stochastically mixable for all distributions P* on X × Y.

Theorem 5. Suppose X is countable. Let η > 0 and suppose ℓ is a loss such that its pseudo-likelihood set e^{−ηS} is closed and pre-supportable. Then (ℓ, F_full) is η-stochastically mixable if and only if ℓ is η-mixable.

This result generalizes Theorem 9 and Lemma 11 by Chernov et al. [15] from finite Y to arbitrary continuous Y, which they raised as an open question. In their setting, there are no explanatory variables x, which may be emulated in our framework by letting X contain only a single element. Their conditions also imply (by their Lemma 10) that the loss ℓ is proper, which implies that e^{−ηS} is closed and pre-supportable. We note that for proper losses η-mixability is especially well understood [19]. The proof of Theorem 5 is broken into two lemmas (the proofs of which are in the supplementary material). The first establishes conditions under which mixability implies stochastic mixability, borrowing from a similar result for log-loss by Li [12].

Lemma 6. Let η > 0. Suppose the Bayes optimal predictor f*_B(x) ∈ arg min_{a∈A} E[ℓ(Y, a) | X = x] is in the model: f*_B = f* ∈ F. If ℓ is η-mixable, then (ℓ, F, P*) is η-stochastically mixable.

The second lemma shows that stochastic mixability implies mixability.

Lemma 7. Suppose the conditions of Theorem 5 are satisfied. If (ℓ, F_full) is η-stochastically mixable, then it is η-mixable.

The above two lemmata are sufficient to prove the equivalence of stochastic and ordinary mixability.

Proof of Theorem 5.
In order to show that η-mixability of ℓ implies η-stochastic mixability of (ℓ, F_full), we note that the Bayes-optimal predictor f*_B for any ℓ and P* must be in F_full, and so Lemma 6 implies that (ℓ, F_full, P*) is η-stochastically mixable for any distribution P*. Conversely, that η-stochastic mixability of (ℓ, F_full) implies the η-mixability of ℓ follows immediately from Lemma 7.

Example 2 (if F is not full). In this case, we can have either stochastic mixability without ordinary mixability or the converse. Consider a loss function ℓ that is not mixable in the ordinary sense, e.g. ℓ = ℓ_{0/1}, the 0/1-loss [6], and a set F consisting of just a single predictor. Then clearly ℓ is stochastically mixable relative to F. This is, of course, a trivial case. We do not know whether we can have stochastic mixability without ordinary mixability in nontrivial cases, and plan to investigate this in future work. For the converse, we know that it does hold in nontrivial cases: consider the log-loss ℓ_log, which is 1-mixable in the standard sense (Example 1). Let Y = {0, 1} and let the model F be a set of conditional probability mass functions {f_θ | θ ∈ Θ}, where Θ is the set of all classifiers, i.e. all functions X → Y, and f_θ(y | x) := e^{−ℓ_{0/1}(y, θ(x))} / (1 + e^{−1}), where ℓ_{0/1}(y, ŷ) = 1{y ≠ ŷ} is the 0/1-loss. Then log-loss becomes an affine function of 0/1-loss: for each θ ∈ Θ, ℓ_log(Y, f_θ(X)) = ℓ_{0/1}(Y, θ(X)) + C with C = ln(1 + e^{−1}) [14]. Because 0/1-loss is not standard mixable, by Theorem 5, 0/1-loss is not stochastically mixable relative to Θ. But then we must also have that log-loss is not stochastically mixable relative to F.

5 Stochastic Mixability and the Margin Condition

The excess risk of any f compared to f* is the mean of the excess loss ℓ(Y, f(X)) − ℓ(Y, f*(X)):

d(f, f*) = E[ ℓ(Y, f(X)) − ℓ(Y, f*(X)) ].

We also define the expected square of the excess loss, which is closely related to its variance:

V(f, f*) = E[ ( ℓ(Y, f(X)) − ℓ(Y, f*(X)) )² ].
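The affine relation ℓ_log(Y, f_θ(X)) = ℓ_{0/1}(Y, θ(X)) + C in Example 2 can be verified directly, since it holds pointwise for every (y, prediction) pair:

```python
import math

# f_theta(y|x) = exp(-l01(y, theta(x))) / (1 + e^{-1}), as in Example 2
C = math.log(1 + math.exp(-1))

def l01(y, yhat):
    return float(y != yhat)

def log_loss(y, pred):                  # -ln f_theta(y|x), where pred = theta(x)
    return -math.log(math.exp(-l01(y, pred)) / (1 + math.exp(-1)))

# log-loss = 0/1-loss + C for every (outcome, prediction) pair
for y in (0, 1):
    for pred in (0, 1):
        assert abs(log_loss(y, pred) - (l01(y, pred) + C)) < 1e-12

# Each f_theta is a valid conditional pmf: 1/(1+e^{-1}) + e^{-1}/(1+e^{-1}) = 1
assert abs(1 / (1 + math.exp(-1)) + math.exp(-1) / (1 + math.exp(-1)) - 1) < 1e-12
print(C)
```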
Note that, for 0/1-loss, V(f, f*) = P*(f(X) ≠ f*(X)) is the probability that f and f* disagree. The margin condition, introduced by Mammen and Tsybakov [7, 8] for 0/1-loss, is satisfied with constants κ ≥ 1 and c₀ > 0 if

c₀ V(f, f*)^κ ≤ d(f, f*) for all f ∈ F.  (7)

Unlike Mammen and Tsybakov, we do not assume that F necessarily contains the Bayes predictor, which minimizes the risk over all possible predictors. The same generalization has been used in the context of model selection by Arlot and Bartlett [20].

Remark 1. In some practical cases, the margin condition only holds for a subset of the model such that V(f, f*) ≤ ε₀ for some ε₀ > 0 [8]. In such cases, the discussion below applies to the same subset.

Stochastic mixability, as we have defined it, is directly related to the margin condition for the case κ = 1. In order to relate it to other values of κ, we need a little more flexibility: for given ε ≥ 0 and (ℓ, F, P*), we define

F_ε = {f*} ∪ {f ∈ F | d(f, f*) ≥ ε},  (8)

which excludes a band of predictors that approximate the best predictor in the model to within excess risk ε.

Theorem 8. Suppose a loss ℓ takes values in [0, V] for 0 < V < ∞. Fix a model F and distribution P*. Then the margin condition (7) is satisfied if and only if there exists a constant C > 0 such that, for all ε > 0, (ℓ, F_ε, P*) is η-stochastically mixable for η = C ε^{(κ−1)/κ}. In particular, if the margin condition is satisfied with constants κ and c₀, we can take

C = min{ κ V² / ( c₀^{1/κ} (e^V − V − 1) ), 1 / V^{(κ−1)/κ} }.

This theorem gives a new interpretation of the margin condition as the rate at which η has to go to 0 when the model F is approximated by η-stochastically mixable models F_ε. By the following corollary, proved in the additional material, stochastic mixability of the whole model F is equivalent to the best case of the margin condition.

Corollary 9. Suppose ℓ takes values in [0, V] for 0 < V < ∞.
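As an illustration of (7), the sketch below (toy distribution of our own choosing) computes d(f, f*) and V(f, f*) for 0/1-loss over all classifiers on a three-point X, and finds the largest c₀ for which the margin condition holds with κ = 1:

```python
import numpy as np

# Toy joint distribution: X uniform on {0, 1, 2}; P(Y=1 | X=x) given by eta_x.
eta = np.array([0.9, 0.8, 0.3])        # conditional probability of label 1 at each x
px = np.full(3, 1 / 3)

# Model F: all 8 classifiers f: {0, 1, 2} -> {0, 1}
F = [np.array([a, b, c]) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def risk01(f):
    # E[1{f(X) != Y}] = sum_x P(x) * (1 - eta_x if f(x) = 1 else eta_x)
    return float(np.dot(px, np.where(f == 1, 1 - eta, eta)))

f_star = min(F, key=risk01)            # here the Bayes classifier (1, 1, 0) is in F

def d(f):                              # excess risk
    return risk01(f) - risk01(f_star)

def V(f):                              # for 0/1-loss: P(f(X) != f_star(X))
    return float(np.dot(px, f != f_star))

# Largest c0 satisfying (7) with kappa = 1: min over f != f_star of d(f) / V(f).
c0 = min(d(f) / V(f) for f in F if V(f) > 0)
print(c0)                              # the smallest conditional margin |2*eta_x - 1|
```

Flipping f_star at a single point x changes both d and V by amounts whose ratio is |2η_x − 1|, so c₀ here is the smallest such margin (0.4, at x = 2), matching the intuition that low-noise points make the margin condition easier to satisfy.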
Then (ℓ, F, P*) is stochastically mixable if and only if there exists a constant c₀ > 0 such that the margin condition (7) is satisfied with κ = 1.

6 Connection to Uniform O(log |F_n|/n) Rates

Let ℓ be a bounded loss function. Assume that, at sample size n, an estimator f̂ (statistical learning algorithm) is used based on a finite model F_n, where we allow the size |F_n| to grow with n. Let, for all n, P_n be any set of distributions on X × Y such that for all P* ∈ P_n, the generalized margin condition (7) holds for κ = 1 and a uniform constant c₀ not depending on n, with model F_n. In the case of 0/1-loss, the results of e.g. Tsybakov [8] suggest that there exist estimators f̂_n : (X × Y)ⁿ → F_n that achieve a convergence rate of O(log |F_n|/n), uniformly for all P* ∈ P_n; that is,

sup_{P* ∈ P_n} E_{P*}[ d(f̂_n, f*) ] = O(log |F_n|/n).  (9)

This can indeed be proven, for general loss functions, using Theorem 4.2 of Zhang [21], with f̂_n set to Zhang's information-risk-minimization estimator (to see this, at sample size n apply Zhang's result with α set to 0 and a prior π that is uniform on F_n, so that −log π(f) = log |F_n| for any f ∈ F_n). By Theorem 8, this means that, for any bounded loss function ℓ, if, for some η > 0 and all n, we have that (ℓ, F_n, P*) is η-stochastically mixable for all P* ∈ P_n, then Zhang's estimator satisfies (9). Hence, for bounded loss functions, stochastic mixability implies a uniform O(log |F_n|/n) rate. A connection between stochastic mixability and fast rates is also made by Grünwald [14], who introduces some slack in the definition (allowing the number 1 in (3) to be slightly larger) and uses the convexity interpretation from Section 3 to empirically determine the largest possible value for η. His Theorem 2, applied with the slack set to 0, implies an in-probability version of Zhang's result above.

Example 3. We just explained that, if ℓ is stochastically mixable relative to F_n, then uniform O(log |F_n|/n) rates can be achieved.
We now illustrate that if this is not the case, then, at least if ℓ is 0/1-loss or if ℓ is log-loss, uniform O(log |F_n|/n) rates cannot be achieved in general. To see this, let Θ_n be a finite set of classifiers θ : X → Y, with Y = {0, 1}, and let ℓ be 0/1-loss. Let, for each n, f̂_n : (X × Y)ⁿ → F_n be some arbitrary estimator. It is known from e.g. the work of Vapnik [22] that for every sequence of estimators f̂₁, f̂₂, …, there exist a sequence Θ₁, Θ₂, …, all finite, and a sequence P*₁, P*₂, … such that

E_{P*_n}[ d(f̂_n, f*) ] / ( log |Θ_n| / n ) → ∞.

Clearly then, by Zhang's result above, there cannot be an η such that for all n, (ℓ, Θ_n, P*_n) is η-stochastically mixable. This establishes that if stochastic mixability does not hold, then uniform rates of O(log |F_n|/n) are not achievable in general for 0/1-loss. By the construction of Example 2, we can modify Θ_n into a set of corresponding log-loss predictors F_n so that the log-loss ℓ_log becomes an affine function of the 0/1-loss; thus, there still is no η such that for all n, (ℓ_log, F_n, P*_n) is η-mixable, and the sequence of estimators still cannot achieve a uniform O(log |F_n|/n) rate with log-loss either.

7 Discussion — Related Work

Let us now return to the summary of our contributions, which we provided as items (a)–(g) in §1. We note that slight variations of our formula (3) for stochastic mixability already appear in [14] (but there no connections to ordinary mixability are made) and [15] (but there it is just a tool for the worst-case sequential setting, and no connections to fast rates in statistical learning are made). Equation 3 looks completely different from the margin condition, yet results connecting the two, somewhat similar to (a), albeit very implicitly, already appear in [23] and [24].
Also, the paper by Grünwald [14] contains a connection to the margin condition somewhat similar to Theorem 8, but involving a significantly weaker version of stochastic mixability in which the inequality (3) only holds with some slack. Result (b) is trivial given Definition 2; (c) is a consequence of Theorem 4.2 of [21] when combined with (a) (see Section 6). Result (d) (Theorem 5) is a significant extension of a similar result by Chernov et al. [15]; yet, our proof techniques and interpretation are completely different from those in [15]. Result (e), Example 3, is a direct consequence of (a). Result (f) (Theorem 2) is completely new, but the proof is partly based on ideas which already appear in [12] in a log-loss/MDL context, and (g) is a consequence of (f). Finally, Corollary 3 can be seen as analogous to the results of Lee et al. [25], who showed the role of convexity of F for fast rates in the regression setting with squared loss.

Acknowledgments

This work was supported by the ARC and by NICTA, funded by the Australian Government. It was also supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778, and by NWO Rubicon grant 680-50-1112.

References

[1] O. Bousquet, S. Boucheron, and G. Lugosi. Introduction to statistical learning theory. In O. Bousquet, U. von Luxburg, and G. Rätsch, editors, Advanced Lectures on Machine Learning, volume 3176 of Lecture Notes in Computer Science, pages 169–207. Springer Berlin / Heidelberg, 2004.

[2] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.

[3] O. Dekel and Y. Singer. Data-driven online to batch conversions. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18 (NIPS), pages 267–274, Cambridge, MA, 2006. MIT Press.

[4] J. Abernethy, A. Agarwal, P. L. Bartlett, and A. Rakhlin. A stochastic view of optimal regret through minimax duality.
In Proceedings of the 22nd Conference on Learning Theory (COLT), 2009.

[5] Y. Kalnishkan and M. V. Vyugin. The weak aggregating algorithm and weak mixability. Journal of Computer and System Sciences, 74:1228–1244, 2008.

[6] V. Vovk. A game of prediction with expert advice. In Proceedings of the 8th Conference on Learning Theory (COLT), pages 51–60. ACM, 1995.

[7] E. Mammen and A. B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.

[8] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.

[9] J. L. Doob. Application of the theory of martingales. In Le Calcul de Probabilités et ses Applications. Colloques Internationaux du Centre National de la Recherche Scientifique, pages 23–27, Paris, 1949.

[10] A. Barron and T. Cover. Minimum complexity density estimation. IEEE Transactions on Information Theory, 37(4):1034–1054, 1991.

[11] T. Zhang. From ε-entropy to KL entropy: analysis of minimum information complexity density estimation. Annals of Statistics, 34(5):2180–2210, 2006.

[12] J. Li. Estimation of Mixture Models. PhD thesis, Yale University, 1999.

[13] B. Kleijn and A. van der Vaart. Misspecification in infinite-dimensional Bayesian statistics. Annals of Statistics, 34(2), 2006.

[14] P. Grünwald. Safe learning: bridging the gap between Bayes, MDL and statistical learning theory via empirical convexity. In Proceedings of the 24th Conference on Learning Theory (COLT), 2011.

[15] A. Chernov, Y. Kalnishkan, F. Zhdanov, and V. Vovk. Supermartingales in prediction with expert advice. Theoretical Computer Science, 411:2647–2669, 2010.

[16] J.-Y. Audibert. Fast learning rates in statistical inference through aggregation. Annals of Statistics, 37(4):1591–1646, 2009.

[17] E. Vernet, R. C. Williamson, and M. D. Reid. Composite multiclass losses. In Advances in Neural Information Processing Systems 24 (NIPS), 2011.

[18] P. Grünwald.
The Minimum Description Length Principle. MIT Press, Cambridge, MA, 2007.

[19] T. van Erven, M. Reid, and R. Williamson. Mixability is Bayes risk curvature relative to log loss. In Proceedings of the 24th Conference on Learning Theory (COLT), 2011.

[20] S. Arlot and P. L. Bartlett. Margin-adaptive model selection in statistical learning. Bernoulli, 17(2):687–713, 2011.

[21] T. Zhang. Information theoretical upper and lower bounds for statistical estimation. IEEE Transactions on Information Theory, 52(4):1307–1321, 2006.

[22] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.

[23] J.-Y. Audibert. PAC-Bayesian statistical learning theory. PhD thesis, Université Paris VI, 2004.

[24] O. Catoni. PAC-Bayesian Supervised Classification. Lecture Notes-Monograph Series. IMS, 2007.

[25] W. Lee, P. Bartlett, and R. Williamson. The importance of convexity in learning with squared loss. IEEE Transactions on Information Theory, 44(5):1974–1980, 1998. Correction: Volume 54(9), 4395 (2008).

[26] A. N. Shiryaev. Probability. Springer-Verlag, 1996.

[27] J.-Y. Audibert. A better variance control for PAC-Bayesian classification. Preprint 905, Laboratoire de Probabilités et Modèles Aléatoires, Universités Paris 6 and Paris 7, 2004.
Symmetric Correspondence Topic Models for Multilingual Text Analysis Kosuke Fukumasu† Koji Eguchi† Eric P. Xing‡ †Graduate School of System Informatics, Kobe University, Kobe 657-8501, Japan ‡School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA fukumasu@cs25.scitec.kobe-u.ac.jp, eguchi@port.kobe-u.ac.jp, epxing@cs.cmu.edu Abstract Topic modeling is a widely used approach to analyzing large text collections. A small number of multilingual topic models have recently been explored to discover latent topics among parallel or comparable documents, such as in Wikipedia. Other topic models that were originally proposed for structured data are also applicable to multilingual documents. Correspondence Latent Dirichlet Allocation (CorrLDA) is one such model; however, it requires a pivot language to be specified in advance. We propose a new topic model, Symmetric Correspondence LDA (SymCorrLDA), that incorporates a hidden variable to control a pivot language, in an extension of CorrLDA. We experimented with two multilingual comparable datasets extracted from Wikipedia and demonstrate that SymCorrLDA is more effective than some other existing multilingual topic models. 1 Introduction Topic models (also known as mixed-membership models) are a useful method for analyzing large text collections [1, 2]. In topic modeling, each document is represented as a mixture of topics, where each topic is represented as a word distribution. Latent Dirichlet Allocation (LDA) [2] is one of the well-known topic models. Most topic models assume that texts are monolingual; however, some can capture statistical dependencies between multiple classes of representations and can be used for multilingual parallel or comparable documents. Here, a parallel document is a merged document consisting of multiple language parts that are translations from one language to another, sometimes including sentence-to-sentence or word-to-word alignments. 
A comparable document is a merged document consisting of multiple language parts that are not translations of each other but instead describe similar concepts and events. Recently published multilingual topic models [3, 4], which are the equivalent of Conditionally Independent LDA (CI-LDA) [5, 6], can discover latent topics among parallel or comparable documents. SwitchLDA [6] was modeled by extending CI-LDA. It can control the proportions of languages in each multilingual topic. However, both CI-LDA and SwitchLDA preserve dependencies between languages only by sharing per-document multinomial distributions over latent topics, and accordingly the resulting dependencies are relatively weak. Correspondence LDA (CorrLDA) [7] is another type of topic model for structured data represented in multiple classes. It was originally proposed for annotated image data to simultaneously model words and visual features, and it can also be applied to parallel or comparable documents. In the modeling, it first generates topics for visual features in an annotated image. Then only the topics associated with the visual features in the image are used to generate words. In this sense, visual features can be said to be the pivot in modeling annotated image data. However, when CorrLDA is applied to multilingual documents, a language that plays the role of the pivot (a pivot language¹) must be specified in advance. The quality of the multilingual topics estimated with CorrLDA is sensitive to the pivot language selected. For example, a translation of a Japanese book into English would presumably have Japanese as its pivot, but a set of international news stories would have pivots that differ based on the country an article is about.

¹Note that the term 'pivot language' does not have exactly the same meaning as that commonly used in the machine translation community, where it means an intermediary language for translation between more than three languages.
It is often difficult to appropriately select the pivot language. To address this problem, which we call the pivot problem, we propose a new topic model, Symmetric Correspondence LDA (SymCorrLDA), that incorporates a hidden variable to control the pivot language, in an extension of CorrLDA. Our SymCorrLDA addresses the problem of CorrLDA and can select an appropriate pivot language by inference from the data. We evaluate various multilingual topic models, i.e., CI-LDA, SwitchLDA, CorrLDA, and our SymCorrLDA, as well as LDA, using comparable articles in different languages (English, Japanese, and Spanish) extracted from Wikipedia. We first demonstrate through experiments that CorrLDA outperforms the other existing multilingual topic models mentioned, and then show that our SymCorrLDA works more effectively than CorrLDA in any case of selecting a pivot language. 2 Multilingual Topic Models with Multilingual Comparable Documents Bilingual topic models for bilingual parallel documents that have word-to-word alignments have been developed, such as those by [8]. Their models are directed towards machine translation, where word-to-word alignments are involved in the generative process. In contrast, we focus on analyzing dependencies among languages by modeling multilingual comparable documents, each of which consists of multiple language parts that are not translations of each other but instead describe similar concepts and events. The target documents can be parallel documents, but word-to-word alignments are not taken into account in the topic modeling. Some other researchers explored different types of multilingual topic models that are based on the premise of using multilingual dictionaries or WordNet [9, 10, 11]. In contrast, CI-LDA and SwitchLDA only require multilingual comparable documents that can be easily obtained, such as from Wikipedia, when we use those models for multilingual text analysis. This is more similar to the motivation of this paper. 
Below, we introduce LDA-style topic models that handle multiple classes and can be applied to multilingual comparable documents for the above-mentioned purposes.

2.1 Conditionally Independent LDA (CI-LDA)

CI-LDA [5, 6] is an extension of the LDA model to handle multiple classes, such as words and citations in scientific articles. The CI-LDA framework was used to model multilingual parallel or comparable documents by [3] and [4]. Figure 1(b) shows a graphical model representation of CI-LDA for documents in L languages, and Figure 1(a) shows that of LDA for reference. D, T, and N_d respectively indicate the number of documents, the number of topics, and the number of word tokens that appear in a specific language part of a document d. The superscript '(·)' indicates the variables corresponding to a specific language part of a document d. For better understanding, we show below the process of generating a document according to the graphical model of CI-LDA:

1. For all D documents, sample θ_d ∼ Dirichlet(α)
2. For all T topics and all L languages, sample ϕ_t^{(ℓ)} ∼ Dirichlet(β^{(ℓ)})
3. For each of the N_d^{(ℓ)} words w_i^{(ℓ)} in language ℓ (ℓ ∈ {1, …, L}) of document d:
   a. Sample a topic z_i^{(ℓ)} ∼ Multinomial(θ_d)
   b. Sample a word w_i^{(ℓ)} ∼ Multinomial(ϕ^{(ℓ)}_{z_i^{(ℓ)}})

For example, when we deal with Japanese and English bilingual data, w^{(1)} and w^{(2)} are a Japanese and an English word, respectively. CI-LDA preserves dependencies between languages only by sharing the multinomial distributions with parameters θ_d. Accordingly, there are substantial chances that some topics are assigned only to a specific language part in each document, and the resulting dependencies are relatively weak.

2.2 SwitchLDA

Similarly to CI-LDA, SwitchLDA [6] can be applied to multilingual comparable documents.
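The CI-LDA generative process above can be sketched as follows (toy sizes of our own choosing; an illustrative sampler, not the authors' code). Note that the per-document topic distribution θ_d is shared across both language parts, which is the model's only cross-lingual link:

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 4, 2                      # number of topics and languages (toy values)
W = [5, 6]                       # vocabulary size per language (toy values)
alpha, beta = 0.5, 0.1

# Step 2: per-topic word distributions phi_t^{(l)}, one per (topic, language)
phi = [rng.dirichlet(np.full(W[l], beta), size=T) for l in range(L)]

def generate_doc(n_words=(8, 10)):
    theta = rng.dirichlet(np.full(T, alpha))            # Step 1: shared across languages
    doc = []
    for l in range(L):                                  # Step 3
        z = rng.choice(T, size=n_words[l], p=theta)     # 3-a: topics from the shared theta
        w = [rng.choice(W[l], p=phi[l][t]) for t in z]  # 3-b: words from phi^{(l)}_z
        doc.append((z, w))
    return doc

doc = generate_doc()
```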
However, different from CI-LDA, SwitchLDA can adjust the proportions of multiple different languages for each topic, according to a binomial distribution for bilingual data or a multinomial distribution for data in three or more languages. Figure 1(c) depicts a graphical model representation of SwitchLDA for documents in L languages.

[Figure 1: Graphical model representations of (a) LDA, (b) CI-LDA, and (c) SwitchLDA]

The generative process is described below:

1. For all D documents, sample θ_d ∼ Dirichlet(α)
2. For all T topics:
   a. For all L languages, sample ϕ_t^{(ℓ)} ∼ Dirichlet(β^{(ℓ)})
   b. Sample ψ_t ∼ Dirichlet(η)
3. For each of the N_d words w_i in document d:
   a. Sample a topic z_i ∼ Multinomial(θ_d)
   b. Sample a language label s_i ∼ Multinomial(ψ_{z_i})
   c. Sample a word w_i ∼ Multinomial(ϕ^{(s_i)}_{z_i})

Here, ψ_t indicates a multinomial parameter to adjust the proportions of the L different languages for topic t. If all components of the hyperparameter vector η are large enough, SwitchLDA becomes equivalent to CI-LDA. SwitchLDA is an extension of CI-LDA that gives emphasis or de-emphasis to specific languages for each topic. Therefore, SwitchLDA may represent multilingual topics more flexibly; however, it still has the drawback that the dependencies between languages are relatively weak.

2.3 Correspondence LDA (CorrLDA)

CorrLDA [7] can also be applied to multilingual comparable documents. In the multilingual setting, this model first generates topics for one language part of a document. We refer to this language as a pivot language. For the other languages, the model then uses the topics that were already generated in the pivot language. Figure 2(a) shows a graphical model representation of CorrLDA assuming L languages, where p is the pivot language, which is specified in advance. Here, N_d^{(ℓ)} (ℓ ∈ {p, 2, …, L}) denotes the number of words in language ℓ in document d. The generative process is shown below:

1.
For all D documents' pivot-language parts, sample θ_d^{(p)} ∼ Dirichlet(α^{(p)})
2. For all T topics and all L languages (including the pivot language), sample ϕ_t^{(ℓ)} ∼ Dirichlet(β^{(ℓ)})
3. For each of the N_d^{(p)} words w_i^{(p)} in the pivot language p of document d:
   a. Sample a topic z_i^{(p)} ∼ Multinomial(θ_d^{(p)})
   b. Sample a word w_i^{(p)} ∼ Multinomial(ϕ^{(p)}_{z_i^{(p)}})
4. For each of the N_d^{(ℓ)} words w_i^{(ℓ)} in language ℓ (ℓ ∈ {2, …, L}) of document d:
   a. Sample a topic y_i^{(ℓ)} ∼ Uniform(z_1^{(p)}, …, z^{(p)}_{N_d^{(p)}})
   b. Sample a word w_i^{(ℓ)} ∼ Multinomial(ϕ^{(ℓ)}_{y_i^{(ℓ)}})

This model can capture more direct dependencies between languages, due to the constraint that topics have to be selected from the topics selected in the pivot-language parts. However, when CorrLDA is applied to multilingual documents, a pivot language must be specified in advance. Moreover, the quality of the multilingual topics estimated with CorrLDA is sensitive to the pivot language selected.

3 Symmetric Correspondence Topic Models

When CorrLDA is applied to parallel or comparable documents, this model first generates topics for one language part of a document; we refer to this language as a pivot language. For the other languages, the model then uses the topics that were already generated in the pivot language. CorrLDA has the great advantage that it can capture more direct dependencies between languages; however, it has the disadvantage that it requires a pivot language to be specified in advance. Since the pivot language may differ based on the subject, such as the country a document is about, it is often difficult to select the pivot language appropriately. To address this problem, we propose Symmetric Correspondence LDA (SymCorrLDA).

[Figure 2: Graphical model representations of (a) CorrLDA, (b) SymCorrLDA, and (c) its variant]
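A minimal sampler for the CorrLDA generative process, with language 0 playing the role of the pivot p (toy sizes of our own choosing, illustrative only). The key step is 4-a: non-pivot topics are drawn uniformly from the topics already assigned to the pivot-language words:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 4                             # topics; language 0 is the pivot p in this sketch
W = [5, 6]                        # vocabulary size per language (toy values)
alpha, beta = 0.5, 0.1

# Step 2: per-topic word distributions for both languages
phi = [rng.dirichlet(np.full(W[l], beta), size=T) for l in range(2)]

def generate_doc(n_pivot=8, n_other=10):
    theta_p = rng.dirichlet(np.full(T, alpha))            # Step 1: pivot-language theta only
    z = rng.choice(T, size=n_pivot, p=theta_p)            # Step 3-a
    w_pivot = [rng.choice(W[0], p=phi[0][t]) for t in z]  # Step 3-b
    y = z[rng.integers(n_pivot, size=n_other)]            # Step 4-a: uniform over pivot topics
    w_other = [rng.choice(W[1], p=phi[1][t]) for t in y]  # Step 4-b
    return w_pivot, w_other, z, y

w_pivot, w_other, z, y = generate_doc()
```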
This model generates a flag that specifies a pivot language for each word, adjusting the probability of each language part of a document being the pivot according to a binomial distribution for bilingual data or a multinomial distribution for data in three or more languages. In other words, SymCorrLDA estimates from the data the best pivot language at the word level in each document. The pivot-language flags may be assigned to the words in the originally written portions in each language, since the original portions may be described confidently and with rich vocabulary. Figure 2(b) shows a graphical model representation of SymCorrLDA. SymCorrLDA's generative process is as follows, assuming L languages:

1. For all D documents:
   a. For all L languages, sample θ_d^{(ℓ)} ∼ Dirichlet(α^{(ℓ)})
   b. Sample π_d ∼ Dirichlet(γ)
2. For all T topics and all L languages, sample ϕ_t^{(ℓ)} ∼ Dirichlet(β^{(ℓ)})
3. For each of the N_d^{(ℓ)} words w_i^{(ℓ)} in language ℓ (ℓ ∈ {1, …, L}) of document d:
   a. Sample a pivot-language flag x_i^{(ℓ)} ∼ Multinomial(π_d)
   b. If (x_i^{(ℓ)} = ℓ), sample a topic z_i^{(ℓ)} ∼ Multinomial(θ_d^{(ℓ)})
   c. If (x_i^{(ℓ)} = m ≠ ℓ), sample a topic y_i^{(ℓ)} ∼ Uniform(z_1^{(m)}, …, z^{(m)}_{M_d^{(m)}})
   d. Sample a word w_i^{(ℓ)} ∼ Multinomial( δ_{x_i^{(ℓ)}=ℓ} ϕ^{(ℓ)}_{z_i^{(ℓ)}} + (1 − δ_{x_i^{(ℓ)}=ℓ}) ϕ^{(ℓ)}_{y_i^{(ℓ)}} )

The pivot-language flag x_i^{(ℓ)} = ℓ for an arbitrary language ℓ indicates that the pivot language for the word w_i^{(ℓ)} is its own language ℓ, and x_i^{(ℓ)} = m indicates that the pivot language for w_i^{(ℓ)} is another language m, different from its own language ℓ. The indicator function δ takes the value 1 when the designated event occurs and 0 otherwise.
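A sketch of SymCorrLDA's generative process (toy sizes of our own choosing, illustrative only). One simplification is labeled in the comments: when the sampled pivot language has no pivot topics yet, this sketch falls back to drawing from the word's own language, which is an assumption of the sketch rather than part of the model:

```python
import numpy as np

rng = np.random.default_rng(2)
T, L = 4, 2                       # topics, languages (toy values)
W = [5, 6]                        # vocabulary size per language (toy values)
alpha, beta, gamma = 0.5, 0.1, 1.0

phi = [rng.dirichlet(np.full(W[l], beta), size=T) for l in range(L)]    # Step 2

def generate_doc(n_words=(8, 10)):
    theta = [rng.dirichlet(np.full(T, alpha)) for _ in range(L)]        # Step 1-a
    pi = rng.dirichlet(np.full(L, gamma))                               # Step 1-b
    pivot_topics = [[] for _ in range(L)]  # z-topics drawn so far per language (M^{(m)}_d)
    words = [[] for _ in range(L)]
    # Interleave the language parts so pivot topics accumulate during generation.
    order = [(l, i) for l in range(L) for i in range(n_words[l])]
    rng.shuffle(order)
    for l, _ in order:
        x = rng.choice(L, p=pi)                                         # 3-a: pivot flag
        if x == l or not pivot_topics[x]:
            # 3-b: own language is the pivot (fallback when language x has no
            # pivot topics yet -- a simplification of this sketch)
            t = rng.choice(T, p=theta[l])
            pivot_topics[l].append(t)
        else:
            # 3-c: uniform over the pivot topics already drawn in language x
            t = pivot_topics[x][rng.integers(len(pivot_topics[x]))]
        words[l].append(rng.choice(W[l], p=phi[l][t]))                  # 3-d
    return words

words = generate_doc()
```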
Unlike CorrLDA, the uniform distribution at Step 3-c is not based on the topics generated for all N_d^{(m)} words with pivot-language flags, but only on the topics already generated for the M_d^{(m)} (M_d^{(m)} ≤ N_d^{(m)}) words with pivot-language flags at each step of the generative process.²

The full conditional probability for collapsed Gibbs sampling of this model is given by the following equations, assuming symmetric Dirichlet priors parameterized by α^{(ℓ)}, β^{(ℓ)} (ℓ ∈ {1, …, L}), and γ:

P(z_i^{(ℓ)} = t, x_i^{(ℓ)} = ℓ | w_i^{(ℓ)} = w^{(ℓ)}, z^{(ℓ)}_{−i}, w^{(ℓ)}_{−i}, x_{−i}, α^{(ℓ)}, β^{(ℓ)}, γ)
  ∝ (n_{dℓ,−i} + γ) / (n_{dℓ,−i} + Σ_{j≠ℓ} n_{dj} + Lγ)
  · (C^{TD(ℓ)}_{td,−i} + α^{(ℓ)}) / (Σ_{t′} C^{TD(ℓ)}_{t′d,−i} + T α^{(ℓ)})
  · (C^{W(ℓ)T}_{w^{(ℓ)}t,−i} + β^{(ℓ)}) / (Σ_{w^{(ℓ)′}} C^{W(ℓ)T}_{w^{(ℓ)′}t,−i} + W^{(ℓ)} β^{(ℓ)})   (1)

P(y_i^{(ℓ)} = t, x_i^{(ℓ)} = m | w_i^{(ℓ)} = w^{(ℓ)}, y^{(ℓ)}_{−i}, z^{(m)}, w^{(ℓ)}_{−i}, x_{−i}, β^{(ℓ)}, γ)
  ∝ (n_{dm,−i} + γ) / (n_{dm,−i} + Σ_{j≠m} n_{dj} + Lγ)
  · C^{TD(m)}_{td} / N_d^{(m)}
  · (C^{W(ℓ)T}_{w^{(ℓ)}t,−i} + β^{(ℓ)}) / (Σ_{w^{(ℓ)′}} C^{W(ℓ)T}_{w^{(ℓ)′}t,−i} + W^{(ℓ)} β^{(ℓ)})   (2)

where w^{(·)} = {w_i^{(·)}}, z^{(·)} = {z_i^{(·)}}, and x^{(·)} = {x_i^{(·)}}. W^{(·)} and N_d^{(·)} respectively indicate the total number of vocabulary words (word types) in the specified language, and the number of word tokens that appear in the specified language part of document d.

Table 1: Summary of bilingual data
                           Japanese     English
No. of documents                 229,855
No. of word types (vocab)   124,046     173,157
No. of word tokens       61,187,469  80,096,333

Table 2: Summary of trilingual data
                           Japanese     English     Spanish
No. of documents                      90,602
No. of word types (vocab)    70,902      98,474      96,191
No. of word tokens       25,952,978  33,999,988  25,701,830

²The number of words M_d^{(m)} may indeed differ at the step of generating each word in the generative process. However, this is not problematic for inference, such as by collapsed Gibbs sampling, where any topic is first randomly assigned to every word, and a more appropriate topic is then re-assigned to each word based on the topics previously assigned to all N_d^{(m)} words (not M_d^{(m)} words) with pivot-language flags.
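As a concrete illustration of Eq. (1), the sketch below computes the unnormalized full-conditional weights for all topics at once from hypothetical count statistics (all counts and sizes are made up for illustration):

```python
import numpy as np

T, L, W0 = 3, 2, 5               # topics, languages, vocab size of language 0 (toy)
alpha, beta, gamma = 0.1, 0.1, 1.0

# Hypothetical count statistics with the current word i already removed ("-i" counts):
n_d = np.array([7, 4])           # n_{d,l}: words of doc d currently flagged with pivot l
C_TD = np.array([[3, 2, 2],      # C^{TD(l)}_{t,d}: per-language topic counts in doc d
                 [1, 2, 1]])
C_WT = np.array([[2, 0, 1],      # C^{W(0)T}_{w,t}: word-topic counts for language 0
                 [0, 3, 0],
                 [1, 1, 2],
                 [0, 0, 1],
                 [1, 0, 0]])

w, ell = 2, 0                    # current word id and its language

# Eq. (1): joint weight of (z_i = t, x_i = ell), evaluated for every topic t at once
flag = (n_d[ell] + gamma) / (n_d.sum() + L * gamma)
topic = (C_TD[ell] + alpha) / (C_TD[ell].sum() + T * alpha)
word = (C_WT[w] + beta) / (C_WT.sum(axis=0) + W0 * beta)
weights = flag * topic * word
probs = weights / weights.sum()  # normalize over t to draw the next assignment
print(probs)
```

In a full sampler these weights would be concatenated with the corresponding weights from Eq. (2) for every non-pivot language m, and (topic, flag) would be drawn jointly from the combined vector.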
n_{dℓ} and n_{dm} are the numbers of times that the flags x^(j)_i = ℓ and x^(j)_i = m, respectively, are allocated in document d, over the words i ∈ {1, ..., N^(·)_d} in the languages j ∈ {1, ..., L} of document d. C^{TD(·)}_{td} indicates the (t, d) element of a T × D topic-document count matrix, i.e., the number of times topic t is allocated to the language part (specified in parentheses) of document d. C^{W(·)T}_{wt} indicates the (w, t) element of a W^(·) × T word-topic count matrix, i.e., the number of times topic t is allocated to word w in the specified language. The subscript '−i' indicates that w_i is removed from the data.

Now we slightly modify SymCorrLDA by replacing Step 3-c in its generative process with:

3-c. If (x^(ℓ)_i = m ≠ ℓ), sample a topic y^(ℓ)_i ∼ Multinomial(θ^(m)_d)

Figure 2(c) shows a graphical model representation of this alternative SymCorrLDA. In this model, non-pivot topics depend on the distribution behind the pivot topics, not directly on the pivot topics as in the original SymCorrLDA. This modification makes the generative process more naturally described. Accordingly, Eq. (2) of the full conditional probability is replaced by:

P(y^(ℓ)_i = t, x^(ℓ)_i = m | w^(ℓ)_i = w^(ℓ), y^(ℓ)_{−i}, z^(m), w^(ℓ)_{−i}, x_{−i}, β^(ℓ), γ)
  ∝ \frac{n_{dm,−i} + γ}{n_{dm,−i} + \sum_{j≠m} n_{dj} + Lγ}
    · \frac{C^{TD(m)}_{td} + α^(m)}{\sum_{t′} C^{TD(m)}_{t′d} + Tα^(m)}
    · \frac{C^{W(ℓ)T}_{w^(ℓ)t,−i} + β^(ℓ)}{\sum_{w^(ℓ)′} C^{W(ℓ)T}_{w^(ℓ)′t,−i} + W^(ℓ)β^(ℓ)}   (3)

As the second term on the right-hand side shows, this modification relaxes the constraints so that topics do not always have to be selected from the topics selected for the words with the pivot language flags, unlike in Eq. (2). In the following section we show through experiments how this modification affects the quality of the estimated multilingual topics.

4 Experiments

In this section, we demonstrate some examples with SymCorrLDA, and then we compare multilingual topic models using various evaluation methods.
For the evaluation, we use held-out log-likelihood on two datasets, along with the task of finding an English article that is on the same topic as a given Japanese article, and the same task with the languages reversed.

4.1 Settings

The datasets used in this work are two collections of Wikipedia articles: one in English and Japanese, the other in English, Japanese, and Spanish. Articles in each collection are connected across languages via inter-language links, as of November 2, 2009. We extracted text content from the original Wikipedia articles, removing link information and revision history information; we used WP2TXT³ for this purpose. For English articles, we removed 418 types of standard stop words [12]. For Spanish articles, we removed 351 types of standard stop words [13]. As for Japanese articles, we removed function words, such as symbols, conjunctions, and particles, using part-of-speech tags annotated by MeCab⁴. The statistics of the datasets after preprocessing are shown in Tables 1 and 2.
We assumed each set of Wikipedia articles connected via inter-language links between two (or three) languages to be a comparable document that consists of two (or three) language parts. To carry out the evaluation in the task of finding counterpart articles, described later, we randomly divided the Wikipedia document collection at the document level into 80% training documents and 20% test documents.

³http://wp2txt.rubyforge.org/
⁴http://mecab.sourceforge.net/

Figure 3: Change of frequency distribution of πd,1 according to number of iterations (0th, 5th, 20th, and 50th iterations).

Figure 4: Document titles and corresponding πd. (a) Examples with bilingual data, e.g., "Japanese Language", "Horyu-ji (Horyu Temple)", "Shogi (Japanese chess)", "Europe", "Austria", "Physics", "Personal computer", "Western art history". (b) Examples with trilingual data, e.g., "Europe", "Sony", "Mount Fuji", "Bullfighting", "NFL", "Mobile phone".

Figure 5: Topic examples (top words for topics on, e.g., Ireland/Scotland, Japanese cities, football, insects, Japanese castles, and vehicles) and the corresponding proportion of pivots assigned to Japanese. An English translation for each Japanese word follows in parentheses, except for Japanese proper nouns.
Furthermore, to compute held-out log-likelihood, we randomly divided each of the training documents at the word level into a 90% training set and a 10% held-out set. We first estimated CI-LDA, SwitchLDA, CorrLDA, SymCorrLDA and its alternative version ('SymCorrLDA-alt'), as well as LDA as a baseline, using collapsed Gibbs sampling with the training set. In addition, we estimated a special implementation of SymCorrLDA that sets πd in a simple way for comparison, where the pivot language flag for each word is randomly selected according to the proportion of the length of each language part ('SymCorrLDA-rand'). For all the models, we assumed symmetric Dirichlet hyperparameters α = 50/T and β = 0.01, which have often been used in prior work [14]. We imposed the convergence condition on collapsed Gibbs sampling that the percentage change of held-out log-likelihood be less than 0.1%. For SymCorrLDA, we assumed a symmetric Dirichlet hyperparameter γ = 1; for SwitchLDA, a symmetric Dirichlet hyperparameter η = 1. We investigated the effect of γ in SymCorrLDA and η in SwitchLDA; however, the held-out log-likelihood was almost constant when varying these hyperparameters. LDA does not distinguish languages, so for the baseline we assumed all the language parts connected via inter-language links to be mixed together as a single document.

4.2 Pivot assignments

Figure 3 demonstrates how the frequency distribution of the pivot language-flag (binomial) parameter πd,1 for the Japanese language with the bilingual dataset⁵ in SymCorrLDA changes during the iterations of collapsed Gibbs sampling. This figure shows that the pivot language flag is randomly assigned at the initial state, and then converges to an appropriate bias for each document as the iterations proceed. We next demonstrate how the pivot language flags are assigned to each document. Figure 4(a) shows the titles of eight documents and the corresponding πd when using the bilingual data (T = 500).
If πd,1 is close to 1, the article can be considered more related to a subject on Japanese or Japan. In contrast, if πd,1 is close to 0, and therefore πd,2 = 1 − πd,1 is close to 1, the article can be considered more related to a subject on English or English-speaking countries. Therefore, a pivot is assigned in consideration of the language biases of the articles. Figure 4(b) shows the titles of six documents and the corresponding πd = (πd,1, πd,2, πd,3) when using the trilingual data (T = 500). Here, πd,1, πd,2, and πd,3 respectively indicate the pivot language-flag (multinomial) parameters corresponding to the Japanese, English, and Spanish parts of each document. We further demonstrate the proportions of pivot assignments at the topic level.

⁵The parameter for English was πd,2 = 1 − πd,1 in this case.

Table 3: Per-word held-out log-likelihood with bilingual data. Boldface indicates the best result in each column.

                       T=500                  T=1000
                  Japanese   English     Japanese   English
LDA                -8.127    -8.633       -7.992    -8.530
CI-LDA             -8.136    -8.644       -8.008    -8.549
SwitchLDA          -8.139    -8.641       -8.012    -8.549
CorrLDA1           -7.463    -8.403       -7.345    -8.346
CorrLDA2           -7.777    -8.197       -7.663    -8.109
SymCorrLDA         -7.433    -8.175       -7.317    -8.084
SymCorrLDA-alt     -7.476    -8.206       -7.358    -8.116
SymCorrLDA-rand    -7.483    -8.222       -7.373    -8.137

Table 4: Per-word held-out log-likelihood with trilingual data. Boldface indicates the best result in each column.

                          T=500                          T=1000
                  Japanese  English  Spanish     Japanese  English  Spanish
CorrLDA1           -7.408   -8.512   -8.667       -7.305   -8.393   -8.545
CorrLDA2           -7.655   -8.198   -8.467       -7.572   -8.122   -8.401
CorrLDA3           -7.794   -8.460   -8.338       -7.700   -8.383   -8.274
SymCorrLDA         -7.394   -8.178   -8.289       -7.287   -8.093   -8.215
SymCorrLDA-alt     -7.440   -8.209   -8.330       -7.330   -8.120   -8.254
Figure 5 shows the content of six topics through the 10 words with the highest probability for each language and each topic when using the bilingual data (T = 500). Some of the topics are biased to Japanese (Topics 13 and 59) or English (Topics 201 and 251), while the others have almost no bias. The pivot bias toward specific languages can thus be interpreted.

4.3 Held-out log-likelihood

By measuring the held-out log-likelihood, we can evaluate the quality of each topic model: the higher the held-out log-likelihood, the greater the predictive ability of the model. In this work, we estimated multilingual topic models with the training set and computed the log-likelihood of generating the held-out set mentioned in Section 4.1. Table 3 shows the held-out log-likelihood of each multilingual topic model estimated with the bilingual dataset when T = 500 and 1000. Note that the held-out log-likelihood (i.e., the micro-average per-word log-likelihood of the 10% held-out set) is shown for each language in this table, while the model estimation was performed over the 90% training set in all the languages. Hereafter, CorrLDA1 refers to the CorrLDA model estimated with Japanese as the pivot language. As described in Section 2.3, the CorrLDA model first generates topics for the pivot language part of a document; for the other language parts of the document, the model then uses the topics that were already generated in the pivot language. CorrLDA2 refers to the CorrLDA model with English as the pivot language. As the results in Table 3 show, the held-out log-likelihoods of CorrLDA1 and CorrLDA2 are much higher than those of the other prior models (CI-LDA, SwitchLDA, and LDA) in both cases. This is because CorrLDA can capture direct dependencies between languages, due to the constraint that topics have to be selected from the topics selected in the pivot language parts.
On the other hand, CI-LDA and SwitchLDA are too weakly constrained to effectively capture the dependencies between languages, as mentioned in Sections 2.1 and 2.2. In particular, CorrLDA1 has the highest held-out log-likelihood among all the prior models for Japanese, while CorrLDA2 is the best among all the prior models for English. This is probably because CorrLDA can estimate topics from the pivot language parts (Japanese in the case of CorrLDA1) without any specific constraints, whereas strong constraints (topics having to be selected from the topics selected in the pivot language parts) are imposed on the other language parts. In SymCorrLDA, the held-out log-likelihood for Japanese is larger than that of CorrLDA1 (and the other models), and the held-out log-likelihood for English is larger than that of CorrLDA2. This is probably because SymCorrLDA estimates the pivot language appropriately, adjusted for each word in each document. Next, we compare SymCorrLDA and its alternative version (SymCorrLDA-alt). We observed in Table 3 that the held-out log-likelihood of SymCorrLDA-alt is smaller than that of the original SymCorrLDA and comparable to CorrLDA's best. This is because the constraints in SymCorrLDA-alt are relaxed so that topics do not always have to be selected from the topics selected for the words with the pivot language flags. For further consideration, let us examine the results of the simplified implementation, SymCorrLDA-rand, defined in Section 4.1. SymCorrLDA-rand's held-out log-likelihood lies even below CorrLDA's best. These results reflect the fact that the performance of SymCorrLDA in its full form derives from the nature of the language biases in the multilingual comparable documents, rather than merely from the lengths of the language parts. Table 4 shows the held-out log-likelihood with the trilingual data when T = 500 and 1000.
Here, CorrLDA3 refers to the CorrLDA model estimated with Spanish as the pivot language. As this table shows, SymCorrLDA's held-out log-likelihood is larger than CorrLDA's best. SymCorrLDA can estimate the pivot language appropriately adjusted for each word in each document in the trilingual data, as with the bilingual data, and SymCorrLDA-alt behaves similarly to the bilingual case. For both the bilingual and trilingual data, the improvements with SymCorrLDA were statistically significant compared to each of the other models, according to the Wilcoxon signed-rank test at the 5% level in terms of the word-by-word held-out log-likelihood. As for scalability, SymCorrLDA is as scalable as CorrLDA because its time complexity is of the same order as that of CorrLDA: the number of topics times the sum of the vocabulary sizes of the languages. In terms of wall-clock time, SymCorrLDA does incur some overhead for allocating the pivot language flags (around 40% over CorrLDA in the case of the bilingual data).

4.4 Finding counterpart articles

Given an article, we can find its unseen counterpart articles in other languages using a multilingual topic model. To evaluate this task, we experimented with the bilingual dataset. We estimated document-topic distributions of test documents for each language, using the topic-word distributions estimated by each multilingual topic model on the training documents. We then evaluated the performance of finding English counterpart articles using Japanese articles as queries, and vice versa. For estimating the document-topic distributions of test documents, we used re-sampling of LDA using the topic-word distribution estimated beforehand by each multilingual topic model [15]. We then computed the Jensen-Shannon (JS) divergence [16] between the document-topic distribution of the Japanese part and that of the English part for each test document.
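The JS divergence used above, and the reciprocal-rank scoring of the rankings described next, can be sketched as follows. These are our own implementations of the standard definitions; the function names and array shapes are assumptions, not the paper's code.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two document-topic distributions [16]."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0                      # treat 0 * log(0/x) as 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def mean_reciprocal_rank(divergences, relevant):
    """MRR for counterpart-article finding.

    divergences: (Q, N) array; entry (q, j) is the JS divergence between query
                 article q and candidate article j in the other language
    relevant:    length-Q sequence; index of the true counterpart of each query
    """
    rr = []
    for q, rel in enumerate(relevant):
        order = np.argsort(divergences[q])             # ascending divergence
        rank = int(np.where(order == rel)[0][0]) + 1   # 1-based rank of counterpart
        rr.append(1.0 / rank)
    return float(np.mean(rr))
```

A query whose counterpart is ranked first contributes 1, one ranked second contributes 1/2, and so on, matching the MRR definition used in Table 5.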
Each held-out English-Japanese article pair connected via an inter-language link is considered to be on the same topic; therefore, the JS divergence of such an article pair is expected to be small if the latent topic estimation is accurate. We first took each held-out Japanese article as a query and the corresponding English article as relevant, and evaluated the ranking of all the English test articles in ascending order of JS divergence; we then conducted the task with the languages reversed.

Table 5: MRR in the counterpart article finding task. Boldface indicates the best result in each column.

              Japanese to English     English to Japanese
               T=500      T=1000       T=500      T=1000
LDA           0.0743     0.1027       0.0870     0.1262
CI-LDA        0.1426     0.1464       0.1697     0.1818
SwitchLDA     0.1357     0.1347       0.1668     0.1653
CorrLDA1      0.2987     0.3281       0.2863     0.3111
CorrLDA2      0.2829     0.3063       0.3161     0.3464
SymCorrLDA    0.3256     0.3592       0.3348     0.3685

Table 5 shows the mean reciprocal rank (MRR) when T = 500 and 1000. The reciprocal rank is the multiplicative inverse of the rank of the counterpart article corresponding to each query article, and the mean reciprocal rank is its average over all query articles. CorrLDA works much more effectively than the other prior models (CI-LDA, SwitchLDA, and LDA), and overall, SymCorrLDA works the most effectively. We observed that the improvements with SymCorrLDA were statistically significant according to the Wilcoxon signed-rank test at the 5% level, compared with each of the other models. Therefore, it is clear that SymCorrLDA estimates multilingual topics the most successfully in this experiment.

5 Conclusions

In this paper, we compared the performance of various topic models that can be applied to multilingual documents, not using multilingual dictionaries, in terms of held-out log-likelihood and in the task of cross-lingual link detection.
We demonstrated through experiments that CorrLDA works significantly more effectively than CI-LDA, which was used in prior work on multilingual topic models. Furthermore, we proposed a new topic model, SymCorrLDA, that incorporates a hidden variable to control a pivot language, as an extension of CorrLDA. SymCorrLDA has the advantage that it does not require a pivot language to be specified in advance, while CorrLDA does. We demonstrated that SymCorrLDA is more effective than CorrLDA and the other topic models through experiments with Wikipedia datasets, using held-out log-likelihood and the task of finding counterpart articles in other languages. SymCorrLDA can be applied to other kinds of data that have multiple classes of representations, such as annotated image data. We plan to investigate this in future work.

Acknowledgments

We thank Sinead Williamson, Manami Matsuura, and the anonymous reviewers for valuable discussions and comments. This work was supported in part by the Grant-in-Aid for Scientific Research (#23300039) from JSPS, Japan.

References

[1] Thomas Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 50–57, Berkeley, California, USA, 1999.
[2] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[3] David Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. Polylingual topic models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 880–889, Stroudsburg, Pennsylvania, USA, 2009.
[4] Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. Mining multilingual topics from Wikipedia. In Proceedings of the 18th International Conference on World Wide Web, pages 1155–1156, Madrid, Spain, 2009.
[5] Elena Erosheva, Stephen Fienberg, and John Lafferty.
Mixed-membership models of scientific publications. Proceedings of the National Academy of Sciences of the United States of America, 101:5220–5227, 2004.
[6] David Newman, Chaitanya Chemudugunta, Padhraic Smyth, and Mark Steyvers. Statistical entity-topic models. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 680–686, Philadelphia, Pennsylvania, USA, 2006.
[7] David M. Blei and Michael I. Jordan. Modeling annotated data. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 127–134, Toronto, Canada, 2003.
[8] Bing Zhao and Eric P. Xing. BiTAM: Bilingual topic admixture models for word alignment. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics, pages 969–976, Sydney, Australia, 2006.
[9] Jordan Boyd-Graber and David M. Blei. Multilingual topic models for unaligned text. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pages 75–82, Montreal, Canada, 2009.
[10] Jagadeesh Jagarlamudi and Hal Daumé III. Extracting multilingual topics from unaligned comparable corpora. In Advances in Information Retrieval, volume 5993 of Lecture Notes in Computer Science, pages 1–12. Springer, 2010.
[11] Duo Zhang, Qiaozhu Mei, and ChengXiang Zhai. Cross-lingual latent topic extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1128–1137, Uppsala, Sweden, 2010.
[12] James P. Callan, W. Bruce Croft, and Stephen M. Harding. The INQUERY retrieval system. In Proceedings of the 3rd International Conference on Database and Expert Systems Applications, pages 78–83, Valencia, Spain, 1992.
[13] Jacques Savoy. Report on CLEF-2002 experiments: Combining multiple sources of evidence. In Advances in Cross-Language Information Retrieval, volume 2785 of Lecture Notes in Computer Science, pages 66–90. Springer, 2003.
[14] Mark Steyvers and Tom Griffiths. Handbook of Latent Semantic Analysis, chapter 21: Probabilistic Topic Models. Lawrence Erlbaum Associates, Mahwah, New Jersey and London, 2007.
[15] Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. Evaluation methods for topic models. In Proceedings of the 26th International Conference on Machine Learning, pages 1105–1112, Montreal, Canada, 2009.
[16] Jianhua Lin. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145–151, 1991.
Learning Halfspaces with the Zero-One Loss: Time-Accuracy Tradeoffs

Aharon Birnbaum and Shai Shalev-Shwartz
School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel

Abstract

Given α, ϵ, we study the time complexity required to improperly learn a halfspace with misclassification error rate of at most (1 + α)L*_γ + ϵ, where L*_γ is the optimal γ-margin error rate. For α = 1/γ, polynomial time and sample complexity are achievable using the hinge loss. For α = 0, Shalev-Shwartz et al. [2011] showed that poly(1/γ) time is impossible, while learning is possible in time exp(Õ(1/γ)). An immediate question, which this paper tackles, is what is achievable if α ∈ (0, 1/γ). We derive positive results interpolating between the polynomial time for α = 1/γ and the exponential time for α = 0. In particular, we show that there are cases in which α = o(1/γ) but the problem is still solvable in polynomial time. Our results naturally extend to the adversarial online learning model and to the PAC learning with malicious noise model.

1 Introduction

Some of the most influential machine learning tools are based on the hypothesis class of halfspaces with margin. Examples include the Perceptron [Rosenblatt, 1958], Support Vector Machines [Vapnik, 1998], and AdaBoost [Freund and Schapire, 1997]. In this paper we study the computational complexity of learning halfspaces with margin. A halfspace is a mapping h(x) = sign(⟨w, x⟩), where w, x ∈ X are taken from the unit ball of an RKHS (e.g., R^n), and ⟨w, x⟩ is their inner product. Relying on the kernel trick, our sole assumption on X is that we are able to calculate efficiently the inner product between any two instances (see for example Schölkopf and Smola [2002], Cristianini and Shawe-Taylor [2004]). Given an example (x, y) ∈ X × {±1} and a vector w, we say that w errs on (x, y) if y⟨w, x⟩ ≤ 0, and we say that w makes a γ-margin error on (x, y) if y⟨w, x⟩ ≤ γ.
The error rate of a predictor h : X → {±1} is defined as L01(h) = P[h(x) ≠ y], where the probability is over some unknown distribution over X × {±1}. The γ-margin error rate of a predictor x ↦ ⟨w, x⟩ is defined as Lγ(w) = P[y⟨w, x⟩ ≤ γ]. A learning algorithm A receives an i.i.d. training set S = (x1, y1), ..., (xm, ym), and its goal is to return a predictor, A(S), whose error rate is small. We study the runtime required to learn a predictor such that, with high probability over the choice of S, the error rate of the learnt predictor satisfies

L01(A(S)) ≤ (1 + α) L*_γ + ϵ,  where  L*_γ = min_{w : ∥w∥=1} Lγ(w).   (1)

There are three parameters of interest: the margin parameter γ, the multiplicative approximation factor α, and the additive error parameter ϵ. From the statistical perspective (i.e., if we allow exponential runtime), Equation (1) is achievable with α = 0 by letting A be the algorithm which minimizes the number of margin errors over the training set subject to a norm constraint on w. The sample complexity of A is m = Ω̃(1/(γ²ϵ²)); see for example Cristianini and Shawe-Taylor [2004]. If the data is separable with margin (that is, L*_γ = 0), then the aforementioned A can be implemented in time poly(1/γ, 1/ϵ). However, the problem is much harder in the agnostic case, namely, when L*_γ > 0 and the distribution over examples can be arbitrary. Ben-David and Simon [2000] showed that no proper learning algorithm can satisfy Equation (1) with α = 0 while running in time polynomial in both 1/γ and 1/ϵ. By "proper" we mean an algorithm which returns a halfspace predictor. Shalev-Shwartz et al. [2011] extended this result to improper learning, that is, when A(S) should satisfy Equation (1) but is not required to be a halfspace. They also derived an algorithm that satisfies Equation (1) and runs in time exp(C (1/γ) log(1/(γϵ))), where C is a constant. Most algorithms that are used in practice minimize a convex surrogate loss.
That is, instead of minimizing the number of mistakes on the training set, the algorithms minimize L̂(w) = (1/m) Σ_{i=1}^{m} ℓ(yi⟨w, xi⟩), where ℓ : R → R is a convex function that upper bounds the 0-1 loss. For example, the Support Vector Machine (SVM) algorithm relies on the hinge loss. The advantage of convex surrogate losses is that minimizing them can be performed in time poly(1/γ, 1/ϵ). It is easy to verify that minimizing L̂(w) with respect to the hinge loss yields a predictor that satisfies Equation (1) with α = 1/γ. Furthermore, Long and Servedio [2011] and Ben-David et al. [2012] have shown that no convex surrogate loss can guarantee Equation (1) if α < (1/2)(1/γ − 1). Despite the centrality of this problem, not much is known about the runtime required to guarantee Equation (1) with other values of α. In particular, a natural question is how the runtime changes when enlarging α from 0 to 1/γ. Does it change gradually, or is there a phase transition? Our main contribution is an upper bound on the required runtime as a function of α. For any α between 5 and 1/γ,¹ let τ = 1/(γα). We show that the runtime required to guarantee Equation (1) is at most exp(Cτ min{τ, log(1/γ)}), where C is a universal constant (we ignore additional factors which are polynomial in 1/γ, 1/ϵ; see a precise statement with the exact constants in Theorem 1). That is, as we enlarge α, the runtime decreases gradually from exponential to polynomial. Furthermore, we show that the algorithm which yields the aforementioned bound is a vanilla SVM with a specific kernel. We also show how one can design specific kernels that fit certain values of α well while minimizing our upper bound on the sample and time complexity. In Section 4 we extend our results to the more challenging learning settings of adversarial online learning and PAC learning with malicious noise. For both cases, we obtain similar upper bounds on the runtime as a function of α.
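The claim above that the hinge loss attains α = 1/γ follows from a short calculation; the following sketch of the standard argument is ours, not taken from the paper:

```latex
% Let \tilde{w} be any unit vector and rescale w = \tilde{w}/\gamma, so that
%   \ell_{\mathrm{hinge}}(y\langle w, x\rangle) = \max\{0,\; 1 - y\langle \tilde{w}, x\rangle/\gamma\}.
% If y\langle \tilde{w}, x\rangle > \gamma the loss is 0; otherwise, since
% |\langle \tilde{w}, x\rangle| \le 1 in the unit ball, the loss is at most 1 + 1/\gamma. Hence
L_{\mathrm{hinge}}(w) \;\le\; \Bigl(1 + \tfrac{1}{\gamma}\Bigr)\, L_\gamma(\tilde{w}).
% Because the hinge loss upper bounds the 0-1 loss, the hinge minimizer satisfies
L_{01}(A(S)) \;\le\; \Bigl(1 + \tfrac{1}{\gamma}\Bigr)\, L^*_\gamma + \epsilon,
% which is Equation (1) with \alpha = 1/\gamma (the \epsilon accounts for estimation error).
```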
The technique we use in the malicious noise case may be of independent interest. An interesting special case is when α = 1/(γ√log(1/γ)). In this case, τ = √log(1/γ) and hence the runtime is still polynomial in 1/γ. This recovers a recent result of Long and Servedio [2011]. Their technique is based on a smooth boosting algorithm applied on top of a weak learner which constructs random halfspaces and takes their majority vote. Furthermore, Long and Servedio emphasize that their algorithm is not based on convex optimization; they show that no convex surrogate can obtain α = o(1/γ). As mentioned before, our technique is rather different, as we do rely on the hinge loss as a surrogate convex loss. There is no contradiction with Long and Servedio, since we apply the convex loss in the feature space induced by our kernel function; their negative result holds only if the convex surrogate is applied in the original space.

¹We did not analyze the case α < 5 because the runtime is already exponential in 1/γ even when α = 5. Note, however, that our bound for α = 5 is slightly better than the bound of Shalev-Shwartz et al. [2011] for α = 0, because our bound does not involve the parameter ϵ in the exponent, while their bound depends on exp((1/γ) log(1/(ϵγ))).

1.1 Additional related work

The problem of learning kernel-based halfspaces has been extensively studied before in the framework of SVM [Vapnik, 1998, Cristianini and Shawe-Taylor, 2004, Schölkopf and Smola, 2002] and the Perceptron [Freund and Schapire, 1999]. Most algorithms replace the 0-1 error function with a convex surrogate. As mentioned previously, Ben-David et al. [2012] have shown that this approach leads to an approximation factor of at least (1/2)(1/γ − 1). There have been several works attempting to obtain an efficient algorithm for the case α = 0 under certain distributional assumptions. For example, Kalai et al. [2005] and Blais et al.
[2008] have shown that if the marginal data distribution over X is a product distribution, then it is possible to satisfy Equation (1) with α = γ = 0 in time poly(n^{1/ϵ⁴}). Klivans et al. [2009] derived similar results for the case of malicious noise. Another distributional assumption is on the conditional probability of the label given the instance. For example, Kalai and Sastry [2009] solve the problem in polynomial time if there exist a vector w and a monotonically non-increasing function ϕ such that P(Y = 1|X = x) = ϕ(⟨w, x⟩). Zhang [2004] and Bartlett et al. [2006] also studied the relationship between surrogate convex loss functions and the 0-1 loss function. They introduced the notion of well-calibrated loss functions, meaning that the excess risk of a predictor h (over the Bayes optimal) with respect to the 0-1 loss can be bounded using the excess risk of the predictor with respect to the surrogate loss. It follows that if the latter is close to zero, then the former is also close to zero. However, as Ben-David et al. [2012] show in detail, without making additional distributional assumptions, the fact that a loss function is well calibrated does not yield finite-sample or finite-time bounds. In terms of techniques, our Theorem 1 can be seen as a generalization of the positive result given in Shalev-Shwartz et al. [2011]. While Shalev-Shwartz et al. only studied the case α = 0, we are interested in understanding the whole curve of runtime as a function of α. Similar to the analysis of Shalev-Shwartz et al., we approximate the sigmoidal and erf transfer functions using polynomials. However, we need to break symmetry in the definition of the exact transfer function to approximate. The main technical observation is that the Lipschitz constant of the transfer functions we approximate does not depend on α, and is roughly 1/γ no matter what α is. Instead, the change of the transfer function as α increases is in higher-order derivatives.
To the best of our knowledge, the only middle point on the curve that has been studied before is the case α = 1/(γ√log(1/γ)), which was analyzed in Long and Servedio [2011]. Our work shows an upper bound on the entire curve. Besides that, we also provide a recipe for constructing better kernels for specific values of α.

2 Main Results

Our main result is an upper bound on the time and sample complexity for all values of α between 5 and 1/γ. The bounds we derive hold for a norm-constrained form of SVM with a specific kernel, which we now recall. Given a training set S = (x1, y1), ..., (xm, ym) and a feature mapping ψ : X → X′, where X′ is the unit ball of some Hilbert space, consider the following learning rule:

argmin_{v : ∥v∥² ≤ B}  Σ_{i=1}^{m} max{0, 1 − yi⟨v, ψ(xi)⟩}.   (2)

Using the well-known kernel trick, if K(x, x′) implements the inner product ⟨ψ(x), ψ(x′)⟩, and G is an m × m matrix with G_{i,j} = K(xi, xj), then we can write a solution of Equation (2) as v = Σ_i ai ψ(xi), where the vector a ∈ R^m is a solution of

argmin_{a : aᵀGa ≤ B}  Σ_{i=1}^{m} max{0, 1 − yi (Ga)_i}.   (3)

The above is a convex optimization problem in m variables and can be solved in time poly(m). Given a solution a ∈ R^m, we define a classifier ha : X → {±1} by

ha(x) = sign( Σ_{i=1}^{m} ai K(xi, x) ).   (4)

The upper bounds we derive hold for the above kernel-based SVM with the kernel function

K(x, x′) = 1 / (1 − (1/2)⟨x, x′⟩).   (5)

We are now ready to state our main theorem.

Theorem 1 For any γ ∈ (0, 1/2) and α ≥ 5, let τ = 1/(γα) and let

B = min{ 4α² (96τ² + e^{18τ log(8τα²)+5}),  (1/γ²)(0.06 e^{4τ²} + 3) } = poly(1/γ) · e^{min{18τ log(8τα²), 4τ²}}.

Fix ϵ, δ ∈ (0, 1/2) and let m be a training set size that satisfies

m ≥ (16/ϵ²) max{2B, (1 + α)² log(2/δ)}.

Let A be the algorithm which solves Equation (3) with the kernel function given in Equation (5) and returns the predictor defined in Equation (4). Then, for any distribution, with probability of at least 1 − δ, the algorithm A satisfies Equation (1).
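The learning rule of Equations (3)-(5) can be prototyped as in the following sketch. The paper only asserts poly(m) solvability of the convex problem; the projected-subgradient solver, its step size, and the toy data below are our own illustration, not the paper's method.

```python
import numpy as np

def kernel(X1, X2):
    """K(x, x') = 1 / (1 - <x, x'>/2) of Eq. (5); inputs must lie in the unit ball."""
    return 1.0 / (1.0 - 0.5 * (X1 @ X2.T))

def solve_svm(X, y, B, steps=2000, lr=0.01):
    """Projected-subgradient sketch of Eq. (3): minimize the hinge-loss sum
    over a, subject to a^T G a <= B."""
    G = kernel(X, X)
    a = np.zeros(len(y))
    for _ in range(steps):
        active = (y * (G @ a)) < 1.0        # examples with positive hinge loss
        a += lr * (G @ (y * active))        # subgradient step on the hinge sum
        q = a @ G @ a
        if q > B:                           # project back onto {a : a^T G a <= B}
            a *= np.sqrt(B / q)
    return a

def predict(a, X_train, x):
    """Eq. (4): h_a(x) = sign(sum_i a_i K(x_i, x))."""
    return float(np.sign(kernel(x[None, :], X_train) @ a)[0])
```

On a small linearly separable toy set inside the unit ball, the learned predictor recovers the labels; with the kernel of Eq. (5) the same code realizes the feature space used by Theorem 1.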
The proof of the theorem is given in the next section. As a direct corollary we obtain that there is an efficient algorithm that achieves an approximation factor of α = o(1/γ):

Corollary 2 For any ϵ, δ, γ ∈ (0, 1), let α = 1/(γ√(log(1/γ))) and let B = 0.06/γ⁶ + 3/γ². Then, with m, A being as defined in Theorem 1, the algorithm A satisfies Equation (1).

As another corollary of Theorem 1 we obtain that for any constant c ∈ (0, 1), it is possible to satisfy Equation (1) with α = c/γ in polynomial time. However, the dependence of the runtime on the constant c is e^{4/c²}. For example, for c = 1/2 we obtain the multiplicative factor e^{16} ≈ 8,800,000. Our next contribution is to show that a more careful design of the kernel function can yield better bounds.

Theorem 3 For any γ, α, let p be a polynomial of the form p(z) = ∑_{j=1}^d βj z^{2j−1} (namely, p is odd) that satisfies

max_{z∈[−1,1]} |p(z)| ≤ α and min_{z : |z|≥γ} |p(z)| ≥ 1 .

Let m be a training set size that satisfies

m ≥ (16/ϵ²) max{∥β∥₁², 2 log(4/δ), (1 + α)² log(2/δ)} .

Let A be the algorithm which solves Equation (3) with the following kernel function,

K(x, x′) = ∑_{j=1}^d |βj| (⟨x, x′⟩)^{2j−1} ,

and returns the predictor defined in Equation (4). Then, for any distribution, with probability of at least 1 − δ, the algorithm A satisfies Equation (1).

The above theorem provides us with a recipe for constructing good kernel functions: given γ and α, find a vector β with minimal ℓ₁ norm such that the polynomial p(z) = ∑_{j=1}^d βj z^{2j−1} satisfies the conditions given in Theorem 3. For a fixed degree d, this can be written as the following optimization problem:

min_{β∈R^d} ∥β∥₁ s.t. ∀z ∈ [0, 1], p(z) ≤ α ∧ ∀z ∈ [γ, 1], p(z) ≥ 1 .  (6)

Note that for any z, the expression p(z) is a linear function of β. Therefore, the above problem is a linear program with an infinite number of constraints. Nevertheless, it can be solved efficiently using the Ellipsoid algorithm.
Indeed, for any β, we can find the extreme points of the polynomial it defines, and then determine whether β satisfies all the constraints or, if it doesn't, we can find a violated constraint. To demonstrate how Theorem 3 can yield a better guarantee (in terms of the constants), we solved Equation (6) for the simple case of d = 2. For this simple case, we can provide an analytic solution to Equation (6), and based on this solution we obtain the following lemma, whose proof is provided in the appendix.

Lemma 4 Given γ < 2/3, consider the polynomial p(z) = β1 z + β2 z³, where β1 = 1/γ + γ/(1+γ) and β2 = −1/(γ(1+γ)). Then, p satisfies the conditions of Theorem 3 with

α = 2/(3√3 γ) + 2/√3 ≤ 0.385 · (1/γ) + 1.155 .

Furthermore, ∥β∥₁ ≤ 2/γ + 1.

It is interesting to compare the guarantee given in the above lemma to the guarantee of using the vanilla hinge-loss. For both cases the sample complexity is of order 1/(γ²ϵ²). For the vanilla hinge-loss we obtain the approximation factor 1/γ, while for the kernel given in Lemma 4 we obtain the approximation factor α ≤ 0.385 · (1/γ) + 1.155. Recall that Ben-David et al. [2012] have shown that without utilizing kernels, no convex surrogate loss can guarantee an approximation factor smaller than α < (1/2)(1/γ − 1). The above discussion shows that applying the hinge-loss with a kernel function can break this barrier without a significant increase in runtime² or sample complexity.

3 Proofs

Given a scalar loss function ℓ : R → R and a vector w, we denote by L(w) = E_{(x,y)∼D}[ℓ(y⟨w, x⟩)] the expected loss of the predictions of w with respect to a distribution D over X × {±1}. Given a training set S = (x1, y1), . . . , (xm, ym), we denote by L̂(w) = (1/m) ∑_{i=1}^m ℓ(yi⟨w, xi⟩) the empirical loss of w. We slightly overload our notation and also use L(w) to denote E_{(x,y)∼D}[ℓ(y⟨w, ψ(x)⟩)] when w is an element of an RKHS corresponding to the mapping ψ. We define L̂(w) analogously.
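The three claims of Lemma 4 can be checked numerically on a grid; the sketch below is our own verification (not from the paper) for the sample value γ = 0.1. A short calculation from the stated coefficients shows p(γ) = p(1) = 1 exactly, so the minimum of |p| over |z| ≥ γ is attained at the boundary:

```python
import numpy as np

gamma = 0.1  # any value < 2/3 works; 0.1 is an arbitrary test point
beta1 = 1.0 / gamma + gamma / (1.0 + gamma)
beta2 = -1.0 / (gamma * (1.0 + gamma))
alpha = 2.0 / (3.0 * np.sqrt(3.0) * gamma) + 2.0 / np.sqrt(3.0)

z = np.linspace(-1.0, 1.0, 20001)
p = beta1 * z + beta2 * z ** 3

max_abs = np.max(np.abs(p))                                 # should be <= alpha
min_outside_margin = np.min(np.abs(p[np.abs(z) >= gamma]))  # should be >= 1
l1_norm = abs(beta1) + abs(beta2)                           # should be <= 2/gamma + 1
```

This grid check is exactly the separation-oracle computation described above: evaluating p on a fine grid of z values exposes any violated constraint of Equation (6).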
We will make extensive use of the following loss functions: the zero-one loss, ℓ01(z) = 1[z ≤ 0]; the γ-zero-one loss, ℓγ(z) = 1[z ≤ γ]; the hinge-loss, ℓh(z) = [1 − z]₊ = max{0, 1 − z}; and the ramp-loss, ℓramp(z) = min{1, ℓh(z)}. We will use L01(w), Lγ(w), Lh(w), and Lramp(w) to denote the expectations with respect to the different loss functions. Similarly, L̂01(w), L̂γ(w), L̂h(w), and L̂ramp(w) are the empirical losses of w with respect to the different loss functions. Recall that we output a vector v that solves Equation (3). This vector is in the RKHS corresponding to the kernel given in Equation (5). Let Bx = max_{x∈X} K(x, x) ≤ 2. Since the ramp-loss upper bounds the zero-one loss we have that L01(v) ≤ Lramp(v). The advantage of using the ramp loss is that it is both a Lipschitz function and bounded by 1. Hence, standard Rademacher generalization analysis (e.g. Bartlett and Mendelson [2002], Bousquet [2002]) yields that with probability of at least 1 − δ/2 over the choice of S we have:

Lramp(v) ≤ L̂ramp(v) + 2√(BxB/m) + √(2 ln(4/δ)/m) = L̂ramp(v) + ϵ1 ,  (7)

where ϵ1 denotes the sum of the two last terms. Since the ramp loss is upper bounded by the hinge-loss, we have shown the following inequalities:

L01(v) ≤ Lramp(v) ≤ L̂ramp(v) + ϵ1 ≤ L̂h(v) + ϵ1 .  (8)

Next, we rely on the following claim, adapted from [Shalev-Shwartz et al., 2011, Lemma 2.4]:

²It should be noted that solving SVM with kernels takes more time than solving a linear SVM. Hence, if the original instance space is a low-dimensional Euclidean space we lose polynomially in the time complexity. However, when the original instance space is also an RKHS, and our kernel is composed on top of the original kernel, the increase in the time complexity is not significant.

Claim 5 Let p(z) = ∑_{j=0}^∞ βj z^j be any polynomial that satisfies ∑_{j=0}^∞ βj² 2^j ≤ B, and let w be any vector in X. Then, there exists vw in the RKHS defined by the kernel given in Equation (5), such that ∥vw∥² ≤ B and for all x ∈ X, ⟨vw, ψ(x)⟩ = p(⟨w, x⟩).
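The loss functions defined above, and the chain ℓ01 ≤ ℓramp ≤ ℓh that drives Equation (8), are a literal one-liner each; the following sketch just transcribes the definitions:

```python
def l_01(z):
    """Zero-one loss: 1 if the margin z is non-positive."""
    return float(z <= 0.0)

def l_gamma(z, gamma):
    """gamma-zero-one loss: 1 if the margin is at most gamma."""
    return float(z <= gamma)

def l_hinge(z):
    """Hinge loss [1 - z]_+."""
    return max(0.0, 1.0 - z)

def l_ramp(z):
    """Ramp loss: the hinge loss clipped at 1."""
    return min(1.0, l_hinge(z))
```

For every margin value z, l_01(z) ≤ l_ramp(z) ≤ l_hinge(z), and l_01(z) ≤ l_gamma(z, γ) for any γ ≥ 0, which is the pointwise ordering used throughout the proofs.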
For any polynomial p, let ℓp(z) = ℓh(p(z)), and let L̂p be defined analogously. If p is an odd polynomial, we have that ℓp(y⟨w, x⟩) = [1 − y p(⟨w, x⟩)]₊. By the definition of v as minimizing L̂h(v) over ∥v∥² ≤ B, it follows from the above claim that for any odd p that satisfies ∑_{j=0}^∞ βj² 2^j ≤ B and for any w* ∈ X, we have that L̂h(v) ≤ L̂h(v_{w*}) = L̂p(w*). Next, it is straightforward to verify that if p is an odd polynomial that satisfies

max_{z∈[−1,1]} |p(z)| ≤ α and min_{z∈[γ,1]} p(z) ≥ 1 ,  (9)

then ℓp(z) ≤ (1 + α)ℓγ(z) for all z ∈ [−1, 1]. For such polynomials, we have that L̂p(w*) ≤ (1 + α)L̂γ(w*). Finally, by Hoeffding's inequality, for any fixed w*, if m > log(2/δ)/ϵ2², then with probability of at least 1 − δ/2 over the choice of S we have that L̂γ(w*) ≤ Lγ(w*) + ϵ2. So, overall, we have obtained that with probability of at least 1 − δ,

L01(v) ≤ (1 + α)Lγ(w*) + (1 + α)ϵ2 + ϵ1 .

Choosing m large enough so that (1 + α)ϵ2 + ϵ1 ≤ ϵ, we obtain:

Corollary 6 Fix γ, ϵ, δ ∈ (0, 1) and α > 0. Let p be an odd polynomial such that ∑_j βj² 2^j ≤ B and such that Equation (9) holds. Let m be a training set size that satisfies

m ≥ (16/ϵ²) · max{2B, 2 log(4/δ), (1 + α)² log(2/δ)} .

Then, with probability of at least 1 − δ, the solution of Equation (3) satisfies L01(v) ≤ (1 + α)L*γ + ϵ.

The proof of Theorem 1 follows immediately from the above corollary together with the following two lemmas, whose proofs are provided in the appendix.

Lemma 7 For any γ > 0 and α > 2, let τ = 1/(αγ) and let B = (1/γ²)(0.06 e^{4τ²} + 3). Then, there exists a polynomial that satisfies the conditions of Corollary 6 with the parameters α, γ, B.

Lemma 8 For any γ ∈ (0, 1/2) and α ∈ [5, 1/γ], let τ = 1/(αγ) and let B = 4α²(96τ² + exp(18τ log(8τα²) + 5)). Then, there exists a polynomial that satisfies the conditions of Corollary 6 with the parameters α, γ, B.
3.1 Proof of Theorem 3

The proof is similar to the proof of Theorem 1, except that we replace Claim 5 with the following:

Lemma 9 Let p(z) = ∑_{j=1}^d βj z^{2j−1} be any polynomial, and let w be any vector in X. Then, there exists vw in the RKHS defined by the kernel given in Theorem 3, such that ∥vw∥² ≤ ∥β∥₁ and for all x ∈ X, ⟨vw, ψ(x)⟩ = p(⟨w, x⟩).

Proof We start with an explicit definition of the mapping ψ(x) corresponding to the kernel in the theorem. The coordinates of ψ(x) are indexed by tuples (k1, . . . , kj) ∈ [n]^j for j = 1, 3, . . . , 2d−1. Coordinate (k1, . . . , kj) equals √|βj| x_{k1} x_{k2} · · · x_{kj}. Next, for any w ∈ X, we define explicitly the vector vw for which ⟨vw, ψ(x)⟩ = p(⟨w, x⟩). Coordinate (k1, . . . , kj) of vw equals sign(βj)√|βj| w_{k1} w_{k2} · · · w_{kj}. It is easy to verify that indeed ∥vw∥² ≤ ∥β∥₁ and for all x ∈ X, ⟨vw, ψ(x)⟩ = p(⟨w, x⟩). Since for any x ∈ X we also have that K(x, x) ≤ ∥β∥₁, the proof of Theorem 3 follows using the same arguments as in the proof of Theorem 1.

4 Extension to other learning models

In this section we briefly describe how our results can be extended to adversarial online learning and to PAC learning with malicious noise. We start with the online learning model.

4.1 Online learning

Online learning is performed in a sequence of consecutive rounds, where at round t the learner is given an instance, xt ∈ X, and is required to predict its label. After predicting ŷt, the target label, yt, is revealed. The goal of the learner is to make as few prediction mistakes as possible. See, for example, Cesa-Bianchi and Lugosi [2006]. A classic online classification algorithm is the Perceptron [Rosenblatt, 1958]. The Perceptron maintains a vector wt and predicts according to ŷt = sign(⟨wt, xt⟩). Initially, w1 = 0, and at round t the Perceptron updates the vector using the rule wt+1 = wt + 1[ŷt ≠ yt] yt xt. Freund and Schapire [1999] observed that the Perceptron can also be implemented efficiently in an RKHS using a kernel function.
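The kernelized Perceptron just mentioned admits a compact sketch. The version below is ours and is generic in the kernel, so it can be run with Equation (5) or with a plain inner product; a zero score from the initial w = 0 is treated as a (possibly mistaken) −1 prediction:

```python
import numpy as np

def kernel_perceptron(X, y, K):
    """One pass of the kernelized Perceptron over the sequence (X, y).

    Keeps one coefficient per round; predicts sign(sum_i alpha_i y_i K(x_i, x)).
    alpha_t is set to 1 exactly when round t was a mistake (w <- w + y_t phi(x_t))."""
    alpha = np.zeros(len(X))
    mistakes = 0
    for t in range(len(X)):
        score = sum(alpha[i] * y[i] * K(X[i], X[t]) for i in range(t))
        y_hat = 1.0 if score > 0 else -1.0
        if y_hat != y[t]:
            alpha[t] = 1.0
            mistakes += 1
    return alpha, mistakes
```

On a separable toy sequence with margin γ under a unit vector u and radius R, the classical analysis caps the number of mistakes at (R/γ)², which is easy to check on made-up data with the linear kernel np.dot.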
Agmon [1954] and others have shown that if there exists w* such that for all t, yt⟨w*, xt⟩ ≥ 1 and ∥xt∥² ≤ Bx, then the Perceptron will make at most ∥w*∥² Bx prediction mistakes. This bound holds without making any additional distributional assumptions on the sequence of examples. The mistake bound has been generalized to the noisy case (see for example Gentile [2003]) as follows. Given a sequence (x1, y1), . . . , (xm, ym) and a vector w*, let Lh(w*) = (1/m) ∑_{t=1}^m ℓh(yt⟨w*, xt⟩), where ℓh is the hinge-loss. Then, the average number of prediction mistakes the Perceptron will make on this sequence is at most

(1/m) ∑_{t=1}^m 1[ŷt ≠ yt] ≤ Lh(w*) + √(Bx ∥w*∥² Lh(w*) / m) + Bx ∥w*∥² / m .  (10)

Let Lγ(w*) = (1/m) ∑_{t=1}^m 1(yt⟨w*, xt⟩ ≤ γ). Trivially, Equation (10) can yield a bound whose leading term is (1 + 1/γ)Lγ(w*) (namely, it corresponds to α = 1/γ). On the other hand, Ben-David et al. [2009] have derived a mistake bound whose leading term depends on Lγ(w*) (namely, it corresponds to α = 0), but the runtime of the algorithm is at least m^{1/γ²}. The main result of this section is to derive a mistake bound for the Perceptron based on all values of α between 5 and 1/γ.

Theorem 10 For any γ ∈ (0, 1/2) and α ≥ 5, let τ = 1/(γα) and let Bα,γ be the value of B as defined in Theorem 1. Then, for any sequence (x1, y1), . . . , (xm, ym), if the Perceptron is run on this sequence using the kernel function given in Equation (5), the average number of prediction mistakes it will make is at most

min_{γ∈(0,1/2), α≥5, w*∈X} (1 + α)Lγ(w*) + √(2Bα,γ (1 + α)Lγ(w*) / m) + 2Bα,γ / m .

Proof [sketch] Equation (10) holds if we implement the Perceptron using the kernel function given in Equation (5), for which Bx = 2. Furthermore, similarly to the proof of Theorem 1, for any polynomial p that satisfies the conditions of Corollary 6, there exists v* in the RKHS corresponding to the kernel with ∥v*∥² ≤ B and with Lh(v*) ≤ (1 + α)Lγ(w*). The theorem follows.
4.2 PAC learning with malicious noise

In this model, introduced by Valiant [1985] and specialized to the case of halfspaces with margin by Servedio [2003] and Long and Servedio [2011], there is an unknown distribution over instances in X and there is an unknown target vector w* ∈ X such that |⟨w*, x⟩| ≥ γ with probability 1. The learner has access to an example oracle. At each query to the oracle, with probability 1 − η it samples a random example x ∈ X according to the unknown distribution over X and returns (x, sign(⟨w*, x⟩)). However, with probability η, the oracle returns an arbitrary element of X × {±1}. The goal of the learner is to output a predictor h that has L01(h) ≤ ϵ with respect to the "clean" distribution. Auer and Cesa-Bianchi [1998] described a general conversion from online learning to the malicious noise setting. Servedio [2003] used this conversion to derive a bound based on the Perceptron's mistake bound. In our case, we cannot rely on the conversion of Auer and Cesa-Bianchi [1998], since it requires a proper learner, while the online learner described in the previous section is not proper. Instead, we propose the following simple algorithm. First, sample m examples. Then, solve kernel SVM on the resulting noisy training set.

Theorem 11 Let γ ∈ (0, 1/4), δ ∈ (0, 1/2), and α > 5. Let B be as defined in Theorem 1. Let m be a training set size that satisfies

m ≥ (64/ϵ²) · max{2B, (2 + α)² log(1/δ)} .

Then, with probability of at least 1 − 2δ, the output of kernel-SVM on the noisy training set, denoted h, satisfies L01(h) ≤ (2 + α)η + ϵ/2. It follows that if η ≤ ϵ/(2(2 + α)) then L01(h) ≤ ϵ.

Proof Let S̄ be a training set in which we replace the noisy examples with clean i.i.d. examples. Let L̄ denote the empirical loss over S̄ and L̂ denote the empirical loss over S. As in the proof of Theorem 1, we have that w.p.
of at least 1 − δ, for any v in the RKHS corresponding to the kernel that satisfies ∥v∥² ≤ B,

L01(v) ≤ L̄ramp(v) + 3ϵ/8 ,  (11)

by our assumption on m. Let η̂ be the fraction of noisy examples in S. Note that S̄ and S differ in at most mη̂ elements. Therefore, for any v,

L̄ramp(v) ≤ L̂ramp(v) + η̂ .  (12)

Now, let v be the minimizer of L̂h, let w* be the target vector in the original space (i.e., the one which achieves correct predictions with margin γ on clean examples), and let v_{w*} be its corresponding element in the RKHS (see Claim 5). We have

L̂ramp(v) ≤ L̂h(v) ≤ L̂h(v_{w*}) = L̂p(w*) ≤ (1 + α)L̂γ(w*) ≤ (1 + α)η̂ .  (13)

In the above, the first inequality is because the ramp loss is upper bounded by the hinge loss, the second inequality is by the definition of v, the equality is by Claim 5, the next inequality is by the properties of p, and the last inequality follows from the definition of η̂. Combining the above yields

L01(v) ≤ (2 + α)η̂ + 3ϵ/8 .

Finally, using Hoeffding's inequality, we know that for this choice of m, with probability of at least 1 − δ we have that η̂ ≤ η + ϵ/(8(2 + α)). Applying the union bound and combining the above, we conclude that with probability of at least 1 − 2δ, L01(v) ≤ (2 + α)η + ϵ/2.

5 Summary and Open Problems

We have derived upper bounds on the time and sample complexities as a function of the approximation factor. We further provided a recipe for designing kernel functions with small time and sample complexity for any given value of the approximation factor and margin. Our results are applicable to agnostic PAC learning, online learning, and PAC learning with malicious noise. An immediate open question is whether our results can be improved. If not, can computational hardness results be formally established? Another open question is whether the upper bounds we have derived for an improper learner can also be derived for a proper learner.
Acknowledgements: This work is supported by the Israeli Science Foundation grant number 59810 and by the German-Israeli Foundation grant number 2254-2010. Shai Shalev-Shwartz is incumbent of the John S. Cohen Senior Lectureship in Computer Science.

References

S. Agmon. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6(3):382–392, 1954.
P. Auer and N. Cesa-Bianchi. On-line learning with malicious noise and the closure algorithm. Annals of Mathematics and Artificial Intelligence, 23(1):83–99, 1998.
P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101:138–156, 2006.
S. Ben-David and H. Simon. Efficient learning of linear perceptrons. In NIPS, 2000.
S. Ben-David, D. Pal, and S. Shalev-Shwartz. Agnostic online learning. In COLT, 2009.
S. Ben-David, D. Loker, N. Srebro, and K. Sridharan. Minimizing the misclassification error rate using a surrogate convex loss. In ICML, 2012.
E. Blais, R. O'Donnell, and K. Wimmer. Polynomial regression under arbitrary product distributions. In COLT, 2008.
O. Bousquet. Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms. PhD thesis, Ecole Polytechnique, 2002.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
N. Cristianini and J. Shawe-Taylor. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, 1999.
Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, August 1997.
C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265–299, 2003.
A. Kalai, A. R. Klivans, Y. Mansour, and R. Servedio. Agnostically learning halfspaces. In Proceedings of the 46th Foundations of Computer Science (FOCS), 2005.
A. T. Kalai and R. Sastry. The Isotron algorithm: High-dimensional isotonic regression. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
A. R. Klivans, P. M. Long, and R. A. Servedio. Learning halfspaces with malicious noise. The Journal of Machine Learning Research, 10:2715–2740, 2009.
P. M. Long and R. A. Servedio. Learning large-margin halfspaces with more malicious noise. In NIPS, 2011.
F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958. (Reprinted in Neurocomputing, MIT Press, 1988.)
B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, 2002.
R. A. Servedio. Smooth boosting and learning with malicious noise. Journal of Machine Learning Research, 4:633–648, 2003.
S. Shalev-Shwartz, O. Shamir, and K. Sridharan. Learning kernel-based halfspaces with the 0-1 loss. SIAM Journal on Computing, 40:1623–1646, 2011.
L. G. Valiant. Learning disjunctions of conjunctions. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 560–566, August 1985.
V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32:56–85, 2004.
Learning High-Density Regions for a Generalized Kolmogorov-Smirnov Test in High-Dimensional Data

Assaf Glazer, Department of Computer Science, Technion – Israel Institute of Technology, Haifa 32000, Israel, assafgr@cs.technion.ac.il
Michael Lindenbaum, Department of Computer Science, Technion – Israel Institute of Technology, Haifa 32000, Israel, mic@cs.technion.ac.il
Shaul Markovitch, Department of Computer Science, Technion – Israel Institute of Technology, Haifa 32000, Israel, shaulm@cs.technion.ac.il

Abstract

We propose an efficient, generalized, nonparametric, statistical Kolmogorov-Smirnov test for detecting distributional change in high-dimensional data. To implement the test, we introduce a novel, hierarchical, minimum-volume sets estimator to represent the distributions to be tested. Our work is motivated by the need to detect changes in data streams, and the test is especially efficient in this context. We provide the theoretical foundations of our test and show its superiority over existing methods.

1 Introduction

The Kolmogorov-Smirnov (KS) test is efficient, simple, and often considered the method of choice for comparing distributions. Let X = {x1, . . . , xn} and X′ = {x′1, . . . , x′m} be two sets of feature vectors sampled i.i.d. with respect to distributions F and F′. The goal of the KS test is to determine whether F ≠ F′. For one-dimensional distributions, the KS statistic is based on the maximal difference between the cumulative distribution functions (CDFs) of the two distributions. However, nonparametric extensions of this test to high-dimensional data are hard to define, since there are 2^d − 1 ways to represent a d-dimensional distribution by a CDF. Indeed, due to this limitation, several extensions of the KS test to more than one dimension have been proposed [17, 9], but their practical applications are mostly limited to a few dimensions. One prominent approach to generalizing the KS test beyond one-dimensional data is that of Polonik [18].
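For one-dimensional data, the KS statistic max_t |Fn(t) − F′m(t)| described above takes only a few lines; the sketch below (ours, not the authors') evaluates both empirical CDFs on the pooled sample, where the supremum must be attained:

```python
import numpy as np

def ks_statistic(x, xp):
    """Two-sample KS statistic: sup over thresholds t of |F_n(t) - F'_m(t)|."""
    x = np.sort(np.asarray(x, dtype=float))
    xp = np.sort(np.asarray(xp, dtype=float))
    pooled = np.concatenate([x, xp])
    F_n = np.searchsorted(x, pooled, side="right") / len(x)   # empirical CDF of x
    F_m = np.searchsorted(xp, pooled, side="right") / len(xp)  # empirical CDF of x'
    return np.max(np.abs(F_n - F_m))
```

Two identical samples give statistic 0, while two samples with disjoint supports give statistic 1; the generalized test developed in this paper replaces the one-dimensional CDF evaluations with quantiles of hierarchical high-density regions.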
It is based on a generalized quantile transform to a set of high-density hierarchical regions. The transform is used to construct two sets of plots, expected and empirical, which serve as the two input CDFs for the KS test. Polonik's transform is based on a density estimation over X. It maps the input quantile in [0, 1] to a level set of the estimated density such that the expected probability of feature vectors to lie within it is equal to its associated quantile. The expected plots are the quantiles, and the empirical plots are the fractions of examples in X′ that lie within each mapped region. Polonik's approach can handle multivariate data, but is hard to apply in high-dimensional or small-sample-size settings where reliable density estimation is hard. In this paper we introduce a generalized KS test, based on Polonik's theory, to determine whether two samples are drawn from different distributions. However, instead of a density estimator, we use a novel hierarchical minimum-volume sets estimator to estimate the set of high-density regions directly. Because the estimation of such regions is intrinsically simpler than density estimation, our test is more accurate than density-estimation approaches. In addition, whereas Polonik's work was largely theoretical, we take a practical approach and empirically show the superiority of our test over existing nonparametric tests on realistic, high-dimensional data. To use Polonik's generalization of the KS test, the high-density regions should be hierarchical. Using classical minimum-volume set (MV-set) estimators, however, does not, in itself, guarantee this property. We present here a novel method for approximate MV-set estimation that guarantees the hierarchy, thus allowing the KS test to be generalized to high dimensions. Our method uses classical MV-set estimators as a basic component. We test our method with two types of estimators: one-class SVMs (OCSVMs) and one-class neighbor machines (OCNMs).
While the statistical test introduced in this paper traces distributional changes in high-dimensional data in general, it is effective in particular for change detection in data streams. Many real-world applications (e.g. process control) work in dynamic environments where streams of multivariate data are collected over time, during which unanticipated distributional changes in data streams might prevent the proper operation of these applications. Change-detection methods are thus required to trace such changes (e.g. [6]). We extensively evaluate our test on a collection of change-detection tasks. We also show that our proposed test can be used for the classical setting of the two-sample problem using symmetric and asymmetric variations of our test.

2 Learning Hierarchical High-Density Regions

Our approach for generalizing the KS test is based on estimating a hierarchical set of MV-sets in input space. In this section we introduce a method for finding such a set in high-dimensional data. Following the notion of multivariate quantiles [8], let X = {x1, . . . , xn} be a set of examples drawn i.i.d. with respect to a probability distribution F defined on a measurable space (R^d, S). Let λ be a real-valued function defined on C ⊂ S. Then, the minimum-volume set (MV-set) with respect to F, λ, and C at level α is

C(α) = argmin_{C′∈C} {λ(C′) : F(C′) ≥ α} .  (1)

If more than one set attains the minimum, one is picked. Equivalently, if F(C) is replaced with F_n(C) = (1/n) ∑_{i=1}^n 1_C(xi), then C_n(α) is one of the empirical MV-sets that attains the minimum. In the following we think of λ as the Lebesgue measure on R^d. Polonik introduced a new approach that uses a hierarchical set of MV-sets to generalize the KS test beyond one dimension. Assume F has a density function f with respect to λ, and let L_f(c) = {x : f(x) ≥ c} be the level set of f at level c. Sufficient regularity conditions on f are assumed. Polonik observed that if L_f(c) ∈ C, then L_f(c) is an MV-set of F at level α = F(L_f(c)).
He thus suggested that level sets can be used as approximations of the MV-sets of a distribution. Hence, a density estimator was used to define a family of MV-sets {C(α), α ∈ [0, 1]} such that the hierarchy constraint C(α) ⊂ C(β) is satisfied for 0 ≤ α < β ≤ 1. We also use hierarchical MV-sets to represent distributions in our research. However, since density estimation is hard to apply in high-dimensional data, a more practical solution is proposed. Instead of basing our method on the products of a density estimation method, we introduce a novel nonparametric method, which uses MV-set estimators (OCSVM and OCNM) as a basic component, to estimate hierarchical MV-sets without the need for a density estimation step.

2.1 Learning Minimum-Volume Sets with One-Class SVM Estimators

OCSVM is a nonparametric method for estimating a high-density region of a high-dimensional distribution [19]. Consider a function Φ : R^d → F mapping the feature vectors in X to a hypersphere in an infinite Hilbert space F. Let H be a hypothesis space of half-space decision functions fC(x) = sgn((w · Φ(x)) − ρ) such that fC(x) = +1 if x ∈ C, and −1 otherwise. To separate X from the origin, the learner is asked to solve this quadratic program:

min_{w∈F, ξ∈R^n, ρ∈R} (1/2)∥w∥² − ρ + (1/(νn)) ∑_i ξi, s.t. (w · Φ(xi)) ≥ ρ − ξi, ξi ≥ 0,  (2)

where ξ is the vector of slack variables, and 0 < ν < 1 is a regularization parameter related to the proportion of outliers in the training data. All training examples xi for which (w · Φ(xi)) − ρ ≤ 0 are called support vectors (SVs). Outliers are the examples that strictly satisfy (w · Φ(xi)) − ρ < 0. Since the algorithm depends only on dot products in F, Φ never needs to be explicitly computed, and a kernel function k(·, ·) is used instead, such that k(xi, xj) = (Φ(xi) · Φ(xj))_F. The following theorem draws the connection between the ν regularization parameter and the region C provided by the solution of Equation 2:

Theorem 1 (Schölkopf et al. [19]).
Assume the solution of Equation 2 satisfies ρ ≠ 0. The following statements hold: (1) ν is an upper bound on the fraction of outliers. (2) ν is a lower bound on the fraction of SVs. (3) Suppose X was generated i.i.d. from a distribution F which does not contain discrete components. Suppose, moreover, that the kernel k is analytic and non-constant. Then, with probability 1, asymptotically, ν is equal both to the fraction of SVs and to the fraction of outliers.

This theorem shows that we can use OCSVMs to estimate high-density regions in the input space while bounding the number of examples in X lying outside these regions. Thus, by setting ν = 1 − α, we can use OCSVMs to estimate regions approximating C(α). We use this estimation method with its original quadratic optimization scheme to learn a family of MV-sets. However, a straightforward approach of training a set of OCSVMs, each with a different ν ∈ (0, 1), would not necessarily satisfy the hierarchy requirement. In the following algorithm, we propose a modified construction of these regions such that both the hierarchy constraint and the density assumption (Theorem 1) hold for each region. Let 0 < α1 < α2 < · · · < αq < 1 be a sequence of quantiles. Given X and a kernel function k(·, ·), our hierarchical MV-sets estimator iteratively trains a set of q OCSVMs, one for each quantile, and returns a set of decision functions, f̂C(α1), . . . , f̂C(αq), that satisfy both the hierarchy and density requirements. Training starts from the largest quantile (αq). Let Di be the training set of the OCSVM trained for the αi quantile. Let fC(αi), SVbi be the decision function and the calculated outliers (bounded SVs) of the OCSVM trained for the i-th quantile. Let Oi = ∪_{j=i}^q SVbj. At each iteration, Di contains the examples in X that were not classified as outliers in previous iterations (not in Oi+1).
In addition, ν is set to the required fraction of outliers over Di that will keep the total fraction of outliers over X equal to 1 − αi. After each iteration, f̂C(αi) corresponds to the intersection of the region associated with the previous decision function and the half-space associated with the currently learned OCSVM. Thus f̂C(αi) corresponds to the region specified by an intersection of half-spaces. The outliers in Oi are points that lie strictly outside the constructed region. The pseudo-code of our estimator is given in Algorithm 1.

Algorithm 1 Hierarchical MV-sets Estimator (HMVE)
1: Input: X, 0 < α1 < α2 < · · · < αq < 1, k(·, ·)
2: Output: f̂C(α1), . . . , f̂C(αq)
3: Initialize: Dq ← X, Oq+1 ← ∅
4: for i = q to 1 do
5:   ν ← ((1 − αi)|X| − |Oi+1|) / |Di|
6:   fC(αi), SVbi ← OCSVM(Di, ν, k)
7:   if i = q then
8:     f̂C(αi)(x) ← fC(αi)(x)
9:   else
10:    f̂C(αi)(x) ← fC(αi)(x) if f̂C(αi+1)(x) = +1, and −1 otherwise
11:   Oi ← Oi+1 ∪ SVbi, Di−1 ← Di \ SVbi
12: return f̂C(α1), . . . , f̂C(αq)

The following theorem shows that the regions specified by the decision functions f̂C(α1), . . . , f̂C(αq) are: (a) approximations of the MV-sets in the same sense suggested by Schölkopf et al., and (b) hierarchically nested. In the following, Ĉ(αi) denotes the estimate of C(αi) with respect to f̂C(αi).

Theorem 2. Let f̂C(α1), . . . , f̂C(αq) be the decision functions returned by Algorithm 1 with parameters {α1, . . . , αq}, X, k(·, ·). Assume X is separable. Let Ĉ(αi) be the region in the input space

[Figure 1: Left: Estimated MV-sets Ĉ(αi) in the original input space, q = 3. Right: the projected Ĉ(αi) in F.]
[Figure 2: Averaged symmetric differences against the number of training points for the OCSVM / OCNM versions of our estimator, and the KDE2d density estimator.]

associated with f̂C(αi), and let SVubi be the set of (unbounded) SVs lying on the separating hyperplane in the region associated with fC(αi)(x). Then, the following statements hold: (1) Ĉ(αi) ⊆ Ĉ(αj) for αi < αj. (2) |Oi|/|X| ≤ 1 − αi ≤ (|SVubi| + |Oi|)/|X|. (3) Suppose X was drawn i.i.d. from a distribution F which does not contain discrete components, and k is analytic and non-constant. Then, 1 − αi is asymptotically equal to |Oi|/|X|.

Proof. Statement (1) holds by the definition of f̂C(αi). Statements (2)-(3) are proved by induction on the number of iterations. In the first iteration f̂C(αq) equals fC(αq). Thus, since Oq = SVbq and ν = 1 − αq, statements (2)-(3) follow directly from Theorem 1.¹ Then, by the induction hypothesis, statements (2)-(3) hold for the first n − 1 iterations, over the αq, . . . , αq−n+1 quantiles. We now prove that statements (2)-(3) hold for f̂C(αq−n) in the next iteration. Since f̂C(αq−n+1)(x) = −1 implies f̂C(αq−n)(x) = −1, the points of Oq−n+1 are outliers with respect to f̂C(αq−n). In addition, ν = ((1 − αq−n)|X| − |Oq−n+1|)/|Di|. Hence, following Theorem 1, the total number of outliers with respect to X satisfies |Oq−n| = |SVbq−n| + |Oq−n+1| ≤ ν|Di| + |Oq−n+1| = (1 − αq−n)|X|, and |SVubq−n| + |Oq−n| ≥ (1 − αq−n)|X|. Hence, |Oq−n|/|X| ≤ 1 − αq−n ≤ (|SVubq−n| + |Oq−n|)/|X|. In the same manner, under the conditions of statement (3), |Oq−n| is asymptotically equal to (1 − αq−n)|X|, and hence, asymptotically, 1 − αq−n = |Oq−n|/|X|.

Figure 1 illustrates the estimated MV-sets Ĉ(αi) in both the original and the projected spaces. On the left, all Ĉ(αi) regions in the original input space are colored with decreasing gray levels.
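Algorithm 1 can be sketched compactly in code. The version below is ours, not the authors': it keeps the control flow of Algorithm 1 (per-quantile ν, the shrinking training set Di, the accumulated outlier set Oi) but, to stay self-contained, replaces the OCSVM call with a simple stand-in that realizes the ν-property exactly — it scores points by their k-th-nearest-neighbor distance (an OCNM-style sparsity measure; see Section 2.2) and declares the round(ν|Di|) sparsest points outliers:

```python
import numpy as np

def knn_sparsity(D, k=3):
    """Sparsity of each point of D: the distance to its k-th nearest neighbor in D."""
    dists = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=2)
    return np.sort(dists, axis=1)[:, k]  # column 0 is the zero self-distance

def hmve(X, alphas, k=3):
    """Hierarchical MV-set estimation in the spirit of Algorithm 1.

    Returns, for each alpha, a boolean mask over X marking the points inside
    the estimated region; the masks are nested by construction."""
    n = len(X)
    inside = np.ones(n, dtype=bool)            # complement of O_{i+1}
    regions = {}
    for a in sorted(alphas, reverse=True):     # from the largest quantile down
        D_idx = np.flatnonzero(inside)         # indices of the training set D_i
        n_prev_out = n - len(D_idx)            # |O_{i+1}|
        nu = ((1.0 - a) * n - n_prev_out) / len(D_idx)   # line 5 of Algorithm 1
        n_new_out = int(round(nu * len(D_idx)))
        order = np.argsort(knn_sparsity(X[D_idx], k))    # sparsest points last
        inside = np.zeros(n, dtype=bool)
        inside[D_idx[order[: len(D_idx) - n_new_out]]] = True
        regions[a] = inside.copy()
    return regions
```

By construction, exactly (1 − αi)·n points end up outside Ĉ(αi), matching statement (3) of Theorem 2, and each region is a subset of the previous one because only points inside the outer region are ever retained.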
Note that ˆC(αi) is a subset of ˆC(αj) if i < j. On the right, the projected regions of all ˆC(αi)s in F are marked with the same colors. Examples xi in the input space and their mapped vectors φ(xi) in F are contained in the same relative regions in both spaces. It can be seen that the projections of ˆC(αi) in F are the intersecting half-spaces learned by Algorithm 1.

2.2 Learning Minimum-Volume Sets with One-Class Neighbor Machine Estimators

OCNM [15] is an alternative method to the OCSVM estimator for finding regions close to C(α). Unlike OCSVM, the OCNM solution is proven to be asymptotically close to the specified MV-set². Degenerate structures in data that may damage the generalization of SVMs could be another reason for choosing OCNM [24]. In practice, for finite sample sizes, it is not clear which estimator is more accurate.

¹Note that the separability of the data implies that the solution of Equation 2 satisfies ρ ≠ 0.
²Schölkopf et al. [19] proved that the set provided by OCSVM converges asymptotically to the correct probability and not to the correct MV-set. Although this property should be sufficient for the correctness of our test, Polonik observed that MV-sets are preferred.

OCNM uses either a sparsity or a concentration neighborhood measure. M : (Rd, X) → R is a sparsity measure if f(x) > f(y) implies lim|X|→∞ P(M(x, X) < M(y, X)) = 1. An example of a valid sparsity measure is the distance of x to its kth-nearest neighbor in X. When a sparsity measure is used, the OCNM estimator solves the following linear problem

max_{ξ∈Rn, ρ∈R}  νnρ − Σ_{i=1}^{n} ξi,   s.t.  M(xi, X) ≥ ρ − ξi,  ξi ≥ 0,   (3)

such that the resulting decision function fC(x) = sgn(ρ − M(x, X)) satisfies bounds and convergence properties similar to those mentioned in Theorem 1 (ν-property). OCNM can replace OCSVM in our hierarchical MV-sets estimator.
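A compact sketch of an OCNM-style estimator with the kth-nearest-neighbor sparsity measure (our own simplification, not the reference implementation): because the objective above is piecewise linear in ρ, the optimum is an order statistic of the sparsity values, and here ρ is simply set so that a fraction ν of the training sample is flagged as outliers — the ν-property the decision function should satisfy.

```python
import numpy as np

def ocnm_fit(X, nu, k=5):
    """OCNM sketch: sparsity M(x) = distance to x's kth nearest neighbor in X.
    rho is chosen as the (1 - nu)-quantile of the sparsity values, so that
    roughly a fraction nu of the training points satisfy M(x) > rho."""
    X = np.asarray(X, dtype=float)

    def sparsity(P):
        # pairwise distances from the rows of P to the training set X
        d = np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2)
        d.sort(axis=1)
        # index k skips the zero self-distance when P consists of training
        # points; for unseen points this is the (k+1)th neighbor (sketch).
        return d[:, k]

    rho = np.quantile(sparsity(X), 1.0 - nu)

    def decide(P):
        # f_C(x) = sgn(rho - M(x, X)): +1 inside, -1 outlier
        return np.where(rho - sparsity(np.atleast_2d(P)) >= 0, 1, -1)

    return decide
```

Unlike OCSVM, no quadratic program is needed; the cost is dominated by the nearest-neighbor distance computations, which is exactly the expense noted for OCNM later in the experiments.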
In contrast to OCSVMs, when OCNMs are iteratively trained on X using a growing sequence of ν values, outliers need not be removed from previous iterations to ensure that the ν-property will hold for each decision function. Hence, a simpler version of Algorithm 1 can be used, where X is used for training all OCNMs and ν = 1 − αi at each step³. Since Theorem 2 relies on the ν-property of the estimator, it can be shown that statements similar to those of Theorem 2 also hold when OCNM is used.

As previously discussed, since the estimation of MV-sets is simpler than density estimation, our test can achieve higher accuracy than approaches based on density estimation. To illustrate this hypothesis empirically, we conducted the following preliminary experiment. We sampled 10 to 50 i.i.d. points from a two-dimensional mixture-of-Gaussians distribution p = ½N(µ = (0.5, 0.5), Σ = 0.1I) + ½N(µ = (−0.5, −0.5), Σ = 0.5I). We used the OCNM and OCSVM versions of our estimator to approximate hierarchical MV-sets for qα = 9 quantiles: α = 0.1, 0.2, . . . , 0.9 (detailed setup parameters are discussed in Section 4). MV-sets estimated with the KDE2d kernel-density estimator [2] were used for comparison. For each sample size, we measured the error of each method according to the mean weighted symmetric difference between the true MV-sets and their estimates, (1/qα) Σ_α ∫_{C(α) Δ Ĉ(α)} p(x) dx. Results, averaged over 50 simulations, are shown in Figure 2. The advantages of our approach can easily be seen: both versions of our estimator perform notably better, especially for small sample sizes.

3 Generalized Kolmogorov-Smirnov Test

We now introduce a nonparametric, generalized Kolmogorov-Smirnov (GKS) statistical test for determining whether F ≠ F′ in high-dimensional data. Assume F, F′ are one-dimensional continuous distributions and Fn, F′m are empirical distributions estimated from n and m examples i.i.d. drawn from F, F′.
Then, the two-sample Kolmogorov-Smirnov (KS) statistic is

KSn,m = sup_{x∈R} |Fn(x) − F′m(x)|   (4)

and, under the null hypothesis F = F′, √(nm/(n+m)) · KSn,m is asymptotically distributed as sup_{x∈R} |B(F(x))| for a standard Brownian bridge B. Under the null hypothesis, assume F = F′ and let F⁻¹ be the quantile transform of F, i.e., the inverse of F. Then we can replace the supremum over x ∈ R with the supremum over α ∈ [0, 1] as follows:

KSn,m = sup_{α∈[0,1]} |Fn(F⁻¹(α)) − F′m(F⁻¹(α))|.   (5)

Note that in the one-dimensional setting, F⁻¹(α) is the point x s.t. P(X ≤ x) = α, where X is a random variable drawn from F. Equivalently, F⁻¹(α) can be identified with the interval [−∞, x]. In a high-dimensional space these intervals can be replaced by hierarchical MV-sets C(α) [18], and hence, Equation 5 can be calculated regardless of the input space dimensionality. We suggest replacing KSn,m with

Tn,m = sup_{α∈[0,1]} |Fn(C(α)) − F′m(C(α))|.   (6)

For estimating C(α) we use our nonparametric method from Section 2. ˆC(α) is learned with X and marked as ˆCX(α). In practice, when |X| is finite, the expected proportion of examples that lie within ˆCX(αi) is not guaranteed to be exactly αi. Therefore, after learning the decision functions, we estimate Fn( ˆCX(αi)) by a k-fold cross-validation procedure. Our final test statistic is

ˆTn,m = sup_{1≤i≤q} | ˆFn( ˆCX(αi)) − Fm( ˆCX(αi)) |,   (7)

where ˆFn( ˆCX(αi)) is the estimate of Fn( ˆCX(αi)). The two-sample KS statistical test is applied to ˆTn,m to calculate the resulting p-value.

³Note that intersection is still needed (Algorithm 1, line 10) to ensure the hierarchical property on ˆC(αi).

The test defined above works only in one direction, predicting whether the distributions of the samples share the same “concentrations” as regions estimated according to X, and not according to X′.
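The statistic in Eq. 7 is straightforward to compute once the decision functions and the cross-validated baseline fractions are in hand. A minimal sketch (our own illustration; the argument names are ours):

```python
import numpy as np

def gks_statistic(f_hats, Fn_hat, X_new):
    """T-hat_{n,m} of Eq. (7): sup_i |F-hat_n(C_X(a_i)) - F'_m(C_X(a_i))|.

    f_hats:  decision functions for C_X(alpha_1..alpha_q) (+1 = inside)
    Fn_hat:  cross-validated estimates of the baseline mass of each region
    X_new:   the recent sample X'
    """
    Fm = np.array([np.mean(f(X_new) == 1) for f in f_hats])
    return float(np.max(np.abs(np.asarray(Fn_hat, dtype=float) - Fm)))
```

The resulting value is then fed to the two-sample KS test to obtain a p-value, as described above.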
We may symmetrize it by running the non-symmetric test twice, once in each direction, and returning twice their minimum p-value (Bonferroni correction). Note that by doing so in the context of a change-detection task, we pay in the runtime required for learning MV-sets for each X′.

4 Empirical Evaluation

We first evaluated our test on concept-drift detection problems in data-stream classification tasks. Concept drifts are associated with distributional changes in data streams that occur due to hidden context [22] — changes of which the classifier is unaware. We used the 27 UCI datasets used in [6], and 6 additional high-dimensionality UCI datasets: arrhythmia, madelon, semeion, internet advertisement, hill-valley, and musk. The average number of features over all datasets is 123⁴.

Following the experimental setup used by [11, 6], we generated, for each dataset, a sequence ⟨x1, . . . , xn+m⟩, where the first n examples are associated with the most frequent label, and the following m examples with the second most frequent. Within each label the examples were shuffled randomly. The first 100 examples ⟨x1, . . . , x100⟩, associated, in all datasets, with the most common label, were used as the baseline dataset X. A sliding window of 50 consecutive examples over the following sequence of examples was iteratively used to define the most recent data X′ at hand. Statistical tests were evaluated with X and all possible X′ windows. In total, for each dataset, the set {⟨X, X′i⟩ | X′i = {xi, . . . , xi+49}, 101 ≤ i ≤ n + m − 49} of pairs was used for evaluation.

[Figure: illustration of the evaluation setup — a training set ⟨x1, . . . , x100⟩ followed, over time, by sliding testing windows ⟨xi, . . . , xi+49⟩.]
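In 0-indexed code, the sliding-window construction just described amounts to the following (a sketch; the function name is ours):

```python
def make_eval_pairs(seq, n_base=100, w=50):
    """Baseline X = first n_base examples; X' = every length-w sliding
    window over the remainder (the pairs <X, X'_i> described above)."""
    X = seq[:n_base]
    return [(X, seq[i:i + w]) for i in range(n_base, len(seq) - w + 1)]
```

For a sequence of length n + m this yields n + m − 149 pairs, matching the index range 101 ≤ i ≤ n + m − 49 in 1-indexed notation.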
The pairs ⟨X, X′i⟩, i ≤ n − 49, where all examples in X′i have the same labels as in X, are considered “unchanged.” The remaining pairs are considered “changed.” Performance is evaluated using precision-recall values with respect to the change-detection task.

We compare our one-directional (GKS1d) and two-directional (GKS2d) tests to the following 5 reference tests: the kdq-tree test (KDQ) [4], the metavariable Wald-Wolfowitz test (WW) [10], kernel change detection (KCD) [5], the maximum mean discrepancy test (MMD) [12], and the PAC-Bayesian margin test (PBM) [6]. See Section 5 for details. All tests, except for MMD, were implemented and their parameters set in accordance with the suggested settings in their associated papers. The implementation of the MMD test provided by the authors⁵ was used with default parameters (RBF kernels with automatic kernel width detection) and Rademacher bounds. Similar results were also measured for asymptotic bounds. Note that we cannot compare our test to Polonik’s test since density estimation and level-set extraction are not practically feasible on high-dimensional data.

The LibSVM package [3] with a Gaussian kernel (γ = 2/#features) was used for the OCSVMs. The distance from a point to its kth-nearest neighbor was used as the sparsity measure for the OCNMs; k is set to 10% of the sample size⁶. α = 0.1, 0.2, . . . , 0.9 were used for all experiments.

⁴Nominal features were transformed into numeric ones using binary encoding; missing values were replaced by their features’ average values.
⁵The code can be downloaded at http://people.kyb.tuebingen.mpg.de/arthur/mmd.htm.
⁶Preliminary experiments show similar results obtained with k equal to 10, 20, . . . , 50% of |X|.

[Figure 3: Precision-recall curves averaged over all 33 experiments for GKS1d (OCSVMs), GKS2d (OCSVMs), and the 5 reference tests.]
[Figure 4: Precision-recall curves averaged over all 33 experiments for GKS1d (OCSVMs), GKS2d (OCSVMs), GKS1d (OCNMs), and GKS2d (OCNMs).]

4.1 Results

For better visualization, results are shown in two separate figures: Figure 3 shows the precision-recall plots averaged over the 33 experiments for the OCSVM version of our tests and the 5 reference tests. Figure 4 shows the precision-recall plots averaged over the 33 experiments for the OCSVM and OCNM versions of our tests. In both versions, GKS1d and GKS2d provide the best precision-recall compromise. For example, for the OCSVM version, at a recall of 0.86, GKS1d accurately detects distributional changes with 0.90 precision and GKS2d with 0.88 precision, while the second-best competitor does so with 0.84 precision. In terms of their break-even point (BEP) measures – the points at which precision equals recall – GKS1d outperforms the other 5 reference tests with a BEP of 0.89, while its second-best competitor does so with a BEP of 0.84.

Mean precisions for each dataset were compared using the Wilcoxon statistical test with α = 0.05. Here, too, GKS1d performs significantly better than all others for both OCSVM and OCNM versions, except for MMD, with a p-value of 0.08 for GKS1d (OCSVM) and 0.12 for GKS1d (OCNM). Although the plots for our GKS1d (OCSVM) test (Figure 4) look better than GKS2d, no significant difference was found. This result is consistent with previous studies, which claim that variants of solutions whose goal is to make the tests more symmetric have empirically shown no conclusive superiority [4]. We also found that the GKS1d (OCSVM) version of our test has the least runtime and scales well with dimensionality, while the GKS1d (OCNM) version suffers from increased time complexity, especially in high dimensions, due to its expensive neighborhood measure.
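The break-even point used above can be computed from a precision-recall curve by taking the point where the two values are closest (a simple approximation for illustration; the paper does not spell out its interpolation scheme):

```python
import numpy as np

def break_even_point(precisions, recalls):
    """BEP: the precision/recall value at the curve point where
    precision and recall are (approximately) equal."""
    p = np.asarray(precisions, dtype=float)
    r = np.asarray(recalls, dtype=float)
    i = int(np.argmin(np.abs(p - r)))   # point closest to the p == r diagonal
    return float((p[i] + r[i]) / 2.0)
```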
However, note that this observation is true only when off-line computational processing of X is not considered. As opposed to the KCD and PBM tests, our GKS1d test need not be retrained on each X′. Hence, in the context where X is treated as a baseline dataset, GKS1d (OCSVM) is relatively cheap, and is estimated in O(nm) time (the total number of SVs used to calculate f′C(α1), . . . , f′C(αq) is O(n)). In comparison to the other tests, it is still the least computationally demanding⁷.

4.2 Topic Change Detection among Documents

We evaluated our test on an additional setup of high-dimensionality problems pertaining to the detection of topic changes in streams of documents. We used the 20-Newsgroup document corpus⁸. 1000 words were randomly picked to generate 1000 bag-of-words features. 12 categories were used for the experiments⁹. Topic changes were simulated between all pairs of categories (66 pairs in total), using the same methodology as in the previous UCI experiments. Due to the excessive runtime of some of the tests, especially with high-dimensional data, we evaluated only 4 of the 7 methods: GKS1d (OCSVM), WW, MMD, and KDQ, whose expected runtime may be more reasonable. Once again, our GKS1d test dominates the others with the best precision-recall compromise. With regard to BEP values, GKS1d outperforms the other reference tests with a BEP of 0.67 (0.70 precision on average), while its second-best competitor (MMD) does so with a BEP of 0.62 (0.64 precision on average).

⁷MMD and WW complexities are estimated as O((n + m)²), where n, m are the sample sizes. KDQ uses bootstrapping for p-value estimations, and hence, is more expensive.
⁸The 20-Newsgroup corpus is at http://people.csail.mit.edu/jrennie/20Newsgroups/.
⁹The selection of these categories is based on the train/test split defined in http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html.
According to the Wilcoxon statistical test with α = 0.05, GKS1d performs significantly better than the others in terms of their average precision measures.

5 Related Work

Our proposed test belongs to a family of nonparametric tests for detecting change in multivariate data that compare distributions without the intermediate density-estimation step. Our reference tests were thus taken from this family of studies. (1) The kdq-tree test (KDQ) [4] uses a spatial scheme (called a kdq-tree) to partition the data into small cells. Then, the Kullback-Leibler (KL) distance is used to measure the difference between data counts for the two samples in each cell. A permutation (bootstrapping) test [7] is used to calculate the significance of the difference (p-value). (2) The metavariable Wald-Wolfowitz test (WW) [10] measures the differences between two samples according to the minimum spanning tree in the graph of distances between all pairs in both samples. Then, the Wald-Wolfowitz test statistics are computed over the number of components left in the graph after removing edges between examples of different samples. (3) The kernel change detection test (KCD) [5] measures the distance between two samples according to a “Fisher-like” distance. This distance is based on hypercircle characteristics of the resulting two OCSVMs, which were trained separately on each sample. (4) The maximum mean discrepancy test (MMD) [12] measures discrepancy according to a complete matrix of kernel-based dissimilarity measures between all examples, over which test statistics are then computed. (5) The PAC-Bayesian margin test (PBM) [6] measures the distance between two samples according to the average margins of a linear SVM classifier between the samples, over which test statistics are computed.
As discussed in detail before, our test follows the general approach of Polonik but differs in three important ways: (1) While Polonik uses a density estimator for specifying the MV-sets, we introduce a simpler method that finds the MV-sets directly from the data. Our method is thus more practical and accurate in high-dimensional or small-sample-size settings. (2) Once the MV-sets are defined, Polonik uses their hypothetical quantiles as the expected plots, and hence runs the KS test in its one-sample version (goodness-of-fit test). We take a more practically accurate approach for finite sample sizes, when approximations of MV-sets are not precise. Instead of using the hypothetical measures, we estimate the expected plots of X empirically and use the two-sample KS test instead. (3) Unlike Polonik’s work, ours was evaluated empirically and its superiority demonstrated over a wide range of nonparametric tests. Moreover, since Polonik’s test relies on density estimation and the ability to extract its level sets, it is not practically feasible in high-dimensional settings.

Other methods for estimating MV-sets exist in the literature [21, 1, 16, 13, 20, 23, 14]. Unfortunately, for problems beyond two dimensions and non-convex sets, there is often a gap between their theoretical and practical estimates [20]. We chose OCSVM and OCNM here because they perform well on small, high-dimensional samples.

6 Discussion and Summary

This paper makes two contributions. First, it proposes a new method that uses OCSVMs or OCNMs to represent high-dimensional distributions as a hierarchy of high-density regions. This method is used here for statistical tests, but can also serve as a general, black-box method for efficient and practical representation of high-dimensional distributions. Second, it presents a nonparametric, generalized KS test that uses our representation method to detect distributional changes in high-dimensional data.
Our test was found superior to competing tests in the sense of average precision and BEP measures, especially in the context of change-detection tasks. An interesting and still open question is how we should set the input α quantiles for our method. The problem of determining the number of quantiles – and the gaps between consecutive ones – is related to the problem of histogram design.

References
[1] S. Ben-David and M. Lindenbaum. Learning distributions by their density levels: A paradigm for learning without a teacher. Journal of Computer and System Sciences, 55(1):171–182, 1997.
[2] Z. I. Botev, J. F. Grotowski, and D. P. Kroese. Kernel density estimation via diffusion. The Annals of Statistics, 38(5):2916–2957, 2010.
[3] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001.
[4] T. Dasu, S. Krishnan, S. Venkatasubramanian, and K. Yi. An information-theoretic approach to detecting changes in multi-dimensional data streams. In INTERFACE, 2006.
[5] F. Desobry, M. Davy, and C. Doncarli. An online kernel change detection algorithm. IEEE Transactions on Signal Processing, 53(8):2961–2974, 2005.
[6] Anton Dries and Ulrich Rückert. Adaptive concept drift detection. Statistical Analysis and Data Mining, 2(5-6):311–327, 2009.
[7] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman and Hall/CRC, 1994.
[8] J. H. J. Einmahl and D. M. Mason. Generalized quantile processes. The Annals of Statistics, pages 1062–1078, 1992.
[9] G. Fasano and A. Franceschini. A multidimensional version of the Kolmogorov-Smirnov test. Monthly Notices of the Royal Astronomical Society, 225:155–170, 1987.
[10] J. H. Friedman and L. C. Rafsky. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, 7(4):697–717, 1979.
[11] J. Gama, P. Medas, G. Castillo, and P. Rodrigues. Learning with drift detection. In SBIA, pages 66–112. Springer, 2004.
[12] A. Gretton, K. M. Borgwardt, M. Rasch, B.
Schölkopf, and A. J. Smola. A kernel method for the two-sample problem. Machine Learning, 1:1–10, 2008.
[13] X. Huo and J. C. Lu. A network flow approach in finding maximum likelihood estimate of high concentration regions. Computational Statistics & Data Analysis, 46(1):33–56, 2004.
[14] D. M. Mason and W. Polonik. Asymptotic normality of plug-in level set estimates. The Annals of Applied Probability, 19(3):1108–1142, 2009.
[15] A. Munoz and J. M. Moguerza. Estimation of high-density regions using one-class neighbor machines. In PAMI, pages 476–480, 2006.
[16] J. Nunez Garcia, Z. Kutalik, K. H. Cho, and O. Wolkenhauer. Level sets and minimum volume sets of probability density functions. International Journal of Approximate Reasoning, 34(1):25–47, 2003.
[17] J. A. Peacock. Two-dimensional goodness-of-fit testing in astronomy. Monthly Notices of the Royal Astronomical Society, 202:615–627, 1983.
[18] W. Polonik. Concentration and goodness-of-fit in higher dimensions: (asymptotically) distribution-free methods. The Annals of Statistics, 27(4):1210–1229, 1999.
[19] Bernhard Schölkopf, John C. Platt, John Shawe-Taylor, Alex J. Smola, and Robert C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[20] C. D. Scott and R. D. Nowak. Learning minimum volume sets. The Journal of Machine Learning Research, 7:665–704, 2006.
[21] G. Walther. Granulometric smoothing. The Annals of Statistics, pages 2273–2299, 1997.
[22] G. Widmer and M. Kubat. Learning in the presence of concept drift and hidden contexts. Machine Learning, 23(1):69–101, 1996.
[23] R. M. Willett and R. D. Nowak. Minimax optimal level-set estimation. IEEE Transactions on Image Processing, 16(12):2965–2979, 2007.
[24] John Wright, Yi Ma, Yangyu Tao, Zhouchen Lin, and Heung-Yeung Shum. Classification via minimum incremental coding length. SIAM J. Imaging Sciences, 2(2):367–395, 2009.
Factorial LDA: Sparse Multi-Dimensional Text Models

Michael J. Paul and Mark Dredze
Human Language Technology Center of Excellence (HLTCOE)
Center for Language and Speech Processing (CLSP)
Johns Hopkins University, Baltimore, MD 21218
{mpaul,mdredze}@cs.jhu.edu

Abstract

Latent variable models can be enriched with a multi-dimensional structure to consider the many latent factors in a text corpus, such as topic, author perspective and sentiment. We introduce factorial LDA, a multi-dimensional model in which a document is influenced by K different factors, and each word token depends on a K-dimensional vector of latent variables. Our model incorporates structured word priors and learns a sparse product of factors. Experiments on research abstracts show that our model can learn latent factors such as research topic, scientific discipline, and focus (methods vs. applications). Our modeling improvements reduce test perplexity and improve human interpretability of the discovered factors.

1 Introduction

There are many factors that contribute to a document’s word choice: topic, syntax, sentiment, author perspective, and others. Latent variable “topic models” such as latent Dirichlet allocation (LDA) implicitly model a single factor of topical content [1]. More in-depth analyses of corpora call for models that are explicitly aware of additional factors beyond topic. Some topic models have been used to model specific factors like sentiment [2], and more general models – like the topic aspect model [3] and sparse additive generative models (SAGE) [4] – have jointly considered both topic and another factor, such as perspective. Most prior work has only considered two factors at once¹. This paper presents factorial LDA, a general framework for multi-dimensional text models that capture an arbitrary number of factors.
While standard topic models associate each word token with a single latent topic variable, a multi-dimensional model associates each token with a vector of multiple factors, such as (topic, political ideology) or (product type, sentiment, author age). Scaling to an arbitrary number of factors poses challenges that cannot be addressed with existing two-dimensional models. First, we must ensure consistency across different word distributions which have the same components. For example, the word distributions associated with the (topic, perspective) pairs (ECONOMICS, LIBERAL) and (ECONOMICS, CONSERVATIVE) should both give high probability to words about economics. Additionally, increasing the number of factors results in a multiplicative increase in the number of possible tuples that can be formed, and not all tuples will be well supported by the data. We address these two issues by adding additional structure to our model: we impose structured word priors that link tuples with common components, and we place a sparse prior over the space of possible tuples. We demonstrate that both of these model structures lead to improvements in model performance. In the next section, we introduce our model, where our main contributions are to:

• introduce a general model that can accommodate K different factors (dimensions) of language,
• design structured priors over the word distributions that tie together common factors,
• enforce a sparsity pattern which excludes unsupported combinations of components (tuples).

We then discuss our inference procedure (§4) and share experimental results (§5).

¹A recent variant of SAGE modeled three factors in historic documents: topic, time, and location [5].

2 Factorial LDA: A Multi-Dimensional Generative Model

Latent Dirichlet allocation (LDA) [1] assumes we have a set of Z latent components (usually called “topics” in the context of text modeling), and each data point (a document) has a discrete distribution θ over these topics.
The set of topics can be thought of as a vector of length Z, where each cell is a pointer to a discrete distribution over words, parameterized by φz. Under LDA, a document is generated by choosing the topic distribution θ from a Dirichlet prior; then for each token we sample a latent topic t from this distribution before sampling a word w from the tth word distribution φt. Without additional structure, LDA tends to learn distributions which correspond to semantic topics (such as SPORTS or ECONOMICS) [6] which dominate the choice of words in a document, rather than syntax, perspective, or other aspects of document content.

Imagine that instead of a one-dimensional vector of Z topics, we have a two-dimensional matrix of Z1 components along one dimension (rows) and Z2 components along the other (columns). This structure makes sense if a corpus is composed of two different factors, and the two dimensions might correspond to factors such as news topic and political perspective (if we are modeling newspaper editorials), or research topic and discipline (if we are modeling scientific papers). Individual cells of the matrix would represent pairs such as (ECONOMICS, CONSERVATIVE) or (GRAMMAR, LINGUISTICS), and each is associated with a word distribution φ⃗z. Conceptually, this is the idea behind the two-dimensional models of TAM [3] and SAGE [4].

Let us expand this idea further by assuming K factors modeled with a K-dimensional array, where each cell of the array has a pointer to a word distribution corresponding to that particular K-tuple. For example, in addition to topic and perspective, we might want to model a third factor of the author’s gender in newspaper editorials, yielding triples such as (ECONOMICS, CONSERVATIVE, MALE). Conceptually, each K-tuple ⃗t functions as a topic in LDA (with an associated word distribution φ⃗t), except that K-tuples imply a structure, e.g. the pairs (ECONOMICS, CONSERVATIVE) and (ECONOMICS, LIBERAL) are related.
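The LDA generative story just described can be written down directly (a toy sketch of our own, not the authors' code):

```python
import numpy as np

def generate_lda_document(alpha, phi, n_tokens, rng):
    """LDA generative process: theta ~ Dirichlet(alpha); for each token,
    topic t ~ Categorical(theta), then word w ~ Categorical(phi[t])."""
    theta = rng.dirichlet(alpha)                       # document-topic dist.
    topics = rng.choice(len(alpha), size=n_tokens, p=theta)
    words = [int(rng.choice(phi.shape[1], p=phi[t])) for t in topics]
    return topics, words
```

In f-LDA the only change is that the single topic index t becomes a K-tuple ⃗z drawn from the document's distribution over tuples.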
This is the idea behind factorial LDA (f-LDA). At its core, our model follows the basic template of LDA, but each word token is associated with a K-tuple rather than a single topic value. Under f-LDA, each document has a distribution over tuples, and each tuple indexes into a distribution over words. Of course, without additional structure, this would simply be equivalent to LDA with ∏k Zk topics. In f-LDA, we induce a factorial structure by creating priors which tie together tuples that share components: distributions involving the pair (ECONOMICS, CONSERVATIVE) should have commonalities with distributions for (ECONOMICS, LIBERAL). The key ingredients of our new model are:

• We model the intuition that tuples which share components should share other properties. For example, we expect the word distributions for (ECONOMICS, CONSERVATIVE) and (ECONOMICS, LIBERAL) to both give high probability to words about economics, while the pairs (ECONOMICS, LIBERAL) and (ENVIRONMENT, LIBERAL) should both reflect words about liberalism. Similarly, we want each document’s distribution over tuples to reflect the same type of consistency. If a document is written from a liberal perspective, then we believe that pairs of the form (*, LIBERAL) are more likely to have high probability than pairs with CONSERVATIVE as the second component. This consistency across factors is encouraged by sharing parameters across the word and topic prior distributions in the model: this encodes our a priori assumption that distributions which share components should be similar.

• Additionally, we allow for sparsity across the set of tuples. As the dimensionality of the array increases, we encounter problems of overparameterization, because the model will likely contain more tuples than are observed in the data. We handle this by having an auxiliary multi-dimensional array which encodes a sparsity pattern over tuples. The priors over tuples are augmented with this sparsity pattern.
These priors model the belief that the Cartesian product of factors should be sparse; the posterior may “opt out” of some tuples.

[Figure 1: (a) Factorial LDA as a graphical model. (b) An illustration of word distributions in f-LDA with two factors. When applying f-LDA to a collection of scientific articles from various disciplines, we learn weights ω corresponding to a topic we call WORDS and the discipline EDUCATION, as well as background words. These weights are combined to form the Dirichlet prior, and the distribution for (WORDS, EDUCATION) is drawn from this prior: this distribution describes writing education.]

The generative story (we’ll describe the individual pieces below) is as follows.

1. Draw the various hyperparameters α and ω from N(0, Iσ²)
2. For each tuple ⃗t = (t1, t2, . . . , tK):
   (a) Sample word distribution φ⃗t ∼ Dir(ˆω(⃗t))
   (b) Sample sparsity “bit” b⃗t ∼ Beta(γ0, γ1)
3. For each document d ∈ D:
   (a) Draw document component weights α(d,k) ∼ N(0, Iσ²) for each factor k
   (b) Sample distribution over tuples θ(d) ∼ Dir(B · ˆα(d))
   (c) For each token:
      i. Sample component tuple ⃗z ∼ θ(d)
      ii. Sample word w ∼ φ⃗z

where the Dirichlet vectors ˆω and ˆα are defined as:

ˆω(⃗t)_w ≜ exp( ω(B) + ω(0)_w + Σ_k ω(k)_{tk,w} ),   ˆα(d)_⃗t ≜ exp( α(B) + Σ_k ( α(D,k)_{tk} + α(d,k)_{tk} ) )   (1)

See Figure 1a for the graphical model, and Figure 1b for an illustration of how the weight vectors ω(0) and ω(k) are combined to form ˆω for a particular tuple that was inferred by our model.
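The log-linear construction in Eq. 1 is easy to state in code. The sketch below (our own, with made-up dimensions) builds the Dirichlet vector ˆω(⃗t) for one tuple from the bias, background, and per-factor weight vectors:

```python
import numpy as np

def omega_hat(omega_B, omega_0, omega_k, tup):
    """Eq. (1): omega-hat^(t)_w = exp(omega^(B) + omega^(0)_w
    + sum_k omega^(k)_{t_k, w}).

    omega_B: scalar bias; omega_0: background weights over the vocabulary;
    omega_k[k]: array of shape (Z_k, V) of per-component weight vectors.
    """
    total = omega_B + np.asarray(omega_0, dtype=float)
    for k, t_k in enumerate(tup):
        total = total + omega_k[k][t_k]   # shared across tuples containing t_k
    return np.exp(total)
```

The document-side vector ˆα(d) is built the same way from α(B), α(D,k), and α(d,k), and is then masked by the sparsity array B before drawing θ(d).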
The words shown have the highest weight after running our inference procedure (see §5 for experimental details). As discussed above, the only difference between f-LDA and LDA is that structure has been added to the Dirichlet priors for the word and topic distributions. We use a form of Dirichlet-multinomial regression [7] to formulate the priors for φ and θ in terms of the log-linear functions in Eq. 1. We will now describe these priors in more detail.

Prior over φ: We formulate the priors of φ to encourage word distributions to be consistent across components of each factor. For example, tuples that reflect the same topic should share words. To achieve this goal, we link the priors for tuples that share common components by utilizing a log-linear parameterization of the Dirichlet prior of φ (Eq. 1). Formally, we place a prior Dirichlet(ˆω(⃗t)) over φ⃗t, the word distribution for tuple ⃗t = (t1, t2, . . . , tK). The Dirichlet vector ˆω(⃗t) controls the precision and focus of the prior. It is a function of three types of hyperparameters. First, a single corpus-wide bias scalar ω(B), and second, a vector over the vocabulary, ω(0), which reflects the relative likelihood of different words. These respectively increase the overall precision of words and the default likelihood of each word. Finally, ω(k)_{tk,w} introduces bias parameters for each word w for component tk of the kth factor. By increasing the weight of a particular ω(k)_{tk,w}, we increase the expected relative log-probabilities of word w in φ⃗z for all ⃗z that contain component tk, thereby tying these priors together.

Prior over θ: We use a similar formulation for the prior over θ. Recall that we want documents to naturally favor tuples that share components, i.e. favoring both (ECONOMICS, CONSERVATIVE) and (EDUCATION, CONSERVATIVE) if the document favors CONSERVATIVE in general.
To address this, we let θ(d) be drawn from Dirichlet(ˆα(d)), where instead of a corpus-wide prior, each document has a vector ˆα(d) which reflects the independent contributions of the factors via a log-linear function. This function contains three types of hyperparameters. First, α(B) is the corpus-wide precision parameter (the bias); this is shared across all documents and tuples. Second, α(D,k)_{tk} indicates the bias for the kth factor's component tk across the entire corpus D, which enables the model to favor certain components a priori. Finally, α(d,k)_{tk} is the bias for the kth factor's component tk specifically in document d. This allows documents to favor certain components over others, such as the perspective CONSERVATIVE in a specific document. We assume all ωs and αs are independent and normally distributed around 0, which gives us L2 regularization during optimization. Sparsity over tuples: Finally, we describe the generation of the sparsity pattern over tuples in the corpus. We assume a K-dimensional binary array B, where an entry b⃗t corresponds to tuple ⃗t. If b⃗t = 1, then ⃗t is active: that is, we are allowed to choose ⃗t to generate a token and we learn φ⃗t; otherwise we do not. We modify the prior over θ to include a binary mask of the tuples: θ(d) ∼ Dirichlet(B · ˆα(d)), where · is the Hadamard (cell-wise) product. θ will not include tuples for which b⃗t = 0; otherwise the prior remains unchanged. We would ideally model B so that its values are in {0, 1}. While we could use a Beta-Bernoulli model (a finite Indian Buffet Process [8]) to generate a finite binary array, this model is typically learned over continuous data; learning over discrete observations (tuples) can be exceedingly difficult, since forcing the model to change a bit can yield large changes to the observations, which makes mixing very slow.2 To aid learning, we relax the constraint that B must be binary and instead allow b⃗t to be real-valued in (0, 1).
This is a common approximation used in other models, such as artificial neural networks and deep belief networks. To encourage sparsity, we place a "U-shaped" Beta(γ0, γ1) prior over b⃗t, with γ0, γ1 < 1, which yields a density function that is concentrated around the edges 0 and 1. Empirically, we will show that this effectively learns a sparse binary B. The effect is that the prior assigns tiny probabilities to some tuples instead of strictly 0.

3 Related Work

Previous work on multi-dimensional modeling includes the topic aspect model (TAM) [3], multi-view LDA (mv-LDA) [10], cross-collection LDA [11] and sparse additive generative models (SAGE) [4], which jointly consider both topic and one other factor. Other work has jointly modeled topic and sentiment [2]. Zhang et al. [12] apply PLSA [13] to multi-dimensional OLAP data, but not with a joint model. Our work is the first to jointly model an arbitrary number of factors. A rather different approach considered different dimensions of clustering using spectral methods [14], in which K different clusterings are obtained by considering K different eigenvectors. For example, product reviews can be clustered not only by topic, but also by sentiment and author attributes. We contrast this body of work with probabilistic matrix and tensor factorization models [15, 16], which model data that has already been organized in multiple dimensions; for example, topic-like models have been used to model the movie ratings within a matrix of users and movies. f-LDA and the models described above, however, operate over flat input (text documents), and it is only the latent structure that is assumed to be organized along multiple dimensions. An important contribution of f-LDA is the use of priors to tie together word distributions with the same components. Previous work with two-dimensional models, such as TAM and mv-LDA, assumes conditional independence among all φ, and there is no explicit encouragement of correlation.
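The tying effect of the structured word prior can be seen in a small check (sizes and weight values below are our own toy choices): raising a single component weight ω(k)_{tk,w} raises the Eq. 1 prior for word w in every tuple that contains that component.

```python
import itertools
import numpy as np

V, Z = 5, (2, 2)                     # toy vocabulary and factor sizes
tuples = list(itertools.product(range(Z[0]), range(Z[1])))
omega_B, omega_0 = -1.0, np.zeros(V)
omega_k = [np.zeros((z, V)) for z in Z]
omega_k[0][1, 3] = 2.0               # boost word 3 for component 1 of factor 0

def omega_hat(t):
    """Dirichlet prior vector of Eq. 1 for tuple t."""
    return np.exp(omega_B + omega_0 + sum(omega_k[k][tk] for k, tk in enumerate(t)))

# Every tuple whose factor-0 component is 1 now favors word 3 a priori;
# tuples with factor-0 component 0 are untouched.
boosted = [omega_hat(t)[3] for t in tuples if t[0] == 1]
others = [omega_hat(t)[3] for t in tuples if t[0] == 0]
```

Because the boost enters the prior of every tuple sharing the component, the correlation is encouraged softly rather than enforced.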
An alternative approach would be to strictly enforce consistency, such as through a "product of experts" model [17], in which each factor has independent word distributions that are multiplied together and renormalized to form the distribution for a particular tuple, i.e. $\phi_{\vec t} \propto \prod_k \phi_{t_k}$. Syntactic topic models [18] and shared components topic models [19] follow this approach. Our structured word prior generalizes both of these approaches. By setting all ω(k) to 0, the factors have no influence on the prior and we obtain the distributional independence of TAM. If instead we have large ω values, then the model behaves like a product of experts; as the precision increases, the posterior converges to the prior. By learning ω, our model can determine the optimal amount of coherence among the φ.

2 One approach is to (approximately) collapse out the sparsity array [9], but this is difficult when working over the entire corpus of tokens. Experiments with Metropolis-Hastings samplers, split-merge based samplers, and alternative prior structures all suffered from mixing problems.

Another key part of f-LDA is the inclusion of a sparsity pattern. There have been several recent approaches that enforce sparsity in topic models. Various applications of sparsity can be organized into three categories. First, one could enforce sparsity over the topic-specific word distributions, forcing each topic to select a subset of relevant words. This is the idea behind sparse topic models [20], which restrict topics to a subset of the vocabulary, and SAGE [4], which applies L1 regularization to word weights. A second approach is to enforce sparsity in the document-specific topic distributions, focusing each document on a subset of relevant topics. This is the idea in focused topic models [9]. Finally, our own contribution is to impose sparsity among the set of topics (or K-tuples) that are available to the model.
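The behavior of the U-shaped Beta prior behind our sparsity mechanism is easy to simulate; the sample size and edge threshold below are arbitrary choices of ours, while γ0 = γ1 = 0.1 matches the setting used in §5:

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.beta(0.1, 0.1, size=10_000)      # gamma_0 = gamma_1 = 0.1, as in Sec. 5
frac_near_edge = np.mean((b < 0.05) | (b > 0.95))
# With both shape parameters below 1 the density piles up at 0 and 1,
# so most draws approximate a binary on/off mask.
```

Most of the prior mass sits near the edges, which is exactly what lets the relaxed, real-valued B behave like a binary sparsity array.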
Among sparsity-inducing regularizers, one that closely relates to our goals is the group lasso [21]. While the standard lasso will drive individual vector elements to 0, the group lasso will drive entire vectors to 0.

4 Inference and Optimization

f-LDA turns out to be fairly similar to LDA in terms of inference. In both models, words are generated by first sampling a latent variable (in our case, a latent tuple) from a distribution θ, then sampling the word from φ conditioned on the latent variable. The differences between LDA and f-LDA lie in the parameters of the Dirichlet priors. The presentation of our optimization procedure focuses on these parameters. We follow the common approach of alternating between sampling the latent variables and direct optimization of the Bayesian hyperparameters [22]. We use a Gibbs sampler to estimate E[⃗z], and given the current estimate of this expectation, we optimize the parameters α, ω and B. These two steps form a Monte Carlo EM (MCEM) routine.

4.1 Latent Variable Sampling

The latent variables ⃗z are sampled using the standard collapsed Gibbs sampler for LDA [23], with the exception that the basic Dirichlet priors have been replaced with our structured priors for θ and φ. The sampling equation for ⃗z for token i, given all other latent variable assignments ⃗z, the corpus w and the parameters (α, ω, and B), becomes:

$$p(\vec z_i = \vec t \mid \vec z \setminus \{\vec z_i\}, w, \alpha, \omega, B) \;\propto\; \left(n^{d}_{\vec t} + b_{\vec t}\,\hat\alpha^{(d)}_{\vec t}\right) \left(\frac{n^{\vec t}_{w} + \hat\omega^{(\vec t)}_{w}}{\sum_{w'} n^{\vec t}_{w'} + \hat\omega^{(\vec t)}_{w'}}\right) \tag{2}$$

where $n^b_a$ denotes the number of times a occurs in b.

4.2 Optimizing the Sparsity Array and Hyperparameters

For mathematical convenience, we reparameterize B in terms of the logistic function σ, such that b⃗t ≡ σ(β⃗t). We optimize β ∈ R to obtain b ∈ (0, 1). The derivative of σ(x) has the simple form σ(x)σ(−x).
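Before turning to the gradients, the collapsed draw of Eq. (2) can be sketched in a few lines; the array names, toy sizes, and the particular count values below are our own, and as in the text the counts are assumed to exclude the token being resampled:

```python
import numpy as np

def sample_tuple(rng, n_dt, n_tw, n_t, b, alpha_hat_d, omega_hat_w, omega_hat_sum):
    """Draw a tuple id for one token from Eq. (2).
    n_dt:          counts of each tuple in the token's document
    n_tw:          counts of the token's word type under each tuple
    n_t:           total token counts per tuple
    b:             relaxed sparsity bits per tuple
    alpha_hat_d:   document prior weights per tuple
    omega_hat_w:   prior weight of this word under each tuple
    omega_hat_sum: prior weights summed over the vocabulary, per tuple
    """
    p = (n_dt + b * alpha_hat_d) * (n_tw + omega_hat_w) / (n_t + omega_hat_sum)
    p /= p.sum()                           # normalize Eq. (2)'s proportionality
    return rng.choice(len(p), p=p)

rng = np.random.default_rng(0)
T = 4                                      # toy number of tuples
z = sample_tuple(rng,
                 n_dt=np.array([3., 0., 1., 2.]),
                 n_tw=np.array([1., 0., 0., 4.]),
                 n_t=np.array([10., 5., 8., 12.]),
                 b=np.array([1., 1., 0.1, 1.]),
                 alpha_hat_d=np.full(T, 0.5),
                 omega_hat_w=np.full(T, 0.2),
                 omega_hat_sum=np.full(T, 2.0))
```

Note how a small b for the third tuple shrinks its document-side term, making that tuple unlikely to be chosen.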
For a tuple ⃗t, the gradient of the corpus log likelihood L with respect to β⃗t is:

$$\frac{\partial \mathcal{L}}{\partial \beta_{\vec t}} = (\gamma_0 - 1)\,\sigma(-\beta_{\vec t}) - (\gamma_1 - 1)\,\sigma(\beta_{\vec t}) + \sum_{d \in D} \sigma(\beta_{\vec t})\,\sigma(-\beta_{\vec t})\,\hat\alpha^{(d)}_{\vec t} \left[ \Psi\!\left(n^{d}_{\vec t} + \sigma(\beta_{\vec t})\hat\alpha^{(d)}_{\vec t}\right) - \Psi\!\left(\sigma(\beta_{\vec t})\hat\alpha^{(d)}_{\vec t}\right) + \Psi\!\left(\sum_{\vec u} \sigma(\beta_{\vec u})\hat\alpha^{(d)}_{\vec u}\right) - \Psi\!\left(\sum_{\vec u} n^{d}_{\vec u} + \sigma(\beta_{\vec u})\hat\alpha^{(d)}_{\vec u}\right) \right] \tag{3}$$

where the γ values are the Beta parameters and Ψ is the digamma function. The first two terms result from the Beta prior over b⃗t, while the summation over documents reflects the gradient of the Dirichlet-multinomial compound. Standard non-convex optimization methods can be used on this gradient. To avoid shallow local minima, we optimize gradually by taking small gradient steps, performing a single iteration of gradient ascent after each Gibbs sampling iteration (see §5 for more details). The gradients for the α and ω variables have a similar form to (3); the main difference with ω is that the gradient involves a sum over components rather than over documents. We similarly update these values through gradient ascent.

5 Experiments

We experiment with two data sets that could contain multiple factors. The first is a collection of 5000 computational linguistics abstracts from the ACL Anthology (ACL). The second combines these abstracts (C) with several journals in the fields of linguistics (L), education (E), and psychology (P). We use 1000 articles from each discipline (CLEP). For both corpora, we keep an additional 1000 documents for development and 1000 for test (uniformly representative of the 4 CLEP disciplines). We used ⃗Z = (∗, 2, 2) for ACL and ⃗Z = (∗, 4) for CLEP for various numbers of "topics" Z1 ∈ {5, . . . , 50}. While we cannot say in advance what each factor will represent, we observed that when Zk is large, components along this factor correspond to topics. Therefore, we set Z1 > Zk>1 and assume the first factor is topic. While our model presentation assumed latent factors, we could instead observe factors, such as knowing the journal of each article in CLEP.
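Eq. (3) can be implemented directly and checked against numerical differentiation of the corresponding log posterior (the Beta prior term plus the Dirichlet-multinomial compound). The toy sizes and random inputs below are ours:

```python
import numpy as np
from scipy.special import digamma, gammaln

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_post(beta, n, alpha_hat, g0, g1):
    """Beta(g0, g1) prior on b = sigmoid(beta) plus the Dirichlet-multinomial
    likelihood of tuple counts n (docs x tuples) under priors b * alpha_hat."""
    b = sigmoid(beta)
    a = b * alpha_hat
    lp = np.sum((g0 - 1) * np.log(b) + (g1 - 1) * np.log(1 - b))
    lp += np.sum(gammaln(a.sum(1)) - gammaln((n + a).sum(1)))
    lp += np.sum(gammaln(n + a) - gammaln(a))
    return lp

def grad_beta(beta, n, alpha_hat, g0, g1):
    """Eq. (3): gradient of log_post with respect to beta."""
    b = sigmoid(beta)
    s = b * sigmoid(-beta)                     # sigma'(beta) = sigma(b)sigma(-b)
    a = b * alpha_hat
    doc = (digamma(n + a) - digamma(a)
           + digamma(a.sum(1, keepdims=True))
           - digamma((n + a).sum(1, keepdims=True)))
    return (g0 - 1) * sigmoid(-beta) - (g1 - 1) * b + (s * alpha_hat * doc).sum(0)

rng = np.random.default_rng(1)
D, T = 3, 4                                    # toy documents and tuples
n = rng.integers(0, 5, size=(D, T)).astype(float)
alpha_hat = rng.gamma(2.0, 1.0, size=(D, T))
beta = rng.normal(size=T)
g = grad_beta(beta, n, alpha_hat, 0.1, 0.1)
```

Agreement with central finite differences of `log_post` confirms the term-by-term structure of Eq. (3), including the σ(β)σ(−β) factor from the logistic reparameterization.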
However, our experiments strictly focus on the unsupervised setting to measure what the model can infer on its own. We will compare our complete model against simpler models by ablating parts of f-LDA. If we remove the structured word priors and array sparsity, we are left with a basic multi-dimensional model (base). We will compare against models where we add back in the structured word priors (W) and array sparsity (S), and finally the full f-LDA model (SW). All variants are identical except that we fix all ω(k) = 0 to remove structured word priors and fix B = 1 to remove sparsity. We also compare against the topic aspect model (TAM) [3], a two-dimensional model, using the public implementation.3 TAM is similar to the "base" two-factor f-LDA model except that f-LDA has a single θ per document with priors that are independently weighted by each factor, whereas TAM has K independent θs, with a different θk for each factor. If the Dirichlet precision in f-LDA is very high, then it should exhibit similar behavior to having separate θs. TAM only models two dimensions, so we are restricted to running it on the two-dimensional CLEP data set. For hyperparameters, we set γ0 = γ1 = 0.1 in the Beta prior over b⃗t, and we set σ² = 10 for α and 1 for ω in the Gaussian priors over weights. Bias parameters (α(B), ω(B)) are initialized to −5 for weak initial priors. Our sampling algorithm alternates between a full pass over tokens and a single gradient step on the parameters (step size of 10−2 for α; 10−3 for ω and β). Results are averaged or pooled from five trials of randomly initialized chains, each of which is run for 10,000 iterations. Perplexity: Following standard practice, we measure perplexity on held-out data: all parameters are fixed after training except the document-specific parameters (α(d,k), θ(d)), which are inferred from the test document. We use the "document completion" method: we infer parameters from half a document and measure perplexity on the remaining half [24].
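The document-completion computation itself is a one-liner once the document's tuple proportions have been inferred from its first half; the function and variable names below are ours:

```python
import numpy as np

def completion_perplexity(theta, phi, heldout_tokens):
    """theta: (T,) tuple proportions inferred from the first half of a document;
    phi: (T, V) per-tuple word distributions; heldout_tokens: word ids of the
    document's second half. Returns exp of the average held-out negative
    log-likelihood per token."""
    p_w = theta @ phi                          # marginal word probabilities
    return np.exp(-np.mean(np.log(p_w[heldout_tokens])))

# Sanity check: under uniform theta and phi, perplexity equals the vocabulary size.
V, T = 8, 3
uniform = completion_perplexity(np.full(T, 1 / T), np.full((T, V), 1 / V), [0, 3, 7])
```

Lower values mean the first half of the document was more informative about the second half.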
Monte Carlo EM is run on test data for 200 iterations. Average perplexity comes from another 10 iterations. Figure 2a shows that the structured word priors yield lower perplexity, while results for sparse models are mixed. On ACL, sparsity consistently improves perplexity once the number of topics exceeds 20, while on CLEP sparsity does worse. Experiments with varying K yielded similar orderings, suggesting that the differences are data dependent and not dependent on K. On CLEP, we find that TAM performs worse than f-LDA with a lower number of topics (which is what we find to work best qualitatively), but catches up as the number of topics increases. (Beyond 50 topics, we find that TAM's perplexity stays about the same, and then begins to increase again once Z ≥ 75.) Thus, in addition to scaling to more factors, f-LDA is more predictive than simpler multi-dimensional models. Qualitative Results: To illustrate model behavior we include a sample of output on ACL (Figure 3). We consider the component-specific weights for each factor, ω(k)_{tk}, which present an "overview" of each component, as well as the tuple-specific word distributions φ⃗t. Upon examination, we determined that the first factor (Z1 = 20) corresponds to topic, the second (Z2 = 2) to approach (empirical vs. theoretical), and the third (Z3 = 2) to focus (methods vs. applications). The top row shows words common across all components for each factor. The bottom row shows specific φ⃗t. Consider the topic SPEECH: the triple (SPEECH, METHODS, THEORETICAL) emphasizes the linguistic side of speech processing (phonological, prosodic, etc.) while (SPEECH, APPLICATIONS, EMPIRICAL) is predominantly about dialogue systems and speech interfaces. We also see tuple sparsity (shaded

3 Most other two-dimensional models, including SAGE [4] and multi-view LDA [10], assume that the second factor is fixed and observed. Our focus in this paper is fully unsupervised models.
tuples, in which b⃗t ≤ 0.5) for poor tuples. For example, under the topic of DATA, a mostly empirical topic, tuples along the THEORETICAL component are inactive.

[Figure 2 appears here: panel (a) plots held-out perplexity (nats) against the number of "topics" (5 to 50) for Base, S, W, SW, and TAM on ACL (K = 3) and CLEP (K = 2); panel (b) is a histogram of sparsity values b with the best-fit prior overlaid.]

Figure 2: (a) The document completion perplexity on two data sets. Models with "W" use structured word priors, and those with "S" use sparsity. Error bars indicate 90% confidence intervals. When pooling results across all numbers of topics ≥ 20, we find that S is significantly better than Base with p = 1.4 × 10−4 and SW is better than W with p = 5 × 10−5 on the ACL corpus. (b) The distribution of sparsity values induced on the ACL corpus with ⃗Z = (20, 2, 2).

                   Intrusion Accuracy      Relatedness Score (1-5)
                   ACL        CLEP         ACL            CLEP
  TAM              n/a        46%          n/a            2.29 ± 0.26
  Baseline         39%        38%          2.35 ± 0.31    2.55 ± 0.37
  Sparsity (S)     51%        43%          2.61 ± 0.37    2.53 ± 0.48
  Word Priors (W)  76%        45%          3.56 ± 0.36    2.59 ± 0.33
  Combined (SW)    73%        67%          3.90 ± 0.37    2.67 ± 0.55

Table 1: Results from human judgments. The best scoring model for each data set is in bold. 90% confidence intervals are indicated for scores; scores were more varied on the CLEP corpus.

Human Judgments: Perplexity may not correlate with human judgments [6], which are important for f-LDA since the structured word priors and array sparsity are motivated in part by semantic coherence. We measured interpretability based on the notion of relatedness: among components that are inferred to belong to the same factor, how many actually make sense together? Seven annotators provided judgments for two related tasks.
First, we presented annotators with two word lists (the ten most frequent words assigned to each tuple4) that are assigned to the same topic, along with a word list randomly selected from another topic. Annotators were asked to choose the word list that does not belong, i.e. an intrusion test [6]. If the two tuples from the same topic are strongly related, the random list should be easy to identify. Second, annotators were presented with pairs of word lists from the same topic and asked to judge the degree of relation using a 5-point Likert scale. We ran these experiments on both corpora with 20 topics. For the two models without the structured word priors, we use a symmetric prior (by optimizing only ω(B) and fixing ω(0) = 0), since symmetric word priors can lead to better interpretability [22].5 We exclude tuples with b⃗t ≤ 0.5. Across all data sets and models, annotators labeled 362 triples in the intrusion experiment and 333 pairs in the scoring experiment. The results (Table 1) differ slightly from the perplexity results. The word priors help in all cases, but much more so on ACL. The models with sparsity are generally better than those without, even on CLEP, in contrast to perplexity, where sparse models did worse. This suggests that removing tuples with small b⃗t values removes nonsensical tuples. Overall, the judgments are worse for the CLEP corpus; this appears to be a difficult corpus to model due to high topic diversity and low overlap across disciplines. TAM is judged to be worse than all f-LDA variants when directly scored by annotators. The intrusion performance with TAM is better than or comparable to the ablated versions of f-LDA, but worse than the full model. It thus appears that both the structured priors and sparsity yield more interpretable word clusters.

4 We use frequency instead of the actual posterior because including the learned priors (which share many words) could make the task unfairly easy.
5 We used an asymmetric prior for the perplexity experiments, which gave slightly better results.

[Figure 3 appears here: at top, lists of the highest-ω words for sample components of the three factors ("Topic" components such as SPEECH, I.R., and M.T.; "Approach" components EMPIRICAL and THEORETICAL; "Focus" components METHODS and APPLICATIONS); below, a three-dimensional table of the highest-φ words for the topics SPEECH, DATA, MODELING, and GRAMMAR under every (Focus, Approach) combination, with each cell annotated with its sparsity value b (e.g. b = 0.20 to b = 1.00) and shaded when inactive.]

Figure 3: Example output from the ACL corpus with ⃗Z = (20, 2, 2). Above: The top words (based on their ω values) for a few components from three factors. Below: A three-dimensional table showing a sample of four topics (i.e. components of the first factor) with their top words (based on their φ values) as they appear in all combinations of factors.
The components in the top table are combined to create 3-tuples in the bottom table. Shaded cells (b ≤0.5) are inactive. The names of factors and their components in quotes are manually assigned through post-hoc analysis. Sparsity Patterns Finally, we examine the learned sparsity patterns: how much of B is close to 0 or 1? Figure 2b shows a histogram of b⃗t values (ACL with 20 topics, 3 factors) pooled across five sampling chains. The majority of values are close to 0 or 1, effectively capturing a sparse binary array. The higher variance near 0 relative to 1 suggests that the model prefers to keep bits “on”— and give tuples tiny probability—rather than “off.” This suggests that a model with a hard constraint might struggle to “turn off” bits during inference. While we fixed the Beta parameters in our experiments, these can be tuned to control sparsity. The model will favor more “on” than “off” bits by setting γ1 > γ0, or vice versa. When γ > 1, the Beta distribution no longer favors sparsity; we confirmed empirically that this leads to b⃗t values that are closer to 0.8 or 0.9 rather than 1. In contrast, setting γ ≪0.1 yields more extreme values near 0 and 1 than with γ = 0.1 (e.g. .9999 instead of .991), but this does not greatly affect the number of non-binary values. Thus, a sparse prior alone cannot fully satisfy our preference that B is binary. Comparison to LDA The runtimes of samplers for LDA and f-LDA are on the same order (but we have not investigated differences in mixing time). Our f-LDA implementation is one to two times slower per iteration than our own comparable LDA implementation (with hyperparameter optimization using the methods in [25]). We did not observe a consistent pattern regarding the perplexity of the two models. Averaged across all numbers of topics, the perplexity of LDA was 97% the perplexity of f-LDA on ACL and 104% on CLEP. 
Note that our experiments always use a comparable number of word distributions; thus ⃗Z = (20, 2, 2) corresponds to Z = 80 topics in LDA.

6 Conclusion

We have presented factorial LDA, a multi-dimensional text model that can incorporate an arbitrary number of factors. To encourage the model to learn the desired patterns, we developed two new types of priors: word priors that share features across factors, and a sparsity prior that restricts the set of active tuples. We have shown both qualitatively and quantitatively that f-LDA is capable of discovering interpretable patterns even in multi-dimensional spaces.

Acknowledgements

We are grateful to Jason Eisner, Matthew Gormley, Nicholas Andrews, David Mimno, and the anonymous reviewers for helpful discussions and feedback. This work was supported in part by a National Science Foundation Graduate Research Fellowship under Grant No. DGE-0707427.

References

[1] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[2] Q. Mei, X. Ling, M. Wondra, H. Su, and C. Zhai. Topic sentiment mixture: modeling facets and opinions in weblogs. In WWW, 2007.
[3] M. Paul and R. Girju. A two-dimensional topic-aspect model for discovering multi-faceted topics. In AAAI, 2010.
[4] J. Eisenstein, A. Ahmed, and E. P. Xing. Sparse additive generative models of text. In ICML, 2011.
[5] W. Y. Wang, E. Mayfield, S. Naidu, and J. Dittmar. Historical analysis of legal opinions with a sparse mixed-effects latent variable model. In ACL, pages 740–749, July 2012.
[6] J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. Blei. Reading tea leaves: How humans interpret topic models. In NIPS, 2009.
[7] D. Mimno and A. McCallum. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In UAI, 2008.
[8] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, 2006.
[9] S. Williamson, C. Wang, K. Heller, and D. Blei.
The IBP-compound Dirichlet process and its application to focused topic modeling. In ICML, 2010.
[10] A. Ahmed and E. P. Xing. Staying informed: supervised and semi-supervised multi-view topical analysis of ideological perspective. In EMNLP, pages 1140–1150, 2010.
[11] M. Paul and R. Girju. Cross-cultural analysis of blogs and forums with mixed-collection topic models. In EMNLP, pages 1408–1417, August 2009.
[12] D. Zhang, C. Zhai, J. Han, A. Srivastava, and N. Oza. Topic modeling for OLAP on multidimensional text databases: topic cube and its applications. Statistical Analysis and Data Mining, 2, 2009.
[13] T. Hofmann. Probabilistic latent semantic indexing. In SIGIR, 1999.
[14] S. Dasgupta and V. Ng. Mining clustering dimensions. In ICML, 2010.
[15] I. Porteous, E. Bart, and M. Welling. Multi-HDP: a non-parametric Bayesian model for tensor factorization. In AAAI, pages 1487–1490, 2008.
[16] L. Mackey, D. Weiss, and M. I. Jordan. Mixed membership matrix factorization. In ICML, 2010.
[17] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, August 2002.
[18] J. Boyd-Graber and D. Blei. Syntactic topic models. In NIPS, 2008.
[19] M. R. Gormley, M. Dredze, B. Van Durme, and J. Eisner. Shared components topic models. In NAACL, 2010.
[20] C. Wang and D. Blei. Decoupling sparsity and smoothness in the discrete hierarchical Dirichlet process. In NIPS, 2009.
[21] L. Meier, S. van de Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society, Series B, 70(1):53–71, 2008.
[22] H. Wallach, D. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In NIPS, 2009.
[23] T. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences of the United States of America, 2004.
[24] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for authors and documents. In UAI, 2004.
[25] Michael J. Paul.
Mixed membership Markov models for unsupervised conversation modeling. In EMNLP-CoNLL, 2012.
Augment-and-Conquer Negative Binomial Processes

Mingyuan Zhou, Dept. of Electrical and Computer Engineering, Duke University, Durham, NC 27708. mz1@ee.duke.edu
Lawrence Carin, Dept. of Electrical and Computer Engineering, Duke University, Durham, NC 27708. lcarin@ee.duke.edu

Abstract

By developing data augmentation methods unique to the negative binomial (NB) distribution, we unite seemingly disjoint count and mixture models under the NB process framework. We develop fundamental properties of the models and derive efficient Gibbs sampling inference. We show that the gamma-NB process can be reduced to the hierarchical Dirichlet process with normalization, highlighting its unique theoretical, structural and computational advantages. A variety of NB processes with distinct sharing mechanisms are constructed and applied to topic modeling, with connections to existing algorithms, showing the importance of inferring both the NB dispersion and probability parameters.

1 Introduction

There has been increasing interest in count modeling using the Poisson process, the geometric process [1, 2, 3, 4] and, recently, the negative binomial (NB) process [5, 6]. Notably, it has been independently shown in [5] and [6] that the NB process, originally constructed for count analysis, can be naturally applied for mixture modeling of grouped data $x_1, \ldots, x_J$, where each group $x_j = \{x_{ji}\}_{i=1,\ldots,N_j}$. In a territory long occupied by the hierarchical Dirichlet process (HDP) [7] and related models, whose inference may require substantial bookkeeping and suffer from slow convergence [7], the discovery of the NB process for mixture modeling can be significant. As the seemingly distinct problems of count and mixture modeling are united under the NB process framework, new opportunities emerge for better data fitting, more efficient inference and more flexible model constructions.
However, neither [5] nor [6] explores the properties of the NB distribution deeply enough to achieve fully tractable closed-form inference. Of particular concern is the NB dispersion parameter, which was simply fixed or empirically set [6], or inferred with a Metropolis-Hastings algorithm [5]. Under these limitations, both papers fail to reveal the connections of the NB process to the HDP, and thus may lead to false assessments when comparing their modeling abilities. We perform joint count and mixture modeling under the NB process framework, using completely random measures [1, 8, 9] that are simple to construct and amenable to posterior computation. We propose to augment-and-conquer the NB process: by "augmenting" a NB process into both the gamma-Poisson and compound-Poisson representations, we "conquer" the unification of count and mixture modeling, the analysis of fundamental model properties, and the derivation of efficient Gibbs sampling inference. We make two additional contributions: 1) we construct a gamma-NB process, analyze its properties and show how its normalization leads to the HDP, highlighting its unique theoretical, structural and computational advantages relative to the HDP; 2) we show that a variety of NB processes can be constructed with distinct model properties, for which the shared random measure can be selected from completely random measures such as the gamma, beta, and beta-Bernoulli processes; we compare their performance on topic modeling, a typical example of mixture modeling of grouped data, and show the importance of inferring both the NB dispersion and probability parameters, which respectively govern the overdispersion level and the variance-to-mean ratio in count modeling.

1.1 Poisson process for count and mixture modeling

Before introducing the NB process, we first illustrate how the seemingly distinct problems of count and mixture modeling can be united under the Poisson process.
Denote Ω as a measure space and, for each Borel set A ⊂ Ω, denote $X_j(A)$ as a count random variable describing the number of observations in $x_j$ that reside within A. Given grouped data $x_1, \ldots, x_J$, for any measurable disjoint partition $A_1, \ldots, A_Q$ of Ω, we aim to jointly model the count random variables $\{X_j(A_q)\}$. A natural choice would be to define a Poisson process $X_j \sim \text{PP}(G)$, with a shared completely random measure G on Ω, such that $X_j(A) \sim \text{Pois}(G(A))$ for each A ⊂ Ω. Denote $G(\Omega) = \sum_{q=1}^{Q} G(A_q)$ and $\tilde G = G/G(\Omega)$. Following Lemma 4.1 of [5], the joint distributions of $X_j(\Omega), X_j(A_1), \ldots, X_j(A_Q)$ are equivalent under the following two expressions:

$$X_j(\Omega) = \sum_{q=1}^{Q} X_j(A_q), \qquad X_j(A_q) \sim \text{Pois}(G(A_q)); \tag{1}$$

$$X_j(\Omega) \sim \text{Pois}(G(\Omega)), \qquad [X_j(A_1), \ldots, X_j(A_Q)] \sim \text{Mult}\big(X_j(\Omega);\, \tilde G(A_1), \ldots, \tilde G(A_Q)\big). \tag{2}$$

Thus the Poisson process provides not only a way to generate independent counts from each $A_q$, but also a mechanism for mixture modeling, which allocates the observations into any measurable disjoint partition $\{A_q\}_{q=1,\ldots,Q}$ of Ω, conditioning on $X_j(\Omega)$ and the normalized mean measure $\tilde G$. To complete the model, we may place a gamma process [9] prior on the shared measure as $G \sim \text{GaP}(c, G_0)$, with concentration parameter c and base measure $G_0$, such that $G(A) \sim \text{Gamma}(G_0(A), 1/c)$ for each A ⊂ Ω, where $G_0$ can be continuous, discrete or a combination of both. Note that $\tilde G = G/G(\Omega)$ now becomes a Dirichlet process (DP) as $\tilde G \sim \text{DP}(\gamma_0, \tilde G_0)$, where $\gamma_0 = G_0(\Omega)$ and $\tilde G_0 = G_0/\gamma_0$. The normalized gamma representation of the DP is discussed in [10, 11, 9] and has been used to construct the group-level DPs for an HDP [12]. The Poisson process has an equal-dispersion assumption for count modeling. As shown in (2), the construction of Poisson processes with a shared gamma process mean measure implies the same mixture proportions across groups, which is essentially the same as the DP when used for mixture modeling, when the total counts $\{X_j(\Omega)\}_j$ are not treated as random variables.
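The equivalence of the joint distributions in (1) and (2) is an exact identity and can be checked numerically for a toy partition (the rates and count configuration below are arbitrary):

```python
import numpy as np
from scipy.stats import multinomial, poisson

lam = np.array([0.7, 1.3, 2.0])       # G(A_1), G(A_2), G(A_3) for a toy partition
k = np.array([2, 0, 3])               # one particular count configuration

# Eq. (1): independent Poisson counts in each cell of the partition
lhs = np.prod(poisson.pmf(k, lam))

# Eq. (2): a single Poisson total, split by a multinomial with normalized rates
rhs = poisson.pmf(k.sum(), lam.sum()) * multinomial.pmf(k, n=k.sum(), p=lam / lam.sum())
```

Both expressions assign the same probability to every configuration, which is exactly what lets the Poisson process serve double duty for counting and for mixture allocation.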
This motivates us to consider adding an additional layer, or using a different distribution other than the Poisson to model the counts. As shown below, the NB distribution is an ideal candidate, not only because it allows overdispersion, but also because it can be augmented into both a gamma-Poisson and a compound-Poisson representation.

2 Augment-and-Conquer the Negative Binomial Distribution

The NB distribution $m \sim \text{NB}(r, p)$ has the probability mass function (PMF) $f_M(m) = \frac{\Gamma(r+m)}{m!\,\Gamma(r)}(1-p)^r p^m$. It has a mean $\mu = rp/(1-p)$ smaller than the variance $\sigma^2 = rp/(1-p)^2 = \mu + r^{-1}\mu^2$, with the variance-to-mean ratio (VMR) equal to $(1-p)^{-1}$ and the overdispersion level (ODL, the coefficient of the quadratic term in $\sigma^2$) equal to $r^{-1}$. It has been widely investigated and applied in numerous scientific studies [13, 14, 15]. The NB distribution can be augmented into a gamma-Poisson construction as $m \sim \text{Pois}(\lambda)$, $\lambda \sim \text{Gamma}(r, p/(1-p))$, where the gamma distribution is parameterized by its shape r and scale $p/(1-p)$. It can also be augmented under a compound-Poisson representation [16] as $m = \sum_{t=1}^{l} u_t$, $u_t \sim \text{Log}(p)$, $l \sim \text{Pois}(-r\ln(1-p))$, where $u \sim \text{Log}(p)$ is the logarithmic distribution [17] with probability-generating function (PGF) $C_U(z) = \ln(1-pz)/\ln(1-p)$, $|z| < p^{-1}$. In a slight abuse of notation, but for added conciseness, in the following discussion we use $m \sim \sum_{t=1}^{l}\text{Log}(p)$ to denote $m = \sum_{t=1}^{l} u_t$, $u_t \sim \text{Log}(p)$. The inference of the NB dispersion parameter r has long been a challenge [13, 18, 19]. In this paper, we first place a gamma prior on it as $r \sim \text{Gamma}(r_1, 1/c_1)$. We then use Lemma 2.1 (below) to infer a latent count l for each $m \sim \text{NB}(r, p)$ conditioning on m and r. Since $l \sim \text{Pois}(-r\ln(1-p))$ by construction, we can use the gamma-Poisson conjugacy to update r. Using Lemma 2.2 (below), we can further infer an augmented latent count l′ for each l, and then use these latent counts to update $r_1$, assuming $r_1 \sim \text{Gamma}(r_2, 1/c_2)$.
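Both augmentations can be checked by simulation against the NB moments stated above; the parameters, sample size, seed, and the inverse-CDF sampler for the logarithmic distribution below are all our own toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
r, p, N = 5.0, 0.5, 50_000           # toy NB parameters and sample size

# Gamma-Poisson augmentation: m ~ Pois(lambda), lambda ~ Gamma(r, p/(1-p))
m1 = rng.poisson(rng.gamma(r, p / (1 - p), size=N))

# Compound-Poisson augmentation: m is a sum of l iid Log(p) draws
def log_series(rng, p, size, kmax=200):
    """Sample Log(p) by inverse CDF on a truncated support (tail is negligible)."""
    ks = np.arange(1, kmax)
    pmf = -(p ** ks) / (ks * np.log(1 - p))
    return rng.choice(ks, size=size, p=pmf / pmf.sum())

l = rng.poisson(-r * np.log(1 - p), size=N)
u = log_series(rng, p, l.sum())                        # all Log(p) draws at once
cs = np.concatenate([[0], np.cumsum(u)])
m2 = np.diff(cs[np.concatenate([[0], np.cumsum(l)])])  # per-draw group sums

nb_mean, nb_var = r * p / (1 - p), r * p / (1 - p) ** 2  # moments from the text
```

Both sample means approach rp/(1−p) and both sample variances approach rp/(1−p)², so the variance exceeds the mean by r⁻¹µ², the overdispersion the Poisson model cannot capture.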
Using Lemmas 2.1 and 2.2, we can continue this process repeatedly, suggesting that we may build a NB process to model data that have subgroups within groups. The conditional posterior of the latent count l was first derived by us in [20] but was not given in analytical form. Below we explicitly derive the PMF of l, shown in (3), and find that it exactly represents the distribution of the random number of tables occupied by m customers in a Chinese restaurant process with concentration parameter r [21, 22, 7]. We denote l ~ CRT(m, r) as a Chinese restaurant table (CRT) count random variable with such a PMF; as proved in the supplementary material, we can sample it as l = ∑_{n=1}^m b_n, b_n ~ Bernoulli(r/(n−1+r)). Both the gamma-Poisson and compound-Poisson augmentations of the NB distribution and Lemmas 2.1 and 2.2 are key ingredients of this paper. We will show that these augment-and-conquer methods not only unite count and mixture modeling and provide efficient inference, but also, as shown in Section 3, let us examine the posteriors to understand fundamental properties of the NB processes, clearly revealing connections to previous nonparametric Bayesian mixture models.

Lemma 2.1. Denote s(m, j) as Stirling numbers of the first kind [17]. Augment m ~ NB(r, p) under the compound-Poisson representation as m ~ ∑_{t=1}^l Log(p), l ~ Pois(−r ln(1−p)); then the conditional posterior of l has PMF

Pr(l = j | m, r) = Γ(r)/Γ(m+r) |s(m, j)| r^j, j = 0, 1, ..., m.  (3)

Proof. Denote w_j ~ ∑_{t=1}^j Log(p), j = 1, ..., m. Since w_j is the summation of j iid Log(p) random variables, the PGF of w_j becomes C_{W_j}(z) = C_U^j(z) = [ln(1−pz)/ln(1−p)]^j, |z| < p⁻¹. Using the property that [ln(1+x)]^j = j! ∑_{n=j}^∞ s(n, j) x^n/n! [17], we have Pr(w_j = m) = C_{W_j}^{(m)}(0)/m! = (−1)^m p^j j! s(m, j)/(m! [ln(1−p)]^j). Thus for 0 ≤ j ≤ m, we have Pr(l = j | m, r) ∝ Pr(w_j = m) Pois(j; −r ln(1−p)) ∝ |s(m, j)| r^j.
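The Bernoulli construction of a CRT draw is a one-liner; a minimal sketch (NumPy assumed, parameter values illustrative) that also checks the sampler's mean against the value implied directly by the Bernoulli construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_crt(m, r, rng):
    """Draw l ~ CRT(m, r) as l = sum_{n=1}^m b_n, b_n ~ Bernoulli(r/(n-1+r))."""
    if m == 0:
        return 0
    n = np.arange(1, m + 1)
    return int((rng.random(m) < r / (n - 1 + r)).sum())

m, r, trials = 20, 2.5, 100_000
samples = np.array([sample_crt(m, r, rng) for _ in range(trials)])

# E[l] = sum_{n=1}^m r/(n-1+r), directly from the Bernoulli construction
exact_mean = (r / (np.arange(1, m + 1) - 1 + r)).sum()
print(samples.mean(), exact_mean)
```

Note that b_1 ~ Bernoulli(r/r) = 1, so l ≥ 1 whenever m ≥ 1, matching the intuition that at least one table is occupied once a customer arrives.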
Denote S_r(m) = ∑_{j=0}^m |s(m, j)| r^j; then S_r(m) = (m−1+r) S_r(m−1) = ··· = ∏_{n=1}^{m−1}(r+n) S_r(1) = ∏_{n=0}^{m−1}(r+n) = Γ(m+r)/Γ(r).

Lemma 2.2. Let m ~ NB(r, p), r ~ Gamma(r_1, 1/c_1), and denote p′ = −ln(1−p)/(c_1 − ln(1−p)); then m can also be generated from a compound distribution as

m ~ ∑_{t=1}^l Log(p), l ~ ∑_{t′=1}^{l′} Log(p′), l′ ~ Pois(−r_1 ln(1−p′)).  (4)

Proof. Augmenting m leads to m ~ ∑_{t=1}^l Log(p), l ~ Pois(−r ln(1−p)). Marginalizing out r leads to l ~ NB(r_1, p′). Augmenting l using its compound-Poisson representation leads to (4).

3 Gamma-Negative Binomial Process

We explore sharing the NB dispersion across groups while the probability parameters are group dependent. We define a NB process X ~ NBP(G, p) as X(A) ~ NB(G(A), p) for each A ⊂ Ω and construct a gamma-NB process for joint count and mixture modeling as X_j ~ NBP(G, p_j), G ~ GaP(c, G_0), which can be augmented as a gamma-gamma-Poisson process as

X_j ~ PP(Λ_j), Λ_j ~ GaP((1−p_j)/p_j, G), G ~ GaP(c, G_0).  (5)

In the above, PP(·) and GaP(·) represent the Poisson and gamma processes, respectively, as defined in Section 1.1. Using Lemma 2.2, the gamma-NB process can also be augmented as

X_j ~ ∑_{t=1}^{L_j} Log(p_j), L_j ~ PP(−G ln(1−p_j)), G ~ GaP(c, G_0);  (6)

L = ∑_j L_j ~ ∑_{t=1}^{L′} Log(p′), L′ ~ PP(−G_0 ln(1−p′)), p′ = −∑_j ln(1−p_j)/(c − ∑_j ln(1−p_j)).  (7)

These three augmentations allow us to derive a sequence of closed-form update equations for inference with the gamma-NB process. Using the gamma-Poisson conjugacy on (5), for each A ⊂ Ω we have Λ_j(A) | G, X_j, p_j ~ Gamma(G(A) + X_j(A), p_j); thus the conditional posterior of Λ_j is

Λ_j | G, X_j, p_j ~ GaP(1/p_j, G + X_j).  (8)

Define T ~ CRTP(X, G) as a CRT process such that T(A) = ∑_{ω∈A} T(ω), T(ω) ~ CRT(X(ω), G(ω)) for each A ⊂ Ω. Applying Lemma 2.1 on (6) and (7), we have

L_j | X_j, G ~ CRTP(X_j, G), L′ | L, G_0 ~ CRTP(L, G_0).
(9)

If G_0 is a continuous base measure and γ_0 = G_0(Ω) is finite, we have G_0(ω) → 0 for all ω ∈ Ω and thus

L′(Ω) | L, G_0 = ∑_{ω∈Ω} δ(L(ω) > 0) = ∑_{ω∈Ω} δ(∑_j X_j(ω) > 0),  (10)

which is equal to K₊, the total number of used discrete atoms; if G_0 is discrete as G_0 = ∑_{k=1}^K (γ_0/K) δ_{ω_k}, then L′(ω_k) = CRT(L(ω_k), γ_0/K) ≥ 1 if ∑_j X_j(ω_k) > 0, thus L′(Ω) ≥ K₊. In either case, letting γ_0 ~ Gamma(e_0, 1/f_0), with the gamma-Poisson conjugacy on (6) and (7), we have

γ_0 | {L′(Ω), p′} ~ Gamma(e_0 + L′(Ω), 1/(f_0 − ln(1−p′)));  (11)

G | G_0, {L_j, p_j} ~ GaP(c − ∑_j ln(1−p_j), G_0 + ∑_j L_j).  (12)

Since the data {x_ji}_i are exchangeable within group j, the predictive distribution of a point X_ji, conditioning on X_j^{−i} = {X_jn}_{n≠i} and G, with Λ_j marginalized out, can be expressed as

X_ji | G, X_j^{−i} ~ E[Λ_j | G, X_j^{−i}]/E[Λ_j(Ω) | G, X_j^{−i}] = G/(G(Ω) + X_j(Ω) − 1) + X_j^{−i}/(G(Ω) + X_j(Ω) − 1).  (13)

3.1 Relationship with the hierarchical Dirichlet process

Using the equivalence between (1) and (2) and normalizing all the gamma processes in (5), denoting Λ̃_j = Λ_j/Λ_j(Ω), α = G(Ω), G̃ = G/α, γ_0 = G_0(Ω) and G̃_0 = G_0/γ_0, we can re-express (5) as

X_ji ~ Λ̃_j, Λ̃_j ~ DP(α, G̃), α ~ Gamma(γ_0, 1/c), G̃ ~ DP(γ_0, G̃_0),  (14)

which is an HDP [7]. Thus the normalized gamma-NB process leads to an HDP, yet we cannot return from the HDP to the gamma-NB process without modeling X_j(Ω) and Λ_j(Ω) as random variables. Theoretically, they are distinct in that the gamma-NB process is a completely random measure, assigning independent random variables to any disjoint Borel sets {A_q}_{q=1}^Q of Ω, whereas the HDP is not. Practically, the gamma-NB process can exploit conjugacy to achieve analytical conditional posteriors for all latent parameters. The inference of the HDP is a major challenge and it is usually solved through alternative constructions such as the Chinese restaurant franchise (CRF) and stick-breaking representations [7, 23].
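Returning to Lemma 2.2, its compound representation can also be checked by simulation. A minimal sketch (NumPy assumed; note that numpy parameterizes the negative binomial by the success probability 1−p, and the specific parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
r1, c1, p, n = 2.0, 1.0, 0.5, 100_000
p_prime = -np.log(1 - p) / (c1 - np.log(1 - p))

def sum_log(counts, q, rng):
    """For each entry c of `counts`, return the sum of c iid Log(q) draws."""
    return np.array([rng.logseries(q, size=c).sum() if c > 0 else 0 for c in counts])

# Direct: r ~ Gamma(r1, 1/c1), then m ~ NB(r, p)
r = rng.gamma(r1, 1.0 / c1, size=n)
m_direct = rng.negative_binomial(r, 1 - p)

# Compound (Lemma 2.2): l' ~ Pois(-r1 ln(1-p')), l = sum of l' Log(p'), m = sum of l Log(p)
l_prime = rng.poisson(-r1 * np.log(1 - p_prime), size=n)
l = sum_log(l_prime, p_prime, rng)
m_compound = sum_log(l, p, rng)

print(m_direct.mean(), m_compound.mean())   # both close to (r1/c1) * p/(1-p)
print(m_direct.var(), m_compound.var())
```

The matching moments of the two samplers illustrate why the latent counts l and l′ make the hierarchy over r and r_1 conjugate.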
In particular, without analytical conditional posteriors, the inference of the concentration parameters α and γ_0 is nontrivial [7, 24] and they are often simply fixed [23]. Under the CRF metaphor, α governs the random number of tables occupied by customers in each restaurant independently; further, if the base probability measure G̃_0 is continuous, γ_0 governs the random number of dishes selected by the tables of all restaurants. One may apply the data augmentation method of [22] to sample α and γ_0. However, if G̃_0 is discrete as G̃_0 = ∑_{k=1}^K (1/K) δ_{ω_k}, which is of practical value and becomes a continuous base measure as K → ∞ [11, 7, 24], then using the method of [22] to sample γ_0 is only approximately correct, which may result in a biased estimate in practice, especially if K is not large enough. By contrast, in the gamma-NB process, the shared gamma process G can be analytically updated with (12), and G(Ω) plays the role of α in the HDP, which is readily available as

G(Ω) | G_0, {L_j, p_j}_{j=1,...,N} ~ Gamma(γ_0 + ∑_j L_j(Ω), 1/(c − ∑_j ln(1−p_j)))  (15)

and, as in (11), regardless of whether the base measure is continuous, the total mass γ_0 has an analytical gamma posterior whose shape parameter is governed by L′(Ω), with L′(Ω) = K₊ if G_0 is continuous and finite and L′(Ω) ≥ K₊ if G_0 = ∑_{k=1}^K (γ_0/K) δ_{ω_k}. Equation (15) also intuitively shows how the NB probability parameters {p_j} govern the variations among {Λ̃_j} in the gamma-NB process. In the HDP, p_j is not explicitly modeled; since its value becomes irrelevant when taking the normalized constructions in (14), it is usually treated as a nuisance parameter and perceived as p_j = 0.5 when needed for interpretation purposes. Fixing p_j = 0.5 is also considered in [12] to construct an HDP, whose group-level DPs are normalized from gamma processes with scale parameters p_j/(1−p_j) = 1; it is also shown in [12] that improved performance can be obtained for topic modeling by learning the scale parameters with a log Gaussian process prior.
However, no analytical conditional posteriors are provided there, and Gibbs sampling is not considered a viable option [12].

3.2 Augment-and-conquer inference for joint count and mixture modeling

For a finite continuous base measure, the gamma process G ~ GaP(c, G_0) can also be defined with its Lévy measure on a product space ℝ₊ × Ω, expressed as ν(dr dω) = r⁻¹ e^{−cr} dr G_0(dω) [9]. Since the Poisson intensity ν₊ = ν(ℝ₊ × Ω) = ∞ and ∫∫_{ℝ₊×Ω} r ν(dr dω) is finite, a draw from this process can be expressed as G = ∑_{k=1}^∞ r_k δ_{ω_k}, (r_k, ω_k) ~ π(dr dω), with π(dr dω) ν₊ ≡ ν(dr dω) [9]. Here we consider a discrete base measure G_0 = ∑_{k=1}^K (γ_0/K) δ_{ω_k}, ω_k ~ g_0(ω_k); then we have G = ∑_{k=1}^K r_k δ_{ω_k}, r_k ~ Gamma(γ_0/K, 1/c), ω_k ~ g_0(ω_k), which becomes a draw from the gamma process with a continuous base measure as K → ∞. Let x_ji ~ F(ω_{z_ji}) be observation i in group j, linked to a mixture component ω_{z_ji} ∈ Ω through a distribution F. Denoting n_jk = ∑_{i=1}^{N_j} δ(z_ji = k), we can express the gamma-NB process with the discrete base measure as

ω_k ~ g_0(ω_k), N_j = ∑_{k=1}^K n_jk, n_jk ~ Pois(λ_jk), λ_jk ~ Gamma(r_k, p_j/(1−p_j)),
r_k ~ Gamma(γ_0/K, 1/c), p_j ~ Beta(a_0, b_0), γ_0 ~ Gamma(e_0, 1/f_0),  (16)

where marginally we have n_jk ~ NB(r_k, p_j). Using the equivalence between (1) and (2), we can equivalently express N_j and n_jk in the above model as N_j ~ Pois(λ_j), [n_j1, ..., n_jK] ~ Mult(N_j; λ_j1/λ_j, ..., λ_jK/λ_j), where λ_j = ∑_{k=1}^K λ_jk. Since the data {x_ji}_{i=1,...,N_j} are fully exchangeable, rather than drawing [n_j1, ..., n_jK] once, we may equivalently draw the index

z_ji ~ Discrete(λ_j1/λ_j, ..., λ_jK/λ_j)  (17)

for each x_ji and then let n_jk = ∑_{i=1}^{N_j} δ(z_ji = k). This provides further insights on how the seemingly disjoint problems of count and mixture modeling are united under the NB process framework.
Following (8)–(12), the block Gibbs sampling is straightforward to write as

p(ω_k | −) ∝ ∏_{z_ji=k} F(x_ji; ω_k) g_0(ω_k), Pr(z_ji = k | −) ∝ F(x_ji; ω_k) λ_jk,
(p_j | −) ~ Beta(a_0 + N_j, b_0 + ∑_k r_k), p′ = −∑_j ln(1−p_j)/(c − ∑_j ln(1−p_j)),
(l_jk | −) ~ CRT(n_jk, r_k), (l′_k | −) ~ CRT(∑_j l_jk, γ_0/K),
(γ_0 | −) ~ Gamma(e_0 + ∑_k l′_k, 1/(f_0 − ln(1−p′))),
(r_k | −) ~ Gamma(γ_0/K + ∑_j l_jk, 1/(c − ∑_j ln(1−p_j))), (λ_jk | −) ~ Gamma(r_k + n_jk, p_j),  (18)

which has a similar computational complexity to that of the direct-assignment block Gibbs sampling of the CRF-HDP [7, 24]. If g_0(ω) is conjugate to the likelihood F(x; ω), then the posterior p(ω | −) is analytical. Note that when K → ∞, we have (l′_k | −) = δ(∑_j l_jk > 0) = δ(∑_j n_jk > 0). Using (1) and (2) and normalizing the gamma distributions, (16) can be re-expressed as

z_ji ~ Discrete(λ̃_j), λ̃_j ~ Dir(α r̃), α ~ Gamma(γ_0, 1/c), r̃ ~ Dir(γ_0/K, ..., γ_0/K),  (19)

which loses the count modeling ability and becomes a finite representation of the HDP, the inference of which is not conjugate and has to be solved under alternative representations [7, 24]. This also implies that by using the Dirichlet process as the foundation, traditional mixture modeling may discard useful count information from the very beginning.

4 The Negative Binomial Process Family and Related Algorithms

The gamma-NB process shares the NB dispersion across groups. Since the NB distribution has two adjustable parameters, we may explore alternative ideas, with the NB probability measure shared across groups as in [6], or with both the dispersion and probability measures shared as in [5]. These constructions are distinct from both the gamma-NB process and the HDP in that Λ_j has space-dependent scales, and thus its normalization Λ̃_j = Λ_j/Λ_j(Ω) no longer follows a Dirichlet process. It is natural to let the probability measure be drawn from a beta process [25, 26], which can be defined by its Lévy measure on a product space [0, 1] × Ω as ν(dp dω) = c p⁻¹ (1−p)^{c−1} dp B_0(dω).
A draw from the beta process B ~ BP(c, B_0) with concentration parameter c and base measure B_0 can be expressed as B = ∑_{k=1}^∞ p_k δ_{ω_k}. A beta-NB process [5, 6] can be constructed by letting X_j ~ NBP(r_j, B), with a random draw expressed as X_j = ∑_{k=1}^∞ n_jk δ_{ω_k}, n_jk ~ NB(r_j, p_k). Under this construction, the NB probability measure is shared and the NB dispersion parameters are group dependent. As in [5], we may also consider a marked-beta-NB¹ process in which both the NB probability and dispersion measures are shared, and each point of the beta process is marked with an independent gamma random variable. Thus a draw from the marked-beta process becomes (R, B) = ∑_{k=1}^∞ (r_k, p_k) δ_{ω_k}, and the NB process X_j ~ NBP(R, B) becomes X_j = ∑_{k=1}^∞ n_jk δ_{ω_k}, n_jk ~ NB(r_k, p_k). Since the beta and NB processes are conjugate, the posterior of B is tractable, as shown in [5, 6]. If it is believed that there is an excessive number of zeros, governed by a process other than the NB process, we may introduce a zero-inflated NB process as X_j ~ NBP(RZ_j, p_j), where Z_j ~ BeP(B) is drawn from the Bernoulli process [26] and (R, B) = ∑_{k=1}^∞ (r_k, π_k) δ_{ω_k} is drawn from a marked-beta process; thus n_jk ~ NB(r_k b_jk, p_j), b_jk = Bernoulli(π_k). This construction can be linked to the model in [27] with appropriate normalization, with the advantages that there is no need to fix p_j = 0.5 and the inference is fully tractable. The zero-inflated construction can also be linked to models for real-valued data using the Indian buffet process (IBP) or beta-Bernoulli process spike-and-slab prior [28, 29, 30, 31].

4.1 Related Algorithms

To show how the NB processes can be diversely constructed and to make connections to previous parametric and nonparametric mixture models, we show in Table 1 a variety of NB processes, which differ on how the dispersion and probability measures are shared.
For a deeper understanding of how the counts are modeled, we also show in Table 1 both the VMR and ODL implied by these settings.¹

¹We may also consider a beta marked-gamma-NB process, whose performance is found to be very similar.

Table 1: A variety of negative binomial processes are constructed with distinct sharing mechanisms, reflected by which of the parameters r_k, r_j, p_k, p_j and π_k (b_jk) are inferred (indicated by a check-mark ✓), and the implied VMR and ODL for the counts {n_jk}_{j,k}. They are applied to topic modeling of a document corpus, a typical example of mixture modeling of grouped data. Related algorithms are shown in the last column.

Algorithms      | r_k | r_j | p_k | p_j | π_k | VMR        | ODL           | Related Algorithms
NB-LDA          |     | ✓   |     | ✓   |     | (1−p_j)⁻¹  | r_j⁻¹         | LDA [32], Dir-PFA [5]
NB-HDP          | ✓   |     |     | 0.5 |     | 2          | r_k⁻¹         | HDP [7], DILN-HDP [12]
NB-FTM          | ✓   |     |     | 0.5 | ✓   | 2          | r_k⁻¹ b_jk    | FTM [27], SγΓ-PFA [5]
Beta-NB         |     | ✓   | ✓   |     |     | (1−p_k)⁻¹  | r_j⁻¹         | BNBP [5], BNBP [6]
Gamma-NB        | ✓   |     |     | ✓   |     | (1−p_j)⁻¹  | r_k⁻¹         | CRF-HDP [7, 24]
Marked-Beta-NB  | ✓   |     | ✓   |     |     | (1−p_k)⁻¹  | r_k⁻¹         | BNBP [5]

We consider topic modeling of a document corpus, a typical example of mixture modeling of grouped data, where each bag-of-words document constitutes a group, each word is an exchangeable group member, and F(x_ji; ω_k) is simply the probability of word x_ji in topic ω_k. We consider six differently constructed NB processes in Table 1: (i) Related to latent Dirichlet allocation (LDA) [32] and Dirichlet Poisson factor analysis (Dir-PFA) [5], NB-LDA is also a parametric topic model that requires tuning the number of topics. However, it uses a document-dependent r_j and p_j to automatically learn the smoothing of the gamma-distributed topic weights, and it lets r_j ~ Gamma(γ_0, 1/c), γ_0 ~ Gamma(e_0, 1/f_0) to share statistical strength between documents, with closed-form Gibbs sampling inference. Thus even the most basic parametric LDA topic model can be improved under the NB count modeling framework.
(ii) The NB-HDP model is related to the HDP [7]; since p_j is an irrelevant parameter in the HDP due to normalization, we set it in the NB-HDP to 0.5, the usually perceived value before normalization. The NB-HDP model is comparable to the DILN-HDP [12], which constructs the group-level DPs with normalized gamma processes whose scale parameters are also set to one. (iii) The NB-FTM model introduces an additional beta-Bernoulli process under the NB process framework to explicitly model zero counts. It is the same as the sparse-gamma-gamma-PFA (SγΓ-PFA) in [5] and is comparable to the focused topic model (FTM) [27], which is constructed from the IBP compound DP. Nevertheless, they apply about the same likelihoods and priors for inference. The zero-inflated NB process improves over them by allowing p_j to be inferred, which generally yields better data fitting. (iv) The Gamma-NB process explores the idea that the dispersion measure is shared across groups, and it improves over the NB-HDP by allowing the learning of p_j. It reduces to the HDP [7] by normalizing both the group-level and the shared gamma processes. (v) The Beta-NB process explores sharing the probability measure across groups, and it improves over the beta negative binomial process (BNBP) proposed in [6] by allowing inference of r_j. (vi) The Marked-Beta-NB process is comparable to the BNBP proposed in [5], with the distinction that it allows analytical update of r_k. The constructions and inference of the various NB processes and related algorithms in Table 1 all follow the formulas in (16) and (18), respectively, with additional details presented in the supplementary material.
Note that, as shown in [5], NB process topic models can also be considered as factor analysis of the term-document count matrix under the Poisson likelihood, with ω_k as the k-th factor loading that sums to one and λ_jk as the factor score, which can be further linked to nonnegative matrix factorization [33] and a gamma-Poisson factor model [34]. If, besides the proportions λ̃_j and r̃, the absolute values, e.g., λ_jk, r_k and p_k, are also of interest, then the NB process based joint count and mixture models would apparently be more appropriate than the HDP based mixture models.

5 Example Results

Motivated by Table 1, we consider topic modeling using a variety of NB processes, which differ on which parameters are learned and consequently how the VMR and ODL of the latent counts {n_jk}_{j,k} are modeled. We compare them with LDA [32, 35] and the CRF-HDP [7, 24]. For fair comparison, they are all implemented with block Gibbs sampling using a discrete base measure with K atoms, and for the first fifty iterations the Gamma-NB process with r_k ≡ 50/K and p_j ≡ 0.5 is used for initialization. For LDA and NB-LDA, we search K for optimal performance; for the other models, we set K = 400 as an upper bound. We set the parameters as c = 1, η = 0.05 and a_0 = b_0 = e_0 = f_0 = 0.01. For LDA, we set the topic-proportion Dirichlet smoothing parameter to 50/K, following the topic model toolbox² provided for [35]. We consider 2500 Gibbs sampling iterations, with the last 1500 samples collected.

[Figure 1: Comparison of per-word perplexities on the held-out words between the various algorithms (LDA, NB-LDA, NB-HDP, NB-FTM, Beta-NB, CRF-HDP, Gamma-NB, Marked-Beta-NB). (a) With 60% of the words in each document used for training, the performance varies as a function of K in both LDA and NB-LDA, which are parametric models, whereas the NB-HDP, NB-FTM, Beta-NB, CRF-HDP, Gamma-NB and Marked-Beta-NB all infer the number of active topics, which are 127, 201, 107, 161, 177 and 130, respectively, according to the last Gibbs sampling iteration. (b) Per-word perplexities of the various models as a function of the percentage of words in each document used for training. The results of LDA and NB-LDA are shown with the best settings of K under each training/testing partition.]

Under the NB processes, each word x_ji would be assigned to a topic k based on both F(x_ji; ω_k) and the topic weights {λ_jk}_{k=1,...,K}; each topic is drawn from a Dirichlet base measure as ω_k ~ Dir(η, ..., η) ∈ ℝ^V, where V is the number of unique terms in the vocabulary and η is a smoothing parameter. Let v_ji denote the location of word x_ji in the vocabulary; then we have (ω_k | −) ~ Dir(η + ∑_j ∑_i δ(z_ji = k, v_ji = 1), ..., η + ∑_j ∑_i δ(z_ji = k, v_ji = V)). We consider the Psychological Review² corpus, restricting the vocabulary to terms that occur in five or more documents. The corpus includes 1281 abstracts from 1967 to 2003, with 2,566 unique terms and 71,279 total word counts. We randomly select 20%, 40%, 60% or 80% of the words from each document to learn a document-dependent probability for each term v as

f_jv = (∑_{s=1}^S ∑_{k=1}^K ω_vk^(s) λ_jk^(s)) / (∑_{s=1}^S ∑_{v=1}^V ∑_{k=1}^K ω_vk^(s) λ_jk^(s)),

where ω_vk is the probability of term v in topic k and S is the total number of collected samples. We use {f_jv}_{j,v} to calculate the per-word perplexity on the held-out words as in [5]. The final results are averaged over five random training/testing partitions. Note that the perplexity per test word is the fair metric to compare topic models.
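The held-out evaluation just described can be sketched as follows, with toy sizes and random draws standing in for the collected Gibbs samples ω^(s) and λ^(s) (all sizes here are illustrative, not those of the experiment):

```python
import numpy as np

rng = np.random.default_rng(4)
S, K, V, J = 5, 4, 30, 10            # samples, topics, vocabulary size, documents

omega = rng.dirichlet(np.ones(V), size=(S, K))   # omega[s, k, :] is topic k in sample s
lam = rng.gamma(1.0, 1.0, size=(S, J, K))        # lam[s, j, k] is the topic weight

# f[j, v] proportional to sum_s sum_k omega[s, k, v] * lam[s, j, k], normalized over v
f = np.einsum('skv,sjk->jv', omega, lam)
f /= f.sum(axis=1, keepdims=True)

# Per-word perplexity on held-out words: exp of the negative mean log-probability
held = rng.integers(0, V, size=(J, 8))           # 8 held-out word positions per document
log_prob = np.log(f[np.arange(J)[:, None], held])
perplexity = np.exp(-log_prob.mean())
print(perplexity)
```

Each row of f is a proper distribution over the vocabulary, so the perplexity is always at least 1, and equals V only for a uniform predictive distribution.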
However, when the actual Poisson rates or distribution parameters for counts, instead of the mixture proportions, are of interest, it is obvious that a NB process based joint count and mixture model would be more appropriate than an HDP based mixture model. Figure 1 compares the performance of the various algorithms. The Marked-Beta-NB process has the best performance, closely followed by the Gamma-NB process, CRF-HDP and Beta-NB process. With an appropriate K, the parametric NB-LDA may outperform the nonparametric NB-HDP and NB-FTM as the training data percentage increases, somewhat unexpected but very intuitive results, showing that even by learning both the NB dispersion and probability parameters r_j and p_j in a document-dependent manner, we may get better data fitting than using nonparametric models that share the NB dispersion parameters r_k across documents but fix the NB probability parameters. Figure 2 shows the model parameters learned by the various algorithms under the NB process framework, revealing distinct sharing mechanisms and model properties. When (r_j, p_j) is used, as in NB-LDA, different documents are weakly coupled with r_j ~ Gamma(γ_0, 1/c), and the modeling results show that a typical document in this corpus usually has a small r_j and a large p_j, thus a large ODL and a large VMR, indicating highly overdispersed counts on its topic usage. When (r_j, p_k) is used to model the latent counts {n_jk}_{j,k}, as in the Beta-NB process, the transition between active and non-active topics is so sharp that p_k is either close to one or close to zero. That is because p_k controls the mean as E[∑_j n_jk] = p_k/(1−p_k) ∑_j r_j and the VMR as (1−p_k)⁻¹ on topic k; thus a popular topic must also have a large p_k and hence large overdispersion measured by the VMR. Since the counts {n_jk}_j are usually overdispersed, particularly so in this corpus, a middle-range p_k, indicating an appreciable mean and small overdispersion, is not favored by the model and thus is rarely observed.
When (r_k, p_j) is used, as in the Gamma-NB process, the transition is much smoother, in that r_k gradually decreases. The reason is that r_k controls the mean as E[∑_j n_jk] = r_k ∑_j p_j/(1−p_j) and the ODL r_k⁻¹ on topic k; thus popular topics must also have a large r_k and hence small overdispersion measured by the ODL, while unpopular topics are modeled with a small r_k and thus large overdispersion, allowing rarely and lightly used topics. Therefore, we can expect that (r_k, p_j) would allow more topics than (r_j, p_k), as confirmed in Figure 1(a), where the Gamma-NB process learns 177 active topics, significantly more than the 107 of the Beta-NB process.

²http://psiexp.ss.uci.edu/research/programs data/toolbox.htm

[Figure 2: Distinct sharing mechanisms and model properties are evident between the various NB processes, by comparing their inferred parameters (r_j and p_j for NB-LDA; r_k and p_j for NB-HDP and Gamma-NB; r_k and π_k for NB-FTM; r_j and p_k for Beta-NB; r_k and p_k for Marked-Beta-NB). Note that the transition between active and non-active topics is very sharp when p_k is used and much smoother when r_k is used. Both the documents and topics are ordered in decreasing order based on the number of words associated with each of them. These results are based on the last Gibbs sampling iteration. The values are shown in either linear or log scales for convenient visualization.]
From this analysis, we can conclude that the mean and the amount of overdispersion (measured by the VMR or ODL) for the usage of topic k are positively correlated under (r_j, p_k) and negatively correlated under (r_k, p_j). When (r_k, p_k) is used, as in the Marked-Beta-NB process, more diverse combinations of mean and overdispersion are allowed, as both r_k and p_k are now responsible for the mean E[∑_j n_jk] = J r_k p_k/(1−p_k). For example, there could be not only a large mean with small overdispersion (large r_k and small p_k), but also a large mean with large overdispersion (small r_k and large p_k). Thus (r_k, p_k) may combine the advantages of using only r_k or p_k to model topic k, as confirmed by the superior performance of the Marked-Beta-NB process over the Beta-NB and Gamma-NB processes. When (r_k, π_k) is used, as in the NB-FTM model, our results show that we usually have a small π_k and a large r_k, indicating that topic k is sparsely used across the documents but, once it is used, the amount of variation in usage is small. This modeling property might be helpful when there is an excessive number of zeros that is not well modeled by the NB process alone. In our experiments, we find that the more direct approaches of using p_k or p_j generally yield better results, but this might not be the case when an excessive number of zeros is better explained with the underlying beta-Bernoulli or IBP processes, e.g., when the training words are scarce. It is also interesting to compare the Gamma-NB and NB-HDP. From a mixture-modeling viewpoint, fixing p_j = 0.5 is natural, as p_j becomes irrelevant after normalization. However, from a count-modeling viewpoint, this makes the restrictive assumption that each count vector {n_jk}_{k=1,...,K} has the same VMR of 2, and the experimental results in Figure 1 confirm the importance of learning p_j together with r_k.
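The mean formula E[∑_j n_jk] = J r_k p_k/(1−p_k) used above is easy to sanity-check by simulation; a small sketch (NumPy assumed, parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
J, rk, pk, trials = 50, 2.0, 0.6, 20_000

# n_jk ~ NB(r_k, p_k) independently for j = 1..J; numpy uses success prob 1-p
n = rng.negative_binomial(rk, 1 - pk, size=(trials, J))
totals = n.sum(axis=1)

print(totals.mean(), J * rk * pk / (1 - pk))   # both close to 150
```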
It is also interesting to examine (15): since G(Ω) can be viewed as the concentration parameter α in the HDP, allowing the adjustment of p_j permits a more flexible model assumption on the amount of variation between the topic proportions, and thus potentially better data fitting.

6 Conclusions

We propose a variety of negative binomial (NB) processes to jointly model counts across groups, which can be naturally applied for mixture modeling of grouped data. The proposed NB processes are completely random measures, in that they assign independent random variables to disjoint Borel sets of the measure space, as opposed to the hierarchical Dirichlet process (HDP), whose measures on disjoint Borel sets are negatively correlated. We discover augment-and-conquer inference methods: by "augmenting" a NB process into both the gamma-Poisson and compound-Poisson representations, we are able to "conquer" the unification of count and mixture modeling, the analysis of fundamental model properties, and the derivation of efficient Gibbs sampling inference. We demonstrate that the gamma-NB process, which shares the NB dispersion measure across groups, can be normalized to produce the HDP, and we show in detail its theoretical, structural and computational advantages over the HDP. We examine the distinct sharing mechanisms and model properties of various NB processes, with connections to existing algorithms, with experimental results on topic modeling showing the importance of modeling both the NB dispersion and probability parameters.

Acknowledgments

The research reported here was supported by ARO, DOE, NGA, and ONR, and by DARPA under the MSEE and HIST programs.

References
[1] J. F. C. Kingman. Poisson Processes. Oxford University Press, 1993.
[2] M. K. Titsias. The infinite gamma-Poisson feature model. In NIPS, 2008.
[3] R. J. Thibaux. Nonparametric Bayesian Models for Machine Learning. PhD thesis, UC Berkeley, 2008.
[4] K. T. Miller. Bayesian Nonparametric Latent Feature Models.
PhD thesis, UC Berkeley, 2011.
[5] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In AISTATS, 2012.
[6] T. Broderick, L. Mackey, J. Paisley, and M. I. Jordan. Combinatorial clustering and the beta negative binomial process. arXiv:1111.1802v3, 2012.
[7] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. JASA, 2006.
[8] M. I. Jordan. Hierarchical models, nested models and completely random measures. 2010.
[9] R. L. Wolpert, M. A. Clyde, and C. Tu. Stochastic expansions using continuous dictionaries: Lévy Adaptive Regression Kernels. Annals of Statistics, 2011.
[10] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Ann. Statist., 1973.
[11] H. Ishwaran and M. Zarepour. Exact and approximate sum-representations for the Dirichlet process. Can. J. Statist., 2002.
[12] J. Paisley, C. Wang, and D. M. Blei. The discrete infinite logistic normal distribution. Bayesian Analysis, 2012.
[13] C. I. Bliss and R. A. Fisher. Fitting the negative binomial distribution to biological data. Biometrics, 1953.
[14] A. C. Cameron and P. K. Trivedi. Regression Analysis of Count Data. Cambridge, UK, 1998.
[15] R. Winkelmann. Econometric Analysis of Count Data. Springer, Berlin, 5th edition, 2008.
[16] M. H. Quenouille. A relation between the logarithmic, Poisson, and negative binomial series. Biometrics, 1949.
[17] N. L. Johnson, A. W. Kemp, and S. Kotz. Univariate Discrete Distributions. John Wiley & Sons, 2005.
[18] S. J. Clark and J. N. Perry. Estimation of the negative binomial parameter κ by maximum quasi-likelihood. Biometrics, 1989.
[19] M. D. Robinson and G. K. Smyth. Small-sample estimation of negative binomial dispersion, with applications to SAGE data. Biostatistics, 2008.
[20] M. Zhou, L. Li, D. Dunson, and L. Carin. Lognormal and gamma mixed negative binomial regression. In ICML, 2012.
[21] C. E. Antoniak.
Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Ann. Statist., 1974.
[22] M. D. Escobar and M. West. Bayesian density estimation and inference using mixtures. JASA, 1995.
[23] C. Wang, J. Paisley, and D. M. Blei. Online variational inference for the hierarchical Dirichlet process. In AISTATS, 2011.
[24] E. Fox, E. Sudderth, M. Jordan, and A. Willsky. Developing a tempered HDP-HMM for systems with state persistence. MIT LIDS, TR #2777, 2007.
[25] N. L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. Ann. Statist., 1990.
[26] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In AISTATS, 2007.
[27] S. Williamson, C. Wang, K. A. Heller, and D. M. Blei. The IBP compound Dirichlet process and its application to focused topic modeling. In ICML, 2010.
[28] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, 2005.
[29] M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin. Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images. IEEE TIP, 2012.
[30] M. Zhou, H. Yang, G. Sapiro, D. Dunson, and L. Carin. Dependent hierarchical beta process for image interpolation and denoising. In AISTATS, 2011.
[31] L. Li, M. Zhou, G. Sapiro, and L. Carin. On the integration of topic modeling and dictionary learning. In ICML, 2011.
[32] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 2003.
[33] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In NIPS, 2000.
[34] J. Canny. GaP: a factor model for discrete data. In SIGIR, 2004.
[35] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 2004.
Large Scale Distributed Deep Networks

Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Andrew Y. Ng {jeff, gcorrado}@google.com Google Inc., Mountain View, CA

Abstract

Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, achieving state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly-sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.

1 Introduction

Deep learning and unsupervised feature learning have shown great promise in many practical applications.
State-of-the-art performance has been reported in several domains, ranging from speech recognition [1, 2], visual object recognition [3, 4], to text processing [5, 6]. It has also been observed that increasing the scale of deep learning, with respect to the number of training examples, the number of model parameters, or both, can drastically improve ultimate classification accuracy [3, 4, 7]. These results have led to a surge of interest in scaling up the training and inference algorithms used for these models [8] and in improving applicable optimization procedures [7, 9]. The use of GPUs [1, 2, 3, 8] is a significant advance in recent years that makes the training of modestly sized deep networks practical. A known limitation of the GPU approach is that the training speed-up is small when the model does not fit in GPU memory (typically less than 6 gigabytes). To use a GPU effectively, researchers often reduce the size of the data or parameters so that CPU-to-GPU transfers are not a significant bottleneck. While data and parameter reduction work well for small problems (e.g. acoustic modeling for speech recognition), they are less attractive for problems with a large number of examples and dimensions (e.g., high-resolution images). In this paper, we describe an alternative approach: using large-scale clusters of machines to distribute training and inference in deep networks. We have developed a software framework called DistBelief that enables model parallelism within a machine (via multithreading) and across machines (via message passing), with the details of parallelism, synchronization and communication managed by the framework. In addition to supporting model parallelism, the DistBelief framework also supports data parallelism, where multiple replicas of a model are used to optimize a single objective. 
Within this framework, we have designed and implemented two novel methods for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure which leverages adaptive learning rates and supports a large number of model replicas, and (ii) Sandblaster L-BFGS, a distributed implementation of L-BFGS that uses both data and model parallelism.1 Both Downpour SGD and Sandblaster L-BFGS enjoy significant speed gains compared to more conventional implementations of SGD and L-BFGS. Our experiments reveal several surprising results about large-scale nonconvex optimization. Firstly, asynchronous SGD, rarely applied to nonconvex problems, works very well for training deep networks, particularly when combined with Adagrad [10] adaptive learning rates. Secondly, we show that given sufficient resources, L-BFGS is competitive with or faster than many variants of SGD. With regard to specific applications in deep learning, we report two main findings: that our distributed optimization approach can both greatly accelerate the training of modestly sized models, and that it can also train models that are larger than could be contemplated otherwise. To illustrate the first point, we show that we can use a cluster of machines to train a modestly sized speech model to the same classification accuracy in less than 1/10th the time required on a GPU. To illustrate the second point, we trained a large neural network of more than 1 billion parameters and used this network to drastically improve on state-of-the-art performance on the ImageNet dataset, one of the largest datasets in computer vision. 2 Previous work In recent years commercial and academic machine learning data sets have grown at an unprecedented pace. In response, a great many authors have explored scaling up machine learning algorithms through parallelization and distribution [11, 12, 13, 14, 15, 16, 17]. 
Much of this research has focused on linear, convex models, where distributed gradient computation is the natural first step. Within this area, some groups have relaxed synchronization requirements, exploring delayed gradient updates for convex problems [12, 17]. In parallel, other groups working on problems with sparse gradients (problems where only a tiny fraction of the coordinates of the gradient vector are non-zero for any given training example) have explored lock-less asynchronous stochastic gradient descent on shared-memory architectures (i.e. single machines) [5, 18]. We are interested in an approach that captures the best of both worlds, allowing the use of a cluster of machines asynchronously computing gradients, but without requiring that the problem be either convex or sparse. In the context of deep learning, most work has focused on training relatively small models on a single machine (e.g., Theano [19]). Suggestions for scaling up deep learning include the use of a farm of GPUs to train a collection of many small models and subsequently averaging their predictions [20], or modifying standard deep networks to make them inherently more parallelizable [21]. Our focus is scaling deep learning techniques in the direction of training very large models, those with a few billion parameters, but without introducing restrictions on the form of the model. In special cases where one layer dominates computation, some authors have considered distributing computation in that one layer and replicating computation in the remaining layers [5]. But in the general case where many layers of the model are computationally intensive, full model parallelism in a spirit similar to [22] is required. To be successful, however, we believe that model parallelism must be combined with clever distributed optimization techniques that leverage data parallelism. 
We considered a number of existing large-scale computational tools for application to our problem, MapReduce [23] and GraphLab [24] being notable examples. We concluded that MapReduce, designed for parallel data processing, was ill-suited for the iterative computations inherent in deep network training; whereas GraphLab, designed for general (unstructured) graph computations, would not exploit computing efficiencies available in the structured graphs typically found in deep networks. (Footnote 1: We implemented L-BFGS within the Sandblaster framework, but the general approach is also suitable for a variety of other batch optimization methods.)

Figure 1: An example of model parallelism in DistBelief. A five-layer deep neural network with local connectivity is shown here, partitioned across four machines (blue rectangles). Only those nodes with edges that cross partition boundaries (thick lines) will need to have their state transmitted between machines. Even in cases where a node has multiple edges crossing a partition boundary, its state is only sent to the machine on the other side of that boundary once. Within each partition, computation for individual nodes will be parallelized across all available CPU cores.

3 Model parallelism To facilitate the training of very large deep networks, we have developed a software framework, DistBelief, that supports distributed computation in neural networks and layered graphical models. The user defines the computation that takes place at each node in each layer of the model, and the messages that should be passed during the upward and downward phases of computation.2 For large models, the user may partition the model across several machines (Figure 1), so that responsibility for the computation for different nodes is assigned to different machines. 
The framework automatically parallelizes computation in each machine using all available cores, and manages communication, synchronization and data transfer between machines during both training and inference. The performance benefits of distributing a deep network across multiple machines depend on the connectivity structure and computational needs of the model. Models with a large number of parameters or high computational demands typically benefit from access to more CPUs and memory, up to the point where communication costs dominate. We have successfully run large models with up to 144 partitions in the DistBelief framework with significant speedups, while more modestly sized models show decent speedups for up to 8 or 16 partitions. (See Section 5, under the heading Model Parallelism Benchmarks, for experimental results.) Obviously, models with local connectivity structures tend to be more amenable to extensive distribution than fully-connected structures, given their lower communication requirements. The typical cause of less-than-ideal speedups is variance in processing times across the different machines, leading to many machines waiting for the single slowest machine to finish a given phase of computation. Nonetheless, for our largest models, we can efficiently use 32 machines where each machine achieves an average CPU utilization of 16 cores, for a total of 512 CPU cores training a single large neural network. When combined with the distributed optimization algorithms described in the next section, which utilize multiple replicas of the entire neural network, it is possible to use tens of thousands of CPU cores for training a single model, leading to significant reductions in overall training times. 4 Distributed optimization algorithms Parallelizing computation within the DistBelief framework allows us to instantiate and run neural networks considerably larger than have been previously reported. 
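The point in the Figure 1 caption that a node's state is sent at most once to each remote machine, regardless of how many of its edges cross into that machine, can be seen in a toy sketch; the graph, machine assignment, and all names below are invented for illustration and are not DistBelief's actual data structures:

```python
# Toy sketch: which activations must cross machine boundaries?
machine_of = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 1, "f": 0}
edges = [("a", "c"), ("a", "d"), ("b", "d"), ("b", "f"), ("c", "e")]

# A (source node, destination machine) pair is transmitted at most once,
# even when several edges from that node cross into the same machine.
transfers = set()
for src, dst in edges:
    if machine_of[src] != machine_of[dst]:
        transfers.add((src, machine_of[dst]))

# "a" feeds both "c" and "d" on machine 1, yet its state crosses only once.
print(sorted(transfers))  # [('a', 1), ('b', 1)]
```

Only cross-boundary edges generate traffic, which is why the locally connected models discussed above distribute better than fully connected ones.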
But in order to train such large models in a reasonable amount of time, we need to parallelize computation not only within a single instance of the model, but to distribute training across multiple model instances. (Footnote 2: In the case of a neural network ‘upward’ and ‘downward’ might equally well be called ‘feedforward’ and ‘backprop’, while for a Hidden Markov Model, they might be more familiar as ‘forward’ and ‘backward’.)

Figure 2: Left: Downpour SGD. Model replicas asynchronously fetch parameters w and push gradients ∆w to the parameter server. Right: Sandblaster L-BFGS. A single ‘coordinator’ sends small messages to replicas and the parameter server to orchestrate batch optimization.

In this section we describe this second level of parallelism, where we employ a set of DistBelief model instances, or replicas, to simultaneously solve a single optimization problem. We present a comparison of two large-scale distributed optimization procedures: Downpour SGD, an online method, and Sandblaster L-BFGS, a batch method. Both methods leverage the concept of a centralized sharded parameter server, which model replicas use to share their parameters. Both methods take advantage of the distributed computation DistBelief allows within each individual replica. But most importantly, both methods are designed to tolerate variance in the processing speed of different model replicas, and even the wholesale failure of model replicas which may be taken offline or restarted at random. In a sense, these two optimization algorithms implement an intelligent version of data parallelism. Both approaches allow us to simultaneously process distinct training examples in each of the many model replicas, and periodically combine their results to optimize our objective function. 
4.1 Downpour SGD Stochastic gradient descent (SGD) is perhaps the most commonly used optimization procedure for training deep neural networks [25, 26, 3]. Unfortunately, the traditional formulation of SGD is inherently sequential, making it impractical to apply to very large data sets where the time required to move through the data in an entirely serial fashion is prohibitive. To apply SGD to large data sets, we introduce Downpour SGD, a variant of asynchronous stochastic gradient descent that uses multiple replicas of a single DistBelief model. The basic approach is as follows: We divide the training data into a number of subsets and run a copy of the model on each of these subsets. The models communicate updates through a centralized parameter server, which keeps the current state of all parameters for the model, sharded across many machines (e.g., if we have 10 parameter server shards, each shard is responsible for storing and applying updates to 1/10th of the model parameters) (Figure 2). This approach is asynchronous in two distinct aspects: the model replicas run independently of each other, and the parameter server shards also run independently of one another. In the simplest implementation, before processing each mini-batch, a model replica asks the parameter server service for an updated copy of its model parameters. Because DistBelief models are themselves partitioned across multiple machines, each machine needs to communicate with just the subset of parameter server shards that hold the model parameters relevant to its partition. After receiving an updated copy of its parameters, the DistBelief model replica processes a mini-batch of data to compute a parameter gradient, and sends the gradient to the parameter server, which then applies the gradient to the current value of the model parameters. 
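The sharded parameter server described above can be sketched in a few lines; this is a toy single-process model, not the paper's implementation, with the shard sizes, class names, and the plain SGD update all chosen for illustration:

```python
import numpy as np

class ParameterShard:
    """Toy stand-in for one shard of the parameter server: it owns a slice
    of the model parameters and applies incoming gradients independently of
    (and asynchronously with respect to) all other shards."""
    def __init__(self, params, lr):
        self.params = params
        self.lr = lr

    def apply_gradient(self, grad):
        # Plain SGD update; Section 4.1 later replaces this with Adagrad.
        self.params -= self.lr * grad

    def fetch(self):
        return self.params.copy()

# Split a 10-parameter "model" across 2 shards (sizes are illustrative).
rng = np.random.default_rng(0)
full = rng.standard_normal(10)
shards = [ParameterShard(full[:5].copy(), lr=0.1),
          ParameterShard(full[5:].copy(), lr=0.1)]

# A replica pushes a gradient only to the shard holding the relevant slice;
# the other shard is untouched and need not coordinate.
shards[0].apply_gradient(np.ones(5))
w = np.concatenate([s.fetch() for s in shards])
print(w.shape)  # (10,)
```

Because each shard updates its own slice without talking to its peers, a machine in a partitioned replica only ever contacts the shards relevant to its partition, exactly as described in the text.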
It is possible to reduce the communication overhead of Downpour SGD by limiting each model replica to request updated parameters only every nfetch steps and send updated gradient values only every npush steps (where nfetch might not be equal to npush). In fact, the process of fetching parameters, pushing gradients, and processing training data can be carried out in three only weakly synchronized threads (see the Appendix for pseudocode). In the experiments reported below we fixed nfetch = npush = 1 for simplicity and ease of comparison to traditional SGD. Downpour SGD is more robust to machine failures than standard (synchronous) SGD. For synchronous SGD, if one machine fails, the entire training process is delayed; whereas for asynchronous SGD, if one machine in a model replica fails, the other model replicas continue processing their training data and updating the model parameters via the parameter servers. On the other hand, the multiple forms of asynchronous processing in Downpour SGD introduce a great deal of additional stochasticity in the optimization procedure. Most obviously, a model replica is almost certainly computing its gradients based on a set of parameters that are slightly out of date, in that some other model replica will likely have updated the parameters on the parameter server in the meantime. But there are several other sources of stochasticity beyond this: because the parameter server shards act independently, there is no guarantee that at any given moment the parameters on each shard of the parameter server have undergone the same number of updates, or that the updates were applied in the same order. Moreover, because the model replicas are permitted to fetch parameters and push gradients in separate threads, there may be additional subtle inconsistencies in the timestamps of parameters. 
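A serial, single-process caricature of one replica's loop with the nfetch and npush knobs described above may help fix ideas; the quadratic toy objective and all names are invented, and a real replica would run fetching, pushing, and training in separate threads against a remote sharded server:

```python
import numpy as np

def replica_loop(server_w, targets, n_fetch=1, n_push=1, lr=0.1):
    """Caricature of a Downpour replica: refresh local parameters every
    n_fetch steps, push accumulated gradients every n_push steps.
    server_w is mutated in place, standing in for the parameter server."""
    local_w = server_w.copy()
    pending = np.zeros_like(server_w)
    for step, x in enumerate(targets):
        if step % n_fetch == 0:
            # May already be stale by the time it is used for a gradient.
            local_w = server_w.copy()
        grad = 2 * (local_w - x)          # toy objective: ||w - x||^2
        pending += grad
        if (step + 1) % n_push == 0:
            server_w -= lr * pending      # server applies the pushed gradient
            pending[:] = 0.0

w = np.array([5.0, -3.0])
replica_loop(w, [np.zeros(2)] * 50, n_fetch=1, n_push=1)
print(np.abs(w).max() < 0.01)  # True: w has converged toward the target [0, 0]
```

With n_fetch = n_push = 1 (the setting used in the experiments) this reduces to ordinary SGD against a shared parameter vector; larger values trade gradient staleness for less communication.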
There is little theoretical grounding for the safety of these operations for nonconvex problems, but in practice we found relaxing consistency requirements to be remarkably effective. One technique that we have found to greatly increase the robustness of Downpour SGD is the use of the Adagrad [10] adaptive learning rate procedure. Rather than using a single fixed learning rate on the parameter server (η in Figure 2), Adagrad uses a separate adaptive learning rate for each parameter. Let η_{i,K} be the learning rate of the i-th parameter at iteration K and ∆w_{i,K} its gradient; then we set η_{i,K} = γ / √(∑_{j=1}^{K} ∆w_{i,j}²). Because these learning rates are computed only from the summed squared gradients of each parameter, Adagrad is easily implemented locally within each parameter server shard. The value of γ, the constant scaling factor for all learning rates, is generally larger (perhaps by an order of magnitude) than the best fixed learning rate used without Adagrad. The use of Adagrad extends the maximum number of model replicas that can productively work simultaneously, and combined with a practice of “warmstarting” model training with only a single model replica before unleashing the other replicas, it has virtually eliminated stability concerns in training deep networks using Downpour SGD (see results in Section 5). 4.2 Sandblaster L-BFGS Batch methods have been shown to work well in training small deep networks [7]. To apply these methods to large models and large datasets, we introduce the Sandblaster batch optimization framework and discuss an implementation of L-BFGS using this framework. A key idea in Sandblaster is distributed parameter storage and manipulation. The core of the optimization algorithm (e.g., L-BFGS) resides in a coordinator process (Figure 2), which does not have direct access to the model parameters. 
Instead, the coordinator issues commands drawn from a small set of operations (e.g., dot product, scaling, coefficient-wise addition, multiplication) that can be performed by each parameter server shard independently, with the results being stored locally on the same shard. Additional information, e.g., the history cache for L-BFGS, is also stored on the parameter server shard on which it was computed. This allows running large models (billions of parameters) without incurring the overhead of sending all the parameters and gradients to a single central server. (See the Appendix for pseudocode.) In typical parallelized implementations of L-BFGS, data is distributed to many machines and each machine is responsible for computing the gradient on a specific subset of data examples. The gradients are sent back to a central server (or aggregated via a tree [16]). Many such methods wait for the slowest machine, and therefore do not scale well to large shared clusters. To account for this problem, we employ the following load balancing scheme: The coordinator assigns each of the N model replicas a small portion of work, much smaller than 1/Nth of the total size of a batch, and assigns replicas new portions whenever they are free. With this approach, faster model replicas do more work than slower replicas. To further manage slow model replicas at the end of a batch, the coordinator schedules multiple copies of the outstanding portions and uses the result from whichever model replica finishes first. This scheme is similar to the use of “backup tasks” in the MapReduce framework [23]. Prefetching of data, along with supporting data affinity by assigning sequential portions of data to the same worker, makes data access a non-issue. 
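The coordinator's load-balancing scheme can be sketched with a toy simulation; the function name, worker speeds, and portion counts are invented, and the backup-task replication of stragglers' outstanding portions is omitted for brevity:

```python
import heapq

def sandblaster_schedule(num_portions, worker_speed):
    """Simulate dynamic load balancing: each worker takes the next small
    portion of the batch as soon as it is free, so faster workers naturally
    end up doing more of the work."""
    done_by = {}  # portion index -> worker that processed it
    # Priority queue of (time when worker is next free, worker id).
    free_at = [(0.0, w) for w in range(len(worker_speed))]
    heapq.heapify(free_at)
    for portion in range(num_portions):
        t, w = heapq.heappop(free_at)
        done_by[portion] = w
        heapq.heappush(free_at, (t + 1.0 / worker_speed[w], w))
    return done_by

# Worker 0 is 3x faster than worker 1, so it handles about 3x the portions,
# instead of everyone waiting on the slow worker as in a static 50/50 split.
done_by = sandblaster_schedule(num_portions=40, worker_speed=[3.0, 1.0])
counts = [list(done_by.values()).count(w) for w in (0, 1)]
print(counts)  # fast worker's count is roughly 3x the slow worker's
```

The key property is that no worker ever idles while portions remain, which is what lets Sandblaster tolerate heterogeneous machine speeds on a shared cluster.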
In contrast with Downpour SGD, which requires relatively high frequency, high bandwidth parameter synchronization with the parameter server, Sandblaster workers only fetch parameters at the beginning of each batch (when they have been updated by the coordinator), and only send the gradients every few completed portions (to protect against replica failures and restarts). 5 Experiments We evaluated our optimization algorithms by applying them to training models for two different deep learning problems: object recognition in still images and acoustic processing for speech recognition. The speech recognition task was to classify the central region (or frame) in a short snippet of audio as one of several thousand acoustic states. We used a deep network with five layers: four hidden layers with sigmoidal activations and 2560 nodes each, and a softmax output layer with 8192 nodes. The input representation was 11 consecutive overlapping 25 ms frames of speech, each represented by 40 log-energy values. The network was fully-connected layer-to-layer, for a total of approximately 42 million model parameters. We trained on a data set of 1.1 billion weakly labeled examples, and evaluated on a held-out test set. See [27] for similar deep network configurations and training procedures. For visual object recognition we trained a larger neural network with locally-connected receptive fields on the ImageNet data set of 16 million images, each of which we scaled to 100x100 pixels [28]. The network had three stages, each composed of filtering, pooling and local contrast normalization, where each node in the filtering layer was connected to a 10x10 patch in the layer below. Our infrastructure allows many nodes to connect to the same input patch, and we ran experiments varying the number of identically connected nodes from 8 to 36. The output layer consisted of 21 thousand one-vs-all logistic classifier nodes, one for each of the ImageNet object categories. 
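As a sanity check, the roughly 42 million parameters quoted for the speech network follow from the stated layer sizes (input 11 frames × 40 values, four hidden layers of 2560 units, softmax of 8192); fully-connected weights only, with bias terms ignored here for simplicity:

```python
# Rough parameter count for the speech network described above
# (weight matrices only; bias terms are omitted for simplicity).
layer_sizes = [11 * 40, 2560, 2560, 2560, 2560, 8192]
weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(f"{weights / 1e6:.1f}M parameters")  # 41.8M parameters, ~the 42M quoted
```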
See [29] for similar deep network configurations and training procedures. Model parallelism benchmarks: To explore the scaling behavior of DistBelief model parallelism (Section 3), we measured the mean time to process a single mini-batch for simple SGD training as a function of the number of partitions (machines) used in a single model instance. In Figure 3 we quantify the impact of parallelizing across N machines by reporting the average training speed-up: the ratio of the time taken using only a single machine to the time taken using N. Speedups for inference steps in these models are similar and are not shown here. The moderately sized speech model runs fastest on 8 machines, computing 2.2× faster than using a single machine. (Models were configured to use no more than 20 cores per machine.)

Figure 3: Training speed-up for four different deep networks as a function of machines allocated to a single DistBelief model instance. Models with more parameters benefit more from the use of additional machines than do models with fewer parameters.

Figure 4: Left: Training accuracy (on a portion of the training set) for different optimization methods. Right: Classification accuracy on the held-out test set as a function of training time. Downpour and Sandblaster experiments initialized using the same ∼10 hour warmstart of simple SGD.

Partitioning 
the model on more than 8 machines actually slows training, as network overhead starts to dominate in the fully-connected network structure and there is less work for each machine to perform with more partitions. In contrast, the much larger, locally-connected image models can benefit from using many more machines per model replica. The largest model, with 1.7 billion parameters, benefits the most, giving a speedup of more than 12× using 81 machines. For these large models, using more machines continues to increase speed, but with diminishing returns. Optimization method comparisons: To evaluate the proposed distributed optimization procedures, we ran the speech model described above in a variety of configurations. We consider two baseline optimization procedures: training a DistBelief model (on 8 partitions) using conventional (single replica) SGD, and training the identical model on a GPU using CUDA [27]. The three distributed optimization methods we compare to these baseline methods are: Downpour SGD with a fixed learning rate, Downpour SGD with Adagrad learning rates, and Sandblaster L-BFGS. Figure 4 shows classification performance as a function of training time for each of these methods on both the training and test sets. Our goal is to obtain the maximum test set accuracy in the minimum amount of training time, regardless of resource requirements. Conventional single replica SGD (black curves) is the slowest to train. Downpour SGD with 20 model replicas (blue curves) shows a significant improvement. Downpour SGD with 20 replicas plus Adagrad (orange curve) is modestly faster. Sandblaster L-BFGS using 2000 model replicas (green curves) is considerably faster yet again. The fastest, however, is Downpour SGD plus Adagrad with 200 model replicas (red curves). Given access to sufficient CPU resources, both Sandblaster L-BFGS and Downpour SGD with Adagrad can train models substantially faster than a high performance GPU. 
Though we did not confine the above experiments to a fixed resource budget, it is interesting to consider how the various methods trade off resource consumption for performance. We analyze this by arbitrarily choosing a fixed test set accuracy (16%) and measuring the time each method took to reach that accuracy as a function of machines and utilized CPU cores (Figure 5). One of the four points on each trace corresponds to a training configuration shown in Figure 4; the other three points are alternative configurations. In this plot, points closer to the origin are preferable in that they take less time while using fewer resources. In this regard Downpour SGD using Adagrad appears to be the best trade-off: for any fixed budget of machines or cores, Downpour SGD with Adagrad takes less time to reach the accuracy target than either Downpour SGD with a fixed learning rate or Sandblaster L-BFGS. For any allotted training time to reach the accuracy target, Downpour SGD with Adagrad used fewer resources than Sandblaster L-BFGS, and in many cases Downpour SGD with a fixed learning rate could not even reach the target within the deadline.

Figure 5: Time to reach a fixed accuracy (16%) for different optimization strategies as a function of the number of machines (left) and cores (right).

The Sandblaster L-BFGS system does show promise in terms of its scaling with additional cores, suggesting that it may ultimately produce the fastest training times if used with an extremely large resource budget (e.g., 30k cores). 
Application to ImageNet: The previous experiments demonstrate that our techniques can accelerate the training of neural networks with tens of millions of parameters. However, the more significant advantage of our cluster-based approach to distributed optimization is its ability to scale to models that are much larger than can be comfortably fit on a single machine, let alone a single GPU. As a first step toward exploring the capabilities of very large neural networks, we used Downpour SGD to train the 1.7 billion parameter image model described above on the ImageNet object classification task. As detailed in [29], this network achieved a cross-validated classification accuracy of over 15%, a relative improvement of over 60% from the best performance we are aware of on the 21k category ImageNet classification task. 6 Conclusions In this paper we introduced DistBelief, a framework for parallel distributed training of deep networks. Within this framework, we discovered several effective distributed optimization strategies. We found that Downpour SGD, a highly asynchronous variant of SGD, works surprisingly well for training nonconvex deep learning models. Sandblaster L-BFGS, a distributed implementation of L-BFGS, can be competitive with SGD, and its more efficient use of network bandwidth enables it to scale to a larger number of concurrent cores for training a single model. That said, the combination of Downpour SGD with the Adagrad adaptive learning rate procedure emerges as the clearly dominant method when working with a computational budget of 2000 CPU cores or less. Adagrad was not originally designed to be used with asynchronous SGD, and neither method is typically applied to nonconvex problems. It is surprising, therefore, that they work so well together, and on highly nonlinear deep networks. 
We conjecture that Adagrad automatically stabilizes volatile parameters in the face of the flurry of asynchronous updates, and naturally adjusts learning rates to the demands of different layers in the deep network. Our experiments show that our new large-scale training methods can use a cluster of machines to train even modestly sized deep networks significantly faster than a GPU, and without the GPU’s limitation on the maximum size of the model. To demonstrate the value of being able to train larger models, we have trained a model with over 1 billion parameters to achieve better than state-of-the-art performance on the ImageNet object recognition challenge. Acknowledgments The authors would like to thank Samy Bengio, Tom Dean, John Duchi, Yuval Netzer, Patrick Nguyen, Yoram Singer, Sebastian Thrun, and Vincent Vanhoucke for their indispensable advice, support, and comments. References [1] G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 2012. [2] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 2012. [3] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Deep big simple neural nets excel on handwritten digit recognition. CoRR, 2010. [4] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning. In AISTATS 14, 2011. [5] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003. [6] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, 2008. [7] Q.V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A.Y. Ng. 
On optimization methods for deep learning. In ICML, 2011. [8] R. Raina, A. Madhavan, and A. Y. Ng. Large-scale deep unsupervised learning using graphics processors. In ICML, 2009. [9] J. Martens. Deep learning via Hessian-free optimization. In ICML, 2010. [10] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011. [11] Q. Shi, J. Petterson, G. Dror, J. Langford, A. Smola, A. Strehl, and V. Vishwanathan. Hash kernels. In AISTATS, 2009. [12] J. Langford, A. Smola, and M. Zinkevich. Slow learners are fast. In NIPS, 2009. [13] G. Mann, R. McDonald, M. Mohri, N. Silberman, and D. Walker. Efficient large-scale distributed training of conditional maximum entropy models. In NIPS, 2009. [14] R. McDonald, K. Hall, and G. Mann. Distributed training strategies for the structured perceptron. In NAACL, 2010. [15] M. Zinkevich, M. Weimer, A. Smola, and L. Li. Parallelized stochastic gradient descent. In NIPS, 2010. [16] A. Agarwal, O. Chapelle, M. Dudik, and J. Langford. A reliable effective terascale linear learning system. In AISTATS, 2011. [17] A. Agarwal and J. Duchi. Distributed delayed stochastic optimization. In NIPS, 2011. [18] F. Niu, B. Recht, C. Re, and S. J. Wright. Hogwild! A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011. [19] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In SciPy, 2010. [20] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. Technical report, IDSIA, 2012. [21] L. Deng, D. Yu, and J. Platt. Scalable stacking and learning for building deep architectures. In ICASSP, 2012. [22] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, U. Toronto, 2009. [23] J. Dean and S. Ghemawat. 
Map-Reduce: simplified data processing on large clusters. CACM, 2008. [24] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. Hellerstein. Distributed GraphLab: A framework for machine learning in the cloud. In VLDB, 2012. [25] L. Bottou. Stochastic gradient learning in neural networks. In Proceedings of Neuro-Nˆımes 91, 1991. [26] Y. LeCun, L. Bottou, G. Orr, and K. Muller. Efficient backprop. In Neural Networks: Tricks of the trade. Springer, 1998. [27] V. Vanhoucke, A. Senior, and M. Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011. [28] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009. [29] Q.V. Le, M.A. Ranzato, R. Monga, M. Devin, K. Chen, G.S. Corrado, J. Dean, and A.Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012. 9
| 2012 | 157 | 4,516 |
Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses Po-Ling Loh Department of Statistics University of California, Berkeley Berkeley, CA 94720 ploh@berkeley.edu Martin J. Wainwright Departments of Statistics and EECS University of California, Berkeley Berkeley, CA 94720 wainwrig@stat.berkeley.edu Abstract We investigate a curious relationship between the structure of a discrete graphical model and the support of the inverse of a generalized covariance matrix. We show that for certain graph structures, the support of the inverse covariance matrix of indicator variables on the vertices of a graph reflects the conditional independence structure of the graph. Our work extends results that have previously been established only in the context of multivariate Gaussian graphical models, thereby addressing an open question about the significance of the inverse covariance matrix of a non-Gaussian distribution. Based on our population-level results, we show how the graphical Lasso may be used to recover the edge structure of certain classes of discrete graphical models, and present simulations to verify our theoretical results. 1 Introduction Graphical model inference is now prevalent in many fields, running the gamut from computer vision and civil engineering to political science and epidemiology. In many applications, learning the edge structure of an underlying graphical model is of great importance—for instance, a graphical model may be used to represent friendships between people in a social network, or links between organisms with the propensity to spread an infectious disease [1]. It is well known that zeros in the inverse covariance matrix of a multivariate Gaussian distribution indicate the absence of an edge in the corresponding graphical model. 
This fact, combined with techniques in high-dimensional statistical inference, has been leveraged by many authors to recover the structure of a Gaussian graphical model when the edge set is sparse (e.g., see the papers [2, 3, 4, 5] and references therein). Recently, Liu et al. [6, 7] introduced the notion of a nonparanormal distribution, which generalizes the Gaussian distribution by allowing for univariate monotonic transformations, and argued that the same structural properties of the inverse covariance matrix carry over to the nonparanormal. However, the question of whether a relationship exists between conditional independence and the structure of the inverse covariance matrix in a general graph remains unresolved. In this paper, we focus on discrete graphical models and establish a number of interesting links between covariance matrices and the edge structure of an underlying graph. Instead of only analyzing the standard covariance matrix, we show that it is often fruitful to augment the usual covariance matrix with higher-order interaction terms. Our main result has a striking corollary in the context of tree-structured graphs: for any discrete graphical model, the inverse of a generalized covariance matrix is always (block) graph-structured. In particular, for binary variables, the inverse of the usual covariance matrix corresponds exactly to the edge structure of the tree. We also establish several corollaries that apply to more general discrete graphs. Our methods are capable of handling noisy or missing data in a seamless manner. Other related work on graphical model selection for discrete graphs includes the classic Chow-Liu algorithm for trees [8]; nodewise logistic regression for discrete models with pairwise interactions [9, 10]; and techniques based on conditional entropy or mutual information [11, 12].
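As a quick numerical illustration of the classical Gaussian fact discussed above, the following sketch (NumPy; the chain weights are invented for illustration) builds a tridiagonal precision matrix for a 4-node Gauss-Markov chain, so the only edges are (1,2), (2,3), (3,4), and checks that the covariance is dense while its inverse recovers exact zeros at the non-edges:

```python
import numpy as np

# Hypothetical 4-node Gauss-Markov chain: the precision matrix Theta is
# tridiagonal, so the graph has edges (1,2), (2,3), (3,4) only.
Theta = np.array([[ 2.0, -0.8,  0.0,  0.0],
                  [-0.8,  2.0, -0.8,  0.0],
                  [ 0.0, -0.8,  2.0, -0.8],
                  [ 0.0,  0.0, -0.8,  2.0]])

Sigma = np.linalg.inv(Theta)   # covariance of the Gaussian model
# Sigma itself is dense: marginal correlations decay along the chain
# but never vanish exactly.
assert np.all(np.abs(Sigma) > 1e-6)

# Inverting the covariance recovers the graph structure exactly.
Gamma = np.linalg.inv(Sigma)
for (s, t) in [(0, 2), (0, 3), (1, 3)]:   # non-edges of the chain
    assert abs(Gamma[s, t]) < 1e-8
print("precision matrix is zero at every non-edge")
```

This is exactly the property that fails for general discrete distributions, which motivates the generalized covariance matrices studied in this paper.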
Our main contribution is to present a clean and surprising result on a simple link between the inverse covariance matrix and edge structure of a discrete model, which may be used to derive inference algorithms applicable even to data with systematic corruptions. The remainder of the paper is organized as follows: In Section 2, we provide brief background and notation on graphical models, and describe the classes of augmented covariance matrices we will consider. In Section 3, we state our main results on the relationship between the support of generalized inverse covariance matrices and the edge structure of a discrete graphical model. We relate our population-level results to concrete algorithms that are guaranteed to recover the edge structure of a discrete graph with high probability. In Section 4, we report the results of simulations used to verify our theoretical claims. For detailed proofs, we refer the reader to the technical report [13]. 2 Background and problem setup In this section, we provide background on graphical models and exponential families. We then work through a simple example that illustrates the phenomena and methodology studied in this paper. 2.1 Graphical models An undirected graph G = (V, E) consists of a collection of vertices V = {1, 2, . . . , p} and a collection of unordered vertex pairs E ⊆V × V , meaning no distinction is made between edges (s, t) and (t, s). We associate to each vertex s ∈V a random variable Xs taking values in some space X. The random vector X := (X1, . . . , Xp) is a Markov random field with respect to G if XA ⊥⊥XB | XS whenever S is a cutset of A and B, meaning every path from A to B in G must pass through S. We have used the shorthand XA := {Xs : s ∈A}. In particular, Xs ⊥⊥Xt | X\{s,t} whenever (s, t) /∈E. By the Hammersley-Clifford theorem for strictly positive distributions [14], the Markov properties imply a factorization of the distribution of X: P(x1, . . . 
, xp) ∝ ∏C∈C ψC(xC), (1) where C is the set of all cliques (fully-connected subsets of V ) and ψC(xC) are the corresponding clique potentials. The factorization (1) may alternatively be represented in terms of an exponential family associated with the clique structure of G. For each clique C ∈ C, we define a family of sufficient statistics {φC;α : X^|C| → R, α ∈ IC} associated with variables in C, where IC indexes the sufficient statistics corresponding to C. We also introduce a canonical parameter θC;α ∈ R associated with each sufficient statistic φC;α. For a given assignment of canonical parameters θ, we may express the clique potentials as ψC(xC) = ∑α∈IC θC;α φC;α(xC) := ⟨θC, φC⟩, so equation (1) may be rewritten as Pθ(x1, . . . , xp) = exp( ∑C∈C ⟨θC, φC⟩ − A(θ) ), (2) where A(θ) := log ∑x∈X^p exp( ∑C∈C ⟨θC, φC⟩ ) is the (log) partition function. Note that for a graph with only pairwise interactions, we have C = V ∪ E. If we associate the function φs(xs) = xs with clique {s} and the function φst(xs, xt) = xs xt with edge (s, t), the factorization (2) becomes Pθ(x1, . . . , xp) = exp( ∑s∈V θs xs + ∑(s,t)∈E θst xs xt − A(θ) ). (3) When X = {0, 1}, this family of distributions corresponds to the inhomogeneous Ising model. When X = R (and with certain additional restrictions on the weights), the family (3) corresponds to a Gauss-Markov random field. Both of these models are minimal exponential families, meaning the sufficient statistics are linearly independent [15]. For a discrete graphical model with X = {0, 1, . . . , m−1}, it is convenient to make use of sufficient statistics involving indicator functions. For clique C, define the subset of configurations X^|C|_0 = {J = (j1, . . . , j|C|) | jℓ ≠ 0 for all ℓ = 1, . . . , |C|}, for which no variables take the value 0. Then |X^|C|_0| = (m−1)^|C|. For any configuration J ∈ X^|C|_0, we define the indicator function φC;J(xC) = 1 if xC = J, and 0 otherwise, and consider the family of models Pθ(x1, . . .
, xp) = exp( ∑C∈C ⟨θC, φC⟩ − A(θ) ), where xj ∈ X = {0, 1, . . . , m − 1}, (4) with ⟨θC, φC⟩ = ∑J∈X^|C|_0 θC;J φC;J(xC). Note in particular that when m = 2, X^|C|_0 is a singleton containing the vector of all ones, and the sufficient statistics are given by φC;J(xC) = ∏s∈C xs, for C ∈ C and J = {1}^|C|; i.e., the indicator functions may simply be expressed as products of variables appearing in the clique. When the graphical model has only pairwise interactions, elements of C have cardinality at most two, and the model (4) clearly reduces to the Ising model (3). Finally, as with the equation (3), the family (4) is a minimal exponential family. 2.2 Covariance matrices and beyond Consider the usual covariance matrix Σ = cov(X1, . . . , Xp). When X is Gaussian, it is a well-known consequence of the Hammersley-Clifford theorem that the entries of the precision matrix Γ = Σ−1 correspond to rescaled conditional correlations [14]. The magnitude of Γst is a scalar multiple of the correlation of Xs and Xt conditioned on X\{s,t}, and encodes the strength of the edge (s, t). In particular, the sparsity pattern of Γ reflects the edge structure of the graph: Γst = 0 if and only if Xs ⊥⊥ Xt | X\{s,t}. For more general distributions, however, Corr(Xs, Xt | X\{s,t}) is a function of X\{s,t}, and it is not known whether the entries of Γ have any relationship with the strengths of edges in the graph. Nonetheless, it is tempting to conjecture that inverse covariance matrices, and more generally, inverses of higher-order moment matrices, might be related to graph structure. Let us explore this possibility by considering a simple example, namely the binary Ising model (3) with X = {0, 1}. Example 1. Consider a simple chain graph on four nodes, as illustrated in Figure 1(a). In terms of the factorization (3), let the node potentials be θs = 0.1 for all s ∈ V and the edge potentials be θst = 2 for all (s, t) ∈ E.
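Example 1 can be checked directly by brute-force enumeration, which is feasible only for tiny graphs. The sketch below (NumPy; a verification sketch, not part of the paper) computes the exact covariance matrix of the binary chain Ising model with θs = 0.1 and θst = 2 by summing over all 2^4 states, and verifies that its inverse is graph-structured, with exact zeros at the chain's non-edges:

```python
import itertools
import numpy as np

p = 4
edges = [(0, 1), (1, 2), (2, 3)]      # 4-node chain, 0-based indices
theta_s, theta_st = 0.1, 2.0          # potentials from Example 1

# Enumerate all 2^4 binary states and their unnormalized weights under (3).
states = np.array(list(itertools.product([0, 1], repeat=p)), dtype=float)
energy = theta_s * states.sum(axis=1)
for (s, t) in edges:
    energy += theta_st * states[:, s] * states[:, t]
probs = np.exp(energy)
probs /= probs.sum()                  # exact distribution P_theta

mu = probs @ states                               # exact means E[X_s]
second = states.T @ (probs[:, None] * states)     # exact E[X X^T]
Sigma = second - np.outer(mu, mu)                 # exact covariance
Gamma = np.linalg.inv(Sigma)

# Non-edges (1,3), (1,4), (2,4) of the chain yield exact zeros in Gamma.
for (s, t) in [(0, 2), (0, 3), (1, 3)]:
    assert abs(Gamma[s, t]) < 1e-8
print(np.round(Gamma, 2))
```

The printed matrix should reproduce, up to rounding, the tridiagonal inverse covariance reported for the chain graph in Figure 1(f).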
For a multivariate Gaussian graphical model defined on G, standard theory predicts that the inverse covariance matrix Γ = Σ−1 of the distribution is graph-structured: Γst = 0 if and only if (s, t) /∈ E. Surprisingly, this is also the case for the chain graph with binary variables: a little computation shows that Γ takes the form shown in panel (f). However, this statement is not true for the single-cycle graph shown in panel (b), with added edge (1, 4). Indeed, as shown in panel (g), the inverse covariance matrix has no zero entries at all. But for a more complicated graph, say the one in (e), we again observe a graph-structured inverse covariance matrix. Still focusing on the single-cycle graph in panel (b), suppose that instead of considering the ordinary covariance matrix, we compute the covariance matrix of the augmented random vector (X1, X2, X3, X4, X1X3), where the extra term X1X3 is represented by the dotted edge shown in panel (c).

Figure 1. (a) Chain; (b) Single cycle; (c) Edge augmented; (d) With 3-cliques; (e) Dino. (f) Inverse covariance for the chain-structured graph in (a):
Γchain =
[ 9.80 −3.59 0 0 ]
[ −3.59 34.30 −4.77 0 ]
[ 0 −4.77 34.30 −3.59 ]
[ 0 0 −3.59 9.80 ]
(g) Inverse covariance for the single-cycle graph in (b):
Γloop =
[ 51.37 −5.37 −0.17 −5.37 ]
[ −5.37 51.37 −5.37 −0.17 ]
[ −0.17 −5.37 51.37 −5.37 ]
[ −5.37 −0.17 −5.37 51.37 ]

The 5 × 5 inverse of this generalized covariance matrix takes the form
Γaug = 10^3 ×
[ 1.15 −0.02 1.09 −0.02 −1.14 ]
[ −0.02 0.05 −0.02 0 0.01 ]
[ 1.09 −0.02 1.14 −0.02 −1.14 ]
[ −0.02 0 −0.02 0.05 0.01 ]
[ −1.14 0.01 −1.14 0.01 1.19 ]
This matrix safely separates nodes 2 and 4, but the entry corresponding to the phantom edge (1, 3) is not equal to zero. Indeed, we would observe a similar phenomenon if we chose to augment the graph by including the edge (2, 4) rather than (1, 3).
Note that the relationship between entries of Γaug and the edge strength is not direct; although the factorization (3) has no potential corresponding to the augmented “edge” (1, 3), the (1, 3) entry of Γaug is noticeably larger in magnitude than the entries corresponding to actual edges with nonzero potentials. This example shows that the usual inverse covariance matrix is not always graph-structured, but computing generalized covariance matrices involving higher-order interaction terms may indicate graph structure. Now let us consider a more general graphical model that adds the 3-clique interaction terms shown in panel (d) to the usual Ising terms. We compute the covariance matrix of the augmented vector Φ(X) = (X1, X2, X3, X4, X1X2, X2X3, X3X4, X1X4, X1X3, X1X2X3, X1X3X4) ∈ {0, 1}^11. Empirically, we find that the 11 × 11 inverse of the matrix cov(Φ(X)) continues to respect aspects of the graph structure: in particular, there are zeros in position (α, β), corresponding to the associated functions Xα = ∏s∈α Xs and Xβ = ∏s∈β Xs, whenever α and β do not lie within the same maximal clique. (For instance, this applies to the pairs (α, β) = ({2}, {4}) and (α, β) = ({2}, {1, 4}).) The goal of this paper is to understand when certain inverse covariances do (and do not) capture the structure of a graphical model. The underlying principles behind the behavior demonstrated in Example 1 will be made concrete in Theorem 1 and its corollaries in the next section. 3 Main results and consequences We now state our main results on the structure of generalized inverse covariance matrices and graph structure. We present our results in two parts: one concerning statements at the population level, and the other concerning statements at the level of statistical consistency based on random samples. 3.1 Population-level results Our main result concerns a connection between the inverses of generalized covariance matrices associated with the model (4) and the structure of the graph.
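The augmented-covariance phenomenon of Example 1 is also easy to verify numerically. The sketch below (NumPy, brute-force enumeration as before; a verification sketch, not part of the paper) builds the single-cycle Ising model from panel (b), confirms that the plain 4 × 4 inverse covariance is dense, and then checks that augmenting with the statistic X1X3, the chord that triangulates the cycle, restores an exact zero at the triangulation's only non-edge (2, 4):

```python
import itertools
import numpy as np

p = 4
cycle_edges = [(0, 1), (1, 2), (2, 3), (0, 3)]   # single 4-cycle
theta_s, theta_st = 0.1, 2.0

states = np.array(list(itertools.product([0, 1], repeat=p)), dtype=float)
energy = theta_s * states.sum(axis=1)
for (s, t) in cycle_edges:
    energy += theta_st * states[:, s] * states[:, t]
probs = np.exp(energy)
probs /= probs.sum()

def inv_cov(feats):
    """Exact inverse covariance of the given feature columns."""
    mu = probs @ feats
    Sigma = feats.T @ (probs[:, None] * feats) - np.outer(mu, mu)
    return np.linalg.inv(Sigma)

# Ordinary inverse covariance: dense, no zero entries (cf. Figure 1(g)).
Gamma = inv_cov(states)
assert np.min(np.abs(Gamma)) > 0.05

# Augment with X1*X3; the non-edge (2,4) of the triangulation gets a zero.
aug = np.column_stack([states, states[:, 0] * states[:, 2]])
Gamma_aug = inv_cov(aug)
assert abs(Gamma_aug[1, 3]) < 1e-6
print("cycle: dense inverse; augmented: exact zero at (2,4)")
```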
We begin with some notation. Recall that a triangulation of a graph G = (V, E) is an augmented graph G̃ = (V, Ẽ) with no chordless 4-cycles. (For instance, the single cycle in panel (b) is a chordless 4-cycle, whereas panel (c) shows a triangulated graph. The dinosaur graph in panel (e) is also triangulated.) The edge set Ẽ corresponds to the original edge set E plus the additional edges added to form the triangulation. In general, G admits many different triangulations; the results we prove below will hold for any fixed triangulation of G. We also require some notation for defining generalized covariance matrices. Let S be a collection of subsets of vertices, and define the random vector Φ(X; S) = (φS;J, J ∈ X^|S|_0, S ∈ S ∩ C), (5) consisting of all sufficient statistics over cliques in S. We will often be interested in situations where S contains all subsets of a given set. For a subset A ⊆ V, we let pow(A) denote the set of all non-empty subsets of A. (For instance, pow({1, 2}) = {{1}, {2}, {1, 2}}.) Furthermore, for a collection of subsets S, we let pow(S) be the set of all subsets {pow(S), S ∈ S}, discarding any duplicates that arise. We are now ready to state our main theorem regarding the support of a certain type of generalized inverse covariance matrix. Theorem 1. [Triangulation and block graph-structure.] Consider an arbitrary discrete graphical model of the form (4), and let T be the set of maximal cliques in any triangulation of G. Then the inverse Γ of the augmented covariance matrix cov(Φ(X; pow(T ))) is block graph-structured in the following sense: (a) For any two subsets A and B which are not subsets of the same maximal clique, the block Γ(pow(A), pow(B)) is zero. (b) For almost all parameters θ, the entire block Γ(pow(A), pow(B)) is nonzero whenever A and B belong to a common maximal clique. The proof of this result relies on convex analysis and the geometry of exponential families [15, 16].
In particular, in any minimal exponential family, there is a one-to-one correspondence between exponential parameters (θα in our notation) and mean parameters (µα = E[φα(X)]). This correspondence is induced by the Fenchel-Legendre duality between the log partition function A and its dual A∗, and allows us to relate Γ to the graph structure. Note that when the original graph G is a tree, the graph is already triangulated and the set T in Theorem 1 is equal to the edge set E. Hence, Theorem 1 implies that the inverse Γ of the augmented covariance matrix with sufficient statistics for all vertices and edges is graph-structured, and blocks of nonzeros in Γ correspond to edges in the graph. In particular, the (m−1)p × (m−1)p submatrix ΓV,V corresponding to sufficient statistics of vertices is block graph-structured; in the case when m = 2, the submatrix ΓV,V is simply the p × p block corresponding to the vector (X1, . . . , Xp). When G is not triangulated, however, we may need to invert a larger augmented covariance matrix and include sufficient statistics over pairs (s, t) /∈ E, as well. In fact, it is not necessary to take the set of sufficient statistics over all maximal cliques, and we may consider a slightly smaller augmented covariance matrix. Recall that any triangulation T gives rise to a junction tree representation of G, where nodes of the junction tree are subsets of V corresponding to maximal cliques in T , and the edges are intersections of adjacent cliques known as separator sets [15]. The following corollary involves the generalized covariance matrix containing only sufficient statistics for nodes and separator sets of T : Corollary 1. Let S be the set of separator sets in any triangulation of G, and let Γ be the inverse of cov(Φ(X; V ∪ pow(S))). Then ΓV,V is block graph-structured: Γs,t = 0 whenever (s, t) /∈ Ẽ. The proof of this corollary is based on applying the block matrix inversion formula [17] to express ΓV,V in terms of the matrix Γ from Theorem 1.
Panel (c) of Example 1 and the associated matrix Γaug provides a concrete instance of this corollary in action. In panel (c), the single separator set in the triangulation is {1, 3}, so augmenting the usual covariance matrix with the additional sufficient statistic X1X3 and taking the inverse should yield a graph-structured matrix. Indeed, edge (2, 4) does not belong to Ẽ, and as predicted by Corollary 1, we observe that Γaug(2, 4) = 0. Note that V ∪ pow(S) ⊆ pow(T ), and the set of sufficient statistics considered in Corollary 1 is generally much smaller than the set of sufficient statistics considered in Theorem 1. Hence, the generalized covariance matrix of Corollary 1 has a smaller dimension than the generalized covariance matrix of Theorem 1, and is much more tractable for estimation. Although Theorem 1 and Corollary 1 are clean results at the population level, forming the proper augmented covariance matrix requires some prior knowledge of the graph—namely, which edges are involved in a suitable triangulation. In the case of a graph with only singleton separator sets, Corollary 1 specializes to the following useful corollary, which only involves the covariance matrix over indicators of vertices of G: Corollary 2. For any graph with singleton separator sets, the inverse matrix Γ of the ordinary covariance matrix cov(Φ(X; V )) is graph-structured. (This class includes trees as a special case.) Again, we may relate this corollary to Example 1—the inverse covariance matrices for the tree graph in panel (a) and the dinosaur graph in panel (e) are exactly graph-structured. Indeed, although the dinosaur graph is not a tree, it possesses the nice property that the only separator sets in its junction tree are singletons. Corollary 1 also guarantees that inverse covariances may be partially graph-structured, in the sense that (ΓV,V )st = 0 for any pair of vertices (s, t) separable by a singleton separator set.
This is because for any such pair (s, t), we form a junction tree with two nodes, one containing s and one containing t, and apply Corollary 1 to conclude that (ΓV,V )st = 0. Indeed, the matrix ΓV,V over singleton vertices is agnostic to which triangulation we choose for the graph. In settings where there exists a junction tree representation of the graph with only singleton separator sets, Corollary 2 has a number of useful implications for the consistency of methods that have traditionally only been applied for edge recovery in Gaussian graphical models. In such settings, Corollary 2 implies that it suffices to estimate the support of ΓV,V from the data. 3.2 Consequences for graphical Lasso for trees Moving beyond the population level, we now establish results concerning the statistical consistency of methods for graph selection in discrete graphical models, based on i.i.d. draws from a discrete graph. We describe how a combination of our population-level results and some concentration inequalities may be leveraged to analyze the statistical behavior of log-determinant methods for discrete tree-structured graphical models, and suggest extensions of these methods when observations are systematically corrupted by noise or missing data. Given p-dimensional random variables (X1, . . . , Xp) with covariance Σ∗, consider the estimator Θ̂ ∈ arg min_{Θ⪰0} { trace(Σ̂Θ) − log det(Θ) + λn ∑s≠t |Θst| }, (6) where Σ̂ is an estimator for Σ∗. For multivariate Gaussian data, this program is an ℓ1-regularized maximum likelihood estimate known as the graphical Lasso, and is a well-studied method for recovering the edge structure in a Gaussian graphical model [18, 19, 20]. Although the program (6) has no relation to the MLE in the case of a discrete graphical model, it is still useful for estimating Θ∗ := (Σ∗)−1, and our analysis shows the surprising result that the program is consistent for recovering the structure of any tree-structured Ising model.
We consider a general estimate Σ̂ of the covariance matrix Σ∗ such that P[ ∥Σ̂ − Σ∗∥max ≥ ϕ(Σ∗) √(log p / n) ] ≤ c exp(−ψ(n, p)) (7) for functions ϕ and ψ, where ∥·∥max denotes the elementwise ℓ∞-norm. In the case of fully-observed i.i.d. data with sub-Gaussian parameter σ2, where Σ̂ = (1/n) ∑_{i=1}^n xi xi^T − x̄x̄^T is the usual sample covariance, this bound holds with ϕ(Σ∗) = σ2 and ψ(n, p) = c′ log p. In addition, we require a certain mutual incoherence condition on the true covariance matrix Σ∗ to control the correlation of non-edge variables with edge variables in the graph. Let Γ∗ = Σ∗ ⊗ Σ∗, where ⊗ denotes the Kronecker product. Then Γ∗ is a p2 × p2 matrix indexed by vertex pairs. The incoherence condition is given by max_{e∈S^c} ∥Γ∗_{eS} (Γ∗_{SS})−1∥1 ≤ 1 − α, α ∈ (0, 1], (8) where S := {(s, t) : Θ∗st ≠ 0} is the set of vertex pairs corresponding to nonzero elements of the precision matrix Θ∗—equivalently, the edge set of the graph, by our theory on tree-structured discrete graphs. For more intuition on the mutual incoherence condition, see Ravikumar et al. [4]. Our global edge recovery algorithm proceeds as follows:
Algorithm 1 (Graphical Lasso).
1. Form a suitable estimate Σ̂ of the true covariance matrix Σ∗.
2. Optimize the graphical Lasso program (6) with parameter λn, denoting the solution by Θ̂.
3. Threshold the entries of Θ̂ at level τn to obtain an estimate of Θ∗.
We then have the following consistency result, a straightforward consequence of the graph structure of Θ∗ and concentration properties of Σ̂: Corollary 3. Suppose we have a tree-structured Ising model with degree at most d, satisfying the mutual incoherence condition (8). If n ≳ d2 log p, then Algorithm 1 with Σ̂ the sample covariance matrix and parameters λn ≥ (c1/α) √(log p / n) and τn = c2 (c1/α) √(log p / n) + λn recovers all edges (s, t) with |Θ∗st| > τn/2, with probability at least 1 − c exp(−c′ log p).
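Algorithm 1 can be sketched end-to-end for the chain of Example 1. For simplicity the version below (NumPy; sample size, seed, and threshold are invented for illustration) drops the ℓ1 penalty, i.e. it takes λn = 0 so that step 2 reduces to inverting the sample covariance, which still illustrates the estimate-invert-threshold pipeline:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
p, n = 4, 500_000
edges = [(0, 1), (1, 2), (2, 3)]     # chain of Example 1

# Exact distribution of the chain Ising model (node 0.1, edge 2.0).
states = np.array(list(itertools.product([0, 1], repeat=p)), dtype=float)
energy = 0.1 * states.sum(axis=1)
for (s, t) in edges:
    energy += 2.0 * states[:, s] * states[:, t]
probs = np.exp(energy)
probs /= probs.sum()

# Step 1: draw n i.i.d. samples and form the sample covariance.
x = states[rng.choice(len(states), size=n, p=probs)]
Sigma_hat = np.cov(x, rowvar=False)

# Step 2 (with lambda_n = 0 for simplicity): invert the sample covariance.
Theta_hat = np.linalg.inv(Sigma_hat)

# Step 3: threshold the off-diagonal entries to read off the edge set.
tau = 1.5
recovered = {(s, t) for s in range(p) for t in range(s + 1, p)
             if abs(Theta_hat[s, t]) > tau}
print(recovered)
assert recovered == set(edges)
```

Since the population precision matrix of this tree is exactly graph-structured (Corollary 2), with a large sample the thresholded estimate recovers the chain's three edges.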
Hence, if |Θ∗st| > τn/2 for all edges (s, t) ∈ E, Corollary 3 guarantees that the log-determinant method plus thresholding recovers the full graph exactly. In the case of the standard sample covariance matrix, this method has been implemented by Banerjee et al. [18]; our analysis establishes consistency of their method for discrete trees. The scaling n ≳ d2 log p is unavoidable, as shown by information-theoretic analysis [21], and also appears in other past work on Ising models [10, 9, 11]. Our analysis also has a cautionary message: the proof of Corollary 3 relies heavily on the population-level result in Corollary 2, which ensures that Θ∗ is tree-structured. For a general graph, we have no guarantees that Θ∗ will be graph-structured (e.g., see panel (b) in Figure 1), so the graphical Lasso (6) is inconsistent in general. On the positive side, if we restrict ourselves to tree-structured graphs, the estimator (6) is attractive, since it relies only on an estimate Σ̂ of the population covariance Σ∗ that satisfies the deviation condition (7). In particular, when the samples {xi}_{i=1}^n are contaminated by noise or missing data, all we require is a sufficiently good estimate Σ̂ of Σ∗. Furthermore, the program (6) is always convex even when the estimator Σ̂ is not positive semidefinite (as will often be the case for missing/corrupted data). As a concrete example of how we may correct the program (6) to handle corrupted data, consider the case when each entry of xi is missing independently with probability ρ, and the corresponding observations zi are zero-filled for missing entries. A natural estimator is Σ̂ = ( (1/n) ∑_{i=1}^n zi zi^T ) ÷ M − (1/(1 − ρ)2) z̄z̄^T, (9) where ÷ denotes elementwise division by the matrix M with diagonal entries (1 − ρ) and off-diagonal entries (1 − ρ)2, correcting for the bias in both the mean and second moment terms. The deviation condition (7) may be shown to hold w.h.p., where ϕ(Σ∗) scales with (1 − ρ) (cf. Loh and Wainwright [22]).
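The corrected estimator (9) is straightforward to implement. In the sketch below (NumPy; the data-generating distribution, seed, and ρ are invented for illustration), entries are deleted independently with probability ρ and zero-filled, and the elementwise rescaling by M removes the resulting bias, so the corrected estimate tracks the covariance computed from the uncorrupted data:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, rho = 4, 200_000, 0.2

# Invented correlated binary data (the correction works for any x).
latent = rng.normal(size=(n, p)) + 0.8 * rng.normal(size=(n, 1))
x = (latent > 0).astype(float)

# Delete each entry independently with probability rho, zero-filling.
mask = rng.random((n, p)) > rho
z = x * mask

# Elementwise correction matrix M: (1-rho) on the diagonal,
# (1-rho)^2 off the diagonal, as in estimator (9).
M = np.full((p, p), (1 - rho) ** 2)
np.fill_diagonal(M, 1 - rho)

zbar = z.mean(axis=0)
Sigma_hat = (z.T @ z / n) / M - np.outer(zbar, zbar) / (1 - rho) ** 2

# Oracle comparison: covariance of the uncorrupted data.
Sigma_full = np.cov(x, rowvar=False, bias=True)
err = np.max(np.abs(Sigma_hat - Sigma_full))
print(err)
assert err < 0.02
```

Dividing the second-moment matrix elementwise by M undoes the shrinkage caused by zero-filling, since a zero-filled off-diagonal product survives with probability (1 − ρ)^2 and a diagonal term with probability (1 − ρ).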
Similarly, we may derive an appropriate estimator Σ̂ and a subsequent version of Algorithm 1 in situations when the data are systematically contaminated by other forms of additive or multiplicative corruption. Generalizing to the case of m-ary discrete graphical models with m > 2, we may easily modify the program (6) by replacing the elementwise ℓ1-penalty by the corresponding group ℓ1-penalty, where the groups are the indicator variables for a given vertex. Precise theoretical guarantees may be derived from results on the group graphical Lasso [23]. 4 Simulations Figure 2 depicts the results of simulations we performed to test our theoretical predictions. In all cases, we generated binary Ising models with node weights 0.1 and edge weights 0.3 (using spin {−1, 1} variables). The five curves show the results of our graphical Lasso method applied to the dinosaur graph in Figure 1. Each curve plots the probability of success in recovering the 15 edges of the graph, as a function of the rescaled sample size n/log p, where p = 13. The leftmost (red) curve corresponds to the case of fully-observed covariates (ρ = 0), whereas the remaining four curves correspond to increasing missing data fractions ρ ∈ {0.05, 0.1, 0.15, 0.2}, using the corrected estimator (9). We observe that all five runs display a transition from success probability 0 to success probability 1 in roughly the same range of the rescaled sample size, as predicted by our theory. Indeed, since the dinosaur graph has only singleton separators, Corollary 2 ensures that the inverse covariance matrix is exactly graph-structured. Note that the curves shift right as the fraction ρ of missing data increases, since the problem becomes harder. Figure 2.
Simulation results for our graphical Lasso method on binary Ising models, allowing for missing data in the observations. The figure shows simulation results for the dinosaur graph. Each point represents an average over 1000 trials. The horizontal axis gives the rescaled sample size n log p. 5 Discussion The correspondence between the inverse covariance matrix and graph structure of a Gauss-Markov random field is a classical fact, with many useful consequences for efficient estimation of Gaussian graphical models. It has long been an open question as to whether or not similar properties extend to a broader class of graphical models. In this paper, we have provided a partial affirmative answer to this question and developed theoretical results extending such relationships to discrete undirected graphical models. As shown by our results, the inverse of the ordinary covariance matrix is graph-structured for special subclasses of graphs with singleton separator sets. More generally, we have shown that it is worthwhile to consider the inverses of generalized covariance matrices, formed by introducing indicator functions for larger subsets of variables. When these subsets are chosen to reflect the structure of an underlying junction tree, the edge structure is reflected in the inverse covariance matrix. Our population-level results have a number of statistical consequences for graphical model selection. We have shown how our results may be used to establish consistency (or inconsistency) of the standard graphical Lasso applied to discrete graphs, even when observations are systematically corrupted by mechanisms such as additive noise and missing data. As noted by an anonymous reviewer, the Chow-Liu algorithm might also potentially be modified to allow for missing or corrupted observations. However, our proposed method and further offshoots of our population-level result may be applied even in cases of non-tree graphs, which is beyond the scope of the Chow-Liu algorithm. 
Acknowledgments PL acknowledges support from a Hertz Foundation Fellowship and an NDSEG Fellowship. MJW and PL were also partially supported by grants NSF-DMS-0907632 and AFOSR-09NL184. The authors thank the anonymous reviewers for helpful feedback. References [1] M.E.J. Newman and D.J. Watts. Scaling and percolation in the small-world network model. Phys. Rev. E, 60(6):7332–7342, December 1999. [2] T. Cai, W. Liu, and X. Luo. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106:594–607, 2011. [3] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006. [4] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 4:935–980, 2011. [5] M. Yuan. High-dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research, 99:2261–2286, August 2010. [6] H. Liu, F. Han, M. Yuan, J.D. Lafferty, and L.A. Wasserman. High dimensional semiparametric Gaussian copula graphical models. arXiv e-prints, March 2012. Available at http://arxiv.org/abs/1202.2169. [7] H. Liu, J.D. Lafferty, and L.A. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295–2328, 2009. [8] C.I. Chow and C.N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968. [9] A. Jalali, P.D. Ravikumar, V. Vasuki, and S. Sanghavi. On learning discrete graphical models using group-sparse regularization. Journal of Machine Learning Research - Proceedings Track, 15:378–387, 2011. [10] P. Ravikumar, M.J. Wainwright, and J.D. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression.
Annals of Statistics, 38:1287, 2010. [11] A. Anandkumar, V.Y.F. Tan, and A.S. Willsky. High-dimensional structure learning of Ising models: Local separation criterion. Annals of Statistics, 40(3):1346–1375, 2012. [12] G. Bresler, E. Mossel, and A. Sly. Reconstruction of Markov random fields from samples: Some observations and algorithms. In APPROX-RANDOM, pages 343–356, 2008. [13] P. Loh and M.J. Wainwright. Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. arXiv e-prints, November 2012. [14] S.L. Lauritzen. Graphical Models. Oxford University Press, 1996. [15] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, January 2008. [16] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970. [17] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1990. [18] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485–516, 2008. [19] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical Lasso. Biostatistics, 9(3):432–441, July 2008. [20] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007. [21] N.P. Santhanam and M.J. Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117–4134, 2012. [22] P. Loh and M.J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. Annals of Statistics, 40(3):1637–1664, 2012. [23] L. Jacob, G. Obozinski, and J. P. Vert. Group Lasso with overlap and graph Lasso. In International Conference on Machine Learning (ICML), pages 433–440, 2009.
Joint Modeling of a Matrix with Associated Text via Latent Binary Features XianXing Zhang Duke University xianxing.zhang@duke.edu Lawrence Carin Duke University lcarin@duke.edu Abstract A new methodology is developed for joint analysis of a matrix and accompanying documents, with the documents associated with the matrix rows/columns. The documents are modeled with a focused topic model, inferring interpretable latent binary features for each document. A new matrix decomposition is developed, with latent binary features associated with the rows/columns, and with imposition of a low-rank constraint. The matrix decomposition and topic model are coupled by sharing the latent binary feature vectors associated with each. The model is applied to roll-call data, with the associated documents defined by the legislation. Advantages of the proposed model are demonstrated for prediction of votes on a new piece of legislation, based only on the observed text of legislation. The coupling of the text and legislation is also shown to yield insight into the properties of the matrix decomposition for roll-call data. 1 Introduction The analysis of legislative roll-call data provides an interesting setting for recent developments in the joint analysis of matrices and text [23, 8]. While the roll-call data matrix is typically binary, the modeling framework is general, in that it may be readily extended to categorical, integer or real observations. The problem is made interesting because, in addition to the matrix of votes, we have access to the text of the legislation (e.g., characteristic of the columns of the matrix, with each column representing a piece of legislation and each row a legislator). 
While roll-call data provides an interesting proving ground, the basic methodologies are applicable to any setting in which one is interested in the analysis of matrices with text associated with the rows or columns (e.g., the text may correspond to content on a website; each column of the matrix may represent a website, and each row an individual, with the matrix representing numbers of visits). The analysis of roll-call data is of significant interest to political scientists [15, 6]. In most such research the binary data are analyzed with a probit or logistic link function, and the underlying real matrix is assumed to have rank one. Each legislator and piece of legislation exists at a point along this one dimension, which is interpreted as characterizing a (one-dimensional) political philosophy (e.g., from "conservative" to "liberal"). Roll-call data analyses have principally been concerned with inferring the position of legislators in this one-dimensional latent space, dictated in part by the fact that the ability to perform prediction is limited. As in much matrix-completion research [17, 18], one can typically only infer votes that are missing at random. It is not possible to predict the votes of legislators on a new piece of legislation (for which, for example, an entire column of votes is missing). This has motivated the joint analysis of roll-call votes and the associated legislation [23, 8]: by modeling the latent space of the legislation text with a topic model, and making connections between topics and the latent space of the matrix decomposition, one may infer the votes of an entire missing column of the matrix, assuming access to the text associated with that new legislation. While the research in [23, 8] showed the potential of joint text-matrix analysis, several open questions motivate this paper. In [23, 8] a latent Dirichlet allocation (LDA) [5] topic model was employed for the text.
It has been demonstrated that LDA yields inferior perplexity scores when compared to modern Bayesian topic models such as the focused topic model (FTM) [24]. Another significant issue with [23, 8] concerns how the topic (text) and matrix models are coupled. In [23, 8] the frequency with which a given topic is utilized in the legislation text is used to infer the associated matrix parameters (e.g., to infer the latent feature vector associated with the respective column of the matrix). This is undesirable because the frequency with which a topic is used in a document is characteristic of the style of writing: there may be a topic that is only mentioned briefly in the document but is critical to the outcome of the vote, while other topics may not impact the vote yet are discussed frequently in the legislation. We also wish to move beyond the rank-one matrix assumption in [15, 6, 8]. Motivated by these limitations, in this paper the FTM is employed to model the text of legislation, with each piece of legislation characterized by a latent binary vector that defines the sparse set of associated topics. A new probabilistic low-rank matrix decomposition is developed for the votes, utilizing latent binary features; this leverages the merits of what were previously two distinct lines of matrix-factorization methods [13, 17]. Unlike previous approaches, the rank is not fixed a priori but inferred adaptively, with theoretical justification. For a piece of legislation, the latent binary feature vectors for the FTM and the matrix decomposition are shared, yielding a new means of jointly modeling text and matrices. This linkage between text and matrices is innovative in that: (i) it is based on whether a topic is relevant to a document/legislation, not on the frequency with which the topic is used in the document (i.e., not on the style of writing); (ii) it enables interpretation of the underlying latent binary features [13, 9] based upon available text data.
The rest of the paper is organized as follows. Section 2 first reviews the focused topic model, then introduces a new low-rank matrix decomposition method and the joint model of the two. Section 3 discusses posterior inference. In Section 4 quantitative results are presented for prediction of columns of roll-call votes based on the associated legislation text, and the joint model is demonstrated qualitatively to infer meaning/insight into the characteristics of legislation and voting patterns; Section 5 concludes. 2 Model and Analysis 2.1 Focused topic modeling The focused topic model (FTM) [24] was developed to address a limitation of related models based on the hierarchical Dirichlet process (HDP) [21]: the HDP shares a set of "global" topics across all documents, and each topic is in general manifested with non-zero probability in each document. This property of the HDP tends to yield less "focused" or descriptive topics. It is desirable to share a set of topics across all documents, but with the additional constraint that a given document only utilize a small subset of the topics; this tends to yield more descriptive/focused topics, characteristic of detailed properties of the documents. The FTM is manifested as a compound linkage of the Indian buffet process (IBP) [10] and the Dirichlet process (DP). Each document draws latent binary features from an IBP to select a finite subset of atoms/topics from the DP. In the model details, the DP is represented in terms of a normalized gamma process [7] with weighting by the binary feature vector, constituting a document-specific topic distribution in which only a subset of topics are manifested with non-zero probability.
The key components of the FTM are summarized as follows [24]:

  b_{jt} | π_t ~ Bernoulli(π_t),   π_t = \prod_{l=1}^{t} ν_l,   ν_l | α_r ~ Beta(α_r, 1),
  θ_j | {b_{j:}, λ} ~ Dirichlet(b_{j:} ⊙ λ),   λ_t | γ ~ Gamma(γ, 1),        (1)

where b_{jt} ∈ {0, 1} indicates whether document j uses topic t, modeled as drawn from an IBP parameterized by α_r under the stick-breaking construction [20], as shown in the first line of (1). (Throughout this paper, b_{ij} denotes the entry at the ith row and jth column of a matrix B, while b_{j:} and b_{:k} denote the jth row and kth column of B, respectively.) λ = {λ_t}_{t=1}^{K_r} represents the relative mass on the K_r topics (K_r could be infinite in principle); λ is shared across all documents, analogous to the "top layer" of the HDP. θ_j is the topic distribution for the jth document, and b_{j:} ⊙ λ denotes the pointwise vector product between b_{j:} and λ, thereby selecting a subset of topics for document j (those for which the corresponding components of b_{j:} are non-zero). The rest of the FTM is constructed as in LDA [5]: for each token n in document j, a topic indicator is drawn as z_{jn} | θ_j ~ Mult(1, θ_j); conditional on z_{jn} and the topics {β_k}_{k=1}^{K_r}, a word is drawn as w_{jn} | z_{jn}, {β_k}_{k=1}^{K_r} ~ Mult(1, β_{z_{jn}}), where β_k | η ~ Dirichlet(η). Although in (1) b_{j:} is mainly designed to map the global prevalence of topics across the corpus, λ, to a within-document proportion of topic usage, θ_j, the latent features b_{j:} are informative in their own right, as they indicate which subset of topics is relevant to a given document. The document-dependent topic usage b_{j:} may be more important than θ_j when characterizing the meaning of a document: θ_j specifies the frequency with which each of the selected topics is utilized in document j (this is related to writing style, verbosity or parsimony, and less related to meaning); it may be more important simply to know which underlying topics are used in the document, as characterized by b_{j:}.
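As a concrete illustration, the generative steps in (1) can be sketched as follows; this is a minimal NumPy sketch with illustrative hyperparameters, and the function and variable names are ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ftm_document(alpha_r=1.0, gamma=5.0, K=20):
    # Stick-breaking IBP probabilities: pi_t = prod_{l<=t} nu_l, nu_l ~ Beta(alpha_r, 1).
    nu = rng.beta(alpha_r, 1.0, size=K)
    pi = np.cumprod(nu)
    # Binary topic-usage indicators b_t ~ Bernoulli(pi_t).
    b = rng.random(K) < pi
    # Global relative topic masses lambda_t ~ Gamma(gamma, 1), shared across documents.
    lam = rng.gamma(gamma, 1.0, size=K)
    # Document topic distribution theta ~ Dirichlet(b ⊙ lambda):
    # only topics with b_t = 1 receive non-zero probability.
    theta = np.zeros(K)
    active = np.flatnonzero(b)
    if active.size > 0:
        theta[active] = rng.dirichlet(lam[active])
    return b, theta
```

Given `theta`, token-level sampling then proceeds exactly as in LDA (draw a topic from `theta`, then a word from that topic's word distribution).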
We therefore make the linkage between documents and an associated matrix via b_{j:}, not via θ_j (whereas [23, 8] base the document-matrix linkage on θ_j or its empirical estimate). 2.2 Matrix factorization with binary latent factors and a low-rank assumption Binary matrix factorization (BMF) [13, 14] is a general framework in which a real latent matrix X ∈ R^{P×N} is decomposed as X = L H R^T, where L ∈ {0,1}^{P×K_l} and R ∈ {0,1}^{N×K_r} are binary, and H ∈ R^{K_l×K_r} is real. The rows of L and R are modeled via IBPs, parameterized by α_l and α_r respectively, and K_l and K_r are the truncation levels for the IBPs, which again can be infinite in principle. The observed matrix is Y, which may be real, binary, or categorical [12]. The observations are modeled in an element-wise fashion: y_{ij} = f(x_{ij}). We focus on binary observed matrices, Y ∈ {0,1}^{P×N}, and take f(·) to be a probit model [2]:

  y_{ij} = 1 if x̂_{ij} ≥ 0,   y_{ij} = 0 if x̂_{ij} < 0,        (2)

with x̂_{ij} = x_{ij} + ε_{ij}, where ε_{ij} ~ N(0, 1). We generalize the BMF framework by imposing that H is low-rank. Specifically, we impose the rank-1 expansion H = \sum_{k=1}^{K_c} u_{:k} v_{:k}^T, where u_{:k} and v_{:k} are column vectors (so their outer product is a rank-1 matrix), each modeled by a Gaussian distribution:

  u_{:k} ~ N(0, I_{K_l}),   v_{:k} ~ N(0, I_{K_r}),        (3)

and K_c is the number of such rank-1 matrices, with K_c < min(K_l, K_r), i.e., H is low-rank. To motivate this model, consider the representation H = \sum_{k=1}^{K_c} u_{:k} v_{:k}^T in the decomposition X = L H R^T, which implies X = \sum_{k=1}^{K_c} (L u_{:k})(R v_{:k})^T. Therefore, we may also express X = Ψ Φ^T, with Ψ ∈ R^{P×K_c} and Φ ∈ R^{N×K_c}; the kth column of Ψ is defined by L u_{:k} and the kth column of Φ by R v_{:k}. Consequently, the low-rank assumption for H yields a low-rank model X = Ψ Φ^T, precisely as in [17, 18]. Thus the definition of Ψ and Φ via the binary matrices L and R and the linkage matrix H merges two previously distinct lines of matrix-factorization methods.
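To make the generative process of (2)-(3) concrete, here is a minimal sketch; the dimensions, sparsity level of the binary features, and names are our own illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_bmf_votes(P=50, N=40, Kl=6, Kr=8, Kc=3):
    # Binary latent features for legislators (rows of L) and bills (rows of R).
    L = (rng.random((P, Kl)) < 0.3).astype(float)
    R = (rng.random((N, Kr)) < 0.3).astype(float)
    # Low-rank linkage H = sum_k u_:k v_:k^T with Kc < min(Kl, Kr).
    U = rng.normal(size=(Kl, Kc))
    V = rng.normal(size=(Kr, Kc))
    H = U @ V.T
    X = L @ H @ R.T
    # Probit observation: y_ij = 1 iff x_ij + eps_ij >= 0, eps_ij ~ N(0, 1).
    Y = (X + rng.normal(size=(P, N))) >= 0
    return Y, X
```

Because H has rank at most Kc, the latent matrix X = L H R^T inherits that rank bound, which is exactly the low-rank structure the text derives via X = Ψ Φ^T.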
In the context of the application considered here, the decomposition X = L H R^T will prove convenient, as we may share the binary matrices L or R with the topic usage of the available documents. The binary features in L and R are therefore characteristic of the presence/absence of underlying topics, or related latent processes, and the matrix H provides the mapping of how these binary features map to observed data. However, how to specify K_c remains an open question for the above low-rank construction. As a contribution of this paper, we provide a new means of imposing a low-rank model within the prior. We model the "significance" of each rank-1 term in the expansion explicitly, using a stochastic process {s_k}_{k=1}^{K_c}, so that H can be decomposed as H = \sum_{k=1}^{K_c} s_k u_{:k} v_{:k}^T; K_c can be infinite in principle. As a result, the hierarchical representation of the latent matrix X in the probit model can be summarized as:

  x̂_{ij} | {l_{i:}, r_{j:}, {u_{:k}, v_{:k}, s_k}_{k=1}^{K_c}} ~ N( \sum_{k=1}^{K_c} s_k (l_{i:} u_{:k})(r_{j:} v_{:k}), 1 ).        (4)

Note that s_k in (4) plays a role similar in spirit to a singular value in the SVD. Intuitively, we wish |s_k| to decrease quickly as the index k increases, so that rank-1 matrices with large indices have negligible impact on (4); K_c then plays a role similar to the truncation level in the stick-breaking constructions for the DP [11] and IBP [20]. To achieve this, we model each s_k as a Gaussian random variable with a conjugate multiplicative gamma process (MGP) placed on its precision parameter:

  s_k | τ_k ~ N(0, τ_k^{-1}),   τ_k = \prod_{l=1}^{k} δ_l,   δ_l | α_c ~ Gamma(α_c, 1).        (5)

The MGP was originally proposed in [3] for learning sparse factor models and further extended to tree-structured sparse factor models [26] and the change-point stick-breaking process [25]; one of its properties is that it increasingly shrinks s_k toward zero as the index k increases. Next we make the above intuition rigorous.
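The shrinkage behavior of (5) is easy to see numerically. The following sketch (illustrative sizes; names ours) draws H = Σ_k s_k u_{:k} v_{:k}^T with MGP-distributed significances s_k:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_low_rank_H(Kl=10, Kr=12, Kc=30, alpha_c=3.0):
    # Multiplicative gamma process: tau_k = prod_{l<=k} delta_l with
    # delta_l ~ Gamma(alpha_c, 1); s_k ~ N(0, 1/tau_k) shrinks toward
    # zero as k grows (for alpha_c > 1, tau_k grows geometrically in mean).
    delta = rng.gamma(alpha_c, 1.0, size=Kc)
    tau = np.cumprod(delta)
    s = rng.normal(0.0, 1.0 / np.sqrt(tau))
    # Rank-1 expansion H = sum_k s_k u_:k v_:k^T with Gaussian factors.
    U = rng.normal(size=(Kl, Kc))
    V = rng.normal(size=(Kr, Kc))
    H = (U * s) @ V.T
    return H, s
```

With alpha_c = 3 (the value recommended in Section 4.1), the late terms of the expansion are driven essentially to zero, which is what makes a finite truncation K_c harmless.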
Theorem 1 below formally states that if s_k is modeled by the MGP as in (5), the rank-1 expansion in (4) converges as K_c → ∞.

Theorem 1. When α_c > 1, the sequence \sum_{k=1}^{K_c} s_k (l_{i:} u_{:k})(r_{j:} v_{:k}) converges in ℓ_2 as K_c → ∞.

Although in the MGP K_c is unbounded [3], for computational considerations we would like to truncate it to a finite value K_c ≪ max(P, N) without much loss of information. As justification, the following theoretical bound is obtained, in a manner similar to its counterparts for the DP [11].

Lemma 1. Denote M_{ij}^{K_c} = \sum_{k=K_c+1}^{∞} s_k (l_{i:} u_{:k})(r_{j:} v_{:k}). Then for all ε > 0 we have p{(M_{ij}^{K_c})^2 > ε} < ab / ((1 − 1/α_c) ε α_c^{K_c}), where a = max_k E(l_{i:} u_{:k})^2 and b = max_k E(r_{j:} v_{:k})^2.

Lemma 1 states that, when α_c > 1, the approximation error introduced by the truncation level K_c decays exponentially fast to 0 as K_c → ∞. In Section 3 an MCMC method is developed to adaptively choose K_c at each iteration, which frees us from fixing it a priori. The proofs of Theorem 1 and Lemma 1 can be found in the Supplemental Material. 2.3 Joint learning of FTM and BMF Via the FTM and BMF framework of the previous subsections, each piece of legislation j is represented as two latent binary feature vectors b_{j:} and r_{j:}. To jointly model the matrix of votes with the associated text of legislation, a natural choice is to impose b_{j:} = r_{j:}. As a result, the full joint model is specified by equations (1)-(5), with b_{jt} in (1) replaced by r_{jt}. Note that the joint model links the topics characteristic of the text to the latent binary features characteristic of legislation in the matrix decomposition; this linkage leverages the statistical strength of the two data sources across the latent variables of the joint model during posterior inference. A graphical representation of the joint model can be found in the Supplemental Material.
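The exponential decay of the truncation error described in Lemma 1 can also be checked empirically. The following Monte Carlo sketch is our own simplification: the scalar projections (l_{i:} u_{:k}) and (r_{j:} v_{:k}) are replaced by independent standard normal draws, which preserves the role of the MGP shrinkage:

```python
import numpy as np

rng = np.random.default_rng(3)

def tail_mass(Kc, K_total=60, alpha_c=3.0, n_rep=2000):
    # Monte Carlo estimate of E[(M_ij^{Kc})^2], the squared contribution of
    # the rank-1 terms beyond truncation level Kc, with standard normal
    # stand-ins for the scalar projections.
    delta = rng.gamma(alpha_c, 1.0, size=(n_rep, K_total))
    tau = np.cumprod(delta, axis=1)
    s = rng.normal(0.0, 1.0 / np.sqrt(tau))
    a = rng.normal(size=(n_rep, K_total))
    b = rng.normal(size=(n_rep, K_total))
    M = (s * a * b)[:, Kc:].sum(axis=1)
    return np.mean(M ** 2)
```

Increasing the truncation level drives the estimated tail mass rapidly toward zero, consistent with the exponential rate in Lemma 1.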
In the context of the model Y = f(X), with X = L H R^T, if one were to learn L and H from the available training data, then a new piece of legislation y_{:N+1} could be predicted if we had access to r_{:N+1}. Via the construction above, not only do we gain a predictive advantage, because the new legislation's latent binary features r_{:N+1} can be obtained by modeling its document as in (1), but the model also provides powerful interpretative insights: specifically, the topics inferred from the documents may be used to interpret the latent binary features associated with the matrix factorization. These advantages are demonstrated through experiments on legislative roll-call data in Section 4. 2.4 Related work The ideal-point topic model (IPTM) was developed in [8], where the supervised latent Dirichlet allocation (sLDA) [4] model was used to link empirical topic-usage frequencies to the latent factors via regression. In that work the dimension of the latent factors was set to 1, i.e., fixing K_c = 1 in our nomenclature. In [23] the authors proposed to jointly analyze the voting matrix and the associated text through a mixture model, where each legislation's latent feature factor is clustered to a mixture component coupled with that legislation's document topic distribution θ. Note that in their case each piece of legislation can only belong to one cluster, while in our case the latent binary features for each document can effectively be treated as grouped into multiple clusters [13] (a mixed-membership model, manifested in terms of the binary feature vectors). Similar research linking collaborative filtering and topic models can also be found in web-content recommendation [1], movie recommendation [19], and scientific-paper recommendation [22]. None of these methods uses the binary indicators as the characterization of the associated documents; instead they perform the linking via the topic distribution θ and the latent (real) features in various ways.
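Prediction for a new bill, as described in Section 2.3, reduces under the probit link to evaluating the standard normal CDF of x = L H r_new, where r_new comes from the text model. A minimal sketch (names ours):

```python
import numpy as np
from math import erf, sqrt

def predict_new_bill(L, H, r_new):
    # Predictive vote probabilities for a new column of Y:
    # p(y_i = 1) = Phi(x_i) under the probit link, with x = L H r_new.
    x = L @ H @ r_new
    return np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in x])
```

Here `0.5 * (1 + erf(x / sqrt(2)))` is the standard normal CDF Phi(x), so a large positive latent score maps to a vote probability near one.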
3 Posterior Inference We use Gibbs sampling for posterior inference over the latent variables; only the sampling equations unique to this model are discussed here. The rest are similar to those in [24, 13]. In the following we use p(·|−) to denote the conditional posterior of one variable given all others.

Sampling {v_{:k}, u_{:k}}_{k=1:K_c}. Based on (3) and (4), the conditional posterior of v_{:k} can be written as p(v_{:k}|−) ∝ \prod_{j=1}^{N} N(x̂_{:j} | \sum_{k=1}^{K_c} s_k (L u_{:k})(r_{j:} v_{:k}), I) N(v_{:k} | 0, I_{K_r}). It can be shown that p(v_{:k}|−) = N(v_{:k} | μ_{v_{:k}}, Σ_{v_{:k}}), with mean μ_{v_{:k}} = s_k Σ_{v_{:k}} \sum_{j=1}^{N} (L u_{:k} r_{j:})^T x̃_{:j}^{−k} and covariance matrix Σ_{v_{:k}} = [I_{K_r} + s_k^2 \sum_{j=1}^{N} (L u_{:k} r_{j:})^T (L u_{:k} r_{j:})]^{−1}, where x̃_{:j}^{−k} = x̂_{:j} − L U V^T r_{j:}^T + L u_{:k} r_{j:} v_{:k}. Repeating the above procedure, p(u_{:k}|−) can be derived similarly.

Sampling {s_k}_{k=1:K_c}. Based on (4) and (5), the conditional posterior of s_k can be written as p(s_k|−) ∝ \prod_{j=1}^{N} N(x̂_{:j} | \sum_{k=1}^{K_c} s_k (L u_{:k})(r_{j:} v_{:k}), I) N(s_k | 0, τ_k^{−1}). It can be shown that p(s_k|−) = N(s_k | μ_{s_k}, σ_{s_k}^2), with mean μ_{s_k} = σ_{s_k}^2 \sum_{j=1}^{N} ((L u_{:k})(r_{j:} v_{:k}))^T x̃_{:j}^{−k} and variance σ_{s_k}^2 = 1 / (τ_k + \sum_{j=1}^{N} ((L u_{:k})(r_{j:} v_{:k}))^T ((L u_{:k})(r_{j:} v_{:k}))).

Sampling {τ_k, δ_k}_{k=1:K_c}. Based on (5), given a fixed truncation level K_c, δ_k can be sampled directly from its posterior: p(δ_k|−) = Gamma(δ_k | α_c + (K_c − k + 1)/2, 1 + (1/2) \sum_{l=k}^{K_c} ν_l^{(k)} s_l^2), where ν_l^{(k)} = \prod_{t=1, t≠k}^{l} δ_t. τ_k can then be reconstructed from δ_{1:k} as in (5).

Sampling {r_{jt}}_{j=1:N, t=1:K_r}. Similar to the derivation in [24], p(r_{jt} = 1|−) = 1 if N_{jt} > 0, where N_{jt} denotes the number of times document j uses topic t. When N_{jt} = 0, based on (1) and (4), the conditional posterior of r_{jt} can be written as p(r_{jt} = 1|−) ∝ (π_t / (π_t + 2^{λ_t}(1 − π_t))) exp{−(1/2)[(L h_{:t})^T (L h_{:t}) − 2 (L h_{:t})^T x̃_{:j}^{−t}]}, where h_{:t} is the tth column of H = \sum_{k=1}^{K_c} s_k u_{:k} v_{:k}^T; and p(r_{jt} = 0|−) ∝ 2^{λ_t}(1 − π_t) / (π_t + 2^{λ_t}(1 − π_t)). {l_{it}}_{i=1:P, t=1:K_l} is sampled as described in [13].

Adaptive sampler for MGP. The above Gibbs sampler needs a predefined truncation level K_c.
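The conditional updates above all follow the standard conjugate-Gaussian pattern; for example, the s_k update can be sketched as below, with the residual matrix x̃^{−k} flattened into a vector alongside its coefficient vector g (a generic sketch of the pattern; names ours):

```python
import numpy as np

def gibbs_update_sk(residual, g, tau_k, rng):
    # Conjugate Gaussian update for s_k in residual ≈ s_k * g + N(0, I),
    # with prior s_k ~ N(0, 1/tau_k): the posterior is Gaussian with
    # variance 1/(tau_k + g'g) and mean variance * g' residual.
    var = 1.0 / (tau_k + g @ g)
    mean = var * (g @ residual)
    return mean + np.sqrt(var) * rng.normal()
```

The u_{:k} and v_{:k} updates have the same structure, with a full posterior covariance matrix in place of the scalar variance.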
In [3, 26] the authors proposed an adaptive sampler that tunes K_c as the sampler progresses, with convergence of the chain guaranteed [16]. Specifically, the adaptation procedure is triggered with probability p(t) = exp(z_0 + z_1 t) at the tth iteration, with z_0 and z_1 chosen so that adaptation occurs frequently at the beginning of the chain but decays exponentially fast. When adaptation is triggered at iteration t, let q_κ(t) = {k | d_∞(s_k L u_{:k} v_{:k}^T R^T) ≤ κ} denote the indices of the rank-1 matrices whose maximum-valued entry is less than some pre-defined threshold κ; these intuitively have a negligible contribution at iteration t, and are therefore deleted, decreasing K_c. On the other hand, if q_κ(t) is empty, this suggests that more rank-1 matrices are needed; in this case we increase K_c by one and draw u_{:K_c}, v_{:K_c} from their respective prior distributions. 4 Experimental Results 4.1 Experiment setting We perform joint matrix and text analysis on the House of Representatives (House), sessions 106-111; we model each session's roll-call votes separately as a binary matrix Y. An entry y_{ij} = 1 denotes that the ith legislator's response to legislation j is either "Yea" or "Yes", and y_{ij} = 0 denotes that the corresponding response is either "Nay" or "No". The data are preprocessed in the same way as described in [8]. We recommend setting the IBP hyperparameters α_l = α_r = 1, the MGP hyperparameter α_c = 3, the FTM hyperparameter γ = 5, and the topic-model hyperparameter η = 0.01. We also considered using a random-walk MH algorithm with non-informative gamma priors to infer these hyperparameters, as described in [24, 3], and the Markov chain exhibited similar mixing performance. The truncation level K_c in the MGP is not fixed but inferred by the adaptive sampler, with the threshold parameter κ set to 0.05 (it is recommended to be set small for most applications).
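The truncation-adaptation rule for the MGP can be sketched as follows; this is only the deterministic core (the exp(z_0 + z_1 t) trigger and the prior draws for newly added factors are omitted), and the names are ours:

```python
def adapt_truncation(term_max_abs, Kc, kappa=0.05):
    # One adaptation step for the MGP truncation level: rank-1 terms whose
    # largest-magnitude entry is at most kappa are deemed negligible and
    # deleted; if none qualify, Kc grows by one (the new factors would then
    # be drawn from their prior distributions).
    negligible = [k for k, m in enumerate(term_max_abs) if m <= kappa]
    if negligible:
        return Kc - len(negligible), negligible
    return Kc + 1, []
```

`term_max_abs[k]` stands for d_∞(s_k L u_{:k} v_{:k}^T R^T), the largest-magnitude entry of the kth rank-1 contribution at the current iteration.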
In the study below, for each model we run 5000 iterations of the Gibbs sampler, with the first 1000 iterations discarded as burn-in; 400 samples are then collected by keeping every tenth iteration thereafter, and these are used to form Bayesian estimates of the quantities of interest. (The House data are available from thomas.loc.gov.) 4.2 Predicting random missing votes In this section we study the classical problem of estimating the values of matrix entries that are missing uniformly at random (in-matrix missing votes), without the use of associated documents. We compare the model proposed in (4) to the probabilistic matrix factorization (PMF) of [17, 18]. This is done by decomposing the latent matrix X = Ψ Φ^T, where each row of Ψ and Φ is drawn from a Gaussian distribution with mean and covariance matrix modeled by a Gaussian-Wishart distribution. To study the behavior of the proposed MGP prior in (5), we (i) vary the number of columns (rank) K_c in Ψ and Φ as a free parameter, and call this model PMF; and (ii) incorporate the MGP into the decomposition X = Ψ S Φ^T, where S ∈ R^{K_c×K_c} is a diagonal matrix with each diagonal element specified as s_k; the model in (ii) is called PMF+MGP. Additionally, to check whether the low-rank assumption detailed in Section 2.2 is effective for BMF, we also compare against the BMF model originally proposed in [13], which we term BMF-Original. We compared these models on predicting missing values selected uniformly at random, with different percentages (90%, 95%, 99%) of missingness. This study was done on House data from sessions 106 to 111; to conserve space we only summarize the experimental results on the 110th House data, in Figure 1; similar results are observed across all sessions. In Figure 1 each panel corresponds to a certain percentage of missingness; the horizontal axis is the number of columns (rank), which varies as a free parameter of PMF, while the vertical axis is the prediction accuracy.
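The collection schedule used throughout the experiments (5000 iterations, 1000 burn-in, thinning by 10, yielding 400 retained samples) can be sketched as:

```python
def collect_samples(run_iteration, n_iter=5000, burn_in=1000, thin=10):
    # Gibbs collection schedule: discard the first burn_in iterations, then
    # keep every `thin`-th state afterwards. `run_iteration` performs one
    # sweep of the sampler and returns the current state.
    samples = []
    for t in range(1, n_iter + 1):
        state = run_iteration(t)
        if t > burn_in and (t - burn_in) % thin == 0:
            samples.append(state)
    return samples
```

With the defaults above this retains exactly (5000 − 1000) / 10 = 400 samples, matching the setup described in the text.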
The MGP is observed to be generally effective in modeling the rank across the three panels, and the low-rank assumption is critical to obtaining good performance for the BMF. When the percentage of missingness is relatively low, e.g., 90% or 95%, PMF performs better than BMF; however, when the percentage of missingness is high, e.g., 99%, the BMF (with the low-rank assumption) is very competitive with PMF. This is probably because of the way BMF encourages the sharing of statistical strength among all rows and columns via the matrix H, as described in [13], which is most effective when data are scarce. 4.3 Predicting new bills based on text We study the predictive power of the proposed model when the legislative roll-call votes and the associated bill documents are modeled jointly, as described in Section 2.3. We compare our proposed model with the IPTM of [8], where the authors fixed the rank K_c = 1; we term this model IPTM(K_c = 1). In [8] the authors suggested that fixing the rank to one might be over-restrictive, so we also propose to model the rank in the ideal-point model using the MGP, in a manner similar to that used for the PMF model above, and call this model IPTM. We also compare our model with that of [23], where the authors proposed to combine a factor-analysis model and a topic model via a compounded mixture model, with all sessions of roll-call data modeled jointly via a Markov process. Since our main goal is to predict new bills, not to model the matrices dynamically, in the following experiments we remove the Markov process and model each session of House data separately; we call this model FATM. In [23] the authors proposed to use a beta-Bernoulli distributed binary variable b_k to model whether the kth rank-1 matrix is used in the matrix decomposition.
When performing posterior inference we find that b_k tends to be easily trapped in local maxima, whereas with the MGP, which models the significance of usage (rather than the binary usage) of the kth rank-1 matrix via s_k, smoother estimates and better mixing were observed. For each session the bills are partitioned into six folds; we iteratively remove a fold, train the model with the remaining folds, and then perform prediction on the bills in the removed fold. The experimental results are summarized in Figure 2. Note that since r_{j:} is modeled via the stick-breaking construction of the IBP as in (1), the total number of latent binary features K_r is unbounded, and we face the risk that the latent binary features important for explaining the votes Y and those important for explaining the associated text are learned separately. This may lead to the undesirable consequence that the latent features learned from text are not discriminative in predicting a new piece of legislation. To reduce this risk, in practice we could either set α_r so that it strongly favors fewer latent binary features, or truncate the stick-breaking construction at a pre-defined level K_r. For a clearer comparison with the other models, in which the number of topics is fixed, we choose the second approach and let K_r vary as the maximum number of possible topics. Across all sessions IPTM consistently performs better than its counterpart with K_c = 1; this again demonstrates the effectiveness of the MGP in modeling the rank. Although there is no significant advantage of our proposed model when the truncation on the number of topics K_r (horizontal axis) is small (e.g., 30-50), over-fitting is observed for all models except our proposed model as the number of topics increases: the performance of the other models drops significantly (vertical axis). Across all five sessions, the best quantitative results are obtained by the proposed model when K_r > 100.

Figure 1: Comparison of prediction accuracy for votes missing uniformly at random, for the 110th House data. Different panels correspond to different percentages of missingness (90%, 95%, 99%); in each panel the vertical axis represents accuracy and the horizontal axis represents the rank set for PMF. For PMF+MGP and our proposed method, the inferred rank K_c is shown for the most-probable collected sample. Methods compared: Proposed, BMF-Original, PMF, PMF+MGP.

Figure 2: Prediction accuracy for held-out legislation across the 106th-111th House data; prediction of an entire column of missing votes based on text. In each panel the vertical axis represents accuracy and the horizontal axis represents the number of topics used for each model. Results are averaged across the six folds; variances are too small to see. Methods compared: Proposed, FATM, IPTM, IPTM(K_c = 1).

4.4 Latent binary feature interpretation In this study we partition all the bills into two groups: (i) bills for which there is near-unanimous agreement, with "Yea" or "Yes" votes above 90%; (ii) contentious bills for which the percentage of votes received as "Yea" or "Yes" is below 60%.
By linking the inferred binary latent features to the topics for these two groups, we can gain insight into the characteristics of legislation and voting patterns, e.g., what influenced a near-unanimous yes vote, and what influenced more contention. Figure 3 compares the latent-feature usage patterns of the two groups; the horizontal axis indexes the latent features, where we set K_r = 100 for illustration, and the vertical axis is the aggregate frequency with which a feature/topic is used by all the bills in each of the two groups. The frequency is normalized within each group for easy interpretation. For each group, we select three discriminative features: ones heavily used in one group but rarely used in the other (these selected features are highlighted in blue/red). For example, in the left panel the features highlighted in blue are widely used by bills in the left group, but rarely used by bills in the right group.

Figure 3: Comparison of the frequencies of binary-feature usage between two groups of bills. Left: near-unanimous affirmative bills (bills for which the percentage of votes received as "Yes" or "Yea" is more than 90%). Right: contentious bills (bills for which the percentage of votes received as "Yes" or "Yea" is less than 60%). Data from the 110th House, with K_r = 100. The vertical axis represents the normalized frequency of feature/topic usage within the corresponding group. The six most discriminative features/topics (Topics 22, 31, 38, 62, 73, 83, labeled in the figure) are shown in Table 1.

Table 1: Six discriminative topics of near-unanimously agreed / highly debated bills learned from the 110th House of Representatives, with the top-ten most probable words shown. (R) and (B) mark the topics depicted in Figure 3 in red and blue, respectively.

TOPIC 22 (B)   TOPIC 31 (R)            TOPIC 38 (R)   TOPIC 62 (B)    TOPIC 73 (B)   TOPIC 83 (R)
CHILDREN       CONCURRENT RESOLUTION   TAX            PEOPLE          NATION         CLAUSE
CHILD          ADJOURN                 CORPORATION    WORLD           ATTACK         PRINT
YOUTH          MAJORITY LEADER         TAXABLE        HOME            TERRORIST      WAIVE
PORNOGRAPHY    DESIGNEE                CREDIT         SANITATION      PEOPLE         SUBSTITUTE
INTERNET       AVIATION                PENALTY        WATER           SEPTEMBER      COMMITTEE AMENDMENT
FATHER         RECESS                  REVENUE        INTERNATIONAL   VOLUNTEER      READ
FAMILY         MINORITY LEADER         TAXPAYER       SOUTHERN        CITIZEN        DEBATE
PARENT         FEBRUARY                SPECIAL        COMPENSATION    PAKISTAN       OFFER
SCHOOL         MOTION                  OFFER          FILE            ASSOCIATION    LEGITIMATE
DIVIDE AND CONTROL EMERGENCY STAND SUBSTITUTE ECONOMIC FUTURE MOTION

As observed from Figure 3, the learned binary features are discriminative, as the usage patterns of the two groups are quite different. We also study the interpretation of the latent features by linking them to the topics inferred from the texts. As an example, the six highlighted features are linked to their corresponding topics in Table 1, with the top-ten most probable words within each topic shown. For the near-unanimously agreed bills, we can read from Table 1 that they are likely related to topics about the education of youth (Topic 22) or the prevention of terrorism (Topic 73), while bills from the contentious group tend to be more related to making amendments to an existing piece of legislation (Topic 83) or discussing taxation (Topic 38). Note that, compared to conventional topic modeling, these inferred topics are not only informative about the semantic meaning of the bills, but also discriminative in predicting the outcome of the bills.
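The normalized usage frequencies plotted in Figure 3 amount to the following computation on the inferred binary feature matrix R, where each row corresponds to a bill (a sketch; names ours):

```python
import numpy as np

def feature_usage(R, group_idx):
    # Aggregate how often each latent feature/topic is switched on by the
    # bills in a group, normalized to sum to one within the group.
    counts = R[group_idx].sum(axis=0).astype(float)
    total = counts.sum()
    return counts / total if total > 0 else counts
```

Computing this separately for the near-unanimous and contentious groups, and comparing the two resulting profiles, identifies the discriminative features highlighted in the figure.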
5 Conclusion

A new methodology has been developed for the joint analysis of a matrix with associated text, based on sharing latent binary features modeled via the Indian buffet process. The model has been demonstrated on an analysis of voting data from the US House of Representatives. Imposing a low-rank representation on the latent real matrix has proven important, with this done in a new manner via the multiplicative gamma process. Encouraging quantitative results are demonstrated, and the model has also been shown to yield interesting insights into the meaning of the latent features. The sharing of latent binary features provides a general joint learning framework for Indian-buffet-process-based models [9], of which the focused topic model and binary matrix factorization are two examples; exploring other possibilities in different scenarios could be an interesting direction.

Acknowledgements

The authors would like to thank the anonymous reviewers for providing useful comments. The research reported here was supported by ARO, DOE, NGA, ONR, and DARPA (under the MSEE program).

References
[1] D. Agarwal and B. Chen. fLDA: matrix factorization through latent Dirichlet allocation. In WSDM, 2010.
[2] J. H. Albert and S. Chib. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 1993.
[3] A. Bhattacharya and D. B. Dunson. Sparse Bayesian infinite factor models. Biometrika, 2011.
[4] D. M. Blei and J. D. McAuliffe. Supervised topic models. In Advances in Neural Information Processing Systems, 2007.
[5] D. M. Blei, A. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[6] J. Clinton, S. Jackman, and D. Rivers. The statistical analysis of roll call data. Am. Political Sc. Review, 2004.
[7] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1973.
[8] S. Gerrish and D. M. Blei. Predicting legislative roll calls from text. In ICML, 2011.
[9] T. L. Griffiths and Z. Ghahramani. The Indian buffet process: An introduction and review. Journal of Machine Learning Research, 12:1185–1224, 2011.
[10] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems, 2005.
[11] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. J. American Statistical Association, 2001.
[12] P. McCullagh and J. Nelder. Generalized Linear Models. Chapman and Hall, 1989.
[13] E. Meeds, Z. Ghahramani, R. Neal, and S. Roweis. Modeling dyadic data with binary latent factors. In Advances in Neural Information Processing Systems, 2007.
[14] K. Miller, T. Griffiths, and M. I. Jordan. Nonparametric latent feature models for link prediction. In Advances in Neural Information Processing Systems, 2009.
[15] K. T. Poole. Recent developments in analytical models of voting in the U.S. congress. Am. Political Sc. Review, 1988.
[16] G. O. Roberts and J. S. Rosenthal. Coupling and ergodicity of adaptive MCMC. Journal of Applied Probability, 2007.
[17] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, 2007.
[18] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In ICML, 2008.
[19] H. Shan and A. Banerjee. Generalized probabilistic matrix factorizations for collaborative filtering. In ICDM, 2010.
[20] Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In AISTATS, 2007.
[21] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 2006.
[22] C. Wang and D. M. Blei. Collaborative topic modeling for recommending scientific articles. In KDD, 2011.
[23] E. Wang, D. Liu, J. Silva, D. B. Dunson, and L. Carin. Joint analysis of time-evolving binary matrices and associated documents. In Advances in Neural Information Processing Systems, 2010.
[24] S. Williamson, C. Wang, K. A. Heller, and D. M. Blei. The IBP compound Dirichlet process and its application to focused topic modeling. In ICML, 2010.
[25] X. Zhang, D. Dunson, and L. Carin. Hierarchical topic modeling for analysis of time-evolving personal choices. In Advances in Neural Information Processing Systems, 2011.
[26] X. Zhang, D. Dunson, and L. Carin. Tree-structured infinite sparse factor model. In ICML, 2011.
| 2012 | 159 |
| 4,518 |
Learning Image Descriptors with the Boosting-Trick

Tomasz Trzcinski, Mario Christoudias, Vincent Lepetit and Pascal Fua
CVLab, EPFL, Lausanne, Switzerland
firstname.lastname@epfl.ch

Abstract

In this paper we apply boosting to learn complex non-linear local visual feature representations, drawing inspiration from its successful application to visual object detection. The main goal of local feature descriptors is to distinctively represent a salient image region while remaining invariant to viewpoint and illumination changes. This representation can be improved using machine learning; however, past approaches have been mostly limited to learning linear feature mappings in either the original input or a kernelized input feature space. While kernelized methods have proven somewhat effective for learning non-linear local feature descriptors, they rely heavily on the choice of an appropriate kernel function, whose selection is often difficult and non-intuitive. We propose to use the boosting-trick to obtain a non-linear mapping of the input to a high-dimensional feature space. The non-linear feature mapping obtained with the boosting-trick is highly intuitive. We employ gradient-based weak learners, resulting in a learned descriptor that closely resembles the well-known SIFT. As demonstrated in our experiments, the resulting descriptor can be learned directly from intensity patches, achieving state-of-the-art performance.

1 Introduction

Representing salient image regions in a way that is invariant to unwanted image transformations is a crucial Computer Vision task. Well-known local feature descriptors, such as the Scale Invariant Feature Transform (SIFT) [1] or Speeded Up Robust Features (SURF) [2], address this problem by using a set of hand-crafted filters and non-linear operations. These descriptors have become prevalent, even though they are not truly invariant with respect to various viewpoint and illumination changes, which limits their applicability.
In an effort to address these limitations, a fair amount of work has focused on learning local feature descriptors [3, 4, 5] that leverage labeled training image patches to learn invariant feature representations based on local image statistics. Although significant progress has been made, these approaches are either built on top of hand-crafted representations [5] or still require significant parameter tuning, as in [4], which relies on a non-analytical objective that is difficult to optimize. Learning an invariant feature representation is strongly related to learning an appropriate similarity measure or metric over intensity patches that is invariant to unwanted image transformations, and work on descriptor learning has predominantly focused in this area [3, 6, 5]. Methods for metric learning that have been applied to image data have largely focused on learning a linear feature mapping in either the original input or a kernelized input feature space [7, 8]. This includes previous boosting-based metric learning methods that thus far have been limited to learning linear feature transformations [3, 7, 9]. In this way, non-linearities are modeled using a predefined similarity or kernel function that implicitly maps the input features to a high-dimensional feature space where the transformation is assumed to be linear. While these methods have proven somewhat effective for learning non-linear local feature mappings, choosing an appropriate kernel function is often non-intuitive and remains a challenging and largely open problem. Additionally, kernel methods involve an optimization whose complexity grows quadratically with the number of training examples, making them difficult to apply to large problems typical of local descriptor learning. In this paper, we apply boosting to learn complex non-linear local visual feature representations, drawing inspiration from its successful application to visual object detection [10].
Image patch appearance is modeled using local non-linear filters evaluated within the image patch that are effectively selected with boosting. Analogous to the kernel-trick, our approach can be seen as applying a boosting-trick [11] to obtain a non-linear mapping of the input to a high-dimensional feature space. Unlike kernel methods, the boosting-trick allows for the definition of intuitive non-linear feature mappings. Also, our learning approach scales linearly with the number of training examples, making it more easily amenable to large-scale problems, and results in highly accurate descriptor matching. We build upon [3], which also relies on boosting to compute a descriptor, and show how we can use it as a way to efficiently select features, from which we compute a compact representation. We also replace the simple weak learners of [3] by non-linear filters more adapted to the problem. In particular, we employ image gradient-based weak learners similar to [12] that share a close connection with the non-linear filters used in proven image descriptors such as SIFT and Histogram-of-Oriented Gradients (HOG) [13]. Our approach can be seen as a generalization of these methods cast within a principled learning framework. As seen in our experiments, our descriptor can be learned directly from intensity patches and results in state-of-the-art performance rivaling its hand-designed equivalents. To evaluate our approach we consider the image patch dataset of [4] containing several hundred thousand image patches under varying viewpoint and illumination conditions. As baselines we compare against leading contemporary hand-designed and learned local feature descriptors [1, 2, 3, 5]. We demonstrate the effectiveness of our approach on this challenging dataset, significantly outperforming the baseline methods.

2 Related work

Machine learning has been applied to improve both the matching efficiency and accuracy of image descriptors [3, 4, 5, 8, 14, 15].
Feature hashing methods improve the storage and computational requirements of image-based features [16, 14, 15]. Salakhutdinov and Hinton [16, 17] develop a semantic hashing approach based on Restricted Boltzmann Machines (RBMs) applied to binary images of digits. Similarly, Weiss et al. [14] present a spectral hashing approach that learns compact binary codes for efficient image indexing and matching. Kulis and Darrell [15] extend this idea to explicitly minimize the error between the original Euclidean and computed Hamming distances. Many of these approaches presume a given distance or similarity measure over a pre-defined input feature space. Although they result in efficient description and indexing, in many cases they are limited to the matching accuracy of the original input space. In contrast, our approach learns a non-linear feature mapping that is specifically optimized to result in highly accurate descriptor matching. Methods for metric learning learn feature spaces tailored to a particular matching task [5, 8]. These methods assume the presence of annotated label pairs or triplets that encode the desired proximity relationships of the learned feature embedding. Jain et al. [8] learn a Mahalanobis distance metric defined using either the original input or a kernelized input feature space, applied to image classification and matching. Alternatively, Strecha et al. [5] employ Linear Discriminant Analysis to learn a linear feature mapping from binary-labeled example pairs. Both of these methods are closely related, offering different optimization strategies for learning a Mahalanobis-based distance metric. While these methods improve matching accuracy through a learned feature space, they require a pre-selected kernel function to encode non-linearities. Such approaches are well suited for certain image indexing and classification tasks where task-specific kernel functions have been proposed (e.g., [18]).
However, they are less applicable to local image feature matching, for which the appropriate choice of kernel function is less well understood. Boosting has also been applied to learning Mahalanobis-based distance metrics involving high-dimensional input spaces, overcoming the large computational complexity of conventional positive semi-definite (PSD) solvers based on the interior point method [7, 9]. Shen et al. [19] proposed a PSD solver using column generation techniques based on AdaBoost, which was later extended to involve closed-form iterative updates [7]. More recently, Bi et al. [9] devised a similar method exhibiting even further improvements in computational complexity, with application to bio-medical imagery. While these methods also use boosting to learn a feature mapping, they have emphasized computational efficiency, only considering linear feature embeddings. Our approach exhibits similar computational advantages, but has the ability to learn non-linear feature mappings beyond what these methods have proposed. Similar to our work, Brown et al. [4] also consider different feature pooling and selection strategies for gradient-based features, resulting in a descriptor which is both short and discriminant. In [4], however, they optimize over the combination of hand-crafted blocks and their parameters. The criterion they consider, the area below the ROC curve, is not analytical and thus difficult to optimize, and does not generalize well. In contrast, we provide a generic learning framework for finding such representations. Moreover, the form of our descriptor is much simpler. Simultaneously with this work, similar ideas were explored in [20, 21]. While those approaches assume a sub-sampled or coarse set of pooling regions to maintain tractability, we allow for the discovery of more generic pooling configurations with boosting. Our work on boosted feature learning can be traced back to the work of Dollár et al.
[22], where they apply boosting across a range of different features for pedestrian detection. Our approach is probably most similar to the boosted Similarity Sensitive Coding (SSC) method of Shakhnarovich [3], which learns a boosted similarity function from a family of weak learners, a method that was later extended in [23] to be used with a Hamming distance. In [3], only linear projection based weak learners were considered. Also, Boosted SSC can often yield fairly high-dimensional embeddings. Our approach can be seen as an extension of Boosted SSC to form low-dimensional feature mappings. We also show that the image gradient-based weak learners of [24] are well adapted to the problem. As seen in our experiments, our approach significantly outperforms Boosted SSC when applied to image intensity patches.

3 Method

Given an image intensity patch x ∈ R^D, we look for a descriptor of x as a non-linear mapping H(x) into the space spanned by {h_i}_{i=1}^M, a collection of thresholded non-linear response functions h_i(x) : R^D → {−1, 1}. The number of response functions M is generally large and possibly infinite. This mapping can be learned by minimizing the exponential loss with respect to a desired similarity function f(x, y) defined over image patch pairs,

L = \sum_{i=1}^{N} \exp(−l_i f(x_i, y_i)),    (1)

where x_i, y_i ∈ R^D are training intensity patches and l_i ∈ {−1, 1} is a label indicating whether it is a similar (+1) or dissimilar (−1) pair. The Boosted SSC method proposed in [3] considers a similarity function defined by a simply weighted sum of thresholded response functions,

f(x, y) = \sum_{i=1}^{M} \alpha_i h_i(x) h_i(y).    (2)

This defines a weighted hash function with the importance of each dimension i given by α_i. Substituting this expression into Equation (1) gives

L_{SSC} = \sum_{i=1}^{N} \exp\left(−l_i \sum_{j=1}^{M} \alpha_j h_j(x_i) h_j(y_i)\right).    (3)

In practice M is large and, in general, the number of possible h_i's can be infinite, making the explicit optimization of L_{SSC} difficult; this constitutes a problem for which boosting is particularly well suited [25]. Although boosting is a greedy optimization scheme, it is a provably effective method for constructing a highly accurate predictor from a collection of weak predictors h_i. Similar to the kernel trick, the resulting boosting-trick also maps each observation to a high-dimensional feature space; however, it computes an explicit mapping for which the α_i's that define f(x, y) are assumed to be sparse [11]. In fact, Rosset et al. [26] have shown that under certain settings boosting can be interpreted as imposing an L1 sparsity constraint over the response function weights α_i. As will be seen below, unlike the kernel trick, this allows for the definition of high-dimensional embeddings well suited to the descriptor matching task, whose features have an intuitive explanation. Boosted SSC employs linear response weak predictors based on a linear projection of the input. In contrast, we consider non-linear response functions more suitable for the descriptor matching task, as discussed in Section 3.3. In addition, the greedy optimization can often yield embeddings that, although accurate, are fairly redundant and inefficient. In what follows, we present our approach for learning compact boosted feature descriptors, called Low-Dimensional Boosted Gradient Maps (L-BGM). First, we present a modified similarity function well suited for learning low-dimensional, discriminative embeddings with boosting. Next, we show how we can factorize the learned embedding to form a compact feature descriptor. Finally, the gradient-based weak learners utilized by our approach are detailed.
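The greedy selection behind Eqs. (1)–(3) can be sketched as standard AdaBoost, treating each product h_j(x)h_j(y) as a weak prediction of the pair label. Below is a minimal NumPy sketch under that reading, assuming the weak responses have already been evaluated on the training pairs; `boosted_ssc` and `ssc_similarity` are hypothetical helper names, not the authors' code:

```python
import numpy as np

def boosted_ssc(hx, hy, labels, n_rounds=32):
    """AdaBoost-style greedy selection for the Boosted SSC similarity
    f(x, y) = sum_i alpha_i h_i(x) h_i(y) of Eq. (2).

    hx, hy : (N, M) arrays of weak responses h_j in {-1, +1} for the two
    patches of each of the N training pairs; labels in {-1, +1}."""
    z = hx * hy                                   # z[i, j] = h_j(x_i) h_j(y_i)
    w = np.full(len(labels), 1.0 / len(labels))   # per-pair weights
    chosen, alphas = [], []
    for _ in range(n_rounds):
        # weighted error of each candidate weak learner
        err = np.array([(w * (z[:, j] != labels)).sum() for j in range(z.shape[1])])
        j = int(err.argmin())
        eps = np.clip(err[j], 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - eps) / eps)
        chosen.append(j)
        alphas.append(alpha)
        # exponential-loss reweighting, as in standard AdaBoost
        w *= np.exp(-alpha * labels * z[:, j])
        w /= w.sum()
    return chosen, np.array(alphas)

def ssc_similarity(hx, hy, chosen, alphas):
    """Evaluate Eq. (2) for one pair, given its M weak responses."""
    return float((alphas * hx[chosen] * hy[chosen]).sum())
```

The same selected h_i's are reused below when moving from the diagonal weighting of Eq. (2) to the full correlation matrix of Section 3.1.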
3.1 Similarity measure

To mitigate the potentially redundant embeddings found by boosting, we propose an alternative similarity function that models the correlation between weak response functions,

f_{LBGM}(x, y) = \sum_{i,j} \alpha_{i,j} h_i(x) h_j(y) = h(x)^T A h(y),    (4)

where h(x) = [h_1(x), · · · , h_M(x)] and A is an M × M matrix of coefficients α_{i,j}. This similarity measure is a generalization of Equation (2). In particular, f_{LBGM} is equivalent to the Boosted SSC similarity measure in the restricted case of a diagonal A. Substituting the above expression into Equation (1) gives

L_{LBGM} = \sum_{k=1}^{N} \exp\left(−l_k \sum_{i,j} \alpha_{i,j} h_i(x_k) h_j(y_k)\right).    (5)

Although it can be shown that L_{LBGM} can be jointly optimized for A and the h_i's using boosting, this involves a fairly complex procedure. Instead, we propose a two-step learning strategy whereby we first apply AdaBoost to find the h_i's, as in [3]. As shown by our experiments, this provides an effective way to select relevant h_i's. We then apply stochastic gradient descent to find an optimal weighting over the selected features that minimizes L_{LBGM}. More formally, let P be the number of relevant response functions found with AdaBoost, with P ≪ M. We define A_P ∈ R^{P×P} to be the sub-matrix corresponding to the non-zero entries of A, explicitly optimized by our approach. Note that as the loss function is convex in A, A_P can be found optimally with respect to the selected h_i's. In addition, we constrain α_{i,j} = α_{j,i} during optimization, restricting the solution to the set of symmetric P × P matrices and yielding a symmetric similarity measure f_{LBGM}. We also experimented with more restrictive forms of regularization, e.g., constraining A_P to be positive semi-definite; however, this is more costly and gave similar results. We use a simple implementation of stochastic gradient descent with a constant step size, initialized using the diagonal matrix found by Boosted SSC, and iterate until convergence or until a maximum number of iterations is reached.
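The second step, minimizing L_{LBGM} of Eq. (5) over the symmetric matrix A_P, can be sketched as follows. For clarity this uses full-batch gradient descent rather than the stochastic variant the paper describes, and `fit_A` is a hypothetical name; the gradient of the per-pair loss exp(−l_k h(x_k)^T A h(y_k)) with respect to A is −l_k exp(−l_k s_k) h(x_k) h(y_k)^T, symmetrized to enforce α_{i,j} = α_{j,i}:

```python
import numpy as np

def fit_A(hx, hy, labels, n_iters=200, lr=0.05):
    """Gradient descent on the exponential loss of Eq. (5) over the
    symmetric coefficient matrix A_P. hx, hy: (N, P) weak responses in
    {-1, +1}; labels in {-1, +1}. Illustrative sketch only."""
    N, P = hx.shape
    A = 0.1 * np.eye(P)   # the paper initializes from Boosted SSC's diagonal
    for _ in range(n_iters):
        s = np.einsum('np,pq,nq->n', hx, A, hy)   # s_k = h(x_k)^T A h(y_k)
        e = np.exp(-labels * s)                    # per-pair exponential loss
        # dL/dA = sum_k -l_k e_k h(x_k) h(y_k)^T
        g = (-(labels * e)[:, None, None] * (hx[:, :, None] * hy[:, None, :])).sum(axis=0)
        g = 0.5 * (g + g.T)                        # keep A symmetric
        A -= lr * g / N
    return A
```

Because the weak responses are binary, the outer products hx[:, :, None] * hy[:, None, :] could be precomputed once, which is the speed-up noted in the text.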
Note that because the weak learners are binary, we can precompute the exponential terms involved in the derivatives for all the data samples, as they are constant with respect to A_P. This significantly speeds up the optimization process.

3.2 Embedding factorization

The similarity function of Equation (4) defines an implicit feature mapping over example pairs. We now show how the A_P matrix in f_{LBGM} can be factorized to result in compact feature descriptors computed independently over each input. Assuming A_P to be a symmetric P × P matrix, it can be factorized into the following form,

A_P = B W B^T = \sum_{k=1}^{d} w_k b_k b_k^T,    (6)

where W = diag([w_1, · · · , w_d]), w_k ∈ {−1, 1}, B = [b_1, · · · , b_d], b_k ∈ R^P, and d ≤ P.

[Figure 1: illustration of image gradients, a keypoint descriptor with pooling regions R_1–R_4, quantized orientations e_0–e_6, a response φ_{R_1,e_1}, and an α weighting.]

Figure 1: A specialized configuration of weak response functions φ corresponding to a regular gridding within the image patch. In addition, assuming a Gaussian weighting of the α's results in a descriptor that closely resembles SIFT [1] and is one of the many solutions afforded by our learning framework.

Equation (4) can then be re-expressed as

f_{LBGM}(x, y) = \sum_{k=1}^{d} w_k \left(\sum_{i=1}^{P} b_{k,i} h_i(x)\right) \left(\sum_{j=1}^{P} b_{k,j} h_j(y)\right).    (7)

This factorization defines a signed inner product between the embedded feature vectors and provides increased efficiency with respect to the original similarity measure.¹ For d < P (i.e., when the effective rank of A_P is d < P), the factorization represents a smoothed version of A_P discarding the low-energy dimensions that typically correlate with noise, leading to further performance improvements. The final embedding found with our approach is therefore

H_{LBGM}(x) = B^T h(x),    (8)

with H_{LBGM}(x) : R^D → R^d. The projection matrix B defines a discriminative dimensionality reduction optimized with respect to the exponential loss objective of Equation (5).
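The factorization of Eq. (6) follows from the eigendecomposition of the symmetric A_P: the eigenvalue magnitudes can be folded into B while their signs form W. A sketch under those assumptions, with hypothetical helper names:

```python
import numpy as np

def factorize(A_P, d):
    """Factorize the learned symmetric A_P as B W B^T (Eq. 6), keeping the
    d largest-magnitude eigenvalues. Returns B (P x d) and the signs w."""
    evals, evecs = np.linalg.eigh(A_P)            # exact for symmetric A_P
    order = np.argsort(-np.abs(evals))[:d]        # top-d by |eigenvalue|
    lam, V = evals[order], evecs[:, order]
    B = V * np.sqrt(np.abs(lam))                  # fold |lambda_k| into b_k
    w = np.sign(lam)                              # W = diag(w), w_k in {-1, +1}
    return B, w

def embed(h, B):
    """Low-dimensional descriptor H_LBGM(x) = B^T h(x) of Eq. (8)."""
    return B.T @ h

def f_lbgm(h_x, h_y, B, w):
    """Signed inner product of the embedded vectors, Eq. (7)."""
    return float(np.sum(w * embed(h_x, B) * embed(h_y, B)))
```

For d = P the reconstruction is exact, f_{LBGM}(x, y) = h(x)^T A_P h(y); truncating to d < P gives the smoothed, compressed descriptor discussed above.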
As seen in our experiments, in the case of redundant h_i this results in a considerable feature compression, also offering a more compact description than the original input patch.

3.3 Weak learners

The boosting-trick allows for a variety of non-linear embeddings parameterized by the chosen weak learner family. We employ the gradient-based response functions of [12] to form our feature descriptor. In [12], the usefulness of these features was demonstrated for visual object detection. In what follows, we extend these features to the descriptor matching task, illustrating their close connection with the well-known SIFT descriptor. Following the notation of [12], our weak learners are defined as

h(x; R, e, T) = \begin{cases} 1 & \text{if } \phi_{R,e}(x) \le T \\ −1 & \text{otherwise}, \end{cases}    (9)

where

\phi_{R,e}(x) = \sum_{m \in R} \xi_e(x, m) \Big/ \sum_{e_k \in \Phi,\, m \in R} \xi_{e_k}(x, m),    (10)

with ξ_e(x, m) the gradient energy along an orientation e at location m within x, and R a rectangular extent within the patch. The gradient energy is computed from the dot product between e and the gradient orientation at pixel m [12]. The orientation e ranges over [−π, π] and is quantized to take the values Φ = {0, 2π/q, 4π/q, · · · , (q − 1) · 2π/q}, with q the number of quantization bins. As noted in [12], this representation can be computed efficiently using integral images.

¹Matching two sets of descriptors, each of size N, is O(N²P²) under the original measure and O(NPd + N²d) given the factorization, resulting in significant savings for reasonably sized N and P, and d ≪ P.

[Figure 2: learned spatial weightings over the patch, panels (a)–(c).]

Figure 2: Learned spatial weighting obtained with Boosted Gradient Maps (BGM) trained on the (a) Liberty, (b) Notre Dame and (c) Yosemite datasets. The learned weighting closely resembles the Gaussian weighting employed by SIFT (white circles indicate σ/2 and σ used by SIFT).

The non-linear gradient response functions φ_{R,e}, along with their threshold T, define the parameterization of the weak learner family optimized with our approach. Consider the specialized configuration illustrated in Figure 1. This corresponds to a selection of weak learners whose R and e values are parameterized such that they lie along a regular grid, equally sampling each edge orientation within each grid cell. In addition, if we assume a Gaussian weighting centered about the patch, the resulting descriptor closely resembles SIFT² [1]. In fact, this configuration and weighting corresponds to one of the many solutions afforded by our approach. In [4], the authors note the importance of allowing for alternative pooling and feature selection strategies, both of which are effectively optimized within our framework. As seen in our experiments, this results in a significant performance gain over hand-designed SIFT.

4 Results

In this section, we first present an overview of our evaluation framework. We then show the results obtained using Boosted SSC combined with the gradient-based weak learners described in Sec. 3.3. We continue with the results generated when applying the factorized embedding of the matrix A. Finally, we present a comparison of our final descriptor with the state of the art.

4.1 Evaluation framework

We evaluate the performance of our methods using three publicly available datasets: Liberty, Notre Dame and Yosemite [4]. Each of them contains over 400k scale- and rotation-normalized 64 × 64 patches. These patches are sampled around interest points detected using Difference of Gaussians, and the correspondences between patches are found using a multi-view stereo algorithm. The datasets created this way exhibit substantial perspective distortion and various lighting conditions. The ground truth available for each of these datasets describes 100k, 200k and 500k pairs of patches, where 50% correspond to match pairs and 50% to non-match pairs.
In our evaluation, we separately consider each dataset for training and use the held-out datasets for testing. We report the results of the evaluation in terms of ROC curves and the 95% error rate, as is done in [4].

4.2 Boosted Gradient Maps

To show the performance boost we get by using gradient-based weak learners in our boosting scheme, we plot the results for the original Boosted SSC method [3], which relies on thresholded pixel intensities as weak learners, and for the same method using gradient-based weak learners instead (referred to as Boosted Gradient Maps (BGM)), with q = 24 quantized orientation bins used throughout our experiments. As we can see in Fig. 3(a), a 128-dimensional Boosted SSC descriptor can be easily outperformed by a 32-dimensional BGM descriptor. When comparing descriptors of the same dimensionality, the improvement measured in terms of 95% error rate reaches over 50%. Furthermore, it is worth noticing that with 128 dimensions BGM performs similarly to SIFT, and when we increase the dimensionality to 512 it outperforms SIFT by 14% in terms of 95% error rate. When comparing the 256-dimensional SIFT (obtained by increasing the granularity of the orientation bins) with the 256-dimensional BGM, the extended SIFT descriptor performs much worse

²SIFT additionally normalizes each descriptor to be unit norm; however, the underlying representation is otherwise quite similar.
[Figure 3: ROC curves (true positive rate vs. false positive rate), trained on Liberty (200k) and tested on Notre Dame (100k). Panel (a) legend: SIFT (128, 28.09%), Boosted SSC (128, 72.95%), BGM (32, 37.03%), BGM (64, 29.60%), BGM (128, 21.93%), BGM (256, 15.99%), BGM (512, 14.36%). Panel (b) legend: SIFT (128, 28.09%), Boosted SSC (128, 72.95%), BGM-PCA (32, 25.73%), L-BGM-Diag (32, 34.71%), L-BGM (32, 16.20%), L-BGM (64, 14.15%), L-BGM (128, 13.76%), L-BGM (256, 13.38%), L-BGM (512, 16.33%).]

Figure 3: (a) Boosted SSC using thresholded pixel intensities in comparison with our Boosted Gradient Maps (BGM) approach. (b) Results after optimization of the correlation matrix A; performance is evaluated with respect to factorization dimensionality d. In parentheses: the number of dimensions and the 95% error rate.

(34.22% error rate vs. 15.99% for BGM-256). This indicates that boosting with a similar number of non-linear classifiers adds to the performance, and shows how well tuned the SIFT descriptor is. Visualizations of the learned weighting obtained with BGM trained on the Liberty, Notre Dame and Yosemite datasets are displayed in Figure 2. To plot the visualizations, we sum the α's across orientations within the rectangular regions of the corresponding weak learners. Note that although there are some differences, this weighting interestingly closely resembles the Gaussian weighting employed by SIFT.

4.3 Low-Dimensional Boosted Gradient Maps

To further improve performance, we optimize over the correlation matrix of the weak learners' responses, as explained in Sec. 3.1, and apply the embedding from Sec. 3.2. The results of this method are shown in Fig. 3(b). In these experiments, we learn our L-BGM descriptor using the responses of 512 gradient-based weak learners selected with boosting. We first optimize over the weak learners' correlation matrix, which is constrained to be diagonal.
This corresponds to a global optimization of the weights of the weak learners. The resulting 32-dimensional L-BGM-Diag descriptor performs only slightly better than the corresponding 32-dimensional BGM. Interestingly, the additional degrees of freedom obtained by optimizing over the full correlation matrix boost the results significantly and allow us to outperform SIFT with as few as 32 dimensions. When we compare our 128-dimensional descriptor, i.e., the descriptor of the same length as SIFT, we observe a 15% improvement in terms of 95% error rate. However, when we increase the descriptor length from 256 to 512 we see a slight performance drop, since we begin to include the "noisy" dimensions of our embedding which correspond to the eigenvalues of low magnitude, a trend typical of many dimensionality reduction techniques. Hence, as our final descriptor we select the 64-dimensional L-BGM descriptor, as it provides a decent trade-off between performance and descriptor length. Figure 3(b) also shows the results obtained by applying PCA to the responses of 512 gradient-based weak learners (BGM-PCA). The descriptor generated this way performs similarly to SIFT; however, our method still provides better results even for the same dimensionality, which shows the advantage of optimizing the exponential loss of Eq. 5.

4.4 Comparison with the state of the art

Here we compare our approach against the following baselines: the sum of squared differences of pixel intensities (SSD), the state-of-the-art SIFT descriptor [1], the SURF descriptor [2], the binary LDAHash descriptor [5], a real-valued descriptor computed by applying LDE projections to bias-gain normalized patches (LDA-int) [4], and the original Boosted SSC [3]. We have also tested recent binary descriptors such as BRIEF [27], ORB [28] and BRISK [29]; however, they performed much worse than the baselines presented in the paper. For SIFT, we use the publicly available implementation of A. Vedaldi [30].
For SURF and LDAHash, we use the implementations available from the authors' websites. For the other methods, we use our own implementations. For LDA-int we choose the dimensionality which was reported to perform best on a given dataset according to [4]. For Boosted SSC, we use 128 dimensions, as this obtained the best performance.

[Figure 4: ROC curves (true positive rate vs. false positive rate). Panel (a), trained on Notre Dame (200k) and tested on Liberty (100k): SSD (1024, 69.11%), SIFT (128, 36.27%), SURF (64, 54.01%), LDAHash (128, 49.66%), LDA-int (27, 53.93%), Boosted SSC (128, 70.35%), BGM (256, 21.62%), L-BGM (64, 18.05%). Panel (b), trained on Yosemite (200k) and tested on Notre Dame (100k): SSD (1024, 76.13%), SIFT (128, 28.09%), SURF (64, 45.51%), LDAHash (128, 51.58%), LDA-int (14, 49.14%), Boosted SSC (128, 72.20%), BGM (256, 14.69%), L-BGM (64, 13.73%).]

Figure 4: Comparison to the state of the art. In parentheses: the number of dimensions and the 95% error rate. Our L-BGM approach outperforms SIFT by up to 18% in terms of 95% error rate while using half as many dimensions.

In Fig. 4 we plot the recognition curves for all the baselines and our method. BGM and L-BGM outperform the baseline methods across all false-positive rates. The maximal performance boost is obtained by our 64-dimensional L-BGM descriptor, which results in up to an 18% improvement in terms of 95% error rate with respect to the state-of-the-art SIFT descriptor. Descriptors derived from patch intensities, i.e., SSD, Boosted SSC and LDA-int, perform much worse than the gradient-based ones. Finally, our BGM and L-BGM descriptors far outperform SIFT, which relies on hand-crafted filters applied to gradient maps. Moreover, with BGM and L-BGM we are able to reduce the 95% error rate by over 3 times with respect to the other state-of-the-art descriptors, namely SURF and LDAHash.
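The 95% error rate used throughout these comparisons is the false-positive rate at the threshold that still recovers 95% of the true matches. A minimal sketch, assuming higher similarity scores indicate matching pairs (the exact thresholding convention of [4] may differ slightly, e.g., in interpolation):

```python
import numpy as np

def error_at_95(scores, labels):
    """False-positive rate at 95% true-positive recall.

    scores : similarity scores, higher = more likely a match
    labels : 1 for match pairs, 0 for non-match pairs
    """
    pos = np.sort(scores[labels == 1])
    # threshold above which 95% of true matches are still accepted
    thresh = pos[int(np.floor(0.05 * len(pos)))]
    neg = scores[labels == 0]
    return float((neg >= thresh).mean())
```

Descriptor distances can be plugged in as negated scores, so that smaller distances mean more similar patches.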
We have computed the results for all the configurations of training and testing datasets without observing any significant differences; thus we show here only a representative set of the curves. More results can be found in the supplementary material. Interestingly, the results we obtain are comparable with "the best of the best" results reported in [4]. However, since the code for their compact descriptors is not publicly available, we can only compare the performance in terms of the 95% error rates. Only the composite descriptors of [4] provide some advantage over our compact L-BGM, as their average 95% error rate is 2% lower than that of L-BGM. Nevertheless, we outperform their non-parametric descriptors by 12% and perform slightly better than the parametric ones, while using descriptors an order of magnitude shorter. This comparison indicates that even though our approach does not require any complex pipeline optimization and parameter tuning, we perform similarly to the finely optimized descriptors presented in [4].

5 Conclusions

In this paper we presented a new method for learning image descriptors using Low-Dimensional Boosted Gradient Maps (L-BGM). L-BGM offers an attractive alternative to traditional descriptor learning techniques that model non-linearities based on the kernel-trick, relying on a pre-specified kernel function whose selection can be difficult and unintuitive. In contrast, we have shown that for the descriptor matching problem the boosting-trick leads to non-linear feature mappings whose features have an intuitive explanation. We demonstrated the use of gradient-based weak learner functions for learning descriptors within our framework, illustrating their close connection with the well-known SIFT descriptor. A discriminative embedding technique was also presented, yielding fairly compact and discriminative feature descriptions compared to the baseline methods.
We evaluated our approach on benchmark datasets where L-BGM was shown to outperform leading contemporary hand-designed and learned feature descriptors. Unlike previous approaches, our L-BGM descriptor can be learned directly from raw intensity patches, achieving state-of-the-art performance. Interesting avenues of future work include the exploration of other weak learner families for descriptor learning, e.g., SURF-like Haar features, and extensions to binary feature embeddings.

Acknowledgments

We would like to thank Karim Ali for sharing his feature code and his insightful feedback and discussions.

References

[1] Lowe, D.: Distinctive Image Features from Scale-Invariant Keypoints. IJCV 20(2) (2004) 91-110
[2] Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded Up Robust Features. In: ECCV'06
[3] Shakhnarovich, G.: Learning Task-Specific Similarity. PhD thesis, MIT (2006)
[4] Brown, M., Hua, G., Winder, S.: Discriminative Learning of Local Image Descriptors. PAMI (2011)
[5] Strecha, C., Bronstein, A., Bronstein, M., Fua, P.: LDAHash: Improved Matching with Smaller Descriptors. PAMI 34(1) (2012)
[6] Kulis, B., Jain, P., Grauman, K.: Fast Similarity Search for Learned Metrics. PAMI (2009) 2143-2157
[7] Shen, C., Kim, J., Wang, L., van den Hengel, A.: Positive Semidefinite Metric Learning with Boosting. In: NIPS (2009)
[8] Jain, P., Kulis, B., Davis, J., Dhillon, I.: Metric and Kernel Learning using a Linear Transformation. JMLR (2012)
[9] Bi, J., Wu, D., Lu, L., Liu, M., Tao, Y., Wolf, M.: AdaBoost on Low-Rank PSD Matrices for Metric Learning. In: CVPR (2011)
[10] Viola, P., Jones, M.: Rapid Object Detection Using a Boosted Cascade of Simple Features. In: CVPR'01
[11] Chapelle, O., Shivaswamy, P., Vadrevu, S., Weinberger, K., Zhang, Y., Tseng, B.: Boosted Multi-Task Learning. Machine Learning (2010)
[12] Ali, K., Fleuret, F., Hasler, D., Fua, P.: A Real-Time Deformable Detector. PAMI 34(2) (2012) 225-239
[13] Dalal, N., Triggs, B.: Histograms of Oriented Gradients for Human Detection. In: CVPR'05
[14] Weiss, Y., Torralba, A., Fergus, R.: Spectral Hashing. NIPS 21 (2009) 1753-1760
[15] Kulis, B., Darrell, T.: Learning to Hash with Binary Reconstructive Embeddings. In: NIPS'09
[16] Salakhutdinov, R., Hinton, G.: Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure. In: International Conference on Artificial Intelligence and Statistics (2007)
[17] Salakhutdinov, R., Hinton, G.: Semantic Hashing. International Journal of Approximate Reasoning (2009)
[18] Grauman, K., Darrell, T.: The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features. In: ICCV'05
[19] Shen, C., Welsh, A., Wang, L.: PSDBoost: Matrix Generation Linear Programming for Positive Semidefinite Matrices Learning. In: NIPS (2008)
[20] Jia, Y., Huang, C., Darrell, T.: Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features. In: CVPR'12
[21] Simonyan, K., Vedaldi, A., Zisserman, A.: Descriptor Learning Using Convex Optimisation. In: ECCV'12
[22] Dollár, P., Tu, Z., Perona, P., Belongie, S.: Integral Channel Features. In: BMVC'09
[23] Torralba, A., Fergus, R., Weiss, Y.: Small Codes and Large Databases for Recognition. In: CVPR'08
[24] Ali, K., Fleuret, F., Hasler, D., Fua, P.: A Real-Time Deformable Detector. PAMI (2011)
[25] Freund, Y., Schapire, R.: A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. In: European Conference on Computational Learning Theory (1995)
[26] Rosset, S., Zhu, J., Hastie, T.: Boosting as a Regularized Path to a Maximum Margin Classifier. JMLR (2004)
[27] Calonder, M., Lepetit, V., Ozuysal, M., Trzcinski, T., Strecha, C., Fua, P.: BRIEF: Computing a Local Binary Descriptor Very Fast. PAMI 34(7) (2012) 1281-1298
[28] Rublee, E., Rabaud, V., Konolidge, K., Bradski, G.: ORB: An Efficient Alternative to SIFT or SURF. In: ICCV'11
[29] Leutenegger, S., Chli, M., Siegwart, R.: BRISK: Binary Robust Invariant Scalable Keypoints. In: ICCV'11
[30] Vedaldi, A.: http://www.vlfeat.org/~vedaldi/code/siftpp.html
Near-Optimal MAP Inference for Determinantal Point Processes

Jennifer Gillenwater, Alex Kulesza, Ben Taskar
Computer and Information Science, University of Pennsylvania
{jengi,kulesza,taskar}@cis.upenn.edu

Abstract

Determinantal point processes (DPPs) have recently been proposed as computationally efficient probabilistic models of diverse sets for a variety of applications, including document summarization, image search, and pose estimation. Many DPP inference operations, including normalization and sampling, are tractable; however, finding the most likely configuration (MAP), which is often required in practice for decoding, is NP-hard, so we must resort to approximate inference. This optimization problem, which also arises in experimental design and sensor placement, involves finding the largest principal minor of a positive semidefinite matrix. Because the objective is log-submodular, greedy algorithms have been used in the past with some empirical success; however, these methods only give approximation guarantees in the special case of monotone objectives, which correspond to a restricted class of DPPs. In this paper we propose a new algorithm for approximating the MAP problem based on continuous techniques for submodular function maximization. Our method involves a novel continuous relaxation of the log-probability function, which, in contrast to the multilinear extension used for general submodular functions, can be evaluated and differentiated exactly and efficiently. We obtain a practical algorithm with a 1/4-approximation guarantee for a more general class of non-monotone DPPs; our algorithm also extends to MAP inference under complex polytope constraints, making it possible to combine DPPs with Markov random fields, weighted matchings, and other models. We demonstrate that our approach outperforms standard and recent methods on both synthetic and real-world data.
1 Introduction

Informative subset selection problems arise in many applications where a small number of items must be chosen to represent or cover a much larger set; for instance, text summarization [1, 2], document and image search [3, 4, 5], sensor placement [6], viral marketing [7], and many others. Recently, probabilistic models extending determinantal point processes (DPPs) [8, 9] were proposed for several such problems [10, 5, 11]. DPPs offer computationally attractive properties, including exact and efficient computation of marginals [8], sampling [12, 5], and (partial) parameter estimation [13]. They are characterized by a notion of diversity, as shown in Figure 1; points in the plane sampled from a DPP (center) are more spread out than those sampled independently (left). However, in many cases we would like to make use of the most likely configuration (MAP inference, right), which involves finding the largest principal minor of a positive semidefinite matrix. This is an NP-hard problem [14], and so we must resort to approximate inference methods. The DPP probability is a log-submodular function, and hence greedy algorithms are natural; however, the standard greedy algorithm of Nemhauser and Wolsey [15] offers an approximation guarantee of $1 - 1/e$ only for non-decreasing (monotone) submodular functions, and does not apply for general DPPs.

[Figure 1: From left to right, a set of points in the plane sampled independently at random, a sample drawn from a DPP, and an approximation of the DPP MAP set estimated by our algorithm.]

In addition, we are often interested in conditioning MAP inference on knapsack-type budget constraints, matroid constraints, or general polytope constraints. For example, we might consider a DPP model over edges of a bipartite graph and ask for the most likely set under the one-to-one matching constraint.
In this paper we propose a new algorithm for approximating MAP inference that handles these types of constraints for non-monotone DPPs. Recent work on non-monotone submodular function optimization can be broadly split into combinatorial versus continuous approaches. Among combinatorial methods, modified greedy, local search, and simulated annealing algorithms provide certain constant-factor guarantees [16, 17, 18] and have recently been extended to optimization under knapsack and matroid constraints [19, 20]. Continuous methods [21, 22] use a multilinear extension of the submodular set function to the convex hull of the feasible sets and then round fractional solutions obtained by maximizing in the interior of the polytope. Our algorithm falls into the continuous category, using a novel and efficient non-linear continuous extension specifically tailored to DPPs. In comparison to the constant-factor algorithms for general submodular functions, our approach is more efficient because we have explicit access to the objective function and its gradient. In contrast, methods for general submodular functions assume only a simple function oracle and need to employ sampling to estimate function and gradient values in the polytope interior. We show that our non-linear extension enjoys some of the critical properties of the standard multilinear extension and propose an efficient algorithm that can handle solvable polytope constraints. Our algorithm compares favorably to greedy and recent "symmetric" greedy [18] methods on unconstrained simulated problems, simulated problems under matching constraints, and a real-world matching task using quotes from political candidates.

2 Background

Determinantal point processes (DPPs) are distributions over subsets that prefer diversity.
Originally, DPPs were introduced to model fermions in quantum physics [8], but since then they have arisen in a variety of other settings including non-intersecting random paths, random spanning trees, and eigenvalues of random matrices [9, 23, 12]. More recently, they have been applied as probabilistic models for machine learning problems [10, 13, 5, 11]. Formally, a DPP $\mathcal{P}$ on a set of items $\mathcal{Y} = \{1, 2, \ldots, N\}$ is a probability measure on $2^{\mathcal{Y}}$, the set of all subsets of $\mathcal{Y}$. For every $Y \subseteq \mathcal{Y}$ we have

$$\mathcal{P}(Y) \propto \det(L_Y), \quad (1)$$

where $L$ is a positive semidefinite matrix, $L_Y \equiv [L_{ij}]_{i,j \in Y}$ denotes the restriction of $L$ to the entries indexed by elements of $Y$, and $\det(L_{\emptyset}) = 1$. If $L$ is written as a Gram matrix, $L = B^\top B$, then the quantity $\det(L_Y)$ can be interpreted as the squared volume spanned by the column vectors $B_i$ for $i \in Y$. If $L_{ij} = B_i^\top B_j$ is viewed as a measure of similarity between items $i$ and $j$, then when $i$ and $j$ are similar their vectors are relatively non-orthogonal, and therefore sets including both $i$ and $j$ will span less volume and be less probable. This is illustrated in Figure 2. As a result, DPPs assign higher probability to sets that are diverse under $L$.

[Figure 2: (a) The DPP probability of a set $Y$ depends on the volume spanned by vectors $B_i$ for $i \in Y$. (b) As length increases, so does volume. (c) As similarity increases, volume decreases.]

The normalization constant in Equation (1) can be computed explicitly thanks to the identity

$$\sum_{Y} \det(L_Y) = \det(L + I), \quad (2)$$

where $I$ is the $N \times N$ identity matrix. In fact, a variety of probabilistic inference operations can be performed efficiently, including sampling, marginalization, and conditioning [12, 24]. However, the maximum a posteriori (MAP) problem $\arg\max_Y \det(L_Y)$ is NP-hard [14]. In many practical situations it would be useful to approximate the MAP set; for instance, during decoding, online training, etc.

2.1 Submodularity

A function $f : 2^{\mathcal{Y}} \to$
$\mathbb{R}$ is called submodular if it satisfies

$$f(X \cup \{i\}) - f(X) \ge f(Y \cup \{i\}) - f(Y) \quad (3)$$

whenever $X \subseteq Y$ and $i \notin Y$. Intuitively, the contribution made by a single item $i$ only decreases as the set grows. Common submodular functions include the mutual information of a set of variables and the number of cut edges leaving a set of vertices of a graph. A submodular function $f$ is called nondecreasing (or monotone) when $X \subseteq Y$ implies $f(X) \le f(Y)$. It is possible to show that $\log \det(L_Y)$ is a submodular function: entropy is submodular, and the entropy of a Gaussian is proportional to $\log \det(\Sigma_Y)$ (plus a linear term in $|Y|$), where $\Sigma$ is the covariance matrix. Submodular functions are easy to minimize, and a variety of algorithms exist for approximately maximizing them; however, to our knowledge none of these existing algorithms simultaneously allows for general polytope constraints on the set $Y$, offers an approximation guarantee, and can be implemented in practice without expensive sampling to approximate the objective. We provide a technique that addresses all three criteria for the DPP MAP problem, although approximation guarantees for the general polytope case depend on the choice of rounding algorithm and remain an open problem. We use the submodular maximization algorithm of [21] as a starting point.

3 MAP Inference

We seek an approximate solution to the generalized DPP MAP problem $\arg\max_{Y \in S} \log \det(L_Y)$, where $S \subseteq [0, 1]^N$ and $Y \in S$ means that the characteristic vector $I(Y)$ is in $S$. We will assume that $S$ is a down-monotone, solvable polytope; down-monotone means that for $x, y \in [0, 1]^N$, $x \in S$ implies $y \in S$ whenever $x \ge y$ (that is, whenever $x_i \ge y_i$ for all $i$), and solvable means that for any linear objective function $g(x) = a^\top x$, we can efficiently find $x \in S$ maximizing $g(x)$. One common approach for approximating discrete optimization problems is to replace the discrete variables with continuous analogs and extend the objective function to the continuous domain.
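The submodularity of $\log \det(L_Y)$ claimed in Section 2.1 is easy to verify numerically on a small example. The following sketch is our own illustration, not the authors' code; the random kernel and the small diagonal jitter (which keeps every minor positive definite) are our choices:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N = 6
B = rng.standard_normal((N, N))
L = B.T @ B + 1e-6 * np.eye(N)  # PSD kernel; jitter keeps all minors positive definite

def f(S):
    """f(S) = log det(L_S), with det(L_emptyset) = 1."""
    S = sorted(S)
    if not S:
        return 0.0
    return np.linalg.slogdet(L[np.ix_(S, S)])[1]

# Diminishing returns (Eq. 3): f(X u {i}) - f(X) >= f(Y u {i}) - f(Y)
# for every X subset of Y and every item i outside Y.
for Y in combinations(range(N), 4):
    for k in range(len(Y) + 1):
        for X in combinations(Y, k):
            for i in set(range(N)) - set(Y):
                gain_X = f(set(X) | {i}) - f(set(X))
                gain_Y = f(set(Y) | {i}) - f(set(Y))
                assert gain_X >= gain_Y - 1e-8
```

Every inequality holds up to numerical tolerance, matching the Gaussian-entropy argument above.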
When the resulting continuous optimization is solved, the result may include fractional variables. Typically, a rounding scheme is then used to produce a valid integral solution. As we will detail below, we use a novel non-linear continuous relaxation that has a nice property: when the polytope is unconstrained, $S = [0, 1]^N$, our method will (essentially) always produce integral solutions. For more complex polytopes, a rounding procedure is required. When the objective $f(Y)$ is a submodular set function, as in our setting, the multilinear extension can be used to obtain certain theoretical guarantees for the relaxed optimization scheme described above [21, 25]. The multilinear extension is defined on a vector $x \in [0, 1]^N$:

$$F(x) = \sum_{Y} \prod_{i \in Y} x_i \prod_{i \notin Y} (1 - x_i) \, f(Y). \quad (4)$$

That is, $F(x)$ is the expected value of $f(Y)$ when $Y$ is the random set obtained by including element $i$ with probability $x_i$. Unfortunately, this expectation generally cannot be computed efficiently, since it involves summing over exponentially many sets $Y$. Thus, using the multilinear extension in practice requires estimating its value and derivative via Monte Carlo techniques. This makes the optimization quite computationally expensive, as well as introducing a variety of technical convergence issues. Instead, for the special case of DPP probabilities we propose a new continuous extension that is efficiently computable and differentiable. We refer to the following function as the softmax extension:

$$\tilde{F}(x) = \log \sum_{Y} \prod_{i \in Y} x_i \prod_{i \notin Y} (1 - x_i) \exp(f(Y)). \quad (5)$$

See the supplementary material for a visual comparison of Equations (4) and (5). While the softmax extension also involves a sum over exponentially many sets $Y$, we have the following theorem.

Theorem 1. For a positive semidefinite matrix $L$ and $x \in [0, 1]^N$,

$$\sum_{Y} \prod_{i \in Y} x_i \prod_{i \notin Y} (1 - x_i) \det(L_Y) = \det(\mathrm{diag}(x)(L - I) + I). \quad (6)$$

All proofs are included in the supplementary material.

Corollary 2.
For $f(Y) = \log \det(L_Y)$, we have $\tilde{F}(x) = \log \det(\mathrm{diag}(x)(L - I) + I)$ and

$$\frac{\partial}{\partial x_i} \tilde{F}(x) = \mathrm{tr}\big((\mathrm{diag}(x)(L - I) + I)^{-1} (L - I)_i\big), \quad (7)$$

where $(L - I)_i$ denotes the matrix obtained by zeroing all except the $i$th row of $L - I$.

Corollary 2 says that the softmax extension for the DPP MAP problem is computable and differentiable in $O(N^3)$ time. Using a variant of gradient ascent (Section 3.1), this will be sufficient to efficiently find a local maximum of the softmax extension over an arbitrary solvable polytope. It then remains to show that this local maximum comes with approximation guarantees.

3.1 Conditional gradient

When the optimization polytope $S$ is simple, for instance the unit cube $[0, 1]^N$, we can apply generic gradient-based optimization methods like L-BFGS to rapidly find a local maximum of the softmax extension. In situations where we are able to efficiently project onto the polytope $S$, we can apply projected gradient methods. In the general case, however, we assume only that the polytope is solvable. In such settings, we can use the conditional gradient algorithm (also known as the Frank-Wolfe algorithm) [26, 27]. Algorithm 1 describes the procedure; intuitively, at each step we move to a convex combination of the current point and the point maximizing the linear approximation of the function given by the current gradient. This ensures that we move in an increasing direction while remaining in $S$. Note that finding $y$ requires optimizing a linear function over $S$; this step is efficient whenever the polytope is solvable.

3.2 Approximation bound

In order to obtain an approximation bound for the DPP MAP problem, we consider the two-phase optimization in Algorithm 2, originally proposed in [21]. The second call to LOCAL-OPT is necessary in theory; however, in practice it can usually be omitted with minimal loss (if any). We will show that Algorithm 2 produces a 1/4-approximation.
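Both the closed form of Theorem 1 and the gradient of Corollary 2 can be checked by brute force on a tiny ground set. This is a sketch under our own assumptions (NumPy, a random 5-item kernel; the function names are ours):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N = 5
B = rng.standard_normal((N, N))
L = B.T @ B  # hypothetical small PSD kernel
I = np.eye(N)
x = rng.uniform(size=N)

# Theorem 1: the expectation of det(L_Y) under independent inclusion
# with probabilities x_i equals det(diag(x)(L - I) + I).
lhs = 0.0
for k in range(N + 1):
    for Y in combinations(range(N), k):
        p = np.prod([x[i] if i in Y else 1.0 - x[i] for i in range(N)])
        lhs += p * (np.linalg.det(L[np.ix_(Y, Y)]) if Y else 1.0)
rhs = np.linalg.det(np.diag(x) @ (L - I) + I)
assert np.isclose(lhs, rhs)

# Corollary 2: gradient of F~(x) = log det(diag(x)(L - I) + I).
def softmax_ext(z):
    return np.linalg.slogdet(np.diag(z) @ (L - I) + I)[1]

def grad(z):
    Minv = np.linalg.inv(np.diag(z) @ (L - I) + I)
    # tr(Minv (L - I)_i) reduces to the i-th diagonal entry of (L - I) Minv.
    return np.diag((L - I) @ Minv)

# Sanity-check the gradient against central finite differences.
eps = 1e-6
fd = np.array([(softmax_ext(x + eps * e) - softmax_ext(x - eps * e)) / (2 * eps)
               for e in I])
assert np.allclose(grad(x), fd, atol=1e-4)
```

The brute-force sum touches all $2^N$ subsets, which is exactly the cost the closed form avoids.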
Algorithm 1 LOCAL-OPT
Input: function $\tilde{F}$, polytope $S$
  $x \leftarrow 0$
  while not converged do
    $y \leftarrow \arg\max_{y' \in S} \nabla \tilde{F}(x)^\top y'$
    $\alpha \leftarrow \arg\max_{\alpha' \in [0,1]} \tilde{F}(\alpha' x + (1 - \alpha') y)$
    $x \leftarrow \alpha x + (1 - \alpha) y$
  end while
Output: $x$

Algorithm 2 Approximating the DPP MAP
Input: kernel $L$, polytope $S$
  Let $\tilde{F}(x) = \log \det(\mathrm{diag}(x)(L - I) + I)$
  $x \leftarrow$ LOCAL-OPT($\tilde{F}$, $S$)
  $y \leftarrow$ LOCAL-OPT($\tilde{F}$, $S \cap \{y' \mid y' \le 1 - x\}$)
Output: $x$ if $\tilde{F}(x) > \tilde{F}(y)$, otherwise $y$

We begin by proving that the continuous extension $\tilde{F}$ is concave in positive directions, although it is not concave in general.

Lemma 3. When $u, v \ge 0$, we have

$$\frac{\partial^2}{\partial s \, \partial t} \tilde{F}(x + su + tv) \le 0 \quad (8)$$

wherever $0 < x + su + tv < 1$.

Corollary 4. $\tilde{F}(x + tv)$ is concave along any direction $v \ge 0$ (equivalently, $v \le 0$).

Corollary 4 tells us that a local optimum $x$ of $\tilde{F}$ has certain global properties, namely that $\tilde{F}(x) \ge \tilde{F}(y)$ whenever $y \le x$ or $y \ge x$. This leads to the following result from [21].

Lemma 5. If $x$ is a local optimum of $\tilde{F}(\cdot)$, then for any $y \in [0, 1]^N$,

$$2\tilde{F}(x) \ge \tilde{F}(x \vee y) + \tilde{F}(x \wedge y), \quad (9)$$

where $(x \vee y)_i = \max(x_i, y_i)$ and $(x \wedge y)_i = \min(x_i, y_i)$.

Following [21], we now define a surrogate function $\tilde{F}^*$. Let $X_i \subseteq [0, 1]$ be a subset of the unit interval representing $x_i = |X_i|$, where $|X_i|$ denotes the measure of $X_i$. (Note that this representation is overcomplete, since there are in general many subsets of $[0, 1]$ with measure $x_i$.) $\tilde{F}^*$ is defined on $X = (X_1, X_2, \ldots, X_N)$ by

$$\tilde{F}^*(X) = \tilde{F}(x), \quad x = (|X_1|, |X_2|, \ldots, |X_N|). \quad (10)$$

Lemma 6. $\tilde{F}^*$ is submodular.

Lemmas 5 and 6 suffice to prove the following theorem, which appears for the multilinear extension in [21], bounding the approximation ratio of Algorithm 2.

Theorem 7. Let $\tilde{F}(x)$ be the softmax extension of a nonnegative submodular function $f(Y) = \log \det(L_Y)$, let $\mathrm{OPT} = \max_{x' \in S} \tilde{F}(x')$, and let $x$ and $y$ be local optima of $\tilde{F}$ in $S$ and $S \cap \{y' \mid y' \le 1 - x\}$, respectively. Then

$$\max(\tilde{F}(x), \tilde{F}(y)) \ge \frac{1}{4}\mathrm{OPT} \ge \frac{1}{4}\max_{Y \in S} \log \det(L_Y). \quad (11)$$
Note that the softmax extension is an upper bound on the multilinear extension; thus Equation (11) is at least as tight as the corresponding result in [21].

Corollary 8. Algorithm 2 yields a 1/4-approximation to the DPP MAP problem whenever $\log \det(L_Y) \ge 0$ for all $Y$.

In general, the objective value obtained by Algorithm 2 is bounded below by $\frac{1}{4}(\mathrm{OPT} - p_0) + p_0$, where $p_0 = \min_Y \log \det(L_Y)$. In practice, filtering of near-duplicates can be used to keep $p_0$ from getting too small; however, in our empirical tests $p_0$ did not seem to have a significant effect on approximation quality.

3.3 Rounding

When the polytope $S$ is unconstrained, it is easy to show that the results of Algorithm 1, and in turn Algorithm 2, are integral (or can be rounded without loss).

Theorem 9. If $S = [0, 1]^N$, then for any local optimum $x$ of $\tilde{F}$, either $x$ is integral or at least one fractional coordinate $x_i$ can be set to 0 or 1 without lowering the objective.

More generally, however, the polytope $S$ can be complex, and the output of Algorithm 2 needs to be rounded. We speculate that the contention resolution rounding schemes proposed in [21] for the multilinear extension $F$ may be extensible to $\tilde{F}$, but do not attempt to prove so here. Instead, in our experiments we apply pipage rounding [28] and threshold rounding (rounding all coordinates up or down using a single threshold), which are simple and seem to work well in practice.

3.4 Model combination

In addition to theoretical guarantees and the empirical advantages we demonstrate in Section 4, the proposed approach to the DPP MAP problem offers a great deal of flexibility. Since the general framework of continuous optimization is widely used in machine learning, this technique allows DPPs to be easily combined with other models.
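Threshold rounding as described in Section 3.3 is simple to sketch. In the example below the fractional vector x is a hypothetical stand-in for Algorithm 2's output, and the kernel and names are ours, not the authors' code. Since every distinct rounding arises from using some coordinate value as the shared threshold, trying all of them and keeping the best is exact for this scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
B = rng.standard_normal((N, N))
L = B.T @ B  # hypothetical PSD kernel

def logdet_mask(mask):
    """log det(L_S) for S = indices where mask is True; empty set scores 0."""
    S = np.flatnonzero(mask)
    if S.size == 0:
        return 0.0
    return np.linalg.slogdet(L[np.ix_(S, S)])[1]

x = rng.uniform(size=N)  # stand-in for a fractional solution from Algorithm 2

# Round all coordinates up or down with one shared threshold; every distinct
# rounding is produced by some coordinate value (1.1 yields the empty set).
thresholds = np.append(np.sort(x), 1.1)
best_mask = max((x >= t for t in thresholds), key=logdet_mask)

assert logdet_mask(best_mask) >= 0.0                    # never worse than the empty set
assert logdet_mask(best_mask) >= logdet_mask(x >= 0.5)  # beats any single fixed cutoff
```

Pipage rounding, the other scheme used in the experiments, additionally exploits the constraint structure and is not sketched here.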
For instance, if $S$ is the local polytope for a Markov random field, then, augmenting the objective with the (linear) log-likelihood of the MRF (additive linear objective terms do not affect the lemmas proved above), we can approximately compute the MAP configuration of the DPP-MRF product model. We might in this way model diverse objects placed in a sequence, or fit to an underlying signal like an image. Empirical studies of these possibilities are left to future work.

4 Experiments

To illustrate the proposed method, we compare it to the widely used greedy algorithm of Nemhauser and Wolsey [15] (Algorithm 3) and the recently proposed deterministic "symmetric" greedy algorithm [18], which has a 1/3 approximation guarantee for unconstrained non-monotone problems. Note that, while a naive implementation of the arg max in Algorithm 3 requires evaluating the objective for each item in $U$, here we can exploit the fact that DPPs are closed under conditioning to compute all necessary values with only two matrix inversions [5]. We report baseline runtimes using this optimized greedy algorithm, which is about 10 times faster than the naive version at $N = 200$. The code and data for all experiments can be downloaded from http://www.seas.upenn.edu/~jengi/dpp-map.html.

4.1 Synthetic data

As a first test, we approximate the MAP configuration for DPPs with random kernels drawn from a Wishart distribution. Specifically, we choose $L = B^\top B$, where $B \in \mathbb{R}^{N \times N}$ has entries drawn independently from the standard normal distribution, $b_{ij} \sim \mathcal{N}(0, 1)$. This results in $L \sim W_N(N, I)$, a Wishart distribution with $N$ degrees of freedom and an identity covariance matrix. This distribution has several desirable properties: (1) in terms of eigenvectors, it spreads its mass uniformly over all unitary matrices [29], and (2) the probability density of eigenvalues $\lambda_1, \ldots, \lambda_N$ is

$$\exp\left(-\sum_{i=1}^{N} \lambda_i\right) \prod_{i=1}^{N} \frac{\prod_{j=i+1}^{N} (\lambda_i - \lambda_j)^2}{((N - i)!)^2}, \quad (12)$$

the first term of which deters the eigenvalues from being too large, and the second term of which encourages the eigenvalues to be well-separated [30]. Property (1) implies that we will see a variety of eigenvectors, which play an important role in the structure of a DPP [5]. Property (2) implies that interactions between these eigenvectors will be important, as no one eigenvalue is likely to dominate. Combined, these properties suggest that samples should encompass a wide range of DPPs.

Figure 3a shows performance results on these random kernels in the unconstrained setting. Our proposed algorithm outperforms greedy in general, and the performance gap tends to grow with the size of the ground set, $N$. (We let $N$ vary in the range $[50, 200]$ since prior work with DPPs in real-world scenarios [5, 13] has typically operated in this range.) Moreover, Figure 3a (bottom) illustrates that our method is of comparable efficiency at medium $N$, and becomes more efficient as $N$ grows. Despite the fact that the symmetric greedy algorithm [18] has an improved approximation guarantee of 1/3, essentially the same analysis applies to Figure 3b. Figure 3c summarizes the performance of our algorithm in a constrained setting.

[Figure 3: Median and quartile log probability ratios (top) and running time ratios (bottom) for 100 random trials, plotted against $N \in [50, 200]$. (a) The proposed algorithm versus greedy on unconstrained problems. (b) The proposed algorithm versus symmetric greedy on unconstrained problems. (c) The proposed algorithm versus greedy on constrained problems. Dotted black lines indicate equal performance.]
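The synthetic setup and the greedy baseline (Algorithm 3, unconstrained case) can be reconstructed in a few lines. This is our own minimal sketch rather than the authors' released code, with N kept tiny so the exhaustive optimum can be enumerated for comparison:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N = 8  # the paper sweeps N in [50, 200]; tiny here so 2^N enumeration is feasible
B = rng.standard_normal((N, N))
L = B.T @ B  # L ~ W_N(N, I): Wishart kernel, as in Section 4.1
assert np.linalg.eigvalsh(L).min() > -1e-8  # positive semidefinite

def logdet(S):
    """log det(L_S), with log det(L_emptyset) = 0."""
    if not S:
        return 0.0
    sign, ld = np.linalg.slogdet(L[np.ix_(S, S)])
    return ld if sign > 0 else -np.inf

def greedy_dpp_map():
    """Algorithm 3 specialized to the unconstrained case S = [0, 1]^N."""
    Y, U = [], list(range(N))
    while U:
        gains = [logdet(Y + [i]) for i in U]
        best = int(np.argmax(gains))
        if gains[best] < logdet(Y):
            break  # no remaining item improves the objective
        Y.append(U.pop(best))
    return Y

Y = greedy_dpp_map()
greedy_val = logdet(Y)

# Exhaustive optimum over all 2^N subsets (the quantity greedy approximates).
opt = max(logdet(list(S))
          for k in range(N + 1) for S in combinations(range(N), k))
assert 0.0 <= greedy_val <= opt + 1e-9
```

Since greedy starts from the empty set and only accepts improving items, its value is never negative; the exhaustive loop is exactly what becomes infeasible at the paper's real problem sizes.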
To create plausible constraints, in this setting we generate two separate random matrices $B^{(1)}$ and $B^{(2)}$, and then select random pairs of rows $(B^{(1)}_i, B^{(2)}_j)$. Averaging these, $(B^{(1)}_i + B^{(2)}_j)/2$, creates one row of the matrix $B$; we then set $L = B^\top B$. The constraints require that if the $x_k$ corresponding to the $(i, j)$ pair is 1, no other $x_{k'}$ can have first element $i$ or second element $j$; i.e., the pairs cannot overlap. Since exact duplicate pairs produce identical rows in $L$, they are never both selected and can be pruned ahead of time. This means our constraints are of a form that allows us to apply pipage rounding to the possibly fractional result. Figure 3c shows even greater gains over greedy in this setting; however, enforcing the constraints precludes using fast methods like L-BFGS, so our optimization procedure is in this case somewhat slower than greedy.

4.2 Matched summarization

Finally, we demonstrate our approach using real-world data. Consider the following task: given a set of documents, select a set of document pairs such that the two elements within a pair are similar, but the overall set of pairs is diverse. For instance, we might want to compare the opinions of various authors on a range of topics, or even compare the statements made at different points in time by the same author, e.g., a politician believed to have changed positions on various issues. In this vein, we extract all the statements made by the eight main contenders in the 2012 US Republican primary debates: Bachmann, Cain, Gingrich, Huntsman, Paul, Perry, Romney, and Santorum. See the supplementary material for an example of some of these statements. Each pair of candidates $(a, b)$ constitutes one instance of our task. The task output is a set of statement pairs where the first statement in each pair comes from candidate $a$ and the second from candidate $b$. The goal of optimization is to find a set that is diverse (contains many topics, such as healthcare, foreign policy, immigration, etc.)
but where both statements in each pair are topically similar. Before formulating a DPP objective for this task, we perform some pre-processing. We filter short statements, leaving us with an average of 179 quotes per candidate (min = 93, max = 332 quotes).

Algorithm 3 Greedy MAP for DPPs
Input: kernel $L$, polytope $S$
  $Y \leftarrow \emptyset$, $U \leftarrow \mathcal{Y}$
  while $U$ is not empty do
    $i^* \leftarrow \arg\max_{i \in U} \log \det(L_{Y \cup \{i\}})$
    if $\log \det(L_{Y \cup \{i^*\}}) < \log \det(L_Y)$ then break end if
    $Y \leftarrow Y \cup \{i^*\}$
    $U \leftarrow \{i \mid i \notin Y,\ I(Y \cup \{i\}) \in S\}$
  end while
Output: $Y$

[Figure 4: Log ratio of the objective value achieved by our method to that achieved by greedy (SoftMax / Greedy) for ten settings of the match weight $\lambda$.]

We parse the quotes, keeping only nouns. We further filter nouns by document frequency, keeping only those that occur in at least 10% of the quotes. Then we generate a feature matrix $W$ where $W_{qt}$ is the number of times term $t$ appears in quote $q$. This matrix is then normalized so that $\|W_q\|_2 = 1$, where $W_q$ is the $q$th row of $W$. For a given pair of candidates $(a, b)$ we compute the quality of each possible quote pair $(q^{(a)}_i, q^{(b)}_j)$ as the dot product of their rows in $W$. While the model will naturally ignore low-quality pairs, for efficiency we throw away such pairs in pre-processing. For each of candidate $a$'s quotes $q^{(a)}_i$ we keep a pair with quote $j = \arg\max_{j'} \mathrm{quality}(q^{(a)}_i, q^{(b)}_{j'})$ from candidate $b$, and vice-versa. The scores of the unpruned quotes, which we denote $r$, are re-normalized to span the $[0, 1]$ range. To create a feature vector describing each pair, we simply add the corresponding pair of quote feature vectors and re-normalize, forming a new $W$ matrix. Our task is to select some high-quality representative subset of the unpruned quote pairs. We formulate this as a DPP objective with kernel $L = MSM$, where $S_{ij}$ is a measurement of similarity between quote pairs $i$ and $j$, and $M$ is a diagonal matrix with $M_{ii}$ representing the match quality of pair $i$.
We set $S = WW^\top$ and $\mathrm{diag}(M) = \sqrt{\exp(\lambda r)}$, where $\lambda$ is a hyperparameter. Large $\lambda$ places more emphasis on picking high-quality pairs than on making the overall set diverse. To help limit the number of pairs selected when optimizing the objective, we add some constraints. For each candidate we cluster their quotes using k-means on the word feature vectors and impose the constraint that no more than one quote per cluster can be selected. We round the final solution using the threshold rounding scheme described in Section 3.3. Figure 4 shows the result of optimizing this constrained objective, averaged over all 56 candidate pairs. For all settings of $\lambda$ we outperform greedy. In general, we observe that our algorithm is most improved compared to greedy when the constraints are in play. In this case, when $\lambda$ is small the constraints are less relevant, since the model has an intrinsic preference for smaller sets. On the other hand, when $\lambda$ is very large the algorithms must choose as many pairs as possible in order to maximize their score; in this case the constraints play an important role.

5 Conclusion

We presented a new approach to solving the MAP problem for DPPs based on continuous algorithms for submodular maximization. Unlike the multilinear extension used in the general case, the softmax extension we propose is efficiently computable and differentiable. Furthermore, it allows for general solvable polytope constraints, and yields a guaranteed 1/4-approximation in a subclass of DPPs. Our method makes it easy to combine DPPs with other models like MRFs or matching models, and is faster and more reliable than standard greedy methods on synthetic and real-world problems.

Acknowledgments

This material is based upon work supported under a National Science Foundation Graduate Research Fellowship, Sloan Research Fellowship, and NSF Grant 0803256.

References

[1] A. Nenkova, L. Vanderwende, and K. McKeown.
|
2012
|
160
|
4,520
|
Image Denoising and Inpainting with Deep Neural Networks

Junyuan Xie, Linli Xu, Enhong Chen¹
School of Computer Science and Technology, University of Science and Technology of China
eric.jy.xie@gmail.com, linlixu@ustc.edu.cn, cheneh@ustc.edu.cn

Abstract

We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with the denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts the DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method's performance in the image denoising task is comparable to that of KSVD, a widely used sparse coding technique. More importantly, in the blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for the DA is more effective and can improve the performance of unsupervised feature learning.

1 Introduction

Observed image signals are often corrupted by the acquisition channel or by artificial editing. The goal of image restoration techniques is to restore the original image from a noisy observation of it. Image denoising and inpainting are common image restoration problems that are both useful by themselves and important preprocessing steps of many other applications.
Image denoising problems arise when an image is corrupted by additive white Gaussian noise, which is a common result of many acquisition channels, whereas image inpainting problems occur when some pixel values are missing or when we want to remove more sophisticated patterns, like superimposed text or other objects, from the image. This paper focuses on image denoising and blind inpainting. Various methods have been proposed for image denoising. One approach is to transfer image signals to an alternative domain where they can be more easily separated from the noise [1, 2, 3]. For example, Bayes Least Squares with a Gaussian Scale-Mixture (BLS-GSM), which was proposed by Portilla et al., is based on the transformation to the wavelet domain [2]. Another approach is to capture image statistics directly in the image domain. Following this strategy, a family of models exploiting the (linear) sparse coding technique has drawn increasing attention recently [4, 5, 6, 7, 8, 9]. Sparse coding methods reconstruct images from a sparse linear combination of an over-complete dictionary. In recent research, the dictionary is learned from data instead of hand-crafted as before. This learning step improves the performance of sparse coding significantly. One example of these methods is the KSVD sparse coding algorithm proposed in [6].

¹Corresponding author.

Image inpainting methods can be divided into two categories: non-blind inpainting and blind inpainting. In non-blind inpainting, the regions that need to be filled in are provided to the algorithm a priori, whereas in blind inpainting, no information about the locations of the corrupted pixels is given and the algorithm must automatically identify the pixels that require inpainting. The state-of-the-art non-blind inpainting algorithms can perform very well on removing text, doodles, or even very large objects [10, 11, 12].
Some image denoising methods, after modification, can also be applied to non-blind image inpainting with state-of-the-art results [7]. Blind inpainting, however, is a much harder problem. To the best of our knowledge, existing algorithms can only address i.i.d. or simply structured impulse noise [13, 14, 15]. Although sparse coding models perform well in practice, they share a shallow linear structure. Recent research suggests, however, that non-linear, deep models can achieve superior performance in various real world problems. One typical category of deep models is multi-layer neural networks. In [16], Jain et al. proposed to denoise images with convolutional neural networks. In this paper, we propose to combine the advantageous "sparse" and "deep" principles of sparse coding and deep networks to solve the image denoising and blind inpainting problems. Sparse variants of deep neural networks are expected to perform especially well in vision problems because they have a structure similar to the human visual cortex [17]. Deep neural networks with many hidden layers were generally considered hard to train until a new training scheme was proposed: greedy layer-wise pre-training, which gives a better initialization of the network parameters before traditional back-propagation training [18, 19]. There exist several methods for pre-training, including the Restricted Boltzmann Machine (RBM) and the Denoising Auto-encoder (DA) [20, 21]. We employ the DA to perform pre-training in our method because it naturally lends itself to denoising and inpainting tasks. A DA is a two-layer neural network that tries to reconstruct the original input from a noisy version of it. The structure of a DA is shown in Fig. 1a. A series of DAs can be stacked to form a deep network, called Stacked Denoising Auto-encoders (SDA), by using the hidden layer activation of the previous layer as the input of the next layer. SDA is widely used for unsupervised pre-training and feature learning [21].
In these settings, only the clean data is provided, while the noisy version of it is generated during training by adding random Gaussian or Salt-and-Pepper noise to the clean data. After training of one layer, only the clean data is passed on to the network to produce the clean training data for the next layer, while the noisy data is discarded. The noisy training data for the next layer is similarly constructed by randomly corrupting the generated clean training data. For the image denoising and inpainting tasks, however, the choices of clean and noisy input are natural: they are set to be the desired image after denoising or inpainting and the observed noisy image, respectively. Therefore, we propose a new training scheme that trains the DA to reconstruct the clean image from the corresponding noisy observation. After training of the first layer, the hidden layer activations of both the noisy input and the clean input are calculated to serve as the training data of the second layer. Our experiments on the image denoising and inpainting tasks demonstrate that SDA is able to learn features that adapt to specific noise patterns, from white Gaussian noise to superimposed text. Inspired by SDA's ability to learn noise-specific features in denoising tasks, we argue that in unsupervised feature learning problems the type of noise used can also affect the performance. Specifically, instead of corrupting the input with arbitrarily chosen noise, a more sophisticated corruption process that agrees with the true noise distribution in the data can improve the quality of the learned features. For example, when learning audio features, the variations of noise on different frequencies are usually different and sometimes correlated. Hence, instead of corrupting the training data with simple i.i.d. Gaussian noise, it is better to use Gaussian noise with more realistic parameters, either estimated from data or suggested by theory.
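The proposed scheme, training layer k on the pair (hidden code of the noisy input, hidden code of the clean input) rather than re-corrupting the clean activations, can be outlined as follows. This is a sketch, not the authors' code: `train_da` is a stand-in for any single-layer DA trainer returning weights and biases.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stack_train(X_noisy, Y_clean, train_da, n_layers):
    """Train a stack of DAs on (noisy, clean) pairs.

    Unlike standard SDA pre-training, which re-corrupts the clean
    activations at every layer, each layer here maps the hidden code of
    the *noisy* input toward the hidden code of the *clean* input, and
    both codes are propagated to the next layer.
    """
    layers, x, y = [], X_noisy, Y_clean
    for _ in range(n_layers):
        W, b = train_da(x, y)        # fit one DA on the current pair
        layers.append((W, b))
        x = sigmoid(x @ W.T + b)     # hidden code of noisy input
        y = sigmoid(y @ W.T + b)     # hidden code of clean input
    return layers
```

Any gradient-based DA trainer can be plugged in for `train_da`; the point is only that the noisy stream is never discarded between layers.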
2 Model Description

In this section, we first introduce the problem formulation and some basic notations. Then we briefly give preliminaries about the Denoising Auto-encoder (DA), which is a fundamental building block of our proposed method.

Figure 1: Model architectures. (a) Denoising auto-encoder (DA) architecture. (b) Stacked sparse denoising auto-encoder architecture.

2.1 Problem Formulation

Assuming x is the observed noisy image and y is the original noise-free image, we can formulate the image corruption process as:

x = η(y),  (1)

where η : Rⁿ → Rⁿ is an arbitrary stochastic corrupting process that corrupts the input. The denoising task's learning objective then becomes:

f = argmin_f E_y ‖f(x) − y‖²₂.  (2)

From this formulation, we can see that the task is to find a function f that best approximates η⁻¹. We can now treat the image denoising and inpainting problems in a unified framework by choosing an appropriate η in different situations.

2.2 Denoising Auto-encoder

Let y_i be the original data for i = 1, 2, ..., N and x_i be the corrupted version of the corresponding y_i. The DA is defined as shown in Fig. 1a:

h(x_i) = σ(W x_i + b),  (3)
ŷ(x_i) = σ(W′ h(x_i) + b′),  (4)

where σ(x) = (1 + exp(−x))⁻¹ is the sigmoid activation function, applied element-wise to vectors, h(x_i) is the hidden layer activation, ŷ(x_i) is an approximation of y_i, and Θ = {W, b, W′, b′} represents the weights and biases. The DA can be trained with various optimization methods to minimize the reconstruction loss:

θ = argmin_θ Σ_{i=1}^{N} ‖y_i − ŷ(x_i)‖.  (5)

After finishing training a DA, we can move on to training the next layer by using the hidden layer activation of the first layer as its input. The resulting model is called a Stacked Denoising Auto-encoder (SDA) [21].

2.3 Stacked Sparse Denoising Auto-encoders

In this section, we describe the structure and optimization objective of the proposed model, Stacked Sparse Denoising Auto-encoders (SSDA).
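As a quick illustration of Eqs. (3)-(4), a single DA forward pass in NumPy; the weight shapes are the only assumption made here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def da_forward(x, W, b, W2, b2):
    """One DA pass: encode the (noisy) input, then decode a reconstruction.

    x  : input vector of dimension d
    W  : (k, d) encoder weights, b : (k,) encoder bias
    W2 : (d, k) decoder weights, b2 : (d,) decoder bias
    """
    h = sigmoid(W @ x + b)         # hidden activation, Eq. (3)
    y_hat = sigmoid(W2 @ h + b2)   # reconstruction of the clean input, Eq. (4)
    return h, y_hat
```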
Due to the fact that directly processing the entire image is intractable, we instead draw overlapping patches from the image as our data objects. In the training phase, the model is supplied with both the corrupted noisy image patches x_i, for i = 1, 2, ..., N, and the original patches y_i. After training, SSDA will be able to reconstruct the corresponding clean image given any noisy observation. To combine the virtues of sparse coding and neural networks and to avoid over-fitting, we train a DA to minimize the reconstruction loss regularized by a sparsity-inducing term:

L₁(X, Y; θ) = (1/N) Σ_{i=1}^{N} (1/2)‖y_i − ŷ(x_i)‖²₂ + β KL(ρ̂‖ρ) + (λ/2)(‖W‖²_F + ‖W′‖²_F),  (6)

where

KL(ρ̂‖ρ) = Σ_{j=1}^{|ρ̂|} [ρ log(ρ/ρ̂_j) + (1 − ρ) log((1 − ρ)/(1 − ρ̂_j))],  ρ̂ = (1/N) Σ_i h(x_i),

and h(·) and ŷ(·) are defined in (3) and (4), respectively. Here ρ̂ is the average activation of the hidden layer. We regularize the hidden layer representation to be sparse by choosing a small ρ, so that the KL-divergence term encourages the mean activation of the hidden units to be small. Hence the hidden units will be zero most of the time and achieve sparsity. After training the first DA, we use h(y_i) and h(x_i) as the clean and noisy input, respectively, for the second DA. This is different from the approach described in [21], where x_i is discarded and η(h(y_i)) is used as the noisy input. We point out that our method is more natural in that, since h(y_i) lies in a different space from y_i, the meaning of applying η(·) to h(y_i) is not clear. We then initialize a deep network with the weights obtained from K stacked DAs.

Table 1: Comparison of the denoising performance. Performance is measured by Peak Signal to Noise Ratio (PSNR); results are averaged over the testing set.

Standard deviation σ   25 (PSNR = 20.17)   50 (PSNR = 14.16)   100 (PSNR = 8.13)
SSDA                   30.52 ± 1.02        27.37 ± 1.10        24.18 ± 1.39
BLS-GSM                30.49 ± 1.17        27.28 ± 1.44        24.37 ± 1.36
KSVD                   30.96 ± 0.77        27.34 ± 1.11        23.50 ± 1.15
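The regularized loss in Eq. (6) is straightforward to compute given a batch of targets, reconstructions, and hidden activations; a sketch with the paper's default hyper-parameter values plugged in (the function and argument names are mine):

```python
import numpy as np

def ssda_loss(Y, Y_hat, H, W, W2, beta=1e-2, lam=1e-4, rho=0.05):
    """Loss of Eq. (6): reconstruction error + KL sparsity + weight decay.

    Y, Y_hat : (N, d) clean targets and reconstructions
    H        : (N, k) hidden activations; rho_hat is their column mean
    W, W2    : encoder and decoder weight matrices
    """
    N = Y.shape[0]
    recon = 0.5 * np.sum((Y - Y_hat) ** 2) / N
    rho_hat = H.mean(axis=0)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    decay = 0.5 * lam * (np.sum(W ** 2) + np.sum(W2 ** 2))
    return recon + beta * kl + decay
```

When the reconstruction is perfect, the mean hidden activation equals ρ, and the weights are zero, all three terms vanish, which is a convenient sanity check.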
The network has one input layer, one output layer, and 2K − 1 hidden layers, as shown in Fig. 1b. The entire network is then trained using the standard back-propagation algorithm to minimize the following objective:

L₂(X, Y; θ) = (1/N) Σ_{i=1}^{N} (1/2)‖y_i − ŷ(x_i)‖²₂ + (λ/2) Σ_{j=1}^{2K} ‖W_j‖²_F.  (7)

Here we removed the sparsity regularization because the pre-trained weights serve as regularization for the network [18]. In both the pre-training and fine-tuning stages, the loss functions are optimized with the L-BFGS algorithm (a quasi-Newton method) which, according to [22], achieves the fastest convergence in our settings.

3 Experiments

We narrow our focus to denoising and inpainting of grey-scale images, but there is no difficulty in generalizing to colored images. We use a set of natural images collected from the web¹ as our training set and standard testing images² as the testing set. We create noisy images from clean training and testing images by applying the function (1) to them. Image patches are then extracted from both clean and noisy images to train SSDAs. We employ Peak Signal to Noise Ratio (PSNR) to quantify denoising results: 10 log₁₀(255²/σ²_e), where σ²_e is the mean squared error. PSNR is one of the standard indicators used for evaluating image denoising results.

3.1 Denoising White Gaussian Noise

We first corrupt images with additive white Gaussian noise of various standard deviations. For the proposed method, one SSDA model is trained for each noise level. We evaluate different hyper-parameter combinations and report the best result. We set K to 2 for all cases because adding more layers may slightly improve the performance but requires much more training time. In the meantime, we try different patch sizes and find that a higher noise level generally requires a larger patch size.

¹http://decsai.ugr.es/cvg/dbimagenes/
²Widely used images commonly referred to as Lena, Barbara, Boat, Pepper, etc. in the image processing community.
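The PSNR metric used throughout is just 10 log₁₀(255²/MSE); a minimal implementation (assuming 8-bit pixel range):

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak Signal-to-Noise Ratio: 10 * log10(peak^2 / MSE)."""
    clean = np.asarray(clean, dtype=float)
    denoised = np.asarray(denoised, dtype=float)
    mse = np.mean((clean - denoised) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that a constant pixel error of 25 gives 20 log₁₀(255/25) ≈ 20.17 dB, which matches the σ = 25 input PSNR reported in Table 1.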
Figure 2: Visual comparison of denoising results. Results for images corrupted by white Gaussian noise with standard deviation σ = 50 are shown. The last row zooms in on the outlined region of the original image.

The dimension of the hidden layers is generally set to be a constant factor times the dimension of the input³. SSDA is not very sensitive to the weights of the regularization terms. For the Bayes Least Squares-Gaussian Scale Mixture (BLS-GSM) and KSVD methods, we use the fully trained and optimized toolboxes obtained from the corresponding authors [2, 7]. All three models are tuned to the specific noise level of each input. The comparison of quantitative results is shown in Tab. 1. Numerical results show that the differences between the three algorithms are statistically insignificant. A visual comparison is shown in Fig. 2. We find that SSDA gives clearer boundaries and restores more texture details than KSVD and BLS-GSM, although the PSNR scores are close. This indicates that although the reconstruction errors averaged over all pixels are the same, SSDA is better at denoising complex regions.

3.2 Image Inpainting

Figure 3: Visual comparison of inpainting results.

For the image inpainting task, we test our model on the text removal problem. Both the training and testing sets consist of images with superimposed text of various fonts, with sizes from 18-pix to 36-pix. Due to the lack of comparable blind inpainting algorithms, we compare our method to the non-blind KSVD inpainting algorithm [7], which significantly simplifies the problem by requiring knowledge of which pixels are corrupted and require inpainting. A visual comparison is shown in Fig. 3. We find that SSDA is able to eliminate text in small fonts completely, while text in larger fonts is dimmed. The proposed method, being blind, generates results comparable to KSVD's even though KSVD is a non-blind algorithm. Non-blind inpainting is a well-developed technology that works decently on the removal of small objects.
Blind inpainting, however, is much harder, since it demands automatic identification of the patterns that require inpainting, which is by itself a very challenging problem. To the best of our knowledge, previous methods are only capable of removing i.i.d. or simply structured impulse noise [13, 14, 15]. SSDA's capability of blind inpainting of complex patterns is one of this paper's major contributions.

Table 2: Comparison of classification results. The highest accuracy in each column is shown in bold font.

                      Testing noise
Training noise        Gaussian    Salt-and-Pepper    Image background
Gaussian              91.42%      82.95%             86.45%
Salt-and-Pepper       90.05%      90.14%             81.77%
Image background      84.88%      74.47%             86.87%

3.3 Hidden Layer Feature Analysis

Traditionally, when training denoising auto-encoders, the noisy training data is usually generated with an arbitrarily selected, simple noise distribution, regardless of the characteristics of the specific training data [21]. However, we propose that this process deserves more attention. In real-world problems, the clean training data is in fact usually subject to noise. Hence, if we estimate the distribution of the noise and exaggerate it to generate noisy training data, the resulting DA will learn to be more robust to noise in the input data and produce better features. Inspired by SSDA's ability to learn different features when trained on denoising different noise patterns, we argue that training denoising auto-encoders with noise patterns that fit specific situations can also improve the performance of unsupervised feature learning. We demonstrate this with a comparison of classification performance using different sets of features learned on the MNIST dataset. We train DAs with different types of noise and then apply them to handwritten digits corrupted by the type of noise they are trained on as well as by other types of noise. We compare the quality of the learned features by feeding them to SVMs and comparing the corresponding classification accuracy.
The results are shown in Tab. 2. We find that the highest classification accuracy on each type of noise is achieved by the DA trained to remove that type of noise. This is not surprising, since more information is utilized; however, it indicates that instead of arbitrarily corrupting the input with noise that follows a simple distribution and feeding it to a DA, more sophisticated methods that corrupt the input in more realistic ways can achieve better performance.

4 Discussion

4.1 Prior vs. Learned Structure

Unlike models relying on structural priors, our method's denoising ability comes from learning. Some models, for example BLS-GSM, have carefully designed structures that can give surprisingly good results with random parameter settings [23]. However, a randomly initialized SSDA obviously cannot produce any meaningful results. Therefore SSDA's ability to denoise and inpaint images is mostly the result of training. Whereas models that rely on structural priors usually have a very limited scope of application, our model can be adapted to other tasks more conveniently. With some modifications, it is possible to denoise audio signals or complete missing data (as a data preprocessing step) with SSDA.

4.2 Advantages and Limitations

Traditionally, for complicated inpainting tasks, an inpainting mask that tells the algorithm which pixels correspond to noise and require inpainting is supplied a priori. However, in various situations this is time consuming or sometimes even impossible. Our approach, being blind, has significant advantages in such circumstances. This makes our method a suitable choice for fully automatic and noise-pattern-specific image processing. The limitation of our method is also obvious: SSDA strongly relies on supervised training. In our experiments, we find that SSDA can generalize to unseen but similar noise patterns. Generally speaking, however, SSDA can remove only the noise patterns it has seen in the training data. Therefore,

³We set this factor to 5.
The other hyper-parameters are λ = 10⁻⁴, β = 10⁻², and ρ = 0.05.

SSDA would only be suitable in circumstances where the scope of denoising tasks is narrow, such as reconstructing images corrupted by a certain procedure.

5 Conclusion

In this paper, we present a novel approach to image denoising and blind inpainting that combines sparse coding and deep neural networks pre-trained with denoising auto-encoders. We propose a new training scheme for the DA that makes it possible to denoise and inpaint images within a unified framework. In the experiments, our method achieves performance comparable to the traditional linear sparse coding algorithm on the simple task of denoising additive white Gaussian noise. Moreover, our non-linear approach successfully tackles the much harder problem of blind inpainting of complex patterns which, to the best of our knowledge, has not been addressed before. We also show that the proposed training scheme is able to improve the DA's performance in unsupervised feature learning tasks. In future work, we would like to explore the possibility of adapting the proposed approach to various other applications, such as denoising and inpainting of audio and video, image super-resolution, and missing data completion. It is also meaningful to investigate the effects of different hyper-parameter settings on the learned features.

6 Acknowledgement

Research supported by grants from the National Natural Science Foundation of China (No. 61003135 & No. 61073110), NSFC Major Program (No. 71090401/71090400), the Fundamental Research Funds for the Central Universities (WK0110000022), the National Major Special Science & Technology Projects (No. 2011ZX04016-071), and the Research Fund for the Doctoral Program of Higher Education of China (20093402110017, 20113402110024).

References

[1] J. Xu, K. Zhang, M. Xu, and Z. Zhou. An adaptive threshold method for image denoising based on wavelet domain.
Proceedings of SPIE, the International Society for Optical Engineering, 7495:165, 2009.
[2] J. Portilla, V. Strela, M.J. Wainwright, and E.P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing, 12(11):1338–1351, 2003.
[3] F. Luisier, T. Blu, and M. Unser. A new SURE approach to image denoising: Interscale orthonormal wavelet thresholding. IEEE Transactions on Image Processing, 16(3):593–606, 2007.
[4] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
[5] K. Kreutz-Delgado, J.F. Murray, B.D. Rao, K. Engan, T.W. Lee, and T.J. Sejnowski. Dictionary learning algorithms for sparse representation. Neural Computation, 15(2):349–396, 2003.
[6] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736–3745, 2006.
[7] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Transactions on Image Processing, 17(1):53–69, 2008.
[8] X. Lu, H. Yuan, P. Yan, Y. Yuan, L. Li, and X. Li. Image denoising via improved sparse coding. Proceedings of the British Machine Vision Conference, pages 74–1, 2011.
[9] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. Proceedings of the 26th Annual International Conference on Machine Learning, pages 689–696, 2009.
[10] A. Criminisi, P. Pérez, and K. Toyama. Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing, 13(9):1200–1212, 2004.
[11] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image inpainting. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 417–424, 2000.
[12] A. Telea. An image inpainting technique based on the fast marching method. Journal of Graphics Tools, 9(1):23–34, 2004.
[13] B. Dong, H. Ji, J. Li, Z. Shen, and Y. Xu. Wavelet frame based blind image inpainting. Applied and Computational Harmonic Analysis, 2011.
[14] Y. Wang, A. Szlam, and G. Lerman. Robust locally linear analysis with applications to image denoising and blind inpainting. Preprint, 2011.
[15] M. Yan. Restoration of images corrupted by impulse noise using blind inpainting and l0 norm. Preprint, 2011.
[16] V. Jain and H.S. Seung. Natural image denoising with convolutional networks. Advances in Neural Information Processing Systems, 21:769–776, 2008.
[17] H. Lee, C. Ekanadham, and A. Ng. Sparse deep belief net model for visual area V2. Advances in Neural Information Processing Systems 20, pages 873–880, 2008.
[18] D. Erhan, Y. Bengio, A. Courville, P.A. Manzagol, P. Vincent, and S. Bengio. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 11:625–660, 2010.
[19] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[20] R. Salakhutdinov and G.E. Hinton. Deep Boltzmann machines. Proceedings of the International Conference on Artificial Intelligence and Statistics, 5(2):448–455, 2009.
[21] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010.
[22] Q.V. Le, A. Coates, B. Prochnow, and A.Y. Ng. On optimization methods for deep learning. Proceedings of the 28th International Conference on Machine Learning, pages 265–272, 2011.
[23] S. Roth. High-order Markov random fields for low-level vision. PhD thesis, Brown University, 2007.
|
2012
|
161
|
4,521
|
Strategic Impatience in Go/NoGo versus Forced-Choice Decision-Making

Pradeep Shenoy, Cognitive Science Department, University of California, San Diego, La Jolla, CA 92093, pshenoy@ucsd.edu
Angela J. Yu, Cognitive Science Department, University of California, San Diego, La Jolla, CA 92093, ajyu@ucsd.edu

Abstract

Two-alternative forced choice (2AFC) and Go/NoGo (GNG) tasks are behavioral choice paradigms commonly used to study sensory and cognitive processing in choice behavior. While GNG is thought to isolate the sensory/decisional component by eliminating the need for response selection as in 2AFC, a consistent tendency for subjects to make more Go responses (both higher hit and false alarm rates) in the GNG task raises the concern that there may be fundamental differences in the sensory or cognitive processes engaged in the two tasks. Existing mechanistic models of these choice tasks, mostly variants of the drift-diffusion model (DDM; [1, 2]) and the related leaky competing accumulator models [3, 4], capture various aspects of behavioral performance, but do not clarify the provenance of the Go bias in GNG. We postulate that this “impatience” to go is a strategic adjustment in response to the implicit asymmetry in the cost structure of the 2AFC and GNG tasks: the NoGo response requires waiting until the response deadline, while a Go response immediately terminates the current trial. We show that a Bayes-risk minimizing decision policy that minimizes not only error rate but also average decision delay naturally exhibits the experimentally observed Go bias. The optimal decision policy is formally equivalent to a DDM with a time-varying threshold that initially rises after stimulus onset, and collapses again just before the response deadline. The initial rise in the threshold is due to the diminishing temporal advantage of choosing the fast Go response compared to the fixed-delay NoGo response. We also show that fitting a simpler, fixed-threshold DDM to the optimal model reproduces the counterintuitive result of a higher threshold in GNG than in 2AFC decision-making, previously observed in direct DDM fits to behavioral data [2], although such fixed-threshold approximations cannot reproduce the Go bias. Our results suggest that observed discrepancies between GNG and 2AFC decision-making may arise from rational strategic adjustments to the cost structure, and thus need not imply any other difference in the underlying sensory and cognitive processes.

1 Introduction

The two-alternative forced-choice (2AFC) task is a standard experimental paradigm used in psychology and neuroscience to investigate various aspects of sensory, motor, and cognitive processing [5]. Typically, the paradigm involves a forced choice between two responses based on a presented stimulus, with the measured response time and accuracy of choices shedding light on the cognitive and neural processes underlying behavior. Another paradigm that appears to share many features of the 2AFC task is the Go/NoGo (GNG) task [6] (see Luce [5] for a review), where one stimulus category is associated with an overt Go response that has to be executed before a response deadline, and the other stimulus (NoGo) requires withholding response until the response deadline has elapsed. In principle, the GNG task could be used to probe the same decision-making problems as the 2AFC task, with the possible advantage of eliminating a “response selection stage” that may follow the decision in the 2AFC task [6, 7]. Indeed, the GNG task has been used to study various aspects of human and animal cognition, e.g., lexical judgements [8, 9], perceptual decision-making [10, 11, 12], and the neural basis of choice behavior (in particular, distinguishing among neural activations associated with stimulus, memory, and response) [13, 14, 15].
However, experimental evidence also indicates that there is a curious choice bias toward the overt (Go) response in the GNG task [11, 16, 2, 15], in the form of shorter response times and more false alarms for the Go response, compared to the same stimulus pairings in a 2AFC task [2, 16]. It has been suggested that this choice bias may reflect differential sensory and cognitive processes underlying the two tasks, thus making the two non-interchangeable in the study of perception and decision-making. In this paper, we hypothesize that this discrepancy may simply be due to differences in the implicit reward (cost) structure of the two tasks: the NoGo response incurs a higher imposed waiting cost than the Go response, since the NoGo response must wait until the response deadline has passed to register, while a Go response immediately terminates the trial. In contrast, in the 2AFC task, the cost function is symmetric for the two alternatives, whether in terms of error or delay. We propose that the implicit cost structure difference in GNG can fully account for the Go bias in GNG compared to 2AFC tasks, without the need to appeal to other differences in sensory or cognitive processing. To investigate this hypothesis, we adopt a Bayes risk minimization framework for both the 2AFC and GNG tasks, whereby sensory processing is modeled as iterative Bayesian inference of stimulus type based on a stream of noisy sensory input, and the decision of when/how to respond rests on a policy that minimizes a linear combination of expected decision delay and response errors. The optimal decision policy for this Bayes-risk formulation in the 2AFC task is known as the sequential probability ratio test (SPRT; [17, 18]), and has been shown to account for both behavioral [19, 4] and neural data [19, 20].
Here, we generalize this theoretical framework to account for both 2AFC and GNG decision-making in a unified framework, by assuming that a subject’s sensory and perceptual processing (of the same pair of stimuli) and the relative preference for decision accuracy versus speed are shared across 2AFC and GNG, with the only difference between them being the asymmetric temporal cost implicit in the reward structure of the GNG task – the Go response terminating a trial while the NoGo response only registering after the response deadline. As a stochastic process, SPRT is a bounded random walk, whereby the stochasticity in the random walk comes from noise in the observation process. The continuum (time) limit of a bounded random walk is the bounded drift-diffusion model (DDM), which generally assumes a stochastic dynamic variable undergoing constant drift, as well as diffusion due to Wiener noise, until one of two finite thresholds is breached. In psychology, the DDM has been augmented with additional parameters such as a non-decision-related response delay, variability in drift rate, and variability in starting point across trials. Figure 4A shows a simple variant of the DDM illustrating the following parameters: rate of accumulation, threshold, and “nondecision time” or temporal offset to the start of the diffusion process. These augmented DDMs have been used to model behavior in 2AFC tasks [21, 22, 23, 5, 24, 4], and also appear to provide good descriptive accounts of the neural activities underlying perceptual decision-making [25, 20, 26, 27]. Variants of the augmented DDM have also been utilized to fit data in other simple decision-making tasks, including the GNG task [2]. While augmenting the DDM with extra parameters gives it additional power in explaining subtleties in data, this also diminishes the normative interpretability of DDM fits by eliminating its formal relationship to the optimal SPRT procedure.
As a consequence, when the behavioral objectives change, e.g., in the GNG task, the DDM cannot predict a priori what parameters ought to change and how much. Instead, we begin with a Bayes-risk minimization formulation and derive the non-parametric optimal decision procedure as a function of sensory statistics and behavioral objectives. We then map the optimal policy to the DDM model space, and compare directly with previously proposed DDM variants in the context of 2AFC and GNG tasks. In the following sections, we first describe our proposed Bayesian inference and decision-making model, then compare simulations of the optimal decision-making model with published experimental data of subjects performing perceptual decision-making in 2AFC and GNG tasks [16]. We also examine other evidence exploring the degree of Go bias in the GNG task [28]. Next, we consider the formal relationship between the optimal model and a fixed-threshold DDM that was previously utilized to fit behavioral data from the GNG task [2, 12]. Finally, we present novel experimental predictions of the optimal decision-making model, including those that specifically differ from the fixed-threshold DDM approximation [2, 12]. Figure 1: Systematic error biases in the GNG task. (A) The figure shows error rates associated with a perceptual decision-making task performed by subjects in both Go/NoGo and Yes/No (forced choice) settings. Although the error rates in the forced choice settings were similar for both classes, there was a significant bias towards the Go response in the GNG task, with more false alarms than omission errors. (B) Mean response time on the GNG task was lower than for the same stimulus on the 2AFC task. (Data adapted from Bacon-Mace et al., 2007).
2 Bayesian inference and risk minimization in choice tasks Human choice behavior in the GNG and 2AFC tasks exhibits a consistent Go bias in the GNG task that is not apparent for the same stimulus in the 2AFC task. For example, Figure 1 shows data from a task in which subjects must identify whether a briefly-presented noisy image contains an animal or not [16], under two different response conditions: GNG (only respond to animal-present images), and 2AFC (respond yes/no to each image). Subjects showed a significant bias towards the Go response in the GNG task, in the form of higher false alarms than omission errors (Figure 1A), as well as faster RT than for the same stimulus in the 2AFC task (Figure 1B). For the 2AFC task, a large body of literature supports the “accumulate-to-bound” model of perceptual decision-making [23, 20, 26], where moment-to-moment sensory input (“evidence” in favor of either choice) is accumulated over time until it reaches a bound, at which point a response is generated. Previous work by Yu & Frazier [29] extended the formulation to include 2AFC tasks with a decision deadline, in which subjects have the additional constraint of not exceeding a decision deadline. They showed that the optimal policy for decision-making under a deadline is to accumulate evidence up to time-varying thresholds that collapse toward each other over time, leading to more “liberal” choices and higher error rates in later responses than earlier ones. Here, we generalize the framework to model the GNG task. In particular, the same deadline by which the subject must make a response (or else be counted as a “miss”) on a Go trial is the one for which the subject must withhold response (or else be counted as a “false alarm”).
We model evidence accumulation as iterative Bayesian inference over the identity of the stimulus, and decision-making as an iterative decision policy that chooses whether to respond (and which one in 2AFC) or continue observing at least one more time point, based on current evidence. The optimal policy minimizes the expected value of a cost function that depends linearly on decision delay and errors. The model is described below. 2.1 Evidence integration as Bayesian inference We model evidence accumulation, in both 2AFC and GNG, as iterative Bayesian inference about the stimulus identity conditioned on an independent and identically distributed (i.i.d.) stream of sensory input. Specifically, we assume a generative model where the observations are a continual sequence of data samples x1, x2, . . ., iid-generated from a likelihood function f0(x) or f1(x) depending on whether the true stimulus state is d = 0 or d = 1, respectively. This incoming stream therefore provides accumulating evidence of the hidden category label d ∈ {0, 1}. For concreteness, we assume the likelihood functions are Gaussian distributions with means ±µ (+ for d = 1, − for d = 0), and a variance parameter σ2 controlling the noisiness of the stimuli. Figure 2: Rational behavior in 2AFC and GNG tasks. (A) The figure shows the decision threshold as a function of belief state across the 2AFC and GNG tasks. The optimal decision boundary for 2AFC is a pair of parallel thresholds (solid line) that collapse and meet at the response deadline (indicated by dashed vertical line). The optimal GNG decision boundary is a single initially increasing threshold (dashed line), that decreases to 0.5 at the response deadline.
(B;C) Monte Carlo simulation of the optimal policy shows a bias towards the overt response in the GNG task. The two response alternatives in the 2AFC task are represented as “left” and “right”, corresponding to “nogo” and “go” in the GNG task (B). The GNG task shows lower miss rate and higher false alarm rate than the corresponding 2AFC error rate (B), along with faster RT than the 2AFC task (C). Compare to the experimental data in Figure 1. Parameter settings: c = 0.01, µ = 0.25, D = 40 timesteps. The recognition model specifies the mechanism by which stimulus identity is inferred from the noisy observations xt. In our model, we compute a posterior distribution over the category label conditioned on the data sampled so far xt ≜ (x1, x2, . . . , xt), bt ≜ P{d = 1|xt}, also known as the belief state, by iteratively applying Bayes’ rule: bt+1 = bt f1(xt+1) / [bt f1(xt+1) + (1 − bt) f0(xt+1)]   (1) where b0 ≜ P{d = 1} is the prior probability of the stimulus category being 1 (and is 0.5 for equally likely stimuli). We hypothesize that the same evidence accumulation mechanism underlies decision-making in both tasks, in particular with the same noise process/likelihood functions, f0(x) and f1(x), for a particular individual observing the same stimuli. 2.2 Action selection as Bayes-risk minimization We model behavior in the two tasks as a sequential decision-making process where, at each instant, the model decides between two actions, as a function of the current evidence so far, encapsulated in the current belief state bt: stop (and choose the response for the more probable stimulus category for 2AFC), or continue one more time step. A stopping policy is a mapping from the belief state to the action space, π : bt ↦ {stop, continue}, where the stop action in 2AFC also requires a stimulus category decision δ.
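The iterative update in Eq. (1) is easy to state in code. The sketch below is not from the paper; the Gaussian likelihoods with means ±µ follow the generative model described above, and the particular values µ = 0.25, σ = 1 are illustrative assumptions.

```python
import math

def belief_update(b, x, mu=0.25, sigma=1.0):
    """One step of the iterative Bayesian update of Eq. (1).

    b     : current belief b_t = P(d = 1 | x_1..x_t)
    x     : new observation x_{t+1}
    mu    : likelihood means are +mu (d = 1) and -mu (d = 0); illustrative value
    sigma : observation noise standard deviation; illustrative value
    """
    def gauss(z, m):
        return math.exp(-(z - m) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    f1, f0 = gauss(x, mu), gauss(x, -mu)
    return b * f1 / (b * f1 + (1 - b) * f0)

# Starting from the uniform prior b_0 = 0.5, a run of positive samples
# drives the belief toward the d = 1 (Go) category.
b = 0.5
for x in [0.4, 0.1, 0.6]:
    b = belief_update(b, x)
```

By the symmetry of the two likelihoods, an observation at x = 0 leaves the belief unchanged; positive observations push it toward d = 1 and negative ones toward d = 0.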
In accordance with the standard Bayes risk framework for optimizing the decision policy in a stopping problem, we assume that the behavioral cost function is a linear combination of the probability of making a decision error and the expected decision delay τ (the stopping time if a response is emitted before the deadline, and the deadline D otherwise). We assume that the decision delay component is weighted by a sampling or time cost c, while all decision errors are penalized by the same magnitude and normalized to unit cost. Based on this cost function, the optimal decision policy is the policy that minimizes the overall expected cost: 2AFC: Lπ = c⟨τ⟩ + P{δ ≠ d} + P{τ = D}   (2) GNG: Lπ = c⟨τ⟩ + P{τ = D|d = 1}P{d = 1} + P{τ < D|d = 0}P{d = 0}   (3) The 2AFC cost function is a special case of the more general scenario previously considered for deadlined sequential hypothesis testing [29]: P{δ ≠ d} is the expected wrong response cost, while P{τ = D} is the expected cost of not responding before the deadline (omission error). In the GNG cost function, P{τ = D|d = 1} is the probability that no response is emitted before the deadline on a Go trial (miss), P{τ < D|d = 0} is the probability that a NoGo trial is terminated by a Go response (false alarm), a correct hit requires τ < D (responding before the deadline), and a correct NoGo response consists of a series of continue actions until a predefined response deadline D. In both GNG and 2AFC tasks, the choice to stop limits the decision delay cost, and the choice to continue (up to a predefined response deadline D) results in the collection of more data that help to disambiguate the stimulus category, but at the cost of c per additional sample of data observed. Figure 3: Influence of stimulus statistics on Go bias. Our model predicts that false alarms are more frequent than misses (A), and are also faster than correct Go RTs (B). The Go bias, which is apparent at 50% Go trials, is significantly increased when Go trials are more frequent (80%), and reduced when Go trials are reduced to 20% of the trials. Parameter settings: c = 0.014, µ = 0.45, D = 40 timesteps. (C-D) Human subjects exhibited a similar pattern of behavior in a letter discrimination task (Data from Nieuwenhuis et al., 2003). We compute the optimal policy using Bellman’s dynamic programming principle (Bellman, 1952). Specifically, we iteratively compute the expected cost of continue and stop as a function of the belief state bt (these are the Q-factors for continue and stop, Qc(bt) and Qs(bt)). If Qc(bt) < Qs(bt), then the optimal policy chooses to continue; otherwise, it chooses to stop; therefore, the belief state is partitioned by the decision policy into a continuation region and a stopping region (details omitted due to lack of space). The principal difference between the two tasks as formulated here is the loss function. In the 2AFC task, all trials are terminated by a response (unless the response deadline is exceeded). However, in the GNG version, subjects have to wait until the response deadline to choose the NoGo response. This introduces a significant extra cost of time for NoGo responses, suggesting that it may in some cases be better to select the Go response despite the relative inadequacy of sensory evidence. We explore these aspects in detail in the following section. 3 Results Opportunity cost and the Go/NoGo decision threshold Figure 2A illustrates the difference between the optimal decision policies for the two tasks.
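The backward induction over the belief state sketched above (comparing the Q-factors Qc and Qs at each step) can be written out directly for the GNG cost function of Eq. (3). The sketch below discretizes the belief and observation spaces; the parameter values and grid sizes are illustrative assumptions, not the paper's fits.

```python
import numpy as np

# Assumed parameters: time cost c, likelihood mean mu, noise sigma, deadline D.
c, mu, sigma, D = 0.01, 0.25, 1.0, 40
beliefs = np.linspace(0.0, 1.0, 201)      # grid over b_t = P(d = 1)
xs = np.linspace(-4.0, 4.0, 161)          # quadrature grid for the next sample
dx = xs[1] - xs[0]
f1 = np.exp(-(xs - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
f0 = np.exp(-(xs + mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

V = beliefs.copy()                        # at t = D the NoGo registers: cost = P(miss) = b
thresholds = []
for t in range(D - 1, -1, -1):
    q_stop = 1.0 - beliefs                # Qs: Go now, error cost = P(false alarm) = 1 - b
    q_cont = np.empty_like(beliefs)       # Qc: pay c, observe x, update the belief
    for i, b in enumerate(beliefs):
        px = b * f1 + (1 - b) * f0        # predictive density of the next sample
        b_next = b * f1 / np.clip(px, 1e-12, None)
        q_cont[i] = c + np.sum(px * np.interp(b_next, beliefs, V)) * dx
    V = np.minimum(q_stop, q_cont)
    thresholds.append(beliefs[q_stop <= q_cont].min())  # lowest belief at which Go is optimal
thresholds = thresholds[::-1]             # thresholds[t] is the Go threshold at time t
```

Consistent with the text, the recovered Go threshold approaches 0.5 as t nears the deadline, since at that point the only remaining temporal saving from going is a single time step.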
The red lines (solid: 2AFC, dashed: GNG) illustrate the optimal decision thresholds, which, when exceeded by the cumulative sensory evidence bt, generate the corresponding response, as a function of time. For the 2AFC task, the optimal policy is a pair of thresholds that are initially fairly constant over time, but then collapse toward each other (into an empty set if the cost of exceeding the deadline is sufficiently large) as the deadline approaches (cf. [29]). In contrast, the threshold for the GNG task (dotted line) is a single threshold that varies over time, and is lower at the beginning of the trial. This is a direct consequence of the opportunity cost involved with waiting until the deadline: if the deadline is far away, the cost of waiting may be more than the cost of an immediate error that terminates the trial; indeed, we expect that the farther away the deadline, the greater the temporal cost savings conferred by a Go response over waiting to register the NoGo response. Figure 4: Drift-diffusion model (DDM) for 2AFC and GNG tasks. (A) A simplified version of the DDM for 2-choice tasks, where a noisy accumulation process with a certain rate produces one of two responses when it reaches a positive or negative threshold. In addition to the rate and threshold parameters, a third parameter (the temporal offset to the start of the accumulation process) represents the nondecision processes associated with visual and motor delays. (B) DDM fits to 2AFC and GNG choice data (Gomez et al., 2007, Mack & Palmeri, 2010) suggest that the GNG task is associated with a higher threshold and shorter offset than the 2AFC task. (C) The optimal decision-making model predicts a lower, time-varying threshold for the GNG task. Decision-making in 2AFC and GNG tasks Figure 2B;C shows the effect of the time-varying threshold on RT and accuracy in an example model simulation.
Figure 2B shows that the GNG model is significantly biased towards the Go response, with a higher fraction of false alarms than misses. This asymmetry is absent in the 2AFC model performance. In addition, GNG response times are faster than 2AFC response times (Figure 2C). This bias is a direct result of the time-varying threshold in the GNG task; early on in the trial, the decision threshold is lower, and produces fast, error-prone responses. This model prediction is consistent with data from human perceptual decision-making. Figure 1 shows behavioral data in the two tasks [16] – subjects determined from a brief presentation of a noisy visual stimulus whether or not the image contained an animal. The same task was performed in two response conditions: 2AFC, where each stimulus required a yes/no response, and GNG, where subjects only responded to images containing the target. Figure 1A shows that in the 2AFC condition, subjects are not significantly biased towards either response, with both false alarm and miss rates being similar to each other. On the other hand, in the Go/NoGo condition, subjects showed a significant bias towards the overt response, thus producing substantially more false alarms and fewer misses. In the GNG task, their RT was significantly shorter than in the 2AFC task (Figure 1B). Similar results have also been reported by Gomez et al. in the context of lexical decision-making [2]. Influence of stimulus probability on Go bias We investigate the degree of Go bias in the GNG model by considering the effect of trial type frequency on behavioral measures in the GNG task. Model simulations (Figure 3) show that, consistent with Figure 2 and a host of other experimental data, there is a significant bias toward the Go response when Go and NoGo trials are equiprobable, and this bias is increased (respectively diminished) as NoGo trials are fewer or more frequent.
The figure also shows that RT for both correct Go and erroneous NoGo responses increases with the frequency of NoGo trials, and that false alarm RT is faster than correct response RT. In recent work, Nieuwenhuis et al. [28] used a block design to compare choice accuracy and RT in a letter discrimination task when the fraction of NoGo trials was set to 20%, 50%, and 80%. As shown in Figure 3C;D, subjects’ behavior was reliably modulated by trial type frequency, in a manner closely reflecting model predictions. Figure 5: DDM approximation to optimal decision-making model. Simplified DDMs were fit to optimal model simulations of 2AFC and GNG behavior, and the best-fit parameters compared between tasks. The DDM approximation for optimal GNG behavior shows a higher decision threshold (B), and lower nondecision time (C), than the DDM approximation for the 2AFC task. In addition, the rate of evidence accumulation was also lower for the GNG fit (A). In our formulation, although the decision boundary is unchanged by the experimental manipulation, the stimulus frequency induces a prior belief over the identity of the stimulus, and thus represents the starting point for the evidence accumulation process. When Go trials are rare, the starting point is far from the decision boundary, and it takes longer for a response to be generated. Further, due to the extra evidence needed to overcome the prior, choices are less likely to be erroneous. Drift-diffusion models and optimal behavior Various versions of augmented DDM have been used to fit GNG behavioral data, with one variant in particular suggesting that the decision threshold in GNG ought to be higher than 2AFC [2], in an apparent contradiction to our model’s predictions (Figure 4). By fitting RT and choice data from lexical judgment, numerosity judgment, and memory-based decision making tasks, Gomez et al.
[2] found that a DDM with an implicit negative boundary associated with the NoGo stimulus provided a good fit to RT data. Further, joint parameter fits to 2AFC and GNG choice data indicated that the principal difference in the two tasks was in the nondecision time and decision threshold; the rate parameter (representing the evidence accumulation process) was similar in both tasks. In particular, they suggested that the nondecision time was shorter, and the decision threshold higher than in the 2AFC task (Figure 4B). These results were replicated by Mack & Palmeri by fitting DDM to behavioral data from a visual categorization task performed in both 2AFC and GNG versions [12]. Although DDMs are formally equivalent to optimal decision-making in a restricted class of sequential choice problems [18], they do not explicitly represent and manipulate uncertainty and cost, as we do in our Bayesian risk-minimization framework. In particular, our framework allows us to predict that optimal behavior is well-characterized by a DDM with a time-varying threshold (Figure 4C), and that the restricted class of constant-threshold DDMs are insufficient to fully explain observed behavior. Nevertheless, we can ask whether our prediction is consistent with the empirical results obtained from DDM fits with constant decision thresholds. To address this, we computed the best constant-threshold DDM approximations to optimal decision making in the two tasks. We simulated the optimal model with a shared set of parameters for both the 2AFC and GNG tasks, and fit simplified random-walk models with 3 free parameters (Figure 4A) to the output of our optimal model’s simulations. Figure 5 shows that the best-fitting DDM approximation for optimal GNG behavior has a higher threshold and a lower offset parameter than the best-fitting DDM for optimal 2AFC task behavior. 
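To make the fitting comparison concrete, here is a minimal simulator for the simplified DDM of Figure 4A (rate, threshold, and non-decision offset as the three free parameters, with the diffusion noise fixed as the scale). This is an illustrative sketch with made-up parameter values, not the fitted models of [2, 12].

```python
import random

def simulate_ddm(rate=0.1, thresh=1.0, offset=5, noise=0.25, max_t=200, seed=0):
    """Simulate one trial of a fixed-threshold DDM.

    Returns (choice, rt): choice is +1/-1 for the upper/lower boundary,
    or None if neither boundary is reached within max_t steps; rt includes
    the non-decision offset.  All parameter values are illustrative.
    """
    rng = random.Random(seed)
    x = 0.0
    for t in range(max_t):
        x += rate + rng.gauss(0.0, noise)   # constant drift plus Wiener noise
        if x >= thresh:
            return +1, t + 1 + offset
        if x <= -thresh:
            return -1, t + 1 + offset
    return None, max_t + offset

# A higher threshold trades speed for accuracy: mean RT grows with thresh.
mean_rt = sum(simulate_ddm(seed=s)[1] for s in range(100)) / 100.0
```

Fitting such a three-parameter model to trials simulated from the optimal policy (e.g., by matching RT quantiles and error rates) is the approximation step described in the text.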
Note that varying the magnitude of a symmetric (explicit and implicit) decision threshold is not capable of explaining the Go bias towards the overt response. Gomez et al. also considered additional variants of the DDM which allow for a change in the initial starting point, and for a different accumulation rate in the GNG task. These models, when fit to data, showed a bias towards the overt response; however, the quality of fit did not significantly improve [2]. Thus, our results and those of Gomez et al. [2] are conceptually consistent; a principal difference in the two tasks is the decision threshold, whereas the evidence accumulation process is similar across tasks. However, our analysis explains precisely how and why the thresholds in the two tasks are different: the GNG task has a time-varying threshold that is lower than the 2-choice threshold, due to the difference in loss functions in the two tasks. In particular, our model accounts for the bias towards the overt response, without recourse to an implicit decision boundary or additional parameter changes. When optimal behavior is approximated by a simpler class of models (e.g., models with fixed decision threshold), the best fit to optimal GNG behavior turns out to be a higher threshold and shorter nondecision time, as found by previous work [2, 12], and adjustments to the initial starting point are required to explain the overt response bias. 4 Discussion Forcing a choice between two alternatives is a fundamental technique used to study a wide variety of perceptual and cognitive phenomena, but there has long been confusion over whether GNG and 2AFC variants of such tasks are probing the same underlying neural and cognitive processes. Our work demonstrates that a common Bayes-optimal sequential inference and decision policy can explain the behavioral results in both tasks, as well as what was perceived to be a troubling Go bias in the GNG task, compared to 2AFC.
We showed that the Go bias arises naturally as a rational response to the asymmetric time cost between Go and NoGo responses, as the former immediately terminates the trial, while the latter requires the subject to wait until the end of the trial to record the choice. The consequence of this cost asymmetry is an optimal decision policy that requires Bayesian evidence accumulation up to a time-varying boundary, which has an inverted-U shape: the initial low boundary is due to the temporal advantage of choosing to Go early and save on the time necessary to wait to register a NoGo response; the later collapsing of the boundary is due to the expectation of the deadline for responding. We showed that this optimal decision policy accounts for the general behavioral phenomena observed in GNG tasks, in particular accounting for the Go bias. Importantly, our work shows that there need not be any fundamental differences in the cognitive and neural processes underlying perception and decision-making in these tasks, at least not on account of the Go bias. Our model makes several novel experimental predictions for the GNG task: (1) for fast responses, false alarm rate increases as a function of response time (in contrast, the fixed-threshold DDM approximation predicts a constant false alarm rate); (2) lengthening the response deadline should exacerbate the Go bias; (3) if GNG and 2AFC share a common inference and decision-making neural infrastructure, then our model predicts within-subject cross-task correlation: e.g., favoring speed over accuracy in the 2AFC task should correlate with a greater Go bias in the GNG task. The optimal decision policy for the GNG task can naturally be viewed as a stochastic process (though it is normatively derived from task statistics and behavioral goals). We can therefore compare our model to other stochastic process models previously proposed for the GNG task.
Our model has a single decision threshold associated with the overt response, consistent with some early models proposed for the task (see e.g., Sperling et al. [30]). In contrast, the extended DDM framework proposed by Gomez et al. has an additional boundary associated with the NoGo response (corresponding to a covert NoGo response). Gomez et al. report that single-threshold variants of the DDM provided very poor fits to the data. Although computationally and behaviorally we do not require a covert response or associated threshold, it is nevertheless possible that neural implementations of behavior in the task may involve an explicit “NoGo” choice. For instance, substantial empirical work aims to isolate neural correlates of restraint, corresponding to a putative “NoGo” action, by contrasting neural activity on “go” and “nogo” trials (see e.g., [31, 32]). We will consider approximating the optimal policy with one that includes this second boundary in future work. References [1] R Ratcliff and P L Smith. Psychol. Rev., 111:333–346, 2004. [2] P Gomez, R Ratcliff, and M Perea. Journal of Experimental Psychology, 136(3):389–413, 2007. [3] M Usher and J L McClelland. Psychol. Rev., 108(3):550–592, 2001. [4] R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J.D. Cohen. Psychological Review, 113(4):700, 2006. [5] R.D. Luce. Number 8. Oxford University Press, USA, 1991. [6] F.C. Donders. Acta Psychologica, 30:412, 1969. [7] B. Gordon and A. Caramazza. Brain and Language, 15(1):143–160, 1982. [8] Y Hino and SJ Lupker. Journal of experimental psychology. Human perception and performance, 26:166–183, 2000. [9] M Perea, E Rosa, and C Gomez. Memory and Cognition, 30(34-45), 2002. [10] S. Thorpe, D. Fize, C. Marlot, and Others. Nature, 381(6582):520–522, 1996. [11] A. Delorme, G. Richard, and M. Fabre-Thorpe. Vision Research, 40(16):2187–2200, 2000. [12] ML Mack and TJ Palmeri. Journal of Vision, 10:1–11, 2010. [13] M.A. Sommer and R.H. Wurtz. J Neurophysiol., 85(4):1673–1685, 2001.
[14] RP Hasegawa, BW Peterson, and ME Goldberg. Neuron, 43(3):415–25, August 2004. [15] G Aston-Jones, J Rajkowski, and P Kubiak. J Neurosci., 14:4467–4480, 1994. [16] N. Bacon-Macé, H. Kirchner, M. Fabre-Thorpe, and S.J. Thorpe. J Exp. Psychol.: Human Perception and Performance, 33(5):1013, 2007. [17] A Wald. Dover publications, 1947. [18] A. Wald and J. Wolfowitz. The Annals of Mathematical Statistics, 19(3):326–339, 1948. [19] J.D. Roitman and M.N. Shadlen. J neurosci., 22(21):9475, 2002. [20] J.I. Gold and M.N. Shadlen. Neuron, 36(2):299–308, 2002. [21] M. Stone. Psychometrika, 25(3):251–260, 1960. [22] D.R.J. Laming. Academic Press, 1968. [23] R. Ratcliff. Psychological Review, 85(2):59, 1978. [24] J.I. Gold and M.N. Shadlen. Annu. Rev. Neurosci., 30:535–574, 2007. [25] D.P. Hanes and J.D. Schall. Science, 274(5286):427, 1996. [26] M.E. Mazurek, J.D. Roitman, J. Ditterich, and M.N. Shadlen. Cerebral cortex, 13(11):1257, 2003. [27] R Ratcliff, A Cherian, and M Segraves. Journal of neurophysiology, 90:1392–1407, 2003. [28] S Nieuwenhuis, N Yeung, W van den Wildenberg, and KR Ridderinkhof. Cognitive, affective & behavioral neuroscience, 3(1):17–26, March 2003. [29] P. Frazier and A.J. Yu. Advances in neural information processing systems, 20:465–472, 2008. [30] G. Sperling and B. Dosher. Handbook of perception and human performance., 1:2–1, 1986. [31] D.J. Simmonds, J.J. Pekar, and S.H. Mostofsky. Neuropsychologia, 46(1):224–232, 2008. [32] A.R. Aron, S. Durston, D.M. Eagle, G.D. Logan, C.M. Stinear, and V. Stuphorn. The Journal of Neuroscience, 27(44):11860–11864, 2007.
Cost-Sensitive Exploration in Bayesian Reinforcement Learning Dongho Kim Department of Engineering University of Cambridge, UK dk449@cam.ac.uk Kee-Eung Kim Dept of Computer Science KAIST, Korea kekim@cs.kaist.ac.kr Pascal Poupart School of Computer Science University of Waterloo, Canada ppoupart@cs.uwaterloo.ca Abstract In this paper, we consider Bayesian reinforcement learning (BRL) where actions incur costs in addition to rewards, and thus exploration has to be constrained in terms of the expected total cost while learning to maximize the expected long-term total reward. In order to formalize cost-sensitive exploration, we use the constrained Markov decision process (CMDP) as the model of the environment, in which we can naturally encode exploration requirements using the cost function. We extend BEETLE, a model-based BRL method, for learning in the environment with cost constraints. We demonstrate the cost-sensitive exploration behaviour in a number of simulated problems. 1 Introduction In reinforcement learning (RL), the agent interacts with a (partially) unknown environment, classically assumed to be a Markov decision process (MDP), with the goal of maximizing its expected long-term total reward. The agent faces the exploration-exploitation dilemma: the agent must select actions that exploit its current knowledge about the environment to maximize reward, but it also needs to select actions that explore for more information so that it can act better. Bayesian RL (BRL) [1, 2, 3, 4] provides a principled framework for the exploration-exploitation dilemma. However, exploratory actions may have serious consequences. For example, a robot exploring in an unfamiliar terrain may reach a dangerous location and sustain heavy damage, or wander off from the recharging station to the point where a costly rescue mission is required.
In a less mission-critical scenario, a route recommendation system that learns actual travel times should be aware of toll fees associated with different routes. Therefore, the agent needs to carefully (if not completely) avoid critical situations while exploring to gain more information. The constrained MDP (CMDP) extends the standard MDP to account for limited resources or multiple objectives [5]. The CMDP assumes that executing actions incurs costs and rewards that should be optimized separately. Assuming the expected total reward and cost criterion, the goal is to find an optimal policy that maximizes the expected total reward while bounding the expected total cost. Since we can naturally encode undesirable behaviors into the cost function, we formulate the cost-sensitive exploration problem as RL in an environment modeled as a CMDP. Note that we can employ other criteria for the cost constraint in CMDPs. We can keep the actual total cost below the cost bound with probability one using sample-path cost constraints [6, 7], or with probability 1 − δ using percentile cost constraints [8]. In this paper, we restrict ourselves to the expected total cost constraint mainly due to the computational efficiency in solving the constrained optimization problem. Extending our work to other cost criteria is left as future work. The main argument we make is that the CMDP provides a natural framework for representing various approaches to constrained exploration, such as safe exploration [9, 10]. In order to perform cost-sensitive exploration in the Bayesian RL (BRL) setting, we cast the problem as a constrained partially observable MDP (CPOMDP) [11, 12] planning problem. Specifically, we take a model-based BRL approach and extend BEETLE [4] to solve the CPOMDP which models BRL with cost constraints. 2 Background In this section, we review the background for cost-sensitive exploration in BRL.
As we explained in the previous section, we assume that the environment is modeled as a CMDP, and formulate model-based BRL as a CPOMDP. We briefly review the CMDP and CPOMDP before summarizing BEETLE, a model-based BRL method for environments without cost constraints. 2.1 Constrained MDPs (CMDPs) and Constrained POMDPs (CPOMDPs) The standard (infinite-horizon discounted return) MDP is defined by the tuple ⟨S, A, T, R, γ, b0⟩ where: S is the set of states s; A is the set of actions a; T(s, a, s′) is the transition function, which denotes the probability Pr(s′|s, a) of changing to state s′ from s by executing action a; R(s, a) ∈ ℝ is the reward function, which denotes the immediate reward of executing action a in state s; γ ∈ [0, 1) is the discount factor; b0(s) is the initial state probability for state s. b0 is optional, since an optimal policy π∗ : S → A that maps from states to actions can be shown not to depend on b0. The constrained MDP (CMDP) is defined by the tuple ⟨S, A, T, R, C, ˆc, γ, b0⟩ with the following additional components: C(s, a) ∈ ℝ is the cost function, which denotes the immediate cost incurred by executing action a in state s; ˆc is the bound on the expected total discounted cost. An optimal policy of a CMDP maximizes the expected total discounted reward over the infinite horizon, while not incurring more than ˆc total discounted cost in expectation. We can formalize this constrained optimization problem as: max_π V_π s.t. C_π ≤ ˆc, where V_π = E_{π,b0}[Σ_{t=0}^{∞} γ^t R(s_t, a_t)] is the expected total discounted reward, and C_π = E_{π,b0}[Σ_{t=0}^{∞} γ^t C(s_t, a_t)] is the expected total discounted cost. We will also use C_π(s) to denote the expected total cost starting from the state s. It has been shown that an optimal policy for a CMDP is generally a randomized stationary policy [5]. Hence, we define a policy π as a mapping of states to probability distributions over actions, where π(s, a) denotes the probability that an agent will execute action a in state s.
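Concretely, for a fixed randomized stationary policy π, both V_π and C_π can be computed by ordinary iterative policy evaluation, applied once with R and once with C. The sketch below is a minimal pure-Python illustration on a hypothetical two-state, two-action CMDP; all numbers are illustrative, not from the paper.

```python
# Evaluating V_pi and C_pi for a fixed randomized policy on a toy CMDP.
# Iterative policy evaluation converges because gamma < 1 makes the
# Bellman operator a contraction.

GAMMA = 0.9
S, A = [0, 1], [0, 1]
# T[s][a][s2] = Pr(s2 | s, a)
T = {0: {0: [0.9, 0.1], 1: [0.2, 0.8]},
     1: {0: [0.5, 0.5], 1: [0.1, 0.9]}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 2.0}}   # reward function R(s, a)
C = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 1.0}}   # cost function C(s, a)
pi = {0: [0.5, 0.5], 1: [0.3, 0.7]}              # pi[s][a] = Pr(a | s)

def evaluate(F, n_iter=2000):
    """Iteratively compute E_pi[sum_t gamma^t F(s_t, a_t)] per start state."""
    V = {s: 0.0 for s in S}
    for _ in range(n_iter):
        V = {s: sum(pi[s][a] * (F[s][a]
                    + GAMMA * sum(T[s][a][s2] * V[s2] for s2 in S))
                    for a in A)
             for s in S}
    return V

V_pi, C_pi = evaluate(R), evaluate(C)
b0 = [1.0, 0.0]
print(sum(b0[s] * V_pi[s] for s in S))   # expected total discounted reward
print(sum(b0[s] * C_pi[s] for s in S))   # expected total discounted cost
```

Checking C_π against ˆc this way is the feasibility test for one candidate policy; the LP of the next section instead searches over all randomized policies at once.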
We can find an optimal policy by solving the following linear program (LP): max_x Σ_{s,a} R(s, a) x(s, a) (1) s.t. Σ_a x(s′, a) − γ Σ_{s,a} x(s, a) T(s, a, s′) = b0(s′) ∀s′; Σ_{s,a} C(s, a) x(s, a) ≤ ˆc; and x(s, a) ≥ 0 ∀s, a. The variables x are related to the occupancy measure of the optimal policy, where x(s, a) is the expected discounted number of times action a is executed in state s. If the above LP yields a feasible solution, an optimal policy can be obtained by π(s, a) = x(s, a) / Σ_{a′} x(s, a′). Note that due to the introduction of cost constraints, the resulting optimal policy is contingent on the initial state distribution b0, in contrast to the standard MDP, for which an optimal policy can be independent of the initial state distribution. Note also that the above LP may be infeasible if there is no policy that can satisfy the cost constraint. The constrained POMDP (CPOMDP) extends the standard POMDP in a similar manner. The standard POMDP is defined by the tuple ⟨S, A, Z, T, O, R, γ, b0⟩ with the following additional components: the set Z of observations z, and the observation probability O(s′, a, z) representing the probability Pr(z|s′, a) of observing z when executing action a and changing to state s′. The states in the POMDP are hidden to the agent, and it has to act based on the observations instead.
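Returning to LP (1): the relation between its variables x(s, a) and policies can be checked numerically. The sketch below (a pure-Python illustration on a hypothetical two-state CMDP, numbers not from the paper) builds the discounted occupancy measure induced by a fixed randomized policy, verifies the LP's flow constraint, and recovers the policy via π(s, a) = x(s, a)/Σ_{a′} x(s, a′).

```python
# The LP variables x(s, a) as a discounted occupancy measure: build x for
# a fixed randomized policy by propagating the discounted state
# distribution, check the flow constraint of LP (1), and invert x back
# into the policy. All numbers are hypothetical.

GAMMA = 0.9
S, A = [0, 1], [0, 1]
T = {0: {0: [0.9, 0.1], 1: [0.2, 0.8]},
     1: {0: [0.5, 0.5], 1: [0.1, 0.9]}}
pi = {0: [0.5, 0.5], 1: [0.3, 0.7]}
b0 = [1.0, 0.0]

# x(s, a) = sum_t gamma^t Pr(s_t = s) pi(s, a); d carries gamma^t Pr(s_t = .)
x = {(s, a): 0.0 for s in S for a in A}
d = b0[:]
for _ in range(3000):          # gamma^3000 is numerically zero
    for s in S:
        for a in A:
            x[(s, a)] += d[s] * pi[s][a]
    d = [GAMMA * sum(d[s] * pi[s][a] * T[s][a][s2] for s in S for a in A)
         for s2 in S]

# Flow constraint: sum_a x(s2,a) - gamma * sum_{s,a} x(s,a) T(s,a,s2) = b0(s2)
for s2 in S:
    lhs = (sum(x[(s2, a)] for a in A)
           - GAMMA * sum(x[(s, a)] * T[s][a][s2] for s in S for a in A))
    print(s2, round(lhs, 8), b0[s2])

# Recover the policy from the occupancy measure.
pi_rec = {s: [x[(s, a)] / sum(x[(s, a2)] for a2 in A) for a in A] for s in S}
print(pi_rec)
```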
The CPOMDP 2 Algorithm 1: Point-based backup of α-vector pairs with admissible cost input : (b, d) with belief state b and admissible cost d; set Γ of α-vector pairs output: set Γ′ (b,d) of α-vector pairs (contains at most 2 pairs for a single cost function) // regress foreach a ∈A do αa,∗ R = R(·, a), αa,∗ C = C(·, a) foreach (αi,R, αi,C) ∈Γ, z ∈Z do αa,z i,R(s) = P s′ T(s, a, s′)O(s′, a, z)αi,R(s′) αa,z i,C(s) = P s′ T(s, a, s′)O(s′, a, z)αi,C(s′) // backup for each action foreach a ∈A do Solve the following LP to obtain best randomized action at the next time step: max ˜ wiz,dz b · X i,z ˜wizαa,z i,R subject to b · P i ˜wizαa,z i,C ≤dz ∀z P i ˜wiz = 1 ∀z ˜wiz ≥0 ∀i, z P z dz = 1 γ (d −C(b, a)) αa R = αa,∗ R + γ P i,z ˜wizαa,z i,R αa C = αa,∗ C + γ P i,z ˜wizαa,z i,C // find the best randomized action for the current time step Solve the following LP with : max wa b · X a waαa R subject to b · P a waαa C ≤d P a wa = 1 wa ≥0 ∀a return Γ′ (b,d) = {(αa R, αa C)|wa > 0} is defined by adding the cost function C and the cost bound ˆc into the definition as in the CMDP. Although the CPOMDP is intractable to solve as is the case with the POMDP, there exists an efficient point-based algorithm [12]. The Bellman backup operator for CPOMDP generates pairs of α-vectors (αR, αC), each vector corresponding to the expected total reward and cost, respectively. In order to facilitate defining the Bellman backup operator at a belief state, we augment the belief state with a scalar quantity called admissible cost [13], which represents the expected total cost that can be additionally incurred for the future time steps without violating the cost constraint. Suppose that, at time step t, the agent has so far incurred a total cost of Wt, i.e., Wt = Pt τ=0 γτC(sτ, aτ). The admissible cost at time step t + 1 is defined as dt = 1 γt+1 (ˆc −Wt). It can be computed recursively by the equation dt+1 = 1 γ (dt −C(st, at)), which can be derived from Wt = Wt−1 +γC(st, at), and d0 = ˆc. 
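The admissible-cost recursion above can be checked directly against its definition d_{t+1} = (ˆc − W_t)/γ^{t+1}, with W_t = Σ_{τ=0}^{t} γ^τ C(s_τ, a_τ). The sketch below does this on an arbitrary, hypothetical cost sequence (a minimal illustration, not the paper's code).

```python
# Admissible cost: starting from d_0 = c_hat, the recursion is
# d_{t+1} = (d_t - C(s_t, a_t)) / gamma. We verify it agrees with the
# direct definition d_{t+1} = (c_hat - W_t) / gamma^{t+1}, where W_t is
# the cumulative discounted cost. The cost sequence is hypothetical.

GAMMA = 0.95
c_hat = 10.0
costs = [1.0, 0.0, 2.0, 0.5, 1.0]        # C(s_t, a_t) along a sample path

# Recursive computation.
d = [c_hat]
for c in costs:
    d.append((d[-1] - c) / GAMMA)

# Direct computation from the cumulative discounted cost W_t.
W = 0.0
d_direct = [c_hat]
for t, c in enumerate(costs):
    W += GAMMA ** t * c                  # W_t = W_{t-1} + gamma^t * c_t
    d_direct.append((c_hat - W) / GAMMA ** (t + 1))

print(d)
print(d_direct)   # matches the recursion up to floating point
```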
Given a pair of belief state and admissible cost (b, d) and the set of α-vector pairs Γ = {(αi,R, αi,C)}, the best (randomized) action is obtained by solving the following LP: max wi b · X i wiαi,R subject to b · P i wiαi,C ≤d P i wi = 1 wi ≥0 ∀i where wi corresponds to the probability of choosing the action associated with the pair (αi,R, αi,C). The point-based backup for CPOMDP leveraging the above LP formulation is shown in Algorithm 1.1 1Note that this algorithm is an improvement over the heuristic distribution of the admissible cost to each observation by ratio Pr(z|b, a) in [12]. Instead, we optimize the cost distribution by solving an LP. 3 2.2 BEETLE BEETLE [4] is a model-based BRL algorithm, based on the idea that BRL can be formulated as a POMDP planning problem. Assuming that the environment is modeled as a discrete-state MDP P = ⟨S, A, T, R, γ⟩where the transition function T is unknown, we treat each transition probability T(s, a, s′) as an unknown parameter θs,s′ a and formulate BRL as a hyperstate POMDP ⟨SP , AP , ZP , TP , OP , RP , γ, b0⟩where SP = S × {θs,s′ a }, AP = A, ZP = S, TP (s, θ, a, s′, θ′) = θs,s′ a δθ(θ′), OP (s′, θ′, a, z) = δs′(z), and RP (s, θ, a) = R(s, a). In summary, the hyperstate POMDP augments the original state space with the set of unknown parameters {θs,s′ a }, since the agent has to take actions without exact information on the unknown parameters. The belief state b in the hyperstate POMDP yields the posterior of θ. Specifically, assuming a product of Dirichlets for the belief state such that b(θ) = Y s,a Dir(θs,∗ a ; ns,∗ a ) where θs,∗ a is the parameter vector of multinomial distribution defining the transition function for state s and action a, and ns,∗ a is the hyperparameter vector of the corresponding Dirichlet distribution. 
Since the hyperparameter n^{s,s′}_a can be viewed as a pseudocount, i.e., the number of times the transition (s, a, s′) has been observed, the updated belief after observing transition (ˆs, ˆa, ˆs′) is also a product of Dirichlets: b^{ˆs,ˆs′}_{ˆa}(θ) = ∏_{s,a} Dir(θ^{s,∗}_a ; n^{s,∗}_a + δ_{ˆs,ˆa,ˆs′}(s, a, s′)). Hence, belief states in the hyperstate POMDP can be represented by |S|^2|A| variables, one for each hyperparameter, and the belief update is efficiently performed by incrementing the hyperparameter corresponding to the observed transition. Solving the hyperstate POMDP is performed by dynamic programming with the Bellman backup operator [2]. Specifically, the value function is represented as a set Γ of α-functions for each state s, so that the value of the optimal policy is obtained by V^∗_s(b) = max_{α∈Γ} α_s(b) where α_s(b) = ∫ b(θ) α_s(θ) dθ. Using the fact that α-functions are multivariate polynomials of θ, we can obtain an exact solution to the Bellman backup. There are two computational challenges with the hyperstate POMDP approach. First, being a POMDP, the Bellman backup has to be performed on all possible belief states in the probability simplex. BEETLE adopts Perseus [14], performing randomized point-based backups confined to the set of sampled (s, b) pairs obtained by simulating a default or random policy, and reducing the total number of value backups by improving the value of many belief points through a single backup. Second, the number of monomial terms in the α-function increases exponentially with the number of backups. BEETLE chooses a fixed set of basis functions and projects the α-function onto a linear combination of these basis functions. The set of basis functions is chosen to be the set of monomials extracted from the sampled belief states. 3 Constrained BEETLE (CBEETLE) We take an approach similar to BEETLE for cost-sensitive exploration in BRL.
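As a concrete illustration of the conjugate belief update of Sec. 2.2 (a minimal sketch on a hypothetical 3-state, 2-action MDP, not one of the paper's domains): the belief over each transition row is a Dirichlet, and each observed transition just increments one pseudocount.

```python
import random

# Dirichlet belief over transition rows, updated by pseudocount increments.
# States/actions and the "true" dynamics row below are hypothetical.

S, A = [0, 1, 2], [0, 1]

# n[(s, a)] is the Dirichlet hyperparameter vector over next states;
# start from an uninformative prior of all ones.
n = {(s, a): [1.0] * len(S) for s in S for a in A}

def update(s, a, s_next):
    n[(s, a)][s_next] += 1.0             # the whole belief update

def posterior_mean(s, a):
    """Posterior mean estimate of Pr(s' | s, a)."""
    total = sum(n[(s, a)])
    return [v / total for v in n[(s, a)]]

# Simulate observed transitions from a fixed true dynamics row.
random.seed(0)
true_row = [0.7, 0.2, 0.1]               # Pr(s' | s=0, a=0)
for _ in range(5000):
    update(0, 0, random.choices(S, weights=true_row)[0])

print(posterior_mean(0, 0))              # approaches true_row as data grows
```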
Specifically, we formulate cost-sensitive BRL as a hyperstate CPOMDP ⟨SP , AP , ZP , TP , OP , RP , CP , ˆc, γ, b0⟩where SP = S × {θs,s′ a }, AP = A, ZP = S, TP (s, θ, a, s′, θ′) = θs,s′ a δθ(θ′), OP (s′, θ′, a, z) = δs′(z), RP (s, θ, a) = R(s, a), and CP (s, θ, a) = C(s, a). Note that using the cost function C and cost bound ˆc to encode the constraints on the exploration behaviour allows us to enjoy the same flexibility as using the reward function to define the task objective in the standard MDP and POMDP. Although, for the sake of exposition, we use a single cost function and discount factor in our definition of CMDP and CPOMDP, we can generalize the model to have multiple cost functions that capture different aspects of exploration behaviour that cannot be put together on the same scale, and different discount factors for rewards and costs. In addition, we can even completely eliminate the possibility of executing action a in state s by setting the discount factor to 1 for the cost constraint and impose a sufficiently low cost bound ˆc < C(s, a). 
4 Algorithm 2: Point-based backup of α-function pairs for the hyperstate CPOMDP2 input : (s, n, d) with state s, Dirichlet hyperparameter n representing belief state b, and admissible cost d; set Γs of α-function pairs for each state s output: set Γ′ (s,n,d) of α-function pairs (contains at most 2 pairs for a single cost function) // regress foreach a ∈A do αa,∗ R = R(s, a), αa,∗ C = C(s, a) // constant functions foreach s′ ∈S, (αi,R, αi,C) ∈Γs′ do αa,s′ i,R = θs,s′ a αi,R, αa,s′ i,C = θs,s′ a αi,C // multiplied by variable θs,s′ a // backup for each action foreach a ∈A do Solve the following LP to obtain best randomized action at the next time step: max ˜ wis′,dz X i,s′ ˜wis′αa,s′ i,R (b) subject to P i ˜wis′αa,s′ i,C (b) ≤ds′ ∀s′ P i ˜wis′ = 1 ∀s′ ˜wis′ ≥0 ∀i, s′ P z ds′ = 1 γ (d −C(s, a)) αa R = αa,∗ R + γ P i,s′ ˜wis′αa,s′ i,R , αa C = αa,∗ C + γ P i,s′ ˜wis′αa,s′ i,C // find the best randomized action for the current time step Solve the following LP with : max wa X a waαa R(b) subject to P a waαa C(b) ≤d P a wa = 1 wa ≥0 ∀a return Γ′ (s,n,d) = {(αa R, αa C)|wa > 0} We call our algorithm CBEETLE, which solves the hyperstate CPOMDP planning problem. As in BEETLE, α-vectors for the expected total reward and cost are represented as α-functions in terms of unknown parameters. The point-based backup operator in Algorithm 1 naturally extends to α-functions without significant increase in the computation complexity: the size of LP does not increase even though the belief states represent probability distributions over unknown parameters. Algorithm 2 shows the point-based backup of α-functions in the hyperstate CPOMDP. In addition, if we choose a fixed set of basis functions for representing α-functions, we can pre-compute the projections of α-functions ( ˜T, ˜R, and ˜C) in the same way as BEETLE. This technique is used in the point-based backup, although not explicitly described in the pseudocode due to the page limit. 
We also implemented the randomized point-based backup to further improve the performance. The key step in the randomized value update is to check whether a newly generated α-function pairs Γ = {(αi,R, αi,C)} from a point-based backup yields improved value at some other sampled belief state (s, n, d). We can obtain the value of Γ at the belief state by solving the following LP: max wi X i wiαi,R(b) subject to P i wiαi,C(b) ≤d P i wi = 1 wi ≥0 ∀i (2) If we can find an improved value, we skip the point-based backup at (s, n, d) in the current iteration. Algorithm 3 shows the randomized point-based value update. In summary, the point-based value iteration algorithm for CPOMDP and BEETLE readily provide all the essential computational tools to implement the hyperstate CPOMDP planning for the costsensitive BRL. 2The α-functions in the pseudocode are functions of θ and α(b) is defined to be R θ b(θ)α(θ)dθ as explained in Sec. 2.2. 5 Algorithm 3: Randomized point-based value update for the hyperstate CPOMDP input : set B of sampled belief points, and set Γs of α-function pairs for each state s output: set Γ′ s of α-function pairs (updated value function) // initialize ˜B = B // belief points needed to be improved foreach s ∈S do Γ′ s = ∅ // randomized backup while ˜B ̸= ∅do Sample ˜b = (˜s, ˜n, ˜d) ∈˜B Obtain Γ′ ˜b by point-based backup at ˜b with {Γs|∀s ∈S} (Algorithm 2) Γ′ ˜s = Γ′ ˜s ∪Γ′ ˜b foreach b ∈B do Calculate V ′(b) by solving the LP Eqn. 2 with Γ′ ˜b ˜B = {b ∈B : V ′(b) < V (b)} return {Γ′ s|∀s ∈S} (a) (b) Figure 1: (a) 5-state chain: each edge is labeled with action, reward, and cost associated with the transition. (b) 6 × 7 maze: a 6 × 7 grid including the start location with recharging station (S), goal location (G), and 3 flags to capture. 4 Experiments We used the constrained versions of two standard BRL problems to demonstrate the cost-sensitive exploration. The first one is the 5-state chain [15, 16, 4], and the second one is the 6 × 7 maze [16]. 
4.1 Description of Problems The 5-state chain problem is shown in Figure 1a, where the agent has two actions, 1 and 2. The agent receives a large reward of 10 by executing action 1 in state 5, or a small reward of 2 by executing action 2 in any state. With probability 0.2, the agent slips and makes the transition corresponding to the other action. We defined the constrained version of the problem by assigning a cost of 1 to action 1 in every state, so that consecutive executions of action 1 can potentially violate the cost constraint. The 6 × 7 maze problem is shown in Figure 1b, where the white cells are navigable locations and gray cells are walls that block navigation. There are 5 actions available to the agent: move left, right, up, down, or stay. Every “move” action (except for the stay action) can fail with probability 0.1, resulting in a slip to one of the two nearby cells perpendicular to the intended direction. If the agent bumps into a wall, the action has no effect. The goal of this problem is to capture as many flags as possible and reach the goal location. Upon reaching the goal, the agent obtains a reward equal to the number of flags captured, and gets warped back to the start location. Since there are 33 reachable locations in the maze and 8 possible combinations for the status of captured flags, there are a total of 264 states. We defined the constrained version of the problem by assuming that the agent is equipped with a battery and every action consumes energy except the stay action at the recharging station. We modeled the power consumption by assigning a cost of 0 for executing the stay action at the recharging station, and a cost of 1 otherwise. Thus, battery recharging is done by executing the stay action at the recharging station, as the admissible cost increases by a factor of 1/γ.3 4.2 Results Table 1 summarizes the experimental results for the constrained chain and maze problems.
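For intuition about the scale of the chain numbers in Table 1, the sketch below Monte-Carlo-evaluates the greedy "always execute action 1" policy. The exact transition structure is in Figure 1a, which we cannot see here; the dynamics assumed below (action 1 advances one state and loops at state 5, action 2 resets to state 1) are the standard chain of [15, 16] and are an assumption of this sketch, as is the start state.

```python
import random

# Monte Carlo evaluation of the "always action 1" policy on the
# constrained 5-state chain. ASSUMED dynamics: action 1 advances one
# state (looping at state 5, where it pays reward 10), action 2 resets
# to state 1 for reward 2; either action slips to the other with
# probability 0.2. Every execution of action 1 costs 1, as in the text.

GAMMA, SLIP = 0.99, 0.2

def step(s, a):
    if random.random() < SLIP:
        a = 3 - a                            # slip to the other action
    if a == 1:
        return min(s + 1, 5), (10.0 if s == 5 else 0.0)
    return 1, 2.0

def rollout(policy, horizon=1000):
    s, R_tot, C_tot = 1, 0.0, 0.0
    for t in range(horizon):
        a = policy(s)
        C_tot += GAMMA ** t * (1.0 if a == 1 else 0.0)   # cost of action 1
        s, r = step(s, a)
        R_tot += GAMMA ** t * r
    return R_tot, C_tot

random.seed(1)
n = 50
rs, cs = zip(*(rollout(lambda s: 1) for _ in range(n)))
print(sum(rs) / n)   # reward, roughly the unconstrained values in Table 1
print(sum(cs) / n)   # cost: sum_t 0.99^t over 1000 steps, just under 100
```

Under these assumptions the greedy policy saturates the loosest bound ˆc=100, which is why tightening ˆc forces the learned policies to mix in action 2 and give up reward.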
In the chain problem, we used two structural prior models, “tied” and “semi”, among the three priors experimented with in [4]. Both chain-tied and chain-semi assume that the transition dynamics are known to the agent except for the slip probabilities. In chain-tied, the slip probability is assumed to be independent of state and action, so there is only one unknown parameter in the transition dynamics. In chain-semi, the slip probability is assumed to be action dependent, so there are two unknown parameters since there are two actions. We used uninformative Dirichlet priors in both settings. We excluded experimenting with the “full” prior model (completely unknown transition dynamics) since even BEETLE was not able to learn a near-optimal policy, as reported in [4]. We report the average discounted total reward and cost as well as their 95% confidence intervals for the first 1000 time steps using 200 simulated trials. We performed 60 Bellman iterations on 500 belief states, and used the first 50 belief states for choosing the set of basis functions. The discount factor was set to 0.99. When ˆc=100, which is the maximum expected total cost that can be incurred by any policy, CBEETLE found policies that are as good as the policy found by BEETLE since the cost constraint has no effect. As we impose tighter cost constraints with ˆc=75, 50, and 25, the policies start to trade off reward in order to meet the cost constraint. Note also that, although we use approximations in the various stages of the algorithm, ˆc is within the confidence intervals of the average total cost, meaning that the cost constraint is either met or violated by statistically insignificant amounts. Since chain-semi has more unknown parameters than chain-tied, it is natural that the performance of the CBEETLE policy is slightly degraded in chain-semi. Note also that as we impose tighter cost constraints, the running times generally increase.
This is because the cost constraint in the LP tends to become active at more belief states, generating two α-function pairs instead of the single α-function pair generated when the cost constraint in the LP is not active. The results for the maze problem were calculated for the first 2000 time steps using 100 simulated trials. We performed 30 Bellman iterations on 2000 belief states, and used 50 basis functions. Due to the computational requirements of solving the large hyperstate CPOMDP, we only experimented with the “tied” prior model, which assumes that the slip probability is shared by every state and action. Running CBEETLE with ˆc = 1/(1 − 0.95) = 20 is equivalent to running BEETLE without cost constraints, as verified in the table. We further analyzed the cost-sensitive exploration behaviour in the maze problem. Figure 2 compares the policy behaviors of BEETLE and CBEETLE (ˆc=18) in the maze problem. The BEETLE policy generally captures the top flag first (Figure 2a), then navigates straight to the goal (Figure 2b) or captures the right flag and navigates to the goal (Figure 2c). If it captures the right flag first, it then navigates to the goal (Figure 2d) or captures the top flag and navigates to the goal (Figure 2e). We suspect that the third flag on the left is not captured because of the relatively low discount rate, its small contribution being lost to numerical approximations. The CBEETLE policy shows a similar capture behaviour, but it stays at the recharging station for a number of time steps between the first and second flag captures, which can be confirmed by the high state visitation frequency of the cell S in Figures 2g and 2i. This is because the policy cannot navigate to the other flag position and move to the goal without recharging the battery in between. The agent also frequently visits the recharging station before the first flag capture (Figure 2f) because it actively explores for the first flag under high uncertainty in the dynamics.
3It may seem odd that the battery recharges at an exponential rate. We can set γ = 1 and make the cost function assign, e.g., a cost of -1 for recharging and 1 for consuming, but our implementation currently assumes the same discount factor for the rewards and costs. Implementation for different discount factors is left as future work, but note that we can still obtain meaningful results with γ sufficiently close to 1.
Table 1: Experimental results for the chain and maze problems.
problem                algorithm  ˆc    utopic value  avg discounted  avg discounted  time (minutes)
                                                      total reward    total cost
chain-tied             BEETLE     −     354.77        351.11±8.42     −               1.0
(|S| = 5, |A| = 2)     CBEETLE    100   354.77        354.68±8.57     100.00±0        2.4
                       CBEETLE    75    325.75        287.70±8.17     75.05±0.14      2.4
                       CBEETLE    50    296.73        264.97±7.06     49.96±0.09      44.3
                       CBEETLE    25    238.95        212.19±4.98     25.12±0.13      80.59
chain-semi             BEETLE     −     354.77        351.11±8.42     −               1.6
(|S| = 5, |A| = 2)     CBEETLE    100   354.77        354.68±8.57     100.00±0        3.7
                       CBEETLE    75    325.75        287.64±8.16     75.05±0.14      3.8
                       CBEETLE    50    296.73        256.76±7.23     50.09±0.14      70.7
                       CBEETLE    25    238.95        204.84±4.51     25.01±0.16      139.3
maze-tied              BEETLE     −     1.03          1.02±0.02       −               159.8
(|S| = 264, |A| = 5)   CBEETLE    20    1.03          1.02±0.02       19.04±0.02      242.5
                       CBEETLE    18    0.97          0.93±0.04       17.96±0.46      733.1
Figure 2: State visitation frequencies of each location in the maze problem over 100 runs. Brightness is proportional to the relative visitation frequency. (a-e) Behavior of BEETLE (a) before the first flag capture, (b) after the top flag captured first, (c) after the top flag captured first and the right flag second, (d) after the right flag captured first, and (e) after the right flag captured first and the top flag second. (f-j) Behavior of CBEETLE (ˆc = 18). The yellow star represents the current location of the agent. 5 Conclusion In this paper, we proposed CBEETLE, a model-based BRL algorithm for cost-sensitive exploration, extending BEETLE to solve the hyperstate CPOMDP which models BRL using cost constraints.
We showed that cost-sensitive BRL can be effectively solved by randomized point-based value iteration for CPOMDPs. Experimental results show that CBEETLE can learn reasonably good policies for underlying CMDPs while exploring the unknown environment cost-sensitively. While our experiments show that the policies generally satisfy the cost constraints, they can still potentially violate the constraints since we approximate the α-functions using a finite number of basis functions. As future work, we plan to focus on making CBEETLE more robust to the approximation errors by performing a constrained optimization when approximating α-functions to guarantee that we never violate the cost constraints. Acknowledgments This work was supported by the National Research Foundation of Korea (Grant# 2012-007881), the Defense Acquisition Program Administration and Agency for Defense Development of Korea (Contract# UD080042AD), and the SW Computing R&D Program of KEIT (2011-10041313) funded by the Ministry of Knowledge Economy of Korea. References [1] R. Howard. Dynamic programming. MIT Press, 1960. [2] M. Duff. Optimal learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, University of Massachusetts, Amherst, 2002. [3] S. Ross, J. Pineau, B. Chaib-draa, and P. Kreitmann. A Bayesian approach for learning and planning in partially observable Markov decision processes. Journal of Machine Learning Research, 12, 2011. [4] P. Poupart, N. Vlassis, J. Hoey, and K. Regan. An analytic solution to discrete Bayesian reinforcement learning. In Proc. of ICML, 2006. [5] E. Altman. Constrained Markov Decision Processes. Chapman & Hall/CRC, 1999. [6] K. W. Ross and R. Varadarajan. Markov decision processes with sample path constraints: the communicating case. Operations Research, 37(5):780–790, 1989. [7] K. W. Ross and R. Varadarajan. Multichain Markov decision processes with a sample path constraint: a decomposition approach.
Mathematics of Operations Research, 16(1):195–207, 1991. [8] E. Delage and S. Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1), 2010. [9] A. Hans, D. Schneegaß, A. M. Schäfer, and S. Udluft. Safe exploration for reinforcement learning. In Proc. of 16th European Symposium on Artificial Neural Networks, 2008. [10] T. M. Moldovan and P. Abbeel. Safe exploration in Markov decision processes. In Proc. of NIPS Workshop on Bayesian Optimization, Experimental Design and Bandits, 2011. [11] J. D. Isom, S. P. Meyn, and R. D. Braatz. Piecewise linear dynamic programming for constrained POMDPs. In Proc. of AAAI, 2008. [12] D. Kim, J. Lee, K.-E. Kim, and P. Poupart. Point-based value iteration for constrained POMDPs. In Proc. of IJCAI, 2011. [13] A. B. Piunovskiy and X. Mao. Constrained Markovian decision processes: the dynamic programming approach. Operations Research Letters, 27(3):119–126, 2000. [14] M. T. J. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24, 2005. [15] R. Dearden, N. Friedman, and D. Andre. Bayesian Q-learning. In Proc. of AAAI, 1998. [16] M. Strens. A Bayesian framework for reinforcement learning. In Proc. of ICML, 2000.
MCMC for continuous-time discrete-state systems Vinayak Rao Gatsby Computational Neuroscience Unit University College London vrao@gatsby.ucl.ac.uk Yee Whye Teh Gatsby Computational Neuroscience Unit University College London ywteh@gatsby.ucl.ac.uk Abstract We propose a simple and novel framework for MCMC inference in continuous-time discrete-state systems with pure jump trajectories. We construct an exact MCMC sampler for such systems by alternately sampling a random discretization of time given a trajectory of the system, and then a new trajectory given the discretization. The first step can be performed efficiently using properties of the Poisson process, while the second step can avail of discrete-time MCMC techniques based on the forward-backward algorithm. We show the advantage of our approach compared to particle MCMC and a uniformization-based sampler. 1 Introduction There has been growing interest in the machine learning community in modeling dynamical systems in continuous time. Examples include point processes [1], Markov processes [2], structured Markov processes [3], infinite state Markov processes [4], semi-Markov processes [5], etc. However, a major impediment towards the more widespread use of these models is the problem of inference. A simple approach is to discretize time, and then run inference on the resulting approximation. This, however, has a number of drawbacks, not least of which is that we lose the advantages that motivated the use of continuous time in the first place. Time-discretization introduces a bias into our inferences, and to control this, one has to work at a time resolution that results in a very large number of discrete time steps. This can be computationally expensive. Our focus in this paper is on posterior sampling via Markov chain Monte Carlo (MCMC), and there is a huge literature on such techniques for discrete-time models [6].
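One such discrete-time technique, central to what follows, is forward-filtering backward-sampling (FFBS), which draws an exact posterior sample of a hidden Markov chain's state sequence. A minimal sketch on a small hidden Markov chain (all transition and emission numbers are hypothetical):

```python
import random

# Forward-filtering backward-sampling (FFBS) for a two-state HMM.
# Forward pass: normalized filtered marginals alpha[t].
# Backward pass: sample s_T from alpha[T], then s_t | s_{t+1} backwards.

random.seed(0)
S = [0, 1]
pi0 = [0.5, 0.5]
T = [[0.9, 0.1], [0.2, 0.8]]          # transition matrix
E = [[0.8, 0.2], [0.3, 0.7]]          # emission probabilities
obs = [0, 0, 1, 1, 1, 0]

alpha = []
for t, o in enumerate(obs):
    if t == 0:
        a = [pi0[s] * E[s][o] for s in S]
    else:
        a = [sum(alpha[-1][s0] * T[s0][s] for s0 in S) * E[s][o] for s in S]
    z = sum(a)
    alpha.append([v / z for v in a])   # normalize each filtering step

def categorical(p):
    u, c = random.random(), 0.0
    for i, pi_ in enumerate(p):
        c += pi_
        if u < c:
            return i
    return len(p) - 1

traj = [categorical(alpha[-1])]
for t in range(len(obs) - 2, -1, -1):
    w = [alpha[t][s] * T[s][traj[-1]] for s in S]
    z = sum(w)
    traj.append(categorical([v / z for v in w]))
traj.reverse()
print(traj)    # one exact posterior sample of the hidden state sequence
```

The sampler of this paper runs exactly this kind of routine, but on a randomly chosen discretization of continuous time rather than a fixed grid.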
Here, we construct an exact MCMC sampler for pure jump processes in continuous time, using a workhorse of the discrete-time domain, the forward-filtering backward-sampling algorithm [7, 8], to make efficient updates. The core of our approach is an auxiliary variable Gibbs sampler that repeats two steps. The first step runs the forward-backward algorithm on a random discretization of time to sample a new trajectory. The second step then resamples a new time-discretization given this trajectory. A random discretization allows a relatively coarse grid, while still keeping inferences unbiased. Such a coarse discretization allows us to apply the forward-backward algorithm to a Markov chain with relatively few time steps, resulting in computational savings. Even though the marginal distribution of the random time-discretization can be quite complicated, we show that conditioned on the system trajectory, it is just distributed as a Poisson process. While the forward-backward algorithm was developed originally for finite state hidden Markov models and linear Gaussian systems, it also forms the core of samplers for more complicated systems like nonlinear/non-Gaussian [9], infinite state [10], and non-Markovian [11] time series. Our ideas thus apply to essentially any pure jump process, so long as it makes only finite transitions over finite intervals. For concreteness, we focus on semi-Markov processes. We compare our sampler with two other continuous-time MCMC samplers, a particle MCMC sampler [12], and a uniformizationbased sampler [13]. The latter turns out to be a special case of ours, corresponding to a random time-discretization that is marginally distributed as a homogeneous Poisson process. 1 2 Semi-Markov processes A semi-Markov (jump) process (sMJP) is a right-continuous, piecewise-constant stochastic process on the nonnegative real-line taking values in some state space S [14, 15]. For simplicity, we assume S is finite, labelling its elements from 1 to N. 
We also assume the process is stationary. Then, the sMJP is parametrized by π0, an (arbitrary) initial distribution over states, as well as an N × N matrix of hazard functions, A_{ss′}(·) ∀s, s′ ∈ S. For any τ, A_{ss′}(τ) gives the rate of transitioning to state s′, τ time units after entering state s (we allow self-transitions, so s′ can equal s). Let this transition occur after a waiting time τ_{s′}. Then τ_{s′} is distributed according to the density r_{ss′}(·), related to A_{ss′}(·) as shown below (see e.g. [16]): r_{ss′}(τ_{s′}) = A_{ss′}(τ_{s′}) exp(−∫_0^{τ_{s′}} A_{ss′}(u) du), A_{ss′}(τ_{s′}) = r_{ss′}(τ_{s′}) / (1 − ∫_0^{τ_{s′}} r_{ss′}(u) du) (1) Sampling an sMJP trajectory proceeds as follows: on entering state s, sample waiting times τ_{s′} ∼ A_{ss′}(·) ∀s′ ∈ S. The sMJP enters a new state, s_new, corresponding to the smallest of these waiting times. Let this waiting time be τ_hold (so that τ_hold = τ_{s_new} = min_{s′} τ_{s′}). Then, advance the current time by τ_hold, and set the sMJP state to s_new. Repeat this procedure, now with the rate functions A_{s_new s′}(·) ∀s′ ∈ S. Define A_s(·) = Σ_{s′∈S} A_{ss′}(·). From the independence of the times τ_{s′}, equation 1 tells us that P(τ_hold > τ) = ∏_{s′∈S} P(τ_{s′} > τ) = exp(−∫_0^τ A_s(u) du), τ_hold ∼ r_s(τ) ≡ A_s(τ) exp(−∫_0^τ A_s(u) du) (2) Comparing with equation 1, we see that A_s(·) gives the rate of any transition out of state s. An equivalent characterization of many continuous-time processes is to first sample the waiting time τ_hold, and then draw a new state s′. For the sMJP, the latter probability is proportional to A_{ss′}(τ_hold). A special sMJP is the Markov jump process (MJP), where the hazard functions are constant (giving exponential waiting times). For an MJP, future behaviour is independent of the current waiting time. By allowing general waiting-time distributions, an sMJP can model memory effects like burstiness or refractoriness in the system dynamics.
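In the constant-hazard (MJP) special case, equation 2 reduces to the familiar race of exponential clocks: the minimum of independent exponentials with rates A_{ss′} is exponential with rate A_s = Σ_{s′} A_{ss′}, and the winner is s′ with probability A_{ss′}/A_s. A minimal simulation check (the rates below are arbitrary):

```python
import random

# Race-of-clocks construction for an MJP: competing exponential waiting
# times, one per candidate next state. Rates are hypothetical.

random.seed(0)
rates = {1: 0.5, 2: 1.0, 3: 1.5}        # A_ss' out of some fixed state s
A_s = sum(rates.values())               # total exit rate A_s = 3.0

def next_transition():
    waits = {s2: random.expovariate(r) for s2, r in rates.items()}
    s_new = min(waits, key=waits.get)   # smallest waiting time wins
    return s_new, waits[s_new]

n = 20000
wins = {s2: 0 for s2 in rates}
total_wait = 0.0
for _ in range(n):
    s_new, tau = next_transition()
    wins[s_new] += 1
    total_wait += tau

print({s2: wins[s2] / n for s2 in rates})   # ~ {1: 1/6, 2: 1/3, 3: 1/2}
print(total_wait / n)                        # ~ 1 / A_s = 1/3
```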
We represent an sMJP trajectory on an interval $[t_{start}, t_{end}]$ as $(S, T)$, where $T = (t_0, \cdots, t_{|T|})$ is the sequence of jump times (including the endpoints) and $S = (s_0, \cdots, s_{|S|})$ is the corresponding sequence of state values. Here $|S| = |T|$, and $s_{i+1} = s_i$ implies a self-transition at time $t_{i+1}$ (except at the end time $t_{|T|} = t_{end}$, which does not correspond to a jump). The filled circles in figure 1(c) represent $(S, T)$; since the process is right-continuous, $s_i$ gives the state after the jump at $t_i$.

2.1 Sampling by dependent thinning

We now describe an alternate thinning-based approach to sampling an sMJP trajectory. Our approach will produce candidate event times at a rate higher than the actual event rates in the system. To correct for this, we probabilistically reject (or thin) these events. Define $W$ as the sequence of actual event times $T$, together with the thinned event times (which we call $U$; these are the empty circles in figure 1(c)). $W = (w_0, \cdots, w_{|W|})$ forms a random discretization of time (with $|W| = |T| + |U|$); define $V = (v_0, \cdots, v_{|W|})$ as a sequence of state assignments to the times $W$. At any $w_i$, let $l_i$ represent the time since the last sMJP transition (so that $l_i = w_i - \max_{t \in T,\, t \le w_i} t$), and let $L = (l_1, \cdots, l_{|W|})$. Figures 1(b) and (c) show these quantities, as well as continuous-time processes $S(t)$ and $L(t)$ such that $l_i = L(w_i)$ and $s_i = S(w_i)$. $(V, L, W)$ forms an equivalent representation of $(S, T)$ that includes a redundant set of thinned events $U$. Note that if the $i$th event is thinned, $v_i = v_{i-1}$; however, this is not a self-transition. $L$ helps distinguish self-transitions (having associated $l$'s equal to 0) from thinned events. We explain the generative process of $(V, L, W)$ below; a proof of its correctness is included in the supplementary material. For each hazard function $A_s(\tau)$, define another dominating hazard function $B_s(\tau)$, so that $B_s(\tau) \ge A_s(\tau)\ \forall s, \tau$.
Suppose we have instantiated the system trajectory until time $w_i$, with the sMJP having just entered state $v_i \in S$ (so that $l_i = 0$). We sample the next candidate event time $w_{i+1}$, with $\Delta w_i = (w_{i+1} - w_i)$ drawn from the hazard function $B_{v_i}(\cdot)$. A larger rate implies faster events, so that $\Delta w_i$ will on average be smaller than a waiting time $\tau_{hold}$ drawn from $A_{v_i}(\cdot)$. We correct for this by treating $w_{i+1}$ as an actual event with probability $A_{v_i}(\Delta w_i + l_i)/B_{v_i}(\Delta w_i + l_i)$. If this is the case, we sample a new state $v_{i+1}$ with probability proportional to $A_{v_i v_{i+1}}(\Delta w_i + l_i)$, and set $l_{i+1} = 0$. On the other hand, if the event is rejected, we set $v_{i+1}$ to $v_i$, and $l_{i+1} = (\Delta w_i + l_i)$. We now sample $\Delta w_{i+1}$ (and thus $w_{i+2}$), such that $(\Delta w_{i+1} + l_{i+1}) \sim B_{v_{i+1}}(\cdot)$. More simply, we sample a new waiting time from $B_{v_{i+1}}(\cdot)$, conditioned on it being greater than $l_{i+1}$. Again, we accept this point with probability $A_{v_{i+1}}(\Delta w_{i+1} + l_{i+1})/B_{v_{i+1}}(\Delta w_{i+1} + l_{i+1})$, and repeat this process. Proposition 1 confirms that this generative process (summarized by the graphical model in figure 1(d), and algorithm 1) yields a trajectory from the sMJP. Figure 1(d) also depicts observations $X$ of the sMJP trajectory; we elaborate on this later.

Figure 1: a) Instantaneous hazard rates given a trajectory b) State holding times, L(t) c) sMJP state values S(t) d) Graphical model for the randomized time-discretization e) Resampling the sMJP trajectory. In b) and c), the filled and empty circles represent actual and thinned events respectively.

Proposition 1. The path $(V, L, W)$ returned by the thinning procedure described above is equivalent to a sample $(S, T)$ from the sMJP $(\pi_0, A)$.

Algorithm 1 State-dependent thinning for sMJPs
Input: Hazard functions $A_{ss'}(\cdot)\ \forall s, s' \in S$, and an initial distribution over states $\pi_0$. Dominating hazard functions $B_s(\tau) \ge A_s(\tau)\ \forall \tau, s$, where $A_s(\tau) = \sum_{s'} A_{ss'}(\tau)$.
Output: A piecewise constant path $(V, L, W) \equiv ((v_i, l_i, w_i))$ on the interval $[t_{start}, t_{end}]$.
1: Draw $v_0 \sim \pi_0$ and set $w_0 = t_{start}$. Set $l_0 = 0$ and $i = 0$.
2: while $w_i < t_{end}$ do
3: Sample $\tau_{hold} \sim B_{v_i}(\cdot)$, with $\tau_{hold} > l_i$. Let $\Delta w_i = \tau_{hold} - l_i$, and $w_{i+1} = w_i + \Delta w_i$.
4: with probability $A_{v_i}(\tau_{hold}) / B_{v_i}(\tau_{hold})$
5: Set $l_{i+1} = 0$, and sample $v_{i+1}$, with $P(v_{i+1} = s' \mid v_i) \propto A_{v_i s'}(\tau_{hold})$, $s' \in S$.
6: else
7: Set $l_{i+1} = l_i + \Delta w_i$, and $v_{i+1} = v_i$.
8: end
9: Increment $i$.
10: end while
11: Set $w_{|W|} = t_{end}$, $v_{|W|} = v_{|W|-1}$, $l_{|W|} = l_{|W|-1} + w_{|W|} - w_{|W|-1}$.

2.2 Posterior inference via MCMC

We now define an auxiliary variable Gibbs sampler, setting up a Markov chain that converges to the posterior distribution over the thinned representation $(V, L, W)$ given observations $X$ of the sMJP trajectory. The observations can lie in any space $\mathcal{X}$, and for any time-discretization $W$, let $x_i$ represent all observations in the interval $(w_i, w_{i+1})$. By construction, the sMJP stays in a single state $v_i$ over this interval; let $P(x_i \mid v_i)$ be the corresponding likelihood vector. Given a time discretization $W \equiv (U \cup T)$ and the observations $X$, we discard the old state labels $(V, L)$, and sample a new path $(\tilde{V}, \tilde{L}, W) \equiv (\tilde{S}, \tilde{T})$ using the forward-backward algorithm. We then discard the thinned events $\tilde{U}$, and given the path $(\tilde{S}, \tilde{T})$, resample new thinned events $U_{new}$, resulting in a new time discretization $W_{new} \equiv (\tilde{T} \cup U_{new})$. We describe both operations below.

Resampling the sMJP trajectory given the set of times $W$: Given $W$ (and thus all $\Delta w_i$), this involves assigning each element $w_i \in W$ a label $(v_i, l_i)$ (see figure 1(d)). Note that the system is Markov in the pair $(v_i, l_i)$, so that this step is a straightforward application of the forward-backward algorithm to the graphical model shown in figure 1(d). Observe from this figure that the joint distribution factorizes as:

$$P(V, L, W, X) = P(v_0, l_0) \prod_{i=0}^{|W|-1} P(x_i \mid v_i)\, P(\Delta w_i \mid v_i, l_i)\, P(v_{i+1}, l_{i+1} \mid v_i, l_i, \Delta w_i) \qquad (3)$$

From equation 2 (with $B$ instead of $A$), $P(\Delta w_i \mid v_i, l_i) = B_{v_i}(l_i + \Delta w_i)\, e^{-\int_{l_i}^{l_i + \Delta w_i} B_{v_i}(t)\,dt}$. The term $P(v_{i+1}, l_{i+1} \mid v_i, l_i, \Delta w_i)$ is the thinning/state-transition probability from steps 4 and 5 of algorithm 1.
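As an illustration, Algorithm 1 can be sketched in Python for the Weibull hazards used later in section 2.4, where $B_{ss'} = \Omega A_{ss'}$ makes the acceptance probability in step 4 exactly $1/\Omega$. This is our own sketch, not the paper's (Matlab) code: `cond_weibull` draws from a dominating clock conditioned to exceed $l_i$ by inverting the cumulative hazard, step 3 is realized by racing one conditioned clock per destination state, and step 11's clamping of the final event to $t_{end}$ is omitted.

```python
import numpy as np

def cond_weibull(a, lam, l, rng):
    # Invert the Weibull cumulative hazard (t/lam)**a to draw a waiting time
    # from the hazard (a/lam)*(t/lam)**(a-1), conditioned on exceeding l.
    return lam * ((l / lam) ** a + rng.exponential()) ** (1.0 / a)

def weibull_hazard(t, a, lam):
    return (a / lam) * (t / lam) ** (a - 1)

def thinning_sample(pi0, alphas, lams, Omega, t_end, rng):
    """Sketch of Algorithm 1 with Weibull hazards A_ss' and dominating hazards
    B_ss' = Omega * A_ss' (a Weibull with scale lams / Omega**(1/alphas)).
    Returns the thinned representation (V, L, W)."""
    n = len(pi0)
    lam_dom = lams / Omega ** (1.0 / alphas)
    V, L, W = [int(rng.choice(n, p=pi0))], [0.0], [0.0]
    while W[-1] < t_end:
        vi, li = V[-1], L[-1]
        # Step 3: tau_hold ~ B_vi conditioned on > li, via competing clocks.
        tau = min(cond_weibull(alphas[vi, sp], lam_dom[vi, sp], li, rng)
                  for sp in range(n))
        W.append(W[-1] + (tau - li))
        if rng.random() < 1.0 / Omega:   # steps 4-5: A_vi(tau)/B_vi(tau) = 1/Omega
            rates = np.array([weibull_hazard(tau, alphas[vi, sp], lams[vi, sp])
                              for sp in range(n)])
            V.append(int(rng.choice(n, p=rates / rates.sum())))
            L.append(0.0)
        else:                            # step 7: thinned event
            V.append(vi)
            L.append(tau)
    return V, L, W
```

Racing per-state clocks each conditioned to exceed $l_i$ is valid because, by independence, the minimum of the conditioned clocks has the law of $\tau_{hold}$ conditioned on $\tau_{hold} > l_i$.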
The forward-filtering stage then moves sequentially through the times in $W$, successively calculating the probabilities $P(v_i, l_i, w_{1:i+1}, x_{1:i})$ using the recursion:

$$P(v_i, l_i, w_{1:i+1}, x_{1:i}) = P(x_i \mid v_i)\, P(w_{i+1} \mid v_i, l_i) \sum_{v_{i-1}, l_{i-1}} P(v_i, l_i \mid v_{i-1}, l_{i-1}, \Delta w_{i-1})\, P(v_{i-1}, l_{i-1}, w_{1:i}, x_{1:i-1})$$

The backward sampling stage then returns a new trajectory $(\tilde{V}, \tilde{L}, W) \equiv (\tilde{S}, \tilde{T})$. See figure 1(e). Observe that $l_i$ can take $(i + 1)$ values (in the set $\{0, w_i - w_{i-1}, \cdots, w_i - w_0\}$), with the value of $l_i$ affecting $P(v_{i+1}, l_{i+1} \mid v_i, l_i, \Delta w_i)$. Thus, the forward-backward algorithm for a general sMJP scales quadratically with $|W|$. We can however use ideas from discrete-time MCMC to reduce this cost (e.g. [11] uses a slice sampler to limit the maximum holding time of a state, and thus limit $l_i$).

Resampling the thinned events given the sMJP trajectory: Having obtained a new sMJP trajectory $(V, L, W)$, we discard all thinned events $U$, so that the current state of the sampler is now $(S, T)$. We then resample the thinned events $\tilde{U}$, recovering a new thinned representation $(\tilde{V}, \tilde{L}, \tilde{W})$, and with it, a new discretization of time. To simplify notation, we define the instantaneous hazard functions $A(t)$ and $B(t)$ (see figure 1(a)):

$$A(t) = A_{S(t)}(L(t)), \qquad B(t) = B_{S(t)}(L(t)) \qquad (4)$$

These were the event rates relevant at any time $t$ during the generative process. Note that the sMJP trajectory completely determines these quantities. The events $W$ (whether thinned or not) were generated from a rate $B(\cdot)$ process, while the probability that an event $w_i$ was thinned is $1 - A(w_i)/B(w_i)$. The Poisson thinning theorem [17] then suggests that the thinned events $U$ are distributed as a Poisson process with intensity $(B(t) - A(t))$. The following proposition (see the supplementary material for a proof) shows that this is indeed the case.

Proposition 2. Conditioned on a trajectory $(S, T)$ of the sMJP, the thinned events $U$ are distributed as a Poisson process with intensity $(B(t) - A(t))$.
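The recursion above specializes the standard discrete-time forward-filtering backward-sampling routine. As a reminder of that workhorse, here is a generic sketch for a finite-state chain (our illustration with illustrative names, over plain states rather than the $(v_i, l_i)$ pairs; the quadratic-in-$|W|$ holding-time bookkeeping is not shown):

```python
import numpy as np

def ffbs(pi0, trans, liks, rng):
    """Forward-filtering backward-sampling for a finite-state chain.
    pi0: (K,) initial distribution; trans: (T-1, K, K) transition matrices
    with trans[t][i, j] = P(z_{t+1}=j | z_t=i); liks: (T, K) likelihoods."""
    T, K = liks.shape
    alpha = np.empty((T, K))
    alpha[0] = pi0 * liks[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                     # forward filtering
        alpha[t] = (alpha[t - 1] @ trans[t - 1]) * liks[t]
        alpha[t] /= alpha[t].sum()            # normalize for stability
    z = np.empty(T, dtype=int)
    z[-1] = rng.choice(K, p=alpha[-1])
    for t in range(T - 2, -1, -1):            # backward sampling
        w = alpha[t] * trans[t][:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z
```

With deterministic transitions and likelihoods that pin down one state, the sampled path is forced to that state, which gives a quick sanity check of the implementation.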
Observe that this is independent of the observations $X$. We show in section 2.4 how sampling from such a Poisson process is straightforward for appropriately chosen bounding rates $B_s$.

2.3 Related work

An increasingly popular approach to inference in continuous-time systems is particle MCMC (pMCMC) [12]. At a high level, this uses particle filtering to generate a continuous-time trajectory, which then serves as a proposal for a Metropolis-Hastings (MH) algorithm. Particle filtering however cannot propagate back information from future observations, and pMCMC methods can have difficulty in situations where strong observations cause the posterior to deviate from the prior. Recently, [13] proposed a sampler for MJPs that is a special case of ours. This was derived via a classical idea called uniformization, and constructs the time discretization $W$ from a homogeneous Poisson process. Our sampler reduces to this when a constant dominating rate $B > \max_{s,\tau} A_s(\tau)$ is used to bound all event rates. However, such a 'uniformizing' rate does not always exist (we will discuss two such systems with unbounded rates). Moreover, with a single rate $B$, the average number of candidate events $|W|$ (and thus the computational cost of the algorithm) scales with the leaving rate of the most unstable state. Since this state is often the one that the system will spend the least amount of time in, such a strategy can be wasteful. Under our sampler, the distribution of $W$ is not a Poisson process. Instead, event rates are coupled via the sMJP state. This allows our sampler to adapt the granularity of time-discretization to that required by the posterior trajectories; moreover, this granularity can vary over the time interval. There exists other work on continuous-time models based on the idea of a random discretization of time [18, 1].
Like uniformization, these are all limited to specific continuous-time models with specific thinning constructions, and are not formulated in as general a manner as we have done. Moreover, none of these exploit the ability to efficiently resample the time-discretization from a Poisson process, or a new trajectory using the forward-backward algorithm.

2.4 Experiments

In this section, we evaluate our sampler on a 3-state sMJP with Weibull hazard rates. Here

$$r_{ss'}(\tau \mid \alpha_{ss'}, \lambda_{ss'}) = e^{-(\tau/\lambda_{ss'})^{\alpha_{ss'}}}\, \frac{\alpha_{ss'}}{\lambda_{ss'}} \left(\frac{\tau}{\lambda_{ss'}}\right)^{\alpha_{ss'}-1}, \qquad A_{ss'}(\tau \mid \alpha_{ss'}, \lambda_{ss'}) = \frac{\alpha_{ss'}}{\lambda_{ss'}} \left(\frac{\tau}{\lambda_{ss'}}\right)^{\alpha_{ss'}-1}$$

where $\lambda_{ss'}$ is the scale parameter, and the shape parameter $\alpha_{ss'}$ controls the stability of a state $s$. When $\alpha_{ss'} < 1$, on entering state $s$, the system is likely to quickly jump to state $s'$. By contrast, $\alpha_{ss'} > 1$ gives a 'recovery' period before transitions to $s'$. Note that for $\alpha_{ss'} < 1$, the hazard function tends to infinity as $\tau \to 0$. Now, choose an $\Omega > 1$. We use the following simple upper bound $B_{ss'}(\tau)$:

$$B_{ss'}(\tau) = \Omega A_{ss'}(\tau \mid \alpha_{ss'}, \lambda_{ss'}) = \Omega\, \frac{\alpha_{ss'}}{\lambda_{ss'}} \left(\frac{\tau}{\lambda_{ss'}}\right)^{\alpha_{ss'}-1} = \frac{\alpha_{ss'}}{\tilde\lambda_{ss'}} \left(\frac{\tau}{\tilde\lambda_{ss'}}\right)^{\alpha_{ss'}-1} \qquad (5)$$

Here, $\tilde\lambda = \lambda / \sqrt[\alpha]{\Omega}$ for any $\lambda$ and $\alpha$. Thus, sampling from the dominating hazard function $B_{ss'}(\cdot)$ reduces to straightforward sampling from a Weibull with a smaller scale parameter $\tilde\lambda_{ss'}$. Note from algorithm 1 that with this construction of the dominating rates, each candidate event is rejected with probability $1 - \frac{1}{\Omega}$; this can be a guide to choosing $\Omega$. In our experiments, we set $\Omega$ equal to 2. Sampling thinned events on an interval $(t_i, t_{i+1})$ (where the sMJP is in state $s_i$) involves sampling from a Poisson process with intensity $(B(t) - A(t)) = (\Omega - 1)A(t) = (\Omega - 1)\sum_{s'} A_{s_i s'}(t - t_i)$. This is just the superposition of $N$ independent and shifted Poisson processes on $(0, t_{i+1} - t_i)$, the $n$th having intensity $(\Omega - 1)A_{s_i n}(\cdot) \equiv \hat{A}_{s_i n}(\cdot)$. As before, $\hat{A}(\cdot)$ is a Weibull hazard function obtained by correcting the scale parameter $\lambda$ of $A(\cdot)$ by $\sqrt[\alpha]{\Omega - 1}$.
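A sketch of this resampling for a single Weibull hazard component (our illustration; with $N$ destination states one would draw one such sequence per $s'$, take their union, and shift by $t_i$): the event count on a segment of length $\Delta t$ is Poisson with mean equal to the cumulative intensity $(\Omega-1)(\Delta t/\lambda)^\alpha$, and the normalized event density is proportional to $t^{\alpha-1}$, whose inverse CDF is $\Delta t\, u^{1/\alpha}$.

```python
import numpy as np

def sample_thinned_segment(dt, alpha, lam, Omega, rng):
    """Thinned events on one holding segment of length dt, for a single
    Weibull hazard component: a Poisson process with intensity
    (Omega - 1) * (alpha/lam) * (t/lam)**(alpha - 1) on (0, dt)."""
    mean = (Omega - 1.0) * (dt / lam) ** alpha   # cumulative intensity
    n = rng.poisson(mean)                        # number of thinned events
    # n i.i.d. draws with density proportional to t**(alpha-1), truncated at dt
    return np.sort(dt * rng.random(n) ** (1.0 / alpha))
```

For $\alpha = 1$ (constant hazard) this reduces to a homogeneous Poisson process on the segment, with events placed uniformly.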
A simple way to sample such a Poisson process is by first drawing the number of events from a Poisson distribution with mean $\int_0^{t_{i+1}-t_i} \hat{A}_{s_i n}(u)\,du$, and then drawing that many events i.i.d. from $\hat{A}_{s_i n}$ truncated at $(t_{i+1} - t_i)$. Solving the integral for the Poisson mean is straightforward for the Weibull. Call the resulting Poisson sequence $\tilde{T}_n$, and define $\tilde{T} = \cup_{n \in S} \tilde{T}_n$. Then $W_i \equiv \tilde{T} + t_i$ is the set of resampled thinned events on the interval $(t_i, t_{i+1})$. We repeat this over each segment $(t_i, t_{i+1})$ of the sMJP path.

In the following experiments, the shape parameters for each Weibull hazard ($\alpha_{ss'}$) were randomly drawn from the interval $[0.6, 3]$, while the scale parameter was always set to 1. $\pi_0$ was set to the discrete uniform distribution. The unbounded hazards associated with $\alpha_{ss'} < 1$ meant that uniformization is not applicable to this problem, and we only compared our sampler with pMCMC. We implemented both samplers in Matlab. Our MCMC sampler was set up with $\Omega = 2$, so that the dominating hazard rate at any instant equalled twice the true hazard rate (i.e. $B_{ss'}(\tau) = 2A_{ss'}(\tau)$), giving a probability of thinning equal to 0.5. For pMCMC, we implemented the particle independent Metropolis-Hastings sampler from [12]. We tried different values for the number of particles; for our problems, we found 10 gave the best results. All MCMC runs consisted of 5000 iterations following a burn-in period of 1000.

Figure 2: ESS per unit time vs the inverse-temperature of the likelihood, when the trajectories are over an interval of length 20 (left) and 2 (right).

After any MCMC run, given a sequence of piecewise constant trajectories, we calculated the empirical distribution of
the time spent in each state as well as the number of state transitions. We then used R-coda [19] to estimate effective sample sizes (ESS) for these quantities. The ESS of the simulation was set to the median ESS of all these statistics.

Effect of the observations: For our first experiment, we distributed 10 observations over an interval of length 20. Each observation favoured a particular, random state over the other two states by a factor of 100, giving random likelihood vectors like $(1, 100, 1)^\top$. We then raised the likelihood vector $P(x_i \mid \cdot)$ to an 'inverse-temperature' $\nu$, so that the effective likelihood at the $i$th observation was $(P(x_i \mid s_i))^\nu$. As this parameter varied from 0 to 1, the problem moved from sampling from the prior to a situation where the trajectory was observed (almost) perfectly at 10 random times. The left plot in figure 2 shows the ESS produced per unit time by both samplers as the inverse-temperature increased, averaging results from 10 random parametrizations of the sMJP. We see, as one might expect, that when the effect of the observations is weak, particle MCMC (which uses the prior distribution to make local proposals) outperforms our thinning-based sampler. pMCMC also has the benefit of being simpler implementation-wise, and is about 2-3 times faster (in terms of raw computation time) for a Weibull sMJP than our sampler.

Figure 3: ESS per second for increasing interval lengths. Temperature decreases from the left to right subplots.
As the effect of the likelihood increases, pMCMC starts to have more and more difficulty tracking the observations. By contrast, our sampler is fairly insensitive to the effect of the likelihood, eventually outperforming the particle MCMC sampler. While there exist techniques to generate more data-driven proposals for particle MCMC [12, 20], these compromise the appealing simplicity of the original particle MCMC sampler. Moreover, none of these really have the ability to propagate information back from the future (like the forward-backward algorithm); rather, they make more and more local moves (for instance, by updating the sMJP trajectory on smaller and smaller subsets of the observation interval). The right plot in figure 2 shows the ESS per unit time for both samplers, now with the observation interval set to a smaller length of 2. Here, our sampler comprehensively outperforms pMCMC. There are two reasons for this. First, more observations per unit time require rapid switching between states, a deviation from the prior that particle filtering is unlikely to propose. Additionally, over short intervals, the quadratic cost of the forward-backward step of our algorithm is less pronounced.

Effect of the observation interval length: In the next experiment, we more carefully compare the two samplers as the interval length varies. For three settings of the inverse temperature parameter (0.1, 0.5 and 0.9), we calculated the number of effective samples produced per unit time as the length of the observation interval increased from 2 to 50. Once again, we averaged results from 10 random settings of the sMJP parameters. Figure 3 shows the results for the low, medium and high settings of the inverse temperature. Again, we clearly see the benefit of the forward-backward algorithm, especially in the low temperature and short interval regimes where the posterior deviates from the prior.
Of course, the performance of our sampler can be improved further using ideas from the discrete-time domain; these can help ameliorate the effect of the quadratic cost for long intervals.

Figure 4: Effect of increasing the leaving rate of a state. Temperature decreases from the left to right plots.

3 Markov jump processes

In this section, we look at the Markov jump process (MJP), which we saw has constant hazard functions $A_{ss'}$. MJPs are also defined to disallow self-transitions, so that $A_{ss} = 0\ \forall s \in S$. If we use constant dominating hazard rates $B_s$, we see from algorithm 1 that all probabilities at time $w_i$ depend only on the current state $s_i$, and are independent of the holding time $l_i$. Thus, we no longer need to represent the holding times $L$. The forward message at time $w_i$ needs only to represent the probability of $v_i$ taking different values in $S$; this completely specifies the state of the MJP. As a result, the cost of a forward-backward iteration is now linear in $|W|$. In the next experiment, we compare Matlab implementations of our thinning-based sampler and the particle MCMC sampler with the uniformization-based sampler described in section 2.3. Recall that the latter samples candidate event times $W$ from a homogeneous Poisson process with a state-independent rate $B > \max_s A_s$. Following [13], we set $B = 2 \max_s A_s$. As in section 2.4, we set $\Omega = 2$ for our sampler, so that $B_s = 2A_s\ \forall s$. pMCMC was run with 20 particles. Observe that for uniformization, the rate $B$ is determined by the leaving rate of the most unstable state; often this is the state the system spends the least time in.
To study this, we applied all three samplers to a 3-state MJP, two of whose states had leaving rates equal to 1. The leaving rate of the third state was varied from 1 to 20 (call this rate $\gamma$). On leaving any state, the probability of transitioning to either of the other two was uniformly distributed between 0 and 1. This way, we constructed 10 random MJPs for each $\gamma$. We distributed 5 observation times (again, favouring a random state by a factor of 100) over the interval $[0, 10]$. Like section 2.4, we looked at the ESS per unit time for 3 settings of the inverse temperature parameter $\nu$, now as we varied $\gamma$. Figure 4 shows the results. The pMCMC sampler clearly performs worse than the other two. The Markov structure of the MJP makes the forward-backward algorithm very natural and efficient; by contrast, running a particle filter with 20 particles took about twice as long as our sampler. Further, we see that while both uniformization and our sampler perform comparably for low values of $\gamma$, our sampler starts to outperform uniformization for $\gamma$'s greater than 2. In fact, for weak observations and large $\gamma$'s, even particle MCMC outperforms uniformization. As we mentioned earlier, this is because for uniformization, the granularity of time-discretization is determined by the least stable state, resulting in very long Markov chains for large values of $\gamma$.

3.1 The M/M/∞ queue

We finally apply our ideas to an infinite-state MJP from queuing theory, the M/M/∞ queue (also called an immigration-death process [21]). Here, individuals (customers, messages, jobs etc.) enter a population according to a homogeneous Poisson process with rate $\alpha$, independent of the population size. The lifespan of each individual (or the job 'service time') is exponentially distributed with rate $\beta$, so that the rate at which a 'death' occurs in the population is proportional to the population size. Let $S(t)$ represent the population size (or the number of 'busy servers') at time $t$.
Then, under the M/M/∞ queue, the stochastic process $S(t)$ evolves according to a simple birth-death Markov jump process on the space $S = \{1, \cdots, \infty\}$, with rates $A_{s,s+1} = \alpha$ and $A_{s,s-1} = s\beta$. All other rates are 0. Observe that since the population size of the M/M/∞ queue is unbounded, we cannot upper bound the event rates in the system. Thus, uniformization is not directly applicable to this system. Instead, we have to truncate the maximum value of $S(t)$ to some constant, say $c$. This is the so-called M/M/c/c queue; now, when all $c$ servers are busy, any incoming jobs are rejected. In the following, we considered an M/M/∞ queue with $\alpha$ and $\beta$ set to 10 and 1 respectively. For some $t_{end}$, the state of the system was observed perfectly at three times $0$, $t_{end}/10$ and $t_{end}$, with values 10, 2 and 15 respectively. Conditioned on these, we sought the posterior distribution over the system trajectory on the interval $[0, t_{end}]$. Since the state of the system at time 0 is perfectly observed to be 10, given any time-discretization, the maximum value of $s_i$ at step $i$ of the Markov chain is $(10 + i)$. Thus, message dimensions are always finite, and we can directly apply the forward-backward algorithm. For noisy observations, we can use a slice sampler [22]. We compared our sampler with uniformization; for this, we approximated the M/M/∞ system with an M/M/50/50 system. We also applied our sampler to this truncated approximation, labelling it as 'Thinning (trunc)'. For both these samplers, the message dimensions were 50. The large state spaces involved make pMCMC very inefficient, and we did not include it in our results.

Figure 5: The M/M/∞ queue: a) ESS per unit time b) ESS per unit time scaled by interval length.
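For intuition about these birth-death dynamics, a standard Gillespie simulation of the M/M/∞ queue is easy to write (illustrative code, not one of the paper's samplers). Its stationary distribution is Poisson(α/β), so a long run's time-averaged population should hover near α/β.

```python
import numpy as np

def simulate_mm_inf(alpha, beta, t_end, s0, rng):
    """Gillespie simulation of the M/M/inf queue: births at constant rate
    alpha, deaths at rate s*beta when the population is s."""
    t, s = 0.0, s0
    times, states = [0.0], [s0]
    while True:
        total = alpha + s * beta          # total event rate in state s
        dt = rng.exponential(1.0 / total)
        if t + dt > t_end:
            break
        t += dt
        s += 1 if rng.random() < alpha / total else -1
        times.append(t)
        states.append(s)
    return times, states
```

Note that when $s = 0$ the death rate vanishes, so the next event is a birth with probability one and the population never goes negative.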
Figure 5(a) shows the ESS per unit time for all three samplers as we varied the interval length $t_{end}$ from 1 to 20. Sampling a trajectory over a long interval will take more time than over a short one; to more clearly distinguish performance for large values of $t_{end}$, we scale each ESS from the left plot by $t_{end}$, the length of the interval, in the right subplot of figure 5. We see our sampler always outperforms uniformization, with the difference particularly significant for short intervals. Interestingly, running our thinning-based sampler on the truncated system offers no significant computational benefit over running it on the full model. As the observation interval becomes longer and longer, the MJP trajectory can make larger and larger excursions (especially over the interval $[t_{end}/10, t_{end}]$). Thus, as $t_{end}$ increases, the event rates witnessed in posterior trajectories start to increase. As our sampler adapts to this, the number of thinned events in all three samplers starts to become comparable, causing the uniformization-based sampler to approach the performance of the other two samplers. At the same time, we see that the difference between our truncated and our untruncated sampler starts to widen. Of course, we should remember that over long intervals, truncating the system size to 50 becomes more likely to introduce biases into our inferences.

4 Discussion

We described a general framework for MCMC inference in continuous-time discrete-state systems. Each MCMC iteration first samples a random discretization of time given the trajectory of the system. Given this, we then resample the sMJP trajectory using the forward-backward algorithm. While we looked only at semi-Markov and Markov jump processes, it is easy to extend our approach to piecewise-constant stochastic processes with more complicated dependency structures.
For our sampler, a bottleneck in the rate of mixing is that the new and old trajectories share an intermediate discretization $W$ (see figure 1(e)). Recall that an sMJP trajectory defines an instantaneous hazard function $B(t)$; our scheme requires that the discretization sampled from the old hazard function be compatible with the new hazard function. Thus, the forward-backward algorithm is unlikely to return a trajectory associated with a hazard function that differs significantly from the old one. By contrast, for uniformization, the hazard function is a constant $B$, independent of the system state. However, this comes at the cost of a conservatively high discretization of time. An interesting direction for future work is to see how different choices of the dominating hazard function can help trade off these factors. For instance, we proposed using a single $\Omega$, with $B_s(\cdot) = \Omega A_s(\cdot)$. It is possible to use a different $\Omega_s$ for each state $s$, or even an $\Omega_s(\cdot)$ that varies with time. Similarly, one can consider additive (rather than multiplicative) constructions of $B_s(\cdot)$. For general sMJPs, the forward-backward algorithm scales quadratically with $|W|$, the number of candidate jump times. Such scaling is characteristic of sMJPs, though we can avail of discrete-time MCMC techniques to ameliorate this. For sMJPs whose hazard functions are constant beyond a 'window of memory', inference scales quadratically with the memory length, and only linearly with $|W|$. One can use such approximations to devise efficient MH proposals for sMJP trajectories.

References
[1] Ryan P. Adams, Iain Murray, and David J. C. MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th International Conference on Machine Learning (ICML), 2009.
[2] Y. W. Teh, C. Blundell, and L. T. Elliott. Modelling genetic variations with fragmentation-coagulation processes. In Advances in Neural Information Processing Systems, 2011.
[3] U.
Nodelman, C. R. Shelton, and D. Koller. Continuous time Bayesian networks. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 378–387, 2002.
[4] Ardavan Saeedi and Alexandre Bouchard-Côté. Priors over Recurrent Continuous Time Processes. In Advances in Neural Information Processing Systems 24 (NIPS), volume 24, 2011.
[5] Matthias Hoffman, Hendrik Kueck, Nando de Freitas, and Arnaud Doucet. New inference strategies for solving Markov decision processes using reversible jump MCMC. In Proceedings of the Twenty-Fifth Annual Conference on Uncertainty in Artificial Intelligence (UAI-09), pages 223–231, Corvallis, Oregon, 2009. AUAI Press.
[6] A. Doucet, N. de Freitas, and N. J. Gordon. Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science. New York: Springer-Verlag, May 2001.
[7] S. Frühwirth-Schnatter. Data augmentation and dynamic linear models. J. Time Ser. Anal., 15:183–202, 1994.
[8] C. K. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83:589–601, 1996.
[9] Radford M. Neal, Matthew J. Beal, and Sam T. Roweis. Inferring state sequences for non-linear systems with embedded hidden Markov models. In Advances in Neural Information Processing Systems 16 (NIPS), volume 16, pages 401–408. MIT Press, 2004.
[10] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In Proceedings of the International Conference on Machine Learning, volume 25, 2008.
[11] M. Dewar, C. Wiggins, and F. Wood. Inference in hidden Markov models with explicit state duration distributions. IEEE Signal Processing Letters, to appear, 2012.
[12] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society Series B, 72(3):269–342, 2010.
[13] V. Rao and Y. W. Teh.
Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2011.
[14] William Feller. On semi-Markov processes. Proceedings of the National Academy of Sciences of the United States of America, 51(4):653–659, 1964.
[15] D. Sonderman. Comparing semi-Markov processes. Mathematics of Operations Research, 5(1):110–119, 1980.
[16] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer, 2008.
[17] J. F. C. Kingman. Poisson Processes, volume 3 of Oxford Studies in Probability. The Clarendon Press, Oxford University Press, New York, 1993. Oxford Science Publications.
[18] A. Beskos and G. O. Roberts. Exact simulation of diffusions. Annals of Applied Probability, 15(4):2422–2444, November 2005.
[19] Martyn Plummer, Nicky Best, Kate Cowles, and Karen Vines. CODA: Convergence diagnosis and output analysis for MCMC. R News, 6(1):7–11, March 2006.
[20] Andrew Golightly and Darren J. Wilkinson. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo. Interface Focus, 1(6):807–820, December 2011.
[21] S. Asmussen. Applied Probability and Queues. Applications of Mathematics. Springer, 2003.
[22] Stephen G. Walker. Sampling the Dirichlet mixture model with slices. Communications in Statistics - Simulation and Computation, 36:45, 2007.
Spectral Learning of General Weighted Automata via Constrained Matrix Completion Borja Balle Universitat Politècnica de Catalunya bballe@lsi.upc.edu Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Abstract Many tasks in text and speech processing and computational biology require estimating functions mapping strings to real numbers. A broad class of such functions can be defined by weighted automata. Spectral methods based on the singular value decomposition of a Hankel matrix have been recently proposed for learning a probability distribution represented by a weighted automaton from a training sample drawn according to this same target distribution. In this paper, we show how spectral methods can be extended to the problem of learning a general weighted automaton from a sample generated by an arbitrary distribution. The main obstruction to this approach is that, in general, some entries of the Hankel matrix may be missing. We present a solution to this problem based on solving a constrained matrix completion problem. Combining these two ingredients, matrix completion and spectral method, a whole new family of algorithms for learning general weighted automata is obtained. We present generalization bounds for a particular algorithm in this family. The proofs rely on a joint stability analysis of matrix completion and spectral learning. 1 Introduction Many tasks in text and speech processing, computational biology, or learning models of the environment in reinforcement learning, require estimating a function mapping variable-length sequences to real numbers. A broad class of such functions can be defined by weighted automata. The mathematical and algorithmic properties of weighted automata have been extensively studied in the most general setting where they are defined in terms of an arbitrary semiring [28, 9, 23].
Weighted automata are widely used in applications ranging from natural text and speech processing [24] to optical character recognition [12] and image processing [1]. This paper addresses the problem of learning weighted automata from a finite set of labeled examples. The particular instance of this problem where the objective is to learn a probabilistic automaton from examples drawn from this same distribution has recently drawn much attention: starting with the seminal work of Hsu et al. [19], the so-called spectral method has proven to be a valuable tool in developing novel and theoretically-sound algorithms for learning HMMs and other related classes of distributions [5, 30, 31, 10, 6, 4]. Spectral methods have also been applied to other probabilistic models of practical interest, including probabilistic context-free grammars and graphical models with hidden variables [26, 22, 16, 3, 2]. The main idea behind these algorithms is that, under an identifiability assumption, the method of moments can be used to formulate a set of equations relating the parameters defining the target to observable statistics. Given enough training data, these statistics can be accurately estimated. Then, solving the corresponding approximate equations yields a model that closely estimates the target distribution. The spectral term takes its origin from the use of a singular value decomposition in solving those equations. This paper tackles a significantly more general and more challenging problem than the specific instance just mentioned. Indeed, in general, there seems to be a large gap separating the scenario of learning a probabilistic automaton using data drawn according to the distribution it generates, from that of learning an arbitrary weighted automaton from labeled data drawn from some unknown distribution. For a start, in the former setting there is only one object to care about because the distribution from which examples are drawn is the target machine.
In contrast, the latter involves two distinct objects: a distribution according to which strings are drawn, and a target weighted automaton assigning labels to these strings. It is not difficult in this setting to conceive that, for a particular target, an adversary could find a distribution over strings making the learner’s task insurmountably difficult. In fact, this is the core idea behind the cryptography-based hardness results for learning deterministic finite automata given by Kearns and Valiant [20] – these same results apply to our setting as well. But, even in cases where the distribution “cooperates,” there is still an obstruction in leveraging the spectral method for learning general weighted automata. The statistics used by the spectral method are essentially the probabilities assigned by the target distribution to each string in some fixed finite set B. In the case where the target is a distribution, increasingly large samples yield uniformly convergent estimates for these probabilities. Thus, it can be safely assumed that the probability of any string from B not present in the sample is zero. When learning arbitrary weighted automata, however, the value assigned by the target to an unseen string is unknown. Furthermore, one cannot expect that a sample would contain the values of the target function for all the strings in B. This observation raises the question of whether it is possible at all to apply the spectral method in a setting with missing data, or, alternatively, whether there is a principled way to “estimate” this missing information and then apply the spectral method. As it turns out, the latter approach can be naturally formulated as a constrained matrix completion problem. When applying the spectral method, the (approximate) values of the target on B are arranged in a matrix H. 
Thus, the main difference between the two settings can be restated as follows: when learning a weighted automaton representing a distribution, unknown entries of H can be filled in with zeros, while in the general setting there is a priori no straightforward method to fill in the missing values. We propose to use a matrix completion algorithm for solving this last problem. In particular, since H is a Hankel matrix whose entries must satisfy some equality constraints, it turns out that the problem of learning weighted automata under an arbitrary distribution leads to what we call the Hankel matrix completion problem. This is essentially a constrained matrix completion problem where entries of valid hypotheses need to satisfy a set of equalities. We give an algorithm for solving this problem via convex optimization. Many existing approaches to matrix completion, e.g., [14, 13, 27, 18], are also based on convex optimization. Since the set of valid hypotheses for our constrained matrix completion problem is convex, many of these algorithms could also be modified to deal with the Hankel matrix completion problem. In summary, our approach leverages two recent techniques for learning a general weighted automaton: matrix completion and spectral learning. It consists of first predicting the missing entries in H and then applying the spectral method to the resulting matrix. Altogether, this yields a family of algorithms parametrized by the choice of the specific Hankel matrix completion algorithm used. These algorithms are designed for learning an arbitrary weighted automaton from samples generated by an unknown distribution over strings and labels. We study a special instance of this family of algorithms and prove generalization guarantees for its performance based on a stability analysis, under mild conditions on the distribution. 
The proof contains two main novel ingredients: a stability analysis of an algorithm for constrained matrix completion, and an extension of the analysis of spectral learning to an agnostic setting where data is generated by an arbitrary distribution and labeled by a process not necessarily modeled by a weighted automaton. The rest of the paper is organized as follows. Section 2 introduces the main notation and definitions used in subsequent sections. In Section 3, we describe a family of algorithms for learning general weighted automata by combining constrained matrix completion and spectral methods. In Section 4, we give a detailed analysis of one particular algorithm in this family, including generalization bounds. 2 Preliminaries This section introduces the main notation used in this paper. Bold letters will be used for vectors v and matrices M. For vectors, ∥v∥ denotes the standard Euclidean norm. For matrices, ∥M∥ denotes the operator norm. For p ∈ [1, +∞], ∥M∥_p denotes the Schatten p-norm: ∥M∥_p = (Σ_{n≥1} σ_n(M)^p)^{1/p}, where σ_n(M) is the n-th singular value of M. The special case p = 2 coincides with the Frobenius norm, which will sometimes also be written as ∥M∥_F. The Moore–Penrose pseudo-inverse of a matrix M is denoted by M⁺. 2.1 Functions over Strings and Hankel Matrices We denote by Σ = {a_1, . . . , a_k} a finite alphabet of size k ≥ 1 and by ϵ the empty string. We also write Σ′ = {ϵ} ∪ Σ. The set of all strings over Σ is denoted by Σ⋆ and the length of a string x is denoted by |x|. For any n ≥ 0, Σ^{≤n} denotes the set of all strings of length at most n. Given two sets of strings P, S ⊆ Σ⋆, we denote by PS the set of all strings uv obtained by concatenation of a string u ∈ P and a string v ∈ S. A set of strings P is called Σ-complete when P = P′Σ′ for some set P′. P′ is then called the root of P. A pair (P, S) with P, S ⊆ Σ⋆ is said to form a basis of Σ⋆ if ϵ ∈ P ∩ S and P is Σ-complete. We define the dimension of a basis (P, S) as the cardinality of PS, that is |PS|.
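For concreteness, the Schatten norms above can be computed directly from the singular values. The following NumPy sketch (ours, not part of the paper) checks that p = 2 recovers the Frobenius norm, p = 1 the nuclear norm, and that the operator norm is the largest singular value:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))

def schatten(M, p):
    # ||M||_p = (sum_n sigma_n(M)^p)^(1/p), computed via the SVD
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

frob = np.linalg.norm(M, "fro")      # equals the Schatten norm with p = 2
nuclear = np.linalg.norm(M, "nuc")   # equals the Schatten norm with p = 1
operator = np.linalg.norm(M, 2)      # largest singular value (p -> infinity limit)
```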
For any basis B = (P, S), we denote by H_B the vector space of functions R^{PS}, whose dimension is the dimension of B. We will simply write H instead of H_B when the basis B is clear from the context. The Hankel matrix H ∈ R^{P×S} associated to a function h ∈ H is the matrix whose entries are defined by H(u, v) = h(uv) for all u ∈ P and v ∈ S. Note that the mapping h ↦ H is linear. In fact, H is isomorphic to the vector space formed by all |P| × |S| real Hankel matrices and we can thus write by identification H = {H ∈ R^{P×S} : ∀u_1, u_2 ∈ P, ∀v_1, v_2 ∈ S, u_1v_1 = u_2v_2 ⇒ H(u_1, v_1) = H(u_2, v_2)}. It is clear from this characterization that H is a convex set because it is a subset of a convex space defined by equality constraints. In particular, a matrix in H contains |P||S| coefficients with |PS| degrees of freedom, and the dependencies can be specified as a set of equalities of the form H(u_1, v_1) = H(u_2, v_2) when u_1v_1 = u_2v_2. We will use both characterizations of H indistinctly for the rest of the paper. Also, note that different orderings of P and S may result in different sets of matrices. For convenience, we will assume for all that follows an arbitrary fixed ordering, since the choice of that order has no effect on any of our results. Matrix norms extend naturally to norms in H. For any p ∈ [1, +∞], the Hankel–Schatten p-norm on H is defined as ∥h∥_p = ∥H∥_p. It is straightforward to verify that ∥h∥_p is a norm by the linearity of h ↦ H. In particular, this implies that the function ∥·∥_p : H → R is convex. In the case p = 2, it can be seen that ∥h∥_2² = ⟨h, h⟩_H, with the inner product on H defined by ⟨h, h′⟩_H = Σ_{x∈PS} c_x h(x) h′(x), where c_x = |{(u, v) ∈ P × S : x = uv}| is the number of possible decompositions of x into a prefix in P and a suffix in S. 2.2 Weighted finite automata A widely used class of functions mapping strings to real numbers is that of functions defined by weighted finite automata (WFA) or in short weighted automata [23].
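The identity ∥h∥_2² = ⟨h, h⟩_H can be verified numerically. In the following NumPy sketch (illustrative; the prefix and suffix sets are arbitrary choices, not from the paper), the Hankel matrix is generated from a single function h, so its equal-entry constraints hold by construction, and the multiplicity-weighted sum reproduces the squared Frobenius norm:

```python
import numpy as np

P = ["", "a", "b", "aa"]   # illustrative prefix set
S = ["", "a"]              # illustrative suffix set
PS = sorted({u + v for u in P for v in S}, key=lambda x: (len(x), x))

rng = np.random.default_rng(1)
h = {x: float(rng.standard_normal()) for x in PS}   # an arbitrary h: PS -> R

# Hankel matrix H(u, v) = h(uv); equal entries share one h-value by construction
H = np.array([[h[u + v] for v in S] for u in P])

# c_x counts the decompositions x = uv with u in P and v in S
c = {x: sum(1 for u in P for v in S if u + v == x) for x in PS}
inner = sum(c[x] * h[x] ** 2 for x in PS)           # <h, h>_H
```

Here, for instance, the string "a" decomposes both as (ϵ, a) and (a, ϵ), so c_a = 2, and `inner` equals ∥H∥_F².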
These functions are also known as rational power series [28, 9]. A WFA over Σ with n states can be defined as a tuple A = ⟨α, β, {A_a}_{a∈Σ}⟩, where α, β ∈ R^n are the initial and final weight vectors, and A_a ∈ R^{n×n} the transition matrix associated to each alphabet symbol a ∈ Σ. The function f_A realized by a WFA A is defined by f_A(x) = α⊤ A_{x_1} ··· A_{x_t} β, for any string x = x_1 ··· x_t ∈ Σ⋆ with t = |x| and x_i ∈ Σ for all i ∈ [1, t]. We will say that a WFA A = ⟨α, β, {A_a}⟩ is γ-bounded if ∥α∥, ∥β∥, ∥A_a∥ ≤ γ for all a ∈ Σ. This property is convenient to bound the maximum value assigned by a WFA to any string of a given length. [Figure 1: Example of a weighted automaton over Σ = {a, b} with 2 states: (a) graph representation; (b) algebraic representation, with α⊤ = [1/2, 1/2], β⊤ = [1, −1], A_a = [3/4, 0; 0, 1/3], A_b = [6/5, 2/3; 3/4, 1].] WFAs can be more generally defined over an arbitrary semiring instead of the field of real numbers and are also known as multiplicity automata (e.g., [8]). To any function f : Σ⋆ → R, we can associate its Hankel matrix H_f ∈ R^{Σ⋆×Σ⋆} with entries defined by H_f(u, v) = f(uv). These are just the bi-infinite versions of the Hankel matrices we introduced in the case P = S = Σ⋆. Carlyle and Paz [15] and Fliess [17] gave the following characterization of the set of functions f in R^{Σ⋆} defined by a WFA in terms of the rank of their Hankel matrix rank(H_f).¹ Theorem 1 ([15, 17]) A function f : Σ⋆ → R can be defined by a WFA iff rank(H_f) is finite, and in that case rank(H_f) is the minimal number of states of any WFA A such that f = f_A. Thus, WFAs can be viewed as those functions whose Hankel matrix can be finitely "compressed". Since finite sub-blocks of a Hankel matrix cannot have a larger rank than its bi-infinite extension, this justifies the use of a low-rank-enforcing regularization in the definition of a Hankel matrix completion.
Note that deterministic finite automata (DFA) with n states can be represented by a WFA with at most n states. Thus, the results we present here can be directly applied to classification problems in Σ⋆. However, specializing our results to this particular setting may yield several improvements. 2.2.1 Example Figure 1 shows an example of a weighted automaton A = ⟨α, β, {A_a}⟩ with two states defined over the alphabet Σ = {a, b}, with both its algebraic representation (Figure 1(b)) in terms of vectors and matrices and the equivalent graph representation (Figure 1(a)) useful for a variety of WFA algorithms [23]. Let W = {ϵ, a, b}; then B = (WΣ′, W) is a Σ-complete basis. The following is the Hankel matrix of A on this basis, shown with three-digit precision entries:

H_B⊤ =
      ϵ     a     b     aa    ab    ba    bb
ϵ   0.00  0.20  0.14  0.22  0.15  0.45  0.31
a   0.20  0.22  0.45  0.19  0.29  0.45  0.85
b   0.14  0.15  0.31  0.13  0.20  0.32  0.58

By Theorem 1, the Hankel matrix of A has rank at most 2. Given H_B, the spectral method described in [19] can be used to recover a WFA Â equivalent to A, in the sense that A and Â compute the same function. In general, one may be given a sample of strings labeled using some WFA that does not contain enough information to fully specify a Hankel matrix over a complete basis. In that case, Theorem 1 motivates the use of a low-rank matrix completion algorithm to fill in the missing entries in H_B prior to the application of the spectral method. This is the basis of the algorithm we describe in the following section. ¹The construction of an equivalent WFA with the minimal number of states from a given WFA was first given by Schützenberger [29]. 3 The HMC+SM Algorithm In this section we describe our algorithm HMC+SM for learning weighted automata. As input, the algorithm takes a sample Z = (z_1, . . . , z_m) containing m examples z_i = (x_i, y_i) ∈ Σ⋆ × R, 1 ≤ i ≤ m, drawn i.i.d. from some distribution D over Σ⋆ × R.
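The entries of the Hankel matrix of this example can be reproduced by evaluating f_A directly. The following NumPy sketch (ours, illustrative) encodes the Figure 1 automaton, builds the Hankel matrix over the basis B = (WΣ′, W), and confirms the rank bound of Theorem 1; the computed values agree with the displayed table up to two-decimal truncation:

```python
import numpy as np

# the two-state WFA of Figure 1
alpha = np.array([0.5, 0.5])
beta = np.array([1.0, -1.0])
A = {"a": np.array([[0.75, 0.0], [0.0, 1.0 / 3.0]]),
     "b": np.array([[1.2, 2.0 / 3.0], [0.75, 1.0]])}

def f(x):
    # f_A(x) = alpha^T A_{x_1} ... A_{x_t} beta
    v = alpha.copy()
    for symbol in x:
        v = v @ A[symbol]
    return float(v @ beta)

# basis B = (W Sigma', W) with W = {eps, a, b}
P = ["", "a", "b", "aa", "ab", "ba", "bb"]
S = ["", "a", "b"]
H_B = np.array([[f(u + v) for v in S] for u in P])

rank = np.linalg.matrix_rank(H_B)  # at most the number of states, by Theorem 1
```

For example, f(ϵ) = 1/2 − 1/2 = 0 and f(a) = 3/8 − 1/6 = 5/24 ≈ 0.208, matching the first row of the table.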
There are three parameters a user can specify to control the behavior of the algorithm: a basis B = (P, S) of Σ⋆, a regularization parameter τ > 0, and the desired number of states n in the hypothesis. The output returned by HMC+SM is a WFA A_Z with n states that computes a function f_{A_Z} : Σ⋆ → R. The algorithm works in two stages. In the first stage, a constrained matrix completion algorithm with input Z and regularization parameter τ is used to return a Hankel matrix H_Z ∈ H_B. In the second stage, the spectral method is applied to H_Z to compute a WFA A_Z with n states. These two steps will be described in detail in the following sections. As will soon become apparent, HMC+SM defines in fact a whole family of algorithms. In particular, by combining the spectral method with any algorithm for solving the Hankel matrix completion problem, one can derive a new algorithm for learning WFAs. For concreteness, in the following, we will only consider the Hankel matrix completion algorithm described in Section 3.1. Through its parametrization by a number 1 ≤ p ≤ ∞ and a convex loss ℓ : R × R → R₊, this completion algorithm already gives rise to a family of learning algorithms that we denote by HMC_{p,ℓ}+SM. However, it is important to keep in mind that for each existing matrix completion algorithm that can be modified to solve the Hankel matrix completion problem, a new algorithm for learning WFAs can be obtained via the general scheme we describe below. 3.1 Hankel Matrix Completion We now describe our Hankel matrix completion algorithm. Given a basis B = (P, S) of Σ⋆ and a sample Z over Σ⋆ × R, the algorithm solves a convex optimization problem and returns a matrix H_Z ∈ H_B. We give two equivalent descriptions of this optimization, one in terms of functions h : PS → R, and another in terms of Hankel matrices H ∈ R^{P×S}. While the former is perhaps conceptually simpler, the latter is easier to implement within the existing frameworks of convex optimization.
We will denote by Z̃ the subsample of Z formed by examples z = (x, y) with x ∈ PS, and by m̃ its size |Z̃|. For any p ∈ [1, +∞] and a convex loss function ℓ : R × R → R₊, we consider the objective function F_Z defined for any h ∈ H by

F_Z(h) = τ N(h) + R̂_{Z̃}(h) = τ ∥h∥_p² + (1/m̃) Σ_{(x,y)∈Z̃} ℓ(h(x), y),

where τ > 0 is a regularization parameter. F_Z is a convex function, by the convexity of ∥·∥_p and ℓ. Our algorithm seeks to minimize this loss function over the finite-dimensional vector space H and returns a function h_Z satisfying

h_Z ∈ argmin_{h∈H} F_Z(h). (HMC-h)

To define an equivalent optimization over the matrix version of H, we introduce the following notation. For each string x ∈ PS, fix a pair of coordinate vectors (u_x, v_x) ∈ R^P × R^S such that u_x⊤ H v_x = H(x) for any H ∈ H. That is, u_x and v_x are coordinate vectors corresponding respectively to a prefix u ∈ P and a suffix v ∈ S, and such that uv = x. Now, abusing our previous notation, we define the following loss function over matrices:

F_Z(H) = τ N(H) + R̂_{Z̃}(H) = τ ∥H∥_p² + (1/m̃) Σ_{(x,y)∈Z̃} ℓ(u_x⊤ H v_x, y).

This is a convex function defined over the space of all |P| × |S| matrices. Optimizing F_Z over the convex set of Hankel matrices H leads to an algorithm equivalent to (HMC-h):

H_Z ∈ argmin_{H∈H} F_Z(H). (HMC-H)

We note here that our approach shares some common aspects with some previous work in matrix completion. The fact that there may not be a true underlying Hankel matrix makes it somewhat close to the agnostic setting in [18], where matrix completion is also applied under arbitrary distributions. Nonetheless, it is also possible to consider other learning frameworks for WFAs where algorithms for exact matrix completion [14, 27] or noisy matrix completion [13] may be useful. Furthermore, since most algorithms in the literature of matrix completion are based on convex optimization problems, it is likely that most of them can be adapted to solve constrained matrix completion problems such as the one we discuss here.
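A minimal numerical sketch of (HMC-h) with p = 2 (ours, not from the paper): parametrizing the objective directly by h ∈ R^{PS} enforces the Hankel equality constraints automatically, so plain gradient descent suffices. For differentiability we substitute the squared loss for the absolute loss, and the sample labels below are purely illustrative:

```python
import numpy as np

# basis B = (P, S) with root P' = S = {eps, a, b}, as in the running example
root = ["", "a", "b"]
P = sorted({u + s for u in root for s in ["", "a", "b"]}, key=lambda x: (len(x), x))
S = ["", "a", "b"]
PS = sorted({u + v for u in P for v in S}, key=lambda x: (len(x), x))
idx = {x: i for i, x in enumerate(PS)}

# multiplicities c_x = number of splits x = uv with u in P and v in S
c = np.zeros(len(PS))
for u in P:
    for v in S:
        c[idx[u + v]] += 1.0

# toy labeled subsample (hypothetical labels, for illustration only)
Z = [("a", 0.2), ("b", 0.14), ("ab", 0.29), ("ba", 0.45), ("", 0.0)]
tau, lr, m = 0.01, 0.1, len(Z)

# gradient descent on F_Z(h) = tau * sum_x c_x h(x)^2 + (1/m) sum_i (h(x_i) - y_i)^2
h = np.zeros(len(PS))
for _ in range(5000):
    g = 2.0 * tau * c * h
    for x, y in Z:
        g[idx[x]] += 2.0 * (h[idx[x]] - y) / m
    h -= lr * g

# assemble H_Z; the Hankel equality constraints hold by construction
H_Z = np.array([[h[idx[u + v]] for v in S] for u in P])
```

In this sketch each observed entry converges to a shrunken average of its labels, h(x) = Σ_i y_i / (m̃ τ c_x + k_x) with k_x the number of occurrences of x in the sample, while unobserved entries stay at zero.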
3.2 Spectral Method for General WFA Here, we describe how the spectral method can be applied to H_Z to obtain a WFA. We use the same notation as in [7] and a version of the spectral method working with an arbitrary basis (as in [5, 4, 7]), in contrast to versions restricted to P = Σ^{≤2} and S = Σ like [19]. We first need to partition H_Z into k + 1 blocks as follows. Since B is a basis, P is Σ-complete and admits a root P′. We define a block H_a ∈ R^{P′×S} for each a ∈ Σ′, whose entries are given by H_a(u, v) = H_Z(ua, v), for any u ∈ P′ and v ∈ S. Thus, after suitably permuting the rows of H_Z, we can write H_Z⊤ = [H_ϵ⊤, H_{a_1}⊤, . . . , H_{a_k}⊤]. We will use the following specific notation to refer to the rows and columns of H_ϵ corresponding to ϵ ∈ P′ ∩ S: h_{ϵ,S} ∈ R^S with h_{ϵ,S}(v) = H_ϵ(ϵ, v), and h_{P′,ϵ} ∈ R^{P′} with h_{P′,ϵ}(u) = H_ϵ(u, ϵ). Using this notation, the spectral method can be described as follows. Given the desired number of states n, it consists of first computing the truncated SVD of H_ϵ corresponding to the n largest singular values: U_n D_n V_n⊤. Thus, matrix U_n D_n V_n⊤ is the best rank-n approximation to H_ϵ with respect to the Frobenius norm. Then, using the right singular vectors V_n of H_ϵ, the next step consists of computing a weighted automaton A_Z = ⟨α, β, {A_a}⟩ as follows:

α⊤ = h_{ϵ,S}⊤ V_n,  β = (H_ϵ V_n)⁺ h_{P′,ϵ},  A_a = (H_ϵ V_n)⁺ H_a V_n. (SM)

The fact that the spectral method is based on a singular value decomposition justifies in part the use of a Schatten p-norm as a regularizer in (HMC-H). In particular, two very natural choices are p = 1 and p = 2. The first one corresponds to a nuclear norm regularized optimization, which is known to enforce a low rank constraint on H_Z. In a sense, this choice can be justified in view of Theorem 1 when the target is known to be generated by some WFA. On the other hand, choosing p = 2 also has some effect on the spread of singular values, while at the same time enforcing the coefficients in H_Z – especially those that are completely unknown – to be small.
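Applied to the exact Hankel blocks of a known low-rank target, the equations (SM) recover an equivalent automaton. The sketch below (ours, illustrative) uses the two-state WFA of Figure 1, for which rank(H_ϵ) equals n = 2, so recovery is exact up to floating-point error:

```python
import numpy as np

# the two-state WFA of Figure 1
alpha = np.array([0.5, 0.5])
beta = np.array([1.0, -1.0])
A = {"a": np.array([[0.75, 0.0], [0.0, 1.0 / 3.0]]),
     "b": np.array([[1.2, 2.0 / 3.0], [0.75, 1.0]])}

def f(x):
    v = alpha.copy()
    for s in x:
        v = v @ A[s]
    return float(v @ beta)

root = ["", "a", "b"]   # P' = W for the basis B = (W Sigma', W)
S = ["", "a", "b"]

# exact Hankel blocks: H_eps(u, v) = f(uv) and H_c(u, v) = f(u c v)
H_eps = np.array([[f(u + v) for v in S] for u in root])
H_blk = {c: np.array([[f(u + c + v) for v in S] for u in root]) for c in A}

# equations (SM) with n = 2 states
n = 2
V = np.linalg.svd(H_eps)[2][:n].T               # top-n right singular vectors V_n
pinv = np.linalg.pinv(H_eps @ V)                # (H_eps V_n)^+
alpha_h = np.array([f(v) for v in S]) @ V       # h_{eps,S}^T V_n
beta_h = pinv @ np.array([f(u) for u in root])  # (H_eps V_n)^+ h_{P',eps}
A_h = {c: pinv @ H_blk[c] @ V for c in A}       # (H_eps V_n)^+ H_c V_n

def f_hat(x):
    v = alpha_h.copy()
    for s in x:
        v = v @ A_h[s]
    return float(v @ beta_h)
```

The recovered automaton is related to the target by a change of basis M = O V_n (with O the backward matrix), so f_hat agrees with f on every string.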
As our analysis suggests, this last property is important for preventing errors from accumulating on the values assigned by A_Z to long strings. 4 Generalization Bound In this section, we study the generalization properties of HMC_{p,ℓ}+SM. We give a stability analysis for a special instance of this family of algorithms and use it to derive a generalization bound. We study the specific case where p = 2 and ℓ(y, y′) = |y − y′| for all (y, y′). But, much of our analysis can be used to derive similar bounds for other instances of HMC_{p,ℓ}+SM. The proofs of the technical results presented are given in the Appendix. We first introduce some notation needed for the presentation of our main result. For any ν > 0, let t_ν be the function defined by t_ν(x) = x for |x| ≤ ν and t_ν(x) = ν sign(x) for |x| > ν. For any distribution D over Σ⋆ × R, we denote by D_Σ its marginal distribution over Σ⋆. The probability that a string x ∼ D_Σ belongs to PS is denoted by π = D_Σ(PS). We assume that the parameters B, n, and τ are fixed. Two parameters that depend on D will appear in our bound. In order to define these parameters, we need to consider the output H_Z of (HMC-H) as a random variable that depends on the sample Z. Writing H_Z⊤ = [H_ϵ⊤, H_{a_1}⊤, . . . , H_{a_k}⊤], as in Section 3.2, we define:

σ = E_{Z∼D^m}[σ_n(H_ϵ)],  ρ = E_{Z∼D^m}[σ_n(H_ϵ)² − σ_{n+1}(H_ϵ)²],

where σ_n(M) denotes the n-th singular value of matrix M. Note that these parameters may vary with m, n, τ, and B. In contrast to previous learning results based on the spectral method, our bound holds in an agnostic setting. That is, we do not require that the data was generated from some (probabilistic) unknown WFA. However, in order to prove our results we do need to make two assumptions about the tails of the distribution. First, we need to assume that there exists a bound on the magnitude of the labels generated by the distribution. Assumption 1 There exists a constant ν > 0 such that if (x, y) ∼ D, then |y| ≤ ν almost surely.
Second, we assume that the strings generated by the distribution will not be too long. In particular, that the length of the strings generated by D_Σ follows a distribution whose tail is slightly lighter than sub-exponential. Assumption 2 There exist constants c, η > 0 such that P_{x∼D_Σ}[|x| ≥ t] ≤ exp(−c t^{1+η}) holds for all t ≥ 0. We note that in the present context both assumptions are quite reasonable. Assumption 1 is equivalent to assumptions made in other contexts where a stability analysis is pursued, e.g., in the analysis of support vector regression in [11]. Furthermore, in our context, this assumption can be relaxed to require only that the distribution over labels be sub-Gaussian, at the expense of a more complex proof. Assumption 2 is required by the fact already pointed out in [19] that errors in the estimation of operator models accumulate exponentially with the length of the string. Moreover, it is well known that the tail of any probability distribution generated by a WFA is sub-exponential. Thus, though we do not require D_Σ to be generated by a WFA, we do need its distribution over lengths to have a tail behavior similar to that of a distribution generated by a WFA. This seems to be a limitation common to all known learnability proofs based on the spectral method. We can now state our main result, which is a bound on the average loss R(f) = E_{z∼D}[ℓ(f(x), y)] in terms of the empirical loss R̂_Z(f) = |Z|⁻¹ Σ_{z∈Z} ℓ(f(x), y). Theorem 2 Let Z be a sample formed by m i.i.d. examples generated from some distribution D satisfying Assumptions 1 and 2. Let A_Z be the WFA returned by algorithm HMC_{p,ℓ}+SM with p = 2 and loss function ℓ(y, y′) = |y − y′|. Then, for any δ > 0, the following holds with probability at least 1 − δ for f_Z = t_ν ∘ f_{A_Z}:

R(f_Z) ≤ R̂_Z(f_Z) + O( (ν⁴ |P|² |S|^{3/2} / (τ σ³ ρ π)) · (ln m / m^{1/3}) · √(ln(1/δ)) ).

The proof of this theorem is based on an algorithmic stability analysis. Thus, we will consider two samples of size m, Z ∼ D^m consisting of m i.i.d.
examples drawn from D, and Z′ differing from Z by just one point: say z_m in Z = (z_1, . . . , z_m) and z′_m in Z′ = (z_1, . . . , z_{m−1}, z′_m). The new example z′_m is an arbitrary point in the support of D. Throughout the analysis we use the shorter notation H = H_Z and H′ = H_{Z′} for the Hankel matrices obtained from (HMC-H) based on samples Z and Z′ respectively. The first step in the analysis is to bound the stability of the matrix completion algorithm. This is done in the following lemma, which gives a sample-dependent and a sample-independent bound for the stability of H. Lemma 3 Suppose D satisfies Assumption 1. Then, the following holds:

∥H − H′∥_F ≤ min{ 2ν √(|P||S|), 1/(τ min{m̃, m̃′}) }.

The standard method for deriving generalization bounds from algorithmic stability results could be applied here to obtain a generalization bound for our Hankel matrix completion algorithm. However, our goal is to give a generalization bound for the full HMC+SM algorithm. Using the bound on the Frobenius norm ∥H − H′∥_F, we are able to analyze the stability of σ_n(H_ϵ), σ_n(H_ϵ)² − σ_{n+1}(H_ϵ)², and V_n using well-known results on the stability of singular values and singular vectors. These results are used to bound the difference between the operators of WFA A_Z and A_{Z′}. The following lemma can be proven by modifying and extending some of the arguments of [19, 4], which were given in the specific case of WFAs representing a probability distribution. Lemma 4 Let ε = ∥H − H′∥_F, σ̂ = min{σ_n(H_ϵ), σ_n(H′_ϵ)}, and ρ̂ = σ_n(H_ϵ)² − σ_{n+1}(H_ϵ)². Suppose ε ≤ √ρ̂/4. Then, there exists some constant C > 0 such that the following three inequalities hold:

∀a ∈ Σ : ∥A_a − A′_a∥ ≤ C ε ν³ |P|^{3/2} |S|^{1/2} / (ρ̂ σ̂²);
∥α − α′∥ ≤ C ε ν² |P|^{1/2} |S| / ρ̂;
∥β − β′∥ ≤ C ε ν³ |P|^{3/2} |S|^{1/2} / (ρ̂ σ̂²).

The other half of the proof results from combining Lemmas 3 and 4 to obtain a bound for |f_Z(x) − f_{Z′}(x)|. This is a delicate step, because some of the bounds given above involve quantities that are defined in terms of Z.
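Among the well-known singular-value stability results invoked here is Weyl's inequality, |σ_i(H) − σ_i(H′)| ≤ ∥H − H′∥ ≤ ∥H − H′∥_F, which a quick numerical check illustrates (our sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 4))              # stands in for a Hankel matrix
Hp = H + 1e-3 * rng.standard_normal((5, 4))  # a nearby perturbation H'

s = np.linalg.svd(H, compute_uv=False)
sp = np.linalg.svd(Hp, compute_uv=False)

gap = float(np.max(np.abs(s - sp)))          # max_i |sigma_i(H) - sigma_i(H')|
op = float(np.linalg.norm(H - Hp, 2))        # Weyl: gap <= operator norm of H - H'
frob = float(np.linalg.norm(H - Hp, "fro"))  # which the Frobenius norm dominates
```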
Therefore, all these parameters need to be controlled in order to ensure that the bounds do not grow too large. Furthermore, to obtain the desired bounds we need to extend the usual tools for analyzing spectral methods to the current setting. In particular, these tools need to be adapted to the agnostic setting where there is no underlying true WFA. The analysis is further complicated by the fact that now the functions we are trying to learn and the distribution that generates the data are not necessarily related. Once all this is achieved, it remains to combine these new tools to show an algorithmic stability result for HMC_{p,ℓ}+SM. In the following lemma, we first define "bad" samples Z and show that bad samples have a very low probability. Lemma 5 Suppose D satisfies Assumptions 1 and 2. If Z is a large enough i.i.d. sample from D, then with probability at least 1 − 1/m³ the following inequalities hold simultaneously: |x_i| ≤ ((1/c) ln(4m⁴))^{1/(1+η)} for all i, ε ≤ 4/(τπm), σ̂ ≥ σ/2, and ρ̂ ≥ ρ/2. After that, we give two upper bounds for |f_Z(x) − f_{Z′}(x)|: a tighter bound that holds for "good" samples Z and Z′, and another one that holds for all samples. These bounds are combined using a variant of McDiarmid's inequality for dealing with functions that do not satisfy the bounded differences assumption almost surely [21]. The rest of the proof then follows the same scheme as the standard one for deriving generalization bounds for stable algorithms [11, 25]. 5 Conclusion We described a new algorithmic solution for learning arbitrary weighted automata from a sample of labeled strings drawn from an unknown distribution. Our approach combines an algorithm for constrained matrix completion with the recently developed spectral learning methods for learning probabilistic automata. Using our general scheme, a broad family of algorithms for learning weighted automata can be obtained.
We gave a stability analysis of a particular algorithm in that family and used it to prove generalization bounds that hold for all distributions satisfying two reasonable assumptions. The particular case of Schatten p-norm with p = 1, which corresponds to a regularization with the nuclear norm, can be analyzed using similar techniques. Our results can be further extended by deriving generalization guarantees for all algorithms in the family we introduced. An extensive and rigorous empirical comparison of all these algorithms will be an important complement to the research we presented. Finally, learning DFAs under an arbitrary distribution using the algorithms we presented deserves a specific study since the problem is of interest in many applications and since it may benefit from improved learning guarantees. Acknowledgments Borja Balle is partially supported by an FPU fellowship (AP2008-02064) and project TIN2011-27479-C04-03 (BASMATI) of the Spanish Ministry of Education and Science, the EU PASCAL2 NoE (FP7-ICT-216886), and by the Generalitat de Catalunya (2009-SGR-1428). The work of Mehryar Mohri was partly funded by the NSF grant IIS-1117591. References [1] J. Albert and J. Kari. Digital image compression. In Handbook of Weighted Automata. Springer, 2009. [2] A. Anandkumar, D. P. Foster, D. Hsu, S. M. Kakade, and Y-K. Liu. Two SVDs suffice: Spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. CoRR, abs/1204.6703, 2012. [3] A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. COLT, 2012. [4] R. Bailly. Quadratic weighted automata: Spectral algorithm and likelihood maximization. ACML, 2011. [5] R. Bailly, F. Denis, and L. Ralaivola. Grammatical inference as a principal component analysis problem. ICML, 2009. [6] B. Balle, A. Quattoni, and X. Carreras. A spectral learning algorithm for finite state transducers. ECML–PKDD, 2011. [7] B. Balle, A. Quattoni, and X. Carreras.
Local loss optimization in operator models: A new insight into spectral learning. ICML, 2012. [8] A. Beimel, F. Bergadano, N.H. Bshouty, E. Kushilevitz, and S. Varricchio. Learning functions represented as multiplicity automata. JACM, 2000. [9] J. Berstel and C. Reutenauer. Rational Series and Their Languages. Springer, 1988. [10] B. Boots, S. Siddiqi, and G. Gordon. Closing the learning planning loop with predictive state representations. I. J. Robotic Research, 2011. [11] O. Bousquet and A. Elisseeff. Stability and generalization. JMLR, 2002. [12] T. M. Breuel. The OCRopus open source OCR system. IS&T/SPIE Annual Symposium, 2008. [13] E.J. Candes and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 2010. [14] E.J. Candes and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 2010. [15] Jack W. Carlyle and Azaria Paz. Realizations by stochastic finite automata. J. Comput. Syst. Sci., 5(1):26–40, 1971. [16] S. B. Cohen, K. Stratos, M. Collins, D. P. Foster, and L. Ungar. Spectral learning of latent-variable PCFGs. ACL, 2012. [17] M. Fliess. Matrices de Hankel. Journal de Mathématiques Pures et Appliquées, 53:197–222, 1974. [18] R. Foygel, R. Salakhutdinov, O. Shamir, and N. Srebro. Learning with the weighted trace-norm under arbitrary sampling distributions. NIPS, 2011. [19] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. COLT, 2009. [20] M. Kearns and L. Valiant. Cryptographic limitations on learning boolean formulae and finite automata. JACM, 1994. [21] S. Kutin. Extensions to McDiarmid's inequality when differences are bounded with high probability. Technical report, TR-2002-04, University of Chicago, 2002. [22] F.M. Luque, A. Quattoni, B. Balle, and X. Carreras. Spectral learning in non-deterministic dependency parsing. EACL, 2012. [23] M. Mohri. Weighted automata algorithms. In Handbook of Weighted Automata. Springer, 2009. [24] M.
Mohri, F. C. N. Pereira, and M. Riley. Speech recognition with weighted finite-state transducers. In Handbook on Speech Processing and Speech Communication. Springer, 2008. [25] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012. [26] A.P. Parikh, L. Song, and E.P. Xing. A spectral algorithm for latent tree graphical models. ICML, 2011. [27] B. Recht. A simpler approach to matrix completion. JMLR, 2011. [28] Arto Salomaa and Matti Soittola. Automata-Theoretic Aspects of Formal Power Series. Springer-Verlag: New York, 1978. [29] M.P. Schützenberger. On the definition of a family of automata. Information and Control, 1961. [30] S. M. Siddiqi, B. Boots, and G. J. Gordon. Reduced-rank hidden Markov models. AISTATS, 2010. [31] L. Song, B. Boots, S. Siddiqi, G. Gordon, and A. Smola. Hilbert space embeddings of hidden Markov models. ICML, 2010.
|
2012
|
165
|
4,525
|
Learning with Recursive Perceptual Representations Oriol Vinyals UC Berkeley Berkeley, CA Yangqing Jia UC Berkeley Berkeley, CA Li Deng Microsoft Research Redmond, WA Trevor Darrell UC Berkeley Berkeley, CA Abstract Linear Support Vector Machines (SVMs) have become very popular in vision as part of state-of-the-art object recognition and other classification tasks but require high dimensional feature spaces for good performance. Deep learning methods can find more compact representations but current methods employ multilayer perceptrons that require solving a difficult, non-convex optimization problem. We propose a deep non-linear classifier whose layers are SVMs and which incorporates random projection as its core stacking element. Our method learns layers of linear SVMs recursively transforming the original data manifold through a random projection of the weak prediction computed from each layer. Our method scales as linear SVMs, does not rely on any kernel computations or nonconvex optimization, and exhibits better generalization ability than kernel-based SVMs. This is especially true when the number of training samples is smaller than the dimensionality of data, a common scenario in many real-world applications. The use of random projections is key to our method, as we show in the experiments section, in which we observe a consistent improvement over previous –often more complicated– methods on several vision and speech benchmarks. 1 Introduction In this paper, we focus on the learning of a general-purpose non-linear classifier applied to perceptual signals such as vision and speech. The Support Vector Machine (SVM) has been a popular method for multimodal classification tasks since its introduction, and one of its main advantages is the simplicity of training a linear model. 
However, linear SVMs often fail to solve complex problems, and with non-linear kernels, SVMs usually suffer from speed and memory issues when faced with very large-scale data, although techniques such as non-convex optimization [6] or spline approximations [19] exist for speed-ups. In addition, finding the “oracle” kernel for a specific task remains an open problem, especially in applications such as vision and speech. Our aim is to design a classifier that combines the simplicity of the linear Support Vector Machine (SVM) with the power derived from deep architectures. The new technique we propose follows the philosophy of “stacked generalization” [23], i.e. the framework of building layer-by-layer architectures, and is motivated by the recent success of a convex stacking architecture which uses a simplified form of neural network with closed-form, convex learning [10]. Specifically, we propose a new stacking technique for building a deep architecture, using a linear SVM as the base building block and a random projection as its core stacking element. The proposed model, which we call the Random Recursive SVM (R2SVM), involves an efficient, feed-forward convex learning procedure. The key element in our convex learning of each layer is to randomly project the predictions of the previous layer's SVM back to the original feature space. As we will show in the paper, this can be seen as recursively transforming the original data manifold so that data from different classes are moved apart, leading to better linear separability in the subsequent layers. In particular, we show that randomly generating projection parameters, instead of fine-tuning them using backpropagation, suffices to achieve a significant performance gain. As a result, our model does not require any complex learning techniques other than training linear SVMs, while canonical deep architectures usually require carefully designed pre-training and fine-tuning steps, which often depend on specific applications.

Figure 1: A conceptual example of Random Recursive SVM separating edges from cross-bars. Starting from data manifolds that are not linearly separable, our method transforms the data manifolds in a stacked way to find a linear separating hyperplane in the high layers, which corresponds to non-linear separating hyperplanes in the lower layers. Non-linear classification is achieved without kernelization, using a recursive architecture.

Using linear SVMs as building blocks, our model scales in the same way as the linear SVM does, enabling fast computation during both training and testing time. While a linear SVM fails to solve non-linearly separable problems, the simple non-linearity in our algorithm, introduced with sigmoid functions, is shown to adapt to a wide range of real-world data with the same learning structure. From a kernel-based perspective, our method can be viewed as a special non-linear SVM, with the benefit that the non-linear kernel naturally emerges from the stacked structure instead of being defined as in conventional algorithms. This brings additional flexibility to applications, as task-dependent kernel designs usually require detailed domain-specific knowledge and may not generalize well due to suboptimal choices of non-linearity. Additionally, kernel SVMs usually suffer from speed and memory issues when faced with large-scale data, although techniques such as non-convex optimization [6] exist for speed-ups. Our findings suggest that the proposed model, while keeping the simplicity and efficiency of training a linear SVM, can exploit non-linear dependencies with the proposed deep architecture, as suggested by the results on two well-known vision and speech datasets. In addition, our model performs better than other non-linear models under small training set sizes (i.e.
it exhibits a better generalization gap), which is a desirable property inherited from the linear model used in the architecture presented in this paper.

2 Previous Work

There has been a trend in object, acoustic, and image classification to move the complexity from the classifier to the feature extraction step. The main focus of many state-of-the-art systems has been to build rich feature descriptors (e.g. SIFT [18], HOG [7], or MFCC [8]), and to use sophisticated non-linear classifiers, usually based on kernel functions and SVMs or mixture models. Thus, the complexity of the overall system (feature extractor followed by the non-linear classifier) is shared between the two blocks. Vector Quantization [12] and Sparse Coding [21, 24, 26] have theoretically and empirically been shown to work well with linear classifiers. In [4], the authors note that the choice of codebook does not seem to impact performance significantly, and that encoding via an inner product plus a non-linearity can effectively replace sparse coding, making testing significantly simpler and faster. A troubling issue with sparse coding followed by linear classification is that, with a limited codebook size, linear separability may be an overly strong assumption, undermining the use of a single linear classifier. This has been empirically verified: as the codebook size increases, the performance keeps improving [4], indicating that such representations may not be able to fully exploit the complexity of the data [2]. In fact, recent success on PASCAL VOC can partially be attributed to a huge codebook [25]. While this is theoretically valid, the practical advantage of linear models diminishes quickly, as the computation cost of feature generation, as well as training a high-dimensional (albeit linear) classifier, can make it as expensive as classical non-linear classifiers.
Despite this trend to rely on linear classifiers and overcomplete feature representations, sparse coding is still a flat model, and efforts have been made to add flexibility to the features. In particular, Deep Coding Networks [17] proposed an extension in which a higher-order Taylor approximation of the non-linear classification function is used, showing improvements over single-layer coding. Our approach can be seen as an extension of sparse coding used in a stacked architecture. Stacking is a general philosophy that promotes generalization in learning complex functions and improves classification performance. The method presented in this paper is a new stacking technique with close connections to several stacking methods developed in the literature, which are briefly surveyed in this section. The concept of stacking was proposed in [23], where simple modules of functions or classifiers are “stacked” on top of each other in order to learn complex functions or classifiers. Since then, various ways of implementing stacking operations have been developed, and they can be divided into two general categories. In the first category, stacking is performed in a layer-by-layer fashion and typically involves no supervised information. This gives rise to multiple layers in unsupervised feature learning, as exemplified in Deep Belief Networks [14, 13, 9], layered Convolutional Neural Networks [15], Deep Auto-encoders [14, 9], etc. Applications of such stacking methods include object recognition [15, 26, 4], speech recognition [20], etc. In the second category of techniques, stacking is carried out using supervised information. The modules of the stacking architectures are typically simple classifiers. The new features for the stacked classifier at a higher level of the hierarchy come from the concatenation of the classifier outputs of lower modules and the raw input features.
Cohen and de Carvalho [5] developed a stacking architecture where the simple module is a Conditional Random Field. Another successful stacking architecture reported in [10, 11] uses supervised information for stacking, where the basic module is a simplified form of multilayer perceptron in which the output units are linear and the hidden units are sigmoidal non-linear. The linearity of the output units permits highly efficient, closed-form estimation (the result of convex optimization) of the output network weights given the hidden units' outputs. Stacked context has also been used in [3], where a set of classifier scores is stacked to produce a more reliable detection. Our proposed method builds a stacked architecture where each layer is an SVM, which has proven to be a very successful classifier for computer vision applications.

3 The Random Recursive SVM

In this section we formally introduce the Random Recursive SVM model, and discuss the motivation and justification behind it. Specifically, we consider a training set that contains N pairs of tuples (d(i), y(i)), where d(i) ∈ R^D is the feature vector, and y(i) ∈ {1, . . . , C} is the class label corresponding to the i-th sample. As depicted in Figure 2(a), the model is built from multiple layers of blocks, which we call Random SVMs; each learns a linear SVM classifier and transforms the data based on a random projection of the previous layers' SVM outputs. The linear SVM classifiers are learned in a one-vs-all fashion. For convenience, let θ ∈ R^{D×C} be the classification matrix obtained by stacking each parameter vector column-wise, so that o(i) = θ^T d(i) is the vector of scores for each class corresponding to the sample d(i), and ŷ(i) = arg max_c θ_c^T d(i) is the prediction for the i-th sample if we want to make final predictions. From this point onward, we drop the index ·(i) for the i-th sample for notational convenience.
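To make the one-vs-all scoring notation concrete, here is a minimal NumPy sketch (toy sizes; the weight matrix is a random placeholder, not a trained SVM):

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 8, 3                          # feature dimension and number of classes (toy sizes)

theta = rng.standard_normal((D, C))  # columns theta_c: one-vs-all classifier weights
d = rng.standard_normal(D)           # a single feature vector

o = theta.T @ d                      # per-class score vector o = theta^T d, shape (C,)
y_hat = int(np.argmax(o))            # prediction y_hat = argmax_c theta_c^T d

print(o.shape, y_hat)
```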
3.1 Recursive Transform of Input Features

Figure 2: The pipeline of the proposed Random Recursive SVM model. (a) The model is built with layers of Random SVM blocks, which are based on simple linear SVMs. Speech and image signals are provided as input to the first level. (b) For each random SVM layer, we train a linear SVM using the transformed data manifold, combining the original features and random projections of previous layers' predictions.

Figure 2(b) visualizes one typical layer in the pipeline of our algorithm. Each layer takes the output of the previous layer (starting from x_1 = d for the first layer as our initial input), and feeds it to a standard linear SVM that gives the output o_1. In general, o_1 will not be a perfect prediction, but will be better than a random guess. We then use a random projection matrix W_{2,1} ∈ R^{D×C}, whose elements are sampled from N(0, 1), to project the output o_1 into the original feature space, in order to use this noisy prediction to modify the original features. Mathematically, the additively modified feature space after applying the linear SVM to obtain o_1 is:

x_2 = σ(d + β W_{2,1} o_1),

where β is a weight parameter that controls the degree to which we move the original data sample x_1, and σ(·) is the sigmoid function, which introduces non-linearity in a similar way as in multilayer perceptron models, and prevents the recursive structure from degenerating into a trivial linear model. In addition, such non-linearity, akin to neural networks, has desirable properties in terms of Gaussian complexity and generalization bounds [1]. Intuitively, the random projection aims to push data from different classes towards different directions, so that the resulting features are more likely to be linearly separable.
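The intuition that random directions push the classes apart can be checked numerically: columns of a Gaussian random matrix in a high-dimensional space have small pairwise cosine similarity. A quick NumPy check (toy dimensions, not tied to any experiment in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 1000, 10                       # high-dimensional feature space, 10 classes

W = rng.standard_normal((D, C))       # random projection matrix, entries from N(0, 1)
Wn = W / np.linalg.norm(W, axis=0)    # normalize each column

cos = Wn.T @ Wn                       # pairwise cosine similarities between columns
off_diag = np.abs(cos - np.eye(C)).max()
print(off_diag)                       # small for large D (on the order of 1/sqrt(D))
```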
The sigmoid function controls the scale of the resulting features, and at the same time prevents the random projection from being “too confident” about some data points, as the prediction of the lower layer is still imperfect. An important note is that, when the dimension D of the feature space is relatively large, the column vectors of W_l are very likely to be approximately orthogonal, a fact known as the quasi-orthogonality property of high-dimensional spaces [16]. At the same time, the column vectors correspond to the per-class offsets applied to the original sample d if the output were close to ideal (i.e. o_l = e_c, where e_c is the one-hot encoding representing class c), so the fact that they are approximately orthogonal means that (with high probability) they push the per-class manifolds apart. The training of the R2SVM is then carried out in a purely feed-forward way. Specifically, we train a linear SVM for the l-th layer, and then compute the input of the next layer as the addition of the original feature space and a random projection of the previous layers' outputs, passed through a simple sigmoid function:

o_l = θ_l^T x_l
x_{l+1} = σ(d + β W_{l+1} [o_1^T, o_2^T, · · · , o_l^T]^T)

where θ_l are the linear SVM parameters trained with x_l, and W_{l+1} is the concatenation of l random projection matrices [W_{l+1,1}, W_{l+1,2}, · · · , W_{l+1,l}], one for each previous layer, each with entries sampled from N(0, 1). Following [10], for each layer we use the outputs from all lower modules, instead of only the immediately lower module. A chief difference of our proposed method from previous approaches is that, instead of concatenating predictions with the raw input data to form the new expanded input data, we use the predictions to modify the features in the original space with a non-linear transformation. As will be shown in the next section, experimental results demonstrate that this approach is superior to simple concatenation in terms of classification performance.
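The feed-forward recursion above can be sketched in a few lines of NumPy. This is a hedged simplification: a ridge-regression one-vs-all classifier stands in for the hinge-loss linear SVM, and `r2svm_train`, `fit_linear`, and the toy data are illustrative names, not part of the paper's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_linear(X, Y, lam=1e-2):
    """Ridge-regression one-vs-all classifier, a stand-in for the linear SVM."""
    D = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ Y)  # theta: (D, C)

def r2svm_train(d, Y, L=3, beta=0.1, seed=0):
    """Feed-forward training: o_l = theta_l^T x_l,
    x_{l+1} = sigmoid(d + beta * W_{l+1} [o_1; ...; o_l])."""
    rng = np.random.default_rng(seed)
    N, D = d.shape
    C = Y.shape[1]
    x = d.copy()
    outputs, thetas, Ws = [], [], []
    for _ in range(L):
        theta = fit_linear(x, Y)              # train this layer's linear classifier
        o = x @ theta                         # per-sample class scores, shape (N, C)
        thetas.append(theta)
        outputs.append(o)
        W = rng.standard_normal((D, C * len(outputs)))  # one random block per previous layer
        Ws.append(W)
        x = sigmoid(d + beta * np.concatenate(outputs, axis=1) @ W.T)
    return thetas, Ws

# toy data: two Gaussian blobs with one-hot labels
rng = np.random.default_rng(1)
d = np.vstack([rng.normal(-1, 1, (50, 5)), rng.normal(1, 1, (50, 5))])
Y = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])
thetas, Ws = r2svm_train(d, Y, L=3)
print(len(thetas), thetas[0].shape)
```

Each layer only solves a convex problem on the transformed features, matching the feed-forward training described above; beta = 0.1 mirrors the 1/10 used in the experiments.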
3.2 On the Randomness in R2SVM

The motivation behind our method is that projections of previous predictions help to move apart the manifolds that belong to each class in a recursive fashion, in order to achieve better linear separability (Figure 1 shows a vision example separating different image patches). Specifically, consider a two-class problem which is not linearly separable. The following lemma illustrates that, given an oracle prediction of the labels, it is possible to add an offset to each class to “pull” the manifolds apart with this new architecture, and to guarantee an improvement on the training set if we assume perfect labels.

Lemma 3.1 Let T be a set of N tuples (d(i), y(i)), where d(i) ∈ R^D is the feature vector, and y(i) ∈ {1, . . . , C} is the class label corresponding to the i-th sample. Let θ ∈ R^{D×C} be the corresponding linear SVM solution with objective function value f_{T,θ}. Then, there exist w_i ∈ R^D for i ∈ {1, . . . , C} such that the translated set T′, defined as (d(i) + w_{y(i)}, y(i)), has a linear SVM solution θ′ which achieves a better optimum f_{T′,θ′} < f_{T,θ}.

Proof Let θ_i be the i-th column of θ (which corresponds to the one-vs-all classifier for class i). Define w_i = θ_i / ||θ_i||_2^2, so that θ_{y(i)}^T w_{y(i)} = 1. Then we have

max(0, 1 − θ_{y(i)}^T (d(i) + w_{y(i)})) = max(0, 1 − (θ_{y(i)}^T d(i) + 1)) ≤ max(0, 1 − θ_{y(i)}^T d(i)),

which leads to f_{T′,θ} ≤ f_{T,θ}. Since θ′ is defined to be the optimum for the set T′, f_{T′,θ′} ≤ f_{T′,θ}, which concludes the proof. ■

Lemma 3.1 holds for any monotonically decreasing loss function (in particular, for the hinge loss of the SVM), and motivates our search for a transform of the original features that achieves linear separability under the guidance of SVM predictions. Note that we would achieve perfect classification under the assumption that we have oracle labels, while we only have noisy predictions ŷ(i) for each class at testing time.
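The translation used in the proof of Lemma 3.1 can be verified numerically on the true-class hinge terms it manipulates. This is a toy check with random data and an arbitrary (not necessarily optimal) θ; the regularizer and the negative-class terms of the full one-vs-all objective are ignored, as in the proof:

```python
import numpy as np

def hinge(theta, X, y):
    """Sum of true-class hinge terms max(0, 1 - theta_{y_i}^T x_i)."""
    scores = np.einsum('ij,ij->i', X, theta[:, y].T)  # theta_{y_i}^T x_i for each sample
    return np.maximum(0.0, 1.0 - scores).sum()

rng = np.random.default_rng(0)
N, D, C = 40, 6, 3
X = rng.standard_normal((N, D))
y = rng.integers(0, C, N)
theta = rng.standard_normal((D, C))        # an arbitrary one-vs-all classifier

# translate each sample by w_{y_i} = theta_{y_i} / ||theta_{y_i}||_2^2, as in the proof
w = theta / (np.linalg.norm(theta, axis=0) ** 2)
X_shift = X + w[:, y].T

# theta_y^T w_y = 1, so every true-class score rises by exactly 1 and the loss cannot grow
assert hinge(theta, X_shift, y) <= hinge(theta, X, y)
print(hinge(theta, X, y), hinge(theta, X_shift, y))
```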
Under such noisy predictions, a deterministic choice of w_i, especially linear combinations of the data as in the proof of Lemma 3.1, suffers from over-confidence in the labels and may add little benefit to the learned linear SVMs. A first choice to avoid degenerate results is to take random weights. This enables us to use label-relevant information in the predictions, while at the same time decorrelating it from the original input d. Surprisingly, as shown in Figure 4(a), randomness achieves a significant performance gain in contrast to the “optimal” direction given by Lemma 3.1 (which degenerates due to imperfect predictions), or alternative stacking strategies such as concatenation as in [10]. We also note that, beyond sampling projection matrices from a zero-mean Gaussian distribution, a biased sampling that favors directions near the “optimal” direction may also work, but the degree of bias would be empirically difficult to determine and may be data-dependent. In general, we aim to avoid supervision in the projection parameters, as optimizing the weights jointly would defeat the purpose of having a computationally efficient method, and would perhaps increase training accuracy at the expense of over-fitting. The risk of over-fitting is also lower this way, as we do not increase the dimensionality of the input space and we do not learn the matrices W_l, which means we pass a weak signal from layer to layer. Also, training the Random Recursive SVM is carried out in a feed-forward way, where each step involves a convex optimization problem that can be solved efficiently.

3.3 Synthetic examples

To visually show the effectiveness of our approach in learning non-linear SVM classifiers without kernels, we apply our algorithm to two synthetic examples, neither of which can be linearly separated. The first example contains two classes distributed in a two-moon shape, and the second contains data distributed as two more complex spirals.
Figure 3 visualizes the classification hyperplane at different stages of our algorithm. The first layer of our approach is identical to the linear SVM, which is not able to separate the data well. However, when classifiers are recursively stacked in our approach, the classification hyperplane is able to adapt to the non-linear characteristics of the two classes.

4 Experiments

In this section we empirically evaluate our method and support our claims: (1) for low-dimensional features, linear SVMs suffer from their limited representation power, while R2SVMs significantly improve performance; (2) for high-dimensional features, and especially when faced with a limited amount of training data, R2SVMs exhibit better generalization power than conventional kernelized non-linear SVMs; and (3) the random, feed-forward learning scheme is able to achieve state-of-the-art performance without complex fine-tuning.

Figure 3: Classification hyperplanes from different stages of our algorithm: first-layer, second-layer, and final-layer outputs. (a)-(c) show the two-moon data and (d)-(f) show the spiral data.

Figure 4: Results on CIFAR-10. (a) Accuracy versus number of layers on CIFAR-10 for Random Recursive SVM with all the training data and a codebook size of 50, for a baseline where the output of a classifier is concatenated with the input feature space, and for a deterministic version of recursive SVM where the projections are as in the proof of Lemma 3.1. (b) Accuracy versus codebook size on CIFAR-10 for linear SVM, RBF SVM, and our proposed method.

We describe the experimental results on two well-known classification benchmarks: CIFAR-10 and TIMIT.
The CIFAR-10 dataset contains a large amount of training/testing data focusing on object classification. TIMIT is a speech database that contains two orders of magnitude more training samples than CIFAR-10, and a much larger output label space. Recall that our method relies on two parameters: β, the factor that controls how much to shift the original feature space, and C, the regularization parameter of the linear SVM trained at each layer. β is set to 1/10 for all the experiments, a value experimentally found to work well for one of the CIFAR-10 configurations. C controls the regularization of each layer and is an important parameter: setting it too high will yield overfitting as the number of layers is increased. As a result, we learned this parameter via cross-validation for each configuration, which is the usual practice of other approaches. Lastly, for each layer we sample a new random matrix W_l. As a result, even if the training and testing sets are fixed, randomness still exists in our algorithm. Although one may expect the performance to fluctuate from run to run, in practice we never observe a standard deviation larger than 0.25 (and typically less than 0.1) in classification accuracy over multiple runs of each experiment.

CIFAR-10 The CIFAR-10 dataset contains 10 object classes with a fair number of training examples per class (5000) and images of small size (32x32 pixels). For this dataset, we follow the standard pipeline defined in [4]: dense 6x6 local patches with ZCA whitening are extracted with stride 1, and thresholding coding with α = 0.25 is adopted for encoding. The codebook is trained with OMP-1. The features are then average-pooled on a 2 × 2 grid to form the global image representation. We tested three classifiers: linear SVM, RBF-kernel SVM, and the Random Recursive SVM model as introduced in Section 3.
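The thresholding-coding step of this pipeline, in the spirit of [4], can be sketched as a soft threshold on codebook responses. This is a hedged sketch: patch extraction, ZCA whitening, OMP-1 codebook training, and pooling are omitted, and the codebook here is random:

```python
import numpy as np

def threshold_encode(X, codebook, alpha=0.25):
    """Soft-threshold encoding f_k(x) = max(0, w_k^T x - alpha), a sketch of
    the 'thresholding coding' used in the pipeline above."""
    return np.maximum(0.0, X @ codebook.T - alpha)

rng = np.random.default_rng(0)
patches = rng.standard_normal((100, 36))   # 100 whitened 6x6 patches, flattened
codebook = rng.standard_normal((50, 36))   # 50 atoms, matching the 50-code runs
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit-norm atoms

codes = threshold_encode(patches, codebook)
print(codes.shape, bool((codes >= 0).all()))
```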
As shown in Figure 4(a), the performance increases almost monotonically as we stack more layers in R2SVM. Also, stacking SVMs by concatenating the output with the input feature space does not yield much gain beyond one layer (which is a linear SVM), and neither does a deterministic version of recursive SVM in which a projection matrix as in the proof of Lemma 3.1 is used. For the R2SVM, in most cases the performance asymptotically converges within 30 layers. Note that training each layer involves training a linear SVM, so the computational complexity is simply linear in the depth of our model. In contrast, training deep learning models with many hidden layers may be significantly harder, partially due to the lack of supervised information for their hidden layers.

Table 1: Results on CIFAR-10, with different codebook sizes (hence feature dimensions).

Method       Tr. Size   Code. Size   Acc.
Linear SVM   All        50           64.7%
RBF SVM      All        50           74.4%
R2SVM        All        50           69.3%
DCN          All        50           67.2%
Linear SVM   All        1600         79.5%
RBF SVM      All        1600         79.0%
R2SVM        All        1600         79.7%
DCN          All        1600         78.1%

Table 2: Results on CIFAR-10, with 25 training samples per class.

Method       Tr. Size   Code. Size   Acc.
Linear SVM   25/class   50           41.3%
RBF SVM      25/class   50           42.2%
R2SVM        25/class   50           42.8%
DCN          25/class   50           40.7%
Linear SVM   25/class   1600         44.1%
RBF SVM      25/class   1600         41.6%
R2SVM        25/class   1600         45.1%
DCN          25/class   1600         42.7%

Figure 4(b) shows the effect that the feature dimensionality (controlled by the codebook size of OMP-1) has on the performance of the linear and non-linear classifiers, and Table 1 provides representative numerical results. In particular, when the codebook size is low, the assumption that we can approximate the non-linear decision function with a globally linear classifier fails, and in those cases the R2SVM and RBF SVM clearly outperform the linear SVM.
Moreover, as the codebook size grows, non-linear classifiers, represented by the RBF SVM in our experiments, suffer from the curse of dimensionality, partially due to the large dimensionality of the over-complete feature representation. In fact, as the dimensionality of the over-complete representation becomes too large, the RBF SVM starts performing worse than the linear SVM. For the linear SVM, increasing the codebook size improves its performance relative to non-linear classifiers, but additional gains can still be consistently obtained by the Random Recursive SVM method. Also note how our model outperforms DCN, another stacking architecture proposed in [10]. Similar to varying the codebook size, it is interesting to experiment with the number of training examples per class. When we use fewer training examples per class, little gain is obtained by classical RBF SVMs, and performance even drops when the feature dimension is too high (Table 2), while our Random Recursive SVM remains competitive and does not overfit more than any baseline. This again suggests that our proposed method may generalize better than the RBF SVM, which is a desirable property when the number of training examples is small with respect to the dimensionality of the feature space, a case of interest in many computer vision applications. In general, our method is able to combine the advantages of both linear and non-linear SVMs: it has higher representation power than a linear SVM, providing consistent performance gains, and at the same time has better robustness against overfitting. It is also worth pointing out that R2SVM is highly efficient, since each layer is a simple linear SVM whose evaluation can be carried out by simple matrix multiplication. On the other hand, non-linear SVMs like the RBF SVM may take much longer to run, especially for large-scale data, where special care has to be taken [6].

TIMIT Finally, we report our experiments using the popular speech database TIMIT.
The speech data is analyzed using a 25-ms Hamming window with a 10-ms fixed frame rate. We represent the speech using first- to 12th-order Mel frequency cepstral coefficients (MFCCs) and energy, along with their first and second temporal derivatives. The training set consists of 462 speakers, with a total of 1.1 million frames in the training data, making classical kernel SVMs virtually impossible to train. The development set contains 50 speakers, with a total of 120K frames, and is used for cross-validation. Results are reported using the standard 24-speaker core test set consisting of 192 sentences with 7333 phone tokens and 57920 frames. The data is normalized to have zero mean and unit variance. All experiments used a context window of 11 frames. This gives a total of 39 × 11 = 429 elements in each feature vector. We used 183 target class labels (i.e., three states for each of the 61 phones), typically called “phone states”, with a one-hot encoding. The pipeline is otherwise unchanged from the previous dataset; however, we did not apply pooling, and instead coded the whole 429-dimensional vector with dictionaries of 2000 and 8000 elements found with OMP-1, with the same parameter α as in the vision tasks.

Table 3: Performance comparison on TIMIT.

Method                     Phone state accuracy
Linear SVM                 50.1% (2000 codes), 53.5% (8000 codes)
R2SVM                      53.5% (2000 codes), 55.1% (8000 codes)
DCN, learned per-layer     48.5%
DCN, jointly fine-tuned    54.3%

The competitive results of a framework known in vision adapted to speech [22], as shown in Table 3, are interesting in their own right, as the optimization framework for the linear SVM is well understood, and the dictionary learning and encoding steps are almost trivial and scale well with the amounts of data available in typical speech tasks. On the other hand, our R2SVM boosts performance quite significantly, similar to what we observed on the other datasets.
In Table 3 we also report recent work on this dataset [10], which uses a multilayer perceptron with one hidden layer and a linear output, and stacks each block on top of the other. In their experiments, the representation of the speech signal is not sparse, and is instead based on a Restricted Boltzmann Machine, which is more time-consuming to learn. In addition, only when the network weights are jointly optimized (fine-tuned), which requires solving a non-convex problem, does the accuracy reach the state-of-the-art performance of 54.3%. Our method does not include this step, which could be added as future work; we thus think the fairest comparison of our result is to the per-layer DCN performance. In all the experiments above, we have observed two advantages of R2SVM. First, it provides a consistent improvement over the linear SVM. Second, it can offer better generalization ability than non-linear SVMs, especially when the ratio of dimensionality to the number of training samples is large. These advantages, combined with the fact that R2SVM is efficient in both training and testing, suggest that it could be adopted as an improvement over the existing classification pipeline in general. We also note that in the current work we have not employed fine-tuning techniques similar to the one employed in the architecture of [10]. Fine-tuning of the latter architecture has accounted for between 10% and 20% error reduction, and reduces the depth needed to achieve a fixed level of recognition accuracy. Developing fine-tuning for our model is expected to improve recognition accuracy further, and is of interest for future research. However, even without fine-tuning, the recognition accuracy is still shown to consistently improve until convergence, showing the robustness of the proposed method.

5 Conclusions and Future Work

In this paper, we investigated low-level vision and audio representations.
We combined the simplicity of linear SVMs with the power derived from deep architectures, and proposed a new stacking technique for building a better classifier, using linear SVMs as the base building blocks and employing a random non-linear projection to add flexibility to the model. Our work is partially motivated by the recent trend of using coding techniques as feature representations with relatively large dictionaries. The chief advantage of our method lies in the fact that it learns non-linear classifiers without the need for kernel design, while keeping the efficiency of linear SVMs. Experimental results on vision and speech datasets showed that the method provides consistent improvements over linear baselines, even with no learning of the model parameters. The convexity of our model could lead to better theoretical analysis of such deep structures in terms of generalization gap, adds interesting opportunities for learning using large computer clusters, and would potentially help in understanding the nature of other deep learning approaches, which is the main interest of future research.

References

[1] P L Bartlett and S Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. The Journal of Machine Learning Research, 3:463–482, 2003.
[2] O Boiman, E Shechtman, and M Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[3] L Bourdev, S Maji, T Brox, and J Malik. Detecting people using mutually consistent poselet activations. In ECCV, 2010.
[4] A Coates and A Ng. The importance of encoding versus training with sparse coding and vector quantization. In ICML, 2011.
[5] W Cohen and V R de Carvalho. Stacked sequential learning. In IJCAI, 2005.
[6] R Collobert, F Sinz, J Weston, and L Bottou. Trading convexity for scalability. In ICML, 2006.
[7] N Dalal and B Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[8] S Davis and P Mermelstein.
Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech and Signal Processing, 28(4):357–366, 1980.
[9] L Deng, M L Seltzer, D Yu, A Acero, A Mohamed, and G Hinton. Binary coding of speech spectrograms using a deep auto-encoder. In Interspeech, 2010.
[10] L Deng and D Yu. Deep convex network: A scalable architecture for deep learning. In Interspeech, 2011.
[11] L Deng, D Yu, and J Platt. Scalable stacking and learning for building deep architectures. In ICASSP, 2012.
[12] L Fei-Fei and P Perona. A bayesian hierarchical model for learning natural scene categories. In CVPR, 2005.
[13] G Hinton, L Deng, D Yu, G Dahl, A Mohamed, N Jaitly, A Senior, V Vanhoucke, P Nguyen, T Sainath, and B Kingsbury. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 28:82–97, 2012.
[14] G Hinton and R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504, 2006.
[15] K Jarrett, K Kavukcuoglu, M A Ranzato, and Y LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
[16] T Kohonen. Self-Organizing Maps. Springer-Verlag, 2001.
[17] Y Lin, T Zhang, S Zhu, and K Yu. Deep coding network. In NIPS, 2010.
[18] D Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[19] S Maji, A C Berg, and J Malik. Classification using intersection kernel support vector machines is efficient. In CVPR, pages 1–8. IEEE, 2008.
[20] A Mohamed, D Yu, and L Deng. Investigation of full-sequence training of deep belief networks for speech recognition. In Interspeech, 2010.
[21] B Olshausen and D J Field. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
[22] O Vinyals and L Deng. Are Sparse Representations Rich Enough for Acoustic Modeling?
In Interspeech, 2012.
[23] D H Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, 1992.
[24] J Yang, K Yu, and Y Gong. Linear spatial pyramid matching using sparse coding for image classification. In CVPR, 2009.
[25] J Yang, K Yu, and T Huang. Efficient highly over-complete sparse coding using a mixture model. In ECCV, 2010.
[26] K Yu and T Zhang. Improved local coordinate coding using local tangents. In ICML, 2010.
Scaled Gradients on Grassmann Manifolds for Matrix Completion

Thanh T. Ngo and Yousef Saad
Department of Computer Science and Engineering
University of Minnesota, Twin Cities
Minneapolis, MN 55455
thango@cs.umn.edu, saad@cs.umn.edu

Abstract

This paper describes gradient methods based on a scaled metric on the Grassmann manifold for low-rank matrix completion. The proposed methods significantly improve canonical gradient methods, especially on ill-conditioned matrices, while maintaining established global convergence and exact recovery guarantees. A connection between a form of subspace iteration for matrix completion and the scaled gradient descent procedure is also established. The proposed conjugate gradient method based on the scaled gradient outperforms several existing algorithms for matrix completion and is competitive with recently proposed methods.

1 Introduction

Let A ∈ R^{m×n} be a rank-r matrix, where r ≪ m, n. The matrix completion problem is to reconstruct A given a subset of the entries of A. This problem has attracted much attention recently [8, 14, 13, 18, 21] because of its broad applications, e.g., in recommender systems, structure from motion, and multitask learning (see e.g. [19, 9, 2]).

1.1 Related work

Let Ω = {(i, j) | A_ij is observed}. We define P_Ω(A) ∈ R^{m×n} to be the projection of A onto the observed entries Ω: P_Ω(A)_ij = A_ij if (i, j) ∈ Ω, and P_Ω(A)_ij = 0 otherwise. If the rank is unknown and there is no noise, the problem can be formulated as:

Minimize rank(X) subject to P_Ω(X) = P_Ω(A).  (1)

Rank minimization is NP-hard, so much work has been done on solving a convex relaxation in which the rank is approximated by the nuclear norm. Under some conditions, the solution of the relaxed problem can be shown to be the exact solution of the rank minimization problem with overwhelming probability [8, 18].
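For concreteness, the projection P_Ω and the residual it induces can be written in a few lines of NumPy. This is a minimal sketch with names of our own choosing (the paper's experiments are in Matlab with sparse mex-kernels; here we use dense arrays):

```python
import numpy as np

def P_Omega(X, mask):
    """Project X onto the observed entries: (P_Omega(X))_ij = X_ij on Omega, 0 elsewhere."""
    return np.where(mask, X, 0.0)

def observed_residual(X, A, mask):
    """Frobenius norm of P_Omega(X) - P_Omega(A)."""
    return np.linalg.norm(P_Omega(X - A, mask))

# Tiny demo: a rank-3 matrix observed on a random 50% mask.
rng = np.random.default_rng(0)
m, n, r = 30, 20, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5
```

Any X agreeing with A on Ω has zero residual, which is exactly the feasibility condition P_Ω(X) = P_Ω(A) in (1).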
Usually, algorithms to minimize the nuclear norm use the Singular Value Decomposition (SVD) iteratively, specifically the singular value thresholding operator [7, 15, 17], which makes them expensive. If the rank is known, we can formulate the matrix completion problem as follows:

Find the matrix X that minimizes ‖P_Ω(X) − P_Ω(A)‖_F subject to rank(X) = r.  (2)

Keshavan et al. [14] have proved that exact recovery can be obtained with high probability by solving a non-convex optimization problem. A number of algorithms based on non-convex formulations use the framework of optimization on matrix manifolds [14, 22, 6]. Keshavan et al. [14] propose a steepest descent procedure on the product of Grassmann manifolds of r-dimensional subspaces. Vandereycken [22] discusses a conjugate gradient algorithm on the Riemannian manifold of rank-r matrices. Boumal and Absil [6] consider a trust region method on the Grassmann manifold. Although they do not solve an optimization problem on a matrix manifold, Wei et al. [23] perform a low-rank matrix factorization based on a successive over-relaxation iteration. Also, Srebro and Jaakkola [21] discuss SVD-EM, one of the early fixed-rank methods, which uses truncated singular value decomposition iteratively. Dai et al. [10] recently proposed an interesting approach that does not use the Frobenius norm of the residual as the objective function but instead uses the consistency between the current estimate of the column space (or row space) and the observed entries. Guaranteed performance for this method has been established for rank-1 matrices. In this paper, we will focus on the case when the rank r is known and solve problem (2). In fact, even when the rank is unknown, the sparse matrix consisting of the observed entries can give us a very good approximation of the rank based on its singular spectrum [14]. Also, a few values of the rank can be tried and the best one selected.
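The rank-from-spectrum heuristic mentioned above can be illustrated as follows. This is our own minimal NumPy sketch, not the trimming-based procedure of [14]: the zero-filled observed matrix keeps the r dominant singular values well separated from the noise bulk, so the largest gap between consecutive singular values marks a plausible rank.

```python
import numpy as np

def guess_rank(A_obs, max_rank=10):
    """Guess the rank from the singular spectrum of the zero-filled observed matrix:
    return the position of the largest gap between consecutive singular values."""
    s = np.linalg.svd(A_obs, compute_uv=False)[:max_rank + 1]
    gaps = s[:-1] / np.maximum(s[1:], 1e-12)  # ratios of consecutive singular values
    return int(np.argmax(gaps)) + 1

rng = np.random.default_rng(0)
m = n = 300
A = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))  # rank-3 matrix
mask = rng.random((m, n)) < 0.5                                # 50% observed
A_obs = np.where(mask, A, 0.0)
```

With half the entries of this 300×300 rank-3 matrix observed, the gap between the third and fourth singular values of A_obs dominates, so the heuristic recovers the rank.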
Moreover, the singular spectrum is revealed during the iterations, so many fixed-rank methods can also be adapted to find the rank of the matrix.

1.2 Our contribution

OptSpace [14] is an efficient algorithm for low-rank matrix completion with global convergence and exact recovery guarantees. We propose using a non-canonical metric on the Grassmann manifold to improve OptSpace while maintaining its appealing properties. The non-canonical metric introduces a scaling factor to the gradient of the objective function which can be interpreted as an adaptive preconditioner for the matrix completion problem. The gradient descent procedure using the scaled gradient is related to a form of subspace iteration for matrix completion. Each iteration of the subspace iteration is inexpensive and the procedure converges very rapidly. The connection between the two methods leads to some improvements and to efficient implementations for both of them. Throughout the paper, A_Ω will be a shorthand for P_Ω(A), and qf(U) is the Q factor in the QR factorization of U, which gives an orthonormal basis for span(U). Also, P_Ω̄(·) denotes the projection onto the complement of Ω.

2 Subspace iteration for incomplete matrices

We begin with a form of subspace iteration for matrix completion, depicted in Algorithm 1.

Algorithm 1 SUBSPACE ITERATION FOR INCOMPLETE MATRICES.
Input: Matrix A_Ω, Ω, and the rank r.
Output: Left and right dominant subspaces U and V and associated singular values.
1: [U_0, Σ_0, V_0] = svd(A_Ω, r), S_0 = Σ_0 // Initialize U, V and Σ
2: for i = 0, 1, 2, ... do
3:   X_{i+1} = P_Ω̄(U_i S_i V_i^T) + A_Ω // Obtain new estimate of A
4:   U_{i+1} = X_{i+1} V_i; V_{i+1} = X_{i+1}^T U_{i+1} // Update subspaces
5:   U_{i+1} = qf(U_{i+1}); V_{i+1} = qf(V_{i+1}) // Re-orthogonalize bases
6:   S_{i+1} = U_{i+1}^T X_{i+1} V_{i+1} // Compute new S for next estimate of A
7:   if condition then
8:     // Diagonalize S to obtain current estimates of singular vectors and values
9:     [R_U, Σ_{i+1}, R_V] = svd(S_{i+1}); U_{i+1} = U_{i+1} R_U; V_{i+1} = V_{i+1} R_V; S_{i+1} = Σ_{i+1}
10:  end if
11: end for

If the matrix A is fully observed, U and V can be randomly initialized, line 3 is not needed, and in lines 4 and 6 we use A instead of X_{i+1} to update the subspaces. In this case, we have the classical two-sided subspace iteration for the singular value decomposition. Lines 6-9 correspond to a Rayleigh-Ritz projection to obtain current approximations of the singular vectors and singular values. It is known that if the initial columns of U and V are not orthogonal to any of the first r left and right singular vectors of A respectively, the algorithm converges to the dominant subspaces of A [20, Theorem 5.1]. Back to the case when the matrix A is not fully observed, the basic idea of Algorithm 1 is to use an approximation of A in each iteration to update the subspaces U and V; from the new U and V, we can then obtain a better approximation of A for the next iteration. Line 3 computes a new estimate of A by replacing all entries of U_i S_i V_i^T at the known positions by the true values in A. The update in line 6 computes the new S_{i+1} based on the recently computed subspaces. Diagonalizing S_{i+1} (lines 7-10) is optional for matrix completion. This step provides current approximations of the singular values, which can be useful for several purposes such as regularization or a convergence test. It comes with very little additional overhead, since S_{i+1} is a small r×r matrix. Each iteration of Algorithm 1 can be seen as an approximation of an iteration of SVD-EM, where a few matrix multiplications are used to update U and V instead of using a truncated SVD to compute the dominant subspaces of X_{i+1}. Recall that computing an SVD, e.g. by a Lanczos-type procedure, requires several, possibly a large number of, matrix multiplications of this type. We now discuss efficient implementations of Algorithm 1 and modifications to speed up its convergence. First, the explicit computation of X_{i+1} in line 3 is not needed. Let X̂_i = U_i S_i V_i^T.
Then X_{i+1} = P_Ω̄(U_i S_i V_i^T) + A_Ω = X̂_i + E_i, where E_i = P_Ω(A − X̂_i) is a sparse matrix of errors at the known entries, which can be computed efficiently by exploiting the structure of X̂_i. Assume that each S_i is not singular (the non-singularity of S_i will be discussed in Section 4). Then if we post-multiply the update of U in line 4 by S_i^{-1}, the subspace remains the same and the update becomes:

U_{i+1} = X_{i+1} V_i S_i^{-1} = (X̂_i + E_i) V_i S_i^{-1} = U_i + E_i V_i S_i^{-1}.  (3)

The update of V can also be efficiently implemented. Here, we make a slight change, namely V_{i+1} = X_{i+1}^T U_i (U_i instead of U_{i+1}). We observe that the convergence speed remains roughly the same (when A is fully observed, the algorithm is a slower version of subspace iteration where the convergence rate is halved). With this change, we can derive an update of V similar to (3):

V_{i+1} = V_i + E_i^T U_i S_i^{-T}.  (4)

We will point out in Section 3 that the updating terms E_i V_i S_i^{-1} and E_i^T U_i S_i^{-T} are related to the gradients of a matrix completion objective function on the Grassmann manifold. As a result, to improve the convergence speed, we can add an adaptive step size t_i to the process, as follows: U_{i+1} = U_i + t_i E_i V_i S_i^{-1} and V_{i+1} = V_i + t_i E_i^T U_i S_i^{-T}. This is equivalent to using X̂_i + t_i E_i as the estimate of A in each iteration. The step size can be computed using a heuristic adapted from [23]. Initially, t is set to some initial value t_0 (t_0 = 1 in our experiments). If the error ‖E_i‖_F decreases compared to the previous step, t is increased by a factor α. Conversely, if the error increases, indicating that the step is too big, t is reset to t = t_0. The matrix S_{i+1} can be computed efficiently by exploiting the low-rank structure and the sparsity:

S_{i+1} = (U_{i+1}^T U_i) S_i (V_i^T V_{i+1}) + t_i U_{i+1}^T E_i V_{i+1}.  (5)

There are also other ways to obtain S_{i+1} once U_{i+1} and V_{i+1} are determined, to improve the current approximation of A.
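Putting the updates (3)-(5) together with the step-size heuristic gives the following minimal dense NumPy sketch of the generic subspace iteration. The paper's implementation uses sparse Matlab mex-kernels; the cap on the step size (t_max) is a safeguard of our own, not part of the heuristic as stated:

```python
import numpy as np

def subspace_iteration(A_obs, mask, r, n_iter=300, t0=1.0, alpha=1.05, t_max=2.0):
    """Sketch of the generic subspace iteration:
    U <- qf(U + t E V S^{-1}),  V <- qf(V + t E^T U S^{-T}),
    with S updated by the cheap formula (5)."""
    U, s, Vt = np.linalg.svd(A_obs, full_matrices=False)
    U, V, S = U[:, :r], Vt[:r].T, np.diag(s[:r])
    t, prev_err = t0, np.inf
    for _ in range(n_iter):
        E = mask * (A_obs - U @ S @ V.T)        # E_i = P_Omega(A - X_hat_i), sparse in practice
        err = np.linalg.norm(E)
        t = min(t * alpha, t_max) if err < prev_err else t0  # heuristic adapted from [23]
        prev_err = err
        Si = np.linalg.inv(S)
        U1 = np.linalg.qr(U + t * E @ V @ Si)[0]       # update (3) with step size
        V1 = np.linalg.qr(V + t * E.T @ U @ Si.T)[0]   # update (4) with step size
        S = (U1.T @ U) @ S @ (V.T @ V1) + t * U1.T @ E @ V1  # update (5)
        U, V = U1, V1
    return U, S, V

# Demo: recover a well-sampled rank-2 matrix.
rng = np.random.default_rng(1)
m = n = 50
r = 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.8
U, S, V = subspace_iteration(np.where(mask, A, 0.0), mask, r)
```

On this easy instance (80% of a 50×50 rank-2 matrix observed) the observed-entry residual drops by several orders of magnitude.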
For example, we can solve the following quadratic program [14]:

S_{i+1} = argmin_S ‖P_Ω(A − U_{i+1} S V_{i+1}^T)‖_F^2.  (6)

We summarize the discussion in Algorithm 2. A sufficiently small error ‖E_i‖_F can be used as a stopping criterion.

Algorithm 2 GENERIC SUBSPACE ITERATION FOR INCOMPLETE MATRICES.
Input: Matrix A_Ω, Ω, and number r.
Output: Left and right dominant subspaces U and V and associated singular values.
1: Initialize orthonormal matrices U_0 ∈ R^{m×r} and V_0 ∈ R^{n×r}.
2: for i = 0, 1, 2, ... do
3:   Compute E_i and an appropriate step size t_i
4:   U_{i+1} = U_i + t_i E_i V_i S_i^{-1} and V_{i+1} = V_i + t_i E_i^T U_i S_i^{-T}
5:   Orthonormalize U_{i+1} and V_{i+1}
6:   Find S_{i+1} such that P_Ω(U_{i+1} S_{i+1} V_{i+1}^T) is close to A_Ω (e.g. via (5) or (6)).
7: end for

Algorithm 1 can be shown to be equivalent to the LMaFit algorithm proposed in [23]. The authors of [23] also obtain results on the local convergence of LMaFit. We will pursue a different approach here. The updates (3) and (4) are reminiscent of the gradient descent steps for minimizing the matrix completion error on the Grassmann manifold introduced in [14], and the next section discusses the connection to optimization on the Grassmann manifold.

3 Optimization on the Grassmann manifold

In this section, we show that using a non-canonical Riemannian metric on the Grassmann manifold, the gradient of the same objective function as in [14] has a form similar to (3) and (4). Based on this, improvements to the gradient descent algorithms can be made and exact recovery results similar to those of [14] can be maintained. The reader is referred to [1, 11] for details on optimization frameworks on matrix manifolds.

3.1 Gradients on the Grassmann manifold for the matrix completion problem

Let G(m, r) be the Grassmann manifold in which each point corresponds to a subspace of dimension r in R^m.
One of the results of [14] is that, under a few assumptions (to be addressed in Section 4), one can obtain with high probability the exact matrix A by minimizing a regularized version of the function F: G(m, r) × G(n, r) → R defined below:

F(U, V) = min_{S ∈ R^{r×r}} F(U, S, V),  (7)

where F(U, S, V) = (1/2)‖P_Ω(A − U S V^T)‖_F^2, and U ∈ R^{m×r} and V ∈ R^{n×r} are orthonormal matrices. Here, we abuse notation by denoting by U and V both the orthonormal matrices and the points on the Grassmann manifold which they span. Note that F only depends on the subspaces spanned by the matrices U and V. The function F(U, V) can be easily evaluated by solving a quadratic minimization problem of the form (6). If G(m, r) is endowed with the canonical inner product ⟨W, W′⟩ = Tr(W^T W′), where W and W′ are tangent vectors of G(m, r) at U (i.e. W, W′ ∈ R^{m×r} such that W^T U = 0 and W′^T U = 0), and similarly for G(n, r), the gradients of F(U, V) on the product manifold are:

grad F_U(U, V) = (I − U U^T) P_Ω(U S V^T − A) V S^T,  (8)
grad F_V(U, V) = (I − V V^T) P_Ω(U S V^T − A)^T U S.  (9)

In the above formulas, (I − U U^T) and (I − V V^T) are the projections of the derivatives P_Ω(U S V^T − A) V S^T and P_Ω(U S V^T − A)^T U S onto the tangent space of the manifold at (U, V). Notice that the derivative terms are very similar to the updates in (3) and (4). The difference is in the scaling factors: grad F_U and grad F_V use S^T and S, while those in Algorithm 2 use S^{-1} and S^{-T}. Assume that S is a diagonal matrix, which can always be arranged by rotating U and V appropriately. F(U, V) changes more rapidly when the columns of U and V corresponding to the larger entries of S are changed. The rate of change of F is approximately proportional to S_{ii}^2 when the i-th columns of U and V are changed; in other words, S^2 gives us approximate second-order information about F at the current point (U, V).
This suggests that the level set of F should be similar to an "ellipse" with the shorter axes corresponding to the larger values of S. It is therefore compelling to use a scaled metric on the Grassmann manifold. Consider the inner product ⟨W, W′⟩_D = Tr(D W^T W′), where D ∈ R^{r×r} is a symmetric positive definite matrix. We will derive the partial gradients of F on the Grassmann manifold endowed with this scaled inner product. According to [11], grads F_U is the tangent vector of G(m, r) at U such that

Tr(F_U^T W) = ⟨grads F_U, W⟩_D,  (10)

for all tangent vectors W at U, where F_U is the partial derivative of F with respect to U. Recall that the tangent vectors at U are those W such that W^T U = 0. The solution of (10) under the constraints W^T U = 0 and (grads F_U)^T U = 0 gives us the gradient based on the scaled metric, which we denote by grads F_U (and similarly grads F_V):

grads F_U(U, V) = (I − U U^T) F_U D^{-1} = (I − U U^T) P_Ω(U S V^T − A) V S D^{-1},  (11)
grads F_V(U, V) = (I − V V^T) F_V D^{-1} = (I − V V^T) P_Ω(U S V^T − A)^T U S D^{-1}.  (12)

Notice the additional scaling D appearing in these scaled gradients. Now if we use D = S^2 (still with the assumption that S is diagonal), as suggested by the arguments above on the approximate shape of the level set of F, we will have grads F_U(U, V) = (I − U U^T) P_Ω(U S V^T − A) V S^{-1} and grads F_V(U, V) = (I − V V^T) P_Ω(U S V^T − A)^T U S^{-1} (note that S depends on U and V). If S is not diagonalized, we use S S^T and S^T S to derive grads F_U and grads F_V respectively, and the scalings appear exactly as in (3) and (4):

grads F_U(U, V) = (I − U U^T) P_Ω(U S V^T − A) V S^{-1},  (13)
grads F_V(U, V) = (I − V V^T) P_Ω(U S V^T − A)^T U S^{-T}.  (14)

This scaling can be interpreted as an adaptive preconditioning step similar to those that are popular in the scientific computing literature [4]. As will be shown in our experiments, this scaled gradient direction outperforms canonical gradient directions, especially for ill-conditioned matrices.
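In NumPy, the scaled gradients (13)-(14) take only a few lines. This is a dense sketch with names of our own choosing:

```python
import numpy as np

def scaled_gradients(U, S, V, A_obs, mask):
    """Scaled gradients (13)-(14):
    grads_F_U = (I - U U^T) P_Omega(U S V^T - A) V S^{-1}
    grads_F_V = (I - V V^T) P_Omega(U S V^T - A)^T U S^{-T}"""
    R = mask * (U @ S @ V.T - A_obs)  # P_Omega(U S V^T - A)
    Si = np.linalg.inv(S)
    GU = R @ V @ Si
    GV = R.T @ U @ Si.T
    GU -= U @ (U.T @ GU)              # tangent-space projection (I - U U^T)
    GV -= V @ (V.T @ GV)
    return GU, GV
```

Two sanity checks: the gradients are always tangent (U^T grads_F_U = 0, V^T grads_F_V = 0), and at the factors of the exact rank-r matrix the observed residual vanishes, so both gradients are zero.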
The optimization framework on matrix manifolds allows one to define several elements of the manifold in a flexible way. Here, we use the scaled metric to obtain a good descent direction, while other operations on the manifold can be based on the canonical metric, which has simple and efficient computational forms. The next two sections describe algorithms using scaled gradients.

3.2 Gradient descent algorithms on the Grassmann manifold

Gradient descent algorithms on matrix manifolds are based on the update

U_{i+1} = R(U_i + t_i W_i),  (15)

where W_i is the gradient-related search direction, t_i is the step size, and R(U) is a retraction on the manifold which defines a projection of U onto the manifold [1]. We use R(U) = span(U) as the retraction on the Grassmann manifold, where span(U) is represented by qf(U), the Q factor in the QR factorization of U. Optimization on the product of two Grassmann manifolds can be done by treating each component as a coordinate component. The step size t can be computed in several ways, e.g., by a simple back-tracking method to find a point satisfying the Armijo condition [3]. Algorithm 3 is an outline of our gradient descent method for matrix completion. We let grads F_U^{(i)} ≡ grads F_U(U_i, V_i) and grads F_V^{(i)} ≡ grads F_V(U_i, V_i). In line 5, the exact S_{i+1} which realizes F(U_{i+1}, V_{i+1}) can be computed according to (6). A direct method to solve (6) costs O(|Ω| r^4). Alternatively, S_{i+1} can be computed approximately; we found that (5) is fast (O((|Ω| + m + n) r^2)) and gives the same convergence speed. If (5) fails to yield good enough progress, we can always switch back to (6) and compute S_{i+1} exactly. The subspace iteration and LMaFit can be seen as relaxed versions of this gradient descent procedure. The next section goes further and describes the conjugate gradient iteration.

Algorithm 3 GRADIENT DESCENT WITH SCALED GRADIENT ON THE GRASSMANN MANIFOLD.
Input: Matrix A_Ω, Ω, and number r.
Output: U and V which minimize F(U, V), and S which realizes F(U, V).
1: Initialize orthonormal matrices U_0 and V_0.
2: for i = 0, 1, 2, ... do
3:   Compute grads F_U^{(i)} and grads F_V^{(i)} according to (13) and (14).
4:   Find an appropriate step size t_i and compute (U_{i+1}, V_{i+1}) = (qf(U_i − t_i grads F_U^{(i)}), qf(V_i − t_i grads F_V^{(i)}))
5:   Compute S_{i+1} according to (6) (exact) or (5) (approximate).
6: end for

3.3 Conjugate gradient method on the Grassmann manifold

In this section, we describe the conjugate gradient (CG) method on the Grassmann manifold based on the scaled gradients to solve the matrix completion problem. The main additional ingredient we need is vector transport, which is used to transport the old search direction to the current point on the manifold. The transported search direction is then combined with the scaled gradient at the current point, e.g. by the Polak-Ribiere formula (see [11]), to derive the new search direction. After this, a line search procedure is performed to find an appropriate step size along this search direction. Vector transport can be defined using the Riemannian connection, which in turn is defined by the Riemannian metric [1]. As mentioned at the end of Section 3.1, we will use the canonical metric to derive vector transport when considering the natural quotient manifold structure of the Grassmann manifold. The tangent vector W′ at U is transported to U + W as T_{U+W}(W′), where T_U(W′) = (I − U U^T) W′. Algorithm 4 is a sketch of the resulting conjugate gradient procedure.

Algorithm 4 CONJUGATE GRADIENT WITH SCALED GRADIENT ON THE GRASSMANN MANIFOLD.
Input: Matrix A_Ω, Ω, and number r.
Output: U and V which minimize F(U, V), and S which realizes F(U, V).
1: Initialize orthonormal matrices U_0 and V_0.
2: Compute (η_0, ξ_0) = (grads F_U^{(0)}, grads F_V^{(0)}).
3: for i = 0, 1, 2, ... do
4:   Compute a step size t_i and compute (U_{i+1}, V_{i+1}) = (qf(U_i + t_i η_i), qf(V_i + t_i ξ_i))
5:   Compute β_{i+1} (Polak-Ribiere) and set (η_{i+1}, ξ_{i+1}) = (−grads F_U^{(i)} + β_{i+1} T_{U_{i+1}}(η_i), −grads F_V^{(i)} + β_{i+1} T_{V_{i+1}}(ξ_i))
6:   Compute S_{i+1} according to (6) or (5).
7: end for

4 Convergence and exact recovery of scaled-gradient descent methods

Let A = U_* Σ_* V_*^T be the singular value decomposition of A, where U_* ∈ R^{m×r}, V_* ∈ R^{n×r} and Σ_* ∈ R^{r×r}. Let us also denote by z = (U, V) a point on G(m, r) × G(n, r). Clearly, z_* = (U_*, V_*) is a minimum of F. Assume that A is incoherent [14]: A has bounded entries and the minimum singular value of A is bounded away from 0. Let κ(A) be the condition number of A. It is shown that, if the number of observed entries is of order O(max{κ(A)^2 n log n, κ(A)^6 n}), then, with high probability, F is well approximated by a parabola and z_* is the unique stationary point of F in a sufficiently small neighborhood of z_* ([14, Lemmas 6.4 & 6.5]). From these observations, given an initial point that is sufficiently close to z_*, a gradient descent procedure on F (with an additional regularization term to keep the intermediate points incoherent) converges to z_* and exact recovery is obtained. The singular value decomposition of a trimmed version of the observed matrix A_Ω can give us an initial point that ensures convergence. The reader is referred to [14] for details. Following [14], let G(U, V) = Σ_{i=1}^m G_1(‖U^{(i)}‖^2 / C_inc) + Σ_{i=1}^n G_1(‖V^{(i)}‖^2 / C_inc), where G_1(x) = 0 if x ≤ 1 and G_1(x) = e^{(x−1)^2} − 1 otherwise; C_inc is a constant depending on the incoherence assumptions. We consider the regularized version of F: F̃(U, V) = F(U, V) + ρ G(U, V), where ρ is chosen appropriately so that U and V remain incoherent during the execution of the algorithm. We can see that z_* is also the minimum of F̃. We will now show that the scaled gradients of F̃ are well-defined during the iterations, and that they are indeed descent directions of F̃ and vanish only at z_*.
As a result, the scaled-gradient-based methods inherit all the convergence results in [14]. First, S must be non-singular during the iterations for the scaled gradients to be well-defined. As a corollary of Lemma 6.4 in [14], the extreme singular values of any intermediate S are bounded by the extreme singular values σ*_min and σ*_max of Σ_*: σ_max ≤ 2σ*_max and σ_min ≥ (1/2)σ*_min. The second inequality implies that S is well-conditioned during the iterations. The scaled gradient is a descent direction of F̃ as a direct consequence of the fact that it is the gradient of F̃ based on a non-canonical metric. Moreover, by Lemma 6.5 in [14], ‖grad F̃(z)‖^2 ≥ C n ε^2 (σ*_min)^4 d(z, z_*)^2 for some constant C, where ‖·‖ and d(·, ·) are the canonical norm and distance on the Grassmann manifold respectively. Based on this, a similar lower bound on ‖grads F̃‖ can be derived. Let D_1 = S S^T and D_2 = S^T S be the scaling matrices. Then

‖grads F̃(z)‖^2 = ‖grad F̃_U(z) D_1^{-1}‖_F^2 + ‖grad F̃_V(z) D_2^{-1}‖_F^2
  ≥ σ_max^{-2} (‖grad F̃_U(z)‖_F^2 + ‖grad F̃_V(z)‖_F^2)
  ≥ (2σ*_max)^{-2} ‖grad F̃(z)‖^2
  ≥ (2σ*_max)^{-2} C n ε^2 (σ*_min)^4 d(z, z_*)^2
  = C (σ*_min)^4 (2σ*_max)^{-2} n ε^2 d(z, z_*)^2.

Therefore, the scaled gradients vanish only at z_*, which means the scaled-gradient descent procedure must converge to z_*, the exact solution [3].

5 Experiments and results

The proposed algorithms were implemented in Matlab with some mex-routines to perform matrix multiplications with sparse masks. For synthetic data, we consider two cases: (1) fully random low-rank matrices, A = randn(m, r) ∗ randn(r, n) (in Matlab notation), whose singular values tend to be roughly the same; (2) random low-rank matrices with chosen singular values, obtained by letting U = qf(randn(m, r)), V = qf(randn(n, r)) and A = U S V^T, where S is a diagonal matrix with the chosen singular values. The initializations of all methods are based on the SVD of A_Ω.
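The two synthetic test cases translate directly to NumPy (our equivalent of the Matlab expressions above; qf is the Q factor of a reduced QR factorization):

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 500
r = 5

# Case 1: fully random low-rank matrix; its singular values are roughly equal.
A1 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Case 2: chosen spectrum, e.g. singular values [1000, ..., 5000] as in Figure 1b.
U = np.linalg.qr(rng.standard_normal((m, r)))[0]   # qf(randn(m, r))
V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # qf(randn(n, r))
sigma = np.array([1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
A2 = U @ np.diag(sigma) @ V.T

# Sample 1% of the entries as the observed set Omega.
mask = rng.random((m, n)) < 0.01
A2_obs = np.where(mask, A2, 0.0)
```

Case 2 gives exact control over the condition number (here κ(A2) = 5), which is what makes it useful for comparing scaled and canonical gradients.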
First, we illustrate the improvement of scaled gradients over canonical gradients for the steepest descent and conjugate gradient methods on 5000 × 5000 matrices with rank 5 (Figure 1). Note that Canon-Grass-Steep is OptSpace with our implementation. In this experiment, S_i is obtained exactly using (6). The time needed per iteration is roughly the same for all methods, so we only present the results in terms of iteration counts. We can see that there are some small improvements in the fully random case (Figure 1a), since the singular values are roughly the same. The improvement is more substantial for matrices with larger condition numbers (Figure 1b).

Figure 1: Log-RMSE for a fully random matrix (a) and a random matrix with chosen spectrum (b). (Both panels: 5000×5000, rank 5, 1.0% observed entries; (a) singular values [4774, 4914, 4979, 5055, 5146], (b) singular values [1000, 2000, 3000, 4000, 5000]; methods Canon-Grass-Steep, Canon-Grass-CG, Scaled-Grass-Steep, Scaled-Grass-CG.)

Now, we compare the relaxed version of the scaled conjugate gradient, which uses (5) to compute S_i (ScGrass-CG), to LMaFit [23], Riemann-CG [22], RTRMC2 [6] (a trust region method with second-order information), SVP [12] and GROUSE [5] (Figure 2). These methods are also implemented in Matlab with mex-routines similar to ours, except for GROUSE, which is entirely in Matlab (indeed, GROUSE does not use sparse matrix multiplication as the other methods do). The subspace iteration method and the relaxed version of scaled steepest descent converge similarly to LMaFit, so we omit them from the graph. Note that each iteration of GROUSE in the graph corresponds to one pass over the matrix.
It does not have exactly the same meaning as one iteration of the other methods, and GROUSE is much slower with its current implementation. We use the best step sizes that we found for SVP and GROUSE. In terms of iteration counts, we can see that for the fully random case (upper row), RTRMC2 is the best, while ScGrass-CG and Riemann-CG converge reasonably fast. However, each iteration of RTRMC2 is slower, so in terms of time, ScGrass-CG and Riemann-CG are the fastest in our experiments. When the condition number of the matrix is higher, ScGrass-CG converges fastest both in terms of iteration counts and execution time. Finally, we test the algorithms on the Jester-1 and MovieLens-100K datasets, which are assumed to be low-rank matrices with noise (SVP and GROUSE are not tested because their step sizes need to be appropriately chosen). Similarly to previous work, for the Jester dataset we randomly select 4000 users and randomly withhold 2 ratings for each user for testing. For the MovieLens dataset, we use the common dataset prepared by [16], and keep 50% for training and 50% for testing. We run 100 different randomizations of Jester and 10 randomizations of MovieLens and average the results. We stop all methods early, when the change of RMSE is less than 10^{-4}, to avoid overfitting. All methods stop well before one minute. The Normalized Mean Absolute Errors (NMAEs) [13] are reported in Table 1.
ScGrass-CG is the relaxed scaled CG method and ScGrass-CG-Reg is the exact scaled CG method using a spectral-regularization version of F proposed in [13]: F̃(U, V) = min_S (1/2)(‖P_Ω(U S V^T − A)‖_F^2 + λ‖S‖_F^2). All methods perform similarly, and all demonstrate overfitting when the rank is 7 for MovieLens. We observe that ScGrass-CG-Reg suffers the least from overfitting, thanks to its regularization. This shows the importance of regularization for noisy matrices and motivates future work in this direction.

Figure 2: Log-RMSE. Upper row is fully random, lower row is random with chosen singular values. (All panels: 10000×10000, rank 10, 0.5% observed entries, plotted against iteration count and against time; upper-row singular values [9612, 9717, 9806, 9920, 9987, 10113, 10128, 10226, 10248, 10348], lower-row singular values [1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]; methods RTRMC2, Riemann-CG, GROUSE, ScGrass-CG, LMaFit, SVP.)

Table 1: NMAE on the Jester dataset (first 2 rows) and MovieLens 100K (last 2 rows). NMAEs for a random guesser are 0.33 on Jester and 0.37 on MovieLens 100K.

Rank  ScGrass-CG  ScGrass-CG-Reg  LMaFit  Riemann-CG  RTRMC2
5     0.1588      0.1588          0.1588  0.1591      0.1588
7     0.1584      0.1584          0.1581  0.1584      0.1583
5     0.1808      0.1758          0.1828  0.1781      0.1884
7     0.1832      0.1787          0.1836  0.1817      0.2298
6 Conclusion and future work

The gradients obtained from a scaled metric on the Grassmann manifold can result in improved convergence of gradient methods on matrix manifolds for matrix completion, while maintaining good global convergence and exact recovery guarantees. We have established a connection between scaled gradient methods and a subspace iteration method for matrix completion. The relaxed versions of the proposed gradient methods, adapted from the subspace iteration, are faster than previously discussed algorithms, sometimes much faster depending on the conditioning of the data matrix. In the future, we will investigate whether these relaxed versions achieve similar performance guarantees. We are also interested in exploring ways to regularize the relaxed versions to deal with noisy data. The convergence condition of OptSpace depends on κ(A)^6, and weakening this dependency for the proposed algorithms is also an interesting future direction.

Acknowledgments

This work was supported by NSF grants DMS-0810938 and DMR-0940218.

References

[1] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ, 2008.
[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering shared structures in multiclass classification. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pages 17–24, 2007.
[3] L. Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1–3, 1966.
[4] J. Baglama, D. Calvetti, G. H. Golub, and L. Reichel. Adaptively preconditioned GMRES algorithms. SIAM J. Sci. Comput., 20(1):243–269, December 1998.
[5] L. Balzano, R. Nowak, and B. Recht. Online identification and tracking of subspaces from highly incomplete information. In Proceedings of Allerton, September 2010.
[6] N. Boumal and P.-A. Absil. RTRMC: A Riemannian trust-region method for low-rank matrix completion. In NIPS, 2011.
[7] J.-F.
Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010. [8] E. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion, 2009. [9] P. Chen and D. Suter. Recovering the Missing Components in a Large Noisy Low-Rank Matrix: Application to SFM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(8):1051–1063, 2004. [10] W. Dai, E. Kerman, and O. Milenkovic. A geometric approach to low-rank matrix completion. IEEE Transactions on Information Theory, 58(1):237–247, 2012. [11] A. Edelman, T. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 20:303–353, 1998. [12] P. Jain, R. Meka, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In NIPS, pages 937–945, 2010. [13] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 952–960. 2009. [14] R. H. Keshavan, S. Oh, and A. Montanari. Matrix completion from a few entries. CoRR, abs/0901.3150, 2009. [15] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program., 128(1-2):321–353, 2011. [16] B. Marlin. Collaborative filtering: A machine learning perspective, 2004. [17] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. J. Mach. Learn. Res., 11:2287–2322, August 2010. [18] B. Recht. A simpler approach to matrix completion. CoRR, abs/0910.0651, 2009. [19] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning (ICML), pages 713–719. ACM, 2005. [20] Y. Saad.
Numerical Methods for Large Eigenvalue Problems, classics edition. SIAM, Philadelphia, PA, 2011. [21] N. Srebro and T. Jaakkola. Weighted low-rank approximations. In 20th International Conference on Machine Learning, pages 720–727. AAAI Press, 2003. [22] B. Vandereycken. Low-rank matrix completion by Riemannian optimization. Technical report, Mathematics Section, Ecole Polytechnique Fédérale de Lausanne, 2011. [23] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion using a non-linear successive over-relaxation algorithm. In CAAM Technical Report. Rice University, 2010.
2012
Link Prediction in Graphs with Autoregressive Features Emile Richard CMLA UMR CNRS 8536, ENS Cachan, France Stéphane Gaïffas CMAP - Ecole Polytechnique & LSTA - Université Paris 6 Nicolas Vayatis CMLA UMR CNRS 8536, ENS Cachan, France Abstract In this paper, we consider the problem of link prediction in time-evolving graphs. We assume that certain graph features, such as the node degree, follow a vector autoregressive (VAR) model and we propose to use this information to improve the accuracy of prediction. Our strategy involves a joint optimization procedure over the space of adjacency matrices and VAR matrices which takes into account both sparsity and low-rank properties of the matrices. Oracle inequalities are derived and illustrate the trade-offs in the choice of smoothing parameters when modeling the joint effect of sparsity and the low-rank property. The estimate is computed efficiently using proximal methods through a generalized forward-backward algorithm. 1 Introduction Forecasting systems behavior with multiple responses has been a challenging issue in many contexts of applications such as collaborative filtering, financial markets, or bioinformatics, where responses can be, respectively, movie ratings, stock prices, or activity of genes within a cell. Statistical modeling techniques have been widely investigated in the context of multivariate time series either in the multiple linear regression setup [4] or with autoregressive models [23]. More recently, kernel-based regularized methods have been developed for multitask learning [7, 2]. These approaches share the use of the correlation structure among input variables to enrich the prediction on every single output. Often, the correlation structure is assumed to be given or it is estimated separately.
A discrete encoding of correlations between variables can be modeled as a graph so that learning the dependence structure amounts to performing graph inference through the discovery of uncovered edges on the graph. The latter problem is interesting per se and it is known as the problem of link prediction where it is assumed that only a part of the graph is actually observed [15, 9]. This situation occurs in various applications such as recommender systems, social networks, or proteomics, and the appropriate tools can be found among matrix completion techniques [21, 5, 1]. In the realistic setup of a time-evolving graph, matrix completion was also used and adapted to take into account the dynamics of the features of the graph [18]. In this paper, we study the prediction problem where the observation is a sequence of graphs adjacency matrices (At)0≤t≤T and the goal is to predict AT +1. This type of problem arises in applications such as recommender systems where, given information on purchases made by some users, one would like to predict future purchases. In this context, users and products can be modeled as the nodes of a bipartite graph, while purchases or clicks are modeled as edges. In functional genomics and systems biology, estimating regulatory networks in gene expression can be performed by modeling the data as graphs and fitting predictive models is a natural way for estimating evolving networks in these contexts. A large variety of methods for link prediction only consider predicting from a single static snapshot of the graph - this includes heuristics [15, 20], matrix factorization [13], diffusion [16], or probabilistic methods [22]. More recently, some works have investigated using sequences of observations of the graph to improve the prediction, such as using regression on features extracted from the graphs [18], using matrix factorization [14], continuous-time regression [25]. 
Our main assumption is that the network effect is a cause and a symptom at the same time, and therefore, the edges and the graph features should be estimated simultaneously. We propose a regularized approach to predict the uncovered links and the evolution of the graph features simultaneously. We provide oracle bounds under the assumption that the noise sequence has subgaussian tails and we prove that our procedure achieves a trade-off in the calibration of smoothing parameters which adjust with the sparsity and the rank of the unknown adjacency matrix. The rest of this paper is organized as follows. In Section 2, we describe the general setup of our work with the main assumptions and we formulate a regularized optimization problem which aims at jointly estimating the autoregression parameters and predicting the graph. In Section 3, we provide technical results with oracle inequalities and other theoretical guarantees on the joint estimation-prediction. Section 4 is devoted to the description of the numerical simulations which illustrate our approach. We also provide an efficient algorithm for solving the optimization problem and show empirical results. The proofs of the theoretical results are provided as supplementary material in a separate document. 2 Estimation of low-rank graphs with autoregressive features Our approach is based on the assumption that features can explain most of the information contained in the graph, and that these features are evolving with time. We make the following assumptions about the sequence (A_t)_{t≥0} of adjacency matrices of the graphs sequence. Low-Rank. We assume that the matrices A_t have low rank. This reflects the presence of highly connected groups of nodes such as communities in social networks, or product categories and groups of loyal/fan users in marketplace data, and is sometimes motivated by the small number of factors that explain nodes' interactions. Autoregressive linear features.
We assume we are given a linear map ω : R^{n×n} → R^d defined by

ω(A) = (⟨Ω_1, A⟩, …, ⟨Ω_d, A⟩)^⊤, (1)

where (Ω_i)_{1≤i≤d} is a set of n × n matrices. These matrices can be either deterministic or random in our theoretical analysis, but we take them deterministic for the sake of simplicity. The vector time series (ω(A_t))_{t≥0} has autoregressive dynamics, given by a VAR (Vector Auto-Regressive) model:

ω(A_{t+1}) = W_0^⊤ ω(A_t) + N_{t+1}, (2)

where W_0 ∈ R^{d×d} is an unknown sparse matrix and (N_t)_{t≥0} is a sequence of noise vectors in R^d. An example of a linear feature is the degree (i.e., the number of edges connected to each node, or the sum of their weights if the edges are weighted), which is a measure of popularity in social and commerce networks. Introducing X_{T−1} = (ω(A_0), …, ω(A_{T−1}))^⊤ and X_T = (ω(A_1), …, ω(A_T))^⊤, which are both T × d matrices, we can write this model in matrix form:

X_T = X_{T−1} W_0 + N_T, (3)

where N_T = (N_1, …, N_T)^⊤. This assumes that the noise is driven by time-series dynamics (a martingale increment), whose coordinates are independent (meaning that features are independently corrupted by noise), with a sub-gaussian tail and variance uniformly bounded by a constant σ². In particular, no independence assumption between the N_t is required here. Notations. The notations ‖·‖_F, ‖·‖_p, ‖·‖_∞, ‖·‖_* and ‖·‖_op stand, respectively, for the Frobenius norm, entry-wise ℓ_p norm, entry-wise ℓ_∞ norm, trace norm (or nuclear norm, given by the sum of the singular values) and operator norm (the largest singular value). We denote by ⟨A, B⟩ = tr(A^⊤B) the Euclidean matrix product. A vector in R^d is always understood as a d × 1 matrix. We denote by ‖A‖_0 the number of non-zero elements of A. The product A ∘ B between two matrices with matching dimensions stands for the Hadamard or entry-wise product between A and B. The matrix |A| contains the absolute values of the entries of A.
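Returning to the dynamics (2)-(3), a minimal simulation sketch (the dimensions, sparsity level of W_0, and noise scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 10, 20
# Sparse VAR matrix W0 of Eq. (2); the 0.2 density and 0.3 scale keep the dynamics stable.
W0 = np.where(rng.random((d, d)) < 0.2, rng.normal(scale=0.3, size=(d, d)), 0.0)

feats, noises = [rng.normal(size=d)], []      # feats[t] plays the role of omega(A_t)
for t in range(T):
    N = 0.1 * rng.normal(size=d)              # noise vector N_{t+1}
    noises.append(N)
    feats.append(W0.T @ feats[-1] + N)        # omega(A_{t+1}) = W0^T omega(A_t) + N_{t+1}

X_prev = np.stack(feats[:-1])                 # X_{T-1}: rows omega(A_0), ..., omega(A_{T-1})
X_next = np.stack(feats[1:])                  # X_T:     rows omega(A_1), ..., omega(A_T)
# Matrix form (3): X_T - X_{T-1} W0 recovers the stacked noise N_T exactly.
assert np.allclose(X_next - X_prev @ W0, np.stack(noises))
```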
The matrix (M)_+ is the componentwise positive part of the matrix M, and sign(M) is the sign matrix associated to M, with the convention sign(0) = 0. If A is an n × n matrix with rank r, we write its SVD as A = UΣV^⊤ = Σ_{j=1}^r σ_j u_j v_j^⊤, where Σ = diag(σ_1, …, σ_r) is an r × r diagonal matrix containing the non-zero singular values of A in decreasing order, and U = [u_1, …, u_r], V = [v_1, …, v_r] are n × r matrices with columns given by the left and right singular vectors of A. The projection matrix onto the space spanned by the columns (resp. rows) of A is given by P_U = UU^⊤ (resp. P_V = VV^⊤). The operator P_A : R^{n×n} → R^{n×n} given by P_A(B) = P_U B + B P_V − P_U B P_V is the projector onto the linear space spanned by the matrices u_k x^⊤ and y v_k^⊤ for 1 ≤ k ≤ r and x, y ∈ R^n. The projector onto the orthogonal space is given by P_A^⊥(B) = (I − P_U) B (I − P_V). We also use the notation a ∨ b = max(a, b). 2.1 Joint prediction-estimation through penalized optimization In order to reflect the autoregressive dynamics of the features, we use a least-squares goodness-of-fit criterion that encourages similarity between two feature vectors at successive time steps. In order to induce sparsity in the estimator of W_0, we penalize this criterion using the ℓ_1 norm. This leads to the following penalized objective function: J_1(W) = (1/T)‖X_T − X_{T−1}W‖²_F + κ‖W‖_1, where κ > 0 is a smoothing parameter. Now, for the prediction of A_{T+1}, we propose to minimize a least-squares criterion penalized by the combination of an ℓ_1 norm and a trace norm. This mixture of norms induces sparsity and low rank in the adjacency matrix. Such a combination of ℓ_1 and trace norm was already studied in [8] for the matrix regression model, and in [19] for the prediction of an adjacency matrix. The objective function defined below exploits the fact that if W is close to W_0, then the features of the next graph ω(A_{T+1}) should be close to W^⊤ω(A_T).
Therefore, we consider J_2(A, W) = (1/d)‖ω(A) − W^⊤ω(A_T)‖²_2 + τ‖A‖_* + γ‖A‖_1, where τ, γ > 0 are smoothing parameters. The overall objective function is the sum of the two partial objectives J_1 and J_2, which is jointly convex with respect to A and W:

L(A, W) := (1/T)‖X_T − X_{T−1}W‖²_F + κ‖W‖_1 + (1/d)‖ω(A) − W^⊤ω(A_T)‖²_2 + τ‖A‖_* + γ‖A‖_1. (4)

If we choose convex cones A ⊂ R^{n×n} and W ⊂ R^{d×d}, our joint estimation-prediction procedure is defined by

(Â, Ŵ) ∈ argmin_{(A,W) ∈ A×W} L(A, W). (5)

It is natural to take W = R^{d×d} and A = (R_+)^{n×n}, since there is no a priori constraint on the values of the feature matrix W_0, while the entries of the matrix A_{T+1} must be positive. In the next section we propose oracle inequalities which prove that this procedure can estimate W_0 and predict A_{T+1} at the same time. 2.2 Main result The central contribution of our work is to bound the prediction error with high probability under the following natural hypothesis on the noise process. Assumption 1. We assume that (N_t)_{t≥0} satisfies E[N_t | F_{t−1}] = 0 for any t ≥ 1 and that there is σ > 0 such that for any λ ∈ R, j = 1, …, d and t ≥ 0: E[e^{λ(N_t)_j} | F_{t−1}] ≤ e^{σ²λ²/2}. Moreover, we assume that for each t ≥ 0, the coordinates (N_t)_1, …, (N_t)_d are independent. The main result can be summarized as follows. The prediction error and the estimation error can be simultaneously bounded by the sum of three terms that involve homogeneously (a) the sparsity and (b) the rank of the adjacency matrix A_{T+1}, and (c) the sparsity of the VAR model matrix W_0. The tight bounds we obtain are similar to the bounds of the Lasso and are upper bounded by:

C_1 (log d / T) ‖W_0‖_0 + C_2 (log n / d) ‖A_{T+1}‖_0 + C_3 (log n / d) rank(A_{T+1}).

The positive constants C_1, C_2, C_3 are proportional to the noise level σ. The interplay between the rank and sparsity constraints on A_{T+1} is reflected in the observation that the values of C_2 and C_3 can be changed as long as their sum remains constant.
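The overall objective (4) is straightforward to evaluate once a feature map is fixed; a sketch, where the callable `omega`, the degree features in the example, and all sizes are generic placeholders:

```python
import numpy as np

def objective(A, W, X_prev, X_next, omega, omega_AT, T, d, kappa, tau, gamma):
    """Evaluate L(A, W) from Eq. (4). `omega` maps an n x n matrix to a d-vector."""
    var_fit = np.linalg.norm(X_next - X_prev @ W, 'fro') ** 2 / T
    feat_fit = np.linalg.norm(omega(A) - W.T @ omega_AT) ** 2 / d
    penalties = (kappa * np.abs(W).sum()            # l1 on W
                 + tau * np.linalg.norm(A, 'nuc')   # trace norm on A
                 + gamma * np.abs(A).sum())         # l1 on A
    return var_fit + feat_fit + penalties

# Example with degree features omega(A) = A 1 (row sums), so here d = n = 6:
rng = np.random.default_rng(0)
n, d, T = 6, 6, 5
omega = lambda A: A.sum(axis=1)
X_prev, X_next = rng.normal(size=(T, d)), rng.normal(size=(T, d))
val = objective(rng.random((n, n)), rng.normal(size=(d, d)), X_prev, X_next,
                omega, rng.normal(size=d), T, d, 0.1, 0.1, 0.1)
```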
3 Oracle inequalities In this section we give oracle inequalities for the mixed prediction-estimation error, which is given, for any A ∈ R^{n×n} and W ∈ R^{d×d}, by

E(A, W)² := (1/d)‖(W − W_0)^⊤ω(A_T) − ω(A − A_{T+1})‖²_2 + (1/T)‖X_{T−1}(W − W_0)‖²_F. (6)

It is important to have in mind that an upper bound on E implies upper bounds on each of its two components. It entails in particular an upper bound on the feature estimation error ‖X_{T−1}(Ŵ − W_0)‖_F, which makes ‖(Ŵ − W_0)^⊤ω(A_T)‖_2 smaller and consequently controls the prediction error over the graph edges through ‖ω(Â − A_{T+1})‖_2. The upper bounds on E given below exhibit the dependence of the accuracy of estimation and prediction on the number of features d, the number of edges n and the number T of observed graphs in the sequence. Let us recall N_T = (N_1, …, N_T)^⊤ and introduce the noise processes

M = −(1/d) Σ_{j=1}^d (N_{T+1})_j Ω_j and Ξ = (1/T) Σ_{t=1}^T ω(A_{t−1}) N_t^⊤ + (1/d) ω(A_T) N_{T+1}^⊤,

which are, respectively, n × n and d × d random matrices. The source of randomness comes from the noise sequence (N_t)_{t≥0}; see Assumption 1. If these noise processes are controlled correctly, we can prove the following oracle inequalities for procedure (5). The next result is an oracle inequality of slow type (see for instance [3]) that holds in full generality. Theorem 1. Under Assumption 1, let (Â, Ŵ) be given by (5) and suppose that

τ ≥ 2α‖M‖_op, γ ≥ 2(1 − α)‖M‖_∞ and κ ≥ 2‖Ξ‖_∞ (7)

for some α ∈ (0, 1). Then, we have

E(Â, Ŵ)² ≤ inf_{(A,W) ∈ A×W} { E(A, W)² + 2τ‖A‖_* + 2γ‖A‖_1 + 2κ‖W‖_1 }.

For the proof of oracle inequalities of fast type, the restricted eigenvalue (RE) condition introduced in [3] and [10, 11] is of importance. Restricted eigenvalue conditions are implied by, and are in general weaker than, the so-called incoherence or RIP (Restricted Isometry Property, [6]) assumptions, which exclude, for instance, strong correlations between covariates in a linear regression model.
This condition is acknowledged to be one of the weakest under which fast rates for the Lasso can be derived (see [24] for a comparison of conditions). Matrix versions of these assumptions are introduced in [12]. Below is a version of the RE assumption that fits our context. First, we need to introduce the two restriction cones. The first cone is related to the ‖W‖_1 term used in procedure (5). If W ∈ R^{d×d}, we denote by Θ_W = sign(W) ∈ {0, ±1}^{d×d} the signed sparsity pattern of W and by Θ_W^⊥ ∈ {0, 1}^{d×d} the orthogonal sparsity pattern. For a fixed matrix W ∈ R^{d×d} and c > 0, we introduce the cone

C_1(W, c) := { W′ ∈ W : ‖Θ_W^⊥ ∘ W′‖_1 ≤ c‖Θ_W ∘ W′‖_1 }.

This cone contains the matrices W′ that have their largest entries in the sparsity pattern of W. The second cone is related to the mixture of the terms ‖A‖_* and ‖A‖_1 in procedure (5). Before defining it, we need further notations and definitions. For a fixed A ∈ R^{n×n} and c, β > 0, we introduce the cone

C_2(A, c, β) := { A′ ∈ A : ‖P_A^⊥(A′)‖_* + β‖Θ_A^⊥ ∘ A′‖_1 ≤ c( ‖P_A(A′)‖_* + β‖Θ_A ∘ A′‖_1 ) }.

This cone consists of the matrices A′ with large entries close to those of A and that are "almost aligned" with the row and column spaces of A. The parameter β quantifies the interplay between these two notions. Assumption 2 (Restricted Eigenvalue (RE)). For W ∈ W and c > 0, we define

μ_1(W, c) = inf { μ > 0 : ‖Θ_W ∘ W′‖_F ≤ (μ/√T)‖X_{T−1}W′‖_F, ∀W′ ∈ C_1(W, c) }.

For A ∈ A and c, β > 0, we introduce

μ_2(A, W, c, β) = inf { μ > 0 : ‖P_A(A′)‖_F ∨ ‖Θ_A ∘ A′‖_F ≤ (μ/√d)‖W′^⊤ω(A_T) − ω(A′)‖_2, ∀W′ ∈ C_1(W, c), ∀A′ ∈ C_2(A, c, β) }. (8)

The RE assumption consists of assuming that the constants μ_1 and μ_2 are finite. Now we can state the following theorem, which gives a fast oracle inequality for our procedure using RE. Theorem 2. Under Assumptions 1 and 2, let (Â, Ŵ) be given by (5) and suppose that

τ ≥ 3α‖M‖_op, γ ≥ 3(1 − α)‖M‖_∞ and κ ≥ 3‖Ξ‖_∞ (9)

for some α ∈ (0, 1).
Then, we have

E(Â, Ŵ)² ≤ inf_{(A,W) ∈ A×W} { E(A, W)² + (25/18) μ_2(A, W)² (τ² rank(A) + γ²‖A‖_0) + (25/36) κ² μ_1(W)² ‖W‖_0 },

where μ_1(W) = μ_1(W, 5) and μ_2(A, W) = μ_2(A, W, 5, γ/τ) (see Assumption 2). The proofs of Theorems 1 and 2 use tools introduced in [12] and [3]. Note that the residual term from this oracle inequality mixes the notions of sparsity of A and W via the terms rank(A), ‖A‖_0 and ‖W‖_0. It says that our mixed penalization procedure provides an optimal trade-off between fitting the data and complexity, measured by both sparsity and low rank. This is the first result of this nature to be found in the literature. In the next Theorem 3, we obtain convergence rates for the procedure (5) by combining Theorem 2 with controls on the noise processes. We introduce

v²_{Ω,op} = ‖(1/d) Σ_{j=1}^d Ω_j^⊤ Ω_j‖_op ∨ ‖(1/d) Σ_{j=1}^d Ω_j Ω_j^⊤‖_op,   v²_{Ω,∞} = ‖(1/d) Σ_{j=1}^d Ω_j ∘ Ω_j‖_∞,

σ²_ω = max_{j=1,…,d} σ²_{ω,j}, where σ²_{ω,j} = (1/T) Σ_{t=1}^T ω_j(A_{t−1})² + ω_j(A_T)²,

which are the (observable) variance terms that naturally appear in the controls of the noise processes. We also introduce

ℓ_T = 2 max_{j=1,…,d} log log( (σ²_{ω,j} ∨ 1/σ²_{ω,j}) ∨ e ),

which is a small (observable) technical term that comes out of our analysis of the noise process Ξ. This term is a small price to pay for the fact that no independence assumption is required on the noise sequence (N_t)_{t≥0}, but only a martingale increment structure with sub-gaussian tails. Theorem 3. Consider the procedure (Â, Ŵ) given by (5) with smoothing parameters given by

τ = 3ασ v_{Ω,op} √(2(x + log(2n))/d),   γ = 3(1 − α)σ v_{Ω,∞} √(2(x + 2 log n)/d),
κ = 6σσ_ω ( √(2e(x + 2 log d + ℓ_T)/T) + √(2e(x + 2 log d + ℓ_T))/d ),

for some α ∈ (0, 1), and fix a confidence level x > 0. Then, we have

E(Â, Ŵ)² ≤ inf_{(A,W) ∈ A×W} { E(A, W)² + C_1‖W‖_0 (x + 2 log d + ℓ_T)(1/T + 1/d²) + C_2‖A‖_0 · 2(x + 2 log n)/d + C_3 rank(A) · 2(x + log(2n))/d },

where C_1 = 100e μ_1(W)² σ² σ²_ω, C_2 = 25 μ_2(A, W)² (1 − α)² σ² v²_{Ω,∞}, C_3 = 25 μ_2(A, W)² α² σ² v²_{Ω,op}, with probability larger than 1 − 17e^{−x}, where μ_1 and μ_2 are the same as in Theorem 2. The proof of Theorem 3 follows directly from Theorem 2 combined with basic controls on the noise processes. In the next theorem, we propose more explicit upper bounds for both the individual estimation of W_0 and the prediction of A_{T+1}. Theorem 4.
Under the same assumptions as in Theorem 3 and with the same choice of smoothing parameters, for any x > 0 the following inequalities hold with probability larger than 1 − 17e^{−x}:

• Feature prediction error:
(1/T)‖X_T(Ŵ − W_0)‖²_F ≤ (25/36) κ² μ_1(W_0)² ‖W_0‖_0 + inf_{A ∈ A} { (1/d)‖ω(A) − ω(A_{T+1})‖²_2 + (25/18) μ_2(A, W_0)² (τ² rank(A) + γ²‖A‖_0) }. (10)

• VAR parameter estimation error:
‖Ŵ − W_0‖_1 ≤ 5κ μ_1(W_0)² ‖W_0‖_0 + 6√(‖W_0‖_0) μ_1(W_0) inf_{A ∈ A} √( (1/d)‖ω(A) − ω(A_{T+1})‖²_2 + (25/18) μ_2(A, W_0)² (τ² rank(A) + γ²‖A‖_0) ). (11)

• Link prediction error:
‖Â − A_{T+1}‖_* ≤ 5κ μ_1(W_0)² ‖W_0‖_0 + μ_2(A_{T+1}, W_0) ( 6√(rank A_{T+1}) + 5(γ/τ)√(‖A_{T+1}‖_0) ) × inf_{A ∈ A} √( (1/d)‖ω(A) − ω(A_{T+1})‖²_2 + (25/18) μ_2(A, W_0)² (τ² rank(A) + γ²‖A‖_0) ). (12)

4 Algorithms and Numerical Experiments 4.1 Generalized forward-backward algorithm for minimizing L We use the algorithm designed in [17] for minimizing our objective function. Note that this algorithm is preferable to the method introduced in [18], as it directly minimizes L jointly in (A, W) rather than alternately minimizing in W and A. Moreover, we use the novel joint penalty from [19] that is more suited for estimating graphs. The proximal operator for the trace norm is given by the shrinkage operation: if Z = U diag(σ_1, …, σ_n)V^⊤ is the singular value decomposition of Z, then prox_{τ‖·‖_*}(Z) = U diag((σ_i − τ)_+)_i V^⊤. Similarly, the proximal operator for the ℓ_1 norm is the soft-thresholding operator, defined using the entry-wise product of matrices denoted by ∘: prox_{γ‖·‖_1}(Z) = sgn(Z) ∘ (|Z| − γ)_+. The algorithm converges under very mild conditions when the step size θ is smaller than 2/L, where L is the operator norm of the joint quadratic loss:

Φ : (A, W) ↦ (1/T)‖X_T − X_{T−1}W‖²_F + (1/d)‖ω(A) − W^⊤ω(A_T)‖²_F.

Algorithm 1 Generalized Forward-Backward to Minimize L
Initialize A, Z_1, Z_2, W
repeat
  Compute (G_A, G_W) = ∇_{A,W} Φ(A, W).
  Compute Z_1 = prox_{2θτ‖·‖_*}(2A − Z_1 − θG_A)
  Compute Z_2 = prox_{2θγ‖·‖_1}(2A − Z_2 − θG_A)
  Set A = (Z_1 + Z_2)/2
  Set W = prox_{θκ‖·‖_1}(W − θG_W)
until convergence
return (A, W) minimizing L

4.2 A generative model for graphs having linearly autoregressive features Let V_0 ∈ R^{n×r} be a sparse matrix and V_0^† its pseudo-inverse, such that V_0^† V_0 = V_0^⊤ V_0^{⊤†} = I_r. Fix two sparse matrices W_0 ∈ R^{r×r} and U_0 ∈ R^{n×r}. Now define the sequence of matrices (A_t)_{t≥0} for t = 1, 2, … by

U_t = U_{t−1} W_0 + N_t and A_t = U_t V_0^⊤ + M_t

for i.i.d. sparse noise matrices N_t and M_t, which means that for any pair of indices (i, j), with high probability (N_t)_{i,j} = 0 and (M_t)_{i,j} = 0. We define the linear feature map ω(A) = A V_0^{⊤†}, and point out that

1. The sequence (ω(A_t)^⊤)_t = (U_t + M_t V_0^{⊤†})_t follows the linear autoregressive relation ω(A_t)^⊤ = ω(A_{t−1})^⊤ W_0 + N_t + M_t V_0^{⊤†}.
2. For any time index t, the matrix A_t is close to U_t V_0^⊤, which has rank at most r.
3. The matrices A_t and U_t are both sparse by construction.

4.3 Empirical evaluation We tested the presented methods on synthetic data generated as in Section 4.2. In our experiments the noise matrices M_t and N_t were built by soft-thresholding i.i.d. noise N(0, σ²). We took as input T = 10 successive graph snapshots on graphs with n = 50 nodes of rank r = 5. We used d = 10 linear features, and the noise level was set to σ = 0.5. We compare our methods to standard baselines in link prediction. We use the area under the ROC curve as the measure of performance and report empirical results averaged over 50 runs, with the corresponding confidence intervals, in Figure 1. The competitor methods are nearest neighbors (NN) and static sparse and low-rank estimation, that is, the link prediction algorithm suggested in [19].
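The two proximal maps and one update of Algorithm 1 (Sec. 4.1) can be written compactly; a sketch in NumPy, where the shapes, step size, and the gradients G_A, G_W (assumed to come from the quadratic loss Φ) are illustrative placeholders:

```python
import numpy as np

def prox_trace(Z, t):
    """prox of t*||.||_*: shrink the singular values (spectral soft-thresholding)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def prox_l1(Z, t):
    """prox of t*||.||_1: entry-wise soft-thresholding."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def gfb_step(A, W, Z1, Z2, GA, GW, theta, tau, gamma, kappa):
    """One generalized forward-backward update, following Algorithm 1."""
    Z1 = prox_trace(2 * A - Z1 - theta * GA, 2 * theta * tau)
    Z2 = prox_l1(2 * A - Z2 - theta * GA, 2 * theta * gamma)
    A = 0.5 * (Z1 + Z2)
    W = prox_l1(W - theta * GW, theta * kappa)
    return A, W, Z1, Z2

# Quick check of the proximal maps: entries 3 and -1 shrink to 2 and 0 under
# soft-thresholding at level 1; singular values (3, 1) shrink to (2, 0).
print(prox_l1(np.array([[3.0, -1.0]]), 1.0))
print(prox_trace(np.diag([3.0, 1.0]), 1.0))
```

In a full solver this step would be iterated until convergence, with θ below 2/L as stated above.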
The NN algorithm scores pairs of nodes by the number of common friends between them, which is given by Ã_T², where Ã_T = Σ_{t=0}^T A_t is the cumulative graph adjacency matrix; the static sparse and low-rank estimate is obtained by minimizing the objective ‖X − Ã_T‖²_F + τ‖X‖_* + γ‖X‖_1, and can be seen as the closest static version of our method. The two methods autoregressive low-rank and static low-rank are regularized using only the trace norm (corresponding to forcing γ = 0) and are slightly inferior to their sparse and low-rank rivals. Since the matrix V_0 defining the linear map ω is unknown, we consider the feature map ω(A) = AV, where Ã_T = UΣV^⊤ is the SVD of Ã_T. The parameters τ and γ are chosen by 10-fold cross-validation for each of the methods separately.

[Figure 1 (left): AUC of link prediction vs. number of snapshots T for Autoregressive Sparse and Low-rank, Autoregressive Low-rank, Static Sparse and Low-rank, Static Low-rank, and Nearest-Neighbors. (Right): phase transition diagram of AUC over rank(A_{T+1}) and T.]

Figure 1: Left: performance of algorithms in terms of Area Under the ROC Curve, average and confidence intervals over 50 runs. Right: Phase transition diagram.

4.4 Discussion 1. Comparison with the baselines. This experiment sharply shows the benefit of using a temporal approach when one can handle the feature extraction task. The left-hand plot shows that if few snapshots are available (T ≤ 4 in these experiments), then static approaches are to be preferred, whereas feature autoregressive approaches outperform as soon as a sufficient number T of graph snapshots is available (see phase transition). The decreasing performance of static algorithms can be explained by the fact that they use as input a mixture of graphs observed at different time steps.
Knowing that at each time step the nodes have specific latent factors, despite the slow evolution of the factors, summing the resulting graphs leads to confusing the factors. 2. Phase transition. The right-hand figure is a phase transition diagram showing in which part of the rank and time domain the estimation is accurate, and illustrates the interplay between these two domain parameters. 3. Choice of the feature map ω. In the current work we used the projection onto the vector space of the top-r singular vectors of the cumulative adjacency matrix as the linear map ω, and this choice has shown empirical superiority over other choices. The question of choosing the best measurement to summarize graph information, as in compressed sensing, seems to have both theoretical and application potential. Moreover, a deeper understanding of the connections of our problem with compressed sensing, for the construction and theoretical validation of the feature mapping, is an important point that needs several developments. One possible approach is based on multi-kernel learning, which should be considered in future work. 4. Generalization of the method. In this paper we consider only an autoregressive process of order 1. For better prediction accuracy, one could consider more general models, such as vector ARMA models, and use model-selection techniques for the choice of the orders of the model. A general modeling based on state-space models could be developed as well. We presented a procedure for predicting graphs having linear autoregressive features. Our approach can easily be generalized to non-linear prediction through kernel-based methods. References [1] J. Abernethy, F. Bach, Th. Evgeniou, and J.-Ph. Vert. A new approach to collaborative filtering: operator estimation with spectral regularization. JMLR, 10:803–826, 2009. [2] A. Argyriou, M. Pontil, Ch. Micchelli, and Y. Ying. A spectral regularization framework for multi-task structure learning.
Proceedings of Neural Information Processing Systems (NIPS), 2007. [3] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37, 2009. [4] L. Breiman and J. H. Friedman. Predicting multivariate responses in multiple linear regression. Journal of the Royal Statistical Society (JRSS): Series B (Statistical Methodology), 59:3–54, 1997. [5] E.J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5), 2009. [6] E. Candès and T. Tao. Decoding by linear programming. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2005. [7] Th. Evgeniou, Ch. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615–637, 2005. [8] S. Gaiffas and G. Lecue. Sharp oracle inequalities for high-dimensional matrix prediction. Information Theory, IEEE Transactions on, 57(10):6942–6957, October 2011. [9] M. Kolar and E. P. Xing. On time varying undirected graphs. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), 2011. [10] V. Koltchinskii. The Dantzig selector and sparsity oracle inequalities. Bernoulli, 15(3):799–828, 2009. [11] V. Koltchinskii. Sparsity in penalized empirical risk minimization. Ann. Inst. Henri Poincaré Probab. Stat., 45(1):7–57, 2009. [12] V. Koltchinskii, K. Lounici, and A. Tsybakov. Nuclear norm penalization and optimal rates for noisy matrix completion. Annals of Statistics, 2011. [13] Y. Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 426–434. ACM, 2008. [14] Y. Koren. Collaborative filtering with temporal dynamics. Communications of the ACM, 53(4):89–97, 2010. [15] D. Liben-Nowell and J. Kleinberg.
The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58(7):1019–1031, 2007. [16] S.A. Myers and J. Leskovec. On the convexity of latent social network inference. In NIPS, 2010. [17] H. Raguet, J. Fadili, and G. Peyré. Generalized forward-backward splitting. arXiv preprint arXiv:1108.4404, 2011. [18] E. Richard, N. Baskiotis, Th. Evgeniou, and N. Vayatis. Link discovery using graph feature tracking. Proceedings of Neural Information Processing Systems (NIPS), 2010. [19] E. Richard, P.-A. Savalle, and N. Vayatis. Estimation of simultaneously sparse and low-rank matrices. In Proceedings of the 29th Annual International Conference on Machine Learning, 2012. [20] P. Sarkar, D. Chakrabarti, and A.W. Moore. Theoretical justification of popular link prediction heuristics. In International Conference on Learning Theory (COLT), pages 295–307, 2010. [21] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Proceedings of Neural Information Processing Systems 17, pages 1329–1336. MIT Press, Cambridge, MA, 2005. [22] B. Taskar, M.F. Wong, P. Abbeel, and D. Koller. Link prediction in relational data. In Neural Information Processing Systems, volume 15, 2003. [23] R. S. Tsay. Analysis of Financial Time Series. Wiley-Interscience, 3rd edition, 2005. [24] S. A. van de Geer and P. Bühlmann. On the conditions used to prove oracle results for the Lasso. Electron. J. Stat., 3:1360–1392, 2009. [25] D.Q. Vu, A. Asuncion, D. Hunter, and P. Smyth. Continuous-time regression models for longitudinal networks. In Advances in Neural Information Processing Systems. MIT Press, 2011.
2012
A Generative Model for Parts-based Object Segmentation S. M. Ali Eslami School of Informatics University of Edinburgh s.m.eslami@sms.ed.ac.uk Christopher K. I. Williams School of Informatics University of Edinburgh ckiw@inf.ed.ac.uk Abstract The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a state-of-the-art model of foreground/background object shape. We extend the SBM to account for the foreground object's parts. Our new model, the Multinomial SBM (MSBM), can capture both local and global statistics of part shapes accurately. We combine the MSBM with an appearance model to form a fully generative model of images of objects. Parts-based object segmentations are obtained simply by performing probabilistic inference in the model. We apply the model to two challenging datasets which exhibit significant shape and appearance variability, and find that it obtains results that are comparable to the state-of-the-art. There has been significant focus in computer vision on object recognition and detection, e.g. [2], but a strong desire remains to obtain richer descriptions of objects than just their bounding boxes. One such description is a parts-based object segmentation, in which an image is partitioned into multiple sets of pixels, each belonging to either a part of the object of interest, or its background. The significance of parts in computer vision has been recognized since the earliest days of the field (e.g. [3, 4, 5]), and there exists a rich history of work on probabilistic models for parts-based segmentation, e.g. [6, 7]. Many such models only consider local neighborhood statistics; however, several models have recently been proposed that aim to increase the accuracy of segmentations by also incorporating prior knowledge about the foreground object's shape [8, 9, 10, 11]. In such cases, probabilistic techniques often mainly differ in how accurately they represent and learn about the variability exhibited by the shapes of the object's parts.
Accurate models of the shapes and appearances of parts can be necessary to perform inference in datasets that exhibit large amounts of variability. In general, the stronger the models of these two components, the more performance is improved. A generative model has the added benefit of being able to generate samples, which allows us to visually inspect the quality of its understanding of the data and the problem. Recently, a generative probabilistic model known as the Shape Boltzmann Machine (SBM) has been used to model binary object shapes [1]. The SBM has been shown to constitute the state-of-the-art and it possesses several highly desirable characteristics: samples from the model look realistic, and it generalizes to generate samples that differ from the limited number of examples it is trained on. The main contributions of this paper are as follows: 1) In order to account for object parts we extend the SBM to use multinomial visible units instead of binary ones, resulting in the Multinomial Shape Boltzmann Machine (MSBM), and we demonstrate that the MSBM constitutes a strong model of parts-based object shape. 2) We combine the MSBM with an appearance model to form a fully generative model of images of objects (see Fig. 1). We show how parts-based object segmentations can be obtained simply by performing probabilistic inference in the model. We apply our model to two challenging datasets and find that in addition to being principled and fully generative, the model's performance is comparable to the state-of-the-art. Figure 1: Overview. Using annotated images, separate models of shape and appearance are trained. Given an unseen test image, its parsing is obtained via inference in the proposed joint model. In Secs. 1 and 2 we present the model and propose efficient inference and learning schemes. In Sec.
3 we compare and contrast the resulting joint model with existing work in the literature. We describe our experimental results in Sec. 4 and conclude with a discussion in Sec. 5. 1 Model We consider datasets of cropped images of an object class. We assume that the images are constructed through some combination of a fixed number of parts. Given a dataset $D = \{X_d\}$, $d = 1 \dots n$ of such images $X$, each consisting of $P$ pixels $\{x_i\}$, $i = 1 \dots P$, we wish to infer a segmentation $S$ for the image. $S$ consists of a labeling $s_i$ for every pixel, where $s_i$ is a 1-of-$(L+1)$ encoded variable, and $L$ is the fixed number of parts that combine to generate the foreground. In other words, $s_i = (s_{li})$, $l = 0 \dots L$, $s_{li} \in \{0, 1\}$ and $\sum_l s_{li} = 1$. Note that the background is also treated as a 'part' ($l = 0$). Accurate inference of $S$ is driven by models for 1) part shapes and 2) part appearances. Part shapes: Several types of models can be used to define probabilistic distributions over segmentations $S$. The simplest approach is to model each pixel $s_i$ independently with categorical variables whose parameters are specified by the object's mean shape (Fig. 2(a)). Markov Random Fields (MRFs, Fig. 2(b)) additionally model interactions between nearby pixels using pairwise potential functions that efficiently capture local properties of images like smoothness and continuity. Restricted Boltzmann Machines (RBMs) and their multi-layered counterparts Deep Boltzmann Machines (DBMs, Fig. 2(c)) make heavy use of hidden variables to efficiently define higher-order potentials that take into account the configuration of larger groups of image pixels. The introduction of such hidden variables provides a way to efficiently capture complex, global properties of image pixels. RBMs and DBMs are powerful generative models, but they also have many parameters. Segmented images, however, are expensive to obtain and datasets are typically small (hundreds of examples).
In order to learn a model that accurately captures the properties of part shapes we use DBMs but also impose carefully chosen connectivity and capacity constraints, following the structure of the Shape Boltzmann Machine (SBM) [1]. We further extend the model to account for multi-part shapes to obtain the Multinomial Shape Boltzmann Machine (MSBM). The MSBM has two layers of latent variables: $h^1$ and $h^2$ (collectively $H = \{h^1, h^2\}$), and defines a Boltzmann distribution over segmentations $p(S) = \sum_{h^1, h^2} \exp\{-E(S, h^1, h^2 | \theta_s)\} / Z(\theta_s)$ where

$E(S, h^1, h^2 | \theta_s) = \sum_{i,l} b_{li} s_{li} + \sum_{i,j,l} w^1_{lij} s_{li} h^1_j + \sum_j c^1_j h^1_j + \sum_{j,k} w^2_{jk} h^1_j h^2_k + \sum_k c^2_k h^2_k$, (1)

where $j$ and $k$ range over the first and second layer hidden variables, and $\theta_s = \{W^1, W^2, b, c^1, c^2\}$ are the shape model parameters. In the first layer, local receptive fields are enforced by connecting each hidden unit in $h^1$ only to a subset of the visible units, corresponding to one of four patches, as shown in Fig. 2(d,e). Each patch overlaps its neighbor by $b$ pixels, which allows boundary continuity to be learned at the lowest layer. We share weights between the four sets of first-layer hidden units and patches, and purposely restrict the number of units in $h^2$. These modifications significantly reduce the number of parameters whilst taking into account an important property of shapes, namely that the strongest dependencies between pixels are typically local. Figure 2: Models of shape. Object shape is modeled with undirected graphical models. (a) 1D slice of a mean model. (b) Markov Random Field in 1D. (c) Deep Boltzmann Machine in 1D. (d) 1D slice of a Shape Boltzmann Machine. (e) Shape Boltzmann Machine in 2D. In all models latent units $h$ are binary and visible units $S$ are multinomial random variables. Based on Fig. 2 of [1]. Figure 3: A model of appearances.
Left: An exemplar dataset. Here we assume one background ($l = 0$) and two foreground ($l = 1$, non-body; $l = 2$, body) parts. Right: The corresponding appearance model. In this example, $L = 2$, $K = 3$ and $W = 6$. Best viewed in color. Part appearances: Pixels in a given image are assumed to have been generated by $W$ fixed Gaussians in RGB space. During pre-training, the means $\{\mu_w\}$ and covariances $\{\Sigma_w\}$ of these Gaussians are extracted by training a mixture model with $W$ components on every pixel in the dataset, ignoring image and part structure. It is also assumed that each of the $L$ parts can have different appearances in different images, and that these appearances can be clustered into $K$ classes. The classes differ in how likely they are to use each of the $W$ components when 'coloring in' the part. The generative process is as follows. For part $l$ in an image, one of the $K$ classes is chosen (represented by a 1-of-$K$ indicator variable $a_l$). Given $a_l$, the probability distribution defined on pixels associated with part $l$ is given by a Gaussian mixture model with means $\{\mu_w\}$ and covariances $\{\Sigma_w\}$ and mixing proportions $\{\phi_{lkw}\}$. The prior on $A = \{a_l\}$ specifies the probability $\pi_{lk}$ of appearance class $k$ being chosen for part $l$. The appearance parameters are therefore $\theta_a = \{\pi_{lk}, \phi_{lkw}\}$ (see Fig. 3), and:

$p(x_i | A, s_i, \theta_a) = \prod_l p(x_i | a_l, \theta_a)^{s_{li}} = \prod_l \left( \prod_k \left( \sum_w \phi_{lkw} \, \mathcal{N}(x_i | \mu_w, \Sigma_w) \right)^{a_{lk}} \right)^{s_{li}}$, (2)

$p(A | \theta_a) = \prod_l p(a_l | \theta_a) = \prod_l \prod_k (\pi_{lk})^{a_{lk}}$. (3)

Combining shapes and appearances: To summarize, the latent variables for $X$ are $A$, $S$, $H$, and the model's active parameters $\theta$ include shape parameters $\theta_s$ and appearance parameters $\theta_a$, so that

$p(X, A, S, H | \theta) = \frac{1}{Z(\lambda)} \, p(A | \theta_a) \, p(S, H | \theta_s) \prod_i p(x_i | A, s_i, \theta_a)^{\lambda}$, (4)

where the parameter $\lambda$ adjusts the relative contributions of the shape and appearance components. See Fig. 4 for an illustration of the complete graphical model.
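The per-part appearance likelihood of Eq. 2 is just a Gaussian mixture whose mixing weights depend on the part's chosen appearance class. A minimal sketch, using 1-D Gaussians for brevity instead of the paper's RGB-space Gaussians, with illustrative toy parameters:

```python
import math

# Sketch of the pixel likelihood under appearance class k of Eq. 2:
# p(x_i | a_l = class k) = sum_w phi[k][w] * N(x | mu_w, var_w).
# 1-D Gaussians and toy numbers; function names are illustrative.

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def pixel_likelihood(x, k, phi, mus, vars_):
    """Mixture of the W shared Gaussians under class k's mixing weights."""
    return sum(phi[k][w] * gaussian_pdf(x, mus[w], vars_[w])
               for w in range(len(mus)))

mus, vars_ = [0.0, 5.0], [1.0, 1.0]   # W = 2 Gaussians shared by all parts
phi = [[0.9, 0.1], [0.1, 0.9]]        # K = 2 appearance classes
# A pixel near mu = 0 is far more likely under class 0 than class 1:
assert pixel_likelihood(0.0, 0, phi, mus, vars_) > pixel_likelihood(0.0, 1, phi, mus, vars_)
```

The classes share the same $W$ Gaussians and differ only in their mixing proportions, which is what lets the pre-trained mixture be reused across all parts.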
During learning, we find the values of $\theta$ that maximize the likelihood of the training data $D$, and segmentation is performed on a previously-unseen image by querying the marginal distribution $p(S | X_{\text{test}}, \theta)$. Note that $Z(\lambda)$ is constant throughout the execution of the algorithms. We set $\lambda$ via trial and error in our experiments. Figure 4: A model of shape and appearance. Left: The joint model. Pixels $x_i$ are modeled via appearance variables $a_l$. The model's belief about each layer's shape is captured by shape variables $H$. Segmentation variables $s_i$ assign each pixel to a layer. Right: Schematic for an image $X$. 2 Inference and learning Inference: We approximate $p(A, S, H | X, \theta)$ by drawing samples of $A$, $S$ and $H$ using block-Gibbs Markov Chain Monte Carlo (MCMC). The desired distribution $p(S | X, \theta)$ can then be obtained by considering only the samples for $S$ (see Algorithm 1). In order to sample $p(A | S, H, X, \theta)$ we consider the conditional distribution of appearance class $k$ being chosen for part $l$, which is given by:

$p(a_{lk} = 1 | S, X, \theta) = \frac{\pi_{lk} \prod_i \left( \sum_w \phi_{lkw} \mathcal{N}(x_i | \mu_w, \Sigma_w) \right)^{\lambda \cdot s_{li}}}{\sum_{r=1}^{K} \left[ \pi_{lr} \prod_i \left( \sum_w \phi_{lrw} \mathcal{N}(x_i | \mu_w, \Sigma_w) \right)^{\lambda \cdot s_{li}} \right]}$. (5)

Since the MSBM only has edges between each pair of adjacent layers, all hidden units within a layer are conditionally independent given the units in the other two layers. This property can be exploited to make inference in the shape model exact and efficient. The conditional probabilities are:

$p(h^1_j = 1 | s, h^2, \theta) = \sigma\!\left( \sum_{i,l} w^1_{lij} s_{li} + \sum_k w^2_{jk} h^2_k + c^1_j \right)$, (6)

$p(h^2_k = 1 | h^1, \theta) = \sigma\!\left( \sum_j w^2_{jk} h^1_j + c^2_k \right)$, (7)

where $\sigma(y) = 1/(1 + \exp(-y))$ is the sigmoid function. To sample from $p(H | S, X, \theta)$ we iterate between Eqns. 6 and 7 multiple times and keep only the final values of $h^1$ and $h^2$. Finally, we draw samples for the pixels in $p(S | A, H, X, \theta)$ independently:

$p(s_{li} = 1 | A, H, X, \theta) = \frac{\exp\left( \sum_j w^1_{lij} h^1_j + b_{li} \right) p(x_i | A, s_{li} = 1, \theta)^{\lambda}}{\sum_{m=1}^{L} \exp\left( \sum_j w^1_{mij} h^1_j + b_{mi} \right) p(x_i | A, s_{mi} = 1, \theta)^{\lambda}}$.
(8) Seeding: Since the latent space is extremely high-dimensional, in practice we find it helpful to run several inference chains, each initializing $S^{(1)}$ to a different value. The 'best' inference is retained and the others are discarded. The computation of the likelihood $p(X | \theta)$ of image $X$ is intractable, so we approximate the quality of each inference using a scoring function:

$\text{Score}(X | \theta) = \frac{1}{T} \sum_t p(X, A^{(t)}, S^{(t)}, H^{(t)} | \theta)$, (9)

where $\{A^{(t)}, S^{(t)}, H^{(t)}\}$, $t = 1 \dots T$ are the samples obtained from the posterior $p(A, S, H | X, \theta)$. If the samples were drawn from the prior $p(A, S, H | \theta)$ the scoring function would be an unbiased estimator of $p(X | \theta)$, but would be wildly inaccurate due to the high probability of missing the important regions of latent space (see e.g. [12, p. 107-109] for further discussion of this issue). Learning: Learning of the model involves maximizing the log likelihood $\log p(D | \theta_a, \theta_s)$ of the training dataset $D$ with respect to the model parameters $\theta_a$ and $\theta_s$. Since training is partially supervised, in that for each image $X$ its corresponding segmentation $S$ is also given, we can learn the parameters of the shape and appearance components separately. For appearances, the learning of the mixing coefficients and the histogram parameters decomposes into standard mixture updates independently for each part. For shapes, we follow the standard deep learning literature closely [13, 1].

Algorithm 1 MCMC inference algorithm.
1: procedure INFER($X$, $\theta$)
2:   Initialize $S^{(1)}$, $H^{(1)}$
3:   for $t = 2$ : chain length do
4:     $A^{(t)} \sim p(A | S^{(t-1)}, H^{(t-1)}, X, \theta)$
5:     $S^{(t)} \sim p(S | A^{(t)}, H^{(t-1)}, X, \theta)$
6:     $H^{(t)} \sim p(H | S^{(t)}, \theta)$
7: return $\{S^{(t)}\}_{t = \text{burnin} : \text{chain length}}$

In the pre-training phase we greedily train the model bottom up, one layer at a time. We begin by training an RBM on the observed data using stochastic maximum likelihood learning (SML; also referred to as 'persistent CD'; [14, 13]). Once this RBM is trained, we infer the conditional mean of the hidden units for each training image.
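The layer-wise updates of Eqs. 6-7 can be sketched directly: given the other layers, every hidden unit is an independent Bernoulli with a sigmoid activation, and sampling $p(H | S, X, \theta)$ alternates between the two layers. A toy-sized sketch with illustrative (untrained) parameter values:

```python
import math
import random

# Sketch of the block-Gibbs updates of Eqs. 6-7. Sizes and weights are toy
# illustrations, not trained MSBM parameters.

def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

def sample_h1(S, h2, W1, W2, c1, rng):
    """Eq. 6: p(h1_j = 1) = sigmoid(sum_{i,l} w1_lij s_li + sum_k w2_jk h2_k + c1_j)."""
    h1 = []
    for j in range(len(c1)):
        act = c1[j] + sum(W2[j][k] * h2[k] for k in range(len(h2)))
        act += sum(W1[l][i][j] * S[i][l]
                   for i in range(len(S)) for l in range(len(S[0])))
        h1.append(1 if rng.random() < sigmoid(act) else 0)
    return h1

def sample_h2(h1, W2, c2, rng):
    """Eq. 7: p(h2_k = 1) = sigmoid(sum_j w2_jk h1_j + c2_k)."""
    h2 = []
    for k in range(len(c2)):
        act = c2[k] + sum(W2[j][k] * h1[j] for j in range(len(h1)))
        h2.append(1 if rng.random() < sigmoid(act) else 0)
    return h2

# Iterate between Eqs. 6 and 7 a few times, as in sampling p(H | S, X, theta):
rng = random.Random(0)
S = [[1, 0], [0, 1]]                   # 2 pixels, L+1 = 2 labels
W1 = [[[5.0], [0.0]], [[0.0], [5.0]]]  # W1[l][i][j], J = 1 first-layer unit
W2 = [[5.0]]                           # W2[j][k], K = 1 second-layer unit
c1, c2 = [-2.0], [-2.0]
h1, h2 = [0], [0]
for _ in range(5):
    h1 = sample_h1(S, h2, W1, W2, c1, rng)
    h2 = sample_h2(h1, W2, c2, rng)
```

With these strongly positive weights both units switch on and stay on, illustrating why only the final values of $h^1$ and $h^2$ need to be kept.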
The resulting vectors then serve as the training data for a second RBM which is again trained using SML. We use the parameters of these two RBMs to initialize the parameters of the full MSBM model. In the second phase we perform approximate stochastic gradient ascent in the likelihood of the full model to fine-tune the parameters in an EM-like scheme as described in [13]. 3 Related work Existing probabilistic models of images can be categorized by the amount of variability they expect to encounter in the data and by how they model this variability. A significant portion of the literature models images using only two parts: a foreground object and its background, e.g. [15, 16, 17, 18, 19]. Models that account for the parts within the foreground object mainly differ in how accurately they learn about and represent the variability of the shapes of the object's parts. In Probabilistic Index Maps (PIMs) [8] a mean partitioning is learned, and the deformable PIM [9] additionally allows for local deformations of this mean partitioning. Stel Component Analysis [10] accounts for larger amounts of shape variability by learning a number of different template means for the object that are blended together on a pixel-by-pixel basis. Factored Shapes and Appearances [11] models global properties of shape using a factor analysis-like model, and 'masked' RBMs have been used to model more local properties of shape [20]. However, none of these models constitutes a strong model of shape in terms of realism of samples and generalization capabilities [1]. We demonstrate in Sec. 4 that, like the SBM, the MSBM does in fact possess these properties. The closest works to ours in terms of ability to deal with datasets that exhibit significant variability in both shape and appearance are the works of Bo and Fowlkes [21] and Thomas et al. [22]. Bo and Fowlkes [21] present an algorithm for pedestrian segmentation that models the shapes of the parts using several template means.
The different parts are composed using hand-coded geometric constraints, which means that the model cannot be automatically extended to other application domains. The Implicit Shape Model (ISM) used in [22] is reliant on interest point detectors and defines distributions over segmentations only in the posterior, and therefore is not fully generative. The model presented here is entirely learned from data and fully generative, therefore it can be applied to new datasets and diagnosed with relative ease. Due to its modular structure, we also expect it to rapidly absorb future developments in shape and appearance models. 4 Experiments Penn-Fudan pedestrians: The first dataset that we considered is Penn-Fudan pedestrians [23], consisting of 169 images of pedestrians (Fig. 6(a)). The images are annotated with ground-truth segmentations for L = 7 different parts (hair, face, upper and lower clothes, shoes, legs, arms; Fig. 6(d)). We compare the performance of the model with the algorithm of Bo and Fowlkes [21]. For the shape component, we trained an MSBM on the 684 images of a labeled version of the HumanEva dataset [24] (at 48 × 24 pixels; also flipped horizontally) with overlap b = 4, and 400 and 50 hidden units in the first and second layers respectively. Each layer was pre-trained for 3000 epochs (iterations). After pre-training, joint training was performed for 1000 epochs. Figure 5: Learned shape model. (a) A chain of samples (1000 samples between frames). The apparent 'blurriness' of samples is not due to averaging or resizing. We display the probability of each pixel belonging to different parts. If, for example, there is a 50-50 chance that a pixel belongs to the red or blue parts, we display that pixel in purple. (b) Differences between the samples and their most similar counterparts in the training dataset. (c) Completion of occlusions (pink).
To assess the realism and generalization characteristics of the learned MSBM we sample from it. In Fig. 5(a) we show a chain of unconstrained samples from an MSBM generated via block-Gibbs MCMC (1000 samples between frames). The model captures highly non-linear correlations in the data whilst preserving the object's details (e.g. face and arms). To demonstrate that the model has not simply memorized the training data, in Fig. 5(b) we show the difference between the sampled shapes in Fig. 5(a) and their closest images in the training set (based on per-pixel label agreement). We see that the model generalizes in non-trivial ways to generate realistic shapes that it had not encountered during training. In Fig. 5(c) we show how the MSBM completes rectangular occlusions. The samples highlight the variability in possible completions captured by the model. Note how, e.g. the length of the person's trousers on one leg affects the model's predictions for the other, demonstrating the model's knowledge about long-range dependencies. An interactive MATLAB GUI for sampling from this MSBM has been included in the supplementary material. The Penn-Fudan dataset (at 200 × 100 pixels) was then split into 10 train/test cross-validation splits without replacement. We used the training images in each split to train the appearance component with a vocabulary of size W = 50 and K = 100 mixture components¹. We additionally constrained the model by sharing the appearance models for the arms and legs with that of the face. We assess the quality of the appearance model by performing the following experiment: for each test image, we used the scoring function described in Eq. 9 to evaluate a number of different proposal segmentations for that image. We considered 10 randomly chosen segmentations from the training dataset as well as the ground-truth segmentation for the test image, and found that the appearance model correctly assigns the highest score to the ground-truth 95% of the time.
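The comparison behind Fig. 5(b), finding each sample's most similar training example by per-pixel label agreement, can be sketched as follows. The 4-pixel 'images' and helper names are toy illustrations:

```python
# Sketch of nearest-training-example retrieval by per-pixel label agreement,
# as used to check that sampled shapes are not memorized training shapes.
# Images are flattened lists of integer part labels; all values are toy data.

def label_agreement(a, b):
    """Fraction of pixels whose part labels agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def closest_training_example(sample, training_set):
    """Return the training image with the highest label agreement."""
    return max(training_set, key=lambda ex: label_agreement(sample, ex))

training_set = [[0, 0, 0, 1], [0, 1, 1, 2], [2, 2, 2, 2]]
sample = [0, 1, 1, 1]
best = closest_training_example(sample, training_set)
assert best == [0, 1, 1, 2]                     # agrees on 3 of 4 pixels
```

If the best agreement stays well below 1.0 across many samples, the model is generating shapes it has not seen, which is the generalization argument the figure makes.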
During inference, the shape and appearance models (which are defined on images of different sizes) were combined at 200 × 100 pixels via MATLAB's imresize function, and we set λ = 0.8 (Eq. 8) via trial and error. Inference chains were seeded at 100 exemplar segmentations from the HumanEva dataset (obtained using the K-medoids algorithm with K = 100), and were run for 20 Gibbs iterations each (with 5 iterations of Eqs. 6 and 7 per Gibbs iteration). Our unoptimized MATLAB implementation completed inference for each chain in around 7 seconds. We compute the conditional probability of each pixel belonging to different parts given the last set of samples obtained from the highest scoring chain, assign each pixel independently to the most likely part at that pixel, and report the percentage of correctly labeled pixels (see Table 1). We find that accuracy can be improved using superpixels (SP) computed on X (pixels within a superpixel are all assigned the most common label within it; as with [21] we use gPb-OWT-UCM [25]). We also report the accuracy obtained, had the top scoring seed segmentation been returned as the final segmentation for each image. Here the quality of the seed is determined solely by the appearance model. We observe that the model has comparable performance to the state-of-the-art but pedestrian-specific algorithm of [21], and that inference in the model significantly improves the accuracy of the segmentations over the baseline (top seed+SP). Qualitative results can be seen in Fig. 6(c). ¹We obtained the best quantitative results with these settings. The appearances exhibited by the parts in the dataset are highly varied, and the complexity of the appearance model reflects this fact. Table 1: Penn-Fudan pedestrians. We report the percentage of correctly labeled pixels. The final column is an average of the background, upper and lower body scores (as reported in [21]).
                     FG     BG     Upper Body  Lower Body  Head   Average
Bo and Fowlkes [21]  73.3%  81.1%  73.6%       71.6%       51.8%  69.5%
MSBM                 70.7%  72.8%  68.6%       66.7%       53.0%  65.3%
MSBM + SP            71.6%  73.8%  69.9%       68.5%       54.1%  66.6%
Top seed             59.0%  61.8%  56.8%       49.8%       45.5%  53.5%
Top seed + SP        61.6%  67.3%  60.8%       54.1%       43.5%  56.4%

Table 2: ETHZ cars. We report the percentage of pixels belonging to each part that are labeled correctly. The final column is an average weighted by the frequency of occurrence of each label.

          BG     Body   Wheel  Window  Bumper  License  Light  Average
ISM [22]  93.2%  72.2%  63.6%  80.5%   73.8%   56.2%    34.8%  86.8%
MSBM      94.6%  72.7%  36.8%  74.4%   64.9%   17.9%    19.9%  86.0%
Top seed  92.2%  68.4%  28.3%  63.8%   45.4%   11.2%    15.1%  81.8%

ETHZ cars: The second dataset that we considered is the ETHZ labeled cars dataset [22], which itself is a subset of the LabelMe dataset [23], consisting of 139 images of cars, all in the same semi-profile view (Fig. 7(a)). The images are annotated with ground-truth segmentations for L = 6 parts (body, wheel, window, bumper, license plate, headlight; Fig. 7(d)). We compare the performance of the model with the ISM of Thomas et al. [22], who also report their results on this dataset. The dataset was split into 10 train/test cross-validation splits without replacement. We used the training images in each split to train both the shape and appearance components. For the shape component, we trained an MSBM at 50 × 50 pixels with overlap b = 4, and 2000 and 100 hidden units in the first and second layers respectively. Each layer was pre-trained for 3000 epochs and joint training was performed for 1000 epochs. The appearance model was trained with a vocabulary of size W = 50 and K = 100 mixture components and we set λ = 0.7. Inference chains were seeded at 50 exemplar segmentations (obtained using K-medoids). We find that the use of superpixels does not help with this dataset (due to the poor quality of superpixels obtained for these images).
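The superpixel refinement used in Table 1's "+ SP" rows, reassigning every pixel inside a superpixel to the most common label within it, is a simple majority vote. A sketch with toy superpixel ids and labels (names are illustrative):

```python
from collections import Counter

# Sketch of superpixel-based label smoothing: pixels within a superpixel are
# all assigned the most common label within it. Toy 6-pixel example.

def superpixel_vote(labels, superpixels):
    """labels[i]: per-pixel part label; superpixels[i]: superpixel id of pixel i."""
    groups = {}
    for lab, sp in zip(labels, superpixels):
        groups.setdefault(sp, []).append(lab)
    majority = {sp: Counter(labs).most_common(1)[0][0]
                for sp, labs in groups.items()}
    return [majority[sp] for sp in superpixels]

labels      = [1, 1, 2, 0, 0, 0]   # one stray label 2 inside superpixel 0
superpixels = [0, 0, 0, 1, 1, 1]
assert superpixel_vote(labels, superpixels) == [1, 1, 1, 0, 0, 0]
```

The vote removes isolated mislabeled pixels but can only help if superpixel boundaries respect part boundaries, which is consistent with its failure on the ETHZ images where the superpixels are poor.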
Qualitative and quantitative results that show the performance of the model to be comparable to the state-of-the-art ISM can be seen in Fig. 7(c) and Table 2. We believe the discrepancy in accuracy between the MSBM and ISM on the 'license' and 'light' labels to mainly be due to ISM's use of interest-points, as they are able to locate such fine structures accurately. By incorporating better models of part appearance into the generative model, we expect to see this discrepancy decrease. 5 Conclusions and future work In this paper we have shown how the SBM can be extended to obtain the MSBM, and presented a principled probabilistic model of images of objects that exploits the MSBM as its model for part shapes. We demonstrated how object segmentations can be obtained simply by performing MCMC inference in the model. The model can also be treated as a probabilistic evaluator of segmentations: given a proposal segmentation it can be used to estimate its likelihood. This leads us to believe that the combination of a generative model such as ours, with a discriminative, bottom-up segmentation algorithm could be highly effective. We are currently investigating how textured appearance models, which take into account the spatial structure of pixels, affect the learning and inference algorithms and the performance of the model. Acknowledgments Thanks to Charless Fowlkes and Vittorio Ferrari for access to datasets, and to Pushmeet Kohli and John Winn for valuable discussions. AE has received funding from the Carnegie Trust, the SORSAS scheme, and the IST Programme under the PASCAL2 Network of Excellence (IST-2007-216886). Figure 6: Penn-Fudan pedestrians. (a) Test images. (b) Results reported by Bo and Fowlkes [21]. (c) Output of the joint model. (d) Ground-truth images. Images shown are those selected by [21].
Figure 7: ETHZ cars. (a) Test images. (b) Results reported by Thomas et al. [22]. (c) Output of the joint model. (d) Ground-truth images. Images shown are those selected by [22]. References [1] S. M. Ali Eslami, Nicolas Heess, and John Winn. The Shape Boltzmann Machine: a Strong Model of Object Shape. In IEEE CVPR, 2012. [2] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88:303–338, 2010. [3] Martin Fischler and Robert Elschlager. The Representation and Matching of Pictorial Structures. IEEE Transactions on Computers, 22(1):67–92, 1973. [4] David Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Freeman, 1982. [5] Irving Biederman. Recognition-by-components: A theory of human image understanding. Psychological Review, 94:115–147, 1987. [6] Ashish Kapoor and John Winn. Located Hidden Random Fields: Learning Discriminative Parts for Object Detection. In ECCV, pages 302–315, 2006. [7] John Winn and Jamie Shotton. The Layout Consistent Random Field for Recognizing and Segmenting Partially Occluded Objects. In IEEE CVPR, pages 37–44, 2006. [8] Nebojsa Jojic and Yaron Caspi. Capturing Image Structure with Probabilistic Index Maps. In IEEE CVPR, pages 212–219, 2004. [9] John Winn and Nebojsa Jojic. LOCUS: Learning object classes with unsupervised segmentation. In ICCV, pages 756–763, 2005. [10] Nebojsa Jojic, Alessandro Perina, Marco Cristani, Vittorio Murino, and Brendan Frey. Stel component analysis. In IEEE CVPR, pages 2044–2051, 2009. [11] S. M. Ali Eslami and Christopher K. I. Williams. Factored Shapes and Appearances for Parts-based Object Understanding. In BMVC, pages 18.1–18.12, 2011. [12] Nicolas Heess. Learning generative models of mid-level structure in natural images.
PhD thesis, University of Edinburgh, 2011. [13] Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann Machines. In AISTATS, volume 5, pages 448–455, 2009. [14] Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, pages 1064–1071, 2008. [15] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. "GrabCut": interactive foreground extraction using iterated graph cuts. ACM SIGGRAPH, 23:309–314, 2004. [16] Eran Borenstein, Eitan Sharon, and Shimon Ullman. Combining Top-Down and Bottom-Up Segmentation. In CVPR Workshop on Perceptual Organization in Computer Vision, 2004. [17] Himanshu Arora, Nicolas Loeff, David Forsyth, and Narendra Ahuja. Unsupervised Segmentation of Objects using Efficient Learning. IEEE CVPR, pages 1–7, 2007. [18] Bogdan Alexe, Thomas Deselaers, and Vittorio Ferrari. ClassCut for unsupervised class segmentation. In ECCV, pages 380–393, 2010. [19] Nicolas Heess, Nicolas Le Roux, and John Winn. Weakly Supervised Learning of Foreground-Background Segmentation using Masked RBMs. In ICANN, 2011. [20] Nicolas Le Roux, Nicolas Heess, Jamie Shotton, and John Winn. Learning a Generative Model of Images by Factoring Appearance and Shape. Neural Computation, 23(3):593–650, 2011. [21] Yihang Bo and Charless Fowlkes. Shape-based Pedestrian Parsing. In IEEE CVPR, 2011. [22] Alexander Thomas, Vittorio Ferrari, Bastian Leibe, Tinne Tuytelaars, and Luc Van Gool. Using Recognition and Annotation to Guide a Robot's Attention. IJRR, 28(8):976–998, 2009. [23] Bryan Russell, Antonio Torralba, Kevin Murphy, and William Freeman. LabelMe: A Database and Tool for Image Annotation. International Journal of Computer Vision, 77:157–173, 2008. [24] Leonid Sigal, Alexandru Balan, and Michael Black. HumanEva. International Journal of Computer Vision, 87(1-2):4–27, 2010. [25] Pablo Arbelaez, Michael Maire, Charless C. Fowlkes, and Jitendra Malik. From Contours to Regions: An Empirical Evaluation. In IEEE CVPR, 2009.
|
2012
|
169
|
4,529
|
A latent factor model for highly multi-relational data Rodolphe Jenatton CMAP, UMR CNRS 7641, Ecole Polytechnique, Palaiseau, France jenatton@cmap.polytechnique.fr Nicolas Le Roux INRIA - SIERRA Project Team, École Normale Supérieure, Paris, France nicolas@le-roux.name Antoine Bordes Heudiasyc, UMR CNRS 7253, Université de Technologie de Compiègne, France antoine.bordes@utc.fr Guillaume Obozinski INRIA - SIERRA Project Team, École Normale Supérieure, Paris, France guillaume.obozinski@ens.fr Abstract Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches tend to break down when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. We illustrate the performance of our approach on standard tensor-factorization datasets where we attain, or outperform, state-of-the-art results. Finally, an NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations. 1 Introduction Statistical Relational Learning (SRL) [7] aims at modeling data consisting of relations between entities. Social networks, preference data from recommender systems, relational databases used for the semantic web or in bioinformatics, illustrate the diversity of applications in which such modeling has a potential impact. Relational data typically involve different types of relations between entities or attributes.
These entities can be users in the case of social networks or recommender systems, words in the case of lexical knowledge bases, or genes and proteins in the case of bioinformatics ontologies, to name a few. For binary relations, the data is naturally represented as a so-called multi-relational graph consisting of nodes associated with entities and of different types of edges between nodes corresponding to the different types of relations. Equivalently the data consists of a collection of triplets of the form (subject, relation, object), listing the actual relationships, where we will call subject and object respectively the first and second term of a binary relation. Relational data typically cumulate many difficulties. First, there is a large number of relation types, some being significantly more represented than others and possibly concerning only subsets of entities; second, the data is typically noisy and incomplete (missing or incorrect relationships, redundant entities); finally, most datasets are large scale with up to millions of entities and billions of links for real-world knowledge bases. Besides relational databases, SRL can also be used to model natural language semantics. A standard way of representing the meaning of language is to identify entities and relations in texts or speech utterances and to organize them. This can be conducted at various scales, from the word or sentence level (e.g. in parsing or semantic role labeling) to a collection of texts (e.g. in knowledge extraction). SRL systems are a useful tool there, as they can automatically extract high level information from the collected data by building summaries [22], sense categorization lexicons [11], ontologies [20], etc. Progress in SRL would be likely to lead to advances in natural language understanding. In this paper, we introduce a model for relational data and apply it to multi-relational graphs and to natural language.
In assigning high probabilities to valid relations and low probabilities to all the others, this model extracts meaningful representations of the various entities and relations in the data. Unlike other factorization methods (e.g. [15]), our model is probabilistic, which has the advantage of accounting explicitly for the uncertainties in the data. Besides, thanks to a sparse distributed representation of relation types, our model can handle data with a significantly larger number of relation types than was considered so far in the literature (a crucial aspect for natural language data). We empirically show that this approach ties or beats state-of-the-art algorithms on various benchmarks of link prediction, a standard test-bed for SRL methods. 2 Related work A branch of relational learning, motivated by applications such as collaborative filtering and link prediction in networks, models relations between entities as resulting from intrinsic latent attributes of these entities.¹ Work in what we will call relational learning from latent attributes (RLA) focused mostly on the problem of modeling a single relation type as opposed to trying to model simultaneously a collection of relations which can themselves be similar. As reflected by several formalisms proposed for relational learning [7], it is the latter multi-relational learning problem which is needed to model efficiently large scale relational databases. The fact that relations can be similar or related suggests that a superposition of independently learned models for each relation would be highly inefficient, especially since the relationships observed for each relation are extremely sparse. RLA translates often into learning an embedding of the entities, which corresponds algebraically to a matrix factorization problem (typically the matrix of observed relationships).
A natural extension to learning multiple relations consists in stacking the matrices to be factorized and applying classical tensor factorization methods such as CANDECOMP/PARAFAC [25, 8]. This approach, which inherently induces some sharing of parameters between both different terms and different relations, has been applied successfully [8] and has inspired some probabilistic formulations [4]. Another natural way of learning several relations simultaneously is to share a common embedding of the entities across relations via collective matrix factorization, as proposed in RESCAL [15] and other related work [18, 23]. The simplest form of latent attribute that can be associated to an entity is a latent class: the resulting model is the classical stochastic blockmodel [26, 17]. Several clustering-based approaches have been proposed for multi-relational learning: [9] considered a non-parametric Bayesian extension of the stochastic blockmodel, allowing one to automatically infer the number of latent clusters; [14, 28] refined this to allow entities to have a mixed cluster membership; [10] introduced clustering in Markov-Logic networks; [24] used a non-parametric Bayesian clustering of entity embeddings in a collective matrix factorization formulation. To share parameters between relations, [9, 24, 14, 28] and [10] build models that cluster not only entities but relations as well. With the same aim of reducing the number of parameters, the Semantic Matching Energy model (SME) of [2] embeds relations as vectors from the same space as the entities and models likely relationships by an energy combining binary interactions between the relation vector and each of the vectors encoding the two terms.
In terms of scalability, RESCAL [15], which has been shown to achieve state-of-the-art performance on several relation datasets, has recently been applied to the knowledge base YAGO [16], thereby showing its ability to scale well on data with very large numbers of entities, although the number of relations modeled remained moderate (less than 100). As for SME [2], its modeling of relations by vectors allowed it to scale to several thousands of relations. Scalability can also be an issue for nonparametric Bayesian models (e.g. [9, 24]) because of the cost of inference. 1This is called Statistical Predicate Invention by [10]. 3 Relational data modeling We consider relational data consisting of triplets that encode the existence of a relation between two entities, which we will call the subject and the object. Specifically, we consider a set of ns subjects {S1, . . . , Sns} along with no objects {O1, . . . , Ono}, which are related by some of nr relations {R1, . . . , Rnr}. A triplet encodes that the relation Rj holds between the subject Si and the object Ok, which we will write Rj(Si, Ok) = 1. We will therefore refer to a triplet also as a relationship. A typical example, which we will discuss in greater detail, is in natural language processing, where a triplet (Si, Rj, Ok) corresponds to the association of a subject and a direct object through a transitive verb. The goal is to learn a model of the relations to reliably predict unseen triplets. For instance, one might be interested in finding a likely relation Rj based only on the subject and object (Si, Ok). 4 Model description In this work, we formulate the problem of learning a relation as a matrix factorization problem. Following a rationale underlying several previous approaches [15, 24], we consider a model in which entities are embedded in Rp and relations are encoded as bilinear operators on the entities. More precisely, we assume that the ns subjects (resp.
no objects) are represented by vectors of Rp, stored as the columns of the matrix $S \triangleq [s_1, \ldots, s_{n_s}] \in \mathbb{R}^{p \times n_s}$ (resp. as the columns of $O \triangleq [o_1, \ldots, o_{n_o}] \in \mathbb{R}^{p \times n_o}$). Each of the p-dimensional representations si, ok will have to be learned. The relations are represented by a collection of matrices $(R_j)_{1 \le j \le n_r}$, with $R_j \in \mathbb{R}^{p \times p}$, which together form a three-dimensional tensor. We consider a model of the probability of the event Rj(Si, Ok) = 1. Assuming first that si and ok are fixed, our model is derived from the logistic model $P[R_j(S_i, O_k) = 1] \triangleq \sigma(\eta^{(j)}_{ik})$, with $\sigma(t) \triangleq 1/(1 + e^{-t})$. A natural form for $\eta^{(j)}_{ik}$ is a linear function of the tensor product $s_i \otimes o_k$, which we can write $\eta^{(j)}_{ik} = \langle s_i, R_j o_k \rangle$, where $\langle \cdot, \cdot \rangle$ is the usual inner product in $\mathbb{R}^p$. If we now think of learning si, Rj and ok for all (i, j, k) simultaneously, this model learns together the matrices Rj and optimal embeddings si, ok of the entities, so that the usual logistic regressions based on $s_i \otimes o_k$ predict well the probability of the observed relationships. This is the initial model considered in [24], and it matches the model considered in [16] if the least-squares loss is substituted for the logistic loss. We will refine this model in two ways: first, by redefining the term $\eta^{(j)}_{ik}$ as a function $\eta^{(j)}_{ik} \triangleq E(s_i, R_j, o_k)$ taking into account the different orders of interactions between si, ok and Rj; second, by parameterizing the relations Rj by latent “relational” factors that reduce the overall number of parameters of the model.
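Numerically, the basic bilinear score and its logistic transform look as follows (a toy sketch with random, unlearned parameters; the dimension p = 5 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
s_i = rng.standard_normal(p)        # subject embedding
o_k = rng.standard_normal(p)        # object embedding
R_j = rng.standard_normal((p, p))   # relation operator

# eta = <s_i, R_j o_k>, a linear function of the tensor product s_i (x) o_k
eta = s_i @ R_j @ o_k
prob = 1.0 / (1.0 + np.exp(-eta))   # P[R_j(S_i, O_k) = 1]
assert 0.0 < prob < 1.0

# Equivalently, eta is the Frobenius inner product of s_i (x) o_k with R_j.
assert np.isclose(eta, np.sum(np.outer(s_i, o_k) * R_j))
```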
4.1 A multiple order log-odds ratio model One way of thinking about the probability of occurrence of a specific relationship corresponding to the triplet (Si, Rj, Ok) is as resulting (a) from the marginal propensity of individual entities Si, Ok to enter relations and the marginal propensity of relations Rj to occur, (b) from 2-way interactions of (Si, Rj) and (Rj, Ok), corresponding to entities tending to occur marginally as left or right terms of a relation, (c) from 2-way interactions of pairs of entities (Si, Ok) that overall tend to have more relations together, and (d) from the 3-way dependencies between (Si, Rj, Ok). In NLP, we often refer to these as respectively unigram, bigram and trigram terms, a terminology which we will reuse in the rest of the paper. We therefore design E(si, Rj, ok) to account for these interactions of various orders, retaining only terms involving Rj.2 In particular, introducing new parameters $y, y', z, z' \in \mathbb{R}^p$, we define $\eta^{(j)}_{ik} = E(s_i, R_j, o_k)$ as

$$E(s_i, R_j, o_k) \triangleq \langle y, R_j\, y' \rangle + \langle s_i, R_j\, z \rangle + \langle z', R_j\, o_k \rangle + \langle s_i, R_j\, o_k \rangle, \qquad (1)$$

where $\langle y, R_j\, y' \rangle$, $\langle s_i, R_j\, z \rangle + \langle z', R_j\, o_k \rangle$ and $\langle s_i, R_j\, o_k \rangle$ are the uni-, bi- and trigram terms. This parametrization is redundant in general, given that $E(s_i, R_j, o_k)$ is of the form $\langle (s_i + z'), R_j\, (o_k + z) \rangle + b_j$; it is however useful in the context of a regularized model (see Section 5). 2This is motivated by the fact that we are primarily interested in modelling the relation terms, and that it is not necessary to introduce all terms to fully parameterize the model. 4.2 Sharing parameters across relations through latent factors When learning a large number of relations, the number of observations for many relations can be quite small, leading to a risk of overfitting. Sutskever et al. [24] addressed this issue with a nonparametric Bayesian model inducing clustering of both relations and entities. SME [2] proposed to embed relations as vectors of Rp, like entities, to tackle problems with hundreds of relation types.
With a similar motivation to decrease the overall number of parameters, instead of using a general parameterization of the matrices Rj as in RESCAL [16], we require that all Rj decompose over a common set of d rank-one matrices $\{\Theta_r\}_{1 \le r \le d}$ representing some canonical relations:

$$R_j = \sum_{r=1}^{d} \alpha^j_r \Theta_r, \quad \text{for some sparse } \alpha^j \in \mathbb{R}^d \text{ and } \Theta_r = u_r v_r^\top \text{ for } u_r, v_r \in \mathbb{R}^p. \qquad (2)$$

The combined effect of (a) the sparsity of the decomposition and (b) the fact that $d \ll n_r$ leads to sharing parameters across relations. Further, constraining $\Theta_r$ to be the outer product $u_r v_r^\top$ also speeds up all computations relying on linear algebra. 5 Regularized formulation and optimization Denoting by $\mathcal{P}$ (resp. $\mathcal{N}$) the set of indices of positively (resp. negatively) labeled relations, the likelihood we seek to maximize is

$$\mathcal{L} \triangleq \prod_{(i,j,k) \in \mathcal{P}} P[R_j(S_i, O_k) = 1] \cdot \prod_{(i',j',k') \in \mathcal{N}} P[R_{j'}(S_{i'}, O_{k'}) = 0].$$

The log-likelihood is thus $\log(\mathcal{L}) = \sum_{(i,j,k) \in \mathcal{P}} \eta^{(j)}_{ik} - \sum_{(i,j,k) \in \mathcal{P} \cup \mathcal{N}} \log(1 + \exp(\eta^{(j)}_{ik}))$, with $\eta^{(j)}_{ik} = E(s_i, R_j, o_k)$. To properly normalize the terms appearing in (1) and (2), we carry out the minimization of the negative log-likelihood over a specific constraint set, namely

$$\min_{S, O, \{\alpha^j\}, \{\Theta_r\}, y, y', z, z'} -\log(\mathcal{L}), \quad \text{with } \|\alpha^j\|_1 \le \lambda, \; \Theta_r = u_r v_r^\top, \; z = z', \; O = S,$$

and $s_i$, $o_k$, $y$, $y'$, $z$, $u_r$ and $v_r$ in the ball $\{w;\ \|w\|_2 \le 1\}$. We chose to constrain α in ℓ1-norm based on preliminary experiments suggesting that it led to better results than the regularization in ℓ2-norm. The regularization parameter λ ≥ 0 controls the sparsity of the relation representations in (2). The equality constraints induce a shared representation between subjects and objects, which was shown to improve the model in preliminary experiments. Given the fact that the model is conditional on a pair (si, ok), only a single scale parameter, namely $\alpha^j_r$, is necessary in the product $\alpha^j_r \langle s_i, \Theta_r o_k \rangle$, which motivates all the Euclidean unit ball constraints.
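Putting the two ingredients together, the energy of Eq. (1) with relation matrices decomposed as in Eq. (2) can be sketched as follows (random, unlearned parameters; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, nr = 4, 3, 5   # embedding dim., latent factors, relation types

U = rng.standard_normal((p, d))       # columns u_r
V = rng.standard_normal((p, d))       # columns v_r
alpha = rng.standard_normal((nr, d))  # alpha^j; sparse in the regularized model
y, y2, z, z2 = (rng.standard_normal(p) for _ in range(4))  # y, y', z, z'

def R(j):
    # Eq. (2): R_j = sum_r alpha^j_r u_r v_r^T  (rank <= d by construction)
    return (U * alpha[j]) @ V.T

def energy(s, j, o):
    # Eq. (1): unigram + bigram + trigram terms
    Rj = R(j)
    return y @ Rj @ y2 + s @ Rj @ z + z2 @ Rj @ o + s @ Rj @ o

s_i, o_k = rng.standard_normal(p), rng.standard_normal(p)
prob = 1.0 / (1.0 + np.exp(-energy(s_i, 0, o_k)))  # P[R_j(S_i, O_k) = 1]
assert 0.0 < prob < 1.0

# The rank-one structure allows scoring without ever forming R_j:
# <a, R_j b> = sum_r alpha^j_r (a . u_r)(v_r . b)
fast = alpha[0] @ ((s_i @ U) * (V.T @ o_k))
assert np.isclose(fast, s_i @ R(0) @ o_k)
```

This factored scoring is what makes the outer-product constraint on Θr pay off computationally: the bilinear form costs O(pd) instead of O(p²).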
5.1 Algorithmic approach Given the large scale of the problems we are interested in (e.g., $|\mathcal{P}| \approx 10^6$), and since we can project efficiently onto the constraint set (both the projections onto ℓ1- and ℓ2-norm balls can be performed in linear time [1]), our optimization problem lends itself well to a stochastic projected gradient algorithm [3]. In order to speed up the optimization, we use several practical tricks. First, we consider a stochastic gradient descent scheme with mini-batches containing 100 triplets. Second, we use stepsizes of the form a/(1 + k), with k the iteration number and a a scalar (common to all parameters) optimized over a logarithmic grid on a validation set.3 Additionally, we cannot treat the NLP application (see Sec. 8) as a standard tensor factorization problem. Indeed, in that case, we only have access to the positively labeled triplets $\mathcal{P}$. Following [2], we generate elements in $\mathcal{N}$ by considering triplets of the form $\{(i, j', k)\}$, $j' \neq j$, for each $(i, j, k) \in \mathcal{P}$. In practice, for each positive triplet, we sample a number of artificial negative triplets containing the same subject and object as our positive triplet but different verbs. This allowed us to change the problem into a multiclass one where the goal was to correctly classify the “positive” verb, in competition with the “negative” ones. The standard approach for this problem is to use a multinomial logistic function. However, such a function is highly sensitive to the particular choice of negative verbs, and using all the verbs as negative ones would be too costly. Another, more robust approach consists in using the likelihood function defined above, where we try to classify the positive verbs as valid relationships and the negative ones as invalid relationships. Further, this approximation to the multinomial logistic function is asymptotically unbiased. 3The code is available under an open-source license from http://goo.gl/TGYuh.
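For completeness, the two ball projections used at each projected-gradient step can be sketched as below. The ℓ1 projection here is a standard O(p log p) sorting-based routine; the linear-time variants cited in the text would replace the sort:

```python
import numpy as np

def project_l2_ball(w, radius=1.0):
    # Euclidean projection onto {w : ||w||_2 <= radius}: rescale if outside.
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def project_l1_ball(v, radius=1.0):
    # Euclidean projection onto {x : ||x||_1 <= radius} via soft-thresholding.
    if np.abs(v).sum() <= radius:
        return v.astype(float).copy()
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)    # threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = project_l1_ball(np.array([3.0, -1.0, 0.5]))
assert np.allclose(w, [1.0, 0.0, 0.0])
assert np.allclose(project_l2_ball(np.array([3.0, 4.0])), [0.6, 0.8])
```

A projected-gradient step then simply interleaves a mini-batch gradient update with these projections for each constrained parameter block.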
Finally, we observed that it was advantageous to down-weight the influence of the negative verbs to avoid swamping the influence of the positive ones. 6 Relation to other models Our model is closely related to several other models. First, if d is large, the parameters of the Rj are decoupled and the RESCAL model is retrieved (up to a change of loss function). Second, our model is also related to classical tensor factorization models such as PARAFAC, which approximate the tensor $[R_k(S_i, O_j)]_{i,j,k}$ in the least-squares sense by a low-rank tensor $\tilde{H}$ of the form $\sum_{r=1}^{d} \alpha_r \otimes \beta_r \otimes \gamma_r$ for $(\alpha_r, \beta_r, \gamma_r) \in \mathbb{R}^{n_r} \times \mathbb{R}^{n_s} \times \mathbb{R}^{n_o}$. The parameterization of all Rj as linear combinations of d rank-one matrices is in fact equivalent to constraining the tensor $\mathcal{R} = \{R_j\}_{1 \le j \le n_r}$ to be the low-rank tensor $\mathcal{R} = \sum_{r=1}^{d} \alpha_r \otimes u_r \otimes v_r$. As a consequence, the tensor of all trigram terms4 can also be written as $\sum_{r=1}^{d} \alpha_r \otimes \beta_r \otimes \gamma_r$, with $\beta_r = S^\top u_r$ and $\gamma_r = O^\top v_r$. This shows that our model is a particular form of tensor factorization which reduces to PARAFAC (up to a change of loss function) when p is sufficiently large. Finally, the approach considered in [2] seems a priori quite different from ours, in particular since relations are in that work embedded as vectors of Rp like the entities, as opposed to matrices of Rp×p in our case. This choice can be detrimental when modeling complex relation patterns, as we show in Section 7. In addition, no parameterization of the model of [2] is capable of handling both bigram and trigram interactions as we propose. 7 Application to multi-relational benchmarks We report in this section the performance of our model evaluated on standard tensor-factorization datasets, which we first briefly describe. 7.1 Datasets Kinships. Australian tribes are renowned among anthropologists for the complex relational structure of their kinship systems. This dataset, created by [6], focuses on the Alyawarra, a tribe from Central Australia.
104 tribe members were asked to provide the kinship terms they used for one another. This results in a graph of 104 entities and 26 relation types, each of them depicting a different kinship term, such as Adiadya or Umbaidya. See [6] or [9] for more details. UMLS. This dataset contains data from the Unified Medical Language System semantic work gathered by [12]. It consists of a graph with 135 entities and 49 relation types. The entities are high-level concepts like ’Disease or Syndrome’, ’Diagnostic Procedure’, or ’Mammal’. The relations represent verbs depicting causal influence between concepts, like ’affect’ or ’cause’. Nations. This dataset groups 14 countries (Brazil, China, Egypt, etc.) with 56 binary relation types representing interactions among them, like ’economic aid’, ’treaties’ or ’rel diplomacy’, and 111 features describing each country, which we treated as 111 additional entities interacting with the country through an additional ’has feature’ relation.5 See [21] for details. 4Other terms can be decomposed in a similar way. 5The resulting new relationships were only used for training, and not considered at test time.

Table 1: Comparisons of the performance obtained by our approach, RESCAL [16], MRC [10] and SME [2] over three standard datasets. The results are computed by 10-fold cross-validation.

Datasets  Metric               Our approach     RESCAL [16]  MRC [10]        SME [2]
Kinships  Area under PR curve  0.946 ± 0.005    0.95         0.84            0.907 ± 0.008
          Log-likelihood       -0.029 ± 0.001   N/A          -0.045 ± 0.002  N/A
UMLS      Area under PR curve  0.990 ± 0.003    0.98         0.98            0.983 ± 0.003
          Log-likelihood       -0.002 ± 0.0003  N/A          -0.004 ± 0.001  N/A
Nations   Area under PR curve  0.909 ± 0.009    0.84         0.75            0.883 ± 0.02
          Log-likelihood       -0.202 ± 0.008   N/A          -0.311 ± 0.022  N/A

7.2 Results These three datasets are relatively small-scale and contain only a few relation types (on the order of tens). Since our model is primarily designed to handle a large number of relation types (see Sec.
4.2), this setting is not the most favorable for evaluating the potential of our approach. As reported in Table 1, our method nonetheless yields performance better than or comparable to previous state-of-the-art techniques, both in terms of area under the precision-recall curve (AUC) and log-likelihood (LL). The results displayed in Table 1 are computed by 10-fold cross-validation6, averaged over 10 random splits of the datasets (90% for cross-validation and 10% for testing). We chose to compare our model with RESCAL [16], MRC [10] and SME [2] because they achieved the best published results on these benchmarks in terms of AUC and LL, to the best of our knowledge. Interestingly, the trigram term from (1) is essential to obtain good performance on Kinships (with the trigram term removed, we obtain 0.16 in AUC and −0.14 in LL), thus showing the need for modeling 3-way interactions in complex relational data. Moreover, and as expected due to the low number of relations, the value of λ selected by cross-validation is quite large (λ = nr × d) and, as a consequence, does not lead to sparsity in (2). Results on this dataset also exhibit the benefit of modeling relations with matrices instead of vectors, as SME [2] does. Zhu [28] recently reported results on Nations and Kinships evaluated in terms of area under the receiver-operating-characteristic curve instead of the area under the precision-recall curve that we display in Table 1. With this other metric, our model obtains 0.953 on Nations and 0.992 on Kinships and hence outperforms Zhu’s approach, which achieves 0.926 and 0.962 respectively. 8 Learning semantic representations of verbs By providing an approach to model the relational structure of language, SRL can be of great use for learning natural language semantics. Hence, this section proposes an application of our method on text data from Wikipedia for learning a representation of words, with a focus on verbs. 8.1 Experimental setting Data. We collected this data in two stages.
First, the SENNA software7 [5] was used to perform part-of-speech tagging, chunking, lemmatization8 and semantic role labeling on ≈2,000,000 Wikipedia articles. This data was then filtered to only select sentences for which the syntactic structure was (subject, verb, direct object), with each term of the triplet being a single word from the WordNet lexicon [13]. Subjects and direct objects ended up being all single nouns, whose dictionary size is 30,605. The total number of relations in this dataset (i.e. the number of verbs) is 4,547: this is much larger than for previously published multi-relational benchmarks. We kept 1,000,000 such relationships to build a training set, 50,000 for a validation set and 250,000 for test. All triplets are unique and we made sure that all words appearing in the validation or test sets were occurring in the training set.9 6The values of λ, d and p are searched in nr × d · {0.05, 0.1, 0.5, 1}, {100, 200, 500} and {10, 25, 50}. 7Available from ronan.collobert.com/senna/. 8Lemmatization was carried out using NLTK (nltk.org) and transforms a word into its base form. 9The data set is available under an open-source license from http://goo.gl/TGYuh.

Table 2: Performance obtained on the NLP dataset by our approach, SME [2] and a bigram model. Details about the statistics of the table are given in the text.

               synonyms not considered          best synonyms considered
               median/mean rank  p@5   p@20     median/mean rank  p@5   p@20
Our approach   50 / 195.0        0.78  0.95     19 / 96.7         0.89  0.98
SME [2]        56 / 199.6        0.77  0.95     19 / 99.2         0.89  0.98
Bigram         48 / 517.4        0.72  0.83     17 / 157.7        0.87  0.95

Practical training setup.
During the training phase, we optimized various parameters over the validation set, namely, the size p ∈ {25, 50, 100} of the representations, the dimension d ∈ {50, 100, 200} of the latent decompositions (2), the value of the regularization parameter λ as a fraction {1, 0.5, 0.1, 0.05, 0.01} of nr × d, the stepsize in {0.1, 0.05, 0.01}, and the weighting of the negative triplets. Moreover, to speed up the training, we gradually increased the number of sampled negative verbs (cf. Section 5.1) from 25 up to 50, which had the effect of refining the training. 8.2 Results Verb prediction. We first consider a direct evaluation of our approach based on the test set of 250,000 instances by measuring how well we predict a relevant and meaningful verb given a pair (subject, direct object). To this end, for each test relationship, we rank all verbs using our probability estimates given a pair (subject, direct object). Table 2 displays our results with two kinds of metrics, namely, (1) the rank of the correct verb and (2) the fraction of test examples for which the correct verb is ranked in the top z% of the list. The latter criterion is referred to as p@z. In order to evaluate whether some language semantics is captured by the representations, we also consider a less conservative approach where, instead of focusing on the correct verb only, we measure the minimum rank achieved over its set of synonyms obtained from WordNet. Our method is compared with that of SME [2], which was shown to scale well on data with large sets of relations, and with a bigram model, which estimates the probabilities of the pairs (subject, verb) and (verb, direct object). The first observation is that the task of verb prediction can be quite well addressed by a simple model based on 2-way interactions, as shown by the good median rank obtained by the bigram model. This is confirmed by the mild influence of the trigram term on the performance of our model.
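The rank and p@z statistics of Table 2 can be computed as in the following sketch (the score matrix below is hypothetical toy data; in the actual experiments each row would hold the model's probability estimates over all 4,547 verbs):

```python
import numpy as np

def rank_of(correct, scores):
    # 1-based rank of the correct verb when verbs are sorted by decreasing score.
    order = np.argsort(-scores)
    return int(np.where(order == correct)[0][0]) + 1

def precision_at(ranks, z_percent, n_verbs):
    # Fraction of test examples whose correct verb lies in the top z% of the list.
    cutoff = n_verbs * z_percent / 100.0
    return float(np.mean([r <= cutoff for r in ranks]))

# Toy evaluation: 3 test triplets, a vocabulary of 10 "verbs".
scores = np.array([
    [0.9, 0.1, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.1, 0.2, 0.8, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.6, 0.1, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
])
correct = [0, 3, 0]
ranks = [rank_of(c, s) for c, s in zip(correct, scores)]
assert ranks == [1, 2, 3]
```

The synonym-aware variant simply takes the minimum of `rank_of` over the WordNet synonym set of the correct verb instead of its rank alone.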
On this data, we found that using bigram interactions in our energy function was essential to achieve good predictions. However, the drop in the mean rank between our approach and the bigram-only model still indicates that many examples do need a richer model to be correctly handled. By comparison, we tend to consistently match or improve upon the performance of SME. Remarkably, model selection led to the choice of λ = 0.1 · nr × d, for which the coefficients α of the representations (2) are sparse in the sense that they are dominated by a few large values (e.g., the top 2% of the largest values of α account for about 25% of the total ℓ1-norm ∥α∥1).

[Figure 1: two precision-recall panels, “Predicting class 4” and “Predicting classes 3 and 4”, each showing curves for our approach, SME, Collobert et al. and Best WordNet.] Figure 1: Precision-recall curves for the task of lexical similarity classification. The curves are computed based on different similarity measures between verbs, namely, our approach, SME [2], Collobert et al. [5] and the best (out of three) WordNet similarity measure [13]. Details about the task can be found in the text.

Table 3: Performance obtained on a task of lexical similarity classification [27], where we compare our approach, SME [2], Collobert et al.’s word embeddings [5] and the best (out of 3) WordNet Similarity measure [19] using area under the precision-recall curve. Details are given in the text.

                   Our approach  SME [2]  Collobert et al. [5]  Best WordNet [19]
AUC (class 4)      0.40          0.21     0.31                  0.40
AUC (classes 3&4)  0.54          0.36     0.48                  0.59

Lexical similarity classification. Our method learns latent representations for verbs and imposes some structure on them via shared parameters, as shown in Section 4.2. This should lead to similar representations for similar verbs.
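A toy sketch of this idea, ranking verb pairs by the Euclidean (Frobenius) distance between their relation matrices; the matrices are random except for two made deliberately near-identical, standing in for a synonym pair:

```python
import numpy as np

rng = np.random.default_rng(4)
p, nr = 3, 4
R = rng.standard_normal((nr, p, p))               # hypothetical learned verb matrices
R[1] = R[0] + 0.01 * rng.standard_normal((p, p))  # verbs 0 and 1 act as synonyms

def distance(i, j):
    return np.linalg.norm(R[i] - R[j])  # Euclidean distance on the flattened R_j

pairs = [(i, j) for i in range(nr) for j in range(i + 1, nr)]
ranked = sorted(pairs, key=lambda ij: distance(*ij))
assert ranked[0] == (0, 1)  # the near-identical pair comes out as most similar
```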
We consider the task of lexical similarity classification described in [27] to evaluate this hypothesis. Their dataset consists of 130 pairs of verbs labeled by humans with a score in {0, 1, 2, 3, 4}. Higher scores mean a stronger semantic similarity between the verbs composing the pair. For instance, (divide, split) is labeled 4, while (postpone, show) has a score of 0. Based on the pairwise Euclidean distances10 between our learned verb representations Rj, we try to predict the class 4 (and also the “merged” classes {3, 4}) by using the assumption that the smaller the distance between Ri and Rj, the more likely the pair (i, j) should be labeled 4. We compare to representations learnt by [2] on the same training data, to the word embeddings of [5] (which are considered as efficient features in natural language processing), and to three similarity measures provided by WordNet Similarity [19]. For the latter, we only display the best one, named “path”, which is built by counting the number of nodes along the shortest path between the senses in the “is-a” hierarchies of WordNet. We report our results on precision-recall curves displayed in Figure 1 and the corresponding areas under the curve (AUC) in Table 3. Even though we tend to miss the first few pairs, we compare favorably to [2] and [5], and our AUC is close to the reference established by WordNet Similarity. Our method is capable of encoding meaningful semantic embeddings for verbs, even though it has been trained on noisy, automatically collected data and in spite of the fact that it was not our primary goal that distance in parameter space should satisfy any particular condition. Performance might be improved by training on cleaner triplets, such as those collected by [11]. 9 Conclusion Designing methods capable of handling large amounts of linked relations seems necessary to be able to model the wealth of relations underlying the semantics of any real-world problem.
We tackle this problem by using a shared representation of relations naturally suited to multi-relational data, in which entities have a unique representation shared between relation types, and where we propose that relations themselves decompose over latent “relational” factors. This new approach ties or beats state-of-the-art models on both standard relational learning problems and an NLP task. The decomposition of relations over latent factors allows a significant reduction of the number of parameters and is motivated both by computational and statistical reasons. In particular, our approach is quite scalable both with respect to the number of relations and to the number of data samples. One might wonder about the relative importance of the various terms in our formulation. Interestingly, though the presence of the trigram term was crucial in the tensor factorization problems, it played a marginal role in the NLP experiment, where most of the information was contained in the bigram and unigram terms. Finally, we believe that exploring the similarities of the relations through an analysis of the latent factors could provide some insight into the structures shared between different relation types. Acknowledgments This work was partially funded by the Pascal2 European Network of Excellence. NLR and RJ are supported by the European Research Council (resp., SIERRA-ERC-239993 & SIPA-ERC-256919). 10Other distances could of course be considered; we chose the Euclidean metric for simplicity. References [1] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2011. [2] A. Bordes, X. Glorot, J. Weston, and Y. Bengio. A semantic matching energy function for learning with multi-relational data. Machine Learning, 2012. To appear. [3] L. Bottou and Y. LeCun. Large scale online learning. In Advances in Neural Information Processing Systems, volume 16, pages 217–224, 2004. [4] W.
Chu and Z. Ghahramani. Probabilistic models for incomplete multi-dimensional arrays. Journal of Machine Learning Research - Proceedings Track, 5:89–96, 2009. [5] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493–2537, 2011. [6] W. Denham. The detection of patterns in Alyawarra nonverbal behavior. PhD thesis, 1973. [7] L. Getoor and B. Taskar. Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning). The MIT Press, 2007. [8] R. A. Harshman and M. E. Lundy. Parafac: parallel factor analysis. Comput. Stat. Data Anal., 18(1):39– 72, Aug. 1994. [9] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proc. of AAAI, pages 381–388, 2006. [10] S. Kok and P. Domingos. Statistical predicate invention. In Proceedings of the 24th international conference on Machine learning, pages 433–440, 2007. [11] A. Korhonen, Y. Krymolowski, and T. Briscoe. A large subcategorization lexicon for natural language processing applications. In Proceedings of LREC, 2006. [12] A. T. McCray. An upper level ontology for the biomedical domain. Comparative and Functional Genomics, 4:80–88, 2003. [13] G. Miller. WordNet: a Lexical Database for English. Communications of the ACM, 38(11):39–41, 1995. [14] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In Advances in Neural Information Processing Systems 22, pages 1276–1284. 2009. [15] M. Nickel, V. Tresp, and H.-P. Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th Intl Conf. on Mach. Learn., pages 809–816, 2011. [16] M. Nickel, V. Tresp, and H.-P. Kriegel. Factorizing YAGO: scalable machine learning for linked data. In Proc. of the 21st intl conf. on WWW, pages 271–280, 2012. [17] K. Nowicki and T. A. B. Snijders. 
Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077–1087, 2001. [18] A. Paccanaro and G. Hinton. Learning distributed representations of concepts using linear relational embedding. IEEE Trans. on Knowl. and Data Eng., 13:232–244, 2001. [19] T. Pedersen, S. Patwardhan, and J. Michelizzi. WordNet::Similarity: measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, pages 38–41, 2004. [20] H. Poon and P. Domingos. Unsupervised ontology induction from text. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 296–305, 2010. [21] R. J. Rummel. Dimensionality of nations project: Attributes of nations and behavior of nation dyads. In ICPSR data file, pages 1950–1965. 1999. [22] D. Shen, J.-T. Sun, H. Li, Q. Yang, and Z. Chen. Document summarization using conditional random fields. In Proc. of the 20th Intl Joint Conf. on Artif. Intel., pages 2862–2867, 2007. [23] A. P. Singh and G. J. Gordon. Relational learning via collective matrix factorization. In Proc. of SIGKDD’08, pages 650–658, 2008. [24] I. Sutskever, R. Salakhutdinov, and J. Tenenbaum. Modelling relational data using bayesian clustered tensor factorization. In Adv. in Neur. Inf. Proc. Syst. 22, 2009. [25] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31:279–311, 1966. [26] Y. J. Wang and G. Y. Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397), 1987. [27] D. Yang and D. M. W. Powers. Verb similarity on the taxonomy of wordnet. Proceedings of GWC-06, pages 121–128, 2006. [28] J. Zhu. Max-margin nonparametric latent feature models for link prediction. In Proceedings of the 29th Intl Conference on Machine Learning, 2012.
Dimensionality Dependent PAC-Bayes Margin Bound Chi Jin Key Laboratory of Machine Perception, MOE School of Physics Peking University chijin06@gmail.com Liwei Wang Key Laboratory of Machine Perception, MOE School of EECS Peking University wanglw@cis.pku.edu.cn Abstract Margin is one of the most important concepts in machine learning. Previous margin bounds, both for SVM and for boosting, are dimensionality independent. A major advantage of this dimensionality independence is that it can explain the excellent performance of SVM, whose feature spaces are often of high or infinite dimension. In this paper we address the problem of whether such dimensionality independence is intrinsic for the margin bounds. We prove a dimensionality dependent PAC-Bayes margin bound. The bound is monotone increasing with respect to the dimension when keeping all other factors fixed. We show that our bound is strictly sharper than a previously well-known PAC-Bayes margin bound if the feature space is of finite dimension, and that the two bounds tend to be equivalent as the dimension goes to infinity. In addition, we show that the VC bound for linear classifiers can be recovered from our bound under mild conditions. We conduct extensive experiments on benchmark datasets and find that the new bound is useful for model selection and is usually significantly sharper than the dimensionality independent PAC-Bayes margin bound as well as the VC bound for linear classifiers. 1 Introduction Linear classifiers, including SVM and boosting, play an important role in machine learning. A central concept in the generalization analysis of linear classifiers is the margin. There has been extensive work on bounding the generalization errors of SVM and boosting in terms of margins (with various definitions, such as l2, l1, soft, hard, average, minimum, etc.). In the 1970s, Vapnik pointed out that a large margin can imply good generalization. Using the fat-shattering dimension, Shawe-Taylor et al.
[1] proved a margin bound for linear classifiers. This bound was improved and simplified in a series of works [2, 3, 4, 5] mainly based on the PAC-Bayes theory [6] which was developed originally for stochastic classifiers. (See Section 2 for a brief review of the PAC-Bayes theory and the PAC-Bayes margin bounds.) All these bounds state that if a linear classifier in the feature space induces large margins for most of the training examples, then it has a small generalization error bound independent of the dimensionality of the feature space. The (l1) margin has also been extensively studied for boosting to explain its generalization ability. Schapire et al. [7] proved a margin bound for the generalization error of voting classifiers. The bound is independent of the number of base classifiers combined in the voting classifier1. This margin bound was greatly improved in [8, 9] using (local) Rademacher complexities. There also exist improved margin bounds for boosting from the viewpoint of PAC-Bayes theory [10], the diversity of base classifiers [11], and different definition of margins [12, 13]. 1The bound depends on the VC dimension of the base hypothesis class. Nevertheless, given the VC dimension of the base hypothesis space, the bound does not depend on the number of the base classifiers, which can be seen as the dimension of the feature space. 1 The aforementioned margin bounds are all dimensionality independent. That is, the bounds are solely characterized by the margins on the training data and do not depend on the dimension of feature space. A major advantage of such dimensionality independent margin bounds is that they can explain the generalization ability of SVM and boosting whose feature spaces have high or infinite dimension, in which case the standard VC bound becomes trivial. Although very successful in bounding the generalization error, a natural question is whether this dimensionality independency is intrinsic for margin bounds. 
In this paper we explore this problem. Building upon the PAC-Bayes theory, we prove a dimensionality dependent margin bound. This bound is monotone increasing with respect to the dimension when keeping all other factors fixed. Comparing with the PAC-Bayes margin bound of Langford [4], the new bound is strictly sharper when the feature space is of finite dimension; and the two bounds tend to be equal as the dimension goes to infinity. We conduct extensive experiments on benchmark datasets. The experimental results show that the new bound is significantly sharper than the dimensionality independent PAC-Bayes margin bound as well as the VC bound for linear classifiers on relatively large datasets. The bound is also found useful for model selection. The rest of this paper is organized as follows. Section 2 contains a brief review of the PAC-Bayes theory and the dimensionality independent PAC-Bayes margin bound. In Section 3 we give the dimensionality dependent PAC-Bayes margin bound and further improvements. We provide the experimental results in Section 4, and conclude in Section 5. Due to the space limit, all the proofs are given in the supplementary material. 2 Background Let X be the instance space or generally the feature space. In this paper we always assume X = Rd. We consider binary classification problems and let Y = {−1, 1}. Examples are drawn independently according to an underlying distribution D over X × Y. Let PD(A(x, y)) denote the probability of event A when an example (x, y) is chosen according to D. Let S denote a training set of n i.i.d. examples. We denote by PS(A(x, y)) the probability of event A when an example (x, y) is chosen at random from S. Similarly we denote by ED and ES the corresponding expectations. If c is a classifier, then we denote by erD(c) = PD(y ̸= c(x)) the generalization error of c, and let erS(c) = PS(y ̸= c(x)) be the empirical error. An important type of classifiers studied in this paper is stochastic classifiers. 
Let C be a set of classifiers, and let Q be a probability distribution of classifiers on C. A stochastic classifier defined by Q randomly selects c ∈ C according to Q. When clear from the context, we often denote by erD(Q) and erS(Q) the generalization and empirical error of the stochastic classifier Q respectively. That is, erD(Q) = Ec∼Q[erD(c)] and erS(Q) = Ec∼Q[erS(c)]. A probability distribution Q of classifiers also defines a deterministic classifier—the voting classifier, which we denote by vQ. For x ∈ X, vQ(x) = sgn[Ec∼Q c(x)]. In this paper we always consider homogeneous linear classifiers (Footnote 2: This does not sacrifice any generality since linear classifiers can be easily transformed to homogeneous linear classifiers by adding a new dimension.), or stochastic classifiers whose distribution is over homogeneous linear classifiers. Let X = Rd. For any w ∈ Rd, the linear classifier cw is defined as cw(·) = sgn[⟨w, ·⟩]. When we consider a probability distribution over all homogeneous linear classifiers cw in Rd, we can equivalently consider a distribution of w ∈ Rd. The work in this paper is based on PAC-Bayes theory, a beautiful generalization of the classical PAC theory to the setting of Bayes learning. It gives generalization error bounds for stochastic classifiers. The PAC-Bayes theorem was first proposed by McAllester [6]. The following elegant version is due to Langford [4]. Theorem 2.1. Let P, Q denote probability distributions of classifiers. For any P and any δ ∈ (0, 1), with probability 1 − δ over the random draw of n training examples, kl(erS(Q) || erD(Q)) ≤ (KL(Q||P) + ln((n+1)/δ)) / n (1) holds simultaneously for all distributions Q. Here KL(Q||P) is the Kullback-Leibler divergence of distributions Q and P; kl(a||b) for a, b ∈ [0, 1] is the Bernoulli KL divergence defined as kl(a||b) = a log(a/b) + (1 − a) log((1 − a)/(1 − b)).
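Theorem 2.1 bounds erD(Q) only implicitly, through the Bernoulli KL divergence; an explicit numeric bound is obtained by inverting kl(a||·) at the value of the RHS. A minimal sketch in Python (illustrative only; the function names are ours, not from the paper):

```python
import math

def bernoulli_kl(a, b):
    """Bernoulli KL divergence kl(a||b) from Theorem 2.1."""
    eps = 1e-12
    a = min(max(a, eps), 1 - eps)
    b = min(max(b, eps), 1 - eps)
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def kl_inverse(emp_err, rhs, tol=1e-9):
    """Largest q >= emp_err with kl(emp_err || q) <= rhs, found by bisection.

    Since kl(a||q) is increasing in q for q >= a, this inverts the PAC-Bayes
    inequality to yield an explicit upper bound on erD(Q)."""
    lo, hi = emp_err, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bernoulli_kl(emp_err, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, with empirical stochastic error 0.1 and an RHS value of 0.05, `kl_inverse(0.1, 0.05)` returns the bound on the true error of the stochastic classifier.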
The above PAC-Bayes theorem states that if a stochastic classifier, whose distribution Q is close (in the sense of KL divergence) to the fixed prior P, has a small training error, then its generalization error is small. PAC-Bayes theory has been improved and generalized in a series of works [5, 14]. For important recent results, please refer to [14]. [15] generalizes the KL divergence in the PAC-Bayes theorem to arbitrary convex functions. [15, 16, 17, 18, 19] utilize improved PAC-Bayes bounds to develop learning algorithms and perform model selection. Very interestingly, it is shown in [2] that one can derive a margin bound for linear classifiers (including SVM) from the PAC-Bayes theorem quite easily. It is much simpler and slightly tighter than previous margin bounds for SVM [1, 20]. The following simplified and refined version can be found in [4]. Theorem 2.2 ([4]). Let X = Rd. Let Q(µ, ŵ) (µ > 0, ŵ ∈ Rd, ∥ŵ∥ = 1) denote the distribution of homogeneous linear classifiers cw, where w ∼ N(µŵ, I). For any δ ∈ (0, 1), with probability 1 − δ over the random draw of n training examples, kl(erS(Q(µ, ŵ)) || erD(Q(µ, ŵ))) ≤ (µ²/2 + ln((n+1)/δ)) / n (2) holds simultaneously for all µ > 0 and all ŵ ∈ Rd with ∥ŵ∥ = 1. In addition, the empirical error of the stochastic classifier can be written as erS(Q(µ, ŵ)) = ES Φ̄(µγ(ŵ; x, y)), (3) where γ(ŵ; x, y) = y⟨ŵ, x⟩/∥x∥ is the margin of (x, y) with respect to the unit vector ŵ, and Φ̄(t) = 1 − Φ(t) = ∫t^∞ (1/√(2π)) e^(−τ²/2) dτ (4) is the upper-tail probability of the standard Gaussian distribution. According to Theorem 2.2, if there is a linear classifier ŵ ∈ Rd inducing large margins for most training examples, i.e., γ(ŵ; x, y) is large for most (x, y), then choosing a relatively small µ would yield a small erS(Q(µ, ŵ)) and in turn a small upper bound for the generalization error of the stochastic classifier Q(µ, ŵ). Note that this bound does not depend on the dimensionality d.
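The quantities in (2)-(4) are straightforward to evaluate numerically. A small sketch (helper names are ours; the Gaussian upper tail is computed via the complementary error function):

```python
import math

def gauss_tail(t):
    # Upper tail of the standard Gaussian, as defined in (4):
    # P(N(0,1) > t) = 0.5 * erfc(t / sqrt(2))
    return 0.5 * math.erfc(t / math.sqrt(2.0))

def stochastic_empirical_error(margins, mu):
    """erS(Q(mu, w_hat)) as in (3): the empirical average of the Gaussian
    upper tail at mu * gamma, where `margins` holds the per-example values
    gamma(w_hat; x, y) = y <w_hat, x> / ||x||."""
    return sum(gauss_tail(mu * g) for g in margins) / len(margins)

def di_margin_rhs(mu, n, delta):
    # RHS of the dimensionality independent bound (2)
    return (mu * mu / 2.0 + math.log((n + 1) / delta)) / n
```

When all margins are positive, increasing µ drives the empirical stochastic error toward zero, but at the price of a larger (µ²/2)/n complexity term in (2).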
In fact almost all previously known margin bounds are dimensionality independent3. PAC-Bayes theory only provides bounds for stochastic classifiers. In practice however, users often prefer deterministic classifiers. There is a close relation between the error of a stochastic classifier defined by distribution Q and the error of the deterministic voting classifier vQ. The following simple result is well-known. Proposition 2.3. Let vQ be the voting classifier defined by distribution Q. That is, vQ(·) = sgn[Ec∼Qc(·)]. Then for any Q erD(vQ) ≤2 erD(Q). (5) Combining Theorem 2.2 and Proposition 2.3, one can upper bound the generalization error of the voting classifier vQ associated with Q(µ, ˆw) given in Theorem 2.2. In fact, it is easy to see that vQ = cˆw, the voting classifier is exactly the linear classifier ˆw. Thus erD(cˆw) ≤2erD(Q(µ, ˆw)). (6) 3There exist dimensionality dependent margin bounds [21]. However these bounds grow unboundedly as the dimensionality tends to infinity. 3 From Theorem 2.2, Proposition 2.3 and (6), we have that with probability 1−δ the following margin bound holds for all classifiers cˆw with ˆw ∈Rd, ∥ˆw∥= 1 and all µ > 0: kl ( erS(Q(µ, ˆw)) || erD(cˆw) 2 ) ≤ µ2 2 + ln n+1 δ n . (7) One disadvantage of the bounds in (5), (6) and (7) is that they involve a multiplicative factor of 2. In general, the factor 2 cannot be improved. However for linear classifiers with large margins there can exist tighter bounds. The following is a slightly refined version of the bounds given in [2, 3]. Proposition 2.4 ([2, 3]). Let Q(µ, ˆw) and vQ = cˆw be defined as above. Let erD,θ(Q(µ, ˆw)) = Ew∼N (µˆw,I)PD ( y <w,x> ∥x∥ ≤θ ) be the error of the stochastic classifier with margin θ. Then for all θ ≥0 erD(cˆw) ≤erD,θ(Q(µ, ˆw)) + Φ(θ). (8) The bound states that if the stochastic classifier induces small errors with large margin θ, then the linear (voting) classifier has only a slightly larger generalization error than the stochastic classifier. 
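The factor of 2 in Proposition 2.3 comes from a short, standard argument; the following sketch is our reconstruction (the paper defers proofs to the supplementary material):

```latex
% If the voting classifier errs on (x, y), then by the definition of v_Q
% at least half of the Q-mass of classifiers errs on (x, y):
v_Q(x) \neq y \;\Longrightarrow\; \Pr_{c \sim Q}\!\left[c(x) \neq y\right] \ge \tfrac{1}{2}.
% Taking expectations over (x, y) \sim D:
\mathrm{er}_D(Q)
  = \mathbb{E}_{(x,y) \sim D}\, \Pr_{c \sim Q}\!\left[c(x) \neq y\right]
  \;\ge\; \mathbb{E}_{(x,y) \sim D}\!\left[\tfrac{1}{2}\, \mathbf{1}\{v_Q(x) \neq y\}\right]
  = \tfrac{1}{2}\, \mathrm{er}_D(v_Q).
```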
However sometimes (8) can be larger than (5). The two bounds have a different regime in which they dominate [2]. It is also worth pointing out that the margin y <w,x> ∥x∥ considered in Proposition 2.4 is unnormalized with respect to w. See Section 3 for more discussions. To apply Proposition 2.4, one needs to further bound erD,θ(Q(µ, ˆw)) by its empirical version erS,θ(Q(µ, ˆw)) := Ew∼N (µˆw,I)PS ( y <w,x> ∥x∥ ≤θ ) = ESΦ(µy <ˆw,x> ∥x∥ −θ). With slight modifications of Theorem 2.2, one can show that for any θ ≥0 with probability 1 −δ the following bound is valid for all µ and ˆw uniformly: kl (erS,θ(Q(µ, ˆw)) || erD,θ(Q(µ, ˆw))) ≤ µ2 2 + ln n+1 δ n . (9) The following Proposition combines the above results. Proposition 2.5. For any θ ≥0 and any δ > 0 with probability 1 −δ the following bound is valid for all µ and ˆw uniformly: kl ( erS,θ(Q(µ, ˆw)) || erD(cˆw) −Φ(θ)) ) ≤ µ2 2 + ln n+1 δ n . (10) Note that this last bound is not uniform for θ, see also [3]. Improving the multiplicative factor was also studied in [22, 17], in which the variance of the stochastic classifier is also bounded by PAC-Bayes theorem, and Chebyshev inequality can be used. 3 Theoretical Results In this section we give the theoretical results. The main result of this paper is Theorem 3.1, which provides a dimensionality dependent PAC-Bayes margin bound. Theorem 3.1. Let Q(µ, ˆw) (µ > 0, ˆw ∈Rd, ∥ˆw∥= 1) denote the distribution of linear classifiers cw(·) = sgn[< w, · >], where w ∼N(µˆw, I). For any δ ∈(0, 1), with probability 1 −δ over the random draw of n training examples kl (erS(Q(µ, ˆw)) || erD(Q(µ, ˆw))) ≤ d 2 ln(1 + µ2 d ) + ln n+1 δ n (11) holds simultaneously for all µ > 0 and all ˆw ∈Rd with ∥ˆw∥= 1. Here erS(Q(µ, ˆw)) = ESΦ(µγ(ˆw; x, y)) and γ(ˆw; x, y) = y <ˆw,x> ∥x∥ are the same as in Theorem 2.2. Comparing Theorem 3.1 with Theorem 2.2, it is easy to see the following Proposition holds. Proposition 3.2. 
The bound (11) is sharper than (2) for any d < ∞, and the two bounds tend to be equivalent as d → ∞. Theorem 3.1 is the first dimensionality dependent margin bound that remains nontrivial in infinite dimension. Theorem 3.1 and Theorem 2.2 are uniform bounds for µ; thus one can choose an appropriate µ to optimize each bound respectively. Note that erS(Q(µ, ŵ)) in the LHS of the two bounds is monotone decreasing with respect to µ. Compared to Theorem 2.2, Theorem 3.1 has the advantage that its RHS scales only in O(ln µ) rather than O(µ²), and therefore allows choosing a very large µ. As described in (7) in Section 2, we can also obtain a margin bound for the deterministic linear classifier cŵ by combining (11) with erD(cŵ) ≤ 2 erD(Q(µ, ŵ)). In addition, note that the VC dimension of homogeneous linear classifiers in Rd is d. From Theorem 3.1 we can almost recover the VC bound [23] erD(c) ≤ erS(c) + √( (d(1 + ln(2n/d)) + ln(4/δ)) / n ) (12) for homogeneous linear classifiers in Rd under mild conditions. Formally we have the following corollary. Corollary 3.3. Theorem 3.1 implies the following result. Suppose n > 5. For any δ > 2e^(−d/8) n^(−1/8), with probability 1 − δ over the random draw of n training examples, erD(cw) ≤ erS(cw) + √( (d ln(1 + 2n/d) + (1/2) ln(2(n+1)/δ)) / n ) + √( (d + ln n)/n ) (13) holds simultaneously for all homogeneous linear classifiers cw with w ∈ Rd satisfying PD( y⟨w, x⟩/(∥w∥∥x∥) ≤ (ln n)^(1/2) d^(3/2) / (4n²) ) ≤ (1/4) √( (d + ln n)/n ). (14) Condition (14) is easy to satisfy if d ≪ n. In a sense, the dimensionality dependent margin bound in Theorem 3.1 unifies the dimensionality independent margin bound and the VC bound for linear classifiers. Although it is not easy to theoretically quantify how much sharper (11) is than (2) and the VC bound (12) (because the first two bounds hold uniformly for all µ), in Section 4 we will demonstrate by experiments that the new bound is usually significantly better than (2) and (12) on relatively large datasets.
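Proposition 3.2 can be checked numerically: since ln(1 + x) < x for all x > 0, the complexity term (d/2) ln(1 + µ²/d) of (11) lies strictly below the µ²/2 term of (2), and it increases toward it as d grows. A quick sketch (our own illustration, not code from the paper):

```python
import math

def dd_complexity(mu, d):
    # KL term of the dimensionality dependent bound (11): (d/2) ln(1 + mu^2/d)
    return 0.5 * d * math.log(1.0 + mu * mu / d)

def di_complexity(mu):
    # KL term of the dimensionality independent bound (2): mu^2 / 2
    return 0.5 * mu * mu

mu = 5.0
vals = [dd_complexity(mu, d) for d in (10, 100, 10**4, 10**8)]
# monotone increasing in d, always below the DI term (Proposition 3.2) ...
assert all(a < b for a, b in zip(vals, vals[1:]))
assert all(v < di_complexity(mu) for v in vals)
# ... and converging to it as d -> infinity
assert di_complexity(mu) - vals[-1] < 1e-3
```

The gap is largest when µ² is large relative to d, which is exactly the regime where one wants to choose a large µ to drive down erS(Q(µ, ŵ)).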
3.1 Improving the Multiplicative Factor As we mentioned in Section 2, Proposition 2.3 involves a multiplicative factor of 2 when bounding the error of the deterministic voting classifier by the error of the stochastic classifier. Note that in general erD(cˆw) ≤2erD(Q(µ, ˆw)) cannot be improved (consider the case that with probability one the data has zero margin with respect to ˆw). Here we study how to improve it for large margin classifiers. Recall that Proposition 2.4 gives erD(cˆw) ≤erD,θ(Q(µ, ˆw)) + Φ(θ), which bounds the generalization error of the linear classifier in terms of the error of the stochastic classifier with margin θ ≥0. As pointed out in [2], this bound is not always better than Proposition 2.3 (i.e., erD(cˆw) ≤2erD(Q(µ, ˆw))). The two bounds each has a different dominant regime. Our first result in this subsection is the following simple improvement over both Proposition 2.3 and Proposition 2.4. Proposition 3.4. Using the notions in Proposition 2.4, we have that for all θ ≥0, erD(cˆw) ≤ 1 Φ(θ)erD,θ(Q(µ, ˆw)), (15) where Φ(θ) is defined in Theorem 2.2. 5 It is easy to see that Proposition 2.3 is a special case of Proposition 3.4: just let θ = 0 in (15) we recover (6). Thus Proposition 3.4 is always sharper than Proposition 2.3. It is also easy to show that (15) is sharper than (8) in Proposition 2.4 whenever the bounds are nontrivial. Formally we have the following proposition. Proposition 3.5. Suppose the RHS of (8) or the RHS of (15) is smaller than 1, i.e., at least one of the two bounds is nontrivial. Then (15) is sharper than (8). As mentioned in Section 2, the margins discussed so far in this subsection are unnormalized with respect to w ∈Rd. That is, we consider y <w,x> ∥x∥. In the following we will focus on normalized margins y <w,x> ∥w∥∥x∥. It will soon be clear that this brings additional benefits when combining with the dimensionality dependent margin bound. 
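For Proposition 3.4 (proof in the supplementary material), the following sketch is our reconstruction; it reads Φ(θ) in (15) as the standard normal CDF, which recovers the factor 2 at θ = 0 and is consistent with Proposition 3.5:

```latex
% Fix (x, y) on which c_{\hat w} errs, i.e. \gamma = y\langle \hat w, x \rangle / \|x\| \le 0.
% For w \sim N(\mu \hat w, I), the (unnormalized) margin is Gaussian:
\frac{y\langle w, x\rangle}{\|x\|} \sim N(\mu\gamma, 1)
\quad\Longrightarrow\quad
\Pr_{w}\!\left[\frac{y\langle w, x\rangle}{\|x\|} \le \theta\right]
  = \Phi(\theta - \mu\gamma) \;\ge\; \Phi(\theta).
% Taking expectations over D, restricted to the error event of c_{\hat w}:
\mathrm{er}_{D,\theta}(Q(\mu, \hat w)) \;\ge\; \Phi(\theta)\, \mathrm{er}_D(c_{\hat w}),
% and \theta = 0 gives \Phi(0) = 1/2, recovering the factor of 2 in (6).
```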
Let erN D,θ(Q(µ, ˆw)) = Ew∼N (µˆw,I)PD(y <w,x> ∥w∥∥x∥≤θ) be the true error of the stochastic classifier Q(µ, ˆw) with normalized margin θ ∈[−1, 1]. Also let erN S,θ(Q(µ, ˆw)) be its empirical version. We have the following lemma. Lemma 3.6. For any µ > 0, any ˆw ∈Rd with ∥ˆw∥= 1 and any θ ≥0, erD(cˆw) ≤ erN D,θ (Q(µ, ˆw)) Φ(µθ) . (16) If erN D,θ(Q) is only slightly larger than erD(Q) for a not-too-small θ > 0, then erN D,θ(Q) Φ(µθ) can be much smaller than 2erD(Q) even with a not too large µ. Also note that setting θ = 0 in (16), we can recover (6). The true margin error erN D,θ(Q) can be bounded by its empirical version similar to Theorem 3.1: For any θ ≥0 and any δ > 0, with probability 1 −δ kl ( erN S,θ(Q(µ, ˆw))||erN D,θ(Q(µ, ˆw)) ) ≤ d 2 ln(1 + µ2 d ) + ln n+1 δ n (17) holds simultaneously for all µ > 0 and ˆw ∈Rd with ∥ˆw∥= 1. Combining the previous two results we have a dimensionality dependent margin bound for the linear classifier cˆw. Proposition 3.7. Let Q(µ, ˆw) defined as before. For any θ ≥0 and any δ > 0, with probability 1 −δ over the random draw of n training examples kl ( erN S,θ(Q(µ, ˆw))||erD(cˆw)Φ(µθ) ) ≤ d 2 ln(1 + µ2 d ) + ln n+1 δ n (18) holds simultaneously for all µ > 0 and ˆw ∈Rd with ∥ˆw∥= 1. To see how Proposition 3.7 improves the multiplicative factor, let’s take a closer look at the bound (18). Observe that as µ getting large, erN S,θ(Q(µ, ˆw)) = Ew∼N (µˆw,I)PD(y <w,x> ∥w∥∥x∥≤θ) tends to the empirical error of the linear classifier ˆw with margin θ, i.e., PS ( y <ˆw,x> ∥x∥ ≤θ ) (recall that ∥ˆw∥=1). Also if µθ > 3, Φ(µθ) ≈1. Taking into the consideration that the RHS of (18) scales only in O(ln µ), we can choose a relatively large µ and (18) gives a dimensionality dependent margin bound whose multiplicative factor can be very close to 1. 4 Experiments In this section we conduct a series of experiments on benchmark datasets. 
The goal is to see to what extent the dimensionality dependent margin bound (referred to as the DD-margin bound) is sharper than the dimensionality independent margin bound (referred to as the DI-margin bound) as well as the VC bound. More importantly, we want to see from the experiments how useful the DD-margin bound is for model selection.
Table 1: Description of datasets
Dataset  # Examples  # Features
Image  2310  20
Letter  20000  16
Magic04  19020  10
Mushroom  8124  22
Optdigits  5620  64
PageBlock  5473  10
Pendigits  10992  16
Waveform  3304  21
BreastCancer  683  9
Glass  214  9
Pima  768  8
wdbc  569  30
We use 12 datasets, all from the UCI repository [24]. A description of the datasets is given in Table 1. For each dataset, we use 5-fold cross validation and average the results over 10 runs (for a total of 50 runs). If a dataset is a multiclass problem, we group the data into two classes since we study binary classification problems. In the data preprocessing stage each feature is normalized to [0, 1]. To compare the bounds and to do model selection, we use SVM with polynomial kernels K(x, x′) = (a⟨x, x′⟩ + b)^t and let t vary4. For each t, we train a classifier by libsvm [25]. We plot the values of the three bounds—the DD-margin bound, the DI-margin bound, and the VC bound (12)—as well as the test and training error (see Figure 1 - Figure 12). For the two margin bounds, since they hold uniformly for µ > 0, we select the optimal µ to make the bounds as small as possible. For simplicity, we combine Proposition 2.3 with Theorem 3.1 and Theorem 2.2 respectively to obtain the final bound for the generalization error of the deterministic linear classifiers. In each figure, the horizontal axis represents the degree t of the polynomial kernel. All bounds in the figures (including training and test error) are for the deterministic (voting) classifier. To analyze the experimental results, we group the 12 results into two categories as follows. 1.
Figure 1 - Figure 8. This category consists of eight datasets, and each of them contains at least 2000 examples (relatively large datasets). On all these datasets, the DD-margin bounds are significantly sharper than the DI-margin bounds as well as the VC bounds. More importantly, the DD-margin bounds work well for model selection. We can use this bound to choose the degree of the polynomial kernel. On all the datasets except “Image”, the curve of the DD-margin bound is highly correlated with the curve of the test error: when the test error decreases (or increases), the DD-margin bound also decreases (or increases); and when the test error remains unchanged as the degree t grows, the DD-margin bound selects the model with the lowest complexity. 2. Figure 9 - Figure 12. This category consists of four small datasets, each containing fewer than 1000 examples. On these small datasets, the VC bounds often become trivial (larger than 1). The DD-margin bounds are still always, but less significantly, sharper than the DI-margin bounds. However, on these small datasets, it is difficult to tell if the bounds select good models. In sum, the experimental results demonstrate that the DD-margin bound is usually significantly sharper than the DI-margin bound as well as the VC bound if the dataset is relatively large. Also, the DD-margin bound is useful for model selection. However, for small datasets, all three bounds seem not useful for practical purposes. 5 Conclusion In this paper we study the problem whether dimensionality independency is intrinsic for margin bounds. We prove a dimensionality dependent PAC-Bayes margin bound. This bound is sharper than a previously well-known dimensionality independent margin bound when the feature space is of finite dimension; and they tend to be equivalent as the dimensionality grows to infinity.
Experimental results demonstrate that for relatively large datasets the new bound is often useful for model selection and significantly sharper than previous margin bounds as well as the VC bound. (Footnote 4: For simplicity we fix a and b as constants in all the experiments.) [Figures 1-12, one per dataset: Image, Letter, Magic04, Mushroom, Optdigits, PageBlocks, Pendigits, Waveform, BreastCancer, Glass, Pima, wdbc. Each figure plots the DD-margin bound, DI-margin bound, VC bound, training error, and test error against the polynomial kernel degree t.] Our work is based on the PAC-Bayes theory. One limitation is that it involves a multiplicative factor of 2 when transforming stochastic classifiers to deterministic classifiers.
Although we provide two improved bounds (Proposition 3.4, 3.7) over previous results (Proposition 2.3, 2.4), the multiplicative factor is still strictly larger than 1. A future work is to study whether there exist dimensionality dependent margin bounds (not necessarily PAC-Bayes) without this multiplicative factor. Acknowledgments This work was supported by NSFC(61222307, 61075003) and a grant from Microsoft Research Asia. We also thank Chicheng Zhang for very helpful discussions. 8 References [1] John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, and Martin Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926–1940, 1998. [2] John Langford and John Shawe-Taylor. PAC-Bayes & Margins. In Advances in Neural Information Processing Systems, pages 423–430, 2002. [3] David A. McAllester. Simplified PAC-Bayesian margin bounds. Learning Theory and Kernel Machines, 2777:203–215, 2003. [4] John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273–306, 2005. [5] Matthias Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research, 3:233–269, 2002. [6] David A. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355–363, 1999. [7] Robert E. Schapire, Yoav Freund, Peter Barlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651–1686, 1998. [8] Vladimir Koltchinskii and Dmitry Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30:1–50, 2002. [9] Vladimir Koltchinskii and Dmitry Panchenko. Complexities of convex combinations and bounding the generalization error in classification. Annals of Statistics, 33:1455–1496, 2005. [10] John Langford, Matthias Seeger, and Nimrod Megiddo. 
An improved predictive accuracy bound for averaging classifiers. In International Conference on Machine Learning, pages 290–297, 2001. [11] Sanjoy Dasgupta and Philip M. Long. Boosting with diverse base classifiers. In Annual Conference on Learning Theory, pages 273–287, 2003. [12] Leo Breiman. Prediction games and arcing algorithms. Neural Computation, 11:1493–1518, 1999. [13] Liwei Wang, Masashi Sugiyama, Zhaoxiang Jing, Cheng Yang, Zhi-Hua Zhou, and Jufu Feng. A refined margin analysis for boosting algorithms via equilibrium margin. Journal of Machine Learning Research, 12:1835–1863, 2011. [14] Olivier Catoni. PAC-Bayesian supervised classification: The thermodynamics of statistical learning. IMS Lecture Notes–Monograph Series, 56, 2007. [15] Pascal Germain, Alexandre Lacasse, Franc¸ois Laviolette, and Mario Marchand. PAC-Bayesian learning of linear classifiers. In International Conference on Machine Learning, page 45, 2009. [16] Pascal Germain, Alexandre Lacasse, Franc¸ois Laviolette, Mario Marchand, and Sara Shanian. From PAC-Bayes bounds to KL regularization. In Advances in Neural Information Processing Systems, pages 603–610, 2009. [17] Jean-Francis Roy, Franc¸ois Laviolette, and Mario Marchand. From PAC-Bayes bounds to quadratic programs for majority votes. In International Conference on Machine Learning, pages 649–656, 2011. [18] Amiran Ambroladze, Emilio Parrado-Hern´andez, and John Shawe-Taylor. Tighter pac-bayes bounds. In Advances in Neural Information Processing Systems, pages 9–16, 2006. [19] John Shawe-Taylor, Emilio Parrado-Hern´andez, and Amiran Ambroladze. Data dependent priors in PACBayes bounds. In International Conference on Computational Statistics, pages 231–240, 2010. [20] Peter L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525–536, 1998. [21] Ralf Herbrich and Thore Graepel. 
A PAC-Bayesian margin bound for linear classifiers. IEEE Transactions on Information Theory, 48(12):3140–3150, 2002. [22] Alexandre Lacasse, Franc¸ois Laviolette, Mario Marchand, Pascal Germain, and Nicolas Usunier. PACBayes bounds for the risk of the majority vote and the variance of the gibbs classifier. In Advances in Neural Information Processing Systems, pages 769–776, 2006. [23] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998. [24] Andrew Frank and Arthur Asuncion. UCI machine learning repository, 2010. [25] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. 9
|
2012
|
170
|
4,531
|
MAP Inference in Chains using Column Generation David Belanger∗, Alexandre Passos∗, Sebastian Riedel†, Andrew McCallum Department of Computer Science, University of Massachusetts, Amherst † Department of Computer Science, University College London {belanger,apassos,mccallum}@cs.umass.edu, s.riedel@cs.ucl.ac.uk Abstract Linear chains and trees are basic building blocks in many applications of graphical models, and they admit simple exact maximum a-posteriori (MAP) inference algorithms based on message passing. However, in many cases this computation is prohibitively expensive, due to quadratic dependence on variables’ domain sizes. The standard algorithms are inefficient because they compute scores for hypotheses for which there is strong negative local evidence. For this reason there has been significant previous interest in beam search and its variants; however, these methods provide only approximate results. This paper presents new exact inference algorithms based on the combination of column generation and pre-computed bounds on terms of the model’s scoring function. While we do not improve worst-case performance, our method substantially speeds real-world, typical-case inference in chains and trees. Experiments show our method to be twice as fast as exact Viterbi for Wall Street Journal part-of-speech tagging and over thirteen times faster for a joint part-of-speech and named-entity-recognition task. Our algorithm is also extendable to new techniques for approximate inference, to faster 0/1 loss oracles, and new opportunities for connections between inference and learning. We encourage further exploration of high-level reasoning about the optimization problem implicit in dynamic programs. 1 Introduction Many uses of graphical models either directly employ chains or tree structures—as in part-of-speech tagging—or employ them to enable inference in more complex models—as in junction trees and tree block coordinate descent [1].
Traditional message-passing inference in these structures requires an amount of computation dependent on the product of the domain sizes of variables sharing an edge in the graph. Even in chains, exact inference is prohibitive in tasks with large domains due to the quadratic dependence on domain size. For this reason, many practitioners rely on beam search or other approximate inference techniques [2]. However, inference by beam search is approximate. This not only hurts test-time accuracy, but can also interfere with parameter estimation [3]. We present a new algorithm for exact MAP inference in chains that is substantially faster than Viterbi in the typical case. We draw on four key ideas: (1) it is wasteful to compute and store messages to and from low-scoring states, (2) it is possible to compute bounds on data-independent (not varying with the input data) scores of the model offline, (3) inference should make decisions based on local evidence for variables’ values and rely on the graph only for disambiguation [4], and (4) runtime behavior should adapt to the cost structure of the model (i.e., the algorithm should be energy-aware [5]). The combination of these ideas yields provably exact MAP inference for chains and trees that can be more than an order of magnitude faster than traditional methods. Our algorithm has wideranging applicability, and we believe it could beneficially replace many traditional uses of Viterbi and beam search. ∗The first two authors contributed equally to this paper. 1 We exploit the connections between message-passing algorithms and LP relaxations for MAP inference. Directly solving LP relaxations for MAP using a state-of-the-art solver is inefficient because it ignores key structure of the problem [6]. However, it is possible to leverage message-passing as a fast subroutine to solve smaller LPs, and use high-level techniques to compose these solutions into a solution to the original problem. 
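For contrast, the exact baseline the paper sets out to accelerate is the standard Viterbi recursion, whose inner loop does the O(k²) work per position described above. A minimal sketch (our own simplified version with a static transition matrix; the paper's models also use richer, data-dependent scores):

```python
def viterbi(unary, trans):
    """Max-product dynamic program for a linear chain.

    unary[i][s]: local score theta_i(s); trans[s][t]: transition score tau(s, t).
    Runs in O(n * k^2) for n positions and domain size k -- the quadratic
    dependence on k that the column-generation approach avoids in the
    typical case. Returns (argmax path, max score).
    """
    n, k = len(unary), len(unary[0])
    score = list(unary[0])
    back = []
    for i in range(1, n):
        prev, score, ptr = score, [0.0] * k, [0] * k
        for t in range(k):
            best_s = max(range(k), key=lambda s: prev[s] + trans[s][t])
            ptr[t] = best_s
            score[t] = prev[best_s] + trans[best_s][t] + unary[i][t]
        back.append(ptr)
    best = max(range(k), key=lambda t: score[t])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path)), max(score)
```

Note that every state at every position is scored, even states the local evidence rules out; this is the waste identified in point (1) of the introduction.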
With this interplay in mind, we employ column generation [7], a family of approaches to solving linear programs that are dual to cutting planes: they start by solving restricted primal problems, where many LP variables are set to zero, and slowly add other LP variables until they are able to prove that adding no other variable can improve the solution. From these properties of column generation, we also show how to perform approximate inference that is guaranteed not to be worse than the optimal by a given gap, how to construct an efficient 0/1-loss oracle by running 2-best inference in a subset of the graphical model, and how to learn parameters in such a way to make inference even faster. The use of column generation has not been widely explored or appreciated in graphical models. This paper is intended to demonstrate its benefits and encourage further work in this direction. We demonstrate experimentally that our method has substantial speed advantages while retaining guaranteed exact inference. In Wall Street Journal part-of-speech tagging our method is more than 2.5 times faster than Viterbi, and also faster than beam search with a width of two. In joint POS tagging and named entity recognition, our method is thirteen times faster than Viterbi and also faster than beam search with a width of seven. 2 Delayed Column Generation in LPs In LPs used for combinatorial optimization problems, we know a priori that there are optimal solutions in which many variables will be set to zero. This is enforced by the problem’s constraints or it characterizes optimality (e.g., the solution to a shortest path LP would not include multiple paths). Column generation is a technique for exploiting this sparsity for faster inference. 
It restricts an LP to a subset of its variables (implicitly setting the others to zero) and alternates between solving this restricted linear program and selecting which variables should be added to it, based on whether they could potentially improve the objective. When no candidates remain, the current solution to the restricted problem is guaranteed to be the exact solution of the unrestricted problem. The test of whether an un-generated variable could potentially improve the objective is whether its reduced cost is positive, which is also the test employed by some pivoting rules in the simplex algorithm [8, 7]. The difference between the algorithms is that simplex enumerates primal variables explicitly, while column generation "generates" them only as needed. The key to an efficient column generation algorithm is an oracle that can either prove that no variable with positive reduced cost exists or produce one. Consider the general LP

max c^T x  s.t.  Ax ≤ b,  x ≥ 0,   (1)

with corresponding Lagrangian

L(x, λ) = c^T x + λ^T (b − Ax) = Σ_i (c_i − A_i^T λ) x_i + λ^T b.   (2)

For a given assignment to the dual variables λ, a variable x_i is a candidate for being added to the restricted problem if its reduced cost r_i = c_i − A_i^T λ, the scalar multiplying it in the Lagrangian, is positive. Another way to justify this decision rule is by considering the constraints in the LP dual:

min b^T λ  s.t.  A^T λ ≥ c,  λ ≥ 0.   (3)

Here, the reduced cost of a primal variable equals the degree to which its dual constraint is violated, and thus column generation in the primal is equivalent to cutting planes in the dual [7]. If there is no variable of positive reduced cost, then the current dual variables from the restricted problem are feasible in the unrestricted problem; we therefore have a primal-dual optimal pair and can terminate column generation.
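To make this loop concrete, the following is a minimal Python sketch of column generation for the generic LP (1), using scipy's HiGHS backend; the function name, tolerance, and initialization are our own choices, not from the paper. It solves restricted primals over a growing column set, reads the duals λ off the inequality constraints, and adds every column whose reduced cost r_i = c_i − A_i^T λ is positive until none remains.

```python
import numpy as np
from scipy.optimize import linprog

def column_generation(c, A, b, init_cols):
    """Solve max c^T x s.t. Ax <= b, x >= 0, generating columns on demand."""
    S = sorted(init_cols)                      # indices of generated variables
    while True:
        # Restricted primal over columns S (linprog minimizes, so negate c).
        res = linprog(-c[S], A_ub=A[:, S], b_ub=b, method="highs")
        lam = -res.ineqlin.marginals           # duals of Ax <= b for the max LP
        reduced = c - A.T @ lam                # r_i = c_i - A_i^T lam
        new = [i for i in range(len(c)) if i not in S and reduced[i] > 1e-9]
        if not new:                            # dual feasible: primal-dual optimal pair
            x = np.zeros(len(c))
            x[S] = res.x
            return x, float(-res.fun)
        S += new                               # add improving columns and re-solve
```

On a small instance this recovers the same objective value as solving the full LP directly, while typically touching only a subset of the columns.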
An advantageous property of column generation that we employ later on is that it maintains primal feasibility across iterations, and thus it can be halted to provide approximate, anytime solutions.

3 Connection Between LP Relaxations and Message-Passing in Chains

This section provides background showing how the LP formulation of the inference problem in chains leads to the known message-passing algorithm. The derivation follows Wainwright and Jordan [9], but is specialized for chains and highlights connections to our contributions. The LP for MAP inference in chains is as follows:

max Σ_{i,x_i} μ_i(x_i) θ_i(x_i) + Σ_{i,x_i,x_{i+1}} μ_i(x_i, x_{i+1}) τ(x_i, x_{i+1})
s.t. Σ_{x_i} μ_i(x_i) = 1  ∀i
     Σ_{x_i} μ_i(x_i, x_{i+1}) = μ_{i+1}(x_{i+1})  ∀i, x_{i+1}
     Σ_{x_{i+1}} μ_i(x_i, x_{i+1}) = μ_i(x_i)  ∀i, x_i   (4)

where θ_i(x_i) is the score obtained from assigning the i-th variable to value x_i, μ_i(x_i) is an indicator variable saying whether or not the MAP assignment sets the i-th variable to the value x_i, and τ(x_i, x_{i+1}) is the score the model assigns to a transition from value x_i to value x_{i+1}. All variables are implicitly assumed to be nonnegative. We assume a static τ, but all statements trivially generalize to position-dependent τ_i. We can restructure this LP to depend only on the pairwise assignment variables μ_i(x_i, x_{i+1}) by creating an edge between the last variable in the chain and an artificial variable and then "billing" all local scores to the pairwise edge that touches them from the right. We then restructure the constraints to sum out both sides of each edge, and add indicator variables μ_n(x_n, ·) and 0-scoring transitions for the artificial edge. This leaves the following LP (with dual variables written after their corresponding constraints):

max Σ_{i,x_i,x_{i+1}} μ_i(x_i, x_{i+1}) (τ(x_i, x_{i+1}) + θ_i(x_i))
s.t. Σ_{x_n} μ_n(x_n, ·) = 1   (N)
     Σ_{x_{i−1}} μ_{i−1}(x_{i−1}, x_i) = Σ_{x_{i+1}} μ_i(x_i, x_{i+1})   (α_i(x_i))   (5)

The dual of this linear program is

min N  s.t.
α_{i+1}(x_{i+1}) − α_i(x_i) ≥ τ(x_i, x_{i+1}) + θ_i(x_i)  ∀i, x_i, x_{i+1}
N − α_n(x_n) ≥ θ_n(x_n)  ∀x_n   (6)

Setting the α dual variables by

α_{i+1}(x_{i+1}) = max_{x_i} [α_i(x_i) + θ_i(x_i) + τ(x_i, x_{i+1})]   (7)

and N = max_{x_n} [α_n(x_n) + θ_n(x_n)] is a sufficient condition for dual feasibility, and, since N then takes the value of the primal solution, for optimality. Note that this equation is exactly the forward message-passing equation for max-product belief propagation in chains, i.e., the Viterbi algorithm. A setting of the dual variables is optimal if maximization of the problem's Lagrangian over the primal variables yields a primal-feasible setting. The coefficients on the edge variables μ_i(x_i, x_{i+1}) are their reduced costs,

α_i(x_i) − α_{i+1}(x_{i+1}) + θ_i(x_i) + τ(x_i, x_{i+1}).   (8)

For duals that obey the constraints of (6), it is clear that the maximal reduced cost is zero, attained when x_i is set to the argmax used when constructing α_{i+1}(x_{i+1}). Therefore, to obtain a primal-optimal solution, we start at the end of the chain and follow the argmax indices back to the beginning, which is the same backward sweep as in the Viterbi algorithm.

3.1 Improving the reduced cost with information from both ends of the chain

Column generation adds all variables with positive reduced cost to the restricted LP, but equation (8) leads to an inefficient algorithm because it is positive for many irrelevant edge settings. In (8), the only terms that involve x_{i+1} are τ(x_i, x_{i+1}) and the τ(x′_i, x_{i+1}) term that is part of α_{i+1}(x_{i+1}). These are data-independent. Therefore, even if there is very strong local evidence against a particular setting x_{i+1}, pairs x_i, x_{i+1} may have positive reduced cost if the global transition factor τ(x_i, x_{i+1}) places positive weight on their compatibility. We can improve upon this by exploring LP formulations different from that of Wainwright and Jordan. Note that in equation (5) a local score is "billed" to its rightmost edge.
Instead, if we split it halfway (now using phantom edges on both sides of the chain), we would obtain slightly different message-passing rules and the following reduced cost expression:

α_i(x_i) − α_{i+1}(x_{i+1}) + (1/2)(θ_i(x_i) + θ_{i+1}(x_{i+1})) + τ(x_i, x_{i+1}).   (9)

This contains local information for both x_i and x_{i+1}, though it halves its magnitude. In Table 2 we demonstrate that this yields performance comparable to using the reduced cost of (8), which still outperforms Viterbi. An even better reduced cost expression can be obtained by duplicating the marginalization constraints:

max Σ_{i,x_i,x_{i+1}} μ_i(x_i, x_{i+1}) [τ(x_i, x_{i+1}) + (1/2)θ_i(x_i) + (1/2)θ_{i+1}(x_{i+1})]
s.t. Σ_{x_n} μ_n(x_n, ·) = 1   (N+)
     Σ_{x_1} μ_0(·, x_1) = 1   (N−)
     Σ_{x_{i−1}} μ_{i−1}(x_{i−1}, x_i) = Σ_{x_{i+1}} μ_i(x_i, x_{i+1})   (α_i(x_i))
     Σ_{x_{i+1}} μ_i(x_i, x_{i+1}) = Σ_{x_{i−1}} μ_{i−1}(x_{i−1}, x_i)   (β_i(x_i))   (10)

Following logic similar to that of the previous section, setting the dual variables according to (7) and

β_{i−1}(x_{i−1}) = max_{x_i} [β_i(x_i) + θ_i(x_i) + τ(x_{i−1}, x_i)]   (11)

is a sufficient condition for optimality. In effect, we solve the LP of equation (10) in two independent procedures, each solving the one-directional subproblem in (6), and either one of these subroutines is sufficient to construct a primal-optimal solution. This redundancy is important, though, because the resulting reduced cost

2R_i(x_i, x_{i+1}) = 2τ(x_i, x_{i+1}) + θ_i(x_i) + θ_{i+1}(x_{i+1}) + (α_i(x_i) − α_{i+1}(x_{i+1})) + (β_{i+1}(x_{i+1}) − β_i(x_i))   (12)

incorporates global information from both directions in the chain. In Table 2 we show that column generation with (12) is fastest, which is not obvious given the extra overhead of computing the β messages. This is the reduced cost that we use in the following discussion and experiments, unless explicitly stated otherwise.

4 Column Generation Algorithm

We present an algorithm for exact MAP inference that in practice is usually faster than traditional message passing.
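As a concrete check of equations (7), (11), and (12), the following Python sketch (variable names ours, not from the paper) computes the forward and backward max-messages for a chain with local scores θ and a static transition matrix τ, and evaluates 2R_i as a matrix over edge settings. With messages set by the full recursions, the maximal reduced cost on every edge is zero, attained on the MAP edges.

```python
import numpy as np

def messages(theta, tau):
    """Forward (eq. 7) and backward (eq. 11) max-product messages.
    theta: (n, k) local scores theta_i(x_i); tau: (k, k) transition scores."""
    n, k = theta.shape
    alpha, beta = np.zeros((n, k)), np.zeros((n, k))
    for i in range(1, n):
        # alpha_{i+1}(x') = max_x [alpha_i(x) + theta_i(x) + tau(x, x')]
        alpha[i] = (alpha[i - 1][:, None] + theta[i - 1][:, None] + tau).max(axis=0)
    for i in range(n - 2, -1, -1):
        # beta_{i-1}(x) = max_x' [beta_i(x') + theta_i(x') + tau(x, x')]
        beta[i] = (beta[i + 1][None, :] + theta[i + 1][None, :] + tau).max(axis=1)
    return alpha, beta

def reduced_cost(i, theta, tau, alpha, beta):
    """2 R_i(x_i, x_{i+1}) of eq. (12), as a (k, k) matrix over edge settings."""
    return (2 * tau + theta[i][:, None] + theta[i + 1][None, :]
            + alpha[i][:, None] - alpha[i + 1][None, :]
            + beta[i + 1][None, :] - beta[i][:, None])
```

The zero-maximum property follows because (12) splits into the two one-directional slacks, each nonpositive under (7) and (11) and both tight on the MAP path.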
Like all column generation algorithms, our technique requires components for three tasks: choosing the initial set of variables in the restricted LP, solving the restricted LP, and finding variables with positive reduced cost. When no variable of positive reduced cost exists, the current solution to the restricted problem is optimal because we have a primal-feasible, dual-feasible pair. Pseudocode for our algorithm is provided in Figure 1. In the following description, many concepts are explained in terms of nodes, despite our LP being defined over edges. The edge quantities can be defined in terms of node quantities, such as the α and β messages, and it is more efficient to store these than the quadratically many edge quantities.

4.1 Initialization

To initialize the LP, we first define a restricted domain for each node in the graphical model consisting of only x^L_i = argmax_{x_i} θ_i(x_i). Other initialization strategies, such as adding the high-scoring transitions or the k best x_i, are also valid. Next, we include in the initial restricted LP all the indicator variables μ_i(x^L_i, x^L_{i+1}) corresponding to these size-one domains. Solving the initial restricted LP is very efficient, since all nodes have only one valid setting, and no maximization is needed when passing messages.

4.2 Warm-Starting the Restricted LP

Updating all messages using the max-product rules of equations (7) and (11) is a valid way to solve the restricted LP, but it does not leverage the messages that were optimal for previous calls to the problem. In practice, the restricted domains of the nodes are not all updated at every iteration, and hence many of the previous messages may still appear in a dual-optimal setting of the current restricted problem. As before, solving the restricted LP can be decomposed into independently solving each of the one-directional LPs, and thus we update α independently of β.
To construct a primal setting from either the α or β messages, we employ the standard technique of back-tracing the argmaxes used in their update equations. In some regions of the chain, we can avoid updating messages because we can guarantee that the proposed message updates would yield the same maximization and thus the same primal setting. Simple rules include, for example, not updating α to the left of the first updated domain, and not updating α_i(·) if |D_{i−1}| = 1, since maximization over D_{i−1} is trivial. Furthermore, to the right of the last updated domain, if we compute new messages α′_i(·) and find that the argmax at the current MAP assignment x∗_i does not change, we can revert to the previous α_i(·) and terminate message passing. An analogous statement can be made about the β variables. When solving the restricted LP, some constraints are trivially satisfied because they only involve variables that are implicitly set to zero, and hence the corresponding dual variables can be set arbitrarily. To prevent extraneous un-generated variables from having a high reduced cost, we choose duals by guessing values that should be feasible in the unrestricted LP, at a smaller computational cost than solving the unrestricted LP directly. We employ the same update equations used for the in-domain messages in (7) and (11), and maximize over the restricted domain of the variable's neighbor. In our experiments, over 90% of the restricted domains were of size 1, so this dependence on the size of the neighbor's domain was not a computational bottleneck in practice, and still allowed the reduced-cost oracle to consider five or fewer candidate edges per iteration in more than 86% of the calls.

4.3 Reduced-Cost Oracle

Exhaustively searching the chain for variables of positive reduced cost by iterating over all settings of all edges would be as expensive as exact max-product message passing.
However, our oracle search strategy is efficient because it prunes candidates away using precomputed bounds on the transitions. First we decompose equation (12) as

2R_i(x_i, x_{i+1}) = 2τ(x_i, x_{i+1}) + S+_i(x_i) + S−_i(x_{i+1}),   (13)

where S+_i(x_i) = θ_i(x_i) + α_i(x_i) − β_i(x_i) and S−_i(x_{i+1}) = θ_{i+1}(x_{i+1}) − α_{i+1}(x_{i+1}) + β_{i+1}(x_{i+1}). If, in practice, most settings for each edge have negative reduced cost, we can efficiently find candidate settings by first upper-bounding S+_i(x_i) + 2τ(x_i, x_{i+1}), finding all possible values x_{i+1} that could yield a positive reduced cost, and then doing the reverse. Finally, we search over the much smaller set of candidates for x_i and x_{i+1}. This strategy is described in Figure 1. After the first round of column generation, if R_i(x_i, x_{i+1}) has not changed for any x_i, x_{i+1}, then no variables of positive reduced cost can exist, because they would have been added in the previous iteration, and we can skip the oracle. This condition can be checked while passing messages. Lastly, a final pruning strategy is that if there are settings x_i, x′_i such that

θ_i(x_i) + min_{x_{i−1}} τ(x_{i−1}, x_i) + min_{x_{i+1}} τ(x_i, x_{i+1}) > θ_i(x′_i) + max_{x_{i−1}} τ(x_{i−1}, x′_i) + max_{x_{i+1}} τ(x′_i, x_{i+1}),   (14)

then we know with certainty that setting x′_i is suboptimal. This helps prune the oracle's search space efficiently because the maxima and minima above are data-independent offline computations. Concretely, we first search linearly through the labels of a node for the one with the highest local score, and then use the precomputed bounds on the transition scores to linearly discard states whose upper bound on the score is smaller than the lower bound of the best state.
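The bound-based candidate search can be sketched as follows (a simplified rendering with our own names, built on the decomposition (13)). It is exact: any pair with positive reduced cost survives both bounding passes, and the final filter re-checks the exact reduced cost.

```python
import numpy as np

def oracle(S_plus, S_minus, tau):
    """Return all edge settings (x_i, x_j) with positive reduced cost
    2R = 2*tau[x_i, x_j] + S_plus[x_i] + S_minus[x_j], pruning with bounds."""
    U_in = tau.max(axis=0)            # U_tau(., x_j): bound over left endpoints
    U_out = tau.max(axis=1)           # U_tau(x_i, .): bound over right endpoints
    Ui = S_plus.max()
    # Candidate right endpoints: S-(x_j) + max_x S+(x) + 2*max_x tau(x, x_j) > 0
    Cj = np.flatnonzero(S_minus + Ui + 2 * U_in > 0)
    if Cj.size == 0:
        return []
    Uj = S_minus[Cj].max()
    # Candidate left endpoints, bounded against the surviving right endpoints
    Ci = np.flatnonzero(S_plus + Uj + 2 * U_out > 0)
    # Exact check over the (much smaller) candidate product set
    return [(i, j) for i in Ci for j in Cj
            if 2 * tau[i, j] + S_plus[i] + S_minus[j] > 0]
```

On random scores the result matches a brute-force scan over all edge settings while usually examining far fewer pairs.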
Algorithm CG-Infer:
    for i = 1 to n:
        D_i ← {argmax_{x_i} θ_i(x_i)}
    while the domains have not converged:
        (α, β) ← GetMessages(D, θ)
        for i = 1 to n:
            (D∗_i, D∗_{i+1}) ← ReducedCostOracle(i)
            D_i ← D_i ∪ D∗_i
            D_{i+1} ← D_{i+1} ∪ D∗_{i+1}

Algorithm ReducedCostOracle(i):
    U_τ(·, x_j) ← max_{x_i} τ(x_i, x_j)
    U_τ(x_i, ·) ← max_{x_j} τ(x_i, x_j)
    U_i ← max_{x_i} S+_i(x_i)
    C′_i ← {x_j | S−_i(x_j) + U_i + 2U_τ(·, x_j) > 0}
    U′_i ← max_{x_j ∈ C′_i} S−_i(x_j)
    C_i ← {x_i | S+_i(x_i) + U′_i + 2U_τ(x_i, ·) > 0}
    (D, D′) ← {(x_i, x_j) ∈ C_i × C′_i | R_i(x_i, x_j) > 0}
    return D, D′

Figure 1: Column Generation Algorithm and Pruning Strategy for the Reduced-Cost Oracle

5 Extensions of the Algorithm

The column generation algorithm is fairly general and can easily be extended to support several interesting use cases. In section 7 we provide experiments supporting the usefulness of these extensions, and they are described in more detail in appendix A. First, our algorithm generalizes easily to MAP inference in trees by using a similar structure but a different reduced cost expression that considers messages flowing in both directions across each edge (appendix A.1). The reduced-cost oracle can also be used to compute the duality gap of an approximate solution. This allows early stopping of our algorithm if the gap is small, and also provides analysis of the sub-optimality of the output of beam search (appendix A.2). Furthermore, margin-violation queries when training a structured SVM with a 0/1 loss can be answered efficiently using a small modification of our algorithm, in which we also add variables of small negative reduced cost and perform 2-best inference within the restricted domains (appendix A.3). Lastly, regularizing the transition weights more strongly allows one to train models that will decode more quickly (appendix A.4). Most standard inference algorithms, such as Viterbi, do not have this behavior, where inference time is affected by the actual model scores.
By coupling inference and learning, practitioners have more freedom to trade off test-time speed vs. accuracy.

6 Related Work

Column generation has been employed as a way of dramatically speeding up MAP inference by Riedel et al. [10], who apply it directly to the LP relaxation for dependency parsing with grandparent edges. There has been substantial prior work on improving the speed of max-product inference in chains by pruning the search process. CarpeDiem [11] relies on an expression similar to the oriented, left-to-right reduced cost equation of (8), with a pruning strategy similar to the one described in section 4.3. Following up, Kaji et al. [12] presented a staggered decoding strategy that similarly attempts to bound the best achievable score using uninstantiated domains, but used only local scores when searching for new candidates. The dual variables obtained in earlier runs were then used to warm-start the inference in later runs, similarly to what is done in section 4.2. Their techniques obtained speed-ups over Viterbi inference similar to ours. However, their algorithms do not provide extensions to inference in trees, a margin-violation oracle, or approximate inference using a duality gap. Furthermore, Kaji et al. use data-dependent transition scores. This might improve our performance as well, if the transition scores are more sharply peaked. Similarly, Raphael [13] also presents a staggered decoding strategy, but does so in a way that applies to many dynamic programming algorithms. The strategy of preprocessing data-independent factors to speed up max-product has been previously explored by McAuley and Caetano [14], who showed that if the transition weights are large, savings can be obtained by sorting them offline. Our contributions, on the other hand, are more effective when the transitions are small.
The same authors have also explored strategies to reduce the worst-case complexity of message passing by exploiting faster matrix multiplication algorithms [15].

Figure 2: Training-time manipulation of accuracy vs. test throughput for our algorithm

Alternative methods of leveraging the interplay between fast dynamic programming algorithms and higher-level LP techniques have been explored elsewhere. For example, in dual decomposition [16], inference in joint models is reduced to repeated inference in independent models. Tree block-coordinate descent performs approximate inference in loopy models using exact inference in trees as a subroutine [1]. Column generation is cutting planes in the dual, and cutting planes have been used successfully in various machine learning contexts; see, for example, Sontag and Jaakkola [17] and Riedel [18]. There is a mapping between dynamic programs and shortest-path problems [19]. Our reduced cost is an estimate of the desirability of an edge setting, and thus our algorithm performs heuristic search in the space of edge settings. With dual feasibility, this heuristic is consistent, and thus our algorithm iteratively constructs a heuristic with which it can perform A* search for the final restricted LP [20].

7 Experiments

We compare the performance of column generation with exact and approximate inference on Wall Street Journal [21] part-of-speech (POS) tagging and joint POS tagging and named-entity recognition (POS/NER). The output variable domain size is 45 for POS and 360 for POS/NER. The test set contains 5463 sentences. The POS model was trained with a 0/1-loss structured SVM and the POS/NER model was trained using SampleRank [22]. Table 1 compares the inference times and accuracies of column generation (CG), Viterbi, Viterbi with the final pruning technique described in section 4.3 (Viterbi+P), CG with duality-gap termination condition 0.15% (CG-DG), and beam search.
For POS, CG is more than twice as fast as Viterbi, with speed comparable to a beam of size 3. Whereas CG is exact, Beam-3 loses 1.6% accuracy. Exact inference in the model obtains a tagging accuracy of 95.3%. For joint POS and NER tagging, the speedups are even more dramatic: we observe a 13x speedup over Viterbi and are comparable in speed to a beam of size 7, while being exact. As in POS, CG-DG provides a mild additional speedup. Over 90% of tokens in the POS task had a domain of size one, and over 99% had a domain of size 3 or smaller. Column generation always finished in at most three iterations, and 22% of the time it terminated after one. 86% of the time, the reduced-cost oracle iterated over at most 5 candidate edge settings, which is a significant reduction from the worst-case behavior of 45^2. The pruning strategy in Viterbi+P manages to restrict the number of possible labels for each token to at most 5 for over 65% of the tokens, and prunes the size of each domain by half over 95% of the time. Table 2.A presents results for the 0/1-loss oracle described in section 5. Baselines are a standard Viterbi 2-best search¹ and Viterbi 2-best with the pruning technique of 4.3 (Viterbi+P). CG outperforms Viterbi 2-best on both POS and POS/NER. Though Viterbi+P provides an effective speedup, we are still 19x faster on POS/NER. In terms of absolute throughput, POS/NER is faster than POS because the POS/NER model was not trained with a regularized structured SVM, and thus there are fewer margin violations. Our 0/1 oracle is quite efficient at determining that there is no margin violation, but requires extra work when required to actually produce the 2-best setting. Table 2.B shows column generation with two other reduced-cost formulations on the same POS tagging task. CG-α uses the reduced cost from equation (8), while CG-α+θ_{i+1} uses the reduced cost from equation (9). The full CG is clearly beneficial, despite requiring computation of β.
¹Implemented by replacing all maximizations in the Viterbi code with two-best maximizations.

Table 1: Comparing inference time and exactness of column generation (CG), Viterbi, Viterbi with the final pruning technique of section 4.3 (Viterbi+P), CG with duality-gap termination condition 0.15% (CG-DG), and beam search on POS tagging (left) and joint POS/NER (right).

POS tagging:
Algorithm   % Exact   Sent./sec.
Viterbi     100       3144.6
Viterbi+P   100       4515.3
CG          100       8227.6
CG-DG       98.9      9355.6
Beam-1      57.7      12117.6
Beam-2      92.6      7519.3
Beam-3      98.4      6802.5
Beam-4      99.5      5731.2

Joint POS/NER:
Algorithm   % Exact   Sent./sec.
Viterbi     100       56.9
Viterbi+P   100       498.9
CG          100       779.9
CG-DG       98.4      804
Beam-1      66.6      3717.0
Beam-5      98.5      994.97
Beam-7      99.2      772.8
Beam-10     99.5      575.1

Table 2: (A) speedups for the 0/1-loss oracle; (B) comparison of reduced-cost formulations.

(A)
Method             POS Sent./sec.   POS/NER Sent./sec.
CG                 85.0             299.9
Viterbi 2-best     56.0             0.06
Viterbi+P 2-best   119.6            11.7

(B)
Reduced Cost   POS Sent./sec.
CG             8227.6
CG-α           5125.8
CG-α+θ_{i+1}   4532.1

In Figure 2, we explore the ability to manipulate training-time regularization to trade off test accuracy and test speed, as discussed in section 5. We train a structured SVM with L2 regularization (coefficient 0.1) on the emission weights, and vary the L2 coefficient on the transition weights from 0.1 to 10. A 4x gain in speed can be obtained at the expense of an 8% relative decrease in accuracy.

8 Conclusions and future work

In this paper we presented an efficient family of algorithms based on column generation for MAP inference in chains and trees. The algorithm exploits the fact that inference can often rule out many possible values, and we can efficiently expand the set of values on the fly. Depending on the parameter settings, it can be twice as fast as Viterbi in WSJ POS tagging and 13x faster in a joint POS-NER task.
One avenue of further work is to extend the bounding strategies in this algorithm to inference in cluster graphs or junction trees, allowing faster inference in higher-order chains or even loopy graphical models. The connection between inference and learning shown in section 5 also bears further study, since it would be helpful to have more prescriptive advice on regularization strategies for achieving desired accuracy/time tradeoffs.

Acknowledgments

This work was supported in part by the Center for Intelligent Information Retrieval. The University of Massachusetts gratefully acknowledges the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181, in part by IARPA via DoI/NBC contract #D11PC20152, in part by Army prime contract number W911NF-07-1-0216 and University of Pennsylvania subaward number 103-548106, and in part by UPenn NSF medium IIS-0803847. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.

References

[1] David Sontag and Tommi Jaakkola. Tree block coordinate descent for MAP in graphical models. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 8, pages 544–551. JMLR: W&CP, 2009.
[2] C. Pal, C. Sutton, and A. McCallum. Sparse forward-backward using minimum divergence beams for fast training of conditional random fields. In Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on, volume 5, pages V–V. IEEE, 2006.
[3] A. Kulesza, F. Pereira, et al. Structured learning with approximate inference. Advances in Neural Information Processing Systems, 20:785–792, 2007.
[4] L.
Shen, G. Satta, and A. Joshi. Guided learning for bidirectional sequence classification. In Annual Meeting of the Association for Computational Linguistics, volume 45, page 760, 2007.
[5] D. Tarlow, D. Batra, P. Kohli, and V. Kolmogorov. Dynamic tree block coordinate ascent. In ICML, pages 113–120, 2011.
[6] C. Yanover, T. Meltzer, and Y. Weiss. Linear programming relaxations and belief propagation: an empirical study. The Journal of Machine Learning Research, 7:1887–1907, 2006.
[7] M. Lübbecke and J. Desrosiers. Selected topics in column generation. Operations Research, 53:1007–1023, 2004.
[8] D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.
[9] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[10] S. Riedel, D. Smith, and A. McCallum. Parse, price and cut: delayed column and row generation for graph-based parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '12), 2012.
[11] R. Esposito and D.P. Radicioni. CarpeDiem: an algorithm for the fast evaluation of SSL classifiers. In Proceedings of the 24th International Conference on Machine Learning, pages 257–264. ACM, 2007.
[12] N. Kaji, Y. Fujiwara, N. Yoshinaga, and M. Kitsuregawa. Efficient staggered decoding for sequence labeling. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 485–494. Association for Computational Linguistics, 2010.
[13] C. Raphael. Coarse-to-fine dynamic programming. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(12):1379–1390, 2001.
[14] J. McAuley and T. Caetano. Exploiting data-independence for fast belief-propagation. In International Conference on Machine Learning 2010, volume 767, page 774, 2010.
[15] J.J. McAuley and T.S. Caetano. Faster algorithms for max-product message-passing.
Journal of Machine Learning Research, 12:1349–1388, 2011.
[16] A.M. Rush, D. Sontag, M. Collins, and T. Jaakkola. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1–11. Association for Computational Linguistics, 2010.
[17] D. Sontag and T. Jaakkola. New outer bounds on the marginal polytope. In Advances in Neural Information Processing Systems, 2007.
[18] S. Riedel. Improving the accuracy and efficiency of MAP inference for Markov logic. In Proceedings of UAI 2008, pages 468–475, 2008.
[19] R. Kipp Martin, Ronald L. Rardin, and Brian A. Campbell. Polyhedral characterization of discrete dynamic programming. Operations Research, 38(1):127–138, 1990.
[20] R.K. Ahuja, T.L. Magnanti, J.B. Orlin, and K. Weihe. Network flows: theory, algorithms and applications. ZOR - Methods and Models of Operations Research, 41(3):252–254, 1995.
[21] M.P. Marcus, M.A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
[22] M. Wick, K. Rohanimanesh, K. Bellare, A. Culotta, and A. McCallum. SampleRank: training factor graphs with atomic gradients. In Proceedings of ICML, 2011.
Cocktail Party Processing via Structured Prediction

Yuxuan Wang¹ and DeLiang Wang¹,²
¹Department of Computer Science and Engineering, ²Center for Cognitive Science
The Ohio State University, Columbus, OH 43210
{wangyuxu,dwang}@cse.ohio-state.edu

Abstract

While human listeners excel at selectively attending to a conversation in a cocktail party, machine performance is still far inferior by comparison. We show that the cocktail party problem, or the speech separation problem, can be effectively approached via structured prediction. To account for temporal dynamics in speech, we employ conditional random fields (CRFs) to classify speech dominance within each time-frequency unit for a sound mixture. To capture the complex, nonlinear relationship between input and output, both state and transition feature functions in the CRFs are learned by deep neural networks. The formulation of the problem as classification allows us to directly optimize a measure that is well correlated with human speech intelligibility. The proposed system substantially outperforms existing ones in a variety of noises.

1 Introduction

The cocktail party problem, or the speech separation problem, is one of the central problems in speech processing. A particularly difficult scenario is monaural speech separation, in which mixtures are recorded by a single microphone and the task is to separate the target speech from its interference. This is a severely underdetermined figure-ground separation problem, and it has been studied for decades with limited success. Researchers have attempted to solve the monaural speech separation problem from various angles. In signal processing, speech enhancement (e.g., [1, 2]) has been extensively studied, and assumptions regarding the statistical properties of noise are crucial to its success. Model-based methods (e.g., [3]) work well in constrained environments, but source models need to be trained in advance.
Computational auditory scene analysis (CASA) [4] is inspired by how the human auditory system functions [5]. CASA has the potential to deal with general acoustic environments, but existing systems have limited performance, particularly in dealing with unvoiced speech. Recent studies suggest a new formulation of the cocktail party problem, where the focus is to classify whether a time-frequency (T-F) unit is dominated by the target speech [6]. Motivated by this viewpoint, we propose to approach the monaural speech separation problem via structured prediction. The use of structured predictors, as opposed to binary classifiers, is motivated by the temporal dynamics of the speech signal. Our study makes the following contributions: (1) we demonstrate that modeling temporal dynamics via structured prediction can significantly improve separation; (2) to capture nonlinearity, we propose a new structured prediction model that makes use of the discriminative feature learning power of deep neural networks; and (3) instead of classification accuracy, we show how to directly optimize a measure that is well correlated with human speech intelligibility.

2 Separation as binary classification

We aim to estimate a time-frequency matrix called the ideal binary mask (IBM). The IBM is a binary matrix constructed from premixed target and interference, where 1 indicates that the target energy exceeds the interference energy by a local signal-to-noise ratio (SNR) criterion (LC) in the corresponding T-F unit, and 0 otherwise. The IBM is defined as

IBM(t, f) = 1 if SNR(t, f) > LC, and 0 otherwise,

where SNR(t, f) denotes the local SNR (in decibels) within the T-F unit at time t and frequency f. We adopt the common choice of LC = 0 in this paper [7]. Despite its simplicity, adopting the IBM as a computational objective offers several advantages. First, the IBM is directly based on the auditory masking phenomenon, whereby a stronger sound tends to mask a weaker one within a critical band.
Second, unlike other objectives such as maximizing SNR, it is well established that large human speech intelligibility improvements result from IBM processing, even for very low-SNR mixtures [7–9]. Improving human speech intelligibility is considered a gold standard for speech separation. Third, IBM estimation naturally leads to classification, which opens the cocktail party problem to a plethora of machine learning techniques. We propose to formulate IBM estimation as binary classification as follows, which is a form of supervised learning. A sound mixture sampled at 16 kHz is passed through a 64-channel gammatone filterbank spanning from 50 Hz to 8000 Hz on the equivalent rectangular bandwidth rate scale. The output from each filter channel is divided into 20-ms frames with a 10-ms frame shift, producing a cochleagram [4]. Due to the different spectral properties of speech across frequencies, a subband classifier is trained for each filter channel independently, with the IBM providing training labels. Acoustic features for each subband classifier are extracted from T-F units in the cochleagram. The target speech is separated by binary weighting of the cochleagram using the estimated IBM [4]. Several recent studies have attempted to directly estimate the IBM via classification. By employing Gaussian mixture models (GMMs) as classifiers and amplitude modulation spectrograms (AMS) as features, Kim et al. [10] show that estimated masks can improve human speech intelligibility in noise. Han and Wang [11] have improved on Kim et al.'s system by employing support vector machines (SVMs) as classifiers. Wang et al. [12] propose a set of complementary acoustic features that shows further improvements over previous systems. The complementary feature is a concatenation of AMS, relative spectral transform and perceptual linear prediction (RASTA-PLP), mel-frequency cepstral coefficients (MFCC), and pitch-based features.
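The IBM construction of Section 2 can be sketched in a few lines of Python (function and variable names are ours; in a full system the two energy matrices would come from cochleagrams of the premixed target and interference):

```python
import numpy as np

def ideal_binary_mask(target_energy, interference_energy, lc_db=0.0):
    """IBM(t, f) = 1 iff the local SNR in dB exceeds the criterion LC (default LC = 0)."""
    eps = np.finfo(float).tiny        # guard against log(0) in silent T-F units
    snr_db = 10.0 * np.log10((np.asarray(target_energy) + eps) /
                             (np.asarray(interference_energy) + eps))
    return (snr_db > lc_db).astype(np.uint8)
```

Binary weighting of the mixture cochleagram by this mask then resynthesizes the separated target.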
Because the ratio of 1's to 0's in the IBM is often skewed, simply using classification accuracy as the evaluation criterion may not be appropriate. Speech intelligibility studies [9, 10] have evaluated the influence of the hit (HIT) and false-alarm (FA) rates on intelligibility scores. The difference, the HIT−FA rate, is found to be well correlated with human speech intelligibility in noise [10]. The HIT rate is the percentage of correctly classified target-dominant T-F units (1's) in the IBM, and the FA rate is the percentage of wrongly classified interference-dominant T-F units (0's). Therefore, it is desirable to design a separation algorithm that maximizes the HIT−FA rate of the output mask.

3 Proposed system

Dictated by speech production mechanisms, the IBM contains highly structured, rather than random, patterns. Previous systems do not explicitly model such structure. As a result, temporal dynamics, which is a fundamental characteristic of speech, is largely ignored in previous work. Separation systems accounting for temporal dynamics exist. For example, Mysore et al. [13] incorporate temporal dynamics using HMMs. Hershey et al. [14] consider different levels of dynamic constraints. However, these works do not treat separation as classification. In contrast to standard binary classifiers, structured prediction models are able to model correlations in the output. In this paper, we treat unit classification at each filter channel as a sequence labeling problem and employ linear-chain conditional random fields (CRFs) [15] as subband classifiers.

3.1 Conditional random fields

Different from an HMM, a CRF is a discriminative model and does not need independence assumptions on features, making it more suitable to our task. A CRF models the posterior probability P(y|x) as follows. Denoting by y a label sequence and by x an input sequence,

P(y|x) = exp(Σt w⊤f(y, x, t)) / Z(x).  (1)
Here t indexes time frames, w is the vector of parameters to learn, and Z(x) = Σy′ exp(Σt w⊤f(y′, x, t)) is the partition function. f is a vector-valued feature function associated with each local site (a T-F unit in our task), and is often categorized into state feature functions s(yt, x, t) and transition feature functions t(yt−1, yt, x, t). State feature functions define the local discriminant functions for each T-F unit, and transition feature functions capture the interaction between neighboring labels. We assume a linear-chain setting and the first-order Markovian property, i.e., only interactions between two neighboring units in time are modeled. In our task, we can simply use the acoustic feature vectors in each T-F unit as state feature functions and their concatenations as transition feature functions:

s(yt, x, t) = [δ(yt = 0)xt, δ(yt = 1)xt]⊤,  (2)
t(yt−1, yt, x, t) = [δ(yt−1 = yt)zt, δ(yt−1 ≠ yt)zt]⊤,  (3)

where δ is the indicator function and zt = [xt−1, xt]⊤. Equation (3) essentially encodes temporal continuity in the IBM. To simplify notation, all feature functions are written as f(yt−1, yt, x, t) in the remainder of the paper. Training amounts to estimating w, and is usually done by maximizing the conditional log-likelihood on a training set T = {(x(m), y(m))}, i.e., we seek w by

max_w Σm log p(y(m)|x(m), w) + R(w),  (4)

where m is the index of a training sample and R(w) is a regularizer of w (we use ℓ2 in this paper). For gradient ascent, a popular choice is the limited-memory BFGS (L-BFGS) algorithm [16].

3.2 Nonlinear expansion using deep neural networks

A CRF is a log-linear model, which has only linear modeling power. As acoustic features are generally not linearly separable, the direct use of CRFs is unlikely to produce good results. In the following, we propose a method to transform the standard CRF into a nonlinear sequence classifier. We employ pretrained deep neural networks (DNNs) to capture the nonlinearity between input and output.
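As a quick aside, the state and transition feature functions of equations (2) and (3) in Section 3.1 can be written out explicitly. This is a sketch under our own conventions; the helper names are ours, not the paper's.

```python
import numpy as np

def state_features(x_t, y_t):
    # eq. (2): the acoustic feature vector x_t, gated by the unit label y_t
    return np.concatenate([(y_t == 0) * x_t, (y_t == 1) * x_t])

def transition_features(x_prev, x_t, y_prev, y_t):
    # eq. (3): z_t = [x_{t-1}, x_t], gated by agreement of adjacent labels
    z_t = np.concatenate([x_prev, x_t])
    return np.concatenate([(y_prev == y_t) * z_t, (y_prev != y_t) * z_t])

x0, x1 = np.array([1.0, 2.0]), np.array([3.0, 4.0])
s = state_features(x1, 1)                   # only the y = 1 half is active
t = transition_features(x0, x1, 0, 0)       # labels agree: first half active
```

The indicator δ of the paper becomes a boolean multiplier here, so exactly one half of each feature vector is nonzero for a given labeling.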
DNNs have received widespread attention since Hinton et al.'s paper [17]. DNNs can be viewed as hierarchical feature detectors that learn increasingly complex feature mappings as the number of hidden layers increases. To deal with problems such as vanishing gradients, Hinton et al. suggest first pretraining a DNN using a stack of restricted Boltzmann machines (RBMs) in an unsupervised, layerwise fashion. The resulting network weights are then fine-tuned with supervision by backpropagation. We first train a DNN in the standard way to classify speech dominance in each T-F unit. After pretraining and supervised fine-tuning, we then take the last hidden layer representations from the DNN as learned features to train the CRF. In a discriminatively trained DNN, the weights from the last hidden layer to the output layer define a linear classifier, hence the last hidden layer representations are more amenable to linear classification. In other words, we replace x by h in equations (1)-(4), where h represents the learned hidden features. This way, CRFs greatly benefit from the nonlinear modeling power of deep architectures. To better encode local contextual information, we could use a window (across both time and frequency) of learned features to label the current T-F unit. A more parsimonious way is to use a window of posteriors estimated by DNNs as features to train the CRF, which can dramatically reduce the dimensionality. We note in passing that the correlations across both time and frequency can also be encoded at the model level, e.g., by using grid-structured CRFs. However, the decoding algorithm may substantially increase the computational complexity of the system. We want to point out that an important advantage of using neural networks for feature learning is their efficiency in the test phase; once trained, the nonlinear feature extraction of a DNN is extremely fast (it involves only a forward pass). This is, however, not always true for other methods.
For example, sparse coding may need to solve a new optimization problem to obtain the features. Test-phase efficiency is crucial for a real-time implementation of a speech separation system. There is related work on developing nonlinear sequence classifiers in the machine learning community. For example, van der Maaten et al. [18] and Morency et al. [19] consider incorporating hidden variables into the training and inference of CRFs. Peng et al. [20] investigate a combination of neural networks and CRFs. Other related studies include [21] and [22]. The proposed model differs from the previous methods in that (1) a discriminatively trained deep architecture is used, and/or (2) a CRF instead of a Viterbi decoder is used on top of a neural network for sequence labeling, and/or (3) nonlinear features are also used in modeling transitions. In addition, the use of a contextual window and the change of the objective function discussed in the next subsection are specifically tailored to the speech separation problem.

3.3 Maximizing HIT−FA rate

As argued before, it is desirable to train a classifier to maximize the HIT−FA rate of the estimated mask. In this subsection, we show how to change the objective function and efficiently calculate the gradients in the CRF. Since subband classifiers are used, we aim to maximize the channelwise HIT−FA. Denote the output label by ut ∈ {0, 1} and the true label by yt ∈ {0, 1}. The per-utterance HIT−FA rate can be expressed as

Σt ut yt / Σt yt − Σt ut(1 − yt) / Σt(1 − yt),

where the first term is the HIT rate and the second the FA rate. To make the objective function differentiable, we replace ut by the marginal probability p(yt = 1|x); hence we seek w by maximizing the HIT−FA on a training set:

max_w ( Σm Σt p(yt(m) = 1|x(m), w) yt(m) / Σm Σt yt(m) − Σm Σt p(yt(m) = 1|x(m), w)(1 − yt(m)) / Σm Σt (1 − yt(m)) ).  (5)

Clearly, computing the gradient of (5) boils down to computing the gradient of the marginal.
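To make the objective concrete, the hard per-utterance HIT−FA rate and its differentiable surrogate in (5) can be sketched as follows. Function names are ours; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def hit_fa(mask, ibm):
    """Hard HIT-FA: fraction of target-dominant units (1's in the IBM)
    labeled 1, minus the fraction of interference-dominant units (0's)
    wrongly labeled 1."""
    mask, ibm = np.asarray(mask), np.asarray(ibm)
    hit = mask[ibm == 1].mean()
    fa = mask[ibm == 0].mean()
    return hit - fa

def soft_hit_fa(marginals, ibm):
    """Differentiable surrogate of eq. (5): the hard output u_t is
    replaced by the marginal p(y_t = 1 | x)."""
    p, y = np.asarray(marginals, float), np.asarray(ibm, float)
    return (p * y).sum() / y.sum() - (p * (1 - y)).sum() / (1 - y).sum()

ibm = np.array([1, 1, 0, 0])
perfect = hit_fa([1, 1, 0, 0], ibm)                 # perfect mask
soft = soft_hit_fa([0.9, 0.8, 0.1, 0.2], ibm)       # confident marginals
```

Because the surrogate is a smooth function of the marginals, its gradient with respect to w reduces to gradients of the marginals, as stated above.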
A speech utterance (sentence) typically spans several hundred time frames; therefore numerical stability is critically important in our task. As will be seen, computing the gradient of the marginal requires the gradients of the forward/backward scores. We adopt Rabiner's scaling trick [23] used in HMMs to normalize the forward/backward scores at each time point. Specifically, define α(t, u) and β(t, u) as the forward and backward scores of label u at time t, respectively. We normalize the forward score such that Σu α(t, u) = 1, and use the resulting scaling to normalize the backward score. Defining the potential function φt(v, u) = exp(w⊤f(v, u, x, t)), the recurrences for the normalized forward/backward scores are written as

α(t, u) = Σv α(t − 1, v) φt(v, u) / s(t),  (6)
β(t, u) = Σv β(t + 1, v) φt(u, v) / s(t + 1),  (7)

where s(t) = Σu Σv α(t − 1, v) φt(v, u). It is easy to show that Z(x) = Πt s(t), and the marginal now takes the simpler form p(yt|x, w) = α(t, yt) β(t, yt). Therefore, the gradient of the marginal is

∂p(yt|x, w)/∂w = Gα(t, yt) β(t, yt) + α(t, yt) Gβ(t, yt),  (8)

where Gα and Gβ are the gradients of the normalized forward and backward scores, respectively. Due to the score normalization, Gα and Gβ are very unlikely to overflow. We now show that Gα can be calculated recursively. Letting q(t, u) = Σv α(t − 1, v) φt(v, u), we have

Gα(t, u) = ∂α(t, u)/∂w = [ (∂q(t, u)/∂w) Σv q(t, v) − Σv (∂q(t, v)/∂w) q(t, u) ] / (Σv q(t, v))²,  (9)

and

∂q(t, u)/∂w = Σv Gα(t − 1, v) φt(v, u) + Σv α(t − 1, v) φt(v, u) f(v, u, x, t).  (10)
[Figure 1: HIT−FA results for DNN, DNN∗, DNN-CRF and DNN-CRF∗ at −10, −5 and 0 dB. (a)-(c): matched-noise test condition (overall, voiced, unvoiced); (d)-(f): unmatched-noise test condition (overall, voiced, unvoiced).]

[Figure 2: Channelwise HIT−FA comparisons on the 0 dB test mixtures for DNN, DNN∗, DNN-CRF and DNN-CRF∗. (a) overall; (b) voiced speech intervals; (c) unvoiced speech intervals.]

The derivation of Gβ is similar, thus omitted. The time complexity of calculating Gα and Gβ is O(L|S|²), where L and |S| are the utterance length and the size of the label set, respectively. This is the same as the forward-backward recursion. The objective function in (5) is not concave. Since high accuracy correlates with high HIT−FA, a safe practice is to use a solution of (4) as a warm start for the subsequent optimization of (5). For feature learning, the DNN is also trained using (5) in the final system. The gradient calculation is much simpler due to the absence of transition features. We found that L-BFGS performs well and shows fast and stable convergence for both feature learning and CRF training.

4 Experimental results

4.1 Experimental setup

Our training and test sets are primarily created from the IEEE corpus [24], recorded by a single female speaker.
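As a side note before the experiments, the scaled forward recursion (6) of Section 3.3 can be sketched in a few lines. Array conventions below are our own choice, not the paper's.

```python
import numpy as np

def scaled_forward(potentials, init):
    """Scaled forward recursion, eq. (6): alpha(t, .) is normalized to
    sum to one at every frame, so long utterances cannot overflow.

    potentials[t, v, u] plays the role of phi_t(v, u); init holds
    unnormalized scores at t = 0. Returns alpha and the scales s(t),
    with log Z(x) = sum_t log s(t)."""
    T, S, _ = potentials.shape
    alpha = np.empty((T, S))
    scale = np.empty(T)
    a = np.asarray(init, float)
    scale[0] = a.sum()
    alpha[0] = a / scale[0]
    for t in range(1, T):
        a = alpha[t - 1] @ potentials[t]   # sum_v alpha(t-1, v) phi_t(v, u)
        scale[t] = a.sum()
        alpha[t] = a / scale[t]
    return alpha, scale

alpha, scale = scaled_forward(np.ones((5, 2, 2)), np.array([1.0, 3.0]))
```

The backward pass (7) mirrors this loop in reverse, reusing the same scales.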
This enables us to directly compare with previous intelligibility studies [10], where the same speaker is used in training and testing. The training set is created by mixing 50 utterances with 12 noises at 0 dB. To create the test set, we choose 20 unseen utterances from the same speaker. First, the 20 utterances are mixed with the previous 12 noises to create a matched-noise test condition, and then with 5 unseen noises to create an unmatched-noise test condition.

[Figure 3: Masks for a test utterance mixed with an unseen crowd noise at 0 dB. White represents 1's and black represents 0's. (a) Ideal binary mask; (b) DNN-CRF∗-P mask; (c) DNN mask.]

The test noises¹ cover a variety of daily noises, and most of them are highly non-stationary. In each frequency channel, there are roughly 150,000 and 82,000 T-F units in the training and test sets, respectively. Speaker-independent experiments are presented in Section 4.4. The proposed system is called DNN-CRF, or DNN-CRF∗ if it is trained to maximize HIT−FA. We use the suffixes R and P to distinguish the training features for the CRF, where R stands for learned features without a context window (features are learned from the complementary acoustic feature set mentioned in Section 2) and P stands for a window of posterior features. We use a two-hidden-layer DNN as it provides a good trade-off between performance and complexity, and use a context window spanning 5 time frames and 17 frequency channels to construct the posterior feature vector. We use the cross-entropy objective function for training the standard DNN in comparisons.

4.2 Experiment 1: HIT−FA maximization

In this subsection, we show the effect of directly maximizing the HIT−FA rate.
To evaluate the contribution of the change of objective alone, we use ideal pitch in the following experiments to neutralize pitch estimation errors. The models are trained on 0 dB mixtures. In addition to 0 dB, we also test the trained models on −10 and −5 dB mixtures. Such a test setting not only allows us to measure the system's generalization to different SNR conditions, but also shows the effects of HIT−FA maximization on estimating sparse IBMs. We compare DNN-CRF∗-R with DNN, DNN∗ and DNN-CRF-R; the results are shown in Figures 1 and 2. We document HIT−FA rates at three levels: overall, voiced intervals (pitched frames) and unvoiced intervals (unpitched frames). Voicing boundaries are determined using ideal pitch. Figure 1 shows the results for both the matched-noise and unmatched-noise test conditions. First, comparing the performance of DNN-CRFs and DNNs, we can see that modeling temporal continuity always improves performance. It also appears very helpful for generalization to different SNRs. In the matched condition, the improvement from directly maximizing HIT−FA is most significant in unvoiced intervals. The improvement becomes larger as SNR decreases. In the unmatched condition, as classification becomes much harder, direct maximization of HIT−FA offers more improvement in all cases. The largest HIT−FA improvement of DNN-CRF∗-R over DNN is about 10.7% absolute overall and 21.2% absolute in unvoiced speech intervals. For a closer inspection, Figure 2 shows channelwise HIT−FA comparisons on the 0 dB test mixtures in the matched-noise test condition. It is well known that unvoiced speech is indispensable for speech intelligibility but hard to separate. Due to the lack of harmonicity and weak energy, frequency channels containing unvoiced speech often have significantly skewed distributions of target-dominant and interference-dominant units. Therefore, an accuracy-maximizing classifier tends to output all 0's to attain high accuracy.
As an illustration, Figure 3 shows two masks for an utterance mixed with an unseen crowd noise at 0 dB, using DNN and DNN-CRF∗-P respectively. The two estimated masks achieve similar accuracy, around 90%. However, it is clear that the DNN mask misses significant portions of unvoiced speech, e.g., between frames 30-50 and 220-240.

¹Test noises are: babble, bird chirp, crow, cocktail party, yelling, clap, rain, rock music, siren, telephone, white, wind, crowd, fan, speech shaped, traffic, and factory noise. The first 12 are used in training.

Table 1: Performance comparisons between different systems. Boldface indicates the best result.

                      Matched-noise condition                      Unmatched-noise condition
System                Accuracy  HIT−FA  SNR (dB)  SegSNR (dB)      Accuracy  HIT−FA  SNR (dB)  SegSNR (dB)
GMM [10]              77.4%     55.4%   10.2      7.3              65.9%     31.6%   6.8       1.9
SVM [11]              86.6%     68.0%   10.5      10.9             91.2%     64.1%   9.7       7.9
DNN                   87.7%     71.6%   11.4      11.8             91.1%     66.2%   9.9       8.1
CRF                   82.3%     59.8%   8.8       8.7              90.8%     64.0%   9.3       7.8
SVM-Struct            81.7%     58.6%   8.4       8.1              90.7%     63.5%   9.1       7.5
CNF                   87.8%     71.7%   11.2      12.0             91.1%     66.9%   9.8       8.4
LD-CRF                86.3%     68.4%   9.7       10.5             91.1%     63.6%   8.9       7.8
DNN-CRF∗-R            89.1%     75.6%   12.1      13.2             90.8%     70.2%   10.3      9.0
DNN-CRF∗-P            89.9%     76.9%   12.0      13.5             91.1%     70.7%   10.0      8.9
Hendriks et al. [1]   n/a       n/a     4.6       0.5              n/a       n/a     6.2       1.1
Wiener Filter [2]     n/a       n/a     3.7       −0.7             n/a       n/a     5.6       −0.6

Table 2: Performance comparisons when tested on different unseen speakers.

                      Matched-noise condition                      Unmatched-noise condition
System                Accuracy  HIT−FA  SNR (dB)  SegSNR (dB)      Accuracy  HIT−FA  SNR (dB)  SegSNR (dB)
SVM [11]              86.2%     65.0%   10.2      9.9              91.1%     60.6%   9.4       7.3
DNN-CRF∗-P            87.3%     72.0%   12.1      11.2             90.9%     68.3%   10.1      8.1
Hendriks et al. [1]   n/a       n/a     4.5       −2.9             n/a       n/a     6.9       −1.0
Wiener Filter [2]     n/a       n/a     3.8       −4.5             n/a       n/a     6.0       −3.3

In summary, direct maximization of HIT−FA improves HIT−FA performance compared to accuracy maximization, especially for unvoiced speech, and the improvement is more significant when the system is tested in unseen acoustic environments.
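The accuracy/HIT−FA mismatch discussed above is easy to reproduce on synthetic labels. The numbers below are illustrative, not from the paper: on a skewed channel, the trivial all-zero mask scores high accuracy yet a HIT−FA of exactly zero.

```python
import numpy as np

# A skewed channel: roughly 10% of T-F units are target-dominant,
# as is typical of channels dominated by unvoiced speech.
rng = np.random.default_rng(0)
ibm = (rng.random(1000) < 0.1).astype(int)
all_zero = np.zeros_like(ibm)

accuracy = (all_zero == ibm).mean()   # high, ~0.9, despite a useless mask
hit = all_zero[ibm == 1].mean()       # fraction of 1's correctly labeled 1
fa = all_zero[ibm == 0].mean()        # fraction of 0's wrongly labeled 1
```

Here `accuracy` is near 0.9 while `hit - fa` is 0, which is why HIT−FA, not accuracy, is the criterion optimized in this paper.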
4.3 Experiment 2: system comparisons

We systematically compare the proposed system with three kinds of systems on 0 dB mixtures: binary-classifier based, structured-predictor based, and speech-enhancement based. In addition to HIT−FA, we also include classification accuracy, SNR and segmental SNR (SegSNR) as alternative evaluation criteria. To compute SNRs, we use the target speech resynthesized from the IBM as the ground-truth signal for all classification-based systems. This way of computing SNRs is commonly adopted in the literature [4, 25], as the IBM represents the ground truth of classification. All classification-based systems use the same feature set, but with estimated pitch, described in Section 2, except for Kim et al.'s GMM-based system, which uses AMS features [10]. Note that we failed to produce reasonable results using the complementary feature set in Kim et al.'s system, possibly because GMMs require much more training data than discriminative models for high-dimensional features. Results are summarized in Table 1. We first compare with methods based on binary classifiers. These include two existing systems [10, 11] and a DNN-based system. Due to the variety of noises, classification is challenging even in the matched-noise condition. It is clear that the proposed system significantly outperforms the others in terms of all criteria. The improvement of DNN-CRF∗s over DNN demonstrates the benefit of modeling temporal continuity. It is interesting to see that DNN significantly outperforms SVM, especially for unvoiced speech (not shown), which is important for speech intelligibility. We note that without RBM pretraining, the DNN performs significantly worse. Classification in the unmatched-noise condition is obviously more difficult, as feature distributions are likely mismatched between the training and the test set. Kim et al.'s system fails to generalize to different acoustic environments due to substantially increased FA rates.
The proposed system significantly outperforms SVM and DNN, achieving about 71% overall HIT−FA and 10 dB SNR for unseen noises. Kim et al.'s system has been shown to improve human speech intelligibility [10]; it is therefore reasonable to project that the proposed system will provide further speech intelligibility improvements. We next compare with systems based on structured predictors, including CRF, SVM-Struct [26], conditional neural fields (CNF) [20] and latent-dynamic CRF (LD-CRF) [19]. For fair comparison, we use a two-hidden-layer CNF model with the same number of parameters as the DNN-CRF∗s. Conventional structured predictors such as CRF and SVM-Struct (linear kernel) are able to explicitly model temporal dynamics, but only with linear modeling capability. The direct use of a CRF turns out to be much worse than using a kernel SVM. Nevertheless, the performance can be substantially boosted by adding latent variables (LD-CRF) or by using nonlinear feature functions (CNF and DNN-CRF∗s). With the same network architecture, CNF mainly differs from our model in two aspects. First, CNF does not use unsupervised RBM pretraining. Second, CNF only uses bias units in building transition features. As a result, the proposed system significantly outperforms CNF, even though the CRF and neural networks are jointly trained in the CNF model. With its better ability to encode contextual information, using a window of posteriors as features clearly outperforms single-unit features in terms of classification. It is worth noting that although SVM achieves slightly higher accuracy in the unmatched-noise condition, the resulting HIT−FA and SNRs are worse than those of some other systems. This is consistent with our analysis in Section 4.2. Finally, we compare with two representative speech enhancement systems [1, 2]. The algorithm proposed in [1] represents a recent state-of-the-art method, and Wiener filtering [2] is one of the most widely used speech enhancement algorithms.
Since speech enhancement does not aim to estimate the IBM, we compare SNRs by using clean speech (not the IBM) as the ground truth. As shown in Table 1, the speech enhancement algorithms are much worse, and this is true for all 17 noises. Due to temporal continuity modeling and the use of T-F context, the proposed system produces masks that are smoother than those from the other systems (e.g., Figure 3). As a result, the outputs seem to contain less musical noise.

4.4 Experiment 3: speaker generalization

Although the training set contains only a single IEEE speaker, the proposed system generalizes reasonably well to different unseen speakers. To show this, we create a new test set by mixing 20 utterances from the TIMIT corpus [27] at 0 dB. The new test utterances are chosen from 10 different female TIMIT speakers, each providing 2 utterances. We show the results in Table 2; it is clear that the proposed system generalizes better than existing ones to unseen speakers. Note that significantly better performance and generalization to different genders can be obtained by including the speaker(s) of interest in the training set.

5 Discussion and conclusion

Listening tests have shown that a high FA rate is more detrimental to speech intelligibility than a high miss (or low HIT) rate [9]. The proposed classification framework affords us control over these two quantities. For example, we could constrain the upper bound of the FA rate while still maximizing the HIT rate. In this case, a constrained optimization problem should replace (5). Our experimental results (not shown due to lack of space) indicate that this can effectively remove spurious target segments while still producing intelligible speech. Being able to efficiently compute the derivatives of marginals, one could in principle optimize a class of objectives other than HIT−FA.
These may include objectives concerning either speech intelligibility or quality, as long as the objective of interest can be expressed or approximated by a combination of marginal probabilities. For example, we have tried to simultaneously minimize two traditional CASA measures, PEL and PNR (see e.g., [25]), where PEL represents the percentage of target energy loss and PNR the percentage of noise energy residue. Significant reductions in both measures can be achieved compared to methods that maximize accuracy or conditional log-likelihood. We have demonstrated that the challenge of the monaural speech separation problem can be effectively approached via structured prediction. Observing that the IBM exhibits highly structured patterns, we have proposed to use a CRF to explicitly model the temporal continuity in the IBM. This linear sequence classifier is further transformed into a nonlinear one by using state and transition feature functions learned from a DNN. Consistent with results from speech perception, we train the proposed DNN-CRF model to maximize a measure that is well correlated with human speech intelligibility in noise. Experimental results show that the proposed system significantly outperforms existing ones and generalizes better to different acoustic environments. Aside from temporal continuity, other ASA principles [5] such as common onset and co-modulation also contribute to the structure in the IBM, and we will investigate these in future work.

Acknowledgements. This research was supported in part by an AFOSR grant (FA9550-12-1-0130), an STTR subcontract from Kuzer, and the Ohio Supercomputer Center.

References

[1] R. Hendriks, R. Heusdens, and J. Jensen, "MMSE based noise PSD tracking with low complexity," in ICASSP, 2010.
[2] P. Scalart and J. Filho, "Speech enhancement based on a priori signal to noise estimation," in ICASSP, 1996.
[3] S. Roweis, "One microphone source separation," in NIPS, 2001.
[4] D. Wang and G. Brown, Eds., Computational Auditory Scene Analysis: Principles, Algorithms and Applications. Hoboken, NJ: Wiley-IEEE Press, 2006.
[5] A. S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound. The MIT Press, 1994.
[6] D. Wang, "On ideal binary mask as the computational goal of auditory scene analysis," in Speech Separation by Humans and Machines, P. Divenyi, Ed. Kluwer Academic, Norwell, MA, 2005, pp. 181–197.
[7] D. Brungart, P. Chang, B. Simpson, and D. Wang, "Isolating the energetic component of speech-on-speech masking with ideal time-frequency segregation," J. Acoust. Soc. Am., vol. 120, pp. 4007–4018, 2006.
[8] M. Anzalone, L. Calandruccio, K. Doherty, and L. Carney, "Determination of the potential benefit of time-frequency gain manipulation," Ear and Hearing, vol. 27, no. 5, pp. 480–492, 2006.
[9] N. Li and P. Loizou, "Factors influencing intelligibility of ideal binary-masked speech: Implications for noise reduction," J. Acoust. Soc. Am., vol. 123, no. 3, pp. 1673–1682, 2008.
[10] G. Kim, Y. Lu, Y. Hu, and P. Loizou, "An algorithm that improves speech intelligibility in noise for normal-hearing listeners," J. Acoust. Soc. Am., vol. 126, pp. 1486–1494, 2009.
[11] K. Han and D. Wang, "An SVM based classification approach to speech separation," in ICASSP, 2011.
[12] Y. Wang, K. Han, and D. Wang, "Exploring monaural features for classification-based speech segregation," IEEE Trans. Audio, Speech, Lang. Process., in press, 2012.
[13] G. Mysore and P. Smaragdis, "A non-negative approach to semi-supervised separation of speech from noise with the use of temporal dynamics," in ICASSP, 2011.
[14] J. Hershey, T. Kristjansson, S. Rennie, and P. Olsen, "Single channel speech separation using factorial dynamics," in NIPS, 2007.
[15] J. Lafferty, A. McCallum, and F. Pereira, "Conditional random fields: Probabilistic models for segmenting and labeling sequence data," in ICML, 2001.
[16] J. Nocedal and S. Wright, Numerical Optimization. Springer Verlag, 1999.
[17] G. Hinton, S. Osindero, and Y. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[18] L. van der Maaten, M. Welling, and L. Saul, "Hidden-unit conditional random fields," in AISTATS, 2011.
[19] L. Morency, A. Quattoni, and T. Darrell, "Latent-dynamic discriminative models for continuous gesture recognition," in CVPR, 2007.
[20] J. Peng, L. Bo, and J. Xu, "Conditional neural fields," in NIPS, 2009.
[21] A. Mohamed, G. Dahl, and G. Hinton, "Deep belief networks for phone recognition," in NIPS Workshop on Speech Recognition and Related Applications, 2009.
[22] T. Do and T. Artieres, "Neural conditional random fields," in AISTATS, 2010.
[23] L. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, vol. 77, no. 2, pp. 257–286, 1989.
[24] IEEE, "IEEE recommended practice for speech quality measurements," IEEE Trans. Audio Electroacoust., vol. 17, pp. 225–246, 1969.
[25] G. Hu and D. Wang, "Monaural speech segregation based on pitch tracking and amplitude modulation," IEEE Trans. Neural Networks, vol. 15, no. 5, pp. 1135–1150, 2004.
[26] I. Tsochantaridis, T. Hofmann, and T. Joachims, "Support vector machine learning for interdependent and structured output spaces," in ICML, 2004.
[27] J. Garofolo, DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus, NIST, 1993.
Fused sparsity and robust estimation for linear models with unknown variance

Yin Chen, University Paris Est, LIGM, 77455 Marne-la-Vallée, France, yin.chen@eleves.enpc.fr
Arnak S. Dalalyan, ENSAE-CREST-GENES, 92245 Malakoff Cedex, France, arnak.dalalyan@ensae.fr

Abstract

In this paper, we develop a novel approach to the problem of learning sparse representations in the context of fused sparsity and an unknown noise level. We propose an algorithm, termed Scaled Fused Dantzig Selector (SFDS), that accomplishes the aforementioned learning task by means of a second-order cone program. Special emphasis is placed on the particular instance of fused sparsity corresponding to learning in the presence of outliers. We establish finite sample risk bounds and carry out an experimental evaluation on both synthetic and real data.

1 Introduction

Consider the classical problem of Gaussian linear regression¹:

Y = Xβ∗ + σ∗ξ, ξ ∼ N_n(0, I_n),  (1)

where Y ∈ R^n and X ∈ R^{n×p} are observed, in the neoclassical setting of a very large dimensional unknown vector β∗. Even if the ambient dimensionality p of β∗ is larger than n, it has proven possible to consistently estimate this vector under the sparsity assumption. The latter states that the number of nonzero elements of β∗, denoted by s and called the intrinsic dimension, is small compared to the sample size n. The most famous methods for estimating sparse vectors, the Lasso and the Dantzig Selector (DS), rely on a convex relaxation of the ℓ0-norm penalty, leading to a convex program that involves the ℓ1-norm of β. More precisely, for a given λ̄ > 0, the Lasso and the DS [26, 4, 5, 3] are defined as

β̂_L = argmin_{β∈R^p} (1/2)∥Y − Xβ∥₂² + λ̄∥β∥₁,  (Lasso)
β̂_DS = argmin ∥β∥₁ subject to ∥X⊤(Y − Xβ)∥∞ ≤ λ̄.  (DS)

The performance of these algorithms depends heavily on the choice of the tuning parameter λ̄. Several empirical and theoretical studies have emphasized that λ̄ should be chosen proportionally to the noise standard deviation σ∗.
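For concreteness, the Lasso program above can be solved by simple proximal-gradient iterations. The sketch below uses ISTA, our illustrative choice of solver (not one discussed in this paper), for a fixed tuning parameter λ̄.

```python
import numpy as np

def lasso_ista(X, Y, lam, n_iter=500):
    """Minimize (1/2) * ||Y - X b||_2^2 + lam * ||b||_1 by ISTA:
    a gradient step on the quadratic part followed by soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - Y) / L    # gradient step on the smooth part
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
    return b

# Sanity check: with X = I the Lasso solution is soft-thresholding of Y
b = lasso_ista(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
```

The soft-thresholding step is exactly the proximal operator of the ℓ1 penalty, which is why the iteration converges to the Lasso minimizer for a step size 1/L.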
Unfortunately, in most applications the latter is unavailable. It is therefore vital to design statistical procedures that estimate β and σ in a joint fashion. This topic has received special attention in recent years, cf. [10] and the references therein, with the introduction of the computationally efficient and theoretically justified σ-adaptive procedures the square-root Lasso [2] (a.k.a. scaled Lasso [24]) and ℓ1-penalized log-likelihood minimization [20]. In the present work, we are interested in the setting where β∗ is not necessarily sparse but, for a known q × p matrix M, the vector Mβ∗ is sparse. We call this setting the “fused sparsity scenario”.

¹We denote by I_n the n × n identity matrix. For a vector v, we use the standard notation ∥v∥₁, ∥v∥₂ and ∥v∥∞ for the ℓ1, ℓ2 and ℓ∞ norms, corresponding respectively to the sum of absolute values, the square root of the sum of squares, and the maximum of the absolute values of the coefficients of v.

The term “fused” sparsity, introduced by [27], originates from the case where Mβ is the discrete derivative of a signal β and the aim is to minimize the total variation; see [12, 19] for a recent overview and some asymptotic results. For general matrices M, tight risk bounds were proved in [14]. We adopt here this framework of a general M and aim at designing a computationally efficient procedure capable of handling the situation of unknown noise level, and for which we are able to provide theoretical guarantees along with empirical evidence of its good performance. This goal is attained by introducing a new procedure, termed the Scaled Fused Dantzig Selector (SFDS), which is closely related to the penalized maximum likelihood estimator but has some advantages in terms of computational complexity. We establish tight risk bounds for the SFDS, which are nearly as strong as those proved for the Lasso and the Dantzig selector in the case of known σ∗.
We also show that robust estimation in linear models can be seen as a particular example of the fused sparsity scenario. Finally, we carry out a “proof of concept” type experimental evaluation to show the potential of our approach. 2 Estimation under fused sparsity with unknown level of noise 2.1 Scaled Fused Dantzig Selector We will only consider the case rank(M) = q ≤ p, which is more relevant for the applications we have in mind (image denoising and robust estimation). Under this condition, one can find a (p−q)×p matrix N such that the augmented matrix M̄ = [M⊤ N⊤]⊤ is of full rank. Let us denote by mj the jth column of the matrix M̄−1, so that M̄−1 = [m1, ..., mp]. We also introduce M̄−1 = [M†, N†], with M† = [m1, ..., mq] ∈ Rp×q and N† = [mq+1, ..., mp] ∈ Rp×(p−q). Given two positive tuning parameters λ and µ, we define the Scaled Fused Dantzig Selector (SFDS) (bβ, bσ) as a solution to the following optimization problem: minimize Σ_{j=1}^q ∥Xmj∥₂ |(Mβ)j| subject to |mj⊤X⊤(Xβ − Y)| ≤ λσ∥Xmj∥₂, j ≤ q; N†⊤X⊤(Xβ − Y) = 0; nµσ² + Y⊤Xβ ≤ ∥Y∥²₂. (P1) This estimator has several attractive properties: (a) it can be efficiently computed even for very large scale problems using a second-order cone program, (b) it is equivariant with respect to scale transformations both in the response Y and in the rows of M and, finally, (c) it is closely related to the penalized maximum likelihood estimator. Let us give further details on these points. 2.2 Relation with the penalized maximum likelihood estimator One natural way to approach the problem of estimating β∗ in our setup is to rely on the standard procedure of penalized log-likelihood minimization. If the noise distribution is Gaussian, ξ ∼ Nn(0, In), the negative log-likelihood (up to irrelevant additive terms) is given by ℓ(Y, X; β, σ) = n log(σ) + ∥Y − Xβ∥²₂/(2σ²).
In the context of large dimension we are concerned with, i.e., when p/n is not small, the maximum likelihood estimator is subject to overfitting and is of very poor quality. If it is plausible to expect that the data can be fitted sufficiently well by a vector β∗ such that, for some matrix M, only a small fraction of the elements of Mβ∗ are nonzero, then one can considerably improve the quality of estimation by adding a penalty term to the log-likelihood. However, the most appealing penalty, the number of nonzero elements of Mβ, leads to a nonconvex optimization problem which cannot be efficiently solved even for moderately large values of p. Instead, convex penalties of the form Σj ωj|(Mβ)j|, where the ωj > 0 are some weights, have proven to provide high accuracy estimates at a relatively low computational cost. This corresponds to defining the estimator (bβPL, bσPL) as the minimizer of the penalized log-likelihood ¯ℓ(Y, X; β, σ) = n log(σ) + ∥Y − Xβ∥²₂/(2σ²) + Σ_{j=1}^q ωj|(Mβ)j|. To ensure scale equivariance, the weights ωj should be chosen inversely proportional to σ: ωj = σ−1¯ωj. This leads to the estimator (bβPL, bσPL) = argmin_{β,σ} n log(σ) + ∥Y − Xβ∥²₂/(2σ²) + Σ_{j=1}^q ¯ωj|(Mβ)j|/σ. Although this problem can be cast [20] as a problem of convex minimization (by making the change of parameters φ = β/σ and ρ = 1/σ), it does not belong to the standard categories of convex problems that can be solved either by linear programming or by second-order cone programming or by semidefinite programming. Furthermore, the smooth part of the objective function is not Lipschitz, which makes it impossible to directly apply most first-order optimization methods developed in recent years. Our goal is to propose a procedure that is close in spirit to the penalized maximum likelihood but has the additional property of being computable by standard algorithms of second-order cone programming.
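One ingredient toward that goal is worth noting: the quadratic constraint nµσ² + Y⊤Xβ ≤ ∥Y∥²₂ appearing in (P1) is second-order-cone representable, being equivalent (for ∥Y∥₂ > 0 and σ ≥ 0) to √(4nµ∥Y∥²₂σ² + (Y⊤Xβ)²) ≤ 2∥Y∥²₂ − Y⊤Xβ. The following small numpy check of this algebraic equivalence is our own illustration, not part of the paper; a, b and c play the roles of nµσ², ∥Y∥²₂ and Y⊤Xβ respectively.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10000):
    b = rng.uniform(0.1, 5.0)      # stands for ||Y||_2^2 > 0
    a = rng.uniform(0.0, 5.0)      # stands for n*mu*sigma^2 >= 0
    c = rng.uniform(-5.0, 5.0)     # stands for Y^T X beta
    quad = a + c <= b                            # constraint of (P1)
    conic = np.sqrt(4*a*b + c**2) <= 2*b - c     # second-order cone form
    assert quad == conic
print("the quadratic and conic constraints agree on all samples")
```

The equivalence is elementary: squaring the conic inequality (its right-hand side is nonnegative whenever either constraint holds) gives 4ab + c² ≤ 4b² − 4bc + c², i.e. a + c ≤ b after dividing by 4b > 0.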
To achieve this goal, as a first step, we remark that it can be useful to introduce a penalty term that depends exclusively on σ and that prevents the estimator of σ∗ from being too large or too small. One can show that the only function (up to a multiplicative constant) that can serve as a penalty without breaking the property of scale equivariance is the logarithmic function. Therefore, we introduce an additional tuning parameter µ > 0 and minimize the criterion nµ log(σ) + ∥Y − Xβ∥²₂/(2σ²) + Σ_{j=1}^q ¯ωj|(Mβ)j|/σ. (2) If we make the change of variables φ1 = Mβ/σ, φ2 = Nβ/σ and ρ = 1/σ, we get a convex function for which the first-order conditions [20] take the form mj⊤X⊤(Y − Xβ) ∈ ¯ωj sign((Mβ)j), (3) N†⊤X⊤(Y − Xβ) = 0, (4) (1/(nµ))(∥Y∥²₂ − Y⊤Xβ) = σ². (5) Thus, any minimizer of (2) should satisfy these conditions. Therefore, to simplify the optimization problem, we propose to replace the minimization of (2) by the minimization of the weighted ℓ1-norm Σj ¯ωj|(Mβ)j| subject to constraints that are as close as possible to (3)-(5). The only problem here is that the constraints (3) and (5) are not convex. The “convexification” of these constraints leads to the procedure described in (P1). As we explain below, the particular choice of the ¯ωj is dictated by the desire to enforce the scale equivariance of the procedure. 2.3 Basic properties A key feature of the SFDS is its scale equivariance. Indeed, one easily checks that if (bβ, bσ) is a solution to (P1) for some inputs X, Y and M, then α(bβ, bσ) is a solution to (P1) for the inputs X, αY and M, whatever the value of α ∈ R. This is equivariance with respect to a scale change in the response Y. Our method is also equivariant with respect to a scale change in M. More precisely, if (bβ, bσ) is a solution to (P1) for some inputs X, Y and M, then (bβ, bσ) is a solution to (P1) for the inputs X, Y and DM, whatever the q × q diagonal matrix D.
The latter property is important since, if we believe that for a given matrix M the vector Mβ∗ is sparse, then this is also the case for the vector DMβ∗, for any diagonal matrix D. Having a procedure whose output is independent of the choice of D is of significant practical importance, since it leads to a solution that is robust with respect to small variations of the problem formulation. The second attractive feature of the SFDS is that it can be computed by solving a convex optimization problem of second-order cone programming (SOCP). Recall that an SOCP is a constrained optimization problem that can be cast as the minimization, with respect to w ∈ Rd, of a linear function a⊤w under second-order conic constraints of the form ∥Aiw + bi∥₂ ≤ ci⊤w + di, where the Ai are ri × d matrices, the bi ∈ Rri and ci ∈ Rd are vectors and the di are real numbers. The problem (P1) does indeed belong to this category, since it can be written as min (u1 + ... + uq) subject to ∥Xmj∥₂|(Mβ)j| ≤ uj and |mj⊤X⊤(Xβ − Y)| ≤ λσ∥Xmj∥₂ for all j = 1, ..., q; N†⊤X⊤(Xβ − Y) = 0; √(4nµ∥Y∥²₂σ² + (Y⊤Xβ)²) ≤ 2∥Y∥²₂ − Y⊤Xβ. Note that all these constraints can be transformed into linear inequalities, except the last one, which is a second-order cone constraint. Problems of this type can be efficiently solved by various standard toolboxes such as SeDuMi [22] or TFOCS [1]. 2.4 Finite sample risk bound To provide theoretical guarantees for our estimator, we impose the by now usual assumption of restricted eigenvalues on a suitably chosen matrix. This assumption, stated in Definition 2.1 below, was introduced and thoroughly discussed by [3]; we also refer the interested reader to [28]. Definition 2.1. We say that an n × q matrix A satisfies the restricted eigenvalue condition RE(s, 1) if κ(s, 1) := min_{|J|≤s} min_{∥δJc∥₁≤∥δJ∥₁} ∥Aδ∥₂/(√n ∥δJ∥₂) > 0.
We say that A satisfies the strong restricted eigenvalue condition RE(s, s, 1) if κ(s, s, 1) := min_{|J|≤s} min_{∥δJc∥₁≤∥δJ∥₁} ∥Aδ∥₂/(√n ∥δJ∪J0∥₂) > 0, where J0 is the subset of {1, ..., q} corresponding to the s largest in absolute value coordinates of δ. For notational convenience, we assume that M is normalized in such a way that the diagonal elements of (1/n)M†⊤X⊤XM† are all equal to 1. This can always be done by multiplying M from the left by a suitably chosen positive definite diagonal matrix. Furthermore, we will repeatedly use the projector2 Π = XN†(N†⊤X⊤XN†)−1N†⊤X⊤ onto the subspace of Rn spanned by the columns of XN†. We denote by r = rank{Π} the rank of this projector, which is typically very small compared to n ∧ p and is always smaller than n ∧ (p − q). In all theoretical results, the matrices X and M are assumed deterministic. Theorem 2.1. Let us fix a tolerance level δ ∈ (0, 1) and define λ = √(2nγ log(q/δ)). Assume that the tuning parameters γ, µ > 0 satisfy µ/γ ≤ 1 − r/n − (2√((n − r) log(1/δ)) + log(1/δ))/n. (6) If the vector Mβ∗ is s-sparse and the matrix (In − Π)XM† satisfies the condition RE(s, 1) with some κ > 0 then, with probability at least 1 − 6δ, it holds that ∥M(bβ − β∗)∥₁ ≤ (4/κ²)(bσ + σ∗) s √(2γ log(q/δ)/n) + (σ∗/κ)√(2s log(1/δ)/n), (7) ∥X(bβ − β∗)∥₂ ≤ 2(bσ + σ∗)√(2γs log(q/δ))/κ + σ∗√(8 log(1/δ) + r). (8) If, in addition, (In − Π)XM† satisfies the condition RE(s, s, 1) with some κ > 0 then, with probability at least 1 − 6δ, we have ∥Mbβ − Mβ∗∥₂ ≤ (4(bσ + σ∗)/κ²)√(2s log(q/δ)/n) + (σ∗/κ)√(2 log(1/δ)/n). (9) Moreover, with probability at least 1 − 7δ, we have bσ ≤ σ∗/µ^{1/2} + λ∥Mβ∗∥₁/(nµ) + s^{1/2}σ∗ log(q/δ)/(nκµ^{1/2}) + (σ∗ + ∥Mβ∗∥₁) µ^{−1/2}√(2 log(1/δ)/n). (10) 2Here and in the sequel, the inverse of a singular matrix is understood as the Moore-Penrose pseudoinverse. Before looking at the consequences of these risk bounds in the particular case of robust estimation, let us present some comments highlighting the claims of Theorem 2.1.
The first comment is about the conditions on the tuning parameters µ and γ. It is interesting to observe that the roles of these parameters are very clearly defined: γ controls the quality of estimating β∗, while µ determines the quality of estimating σ∗. One can note that all the quantities entering the right-hand side of (6) are known, so that it is not hard to choose µ and γ in such a way that they satisfy the conditions of Theorem 2.1. However, in practice, this theoretical choice may be too conservative, in which case it may be a better idea to rely on cross validation. The second remark is about the rates of convergence. According to (8), the rate of estimation measured in the mean prediction loss (1/n)∥X(bβ − β∗)∥²₂ is of the order of s log(q)/n, which is known as the fast, or parametric, rate. The vector Mβ∗ is also estimated at a nearly parametric rate in both the ℓ1- and ℓ2-norms. To the best of our knowledge, this is the first work where fast rates of this kind are derived in the context of fused sparsity with unknown noise level. With some extra work, one can check that if, for instance, γ = 1 and |µ − 1| ≤ cn−1/2 for some constant c, then the estimator bσ also has a risk of the order of sn−1/2. However, the price to pay for being adaptive with respect to the noise level is the presence of ∥Mβ∗∥₁ in the bound on bσ, which deteriorates the quality of estimation in the case of a large signal-to-noise ratio. Even if Theorem 2.1 requires the noise distribution to be Gaussian, the proposed algorithm remains valid in a far broader context, and tight risk bounds can be obtained under more general conditions on the noise distribution. In fact, one can see from the proof that we only need to know confidence sets for some linear and quadratic functionals of ξ. For instance, such confidence sets can be readily obtained in the case of bounded errors ξi using the Bernstein inequality.
It is also worthwhile to mention that the proof of Theorem 2.1 is not a simple adaptation of the arguments used to prove analogous results for ordinary sparsity, but contains some qualitatively novel ideas. More precisely, the cornerstone of the proof of the risk bounds for the Dantzig selector [4, 3, 9] is that the true parameter β∗ is a feasible solution. In our case, this argument cannot be used anymore. Our proposal is then to specify another vector eβ that simultaneously satisfies the following three conditions: Meβ has the same sparsity pattern as Mβ∗, eβ is close to β∗, and eβ lies in the feasible set. A last remark is about the restricted eigenvalue conditions. They are somewhat cumbersome in this abstract setting, but simplify a lot when the concrete example of robust estimation is considered, cf. the next section. At a heuristic level, these conditions require that the columns of XM† not be too strongly correlated. Unfortunately, this condition fails for the matrices appearing in the problem of multiple change-point detection, which is an important particular instance of fused sparsity. There are some workarounds to circumvent this limitation in that particular setting, see [17, 11]. The extension of this kind of argument to the case of unknown σ∗ is an open problem we intend to tackle in the near future. 3 Application to robust estimation This methodology can be applied in the context of robust estimation, i.e., when we observe Y ∈ Rn and A ∈ Rn×k such that the relation Yi = (Aθ∗)i + σ∗ξi, with ξi i.i.d. N(0, 1), holds only for some indexes i ∈ I ⊂ {1, ..., n}, called inliers. The indexes not belonging to I will be referred to as outliers. The setting we are interested in is the one frequently encountered in computer vision [13, 25]: the dimensionality k of θ∗ is small compared to n, but the presence of outliers causes the complete failure of the least squares estimator.
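A small numpy illustration of how this reduction works (our own sketch, not the authors' code): the outliers are encoded by a sparse vector ω∗, and the model becomes Y = Aθ∗ + √n ω∗ + σ∗ξ = Xβ∗ + σ∗ξ with the augmented design X = [√n In, A], so that Mβ∗ = ω∗ is sparse for M = [In, 0n×k].

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, n_out = 60, 3, 5
A = rng.standard_normal((n, k))
theta = np.array([1.0, -2.0, 0.5])
omega = np.zeros(n)
omega[:n_out] = rng.uniform(2, 5, n_out)      # the first 5 observations are outliers

sigma, xi = 0.1, rng.standard_normal(n)
Y = A @ theta + np.sqrt(n) * omega + sigma * xi

# Augmented fused-sparsity formulation: Y = X beta + sigma*xi, with M beta sparse
X = np.hstack([np.sqrt(n) * np.eye(n), A])    # X in R^{n x (n+k)}
beta = np.concatenate([omega, theta])
M = np.hstack([np.eye(n), np.zeros((n, k))])  # M @ beta = omega

assert np.allclose(Y, X @ beta + sigma * xi)
print(np.count_nonzero(M @ beta))             # one nonzero per outlier
```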
In what follows, we use the standard assumption that the matrix (1/n)A⊤A has diagonal entries equal to one. Following the ideas developed in [6, 7, 8, 18, 15], we introduce a new vector ω ∈ Rn that serves to characterize the outliers: if an entry ωi of ω is nonzero, then the corresponding observation Yi is an outlier. This leads to the model Y = Aθ∗ + √n ω∗ + σ∗ξ = Xβ∗ + σ∗ξ, where X = [√n In A] and β∗ = [ω∗; θ∗]. Thus, we have rewritten the problem of robust estimation in linear models as a problem of estimation in high dimension under the fused sparsity scenario. Indeed, we have X ∈ Rn×(n+k) and β∗ ∈ Rn+k, and we are interested in finding an estimator bβ of β∗ for which bω = [In 0n×k]bβ contains as many zeros as possible. This means that we expect the number of outliers to be significantly smaller than the sample size. We are thus in the setting of fused sparsity with M = [In 0n×k]. Setting N = [0k×n Ik], we define the Scaled Robust Dantzig Selector (SRDS) as a solution (bθ, bω, bσ) of the problem: minimize ∥ω∥₁ subject to √n ∥Aθ + √n ω − Y∥∞ ≤ λσ; A⊤(Aθ + √n ω − Y) = 0; nµσ² + Y⊤(Aθ + √n ω) ≤ ∥Y∥²₂. (P2) Once again, this can be recast as an SOCP and solved with great efficiency by standard algorithms. Furthermore, the results of the previous section provide us with strong theoretical guarantees for the SRDS. To state the corresponding result, we will need notation for the largest and the smallest singular values of (1/√n)A, denoted by ν∗ and ν∗ respectively. Theorem 3.1. Let us fix a tolerance level δ ∈ (0, 1) and define λ = √(2nγ log(n/δ)). Assume that the tuning parameters γ, µ > 0 satisfy µ/γ ≤ 1 − k/n − (2/n)(√((n − k) log(1/δ)) + log(1/δ)). Let Π denote the orthogonal projector onto the k-dimensional subspace of Rn spanned by the columns of A.
If the vector ω∗ is s-sparse and the matrix √n(In − Π) satisfies the condition RE(s, 1) with some κ > 0 then, with probability at least 1 − 5δ, it holds that ∥bω − ω∗∥₁ ≤ (4/κ²)(bσ + σ∗) s √(2γ log(n/δ)/n) + (σ∗/κ)√(2s log(1/δ)/n), (11) ∥(In − Π)(bω − ω∗)∥₂ ≤ (2(bσ + σ∗)/κ)√(2s log(n/δ)/n) + σ∗√(2 log(1/δ)/n). (12) If, in addition, √n(In − Π) satisfies the condition RE(s, s, 1) with some κ > 0 then, with probability at least 1 − 6δ, we have ∥bω − ω∗∥₂ ≤ (4(bσ + σ∗)/κ²)√(2s log(n/δ)/n) + (σ∗/κ)√(2 log(1/δ)/n), ∥bθ − θ∗∥₂ ≤ (ν∗/ν∗²)[(4(bσ + σ∗)/κ²)√(2s log(n/δ)/n) + (σ∗/κ)√(2 log(1/δ)/n)] + σ∗(√k + √(2 log(1/δ)))/√n. Moreover, with probability at least 1 − 7δ, the following inequality holds: bσ ≤ σ∗/µ^{1/2} + λ∥ω∗∥₁/(nµ) + s^{1/2}σ∗ log(n/δ)/(nκµ^{1/2}) + (σ∗ + ∥ω∗∥₁) µ^{−1/2}√(2 log(1/δ)/n). (13) All the comments made after Theorem 2.1, especially those concerning the tuning parameters and the rates of convergence, hold true for the risk bounds in Theorem 3.1 as well. Furthermore, the restricted eigenvalue condition in the latter theorem is much simpler and deserves special attention. In particular, one can remark that the failure of RE(s, 1) for √n(In − Π) implies that there is a unit vector δ in Im(A) such that |δ(1)| + ... + |δ(n−s)| ≤ |δ(n−s+1)| + ... + |δ(n)|, where δ(k) stands for the kth smallest (in absolute value) entry of δ. To gain a better understanding of how restrictive this assumption is, let us consider the case where the rows a1, ..., an of A are i.i.d. zero-mean Gaussian vectors. Since δ ∈ Im(A), its coordinates δi are also i.i.d. Gaussian random variables (they can be taken N(0, 1) due to the homogeneity of the inequality we are interested in). The inequality |δ(1)| + ... + |δ(n−s)| ≤ |δ(n−s+1)| + ... + |δ(n)| can be written as (1/n)Σi|δi| ≤ (2/n)(|δ(n−s+1)| + ... + |δ(n)|). While the left-hand side of this inequality tends to E[|δ1|] > 0, the right-hand side is upper-bounded by (2s/n) maxi |δi|, which is of the order of 2s√(log n)/n.
Therefore, if 2s√(log n)/n is small, the condition RE(s, 1) is satisfied. This informal discussion can be made rigorous by studying large deviations of the quantity max_{δ∈Im(A)\{0}} ∥δ∥∞/∥δ∥₁. A simple sufficient condition entailing RE(s, 1) for √n(In − Π) is presented in the following lemma. Lemma 3.2. Let us set ζs(A) = inf_{u∈Sk−1} (1/n)Σ_{i=1}^n |ai⊤u| − 2s∥A∥2,∞/√n. If ζs(A) > 0, then √n(In − Π) satisfies both RE(s, 1) and RE(s, s, 1) with κ(s, 1) ≥ κ(s, s, 1) ≥ ζs(A)/√((ν∗)² + ζs(A)²).

                        SFDS                   Lasso       Square-Root Lasso
                  |bβ−β∗|2   |bσ−σ∗|        |bβ−β∗|2       |bβ−β∗|2   |bσ−σ∗|
(T, p, s∗, σ∗)    Ave  StD   Ave  StD       Ave  StD       Ave  StD   Ave  StD
(200, 400,  2, .5)  0.04 0.03  0.18 0.14    0.07 0.05      0.06 0.04  0.20 0.14
(200, 400,  2,  1)  0.09 0.05  0.42 0.35    0.16 0.11      0.13 0.09  0.46 0.37
(200, 400,  2,  2)  0.23 0.17  0.75 0.55    0.31 0.21      0.25 0.18  0.79 0.56
(200, 400,  5, .5)  0.06 0.01  0.28 0.11    0.13 0.09      0.11 0.06  0.18 0.27
(200, 400,  5,  1)  0.20 0.05  0.56 0.10    0.31 0.04      0.25 0.02  0.66 0.05
(200, 400,  5,  2)  0.34 0.11  0.34 0.21    0.73 0.25      0.47 0.29  0.69 0.70
(200, 400, 10, .5)  0.10 0.01  0.36 0.02    0.15 0.00      0.10 0.01  0.36 0.02
(200, 400, 10,  1)  0.19 0.09  0.27 0.26    0.31 0.04      0.19 0.09  0.27 0.26
(200, 400, 10,  2)  1.90 0.20  4.74 1.01    0.61 0.08      1.80 0.04  3.70 0.48

Table 1: Comparing our procedure SFDS with the (oracle) Lasso and the SqRL on a synthetic dataset. The average values and the standard deviations of the quantities |bβ − β∗|₂ and |bσ − σ∗| over 500 trials are reported. They represent respectively the accuracy in estimating the regression vector and the noise level.

The proof of the lemma can be found in the supplementary material. Note that the problem (P2) boils down to computing (bω, bσ) as a solution to: minimize ∥ω∥₁ subject to √n ∥(In − Π)(√n ω − Y)∥∞ ≤ λσ and nµσ² + √n [(In − Π)Y]⊤ω ≤ ∥(In − Π)Y∥²₂, and then setting bθ = (A⊤A)−1A⊤(Y − √n bω).
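The informal Gaussian argument used above to justify RE(s, 1) — the average of the |δi| concentrates near E|δ1| = √(2/π) ≈ 0.80 while the normalized sum of the s largest entries is of order 2s√(log n)/n — is easy to check numerically; the following snippet is our own illustration, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 10000, 20
d = np.abs(rng.standard_normal(n))       # |delta_i| for i.i.d. N(0, 1) coordinates
lhs = d.mean()                           # ~ E|delta_1| = sqrt(2/pi) ~ 0.80
rhs = 2.0 * np.sort(d)[-s:].sum() / n    # <= 2 s max_i |delta_i| / n, tiny
print(lhs, rhs)  # lhs stays bounded away from 0, rhs is close to 0
```

So for s much smaller than n/√(log n) the inequality characterizing the failure of RE(s, 1) is very unlikely to hold in this Gaussian model.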
4 Experiments For the empirical evaluation we use a synthetic dataset with a randomly drawn Gaussian design matrix X and the real-world dataset fountain-P11, on which we apply our methodology for computing the fundamental matrices between consecutive images. 4.1 Comparative evaluation on synthetic data We randomly generated an n × p matrix X with independent entries distributed according to the standard normal distribution. Then we chose a vector β∗ ∈ Rp that has exactly s nonzero elements, all equal to one. The indexes of these elements were chosen at random. Finally, the response Y ∈ Rn was computed by adding a random noise σ∗Nn(0, In) to the signal Xβ∗. Once Y and X are available, we computed three estimators of the parameters using the standard sparsity penalization (in order to be able to compare our approach to the others): the SFDS, the Lasso and the square-root Lasso (SqRL). We used the “universal” tuning parameters for all these methods: (λ, µ) = (√(2n log p), 1) for the SFDS, λ = √(2 log p) for the SqRL and λ = σ∗√(2 log p) for the Lasso. Note that the latter is not really an estimator but rather an oracle, since it exploits knowledge of the true σ∗. This is why the accuracy in estimating σ∗ is not reported for it in Table 1. To reduce the well-known bias toward zero [4, 23], we performed a post-processing step for all three procedures. It consisted in computing least squares estimators after removing all the covariates corresponding to vanishing coefficients of the estimator of β∗. The results summarized in Table 1 show that the SFDS is competitive with the state-of-the-art methods and, a bit surprisingly, is sometimes more accurate than the oracle Lasso using the true variance in the penalization. We stress, however, that the SFDS is designed for being applied in, and has theoretical guarantees for, the broader setting of fused sparsity.
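The post-processing step just described (a least-squares refit on the selected support) can be sketched in a few lines of numpy; this is our own illustration, and `b_init` stands for the output of any of the three sparse procedures (here replaced by an artificially shrunk estimate).

```python
import numpy as np

def refit_on_support(X, Y, b_init, tol=1e-8):
    """Debiasing: ordinary least squares restricted to the support of b_init."""
    support = np.flatnonzero(np.abs(b_init) > tol)
    b = np.zeros(X.shape[1])
    if support.size:
        b[support], *_ = np.linalg.lstsq(X[:, support], Y, rcond=None)
    return b

rng = np.random.default_rng(3)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_star = np.zeros(p); beta_star[:3] = [1.0, -2.0, 0.5]
Y = X @ beta_star + 0.1 * rng.standard_normal(n)

b_biased = 0.8 * beta_star        # stand-in for a shrunk (biased-toward-zero) estimate
b_refit = refit_on_support(X, Y, b_biased)
print(np.linalg.norm(b_refit - beta_star))  # the refit removes the shrinkage bias
```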
4.2 Robust estimation of the fundamental matrix To provide a qualitative evaluation of the proposed methodology on real data, we applied the SRDS to the problem of fundamental matrix estimation in multiple-view geometry, which constitutes an essential step in almost all pipelines of 3D reconstruction [13, 25]. In short, if we have two images I and I′ representing the same 3D scene, then there is a 3×3 matrix F, called the fundamental matrix, such that a point x = (x, y) in I matches with the point x′ = (x′, y′) in I′ only if [x, y, 1] F [x′, y′, 1]⊤ = 0. Clearly, F is defined up to a scale factor: if F33 ≠ 0, one can assume that F33 = 1. Thus, each pair x ↔ x′ of matching points in images I and I′ yields a linear constraint on the eight remaining coefficients of F. Because of quantization and the presence of noise in images, these linear relations are satisfied up to some error. Thus, estimation of F from a family of matching points {xi ↔ x′i; i = 1, ..., n} is a problem of linear regression. Typically, matches are computed by comparing local descriptors (such as SIFT [16]) and, for images of reasonable resolution, hundreds of matching points are found. The computation of the fundamental matrix would not be a problem in this context of large sample size / low dimension, if the matching algorithms were perfectly correct.

Pair:          1     2     3     4     5     6     7     8     9     10    Average
bσ            0.13  0.13  0.13  0.17  0.16  0.17  0.20  0.18  0.17  0.11  0.15
∥bω∥0         218   80    236   90    198   309   17    31    207   8     139.4
100∥bω∥0/n    1.3   0.46  1.37  0.52  1.13  1.84  0.12  0.19  1.49  1.02  0.94

Table 2: Quantitative results on the fountain dataset.

Figure 1: Qualitative results on the fountain dataset. Top left: the values of bωi for the first pair of images. There is a clear separation between outliers and inliers. Top right: the first pair of images and the matches classified as wrong by SRDS. Bottom: the eleven images of the dataset.

3Available at http://cvlab.epfl.ch/˜strecha/multiview/denseMVS.html
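The linear-regression formulation just described is easy to make concrete: with F33 = 1, each match contributes one row [x x′, x y′, x, y x′, y y′, y, x′, y′] with target −1 for the eight unknown entries of F. A minimal numpy sketch of the noiseless, outlier-free case (our own illustration, without the robust part):

```python
import numpy as np

rng = np.random.default_rng(4)
F = np.array([[0.5, -1.0,  0.2],
              [1.5,  0.3, -0.7],
              [-0.4, 0.8,  1.0]])   # a generic fundamental-matrix stand-in, F33 = 1

# Generate matches satisfying [x, y, 1] F [x', y', 1]^T = 0 exactly:
# for each x, pick x' on the epipolar line a*x' + b*y' + c = 0, (a,b,c) = [x,y,1]F.
pts, pts2 = [], []
while len(pts) < 30:
    x, y = rng.uniform(-1, 1, 2)
    a, b, c = np.array([x, y, 1.0]) @ F
    if abs(b) < 0.2:                 # skip nearly degenerate epipolar lines
        continue
    xp = rng.uniform(-1, 1)
    yp = -(a * xp + c) / b
    pts.append((x, y)); pts2.append((xp, yp))

# Least-squares estimation of the 8 unknown entries (F33 fixed to 1).
rows = [[x*xp, x*yp, x, y*xp, y*yp, y, xp, yp]
        for (x, y), (xp, yp) in zip(pts, pts2)]
f, *_ = np.linalg.lstsq(np.array(rows), -np.ones(len(rows)), rcond=None)
F_hat = np.append(f, 1.0).reshape(3, 3)
print(np.abs(F_hat - F).max())       # essentially zero in this noiseless setting
```

With real, noisy matches (some of them outliers), plain least squares on these rows is exactly the estimator that breaks down, which is what motivates the SRDS.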
However, due to noise, repetitive structures and other factors, a non-negligible fraction of the detected matches are wrong (outliers). The elimination of these outliers and the robust estimation of F are crucial steps for performing 3D reconstruction. Here, we apply the SRDS to the problem of estimating F for the 10 pairs of consecutive images provided by the fountain dataset [21]; the 11 images are shown at the bottom of Fig. 1. Using SIFT descriptors, we found more than 17,000 point matches in most of the 10 pairs of images we are considering. The CPU time for computing each matrix using the SeDuMi solver [22] was about 7 seconds, despite such a large dimensionality. The number of outliers and the estimated noise level for each pair of images are reported in Table 2. We also show in Fig. 1 the 218 outliers for the first pair of images. They are all indeed wrong correspondences, even those which correspond to the windows (this is due to the repetitive structure of the windows). 5 Conclusion and perspectives We have presented a new procedure, the SFDS, for the problem of learning linear models with unknown noise level under the fused sparsity scenario. We showed that this procedure is inspired by the penalized maximum likelihood but has the advantage of being computable by solving a second-order cone program. We established tight, nonasymptotic theoretical guarantees for the SFDS, with special attention paid to robust estimation in linear models. The experiments we have carried out are very promising and support our theoretical results. In the future, we intend to generalize the theoretical study of the performance of the SFDS to the case of non-Gaussian errors ξi, as well as to investigate its power in variable selection. The extension to the case where the number of rows of M is larger than the number of columns is another interesting topic for future research. References [1] Stephen Becker, Emmanuel Candès, and Michael Grant.
Templates for convex cone problems with applications to sparse signal recovery. Math. Program. Comput., 3(3):165–218, 2011. [2] A. Belloni, Victor Chernozhukov, and L. Wang. Square-root lasso: Pivotal recovery of sparse signals via conic programming. Biometrika, to appear, 2012. [3] Peter J. Bickel, Ya’acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. Ann. Statist., 37(4):1705–1732, 2009. [4] Emmanuel Candès and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. Ann. Statist., 35(6):2313–2351, 2007. [5] Emmanuel J. Candès. The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, 346(9-10):589–592, 2008. [6] Emmanuel J. Candès and Paige A. Randall. Highly robust error correction by convex programming. IEEE Trans. Inform. Theory, 54(7):2829–2840, 2008. [7] Arnak S. Dalalyan and Renaud Keriven. L1-penalized robust estimation for a class of inverse problems arising in multiview geometry. In NIPS, pages 441–449, 2009. [8] Arnak S. Dalalyan and Renaud Keriven. Robust estimation for an inverse problem arising in multiview geometry. J. Math. Imaging Vision, 43(1):10–23, 2012. [9] Eric Gautier and Alexandre Tsybakov. High-dimensional instrumental variables regression and confidence sets. Technical Report arXiv:1105.2454, September 2011. [10] Christophe Giraud, Sylvie Huet, and Nicolas Verzelen. High-dimensional regression with unknown variance. Submitted, arXiv:1109.5587v2 [math.ST]. [11] Z. Harchaoui and C. Lévy-Leduc. Multiple change-point estimation with a total variation penalty. J. Amer. Statist. Assoc., 105(492):1480–1493, 2010. [12] Zaïd Harchaoui and Céline Lévy-Leduc. Catching change-points with lasso. In John Platt, Daphne Koller, Yoram Singer, and Sam Roweis, editors, NIPS. Curran Associates, Inc., 2007. [13] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, June 2004.
[14] A. Iouditski, F. Kilinc Karzan, A. S. Nemirovski, and B. T. Polyak. On the accuracy of l1-filtering of signals with block-sparse structure. In NIPS 24, pages 1260–1268, 2011. [15] S. Lambert-Lacroix and L. Zwald. Robust regression through the Huber’s criterion and adaptive lasso penalty. Electron. J. Stat., 5:1015–1053, 2011. [16] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004. [17] E. Mammen and S. van de Geer. Locally adaptive regression splines. Ann. Statist., 25(1):387–413, 1997. [18] Nam H. Nguyen, Nasser M. Nasrabadi, and Trac D. Tran. Robust lasso with missing and grossly corrupted observations. In J. Shawe-Taylor, R. S. Zemel, P. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 1881–1889, 2011. [19] A. Rinaldo. Properties and refinements of the fused lasso. Ann. Statist., 37(5B):2922–2952, 2009. [20] Nicolas Städler, Peter Bühlmann, and Sara van de Geer. ℓ1-penalization for mixture regression models. TEST, 19(2):209–256, 2010. [21] C. Strecha, W. von Hansen, L. Van Gool, P. Fua, and U. Thoennessen. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In Conference on Computer Vision and Pattern Recognition, pages 1–8, 2009. [22] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw., 11/12(1-4):625–653, 1999. [23] T. Sun and C.-H. Zhang. Comments on: ℓ1-penalization for mixture regression models. TEST, 19(2):270–275, 2010. [24] T. Sun and C.-H. Zhang. Scaled sparse linear regression. arXiv:1104.4595, 2011. [25] R. Szeliski. Computer Vision: Algorithms and Applications. Texts in Computer Science. Springer, 2010. [26] Robert Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B, 58(1):267–288, 1996.
[27] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B Stat. Methodol., 67(1):91–108, 2005. [28] Sara A. van de Geer and Peter Bühlmann. On the conditions used to prove oracle results for the Lasso. Electron. J. Stat., 3:1360–1392, 2009.
On the Sample Complexity of Robust PCA Matthew Coudron Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Cambridge, MA 02139 mcoudron@mit.edu Gilad Lerman School of Mathematics University of Minnesota Minneapolis, MN 55455 lerman@umn.edu Abstract We estimate the rate of convergence and sample complexity of a recent robust estimator for a generalized version of the inverse covariance matrix. This estimator is used in a convex algorithm for robust subspace recovery (i.e., robust PCA). Our model assumes a sub-Gaussian underlying distribution and an i.i.d. sample from it. Our main result shows with high probability that the norm of the difference between the generalized inverse covariance of the underlying distribution and its estimator from an i.i.d. sample of size N is of order O(N −0.5+ϵ) for arbitrarily small ϵ > 0 (affecting the probabilistic estimate); this rate of convergence is close to the one of direct covariance estimation, i.e., O(N −0.5). Our precise probabilistic estimate implies for some natural settings that the sample complexity of the generalized inverse covariance estimation when using the Frobenius norm is O(D2+δ) for arbitrarily small δ > 0 (whereas the sample complexity of direct covariance estimation with Frobenius norm is O(D2)). These results provide similar rates of convergence and sample complexity for the corresponding robust subspace recovery algorithm. To the best of our knowledge, this is the only work analyzing the sample complexity of any robust PCA algorithm. 1 Introduction A fundamental problem in probability and statistics is to determine with overwhelming probability the rate of convergence of the empirical covariance (or inverse covariance) of an i.i.d. sample of increasing size N to the covariance (or inverse covariance) of the underlying random variable (see e.g., [17, 3] and references therein). 
Clearly, this problem is also closely related to estimating with high probability the sample complexity, that is, the number of samples required to obtain a given error of approximation ϵ. In the case of a compactly supported (or, even more generally, sub-Gaussian) underlying distribution, it is a classical exercise to show that this rate of convergence is O(N^{−0.5}) (with a comparability constant depending on properties of µ, in particular D, as well as on the threshold probability, see e.g., [17, Proposition 2.1]). The precise estimate for this rate of convergence implies that the sample complexity of covariance estimation is O(D) when using the spectral norm and O(D²) when using the Frobenius norm. The rate of convergence and sample complexity of PCA immediately follow from these estimates (see e.g., [15]). While such estimates are theoretically fundamental, they can be completely useless in the presence of outliers. That is, direct covariance or inverse covariance estimation and its resulting PCA are very sensitive to outliers. Many robust versions of covariance estimation, PCA and dimension reduction have been developed in the last three decades (see e.g., the standard textbooks [8, 10, 14]). In the last few years, new convex algorithms with provable guarantees have been suggested for robust subspace recovery and its corresponding dimension reduction [5, 4, 19, 20, 11, 7, 2, 1, 21, 9]. Most of these works minimize a mixture of an ℓ1-type norm (depending on the application) and the nuclear norm. Their algorithmic complexity is not as competitive as PCA and their sample complexity is hard to estimate due to the problem of extending the nuclear norm out-of-sample. On the other hand, Zhang and Lerman [21] have proposed a novel M-estimator for robust PCA, which is based on a convex relaxation of the sum of Euclidean distances to subspaces (which is originally minimized over the non-convex Grassmannian).
This procedure suggests an estimator for a generalized version of the inverse covariance matrix and uses it to robustly recover an underlying low-dimensional subspace. This idea was extended in [9] to obtain an even more accurate method for subspace recovery, though it does not estimate the generalized inverse covariance matrix (in particular, it has no analogous notion of singular values or their inverses). The algorithmic complexity of the algorithms solving the convex formulations of [21] and [9] is comparable to that of full PCA. Here we show that in the setting of sub-Gaussian distributions the sample complexity of the robust PCA algorithm in [21] (or its generalized inverse covariance estimation) is close to that of PCA (or of sample covariance estimation). Our analysis immediately extends to the robust PCA algorithm of [9].
1.1 The Generalized Inverse Covariance and its Corresponding Robust PCA
Zhang and Lerman [21] formed the set
H := {Q ∈ R^{D×D} : Q = Q^T, tr(Q) = 1},   (1.1)
as a convex relaxation of the set of orthoprojectors (from R^D to R^D), and defined the following energy function on H (with respect to a data set X in R^D):
F_X(Q) := Σ_{x∈X} ∥Qx∥,   (1.2)
where ∥·∥ denotes the Euclidean norm of a vector in R^D. Their generalized empirical inverse covariance is
Q̂_X = argmin_{Q∈H} F_X(Q).   (1.3)
They showed that when the term ∥Qx∥ in (1.2) is replaced by ∥Qx∥^2 and Sp{X} = R^D, the minimization (1.3) results in a scaled version of the empirical inverse covariance matrix. It is thus clear why we can refer to Q̂_X as a generalized empirical inverse covariance (or an ℓ1-type version of it). We describe the absolute, i.e., non-empirical, notion of the generalized inverse covariance matrix in §1.2. Zhang and Lerman [21] did not emphasize the empirical generalized inverse covariance itself, but rather the robust estimate of the underlying low-dimensional subspace by the span of the bottom eigenvectors of this matrix.
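To make the minimization (1.3) concrete, the sketch below minimizes F_X over H by projected subgradient descent. This is a simple illustrative solver under our own choices (step-size rule, iteration count, starting point), not the algorithm of [21].

```python
import numpy as np

def F(Q, X):
    # F_X(Q) = sum over data points x (rows of X) of ||Q x||   (eq. 1.2)
    return np.linalg.norm(X @ Q.T, axis=1).sum()

def project_H(Q):
    # Frobenius projection onto H = {Q : Q = Q^T, tr(Q) = 1}   (eq. 1.1)
    Q = (Q + Q.T) / 2
    D = Q.shape[0]
    return Q - np.eye(D) * (np.trace(Q) - 1.0) / D

def minimize_F(X, n_iter=2000, c=0.05):
    # projected subgradient descent with a diminishing, normalized step
    D = X.shape[1]
    Q = np.eye(D) / D                         # a feasible starting point in H
    best_Q, best_val = Q, F(Q, X)
    for t in range(n_iter):
        QX = X @ Q.T
        norms = np.maximum(np.linalg.norm(QX, axis=1, keepdims=True), 1e-12)
        G = (QX / norms).T @ X                # subgradient: sum_x (Qx) x^T / ||Qx||
        G = (G + G.T) / 2                     # restrict to symmetric matrices
        step = c / (np.sqrt(t + 1.0) * max(np.linalg.norm(G), 1e-12))
        Q = project_H(Q - step * G)
        val = F(Q, X)
        if val < best_val:                    # subgradient steps need not be monotone
            best_Q, best_val = Q, val
    return best_Q
```

On anisotropic data the minimizer down-weights the high-variance directions, in line with the interpretation of Q̂_X as a generalized inverse covariance.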
They rigorously proved that such a procedure robustly recovers the underlying subspace under some conditions.
1.2 Main Result of this Paper
We focus on computing the sample complexity of the estimator Q̂_X. This problem is practically equivalent to estimating the rate of convergence of Q̂_X, for an i.i.d. sample X, to the "generalized inverse covariance" of the underlying distribution µ. We may assume that µ is a sub-Gaussian probability measure on R^D (see §2.1 and the extended version of this paper). However, in order to easily express the dependence of our probabilistic estimates on properties of the measure µ, we assume for simplicity that µ is compactly supported and denote by R_µ the minimal radius among all balls containing the support of µ, that is, R_µ = min{r > 0 : supp(µ) ⊆ B(0, r)}, where B(0, r) is the ball of radius r around the origin 0. We further assume that for some 0 < γ < 1, µ satisfies the following condition, which we refer to as the "two-subspaces criterion" (for γ): for any pair L_1, L_2 of (D−1)-dimensional subspaces of R^D,
µ((L_1 ∪ L_2)^c) ≥ γ.   (1.4)
We note that if µ satisfies the two-subspaces criterion for any particular 0 < γ < 1, then its support cannot be a union of two hyperplanes of R^D. The use of this assumption is clarified below in §3.2, though it may be possible to weaken it. We first formulate the generalized inverse covariance of the underlying measure as follows:
Q̂ = argmin_{Q∈H} F(Q),   (1.5)
where
F(Q) = ∫ ∥Qx∥ dµ(x).   (1.6)
Let {x_i}_{i=1}^∞ be a sequence of i.i.d. random variables sampled from µ (i.e., each variable has distribution µ). Let X_N := {x_i}_{i=1}^N and denote
Q̂_N := Q̂_{X_N} and F_N := F_{X_N}.   (1.7)
Our main result shows with high probability that Q̂ and Q̂_N are uniquely defined (which we denote by u.d. from now on) and that {Q̂_N}_{N∈N} converges to Q̂ at the rate specified below. It uses the common notation a ∨ b := max(a, b). We explain its implications in §2. Theorem 1.1.
If µ is a compactly supported distribution satisfying the two-subspaces criterion for γ > 0, then for any ϵ > 0 there exists a constant α_0 ≡ α_0(µ, D, ϵ) > 0 such that for any N > 2(D−1) the following estimate holds:
P( Q̂ and Q̂_N are u.d. and ∥Q̂ − Q̂_N∥_F ≤ (2/α_0) N^{−1/2+ϵ} ) ≥ 1 − C_0 N^{D^2} exp(−N^{2ϵ}/(D·R_µ^2)) − 2 \binom{N}{D−1}^2 (1−γ)^{N−2(D−1)},   (1.8)
where
C_0 ≡ C_0(α_0, D) := 4·((4α_0) ∨ 2)·( 10D·(2α_0 + 4((4α_0) ∨ 2)R_µ) / (1 − 2α_0/((4α_0) ∨ 2)) )^{D(D+1)/2}.   (1.9)
Intuitively, α_0 represents a lower bound on the directional second derivatives of F. Therefore α_0 should affect the sample complexity, because the number of random samples needed to approximate a minimum of F depends on how sharply F increases around its minimum. It is an interesting and important open problem to find lower bounds on α_0 for general µ.
2 Implications and Extensions of the Main Result
2.1 Generalization to Sub-Gaussian Measures
We can remove the assumption that the support of µ is bounded (with radius R_µ) and assume instead that µ is sub-Gaussian. In this case, instead of Hoeffding's inequality, we apply [18, Proposition 5.10] with a_i = 1 for all 1 ≤ i ≤ n. When formulating the corresponding inequality, one may note that sup_{p≥1} p^{−1/2} (E_µ|x|^p)^{1/p} (where x represents a random variable sampled from µ) can be regarded as a substitute for R_µ (see [21] for more details of a similar analysis).
2.2 Sample Complexity
The notion of sample complexity arises in the framework of Probably-Approximately-Correct Learning of Valiant [16]. Generally speaking, the sample complexity in our setting is the minimum number of samples N required, as a function of the dimension D, to achieve a good estimate of Q̂ with high probability. We recall that in this paper we use the Frobenius norm for the estimation error. The following calculation will show that under some assumptions on µ it suffices to use N = Ω(D^η) samples for any η > 2 (we recall that f(x) = Ω(g(x)) as x → ∞ if and only if g(x) = O(f(x))).
In our analysis we will have to assume that γ is a fixed constant and that α_0 scales as 1/√D. These assumptions place additional restrictions on the measure µ, which we expect to be reasonable in practice, as we clarify later. We further assume that R_µ = O(D^{−0.5}) and also explain later why this makes sense in the setting of robust subspace recovery. To bound the sample complexity we set C_1 := 4·((4α_0) ∨ 2) and C_2 := 10·(2α_0 + 4((4α_0) ∨ 2)R_µ)/(1 − 2α_0/((4α_0) ∨ 2)), so that C_0 ≤ C_1·(C_2·D)^{D^2} (see (1.9)). Applying this bound and (1.8), we obtain that if η > 2 is fixed, 1/η < ϵ < 1/2 and N ≥ D^η, then
P( Q̂ and Q̂_N are u.d. and ∥Q̂ − Q̂_N∥_F ≤ (2/α_0) N^{−1/2+ϵ} )   (2.1)
≥ 1 − C_1 (C_2·D·N)^{D^2} exp(−N^{2ϵ}/(D·R_µ^2)) − 2 N^{2(D−1)} (1−γ)^{N−2(D−1)}
≥ 1 − C_1 exp( log(C_2·D^{1+η}) D^2 − D^{2ηϵ} ) − 2 exp( 2η(D−1) log(D) + log(1−γ)(D^η − 2(D−1)) ).
Since ϵ > 1/η, the first term in the RHS of (2.1) decays exponentially as a function of D (or, equivalently, as a function of N ≥ D^η). Similarly, since 0 < γ < 1 and η > 1, the second term in the RHS of (2.1) decays exponentially as a function of D. Furthermore, since ϵ < 1/2, it follows that the error term for the minimizer, i.e., N^{−1/2+ϵ} ≤ D^{η(ϵ−1/2)}, decays polynomially in D. Thus, in order to achieve low estimation error with high probability it is sufficient to take N = Ω(D^η) samples for any η > 2. The exact guarantees on the estimation error and the probability of error can be manipulated by changing the constant hidden in the Ω term. We would like to point out the expected tradeoff between the sample complexity and the rate of convergence. If ϵ approaches 0, then the rate of convergence becomes optimal but the sample complexity deteriorates. On the other hand, if ϵ approaches 0.5, then the sample complexity becomes optimal, but the rate of convergence deteriorates. To motivate our assumptions on R_µ, γ and α_0, we recall the needle-haystack and syringe-haystack models of [9] as a prototype for robust subspace recovery.
These models assume a mixture of outlier and inlier components. The distribution of the outlier component is the normal N(0, (σ_out^2/D) I_D), and the distribution of the inlier component is a mixture of N(0, (σ_in^2/d) P_L) (where L is a d-dimensional subspace) and N(0, (σ_in^2/(CD)) I_D), where C ≫ 1 (the latter component has coefficient zero in the needle-haystack model). The underlying distribution of the syringe-haystack (or needle-haystack) model is not compactly supported, but it is clearly sub-Gaussian (as discussed in §2.1) and its standard deviation is of order O(D^{−0.5}). We also note that γ here is the coefficient of the outlier component in the needle-haystack model, which we denote by ν_0. Indeed, the only non-zero measure that can be contained in a (D−1)-dimensional subspace is the measure associated with N(0, (σ_in^2/d) P_L), and that has total weight at most 1 − ν_0. It is also possible to verify explicitly that α_0 is lower bounded by 1/√D in this case (though our argument is currently rather lengthy and will appear in the extended version of this paper).
2.3 From Generalized Covariances to Subspace Recovery
We recall that the underlying d-dimensional subspace can be recovered from the bottom d eigenvectors of Q̂_N. Therefore, the rate of convergence of the subspace recovery (and its corresponding sample complexity) follows directly from Theorem 1.1 and the Davis-Kahan Theorem [6]. To formulate this, we assume here for simplicity that Q̂ and Q̂_N are u.d. (recall Theorems 3.1 and 3.2). Theorem 2.1. If d < D, ϵ > 0, α_0 ≡ α_0(µ, D, ϵ) is the positive constant guaranteed by Theorem 1.1, Q̂ and Q̂_N are u.d., L̂_d and L̂_{d,N} are the subspaces spanned by the bottom d eigenvectors (i.e., those with the d lowest eigenvalues) of Q̂ and Q̂_N respectively, P_{L̂_d} and P_{L̂_{d,N}} are the orthoprojectors onto these subspaces, and ν_{D−d} is the (D−d)th eigengap of Q̂, then
P( ∥P_{L̂_d} − P_{L̂_{d,N}}∥_F ≤ (4/(α_0·ν_{D−d})) N^{−1/2+ϵ} ) ≥ 1 − C_0 N^{D^2} exp(−N^{2ϵ}/(D·R_µ^2)).
(2.2)
2.4 Nontrivial Robustness to Noise
We remark that (2.2) implies nontrivial robustness to noise for robust PCA. Indeed, assume for example an underlying d-subspace L*_d and a mixture distribution (representing noisy inlier/outlier components) whose inlier component is symmetric around L*_d with a relatively high level of variance in the orthogonal complement of L̂_d, and whose outlier component is spherically symmetric with a sufficiently small mixture coefficient. One can show that in this case L̂_d = L*_d. Combining this observation with (2.2), we can verify robustness to nontrivial noise when recovering L*_d from i.i.d. samples of such distributions.
2.5 Convergence Rate of the REAPER Estimator
The REAPER and S-REAPER algorithms [9] are variants of the robust PCA algorithm of [21]. The objective of the REAPER algorithm can be formulated as minimizing the energy F_X(Q) over the set
G := {Q ∈ R^{D×D} : Q = Q^T, tr(Q) = D − d and Q ≼ I},   (2.3)
where ≼ denotes the semi-definite order. The d-dimensional subspace can then be recovered from the bottom d eigenvectors of Q (in [9] this minimization is formulated with P = I − Q, whose top d eigenvectors are found). The rate of convergence of the minimizer of F_X(Q) over G to the minimizer of F(Q) over G is similar to that in Theorem 1.1. The proof of Theorem 1.1 must be modified to deal with the boundary of the set G. If the minimizer Q̂ lies in the interior of G, then the proof is the same. If Q̂ is on the boundary of G, we must only consider the directional derivatives that point towards the interior of G or are tangent to the boundary. Other than that, the proof is the same.
2.6 Convergence Rate with an Additional Sparsity Term
Rothman et al. [13] and Ravikumar et al. [12] have analyzed an estimator for sparse inverse covariance.
This estimator minimizes over all Q ≻ 0 the energy
⟨Q, Σ̂_N⟩_F − log det(Q) + λ_N ∥Q∥_{ℓ1},   (2.4)
where Σ̂_N is the empirical covariance matrix based on a sample of size N, ⟨·, ·⟩_F is the Frobenius inner product (i.e., the sum of elementwise products) and ∥Q∥_{ℓ1} = Σ_{i,j=1}^D |Q_{i,j}|. Zhang and Zou [22] have suggested a similar minimization, which replaces the first two terms in (2.4) (corresponding to λ_N = 0) with
⟨Q^2, Σ̂_N⟩_F / 2 − tr(Q).   (2.5)
Indeed, the minimizers of (2.4) when λ_N = 0 and of (2.5) are both equal to Σ̂_N^{−1} (assuming that Sp({x_i}_{i=1}^N) = R^D, so that the inverse empirical covariance exists). Using the definition of Σ̂_N, i.e., Σ̂_N = Σ_{i=1}^N x_i x_i^T / N, we note that
⟨Q^2, Σ̂_N⟩_F = (1/N) Σ_{i=1}^N ∥Q x_i∥^2.   (2.6)
Therefore, the minimizer of (2.5) over all Q ≻ 0 is the same, up to a multiplicative constant, as the minimizer of the RHS of (2.6) over all Q ≻ 0 with tr(Q) = 1. Teng Zhang suggested to us replacing the RHS of (2.6) with F_X and modifying the original problem of (2.4) (or, more precisely, its variant in [22]) to the minimization over all Q ∈ H of the energy
F_X(Q) + λ_N ∥Q∥_{ℓ1}.   (2.7)
The second term enforces sparseness, and we expect the first term to enforce robustness. By choosing λ_N = O(N^{−0.5}) we can obtain rates of convergence for the minimizer of (2.7) similar to those for λ_N = 0 (see the extended version of this paper), namely, a rate of convergence of order O(N^{−0.5+ϵ}) for any ϵ > 0. The dependence on D is also the same; that is, the minimal sample size when using the Frobenius norm is O(D^η) for any η > 2. Nevertheless, Ravikumar et al. [12] show that under some assumptions (see e.g., Assumption 1 in [12]) the minimal sample size is O(log(D) r^2), where r is the maximum node degree of the graph whose edges are the nonzero entries of the inverse covariance. It would be interesting to generalize such estimates to the minimization of (2.7).
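The ℓ1 term in (2.7) is typically handled by entrywise soft thresholding (the proximal operator of the ℓ1 norm). The step below is only a heuristic sketch: the function names are ours, and re-projecting onto H after the shrinkage is an approximation we introduce for illustration, not part of this paper's analysis.

```python
import numpy as np

def soft_threshold(Q, t):
    # prox of t * ||.||_{l1}: shrink every entry toward zero by t
    return np.sign(Q) * np.maximum(np.abs(Q) - t, 0.0)

def prox_step(Q, X, lam, step):
    # one proximal-subgradient step for F_X(Q) + lam * ||Q||_{l1}
    QX = X @ Q.T
    norms = np.maximum(np.linalg.norm(QX, axis=1, keepdims=True), 1e-12)
    G = (QX / norms).T @ X                 # subgradient of F_X at Q
    G = (G + G.T) / 2
    Q = soft_threshold(Q - step * G, step * lam)
    Q = (Q + Q.T) / 2                      # heuristic: return to H afterwards
    D = Q.shape[0]
    return Q - np.eye(D) * (np.trace(Q) - 1.0) / D
```

With λ_N = O(N^{−0.5}), iterating such steps shrinks small entries exactly to zero, which is what makes the resulting estimate sparse and interpretable.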
3 Overview of the Proof of Theorem 1.1
3.1 Structure of the Proof
We first discuss in §3.2 conditions for uniqueness of Q̂ and Q̂_N (with high probability). In §3.3 and §3.4 we briefly explain the two basic components of the proof of Theorem 1.1. The first is that ∥Q̂ − Q̂_N∥_F can be controlled from above by differences of directional derivatives of F. The second is that the rate of convergence of the derivatives of {F_N}_{N=1}^∞ to the derivative of F is easily obtained by Hoeffding's inequality. In §3.5 we gain some intuition for the validity of Theorem 1.1 in view of these two components and also explain why they are not sufficient to conclude the proof. In §3.6 we describe the construction of "nets" of increasing precision; using these nets we conclude the proof of Theorem 1.1 in §3.7. Throughout this section we only provide the global ideas of the proof, whereas in the extended version of this paper we present the details.
3.2 Uniqueness of the Minimizers
The two-subspaces criterion for µ guarantees that Q̂ is u.d. and that Q̂_N is u.d. with overwhelming probability for sufficiently large N, as follows. Theorem 3.1. If µ satisfies the two-subspaces criterion for some γ > 0, then F is strictly convex. Theorem 3.2. If µ satisfies the two-subspaces criterion for some γ > 0 and N > 2(D − 1), then
P(F_N is not strictly convex) ≤ 2 \binom{N}{D−1}^2 (1 − γ)^{N−2(D−1)}.   (3.1)
3.3 From Energy Minimizers to Directional Derivatives of Energies
We control the difference ∥Q − Q̂∥_F from above by differences of derivatives of energies at Q and Q̂. Here Q is an arbitrary matrix in B_r(Q̂) for some r > 0 (where B_r(Q̂) is the ball in H with center Q̂ and radius r w.r.t. the Frobenius norm), but we will later apply this with Q = Q̂_N for some N ∈ N.
3.3.1 Preliminary Notation and Definitions
The "directions" of the derivatives, which we define below, are elements of the unit sphere of the tangent space of H, i.e., D := {D ∈ R^{D×D} : D = D^T, tr(D) = 0, ∥D∥_F = 1}.
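Membership in the tangent sphere D and, for the empirical energy F_N, the one-sided directional derivatives used in the sequel are easy to check numerically: when Qx ≠ 0 for every sample x, the derivative of F_N(Q + tD) at t = 0⁺ equals Σ_x ⟨Qx, Dx⟩/∥Qx∥. The data and the direction below are made up purely for this sanity check.

```python
import numpy as np

def FN(Q, X):
    # F_N(Q) = sum over samples x (rows of X) of ||Q x||
    return np.linalg.norm(X @ Q.T, axis=1).sum()

def dir_deriv(Q, D, X):
    # analytic directional derivative of F_N at Q in direction D,
    # valid when Q x != 0 for every sample x
    QX, DX = X @ Q.T, X @ D.T
    return ((QX * DX).sum(axis=1) / np.linalg.norm(QX, axis=1)).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 3))
Q = np.eye(3) / 3                                  # a point in H
D = np.diag([1.0, -1.0, 0.0]) / np.sqrt(2.0)       # symmetric, trace 0, unit Frobenius norm
t = 1e-6
fd = (FN(Q + t * D, X) - FN(Q, X)) / t             # one-sided finite difference
```

The finite difference and the analytic formula should agree up to O(t), which the test below confirms.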
Throughout the paper, directions in D are often determined by particular points Q_1, Q_2 ∈ H with Q_1 ≠ Q_2. We denote the direction from Q_1 to Q_2 by D_{Q_1,Q_2}, that is,
D_{Q_1,Q_2} := (Q_2 − Q_1) / ∥Q_2 − Q_1∥_F.   (3.2)
Directional derivatives with respect to an element of D may not exist, and therefore we use directional derivatives from the right. That is, for Q ∈ H and D ∈ D, the directional derivative (from the right) of F at Q in the direction D is
∇⁺_D F(Q) := (d/dt) F(Q + tD) |_{t=0⁺}.   (3.3)
3.3.2 Mathematical Statement
We use the above notation to formulate the desired bound on ∥Q − Q̂∥_F. It involves the constant α_0, which is also used in Theorem 1.1. The proof of this lemma establishes the existence of α_0, though it does not suggest an explicit approximation for it. Lemma 3.3. For r > 0 there exists a constant α_0 ≡ α_0(r, µ, D) > 0 such that for all Q ∈ B_r(Q̂) \ {Q̂}:
∇⁺_{D_{Q̂,Q}} F(Q) − ∇⁺_{D_{Q̂,Q}} F(Q̂) ≥ α_0 ∥Q − Q̂∥_F   (3.4)
and consequently
∇⁺_{D_{Q̂,Q}} F(Q) ≥ α_0 ∥Q − Q̂∥_F.   (3.5)
3.4 N^{−1/2} Convergence of Directional Derivatives
We formulate the following convergence rate for the directional derivatives of F_N from the right: Theorem 3.4. For Q ∈ H and D ∈ D,
P( |∇⁺_D F(Q) − ∇⁺_D F_N(Q)| ≥ N^{ϵ−1/2} ) ≤ 2 exp(−N^{2ϵ}/(D·R_µ^2)).   (3.6)
It would be desirable to replace ∇⁺_D F(Q) − ∇⁺_D F_N(Q) in (3.6) with ∇⁺_D F(Q), though this is impossible in general. We will later use the following observation to implicitly obtain a result in this direction. Lemma 3.5. If Q ∈ H \ {Q̂}, then
∇⁺_{D_{Q̂,Q}} F(Q) ≥ 0.   (3.7)
3.5 An Incomplete Idea for Proving Theorem 1.1
At this point we can outline the basic intuition behind the proof of Theorem 1.1. We assume for simplicity that Q̂_N is u.d. Suppose, for the moment, that we could use (3.6) of Theorem 3.4 with Q := Q̂_N. This is actually not mathematically sound, as we will discuss shortly, but if we could do it then we would have from (3.6) that
P( |∇⁺_{D_{Q̂,Q̂_N}} F(Q̂_N) − ∇⁺_{D_{Q̂,Q̂_N}} F_N(Q̂_N)| ≥ N^{ϵ−1/2} ) ≤ 2 exp(−N^{2ϵ}/(D·R_µ^2)).
(3.8)
We note that (3.7), together with the convexity of F_N and the definition of Q̂_N, implies that
∇⁺_{D_{Q̂,Q̂_N}} F(Q̂_N) ≥ 0 and ∇⁺_{D_{Q̂,Q̂_N}} F_N(Q̂_N) ≤ 0.   (3.9)
Combining (3.8) and (3.9), we obtain that
P( ∇⁺_{D_{Q̂,Q̂_N}} F(Q̂_N) ≥ N^{ϵ−1/2} ) ≤ 2 exp(−N^{2ϵ}/(D·R_µ^2)).   (3.10)
At last, combining (3.5), (3.10) and Theorem 3.2, we could formally prove Theorem 1.1. However, as mentioned above, we cannot legally use Theorem 3.4 with Q = Q̂_N. This is because Q̂_N is a function of the samples (random variables) {x_i}_{i=1}^N, but for our proof to be valid, Q needs to be fixed before the sampling begins. Therefore, our new goal is to utilize the intuition described above, but modify the proof to make it mathematically sound. This is accomplished by creating a series of "nets" (subsets of H) of increasing precision. Each matrix in each of the nets is determined before the sampling begins, so it can be used in Theorem 3.4. However, the construction of the nets guarantees that the Nth net contains a matrix Q which is sufficiently close to Q̂_N to be used as a substitute for Q̂_N in the above argument.
3.6 The Missing Component: Adaptive Nets
We describe here a result on the existence of a sequence of nets as suggested in §3.5. They are constructed in several stages, which we cannot fit in here (see the careful explanation in the extended version of this paper). We recall that B_2(Q̂) denotes the ball in H with center Q̂ and radius 2 w.r.t. the Frobenius norm. Lemma 3.6. Given κ ≥ 2 and τ > 0, there exists a sequence of sets {S_n}_{n=1}^∞ such that for all n ∈ N, S_n ⊂ B_2(Q̂), and for any Q ∈ B_2(Q̂) with ∥Q − Q̂∥_F > n^{−1/2} there exists Q′ ∈ S_n with
∥Q′ − Q̂∥_F ≤ ∥Q − Q̂∥_F,   (3.11)
2 n^{−1/2}(τ + κ^{−1}) ≥ ∥Q′ − Q∥_F ≥ n^{−1/2} κ^{−1}, and   (3.12)
∥D_{Q̂,Q′} − D_{Q̂,Q}∥_F ≤ τ n^{−1}.   (3.13)
Furthermore, |S_n| ≤ 2κ n^{1/2} (10 D n / τ)^{D(D+1)/2}.
(3.14)
The following lemma shows that we can use S_N to guarantee a good approximation of Q̂ by Q̂_N as long as the differences of directional derivatives are well-controlled for elements of S_N (it uses the fixed constants κ and τ of S_N; see Lemma 3.6). Lemma 3.7. If for some ϵ > 0, F_N is strictly convex and
|∇⁺_{D_{Q,Q̂}} F(Q) − ∇⁺_{D_{Q,Q̂}} F_N(Q)| ≤ N^{−1/2+ϵ} for all Q ∈ S_N,   (3.15)
then Q̂_N is u.d. and
∥Q̂ − Q̂_N∥_F ≤ ((1 + 2α_0(τ + 1/κ) + 4R_µκτ)/α_0) N^{−1/2+ϵ}.   (3.16)
3.7 Completing the Proof of Theorem 1.1
Let us fix κ_0 = (4α_0) ∨ 2, τ_0 := (1 − 2α_0/κ_0)/(2α_0 + 4R_µκ_0) and N > 2(D−1). We note that
1 + 2α_0(τ_0 + 1/κ_0) + 4R_µκ_0τ_0 = 2.   (3.17)
We rewrite (3.14) using κ := κ_0 and τ := τ_0 and then bound its RHS from above as follows:
|S_N| ≤ 2((4α_0) ∨ 2) N^{(D^2+D+1)/2} ( 10D(2α_0 + 4R_µ((4α_0) ∨ 2))/(1 − 2α_0/((4α_0) ∨ 2)) )^{D(D+1)/2} ≤ (C_0/2) N^{D^2}.   (3.18)
Combining (3.6) (applied to any Q ∈ S_N) and (3.18), we obtain that
P( ∃Q ∈ S_N with |∇⁺_{D_{Q,Q̂}} F(Q) − ∇⁺_{D_{Q,Q̂}} F_N(Q)| ≥ N^{−1/2+ϵ} ) ≤ C_0 N^{D^2} exp(−N^{2ϵ}/(D·R_µ^2)).   (3.19)
Furthermore, (3.1) and (3.19) imply that
P( |∇⁺_{D_{Q,Q̂}} F(Q) − ∇⁺_{D_{Q,Q̂}} F_N(Q)| ≤ N^{−1/2+ϵ} for all Q ∈ S_N and Q̂_N is u.d. ) ≥ 1 − C_0 N^{D^2} exp(−N^{2ϵ}/(D·R_µ^2)) − 2 \binom{N}{D−1}^2 (1−γ)^{N−2(D−1)}.   (3.20)
Theorem 1.1 then follows from Lemma 3.7 (applied with κ := κ_0 and τ := τ_0), (3.20) and (3.17).
Acknowledgment This work was supported by NSF grants DMS-09-15064 and DMS-09-56072. Part of this work was performed while M. Coudron attended the University of Minnesota (as an undergraduate student). We thank T. Zhang for valuable conversations and for forwarding us [22].
References [1] A. Agarwal, S. Negahban, and M. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. Technical Report arXiv:1104.4824, Apr 2011. [2] A. Agarwal, S. Negahban, and M. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. In ICML, pages 1129–1136, 2011. [3] T. T. Cai, C.-H. Zhang, and H. H. Zhou. Optimal rates of convergence for covariance matrix estimation. Ann. Statist., 38(4):2118–2144, 2010.
[4] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(3):11, 2011. [5] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. Arxiv, 02139:1–24, 2009. [6] C. Davis and W. M. Kahan. The rotation of eigenvectors by a perturbation. III. SIAM J. on Numerical Analysis, 7:1–46, 1970. [7] D. Hsu, S. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. Information Theory, IEEE Transactions on, 57(11):7221–7234, Nov. 2011. [8] P. J. Huber and E. Ronchetti. Robust statistics. Wiley series in probability and mathematical statistics. Probability and mathematical statistics. Wiley, 2009. [9] G. Lerman, M. McCoy, J. A. Tropp, and T. Zhang. Robust computation of linear models, or How to find a needle in a haystack. ArXiv e-prints, Feb. 2012. [10] R. A. Maronna, R. D. Martin, and V. J. Yohai. Robust statistics: Theory and methods. Wiley Series in Probability and Statistics. John Wiley & Sons Ltd., Chichester, 2006. [11] M. McCoy and J. Tropp. Two proposals for robust PCA using semidefinite programming. Electron. J. Stat., 5:1123–1160, 2011. [12] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electron. J. Stat., 5:935–980, 2011. [13] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electron. J. Stat., 2:494–515, 2008. [14] P. J. Rousseeuw and A. M. Leroy. Robust regression and outlier detection. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John Wiley & Sons Inc., New York, 1987. [15] J. Shawe-Taylor, C. Williams, N. Cristianini, and J. Kandola. On the eigenspectrum of the Gram matrix and the generalisation error of kernel PCA. IEEE Transactions on Information Theory, 51(1):2510–2522, 2005. [16] L. G. Valiant. A theory of the learnable. Commun.
ACM, 27(11):1134–1142, Nov. 1984. [17] R. Vershynin. How close is the sample covariance matrix to the actual covariance matrix? To appear. [18] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. C. Eldar and G. Kutyniok, editors, Compressed Sensing: Theory and Applications. Cambridge Univ. Press, to appear. [19] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. In NIPS, pages 2496–2504, 2010. [20] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. Information Theory, IEEE Transactions on, PP(99):1, 2012. [21] T. Zhang and G. Lerman. A novel M-estimator for robust PCA. Submitted, available at arXiv:1112.4863. [22] T. Zhang and H. Zou. Sparse precision matrix estimation via positive definite constrained minimization of ℓ1 penalized d-trace loss. Personal Communication, 2012.
2012
Learning to Discover Social Circles in Ego Networks Julian McAuley Stanford, USA jmcauley@cs.stanford.edu Jure Leskovec Stanford, USA jure@cs.stanford.edu Abstract Our personal social networks are big and cluttered, and currently there is no good way to organize them. Social networking sites allow users to manually categorize their friends into social circles (e.g. ‘circles’ on Google+, and ‘lists’ on Facebook and Twitter); however, these are laborious to construct and must be updated whenever a user’s network grows. We define a novel machine learning task of identifying users’ social circles. We pose the problem as a node clustering problem on a user’s ego-network, a network of connections between her friends. We develop a model for detecting circles that combines network structure as well as user profile information. For each circle we learn its members and the circle-specific user profile similarity metric. Modeling node membership to multiple circles allows us to detect overlapping as well as hierarchically nested circles. Experiments show that our model accurately identifies circles on a diverse set of data from Facebook, Google+, and Twitter, for all of which we obtain hand-labeled ground-truth. 1 Introduction Online social networks allow users to follow streams of posts generated by hundreds of their friends and acquaintances. Users’ friends generate overwhelming volumes of information, and to cope with this ‘information overload’ they need to organize their personal social networks. One of the main mechanisms for users of social networking sites to organize their networks and the content generated by them is to categorize their friends into what we refer to as social circles. Practically all major social networks provide such functionality, for example, ‘circles’ on Google+ and ‘lists’ on Facebook and Twitter. Once a user creates her circles, they can be used for content filtering (e.g.
to filter status updates posted by distant acquaintances), for privacy (e.g. to hide personal information from coworkers), and for sharing groups of users that others may wish to follow. Currently, users in Facebook, Google+ and Twitter identify their circles either manually, or in a naïve fashion by identifying friends sharing a common attribute. Neither approach is particularly satisfactory: the former is time consuming and does not update automatically as a user adds more friends, while the latter fails to capture individual aspects of users’ communities, and may function poorly when profile information is missing or withheld. In this paper we study the problem of automatically discovering users’ social circles. In particular, given a single user with her personal social network, our goal is to identify her circles, each of which is a subset of her friends. Circles are user-specific, as each user organizes her personal network of friends independently of all other users to whom she is not connected. This means that we can formulate the problem of circle detection as a clustering problem on her ego-network, the network of friendships between her friends. In Figure 1 we are given a single user u and we form a network between her friends v_i. We refer to the user u as the ego and to the nodes v_i as alters. The task then is to identify the circles to which each alter v_i belongs, as in Figure 1. In other words, the goal is to find nested as well as overlapping communities/clusters in u’s ego-network. Generally, there are two useful sources of data that help with this task. The first is the set of edges of the ego-network. We expect that circles are formed by densely-connected sets of alters [20]. Figure 1: An ego-network with labeled circles.
This network shows typical behavior that we observe in our data: approximately 25% of our ground-truth circles (from Facebook) are contained completely within another circle, 50% overlap with another circle, and 25% of the circles have no members in common with any other circle. The goal is to discover these circles given only the network between the ego’s friends. We aim to discover circle memberships and to find common properties around which circles form. However, different circles overlap heavily, i.e., alters belong to multiple circles simultaneously [1, 21, 28, 29], and many circles are hierarchically nested in larger ones (Figure 1). Thus it is important to model an alter’s memberships to multiple circles. Secondly, we expect that each circle is not only densely connected but that its members also share common properties or traits [18, 28]. Thus we need to explicitly model different dimensions of user profiles along which each circle emerges. We model circle affiliations as latent variables, and similarity between alters as a function of common profile information. We propose an unsupervised method to learn which dimensions of profile similarity lead to densely linked circles. Our model has two innovations: first, in contrast to mixed-membership models [2] we predict hard assignment of a node to multiple circles, which proves critical for good performance. Second, by proposing a parameterized definition of profile similarity, we learn the dimensions of similarity along which links emerge. This extends the notion of homophily [12] by allowing different circles to form along different social dimensions, an idea related to the concept of Blau spaces [16]. We achieve this by allowing each circle to have a different definition of profile similarity, so that one circle might form around friends from the same school, and another around friends from the same location.
We learn the model by simultaneously choosing node circle memberships and profile similarity functions so as to best explain the observed data. We introduce a dataset of 1,143 ego-networks from Facebook, Google+, and Twitter, for which we obtain hand-labeled ground-truth from 5,636 different circles.1 Experimental results show that by simultaneously considering social network structure as well as user profile information our method performs significantly better than natural alternatives and the current state-of-the-art. Besides being more accurate, our method also allows us to generate automatic explanations of why certain nodes belong to common communities. Our method is completely unsupervised, and is able to automatically determine both the number of circles as well as the circles themselves. Further Related Work. Topic-modeling techniques have been used to uncover ‘mixed-memberships’ of nodes to multiple groups [2], and extensions allow entities to be attributed with text information [3, 5, 11, 13, 26]. Classical algorithms tend to identify communities based on node features [9] or graph structure [1, 21], but rarely use both in concert. Our work is related to [30] in the sense that it performs clustering on social-network data, and to [23], which models memberships to multiple communities. Finally, there are works that model network data similar to ours [6, 17], though the underlying models do not form communities. As we shall see, our problem has unique characteristics that require a new model. An extended version of our article appears in [15]. 2 A Generative Model for Friendships in Social Circles We desire a model of circle formation with the following properties: (1) Nodes within circles should have common properties, or ‘aspects’. (2) Different circles should be formed by different aspects, e.g. one circle might be formed by family members, and another by students who attended the same university.
(3) Circles should be allowed to overlap, and ‘stronger’ circles should be allowed to form within ‘weaker’ ones, e.g. a circle of friends from the same degree program may form within a circle from the same university, as in Figure 1 (1: http://snap.stanford.edu/data/). (4) We would like to leverage both profile information and network structure in order to identify the circles. Ideally, we would like to be able to pinpoint which aspects of a profile caused a circle to form, so that the model is interpretable by the user. The input to our model is an ego-network G = (V, E), along with ‘profiles’ for each user v ∈ V. The ‘center’ node u of the ego-network (the ‘ego’) is not included in G; rather, G consists only of u’s friends (the ‘alters’). We define the ego-network in this way precisely because creators of circles do not themselves appear in their own circles. For each ego-network, our goal is to predict a set of circles C = {C_1, . . . , C_K}, C_k ⊆ V, and associated parameter vectors θ_k that encode how each circle emerged. We encode ‘user profiles’ into pairwise features φ(x, y) that in some way capture what properties the users x and y have in common. We first describe our model, which can be applied using arbitrary feature vectors φ(x, y), and in Section 5 we describe several ways to construct feature vectors φ(x, y) that are suited to our particular application. We describe a model of social circles that treats circle memberships as latent variables. Nodes within a common circle are given an opportunity to form an edge, which naturally leads to hierarchical and overlapping circles. We will then devise an unsupervised algorithm to jointly optimize the latent variables and the profile similarity parameters so as to best explain the observed network data. Our model of social circles is defined as follows. Given an ego-network G and a set of K circles C = {C_1 . . .
CK}, we model the probability that a pair of nodes (x, y) ∈ V × V form an edge as

$$p((x,y) \in E) \;\propto\; \exp\Big( \underbrace{\sum_{C_k \supseteq \{x,y\}} \langle \phi(x,y), \theta_k \rangle}_{\text{circles containing both nodes}} \;-\; \underbrace{\sum_{C_k \not\supseteq \{x,y\}} \alpha_k \langle \phi(x,y), \theta_k \rangle}_{\text{all other circles}} \Big). \quad (1)$$

For each circle Ck, θk is the profile similarity parameter that we will learn. The idea is that ⟨φ(x, y), θk⟩ is high if both nodes belong to Ck, and low if either of them do not (αk trades off these two effects). Since the feature vector φ(x, y) encodes the similarity between the profiles of two users x and y, the parameter vector θk encodes what dimensions of profile similarity caused the circle to form, so that nodes within a circle Ck should ‘look similar’ according to θk. Considering that edges e = (x, y) are generated independently, we can write the probability of G as

$$P_\Theta(G; C) = \prod_{e \in E} p(e \in E) \times \prod_{e \notin E} p(e \notin E), \quad (2)$$

where Θ = {(θk, αk)}k=1...K is our set of model parameters. Defining the shorthand notation

$$d_k(e) = \delta(e \in C_k) - \alpha_k\, \delta(e \notin C_k), \qquad \Phi(e) = \sum_{C_k \in C} d_k(e)\, \langle \phi(e), \theta_k \rangle$$

allows us to write the log-likelihood of G:

$$l_\Theta(G; C) = \sum_{e \in E} \Phi(e) \;-\; \sum_{e \in V \times V} \log\big(1 + e^{\Phi(e)}\big). \quad (3)$$

Next, we describe how to optimize node circle memberships C as well as the parameters of the user profile similarity functions Θ = {(θk, αk)} (k = 1 . . . K) given a graph G and user profiles.

3 Unsupervised Learning of Model Parameters

Treating circles C as latent variables, we aim to find Θ̂ = {θ̂, α̂} so as to maximize the regularized log-likelihood of (eq. 3), i.e.,

$$\hat\Theta, \hat C = \operatorname*{argmax}_{\Theta, C}\; l_\Theta(G; C) - \lambda\, \Omega(\theta). \quad (4)$$

We solve this problem using coordinate ascent on Θ and C [14]:

$$C^{t} = \operatorname*{argmax}_{C}\; l_{\Theta^{t}}(G; C) \quad (5)$$

$$\Theta^{t+1} = \operatorname*{argmax}_{\Theta}\; l_\Theta(G; C^{t}) - \lambda\, \Omega(\theta). \quad (6)$$

Noting that (eq. 3) is concave in θ, we optimize (eq. 6) through gradient ascent, where partial derivatives are given by

$$\frac{\partial l}{\partial \theta_k} = \sum_{e \in V \times V} -d_k(e)\,\phi(e)\, \frac{e^{\Phi(e)}}{1 + e^{\Phi(e)}} \;+\; \sum_{e \in E} d_k(e)\,\phi(e) \;-\; \frac{\partial \Omega}{\partial \theta_k}$$

$$\frac{\partial l}{\partial \alpha_k} = \sum_{e \in V \times V} \delta(e \notin C_k)\, \langle \phi(e), \theta_k \rangle\, \frac{e^{\Phi(e)}}{1 + e^{\Phi(e)}} \;-\; \sum_{e \in E} \delta(e \notin C_k)\, \langle \phi(e), \theta_k \rangle.$$
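The log-likelihood of (eq. 3) is simple to evaluate directly. The following is a minimal Python sketch (function and variable names are our own, not taken from the paper's implementation), treating φ as a user-supplied pairwise feature function:

```python
import math
from itertools import combinations

def log_likelihood(nodes, edges, circles, theta, alpha, phi):
    """Evaluate l_Theta(G; C) of (eq. 3):
    sum_{e in E} Phi(e) - sum over node pairs of log(1 + exp(Phi(e))),
    with Phi(e) = sum_k d_k(e) * <phi(e), theta_k> and
    d_k(e) = 1 if both endpoints lie in C_k, else -alpha_k."""
    def Phi(x, y):
        total = 0.0
        for k, Ck in enumerate(circles):
            d_k = 1.0 if (x in Ck and y in Ck) else -alpha[k]
            total += d_k * sum(f * t for f, t in zip(phi(x, y), theta[k]))
        return total

    ll = sum(Phi(x, y) for x, y in edges)
    # The sum over unordered node pairs stands in for the V x V sum in eq. (3).
    for x, y in combinations(nodes, 2):
        ll -= math.log1p(math.exp(Phi(x, y)))
    return ll
```

With a single circle {0, 1} and a single constant feature, raising θ1 increases the likelihood of a graph whose only edge lies inside that circle, as the model intends.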
For fixed C \ Ci we note that solving argmaxCi lΘ(G; C \ Ci) can be expressed as pseudo-boolean optimization in a pairwise graphical model [4], i.e., it can be written as

$$C_k = \operatorname*{argmax}_{C} \sum_{(x,y) \in V \times V} E_{(x,y)}\big(\delta(x \in C), \delta(y \in C)\big). \quad (7)$$

In words, we want edges with high weight (under θk) to appear in Ck, and edges with low weight to appear outside of Ck. Defining $o_k(e) = \sum_{C_k \in C \setminus C_i} d_k(e)\,\langle \phi(e), \theta_k \rangle$, the energy Ee of (eq. 7) is

$$E_e(0,0) = E_e(0,1) = E_e(1,0) = \begin{cases} o_k(e) - \alpha_k \langle \phi(e), \theta_k \rangle - \log\big(1 + e^{o_k(e) - \alpha_k \langle \phi(e), \theta_k \rangle}\big), & e \in E \\ -\log\big(1 + e^{o_k(e) - \alpha_k \langle \phi(e), \theta_k \rangle}\big), & e \notin E \end{cases}$$

$$E_e(1,1) = \begin{cases} o_k(e) + \langle \phi(e), \theta_k \rangle - \log\big(1 + e^{o_k(e) + \langle \phi(e), \theta_k \rangle}\big), & e \in E \\ -\log\big(1 + e^{o_k(e) + \langle \phi(e), \theta_k \rangle}\big), & e \notin E. \end{cases}$$

By expressing the problem in this form we can draw upon existing work on pseudo-boolean optimization. We use the publicly-available ‘QPBO’ software described in [22], which is able to accurately approximate problems of the form shown in (eq. 7). We solve (eq. 7) for each Ck in a random order. The two optimization steps of (eq. 5) and (eq. 6) are repeated until convergence, i.e., until Ct+1 = Ct.

We regularize (eq. 4) using the ℓ1 norm, i.e., $\Omega(\theta) = \sum_{k=1}^{K} \sum_{i=1}^{|\theta_k|} |\theta_{ki}|$, which leads to sparse (and readily interpretable) parameters.

Since ego-networks are naturally relatively small, our algorithm can readily handle problems at the scale required. In the case of Facebook, the average ego-network has around 190 nodes [24], while the largest network we encountered has 4,964 nodes. Note that since the method is unsupervised, inference is performed independently for each ego-network. This means that our method could be run on the full Facebook graph (for example), as circles are independently detected for each user, and the ego-networks typically contain only hundreds of nodes.

Hyperparameter estimation.
To choose the optimal number of circles, we choose K so as to minimize an approximation to the Bayesian Information Criterion (BIC) [2, 8, 25],

$$\hat K = \operatorname*{argmin}_{K}\; \mathrm{BIC}(K; \Theta_K) \quad (8)$$

where Θ_K is the set of parameters predicted for a particular number of communities K, and

$$\mathrm{BIC}(K; \Theta_K) \simeq -2\, l_{\Theta_K}(G; C) + |\Theta_K| \log |E|. \quad (9)$$

The regularization parameter λ ∈ {0, 1, 10, 100} was determined using leave-one-out cross-validation, though in our experience it did not significantly impact performance.

4 Dataset Description

Our goal is to evaluate our unsupervised method on ground-truth data. We expended significant time, effort, and resources to obtain high quality hand-labeled data.2 We were able to obtain ego-networks and ground-truth from three major social networking sites: Facebook, Google+, and Twitter.

From Facebook we obtained profile and network data from 10 ego-networks, consisting of 193 circles and 4,039 users. To do so we developed our own Facebook application and conducted a survey of ten users, who were asked to manually identify all the circles to which their friends belonged. On average, users identified 19 circles in their ego-networks, with an average circle size of 22 friends. Examples of such circles include students of common universities, sports teams, relatives, etc.
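Returning to the model-selection step above, the scoring in (eq. 8) and (eq. 9) can be sketched in a few lines of Python (the function names and the shape of the `fits` dictionary are our own illustration):

```python
import math

def bic(log_lik, n_params, n_edges):
    # Eq. (9): BIC(K; Theta_K) ~= -2 * l_{Theta_K}(G; C) + |Theta_K| * log|E|
    return -2.0 * log_lik + n_params * math.log(n_edges)

def choose_K(fits, n_edges):
    """fits maps each candidate K to (log-likelihood, parameter count)
    from a converged run with K circles; return the K minimizing BIC (eq. 8)."""
    return min(fits, key=lambda K: bic(*fits[K], n_edges))
```

For example, a fit with a much better likelihood at K = 2 is selected unless the extra parameters outweigh the gain under the log|E| penalty.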
2 http://snap.stanford.edu/data/

[Figure 2 shows example profile trees for two users x (blue: Alan Turing, Cryptanalyst at GC&CS, educated at Cambridge and Princeton) and y (pink: Dilly Knox, Cryptanalyst at GC&CS and the Royal Navy, educated at Cambridge), together with the resulting difference vectors 1 − σx,y and 1 − σ′x,y.]

Figure 2: Feature construction. Profiles are tree-structured, and we construct features by comparing paths in those trees. Examples of trees for two users x (blue) and y (pink) are shown at left. Two schemes for constructing feature vectors from these profiles are shown at right: (1) (top right) we construct binary indicators measuring the difference between leaves in the two trees, e.g. ‘work→position→Cryptanalyst’ appears in both trees. (2) (bottom right) we sum over the leaf nodes in the first scheme, maintaining the fact that the two users worked at the same institution, but discarding the identity of that institution.

For the other two datasets we obtained publicly accessible data. From Google+ we obtained data from 133 ego-networks, consisting of 479 circles and 106,674 users. The 133 ego-networks represent all 133 Google+ users who had shared at least two circles, and whose network information was publicly accessible at the time of our crawl.
The Google+ circles are quite different from those from Facebook, in the sense that their creators have chosen to release them publicly, and because Google+ is a directed network (note that our model can very naturally be applied both to directed and to undirected networks). For example, one circle contains candidates from the 2012 Republican primary, who presumably do not follow their followers, nor each other. Finally, from Twitter we obtained data from 1,000 ego-networks, consisting of 4,869 circles (or ‘lists’ [10, 19, 27, 31]) and 81,362 users. The ego-networks we obtained range in size from 10 to 4,964 nodes.

Taken together our data contains 1,143 different ego-networks, 5,541 circles, and 192,075 users. The size differences between these datasets simply reflect the availability of data from each of the three sources. Our Facebook data is fully labeled, in the sense that we obtain every circle that a user considers to be a cohesive community, whereas our Google+ and Twitter data is only partially labeled, in the sense that we only have access to public circles. We design our evaluation procedure in Section 6 so that partial labels cause no issues.

5 Constructing Features from User Profiles

Profile information in all of our datasets can be represented as a tree where each level encodes increasingly specific information (Figure 2, left). From Google+ we collect data from six categories (gender, last name, job titles, institutions, universities, and places lived). From Facebook we collect data from 26 categories, including hometowns, birthdays, colleagues, political affiliations, etc. For Twitter, many choices exist as proxies for user profiles; we simply collect data from two categories, namely the set of hashtags and mentions used by each user during two weeks’ worth of tweets. ‘Categories’ correspond to parents of leaf nodes in a profile tree, as shown in Figure 2.

We first describe a difference vector to encode the relationship between two profiles.
A non-technical description is given in Figure 2. Suppose that users v ∈ V each have an associated profile tree Tv, and that l ∈ Tv is a leaf in that tree. We define the difference vector σx,y between two users x and y as a binary indicator encoding the profile aspects where users x and y differ (Figure 2, top right):

$$\sigma_{x,y}[l] = \delta\big((l \in T_x) \neq (l \in T_y)\big). \quad (10)$$

Note that feature descriptors are defined per ego-network: while many thousands of high schools (for example) exist among all Facebook users, only a small number appear among any particular user’s friends.

Although the above difference vector has the advantage that it encodes profile information at a fine granularity, it has the disadvantage that it is high-dimensional (up to 4,122 dimensions in the data we considered). One way to address this is to form difference vectors based on the parents of leaf nodes: this way, we encode what profile categories two users have in common, but disregard specific values (Figure 2, bottom right). For example, we encode how many hashtags two users tweeted in common, but discard which hashtags they tweeted:

$$\sigma'_{x,y}[p] = \sum_{l \in \mathrm{children}(p)} \sigma_{x,y}[l]. \quad (11)$$

This scheme has the advantage that it requires a constant number of dimensions, regardless of the size of the ego-network (26 for Facebook, 6 for Google+, 2 for Twitter, as described above).

Based on the difference vectors σx,y (and σ′x,y) we now describe how to construct edge features φ(x, y). The first property we wish to model is that members of circles should have common relationships with each other:

$$\phi_1(x,y) = (1; -\sigma_{x,y}). \quad (12)$$

The second property we wish to model is that members of circles should have common relationships to the ego of the ego-network. In this case, we consider the profile tree Tu from the ego user u. We then define our features in terms of that user:

$$\phi_2(x,y) = \big(1; -|\sigma_{x,u} - \sigma_{y,u}|\big) \quad (13)$$

(|σx,u − σy,u| is taken elementwise).
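Representing each profile tree as a set of leaves, where a leaf is a (category path, value) pair, equations (10) and (11) can be sketched in a few lines of Python (this leaf representation is our own choice, not the paper's):

```python
def diff_vector(Tx, Ty):
    """Eq. (10): for each leaf l appearing in either tree, indicate
    whether users x and y differ on l."""
    return {l: int((l in Tx) != (l in Ty)) for l in Tx | Ty}

def compressed_diff(sigma):
    """Eq. (11): sum leaf-level differences within each category
    (the parent path of the leaf)."""
    out = {}
    for (category, _value), d in sigma.items():
        out[category] = out.get(category, 0) + d
    return out
```

On the Figure 2 example, the two users differ on the ‘education→name’ leaf ‘Princeton’ but agree on ‘work→position’, so the compressed vector counts one education-name difference and zero work-position differences.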
These two parameterizations allow us to assess which mechanism better captures users’ subjective definition of a circle. In both cases, we include a constant feature (‘1’), which controls the probability that edges form within circles, or equivalently measures the extent to which circles are made up of friends. Importantly, this allows us to predict memberships even for users who have no profile information, simply due to their patterns of connectivity. Similarly, for the ‘compressed’ difference vector σ′x,y, we define

$$\psi_1(x,y) = (1; -\sigma'_{x,y}), \qquad \psi_2(x,y) = \big(1; -|\sigma'_{x,u} - \sigma'_{y,u}|\big). \quad (14)$$

To summarize, we have identified four ways of representing the compatibility between different aspects of profiles for two users. We considered two ways of constructing a difference vector (σx,y vs. σ′x,y) and two ways of capturing the compatibility of a pair of profiles (φ(x, y) vs. ψ(x, y)).

6 Experiments

Although our method is unsupervised, we can evaluate it on ground-truth data by examining the maximum-likelihood assignments of the latent circles C = {C1 . . . CK} after convergence. Our goal is that for a properly regularized model, the latent variables will align closely with the human-labeled ground-truth circles $\bar C = \{\bar C_1 \dots \bar C_{\bar K}\}$.

Evaluation metrics. To measure the alignment between a predicted circle C and a ground-truth circle C̄, we compute the Balanced Error Rate (BER) between the two circles [7],

$$\mathrm{BER}(C, \bar C) = \frac{1}{2}\left( \frac{|C \setminus \bar C|}{|C|} + \frac{|\bar C \setminus C|}{|\bar C|} \right).$$

This measure assigns equal importance to false positives and false negatives, so that trivial or random predictions incur an error of 0.5 on average. Such a measure is preferable to the 0/1 loss (for example), which assigns extremely low error to trivial predictions. We also report the F1 score, which we find produces qualitatively similar results.

Aligning predicted and ground-truth circles.
Since we do not know the correspondence between circles in C and C̄, we compute the optimal match via linear assignment by maximizing:

$$\max_{f: C \to \bar C}\; \frac{1}{|f|} \sum_{C \in \mathrm{dom}(f)} \big(1 - \mathrm{BER}(C, f(C))\big), \quad (15)$$

where f is a (partial) correspondence between C and C̄. That is, if the number of predicted circles |C| is less than the number of ground-truth circles |C̄|, then every circle C ∈ C must have a match C̄ ∈ C̄, but if |C| > |C̄|, we do not incur a penalty for additional predictions that could have been circles but were not included in the ground-truth. We use established techniques to estimate the number of circles, so that none of the baselines suffers a disadvantage by mispredicting K̂ = |C|, nor can any method predict the ‘trivial’ solution of returning the powerset of all users. We note that removing the bijectivity requirement (i.e., forcing all circles to be aligned by allowing multiple predicted circles to match a single ground-truth circle or vice versa) leads to qualitatively similar results.

[Figure 3 consists of two bar charts over the Facebook, Google+, and Twitter datasets, comparing multi-assignment clustering (Streich, Frank, et al.), low-rank embedding (Yoshida), block-LDA (Balasubramanyan and Cohen), and our model with features φ1 (eq. 12), φ2 (eq. 13), ψ1 (eq. 14), and ψ2 (eq. 14).]

Figure 3: Performance on Facebook, Google+, and Twitter, in terms of the Balanced Error Rate (top), and the F1 score (bottom). Higher is better. Error bars show standard error. The improvement of our best features φ1 compared to the nearest competitor is significant at the 1% level or better.

Baselines. We considered a wide range of baseline methods, including those that consider only network structure, those that consider only profile information, and those that consider both. First we experimented with Mixed Membership Stochastic Block Models [2], which consider only network information, and variants that also consider text attributes [5, 6, 13]. For each node, mixed-membership models predict a stochastic vector encoding partial circle memberships, which we threshold to generate ‘hard’ assignments. We also considered Block-LDA [3], where we generate ‘documents’ by treating aspects of user profiles as words in a bag-of-words model. Secondly, we experimented with classical clustering algorithms, such as K-means and Hierarchical Clustering [9], that form clusters based only on node profiles, but ignore the network. Conversely, we considered Link Clustering [1] and Clique Percolation [21], which use network information but ignore profiles. We also considered the Low-Rank Embedding approach of [30], where node attributes and edge information are projected into a feature space where classical clustering techniques can be applied. Finally, we considered Multi-Assignment Clustering [23], which is promising in that it predicts hard assignments to multiple clusters, though it does so without using the network.

Of the eight baselines highlighted above we report the three whose overall performance was the best, namely Block-LDA [3] (which slightly outperformed mixed membership stochastic block models [2]), Low-Rank Embedding [30], and Multi-Assignment Clustering [23].

Performance on Facebook, Google+, and Twitter Data.
Figure 3 shows results on our Facebook, Google+, and Twitter data. Circles were aligned as described in (eq. 15), with the number of circles K̂ determined as described in Section 3. For non-probabilistic baselines, we chose K̂ so as to maximize the modularity, as described in [20]. In terms of absolute performance our best model φ1 achieves accuracy (1 − BER) scores of 0.84 on Facebook, 0.72 on Google+ and 0.70 on Twitter (F1 scores are 0.59, 0.38, and 0.34, respectively). The lower F1 scores on Google+ and Twitter are explained by the fact that many circles have not been maintained since they were initially created: we achieve high recall (we recover the friends in each circle), but at low precision (we recover additional friends who appeared after the circle was created).

Comparing our method to baselines we notice that we outperform all baselines on all datasets by a statistically significant margin. Compared to the nearest competitors, our best performing features φ1 improve on the BER by 43% on Facebook, 26% on Google+, and 16% on Twitter (improvements in terms of the F1 score are similar). Regarding the performance of the baseline methods, we note that good performance seems to depend critically on predicting hard memberships to multiple circles, using a combination of node and edge information; none of the baselines exhibits precisely this combination, a shortcoming our model addresses.

Both of the features we propose (friend-to-friend features φ1 and friend-to-user features φ2) perform similarly, revealing that both schemes ultimately encode similar information, which is not surprising, since users and their friends have similar profiles. Using the ‘compressed’ features ψ1 and ψ2 does not significantly impact performance, which is promising since they have far lower dimension than the full features; what this reveals is that it is sufficient to model categories of attributes that users have in common (e.g. same school, same town), rather than the attribute values themselves.

Figure 4: Three detected circles on a small ego-network from Facebook, compared to three ground-truth circles (BER ≃ 0.81). Blue nodes: true positives. Grey: true negatives. Red: false positives. Yellow: false negatives. Our method correctly identifies the largest circle (left), a sub-circle contained within it (center), and a third circle that significantly overlaps with it (right).

[Figure 5 plots, for each of four communities of one user, the learned weights θk over feature indices: under the ‘complete’ features φ1 the communities are annotated as people with PhDs living in S.F. or Stanford; Germans who went to school in 1997; Americans; and college-educated people working at a particular institute. Under the ‘compressed’ features ψ1 they are annotated as: studied the same degree and speak the same languages; studied the same degree; same level of education; and worked for the same employer at the same time.]

Figure 5: Parameter vectors of four communities for a particular Facebook user. The top four plots show ‘complete’ features φ1, while the bottom four plots show ‘compressed’ features ψ1 (in both cases, BER ≃ 0.78). For example the former features encode the fact that members of a particular community tend to speak German, while the latter features encode the fact that they speak the same language. (Personally identifiable annotations have been suppressed.)

We found that all algorithms perform significantly better on Facebook than on Google+ or Twitter. There are a few explanations: Firstly, our Facebook data is complete, in the sense that survey participants manually labeled every circle in their ego-networks, whereas in other datasets we only observe publicly-visible circles, which may not be up-to-date. Secondly, the 26 profile categories available from Facebook are more informative than the 6 categories from Google+, or the tweet-based profiles we build from Twitter.
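The evaluation protocol above (BER, and the optimal matching of eq. 15) is easy to reproduce. Here is a self-contained Python sketch; for the handful of circles in a typical ego-network, brute force over injective assignments suffices, though for large K one would use a proper linear-assignment solver instead:

```python
from itertools import permutations

def ber(pred, truth):
    # Balanced Error Rate: average of false-positive and false-negative rates.
    # Assumes both circles are non-empty.
    pred, truth = set(pred), set(truth)
    return 0.5 * (len(pred - truth) / len(pred) + len(truth - pred) / len(truth))

def alignment_score(pred_circles, true_circles):
    """Eq. (15): best average (1 - BER) over 1-1 partial matchings; extra
    circles on the larger side incur no penalty."""
    m = min(len(pred_circles), len(true_circles))
    best = 0.0
    if len(pred_circles) <= len(true_circles):
        for perm in permutations(range(len(true_circles)), m):
            s = sum(1 - ber(pred_circles[i], true_circles[j])
                    for i, j in enumerate(perm)) / m
            best = max(best, s)
    else:
        for perm in permutations(range(len(pred_circles)), m):
            s = sum(1 - ber(pred_circles[j], true_circles[i])
                    for i, j in enumerate(perm)) / m
            best = max(best, s)
    return best
```

A perfect prediction (up to reordering of circles) scores 1.0, and an extra spurious circle beyond the ground-truth count leaves the score of the matched circles unchanged.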
A more basic difference lies in the nature of the networks themselves: edges in Facebook encode mutual ties, whereas edges in Google+ and Twitter encode follower relationships, which changes the role that circles serve [27]. The latter two points explain why algorithms that use either edge or profile information in isolation are unlikely to perform well on this data.

Qualitative analysis. Finally we examine the output of our model in greater detail. Figure 4 shows results of our method on an example ego-network from Facebook. Different colors indicate true and false positives and negatives. Our method is correctly able to identify overlapping circles as well as sub-circles (circles within circles). Figure 5 shows parameter vectors learned for four circles of a particular Facebook user. Positive weights indicate properties that users in a particular circle have in common. Notice how the model naturally learns the social dimensions that lead to a social circle. Moreover, the first parameter, which corresponds to the constant feature ‘1’, has the highest weight; this reveals that membership in the same community provides the strongest signal that edges will form, while profile data provides a weaker (but still relevant) signal.

Acknowledgements. This research has been supported in part by NSF IIS-1016909, CNS-1010921, IIS-1159679, DARPA XDATA, DARPA GRAPHS, Albert Yu & Mary Bechmann Foundation, Boeing, Allyes, Samsung, Intel, Alfred P. Sloan Fellowship and the Microsoft Faculty Fellowship.

References

[1] Y.-Y. Ahn, J. Bagrow, and S. Lehmann. Link communities reveal multiscale complexity in networks. Nature, 2010.
[2] E. Airoldi, D. Blei, S. Fienberg, and E. Xing. Mixed membership stochastic blockmodels. JMLR, 2008.
[3] R. Balasubramanyan and W. Cohen. Block-LDA: Jointly modeling entity-annotated text and entity-entity links. In SDM, 2011.
[4] E. Boros and P. Hammer. Pseudo-boolean optimization. Discrete Applied Mathematics, 2002.
[5] J. Chang and D. Blei.
Relational topic models for document networks. In AIStats, 2009.
[6] J. Chang, J. Boyd-Graber, and D. Blei. Connections between the lines: augmenting social networks with text. In KDD, 2009.
[7] Y. Chen and C. Lin. Combining SVMs with various feature selection strategies. Springer, 2006.
[8] M. Handcock, A. Raftery, and J. Tantrum. Model-based clustering for social networks. Journal of the Royal Statistical Society Series A, 2007.
[9] S. Johnson. Hierarchical clustering schemes. Psychometrika, 1967.
[10] D. Kim, Y. Jo, L.-C. Moon, and A. Oh. Analysis of twitter lists as a potential source for discovering latent characteristics of users. In CHI, 2010.
[11] P. Krivitsky, M. Handcock, A. Raftery, and P. Hoff. Representing degree distributions, clustering, and homophily in social networks with latent cluster random effects models. Social Networks, 2009.
[12] P. Lazarsfeld and R. Merton. Friendship as a social process: A substantive and methodological analysis. In Freedom and Control in Modern Society. 1954.
[13] Y. Liu, A. Niculescu-Mizil, and W. Gryc. Topic-link LDA: joint models of topic and author community. In ICML, 2009.
[14] D. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[15] J. McAuley and J. Leskovec. Discovering social circles in ego networks. arXiv:1210.8182, 2012.
[16] M. McPherson. An ecology of affiliation. American Sociological Review, 1983.
[17] A. Menon and C. Elkan. Link prediction via matrix factorization. In ECML/PKDD, 2011.
[18] A. Mislove, B. Viswanath, K. Gummadi, and P. Druschel. You are who you know: Inferring user profiles in online social networks. In WSDM, 2010.
[19] P. Nasirifard and C. Hayes. Tadvise: A twitter assistant based on twitter lists. In SocInfo, 2011.
[20] M. Newman. Modularity and community structure in networks. PNAS, 2006.
[21] G. Palla, I. Derenyi, I. Farkas, and T. Vicsek. Uncovering the overlapping community structure of complex networks in nature and society.
Nature, 2005.
[22] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary MRFs via extended roof duality. In CVPR, 2007.
[23] A. Streich, M. Frank, D. Basin, and J. Buhmann. Multi-assignment clustering for boolean data. JMLR, 2012.
[24] J. Ugander, B. Karrer, L. Backstrom, and C. Marlow. The anatomy of the Facebook social graph. preprint, 2011.
[25] C. Volinsky and A. Raftery. Bayesian information criterion for censored survival models. Biometrics, 2000.
[26] D. Vu, A. Asuncion, D. Hunter, and P. Smyth. Dynamic egocentric models for citation networks. In ICML, 2011.
[27] S. Wu, J. Hofman, W. Mason, and D. Watts. Who says what to whom on twitter. In WWW, 2011.
[28] J. Yang and J. Leskovec. Community-affiliation graph model for overlapping community detection. In ICDM, 2012.
[29] J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. In ICDM, 2012.
[30] T. Yoshida. Toward finding hidden communities based on user profiles. In ICDM Workshops, 2010.
[31] J. Zhao. Examining the evolution of networks based on lists in twitter. In IMSAA, 2011.
Exact and Stable Recovery of Sequences of Signals with Sparse Increments via Differential ℓ1-Minimization

Demba Ba1,2, Behtash Babadi1,2, Patrick Purdon2 and Emery Brown1,2
1 MIT Department of BCS, Cambridge, MA 02139
2 MGH Department of Anesthesia, Critical Care and Pain Medicine, 55 Fruit St, GRJ 4, Boston, MA 02114
demba@mit.edu, {behtash,patrickp}@nmr.mgh.harvard.edu, enb@neurostat.mit.edu

Abstract

We consider the problem of recovering a sequence of vectors, $(x_k)_{k=0}^{K}$, for which the increments $x_k - x_{k-1}$ are $S_k$-sparse (with $S_k$ typically smaller than $S_1$), based on linear measurements $(y_k = A_k x_k + e_k)_{k=1}^{K}$, where $A_k$ and $e_k$ denote the measurement matrix and noise, respectively. Assuming each $A_k$ obeys the restricted isometry property (RIP) of a certain order—depending only on $S_k$—we show that in the absence of noise a convex program, which minimizes the weighted sum of the ℓ1-norm of successive differences subject to the linear measurement constraints, recovers the sequence $(x_k)_{k=1}^{K}$ exactly. This is an interesting result because this convex program is equivalent to a standard compressive sensing problem with a highly-structured aggregate measurement matrix which does not satisfy the RIP requirements in the standard sense, and yet we can achieve exact recovery. In the presence of bounded noise, we propose a quadratically-constrained convex program for recovery and derive bounds on the reconstruction error of the sequence. We supplement our theoretical analysis with simulations and an application to real video data. These further support the validity of the proposed approach for acquisition and recovery of signals with time-varying sparsity.

1 Introduction

In the field of theoretical signal processing, compressive sensing (CS) has arguably been one of the major developments of the past decade. This claim is supported in part by the deluge of research efforts (see for example Rice University’s CS repository [1]) which has followed the inception of this field [2, 3, 4].
CS considers the problem of acquiring and recovering signals that are sparse (or compressible) in a given basis using non-adaptive linear measurements, at a rate smaller than what the Shannon–Nyquist theorem would require. The work [2, 4] derived conditions under which a sparse signal can be recovered exactly from a small set of non-adaptive linear measurements. In [3], the authors propose a recovery algorithm for the case of measurements contaminated by bounded noise. They show that this algorithm is stable, that is, its reconstruction error is within a constant of the noise tolerance. Recovery of these sparse or compressible signals is performed using convex optimization techniques.

The classic CS setting does not take into account the structure, e.g. temporal or spatial, of the underlying high-dimensional sparse signals of interest. In recent years, the attention has shifted to formulations which incorporate the signal structure into the CS framework. A number of problems and applications of interest deal with time-varying signals which may not only be sparse at any given instant, but may also exhibit sparse changes from one instant to the next. For example, a video of a natural scene consists of a sequence of natural images (compressible signals) which exhibits sparse changes from one frame to the next. It is thus reasonable to hope that one would be able to get away with far fewer measurements than prescribed by conventional CS theory to acquire and recover such time-varying signals as videos.

The problem of recovering signals with time-varying sparsity has been referred to in the literature as dynamic CS. A number of empirically-motivated algorithms to solve the dynamic CS problem have been proposed, e.g. [5, 6]. To our knowledge, no recovery guarantees have been proved for these algorithms, which typically assume that the support of the signal and/or the amplitudes of the coefficients change smoothly with time.
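To make the measurement model from the abstract concrete, the following Python/NumPy sketch (dimensions chosen arbitrarily for illustration) generates a sequence whose increments are S-sparse and takes noisy compressive measurements of each frame:

```python
import numpy as np

rng = np.random.default_rng(0)
p, K, n, S = 200, 5, 60, 4        # ambient dimension, frames, measurements per frame, increment sparsity

x = np.zeros(p)
xs, ys, As = [], [], []
for k in range(K):
    increment = np.zeros(p)
    support = rng.choice(p, size=S, replace=False)
    increment[support] = rng.standard_normal(S)   # S-sparse change from the previous frame
    x = x + increment                             # x_k = x_{k-1} + increment
    A = rng.standard_normal((n, p)) / np.sqrt(n)  # compressive measurement matrix (n << p)
    xs.append(x.copy())
    As.append(A)
    ys.append(A @ x + 0.01 * rng.standard_normal(n))  # y_k = A_k x_k + e_k
```

Note that each frame x_k itself may accumulate up to k·S nonzeros; it is only the frame-to-frame differences that stay S-sparse, which is exactly the structure the differential program below exploits.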
In [5], for instance, the authors propose message-passing algorithms for tracking and smoothing of signals with time-varying sparsity. Simulation results show the superiority of the algorithms compared to one based on applying conventional CS principles at each time instant. Dynamic CS algorithms have potential applications to video processing [7], estimation of sources of brain activity from MEG time-series [8], medical imaging [7], and estimation of time-varying networks [9].

To the best of our knowledge, the dynamic CS problem has not received rigorous, theoretical scrutiny. In this paper, we develop rigorous results for dynamic CS both in the absence and in the presence of noise. More specifically, in the absence of noise, we show that one can exactly recover a sequence $(x_k)_{k=0}^{K}$ of vectors, for which the increments $x_k - x_{k-1}$ are $S_k$-sparse, based on linear measurements $y_k = A_k x_k$ and under certain regularity conditions on $(A_k)_{k=1}^{K}$, by solving a convex program which minimizes the weighted sum of the ℓ1-norms of successive differences. In the presence of noise, we derive error bounds for a quadratically-constrained convex program for recovery of the sequence $(x_k)_{k=0}^{K}$.

In the following section, we formulate the problem of interest and introduce our notation. In Section 3, we present our main theoretical results, which we supplement with simulated experiments and an application to real video data in Section 4. In this latter section, we introduce probability-of-recovery surfaces for the dynamic CS problem, which generalize the traditional recovery curves of CS. We give concluding remarks in Section 5.

2 Problem Formulation and Notation

We denote the support of a vector $x \in \mathbb{R}^p$ by $\mathrm{supp}(x) = \{j : x_j \neq 0\}$. We say that a vector $x \in \mathbb{R}^p$ is $S$-sparse if $\|x\|_0 \leq S$, where $\|x\|_0 := |\mathrm{supp}(x)|$. We consider the problem of recovering a sequence $(x_k)_{k=0}^{K}$ of $\mathbb{R}^p$ vectors such that $x_k - x_{k-1}$ is $S_k$-sparse based on linear measurements of the form $y_k = A_k x_k + e_k$.
Here, $A_k \in \mathbb{R}^{n_k \times p}$, $e_k \in \mathbb{R}^{n_k}$ and $y_k \in \mathbb{R}^{n_k}$ denote the measurement matrix, measurement noise, and the observation vector, respectively. Typically, $S_k < n_k \ll p$, which accounts for the compressive nature of the measurements. For convenience, we let $x_0$ be the $\mathbb{R}^p$ vector of all zeros.

For the rest of our treatment, it will be useful to introduce some notation. We will be dealing with sequences (of sets, matrices, vectors); as such, we let the index $k$ denote the $k$th element of any such sequence. Let $J$ be the set of indices $\{1, 2, \cdots, p\}$. For each $k$, we denote by $\{a_{kj} : j \in J\}$ the columns of the matrix $A_k$ and by $\mathcal{H}_k$ the Hilbert space spanned by these vectors. For two matrices $A_1 \in \mathbb{R}^{n_1 \times p}$ and $A_2 \in \mathbb{R}^{n_2 \times p}$, $n_2 \leq n_1$, we say that $A_2 \subset A_1$ if the rows of $A_2$ are distinct and each row of $A_2$ coincides with a row of $A_1$. We say that the matrix $A \in \mathbb{R}^{n \times p}$ satisfies the restricted isometry property (RIP) of order $S$ if, for all $S$-sparse $x \in \mathbb{R}^p$, we have

$$(1 - \delta_S)\, \|x\|_2^2 \;\leq\; \|Ax\|_2^2 \;\leq\; (1 + \delta_S)\, \|x\|_2^2, \quad (1)$$

where $\delta_S \in (0,1)$ is the smallest constant for which Equation 1 is satisfied [2]. Consider the following convex optimization programs:

$$\min_{x_1, x_2, \cdots, x_K} \;\sum_{k=1}^{K} \frac{\|x_k - x_{k-1}\|_1}{\sqrt{S_k}} \quad \text{s.t.} \quad y_k = A_k x_k, \quad k = 1, 2, \cdots, K. \quad (\mathrm{P1})$$

$$\min_{x_1, x_2, \cdots, x_K} \;\sum_{k=1}^{K} \frac{\|x_k - x_{k-1}\|_1}{\sqrt{S_k}} \quad \text{s.t.} \quad \|y_k - A_k x_k\|_2 \leq \epsilon_k, \quad k = 1, 2, \cdots, K. \quad (\mathrm{P2})$$

What theoretical guarantees can we provide on the performance of the above programs for recovery of sequences of signals with sparse increments, respectively in the absence (P1) and in the presence (P2) of noise?

3 Theoretical Results

We first present a lemma giving sufficient conditions for the uniqueness of sequences of vectors with sparse increments given linear measurements in the absence of noise. Then, we prove a theorem which shows that, by strengthening the conditions of this lemma, program (P1) can exactly recover every sequence of vectors with sparse increments.
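The RIP constant $\delta_S$ in (eq. 1) is intractable to compute exactly, but it is easy to probe empirically. The NumPy sketch below (our own illustration) lower-bounds $\delta_S$ for a given matrix by sampling random S-sparse unit vectors; being a Monte-Carlo probe, it gives evidence, not a certificate:

```python
import numpy as np

def rip_lower_bound(A, S, trials=1000, seed=0):
    """Return max over sampled S-sparse unit vectors x of | ||Ax||_2^2 - 1 |,
    a lower bound on the RIP constant delta_S of A in (eq. 1)."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    worst = 0.0
    for _ in range(trials):
        x = np.zeros(p)
        idx = rng.choice(p, size=S, replace=False)
        x[idx] = rng.standard_normal(S)
        x /= np.linalg.norm(x)          # unit-norm S-sparse test vector
        worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return worst
```

For an n × p Gaussian matrix with entries of variance 1/n, this bound concentrates well below 1 once n is a modest multiple of the sparsity level, consistent with the regimes in which the theory below applies.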
Finally, we derive error bounds for program (P2) in the context of recovery of sequences of vectors with sparse increments in the presence of noise. Lemma 1 (Uniqueness of Sequences of Vectors with Sparse Increments). Suppose $(S_k)_{k=0}^K$ is such that $S_0 = 0$ and, for each $k$, $S_k \ge 1$. Let $A_k$ satisfy the RIP of order $2S_k$. Let $x_k \in \mathbb{R}^p$, supported on $T_k \subseteq J$, be such that $\|x_k - x_{k-1}\|_0 \le S_k$, for $k = 1, 2, \cdots, K$. Suppose $T_0 = \emptyset$ without loss of generality (w.l.o.g.). Then, given $A_k$ and $y_k = A_k x_k$, the sequence of sets $(T_k)_{k=1}^K$, and consequently the sequence of coefficients $(x_k)_{k=1}^K$, can be reconstructed uniquely. Proof. For brevity, and w.l.o.g., we prove the lemma for $K = 2$. We prove that there is a unique choice of $x_1$ and $x_2$ satisfying $\|x_1 - x_0\|_0 \le S_1$ and $\|x_2 - x_1\|_0 \le S_2$ and obeying $y_1 = A_1 x_1$, $y_2 = A_2 x_2$. We proceed by contradiction, and assume that there exist $x_1' \ne x_1$ and $x_2' \ne x_2$, supported on $T_1'$ and $T_2'$ respectively, such that $y_1 = A_1 x_1 = A_1 x_1'$, $y_2 = A_2 x_2 = A_2 x_2'$, $\|x_1' - x_0\|_0 \le S_1$, and $\|x_2' - x_1'\|_0 \le S_2$. Then $\|A_1(x_1 - x_1')\|_2 = 0$. Using the lower bound in the RIP of $A_1$ and the fact that $\delta_{2S_1} < 1$, this leads to $\|x_1 - x_1'\|_2^2 = 0$, i.e. $x_1 = x_1'$, contradicting our assumption that $x_1 \ne x_1'$. Now consider the case of $x_2$ and $x_2'$. We have $$0 = A_2(x_2 - x_2') = A_2(x_2 - x_1 + x_1 - x_2') = A_2(x_2 - x_1 + x_1' - x_2'). \qquad (2)$$ Using the lower bound in the RIP of $A_2$ and the fact that $\delta_{2S_2} < 1$, this leads to $\|x_2 - x_1 + x_1' - x_2'\|_2^2 = 0$, i.e. $x_2 - x_1 = x_2' - x_1'$, which implies $x_2' = x_2$, contradicting our assumption that $x_2 \ne x_2'$. As in Cand`es and Tao's work [2], this lemma only suggests what may be possible in terms of recovery of $(x_k)_{k=1}^K$ through a combinatorial, brute-force approach. By imposing stricter conditions on $(\delta_{2S_k})_{k=1}^K$, we can recover $(x_k)_{k=1}^K$ by solving a convex program. This is summarized in the following theorem. Theorem 2 (Exact Recovery in the Absence of Noise).
Let $(\bar{x}_k)_{k=1}^K$ be a sequence of $\mathbb{R}^p$ vectors such that, for each $k$, $\|\bar{x}_k - \bar{x}_{k-1}\|_0 \le S_k$ for some $S_k < p/2$. Suppose that the measurements $y_k = A_k\bar{x}_k \in \mathbb{R}^{n_k}$ are given, such that $n_k < p$, $A_1 \supset A_2$, $A_k = A_2$ for $k = 3, \cdots, K$, and $(A_k)_{k=1}^K$ satisfies $\delta_{S_k} + \delta_{2S_k} + \delta_{3S_k} < 1$ for $k = 1, 2, \cdots, K$. Then, the sequence $(\bar{x}_k)_{k=1}^K$ is the unique minimizer of the program (P1). Proof. As before, we consider the case $K = 2$. The proof easily generalizes to the case of arbitrary $K$. We can re-write the program as follows: $$\min_{x_1, x_2} \frac{\|x_1\|_1}{\sqrt{S_1}} + \frac{\|x_2 - x_1\|_1}{\sqrt{S_2}} \quad \text{s.t.} \quad A_1 x_1 = A_1\bar{x}_1, \; A_2(x_2 - x_1) = A_2(\bar{x}_2 - \bar{x}_1), \qquad (3)$$ where we have used the fact that $A_1 \supset A_2$: $A_2 x_2 - A_1 x_1 = A_2\bar{x}_2 - A_1\bar{x}_1$ implies $A_2(x_2 - x_1) = A_2(\bar{x}_2 - \bar{x}_1)$. Let $x_1^*$ and $x_2^*$ be the solutions to the above program. Let $T_1 = \mathrm{supp}(\bar{x}_1)$ and $\Delta T_2 = \mathrm{supp}(\bar{x}_2 - \bar{x}_1)$. Assume $|T_1| \le S_1$ and $|\Delta T_2| \le S_2$. Key element of the proof: The key element of the proof is the existence of vectors $u_1, u_2$ satisfying the exact reconstruction property (ERP) [10, 11]. It has been shown in [10] that, given $\delta_{S_k} + \delta_{2S_k} + \delta_{3S_k} < 1$ for $k = 1, 2$, such vectors exist and satisfy: 1. $\langle u_1, a_{1j}\rangle = \mathrm{sgn}(\bar{x}_{1,j})$ for all $j \in T_1$, and $\langle u_2, a_{2j}\rangle = \mathrm{sgn}(\bar{x}_{2,j} - \bar{x}_{1,j})$ for all $j \in \Delta T_2$. 2. $|\langle u_1, a_{1j}\rangle| < 1$ for all $j \in T_1^c$, and $|\langle u_2, a_{2j}\rangle| < 1$ for all $j \in \Delta T_2^c$. Since $\bar{x}_1$ and $\bar{x}_2 - \bar{x}_1$ are feasible, we have $$\frac{\|x_1^*\|_1}{\sqrt{S_1}} + \frac{\|x_2^* - x_1^*\|_1}{\sqrt{S_2}} \le \frac{\|\bar{x}_1\|_1}{\sqrt{S_1}} + \frac{\|\bar{x}_2 - \bar{x}_1\|_1}{\sqrt{S_2}}.$$
(4) Expanding the left-hand side of (4) and applying the ERP vectors $u_1, u_2$,

$$\frac{\|x_1^*\|_1}{\sqrt{S_1}} + \frac{\|x_2^* - x_1^*\|_1}{\sqrt{S_2}} = \frac{1}{\sqrt{S_1}}\sum_{j\in T_1}\bigl|\bar{x}_{1,j} + (x_{1,j}^* - \bar{x}_{1,j})\bigr| + \frac{1}{\sqrt{S_1}}\sum_{j\in T_1^c}|x_{1,j}^*| + \frac{1}{\sqrt{S_2}}\sum_{j\in\Delta T_2}\bigl|(\bar{x}_{2,j} - \bar{x}_{1,j}) + \bigl(x_{2,j}^* - x_{1,j}^* - (\bar{x}_{2,j} - \bar{x}_{1,j})\bigr)\bigr| + \frac{1}{\sqrt{S_2}}\sum_{j\in\Delta T_2^c}|x_{2,j}^* - x_{1,j}^*|$$

$$\ge \frac{1}{\sqrt{S_1}}\sum_{j\in T_1}\mathrm{sgn}(\bar{x}_{1,j})\bigl(\bar{x}_{1,j} + (x_{1,j}^* - \bar{x}_{1,j})\bigr) + \frac{1}{\sqrt{S_1}}\sum_{j\in T_1^c} x_{1,j}^*\langle u_1, a_{1j}\rangle + \frac{1}{\sqrt{S_2}}\sum_{j\in\Delta T_2}\mathrm{sgn}(\bar{x}_{2,j} - \bar{x}_{1,j})\bigl((\bar{x}_{2,j} - \bar{x}_{1,j}) + \bigl(x_{2,j}^* - x_{1,j}^* - (\bar{x}_{2,j} - \bar{x}_{1,j})\bigr)\bigr) + \frac{1}{\sqrt{S_2}}\sum_{j\in\Delta T_2^c}(x_{2,j}^* - x_{1,j}^*)\langle u_2, a_{2j}\rangle,$$

where we used $\mathrm{sgn}(\bar{x}_{1,j}) = \langle u_1, a_{1j}\rangle$ on $T_1$ and $\mathrm{sgn}(\bar{x}_{2,j} - \bar{x}_{1,j}) = \langle u_2, a_{2j}\rangle$ on $\Delta T_2$. Collecting terms,

$$= \frac{1}{\sqrt{S_1}}\sum_{j\in T_1}|\bar{x}_{1,j}| + \frac{1}{\sqrt{S_1}}\bigl\langle u_1, \underbrace{\textstyle\sum_{j\in J} x_{1,j}^* a_{1j}}_{A_1 x_1^*} - \underbrace{\textstyle\sum_{j\in T_1} \bar{x}_{1,j} a_{1j}}_{A_1 \bar{x}_1}\bigr\rangle + \frac{1}{\sqrt{S_2}}\sum_{j\in\Delta T_2}|\bar{x}_{2,j} - \bar{x}_{1,j}| + \frac{1}{\sqrt{S_2}}\bigl\langle u_2, \underbrace{\textstyle\sum_{j\in J} (x_{2,j}^* - x_{1,j}^*) a_{2j}}_{A_2(x_2^* - x_1^*)} - \underbrace{\textstyle\sum_{j\in\Delta T_2} (\bar{x}_{2,j} - \bar{x}_{1,j}) a_{2j}}_{A_2(\bar{x}_2 - \bar{x}_1)}\bigr\rangle = \frac{\|\bar{x}_1\|_1}{\sqrt{S_1}} + \frac{\|\bar{x}_2 - \bar{x}_1\|_1}{\sqrt{S_2}}, \qquad (5)$$

where the last equality uses feasibility, $A_1 x_1^* = A_1\bar{x}_1$ and $A_2(x_2^* - x_1^*) = A_2(\bar{x}_2 - \bar{x}_1)$. This implies that all of the inequalities in the derivation above must in fact be equalities. In particular,

$$\frac{1}{\sqrt{S_1}}\sum_{j\in T_1^c}|x_{1,j}^*| + \frac{1}{\sqrt{S_2}}\sum_{j\in\Delta T_2^c}|x_{2,j}^* - x_{1,j}^*| = \frac{1}{\sqrt{S_1}}\sum_{j\in T_1^c}x_{1,j}^*\langle u_1, a_{1j}\rangle + \frac{1}{\sqrt{S_2}}\sum_{j\in\Delta T_2^c}(x_{2,j}^* - x_{1,j}^*)\langle u_2, a_{2j}\rangle \le \frac{1}{\sqrt{S_1}}\sum_{j\in T_1^c}|x_{1,j}^*|\,\underbrace{|\langle u_1, a_{1j}\rangle|}_{<1} + \frac{1}{\sqrt{S_2}}\sum_{j\in\Delta T_2^c}|x_{2,j}^* - x_{1,j}^*|\,\underbrace{|\langle u_2, a_{2j}\rangle|}_{<1}.$$

Therefore, $x_{1,j}^* = 0$ for all $j \in T_1^c$, and $x_{2,j}^* - x_{1,j}^* = 0$ for all $j \in \Delta T_2^c$. Using the lower bounds in the RIP of $A_1$ and $A_2$ leads to $$0 = \|A_1(x_1^* - \bar{x}_1)\|_2 \ge (1-\delta_{2S_1})\,\|x_1^* - \bar{x}_1\|_2, \qquad (6)$$ $$0 = \|A_2(x_2^* - x_1^* - (\bar{x}_2 - \bar{x}_1))\|_2 \ge (1-\delta_{2S_2})\,\|x_2^* - x_1^* - (\bar{x}_2 - \bar{x}_1)\|_2, \qquad (7)$$ so that $x_1^* = \bar{x}_1$ and $x_2^* = \bar{x}_2$. Uniqueness follows from simple convexity arguments. A few remarks are in order. First, Theorem 2 effectively asserts that the program (P1) is equivalent to sequentially solving (i.e. for $k = 1, 2, \cdots, K$) the following program, starting with $x_0^*$ the vector of all zeros in $\mathbb{R}^p$: $$\min_{x_k} \|x_k - x_{k-1}^*\|_1 \quad \text{s.t.} \quad y_k - A_k x_{k-1}^* = A_k(x_k - x_{k-1}^*), \; k = 1, 2, \cdots, K. \qquad (8)$$ Second, it is interesting that Theorem 2 holds at all: naively applying standard CS principles to our problem suggests that it should not. To see this, if we let $w_k = x_k - x_{k-1}$, then program (P1) becomes $\min_{w_1,\cdots,w_K} \sum_{k=1}^K \frac{\|w_k\|_1}{\sqrt{S_k}}$ s.t.
$y = Aw$, (9) where $w = (w_1', \cdots, w_K')' \in \mathbb{R}^{Kp}$, $y = (y_1', \cdots, y_K')' \in \mathbb{R}^{\sum_{k=1}^K n_k}$, and $A$ is given by $$A = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ A_2 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A_K & A_K & \cdots & A_K \end{bmatrix}.$$ As $K$ grows large, the columns of $A$ become increasingly correlated or coherent, which intuitively means that $A$ would be far from satisfying the RIP of any order. Yet, we get exact recovery. This is an important reminder that the RIP is a sufficient, but not necessary, condition for recovery. Third, the assumption that $A_1 \supset A_2$, $A_k = A_2$ for $k = 3, \cdots, K$, makes practical sense, as it allows one to avoid the prohibitive storage and computational cost of generating several distinct measurement matrices. Note that if a random $A_1$ satisfies the RIP of some order and $A_1 \supset A_2$, then $A_2$ also satisfies the RIP (of lower order). Lastly, the key advantage of dynamic CS recovery (P1) is the smaller number of measurements required compared to the classical approach [2], which would solve $K$ separate $\ell_1$-minimization problems. For each $k = 1, \cdots, K$, dynamic recovery requires $n_k \ge C S_k \log(p/S_k)$ measurements, compared to $n_k \ge C S_1 \log(p/S_1)$ for classical recovery. Since $S_k \le S_1 \ll p$, i.e., the sparse increments are small, we conclude that dynamic CS requires fewer measurements. We now move to the case where the measurements are perturbed by bounded noise. More specifically, we derive error bounds for a quadratically-constrained convex program for recovery of sequences of vectors with sparse increments in the presence of noise. Theorem 3 (Conditionally Stable Recovery in the Presence of Noise). Let $(\bar{x}_k)_{k=1}^K$ be as stated in Theorem 2, and $x_0$ be the vector of all zeros in $\mathbb{R}^p$. Suppose that the measurements $y_k = A_k\bar{x}_k + e_k \in \mathbb{R}^{n_k}$ are given such that $\|e_k\|_2 \le \epsilon_k$ and $(A_k)_{k=1}^K$ satisfy $\delta_{3S_k} + 3\delta_{4S_k} < 2$, for each $k$. Let $(x_k^*)_{k=1}^K$ be the solution to the program (P2).
Finally, let $h_k := (x_k^* - x_{k-1}^*) - (\bar{x}_k - \bar{x}_{k-1})$, for $k = 1, 2, \cdots, K$, with the convention that $\bar{x}_0 := x_0^* := 0 \in \mathbb{R}^p$. Then, we have $$\sum_{k=1}^K \|h_k\|_2 \le \sum_{k=1}^K 2C_{S_k}\epsilon_k + \sum_{k=2}^K C_{S_k}\Bigl\|A_k \sum_{\ell<k} h_\ell\Bigr\|_2, \qquad (10)$$ where, for each $k = 1, 2, \cdots, K$, $C_{S_k}$ is only a function of $\delta_{3S_k}$ and $\delta_{4S_k}$. Proof sketch. Cand`es et al.'s proof for stable recovery in the presence of bounded noise relies on the so-called tube and cone constraints [3]. Our proof of Theorem 3 relies on generalizations of these two constraints. We omit some of the algebraic details of the proof, as they can be filled in by following the proof of [3] for the time-invariant case. Generalized tube constraint: Let $\bar{w}_k = \bar{x}_k - \bar{x}_{k-1}$ and $w_k^* = x_k^* - x_{k-1}^*$, for $k = 1, \cdots, K$. The generalized tube constraints are obtained by a simple application of the triangle inequality: $$\|A_1(\bar{w}_1 - w_1^*)\|_2 \le 2\epsilon_1, \qquad (11)$$ $$\|A_2(\bar{w}_2 - w_2^*)\|_2 \le 2\epsilon_2 + \|A_2 h_1\|_2, \qquad (12)$$ and more generally, $$\|A_k(\bar{w}_k - w_k^*)\|_2 \le 2\epsilon_k + \Bigl\|A_k \sum_{\ell<k} h_\ell\Bigr\|_2, \quad \text{for } k = 2, \cdots, K. \qquad (13)$$ Generalized cone constraint: To obtain a generalization of the cone constraint in [3], we need to account for the fact that the increments $(x_k - x_{k-1})_{k=1}^K$ may have different support sizes. The resulting generalized cone constraint is $$\sum_{k=1}^K \frac{\|h_{k,\Delta T_k^c}\|_1}{\sqrt{S_k}} \le \sum_{k=1}^K \frac{\|h_{k,\Delta T_k}\|_1}{\sqrt{S_k}}, \qquad (14)$$ where $\Delta T_k = \mathrm{supp}(\bar{x}_k - \bar{x}_{k-1})$. The proof proceeds along the lines of that presented in [3], with $$C_{S_k} = \frac{1+\sqrt{1/3}}{\sqrt{1-\delta_{4S_k}} - \sqrt{(1+\delta_{3S_k})/3}}.$$ Equation (10) is an implicit bound: the second term in the inequality reflects the fact that, for a given $k$, the error $x_k^* - \bar{x}_k$ depends on previous errors. Our bound proves a form of stability that is conditional on the stability of previous estimates. The appeal of dynamic CS comes from the fact that one may pick the constants $C_{S_k}$ in the bound above to be much smaller than those of the corresponding conventional CS bound [3] (Equation (10) without the second term). This ensures that the errors do not propagate in an unbounded manner.
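To get a feel for the magnitude of these constants, the following quick computation (our illustration, using the expression for $C_{S_k}$ as we read it from the proof sketch above) evaluates $C_{S_k}$ for a few sample RIP constants satisfying the hypothesis $\delta_{3S_k} + 3\delta_{4S_k} < 2$; the numeric values of the deltas are ours, chosen for illustration.

```python
import math

def C(delta3S: float, delta4S: float) -> float:
    """Stability constant of Theorem 3:
    (1 + sqrt(1/3)) / (sqrt(1 - delta_4S) - sqrt((1 + delta_3S)/3))."""
    assert delta3S + 3 * delta4S < 2, "hypothesis of Theorem 3"
    return (1 + math.sqrt(1 / 3)) / (math.sqrt(1 - delta4S) - math.sqrt((1 + delta3S) / 3))

for d3, d4 in [(0.1, 0.1), (0.2, 0.25), (0.3, 0.4)]:
    print(f"delta_3S={d3}, delta_4S={d4}  ->  C_S = {C(d3, d4):.2f}")
```

Note how the constant blows up as the denominator approaches zero, i.e. as the RIP constants approach the boundary of the hypothesis; this is the sense in which smaller $S_k$ (hence smaller deltas) keeps the error bound (10) tight.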
One may obtain sharper bounds using techniques such as those in [12]. In the next section, we use simulations to compare explicitly the average mean-squared error (MSE) of conventional CS and our algorithm. 4 Experiments/Simulations We ran a series of numerical experiments to assess the ability of the convex programs introduced above to recover signals with time-varying sparsity. In the absence of noise, the experiments result in probability-of-recovery surfaces for the dynamic CS problem, which generalize the traditional recovery curves of CS. In the presence of noise, we compare dynamic CS to conventional CS in terms of their reconstruction error as a function of signal-to-noise ratio (SNR). We also show an application to real video data. All optimization problems were solved using CVX, a package for specifying and solving convex programs [13, 14]. 4.1 Simulated noiseless data Experimental set-up:
1. Select $n_k$, for $k = 1, \cdots, K$, and $p$, so that the $A_k$'s are $n_k \times p$ matrices; sample each $A_k$ with independent Gaussian entries, for $k = 1, 2, \cdots, K$.
2. Select $S_1 = \lceil s_1 \cdot p\rceil$, $s_1 \in (0,1)$, and $S_k = \lceil s_2 \cdot p\rceil$, $s_2 \in (0,1)$, for $k = 2, \cdots, K$.
3. Select $T_1$ of size $S_1$ uniformly at random and set $\bar{x}_{1,j} = 1$ for all $j \in T_1$, and 0 otherwise; for $k = 2, \cdots, K$, select $\Delta T_k = \mathrm{supp}(\bar{x}_k - \bar{x}_{k-1})$ of size $S_k$ uniformly at random and set $\bar{x}_{k,j} - \bar{x}_{k-1,j} = 1$ for all $j \in \Delta T_k$, and 0 otherwise.
4. Form $y_k = A_k\bar{x}_k$, for $k = 1, 2, \cdots, K$; solve the program (P1) to obtain $(x_k^*)_{k=1}^K$.
5. Compare $(\bar{x}_k)_{k=1}^K$ to $(x_k^*)_{k=1}^K$.
6. Repeat 100 times for each $(s_1, s_2)$.
We compare dynamic CS to conventional CS applied independently at each $k$. Figure 1 shows results for $n_k = 100$, $p = 200$, and $K = 2$. We can infer the expected behavior for larger values of $K$ from the case $K = 2$ and from the theory developed above (see remarks below).
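The data-generation steps above can be sketched as follows (a minimal script in our notation; the recovery step itself would call a convex solver such as CVX and is omitted here, and the seed and sparsity fractions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p, K = 200, 2
n = [100, 100]                 # n_k, k = 1..K
s1, s2 = 0.10, 0.05
S = [int(np.ceil(s1 * p))] + [int(np.ceil(s2 * p))] * (K - 1)   # S_1, ..., S_K

# Step 1: Gaussian measurement matrices.
A = [rng.standard_normal((n[k], p)) for k in range(K)]

# Step 3: x_1 supported on a random T_1; later frames add random sparse increments.
x = []
prev = np.zeros(p)
for k in range(K):
    increment = np.zeros(p)
    support = rng.choice(p, size=S[k], replace=False)
    increment[support] = 1.0
    prev = prev + increment
    x.append(prev.copy())

# Step 4: measurements y_k = A_k x_k (noiseless case).
y = [A[k] @ x[k] for k in range(K)]

# Each increment x_k - x_{k-1} has exactly S_k nonzeros by construction.
print([len(np.flatnonzero(x[k] - (x[k - 1] if k else 0))) for k in range(K)])  # [20, 10]
```

In the noisy variant of Section 4.2, one would additionally add entries drawn uniformly from $(-\alpha, \alpha)$ to each $y_k$.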
The probability of recovery for conventional CS is 1 on the set $\{(s_1, s_2) : s_1 + (K-1)s_2 \le s^*\}$, and 0 on its complement, where $s^*$ is the sparsity level at which a phase transition occurs in the conventional CS problem [2]. The figure shows that, when the measurement matrices $A_k$, for $k = 2, \cdots, K$, are derived from $A_1$ as assumed in Theorem 2, dynamic CS (DCS 1) outperforms conventional CS (CCS). However, when we used different measurement matrices (DCS 2), we see an asymmetry between $s_1$ and $s_2$ which is not predicted by Theorem 2. Intuitively, this is because for small $s_2$, the program (P1) operates in a regime where we have not one but multiple measurements to recover a given sparse vector [15]. Program (P1) is equivalent to sequential CS; therefore, we expect this behavior to persist for larger $K$. Figure 1: Probability-of-recovery maps as a function of $s_1$ and $s_2$ (panels: CCS, DCS 1, DCS 2). 4.2 Simulated noisy data The experimental set-up differs slightly from the one of the noiseless case. In Step 2, we fix constant values for $S_1$ and $S_k$, $k = 2, \cdots, K$. Moreover, in Step 4, we form $y_k = A_k\bar{x}_k + e_k$, where the entries of $e_k$ are drawn uniformly in $(-\alpha, \alpha)$. In Step 6, we repeat the experiment 100 times for each $\alpha$. In our experiments, we used $n_1 = 100$, $S_1 = 5$, $n_2 = 20$, $S_k = 1$, for $k = 2, \cdots, K$, and $p = 200$. We report results for $K = 2$ and $K = 10$, and choose values of $\alpha$ resulting in SNRs in the range $[5, 30]$ dB, in increments of 5 dB. Figure 2 displays the average MSE, given by $10 \cdot \log_{10}\bigl(\frac{1}{K}\sum_{k=1}^K \|\bar{x}_k - x_k^*\|_2^2\bigr)$, of conventional CS and dynamic CS as a function of SNR. The figure shows that the proposed algorithm outperforms conventional CS, and is robust to noise.
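For concreteness, the error metric plotted in Figure 2 can be computed as follows (a small helper of ours, with made-up toy vectors standing in for the true and recovered frames):

```python
import numpy as np

def average_mse_db(x_true, x_hat):
    """Average MSE in dB: 10*log10((1/K) * sum_k ||x_k - x_k^*||_2^2)."""
    K = len(x_true)
    mse = sum(np.sum((t - h) ** 2) for t, h in zip(x_true, x_hat)) / K
    return 10 * np.log10(mse)

# Toy example: two frames, each off by a single error of magnitude 0.1.
x_true = [np.array([1.0, 0.0, 2.0]), np.array([1.0, 1.0, 2.0])]
x_hat  = [np.array([1.1, 0.0, 2.0]), np.array([1.0, 0.9, 2.0])]
print(round(average_mse_db(x_true, x_hat), 2))   # 10*log10(0.01) = -20.0
```

Lower (more negative) values indicate better reconstruction, which is why the dynamic CS curves in Figure 2 lie below the conventional ones.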
Figure 2: Average MSE (dB) as a function of SNR, for $K = 2$ (left panel) and $K = 10$ (right panel), comparing conventional CS and dynamic CS. 4.3 Real video data We consider the problem of recovering the first 10 frames of a real video using our dynamic CS algorithm, and conventional CS applied to each frame separately. In both cases, we assume the absence of noise. We use a video portraying a close-up of a woman engaged in a telephone conversation [16]. The video has a frame rate of 12 Hz and a total of 150 frames, each of size 176 × 144. Due to computational constraints, we downsampled each frame by a factor of 3 in each dimension. We obtained measurements in the wavelet domain by performing a two-level decomposition of each frame using the Daubechies-1 wavelet. In Table 1, we report the negative of the normalized MSE, given by $-10 \cdot \log_{10}\bigl(\frac{1}{10}\sum_{k=1}^{10}\frac{\|\bar{x}_k - x_k^*\|_2^2}{\|\bar{x}_k\|_2^2}\bigr)$ in dB, for various $(n_1, n_2)$ measurement pairs ($n_k = n_2$ for $k = 3, \cdots, 10$). Larger numbers indicate better reconstruction accuracy. The table shows that, for all $(n_1, n_2)$ considered, dynamic CS outperforms conventional CS. The average performance gap across $(n_1, n_2)$ pairs is approximately 7 dB. Interestingly, for a sufficient number of measurements, dynamic CS improves as the video progresses. We observed this phenomenon in the small-$s_2$ regime of the simulations. Figure 3 shows the reconstructed frames highlighted in Table 1. The frames reconstructed using dynamic CS are more visually appealing than their conventional CS counterparts.

Table 1: Normalized negated MSE in dB for frames 1, 5, 10, and the average over all 10 frames. Each frame consists of approximately 3000 pixels. Each row corresponds to a different $(n_1, n_2)$ pair (refer to text). Larger numbers indicate better reconstruction accuracy. Entries are CCS / DCS.

(n1, n2)       Frame 1        Frame 5        Frame 10       Avg. (10 frames)
(2400, 2400)   27.8 / 27.8    28.5 / 38      28 / 41.1      28.2 / 35
(2000, 2000)   22.4 / 22.4    22.3 / 31.3    22.9 / 35.6    22.8 / 28.9
(2400, 1200)   27.8 / 27.8    15.2 / 24.2    14.8 / 25.4    15.9 / 25.5
(1600, 1600)   19.1 / 19.1    18.9 / 25      19.8 / 29.7    19.1 / 24.1
(1600, 800)    19.2 / 19.2    8.4 / 17.6     9.3 / 16.7     8.4 / 17.8

Figure 3: Comparison of frames 1, 5, and 10 reconstructed using dynamic CS and conventional CS, $(n_1, n_2) = (2000, 2000)$ (columns: Original, CCS, DCS). 5 Discussion In this paper, we proved rigorous guarantees for convex programs for recovery of sequences of vectors with sparse increments, both in the absence and in the presence of noise. Our formulation of the dynamic CS problem is more general than the empirically-motivated solutions proposed in the literature, e.g. [5, 6]. Indeed, we only require that $x_1$ and the increments be sparse. Therefore, there may exist values of $k$ such that $x_k$ is not a sparse vector. We supplemented our theoretical analysis with simulation experiments and an application to real video data. In the noiseless case, we introduced probability-of-recovery surfaces which generalize traditional CS recovery curves. The recovery surfaces showed that dynamic CS significantly outperforms conventional CS, especially for long sequences (large $K$). In the noisy case, simulations showed that dynamic CS also outperforms conventional CS for SNR values ranging from 5 to 30 dB. Our results on real video data demonstrated that dynamic CS outperforms conventional CS in terms of the visual appeal of the reconstructed frames, and by an average MSE gap of 7 dB. References [1] Compressive sensing resources, Rice University, http://dsp.rice.edu/cs/. [2] E.J. Cand`es and T. Tao. Decoding by linear programming. Information Theory, IEEE Transactions on, 51(12):4203–4215, 2005. [3] E.J. Cand`es, J.K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
[4] D.L. Donoho. Compressed sensing. Information Theory, IEEE Transactions on, 52(4):1289–1306, 2006. [5] J. Ziniel, L.C. Potter, and P. Schniter. Tracking and smoothing of time-varying sparse signals via approximate belief propagation. In Signals, Systems and Computers (ASILOMAR), 2010 Conference Record of the Forty Fourth Asilomar Conference on, pages 808–812. IEEE, 2010. [6] M. Salman Asif and J. Romberg. Dynamic updating for ℓ1-minimization. Selected Topics in Signal Processing, IEEE Journal of, 4(2):421–434, 2010. [7] H. Jung and J.C. Ye. Motion estimated and compensated compressed sensing dynamic magnetic resonance imaging: What we can learn from video compression techniques. International Journal of Imaging Systems and Technology, 20(2):81–98, 2010. [8] J.W. Phillips, R.M. Leahy, and J.C. Mosher. MEG-based imaging of focal neuronal current sources. Medical Imaging, IEEE Transactions on, 16(3):338–348, 1997. [9] M. Kolar, L. Song, A. Ahmed, and E.P. Xing. Estimating time-varying networks. The Annals of Applied Statistics, 4(1):94–123, 2010. [10] E. Cand`es, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, June 2004. Submitted. [11] E. Cand`es and T. Tao. Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, October 2004. Submitted. [12] E.J. Cand`es. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9):589–592, 2008. [13] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.22. http://cvxr.com/cvx, May 2012. [14] M. Grant and S. Boyd. Graph implementations for nonsmooth convex programs. In V. Blondel, S. Boyd, and H. Kimura, editors, Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences, pages 95–110. Springer-Verlag Limited, 2008. [15] S.F. Cotter, B.D. Rao, K.
Engan, and K. Kreutz-Delgado. Sparse solutions to linear inverse problems with multiple measurement vectors. Signal Processing, IEEE Transactions on, 53(7):2477–2488, 2005. [16] Softage video codec demo download page. Softage, http://www.softage.ru/products/video-codec/uncompressed/suzie.avi.
Tight Bounds on Profile Redundancy and Distinguishability Jayadev Acharya ECE, UCSD jacharya@ucsd.edu Hirakendu Das Yahoo! hdas@yahoo-inc.com Alon Orlitsky ECE & CSE, UCSD alon@ucsd.edu Abstract The minimax KL-divergence of any distribution from all distributions in a collection P has several practical implications. In compression, it is called redundancy and represents the least additional number of bits over the entropy needed to encode the output of any distribution in P. In online estimation and learning, it is the lowest expected log-loss regret when guessing a sequence of random values generated by a distribution in P. In hypothesis testing, it upper bounds the largest number of distinguishable distributions in P. Motivated by problems ranging from population estimation to text classification and speech recognition, several machine-learning and information-theory researchers have recently considered label-invariant observations and properties induced by i.i.d. distributions. A sufficient statistic for all these properties is the data's profile, the multiset of the number of times each data element appears. Improving on a sequence of previous works, we show that the redundancy of the collection of distributions induced over profiles by length-$n$ i.i.d. sequences is between $0.3 \cdot n^{1/3}$ and $n^{1/3}\log^2 n$, in particular establishing its exact growth power. 1 Introduction Information theory, machine learning, and statistics are closely related disciplines. One of their main intersection areas is the confluence of universal compression, online learning, and hypothesis testing. We consider two concepts in this overlap: the minimax KL divergence, a fundamental measure for, among other things, how difficult distributions are to compress, predict, and classify; and profiles, a relatively new approach for compression, classification, and property testing over large alphabets.
Improving on several previous results, we determine the exact growth power of the KL-divergence minimax of profiles of i.i.d. distributions over any alphabet. 1.1 Minimax KL divergence As is well known in information theory, the expected number of bits required to compress data $X$ generated according to a known distribution $P$ is the distribution's entropy, $H(P) = E_P \log 1/P(X)$, and is achieved by encoding $X$ using roughly $\log 1/P(X)$ bits. However, in many applications $P$ is unknown, except that it belongs to a known collection $\mathcal{P}$ of distributions, for example the collection of all i.i.d., or all Markov, distributions. This uncertainty typically raises the number of bits above the entropy and is studied in universal compression [9, 13]. Any encoding corresponds to some distribution $Q$ over the encoded symbols. Hence the increase in the expected number of bits used to encode the output of $P$ is $E_P \log 1/Q(X) - H(P) = D(P\|Q)$, the KL divergence between $P$ and $Q$. Typically one is interested in the highest increase for any distribution $P \in \mathcal{P}$, and finds the encoding that minimizes it. The resulting quantity, called the (expected) redundancy of $\mathcal{P}$, e.g., [8, Chap. 13], is therefore the KL minimax $$R(\mathcal{P}) \stackrel{\mathrm{def}}{=} \min_Q \max_{P \in \mathcal{P}} D(P\|Q).$$ The same quantity arises in online learning, e.g., [5, Ch. 9], where the probabilities of random elements $X_1, \ldots, X_n$ are sequentially estimated. One of the most popular measures for the performance of an estimator $Q$ is the per-symbol log loss $\frac{1}{n}\sum_{i=1}^n \log 1/Q(X_i \mid X^{i-1})$. As in compression, for underlying distribution $P \in \mathcal{P}$, the expected log loss is $E_P \log 1/Q(X)$, and the log-loss regret is $E_P \log 1/Q(X) - H(P) = D(P\|Q)$. The maximal expected regret for any distribution in $\mathcal{P}$, minimized over all estimators $Q$, is again the KL minimax, namely, redundancy. In statistics, redundancy arises in multiple hypothesis testing. Consider the largest number of distributions that can be distinguished from their observations.
For example, the largest number of topics distinguishable based on text of a given length. Let $\mathcal{P}$ be a collection of distributions over a support set $X$. As in [18], a sub-collection $S \subseteq \mathcal{P}$ of the distributions is $\epsilon$-distinguishable if there is a mapping $f : X \to S$ such that if $X$ is generated by a distribution $S \in S$, then $P(f(X) \ne S) \le \epsilon$. Let $M(\mathcal{P}, \epsilon)$ be the largest number of $\epsilon$-distinguishable distributions in $\mathcal{P}$, and let $h(\epsilon)$ be the binary entropy function. In Section 4 we show that for all $\mathcal{P}$, $$(1-\epsilon)\log M(\mathcal{P}, \epsilon) \le R(\mathcal{P}) + h(\epsilon), \qquad (1)$$ and in many cases, like the one considered here, the inequality is close to equality. Redundancy has many other connections to data compression [27, 28], the minimum-description-length principle [3, 16, 17], sequential prediction [21], and gambling [20]. Because of the fundamental nature of $R(\mathcal{P})$, and since tight bounds on it often reveal the structure of $\mathcal{P}$, the value of $R(\mathcal{P})$ has been studied extensively in all three communities, e.g., the above references as well as [29, 37], and a related minimax in [6]. 1.2 Redundancy of i.i.d. distributions The most extensively studied collections consist of independent, identically distributed (i.i.d.) distributions. For example, for the collection $I_k^n$ of length-$n$ i.i.d. distributions over alphabets of size $k$, a string of works [7, 10, 11, 28, 33, 35, 36] determined the redundancy up to a diminishing additive term, $$R(I_k^n) = \frac{k-1}{2}\log n + C_k + o(1), \qquad (2)$$
Unfortunately, while R(In k ) increases logarithmically in the sequence length n, it grows linearly in the alphabet size k. For sufficiently large k, this value even exceeds n itself, showing that general distributions over large alphabets cannot be compressed or learned at a uniform rate over all alphabet sizes, and as the alphabet size increases, progressively larger lengths are needed to achieve a given redundancy, learning rate, or test error. 1.3 Patterns Partly motivated by redundancy’s fast increase with the alphabet size, a new approach was recently proposed to address compression, estimation, classification, and property testing over large alphabets. The pattern [25] of a sequence represents the relative order in which its symbols appear. For example, the pattern of abracadabra is 12314151231. A natural method to compress a sequence over a large alphabet is to compress its pattern as well as the dictionary that maps the order to the original symbols. For example, for abracadabra, 1 →a, 2 →b, 3 →r, 4 →c, 5 →d. It can be shown [15, 26] that for all i.i.d. distributions, over any alphabet, even infinitely large, as the sequence length increases, essentially all the entropy lies in the pattern, and practically none is in the dictionary. Hence [25] focused on the redundancy of compressing patterns. They showed, e.g., Subsection 1.5, that the although, as in (2), i.i.d. sequences over large alphabets have arbitrarily high per-symbol redundancy, and although as above patterns contain essentially all the information of long sequences, the per-symbol redundancy of patterns diminishes to zero at a uniform rate independent of the alphabet size. In online learning, patterns correspond to estimating the probabilities of each observed symbol, and of all unseen ones combined. For example, after observing the sequence dad, with pattern 121, we estimate the probabilities of 1, 2, and 3. 
The probability we assign to 1 is that of d, the probability we assign to 2 is that of a, and the probability we assign to 3 is the probability of all remaining letters combined. The aforementioned results imply that while distributions over large alphabets cannot be learned with uniformly diminishing per-symbol log loss, if we would like to estimate the probability of each seen element but combine together the probabilities of all unseen ones, then the per-symbol log loss diminishes to zero uniformly, regardless of the alphabet size. 1.4 Profiles Improving on existing pattern-redundancy bounds seems easier to accomplish via profiles. Since we consider i.i.d. distributions, the order of the elements in a pattern does not affect its probability. For example, for every distribution P, P(112) = P(121). It is easy to see that the probability of a pattern is determined by the fingerprint [4] or profile [25] of the pattern, the multiset of the number of appearances of the symbols in the pattern. For example, the profile of the pattern 121 is {1, 2}, and all patterns with this profile, 112, 121, 122, will have the same probability under any distribution P. Similarly, the profile of 1213 is {1, 1, 2}, and all patterns with this profile, 1123, 1213, 1231, 1223, 1232, and 1233, will have the same probability under any distribution. It is easy to see that since all patterns of a given profile have the same probability, the ratio between the actual and estimated probability of a profile is the same as this ratio for each of its patterns. Hence pattern redundancy is the same as profile redundancy [25]. Therefore from now on we consider only profile redundancy, and begin by defining it more formally. The multiplicity µ(a) of a symbol a in a sequence is the number of times it appears. The profile ϕ(x) of a sequence x is the multiset of multiplicities of all symbols appearing in it [24, 25].
For example, the sequence ababcde has multiplicities µ(a) = µ(b) = 2, µ(c) = µ(d) = µ(e) = 1, and profile {1, 1, 1, 2, 2}. The prevalence $\phi_\mu$ of a multiplicity $\mu$ is the number of elements with multiplicity $\mu$. Let $\Phi^n$ denote the collection of all profiles of length-$n$ sequences. For example, for sequences of length one there is a single element appearing once, hence $\Phi^1 = \{\{1\}\}$; for length two, either one element appears twice, or each of two elements appears once, hence $\Phi^2 = \{\{2\}, \{1, 1\}\}$; similarly $\Phi^3 = \{\{3\}, \{2, 1\}, \{1, 1, 1\}\}$, etc. We consider the distributions induced on $\Phi^n$ by all discrete i.i.d. distributions over any alphabet. The probability that an i.i.d. distribution $P$ generates an $n$-element sequence $x$ is $P(x) \stackrel{\mathrm{def}}{=} \prod_{i=1}^n P(x_i)$. The probability of a profile $\phi \in \Phi^n$ is the sum of the probabilities of all sequences of this profile, $P(\phi) \stackrel{\mathrm{def}}{=} \sum_{x : \phi(x) = \phi} P(x)$. For example, if $P$ is B(2/3) over h and t, then for $n = 3$, $P(\{3\}) = P(\text{hhh}) + P(\text{ttt}) = 1/3$, $P(\{2, 1\}) = P(\text{hht}) + P(\text{hth}) + P(\text{thh}) + P(\text{tth}) + P(\text{tht}) + P(\text{htt}) = 2/3$, and $P(\{1, 1, 1\}) = 0$, as this $P$ is binary, hence at most two symbols can appear. On the other hand, if $P$ is a roll of a fair die, then $P(\{3\}) = 1/36$, $P(\{2, 1\}) = 5/12$, and $P(\{1, 1, 1\}) = 5/9$. We let $I^n_\Phi = \{P(\phi) : P \text{ is a discrete i.i.d. distribution}\}$ be the collection of all distributions on $\Phi^n$ induced by any discrete i.i.d. distribution over any alphabet, possibly even infinite. It is easy to see that any relabeling of the elements in an i.i.d. distribution will leave the profile distribution unchanged, for example, if instead of h and t above, we have a distribution over 0's and 1's. Furthermore, profiles are sufficient statistics for every label-invariant property. While many theoretical properties of profiles are known, even calculating the profile probabilities for a given distribution and a profile seems hard [23, 38] in general.
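The small examples above are easy to check by brute-force enumeration. The sketch below (our illustration, using exact rational arithmetic) computes patterns and profiles as defined above, and recomputes the profile probabilities for the biased coin B(2/3) and the fair die at $n = 3$.

```python
from fractions import Fraction
from itertools import product
from collections import Counter

def pattern(seq):
    """Relative order of first appearances: 'abracadabra' -> '12314151231'."""
    order = {}
    return "".join(str(order.setdefault(a, len(order) + 1)) for a in seq)

def profile(seq):
    """Profile: sorted multiset of symbol multiplicities, e.g. 'ababcde' -> (1, 1, 1, 2, 2)."""
    return tuple(sorted(Counter(seq).values()))

def profile_probs(P, n):
    """Distribution induced on length-n profiles by i.i.d. sampling from P
    (brute-force sum over all |support|^n sequences)."""
    probs = Counter()
    for seq in product(P.keys(), repeat=n):
        pr = Fraction(1)
        for a in seq:
            pr *= P[a]
        probs[profile(seq)] += pr
    return dict(probs)

coin = {"h": Fraction(2, 3), "t": Fraction(1, 3)}
die = {i: Fraction(1, 6) for i in range(6)}
print(pattern("abracadabra"))            # 12314151231
print(profile("ababcde"))                # (1, 1, 1, 2, 2)
print(profile_probs(coin, 3)[(1, 2)])    # 2/3
print(profile_probs(die, 3)[(1, 2)])     # 5/12
```

The enumeration is exponential in $n$, which is consistent with the remark above that computing profile probabilities efficiently is hard in general.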
Profile redundancy arises in at least two other machine-learning applications, closeness testing and classification. In closeness testing [4], we try to determine if two sequences are generated by the same or different distributions. In classification, we try to assign a test sequence to one of two training sequences. Joint profiles and quantities related to profile redundancy are used to construct competitive closeness tests and classifiers that perform almost as well as the best possible [1, 2]. Profiles also arise in statistics, in estimating symmetric or label-invariant properties of i.i.d. distributions ([34] and references therein), for example, the support size, entropy, moments, or number of heavy hitters. All these properties depend only on the multiset of probability values in the distribution. For example, the entropy of the distribution p(heads) = .6, p(tails) = .4 depends only on the probability multiset {.6, .4}. For all these properties, profiles are a sufficient statistic. 1.5 Previous Results As patterns and profiles have the same redundancy, we describe the results for profiles. Instead of the expected redundancy $R(I^n_\Phi)$ that reflects the increase in the expected number of bits, [25] bounded the more stringent but closely-related worst-case redundancy, $\hat{R}(I^n_\Phi)$, reflecting the increase in the worst-case number of bits, namely over all sequences. Using bounds [19] on the partition function, they showed that $$\Omega(n^{1/3}) \le \hat{R}(I^n_\Phi) \le \pi\sqrt{\tfrac{2}{3}}\, n^{1/2}.$$
Consequently, several papers considered improvements of these bounds, mostly for the expected redundancy, the minimax KL divergence. Since the expected redundancy is at most the worst-case redundancy, the upper bound applies to the expected redundancy as well. Subsequently [31] described a partial proof outline that could potentially show the following tighter upper bound on the expected redundancy, and [14] proved the following lower bound, strengthening one in [32]: 1.84·(n/log n)^{1/3} ≤ R(I^n_Φ) ≤ n^{0.4}. (3) 1.6 New results In Theorem 15 we use error-correcting codes to exhibit a larger class of distinguishable distributions in I^n_Φ than was known before, thereby removing the log n factor from the lower bound in (3). In Theorem 11 we demonstrate a small number of distributions such that every distribution in I^n_Φ is within a small KL divergence from one of them, thereby reducing the upper bound to have the same power as the lower bound. Combining these results we obtain 0.3·n^{1/3} ≤ (1 − ϵ) log M(I^n_Φ, ϵ) ≤ R(I^n_Φ) ≤ n^{1/3} log^2 n. (4) These results close the power gap between the upper and lower bounds that existed in the literature. They show that when a pattern is compressed, or a sequence is estimated (with all unseen elements combined into "new"), the per-symbol redundancy and log-loss decrease to 0 uniformly over all distributions faster than log^2 n / n^{2/3}, a rate that is optimal up to a log^2 n factor. They also show that for length-n profiles, the redundancy R(I^n_Φ) is essentially the logarithm log M(I^n_Φ, ϵ) of the number of distinguishable distributions. 1.7 Outline In the next section we describe properties of Poisson sampling and redundancy that will be used later in the paper. In Section 3 we establish the upper bound and in Section 4, the lower bound. Most of the proofs are provided in the Appendix. 2 Preliminaries We describe some techniques and results used in the proofs. 2.1 Poisson sampling When a distribution is sampled i.i.d.
exactly n times, the multiplicities are dependent, complicating the analysis of many properties. A standard approach [22] to overcome the dependence is to sample the distribution a random poi(n) number of times — the Poisson distribution with parameter n — resulting in sequences of random length close to n. We let poi(λ, µ) def= e^{−λ}λ^µ/µ! denote the probability that a poi(λ) random variable attains the value µ. The following basic properties of Poisson sampling help simplify the analysis and relate it to fixed-length sampling. Lemma 1. If a discrete i.i.d. distribution is sampled poi(n) times then: (1) the numbers of appearances of different symbols are independent; (2) a symbol with probability p appears poi(np) times; (3) for any fixed n_0, conditioned on the length poi(n) ≥ n_0, the first n_0 elements are distributed identically to sampling P exactly n_0 times. We now express profile probabilities and redundancy under Poisson sampling. As we saw, the probability of a profile is determined by just the multiset of probability values, and the symbol labels are irrelevant. For convenience, we assume that the distribution is over the positive integers, and we replace the distribution parameters {p_i} by the Poisson parameters {np_i}. For a distribution P = {p_1, p_2, . . .}, let λ_i def= np_i, and Λ = {λ_1, λ_2, . . .}. The profile generated by this distribution is a multiset ϕ = {µ_1, µ_2, . . .}, where each µ_i is generated independently according to poi(λ_i). The probability that Λ generates ϕ is [1, 25] Λ(ϕ) = (1 / ∏_{µ=0}^∞ ϕ_µ!) · Σ_σ ∏_i poi(λ_{σ(i)}, µ_i), (5) where the summation is over all permutations σ of the support set. For example, for Λ = {λ_1, λ_2, λ_3}, the profile ϕ = {2, 2, 3} can be generated by specifying which element appears three times. This is reflected by the ϕ_2! in the denominator, so that each of the repeated terms in the numerator is counted only once. Similar to I^n_Φ, we use I^{poi(n)}_Φ to denote the class of distributions induced on Φ* def= Φ^0 ∪ Φ^1 ∪ Φ^2 ∪ . . .
when sequences of length poi(n) are generated i.i.d. It is easy to see that a distribution in I^{poi(n)}_Φ is a collection of λ_i's summing to n. The redundancy R(I^{poi(n)}_Φ) and ϵ-distinguishability M(I^{poi(n)}_Φ, ϵ) are defined as before. The following lemma shows that bounding M(I^{poi(n)}_Φ, ϵ) and R(I^{poi(n)}_Φ) is sufficient to bound R(I^n_Φ). Lemma 2. For any fixed ϵ > 0, (1 − o(1))·R(I^{n−√n·log n}_Φ) ≤ R(I^{poi(n)}_Φ) and M(I^{poi(n)}_Φ, ϵ) ≤ M(I^{n+√n·log n}_Φ, 2ϵ). Proof Sketch. It is easy to show that R(I^n_Φ) and M(I^n_Φ, ϵ) are non-decreasing in n. Combining this with the fact that the probability that poi(n) is less than n − √n·log n or greater than n + √n·log n goes to 0 yields the bounds. ■ Finally, the next lemma, proved in the Appendix, provides a simple formula for cross expectations of Poisson distributions. Lemma 3. For any λ_0, λ_1, λ_2 > 0, E_{µ∼poi(λ_1)} [poi(λ_2, µ)/poi(λ_0, µ)] = exp((λ_1 − λ_0)(λ_2 − λ_0)/λ_0). 2.2 Redundancy We state some basic properties of redundancy. For a distribution P over A and a function f : A → B, let f(P) be the distribution over B that assigns to b ∈ B the probability P(f^{−1}(b)). Similarly, for a collection P of distributions over A, let f(P) = {f(P) : P ∈ P}. The convexity of KL divergence shows that D(f(P)||f(Q)) ≤ D(P||Q), and can be used to show Lemma 4 (Function Redundancy). R(f(P)) ≤ R(P). For a collection P of distributions over A × B, let P_A and P_B be the collections of marginal distributions over A and B, respectively. In general, R(P) can be larger or smaller than R(P_A) + R(P_B). However, when P consists of product distributions, namely P(a, b) = P_A(a) · P_B(b), the redundancy of the product is at most the sum of the marginal redundancies. The proof is given in the Appendix. Lemma 5 (Redundancy of products). If P is a collection of product distributions over A × B, then R(P) ≤ R(P_A) + R(P_B). For a prefix-free code C : A → {0, 1}*, let E_P[|C|] be the expected length of C under distribution P.
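Lemma 3 is easy to verify numerically by truncating the expectation's series; for concrete λ's the truncated sum matches the closed form to high precision. A sketch (names and parameter values are ours):

```python
import math

def poi(lam, mu):
    """poi(λ, µ) = e^(-λ) λ^µ / µ!"""
    return math.exp(-lam) * lam ** mu / math.factorial(mu)

def cross_expectation(l0, l1, l2, terms=100):
    """E_{µ~poi(λ1)}[poi(λ2, µ)/poi(λ0, µ)], truncated after `terms` summands."""
    return sum(poi(l1, mu) * poi(l2, mu) / poi(l0, mu) for mu in range(terms))

l0, l1, l2 = 3.0, 2.0, 5.0
lhs = cross_expectation(l0, l1, l2)
rhs = math.exp((l1 - l0) * (l2 - l0) / l0)
print(lhs, rhs)  # both ≈ exp(-2/3) ≈ 0.513417
```

The summand simplifies to e^{λ0−λ1−λ2} (λ1λ2/λ0)^µ/µ!, so the series converges like an exponential series and 100 terms are far more than enough for these λ's.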
Redundancy is the extra number of bits above the entropy needed to encode the output of any distribution in P. Hence, Lemma 6. For every prefix-free code C, R(P) ≤ max_{P∈P} E_P[|C|]. Lemma 7 (Redundancy of unions). If P_1, . . . , P_T are distribution collections, then R(∪_{1≤i≤T} P_i) ≤ max_{1≤i≤T} R(P_i) + log T. 3 Upper bound A distribution Λ ∈ I^{poi(n)}_Φ is a multiset of λ's adding to n. For any such distribution, let Λ_low def= {λ ∈ Λ : λ ≤ n^{1/3}}, Λ_med def= {λ ∈ Λ : n^{1/3} < λ ≤ n^{2/3}}, Λ_high def= {λ ∈ Λ : λ > n^{2/3}}, and let ϕ_low, ϕ_med, ϕ_high denote the corresponding profile each subset generates. Then ϕ = ϕ_low ∪ ϕ_med ∪ ϕ_high. Let I_{ϕlow} = {Λ_low : Λ ∈ I^{poi(n)}_Φ} be the collection of all Λ_low. Note that n is implicit here and in the rest of the paper. A distribution in I_{ϕlow} is a multiset of λ's such that each is ≤ n^{1/3} and they sum either to n or to ≤ n − n^{1/3}. I_{ϕmed} and I_{ϕhigh} are defined similarly. ϕ is determined by the triple (ϕ_low, ϕ_med, ϕ_high), and by Poisson sampling, ϕ_low, ϕ_med and ϕ_high are independent. Hence by Lemmas 4 and 5, R(I^n_Φ) ≤ R(I_{(ϕlow,ϕmed,ϕhigh)}) ≤ R(I_{ϕlow}) + R(I_{ϕmed}) + R(I_{ϕhigh}). In Subsection 3.1 we show that R(I_{ϕlow}) < 4n^{1/3} log n and R(I_{ϕhigh}) < 2n^{1/3} log n. In Subsection 3.2 we show that R(I_{ϕmed}) < (1/2)·n^{1/3} log^2 n. In the next two subsections we elaborate on this overview and sketch some proof details. 3.1 Bounds on R(I_{ϕlow}) and R(I_{ϕhigh}) Elias codes [12] are prefix-free codes that encode a positive integer n using at most log n + log(log n + 1) + 1 bits. We use Elias codes to design explicit coding schemes for distributions in I_{ϕlow} and I_{ϕhigh}, and prove the following result. Lemma 8. R(I_{ϕlow}) < 4n^{1/3} log n, and R(I_{ϕhigh}) < 2n^{1/3} log n. Proof. Any distribution Λ_high ∈ I_{ϕhigh} consists of λ's that are > n^{2/3} and add to ≤ n. Hence |Λ_high| < n^{1/3}, and so is the number of multiplicities in ϕ_high. Each multiplicity is a poi(λ) random variable, and is encoded separately using the Elias code.
For example, the profile {100, 100, 200, 250, 500} is encoded by coding the sequence 100, 100, 200, 250, 500, all using the Elias scheme. For λ > 10, the number of bits needed to encode a poi(λ) random variable using Elias codes can be shown to be at most 2 log λ. The expected code-length is therefore at most n^{1/3} · 2 log n. Applying Lemma 6 gives R(I_{ϕhigh}) < 2n^{1/3} log n. A distribution Λ_low ∈ I_{ϕlow} consists of λ's that are < n^{1/3} and sum to at most n. We encode distinct multiplicities along with their prevalences, using two integers for each distinct multiplicity. For example, ϕ = {1, 1, 1, 1, 1, 2, 2, 2, 5} is coded as 1, 5, 2, 3, 5, 1. Using Poisson tail bounds, we bound the largest multiplicity in ϕ_low, and use arguments similar to those for I_{ϕhigh} to obtain R(I_{ϕlow}) < 4n^{1/3} log n. ■ 3.2 Bound on R(I_{ϕmed}) We partition the interval (n^{1/3}, n^{2/3}] into B = n^{1/3} bins. For each distribution in I_{ϕmed}, we divide the λ's in it according to these bins. We show that within each interval, there is a uniform distribution such that the KL divergence between the underlying distribution and the induced uniform distribution is small. We then show that the number of uniform distributions needed is at most exp(n^{1/3} log n). We now expand on these ideas and bound R(I_{ϕmed}). We partition I_{ϕmed} into T ≤ exp(n^{1/3} log n) classes, upper bound the redundancy of each class, and then invoke Lemma 7 to obtain an upper bound on R(I_{ϕmed}). A distribution Λ = {λ_1, λ_2, . . . , λ_r} ∈ I_{ϕmed} is such that λ_i ∈ (n^{1/3}, n^{2/3}] and Σ_{i=1}^r λ_i ≤ n. Consider any partition of (n^{1/3}, n^{2/3}] into B def= n^{1/3} consecutive intervals I_1, I_2, . . . , I_B of lengths ∆_1, ∆_2, . . . , ∆_B. For each distribution Λ ∈ I_{ϕmed}, let Λ_j def= {λ_{j,l} : l = 1, 2, . . . , m_j} def= {λ : λ ∈ Λ ∩ I_j} be the set of elements of Λ in I_j, where m_j def= m_j(Λ) def= |Λ_j| is the number of elements of Λ in I_j. Let τ(Λ) def= (m_1, m_2, . . . , m_B) be the B-tuple of the counts of λ's in each interval. For example, if n = 1000, then n^{1/3} = 10 and n^{2/3} = 100.
For simplicity, we choose B = 3 instead of n^{1/3}, and ∆_1 = 10, ∆_2 = 30, ∆_3 = 50, so the intervals are I_1 = (10, 20], I_2 = (20, 50], I_3 = (50, 100]. Suppose Λ = {12, 15, 25, 35, 32, 43, 46, 73}; then Λ_1 = {12, 15}, Λ_2 = {25, 35, 32, 43, 46}, Λ_3 = {73}, and τ(Λ) = (m_1, m_2, m_3) = (2, 5, 1). We partition I_{ϕmed} such that two distributions Λ and Λ′ are in the same class if and only if τ(Λ) = τ(Λ′). Thus each class of distributions is characterized by a B-tuple of integers τ = (m_1, m_2, . . . , m_B); let I_τ denote this class. Let T def= T(∆) be the set of all possible different τ (such that I_τ is non-empty), and T = |T| be the number of classes. We first bound T. Observe that for any Λ ∈ I_{ϕmed} and any j, we have m_j < n^{2/3}, for otherwise Σ_{λ∈Λ} λ > m_j · n^{1/3} ≥ n. So each m_j in τ can take at most n^{2/3} < n values, and hence T < (n^{2/3})^B < n^{n^{1/3}} = exp(n^{1/3} log n). For any choice of ∆, let λ^-_j def= n^{1/3} + Σ_{i=1}^{j-1} ∆_i be the left end point of the interval I_j for j = 1, 2, . . . , B. We upper bound the redundancy R(I_τ) of any particular class τ = (m_1, m_2, . . . , m_B) in the following result. Lemma 9. For all choices of ∆ = (∆_1, . . . , ∆_B), and all classes I_τ with τ = (m_1, . . . , m_B) ∈ T(∆), R(I_τ) ≤ Σ_{j=1}^B m_j ∆_j^2 / λ^-_j. Proof Sketch. For any choice of ∆ and τ = (m_1, . . . , m_B) ∈ T(∆), we exhibit a distribution Λ* ∈ I_τ such that for all Λ ∈ I_τ, D(Λ||Λ*) ≤ Σ_{j=1}^B m_j ∆_j^2 / λ*_j. Recall that for Λ ∈ I_τ, Λ_j is the set of elements of Λ in I_j. Let ϕ_j be the profile generated by Λ_j. Then ϕ_med = ϕ_1 ∪ . . . ∪ ϕ_B. The distribution Λ* is chosen to be of the form {λ*_1 × m_1, λ*_2 × m_2, . . . , λ*_B × m_B}, i.e., each Λ*_j is uniform. The result follows from Lemma 3, and the details are in the Appendix. ■ We now prove that R(I_{ϕmed}) < (1/2)·n^{1/3} log^2 n. By Lemma 7 it suffices to bound R(I_τ). From Lemma 9 it follows that the choice of ∆ determines the bound on R(I_τ). A solution to the following optimization problem yields a bound: min_∆ max_τ Σ_{j=1}^B m_j ∆_j^2 / λ^-_j, subject to Σ_{j=1}^B m_j λ^-_j ≤ n.
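The τ(Λ) computation in the running example can be sketched as follows (an illustration; `tau` is our name, and the half-open intervals (e_{j-1}, e_j] match the text's convention):

```python
from bisect import bisect_left

def tau(lambdas, endpoints):
    """Count the λ's falling in each interval (e_0, e_1], (e_1, e_2], ...
    endpoints = [e_0, e_1, ..., e_B]; λ's at or below e_0 are out of range."""
    counts = [0] * (len(endpoints) - 1)
    for lam in lambdas:
        # bisect_left gives the index of the first endpoint >= lam,
        # so subtracting 1 yields the interval (e_{j-1}, e_j] containing lam
        j = bisect_left(endpoints, lam) - 1
        counts[j] += 1
    return tuple(counts)

# worked example from the text: intervals (10, 20], (20, 50], (50, 100]
print(tau([12, 15, 25, 35, 32, 43, 46, 73], [10, 20, 50, 100]))  # (2, 5, 1)
```

Note that `bisect_left` handles the boundary cases correctly for half-open intervals: a λ equal to 20 lands in (10, 20], not (20, 50].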
Instead of minimizing over all partitions, we choose the endpoints of the intervals as a geometric series, which bounds the expression. The left end point of I_j is λ^-_j, so λ^-_1 = n^{1/3}. We let λ^-_{j+1} = λ^-_j(1 + c). The constant c is chosen to ensure that λ^-_1(1 + c)^B = n^{1/3}(1 + c)^{n^{1/3}} = n^{2/3}, the right end point of I_B. This yields c < 2 log(1 + c) = 2 log(n^{1/3})/n^{1/3}. Now ∆_j = λ^-_{j+1} − λ^-_j = cλ^-_j, so ∆_j^2/λ^-_j = c^2 λ^-_j. This turns the objective function into the constraint, and is in fact the optimal choice of intervals for the optimization problem (details omitted). Using this, for any τ = (m_1, . . . , m_B) ∈ T(∆), Σ_{j=1}^B m_j ∆_j^2 / λ^-_j = c^2 Σ_{j=1}^B m_j λ^-_j ≤ c^2 n < (2 log(n^{1/3})/n^{1/3})^2 · n = (4/9)·n^{1/3} log^2 n. This, along with Lemma 7, gives the following corollary for sufficiently large n. Corollary 10. For large n, R(I_{ϕmed}) < (1/2) · n^{1/3} log^2 n. Combining Lemma 8 with this result yields Theorem 11. For sufficiently large n, R(I^n_Φ) ≤ n^{1/3} log^2 n. 4 Lower bound We use error-correcting codes to construct a collection of 2^{0.3n^{1/3}} distinguishable distributions, improving by a logarithmic factor the bound in [14, 31]. The convexity of KL divergence can be used to show Lemma 12. Let P and Q be distributions on A, and suppose A_1 ⊂ A is such that P(A_1) ≥ 1 − ϵ > 1/2 and Q(A_1) ≤ δ < 1/2. Then D(P||Q) ≥ (1 − ϵ) log(1/δ) − h(ϵ). We use this result to show that (1 − ϵ) log M(P, ϵ) ≤ R(P). Recall that for P over A, M def= M(P, ϵ) is the largest number of ϵ-distinguishable distributions in P. Let P_1, P_2, . . . , P_M be distributions in P and A_1, A_2, . . . , A_M a partition of A such that P_j(A_j) ≥ 1 − ϵ. Let Q_0 be the distribution such that R(P) = sup_{P∈P} D(P||Q_0). Since Σ_{j=1}^M Q_0(A_j) = 1, Q_0(A_m) < 1/M for some m ∈ {1, . . . , M}. Also, P_m(A_m) ≥ 1 − ϵ. Plugging P = P_m, Q = Q_0, A_1 = A_m, and δ = 1/M into Lemma 12, R(P) ≥ D(P_m||Q_0) ≥ (1 − ϵ) log(M(P, ϵ)) − h(ϵ). We now describe the class of distinguishable distributions. Fix C > 0. Let λ*_i def= C·i^2, K def= ⌊(3n/C)^{1/3}⌋, and S def= {λ*_i : 1 ≤ i ≤ K}.
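As a quick numeric sanity check on this construction (a sketch with an arbitrary choice of n; the function name is ours), the λ*_i = C·i^2 indeed sum to at most n with K = ⌊(3n/C)^{1/3}⌋, since C·Σ_{i=1}^K i^2 ≈ C·K^3/3 ≤ n:

```python
import math

def code_support(n, C):
    """S = {λ*_i = C·i^2 : 1 <= i <= K}, with K = floor((3n/C)^(1/3))."""
    K = math.floor((3 * n / C) ** (1 / 3))
    return [C * i * i for i in range(1, K + 1)]

n, C = 10**6, 60
S = code_support(n, C)
print(len(S), sum(S))  # 36 972360  (and 972360 <= n)
```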
K is chosen so that the sum of the elements of S is at most n. Let x = x_1 x_2 . . . x_K be a binary string and Λ_x def= {λ*_i : x_i = 1} ∪ {n − Σ_i λ*_i x_i}. The distribution contains λ*_i whenever x_i = 1, and the last element ensures that the elements add up to n. A binary code of length k and minimum distance d_min is a collection of k-length binary strings such that the Hamming distance between any two strings is at least d_min. The size of the code is the number of elements (codewords) in it. The following shows the existence of codes with a specified minimum distance and size. Lemma 13 ([30]). Let 1/2 > α > 0. There exists a code with d_min ≥ αk and size ≥ 2^{k(1−h(α)−o(1))}. Let C be a code satisfying Lemma 13 for k = K, and let L = {Λ_c : c ∈ C} be the set of distributions generated by using the strings in C. The following result shows that the distributions in L are distinguishable; it is proved in the Appendix. Lemma 14. The set L is (2e^{−C/4}/α)-distinguishable. Plugging in α = 5 × 10^{−5} and C = 60, Lemma 13 and Equation (1) yield Theorem 15. For sufficiently large n, 0.3 · n^{1/3} ≤ R(I^n_Φ). Acknowledgments The authors thank Ashkan Jafarpour and Ananda Theertha Suresh for many helpful discussions. References [1] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, and S. Pan. Competitive closeness testing. Journal of Machine Learning Research - Proceedings Track, 19:47–68, 2011. [2] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, S. Pan, and A. T. Suresh. Competitive classification and closeness testing. Journal of Machine Learning Research - Proceedings Track, 23:22.1–22.18, 2012. [3] A. R. Barron, J. Rissanen, and B. Yu. The minimum description length principle in coding and modeling. IEEE Transactions on Information Theory, 44(6):2743–2760, 1998. [4] T. Batu, L. Fortnow, R. Rubinfeld, W. D. Smith, and P. White. Testing that distributions are close. In Annual Symposium on Foundations of Computer Science, page 259, 2000. [5] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games.
Cambridge University Press, New York, NY, USA, 2006. [6] K. Chaudhuri and A. McGregor. Finding metric structure in information theoretic clustering. In Conference on Learning Theory, pages 391–402, 2008. [7] T. Cover. Universal portfolios. Mathematical Finance, 1(1):1–29, January 1991. [8] T. Cover and J. Thomas. Elements of Information Theory, 2nd Ed. Wiley Interscience, 2006. [9] L. Davisson. Universal noiseless coding. IEEE Transactions on Information Theory, 19(6):783–795, November 1973. [10] L. D. Davisson, R. J. McEliece, M. B. Pursley, and M. S. Wallace. Efficient universal noiseless source codes. IEEE Transactions on Information Theory, 27(3):269–279, 1981. [11] M. Drmota and W. Szpankowski. Precise minimax redundancy and regret. IEEE Transactions on Information Theory, 50(11):2686–2707, 2004. [12] P. Elias. Universal codeword sets and representations of the integers. IEEE Transactions on Information Theory, 21(2):194–203, March 1975. [13] B. M. Fitingof. Optimal coding in the case of unknown and changing message statistics. Problems of Information Transmission, 2(2):1–7, 1966. [14] A. Garivier. A lower-bound for the maximin redundancy in pattern coding. Entropy, 11(4):634–642, 2009. [15] G. M. Gemelos and T. Weissman. On the entropy rate of pattern processes. IEEE Transactions on Information Theory, 52(9):3994–4007, 2006. [16] P. Grünwald. A tutorial introduction to the minimum description length principle. CoRR, math.ST/0406077, 2004. [17] P. Grünwald, J. S. Jones, J. de Winter, and É. Smith. Safe learning: bridging the gap between bayes, mdl and statistical learning theory via empirical convexity. Journal of Machine Learning Research - Proceedings Track, 19:397–420, 2011. [18] P. D. Grünwald. The Minimum Description Length Principle. The MIT Press, 2007. [19] G. Hardy and S. Ramanujan. Asymptotic formulae in combinatory analysis. Proceedings of the London Mathematical Society, 17(2):75–115, 1918. [20] J. Kelly. A new interpretation of information rate.
IEEE Transactions on Information Theory, 2(3):185–189, 1956. [21] N. Merhav and M. Feder. Universal prediction. IEEE Transactions on Information Theory, 44(6):2124–2147, October 1998. [22] M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005. [23] A. Orlitsky, S. Pan, Sajama, N. Santhanam, and K. Viswanathan. Pattern maximum likelihood: computation and experiments. In preparation, 2012. [24] A. Orlitsky, N. Santhanam, K. Viswanathan, and J. Zhang. On modeling profiles instead of values. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, 2004. [25] A. Orlitsky, N. Santhanam, and J. Zhang. Universal compression of memoryless sources over unknown alphabets. IEEE Transactions on Information Theory, 50(7):1469–1481, July 2004. [26] A. Orlitsky, N. P. Santhanam, K. Viswanathan, and J. Zhang. Limit results on pattern entropy. IEEE Transactions on Information Theory, 52(7):2954–2964, 2006. [27] J. Rissanen. Universal coding, information, prediction, and estimation. IEEE Transactions on Information Theory, 30(4):629–636, July 1984. [28] J. Rissanen. Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42(1):40–47, January 1996. [29] J. Rissanen, T. P. Speed, and B. Yu. Density estimation by stochastic complexity. IEEE Transactions on Information Theory, 38(2):315–323, 1992. [30] R. M. Roth. Introduction to Coding Theory. Cambridge University Press, 2006. [31] G. Shamir. A new upper bound on the redundancy of unknown alphabets. In Proceedings of the 38th Annual Conference on Information Sciences and Systems, Princeton, New Jersey, 2004. [32] G. Shamir. Universal lossless compression with unknown alphabets—the average case. IEEE Transactions on Information Theory, 52(11):4915–4944, November 2006. [33] W. Szpankowski. On asymptotics of certain recurrences arising in universal coding.
Problems of Information Transmission, 34(2):142–146, 1998. [34] P. Valiant. Testing Symmetric Properties of Distributions. PhD thesis, Cambridge, MA, USA, 2008. AAI0821026. [35] F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens. The context-tree weighting method: basic properties. IEEE Transactions on Information Theory, 41(3):653–664, 1995. [36] Q. Xie and A. Barron. Asymptotic minimax regret for data compression, gambling and prediction. IEEE Transactions on Information Theory, 46(2):431–445, March 2000. [37] B. Yu and T. P. Speed. A rate of convergence result for a universal d-semifaithful code. IEEE Transactions on Information Theory, 39(3):813–820, 1993. [38] J. Zhang. Universal Compression and Probability Estimation with Unknown Alphabets. PhD thesis, UCSD, 2005.
On Triangular versus Edge Representations — Towards Scalable Modeling of Networks Qirong Ho School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 qho@cs.cmu.edu Junming Yin School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 junmingy@cs.cmu.edu Eric P. Xing School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 epxing@cs.cmu.edu Abstract In this paper, we argue for representing networks as a bag of triangular motifs, particularly for important network problems that current model-based approaches handle poorly due to computational bottlenecks incurred by using edge representations. Such approaches require both 1-edges and 0-edges (missing edges) to be provided as input, and as a consequence, approximate inference algorithms for these models usually require Ω(N^2) time per iteration, precluding their application to larger real-world networks. In contrast, triangular modeling requires less computation, while providing equivalent or better inference quality. A triangular motif is a vertex triple containing 2 or 3 edges, and the number of such motifs is Θ(Σ_i D_i^2) (where D_i is the degree of vertex i), which is much smaller than N^2 for low-maximum-degree networks. Using this representation, we develop a novel mixed-membership network model and approximate inference algorithm suitable for large networks with low max-degree. For networks with high maximum degree, the triangular motifs can be naturally subsampled in a node-centric fashion, allowing for much faster inference at a small cost in accuracy. Empirically, we demonstrate that our approach, when compared to that of an edge-based model, has faster runtime and improved accuracy for mixed-membership community detection. We conclude with a large-scale demonstration on an N ≈ 280,000-node network, which is infeasible for network models with Ω(N^2) inference cost.
1 Introduction Network analysis methods such as MMSB [1], ERGMs [20], spectral clustering [17] and latent feature models [12] require the adjacency matrix A of the network as input, reflecting the natural assumption that networks are best represented as a set of edges taking on the values 0 (absent) or 1 (present). This assumption is intuitive, reasonable, and often necessary for some tasks, such as link prediction, but it comes at a cost (which is not always necessary, as we will discuss later) for other tasks, such as community detection in both the single-membership and admixture (mixed-membership) settings. The fundamental difference between link prediction and community detection is that the first is concerned with link outcomes on pairs of vertices, for which providing links as input is intuitive. However, the second task is about discovering the community memberships of individual vertices, and links are in fact no longer the only sensible representation. By representing the input network as a bag of triangular motifs — by which we mean vertex triples with 2 or 3 edges — one can design novel models for mixed-membership community detection that outperform models based on the adjacency matrix representation. The main advantage of the bag-of-triangles representation lies in its huge reduction of computational cost for certain network analysis problems, with little or no loss of outcome quality. In the traditional edge representation, if N is the number of vertices, then the adjacency matrix has size Θ(N^2) — thus, any network analysis algorithm that touches every element must have Ω(N^2) runtime complexity. [Figure 1: Four types of triangular motifs: (a) full-triangle; (b) 2-triangle; (c) 1-triangle; (d) empty-triangle. For mixed-membership community detection, we only focus on full-triangles and 2-triangles.] For probabilistic network models, this statement applies to the cost of approximate
inference. For example, the Mixed Membership Stochastic Blockmodel (MMSB) [1] has Θ(N^2) latent variables, implying an inference cost of Ω(N^2) per iteration. Looking beyond, the popular p* or Exponential Random Graph models [20] are normally estimated via MCMC-MLE, which entails drawing network samples (each of size Θ(N^2)) from some importance distribution. Finally, latent factor models such as [12] have only Θ(N) latent variables, but the Markov blanket of each variable depends on Θ(N) observed variables, resulting in Ω(N^2) computation per sweep over all variables. With an inference cost of Ω(N^2), even modestly large networks with only ~10,000 vertices are infeasible, to say nothing of modern social networks with millions of vertices or more. On the other hand, it can be shown that the number of 2- and 3-edge triangular motifs is upper bounded by Θ(Σ_i D_i^2), where D_i is the degree of vertex i. For networks with low maximum degree, this quantity is ≪ N^2, allowing us to construct more parsimonious models with faster inference algorithms. Moreover, for networks with high maximum degree, one can subsample Θ(Nδ^2) of these triangular motifs in a node-centric fashion, where δ is a user-chosen parameter. Specifically, we assign triangular motifs to nodes in a natural manner, and then subsample motifs only from nodes with too many of them. In contrast, MMSB and latent factor models rely on distributions over 0/1-edges (i.e. edge probabilities), and for real-world networks, these distributions cannot be preserved with small (i.e. o(N^2)) sample sizes because the 0-edges asymptotically outnumber the 1-edges. As we will show, a triangular representation does not preserve all information found in an edge representation. Nevertheless, we argue that one should represent complex data objects in a task-dependent manner, especially since computational cost is becoming a bottleneck for real-world problems like analyzing web-scale network data.
The idea of transforming the input representation (e.g. from network to bag-of-triangles) for better task-specific performance is not new. A classic example is the bag-of-words representation of a document, in which the ordering of words is discarded. This representation has proven effective in natural language processing tasks such as topic modeling [2], even though it eliminates practically all grammatical information. Another example from computer vision is the use of superpixels to represent images [3, 4]. By grouping adjacent pixels into larger superpixels, one obtains a more compact image representation, in turn leading to faster and more meaningful algorithms. When it comes to networks, triangular motifs (Figure 1) are already of significant interest in biology [13], social science [19, 9, 10, 16], and data mining [21, 18, 8]. In particular, 2- and 3-edge triangular motifs are central to the notion of transitivity in the social sciences — if we observe edges A-B and B-C, does A have an edge to C as well? Transitivity is of special importance, because high transitivity (i.e. we frequently observe the third edge A-C) intuitively leads to stronger clusters with more within-cluster edges. In fact, the ratio of 3-edge triangles to connected vertex triples (i.e. 2- and 3-edge triangular motifs) is precisely the definition of the network clustering coefficient [16], which is a popular measure of cluster strength. In the following sections, we begin by characterizing the triangular motifs, following which we develop a mixed-membership model and inference algorithm based on these motifs. Our model, which we call MMTM or the Mixed-Membership Triangular Model, performs mixed-membership community detection, assigning each vertex i to a mixture of communities. This allows for better outlier detection and more informative visualization compared to single-membership modeling. 
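The motif-based clustering ratio described here, #∆3 / (#∆2 + #∆3), can be computed by centering connected triples at each vertex; note that the conventional transitivity coefficient weights full-triangles differently (3·#∆3 over all centered triples). A sketch, with our own function names:

```python
from itertools import combinations

def motif_counts(adj):
    """Count 2-triangles and full-triangles in an undirected graph.
    adj: dict mapping each vertex to its set of neighbors."""
    paths = 0   # centered triples: each 2-triangle once, each full-triangle 3x
    closed = 0  # centered triples whose third edge exists: each full-triangle 3x
    for i, nbrs in adj.items():
        for j, k in combinations(sorted(nbrs), 2):
            paths += 1
            if k in adj[j]:
                closed += 1
    full = closed // 3         # every full-triangle has three possible centers
    two = paths - closed       # each 2-triangle is centered at exactly one vertex
    return two, full

# a triangle 1-2-3 with a pendant edge 3-4: one full-triangle, two 2-triangles
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
two, full = motif_counts(adj)
print(two, full, full / (two + full))  # 2 1 0.333...
```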
In addition, mixed-membership modeling has two key advantages: first, MM models such as MMSB, Latent Dirichlet Allocation and our MMTM are easily modified for specialized tasks — as evidenced by the rich literature on topic models [2, 1, 14, 5]. Second, MM models over disparate data types (text, network, etc.) can be combined by fusing their latent spaces, resulting in a multi-view model — for example, [14, 5] model both text and network data from the same mixed-membership vectors. Thus, our MMTM can serve as a basic modeling component for massive real-world networks with copious side information. After developing our model and inference algorithm, we present simulated experiments comparing them on a variety of network types to an adjacency-matrix-based model (MMSB) and its inference algorithm. These experiments will show that triangular mixed-membership modeling results in both faster inference and more accurate mixed-membership recovery. We conclude by demonstrating our model/algorithm on a network with N ≈ 280,000 nodes and ~2,300,000 edges, which is far too large for Ω(N^2) inference algorithms such as variational MMSB [1] and the Gibbs sampling MMSB inference algorithm we developed for our experiments. 2 Triangular Motif Representation of a Network In this work, we consider undirected networks over N vertices, such as social networks. Most of the ideas presented here also generalize to directed networks, though the analysis is more involved since directed networks can generate more motifs than undirected ones. To prevent confusion, we shall use the term "1-edge" to refer to edges that exist between two vertices, and the term "0-edge" to refer to missing edges. Now, define a triangular motif Eijk involving vertices i < j < k to be the type of subgraph over these 3 vertices.
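Classifying the subgraph on a vertex triple reduces to counting its 1-edges; a minimal sketch (our own helper, not the paper's code):

```python
def motif_type(adj, i, j, k):
    """Classify the subgraph on {i, j, k} by its number of 1-edges:
    0 (empty-triangle), 1 (1-triangle), 2 (2-triangle), 3 (full-triangle).
    adj: dict mapping each vertex to its set of neighbors (undirected)."""
    return sum(1 for a, b in ((i, j), (i, k), (j, k)) if b in adj[a])

# edges: 1-2, 1-3, 2-3, 3-4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(motif_type(adj, 1, 2, 3))  # 3: full-triangle
print(motif_type(adj, 1, 3, 4))  # 2: 2-triangle centered at 3
print(motif_type(adj, 1, 2, 4))  # 1: 1-triangle
```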
There are 4 basic classes of triangular motifs (Figure 1), distinguished by their number of 1-edges: full-triangle ∆3 (three 1-edges), 2-triangle ∆2 (two 1-edges), 1-triangle ∆1 (one 1-edge), and empty-triangle ∆0 (no 1-edges). The total number of triangles, over all 4 classes, is Θ(N^3). However, our goal is not to account for all 4 classes; instead, we will focus on ∆3 and ∆2 while ignoring ∆1 and ∆0. We have three primary motivations for this: 1. In the network literature, the most commonly studied "network motifs" [13], defined as patterns of significantly recurring inter-connections in complex networks, are the three-node connected subgraphs (namely ∆3 and ∆2) [13, 19, 9, 10, 16]. 2. Since the full-triangle and 2-triangle classes are regarded as the basic structural elements of most networks [19, 13, 9, 10, 16], we naturally expect them to characterize most of the community structure in networks (cf. the network clustering coefficient, as explained in the introduction). In particular, the ∆3 and ∆2 triangular motifs preserve almost all 1-edges from the original network: every 1-edge appears in some triangular motif ∆2, ∆3, except for isolated 1-edges (i.e. connected components of size 2), which are less interesting from a large-scale community detection perspective. 3. For real networks, which have far more 0- than 1-edges, focusing only on ∆3 and ∆2 greatly reduces the number of triangular motifs, via the following lemma: Lemma 1. The total number of ∆3's and ∆2's is upper bounded by Σ_i (1/2)·D_i(D_i − 1) = Θ(Σ_i D_i^2), where D_i is the degree of vertex i. Proof. Let N_i be the neighbor set of vertex i. For each vertex i, form the set T_i of tuples (i, j, k) where j < k and j, k ∈ N_i, which represents the set of all pairs of neighbors of i. Because j and k are neighbors of i, for every tuple (i, j, k) ∈ T_i, Eijk is either a ∆3 or a ∆2.
It is easy to see that each ∆2 is accounted for by exactly one T_i, where i is the center vertex of the ∆2, and that each ∆3 is accounted for by three sets T_i, T_j and T_k, one for each vertex in the full-triangle. Thus, Σ_i |T_i| = Σ_i (1/2)·D_i(D_i − 1) is an upper bound on the total number of ∆3's and ∆2's. For networks with low maximum degree D, Θ(Σ_i D_i^2) = Θ(ND^2) is typically much smaller than Θ(N^2), allowing triangular models to scale to larger networks than edge-based models. As for networks with high maximum degree, we suggest the following node-centric subsampling procedure, which we call δ-subsampling: for each vertex i with degree D_i > δ, for some threshold δ, sample (1/2)δ(δ − 1) triangles without replacement and uniformly at random from T_i; intuitively, this is similar to capping the network's maximum degree at D_s = δ. A full-triangle ∆3 associated with vertices i, j and k shall appear in the final subsample only if it has been subsampled from at least one of T_i, T_j and T_k. To obtain the set of all subsampled triangles ∆2 and ∆3, we simply take the union of subsampled triangles from each T_i, discarding those full-triangles duplicated in the subsamples. Although this node-centric subsampling does not preserve all properties of a network, such as the distribution of node degrees, it approximately preserves the local cluster properties of each vertex, thus capturing most of the community structure in networks. Specifically, the "local" clustering coefficient (LCC) of each vertex i, defined as the ratio of #(∆3) touching i to #(∆3, ∆2) touching i, is well preserved. This follows from subsampling the ∆3's and ∆2's at i uniformly at random, though the LCC has a small upwards bias since each ∆3 may also be sampled by the other two vertices j and k. Hence, we expect community detection based on the subsampled triangles to be nearly as accurate as with the original set of triangles — which our experiments will show.
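The T_i construction and δ-subsampling procedure can be sketched as follows (an illustration under our own naming; note in particular the deduplication of full-triangles sampled from multiple centers):

```python
import random
from itertools import combinations

def delta_subsample(adj, delta, seed=0):
    """Bag-of-triangles extraction with node-centric δ-subsampling (a sketch).
    adj: vertex -> set of neighbors. Returns a set of motifs (i, j, k, num_edges)
    with i < j < k and num_edges in {2, 3}."""
    rng = random.Random(seed)
    cap = delta * (delta - 1) // 2
    motifs = set()
    for i, nbrs in adj.items():
        # T_i: all pairs of neighbors of i, each giving a 2- or 3-edge motif
        Ti = [(i, j, k) for j, k in combinations(sorted(nbrs), 2)]
        if len(Ti) > cap:              # subsample only high-degree centers
            Ti = rng.sample(Ti, cap)
        for c, j, k in Ti:
            edges = 3 if k in adj[j] else 2
            # canonical sorted key dedupes full-triangles sampled from several centers
            motifs.add(tuple(sorted((c, j, k))) + (edges,))
    return motifs

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
motifs = delta_subsample(adj, delta=10)
print(sorted(motifs))  # [(1, 2, 3, 3), (1, 3, 4, 2), (2, 3, 4, 2)]
```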
We note that other subsampling strategies [11, 22] preserve various network properties, such as degree distribution, diameter, and inter-node random walk times. In our triangular model, the main property of interest is the distribution over ∆3 and ∆2, analogous to how latent factor models and MMSB model distributions over 0- and 1-edges. Thus, subsampling strategies that preserve the ∆3/∆2 distribution (e.g. our δ-subsampling) are appropriate for our model. In contrast, 0/1-edge subsampling for MMSB and latent factor models is difficult: most networks have Θ(N^2) 0-edges but only o(N^2) 1-edges, so sampling o(N^2) 0/1-edges leads to high variance in their distribution.

3 Mixed-Membership Triangular Model

Given a network, now represented by triangular motifs ∆3 and ∆2, our goal is to perform community detection for each network vertex i, in the same sense as what an MMSB model would enable. Under an MMSB, each vertex i is assigned to a mixture over communities, as opposed to traditional single-membership community detection, which assigns each vertex to exactly one community. By taking a mixed-membership approach, one gains many benefits over single-membership models, such as outlier detection, improved visualization, and better interpretability [2, 1].

Figure 2: Graphical model representation for MMTM, our mixed-membership model over triangular motifs.

Following a design principle similar to the one underlying the MMSB models, we now present a new mixed-membership network model built on the more parsimonious triangular representation. For each triplet of vertices i, j, k ∈ {1, . . . , N}, i < j < k, if the subgraph on i, j, k is a 2-triangle with i, j, or k at the center, then let Eijk = 1, 2 or 3 respectively, and if the subgraph is a full-triangle, then let Eijk = 4. Whenever i, j, k corresponds to a 1- or an empty-triangle, we do not model Eijk.
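This encoding of Eijk is mechanical to implement. A small sketch under the same adjacency-dict assumption as before (function and variable names are ours):

```python
def classify_triplet(adj, i, j, k):
    """Return Eijk for a triplet i < j < k: 1, 2, 3 if it is a 2-triangle
    centered at i, j, k respectively; 4 for a full-triangle; None for
    1- and empty-triangles, which the model leaves unobserved."""
    e_ij, e_ik, e_jk = (j in adj[i]), (k in adj[i]), (k in adj[j])
    n_edges = e_ij + e_ik + e_jk
    if n_edges == 3:
        return 4
    if n_edges == 2:
        # the center is the vertex incident to both 1-edges
        if e_ij and e_ik:
            return 1
        if e_ij and e_jk:
            return 2
        return 3
    return None
```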
We assume K latent communities, and that each vertex takes a distribution (i.e. mixed-membership) over them. The observed bag-of-triangles {Eijk} is generated according to (1) the distribution over community-memberships at each vertex, and (2) a tensor of triangle generation probabilities, containing different triangle probabilities for different combinations of communities. More specifically, each vertex i is associated with a community mixed-membership vector θi restricted to the (K − 1)-simplex ∆K−1. This mixed-membership vector θi is used to generate community indicators si,jk ∈ {1, . . . , K}, each of which represents the community chosen by vertex i when it is forming a triangle with vertices j and k. The probability of observing a triangular motif Eijk depends on the community-triplet si,jk, sj,ik, sk,ij, and a tensor of multinomial parameters B. Let x, y, z ∈ {1, . . . , K} be the values of si,jk, sj,ik, sk,ij, and assume WLOG that x < y < z (see footnote 1). Then, Bxyz ∈ ∆3 represents the probabilities of generating the 4 triangular motifs (see footnote 2) among vertices i, j and k. In detail, Bxyz,1 is the probability of the 2-triangle whose center vertex has community x, and analogously for Bxyz,2 and community y, and for Bxyz,3 and community z; Bxyz,4 is the probability of the full-triangle. The MMTM generative model is summarized below; see Figure 2 for a graphical model illustration.

• Triangle tensor Bxyz ∼ Dirichlet(λ) for all x, y, z ∈ {1, . . . , K}, where x < y < z
• Community mixed-membership vectors θi ∼ Dirichlet(α) for all i ∈ {1, . . . , N}
• For each triplet (i, j, k) where i < j < k,
  – Community indices si,jk ∼ Discrete(θi), sj,ik ∼ Discrete(θj), sk,ij ∼ Discrete(θk).
  – Generate the triangular motif Eijk based on Bxyz and the ordered values of si,jk, sj,ik, sk,ij; see Table 1 for the exact conditional probabilities. There are 6 entries in Table 1, corresponding to the 6 possible orderings of si,jk, sj,ik, sk,ij.
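The generative process above can be sketched as follows. This is our own illustration; for brevity it skips triplets whose three sampled community indices are not all distinct, which the appendix's special cases would handle properly:

```python
import numpy as np

def sample_mmtm(N, K, alpha, lam, triplets, rng):
    """Sample theta, B and the observed motifs E for the given triplets
    (each with i < j < k), following the MMTM generative process."""
    theta = rng.dirichlet([alpha] * K, size=N)   # mixed-membership vectors
    B, E = {}, {}
    for (i, j, k) in triplets:
        s = [int(rng.choice(K, p=theta[v])) for v in (i, j, k)]
        x, y, z = sorted(s)
        if len({x, y, z}) < 3:
            continue                              # tie cases: see appendix
        if (x, y, z) not in B:
            B[(x, y, z)] = rng.dirichlet([lam] * 4)
        # Table 1: slot m of the motif distribution is the 2-triangle
        # centered at the vertex whose community has rank m in (x, y, z)
        rank = [sorted(s).index(c) for c in s]
        p = B[(x, y, z)]
        probs = np.array([p[rank[0]], p[rank[1]], p[rank[2]], p[3]])
        E[(i, j, k)] = int(rng.choice(4, p=probs)) + 1
    return theta, E
```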
4 Inference

We adopt a collapsed, blocked Gibbs sampling approach, where θ and B have been integrated out. Thus, only the community indices s need to be sampled. For each triplet (i, j, k) where i < j < k,

P(si,jk, sj,ik, sk,ij | s−ijk, E, α, λ) ∝ P(Eijk | E−ijk, s, λ) P(si,jk | si,−jk, α) P(sj,ik | sj,−ik, α) P(sk,ij | sk,−ij, α),

where s−ijk is the set of all community memberships except for si,jk, sj,ik, sk,ij, and si,−jk is the set of all community memberships of vertex i except for si,jk.

Footnote 1: The cases x = y = z, x = y < z and x < y = z require special treatment, due to ambiguity caused by having identical communities. In the interest of keeping our discussion at a high level, we refer the reader to the appendix for these cases.

Footnote 2: It is possible to generate a set of triangles that does not correspond to a network, e.g. a 2-triangle centered on i for (i, j, k) followed by a full-triangle for (j, k, ℓ), which produces a mismatch on the edge (j, k). This is a consequence of using a bag-of-triangles model, just as the bag-of-words model in Latent Dirichlet Allocation can generate sets of words that do not correspond to grammatical sentences. In practice, this is not an issue for either our model or LDA, as both models are used for mixed-membership recovery, rather than data simulation.

Table 1: Conditional probabilities of Eijk given si,jk, sj,ik and sk,ij. We define x, y, z to be the ordered (i.e. sorted) values of si,jk, sj,ik, sk,ij.

Order                 | Conditional probability of Eijk ∈ {1, 2, 3, 4}
si,jk < sj,ik < sk,ij | Discrete([Bxyz,1, Bxyz,2, Bxyz,3, Bxyz,4])
si,jk < sk,ij < sj,ik | Discrete([Bxyz,1, Bxyz,3, Bxyz,2, Bxyz,4])
sj,ik < si,jk < sk,ij | Discrete([Bxyz,2, Bxyz,1, Bxyz,3, Bxyz,4])
sj,ik < sk,ij < si,jk | Discrete([Bxyz,3, Bxyz,1, Bxyz,2, Bxyz,4])
sk,ij < si,jk < sj,ik | Discrete([Bxyz,2, Bxyz,3, Bxyz,1, Bxyz,4])
sk,ij < sj,ik < si,jk | Discrete([Bxyz,3, Bxyz,2, Bxyz,1, Bxyz,4])
The last three terms are predictive distributions of a multinomial-Dirichlet model, with the multinomial parameter θ marginalized out:

P(si,jk | si,−jk, α) = ( #[si,−jk = si,jk] + α ) / ( #[si,−jk] + Kα ).

The first term is also a multinomial-Dirichlet predictive distribution (refer to the appendix for details).

5 Comparing Mixed-Membership Network Models on Synthetic Networks

For a mixed-membership network model to be useful, it must recover some meaningful notion of mixed community membership for each vertex. The precise definition of network community has been a subject of much debate, and various notions of community [1, 15, 17, 12, 6] have been proposed under different motivations. Our MMTM, too, conveys another notion of community, based on membership in full triangles ∆3 and 2-triangles ∆2, which are key aspects of network clustering coefficients. In our simulations, we shall compare our MMTM against an adjacency-matrix-based model (MMSB), in terms of how well they recover mixed-memberships from networks generated under a range of assumptions. Note that some of these synthetic networks will not match the generative assumptions of either our model or MMSB; this is intentional, as we want to compare the performance of both models under model misspecification. We shall also demonstrate that MMTM leads to faster inference, particularly when δ-subsampling triangles (as described in Section 2). Intuitively, we expect the mixed-membership recovery of our inference algorithm to depend on (a) the degree distribution of the network, and (b) the "degree limit" δ used in subsampling the network; performance should increase as the number of vertices i having degree Di ≤ δ goes up. In particular, our experiments will demonstrate that subsampling yields good performance even when the network contains a few vertices with very large degree Di (a characteristic of many real-world networks).
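Returning to the inference step of Section 4: the per-vertex predictive term P(si,jk | si,−jk, α) is a standard Dirichlet-multinomial posterior predictive, sketched here as a small helper of our own:

```python
import numpy as np

def membership_predictive(counts, alpha):
    """P(s_{i,jk} = c | s_{i,-jk}, alpha) for every community c, where
    counts[c] = #[s_{i,-jk} = c] is how often vertex i chose community c
    in its other triangles (the current indicator excluded)."""
    counts = np.asarray(counts, dtype=float)
    K = len(counts)
    return (counts + alpha) / (counts.sum() + K * alpha)
```

The blocked update then enumerates all K^3 joint settings of (si,jk, sj,ik, sk,ij), multiplying three such factors with the motif likelihood term P(Eijk | E−ijk, s, λ).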
Synthetic networks. We compared our MMTM to MMSB (see footnote 3) [1] on multiple synthetic networks, evaluating them according to how well their inference algorithms recovered the vertex mixed-membership vectors θi. Each network was generated from N = 4,000 mixed-membership vectors θi of dimensionality K = 5 (i.e. 5 possible communities), according to one of several models:

1. The Mixed Membership Stochastic Blockmodel [1], an admixture generalization of the stochastic blockmodel. The probability of a link from i to j is θi⊤Bθj for some block matrix B, and we convert all directed edges into undirected edges. In our experiments, we use a B with on-diagonal elements Baa = 1/80, and off-diagonal elements Bab = 1/800. Our values of B are lower than typically seen in the literature, because they are intended to replicate the 1-edge density of real-world networks with size around N = 4,000.

2. A simplex latent position model, where the probability of a link between i, j is γ(1 − (1/2)||θi − θj||1) for some scaling parameter γ. In other words, the closer θi and θj are, the higher the link probability. Note that 0 ≤ ||θi − θj||1 ≤ 2, because θi and θj lie in the simplex. We choose γ = 1/40, again to reproduce the 1-edge density seen in real networks.

3. A "biased" scale-free model that combines the preferential attachment model [7] with a mixed-membership model. Specifically, we generated M = 60,000 1-edges as follows: (a) pick a vertex i with probability proportional to its degree; (b) randomly pick a destination community k from θi; (c) find the set Vk of all vertices v such that θvk is the largest element of θv (i.e. the vertices that mostly belong to community k); (d) within Vk, pick the destination vertex j with probability proportional to its degree.

Footnote 3: MMSB is applicable to both directed and undirected networks; our experiments use the latter.

The resulting network
exhibits both a block diagonal structure and a power-law degree distribution. In contrast, the other two models have binomial (i.e. Gaussian-like) degree distributions.

To use these models, we must input mixed-memberships θi. These were generated as follows:

1. Divide the N = 4,000 vertices into 5 groups of size 800. Assign each group to a (different) dominant community k ∈ {1, . . . , 5}.
2. Within each group:
   (a) Pick 160 vertices to have mixed-membership in 3 communities: 0.8 in the dominant community k, and 0.1 in two other randomly chosen communities.
   (b) The remaining 640 vertices have mixed-membership in 2 communities: 0.8 in the dominant community k, and 0.2 in one other randomly chosen community.

In other words, every vertex has a dominant community, and one or two other minor communities. Using these θi's, we generated one synthetic network for each of the three models described. In addition, we generated a fourth "pure membership" network under the MMSB model, using pure θi's with full membership in the dominant community. This network represents the special case of single-community membership. Statistics for all 4 networks can be found in Table 2.

Table 2: Number of edges, maximum degree, and number of 3- and 2-edge triangles ∆3, ∆2 for each N = 4,000 synthetic network, as well as #triangles when subsampling at various degree thresholds δ. MMSB inference is linear in #0,1-edges, while our MMTM's inference is linear in #∆3, ∆2.

Network           | #0,1-edges | #1-edges | max(Di) | #∆3, ∆2   | δ = 20  | δ = 15  | δ = 10  | δ = 5
MMSB              | 7,998,000  | 55,696   | 51      | 1,541,085 | 749,018 | 418,764 | 179,841 | 39,996
Latent position   | 7,998,000  | 56,077   | 51      | 1,562,710 | 746,979 | 418,448 | 179,757 | 39,988
Biased scale-free | 7,998,000  | 60,000   | 231     | 3,176,927 | 497,737 | 304,866 | 144,206 | 35,470
Pure membership   | 7,998,000  | 55,651   | 44      | 1,533,365 | 746,796 | 418,222 | 179,693 | 39,986

Inference and Evaluation. For our MMTM (see footnote 4), we used our collapsed, blocked Gibbs sampler for inference.
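The mixed-membership construction and the latent position link probability described above can be sketched as follows (our own illustration; group sizes scale as N/5, so it also works for N other than 4,000):

```python
import numpy as np

def make_memberships(N, K=5, rng=None):
    """Ground-truth theta: K equal groups, each with a dominant community
    of weight 0.8; 20% of each group spreads 0.1 over two minor communities,
    the rest puts 0.2 on one minor community."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = np.zeros((N, K))
    group = N // K
    for g in range(K):
        for offset in range(group):
            v = g * group + offset
            theta[v, g] = 0.8
            others = [c for c in range(K) if c != g]
            if offset < group // 5:          # the 3-community vertices
                a, b = rng.choice(others, size=2, replace=False)
                theta[v, a] = theta[v, b] = 0.1
            else:                            # the 2-community vertices
                theta[v, rng.choice(others)] = 0.2
    return theta

def latent_position_edge_prob(theta_i, theta_j, gamma=1 / 40):
    """Link probability gamma * (1 - 0.5 * ||theta_i - theta_j||_1)."""
    return gamma * (1 - 0.5 * np.abs(theta_i - theta_j).sum())
```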
The hyperparameters were fixed at α, λ = 0.1 and K = 5, and we ran each experiment for 2,000 iterations. For evaluation, we estimated all θi's using the last sample, and scored the estimates according to Σ_i ||θ̂i − θi||_2, the sum of ℓ2 distances of each estimate θ̂i from its true value θi. These results were taken under the most favorable permutation for the θ̂i's, in order to avoid the permutation non-identifiability issue. We repeated every experiment 5 times. To investigate the effect of δ-subsampling triangles (Section 2), we repeated every MMTM experiment under four different values of δ: 20, 15, 10 and 5. The triangles were subsampled prior to running the Gibbs sampler, and they remained fixed during inference.

With MMSB, we opted not to use the variational inference algorithm of [1], because we wanted our experiments to be, as far as possible, a comparison of models rather than inference techniques. To accomplish this, we derived a collapsed, blocked Gibbs sampler for the MMSB model, with added Beta hyperparameters λ1, λ2 on each element of the block matrix B. The mixed-membership vectors θi (πi in the original paper) and block matrix B were integrated out, and we Gibbs sampled each edge (i, j)'s associated community indicators zi→j, zi←j in a blocked fashion. Hence, this MMSB sampler uses the exact same techniques as our MMTM sampler, ensuring that we are comparing models rather than inference strategies. Furthermore, its per-iteration runtime is still Θ(N^2), equal to the original MMSB variational algorithm. All experiments were conducted in exactly the same manner as with MMTM, with the MMSB hyperparameters fixed at α, λ1, λ2 = 0.1 and K = 5.

Results. Figure 3 plots the cumulative ℓ2 error for each experiment, as well as the time taken per trial. On all 4 networks, the full MMTM model performs better than MMSB, even on the MMSB-generated network!
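The evaluation metric described above, the cumulative ℓ2 error under the most favorable community permutation, can be computed by brute force over all K! = 120 column permutations (sketch is ours):

```python
import itertools
import numpy as np

def permuted_l2_error(theta_hat, theta_true):
    """Cumulative l2 error sum_i ||theta_hat_i - theta_true_i||_2 under the
    most favorable permutation of the estimated communities (columns).
    Brute force over K! permutations, which is cheap for K = 5."""
    K = theta_true.shape[1]
    best = np.inf
    for perm in itertools.permutations(range(K)):
        err = np.linalg.norm(theta_hat[:, list(perm)] - theta_true, axis=1).sum()
        best = min(best, err)
    return best
```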
MMTM also requires less runtime for all but the biased scale-free network, which has a much larger maximum degree than the others (Table 2). Furthermore, δ-subsampling is effective: MMTM with δ = 20 runs faster than full MMTM, and still outperforms MMSB while approaching full MMTM in accuracy. The runtime benefit is most noticeable on the biased scale-free network, underscoring the need to subsample real-world networks with high maximum degree. We hypothesize that MMSB's poorer performance on networks of this size (N = 4,000) results from having Θ(N^2) latent variables, while noting that the literature has only considered smaller N < 1,000 networks [1]. Compared to MMTM, having many latent variables not only increases runtime per iteration, but also the number of iterations required for convergence, since the latent variable state space grows exponentially with the number of latent variables. In support of this, we have observed that the MMSB sampler's complete log-likelihood fluctuates greatly across all 2,000 iterations; in contrast, the MMTM sampler plateaus within 500 iterations, and remains stable.

Footnote 4: As explained in Section 2, we first need to preprocess the network adjacency list into the ∆3, ∆2 triangle representation. The time required is linear in the number of ∆3, ∆2 triangles, and is insignificant compared to the actual cost of MMTM inference.

Figure 3: Mixed-membership community recovery task: Cumulative ℓ2 errors and runtime per trial for MMSB, MMTM and MMTM with δ-subsampling, on N = 4,000 synthetic networks.
Scalability Experiments. Although the preceding N = 4,000 experiments appear fairly small, they are in fact close to the feasible limit for adjacency-matrix-based models like MMSB. To demonstrate this, we generated four networks with sizes N ∈ {1,000, 4,000, 10,000, 40,000} from the MMSB generative model. The generative parameters for the N = 4,000 network are identical to our earlier experiment, while the parameters for the other three network sizes were adjusted to maintain the same average degree (see footnote 5). We then ran the MMSB, MMTM, and MMTM with δ-subsampling inference algorithms on all 4 networks, and plotted the average per-iteration runtime in Figure 4. The figure clearly exposes the scalability differences between MMSB and MMTM. The δ-subsampled MMTM experiments show linear runtime dependence on N, which is expected since the number of subsampled triangles is O(Nδ^2). The full MMTM experiment is also roughly linear, though we caution that this is not necessarily true for all networks, particularly those with high maximum degree such as scale-free networks. Conversely, MMSB shows a clear quadratic dependence on N. In fact, we had to omit the MMSB N = 40,000 experiment because the latent variables would not fit in memory, and even if they did, the extrapolated runtime would have been unreasonably long.

6 A Larger Network Demonstration

The MMTM model with δ-subsampling scales to even larger networks than the ones we have been discussing. To demonstrate this, we ran the MMTM Gibbs sampler with δ = 20 on the SNAP Stanford Web Graph (see footnote 6), containing N = 281,903 vertices (webpages), 2,312,497 1-edges, and approximately 4 billion 2- and 3-edge triangles ∆3, ∆2, which we reduced to 11,353,778 via δ = 20 subsampling. Note that the vast majority of triangles are associated with exceptionally high-degree vertices, which make up a small fraction of the network.
By using δ-subsampling, we limited the number of triangles that come from such vertices, thus making the network feasible for MMTM. We ran the MMTM sampler with settings identical to our synthetic experiments: 2,000 sampling iterations, hyperparameters fixed to α, λ = 0.1. The experiment took 74 hours, and we observed log-likelihood convergence within 500 iterations. The recovered mixed-membership vectors θi are visualized in Figure 5. A key challenge is that the θi exist in the 4-simplex ∆4, which is difficult to visualize in two dimensions. To overcome this, Figure 5 uses both position and color to communicate the values of θi. Every vertex i is displayed as a circle ci, whose size is proportional to the network degree of i. The position of ci is equal to a convex combination of the 5 pentagon corners' (x, y) coordinates, where the coordinates are weighted by the elements of θi. In particular, circles ci at the pentagon's corners represent single-membership θi's, while circles on the lines connecting the corners represent θi's with mixed-membership in 2 communities. All other circles represent θi's with mixed-membership in ≥ 3 communities. Furthermore, each circle ci's color is also a θi-weighted convex combination, this time of the RGB values of 5 colors: blue, green, red, cyan and purple. This use of color helps distinguish between vertices with 2 versus 3 or more communities: for example, even though the largest circle sits on the blue-red line (which initially suggests mixed-membership in 2 communities), its dark green color actually comes from mixed-membership in 3 communities: green, red and cyan.

Footnote 5: Note that the maximum degree still increases with N, because MMSB has a binomial degree distribution.
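Both convex combinations used by the visualization are a single matrix product each. A sketch of our own (the corner placement and the exact RGB anchor values are our choices; the text only fixes the five color names):

```python
import numpy as np

def pentagon_embedding(theta, anchor_colors=None):
    """Map each theta_i on the 4-simplex to a 2-D position and an RGB color,
    both as theta-weighted convex combinations: of the 5 pentagon corners,
    and of 5 anchor colors."""
    K = theta.shape[1]
    angles = np.pi / 2 + 2 * np.pi * np.arange(K) / K   # first corner on top
    corners = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    if anchor_colors is None:    # blue, green, red, cyan, purple (our RGB picks)
        anchor_colors = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0],
                                  [0, 1, 1], [0.5, 0, 0.5]])
    return theta @ corners, theta @ anchor_colors
```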
Footnote 6: Available at http://snap.stanford.edu/data/web-Stanford.html

Figure 4: Per-iteration runtimes for MMSB, MMTM and MMTM with δ-subsampling, on synthetic networks with N ranging from 1,000 to 40,000, but with constant average degree.

Figure 5: N = 281,903 Stanford web graph, MMTM mixed-membership visualization.

Most high-degree vertices (large circles) are found at the pentagon's corners, leading to the intuitive conclusion that the five communities are centered on hub webpages with many links. Interestingly, the highest-degree vertices are all mixed-membership, suggesting that these webpages (which are mostly frontpages) lie on the boundaries between the communities. Finally, if we focus on the sets of vertices near each corner, we see that the green and red sets have distinct degree (i.e. circle size) distributions, suggesting that those communities may be functionally different from the other three.

7 Future Work and Conclusion

We have focused exclusively on triangular motifs because of their popularity in the literature, their relationship to community structure through the network clustering coefficient, and the ability to subsample them in a natural, node-centric fashion with minor impact on accuracy. However, the bag-of-network-motifs idea extends beyond triangles: one could easily consider subgraphs over 4 or more vertices, as in [13]. As with triangular motifs, it is algorithmically infeasible to consider all possible subgraphs; rather, we must focus our attention on a meaningful subset of them. Nevertheless, higher-order motifs could be more suited for particular tasks, thus meriting their investigation.
In modeling terms, we have applied triangular motifs to a generative mixed-membership setting, which is suitable for visualization but not necessarily for attribute prediction. Recent developments in constrained learning of generative models [23, 24] have yielded significant improvements in predictive accuracy, and these techniques are also applicable to mixed-membership triangular modeling. Also, given how well δ = 20 subsampling works for MMTM at N = 4,000, the next step would be investigating how to adaptively choose δ as N increases, in order to achieve good performance.

To summarize, we have introduced the bag-of-triangles representation as a parsimonious alternative to the network adjacency matrix, and developed a model (MMTM) and inference algorithm for mixed-membership community detection in networks. Compared to mixed-membership models that use the adjacency matrix (exemplified by MMSB), our model features a much smaller latent variable space, leading to faster inference and better performance at mixed-membership recovery. When combined with triangle subsampling, our model and inference algorithm scale easily to networks with 100,000s of vertices, which are completely infeasible for Θ(N^2) adjacency-matrix-based models: the adjacency matrix might not even fit in memory, to say nothing of runtime. As a final note, we speculate that the local nature of the triangles lends itself better to parallel inference than the adjacency matrix representation; it may be possible to find good "triangle separators", small subsets of triangles that divide the remaining triangles into large, non-vertex-overlapping subsets, which can then be inferred in parallel. This is similar to classical 1-edge separators that divide networks into non-overlapping subgraphs, which are unfortunately inapplicable to adjacency-matrix-based models, as they require separators over both the 0- and 1-edges.
With triangle separators, we expect triangle models to scale to networks with millions of vertices and more.

Acknowledgments

This work was supported by AFOSR FA9550010247 and NIH 1R01GM093156 to Eric P. Xing. Qirong Ho is supported by an Agency for Science, Technology and Research, Singapore fellowship. Junming Yin is a Lane Fellow under the Ray and Stephanie Lane Center for Computational Biology.

References

[1] E.M. Airoldi, D.M. Blei, S.E. Fienberg, and E.P. Xing. Mixed membership stochastic blockmodels. The Journal of Machine Learning Research, 9:1981–2014, 2008.
[2] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
[3] L. Cao and L. Fei-Fei. Spatially coherent latent topic model for concurrent segmentation and classification of objects and scenes. In ICCV 2007, pages 1–8. IEEE, 2007.
[4] B. Fulkerson, A. Vedaldi, and S. Soatto. Class segmentation and object localization with superpixel neighborhoods. In ICCV 2009, pages 670–677. IEEE, 2009.
[5] Q. Ho, J. Eisenstein, and E.P. Xing. Document hierarchies from text and links. In Proceedings of the 21st International Conference on World Wide Web, pages 739–748. ACM, 2012.
[6] Q. Ho, A. Parikh, L. Song, and E.P. Xing. Multiscale community blockmodel for network exploration. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011.
[7] M.J. Keeling and K.T.D. Eames. Networks and epidemic models. Journal of the Royal Society Interface, 2(4):295–307, 2005.
[8] R. Kondor, N. Shervashidze, and K.M. Borgwardt. The graphlet spectrum. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 529–536. ACM, 2009.
[9] D. Krackhardt and M. Handcock. Heider vs Simmel: Emergent features in dynamic structures. Statistical Network Analysis: Models, Issues, and New Directions, pages 14–27, 2007.
[10] J. Leskovec, L. Backstrom, R. Kumar, and A. Tomkins. Microscopic evolution of social networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 462–470. ACM, 2008.
[11] J. Leskovec and C. Faloutsos. Sampling from large graphs. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 631–636. ACM, 2006.
[12] K.T. Miller, T.L. Griffiths, and M.I. Jordan. Nonparametric latent feature models for link prediction. Advances in Neural Information Processing Systems (NIPS), pages 1276–1284, 2009.
[13] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon. Network motifs: Simple building blocks of complex networks. Science, 298(5594):824–827, 2002.
[14] R.M. Nallapati, A. Ahmed, E.P. Xing, and W.W. Cohen. Joint latent topic models for text and citations. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 542–550. ACM, 2008.
[15] M.E.J. Newman. Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23):8577–8582, 2006.
[16] M.E.J. Newman and J. Park. Why social networks are different from other types of networks. Arxiv preprint cond-mat/0305612, 2003.
[17] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[18] N. Shervashidze, S.V.N. Vishwanathan, T. Petri, K. Mehlhorn, and K. Borgwardt. Efficient graphlet kernels for large graph comparison. In Proceedings of the International Workshop on Artificial Intelligence and Statistics. Society for Artificial Intelligence and Statistics, 2009.
[19] G. Simmel and K.H. Wolff. The Sociology of Georg Simmel. Free Press, 1950.
[20] T.A.B. Snijders. Markov chain Monte Carlo estimation of exponential random graph models. Journal of Social Structure, 3(2):1–40, 2002.
[21] C.E. Tsourakakis. Fast counting of triangles in large real networks without counting: Algorithms and laws. In ICDM 2008, pages 608–617. IEEE, 2008.
[22] A. Vattani, D. Chakrabarti, and M. Gurevich. Preserving personalized pagerank in subgraphs. In ICML 2011, 2011.
[23] J. Zhu, A. Ahmed, and E.P. Xing. MedLDA: maximum margin supervised topic models for regression and classification. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1257–1264. ACM, 2009.
[24] J. Zhu, N. Chen, and E.P. Xing. Infinite latent SVM for classification and multi-task learning. Advances in Neural Information Processing Systems, 25.
A Better Way to Pretrain Deep Boltzmann Machines

Ruslan Salakhutdinov
Department of Statistics and Computer Science, University of Toronto
rsalakhu@cs.toronto.edu

Geoffrey Hinton
Department of Computer Science, University of Toronto
hinton@cs.toronto.edu

Abstract

We describe how the pretraining algorithm for Deep Boltzmann Machines (DBMs) is related to the pretraining algorithm for Deep Belief Networks, and we show that under certain conditions, the pretraining procedure improves the variational lower bound of a two-hidden-layer DBM. Based on this analysis, we develop a different method of pretraining DBMs that distributes the modelling work more evenly over the hidden layers. Our results on the MNIST and NORB datasets demonstrate that the new pretraining algorithm allows us to learn better generative models.

1 Introduction

A Deep Boltzmann Machine (DBM) is a type of binary pairwise Markov Random Field with multiple layers of hidden random variables. Maximum likelihood learning in DBMs, and other related models, is very difficult because of the hard inference problem induced by the partition function [3, 1, 12, 6]. Multiple layers of hidden units make learning in DBMs far more difficult [13]. Learning meaningful DBM models, particularly when modelling high-dimensional data, relies on the heuristic greedy pretraining procedure introduced by [7], which is based on learning a stack of modified Restricted Boltzmann Machines (RBMs). Unfortunately, unlike the pretraining algorithm for Deep Belief Networks (DBNs), the existing procedure lacks a proof that adding additional layers improves the variational bound on the log-probability that the model assigns to the training data. In this paper, we first show that under certain conditions, the pretraining algorithm improves a variational lower bound of a two-layer DBM. This result gives a much deeper understanding of the relationship between the pretraining algorithms for Deep Boltzmann Machines and Deep Belief Networks.
Using this understanding, we introduce a new pretraining procedure for DBMs and show that it allows us to learn better generative models of handwritten digits and 3D objects.

2 Deep Boltzmann Machines (DBMs)

A Deep Boltzmann Machine is a network of symmetrically coupled stochastic binary units. It contains a set of visible units v ∈ {0, 1}^D, and a series of layers of hidden units h(1) ∈ {0, 1}^F1, h(2) ∈ {0, 1}^F2, ..., h(L) ∈ {0, 1}^FL. There are connections only between units in adjacent layers. Consider a DBM with three hidden layers, as shown in Fig. 1, left panel. The probability that the DBM assigns to a visible vector v is:

P(v; θ) = (1/Z(θ)) Σ_h exp( Σ_ij W(1)_ij v_i h(1)_j + Σ_jl W(2)_jl h(1)_j h(2)_l + Σ_lm W(3)_lm h(2)_l h(3)_m ),   (1)

where h = {h(1), h(2), h(3)} is the set of hidden units, and θ = {W(1), W(2), W(3)} are the model parameters, representing visible-to-hidden and hidden-to-hidden symmetric interaction terms (see footnote 1). Setting W(2) = 0 and W(3) = 0 recovers the Restricted Boltzmann Machine (RBM) model.

Figure 1: Left: Deep Belief Network (DBN) and Deep Boltzmann Machine (DBM). The top two layers of a DBN form an undirected graph and the remaining layers form a belief net with directed, top-down connections. For a DBM, all the connections are undirected. Right: Pretraining a DBM with three hidden layers consists of learning a stack of RBMs that are then composed to create a DBM. The first and last RBMs in the stack need to be modified by using asymmetric weights.
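To make Eq. (1) concrete, the following sketch (ours, not from the paper) computes the exponent of Eq. (1) for one joint configuration and, for toy sizes only, the log partition function by exhaustive enumeration over all binary states:

```python
import numpy as np
from itertools import product

def dbm_unnorm_logp(v, h1, h2, h3, W1, W2, W3):
    """Exponent of Eq. (1) for one joint configuration (biases omitted,
    as in the text)."""
    return v @ W1 @ h1 + h1 @ W2 @ h2 + h2 @ W3 @ h3

def dbm_log_partition(W1, W2, W3):
    """log Z(theta) by brute force; only feasible for toy layer sizes."""
    D, F1 = W1.shape
    F2, F3 = W2.shape[1], W3.shape[1]
    states = lambda n: (np.array(s) for s in product([0, 1], repeat=n))
    vals = np.array([dbm_unnorm_logp(v, h1, h2, h3, W1, W2, W3)
                     for v in states(D) for h1 in states(F1)
                     for h2 in states(F2) for h3 in states(F3)])
    m = vals.max()
    return m + np.log(np.exp(vals - m).sum())   # log-sum-exp for stability
```

With all weights zero, every one of the 2^(D+F1+F2+F3) configurations has exponent 0, so log Z reduces to (D+F1+F2+F3) log 2, a handy sanity check.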
Approximate Learning: Exact maximum likelihood learning in this model is intractable, but efficient approximate learning of DBMs can be carried out by using mean-field inference to estimate data-dependent expectations, and an MCMC-based stochastic approximation procedure to approximate the model's expected sufficient statistics [7]. In particular, consider approximating the true posterior P(h|v; θ) with a fully factorized approximating distribution over the three sets of hidden units:

Q(h|v; µ) = ∏_{j=1}^{F1} q(h(1)_j|v) ∏_{l=1}^{F2} q(h(2)_l|v) ∏_{k=1}^{F3} q(h(3)_k|v),

where µ = {µ(1), µ(2), µ(3)} are the mean-field parameters with q(h(l)_i = 1) = µ(l)_i for l = 1, 2, 3. In this case, we can write down the variational lower bound on the log-probability of the data, which takes a particularly simple form:

log P(v; θ) ≥ v⊤W(1)µ(1) + µ(1)⊤W(2)µ(2) + µ(2)⊤W(3)µ(3) − log Z(θ) + H(Q),   (2)

where H(·) is the entropy functional. Learning proceeds by finding the value of µ that maximizes this lower bound for the current value of model parameters θ, which results in a set of mean-field fixed-point equations. Given the variational parameters µ, the model parameters θ are then updated to maximize the variational bound using stochastic approximation (for details see [7, 11, 14, 15]).

3 Pretraining Deep Boltzmann Machines

The above learning procedure works quite poorly when applied to DBMs that start with randomly initialized weights. Hidden units in higher layers are very under-constrained, so there is no consistent learning signal for their weights. To alleviate this problem, [7] introduced a layer-wise pretraining algorithm based on learning a stack of "modified" Restricted Boltzmann Machines (RBMs). The idea behind the pretraining algorithm is straightforward. When learning the parameters of the first-layer "RBM", the bottom-up weights are constrained to be twice the top-down weights (see Fig. 1, right panel).
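The mean-field fixed-point equations referred to above couple each layer to the layers directly adjacent to it, with each µ a logistic function of its total input. A sketch of the resulting iteration (ours; biases omitted, as in Eq. (2)):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(v, W1, W2, W3, n_iters=50):
    """Fixed-point iteration for the factorized posterior Q(h|v; mu) of a
    3-hidden-layer DBM: mu(1) receives input from v and mu(2), mu(2) from
    mu(1) and mu(3), and mu(3) from mu(2) only."""
    mu1 = np.full(W1.shape[1], 0.5)
    mu2 = np.full(W2.shape[1], 0.5)
    mu3 = np.full(W3.shape[1], 0.5)
    for _ in range(n_iters):
        mu1 = sigmoid(v @ W1 + W2 @ mu2)
        mu2 = sigmoid(mu1 @ W2 + W3 @ mu3)
        mu3 = sigmoid(mu2 @ W3)
    return mu1, mu2, mu3
```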
Intuitively, using twice the weights when inferring the states of the hidden units h^{(1)} compensates for the initial lack of top-down feedback. Conversely, when pretraining the last "RBM" in the stack, the top-down weights are constrained to be twice the bottom-up weights. For all the intermediate RBMs the weights are halved in both directions when composing them to form a DBM, as shown in Fig. 1, right panel. This heuristic pretraining algorithm works surprisingly well in practice. However, it is solely motivated by the need to end up with a model that has symmetric weights, and does not provide any useful insights into what is happening during the pretraining stage. Furthermore, unlike the pretraining algorithm for Deep Belief Networks (DBNs), it lacks a proof that each time a layer is added to the DBM, the variational bound improves.

¹We omit the bias terms for clarity of presentation.

3.1 Pretraining Algorithm for Deep Belief Networks

We first briefly review the pretraining algorithm for Deep Belief Networks [2], which will form the basis for developing a new pretraining algorithm for Deep Boltzmann Machines. Consider pretraining a two-layer DBN using a stack of RBMs. After learning the first RBM in the stack, we can write the generative model as: p(v; W^{(1)}) = Σ_{h^{(1)}} p(v | h^{(1)}; W^{(1)}) p(h^{(1)}; W^{(1)}). The second RBM in the stack attempts to replace the prior p(h^{(1)}; W^{(1)}) by a better model p(h^{(1)}; W^{(2)}) = Σ_{h^{(2)}} p(h^{(1)}, h^{(2)}; W^{(2)}), thus improving the fit to the training data. More formally, for any approximating distribution Q(h^{(1)} | v), the DBN's log-likelihood has the following variational lower bound on the log probability of the training data {v_1, ..., v_N}:

Σ_{n=1}^N log P(v_n) ≥ Σ_n E_{Q(h^{(1)}|v_n)} [ log P(v_n | h^{(1)}; W^{(1)}) ] − Σ_n KL( Q(h^{(1)}|v_n) || P(h^{(1)}; W^{(1)}) ).

We set Q(h^{(1)} | v_n; W^{(1)}) = P(h^{(1)} | v_n; W^{(1)}), which is the true factorial posterior of the first-layer RBM.
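The greedy stacking step, drawing h^{(1)} from the first RBM's posterior and treating those samples as data for the second RBM, can be sketched as follows. The CD-1 update and toy dimensions are illustrative assumptions (biases and many practical details omitted), not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, data, rng, lr=0.05):
    # One step of CD-1 for a plain RBM with a mean-field reconstruction
    # of the visibles (biases omitted as in the paper's presentation).
    ph = sigmoid(data @ W)
    h = (rng.random(ph.shape) < ph).astype(float)
    recon = sigmoid(h @ W.T)
    ph_recon = sigmoid(recon @ W)
    return W + lr * (data.T @ ph - recon.T @ ph_recon) / len(data)

rng = np.random.default_rng(0)
data = (rng.random((100, 6)) < 0.5).astype(float)
W1 = 0.01 * rng.standard_normal((6, 4))
for _ in range(20):
    W1 = cd1_step(W1, data, rng)

# Greedy stacking: samples from the first layer's posterior become the
# training "data" (the aggregated posterior) for the second RBM.
h1_data = (rng.random((100, 4)) < sigmoid(data @ W1)).astype(float)
W2 = 0.01 * rng.standard_normal((4, 3))
for _ in range(20):
    W2 = cd1_step(W2, h1_data, rng)
```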
Initially, when W^{(2)} = W^{(1)T}, Q(h^{(1)}|v_n) defines the DBN's true posterior over h^{(1)}, and the bound is tight. Maximizing the bound with respect to W^{(2)} only affects the last KL term in the above equation, and amounts to maximizing:

(1/N) Σ_{n=1}^N Σ_{h^{(1)}} Q(h^{(1)} | v_n; W^{(1)}) log P(h^{(1)}; W^{(2)}).   (3)

This is equivalent to training the second-layer RBM with vectors drawn from Q(h^{(1)} | v; W^{(1)}) as data. Hence, the second RBM in the stack learns a better model of the mixture over all N training cases, (1/N) Σ_n Q(h^{(1)} | v_n; W^{(1)}), called the "aggregated posterior". This scheme can be extended to training higher-layer RBMs. Observe that during the pretraining stage the whole prior of the lower-layer RBM is replaced by the next RBM in the stack. This leads to the hybrid Deep Belief Network model, with the top two layers forming a Restricted Boltzmann Machine, and the lower layers forming a directed sigmoid belief network (see Fig. 1, left panel).

3.2 A Variational Bound for Pretraining a Two-layer Deep Boltzmann Machine

Consider a simple two-layer DBM with tied weights W^{(2)} = W^{(1)T}, as shown in Fig. 2a:

P(v; W^{(1)}) = (1/Z(W^{(1)})) Σ_{h^{(1)},h^{(2)}} exp( v^T W^{(1)} h^{(1)} + h^{(1)T} W^{(1)T} h^{(2)} ).   (4)

Similar to DBNs, for any approximate posterior Q(h^{(1)}|v), we can write a variational lower bound on the log probability that this DBM assigns to the training data:

Σ_{n=1}^N log P(v_n) ≥ Σ_n E_{Q(h^{(1)}|v_n)} [ log P(v_n | h^{(1)}; W^{(1)}) ] − Σ_n KL( Q(h^{(1)}|v_n) || P(h^{(1)}; W^{(1)}) ).   (5)

The key insight is to note that the model's marginal distribution over h^{(1)} is the product of two identical distributions, one defined by an RBM composed of h^{(1)} and v, and the other defined by an identical RBM composed of h^{(1)} and h^{(2)} [8]:

P(h^{(1)}; W^{(1)}) = (1/Z(W^{(1)})) ( Σ_v e^{v^T W^{(1)} h^{(1)}} ) ( Σ_{h^{(2)}} e^{h^{(2)T} W^{(1)} h^{(1)}} ),   (6)

where the first factor is the RBM over h^{(1)} and v, and the second is the RBM over h^{(1)} and h^{(2)}.

Figure 2: Left: Pretraining a Deep Boltzmann Machine with two hidden layers.
a) The DBM with tied weights. b) The second RBM with two sets of replicated hidden units, which will replace half of the 1st RBM's prior. c) The resulting DBM with modified second hidden layer. Right: The DBM with tied weights is trained to model the data using one-step contrastive divergence.

The idea is to keep one of these two RBMs and replace the other by the square root of a better prior P(h^{(1)}; W^{(2)}). In particular, another RBM with two sets of replicated hidden units and tied weights, P(h^{(1)}; W^{(2)}) = Σ_{h^{(2a)},h^{(2b)}} P(h^{(1)}, h^{(2a)}, h^{(2b)}; W^{(2)}), is trained to be a better model of the aggregated variational posterior (1/N) Σ_n Q(h^{(1)} | v_n; W^{(1)}) of the first model (see Fig. 2b). By initializing W^{(2)} = W^{(1)T}, the second-layer RBM has exactly the same prior over h^{(1)} as the original DBM. If the RBM is trained by maximizing the log-likelihood objective:

Σ_n Σ_{h^{(1)}} Q(h^{(1)}|v_n) log P(h^{(1)}; W^{(2)}),   (7)

then we obtain:

Σ_n KL( Q(h^{(1)}|v_n) || P(h^{(1)}; W^{(2)}) ) ≤ Σ_n KL( Q(h^{(1)}|v_n) || P(h^{(1)}; W^{(1)}) ).   (8)

Similar to Eq. 6, the distribution over h^{(1)} defined by the second-layer RBM is also the product of two identical distributions. Once the two RBMs are composed to form a two-layer DBM model (see Fig. 2c), the marginal distribution over h^{(1)} is the geometric mean of the two probability distributions P(h^{(1)}; W^{(1)}) and P(h^{(1)}; W^{(2)}) defined by the first- and second-layer RBMs:

P(h^{(1)}; W^{(1)}, W^{(2)}) = (1/Z(W^{(1)}, W^{(2)})) ( Σ_v e^{v^T W^{(1)} h^{(1)}} ) ( Σ_{h^{(2)}} e^{h^{(1)T} W^{(2)} h^{(2)}} ).   (9)

Based on Eqs. 8 and 9, it is easy to show that the variational lower bound of Eq. 5 improves, because replacing half of the prior by a better model reduces the KL divergence from the variational posterior:

Σ_n KL( Q(h^{(1)}|v_n) || P(h^{(1)}; W^{(1)}, W^{(2)}) ) ≤ Σ_n KL( Q(h^{(1)}|v_n) || P(h^{(1)}; W^{(1)}) ).   (10)

Due to the convexity of the asymmetric divergence, this is guaranteed to improve the variational bound on the training data by at least half as much as fully replacing the original prior. This result highlights a major difference between DBNs and DBMs.
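The inequality behind Eq. (10) can be checked numerically. For any distributions, the normalizer of the geometric mean satisfies Z = Σ√(P₁P₂) ≤ 1 by Cauchy-Schwarz, so KL(Q||P₁₂) = ½KL(Q||P₁) + ½KL(Q||P₂) + log Z, which is at most the plain average of the two divergences; hence when P₂ fits Q better than P₁, the geometric mean improves on KL(Q||P₁) by at least half of the full improvement. A small numeric check (random 8-state distributions, an illustrative setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(p):
    return p / p.sum()

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

q  = normalize(rng.random(8))   # stand-in for the variational posterior
p1 = normalize(rng.random(8))   # stand-in for the original prior
p2 = normalize(rng.random(8))   # stand-in for the better prior

# Geometric mean of p1 and p2 with its own normalizer Z <= 1.
gm = normalize(np.sqrt(p1 * p2))

# KL(q||gm) = 0.5*KL(q||p1) + 0.5*KL(q||p2) + log Z  <=  the plain average.
lhs = kl(q, gm)
rhs = 0.5 * (kl(q, p1) + kl(q, p2))
```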
The procedure for adding an extra layer to a DBN replaces the full prior over the previous top layer, whereas the procedure for adding an extra layer to a DBM only replaces half of the prior. So in a DBM, the weights of the bottom-level RBM perform much more of the work than in a DBN, where the weights are only used to define the last stage of the generative process P(v | h^{(1)}; W^{(1)}). This result also suggests that adding layers to a DBM will give diminishing improvements in the variational bound, compared to adding layers to a DBN. This may explain why DBMs with three hidden layers typically perform worse than DBMs with two hidden layers [7, 8]. On the other hand, the disadvantage of the pretraining procedure for Deep Belief Networks is that the top-layer RBM is forced to do most of the modelling work. This may also explain the need to use a large number of hidden units in the top-layer RBM [2]. There is, however, a way to design a new pretraining algorithm that spreads the modelling work more equally across all layers, hence bypassing the shortcomings of the existing pretraining algorithms for DBNs and DBMs.

Figure 3: Left: Pretraining a Deep Boltzmann Machine with two hidden layers. a) The DBM with tied weights. b) The second-layer RBM is trained to model 2/3 of the 1st RBM's prior. c) The resulting DBM with modified second hidden layer. Right: The corresponding practical implementation of the pretraining algorithm that uses asymmetric weights.

3.3 Controlling the Amount of Modelling Work Done by Each Layer

Consider a slightly modified two-layer DBM with two groups of replicated 2nd-layer units, h^{(2a)} and h^{(2b)}, and tied weights (see Fig. 3a).
The model's marginal distribution over h^{(1)} is the product of three identical RBM distributions, defined by h^{(1)} and v, h^{(1)} and h^{(2a)}, and h^{(1)} and h^{(2b)}:

P(h^{(1)}; W^{(1)}) = (1/Z(W^{(1)})) ( Σ_v e^{v^T W^{(1)} h^{(1)}} ) ( Σ_{h^{(2a)}} e^{h^{(2a)T} W^{(1)} h^{(1)}} ) ( Σ_{h^{(2b)}} e^{h^{(2b)T} W^{(1)} h^{(1)}} ).

During the pretraining stage, we keep one of these RBMs and replace the other two by a better prior P(h^{(1)}; W^{(2)}). To do so, similar to Sec. 3.2, we train another RBM, but with three sets of hidden units and tied weights (see Fig. 3b). When we combine the two RBMs into a DBM, the marginal distribution over h^{(1)} is the geometric mean of three probability distributions: one defined by the first-layer RBM, and the remaining two defined by the second-layer RBM:

P(h^{(1)}; W^{(1)}, W^{(2)}) = (1/Z(W^{(1)}, W^{(2)})) P(h^{(1)}; W^{(1)}) P(h^{(1)}; W^{(2)}) P(h^{(1)}; W^{(2)})
= (1/Z(W^{(1)}, W^{(2)})) ( Σ_v e^{v^T W^{(1)} h^{(1)}} ) ( Σ_{h^{(2a)}} e^{h^{(2a)T} W^{(2)} h^{(1)}} ) ( Σ_{h^{(2b)}} e^{h^{(2b)T} W^{(2)} h^{(1)}} ).

In this DBM, 2/3 of the first RBM's prior over the first hidden layer has been replaced by the prior defined by the second-layer RBM. The variational bound on the training data is guaranteed to improve by at least 2/3 as much as fully replacing the original prior. Hence in this slightly modified DBM model, the second layer performs 2/3 of the modelling work compared to the first layer. Clearly, controlling the number of replicated hidden groups allows us to easily control the amount of modelling work left to the higher layers in the stack.

3.4 Practical Implementation

So far, we have made the assumption that we start with a two-layer DBM with tied weights. We now specify how one would train this initial set of tied weights W^{(1)}. Let us consider the original two-layer DBM in Fig. 2a with tied weights. If we knew the initial state vector h^{(2)}, we could train this DBM using one-step contrastive divergence (CD) with mean-field reconstructions of both the visible states v and the top-layer states h^{(2)}, as shown in Fig. 2, right panel. Instead, we simply set the initial state vector h^{(2)} to be equal to the data, v.
Using mean-field reconstructions for v and h^{(2)}, one-step CD is exactly equivalent to training a modified "RBM" with only one hidden layer but with bottom-up weights that are twice the top-down weights, as defined in the original pretraining algorithm (see Fig. 1, right panel). This way of training the simple DBM with tied weights is unlikely to maximize the likelihood objective, but in practice it produces surprisingly good models that reconstruct the training data well. When learning the second RBM in the stack, instead of maintaining a set of replicated hidden groups, it will often be convenient to approximate CD learning by training a modified RBM with one hidden layer but with asymmetric bottom-up and top-down weights. For example, consider pretraining a two-layer DBM, in which we would like to split the modelling work between the 1st- and 2nd-layer RBMs as 1/3 and 2/3. In this case, we train the first-layer RBM using one-step CD, but with the bottom-up weights constrained to be three times the top-down weights (see Fig. 3, right panel). The conditional distributions needed for CD learning take the form:

P(h^{(1)}_j = 1 | v) = 1 / (1 + exp(−Σ_i 3W^{(1)}_{ij} v_i)),   P(v_i = 1 | h^{(1)}) = 1 / (1 + exp(−Σ_j W^{(1)}_{ij} h^{(1)}_j)).

Conversely, for the second modified RBM in the stack, the top-down weights are constrained to be 3/2 times the bottom-up weights. The conditional distributions take the form:

P(h^{(2)}_l = 1 | h^{(1)}) = 1 / (1 + exp(−Σ_j 2W^{(2)}_{jl} h^{(1)}_j)),   P(h^{(1)}_j = 1 | h^{(2)}) = 1 / (1 + exp(−Σ_l 3W^{(2)}_{jl} h^{(2)}_l)).

Note that this second-layer modified RBM simply approximates the proper RBM with three sets of replicated h^{(2)} groups. In practice, this simple approximation works well compared to training a proper RBM, and is much easier to implement. When combining the RBMs into a two-layer DBM, we end up with W^{(1)} and 2W^{(2)} in the first and second layers, each performing 1/3 and 2/3 of the modelling work respectively:

P(v; θ) = (1/Z(θ)) Σ_{h^{(1)},h^{(2)}} exp( v^T W^{(1)} h^{(1)} + h^{(1)T} 2W^{(2)} h^{(2)} ).
(11)

Parameters of the entire model can be generatively fine-tuned using the combination of the mean-field algorithm and the stochastic approximation algorithm described in Sec. 2.

4 Pretraining a Three-Layer Deep Boltzmann Machine

In the previous section, we showed that, provided we start with a two-layer DBM with tied weights, we can train the second-layer RBM in a way that is guaranteed to improve the variational bound. For DBMs with more than two layers, we have not been able to develop a pretraining algorithm that is guaranteed to improve a variational bound. However, the results of Sec. 3 suggest that simple modifications when pretraining a stack of RBMs allow us to approximately control the amount of modelling work done by each layer.

Figure 4: Layer-wise pretraining of a 3-layer Deep Boltzmann Machine.

Consider learning a 3-layer DBM, in which each layer is forced to perform approximately 1/3 of the modelling work. This can easily be accomplished by learning a stack of three modified RBMs. Similar to the two-layer model, we train the first-layer RBM using one-step CD, but with the bottom-up weights constrained to be three times the top-down weights (see Fig. 4). Two-thirds of this RBM's prior will be modelled by the 2nd- and 3rd-layer RBMs. For the second modified RBM in the stack, we use 4W^{(2)} bottom-up and 3W^{(2)} top-down. Note that we are using 4W^{(2)} bottom-up, as we are expecting to replace half of the second RBM's prior by a third RBM, hence splitting the remaining 2/3 of the work equally between the top two layers. If we were to pretrain only a two-layer DBM, we would use 2W^{(2)} bottom-up and 3W^{(2)} top-down, as discussed in Sec. 3.2. For the last RBM in the stack, we use 2W^{(3)} bottom-up and 4W^{(3)} top-down.
When combining the three RBMs into a three-layer DBM, we end up with symmetric weights W^{(1)}, 2W^{(2)}, and 2W^{(3)} in the first, second, and third layers, with each layer performing 1/3 of the modelling work:

P(v; θ) = (1/Z(θ)) Σ_h exp( v^T W^{(1)} h^{(1)} + h^{(1)T} 2W^{(2)} h^{(2)} + h^{(2)T} 2W^{(3)} h^{(3)} ).   (12)

Algorithm 1: Greedy Pretraining Algorithm for a 3-layer Deep Boltzmann Machine
1: Train the 1st-layer "RBM" using one-step CD learning with mean-field reconstructions of the visible vectors. Constrain the bottom-up weights, 3W^{(1)}, to be three times the top-down weights, W^{(1)}.
2: Freeze 3W^{(1)}, which defines the 1st layer of features, and use samples h^{(1)} from P(h^{(1)} | v; 3W^{(1)}) as the data for training the second RBM.
3: Train the 2nd-layer "RBM" using one-step CD learning with mean-field reconstructions of its visible vectors. Set the bottom-up weights to 4W^{(2)}, and the top-down weights to 3W^{(2)}.
4: Freeze 4W^{(2)}, which defines the 2nd layer of features, and use samples h^{(2)} from P(h^{(2)} | h^{(1)}; 4W^{(2)}) as the data for training the next RBM.
5: Train the 3rd-layer "RBM" using one-step CD learning with mean-field reconstructions of its visible vectors. During learning, set the bottom-up weights to 2W^{(3)}, and the top-down weights to 4W^{(3)}.
6: Use the weights {W^{(1)}, 2W^{(2)}, 2W^{(3)}} to compose a three-layer Deep Boltzmann Machine.

The new pretraining procedure for a 3-layer DBM is shown in Alg. 1. Note that compared to the original algorithm, it requires almost no extra work and can be easily integrated into existing code. Extension to training DBMs with more layers is trivial. As we show in our experimental results, this pretraining can improve the generative performance of Deep Boltzmann Machines.

5 Experimental Results

In our experiments we used the MNIST and NORB datasets. During greedy pretraining, each layer was trained for 100 epochs using one-step contrastive divergence.
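Alg. 1 can be sketched compactly as below. The toy sizes, the CD-1-with-mean-field-reconstruction update, and the `train_modified_rbm` helper are all illustrative assumptions (biases and practical details omitted), not the authors' code; only the asymmetric-weight scalings (3,1), (4,3), (2,4) and the final composition {W^{(1)}, 2W^{(2)}, 2W^{(3)}} follow the algorithm.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_modified_rbm(data, n_hidden, up, down, rng, n_steps=50, lr=0.05):
    # CD-1 with asymmetric weights: `up`*W bottom-up, `down`*W top-down,
    # using mean-field reconstructions throughout (hypothetical helper).
    W = 0.01 * rng.standard_normal((data.shape[1], n_hidden))
    for _ in range(n_steps):
        ph = sigmoid(up * (data @ W))
        recon = sigmoid(down * (ph @ W.T))
        ph2 = sigmoid(up * (recon @ W))
        W += lr * (data.T @ ph - recon.T @ ph2) / len(data)
    return W

rng = np.random.default_rng(0)
v = (rng.random((200, 8)) < 0.5).astype(float)               # toy binary data

W1 = train_modified_rbm(v, 6, up=3, down=1, rng=rng)                 # step 1
h1 = (rng.random((200, 6)) < sigmoid(3 * (v @ W1))).astype(float)    # step 2
W2 = train_modified_rbm(h1, 6, up=4, down=3, rng=rng)                # step 3
h2 = (rng.random((200, 6)) < sigmoid(4 * (h1 @ W2))).astype(float)   # step 4
W3 = train_modified_rbm(h2, 4, up=2, down=4, rng=rng)                # step 5
dbm_weights = (W1, 2 * W2, 2 * W3)                                   # step 6
```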
Generative fine-tuning of the full DBM model, using mean-field together with stochastic approximation, required 300 epochs. In order to estimate the variational lower bounds achieved by different pretraining algorithms, we need to estimate the global normalization constant. Recently, [10] demonstrated that Annealed Importance Sampling (AIS) can be used to efficiently estimate the partition function of an RBM. We adopt AIS in our experiments as well. Together with variational inference, this allows us to obtain good estimates of the lower bound on the log-probability of the training and test data.

5.1 MNIST

The MNIST digit dataset contains 60,000 training and 10,000 test images of ten handwritten digits (0 to 9), with 28×28 pixels. In our first experiment, we considered a standard two-layer DBM with 500 and 1000 hidden units², and used two different algorithms for pretraining it. The first pretraining algorithm, which we call DBM-1/2-1/2, is the original algorithm for pretraining DBMs, as introduced by [7] (see Fig. 1). Here, the modelling work between the 1st- and 2nd-layer RBMs is split equally. The second algorithm, DBM-1/3-2/3, uses the modified pretraining procedure of Sec. 3.4, so that the second RBM in the stack ends up doing 2/3 of the modelling work compared to the 1st-layer RBM. Results are shown in Table 1. Prior to the global generative fine-tuning, the estimate of the lower bound on the average test log-probability for DBM-1/3-2/3 was −108.65 per test case, compared to −114.32 achieved by the standard pretraining algorithm DBM-1/2-1/2. The large difference of about 6 nats shows that leaving more of the modelling work to the second layer, which has a larger number of hidden units, substantially improves the variational bound. After the global generative fine-tuning, DBM-1/3-2/3 achieves a lower bound of −83.43, which is better than the −84.62 achieved by DBM-1/2-1/2.
This also improves upon the lower bound of −85.97 achieved by a carefully trained two-hidden-layer Deep Belief Network [10].

In our second experiment, we pretrained a 3-layer Deep Boltzmann Machine with 500, 500, and 1000 hidden units. The existing pretraining algorithm, DBM-1/2-1/4-1/4, approximately splits the modelling work between the three RBMs in the stack as 1/2, 1/4, 1/4, so the weights in the 1st-layer RBM perform half of the work compared to the higher-level RBMs. On the other hand, the new pretraining procedure (see Alg. 1), which we call DBM-1/3-1/3-1/3, splits the modelling work equally across all three layers.

²These architectures have been considered before in [7, 9], which allows us to provide a direct comparison.

Table 1: MNIST: Estimated lower bounds on the average training and test log-probabilities for two DBMs: one with two layers (500 and 1000 hidden units), and one with three layers (500, 500, and 1000 hidden units). Results are shown for various pretraining algorithms, followed by generative fine-tuning.

                                Pretraining            Generative Fine-Tuning
                                Train       Test       Train       Test
2 layers   DBM-1/2-1/2         −113.32     −114.32    −83.61      −84.62
           DBM-1/3-2/3         −107.89     −108.65    −82.83      −83.43
3 layers   DBM-1/2-1/4-1/4     −116.74     −117.38    −84.49      −85.10
           DBM-1/3-1/3-1/3     −107.12     −107.65    −82.34      −83.02

Table 2: NORB: Estimated lower bounds on the average training and test log-probabilities for two DBMs: one with two layers (1000 and 2000 hidden units), and one with three layers (1000, 1000, and 2000 hidden units). Results are shown for various pretraining algorithms, followed by generative fine-tuning.
                                Pretraining            Generative Fine-Tuning
                                Train       Test       Train       Test
2 layers   DBM-1/2-1/2         −640.94     −643.87    −598.13     −601.76
           DBM-1/3-2/3         −633.21     −636.65    −593.76     −597.23
3 layers   DBM-1/2-1/4-1/4     −641.87     −645.06    −598.98     −602.84
           DBM-1/3-1/3-1/3     −632.75     −635.14    −592.87     −596.11

Table 1 shows that DBM-1/3-1/3-1/3 achieves a lower bound on the average test log-probability of −107.65, improving upon DBM-1/2-1/4-1/4's bound of −117.38. The difference of about 10 nats further demonstrates that during the pretraining stage it is crucial to push more of the modelling work to the higher layers. After generative fine-tuning, the bound on the test log-probabilities for DBM-1/3-1/3-1/3 was −83.02, so with the new pretraining procedure, the three-hidden-layer DBM performs slightly better than the two-hidden-layer DBM. With the original pretraining procedure, the 3-layer DBM achieves a bound of −85.10, which is worse than the bound of −84.62 achieved by the 2-layer DBM, as reported by [7, 9].

5.2 NORB

The NORB dataset [4] contains images of 50 different 3D toy objects, with 10 objects in each of five generic classes: cars, trucks, planes, animals, and humans. Each object is photographed from different viewpoints and under various lighting conditions. The training set contains 24,300 stereo image pairs of 25 objects, 5 per class, while the test set contains 24,300 stereo pairs of the remaining, different 25 objects. From the training data, 4,300 cases were set aside for validation. To deal with raw pixel data, we followed the approach of [5] by first learning a Gaussian-binary RBM with 4000 hidden units, and then treating the activities of its hidden layer as preprocessed binary data. Similar to the MNIST experiments, we trained two Deep Boltzmann Machines: one with two layers (1000 and 2000 hidden units), and one with three layers (1000, 1000, and 2000 hidden units).
Table 2 reveals that for both DBMs, the new pretraining achieves much better variational bounds on the average test log-probability. Even after the global generative fine-tuning, Deep Boltzmann Machines pretrained using the new algorithm improve upon standard DBMs by at least 5 nats.

6 Conclusion

In this paper we provided a better understanding of how the pretraining algorithms for Deep Belief Networks and Deep Boltzmann Machines are related, and used this understanding to develop a different method of pretraining. Unlike many of the existing pretraining algorithms for DBNs and DBMs, the new procedure can distribute the modelling work more evenly over the hidden layers. Our results on the MNIST and NORB datasets demonstrate that the new pretraining algorithm allows us to learn much better generative models.

Acknowledgments

This research was funded by NSERC, an Early Researcher Award, and gifts from Microsoft and Google. G.H. and R.S. are fellows of the Canadian Institute for Advanced Research.

References
[1] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009.
[2] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[3] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring strategies for training deep neural networks. Journal of Machine Learning Research, 10:1–40, 2009.
[4] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In CVPR (2), pages 97–104, 2004.
[5] V. Nair and G. E. Hinton. Implicit mixtures of restricted Boltzmann machines. In Advances in Neural Information Processing Systems, volume 21, 2009.
[6] M. A. Ranzato. Unsupervised Learning of Feature Hierarchies. PhD thesis, New York University, 2009.
[7] R. R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines.
In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 12, 2009.
[8] R. R. Salakhutdinov and G. E. Hinton. An efficient learning procedure for Deep Boltzmann Machines. Neural Computation, 24:1967–2006, 2012.
[9] R. R. Salakhutdinov and H. Larochelle. Efficient learning of deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 13, 2010.
[10] R. R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the International Conference on Machine Learning, volume 25, pages 872–879, 2008.
[11] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML. ACM, 2008.
[12] M. Welling and G. E. Hinton. A new learning algorithm for mean field Boltzmann machines. Lecture Notes in Computer Science, 2415, 2002.
[13] M. Welling and C. Sutton. Learning in Markov random fields with contrastive free energies. In International Workshop on AI and Statistics (AISTATS 2005), 2005.
[14] L. Younes. On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates, March 17, 2000.
[15] A. L. Yuille. The convergence of contrastive divergences. In Advances in Neural Information Processing Systems, 2004.
Bayesian estimation of discrete entropy with mixtures of stick-breaking priors

Evan Archer*^{1,2,4}, Il Memming Park*^{2,3,4}, & Jonathan W. Pillow^{2,3,4}
1. Institute for Computational and Engineering Sciences, 2. Center for Perceptual Systems, 3. Dept. of Psychology, 4. Division of Statistics & Scientific Computation
The University of Texas at Austin

Abstract

We consider the problem of estimating Shannon's entropy H in the under-sampled regime, where the number of possible symbols may be unknown or countably infinite. Dirichlet and Pitman-Yor processes provide tractable prior distributions over the space of countably infinite discrete distributions, and have found major applications in Bayesian non-parametric statistics and machine learning. Here we show that they provide natural priors for Bayesian entropy estimation, due to the analytic tractability of the moments of the induced posterior distribution over entropy H. We derive formulas for the posterior mean and variance of H given data. However, we show that a fixed Dirichlet or Pitman-Yor process prior implies a narrow prior on H, meaning the prior strongly determines the estimate in the under-sampled regime. We therefore define a family of continuous mixing measures such that the resulting mixture of Dirichlet or Pitman-Yor processes produces an approximately flat prior over H. We explore the theoretical properties of the resulting estimators and show that they perform well on data sampled from both exponential and power-law tailed distributions.

1 Introduction

An important statistical problem in the study of natural systems is to estimate the entropy of an unknown discrete distribution on the basis of an observed sample. This is often much easier than the problem of estimating the distribution itself; in many cases, entropy can be accurately estimated with fewer samples than the number of distinct symbols.
Entropy estimation remains a difficult problem, however, as there is no unbiased estimator for entropy, and the maximum-likelihood estimator exhibits severe bias for small datasets. Previous work has tended to focus on methods for computing and reducing this bias [1–5]. Here, we instead take a Bayesian approach, building on a framework introduced by Nemenman et al. [6]. The basic idea is to place a prior over the space of probability distributions that might have generated the data, and then perform inference using the induced posterior distribution over entropy (see Fig. 1). We focus on the setting where our data are a finite sample from an unknown, or possibly even countably infinite, number of symbols. A Bayesian approach requires us to consider distributions over the infinite-dimensional simplex, ∆_∞. To do so, we employ the Pitman-Yor (PYP) and Dirichlet (DP) processes [7–9]. These processes provide an attractive family of priors for this problem, since: (1) the posterior distribution over entropy has analytically tractable moments; and (2) distributions drawn from a PYP can exhibit power-law tails, a feature commonly observed in data from social, biological, and physical systems [10–12]. However, we show that a fixed PYP prior imposes a narrow prior over entropy, leading to severe bias and overly narrow credible intervals for small datasets.

*These authors contributed equally.

Figure 1: Graphical model illustrating the ingredients for Bayesian entropy estimation. Arrows indicate conditional dependencies between variables, and the gray "plate" denotes multiple copies of a random variable (with the number of copies N indicated at bottom). For entropy estimation, the joint probability distribution over entropy H, data x = {x_j}, discrete distribution π = {π_i}, and parameter θ factorizes as: p(H, x, π, θ) = p(H|π) p(x|π) p(π|θ) p(θ). Entropy is a deterministic function of π, so p(H|π) = δ(H + Σ_i π_i log π_i).
We address this shortcoming by introducing a set of mixing measures such that the resulting Pitman-Yor Mixture (PYM) prior provides an approximately non-informative (i.e., flat) prior over entropy. The remainder of the paper is organized as follows. In Section 2, we introduce the entropy estimation problem and review prior work. In Section 3, we introduce the Dirichlet and Pitman-Yor processes and discuss key mathematical properties relating to entropy. In Section 4, we introduce a novel entropy estimator based on PYM priors and derive several of its theoretical properties. In Section 5, we show applications to data.

2 Entropy Estimation

Consider samples x := {x_j}_{j=1}^N drawn iid from an unknown discrete distribution π := {π_i}_{i=1}^A on a finite or (countably) infinite alphabet X. We wish to estimate the entropy of π,

H(π) = −Σ_{i=1}^A π_i log π_i,   (1)

where we identify X = {1, 2, ..., A} as the set of symbols without loss of generality (the alphabet size A may be infinite), and π_i > 0 denotes the probability of observing symbol i. We focus on the setting where N ≪ A. A reasonable first step toward estimating H is to estimate the distribution π. The observed count n_k = Σ_{i=1}^N 1_{x_i = k} for each k ∈ X yields the empirical distribution π̂, where π̂_k = n_k / N. Plugging this estimate for π into Eq. 1, we obtain the so-called "plugin" estimator, Ĥ_plugin = −Σ π̂_i log π̂_i, which is also the maximum-likelihood estimator. It exhibits substantial negative bias in the under-sampled regime.

2.1 Bayesian entropy estimation

The Bayesian approach to entropy estimation involves formulating a prior over distributions π, and then turning the crank of Bayesian inference to infer H using the posterior distribution. Bayes' least squares (BLS) estimators take the form:

Ĥ(x) = E[H|x] = ∫ H(π) p(π|x) dπ,   (2)

where p(π|x) is the posterior over π under some prior p(π) and the categorical likelihood p(x|π) = Π_j p(x_j|π), where p(x_j = i) = π_i.
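The negative bias of Ĥ_plugin in the under-sampled regime is easy to demonstrate: for a uniform distribution over A symbols observed with N ≪ A samples, the plugin estimate falls far below the true entropy log A. A small simulation (parameters chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_entropy(counts):
    # H_plugin = -sum(pi_hat * log pi_hat), the maximum-likelihood estimate
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

A, N = 1000, 100                  # under-sampled: N << A
true_H = np.log(A)                # entropy of the uniform distribution
estimates = []
for _ in range(200):
    x = rng.integers(0, A, size=N)
    counts = np.bincount(x, minlength=A)
    estimates.append(plugin_entropy(counts))
mean_est = float(np.mean(estimates))   # sits well below true_H (~2 nats here)
```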
The conditional p(H|π) = δ(H + Σ_i π_i log π_i), since H is deterministically related to π. To the extent that p(π) expresses our true prior uncertainty over the unknown distribution that generated the data, this estimate is optimal in a least-squares sense, and the corresponding credible intervals capture our uncertainty about H given the data. For distributions with known finite alphabet size A, the Dirichlet distribution provides an obvious choice of prior due to its conjugacy to the discrete (or multinomial) likelihood. It takes the form p(π) ∝ Π_{i=1}^A π_i^{α−1}, for π on the A-dimensional simplex (π_i ≥ 0, Σ π_i = 1), with concentration parameter α [13]. Many previously proposed estimators can be viewed as Bayesian estimators with a particular fixed choice of α. (See [14] for an overview.)

Figure 2: Power-law frequency distributions from neural signals and natural language. We compare samples from the DP (red) and PYP (blue) priors for two datasets with heavy tails (black). In both cases, we compare the empirical CDF with distributions sampled given d and α fixed to their ML estimates. For both datasets, the PYP better captures the heavy-tailed behavior of the data. Left: Frequencies among N = 1.2e6 neural spike words from 27 simultaneously-recorded retinal ganglion cells, binarized and binned at 10 ms [18]. Right: Frequency of N = 217,826 words in the novel Moby Dick by Herman Melville.

2.2 Nemenman-Shafee-Bialek (NSB) estimator

In a seminal paper, Nemenman et al. [6] showed that Dirichlet priors impose a narrow prior over entropy.
In the under-sampled regime, Bayesian estimates using a fixed Dirichlet prior are severely biased, and have small credible intervals (i.e., they give highly confident wrong answers!). To address this problem, [6] suggested a mixture-of-Dirichlets prior:

p(π) = ∫ p_Dir(π|α) p(α) dα,   (3)

where p_Dir(π|α) denotes a Dir(α) prior on π. To construct an approximately flat prior on entropy, [6] proposed the mixing weights on α given by

p(α) ∝ (d/dα) E[H|α] = A ψ₁(Aα + 1) − ψ₁(α + 1),   (4)

where E[H|α] denotes the expected value of H under a Dir(α) prior, and ψ₁(·) denotes the trigamma function. To the extent that p(H|α) resembles a delta function, Eq. 3 implies a uniform prior for H on [0, log A]. The BLS estimator under the NSB prior can then be written as:

Ĥ_nsb = E[H|x] = ∫∫ H(π) p(π|x, α) p(α|x) dπ dα = ∫ E[H|x, α] p(x|α) p(α) / p(x) dα,   (5)

where E[H|x, α] is the posterior mean under a Dir(α) prior, and p(x|α) denotes the evidence, which has a Polya distribution. Given analytic expressions for E[H|x, α] and p(x|α), this estimate is extremely fast to compute via 1D numerical integration in α. (See Appendix for details.) Next, we shall consider the problem of extending this approach to infinite-dimensional discrete distributions. Nemenman et al. proposed one such extension using an approximation to Ĥ_nsb in the limit A → ∞, which we refer to as Ĥ_nsb∞ [15, 16]. Unfortunately, Ĥ_nsb∞ increases unboundedly with N (as noted by [17]), and it performs poorly for the examples we consider.

3 Stick-Breaking Priors

To construct a prior over countably infinite discrete distributions, we employ a class of distributions from nonparametric Bayesian statistics known as stick-breaking processes [19]. In particular, we focus on two well-known subclasses of stick-breaking processes: the Dirichlet process (DP) and the Pitman-Yor process (PYP). Both are stochastic processes whose samples are discrete probability distributions [7, 20].
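The 1D integration behind Ĥ_nsb above can be sketched for a finite alphabet. The sketch assumes the standard closed forms (the posterior under Dir(α) is Dirichlet(n_i + α), whose mean entropy is ψ(S+1) − Σ_i (a_i/S) ψ(a_i + 1) with a_i = n_i + α and S = Σ a_i, plus the Polya evidence), and approximates the digamma function and the prior of Eq. (4) numerically; it is an illustration, not the authors' implementation.

```python
import numpy as np
from math import lgamma

def psi(x, h=1e-5):
    # digamma via centered difference of log-gamma (sketch-grade accuracy)
    return (lgamma(x + h) - lgamma(x - h)) / (2.0 * h)

def post_mean_H(counts, alpha):
    # E[H | x, alpha]: posterior is Dirichlet(n_i + alpha) with total S
    a = counts + alpha
    S = a.sum()
    return psi(S + 1) - sum((ai / S) * psi(ai + 1) for ai in a)

def log_evidence(counts, alpha):
    # Polya (Dirichlet-multinomial) evidence p(x|alpha), up to alpha-free terms
    A, N = len(counts), counts.sum()
    return (lgamma(A * alpha) - lgamma(N + A * alpha)
            + sum(lgamma(n + alpha) - lgamma(alpha) for n in counts))

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def nsb_estimate(counts, n_grid=400):
    A = len(counts)
    alphas = np.logspace(-4, 2, n_grid)
    xi = np.array([psi(A * a + 1) - psi(a + 1) for a in alphas])  # E[H|alpha]
    # NSB prior p(alpha) ∝ dE[H|alpha]/dalpha (Eq. 4), here by finite differences
    log_w = np.array([log_evidence(counts, a) for a in alphas])
    log_w += np.log(np.gradient(xi, alphas))
    w = np.exp(log_w - log_w.max())
    post = np.array([post_mean_H(counts, a) for a in alphas])
    return trapz(w * post, alphas) / trapz(w, alphas)

counts = np.array([10, 8, 5, 3, 2, 1, 1, 0, 0, 0])   # toy histogram, A = 10
H_nsb = nsb_estimate(counts)
```

For these toy counts the estimate lands between the plugin value and log A, reflecting the bias correction the mixture prior provides.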
A sample from a DP or PYP may be written as Σ_{i=1}^∞ π_i δ_{φ_i}, where π = {π_i} denotes a countably infinite set of 'weights' on a set of atoms {φ_i} drawn from some base probability measure, and δ_{φ_i} denotes a delta function on the atom φ_i.¹ The prior distribution over π under the DP and PYP is technically called the GEM distribution or the two-parameter Poisson-Dirichlet distribution, but we will abuse terminology and refer to it more simply as DP or PY. The DP weight distribution DP(α) may be described as a limit of finite Dirichlet distributions in which the alphabet size grows and the concentration parameter shrinks, A → ∞ and α_0 → 0, such that α_0 A → α [20]. The PYP generalizes the DP to allow power-law tails, and includes the DP as a special case [7]. Let PY(d, α) denote the PYP weight distribution with discount parameter d and concentration parameter α (also called the "Dirichlet parameter"), for d ∈ [0, 1), α > −d. When d = 0, this reduces to the DP weight distribution, denoted DP(α).

The name "stick-breaking" refers to the fact that the weights of the DP and PYP can be sampled by transforming an infinite sequence of independent Beta random variables in a procedure known as "stick-breaking" [21]. Stick-breaking provides samples π ~ PY(d, α) according to:

β_i ~ Beta(1 − d, α + id),    π̃_i = β_i Π_{k=1}^{i−1} (1 − β_k),    (6)

where π̃_i is known as the i-th size-biased sample from π. (The π̃_i sampled in this manner are not strictly decreasing, but decrease on average such that Σ_{i=1}^∞ π̃_i = 1 with probability 1.) Asymptotically, the tails of a (sorted) sample from DP(α) decay exponentially, while for PY(d, α) with d ≠ 0 the tails approximately follow a power law: π_i ∝ i^{−1/d} ([7], p. 867).² Many natural phenomena such as city size, language, and spike responses also exhibit power-law tails [10, 12]. (See Fig. 2.)
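The stick-breaking recipe in eq. 6 is straightforward to simulate. As a minimal illustration (ours, not from the paper), the following Python sketch draws a truncated set of PY(d, α) weights using the standard library's Beta sampler; the truncation level n_sticks is an assumption of the sketch, chosen large enough that the unbroken remainder of the stick is negligible.

```python
import random

def py_stick_breaking(d, alpha, n_sticks, rng):
    """Draw the first `n_sticks` size-biased weights of PY(d, alpha):
    beta_i ~ Beta(1 - d, alpha + i*d),  pi_i = beta_i * prod_{k<i}(1 - beta_k)."""
    weights = []
    remaining = 1.0  # length of the not-yet-broken stick
    for i in range(1, n_sticks + 1):
        beta_i = rng.betavariate(1.0 - d, alpha + i * d)
        weights.append(remaining * beta_i)
        remaining *= 1.0 - beta_i
    return weights

rng = random.Random(0)
w = py_stick_breaking(0.5, 1.0, 2000, rng)
total = sum(w)  # close to 1; the deficit is the unbroken remainder
```

Setting d = 0 recovers DP(α) samples; for d > 0 the sorted weights exhibit the power-law tail π_i ∝ i^{−1/d} noted above.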
3.1 Expectations over DP and PY weight distributions

A key virtue of PYP priors is a mathematical property called invariance under size-biased sampling, which allows us to convert expectations over π on the infinite-dimensional simplex to one- or two-dimensional integrals with respect to the distribution of the first two size-biased samples [23, 24]. These expectations are required for computing the mean and variance of H under the prior (or posterior) over π.

Proposition 1 (Expectations with first two size-biased samples). For π ~ PY(d, α) and arbitrary integrable functionals f and g of π,

E_{π|d,α}[ Σ_{i=1}^∞ f(π_i) ] = E_{π̃_1|d,α}[ f(π̃_1)/π̃_1 ],    (7)

E_{π|d,α}[ Σ_{i, j≠i} g(π_i, π_j) ] = E_{π̃_1,π̃_2|d,α}[ g(π̃_1, π̃_2)(1 − π̃_1)/(π̃_1 π̃_2) ],    (8)

where π̃_1 and π̃_2 are the first two size-biased samples from π. The first result (eq. 7) appears in [7], and we construct an analogous proof for eq. 8 (see Appendix).

The direct consequence of this lemma is that the first two moments of H(π) under the DP and PY priors have closed forms, which can be obtained using (from eq. 6): π̃_1 ~ Beta(1 − d, α + d) and π̃_2/(1 − π̃_1) | π̃_1 ~ Beta(1 − d, α + 2d), with f(π_i) = −π_i log π_i for E[H], and f(π_i) = π_i²(log π_i)² and g(π_i, π_j) = π_i π_j (log π_i)(log π_j) for E[H²].

¹Here, we will assume the base measure is non-atomic, so that the atoms φ_i are distinct with probability 1. This allows us to ignore the base measure, making the entropy of the distribution equal to the entropy of the weights π.
²Note that the power-law exponent is given incorrectly in [9, 22].

Figure 3: Prior mean and standard deviation over entropy H under a fixed PY prior, as a function of α and d (curves shown for d = 0.0, 0.1, ..., 0.9). Note that expected entropy is approximately linear in log α. Small prior standard deviations (right) indicate that p(H(π)|d, α) is highly concentrated around the prior mean (left).
3.2 Posterior distribution over weights

A second desirable property of the PY distribution is that the posterior p(π_post|x, d, α) takes the form of a (finite) Dirichlet mixture of point masses and a PY distribution [8]. This makes it possible to apply the above results to the posterior mean and variance of H. Let n_i denote the count of symbol i in an observed dataset. Then let α_i = n_i − d, N = Σ_i n_i, and A = Σ_i α_i = Σ_i n_i − Kd = N − Kd, where K = Σ_i 1{n_i > 0} is the number of unique symbols observed. Given data, the posterior over (countably infinite) discrete distributions, written as π_post = (p_1, p_2, p_3, ..., p_K, p_* π), has the distribution (given in [19]):

(p_1, p_2, p_3, ..., p_K, p_*) ~ Dir(n_1 − d, n_2 − d, ..., n_K − d, α + Kd),    (9)
π := (π_1, π_2, π_3, ...) ~ PY(d, α + Kd).

4 Bayesian entropy inference with PY priors

4.1 Fixed PY priors

Using the results of the previous section (eqs. 7 and 8), we can derive the prior mean and variance of H under a PY(d, α) prior on π:

E[H(π)|d, α] = ψ0(1 + α) − ψ0(1 − d),    (10)

var[H(π)|d, α] = (α + d)/((1 + α)²(1 − d)) + ((1 − d)/(1 + α)) ψ1(2 − d) − ψ1(2 + α),    (11)

where ψn is the polygamma function of n-th order (i.e., ψ0 is the digamma function). Fig. 3 shows these functions for a range of d and α values. They reveal the same phenomenon that [6] observed for finite Dirichlet distributions: a PY prior with fixed (d, α) induces a narrow prior over H. In the undersampled regime, Bayesian estimates under PY priors will therefore be strongly determined by the choice of (d, α), and posterior credible intervals will be unrealistically narrow.³

4.2 Pitman-Yor process mixture (PYM) prior

The narrow prior on H induced by fixed PY priors suggests a strategy for constructing a non-informative prior: mix together a family of PY distributions with some hyper-prior p(d, α) selected to yield an approximately flat prior on H. Following the approach of [6], we set p(d, α) proportional to the derivative of the expected entropy.
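Eqs. 10 and 11 involve only digamma and trigamma evaluations, so the fixed-prior moments are cheap to compute. The sketch below is our illustration, not the paper's code; the series-based polygamma helpers are an assumption made to keep it dependency-free.

```python
import math

def digamma(x):
    """psi_0(x) via recurrence plus asymptotic series (accurate for x > 0)."""
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (1 / 12 - inv2 * (1 / 120 - inv2 / 252))

def trigamma(x):
    """psi_1(x) via recurrence plus asymptotic series (accurate for x > 0)."""
    acc = 0.0
    while x < 6.0:
        acc += 1.0 / (x * x)
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return acc + inv * (1 + inv * (0.5 + inv * (1 / 6 - inv2 * (1 / 30 - inv2 / 42))))

def py_prior_mean_H(d, alpha):
    # eq. 10: E[H(pi) | d, alpha]
    return digamma(1.0 + alpha) - digamma(1.0 - d)

def py_prior_var_H(d, alpha):
    # eq. 11: var[H(pi) | d, alpha]
    return ((alpha + d) / ((1.0 + alpha) ** 2 * (1.0 - d))
            + (1.0 - d) / (1.0 + alpha) * trigamma(2.0 - d)
            - trigamma(2.0 + alpha))
```

For d = 0, α = 1 these give E[H] = ψ0(2) − ψ0(1) = 1 nat and a variance well under one nat, consistent with the narrow fixed-prior behavior visible in Fig. 3.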
This leaves one extra degree of freedom, since large prior entropies can arise either from large values of α (as in the DP) or from values of d near 1. (See Fig. 4A.) We can explicitly control this trade-off by reparametrizing the PY distribution, letting

h = ψ0(1 + α) − ψ0(1 − d),    γ = (ψ0(1) − ψ0(1 − d)) / (ψ0(1 + α) − ψ0(1 − d)),    (12)

where h > 0 is equal to the expected entropy of the prior (eq. 10) and γ ≥ 0 captures prior beliefs about the tail behavior of π. For γ = 0, we have the DP (d = 0); for γ = 1, we have a PY(d, 0) process (i.e., α = 0). Where required, the inverse transformation to the standard PY parameters is given by

α = ψ0⁻¹(h(1 − γ) + ψ0(1)) − 1,    d = 1 − ψ0⁻¹(ψ0(1) − hγ),

where ψ0⁻¹(·) denotes the inverse digamma function. We can construct an (approximately) flat improper prior over H on [0, ∞) by setting p(h, γ) = q(γ), where q is any density on [0, 1].

³The only exception is near the corner d → 1 and α → −d, where one can obtain arbitrarily large prior variance over H for a given mean. However, such priors have very heavy tails and seem poorly suited to data with finite or exponential tails; we do not explore them further here.

Figure 4: Expected entropy under Pitman-Yor and Pitman-Yor mixture priors. (A) Left: expected entropy as a function of the standard parameters (d, α). Right: expected entropy as a function of the new parameters (h, γ). (B) Sampled prior distributions (N = 5e3) over entropy implied by three different PY mixtures: (1) p(γ, h) ∝ δ(γ − 1) (red), a mixture of PY(d, 0) distributions; (2) p(γ, h) ∝ δ(γ) (blue), a mixture of DP(α) distributions; and (3) p(γ, h) ∝ exp(−10/(1 − γ)) (grey), which provides a tradeoff between (1) and (2). Note that the implied prior over H is approximately flat.
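The map (d, α) → (h, γ) in eq. 12 and its inverse can be checked numerically. The sketch below is our illustration (not the paper's code): it inverts the digamma function by Newton's method with a standard exponential/reciprocal initialization, and the series-based digamma/trigamma helpers are assumptions of the sketch.

```python
import math

EULER = 0.5772156649015329  # Euler-Mascheroni constant; digamma(1) = -EULER

def digamma(x):
    """psi_0(x) via recurrence plus asymptotic series (x > 0)."""
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (1 / 12 - inv2 * (1 / 120 - inv2 / 252))

def trigamma(x):
    """psi_1(x) via recurrence plus asymptotic series (x > 0)."""
    acc = 0.0
    while x < 6.0:
        acc += 1.0 / (x * x)
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return acc + inv * (1 + inv * (0.5 + inv * (1 / 6 - inv2 * (1 / 30 - inv2 / 42))))

def inv_digamma(y):
    """Solve digamma(x) = y for x > 0 by safeguarded Newton iteration."""
    x = math.exp(y) + 0.5 if y >= -2.22 else -1.0 / (y + EULER)
    for _ in range(40):
        x = max(x - (digamma(x) - y) / trigamma(x), 1e-12)
    return x

def py_to_hgamma(d, alpha):
    h = digamma(1.0 + alpha) - digamma(1.0 - d)            # eq. 12
    gamma = (digamma(1.0) - digamma(1.0 - d)) / h
    return h, gamma

def hgamma_to_py(h, gamma):
    alpha = inv_digamma(h * (1.0 - gamma) + digamma(1.0)) - 1.0
    d = 1.0 - inv_digamma(digamma(1.0) - h * gamma)
    return d, alpha

h, g = py_to_hgamma(0.3, 2.0)
d2, a2 = hgamma_to_py(h, g)   # round trip should recover (0.3, 2.0)
```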
The induced prior on entropy is thus

p(H) = ∫∫ p(H|π) p_PY(π|γ, h) p(γ, h) dγ dh,    (13)

where p_PY(π|γ, h) denotes a PY distribution on π with parameters γ, h. Fig. 4B shows samples from this prior under three different choices of q(γ), for h uniform on [0, 3]. We refer to the resulting prior distribution over π as the Pitman-Yor mixture (PYM) prior. All results in the figures are generated using the prior q(γ) ∝ max(1 − γ, 0).

4.3 Posterior inference

Posterior inference under the PYM prior amounts to computing a two-dimensional integral over the hyperparameters (d, α):

Ĥ_PYM = E[H|x] = ∫ E[H|x, d, α] p(x|d, α) p(d, α) / p(x) d(d, α).    (14)

Although in practice we parametrize our prior using the variables γ and h, for clarity and consistency with other literature we present results in terms of d and α. Just as in the case of the prior mean, the posterior mean E[H|x, d, α] is given by a convenient analytic form (derived in the Appendix):

E[H|α, d, x] = ψ0(α + N + 1) − ((α + Kd)/(α + N)) ψ0(1 − d) − (1/(α + N)) Σ_{i=1}^K (n_i − d) ψ0(n_i − d + 1).    (15)

The evidence, p(x|d, α), is given by

p(x|d, α) = [Π_{l=1}^{K−1} (α + ld)] [Π_{i=1}^K Γ(n_i − d)] Γ(1 + α) / (Γ(1 − d)^K Γ(α + N)).    (16)

We can obtain confidence regions for Ĥ_PYM by computing the posterior variance E[(H − Ĥ_PYM)²|x]. The estimate takes the same form as eq. 14, except that we substitute var[H|x, d, α] for E[H|x, d, α]. Although var[H|x, d, α] has an analytic closed form that is fast to compute, it is a lengthy expression that we do not have space to reproduce here; we provide it in the Appendix.

4.4 Computation

In practice, the two-dimensional integral over α and d is fast to compute numerically. Computation of the integrand can be carried out more efficiently using a representation in terms of multiplicities (also known as the empirical histogram distribution function [4]), the number of symbols that have occurred with a given frequency in the sample.
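The fixed-(d, α) posterior mean in eq. 15 needs only the observed counts and a digamma routine. The Python sketch below is our illustration (the series digamma helper is an assumption, not part of the paper); as a sanity check, with no data it falls back to the prior mean of eq. 10.

```python
import math

def digamma(x):
    """psi_0(x) via recurrence plus asymptotic series (x > 0)."""
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (1 / 12 - inv2 * (1 / 120 - inv2 / 252))

def posterior_mean_H(counts, d, alpha):
    """Eq. 15: E[H | x, d, alpha] under a fixed PY(d, alpha) prior."""
    counts = [n for n in counts if n > 0]
    N = sum(counts)
    K = len(counts)
    tail = sum((n - d) * digamma(n - d + 1.0) for n in counts)
    return (digamma(alpha + N + 1.0)
            - (alpha + K * d) / (alpha + N) * digamma(1.0 - d)
            - tail / (alpha + N))

h_uniform = posterior_mean_H([100] * 10, 0.0, 0.1)  # ten equally common symbols
h_peaked = posterior_mean_H([1000], 0.0, 0.1)       # all mass on one symbol
```

As expected, the uniform counts yield a posterior mean close to log 10 nats, while the single-symbol counts yield a posterior mean near zero.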
Letting m_k = |{i : n_i = k}| denote the total number of symbols with exactly k observations in the sample gives the compressed statistic m = [m_0, m_1, ..., m_nmax]ᵀ, where nmax is the largest number of samples for any symbol. Note that the inner product [0, 1, ..., nmax] · m = N, the total number of samples. The multiplicities representation significantly reduces the time and space complexity of our computations for most datasets, as we need only compute sums and products involving the number of symbols with distinct frequencies (at most nmax), rather than the total number of symbols K. In practice, we compute all expressions not explicitly involving π using the multiplicities representation. For instance, in terms of the multiplicities, the evidence takes the compressed form

p(x|d, α) = p(m_1, ..., m_nmax | d, α) = [Γ(1 + α) Π_{l=1}^{K−1} (α + ld) / Γ(α + N)] · N! · Π_{i=1}^{nmax} (1/m_i!) (Γ(i − d)/(i! Γ(1 − d)))^{m_i}.    (17)

4.5 Existence of posterior mean

Given that the PYM prior with p(h) ∝ 1 on [0, ∞) is improper, the prior expectation E[H] does not exist. It is therefore reasonable to ask what conditions on the data are sufficient to obtain a finite posterior expectation E[H|x]. We give an answer to this question in the following result, the proof of which we provide in Appendix B.

Theorem 1. Given a fixed dataset x of N samples and any bounded (potentially improper) prior p(γ, h), Ĥ_PYM < ∞ when N − K ≥ 2.

This result says that the BLS entropy estimate is finite whenever there are at least two "coincidences", i.e., two fewer unique symbols than samples, even though the prior expectation is infinite.

5 Results

We compare PYM to other proposed entropy estimators using four example datasets in Fig. 5. The Miller-Maddow estimator is a well-known method for bias correction based on a first-order Taylor expansion of the entropy functional.
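Returning to the multiplicities of Sec. 4.4: the compression is a single pass of bookkeeping. A minimal sketch (ours; the counts are made up) together with the two identities noted above, Σ_k k·m_k = N and Σ_{k≥1} m_k = K:

```python
def multiplicities(counts):
    """m[k] = number of symbols observed exactly k times, k = 0..nmax."""
    nmax = max(counts)
    m = [0] * (nmax + 1)
    for n in counts:
        m[n] += 1
    return m

counts = [5, 3, 3, 1, 1, 1, 2]               # seven observed symbols
m = multiplicities(counts)
N = sum(k * mk for k, mk in enumerate(m))     # inner product [0,1,...,nmax] . m
K = sum(m[1:])                                # number of distinct observed symbols
```

Products such as the one in eq. 17 then run over at most nmax distinct frequencies rather than over all K symbols.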
The CAE ("Coverage Adjusted Estimator") addresses bias by combining the Horvitz-Thompson estimator with a nonparametric estimate of the proportion of total probability mass (the "coverage") accounted for by the observed data x [17, 25]. When d = 0, PYM becomes a DP mixture (DPM). It may also be thought of as NSB with a very large A, and indeed the empirical performance of NSB with large A is nearly identical to that of DPM. All estimators appear to converge except Ĥ_nsb∞, the asymptotic extension of NSB discussed in Section 2.2, which increases unboundedly with data size. In addition, PYM performs competitively with the other estimators. Note that, unlike frequentist estimators, the PYM error bars in Fig. 5 arise from direct computation of the posterior variance of the entropy.

6 Discussion

In this paper we introduced PYM, a novel entropy estimator for distributions with unknown support. We derived analytic forms for the conditional mean and variance of entropy under a DP and PY prior for fixed parameters. Inspired by the work of [6], we defined a novel PY mixture prior, PYM, which implies an approximately flat prior on entropy. PYM addresses two major issues with NSB: its dependence on knowledge of A and its inability (inherited from the Dirichlet distribution) to

Figure 5: Convergence of entropy estimators with sample size (# of samples vs. entropy in nats), on two simulated and two real datasets; the estimators shown are plugin, MiMa, DPM, PYM, CAE, and NSB. We write "MiMa" for "Miller-Maddow" and "NSB∞" for Ĥ_nsb∞. Note that DPM ("DP mixture") is simply PYM with γ = 0. Credible intervals are indicated by two standard deviations of the posterior for DPM and PYM. (A) Exponential distribution π_i ∝ e^{−i}.
(B) Power-law distribution with exponent 2 (π_i ∝ i^{−2}). (C) Word frequencies from the novel Moby Dick. (D) Neural words from 8 simultaneously-recorded retinal ganglion cells. Note that for clarity Ĥ_nsb∞ has been cropped from B and D. All plots are averages over 16 Monte Carlo runs.

account for the heavy-tailed distributions which abound in biological and other natural data. We have shown that PYM performs well in comparison to other entropy estimators, and indicated its practicality in example applications to data. We note, however, that despite its strong performance in simulation and in many practical examples, we cannot assure that PYM will always be well-behaved. There may be specific distributions for which the PYM estimate is so heavily biased that the credible intervals fail to bracket the true entropy. This reflects a general state of affairs for entropy estimation on countable distributions: any convergence rate result must depend on restricting to a subclass of distributions [26]. Rather than working within some analytically-defined subclass of discrete distributions (such as, for instance, those with finite "entropy variance" [17]), we work within the space of distributions parametrized by PY, which spans both exponential- and power-law-tailed distributions. Although PY parameterizes a large class of distributions, its structure allows us to use the PY parameters to understand the qualitative features of the distributions made likely under a choice of prior. We feel this is a key feature for small-sample inference, where the choice of prior is most relevant. Moreover, in a forthcoming paper, we demonstrate the consistency of PYM, and show that its small-sample flexibility does not sacrifice desirable asymptotic properties. In conclusion, we have defined the PYM prior through a reparametrization that assures an approximately flat prior on entropy.
Moreover, although parametrized over the space of countably-infinite discrete distributions, the computation of PYM depends primarily on the first two conditional moments of entropy under PY. We derive closed-form expressions for these moments that are fast to compute, and allow the efficient computation of both the PYM estimate and its posterior credible interval. As we demonstrate in application to data, PYM is competitive with previously proposed estimators, and is especially well-suited to neural applications, where heavy-tailed distributions are commonplace.

Acknowledgments

We thank E. J. Chichilnisky, A. M. Litke, A. Sher and J. Shlens for retinal data, and Y. W. Teh for helpful comments on the manuscript. This work was supported by a Sloan Research Fellowship, McKnight Scholar's Award, and NSF CAREER Award IIS-1150186 (JP).

References

[1] G. Miller. Note on the bias of information estimates. Information Theory in Psychology: Problems and Methods, 2:95–100, 1955.
[2] S. Panzeri and A. Treves. Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7:87–107, 1996.
[3] S. Strong, R. Koberle, R. de Ruyter van Steveninck, and W. Bialek. Entropy and information in neural spike trains. Physical Review Letters, 80:197–202, 1998.
[4] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1191–1253, 2003.
[5] P. Grassberger. Entropy estimates from insufficient samplings. arXiv preprint arXiv:physics/0307138, January 2008.
[6] I. Nemenman, F. Shafee, and W. Bialek. Entropy and inference, revisited. Adv. Neur. Inf. Proc. Sys., 14, 2002.
[7] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. The Annals of Probability, 25(2):855–900, 1997.
[8] H. Ishwaran and L. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13(4):1211–1236, 2003.
[9] S. Goldwater, T.
Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. Adv. Neur. Inf. Proc. Sys., 18:459, 2006.
[10] G. Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley Press, 1949.
[11] T. Dudok de Wit. When do finite sample effects significantly affect entropy estimates? Eur. Phys. J. B Cond. Matter and Complex Sys., 11(3):513–516, October 1999.
[12] M. Newman. Power laws, Pareto distributions and Zipf's law. Contemporary Physics, 46(5):323–351, 2005.
[13] M. Hutter. Distribution of mutual information. Adv. Neur. Inf. Proc. Sys., 14:399, 2002.
[14] J. Hausser and K. Strimmer. Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks. The Journal of Machine Learning Research, 10:1469–1484, 2009.
[15] I. Nemenman, W. Bialek, and R. de Ruyter van Steveninck. Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E, 69(5):056111, 2004.
[16] I. Nemenman. Coincidences and estimation of entropies of random variables with large cardinalities. Entropy, 13(12):2013–2023, 2011.
[17] V. Q. Vu, B. Yu, and R. E. Kass. Coverage-adjusted entropy estimation. Statistics in Medicine, 26(21):4039–4060, 2007.
[18] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454:995–999, 2008.
[19] H. Ishwaran and M. Zarepour. Exact and approximate sum representations for the Dirichlet process. Canadian Journal of Statistics, 30(2):269–283, 2002.
[20] J. Kingman. Random discrete distributions. Journal of the Royal Statistical Society, Series B (Methodological), 37(1):1–22, 1975.
[21] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173, March 2001.
[22] Y. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes.
Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 985–992, 2006.
[23] M. Perman, J. Pitman, and M. Yor. Size-biased sampling of Poisson point processes and excursions. Probability Theory and Related Fields, 92(1):21–39, March 1992.
[24] J. Pitman. Random discrete distributions invariant under size-biased permutation. Advances in Applied Probability, pages 525–539, 1996.
[25] A. Chao and T. Shen. Nonparametric estimation of Shannon's index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10(4):429–443, 2003.
[26] A. Antos and I. Kontoyiannis. Convergence properties of functional estimates for discrete distributions. Random Structures & Algorithms, 19(3-4):163–193, 2001.
[27] D. Wolpert and D. Wolf. Estimating functions of probability distributions from a finite set of samples. Physical Review E, 52(6):6841–6854, 1995.
Semi-supervised Eigenvectors for Locally-biased Learning Toke Jansen Hansen Section for Cognitive Systems DTU Informatics Technical University of Denmark tjha@imm.dtu.dk Michael W. Mahoney Department of Mathematics Stanford University Stanford, CA 94305 mmahoney@cs.stanford.edu Abstract In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks “nearby” that pre-specified target region. Locally-biased problems of this sort are particularly challenging for popular eigenvector-based machine learning and data analysis tools. At root, the reason is that eigenvectors are inherently global quantities. In this paper, we address this issue by providing a methodology to construct semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these locally-biased eigenvectors can be used to perform locally-biased machine learning. These semi-supervised eigenvectors capture successively-orthogonalized directions of maximum variance, conditioned on being well-correlated with an input seed set of nodes that is assumed to be provided in a semi-supervised manner. We also provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning. 1 Introduction We consider the problem of finding a set of locally-biased vectors that inherit many of the “nice” properties that the leading nontrivial global eigenvectors of a graph Laplacian have—for example, that capture “slowly varying” modes in the data, that are fairly-efficiently computable, that can be used for common machine learning and data analysis tasks such as kernel-based and semi-supervised learning, etc.—so that we can perform what we will call locally-biased machine learning in a principled manner. 
By locally-biased machine learning, we mean that we have a very large data set, e.g., represented as a graph, and that we have information, e.g., given in a semi-supervised manner, that certain "regions" of the data graph are of particular interest. In this case, we may want to focus predominantly on those regions and perform data analysis and machine learning, e.g., classification, clustering, ranking, etc., that is "biased toward" those pre-specified regions. Examples of this include the following.

• Locally-biased community identification. In social and information network analysis, one might have a small "seed set" of nodes that belong to a cluster or community of interest [2, 13]; in this case, one might want to perform link or edge prediction, or one might want to "refine" the seed set in order to find other nearby members.

• Locally-biased image segmentation. In computer vision, one might have a large corpus of images along with a "ground truth" set of pixels as provided by a face detection algorithm [7, 14, 15]; in this case, one might want to segment entire heads from the background for all the images in the corpus in an automated manner.

• Locally-biased neural connectivity analysis. In functional magnetic resonance imaging applications, one might have small sets of neurons that "fire" in response to some external experimental stimulus [16]; in this case, one might want to analyze the subsequent temporal dynamics of stimulation of neurons that are "nearby," either in terms of connectivity topology or functional response.

These examples present considerable challenges for spectral techniques and traditional eigenvector-based methods. At root, the reason is that eigenvectors are inherently global quantities, thus limiting their applicability in situations where one is interested in very local properties of the data.
In this paper, we provide a methodology to construct what we will call semi-supervised eigenvectors of a graph Laplacian; and we illustrate how these locally-biased eigenvectors inherit many of the properties that make the leading nontrivial global eigenvectors of the graph Laplacian so useful in applications. To achieve this, we will formulate an optimization ansatz that is a variant of the usual global spectral graph partitioning optimization problem that includes a natural locality constraint as well as an orthogonality constraint, and we will iteratively solve this problem. In more detail, assume that we are given as input a (possibly weighted) data graph G = (V, E), an indicator vector s of a small “seed set” of nodes, a correlation parameter κ ∈[0, 1], and a positive integer k. Then, informally, we would like to construct k vectors that satisfy the following bicriteria: first, each of these k vectors is well-correlated with the input seed set; and second, those k vectors describe successively-orthogonalized directions of maximum variance, in a manner analogous to the leading k nontrivial global eigenvectors of the graph Laplacian. (We emphasize that the seed set s of nodes, the integer k, and the correlation parameter κ are part of the input; and thus they should be thought of as being available in a semi-supervised manner.) Somewhat more formally, our main algorithm, Algorithm 1 in Section 3, returns as output k semi-supervised eigenvectors; each of these is the solution to an optimization problem of the form of GENERALIZED LOCALSPECTRAL in Figure 1, and thus each “captures” (say) κ/k of the correlation with the seed set. Our main theoretical result states that these vectors define successively-orthogonalized directions of maximum variance, conditioned on being κ/k-well-correlated with an input seed set s; and that each of these k semisupervised eigenvectors can be computed quickly as the solution to a system of linear equations. 
From a technical perspective, the work most closely related to ours is that of Mahoney et al. [14]. The original algorithm of Mahoney et al. [14] introduced a methodology to construct a locally-biased version of the leading nontrivial eigenvector of a graph Laplacian and showed (theoretically and empirically in a social network analysis application) that the resulting vector could be used to partition a graph in a locally-biased manner. From this perspective, our extension incorporates a natural orthogonality constraint that successive vectors need to be orthogonal to previous vectors. Subsequent to the work of [14], [15] applied the algorithm of [14] to the problem of finding locally-biased cuts in a computer vision application. Similar ideas have also been applied somewhat differently. For example, [2] use locally-biased random walks, e.g., short random walks starting from a small seed set of nodes, to find clusters and communities in graphs arising in Internet advertising applications; [13] used locally-biased random walks to characterize the local and global clustering structure of a wide range of social and information networks; [11] developed the Spectral Graph Transducer (SGT), that performs transductive learning via spectral graph partitioning. The objectives in both [11] and [14] are considered constrained eigenvalue problems, that can be solved by finding the smallest eigenvalue of an asymmetric generalized eigenvalue problem, but in practice this procedure can be highly unstable [8]. The SGT reduces the instabilities by performing all calculations in a subspace spanned by the d smallest eigenvectors of the graph Laplacian, whereas [14] perform a binary search, exploiting the monotonic relationship between a control parameter and the corresponding Lagrange multiplier. 
In parallel, [3] and a large body of subsequent work including [6] used eigenvectors of the graph Laplacian to perform dimensionality reduction and data representation, in unsupervised and semi-supervised settings. Many of these methods have a natural interpretation in terms of kernel-based learning [18]. Many of these diffusion-based spectral methods also have a natural interpretation in terms of spectral ranking [21]. "Topic sensitive" and "personalized" versions of these spectral ranking methods have also been studied [9, 10]; and these were the motivation for diffusion-based methods to find locally-biased clusters in large graphs [19, 1, 14]. Our optimization ansatz is a generalization of the linear equation formulation of the PageRank procedure [17, 14, 21], and the solution involves Laplacian-based linear equation solving, which has been suggested as a primitive of more general interest in large-scale data analysis [20]. Finally, the form of our optimization problem has similarities to other work in computer vision applications: e.g., [23] and [7] find good conductance clusters subject to a set of linear constraints.

2 Background and Notation

Let G = (V, E, w) be a connected undirected graph with n = |V| vertices and m = |E| edges, in which edge {i, j} has non-negative weight w_ij. In the following, A_G ∈ ℝ^{V×V} will denote the adjacency matrix of G, while D_G ∈ ℝ^{V×V} will denote the diagonal degree matrix of G, i.e., D_G(i, i) = d_i = Σ_{{i,j}∈E} w_ij, the weighted degree of vertex i. Moreover, for a set of vertices S ⊆ V in a graph, the volume of S is vol(S) := Σ_{i∈S} d_i. The Laplacian of G is defined as L_G := D_G − A_G. (This is also called the combinatorial Laplacian, in which case the normalized Laplacian of G is 𝓛_G := D_G^{−1/2} L_G D_G^{−1/2}.) The Laplacian is the symmetric matrix having quadratic form xᵀL_G x = Σ_{{i,j}∈E} w_ij (x_i − x_j)², for x ∈ ℝ^V.
This implies that L_G is positive semidefinite and that the all-ones vector 1 ∈ ℝ^V is the eigenvector corresponding to the smallest eigenvalue 0. The generalized eigenvalues of L_G x = λ_i D_G x are 0 = λ_1 < λ_2 ≤ ··· ≤ λ_N. We will use v_2 to denote the smallest non-trivial eigenvector, i.e., the eigenvector corresponding to λ_2; v_3 to denote the next eigenvector; and so on. Finally, for a matrix A, let A⁺ denote its (uniquely defined) Moore-Penrose pseudoinverse. For two vectors x, y ∈ ℝⁿ, and the degree matrix D_G for a graph G, we define the degree-weighted inner product as xᵀD_G y := Σ_{i=1}^n x_i y_i d_i. In particular, if a vector x has unit degree-weighted norm, then xᵀD_G x = 1. Given a subset of vertices S ⊆ V, we denote by 1_S the indicator vector of S in ℝ^V and by 1 the vector in ℝ^V having all entries set equal to 1.

3 Optimization Approach to Semi-supervised Eigenvectors

3.1 Motivation for the Program

Recall the optimization perspective on how one computes the leading nontrivial global eigenvectors of the normalized Laplacian 𝓛_G. The first nontrivial eigenvector v_2 is the solution to the problem GLOBALSPECTRAL that is presented on the left of Figure 1. Equivalently, although GLOBALSPECTRAL is a non-convex optimization problem, strong duality holds for it, and its solution may be computed as v_2, the leading nontrivial generalized eigenvector of L_G. The next eigenvector v_3 is the solution to GLOBALSPECTRAL, augmented with the constraint that xᵀD_G v_2 = 0; and in general the t-th generalized eigenvector of L_G is the solution to GLOBALSPECTRAL, augmented with the constraints that xᵀD_G v_i = 0, for i ∈ {2, ..., t−1}. Clearly, this set of constraints and the constraint xᵀD_G 1 = 0 can be written as xᵀD_G Q = 0, where 0 is a (t−1)-dimensional all-zeros vector, and where Q is an n × (t−1) orthogonal matrix whose i-th column equals v_i (where v_1 = 1, the all-ones vector, is the first column of Q).
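The graph quantities defined in Section 2 are easy to make concrete. Here is a small self-contained sketch (ours; the 4-node weighted graph is made up) that builds L_G = D_G − A_G and verifies the quadratic-form identity xᵀL_G x = Σ_{{i,j}∈E} w_ij (x_i − x_j)², along with the fact that the all-ones vector lies in the null space of L_G.

```python
# Toy weighted undirected graph on 4 vertices: edge list (i, j, w_ij).
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 0.5)]
n = 4

# Adjacency matrix A_G and weighted degrees d_i (diagonal of D_G).
A = [[0.0] * n for _ in range(n)]
for i, j, w in edges:
    A[i][j] += w
    A[j][i] += w
deg = [sum(row) for row in A]

# Combinatorial Laplacian L_G = D_G - A_G.
L = [[(deg[i] if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]

def quad_form(M, x):
    """Compute x^T M x for a dense matrix M."""
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

x = [0.3, -1.0, 2.0, 0.7]
lhs = quad_form(L, x)                                  # x^T L_G x
rhs = sum(w * (x[i] - x[j]) ** 2 for i, j, w in edges)  # sum of weighted squared differences
```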
Also presented in Figure 1 is LOCALSPECTRAL, which includes a constraint requiring the solution to be well-correlated with an input seed set. This LOCALSPECTRAL optimization problem was introduced in [14], where it was shown that the solution to LOCALSPECTRAL may be interpreted as a locally-biased version of the second eigenvector of the Laplacian. In particular, although LOCALSPECTRAL is not convex, its solution can be computed efficiently as the solution to a set of linear equations that generalize the popular Personalized PageRank procedure; in addition, by performing a sweep cut and appealing to a variant of Cheeger's inequality, this locally-biased eigenvector can be used to perform locally-biased spectral graph partitioning [14].

3.2 Our Main Algorithm

We will formulate the problem of computing semi-supervised vectors in terms of a primitive optimization problem of independent interest. Consider the GENERALIZED LOCALSPECTRAL optimization problem, as shown in Figure 1. For this problem, we are given a graph G = (V, E), with associated Laplacian matrix L_G and diagonal degree matrix D_G; an indicator vector s of a small

GLOBALSPECTRAL:
  minimize  xᵀL_G x
  s.t.      xᵀD_G x = 1
            xᵀD_G 1 = 0

LOCALSPECTRAL:
  minimize  xᵀL_G x
  s.t.      xᵀD_G x = 1
            xᵀD_G 1 = 0
            xᵀD_G s ≥ √κ

GENERALIZED LOCALSPECTRAL:
  minimize  xᵀL_G x
  s.t.      xᵀD_G x = 1
            xᵀD_G Q = 0
            xᵀD_G s ≥ √κ

Figure 1: Left: The usual GLOBALSPECTRAL partitioning optimization problem; the vector achieving the optimal solution is v_2, the leading nontrivial generalized eigenvector of L_G with respect to D_G. Middle: The LOCALSPECTRAL optimization problem, which was originally introduced in [14]; for κ = 0, this coincides with the usual global spectral objective, while for κ > 0, this produces solutions that are biased toward the seed vector s. Right: The GENERALIZED LOCALSPECTRAL optimization problem we introduce, which includes both the locality constraint and a more general orthogonality constraint.
Our main algorithm for computing semi-supervised eigenvectors will iteratively compute the solution to GENERALIZED LOCALSPECTRAL for a sequence of Q matrices. In all three cases, the optimization variable is x ∈Rn. “seed set” of nodes; a correlation parameter κ ∈[0, 1]; and an n×ν constraint matrix Q that may be assumed to be an orthogonal matrix. We will assume (without loss of generality) that s is properly normalized and orthogonalized so that sT DGs = 1 and sT DG1 = 0. While s can be a general unit vector orthogonal to 1, it may be helpful to think of s as the indicator vector of one or more vertices in V , corresponding to the target region of the graph. In words, the problem GENERALIZED LOCALSPECTRAL asks us to find a vector x ∈Rn that minimizes the variance xT LGx subject to several constraints: that x is unit length; that x is orthogonal to the span of Q; and that x is √κ-well-correlated with the input seed set vector s. In our application of GENERALIZED LOCALSPECTRAL to the computation of semi-supervised eigenvectors, we will iteratively compute the solution to GENERALIZED LOCALSPECTRAL, updating Q to contain the already-computed semi-supervised eigenvectors. That is, to compute the first semi-supervised eigenvector, we let Q = 1, i.e., the n-dimensional all-ones vector, which is the trivial eigenvector of LG, in which case Q is an n×1 matrix; and to compute each subsequent semi-supervised eigenvector, we let the columns of Q consist of 1 and the other semi-supervised eigenvectors found in each of the previous iterations. To show that GENERALIZED LOCALSPECTRAL is efficiently-solvable, note that it is a quadratic program with only one quadratic constraint and one linear equality constraint. In order to remove the equality constraint, which will simplify the problem, let’s change variables by defining the n×(n−ν) matrix F as {x : QT DGx = 0} = {x : x = Fy}. That is, F is a span for the null space of QT ; and we will take F to be an orthogonal matrix. 
Then, with respect to the y variable, GENERALIZED LOCALSPECTRAL becomes

minimize_y y^T F^T L_G F y subject to y^T F^T D_G F y = 1, y^T F^T D_G s ≥ √κ. (1)

In terms of the variable x, the solution to this optimization problem is of the form

x* = c F (F^T (L_G − γ D_G) F)^+ F^T D_G s = c (F F^T (L_G − γ D_G) F F^T)^+ D_G s, (2)

for a normalization constant c ∈ (0, ∞) and for some γ that depends on √κ. The second expression follows from the first since F is an n × (n − ν) orthogonal matrix. This so-called "S-procedure" is described in greater detail in Chapter 5 and Appendix B of [4]. The significance of this is that, although it is a non-convex optimization problem, the GENERALIZED LOCALSPECTRAL problem can be solved by solving a linear equation, in the form given in Eqn. (2). Returning to our problem of computing semi-supervised eigenvectors, recall that, in addition to the input for the GENERALIZED LOCALSPECTRAL problem, we need to specify a positive integer k that indicates the number of vectors to be computed. In the simplest case, we would assume that we would like the correlation to be "evenly distributed" across all k vectors, in which case we will require that each vector is √(κ/k)-well-correlated with the input seed set vector s; but this assumption can easily be relaxed, and thus Algorithm 1 is formulated more generally as taking a k-dimensional vector κ = [κ_1, . . . , κ_k]^T of correlation coefficients as input. To compute the first semi-supervised eigenvector, we will let Q = 1, the all-ones vector, in which case the first nontrivial semi-supervised eigenvector is

x*_1 = c (L_G − γ_1 D_G)^+ D_G s, (3)

where γ_1 is chosen to saturate the part of the correlation constraint along the first direction. (Note that the projections F F^T from Eqn. (2) are not present in Eqn. (3) since by design s^T D_G 1 = 0.)
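The closed form of Eqn. (3) is easy to exercise on a toy graph. The following sketch (our own illustration, assuming numpy; the ring graph and the value of γ are arbitrary choices) computes x*_1 = c (L_G − γ D_G)^+ D_G s and normalizes it to unit degree-weighted norm; since s^T D_G 1 = 0 by construction, the solution is automatically D_G-orthogonal to 1:

```python
import numpy as np

# Ring graph on 8 nodes
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A

# Seed: indicator of node 0, D_G-orthogonalized against 1 and D_G-normalized,
# so that s^T D_G 1 = 0 and s^T D_G s = 1
s = np.zeros(n); s[0] = 1.0
ones = np.ones(n)
s -= (s @ D @ ones) / (ones @ D @ ones) * ones
s /= np.sqrt(s @ D @ s)

gamma = -0.5  # any gamma below lambda_2 serves for this illustration
x = np.linalg.pinv(L - gamma * D) @ (D @ s)
x /= np.sqrt(x @ D @ x)  # normalize to unit degree-weighted norm

assert abs(x @ D @ x - 1.0) < 1e-10
assert abs(ones @ D @ x) < 1e-8  # x inherits D_G-orthogonality to 1
```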
That is, to find the correct setting of γ_1, it suffices to perform a binary search over the possible values of γ_1 in the interval (−vol(G), λ_2(G)) until the correlation constraint is satisfied, that is, until (s^T D_G x)^2 is sufficiently close to κ_1; see [8, 14]. To compute subsequent semi-supervised eigenvectors, i.e., at steps t = 2, . . . , k if one ultimately wants a total of k semi-supervised eigenvectors, one lets Q be the n × t matrix with first column equal to 1 and with jth column, for j = 2, . . . , t, equal to x*_{j−1} (where we emphasize that x*_{j−1} is a vector, not an element of a vector). That is, Q is of the form Q = [1, x*_1, . . . , x*_{t−1}], where the x*_i are successive semi-supervised eigenvectors, and the projection matrix F F^T is of the form F F^T = I − D_G Q (Q^T D_G D_G Q)^{−1} Q^T D_G, due to the degree-weighted inner product. Then, by Eqn. (2), the tth semi-supervised eigenvector takes the form

x*_t = c (F F^T (L_G − γ_t D_G) F F^T)^+ D_G s. (4)

Algorithm 1 Semi-supervised eigenvectors
Input: L_G, D_G, s, κ = [κ_1, . . . , κ_k]^T, ϵ
Require: s^T D_G 1 = 0, s^T D_G s = 1, κ^T 1 ≤ 1
1: Q = [1]
2: for t = 1 to k do
3:   F F^T ← I − D_G Q (Q^T D_G D_G Q)^{−1} Q^T D_G
4:   ⊤ ← λ_2, where F F^T L_G F F^T v_2 = λ_2 F F^T D_G F F^T v_2
5:   ⊥ ← −vol(G)
6:   repeat
7:     γ_t ← (⊥ + ⊤)/2 (binary search over γ_t)
8:     x_t ← (F F^T (L_G − γ_t D_G) F F^T)^+ F F^T D_G s
9:     Normalize x_t such that x_t^T D_G x_t = 1
10:    if (x_t^T D_G s)^2 > κ_t then ⊥ ← γ_t else ⊤ ← γ_t end if
11:  until |(x_t^T D_G s)^2 − κ_t| ≤ ϵ or |(⊥ + ⊤)/2 − γ_t| ≤ ϵ
12:  Augment Q with x*_t by letting Q = [Q, x*_t]
13: end for

In more detail, Algorithm 1 presents pseudo-code for our main algorithm for computing semi-supervised eigenvectors. Several things should be noted about our implementation. First, note that we implicitly compute the projection matrix F F^T. Second, a naïve approach to Eqn. (2) does not immediately lead to an efficient solution, since D_G s will not be in the span of (F F^T (L_G − γ D_G) F F^T), thus leading to a large residual.
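The binary search over γ described above can be sketched as follows. This is our own simplified illustration for the first iteration only (t = 1, Q = [1]), assuming numpy and scipy; `first_ss_eigenvector` is a hypothetical helper name, not the authors' code, and it uses a dense pseudoinverse rather than the conjugate gradient solver the paper uses:

```python
import numpy as np
from scipy.linalg import eigh

def first_ss_eigenvector(L, D, s, kappa, eps=1e-10, max_iter=200):
    """Binary search over gamma, mirroring Algorithm 1 for t = 1 (Q = [1])."""
    lam2 = eigh(L, D, eigvals_only=True)[1]  # upper end of the interval
    lo, hi = -np.trace(D), lam2              # (-vol(G), lambda_2)
    x, gamma = None, None
    for _ in range(max_iter):
        gamma = (lo + hi) / 2.0
        x = np.linalg.pinv(L - gamma * D) @ (D @ s)
        x /= np.sqrt(x @ D @ x)              # unit degree-weighted norm
        corr = (x @ D @ s) ** 2
        if abs(corr - kappa) <= eps:
            break
        if corr > kappa:   # too correlated with the seed: move gamma up
            lo = gamma
        else:              # not correlated enough: move gamma down
            hi = gamma
    return x, gamma

# Ring graph and a normalized seed at node 0
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
D = np.diag(A.sum(axis=1)); L = D - A
s = np.zeros(n); s[0] = 1.0
ones = np.ones(n)
s -= (s @ D @ ones) / (ones @ D @ ones) * ones
s /= np.sqrt(s @ D @ s)

x, gamma = first_ss_eigenvector(L, D, s, kappa=0.5)
assert abs((x @ D @ s) ** 2 - 0.5) < 1e-4
```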
By changing variables so that x = F F^T y, the solution becomes x* ∝ F F^T (F F^T (L_G − γ D_G) F F^T)^+ F F^T D_G s. Since F F^T is a projection matrix, this expression is equivalent to x* ∝ (F F^T (L_G − γ D_G) F F^T)^+ F F^T D_G s. Third, we exploit the fact that F F^T (L_G − γ_t D_G) F F^T is an SPSD matrix, and we apply the conjugate gradient method, rather than computing the explicit pseudoinverse. That is, in the implementation we never represent the dense matrix F F^T, but instead we treat it as an operator and simply evaluate the result of applying it to a vector on either side. Fourth, we use the fact that λ_2 can never decrease (here we refer to λ_2 as the smallest non-zero eigenvalue of the modified matrix), so we only recalculate the upper bound for the binary search when an iteration saturates without satisfying |(x_t^T D_G s)^2 − κ_t| ≤ ϵ. In case of saturation, one can, for instance, recalculate λ_2 iteratively using the inverse iteration method, v_2^{k+1} ∝ (F F^T L_G F F^T − λ_2^{est} F F^T D_G F F^T)^+ F F^T D_G F F^T v_2^k, normalizing such that (v_2^{k+1})^T v_2^{k+1} = 1.

4 Illustrative Empirical Results

In this section, we will provide a detailed empirical evaluation of our method of semi-supervised eigenvectors and how they can be used for locally-biased machine learning. Our goal will be twofold: first, to illustrate how the "knobs" of our method work; and second, to illustrate the usefulness of the method in a real application. To do so, we will consider:
• Toy data. In Section 4.1, we will consider one-dimensional examples of the popular "small world" model [22]. This is a parameterized family of models that interpolates between low-dimensional grids and random graphs; and, as such, it will allow us to illustrate the behavior of our method and its various parameters in a controlled setting.
• Handwritten image data. In Section 4.2, we will consider the data from the MNIST digit data set [12].
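The implicit treatment of F F^T as an operator can be illustrated as follows (our own sketch, assuming numpy; it only verifies the projection identities, not the full conjugate gradient solver). The key point is that applying F F^T = I − D_G Q (Q^T D_G D_G Q)^{−1} Q^T D_G to a vector requires only a small ν × ν inverse and matrix-vector products:

```python
import numpy as np

def make_projector(D, Q):
    """Apply FF^T = I - D Q (Q^T D D Q)^{-1} Q^T D without forming it densely."""
    M = np.linalg.inv(Q.T @ D @ D @ Q)       # small (nu x nu) matrix
    def apply(v):
        return v - D @ (Q @ (M @ (Q.T @ (D @ v))))
    return apply

n = 6
D = np.diag(np.arange(1.0, n + 1))           # arbitrary positive degrees
Q = np.ones((n, 1))                          # first iteration: Q = [1]
P = make_projector(D, Q)

v = np.random.default_rng(0).normal(size=n)
w = P(v)
assert np.allclose(Q.T @ D @ w, 0.0)         # w satisfies the constraint Q^T D x = 0
assert np.allclose(P(w), w)                  # FF^T is idempotent (a projection)
```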
These data have been widely studied in machine learning and related areas and they have substantial "local heterogeneity"; and thus these data will allow us to illustrate how our method may be used to perform locally-biased versions of common machine learning tasks such as smoothing, clustering, and kernel construction.

4.1 Small-world Data

To illustrate how the "knobs" of our method work, and in particular how κ and γ interplay, we consider data constructed from the so-called small-world model. To demonstrate how semi-supervised eigenvectors can focus on specific target regions of a data graph to capture the slowest modes of local variation, we plot semi-supervised eigenvectors around illustrations of (non-rewired and rewired) realizations of the small-world graph; see Figure 2. The four subfigures correspond to: (a) global eigenvectors, p = 0, λ_2 = 0.000011, λ_3 = 0.000011, λ_4 = 0.000046, λ_5 = 0.000046; (b) global eigenvectors, p = 0.01, λ_2 = 0.000149, λ_3 = 0.000274, λ_4 = 0.000315, λ_5 = 0.000489; (c) semi-supervised eigenvectors, p = 0.01, κ = 0.005, γ_1 = 0.000047, γ_2 = 0.000052, γ_3 = −0.000000, γ_4 = −0.000000; (d) semi-supervised eigenvectors, p = 0.01, κ = 0.05, γ_1 = −0.004367, γ_2 = −0.001778, γ_3 = −0.001665, γ_4 = −0.000822.

Figure 2: In each case (a-d), the data consist of 3600 nodes, each connected to its 8 nearest neighbors. In the center of each subfigure, we show the nodes (blue) and edges (black and light gray are the local edges, and blue are the randomly-rewired edges). In each subfigure, we wrap a plot (black x-axis and gray background) visualizing the 4 smallest semi-supervised eigenvectors, allowing us to see the effect of random edges (different values of rewiring probability p) and degree of localization (different values of κ). Eigenvectors are color coded as blue, red, yellow, and green, starting with the one having the smallest eigenvalue. See the main text for more details.
In Figure 2.a, we show a graph with no randomly-rewired edges (p = 0) and a locality parameter κ such that the global eigenvectors are obtained. This yields a symmetric graph with eigenvectors corresponding to orthogonal sinusoids, i.e., for all eigenvectors except the all-ones vector with eigenvalue 0, the algebraic multiplicity is 2; the first two capture the slowest mode of variation and correspond to a sine and a cosine with equal random phase-shift (rotational ambiguity). In Figure 2.b, random edges have been added with probability p = 0.01 and the locality parameter κ is still chosen such that the global eigenvectors of the rewired graph are obtained. In particular, note the small kinks in the eigenvectors at the locations of the randomly added edges. Since the graph is no longer symmetric, all of the visualized eigenvectors have algebraic multiplicity 1. Moreover, note the slow mode of variation in the interval at the top left; a normalized cut based on the leading global eigenvector would extract this region, since the remainder of the ring is more well-connected due to the degree of rewiring. In Figure 2.c, we see the same graph realization as in Figure 2.b, except that the semi-supervised eigenvectors have a seed node at the top of the circle and the correlation parameter κ_t = 0.005. Note that, like the global eigenvectors, the local approach produces modes of increasing variation. In addition, note that the neighborhood around "11 o'clock" contains more mass, when compared with Figure 2.b; the reason for this is that this region is well-connected with the seed via a randomly added edge. Above the visualization we also show the γ_t that saturates κ_t, i.e., γ_t is the Lagrange multiplier that defines the effective correlation κ_t. Not shown is that if we kept reducing κ, then γ_t would tend towards λ_{t+1}, and the respective semi-supervised eigenvector would tend towards the global eigenvector.
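For readers who want to reproduce a toy setting like this, here is a minimal small-world construction in the spirit of [22] (our own simplified variant, assuming numpy; it is not the exact generator used for Figure 2): a ring where each node is tied to its k nearest neighbors and each local edge is rewired with probability p.

```python
import numpy as np

def small_world(n, k, p, seed=0):
    """Ring of n nodes, each tied to its k nearest neighbors; each local
    edge is rewired to a uniformly random endpoint with probability p."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < p:             # rewire this local edge
                j = int(rng.integers(n))
                while j == i or A[i, j]:
                    j = int(rng.integers(n))
            A[i, j] = A[j, i] = 1.0
    return A

A = small_world(60, 8, 0.01)
assert np.allclose(A, A.T)        # undirected
assert np.all(np.diag(A) == 0)    # no self-loops
```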
Finally, in Figure 2.d, the desired correlation is increased to κ = 0.05 (thus decreasing the value of γ_t), making the different modes of variation more localized in the neighborhood of the seed. It should be clear that, in addition to being determined by the locality parameter, we can think of γ as a regularizer biasing the global eigenvectors towards the region near the seed set.

4.2 MNIST Digit Data

We now demonstrate the semi-supervised eigenvectors as a feature extraction preprocessing step in a machine learning setting. We consider the well-studied MNIST dataset containing 60000 training digits and 10000 test digits ranging from 0 to 9. We construct the complete 70000 × 70000 k-NN graph with k = 10 and with edge weights given by w_ij = exp(−(4/σ_i^2)∥x_i − x_j∥^2), where σ_i is the Euclidean distance from x_i to its nearest neighbor, and we define the graph Laplacian in the usual way. We evaluate the semi-supervised eigenvectors in a transductive learning setting by disregarding the majority of labels in the entire training data. We then use a few samples from each class to seed our semi-supervised eigenvectors, and a few others to train a downstream classification algorithm. Here we choose to apply the SGT of [11] for two main reasons. First, the transductive classifier is inherently designed to work on a subset of the global eigenvectors of the graph Laplacian, making it ideal for validating that the localized basis constructed by the semi-supervised eigenvectors can be more informative when we are solely interested in the "local heterogeneity" near a seed set. Second, using the SGT based on global eigenvectors is a good point of comparison, because we are only interested in the effect of our subspace representation. (If we used one type of classifier in the local setting, and another in the global setting, the classification accuracy that we measure would obviously be biased.)
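The adaptive-bandwidth weight construction described above can be sketched as follows (our own illustration, assuming numpy; the max-symmetrization of W is our assumption, since σ_i ≠ σ_j in general makes the raw weights asymmetric and the text does not specify how they are combined):

```python
import numpy as np

def knn_weights(X, k):
    """w_ij = exp(-4 ||x_i - x_j||^2 / sigma_i^2), where sigma_i is the
    distance from x_i to its nearest neighbor."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    np.fill_diagonal(d2, np.inf)
    sigma2 = d2.min(axis=1)                  # squared nearest-neighbor distance
    nn = np.argsort(d2, axis=1)[:, :k]       # k nearest neighbors of each point
    W = np.zeros_like(d2)
    for i in range(len(X)):
        W[i, nn[i]] = np.exp(-4.0 * d2[i, nn[i]] / sigma2[i])
    return np.maximum(W, W.T)                # symmetrize (our choice)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
W = knn_weights(X, 10)
assert np.allclose(W, W.T)
# the largest weight is attained at each point's nearest neighbor: exp(-4)
assert float(W.max()) <= np.exp(-4.0) + 1e-12
```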
As in [11], we normalize the spectrum of both global and semi-supervised eigenvectors by replacing the eigenvalues with some monotonically increasing function. We use λ_i = i^2/k^2, i.e., focusing on ranking among smallest cuts; see [5]. Furthermore, we fix the regularization parameter of the SGT to c = 3200, and for simplicity we fix γ = 0 for all semi-supervised eigenvectors, implicitly defining the effective κ = [κ_1, . . . , κ_k]^T. Clearly, other correlation distributions and values of γ may yield subspaces with even better discriminative properties.¹

Labeled points | #Semi-supervised eigenvectors for SGT (1, 2, 4, 6, 8, 10) | #Global eigenvectors for SGT (1, 5, 10, 15, 20, 25)
1:1    | 0.39 0.39 0.38 0.38 0.38 0.36 | 0.50 0.48 0.36 0.27 0.27 0.19
1:10   | 0.30 0.31 0.25 0.23 0.19 0.15 | 0.49 0.36 0.09 0.08 0.06 0.06
5:50   | 0.12 0.15 0.09 0.08 0.07 0.06 | 0.49 0.09 0.08 0.07 0.05 0.04
10:100 | 0.09 0.10 0.07 0.06 0.05 0.05 | 0.49 0.08 0.07 0.06 0.04 0.04
50:500 | 0.03 0.03 0.03 0.03 0.03 0.03 | 0.49 0.10 0.07 0.06 0.04 0.04

Table 1: Classification error for the SGT based on semi-supervised and global eigenvectors, respectively. The leftmost column encodes the configuration, e.g., 1:10 means 1 seed and 10 training samples from each class (a total of 22 samples; for the global approach these are all used for training). When the seed is well determined and the number of training samples is moderate (50:500), a single semi-supervised eigenvector is sufficient, whereas with less data we benefit from using multiple semi-supervised eigenvectors. All experiments have been repeated 10 times.

Here, we consider the task of discriminating between fours and nines, as these two classes tend to overlap more than other combinations. (A closed four usually resembles a nine more than an "open" four does.) Hence, we expect localization on low-order global eigenvectors, meaning that class separation will not be evident in the leading global eigenvector, but instead will be "buried" further down the spectrum.
Thus, this will illustrate how semi-supervised eigenvectors can represent relevant heterogeneities in a local subspace of low dimensionality. Table 1 summarizes our classification results based on semi-supervised and global eigenvectors, respectively. Finally, Figures 3 and 4 illustrate two realizations for the 1:10 configuration, where the training samples are fixed, but where we vary the seed nodes, to demonstrate the influence of the seed. See the captions in these figures for further details.

¹A thorough analysis regarding the importance of this parameter will appear in the journal version.

[Figure 3 shows digit images for the seed sets s+ and s−, the training sets l+ and l−, and the test data; a plot of classification error and unexplained correlation versus the number of semi-supervised eigenvectors (1-15), with errors 0.08, 0.07, 0.06, 0.05, 0.03, 0.02, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03; and pairwise scatter plots of components 1-5.]

Figure 3: Left: Shows a subset of the classification results for the SGT based on 5 semi-supervised eigenvectors seeded in s+ and s−, and trained using samples l+ and l−. Misclassifications are marked with black frames. Right: Visualizes all test data spanned by the first 5 semi-supervised eigenvectors, by plotting each component as a function of the others. Red (blue) points correspond to 4 (9), whereas green points correspond to the remaining digits. As the seed nodes are good representatives, we note that the eigenvectors provide a good class separation. We also plot the error as a function of local dimensionality, as well as the unexplained correlation, i.e., the initial components explain the majority of the correlation with the seed (effect of γ = 0). The particular realization based on the leading 5 semi-supervised eigenvectors yields an error of ≈ 0.03 (dashed circle).
[Figure 4 shows a realization analogous to Figure 3, with classification errors 0.48, 0.31, 0.30, 0.30, 0.30, 0.29, 0.27, 0.24, 0.20, 0.15, 0.10, 0.04, 0.04, 0.04, 0.04 for 1-15 semi-supervised eigenvectors.]

Figure 4: See the general description in Figure 3. Here we illustrate an instance where s+ shares many similarities with s−, i.e., s+ is on the boundary of the two classes. This particular realization achieves a classification error of ≈ 0.30 (dashed circle). In this constellation we first discover localization on low-order semi-supervised eigenvectors (≈ 12 eigenvectors), which is comparable to the error based on global eigenvectors (see Table 1), i.e., further down the spectrum we recover from the bad seed and pick up the relevant mode of variation.

In summary: We introduced the concept of semi-supervised eigenvectors that are biased towards local regions of interest in a large data graph. We demonstrated the feasibility on a well-studied dataset and found that our approach leads to more compact subspace representations by extracting desired local heterogeneities. Moreover, the algorithm is scalable, as the eigenvectors are computed as the solution to a sparse system of linear equations, preserving the low O(m) space complexity. Finally, we foresee that the approach will prove useful in a wide range of data analysis fields, due to the algorithm's speed, simplicity, and stability.

References
[1] R. Andersen, F.R.K. Chung, and K. Lang. Local graph partitioning using PageRank vectors. In FOCS '06: Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 475–486, 2006.
[2] R. Andersen and K. Lang. Communities from seed sets. In WWW '06: Proceedings of the 15th International Conference on World Wide Web, pages 223–232, 2006.
[3] M. Belkin and P. Niyogi.
Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
[5] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In Becker, editor, NIPS 2002, volume 15, pages 585–592, Cambridge, MA, USA, 2003.
[6] R.R. Coifman, S. Lafon, A.B. Lee, M. Maggioni, B. Nadler, F. Warner, and S.W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition in data: Diffusion maps. Proc. Natl. Acad. Sci. USA, 102(21):7426–7431, 2005.
[7] A. P. Eriksson, C. Olsson, and F. Kahl. Normalized cuts revisited: A reformulation for segmentation with linear grouping constraints. In Proceedings of the 11th International Conference on Computer Vision, pages 1–8, 2007.
[8] W. Gander, G. H. Golub, and U. von Matt. A constrained eigenvalue problem. Linear Algebra and its Applications, 114/115:815–839, 1989.
[9] T.H. Haveliwala. Topic-sensitive PageRank: A context-sensitive ranking algorithm for web search. IEEE Transactions on Knowledge and Data Engineering, 15(4):784–796, 2003.
[10] G. Jeh and J. Widom. Scaling personalized web search. In WWW '03: Proceedings of the 12th International Conference on World Wide Web, pages 271–279, 2003.
[11] T. Joachims. Transductive learning via spectral graph partitioning. In Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003), 2003.
[12] Y. LeCun and C. Cortes. The MNIST database of handwritten digits.
[13] J. Leskovec, K.J. Lang, A. Dasgupta, and M.W. Mahoney. Statistical properties of community structure in large social and information networks. In WWW '08: Proceedings of the 17th International Conference on World Wide Web, pages 695–704, 2008.
[14] M. W. Mahoney, L. Orecchia, and N. K. Vishnoi.
A local spectral method for graphs: with applications to improving graph partitions and exploring data graphs locally. Technical report, 2009. Preprint: arXiv:0912.0681.
[15] S. Maji, N. K. Vishnoi, and J. Malik. Biased normalized cuts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2057–2064, 2011.
[16] K.A. Norman, S.M. Polyn, G.J. Detre, and J.V. Haxby. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9):424–430, 2006.
[17] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
[18] B. Schölkopf and A.J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
[19] D.A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In STOC '04: Proceedings of the 36th Annual ACM Symposium on Theory of Computing, pages 81–90, 2004.
[20] S.-H. Teng. The Laplacian paradigm: Emerging algorithms for massive graphs. In Proceedings of the 7th Annual Conference on Theory and Applications of Models of Computation, pages 2–14, 2010.
[21] S. Vigna. Spectral ranking. Technical report, 2009. Preprint: arXiv:0912.0238.
[22] D.J. Watts and S.H. Strogatz. Collective dynamics of small-world networks. Nature, 393:440–442, 1998.
[23] S. X. Yu and J. Shi. Grouping with bias. In Advances in Neural Information Processing Systems 14: Proceedings of the 2001 Conference, pages 1327–1334, 2002.
The variational hierarchical EM algorithm for clustering hidden Markov models

Emanuele Coviello, ECE Dept., UC San Diego, ecoviell@ucsd.edu
Antoni B. Chan, CS Dept., CityU of Hong Kong, abchan@cityu.edu.hk
Gert R.G. Lanckriet, ECE Dept., UC San Diego, gert@ece.ucsd.edu

Abstract

In this paper, we derive a novel algorithm to cluster hidden Markov models (HMMs) according to their probability distributions. We propose a variational hierarchical EM algorithm that i) clusters a given collection of HMMs into groups of HMMs that are similar, in terms of the distributions they represent, and ii) characterizes each group by a "cluster center", i.e., a novel HMM that is representative for the group. We illustrate the benefits of the proposed algorithm on hierarchical clustering of motion capture sequences as well as on automatic music tagging.

1 Introduction

The hidden Markov model (HMM) [1] is a probabilistic model that assumes a signal is generated by a doubly embedded stochastic process. A discrete-time hidden state process, which evolves as a Markov chain, encodes the dynamics of the signal, and an observation process, at each time conditioned on the current state, encodes the appearance of the signal. HMMs have successfully served a variety of applications, including speech recognition [1], music analysis [2] and identification [3], and clustering of time-series data [4, 5]. This paper is about clustering HMMs. More precisely, we are interested in an algorithm that, given a collection of HMMs, partitions them into K clusters of "similar" HMMs, while also learning a representative HMM "cluster center" that concisely and appropriately represents each cluster. This is similar to standard k-means clustering, except that the data points are now HMMs instead of vectors in R^d.
Various applications motivate the design of HMM clustering algorithms, ranging from hierarchical clustering of sequential data (e.g., speech or motion sequences modeled by HMMs [4]), over hierarchical indexing for fast retrieval, to reducing the computational complexity of estimating mixtures of HMMs from large datasets (e.g., semantic annotation models for music and video) — by clustering HMMs, efficiently estimated from many small subsets of the data, into a more compact mixture model of all data. However, there has been relatively little work on HMM clustering and, therefore, on its applications. Existing approaches to clustering HMMs operate directly on the HMM parameter space, by grouping HMMs according to a suitable pairwise distance defined in terms of the HMM parameters. However, as HMM parameters lie on a non-linear manifold, a simple application of the k-means algorithm will not succeed at the task, since it assumes real vectors in a Euclidean space. In addition, such an approach would have the additional complication that the HMM parameters for a particular generative model are not unique, i.e., a permutation of the states leads to the same generative model. One solution, proposed in [4], first constructs an appropriate similarity matrix between all HMMs that are to be clustered (e.g., based on the Bhattacharyya affinity, which depends non-linearly on the HMM parameters [6]), and then applies spectral clustering. While this approach has proven successful at grouping HMMs into similar clusters [4], it does not allow generating novel HMMs as cluster centers. Each cluster can still be represented by choosing one of the given HMMs, e.g., the HMM which the spectral clustering procedure maps the closest to each spectral clustering center. However, this may be suboptimal for various applications of HMM clustering, e.g., in hierarchical estimation of HMM mixtures.
Spectral clustering can also be based on other affinity scores between HMM distributions than the Bhattacharyya affinity, such as the KL divergence approximated with sampling [7]. Instead, in this paper we propose to cluster HMMs directly with respect to the probability distributions they represent. We derive a hierarchical expectation maximization (HEM) algorithm that, starting from a group of HMMs, estimates a smaller mixture model that concisely represents and clusters the input HMMs (i.e., the input HMM distributions guide the estimation of the output mixture distribution). Historically, the first HEM algorithm was designed to cluster Gaussian probability distributions [8]. This algorithm starts from a Gaussian mixture model (GMM) and reduces it to another GMM with fewer components, where each of the mixture components of the reduced GMM represents, i.e., clusters, a group of the original Gaussian mixture components. More recently, Chan et al. [9] derived an HEM algorithm to cluster dynamic texture (DT) models (i.e., linear dynamical systems, LDSs) through their probability distributions. HEM has been applied successfully to many machine learning tasks for images [10], video [9] and music [11, 12]. The HEM algorithm is similar in spirit to Bregman clustering [13], which is based on assigning points to cluster centers using the KL divergence. To extend the HEM framework from GMMs to hidden Markov mixture models (H3Ms), additional marginalization over the hidden-state processes is required, as for DTMs. However, while Gaussians and DTs allow tractable inference in the E-step of HEM, this is no longer the case for HMMs. Therefore, in this work, we derive a variational formulation of the HEM algorithm (VHEM), and then leverage a variational approximation derived in [14] (which has not been used in a learning context so far) to make the inference in the E-step tractable.
The proposed VHEM algorithm for H3Ms (VHEM-H3M) allows us to cluster hidden Markov models, while also learning novel HMM centers that are representative of each cluster, in a way that is consistent with the underlying generative model of the input HMMs. The resulting VHEM algorithm can be generalized to handle other classes of graphical models, for which exact computation of the E-step in standard HEM would be intractable, by leveraging similar variational approximations. The efficacy of the VHEM-H3M algorithm is demonstrated on hierarchical motion clustering and on semantic music annotation and retrieval. The remainder of the paper is organized as follows. We review the hidden Markov model (HMM) and the hidden Markov mixture model (H3M) in Section 2. We present the derivation of the VHEM-H3M algorithm in Section 3, and a discussion and experimental evaluation in Section 4.

2 The hidden Markov (mixture) model

A hidden Markov model (HMM) M assumes a sequence of τ observations y_{1:τ} is generated by a doubly embedded stochastic process. The hidden state process x_{1:τ} is a first-order Markov chain on S states, with transition matrix A whose entries are a_{β,γ} = P(x_{t+1} = γ|x_t = β), and initial state distribution π = [π_1, . . . , π_S], where π_β = P(x_1 = β|M). Each state β generates observations according to an emission probability density function p(y|x = β, M), which here we assume time-invariant and modeled as a Gaussian mixture with M components, i.e., p(y|x = β, M) = \sum_{m=1}^M c_{β,m} p(y|ζ = m, M), where ζ ∼ multinomial(c_{β,1}, . . . , c_{β,M}) is the hidden variable that selects the mixture component, c_{β,m} is the mixture weight of the mth Gaussian component, and p(y|ζ = m, M) = N(y; µ_{β,m}, Σ_{β,m}) is the probability density function of a multivariate Gaussian distribution with mean µ_{β,m} and covariance matrix Σ_{β,m}. The HMM is specified by the parameters M = {π, A, {{c_{β,m}, µ_{β,m}, Σ_{β,m}}_{m=1}^M}_{β=1}^S}, which can be efficiently learned from an observation sequence y_{1:τ} with the Baum-Welch algorithm [1].
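The generative process just described can be sketched as follows (a toy illustration with arbitrary scalar parameters, assuming numpy; not the authors' code): sample a hidden state from the Markov chain, select a GMM component, then emit an observation from the corresponding Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
S, M, tau = 2, 2, 50
pi = np.array([0.6, 0.4])                    # initial state distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])       # transition matrix
c = np.array([[0.5, 0.5], [0.3, 0.7]])       # GMM weights per state
mu = np.array([[-2.0, 0.0], [2.0, 4.0]])     # GMM means per state/component
sd = np.ones((S, M))                         # GMM standard deviations

x = np.zeros(tau, dtype=int)                 # hidden state sequence x_{1:tau}
y = np.zeros(tau)                            # observation sequence y_{1:tau}
for t in range(tau):
    x[t] = rng.choice(S, p=pi) if t == 0 else rng.choice(S, p=A[x[t - 1]])
    m = rng.choice(M, p=c[x[t]])             # hidden variable zeta_t
    y[t] = rng.normal(mu[x[t], m], sd[x[t], m])

assert y.shape == (tau,)
assert set(np.unique(x)) <= {0, 1}
```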
A hidden Markov mixture model (H3M) models a set of observation sequences as samples from a group of K hidden Markov models, each associated to a specific sub-behavior [5]. For a given sequence, an assignment variable z ∼ multinomial(ω_1, · · · , ω_K) selects the parameters of one of the K HMMs. Each mixture component is parametrized by M_z = {π^z, A^z, {{c^z_{β,m}, µ^z_{β,m}, Σ^z_{β,m}}_{m=1}^M}_{β=1}^S}, and the H3M is parametrized by M = {ω_z, M_z}_{z=1}^K. The likelihood of a random sequence y_{1:τ} ∼ M is

p(y_{1:τ}|M) = \sum_{i=1}^K ω_i p(y_{1:τ}|z = i, M), (1)

where p(y_{1:τ}|z = i, M) is the likelihood of y_{1:τ} under the ith HMM component. To reduce clutter, here we assume that all the HMMs have the same number S of hidden states and that all emission probabilities have M mixture components, though our derivation could easily be extended to the more general case; in the remainder of the paper we use the notation in Table 1.

Table 1: Notation for the (b) base model and (r) reduced model.
Variables (b) / (r): index for HMM component: i / j; HMM states: β / ρ; HMM state sequence: β_{1:τ} = {β_1 · · · β_τ} / ρ_{1:τ} = {ρ_1 · · · ρ_τ}; index for component of GMM: m / ℓ.
Models (b) / (r): H3M: M^(b) / M^(r); HMM component: M^(b)_i / M^(r)_j; GMM emission: M^(b)_{i,β} / M^(r)_{j,ρ}; component of GMM: M^(b)_{i,β,m} / M^(r)_{j,ρ,ℓ}.
Probability distributions (notation → short-hand): HMM state seq. (b): p(x_{1:τ} = β_{1:τ}|z^(b) = i, M^(b)) → π^{(b),i}_{β_{1:τ}}; HMM state seq. (r): p(x_{1:τ} = ρ_{1:τ}|z^(r) = j, M^(r)) → π^{(r),j}_{ρ_{1:τ}}; HMM obs. likelihood (r): p(y_{1:τ}|z^(r) = j, M^(r)) → p(y_{1:τ}|M^(r)_j); GMM emission likelihood (r): p(y_t|x_t = ρ, M^(r)_j) → p(y_t|M^(r)_{j,ρ}); Gaussian likelihood (r): p(y_t|ζ_t = ℓ, x_t = ρ, M^(r)_j) → p(y_t|M^(r)_{j,ρ,ℓ}).
Expectations (notation → short-hand): HMM obs. sequence: E_{y_{1:τ}|z^(b)=i, M^(b)}[·] → E_{M^(b)_i}[·]; GMM emission: E_{y_t|x_t=β, M^(b)_i}[·] → E_{M^(b)_{i,β}}[·]; Gaussian component: E_{y_t|ζ_t=m, x_t=β, M^(b)_i}[·] → E_{M^(b)_{i,β,m}}[·].

3 Clustering hidden Markov models

We now derive the variational hierarchical EM algorithm for clustering HMMs (VHEM-H3M). Let M^(b) = {ω^(b)_i, M^(b)_i}_{i=1}^{K^(b)} be a base hidden Markov mixture model (H3M) with K^(b) components.
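The likelihood in Eq. (1) can be evaluated with the standard forward algorithm for each component and a log-sum-exp over components. The following sketch (our own illustration, assuming numpy, with a single Gaussian per state for brevity rather than a full GMM emission) shows this composition:

```python
import numpy as np

def hmm_loglik(y, pi, A, mu, sd):
    """log p(y_{1:tau} | M) via the scaled forward recursion
    (one Gaussian per state, scalar observations, for brevity)."""
    B = np.exp(-0.5 * ((y[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    alpha = pi * B[0]
    ll = np.log(alpha.sum()); alpha = alpha / alpha.sum()
    for t in range(1, len(y)):
        alpha = (alpha @ A) * B[t]
        ll += np.log(alpha.sum()); alpha = alpha / alpha.sum()
    return ll

def h3m_loglik(y, omega, hmms):
    """Eq. (1): log sum_i omega_i p(y_{1:tau} | z = i, M), via log-sum-exp."""
    lls = np.array([hmm_loglik(y, *h) for h in hmms])
    m = lls.max()
    return m + np.log(np.sum(omega * np.exp(lls - m)))

# Two toy HMM components with shared dynamics but different emission means
pi1 = np.array([0.5, 0.5]); A1 = np.array([[0.9, 0.1], [0.1, 0.9]])
mu1, sd1 = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
mu2 = np.array([-3.0, 3.0])
hmms = [(pi1, A1, mu1, sd1), (pi1, A1, mu2, sd1)]
omega = np.array([0.7, 0.3])

y = np.random.default_rng(2).normal(size=20)
ll = h3m_loglik(y, omega, hmms)
lls = np.array([hmm_loglik(y, *h) for h in hmms])
assert np.isfinite(ll)
assert ll >= (lls + np.log(omega)).max() - 1e-12  # log-sum-exp >= max term
```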
The goal of the VHEM-H3M algorithm is to find a reduced hidden Markov mixture model M(r) = {ω(r) j , M(r) j }K(r) j=1 with fewer components (i.e., K(r) < K(b)) that represents M(b) well. At a high level, the VHEM-H3M algorithm estimates the reduced H3M model M(r) from virtual samples distributed according to the base H3M model M(b). From this estimation procedure, the VHEM algorithm provides: (i) a (soft) clustering of the original K(b) HMMs into K(r) groups, encoded in the assignment variables ẑi,j, and (ii) novel HMM cluster centers, i.e., the HMM components of M(r), each of them representing a group of the original HMMs of M(b). Finally, because we take the expectation over the virtual samples, the estimation is carried out in an efficient manner that requires only knowledge of the parameters of the base model, without the need to generate actual virtual samples.

3.1 Parameter estimation

We consider a set Y of N virtual samples distributed according to the base model M(b), such that the Ni = Nω(b) i samples Yi = {y(i,m) 1:τ }Ni m=1 are from the ith component (i.e., y(i,m) 1:τ ∼ M(b) i ). We denote the entire set of samples as Y = {Yi}K(b) i=1 , and, in order to obtain a consistent clustering of the input HMMs M(b) i , we assume that the entirety of samples Yi is assigned to the same component of the reduced model [8]. Note that, in this formulation, we are not using virtual samples {x(i,m) 1:τ , y(i,m) 1:τ } for each base component, according to its joint distribution p(x1:τ, y1:τ|M(b) i ); instead, we treat Xi = {x(i,m) 1:τ }Ni m=1 as “missing” information and estimate it in the E-step. The reason is that a basis mismatch between components of M(b) i would cause problems when the parameters of M(r) j are computed from virtual samples of the hidden states of {M(b) i }K(b) i=1 .
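The allocation of virtual samples, Ni = N ω(b)_i, can be written directly; the rounding below is our addition, and the expectation-based derivation never actually draws these samples:

```python
def virtual_sample_counts(N, omega):
    # N_i = N * omega_i^(b): number of virtual samples assigned to base component i.
    return [round(N * w) for w in omega]
```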
The original formulation of HEM [8] maximizes the log-likelihood of the virtual samples, i.e., log p(Y |M(r)) = Σ^{K(b)}_{i=1} log p(Yi|M(r)), with respect to M(r), and uses the law of large numbers to turn the virtual samples into an expectation over the base model components M(b) i . In this paper, we start with a slightly different objective function to derive the VHEM algorithm. To estimate M(r), we maximize the expected log-likelihood of the virtual samples,

J (M(r)) = EM(b) [log p(Y |M(r))] = Σ^{K(b)}_{i=1} EM(b) i [log p(Yi|M(r))],   (2)

where the expectation is over the base model components M(b) i . A general framework for maximum likelihood estimation in the presence of hidden variables (which is the case for H3Ms) is the EM algorithm [15]. In this work, we take a variational perspective [16, 17, 18], which views both the E- and M-steps as maximization steps. The variational E-step first obtains a family of lower bounds to the log-likelihood (i.e., to equation (2)), indexed by variational parameters, and then optimizes over the variational parameters to find the tightest bound. The corresponding M-step then maximizes the lower bound (with the variational parameters fixed) with respect to the model parameters. One advantage of the variational formulation is that it allows replacing a difficult inference in the E-step with a variational approximation, by restricting the maximization to a smaller domain for which the lower bound is tractable.

3.1.1 Lower bound to an expected log-likelihood

Before proceeding with the derivation of VHEM for H3Ms, we first need to derive a lower bound to an expected log-likelihood term (e.g., (2)). We will first consider the lower bound to a log-likelihood.
In full generality, let {O, H} be the observation and hidden variables of a probabilistic model, respectively, where p(H) is the distribution of the hidden variables, p(O|H) is the conditional likelihood of the observations, and p(O) = Σ_H p(O|H)p(H) is the observation likelihood. We can define a variational lower bound to the observation log-likelihood [18, 19]:

log p(O) ≥ log p(O) − D(q(H)||p(H|O)) = Σ_H q(H) log [p(H)p(O|H)/q(H)],   (3)

where p(H|O) is the posterior distribution of H given observation O, and q(H) is the variational distribution (i.e., Σ_H q(H) = 1 and q(H) ≥ 0), or approximate posterior distribution. D(p∥q) = ∫ p(y) log [p(y)/q(y)] dy is the Kullback-Leibler (KL) divergence between two distributions p and q. When the variational distribution equals the true posterior, q(H) = p(H|O), the KL divergence is zero and the lower bound reaches log p(O). When the true posterior cannot be calculated, q is typically restricted to some set of approximate posterior distributions that are tractable, and the best lower bound is obtained by maximizing over q,

log p(O) ≥ max_{q∈Q} Σ_H q(H) log [p(H)p(O|H)/q(H)].   (4)

Using the lower bound in (4), we can now derive a lower bound to an expected log-likelihood expression. Let Eb[·] be the expectation with respect to O ∼ pb(O). Since pb(O) is non-negative, taking the expectation on both sides of (4) yields

Eb[log p(O)] ≥ Eb[ max_{q∈Q} Σ_H q(H) log (p(H)p(O|H)/q(H)) ] ≥ max_{q∈Q} Eb[ Σ_H q(H) log (p(H)p(O|H)/q(H)) ]   (5)
= max_{q∈Q} Σ_H q(H) { log [p(H)/q(H)] + Eb[log p(O|H)] },   (6)

where (5) follows from Jensen’s inequality (i.e., f(E[x]) ≤ E[f(x)] when f is convex) and the convexity of the max function.

3.1.2 Variational lower bound

We now derive the lower bound of the expected log-likelihood cost function in (2). The derivation proceeds by successively applying the lower bound from (6) on each arising expected log-likelihood term, which results in a set of nested lower bounds.
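The generic bound (3)-(6) can be checked numerically for a discrete hidden variable: it holds for any valid q and is tight at the true posterior. A toy sketch with our own numbers:

```python
import math

def elbo(q, prior, lik):
    # sum_H q(H) * log( p(H) p(O|H) / q(H) ): a lower bound on log p(O).
    return sum(qh * math.log(ph * lh / qh)
               for qh, ph, lh in zip(q, prior, lik) if qh > 0.0)

prior = [0.3, 0.7]   # p(H)
lik = [0.2, 0.5]     # p(O | H)
log_pO = math.log(sum(p * l for p, l in zip(prior, lik)))
# The true posterior q(H) = p(H) p(O|H) / p(O) attains the bound exactly.
posterior = [p * l / math.exp(log_pO) for p, l in zip(prior, lik)]
```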
We first define the following three lower bounds:

EM(b) i [log p(Yi|M(r))] ≥ Li H3M,   (7)
EM(b) i [log p(y1:τ|M(r) j )] ≥ Li,j HMM,   (8)
EM(b) i,βt [log p(yt|M(r) j,ρt )] ≥ L(i,βt),(j,ρt) GMM.   (9)

The first lower bound, Li H3M, is on the expected log-likelihood between an HMM and an H3M. The second lower bound, Li,j HMM, is on the expected log-likelihood of an HMM M(r) j , marginalized over observation sequences from a different HMM M(b) i . Although the data log-likelihood log p(y1:τ|M(r) j ) can be computed exactly using the forward algorithm [1], calculating its expectation is not analytically tractable, since y1:τ is essentially an observation from a mixture with O(S^τ) components. The third lower bound is between the GMM emission densities M(b) i,βt and M(r) j,ρt .

H3M lower bound - Looking at an individual term in (2), p(Yi|M(r)) is a mixture of HMMs, and thus the observation variable is Yi and the hidden variable is zi (the assignment of Yi to a component M(r) j ). Hence, introducing the variational distribution qi(zi) and applying (6), we have

EM(b) i [log p(Yi|M(r))] ≥ max qi Σj qi(zi = j) { log [p(zi = j)/qi(zi = j)] + Ni EM(b) i [log p(y1:τ|M(r) j )] }
≥ max qi Σj qi(zi = j) { log [p(zi = j)/qi(zi = j)] + Ni Li,j HMM } ≜ Li H3M,   (10)

where we use the fact that Yi is a set of Ni i.i.d. samples, and we use the lower bound (8) for the expectation of log p(y1:τ|M(r) j ), which is the observation log-likelihood of an HMM, so its expectation cannot be calculated directly. To compute Li H3M, we will restrict the variational distributions to the form qi(zi = j) = zij for all i, where ΣK(r) j=1 zij = 1 and zij ≥ 0 ∀j.

HMM lower bound - For the HMM likelihood p(y1:τ|M(r) j ), the observation variable is y1:τ and the hidden variable is its state sequence ρ1:τ.
Hence, for the lower bound Li,j HMM we get EM(b) i [log p(y1:τ|M(r) j )] = X β1:τ π(b),i β1:τ EM(b) i |β1:τ [log p(y1:τ|M(r) j )] (11) ≥ X β1:τ π(b),i β1:τ max qi,j X ρ1:τ qi,j(ρ1:τ|β1:τ) ( log p(ρ1:τ|M(r) j ) qi,j(ρ1:τ|β1:τ) + X t EM(b) i,βt [log p(yt|M(r) j,ρt)] ) (12) ≥ X β1:τ π(b),i β1:τ max qi,j X ρ1:τ qi,j(ρ1:τ|β1:τ) ( log p(ρ1:τ|M(r) j ) qi,j(ρ1:τ|β1:τ) + X t L(i,βt),(j,ρt) GMM ) ≜Li,j HMM (13) where in (11) we first rewrite the expectation EM(b) i to explicitly marginalize over the HMM state sequence β1:τ from M(b) i , in (12) we introduce a variational distribution qi,j β1:τ (ρ1:τ) on the state sequence ρ1:τ, which depends on the particular sequence β1:τ, and apply (6) , and in the last line we use the lower bound, defined in (9), on each expectation. To compute Li,j HMM we will restrict the variational distributions to the form of a Markov chain [14], qi,j(ρ1:τ|β1:τ) = φi,j(ρ1:τ|β1:τ) = φi,j(ρ1|β1) τY t=2 φi,j βt (ρt|ρt−1), (14) where PS ρ1=1 φi,j β1(ρ1) = 1 for each value of β1, and PS ρt=1 φi,j βt (ρt|ρt−1) = 1 for each value of βt and ρt−1. The variational distribution qi,j β1:τ (ρ1:τ) assigns state sequences β1:τ ∼M(b) i to state sequences ρ1:τ ∼M(r) j , based on how well (in expectation) the state sequence ρ1:τ ∼M(r) j can explain an observation sequence generated by HMM M(b) i evolving through state sequence β1:τ ∼M(b) i , i.e., by p(y1:τ|M(b) i , β1:τ). GMM lower bound - In [20] we derive the lower bound (9), by marginalizing EM(b) i,βt over GMM assignment m, introducing the variational distributions qi,j β,ρ(ζ = l|m), and applying (6). We will restrict the variational distributions to qi,j β,ρ(ζ = l|m) = η(i,β),(j,ρ) ℓ|m , where PM ℓ=1 η(i,βt),(j,ρt) ℓ|m =1 ∀m, and η(i,βt),(j,ρt) ℓ|m ≥0 ∀ℓ,m. 
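The restriction (14) keeps q a Markov chain over the reduced-model state sequence, and in particular a properly normalized distribution. A quick numerical check of that factorization, for one fixed base state sequence (toy numbers, our naming):

```python
from itertools import product

def q_chain(rho, phi1, phi):
    # q(rho_{1:tau} | beta_{1:tau}) = phi(rho_1) * prod_{t>=2} phi_t(rho_t | rho_{t-1}),
    # equation (14), with the dependence on beta_{1:tau} folded into phi1 and phi.
    p = phi1[rho[0]]
    for t in range(1, len(rho)):
        p *= phi[t - 1][rho[t - 1]][rho[t]]
    return p

phi1 = [0.4, 0.6]                                            # initial factor, sums to 1
phi = [[[0.5, 0.5], [0.2, 0.8]], [[0.9, 0.1], [0.3, 0.7]]]   # row-stochastic, tau = 3
total = sum(q_chain(rho, phi1, phi) for rho in product(range(2), repeat=3))
```

Because each factor is row-stochastic, the probabilities of all S^τ state sequences sum to one.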
Intuitively, η(i,βt),(j,ρt) is the responsibility matrix between Gaussian observation components for state βt in M(b) i and state ρt in M(r) j , where η(i,βt),(j,ρt) ℓ|m is the probability that an observation from component m of M(b) i,βt corresponds to component ℓof M(r) j,ρt. 3.2 Variational HEM algorithm Finally, the variational lower bound of the expected log-likelihood of the virtual samples in (2) is J (M(r)) = EM(b) h log p(Y |M(r)) i ≥ K(b) X i=1 Li H3M, (15) 5 which is composed of three nested lower bounds, corresponding to different model elements (the H3M, the component HMMs, and the emission GMMs). The VHEM algorithm for HMMs consists in coordinate ascent on the right hand side of (15). E-step - The variational E-step (see [20] for details) calculates the variational parameters zij, φi,j(ρ1:τ|β1:τ) = φi,j β1(ρ1) Qτ t=2 φi,j βt (ρt|ρt−1), and η(i,β),(j,ρ) for the lower bounds in (9) (13) (10). In particular, given the nesting of the lower bounds, we proceed by first maximizing the GMM lower bound L(i,βt),(j,ρt) GMM for each (i, j, βt, ρt). Next, the HMM lower bound Li,j HMM is maximized for each (i, j), which is followed by maximizing Li H3M for each i. The latter gives ˆzij ∝w(r) j exp(NiLi,j HMM), which is similar to the formula derived in [8, 9], but the expectation is now replaced with its lower bound. We then collect the summary statistics: νi,j 1 (ρ1, β1) = π(b),i ρ1 ˆφi,j 1 (ρ1|β1), ξi,j t (ρt−1, ρt, βt) = PS βt−1=1νi,j t−1(ρt−1, βt−1)a(b),i βt−1,γt−1 ˆφi,j t (ρt|ρt−1, βt), and νi,j t (ρt, βt) = PS ρt−1=1 ξi,j t (ρt−1, ρt, βt), the last two for t = 2, . . . , τ, and their aggregates which are necessary for the M-step: ˆνi,j 1 (σ) = S X β=1 νi,j 1 (σ, β), ˆνi,j(σ, β) = τ X t=1 νi,j t (σ, β), ˆξi,j(ρ, ρ′) = τ X t=2 S X β=1 ξi,j t (ρ, ρ′, β). (16) The statistic ˆνi,j 1 (ρ) is the expected number of times that the HMM M(r) j starts from state ρ, when modeling sequences generated by M(b) i . 
The quantity ˆνi,j(ρ, β) is the expected number of times that the HMM M(r) j is in state ρ when the HMM M(b) i is in state β, when both are modeling sequences generated by M(b) i . Similarly, the quantity ˆξi,j(ρ, ρ′) is the expected number of transitions from state ρ to state ρ′ of M(r) j , when modeling sequences generated by M(b) i . M-step - The lower bound (15) is maximized with respect to the parameters M(r). Defined a weighted sum operator Ωj,ρ,ℓ(x(i, β, m)) = PK(b) i=1 ˆzi,jω(b) i PS β=1 ˆνi,j(ρ, β) PM m=1 c(b),i β,m x(i, β, m), the parameters M(r) are updated according to (derivation in [20]): ω(r) j ∗= PK(b) i=1 ˆzi,j K(b) , π(r),j ρ ∗= PK(b) i=1 ˆzi,jω(b) i ˆνi,j 1 (ρ) PS ρ′=1 PK(b) i=1 ˆzi,jω(b) i ˆνi,j 1 (ρ′)) , a(r),j ρ,ρ′ ∗= PK(b) i=1 ˆzi,jω(b) i ˆξi,j(ρ, ρ′) PS σ=1 PK(b) i=1 ˆzi,jω(b) i ˆξi,j(ρ, σ) , c(r),j ρ,ℓ ∗= Ωj,ρ,ℓ ˆη(i,β),(j,ρ) ℓ|m PM ℓ′=1 Ωj,ρ,ℓ′ ˆη(i,β),(j,ρ) ℓ′|m , µ(r),j ρ,ℓ ∗= Ωj,ρ,ℓ η(i,β),(j,ρ) ℓ|m µ(b),i β,m Ωj,ρ,ℓ ˆη(i,β),(j,ρ) ℓ|m , (17) Σ(r),j ρ,ℓ ∗= Ωj,ρ,ℓ ˆη(i,β),(j,ρ) ℓ|m h Σ(b),i β,m + (µ(b),i β,m −µ(r),j ρ,ℓ ) (µ(b),i β,m −µ(r),j ρ,ℓ )ti /Ωj,ρ,ℓ ˆη(i,β),(j,ρ) ℓ|m . (18) Equations (17) and (18) are all weighted averages over all base models, model states, and Gaussian components. The covariance matrices of the reduced models (18) are never smaller in magnitude than the covariance matrices of the base models, due to the outer-product term. This regularization effect derives from the E-step, which averages all possible observations from the base model. 4 Discussion, Experiments and Conclusions Jebara et al. [4] cluster a collection of HMMs by applying spectral clustering to a probability product kernel (PPK) matrix between HMMs. While this has been proven successful in grouping HMMs into similar clusters, it cannot learn novel HMM cluster centers and therefore is suboptimal for hierarchical estimation of mixture models (see Section 4.2). 
A second limitation is that the cost of building the PPK matrix is quadratic in the number K(b) of input HMMs. Note that we extended the algorithm in [4] to support GMM observations instead of only Gaussians. The VHEM-H3M algorithm clusters a collection of HMMs directly through the distributions they represent, by estimating a smaller mixture of novel HMMs that concisely models the distribution represented by the input HMMs. This is achieved by maximizing the log-likelihood of “virtual” samples generated from the input HMMs. As a result, the VHEM cluster centers are consistent with the underlying generative probabilistic framework. As a first advantage, since VHEM-H3M estimates novel HMM cluster centers, we expect the learned cluster centers to retain more information on the clusters’ structure, and VHEM-H3M to produce better hierarchical clusterings than [4], which suffers from out-of-sample limitations. A second advantage is that VHEM does not build a kernel embedding as in [4], and is therefore expected to be more efficient, especially for large K(b). In addition, VHEM-H3M allows for efficient estimation of HMM mixtures from large datasets using a hierarchical estimation procedure. In particular, in a first stage intermediate HMM mixtures are estimated in parallel by running standard EM on small independent portions of the dataset, and the final model is estimated from the intermediate models using the VHEM algorithm. Relative to direct EM estimation on the entire dataset, VHEM-H3M is more time- and memory-efficient. First, it does not need to evaluate the likelihood of all the samples at each iteration, and converges to effective estimates in shorter times. Second, it no longer requires storing the entire dataset in memory during parameter estimation. Another advantage is that the intermediate models implicitly provide more “samples” (virtual variations of each time-series) to the final VHEM stage.
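The two-stage hierarchical procedure described above can be written schematically as follows; fit_chunk and vhem_reduce are hypothetical stand-ins for EM on a data portion and for the VHEM reduction, not the paper's API:

```python
def pool_mixtures(mixtures):
    # Pool intermediate mixtures into one large mixture with renormalized weights.
    pooled = [(w, comp) for mix in mixtures for (w, comp) in mix]
    total = sum(w for w, _ in pooled)
    return [(w / total, comp) for w, comp in pooled]

def hierarchical_estimate(chunks, fit_chunk, vhem_reduce):
    # Stage 1 (parallelizable): fit an intermediate mixture on each data portion.
    intermediate = [fit_chunk(chunk) for chunk in chunks]
    # Stage 2: reduce the pooled intermediate mixture to the final, smaller model.
    return vhem_reduce(pool_mixtures(intermediate))
```

Only stage 2 touches all intermediate models, and it works on model parameters rather than raw data, which is where the time and memory savings come from.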
This acts as a form of regularization that prevents over-fitting and improves the robustness of the learned models. Therefore, we expect models learned using the hierarchical estimation procedure to perform better than those learned with EM directly on the entire data. Note that in the second stage we could use the spectral clustering algorithm in [4] instead of VHEM: run spectral clustering over the intermediate models pooled together, and form the final H3M with the HMMs mapped closest to the K cluster centers. VHEM, however, is expected to do better since it learns novel cluster centers. As an alternative to VHEM, we tested a version of HEM that, instead of marginalizing over virtual samples, uses actual sampling and the EM algorithm [5] to learn the reduced H3M. Despite its simplicity, this algorithm requires a large number of samples to learn accurate models, and has longer learning times (since it evaluates the likelihood of all samples at each iteration).

4.1 Experiment on hierarchical motion clustering

Table 2: Hierarchical clustering on Motion Capture data, using various algorithms. The Rand-index is the probability that any pair of motion sequences is correctly clustered with respect to each other. Results are averages of 10 trials.

                      Rand-index            log-likelihood (×10^6)      time (s)
Level (#samples)      2      3      4       2        3        4
VHEM-H3M              0.937  0.811  0.518   -5.361   -5.682   -5.866      30.97
PPK-SC                0.956  0.740  0.393   -5.399   -5.845   -6.068      37.69
SHEM-H3M (560)        0.714  0.359  0.234   -13.632  -69.746  -275.650    843.89
SHEM-H3M (2800)       0.782  0.685  0.480   -14.645  -30.086  -52.227     3849.72
EM-H3M                0.831  0.430  0.340   -5.713   -202.55  -168.90     667.97
HEM-DTM               0.897  0.661  0.412   -7.125   -8.163   -8.532      121.32

Table 3: Annotation and retrieval on CAL500, for VHEM-H3M, PPK-SC, EM-H3M, HEM-DTM and HEM-GMM, averaged over the 97 tags with at least 30 examples in CAL500; results of 5-fold cross-validation.
            annotation              retrieval
            P      R      F         MAP    P@10    time (h)
VHEM-H3M    0.446  0.211  0.260     0.440  0.451   678
EM-H3M      0.415  0.214  0.248     0.423  0.422   1860
PPK-SC      0.299  0.159  0.151     0.347  0.340   1033
HEM-DTM     0.430  0.202  0.252     0.439  0.453   426
HEM-GMM     0.374  0.205  0.213     0.417  0.425   5

[Figure 1: Hierarchical clustering of Motion Capture data (qualitative), showing the hierarchies produced by VHEM-H3M (top) and PPK-SC (bottom) over Levels 1-4 for the classes “walk 1”, “basket”, “jump”, “soccer”, “run”, “walk 2”, “jog”, and “sit”. Best in color.]

We tested the VHEM algorithm on hierarchical motion clustering, where each of the input HMMs to be clustered is estimated on a sequence of motion capture data from the Motion Capture dataset (http://mocap.cs.cmu.edu/). In particular, we start from K1 = 56 motion examples from 8 different classes (“jump”, “run”, “jog”, “walk 1” and “walk 2”, which are from two different subjects, “basket”, “soccer”, “sit”), and learn an HMM for each of them, forming the first level of the hierarchy. A tree structure is formed by successively clustering HMMs with the VHEM algorithm and using the learned cluster centers as the representative HMMs at the new level. Levels 2, 3, and 4 of the hierarchy correspond to K2 = 8, K3 = 4, and K4 = 2 clusters. The hierarchical clustering obtained with VHEM is illustrated in Figure 1 (top). In the first level, each vertical bar represents a motion sequence, and different colors indicate different ground-truth classes. At Level 2, the 8 HMM clusters are shown with vertical bars, with the colors indicating the proportions of the motion classes in the cluster. At Level 2, VHEM produces clusters with examples from a single motion class (e.g., “run”, “jog”, “jump”), but mixes some “soccer” examples with “basket”, possibly because both actions consist of a movement-shot-pause sequence.
Moving up the hierarchy, VHEM clusters similar motion classes together (as indicated by the arrows), and at Level 4 it creates a dichotomy between “sit” and the other (more dynamic) motion classes. On the bottom of Figure 1, the same experiment is repeated using spectral clustering in tandem with PPK similarity (PPK-SC). PPK-SC clusters motion sequences properly; however, at Level 2 it incorrectly aggregates “sit” and “soccer”, which have quite different dynamics, and Level 4 is not as interpretable as the one produced by VHEM. Table 2 provides a quantitative comparison. While VHEM has a lower Rand-index than PPK-SC at Level 2 (0.937 vs. 0.956), it has a higher Rand-index at Level 3 (0.811 vs. 0.740) and Level 4 (0.518 vs. 0.393). In addition, VHEM-H3M has higher data log-likelihood than PPK-SC at each level, and is more efficient. This suggests that the novel HMM cluster centers learned by VHEM-H3M retain more information on the clusters’ structure than the spectral cluster centers, which becomes increasingly visible moving up the hierarchy. Finally, VHEM-H3M performs better and is more efficient than the HEM version based on actual sampling (SHEM-H3M), EM applied directly on the motion sequences, and the HEM-DTM algorithm [9].

4.2 Experiment on automatic music tagging

We evaluated VHEM-H3M on content-based music auto-tagging on CAL500 [11], a collection of 502 songs annotated with respect to a vocabulary V of 149 tags. For each song, we extract a time series Y = {y1, . . . , yT } of 13 Mel-frequency cepstral coefficients (MFCC) [1] over half-overlapping windows of 46ms, with first and second instantaneous derivatives. We formulate music auto-tagging as supervised multi-class labeling [10], where each class is a tag from V and is modeled as an H3M probability distribution estimated from audio sequences (of T = 125 audio features, i.e., approximately 3s of audio) extracted from the relevant songs in the database, using the VHEM-H3M algorithm.
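The retrieval metrics reported in Table 3 (average precision, which is averaged over queries to give MAP, and P@10) follow their standard definitions; a minimal sketch:

```python
def precision_at_k(ranked, relevant, k):
    # Fraction of the top-k retrieved items that are relevant.
    return sum(1 for item in ranked[:k] if item in relevant) / float(k)

def average_precision(ranked, relevant):
    # Mean of precision@k over the ranks k at which a relevant item is retrieved.
    hits, acc = 0, 0.0
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            acc += hits / float(k)
    return acc / len(relevant)
```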
First, for each song, the EM algorithm is used to learn an H3M with K(s) = 6 components (as many as the structural parts of most pop songs). Then, for each tag, the relevant song-level H3Ms are pooled together and the VHEM-H3M algorithm is used to learn the final H3M tag model with K = 3 components. We compare the proposed VHEM-H3M algorithm to PPK-SC,1 direct EM estimation (EM-H3M) [5] from the relevant songs’ audio sequences, HEM-DTM [12], and HEM-GMM [11]. The last two use an efficient HEM algorithm for learning and are state-of-the-art baselines for music tagging. We were not able to successfully estimate tag-H3Ms with the sampling version of HEM-H3M. Annotation (precision P, recall R, and f-score F) and retrieval (mean average precision MAP, and top-10 precision P@10) results are reported in Table 3. VHEM-H3M is the most efficient algorithm for learning H3Ms, as it requires only 36% of the time of EM-H3M, and 65% of the time of PPK-SC. VHEM-H3M capitalizes on the song-level H3Ms learned in the first stage (about one third of the total time) by efficiently using them to learn the final tag models. The gain in computational efficiency does not negatively affect the quality of the resulting models. On the contrary, VHEM-H3M achieves better performance than EM-H3M (differences are statistically significant based on a paired t-test with 95% confidence), since it has the benefit of regularization, and outperforms PPK-SC. Designed for clustering HMMs, PPK-SC does not produce accurate annotation models, since it discards information on the clusters’ structure by approximating each cluster with one of the original HMMs. Instead, VHEM-H3M generates novel HMM cluster centers that effectively summarize each cluster. VHEM-H3M also outperforms HEM-GMM, which does not model temporal information in the audio signal. Finally, HEM-DTM, based on LDSs (a continuous-state model), can model only stationary time-series in a linear subspace.
In contrast, VHEM-H3M uses HMMs with discrete states and GMM emissions, and can also adapt to non-stationary time-series on a non-linear manifold. Hence, VHEM-H3M outperforms HEM-DTM on the human MoCap data (see Table 2), which has non-linear dynamics, while the two perform similarly on the music data (differences were statistically significant only on annotation P), where the audio features are stationary over short time frames.

4.3 Conclusion

We presented a variational HEM algorithm that clusters HMMs through the distributions they represent and generates novel HMM cluster centers. The efficacy of the algorithm was demonstrated on hierarchical motion clustering and automatic music tagging, with improvements over current methods.

Acknowledgments

The authors acknowledge support from Google, Inc. E.C. and G.R.G.L. acknowledge support from Qualcomm, Inc., Yahoo! Inc., and the National Science Foundation (grants CCF-083053, IIS-1054960 and EIA-0303622). A.B.C. acknowledges support from the Research Grants Council of the Hong Kong SAR, China (CityU 110610). G.R.G.L. acknowledges support from the Alfred P. Sloan Foundation.

1It was necessary to implement PPK-SC with song-level H3Ms with K(s) = 1. K(s) = 2 took about quadruple the time with no improvement in performance. Larger K(s) would lead to impractical learning times.

References [1] L. Rabiner and B. H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Upper Saddle River (NJ, USA), 1993. [2] Y. Qi, J. W. Paisley, and L. Carin. Music analysis using hidden Markov mixture models. Signal Processing, IEEE Transactions on, 55(11):5209–5224, 2007. [3] E. Batlle, J. Masip, and E. Guaus. Automatic song identification in noisy broadcast audio. In IASTED International Conference on Signal and Image Processing. Citeseer, 2002. [4] T. Jebara, Y. Song, and K. Thadani. Spectral clustering and embedding with hidden Markov models. Machine Learning: ECML 2007, pages 164–175, 2007. [5] P. Smyth.
Clustering sequences with hidden markov models. In Advances in neural information processing systems, 1997. [6] T. Jebara, R. Kondor, and A. Howard. Probability product kernels. The Journal of Machine Learning Research, 5:819–844, 2004. [7] B. H. Juang and L. R. Rabiner. A probabilistic distance measure for hidden Markov models. AT&T Technical Journal, 64(2):391–408, February 1985. [8] N. Vasconcelos and A. Lippman. Learning mixture hierarchies. In Advances in Neural Information Processing Systems, 1998. [9] A.B. Chan, E. Coviello, and G.R.G. Lanckriet. Clustering dynamic textures with the hierarchical em algorithm. In Intl. Conference on Computer Vision and Pattern Recognition, 2010. [10] G. Carneiro, A.B. Chan, P.J. Moreno, and N. Vasconcelos. Supervised learning of semantic classes for image annotation and retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(3):394–410, 2007. [11] D. Turnbull, L. Barrington, D. Torres, and G. Lanckriet. Semantic annotation and retrieval of music and sound effects. IEEE Transactions on Audio, Speech and Language Processing, 16(2):467–476, February 2008. [12] E. Coviello, A. Chan, and G. Lanckriet. Time series models for semantic music annotation. Audio, Speech, and Language Processing, IEEE Transactions on, 5(19):1343–1359, 2011. [13] A. Banerjee, S. Merugu, I.S. Dhillon, and J. Ghosh. Clustering with bregman divergences. The Journal of Machine Learning Research, 6:1705–1749, 2005. [14] J.R. Hershey, P.A. Olsen, and S.J. Rennie. Variational Kullback-Leibler divergence for hidden Markov models. In Automatic Speech Recognition & Understanding, 2007. ASRU. IEEE Workshop on, pages 323–328. IEEE, 2008. [15] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–38, 1977. [16] R.M. Neal and G.E. Hinton. A view of the em algorithm that justifies incremental, sparse, and other variants. 
NATO ASI Series D: Behavioural and Social Sciences, 89:355–370, 1998. [17] I. Csiszár, G. Tusnády, et al. Information geometry and alternating minimization procedures. Statistics and Decisions, 1984. [18] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999. [19] Tommi S. Jaakkola. Tutorial on variational approximation methods. In Advanced Mean Field Methods: Theory and Practice, pages 129–159. MIT Press, 2000. [20] Anonymous. Derivation of the Variational HEM Algorithm for Hidden Markov Mixture Models. Technical report, Anonymous, 2012.
Scalable nonconvex inexact proximal splitting Suvrit Sra Max Planck Institute for Intelligent Systems 72076 Tübingen, Germany suvrit@tuebingen.mpg.de

Abstract

We study a class of large-scale, nonsmooth, and nonconvex optimization problems. In particular, we focus on nonconvex problems with composite objectives. This class includes the extensively studied class of convex composite objective problems as a subclass. To solve composite nonconvex problems we introduce a powerful new framework based on asymptotically nonvanishing errors, avoiding the common stronger assumption of vanishing errors. Within our new framework we derive both batch and incremental proximal splitting algorithms. To our knowledge, our work is the first to develop and analyze incremental nonconvex proximal-splitting algorithms, even disregarding the ability to handle nonvanishing errors. We illustrate one instance of our general framework by showing an application to large-scale nonsmooth matrix factorization.

1 Introduction

This paper focuses on nonconvex composite objective problems having the form

minimize Φ(x) := f(x) + h(x),  x ∈ X,   (1)

where f : R^n → R is continuously differentiable, h : R^n → R ∪ {∞} is lower semi-continuous (lsc) and convex (possibly nonsmooth), and X is a compact convex set. We also make the common assumption that ∇f is locally (in X) Lipschitz continuous, i.e., there is a constant L > 0 such that

∥∇f(x) − ∇f(y)∥ ≤ L∥x − y∥  for all x, y ∈ X.   (2)

Problem (1) is a natural but far-reaching generalization of composite objective convex problems, which enjoy tremendous importance in machine learning; see e.g., [2, 3, 11, 34]. Although convex formulations are extremely useful, for many difficult problems a nonconvex formulation is natural. Familiar examples include matrix factorization [20, 23], blind deconvolution [19], dictionary learning [18, 23], and neural networks [4, 17]. The primary contribution of this paper is theoretical.
Specifically, we present a new algorithmic framework: Nonconvex Inexact Proximal Splitting (NIPS). Our framework solves (1) by “splitting” the task into smooth (gradient) and nonsmooth (proximal) parts. Beyond splitting, the most notable feature of NIPS is that it allows computational errors. This capability proves critical to obtaining a scalable, incremental-gradient variant of NIPS, which, to our knowledge, is the first incremental proximal-splitting method for nonconvex problems. NIPS further distinguishes itself in how it models computational errors. Notably, it does not require the errors to vanish in the limit, which is a more realistic assumption, since one often has limited or no control over the computational errors inherent in a complex system. In accord with the errors, NIPS also does not require stepsizes (learning rates) to shrink to zero. In contrast, most incremental-gradient methods [5] and stochastic gradient algorithms [16] do assume that the computational errors and stepsizes decay to zero. We do not make these simplifying assumptions, which complicates the convergence analysis a bit, but results in perhaps a more satisfying description. Our analysis builds on the remarkable work of Solodov [29], who studied the simpler setting of differentiable nonconvex problems (which corresponds to h ≡ 0 in (1)). NIPS is strictly more general: unlike [29], it solves a non-differentiable problem by allowing a nonsmooth regularizer h ≢ 0, and this h is tackled by invoking proximal splitting [8]. Proximal splitting has proved exceptionally fruitful and effective [2, 3, 8, 11]. It retains the simplicity of gradient projection while handling the nonsmooth regularizer h via its proximity operator. This approach is especially attractive because for several important choices of h, efficient implementations of the associated proximity operators exist [2, 22, 23].
For convex problems, an alternative to proximal splitting is the subgradient method; similarly, for nonconvex problems one may use a generalized subgradient method [7, 12]. However, as in the convex case, the use of subgradients has drawbacks: it fails to exploit the composite structure, and even when using sparsity-promoting regularizers it does not generate intermediate sparse iterates [11]. Among batch nonconvex splitting methods, an early paper is [14]. More recently, in his pioneering paper on convex composite minimization, Nesterov [26] also briefly discussed nonconvex problems. Both [14] and [26], however, enforced monotonic descent in the objective value to ensure convergence. Very recently, Attouch et al. [1] introduced a generic method for nonconvex nonsmooth problems based on Kurdyka-Łojasiewicz theory, but their entire framework too hinges on descent. A method that uses nonmonotone line-search to eliminate the dependence on strict descent is [13]. In general, the insistence on strict descent and exact gradients makes many of these methods unsuitable for incremental, stochastic, or online variants, all of which usually lead to nonmonotone objective values, especially due to inexact gradients. Among nonmonotonic methods that apply to (1), we are aware of the generalized gradient-type algorithms of [31] and the stochastic generalized gradient methods of [12]. Both methods, however, are analogous to the usual subgradient-based algorithms that fail to exploit the composite objective structure, unlike proximal-splitting methods. But proximal-splitting methods do not apply out-of-the-box to (1): nonconvexity raises significant obstructions, especially because nonmonotonic descent in the objective function values is allowed and inexact gradients might be used. Overcoming these obstructions to achieve a scalable, non-descent-based method that allows inexact gradients is what makes our NIPS framework novel.
2 The NIPS Framework To simplify presentation, we replace h by the penalty function g(x) := h(x) + δ(x|X), (3) where δ(·|X) is the indicator function for X: δ(x|X) = 0 for x ∈ X, and δ(x|X) = ∞ for x ̸∈ X. With this notation, we may rewrite (1) as the unconstrained problem: minx∈Rn Φ(x) := f(x) + g(x), (4) and this particular formulation is our primary focus. We solve (4) via a proximal-splitting approach, so let us begin by defining our most important component. Definition 1 (Proximity operator). Let g : Rn → R be an lsc, convex function. The proximity operator for g, indexed by η > 0, is the nonlinear map [see e.g., 28; Def. 1.22]: Pg η : y 7→ argminx∈Rn { g(x) + (1/2η)∥x − y∥2 }. (5) The operator (5) was introduced by Moreau [24] (1962) as a generalization of orthogonal projections. It is also key to Rockafellar’s classic proximal point algorithm [27], and it arises in a host of proximal-splitting methods [2, 3, 8, 11], most notably in forward-backward splitting (FBS) [8]. FBS is particularly attractive because of its simplicity and algorithmic structure. It minimizes convex composite objective functions by alternating between “forward” (gradient) steps and “backward” (proximal) steps. Formally, suppose f in (4) is convex; for such f, FBS performs the iteration xk+1 = Pg ηk(xk − ηk∇f(xk)), k = 0, 1, . . . , (6) where {ηk} is a suitable sequence of stepsizes. The usual convergence analysis of FBS is intimately tied to convexity of f. Therefore, to tackle nonconvex f we must take a different approach. As previously mentioned, such approaches were considered by Fukushima and Mine [14] and Nesterov [26], but both proved convergence by enforcing monotonic descent. This insistence on descent severely impedes scalability. Thus, the key challenge is: how to retain the algorithmic simplicity of FBS and allow nonconvex losses, without sacrificing scalability?
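To make Definition 1 and the FBS iteration (6) concrete, here is a minimal sketch, not from the paper, for the common choice g = λ∥·∥₁, whose proximity operator is the soft-thresholding map; the quadratic f below is an illustrative assumption:

```python
import numpy as np

def prox_l1(y, eta_lambda):
    # Proximity operator of g = lambda*||.||_1 with stepsize eta:
    # argmin_x lambda*||x||_1 + (1/(2*eta))*||x - y||^2  =>  soft-thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - eta_lambda, 0.0)

def fbs_step(x, grad_f, eta, lam):
    # One forward-backward step: forward (gradient) then backward (proximal).
    return prox_l1(x - eta * grad_f(x), eta * lam)

# Tiny demo: f(x) = 0.5*||x - b||^2, so grad f(x) = x - b and L = 1.
b = np.array([2.0, -0.3, 0.05])
lam, eta = 0.5, 1.0
x = np.zeros(3)
for _ in range(5):
    x = fbs_step(x, lambda v: v - b, eta, lam)
# With eta = 1 the fixed point is the soft-thresholded b: [1.5, 0, 0].
```

With this choice of f and η = 1, a single step already lands on the fixed point x* = soft(b, λ), which illustrates the fixed-point view of FBS.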
We address this challenge by introducing the following inexact proximal-splitting iteration: xk+1 = Pg ηk(xk − ηk∇f(xk) + ηke(xk)), k = 0, 1, . . . , (7) where e(xk) models the computational errors in computing the gradient ∇f(xk). We also assume that for η > 0 smaller than some stepsize ¯η, the computational error is uniformly bounded, that is, η∥e(x)∥ ≤ ¯ϵ, for some fixed error level ¯ϵ ≥ 0, and ∀x ∈ X. (8) Condition (8) is weaker than the typical vanishing error requirements Σk η∥e(xk)∥ < ∞ and limk→∞ η∥e(xk)∥ = 0, which are stipulated by most analyses of methods with gradient errors [4, 5]. Obviously, since errors are nonvanishing, exact stationarity cannot be guaranteed. We will, however, show that the iterates produced by (7) do progress towards reasonable inexact stationary points. We note in passing that even if we assume the simpler case of vanishing errors, NIPS is still the first nonconvex proximal-splitting framework that does not insist on monotonicity, which complicates convergence analysis but ultimately proves crucial to scalability.
Algorithm 1 Inexact Nonconvex Proximal Splitting (NIPS)
Input: Operator Pg η, and a sequence {ηk} satisfying c ≤ lim infk ηk, lim supk ηk ≤ min{1, 2/L − c}, 0 < c < 1/L. (9)
Output: Approximate solution to (7)
k ← 0; select arbitrary x0 ∈ X
while ¬ converged do
Compute approximate gradient ∇̃f(xk) := ∇f(xk) − e(xk)
Update: xk+1 = Pg ηk(xk − ηk∇̃f(xk))
k ← k + 1
end while
2.1 Convergence analysis We begin by characterizing inexact stationarity. A point x∗ is a stationary point for (4) if and only if it satisfies the inclusion 0 ∈ ∂CΦ(x∗) := ∇f(x∗) + ∂g(x∗), (10) where ∂Cφ denotes the Clarke subdifferential [7]. A brief exercise shows that this inclusion may be equivalently recast as the fixed-point equation (which augurs the idea of proximal-splitting) x∗ = Pg η(x∗ − η∇f(x∗)), for η > 0. (11) This equation helps us define a measure of inexact stationarity: the proximal residual ρ(x) := x − Pg 1(x − ∇f(x)).
(12) Note that for an exact stationary point x∗ the residual norm ∥ρ(x∗)∥ = 0. Thus, we call a point x ϵ-stationary if for a prescribed error level ϵ(x), the corresponding residual norm satisfies ∥ρ(x)∥ ≤ ϵ(x). (13) Assuming the error level ϵ(x) (say if ¯ϵ = lim supk ϵ(xk)) satisfies the bound (8), we prove below that the iterates xk generated by (7) satisfy an approximate stationarity condition of the form (13), by allowing the stepsize η to become correspondingly small (but strictly bounded away from zero). We start by recalling two basic facts, stated without proof as they are standard knowledge. Lemma 2 (Lipschitz-descent [see e.g., 25; Lemma 2.1.3]). Let f ∈ C1 L(X). Then, |f(x) − f(y) − ⟨∇f(y), x − y⟩| ≤ (L/2)∥x − y∥2, ∀x, y ∈ X. (14) Lemma 3 (Nonexpansivity [see e.g., 9; Lemma 2.4]). The operator Pg η is nonexpansive, that is, ∥Pg η(x) − Pg η(y)∥ ≤ ∥x − y∥, ∀x, y ∈ Rn. (15) Next we prove a crucial monotonicity property that actually subsumes similar results for projection operators derived by Gafni and Bertsekas [15; Lem. 1], and may therefore be of independent interest. Lemma 4 (Prox-Monotonicity). Let y, z ∈ Rn, and η > 0. Define the functions pg(η) := (1/η)∥Pg η(y − ηz) − y∥, and qg(η) := ∥Pg η(y − ηz) − y∥. (16) Then, pg(η) is a decreasing function of η, and qg(η) an increasing function of η. Proof. Our proof exploits properties of Moreau envelopes [28; pp. 19, 52], and we present it in the language of proximity operators. Consider the “deflected” proximal objective mg(x, η; y, z) := ⟨z, x − y⟩ + (1/2η)∥x − y∥2 + g(x), for some y, z ∈ X. (17) Associate to the objective mg the deflected Moreau envelope Eg(η) := infx∈X mg(x, η; y, z), (18) whose infimum is attained at the unique point Pg η(y − ηz). Thus, Eg(η) is differentiable, and its derivative is given by E′g(η) = −(1/2η2)∥Pg η(y − ηz) − y∥2 = −(1/2)pg(η)2. Since Eg is convex in η, E′g is increasing ([28; Thm. 2.26]), or equivalently pg(η) is decreasing.
Similarly, define ˆeg(γ) := Eg(1/γ); this function is concave in γ as it is a pointwise infimum (indexed by x) of functions linear in γ [see e.g., §3.2.3 in 6]. Thus, its derivative ˆe′g(γ) = (1/2)∥Pg 1/γ(y − γ−1z) − y∥2 = (1/2)qg(1/γ)2 is a decreasing function of γ. Set η = 1/γ to conclude the argument about qg(η). We now proceed to bound the difference between objective function values from iteration k to k + 1, by developing a bound of the form Φ(xk) − Φ(xk+1) ≥ h(xk). (19) Obviously, since we do not enforce strict descent, h(xk) may be negative too. However, we show that for sufficiently large k the algorithm makes enough progress to ensure convergence. Lemma 5. Let xk+1, xk, ηk, and X be as in (7), and assume that ηk∥e(xk)∥ ≤ ϵ(xk) holds. Then, Φ(xk) − Φ(xk+1) ≥ ((2 − Lηk)/(2ηk))∥xk+1 − xk∥2 − (1/ηk)ϵ(xk)∥xk+1 − xk∥. (20) Proof. For the deflected Moreau envelope (17), consider the directional derivative dmg with respect to x in the direction w; at x = xk+1, this derivative satisfies the optimality condition dmg(xk+1, η; y, z)(w) = ⟨z + η−1(xk+1 − y) + sk+1, w⟩ ≥ 0, sk+1 ∈ ∂g(xk+1). (21) Set z = ∇f(xk) − e(xk), y = xk, and w = xk − xk+1 in (21), and rearrange to obtain ⟨∇f(xk) − e(xk), xk+1 − xk⟩ ≤ ⟨η−1(xk+1 − xk) + sk+1, xk − xk+1⟩. (22) From Lemma 2 it follows that Φ(xk+1) ≤ f(xk) + ⟨∇f(xk), xk+1 − xk⟩ + (L/2)∥xk+1 − xk∥2 + g(xk+1), (23) whereby upon adding and subtracting e(xk), and then using (22), we further obtain f(xk) + ⟨∇f(xk) − e(xk), xk+1 − xk⟩ + (L/2)∥xk+1 − xk∥2 + g(xk+1) + ⟨e(xk), xk+1 − xk⟩ ≤ f(xk) + g(xk+1) + ⟨sk+1, xk − xk+1⟩ + (L/2 − 1/ηk)∥xk+1 − xk∥2 + ⟨e(xk), xk+1 − xk⟩ ≤ f(xk) + g(xk) − ((2 − Lηk)/(2ηk))∥xk+1 − xk∥2 + ⟨e(xk), xk+1 − xk⟩ ≤ Φ(xk) − ((2 − Lηk)/(2ηk))∥xk+1 − xk∥2 + ∥e(xk)∥∥xk+1 − xk∥ ≤ Φ(xk) − ((2 − Lηk)/(2ηk))∥xk+1 − xk∥2 + (1/ηk)ϵ(xk)∥xk+1 − xk∥. The second inequality above follows from convexity of g, the third from Cauchy-Schwarz, and the last by the assumption on ϵ(xk). Now flip signs and apply (23) to conclude the bound (20). Next we further bound (20) by deriving two-sided bounds on ∥xk+1 − xk∥. Lemma 6.
Let xk+1, xk, and ϵ(xk) be as before; also let c and ηk satisfy (9). Then, c∥ρ(xk)∥ − ϵ(xk) ≤ ∥xk+1 − xk∥ ≤ ∥ρ(xk)∥ + ϵ(xk). (24) Proof. First observe from Lemma 4 that for ηk > 0, if 1 ≤ ηk then qg(1) ≤ qg(ηk), and if ηk ≤ 1 then pg(1) ≤ pg(ηk) = (1/ηk)qg(ηk). (25) Using (25), the triangle inequality, and Lemma 3, we have min{1, ηk}qg(1) = min{1, ηk}∥ρ(xk)∥ ≤ ∥Pg ηk(xk − ηk∇f(xk)) − xk∥ ≤ ∥xk+1 − xk∥ + ∥xk+1 − Pg ηk(xk − ηk∇f(xk))∥ ≤ ∥xk+1 − xk∥ + ∥ηke(xk)∥ ≤ ∥xk+1 − xk∥ + ϵ(xk). From (9) it follows that for sufficiently large k we have ∥xk+1 − xk∥ ≥ c∥ρ(xk)∥ − ϵ(xk). For the upper bound note that ∥xk+1 − xk∥ ≤ ∥xk − Pg ηk(xk − ηk∇f(xk))∥ + ∥Pg ηk(xk − ηk∇f(xk)) − xk+1∥ ≤ max{1, ηk}∥ρ(xk)∥ + ∥ηke(xk)∥ ≤ ∥ρ(xk)∥ + ϵ(xk). Lemma 5 and Lemma 6 help prove the following crucial corollary. Corollary 7. Let xk, xk+1, ηk, and c be as above and k sufficiently large so that c and ηk satisfy (9). Then, Φ(xk) − Φ(xk+1) ≥ h(xk) holds with h(xk) given by h(xk) := (L2c3/(2(2 − 2Lc)))∥ρ(xk)∥2 − (L2c2/(2 − cL) + 1/c)∥ρ(xk)∥ϵ(xk) − (1/c − L2c/(2(2 − cL)))ϵ(xk)2. (26) Proof. Plug the bounds (24) into (20), invoke (9), and simplify; see [32] for details. We now have all the ingredients to state the main convergence theorem. Theorem 8 (Convergence). Let f ∈ C1 L(X) be such that infX f > −∞, and let g be lsc and convex on X. Let {xk} ⊂ X be a sequence generated by (7), and let condition (8) on each ∥e(xk)∥ hold. There exists a limit point x∗ of the sequence {xk}, and a constant K > 0, such that ∥ρ(x∗)∥ ≤ Kϵ(x∗). If Φ(xk) converges, then for every limit point x∗ of {xk} it holds that ∥ρ(x∗)∥ ≤ Kϵ(x∗). Proof. Lemmas 5 and 6 and Corollary 7 have done all the hard work. Indeed, they allow us to reduce our convergence proof to the case where the analysis of the differentiable case becomes applicable, and an appeal to the analysis of [29; Thm. 2.1] grants us our claim. Theorem 8 says that we can obtain an approximate stationary point for which the norm of the residual is bounded by a linear function of the error level.
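Theorem 8's guarantee, a residual norm bounded by a linear function of the error level, can be checked numerically on a toy problem. The sketch below is an illustrative assumption, not the paper's code: it runs iteration (7) with a deliberately nonvanishing, norm-bounded gradient error for f(x) = 0.5∥x − b∥² and g = λ∥·∥₁, and evaluates the proximal residual (12):

```python
import numpy as np

rng = np.random.default_rng(0)

def prox_l1(y, t):
    # Soft-thresholding: proximity operator of t*||.||_1.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

b = np.array([1.0, -2.0, 0.1])
lam, eta, eps_bar = 0.4, 1.0, 1e-3
grad_f = lambda x: x - b            # f(x) = 0.5*||x - b||^2, so L = 1

def residual(x):
    # rho(x) = x - P^g_1(x - grad f(x)); zero exactly at stationary points.
    return x - prox_l1(x - grad_f(x), lam)

x = np.zeros(3)
for _ in range(50):
    e = rng.uniform(-1.0, 1.0, size=3)
    e *= eps_bar / np.linalg.norm(e)    # enforce ||eta*e(x)|| <= eps_bar, as in (8)
    x = prox_l1(x - eta * grad_f(x) + eta * e, eta * lam)   # inexact step (7)
```

Since the errors never vanish, the iterates only hover near the exact stationary point soft(b, λ); the residual norm settles at the order of ε̄ rather than at zero, matching the linear dependence on the error level in Theorem 8.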
The statement of the theorem is written in a conditional form, because nonvanishing errors e(x) prevent us from making a stronger statement. In particular, once the iterates enter a region where the residual norm falls below the error threshold, the behavior of xk may be arbitrary. This, however, is a small price to pay for having the added flexibility of nonvanishing errors. Under the stronger assumption of vanishing errors (and diminishing stepsizes), we can also obtain guarantees to exact stationary points. 3 Scaling up NIPS: incremental variant We now apply NIPS to the large-scale setting, where we have composite objectives of the form Φ(x) := ΣT t=1 ft(x) + g(x), (27) where each ft : Rn → R is a C1 Lt(X) function. For simplicity, we use L = maxt Lt in the sequel. It is well-known that for such decomposable objectives it can be advantageous to replace the full gradient Σt ∇ft(x) by an incremental gradient ∇fσ(t)(x), where σ(t) is some suitable index. Nonconvex incremental methods for differentiable problems have been extensively analyzed, e.g., backpropagation algorithms [5, 29], which correspond to g(x) ≡ 0. However, when g(x) ̸= 0, the only incremental methods that we are aware of are the stochastic generalized gradient methods of [12] or the generalized gradient methods of [31]. As previously mentioned, both of these fail to exploit the composite structure of the objective function, a disadvantage even in the convex case [11]. In stark contrast, we do exploit the composite structure of (27). Formally, we propose the following incremental nonconvex proximal-splitting iteration: xk+1 = M(xk − ηk ΣT t=1 ∇ft(xk,t)), k = 0, 1, . . . , with xk,1 = xk, xk,t+1 = O(xk,t − ηk∇ft(xk,t)), t = 1, . . . , T − 1, (28) where O and M are appropriate operators, different choices of which lead to different algorithms.
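As a minimal numerical sketch of iteration (28) (illustrative, not from the paper), take the special case X = Rⁿ, g ≡ 0 and M = O = Id, i.e., the classic incremental gradient method, on a consistent least-squares problem: since every component ft is minimized at the same point, the aggregated error e(xk) vanishes there and the iteration converges exactly even with a constant stepsize:

```python
import numpy as np

# Components f_t(x) = 0.5*(a_t.x - b_t)^2 built from a consistent system,
# so x_true is a stationary point of every single component.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true
eta = 0.1                         # constant stepsize, bounded away from zero

x = np.zeros(2)
for epoch in range(500):          # outer (k) iterations
    xt = x                        # x_{k,1} = x_k
    for t in range(len(A)):       # inner (t) pass of iteration (28)
        grad_t = A[t] * (A[t] @ xt - b[t])
        xt = xt - eta * grad_t    # O = Id since g == 0
    x = xt                        # M = Id: no proximal map needed
```

For this toy problem each inner update contracts the error toward x_true, so after a few hundred epochs the iterate recovers x_true to machine precision; with inconsistent data or a nonzero g, one would instead expect convergence only to an inexact stationary point, as in Theorem 8.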
For example, when X = Rn, g(x) ≡ 0, M = O = Id, and ηk → 0, then (28) reduces to the classic incremental gradient method (IGM) [4], and to the IGM of [30] if lim ηk = ¯η > 0. If X is a closed convex set, g(x) ≡ 0, M is the orthogonal projection onto X, O = Id, and ηk → 0, then iteration (28) reduces to (projected) IGM [4, 5]. We may consider four variants of (28) in Table 1; to our knowledge, all of these are new. Which of the four variants one prefers depends on the complexity of the constraint set X and the cost of applying Pg η. The analysis of all four variants is similar, so we present details only for the most general case.
X | g | M | O | Penalty and constraints | Proximity operator calls
Rn | ̸≡ 0 | Pg η | Id | penalized, unconstrained | once every major (k) iteration
Rn | ̸≡ 0 | Pg η | Pg η | penalized, unconstrained | once every minor (k, t) iteration
Convex | h(x) + δ(x|X) | Pg η | Id | penalized, constrained | once every major (k) iteration
Convex | h(x) + δ(x|X) | Pg η | Pg η | penalized, constrained | once every minor (k, t) iteration
Table 1: Different variants of incremental NIPS (28).
3.1 Convergence analysis Specifically, we analyze convergence for the case M = O = Pg η by generalizing the differentiable case treated by [30]. We begin by rewriting (28) in a form that matches the main iteration (7): xk+1 = Pg η(xk − ηk ΣT t=1 ∇ft(xk,t)) = Pg η(xk − ηk ΣT t=1 ∇ft(xk) + ηk ΣT t=1 (∇ft(xk) − ∇ft(xk,t))) = Pg η(xk − ηk Σt ∇ft(xk) + ηke(xk)). (29) To show that iteration (29) is well-behaved and actually fits the main NIPS iteration (7), we must ensure that the norm of the error term is bounded. We show this via a sequence of lemmas. Lemma 9 (Bounded-increment). Let xk,t+1 be computed by (28), and let st ∈ ∂g(xk,t). Then, ∥xk,t+1 − xk,t∥ ≤ 2ηk∥∇ft(xk,t) + st∥. (30) Proof. From the definition of a proximity operator (5), we have the inequality (1/2)∥xk,t+1 − xk,t + ηk∇ft(xk,t)∥2 + ηkg(xk,t+1) ≤ (1/2)∥ηk∇ft(xk,t)∥2 + ηkg(xk,t), =⇒ (1/2)∥xk,t+1 − xk,t∥2 ≤ ηk⟨∇ft(xk,t), xk,t − xk,t+1⟩ + ηk(g(xk,t) − g(xk,t+1)).
Since st ∈ ∂g(xk,t), we have g(xk,t+1) ≥ g(xk,t) + ⟨st, xk,t+1 − xk,t⟩. Therefore, (1/2)∥xk,t+1 − xk,t∥2 ≤ ηk⟨st, xk,t − xk,t+1⟩ + ηk⟨∇ft(xk,t), xk,t − xk,t+1⟩ ≤ ηk∥st + ∇ft(xk,t)∥∥xk,t − xk,t+1∥ =⇒ ∥xk,t+1 − xk,t∥ ≤ 2ηk∥∇ft(xk,t) + st∥. Lemma 9 proves helpful in bounding the overall error. Lemma 10 (Bounded error). If for all xk ∈ X, ∥∇ft(xk)∥ ≤ M and ∥∂g(xk)∥ ≤ G, then there exists a constant K1 > 0 such that ∥e(xk)∥ ≤ K1. Proof. To bound the error of using xk,t instead of xk, first define the term ϵt := ∥∇ft(xk,t) − ∇ft(xk)∥, t = 1, . . . , T. (31) Then, an inductive argument (see [32] for details) shows that for 2 ≤ t ≤ T, ϵt ≤ 2ηkL Σt−1 j=1 (1 + 2ηkL)t−1−j∥∇fj(xk) + sj∥. (32) Since ∥e(xk)∥ ≤ ΣT t=1 ϵt, and ϵ1 = 0, (32) then leads to the bound ΣT t=2 ϵt ≤ 2ηkL ΣT t=2 Σt−1 j=1 (1 + 2ηkL)t−1−jβj = 2ηkL ΣT−1 t=1 βt ΣT−t−1 j=0 (1 + 2ηkL)j ≤ ΣT−1 t=1 (1 + 2ηkL)T−tβt ≤ (1 + 2ηkL)T−1 ΣT−1 t=1 ∥∇ft(x) + st∥ ≤ C1(T − 1)(M + G) =: K1. Thus, the error norm ∥e(xk)∥ is bounded from above by a constant, whereby it satisfies the requirement (8), making the incremental NIPS method (28) a special case of the general NIPS framework. This allows us to invoke the convergence result Theorem 8 without further ado. 4 Illustrative application The main contribution of our paper is the new NIPS framework, and a specific application is not one of the prime aims of this paper. We do, however, provide an illustrative application of NIPS to a challenging nonconvex problem: sparsity-regularized low-rank matrix factorization minX,A≥0 (1/2)∥Y − XA∥2F + ψ0(X) + ΣT t=1 ψt(at), (33) where Y ∈ Rm×T, X ∈ Rm×K and A ∈ RK×T, with a1, . . . , aT as its columns. Problem (33) generalizes the well-known nonnegative matrix factorization (NMF) problem of [20] by permitting arbitrary Y (not necessarily nonnegative), and by adding regularizers on X and A. A related class of problems was studied in [23], but with a crucial difference: the formulation in [23] does not allow nonsmooth regularizers on X.
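For the sparsity-plus-nonnegativity regularizer appearing in (33), g(X) = λ∥X∥₁ + δ(X | X ≥ 0), the proximity operator needed by NIPS has a simple closed form, one-sided soft-thresholding. The following sketch (an illustration with made-up data, not the paper's implementation) spells it out:

```python
import numpy as np

def prox_nonneg_l1(Y, t):
    # Proximity operator of g(X) = lam*||X||_1 + indicator(X >= 0), with
    # t = eta*lam: the entrywise argmin_{x>=0} lam*|x| + (1/(2*eta))*(x - y)^2,
    # which works out to max(y - t, 0).
    return np.maximum(Y - t, 0.0)

Y = np.array([[0.8, -0.5],
              [0.1,  2.0]])
X = prox_nonneg_l1(Y, 0.3)        # -> [[0.5, 0.0], [0.0, 1.7]]
```

Because this operator is separable and closed-form, each proximal call in the incremental scheme costs only one pass over the entries of X.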
The class of problems studied in [23] is in fact a subset of those covered by NIPS. On a more theoretical note, [23] considered stochastic-gradient-like methods whose analysis requires computational errors and stepsizes to vanish, whereas our method is deterministic and allows nonvanishing stepsizes and errors. Following [23] we also rewrite (33) in a form more amenable to NIPS. We eliminate A and consider minX φ(X) := ΣT t=1 ft(X) + g(X), where g(X) := ψ0(X) + δ(X | X ≥ 0), (34) and where each ft(X) for 1 ≤ t ≤ T is defined as ft(X) := mina (1/2)∥yt − Xa∥2 + gt(a), (35) where gt(a) := ψt(a) + δ(a | a ≥ 0). For simplicity, assume that (35) attains its unique minimum, say a∗; then ft(X) is differentiable and we have ∇Xft(X) = (Xa∗ − yt)(a∗)T. Thus, we can instantiate (28), and all we need is a subroutine for solving (35). We present empirical results on the following two variants of (34): (i) pure unpenalized NMF (ψt ≡ 0 for 0 ≤ t ≤ T) as a baseline; and (ii) sparsity-penalized NMF where ψ0(X) ≡ λ∥X∥1 and ψt(at) ≡ γ∥at∥1. Note that without the nonnegativity constraints, (34) is similar to sparse PCA. We use the following datasets and parameters: (i) RAND: 4000 × 4000 dense random (uniform [0, 1]); rank-32 factorization; (λ, γ) = (10−5, 10); (ii) CBCL: CBCL database [33]; 361 × 2429; rank-49 factorization; (iii) YALE: Yale B database [21]; 32256 × 2414 matrix; rank-32 factorization; (iv) WEB: web graph from Google; sparse 714545 × 739454 matrix (empty rows and columns removed; ID 2301 in the sparse matrix collection [10]); rank-4 factorization; (λ = γ = 10−6). Footnotes: (1) Otherwise, at the expense of more notation, we can add a small strictly convex perturbation to ensure uniqueness; this perturbation can then be absorbed into the overall computational error. (2) In practice, it is better to use mini-batches, and we used the same-sized mini-batches for all the algorithms.
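The envelope-theorem gradient ∇ft(X) = (Xa* − yt)(a*)ᵀ can be sanity-checked by finite differences. The sketch below is illustrative: it takes the unregularized case gt ≡ 0 so that the inner problem (35) is solved exactly by least squares, and differentiates through the (held-fixed) minimizer:

```python
import numpy as np

rng = np.random.default_rng(1)
m, K = 6, 3
X = rng.standard_normal((m, K))
y = rng.standard_normal(m)

def f_t(X):
    # Inner problem (35) with g_t = 0: the minimizer a* is the least-squares
    # solution, so f_t can be evaluated exactly.
    a_star = np.linalg.lstsq(X, y, rcond=None)[0]
    return 0.5 * np.linalg.norm(y - X @ a_star) ** 2

# Envelope theorem: differentiate the objective at the fixed minimizer a*.
a_star = np.linalg.lstsq(X, y, rcond=None)[0]
G = np.outer(X @ a_star - y, a_star)

# Central finite difference of one entry of X.
h = 1e-6
E = np.zeros_like(X)
E[2, 1] = 1.0
fd = (f_t(X + h * E) - f_t(X - h * E)) / (2 * h)
```

The finite-difference value agrees with the corresponding entry of G to high accuracy, which is the property that makes the incremental scheme (28) applicable despite ft being defined through an inner minimization.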
Figure 1: Running times of NIPS (Matlab) versus SPAMS (C++) for NMF on the RAND, CBCL, and YALE datasets. Initial objective values and tiny runtimes have been suppressed for clarity of presentation.
On the NMF baseline (Fig. 1), we compare NIPS against the well-optimized, state-of-the-art C++ toolbox SPAMS (version 2.3) [23]. We compare against SPAMS only on dense matrices, as its NMF code seems to be optimized for this case. Obviously, the comparison is not fair: unlike SPAMS, NIPS and its subroutines are all implemented in MATLAB, and they run equally easily on large sparse matrices. Nevertheless, NIPS proves to be quite competitive: Fig. 1 shows that our MATLAB implementation runs only slightly slower than SPAMS. We expect a well-tuned C++ implementation of NIPS to run at least 4–10 times faster than the MATLAB version; the dashed line in the plots visualizes what such a mere 3X speedup to NIPS might mean. Figure 2 shows numerical results comparing the stochastic generalized gradient (SGGD) algorithm of [12] against NIPS, when started at the same point. As is well-known, SGGD requires careful stepsize tuning; so we searched over a range of stepsizes, and have reported the best results. NIPS too requires some stepsize tuning, but substantially less than SGGD. As predicted, the solutions returned by NIPS have objective function values lower than SGGD, and have greater sparsity.
Figure 2: Sparse NMF: NIPS versus SGGD.
The bar plots show the sparsity (higher is better) of the factors X and A. Left plots for the RAND dataset; right plots for WEB. As expected, SGGD yields slightly worse objective function values and less sparse solutions than NIPS. 5 Discussion We presented a new framework called NIPS, which solves a broad class of nonconvex composite objective problems. NIPS permits nonvanishing computational errors, which can be practically useful. We specialized NIPS to also obtain a scalable incremental version. Our numerical experiments on large-scale matrix factorization indicate that NIPS is competitive with state-of-the-art methods. We conclude by mentioning that NIPS includes numerous other algorithms as special cases, for example, batch and incremental convex FBS, convex and nonconvex gradient projection, and the proximal-point algorithm, among others. Theoretically, however, the most exciting open problem resulting from this paper is: extend NIPS in a scalable way when even the nonsmooth part is nonconvex. This case will require a very different convergence analysis, and is left to the future. References [1] H. Attouch, J. Bolte, and B. F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Programming Series A, Aug. 2011. Online First. [2] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In S. Sra, S. Nowozin, and S. J. Wright, editors, Optimization for Machine Learning. MIT Press, 2011. [3] A. Beck and M. Teboulle. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sciences, 2(1):183–202, 2009. [4] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, second edition, 1999. [5] D. P. Bertsekas. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey. Technical Report LIDS-P-2848, MIT, August 2010. [6] S. Boyd and L.
Vandenberghe. Convex Optimization. Cambridge University Press, March 2004. [7] F. H. Clarke. Optimization and nonsmooth analysis. John Wiley & Sons, Inc., 1983. [8] P. L. Combettes and J.-C. Pesquet. Proximal Splitting Methods in Signal Processing. arXiv:0912.3522v4, May 2010. [9] P. L. Combettes and V. R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168–1200, 2005. [10] T. A. Davis and Y. Hu. The University of Florida Sparse Matrix Collection. ACM Transactions on Mathematical Software, 2011. To appear. [11] J. Duchi and Y. Singer. Online and Batch Learning using Forward-Backward Splitting. J. Mach. Learning Res. (JMLR), Sep. 2009. [12] Y. M. Ermoliev and V. I. Norkin. Stochastic generalized gradient method for nonconvex nonsmooth stochastic optimization. Cybernetics and Systems Analysis, 34:196–215, 1998. [13] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright. Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE J. Selected Topics in Sig. Proc., 1 (4):586–597, 2007. [14] M. Fukushima and H. Mine. A generalized proximal point algorithm for certain non-convex minimization problems. Int. J. Systems Science, 12(8):989–1000, 1981. [15] E. M. Gafni and D. P. Bertsekas. Two-metric projection methods for constrained optimization. SIAM Journal on Control and Optimization, 22(6):936–964, 1984. [16] A. A. Gaivoronski. Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Part 1. Optimization methods and Software, 4(2):117–134, 1994. [17] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, 1st edition, 1994. [18] K. Kreutz-Delgado, J. F. Murray, B. D. Rao, K. Engan, T.-W. Lee, and T. J. Sejnowski. Dictionary learning algorithms for sparse representation. Neural Computation, 15:349–396, 2003. [19] D. Kundur and D. Hatzinakos. Blind image deconvolution. 
IEEE Signal Processing Magazine, 13(3), May 1996. [20] D. D. Lee and H. S. Seung. Algorithms for Nonnegative Matrix Factorization. In NIPS, 2000. [21] K.-C. Lee, J. Ho, and D. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intelligence, 27(5):684–698, 2005. [22] J. Liu and J. Ye. Efficient Euclidean projections in linear time. In ICML, Jun. 2009. [23] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online Learning for Matrix Factorization and Sparse Coding. JMLR, 11:19–60, 2010. [24] J. J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math., 255:2897–2899, 1962. [25] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2004. [26] Y. Nesterov. Gradient methods for minimizing composite objective function. Technical Report 2007/76, Université catholique de Louvain, September 2007. [27] R. T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM J. Control and Optimization, 14, 1976. [28] R. T. Rockafellar and R. J.-B. Wets. Variational analysis. Springer, 1998. [29] M. V. Solodov. Convergence analysis of perturbed feasible descent methods. J. Optimization Theory and Applications, 93(2):337–353, 1997. [30] M. V. Solodov. Incremental gradient algorithms with stepsizes bounded away from zero. Computational Optimization and Applications, 11:23–35, 1998. [31] M. V. Solodov and S. K. Zavriev. Error stability properties of generalized gradient-type algorithms. J. Optimization Theory and Applications, 98(3):663–680, 1998. [32] S. Sra. Nonconvex proximal-splitting: Batch and incremental algorithms. Sep. 2012. arXiv:1109.0258v2. [33] K.-K. Sung. Learning and Example Selection for Object and Pattern Recognition. PhD thesis, MIT, 1996. [34] L. Xiao. Dual averaging method for regularized stochastic learning and online optimization. In NIPS, 2009.
Bayesian nonparametric models for ranked data
François Caron, INRIA, IMB - University of Bordeaux, Talence, France. Francois.Caron@inria.fr
Yee Whye Teh, Department of Statistics, University of Oxford, Oxford, United Kingdom. y.w.teh@stats.ox.ac.uk
Abstract We develop a Bayesian nonparametric extension of the popular Plackett-Luce choice model that can handle an infinite number of choice items. Our framework is based on the theory of random atomic measures, with the prior specified by a gamma process. We derive a posterior characterization and a simple and effective Gibbs sampler for posterior simulation. We develop a time-varying extension of our model, and apply it to the New York Times lists of weekly bestselling books. 1 Introduction Data in the form of partial rankings, i.e. in terms of an ordered list of the top-m items, arise in many contexts. For example, in this paper we consider datasets consisting of the top 20 bestselling books as published each week by the New York Times. The Plackett-Luce model [1, 2] is a popular model for modeling such partial rankings of a finite collection of M items. It has found many applications, including choice modeling [3], sport ranking [4], and voting [5]. [6, Chap. 9] provides detailed discussions on the statistical foundations of this model. In the Plackett-Luce model, each item k ∈ [M] = {1, . . . , M} is assigned a positive rating parameter wk, which represents the desirability or rating of a product in the case of choice modeling, or the skill of a player in sport rankings. The Plackett-Luce model assumes the following generative story for a top-m list ρ = (ρ1, . . . , ρm) of items ρi ∈ [M]: at each stage i = 1, . . . , m, an item is chosen to be the ith item in the list from among the items that have not yet appeared, with the probability that ρi is selected being proportional to its desirability wρi. The overall probability of a given partial ranking ρ is then: P(ρ) = Πm i=1 wρi / (ΣM k=1 wk − Σi−1 j=1 wρj).
(1) with the denominator in (1) being the sum over all items not yet selected at stage i. In many situations the collection of available items can be very large and potentially unknown. In this case, a nonparametric approach can be sensible, where the pool of items is assumed to be infinite and the model allows for the possibility of items not observed in previous top-m lists to appear in new ones. In this paper we propose such a Bayesian nonparametric Plackett-Luce model. Our approach is built upon recent work on Bayesian inference for the (finite) Plackett-Luce model and its extensions [7, 8, 9]. Our model assumes the existence of an infinite pool of items {Xk}∞k=1, each with its own rating parameter {wk}∞k=1. The probability of a top-m list of items, say (Xρ1, . . . , Xρm), is then a direct extension of the finite case (1): P(Xρ1, . . . , Xρm) = Πm i=1 wρi / (Σ∞ k=1 wk − Σi−1 j=1 wρj). (2) To formalize the framework, a natural representation to encapsulate the pool of items along with their ratings is an atomic measure: G = Σ∞ k=1 wk δXk (3) Using this representation, note that the top item Xρ1 in our list is simply a draw from the probability measure obtained by normalizing G, while subsequent items in the top-m list are draws from probability measures obtained by first removing from G the atoms corresponding to previously picked items and then normalizing. Described this way, it is clear that the Plackett-Luce model is basically a partial size-biased permutation of the atoms in G [10], and the existing machinery of random measures and exchangeable random partitions [11] can be brought to bear on our problem. In particular, in Section 2 we will use a gamma process as the prior over the atomic measure G. This is a completely random measure [12] with gamma marginals, such that the corresponding normalized probability measure is a Dirichlet process.
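The finite model (1) and its sequential generative story can be sketched in a few lines; the code below is an illustrative implementation, not from the paper. As a sanity check, the probabilities of all full rankings of M = 3 items sum to one:

```python
import numpy as np
from itertools import permutations

def plackett_luce_prob(rho, w):
    # Probability (1) of a partial ranking rho (tuple of item indices)
    # under desirability parameters w.
    remaining, p = np.sum(w), 1.0
    for item in rho:
        p *= w[item] / remaining
        remaining -= w[item]
    return p

def sample_top_m(w, m, rng):
    # Generative story: at each stage pick a not-yet-chosen item with
    # probability proportional to its desirability.
    w = np.asarray(w, dtype=float).copy()
    out = []
    for _ in range(m):
        k = rng.choice(len(w), p=w / w.sum())
        out.append(k)
        w[k] = 0.0
    return out

w = np.array([2.0, 1.0, 0.5])
mass = sum(plackett_luce_prob(rho, w) for rho in permutations(range(3)))
top2 = sample_top_m(w, 2, np.random.default_rng(0))
```

Enumerating all 3! full rankings confirms the normalization of (1); the same two functions extend verbatim to partial (top-m) lists.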
We will show that with the introduction of a suitable set of auxiliary variables, we can characterize the posterior law of G given observations of top-m lists distributed according to (2). A simple Gibbs sampler can then be derived to simulate from the posterior distribution. In Section 3 we develop a time-varying extension of our model and derive a simple and effective Gibbs sampler for posterior simulation. In Section 4 we apply our time-varying Bayesian nonparametric Plackett- Luce model to the aforementioned New York Times bestsellers datasets, and conclude in Section 5. 2 A Bayesian nonparametric model for partial ranking We start this section by briefly describing a Bayesian approach to inference in finite Plackett-Luce models [9], and taking the infinite limit to arrive at the nonparametric model. This will give good intuitions for how the model operates, before we rederive the same nonparametric model more formally using gamma processes. Throughout this paper we will suppose that our data consists of L partial rankings, with ρℓ= (ρℓ1, . . . , ρℓm) for ℓ∈[L]. For notational simplicity we assume that all the partial rankings are length m. 2.1 Finite Plackett-Luce model with gamma prior Suppose we have M choice items, with item k ∈[M] having a positive desirability parameter wk. A partial ranking ρℓ= (ρℓ1, . . . , ρℓm) can be constructed generatively by picking the ith item ρℓi at the ith stage for i = 1, . . . , m, with probability proportional to wρℓi as in (1). An alternative Thurstonian interpretation, which will be important in the following, is as follows: For each item k let zℓk ∼Exp(wk) be exponentially distributed with rate wk. Thinking of zℓk as the arrival time of item k in a race, let ρℓi be the index of the ith item to arrive (the ith smallest value among (zℓk)M k=1). The resulting probability of ρℓcan then be shown to still be (1). 
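The Thurstonian reading above (items racing with Exp(wk) arrival times) is easy to verify empirically. The check below is illustrative and uses the standard fact that, among independent exponentials, the first arrival is item k with probability wk / Σj wj, exactly the first factor of (1):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([2.0, 1.0, 0.5])
N = 200_000

# z_{l,k} ~ Exp(w_k); note numpy's exponential takes the scale 1/rate.
z = rng.exponential(1.0 / w, size=(N, 3))
first = z.argmin(axis=1)
freq0 = (first == 0).mean()
# Under the model, P(item 0 arrives first) = w_0 / sum(w) = 2/3.5.
```

Repeating the argument on the remaining items after each arrival recovers the full Plackett-Luce probability (1), which is the content of the Thurstonian interpretation.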
In this interpretation (zℓk) can be understood as latent variables, and the EM algorithm can be applied to derive an algorithm that finds an ML parameter setting for (wk)M k=1 given multiple partial rankings. Unfortunately the posterior distribution of (zℓk) given ρℓ is difficult to compute directly, so we instead consider an alternative parameterization: let Zℓi = zρℓi − zρℓi−1 be the waiting time for the ith item to arrive after the (i−1)th item (with zρℓ0 defined to be 0). Then it can be shown that the joint probability is: P((ρℓ)L ℓ=1, (Zℓi)L,m ℓ=1,i=1 | (wk)M k=1) = ΠL ℓ=1 Πm i=1 wρℓi exp(−Zℓi(ΣM k=1 wk − Σi−1 j=1 wρℓj)) (4) Note that the posterior of (Zℓi)m i=1 is simply factorized, with Zℓi | ρ, w ∼ Exp(ΣM k=1 wk − Σi−1 j=1 wρℓj), and the ML parameter setting can be easily derived as well. Taking a further step, we note that a factorized gamma prior over (wk) is conjugate to (4), say wk ∼ Gamma(α/M, τ) with hyperparameters α, τ > 0. Now Bayesian inference can be carried out either with a VB EM algorithm, or with a Gibbs sampler. In this paper we shall consider only Gibbs sampling algorithms. In this case the parameter updates are of the form wk | (ρℓ), (Zℓi), (wk′)k′̸=k ∼ Gamma(α/M + nk, τ + ΣL ℓ=1 Σm i=1 δℓik Zℓi) (5) where nk is the number of occurrences of item k among the observed partial rankings, and δℓik = 0 if there is a j < i with ρℓj = k and 1 otherwise. These terms arise by regrouping those in the exponential in (4). A nonparametric Plackett-Luce model can now be easily derived by taking the limit as the number of choice items M → ∞. For those items k that have appeared among the observed partial rankings, the limiting conditional distribution (5) is well defined since nk > 0. For items that did not appear in the observations, (5) becomes degenerate at 0.
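A single Gibbs sweep implementing the updates above (sampling the latent inter-arrival times Zℓi at their exponential rates, then each wk from its gamma full conditional (5)) might look as follows. This sketch is an illustrative assumption for the finite model; the nonparametric version would additionally resample the leftover mass of unseen items:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: L = 2 top-2 lists over M = 3 observed items (0-indexed).
rankings = [(0, 1), (2, 0)]
M, alpha, tau = 3, 1.0, 1.0
w = np.ones(M)

for sweep in range(100):
    # 1) Sample Z_{l,i} ~ Exp(sum(w) - weights of items already chosen).
    Z = []
    for rho in rankings:
        rate, row = w.sum(), []
        for item in rho:
            row.append(rng.exponential(1.0 / rate))
            rate -= w[item]
        Z.append(row)
    # 2) Sample each w_k from its gamma full conditional (5); delta_{l,i,k}
    #    is 1 iff item k has not appeared before stage i of list l.
    for k in range(M):
        n_k = sum(rho.count(k) for rho in rankings)
        s = sum(row[i]
                for rho, row in zip(rankings, Z)
                for i in range(len(rho))
                if k not in rho[:i])
        w[k] = rng.gamma(alpha / M + n_k, 1.0 / (tau + s))
```

The alternation between the conjugate exponential and gamma conditionals is what makes this sampler simple; by conjugacy each draw is exact, with no Metropolis correction needed.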
Instead we can define w∗= P k:nk=0 wk to be the total desirability among all infinitely many previously unobserved items, and show that w∗|(ρℓ), (Zℓi), (wk)k:nk>0 ∼Gamma α, τ + PL ℓ=1 Pm i=1 Zℓi (6) The Gibbs sampler thus alternates between updating (Zℓi), and updating the ratings of the observed items (wk)k:nk>0 and of the unobserved ones w∗. This nonparametric model allows us to estimate the probability of seeing new items appearing in future partial rankings in a consistent manner. While intuitive, this derivation is ad hoc in the sense that it arises as the infinite limit of the Gibbs sampler for finite models, and is unsatisfying as it did not directly capture the structure of the underlying infinite dimensional object, which we will show in the next subsection to be a gamma process. 2.2 A Bayesian nonparametric Plackett-Luce model Let X be a measurable space of choice items. A gamma process is a completely random measure over X with gamma marginals. Specifically, it is a random atomic measure of the form (3), such that for each measurable subset A, the (random) mass G(A) is gamma distributed. Assuming that G has no fixed atoms (that is, for each element x ∈X we have G({x}) = 0 with probability one) and that the atom locations {Xk} are independent of their masses {wk}, it can be shown that such a random measure can be constructed as follows: each Xk is iid according to a base distribution H (which we assume is non-atomic with density h(x)), while the set of masses {wk} is distributed according to a Poisson process over R+ with intensity λ(w) = αw−1e−wτ where α > 0 is the concentration parameter and τ > 0 the inverse scale. We write this as G ∼Γ(α, τ, H). Under this parametrization, we have that G(A) ∼Gamma(αH(A), τ). Each atom Xk is a choice item, with its mass wk > 0 corresponding to the desirability parameter. The Thurstonian view described in the finite model can be easily extended to the nonparametric one, where a partial ranking (Xρℓ1 . . . 
Xρℓm) can be generated as the first m items to arrive in a race. In particular, for each atom Xk let zℓk ∼ Exp(wk) be the time of arrival of Xk and Xρℓi the ith item to arrive. The first m items to arrive (Xρℓ1, . . . , Xρℓm) then constitute our top-m list, with probability as given in (2). Again reparametrizing using inter-arrival durations, let Zℓi = zρℓi − zρℓ,i−1 for i = 1, 2, . . . (with zρℓ0 = 0). Then the joint probability is:

$P((X_{\rho_{\ell i}})_{i=1}^m, (Z_{\ell i})_{i=1}^m \mid G) = P\big((z_{\rho_{\ell 1}}, \ldots, z_{\rho_{\ell m}}),\ \text{and } z_{\ell k} > z_{\rho_{\ell m}} \text{ for all } k \notin \{\rho_{\ell 1}, \ldots, \rho_{\ell m}\}\big)$
$= \prod_{i=1}^m w_{\rho_{\ell i}} e^{-w_{\rho_{\ell i}} z_{\rho_{\ell i}}} \prod_{k \notin \{\rho_{\ell i}\}_{i=1}^m} e^{-w_k z_{\rho_{\ell m}}} = \prod_{i=1}^m w_{\rho_{\ell i}} \exp\Big(-Z_{\ell i}\Big(\sum_{k=1}^\infty w_k - \sum_{j=1}^{i-1} w_{\rho_{\ell j}}\Big)\Big)$ (7)

Marginalizing out $(Z_{\ell i})_{i=1}^m$ gives the probability of $(X_{\rho_{\ell i}})_{i=1}^m$ in (2). Further, conditional on ρℓ it is seen that the inter-arrival durations Zℓ1, . . . , Zℓm are mutually independent and exponentially distributed:

$Z_{\ell i} \mid (X_{\rho_{\ell i}})_{i=1}^m, G \sim \mathrm{Exp}\Big(\sum_{k=1}^\infty w_k - \sum_{j=1}^{i-1} w_{\rho_{\ell j}}\Big)$ (8)

The above construction is depicted in Figure 1 (left). We visualize on the right some top-m lists generated from the model, with τ = 1 and different values of α. 2.3 Posterior characterization Consider a number L of partial rankings, with the ℓth list denoted Yℓ = (Yℓ1, . . . , Yℓmℓ), for ℓ ∈ [L]. While previously our top-m list (Xρ1, . . . , Xρm) consisted of an ordered list of the atoms in G, here G is unobserved and (Yℓ1, . . . , Yℓmℓ) is simply a list of observed choice items, which is why they are not expressed as an ordered list of atoms in G. The task here is then to characterize the posterior law of G under a gamma process prior, supposing that the observed partial rankings were drawn iid from the nonparametric Plackett-Luce model given G. Re-expressing the conditional distribution (2) of Yℓ given G, we have:

$P(Y_\ell \mid G) = \prod_{i=1}^{m_\ell} \frac{G(\{Y_{\ell i}\})}{G(\mathcal{X} \setminus \{Y_{\ell 1}, \ldots, Y_{\ell,i-1}\})}$ (9)

Figure 1: Bayesian nonparametric Plackett-Luce model.
Left: G and $U = \sum_k u_k \delta_{X_k}$ where $u_k = -\log(z_k)$. The top-3 ranking is (ρ1, ρ2, ρ3). Right: visualization of top-5 rankings, with rows corresponding to different rankings and columns to items sorted by size-biased order. A lighter shade corresponds to a higher rank. Each figure is for a different G, with α = .1, 1, 3. As before, for each ℓ, we will also introduce a set of auxiliary variables Zℓ = (Zℓ1, . . . , Zℓmℓ) (the inter-arrival times) that are conditionally mutually independent given G and Yℓ, with:

$Z_{\ell i} \mid Y_\ell, G \sim \mathrm{Exp}\big(G(\mathcal{X} \setminus \{Y_{\ell 1}, \ldots, Y_{\ell,i-1}\})\big)$ (10)

The joint probability of the item lists and auxiliary variables is then (c.f. (7)):

$P((Y_\ell, Z_\ell)_{\ell=1}^L \mid G) = \prod_{\ell=1}^L \prod_{i=1}^{m_\ell} G(\{Y_{\ell i}\}) \exp\big(-Z_{\ell i}\, G(\mathcal{X} \setminus \{Y_{\ell 1}, \ldots, Y_{\ell,i-1}\})\big)$ (11)

Note that under the generative process described in Section 2.2, there is positive probability that an item appearing in a list Yℓ appears in another list Yℓ′ with ℓ′ ≠ ℓ. Denote the unique items among all L lists by X∗1, . . . , X∗K, and for each k = 1, . . . , K let nk be the number of occurrences of X∗k among the item lists. Finally, define occurrence indicators

$\delta_{\ell i k} = \begin{cases} 0 & \text{if } \exists j < i \text{ with } Y_{\ell j} = X^*_k, \\ 1 & \text{otherwise,} \end{cases}$ (12)

i.e. δℓik is the indicator of the event that item X∗k does not appear at a rank lower than i in the ℓth list. Then the joint probability under the nonparametric Plackett-Luce model is:

$P((Y_\ell, Z_\ell)_{\ell=1}^L \mid G) = \prod_{k=1}^K G(\{X^*_k\})^{n_k} \times \prod_{\ell=1}^L \prod_{i=1}^{m_\ell} \exp\big(-Z_{\ell i}\, G(\mathcal{X} \setminus \{Y_{\ell 1}, \ldots, Y_{\ell,i-1}\})\big)$
$= \exp\Big(-G(\mathcal{X}) \sum_{\ell i} Z_{\ell i}\Big) \prod_{k=1}^K G(\{X^*_k\})^{n_k} \exp\Big(-G(\{X^*_k\}) \sum_{\ell i} (\delta_{\ell i k} - 1) Z_{\ell i}\Big)$
(13) Taking expectation of (13) with respect to G using the Palm formula gives: Theorem 1 The marginal probability of the L partial rankings and auxiliary variables is: P((Yℓ, Zℓ)L ℓ=1) = e−ψ(P ℓi Zℓi) K Y k=1 h(X∗ k)κ nk, X ℓi δℓikZℓi (14) where ψ(z) is the Laplace transform of λ, ψ(z) = −log E h e−zG(X)i = Z R+ λ(w)(1 −e−zw)dw = α log 1 + z τ (15) and κ(n, z) is the nth moment of the exponentially tilted L´evy intensity λ(w)e−zw: κ(n, z) = Z R+ λ(w)wne−zwdw = α (z + τ)n Γ(n) (16) Details are given in the supplementary material. Another application of the Palm formula now allows us to derive a posterior characterisation of G: 4 Theorem 2 Given the observations and associated auxiliary variables (Yℓ, Zℓ)L ℓ=1, the posterior law of G is also a gamma process, but with atoms with both fixed and random locations. Specifically, G|(Yℓ, Zℓ)L ℓ=1 = G∗+ K X k=1 w∗ kδX∗ k (17) where G∗and w∗ 1, . . . , w∗ K are mutually independent. The law of G∗is still a gamma process, G∗|(Xℓ, Zℓ)L ℓ=1 ∼Γ(α, τ ∗, h) τ ∗= τ + X ℓi Zℓi (18) while the masses have distributions, w∗ k|(Yℓ, Zℓ)L ℓ=1 ∼Gamma nk, τ + X ℓi δℓikZℓi (19) 2.4 Gibbs sampling Given the results of the previous section, a simple Gibbs sampler can now be derived, where all the conditionals are of known analytic form. In particular, we will integrate out all of G∗except for its total mass w∗ ∗= G∗(X). This leaves the latent variables to consist of the masses w∗ ∗, (w∗ k) and the auxiliary variables (Zℓi). The update for Zℓi is given by (10), while those for the masses are given in Theorem 2: Gibbs update for Zℓi: Zℓi|rest ∼Exp w∗ ∗+ P k δℓikw∗ k (20) Gibbs update for w∗ k: w∗ k|rest ∼Gamma nk, τ + P ℓi δℓikZℓi (21) Gibbs update for w∗ ∗: w∗ ∗|rest ∼Gamma α, τ + P ℓi Zℓi (22) Note that the auxiliary variables are conditionally independent given the masses and vice versa. Hyperparameters of the gamma process can be simply derived from the joint distribution in Theorem 1. 
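The three conditionals (20)-(22) translate almost directly into code. The sketch below is our own illustrative implementation (names and data layout are ours): a dict `w` holds the masses w*_k of the observed items (each assumed to appear in at least one list, so n_k > 0), and a scalar `w_star` holds the total mass w*_* of the unobserved atoms.

```python
import random

def gibbs_sweep_np(rankings, w, w_star, alpha, tau, rng=random):
    """One sweep of the Gibbs sampler (20)-(22) for the nonparametric
    Plackett-Luce model. `rankings` is a list of partial rankings over
    keys of `w`; every key of `w` must appear in some list (n_k > 0)."""
    # (20): Z_{li} | rest ~ Exp(w*_* + sum_k delta_{lik} w*_k)
    Z = []
    for rho in rankings:
        remaining = w_star + sum(w.values())
        row = []
        for item in rho:
            row.append(rng.expovariate(remaining))
            remaining -= w[item]  # items ranked before i drop out (delta = 0)
        Z.append(row)
    # (21): w*_k | rest ~ Gamma(n_k, tau + sum_{li} delta_{lik} Z_{li})
    for k in w:
        n_k = sum(1 for rho in rankings if k in rho)
        rate = tau + sum(z for rho, row in zip(rankings, Z)
                         for i, z in enumerate(row) if k not in rho[:i])
        w[k] = rng.gammavariate(n_k, 1.0 / rate)
    # (22): w*_* | rest ~ Gamma(alpha, tau + sum_{li} Z_{li})
    w_star = rng.gammavariate(alpha, 1.0 / (tau + sum(sum(row) for row in Z)))
    return w, w_star
```

Note how the conditional independence stated in the text shows up in the code: given the masses, all Z_{ℓi} are sampled without reference to each other, and given the Z's, each mass update is a one-line gamma draw.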
Since the marginal probability of the partial rankings is invariant to rescaling of the masses, it is sufficient to keep τ fixed at 1. As for α, if a Gamma(a, b) prior is placed on it, its conditional distribution is still gamma: Gibbs update for α: α|rest ∼Gamma a + K, b + log 1 + P ℓi Zℓi τ (23) Note that this update was derived with w∗ ∗marginalized out, so after an update to α it is necessary to immediately update w∗ ∗via (22) before proceeding to update other variables. 3 Dynamic Bayesian nonparametric ranking models In this section we develop an extension of the Bayesian nonparametric Plackett-Luce model to model time-varying rankings, where the rating parameters of items may change smoothly over time and reflected in a changing series of rankings. Given a series of times indexed by t = 1, 2, . . ., we may model the rankings at time t using a gamma process distributed random measure Gt as in Section 2.2, with Markov dependence among the sequence of measures (Gt) enabling dependence among the rankings over time. 3.1 Pitt-Walker dependence model We will construct a dependent sequence (Gt) which marginally follow a gamma process Γ(α, τ, H) using the construction of [13]. Suppose Gt ∼Γ(α, τ, H). Since Gt is atomic, we can write it in the form: Gt = ∞ X k=1 wtkδXtk (24) Define a random measure Ct with conditional law: Ct|Gt = ∞ X k=1 ctkδXtk ctk|Gt ∼Poisson(φtwtk) (25) where φt > 0 is a dependence parameter. Using the same method as in Section 2.3, we can show: 5 Proposition 3 Suppose the law of Gt is Γ(α, τ, H). The conditional law of Gt given Ct is then: Gt = G∗ t + ∞ X k=1 w∗ tkδXtk (26) where G∗ t and (w∗ tk)∞ k=1 are all mutually independent. The law of G∗ t is given by a gamma process, while the masses are conditionally gamma, G∗ t |Ct ∼Γ(α, τ + φt, H) w∗ tk|Ct ∼Gamma(ctk, τ + φt) (27) The idea of [13] is to define the conditional law of Gt+1 given Gt and Ct to coincide with the conditional law of Gt given Ct as in Proposition 3. 
In other words, define Gt+1 = G∗ t+1 + ∞ X k=1 wt+1,kδXtk (28) where G∗ t+1 ∼Γ(α, τ + φt, H) and wt+1,k ∼Gamma(ctk, τ + φt) are mutually independent. If the prior law of Gt is Γ(α, τ, H), the marginal law of Gt+1 will be Γ(α, τ, H) as well when both Gt and Ct are marginalized out, thus maintaining a form of stationarity. Further, although we have described the process in order of increasing t, the joint law of Gt, Ct, Gt+1 can equivalently be described in the reverse order with the same conditional laws as above. Note that if ctk = 0, the conditional distribution of wt+1,k will be degenerate at 0. Hence Gt+1 has an atom at Xtk if and only if Ct has an atom at Xtk, that is, if ctk > 0. In addition, it also has atoms (those in G∗ t+1) where Ct does not (nor does Gt). Finally, the parameter φt can be interpreted as controlling the strength of dependence between Gt+1 and Gt. Indeed it can be shown that E[Gt+1|Gt] = φt φt + τ Gt + τ φt + τ H. (29) Another measure of dependence can be gleaned by examining the “lifetime” of an atom. Suppose X is an atom in G1 with mass w > 0. The probability that X is an atom in C2 with positive mass is 1 −exp(−φ1w), in which case it has positive mass in G2 as well. Conversely, once it is not an atom, it will never be an atom in the future since the base distribution H is non-atomic. The lifetime of the atom is then the smallest t such that it is no longer an atom. We can show by induction that: (details in supplementary material) Proposition 4 The probability that an atom X in G1 with mass w > 0 is dead at time t is given by P(Gt({X}) = 0|w) = exp(−yt|1w) where yt|1 can be obtained by the recurrence yt|t−1 = φt−1 and yt|s−1 = yt|sφs−1 φs−1+τ+yt|s . 3.2 Posterior characterization and Gibbs sampling Assume for simplicity that at each time step t = 1, . . . , T we observe one top-m list Yt = (Yt1, . . . , Ytm) (it trivially extends to multiple partial rankings of differing sizes). 
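Restricted to the total mass G_t(X), the construction above reduces (summing (25)-(28) over atoms — our own reduction, stated here for illustration) to: draw c ~ Poisson(φ G_t(X)), then G_{t+1}(X) ~ Gamma(α + c, τ + φ). A stdlib-only Monte Carlo sketch (helper names are ours; Python's `random` has no Poisson sampler, so we hand-roll Knuth's) checks that this transition leaves the Gamma(α, τ) marginal invariant:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's product-of-uniforms sampler; adequate for moderate lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def pitt_walker_total_mass(mass, alpha, tau, phi, rng):
    """One Pitt-Walker transition of the total mass G_t(X):
    c ~ Poisson(phi * mass), then Gamma(alpha + c, tau + phi).
    Marginally this preserves Gamma(alpha, tau)."""
    c = poisson(phi * mass, rng)
    return rng.gammavariate(alpha + c, 1.0 / (tau + phi))

rng = random.Random(0)
alpha, tau, phi = 2.0, 1.0, 3.0
mass = rng.gammavariate(alpha, 1.0 / tau)  # start at stationarity
chain = []
for _ in range(20000):
    mass = pitt_walker_total_mass(mass, alpha, tau, phi, rng)
    chain.append(mass)
mean = sum(chain) / len(chain)  # should hover near alpha / tau = 2
```

Larger φ makes consecutive masses more strongly coupled (the conditional mean puts weight φ/(φ+τ) on G_t, consistent with the conditional-expectation formula in the text), while the long-run average stays at the stationary mean α/τ.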
We extend the results of the previous section in characterizing the posterior and developing a Gibbs sampler for the dynamical model. Since each observed item at time t has to be an atom in its corresponding random measure Gt, and atoms in Gt can propagate to neighboring random measures via the Pitt-Walker dependence model, we conclude that the set of all observed items (through all times) has to include all fixed atoms in the posterior of Gt. Thus let X∗= (X∗ k), k = 1, . . . , K be the set of unique items observed in Y1, . . . , YT , let ntk ∈{0, 1} be the number of times the item X∗ k appears at time t, and let ρt be defined as Yt = (X∗ ρ1, . . . , X∗ ρm). We write the masses of the fixed atoms as wtk = Gt({X∗ k}), while the total mass of all other random atoms is denoted wt∗= Gt(X\X∗). Note that wtk has to be positive on a random contiguous interval of time that includes all observations of X∗ k—it’s lifetime—but is zero outside of the interval. We also write ctk = Ct({X∗ k}) and ct∗= Ct(X\X∗). As before, we introduce, for t = 1, . . . , T and i = 1, . . . , m, latent variables Zti ∼Exp wt∗+ K X k=1 wtk − i−1 X j=1 wtρj (30) 6 Figure 2: Sample path drawn from the Dawson-Watanabe superprocess. Each colour represents an atom, with height being its (varying) mass. Left shows (Gt) and right (Gt/Gt(X)), a Fleming-Viot process. Each iteration of the Gibbs sampler then proceeds as follows (details in supplementary material). The latent variables (Zti) are updated as above. Conditioned on the latent variables (Zti), (ctk) and (ct∗), we update the masses (wtk), which are independent and gamma distributed since all likelihoods are of gamma form. Note that the total masses (Gt(X)) are not likelihood identifiable, so we introduce an extra step to improve mixing by sampling them from the prior (integrating out (ctk), (ct∗)), scaling all masses along with it. Directly after this step we update (ctk), (ct∗). 
We update α along with the random masses (wt∗) and (ct∗) efficiently using a forward-backward recursion. Finally, the dependence parameters (φt) are updated. 3.3 Continuous time formulation using superprocesses The dynamic model described in the previous section is formulated for discrete time data. When the time interval between ranking observations is not constant, it is desirable to work with dynamic models evolving over continuous-time instead, with the underlying random measures (Gt) defined over all t ∈R, but with observations at a discrete set of times t1 < t2 < · · · . Here we propose a continuous-time model based on the Dawson-Watanabe superprocess [14, 15] (see also [16, 17, 18, 19]). This is a diffusion on the space of measures with the gamma process Γ(α, τ, H) as its equilibrium distribution. It is defined by a generator L = ξ Z G(dX) ∂2 ∂G(X)2 + α Z H(dX) ∂ ∂G(X) −τ Z G(dX) ∂ ∂G(X) with ξ parametrizing the rate of evolution. Figure 2 gives a sample path, where we see that it is continuous but non-differentiable. For efficient inference, it is desirable to be able to integrate out all Gt’s except those Gt1, Gt2, . . . at observation times. An advantage to using the Dawson-Watanabe superprocess is that, the conditional distribution of Gts given Gts−1 is remarkably simple [20]. In particular it is simply given by the discrete-time process of the previous section with dependence parameter φts|ts−1 = τ eτξ(ts−ts−1)−1. Thus the inference algorithm developed previously is directly applicable to the continuous-time model too. 4 Experiments We apply the discrete-time dynamic Plackett-Luce model to the New York Times bestsellers data. These consist of the weekly top-20 best-sellers list from June 2008 to April 2012 in various categories. We consider here the categories paperback nonfiction (PN) and hardcover fiction (HF), for which respectively 249 and 916 books appear at least once in the top-20 lists over the 200 weeks. 
We take the correlation parameter φt = φ to be constant over time, and assign flat improper priors p(α) ∝ 1/α and p(φ) ∝ 1/φ. In order to take into account the publication date of a book, we do not consider books in the likelihood before their first appearance in a list. We run the Gibbs sampler with 10000 burn-in iterations followed by 10000 samples. Mean normalized weights for the more popular books in both categories are shown in Figure 3. The model is able to estimate the weights associated with each book that appeared at least once, as well as the total weight associated with all other books, i.e. the probability that a new book enters at the first rank in the list, represented by the black curve. Moreover, the Bayesian approach enables us to have a measure of the uncertainty on the weights. The hardcover fiction category is characterized by rapid changes in successive lists, compared to the paperback nonfiction. This is quantified by the estimated values of the parameter φ, which are respectively 85 ± 20 and 140 ± 40 for PN and HF. The estimated values of the shape parameter α are 7 ± 1.5 and 2 ± 1 respectively. Figure 3: Mean normalized weights for paperback nonfiction (left) and hardcover fiction (right), plotted over Nov 2008 to Apr 2012; the legend lists the more popular titles (e.g., EAT, PRAY, LOVE; THE AUDACITY OF HOPE; HEAVEN IS FOR REAL). The black lines represent the weight associated with all the books that have not appeared in the top-20 lists. 5 Discussion We have proposed a Bayesian nonparametric Plackett-Luce model for ranked data.
Our approach is based on the theory of atomic random measures, where we showed that the Plackett-Luce generative model corresponds exactly to a size-biased permutation of the atoms in the random measure. We characterized the posterior distribution, and derived a simple MCMC sampling algorithm for posterior simulation. Our approach can be seen as a multi-stage generalization of posterior inference in normalized random measures [21, 22, 23], and can be easily extended from gamma processes to general completely random measures. We also proposed dynamical extensions of our model for both discrete and continuous time data, and applied them to modeling the bestsellers' lists of the New York Times. Our dynamic extension may be useful for modeling time-varying densities or clusterings as well. In our experiments we found that our model is insufficient to capture the empirical observation that bestsellers often start off high on the lists and tail off afterwards, since our model has continuous sample paths. We adjusted for this by simply not including books in the model prior to their publication date. It may be possible to model this better using models with discontinuous sample paths, for example the Ornstein-Uhlenbeck approach of [24], where the process evolves via a series of discrete jump events instead of continuously. Acknowledgements YWT thanks the Gatsby Charitable Foundation for generous funding. References [1] R.D. Luce. Individual choice behavior: A theoretical analysis. Wiley, 1959. [2] R. Plackett. The analysis of permutations. Applied Statistics, 24:193–202, 1975. [3] R.D. Luce. The choice axiom after twenty years. Journal of Mathematical Psychology, 15:215–233, 1977. [4] D.R. Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32:384–406, 2004. [5] I.C. Gormley and T.B. Murphy. Exploring voting blocs with the Irish electorate: a mixture modeling approach. Journal of the American Statistical Association, 103:1014–1027, 2008.
[6] P. Diaconis. Group representations in probability and statistics, IMS Lecture Notes, volume 11. Institute of Mathematical Statistics, 1988. [7] I.C. Gormley and T.B. Murphy. A grade of membership model for rank data. Bayesian Analysis, 4:265–296, 2009. [8] J. Guiver and E. Snelson. Bayesian inference for Plackett-Luce ranking models. In International Conference on Machine Learning, 2009. [9] F. Caron and A. Doucet. Efficient Bayesian inference for generalized Bradley-Terry models. Journal of Computational and Graphical Statistics, 21(1):174–196, 2012. [10] G.P. Patil and C. Taillie. Diversity as a concept and its implications for random communities. Bulletin of the International Statistical Institute, 47:497–515, 1977. [11] J. Pitman. Combinatorial stochastic processes. Ecole d’´et´e de Probabilit´es de Saint-Flour XXXII - 2002, volume 1875 of Lecture Notes in Mathematics. Springer, 2006. [12] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967. [13] M.K. Pitt and S.G. Walker. Constructing stationary time series models using auxiliary variables with applications. Journal of the American Statistical Association, 100(470):554–564, 2005. [14] S. Watanabe. A limit theorem of branching processes and continuous state branching processes. Journal of Mathematics of Kyoto University, 8:141–167, 1968. [15] D. A. Dawson. Stochastic evolution equations and related measure processes. Journal of Multivariate Analysis, 5:1–52, 1975. [16] S.N. Ethier and RC Griffiths. The transition function of a measure-valued branching diffusion with immigration. Stochastic Processes. A Festschrift in Honour of Gopinath Kallianpur (S. Cambanis, J. Ghosh, RL Karandikar and PK Sen, eds.), 71:79, 1993. [17] R.H. Mena and S.G. Walker. On a construction of Markov models in continuous time. MetronInternational Journal of Statistics, 67(3):303–323, 2009. [18] S. Feng. Poisson-Dirichlet Distribution and Related Topics. Springer, 2010. [19] J.C. Cox, J.E. 
Ingersoll Jr, and S.A. Ross. A theory of the term structure of interest rates. Econometrica: Journal of the Econometric Society, pages 385–407, 1985. [20] S. N. Ethier and R. C. Griffiths. The transition function of a measure-valued branching diffusion with immigration. Stochastic Processes, 1993. [21] L.F. James, A. Lijoi, and I. Pr¨unster. Posterior analysis for normalized random measures with independent increments. Scandinavian Journal of Statistics, 36(1):76–97, 2009. [22] J.E. Griffin and S.G. Walker. Posterior simulation of normalized random measure mixtures. Journal of Computational and Graphical Statistics, 20(1):241–259, 2011. [23] S. Favaro and Y.W. Teh. MCMC for normalized random measure mixture models. Technical report, University of Turin, 2012. [24] J. E. Griffin. The Ornstein-Uhlenbeck Dirichlet process and other time-varying processes for Bayesian nonparametric inference. Journal of Statistical Planning and Inference, 141:3648– 3664, 2011. 9
GenDeR: A Generic Diversified Ranking Algorithm Jingrui He IBM T.J. Watson Research Yorktown Heights, NY 10598 jingruhe@us.ibm.com Hanghang Tong IBM T.J. Watson Research Yorktown Heights, NY 10598 htong@us.ibm.com Qiaozhu Mei University of Michigan Ann Arbor, MI 48109 qmei@umich.edu Boleslaw K. Szymanski Rensselaer Polytechnic Institute Troy, NY 12180 szymab@rpi.edu Abstract Diversified ranking is a fundamental task in machine learning. It is broadly applicable in many real world problems, e.g., information retrieval, team assembling, product search, etc. In this paper, we consider a generic setting where we aim to diversify the top-k ranking list based on an arbitrary relevance function and an arbitrary similarity function among all the examples. We formulate it as an optimization problem and show that in general it is NP-hard. Then, we show that for a large volume of the parameter space, the proposed objective function enjoys the diminishing returns property, which enables us to design a scalable, greedy algorithm to find the (1 −1/e) near-optimal solution. Experimental results on real data sets demonstrate the effectiveness of the proposed algorithm. 1 Introduction Many real applications can be reduced to a ranking problem. While traditional ranking tasks mainly focus on relevance, it has been widely recognized that diversity is another highly desirable property. It is not only a key factor to address the uncertainty and ambiguity in an information need, but also an effective way to cover the different aspects of the information need [14]. Take team assembling as an example. Given a task which typically requires a set of skills, we want to form a team of experts to perform that task. On one hand, each team member should have some relevant skills. 
On the other hand, the whole team should somehow be diversified, so that we can cover all the required skills for the task and different team members can benefit from each other's diversified, complementary knowledge and social capital. More recent research discovers that diversity plays a positive role in improving employees' performance within big organizations as well as their job retention rate in the face of lay-offs [21]; in improving human-centric sensing results [15, 17]; in the decision of joining a new social media site (e.g., Facebook) [18]; etc. To date, many diversified ranking algorithms have been proposed. Early works mainly focus on text data [5, 23], where the goal is to improve the coverage of (sub-)topics in the retrieval result. In recent years, more attention has been paid to result diversification in web search [2, 20]. For example, if a query bears multiple meanings (such as the keyword 'jaguar', which could refer to either cars or cats), we would like to have each meaning (e.g., 'cars' and 'cats' in the example of 'jaguar') covered by a subset of the top-ranked web pages. Another recent trend is to diversify PageRank-type algorithms for graph data [24, 11, 16]. It is worth pointing out that almost all the existing diversified ranking algorithms hinge on the specific choice of the relevance function and/or the similarity function. For example, in [2] and [20], both the relevance function and the similarity function implicitly depend on the categories/subtopics associated with the query and the documents; in [16], the relevance function is obtained via personalized PageRank [8], and the similarity is measured based on the so-called 'Google matrix'; etc. In this paper, we shift the problem to a more generic setting and ask: given an arbitrary relevance function wrt an implicit or explicit query, and an arbitrary similarity function among all the available examples, how can we diversify the resulting top-k ranking list?
We address this problem from the optimization viewpoint. First, we propose an objective function that admits any non-negative relevance function and any non-negative, symmetric similarity function. It naturally captures both the relevance with regard to the query and the diversity of the ranking list, with a regularization parameter that balances between them. Then, we show that while such an optimization problem is NP-hard in general, for a large volume of the parameter space the objective function exhibits the diminishing returns property, including submodularity, monotonicity, etc. Finally, we propose a scalable, greedy algorithm to find a provably near-optimal solution. The rest of the paper is organized as follows. We present our optimization formulation for diversified ranking in Section 2, followed by the analysis of its hardness and properties. Section 3 presents our greedy algorithm for solving the optimization problem. The performance of the proposed algorithm is evaluated in Section 4. In Section 5, we briefly review the related work. Finally, we conclude the paper in Section 6. 2 The Optimization Formulation In this section, we present the optimization formulation for diversified ranking. We start by introducing the notation, and then present the objective function, followed by the analysis regarding its hardness and properties. 2.1 Notation In this paper, we use normal lower-case letters to denote scalars or functions, bold-face lower-case letters to denote vectors, bold-face upper-case letters to denote matrices, and calligraphic upper-case letters to denote sets. To be specific, for a set X of n examples {x1, x2, . . . , xn}, let S denote the n × n similarity matrix, which is both symmetric and non-negative. In other words, Si,j = Sj,i and Si,j ≥ 0, where Si,j is the element of S in the ith row and the jth column (i, j = 1, . . . , n).
For any ranking function r(·), which returns the non-negative relevance score for each example in X with respect to an implicit or explicit query, our goal is to find a subset T of k examples which are relevant to the query and diversified among themselves. Here the positive integer k is the budget of the ranking list size, and the ranking function r(·) generates an n × 1 vector r, whose ith element is ri = r(xi). When we describe the objective function as well as the proposed optimization algorithm, it is convenient to introduce the following n × 1 reference vector q = S · r. Intuitively, its ith element qi measures the importance of xi. To be specific, if xi is similar to many examples (high Si,j for j = 1, 2, . . .) that are relevant to the query (high rj for j = 1, 2, . . .), it is more important than the examples whose neighbors are not relevant. For example, if xi is close to the center of a big cluster relevant to the query, the value of qi is large. 2.2 Objective Function With the above notation, our goal is to find a subset T of k examples which are both relevant to the query and diversified among themselves. To this end, we propose the following optimization problem.

$\arg\max_{|\mathcal{T}|=k}\ g(\mathcal{T}) = w \sum_{i \in \mathcal{T}} q_i r_i - \sum_{i,j \in \mathcal{T}} r_i S_{i,j} r_j$ (1)

where w is a positive regularization parameter that defines the trade-off between the two terms, and T consists of the indices of the k examples that will be returned in the ranking list. Intuitively, in the goodness function g(T), the first term measures the weighted overall relevance of T with respect to the query, where qi is the weight for xi. It favors relevant examples from big clusters. In other words, if two examples are equally relevant to the query, one from a big cluster and the other isolated, by using the weighted relevance we prefer the former. The second term measures the similarity among the examples within T. That is, it penalizes the selection of multiple relevant examples that are very similar to each other.
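The two terms of g(T) can be evaluated directly from S, r, and the chosen subset. A minimal Python sketch (the function name and plain-list representation are ours):

```python
def goodness(S, r, T, w=2.0):
    """Evaluate g(T) = w * sum_{i in T} q_i r_i - sum_{i,j in T} r_i S_ij r_j,
    with reference vector q = S r (cf. Equation (1))."""
    n = len(r)
    q = [sum(S[i][j] * r[j] for j in range(n)) for i in range(n)]
    relevance = w * sum(q[i] * r[i] for i in T)
    redundancy = sum(r[i] * S[i][j] * r[j] for i in T for j in T)
    return relevance - redundancy

# Toy instance: items 0 and 1 are near-duplicates; item 2 is independent.
S = [[1.0, 0.95, 0.0],
     [0.95, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
r = [1.0, 0.8, 0.5]
```

On this instance g({0}) = 2.52 and g({0, 2}) = 2.77 for w = 2: adding the dissimilar item 2 increases the goodness, a small concrete instance of the monotonicity property established later.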
By including this term in the objective function, we seek a set of examples which are relevant to the query, but also dissimilar to each other. For example, in the human-centric sensing [15, 17], due to the homophily in social networks, reports of two friends are likely correlated so that they are a lesser corroboration of events than reports of two socially unrelated witnesses. 2.3 The Hardness of Equation (1) In the optimization problem in Equation (1), we want to find a subset T of k examples that collectively maximize the goodness function g(T ). Unfortunately, by the following theorem, it is NP-hard to find the optimal solution. Theorem 2.1. The optimization problem in Equation (1) is NP-hard. Proof. We will prove this from the reduction of the Densest k-Subgraph (DkS) problem, which is known to be NP-hard [7]. To be specific, given an undirected graph G(V, E) with the connectivity matrix W , where V is the set of vertices, and E is the set of edges. W is a |V|×|V| symmetric matrix with elements being 0 or 1. Let |E| be the total number of the edges in the graph. The DkS problem is defined in Equation (2). Q = arg max |Q|=k X i,j∈Q W i,j (2) Define another |V| × |V| matrix ¯ W as: ¯ W i,j = 1 −W i,j. It is easy to see that P i,j∈Q W i,j = k2 −P i,j∈Q ¯ W i,j. Therefore, Equation (2) is equivalent to Q = arg min |Q|=k X i,j∈Q ¯ W i,j (3) Furthermore, notice that P|V| i,j=1 ¯ W i,j = |V|2 −|E| = constant. Let T = V \ Q, then Equation (3) is equivalent to arg max |Q|=k X i∈Q,j∈T ¯ W i,j + X i∈T ,j∈Q ¯ W i,j + X i∈T ,j∈T ¯ W i,j = arg max |T |=|V|−k 2 X i∈Q,j∈T ¯ W i,j + X i,j∈T ¯ W i,j (4) Next, we will show that Equation (4) can be viewed as an instance of the optimization problem in Equation (1) with the following setting: let the similarity function S be ¯ W , the ranking function r be 1|V|×1, the budget be |V| −k, and the regularization parameter w be 2. 
Under such settings, the objective function in Equation (1) becomes

$g(\mathcal{T}) = 2\sum_{i \in \mathcal{T}} q_i r_i - \sum_{i,j \in \mathcal{T}} r_i \bar{W}_{i,j} r_j = 2\sum_{i \in \mathcal{T}} \sum_{j=1}^{|\mathcal{V}|} r_i \bar{W}_{i,j} r_j - \sum_{i,j \in \mathcal{T}} r_i \bar{W}_{i,j} r_j$ (dfn. of $\mathbf{q}$)
$= 2\sum_{i \in \mathcal{Q}} \sum_{j \in \mathcal{T}} r_i \bar{W}_{i,j} r_j + \sum_{i,j \in \mathcal{T}} r_i \bar{W}_{i,j} r_j$ (symmetry of $\bar{W}$)
$= 2\sum_{i \in \mathcal{Q}} \sum_{j \in \mathcal{T}} \bar{W}_{i,j} + \sum_{i,j \in \mathcal{T}} \bar{W}_{i,j}$ (dfn. of $\mathbf{r}$) (5)

which is equivalent to the objective function in Equation (4). This completes the proof. □ 2.4 Diminishing Returns Property of g(T) Given that Equation (1) is NP-hard in general, we seek a provably near-optimal solution instead in the next section. Here, let us first answer the following question: under what condition (e.g., in which range of the regularization parameter w) is it possible to find such a near-optimal solution for Equation (1)? To this end, we present the so-called diminishing returns property of the goodness function g(T) defined in Equation (1), which is summarized in the following theorem. By Theorem 2.2, if we add more examples into an existing top-k ranking list, the goodness of the overall ranking list is non-decreasing (P2). However, the marginal benefit of adding additional examples into the ranking list decreases wrt the size of the existing ranking list (P1). Theorem 2.2. Diminishing Returns Property of g(T). The goodness function g(T) defined in Equation (1) has the following properties: (P1) submodularity: for any w > 0, the objective function g(T) is submodular wrt T; (P2) monotonicity: for any w ≥ 2, the objective function g(T) is monotonically non-decreasing wrt T. Proof. We first prove (P1). For any $\mathcal{T}_1 \subset \mathcal{T}_2$ and any given example $x \notin \mathcal{T}_2$, we have

$g(\mathcal{T}_1 \cup x) - g(\mathcal{T}_1) = \Big(w\sum_{i \in \mathcal{T}_1 \cup x} q_i r_i - \sum_{i,j \in \mathcal{T}_1 \cup x} r_i S_{i,j} r_j\Big) - \Big(w\sum_{i \in \mathcal{T}_1} q_i r_i - \sum_{i,j \in \mathcal{T}_1} r_i S_{i,j} r_j\Big)$
$= w q_x r_x - \Big(\sum_{i \in \mathcal{T}_1} r_i S_{i,x} r_x + \sum_{j \in \mathcal{T}_1} r_x S_{x,j} r_j + r_x S_{x,x} r_x\Big) = w q_x r_x - S_{x,x} r_x^2 - 2 r_x \sum_{j \in \mathcal{T}_1} S_{x,j} r_j$ (6)

Similarly, we have $g(\mathcal{T}_2 \cup x) - g(\mathcal{T}_2) = w q_x r_x - S_{x,x} r_x^2 - 2 r_x \sum_{j \in \mathcal{T}_2} S_{x,j} r_j$.
Therefore, we have

(g(T₁ ∪ x) − g(T₁)) − (g(T₂ ∪ x) − g(T₂)) = 2 r_x Σ_{j∈T₂} S_{x,j} r_j − 2 r_x Σ_{j∈T₁} S_{x,j} r_j = 2 r_x Σ_{j∈T₂\T₁} S_{x,j} r_j ≥ 0   (7)

which completes the proof of (P1). Next, we prove (P2). Given any T₁, T₂ with T₁ ∩ T₂ = Φ, where Φ is the empty set, and w ≥ 2, we have

g(T₂ ∪ T₁) − g(T₂) = w Σ_{i∈T₁} q_i r_i − (Σ_{i∈T₁,j∈T₂} r_i S_{i,j} r_j + Σ_{i∈T₂,j∈T₁} r_i S_{i,j} r_j + Σ_{i,j∈T₁} r_i S_{i,j} r_j)
 = w Σ_{i∈T₁} r_i Σ_{j=1}^{n} S_{i,j} r_j − (2 Σ_{i∈T₁,j∈T₂} r_i S_{i,j} r_j + Σ_{i,j∈T₁} r_i S_{i,j} r_j)
 ≥ 2 Σ_{i∈T₁} r_i Σ_{j=1}^{n} S_{i,j} r_j − 2 (Σ_{i∈T₁,j∈T₂} r_i S_{i,j} r_j + Σ_{i,j∈T₁} r_i S_{i,j} r_j)
 = 2 Σ_{i∈T₁} r_i (Σ_{j=1}^{n} S_{i,j} r_j − Σ_{j∈T₁∪T₂} S_{i,j} r_j)
 = 2 Σ_{i∈T₁} r_i Σ_{j∉T₁∪T₂} S_{i,j} r_j ≥ 0   (8)

which completes the proof of (P2). □

3 The Optimization Algorithm

In this section, we present our algorithm GenDeR for solving Equation (1) and analyze its performance with respect to near-optimality and complexity.

3.1 Algorithm Description

Based on the diminishing returns property of the goodness function g(T), we propose the following greedy algorithm to find a diversified top-k ranking list. In Alg. 1, after we calculate the reference vector q (Step 1) and initialize the ranking list T (Step 2), we expand the ranking list T one example at a time (Steps 4-8). At each iteration, we add the example with the highest score s_i to the current ranking list T (Step 5). Each time we expand the current ranking list, we update the score vector s based on the newly added example i (Step 7). Notice that in Alg. 1, '⊗' denotes element-wise multiplication, and diag(S) returns an n × 1 vector whose elements are the diagonal elements of the similarity matrix S.

Algorithm 1 GenDeR
Input: The similarity matrix S_{n×n}, the relevance vector r_{n×1}, the weight w ≥ 2, and the budget k;
Output: A subset T of k nodes.
1: Compute the reference vector q: q = Sr;
2: Initialize T as an empty set;
3: Initialize the score vector s = w × (q ⊗ r) − diag(S) ⊗ r ⊗ r;
4: for iter = 1 : k do
5:   Find i = argmax_j (s_j | j = 1, ..., n; j ∉ T);
6:   Add i to T;
7:   Update the score vector s ← s − 2 r_i S_{:,i} ⊗ r;
8: end for
9: Return the subset T as the ranking list (earlier selected examples ranked higher).

3.2 Algorithm Analysis

The accuracy of the proposed GenDeR is summarized in Lemma 3.1, which says that for a large volume of the parameter space (i.e., w ≥ 2), GenDeR leads to a (1 − 1/e) near-optimal solution.

Lemma 3.1. Near-Optimality of GenDeR. Let T be the subset found by GenDeR, |T| = k, and T* = argmax_{|T|=k} g(T). We have g(T) ≥ (1 − 1/e) g(T*), where e is the base of the natural logarithm.

Proof. The key of the proof is to verify that for any example x_j ∉ T, s_j = g(T ∪ x_j) − g(T), where s is the score vector we calculate in Step 3 or update in Step 7, and T is the initial empty ranking list or the current ranking list in Step 6. The remaining part of the proof follows directly from the diminishing returns property of the goodness function in Theorem 2.2, together with the fact that g(Φ) = 0 [12]. We omit the detailed proof for brevity. □

The complexity of the proposed GenDeR is summarized in Lemma 3.2. Notice that the quadratic term in the time complexity comes from the matrix-vector multiplication in Step 1 (i.e., q = Sr); and the quadratic term in the space complexity is the cost of storing the similarity matrix S. If the similarity matrix S is sparse, say with m non-zero elements, we can reduce the time complexity to O(m + nk) and the space complexity to O(m + n + k).

Lemma 3.2. Complexity of GenDeR. The time complexity of GenDeR is O(n² + nk); the space complexity of GenDeR is O(n² + k).

Proof. Omitted for brevity.
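For concreteness, Alg. 1 can be sketched in a few lines of NumPy. This is our own illustrative reading of the pseudocode, not the authors' implementation; the function name `gender` and its argument names are ours:

```python
import numpy as np

def gender(S, r, k, w=2.0):
    """Greedy diversified top-k ranking: an illustrative sketch of Alg. 1.

    S : (n, n) symmetric non-negative similarity matrix
    r : (n,)   non-negative relevance vector
    k : budget (size of the ranking list)
    w : regularization weight; w >= 2 gives the (1 - 1/e) guarantee
    """
    q = S @ r                               # Step 1: reference vector q = S r
    T = []                                  # Step 2: empty ranking list
    s = w * q * r - np.diag(S) * r * r      # Step 3: initial scores
    for _ in range(k):                      # Steps 4-8
        masked = s.copy()
        masked[T] = -np.inf                 # skip already-selected examples
        i = int(np.argmax(masked))          # Step 5: best marginal gain
        T.append(i)                         # Step 6
        s = s - 2.0 * r[i] * S[:, i] * r    # Step 7: penalize similarity to i
    return T                                # earlier selections rank higher
```

By Lemma 3.1, each score s_i equals the marginal gain g(T ∪ x_i) − g(T), so the loop is exactly the standard greedy algorithm for monotone submodular maximization.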
□

4 Experimental Results

We compare the proposed GenDeR with several recent diversified ranking algorithms, including DivRank based on reinforced random walks [11] (referred to as 'DR'), GCD via resistive graph centers [6] (referred to as 'GCD'), and manifold ranking with stop points [25] (referred to as 'MF'). As all these methods aim to improve the diversity of PageRank-type algorithms, we also present the results of PageRank [13] itself as the baseline. We use two real data sets: an IMDB actor professional network and an academic citation data set. In [11, 6], the authors provide detailed experimental comparisons with some earlier methods (e.g., [24, 23, 5]) on the same data sets. We omit the results of these methods for clarity.

4.1 Results on Actor Professional Network

The actor professional network is constructed from the Internet Movie Database (IMDB)¹, where the nodes are the actors/actresses and the edges are weighted by the number of co-starred movies between two actors/actresses. For the inputs of GenDeR, we use the adjacency matrix of the co-starred network as the similarity function S, and the ranking results of 'DR' as the relevance vector r. Given a top-k ranking list, we use the density of the subgraph of S induced by the k nodes as a reverse measure of diversity (lower density means higher diversity). We also measure the diversity of the ranking list by the so-called 'country coverage' as well as 'movie coverage' (higher coverage means higher diversity), which are defined in [24]. Notice that a good top-k diversified ranking list often requires a balance between diversity and relevance in order to fulfill the user's information need. Therefore, we also present the relevance score (measured by PageRank) captured by the entire top-k ranking list. In this application, such a relevance score measures the overall prestige of the actors/actresses in the ranking list.
Overall, we have 3,452 actors/actresses, 23,460 edges, 1,027 movies, and 47 countries. The results are presented in Fig. 1. First, let us compare GenDeR with the baseline method 'PageRank'. From Fig. 1(d), we can see that GenDeR is as good as 'PageRank' in terms of capturing the relevance of the entire top-k ranking list (notice that the two curves almost overlap). On the other hand, GenDeR outperforms 'PageRank' in terms of diversity by all three measures (Fig. 1(a-c)). Since GenDeR uses the ranking results of 'DR' as its input, 'DR' can be viewed as another baseline method. The two methods perform similarly in terms of density (Fig. 1(c)). On all the remaining measures, GenDeR is consistently better than 'DR'. For example, when k ≥ 300, GenDeR returns both higher 'country coverage' (Fig. 1(a)) and higher 'movie coverage' (Fig. 1(b)). Over the entire range of the budget k (Fig. 1(d)), GenDeR captures higher relevance scores than 'DR', indicating that the actors/actresses in our ranking list might be more prestigious than those selected by 'DR'. Based on these results, we conclude that GenDeR indeed improves on 'DR' in terms of both diversity and relevance. The most competitive method is 'MF'. GenDeR and 'MF' perform similarly in terms of both density (Fig. 1(c)) and 'movie coverage' (Fig. 1(b)). In terms of 'country coverage' (Fig. 1(a)), 'MF' performs slightly better than GenDeR when 300 ≤ k ≤ 400; for the other values of k, the two methods are close to each other. However, in terms of relevance (Fig. 1(d)), GenDeR is much better than 'MF'. Therefore, we conclude that 'MF' performs comparably with or slightly better than GenDeR in terms of diversity, at the cost of sacrificing the relevance of the entire ranking list. As for 'GCD', although it leads to the lowest density, it performs poorly in terms of balancing between diversity and relevance (Fig. 1(d)), as well as the coverage of countries/movies (Fig.
1(a-b)).

¹ http://www.imdb.com/
² http://www.aclweb.org/anthology-new/

[Figure 1: The evaluations on actor professional network, comparing PageRank, DR, MF, GCD, and GenDeR for k = 50 to 500. (a) Country Coverage (higher is better); (b) Movie Coverage (higher is better); (c) Density (lower is better); (d) Relevance (higher is better). (a-c) are different diversity measures and (d) measures the relevance of the entire ranking list.]

4.2 Results on Academic Citation Networks

This data set is from the ACL Anthology Network². It consists of a paper citation network and a researcher citation network. Here, the nodes are papers or researchers, and the edges indicate citation relationships. Overall, we have 11,609 papers and 54,208 edges in the paper citation network; 9,641 researchers and 229,719 edges in the researcher citation network. For the inputs of GenDeR, we use the symmetrized adjacency matrix as the similarity function S, and the ranking results of 'DR' as the relevance vector r. We use the same measure as in [11] (referred to as 'coverage'), which is the total number of unique papers/researchers that cite the top-k papers/researchers in the ranking list. As pointed out in [11], the 'coverage' might provide a better measure of the overall quality of the top-k ranking list than traditional measures (e.g., h-index), since the latter ignore the diversity of the ranking list. The results are presented in Fig. 2. We can see that the proposed GenDeR performs better than all the alternative choices.
For example, with k = 50, GenDeR improves the 'coverage' of the next best method by 416 and 157 on the two citation networks, respectively.

[Figure 2: The evaluations on academic citation networks, comparing PageRank, DR, MF, GCD, and GenDeR for k up to 100. (a) Paper Citation Network; (b) Researcher Citation Network. Higher is better.]

5 Related Work

Carbonell et al [5] are among the first to study diversified ranking in the context of text retrieval and summarization. To this end, they propose the Maximal Marginal Relevance (MMR) criterion, a linear combination of relevance and novelty, to reduce redundancy while maintaining query relevance. In [23], Zhai et al address this problem from a different perspective by explicitly modeling the subtopics associated with a query and proposing a framework to evaluate subtopic retrieval. More recently, researchers have leveraged external information sources to help with diversified ranking. For example, in [2], Agrawal et al maximize the probability that the average user finds at least one useful result within the top ranked results, with the help of a taxonomy available through the Open Directory Project (ODP); in [4], Capannini et al mine the query log to find specializations of a given query and use the search results of the specializations to help evaluate the set of top ranked documents; in [20], Welch et al model the expected number of hits based on the number of relevant documents a user will visit, user intent in terms of a probability distribution over subtopics, and document categorization, which are obtained from query logs, WordNet, or Wikipedia.
With the prevalence of graph data, such as social networks, author/paper citation networks, and actor professional networks, researchers have started to study the problem of diversified ranking in the presence of relationships among the examples. For instance, in [24], Zhu et al propose the GRASSHOPPER algorithm, which constructs random walks on the input graph and iteratively turns the ranked nodes into absorbing states. In [11], Mei et al propose the DivRank algorithm based on a reinforced random walk defined on the input graph, which automatically balances the prestige and the diversity among the top ranked nodes because adjacent nodes compete for their ranking scores. In [16], Tong et al propose a scalable algorithm to find a near-optimal solution that diversifies the top-k ranking list for PageRank. Due to the asymmetry in their formulation, it remains unclear whether the optimization problem in [16] is NP-hard. On a higher level, the method in [16] can be roughly viewed as an instantiation of our proposed formulation with specific choices in the optimization problem (e.g., the relevance function, the similarity function, the regularization parameter, etc.). In [25], Zhu et al leverage the stopping points in manifold ranking algorithms to diversify the results. All these works aim to diversify the results of one specific type of ranking function (i.e., PageRank and its variants). Learning to rank [10, 1, 3] and metric learning [19, 22, 9] have been two very active areas in recent years. Most of these methods require some additional information (e.g., labels, partial orderings, etc.) for training. They are often tailored for other purposes (e.g., improving the F-score in the ranking task, or improving the classification accuracy in metric learning) without consideration of diversity.
Nonetheless, thanks to the generality of our formulation, the learned ranking functions and metric functions from most of these works can be naturally admitted into our optimization objective function. In other words, our formulation makes it possible to take advantage of these existing research results in the diversified ranking setting.

Remarks. While generality is one of the major contributions of this paper, we do not disregard the value of domain-specific knowledge. The generality of our method is orthogonal to domain-specific knowledge. For example, such knowledge can be reflected in the (learnt) ranking function and/or the (learnt) similarity function, which can in turn serve as the input of our method.

6 Conclusion

In this paper, we study the problem of diversified ranking. The key feature of our formulation lies in its generality: it admits any non-negative relevance function and any non-negative, symmetric similarity function as input, and outputs a top-k ranking list that enjoys both relevance and diversity. Furthermore, we identify the regularization parameter space where our problem can be solved near-optimally; and we analyze the hardness of the problem, as well as the optimality and complexity of the proposed algorithm. Finally, we conduct experiments on several real data sets to demonstrate the effectiveness of this algorithm. Future work includes extending our formulation to the online, dynamic setting.

7 Acknowledgement

Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-09-2-0053. This work was in part supported by the National Science Foundation under grant numbers IIS-1054199 and CCF-1048168; and by DARPA under SMISC Program Agreement No. W911NF-12-C-0028.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory, the National Science Foundation, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

References
[1] A. Agarwal and S. Chakrabarti. Learning random walks to rank nodes in graphs. In ICML, pages 9–16, 2007.
[2] R. Agrawal, S. Gollapudi, A. Halverson, and S. Ieong. Diversifying search results. In WSDM, pages 5–14, 2009.
[3] C. J. C. Burges, K. M. Svore, P. N. Bennett, A. Pastusiak, and Q. Wu. Learning to rank using an ensemble of lambda-gradient models. Journal of Machine Learning Research - Proceedings Track, 14:25–35, 2011.
[4] G. Capannini, F. M. Nardini, R. Perego, and F. Silvestri. Efficient diversification of search results using query logs. In WWW (Companion Volume), pages 17–18, 2011.
[5] J. G. Carbonell and J. Goldstein. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In SIGIR, pages 335–336, 1998.
[6] A. Dubey, S. Chakrabarti, and C. Bhattacharyya. Diversity in ranking via resistive graph centers. In KDD, pages 78–86, 2011.
[7] U. Feige, G. Kortsarz, and D. Peleg. The dense k-subgraph problem. Algorithmica, 29, 1999.
[8] T. H. Haveliwala. Topic-sensitive PageRank: A context-sensitive ranking algorithm for web search. IEEE Trans. Knowl. Data Eng., 15(4):784–796, 2003.
[9] P. Jain, B. Kulis, and I. S. Dhillon. Inductive regularized learning of kernel functions. In NIPS, pages 946–954, 2010.
[10] T.-Y. Liu. Learning to rank for information retrieval. In SIGIR, page 904, 2010.
[11] Q. Mei, J. Guo, and D. R. Radev. DivRank: the interplay of prestige and diversity in information networks. In KDD, pages 1009–1018, 2010.
[12] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher.
An analysis of approximations for maximizing submodular set functions – I. Mathematical Programming, 14(1):265–294, 1978.
[13] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project, 1998. Paper SIDL-WP-1999-0120 (version of 11/11/1999).
[14] F. Radlinski, P. N. Bennett, B. Carterette, and T. Joachims. Redundancy, diversity and interdependent document relevance. SIGIR Forum, 43(2):46–52, 2009.
[15] M. Srivastava, T. Abdelzaher, and B. Szymanski. Human-centric sensing. Phil. Trans. R. Soc. A, 370(1958):176–197, 2012.
[16] H. Tong, J. He, Z. Wen, R. Konuru, and C.-Y. Lin. Diversified ranking on large graphs: an optimization viewpoint. In KDD, pages 1028–1036, 2011.
[17] M. Y. S. Uddin, M. T. A. Amin, H. Le, T. Abdelzaher, B. Szymanski, and T. Nguyen. On diversifying source selection in social sensing. In INSS, 2012.
[18] J. Ugander, L. Backstrom, C. Marlow, and J. Kleinberg. Structural diversity in social contagion. PNAS, 109(16):5962–5966, 2012.
[19] J. Wang, H. Do, A. Woznica, and A. Kalousis. Metric learning with multiple kernels. In NIPS, pages 1170–1178, 2011.
[20] M. J. Welch, J. Cho, and C. Olston. Search result diversity for informational queries. In WWW, pages 237–246, 2011.
[21] L. Wu. Social network effects on performance and layoffs: Evidence from the adoption of a social networking tool. Job Market Paper, 2011.
[22] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. J. Russell. Distance metric learning with application to clustering with side-information. In NIPS, pages 505–512, 2002.
[23] C. Zhai, W. W. Cohen, and J. D. Lafferty. Beyond independent relevance: methods and evaluation metrics for subtopic retrieval. In SIGIR, pages 10–17, 2003.
[24] X. Zhu, A. B. Goldberg, J. V. Gael, and D. Andrzejewski. Improving diversity in ranking using absorbing random walks. In HLT-NAACL, pages 97–104, 2007.
[25] X. Zhu, J. Guo, X. Cheng, P. Du, and H.
Shen. A unified framework for recommending diverse and relevant queries. In WWW, pages 37–46, 2011.
Accuracy at the Top

Stephen Boyd, Stanford University, Packard 264, Stanford, CA 94305, boyd@stanford.edu
Corinna Cortes, Google Research, 76 Ninth Avenue, New York, NY 10011, corinna@google.com
Mehryar Mohri, Courant Institute and Google, 251 Mercer Street, New York, NY 10012, mohri@cims.nyu.edu
Ana Radovanovic, Google Research, 76 Ninth Avenue, New York, NY 10011, anaradovanovic@google.com

Abstract

We introduce a new notion of classification accuracy based on the top τ-quantile values of a scoring function, a relevant criterion in a number of problems arising for search engines. We define an algorithm optimizing a convex surrogate of the corresponding loss, and discuss its solution in terms of a set of convex optimization problems. We also present margin-based guarantees for this algorithm based on the top τ-quantile value of the scores of the functions in the hypothesis set. Finally, we report the results of several experiments in the bipartite setting evaluating the performance of our solution and comparing the results to several other algorithms seeking high precision at the top. In most examples, our solution achieves a better performance in precision at the top.

1 Introduction

The accuracy of the items placed near the top is crucial for many information retrieval systems, such as search engines or recommendation systems, since most users of these systems browse or consider only the first k items. Different criteria have been introduced in the past to measure this quality, including the precision at k (Precision@k), the normalized discounted cumulative gain (NDCG) and other variants of DCG, or the mean reciprocal rank (MRR) when the rank of the most relevant document is critical. A somewhat different but also related criterion adopted by [1] is based on the position of the top irrelevant item. Several machine learning algorithms have recently been designed to optimize these criteria and other related ones [6, 12, 11, 21, 7, 14, 13].
A general algorithm inspired by the structured prediction technique SVMStruct [22] was incorporated in an algorithm by [15], which can be used to optimize a convex upper bound on the number of errors among the top k items. The algorithm seeks to solve a convex problem with exponentially many constraints via several rounds of optimization with a smaller number of constraints, augmenting the set of constraints at each round with the most violating one. Another algorithm, also based on structured prediction ideas, is proposed in an unpublished manuscript of [19] and covers several criteria, including Precision@k and NDCG. A regression-based solution is suggested by [10] for DCG in the case of large sample sizes. Some other methods have also been proposed to optimize a smooth version of a non-convex cost function in this context [8]. [1] discusses an optimization solution for an algorithm seeking to minimize the position of the top irrelevant item. However, one obvious shortcoming of all these algorithms is that the notion of top k does not generalize to new data. For what k should one train if the test data in some instances is half the size and in other cases twice the size? In fact, no generalization guarantee is available for such a Precision@k optimization algorithm. A more principled approach in all the applications already mentioned consists of designing algorithms that optimize accuracy in some top fraction of the scores returned by a real-valued hypothesis. This paper deals precisely with this problem. The desired objective is to learn a scoring function that is as accurate as possible on the items whose scores are above the top τ-quantile. To be more specific, when applied to a set of size n, the number of top items is k = τn for a τ-quantile, while for a different set of size n′ ≠ n, this would correspond to k′ = τn′ ≠ k.
The implementation of the Precision@k algorithm in [15] indirectly acknowledges the problem that the notion of top k does not generalize, since the command-line flag requires k to be specified as a fraction of the positive samples. Nevertheless, the formulation of the problem, as well as the solution, is still in terms of the top k items of the training set. A study of various statistical questions related to the problem of accuracy at the top is discussed by [9]. The authors also present generalization bounds for the specific case of empirical risk minimization (ERM) under some assumptions about the hypothesis set and the distribution. But, to our knowledge, no previous publication has given general learning guarantees for the problem of accuracy in the top-quantile scoring items or carefully addressed the corresponding algorithmic problem. We discuss the formulation of this problem (Section 3.1) and define an algorithm optimizing a convex surrogate of the corresponding loss in the case of linear scoring functions. We discuss the solution of this problem in terms of several simple convex optimization problems and show that these problems can be extended to the case where positive semi-definite kernels are used (Section 3.2). In Section 4, we present a Rademacher complexity analysis of the problem and give margin-based guarantees for our algorithm based on the τ-quantile value of the functions in the hypothesis set. In Section 5, we also report the results of several experiments evaluating the performance of our algorithm. In a comparison in a bipartite setting with several algorithms seeking high precision at the top, our algorithm achieves a better performance in precision at the top. We start with a presentation of notions and notation useful for the discussion in the following sections.

2 Preliminaries

Let X denote the input space and D a distribution over X × X. We interpret the presence of a pair (x, x′) in the support of D as the preference of x′ over x.
We denote by S = ((x₁, x′₁), ..., (x_m, x′_m)) ∈ (X × X)^m a labeled sample of size m drawn i.i.d. according to D and denote by D̂ the corresponding empirical distribution. D induces a marginal distribution over X that we denote by D′, which in the discrete case can be defined via D′(x) = ½ Σ_{x′∈X} (D(x, x′) + D(x′, x)). We also denote by D̂′ the empirical distribution associated to D′ based on the sample S. The learning problems we are studying are defined in terms of the top τ-quantile of the values taken by a function h: X → R, that is, a score q such that Pr_{x∼D′}[h(x) > q] = τ (see Figure 1(a)). In general, q is not unique and this equality may hold for all q in an interval [q_min, q_max]. We will be particularly interested in the properties of the set of points x whose scores are above a quantile, that is s_q = {x: h(x) > q}. Since for any (q, q′) ∈ [q_min, q_max]², s_q and s_{q′} differ only by a set of measure zero, the particular choice of q in that interval has no significant consequence. Thus, in what follows, when it is not unique, we will choose the quantile value to be the maximum, q_max. For any τ ∈ [0, 1], let ρ_τ denote the function defined by ∀u ∈ R, ρ_τ(u) = −τ(u)₋ + (1 − τ)(u)₊, where (u)₊ = max(u, 0) and (u)₋ = min(u, 0) (see Figure 1(b)). ρ_τ is convex as a sum of two convex functions, since u ↦ (u)₊ is convex and u ↦ (u)₋ concave. We will denote by argMin_u f(u) the largest minimizer of a function f.

[Figure 1: (a) Illustration of the τ-quantile: for τ ∈ [0, 1] and scores U = {u₁, ..., u_n} ⊆ R, the top τ fraction of the scores. (b) Graph of the function ρ_τ for τ = .25.]

It is known (see for example [17]) that the (maximum) τ-quantile value q̂ of a sample of real numbers X = (u₁, ..., u_n) ∈ R^n can be given by q̂ = argMin_{u∈R} F_τ(u), where F_τ is the convex function defined for all u ∈ R by F_τ(u) = (1/n) Σ_{i=1}^{n} ρ_τ(u_i − u).
3 Accuracy at the top (AATP)

3.1 Problem formulation and algorithm

The learning problem we consider is that of accuracy at the top (AATP), which consists of achieving an ordering of all items so that the items whose scores are among the top τ-quantile are as relevant as possible. Ideally, all preferred items are ranked above the quantile and non-preferred ones ranked below. Thus, the loss or generalization error of a hypothesis h: X → R with top τ-quantile value q_h is the average number of non-preferred elements that h ranks above q_h and preferred ones ranked below:

R(h) = ½ E_{(x,x′)∼D} [1_{h(x)>q_h} + 1_{h(x′)<q_h}].

q_h can be defined as follows in terms of the distribution D′: q_h = argMin_{u∈R} E_{x∼D′}[ρ_τ(h(x) − u)]. The quantile value q_h depends on the true distribution D. To define the empirical error of h for a sample S = ((x₁, x′₁), ..., (x_m, x′_m)) ∈ (X × X)^m, we will use instead an empirical estimate q̂_h of q_h: q̂_h = argMin_{u∈R} E_{x∼D̂′}[ρ_τ(h(x) − u)]. Thus, we define the empirical error of h for a labeled sample as follows:

R̂(h) = (1/2m) Σ_{i=1}^{m} [1_{h(x_i)>q̂_h} + 1_{h(x′_i)<q̂_h}].

We first assume that X is a subset of R^N for some N ≥ 1 and consider a hypothesis set H of linear functions h: x ↦ w · x. We will use a surrogate empirical loss taking into consideration how much the score w · x_i of a non-preferred item x_i exceeds q̂_h, and similarly how much lower the score w · x′_i of a preferred point x′_i is than q̂_h, and seek a solution w minimizing a trade-off of that surrogate loss and the squared norm ‖w‖². This leads to the following optimization problem for AATP:

min_w ½‖w‖² + C Σ_{i=1}^{m} [(w · x_i − q̂_w + 1)₊ + (q̂_w − w · x′_i + 1)₊]   (1)
subject to q̂_w = argMin_{u∈R} Q_τ(w, u),

where C ≥ 0 is a regularization parameter and Q_τ is the quantile function defined as follows for a sample S, for any w ∈ R^N and u ∈ R:

Q_τ(w, u) = (1/2m) Σ_{i=1}^{m} [ρ_τ((w · x_i) − u) + ρ_τ((w · x′_i) − u)].
In the following, we will assume that τ is a multiple of 1/(2m); otherwise it can be rounded to the nearest such value.

3.2 Analysis of the optimization problem

Problem (1) is not a convex optimization problem since, while the objective function is convex, the equality constraint is not affine. Here, we further analyze the problem and discuss a solution. The equality constraint could be written as an infinite number of inequalities, Q_τ(w, q̂_w) ≤ Q_τ(w, u) for all u ∈ R. Observe, however, that the quantile value q̂_w must coincide with the score of one of the training points x_k or x′_k, that is, w · x_k or w · x′_k. Thus, Problem (1) can be equivalently written with a finite number of constraints as follows:

min_w ½‖w‖² + C Σ_{i=1}^{m} [(w · x_i − q̂_w + 1)₊ + (q̂_w − w · x′_i + 1)₊]
subject to q̂_w ∈ {w · x_k, w · x′_k : k ∈ [1, m]},
∀k ∈ [1, m], Q_τ(w, q̂_w) ≤ Q_τ(w, w · x_k),
∀k ∈ [1, m], Q_τ(w, q̂_w) ≤ Q_τ(w, w · x′_k).

The inequality constraints do not correspond to non-positivity constraints on convex functions. Thus, the problem is not a standard convex optimization problem, but our analysis leads us to a simple approximate solution. For convenience, let (z₁, ..., z_{2m}) denote (x₁, ..., x_m, x′₁, ..., x′_m). Our method consists of solving the following convex quadratic programming (QP) problem for each value of k ∈ [1, 2m]:

min_w ½‖w‖² + C Σ_{i=1}^{m} [(w · x_i − q̂_w + 1)₊ + (q̂_w − w · x′_i + 1)₊]   (2)
subject to q̂_w = w · z_k.

Let w_k be the solution of Problem (2). For each k ∈ [1, 2m], we determine the τ-quantile value of the scores {w_k · z_i : i ∈ [1, 2m]}. This can be checked straightforwardly in time O(m log m) by sorting the scores. Then, the solution w* we return is the w_k for which w_k · z_k is closest to the τ-quantile value, choosing the one with the smallest objective function value in the presence of ties. The method for determining w* is thus based on the solution of 2m simple QPs.
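The final selection step just described can be sketched as follows, assuming the 2m candidate solutions w_k have already been computed by some QP solver for Problem (2); the paper does not fix a particular solver, and the function and variable names below are ours:

```python
import numpy as np

def select_aatp_solution(candidates, Z, tau):
    """Pick w* among the 2m candidate solutions of Problem (2).

    candidates : list of weight vectors; candidates[k] was solved under
                 the constraint q_w = w . z_k
    Z          : (2m, d) array stacking z_1, ..., z_2m
    tau        : target top-quantile fraction

    Returns the index k for which the anchor score w_k . z_k is closest
    to the tau-quantile of the scores {w_k . z_i : i in [1, 2m]}.
    """
    best_k, best_gap = 0, np.inf
    for k, w in enumerate(candidates):
        scores = Z @ w
        n_above = int(round(tau * len(scores)))
        # tau-quantile by sorting: largest score with n_above scores above it
        q = np.sort(scores)[len(scores) - n_above - 1]
        gap = abs(scores[k] - q)
        if gap < best_gap:
            best_k, best_gap = k, gap
    return best_k
```

Each iteration sorts 2m scores, so the check runs in O(m log m) per candidate, as stated above.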
Our solution naturally parallelizes, so that in a distributed computing environment the computational time for solving the problem can be reduced to roughly that of solving a single QP.

3.3 Kernelized formulation

For any i ∈ [1, 2m], let y_i = −1 if i ≤ m and y_i = +1 otherwise. Then, Problem (2) admits the following equivalent dual optimization problem, similar to that of SVMs:

max_α Σ_{i=1}^{2m} α_i − ½ Σ_{i,j=1}^{2m} α_i α_j y_i y_j (z_i − z_k) · (z_j − z_k)   (3)
subject to: ∀i ∈ [1, 2m], 0 ≤ α_i ≤ C,

which depends only on inner products between points of the training set. The vector w can be obtained from the solution via w = Σ_{i=1}^{2m} α_i y_i (z_i − z_k). The algorithm can therefore be generalized by using equivalently any positive semi-definite symmetric (PDS) kernel K: X × X → R instead of the inner product in the input space, thereby also extending it to the case of non-vectorial input spaces X. The corresponding hypothesis set H is that of linear functions h: x ↦ w · Φ(x), where Φ: X → ℍ is a feature mapping to a Hilbert space ℍ associated to K and w an element of ℍ. In view of (3), for any k ∈ [1, 2m], the dual problem of (2) can then be expressed as follows:

max_α Σ_{i=1}^{2m} α_i − ½ Σ_{i,j=1}^{2m} α_i α_j y_i y_j K_k(z_i, z_j)   (4)
subject to: ∀i ∈ [1, 2m], 0 ≤ α_i ≤ C,

where, for any k ∈ [1, 2m], K_k is the PDS kernel defined by K_k: (z, z′) ↦ K(z, z′) − K(z, z_k) − K(z_k, z′) + K(z_k, z_k). Our solution can therefore also be found in the dual by solving the 2m QPs defined by (4).

4 Theoretical guarantees

We here present margin-based generalization bounds for the AATP learning problem.
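Before turning to the guarantees, one practical note on the kernelized formulation above: the Gram matrix of K_k is simply the Gram matrix of K "centered" at z_k, so it can be formed directly from K without touching the feature map. The helper below is our own sketch (the name `centered_gram` is ours, not from the paper):

```python
import numpy as np

def centered_gram(K, k):
    """Gram matrix of K_k(z, z') = K(z,z') - K(z,z_k) - K(z_k,z') + K(z_k,z_k).

    K : (2m, 2m) Gram matrix of the original PDS kernel on z_1, ..., z_2m
    k : index of the anchor point z_k in Problem (2)
    """
    col = K[:, k][:, None]   # K(z_i, z_k) as a column vector
    row = K[k, :][None, :]   # K(z_k, z_j) as a row vector
    return K - col - row + K[k, k]
```

For the linear kernel K = ZZᵀ, this reproduces the matrix of inner products (z_i − z_k) · (z_j − z_k) appearing in the dual (3), and the result is again positive semi-definite.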
In particular, R(h, q_h) corresponds to the generalization error and R̂_ρ(h, q_h) to the empirical margin loss of a hypothesis h for AATP. For any t > 0, the empirical margin loss R̂_ρ(h, t) is upper bounded by the average of the fraction of non-preferred elements x_i that h ranks above t or less than ρ below t, and the fraction of preferred ones x′_i it ranks below t or less than ρ above t:

    R̂_ρ(h, t) ≤ (1/(2m)) Σ_{i=1}^{m} [1_{t−h(x_i)<ρ} + 1_{h(x′_i)−t<ρ}].    (5)

We denote by D₁ the marginal distribution of the first element of the pairs in X × X derived from D, and by D₂ the marginal distribution with respect to the second element. Similarly, S₁ is the sample derived from S by keeping only the first element of each pair, S₁ = (x_1, . . . , x_m), and S₂ the one obtained by keeping only the second element, S₂ = (x′_1, . . . , x′_m). We also denote by R_m^{D₁}(H) the Rademacher complexity of H with respect to the marginal distribution D₁, that is R_m^{D₁}(H) = E[R̂_{S₁}(H)], and similarly R_m^{D₂}(H) = E[R̂_{S₂}(H)].

Theorem 1 Let H be a set of real-valued functions taking values in [−M, +M] for some M > 0. Fix τ ∈ [0, 1] and ρ > 0. Then, for any δ > 0, with probability at least 1 − δ over the choice of a sample S of size m, each of the following inequalities holds for all h ∈ H and t ∈ [−M, +M]:

    R(h, t) ≤ R̂_ρ(h, t) + (1/ρ)(R_m^{D₁}(H) + R_m^{D₂}(H) + 2M/√m) + √(log(1/δ)/(2m))
    R(h, t) ≤ R̂_ρ(h, t) + (1/ρ)(R̂_{S₁}(H) + R̂_{S₂}(H) + 2M/√m) + 3√(log(2/δ)/(2m)).

Proof. Let H̃ be the family of hypotheses mapping X × X to ℝ defined by H̃ = {z = (x, x′) ↦ t − h(x) : h ∈ H, t ∈ [−M, +M]}, and similarly H̃′ = {z = (x, x′) ↦ h(x′) − t : h ∈ H, t ∈ [−M, +M]}. Consider the two families of functions H̄ and H̄′ taking values in [0, 1] defined by H̄ = {Φ_ρ ∘ f : f ∈ H̃} and H̄′ = {Φ_ρ ∘ f : f ∈ H̃′}.
By the general Rademacher complexity bounds for functions taking values in [0, 1] [18, 3, 20], with probability at least 1 − δ, for all h ∈ H,

    (1/2) E[Φ_ρ(t − h(x)) + Φ_ρ(h(x′) − t)] ≤ R̂_ρ(h, t) + 2R_m((1/2)(H̄ + H̄′)) + √(log(1/δ)/(2m))
                                            ≤ R̂_ρ(h, t) + R_m(H̄) + R_m(H̄′) + √(log(1/δ)/(2m)).

Since 1_{u<0} ≤ Φ_ρ(u) for all u ∈ ℝ, the generalization error R(h, t) is a lower bound on the left-hand side, R(h, t) ≤ (1/2) E[Φ_ρ(t − h(x)) + Φ_ρ(h(x′) − t)], and we obtain

    R(h, t) ≤ R̂_ρ(h, t) + R_m(H̄) + R_m(H̄′) + √(log(1/δ)/(2m)).

Since Φ_ρ is 1/ρ-Lipschitz, by Talagrand's contraction lemma, we have R_m(H̄) ≤ (1/ρ)R_m(H̃) and R_m(H̄′) ≤ (1/ρ)R_m(H̃′). By definition of the Rademacher complexity,

    R_m(H̃) = (1/m) E_{S∼D^m, σ}[ sup_{h∈H, t} Σ_{i=1}^{m} σ_i(t − h(x_i)) ]
            = (1/m) E_{S, σ}[ sup_t Σ_{i=1}^{m} σ_i t + sup_{h∈H} Σ_{i=1}^{m} −σ_i h(x_i) ]
            = (1/m) E_σ[ sup_{t∈[−M,+M]} t Σ_{i=1}^{m} σ_i ] + (1/m) E_{S, σ}[ sup_{h∈H} Σ_{i=1}^{m} −σ_i h(x_i) ].

Since the random variables σ_i and −σ_i follow the same distribution, the second term coincides with R_m^{D₁}(H). The first term can be rewritten and upper bounded as follows using Jensen's inequality:

    (1/m) E_σ[ sup_{−M≤t≤M} t Σ_{i=1}^{m} σ_i ]
      = (M/m) Σ_{σ: Σ_i σ_i > 0} Pr[σ] Σ_{i=1}^{m} σ_i − (M/m) Σ_{σ: Σ_i σ_i < 0} Pr[σ] Σ_{i=1}^{m} σ_i
      = (M/m) E_σ[ |Σ_{i=1}^{m} σ_i| ]
      ≤ (M/m) (E_σ[(Σ_{i=1}^{m} σ_i)²])^{1/2} = (M/m) (Σ_{i=1}^{m} E_σ[σ_i²])^{1/2} = M/√m.

Note that, by the Kahane–Khintchine inequality, the last upper bound used is tight modulo a constant (1/√2). Similarly, we can show that R_m(H̃′) ≤ R_m^{D₂}(H) + M/√m. This proves the first inequality of the theorem; the second inequality can be derived from the first one using the standard bound relating the empirical and true Rademacher complexity. □

Since the bounds of the theorem hold uniformly for all t ∈ [−M, +M], they hold in particular for any quantile value q_h.

Corollary 1 (Margin bounds for AATP) Let H be a set of real-valued functions taking values in [−M, +M] for some M > 0.
Fix τ ∈ [0, 1] and ρ > 0. Then, for any δ > 0, with probability at least 1 − δ over the choice of a sample S of size m, for all h ∈ H it holds that:

    R(h, q_h) ≤ R̂_ρ(h, q_h) + (1/ρ)(R_m^{D₁}(H) + R_m^{D₂}(H) + 2M/√m) + √(log(1/δ)/(2m))
    R(h, q_h) ≤ R̂_ρ(h, q_h) + (1/ρ)(R̂_{S₁}(H) + R̂_{S₂}(H) + 2M/√m) + 3√(log(2/δ)/(2m)).

A more explicit version of this corollary can be derived for kernel-based hypotheses (Appendix A). In the results of the previous theorem and corollary, the right-hand side of the generalization bounds is expressed in terms of the empirical margin loss with respect to the true quantile value q_h, which is upper bounded (see (5)) by half the fraction of non-preferred points in the sample whose score is above q_h − ρ and half the fraction of the preferred points whose score is less than q_h + ρ. These fractions are close to the same fractions with q_h replaced by q̂_h, since the probability that a score falls between q_h and q̂_h can be shown to be uniformly bounded by a term in O(1/√m).¹ Altogether, this analysis provides strong support for our algorithm, which precisely seeks to minimize the sum of an empirical margin loss based on the quantile and a term that depends on the complexity, as in the right-hand side of the learning guarantees above.

5 Experiments

This section reports the results of experiments with our AATP algorithm on several datasets. To measure the effectiveness of our algorithm, we compare it to two other algorithms, the INFINITEPUSH algorithm [1] and the SVMPERF algorithm [15], which are both algorithms seeking to emphasize the accuracy near the top. Our experiments are carried out using three data sets from the UC Irvine Machine Learning Repository http://archive.ics.uci.edu/ml/datasets.html: Ionosphere, Housing, and Spambase. (Results for Spambase can be found in Appendix C.) In addition, we use the TREC 2003 (LETOR 2.0) data set, which is available for download from the following Microsoft Research URL: http://research.microsoft.com/letor.
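Before moving to the experimental comparison, a quick numerical aside: the two-sided bound √(m/2) ≤ E|Σ_{i=1}^m σ_i| ≤ √m used in the proof of Theorem 1 (Jensen's inequality for the upper bound, Kahane–Khintchine for the lower) is easy to check by simulation. The sketch below is ours, with an arbitrary seed and sample sizes:

```python
import numpy as np

# Monte Carlo estimate of E|sum_i sigma_i| for Rademacher variables sigma_i,
# checked against sqrt(m/2) <= E|sum| <= sqrt(m) (here with M = 1).
rng = np.random.default_rng(0)
m, trials = 100, 20000
sigma = rng.choice([-1.0, 1.0], size=(trials, m))
estimate = np.abs(sigma.sum(axis=1)).mean()
assert np.sqrt(m / 2) <= estimate <= np.sqrt(m)
```

For large m the expectation is in fact ≈ √(2m/π), sitting comfortably between the two bounds.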
All the UC Irvine data sets we experiment with are for two-group classification problems. From these we construct bipartite ranking problems where a preference pair consists of one positive and one negative example. To explicitly indicate the dependency on the quantile, we denote by q_τ the value of the top τ-th quantile of the score distribution of a hypothesis. We will use N to denote the number of instances in a particular data set, as well as s_i, i = 1, . . . , N, to denote the particular score values. If n₊ denotes the number of positive examples in the data set and n₋ denotes the number of negative examples, then N = n₊ + n₋ and the number of preferences is m = n₊n₋.

¹Note that the Bahadur–Kiefer representation is known to provide a uniform convergence bound on the difference of the true and empirical quantiles when the distribution admits a density [2, 16], a stronger result than what is needed in our context.

Table 1: Ionosphere data: for each top quantile τ and each evaluation metric, the three rows correspond to AATP (top), SVMPERF (middle) and INFINITEPUSH (bottom). For the INFINITEPUSH algorithm we only report mean values over the folds.
    τ(%)  P@τ           AP            DCG@τ          NDCG@τ        Positives@top
    19    0.89 ± 0.04   0.86 ± 0.03   29.21 ± 0.10   0.92 ± 0.06   12.1 ± 12.5
          0.89 ± 0.06   0.83 ± 0.04   28.88 ± 1.37   0.89 ± 0.11   6.00 ± 11.1
          0.85          0.80          27.83          0.85          10.32
    14    0.91 ± 0.05   0.84 ± 0.03   28.15 ± 0.95   0.91 ± 0.07   13.31 ± 12.5
          0.82 ± 0.11   0.79 ± 0.04   27.02 ± 1.37   0.75 ± 0.16   4.10 ± 11.1
          0.87          0.80          27.91          0.87          11.51
    9.50  0.93 ± 0.06   0.84 ± 0.03   28.15 ± 0.95   0.91 ± 0.09   13.31 ± 12.49
          0.77 ± 0.18   0.79 ± 0.04   27.02 ± 1.35   0.70 ± 0.21   4.50 ± 10.9
          0.90          0.80          27.90          0.89          11.51
    5     0.91 ± 0.14   0.84 ± 0.03   28.15 ± 0.95   0.89 ± 0.15   13.31 ± 12.49
          0.66 ± 0.27   0.79 ± 0.04   27.02 ± 1.36   0.60 ± 0.30   4.60 ± 11.0
          0.86          0.81          27.90          0.87          11.59
    1     0.85 ± 0.24   0.84 ± 0.03   28.15 ± 0.95   0.88 ± 0.19   13.30 ± 12.53
          0.35 ± 0.41   0.79 ± 0.04   27.02 ± 1.36   0.34 ± 0.41   4.50 ± 11.0
          0.85          0.80          27.91          0.86          11.50

5.1 Implementation

We solved the convex optimization problems (2) using the CVX solver http://cvxr.com/. As already noted, the AATP problem can be solved efficiently using a distributed computing environment. The convex optimization problem of the INFINITEPUSH algorithm (see (3.9) of [1]) can also be solved using CVX. However, this optimization problem has as many variables as the product of the numbers of positively and negatively labeled instances (n₊n₋), which makes it prohibitive to solve for large data sets within a runtime of a few days. Thus, we experimented with the INFINITEPUSH algorithm only on the Ionosphere data set. Finally, for SVMPERF's training and score prediction we used the binary executables downloaded from the URL http://www.cs.cornell.edu/people/tj and used the SVMPERF settings that are the closest to our optimization formulation. Thus, we used the L1-norm for slack variables and allowed the constraint cache and the tolerance for the termination criterion to grow in order to control the algorithm's convergence, especially for larger values of the regularization constant.
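To make the bipartite setup of these experiments concrete: the preference pairs are simply all (negative, positive) combinations of the labeled instances, so m = n₋n₊ grows quickly with the data size, which is exactly why the INFINITEPUSH optimization above becomes prohibitive. A small illustrative sketch (ours, not the paper's code):

```python
import itertools

def preference_pairs(negatives, positives):
    # one preference pair per (non-preferred, preferred) combination,
    # giving m = n_minus * n_plus pairs from N = n_minus + n_plus instances
    return list(itertools.product(negatives, positives))

pairs = preference_pairs(["n1", "n2", "n3"], ["p1", "p2"])
```

For Ionosphere (n₊ = 225, n₋ = 126) this already yields m = 28,350 preferences from only 351 instances.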
5.2 Evaluation measures

To evaluate and compare the AATP, INFINITEPUSH, and SVMPERF algorithms, we used a number of standard metrics: Precision at the top (P@τ), Average Precision (AP), Number of positives at the absolute top (Positives@top), Discounted Cumulative Gain (DCG@τ), and Normalized Discounted Cumulative Gain (NDCG@τ). Definitions are included in Appendix B.

5.3 Ionosphere data

The data set's 351 instances represent radar signals collected from phased antennas, where 'good' signals (225 positively labeled instances) are those that reflect back toward the antennas and 'bad' signals (126 negatively labeled instances) are those that pass through the ionosphere. The data has 34 features. We split the data set into 10 independent sets of instances, say S₁, . . . , S₁₀. Then, we ran 10 experiments, where we used 3 consecutive sets for learning and the rest (7 sets) for testing. We evaluated and compared the algorithms for 5 different top quantiles τ ∈ {19, 14, 9.5, 5, 1} (%), which would correspond to the top 20, 15, 10, 5, 1 items, respectively. For each τ, the regularization parameter C was selected based on the average value of P@τ. The performance of AATP is significantly better than that of the other algorithms, particularly for the smallest top quantiles. The two main criteria on which to evaluate the AATP algorithm are Precision at the top (P@τ) and Number of positives at the top (Positives@top). For τ = 5%, the AATP algorithm obtains a stellar 91% accuracy with an average of 13.3 positive elements at the top (Table 1).

Table 2: Housing data: for each quantile value τ and each evaluation metric, there are two rows corresponding to AATP (top) and SVMPERF (bottom).
    τ(%)  P@τ           AP            DCG@τ         NDCG@τ        Positives@top
    6     0.14 ± 0.05   0.11 ± 0.03   4.64 ± 0.40   0.13 ± 0.08   0.20 ± 0.45
          0.13 ± 0.05   0.10 ± 0.02   4.81 ± 0.46   0.16 ± 0.09   0.21 ± 0.45
    5     0.17 ± 0.07   0.10 ± 0.03   4.69 ± 0.26   0.16 ± 0.07   0.00 ± 0.00
          0.12 ± 0.10   0.09 ± 0.03   4.76 ± 0.60   0.16 ± 0.14   0.20 ± 0.48
    4     0.19 ± 0.13   0.12 ± 0.03   4.83 ± 0.45   0.18 ± 0.15   0.00 ± 0.00
          0.14 ± 0.05   0.10 ± 0.02   4.66 ± 0.25   0.13 ± 0.07   0.00 ± 0.00
    3     0.20 ± 0.12   0.10 ± 0.03   4.70 ± 0.26   0.18 ± 0.11   0.00 ± 0.00
          0.17 ± 0.12   0.09 ± 0.02   4.65 ± 0.40   0.18 ± 0.13   0.00 ± 0.00
    2     0.23 ± 0.10   0.10 ± 0.03   4.69 ± 0.26   0.19 ± 0.11   0.00 ± 0.00
          0.25 ± 0.17   0.10 ± 0.03   4.89 ± 0.48   0.27 ± 0.16   0.20 ± 0.46
    1     0.20 ± 0.27   0.12 ± 0.03   4.80 ± 0.45   0.17 ± 0.23   0.00 ± 0.00
          0.30 ± 0.27   0.09 ± 0.02   4.74 ± 0.56   0.29 ± 0.27   0.20 ± 0.45

5.4 Housing data

The Boston Housing data set has 506 examples, 35 positive and 471 negative, described by 13 features. We used feature 4 as the binary target value. Two thirds of the data instances were randomly selected and used for training, and the rest for testing. We created 10 experimental folds analogously to the case of the Ionosphere data. The Housing data is very unbalanced, with less than 7% positive examples. For this dataset we obtain results very comparable to SVMPERF for the very top quantiles; see Table 2. Naturally, the standard deviations are large as a result of the low percentage of positive examples, so the results are not always significant. For higher top quantiles, e.g., top 4%, the AATP algorithm significantly outperforms SVMPERF, obtaining 19% accuracy at the top (P@τ). For the highest top quantiles the difference in performance between the two algorithms is not significant.

5.5 LETOR 2.0

This data set corresponds to a relatively hard ranking problem, with an average of only 1% relevant query-URL pairs per query. It consists of 5 folds. Our Matlab implementation (with CVX) of the algorithms prevented us from trying our approach on larger data sets.
Hence from each training fold we randomly selected 500 items for training. For testing, we selected 1000 items at random from the test fold. Here, we only report results for P@1%. SVMPERF obtained an accuracy of 1.5% ± 1.5%, while the AATP algorithm obtained an accuracy of 4.6% ± 2.4%. This significantly better result indicates the power of the proposed algorithm.

6 Conclusion

We presented a series of results for the problem of accuracy at the top quantile, including an AATP algorithm, a margin-based theoretical analysis in support of that algorithm, and a series of experiments with several data sets demonstrating the effectiveness of our algorithm. These results are of practical interest in applications where the accuracy among the top quantile is sought. The analysis of problems based on other loss functions depending on the top τ-quantile scores is also likely to benefit from the theoretical and algorithmic results we presented. The optimization algorithm we discussed is highly parallelizable, since it is based on solving 2m independent QPs. Our initial experiments reported here were carried out using Matlab with CVX, which prevented us from evaluating our approach on larger data sets, such as the full LETOR 2.0 data set. However, we have now designed a solution for very large m based on the ADMM (Alternating Direction Method of Multipliers) framework [4]. We have implemented that solution and will present and discuss it in future work.

References

[1] S. Agarwal. The infinite push: A new support vector ranking algorithm that directly optimizes accuracy at the absolute top of the list. In Proceedings of the SIAM International Conference on Data Mining, 2011.
[2] R. R. Bahadur. A note on quantiles in large samples. Annals of Mathematical Statistics, 37, 1966.
[3] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[4] S. Boyd, N. Parikh, E. Chu, B.
Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] J. S. Breese, D. Heckerman, and C. M. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In UAI '98: Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann, 1998.
[7] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 89–96, New York, NY, USA, 2005. ACM.
[8] C. J. C. Burges, R. Ragno, and Q. V. Le. Learning to rank with nonsmooth cost functions. In NIPS, pages 193–200, 2006.
[9] S. Clémençon and N. Vayatis. Ranking the best instances. Journal of Machine Learning Research, 8:2671–2699, 2007.
[10] D. Cossock and T. Zhang. Statistical analysis of Bayes optimal subset ranking. IEEE Transactions on Information Theory, 54(11):5140–5154, 2008.
[11] K. Crammer and Y. Singer. Pranking with ranking. In Neural Information Processing Systems (NIPS 2001). MIT Press, 2001.
[12] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4, December 2003.
[13] R. Herbrich, K. Obermayer, and T. Graepel. Advances in Large Margin Classifiers, chapter Large Margin Rank Boundaries for Ordinal Regression. MIT Press, 2000.
[14] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '02, pages 133–142, New York, NY, USA, 2002. ACM.
[15] T. Joachims. A support vector method for multivariate performance measures. In ICML, pages 377–384, 2005.
[16] J. Kiefer.
On Bahadur's representation of sample quantiles. Annals of Mathematical Statistics, 38, 1967.
[17] R. Koenker. Quantile Regression. Cambridge University Press, 2005.
[18] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30, 2002.
[19] Q. V. Le, A. Smola, O. Chapelle, and C. H. Teo. Optimization of ranking measures. Unpublished, 2009.
[20] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
[21] C. Rudin, C. Cortes, M. Mohri, and R. E. Schapire. Margin-based ranking meets boosting in the middle. In COLT, pages 63–78, 2005.
[22] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484, 2005.
Approximating Equilibria in Sequential Auctions with Incomplete Information and Multi-Unit Demand Amy Greenwald and Eric Sodomka Department of Computer Science Brown University Providence, RI 02912 {amy,sodomka}@cs.brown.edu Jiacui Li Department of Applied Math/Economics Brown University Providence, RI 02912 jiacui li@alumni.brown.edu Abstract In many large economic markets, goods are sold through sequential auctions. Examples include eBay, online ad auctions, wireless spectrum auctions, and the Dutch flower auctions. In this paper, we combine methods from game theory and decision theory to search for approximate equilibria in sequential auction domains, in which bidders do not know their opponents’ values for goods, bidders only partially observe the actions of their opponents’, and bidders demand multiple goods. We restrict attention to two-phased strategies: first predict (i.e., learn); second, optimize. We use best-reply dynamics [4] for prediction (i.e., to predict other bidders’ strategies), and then assuming fixed other-bidder strategies, we estimate and solve the ensuing Markov decision processes (MDP) [18] for optimization. We exploit auction properties to represent the MDP in a more compact state space, and we use Monte Carlo simulation to make estimating the MDP tractable. We show how equilibria found using our search procedure compare to known equilibria for simpler auction domains, and we approximate an equilibrium for a more complex auction domain where analytical solutions are unknown. 1 Introduction Decision-making entities, whether they are businesses, governments, or individuals, usually interact in game-theoretic environments, in which the final outcome is intimately tied to the actions taken by others in the environment. Auctions are examples of such game-theoretic environments with significant economic relevance. 
Internet advertising, of which a significant portion of transactions take place through online auctions, has had spending increase 24 percent from 2010 to 2011, globally becoming an $85 billion industry [16]. The FCC has conducted auctions for wireless spectrum since 1994, reaching sales of over $60 billion.1 Perishable commodities such as flowers are often sold via auction; the Dutch flower auctions had about $5.4 billion in sales in 2011.2 A game-theoretic equilibrium, in which each bidder best responds to the strategies of its opponents, can be used as a means of prescribing and predicting auction outcomes. Finding equilibria in auctions is potentially valuable to bidders, as they can use the resulting strategies as prescriptions that guide their decisions, and to auction designers, as they can use the resulting strategies as predictions for bidder behavior. While a rich literature exists on computing equilibria for relatively simple auction games [11], auction theory offers few analytical solutions for real-world auctions. Even existing computational methods for approximating equilibria quickly become intractable as the number of bidders and goods, and the complexity of preferences and decisions, increase. 1See http://wireless.fcc.gov/auctions/default.htm?job=auctions_all. 2See http://www.floraholland.com/en/. 1 In this paper, we combine methods from game theory and decision theory to approximate equilibria in sequential auction domains, in which bidders do not know their opponents’ values for goods, bidders partially observe the actions of their opponents’, and bidders demand multiple goods. Our method of searching for equilibria is motivated by the desire to reach strategies that real-world bidders might actually use. To this end, we consider strategies that consist of two parts: a prediction (i.e., learning) phase and an optimization phase. 
We use best-reply dynamics [4] for prediction (i.e., to predict other bidders' strategies), and then, assuming fixed other-bidder strategies, we estimate and solve a Markov decision process (MDP) [18] for optimization. We exploit auction properties to represent the MDPs in a more compact state space, and we use Monte Carlo simulation to make estimating the MDPs tractable.

2 Sequential Auctions

We focus on sequential sealed-bid auctions, with a single good being sold at each of K rounds. The number of bidders n and the order in which goods are sold are assumed to be common knowledge. During auction round k, each bidder i submits a private bid b_i^k ∈ B_i to the auctioneer. We let b^k = ⟨b_1^k, . . . , b_n^k⟩ denote the vector of bids submitted by all bidders at round k. The bidder who submits the highest bid wins and is assigned a cost based on a commonly known payment rule. At the end of round k, the auctioneer sends a private (or public) signal o_i^k ∈ O_i to each bidder i, which is a tuple specifying information about the auction outcome for round k, such as the winning bid, the bids of all agents, the winner identities, whether or not a particular agent won the good, or any combination thereof. Bidders only observe opponents' bids if those bids are announced by the auctioneer. Regardless, we assume that bidder i is told at least which set of goods she won in the kth round, w_i^k ∈ {∅, {k}}, and how much she paid, c_i^k ∈ ℝ. We let ψ(o^k | b^k) ∈ [0, 1] denote the probability that the auctioneer sends the bidders signals o^k = ⟨o_1^k, . . . , o_n^k⟩ given b^k, and we let ψ(o_i^k | b^k) express the probability that player i receives signal o_i^k, given b^k. An auction history at round k consists of past bids plus all information communicated by the auctioneer through round k − 1. Let h_i^k = ⟨(b_i^1, o_i^1), . . . , (b_i^{k−1}, o_i^{k−1})⟩ be a possible auction history at round k as observed by bidder i. Let H_i be the set of all possible auction histories for bidder i.
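The round mechanics just described can be made concrete with a small sketch (an illustration of ours, not the paper's code): given a vector of sealed bids, a first-price round awards the good to the highest bidder, charges her bid, and here reports only the winning price as the signal o_i to every bidder.

```python
def run_round(bids):
    """One sealed-bid first-price round.

    Returns, per bidder i, a tuple (won_i, c_i, o_i): whether bidder i won
    the good, her payment, and a signal revealing only the winning price.
    Ties are broken in favor of the lowest-index bidder.
    """
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = bids[winner]
    return [((i == winner), price if i == winner else 0.0, price)
            for i in range(len(bids))]
```

Other payment rules (e.g., second price) or richer signals (full bid vectors, winner identities) would change only the price and signal fields, matching the general ψ(o^k | b^k) formulation above.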
Each bidder i is endowed with a privately known type θ_i ∈ Θ_i, drawn from a commonly known distribution F, that determines bidder i's valuations for various bundles of goods. A (behavioral) strategy σ_i : Θ × H_i ↦ △B_i for bidder i specifies a distribution over bids for each possible type and auction history. The set Σ_i contains all possible strategies. At the end of the K auction rounds, bidder i's utility is based on the bundle of goods she won and the amount she paid for those goods. Let X ⊆ {1, . . . , K} be a possible bundle of goods, and let v(X; θ_i) denote a bidder's valuation for bundle X when its type is θ_i. No assumptions are made about the structure of this value function. A bidder's utility for type θ_i and history h_i^K after K auction rounds is simply that bidder's value for the bundle of goods it won minus its cost: u_i(θ_i, h_i^K) = v(∪_{k=1}^{K} w_i^k; θ_i) − Σ_{k=1}^{K} c_i^k. Given a sequential auction Γ (defined by all of the above), bidder i's objective is to choose a strategy that maximizes its expected utility. But this quantity depends on the actions of other bidders. A strategy profile ⃗σ = (σ_1, · · · , σ_n) = (σ_i, σ_−i) defines a strategy for each bidder. (Throughout the paper, subscript i refers to a bidder i while −i refers to all bidders except i.) Let U_i(⃗σ) = E_{θ_i, h_i^K | ⃗σ}[u_i(θ_i, h_i^K)] denote bidder i's expected utility given strategy profile ⃗σ.

Definition 1 (ϵ-Bayes-Nash Equilibrium (ϵ-BNE)). Given a sequential auction Γ, a strategy profile ⃗σ ∈ Σ is an ϵ-Bayes-Nash equilibrium if U_i(⃗σ) + ϵ ≥ U_i(σ′_i, σ_−i) for all i ∈ {1, . . . , n} and all σ′_i ∈ Σ_i.

In an ϵ-Bayes-Nash equilibrium, each bidder has to come within an additive factor ϵ of best responding to its opponent strategies. A Bayes-Nash equilibrium is an ϵ-Bayes-Nash equilibrium where ϵ = 0. In this paper, we explore techniques for finding ϵ-BNE in sequential auctions. We also explain how to experimentally estimate the so-called ϵ-factor of a strategy profile:

Definition 2 (ϵ-Factor).
Given a sequential auction Γ, the ϵ-factor of strategy profile ⃗σ for bidder i is ϵi(⃗σ) = maxσ′ i Ui(σ′ i, σ−i) −Ui(σi, σ−i). In words, the ϵ-factor measures bidder i’s loss in expected utility for not playing his part of ⃗σ when other bidders are playing their parts. 2 3 Theoretical Results As the number of rounds, bidders, possible types, or possible actions in a sequential auction increases, it quickly becomes intractable to find equilibria using existing computational methods. Such real-world intractability is one reason bidders often do not attempt to solve for equilibria, but rather optimize with respect to predictions about opponent behavior. Building on past work [2, 8], our first contribution is to fully represent the decision problem for a single bidder i in a sequential auction Γ as a Markov decision process (MDP). Definition 3 (Full-history MDP). A full-history MDP Mi(Γ, θi, T) represents the sequential auction Γ from bidder i’s perspective, assuming i’s type is θi, with states S = Hi, actions A = Bi, rewards R(s) = {ui(θi, hK i ) if s = hK i is a history of length K; 0 otherwise}, and transition function T. If bidder types are correlated, bidder i’s type informs its beliefs about opponents’ types and thus opponents’ predicted behavior. For notational and computational simplicity, we assume that bidder types are drawn independently, in which case there is one transition function T regardless of bidder i’s type. We also assume that bidders are symmetric, meaning their types are all drawn from the same distribution. When bidders are symmetric, we can restrict our attention to symmetric equilibria, where a single set of full-history MDPs, one per type, is solved on behalf of all bidders. Definition 4 (MDP Assessment). An MDP assessment (π, T) for a sequential auction Γ is a set of policies {πθi | θi ∈Θi}, one for each full-history MDP Mi(Γ, θi, T). We now explain where the transition function T comes from. 
At a high level, we define (symmetric) induced transition probabilities Induced(π) to be the transition probabilities that result from agent i using Bayesian updating to infer something about its opponents’ private information, and then reasoning about its opponents’ subsequent actions, assuming they all follow policy π. The following example provides some intuition for this process. Example 1. Consider a first-price sequential auction with two rounds, two bidders, two possible types (“H” and “L”) drawn independently from a uniform prior (i.e., p(H) = 0.5 and p(L) = 0.5), and two possible actions (“high” and “low”). Suppose Bidder 2 is playing the following simple strategy: if type H: bid “high” with probability .9, and bid “low” with probability .1; if type L: bid “high” with probability .1, and bid “low” with probability .9. At round k = 1, from the perspective of Bidder 1, the only uncertainty that exists is about Bidder 2’s type. Bidder 1’s beliefs about Bidder 2’s type is based solely on the type prior, resulting in beliefs that Bidder 2 will bid “high” and “low” each with equal probability. Suppose Bidder 1 bids “low” and loses to Bidder 2, who the auctioneer reports as having bid “high”. At round k = 2, Bidder 1 must update its posterior beliefs about Bidder 2 after observing the given outcome. This is done using Bayes’ rule to find that Bidder 2 is of type “H” with probability 0.9. Based on its policy, in the subsequent round, the probability Bidder 2 bids “high” is 0.9(0.9) + 0.1(0.1) = 0.82, and the probability it bids “low” is 0.9(0.1) + 0.1(0.9) = 0.18. Given this bid distribution for Bidder 2, Bidder 1 can compute her probability of transitioning to various future states for each possible bid. More formally, denoting sk i and ak i as agent i’s state and action at auction round k, respectively, define Pr(sk+1 i | sk i , ak i ) to be the probability of reaching state sk+1 i given that action ak i was taken in state sk i . 
By twice applying the law of total probability and then noting conditional independencies,

    Pr(s_i^{k+1} | s_i^k, a_i^k)
      = Σ_{a_{−i}^k} Pr(s_i^{k+1} | s_i^k, a_i^k, a_{−i}^k) Pr(a_{−i}^k | s_i^k, a_i^k)
      = Σ_{θ_{−i}} Σ_{s_{−i}^k} Σ_{a_{−i}^k} Pr(s_i^{k+1} | s_i^k, a_i^k, a_{−i}^k, s_{−i}^k, θ_{−i}) Pr(a_{−i}^k | s_i^k, a_i^k, s_{−i}^k, θ_{−i}) Pr(s_{−i}^k, θ_{−i} | s_i^k, a_i^k)
      = Σ_{θ_{−i}} Σ_{s_{−i}^k} Σ_{a_{−i}^k} Pr(s_i^{k+1} | s_i^k, a_i^k, a_{−i}^k) Pr(a_{−i}^k | s_{−i}^k, θ_{−i}) Pr(s_{−i}^k, θ_{−i} | s_i^k, a_i^k)    (1)

The first term in Equation 1 is defined by the auction rules and depends only on the actions taken at round k: Pr(s_i^{k+1} | s_i^k, a_i^k, a_{−i}^k) = ψ(o_i^k | a^k). The second term is a joint distribution over opponents' actions given opponents' private information. Each agent's action at round k is conditionally independent given that agent's state at round k: Pr(a_{−i}^k | s_{−i}^k, θ_{−i}) = Π_{j≠i} Pr(a_j^k | s_j^k, θ_j) = Π_{j≠i} π_{θ_j}(a_j^k | s_j^k). The third term is the joint distribution over opponents' private information, given agent i's observations. This term can be computed using Bayesian updating. We compute the induced transition probabilities Induced(π)(s_i^k, a_i^k, s_i^{k+1}) using Equation 1.

Definition 5 (δ-Stable MDP Assessment). An MDP assessment (π, T) for a sequential auction Γ is called δ-stable if d(T, Induced(π)) < δ, for some symmetric distance function d.

When δ = 0, the induced transition probabilities exactly equal the transition probabilities from the MDP assessment (π, T), meaning that if all agents follow (π, T), the transition function T is correct. Define U_i(π, T) ≡ E_{θ_i, h_i^K | π, T}[u_i(θ_i, h_i^K)] to be the expected utility for following an MDP assessment's policy π when the transition function is T. (We abbreviate U_i by U because of symmetry.)

Definition 6 (α-Optimal MDP Assessment). An MDP assessment (π, T) for a sequential auction Γ is called α-optimal if for all policies π′, U(π, T) + α ≥ U(π′, T).
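The Bayesian-updating step of Example 1, which feeds the third term of Equation 1, can be verified numerically; the snippet below just reproduces the numbers quoted in the example:

```python
# Example 1: uniform type prior, P(bid "high" | H) = 0.9, P(bid "high" | L) = 0.1.
prior_h = 0.5
p_high = {"H": 0.9, "L": 0.1}

# Posterior that Bidder 2 is type H after Bidder 1 observes a "high" bid
evidence = p_high["H"] * prior_h + p_high["L"] * (1 - prior_h)
post_h = p_high["H"] * prior_h / evidence
assert abs(post_h - 0.9) < 1e-12

# Predictive probability that Bidder 2 bids "high" in the next round
p_next_high = post_h * p_high["H"] + (1 - post_h) * p_high["L"]
assert abs(p_next_high - 0.82) < 1e-12
```

The same two steps (condition on the observed outcome, then push the posterior through the opponents' policy) are what Equation 1 carries out in general, over all opponent types and states.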
If each agent is playing a 0-optimal (i.e., optimal) 0-stable (i.e., stable) MDP assessment for the sequential auction Γ, each agent is best responding to its beliefs, and each agent's beliefs are correct. It follows that any optimal stable MDP assessment for the sequential auction Γ corresponds to a symmetric Bayes-Nash equilibrium for Γ. Corollary 2 (below) generalizes this observation to approximate equilibria.³ Suppose we have a black box that tells us the difference in perceived versus actual expected utility for optimizing with respect to the wrong beliefs, i.e., the wrong transition function. More precisely, if we were to give the black box two transition functions T and T′ that differ by at most δ (i.e., d(T, T′) < δ), the black box would return max_π |U(π, T) − U(π, T′)| ≡ D(δ).

Theorem 1. Given such a black box, if (π, T) is an α-optimal δ-stable MDP assessment for the sequential auction Γ, then π is a symmetric ϵ-Bayes-Nash equilibrium for Γ, where ϵ = 2D(δ) + α.

Proof. Let T_π = Induced(π), and let π* be such that (π*, T_π) is an optimal MDP assessment. Then

    U(π, T_π) ≥ U(π, T) − D(δ)             (2)
              ≥ U(π*, T) − (α + D(δ))      (3)
              ≥ U(π*, T_π) − (α + 2D(δ)).  (4)

Lines 2 and 4 hold because (π, T) is δ-stable. Line 3 holds because (π, T) is α-optimal.

Corollary 2. If (π, T) is an α-optimal δ-stable MDP assessment for the sequential auction Γ, then π is a symmetric ϵ-Bayes-Nash equilibrium for Γ, where ϵ = 2δK + α.

In particular, when the distance between other-agent bid predictions and the actual other-agent bids induced by the actual other-agent policies is less than δ, optimizing agents play a 2δK-BNE. This corollary follows from the simulation lemma in Kakade et al. [9], which provides us with a black box.⁴ In particular, if an MDP assessment (π, T) is δ-stable, then |U(π, T) − U(π, Induced(π))| ≤ δK, where d(T, T′) = Σ_{s_i^{k+1}} |T(s_i^k, a_i^k, s_i^{k+1}) − T′(s_i^k, a_i^k, s_i^{k+1})| and K is the MDP's horizon. Wellman et al.
[24] show that, for simultaneous one-shot auctions, optimizing with respect to predictions about other-agent bids is an ϵ-Bayes-Nash equilibrium, where ϵ depends on the distance between other-agent bid predictions and the actual other-agent bids induced by the actual other-agent strategies. Corollary 2 is an extension of that result to sequential auctions.

4 Searching for an ϵ-BNE

We now know that an optimal, stable MDP assessment is a BNE, and moreover, a near-optimal, near-stable MDP assessment is nearly a BNE. Hence, we propose to search for approximate BNE by searching the space of MDP assessments for any that are nearly optimal and nearly stable.

3Note that this result also generalizes to non-symmetric equilibria: we would calculate a vector of induced transition probabilities (one per bidder), given a vector of MDP assessments (one per bidder), instead of assuming that each bidder abides by the same assessment. Similarly, stability would need to be defined in terms of a vector of MDP assessments. We present our theoretical results in terms of symmetric equilibria for notational simplicity, and because we search for symmetric equilibria in Section 5.
4Slightly adjusted since there is error only in the transition probabilities, not in the rewards.

Our search uses an iterative two-step learning process. We first find a set of optimal policies \pi with respect to some transition function T (i.e., \pi = Solve MDP(T)) using dynamic programming, as described by Bellman's equations [1]. We then update the transition function T to reflect what would happen if all agents followed the new policies \pi (i.e., T^* = Induced(\pi)). More precisely,

1. Initiate the search from an arbitrary MDP assessment (\pi^0, T^0)
2. Initialize t = 1 and ϵ = ∞
3. While (t < τ) and (ϵ > κ)
   (a) PREDICT: T^t = Induced(\pi^{t-1})
   (b) OPTIMIZE: for all types \theta_i, \pi^t = Solve MDP(\theta_i, T^t)
   (c) Calculate ϵ \equiv ϵ(\pi^τ) (defined below)
   (d) Increment t
4.
Return MDP assessment (\pi^τ, T^τ) and ϵ

This learning process is not guaranteed to converge, so upon termination, it could return an optimal, δ-stable MDP assessment for some very large δ. However, it has been shown to be successful experimentally in simultaneous auction games [24] and other large games of imperfect information [7].

Monte Carlo Simulations  Recall how we define induced transition functions (Equation 1). In practice, the Bayesian updating involved in this calculation is intractable. Instead, we employ Monte Carlo simulations. First, we further simplify Equation 1 using the law of total probability and noting conditional independencies (Equation 5). Second, we exploit some special structure of sequential auctions: if nothing but the winning price at each round is revealed, conditional on reaching state s_i^k, the posterior distribution over highest opponent bids is sufficient for computing the probability of that round's outcome (Equation 6).5 Third, we simulate N auction trajectories for the given policy \pi and multiple draws from the agent's type distribution, and count the number of times each highest opponent bid occurs at each state (Equation 7):

Induced(\pi)(s_i^k, a_i^k, s_i^{k+1}) = \Pr(s_i^{k+1} \mid s_i^k, a_i^k, \max a_{-i}^k) \Pr(\max a_{-i}^k \mid s_i^k, a_i^k)   (5)
= \Pr(s_i^{k+1} \mid s_i^k, a_i^k, \max a_{-i}^k) \Pr(\max a_{-i}^k \mid s_i^k)   (6)

Induced_N(\pi)(s_i^k, a_i^k, s_i^{k+1}) = \psi(o_i^k \mid \max(a_{-i}^k), a_i^k) \frac{\#(\max(a_{-i}^k), s_i^k)}{\#(s_i^k)}   (7)

Solving the MDP  As previously stated, we solve the MDPs exactly using dynamic programming, but we can only do so because we exploit the structure of auctions to reduce the number of states in each MDP. Recall that we assume symmetry: i.e., all bidders' types are drawn from the same distribution. Under this assumption, when the auctioneer announces that Bidder j has won an auction for the first time, this provides the same information as if a different Bidder k won an auction for the first time.
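The predict/optimize loop enumerated above can be sketched generically. The callables `induced`, `solve_mdp`, and `epsilon_of` are stand-in names for domain-specific components, assumptions for illustration, not the paper's implementation:

```python
def search_mdp_assessment(pi0, T0, induced, solve_mdp, epsilon_of,
                          tau=100, kappa=1e-4):
    """Iterative two-step search: PREDICT the transition function induced
    by the current policy, then OPTIMIZE a dynamic-programming best
    response, until the estimated epsilon-factor drops below kappa or tau
    iterations elapse."""
    pi, T = pi0, T0
    eps = float('inf')
    t = 1
    while t < tau and eps > kappa:
        T = induced(pi)        # PREDICT: T^t = Induced(pi^{t-1})
        pi = solve_mdp(T)      # OPTIMIZE: best response to T^t
        eps = epsilon_of(pi)   # estimated epsilon-factor
        t += 1
    return pi, T, eps
```

On a toy contraction (each "best response" halves the previous policy value) the loop terminates once the ϵ estimate falls below κ, mirroring the stopping rule in step 3.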
We thus collapse these two outcomes into the same state. This can greatly decrease the MDP state space, particularly if the number of players n is larger than the number of auctions K, as is often the case in competitive markets. In fact, by handling this symmetry, the MDP state space is the same for any number of players n \geq K.6 Second, we exploit the property of losing bid symmetry: if a bidder i loses with a bid of b or a bid of b', its beliefs about its opponents' bids are unchanged, and thus it receives the same reward for placing the same bid at either resulting state.

5A distribution over the next round's highest opponent bid is only sufficient without the possibility of ties. If ties can occur, a distribution over the number of opponents placing that highest bid is also needed. In our experiments, we do not maintain such a distribution; if there is a tie, the agent in question wins with probability 0.5 (i.e., we assume it tied with only one opponent).
6Even when n < K, the state space can still be significantly reduced, since instead of n different possible winner identities in the kth round, there are only min(n, k + 1). In the extreme case of n = 2, there is no winner identity symmetry to exploit, since n = k + 1 even in the first round.

ϵ-factor Approximation  Define U_i(\vec\pi) = \mathbb{E}_{\theta_i, h_i^K \mid \vec\pi}[u_i(\theta_i, h_i^K)] to be bidder i's expected utility when each agent plays its part in the vector of MDP assessment policies \vec\pi. Following Definition 2, the ϵ-factor measures bidder i's loss in expected utility for not playing his part of \vec\pi when other bidders are playing their parts: ϵ_i(\vec\pi) = \max_{\pi_i'} U_i(\pi_i', \pi_{-i}) - U_i(\pi_i, \pi_{-i}). In fact, since we are only interested in finding symmetric equilibria, where \vec\pi = (\pi, \ldots, \pi), we calculate ϵ(\pi) = \max_{\pi'} U(\pi', \vec\pi_{-i}) - U(\pi, \vec\pi_{-i}). The first term in this definition is the expected utility of the best response, \pi^*, to \vec\pi_{-i}.
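Returning to the Monte Carlo estimate of Equation 7, the frequency counts \#(\max(a_{-i}), s) and \#(s) can be accumulated directly from simulated trajectories. A minimal sketch, assuming a hypothetical trajectory format of (state, highest opponent bid) pairs; the data structure is an illustration, not the paper's code:

```python
from collections import Counter, defaultdict

def estimate_highest_bid_dist(trajectories):
    """Estimate Pr(max opponent bid | state) by counting, as in Eq. (7).
    Each trajectory is a list of (state, max_opponent_bid) pairs."""
    counts = defaultdict(Counter)  # #(max(a_-i), s): bid counts per state
    visits = Counter()             # #(s): total visits per state
    for traj in trajectories:
        for state, max_bid in traj:
            counts[state][max_bid] += 1
            visits[state] += 1
    # Normalize counts into empirical conditional distributions.
    return {s: {b: c / visits[s] for b, c in bids.items()}
            for s, bids in counts.items()}
```

Multiplying these empirical probabilities by the outcome rule ψ yields Induced_N(π) as in Equation 7.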
This quantity typically cannot be computed exactly, so instead, we compute a near-best response \hat\pi_N^* = Solve MDP(Induced_N(\pi)), which is optimal with respect to Induced_N(\pi) \approx Induced(\pi), and then measure the gain in expected utility of deviating from \pi to \hat\pi_N^*. Further, we approximate expected utility through Monte Carlo simulation. Specifically, we compute \hat U_L(\vec\pi) = \frac{1}{L} \sum_{l=1}^{L} u(\theta^l, h^l) by sampling \vec\theta and simulating (\pi_\theta, \ldots, \pi_\theta) L times, and then averaging bidder i's resulting utilities. Thus, we approximate ϵ(\pi) by \hatϵ(\pi) \approx \hat U_L(\hat\pi_N^*, \vec\pi_{-i}) - \hat U_L(\pi, \vec\pi_{-i}).

The approximation error in \hatϵ(\pi) comes from both imprecision in Induced_N(\pi), which depends on the sample size N, and imprecision in the expected utility calculation, which depends on the sample size L. The latter is O(1/\sqrt{L}) by the central limit theorem, and can be made arbitrarily small. (In our experiments, we plot the confidence bounds of this error to make sure it is indeed small.) The former arises because \hat\pi_N^* is not truly optimal with respect to Induced(\pi), and goes to zero as N goes to infinity by standard reinforcement learning results [20]. In practice we make sure that N is large enough so that this error is negligible.

5 Experimental Results

This section presents the results of running our iterative learning method on three auction models studied in the economics literature: Katzman [10], Weber [23], and Menezes and Monteiro [14]. These models are all two-round, second-price, sequential auctions7, with continuous valuation spaces; they differ only in their specific choice of valuations. The authors analytically derive a symmetric pure strategy equilibrium for each model, which we attempt to re-discover using our iterative method. After discretizing the valuation space, our method is sufficiently general to apply immediately in all three settings. Although these particular sequential auctions are all second price, our method applies to sequential auctions with other rules as well.
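The Monte Carlo estimate \hatϵ and its normal-approximation confidence bound can be sketched as follows. The simulated utilities here are drawn from hypothetical distributions chosen purely for illustration (a 0.001 gain from deviating), not from the auction models:

```python
import math
import random

def epsilon_hat_with_ci(u_best, u_pi, z=2.576):
    """eps-hat = mean(u_best) - mean(u_pi) over L paired simulations,
    with a 99% CI from the CLT (z = 2.576); error shrinks as O(1/sqrt(L))."""
    L = len(u_best)
    diffs = [a - b for a, b in zip(u_best, u_pi)]
    mean = sum(diffs) / L
    var = sum((d - mean) ** 2 for d in diffs) / (L - 1)  # sample variance
    half = z * math.sqrt(var / L)                        # CI half-width
    return mean, (mean - half, mean + half)

random.seed(0)
# Hypothetical utilities: deviating to the near-best response gains ~0.001.
u_pi = [random.gauss(1.0, 0.05) for _ in range(10000)]
u_best = [u + random.gauss(0.001, 0.01) for u in u_pi]
eps_hat, (lo, hi) = epsilon_hat_with_ci(u_best, u_pi)
```

With L = 10,000 paired simulations the confidence interval is already much narrower than the deviation gain being measured.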
We picked this format because of the abundance of corresponding theoretical results and the simplicity of exposition in two-round auctions. It is a dominant strategy to bid truthfully in a one-shot second-price auction [22]; hence, when comparing policies in two-round second-price auctions it suffices to compare first-round policies only.

Static Experiments  We first run one iteration of our learning procedure to check whether the derived equilibria are strict. In other words, we check whether Solve MDP(Induced_N(\pi^E)) = \pi^E, where \pi^E is a (discretized) derived equilibrium strategy. For each of the three models, Figures 1(a)–1(c) compare first-round bidding functions of the former (blue) with the latter (green). Our results indicate that the equilibria derived by Weber and Katzman are indeed strict, while that by Menezes and Monteiro (MM) is not, since there exists a set of best-responses to the equilibrium strategy, not a unique best-response. We confirm analytically that the set of bids output by our learning procedure are best-responses to the theoretical equilibrium, with the upper bound being the known theoretical equilibrium strategy and the lower bound being the black dotted line.8 To our knowledge, this instability was previously unknown.

Dynamic Experiments  Since MM's theoretical equilibrium is not strict, we apply our iterative learning procedure to search for more stable approximate equilibria. Our procedure converges within a small number of iterations to an ϵ-BNE with a small ϵ-factor, and the convergence is robust across different initializations. We chose initial strategies \pi^0 parametrized by p \in \mathbb{R}^+ that bid x^p when the marginal value of winning an additional good is x. By varying the exponent p, we initialize the learning procedure with bidding strategies whose level of aggressiveness varies.

7Weber's model can be extended to any number of rounds, but is unit, not multi-unit, demand.
8These analytical derivations are included in supplemental material.
[Figure 1: Comparison of first-round bidding functions (Valuation vs. Bid) of theoretical equilibrium strategies (green) and that of the best response from one step of the iterative learning procedure initialized with those equilibrium strategies (blue). (a) Weber, 4 agents. (b) Katzman, 2 agents. (c) MM (Menezes), 3 agents.]

[Figure 2: Convergence properties of the learning procedure in the two-round MM model with 3 agents. (a), (b) evaluate convergence through L1 distance of first-round bidding functions, for initializations p = 0.5, 1.0, 2.0; (c) compares the learned best response (blue) with different learning procedure initializations (green), after 20 iterations; (d) plots the evolution of the estimated ϵ-factor for learning dynamics with one specific initialization (p = 1.0); plots for other initializations look very similar. The bracketed values in the legend ([−2e−05, 7e−05]) give the 99% confidence bound for the ϵ-factor in the final iteration, which is estimated using more sample points (N = L = 10^9) than previous iterations (N = L = 10^6).]

Our iterative learning procedure is not guaranteed to converge. Nonetheless, in this experiment, our procedure not only converges with different initialization parameters p (Figure 2(a)), but also converges to the same solution regardless of initial conditions (Figure 2(b)). The distance measure d(\pi, \pi') between two strategies \pi, \pi' in these figures is defined as the L1 distance of their respective first-round bidding functions.
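The distance d(\pi, \pi') can be approximated on a discretized valuation grid. A small sketch comparing two hypothetical power-law bidding functions x^p (the grid and functions are illustrative, not the paper's discretization):

```python
def l1_distance(bid_a, bid_b, grid):
    """Approximate the L1 distance between two first-round bidding
    functions by averaging |bid_a(v) - bid_b(v)| over a valuation grid."""
    return sum(abs(bid_a(v) - bid_b(v)) for v in grid) / len(grid)

# Valuations discretized to a uniform grid on [0, 1].
grid = [i / 100 for i in range(101)]
# Hypothetical initializations bidding x**p for p = 1 and p = 2.
d = l1_distance(lambda x: x, lambda x: x ** 2, grid)
```

Identical strategies have distance zero, so a vanishing d(\pi^t, \pi^{t+1}) across iterations is the convergence signal plotted in Figure 2(a).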
Furthermore, the more economically meaningful measure of ϵ(\pi), measured by \hatϵ(\pi), converges quickly to a negligible factor smaller than 1 × 10^{-4}, which is less than 0.01% of the expected bidder profit (Figure 2(d)).

All existing theoretical work on Bayesian sequential auctions with multi-unit demand is confined to two-round cases due to the increased complexity of additional rounds, but our method removes this constraint. We extend the two-round MM model into a three-round auction model,9 and apply our learning procedure. It requires more iterations for our algorithm to converge in this setup, but it again converges to a rather stable ϵ-BNE regardless of initial conditions. The final ϵ-factor is smaller than 0.5% of expected bidder profit (Figure 3(d)). Although d(\pi, \pi') no longer fully summarizes strategy differences, it still strongly indicates that the learning procedure converges to very similar strategies regardless of initial conditions (Figure 3(b)).

9This model is described in supplemental material.

[Figure 3: The same set of graphs as in Figure 2 for the three-round MM model with 3 agents.]

6 Related Work

On the theoretical side, Weber [23] derived equilibrium strategies for a basic model in which n bidders compete in k first- or second-price auctions, but bidders are assumed to have unit demand. Février [6] and Yao [25] studied a model where n bidders have multi-unit demand, but there are only two auctions and a bidder's per-good valuation is the same across the two goods. Liu [13] and Paes Leme et al. [17] studied models of n bidders with multi-unit demand where bidders have
complete information about opponents' valuations and perfect information about opponents' past bids. Syrgkanis and Tardos [21] extended to the case of incomplete information with unit demand.

On the computational side, Rabinovich et al. [19] generalized fictitious play to finite-action incomplete information games and applied their technique to simultaneous second-price auctions with utilities expressible as linear functions over a one-dimensional type space. Cai and Wurman [3] take a heuristic approach to finding equilibria for sequential auctions with incomplete information: opponent valuations are sampled to create complete information games, which are solved with dynamic programming and a general game solver, and then aggregated into mixed behavior strategies to form a policy for the original incomplete information game. Fatima et al. [5] find equilibrium bidding strategies in sequential auctions with incomplete information under various rules of information revelation after each round. Additional methods of computing equilibria have been developed for sequential games outside the context of auctions: Ganzfried and Sandholm [7] study the problem of computing approximate equilibria in the context of poker, and Mostafa and Lesser [15] describe an anytime algorithm for approximating equilibria in general incomplete information games.

From a decision-theoretic perspective, the bidding problem for sequential auctions was previously formulated as an MDP in related domains. In Boutilier et al. [2], an MDP is created where distinct goods are sold consecutively, complementarities exist across goods, and the bidder is budget-constrained. A similar formulation was studied in Greenwald and Boyan [8], but without budget constraints. There, purchasing costs were modeled as negative rewards, significantly reducing the size of the MDP's state space. Lee et al.
[12] represent multi-round games as iterated semi-netform games, and then use reinforcement learning techniques to find K-level reasoning strategies for those games. Their experiments are for two-player games with perfect information about opponent actions, but their approach is not conceptually limited to such models.

7 Conclusion

We presented a two-step procedure (predict and optimize) for finding approximate equilibria in a class of complex sequential auctions in which bidders have incomplete information about opponents' types, imperfect information about opponents' bids, and demand multiple goods. Our procedure is applicable under numerous pricing rules, allocation rules, and information-revelation policies. We evaluated our method on models with analytically derived equilibria and on an auction domain in which analytical solutions were heretofore unknown. Our method was able both to show that the known equilibrium for one model was not strict and to guide our own analytical derivation of the non-strict set of equilibria. For a more complex auction with no known analytical solutions, our method converged to an approximate equilibrium with an ϵ-factor less than 10^{-4}, and did so robustly with respect to initialization of the learning procedure. While we achieved fast convergence in the MM model, such convergence is not guaranteed. The fact that our procedure converged to nearly identical approximate equilibria even from different initializations is promising, and further exploring convergence properties in this domain is a direction for future work.

Acknowledgements  This research was supported by U.S. National Science Foundation Grants CCF-0905139 and IIS-1217761. The authors (and hence, the paper) benefited from lengthy discussions with Michael Wellman, Michael Littman, and Victor Naroditskiy. Chris Amato also provided useful insights, and James Tavares contributed to the code development.

References

[1] R. E. Bellman. Dynamic Programming.
Princeton University Press, Princeton, NJ, 1957.
[2] C. Boutilier, M. Goldszmidt, and B. Sabata. Sequential auctions for the allocation of resources with complementarities. In International Joint Conference on Artificial Intelligence, volume 16, pages 527–534. Lawrence Erlbaum Associates Ltd, 1999.
[3] G. Cai and P. R. Wurman. Monte Carlo approximation in incomplete information, sequential auction games. Decision Support Systems, 39(2):153–168, Apr. 2005.
[4] A. Cournot. Recherches sur les principes mathématiques de la théorie des richesses. Hachette, 1838.
[5] S. S. Fatima, M. Wooldridge, and N. R. Jennings. Sequential Auctions in Uncertain Information Settings. Agent-Mediated Electronic Commerce and Trading Agent Design and Analysis, pages 16–29, 2009.
[6] P. Février. He who must not be named. Review of Economic Design, 8(1):99–1, Aug. 2003.
[7] S. Ganzfried and T. Sandholm. Computing Equilibria in Multiplayer Stochastic Games of Imperfect Information. International Joint Conference on Artificial Intelligence, pages 140–146, 2009.
[8] A. Greenwald and J. Boyan. Bidding under uncertainty: Theory and experiments. In Twentieth Conference on Uncertainty in Artificial Intelligence, pages 209–216, Banff, 2004.
[9] S. M. Kakade, M. J. Kearns, and J. Langford. Exploration in metric state spaces. In Proceedings of the 20th International Conference on Machine Learning (ICML 2003), 2003.
[10] B. Katzman. A Two Stage Sequential Auction with Multi-Unit Demands. Journal of Economic Theory, 86(1):77–99, May 1999.
[11] P. Klemperer. Auctions: Theory and Practice. Princeton University Press, 2004.
[12] R. Lee, S. Backhaus, J. Bono, W. Dc, D. H. Wolpert, R. Bent, and B. Tracey. Modeling Humans as Reinforcement Learners: How to Predict Human Behavior in Multi-Stage Games. In NIPS 2011, 2011.
[13] Q. Liu. Equilibrium of a sequence of auctions when bidders demand multiple items. Economics Letters, 112(2):192–194, 2011.
[14] F. M. Menezes and P. K. Monteiro.
Synergies and Price Trends in Sequential Auctions. Review of Economic Design, 8:85–98, 2003.
[15] H. Mostafa and V. Lesser. Approximately Solving Sequential Games With Incomplete Information. In Proceedings of the AAMAS08 Workshop on Multi-Agent Sequential Decision Making in Uncertain Multi-Agent Domains, pages 92–106, 2008.
[16] Nielsen Company. Nielsen's quarterly global adview pulse report, 2011.
[17] R. Paes Leme, V. Syrgkanis, and E. Tardos. Sequential Auctions and Externalities. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 869–886, 2012.
[18] M. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, 1994.
[19] Z. Rabinovich, V. Naroditskiy, E. H. Gerding, and N. R. Jennings. Computing pure Bayesian Nash equilibria in games with finite actions and continuous types. Technical report, University of Southampton, 2011.
[20] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction, volume 9 of Adaptive Computation and Machine Learning. MIT Press, 1998.
[21] V. Syrgkanis and E. Tardos. Bayesian sequential auctions. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 929–944. ACM, 2012.
[22] W. Vickrey. Counterspeculation, Auctions, and Competitive Sealed Tenders. Journal of Finance, 16(1):8–37, 1961.
[23] R. J. Weber. Multiple-Object Auctions. In R. Engelbrecht-Wiggans, R. M. Stark, and M. Shubik, editors, Competitive Bidding, Auctions, and Procurement, pages 165–191. New York University Press, 1983.
[24] M. Wellman, E. Sodomka, and A. Greenwald. Self-confirming price prediction strategies for simultaneous one-shot auctions. In The Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[25] Z. Yao. Sequential First-Price Auctions with Multi-Unit Demand. Discussion paper, UCLA, 2007.
Multiresolution Gaussian Processes

Emily B. Fox
Dept of Statistics, University of Washington
ebfox@stat.washington.edu

David B. Dunson
Dept of Statistical Science, Duke University
dunson@stat.duke.edu

Abstract

We propose a multiresolution Gaussian process to capture long-range, non-Markovian dependencies while allowing for abrupt changes and non-stationarity. The multiresolution GP hierarchically couples a collection of smooth GPs, each defined over an element of a random nested partition. Long-range dependencies are captured by the top-level GP while the partition points define the abrupt changes. Due to the inherent conjugacy of the GPs, one can analytically marginalize the GPs and compute the marginal likelihood of the observations given the partition tree. This property allows for efficient inference of the partition itself, for which we employ graph-theoretic techniques. We apply the multiresolution GP to the analysis of magnetoencephalography (MEG) recordings of brain activity.

1 Introduction

A key challenge in many time series applications is capturing long-range dependencies for which Markov-based models are insufficient. One method of addressing this challenge is through employing a Gaussian process (GP) with an appropriate (non-band-limited) covariance function. However, GPs typically assume smoothness properties that can blur key elements of the signal if abrupt changes occur. The Matérn kernel enables less smooth functions, but assumes a stationary process that does not adapt to varying levels of smoothness. Likewise, a changepoint [21] or partition [8] model between smooth functions fails to capture long-range dependencies spanning changepoints. Another long-memory process is the fractional ARIMA process [5, 13]. Wavelet methods have also been proposed, including recently for smooth functions with discontinuities [2].
We take a fundamentally different approach based on GPs that allows (i) direct interpretability, (ii) local stationarity, (iii) irregular grids of observations, and (iv) sharing information across related time series. As a motivating application, consider magnetoencephalography (MEG) recordings of brain activity in response to some word stimulus. Due to the low signal-to-noise-ratio (SNR) regime, multiple trials are often recorded, presenting a functional data analysis scenario. Each trial results in a noisy trajectory with key discontinuities (e.g., after stimulus onset). Although there are overall similarities between the trials, there are also key differences that occur based on various physiological phenomena, as depicted in Fig. 1. We clearly see abrupt changes as well as long-range correlations. Key to the data analysis is the ability to share information about the overall trajectory between the single trials without forcing unrealistic smoothness assumptions on the single trials themselves. In order to capture both long-range dependencies and potential discontinuities, we propose a multiresolution GP (mGP) that hierarchically couples a collection of smooth GPs, each defined over an element of a nested partition set. The top-level GP captures a smooth global trajectory, while the partition points define abrupt changes in correlation induced by the lower-level GPs. Due to the inherent conjugacy of the GPs, conditioned on the partition points the resulting function at the bottom level is marginally GP-distributed with a partition-dependent (and thus non-stationary) covariance function. The correlation between any two observations yi and yj generated by the mGP at locations xi and xj is a function of the distance ||xi −xj|| and which partition sets contain both xi and xj. 
In a standard regression setting, the marginal GP structure of the mGP allows us to compute the marginal likelihood of the data conditioned on the partition, enabling efficient inference of the partition itself. We integrate over the hierarchy of GPs and only sample the partition points. For our proposal distribution, we borrow the graph-theoretic idea of normalized cuts [22] often used in image segmentation. Our inferences integrate over the partition tree, allowing blurring of discontinuities and producing functions which can appear smooth when discontinuities are not present in the data.

[Figure 1: For sensor 1 and word house, Left: Data from three trials; Middle: Empirical correlation matrix from 20 trials; Right: Hierarchical segmentation produced by recursive minimization of normalized cut objective, with color indicating tree level.]

[Figure 2: mGP on a balanced, binary tree partition: Parent function is split by A^1 = \{A^1_1, A^1_2\}. Recursing down the tree, each partition has a GP with mean given by its parent function restricted to that set.]

2 Background

A GP provides a distribution on real-valued functions f : X \to \mathbb{R}, with the property that the function evaluated at any finite collection of points is jointly Gaussian. The GP, denoted GP(m, c), is uniquely defined by its mean function m and covariance function c. That is, f \sim GP(m, c) if and only if for all n \geq 1 and x_1, \ldots, x_n, (f(x_1), \ldots, f(x_n)) \sim N_n(\mu, K), with \mu = [m(x_1), \ldots, m(x_n)] and [K]_{ij} = c(x_i, x_j). The properties (e.g., continuity, smoothness, periodicity, etc.) of functions drawn from a given GP are determined by the covariance function. The squared exponential kernel, c(x, x') = d \exp(-\kappa \|x - x'\|_2^2), leads to smooth functions.
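A sketch of this kernel as a Gram matrix on one-dimensional inputs (the values of d and κ here are illustrative, not the paper's settings); symmetry and positive semi-definiteness make it a valid GP covariance:

```python
import numpy as np

def sq_exp_kernel(x, d=1.0, kappa=10.0):
    """Gram matrix of the squared exponential kernel
    c(x, x') = d * exp(-kappa * (x - x')**2) on 1-D inputs x."""
    diff = x[:, None] - x[None, :]
    return d * np.exp(-kappa * diff ** 2)

x = np.linspace(0, 1, 50)
K = sq_exp_kernel(x)
# Eigenvalues should be nonnegative up to numerical jitter, so K is a
# valid covariance matrix; the diagonal equals the scale d.
eigs = np.linalg.eigvalsh(K)
```

Larger κ narrows the correlation band and hence shortens the length scale over which sampled functions vary.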
Here, d is a scale hyperparameter and κ is the bandwidth determining the extent of the correlation in f over X. See [18] for further details.

3 Multiresolution Gaussian Process Formulation

Our interest is in modeling a function g that (i) is locally smooth, (ii) exhibits long-range correlations (i.e., corr(g(x), g(x')) > 0 for \|x - x'\| relatively large), and (iii) has abrupt changes. We begin by modeling a single function, but with a specification that readily lends itself to modeling a collection of functions that share a common global trajectory, as explored in Sec. 4.

Generative Model  Assume a set of noisy observations y = \{y_1, \ldots, y_n\}, y_i \in \mathbb{R}, of the function g at locations \{x_1, \ldots, x_n\}, x_i \in X \subset \mathbb{R}^p:

y_i = g(x_i) + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2).   (1)

We hierarchically define g as follows. Let A = \{A^0, A^1, \ldots, A^{L-1}\} be a nested partition, or tree partition, of X with A^0 = X, X = \bigcup_i A^\ell_i, A^\ell_i \cap A^\ell_j = \emptyset, and A^\ell_i \subset A^{\ell-1}_k for some k. Furthermore, assume that each A^\ell_i is a contiguous subset of X. Fig. 2 depicts a balanced, binary tree partition. We define a global parent function on A^0 as f^0 \sim GP(0, c^0). This function captures the overall shape of g and its long-range dependencies. Then, over each partition set A^\ell_i we independently draw

f^\ell(A^\ell_i) \sim GP(f^{\ell-1}(A^\ell_i), c^\ell_i).   (2)

That is, the mean of the GP is given by the parent function restricted to the current partition set. Due to the conditional independence of these draws, f^\ell can have discontinuities at the partition points. However, due to the coupling of GPs through the tree, f^\ell will maintain aspects of the shape of f^0. Finally, we set g = f^{L-1}. A pictorial representation of the mGP is shown in Fig. 2. We can equivalently represent the mGP as an additive GP model: \phi^\ell(A^\ell_i) \sim GP(0, c^\ell_i), g = \sum_\ell \phi^\ell.

Covariance Function  We assume a squared exponential kernel c^\ell_i = d^\ell_i \exp(-\kappa^\ell_i \|x - x'\|_2^2), encouraging local smoothness over each partition set A^\ell_i.
We focus on d^\ell_i = d^\ell with \sum_{\ell=1}^{\infty} (d^\ell)^2 < 1 for finite variance regardless of tree depth and additionally encouraging lower levels to vary less from their parent function, providing regularization and robustness to the choice of L. We typically assume bandwidths \kappa^\ell_i = \kappa / \|A^\ell_i\|_2^2 so that each child function is locally as smooth as its parent. One can think of this formulation as akin to a fractal process: zooming in on any partition, the locally defined function has the same smoothness as that of its parent over the larger partition. Thus, lower levels encode finer-resolution details. We denote the covariance hyperparameters as \theta = \{d^0, \ldots, d^{L-1}, \kappa\}, and omit the dependency in conditional distributions for notational simplicity. See the Supplementary Material for discussion of other possible covariance specifications.

Induced Marginal GP  The conditional independencies of our mGP imply that

p(g \mid A) = \int p(f^0) \prod_{\ell=1}^{L-1} p(f^\ell \mid f^{\ell-1}, A^\ell) \, df^{0:L-2}.   (3)

Due to the inherent conjugacy of the GPs, one can analytically marginalize the hierarchy of GPs conditioned on the partition tree A, yielding

g \mid A \sim GP(0, c^*_A), \quad c^*_A = \sum_{\ell=0}^{L-1} \sum_i c^\ell_i I_{A^\ell_i}.   (4)

Here, I_{A^\ell_i}(x, x') = 1 if x, x' \in A^\ell_i and 0 otherwise. Eq. (4) provides an interpretation of the mGP as a (marginally) partition-dependent GP, where the partition A defines the discontinuities in the covariance function c^*_A. The covariance function encodes local smoothness of g and discontinuities at the partition points. Note that c^*_A defines a non-stationary covariance function. The correlation between any two observations y_i and y_j at locations x_i and x_j generated as in Eq. (1) is a function of how many tree levels contain both x_i and x_j and the distance \|x_i - x_j\|. Let r^\ell_i index the partition set such that x_i \in A^\ell_{r^\ell_i} and L_{ij} the lowest level for which x_i and x_j fall into the same set (i.e., the largest \ell such that r^\ell_i = r^\ell_j).
Then, for x_i \neq x_j,

corr(y_i, y_j \mid A) = \frac{\sum_{\ell=0}^{L_{ij}} c^\ell_{r^\ell_i}(x_i, x_j)}{\prod_{k \in \{i,j\}} \big(\sigma^2 + \sum_{\ell=0}^{L-1} c^\ell_{r^\ell_k}(x_k, x_k)\big)^{1/2}} = \frac{\sum_{\ell=0}^{L_{ij}} d^\ell \exp(-\kappa \|x_i - x_j\|_2^2 / \|A^\ell_{r^\ell_i}\|_2^2)}{\sigma^2 + \sum_{\ell=0}^{L-1} d^\ell},   (5)

where the second equality follows from assuming the previously described kernels. An example correlation matrix is shown in Fig. 3(c). κ determines the width of the bands while d^\ell controls the contribution of level \ell. Since d^\ell is square summable, lower levels are less influential.

Marginal Likelihood  Based on a vector of observations y = [y_1 \cdots y_n]' at locations x = [x_1 \cdots x_n]', we can restrict our attention to evaluating the GPs at x. Let f^\ell(x) = [f^\ell(x_1) \cdots f^\ell(x_n)]'. By definition of the GP, we have

f^\ell(x) \mid f^{\ell-1}(x), A^\ell \sim N(f^{\ell-1}(x), K^\ell), \quad [K^\ell]_{i,j} = \begin{cases} c^\ell_r(x_i, x_j) & x_i, x_j \in A^\ell_r \\ 0 & \text{otherwise.} \end{cases}   (6)

The level-specific covariance matrix K^\ell is block-diagonal with structure determined by the level-specific partition A^\ell. Observations are generated as y \mid g(x) \sim N(g(x), \sigma^2 I_n). Recalling Eq. (3), standard results yield

g(x) \mid A \sim N\Big(0, \sum_{\ell=0}^{L-1} K^\ell\Big), \quad y \mid A \sim N\Big(0, \sigma^2 I_n + \sum_{\ell=0}^{L-1} K^\ell\Big).   (7)

This result can also be derived from the induced mGP of Eq. (4). We see that the marginal likelihood p(y \mid A) has a closed form. Alternatively, one can condition on the GP at any level \ell':

y \mid f^{\ell'}(x), A \sim N\Big(f^{\ell'}(x), \sigma^2 I_n + \sum_{\ell=\ell'+1}^{L-1} K^\ell\Big).   (8)

A key advantage of the mGP is the conditional conjugacy of the latent GPs that allows us to compute the likelihood of the data simply conditioned on the hierarchical partition A (see Eq. (7)). This fact is fundamental to the efficiency of the partition inference procedure described in Sec. 5.

4 Multiple Trials

In many applications, such as the motivating MEG application, one has a collection of observations of an underlying signal. To capture the common global trajectory of these trials while still allowing for trial-specific variability, we model each as a realization from an mGP with a shared parent function f^0.
One could trivially allow for alternative structures of hierarchical sharing beyond $f^0$ if an application warranted. For simplicity, and due to the motivating MEG application, we additionally assume shared changepoints between the trials, though this assumption can also be relaxed.

Figure 3: (a) Three trials and (b) all 100 trials of data generated from a 5-level mGP with a shared parent function $f^0$ and partition $A$ (randomly sampled). (c) True correlation matrix. (d) Empirical correlation matrix from 100 trials. (e) Hierarchical segmentation produced by recursive minimization of the normalized cut objective.

Generative Model  For each trial $y^{(j)} = \{y^{(j)}_1, \ldots, y^{(j)}_n\}$, we model
$$y^{(j)}_i = g^{(j)}(x_i) + \epsilon^{(j)}_i, \qquad \epsilon^{(j)}_i \sim N(0, \sigma^2), \qquad (9)$$
with $g^{(j)} = f^{L-1,(j)}$ generated from a trial-specific GP hierarchy $f^0 \to f^{1,(j)} \to \cdots \to f^{L-1,(j)}$ with shared parent $f^0$. (Again, alternative structures can be considered.) From Eq. (8) with $\ell' = 0$, and exploiting the independence of $\{f^{\ell,(j)}\}$, independently for each $j$,
$$y^{(j)} \mid f^0(x), A \sim N\Big(y^{(j)}; f^0(x), \sigma^2 I_n + \sum_{\ell=1}^{L-1} K^\ell\Big). \qquad (10)$$
Note that with our GP-based formulation, we need not assume coincident observation locations $x_1, \ldots, x_n$ between the trials. However, for simplicity of exposition, we consider shared locations. We compactly denote the covariance by $\Sigma = \sigma^2 I_n + \sum_{\ell=1}^{L-1} K^\ell$. Simulated data generated from a 5-level mGP with shared $f^0$ and $A$ are shown in Fig. 3. The sample correlation matrix is also shown. Compare with the MEG data of Fig. 1. Both the qualitative structure of the raw time series and the blockiness of the correlation matrix have striking similarities.
Posterior Global Trajectory and Predictions  Based on a set of trials $\{y^{(1)}, \ldots, y^{(J)}\}$, it is of interest to infer the posterior of $f^0$. Standard Gaussian conjugacy results imply that
$$p(f^0(x) \mid y^{(1)}, \ldots, y^{(J)}, A) = N\big((K_0^{-1} + J\Sigma^{-1})^{-1} \tilde y,\; (K_0^{-1} + J\Sigma^{-1})^{-1}\big), \qquad (11)$$
where $\tilde y = \Sigma^{-1} \sum_i y^{(i)}$. Likewise, the predictive distribution of data from a new trial is
$$p(y^{(J+1)} \mid y^{(1)}, \ldots, y^{(J)}, A) = \int p(y^{(J+1)} \mid f^0(x), A)\, p(f^0(x) \mid y^{(1)}, \ldots, y^{(J)}, A)\, df^0 = N\big((K_0^{-1} + J\Sigma^{-1})^{-1} \tilde y,\; \Sigma + (K_0^{-1} + J\Sigma^{-1})^{-1}\big). \qquad (12)$$

Marginal Likelihood  Since the set of trials $Y = \{y^{(1)}, \ldots, y^{(J)}\}$ are generated from a shared parent function $f^0$, the marginal likelihood does not decompose over trials. Instead,
$$p(Y \mid A) = \frac{(2\pi)^{-nJ/2}\, |K_0|^{-1/2}\, |\Sigma|^{-J/2}}{|K_0^{-1} + J\Sigma^{-1}|^{1/2}} \exp\Big(-\frac{1}{2} \sum_i y^{(i)\prime} \Sigma^{-1} y^{(i)} + \frac{1}{2}\, \tilde y' (K_0^{-1} + J\Sigma^{-1})^{-1} \tilde y\Big). \qquad (13)$$
See the Supplementary Material for a derivation. One can easily verify that the above simplifies to the marginal likelihood of Eq. (7) when $J = 1$.

5 Inference of the Hierarchical Partition

In the formulation so far, we have assumed that the hierarchical partition $A$ is given. A key question is how to infer the partition from the data. Assume that we have a prior $p(A)$ on the hierarchical partition. Based on the fact that we can analytically compute $p(Y \mid A)$, we can use importance sampling or independence chain Metropolis-Hastings to draw samples from the posterior $p(A \mid Y)$. In what follows, we assume a balanced binary tree for $A$. See the Supplementary Material for a discussion of how unbalanced trees can be considered via modifications to the covariance hyperparameter specification or by considering alternative priors $p(A)$ such as the Mondrian process [20].

Partition Prior  We consider a prior solely on the partition points $\{z_1, \ldots, z_{2^{L-1}-1}\}$ rather than taking tree level into account as well. Because of our time-series analysis focus, we assume $X \subset \Re$. We define a distribution $F$ on $X$ and specify $p(A) = \prod_i F(z_i)$.
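Eq. (11) can be evaluated directly with a few linear solves. The sketch below (our own helper names, and plain matrix inverses for clarity rather than the more stable Cholesky-based solves one would use in practice) computes the posterior mean and covariance of $f^0(x)$:

```python
import numpy as np

def posterior_f0(Y, K0, Sigma):
    """Posterior mean and covariance of f^0(x) given J trials, per Eq. (11):
    cov = (K0^{-1} + J Sigma^{-1})^{-1}, mean = cov @ y_tilde with
    y_tilde = Sigma^{-1} sum_j y^(j). Y is a (J, n) array; names are ours."""
    J, n = Y.shape
    Sigma_inv = np.linalg.inv(Sigma)
    cov = np.linalg.inv(np.linalg.inv(K0) + J * Sigma_inv)
    mean = cov @ (Sigma_inv @ Y.sum(axis=0))
    return mean, cov
```

As a sanity check, with a vague prior (large $K_0$) and $\Sigma = I$ the posterior mean approaches the across-trial average, as one would expect.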
Generatively, one can think of drawing $2^{L-1} - 1$ partition points from $F$ and deterministically forming a balanced binary tree $A$ from these. For multidimensional $X$, one could use a Voronoi tessellation and graph matching to build the tree from the randomly selected $z_i$. Such a prior allows for trivial specification of a uniform distribution on $A$ (simply taking $F$ uniform on $X$) or for eliciting prior information on changepoints, such as based on physiological information for the MEG data. Eliciting such information in a level-dependent setup is not straightforward. Also, despite common deployment, taking the partition point at level $\ell$ as uniformly distributed over the parent set $A^{\ell-1}_i$ yields high mass on hierarchical partitions $A$ with small sets $A^\ell_i$. This property is undesirable because it leads to trees with highly unbalanced partitions. Our resulting inferences perform Bayesian model averaging over trees. As such, even though we specify a prior on partitions with $2^{L-1} - 1$ changepoints, the resulting functions can appear to adaptively use fewer by averaging over the uncertainty in the discontinuity location.

Partition Proposal  Although stochastic tree search algorithms tend to be inefficient in general, we can harness the well-defined correlation structure associated with a given hierarchical partition to search the tree space much more efficiently. One can think of every observed location $x_i$ as a node in a graph with edge weights between $x_i$ and $x_j$ defined by the magnitude of the correlation of $y_i$ and $y_j$. Under this interpretation, the partition points of $A$ correspond to graph cuts that bisect small edge weights, as graphically depicted in Fig. 4. As such, we seek a method for hierarchically cutting a graph.
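The generative view above can be sketched directly: draw $2^{L-1}-1$ points from $F$ (taken uniform on $[0,1]$ here, an assumption) and deterministically form the balanced binary tree. The paper does not spell out the tree-forming step; recursive median splitting, used below, is one natural choice and is our own construction:

```python
import numpy as np

def sample_balanced_partition(L, rng):
    """Draw 2^(L-1) - 1 partition points from F (uniform on [0, 1], an
    assumption) and form a balanced binary tree: the median point cuts the
    root set A^0, the medians of each remaining half cut the next level, etc.
    Returns a list whose entry l holds the cut points introduced below level l."""
    z = np.sort(rng.uniform(size=2 ** (L - 1) - 1))
    levels, segments = [], [z]
    for _ in range(L - 1):
        cuts, nxt = [], []
        for seg in segments:
            mid = len(seg) // 2
            cuts.append(float(seg[mid]))
            nxt.extend([seg[:mid], seg[mid + 1:]])
        levels.append(cuts)
        segments = nxt
    return levels
```

For $L = 3$ this yields one root cut and two second-level cuts, with the root cut lying between the other two.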
Given a cost matrix $W$ with elements $w_{uv}$ defined for all pairs of nodes $u, v$ in a set $V$, the normalized cut metric [22] for partitioning $V$ into disjoint sets $A$ and $B$ is given by
$$\mathrm{ncut}(A, B) = \mathrm{cut}(A, B)\big(\mathrm{assoc}(A, V)^{-1} + \mathrm{assoc}(B, V)^{-1}\big), \qquad (14)$$
where $\mathrm{cut}(A, B) = \sum_{u \in A, v \in B} w_{uv}$ and $\mathrm{assoc}(A, V) = \sum_{u \in A, v \in V} w_{uv}$. Typically, the cut point is selected as the minimum of the metric $\mathrm{ncut}(A, B)$ computed over all possible subsets $A$ and $B$. The normalized cut metric balances the cost of the edge weights cut against the connectivity of the cut components, thus avoiding cuts that separate small sets. Fig. 1 shows an example of applying a greedy normalized cuts algorithm (recursively minimizing $\mathrm{ncut}(A, B)$) to MEG data.

Figure 4: Illustration of cut points dividing contiguous segments at points of low correlation.

Instead of deterministically selecting cut points, we employ the normalized cut objective as a proposal distribution. Let the cost matrix $W$ be the absolute value of the empirical correlation matrix computed from trials $\{y^{(1)}, \ldots, y^{(J)}\}$ (see Fig. 1). Due to the natural ordering of our locations $x_i \in X \subset \Re$, the algorithm is straightforwardly implemented. We step down the hierarchy, first proposing a cut of $A^0$ into $\{A^1_1, A^1_2\}$ with probability
$$q(\{A^1_1, A^1_2\}) \propto \mathrm{ncut}(A^1_1, A^1_2)^{-1}. \qquad (15)$$
At level $\ell$, each $A^\ell_i$ is partitioned via a normalized cut proposal based on the submatrix of $W$ corresponding to the locations $x_i \in A^\ell_i$. The probability of any partition $A$ under the specified proposal distribution is simply computed as the product of the sequence of conditional probabilities of each cut. This procedure generates cut points only at the observed locations $x_i$. More formally, the partition point in $X$ is proposed as uniformly distributed between $x_i$ and $x_{i+1}$. Extensions to multidimensional $X$ rely on spectral clustering algorithms based on the graph Laplacian [24].
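For the 1-D contiguous case, the proposal of Eq. (15) reduces to scoring every split index by Eq. (14) and sampling with probability proportional to the inverse score. A sketch under our own naming (not the paper's code):

```python
import numpy as np

def ncut_proposal_probs(W):
    """For nodes with a natural 1-D ordering, score each contiguous split
    A = {0..t}, B = {t+1..n-1} by ncut(A, B) of Eq. (14), and return the
    proposal probabilities q(t) proportional to ncut(A, B)^(-1) (Eq. (15))."""
    n = W.shape[0]
    total = W.sum(axis=1)                      # assoc(u, V) per node
    scores = np.empty(n - 1)
    for t in range(n - 1):
        cut = W[: t + 1, t + 1:].sum()         # cut(A, B)
        scores[t] = cut * (1.0 / total[: t + 1].sum() + 1.0 / total[t + 1:].sum())
    inv = 1.0 / scores
    return inv / inv.sum()
```

On a cost matrix with two strongly correlated blocks, the boundary split receives the most mass; a cut index is then drawn from these probabilities, with the actual partition point placed uniformly between the two adjacent locations, as described above.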
Markov Chain Monte Carlo  An importance sampler draws hierarchical partitions $A^{(m)} \sim q$, with the proposal distribution $q$ defined as above, and then weights the samples by $p(A^{(m)})/q(A^{(m)})$ to obtain posterior draws [19]. Such an approach is naively parallelizable, and thus amenable to efficient computation, though the effective sample size may be low if $q$ does not adequately match the posterior $p(A \mid Y)$. Alternatively, a straightforward independence chain Metropolis-Hastings algorithm (see the Supplementary Material) is defined by iteratively proposing $A' \sim q$, which is accepted with probability $\min\{r(A' \mid A), 1\}$, where $A$ is the previous sample of a hierarchical partition and
$$r(A' \mid A) = \frac{p(Y \mid A')\, p(A')\, q(A)}{p(Y \mid A)\, p(A)\, q(A')}. \qquad (16)$$
The tailoring of the proposal distribution $q$ to this application based on normalized cuts dramatically improves the acceptance rate relative to more naive tree proposals. However, the acceptance rate tends to decrease as higher posterior probability partitions $A$ are discovered, especially for trees with many levels and large input spaces $X$ for which the search space is larger. One benefit of the MCMC approach over importance sampling is the ability to include more intricate tree proposals to increase efficiency. We choose to interleave both local and global tree proposals. At each iteration, we first randomly select a node in the tree (i.e., a partition set $A^\ell_i$) and then propose a new sequence of cuts for all children of this node. When the root node is selected, corresponding to $A^0$, the proposal is equivalent to the global proposals previously considered. We adapt the proposal distribution for node selection to encourage more global searches at first and then shift towards a greater balance between local and global searches as the sampling progresses. Sequential Monte Carlo methods [4] can also be considered, with particles generated as global proposals.
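A minimal independence-chain MH step implementing the acceptance ratio of Eq. (16), written in log space for numerical stability (the callables and their names are illustrative, not the paper's API):

```python
import math, random

def independence_mh_step(state, propose, log_joint, log_q):
    """One independence-chain Metropolis-Hastings step (Eq. (16)).
    log_joint(A) should return log[p(Y | A) p(A)] and log_q(A) the log
    proposal density. Returns (new_state, accepted)."""
    prop = propose()
    log_r = (log_joint(prop) - log_q(prop)) - (log_joint(state) - log_q(state))
    # Accept with probability min{1, exp(log_r)}; guard avoids exp overflow.
    if log_r >= 0 or random.random() < math.exp(log_r):
        return prop, True
    return state, False
```

Note that when $q$ matches the target exactly, $\log r = 0$ and every proposal is accepted, which is why tailoring $q$ via normalized cuts helps the acceptance rate.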
Computational Complexity  The per-iteration complexity is $O(n^3)$, equivalent to a typical likelihood evaluation under a GP prior. Using dynamic programming, the cost associated with the normalized cuts proposal is $O(n^2(L-1))$. Standard techniques for more efficient GP computations are readily applicable, as are extensions that harness the additive block structure of the covariance.

6 Related Work

Various aspects of the mGP have similarities to other models proposed in the literature that primarily fall into two main categories: (i) GPs defined over a partitioned input space, and (ii) collections of GPs defined at tree nodes. The treed GP [8] captures non-stationarities by defining independent GPs at the leaves of a Bayesian CART-partitioned input space. The related approach of [12] assumes a Voronoi tessellation. For time series, [21] examines online inference of changepoints with GPs modeling the data within each segment. These methods capture abrupt changes, but do not allow for long-range dependencies spanning changepoints nor a functional data hierarchical structure, both inherent to our multiresolution perspective. A main motivation of the treed GP is the resulting computational speed-ups of an independently partitioned GP. A two-level hierarchical GP also aimed at computational efficiency is considered by [16], where the top-level GP is defined at a coarser scale and provides a piecewise-constant mean for lower-level GPs on a pre-partitioned input space. [10, 11] consider covariance functions defined on a phylogenetic tree such that the covariance between function-valued traits depends on both their spatial distance and the evolutionary time spanned via a common ancestor. Here, the tree defines the strength and structure of sharing between a collection of functions rather than abrupt changes within the function.
The Bayesian rose tree of [3] considers a mixture of GP experts, as in [14, 17], but using Bayesian hierarchical clustering with arbitrary branching structure in place of a Dirichlet process mixture. Such an approach is fundamentally different from the mGP: each GP is defined over the entire input space, data result from a GP mixture, and input points are not necessarily spatially clustered. Alternatively, multiscale processes have a long history (cf. [25]): the variables define a Markov process on a typically balanced, binary tree, and higher-level nodes capture coarser-level information about the process. In contrast, the higher-level nodes in the mGP share the same temporal resolution and only vary in smoothness. At a high level, the mGP differs from previous GP-based tree models in that the nodes of our tree represent GPs over a contiguous subset of the input space $X$, constrained in a hierarchical fashion. Thus, the mGP combines ideas of GP-based tree models and GP-based partition models. As presented in Sec. 3, one can formulate an mGP as an additive GP where each GP in the sum decomposes independently over the level-specific partition of the input space $X$. The additive GPs of [6] instead focus on coping with multivariate inputs, in a similar vein to hierarchical kernel learning [1], thus addressing an inherently different task.

7 Results

7.1 Synthetic Experiments

To assess our ability to infer a hierarchical partition via the proposed MCMC sampler, we generated 100 trials of length 200 from a 5-level mGP with a shared parent function $f^0$. The hyperparameters were set to $\sigma^2 = 0.1$, $\kappa = 10$, $d^\ell = d^0 \exp(-0.5(\ell + 1))$ for $\ell = 0, \ldots, L-1$ with $d^0 = 5$. The data are shown in Fig. 3, along with the empirical correlation matrix that is used as the cost matrix for the normalized cuts proposals. For inference, we set $\sigma^2 = \hat\sigma^2/3$ and $d^\ell = (\hat\sigma^2/3)\exp(-0.5\ell)$, where $\hat\sigma^2$ is the average time-specific sample variance. $\kappa$ was as in the simulation.
The hyperparameter mismatch demonstrates some robustness to misspecification.

Figure 5: For the data of Fig. 3, (a) true and (b) MAP partitions. (c) Trace plots of log likelihood versus MCMC iteration for 10 chains. Log likelihood under the true partition (cyan) and the minimized normalized cut partition of Fig. 3 (magenta) are also shown. (d) Errors between the posterior mean of $f^0$ and the true $f^0$ for GP, hGP, and mGP. (e) Predictive log likelihood of 10 heldout sequences for GP, hGP, and mGP with $L = 2, 5$ (true), $7, 10$.

For a uniform prior $p(A)$, 10 independent MCMC chains were run for 3000 iterations, thinned by 10. The first 1000 iterations used pure global tree searches; the sampler was then tempered to uniform node proposals. The effects of this choice are apparent in the likelihood plot of Fig. 5, which also displays the true hierarchical partition and the MAP estimate. Compare to the normalized cuts partition of Fig. 3, especially at the important level-1 cut. The full simulation study took less than 7 minutes to run on a single 1.8 GHz Intel Core i7 processor. To assess sensitivity to the choice of $L$, we compare the predictive log likelihood of 10 heldout test sequences under an mGP with 2, 5, 7, and 10 levels. As shown in Fig. 5(e), there is a clear gain going from 2 to 5 levels. However, overestimating $L$ has minimal influence on predictive likelihood since lower tree levels capture finer details and have less overall effect. We also compare to a single GP and a 2-level hierarchical GP (hGP) (see Sec. 7.2). For a direct comparison, both use squared exponential kernels.
Hyperparameters were set as in the mGP for the top-level GP. The total variance was also matched, with the GP taking this as noise and the hGP splitting it between level 2 and noise. In addition to better predictive performance, Fig. 5(d) shows the mGP's improved estimation of $f^0$.

7.2 MEG Analysis

We analyzed magnetoencephalography (MEG) recordings of neuronal activity collected from a helmet with gradiometers distributed over 102 locations around the head. The gradiometers measure the spatial gradient of the magnetic activity in Teslas per meter (T/m) [9]. Since the firings of neurons in the brain only induce a weak magnetic field outside of the skull, the signal-to-noise ratio of the MEG data is very low, and typically multiple recordings, or trials, of a given task are collected. Our MEG data was recorded while a subject viewed 20 stimuli describing concrete nouns (both the written noun and a representative line drawing), with 20 interleaved trials per word. See the Supplementary Material for further details on the data and our analyses presented herein. Efficient sharing of information between the single trials is important for tasks such as word classification [7]. A key insight of [7] was the importance of capturing the time-varying correlations between MEG sensors for performing classification. However, the formulation still necessitates a mean model. [7] propose a 2-level hierarchical GP (hGP): a parent GP captures the common global trajectory, as in the mGP, and each trial-specific GP is centered about the entire parent function.¹ This formulation maintains global smoothness at the individual trial level. The mGP instead models the trial-specific variability with a multi-level tree of GPs defined as deviations from the parent function over local partitions, allowing for abrupt changes relative to the smooth global trajectory. For our analyses, we consider the words associated with the "building" and "tool" categories shown in Fig. 7.
Independently for each of the 10 words and 102 sensors, we trained a 5-level mGP using 15 randomly selected trials as training data and the 5 remaining for testing. Each trial was of length $n = 340$. We ran 3 independent MCMC chains for 3000 iterations with both global and local tree searches. We discarded the first 1000 samples as burn-in and thinned by 10. The mGP hyperparameters were set exactly as in the simulated study of Sec. 7.1 for structure learning and then optimized over a grid to maximize the marginal likelihood of the training data. We compare the predictive performance of the mGP in terms of MSE of heldout segments relative to a GP and hGP, each with similarly optimized hyperparameters. The predictive mean conditioned on data up to the heldout time is straightforwardly derived from Eq. (12). For the mGP, the calculation is averaged over the posterior samples of $A$. Fig. 6 displays the MSEs decomposed by cortical region.

¹The model of [7] uses an hGP in a latent space. The mGP could be similarly deployed.

Figure 6: Per-lobe comparison of mGP to (a) GP and (b) hGP: for various values of $\tau$, % decrease in predictive MSE of heldout $y^*_{\tau:\tau+30}$ conditioned on $y^*_{1:\tau-1}$ and 15 training sequences. (c) For a visual cortex sensor and the word hammer, plots of test data, empirical mean (MLE), and hGP and mGP predictive means for the entire heldout $y^*$. (d) Boxplots of predictive log likelihood of heldout $y^*$ for the mGP and the wavelet-based method of [15].
Figure 7: Inferred changepoints at level 1 aggregated over sensors within each lobe: visual (top-left), frontal (top-right), parietal (bottom-left), and temporal (bottom-right). Words shown: igloo, house, church, apartment, barn, hammer, saw, screwdriver, pliers, chisel.

The results clearly indicate that the mGP consistently better captures the features of the data, particularly for sensors with large abrupt changes such as those in the visual cortex. The heldout trials for a visual cortex sensor are displayed in Fig. 6(c). Relative to the hGP, the mGP much better tracks the early dip in activity right after the visual stimulus onset ($t = 0$). The posterior distributions of inferred changepoints at level 1, also broken down by cortical region, are displayed in Fig. 7. As expected, the visual cortex has the earliest changepoints. Similar trends are seen in the parietal lobe, which handles perception and sensory integration. The temporal lobe, which is key in semantic processing, has changepoints occurring later. These results concur with the findings of [23]: semantic processing starts between 250 and 600 ms, and word length (a visual feature) is decoded most accurately very near the standard 100 ms response time ("n100"). We also compare our predictive performance to that of the wavelet-based functional mixed model (wfmm) of [15]. The wfmm has become a standard approach for functional data analysis since it allows for spiky trajectories and efficient sharing of information between trials. One limitation, however, is the restriction to a regular grid of observations. The wfmm enables analysis in a multivariate setting, but for a direct comparison we simply apply the wfmm to each word and sensor independently. Fig.
6(d) shows boxplots of the predictive heldout log likelihood of the test trials under the mGP and wfmm. The results are over 5 heldout trials, 102 sensors, and 10 words. In addition to the easier interpretability of the mGP, its predictive performance also exceeds that of the wfmm.

8 Discussion

The mGP provides a flexible framework for characterizing the dependence structure of real data, such as the examined MEG recordings, capturing certain features more accurately than previous models. In particular, the mGP provides a hierarchical functional data analysis framework for modeling (i) strong, locally smooth sharing of information, (ii) global long-range correlations, and (iii) abrupt changes. The simplicity of the mGP formulation enables further theoretical analysis, for example, combining posterior consistency results from changepoint analysis with those for GPs. Although we focused on univariate time series analysis, our formulation is amenable to multivariate functional data analysis extensions: one can naturally accommodate hierarchical dependence structures through partial sharing of parents in the tree, or possibly via mGP factor models. There are many interesting questions relating to the proposed covariance function. Our fractal specification represents a particular choice made to avoid over-parameterization, although alternatives could be considered. For hyperparameter inference, we anticipate that joint sampling with the partition would mix poorly, and consider it a topic for future exploration. Another interesting topic is to explore proposals for more general tree structures. We believe that the proposed mGP represents a powerful, broadly applicable new framework for non-stationary analyses, especially in a functional data analysis setting, and sets the foundation for many interesting possible extensions.

Acknowledgments

The authors thank Alona Fyshe, Gustavo Sudre and Tom Mitchell for their help with data acquisition, preprocessing, and useful suggestions.
This work was supported in part by AFOSR Grant FA9550-12-1-0453 and the National Institute of Environmental Health Sciences (NIEHS) of the NIH under Grant R01 ES017240.

References

[1] F. Bach. High-dimensional non-linear variable selection through hierarchical kernel learning. Technical Report 0909.0844v1, arXiv, 2009.
[2] J. Beran and Y. Shumeyko. On asymptotically optimal wavelet estimation of trend functions under long-range dependence. Bernoulli, 18(1):137–176, 2012.
[3] C. Blundell, Y. W. Teh, and K. A. Heller. Bayesian rose trees. In Proc. Uncertainty in Artificial Intelligence, pages 65–72, 2010.
[4] P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society, Series B, 68(3):411–436, 2006.
[5] F. X. Diebold and G. D. Rudebusch. Long memory and persistence in aggregate output. Journal of Monetary Economics, 24:189–209, 1989.
[6] D. Duvenaud, H. Nickisch, and C. E. Rasmussen. Additive Gaussian processes. In Advances in Neural Information Processing Systems, volume 24, pages 226–234, 2011.
[7] A. Y. Fyshe, E. B. Fox, D. B. Dunson, and T. Mitchell. Hierarchical latent dictionaries for models of brain activation. In Proc. International Conference on Artificial Intelligence and Statistics, pages 409–421, 2012.
[8] R. B. Gramacy and H. K. H. Lee. Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association, 103(483):1119–1130, 2008.
[9] P. Hansen, M. Kringelbach, and R. Salmelin. MEG: An Introduction to Methods. Oxford University Press, USA, 2010. ISBN 0195307232.
[10] R. Henao and J. E. Lucas. Efficient hierarchical clustering for continuous data. Technical Report 1204.4708v1, arXiv, 2012.
[11] N. S. Jones and J. Moriarty. Evolutionary inference for function-valued traits: Gaussian process regression on phylogenies. Technical Report 1004.4668v2, arXiv, 2011.
[12] H. M. Kim, B. K. Mallick, and C. C. Holmes.
Analyzing nonstationary spatial data using piecewise Gaussian processes. Journal of the American Statistical Association, 100(470):653–668, 2005.
[13] P. S. Kokoszka and M. S. Taqqu. Parameter estimation for infinite variance fractional ARIMA. The Annals of Statistics, 24(5):1880–1913, 1996.
[14] E. Meeds and S. Osindero. An alternative mixture of Gaussian process experts. In Advances in Neural Information Processing Systems, volume 18, pages 883–890, 2006.
[15] J. S. Morris and R. J. Carroll. Wavelet-based functional mixed models. Journal of the Royal Statistical Society, Series B, 68(2):179–199, 2006.
[16] S. Park and S. Choi. Hierarchical Gaussian process regression. In Proc. Asian Conference on Machine Learning, pages 95–110, 2010.
[17] C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In Advances in Neural Information Processing Systems, volume 2, pages 881–888, 2002.
[18] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[19] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2005.
[20] D. M. Roy and Y. W. Teh. The Mondrian process. In Advances in Neural Information Processing Systems, volume 21, pages 1377–1384, 2009.
[21] Y. Saatci, R. Turner, and C. E. Rasmussen. Gaussian process change point models. In Proc. International Conference on Machine Learning, pages 927–934, 2010.
[22] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[23] G. Sudre, D. Pomerleau, M. Palatucci, L. Wehbe, A. Fyshe, R. Salmelin, and T. Mitchell. Tracking neural coding of perceptual and semantic features of concrete nouns. NeuroImage, 62(1):451–463, 2012.
[24] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[25] A. S. Willsky. Multiresolution Markov models for signal and image processing. Proceedings of the IEEE, 90(8):1396–1458, 2002.
Emergence of Object-Selective Features in Unsupervised Feature Learning

Adam Coates, Andrej Karpathy, Andrew Y. Ng
Computer Science Department, Stanford University, Stanford, CA 94305
{acoates,karpathy,ang}@cs.stanford.edu

Abstract

Recent work in unsupervised feature learning has focused on the goal of discovering high-level features from unlabeled images. Much progress has been made in this direction, but in most cases it is still standard to use a large amount of labeled data in order to construct detectors sensitive to object classes or other complex patterns in the data. In this paper, we aim to test the hypothesis that unsupervised feature learning methods, provided with only unlabeled data, can learn high-level, invariant features that are sensitive to commonly-occurring objects. Though a handful of prior results suggest that this is possible when each object class accounts for a large fraction of the data (as in many labeled datasets), it is unclear whether something similar can be accomplished when dealing with completely unlabeled data. A major obstacle to this test, however, is scale: we cannot expect to succeed with small datasets or with small numbers of learned features. Here, we propose a large-scale feature learning system that enables us to carry out this experiment, learning 150,000 features from tens of millions of unlabeled images. Based on two scalable clustering algorithms (K-means and agglomerative clustering), we find that our simple system can discover features sensitive to a commonly occurring object class (human faces) and can also combine these into detectors invariant to significant global distortions like large translations and scale.

1 Introduction

Many algorithms are now available to learn hierarchical features from unlabeled image data.
There is some evidence that these algorithms are able to learn useful high-level features without labels, yet in practice it is still common to train such features from labeled datasets (but ignoring the labels), and to ultimately use a supervised learning algorithm to learn to detect more complex patterns that the unsupervised learning algorithm is unable to find on its own. Thus, an interesting open question is whether unsupervised feature learning algorithms are able to construct features, without the benefit of supervision, that can identify high-level concepts like frequently-occurring object classes. It is already known that this can be achieved when the dataset is sufficiently restricted that object classes are clearly defined (typically closely cropped images) and occur very frequently [13, 21, 22]. In this work we aim to test whether unsupervised learning algorithms can achieve a similar result without any supervision at all. The setting we consider is a challenging one. We have harvested a dataset of 1.4 million image thumbnails from YouTube and extracted roughly 57 million 32-by-32 pixel patches at random locations and scales. These patches are very different from those found in labeled datasets like CIFAR10 [9]. The overwhelming majority of patches in our dataset appear to be random clutter. In the cases where such a patch contains an identifiable object, it may well be scaled, arbitrarily cropped, or uncentered. As a result, it is very unclear where an "object class" begins or ends in this type of patch dataset, and less clear that a completely unsupervised learning algorithm could manage to create "object-selective" features able to distinguish an object from the wide variety of clutter without some other type of supervision. In order to have some hope of success, we can identify several key properties that our learning algorithm should likely have.
First, since identifiable objects show up very rarely, it is clear that we are obliged to train from extremely large datasets. We have no way of controlling how often a particular object shows up and thus enough data must be used to ensure that an object class is seen many times—often enough that it cannot be disregarded as random clutter. Second, we are also likely to need a very large number of features. Training too few features will cause us to “under-fit” the distribution, forcing the learning algorithm to ignore rare events like objects. Finally, as is already common in feature learning work, we should aim to build features that incorporate invariance so that features respond not just to a specific pattern (e.g., an object at a single location and scale), but to a range of patterns that collectively belong to the same object class (e.g., the same object seen at many locations and scales). Unfortunately, these desiderata are difficult to achieve at once: current methods for building invariant hierarchies of features are difficult to scale up to train many thousands of features from our 57 million patch dataset on our cluster of 30 machines. In this paper, we will propose a highly scalable combination of clustering algorithms for learning selective and invariant features that are capable of tackling this size of problem. Surprisingly, we find that despite the simplicity of these algorithms we are nevertheless able to discover high-level features sensitive to the most commonly occurring object class present in our dataset: human faces. In fact, we find that these features are better face detectors than a linear filter trained from labeled data, achieving up to 86% AUC compared to 77% on labeled validation data. Thus, our results emphasize that not only can unsupervised learning algorithms discover object-selective features with no labeled data, but that such features can potentially perform better than basic supervised detectors due to their deep architecture. 
Though our approach is based on fast clustering algorithms (K-means and agglomerative clustering), its basic behavior is essentially similar to existing methods for building invariant feature hierarchies, suggesting that other popular feature learning methods currently available may also be able to achieve such results if run at large enough scale. Indeed, recent work with a more sophisticated (but vastly more expensive) feature-learning algorithm appears to achieve similar results [11] when presented with full-frame images. We will begin with a description of our algorithms for learning selective and invariant features, and explain their relationship to existing systems. We will then move on to presenting our experimental results. Results and methods related to our own will be reviewed briefly before concluding.

2 Algorithm

Our system is built on two separate learning modules: (i) an algorithm to learn selective features (linear filters that respond to a specific input pattern), and (ii) an algorithm to combine the selective features into invariant features (that respond to a spectrum of gradually changing patterns). We will refer to these features as “simple cells” and “complex cells” respectively, in analogy to previous work and to biological cells with (very loosely) related response properties. Following other popular systems [14, 12, 6, 5] we will then use these two algorithms to build alternating layers of simple cell and complex cell features.

2.1 Learning Selective Features (Simple Cells)

The first module in our learning system trains a bank of linear filters to represent our selective “simple cell” features. For this purpose we use the K-means-like method used by [2], which has previously been used for large-scale feature learning. The algorithm is given a set of input vectors x(i) ∈ ℜ^n, i = 1, . . . , m. These vectors are preprocessed by removing the mean and normalizing each example, then performing PCA whitening.
We then learn a dictionary D ∈ ℜ^{n×d} of linear filters as in [2] by alternating optimization over filters D and “cluster assignments” C:

minimize_{D,C} Σ_i ||DC(i) − x(i)||_2^2 subject to ||D(j)||_2 = 1, ∀j, and ||C(i)||_0 ≤ 1, ∀i.

Here the constraint ||C(i)||_0 ≤ 1 means that the vectors C(i), i = 1, . . . , m are allowed to contain only a single non-zero, but the non-zero value is otherwise unconstrained. Given the linear filters D, we then define the responses of the learned simple cell features as s(i) = g(a(i)) where a(i) = D⊤x(i) and g(·) is a nonlinear activation function. In our experiments we will typically use g(a) = |a| for the first layer of simple cells, and g(a) = a for the second.¹

2.2 Learning Invariant Features (Complex Cells)

To construct invariant complex cell features a common approach is to create “pooling units” that combine the responses of lower-level simple cells. In this work, we use max-pooling units [14, 13]. Specifically, given a vector of simple cell responses s(i), we will train complex cell features whose responses are given by:

c_j(i) = max_{k∈Gj} s_k(i)

where Gj is a set that specifies which simple cells the j’th complex cell should pool over. Thus, the complex cell cj is an invariant feature that responds significantly to any of the patterns represented by simple cells in its group. Each group Gj should specify a set of simple cells that are, in some sense, similar to one another. In convolutional neural networks [12], for instance, each group is hard-coded to include translated copies of the same filter, resulting in complex cell responses cj that are invariant to small translations. Some algorithms [6, 3] fix the groups Gj ahead of time then optimize the simple cell filters D so that the simple cells in each group share a particular form of statistical dependence.
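The alternating optimization for this K-means-like objective (sometimes called gain-shape vector quantization or "spherical K-means") can be sketched as follows. The function name is ours, and details such as reinitializing empty clusters are omitted:

```python
import numpy as np

def learn_dictionary(X, d, iters=10, seed=0):
    """Alternating optimization sketch for the single-non-zero objective.
    X: (m, n) whitened, normalized input vectors.
    Returns D: (n, d) matrix of unit-norm filters."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[1], d))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        A = X @ D                               # responses a(i) = D^T x(i)
        k = np.abs(A).argmax(axis=1)            # best filter per example (single non-zero)
        coef = A[np.arange(len(X)), k]          # unconstrained non-zero value of C(i)
        S = np.zeros_like(D)
        np.add.at(S.T, k, coef[:, None] * X)    # accumulate sum_i coef_i * x(i) per filter
        norms = np.linalg.norm(S, axis=0)
        live = norms > 1e-10                    # leave empty clusters unchanged
        D[:, live] = S[:, live] / norms[live]
    return D
```

Because the non-zero code value is unconstrained in sign, the assignment step uses the largest absolute response rather than the largest inner product.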
In our system, we will use linear correlation of simple cell responses as our similarity metric, E[a_k a_l], and construct groups Gj that combine similar features according to this metric. Computing the similarity directly would normally require us to estimate the correlations from data, but since the inputs x(i) are whitened we can instead compute the similarity directly from the filter weights: E[a_k a_l] = E[D(k)⊤ x(i) x(i)⊤ D(l)] = D(k)⊤D(l). For convenience in the following, we will actually use the dissimilarity between features, defined as d(k, l) = ||D(k) − D(l)||_2 = √(2 − 2E[a_k a_l]).

To construct the groups G, we will use a version of single-link agglomerative clustering to combine sets of features that have low dissimilarity according to d(k, l).² To construct a single group G0 we begin by choosing a random simple cell filter, say D(k), as the first member. We then search for candidate cells to be added to the group by computing d(k, l) for each simple cell filter D(l) and add D(l) to the group if d(k, l) is less than some limit τ. The algorithm then continues to expand G0 by adding any additional simple cells that are closer than τ to any one of the simple cells already in the group. This procedure continues until there are no more cells to be added, or until the diameter of the group (the dissimilarity between the two furthest cells in the group) reaches a limit ∆.³ This procedure can be executed, quite rapidly, in parallel for a large number of randomly chosen simple cells to act as the “seed” cell, thus allowing us to train many complex cells at once. Compared to the simple cell learning procedure, the computational cost is extremely small even for our rudimentary implementation. In practice, we often generate many groups (e.g., several thousand) and then keep only a random subset of the largest groups. This ensures that we do not end up with many groups that pool over very few simple cells (and hence yield complex cells cj that are not especially invariant).
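The group-growing procedure can be sketched as a serial routine; the function name and the exact ordering of the diameter check are our own choices (the first-layer sign-symmetric variant of d(k, l) is also omitted here):

```python
import numpy as np

def grow_group(D, seed_idx, tau, delta):
    """Single-link growth of one pooling group from a seed filter.
    D: (n, d) unit-norm filters; uses d(k, l) = ||D(k) - D(l)||_2.
    tau: link threshold; delta: group diameter limit."""
    dist = np.linalg.norm(D[:, :, None] - D[:, None, :], axis=0)  # (d, d) pairwise
    group = [seed_idx]
    changed = True
    while changed:
        changed = False
        for l in range(D.shape[1]):
            if l in group:
                continue
            # single-link criterion: closer than tau to *any* current member
            if dist[group, l].min() < tau:
                cand = group + [l]
                # stop growing once the diameter would exceed delta
                if dist[np.ix_(cand, cand)].max() <= delta:
                    group = cand
                    changed = True
    return sorted(group)
```

Because each group depends only on the fixed pairwise distances, many such groups can be grown independently in parallel from different seeds.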
2.3 Algorithm Behavior

Though it seems plausible that pooling simple cells with similar-looking filters according to d(k, l) as above should give us some form of invariant feature, it may not yet be clear why this form of invariance is desirable. To explain, we will consider a simple “toy” data distribution where the behavior of these algorithms is more clear. Specifically, we will generate three heavy-tailed random variables X, Y, Z according to:

σ1, σ2 ∼ L(0, λ),  e1, e2, e3 ∼ N(0, 1)
X = e1σ1,  Y = e2σ1,  Z = e3σ2

Here, σ1, σ2 are scale parameters sampled independently from a Laplace distribution, and e1, e2, e3 are sampled independently from a unit Gaussian. The result is that Z is independent of both X and Y, but X and Y are not independent due to their shared scale parameter σ1 [6]. An isocontour of the density of this distribution is shown in Figure 1a. Other popular algorithms [6, 5, 3] for learning complex-cell features are designed to identify X and Y as features to be pooled together due to the correlation in their energies (scales).

¹This allows us to train roughly half as many simple cell features for the first layer.
²Since the first layer uses g(a) = |a|, we actually use d(k, l) = min{||D(k) − D(l)||_2, ||D(k) + D(l)||_2} to account for −D(l) and +D(l) being essentially the same feature.
³We use τ = 0.3 for the first layer of complex cells and τ = 1.0 for the second layer. These were chosen by examining the typical distance between a filter D(k) and its nearest neighbor. We use ∆ = 1.5 > √2 so that a complex cell group may include orthogonal filters but cannot grow without limit.
One empirical motivation for this kind of invariance comes from natural images: if we have three simple-cell filter responses a1 = D(1)⊤x, a2 = D(2)⊤x, a3 = D(3)⊤x where D(1) and D(2) are Gabor filters in quadrature phase, but D(3) is a Gabor filter at a different orientation, then the responses a1, a2, a3 will tend to have a distribution very similar to the model of X, Y, Z above [7]. By pooling together the responses of a1 and a2 a complex cell is able to detect an edge of fixed orientation invariant to small translations. This model also makes sense for higher-level invariances where X and Y do not merely represent responses of linear filters on image patches but feature responses in a deep network. Indeed, the X–Y plane in Figure 1a is referred to as an “invariant subspace” [8]. Our combination of simple cell and complex cell learning algorithms above tend to learn this same type of invariance. After whitening and normalization, the data points X, Y, Z drawn from the distribution above will lie (roughly) on a sphere. The density of these data points is pictured in Figure 1b, where it can be seen that the highest density areas are in a “belt” in the X–Y plane and at the poles along the Z axis with a low-density region in between. Application of our K-means clustering method to this data results in centroids shown as ∗marks in Figure 1b. From this picture it is clear what a subsequent application of our single-link clustering algorithm will do: it will try to string together the centroids around the “belt” that forms the invariant subspace and avoid connecting them to the (distant) centroids at the poles. Max-pooling over the responses of these filters will result in a complex cell that responds consistently to points in the X–Y plane, but not in the Z direction— that is, we end up with an invariant feature detector very similar to those constructed by existing methods. 
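This toy construction is easy to check numerically. A short simulation (assuming λ = 1 and taking the absolute value of the Laplace draws so the scales are positive, both our own choices) shows that X and Y are uncorrelated yet have strongly correlated energies, while Z is unrelated to both:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200_000
sigma1 = np.abs(rng.laplace(0.0, 1.0, size=m))   # shared scale for X and Y
sigma2 = np.abs(rng.laplace(0.0, 1.0, size=m))   # independent scale for Z
e1, e2, e3 = rng.standard_normal((3, m))
X, Y, Z = e1 * sigma1, e2 * sigma1, e3 * sigma2

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(corr(X, Y))          # near zero: X, Y are uncorrelated...
print(corr(X**2, Y**2))    # ...but their energies correlate (shared sigma1)
print(corr(X**2, Z**2))    # near zero: Z has an independent scale
```

This is exactly the statistical dependence that energy-correlation methods pool on, and that the whitened-filter distance d(k, l) detects indirectly.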
Figure 1c depicts this result, along with visualizations of the hypothetical Gabor filters D(1), D(2), D(3) described above that might correspond to the learned centroids.

Figure 1: (a) An isocontour of a sparse probability distribution over variables X, Y, and Z. (See text for details.) (b) A visualization of the spherical density obtained from the distribution in (a) after normalization. Red areas are high density and dark blue areas are low density. Centroids learned by K-means from this data are shown on the surface of the sphere as * marks. (c) A pooling unit identified by applying single-link clustering to the centroids (black links join pooled filters). (See text.)

2.4 Feature Hierarchy

Now that we have defined our simple and complex cell learning algorithms, we can use them to train alternating layers of selective and invariant features. We will train 4 layers total, 2 of each type. The architecture we use is pictured in Figure 2a.

Figure 2: (a) Cross-section of network architecture used for experiments. Full layer sizes are shown at right. (b) Randomly selected 128-by-96 images from our dataset.

Our first layer of simple cell features are locally connected to 16 non-overlapping 8-by-8 pixel patches within the 32-by-32 pixel image. These features are trained by building a dataset of 8-by-8 patches and passing them to our simple cell learning procedure to train 6400 first-layer filters D ∈ ℜ^{64×6400}. We apply our complex cell learning procedure to this bank of filters to find 128 pooling groups G1, G2, . . . , G128. Using these results, we can extract our simple cell and complex cell features from each 8-by-8 pixel subpatch of the 32-by-32 image. Specifically, the linear filters D are used to extract the first layer simple cell responses s_i(p) = g(D(i)⊤x(p)), where x(p), p = 1, . . . , 16 are the 16 subpatches of the 32-by-32 image. We then compute the complex cell feature responses c_j(p) = max_{k∈Gj} s_k(p) for each patch.
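Given learned filters D and pooling groups G, extracting first-layer features for one image reduces to a matrix product and per-group maxima. The shapes follow the text (16 subpatches of 64 pixels each); the function name is ours:

```python
import numpy as np

def layer1_features(x_patches, D, groups, g=np.abs):
    """First-layer responses for one image.
    x_patches: (16, 64) array of flattened 8-by-8 subpatches.
    D: (64, d) simple-cell filters; groups: list of index arrays G_j.
    Returns a (16 * len(groups),) vector of complex-cell responses."""
    S = g(x_patches @ D)                                    # s_i(p) = g(D(i)^T x(p))
    C = np.stack([S[:, gj].max(axis=1) for gj in groups],   # c_j(p) = max over G_j
                 axis=1)                                    # shape (16, n_groups)
    return C.ravel()
```

With 128 groups this yields the 128-by-4-by-4 = 2048 complex-cell responses per image described below.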
Once complete, we have an array of 128-by-4-by-4 = 2048 complex cell responses c representing each 32-by-32 image. These responses are then used to form a new dataset from which to learn a second layer of simple cells with K-means. In our experiments we train 150,000 second layer simple cells. We denote the second layer of learned filters as ¯D, and the second layer simple cell responses as ¯s = ¯D⊤c. Applying again our complex cell learning procedure to ¯D, we obtain pooling groups ¯G, and complex cells ¯c defined analogously.

3 Experiments

As described above, we ran our algorithm on patches harvested from YouTube thumbnails downloaded from the web. Specifically, we downloaded the thumbnails for over 1.4 million YouTube videos⁴, some of which are shown in Figure 2b. These images were downsampled to 128-by-96 pixels and converted to grayscale. We cropped 57 million randomly selected 32-by-32 pixel patches from these images to form our unlabeled training set. No supervision was used—thus most patches contain partial views of objects or clutter at differing scales. We ran our algorithm on these images using a cluster of 30 machines over 3 days—virtually all of the time spent training the 150,000 second-layer features.⁵ We will now visualize these features and check whether any of them have learned to identify an object class.

3.1 Low-Level Simple and Complex Cell Visualizations

We visualize the learned low-level filters D and pooling groups G to verify that they are, in fact, similar to those learned by other well-known algorithms. It is already known that our K-means-based algorithm learns simple-cell-like filters (e.g., edge-like features, as well as spots, curves) as shown in Figure 3a. To visualize the learned complex cells we inspect the simple cell filters that belong to each of the pooling groups. The filters for several pooling groups are shown in Figure 3b. As expected, the filters cover a spectrum of similar image structures.
Though many pairs of filters are extremely similar⁶, there are also other pairs that differ significantly yet are included in the group due to the single-link clustering method. Note that some of our groups are composed of similar edges at differing locations, and thus appear to have learned translation invariance as expected.

⁴We cannot select videos at random, so we query videos under each YouTube category (“Pets & Animals”, “Science & Technology”, etc.) along with a date (e.g., “January 2001”).
⁵Though this is a fairly long run, we note that 1 iteration of K-means is cheaper than a single batch gradient step for most other methods able to learn high-level invariant features. We expect that these experiments would be impossible to perform in a reasonable amount of time on our cluster with another algorithm.
⁶Some filters have reversed polarity due to our use of absolute-value rectification during training of the first layer.

3.2 Higher-Level Simple and Complex Cells

Finally, we inspect the learned higher layer simple cell and complex cell features, ¯s and ¯c, particularly to see whether any of them are selective for an object class. The most commonly occurring object in these video thumbnails is human faces (even though we estimate that much less than 0.1% of patches contain a well-framed face). Thus we search through our learned features for cells that are selective for human faces at varying locations and scales. To locate such features we use a dataset of labeled images: several hundred thousand non-face images as well as tens of thousands of known face images from the “Labeled Faces in the Wild” (LFW) dataset [4].⁷ To test whether any of the ¯s simple cell features are selective for faces, we use each feature by itself as a “detector” on the labeled dataset: we compute the area under the precision-recall curve (AUC) obtained when each feature’s response ¯s_i is used as a simple classifier.
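Scoring a single feature this way amounts to sorting the validation examples by the feature's response and integrating the precision-recall curve. A small sketch (computing average precision, one common estimator of PR-curve area; the function name is ours):

```python
import numpy as np

def pr_auc(scores, labels):
    """Average precision of one real-valued response used as a detector.
    scores: (n,) feature responses; labels: (n,) 0/1 face indicators."""
    order = np.argsort(-scores)              # sweep threshold from high to low
    y = np.asarray(labels)[order]
    hits = np.cumsum(y)                      # true positives at each threshold
    precision = hits / np.arange(1, len(y) + 1)
    # average the precision attained at each true positive (equals PR area)
    return precision[y == 1].mean()
```

Ranking every learned feature by this score is how the face-selective cells discussed below are identified.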
Indeed, it turns out that there are a handful of high-level features that tend to be good detectors for faces. The precision-recall curves for the best 5 detectors are shown in Figure 3c (top curves); the best of these achieves 86% AUC. We visualize 16 of the simple cell features identified by this procedure⁸ in Figure 4a along with a sampling of the image patches that activate the first of these cells strongly. There it can be seen that these simple cells are selective for faces located at particular locations and scales. Within each group the faces differ slightly due to the learned invariance provided by the complex cells in the lower layer (and thus the mean of each group of images is blurry).

Figure 3: (a) First layer simple cell filters learned by K-means. (b) Sets of simple cell filters belonging to three pooling groups learned by our complex cell training algorithm. (c) Precision-Recall curves showing selectivity for human faces of 5 low-level simple cells trained from a full 32-by-32 patch (red curves, bottom) versus 5 higher-level simple cells (green curves, top). Performance of the best linear filter found by SVM from labeled data is also shown (black dotted curve, middle).

It may appear that this result could be obtained by applying our simple cell learning procedure directly to full 32-by-32 images without any attempts at incorporating local invariance. That is, rather than training D (the first-layer filters) from 8-by-8 patches, we could try to train D directly from the 32-by-32 images. This turns out not to be successful. The lower curves in Figure 3c are the precision-recall curves for the best 5 simple cells found in this way. Clearly the higher-level features are dramatically better detectors than simple cells built directly from pixels⁹ (only 64% AUC).
⁷Our positive face samples include the entire set of labeled faces, plus randomly scaled and translated copies.
⁸We visualize the higher-level features by averaging together the 100 unlabeled images from our YouTube dataset that elicit the strongest activation.
⁹These simple cells were trained by applying K-means to normalized, whitened 32-by-32 pixel patches from a smaller unlabeled set known to have a higher concentration of faces. Due to this, a handful of centroids look roughly like face exemplars and act as simple “template matchers”. When trained on the full dataset (which contains far fewer faces), K-means learns only edge and arc features which perform much worse (about 45% AUC).

        Best 32-by-32 simple cell   Best in ¯s   Best in ¯c   Supervised Linear SVM
AUC     64%                         86%          80%          77%

Table 1: Area under PR curve for different cells on our face detection validation set. Only the SVM uses labeled data.

Figure 4: Visualizations. (a) A collection of patches from our unlabeled dataset that maximally activate one of the high-level simple cells from ¯s. (b) The mean of the top stimuli for a handful of face-selective cells in ¯s. (c) Visualization of the face-selective cells that belong to one of the complex cells in ¯c discovered by the single-link clustering algorithm applied to ¯D. (d) A collection of unlabeled patches that elicit a strong response from the complex cell visualized in (c) — virtually all are faces, at a variety of scales and positions. Compare to (a).

As a second control experiment we train a linear SVM from half of the labeled data using only pixels as input (contrast-normalized and whitened). The PR curve for this linear classifier is shown in Figure 3c as a black dotted line. There we see that the supervised linear classifier is significantly better (77% AUC) than the 32-by-32 linear simple cells.
On the other hand, it does not perform as well as the higher level simple cells learned by our system even though it is likely the best possible linear detector. Finally, we inspect the higher-level complex cells learned by applying the same agglomerative clustering procedure to the higher-level simple cell filters. Due to the invariance introduced at the lower layers, two simple cells that detect faces at slightly different locations or scales will often have very similar filter weights, and thus we expect our algorithm to find and combine these simple cells into higher-level invariant features. To visualize our higher-level complex cell features ¯c, we can simply look at visualizations for all of the simple cells in each of the groups ¯G. These visualizations show us the set of patches that strongly activate each simple cell, and hence also activate the complex cell. The results of such a visualization for one group that was found to contain only face-selective cells is shown in Figure 4c. There it can be seen that this single “complex cell” selects for faces at multiple positions and scales. A sampling of image patches collected from the unlabeled data that strongly activate the corresponding complex cell are shown in Figure 4d. We see that the complex cell detects many faces but at a much wider variety of positions and scales compared to the simple cells, demonstrating that even “higher level” invariances are being captured, including scale invariance. Benchmarked on our labeled set, this complex cell achieves 80.0% AUC—somewhat worse than the very best simple cells, but still in the top 10 performing cells in the entire network. Interestingly, the qualitative results in Figure 4d are excellent, and we believe these images represent an even greater range of variations than those in the labeled set. Thus the 80% AUC number may somewhat under-rate the quality of these features.
These results suggest that the basic notions of invariance and selectivity that underpin popular feature learning algorithms may be sufficient to discover the kinds of high-level features that we desire, possibly including whole object classes robust to local and global variations. Indeed, using simple implementations of selective and invariant features closely related to existing algorithms, we have found that it is possible to build features with high selectivity for a coherent, commonly occurring object class. Though human faces occur only very rarely in our very large dataset, it is clear that the complex cell visualized in Figure 4d is adept at spotting them amongst tens of millions of images. The enabler for these results is the scalability of the algorithms we have employed, suggesting that other systems can likely achieve similar results to the ones shown here if their computational limitations are overcome.

4 Related Work

The method that we have proposed has close connections to a wide array of prior work. For instance, the basic notions of selectivity and invariance that drive our system can be identified in many other algorithms: Group sparse coding methods [3] and Topographic ICA [6, 7] build invariances by pooling simple cells that lie in an invariant subspace, identified by strong scale correlations between cell responses. The advantage of this criterion is that it can determine which features to pool together even when the simple cell filters are orthogonal (where they would be too far apart for our algorithm to recognize their relationship). Our results suggest that while this type of invariance is very useful, there exist simple ways of achieving a similar effect. Our approach is also connected with methods that attempt to model the geometric (e.g., manifold) structure of the input space.
For instance, Contractive Auto-Encoders [16, 15], Local Coordinate Coding [20], and Locality-constrained Linear Coding [19] learn sparse linear filters while attempting to model the manifold structure staked out by these filters (sometimes termed “anchor points”). One interpretation of our method, suggested by Figure 1b, is that with extremely overcomplete dictionaries it is possible to use trivial distance calculations to identify neighboring points on the manifold. This in turn allows us to construct features invariant to shifts along the manifold with little effort. [1] use similar intuitions to propose a clustering method similar to our approach. One of our key results, the unsupervised discovery of features selective for human faces is fairly unique (though seen recently in the extremely large system of [11]). Results of this kind have appeared previously in restricted settings. For instance, [13] trained Deep Belief Network models that decomposed object classes like faces and cars into parts using a probabilistic max-pooling to gain translation invariance. Similarly, [21] has shown results of a similar flavor on the Caltech recognition datasets. [22] showed that a probabilistic model (with some hand-coded geometric knowledge) can recover clusters containing 20 known object class silhouettes from outlines in the LabelMe dataset. Other authors have shown the ability to discover detailed manifold structure (e.g., as seen in the results of embedding algorithms [18, 17]) when trained in similarly restricted settings. The structure that these methods discover, however, is far more apparent when we are using labeled, tightly cropped images. Even if we do not use the labels themselves the labeled examples are, by construction, highly clustered: faces will be separated from other objects because there are no partial faces or random clutter. In our dataset, no supervision is used except to probe the representation post hoc. 
Finally, we note the recent, extensive findings of Le et al. [11]. In that work an extremely large 9-layer neural network based on a TICA-like learning algorithm [10, 6] is also capable of identifying a wide variety of object classes (including cats and upper-bodies of people) seen in YouTube videos. Our results complement this work in several key ways. First, by training on smaller randomly cropped patches, we show that object-selectivity may still be obtained even when objects are almost never framed properly within the image—ruling out this bias as the source of object-selectivity. Second, we have shown that the key concepts (sparse selective filters and invariant-subspace pooling) used in their system can also be implemented in a different way using scalable clustering algorithms, allowing us to achieve results reminiscent of theirs using a vastly smaller amount of computing power. (We used 240 cores, while their large-scale system is composed of 16,000 cores.) In combination, these results point strongly to the conclusion that almost any highly scalable implementation of existing feature-learning concepts is enough to discover these sophisticated high-level representations.

5 Conclusions

In this paper we have presented a feature learning system composed of two highly scalable but otherwise very simple learning algorithms: K-means clustering to find sparse linear filters (“simple cells”) and agglomerative clustering to stitch simple cells together into invariant features (“complex cells”). We showed that these two components are, in fact, capable of learning complicated high-level representations in large scale experiments on unlabeled images pulled from YouTube. Specifically, we found that higher level simple cells could learn to detect human faces without any supervision at all, and that our complex-cell learning procedure combined these into even higher-level invariances.
These results indicate that we are apparently equipped with many of the key principles needed to achieve such results and that a critical remaining puzzle is how to scale up our algorithms to the sizes needed to capture more object classes and even more sophisticated invariances.

References

[1] Y. Boureau, N. L. Roux, F. Bach, J. Ponce, and Y. LeCun. Ask the locals: multi-way local pooling for image recognition. In 13th International Conference on Computer Vision, pages 2651–2658, 2011.
[2] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning, pages 921–928, 2011.
[3] P. Garrigues and B. Olshausen. Group sparse coding with a laplacian scale mixture prior. In Advances in Neural Information Processing Systems 23, pages 676–684, 2010.
[4] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 0749, University of Massachusetts, Amherst, October 2007.
[5] A. Hyvärinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000.
[6] A. Hyvärinen, P. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[7] A. Hyvärinen, J. Hurri, and P. Hoyer. Natural Image Statistics. Springer-Verlag, 2009.
[8] T. Kohonen. Emergence of invariant-feature detectors in self-organization. In M. Palaniswami et al., editor, Computational Intelligence, A Dynamic System Perspective, pages 17–31. IEEE Press, New York, 1995.
[9] A. Krizhevsky. Learning multiple layers of features from Tiny Images. Master’s thesis, Dept. of Comp. Sci., University of Toronto, 2009.
[10] Q. Le, A. Karpenko, J. Ngiam, and A. Ng. ICA with reconstruction cost for efficient overcomplete feature learning. In Advances in Neural Information Processing Systems, 2011.
[11] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.
[12] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541–551, 1989.
[13] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning, pages 609–616, 2009.
[14] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2, 1999.
[15] S. Rifai, Y. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In Advances in Neural Information Processing Systems, 2011.
[16] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In International Conference on Machine Learning, 2011.
[17] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000.
[18] L. van der Maaten and G. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, November 2008.
[19] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In Computer Vision and Pattern Recognition, pages 3360–3367, 2010.
[20] K. Yu, T. Zhang, and Y. Gong. Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems 22, pages 2223–2231, 2009.
[21] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In International Conference on Computer Vision, 2011.
[22] L. Zhu, Y. Chen, A. Torralba, W. Freeman, and A. Yuille. Part and Appearance Sharing: Recursive Compositional Models for Multi-View Multi-Object Detection. In Computer Vision and Pattern Recognition, 2010.
Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes Michael Bryant and Erik B. Sudderth Department of Computer Science, Brown University, Providence, RI mbryantj@gmail.com, sudderth@cs.brown.edu Abstract Variational methods provide a computationally scalable alternative to Monte Carlo methods for large-scale, Bayesian nonparametric learning. In practice, however, conventional batch and online variational methods quickly become trapped in local optima. In this paper, we consider a nonparametric topic model based on the hierarchical Dirichlet process (HDP), and develop a novel online variational inference algorithm based on split-merge topic updates. We derive a simpler and faster variational approximation of the HDP, and show that by intelligently splitting and merging components of the variational posterior, we can achieve substantially better predictions of test data than conventional online and batch variational algorithms. For streaming analysis of large datasets where batch analysis is infeasible, we show that our split-merge updates better capture the nonparametric properties of the underlying model, allowing continual learning of new topics. 1 Introduction Bayesian nonparametric methods provide an increasingly important framework for unsupervised learning from structured data. For example, the hierarchical Dirichlet process (HDP) [1] provides a general approach to joint clustering of grouped data, and leads to effective nonparametric topic models. While nonparametric methods are best motivated by their potential to capture the details of large datasets, practical applications have been limited by the poor computational scaling of conventional Monte Carlo learning algorithms. Mean field variational methods provide an alternative, optimization-based framework for nonparametric learning [2, 3]. 
Aiming at larger-scale applications, recent work [4] has extended online variational methods [5] for the parametric, latent Dirichlet allocation (LDA) topic model [6] to the HDP. While this online approach can produce reasonable models of large data streams, we show that the variational posteriors of existing algorithms often converge to poor local optima. Multiple runs are usually necessary to show robust performance, reducing the desired computational gains. Furthermore, by applying a fixed truncation to the number of posterior topics or clusters, conventional variational methods limit the ability of purportedly nonparametric models to fully adapt to the data. In this paper, we propose novel split-merge moves for online variational inference for the HDP (oHDP) which result in much better predictive performance. We validate our approach on two corpora, one with millions of documents. We also propose an alternative, direct assignment HDP representation which is faster and more accurate than the Chinese restaurant franchise representation used in prior work [4]. Additionally, the inclusion of split-merge moves during posterior inference allows us to dynamically vary the truncation level throughout learning. While conservative truncations can be theoretically justified for batch analysis of fixed-size datasets [2], our data-driven adaptation of the truncation level is far better suited to large-scale analysis of streaming data. Split-merge proposals have been previously investigated for Monte Carlo analysis of nonparametric models [7, 8, 9]. They have also been used for maximum likelihood and variational analysis of parametric models [10, 11, 12, 13]. Figure 1: Directed graphical representation of a hierarchical Dirichlet process topic model, in which an unbounded collection of topics φk model the Nj words in each of D documents. Topics occur with frequency πj in document j, and with frequency β across the full corpus.
These deterministic algorithms validate split-merge proposals by evaluating a batch objective on the entire dataset, an approach which is unexplored for nonparametric models and infeasible for online learning. We instead optimize the variational objective via stochastic gradient ascent, and split or merge based on only a noisy estimate of the variational lower bound. Over time, these local decisions lead to global estimates of the number of topics present in a given corpus. We review the HDP and conventional variational methods in Sec. 2, develop our novel split-merge procedure in Sec. 3, and evaluate on various document corpora in Sec. 4. 2 Variational Inference for Bayesian Nonparametric Models 2.1 Hierarchical Dirichlet processes The HDP is a hierarchical nonparametric prior for grouped mixed-membership data. In its simplest form, it consists of a top-level DP and a collection of D bottom-level DPs (indexed by j) which share the top-level DP as their base measure: G0 ∼ DP(γH), Gj ∼ DP(αG0), j = 1, . . . , D. Here, H is a base measure on some parameter space, and γ > 0, α > 0 are concentration parameters. Using a stick-breaking representation [1] of the global measure G0, the HDP can be expressed as G0 = Σ_{k=1}^∞ βk δφk, Gj = Σ_{k=1}^∞ πjk δφk. The global weights β are drawn from a stick-breaking distribution β ∼ GEM(γ), and atoms are independently drawn as φk ∼ H. Each Gj shares atoms with the global measure G0, and the lower-level weights are drawn πj ∼ DP(αβ). For this direct assignment representation, the k indices for each Gj index directly into the global set of atoms. To complete the definition of the general HDP, parameters ψjn ∼ Gj are then drawn for each observation n in group j, and observations are drawn xjn ∼ F(ψjn) for some likelihood family F. Note that ψjn = φzjn for some discrete indicator zjn. In this paper we focus on an application of the HDP to modeling document corpora. The topics φk ∼ Dirichlet(η) are distributions on a vocabulary of W words.
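The stick-breaking construction above is easy to simulate. The following is a minimal sketch (not the authors' code): we draw the first K corpus-level weights of β ∼ GEM(γ), collect the truncated tail's mass in a final component so the vector sums to one, and then draw a document's weights πj ∼ DP(αβ), which on this finite partition of atoms is exactly a Dirichlet draw. All function names are illustrative.

```python
import numpy as np

def stick_breaking(gamma, K, rng):
    """First K weights of beta ~ GEM(gamma), plus the leftover tail mass."""
    v = rng.beta(1.0, gamma, size=K)                       # stick-breaking fractions
    stick_left = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    beta = v * stick_left                                  # beta_k = v_k * prod_{i<k} (1 - v_i)
    tail = stick_left[-1] * (1.0 - v[-1])                  # mass of all truncated topics
    return np.append(beta, tail)

def document_weights(alpha, beta, rng):
    """pi_j ~ DP(alpha * beta), restricted to the K+1-cell partition."""
    return rng.dirichlet(alpha * beta)

rng = np.random.default_rng(0)
beta = stick_breaking(1.0, 10, rng)      # 10 explicit topics + tail mass
pi_j = document_weights(0.5, beta, rng)
```

Repeating the second draw per document gives the document-specific topic frequencies that the inference below approximates.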
The global topic weights, β ∼ GEM(γ), are still drawn from a stick-breaking prior. For each document j, document-specific topic frequencies are drawn πj ∼ DP(αβ). Then for each word index n in document j, a topic indicator is drawn zjn ∼ Categorical(πj), and finally a word is drawn wjn ∼ Categorical(φzjn). 2.2 Batch Variational Inference for the HDP We use variational inference [14] to approximate the posterior of the latent variables (φ, β, π, z) — the topics, global topic weights, document-specific topic weights, and topic indicators, respectively — with a tractable distribution q, indexed by a set of free variational parameters. Appealing to mean field methods, our variational distribution is fully factorized, and is of the form q(φ, β, π, z | λ, θ, ϕ) = q(β) Π_{k=1}^∞ q(φk | λk) Π_{j=1}^D q(πj | θj) Π_{n=1}^{Nj} q(zjn | ϕjn), (1) where D is the number of documents in the corpus and Nj is the number of words in document j. Individual distributions are selected from appropriate exponential families: q(β) = δβ∗(β), q(φk | λk) = Dirichlet(φk | λk), q(πj | θj) = Dirichlet(πj | θj), q(zjn) = Categorical(zjn | ϕjn), where δβ∗(β) denotes a degenerate distribution at the point β∗.1 In our update derivations below, we use ϕjw to denote the shared ϕjn for all word tokens in document j of type w. Selection of an appropriate truncation strategy is crucial to the accuracy of variational methods for nonparametric models. Here, we truncate the topic indicator distributions by fixing q(zjn = k) = 0 for k > K, where K is a threshold which varies dynamically in our later algorithms. With this assumption, the topic distributions with indices greater than K are conditionally independent of the observed data; we may thus ignore them and tractably update the remaining parameters with respect to the true, infinite model. A similar truncation has been previously used in the context of an otherwise more complex collapsed variational method [3].
Desirably, this truncation is nested such that increasing K always gives potentially improved bounds, but does not require the computation of infinite sums, as in [16]. In contrast, approximations based on truncations of the stick-breaking topic frequency prior [2, 4] are not nested, and their artifactual placement of extra mass on the final topic K is less suitable for our split-merge online variational inference. Via standard convexity arguments [14], we lower bound the marginal log likelihood of the observed data using the expected complete-data log likelihood and the entropy of the variational distribution, L(q) ≜ Eq[log p(φ, β, π, z, w | α, γ, η)] − Eq[log q(φ, π, z | λ, θ, ϕ)] = Eq[log p(w | z, φ)] + Eq[log p(z | π)] + Eq[log p(π | αβ)] + Eq[log p(φ | η)] + Eq[log p(β | γ)] − Eq[log q(z | ϕ)] − Eq[log q(π | θ)] − Eq[log q(φ | λ)] = Σ_{j=1}^D { Eq[log p(wj | zj, φ)] + Eq[log p(zj | πj)] + Eq[log p(πj | αβ)] − Eq[log q(zj | ϕj)] − Eq[log q(πj | θj)] + (1/D)(Eq[log p(φ | η)] + Eq[log p(β | γ)] − Eq[log q(φ | λ)]) }, (2) and maximize this quantity by coordinate ascent on the variational parameters. The expectations are with respect to the variational distribution. Each expectation is dependent on only a subset of the variational parameters; we leave off particular subscripts for notational clarity. Note that the expansion of the variational lower bound in (2) contains all terms inside a summation over documents. This is the key observation that allowed [5] to develop an online inference algorithm for LDA. A full expansion of the variational objective is given in the supplemental material. Taking derivatives of L(q) with respect to each of the variational parameters yields the following updates: ϕjwk ∝ exp{Eq[log φkw] + Eq[log πjk]}, (3) θjk ← αβk + Σ_{w=1}^W nw(j) ϕjwk, (4) λkw ← η + Σ_{j=1}^D nw(j) ϕjwk. (5) Here, nw(j) is the number of times word w appears in document j.
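As a sketch of how updates (3) and (4) interact for a single document (this is our illustration, not the authors' implementation), the E-step alternates token responsibilities and the document Dirichlet parameter until convergence. A finite-difference digamma stands in for a special-function library, purely to keep the sketch self-contained:

```python
import numpy as np
from math import lgamma

def digamma(x, h=1e-6):
    # central-difference approximation to psi(x); adequate for a sketch
    return (lgamma(x + h) - lgamma(x - h)) / (2.0 * h)

dg = np.vectorize(digamma)

def e_step(n_j, lam, alpha, beta, iters=30):
    """Coordinate ascent on (phi_j, theta_j) for one document, following
    updates (3)-(4). n_j: word-count vector; lam: K x W topic parameters."""
    K, W = lam.shape
    Elog_phi = dg(lam) - dg(lam.sum(axis=1))[:, None]   # E_q[log phi_kw]
    theta_j = alpha * beta + n_j.sum() / K              # simple initialization
    for _ in range(iters):
        Elog_pi = dg(theta_j) - digamma(theta_j.sum())  # E_q[log pi_jk]
        log_phi = Elog_phi.T + Elog_pi                  # update (3), unnormalized
        log_phi -= log_phi.max(axis=1, keepdims=True)
        phi_j = np.exp(log_phi)
        phi_j /= phi_j.sum(axis=1, keepdims=True)       # normalize over topics
        theta_j = alpha * beta + n_j @ phi_j            # update (4)
    return phi_j, theta_j

rng = np.random.default_rng(0)
lam = rng.gamma(2.0, 1.0, size=(3, 5)) + 0.1
n_j = np.array([2.0, 0.0, 1.0, 3.0, 1.0])
phi_j, theta_j = e_step(n_j, lam, alpha=0.5, beta=np.ones(3) / 3)
```

Summing update (4) over topics shows θj's total mass equals α plus the document length, a quick sanity check on any implementation.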
The expectations in (3) are Eq[log φkw] = Ψ(λkw) − Ψ(Σi λki) and Eq[log πjk] = Ψ(θjk) − Ψ(Σi θji), where Ψ(x) is the digamma function, the first derivative of the log of the gamma function. In evaluating our objective, we represent β∗ as a (K + 1)-dimensional vector containing the probabilities of the first K topics, and the total mass of all other topics. While β∗ cannot be optimized in closed form, it can be updated via gradient-based methods; we use a variant of L-BFGS. Drawing a parallel between variational inference and the expectation maximization (EM) algorithm, we label the document-specific updates of (ϕj, θj) the E-step, and the corpus-wide updates of (λ, β) the M-step. 1We expect β to have small posterior variance in large datasets, and using a point estimate β∗ simplifies variational derivations for our direct assignment formulation. As empirically explored for the HDP-PCFG [15], updates to the global topic weights have much less predictive impact than improvements to topic distributions. 2.3 Online Variational Inference Batch variational inference requires a full pass through the data at each iteration, making it computationally infeasible for large datasets and impossible for streaming data. To remedy this, we adapt and improve recent work on online variational inference algorithms [4, 5]. The form of the lower bound in (2), as a scaled expectation with respect to the document collection, suggests an online learning algorithm. Given a learning rate ρt satisfying Σ_{t=0}^∞ ρt = ∞ and Σ_{t=0}^∞ ρt² < ∞, we can optimize the variational objective stochastically. Each update begins by sampling a “mini-batch” of documents S, of size |S|. After updating the mini-batch of document-specific parameters (ϕj, θj) by iterating (3, 4), we update the corpus-wide parameters as λkw ← (1 − ρt)λkw + ρt λ̂kw, (6) β∗k ← (1 − ρt)β∗k + ρt β̂k, (7) where λ̂kw is a set of sufficient statistics for topic k, computed from a noisy estimate of (5): λ̂kw = η + (D/|S|) Σ_{j∈S} nw(j) ϕjwk.
(8) The candidate topic weights β̂ are found via gradient-based optimization on S. The resulting inference algorithm is similar to conventional batch methods, but is applicable to streaming, big data. 3 Split-Merge Updates for Online Variational Inference We develop a data-driven split-merge algorithm for online variational inference for the HDP, referred to as oHDP-SM. The algorithm dynamically expands and contracts the truncation level K by splitting and merging topics during specialized moves which are interleaved with standard online variational updates. The resulting model truly allows the number of topics to grow with the data. As such, we do not have to employ the technique of [4, 3] and other truncated variational approaches of setting K above the expected number of topics and relying on the inference to infer a smaller number. Instead, we initialize with small K and let the inference discover new topics as it progresses, similar to the approach used in [17]. One can see how this property would be desirable in an online setting, as documents seen after many inference steps may still create new topics. 3.1 Split: Creation of New Topics Given the result of analyzing one mini-batch q∗ = ({ϕj, θj}_{j=1}^{|S|}, λ, β∗), and the corresponding value of the lower bound L(q∗), we consider splitting topic k into two topics k′, k′′.2 The split procedure proceeds as follows: (1) initialize all variational posteriors to break symmetry between the new topics, using information from the data; (2) refine the new variational posteriors using a restricted iteration; (3) accept or reject the split via the change in variational objective value. Initialize new variational posteriors To break symmetry, we initialize the new topic posteriors (λk′, λk′′), and topic weights (β∗k′, β∗k′′), using sufficient statistics from the previous iteration: λk′ = (1 − ρt)λk, λk′′ = ρt λ̂k, β∗k′ = (1 − ρt)β∗k, β∗k′′ = ρt β̂k.
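The stochastic M-step of eqs. (6) and (8), with the paper's learning-rate schedule ρt = (τ + t)^−κ (Sec. 4), can be sketched in a few lines. This is an illustrative sketch under assumed variable names, not the authors' code:

```python
import numpy as np

def online_m_step(lam, batch_stats, D, S, t, eta=0.01, tau=1.0, kappa=0.5):
    """Blend old topic parameters with noisy statistics scaled up from a
    mini-batch. batch_stats[k, w] = sum_{j in S} n_w(j) * phi_jwk."""
    rho_t = (tau + t) ** (-kappa)                  # learning-rate schedule
    lam_hat = eta + (D / S) * batch_stats          # eq. (8): noisy estimate of (5)
    return (1.0 - rho_t) * lam + rho_t * lam_hat   # eq. (6)

lam = np.ones((4, 6))
stats = np.full((4, 6), 2.0)
lam0 = online_m_step(lam, stats, D=1000, S=10, t=0)   # rho_0 = 1: jump to lam_hat
lam3 = online_m_step(lam, stats, D=1000, S=10, t=3)   # rho_3 = 0.5: halfway blend
```

Note that with τ = 1 the first update fully replaces λ with the scaled mini-batch estimate, after which the step sizes decay as required for stochastic convergence.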
Intuitively, we expect the sufficient statistics to provide insight into how a topic was actually used during the E-step. The minibatch-specific parameters {ϕj, θj}_{j=1}^{|S|} are then initialized as follows: ϕjwk′ = ωk ϕjwk, ϕjwk′′ = (1 − ωk)ϕjwk, θjk′ = ωk θjk, θjk′′ = (1 − ωk)θjk, with the weights defined as ωk = βk′/(βk′ + βk′′). 2Technically, we replace topic k with topic k′ and add k′′ as a new topic. In practice, we found that the order of topics in the global stick-breaking distribution had little effect on overall algorithm performance.
Algorithm 1 Restricted iteration
1: initialize (λℓ, βℓ) for ℓ ∈ {k′, k′′}
2: for j ∈ S do
3: initialize (ϕj, θj) for ℓ ∈ {k′, k′′}
4: while not converged do
5: update (ϕj, θj) for ℓ ∈ {k′, k′′} using (3, 4)
6: end while
7: update (λℓ, βℓ) for ℓ ∈ {k′, k′′} using (6, 7)
8: end for
Restricted iteration After initializing the variational parameters for the new topics, we update them through a restricted iteration of online variational inference. The restricted iteration consists of restricted analogues to both the E-step and the M-step, where all parameters except those for the new topics are held constant. This procedure is similar to, and inspired by, the “partial E-step” for split-merge EM [10] and restricted Gibbs updates for split-merge MCMC methods [7]. All values of ϕjwℓ and θjℓ, ℓ ∉ {k′, k′′}, remain unchanged. It is important to note that even though these values are not updated, they are still used in the calculations for both the variational expectation of πj and the normalization of ϕ. In particular, ϕjwk′ = exp{Eq[log φk′w] + Eq[log πjk′]} / Σℓ∈T exp{Eq[log φℓw] + Eq[log πjℓ]}, and Eq[log πjk′] = Ψ(θjk′) − Ψ(Σk∈T θjk), where T is the original set of topics, minus k, plus k′ and k′′. The expected log word probabilities Eq[log φk′w] and Eq[log φk′′w] are computed using the newly updated λ values.
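The ωk-weighted division of a document's responsibility for the split topic can be sketched directly from the equations above (an illustrative sketch; names are ours):

```python
def split_document_params(phi_jk, theta_jk, beta_k1, beta_k2):
    """Divide a document's responsibility for split topic k between the new
    topics k' and k'' in proportion to omega_k = beta_k' / (beta_k' + beta_k'')."""
    w = beta_k1 / (beta_k1 + beta_k2)
    return (w * phi_jk, (1.0 - w) * phi_jk), (w * theta_jk, (1.0 - w) * theta_jk)

# Example: beta_k' = 0.15, beta_k'' = 0.05 gives omega_k = 0.75
(phi_k1, phi_k2), (theta_k1, theta_k2) = split_document_params(0.4, 2.0, 0.15, 0.05)
```

Because the two shares always sum to the original values, the split leaves each document's total responsibility mass unchanged, which is what lets the restricted iteration refine only the new topics.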
Evaluate Split Quality Let ϕsplit for minibatch S be ϕ as defined above, but with ϕjwk replaced by the ϕjwk′ and ϕjwk′′ learned in the restricted E-step. Let θsplit, λsplit and β∗split be defined similarly. Now we have a new model state qsplit(k) = ({ϕsplit,j, θsplit,j}_{j=1}^{|S|}, λsplit, β∗split). We calculate L(qsplit(k)), and if L(qsplit(k)) > L(q∗), we update the new model state q∗ ← qsplit(k), accepting the split. If L(qsplit(k)) < L(q∗), then we go back and test another split, until all splits are tested. In practice we limit the maximum number of allowed splits each iteration to a small constant. If we wish to allow the model to expand the number of topics more quickly, we can increase this number. Finally, it is important to note that all aspects of the split procedure are driven by the data — the new topics are initialized using data-driven proposals, refined by re-running the variational E-step, and accepted based on an unbiased estimate of the change in the variational objective. 3.2 Merge: Removal of Redundant Topics Consider a candidate merge of two topics, k′ and k′′, into a new topic k. For batch variational methods, it is straightforward to determine whether such a merge will increase or decrease the variational objective by combining all parameters for all documents, ϕjwk = ϕjwk′ + ϕjwk′′, θjk = θjk′ + θjk′′, βk = βk′ + βk′′, λk = λk′ + λk′′, and computing the difference in the variational objective before and after the merge. Because many terms cancel, computing this bound change is fairly computationally inexpensive, but it can still be computationally infeasible to consider all pairs of topics for large K. Instead, we identify potential merge candidates by looking at the sample covariance of the θj vectors across the corpus (or minibatch). Topics with positive covariance above a certain threshold have the quantitative effects of their merge evaluated.
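The covariance screening just described is cheap to compute. A minimal sketch (the zero threshold is an illustrative choice, not the paper's exact setting):

```python
import numpy as np

def merge_candidates(theta, threshold=0.0):
    """Return topic pairs whose document weights co-vary positively.
    theta is a D x K matrix of per-document topic parameters."""
    cov = np.cov(theta.T)                      # K x K sample covariance over documents
    K = cov.shape[0]
    return [(a, b) for a in range(K) for b in range(a + 1, K)
            if cov[a, b] > threshold]

base = np.array([1.0, 2.0, 3.0, 4.0])
theta = np.stack([base, base, base[::-1]], axis=1)  # topics 0 and 1 are duplicates
pairs = merge_candidates(theta)
```

Here the two duplicated topics are flagged as a merge candidate while the anti-correlated third topic is not, matching the intuition that redundant topics tend to be used in the same documents.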
Intuitively, if there are two copies of a topic or a topic is split into two pieces, they should tend to be used together, and therefore have positive covariance. For consistency in notation, we call the model state with topics k′ and k′′ merged qmerge(k′,k′′). Combining this merge procedure with the previous split proposals leads to the online variational method of Algorithm 2. In an online setting, we can only compute unbiased noisy estimates of the true difference in the variational objective; split or merge moves that increase the expected variational objective are not guaranteed to do so for the objective evaluated over the entire corpus. The uncertainty associated with the online method can be mitigated to some extent by using large minibatches. Confidence intervals for the expected change in the variational objective can be computed, and might be useful in a more sophisticated acceptance rule. Note that our usage of a nested family of variational bounds is key to the accuracy and stability of our split-merge acceptance rules.
Algorithm 2 Online variational inference for the HDP + split-merge
1: initialize (λ, β∗)
2: for t = 1, 2, . . . do
3: for j ∈ minibatch S do
4: initialize (ϕj, θj)
5: while not converged do
6: update (ϕj, θj) using (3, 4)
7: end while
8: end for
9: for pairs of topics {k′, k′′} ∈ K × K with Cov(θjk′, θjk′′) > 0 do
10: if L(qmerge(k′,k′′)) > L(q) then
11: q ← qmerge(k′,k′′)
12: end if
13: end for
14: update (λ, β∗) using (6, 7)
15: for k = 1, 2, . . . , K do
16: compute L(qsplit(k)) via restricted iteration
17: if L(qsplit(k)) > L(q) then
18: q ← qsplit(k)
19: end if
20: end for
21: end for
4 Experimental Results To demonstrate the effectiveness of our split-merge moves, we compare three algorithms: batch variational inference (bHDP), online variational inference without split-merge (oHDP), and online variational inference with split-merge (oHDP-SM).
On the NIPS corpus we also compare these three methods to collapsed Gibbs sampling (CGS) and the CRF-style oHDP model (oHDP-CRF) proposed by [4].3 We test the models on one synthetic and two real datasets: Bars A 20-topic bars dataset of the type introduced in [18], where topics can be viewed as bars on a 10 × 10 grid. The vocabulary size is 100, with a training set of 2000 documents and a test set of 200 documents, 250 words per document. NIPS 1,740 documents from the Neural Information Processing Systems conference proceedings, 1988-2000. The vocabulary size is 13,649, and there are 2.3 million tokens in total. We randomly divide the corpus into a 1,392-document training set and a 348-document test set. New York Times The New York Times Annotated Corpus4 consists of over 1.8 million articles appearing in the New York Times between 1987 and 2007. The vocabulary is pruned to 8,000 words. We hold out a randomly selected subset of 5,000 test documents, and use the remainder for training. All values of K given for oHDP-SM models are initial values — the actual truncation levels fluctuate during inference. While the truncation level K is different from the actual number of topics assigned non-negligible mass, the split-merge model tends to merge away unused topics, so these numbers are usually fairly close. Hyperparameters are initialized to consistent values across all algorithms and datasets, and learned via Newton-Raphson updates (or in the case of CGS, resampled). We use a constant learning rate across all online algorithms. As suggested by [4], we set ρt = (τ + t)−κ where τ = 1, κ = 0.5. Empirically, we found that slower learning rates could result in greatly reduced performance, across all models and datasets. 3For CGS we use the code available at http://www.gatsby.ucl.ac.uk/∼ywteh/research/npbayes/npbayesr21.tgz, and for oHDP-CRF we use the code at http://www.cs.princeton.edu/∼chongw/software/onlinehdp.tar.gz. 
4http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2008T19 To compare algorithm performance, we use per-word heldout likelihood, similarly to the metrics of [3, 19, 4]. We randomly split each test document in Dtest into 80%-20% pieces, wj1 and wj2. Then, using φ̄ as the variational expectation of the topics from training, we learn π̄j on wj1 and approximate the probability of wj2 as Π_{w∈wj2} Σ_k π̄jk φ̄kw. The overall test metric is then E = (Σ_{j∈Dtest} Σ_{w∈wj2} log Σ_k π̄jk φ̄kw) / (Σ_{j∈Dtest} |wj2|). 4.1 Bars For the bars data, we initialize eight oHDP-SM runs with K = {2, 5, 10, 20, 40, 50, 80, 100}, eight runs of oHDP with K = 20, and eight runs with K = 50. As seen in Figure 2(a), the oHDP algorithm converges to local optima, while the oHDP-SM runs all converge to the global optimum. More importantly, all split-merge methods converge to the correct number of topics, while oHDP uses either too few or too many topics. Note that the data-driven split-merge procedure allows splitting and merging of topics to mostly cease once the inference has converged (Figure 2(d)). 4.2 NIPS We compare oHDP-SM, oHDP, bHDP, oHDP-CRF, and CGS in Figure 2. Shown are two runs of oHDP-SM with K = {100, 300}, two runs each of oHDP and bHDP with K = {300, 1000}, and one run each of oHDP-CRF and CGS with K = 300. All the runs displayed are the best runs from a larger sample of trials. Since oHDP and bHDP will use only a subset of topics under the truncation, setting K much higher results in comparable numbers of topics as oHDP-SM. We set |S| = 200 for the online algorithms, and run all methods for approximately 40 hours of CPU time. The non split-merge methods reach poor local optima relatively quickly, while the split-merge algorithms continue to improve. Notably, both oHDP-CRF and CGS perform much worse than any of our methods. It appears that the CRF model performs very poorly for small datasets, and CGS reaches a mode quickly but does not mix between modes.
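The heldout metric E defined above is straightforward to compute once π̄j and φ̄ are fixed; the following is a sketch under assumed array layouts (not the authors' evaluation code):

```python
import numpy as np

def heldout_per_word(pi_bar, phi_bar, heldout_docs):
    """Average log p(w) over all heldout tokens: each word w in w_j2 is
    scored by log sum_k pi_jk * phi_kw. heldout_docs[j] lists the
    vocabulary indices of the tokens in w_j2."""
    total, count = 0.0, 0
    for j, words in enumerate(heldout_docs):
        word_probs = pi_bar[j] @ phi_bar        # mixture distribution over vocab
        total += float(np.sum(np.log(word_probs[words])))
        count += len(words)
    return total / count

phi_bar = np.full((2, 4), 0.25)                 # two uniform topics, 4-word vocab
pi_bar = np.array([[0.5, 0.5]])
E = heldout_per_word(pi_bar, phi_bar, [[0, 1, 2]])
```

With uniform topics every token has probability 1/W, so E reduces to log(1/W), a convenient sanity check.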
Even though the split-merge algorithms improve in part by adding topics, they are using their topics much more effectively (Figure 2(h)). We speculate that for the NIPS corpus especially, the reason that models achieve better predictive likelihoods with more topics is due to the bursty properties of text data [20]. Figure 3 illustrates the topic refinement and specialization which occurs in successful split proposals. 4.3 New York Times As batch variational methods and samplers are not feasible for such a large dataset, we compare two runs of oHDP with K = {300, 500} to a run of oHDP-SM with K = 200 initial topics. We also use a larger minibatch size of |S| = 10,000; split-merge acceptance decisions can sometimes be unstable with overly small minibatches. Figure 2(c) shows an inherent problem with oHDP for very large datasets — when truncated to K = 500, the algorithm uses all of its available topics and exhibits overfitting. For the oHDP-SM, however, predictive likelihood improves over a substantially longer period and overfitting is greatly reduced. 5 Discussion We have developed a novel split-merge online variational algorithm for the hierarchical DP. This approach leads to more accurate models and better predictive performance, as well as a model that is able to adapt the number of topics more freely than conventional approximations based on fixed truncations. Our moves are similar in spirit to split-merge samplers, but by evaluating their quality stochastically using streaming data, we can rapidly adapt model structure to large-scale datasets. While many papers have tried to improve conventional mean field methods via higher-order variational expansions [21], local optima can make the resulting algorithms compare unfavorably to Monte Carlo methods [3]. Here we pursue the complementary goal of more robust, scalable optimization of simple variational objectives.
Generalization of our approach to more complex hierarchies of DPs, or basic DP mixtures, is feasible. We believe similar online learning methods will prove effective for the combinatorial structures of other Bayesian nonparametric models. Acknowledgments We thank Dae Il Kim for his assistance with the experimental results. Figure 2: Trace plots of heldout likelihood and number of topics used. Across all datasets, common color indicates common algorithm, while for NIPS and New York Times, line type indicates different initializations. Top: Test log likelihood for each dataset. Middle: Number of topics used per iteration. Bottom: A plot of per-word log likelihood against number of topics used.
Note particularly plot (h), where for every cardinality of used topics shown, there is a split-merge method outperforming a conventional method. Figure 3: The evolution of a split topic. The left column shows the topic directly prior to the split. After 240,000 more documents have been analyzed, subtle differences become apparent: the top topic covers terms relating to general neuronal behavior, while the bottom topic deals more specifically with neuron firing. References [1] Y.W. Teh, M. Jordan, and M. Beal. Hierarchical Dirichlet processes. JASA, 2006. [2] D. Blei and M. Jordan. Variational methods for Dirichlet process mixtures. Bayesian Analysis, 1:121–144, 2005. [3] Y.W. Teh, K. Kurihara, and M. Welling. Collapsed variational inference for HDP. NIPS, 2008. [4] C. Wang, J. Paisley, and D. Blei. Online variational inference for the hierarchical Dirichlet process.
AISTATS, 2011. [5] M. Hoffman, D. Blei, and F. Bach. Online learning for latent Dirichlet allocation. NIPS, 2010. [6] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 2003. [7] S. Jain and R. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13:158–182, 2004. [8] D.B. Dahl. Sequentially-allocated merge-split sampler for conjugate and nonconjugate Dirichlet process mixture models. Technical report, Texas A&M University, 2005. [9] C. Wang and D. Blei. A split-merge MCMC algorithm for the hierarchical Dirichlet process. ArXiv e-prints, January 2012. [10] N. Ueda, R. Nakano, Z. Ghahramani, and G. Hinton. SMEM algorithm for mixture models. Neural Computation, 2000. [11] K. Kurihara and M. Welling. Bayesian K-means as a ’Maximization-Expectation’ algorithm. SIAM conference on data mining SDM06, 2006. [12] N. Ueda and Z. Ghahramani. Bayesian model search for mixture models based on optimizing variational bounds. Neural Networks, 15, 2002. [13] Z. Ghahramani and M. Beal. Variational inference for Bayesian mixtures of factor analysers. NIPS, 2000. [14] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. Introduction to variational methods for graphical models. Machine Learning, 1999. [15] P. Liang, S. Petrov, D. Klein, and M. Jordan. The infinite PCFG using hierarchical Dirichlet processes. Empirical Methods in Natural Language Processing, 2007. [16] K. Kurihara, M. Welling, and N. Vlassis. Accelerated variational Dirichlet process mixtures. NIPS, 2007. [17] D. Blei and C. Wang. Variational inference for the nested Chinese restaurant process. NIPS, 2009. [18] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 101:5228–5235, 2004. [19] A. Asuncion, M. Welling, P. Smyth, and Y.W. Teh. On smoothing and inference for topic models. UAI, 2009. [20] G. Doyle and C. Elkan. Accounting for word burstiness in topic models. ICML, 2009. [21] M. J. 
Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1–305, 2008.
|
2012
|
189
|
4,551
|
Timely Object Recognition Sergey Karayev UC Berkeley Tobias Baumgartner RWTH Aachen University Mario Fritz MPI for Informatics Trevor Darrell UC Berkeley Abstract In a large visual multi-class detection framework, the timeliness of results can be crucial. Our method for timely multi-class detection aims to give the best possible performance at any single point after a start time; it is terminated at a deadline time. Toward this goal, we formulate a dynamic, closed-loop policy that infers the contents of the image in order to decide which detector to deploy next. In contrast to previous work, our method significantly diverges from the predominant greedy strategies, and is able to learn to take actions with deferred values. We evaluate our method with a novel timeliness measure, computed as the area under an Average Precision vs. Time curve. Experiments are conducted on the PASCAL VOC object detection dataset. If execution is stopped when only half the detectors have been run, our method obtains 66% better AP than a random ordering, and 14% better performance than an intelligent baseline. On the timeliness measure, our method obtains at least 11% better performance. Our method is easily extensible, as it treats detectors and classifiers as black boxes and learns from execution traces using reinforcement learning. 1 Introduction In real-world applications of visual object recognition, performance is time-sensitive. In robotics, a small finite amount of processing power per unit time is all that is available for robust object detection, if the robot is to usefully interact with humans. In large-scale detection systems, such as image search, results need to be obtained quickly per image as the number of items to process is constantly growing. In such cases, an acceptable answer at a reasonable time may be more valuable than the best answer given too late. 
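The proposed timeliness measure, the area under an Average Precision vs. Time curve between the start and deadline times, can be sketched with a trapezoid rule. Normalizing by the time span (so a constant AP of c scores c) is our assumption for illustration, not necessarily the paper's exact convention:

```python
import numpy as np

def timeliness(times, ap):
    """Normalized area under the AP-vs-time curve over [times[0], times[-1]]."""
    times, ap = np.asarray(times, float), np.asarray(ap, float)
    area = np.sum(0.5 * (ap[1:] + ap[:-1]) * np.diff(times))  # trapezoid rule
    return float(area / (times[-1] - times[0]))

score = timeliness([0.0, 1.0, 2.0], [0.0, 0.5, 0.5])
```

Under this measure, a policy that reaches good AP early dominates one that only matches it at the deadline, which is exactly the behavior the paper optimizes for.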
A hypothetical system for vision-based advertising presents a case study: companies pay money to have their products detected in images on the internet. The system has different values (in terms of cost per click) and accuracies for different classes of objects, and the queue of unprocessed images varies in size. The detection strategy to maximize profit in such an environment has to exploit every inter-object context signal available to it, because there is not enough time to run detection for all classes. What matters in the real world is timeliness, and either not all images can be processed or not all classes can be evaluated in a detection task. Yet the conventional approach to evaluating visual recognition does not consider efficiency, and evaluates performance independently across classes. We argue that the key to tackling problems of dynamic recognition resource allocation is to start asking a new question: What is the best performance we can get on a budget? Taking the task of object detection, we propose a new timeliness measure of performance vs. time (shown in Figure 1). We present a method that treats different detectors and classifiers as black boxes, and uses reinforcement learning to learn a dynamic policy for selecting actions to achieve the highest performance under this evaluation. Specifically, we run scene context and object class detectors over the whole image sequentially, using the results of detection obtained so far to select the next actions. Evaluating on the PASCAL
machine translation and information retrieval. For example, until recently speech recognition and machine translation systems based on n-gram language models outperformed systems based on grammars and phrase structure. In our experience maintaining performance seems to require gradual enrichment of the model.
One reason why simple models can perform better in practice is that rich models often suffer from difficulties in training. For object detection, rigid templates and bag-of-features models can be easily trained using discriminative methods such as support vector machines (SVM). Richer models are more difficult to train, in particular because they often make use of latent information. Consider the problem of training a part-based model from images labeled only with bounding boxes around the objects of interest. Since the part locations are not labeled, they must be treated as latent (hidden) variables during training. While it is possible that more complete labeling would support better training, it could also result in inferior training if the labeling used suboptimal parts. Automatic part labeling has the potential to achieve better performance by automatically finding effective parts. More elaborate labeling is also time consuming and expensive. The Dalal-Triggs detector [10], which won the 2006 PASCAL object detection challenge, used a single filter on histogram of oriented gradients (HOG) features to represent an object category. The Dalal-Triggs detector uses a sliding window approach, where a filter is applied at all positions and scales of an image. We can think of the detector as a classifier which takes as input an image, a position within that image, and a scale. The classifier determines whether or not there is an instance of the target category at the given position and scale. Since the model is a simple filter we can compute a score as β · Φ(x), where β is the filter, x is an image with a specified position and scale, and Φ(x) is a feature vector. A major innovation of the Dalal-Triggs detector was the construction of particularly effective features.
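The sliding-window scoring described above — applying a single linear filter β at every position of a feature map and computing β · Φ(x) — can be sketched as follows. This is a minimal illustration, not the authors' code; the feature map here is a stand-in for a grid of HOG cells.

```python
import numpy as np

def sliding_window_scores(feat_map, filt):
    """Score a linear filter (template) at every valid position of a
    feature map, Dalal-Triggs style: score = beta . phi(x).
    feat_map: (H, W, D) array of e.g. HOG cells; filt: (h, w, D) weights."""
    H, W, D = feat_map.shape
    h, w, _ = filt.shape
    scores = np.empty((H - h + 1, W - w + 1))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            window = feat_map[y:y + h, x:x + w, :]
            scores[y, x] = np.sum(window * filt)  # dot product beta . phi(x)
    return scores
```

In a real detector this scan is repeated at every scale of a feature pyramid, and the dot products are computed with convolution routines rather than explicit loops.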
Our first innovation involves enriching the Dalal-Triggs model using a star-structured part-based model defined by a “root” filter (analogous to the Dalal-Triggs filter) plus a collection of part filters and associated deformation models. The score of one of our star models at a particular position and scale within an image is the score of the root filter at the given location plus the sum over parts of the maximum, over placements of that part, of the part filter score on its location minus a deformation cost measuring the deviation of the part from its ideal location. Both root and part filter scores are defined by the dot product between a filter (a set of weights) and a subwindow of a feature pyramid computed from the input image. Figure 1 shows a star model for the person category. One interesting aspect of our models is that the features for the part filters are computed at twice the spatial resolution of the root filter. To train models using partially labeled data we use a latent variable formulation of MI-SVM [3] that we call latent SVM (LSVM). Fig. 1. Detections obtained with a single component person model. The model is defined by a coarse root filter (a), several higher resolution part filters (b) and a spatial model for the location of each part relative to the root (c). The filters specify weights for histogram of oriented gradients features. Their visualizations show the positive weights at different orientations. The visualization of the spatial models reflects the “cost” of placing the center of a part at different locations relative to the root. In a latent SVM each example x is scored by a function of the following form, f_β(x) = max_{z ∈ Z(x)} β · Φ(x, z). (1) Here β is a vector of model parameters, z are latent values, and Φ(x, z) is a feature vector.
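Equation (1) can be illustrated with a small sketch that scores an example as the maximum dot product over candidate latent assignments. The toy values and the function interface are ours, not the paper's implementation.

```python
import numpy as np

def latent_svm_score(beta, latent_features):
    """Score an example under a latent SVM: f_beta(x) = max_z beta . Phi(x, z).
    latent_features: iterable of feature vectors Phi(x, z), one per latent value z."""
    scores = [float(np.dot(beta, phi)) for phi in latent_features]
    best = int(np.argmax(scores))
    return scores[best], best  # value and the maximizing latent assignment
```

The maximizing index plays the role of the inferred latent configuration z (e.g. part placements, or a mixture component label).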
In the case of one of our star models β is the concatenation of the root filter, the part filters, and deformation cost weights, z is a specification of the object configuration, and Φ(x, z) is a concatenation of subwindows from a feature pyramid and part deformation features. We note that (1) can handle very general forms of latent information. For example, z could specify a derivation under a rich visual grammar. Our second class of models represents each object category by a mixture of star models. The score of one of our mixture models at a given position and scale is the maximum over components of the score of that component model at the given location. In this case the latent information, z, specifies a component label and a configuration for that component. Figure 2 shows a mixture model for the bicycle category. To obtain high performance using discriminative training it is often important to use large training sets. In the case of object detection the training problem is highly unbalanced because there is vastly more background than objects. This motivates a process of searching through the background to find a relatively small number of potential false positives. Fig. 2. Detections obtained with a 2 component bicycle model. These examples illustrate the importance of deformations and mixture models. In this model the first component captures sideways views of bicycles while the second component captures frontal and near frontal views. The sideways component can deform to match a “wheelie”. A methodology of data-mining for hard negative examples was adopted by Dalal and Triggs [10] but goes back at least to the bootstrapping methods used by [38] and [35]. Here we analyze data-mining algorithms for SVM and LSVM training. We prove that data-mining methods can be made to converge to the optimal model defined in terms of the entire training set. Our object models are defined using filters that score subwindows of a feature pyramid.
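The star-model score just described — root score plus, per part, the maximum over placements of appearance score minus deformation cost — can be sketched in a simplified one-dimensional form. The quadratic deformation penalty and the dictionary layout are assumptions made for the sketch, not the paper's data structures.

```python
def star_model_score(root_score, part_scores, anchors, def_cost=1.0):
    """Score a star-structured model at one root location.
    For each part, take the max over candidate placements of
    (appearance score - deformation cost), where the deformation cost
    penalizes squared displacement from the part's anchor position.
    part_scores: list of dicts {position: appearance score}, one per part."""
    total = root_score
    for scores, anchor in zip(part_scores, anchors):
        total += max(s - def_cost * (pos - anchor) ** 2
                     for pos, s in scores.items())
    return total
```

In the full model the max over placements is 2-D and computed efficiently for all root locations at once with a generalized distance transform.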
We have investigated feature sets similar to HOG [10] and found lower dimensional features which perform as well as the original ones. By doing principal component analysis on HOG features the dimensionality of the feature vector can be significantly reduced with no noticeable loss of information. Moreover, by examining the principal eigenvectors we discover structure that leads to “analytic” versions of low-dimensional features which are easily interpretable and can be computed efficiently. We have also considered some specific problems that arise in the PASCAL object detection challenge and similar datasets. We show how the locations of parts in an object hypothesis can be used to predict a bounding box for the object. This is done by training a model-specific predictor using least-squares regression. We also demonstrate a simple method for aggregating the output of several object detectors. The basic idea is that objects of some categories provide evidence for, or against, objects of other categories in the same image. We exploit this idea by training a category-specific classifier that rescores every detection of that category using its original score and the highest scoring detection from each of the other categories. 2 RELATED WORK There is a significant body of work on deformable models of various types for object detection, including several kinds of deformable template models (e.g. [7], [8], [21], [43]), and a variety of part-based models (e.g. [2], [6], [9], [15], [18], [20], [28], [42]). In the constellation models from [18], [42] parts are constrained to be in a sparse set of locations determined by an interest point operator, and their geometric arrangement is captured by a Gaussian distribution. In contrast, pictorial structure models [15], [20] define a matching problem where parts have an individual match cost in a dense set of locations, and their geometric arrangement is constrained by a set of “springs” connecting pairs of parts.
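A minimal sketch of the PCA step described above, projecting HOG-like feature vectors onto the top principal eigenvectors. The function name and interface are ours; this is illustrative only.

```python
import numpy as np

def pca_project(features, k):
    """Reduce HOG-like feature vectors to k dimensions with PCA.
    features: (N, D) matrix of feature vectors. Returns (N, k) projections
    and the (D, k) basis of principal eigenvectors."""
    mean = features.mean(axis=0)
    centered = features - mean
    # Right singular vectors of the centered data are the eigenvectors
    # of the sample covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k].T
    return centered @ basis, basis
```

Inspecting the columns of `basis` is what would reveal the structure that motivates the "analytic" low-dimensional features the text mentions.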
The patchwork of parts model from [2] is similar, but it explicitly considers how the appearance model of overlapping parts interact to define a dense appearance model for images. Our models are largely based on the pictorial structures framework from [15], [20]. We use a dense set of possible positions and scales in an image, and define a score for placing a filter at each of these locations.

Figure 1: A sample trace of our method. At each time step beginning at t = 0, potential actions are considered according to their predicted value, and the maximizing action is picked. The selected action is performed and returns observations. Different actions return different observations: a detector returns a list of detections, while a scene context action simply returns its computed feature. The belief model of our system is updated with the observations, which influences the selection of the next action. The final evaluation of a detection episode is the area of the AP vs. Time curve between given start and end times. The value of an action is the expected result of final evaluation if the action is taken and the policy continues to be followed, which allows actions without an immediate benefit to be scheduled. VOC dataset and evaluation regime, we are able to obtain better performance than all baselines when there is less time available than is needed to exhaustively run all detectors. 2 Recognition Problems and Related Work Formally, we deal with a dataset of images D, where each image I contains zero or more objects. Each object is labeled with exactly one category label k ∈ {1, . . . , K}. The multi-class, multi-label classification problem asks whether I contains at least one object of class k. We write the ground truth for an image as C = {C_1, . . . , C_K}, where C_k ∈ {0, 1} is set to 1 if an object of class k is present.
The detection problem is to output a list of bounding boxes (sub-images defined by four coordinates), each with a real-valued confidence that it encloses a single instance of an object of class k, for each k. The answer for a single class k is given by an algorithm detect(I, k), which outputs a list of sub-image bounding boxes B and their associated confidences. Performance is evaluated by plotting precision vs. recall across dataset D (by progressively lowering the confidence threshold for a positive detection). The area under the curve yields the Average Precision (AP) metric, which has become the standard evaluation for recognition performance on challenging datasets in vision [1]. A common measure of a correct detection is the PASCAL overlap: two bounding boxes are considered to match if they have the same class label and the ratio of their intersection to their union is at least 1/2. To highlight the hierarchical structure of these problems, we note that the confidences for each sub-image b ∈ B may be given by classify(b, k), and, more saliently for our setup, a correct answer to the detection problem also answers the classification problem. Multi-class performance is evaluated by averaging the individual per-class AP values. In a specialized system such as the advertising case study from section 1, the metric generalizes to a weighted average, with the weights set by the values of the classes. 2.1 Related Work Object detection The best recent performance has come from detectors that use gradient-based features to represent objects as either a collection of local patches or as object-sized windows [2, 3]. Classifiers are then used to distinguish between featurizations of a given class and all other possible contents of an image window. Window proposal is most often done exhaustively over the image space, as a “sliding window”.
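The PASCAL overlap criterion can be written down directly. This sketch computes intersection-over-union for two (x1, y1, x2, y2) boxes and tests it against the 1/2 threshold; the representation of boxes as 4-tuples is our choice.

```python
def pascal_overlap(box_a, box_b, threshold=0.5):
    """PASCAL match criterion: intersection-over-union of two boxes
    (x1, y1, x2, y2) must be at least 1/2. Returns (match?, iou)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / float(area_a + area_b - inter)
    return iou >= threshold, iou
```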
For state-of-the-art performance, the object-sized window models are augmented with parts [4], and the bag-of-visual-words models employ non-linear classifiers [5]. We employ the widely used Deformable Part Model detector [4] in our evaluation. Using context The most common source of context for detection is the scene or other non-detector cues; the most common scene-level feature is the GIST [6] of the image. We use this source of scene context in our evaluation. Inter-object context has also been shown to improve detection [7]. In a standard evaluation setup, inter-object context plays a role only in post-filtering, once all detectors have been run. In contrast, our work leverages inter-object context in the action-planning loop. A critical summary of the main approaches to using context for object and scene recognition is given in [8]. For the commonly used PASCAL VOC dataset [1], GIST and other sources of context are quantitatively explored in [9]. Efficiency through cascades An early success in efficient object detection of a single class uses simple, fast features to build up a cascade of classifiers, which then considers image regions in a sliding window regime [10]. Most recently, cyclic optimization has been applied to optimize cascades with respect to feature computation cost as well as classifier performance [11]. Cascades are not dynamic policies: they cannot change the order of execution based on observations obtained during execution, which is our goal. Anytime and active classification This surprisingly little-explored line of work in vision is closest to our approach. A recent application to the problem of visual detection picks features with maximum value of information in a Hough-voting framework [12]. There has also been work on active classification [13] and active sensing [14], in which intermediate results are considered in order to decide on the next classification step. 
Most commonly, the scheduling in these approaches is greedy with respect to some manual quantity such as expected information gain. In contrast, we learn policies that take actions without any immediate reward. 3 Multi-class Recognition Policy Our goal is a multi-class recognition policy π that takes an image I and outputs a list of multi-class detection results by running detector and global scene actions sequentially. The policy repeatedly selects an action a_i ∈ A, executes it, receiving observations o_i, and then selects the next action. The set of actions A can include both classifiers and detectors: anything that would be useful for inferring the contents of the image. Each action a_i has an expected cost c(a_i) of execution. Depending on the setting, the cost can be defined in terms of algorithmic runtime analysis, an idealized property such as number of flops, or simply the empirical runtime on specific hardware. We take the empirical approach: every executed action advances t, the time into episode, by its runtime. As shown in Figure 1, the system is given two times: the setup time T_s and deadline T_d. We want to obtain the best possible answer if stopped at any given time between the setup time and the deadline. A single-number metric that corresponds to this objective is the area captured under the curve between the start and deadline bounds, normalized by the total area. We evaluate policies by this more robust metric and not simply by the final performance at deadline time for the same reason that Average Precision is used instead of a fixed Precision vs. Recall point in the conventional evaluations. 3.1 Sequential Execution An open-loop policy, such as the common classifier cascade [10], takes actions in a sequence that does not depend on observations received from previous actions.
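The normalized area-under-the-curve metric just described can be sketched as follows, assuming a piecewise-constant AP that changes at action completion times. The event-list representation, and the assumption that all event times fall at or after the start time, are ours.

```python
def ap_time_area(events, t_start, t_deadline):
    """Normalized area under a piecewise-constant AP vs. Time curve
    between a start time and a deadline. `events` is a list of
    (completion_time, ap_after) pairs in increasing time order, with
    times >= t_start; AP is 0 before the first event. Normalization is
    by the maximum possible area, (t_deadline - t_start) * 1.0."""
    area, ap, t_prev = 0.0, 0.0, t_start
    for t, new_ap in events:
        if t >= t_deadline:
            break
        if t > t_prev:
            area += ap * (t - t_prev)  # rectangle up to this event
            t_prev = t
        ap = new_ap
    area += ap * (t_deadline - t_prev)  # final rectangle up to the deadline
    return area / (t_deadline - t_start)
```

A policy that reaches high AP early scores close to the final AP; one that reaches the same AP only at the deadline scores near zero, which is exactly the timeliness behavior the metric rewards.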
In contrast, our goal is to learn a dynamic, or closed-loop, policy, which would exploit the signal in scene and inter-object context for a maximally efficient path through the actions. We refer to the information available to the decision process as the state s. The state includes the current estimate of the distribution over class presence variables P(C) = {P(C_0), . . . , P(C_K)}, where we write P(C_k) to mean P(C_k = 1) (class k is present in the image). Additionally, the state records that an action a_i has been taken by adding it to the initially empty set O and recording the resulting observations o_i. We refer to the current set of observations as o = {o_i | a_i ∈ O}. The state also keeps track of the time into the episode t, and the setup and deadline times T_s, T_d. A recognition episode takes an image I and proceeds from the initial state s_0 and action a_0 to the next pair (s_1, a_1), and so on until (s_J, a_J), where J is the last step of the process with t ≤ T_d. At that point, the policy is terminated, and a new episode can begin on a new image. The specific actions we consider in the following exposition are detector actions a_det_i, where det_i is a detector for class C_i, and a scene-level context action a_gist, which updates the probabilities of all classes. Although we avoid this in the exposition, note that our system easily handles multiple detector actions per class. 3.2 Selecting actions As our goal is to pick actions dynamically, we want a function Q(s, a) : S × A → R, where S is the space of all possible states, to assign a value to a potential action a ∈ A given the current state s of the decision process. We can then define the policy π as simply taking the action with the maximum value: π(s) = argmax_{a_i ∈ A\O} Q(s, a_i) (1) Although the action space A is manageable, the space of possible states S is intractable, and we must use function approximation to represent Q(s, a): a common technique in reinforcement learning [15].
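The greedy action selection in equation (1), under a linear value function, amounts to a maximization over the remaining actions. This is a minimal sketch; the names and data layout are ours, not the paper's.

```python
import numpy as np

def select_action(theta, state_features, remaining_actions):
    """Pick the action maximizing a linear value estimate
    Q(s, a) = theta . phi(s, a) over actions not yet taken.
    state_features: dict mapping action -> feature vector phi(s, a)."""
    best_a, best_q = None, -np.inf
    for a in remaining_actions:
        q = float(np.dot(theta, state_features[a]))
        if q > best_q:
            best_a, best_q = a, q
    return best_a, best_q
```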
We featurize the state-action pair and assume linear structure: Q_π(s, a) = θ_π^⊤ φ(s, a) (2) The policy’s performance at time t is determined by all detections that are part of the set of observations o_j at the last state s_j before t. Recall that detector actions return lists of detection hypotheses. Therefore, the final AP vs. Time evaluation of an episode is a function eval(h, T_s, T_d) of the history of execution h = s_0, s_1, . . . , s_J. It is precisely the normalized area under the AP vs. Time curve between T_s and T_d, as determined by the detections in o_j for all steps j in the episode. Note from Figure 3b that this evaluation function is additive per action, as each action a generates observations that may raise or lower the mean AP of the results so far (∆ap) and takes a certain time (∆t). We can accordingly represent the final evaluation eval(h, T_s, T_d) in terms of individual action rewards: ∑_{j=0}^{J} R(s_j, a_j). Specifically, as shown in Figure 3b, we define the reward of an action a as R(s_j, a) = ∆ap (t_T^j − ½ ∆t) (3) where t_T^j is the time left until T_d at state s_j, and ∆t and ∆ap are the time taken and AP change produced by the action a. (We do not account for T_s here for clarity of exposition.) 3.3 Learning the policy The expected value of the final evaluation can be written recursively in terms of the value function: Q_π(s_j, a) = E_{s_{j+1}}[R(s_j, a) + γ Q_π(s_{j+1}, π(s_{j+1}))] (4) where γ ∈ [0, 1] is the discount value. With γ = 0, the value function is determined entirely by the immediate reward, and so only completely greedy policies can be learned. With γ = 1, the value function is determined by the correct expected rewards to the end of the episode. However, a lower value of γ mitigates the effects of increasing uncertainty regarding the state transitions over long episodes. We set this meta-parameter of our approach through cross-validation, and find that a mid-level value (0.4) works best.
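The per-action reward in equation (3) is simple enough to state as a one-liner: the AP change, weighted by the remaining time minus half the action's duration, which is the area that change contributes to the AP vs. Time curve. The argument names are our own.

```python
def action_reward(delta_ap, delta_t, time_left):
    """Reward of one action under the AP vs. Time objective:
    R(s_j, a) = delta_ap * (t_T - 0.5 * delta_t), where t_T is the time
    left until the deadline when the action starts."""
    return delta_ap * (time_left - 0.5 * delta_t)
```

Note how the same AP gain earns less reward the later it arrives, and slower actions are penalized through the half-duration term.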
While we can’t directly compute the expectation in (4), we can sample it by running actual episodes to gather ⟨s, a, r, s′⟩ samples, where r is the reward obtained by taking action a in state s, and s′ is the following state. We then learn the optimal policy by repeatedly gathering samples with the current policy, minimizing the error between the discounted reward to the end of the episode as predicted by our current Q(s_j, a) and the actual values gathered, and updating the policy with the resulting weights. To ensure sufficient exploration of the state space, we implement ϵ-greedy action selection during training: with a probability that decreases with each training iteration, a random action is selected instead of following the policy. During test time, ϵ is set to 0.05. To prevent overfitting to the training data, we use L2-regularized regression. We run 15 iterations of accumulating samples by running 350 episodes, starting with a baseline policy which will be described in section 4, and cross-validating the regularization parameter at each iteration. Samples are not thrown away between iterations. With pre-computed detections on the PASCAL VOC 2007 dataset, the training procedure takes about 4 hours on an 8-core Xeon E5620 machine. 3.4 Feature representation Our policy is at its base determined by a linear function of the features of the state: π(s) = argmax_{a_i ∈ A\O} θ_π^⊤ φ(s, a_i). (5) We include the following quantities as features φ(s, a): P(C_a), the prior probability of the class that corresponds to the detector of action a (omitted for the scene-context action); P(C_0|o), . . . , P(C_K|o), the probabilities for all classes, conditioned on the current set of observations; and H(C_0|o), . . . , H(C_K|o), the entropies for all classes, conditioned on the current set of observations. Additionally, we include the mean and maximum of [H(C_0|o), . . . , H(C_K|o)], and 4 time features that represent the times until start and deadline, for a total of F = 1 + 2K + 6 features.
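The regression step of this training loop — fitting θ so that θ^⊤φ(s, a) matches the sampled discounted return to the end of each episode, with L2 regularization — might be sketched like this. It is a simplification under our own data layout: it solves the ridge normal equations on Monte-Carlo returns computed backwards through each episode.

```python
import numpy as np

def fit_q_weights(samples, gamma, l2):
    """One policy-learning iteration: fit linear weights theta so that
    theta . phi(s, a) matches the observed discounted return to the end
    of each episode, with L2 (ridge) regularization.
    samples: list of episodes; each episode is a list of (phi, r) pairs."""
    X, y = [], []
    for episode in samples:
        G = 0.0
        for phi, r in reversed(episode):  # accumulate discounted return
            G = r + gamma * G
            X.append(phi)
            y.append(G)
    X, y = np.asarray(X), np.asarray(y)
    d = X.shape[1]
    # Ridge normal equations: (X^T X + l2 I) theta = X^T y
    theta = np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ y)
    return theta
```

In the full procedure this fit is repeated over 15 iterations, with new ϵ-greedy episodes appended to the sample pool and the regularization strength cross-validated each time.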
We note that this setup is commonly used to solve Markov Decision Processes [15]. There are two related limitations of MDPs when it comes to most systems of interesting complexity, however: the state has to be functionally approximated instead of exhaustively enumerated; and some aspects of the state are not observed, making the problem a Partially Observed MDP (POMDP), for which exact solution methods are intractable for all but rather small problems [16]. Our initial solution to the problem of partial observability is to include features corresponding to our level of uncertainty into the feature representation, as in the technique of augmented MDPs [17]. To formulate learning the policy as a single regression problem, we represent the features in block form, where φ(s, a) is a vector of size F|A|, with all values set to 0 except for the F-sized block corresponding to a. As an illustration, we visualize the learned weights on these features in Figure 2, reshaped such that each row shows the weights learned for an action, with the top row representing the scene context action and the next 20 rows corresponding to the PASCAL VOC class detector actions. Figure 2: Learned policy weights θ_π for (a) the Greedy and (b) the Reinforcement Learning settings (best viewed in color: red corresponds to positive, blue to negative values). The first row corresponds to the scene-level action, which does not generate detections itself but only helps reduce uncertainty about the contents of the image. Note that in the greedy learning case, this action is learned to never be taken, but it is shown to be useful in the reinforcement learning case. 3.5 Updating with observations The bulk of our feature representation is formed by probability of individual class occurrence, conditioned on the observations so far: P(C_0|o) . . . P(C_K|o).
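The block-form featurization — a vector of size F|A| that is zero everywhere except the F-sized block for the chosen action — is easy to construct. The function name and argument layout are ours.

```python
import numpy as np

def block_features(state_feats, action_index, num_actions):
    """Block-form phi(s, a): zero everywhere except the F-sized block
    for action a, which holds the F state features. A single regression
    over this vector then learns separate weights per action."""
    F = len(state_feats)
    phi = np.zeros(F * num_actions)
    phi[action_index * F:(action_index + 1) * F] = state_feats
    return phi
```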
This allows the action-value function to learn correlations between presence of different classes, and so the policy can look for the most probable classes given the observations. However, higher-order co-occurrences are not well represented in this form. Additionally, updating P(C_i|o) presents choices regarding independence assumptions between the classes. We evaluate two approaches for updating probabilities: direct and MRF. In the direct method, P(C_i|o) = score(C_i) if o includes the observations for class C_i, and P(C_i|o) = P(C_i) otherwise. This means that an observation of class i does not directly influence the estimated probability of any class but C_i. The MRF approach employs a pairwise fully-connected Markov Random Field (MRF), as shown in Figure 1, with the observation nodes set to score(C_i) appropriately, or considered unobserved. The graphical model structure is set as fully-connected, but some classes almost never co-occur in our dataset. Accordingly, the edge weights are learned with L1 regularization, which obtains a sparse structure [18]. All parameters of the model are trained on fully-observed data, and Loopy Belief Propagation inference is implemented with an open-source graphical model package [19]. An implementation detail: score(C_i) for a_det_i is obtained by training a probabilistic classifier on the list of detections, featurized by the top few confidence scores and the total number of detections. Similarly, score(C_i) for a_gist is obtained by training probabilistic classifiers on the GIST feature, for all classes. 4 Evaluation We evaluate our system on the multi-class, multi-label detection task, as previously described. We evaluate on a popular detection challenge task: the PASCAL VOC 2007 dataset [1]. This dataset exhibits a rather modest amount of class co-occurrence: the “person” class is highly likely to occur, and less than 10% of the images have more than two classes.
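The direct update rule can be stated in a few lines: a class's probability becomes its classifier score once its detector has run, and otherwise stays at the prior, with no cross-class influence. Representing classes as dictionary keys is our choice for the sketch.

```python
def direct_update(priors, scores, observed):
    """Direct probability update: P(C_i|o) = score(C_i) if class i has
    been observed, else the prior P(C_i). No cross-class influence."""
    return {k: (scores[k] if k in observed else priors[k]) for k in priors}
```

The MRF alternative replaces this per-class rule with joint inference, so one observation can shift the probabilities of co-occurring classes as well.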
We learn weights on the training and validation sets, and run our policy on all images in the testing set. The final evaluation pools all detections up to a certain time, and computes their multi-class AP per image, averaging over all images. This is done for different times to plot the AP vs. Time curve over the whole dataset. Our method of averaging per-image performance follows [20]. For the detector actions, we use one-vs-all cascaded deformable part-model detectors on a HOG featurization of the image [21], with linear classification of the list of detections as described in the previous section. There are 20 classes in the PASCAL challenge task, so there are 20 detector actions. Running a detector on a PASCAL image takes about 1 second. We test three different settings of the start and deadline times. In the first one, the start time is immediate and execution is cut off at 20 seconds, which is enough time to run all actions. In the second one, execution is cut off after only 10 seconds. Lastly, we measure performance between 5 seconds and 15 seconds. These operating points show how our method behaves when deployed in different conditions. The results are given in rows of Table 1. Figure 3: (a) AP vs. Time curves for Random, Oracle, the Fixed Order baseline, and our best-performing policy. (b) Graphically representing our reward function, as described in section 3.2. We establish the first baseline for our system by selecting actions randomly at each step. As shown in Figure 3a, the Random policy results in a roughly linear gain of AP vs. time. This is expected: the detectors are capable of obtaining a certain level of performance; if half the detectors are run, the expected performance level is half of the maximum level. To establish an upper bound on performance, we plot the Oracle policy, obtained by re-ordering the actions at the end of each detection episode in the order of AP gains they produced.
We consider another baseline: selecting actions in a fixed order based on the value they bring to the AP vs. Time evaluation, which is roughly proportional to their occurrence probability. We refer to this as Fixed Order. Then there are instantiations of our method, as described in the previous section: RL w/ Direct inference and RL w/ MRF inference. As the MRF model consistently outperformed Direct by a small margin, we report results for that model only. In Figure 3a, we can see that due to the dataset bias, the fixed-order policy performs well at first, as the person class is disproportionately likely to be in the image, but is significantly overtaken by our model as execution goes on and more rare classes have to be detected. Lastly, we include an additional scene-level GIST feature that updates the posterior probabilities of all classes. This is considered one action, and takes about 0.3 seconds. This setting always uses the MRF model to properly update the class probabilities with GIST observations. This brings another small boost in performance. The results are shown in Table 1. Visualizing the learned weights in Figure 2, we note that the GIST action is learned to never be taken in the greedy (γ = 0) setting, but is learned to be taken with a higher value of γ. It is additionally informative to consider the action trajectories of different policies in Figure 4. Figure 4: Visualizing the action trajectories of different policies. Action selection traces are plotted in orange over many episodes; the size of the blue circles corresponds to the increase in AP obtained by the action. We see that the Random policy selects actions and obtains rewards randomly, while the Oracle policy obtains all rewards in the first few actions. The Fixed Order policy selects actions in a static optimal order. Our policy does not stick to a static order but selects actions dynamically to maximize the rewards obtained early on. Table 1: The areas under the AP vs.
Time curve for different experimental conditions.

Bounds   Random   Fixed Order   RL      RL w/ GIST   Oracle
(0,20)   0.250    0.342         0.378   0.382        0.488
(0,10)   0.119    0.240         0.266   0.267        0.464
(5,15)   0.257    0.362         0.418   0.420        0.530

5 Conclusion We presented a method for learning “closed-loop” policies for multi-class object recognition, given existing object detectors and classifiers and a metric to optimize. The method learns the optimal policy using reinforcement learning, by observing execution traces in training. If detection on an image is cut off after only half the detectors have been run, our method does 66% better than a random ordering, and 14% better than an intelligent baseline. In particular, our method learns to take actions with no intermediate reward in order to improve the overall performance of the system. As always with reinforcement learning problems, defining the reward function requires some manual work. Here, we derive it for the novel detection AP vs. Time evaluation that we suggest is useful for evaluating efficiency in recognition. Although computation devoted to scheduling actions is less significant than the computation due to running the actions, the next research direction is to explicitly consider this decision-making cost; the same goes for feature computation costs. Additionally, it is interesting to consider actions defined not just by object category but also by spatial region. The code for our method is available at http://sergeykarayev.com/work/timely/. Acknowledgments This research was made with Government support under and awarded by DoD, Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a. References [1] M Everingham, L Van Gool, C K I Williams, J Winn, and A Zisserman. The PASCAL VOC Challenge. http://www.pascal-network.org/challenges/VOC/, 2010. [2] N Dalal and B Triggs. Histograms of Oriented Gradients for Human Detection. In CVPR, pages 886–893, 2005.
[3] David G Lowe. Distinctive Image Features from Scale-Invariant Keypoints. IJCV, 60(2):91–110, November 2004.
[4] Pedro F Felzenszwalb, Ross B Girshick, David McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. PAMI, 32(9):1627–1645, September 2010.
[5] Andrea Vedaldi, Varun Gulshan, Manik Varma, and Andrew Zisserman. Multiple kernels for object detection. In ICCV, pages 606–613, September 2009.
[6] Aude Oliva and Antonio Torralba. Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope. IJCV, 42(3):145–175, 2001.
[7] Antonio Torralba, Kevin P Murphy, and William T Freeman. Contextual Models for Object Detection Using Boosted Random Fields. MIT CSAIL Technical Report, 2004.
[8] Carolina Galleguillos and Serge Belongie. Context based object categorization: A critical survey. Computer Vision and Image Understanding, 114(6):712–722, June 2010.
[9] Santosh K Divvala, Derek Hoiem, James H Hays, Alexei A Efros, and Martial Hebert. An empirical study of context in object detection. In CVPR, pages 1271–1278, June 2009.
[10] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, 2001.
[11] Minmin Chen, Zhixiang (Eddie) Xu, Kilian Q Weinberger, Olivier Chapelle, and Dor Kedem. Classifier Cascade for Minimizing Feature Evaluation Cost. In AISTATS, 2012.
[12] Sudheendra Vijayanarasimhan and Ashish Kapoor. Visual Recognition and Detection Under Bounded Computational Resources. In CVPR, pages 1006–1013, 2010.
[13] Tianshi Gao and Daphne Koller. Active Classification based on Value of Classifier. In NIPS, 2011.
[14] Shipeng Yu, Balaji Krishnapuram, Romer Rosales, and R Bharat Rao. Active Sensing. In AISTATS, pages 639–646, 2009.
[15] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[16] Nicholas Roy and Geoffrey Gordon. Exponential Family PCA for Belief Compression in POMDPs. In NIPS, 2002.
[17] Cody Kwok and Dieter Fox. Reinforcement Learning for Sensing Strategies. In IROS, 2004.
[18] Su-In Lee, Varun Ganapathi, and Daphne Koller. Efficient Structure Learning of Markov Networks using L1-Regularization. In NIPS, 2006.
[19] Ariel Jaimovich and Ian Mcgraw. FastInf: An Efficient Approximate Inference Library. Journal of Machine Learning Research, 11:1733–1736, 2010.
[20] Chaitanya Desai, Deva Ramanan, and Charless Fowlkes. Discriminative models for multi-class object layout. In ICCV, pages 229–236, September 2009.
[21] Pedro F Felzenszwalb, Ross B Girshick, and David McAllester. Cascade object detection with deformable part models. In CVPR, pages 2241–2248. IEEE, June 2010.
|
2012
|
19
|
4,552
|
3D Object Detection and Viewpoint Estimation with a Deformable 3D Cuboid Model Sanja Fidler TTI Chicago fidler@ttic.edu Sven Dickinson University of Toronto sven@cs.toronto.edu Raquel Urtasun TTI Chicago rurtasun@ttic.edu Abstract This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the well-acclaimed deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. Our model reasons about face visibility patterns called aspects. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. Inference then entails sliding and rotating the box in 3D and scoring object hypotheses. While for inference we discretize the search space, the variables are continuous in our model. We demonstrate the effectiveness of our approach in indoor and outdoor scenarios, and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2]. 1 Introduction Estimating semantic 3D information from monocular images is an important task in applications such as autonomous driving and personal robotics. Consider, for example, the case of an autonomous agent driving around a city. In order to properly react to dynamic situations, such an agent needs to reason about which objects are present in the scene, as well as their 3D location, orientation and 3D extent. Likewise, a home robot requires accurate 3D information in order to navigate in cluttered environments as well as grasp and manipulate objects.
While impressive performance has been achieved for instance-level 3D object recognition [3], category-level 3D object detection has proven to be a much harder task, due to intra-class variation as well as appearance variation due to viewpoint changes. The most common approach to 3D detection is to discretize the viewing sphere into bins and train a 2D detector for each viewpoint [4, 5, 1, 6]. However, these approaches output rather weak 3D information, where typically a 2D bounding box around the object is returned along with an estimated discretized viewpoint. In contrast, object-centered approaches represent and reason about objects using more sophisticated 3D models. The main idea is to index (or vote) into a parameterized pose space with local geometric [7] or appearance features, that bear only weak viewpoint dependencies [8, 9, 10, 11]. The main advantage of this line of work is that it enables a continuous pose representation [10, 11, 12, 8], 3D bounding box prediction [8], and potentially requires fewer training examples due to its more compact visual representation. Figure 1: Left: Our deformable 3D cuboid model. Right: Viewpoint angle θ. Unfortunately, these approaches work with weaker appearance models that cannot compete with current discriminative approaches [1, 6, 13]. Recently, Hedau et al. [2] proposed to extend the 2D HOG-based template detector of [14] to predict 3D cuboids. However, since the model represents the object's appearance as a rigid template in 3D, its performance has been shown to be inferior to (2D) deformable part-based models (DPMs) [1]. In contrast, in this paper we extend DPM to reason in 3D. Our model represents an object class with a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box (see Fig. 1). Towards this goal, we introduce the notion of a stitching point, which enables the deformation between the faces and the cuboid to be encoded efficiently.
We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation due to viewpoint. We reason about different face visibility patterns called aspects [15]. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. In inference, our model outputs 2D along with oriented 3D bounding boxes around the objects. This enables the estimation of the object's viewpoint, which is a continuous variable in our representation. We demonstrate the effectiveness of our approach in indoor [2] and outdoor scenarios [16], and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2]. 2 Related work The most common way to tackle 3D detection is to represent a 3D object by a collection of independent 2D appearance models [4, 5, 1, 6, 13], one for each viewpoint. Several authors augmented the multi-view representation with weak 3D information by linking the features or parts across views [17, 18, 19, 20, 21]. This allows for a dense representation of the viewing sphere by morphing related near-by views [12]. Since these methods usually require a significant amount of training data, renderings of synthetic CAD models have been used to supplement under-represented views or provide supervision for training object parts or object geometry [22, 13, 8]. Object-centered approaches represent object classes with a 3D model typically equipped with view-invariant geometry and appearance [7, 23, 24, 8, 9, 10, 11, 25]. While these types of models are attractive as they enable continuous viewpoint representations, their detection performance has typically been inferior to 2D deformable models. Deformable part-based models (DPMs) [1] are nowadays arguably the most successful approach to category-level 2D detection.
Towards 3D, DPMs have been extended to reason about object viewpoint by training the mixture model with viewpoint supervision [6, 13]. Pepik et al. [13] took a step further by incorporating supervision also at the part level. Consistency was enforced by forcing the parts for different 2D viewpoint models to belong to the same set of 3D parts in the physical space. However, all these approaches base their representation in 2D and thus output only 2D bounding boxes along with a discretized viewpoint. The closest work to ours is [2], which models an object with a rigid 3D cuboid, composed of independently trained faces without deformations or parts. Our model shares certain similarities with this work, but has a set of important differences. First, our model is hierarchical and deformable: we allow deformations of the faces, while the faces themselves are composed of deformable parts. We also explicitly reason about the visibility patterns of the cuboid model and train the model accordingly. Furthermore, all the parameters in our model are trained jointly using a latent SVM formulation. These differences are important, as our approach outperforms [2] by a significant margin. Figure 2: Aspects, together with the range of θ that they cover, for (left) cars and (right) beds. Finally, in concurrent work, Xiang and Savarese [26] introduced a deformable 3D aspect model, where an object is represented as a set of planar parts in 3D. This model shares many similarities with our approach, however, unlike ours, it requires a collection of CAD models in training. 3 A Deformable 3D Cuboid Model In this paper, we are interested in the problem of estimating, given a single image, the 3D location and orientation of the objects present in the scene. We parameterize the problem as the one of estimating a tight 3D bounding box around each object.
Our 3D box is oriented, as we reason about the correspondences between the faces in the estimated bounding box and the faces of our model (i.e., which face is the top face, front face, etc). Towards this goal, we represent an object class as a deformable 3D cuboid, which is composed of 6 deformable faces, i.e., their locations and scales can deviate from their anchors on the cuboid. The model for each cuboid’s face is a 2D template that represents the appearance of the object in view-rectified coordinates, i.e., where the face is frontal. Additionally, we augment each face with parts, and employ a deformation model between the locations of the parts and the anchor points on the face they belong to. We assume that any viewpoint of an object in the image domain can be modeled by rotating our cuboid in 3D, followed by perspective projection onto the image plane. Thus inference involves sliding and rotating the deformable cuboid in 3D and scoring the hypotheses. A necessary component of any 3D model is to properly reason about the face visibility of the object (in our case, the cuboid). Assuming a perspective camera, for any given viewpoint, at most 3 faces are visible in an image. Topologically different visibility patterns define different aspects [15] of the object. Note that a cuboid can have up to 26 aspects, however, not all necessarily occur for each object class. For example, for objects supported by the floor, the bottom face will never be visible. For cars, typically the top face is not visible either. Our model only reasons about the occurring aspects of the object class of interest, which we estimate from the training data. Note that the visibility, and thus the aspect, is a function of the 3D orientation and position of a cuboid hypothesis with respect to the camera. We define θ to be the angle between the outer normal to the front face of the cuboid hypothesis, and the vector connecting the camera and the center of the 3D box. We refer the reader to Fig. 
1 for a visualization. Assuming a camera overlooking the center of the cuboid, Fig. 2 shows the range of the cuboid orientation angle on the viewing sphere for which each aspect occurs in the datasets of [2, 16], which we employ for our experiments. Note however, that in inference we do not assume that the object’s center lies on the camera’s principal axis. In order to make the cuboid deformable, we introduce the notion of stitching point, which is a point on the box that is common to all visible faces for a particular aspect. We incorporate a quadratic deformation cost between the locations of the faces and the stitching point to encourage the cuboid to be as rigid as possible. We impose an additional deformation cost between the visible faces, ensuring that their sizes match when we stitch them into a cuboid hypothesis. Our model represents each aspect with its own set of weights. To reduce the computational complexity and impose regularization, we share the face and part templates across all aspects, as well as the deformations between them. However, the deformations between the faces and the cuboid are aspect specific as they depend on the stitching point. We formally define the model by a (6·(n+1)+1)-tuple ({(Pi, Pi,1, . . . , Pi,n)}i=1,..,6, b) where Pi models the i-th face, Pi,j is a model for the j-th part belonging to face i, and b is a real valued bias term. For ease of exposition, we assume each face to have the same number of parts, n; however, the framework is general and allows the numbers of parts to vary across faces. 
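The viewpoint angle θ defined above, between the outer normal of the cuboid's front face and the vector connecting the camera and the box center, can be computed directly from those two vectors. A minimal sketch (all function and variable names are ours, not the authors'):

```python
import math

def viewpoint_angle(front_normal, box_center, camera=(0.0, 0.0, 0.0)):
    """Viewpoint angle theta (radians): the angle between the outer normal
    of the cuboid's front face and the vector from the camera to the
    center of the 3D box."""
    v = [c - o for c, o in zip(box_center, camera)]  # camera-to-box vector
    nn = math.sqrt(sum(x * x for x in front_normal))
    nv = math.sqrt(sum(x * x for x in v))
    dot = sum(a * b for a, b in zip(front_normal, v))
    cos_t = max(-1.0, min(1.0, dot / (nn * nv)))  # clamp for safe acos
    return math.acos(cos_t)
```

For a box straight ahead of the camera with its front-face normal pointing back along the viewing direction, the angle is π; a sideways-facing front face gives π/2.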
Figure 3: Dataset [2] statistics for training our cuboid model (left and middle) and DPM [1] (right): the number of training examples per cuboid aspect (R−T, L−T, F−T, F−R−T, F−L−T) and per cuboid face (front, left, right, top) for BBOX3D, and per mixture component for DPM, on the bed class. For each aspect a, we define each of its visible faces by a tuple (F_i, r_{a,i}, d^{stitch}_{a,i}, b_a), where F_i is a filter for the i-th face, r_{a,i} is a two-dimensional vector specifying the position of the i-th face relative to the position of the stitching point in the rectified view, and d^{stitch}_{a,i} is a four-dimensional vector specifying coefficients of a quadratic function defining a deformation cost for each possible placement of the face relative to the position of the stitching point. Here, b_a is a bias term that is aspect specific and allows us to calibrate the scores across aspects with different numbers of visible faces. Note that F_i will be shared across aspects and thus we omit the index a. The model representing each part is face-specific, and is defined by a 3-tuple (F_{i,j}, r_{i,j}, d_{i,j}), where F_{i,j} is a filter for the j-th part of the i-th face, r_{i,j} is a two-dimensional vector specifying an "anchor" position for part j relative to the root position of face i, and d_{i,j} is a four-dimensional vector specifying coefficients of a quadratic function defining a deformation cost for each possible placement of the part relative to the anchor position on the face. Note that the parts are defined relative to the face and are thus independent of the aspects. We thus share them across aspects. The appearance templates as well as the deformation parameters in the model are defined for each face in a canonical view where that face is frontal. We thus score a face hypothesis in the rectified view that makes the hypothesis frontal.
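The parameterization above maps naturally onto a few nested containers: shared face and part filters, plus aspect-specific stitching parameters and biases. A sketch of how the parameters could be organized (the class and field names are ours, not the authors'):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Sequence

@dataclass
class Part:
    F: Sequence   # part filter F_{i,j} (e.g., a HOG template)
    r: Sequence   # 2-D anchor offset r_{i,j} relative to the face root
    d: Sequence   # 4 quadratic deformation coefficients d_{i,j}

@dataclass
class Face:
    F: Sequence                        # face filter F_i, shared across aspects
    parts: List[Part] = field(default_factory=list)

@dataclass
class CuboidModel:
    faces: List[Face]                         # the 6 face models
    r_stitch: Dict[int, Dict[int, Sequence]]  # aspect -> face -> offset r_{a,i}
    d_stitch: Dict[int, Dict[int, Sequence]]  # aspect -> face -> d^stitch_{a,i}
    aspect_bias: Dict[int, float]             # aspect -> bias b_a
    bias: float = 0.0                         # global bias b
```

Keeping the face and part templates in one shared list while indexing the stitching deformations by aspect mirrors the weight sharing described in the text.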
Each pair of parallel faces shares a homography, and thus at most three rectifications are needed for each viewpoint hypothesis θ. In indoor scenarios, we estimate the 3 orthogonal vanishing points and assume a Manhattan world. As a consequence only 3 rectifications are necessary altogether. In the outdoor scenario, we assume that at least the vertical vanishing point is given, or equivalently, that the orientation (but not position) of the ground plane is known. As a consequence, we only need to search for a 1-D angle θ, i.e., the azimuth, in order to estimate the rotation of the 3D box. A sliding window approach is then used to score the cuboid hypotheses, by scoring the parts, faces and their deformations in their own rectified view, as well as the deformations of the faces with respect to the stitching point. Following 2D deformable part-based models [1], we use a pyramid of HOG features to describe each face-specific rectified view, H(i, θ), and score a template for a face as follows:

score(p_i, θ) = \sum_{u',v'} F_i(u', v') · H[u_i + u'; v_i + v'; i, θ]   (1)

where p_i = (u_i, v_i, l_i) specifies the position (u_i, v_i) and level l_i of the face filters in the face-rectified feature pyramids. We score each part p_{i,j} = (u_{i,j}, v_{i,j}, l_{i,j}) in a similar fashion, but the pyramid is indexed at twice the resolution of the face.
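Eq. (1) is a plain cross-correlation of the face filter with the rectified HOG map. A brute-force sketch, where nested lists indexed as [row][col][bin] stand in for one pyramid level (names are ours):

```python
def template_score(H, F, u0, v0):
    """Eq. (1): score of filter F placed at (u0, v0) in feature map H.
    H and F are indexed as [row][col][bin]."""
    s = 0.0
    for du, frow in enumerate(F):
        for dv, fcell in enumerate(frow):
            hcell = H[u0 + du][v0 + dv]
            s += sum(f * h for f, h in zip(fcell, hcell))
    return s

def score_map(H, F):
    """Dense scores for every valid placement of F in H (sliding window)."""
    rows = len(H) - len(F) + 1
    cols = len(H[0]) - len(F[0]) + 1
    return [[template_score(H, F, u, v) for v in range(cols)]
            for u in range(rows)]
```

In practice this dense correlation is done with optimized convolution routines; the loop form above only states what is being computed.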
We define the compatibility score between the parts and the corresponding face, denoted as p_i = {p_i, {p_{i,j}}_{j=1,...,n}}, as the sum over the part scores and their deformations with respect to the anchor positions on the face:

score_parts(p_i, θ) = \sum_{j=1}^{n} ( score(p_{i,j}, θ) − d_{i,j} · φ_d(p_i, p_{i,j}) ).   (2)

We thus define the score of a 3D cuboid hypothesis to be the sum of the scores of each face and its parts, as well as the deformation of each face with respect to the stitching point and the deformation of the faces with respect to each other:

score(x, θ, s, p) = \sum_{i=1}^{6} V(i, a) [ score(p_i, θ) − d^{stitch}_{a,i} · φ^{stitch}_d(p_i, s, θ) ] − \sum_{i>ref} V(i, a) · d^{face}_{i,ref} · φ^{face}_d(p_i, p_{ref}, θ) + \sum_{i=1}^{6} V(i, a) · score_parts(p_i, θ) + b_a

where p = (p_1, ..., p_6) and V(i, a) is a binary variable encoding whether face i is visible under aspect a. Note that a = a(θ, s) can be deterministically computed from the rotation angle θ and the position of the stitching point s (which we assume to always be visible), which in turn determines the face visibility V. Figure 4: Learned models for (left) bed, (right) car. We use ref to index the first visible face in the aspect model, and

φ_d(p_i, p_{i,j}, θ) = φ_d(du, dv) = (du, dv, du², dv²)   (3)

are the part deformation features, computed in the rectified image of face i implied by the 3D angle θ. As in [1], we employ a quadratic deformation cost to model the relationships between the parts and the anchor points on the face, and define (du_{i,j}, dv_{i,j}) = (u_{i,j}, v_{i,j}) − (2 · (u_i, v_i) + r_{i,j}) as the displacement of the j-th part with respect to its anchor on the rectified i-th face. The deformation features φ^{stitch}_d(p_i, s, θ) between the face p_i and the stitching point s are defined via (du_i, dv_i) = (u_i, v_i) − ((u(s, i), v(s, i)) + r_{a,i}). Here, (u(s, i), v(s, i)) is the position of the stitching point in the rectified coordinates corresponding to face i and level l.
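The quadratic deformation cost in Eq. (3) is just a dot product between the learned coefficients and the displacement features (du, dv, du², dv²). A small sketch (function and argument names are ours):

```python
def deformation_cost(d, anchor, placement):
    """Quadratic deformation cost d · (du, dv, du^2, dv^2) from Eq. (3),
    where (du, dv) is the displacement of a placement from its anchor
    in rectified coordinates."""
    du = placement[0] - anchor[0]
    dv = placement[1] - anchor[1]
    phi = (du, dv, du * du, dv * dv)
    return sum(c * f for c, f in zip(d, phi))
```

With coefficients d = (0, 0, 1, 1) the cost is the squared Euclidean displacement from the anchor; the linear terms let the learned model prefer placements systematically offset from the anchor.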
We define the deformation cost between the faces to be a function of their relative dimensions:

φ^{face}_d(p_i, p_k, θ) = 0 if max(e_i, e_k) / min(e_i, e_k) < 1 + ϵ, and ∞ otherwise,   (4)

with e_i and e_k the lengths of the common edge between faces i and k. We define the deformation of a face with respect to the stitching point to also be quadratic. It is defined in the rectified view, and thus depends on θ. We additionally incorporate a bias term for each aspect, b_a, to make the scores of multiple aspects comparable when we combine them into a full cuboid model. Given an image x, the score of a hypothesized 3D cuboid can be obtained as the dot product between the model's parameters and a feature vector, i.e., score(x, θ, s, p) = w_a · Φ(x, a(θ, s), p), with

w_a = (F'_1, ..., F'_6, F'_{1,1}, ..., F'_{6,n}, d_{1,1}, ..., d_{6,n}, d^{stitch}_{a,1}, ..., d^{stitch}_{a,6}, d^{face}_{1,2}, ..., d^{face}_{5,6}, b_a),   (5)

and the feature vector

Φ(x, a(θ, s), p) = ( Ĥ(p_1, i, θ), ..., Ĥ(p_{1,1}, i, θ), ..., −φ̂_d(p_1, p_{1,1}), ..., −φ̂_d(p_6, p_{6,n}), −φ̂^{stitch}_d(p_1, s, θ), ..., −φ̂^{stitch}_d(p_6, s, θ), −φ̂^{face}_d(p_1, p_2), ..., 1 ),

where φ̂ includes the visibility score in the feature vector, e.g., φ̂(i, ·) = V(i, a) · φ(i, ·). Inference: Inference in this model can be done by computing

f_w(x) = max_{θ,s,p} w_a · Φ(x, a(θ, s), p).

This can be solved exactly via dynamic programming, where the score is first computed for each θ, i.e., max_{s,p} w_a · Φ(x, a(θ, s), p), and then a max is taken over the angles θ. We use a discretization of 20 deg for the angles. To get the score for each θ, we first compute the feature responses for the part and face templates (Eq. (1)) using a sliding window approach in the corresponding feature pyramids. As in [1], distance transforms are used to compute the deformation scores of the parts efficiently, that is, Eq. (2). The score for each face simply sums the response of the face template and the scores of the parts.
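The distance transforms referred to above compute, for every placement p, the best deformation-penalized score over all part placements q; [1] does this in linear time with a lower-envelope algorithm. The brute-force O(n²) version below just states the definition for the 1-D case (names are ours):

```python
def distance_transform_1d(f, d1, d2):
    """For each placement p, the best deformation-penalized score
    D[p] = max_q ( f[q] - d1*(p - q) - d2*(p - q)**2 ), together with
    the maximizing q. Brute force O(n^2); [1] achieves O(n)."""
    n = len(f)
    D, arg = [], []
    for p in range(n):
        best_q = max(range(n),
                     key=lambda q: f[q] - d1 * (p - q) - d2 * (p - q) ** 2)
        arg.append(best_q)
        D.append(f[best_q] - d1 * (p - best_q) - d2 * (p - best_q) ** 2)
    return D, arg
```

Applying the 1-D transform along rows and then columns gives the 2-D deformation scores used when combining part responses with their face.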
We again use distance transforms to compute the deformation scores for each face and the stitching point, which is carried out in the rectified coordinates for each face. We then compute the deformation scores between the faces in Eq. (4), which can be performed efficiently due to the fact that sides of the same length along one dimension (horizontal or vertical) in the coordinates of face i will also be constant along the corresponding line when projected to the coordinate system of face j. Thus, computing the side length ratios of two faces is not quadratic in the number of pixels but only in the number of horizontal or vertical lines. Finally, we reproject the scores to the image coordinate system and sum them to get the score for each θ.

Table 1: Detection performance (measured in AP at 0.5 IOU overlap) for the bed dataset of [2].

                    Detectors' performance          Layout rescoring
                    DPM [1]  3D det.  combined     DPM [1]  3D det.  combined
Hedau et al. [2]    54.2%    51.3%    –            –        59.6%    62.8%
ours                55.6%    59.4%    60.5%        60.0%    64.6%    63.8%

Table 2: 3D detection performance in AP (50% IOU overlap of convex hulls and faces).

3D measure     DPM fit3D  BBOX3D  combined  BBOX3D + layout  comb. + layout
convex hull    48.2%      53.9%   53.9%     57.8%            57.1%
face overlap   16.3%      33.0%   34.4%     33.5%            33.6%

Figure 5: Precision-recall curves for (left) 2D detection (AP: DPM 0.556, 3D BBOX 0.594, combined 0.605), (middle) convex hull overlap (AP: DPM fit3D 0.482, 3D BBOX 0.539, combined 0.539), (right) face overlap (AP: DPM fit3D 0.163, 3D BBOX 0.330, combined 0.344).
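The face-compatibility term of Eq. (4) used in this step reduces to a ratio test on the shared edge lengths. A sketch (the default ϵ value and all names are ours):

```python
def face_compatibility_cost(e_i, e_k, eps=0.05):
    """Eq. (4): zero cost when the lengths of the common edge of two
    faces agree to within a factor of (1 + eps), infinite cost
    otherwise, forcing the faces to stitch into a consistent cuboid."""
    ratio = max(e_i, e_k) / min(e_i, e_k)
    return 0.0 if ratio < 1.0 + eps else float("inf")
```

Because the cost is either zero or infinite, it acts as a hard constraint: hypotheses whose faces cannot be stitched into a near-rigid box are pruned rather than merely penalized.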
Learning: Given a set of training samples D = (⟨x_1, y_1, bb_1⟩, ..., ⟨x_N, y_N, bb_N⟩), where x is an image, y_i ∈ {−1, 1}, and bb ∈ R^{8×2} are the eight coordinates of the 3D bounding box in the image, our goal is to learn the weights w = [w_{a_1}, ..., w_{a_P}] for all P aspects in Eq. (5). To train our model using partially labeled data, we use a latent SVM formulation [1]; however, frameworks such as latent structural SVMs [27] are also possible. To initialize the full model, we first learn a deformable face+parts model for each face independently, where the faces of the training examples are rectified to be frontal prior to training. We estimate the different aspects of our 3D model from the statistics of the training data, and compute for each training cuboid the relative positions v_{a,i} of face i and the stitching point in the rectified view of each face. We then perform joint training of the full model, treating the training cuboid and the stitching point as latent, however, requiring that each face filter and the face annotation overlap more than 70%. Following [1], we utilize a stochastic gradient descent approach which alternates between solving for the latent variables and updating the weights w. Note that this algorithm is only guaranteed to converge to a local optimum, as the latent variables make the problem non-convex. 4 Experiments We evaluate our approach on two datasets, the dataset of [2] as well as KITTI [16], an autonomous driving dataset. To our knowledge, these are the only datasets which have been labeled with 3D bounding boxes. We begin our experimentation with the indoor scenario [2]. The bedroom dataset contains 181 train and 128 test images. To enable a comparison with the DPM detector [1], we trained a model with 6 mixtures and 8 parts using the same training instances but employing 2D bounding boxes. Our 3D bed model was trained with two parts per face. Fig. 3 shows the statistics of the dataset in terms of the number of training examples for each aspect (where F-R-T denotes an aspect for which the front, right and top faces are visible), as well as per face. Note that the fact that the dataset is unbalanced (fewer examples for aspects with two faces) does not affect our approach too much, as only the face-stitching point deformation parameters are aspect specific. As we share the weights among the aspects, the number of training instances for each face is significantly higher (Fig. 3, middle). We compare this to DPM in Fig. 3, right. Our method can better exploit the training data by factoring out the viewpoint dependence of the training examples. We begin our quantitative evaluation by using our model to reason about 2D detection. The 2D bounding boxes for our model are computed by fitting a 2D box around the convex hull of the projection of the predicted 3D box. We report average precision (AP), where we require that the output 2D boxes overlap with the ground-truth boxes at least 50% using the intersection-over-union (IOU) criterion. The precision-recall curves are shown in Fig. 5. We compare our approach to the deformable part model (DPM) [1] and the cuboid model of Hedau et al. [2]. As shown in Table 1, we outperform the cuboid model of [2] by 8.1% and DPM by 3.8%. This is notable, as to the best of our knowledge, this is the first time that a 3D approach outperforms the DPM. Figure 6: Detection examples obtained with our model on the bed dataset [2]. Figure 7: Detections in 3D + layout. Examples of detections of our model are shown in Fig. 6. A standard way to improve the detector's performance has been to rescore object detections using contextual information [1]. Following [2], we use two types of context. We first combined our detector with the 2D-DPM [1] to see whether the two sources of information complement each other.
The second type of context is at the scene level, where we exploit the fact that the objects in indoor environments do not penetrate the walls and usually respect certain size ratios in 3D. We combine the 3D and 2D detectors using a two-step process, where first the 2D detector is run inside the bounding boxes produced by our cuboid model. A linear SVM that utilizes both scores as input is then employed to produce a score for the combined detection. While we observe a slight improvement in performance (1.1%), it seems that our cuboid model is already scoring the correct boxes well. This is in contrast to the cuboid model of [2], where the increase in performance is more significant due to the poorer accuracy of their 3D approach. Following [2], we use an estimate of the room layout to rescore the object hypotheses at the scene level. We use the approach by Schwing et al. [28] to estimate the layout. To train the re-scoring classifier, we use the image-relative width and height features as in [1], footprint overlap between the 3D box and the floor as in [2], as well as 3D statistics such as the distance between the object 3D box and the wall relative to the room height and the ratio between the object and room height in 3D. This further increases our performance by 5.2% (Table 1). Examples of 3D reconstruction of the room and our predicted 3D object hypotheses are shown in Fig. 7. To evaluate the 3D performance of our detector we use the convex hull overlap measure as introduced in [2]. Here, instead of computing the overlap between the predicted boxes, we require that the convex hulls of our 3D hypotheses projected to the image plane and ground-truth annotations overlap at least 50% in IOU measure. Table 2 reports the results and shows that only little is lost in performance due to a stricter overlap measure. (Note that the numbers for our and [2]'s versions of DPM slightly differ; the difference is likely due to how the negative examples are sampled during training, as the dataset has a positive example in each training image.) Figure 8: KITTI: examples of car detections. (top) Ground truth; (bottom) our 3D detections, augmented with best fitting CAD models to visualize inferred 3D box orientations. Since our model also predicts the locations of the dominant object faces (and thus the 3D object orientation), we would like to quantify its accuracy. We introduce an even stricter measure where we require that also the predicted cuboid faces overlap with the faces of the ground-truth cuboids. In particular, a hypothesis is correct if the average of the overlaps between top faces and vertical faces exceeds 50% IOU. We compare the results of our approach to DPM [1]. Note, however, that [1] returns only 2D boxes and hence a direct comparison is not possible. We thus augment the original DPM with 3D information in the following way. Since the three dominant orientations of the room, and thus the objects, are known (estimated via the vanishing points), we can find a 3D box whose projection best overlaps with the output of the 2D detector. This can be done by sliding a cuboid (whose dimensions match our cuboid model) in 3D to best fit the 2D bounding box. Our approach outperforms the 3D augmented DPM by a significant margin of 16.7%. We attribute this to the fact that our cuboid is deformable and thus the faces localize more accurately on the faces of the object. We also conducted preliminary tests for our model on the autonomous driving dataset KITTI [16]. We trained our model with 8 aspects (estimated from the data) and 4 parts per face. An example of a learned aspect model is shown in Fig. 4. Note that the rectangular patches on the faces represent the parts, and color coding is used to depict the learned part and face deformation weights.
We can observe that the model effectively and compactly factors out the appearance changes due to changes in viewpoint. Examples of detections are shown in Fig. 8. The top rows show ground-truth annotations, while the bottom rows depict our predicted 3D boxes. To also showcase the viewpoint prediction of our detector, we insert a CAD model inside each estimated 3D box, matching its orientation in 3D. In particular, for each detection we automatically chose a CAD model out of a collection of 80 models whose 3D bounding box best matches the dimensions of the predicted box. One can see that our 3D detector is able to predict the viewpoints of the objects well, as well as the type of car. 5 Conclusion We proposed a novel approach to 3D object detection, which extends the well-acclaimed DPM to reason in 3D by means of a deformable 3D cuboid. Our cuboid allows for deformations at the face level via a stitching point as well as deformations between the faces and the parts. We demonstrated the effectiveness of our approach in indoor and outdoor scenarios and showed that our approach outperforms [1] and [2] in terms of 2D and 3D estimation. In future work, we plan to reason jointly about the 3D scene layout and the objects in order to improve the performance in both tasks. Acknowledgements. S.F. has been supported in part by DARPA, contract number W911NF-10-20060. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either express or implied, of the Army Research Laboratory or the U.S. Government. References [1] Felzenszwalb, P. F., Girshick, R. B., McAllester, D., and Ramanan, D. (2010) Object detection with discriminatively trained part-based models. IEEE TPAMI, 32, 1627–1645. [2] Hedau, V., Hoiem, D., and Forsyth, D. (2010) Thinking inside the box: Using appearance models and context based on room geometry. ECCV, vol. 6, pp. 224–237.
[3] Hinterstoisser, S., Lepetit, V., Ilic, S., Fua, P., and Navab, N. (2010) Dominant orientation templates for real-time detection of texture-less objects. CVPR. [4] Schneiderman, H. and Kanade, T. (2000) A statistical method for 3d object detection applied to faces and cars. CVPR, pp. 1746–1759. [5] Torralba, A., Murphy, K. P., and Freeman, W. T. (2007) Sharing visual features for multiclass and multiview object detection. IEEE TPAMI, 29, 854–869. [6] Gu, C. and Ren, X. (2010) Discriminative mixture-of-templates for viewpoint classification. ECCV, pp. 408–421. [7] Lowe, D. (1991) Fitting parameterized three-dimensional models to images. IEEE TPAMI, 13, 441–450. [8] Liebelt, J., Schmid, C., and Schertler, K. (2008) Viewpoint-independent object class detection using 3d feature maps. CVPR. [9] Yan, P., Khan, S. M., and Shah, M. (2007) 3d model based object class detection in an arbitrary view. ICCV. [10] Glasner, D., Galun, M., Alpert, S., Basri, R., and Shakhnarovich, G. (2011) Viewpoint-aware object detection and pose estimation. ICCV. [11] Savarese, S. and Fei-Fei, L. (2007) 3d generic object categorization, localization and pose estimation. ICCV. [12] Su, H., Sun, M., Fei-Fei, L., and Savarese, S. (2009) Learning a dense multi-view representation for detection, viewpoint classification and synthesis of object categories. ICCV. [13] Pepik, B., Stark, M., Gehler, P., and Schiele, B. (2012) Teaching 3d geometry to deformable part models. Belongie, S., Blake, A., Luo, J., and Yuille, A. (eds.), CVPR. [14] Dalal, N. and Triggs, B. (2005) Histograms of oriented gradients for human detection. CVPR. [15] Koenderink, J. and van Doorn, A. (1976) The singularities of the visual mappings. Bio. Cyber., 24, 51–59. [16] Geiger, A., Lenz, P., and Urtasun, R. (2012) Are we ready for autonomous driving? CVPR. [17] Kushal, A., Schmid, C., and Ponce, J. (2007) Flexible object models for category-level 3d object recognition. CVPR.
[18] Thomas, A., Ferrari, V., Leibe, B., Tuytelaars, T., Schiele, B., and Gool, L. V. (2006) Toward multi-view object class detection. CVPR. [19] Hoiem, D., Rother, C., and Winn, J. (2007) 3d layoutcrf for multi-view object class recognition and segmentation. CVPR. [20] Sun, M., Su, H., Savarese, S., and Fei-Fei, L. (2009) A multi-view probabilistic model for 3d oblect classes. CVPR. [21] Payet, N. and Todorovic, S. (2011) Probabilistic pose recovery using learned hierarchical object models. ICCV. [22] Stark, M., Goesele, M., and Schiele, B. (2010) Back to the future: Learning shape models from 3d cad data. British Machine Vision Conference. [23] Brooks, R. A. (1983) Model-based three-dimensional interpretations of two-dimensional images. IEEE TPAMI, 5, 140–150. [24] Dickinson, S. J., Pentland, A. P., and Rosenfeld, A. (1992) 3-d shape recovery using distributed aspect matching. IEEE TPAMI, 14, 174–198. [25] Sun, M., Bradski, G., Xu, B.-X., and Savarese, S. (2010) Depth-encoded hough voting for coherent object detection, pose estimation, and shape recovery. ECCV. [26] Xiang, Y. and Savarese, S. (2012) Estimating the aspect layout of object categories. CVPR. [27] Yu, C.-N. and Joachims, T. (2009) Learning structural svms with latent variables. ICML. [28] Schwing, A., Hazan, T., Pollefeys, M., and Urtasun, R. (2012) Efficient structured prediction for 3d indoor scene understanding. CVPR. 9
Risk-Aversion in Multi-armed Bandits

Amir Sani, Alessandro Lazaric, Rémi Munos
INRIA Lille - Nord Europe, Team SequeL
{amir.sani,alessandro.lazaric,remi.munos}@inria.fr

Abstract

Stochastic multi-armed bandits solve the exploration-exploitation dilemma and ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves to be more difficult than the standard multi-armed bandit setting, due in part to an exploration risk which introduces a regret associated with the variability of an algorithm. Using variance as a measure of risk, we define two algorithms, investigate their theoretical guarantees, and report preliminary empirical results.

1 Introduction

The multi-armed bandit [13] elegantly formalizes the problem of online learning with partial feedback, which encompasses a large number of real-world applications, such as clinical trials, online advertisements, adaptive routing, and cognitive radio. In the stochastic multi-armed bandit model, a learner chooses among several arms (e.g., different treatments), each characterized by an independent reward distribution (e.g., the treatment effectiveness). At each point in time, the learner selects one arm and receives a noisy reward observation from that arm (e.g., the effect of the treatment on one patient). Given a finite number of n rounds (e.g., patients involved in the clinical trial), the learner faces a dilemma between repeatedly exploring all arms and collecting reward information versus exploiting current reward estimates by selecting the arm with the highest estimated reward. Roughly speaking, the learning objective is to solve this exploration-exploitation dilemma and accumulate as much reward as possible over n rounds.
The multi-armed bandit literature typically focuses on the problem of finding a learning algorithm capable of maximizing the expected cumulative reward (i.e., the reward collected over n rounds, averaged over all possible observation realizations), thus implying that the best arm returns the highest expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not always the most desirable objective. For instance, in clinical trials, the treatment which works best on average might also have considerable variability, resulting in adverse side effects for some patients. In this case, a treatment which is less effective on average but consistently effective on different patients may be preferable to an effective but risky treatment. More generally, some applications require an effective trade-off between risk and reward.

There is no agreed-upon definition of risk. A variety of behaviours result in an uncertainty which might be deemed unfavourable for a specific application and referred to as a risk. For example, an algorithm which is consistent over multiple runs may not satisfy the desire for a solution with low variability in every single realization of the algorithm. Two foundational risk-modeling paradigms are expected utility theory [12] and the historically popular and accessible mean-variance paradigm [10]. A large part of decision-making theory focuses on defining and managing risk (see e.g., [9] for an introduction to risk from an expected-utility-theory perspective). Risk has mostly been studied in online learning within the so-called expert advice setting (i.e., adversarial full-information online learning). In particular, [8] showed that in general, although it is possible to achieve a small regret w.r.t. the expert with the best average performance, it is not possible to compete against the expert which best trades off between average return and risk.
On the other hand, it is possible to define no-regret algorithms for simplified measures of risk-return. [16] studied the case of pure risk minimization (notably variance minimization) in an online setting where at each step the learner is given a covariance matrix and must choose a weight vector that minimizes the variance. The regret is then computed over the horizon and compared to the fixed weights minimizing the variance in hindsight. In the multi-armed bandit domain, the most interesting results are by [5] and [14]. [5] introduced an analysis of the expected regret and its distribution, revealing that an anytime version of UCB [6] and UCB-V might have a large regret with some non-negligible probability (see footnote 1). This analysis is further extended by [14], who derived negative results showing that no anytime algorithm can achieve a regret with both a small expected value and exponential tails. Although these results represent an important step towards the analysis of risk within bandit algorithms, they are limited to the case where an algorithm's cumulative reward is compared to the reward obtained by pulling the arm with the highest expectation.

In this paper, we focus on the problem of competing against the arm with the best risk-return trade-off. In particular, we refer to the popular mean-variance model introduced by [10]. In Sect. 2 we introduce the notation and define the mean-variance bandit problem. In Sects. 3 and 4 we introduce two algorithms and study their theoretical properties. In Sect. 5 we report a set of numerical simulations aimed at validating the theoretical results. Finally, in Sects. 6 and 7 we discuss the setting and conclude with possible extensions. The proofs and additional experiments are reported in the extended version [15].

2 Mean-Variance Multi-armed Bandit

In this section we introduce the notation and define the mean-variance multi-armed bandit problem.
We consider the standard multi-armed bandit setting with K arms, each characterized by a distribution ν_i bounded in the interval [0, 1]. Each distribution has a mean μ_i and a variance σ_i^2. The bandit problem is defined over a finite horizon of n rounds. We denote by X_{i,s} ~ ν_i the s-th random sample drawn from the distribution of arm i. All arms and samples are independent. In the multi-armed bandit protocol, at each round t, an algorithm selects arm I_t and observes sample X_{I_t, T_{I_t,t}}, where T_{i,t} is the number of samples observed from arm i up to time t (i.e., T_{i,t} = \sum_{s=1}^{t} \mathbb{I}\{I_s = i\}). While in the standard bandit literature the objective is to select the arm leading to the highest reward in expectation (the arm with the largest expected value μ_i), here we focus on the problem of finding the arm which effectively trades off between its expected reward (i.e., the return) and its variability (i.e., the risk). Although a large number of models for the risk-return trade-off have been proposed, here we focus on the most historically popular and simple model: the mean-variance model proposed by [10], where the return of an arm is measured by its expected reward and its risk by its variance.

Definition 1. The mean-variance of an arm i with mean μ_i, variance σ_i^2, and coefficient of absolute risk tolerance ρ (see footnote 2) is defined as

    MV_i = σ_i^2 − ρ μ_i.

Thus the optimal arm is the arm with the smallest mean-variance, that is, i* = \arg\min_i MV_i. We notice that we can obtain two extreme settings depending on the value of the risk tolerance ρ. As ρ → ∞, the mean-variance of arm i tends to the opposite of its expected value μ_i and the problem reduces to the standard expected-reward maximization traditionally considered in multi-armed bandit problems. With ρ = 0, the mean-variance reduces to σ_i^2 and the objective becomes variance minimization.

Given t i.i.d. samples {X_{i,s}}_{s=1}^{t} from the distribution ν_i, we define the empirical mean-variance of arm i with t samples as

    \widehat{MV}_{i,t} = \hat{σ}_{i,t}^2 − ρ \hat{μ}_{i,t}, where \hat{μ}_{i,t} = (1/t) \sum_{s=1}^{t} X_{i,s}, \hat{σ}_{i,t}^2 = (1/t) \sum_{s=1}^{t} (X_{i,s} − \hat{μ}_{i,t})^2.   (1)

We now consider a learning algorithm A and its corresponding performance over n rounds. Similarly to a single arm i, we define its empirical mean-variance as

    \widehat{MV}_n(A) = \hat{σ}_n^2(A) − ρ \hat{μ}_n(A),   (2)

where

    \hat{μ}_n(A) = (1/n) \sum_{t=1}^{n} Z_t, \hat{σ}_n^2(A) = (1/n) \sum_{t=1}^{n} (Z_t − \hat{μ}_n(A))^2,   (3)

with Z_t = X_{I_t, T_{I_t,t}}, the reward collected by the algorithm at time t. This leads to a natural definition of the (random) regret of each single run of the algorithm as the difference in the mean-variance performance of the algorithm compared to the best arm.

Definition 2. The regret for a learning algorithm A over n rounds is defined as

    R_n(A) = \widehat{MV}_n(A) − \widehat{MV}_{i*,n}.   (4)

Given this definition, the objective is to design an algorithm whose regret decreases as the number of rounds increases (with high probability or in expectation). We notice that the previous definition actually depends on unobserved samples. In fact, \widehat{MV}_{i*,n} is computed on n samples of i* which are not actually observed when running A. This matches the definition of true regret in standard bandits (see e.g., [5]). Thus, in order to clarify the main components characterizing the regret, we introduce additional notation. Let

    Y_{i,t} = X_{i*,t} if i = i*, and Y_{i,t} = X_{i*,t'} with t' = T_{i*,n} + \sum_{j<i, j≠i*} T_{j,n} + t otherwise,

be a renaming of the samples from the optimal arm, such that while the algorithm was pulling arm i for the t-th time, Y_{i,t} is the unobserved sample from i*. The corresponding mean and variance are

    \tilde{μ}_{i,T_{i,n}} = (1/T_{i,n}) \sum_{t=1}^{T_{i,n}} Y_{i,t}, \tilde{σ}_{i,T_{i,n}}^2 = (1/T_{i,n}) \sum_{t=1}^{T_{i,n}} (Y_{i,t} − \tilde{μ}_{i,T_{i,n}})^2.   (5)

Given these additional definitions, we can rewrite the regret as (see App. A.1 in [15])

    R_n(A) = (1/n) \sum_{i≠i*} T_{i,n} [ (\hat{σ}_{i,T_{i,n}}^2 − ρ \hat{μ}_{i,T_{i,n}}) − (\tilde{σ}_{i,T_{i,n}}^2 − ρ \tilde{μ}_{i,T_{i,n}}) ]
           + (1/n) \sum_{i=1}^{K} T_{i,n} (\hat{μ}_{i,T_{i,n}} − \hat{μ}_n(A))^2
           − (1/n) \sum_{i=1}^{K} T_{i,n} (\tilde{μ}_{i,T_{i,n}} − \hat{μ}_{i*,n})^2.   (6)

Since the last term is always negative and small (see footnote 3), our analysis focuses on the first two terms, which reveal two interesting characteristics of A. First, an algorithm A suffers a regret whenever it chooses a suboptimal arm i ≠ i*, and the regret corresponds to the difference in the empirical mean-variance of i w.r.t. the optimal arm i*. Such a definition has a strong similarity to the standard definition of regret, where i* is the arm with the highest expected value and the regret depends on the number of times suboptimal arms are pulled and their respective gaps w.r.t. the optimal arm i*. In contrast to the standard formulation of regret, A also suffers an additional regret from the variance \hat{σ}_n^2(A), which depends on the variability of the pulls T_{i,n} over different arms. Recalling the definition of the mean \hat{μ}_n(A) as the weighted mean of the empirical means \hat{μ}_{i,T_{i,n}} with weights T_{i,n}/n (see eq. 3), we notice that this second term is a weighted variance of the means and illustrates the exploration risk of the algorithm. In fact, if an algorithm simply selected and pulled a single arm from the beginning, it would not suffer any exploration risk (secondary regret), since \hat{μ}_n(A) would coincide with \hat{μ}_{i,T_{i,n}} for the chosen arm and all other components would have zero weight. On the other hand, an algorithm accumulates exploration risk through this second term as the mean \hat{μ}_n(A) deviates from any specific arm; the maximum exploration risk peaks when the mean \hat{μ}_n(A) is furthest from all arm means.

The previous definition of regret can be further elaborated to obtain the upper bound (see App. A.1)

    R_n(A) ≤ (1/n) \sum_{i≠i*} T_{i,n} \hat{Δ}_i + (1/n^2) \sum_{i=1}^{K} \sum_{j≠i} T_{i,n} T_{j,n} \hat{Γ}_{i,j}^2,   (7)

where \hat{Δ}_i = (\hat{σ}_{i,T_{i,n}}^2 − \tilde{σ}_{i,T_{i,n}}^2) − ρ (\hat{μ}_{i,T_{i,n}} − \tilde{μ}_{i,T_{i,n}}) and \hat{Γ}_{i,j}^2 = (\hat{μ}_{i,T_{i,n}} − \hat{μ}_{j,T_{j,n}})^2. Unlike the definition in eq. 6, this upper bound explicitly illustrates the relationship between the regret and the number of pulls T_{i,n}, suggesting that a bound on the pulls is sufficient to bound the regret. Finally, we can also introduce a definition of the pseudo-regret.

Definition 3. The pseudo-regret for a learning algorithm A over n rounds is defined as

    \tilde{R}_n(A) = (1/n) \sum_{i≠i*} T_{i,n} Δ_i + (2/n^2) \sum_{i=1}^{K} \sum_{j≠i} T_{i,n} T_{j,n} Γ_{i,j}^2,   (8)

where Δ_i = MV_i − MV_{i*} and Γ_{i,j} = μ_i − μ_j. In the following, we denote the two components of the pseudo-regret as

    \tilde{R}_n^Δ(A) = (1/n) \sum_{i≠i*} T_{i,n} Δ_i, and \tilde{R}_n^Γ(A) = (2/n^2) \sum_{i=1}^{K} \sum_{j≠i} T_{i,n} T_{j,n} Γ_{i,j}^2,   (9)

where \tilde{R}_n^Δ(A) constitutes the standard regret derived from the traditional formulation of the multi-armed bandit problem and \tilde{R}_n^Γ(A) denotes the exploration risk. This pseudo-regret can be shown to be close to the true regret up to small terms with high probability.

Lemma 1. Given Definitions 2 and 3,

    R_n(A) ≤ \tilde{R}_n(A) + (5 + ρ) \sqrt{2K \log(6nK/δ)/n} + 4\sqrt{2} K \log(6nK/δ)/n,

with probability at least 1 − δ.

The previous lemma shows that any (high-probability) bound on the pseudo-regret immediately translates into a bound on the true regret. Thus, we report most of the theoretical analysis in terms of \tilde{R}_n(A). Nonetheless, it is interesting to notice the major difference between the true and pseudo-regret when compared to the standard bandit problem.

Footnote 1: The analysis is for the pseudo-regret, but it can be extended to the true regret (see Remark 2 at p. 23 of [5]).
Footnote 2: The coefficient of risk tolerance is the inverse of the more popular coefficient of risk aversion A = 1/ρ.
Footnote 3: More precisely, it can be shown that this term decreases with rate O(K log(1/δ)/n) with probability 1 − δ.

Figure 1 (pseudo-code of the MV-LCB algorithm):

    Input: confidence δ
    for t = 1, ..., n do
        for i = 1, ..., K do
            compute B_{i,T_{i,t−1}} = \widehat{MV}_{i,T_{i,t−1}} − (5 + ρ) \sqrt{\log(1/δ) / (2 T_{i,t−1})}
        end for
        Return I_t = \arg\min_{i=1,...,K} B_{i,T_{i,t−1}}
        Update T_{I_t,t} = T_{I_t,t−1} + 1
        Observe X_{I_t,T_{I_t,t}} ~ ν_{I_t}
        Update \widehat{MV}_{I_t,T_{I_t,t}}
    end for
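To make the empirical quantities above concrete, here is a minimal sketch (our own illustration, not code from the paper; function names are ours) of the empirical mean-variance of eq. (1) and the single-run regret of Definition 2:

```python
import numpy as np

def empirical_mv(samples, rho):
    """Empirical mean-variance of eq. (1): sigma_hat^2 - rho * mu_hat,
    using the biased (1/t) variance as in the paper."""
    x = np.asarray(samples, dtype=float)
    mu = x.mean()
    var = np.mean((x - mu) ** 2)
    return var - rho * mu

def single_run_regret(rewards, best_arm_samples, rho):
    """Regret of one run (Definition 2, eq. 4): mean-variance of the
    collected reward sequence minus that of n samples of the optimal arm."""
    return empirical_mv(rewards, rho) - empirical_mv(best_arm_samples, rho)
```

For example, an alternating 0/1 reward sequence has empirical variance 1/4, so with ρ = 0 it incurs a regret of 1/4 against a constant optimal arm.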
In fact, it is possible to show in the risk-averse case that the pseudo-regret is not an unbiased estimator of the true regret, i.e., E[R_n] ≠ E[\tilde{R}_n]. Thus, to bound the expectation of R_n we build on the high-probability result from Lemma 1.

3 The Mean-Variance Lower Confidence Bound Algorithm

In this section we introduce a risk-averse bandit algorithm whose objective is to identify the arm which best trades off risk and return. The algorithm is a natural extension of UCB1 [6], and we report a theoretical analysis of its mean-variance performance.

3.1 The Algorithm

We propose an index-based bandit algorithm which estimates the mean-variance of each arm and selects the optimal arm according to optimistic confidence bounds on the current estimates. A sketch of the algorithm is reported in Figure 1. For each arm, the algorithm keeps track of the empirical mean-variance \widehat{MV}_{i,s} computed from s samples. We can build high-probability confidence bounds on the empirical mean-variance through an application of the Chernoff-Hoeffding inequality (see e.g., [1] for the bound on the variance) to the terms \hat{μ} and \hat{σ}^2.

Lemma 2. Let {X_{i,s}} be i.i.d. random variables bounded in [0, 1], drawn from the distribution ν_i with mean μ_i and variance σ_i^2, and let the empirical mean \hat{μ}_{i,s} and variance \hat{σ}_{i,s}^2 be computed as in Equation 1. Then

    P[ ∃ i = 1, ..., K, s = 1, ..., n : |\widehat{MV}_{i,s} − MV_i| ≥ (5 + ρ) \sqrt{\log(1/δ)/(2s)} ] ≤ 6nKδ.

The algorithm in Figure 1 implements the principle of optimism in the face of uncertainty used in many multi-armed bandit algorithms. On the basis of the previous confidence bounds, we define a lower confidence bound on the mean-variance of arm i when it has been pulled s times as

    B_{i,s} = \widehat{MV}_{i,s} − (5 + ρ) \sqrt{\log(1/δ)/(2s)},   (10)

where δ is an input parameter of the algorithm. Given the index of each arm at each round t, the algorithm simply selects the arm with the smallest mean-variance index, i.e., I_t = \arg\min_i B_{i,T_{i,t−1}}.
We refer to this algorithm as the mean-variance lower confidence bound (MV-LCB) algorithm.

Remark 1. We notice that MV-LCB reduces to UCB1 for ρ → ∞. This is coherent with the fact that for ρ → ∞ the mean-variance problem reduces to expected-reward maximization, for which UCB1 is known to be nearly optimal. On the other hand, for ρ = 0 (variance minimization), the algorithm plays according to a lower confidence bound on the variances.

Remark 2. The MV-LCB algorithm has a parameter δ defining the confidence level of the bounds employed in (10). In Thm. 1 we show how to optimize this parameter when the horizon n is known in advance. On the other hand, if n is not known, it is possible to design an anytime version of MV-LCB by defining a non-decreasing exploration sequence (ε_t)_t instead of the term log(1/δ).

3.2 Theoretical Analysis

In this section we report the analysis of the regret R_n(A) of MV-LCB (Fig. 1). As highlighted in eq. 7, it is enough to analyze the number of pulls of each arm to recover a bound on the regret. The proofs (reported in [15]) are mostly based on arguments similar to the proof of UCB. We derive the following regret bound in high probability and in expectation.

Theorem 1. Let the optimal arm i* be unique and b = 2(5 + ρ). The MV-LCB algorithm achieves a pseudo-regret bounded as

    \tilde{R}_n(A) ≤ (b^2 \log(1/δ)/n) \sum_{i≠i*} [ 1/Δ_i + 4 Γ_{i*,i}^2/Δ_i^2 ] + (2 b^2 \log(1/δ)/n) \sum_{i≠i*} \sum_{j≠i, j≠i*} Γ_{i,j}^2/(Δ_i^2 Δ_j^2) + 5K/n,

with probability at least 1 − 6nKδ. Similarly, if MV-LCB is run with δ = 1/n^2, then

    E[\tilde{R}_n(A)] ≤ (2 b^2 \log n/n) \sum_{i≠i*} [ 1/Δ_i + 4 Γ_{i*,i}^2/Δ_i^2 ] + (4 b^2 \log n/n) \sum_{i≠i*} \sum_{j≠i, j≠i*} Γ_{i,j}^2/(Δ_i^2 Δ_j^2) + (17 + 6ρ)K/n.

Remark 1 (the bound). Let Δ_min = \min_{i≠i*} Δ_i and Γ_max = \max_i |Γ_i|. A rough simplification of the previous bound leads to

    E[\tilde{R}_n(A)] ≤ O( (K/Δ_min) (\log n/n) + (K^2 Γ_max^2/Δ_min^4) (\log^2 n/n) ).

First, we notice that the regret decreases as O(\log^2 n/n), implying that MV-LCB is a consistent algorithm. As already highlighted in Def. 2, the regret is mainly composed of two terms. The first term is due to the difference in the mean-variance of the best arm and the arms pulled by the algorithm, while the second term denotes the additional variance introduced by the exploration risk of pulling arms with different means. In particular, this additional term depends on the squared differences of the arm means Γ_{i,j}^2. Thus, if all the arms have the same mean, this term is zero.

Remark 2 (worst-case analysis). We can further study the result of Thm. 1 by considering the worst-case performance of MV-LCB, that is, its performance when the distributions of the arms are chosen so as to maximize the regret. In order to illustrate the argument, we consider the simple case of K = 2 arms, ρ = 0 (variance minimization), μ_1 ≠ μ_2, and σ_1^2 = σ_2^2 = 0 (deterministic arms; see footnote 4). In this case we have a variance gap Δ = 0 and Γ^2 > 0. According to the definition of MV-LCB, the index B_{i,s} simply reduces to \sqrt{\log(1/δ)/s}, thus forcing the algorithm to pull both arms uniformly (i.e., T_{1,n} = T_{2,n} = n/2, up to rounding effects). Since the arms have the same variance, there is no direct regret in pulling either one or the other. Nonetheless, the algorithm has an additional variance due to the difference in the samples drawn from distributions with different means. In this case, the algorithm suffers a constant (true) regret

    R_n(MV-LCB) = 0 + (T_{1,n} T_{2,n}/n^2) Γ^2 = (1/4) Γ^2,

independent of the number of rounds n. This argument can be generalized to multiple arms and ρ ≠ 0, since it is always possible to design an environment (i.e., a set of distributions) such that Δ_min = 0 and Γ_max ≠ 0 (see footnote 5). This result is not surprising. In fact, two arms with the same mean-variance are likely to produce similar observations, thus leading MV-LCB to pull the two arms repeatedly over time, since the algorithm is designed to try to discriminate between similar arms.
Although this behavior does not suffer from any regret in pulling the "suboptimal" arm (the two arms are equivalent), it does introduce an additional variance, due to the difference in the means of the arms (Γ ≠ 0), which finally leads to a regret the algorithm is not "aware" of. This argument suggests that, for any n, it is always possible to design an environment for which MV-LCB incurs a constant regret. This is particularly interesting since it reveals a huge gap between the mean-variance problem and the standard expected-regret minimization problem, and it will be further investigated in the numerical simulations in Sect. 5. In fact, UCB is known to have a worst-case regret of Ω(1/√n) [3], while in the worst case MV-LCB suffers a constant regret. In the next section we introduce a simple algorithm able to deal with this problem and achieve a vanishing worst-case regret.

4 The Exploration-Exploitation Algorithm

The ExpExp algorithm divides the time horizon n into two distinct phases of length τ and n − τ, respectively. During the first phase all the arms are explored uniformly, thus collecting τ/K samples each (see footnote 6). Once the exploration phase is over, the mean-variance of each arm is computed and the arm with the smallest estimated mean-variance \widehat{MV}_{i,τ/K} is repeatedly pulled until the end. MV-LCB is specifically designed to minimize the probability of pulling the wrong arms, so whenever there are two equivalent arms (i.e., arms with the same mean-variance), the algorithm tends to pull them the same number of times, at the cost of potentially introducing an additional variance which might result in a constant regret. On the other hand, ExpExp stops exploring the arms after τ rounds, elicits one arm as the best, and keeps pulling it for the remaining n − τ rounds. Intuitively, the parameter τ should be tuned so as to meet different requirements.
The first part of the regret (i.e., the regret coming from pulling the suboptimal arms) suggests that the exploration phase τ should be long enough for the algorithm to select an empirically best arm \hat{i}* at time τ equivalent to the actual optimal arm i* with high probability, and at the same time as short as possible, to reduce the number of times the suboptimal arms are explored. On the other hand, the second part of the regret (i.e., the variance of pulling arms with different means) is minimized by taking τ as small as possible (e.g., τ = 0 would guarantee a zero regret). The following theorem illustrates the optimal trade-off between these contrasting needs.

Theorem 2. Let ExpExp be run with τ = K(n/14)^{2/3}. Then for any choice of distributions {ν_i} the expected regret is

    E[\tilde{R}_n(A)] ≤ 2K/n^{1/3}.

Remark 1 (the bound). We first notice that this bound suggests that ExpExp performs worse than MV-LCB on easy problems. In fact, Thm. 1 demonstrates that MV-LCB has a regret decreasing as O(K log(n)/n) whenever the gaps Δ are not small compared to n, while in the remarks on Thm. 1 we highlighted the fact that, for any value of n, it is always possible to design an environment which leads MV-LCB to suffer a constant regret. On the other hand, the previous bound for ExpExp is distribution independent and indicates that the regret is still a decreasing function of n even in the worst case.

Footnote 4: Note that in this case (i.e., Δ = 0), Thm. 1 does not hold, since the optimal arm is not unique.
Footnote 5: Notice that this is always possible for a large majority of distributions with independent mean and variance.
Footnote 6: In the definition and in the following analysis we ignore rounding effects.

[Figure 2: Regret of MV-LCB and ExpExp in different scenarios. Left panel: the regret terms of MV-LCB (Regret, Regret^Δ, Regret^Γ) vs. n; right panel: worst-case mean regret vs. n for MV-LCB and ExpExp.]
This opens the question of whether it is possible to design an algorithm which works as well as MV-LCB on easy problems and as robustly as ExpExp on difficult problems.

Remark 2 (exploration phase). The previous result can be improved by changing the exploration strategy used in the first τ rounds. Instead of a pure uniform exploration of all the arms, we could adopt a best-arm identification algorithm such as Successive Reject or UCB-E, which maximizes the probability of returning the best arm given a fixed budget of rounds τ (see e.g., [4]).

5 Numerical Simulations

In this section we report numerical simulations aimed at validating the main theoretical findings of the previous sections. In the following graphs we study the true regret R_n(A) averaged over 500 runs. We first consider the variance-minimization problem (ρ = 0) with K = 2 Gaussian arms set to μ_1 = 1.0, μ_2 = 0.5, σ_1^2 = 0.05, and σ_2^2 = 0.25, and run MV-LCB (see footnote 7). In Figure 2 we report the true regret R_n (as in the original definition in eq. 4) and its two components R_n^{\hat{Δ}} and R_n^{\hat{Γ}} (these two values are defined as in eq. 9, with \hat{Δ} and \hat{Γ} replacing Δ and Γ). As expected (see e.g., Thm. 1), the regret is characterized by the regret incurred from pulling suboptimal arms and arms with different means (the exploration risk) and tends to zero as n increases. Indeed, if we considered two distributions with equal means (μ_1 = μ_2), the average regret would coincide with R_n^{\hat{Δ}}. Furthermore, as shown in Thm. 1, the two regret terms decrease at the same rate O(log n/n). A detailed analysis of the impact of Δ and Γ on the performance of MV-LCB is reported in App. D in [15]. Here we only compare the worst-case performance of MV-LCB to that of ExpExp (see Figure 2).
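As a rough sanity check of the exploration-risk effect on this Sect. 5 setup (our own sketch, not the paper's experiment code; the round-robin baseline stands in for a constant-regret run of a bandit algorithm), one can Monte Carlo estimate the true regret of eq. (4):

```python
import numpy as np

def round_robin_regret(n=2000, rho=0.0, n_runs=100, seed=0):
    """Monte Carlo estimate of the true regret (eq. 4) of a round-robin
    allocation on the Sect. 5 arms: mu = (1.0, 0.5), sigma^2 = (0.05, 0.25)."""
    rng = np.random.default_rng(seed)
    mus = np.array([1.0, 0.5])
    sigmas = np.sqrt([0.05, 0.25])
    regrets = []
    for _ in range(n_runs):
        arms = np.arange(n) % 2                  # pull the two arms alternately
        z = rng.normal(mus[arms], sigmas[arms])  # rewards collected by the run
        best = rng.normal(mus[0], sigmas[0], n)  # arm 1 (index 0) is optimal for rho = 0
        regrets.append((z.var() - rho * z.mean()) - (best.var() - rho * best.mean()))
    return float(np.mean(regrets))
```

Alternating between means 0.5 apart inflates the run's empirical variance by Γ²/4 ≈ 0.0625 on top of the average arm variance 0.15, so the estimate hovers around 0.2125 − 0.05 ≈ 0.16: exactly the constant exploration-risk term discussed in Remark 2 of Thm. 1.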
In order to have a fair comparison, for any value of n and for each of the two algorithms, we select the pair (Δ_w, Γ_w) which corresponds to the largest regret (we search over a grid of values with μ_1 = 1.5, μ_2 ∈ [0.4, 1.5], σ_1^2 ∈ [0.0, 0.25], and σ_2^2 = 0.25, so that Δ ∈ [0.0, 0.25] and Γ ∈ [0.0, 1.1]). As discussed in Sect. 4, while the worst-case regret of ExpExp keeps decreasing with n, it is always possible to find a problem for which the regret of MV-LCB stabilizes at a constant. For numerical results with multiple values of ρ and 15 arms, see App. D in [15].

6 Discussion

In this paper we evaluate the risk of an algorithm in terms of the variability of the sequences of samples that it actually generates. Although this notion might resemble other analyses of bandit algorithms (see e.g., the high-probability analysis in [5]), it captures different features of the learning algorithm. Whenever a bandit algorithm is run over n rounds, its behavior, combined with the arms' distributions, generates a probability distribution over sequences of n rewards. While the quality of such a sequence is usually defined by its cumulative sum (or average), here we say that a sequence of rewards is good if it displays a good trade-off between its (empirical) mean and variance. The variance of the sequence does not coincide with the variance of the algorithm over multiple runs. Let us consider a simple case with two arms that deterministically generate 0s and 1s, respectively, and two different algorithms. Algorithm A1 pulls the arms in a fixed sequence at each run (e.g., arm 1, arm 2, arm 1, arm 2, and so on), so that each arm is always pulled n/2 times. Algorithm A2 chooses one arm uniformly at random at the beginning of the run and repeatedly pulls this arm for n rounds. Algorithm A1 generates sequences such as 010101..., which have high variability within each run, incurs a high regret (e.g., if ρ = 0), but has no variance over multiple runs because it always generates the same sequence. On the other hand, A2 has no variability in each run, since it generates sequences with only 0s or only 1s, suffers no regret in the case of variance minimization, but has high variance over multiple runs since the two completely different sequences are generated with equal probability. This simple example shows that an algorithm with a small standard regret (e.g., A1) might generate at each run sequences with high variability, while an algorithm with a small mean-variance regret (e.g., A2) might have a high variance over multiple runs.

Footnote 7: Notice that although in the paper we assumed the distributions to be bounded in [0, 1], all the results can be extended to sub-Gaussian distributions.

7 Conclusions

The majority of the multi-armed bandit literature focuses on the problem of minimizing the regret w.r.t. the arm with the highest return in expectation. In this paper, we introduced a novel multi-armed bandit setting where the objective is to perform as well as the arm with the best risk-return trade-off. In particular, we relied on the mean-variance model introduced in [10] to measure the performance of the arms and define the regret of a learning algorithm. We showed that defining the risk of a learning algorithm as the variability (i.e., the empirical variance) of the sequence of rewards generated at each run leads to an interesting effect on the regret, where an additional algorithm-variance term appears. We proposed two novel algorithms to solve the mean-variance bandit problem and reported their theoretical analysis. To the best of our knowledge this is the first work introducing risk-aversion in the multi-armed bandit setting, and it opens a series of interesting questions.

Lower bound. As discussed in the remarks on Thm. 1 and Thm. 2, MV-LCB has a regret of order O(\sqrt{K/n}) on easy problems and O(1) on difficult problems, while ExpExp achieves the same regret O(K/n^{1/3}) over all problems. The primary open question is whether O(K/n^{1/3}) is actually the best achievable rate (in the worst case) for this problem. This question is of particular interest since the standard expected-reward maximization problem has a known lower bound of Ω(\sqrt{1/n}), and a minimax rate of Ω(1/n^{1/3}) for the mean-variance problem would imply that the risk-averse bandit problem is intrinsically more difficult than the standard bandit problem.

Different measures of risk-return. Considering alternative notions of risk is a natural extension of the previous setting. In fact, over the years the mean-variance model has often been criticized. From the point of view of expected utility theory, the mean-variance model is only justified under a Gaussianity assumption on the arm distributions. It also violates the monotonicity condition, due to the different orders of the mean and variance, and is not a coherent measure of risk [2]. Furthermore, the variance is a symmetric measure of risk, while it is often the case that only one-sided deviations from the mean are undesirable (e.g., in finance only losses w.r.t. the expected return are considered a risk, while any positive deviation is not considered a real risk). Popular replacements for the mean-variance are the α value-at-risk (i.e., the quantile) and the Conditional Value-at-Risk (otherwise known as average value-at-risk, tail value-at-risk, expected shortfall, or lower tail risk), as well as other coherent measures of risk [2]. While the estimation of the α value-at-risk might be challenging (see footnote 8), concentration inequalities exist for the CVaR [7]. Another issue in moving from the variance to other measures of risk is whether a single-period or a multi-period risk evaluation should be used.
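The plug-in CVaR estimate alluded to above can be sketched in a few lines (a standard construction, not an estimator from the paper; we treat low rewards as losses and average the worst α-fraction of samples):

```python
import numpy as np

def empirical_cvar(samples, alpha):
    """Plug-in CVaR at level alpha: the average of the worst (lowest)
    ceil(alpha * n) reward samples."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(x))))
    return float(x[:k].mean())
```

For example, on the rewards 0, 1, ..., 9 with α = 0.2, the estimate averages the two worst samples, giving 0.5.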
While the single-period risk of an arm is simply the risk of its distribution, in a multi-period evaluation we consider the risk of the sum of rewards obtained by repeatedly pulling the same arm over n rounds. Unlike the variance, for which the variance of a sum of n i.i.d. samples is simply n times their variance, for other measures of risk (e.g., α value–at–risk) this is not necessarily the case. As a result, an arm with the smallest single-period risk might not be the optimal choice over an horizon of n rounds. Therefore, the performance of an algorithm should be compared to the smallest risk that can be achieved by any sequence of arms over n rounds, thus requiring a new definition of regret. Simple regret. Finally, an interesting related problem is the simple regret setting where the learner is allowed to explore over n rounds and it only suffers a regret defined on the solution returned at the end. It is known that it is possible to design algorithm able to effectively estimate the mean of the arms and finally return the best arm with high probability. In the risk-return setting, the objective would be to return the arm with the best risk-return tradeoff. Acknowledgments This work was supported by Ministry of Higher Education and Research, NordPas de Calais Regional Council and FEDER through the “contrat de projets état region 2007–2013", European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n◦270327, and PASCAL2 European Network of Excellence. 8While the cumulative distribution of a random variable can be reliably estimated (see e.g., [11]), estimating the quantile might be more difficult 8 References [1] András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411:2712–2728, June 2010. [2] P Artzner, F Delbaen, JM Eber, and D Heath. Coherent measures of risk. Mathematical finance, (June 1996):1–24, 1999. [3] Jean-Yves Audibert and Sébastien Bubeck. 
Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2785–2836, 2010. [4] Jean-Yves Audibert, Sébastien Bubeck, and Rémi Munos. Best arm identification in multi-armed bandits. In Proceedings of the Twenty-third Conference on Learning Theory (COLT’10), 2010. [5] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation trade-off using variance estimates in multi-armed bandits. Theoretical Computer Science, 410:1876–1902, 2009. [6] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235–256, 2002. [7] David B. Brown. Large deviations bounds for estimating conditional value-at-risk. Operations Research Letters, 35:722–730, 2007. [8] Eyal Even-Dar, Michael Kearns, and Jennifer Wortman. Risk-sensitive online learning. In Proceedings of the 17th International Conference on Algorithmic Learning Theory (ALT’06), pages 199–213, 2006. [9] Christian Gollier. The Economics of Risk and Time. The MIT Press, 2001. [10] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952. [11] Pascal Massart. The tight constant in the Dvoretzky–Kiefer–Wolfowitz inequality. The Annals of Probability, 18(3):1269–1283, 1990. [12] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, 1947. [13] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the AMS, 58:527–535, 1952. [14] Antoine Salomon and Jean-Yves Audibert. Deviations of stochastic bandit regret. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory (ALT’11), pages 159–173, 2011. [15] Amir Sani, Alessandro Lazaric, and Rémi Munos. Risk-aversion in multi-arm bandit. Technical Report hal-00750298, INRIA, 2012. [16] Manfred K. Warmuth and Dima Kuzmin. Online variance minimization.
In Proceedings of the 19th Annual Conference on Learning Theory (COLT’06), pages 514–528, 2006.
Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning Ulugbek S. Kamilov EPFL ulugbek.kamilov@epfl.ch Sundeep Rangan Polytechnic Institute of New York University srangan@poly.edu Alyson K. Fletcher University of California, Santa Cruz afletcher@soe.ucsc.edu Michael Unser EPFL michael.unser@epfl.ch Abstract We consider the estimation of an i.i.d. vector x ∈ Rn from measurements y ∈ Rm obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. We present a method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector x. Our method can be applied to a large class of learning problems including the learning of sparse priors in compressed sensing or identification of linear-nonlinear cascade models in dynamical systems and neural spiking processes. We prove that for large i.i.d. Gaussian transform matrices the asymptotic componentwise behavior of the adaptive GAMP algorithm is predicted by a simple set of scalar state evolution equations. This analysis shows that the adaptive GAMP method can yield asymptotically consistent parameter estimates, which implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values. The adaptive GAMP methodology thus provides a systematic, general and computationally efficient method applicable to a large range of complex linear-nonlinear models with provable guarantees. 1 Introduction Consider the estimation of a random vector x ∈ Rn from a measurement vector y ∈ Rm. As illustrated in Figure 1, the vector x, which is assumed to have i.i.d. components xj ∼ PX, is passed through a known linear transform that outputs z = Ax ∈ Rm.
The components of y ∈ Rm are generated by a componentwise transfer function PY|Z. This paper addresses the cases where the distributions PX and PY|Z have some parametric uncertainty that must be learned so as to properly estimate x. This joint estimation and learning problem with linear transforms and componentwise nonlinearities arises in a range of applications, including empirical Bayesian approaches to inverse problems in signal processing, linear regression and classification [1,2], and, more recently, Bayesian compressed sensing for estimation of sparse vectors x from underdetermined measurements [3–5]. Figure 1: Measurement model considered in this work. The vector x ∈ Rn with an i.i.d. prior PX(x|λx) passes through the linear transform A ∈ Rm×n followed by a componentwise nonlinear channel PY|Z(y|z, λz) to result in y ∈ Rm. The prior PX and the nonlinear channel PY|Z depend on the unknown parameters λx and λz, respectively. We propose adaptive GAMP to jointly estimate x and (λx, λz) given the measurements y. Also, since the parameters in the output transfer function PY|Z can model unknown nonlinearities, this problem formulation can be applied to the identification of linear-nonlinear cascade models of dynamical systems, in particular for neural spike responses [6–8]. In recent years, there has been considerable interest in so-called approximate message passing (AMP) methods for this estimation problem. The AMP techniques use Gaussian and quadratic approximations of loopy belief propagation (LBP) to provide estimation methods that are computationally efficient, general and analytically tractable. However, the AMP methods generally require that the distributions PX and PY|Z are known perfectly.
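As a concrete illustration of this measurement model, the following sketch (not from the paper; parameter values chosen to mirror the numerical example of Section 4) generates data from the cascade of Figure 1 with a Gauss-Bernoulli prior and an AWGN output channel:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 400, 300
rho, sigma_x2 = 0.2, 5.0   # sparsity level and active-component variance (lambda_x)
sigma2 = 0.1               # output-channel noise variance (lambda_z)

# i.i.d. Gauss-Bernoulli signal: x_j = 0 w.p. 1 - rho, else N(0, sigma_x2).
x = rng.normal(0.0, np.sqrt(sigma_x2), n) * (rng.random(n) < rho)

# Known linear transform with i.i.d. N(0, 1/m) entries.
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
z = A @ x

# Componentwise (possibly nonlinear) output channel y = h(z, w);
# here the simplest case, an AWGN channel y_i = z_i + w_i.
y = z + rng.normal(0.0, np.sqrt(sigma2), m)
```

Replacing the last line with, say, a quantizer or a point process applied to z gives the nonlinear channels mentioned above; adaptive GAMP treats all of these uniformly through PY|Z.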
When the parameters λx and λz are unknown, various extensions have been proposed, including combining AMP methods with Expectation Maximization (EM) estimation [9–12] and hybrid graphical model approaches [13]. In this work, we present a novel method for joint parameter and vector estimation called adaptive generalized AMP (adaptive GAMP), which extends the GAMP method of [14]. We present two major theoretical results related to adaptive GAMP. We first show that, similar to the analysis of the standard GAMP algorithm, the componentwise asymptotic behavior of adaptive GAMP can be exactly described by a simple set of scalar state evolution (SE) equations [14–18]. An important consequence of this result is a theoretical justification of the EM-GAMP algorithm in [9–12], which is a special case of adaptive GAMP with a particular choice of adaptation functions. Our second result demonstrates the asymptotic consistency of adaptive GAMP when the adaptation functions correspond to maximum-likelihood (ML) parameter estimation. We show that when the ML estimation is computed exactly, the estimated parameters converge to the true values and the performance of adaptive GAMP asymptotically coincides with the performance of the oracle GAMP algorithm that knows the correct parameter values. Adaptive GAMP thus provides a computationally efficient method for solving a wide variety of joint estimation and learning problems with a simple, exact performance characterization and provable conditions for asymptotic consistency. All proofs and some technical details that have been omitted for space appear in the full paper [19], which also provides more background and simulations. 2 Adaptive GAMP Approximate message passing (AMP) refers to a class of algorithms based on Gaussian approximations of loopy belief propagation (LBP) for the estimation of the vectors x and z according to the model described in Section 1.
These methods originated from CDMA multiuser detection problems in [15, 20, 21]; more recently, they have attracted considerable attention in compressed sensing [17,18,22]. The Gaussian approximations used in AMP are closely related to standard expectation propagation techniques [23,24], but with additional simplifications that exploit the linear coupling between the variables x and z. The key benefits of AMP methods are their computational performance, their large domain of application, and, for certain large random A, their exact asymptotic performance characterizations with testable conditions for optimality [15–18]. This paper considers an adaptive version of the so-called generalized AMP (GAMP) method of [14] that extends the algorithm in [22] to arbitrary output distributions PY|Z. The original GAMP algorithm of [14] requires that the distributions PX and PY|Z are known. We propose an adaptive GAMP, shown in Algorithm 1, to allow for simultaneous estimation of the distributions PX and PY|Z along with the estimation of x and z. The algorithm assumes that the distributions PX and PY|Z have the parametric forms

PX(x|λx), PY|Z(y|z, λz), (1)

for parameters λx ∈ Λx and λz ∈ Λz and for parameter sets Λx and Λz. Algorithm 1 produces a sequence of estimates x̂^t and ẑ^t for x and z along with parameter estimates λ̂x^t and λ̂z^t. The precise values of these estimates depend on several factors in the algorithm, including the termination criteria and the choice of what we will call estimation functions Gx^t, Gz^t and Gs^t, and adaptation functions Hx^t and Hz^t.

Algorithm 1 Adaptive GAMP
Require: Matrix A, estimation functions Gx^t, Gs^t and Gz^t and adaptation functions Hx^t and Hz^t.
1: Initialize t ← 0, s^{−1} ← 0 and some values for x̂^0, τx^0
2: repeat
3:   {Output node update}
4:   τp^t ← ‖A‖F^2 τx^t / m
5:   p^t ← A x̂^t − s^{t−1} τp^t
6:   λ̂z^t ← Hz^t(p^t, y, τp^t)
7:   ẑi^t ← Gz^t(pi^t, yi, τp^t, λ̂z^t) for all i = 1, …, m
8:   si^t ← Gs^t(pi^t, yi, τp^t, λ̂z^t) for all i = 1, …, m
9:   τs^t ← −(1/m) Σi ∂Gs^t(pi^t, yi, τp^t, λ̂z^t)/∂pi^t
10:
11:  {Input node update}
12:  1/τr^t ← ‖A‖F^2 τs^t / n
13:  r^t ← x̂^t + τr^t A^T s^t
14:  λ̂x^t ← Hx^t(r^t, τr^t)
15:  x̂j^{t+1} ← Gx^t(rj^t, τr^t, λ̂x^t) for all j = 1, …, n
16:  τx^{t+1} ← (τr^t/n) Σj ∂Gx^t(rj^t, τr^t, λ̂x^t)/∂rj
17: until Terminated

The choice of the estimation and adaptation functions allows for considerable flexibility in the algorithm. For example, it is shown in [14] that Gx^t, Gz^t, and Gs^t can be selected such that the GAMP algorithm implements Gaussian approximations of either max-sum LBP or sum-product LBP, which approximate the maximum-a-posteriori (MAP) or minimum-mean-squared-error (MMSE) estimates of x given y, respectively. The adaptation functions can also be selected for a number of different parameter-estimation strategies. Because of space limitations, we present only the estimation functions for the sum-product GAMP algorithm from [14] along with an ML-type adaptation. Some of the analysis below, however, applies more generally. As described in [14], the sum-product estimation can be implemented with the functions

Gx^t(r, τr, λ̂x) := E[X | R = r, τr, λ̂x], (2a)
Gz^t(p, y, τp, λ̂z) := E[Z | P = p, Y = y, τp, λ̂z], (2b)
Gs^t(p, y, τp, λ̂z) := (1/τp) (Gz^t(p, y, τp, λ̂z) − p), (2c)

where the expectations are with respect to the scalar random variables

R = X + Vx, Vx ∼ N(0, τr), X ∼ PX(·|λ̂x), (3a)
Z = P + Vz, Vz ∼ N(0, τp), Y ∼ PY|Z(·|Z, λ̂z). (3b)

The estimation functions (2) correspond to scalar estimates of random variables in additive white Gaussian noise (AWGN). A key result of [14] is that, when the parameters are set to the true values (i.e., (λ̂x, λ̂z) = (λx, λz)), the outputs x̂^t and ẑ^t can be interpreted as sum-product estimates of the conditional expectations E[x|y] and E[z|y]. The algorithm thus reduces the vector-valued estimation problem to a computationally simple sequence of scalar AWGN estimation problems along with linear transforms.
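For concreteness, the scalar AWGN estimation (2a) has a closed form for a Gauss-Bernoulli prior, the case used in the numerical example of Section 4. The following sketch (helper name is hypothetical, not from the paper) computes the posterior mean E[X | R = r] when X is zero with probability 1 − ρ and N(0, σx²) otherwise, and R = X + N(0, τr):

```python
import numpy as np

def gauss_bernoulli_denoiser(r, tau_r, rho, sigma_x2):
    """Sum-product estimation function Gx for a Gauss-Bernoulli prior.

    X = 0 w.p. 1 - rho, X ~ N(0, sigma_x2) w.p. rho; R = X + N(0, tau_r).
    Returns the posterior mean E[X | R = r].
    """
    r = np.asarray(r, dtype=float)
    # Gaussian densities of r under the "active" and "zero" hypotheses.
    var_on = sigma_x2 + tau_r
    p_on = rho * np.exp(-r**2 / (2 * var_on)) / np.sqrt(2 * np.pi * var_on)
    p_off = (1 - rho) * np.exp(-r**2 / (2 * tau_r)) / np.sqrt(2 * np.pi * tau_r)
    post_on = p_on / (p_on + p_off)              # posterior probability X != 0
    wiener_gain = sigma_x2 / (sigma_x2 + tau_r)  # linear MMSE gain given X != 0
    return post_on * wiener_gain * r

r = np.linspace(-6, 6, 9)
print(gauss_bernoulli_denoiser(r, tau_r=0.5, rho=0.2, sigma_x2=5.0))
```

The estimator shrinks small r sharply toward zero (the prior mass at zero dominates) and approaches the linear MMSE estimate σx²/(σx² + τr)·r for large |r|.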
The adaptation functions Hx^t and Hz^t in Algorithm 1 produce the estimates for the parameters λx and λz. In the special case when Hx^t and Hz^t produce fixed outputs

Hz^t(p^t, y, τp^t) = λ̄z^t, Hx^t(r^t, τr^t) = λ̄x^t,

for pre-computed values of λ̄z^t and λ̄x^t, the adaptive GAMP algorithm reduces to the standard (non-adaptive) GAMP algorithm of [14]. The non-adaptive GAMP algorithm can be used when the parameters λx and λz are known. When the parameters λx and λz are unknown, it has been proposed in [9–12] that they can be estimated via an EM method that exploits the fact that GAMP provides estimates of the posterior distributions of x and z given the current parameter estimates. As described in the full paper [19], this EM-GAMP method corresponds to a special case of the adaptive GAMP method for a particular choice of the adaptation functions Hx^t and Hz^t. However, in this work, we consider an alternate parameter estimation method based on ML adaptation. The ML adaptation uses the following fact that we will rigorously justify below: for certain large random A, at any iteration t, the components of the vector r^t and of the joint vectors (p^t, y) will be distributed as

R = αr X + Vx, Vx ∼ N(0, ξr), X ∼ PX(·|λ*x), (4a)
Z = P + Vz, (Z, P) ∼ N(0, Kp), Y ∼ PY|Z(·|Z, λ*z), (4b)

where λ*x and λ*z are the “true” parameters and the scalars αr and ξr and the covariance matrix Kp are parameters that depend on the estimation and adaptation functions used in the previous iterations. Remarkably, the distributions of the components of r^t and (p^t, y) will follow (4) even if the estimation functions in the iterations prior to t used incorrect parameter values.
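The ML adaptation step (5a) below can be approximated numerically. The following sketch (a simplification, not the paper's implementation: it assumes the scaling αr = 1 and a known effective noise level ξr, and replaces the continuous maximization by a grid search; names are hypothetical) recovers the Gauss-Bernoulli parameters λx = (ρ, σx²) by maximizing the mixture log-likelihood of the components of r:

```python
import numpy as np

def ml_adapt_gauss_bernoulli(r, xi_r, rho_grid, sig2_grid):
    """Grid-search ML estimate of (rho, sigma_x^2) from R = X + N(0, xi_r)."""
    r = np.asarray(r, dtype=float)
    best, best_ll = None, -np.inf
    for rho in rho_grid:
        for s2 in sig2_grid:
            # Mixture density: rho*N(0, s2 + xi_r) + (1 - rho)*N(0, xi_r).
            v_on, v_off = s2 + xi_r, xi_r
            dens = (rho * np.exp(-r**2 / (2 * v_on)) / np.sqrt(2 * np.pi * v_on)
                    + (1 - rho) * np.exp(-r**2 / (2 * v_off)) / np.sqrt(2 * np.pi * v_off))
            ll = np.sum(np.log(dens))
            if ll > best_ll:
                best, best_ll = (rho, s2), ll
    return best

rng = np.random.default_rng(2)
n, rho_true, s2_true, xi = 20_000, 0.2, 5.0, 0.1
x = rng.normal(0, np.sqrt(s2_true), n) * (rng.random(n) < rho_true)
r = x + rng.normal(0, np.sqrt(xi), n)
rho_hat, s2_hat = ml_adapt_gauss_bernoulli(
    r, xi, rho_grid=np.arange(0.05, 0.55, 0.05), sig2_grid=np.arange(1.0, 9.0, 0.5))
```

With enough samples the grid maximizer lands on (or next to) the true (ρ, σx²), illustrating the identifiability-through-maximization idea formalized in Definitions 1 and 2.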
The adaptive GAMP algorithm can thus attempt to estimate the parameters via maximum likelihood (ML) estimation:

Hx^t(r^t, τr^t) := arg max_{λx∈Λx} max_{(αr,ξr)∈Sx(τr^t)} (1/n) Σ_{j=0}^{n−1} φx(rj^t, λx, αr, ξr), (5a)
Hz^t(p^t, y, τp^t) := arg max_{λz∈Λz} max_{Kp∈Sz(τp^t)} (1/m) Σ_{i=0}^{m−1} φz(pi^t, yi, λz, Kp), (5b)

where Sx and Sz are sets of possible values for the parameters αr, ξr and Kp, and φx and φz are the log-likelihoods

φx(r, λx, αr, ξr) = log pR(r|λx, αr, ξr), (6a)
φz(p, y, λz, Kp) = log pP,Y(p, y|λz, Kp), (6b)

and pR and pP,Y are the probability density functions corresponding to the distributions in (4).

3 Convergence and Asymptotic Consistency with Gaussian Transforms

3.1 General State Evolution Analysis

Before proving the asymptotic consistency of the adaptive GAMP method with ML adaptation, we first prove a more general convergence result. Among other consequences, the result will justify the distribution model (4) assumed by the ML adaptation. Similar to the SE analyses in [14, 18], we consider the asymptotic behavior of the adaptive GAMP algorithm with large i.i.d. Gaussian matrices. The assumptions are summarized as follows; details can be found in the full paper [19, Assumption 2].

Assumption 1 Consider the adaptive GAMP algorithm running on a sequence of problems indexed by the dimension n, satisfying the following:
(a) For each n, the matrix A ∈ Rm×n has i.i.d. components with Aij ∼ N(0, 1/m) and the dimension m = m(n) is a deterministic function of n satisfying n/m → β for some β > 0 as n → ∞.
(b) The input vector x and initial condition x̂^0 are deterministic sequences whose components converge empirically with bounded moments of order s = 2k − 2, with k = 2, as

lim_{n→∞} (x, x̂^0) =^{PL(s)} (X, X̂^0), (7)

for some random vector (X, X̂^0). See [19] for a precise statement of this type of convergence.
(c) The output vectors z and y ∈ Rm are generated by

z = Ax, y = h(z, w), (8)

for some scalar function h(z, w), where the disturbance vector w is deterministic but empirically converges as

lim_{n→∞} w =^{PL(s)} W, (9)

with s = 2k − 2, k = 2, and W some random variable. We let PY|Z denote the conditional distribution of the random variable Y = h(Z, W).
(d) Suitable continuity assumptions on the estimation functions Gx^t, Gz^t and Gs^t and adaptation functions Hx^t and Hz^t; see [19] for details.

Now define the sets of vectors

θx^t := {(xj, rj^t, x̂j^{t+1}), j = 1, …, n}, θz^t := {(zi, ẑi^t, yi, pi^t), i = 1, …, m}. (10)

The first set, θx^t, represents the components of the “true,” but unknown, input vector x, its adaptive GAMP estimate x̂^t, as well as r^t. The second set, θz^t, contains the components of the “true,” but unknown, output vector z, its GAMP estimate ẑ^t, as well as p^t and the observed vector y. The sets θx^t and θz^t are implicitly functions of the dimension n. Our main result, Theorem 1 below, characterizes the asymptotic joint distribution of the components of these two sets as n → ∞. Specifically, we will show that the empirical distributions of the components of θx^t and θz^t converge to random vectors of the form

θ̄x^t := (X, R^t, X̂^{t+1}), θ̄z^t := (Z, Ẑ^t, Y, P^t), (11)

where X is the random variable in the initial condition (7). R^t and X̂^{t+1} are given by

R^t = αr^t X + V^t, V^t ∼ N(0, ξr^t), X̂^{t+1} = Gx^t(R^t, τ̄r^t, λ̄x^t), (12)

for some deterministic constants αr^t, ξr^t, τ̄r^t and λ̄x^t that will be defined momentarily. Similarly,

(Z, P^t) ∼ N(0, K̄p^t), Y ∼ PY|Z(·|Z), Ẑ^t = Gz^t(P^t, Y, τ̄p^t, λ̄z^t), (13)

where W is the random variable in (9) and K̄p^t and λ̄z^t are also deterministic constants. The deterministic constants above can be computed iteratively with the state evolution (SE) equations shown in Algorithm 2.
Theorem 1 Consider the random vectors θx^t and θz^t generated by the outputs of GAMP under Assumption 1. Let θ̄x^t and θ̄z^t be the random vectors in (11) with the parameters determined by the SE equations in Algorithm 2. Then, for any fixed t, almost surely, the components of θx^t and θz^t converge empirically with bounded moments of order k = 2 as

lim_{n→∞} θx^t =^{PL(k)} θ̄x^t, lim_{n→∞} θz^t =^{PL(k)} θ̄z^t, (17)

where θ̄x^t and θ̄z^t are given in (11). In addition, for any t, the limits

lim_n λ̂x^t = λ̄x^t, lim_n λ̂z^t = λ̄z^t, lim_n τr^t = τ̄r^t, lim_n τp^t = τ̄p^t, (18)

also hold almost surely.

Algorithm 2 Adaptive GAMP State Evolution
Given the distributions in Assumption 1, compute the sequence of parameters as follows:
• Initialization: Set t = 0 with

K̄x^0 = cov(X, X̂^0), τ̄x^0 = τx^0, (14)

where the expectation is over the random variables (X, X̂^0) in Assumption 1(b) and τx^0 is the initial value in the GAMP algorithm.
• Output node update: Compute the variables associated with θ̄z^t:

τ̄p^t = β τ̄x^t, K̄p^t = β K̄x^t, λ̄z^t = Hz^t(P^t, τ̄p^t), (15a)
τ̄r^t = −(E[∂Gs^t(P^t, Y, τ̄p^t, λ̄z^t)/∂p])^{−1}, ξr^t = (τ̄r^t)^2 E[Gs^t(P^t, Y, τ̄p^t, λ̄z^t)^2], (15b)
αr^t = τ̄r^t E[∂Gs^t(P^t, h(z, W), τ̄p^t, λ̄z^t)/∂z |_{z=Z}], (15c)

where the expectations are over the random variables (P^t, Y, W).
• Input node update: Compute the variables associated with θ̄x^t:

λ̄x^t = Hx^t(R^t, τ̄r^t), (16a)
τ̄x^{t+1} = τ̄r^t E[∂Gx^t(R^t, τ̄r^t, λ̄x^t)/∂r], K̄x^{t+1} = cov(X, X̂^{t+1}), (16b)

where the expectation is over the random variable (X, X̂^{t+1}).

Similar to several other analyses of AMP algorithms such as [14–18], the theorem provides a scalar equivalent model for the componentwise behavior of the adaptive GAMP method. That is, asymptotically the components of the sets θx^t and θz^t in (10) are distributed identically to simple scalar random variables. The parameters of these random variables can be computed via the SE equations
(14), (15) and (16), which can be evaluated with one- or two-dimensional integrals. From this scalar equivalent model, one can compute a large class of componentwise performance metrics such as the mean-squared error (MSE) or detection error rates. Thus, the SE analysis shows that, for essentially arbitrary estimation and adaptation functions and distributions on the true input and disturbance, we can exactly evaluate the asymptotic behavior of the adaptive GAMP algorithm. In addition, when the parameter values λx and λz are fixed, the SE equations in Algorithm 2 reduce to the SE equations for the standard (non-adaptive) GAMP algorithm described in [14].

3.2 Asymptotic Consistency with ML Adaptation

The general result, Theorem 1, can be applied to the adaptive GAMP algorithm with arbitrary estimation and adaptation functions. In particular, the result can be used to rigorously justify the SE analysis of the EM-GAMP method presented in [11, 12]. Here, we use the result to prove the asymptotic parameter consistency of adaptive GAMP with ML adaptation. The key point is to realize that the distributions (12) and (13) exactly match the distributions (4) assumed by the ML adaptation functions (5). Thus, the ML adaptation should work provided that the maximizations in (5) yield the correct parameter estimates. This condition is essentially an identifiability requirement that we make precise with the following definitions.

Definition 1 Consider a family of distributions {PX(x|λx), λx ∈ Λx}, a set Sx of parameters (αr, ξr) of a Gaussian channel, and a function φx(r, λx, αr, ξr). We say that PX(x|λx) is identifiable with Gaussian outputs with parameter set Sx and function φx if:
(a) The sets Sx and Λx are compact.
(b) For any “true” parameters λ*x ∈ Λx and (α*r, ξ*r) ∈ Sx, the maximization

λ̂x = arg max_{λx∈Λx} max_{(αr,ξr)∈Sx} E[φx(α*r X + V, λx, αr, ξr) | λ*x, α*r, ξ*r], (19)

is well-defined, unique, and returns the true value, λ̂x = λ*x.
The expectation in (19) is with respect to X ∼ PX(·|λ*x) and V ∼ N(0, ξ*r).
(c) Suitable continuity assumptions; see [19] for details.

Definition 2 Consider a family of conditional distributions {PY|Z(y|z, λz), λz ∈ Λz} generated by the mapping Y = h(Z, W, λz), where W ∼ PW is some random variable and h(z, w, λz) is a scalar function. Let Sz be a set of covariance matrices Kp and let φz(y, p, λz, Kp) be some function. We say that the conditional distribution family PY|Z(·|·, λz) is identifiable with Gaussian inputs with covariance set Sz and function φz if:
(a) The parameter sets Sz and Λz are compact.
(b) For any “true” parameter λ*z ∈ Λz and true covariance K*p, the maximization

λ̂z = arg max_{λz∈Λz} max_{Kp∈Sz} E[φz(Y, P, λz, Kp) | λ*z, K*p], (20)

is well-defined, unique, and returns the true value, λ̂z = λ*z. The expectation in (20) is with respect to Y|Z ∼ PY|Z(y|z, λ*z) and (Z, P) ∼ N(0, K*p).
(c) Suitable continuity assumptions; see [19] for details.

Definitions 1 and 2 essentially require that the parameters λx and λz can be identified through a maximization. The functions φx and φz can be the log-likelihood functions (6a) and (6b), although we permit other functions as well. See [19] for further discussion of the likelihood functions as well as the choice of the parameter sets Sx and Sz.

Theorem 2 Let PX(·|λx) and PY|Z(·|·, λz) be families of input and output distributions that are identifiable in the sense of Definitions 1 and 2. Consider the outputs of the adaptive GAMP algorithm with the ML adaptation functions (5), using the functions φx and φz and the parameter sets in Definitions 1 and 2. In addition, suppose Assumption 1(a) to (c) holds, where the distribution of X is given by PX(·|λ*x) for some “true” parameter λ*x ∈ Λx and the conditional distribution of Y given Z is given by PY|Z(y|z, λ*z) for some “true” parameter λ*z ∈ Λz.
Then, under suitable continuity conditions (see [19] for details), for any fixed t:
(a) The components of θx^t and θz^t in (10) converge empirically with bounded moments of order k = 2 as in (17), and the limits (18) hold almost surely.
(b) If (αr^t, ξr^t) ∈ Sx(τ̄r^t) for some t, then lim_{n→∞} λ̂x^t = λ̄x^t = λ*x almost surely.
(c) If K̄p^t ∈ Sz(τ̄p^t) for some t, then lim_{n→∞} λ̂z^t = λ̄z^t = λ*z almost surely.

The theorem shows, remarkably, that for a very large class of parameterized distributions, the adaptive GAMP algorithm with ML adaptation is able to asymptotically estimate the correct parameters. Also, once the consistency limits in (b) and (c) hold, the SE equations in Algorithm 2 reduce to the SE equations for the non-adaptive GAMP method running with the true parameters. Thus, we conclude that there is asymptotically no performance loss between the adaptive GAMP algorithm and a corresponding oracle GAMP algorithm that knows the correct parameters, in the sense that the empirical distributions of the algorithm outputs are described by the same SE equations.

4 Numerical Example: Estimation of a Gauss-Bernoulli input

Recent results suggest that there is considerable value in learning the prior PX in the context of compressed sensing [25], which considers the estimation of sparse vectors x from underdetermined measurements (m < n). It is known that estimators such as LASSO offer certain optimal min-max performance over a large class of sparse distributions [26]. However, for many particular distributions, there is a potentially large performance gap between LASSO and the MMSE estimator with the correct prior. This gap was the main motivation for [9, 10], which showed large gains of the EM-GAMP method due to its ability to learn the prior. Here, we present a simple simulation to illustrate the performance gain of adaptive GAMP and its asymptotic consistency. Specifically, Fig.
2 compares the performance of adaptive GAMP for estimation of a sparse Gauss-Bernoulli signal x ∈ Rn from m noisy measurements y = Ax + w, where the additive noise w is random with i.i.d. entries wi ∼ N(0, σ²). Figure 2: Reconstruction of a Gauss-Bernoulli signal from noisy measurements. The average reconstruction MSE (dB) is plotted against (a) the measurement ratio m/n and (b) the AWGN variance σ², for state evolution, LASSO, oracle GAMP, and adaptive GAMP. The plots illustrate that adaptive GAMP yields considerable improvement over the ℓ1-based LASSO estimator. Moreover, it exactly matches the performance of oracle GAMP that knows the prior parameters. The signal of length n = 400 has 20% nonzero components drawn from a Gaussian distribution of variance 5. Adaptive GAMP uses EM iterations, which approximate ML parameter estimation, to jointly recover the unknown signal x and the true parameters λx = (ρ = 0.2, σx² = 5). The performance of adaptive GAMP is compared to that of LASSO with the MSE-optimal regularization parameter, and to oracle GAMP that knows the parameters of the prior exactly. For generating the graphs, we performed 1000 random trials, forming the measurement matrix A from i.i.d. zero-mean Gaussian random variables of variance 1/m. In Figure 2(a), we keep the noise variance fixed to σ² = 0.1 and plot the average MSE of the reconstruction against the measurement ratio m/n. In Figure 2(b), we keep the measurement ratio fixed to m/n = 0.75 and plot the average MSE of the reconstruction against the noise variance σ². For completeness, we also provide the asymptotic MSE values computed via the SE recursion. The results illustrate that GAMP significantly outperforms LASSO over the whole range of m/n and σ².
Moreover, the results corroborate the consistency of adaptive GAMP, which achieves nearly identical quality of reconstruction to oracle GAMP. The performance results here and in [19] indicate that adaptive GAMP can be an effective method for estimation when the parameters of the problem are difficult to characterize and must be estimated from data.

5 Conclusions and Future Work

We have presented an adaptive GAMP method for the estimation of i.i.d. vectors x observed through a known linear transform followed by an arbitrary, componentwise random transform. The procedure, which is a generalization of the EM-GAMP methodology of [9, 10], estimates both the vector x as well as parameters in the source and the componentwise output transform. In the case of large i.i.d. Gaussian transforms with ML parameter estimation, it is shown that the adaptive GAMP method is provably asymptotically consistent in that the parameter estimates converge to the true values. This convergence result holds over a large class of models with essentially arbitrarily complex parameterizations. Moreover, the algorithm is computationally efficient since it reduces the vector-valued estimation problem to a sequence of scalar estimation problems in Gaussian noise. We believe that this method is applicable to a large class of linear-nonlinear models with provable guarantees and that it can have applications in a wide range of problems. We have mentioned the use of the method for learning sparse priors in compressed sensing. Future work will include possible extensions to non-Gaussian matrices. References [1] M. Tipping, “Sparse Bayesian learning and the relevance vector machine,” J. Machine Learning Research, vol. 1, pp. 211–244, Sep. 2001. [2] M. West, “Bayesian factor regression models in the “large p, small n” paradigm,” Bayesian Statistics, vol. 7, 2003. [3] D. Wipf and B. Rao, “Sparse Bayesian learning for basis selection,” IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2153–2164, Aug. 2004.
[4] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, pp. 2346–2356, Jun. 2008. [5] V. Cevher, “Learning with compressible priors,” in Proc. NIPS, Vancouver, BC, Dec. 2009. [6] S. Billings and S. Fakhouri, “Identification of systems containing linear dynamic and static nonlinear elements,” Automatica, vol. 18, no. 1, pp. 15–26, 1982. [7] I. W. Hunter and M. J. Korenberg, “The identification of nonlinear biological systems: Wiener and Hammerstein cascade models,” Biological Cybernetics, vol. 55, no. 2–3, pp. 135–144, 1986. [8] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli, “Spike-triggered neural characterization,” J. Vision, vol. 6, no. 4, pp. 484–507, Jul. 2006. [9] J. P. Vila and P. Schniter, “Expectation-maximization Bernoulli-Gaussian approximate message passing,” in Conf. Rec. 45th Asilomar Conf. Signals, Syst. & Comput., Pacific Grove, CA, Nov. 2011, pp. 799–803. [10] ——, “Expectation-maximization Gaussian-mixture approximate message passing,” in Proc. Conf. on Inform. Sci. & Sys., Princeton, NJ, Mar. 2012. [11] F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová, “Statistical physics-based reconstruction in compressed sensing,” arXiv:1109.4424, Sep. 2011. [12] ——, “Probabilistic reconstruction in compressed sensing: Algorithms, phase diagrams, and threshold achieving matrices,” arXiv:1206.3953, Jun. 2012. [13] S. Rangan, A. K. Fletcher, V. K. Goyal, and P. Schniter, “Hybrid generalized approximate message passing with applications to structured sparsity,” in Proc. IEEE Int. Symp. Inform. Theory, Cambridge, MA, Jul. 2012, pp. 1241–1245. [14] S. Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in Proc. IEEE Int. Symp. Inform. Theory, Saint Petersburg, Russia, Jul.–Aug. 2011, pp. 2174–2178. [15] D. Guo and C.-C. Wang, “Asymptotic mean-square optimality of belief propagation for sparse linear systems,” in Proc. IEEE Inform.
Theory Workshop, Chengdu, China, Oct. 2006, pp. 194–198. [16] ——, “Random sparse linear systems observed via arbitrary channels: A decoupling principle,” in Proc. IEEE Int. Symp. Inform. Theory, Nice, France, Jun. 2007, pp. 946–950. [17] S. Rangan, “Estimation with random linear mixing, belief propagation and compressed sensing,” in Proc. Conf. on Inform. Sci. & Sys., Princeton, NJ, Mar. 2010, pp. 1–6. [18] M. Bayati and A. Montanari, “The dynamics of message passing on dense graphs, with applications to compressed sensing,” IEEE Trans. Inform. Theory, vol. 57, no. 2, pp. 764–785, Feb. 2011. [19] U. S. Kamilov, S. Rangan, A. K. Fletcher, and M. Unser, “Approximate message passing with consistent parameter estimation and applications to sparse learning,” arXiv:1207.3859 [cs.IT], Jul. 2012. [20] J. Boutros and G. Caire, “Iterative multiuser joint decoding: Unified framework and asymptotic analysis,” IEEE Trans. Inform. Theory, vol. 48, no. 7, pp. 1772–1793, Jul. 2002. [21] T. Tanaka and M. Okada, “Approximate belief propagation, density evolution, and neurodynamics for CDMA multiuser detection,” IEEE Trans. Inform. Theory, vol. 51, no. 2, pp. 700–706, Feb. 2005. [22] D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proc. Nat. Acad. Sci., vol. 106, no. 45, pp. 18914–18919, Nov. 2009. [23] T. P. Minka, “A family of algorithms for approximate Bayesian inference,” Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, 2001. [24] M. Seeger, “Bayesian inference and optimal design for the sparse linear model,” J. Machine Learning Research, vol. 9, pp. 759–813, Sep. 2008. [25] E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inform. Theory, vol. 52, no. 12, pp. 5406–5425, Dec. 2006. [26] D. Donoho, I. Johnstone, A. Maleki, and A. Montanari, “Compressed sensing over ℓp-balls: Minimax mean square error,” in Proc. ISIT, St.
Petersburg, Russia, Jun. 2011. 9
Gradient-based kernel method for feature extraction and variable selection Kenji Fukumizu The Institute of Statistical Mathematics 10-3 Midori-cho, Tachikawa, Tokyo 190-8562 Japan fukumizu@ism.ac.jp Chenlei Leng National University of Singapore 6 Science Drive 2, Singapore, 117546 stalc@nus.edu.sg Abstract We propose a novel kernel approach to dimension reduction for supervised learning: feature extraction and variable selection; the former constructs a small number of features from the predictors, and the latter finds a subset of the predictors. First, a method of linear feature extraction is proposed using the gradient of the regression function, based on recent developments of the kernel method. In comparison with other existing methods, the proposed one has wide applicability without strong assumptions on the regressor or the type of variables, and uses a computationally simple eigendecomposition, thus being applicable to large data sets. Second, in combination with a sparseness penalty, the method is extended to variable selection, following the approach by Chen et al. [2]. Experimental results show that the proposed methods successfully find effective features and variables without parametric models. 1 Introduction Dimension reduction is involved in most modern data analysis, in which high-dimensional data must be handled. There are two categories of dimension reduction: feature extraction, in which a linear or nonlinear mapping to a low-dimensional space is pursued, and variable selection, in which a subset of the variables is selected. This paper discusses both methods in the supervised learning setting. Let (X, Y) be a random vector such that X = (X^1, . . . , X^m) ∈ R^m. The domain of Y can be arbitrary: continuous, discrete, or structured. The goal of dimension reduction in the supervised setting is to find features or a subset of the variables of X that explain Y as effectively as possible.
This paper focuses on linear dimension reduction, in which linear combinations of the components of X are used to make effective features. Although there are many methods for extracting nonlinear features, this paper confines its attention to linear features, since linear methods are more stable than nonlinear feature extraction, which depends strongly on the choice of the nonlinearity; moreover, once a linear method is established, extension to a nonlinear one would not be difficult. We first develop a method for linear feature extraction with kernels, and extend it to variable selection with a sparseness penalty. The most significant point of the proposed methods is that we do not assume any parametric model on the conditional probability, or make strong assumptions on the distribution of the variables. This differs from many other methods, particularly for variable selection, where a specific parametric model is often assumed. Beyond the classical approaches to linear dimension reduction such as Fisher Discriminant Analysis and Canonical Correlation Analysis, the modern approach is based on the notion of conditional independence; we assume for the distribution that p(Y|X) = p̃(Y|B^T X), or equivalently Y ⊥⊥ X | B^T X, (1) where B is a projection matrix (B^T B = I_d) onto a d-dimensional subspace (d < m) of R^m, and we wish to estimate B. For variable selection, we further assume that some rows of B may be zero. The subspace spanned by the columns of B is called the effective direction for regression, or EDR space [14]. Our goal is thus to estimate B without a specific parametric model for p(y|x). First, consider linear feature extraction based on Eq. (1). The first method using this formulation is sliced inverse regression (SIR, [13]), which employs the fact that the inverse regression E[X|Y] lies in the EDR space under some assumptions. Many methods have been proposed in this vein of inverse regression ([4, 12] among others).
While these methods are computationally simple, they often need strong assumptions on the distribution of X such as elliptic symmetry. Two lines of work are most relevant to this paper. The first is dimension reduction with the gradient of the regressor E[Y|X = x] [11, 17]. As explained in Sec. 2.1, under Eq. (1) the gradient is contained in the EDR space. One can thus estimate the space by some standard nonparametric method. This approach has some limitations, however: nonparametric gradient estimation in high-dimensional spaces is challenging, and the method may not work unless the noise is additive. The second is kernel dimension reduction (KDR, [8, 9, 28]), which uses the kernel method for characterizing conditional independence to overcome various limitations of existing methods. While KDR applies to a wide class of problems without any strong assumptions on the distributions or types of X or Y, and shows high estimation accuracy for small data sets, its optimization is problematic: the gradient descent method used for KDR may have local optima, and it needs many matrix inversions, which prohibits application to high-dimensional or large data. We propose a kernel method for linear feature extraction using the gradient-based approach, but unlike the existing ones [11, 17], the gradient is estimated based on the recent development of the kernel method [9, 19]. This solves the problems of the existing methods: by virtue of the kernel method, Y can be of arbitrary type, and the kernel estimator is stable without a careful decrease of the bandwidth. It also solves the problem of KDR: the estimator is given by an eigenproblem and needs no numerical optimization. The method is thus applicable to large and high-dimensional data, as we demonstrate experimentally. Second, by using the above feature extraction in conjunction with a sparseness penalty, we propose a novel method for variable selection.
Recently, extensive studies have been done on variable selection with a sparseness penalty such as the LASSO [23] and SCAD [6]. It is also known that, with an appropriate choice of the regularization coefficients, these methods have the oracle property [6, 25, 30]. They use, however, some specific regression model such as linear regression, which is a limitation. Chen et al. [2] proposed a novel method for sparse variable selection based on the objective function of a linear feature extraction method formulated as an eigenproblem, such as SIR. We follow this approach to derive our method for variable selection. Unlike the methods used in [2], the proposed one does not require strong assumptions on the regressor or distribution, and thus provides a variable selection method based on conditional independence irrespective of the regression model. 2 Gradient-based kernel dimension reduction 2.1 Gradient of a regression function and dimension reduction We review the basic idea of the gradient-based method [11, 17] for dimension reduction. Suppose Y is an R-valued random variable. If the assumption of Eq. (1) holds, we have ∂/∂x E[Y|X = x] = ∂/∂x ∫ y p(y|x) dy = ∫ y ∂/∂x p̃(y|B^T x) dy = B ∫ y (∂p̃(y|z)/∂z)|_{z=B^T x} dy, which implies that the gradient ∂/∂x E[Y|X = x] at any x is contained in the EDR space. Based on this fact, the average derivative estimate (ADE, [17]) was proposed to estimate B. In the more recent method [11], standard local linear least squares with a smoothing kernel (not necessarily positive definite, [5]) is used for estimating the gradient, and the dimensionality of the projection is continuously reduced to the desired one over the iterations. Since gradient estimation for high-dimensional data is difficult in general, the iterative reduction is expected to give a more accurate estimate. We call the method of [11] the iterative average derivative estimate (IADE) in the sequel.
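The geometry behind the gradient-based idea is easy to check numerically. The sketch below is purely illustrative (unlike ADE/IADE, it assumes the true regression function is known and differentiates it by central finite differences): averaging the outer products of the gradients gives a matrix whose range is the EDR space, so its top eigenvector recovers the direction b.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 200
b = np.array([1.0, 2.0, 0.0, 0.0, 0.0]) / np.sqrt(5.0)  # true EDR direction

def regression(x):
    # E[Y|X=x] = f(b^T x), as in model (A) of Sec. 2.5
    z = x @ b
    return z * np.sin(np.sqrt(5.0) * z)

X = rng.uniform(-1.0, 1.0, size=(n, m))
h = 1e-5
grads = np.zeros((n, m))
for a in range(m):                       # central finite differences, coordinate a
    e = np.zeros(m); e[a] = h
    grads[:, a] = (regression(X + e) - regression(X - e)) / (2.0 * h)

M = grads.T @ grads / n                  # average outer product of gradients
eigval, eigvec = np.linalg.eigh(M)
b_hat = eigvec[:, -1]                    # top eigenvector spans the EDR space
err = np.linalg.norm(np.outer(b, b) - np.outer(b_hat, b_hat))
```

Since the gradient of f(b^T x) is f′(b^T x) b, the averaged matrix is rank one with range Span(b), and the top eigenvector matches b up to sign.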
2.2 Kernel method for estimating the gradient of regression For a set Ω, an (R-valued) positive definite kernel k on Ω is a symmetric kernel k : Ω × Ω → R such that Σ_{i,j=1}^n c_i c_j k(x_i, x_j) ≥ 0 for any x_1, . . . , x_n in Ω and c_1, . . . , c_n ∈ R. It is known that a positive definite kernel on Ω uniquely defines a Hilbert space H consisting of functions on Ω, in which the reproducing property ⟨f, k(·, x)⟩_H = f(x) (∀f ∈ H) holds, where ⟨·, ·⟩_H is the inner product of H. The Hilbert space H is called the reproducing kernel Hilbert space (RKHS) associated with k. We always assume that an RKHS is separable. In deriving a kernel method based on the approach in Sec. 2.1, the fundamental tool is the reproducing property for the derivative of a function. It is known (e.g., [21] Sec. 4.3) that if a positive definite kernel k(x, y) on an open set in Euclidean space is continuously differentiable with respect to x and y, every f in the corresponding RKHS H is continuously differentiable. If further ∂k(·, x)/∂x ∈ H, we have ∂f/∂x = ⟨f, ∂k(·, x)/∂x⟩_H. (2) This reproducing property, combined with the following kernel estimator of the conditional expectation (see [8, 9, 19] for details), will provide a method for dimension reduction. Let (X, Y) be a random variable on X × Y with probability P. We always assume that the p.d.f. p(x, y) and the conditional p.d.f. p(y|x) exist, and that a positive definite kernel is measurable and bounded. Let k_X and k_Y be positive definite kernels on X and Y, respectively, with respective RKHSs H_X and H_Y. The (uncentered) covariance operator C_YX : H_X → H_Y is defined by the equation ⟨g, C_YX f⟩_{H_Y} = E[f(X)g(Y)] = E[⟨f, Φ_X(X)⟩_{H_X} ⟨Φ_Y(Y), g⟩_{H_Y}] (3) for all f ∈ H_X, g ∈ H_Y, where Φ_X(x) = k_X(·, x) and Φ_Y(y) = k_Y(·, y). Similarly, C_XX denotes the operator on H_X that satisfies ⟨f_2, C_XX f_1⟩ = E[f_2(X)f_1(X)] for any f_1, f_2 ∈ H_X.
These definitions are straightforward extensions of ordinary covariance matrices, if we consider the covariance of the random vectors Φ_X(X) and Φ_Y(Y) on the RKHSs. One of the advantages of the kernel method is that estimation with finite data is straightforward. Given an i.i.d. sample (X_1, Y_1), . . . , (X_n, Y_n) with law P, the covariance operators are estimated by Ĉ_YX^{(n)} f = (1/n) Σ_{i=1}^n k_Y(·, Y_i)⟨k_X(·, X_i), f⟩_{H_X}, Ĉ_XX^{(n)} f = (1/n) Σ_{i=1}^n k_X(·, X_i)⟨k_X(·, X_i), f⟩_{H_X}. (4) It is known [8] that if E[g(Y)|X = ·] ∈ H_X holds for g ∈ H_Y, then we have C_XX E[g(Y)|X = ·] = C_XY g. If further C_XX is injective¹, this relation can be expressed as E[g(Y)|X = ·] = C_XX^{-1} C_XY g. (5) While the assumption E[g(Y)|X = ·] ∈ H_X may not hold in general, we can nonetheless obtain an empirical estimator based on Eq. (5), namely (Ĉ_XX^{(n)} + ε_n I)^{-1} Ĉ_XY^{(n)} g, where ε_n is a regularization coefficient of Tikhonov type. Note that the above expression is the kernel ridge regression of g(Y) on X. As we discuss in the Supplements, we can in fact prove rigorously that this estimator converges to E[g(Y)|X = ·]. Assume now that X = R^m, C_XX is injective, k_X(x, x̃) is continuously differentiable, E[g(Y)|X = x] ∈ H_X for any g ∈ H_Y, and ∂k_X(·, x)/∂x ∈ R(C_XX), where R denotes the range of the operator. From Eqs. (5) and (2), ∂/∂x E[g(Y)|X = x] = ⟨C_XX^{-1} C_XY g, ∂k_X(·, x)/∂x⟩ = ⟨g, C_YX C_XX^{-1} ∂k_X(·, x)/∂x⟩. With g = k_Y(·, ỹ), we obtain the gradient of the regression of the feature vector Φ_Y(Y) on X as ∂/∂x E[Φ_Y(Y)|X = x] = C_YX C_XX^{-1} ∂k_X(·, x)/∂x. (6) 2.3 Gradient-based kernel method for linear feature extraction It follows from the same argument as in Sec. 2.1 that ∂/∂x E[k_Y(·, y)|X = x] = Ξ(x)B^T with an operator Ξ(x) from R^d to H_Y, where we use a slight abuse of notation by identifying the operator Ξ(x) with a matrix. In combination with Eq.
(6), we have B⟨Ξ(x), Ξ(x)⟩_{H_Y}B^T = ⟨∂k_X(·, x)/∂x, C_XX^{-1} C_XY C_YX C_XX^{-1} ∂k_X(·, x)/∂x⟩_{H_X} =: M(x), (7) which shows that the eigenvectors for the non-zero eigenvalues of the m × m matrix M(x) are contained in the EDR space. This fact is the basis of our method. In contrast to the conventional gradient-based method described in Sec. 2.1, this method incorporates the high- (or infinite-) dimensional regressor E[Φ_Y(Y)|X = x]. (¹Noting ⟨C_XX f, f⟩ = E[f(X)²], it is easy to see that C_XX is injective if k_X is a continuous kernel on a topological space X and P_X is a Borel probability measure such that P(U) > 0 for any open set U in X.) Given an i.i.d. sample (X_1, Y_1), . . . , (X_n, Y_n) from the true distribution, based on the empirical covariance operators of Eq. (4) and regularized inversions, the matrix M(x) is estimated by M̂_n(x) = ⟨∂k_X(·, x)/∂x, (Ĉ_XX^{(n)} + ε_n I)^{-1} Ĉ_XY^{(n)} Ĉ_YX^{(n)} (Ĉ_XX^{(n)} + ε_n I)^{-1} ∂k_X(·, x)/∂x⟩ = ∇k_X(x)^T (G_X + nε_n I)^{-1} G_Y (G_X + nε_n I)^{-1} ∇k_X(x), (8) where G_X and G_Y are the Gram matrices (k_X(X_i, X_j)) and (k_Y(Y_i, Y_j)), respectively, and ∇k_X(x) = (∂k_X(X_1, x)/∂x, · · · , ∂k_X(X_n, x)/∂x)^T ∈ R^{n×m}. As the eigenvectors of M(x) are contained in the EDR space for any x, we propose to use the average of M(X_i) over all the data points X_i, and define M̃_n := (1/n) Σ_{i=1}^n M̂_n(X_i) = (1/n) Σ_{i=1}^n ∇k_X(X_i)^T (G_X + nε_n I_n)^{-1} G_Y (G_X + nε_n I_n)^{-1} ∇k_X(X_i). We call dimension reduction with the matrix M̃_n gradient-based kernel dimension reduction (gKDR). For linear feature extraction, the projection matrix B in Eq. (1) is then estimated simply by the top d eigenvectors of M̃_n. We call this method gKDR-FEX. The proposed method applies to a wide class of problems; in contrast to many existing methods, gKDR-FEX can handle any type of data for Y, including multinomial or structured variables, and makes no strong assumptions on the regressor or the distribution of X.
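The averaged estimator above reduces entirely to Gram-matrix algebra, so a minimal gKDR-FEX fits in a few lines. The sketch below assumes Gaussian kernels on both X and Y; the hyperparameters sigma_x, sigma_y, and eps are illustrative placeholders, not the cross-validated values of Sec. 2.3.

```python
import numpy as np

def gkdr_fex(X, Y, d, sigma_x, sigma_y, eps):
    """Minimal gKDR-FEX sketch: top-d eigenvectors of the averaged matrix."""
    n, m = X.shape
    sqd = lambda A: ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    GX = np.exp(-sqd(X) / (2.0 * sigma_x ** 2))          # Gram matrix on X
    GY = np.exp(-sqd(Y) / (2.0 * sigma_y ** 2))          # Gram matrix on Y
    Ginv = np.linalg.inv(GX + n * eps * np.eye(n))
    H = Ginv @ GY @ Ginv
    M = np.zeros((m, m))
    for i in range(n):
        # row j of D is the gradient of k_X(X_j, x) at x = X_i (Gaussian kernel)
        D = (X - X[i]) * GX[:, i:i + 1] / sigma_x ** 2
        M += D.T @ H @ D
    M /= n
    eigval, eigvec = np.linalg.eigh(M)
    return eigvec[:, -d:][:, ::-1]        # estimated basis of the EDR space

# toy check: Y depends on X only through one direction b
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 5))
b = np.array([1.0, 2.0, 0.0, 0.0, 0.0]) / np.sqrt(5.0)
Y = (X @ b + 0.1 * rng.standard_normal(200)).reshape(-1, 1)
B = gkdr_fex(X, Y, d=1, sigma_x=1.0, sigma_y=1.0, eps=1e-4)
align = abs(float(B[:, 0] @ b))           # |cos| between estimate and truth
```

The O(n³) inversion here is the cost discussed in the text; the low-rank route via incomplete Cholesky replaces GX and GY by their factorized approximations.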
Additionally, since the gKDR incorporates the high-dimensional feature vector Φ_Y(Y), it works for any regression relation, including multiplicative noise, for which many existing methods such as SIR and IADE fail. As in all kernel methods, the results of gKDR depend on the choice of kernels. We use cross-validation (CV) for choosing the kernels and parameters, combined with some regression or classification method. In this paper, k-nearest neighbor (kNN) regression / classification is used in the CV for its simplicity: for each candidate kernel or parameter, we compute the CV error by the kNN method with (B^T X_i, Y_i), where B is given by gKDR, and choose the one that gives the least error. The time complexity of the matrix inversions and the eigendecomposition for gKDR is O(n³), which is prohibitive for large data sets. We can, however, apply low-rank approximation of the Gram matrices, such as incomplete Cholesky decomposition. The space complexity may also be a problem for gKDR, since (∇k_X(X_i))_{i=1}^n has n² × m entries. In the case of the Gaussian kernel, where ∂k_X(X_j, x)/∂x^a |_{x=X_i} = (1/σ²)(X_j^a − X_i^a) exp(−∥X_j − X_i∥²/(2σ²)), we have a way of reducing the necessary memory by low-rank approximation. Let G_X ≈ RR^T and G_Y ≈ HH^T be low-rank approximations with r_x = rk R, r_y = rk H (r_x, r_y < n). With the notation F := (G_X + nε_n I_n)^{-1} H and Θ^{as}_i = (1/σ²) X_i^a R_{is}, we have, for 1 ≤ a, b ≤ m, M̃_{n,ab} = Σ_{i=1}^n Σ_{t=1}^{r_y} Γ^t_{ia} Γ^t_{ib}, where Γ^t_{ia} = Σ_{s=1}^{r_x} R_{is} Σ_{j=1}^n Θ^{as}_j F_{jt} − Σ_{s=1}^{r_x} Θ^{as}_i Σ_{j=1}^n R_{js} F_{jt}. With this method, the complexity is O(nmr) in space and O(nm²r) in time (r = max{r_x, r_y}), which is much more memory-efficient than a straightforward implementation. We introduce two variants of gKDR-FEX. First, since accurate nonparametric estimation with high-dimensional X is not easy, we propose a method for decreasing the dimensionality iteratively.
Using gKDR-FEX, we first find a matrix B_1 of dimensionality d_1 larger than the target d, project the data X_i onto the subspace as Z_i^{(1)} = B_1^T X_i, find the projection matrix B_2 (a d_1 × d_2 matrix) mapping Z_i^{(1)} onto a d_2- (d_2 < d_1) dimensional subspace, and repeat this process. We call this method gKDR-FEXi. Second, if Y takes only L values, as in classification, the Gram matrix G_Y and thus M̃_n are of rank at most L (see Eq. (8)), which is a strong limitation of gKDR. Note that this problem is shared by many linear dimension reduction methods, including CCA and slice-based methods. To solve this problem, we propose to use the variation of M̂_n(x) over the points x = X_i instead of the average M̃_n. Partitioning {1, . . . , n} into T_1, . . . , T_ℓ, the projection matrices B̂_[a] given by the eigenvectors of M̂_[a] = Σ_{i∈T_a} M̂(X_i) are used to define P̂ = (1/ℓ) Σ_{a=1}^ℓ B̂_[a] B̂_[a]^T. The estimator of B is then given by the top d eigenvectors of P̂. We call this method gKDR-FEXv.

Table 1: gKDR-FEX for synthetic data: mean discrepancies over 100 runs.

              gKDR-FEX  gKDR-FEXi  gKDR-FEXv  IADE    SIR II  KDR     gKDR-FEX+KDR
(A) n = 100   0.1989    0.1639     0.2002     0.1372  0.2986  0.2807  0.0883
(A) n = 200   0.1264    0.0995     0.1287     0.0857  0.2077  0.1175  0.0501
(B) n = 100   0.1500    0.1358     0.1630     0.1690  0.3137  0.2138  0.1076
(B) n = 200   0.0755    0.0750     0.0802     0.0940  0.2129  0.1440  0.0506
(C) n = 200   0.1919    0.2322     0.1930     0.7724  0.7326  0.1479  0.1285
(C) n = 400   0.1346    0.1372     0.1369     0.7863  0.7167  0.0897  0.0893

2.4 Theoretical analysis of gKDR We have derived the gKDR method based on a necessary condition for the EDR space. The following theorem shows that it is also sufficient if k_Y is characteristic. A positive definite kernel k on a measurable space is characteristic if E_P[k(·, X)] = E_Q[k(·, X)] implies P = Q, i.e., the mean of the feature vector uniquely determines a probability [9, 20]. Examples include the Gaussian kernel. In the following theoretical results, we assume (i) ∂k_X(·, x)/∂x^a ∈ R(C_XX) (a = 1, . . .
, m), (ii) E[k_Y(y, Y)|X = ·] ∈ H_X for any y ∈ Y, and (iii) E[g(Y)|B^T X = z] is a differentiable function of z for any g ∈ H_Y, and the linear functional g ↦ ∂E[g(Y)|B^T X = z]/∂z is continuous for any z. In the sequel, the subspace spanned by the columns of B is denoted by Span(B), and the Frobenius norm of a matrix M by ∥M∥_F. The proofs are given in the Supplements. Theorem 1. In addition to the above assumptions (i)-(iii), assume that the kernel k_Y is characteristic. If the eigenspaces for the non-zero eigenvalues of E[M(X)] are included in Span(B), then Y and X are conditionally independent given B^T X. We can obtain the rate of consistency for M̂_n(x) and M̃_n. Theorem 2. In addition to (i)-(iii), assume that ∂k_X(·, x)/∂x^a ∈ R(C_XX^{β+1}) (a = 1, . . . , m) for some β ≥ 0, and E[k_Y(y, Y)|X = ·] ∈ H_X for every y ∈ Y. Then, for ε_n = n^{−max{1/3, 1/(2β+2)}}, we have M̂_n(x) − M(x) = O_p(n^{−min{1/3, (2β+1)/(4β+4)}}) for every x ∈ X as n → ∞. If further E[∥M(X)∥_F²] < ∞ and ∂k_X(·, x)/∂x^a = C_XX^{β+1} h^a_x with E∥h^a_X∥_{H_X} < ∞, then M̃_n → E[M(X)] at the same rate as above. Note that, assuming the eigenvalues of M(x) or E[M(X)] are all distinct, the convergence of the matrices implies the convergence of the eigenvectors [22]; thus the estimator of gKDR-FEX is consistent for the subspace given by the top eigenvectors of E[M(X)]. 2.5 Experiments with gKDR-FEX We always use the Gaussian kernel k(x, x̃) = exp(−∥x − x̃∥²/(2σ²)) in the kernel methods below. First we use three synthetic data sets to verify the basic performance of gKDR-FEX(i,v). The data are generated by (A): Y = Z sin(√5 Z) + W, Z = (1/√5)(1, 2, 0, . . . , 0)^T X, (B): Y = (Z_1³ + Z_2)(Z_1 − Z_2³) + W, Z_1 = (1/√2)(1, 1, 0, . . . , 0)^T X, Z_2 = (1/√2)(1, −1, 0, . . . , 0)^T X, where the 10-dimensional X is generated by the uniform distribution on [−1, 1]^10 and W is independent noise with N(0, 10^{−2}), and (C): Y = Z⁴E, Z = (1, 0, . . .
, 0)^T X, where each component of the 10-dimensional X is independently generated by the truncated normal distribution N(0, 1/4) ∗ I_{[−1,1]} and E ∼ N(0, 1) is multiplicative noise. The discrepancy between the estimator B and the true projector B_0 is measured by ∥B_0 B_0^T (I_m − BB^T)∥_F / d. For choosing the parameter σ of the Gaussian kernel and the regularization parameter ε_n, the CV of Sec. 2.3 with kNN (k = 5, manually chosen to optimize the results) is used with 8 different values given by cσ_med (0.5 ≤ c ≤ 10), where σ_med is the median of the pairwise distances of the data [10], and ℓ = 4, 5, 6, 7 for ε_n = 10^{−ℓ} (a similar strategy is used for the CV below). We compare the results with those of IADE, SIR II [13], and KDR. The IADE has seven parameters [11], and we tuned two of them (h_1 and ρ_min) manually to optimize the performance. For SIR II, we tried several numbers of slices, and chose the one that gave the best result. From Table 1, we see that gKDR-FEX(i,v) show much better results than SIR II in all cases. The IADE works better than these methods for (A), while for (B) and (C) it works worse. Since (C) has multiplicative noise, the IADE does not obtain a meaningful estimate. The KDR attains higher accuracy for (C), but is less accurate for (A) and (B) with n = 100; this undesired result is caused by failure of the optimization in some runs (see Supplements for error bars). We also used the results of gKDR-FEX as the initial state for KDR, which improved the accuracy significantly for (A) and (B). Note however that these data sets are very small in size and dimension, and KDR is not applicable to the large data used later. One way of evaluating dimension reduction methods in supervised learning is to consider the classification or regression accuracy after projecting the data onto the estimated subspaces. We next used three data sets for binary classification, heart-disease (H), ionosphere (I), and breast-cancer-Wisconsin (B), from the UCI repository [7], and evaluated the classification rates of gKDR-FEXv with kNN classifiers (k = 7). We compared them with KDR, as KDR shows high accuracy for small data sets. From Fig. 1, we see that gKDR-FEXv shows accuracy competitive with KDR: slightly worse for (I), and slightly better for (B).

Figure 1: Classification accuracy with gKDR-v and KDR for binary classification problems. m, n_tr and n_test are the dimension of X, training data size, and test data size: (H) Heart Disease (m: 13, n_tr: 129, n_test: 148), (I) Ionosphere (m: 34, n_tr: 151, n_test: 200), (B) Breast-cancer-Wisconsin (m: 30, n_tr: 200, n_test: 369).

The computation of gKDR-FEXv for these data sets can be much faster than that of KDR. For each parameter set, the computational time of gKDR vs. KDR was, for (H) 0.044 sec / 622 sec (d = 11), for (I) 0.103 sec / 84.77 sec (d = 20), and for (B) 0.116 sec / 615 sec (d = 20). The next two data sets, taken from the UCI repository, are larger in sample size and dimensionality, so that the optimization of KDR is difficult to apply. The first one is ISOLET, which provides 617-dimensional continuous features of speech signals for classifying the 26 letters of the alphabet.

Table 2: Left: ISOLET - classification errors for test data (percentage). Right: Amazon Reviews - 10-fold cross-validation errors (%) for classification.

Dim.            10     20     30     40     50
gKDR + kNN      13.53  4.55   –      –      –
gKDR-v + kNN    13.15  4.55   4.81   5.26   5.58
CCA + kNN       22.77  6.74   –      –      –
SIR-II + kNN    77.42  70.11  63.44  52.66  50.61
gKDR + SVM      14.43  5.00   –      –      –
gKDR-v + SVM    16.87  4.75   3.85   3.59   3.08
CCA + SVM       13.09  6.54   –      –      –

L    gKDR+SVM  Corr+SVM (500)  Corr+SVM (2000)
10   12.0      15.7            8.3
20   16.2      30.2            18.0
30   18.0      29.2            24.0
40   21.8      35.4            25.0
50   19.5      41.1            29.0
In addition to the 6238 training data, 1559 test data are separately provided. We evaluate the classification errors with the kNN classifier (k = 5) and 1-vs-1 SVM to see the effectiveness of the estimated subspaces (see Table 2). From the information on the data at the UCI repository, the best performances with neural networks and C4.5 with ECOC are 3.27% and 6.61%, respectively. In comparison with these results, the low-dimensional subspaces found by gKDR-FEX and gKDR-FEXv maintain the information for classification effectively. SIR-II does not find meaningful features. The second data set is author identification of Amazon commerce reviews with 10000-dimensional linguistic features. The total number of authors is 50, and 30 reviews were collected for each author; the total size of the data is thus 1500. We varied the number of authors used (L) to make tasks of different levels of difficulty. The dimensionality reduced to by gKDR-FEX is set equal to L, and the 10-fold CV errors with the data projected onto the estimated EDR space are evaluated using 1-vs-1 SVM. As a comparison, the squared sum of variable-wise Pearson correlations, Σ_{ℓ=1}^L Corr[X^a, Y^ℓ]², is also used for choosing explanatory variables (a = 1, . . . , 10000). Such variable selection methods with Pearson correlations are popularly used for very high-dimensional data. The variables with the top 500 and 2000 correlations are used to make SVM classifiers. As we can see from Table 2, gKDR-FEX gives much more effective subspaces for regression than the Pearson correlation method when the number of authors is large. The creator of the data set has also reported classification results with a neural network model [15]; for 50 authors, the 10-fold CV error with 2000 selected variables is 19.51%, which is similar to the gKDR-FEX result with only 50 linear features.
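The Pearson-correlation baseline is easy to reproduce. The sketch below is an illustrative reconstruction (it assumes the class responses Y^ℓ are encoded as one-hot indicators, which the text does not specify): each variable is scored by the squared sum of its correlations with the indicators, and the top-k variables are kept.

```python
import numpy as np

def corr_screen(X, labels, k):
    """Rank variables by sum_l Corr[X^a, Y^l]^2 against one-hot class
    indicators, and return the indices of the top k variables."""
    Yind = (labels[:, None] == np.unique(labels)[None, :]).astype(float)
    Xc = (X - X.mean(0)) / X.std(0)
    Yc = (Yind - Yind.mean(0)) / Yind.std(0)
    corr = Xc.T @ Yc / len(X)                # entry (a, l) = Corr[X^a, Y^l]
    score = (corr ** 2).sum(axis=1)          # squared sum over classes
    return np.argsort(score)[::-1][:k]

# toy check: only variable 0 carries class information
rng = np.random.default_rng(2)
labels = rng.integers(0, 3, size=300)
X = rng.standard_normal((300, 20))
X[:, 0] += labels                            # informative variable
top = corr_screen(X, labels, k=5)
```

Such marginal screening is fast even for 10000 variables, but, unlike gKDR-FEX, it scores each variable independently and ignores joint effects.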
3 Variable selection with gKDR In recent years, extensive studies have been done on variable selection with a sparseness penalty ([6, 16, 18, 23–27, 29, 30] among many others). In the supervised setting, these studies often consider some specific model for the regression, such as least squares or logistic regression. While consistency and the oracle property have also been established for many methods, the assumption that there is a true parameter in the model may not hold in practice, and is thus a strong restriction of the methods. It is then important to consider more flexible ways of variable selection without assuming any parametric model for the regression. The gKDR approach is appealing for this problem, since it realizes conditional independence without strong assumptions on the regression or the distribution of the variables. Chen et al. [2] recently proposed the Coordinate-Independent Sparse (CIS) method, a semiparametric method for sparse variable selection. In CIS, the linear feature B^T X is assumed with some rows of B zero, but no parametric model is specified for the regression. We wish to estimate B so that the zero rows are estimated as zeros. This is achieved by imposing the sparseness penalty of the group LASSO [29] in combination with an objective function for linear feature extraction written in the form of an eigenproblem, such as SIR and PFC [3]. We follow the CIS method for our variable selection with gKDR; since gKDR is given by the eigenproblem with matrix M̃_n, the CIS method applies straightforwardly. The significance of our method is that gKDR formulates the conditional independence of Y and X given B^T X, while the existing CIS-based methods in [2] realize only weaker conditions under strong assumptions. 3.1 Sparse variable selection with gKDR Throughout this section, it is assumed that the true probability satisfies Eq. (1) with B = B_0 = (v_{01}^T, . . .
, v_{0m}^T)^T, and with some 1 ≤ q ≤ m the j-th row v_{0j} is non-zero for j ≤ q and v_{0j} = 0 for j ≥ q + 1. The projection matrix is B = (b_1, . . . , b_d) = (v_1^T, . . . , v_m^T)^T, where b_i is the i-th column and v_j is the j-th row. The proposed variable selection method, gKDR-VS, estimates B by B̂_λ = arg min_{B : B^T B = I_d} [ −Tr[B^T M̃_n B] + Σ_{i=1}^m λ_i ∥v_i∥ ], (9) where ∥v_i∥ is the Euclidean norm and λ = (λ_1, . . . , λ_m) ∈ R_+^m collects the regularization coefficients. To optimize Eq. (9), as in [2], we used the local quadratic approximation [6], which is simple and fast. We used the Matlab code provided at the homepage of X. Chen. The choice of λ is crucial for the practical performance of sparse variable selection. As a theoretical guarantee, we will show that some asymptotic conditions provide model consistency. In practice, as in the Adaptive Lasso [30], it is suitable to consider λ = λ(θ) defined by λ_i = θ∥ṽ_i∥^{−r}, where θ and r are positive numbers, and ṽ_i is the i-th row vector of B̃_0, the solution to gKDR without the penalty, i.e., B̃_0 = arg min_{B^T B = I_d} −Tr[B^T M̃_n B]. We used r = 1/2 for all of our experiments. To choose the parameter θ, a BIC-based method is often used in sparse variable selection [27, 31] with a theoretical guarantee of model consistency. We use a BIC-type method for choosing θ by minimizing BIC_θ = −Tr[B̂_{λ(θ)}^T M̃_n B̂_{λ(θ)}] + C_n df_θ (log n)/n, (10) where df_θ = d(p − d) is the degree of freedom of B̂_{λ(θ)} with p the number of non-zero rows in B̂_{λ(θ)}, and C_n is a positive number of O_p(1). We used C_n = α_1 log log(m), where α_1 is the largest eigenvalue of M̃_n. The log log(m) factor is used in [27], where an increasing number of variables is discussed, and α_1 is introduced to adjust the scale of Tr[B̂_λ^T M̃_n B̂_λ]; we use CV for choosing the hyperparameters (kernel and regularization coefficient), under which the values of Tr[B̂_λ^T M̃_n B̂_λ] are not normalized well across different choices.
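To make the optimization of Eq. (9) concrete, here is a sketch of one way a local quadratic approximation can proceed (the update rule, pruning threshold, and initialization below are assumptions for illustration; the experiments in this paper used the Matlab code of X. Chen): each iteration replaces the penalty term λ_i∥v_i∥ by the quadratic surrogate λ_i∥v_i∥²/(2∥v_i^{old}∥), which turns the penalized trace problem into an ordinary eigenproblem.

```python
import numpy as np

def gkdr_vs_lqa(M, lam, d, iters=50, tol=1e-8):
    """Illustrative local quadratic approximation for the objective (9)."""
    B = np.linalg.eigh(M)[1][:, -d:]              # start from unpenalized gKDR
    for _ in range(iters):
        row_norms = np.maximum(np.linalg.norm(B, axis=1), tol)
        # surrogate: lam_i ||v_i|| ~ lam_i ||v_i||^2 / (2 ||v_i_old||) + const,
        # so the problem becomes an eigenproblem for M - diag(lam_i / (2||v_i_old||))
        D = np.diag(lam / (2.0 * row_norms))
        B = np.linalg.eigh(M - D)[1][:, -d:]
    B[np.linalg.norm(B, axis=1) < 1e-3] = 0.0     # prune vanishing rows
    return B

# toy check: only the first two variables span the EDR space
v = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)
M = np.outer(v, v) + 0.01 * np.eye(4)
B = gkdr_vs_lqa(M, lam=np.full(4, 0.01), d=1)
```

Rows whose norms shrink during the iterations receive an ever larger diagonal penalty, which drives them toward exact zeros, matching the group-sparsity effect of the penalty in Eq. (9).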
Table 3: gKDR-VS and CIS-SIR with synthetic data (ratio of nonzeros in 1 ≤ j ≤ q / ratio of zeros in q + 1 ≤ j ≤ m / number of correct models among 100 runs).

              gKDR-VS      CIS-SIR
(A) n = 60    .94/.99/75   .89/1.0/65
(A) n = 120   1.0/1.0/98   .99/1.0/97
(B) n = 100   .92/.84/63   .19/.85/1
(B) n = 200   .98/.89/75   .18/.85/1

Table 4: Boston Housing Data: estimated sparse EDR.

Method    gKDR-VS          CIS-SIR          CIS-PFC
CRIM       0      0         0      0         0      0
ZN         0      0        -0.000 -0.008     0      0
INDUS      0      0         0      0         0      0
CHAS       0      0         0      0         0      0
NOX        0      0         0      0         0      0
RM         0.896  0.393    -1.00  -1.253     1.045 -1.390
AGE        0      0         0.005 -0.022    -0.003 -0.011
DIS       -0.169  0.022     0      0         0      0
RAD        0.018 -0.000     0      0         0      0
TAX        0      0         0.001 -0.001    -0.001 -0.005
PTRATIO   -0.376  0.919     0.049  0.003    -0.038  0.007
B          0      0        -0.001  0.002     0.001  0.005
LSTAT     -0.165  0.017     0.043 -0.114    -0.043 -0.113

3.2 Theoretical results on gKDR-VS This subsection shows the model consistency of gKDR-VS. All the proofs are given in the Supplements. Let α_n = max{λ_j | 1 ≤ j ≤ q} and β_n = min{λ_j | q + 1 ≤ j ≤ m}. The eigenvalues of M = E[M(X)] are η_1 ≥ . . . ≥ η_m ≥ 0. For two m × d matrices B_i (i = 1, 2) with B_i^T B_i = I_d, we define D(B_1, B_2) = ∥B_1 B_1^T − B_2 B_2^T∥, where ∥·∥ is the operator norm. Theorem 3. Suppose ∥M̃_n − M∥_F = O_p(n^{−τ}) for some τ > 0. If n^τ α_n → 0 as n → ∞ and η_q > η_{q+1}, then the estimator B̂_λ in Eq. (9) satisfies D(B̂_λ, B_0) = O_p(n^{−τ}) as n → ∞. We saw in Theorem 2 that under some conditions M̃_n converges to M at the rate O_p(n^{−τ}) with 1/4 ≤ τ ≤ 1/3. Thus Theorem 3 shows that B̂_λ is also consistent at the same rate. Theorem 4. In addition to the assumptions in Theorem 3, assume n^τ β_n → ∞ as n → ∞. Then, for all q + 1 ≤ j ≤ m, Pr(v̂_j = 0) → 1 as n → ∞, where v̂_j is the j-th row of B̂_λ. 3.3 Experiments with gKDR-VS We first apply gKDR-VS with d = 1 to synthetic data generated by the following two models: (A): Y = X_1 + X_2 + X_3 + W and (B): Y = (X_1 + X_2 + X_3)⁴ W, where the noise W follows N(0, 1). For (A), X = (X_1, . . . , X_24) is generated by N(0, Σ) with Σ_ij = (1/2)^{|i−j|} (1 ≤ i, j ≤ 24), and for (B) X = (X_1, . . .
, X_10) by N(0, 4I_10). Note that (B) includes multiplicative noise, which cannot be handled by many dimension reduction methods. For comparison, the CIS method with SIR is also applied to the same data. The regularization parameter of CIS-SIR is chosen by the BIC described in [2]. While both methods work effectively for (A), only gKDR-VS can handle the multiplicative noise of (B). The next experiment uses the Boston Housing data, which has often been used for variable selection. The response Y is the median value of homes in each tract, and thirteen variables are used to explain it. The details of the variables are described in the Supplements, Sec. E. The results of gKDR-VS and CIS-SIR / CIS-PFC with d = 2 are shown in Table 4. The variables selected by gKDR-VS are RM, DIS, RAD, PTRATIO and LSTAT, which differ slightly from those of the CIS methods. In a previous study [1], the four variables RM, TAX, PTRATIO and LSTAT are considered to have the major contribution. 4 Conclusions We have proposed a gradient-based kernel approach for dimension reduction in supervised learning. The method is based on the general kernel formulation of conditional independence, and thus has wide applicability without strong restrictions on the model or variables. The linear feature extraction method, gKDR-FEX, finds effective features with a simple eigendecomposition, even when other conventional methods are not applicable because of multiplicative noise or high dimensionality. Its consistency is also guaranteed. We have extended the method to variable selection (gKDR-VS) with a sparseness penalty, and demonstrated its promising performance with synthetic and real-world data. The model consistency has also been proved. Acknowledgements. KF has been supported in part by JSPS KAKENHI (B) 22300098. References [1] L. Breiman and J. Friedman. Estimating optimal transformations for multiple regression and correlation. J. Amer. Stat. Assoc., 80:580–598, 1985. [2] X. Chen, C. Zou, and R. Dennis Cook.
Coordinate-independent sparse sufficient dimension reduction and variable selection. Ann. Stat., 38(6):3696–3723, 2010.
[3] R. Dennis Cook and L. Forzani. Principal fitted components for dimension reduction in regression. Statistical Science, 23(4):485–501, 2008.
[4] R. Dennis Cook and S. Weisberg. Discussion of Li (1991). J. Amer. Stat. Assoc., 86:328–332, 1991.
[5] J. Fan and I. Gijbels. Local Polynomial Modelling and its Applications. Chapman and Hall, 1996.
[6] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Stat. Assoc., 96(456):1348–1360, 2001.
[7] A. Frank and A. Asuncion. UCI machine learning repository, [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. 2010.
[8] K. Fukumizu, F.R. Bach, and M.I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. JMLR, 5:73–99, 2004.
[9] K. Fukumizu, F.R. Bach, and M.I. Jordan. Kernel dimension reduction in regression. Ann. Stat., 37(4):1871–1905, 2009.
[10] A. Gretton, K. Fukumizu, C.H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In Advances in NIPS 20, pages 585–592. 2008.
[11] M. Hristache, A. Juditsky, J. Polzehl, and V. Spokoiny. Structure adaptive approach for dimension reduction. Ann. Stat., 29(6):1537–1566, 2001.
[12] B. Li, H. Zha, and F. Chiaromonte. Contour regression: A general approach to dimension reduction. Ann. Stat., 33(4):1580–1616, 2005.
[13] K.-C. Li. Sliced inverse regression for dimension reduction (with discussion). J. Amer. Stat. Assoc., 86:316–342, 1991.
[14] K.-C. Li. On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma. J. Amer. Stat. Assoc., 87:1025–1039, 1992.
[15] S. Liu, Z. Liu, J. Sun, and L. Liu. Application of synergetic neural network in online writeprint identification. Intern. J.
Digital Content Technology and its Applications, 5(3):126–135, 2011.
[16] L. Meier, S. Van De Geer, and P. Bühlmann. The group lasso for logistic regression. J. Royal Stat. Soc.: Ser. B, 70(1):53–71, 2008.
[17] A.M. Samarov. Exploring regression structure using nonparametric functional estimation. J. Amer. Stat. Assoc., 88(423):836–847, 1993.
[18] S. K. Shevade and S. S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19(17):2246–2253, 2003.
[19] L. Song, J. Huang, A. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In Proc. ICML2009, pages 961–968. 2009.
[20] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G.R.G. Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517–1561, 2010.
[21] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[22] G.W. Stewart and J.-Q. Sun. Matrix Perturbation Theory. Academic Press, 1990.
[23] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Stat. Soc.: Ser. B, 58(1):267–288, 1996.
[24] H. Wang and C. Leng. Unified lasso estimation by least squares approximation. J. Amer. Stat. Assoc., 102(479):1039–1048, 2007.
[25] H. Wang, G. Li, and C.-L. Tsai. Regression coefficient and autoregressive order shrinkage and selection via the lasso. J. Royal Stat. Soc.: Ser. B, 69(1):63–78, 2007.
[26] H. Wang, G. Li, and C.-L. Tsai. On the consistency of SCAD tuning parameter selector. Biometrika, 94:553–558, 2007.
[27] H. Wang, B. Li, and C. Leng. Shrinkage tuning parameter selection with a diverging number of parameters. J. Royal Stat. Soc.: Ser. B, 71(3):671–683, 2009.
[28] M. Wang, F. Sha, and M. Jordan. Unsupervised kernel dimension reduction. NIPS 23, pages 2379–2387. 2010.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Royal Stat. Soc.: Ser. B, 68(1):49–67, 2006.
[30] H. Zou. The adaptive lasso and its oracle properties. J. Amer. Stat. Assoc., 101:1418–1429, 2006.
[31] C. Zou and X. Chen. On the consistency of coordinate-independent sparse estimation with BIC. J. Multivariate Analysis, 112:248–255, 2012.
Scalable imputation of genetic data with a discrete fragmentation-coagulation process

Lloyd T. Elliott, Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London WC1N 3AR, U.K. elliott@gatsby.ucl.ac.uk
Yee Whye Teh, Department of Statistics, University of Oxford, 1 South Parks Road, Oxford OX1 3TG, U.K. y.w.teh@stats.ox.ac.uk

Abstract

We present a Bayesian nonparametric model for genetic sequence data in which a set of genetic sequences is modelled using a Markov model of partitions. The partitions at consecutive locations in the genome are related by the splitting and merging of their clusters. Our model can be thought of as a discrete analogue of the continuous fragmentation-coagulation process [Teh et al 2011], preserving the important properties of projectivity, exchangeability and reversibility, while being more scalable. We apply this model to the problem of genotype imputation, showing improved computational efficiency while maintaining accuracies comparable to other state-of-the-art genotype imputation methods.

1 Introduction

The increasing availability of genetic data (for example, from the Thousand Genomes project [1]) and the importance of genetics in scientific and medical applications require the development of scalable and accurate models for genetic sequences which are informed by genetic processes. Although standard models such as the coalescent with recombination [2] are accurate, they suffer from intractable posterior computations. To address this, various hidden Markov model (HMM) based approaches have been proposed in the literature as more scalable alternatives (e.g. [3, 4]). Due to gene conversion and chromosomal crossover, genetic sequences exhibit a local 'mosaic'-like structure wherein sequences are composed of prototypical segments called haplotypes [5].
Locally, these prototypical segments are shared by a cluster of sequences: each sequence in the cluster is described well by a haplotype that is specific to the location on the chromosome of the cluster. An example of such a structure is shown in Figure 1. HMMs can capture this structure by having each latent state correspond to one of the haplotypes [3, 6]. Unfortunately, this leads to symmetries in the posterior distribution arising from the nonidentifiability of the state labels [7, 8]. Furthermore, current state-of-the-art HMM methods often involve costly model selection procedures in order to choose the number of latent states. A continuous fragmentation-coagulation process (CFCP) has recently been proposed for modelling local mosaic structure in genetic sequences [9]. The CFCP is a nonparametric model defined directly on unlabelled partitions, thereby avoiding both costly model selection and the label switching problem [8]. Although inference algorithms derived for the CFCP scale linearly in the number and length of the sequences [9], since the CFCP is a Markov jump process, the computational overhead needed to model the arbitrary number of latent events located between two consecutive observations might preclude scalability to large datasets. In this work, we present a novel fragmentation-coagulation process defined on a discrete grid (called the DFCP) which provides the advantages of the CFCP while being more scalable.
[Figure 1: Haplotype structure of the CEU and YRI populations from HapMap [10] found by the DFCP. Data consists of single nucleotide polymorphisms (SNPs) from the TAP2 gene. The horizontal axis indicates SNP location and label; the vertical axis represents clusters from the last sample of an MCMC chain converging to the DFCP posterior. Letters inside clusters indicate base identity.]

The DFCP describes location-dependent unlabelled partitions such that at each location on the chromosome the clusters will split into multiple clusters which then merge to form the clusters at the next location. As with the CFCP, the DFCP avoids the label switching problem by defining a probability distribution directly on the space of unlabelled partitions. The splitting and merging of clusters across the chromosome forms a mosaic structure of haplotypes. Figure 1 gives an example of the structure discovered by the DFCP. We describe the DFCP in section 2, and a forward-backward inference algorithm in section 3. Sections 4 and 5 report some experimental results showing good performance on an imputation problem, and in section 6 we conclude.

2 The discrete fragmentation-coagulation process

In humans, most of the bases on a chromosome are the same for all individuals in a population.
Genetic variations arise through mutations such as single nucleotide polymorphisms (SNPs), which are locations in the genome where a single base was altered by a mutation at some time in the ancestry of the chromosome. At each SNP location, a particular chromosome has one of usually two possible bases (referred to as the major and minor allele). Consequently, SNP data for a chromosome can be modelled as a binary sequence, with each entry indicating which of the two bases is present at that location. In this paper we consider SNP data consisting of n binary sequences x = (x_i)_{i=1}^n, where each sequence x_i = (x_{it})_{t=1}^T is of length T and corresponds to the T SNPs on a segment of a chromosome in an individual. The t-th entry x_{it} of sequence i is equal to zero if individual i has the major allele at location t and equal to one otherwise. We will model these sequences using a discrete fragmentation-coagulation process (DFCP) so that the sequence values at the SNP at location t are described by the latent partition π_t of the sequences. Each cluster in the partition corresponds to a haplotype. The DFCP models the sequence of partitions using a discrete Markov chain as follows: starting with π_t, we first fragment each cluster in π_t into smaller clusters, forming a finer partition ρ_t. Then we coagulate the clusters in ρ_t to form the clusters of π_{t+1}. In the remainder of this section, we will first give some background theory on partitions and on random fragmentation and coagulation operations, and then we will describe the DFCP as a Markov chain over partitions. Finally, we will describe the likelihood model used to relate the sequence of partitions to the observed sequences.

2.1 Random partitions, fragmentations and coagulations

A partition of a set S is a clustering of S into non-overlapping non-empty subsets of S whose union is all of S. The Chinese restaurant process (CRP) forms a canonical family of distributions on partitions.
A random partition π of a set S is said to follow the law CRP(S, α, σ) if:

Pr(π) = ( [α + σ]_σ^{#π − 1} / [α + 1]_1^{#S − 1} ) ∏_{a ∈ π} [1 − σ]_1^{#a − 1}    (1)

where [x]_d^n = x(x + d) · · · (x + (n − 1)d) is Kramp's symbol and α > −σ, σ ∈ [0, 1) are the concentration and discount parameters respectively [11]. A CRP can also be described by the following analogy: customers (elements of S) enter a Chinese restaurant and choose to sit at tables (clusters in π). The first customer chooses any table. Subsequently, the i-th customer sits at a previously chosen table a with probability proportional to #a − σ, where #a is the number of customers already sitting there, and at some unoccupied table with probability proportional to α + σ#π, where #π is the total number of tables already sat at by previous customers.

The fragmentation and coagulation operators are random operations on partitions. The fragmentation FRAG(π, α, σ) of a partition π is formed by partitioning further each cluster a of π according to CRP(a, α, σ) and then taking the union of the resulting partitions, yielding a partition of S that is finer than π. Conversely, the coagulation COAG(π, α, σ) of π is formed by partitioning the set of clusters of π (i.e., the set π itself) according to CRP(π, α, σ) and then replacing each cluster with the union of its elements, yielding a partition that is coarser than π. The fragmentation and coagulation operators are linked through the following theorem by Pitman [12].

Theorem 1. Let S be a set and let A1, B1, A2, B2 be random partitions of S such that:

A1 ∼ CRP(S, ασ2, σ1σ2),    B1 | A1 ∼ FRAG(A1, −σ1σ2, σ2),
B2 ∼ CRP(S, ασ2, σ2),      A2 | B2 ∼ COAG(B2, α, σ1).

Then, for all partitions A and B of the set S such that B is a refinement of A:

Pr(A1 = A, B1 = B) = Pr(A2 = A, B2 = B).    (2)

2.2 The discrete fragmentation-coagulation process

The DFCP is parameterized by a concentration µ > 0 and rates (R_t)_{t=1}^{T−1} with R_t ∈ [0, 1).
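The CRP seating scheme described above is straightforward to simulate. The following minimal sketch is ours, not code from the paper, and the helper name `crp_partition` is hypothetical:

```python
import random

def crp_partition(S, alpha, sigma, rng=None):
    """Sample a partition of the list S from CRP(S, alpha, sigma).

    The i-th customer joins an occupied table a with probability
    proportional to (#a - sigma), and opens a new table with
    probability proportional to (alpha + sigma * number of tables),
    matching the seating analogy in the text.
    """
    rng = rng or random.Random(0)
    clusters = []
    for s in S:
        weights = [len(a) - sigma for a in clusters]
        weights.append(alpha + sigma * len(clusters))  # new-table weight
        r = rng.random() * sum(weights)
        for a, w in zip(clusters, weights):
            r -= w
            if r < 0:
                a.append(s)
                break
        else:  # no existing table chosen: open a new one
            clusters.append([s])
    return clusters
```

With σ = 0 this reduces to the one-parameter CRP that appears as the marginal distribution of each π_t in the DFCP below.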
Under the DFCP, the marginal distribution of the partition π_t is CRP(S, µ, 0), and so µ controls the number of clusters that are found at each location. The rate parameter R_t controls the strength of dependence between π_t and π_{t+1}, with R_t = 0 implying that π_t = π_{t+1} and R_t → 1 implying independence. Given µ and (R_t)_{t=1}^{T−1}, the DFCP on a set of sequences indexed by the set S = {1, . . . , n} is described by the following Markov chain. First we draw a partition π_1 ∼ CRP(S, µ, 0). This CRP describes the clustering of S at location t = 1. Subsequently, we draw ρ_t | π_t from FRAG(π_t, 0, R_t), which fragments each of the clusters in π_t into smaller clusters in ρ_t, and then π_{t+1} | ρ_t from COAG(ρ_t, µ/R_t, 0), which coagulates clusters in ρ_t into larger clusters in π_{t+1}. Each π_t has CRP(S, µ, 0) as its invariant marginal distribution and each ρ_t is marginally distributed as CRP(S, µ, R_t). This can be seen by applying Theorem 1 with the substitution σ1 = 0, σ2 = R_t, α = µ/R_t. In population genetics the CRP appears as (and was predated by) Ewens' sampling formula [13], a counting formula for the number of alleles appearing in a population, observed at a given location. Over a short segment of the chromosome where recombination rates are low, haplotypes behave like alleles, and so a CRP prior on the number of haplotypes at a location is reasonable. Further, since the fragmentation and coagulation operators are defined in terms of CRPs, which are projective and exchangeable, the Markov chain is projective and exchangeable in S as well. Projectivity and exchangeability are desirable properties for Bayesian nonparametric models because they imply that the marginal distribution of a given data item does not depend on the total number of other data items or on the order in which the other data items are indexed. In genetics, this captures the fact that usually only a small subset of a population is observed.
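The Markov chain just described (a CRP draw followed by alternating fragmentation and coagulation steps) can be sketched as a forward simulation. This is our illustration, not the paper's inference code, and all function names are ours; it assumes R_t ∈ (0, 1) so that µ/R_t is defined:

```python
import random

def crp(elements, alpha, sigma, rng):
    """CRP(elements, alpha, sigma) seating scheme (Section 2.1)."""
    clusters = []
    for e in elements:
        w = [len(a) - sigma for a in clusters]
        w.append(alpha + sigma * len(clusters))
        r = rng.random() * sum(w)
        for a, wa in zip(clusters, w):
            r -= wa
            if r < 0:
                a.append(e)
                break
        else:
            clusters.append([e])
    return clusters

def frag(pi, alpha, sigma, rng):
    """FRAG: partition each cluster of pi by an independent CRP, take the union."""
    rho = []
    for a in pi:
        rho.extend(crp(a, alpha, sigma, rng))
    return rho

def coag(rho, alpha, sigma, rng):
    """COAG: partition the clusters of rho by a CRP, then merge each group."""
    groups = crp(list(range(len(rho))), alpha, sigma, rng)
    return [[e for i in g for e in rho[i]] for g in groups]

def dfcp_step(pi_t, mu, R_t, rng):
    """One DFCP transition: rho_t ~ FRAG(pi_t, 0, R_t),
    then pi_{t+1} ~ COAG(rho_t, mu / R_t, 0)."""
    rho_t = frag(pi_t, 0.0, R_t, rng)
    return coag(rho_t, mu / R_t, 0.0, rng)
```

A short chain is then `pi = crp(list(S), mu, 0.0, rng)` followed by repeated `pi = dfcp_step(pi, mu, R_t, rng)`; every step returns a valid partition of S.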
Finally, the theorem also shows that conditioned on π_{t+1}, ρ_t has distribution FRAG(π_{t+1}, 0, R_t), while π_t | ρ_t has distribution COAG(ρ_t, µ/R_t, 0), meaning that the Markov chain defining the DFCP is reversible. Chromosome replication is directional, and so statistics for genetic processes along the chromosome are not reversible. But the strength of this effect on SNP data is not currently known, and many genetic models such as the coalescent with recombination [14] assume reversibility for simplicity. The non-reversibility displayed by models such as fastPHASE is an artifact of their construction rather than an attempt to capture non-reversible aspects of genetic sequences.

[Figure 2: Left: graphical model for the discrete fragmentation-coagulation process (hyperparameters not shown), with variables ρ_1, . . . , ρ_{T−1}, π_1, . . . , π_T, x_{it}, θ_{ta} and β_t. Right: generative process for genetic sequences x_{it}:

π_1 ∼ CRP(S, µ, 0),
ρ_t | π_t ∼ FRAG(π_t, 0, R_t),
π_{t+1} | ρ_t ∼ COAG(ρ_t, µ/R_t, 0),
log µ ∼ N(m, v),
log R_t ∼ Uniform(log R_min, 0),
x_{it} | a_{it} = θ_{t, a_{it}},
θ_{ta} | β_t ∼ Bernoulli(β_t),
β_t | γ_t ∼ Beta(γ_t/2, γ_t/2),
log γ_t ∼ Uniform(log γ_min, 0).    (3)]

2.3 Likelihood model for sequence observations

Given the sequence of partitions (π_t)_{t=1}^T, we model the observations in each cluster at each location t independently. For each cluster a ∈ π_t at location t, we adopt a discrete likelihood model in which the same observation is emitted for each sequence in the cluster. For each sequence i, let a_{it} ∈ π_t be the cluster in π_t containing i. Let θ_{ta} be the emission of cluster a at location t. Since SNP data has binary labels, θ_{ta} ∈ {0, 1} is a Bernoulli random variable. Let the mean of θ_{ta} be β_t (this is the latent allele frequency at location t). We assume that conditioned on the partitions and the parameters, the observations x_{it} are independent, and determined by the cluster parameter θ_{ta}.
Thus the probability Pr(θ_{ta} = 1 | β_t) = β_t and the probability Pr(x_{it} | a_{it} = a, θ_{ta}) = δ(x_{it} = θ_{ta}), where δ is an indicator function (i.e., it is one if x_{it} = θ_{ta} and zero otherwise). We place a beta prior on β_t with mean parameter 1/2 and mass parameter γ_t. The mass parameters are themselves marginally independent, and we place on them an uninformative log-uniform prior over a range: p(γ_t) ∝ γ_t^{−1}, γ_t ≥ γ_min. Since this distribution is heavy tailed, the β_t variables will have more mass near 0 and 1 than they would have if γ_t were fixed, adding sparsity to the latent allele frequencies. This phenomenon is empirically observed in SNP data. We also place an uninformative log-uniform prior on R_t over a range: p(R_t) ∝ R_t^{−1}, R_t ≥ R_min. Note that the prior gives more mass to values of R_t close to R_min, which we set close to zero, since we expect the partitions of consecutive locations to be relatively similar so that the mosaic haplotype structure can be formed. Finally, we place a truncated log-normal prior on µ with mean m and variance v: log µ ∼ N(m, v), µ > 0. The graphical model for this generative process is shown in Figure 2.

2.4 Relationship with the continuous fragmentation-coagulation process

The continuous version of the fragmentation-coagulation process [9], which we refer to as the CFCP, is a partition-valued Markov jump process (MJP). (The 'time' variable for this MJP is the chromosome location, viewed as a continuous variable.) The CFCP is a pure jump process and can be defined in terms of its rates for various jump events. There are two types of events in the CFCP: binary fragmentation events, in which a single cluster a is split into two clusters b and c at a rate of R Γ(#b)Γ(#c)/Γ(#a), and binary coagulation events, in which two clusters b and c merge to form one cluster a at a rate of R/µ. As was shown in [9], the CFCP can be realised as a continuous limit of the DFCP. Consider a DFCP with concentration µ and constant rate parameter Rε.
Then as ε → 0, the probability that the coagulation and fragmentation operations at a specific time step t induce no change in the partition structure π_t approaches 1. Conversely, the probability that these operations are the binary events given above scales as O(ε), while all other events scale as larger powers of ε. If we rescale the time steps by t ↦ εt, then the expected number of binary events over a finite interval approaches ε times the rates given above and the expected number of all other events goes to zero, yielding the CFCP. In the CFCP, fragmentation and coagulation events are binary: they involve either one cluster fragmenting into two new clusters, or two clusters coagulating into one new cluster. However, for the DFCP the fragmentation and coagulation operators can describe more complicated haplotype structures without introducing more latent events. For example, one cluster splitting into three clusters (as happens to the second haplotype from the top of Figure 1 after the 18th SNP) can be described by the DFCP using just one fragmentation operator. The order of the latent events required by the CFCP does not matter, adding unnecessary symmetry to its posterior.

3 Inference with the discrete fragmentation-coagulation process

We derive a Gibbs sampler for posterior simulation in the DFCP by making use of the exchangeability of the process. Each iteration of the sampler updates the trajectory of cluster assignments of one sequence i through the partition structure. To arrive at the updates, we first derive the conditional distribution of the i-th trajectory given the others, which can be shown to be a Markov chain. Coupled with the deterministic likelihood terms, we then use a backwards-filtering/forwards-sampling algorithm to obtain a new trajectory for sequence i.
In this section, we derive the conditional distribution of trajectory i using the definition of fragmentation and coagulation, and also the posterior distributions of the parameters R_t and µ, which we will update using slice sampling [15].

3.1 Conditional probabilities for the trajectory of sequence i

We will refer to the projection of the partitions π_t and ρ_t onto S − {i} by π_t^{−i} and ρ_t^{−i} respectively. Let a_t (respectively b_t) be the cluster assignment of sequence i at location t in π_t (respectively ρ_t). If the sequence i is placed in a new cluster by itself in π_t (i.e., it forms a singleton cluster), we will denote this by a_t = ∅, and for ρ_t^{−i} we will denote the respective event by b_t = ∅. Otherwise, if the sequence i is placed in an existing cluster in π_t^{−i} (respectively ρ_t^{−i}), we will denote this by a_t ∈ π_t^{−i} (respectively b_t ∈ ρ_t^{−i}). Thus the state spaces of a_t and b_t are respectively π_t^{−i} ∪ {∅} and ρ_t^{−i} ∪ {∅}. Starting at t = 1, since the initial distribution is π_1 ∼ CRP(S, µ, 0), the conditional cluster assignment of the sequence i in π_1 is given by the CRP probabilities from (1):

Pr(a_1 = a | π_1^{−i}) =
  #a / (n − 1 + µ)   if a ∈ π_1^{−i},
  µ / (n − 1 + µ)    if a = ∅.    (4)

To find the conditional distribution of b_t given a_t, we use the definition of the fragmentation operation as independent CRP partitions of each cluster in π_t. If a_t = ∅, then the sequence i is in a cluster by itself in π_t, and so it will remain in a cluster by itself after fragmenting. Thus, b_t = ∅ with probability 1. If a_t = a ∈ π_t^{−i}, then b_t must be one of the clusters in ρ_t into which a fragments. This can be a singleton cluster, in which case b_t = ∅, or it can be one of the clusters in ρ_t^{−i}. We will refer to this set of clusters in ρ_t^{−i} by F_t(a). Since a is fragmented according to CRP(a, 0, R_t), when the i-th sequence is added to this CRP it is placed in a cluster b ∈ F_t(a) with probability proportional to #b − R_t, and it is placed in a singleton cluster with probability proportional to R_t #F_t(a).
Normalizing these probabilities yields the following joint distribution:

Pr(b_t = b | a_t = a, π_t^{−i}, ρ_t^{−i}) =
  (#b − R_t) / #a       if a ∈ π_t^{−i}, b ∈ F_t(a),
  R_t #F_t(a) / #a      if a ∈ π_t^{−i}, b = ∅,
  1                     if a = b = ∅,
  0                     otherwise.    (5)

Similarly, to find the conditional distribution of a_{t+1} given b_t = b, we use the definition of the coagulation operation. If b ≠ ∅, then the sequence i was not in a singleton cluster in ρ_t^{−i}, and so it must follow the rest of the sequences in b to the unique a ∈ π_{t+1}^{−i} such that b ⊆ a (i.e., b coagulates with other clusters to form a). We will refer to the set of clusters in ρ_t^{−i} that coagulate to form a by C_t(a). If b = ∅, then the sequence i is in a singleton cluster in ρ_t^{−i}, and so we can imagine it being the last customer added to the coagulating CRP(ρ_t, µ/R_t, 0) of the clusters of ρ_t. Hence the probability that sequence i is placed in a cluster a ∈ π_{t+1}^{−i} is proportional to #C_t(a), while the probability that it forms a cluster by itself in π_{t+1}^{−i} is proportional to µ/R_t. This yields the following joint probability:

Pr(a_{t+1} = a | b_t = b, π_{t+1}^{−i}, ρ_t^{−i}) =
  1                                      if a ∈ π_{t+1}^{−i}, b ∈ C_t(a),
  R_t #C_t(a) / (µ + R_t #ρ_t^{−i})      if a ∈ π_{t+1}^{−i}, b = ∅,
  µ / (µ + R_t #ρ_t^{−i})                if a = b = ∅,
  0                                      otherwise.    (6)

3.2 Message passing and sampling for the sequences of the DFCP

Once the conditional probabilities are defined, it is straightforward to derive messages that allow us to conduct backwards-filtering/forwards-sampling to resample the trajectory of sequence i in the DFCP. This provides an exact Gibbs update for the trajectory of that sequence conditioned on the trajectories of all the other sequences and the data. The messages we will define are the conditional distribution of all the data seen after a given location in the sequence, conditioned on the cluster assignment of sequence i at that location. The messages are defined as follows:

m_C^t(a) = Pr(x_{i,(t+1):T} | a_t = a, π_{t:T}^{−i}, ρ_{t:(T−1)}^{−i}),    (7)
m_F^t(b) = Pr(x_{i,(t+1):T} | b_t = b, π_{t:T}^{−i}, ρ_{t:(T−1)}^{−i}).
(8)

We define the last messages to be m_C^T(a) = 1. These messages are computed as follows:

m_F^t(b) = Σ_{a ∈ π_{t+1}^{−i} ∪ {∅}} m_C^{t+1}(a) · δ(x_{i,t+1} = θ_{t+1,a}) · Pr(a_{t+1} = a | b_t = b, π_{t+1}^{−i}, ρ_t^{−i}),    (9)

where the last two factors are the likelihood and the coagulation probabilities from (6), and

m_C^t(a) = Σ_{b ∈ ρ_t^{−i} ∪ {∅}} m_F^t(b) · Pr(b_t = b | a_t = a, π_t^{−i}, ρ_t^{−i}),    (10)

where the last factor gives the fragmentation probabilities from (5). As the fragmentation and coagulation conditional probabilities are only supported for clusters a, b such that b ⊆ a, these sums can be expanded so that only non-zero terms are summed over. For simplicity we do not provide these expanded forms here. Given these computations it is easy to define backwards messages using the reversibility of the process. The backwards messages can be used to compute marginal probabilities of the observation as in the forward-backward algorithm. To sample from the posterior distribution of the trajectory for sequence i conditioned on the other trajectories and the data, we use the Markov property for the chain a_1, b_1, . . . , b_{T−1}, a_T and the definition of the messages. Starting at location 1, we have:

Pr(a_1 = a | x_i, π_{1:T}^{−i}, ρ_{1:(T−1)}^{−i}) ∝ Pr(a_1 = a | π_1^{−i}) Pr(x_{i1} | a_1 = a) Pr(x_{i,2:T} | a_1 = a, π_{1:T}^{−i}, ρ_{1:(T−1)}^{−i})
 = Pr(a_1 = a | π_1^{−i}) · δ(x_{i1} = θ_{1a}) · m_C^1(a),    (11)

where the first factor is given by the CRP probabilities (1) and the second is the likelihood. For subsequent b_t and a_{t+1} at locations t = 1, . . . , T − 1:

Pr(b_t = b | a_t = a, x_i, π_{1:T}^{−i}, ρ_{1:(T−1)}^{−i}) ∝ Pr(b_t = b | a_t = a, π_t^{−i}, ρ_t^{−i}) · m_F^t(b),    (12)

with the fragmentation probabilities from (5), and

Pr(a_t = a | b_{t−1} = b, x_i, π_{1:T}^{−i}, ρ_{1:(T−1)}^{−i}) ∝ Pr(a_t = a | b_{t−1} = b, π_t^{−i}, ρ_{t−1}^{−i}) · δ(x_{it} = θ_{ta}) · m_C^t(a),    (13)

with the coagulation probability from (6) and the likelihood. The complexity of this update is O(KT), where K is the expected number of clusters in the posterior.
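The backwards-filtering/forwards-sampling pattern behind equations (9)-(13) can be illustrated on a generic finite-state chain. This sketch is ours, not the paper's implementation; in the DFCP the state spaces π_t^{−i} ∪ {∅} and ρ_t^{−i} ∪ {∅} change with t, but the recursion has the same shape:

```python
import random

def backward_messages(T, states, trans, like):
    """m[t][s] = P(observations after location t | state at t is s):
    a sum over next states of transition prob * likelihood * next message,
    analogous to the m_C recursion above."""
    m = [dict() for _ in range(T)]
    for s in states:
        m[T - 1][s] = 1.0
    for t in range(T - 2, -1, -1):
        for s in states:
            m[t][s] = sum(trans(t, s, s2) * like(t + 1, s2) * m[t + 1][s2]
                          for s2 in states)
    return m

def _draw(weights, rng):
    """Draw a key with probability proportional to its weight."""
    r = rng.random() * sum(weights.values())
    for s, w in weights.items():
        r -= w
        if r < 0:
            return s
    return s  # guard against floating-point rounding

def forward_sample(T, states, init, trans, like, m, rng):
    """Sample a trajectory from the posterior, filtering forwards with
    the backward messages, as in equations (11)-(13)."""
    w0 = {s: init(s) * like(0, s) * m[0][s] for s in states}
    traj = [_draw(w0, rng)]
    for t in range(1, T):
        prev = traj[-1]
        wt = {s: trans(t - 1, prev, s) * like(t, s) * m[t][s] for s in states}
        traj.append(_draw(wt, rng))
    return traj
```

With deterministic (0/1) likelihoods, as in the DFCP's indicator likelihood δ(x_{it} = θ_{ta}), only trajectories consistent with the observations receive positive weight.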
This complexity class is the same as for the continuous fragmentation-coagulation process and other related HMM methods such as fastPHASE. But there is no exact Gibbs update for the trajectories in the CFCP. Instead the CFCP sampler relies on uniformization [16], which has slower mixing times than exact Gibbs, and so the update for the DFCP is, theoretically, more efficient.

3.3 Parameter updates

We use slice sampling [15] to update the µ and R_t parameters conditioned on the partition structure. Using Bayes' rule, the definition (3) of the DFCP, and the identity [a]_b^n = b^n Γ(a/b + n)/Γ(a/b), the posterior probabilities of µ and R_t given the partitions π_{1:T} and ρ_{1:(T−1)} are as follows:

Pr(µ | π, ρ) ∝ Pr(µ) Pr(π_1 | µ, R_1) Pr(ρ_1 | π_1, µ, R_1) · · · Pr(π_T | ρ_{T−1}, µ, R_{T−1})
 ∝ Pr(µ) · (Γ(µ) / Γ(µ + n)) · µ^{−T + Σ_{t=1}^T #π_t} · ∏_{t=1}^{T−1} Γ(µ/R_t) / Γ(µ/R_t + #ρ_t),    (14)

Pr(R_t | π, ρ, µ) ∝ Pr(R_t) Pr(ρ_t | π_t, µ, R_t) Pr(π_{t+1} | ρ_t, µ, R_t)
 ∝ Pr(R_t) · R_t^{#ρ_t − #π_t − #π_{t+1} + 1} · (Γ(µ/R_t) Γ(1 − R_t)^{−#ρ_t} / Γ(#ρ_t + µ/R_t)) · ∏_{b ∈ ρ_t} Γ(#b − R_t).    (15)

[Figure 3: Allele imputation for X chromosomes from the Thousand Genomes project. Left: accuracy for prediction of held-out alleles for continuous (CFCP) and discrete (DFCP) versions of the fragmentation-coagulation process and for the popular methods BEAGLE and fastPHASE, at missing-data proportions 0.1, 0.3, 0.5, 0.7 and 0.9; the 90% missing data condition truncates BEAGLE accuracies to emphasize other conditions. Right: runtime versus accuracy for 500 MCMC iterations for DFCP and CFCP in the 50% missing data condition. Points are averaged over 20 datasets and 25 consecutive samples.]

4 Experiments

To examine the accuracy and scalability of the DFCP, we conducted an allele imputation experiment on SNP data from the Thousand Genomes project¹.
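The slice-sampling updates for µ and R_t described in Section 3.3 only require the unnormalized log densities (14) and (15). A generic univariate stepping-out slice sampler in the style of [15] can be sketched as follows; this is our illustration, and `log_f` is a stand-in target, not the DFCP posterior:

```python
import math
import random

def slice_sample(x0, log_f, rng, w=1.0, max_steps=50):
    """One univariate slice-sampling update for a density proportional
    to exp(log_f(x)), using stepping-out and shrinkage."""
    # Slice level: log_f(x0) + log(u) with u ~ Uniform(0,1), written
    # with an Exp(1) draw to avoid log(0).
    log_y = log_f(x0) - rng.expovariate(1.0)
    # Step out to bracket the slice, up to max_steps widenings per side.
    left = x0 - w * rng.random()
    right = left + w
    k = max_steps
    while k > 0 and log_f(left) > log_y:
        left -= w
        k -= 1
    k = max_steps
    while k > 0 and log_f(right) > log_y:
        right += w
        k -= 1
    # Shrink the bracket until a point inside the slice is found.
    while True:
        x1 = left + rng.random() * (right - left)
        if log_f(x1) > log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1
```

For the DFCP one would run such an update on log µ and on each log R_t, with `log_f` set to the logarithm of the right-hand side of (14) or (15).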
We also compared the runtime of the samplers for the DFCP and CFCP on data simulated from the coalescent with recombination model [14]. In this section, we describe the setup of these experiments, and in section 5 we present the results. For the allele imputation experiment, we considered SNPs from 524 male X chromosomes. We chose 20 intervals randomly, each containing 500 consecutive SNPs. In five conditions we held out nested sets of between 10% and 90% of the alleles uniformly over all pairs of sites and individuals, and used fastPHASE [3], BEAGLE [17], the CFCP [9] and the DFCP to predict the held-out alleles. We used the most recent versions of the BEAGLE and fastPHASE software available to us. We implemented the DFCP with many of the same libraries and programming techniques as the CFCP, and both versions were optimized. In each missing data condition, the CFCP and DFCP were run with five random restarts and 46 MCMC iterations per restart (26 of which were discarded for burn-in and thinning). The accuracies for the DFCP and CFCP were computed by thresholding the empirical marginal probabilities of the held-out alleles at 0.5. The priors on the hyperparameters and the likelihood specification of the two models were matched, and the samplers were initialized using a sequential Monte Carlo method based on the trajectory updates. The posterior distributions of the concentration parameter µ for the two methods are different. In order to match the expected number of clusters in the posterior, we also conducted allele imputation in the 50% missing data condition with µ fixed at 10.0 for both models. We simulated 500 MCMC iterations with no random restarts. We then computed the accuracy of the samples by predicting held-out alleles based on the cluster assignments of the sample.

¹ March 2012 v3 release of the Thousand Genomes Project.
In a second experiment, we simulated datasets from the coalescent with recombination model consisting of between 10,000 and 50,000 sequences using the software ms [14]. We conducted posterior MCMC simulation in both models and compared the computation time required per iteration.

5 Results

The accuracy of the DFCP in the allele imputation experiment was comparable to that of the CFCP and fastPHASE in all missing data conditions (Figure 3, left). For the 70% and 90% missing data conditions, BEAGLE performed poorly (its median accuracy for this condition was 93.90%, and the mean at-chance accuracy for all conditions was 93.44%). In Figure 3 (right) we compare the accuracy and runtime for the 50% missing data condition. This figure shows that the runtime required for each iteration is lower for the DFCP and that the sequential Monte Carlo initialization is better (i.e., closer to a posterior mode) for the DFCP. No difference in mixing time is suggested by the figure. As an aside, we estimated the Shannon entropy in these samples and found that the DFCP had slightly more entropy per sample than the CFCP (the difference was small but statistically significant). This could indicate that the DFCP has better mixing. For the second experiment, we plot the runtime per iteration of both models against the number of sequences in the simulated dataset (Figure 4). The DFCP was around 2.5 times faster than the CFCP for the condition with 50,000 sequences. In both models, most of the computation time was spent calculating the messages in the backwards-filtering step. The CFCP has an arbitrary number of latent events between consecutive observations, and it is likely that the runtime improvement shown by the DFCP is due to its reduced number of required message calculations.
6 Discussion

[Figure 4: Runtimes per iteration of the DFCP and CFCP on simulated datasets consisting of large numbers of sequences (x-axis: number of individuals, x10^4; y-axis: runtime in seconds per iteration). Lines indicate the mean; shaded regions indicate the standard deviation.]

In this paper we have presented the discrete fragmentation-coagulation process. The DFCP is a partition-valued Markov chain in which partitions change along the chromosome by a fragmentation operation followed by a coagulation operation. The DFCP is designed to model the mosaic haplotype structure observed in genetic sequences. We applied the DFCP to an allele prediction task on data from the Thousand Genomes Project, yielding accuracies comparable to state-of-the-art methods and runtimes lower than those of the continuous fragmentation-coagulation process [9]. The DFCP and CFCP induce different joint distributions on the partitions at adjacent locations. The CFCP is a Markov jump process with an arbitrary number of latent binary events, in each of which a single cluster is split into two clusters or two clusters are merged into one. The DFCP, however, can model any transition between partitions with a single pair of fragmentation and coagulation operations. Exact Gibbs updates for the partitions are possible in the DFCP, whereas sampling in the CFCP uses uniformization [16] which, although fast in practice, mixes more slowly in theory than exact Gibbs. In future work we will explore better calling and calibration methods to improve imputation accuracies. Another avenue of future research is to understand how other genetic processes, including population admixture and gene conversion, can be incorporated into the fragmentation-coagulation framework. Although haplotype structure is a local property, the Markov assumption does not hold in real genetic data.
This could be captured through hierarchical FCP models or through adaptation of other dependent nonparametric models such as the spatially normalized gamma process [18].

Acknowledgements

We thank the Gatsby Charitable Foundation for funding. We also thank Andriy Mnih, Vinayak Rao and Anna Goldenberg for helpful discussions, and the anonymous reviewers for their suggestions.

References

[1] The 1000 Genomes Project Consortium. A map of human genome variation from population-scale sequencing. Nature, 467:1061–1073, 2010.
[2] R. R. Hudson. Properties of a neutral allele model with intragenic recombination. Theoretical Population Biology, 23(2):183–201, 1983.
[3] P. Scheet and M. Stephens. A fast and flexible statistical model for large-scale population genotype data: Applications to inferring missing genotypes and haplotypic phase. The American Journal of Human Genetics, 78(4):629–644, 2006.
[4] J. Marchini, B. Howie, S. Myers, G. McVean, and P. Donnelly. A new multipoint method for genome-wide association studies by imputation of genotypes. Nature Genetics, 39(7):906–913, 2007.
[5] M. J. Daly, J. D. Rioux, S. F. Schaffner, T. J. Hudson, and E. S. Lander. High-resolution haplotype structure in the human genome. Nature Genetics, 29:229–232, 2001.
[6] J. Marchini, D. Cutler, N. Patterson, M. Stephens, E. Eskin, E. Halperin, S. Lin, Z. S. Qin, H. M. Munro, G. R. Abecasis, P. Donnelly, and the International HapMap Consortium. A comparison of phasing algorithms for trios and unrelated individuals. The American Journal of Human Genetics, 78(3):437–450, 2006.
[7] M. Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(4):795–809, 2000.
[8] A. Jasra, C. C. Holmes, and D. A. Stephens. Markov chain Monte Carlo methods and the label switching problem in Bayesian mixture modeling. Statistical Science, 20(1):50–67, 2005.
[9] Y. W. Teh, C. Blundell, and L. T. Elliott.
Modelling genetic variations using fragmentation-coagulation processes. In Advances in Neural Information Processing Systems, 2011.
[10] The International HapMap Consortium. The international HapMap project. Nature, 426:789–796, 2003.
[11] J. Pitman. Combinatorial Stochastic Processes. Springer-Verlag, 2006.
[12] J. Pitman. Coalescents with multiple collisions. Annals of Probability, 27:1870–1902, 1999.
[13] W. J. Ewens. The sampling theory of selectively neutral alleles. Theoretical Population Biology, 3:87–112, 1972.
[14] R. R. Hudson. Generating samples under a Wright-Fisher neutral model of genetic variation. Bioinformatics, 18:337–338, 2002.
[15] R. M. Neal. Slice sampling. Annals of Statistics, 31:705–767, 2003.
[16] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2011.
[17] B. L. Browning and S. R. Browning. A unified approach to genotype imputation and haplotype-phase inference for large data sets of trios and unrelated individuals. American Journal of Human Genetics, 84:210–223, 2009.
[18] V. Rao and Y. W. Teh. Spatial normalized gamma processes. In Advances in Neural Information Processing Systems, volume 22, pages 1554–1562, 2009.
Non-parametric Approximate Dynamic Programming via the Kernel Method

Nikhil Bhat, Graduate School of Business, Columbia University, New York, NY 10027. nbhat15@gsb.columbia.edu
Vivek F. Farias, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142. vivekf@mit.edu
Ciamac C. Moallemi, Graduate School of Business, Columbia University, New York, NY 10027. ciamac@gsb.columbia.edu

Abstract

This paper presents a novel non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful approximation and sample complexity guarantees. In particular, we establish both theoretically and computationally that our proposal can serve as a viable alternative to state-of-the-art parametric ADP algorithms, freeing the designer from carefully specifying an approximation architecture. We accomplish this by developing a kernel-based mathematical program for ADP. Via a computational study on a controlled queueing network, we show that our procedure is competitive with parametric ADP approaches.

1 Introduction

Problems of dynamic optimization in the face of uncertainty are frequently posed as Markov decision processes (MDPs). The central computational problem is then reduced to the computation of an optimal 'cost-to-go' function that encodes the cost incurred under an optimal policy starting from any given MDP state. Many MDPs of practical interest suffer from the curse of dimensionality: intractably large state spaces preclude exact computation of the cost-to-go function. Approximate dynamic programming (ADP) is an umbrella term for algorithms designed to produce good approximations to this function, yielding a natural 'greedy' control policy. ADP algorithms are, in large part, parametric in nature, requiring the user to provide an 'approximation architecture' (i.e., a set of basis functions). The algorithm then produces an approximation in the span of this basis.
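To make the parametric setup concrete, the following sketch evaluates an approximation of the form $z^\top \Phi(x)$ in the span of a user-supplied basis. The toy scalar basis here is a hypothetical illustration, not one used in the paper:

```python
import numpy as np

def value_approximation(x, z, basis):
    # J~(x) = z' Phi(x): the ADP algorithm chooses the weight vector z,
    # while the user supplies the basis Phi (the "approximation architecture").
    phi = np.array([f(x) for f in basis])
    return float(z @ phi)

# hypothetical toy basis for a scalar state: constant, linear, quadratic
basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
z = np.array([0.5, 1.0, 2.0])
val = value_approximation(3.0, z, basis)  # 0.5 + 1*3 + 2*9 = 21.5
```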
The strongest theoretical results available for such algorithms typically share two features: (1) the quality of the approximation produced is comparable with the best possible within the basis specified, and (2) the computational effort required for doing so typically scales with the dimension of the basis specified. These results highlight the importance of selecting a 'good' approximation architecture, and remain somewhat dissatisfying in that additional sampling or computational effort cannot remedy a bad approximation architecture. On the other hand, a non-parametric approach would, in principle, permit the user to select a rich, potentially full-dimensional architecture (e.g., the Haar basis). One would then expect to compute increasingly accurate approximations with increasing computational effort. The present work presents a practical algorithm of this type. Before describing our contributions, we begin by summarizing the existing body of research on non-parametric ADP algorithms.

The key computational step in approximate policy iteration methods is approximate policy evaluation. This step involves solving the projected Bellman equation, a linear stochastic fixed-point equation. A numerically stable approach is to perform regression with a certain ℓ2-regularization, where the loss is the ℓ2-norm of the Bellman error. By substituting this step with a suitable non-parametric regression procedure, [2, 3, 4] obtain corresponding non-parametric algorithms. Unfortunately, schemes such as approximate policy iteration have no convergence guarantees even in parametric settings, and these difficulties remain in non-parametric variations. Another idea has been to use kernel-based local averaging to approximate the solution of an MDP with that of a simpler variation on a sampled state space [5, 6, 7]. However, convergence rates for local averaging methods are exponential in the dimension of the problem state space.
As in our setting, [8] constructs kernel-based cost-to-go function approximations. These are subsequently plugged into various ad hoc optimization-based ADP formulations, without theoretical justification. Closely related to our work, [9, 10] consider modifying the approximate linear program with an ℓ1-regularization term to encourage sparse approximations in the span of a large, but necessarily tractable, set of features. Along these lines, [11] discuss a non-parametric method that explicitly restricts the smoothness of the value function. However, sample complexity results for this method are not provided, and it appears unsuitable for high-dimensional problems (such as, for instance, the problem we consider in our experiments). In contrast to this line of work, our approach allows for approximations in a potentially infinite-dimensional approximation architecture, with a constraint on an appropriate ℓ2-norm of the weight vector. The non-parametric ADP algorithm we develop enjoys non-trivial approximation and sample complexity guarantees. We show that our approach complements state-of-the-art parametric ADP algorithms by allowing the algorithm designer to compute what is essentially the best possible 'simple' approximation1 in a full-dimensional approximation architecture, as opposed to restricting attention to some a priori fixed low-dimensional architecture. In greater detail, we make the following contributions:

A new mathematical programming formulation. We rigorously develop a kernel-based variation of the 'smoothed' approximate LP (SALP) approach to ADP proposed by [12]. The resulting mathematical program, which we dub the regularized smoothed approximate LP (RSALP), is distinct from simply substituting a kernel-based approximation into the SALP formulation. We develop a companion active set method that is capable of solving this mathematical program rapidly and with limited memory requirements.

Theoretical guarantees.
We establish a graceful approximation guarantee for our algorithm.2 Our algorithm can be interpreted as solving an approximate linear program in an appropriate Hilbert space. We provide, with high probability, an upper bound on the approximation error of the algorithm relative to the best possible approximation subject to a regularization constraint. The sampling requirements for our method are, in fact, independent of the dimension of the approximation architecture. Instead, we show that the number of samples grows polynomially as a function of a regularization parameter. Hence, the sampling requirements are a function of the complexity of the approximation, not of the dimension of the approximating architecture. This result can be seen as the 'right' generalization of prior parametric approximate LP approaches [13, 14, 12], where, in contrast, sample complexity grows with the dimension of the approximating architecture.

A computational study. To study the efficacy of the RSALP, we consider an MDP arising from a challenging queueing network scheduling problem. We demonstrate that our RSALP method yields significant improvements over known heuristics and standard parametric ADP methods.

In what follows, proofs and a detailed discussion of our numerical procedure are deferred to the Online Supplement to this paper.

1 In the sense that the ℓ2-norm of the weight vector can grow at most polynomially with a certain measure of computational budget.
2 These guarantees come under the assumption of being able to sample from a certain idealized distribution. This is common in the ADP literature.

2 Formulation

Consider a discrete-time Markov decision process with finite state space S and finite action space A. We denote by xt and at, respectively, the state and action at time t. We assume time-homogeneous Markovian dynamics: conditioned on being at state x and taking action a, the system transitions to state x′ with probability p(x, x′, a) independently of the past.
A policy is a map µ : S → A, so that

$J_\mu(x) \triangleq \mathbb{E}_{x,\mu}\!\left[\sum_{t=0}^{\infty} \alpha^t g_{x_t, a_t}\right]$

represents the expected (discounted, infinite-horizon) cost-to-go under policy µ starting at state x. Letting Π denote the set of all policies, our goal is to find an optimal policy µ∗ such that $\mu^* \in \operatorname{argmin}_{\mu \in \Pi} J_\mu(x)$ for all x ∈ S (it is well known that such a policy exists). We denote the optimal cost-to-go function by $J^* \triangleq J_{\mu^*}$. An optimal policy µ∗ can be recovered as a 'greedy' policy with respect to J∗:

$\mu^*(x) \in \operatorname{argmin}_{a \in A}\; g_{x,a} + \alpha\, \mathbb{E}_{x,a}[J^*(X)],$

where we define $\mathbb{E}_{x,a}[f(X)] \triangleq \sum_{x' \in S} p(x, x', a) f(x')$ for all f : S → R. Since in practical applications S is often intractably large, exact computation of J∗ is untenable. ADP algorithms are principally tasked with computing approximations to J∗ of the form $J^*(x) \approx z^\top \Phi(x) \triangleq \tilde{J}(x)$, where Φ : S → R^m is referred to as an 'approximation architecture' or a basis and must be provided as input to the ADP algorithm. The ADP algorithm computes a 'weight' vector z; one then employs a policy that is greedy with respect to the corresponding approximation J̃.

2.1 Primal Formulation

Motivated by the LP for exact dynamic programming, a series of ADP algorithms [15, 13, 12] have been proposed that compute a weight vector z by solving an appropriate modification of the exact LP for dynamic programming. In particular, [12] propose solving the following optimization problem, where ν ∈ R^S_+ is a strictly positive probability distribution and κ > 0 is a penalty parameter:

$\max\; \sum_{x \in S} \nu_x\, z^\top \Phi(x) - \kappa \sum_{x \in S} \pi_x s_x$
$\text{s.t.}\; z^\top \Phi(x) \le g_{x,a} + \alpha\, \mathbb{E}_{x,a}[z^\top \Phi(X)] + s_x, \quad \forall x \in S,\, a \in A,$
$z \in \mathbb{R}^m,\; s \in \mathbb{R}^S_+.$ (1)

In parsing the above program, notice that if one insisted that the slack variables s be precisely 0, one would be left with the ALP proposed by [15]. [13] provided a pioneering analysis that loosely showed

$\|J^* - \Phi z^*\|_{1,\nu} \le \frac{2}{1-\alpha} \inf_z \|J^* - \Phi z\|_\infty,$

for an optimal solution z∗ to the ALP; [12] showed that these bounds could be improved upon substantially by 'smoothing' the constraints of the ALP, i.e., permitting positive slacks.
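The greedy-policy recovery described at the start of this section can be sketched as follows. This is a minimal illustration over explicit dictionaries of stage costs and transition probabilities; the container layout is an assumption made for the example, not the paper's implementation:

```python
def greedy_action(x, actions, g, P, J, alpha=0.9):
    # Greedy policy with respect to a cost-to-go approximation J:
    # pick the action minimizing stage cost plus discounted expected J.
    # g[(x, a)] is the stage cost; P[(x, a)] maps successor -> probability.
    def q(a):
        return g[(x, a)] + alpha * sum(p * J[y] for y, p in P[(x, a)].items())
    return min(actions, key=q)

# toy two-state example: 'stay' is cheap, 'move' leads to a costly state
g = {(0, 'stay'): 1.0, (0, 'move'): 0.0}
P = {(0, 'stay'): {0: 1.0}, (0, 'move'): {1: 1.0}}
J = {0: 0.0, 1: 100.0}
a_star = greedy_action(0, ['stay', 'move'], g, P, J)  # 'stay': 1 + 0.9*0 < 0 + 0.9*100
```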
In both cases, one must solve a 'sampled' version of the above program. Now, consider allowing Φ to map from S to a general (potentially infinite-dimensional) Hilbert space H. We use bold letters to denote elements of the Hilbert space H; e.g., the weight vector is denoted by z ∈ H. We further suppress the dependence on Φ and denote the elements of H by the bold versions of their counterparts in S: hence, for example, $\mathbf{x} \triangleq \Phi(x)$ and $\mathbf{X} \triangleq \Phi(X)$. Further, we denote $\mathcal{X} \triangleq \Phi(S)$, with $\mathcal{X} \subset H$. The value function approximation in this case is given by

$\tilde{J}_{\mathbf{z},b}(x) \triangleq \langle \mathbf{x}, \mathbf{z} \rangle + b = \langle \Phi(x), \mathbf{z} \rangle + b,$ (2)

where b is a scalar offset corresponding to a constant basis function. The following generalization of (1) — which we dub the regularized SALP (RSALP) — then essentially suggests itself:

$\max\; \sum_{x \in S} \nu_x \langle \mathbf{x}, \mathbf{z} \rangle + b - \kappa \sum_{x \in S} \pi_x s_x - \frac{\Gamma}{2} \langle \mathbf{z}, \mathbf{z} \rangle$
$\text{s.t.}\; \langle \mathbf{x}, \mathbf{z} \rangle + b \le g_{x,a} + \alpha\, \mathbb{E}_{x,a}[\langle \mathbf{X}, \mathbf{z} \rangle + b] + s_x, \quad \forall x \in S,\, a \in A,$
$\mathbf{z} \in H,\; b \in \mathbb{R},\; s \in \mathbb{R}^S_+.$ (3)

The only 'new' ingredient in the program above is the fact that we regularize z using the parameter Γ > 0. Constraining $\|\mathbf{z}\|_H \triangleq \sqrt{\langle \mathbf{z}, \mathbf{z} \rangle}$ to lie within some ℓ2-ball anticipates that we will eventually resort to sampling in solving this program, and we cannot hope for a reasonable number of samples to provide a good solution to a problem in which z is unconstrained. This regularization, which plays a crucial role both in theory and in practice, is easily missed if one directly 'plugs in' a local averaging approximation in place of $z^\top \Phi(x)$, as is the case in the earlier work of [5, 6, 7, 8] and others. Since the RSALP, i.e., program (3), can be interpreted as a regularized stochastic optimization problem, one may hope to solve it via its sample average approximation. To this end, define the likelihood ratio $w_x \triangleq \nu_x/\pi_x$, and let Ŝ ⊂ S be a set of N states sampled independently according to the distribution π. The sample average approximation of (3) is then

$\max\; \frac{1}{N} \sum_{x \in \hat{S}} w_x \langle \mathbf{x}, \mathbf{z} \rangle + b - \frac{\kappa}{N} \sum_{x \in \hat{S}} s_x - \frac{\Gamma}{2} \langle \mathbf{z}, \mathbf{z} \rangle$
$\text{s.t.}\; \langle \mathbf{x}, \mathbf{z} \rangle + b \le g_{x,a} + \alpha\, \mathbb{E}_{x,a}[\langle \mathbf{X}, \mathbf{z} \rangle + b] + s_x, \quad \forall x \in \hat{S},\, a \in A,$
$\mathbf{z} \in H,\; b \in \mathbb{R},\; s \in \mathbb{R}^{\hat{S}}_+.$
(4)

We call this program the sampled RSALP. Even if |Ŝ| were small, it is still not clear that this program can be solved effectively. We will, in fact, solve the dual to this problem.

2.2 Dual Formulation

We begin by establishing some notation. Let $N_{x,a} \triangleq \{x\} \cup \{x' \in S \mid p(x, x', a) > 0\}$. Now, define the symmetric positive semi-definite matrix $Q \in \mathbb{R}^{(\hat{S} \times A) \times (\hat{S} \times A)}$ according to

$Q(x, a, x', a') \triangleq \sum_{y \in N_{x,a}} \sum_{y' \in N_{x',a'}} \left(1\{x = y\} - \alpha p(x, y, a)\right)\left(1\{x' = y'\} - \alpha p(x', y', a')\right) \langle \mathbf{y}, \mathbf{y}' \rangle,$ (5)

and the vector $R \in \mathbb{R}^{\hat{S} \times A}$ according to

$R(x, a) \triangleq \Gamma g_{x,a} - \frac{1}{N} \sum_{x' \in \hat{S}} \sum_{y \in N_{x,a}} w_{x'} \left(1\{x = y\} - \alpha p(x, y, a)\right) \langle \mathbf{y}, \mathbf{x}' \rangle.$ (6)

Notice that Q and R depend only on inner products in $\mathcal{X}$ (and other easily computable quantities). The dual to (4) is then given by:

$\min\; \frac{1}{2} \lambda^\top Q \lambda + R^\top \lambda$
$\text{s.t.}\; \sum_{a \in A} \lambda_{x,a} \le \frac{\kappa}{N}, \quad \forall x \in \hat{S},$
$\sum_{x \in \hat{S}} \sum_{a \in A} \lambda_{x,a} = \frac{1}{1 - \alpha}, \qquad \lambda \in \mathbb{R}^{\hat{S} \times A}_+.$ (7)

Assuming that Q and R can be easily computed, this finite-dimensional quadratic program is tractable: its size is polynomial in the number of sampled states. We may recover a primal solution (i.e., the weight vector z∗) from an optimal dual solution:

Proposition 1. If the optimal solution to (7) is attained at some λ∗, then an optimal solution to (4) is attained at some (z∗, s∗, b∗) with

$\mathbf{z}^* = \frac{1}{\Gamma} \left( \frac{1}{N} \sum_{x \in \hat{S}} w_x \mathbf{x} - \sum_{x \in \hat{S},\, a \in A} \lambda^*_{x,a} \left( \mathbf{x} - \alpha \mathbb{E}_{x,a}[\mathbf{X}] \right) \right).$ (8)

Having solved this program, we may, using Proposition 1, recover our approximate cost-to-go function $\tilde{J}(x) = \langle \mathbf{z}^*, \mathbf{x} \rangle + b^*$ as

$\tilde{J}(x) = \frac{1}{\Gamma} \left( \frac{1}{N} \sum_{y \in \hat{S}} w_y \langle \mathbf{y}, \mathbf{x} \rangle - \sum_{y \in \hat{S},\, a \in A} \lambda^*_{y,a} \left( \langle \mathbf{y}, \mathbf{x} \rangle - \alpha \mathbb{E}_{y,a}[\langle \mathbf{X}, \mathbf{x} \rangle] \right) \right) + b^*.$ (9)

A policy greedy with respect to J̃ is not affected by constant translations; hence, in (9), the value of b∗ can arbitrarily be set to zero. Again, note that given λ∗, J̃ involves only inner products. At this point, we use the 'kernel trick': instead of explicitly specifying H or the mapping Φ, we take the approach of specifying inner products. In particular, given any positive-definite kernel K : S × S → R, it is well known (Mercer's theorem) that there exist a Hilbert space H and Φ : S → H such that $K(x, y) = \langle \Phi(x), \Phi(y) \rangle$.
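The kernel trick amounts to evaluating Gram-matrix entries directly in state space, without ever forming Φ. A minimal sketch with hypothetical helper names, using the Gaussian kernel adopted later in the experiments:

```python
import numpy as np

def gaussian_kernel(x, y, h=100.0):
    # K(x, y) = exp(-||x - y||_2^2 / h); by Mercer's theorem this equals
    # <Phi(x), Phi(y)> for some feature map Phi into a Hilbert space H.
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.exp(-d.dot(d) / h))

def gram_matrix(states, kernel=gaussian_kernel):
    # Every inner product <x, x'> appearing in Q, R, and the recovered
    # approximation is replaced by a kernel evaluation K(x, x').
    n = len(states)
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j] = kernel(states[i], states[j])
    return G
```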
Consequently, given a positive-definite kernel, we simply replace every inner product $\langle \mathbf{x}, \mathbf{x}' \rangle$ in the definition of program (7) with the quantity K(x, x′), and similarly in the approximation (9). In particular, this is equivalent to using the Hilbert space H and mapping Φ corresponding to that kernel. Solving (7) directly is costly: in particular, it is computationally expensive to pre-compute and store the matrix Q. An alternative is to employ the following broad strategy, recognized by [16] and [17] in the context of solving SVM classification problems and referred to as an active set method: at every point in time, one attempts to (a) change only a small number of variables while leaving the other variables untouched, and (b) maintain feasibility. It turns out that this results in a method requiring memory and per-step computation that scale only linearly with the sample size. We defer the details of the procedure as well as its theoretical analysis to the Online Supplement.

3 Approximation Guarantees

Recall that we are employing an approximation $\tilde{J}_{\mathbf{z},b}$ of the form (2), parameterized by the weight vector z and the offset parameter b. Now, denoting by C the feasible region of the RSALP projected onto the z and b coordinates, the best possible approximation one may hope for among those permitted by the RSALP will have ℓ∞-approximation error $\inf_{(\mathbf{z},b) \in \mathcal{C}} \|J^* - \tilde{J}_{\mathbf{z},b}\|_\infty$. Provided the Gram matrix given by the kernel restricted to S is positive definite, this quantity can be made arbitrarily small by making Γ small. The rate at which this happens reflects the quality of the kernel in use. Here we focus on the following question: for a fixed choice of regularization parameters (i.e., with C fixed), what approximation guarantee can be obtained for a solution to the RSALP?
This section will show that one can achieve a guarantee that is, in essence, within a certain constant multiple of the optimal approximation error, using a number of samples that is independent of the size of the state space and the dimension of the approximation architecture.

3.1 The Guarantee

Define the Bellman operator T : R^S → R^S according to $(TJ)(x) \triangleq \min_{a \in A}\; g_{x,a} + \alpha\, \mathbb{E}_{x,a}[J(X)]$. Let Ŝ be a set of N states drawn independently at random from S under the distribution π over S. Given the definition of $\tilde{J}_{\mathbf{z},b}$ in (2), we consider the following sampled version of the RSALP:

$\max\; \nu^\top \tilde{J}_{\mathbf{z},b} - \frac{2}{1-\alpha} \frac{1}{N} \sum_{x \in \hat{S}} s_x$
$\text{s.t.}\; \langle \mathbf{x}, \mathbf{z} \rangle + b \le g_{x,a} + \alpha\, \mathbb{E}_{x,a}[\langle \mathbf{X}, \mathbf{z} \rangle + b] + s_x, \quad \forall x \in \hat{S},\, a \in A,$
$\|\mathbf{z}\|_H \le C,\; |b| \le B,\; \mathbf{z} \in H,\; b \in \mathbb{R},\; s \in \mathbb{R}^{\hat{S}}_+.$ (10)

We will assume that states are sampled according to an idealized distribution. In particular, $\pi \triangleq \pi_{\mu^*,\nu}$, where

$\pi_{\mu^*,\nu} \triangleq (1 - \alpha) \sum_{t=0}^{\infty} \alpha^t \nu^\top P^t_{\mu^*}.$ (11)

Here, $P_{\mu^*}$ is the transition matrix under the optimal policy µ∗. This idealized assumption is also common to the work of [14] and [12]. In addition, this program is somewhat distinct from the program presented earlier, (4): (1) As opposed to a 'soft' regularization term in the objective, we have a 'hard' regularization constraint, $\|\mathbf{z}\|_H \le C$. It is easy to see that, given a Γ, we can choose a radius C(Γ) that yields an equivalent optimization problem. (2) We bound the magnitude of the offset b. This is for theoretical convenience; our sample complexity bound will be parameterized by B. (3) We fix κ = 2/(1 − α). Our analysis reveals this to be the 'right' penalty weight on the Bellman inequality violations. Before stating our bound, we establish a few bits of notation. We let (z∗, b∗) denote an optimal solution to (10). We let $K \triangleq \max_{\mathbf{x} \in \mathcal{X}} \|\mathbf{x}\|_H$, and finally, we define the quantity

$\Xi(C, B, K, \delta) \triangleq \left(1 + \sqrt{\tfrac{1}{2} \ln(1/\delta)}\right) \left( 4CK(1 + \alpha) + 4B(1 - \alpha) + 2\|g\|_\infty \right).$

We have the following theorem:

Theorem 1. For any ε > 0 and δ > 0, let $N \ge \Xi(C, B, K, \delta)^2 / \epsilon^2$.
If (10) is solved by sampling N states from S according to the distribution $\pi_{\mu^*,\nu}$, then with probability at least $1 - \delta - \delta^4$,

$\|J^* - \tilde{J}_{\mathbf{z}^*,b^*}\|_{1,\nu} \le \inf_{\|\mathbf{z}\|_H \le C,\, |b| \le B} \frac{3 + \alpha}{1 - \alpha} \|J^* - \tilde{J}_{\mathbf{z},b}\|_\infty + \frac{4\epsilon}{1 - \alpha}.$ (12)

Ignoring the ε-dependent error terms, we see that the quality of approximation provided by (z∗, b∗) is essentially within a constant multiple of the optimal (in the sense of ℓ∞-error) approximation to J∗ possible using a weight vector z and offset b permitted by the regularization constraints. This is a 'structural' error term that persists even if one were permitted to draw an arbitrarily large number of samples. It is analogous to the approximation results produced in parametric settings, with the important distinction that one allows comparisons to approximations in potentially full-dimensional basis sets, which might be substantially superior. In addition to the structural error above, one incurs an additive 'sampling' error that scales like $O\!\left(N^{-1/2}(CK + B)\sqrt{\ln(1/\delta)}\right)$. This quantity has no explicit dependence on the dimension of the approximation architecture. In contrast, comparable sample complexity results (e.g., [14, 12]) typically scale with the dimension of the approximation architecture. Here, this space may be full-dimensional, so that such a dependence would yield a vacuous guarantee. The error depends on the user-specified quantities C and B, and on K, which is bounded for many kernels. The result allows for arbitrary 'simple' (i.e., with $\|\mathbf{z}\|_H$ small) approximations in a rich feature space, as opposed to restricting us to some a priori fixed, low-dimensional feature space. This yields some intuition for why we expect the approach to perform well even with a relatively general choice of kernel. As C and B grow large, the structural error decreases to zero, provided K restricted to S is positive definite. In order to keep the sampling error constant, one would then need to increase N at a rate that is $\Omega((CK + B)^2)$.
In summary, increased sampling yields approximations of increasing quality, approaching an exact approximation. If J∗ admits a good approximation with $\|\mathbf{z}\|_H$ small, one can expect a good approximation with a reasonable number of samples.

3.2 Proof Sketch

A detailed proof of a stronger result is given in the Online Supplement; here we provide a proof sketch. The first step involves providing a guarantee for the exact (non-sampled) RSALP with hard regularization. Assuming (z∗, b∗) is the 'learned' parameter pair, we first establish the guarantee:

$\|J^* - \tilde{J}_{\mathbf{z}^*,b^*}\|_{1,\nu} \le \frac{3 + \alpha}{1 - \alpha} \inf_{\|\mathbf{z}\|_H \le C,\, b \in \mathbb{R}} \|J^* - \tilde{J}_{\mathbf{z},b}\|_\infty.$

Geometrically, the proof works loosely by translating the 'best' approximation given the regularization constraints into one that is guaranteed to yield an approximation error no worse than that produced by the RSALP. To establish a guarantee for the sampled RSALP, we first pose the RSALP as a stochastic optimization problem by setting $s(\mathbf{z}, b) \triangleq (\tilde{J}_{\mathbf{z},b} - T\tilde{J}_{\mathbf{z},b})^+$. We must ensure that, with high probability, the sample averages in the sampled program are close to the exact expectations, uniformly over all possible values of (z, b). In order to establish such a guarantee, we bound the Rademacher complexity of the class of functions given by

$\bar{\mathcal{F}}_{S,\mu} \triangleq \left\{ x \mapsto (\tilde{J}_{\mathbf{z},b}(x) - T_\mu \tilde{J}_{\mathbf{z},b}(x))^+ : \|\mathbf{z}\|_H \le C,\, |b| \le B \right\},$

where $T_\mu$ is the Bellman operator associated with policy µ. This yields the appropriate uniform large-deviations bound. Using this guarantee, we show that the optimal solution to the sampled RSALP yields approximation guarantees similar to those of the exact RSALP; this proof is somewhat delicate, as it appears difficult to show directly that the optimal solutions themselves are close.

[Figure 1: The queueing network example. Two servers and four queues: arrivals with rates λ1 = λ4 = 0.08; service rates µ1 = µ2 = 0.12 and µ3 = µ4 = 0.28.]
4 Case Study: A Queueing Network

This section considers the problem of controlling the queueing network illustrated in Figure 1, with the objective of minimizing the long-run average delay. There are two 'flows' in this network: the first through server 1 followed by server 2 (with buffering at queues 1 and 2, respectively), and the second through server 2 followed by server 1 (with buffering at queues 4 and 3, respectively). All interarrival and service times are exponential, with rate parameters summarized in Figure 1. This specific network has been studied in [13, 18] and is considered a challenging control problem. Our goal in this section is two-fold. First, we show that the RSALP can surpass the performance of both heuristic and established ADP-based approaches when used 'out of the box' with a generic kernel. Second, we show that the RSALP can be solved efficiently.

4.1 MDP Formulation

Although the control problem at hand is nominally a continuous-time problem, it is routinely converted into a discrete-time problem via a standard uniformization device; see [19], for instance, for an explicit example. In the equivalent discrete-time problem, at most a single event can occur in a given epoch, corresponding either to the arrival of a job at queue 1 or 4, or to the arrival of a service token for one of the four queues, with probability proportional to the corresponding rate. The state of the system is described by the number of jobs in each of the four queues, so that $S \triangleq \mathbb{Z}^4_+$, whereas the action space A consists of four potential actions, each corresponding to a matching between servers and queues. We take the single-period cost to be the total number of jobs in the system, so that $g_{x,a} = \|x\|_1$; note that minimizing the average number of jobs in the system is equivalent to minimizing average delay, by Little's law. Finally, we take α = 0.9 as our discount factor.

4.2 Approaches

RSALP (this paper).
We solve (7) using the active set method outlined in the Online Supplement, taking as our kernel the standard Gaussian radial basis function kernel $K(x, y) \triangleq \exp(-\|x - y\|_2^2 / h)$, with the bandwidth parameter $h \triangleq 100$. (The sensitivity of our results to this bandwidth parameter appears minimal.) Note that this implicitly corresponds to a full-dimensional basis function architecture. Since the idealized sampling distribution $\pi_{\mu^*,\nu}$ is unavailable to us, we use in its place the geometric distribution $\pi(x) \triangleq (1 - \zeta)^4 \zeta^{\|x\|_1}$, with the sampling parameter ζ set at 0.9, as in [13]. The regularization parameter Γ was chosen via a line search; we report results for $\Gamma \triangleq 10^{-8}$. (Again, performance does not appear to be very sensitive to Γ, so a crude line search appears to suffice.) In accordance with the theory, we set the constraint violation parameter $\kappa \triangleq 2/(1 - \alpha)$, as suggested by the analysis of Section 3.1 as well as by [12].

Table 1: Performance results in the queueing example. For the SALP and RSALP methods, the number in parentheses gives the standard deviation across sample sets.

  policy          performance
  Longest Queue   8.09
  Max-Weight      6.55

  sample size              1000         3000         5000         10000
  SALP, cubic basis        7.19 (1.76)  7.89 (1.76)  6.94 (1.15)  6.63 (0.92)
  RSALP, Gaussian kernel   6.72 (0.39)  6.31 (0.11)  6.13 (0.08)  6.04 (0.05)

SALP [12]. The SALP formulation (1) is, as discussed earlier, the parametric counterpart to the RSALP. It may be viewed as a generalization of the ALP approach proposed by [13] and has been demonstrated to provide substantial performance benefits relative to the ALP approach. Our choice of parameters for the SALP mirrors those for the RSALP to the extent possible, so as to allow an 'apples-to-apples' comparison. Thus, we solve the sample average approximation of this program using the same geometric sampling distribution and the same parameter κ.
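The geometric sampling distribution $\pi(x) = (1-\zeta)^4 \zeta^{\|x\|_1}$ factorizes over queues, so states can be drawn by sampling each queue length as an independent geometric random variable. A minimal sketch with a hypothetical function name:

```python
import random

def sample_state(zeta=0.9, nqueues=4, rng=random):
    # pi(x) = (1 - zeta)^4 * zeta^{|x|_1} factorizes: each queue length
    # is an independent Geometric(1 - zeta) variable on {0, 1, 2, ...}.
    state = []
    for _ in range(nqueues):
        n = 0
        while rng.random() < zeta:  # each "success" adds one job
            n += 1
        state.append(n)
    return tuple(state)
```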
Approximation architectures in which the basis functions are monomials of the queue lengths appear to be a popular choice for queueing control problems [13]. We use all monomials of degree at most 3, which we call the cubic basis, as our approximation architecture.

Longest Queue (generic). This is a simple heuristic: at any given time, a server chooses to work on the longest queue among those it can serve.

Max-Weight [20]. Max-Weight is a well-known scheduling heuristic for queueing networks. The policy is obtained as the greedy policy with respect to a value function approximation of the form $\tilde{J}_{MW}(x) \triangleq \sum_{i=1}^{4} |x_i|^{1+\epsilon}$, given a parameter ε > 0. This policy has been studied extensively and shown to have a number of good properties, for example being throughput-optimal and offering good performance in critically loaded settings [21]. Via a line search, we chose ε = 1.5 as the exponent for our experiments.

4.3 Results

Policies were evaluated using a common set of arrival-process sample paths. The performance metric we report for each control policy is the long-run average number of jobs in the system under that policy, $\frac{1}{T} \sum_{t=1}^{T} \|x_t\|_1$, where we set T = 10000. We further average this random quantity over an ensemble of 300 sample paths. Furthermore, generating SALP and RSALP policies requires state sampling. To understand the effect of the sample size on the resulting policy performance, the different sample sizes listed in Table 1 were used. Since the policies generated depend on the randomly sampled states, we further average performance over 10 sets of sampled states. The results are reported in Table 1 and have the following salient features:

1. RSALP outperforms established policies: Approaches such as Max-Weight or 'parametric' ADP with a basis spanning polynomials have previously been shown to work well for the problem of interest. We see that the RSALP with 10000 samples achieves performance superior to these extant schemes.

2.
Sampling improves performance: This is expected from the theory in Section 3. Ideally, as the sample size is increased one should relax the regularization. However, in our experiments we noticed that performance is quite insensitive to the parameter Γ. Nonetheless, it is clear that larger sample sets yield a significant performance improvement. 3. RSALP is less sensitive to state sampling: We notice from the standard deviation values in Table 1 that our approach gives policies whose performance varies significantly less across different sample sets of the same size. In summary, we view these results as indicative of the possibility that the RSALP may serve as a practical and viable alternative to state-of-the-art parametric ADP techniques.

References

[1] D. P. Bertsekas. Dynamic Programming and Optimal Control, Vol. II. Athena Scientific, 2007. [2] B. Bethke, J. P. How, and A. Ozdaglar. Kernel-based reinforcement learning using Bellman residual elimination. MIT Working Paper, 2008. [3] Y. Engel, S. Mannor, and R. Meir. Bayes meets Bellman: The Gaussian process approach to temporal difference learning. In Proceedings of the 20th International Conference on Machine Learning, pages 154–161. AAAI Press, 2003. [4] X. Xu, D. Hu, and X. Lu. Kernel-based least squares policy iteration for reinforcement learning. IEEE Transactions on Neural Networks, 18(4):973–992, 2007. [5] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49(2):161–178, 2002. [6] D. Ormoneit and P. Glynn. Kernel-based reinforcement learning in average cost problems. IEEE Transactions on Automatic Control, 47(10):1624–1636, 2002. [7] A. M. S. Barreto, D. Precup, and J. Pineau. Reinforcement learning using kernel-based stochastic factorization. In Advances in Neural Information Processing Systems, volume 24, pages 720–728. MIT Press, 2011. [8] T. G. Dietterich and X. Wang. Batch value function approximation via support vectors.
In Advances in Neural Information Processing Systems, volume 14, pages 1491–1498. MIT Press, 2002. [9] J. Kolter and A. Ng. Regularization and feature selection in least-squares temporal difference learning. In ICML '09, pages 521–528. ACM, 2009. [10] M. Petrik, G. Taylor, R. Parr, and S. Zilberstein. Feature selection using regularization in approximate linear programs for Markov decision processes. In ICML '10, pages 871–879, 2010. [11] J. Pazis and R. Parr. Non-parametric approximate linear programming for MDPs. In AAAI Conference on Artificial Intelligence. AAAI, 2011. [12] V. V. Desai, V. F. Farias, and C. C. Moallemi. Approximate dynamic programming via a smoothed linear program. To appear in Operations Research, 2011. [13] D. P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51(6):850–865, 2003. [14] D. P. de Farias and B. Van Roy. On constraint sampling in the linear programming approach to approximate dynamic programming. Mathematics of Operations Research, 29(3):462–478, 2004. [15] P. Schweitzer and A. Seidman. Generalized polynomial approximations in Markovian decision processes. Journal of Mathematical Analysis and Applications, 110:568–582, 1985. [16] E. Osuna, R. Freund, and F. Girosi. An improved training algorithm for support vector machines. In Neural Networks for Signal Processing, Proceedings of the 1997 IEEE Workshop, pages 276–285, September 1997. [17] T. Joachims. Making large-scale support vector machine learning practical, pages 169–184. MIT Press, Cambridge, MA, USA, 1999. [18] R. R. Chen and S. Meyn. Value iteration and optimization of multiclass queueing networks. In Decision and Control, 1998. Proceedings of the 37th IEEE Conference on, volume 1, pages 50–55, 1998. [19] C. C. Moallemi, S. Kumar, and B. Van Roy. Approximate and data-driven dynamic programming for queueing networks. Working Paper, 2008. [20] L. Tassiulas and A. Ephremides.
Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37(12):1936–1948, December 1992. [21] A. L. Stolyar. Maxweight scheduling in a generalized switch: State space collapse and workload minimization in heavy traffic. The Annals of Applied Probability, 14:1–53, 2004.
Probabilistic n-Choose-k Models for Classification and Ranking Kevin Swersky Daniel Tarlow Dept. of Computer Science University of Toronto [kswersky,dtarlow]@cs.toronto.edu Ryan P. Adams School of Eng. and Appl. Sciences Harvard University rpa@seas.harvard.edu Richard S. Zemel Dept. of Computer Science University of Toronto zemel@cs.toronto.edu Brendan J. Frey Prob. and Stat. Inf. Group University of Toronto frey@psi.toronto.edu Abstract In categorical data there is often structure in the number of variables that take on each label. For example, the total number of objects in an image and the number of highly relevant documents per query in web search both tend to follow a structured distribution. In this paper, we study a probabilistic model that explicitly includes a prior distribution over such counts, along with a count-conditional likelihood that defines probabilities over all subsets of a given size. When labels are binary and the prior over counts is a Poisson-Binomial distribution, a standard logistic regression model is recovered, but for other count distributions, such priors induce global dependencies and combinatorics that appear to complicate learning and inference. However, we demonstrate that simple, efficient learning procedures can be derived for more general forms of this model. We illustrate the utility of the formulation by exploring applications to multi-object classification, learning to rank, and top-K classification. 1 Introduction When models contain multiple output variables, an important potential source of structure is the number of variables that take on a particular value. For example, if we have binary variables indicating the presence or absence of a particular object class in an image, then the number of “present” objects may be highly structured, such as the number of digits in a zip code. In ordinal regression problems there may be some prior knowledge about the proportion of outputs within each level. 
For instance, when modeling scores assigned to papers submitted to a conference, this structure can be due to instructions that reviewers assign scores such that the distribution is roughly uniform. One popular model for multiple output classification problems is logistic regression (LR), in which the class probabilities are modeled as being conditionally independent given the features; another popular approach utilizes a softmax over the class outputs. Both models can be seen as possessing a prior on the label counts: in the case of the softmax model this prior is explicit, namely that exactly one output is active. For LR, there is an implicit factorization in which there is a specific prior on counts; this prior is the source of computational tractability, but also imparts an inductive bias to the model. The starting observation for our work is that we do not lose much efficiency by replacing the LR counts prior with a general prior, which permits the specification of a variety of inductive biases. In this paper we present a probabilistic model of multiple output classification, the n-choose-k model, which incorporates a distribution over the label counts, and show that the computations needed for learning and inference in this model are efficient. We develop applications of this model to diverse problems. A maximum-likelihood version of the model can be used for problems such as multi-class recognition, in which the label counts are known at training time but only a prior distribution is known at test time. The model easily extends to ordinal regression problems, such as ranking or collaborative filtering, in which each item is assigned to one of a small number of relevance levels. We establish a connection between n-choose-k models and ranking objectives, and prove that optimal decision-theoretic predictions under the model for “monotonic” gain functions (to be defined later), which include standard objectives used in ranking, can be achieved by a simple sorting operation.
Other problems can be modeled via direct maximization of expected gain. An important aim in classification and information retrieval is to optimize expected precision@K. We show that we can efficiently optimize this objective under the model and that it yields promising results. Overall, the result is a class of models along with a well-developed probabilistic framework for learning and inference that makes use of algorithms and modeling components that are not often used in machine learning. We demonstrate that it is a simple, yet expressive probabilistic approach that has many desirable computational properties.

2 Binary n-Choose-k Model

We begin by defining the basic model under the assumption of binary output variables. In the following section, we will generalize to the case of ordinal variables. The model inputs are x, and θ is defined as θ = Wx, where W are the parameters. The model output is a vector of D binary variables y ∈ Y = {0, 1}^D. We will use subsets c ⊆ {1, . . . , D} of variable indices and will represent the value assigned to a subset of variables as y_c. We will also make use of the notation c̄ to mean the complement {1, . . . , D} \ c. The generative procedure is then defined as follows: • Draw k from a prior distribution p(k) over counts k. • Draw k variables to take on label 1, where the probability of choosing subset c is given by

p(y_c = 1, y_c̄ = 0 | k) = exp{Σ_{d∈c} θ_d} / Z_k(θ) if |c| = k, and 0 otherwise,   (1)

where θ = (θ_1, . . . , θ_D) are parameters that determine individual variable biases towards being off or on, and Z_k(θ) = Σ_{y : Σ_d y_d = k} exp{Σ_d θ_d y_d}. Under this definition Z_0 = 1, and p(0 | 0) = 1. This has been referred to as a conditional Bernoulli distribution [1]. Logistic regression can be viewed as an instantiation of this model, with a “prior” distribution over count values that depends on parameters θ. This is a forced interpretation, but it is useful in understanding the implicit prior over counts that is imposed when using LR.
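For concreteness, the conditional Bernoulli distribution of eq. (1) can be enumerated directly when D is small. This is a brute-force sketch for illustration only (the function name is ours), not the scalable algorithm the paper uses:

```python
import itertools
import math

def cond_bernoulli(theta, k):
    """Enumerate p(y | k) of eq. (1) for small D: mass over binary vectors y
    with exactly k ones, proportional to exp(sum of theta_d over active d)."""
    weights = {}
    for y in itertools.product([0, 1], repeat=len(theta)):
        if sum(y) == k:
            weights[y] = math.exp(sum(t for t, yd in zip(theta, y) if yd))
    Zk = sum(weights.values())  # Z_k(theta)
    return {y: w / Zk for y, w in weights.items()}
```

For example, with θ = (0.5, −1.0, 2.0) and k = 1, the distribution puts its largest mass on the vector that activates the third coordinate, since θ_3 is largest.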
Specifically, if p(k) is defined to be a particular function of θ (known as a Poisson-Binomial distribution [2]):

p(k; θ) = Z_k(θ) / Z(θ),   where Z(θ) = Σ_k Z_k(θ),

then the joint probability p(y, k; θ) becomes equivalent to an LR model in the following sense. Suppose we have a joint assignment of variables y with Σ_d y_d = k, and p(k; θ) is Poisson-Binomial; then

p(y, k; θ) = p(k; θ) p(y | k; θ) = [Z_k(θ) / Z(θ)] · [exp{Σ_{d∈c} θ_d} / Z_k(θ)] = Π_d exp{θ_d y_d} / (1 + exp{θ_d}).   (2)

Note that the last equality factorizes Z(θ) to create independence across variables, but it requires that the “prior” be defined in terms of parameters θ. Our interest in this paper is in the more flexible family of models that arise after breaking the dependence of the “prior” on θ. First, we explore treating p(k) as a prior in the Bayesian sense, using it to express prior knowledge about label counts; later we will explore learning p(k) using separate parameters from θ. A consequence of these decisions is that the distribution does not factorize. At this point, we have not made it clear that these models can be learned efficiently, but we will show in the next section that this is indeed the case.

2.1 Maximum Likelihood Learning

Our goal in learning is to select parameters so as to maximize the probability assigned to observed data by the model. For notational simplicity in this section, we compute partial derivatives with respect to θ; then it should be clear that these can be back-propagated to a model θ(x; W). We note that if this relationship is linear, and the objective is convex in terms of θ, then it will also be convex in terms of W. The log-likelihood is as follows:

log p(y; θ) = log Σ_{k=0}^{D} p(k) p(y | k; θ) = log p(y | Σ_d y_d; θ) + κ   (3)
            = Σ_d θ_d y_d − log Z_{Σ_d y_d}(θ) + κ,   (4)

where κ is a constant that is independent of θ. As is standard, if we are given multiple sets of binary variables, {y^n}_{n=1}^{N}, we maximize the sum of log probabilities Σ_n log p(y^n; θ).
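The equivalence in eq. (2) can be checked numerically by brute force for small D. This sketch (our code, not the paper's) enumerates all 2^D configurations and verifies that the Poisson-Binomial prior times the conditional Bernoulli likelihood reproduces the independent LR factorization:

```python
import itertools
import math

def lr_equals_ncok(theta):
    """Check eq. (2): with p(k; theta) = Z_k(theta)/Z(theta), the joint
    p(k)p(y|k) matches prod_d exp(theta_d y_d) / (1 + exp(theta_d))."""
    D = len(theta)
    Zk = [0.0] * (D + 1)                       # Z_k(theta) by enumeration
    for y in itertools.product([0, 1], repeat=D):
        Zk[sum(y)] += math.exp(sum(t * yd for t, yd in zip(theta, y)))
    Z = sum(Zk)                                # Z(theta) = sum_k Z_k(theta)
    for y in itertools.product([0, 1], repeat=D):
        k = sum(y)
        w = math.exp(sum(t * yd for t, yd in zip(theta, y)))
        joint = (Zk[k] / Z) * (w / Zk[k])      # p(k; theta) * p(y | k; theta)
        lr = 1.0
        for t, yd in zip(theta, y):            # independent LR factorization
            lr *= math.exp(t * yd) / (1.0 + math.exp(t))
        assert abs(joint - lr) < 1e-12
    return True
```

The cancellation of Z_k(θ) in `joint` mirrors the middle step of eq. (2); the check passes because Z(θ) = Π_d (1 + exp{θ_d}).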
The partial derivatives take a standard log-sum-exp form, requiring expectations E_{p(y_d | k = Σ_{d′} y_{d′})}[y_d]. A naive computation of this expectation would require summing over (D choose k), k = Σ_d y_d, configurations. However, there are more efficient alternatives: the dynamic programming algorithms developed in the context of Poisson-Binomial distributions are applicable; e.g., the algorithm from [3] runs in O(Dk) time. The basic idea is to compute partial sums along a chain that lays out the variables y_d in sequence. An alternative formulation of the dynamic program [4] can be made to yield an O(D log² D) algorithm by using a divide-and-conquer algorithm that employs Fast Fourier Transforms (FFTs). These algorithms are quite general and can also be used to compute Z_k values, incorporate prior distributions over count values, and draw a sample of y values conditional upon some k for the same computational cost [5]. We use the FFT tree algorithm from [5] throughout, because it is most flexible and has the best worst-case complexity.

2.2 Test-time Inference

Having learned a model, we would like to make test-time predictions. In Section 4.2, we will show that optimal decision-theoretic predictions (i.e., that maximize expected gain) can be made in several settings by a simple sorting procedure, and this will be our primary way of using the learned model. However, here, we consider the task of producing a distribution over labels y, given θ(x). To draw a joint sample of y values, we can begin by drawing k from p(k), then, conditional on that k, use the dynamic programming algorithm to draw a sample conditional on k. To compute marginals, a simple strategy is to loop over each value of k and run dynamic programming conditioned on k, and then average the results weighted by the respective prior. For priors that only give support to a small number of k values, this is quite efficient.
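The chain-style dynamic program mentioned above can be sketched as follows. This is our reconstruction of the general idea (in the style of [3]), not the FFT tree algorithm the authors actually use; function names are ours. The forward table F[d][j] accumulates the total exponentiated weight of assignments to the first d variables with j ones, so F[D][k] = Z_k(θ), and a forward/backward combination gives the conditional marginals without enumerating configurations:

```python
import itertools
import math

def chain_partial_sums(theta):
    """F[d][j] = sum over assignments to the first d variables with j ones
    of exp(sum of their thetas); F[len(theta)][k] equals Z_k(theta)."""
    D = len(theta)
    F = [[0.0] * (D + 1) for _ in range(D + 1)]
    F[0][0] = 1.0
    for d in range(1, D + 1):
        w = math.exp(theta[d - 1])
        for j in range(d + 1):
            F[d][j] = F[d - 1][j] + (w * F[d - 1][j - 1] if j > 0 else 0.0)
    return F

def conditional_marginal(theta, k, d):
    """p(y_d = 1 | sum_d' y_d' = k): combine prefix sums over theta[:d] with
    suffix sums over theta[d+1:], avoiding explicit enumeration."""
    D = len(theta)
    Zk = chain_partial_sums(theta)[D][k]
    F = chain_partial_sums(theta[:d])        # prefix partial sums
    B = chain_partial_sums(theta[d + 1:])    # suffix partial sums
    m = D - d - 1
    num = 0.0
    for j in range(k):                       # j ones placed in the prefix
        if j <= d and 0 <= k - 1 - j <= m:
            num += F[d][j] * B[m][k - 1 - j]
    return math.exp(theta[d]) * num / Zk
```

A useful sanity check is the exact identity Σ_d p(y_d = 1 | k) = k, which holds for any θ.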
An alternative approach is to draw several samples of k from p(k), then for each sampled value, run dynamic programming to compute marginals. Averaging these marginals can then be seen as a Rao-Blackwellized estimate. Finally, it is possible to compute exact marginals for arbitrary p(k) in a single run of an O(D log² D) dynamic programming algorithm, but the simpler strategies were sufficient for our needs here, so we do not pursue that direction further.

3 Ordinal n-Choose-k Model

An extension of the binary n-choose-k model can be developed in the case of ordinal data, where we assume that labels y can take on one of R categorical labels, and where there is an inherent ordering to labels R > R − 1 > . . . > 1; each label represents a relevance label in a learning-to-rank setting. Let k_r represent the number of variables y that take on label r and define k = (k_R, . . . , k_1). The idea in the ordinal case is to define a joint model over count variables k, then to reduce the conditional distribution p(y | k) to a series of binary models. The generative model is defined as follows: • Initialize all variables y to be unlabeled. • Sample k_R, . . . , k_1 jointly from p(k). • Repeat for r = R to 1: – Choose a set c_r of k_r unlabeled variables y_{≤r} and assign them relevance label r. Choose subsets with probability

p(y_{≤r,c_r} = 1, y_{≤r,c̄_r} = 0 | k_r) = exp{Σ_{d∈c_r} θ_d} / Z_{r,k_r}(θ, y_{≤r}) if |c_r| = k_r, and 0 otherwise,   (5)

where we use the notation y_{≤r} to represent all variables that are given a relevance label less than or equal to r. Z_{r,k_r} is similar to the normalization constant Z_k that appears in the binary model, but it is restricted to sum over y_{≤r} instead of the full y: Z_{r,k_r}(θ, y_{≤r}) = Σ_{y_{≤r} : (Σ_d 1{y_d = r}) = k_r} exp{Σ_d θ_d · 1{y_d = r}}. Note that if R = D and p(k) specifies that k_r = 1 for all r, then this process defines a Plackett-Luce (PL) [6, 7, 8] ranking model.
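The ordinal generative process above can be sketched as a small sampler. This is an illustrative brute-force version (our code; subsets are enumerated explicitly, so it is only suitable for small D, unlike the dynamic-programming samplers the paper uses):

```python
import itertools
import math
import random

def sample_ordinal_labels(theta, counts, rng):
    """Draw one y from the ordinal generative process: for r = R down to 1,
    choose counts[r] of the still-unlabeled items, a subset c_r being picked
    with probability proportional to exp(sum_{d in c_r} theta_d)."""
    remaining = list(range(len(theta)))
    y = {}
    for r in sorted(counts, reverse=True):     # highest relevance level first
        subsets = list(itertools.combinations(remaining, counts[r]))
        weights = [math.exp(sum(theta[d] for d in s)) for s in subsets]
        chosen = rng.choices(subsets, weights=weights, k=1)[0]
        for d in chosen:
            y[d] = r
        remaining = [d for d in remaining if d not in chosen]
    return [y[d] for d in range(len(theta))]
```

When every count equals 1 and R = D, each round picks a single item with probability proportional to exp{θ_d}, recovering the Plackett-Luce sampling scheme noted above.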
One interpretation of this model is as a “group” PL model, where instead of drawing individual elements in the generative process, groups of elements are drawn simultaneously. In this work, we focus on ranking with weak labels (R < D), which is more restrictive than modeling distributions over permutations [9], where learning would require marginalizing over all possible permutations consistent with the given labels. In this setting, inference in the ordinal n-choose-k model is both exact and efficient.

3.1 Maximum Likelihood Learning

Let k_r = Σ_d 1{y_d = r}. The log likelihood of parameters θ can be written as follows:

log Σ_{k∈K} p(k) p(y | k; θ) = Σ_{r=1}^{R} [ Σ_{d : y_d = r} θ_d − log Z_{r,k_r}(θ, y_{≤r}) ] + κ.   (6)

Here, we see that learning decomposes into the sum of R objectives that are of the same form as arise in the binary n-choose-k model. As before, the only non-trivial part of the gradient computation comes from the log-sum-exp term, but the required expectations that arise can be efficiently computed using dynamic programming. In this case, R − 1 calls are required.

3.2 Test-time Inference

The test-time inference procedure in the ordinal model is similar to the binary case. Brute force enumeration over k becomes exponentially more expensive as R grows, but for some priors where p(k) has sparse support, this may be feasible. To draw samples of y, the main requirement is the ability to draw a joint sample of k from p(k). In the case that p(k) is a simple distribution such as a multinomial, this can be done easily. It is also possible to efficiently draw a joint sample if the distribution over k takes the form p(k) = 1{Σ_r k_r = D} · Π_r p(k_r). That is, there is an arbitrary but independent prior over each k_r value, along with a single constraint that the chosen k_r values sum to exactly D. Given a sample of k, it is straightforward to sample y using R calls to dynamic programming. To do so, begin by using the binary algorithm to sample k_R variables to take on value R.
Then remove the chosen variables from the set of possible variables, and sample k_{R−1} variables to take on value R − 1. Repeat until all variables have been assigned a value. An alternative to producing marginal probabilities at test time is trying to optimize performance under a task-specific evaluation measure. The main motivation for the ordinal model is the learning-to-rank problem [10], so our main interest is in methods that do well under the task-specific evaluation measures that arise in the ranking task. In Section 4.2, we show that we can make exact optimal decision-theoretic test-time predictions under the learning-to-rank gain functions without the need for sampling.

4 Incorporating Gain

4.1 Training to Maximize Expected Top-K Classification Gain

One of the motivating applications for this model is the top-K classification (TKC) task. We formulate this task using a gain function, parameterized by a value K and a “scoring vector” t, which is assumed to be of the same dimension as y. The gain function stipulates that K elements of y are chosen (assigning a score of zero if some other number is chosen), and assigns reward for choosing each element of y based on t. Specifically, the gain function is defined as follows:

G_K(y, t) = Σ_d y_d t_d if Σ_d y_d = K, and 0 otherwise.   (7)

The same gain can be used for Precision@K, in which case the number of nonzero values in t is unrestricted. Here, we focus on the case where t is binary with a single nonzero entry at index d∗. An interesting issue is what gain function should be used to train a model when the test-time evaluation metric is TKC, or Precision@K. Maximum likelihood training for TKC in this case of a single target class could correspond to a version of our n-choose-k model in which p(k) is a spike at k = 1; note that in this case the n-choose-k model is equivalent to a softmax over the output classes. An alternative is to train using the same gain function used at test-time.
Here, we consider incorporating the TKC gain at training time for binary t with one nonzero entry, training the model to maximize expected gain. Specifically, the objective is the following:

E_p[G_K(y, t)] = Σ_k Σ_y p(k) p(y | k) 1{Σ_d y_d = K} Σ_d y_d t_d = Σ_y p(K) p(y | K) y_{d∗}.   (8)

It becomes clear that this objective is equivalent to the marginal probability of y_{d∗} under a prior distribution that places all its mass on k = K. In Section 5.3, we empirically investigate training under expected gain versus training under maximum likelihood.

4.2 Optimal Decision-theoretic Predictions for Monotonic Gain Functions

We now turn attention to gain functions defined on rankings of items. Letting π be a permutation, we define a “monotonic” gain function as follows: Definition 1. A gain function G(π, r) is a monotonic ranking gain if: • It can be expressed as Σ_{d=1}^{D} α_d f(r_{π_d}), where α_d is a weighting (or discount) term, and π_d is the index of the item ranked in position d, • α_d ≥ α_{d+1} ≥ 0 for all d, and • f(r) ≥ f(r − 1) ≥ 0 for all r. It is straightforward to see that popular learning-to-rank scoring functions like normalized discounted cumulative gain (NDCG) and Precision@K are monotonic ranking gains. NDCG(π, r) ∝ Σ_d (2^{r_{π_d}} − 1) / log₂(1 + d), so set α_d = κ · 1/log₂(1 + d) and f(r) = 2^r − 1. We define the Precision@K gain to count the documents in the top K produced ranks that have label R: P@K(π, r) = Σ_d 1{d ≤ K} 1{r_{π_d} = R}, so set α_d = 1{d ≤ K} and f(r) = 1{r = R}. The expected gain under a monotonic ranking gain and an ordinal n-choose-k model is

E_p[G(π)] = Σ_{y′∈Y} p(y′) Σ_{d=1}^{D} α_d f(y′_{π_d}) = Σ_{d=1}^{D} α_d Σ_{y′_{π_d}=1}^{R} f(y′_{π_d}) p(y_{π_d} = y′_{π_d}) = Σ_{d=1}^{D} α_d g_{π_d},   (9)

where we have defined g_d = Σ_{r=1}^{R} f(r) p(y_d = r). We now state four propositions and a lemma. The proofs of the propositions mostly result from algebraic manipulation, so we leave them to the supplementary materials. The main theorem will be proved afterwards. Proposition 1. If θ_i ≥ θ_j, then p(y_i = R) ≥ p(y_j = R). Proposition 2.
If θ_i ≥ θ_j and p(y_i ≥ r) ≥ p(y_j ≥ r), then p(y_i ≥ r − 1) ≥ p(y_j ≥ r − 1). Lemma 1. If θ_i ≥ θ_j, then for all r, p(y_i ≥ r) ≥ p(y_j ≥ r). Proof. By induction. Proposition 1 is the base case, and Proposition 2 is the inductive step. Proposition 3. If θ_i ≥ θ_j and f is defined as in Definition 1, then g_i ≥ g_j. Proposition 4. Consider two pairs of non-negative real numbers a_i, a_j and b_i, b_j where a_i ≥ a_j and b_i ≥ b_j. It follows that a_i b_i + a_j b_j ≥ a_i b_j + a_j b_i. Theorem 1. Under an ordinal n-choose-k model, the optimal decision-theoretic predictions for a monotonic ranking gain are made by sorting the θ values.

Figure 1: Four example images from the embedded MNIST dataset test set, along with the Poisson-Binomial distribution produced by logistic regression for each image. The area marked in red has zero probability under the data distribution, but the logistic regression model is not flexible enough to model it.

Proof. Without loss of generality, assume that we are given a vector α corresponding to placing the α's in descending order and a vector g_π, where π is some arbitrary ordering of the g's. The goal now is to find the ordering π∗ that maximizes the objective given in (9), which is equivalently expressed as the inner product αᵀg_π. Assume that we are given an ordering π̂ in which for at least one pair i, j with i < j we have θ_{π̂_i} < θ_{π̂_j}. Furthermore, assume that this ordering is optimal; that is, π̂ = π∗. By Proposition 3 we have that g_{π̂_i} < g_{π̂_j}. The contribution of these elements to the overall objective is α_i g_{π̂_i} + α_j g_{π̂_j}. By Proposition 4 we improve the objective by swapping π̂_i and π̂_j, contradicting the assumption that π̂ is optimal. If multiple elements are not in sorted order, then we can repeat this argument, considering pairs of elements until the whole vector is sorted.
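The sorting claim of Theorem 1 is easy to check numerically: for decreasing discounts α, the gain Σ_d α_d g_{π_d} over all D! permutations is maximized by placing the g values in decreasing order (an instance of the rearrangement inequality). This small brute-force check is ours, purely for illustration:

```python
import itertools
import math

def exhaustive_best(alpha, g):
    """Maximize sum_d alpha_d * g_{pi_d} over all permutations pi (brute force)."""
    return max(sum(a * g[p] for a, p in zip(alpha, perm))
               for perm in itertools.permutations(range(len(g))))

def gain_by_sorting(alpha, g):
    """Gain obtained by ranking items in decreasing order of g, i.e. of theta,
    since g is monotone in theta by Proposition 3."""
    return sum(a * gd for a, gd in zip(alpha, sorted(g, reverse=True)))
```

With NDCG-style discounts α_d = 1/log₂(1 + d), the exhaustive maximum and the sorted ranking agree to machine precision for any non-negative g.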
5 Experiments

5.1 Modeling Varying Numbers of Objects

Our first experiment explores an issue that arises frequently in computer vision, where there are an unknown number of objects in an image, but the number is highly structured. We developed a multiple-image dataset that simulates this scenario.¹ To generate an image, we uniformly sample a count between 1 and 4, and then take that number of digit instances (with at most one instance per digit class) from the MNIST dataset and embed them in a 60 × 60 image. The x, y locations are chosen from a 4 × 4 uniformly spaced grid, and then a small amount of jitter is added. We generated 10,000 images each for the training, test, and validation sets. The goal is to predict the set of digits that appear in a given image. Examples can be seen in Figure 1. We train a binary n-choose-k model on this dataset. The inputs to the model are features learned from the images by a standard Restricted Boltzmann Machine with 1000 hidden units. As a baseline, we trained a logistic regression classifier on the features and achieved a test-set negative log-likelihood (NLL) of 2.84. Ideally, a model should learn that there are never more than four digits in any image. In Figure 1, we show four test images, and the Poisson-Binomial distribution over counts that arises from the logistic regression model. Marked in red are regions where there is zero probability of the count value in the data distribution. Here it is clear that the implicit count prior in LR is not powerful enough to model this data. As a comparison, we trained a binary n-choose-k model where we explicitly parameterize and learn an input-dependent prior. The model learns the correct distribution over counts and achieves a test-set NLL of 1.95. We show a visualization of the learned likelihood and prior parameters in the supplementary material.

¹ http://www.cs.toronto.edu/~kswersky/data/

5.2 Ranking

A second set of experiments considers learning-to-rank applications of the n-choose-k model.
We report on comparisons to other ranking approaches, using seven datasets associated with the LETOR 3.0 benchmark [10]. Following the standard LETOR procedures, we trained over five folds, each with distinct training, validation, and testing splits. For each dataset, we train an ordinal n-choose-k model to maximize the likelihood of the data, where each training example consists of a number of items, each assigned a particular relevance level; the number of levels ranges from 2 to 4 across the datasets. At test time, we produce a ranking, which as shown in Section 4.2 is the optimal decision-theoretic prediction under a ranking gain function, by simply sorting the items for each test query based on their θ score values.

[Figure 2: Ranking results on two datasets from LETOR 3.0: (a) TD 2003, (b) NP 2004. Each panel plots NDCG@K against the NDCG truncation level K (1–10) for Ordinal nCk, AdaRank-NDCG, FRank, ListNet, RankBoost, RankSVM, Regression-Reg, and SmoothRank. Results for the other 5 datasets, along with Precision@K results, appear in the supplementary material.]

Note that this is a very simple ranking model, in that the score assigned to each test item by the model is a linear function of the input features, and the only hyperparameter to tune is an ℓ2 regularization strength. Results for two of the datasets are shown in Figure 2 (the first is our best relative performance, the second is typical); the full set of results is in the supplementary material. Several publicly available baselines are shown for comparison. As can be seen in the graphs, our approach is competitive with the state-of-the-art on all datasets, and substantially outperforms all baselines on the TD 2003 dataset. Note that the performance of the baseline methods is quite variable, and it appears that overfitting is an issue on these datasets, even for linear models.
We hypothesize that proper probabilistic incorporation of weak labels helps to mitigate this effect to some degree.

5.3 Top-K Classification

Our third and final set of experiments concerns top-K classification, an important task that has gained considerable attention recently in the ImageNet Challenge.² Here we consider a task analogous to that in the ImageNet Challenge, in which each image contains a single object label, but a model is allowed to return up to K class predictions per image. A classification is deemed correct if the appropriate class is one of the K returned classes. We train binary n-choose-k models, experimenting with different training protocols that directly maximize expected gain under the model, as described in Section 4.1. That is, we train on the expected top-K gain for different values of K. Note that top-1 is equivalent to softmax regression. For each model/evaluation criterion combination, we find the ℓ2 penalty that gives the highest validation accuracy; the corresponding test-set results are shown in Table 1. For comparison, we also include logistic regression, where each output is conditionally independent. We experimented on the embedded MNIST dataset where all but one label from each example was randomly removed, and on the Caltech-101 Silhouettes dataset [11], which consists of images of binarized silhouettes from 101 different categories. In both datasets we trained the models using the pixels as inputs. We noticed that the optimal ℓ2 strength chosen by each method was quite high, suggesting that overfitting is an issue in these datasets. When the ℓ2 strength is low, the difference between the objectives becomes more apparent. On Caltech it is clear that training for the expected gain improves the corresponding test accuracy in this regime. On the embedded MNIST dataset, when the ℓ2 strength is low there is a surprising result: the top-3 and top-5 criteria outperform top-1, even when top-1 is used as the evaluation measure.
Since there are several digits actually present in the ground truth, there is no real signal in the data that differentiates the digit labeled as the target from the other, equally valid “distractor” digits. In order to satisfy the top-1 objective for the given target, the learning algorithm is forced to find some arbitrary criterion by which to cause the given target to be preferred over the distractors, which is harmful for generalization purposes. This scenario does occur in datasets like ImageNet, where multiple objects can be present in a single image. It would be interesting to repeat these experiments on more challenging, large-scale datasets, but we leave this for future work.

² http://www.image-net.org/challenges/LSVRC/2011/

(a) Caltech Sil., strong ℓ2   Top 1 / Top 3 / Top 5
LR                            0.606 / 0.785 / 0.812
Top 1                         0.621 / 0.796 / 0.831
Top 3                         0.614 / 0.792 / 0.834
Top 5                         0.602 / 0.787 / 0.834

(b) Caltech Sil., weak ℓ2     Top 1 / Top 3 / Top 5
LR                            0.545 / 0.716 / 0.766
Top 1                         0.574 / 0.755 / 0.804
Top 3                         0.558 / 0.771 / 0.813
Top 5                         0.523 / 0.767 / 0.823

(c) EMNIST, strong ℓ2         Top 1 / Top 3 / Top 5
LR                            0.346 / 0.647 / 0.815
Top 1                         0.353 / 0.659 / 0.820
Top 3                         0.353 / 0.671 / 0.834
Top 5                         0.330 / 0.659 / 0.824

(d) EMNIST, weak ℓ2           Top 1 / Top 3 / Top 5
LR                            0.263 / 0.557 / 0.742
Top 1                         0.268 / 0.569 / 0.757
Top 3                         0.318 / 0.637 / 0.815
Top 5                         0.313 / 0.642 / 0.822

Table 1: Top-K classification results when various models are trained using an expected top-K gain and then tested using some possibly different top-K criterion. The rows correspond to training criteria, and the columns correspond to test criteria. (a) and (c) show the test accuracy when a strong ℓ2 regularizer is used, while (b) and (d) use a relatively weaker regularizer. Logistic regression is included for comparison.

6 Related Work

Our work here is related to many different areas; we cannot hope to survey all related work in multi-label classification and ranking. Instead, we focus on work related to the main novelty in this paper, the explicit modeling of structure on label counts.
That is, given that we have prior knowledge of label count structure, or are modeling a domain that exhibits such structure, the question is how the structure can be leveraged to improve a model. The first and most direct approach is the one that we take here: explicitly model the count structure within the model. There are other alternative approaches that are similar in this respect. The work of [12] considers MAP inference in the context of cardinality-based models and develops applications to named entity recognition tasks. Similarly, [13] develops an example application where a cardinality-based term constrains the number of pixels that take on the label “foreground” in a foreground/background image segmentation task. [14] develops models that include a penalty in the energy function for using more labels, which can be seen as a restricted form of structure over label cardinalities. An alternative way of incorporating structure over counts into a model is via the gain function. The work of Joachims [15] can be seen in this light: the training objective is formulated so as to optimize performance on evaluation measures that include Precision@K. A different approach to including count information in the gain function comes from [16], which trains an image segmentation model so as to match count statistics present in the ground truth data. Finally, there are other approaches that do not neatly fall into either category, such as the posterior regularization framework of [17] and related works such as [18]. There, structure, including structure that encodes prior knowledge about counts, such as there being at least one verb in most sentences, is added as a regularization term that is used both during learning and during inference. Overall, the main difference between our work and these others is that we work in a proper probabilistic framework, maximizing likelihood, maximizing expected gain, and/or making proper decision-theoretic predictions at test time.
Importantly, there is no significant penalty for assuming the proper probabilistic approach: learning is exact, and test-time prediction is efficient.

7 Discussion

We have presented a flexible probabilistic model for multiple output variables that explicitly models structure in the number of variables taking on specific values. The model is simple, efficient, easy to learn due to its convex objective, and widely applicable. Our theoretical contribution provides a link between this type of ordinal model and ranking problems, bridging the gap between the two tasks, and allowing the same model to be effective for several quite different problems. Finally, there are many extensions. More powerful models of θ can be put into the formulation, and gradients can easily be back-propagated. Also, while we chose to take a maximum likelihood approach in this paper, the model is well suited to fully Bayesian inference using, e.g., slice sampling. The unimodal posterior distribution should lead to good behavior of the sampler. Beyond these extensions, we believe the framework here to be a valuable modeling building block that has broad application to problems in machine learning.

8 References

[1] S. X. Chen and J. S. Liu. Statistical applications of the Poisson-Binomial and conditional Bernoulli distributions. Statistica Sinica, 7(4), 1997.
[2] X. H. Chen, A. P. Dempster, and J. S. Liu. Weighted finite population sampling to maximize entropy. Biometrika, 81(3):457–469, 1994.
[3] M. H. Gail, J. H. Lubin, and L. V. Rubinstein. Likelihood calculations for matched case-control studies and survival studies with tied death times. Biometrika, 68:703–707, 1981.
[4] L. Belfore. An O(n) log2(n) algorithm for computing the reliability of k-out-of-n:G and k-to-l-out-of-n:G systems. IEEE Transactions on Reliability, 44(1), 1995.
[5] D. Tarlow, K. Swersky, R. Zemel, R. P. Adams, and B. Frey. Fast exact inference for recursive cardinality models. In Uncertainty in Artificial Intelligence, 2012.
[6] R. Plackett. The analysis of permutations. Applied Statistics, pages 193–202, 1975.
[7] R. D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[8] J. Guiver and E. Snelson. Bayesian inference for Plackett-Luce ranking models. In International Conference on Machine Learning, 2009.
[9] J. Huang, C. Guestrin, and L. Guibas. Efficient inference for distributions on permutations. In Advances in Neural Information Processing Systems, 2007.
[10] T. Qin, T. Y. Liu, J. Xu, and H. Li. LETOR: A benchmark collection for research on learning to rank for information retrieval. Information Retrieval Journal, 2010.
[11] B. Marlin, K. Swersky, B. Chen, and N. de Freitas. Inductive principles for restricted Boltzmann machine learning. In Artificial Intelligence and Statistics, 2010.
[12] R. Gupta, A. Diwan, and S. Sarawagi. Efficient inference with cardinality-based clique potentials. In International Conference on Machine Learning, 2007.
[13] D. Tarlow, I. Givoni, and R. Zemel. HOP-MAP: Efficient message passing for high order potentials. In Artificial Intelligence and Statistics, 2010.
[14] A. Delong, A. Osokin, H. N. Isack, and Y. Boykov. Fast approximate energy minimization with label costs. International Journal of Computer Vision, 96(1):1–27, 2012.
[15] T. Joachims. A support vector method for multivariate performance measures. In International Conference on Machine Learning, 2005.
[16] P. Pletscher and P. Kohli. Learning low-order models for enforcing high-order statistics. In Artificial Intelligence and Statistics, 2012.
[17] K. Ganchev, J. Graça, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11:2001–2049, 2010.
[18] G. Mann and A. McCallum. Generalized expectation criteria with application to semi-supervised classification and sequence modeling. Journal of Machine Learning Research, 11:955–984, 2010.
2012
Delay Compensation with Dynamical Synapses

C. C. Alan Fung, K. Y. Michael Wong
Hong Kong University of Science and Technology, Hong Kong, China
alanfung@ust.hk, phkywong@ust.hk

Si Wu
State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
wusi@bnu.edu.cn

Abstract

Time delay is pervasive in neural information processing. To achieve real-time tracking, it is critical to compensate the transmission and processing delays in a neural system. In the present study we show that dynamical synapses with short-term depression can enhance the mobility of a continuous attractor network to the extent that the system tracks time-varying stimuli in a timely manner. The state of the network can either track the instantaneous position of a moving stimulus perfectly (with zero-lag) or lead it with an effectively constant time, in agreement with experiments on the head-direction systems in rodents. The parameter regions for delayed, perfect and anticipative tracking correspond to network states that are static, ready-to-move and spontaneously moving, respectively, demonstrating the strong correlation between tracking performance and the intrinsic dynamics of the network. We also find that when the speed of the stimulus coincides with the natural speed of the network state, the delay becomes effectively independent of the stimulus amplitude.

1 Introduction

Time delay is pervasive in neural information processing. It arises from the time for signals to transmit along the neural pathways, e.g., 50-80 ms for electrical signals to propagate from the retina to the primary visual cortex [13], and from the time for neurons to respond to inputs, which is on the order of 10-20 ms. Delay is also inevitable in neural information processing. For a neural system carrying out computations in the temporal domain, such as speech recognition and motor control, input information needs to be integrated over time, which necessarily incurs delays.
To achieve real-time tracking of fast moving objects, it is critical for a neural system to compensate for the delay; otherwise, the object position perceived by the neural system will lag behind the true object position considerably. A natural way to compensate for delays is to predict the future position of the moving stimulus. Experimental findings suggest that delay compensation is widely adopted in neural systems. A remarkable example is the head-direction (HD) system in rodents, which encodes the head direction of a rodent in the horizontal plane relative to a static environment [14, 17]. It was found that when the head of a rodent is moving continuously in space, the direction perceived by the HD neurons in the postsubicular cortex has nearly zero-lag with respect to the instantaneous position of the rodent head [18]. More interestingly, in the anterior dorsal thalamic nucleus, the HD neurons perceive the future direction of the rodent head, leading the current position by a constant time [3]. A similar anticipative behavior is also observed in eye-position neurons when animals make saccadic eye movements, the so-called saccadic remapping [16]. In human psychophysical experiments, the classic flash-lag effect also supports the notion of delay compensation [12]. In the experiment, a flash is perceived to lag behind a moving object, even though they are physically aligned.

Figure 1: (a) Profiles of u(x, t) and Iext(x, t) in the absence of STD, where the center of mass of the stimulus is moving with constant velocity v = 0.02a/τs. As shown, the profile of u(x, t) is almost Gaussian. (b) The centers of mass of u(x, t) and Iext(x, t) as functions of time. Parameters: ρ = 128/2π, a = 0.5, J0 = √(2π)a and ρJ0A = 1.0.
The underlying cause is that the visual system predicts the future position of the continuously moving object, but is unable to do so for the unpredictable flash. Depending on the available information, the brain may employ different strategies for delay compensation. In the case of self-motion, such as an animal rotating its head actively or performing saccadic eye movements, the motor command responsible for the motion can serve as a cue for delay compensation. It was suggested that an efference copy of the motor command, called corollary discharge, is sent to the corresponding internal representation system prior to the motion [18]. For head rotation, the advanced time can be up to 20 ms; for saccadic eye movements, the advanced time is about 70 ms. In the case of tracking an external moving stimulus, the neural system has to rely on the moving speed of the stimulus for prediction. Asymmetric neural interactions have been proposed to drive the network states to catch up with changes in head directions [22] or positions [4]. These may be achieved by the so-called conjunctive cells projecting neural signals between successive modules in forward directions [10]. To explain the flash-lag effect, Nijhawan et al. proposed a dynamical routing mechanism to compensate the transmission delay in the visual system, in which retinal neurons dynamically choose a pathway according to the speed of the stimulus, and transmit the signal directly to the future position in the cortex [13]. In this study we propose a novel mechanism of how a neural system compensates for the processing delay. By the processing delay, we mean the time consumed by a neural system in responding to external inputs. The proposed mechanism does not require corollary discharge, efforts of choosing signal pathways, or specific network structures such as asymmetric interactions or conjunctive cells.
It is based on the short-term depression (STD) of synapses, the inherent and ubiquitous property whereby the synaptic efficacy of a neuron is reduced after firing due to the depletion of neurotransmitters [11]. It has been found that STD enhances the mobility of the states of neural networks [21, 9, 6]. The underlying mechanism is that neurotransmitters become depleted in the active region of the network states compared with the neighboring regions, thus increasing the likelihood of the locally active network state shifting to its neighboring positions when it is tracking a continuously shifting stimulus. When STD is sufficiently strong, the tracking state of the network can even overtake the moving stimulus, demonstrating its potential for generating predictions.

2 The Model

We consider continuous attractor neural networks (CANNs) as the internal representation models for continuous stimuli [7, 2, 15]. A CANN holds a continuous family of bump-shaped stationary states, which form a subspace in which the neural system is neutrally stable [20]. This property endows the neural system with the capacity of tracking time-varying stimuli smoothly. Consider a continuous stimulus x being encoded by a neural ensemble. The variable x may represent the orientation, the head direction, or the spatial location of an object. Neurons with preferred stimulus x produce the maximum response when an external stimulus is present at x. Their preferred stimuli are uniformly distributed in the space −∞ < x < ∞. In the continuum limit, the dynamics of the neural ensemble can be described by a CANN. We denote as u(x, t) the population-averaged synaptic current to the neurons at position x and time t. The dynamics of u(x, t) is determined by the external input, the lateral interactions among the neurons, and its relaxation towards zero response.
It is given by

    τs ∂u(x, t)/∂t = Iext(x, t) + ρ ∫ dx′ J(x, x′) p(x′, t) r(x′, t) − u(x, t),   (1)

where τs is the synaptic time constant, which is typically in the order of 1 to 5 ms, Iext(x, t) the external input, ρ the density of neurons, J(x, x′) the coupling between neurons at x and x′, and r(x, t) the firing rate of the neurons. The variable p(x, t) represents the fraction of available neurotransmitters, which evolves according to [6, 19]

    τd ∂p(x, t)/∂t = 1 − p(x, t) − τd β p(x, t) r(x, t),   (2)

where τd is the STD time scale, which is typically of the order of 10² ms. In this work, we choose τd = 50τs. The STD effect is controlled by the parameter β, which can be considered as the fraction of total neurotransmitters consumed per spike. The actual forms of J(x, x′) and r(x, t) depend on the details of the neural dynamics. Here, for the convenience of analysis, we choose them to be

    J(x, x′) = [J0/(a√(2π))] exp[−(x − x′)²/(2a²)],   (3)

    r(x, t) = Θ[u(x, t)] u(x, t)² / [1 + kρ ∫ dx′ u(x′, t)²],   (4)

where J0 and a control the magnitude and range of the neuronal excitatory interactions, respectively. J(x, x′) is translationally invariant in the space x, since it is a function of (x − x′), which is essential for the network state to be neutrally stable. In the expression for the firing rate, Θ is the step function. Here, the stabilizing effect of inhibitory interactions is achieved by the divisive normalization operation in Eq. (4). Let us consider first the case without STD by setting β = 0. Hence, p(x, t) = 1 in Eq. (1). For k ≤ kc ≡ ρJ0²/(8√(2π)a), the network holds a continuous family of Gaussian-shaped stationary states when Iext(x, t) = 0. These stationary states are

    ¯u(x) = ¯u0 exp[−(x − z)²/(4a²)],   (5)

where ¯u is the rescaled variable ¯u ≡ ρJ0u, and ¯u0 is the rescaled bump height. The parameter z, i.e., the center of the bump, is a free parameter, implying that the stationary state of the network can be located anywhere in the space x.
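The dynamics of Eqs. (1)-(4) can be simulated directly. The sketch below is our own illustration, not the authors' code: it discretizes the model on a periodic grid with forward-Euler steps, and the grid size, time step and all parameter values are assumptions chosen for a quick run.

```python
import numpy as np

# Minimal forward-Euler sketch of Eqs. (1)-(4) on a periodic 1-D grid.
# Illustrative assumptions throughout: grid size, time step, parameter values.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
rho = N / (2 * np.pi)                      # neuron density
a = 0.5                                    # interaction range
J0 = np.sqrt(2 * np.pi) * a                # coupling magnitude (as in Fig. 1 caption)
k = 0.5 * rho * J0**2 / (8 * np.sqrt(2 * np.pi) * a)   # inhibition, half of k_c
tau_s, tau_d, beta = 1.0, 50.0, 0.005      # tau_d = 50 tau_s, weak STD
A, dt = 1.0, 0.05

# Gaussian coupling J(x, x'), Eq. (3), with the distance wrapped onto the ring
d = (x[:, None] - x[None, :] + np.pi) % (2 * np.pi) - np.pi
J = J0 / (a * np.sqrt(2 * np.pi)) * np.exp(-d**2 / (2 * a**2))

u = 0.1 * np.exp(-x**2 / (4 * a**2))       # small initial bump
p = np.ones(N)                             # all neurotransmitters available

for _ in range(2000):
    I_ext = A * np.exp(-x**2 / (4 * a**2))                 # static input at x = 0
    up = np.maximum(u, 0.0)
    r = up**2 / (1 + k * rho * np.sum(up**2) * dx)         # Eq. (4)
    du = (I_ext + rho * (J @ (p * r)) * dx - u) / tau_s    # Eq. (1)
    dp = (1 - p) / tau_d - beta * p * r                    # Eq. (2)
    u, p = u + dt * du, p + dt * dp

# After ~100 tau_s a stable bump has formed around the input position, and
# STD has pulled p slightly below 1 at the bump center.
```

Running longer, or replacing the static input by the moving one of Eq. (6), reproduces the tracking setting discussed in the rest of the paper.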
Next, we consider the case that the network receives a moving input,

    Iext(x, t) = A exp[−(x − z0(t))²/(4a²)],   (6)

where A is the magnitude of the input and z0 the stimulus position. Without loss of generality, we consider the stimulus position at time t = 0 to be z0 = 0, and the stimulus moves at a constant speed thereafter, i.e., z0 = vt for t ≥ 0. Let s ≡ z(t) − z0(t) be the displacement between the network state and the stimulus position. It has been shown that without STD, the steady value of the displacement is determined by [5]

    v = −(As/τs) exp(−s²/(8a²)).   (7)

Note that s has the opposite sign of v, implying that the network state always trails behind the stimulus (see Fig. 1(a)). This is due to the response delay of the network relative to the input.

3 Tracking in the Presence of STD

The analysis of tracking in the presence of STD is more involved. Motivated by the nearly Gaussian-shaped profile of the network states, we adopt a perturbation approach to solve the network dynamics [5]. The key idea is to expand the network states as linear combinations of a set of orthonormal basis functions corresponding to different distortion modes of the bump, that is,

    u(x, t) = Σn un(t) ψn(x − z),   (8)
    1 − p(x, t) = Σn pn(t) ϕn(x − z),   (9)

where the basis functions are

    ψn(x − z) = [1/√(√(2π) a 2ⁿ n!)] Hn((x − z)/(√2 a)) exp[−(x − z)²/(4a²)],   (10)
    ϕn(x − z) = [1/√(√π a 2ⁿ n!)] Hn((x − z)/a) exp[−(x − z)²/(2a²)].   (11)

Here, Hn is the nth-order Hermite polynomial function. ψn(x − z) and ϕn(x − z) have clear physical meanings. For instance, for n = 1, 2, 3, 4, they correspond to, respectively, the height, the position, the width and the skewness changes of the Gaussian bump. Depending on the approximation precision, we can take the above expansions up to a proper order, and substitute them into Eqs. (1) and (2) to solve the network dynamics analytically. Results obtained from the 11th order perturbation are shown in Fig. 2(a) for three representative cases.
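The perturbation expansion relies on the ψn of Eq. (10) forming an orthonormal set. A quick numerical sanity check (our own, not from the paper), using NumPy's physicists' Hermite polynomials:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

# Verify numerically that psi_n of Eq. (10) are orthonormal:
# integral psi_m(x) psi_n(x) dx = delta_{mn}.  a is an illustrative value.
a = 0.5
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

def psi(n, x):
    coef = np.zeros(n + 1)
    coef[n] = 1.0                      # pick out H_n (physicists' convention)
    norm = sqrt(sqrt(2 * pi) * a * 2**n * factorial(n))
    return hermval(x / (sqrt(2) * a), coef) * np.exp(-x**2 / (4 * a**2)) / norm

# Gram matrix of the first four basis functions; it should be the identity
G = np.array([[np.sum(psi(m, x) * psi(n, x)) * dx for n in range(4)]
              for m in range(4)])
```

The Gaussian factor decays so fast that a simple Riemann sum on this grid already reproduces the identity matrix to high accuracy.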
They depend on the rescaled inhibition ¯k ≡ k/kc and the rescaled STD strength ¯β ≡ τdβ/(ρ²J0²). When STD is weak, the tracking state lags behind the stimulus. When the STD strength increases to a critical value ¯βperfect, s becomes effectively zero in a rather broad range of stimulus velocity, achieving perfect tracking. When the STD strength is above the critical value, the tracking state leads the stimulus. Hence delay compensation in a tracking task can be implemented at two different levels. The first one is perfect tracking, in which the tracking state has zero-lag with respect to the true stimulus position, independent of the stimulus speed. The second one is anticipative tracking, in which the tracking state leads the stimulus position by a constant time τant; that is, the tracking state is at the position the stimulus will travel to at a later time τant. To achieve a constant anticipation time, the leading displacement must increase proportionally with the stimulus velocity, i.e., s = vτant. Both forms of delay compensation have been observed in the head-direction systems of rodents, and may serve different functional purposes.

3.1 Perfect Tracking

To analyze the parameter regime for perfect tracking, it is instructive to consider the 1st order perturbation of the network dynamics, i.e.,

    u[x − z(t)] = u0(t) exp[−(x − z(t))²/(4a²)],   (12)

    p[x − z(t)] = 1 − p0(t) exp[−(x − z(t))²/(2a²)] + p1(t) [(x − z(t))/a] exp[−(x − z(t))²/(2a²)].   (13)

Figure 2: (a) The dependence of the displacement between the bump and the stimulus on the velocity of the moving stimulus for different values of ¯β. Parameters: ¯k = 0.4 and ¯A = 1.8. (b) The dependence of ¯βperfect on ¯k with ¯A = 1.0. Symbols: simulations.
Solid line: the predicted curve of ¯βperfect. Dashed line: the boundary separating the static and metastatic phases according to the 1st order perturbation [6]. Inset: the dependence of ¯βperfect on ¯A. Symbols: simulations. Lines: theoretical prediction according to the 1st order perturbation.

Substituting them into Eqs. (1) and (2) and utilizing the orthogonality of the basis functions, we get (see Supplementary Material)

    τs d¯u0/dt = [¯u0²/(B√2)] (1 − p0√(4/7)) − ¯u0 + ¯A exp[−(vt − z)²/(8a²)],   (14)

    (τs/2a) dz/dt = (¯u0/B)(2/7)^(3/2) p1 + (¯A/2¯u0)[(vt − z)/a] exp[−(vt − z)²/(8a²)],   (15)

    τs dp0/dt = (τs/τd)[(¯β¯u0²/B)(1 − p0√(2/3)) − p0] − (τsp1/2a) dz/dt,   (16)

    (τs/p0) dp1/dt = −(τs/τd)[1 + (¯β¯u0²/B)(2/3)^(3/2)] (p1/p0) + (τs/a) dz/dt.   (17)

At the steady state, d¯u0/dt = dp0/dt = dp1/dt = 0, and dz/dt = v. Furthermore, for a sufficiently small displacement, i.e., |s|/a ≪ 1, one can approximate ¯A exp[−(vt − z)²/(8a²)] ≈ ¯A and ¯A[(vt − z)/a] exp[−(vt − z)²/(8a²)] ≈ −¯As/a. Solving the above equations, we find that s/a can be expressed in terms of the variables ¯u0/¯A, τs/τd and vτd/a. When vτd/a ≪ 1, the rescaled displacement s/a can be approximated by a power series expansion of the rescaled velocity vτd/a. Since the displacement reverses sign when the velocity reverses, s/a is an odd function of vτd/a. This means that s/a ≈ c1(vτd/a) + c3(vτd/a)³. For perfect tracking in the low velocity limit, we have c1 = 0 and find

    s/a = −(C/2)(¯u0/¯A)(τs/τd)(vτd/a)³,   (18)

where C is a parameter less than 1 (the detailed expression can be found in Supplementary Material). For the network to track a moving stimulus, the input magnitude cannot be too small. This means that ¯u0/¯A is not a large number. Therefore, for tracking speeds up to vτd/a ∼ 1, the displacement s is very small and can be regarded as zero effectively (see Fig. 2(a)). The velocity range in which the tracking is effectively perfect is rather broad, since it scales as (τd/τs)^(1/3) ≫ 1. Equation (18) is valid when ¯β takes a particular value.
This yields an estimate of ¯βperfect in the 1st order perturbation. Its expression is derived in the Supplementary Material and plotted in Fig. 2(b). For reference, we also plot the boundary that separates the metastatic phase above it from the static phase below, as reported in the study of intrinsic properties of CANNs with STD in [6]. In the static phase, the bump is stable at any position, whereas in the metastatic phase, the static bump starts to move spontaneously once it is pushed. Hence we say that the phase boundary is in a ready-to-move state. Fig. 2(b) shows that ¯βperfect is just above the phase boundary. Indeed, when ¯A approaches 0, the expression of ¯βperfect reduces to the value of ¯β along the phase boundary for the 1st order perturbation. The inset of Fig. 2(b) confirms that ¯βperfect does not change significantly with ¯A for different values of ¯k. This implies that the network with ¯β = ¯βperfect exhibits effectively perfect tracking performance because it is intrinsically in a ready-to-move state.

Figure 3: (a) The anticipatory time as a function of the speed of the stimulus. Different sets of parameters may correspond to different levels of anticipatory behavior. Parameter: ¯k = 0.4. The numerical scales are estimated from parameters in [8]. (b) The contours of constant anticipatory time in the space of rescaled inhibition ¯k and the rescaled STD strength ¯β in the limit of very small stimulus speed. Dashed line: boundary separating the static and metastatic phases. Dotted line: boundary separating the existence and non-existence phases of bumps. Calculations are done using 11th order perturbation.
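Plugging illustrative numbers into Eq. (18) makes the smallness of the residual displacement concrete. The values of C and ¯u0/¯A below are assumptions (the text only states C < 1 and that ¯u0/¯A is of order one), with τs/τd = 1/50 as in the model:

```python
# Order-of-magnitude check of Eq. (18): s/a = -(C/2)(u0/A)(tau_s/tau_d)(v*tau_d/a)^3.
# C and u0_over_A are illustrative assumptions; ts_over_td = 1/50 as in the model.
C, u0_over_A, ts_over_td = 0.5, 2.0, 1.0 / 50.0

def s_over_a(v_rescaled):
    return -(C / 2.0) * u0_over_A * ts_over_td * v_rescaled**3

# Residual displacement for rescaled speeds v*tau_d/a from 0 to 1
vals = [abs(s_over_a(v / 10.0)) for v in range(11)]
```

With these numbers the magnitude of s/a never exceeds about 0.01 up to vτd/a = 1, which is why the tracking is "effectively perfect" over this broad speed range.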
3.2 Anticipative Tracking

We further explore the network dynamics when the STD strength is higher than that for achieving perfect tracking. By solving the network dynamics with the perturbation expansion up to the 11th order, we obtain the relation between the displacement s and the stimulus speed v. The solid curve in Fig. 2(a) shows that for strong STD, s increases linearly with v over a broad range of v. This implies that the network achieves a constant anticipatory time τant over a broad range of the stimulus speed. To gain insights into how the anticipation time depends on the stimulus speed, we consider the regime of small displacements. In this regime, the rescaled displacement s/a can be approximated by a power series expansion of the rescaled velocity vτd/a, leading to s/a = c1(vτd/a) + c3(vτd/a)³. The coefficients c1 and c3 are determined such that the anticipation time in the limit v = 0 should be τant(0) = s/v, and that s/a reaches a maximum when v = vmax. This yields the result

    s/a = [τant(0)/τd] [ (vτd/a) − (1/3)(a/(vmaxτd))² (vτd/a)³ ].   (19)

Hence the anticipatory time is given by

    τant(v) = τant(0) (1 − v²/(3v²max)).   (20)

This shows that the anticipation time is effectively constant over a wide range of stimulus velocities, as shown in Fig. 3(a). Even for v = 0.5vmax, the anticipation time is only reduced from its maximum by about 8%. The contours of anticipatory times for slowly moving stimuli are shown in Fig. 3(b). Hence the region of anticipative behavior effectively coincides with the metastatic phase, as indicated by the region above the phase line (dashed) in Fig. 2(b). In summary, there is a direct correspondence between delayed, perfect, and anticipative tracking on one hand, and the static, ready-to-move, and spontaneously moving behaviors on the other. This demonstrates the strong correlation between the tracking performance and the intrinsic behaviors of the CANN. We compare the prediction of the model with experimental data.
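Eq. (20) can be checked with two lines; the values of τant(0) and vmax below are arbitrary illustrative choices, not values from the paper:

```python
# Direct check of Eq. (20): tau_ant(v) = tau_ant(0) * (1 - v^2 / (3 v_max^2)).
# tau0 and v_max are arbitrary illustrative values.
def tau_ant(v, tau0=20.0, v_max=1.0):
    return tau0 * (1.0 - v**2 / (3.0 * v_max**2))

# Relative reduction of the anticipation time at half the maximal speed:
drop = 1.0 - tau_ant(0.5) / tau_ant(0.0)   # = 0.25/3, about 8.3%
```

The quadratic dependence on v is what keeps τant nearly flat at low speeds: the reduction scales as (v/vmax)²/3 and only becomes appreciable as v approaches vmax.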
In a typical HD experiment on rodents [8], τs = 1 ms, a = 28.5 degree/√2, and the anticipation time drops from 20 ms at v = 0 to 15 ms at v = 360 degree/s. Substituting into Eq. (19) and assuming τd = 50τs, these parameters yield a slope of 0.41 at the origin and the maximum lead at vmaxτd/a = 1.03. This result can be compared favorably with the curve of ¯β = 0.022 in Fig. 2(a), where the slope at the origin is 0.45 and the maximum lead is located at vmaxτd/a = 1.01. Based on these parameters, the lowest curve plotted in Fig. 3(a) is consistent with the real data in Fig. 4 of [8].

Figure 4: Confluence points at natural speeds. There are six curves in two groups with different sets of parameters. Curves in one group intersect at the confluence point with the natural speed at the corresponding value of ¯β. Symbols: simulations. Thin lines: prediction of the displacement-velocity relation by 11th order perturbation. L1: natural speed at ¯β = 0.005. L2: natural speed at ¯β = 0.01. L3: the line for natural tracking in the high ¯A limit. Parameter: ¯k = 0.3.

3.3 Natural Tracking

For strong enough STD, a CANN holds spontaneously moving bump states. The speed of the spontaneously moving bump is an intrinsic property of the network, depending only on the network parameters. We call this the natural speed of the network, denoted as vnatural. An interesting issue is the tracking performance of the network when the stimulus is moving at its natural speed. Two sets of curves corresponding to two values of ¯β are shown in Fig. 4, when the stimulus amplitude ¯A is sufficiently strong. The lines L1 and L2 indicate the corresponding natural speeds of the system for these values of ¯β. Remarkably, we obtain a confluence point of these curves at the natural speed. This point is referred to as the natural tracking point.
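The quoted estimates can be re-derived from Eqs. (19)-(20) and the stated parameters. This is our own arithmetic, not code from the paper; the slope at the origin comes out as τant(0)/τd ≈ 0.40, consistent with the quoted 0.41:

```python
from math import sqrt

# Re-deriving the numbers quoted in the text from the HD-cell data of [8]:
# tau_s = 1 ms, tau_d = 50 tau_s, a = 28.5/sqrt(2) degrees, and an anticipation
# time falling from 20 ms at v = 0 to 15 ms at v = 360 deg/s.
tau_s = 1e-3                       # seconds
tau_d = 50.0 * tau_s               # seconds
a = 28.5 / sqrt(2)                 # degrees

# Slope of s/a vs v*tau_d/a at the origin: s = v * tau_ant(0), so the
# rescaled slope equals tau_ant(0) / tau_d
slope = 20e-3 / tau_d              # about 0.4

# Invert Eq. (20) at v = 360 deg/s, where tau_ant = 15 ms:
# 15 = 20 * (1 - v^2 / (3 v_max^2))  =>  v_max = v / sqrt(3 * (1 - 15/20))
v = 360.0
v_max = v / sqrt(3.0 * (1.0 - 15.0 / 20.0))
rescaled_vmax = v_max * tau_d / a  # about 1.03
```

Both numbers fall close to the ¯β = 0.022 curve of Fig. 2(a) (slope 0.45, maximum lead at vmaxτd/a = 1.01), which is the comparison made in the text.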
It has the important property that the lag is independent of the stimulus amplitude. This independence of s from ¯A persists in the asymptotic limit of large ¯A. In this limit, s approaches −vnaturalτs, corresponding to a delay time of τs, showing that the response is limited by the synaptic time scale in this limit. This asymptotic limit is described by the line L3 and is identical for all values of ¯k and ¯β. Hence the invariant point for natural tracking is given by (v, s) = (vnatural, −vnaturalτs) for all values of ¯k and ¯β. We also consider natural tracking in the weak ¯A limit. Again we find a confluence point of the displacement curves at the natural speed, but the delay time (and in some cases the anticipation time) depends on the value of ¯k. For example, at ¯k = 0.3, the natural tracking point traces out an effectively linear curve in the space of v and s when ¯β increases, with a slope equal to 0.8τs. This shows that the delay time is 0.8τs, effectively independent of ¯β at ¯k = 0.3. Since the delay time is different from the value of τs applicable in the strong ¯A limit, the natural tracking point is slowly drifting from the weak to the strong ¯A limit. However, the magnitude of the natural time delay remains of the order of τs. This is confirmed by the analysis of the dynamical equations when the stimulus speed is vnatural + δv in the weak ¯A limit.

3.4 Extension to other CANNs

To investigate whether the delay compensation behavior and the prediction of the natural tracking point are general features of CANN models, we consider a network with Mexican-hat couplings. We replace J(x, x′) in Eq. (1) by

    JMH(x, x′) = J0 [ 1/2 − ((x − x′)/(2a))² ] exp[−(x − x′)²/(2a²)],   (21)

and r(x, t) in Eqs.
(1) and (2) by

    r(x, t) = Θ[u(x, t)] u(x, t)² / [1 + u(x, t)²].   (22)

Figure 5: (a) The dependence of anticipatory time on the stimulus speed in the Mexican-hat model. Parameter: β = 0.003. (b) Natural speed of the network as a function of β. (c) Plot of s against v. There is a confluence point at the natural speed of the system. L1: the natural speed of the system at β = 0.0011. Common parameters: ρ = 128/(2π), J0 = 0.5 and a = 0.5.

Fig. 5 shows that the network exhibits the same behaviors as the model in Eqs. (1) and (2). As shown in Fig. 5(a), the anticipatory times are effectively constant and similar in magnitude in the range of stimulus speed comparable to experimental settings. In Fig. 5(b), the natural speed of the bump is zero for β less than a critical value. As β increases, the natural speed increases from zero. In Fig. 5(c), the displacement s is plotted as a function of the stimulus speed v. The invariance of the displacement at the natural speed, independent of the stimulus amplitude, also appears in the Mexican-hat model. The confluence point of the family of curves is close to the natural speed. Furthermore, the displacement at the natural tracking point increases with the natural speed.

4 Conclusions

In the present study we have investigated a simple mechanism of how processing delays can be compensated in neural information processing. The mechanism is based on the intrinsic dynamics of a neural circuit, utilizing the STD property of neuronal synapses. The latter induces translational instability of neural activities in a CANN and enhances the mobility of the network states in response to external inputs.
We found that for strong STD, the neural system can track moving stimuli with either zero-lag or a lead of a constant time. The conditions for perfect and anticipative tracking hold for a wide range of stimulus speeds, making them applicable in practice. By choosing biologically plausible parameters, our model successfully justifies the experimentally observed delay compensation behaviors. We also made an interesting prediction about the network dynamics, that is, when the speed of the stimulus coincides with the natural speed of the network state, the delay becomes effectively independent of the stimulus amplitude. We also studied more than one kind of CANN model to confirm the generality of our results. Compared with other delay compensation strategies relying on corollary discharge or dynamical routing, the mechanism we propose here is fully dependent on the intrinsic dynamics of the network; namely, the network automatically “adjusts” its tracking speed according to the input information. There exist strong correlations between tracking performance and the intrinsic dynamics of the network. The parameter regions for delayed, perfect and anticipative tracking correspond to network states being static, ready-to-move and spontaneously moving, respectively. It has been suggested that the anticipative response of HD neurons in the anterior dorsal thalamus is due to the corollary discharge of motor neurons responsible for moving the head. However, experimental studies revealed that when rats were moved passively (and hence no corollary discharge is available), either by hand or by a cart, the anticipative response of HD neurons still exists and has an even larger leading time [1]. Our model provides a possible mechanism to describe this phenomenon.

Acknowledgement

This work is supported by the Research Grants Council of Hong Kong (grant number 605010) and the National Foundation of Natural Science of China (No. 91132702, No. 31221003).

References

[1] J. P. Bassett, M. B.
Zugaro, G. M. Muir, E. J. Golob, R. U. Muller and J. S. Taube. Passive Movements of the Head Do Not Abolish Anticipatory Firing Properties of Head Direction Cells. J. Neurophysiol. 93, 1304-1316 (2005). [2] R. Ben-Yishai, R. Lev. Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. U.S.A. 92, 3844-3848 (1995). [3] H. T. Blair and P. E. Sharp. Anticipatory head direction signals in anterior thalamus: evidence for a thalamocortical circuit that integrates angular head motion to compute head direction. J. Neurosci. 15, 6260-6270 (1995). [4] M. C. Fuhs and D. S. Touretzky. J. Neurosci. 26, A Spin Glass Model of Path Integration in Rat Medial Entorhinal Cortex . 4266-4276 (2006). [5] C. C. A. Fung, K. Y. Wong and S. Wu. Neural Comput. Moving Bump in a Continuous Manifold: A Comprehensive Study of the Tracking Dynamics of Continuous Attractor Neural Networks. 22, 752-792 (2010). [6] C. C. A. Fung, K. Y. M. Wong, H. Wang and S. Wu. Dynamical Synapses Enhance Neural Information Processing: Gracefulness, Accuracy and Mobility. Neural Comput. 24, 1147-1185 (2012). [7] A. P. Georgopoulos, J. T. Lurito, M. Petrides, Mental rotation of the neuronal population vector. A. B. Schwartz, and J. T. Massey, Science 243, 234-236 (1989). [8] J. P. Goodridge and D. S. Touretzky. Modeling attractor deformation in the rodent head direction system. J. Neurophysiol.83, 3402-3410 (2000). [9] Z. P. Kilpatrick and P. C. Bressloff. Effects of synaptic depression and adaptation on spatiotemporal dynamics of an excitatory neuronal network. Physica D 239, 547-560 (2010). [10] B. L. McNaughton, F. P. Battaglia, O. Jensen, E. I. Moser and M.-B. Moser. Path integration and the neural basis of the ‘cognitive map’. Nature Rev. Neurosci. 7, 663-678 (2006). [11] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature 382, 807-810, 1996. [12] R. Nijhawan. Motion extrapolation in catching. 
Nature 370, 256-257 (1994).
[13] R. Nijhawan and S. Wu. Compensating time delays with neural predictions: are predictions sensory or motor? Phil. Trans. R. Soc. A 367, 1063-1078 (2009).
[14] J. O’Keefe and J. Dostrovsky. The hippocampus as a spatial map: preliminary evidence from unit activity in the freely moving rat. Brain Res. 34, 171-175 (1971).
[15] A. Samsonovich and B. L. McNaughton. Path integration and cognitive mapping in a continuous attractor neural network model. J. Neurosci. 17, 5900-5920 (1997).
[16] M. A. Sommer and R. H. Wurtz. Influence of the thalamus on spatial visual processing in frontal cortex. Nature 444, 374-377 (2006).
[17] J. S. Taube, R. U. Muller and J. B. Ranck Jr. Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. J. Neurosci. 10, 420-435 (1990).
[18] J. S. Taube and R. I. Muller. Comparisons of head direction cell activity in the postsubiculum and anterior thalamus of freely moving rats. Hippocampus 8, 87-108 (1998).
[19] M. Tsodyks, K. Pawelzik, and H. Markram. Neural Networks with Dynamic Synapses. Neural Comput. 10, 821-835 (1998).
[20] S. Wu and S. Amari. Computing with Continuous Attractors: Stability and Online Aspects. Neural Comput. 17, 2215-2239 (2005).
[21] L. C. York and M. C. W. van Rossum. Recurrent networks with short term synaptic depression. J. Comput. Neurosci. 27, 707-620 (2009).
[22] K. Zhang. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J. Neurosci. 16, 2112-2126 (1996).
Visual Recognition using Embedded Feature Selection for Curvature Self-Similarity
Angela Eigenstetter, HCI & IWR, University of Heidelberg, aeigenst@iwr.uni-heidelberg.de
Björn Ommer, HCI & IWR, University of Heidelberg, ommer@uni-heidelberg.de

Abstract

Category-level object detection has a crucial need for informative object representations. This demand has led to feature descriptors of ever increasing dimensionality like co-occurrence statistics and self-similarity. In this paper we propose a new object representation based on curvature self-similarity that goes beyond the currently popular approximation of objects using straight lines. However, like all descriptors using second order statistics, ours also exhibits a high dimensionality. Although improving discriminability, the high dimensionality becomes a critical issue due to a lack of generalization ability and the curse of dimensionality. Given only a limited amount of training data, even sophisticated learning algorithms such as the popular kernel methods are not able to suppress noisy or superfluous dimensions of such high-dimensional data. Consequently, there is a natural need for feature selection when using present-day informative features and, particularly, curvature self-similarity. We therefore suggest an embedded feature selection method for SVMs that reduces complexity and improves the generalization capability of object models. By successfully integrating the proposed curvature self-similarity representation together with the embedded feature selection in a widely used state-of-the-art object detection framework we show the general pertinence of the approach.

1 Introduction

One of the key challenges of computer vision is the robust representation of complex objects, and so over the years increasingly rich features have been proposed.
Starting with brightness values of image pixels and simple edge histograms [10], descriptors evolved and more sophisticated features like shape context [1] and wavelets [23] were suggested. Probably the most widely used and best performing image descriptors today are SIFT [18] and HOG [4], which model objects based on edge orientation histograms. Recently, there has been a trend to utilize more complicated image statistics like co-occurrence and self-similarity [25, 5, 15, 29, 31] to build more robust descriptors. This development shows that the dimensionality of descriptors is getting larger and larger. Furthermore it is noticeable that all descriptors that model the object boundary rely on image statistics that are primarily based on edge orientation. Thus, they approximate objects with straight lines. However, it was shown in different studies within the perception community that besides orientation, curvature is also an important cue when performing visual search tasks. In our earlier work [21] we extended the modeling of object boundary contours beyond the widely used edge orientation histograms by utilizing curvature information to overcome the drawbacks of straight line approximations. However, curvature can provide even more information about the object boundary. By computing co-occurrences between discriminatively curved boundaries we build a curvature self-similarity descriptor that provides a more detailed and accurate object description. While it was shown that self-similarity and co-occurrence lead to very robust and highly discriminative object representations, these second order image statistics are also pushing feature spaces to extremely high dimensions. Since the amount of training data stays more or less the same, the dimensionality of the object representation has to be reduced to prevent systems from suffering from the curse of dimensionality and overfitting. Nevertheless, well designed features still increase performance. Deselaers et al.
[5], for instance, suggested an approach that results in a 160,000-dimensional descriptor, evaluated on the ETHZ shape dataset, which contains on average 30 positive object instances per category. To exploit the full capabilities of high-dimensional representations applied in object detection, we developed a new embedded feature selection method for SVMs which reliably discards superfluous dimensions and therefore improves object detection performance. The paper is organized as follows: First we give a short overview of embedded feature selection methods for SVMs (Section 2.1) and describe a novel method to capture the important dimensions of high-dimensional representations (Section 2.2). After that we describe our new self-similarity descriptor based on curvature, which goes beyond the straight line approximation of objects to a more accurate description (Section 3). Moreover, Section 3 discusses previous work on self-similarity. In the experimental section at the end of the paper we evaluate the suggested curvature self-similarity descriptor along with our feature selection method.

2 Feature Selection for Support Vector Machines

2.1 Embedded Feature Selection Approaches

Guyon et al. [12] categorize feature selection methods into filters, wrappers and embedded methods. Contrary to filters and wrappers, embedded feature selection methods incorporate feature selection as a part of the learning process (for a review see [17]). The focus of this paper is on embedded feature selection methods for SVMs, since most state-of-the-art detection systems use an SVM as classifier. To directly integrate feature selection into the learning process of SVMs, sparsity can be enforced on the model parameter w. Several researchers, e.g. [2], have considered replacing the L2 regularization term ∥w∥₂² with an L1 regularization term ∥w∥₁. Since the L1 norm penalty for SVMs has some serious limitations, Wang et al.
[30] suggested the doubly regularized SVM (DrSVM), which does not replace the L2 regularization but adds an additional L1 regularization to automatically select dimensions during the learning process. Contrary to the linear SVM, enforcing sparsity on the model parameter w for non-linear kernel functions reduces dimensionality in the higher-dimensional kernel space rather than in the number of input features. To reduce the dimensionality for non-linear SVMs in the feature space, one can introduce an additional selection vector θ ∈ [0, 1]ⁿ, where larger values of θᵢ indicate more useful features. The objective is then to find the best kernel of the form Kθ(x, z) = K(θ ∗ x, θ ∗ z), where x, z ∈ Rⁿ are the feature vectors and ∗ is element-wise multiplication. These hyper-parameters θ can be obtained via gradient descent on a generalization bound or a validation error. Another possibility is to consider the scaling factors θ as parameters of the learning algorithm [11], where the problem was solved using a reduced conjugate gradient technique. In this paper we integrate the scaling factors into the learning algorithm, but instead of using an L2 norm constraint on the scaling parameter θ as in [11], we apply an L1 norm sparsity term which explicitly discards dimensions of the input feature vector. For the linear case our optimization problem becomes similar to DrSVM [30], where a gradient descent method is applied to find the optimal solution w∗. There, a computationally costly initialization is applied to find a starting point, while our selection step can start at the canonical θ = 1, because w is modeled in a separate variable.

2.2 Iterative Dimensionality Reduction for SVM

An SVM classifier learns a hyperplane, defined by w and b, which best separates the training data {(xᵢ, yᵢ)}, 1 ≤ i ≤ N, with labels yᵢ ∈ {−1, +1}. We follow the concept of embedded feature selection and therefore include the feature selection parameter θ directly in the SVM classifier.
The corresponding optimization problem can be expressed in the following way:

$$\min_{\theta}\ \min_{w,b,\xi}\ \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{N} \xi_i \qquad (1)$$
$$\text{subject to: } y_i\,(w^T \psi(\theta * x_i) + b) \ge 1 - \xi_i \ \wedge\ \xi_i \ge 0 \ \wedge\ \|\theta\|_1 \le \theta_0$$

Algorithm 1: Iterative Dimensionality Reduction for SVM
1: converged := FALSE, θ := 1
2: while converged == FALSE do
3:   [x′_l, α, b] = trainSVM(X′, Y′, θ, C)
4:   θ* = applyBundleMethod(X′′, Y′′, x′_l, α, b, C)
5:   if θ* == θ then
6:     converged := TRUE
7:   end if
8:   θ := θ*
9: end while

Figure 1: Visualization of curvature computation. D_ik is on the left-hand side of the vector (p_{i+l} − p_i) and therefore has a positive sign, while D′_ik is on the right-hand side of the vector (p′_{i+l} − p′_i) and therefore gets a negative sign.

Here K(x, z) := ψ(x) · ψ(z) is the SVM kernel function. The function ψ(x) is typically unknown and represents the mapping of the feature vector x into a higher-dimensional space. We enforce sparsity of the feature selection parameter θ by the last constraint of Eq. 1, which restricts the L1-norm of θ by a constant θ₀. Since the SVM uses L2 regularization, it does not explicitly force single dimensions to be exactly zero; however, this is necessary to explicitly discard unnecessary dimensions. We rewrite the problem in Eq. 1 without additional constraints in the following way:

$$\min_{\theta}\ \min_{w,b}\ \lambda\|\theta\|_1 + \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{N} \max(0,\ 1 - y_i f_\theta(x_i)) \qquad (2)$$

where the decision function f_θ is given by f_θ(x) = wᵀψ(θ ∗ x) + b. Note that the last constraint of Eq. 1, which restricts the L1-norm by the constant θ₀, is rewritten as an L1-regularization term multiplied by the sparsity parameter λ. Due to the complexity of problem (2) we propose to solve two simpler problems iteratively. We first split the training data into three sets: training {(x′_i, y′_i)}, 1 ≤ i ≤ N′, validation {(x′′_i, y′′_i)}, 1 ≤ i ≤ N′′, and a hold-out test set. Now we optimize the problem with respect to w and b for a fixed selection parameter θ using a standard SVM algorithm on the training set.
Parameter θ is optimized in a second optimization step on the validation data using an extended version of the bundle method suggested in [6]. We perform the second step of our algorithm on a separate validation set to prevent overfitting. In the first step of our algorithm, the parameter θ is fixed and the remaining problem is converted into the dual problem

$$\max_{\alpha}\ \sum_{i=1}^{N'} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{N'} \alpha_i \alpha_j y'_i y'_j K(\theta * x'_i,\ \theta * x'_j) \qquad (3)$$
$$\text{subject to: } 0 \le \alpha_i \le C, \quad \sum_{i=1}^{N'} \alpha_i y'_i = 0$$

where the decision function f_θ is given by f_θ(x) = Σ_{l=1}^{m} α_l y_l K(θ ∗ x, θ ∗ x′_l) + b, with m the number of support vectors. Eq. 3 is solved using a standard SVM algorithm [3, 19]. The optimization of the selection parameter θ starts at the canonical solution where all dimensions are set to one. This corresponds to the solution that is usually taken as the final model in other approaches. In our approach we apply a second optimization step to explicitly eliminate dimensions which are not necessary to classify data from the validation set. Fixing the values of the Lagrange multipliers α, the support vectors x′_l and the offset b obtained by solving Eq. 3 leads to

$$\min_{\theta}\ \lambda\|\theta\|_1 + \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{N''} \max(0,\ 1 - y''_i f_\theta(x''_i)) \qquad (4)$$

which is an instance of the regularized risk minimization problem $\min_{\theta}\ \lambda\Omega(\theta) + R(\theta)$, where Ω(θ) is a regularization term and R(θ) is an upper bound on the empirical risk. To solve such non-differentiable risk minimization problems, bundle methods have recently gained increasing interest in the machine learning community. For the case that the risk function R is non-negative and convex, it is always lower bounded by its cutting plane at a certain point θᵢ:

$$R(\theta) \ge \langle a_i, \theta \rangle + b_i \quad \text{for all } i \qquad (5)$$

where a_i := ∂_θ R(θ_i) and b_i := R(θ_i) − ⟨a_i, θ_i⟩. Bundle methods build an iteratively increasing piecewise lower bound of the objective function by utilizing its cutting planes.
Starting with an initial solution, the bundle method solves the problem where R is approximated by one initial cutting plane, using a standard solver. A second cutting plane is built at the solution of the approximated problem. The new approximated lower bound of R is then the maximum over all cutting planes. The more cutting planes are added, the more accurate the lower bound of the risk function becomes. For the general case of non-linear kernel functions the problem in Eq. 4 is non-convex and therefore especially hard to optimize. In the special case of a linear kernel the problem is convex and the applied bundle method converges towards the global optimum. Some efforts have been made to adjust bundle methods to handle non-convex problems [16, 6]. We adapted the method of [6] to apply L1 regularization instead of L2 regularization and employ it to solve the optimization problem in Eq. 4. Although the convergence rate of O(1/ε) to a solution of accuracy ε [6] no longer applies for our L1-regularized version, we observed that the algorithm converges within the order of 10 iterations, which is in the same range as for the algorithm in [6]. An overview of the suggested iterative dimensionality reduction algorithm is given in Algorithm 1.

3 Representing Curvature Self-Similarity

Although several methods have been suggested for the robust estimation of curvature, curvature has mainly been represented indirectly in a contour-based manner [1, 32] and used to locate interest points at boundary points with high curvature. To design a more exact object representation that represents object curvedness in a natural way, we revisit the idea of [21] and design a novel curvature self-similarity descriptor. The idea of self-similarity was first suggested by Shechtman et al. [25], who proposed a descriptor based on local self-similarity (LSS). Instead of measuring image features directly, it measures the correlation of an image patch with a larger surrounding image region.
The general idea of self-similarity was used in several methods and applications [5, 15, 29, 31]. In [15] self-similarity is used to improve the Local Binary Pattern (LBP) descriptor for face identification. Deselaers et al. [5] explored global self-similarity (GSS) and showed its advantages over local self-similarity (LSS) for object detection. Furthermore, Walk et al. [29] showed that using color histograms directly decreases performance, while using color self-similarity (CSS) as a feature is more appropriate. Besides object classification and detection, self-similarity was also used for action recognition [15] and turned out to be very robust to viewpoint variations. We propose a new holistic self-similarity representation based on curvature. To make use of the aforementioned advantages of global self-similarity, we compute all pairwise curvature similarities across the whole image. This results in a very high-dimensional object representation. As mentioned before, such high-dimensional representations have a natural need for dimensionality reduction, which we fulfill by applying our embedded feature selection algorithm outlined in the previous section. To describe complex objects it is not sufficient to build a self-similarity descriptor solely based on curvature information, since self-similarity of curvature leaves open many ambiguities. To resolve these ambiguities we add 360 degree orientation information to get a more accurate descriptor. We use the 360 degree orientation since curved lines cannot be fully described by their 180 degree orientation. This is different from straight lines, where the 180 degree orientation gives full information about the line. Consider a half circle with an arbitrary tangent line on it. The tangent line has an orientation between 0 and 180 degrees. However, it does not provide information on which side of the tangent the half circle is actually located, in contrast to a 360 degree orientation.
Therefore, using a 180 degree orientation yields high similarities between a left-curved and a right-curved line segment. As a first step we extract the curvature information and the corresponding 360 degree orientation of all edge pixels in the image. To estimate the curvature we follow our approach presented in [21] and use the distance accumulation method of Han et al. [13], which accurately approximates the curvedness along given 2D line segments. Let B be a set of N consecutive boundary points, B := {p_0, p_1, p_2, ..., p_{N−1}}, representing one line segment. A fixed integer value l defines a line L_i between pairs of points p_i and p_{i+l}, where i + l is taken modulo N. The perpendicular distance D_ik is computed from L_i to the point p_k, using the Euclidean distance. The distance accumulation for point p_k and a chord length l is the sum

$$h_l(k) = \sum_{i=k-l}^{k} D_{ik}.$$

The distance is positive if p_k is on the left-hand side of the vector (p_{i+l} − p_i), and negative otherwise (see Figure 1 and Figure 3).

Figure 2: Our visualization shows the original images along with their curvature self-similarity matrices, displaying the similarity between all pairs of curvature histogram cells. While the curvature self-similarity descriptor is similar within the same object category, it looks quite different across object categories.

To get the 360 degree orientation information we compute the gradient of the probabilistic boundary edge image [20] and extend the resulting 180 degree gradient orientation to a 360 degree orientation using the sign of the curvature. Contrary to the original curvature feature proposed in [21], where histograms of curvature are computed using differently sized image regions, we build our basic curvature feature using equally sized cells to make it more suitable for computing self-similarities. We divide the image into non-overlapping 8 × 8 pixel cells and build histograms over the curvature values in each cell.
Next we do the same for the 360 degree orientation and concatenate the two histograms. This results in histograms of 28 bins: 10 bins representing the curvature and 18 bins representing the 360 degree orientation. There are many ways to define similarities between histograms. We follow the scheme that was applied to compute self-similarities between color histograms [29] and use histogram intersection as a comparison measure to compute the similarities between different curvature histograms in the same bounding box. Furthermore, we apply an L2-normalization to the final self-similarity vector. The computation of self-similarities between all curvature-orientation histograms results in an extremely high-dimensional representation. Let D be the number of cells in an image; then computing all pairwise similarities results in a D × D curvature self-similarity matrix. Some examples are shown in Figure 2. Since the similarity matrix is symmetric, we use only the upper triangle, which results in a (D · (D − 1)/2)-dimensional vector. This representation gives a very detailed description of the object. The higher-dimensional a descriptor gets, the more likely it is to contain noisy and correlated dimensions. Furthermore, it is also intuitive that not all similarities extracted from a bounding box are helpful to describe the object. To discard such superfluous dimensions we apply our embedded feature selection method to the proposed curvature self-similarity representation.

4 Experiments

We evaluate our curvature self-similarity descriptor in combination with the suggested embedded dimensionality reduction algorithm for the object detection task on the PASCAL dataset [7]. To show the individual strengths of these two contributions we need to perform a number of evaluations. Since this is not supported by the PASCAL VOC 2011 evaluation server, we follow the best practice guidelines and use the VOC 2007 dataset.
Our experiments show that curvature self-similarity provides complementary information to straight lines, while our feature selection algorithm further improves performance by fulfilling its natural need for dimensionality reduction. The common basic concept shared by many current detection systems is a high-dimensional, holistic representation learned with a discriminative classifier, mostly an SVM [28]. In particular, the combination of HOG [4] and SVM constitutes the basis of many powerful recognition systems and has laid the foundation for numerous extensions like part-based models [8, 22, 24, 33], variations of the SVM classifier [8, 27] and approaches utilizing context information [14, 26]. These systems rely on high-dimensional holistic image statistics primarily utilizing straight line approximations. In this paper we explore an orthogonal direction to these extensions and focus on how one can improve on the basic system by extending the straight line representation of HOG to a more discriminative description using curvature self-similarity. At the same time our aim is to reduce the dimensionality

Table 1: Average precision of our iterative feature reduction algorithm for linear and non-linear kernel functions using our final feature vector consisting of HOG+Curv+CurvSS. For the linear kernel we compare our feature selection (linSVM+FS) to the L2-normalized linear SVM (linSVM) and to the doubly regularized SVM (DrSVM) [30].
For the non-linear kernel we compare the fast intersection kernel SVM (FIKSVM) [19] with our feature selection (FIKSVM+FS).

            aero  bike  bird  boat  bottle  bus   car   cat   chair  cow
linSVM      66.1  80.0  53.0  53.1  70.7    73.8  75.3  61.2  63.8   70.7
DrSVM       59.1  77.6  53.5  49.9  64.4    71.6  75.8  50.8  56.1   64.5
linSVM+FS   69.7  80.3  55.5  56.2  71.8    74.0  75.9  63.2  64.8   71.0
FIKSVM      80.1  74.8  57.1  59.3  63.3    73.9  77.3  77.3  69.1   66.4
FIKSVM+FS   80.4  74.9  57.5  62.1  66.7    73.9  78.0  80.1  70.6   69.9

            table  dog   horse  mbike  pers  plant  sheep  sofa  train  tv    mean
linSVM      71.4   57.2  76.5   83.0   72.9  47.7   55.1   61.1  70.4   73.1  66.8
DrSVM       59.9   53.9  70.9   76.5   72.3  47.7   66.3   69.0  67.7   79.7  64.3
linSVM+FS   72.0   57.8  77.2   83.3   73.0  49.7   56.7   62.4  70.7   73.8  68.0
FIKSVM      64.1   61.7  74.6   70.9   79.4  47.5   62.0   59.8  76.9   69.3  68.1
FIKSVM+FS   67.6   64.6  79.7   74.2   79.6  53.0   64.2   64.6  77.1   69.8  70.4

of such high-dimensional representations to decrease the complexity of the learning procedure and to improve generalization performance. In the first part of our experiments we adjust the selection parameter λ of our iterative dimensionality reduction technique via cross-validation. Furthermore, we compare the performance of our feature selection algorithm to the L2-regularized SVM [3, 19] and DrSVM [30]. In the second part we evaluate the suggested curvature self-similarity feature after applying our feature selection method to it.

4.1 Evaluation of Feature Selection

All experiments in this section are performed using our final feature vector consisting of HOG, curvature (Curv) and curvature self-similarity (CurvSS). We apply our iterative dimensionality reduction algorithm in combination with the linear L2-regularized SVM classifier (linSVM) [3] and the non-linear fast intersection kernel SVM (FIKSVM) of Maji et al. [19]. The FIKSVM is widely used, and evaluation is relatively fast compared to other non-linear kernels. Nevertheless, computational complexity is still an issue on the PASCAL dataset.
This is why linear kernels are typically used on this database [8, 26]. Because of the high computational complexity of DrSVM and FIKSVM, we compare to these methods on a smaller train and test subset obtained from the PASCAL training and validation data in the following way. All training and validation data from the PASCAL VOC 2007 dataset are used to train an SVM using our final object representation on all positive samples and randomly chosen negative samples. The resulting model is used to collect hard negative samples. The set of collected samples is split up into three sets: training, validation and test. Out of the collected set of samples, every tenth sample is assigned to the hold-out test set, which is used to compare the performance of our feature selection method. The remaining samples are randomly split into training and validation sets of equal size, which are used to perform the feature selection. The reduction algorithm is applied to five different training/validation splits, which results in five different sets of selected features. For each set we train an L2 norm SVM on all samples from the training and validation set using only the remaining dimensions of the feature vector. Then we choose the feature set with the best performance on the hold-out test set. To find the best performing selection parameter λ, we repeat this procedure for different values of λ. The performance of our dimensionality reduction algorithm is compared to the performance of linSVM and DrSVM [30] for the case of a linear kernel. Since DrSVM solves a similar optimization problem to our suggested feature selection algorithm for a linear kernel, this comparison is of particular interest.
Figure 3: Based on meaningful edge images one can extract accurate curvature information, which is used to build our curvature self-similarity object representation.

Figure 4: A significant number of images from PASCAL VOC feature contour artifacts, e.g. due to their size, low resolution, or compression artifacts. The edge maps are obtained from the state-of-the-art probabilistic boundary detector [20]. It is evident that objects like the sheep are not defined by their boundary shape and are thus beyond the scope of approaches based on contour shape.

We are not comparing performance to DrSVM in the non-linear case since it performs feature selection in the higher-dimensional kernel space rather than in the original feature space. Instead, we compare our feature selection method to that of FIKSVM for the non-linear case. Our feature selection method reduces the dimensionality of the feature by up to 55% for the linear case and by up to 40% in the non-linear case, while the performance in average precision stays constant or increases beyond the performance of linSVM and FIKSVM. On average our feature selection increases performance by about 1.2% for linSVM and 2.3% for FIKSVM on the hold-out test set. DrSVM actually decreases the performance of linSVM by 2.5% while discarding a similar amount of features. All in all, our approach improves on DrSVM by 3.7% (see Table 1). Our results confirm that our feature selection method reduces the amount of noisy dimensions of high-dimensional representations and therefore increases the average precision compared to linear and non-linear SVM classifiers without any feature selection. For the linear kernel we furthermore showed that the proposed feature selection algorithm achieves a gain over DrSVM.

4.2 Object Detection using Curvature Self-Similarity

In this section we provide a structured evaluation of the parts of our final object detection system. We use the HOG of Felzenszwalb et al.
[8, 9] as our baseline system, since it is the basis for many powerful object detection systems. All detection results are measured in terms of average precision for object detection on the PASCAL VOC 2007 dataset. To the best of our knowledge, neither curvature nor self-similarity has so far been used to perform object detection on a dataset of similar complexity to PASCAL. Deselaers et al. [5] evaluated their global self-similarity descriptor (GSS) on the simpler classification challenge of the PASCAL VOC 2007 dataset, while the object detection evaluation was performed on the ETHZ shape dataset. However, we showed in [21] that including curvature already solves the detection task almost perfectly on the ETHZ dataset. Furthermore, [21] outperforms the GSS descriptor on three categories and reaches comparable performance on the other two. Thus we evaluate on the more challenging PASCAL dataset. Since the proposed approach models the shape of curved object contours and reduces the dimensionality of the representation, we expect it to be of particular value for objects that are characterized by their shape and whose contours can be extracted using state-of-the-art methods. However, a significant number of images from PASCAL VOC are corrupted due to noise or compression artifacts (see Fig. 4). Therefore state-of-the-art edge extraction fails to provide any basis for contour-based approaches on these images, and one can therefore only expect a significant gain on categories where proper edge information can be computed for a majority of the images. Our training procedure makes use of all objects that are not marked as difficult from the training and validation set. We evaluate the performance of our system on the full test set consisting of 4952 images containing objects from 20 categories using a linear SVM classifier [3].
Due to the large amount of data in the PASCAL database, the usage of the intersection kernel for object detection becomes comparably intractable. Results of our final system consisting of HOG, curvature (Curv), curvature self-similarity (CurvSS) and our embedded feature selection method (FS) are reported in terms of average precision in Table 2. We compare our results to those of HOG [9] without applying the part-based model. Additionally we show results of our own HOG baseline system, which uses a standard linear SVM [3] instead of the latent SVM used in [9]. Furthermore we show results with

Table 2: Detection performance in terms of average precision of the HOG baseline system, HOG and curvature (Curv) before and after discarding noisy dimensions using our feature selection method (FS), and our final detection system consisting of HOG, curvature (Curv) and the suggested curvature self-similarity (CurvSS) with and without feature selection (FS) on the PASCAL VOC 2007 dataset. Note that we use all data points to compute the average precision, as specified by the default experimental protocol since the VOC 2010 development kit.
This yields lower but more accurate average precision measurements.

                      aero  bike  bird  boat  bottle  bus   car   cat   chair  cow
HOG of [9]            19.0  44.5  2.9   4.2   13.5    37.7  39.0  8.3   11.4   15.8
HOG                   20.8  43.0  2.1   5.0   13.7    37.8  38.7  6.7   12.1   16.3
HOG+Curv              23.0  42.6  3.7   6.7   12.4    38.6  39.9  7.5   10.0   16.9
HOG+Curv+FS           25.4  42.9  3.7   6.8   13.5    38.8  40.0  8.1   12.0   17.1
HOG+Curv+CurvSS       28.6  39.1  2.3   6.8   12.9    40.3  38.8  9.3   11.1   13.9
HOG+Curv+CurvSS+FS    28.9  43.1  3.5   7.0   13.6    40.6  40.4  9.6   12.5   17.3

                      table  dog   horse  mbike  pers  plant  sheep  sofa  train  tv    mean
HOG of [9]            10.5   2.0   43.5   29.7   24.0  3.0    11.6   17.7  28.3   32.4  20.0
HOG                   9.8    2.2   42.4   29.5   24.3  3.8    11.5   17.6  29.0   33.4  20.0
HOG+Curv              13.0   3.7   46.0   30.5   25.5  4.0    8.7    18.7  32.3   33.6  20.9
HOG+Curv+FS           15.6   3.7   46.4   30.8   25.7  4.0    11.3   19.1  32.3   33.6  21.5
HOG+Curv+CurvSS       16.3   6.2   48.0   27.5   27.2  4.2    9.3    20.5  35.9   34.8  21.7
HOG+Curv+CurvSS+FS    16.7   6.4   48.5   30.6   27.3  4.8    11.6   20.7  36.0   34.8  22.7

and without feature selection, to show the individual gain of the curvature self-similarity descriptor and our embedded feature selection algorithm. The results show that the suggested self-similarity representation in combination with feature selection improves performance on most of the categories. All in all this results in an increase of 2.7% in average precision compared to the HOG descriptor. One can observe that curvature information in combination with our feature selection algorithm already improves performance over the HOG baseline, and that adding curvature self-similarity additionally increases performance by 1.2%. The gain obtained by applying our feature selection (FS) obviously depends on the dimensionality of the feature vector; the higher the dimensionality, the more can be gained by removing noisy dimensions. For HOG+Curv, applying our feature selection improves performance by 0.6%, while the gain for the higher-dimensional HOG+Curv+CurvSS is 1%.
The results underline that curvature information provides complementary information to straight lines and that feature selection is needed when dealing with high-dimensional features like self-similarity.

5 Conclusion

We have observed that high-dimensional representations cannot be sufficiently handled by linear and non-linear SVM classifiers. An embedded feature selection method for SVMs has therefore been proposed in this paper, which has been demonstrated to successfully deal with high-dimensional descriptions and to increase the performance of linear and intersection-kernel SVMs. Moreover, the proposed curvature self-similarity representation has been shown to add complementary information to widely used orientation histograms.¹

¹This work was supported by the Excellence Initiative of the German Federal Government and the Frontier fund, DFG project number ZUK 49/1.

References
[1] S. Belongie, J. Malik, and J. Puzicha. Matching shapes. ICCV, 2001.
[2] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. ICML, 1998.
[3] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005.
[5] T. Deselaers and V. Ferrari. Global and efficient self-similarity for object classification and detection. CVPR, 2010.
[6] T.-M.-T. Do and T. Artières. Large margin training for hidden Markov models with partially observed states. ICML, 2009.
[7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.
[8] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 2010.
[9] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models, release 4. http://www.cs.brown.edu/~pff/latent-release4/.
[10] W. T. Freeman and M. Roth. Orientation histograms for hand gesture recognition. Intl. Workshop on Automatic Face and Gesture Recognition, 1995.
[11] Y. Grandvalet and S. Canu. Adaptive scaling for feature selection in SVMs. NIPS, 2003.
[12] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. JMLR, 3:1157–1182, 2003.
[13] J. H. Han and T. Poston. Chord-to-point distance accumulation and planar curvature: a new approach to discrete curvature. Pattern Recognition Letters, 22(10):1133–1144, 2001.
[14] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. ECCV, 2008.
[15] I. N. Junejo, E. Dexter, I. Laptev, and P. Pérez. Cross-view action recognition from temporal self-similarities. ECCV, 2008.
[16] N. Karmitsa, M. Tanaka Filho, and J. Herskovits. Globally convergent cutting plane method for nonconvex nonsmooth minimization. Journal of Optimization Theory and Applications, 148(3):528–549, 2011.
[17] T. N. Lal, O. Chapelle, J. Weston, and A. Elisseeff. Studies in Fuzziness and Soft Computing. I. Guyon and S. Gunn and N. Nikravesh and L. A. Zadeh, 2006.
[18] D. G. Lowe. Object recognition from local scale-invariant features. ICCV, 1999.
[19] S. Maji, A. C. Berg, and J. Malik. Classification using intersection kernel support vector machines is efficient. CVPR, 2008.
[20] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. PAMI, 26(5):530–549, 2004.
[21] A. Monroy, A. Eigenstetter, and B. Ommer. Beyond straight lines - object detection using curvature. ICIP, 2011.
[22] A. Monroy and B. Ommer. Beyond bounding-boxes: Learning object shape by model-driven grouping. ECCV, 2012.
[23] C. P. Papageorgiou, M. Oren, and T. Poggio. A general framework for object detection. ICCV, 1998.
[24] P. Schnitzspan, M. Fritz, S. Roth, and B. Schiele. Discriminative structure learning of hierarchical representations for object detection. CVPR, 2009.
[25] E. Shechtman and M. Irani. Matching local self-similarities across images and videos. CVPR, 2007.
[26] Z. Song, Q. Chen, Z. Huang, Y. Hua, and S. Yan. Contextualizing object detection and classification. CVPR, 2011.
[27] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector learning for interdependent and structured output spaces. ICML, 2004.
[28] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, 1995.
[29] S. Walk, N. Majer, K. Schindler, and B. Schiele. New features and insights for pedestrian detection. CVPR, 2010.
[30] L. Wang, J. Zhu, and H. Zou. The doubly regularized support vector machine. Statistica Sinica, 16, 2006.
[31] L. Wolf, T. Hassner, and Y. Taigman. Descriptor based methods in the wild. ECCV, 2008.
[32] P. Yarlagadda and B. Ommer. From meaningful contours to discriminative object shape. ECCV, 2012.
[33] L. Zhu, Y. Chen, A. Yuille, and W. Freeman. Latent hierarchical structural learning for object detection. CVPR, pages 1062–1069, 2010.
High-Order Multi-Task Feature Learning to Identify Longitudinal Phenotypic Markers for Alzheimer’s Disease Progression Prediction Hua Wang, Feiping Nie, Heng Huang, Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019 {huawangcs, feipingnie}@gmail.com, heng@uta.edu Jingwen Yan, Sungeun Kim, Shannon L. Risacher, Andrew J. Saykin, Li Shen, for the ADNI∗ Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN 46202 {jingyan, sk31, srisache, asaykin, shenli}@iupui.edu Abstract Alzheimer’s disease (AD) is a neurodegenerative disorder characterized by progressive impairment of memory and other cognitive functions. Regression analysis has been studied to relate neuroimaging measures to cognitive status. However, whether these measures have further predictive power to infer a trajectory of cognitive performance over time is still an under-explored but important topic in AD research. We propose a novel high-order multi-task learning model to address this issue. The proposed model explores the temporal correlations existing in imaging and cognitive data by structured sparsity-inducing norms. The sparsity of the model enables the selection of a small number of imaging measures while maintaining high prediction accuracy. The empirical studies, using the longitudinal imaging and cognitive data of the ADNI cohort, have yielded promising results. 1 Introduction Neuroimaging is a powerful tool for characterizing neurodegenerative process in the progression of Alzheimer’s disease (AD). Neuroimaging measures have been widely studied to predict disease status and/or cognitive performance [1, 2, 3, 4, 5, 6, 7]. However, whether these measures have further predictive power to infer a trajectory of cognitive performance over time is still an underexplored yet important topic in AD research. 
A simple strategy typically used in longitudinal studies (e.g., [8]) is to analyze a single summarized value such as average change, rate of change, or slope. This approach may be inadequate to distinguish the complete dynamics of cognitive trajectories and may thus fail to identify the underlying neurodegenerative mechanism. Figure 1 shows a schematic example. Let us look at the plot of Cognitive Score 2. The red and blue groups can be easily separated by their complete trajectories. However, given very similar score values at the time points t0 and t3, any of the aforementioned summarized values may not be sufficient to identify the group difference. Therefore, if longitudinal cognitive outcomes are available, it would be beneficial to use the complete information for the identification of relevant imaging markers [9, 10].

∗Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.ucla.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.ucla.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.

Figure 1: Longitudinal multi-task regression of cognitive trajectories on MRI measures.

However, how to identify the temporal imaging features that predict longitudinal outcomes is a challenging machine learning problem. First, the input data and response measures are often high-order tensors, not a regular data/label matrix. For example, both the input neuroimaging measures (samples × features × time) and the output cognitive scores (samples × scores × time) are 3D tensors. Thus, it is not trivial to build a longitudinal learning model for tensor data. Second, the associations between features and a specific task (e.g.
cognitive score) at two consecutive time points are often correlated. How to efficiently include such correlations of associations across time is unclear. Third, some longitudinal learning tasks are often interrelated. For example, it is well known [3, 4] that in the RAVLT assessment, the total number of words remembered by the participants in the first 5 learning trials heavily impacts the total number of words which can be recalled in the 6th learning trial, and the results of these two measures both partially determine the final recognition rate after a 30 minutes delay. How to integrate such task correlations into a longitudinal learning model is under-explored. In this paper, we focus on the problem of predicting longitudinal cognitive trajectories using neuroimaging measures. We propose a novel high-order multi-task feature learning approach to identify longitudinal neuroimaging markers that can accurately predict cognitive scores over all the time points. The sparsity-inducing norms are introduced to integrate the correlations existing in both features and tasks. As a result, the selected imaging markers can fully differentiate the entire longitudinal trajectory of relevant scores and better capture the associations between imaging markers and cognitive changes over time. Because the structured sparsity-inducing norms enforce the correlations along two directions of the learned coefficient tensor, the parameters in different sparsity norms are tangled together by distinct structures, which leads to a difficult optimization problem. We derive an efficient algorithm to solve the proposed high-order multi-task feature learning objective with a closed-form solution in each iteration. We further prove the global convergence of our algorithm. We apply the proposed longitudinal multi-task regression method to the ADNI cohort.
In our experiments, the proposed method not only achieves competitive prediction accuracy but also identifies a small number of imaging markers that are consistent with prior knowledge.

2 High-Order Multi-Task Feature Learning Using Sparsity-Inducing Norms

For AD progression prediction using longitudinal phenotypic markers, the input imaging features are a set of matrices X = {X1, X2, . . . , XT} ∈ R^{d×n×T} corresponding to the measurements at T consecutive time points, where Xt is the phenotypic measurement matrix for a certain type of imaging markers, such as the voxel-based morphometry (VBM) markers (see details in Section 3) used in this study, at time t (1 ≤ t ≤ T). Obviously, X is a data tensor with d imaging features, n subject samples and T time points. The output cognitive assessments for the same set of subjects are a set of matrices Y = {Y1, Y2, . . . , YT} ∈ R^{n×c×T} for a certain type of cognitive measurements, such as the RAVLT memory scores (see details in Section 3), at the same T consecutive time points. Again, Y is a data tensor with n samples, c scores, and T time points. Our goal is to learn from {X, Y} a model that can reveal the longitudinal associations between the imaging and cognitive trajectories, by which we expect to better understand how the variations of different regions of the human brain affect the AD progression, such that we can improve the diagnosis and treatment of the disease. Prior regression analyses typically study the associations between imaging features and cognitive measures at each time point separately, which is equivalent to assuming that the learning tasks, i.e., cognitive measures, at different time points are independent. Although this assumption can simplify the problem and make the solution easier to obtain, it overlooks the temporal correlations of imaging and cognitive measures.
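The decoupled analysis described above amounts to fitting an independent ridge regression at every time point, with closed-form solution B_t = (X_t X_t^T + αI)^{-1} X_t Y_t. A minimal numpy sketch on synthetic data (all shapes and names here are illustrative, not the ADNI data):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, c, T = 8, 50, 3, 4            # features, subjects, scores, time points
X = rng.standard_normal((T, d, n))  # imaging measures: one d x n matrix per time point
Y = rng.standard_normal((T, n, c))  # cognitive scores: one n x c matrix per time point
alpha = 0.1

# Decoupled baseline: an independent ridge regression at each time point,
# ignoring any temporal correlation between the T models.
B = np.stack([
    np.linalg.solve(X[t] @ X[t].T + alpha * np.eye(d), X[t] @ Y[t])
    for t in range(T)
])  # B[t] is the d x c coefficient matrix for time point t
```

Each B[t] satisfies the normal equations of its own time point and is entirely unaffected by the other T - 1 time points, which is precisely the limitation the proposed joint model removes.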
To address this, we propose to jointly learn a single longitudinal regression model for all the time points to identify imaging markers which are associated to cognitive patterns.

Figure 2: Left: visualization of the coefficient tensor B = {B_1, . . . , B_T} learned for the association study on longitudinal data. Middle: the matrix unfolded from B along the first mode (feature dimension), B_(1) = unfold_(1)(B) = [B_1, . . . , B_T]. Right: the matrix unfolded from B along the second mode (task dimension), B_(2) = unfold_(2)(B) = [B_1^T, . . . , B_T^T].

As a result, we aim to learn a coefficient tensor (a stack of coefficient matrices) B = {B_1, . . . , B_T} ∈ R^{d×c×T}, as illustrated in the left panel of Figure 2, to reveal the temporal changes of the coefficient matrices. Given the additional time dimension, our problem becomes a difficult high-order data analysis problem, which we call high-order multi-task learning.

2.1 Longitudinal Multi-Task Feature Learning

In order to associate the imaging markers and the cognitive measures, traditional association studies used the multivariate regression model, which minimizes the following objective:

$$\min_{\mathcal{B}}\; J_0 = \left\| \mathcal{B} \otimes_1 X^T - \mathcal{Y} \right\|_F^2 + \alpha \left\| \mathcal{B} \right\|_2^2 = \sum_{t=1}^{T} \left\| X_t^T B_t - Y_t \right\|_F^2 + \alpha \sum_{t=1}^{T} \sum_{k=1}^{d} \left\| b_t^k \right\|_2^2 , \qquad (1)$$

where $b_t^k$ denotes the k-th row of the coefficient matrix $B_t$ at time t. Apparently, the objective J_0 in Eq. (1) can be decoupled for each individual time point; therefore it does not take into account the longitudinal correlations between imaging features and cognitive measures. Because our goal in the association study is to select the imaging markers which are connected to the temporal changes of all the cognitive measures, the T groups of regression tasks at different time points should not be decoupled and have to be performed simultaneously. To achieve this, we select imaging markers correlated to all the cognitive measures at all time points by introducing sparse regularization [11, 12, 13] into the longitudinal data regression and feature selection model as follows:

$$\min_{\mathcal{B}}\; J_1 = \sum_{t=1}^{T} \left\| X_t^T B_t - Y_t \right\|_F^2 + \alpha \sum_{k=1}^{d} \sqrt{\sum_{t=1}^{T} \left\| b_t^k \right\|_2^2} = \sum_{t=1}^{T} \left\| X_t^T B_t - Y_t \right\|_F^2 + \alpha \left\| B_{(1)} \right\|_{2,1} , \qquad (2)$$

where we denote by $\mathrm{unfold}_k(\mathcal{B}) = B_{(k)} \in \mathbb{R}^{I_k \times (I_1 \dots I_{k-1} I_{k+1} \dots I_n)}$ the unfolding operation of a general n-mode tensor B along the k-th mode, and $B_{(1)} = \mathrm{unfold}_1(\mathcal{B}) = [B_1, \dots, B_T]$ as illustrated in the middle panel of Figure 2. By solving the objective J_1, the imaging features with common influences across all the time points for all the cognitive measures will be selected due to the second term in Eq. (2), which is a tensor extension of the widely used ℓ_{2,1}-norm for matrices.

2.2 High-Order Multi-Task Correlations

The objective J_1 in Eq. (2) couples all the learning tasks together, which, though, still does not address the correlations among different learning tasks at different time points. As discussed earlier, during the AD progression many cognitive measures are interrelated and their effects during the process could overlap; thus it is necessary to further develop the objective J_1 in Eq. (2) to leverage the useful information conveyed by the correlations among different cognitive measures. In order to capture the longitudinal patterns of the AD data, we consider two types of task correlations. First, for an individual cognitive measure, although its association to the imaging features at different stages of the disease could be different, its association patterns at two consecutive time points tend to be similar [9]. Second, we know that [4, 14] during the AD progression, different cognitive measures are interrelated to each other. Mathematically speaking, both types of correlations can be described by the low ranks of the coefficient matrices unfolded from the coefficient tensor along different modes. Thus we further develop our learning model in Eq. (2) by imposing additional low-rank regularizations to exploit these task correlations. Let $B_{(2)} = \mathrm{unfold}_2(\mathcal{B}) = [B_1^T, \dots, B_T^T]$, as illustrated in the right panel of Figure 2; we minimize the ranks of B_(1) and B_(2) to capture the two types of task correlations, one for each type, as follows:

$$\min_{\mathcal{B}}\; J_2 = \sum_{t=1}^{T} \left\| X_t^T B_t - Y_t \right\|_F^2 + \alpha \left\| B_{(1)} \right\|_{2,1} + \beta \left( \left\| B_{(1)} \right\|_* + \left\| B_{(2)} \right\|_* \right) , \qquad (3)$$

where ∥·∥_* denotes the trace norm of a matrix. Given a matrix $M \in \mathbb{R}^{n \times m}$ with singular values $\sigma_i$ ($1 \le i \le \min(n, m)$), the trace norm of M is defined as $\|M\|_* = \sum_{i=1}^{\min(n,m)} \sigma_i = \mathrm{tr}\left( M M^T \right)^{1/2}$. It has been shown [15, 16, 17] that the trace norm is the best convex approximation of the rank. Therefore, the third and fourth terms of J_2 in Eq. (3) indeed minimize the rank of the unfolded learning model B, such that the two types of correlations among the learning tasks at different time points can be utilized. Due to its capabilities for both imaging marker selection and task correlation integration on longitudinal data, we call J_2 defined in Eq. (3) the proposed High-Order Multi-Task Feature Learning model, by which we will study the problem of longitudinal data analysis to predict cognitive trajectories and identify relevant imaging markers.

2.3 New Optimization Algorithm and Its Global Convergence

Despite its nice properties, our new objective J_2 in Eq. (3) is a non-smooth convex problem. Some existing methods can solve it, but not efficiently. Thus, in this subsection we derive a new efficient algorithm to solve this optimization problem with a global convergence proof, where we employ an iteratively reweighted method [18] to deal with the non-smooth regularization terms. Taking the derivative of the objective J_2 in Eq. (3) with respect to B_t and setting it to 0, we obtain¹:

$$2 X_t X_t^T B_t - 2 X_t Y_t + 2\alpha D B_t + 2\beta \bar{D} B_t + 2\beta B_t \hat{D} = 0 , \qquad (4)$$

where D is a diagonal matrix with $D(i,i) = \frac{1}{2\sqrt{\sum_{t=1}^{T} \| b_t^i \|_2^2}}$, $\bar{D} = \frac{1}{2}\left( B_{(1)} B_{(1)}^T \right)^{-1/2}$ and $\hat{D} = \frac{1}{2}\left( B_{(2)} B_{(2)}^T \right)^{-1/2}$. We can rewrite Eq. (4) as follows:

$$\left( X_t X_t^T + \alpha D + \beta \bar{D} \right) B_t + \beta B_t \hat{D} = X_t Y_t , \qquad (5)$$

which is a Sylvester equation and can be solved in closed form. Letting t run from 1 to T, we can calculate B_t (1 ≤ t ≤ T) by solving Eq. (5).
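A minimal numpy sketch of this update on small synthetic data. The Sylvester equation (5) is solved here by Kronecker-product vectorization, which is adequate for the small d and c of a sketch, and a tiny ε guards the inverse square roots, mirroring the perturbation the paper introduces at non-smooth points; all dimensions and names are illustrative:

```python
import numpy as np

def solve_sylvester(A, B, Q):
    """Solve A X + X B = Q via vectorization:
    (I_c ⊗ A + B^T ⊗ I_d) vec(X) = vec(Q), with column-major vec."""
    d, c = Q.shape
    M = np.kron(np.eye(c), A) + np.kron(B.T, np.eye(d))
    return np.linalg.solve(M, Q.reshape(-1, order="F")).reshape((d, c), order="F")

def inv_sqrtm_psd(S, eps=1e-10):
    """(S)^(-1/2) for a symmetric PSD matrix via eigendecomposition,
    with eigenvalues clipped at eps for numerical safety."""
    w, V = np.linalg.eigh(S)
    return (V / np.sqrt(np.clip(w, eps, None))) @ V.T

def objective_J2(X, Y, Bs, alpha, beta):
    """Evaluate Eq. (3): loss + alpha * l2,1-norm + beta * (two trace norms)."""
    loss = sum(np.linalg.norm(X[t].T @ Bs[t] - Y[t], "fro") ** 2
               for t in range(len(Bs)))
    B1 = np.hstack(Bs)                    # unfold along the feature mode: d x (cT)
    B2 = np.hstack([B.T for B in Bs])     # unfold along the task mode:    c x (dT)
    l21 = np.sqrt((B1 ** 2).sum(axis=1)).sum()
    tr1 = np.linalg.svd(B1, compute_uv=False).sum()
    tr2 = np.linalg.svd(B2, compute_uv=False).sum()
    return loss + alpha * l21 + beta * (tr1 + tr2)

def update(X, Y, Bs, alpha, beta, eps=1e-10):
    """One reweighted iteration: recompute D, D_bar, D_hat from the current B,
    then solve the Sylvester equation (5) for each time point t."""
    B1 = np.hstack(Bs)
    B2 = np.hstack([B.T for B in Bs])
    D = np.diag(1.0 / (2.0 * np.sqrt((B1 ** 2).sum(axis=1) + eps)))
    D_bar = 0.5 * inv_sqrtm_psd(B1 @ B1.T, eps)
    D_hat = 0.5 * inv_sqrtm_psd(B2 @ B2.T, eps)
    return [solve_sylvester(X[t] @ X[t].T + alpha * D + beta * D_bar,
                            beta * D_hat,
                            X[t] @ Y[t])
            for t in range(len(Bs))]
```

Iterating `update` from a per-time-point ridge initialization decreases `objective_J2` at every step, which is the monotonicity the convergence analysis below establishes.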
Because D, $\bar{D}$ and $\hat{D}$ depend on B and can be seen as latent variables, we propose an iterative algorithm to obtain the globally optimal solution of B_t (1 ≤ t ≤ T), which is summarized in Algorithm 1.

¹ $\|M\|_{2,1}$ is a non-smooth function of M and not differentiable when one of its rows $m^i = 0$. Following [18], we introduce a small perturbation ζ > 0 and replace $\|M\|_{2,1}$ by $\sum_i \sqrt{\|m^i\|_2^2 + \zeta}$, which is smooth and differentiable with respect to M. Apparently, $\sum_i \sqrt{\|m^i\|_2^2 + \zeta}$ reduces to $\|M\|_{2,1}$ when ζ → 0. In the sequel of this paper, we implicitly apply this replacement for all ∥·∥_{2,1}. Following the same idea, we also introduce a small perturbation ξ > 0 and replace $\|M\|_*$ by $\mathrm{tr}\left(MM^T + \xi I\right)^{1/2}$ for the same reason.

Algorithm 1: A new algorithm to solve the optimization problem in Eq. (3).
Data: X = [X_1, X_2, . . . , X_T] ∈ R^{d×n×T}, Y = [Y_1, Y_2, . . . , Y_T] ∈ R^{n×c×T}.
1. Set g = 1. Initialize $B_t^{(1)} \in \mathbb{R}^{d \times c}$ (1 ≤ t ≤ T) using the linear regression results at each individual time point.
repeat
2. Calculate the diagonal matrix $D^{(g)}$, whose i-th diagonal element is $D^{(g)}(i,i) = \frac{1}{2\sqrt{\sum_{t=1}^{T} \| b_t^{(g),i} \|_2^2}}$; calculate $\bar{D}^{(g)} = \frac{1}{2}\left( B_{(1)}^{(g)} B_{(1)}^{(g)T} \right)^{-1/2}$; calculate $\hat{D}^{(g)} = \frac{1}{2}\left( B_{(2)}^{(g)} B_{(2)}^{(g)T} \right)^{-1/2}$.
3. Update $B_t^{(g+1)}$ (1 ≤ t ≤ T) by solving the Sylvester equation in Eq. (5).
4. g = g + 1.
until convergence
Result: B = [B_1, B_2, . . . , B_T] ∈ R^{d×c×T}.

Convergence analysis of the new algorithm. We first prove the following two useful lemmas, by which we will prove the convergence of Algorithm 1.

Lemma 1. Given a constant α > 0, for the function $f(x) = x - \frac{x^2}{2\alpha}$ we have f(x) ≤ f(α) for any x ∈ R. The equality holds if and only if x = α.

The proof of Lemma 1 is obvious and skipped due to the space limit.

Lemma 2. Given two semi-positive definite matrices A and $\tilde{A}$, the following inequality holds:

$$\mathrm{tr}\,\tilde{A}^{1/2} - \tfrac{1}{2}\,\mathrm{tr}\,\tilde{A}A^{-1/2} \;\le\; \mathrm{tr}\,A^{1/2} - \tfrac{1}{2}\,\mathrm{tr}\,AA^{-1/2} . \qquad (6)$$

The equality holds if and only if $A = \tilde{A}$.

Proof: Because A and $\tilde{A}$ are two semi-positive definite matrices and $\mathrm{tr}(A\tilde{A}) = \mathrm{tr}(\tilde{A}A)$, we can derive:

$$\mathrm{tr}\left( A^{1/2} - 2\tilde{A}^{1/2} + \tilde{A}A^{-1/2} \right) = \mathrm{tr}\left( A^{-1/4}\left( A + \tilde{A} - A^{1/2}\tilde{A}^{1/2} - \tilde{A}^{1/2}A^{1/2} \right) A^{-1/4} \right) = \mathrm{tr}\left( A^{-1/4}\left( A^{1/2} - \tilde{A}^{1/2} \right)^2 A^{-1/4} \right) = \left\| A^{-1/4}\left( A^{1/2} - \tilde{A}^{1/2} \right) \right\|_F^2 \ge 0 , \qquad (7)$$

by which we have the inequality $\mathrm{tr}\,\tilde{A}^{1/2} - \tfrac{1}{2}\mathrm{tr}\,\tilde{A}A^{-1/2} \le \tfrac{1}{2}\mathrm{tr}\,A^{1/2}$, which is equivalent to Eq. (6) and completes the proof of Lemma 2. □

Now we prove the convergence of Algorithm 1, which is summarized by the following theorem.

Theorem 1. Algorithm 1 monotonically decreases the objective of the problem in Eq. (3) in each iteration, and converges to the globally optimal solution.

Proof: In Algorithm 1, we denote the updated B_t in each iteration as $\tilde{B}_t$, and the least-squares loss in the g-th iteration as $L^{(g)} = \sum_{t=1}^{T} \| X_t^T B_t^{(g)} - Y_t \|_F^2$. According to Step 3 of Algorithm 1, the following inequality holds:

$$L^{(g+1)} + \alpha \sum_{t=1}^{T} \mathrm{tr}\, \tilde{B}_t^T D \tilde{B}_t + \beta \sum_{t=1}^{T} \mathrm{tr}\, \tilde{B}_t^T \bar{D} \tilde{B}_t + \beta \sum_{t=1}^{T} \mathrm{tr}\, \tilde{B}_t \hat{D} \tilde{B}_t^T \;\le\; L^{(g)} + \alpha \sum_{t=1}^{T} \mathrm{tr}\, B_t^T D B_t + \beta \sum_{t=1}^{T} \mathrm{tr}\, B_t^T \bar{D} B_t + \beta \sum_{t=1}^{T} \mathrm{tr}\, B_t \hat{D} B_t^T . \qquad (8)$$

Denote the updated $B_{(1)}$ as $\tilde{B}_{(1)}$ and the updated $B_{(2)}$ as $\tilde{B}_{(2)}$; from Eq. (8) we can derive:

$$L^{(g+1)} + \alpha\, \mathrm{tr}\, \tilde{B}_{(1)}^T D \tilde{B}_{(1)} + \beta\, \mathrm{tr}\, \tilde{B}_{(1)} \tilde{B}_{(1)}^T \bar{D} + \beta\, \mathrm{tr}\, \tilde{B}_{(2)} \tilde{B}_{(2)}^T \hat{D} \;\le\; L^{(g)} + \alpha\, \mathrm{tr}\, B_{(1)}^T D B_{(1)} + \beta\, \mathrm{tr}\, B_{(1)} B_{(1)}^T \bar{D} + \beta\, \mathrm{tr}\, B_{(2)} B_{(2)}^T \hat{D} . \qquad (9)$$

According to the definitions of D, $\bar{D}$ and $\hat{D}$, we have:

$$L^{(g+1)} + \frac{\alpha}{2} \sum_{k=1}^{d} \frac{\sum_{t=1}^{T} \| b_t^{(g+1),k} \|_2^2}{\sqrt{\sum_{t=1}^{T} \| b_t^{(g),k} \|_2^2}} + \frac{\beta}{2}\, \mathrm{tr}\, \tilde{B}_{(1)} \tilde{B}_{(1)}^T \left( B_{(1)} B_{(1)}^T \right)^{-1/2} + \frac{\beta}{2}\, \mathrm{tr}\, \tilde{B}_{(2)} \tilde{B}_{(2)}^T \left( B_{(2)} B_{(2)}^T \right)^{-1/2} \;\le\; L^{(g)} + \frac{\alpha}{2} \sum_{k=1}^{d} \frac{\sum_{t=1}^{T} \| b_t^{(g),k} \|_2^2}{\sqrt{\sum_{t=1}^{T} \| b_t^{(g),k} \|_2^2}} + \frac{\beta}{2}\, \mathrm{tr}\, B_{(1)} B_{(1)}^T \left( B_{(1)} B_{(1)}^T \right)^{-1/2} + \frac{\beta}{2}\, \mathrm{tr}\, B_{(2)} B_{(2)}^T \left( B_{(2)} B_{(2)}^T \right)^{-1/2} . \qquad (10)$$

Then according to Lemma 1 and Lemma 2, the following three inequalities hold:

$$\sqrt{\sum_{t=1}^{T} \| b_t^{(g+1),k} \|_2^2} - \frac{\sum_{t=1}^{T} \| b_t^{(g+1),k} \|_2^2}{2\sqrt{\sum_{t=1}^{T} \| b_t^{(g),k} \|_2^2}} \;\le\; \sqrt{\sum_{t=1}^{T} \| b_t^{(g),k} \|_2^2} - \frac{\sum_{t=1}^{T} \| b_t^{(g),k} \|_2^2}{2\sqrt{\sum_{t=1}^{T} \| b_t^{(g),k} \|_2^2}} , \qquad (11)$$

$$\mathrm{tr}\left( \tilde{B}_{(1)} \tilde{B}_{(1)}^T \right)^{1/2} - \frac{1}{2}\, \mathrm{tr}\, \tilde{B}_{(1)} \tilde{B}_{(1)}^T \left( B_{(1)} B_{(1)}^T \right)^{-1/2} \;\le\; \mathrm{tr}\left( B_{(1)} B_{(1)}^T \right)^{1/2} - \frac{1}{2}\, \mathrm{tr}\, B_{(1)} B_{(1)}^T \left( B_{(1)} B_{(1)}^T \right)^{-1/2} , \qquad (12)$$

$$\mathrm{tr}\left( \tilde{B}_{(2)} \tilde{B}_{(2)}^T \right)^{1/2} - \frac{1}{2}\, \mathrm{tr}\, \tilde{B}_{(2)} \tilde{B}_{(2)}^T \left( B_{(2)} B_{(2)}^T \right)^{-1/2} \;\le\; \mathrm{tr}\left( B_{(2)} B_{(2)}^T \right)^{1/2} - \frac{1}{2}\, \mathrm{tr}\, B_{(2)} B_{(2)}^T \left( B_{(2)} B_{(2)}^T \right)^{-1/2} . \qquad (13)$$

Adding both sides of Eqs. (10)–(13) together, we obtain:

$$L^{(g+1)} + \alpha \sum_{k=1}^{d} \sqrt{\sum_{t=1}^{T} \| b_t^{(g+1),k} \|_2^2} + \beta\, \mathrm{tr}\left( \tilde{B}_{(1)} \tilde{B}_{(1)}^T \right)^{1/2} + \beta\, \mathrm{tr}\left( \tilde{B}_{(2)} \tilde{B}_{(2)}^T \right)^{1/2} \;\le\; L^{(g)} + \alpha \sum_{k=1}^{d} \sqrt{\sum_{t=1}^{T} \| b_t^{(g),k} \|_2^2} + \beta\, \mathrm{tr}\left( B_{(1)} B_{(1)}^T \right)^{1/2} + \beta\, \mathrm{tr}\left( B_{(2)} B_{(2)}^T \right)^{1/2} . \qquad (14)$$

Thus, our algorithm decreases the objective value of Eq. (3) in each iteration. When the objective value remains unchanged, Eq. (4) is satisfied, i.e., the K.K.T. condition of the objective is satisfied, and our algorithm has reached one of the optimal solutions. Because the objective in Eq. (3) is convex, Algorithm 1 converges to the globally optimal solution. □

3 Experiments

We evaluate the proposed method by applying it to the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort to examine the association between a wide range of imaging measures and two types of cognitive measures over a certain period of time. Our goal is to discover a compact set of imaging markers that are closely related to cognitive trajectories.

Imaging markers and cognitive measures. Data used in this work were obtained from the ADNI database (adni.loni.ucla.edu). One goal of ADNI has been to test whether serial MRI, PET, other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of Mild Cognitive Impairment (MCI) and early AD. For up-to-date information, see www.adni-info.org. We downloaded 1.5 T MRI scans and demographic information for 821 ADNI-1 participants. We performed voxel-based morphometry (VBM) on the MRI data by following [8], and extracted mean modulated gray matter (GM) measures for 90 target regions of interest (ROIs) (see Figure 3 for the ROI list and detailed definitions of these ROIs in [3]). These measures were adjusted for the baseline intracranial volume (ICV) using the regression weights derived from the healthy control (HC) participants at the baseline.
We also downloaded the longitudinal scores of the participants in two independent cognitive assessments, the Fluency Test and Rey’s Auditory Verbal Learning Test (RAVLT). The details of these cognitive assessments can be found in the ADNI procedure manuals (http://www.adni-info.org/Scientists/ProceduresManuals.aspx). The time points examined in this study for both imaging markers and cognitive assessments included baseline (BL), Month 6 (M6), Month 12 (M12) and Month 24 (M24). All the participants with no missing BL/M6/M12/M24 MRI measurements and cognitive measures were included in this study. A total of 417 subjects were involved in our study, including 84 AD, 191 MCI and 142 HC participants. We examined 3 RAVLT scores (RAVLT TOTAL, RAVLT TOT6 and RAVLT RECOG) and 2 Fluency scores (FLU ANIM and FLU VEG).

3.1 Improved Cognitive Score Prediction from Longitudinal Imaging Markers

We first evaluate the proposed method by applying it to the ADNI cohort for predicting the two types of cognitive scores using the VBM markers, tracked over four different time points. Our goal in this experiment is to improve the prediction performance.

Table 1: Performance comparison for memory score prediction measured by RMSE.

           LR     RR     TGL    Ours (ℓ2,1-norm only)  Ours (trace norm only)  Ours
RAVLT     0.380  0.341  0.318  0.306                  0.301                   0.283
Fluency   0.171  0.165  0.155  0.144                  0.147                   0.135

Experimental setting. We compare the proposed method against its two close counterparts: multivariate linear regression (LR) and ridge regression (RR). LR is the simplest and most widely used regression model in statistical learning and brain image analysis. RR is a regularized version of LR to avoid over-fitting. Due to their mathematical nature, these two methods are performed for each cognitive measure at each time point separately, and thus they cannot make use of the temporal correlation.
We also compare our method to a recent longitudinal method called Temporal Group Lasso Multi-Task Regression (TGL) [9]. TGL takes into account the longitudinal property of the data but is designed to analyze only one single memory score at a time. In contrast, besides imposing structured sparsity via tensor ℓ2,1-norm regularization for imaging marker selection, our new method also imposes two trace norm regularizations to capture the interrelationships among different cognitive measures over the temporal dimension. Thus, the proposed method is able to perform the association study for all the relevant scores of a cognitive test at the same time; e.g., our method can simultaneously deal with the three RAVLT scores, or the two Fluency scores. To evaluate the usefulness of each component of the proposed method, we implement three versions of our method as follows. First, we only impose the ℓ2,1-norm regularization on the coefficient tensor B unfolded along the feature mode, denoted as “ℓ2,1-norm only”. Second, we only impose the trace norm regularizations on the two coefficient matrices unfolded from the coefficient tensor B along the feature and task modes respectively, denoted as “trace norm only”. Finally, we implement the full version of our new method that solves the proposed objective in Eq. (3). Note that, if no regularization is imposed, our method degenerates to the traditional LR method. To measure prediction performance, we use a standard 5-fold cross-validation strategy, computing the root mean square error (RMSE) between the predicted and actual values of the cognitive scores on the testing data only. Specifically, the whole set of subjects is equally and randomly partitioned into five subsets; each time the subjects within one subset are selected as the testing samples and all other subjects in the remaining four subsets are used for training the regression models.
This process is repeated five times and average results are reported in Table 1. To treat all regression tasks equally, data for each response variable are normalized to have zero mean and unit variance.

Experimental results. From Table 1 we can see that the proposed method is consistently better than the three competing methods, which can be attributed to the following reasons. First, because the LR and RR methods by nature can only deal with one individual cognitive measure at one single time point at a time, they cannot benefit from the correlations across different cognitive measures over the entire time course. Second, although the TGL method improves on the previous two methods in that it does take longitudinal data patterns into account, it still assumes all the test scores (i.e., learning tasks) from one cognitive assessment to be independent, which is not true in reality. For example, it is well known [3, 4] that in the RAVLT assessment, the total number of words remembered by the participants in the first 5 learning trials (RAVLT TOTAL) heavily impacts the total number of words which can be recalled in the 6th learning trial (RAVLT TOT6), and the results of these two measures both partially determine the final recognition rate after a 30 minutes delay (RAVLT RECOG). In contrast, our new method considers all c learning tasks (c = 3 for the RAVLT assessment and c = 2 for the Fluency assessment) as an integral learning object as formulated in Eq. (3), such that their correlations can be incorporated by the two imposed low-rank regularization terms. Besides, we also observe that the two degenerated versions of the proposed method do not perform as well as their full-version counterpart, which provides concrete evidence to support the necessity of the component terms of our learning objective in Eq. (3) and justifies our motivation to impose ℓ2,1-norm regularization for feature selection and trace norm regularization to capture task correlations.
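The evaluation protocol above (5 random folds, test-set RMSE averaged over folds, standardized responses) can be sketched generically; `fit` and `predict` stand in for whichever regression model is being evaluated, and all names here are illustrative:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between actual and predicted scores."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def five_fold_rmse(X, Y, fit, predict, seed=0):
    """5-fold cross-validation: average test RMSE over the five folds.

    X: (n, p) features; Y: (n, c) standardized response scores.
    fit(X_train, Y_train) returns a model; predict(model, X_test) returns
    predictions on the held-out subjects only."""
    n = X.shape[0]
    idx = np.random.default_rng(seed).permutation(n)  # random equal partition
    folds = np.array_split(idx, 5)
    errs = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        model = fit(X[train], Y[train])
        errs.append(rmse(Y[test], predict(model, X[test])))
    return float(np.mean(errs))
```

For example, plugging in a ridge fit `lambda Xtr, Ytr: np.linalg.solve(Xtr.T @ Xtr + a * np.eye(Xtr.shape[1]), Xtr.T @ Ytr)` with `predict = lambda W, Xte: Xte @ W` reproduces the RR baseline's evaluation loop.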
3.2 Identification of Longitudinal Imaging Markers

One of the primary goals of our regression analysis is to identify a subset of imaging markers which are highly correlated to the AD progression reflected by the cognitive changes over time. We therefore examine the imaging markers identified by the proposed method with respect to the longitudinal changes encoded by the cognitive scores recorded at the four consecutive time points.

[Figure 3 shows a heat map over the 90 VBM ROIs (rows, e.g. left/right amygdala, hippocampus, parahippocampal gyrus, thalamus) at the four time points BL, M6, M12 and M24 (columns), with a color scale from 0.001 to 0.006.]
Figure 3: Top panel: Average regression weights of imaging markers for predicting three RAVLT memory scores. Bottom panel: Top 10 average weights mapped onto the brain.
Shown in Figure 3 are (1) the heat map of the learned weights (magnitudes of the average regression weights for all three RAVLT scores at each time point) of the VBM measures at different time points calculated by our method; and (2) the top 10 weights mapped onto the brain anatomy. A first glance at the heat map in Figure 3 indicates that the selected imaging markers have clear patterns that span all four studied time points, which demonstrates that these markers are longitudinally stable and can thereby potentially serve as screening targets over the course of AD progression. Moreover, we observe that the bilateral hippocampi and parahippocampal gyri are among the top selected features. These findings are in accordance with the existing knowledge that, in the pathological pathway of AD, the medial temporal lobe is affected first, followed by progressive neocortical damage [19, 20]. Evidence of significant atrophy of the middle temporal region in AD patients has also been reported in previous studies [21, 22, 23]. In summary, the identified longitudinally stable imaging markers are highly suggestive and strongly agree with existing research findings, which supports the validity of the discovered imaging-cognition associations in revealing the complex relationships between MRI measures and cognitive scores. This is important for both theoretical research and clinical practice toward a better understanding of the AD mechanism.

4 Conclusion
To reveal the relationship between longitudinal cognitive measures and neuroimaging markers, we have proposed a novel high-order multi-task feature learning model, which selects the longitudinal imaging markers that can accurately predict cognitive measures at all the time points. As a result, these imaging markers can fully differentiate the entire longitudinal trajectory of the relevant cognitive measures and better capture the associations between imaging markers and cognitive changes over time.
To solve our new objective, which uses non-smooth structured sparsity-inducing norms, we have derived an iterative algorithm with a closed-form solution in each iteration. We have further proved that our algorithm converges to the global optimal solution. The validations using ADNI imaging and cognitive data have demonstrated the promise of our method.

Acknowledgement. This work was supported by NSF CCF-0830780, CCF-0917274, DMS-0915228, and IIS-1117965 at UTA; and by NSF IIS-1117335, NIH R01 LM011360, UL1 RR025761, U01 AG024904, RC2 AG036535, R01 AG19771, and P30 AG10133-18S1 at IU. Data used in the work were obtained from the ADNI database. ADNI funding information is available at http://adni.loni.ucla.edu/wp-content/uploads/how_to_apply/ADNI_DSP_Policy.pdf.

References
[1] C. Hinrichs, V. Singh, G. Xu, S.C. Johnson, and ADNI. Predictive markers for AD in a multi-modality framework: an analysis of MCI progression in the ADNI population. Neuroimage, 55(2):574–89, 2011.
[2] C.M. Stonnington, C. Chu, S. Kloppel, et al. Predicting clinical scores from magnetic resonance scans in Alzheimer's disease. Neuroimage, 51(4):1405–13, 2010.
[3] L. Shen, S. Kim, et al. Whole genome association study of brain-wide imaging phenotypes for identifying quantitative trait loci in MCI and AD: A study of the ADNI cohort. Neuroimage, 2010.
[4] H. Wang, F. Nie, H. Huang, S. Risacher, C. Ding, A.J. Saykin, L. Shen, et al. Sparse multi-task regression and feature selection to identify brain imaging predictors for memory performance. In ICCV, 2011.
[5] D. Zhang and D. Shen. Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease. Neuroimage, 2011.
[6] H. Wang, F. Nie, H. Huang, S. Kim, K. Nho, S. Risacher, A. Saykin, and L. Shen. Identifying quantitative trait loci via group-sparse multi-task regression and feature selection: An imaging genetics study of the ADNI cohort. Bioinformatics, 28(2):229–237, 2012.
[7] H. Wang, F. Nie, H. Huang, S. Risacher, A. Saykin, and L. Shen. Identifying disease sensitive and quantitative trait relevant biomarkers from multi-dimensional heterogeneous imaging genetics data via sparse multi-modal multi-task learning. Bioinformatics, 28(18):i127–i136, 2012.
[8] S.L. Risacher, L. Shen, J.D. West, S. Kim, B.C. McDonald, L.A. Beckett, D.J. Harvey, C.R. Jack Jr., M.W. Weiner, A.J. Saykin, and ADNI. Longitudinal MRI atrophy biomarkers: relationship to conversion in the ADNI cohort. Neurobiol Aging, 31(8):1401–18, 2010.
[9] J. Zhou, L. Yuan, J. Liu, and J. Ye. A multi-task learning formulation for predicting disease progression. In SIGKDD, 2011.
[10] H. Wang, F. Nie, H. Huang, J. Yan, S. Kim, K. Nho, S. Risacher, A. Saykin, and L. Shen. From phenotype to genotype: An association study of candidate phenotypic markers to Alzheimer's disease relevant SNPs. Bioinformatics, 28(12):i619–i625, 2012.
[11] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. NIPS, pages 41–48, 2007.
[12] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Technical report, Department of Statistics, University of California, Berkeley, 2006.
[13] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68(1):49–67, 2006.
[14] H. Wang, F. Nie, H. Huang, S. Risacher, A. Saykin, and L. Shen. Identifying AD-sensitive and cognition-relevant imaging biomarkers via joint classification and regression. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2011), pages 115–123, 2011.
[15] B. Recht, M. Fazel, and P.A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. arXiv preprint arXiv:0706.4138, 2007.
[16] E.J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[17] E.J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[18] I.F. Gorodnitsky and B.D. Rao. Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Transactions on Signal Processing, 45(3):600–616, 1997.
[19] H. Braak and E. Braak. Neuropathological stageing of Alzheimer-related changes. Acta Neuropathologica, 82(4):239–259, 1991.
[20] A. Delacourte, J.P. David, N. Sergeant, L. Buee, A. Wattez, P. Vermersch, F. Ghozali, C. Fallet-Bianco, F. Pasquier, F. Lebert, et al. The biochemical pathway of neurofibrillary degeneration in aging and Alzheimer's disease. Neurology, 52(6):1158, 1999.
[21] L.G. Apostolova, P.H. Lu, S. Rogers, R.A. Dutton, K.M. Hayashi, A.W. Toga, J.L. Cummings, and P.M. Thompson. 3D mapping of mini-mental state examination performance in clinical and preclinical Alzheimer disease. Alzheimer Disease & Associated Disorders, 20(4):224, 2006.
[22] A. Convit, J. De Asis, M.J. De Leon, C.Y. Tarshish, S. De Santi, and H. Rusinek. Atrophy of the medial occipitotemporal, inferior, and middle temporal gyri in non-demented elderly predict decline to Alzheimer's disease. Neurobiology of Aging, 21(1):19–26, 2000.
[23] V. Julkunen, E. Niskanen, S. Muehlboeck, M. Pihlajamäki, M. Könönen, M. Hallikainen, M. Kivipelto, S. Tervo, R. Vanninen, A. Evans, et al. Cortical thickness analysis to detect progressive mild cognitive impairment: a reference to Alzheimer's disease. Dementia and Geriatric Cognitive Disorders, 28(5):404–412, 2009.
Clustering Aggregation as Maximum-Weight Independent Set Nan Li Longin Jan Latecki Department of Computer and Information Sciences Temple University, Philadelphia, USA {nan.li,latecki}@temple.edu Abstract We formulate clustering aggregation as a special instance of Maximum-Weight Independent Set (MWIS) problem. For a given dataset, an attributed graph is constructed from the union of the input clusterings generated by different underlying clustering algorithms with different parameters. The vertices, which represent the distinct clusters, are weighted by an internal index measuring both cohesion and separation. The edges connect the vertices whose corresponding clusters overlap. Intuitively, an optimal aggregated clustering can be obtained by selecting an optimal subset of non-overlapping clusters partitioning the dataset together. We formalize this intuition as the MWIS problem on the attributed graph, i.e., finding the heaviest subset of mutually non-adjacent vertices. This MWIS problem exhibits a special structure. Since the clusters of each input clustering form a partition of the dataset, the vertices corresponding to each clustering form a maximal independent set (MIS) in the attributed graph. We propose a variant of simulated annealing method that takes advantage of this special structure. Our algorithm starts from each MIS, which is close to a distinct local optimum of the MWIS problem, and utilizes a local search heuristic to explore its neighborhood in order to find the MWIS. Extensive experiments on many challenging datasets show that: 1. our approach to clustering aggregation automatically decides the optimal number of clusters; 2. it does not require any parameter tuning for the underlying clustering algorithms; 3. it can combine the advantages of different underlying clustering algorithms to achieve superior performance; 4. it is robust against moderate or even bad input clusterings. 
1 Introduction
Clustering is a fundamental problem in data analysis, and has extensive applications in statistics, data mining, computer vision and even in social sciences. The goal is to partition the data objects into a set of groups (clusters) such that objects in the same group are similar, while objects in different groups are dissimilar. In the past two decades, many different clustering algorithms have been developed. Some popular ones include K-means, DBSCAN, Ward's algorithm, EM-clustering and so on. However, there are potential shortcomings for each of the known clustering algorithms. For instance, K-means [7] and its variations have difficulty detecting "natural" clusters, which have non-spherical shapes or widely different sizes or densities. Furthermore, in order to achieve good performance, they require an appropriate number of clusters as the input parameter, which is usually very hard to specify. DBSCAN [8], a density-based clustering algorithm, can detect clusters of arbitrary shapes and sizes. However, it has trouble with data that have widely varying densities. Also, DBSCAN requires two input parameters specified by the user: the radius, Eps, to define the neighborhood of each data object, and the minimum number, minPts, of data objects required to form a cluster. Consensus clustering, also called clustering aggregation or clustering ensemble, refers to a family of methods that try to find a single (consensus) superior clustering from a number of input clusterings obtained by different algorithms with different parameters. The basic motivation of these methods is to combine the advantages of different clustering algorithms and overcome their respective shortcomings. Besides generating stable and robust clusterings, consensus clustering methods can be applied in many other scenarios, such as categorical data clustering, "privacy-preserving" clustering and so on. Some representative methods include [1, 2, 9, 11, 12, 13, 14].
[2] formulates clustering ensemble as a combinatorial optimization problem in terms of shared mutual information. That is, the relationship between each pair of data objects is measured based on their cluster labels from the multiple input clusterings, rather than the original features. Then a graph representation is constructed according to these relationships, and finding a single consolidated clustering is reduced to a graph partitioning problem. Similarly, in [1], a number of deterministic approximation algorithms are proposed to find an "aggregated" clustering which agrees as much as possible with the input clusterings. [9] also applies a similar idea to combine multiple runs of the K-means algorithm. [11] proposes to capture the notion of agreement using a measure based on a 2D string encoding. They derive a nonlinear optimization model to maximize the new agreement measure and transform it into a strict 0-1 Semidefinite Program. [12] presents three iterative EM-like algorithms for the consensus clustering problem. A common feature of these consensus clustering methods is that they usually do not access the original features of the data objects. They utilize the cluster labels in different input clusterings as the new features of each data object to find an optimal clustering. Consequently, the success of these consensus clustering methods heavily relies on the premise that the majority of the input clusterings are reasonably good and consistent, which is often not the case in practice. For example, given a new challenging dataset, it is probable that only a few of the chosen underlying clustering algorithms can generate good clusterings. Many moderate or even bad input clusterings can mislead the final "consensus". Furthermore, even if we choose the appropriate underlying clustering algorithms, in order to obtain good input clusterings, we still have to specify the appropriate input parameters.
Therefore, it is desirable to devise new consensus clustering methods which are more robust and do not need the optimal input parameters to be specified. In this paper, our definition of "clustering aggregation" is different. Informally, for each of the clusters in the input clusterings, we evaluate its quality with some internal indices measuring both cohesion and separation. Then we select an optimal subset of clusters, which partition the dataset together and have the best overall quality, as the "aggregated clustering". (We give a formal statement of our "clustering aggregation" problem in Sec. 2.) In this framework, ideally, we can find the optimal "aggregated clustering" even if only a minority of the input clusterings are good enough. Therefore, we only need to specify an appropriate range of the input parameters, rather than the optimal values, for the underlying clustering algorithms. We formulate this "clustering aggregation" problem as a special instance of the Maximum-Weight Independent Set (MWIS) problem. An attributed graph is constructed from the union of the input clusterings. The vertices, which represent the distinct clusters, are weighted by an internal index measuring both cohesion and separation. The edges connect the vertices whose corresponding clusters overlap (in practice, we may tolerate a relatively small amount of overlap for robustness). Then selecting an optimal subset of non-overlapping clusters partitioning the dataset together can be formulated as seeking the MWIS of the attributed graph, which is the heaviest subset of mutually non-adjacent vertices. Moreover, this MWIS problem exhibits a special structure. Since the clusters of each input clustering form a partition of the dataset, the vertices corresponding to each clustering form a maximal independent set (MIS) in the attributed graph. The most important source of motivation for our work is [3]. In [3], image segmentation is formulated as a MWIS problem.
Specifically, given an image, they first segment it with different bottom-up segmentation schemes to get an ensemble of distinct superpixels. Then they select a subset of the most "meaningful" non-overlapping superpixels to partition the image. This selection procedure is formulated as solving a MWIS problem. In this respect, our work is very similar to [3]. The only difference is that our work applies the MWIS formulation to a more general problem, clustering aggregation. The MWIS problem is known to be NP-hard. Many heuristic approaches have been proposed to find approximate solutions. As we mentioned before, in the context of clustering aggregation, the formulated MWIS problem exhibits a special structure. That is, the vertices corresponding to each clustering form a maximal independent set (MIS) in the attributed graph. This special structure is valuable for finding good approximations to the MWIS because, although these MISs may not be the global optimum of the MWIS, they are close to distinct local optima. We propose a variant of the simulated annealing method that takes advantage of this special structure. Our algorithm starts from each MIS and utilizes a local search heuristic to explore its neighborhood in order to find better approximations to the MWIS. The best solution found in this process is returned as the final approximate MWIS. Since the exploration for each MIS is independent, our algorithm is suitable for parallel computation. Finally, since the selected clusters may not be able to cover the entire dataset, our approach performs a post-processing step to assign the missing data objects to their nearest clusters. Extensive experiments on many challenging datasets show that: 1. our approach to clustering aggregation automatically decides the optimal number of clusters; 2. it does not require any parameter tuning for the underlying clustering algorithms; 3.
it can combine the advantages of different underlying clustering algorithms to achieve superior performance; 4. it is robust against moderate or even bad input clusterings. Paper Organization. In Sec. 2, we present the formal statement of the clustering aggregation problem and its formulation as a special instance of the MWIS problem. In Sec. 3, we present our algorithm. The experimental evaluations and conclusion are given in Sec. 4 and Sec. 5 respectively.

2 MWIS Formulation of Clustering Aggregation
Consider a set of n data objects $D = \{d_1, d_2, ..., d_n\}$. A clustering $C_i$ of D is obtained by applying an exclusive clustering algorithm with a specific set of input parameters on D. The disjoint clusters $c_{i1}, c_{i2}, ..., c_{ik}$ of $C_i$ are a partition of D, i.e., $\bigcup_{j=1}^{k} c_{ij} = D$ and $c_{ip} \cap c_{iq} = \emptyset$ for all $p \neq q$. With different clustering algorithms and different parameters, we can obtain a set of m different clusterings of D: $C_1, C_2, ..., C_m$. For each cluster $c_{ij}$ in the union of these m clusterings, we evaluate its quality with an internal index measuring both cohesion and separation. We use the average silhouette coefficient of a cluster as such an internal index in this paper. The silhouette coefficient is defined for an individual data object. It is a measure of how similar that data object is to data objects in its own cluster compared to data objects in other clusters. Formally, the silhouette coefficient for the t-th data object, $S_t$, is defined as

$S_t = \dfrac{b_t - a_t}{\max(a_t, b_t)}$  (1)

where $a_t$ is the average distance from the t-th data object to the other data objects in the same cluster as t, and $b_t$ is the minimum average distance from the t-th data object to data objects in a different cluster, minimized over clusters. The silhouette coefficient ranges from -1 to +1 and a positive value is desirable. The quality of a particular cluster $c_{ij}$ can be evaluated with the average of the silhouette coefficients of the data objects belonging to it.
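The silhouette coefficient of Eq. (1) can be sketched in plain NumPy as follows. Euclidean distance is used here for concreteness; the experiments later in the paper use MATLAB's default squared Euclidean distance, so this is an illustrative variant, not the exact setup:

```python
import numpy as np

def silhouette_coefficients(X, labels):
    # Eq. (1): S_t = (b_t - a_t) / max(a_t, b_t), where
    #   a_t = average distance from object t to the others in its own cluster,
    #   b_t = minimum over other clusters of the average distance to that cluster.
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    S = np.zeros(len(X))
    for t in range(len(X)):
        own = labels == labels[t]
        a = D[t, own].sum() / max(own.sum() - 1, 1)  # exclude t itself
        b = min(D[t, labels == c].mean()
                for c in np.unique(labels) if c != labels[t])
        S[t] = (b - a) / max(a, b)
    return S
```

For two tight, well-separated clusters the coefficients are close to +1, matching the "positive value is desirable" guidance above.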
$ASC_{c_{ij}} = \dfrac{\sum_{t \in c_{ij}} S_t}{|c_{ij}|}$  (2)

where $S_t$ is the silhouette coefficient of the t-th data object in cluster $c_{ij}$, and $|c_{ij}|$ is the cardinality of cluster $c_{ij}$. We select an optimal subset of non-overlapping clusters from the union of all the clusterings, which partition the dataset together and have the best overall quality, as the "aggregated clustering". The selection of clusters is formulated as a special instance of the Maximum-Weight Independent Set (MWIS) problem. Formally, consider an undirected and weighted graph $G = (V, E)$, where $V = \{1, 2, ..., n\}$ is the vertex set and $E \subseteq V \times V$ is the edge set. For each vertex $i \in V$, a positive weight $w_i$ is associated with i. $A = (a_{ij})_{n \times n}$ is the adjacency matrix of G, where $a_{ij} = 1$ if $(i, j) \in E$ is an edge of G, and $a_{ij} = 0$ if $(i, j) \notin E$. A subset of V can be represented by an indicator vector $x = (x_i) \in \{0, 1\}^n$, where $x_i = 1$ means that i is in the subset, and $x_i = 0$ means that i is not in the subset. An independent set is a subset of V whose elements are pairwise nonadjacent. Then finding a maximum-weight independent set, denoted as $x^*$, can be posed as the following:

$x^* = \arg\max_x w^T x$, s.t. $\forall i \in V: x_i \in \{0, 1\}$, $x^T A x = 0$  (3)

The weight $w_i$ on vertex i is defined as:

$w_i = ASC_{c_i} \times |c_i|$  (4)

where $c_i$ is the cluster represented by vertex i, and $ASC_{c_i}$ and $|c_i|$ are its quality measure and cardinality respectively. Our problem (3) is a special instance of the MWIS problem, since graph G exhibits an additional structure, which we will utilize in the proposed algorithm. The vertex set V can be partitioned into disjoint subsets $P = \{P_1, P_2, ..., P_m\}$, where $P_i$ corresponds to the clustering $C_i$, such that each $P_i$ is also a maximal independent set (MIS), which means it is not a subset of any other independent set. This follows from the fact that each clustering $C_i$ is a partition of the dataset D.
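The constraint and objective of Eq. (3) are easy to check directly: $x^T A x = 0$ holds exactly when no two selected vertices are adjacent, and the objective is the selected weight $w^T x$. A small sketch (the toy graph is illustrative, not from the paper):

```python
import numpy as np

def is_independent(x, A):
    # Eq. (3) constraint: x^T A x = 0 iff no two selected vertices share an edge.
    x = np.asarray(x)
    return float(x @ A @ x) == 0.0

def set_weight(x, w):
    # Eq. (3) objective: total weight w^T x of the selected vertices.
    return float(np.dot(w, x))

# Toy graph: vertices 0 and 1 are adjacent (overlapping clusters), 2 is isolated.
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]])
w = np.array([2.0, 3.0, 1.0])
assert is_independent([0, 1, 1], A) and not is_independent([1, 1, 0], A)
```

Here the MWIS is {1, 2} with weight 4.0; selecting both overlapping clusters 0 and 1 violates the quadratic constraint.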
Formally,

$\bigcup_{i=1}^{m} P_i = V$, $P_i \cap P_j = \emptyset$ for $i \neq j$, and each $P_i$ is an MIS, $\forall i, j \in \{1, 2, ..., m\}$  (5)

3 Our Algorithm
The basic idea of our algorithm is to explore the neighborhood of each known MIS $P_i$ independently with a local search heuristic in order to find better solutions. The proposed algorithm is an instance of simulated annealing methods [10] with multiple initializations. Our algorithm starts with a particular MIS $P_i$, denoted by $x_0$. $x_{t+1}$, which is a neighbor of $x_t$, is obtained by replacing some lower-weight vertices in $x_t$ with higher-weight vertices under the constraint of always being an independent set. Specifically, we first reduce $x_t$ by removing a proportion q of lower-weight vertices. Here we remove a proportion, rather than a fixed number, of vertices in order to make the reduction adaptive with respect to the number s of vertices in $x_t$. In practice, we use $\lceil s \times q \rceil$ to make sure at least one vertex will be removed. Note that this step is probabilistic, rather than deterministic. The probability that a vertex i will be retained is proportional to its WD value, which is defined as follows:

$WD_i = \dfrac{w_i}{\sum_{j \in N_i} w_j}$  (6)

where $N_i$ is the set of vertices which are connected with vertex i in G. Intuitively, a larger WD value indicates a larger weight, less conflict with other vertices, or both. Therefore, the obtained $x'_t$ is likely to contain vertices with large weights and have large potential room for improvement. The proportion parameter q is used to control the "radius" of the neighborhood to be explored. Then our algorithm iteratively improves $x'_t$ by adding compatible vertices one by one. In each iteration, it first identifies all the vertices compatible with the existing ones in the current $x'_t$, called candidates. Then a "local" measure $WD'$ is calculated to evaluate each of these candidates:

$WD'_i = \dfrac{w_i}{\sum_{j \in N'_i} w_j}$  (7)

where $N'_i$ is the set of candidate vertices which are connected with vertex i.
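Eqs. (6) and (7) share the same form (only the neighborhood set differs), so one routine covers both. A minimal sketch; the convention of giving an isolated vertex infinite WD (no conflict at all) is our assumption, since the paper leaves that case unspecified:

```python
import numpy as np

def wd_values(w, A):
    # Eq. (6): WD_i = w_i / sum_{j in N_i} w_j, where N_i are i's neighbors
    # (A is the 0/1 adjacency matrix, so A @ w gives the neighbor weight sums).
    # An isolated vertex conflicts with nothing, so we assign it infinite WD
    # (an assumption; the paper does not address this case).
    w = np.asarray(w, dtype=float)
    A = np.asarray(A, dtype=float)
    denom = A @ w
    out = np.full_like(w, np.inf)
    nz = denom > 0
    out[nz] = w[nz] / denom[nz]
    return out
```

With Eq. (7), the same function is applied with A restricted to the current candidate set.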
A large value of $WD'_i$ indicates that candidate i either can bring a large improvement this time (numerator) or has little conflict with further improvements (denominator), or both. The candidate with the largest $WD'$ value is added to $x'_t$. In the next iteration, this new $x'_t$ will be further improved. This iterative procedure continues until $x'_t$ cannot be further improved. We obtain $x'_t$ as a randomized neighbor of $x_t$.

Algorithm 1
Input: Graph G, weights w, adjacency matrix A, the known MISs $P = \{P_1, P_2, ..., P_m\}$
Output: An approximate solution to MWIS
1  Calculate WD for each vertex;
2  for each MIS $P_i$ do
3      Initialize $x_0$ with $P_i$;
4      for t = 1, 2, ..., n do
5          Reduce $x_t$ to $x'_t$ probabilistically by removing a proportion q of vertices with relatively lower WD values;
6          repeat
7              Identify candidate vertices compatible with the current $x'_t$;
8              Calculate $WD'$ for each candidate;
9              Update $x'_t$ by adding the candidate with the largest $WD'$;
10         until $x'_t$ cannot be further improved;
11         Calculate $\alpha = \min[1, e^{(W(x'_t) - W(x_t))/\beta^t}]$;
12         Update $x_{t+1}$ as $x'_t$ with probability $\alpha$; otherwise $x_{t+1} = x_t$;
13     end
14 end
15 return the best solution found in the process;

Now our algorithm calculates the acceptance ratio $\alpha = e^{(W(x'_t) - W(x_t))/\beta^t}$, where $W(x) = w^T x$ and $0 < \beta < 1$ is a constant which is usually chosen close to 1. If $\alpha \geq 1$, then $x'_t$ is accepted as $x_{t+1}$. Otherwise, it is accepted with probability $\alpha$. This exploration starting from $P_i$ continues for a number of iterations, or until $x_t$ converges. The best solution encountered in this process is recorded. After exploring the neighborhood for all the known MISs, the best solution is returned. A formal description can be found in Algorithm 1. Our algorithm is essentially a variant of the simulated annealing method [10], since the maximization of $W(x) = w^T x$ is equivalent to the minimization of the energy function $E(x) = -W(x) = -w^T x$. Lines 5 to 10 in Alg. 1 define a randomized "moving" procedure of making a transition from $x_t$ to its neighbor $x'_t$. When calculating the acceptance ratio $\alpha = e^{(W(x'_t) - W(x_t))/\beta^t}$, suppose $T_0 = 1$ (initial temperature); then it is equivalent to $\alpha = e^{-(W(x_t) - W(x'_t))/\beta^t} = e^{-(E(x'_t) - E(x_t))/\beta^t}$. Hence Algorithm 1 is a variant of simulated annealing. Therefore, our algorithm converges in theory. In practice, the convergence of our algorithm is fast. In all the experiments presented in the next section, our algorithm converges in fewer than 100 iterations. The reason is that our algorithm takes advantage of the fact that the known MISs are close to distinct local maxima. Also, the local search heuristic of our algorithm is effective at finding better candidates in the neighborhood. The parameter q controls the "radius" of the neighborhood to be explored in each iteration. A small q means a small "radius" and results in more iterations to converge. On the other hand, a large q takes less advantage of the known MISs, and the resulting unstable exploration also requires more iterations to converge. Since our algorithm explores the neighborhood of each known MIS independently, its efficiency can be further improved by parallel computation.

4 Results
We evaluate the performance of our approach with three experiments. In these experiments, for the underlying clustering algorithms, including K-means, single linkage, complete linkage and Ward's clustering, we use the implementations in MATLAB. Unless specified explicitly, the parameters are MATLAB's defaults. For example, when using K-means, we only specify the number K of desired clusters. The default "Squared Euclidean distance" is used as the distance measure. When calculating silhouette coefficients, we use MATLAB's function "silhouette(X, clust)" and the default metric "Squared Euclidean distance". For robustness in our experiments, we tolerate slight overlap between clusters.
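The acceptance step of the algorithm (lines 11-12 of Algorithm 1) can be sketched as follows; β^t acts as the annealing temperature, so worsening moves become ever less likely as t grows. The `rng` parameter is our addition to make the example deterministic in tests:

```python
import math
import random

def accept_move(W_new, W_cur, beta, t, rng=random.random):
    # alpha = min(1, e^{(W(x'_t) - W(x_t)) / beta^t}) with 0 < beta < 1.
    # Capping the exponent at 0 leaves improving moves at alpha = 1 (always
    # accepted) while avoiding overflow; worsening moves are accepted with a
    # probability that shrinks toward 0 as beta^t -> 0 (the "cooling").
    alpha = min(1.0, math.exp(min(0.0, W_new - W_cur) / beta ** t))
    return rng() < alpha
```

Early in the run (small t) the search can still escape a local maximum by accepting a slightly worse neighbor; late in the run it behaves almost greedily.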
That is, for the adjacency matrix $A = (a_{ij})_{n \times n}$, $a_{ij} = 1$ if $\frac{|c_i \cap c_j|}{\min(|c_i|, |c_j|)} > 0.1$, and $a_{ij} = 0$ otherwise. In these experiments, the parameters of our local search algorithm are: q = 0.3; β = 0.999; iteration number n = 100. We test different combinations of q = 0.1:0.1:0.5 and n = 100:100:1000; the results are almost the same. In the first experiment, we evaluate our approach's ability to achieve good performance without specifying the optimal input parameters for the underlying clustering algorithms. We use the dataset from [6]. This dataset consists of 4 subsets (S1, S2, S3, S4) of synthetic 2-d data points. Each subset contains 5000 vectors in 15 Gaussian clusters, but with different degrees of cluster overlap. We choose K-means as the underlying clustering algorithm and vary the parameter K = 5:1:25, which is the desired number of clusters. Since different runs of K-means starting from random initialization of centroids typically produce different clustering results, we run K-means 5 times for each value of K. That is, there are a total of 21 × 5 = 105 different input clusterings. Note that, in order to show the performance of our approach clearly, we do not perform the post-processing of assigning the missing data points to their nearest clusters.

Figure 1: Clustering aggregation without parameter tuning. (top row) Original data for S1-S4. (bottom row) Clustering results of our approach. Best viewed in color.

As shown in Fig. 1, on each of the four subsets, the aggregated clustering obtained by our approach has the correct number (15) of clusters and near-perfect structure.
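The overlap criterion above maps directly to code. A short sketch, with clusters represented as sets of object ids (the example clusters are illustrative):

```python
def overlap_adjacency(clusters, tol=0.1):
    # Edge between clusters i and j when their overlap ratio
    # |c_i ∩ c_j| / min(|c_i|, |c_j|) exceeds tol (slight overlap tolerated).
    n = len(clusters)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            ratio = (len(clusters[i] & clusters[j])
                     / min(len(clusters[i]), len(clusters[j])))
            if ratio > tol:
                A[i][j] = A[j][i] = 1
    return A
```

Normalizing by the smaller cluster keeps the criterion symmetric and prevents a large cluster from masking a near-total overlap with a small one.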
Only a very small portion of data points is not assigned to any cluster. These results confirm that our approach can automatically decide the optimal number of clusters without any parameter tuning for the underlying clustering algorithms. In the second experiment, we evaluate our approach's ability to combine the advantages of different underlying clustering algorithms and cancel out the errors introduced by them. The dataset is from [1]. As shown in the fifth panel of Fig. 2, this synthetic dataset consists of 7 distinct groups of 2-d data points, which have significantly different shapes and sizes. There are also some "bridges" between different groups of data points. Consequently, this dataset is very challenging for any single clustering algorithm. In this experiment, we use four different underlying clustering algorithms implemented in MATLAB: single linkage, complete linkage, Ward's clustering and K-means. The first two are both agglomerative bottom-up algorithms. The only difference between them is that when merging pairs of clusters, single linkage is based on the minimum distance, while complete linkage is based on the maximum distance. The third one, Ward's clustering algorithm, is also an agglomerative bottom-up algorithm. In each merging step, it chooses the pair of clusters that minimizes the sum of squared distances from each point to the mean of the two clusters. The fourth algorithm is K-means. For each of the underlying clustering algorithms, we vary the input parameter of the desired number of clusters as 4:1:10. That is, we have a total of 7 × 4 = 28 input clusterings. Note that, unlike [1], we do not use the average linkage clustering algorithm, because by specifying the correct number of clusters, it can generate a near-perfect clustering by itself. We abandon the best algorithm here in order to show the performance of our approach clearly.
But, in practice, utilizing good underlying clustering algorithms can significantly increase the chance for our approach to obtain superior aggregated clusterings. Like experiment 1, we do not perform the post-processing in this experiment.

Figure 2: Clustering aggregation on four different input clusterings (panels: Single Linkage, Complete Linkage, Ward's clustering, K-means, Original data, Our result). Best viewed in color.

In the first four panels of Fig. 2, we show the clustering results obtained by the four underlying clustering algorithms with the number of clusters set to 7. Obviously, even with the optimal input parameters, the results of these algorithms are far from correct. The ground truth and the result of our approach are shown in the fifth and sixth panels, respectively. As we can see, our aggregated clustering is almost perfect, except for the three green data points in the "bridge" between the cyan and green "balls". These results confirm that our approach can effectively combine the advantages of different clustering algorithms and cancel out the errors introduced by them. Also, in contrast to the other consensus clustering algorithms, such as [1], our aggregated clustering is obtained without specifying the optimal input parameters for any of the underlying clustering algorithms. This is a very desirable feature in practice. In the third experiment, we compare our approach with some other popular consensus clustering algorithms, including the Cluster-based Similarity Partitioning Algorithm (CSPA) [2], the HyperGraph Partitioning Algorithm (HGPA) [2], the Meta-Clustering Algorithm (MCLA) [2], the Furthest (Furth) algorithm [1], the Agglomerative (Agglo) algorithm [1] and the Balls (Balls) algorithm [1].
The performance is evaluated on three datasets: 8D5K [2], Iris [4], and Pen-Based Recognition of Handwritten Digits (PENDIG) [5]. 8D5K is an artificial dataset; it contains 1000 points drawn from five multivariate Gaussian distributions (200 points each) in 8-d space. Iris is a real dataset; it consists of 150 instances from three classes (50 each), with four numeric attributes per instance. PENDIG is also a real dataset; it contains a total of 7494 + 3498 = 10992 instances in 10 classes, each with 16 integer attributes. For our approach and all the consensus clustering algorithms, we choose K-means and Ward's algorithm as the underlying clustering algorithms. The multiple clusterings for each dataset are obtained by varying the desired number of clusters for both K-means and Ward's algorithm. Specifically, for the test on 8D5K, we set the desired numbers of clusters as 3:1:7; consequently, there are 5 × 2 = 10 different input clusterings. For Iris and PENDIG, the ranges are 3:1:7 and 8:1:12 respectively, so there are also 10 different input clusterings for each of them. In this paper, we use the Jaccard coefficient to measure the quality of clusterings:

Jaccard Coefficient = f11 / (f01 + f10 + f11),   (8)

where f11 is the number of object pairs that are in the same class and in the same cluster; f01 is the number of object pairs that are in different classes but in the same cluster; and f10 is the number of object pairs that are in the same class but in different clusters.

Figure 3: Results of comparative experiments on different datasets. Best viewed in color.

As shown in Fig. 3, the performance of our approach is better than that of the other consensus clustering algorithms. The main reason is that, over a range of different input parameters, most clusterings generated by the underlying clustering algorithms are not good enough, and a "consensus" built on these mediocre or even bad input clusterings, with far fewer good ones, cannot itself be good.
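The pair counts in Eq. (8) can be computed directly from two label vectors; a small sketch (the toy label vectors are illustrative):

```python
from itertools import combinations

def jaccard(true_labels, cluster_labels):
    """Jaccard coefficient of Eq. (8), computed by pair counting."""
    f11 = f10 = f01 = 0
    for i, j in combinations(range(len(true_labels)), 2):
        same_class = true_labels[i] == true_labels[j]
        same_cluster = cluster_labels[i] == cluster_labels[j]
        if same_class and same_cluster:
            f11 += 1
        elif same_class:
            f10 += 1  # same class, different clusters
        elif same_cluster:
            f01 += 1  # different classes, same cluster
    return f11 / (f01 + f10 + f11)

# A clustering that is perfect up to a renaming of the labels scores 1.0.
print(jaccard([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

Note that the coefficient is invariant to label permutation, which is why pair counting is used rather than direct label comparison.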
In contrast, by selecting an optimal subset of the clusters, our approach can still achieve superior performance as long as there are good clusters among the input clusterings. Therefore, our approach is much more robust, as confirmed by the results of this experiment.

5 Conclusion

The contribution of this paper is twofold: 1. we formulate clustering aggregation as a MWIS problem with a special structure; 2. we propose a novel variant of the simulated annealing method, which takes advantage of this special structure, for solving the resulting MWIS problem. Experimental results confirm that: 1. our approach to clustering aggregation automatically decides the optimal number of clusters; 2. it does not require any parameter tuning for the underlying clustering algorithms; 3. it can combine the advantages of different underlying clustering algorithms to achieve superior performance; 4. it is robust against moderate or even bad input clusterings.

Acknowledgments

This work was supported by US Department of Energy Award 71498-001-09 and by US National Science Foundation Grants IIS-0812118, BCS-0924164, OIA-1027897.

References
[1] Gionis, A., Mannila, H. & Tsaparas, P. (2005) "Clustering aggregation". Proceedings of the 21st ICDE.
[2] Strehl, A. & Ghosh, J. (2003) "Cluster ensembles—a knowledge reuse framework for combining multiple partitions". The Journal of Machine Learning Research (3):583-617.
[3] Brendel, W. & Todorovic, S. (2010) "Segmentation as maximum-weight independent set". Neural Information Processing Systems.
[4] Fisher, R.A. (1936) "The use of multiple measurements in taxonomic problems". Annals of Eugenics (7) Part II:179-188.
[5] Alimoglu, F. & Alpaydin, E. (1996) "Methods of combining multiple classifiers based on different representations for pen-based handwriting recognition". Proceedings of the Fifth Turkish Artificial Intelligence and Artificial Neural Networks Symposium (TAINN 96).
[6] Franti, P. & Virmajoki, O.
(2006) "Iterative shrinking method for clustering problems". Pattern Recognition 39(5):761-765.
[7] Lloyd, S.P. (1982) "Least squares quantization in PCM". IEEE Transactions on Information Theory 28(2):129-137.
[8] Ester, M., Kriegel, H.-P., Sander, J. & Xu, X. (1996) "A density-based algorithm for discovering clusters in large spatial databases with noise". Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96).
[9] Fred, A.L.N. & Jain, A.K. (2002) "Data clustering using evidence accumulation". Proceedings of the International Conference on Pattern Recognition (ICPR) 276-280.
[10] Kirkpatrick, S., Gelatt, C.D. & Vecchi, M.P. (1983) "Optimization by simulated annealing". Science 220(4598):671-680.
[11] Singh, V., Mukherjee, L., Peng, J. & Xu, J. (2008) "Ensemble clustering using semidefinite programming". Advances in Neural Information Processing Systems 20:1353-1360.
[12] Nguyen, N. & Caruana, R. (2007) "Consensus clusterings". IEEE International Conference on Data Mining (ICDM 2007) 607-612.
[13] Fern, X.Z. & Brodley, C.E. (2004) "Solving cluster ensemble problems by bipartite graph partitioning". Proceedings of the International Conference on Machine Learning, page 36.
[14] Topchy, A., Jain, A.K. & Punch, W. (2003) "Combining multiple weak clusterings". IEEE International Conference on Data Mining (ICDM 2003) 331-338.
| 2012 | 2 | 4,563 |
Efficient high-dimensional maximum entropy modeling via symmetric partition functions

Paul Vernaza
The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
pvernaza@cmu.edu

J. Andrew Bagnell
The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
dbagnell@ri.cmu.edu

Abstract

Maximum entropy (MaxEnt) modeling is a popular choice for sequence analysis in applications such as natural language processing, where the sequences are embedded in discrete, tractably-sized spaces. We consider the problem of applying MaxEnt to distributions over paths in continuous spaces of high dimensionality—a problem for which inference is generally intractable. Our main contribution is to show that this intractability can be avoided as long as the constrained features possess a certain kind of low dimensional structure. In this case, we show that the associated partition function is symmetric and that this symmetry can be exploited to compute the partition function efficiently in a compressed form. Empirical results are given showing an application of our method to learning models of high-dimensional human motion capture data.

1 Introduction

This work aims to generate useful probabilistic models of high dimensional trajectories in continuous spaces. This is illustrated in Fig. 1, which demonstrates the application of our proposed method to the problem of building generative models of high dimensional human motion capture data. Using this method, we may efficiently learn models and perform inferences including but not limited to the following: (1) Given any single pose, what is the probability that a certain type of motion ever visits this pose? (2) Given any pose, what is the distribution over future positions of the actor's hands? (3) Given any initial sequence of poses, what are the odds that this sequence corresponds to one action type versus another? (4) What is the most likely sequence of poses interpolating any two states?
The maximum entropy learning (MaxEnt) approach advocated here has the distinct advantage of being able to efficiently answer all of the aforementioned global inferences in a unified framework while also allowing the use of global features of the state and observations. In this sense, it is analogous to another MaxEnt learning method: the Conditional Random Field (CRF), which is typically applied to modeling discrete sequences. We show how MaxEnt modeling may be efficiently applied to paths in continuous state spaces of high dimensionality. This is achieved without having to resort to expensive, approximate inference methods based on MCMC, and without having to assume that the sequences themselves lie in or near a low dimensional submanifold, as in standard dimensionality-reduction-based methods. The key to our method is to make a natural assumption about the complexity of the features, rather than the paths, that results in simplifying symmetries. This idea is illustrated in Fig. 2.

Figure 1: Visualizations of predictions of future locations of hands for an individually held-out motion capture frame, conditioned on classes indicated by labels above figures (up-phase jumping jack, down-phase jumping jack, side twist, cross-toe touch), and corresponding class membership probabilities; (a) true held-out class = side twist, (b) true held-out class = down-phase jumping jack. See supplementary material for video demonstration.

Figure 2: Illustration of the constraint that paths sampled from the learned distribution should (in expectation) visit certain regions of space exactly as often as they are visited by paths sampled from the true distribution, after projection of both onto a low dimensional subspace. The shading of each planar cell is proportional to the expected number of times that cell is visited by a path.

Here we suppose that we are tasked with the problem of comparing two sets of paths: the first, sampled from an empirical distribution; and the second, sampled from a learned distribution intended to model the distribution underlying the empirical samples. Suppose first that we are to determine whether the learned distribution correctly samples the desired distribution. We claim that a natural approach to this problem is to visualize both sets of paths by projecting them onto a common low dimensional basis. If these projections appear similar, then we might conclude that the learned model is valid. If they do not appear similar, we might try to adjust the learned distribution, and compare projections again, iterating until the projections appear similar enough to convince us that the learned model is valid. We then might consider automating this procedure by choosing numerical features of the projected paths and comparing these features in order to determine whether the projected paths appear similar. Our approach may be thought of as a way of formalizing this procedure. The MaxEnt method described here iteratively samples paths, projects them onto a low dimensional subspace, computes features of these projected paths, and adjusts the distribution so as to ensure that, in expectation, these features match the desired features. A key contribution of this work is to show that employing low dimensional features of this sort enables tractable inference and learning algorithms, even in high dimensional spaces.
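This project-and-count procedure can be sketched numerically. Everything below (the dimensions, the random-walk paths, the grid of planar cells) is an illustrative assumption rather than the paper's actual pipeline; the point is only that two path samples from the same distribution produce similar low-dimensional visitation features.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, n_paths, T = 10, 2, 200, 50
W, _ = np.linalg.qr(rng.normal(size=(N, d)))  # common low dimensional basis

def cell_visit_features(paths, W, grid=4, lim=3.0):
    """Expected number of visits to each cell of a grid x grid partition
    of the projected plane (the shaded cells of Fig. 2)."""
    counts = np.zeros((grid, grid))
    for path in paths:
        proj = path @ W                        # T x 2 projected path
        idx = np.clip(((proj + lim) / (2 * lim) * grid).astype(int), 0, grid - 1)
        for i, j in idx:
            counts[i, j] += 1
    return counts / len(paths)

# Two independent samples of random-walk paths from the same distribution.
sample = lambda: rng.normal(size=(n_paths, T, N)).cumsum(axis=1) * 0.2
fA = cell_visit_features(sample(), W)
fB = cell_visit_features(sample(), W)
print(np.abs(fA - fB).sum() / fA.sum() < 0.5)  # relative gap is small: True
```

Matching these expected visit counts between empirical and learned path samples is exactly the feature-matching constraint the MaxEnt formulation enforces.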
Maximum entropy learning requires repeatedly calculating feature statistics for different distributions, which generally requires computing average feature values over all paths sampled from the distributions. Though this is straightforward to accomplish via dynamic programming in low dimensional spaces, it may not be obvious that the same can be accomplished in high-dimensional spaces. We will show how this is possible by exploiting the symmetries that result from our low dimensional feature assumption. The organization of this paper is as follows. We first review some preliminary material. We then continue with a detailed exposition of our method, followed by experimental results. Finally, we describe the relation of our method to existing methods and discuss conclusions.

2 Preliminaries

We now briefly review the basic MaxEnt modeling problem in discrete state spaces. In the basic MaxEnt problem, we have N disjoint events xi, K random variables denoted features φj(xi) mapping events to scalars, and K expected values of these features Eφj. To continue the example previously discussed, we will think of each xi as being a path, φj(xi) as the number of times that a path passes through the jth spatial region, and Eφj as the empirically estimated number of times that a path visits the jth region. Our goal is to find a distribution p(xi) over the events consistent with our empirical observations in the sense that it generates the observed feature expectations:

∑i φj(xi) p(xi) = Eφj, ∀j ∈ {1 . . . K}.

Of all such distributions, we will seek the one whose entropy is maximal [6]. This problem can be written compactly as

max_{p∈∆} −∑i pi log pi   s.t.   Φp = Eφ, (1)

where we have defined the vectors pi = p(xi) and φ, the feature matrix Φij = φi(xj), and the probability simplex ∆. Introducing a vector of Lagrange multipliers θ, the Lagrangian dual of this concave maximization problem is [3]

max_θ −log ∑i exp(−∑j Φji θj) − Eφᵀθ. (2)
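This dual can be maximized by plain gradient ascent, using the standard fact (stated below) that its gradient is the gap between the model and empirical feature expectations. A minimal numeric sketch, in which the event set, feature matrix, and targets are toy assumptions:

```python
import numpy as np

# Toy instance of the dual (2): 4 events, 2 features.
Phi = np.array([[1.0, 0.0, 1.0, 0.0],    # Phi[j, i] = phi_j(x_i)
                [0.0, 1.0, 1.0, 0.0]])
E_phi = np.array([0.6, 0.3])             # target feature expectations

theta = np.zeros(2)
for _ in range(10000):
    p = np.exp(-Phi.T @ theta)           # unnormalized Gibbs weights
    p /= p.sum()
    # Ascend the dual: gradient = E_p[phi] - E_phi.
    theta += 0.5 * (Phi @ p - E_phi)

print(np.round(Phi @ p, 3))              # model expectations match the targets
```

At convergence the model feature expectations equal the empirical ones, which is exactly the primal constraint Φp = Eφ of (1).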
It is straightforward to show that the gradient of the dual objective g(θ) is given by ∇θg = E¯p[φ | θ] − Eφ, where ¯p is the Gibbs distribution over x defined by

¯p(xi | θ) ∝ exp(−∑j φj(xi) θj). (3)

3 MaxEnt modeling of continuous paths

We now consider an extension of the MaxEnt formalism to the case that the events are paths embedded in a continuous space. The main questions to be addressed here are how to handle the transition from a finite number of events to an infinite number of events, and how to define appropriate features. We will address the latter problem first. We suppose that each event x now consists of a continuous, arc-length-parameterized path, expressed as a function R+ → RN mapping a non-negative time into the state space RN. A natural choice in this case is to express each feature φj as an integral of the following form:

φj(x) = ∫0^T ψj(x(s)) ds, (4)

where T is the duration (or length) of x and each ψj : RN → R+ is what we refer to as a feature potential. Continuing the previous example, if we choose ψj(x(t)) = 1 if x(t) is in region j and ψj(x(t)) = 0 otherwise, then φj(x) is the total time that x spends within the jth region of space. An analogous expression for the probability of a continuous path is then obtained by substituting these features into (3). Defining the cost function Cθ := ∑j θj ψj and the cost functional

Sθ{x} := ∫0^T Cθ(x(s)) ds, (5)

we have that

¯p(x | θ) = exp(−Sθ{x}) / ∫ exp(−Sθ{x}) Dx, (6)

where the notation ∫ exp(−Sθ{x}) Dx denotes the integral of the cost functional over the space of all continuous paths. The normalization factor Zθ := ∫ exp(−Sθ{x}) Dx is referred to as the partition function. As in the discrete case, computing the partition function is of prime concern, as it enables a variety of inference and learning techniques. The functional integral in (6) can be formalized in several ways, including taking an expectation with respect to Wiener measure [12] or as a Feynman integral [4].
Computationally, evaluating Zθ requires the solution of an elliptic partial differential equation over the state space, which can be derived via the Feynman-Kac theorem [12, 5]. The solution, denoted Zθ(a) for a ∈ RN, gives the value of the functional integral evaluated over all paths beginning at a and ending at a given goal location (henceforth assumed w.l.o.g. to be the origin). A discrete approximation to the partition function can therefore be computed via standard numerical methods such as finite differences, finite elements, or spectral methods [2]. However, we proceed by discretizing the state space as a lattice graph and computing the partition function associated with discrete paths in this graph via a standard dynamic programming method [1, 15, 11]. Recent work has shown that this method recovers the PDE solution in the discretization limit [5]. Concretely, the discretized partition function is computed as the fixed point of the following iteration:

Zθ(a) ← δ(a) + exp(−ϵ Cθ(a)) ∑_{a′∼a} Zθ(a′), (7)

where a′ ∼ a denotes the set of a′ adjacent to a in the lattice, ϵ is the spacing between adjacent lattice elements, and δ is the Kronecker delta.¹

4 Efficient inference via symmetry reduction

Unfortunately, the dynamic programming approach described above is tractable only for low dimensional problems; for problems in more than a few dimensions, even storing the partition function would be infeasible. Fortunately, we show in this section that it is possible to compute the partition function directly in a compressed form, given that the features also satisfy a certain compressibility property.

4.1 Symmetry of the partition function

Elaborating on this statement, we now recall Eq. (4), which expresses the features as integrals of feature potentials ψj over paths.
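As an aside, the fixed-point iteration (7) can be checked on a toy one-dimensional lattice. The size, cost, and spacing below are illustrative assumptions; note that the iteration is a contraction (and hence converges) only when exp(−ϵCθ) times the neighbor count is less than one, which holds here since 2e⁻¹ < 1.

```python
import numpy as np

# Iteration (7) on a 1-d lattice with constant cost C_theta(a) = 1.
n, eps = 21, 1.0
cost = np.ones(n)
goal = n // 2                       # the "origin" all paths must reach
delta = np.zeros(n); delta[goal] = 1.0

Z = np.zeros(n)
for _ in range(500):
    nbr = np.zeros(n)
    nbr[1:] += Z[:-1]               # left neighbors
    nbr[:-1] += Z[1:]               # right neighbors
    Z = delta + np.exp(-eps * cost) * nbr

# Z decays with lattice distance from the goal, as expected of a sum
# over paths weighted by exp(-cost * length).
print(Z[goal] > Z[goal + 1] > Z[goal + 2])  # True
```

With constant cost the converged Z is symmetric about the goal and decays geometrically with distance, mirroring the radial symmetry exploited in the next section.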
We then examine the effects of assuming that the ψj are compressible in the sense that they may be predicted exactly from their projection onto a low dimensional subspace—i.e., we assume that

ψj(a) = ψj(WWᵀa), ∀j, a, (8)

for some given N × d matrix W, with d < N. The following results show that compressibility of the features in this sense implies that the corresponding partition function is also compressible, in the sense that we need only compute it restricted to a (d+1)-dimensional subspace in order to determine its values at arbitrary locations in N-dimensional space. This is shown in two steps. First, we show that the partition function is symmetric under rotations about the origin that preserve the subspace spanned by the columns of W. We then show that there always exists such a rotation that also brings an arbitrary point in RN into correspondence with a point in a (d+1)-dimensional slice where the partition function has been computed.

Theorem 4.1. Let Zθ = ∫ exp(−Sθ{x}) Dx, with Sθ as defined in Eq. (5) and features derived from feature potentials ψj. Suppose that ψj(x) = ψj(WWᵀx), ∀j, x. Then for any orthogonal R such that RW = W,

Zθ(a) = Zθ(Ra), ∀a ∈ RN. (9)

Proof. By definition,

Zθ(Ra) = ∫_{x(0)=0, x(T)=Ra} exp(−∫0^T Cθ(x(s)) ds) Dx.

(¹In practice, this is typically done with respect to log Zθ, which yields an iteration similar to a soft version of value iteration of the Bellman equation [15].)

The substitution y(t) = Rᵀx(t) yields

Zθ(Ra) = ∫_{y(0)=0, y(T)=a} exp(−∫0^T Cθ(Ry(s)) ds) Dy.

Since ψj(a) = ψj(WWᵀa), ∀j, a implies that Cθ(x) = Cθ(WWᵀx), ∀x, we can make the substitutions Cθ(Ry) = Cθ(WWᵀRy) = Cθ(WWᵀy) = Cθ(y) in the previous expression to prove the result.

The next corollary makes explicit how to exploit the symmetry of the partition function by computing it restricted to a low-dimensional slice of the state space.

Corollary 4.2. Let W be a matrix such that ψj(a) = ψj(WWᵀa), ∀j, a, and let ν be any vector such that Wᵀν = 0 and ∥ν∥ = 1.
Then

Zθ(a) = Zθ(WWᵀa + ∥(I − WWᵀ)a∥ν), ∀a. (10)

Proof. The proof of this result is to show that there always exists a rotation satisfying the conditions of Theorem 4.1 that rotates a onto the subspace spanned by the columns of W and ν. We simply choose an R such that RW = W and R(I − WWᵀ)a = ∥(I − WWᵀ)a∥ν. That this is a valid rotation follows from the orthogonality of W and ν and the unit-norm assumption on ν. Applying any such rotation to a proves the result.

4.2 Exploiting symmetry in DP

We proceed to compute the discretized partition function via a modified version of the dynamic programming algorithm described in Sec. 3. The only substantial change is that we leverage Corollary 4.2 in order to represent the partition function in a compressed form. This implies corresponding changes in the updates, as these must now be derived from the new, compressed representation. Figure 3 illustrates the algorithm applied to computing the partition function associated with a constant C(x) in a two-dimensional space. The partition function is represented by its values on a regular lattice lying in the low-dimensional slice spanned by the columns of W and ν, as defined in Corollary 4.2. In the illustrated example, W is empty, and ν is an arbitrary line. At each iteration of the algorithm, we update each value in the slice based on adjacent values, as before. However, it is now the case that some of the adjacent nodes lie off of the slice. We compute the values associated with such nodes by rotating them onto the slice (according to Corollary 4.2) and interpolating the value based on those of adjacent nodes within the slice. An explicit formula for these updates is readily obtained. Suppose that b is a point contained within the slice and y := b + δ is an adjacent point lying off the slice whose value we wish to compute. By assumption, Wᵀδ = νᵀδ = 0. We therefore observe that δᵀ(I − WWᵀ)b = 0, since (I − WWᵀ)b ∝ ν.
Hence,

V(y) = V(WWᵀ(b + δ) + ∥(I − WWᵀ)(b + δ)∥ν)
     = V(WWᵀb + ∥(I − WWᵀ)b + δ∥ν) (11)
     = V(WWᵀb + √(∥(I − WWᵀ)b∥² + ∥δ∥²) ν).

An interesting observation is that this formula depends on y only through ∥δ∥. Therefore, assuming that all nodes adjacent to b lie at a distance of ∥δ∥ from it, all of the updates from the off-slice neighbors will be identical, which allows us to compute the net contribution due to all such nodes simply by multiplying the above value by their cardinality. The computational complexity of the algorithm is in this case independent of the dimension of the ambient space. A detailed description of the algorithm is given in Algorithm 1.

4.3 MaxEnt training procedure

Given the ability to efficiently compute the partition function, learning may proceed in a way exactly analogous to the discrete case (Sec. 2). A particular complication in our case is that exactly computing feature expectations under the model distribution is not as straightforward as in the low dimensional case, as we must account for the symmetry of the partition function. As such, we compute feature expectations by sampling paths from the model given the partition function.

Figure 3: Illustration of dynamic programming update (constant cost example). The large sphere marked goal denotes the origin with respect to which the partition function is computed. The partition function in this case is symmetric under all rotations around the origin; hence, any value can be computed by rotation onto any axis (slice) where the partition function is known (ν). Contributions from off-slice and on-slice points are denoted by off and on, respectively. Symmetry implies that value updates from off-axis nodes can be computed by rotation (proj) onto the axis. See supplementary material for video demonstration.
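Before turning to the algorithm listing, the rotation asserted in Corollary 4.2 can be checked numerically. The Householder construction below is our own illustration (the paper never needs an explicit R); it fixes col(W) and carries an arbitrary point a to the slice point of Eq. (10).

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 2
W, _ = np.linalg.qr(rng.normal(size=(N, d)))     # orthonormal columns
P = np.eye(N) - W @ W.T                          # projector onto col(W)^perp

nu = P @ rng.normal(size=N)
nu /= np.linalg.norm(nu)                         # unit vector with W^T nu = 0

a = rng.normal(size=N)
u = P @ a
u /= np.linalg.norm(u)                           # direction of a in the complement

v = u - nu                                       # reflection axis; v lies in col(W)^perp
v /= np.linalg.norm(v)
R = np.eye(N) - 2.0 * np.outer(v, v)             # orthogonal, fixes col(W), maps u to nu

b = W @ (W.T @ a) + np.linalg.norm(P @ a) * nu   # the slice point of Eq. (10)
print(np.allclose(R @ W, W), np.allclose(R @ a, b))  # True True
```

Since v is orthogonal to col(W), the reflection leaves col(W) pointwise fixed, and the standard Householder identity sends the unit vector u to nu, which is exactly the map required in the proof of Corollary 4.2.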
Algorithm 1 PartitionFunc(xT, Cθ, W, N, d)
  Z : R^{d+1} → R : y ↦ 0                                  {initialize partition function to zero}
  ν ← (ν | ⟨ν, ν⟩ = 1, Wᵀν = 0)                            {choose an appropriate ν}
  lift : R^{d+1} → R^N : y ↦ [W ν] y + xT                  {define lifting and projection operators}
  proj : R^N → R^{d+1} : x ↦ (Wᵀ(x − xT), ∥(I − WWᵀ)(x − xT)∥)
  while Z not converged do
    for y ∈ G ⊂ Z^{d+1} do
      z_on ← ∑_{δ∈Z^{d+1}, ∥δ∥=1} Z(y + δ)                 {calculate on-slice contributions}
      z_off ← 2(N − d − 1) Z(y1, . . . , yd, √(y_{d+1}² + 1))  {calculate off-slice contributions}
      Z(y) ← (z_on + z_off + 2N δ(y)) / (2N exp(ϵ Cθ(lift(y))))  {iterate fixed-point equation}
    end for
  end while
  Z′ : R^N → R : x ↦ Z(proj(x))                            {return partition function in original coordinates}
  return Z′

5 Results

We implemented the method and applied it to the problem of modeling high dimensional motion capture data, as described in the introduction. Our training set consisted of a small sample of trajectories representing four different exercises performed by a human actor. Each sequence is represented as a 123-dimensional time series representing the Cartesian coordinates of 41 reflective markers located on the actor's body. The feature potentials employed consisted of indicator functions of the form

ψj(a) = {1 if Wᵀa ∈ Cj, 0 otherwise}, (12)

where the Cj were non-overlapping, rectangular regions of the projected state space. A W was chosen with two columns, using the method proposed in [13], which is effectively similar to performing PCA on the velocities of the trajectory.

Figure 4: Results of classification experiment given progressively revealed trajectories (classes: up-phase jumping jack, down-phase jumping jack, side twist, cross-toe touch; curves: HDMaxEnt, logistic regression, and the correct discrimination threshold). Title indicates the true class of the held-out trajectory. Abscissa indicates the fraction of the trajectory revealed to the classifiers; samples of the held-out trajectory at different points along the abscissa are illustrated above it. Ordinate shows the predicted log-odds ratio between the correct class and the next-most-probable class.

We applied our method to train a maximum entropy model independently for each of the four classes. Given our ability to efficiently compute the partition function, this enables us to normalize each of these probability distributions. Classification can then be performed simply by evaluating the probability of a held-out example under each of the class models. Knowing the partition function also enables us to perform various marginalizations of the distribution that would otherwise be intractable [8, 15]. In particular, we performed an experiment consisting of evaluating the probability of a held-out trajectory under each model as it was progressively revealed in time. This can be accomplished by evaluating the following quantity:

P(x0) γᵗ exp(−∑_{i=1}^{t} ϵ Cθ(xi)) Zθ(xt) / Zθ(x0), (13)

where x0, . . . , xt represents the portion of the trajectory revealed up to time t, P(x0) is the prior probability of the initial state, and ϵ is the spacing between successive samples. Results of this experiment are shown in Fig. 4, which plots the predicted log-odds ratio between the correct and next-most-probable classes. For comparison, we also implemented a classifier based on logistic regression.
Features for this classifier consisted of radial basis functions centered around the portion of each training trajectory revealed up to the current time step. Both methods also employed the same prior initial state probability P(x0), which was constructed as a single isotropic Gaussian distribution for each class. Both classifiers therefore predict the same class distributions at time t = 0. In the first three held-out examples, the initial state was distinctive enough to unambiguously predict the sequence label. The logistic regression predictions were generally inaccurate on their own, but the confidence of these predictions was so low that they were far outweighed by the prior; the log-odds ratio for logistic regression therefore appears almost flat in time. Our method (denoted HDMaxEnt in the figure), on the other hand, demonstrated exponentially increasing confidence as the sequences were progressively revealed. In the last example, the initial state appeared more similar to that of another class, causing the prior to mispredict its label. Logistic regression again exhibited no deviation from the prior in time. Our method, however, quickly recovered the correct label as the rest of the sequence was revealed. Figures 1(a) and 1(b) show the result of a different inference: here we used the same learned class models to evaluate the probability that a single held-out frame was generated by a path in each class. This probability can be computed as the product of forward and backward partition functions evaluated at the held-out frame, divided by the partition function between nominal start and goal positions [15]. We also sampled trajectories given each potential class label, given the held-out frame as a starting point, and visualized the results. The first held-out frame, displayed in Fig. 1(a), is distinctive enough that its marginal probability under the correct class is far greater than its probability under any other class.
The visualizations make it apparent that it is highly unlikely that this frame was sampled from one of the jumping jack paths, as this would require an unnatural excursion from the kinds of trajectory normally produced by those classes, while it is slightly more plausible that the frame could have been taken from a path sampled from the cross-toe touch class. Fig. 1(b) shows a case where the held-out frame is ambiguous enough that it could have been generated by either the jumping jack up or down phases. In this case, the most likely prediction is incorrect, but it is still the case that the probabilities of the two plausible classes far outweigh those of the visibly less-plausible classes.

6 Related work

Our work bears the most relation to the extensive literature on maximum entropy modeling in sequence analysis. A well-known example of such a technique is the Conditional Random Field [9], which is applicable to modeling discrete sequences, such as those encountered in natural language processing. Our method is also an instance of MaxEnt modeling applied to sequence analysis; however, our method applies to high-dimensional paths in continuous spaces with a continuous notion of (potentially unbounded) time (as opposed to the discrete notions of finite sequence length or horizon). These considerations necessitate the development of the formulation and inference techniques described here. Also notable are latent variable models that employ Gaussian process regression to probabilistically represent observation models and the latent dynamics [14, 10, 7]. Our method differs from these principally in two ways. First, our method is able to exploit global, contextual features of sequences without having to model how these features are generated from a latent state. Although the features used in the experiments shown here were fairly simple, we plan to show in future work how our method can leverage context-dependent features to generalize across different environments.
Second, global inferences in the aforementioned GP-based methods are intractable, since the state distribution as a function of time is generally not a Gaussian process, unless the dynamics are assumed linear. Therefore, expensive, approximate inference methods such as MCMC would be required to compute any of the inferences demonstrated here.

7 Conclusions

We have demonstrated a method for efficiently performing inference and learning for maximum-entropy modeling of high dimensional, continuous trajectories. Key to the method is the assumption that features arise from potentials that vary only in low dimensional subspaces. The partition functions associated with such features can be computed efficiently by exploiting the symmetries that arise in this case. The ability to efficiently compute the partition function enables tractable learning as well as the opportunity to compute a variety of inferences that would otherwise be intractable. We have demonstrated experimentally that the method is able to build plausible models of high dimensional motion capture trajectories that are well-suited for classification and other prediction tasks. As future work, we would like to explore similar ideas to leverage more generic types of low dimensional structure that might arise in maximum entropy modeling. In particular, we anticipate that the method described here might be leveraged as a subroutine in future approximate inference methods for this class of problems. We are also investigating problem domains such as assistive teleoperation, where the ability to leverage contextual features is essential to learning policies that generalize.

Acknowledgments

This work is supported by the ONR MURI grant N00014-09-1-1052, Distributed Reasoning in Reduced Information Spaces.

References
[1] T. Akamatsu. Cyclic flows, Markov process and stochastic traffic assignment. Transportation Research Part B: Methodological, 30(5):369–386, 1996.
[2] J.P. Boyd.
Chebyshev and Fourier spectral methods. Dover, 2001.
[3] S.P. Boyd and L. Vandenberghe. Convex optimization. Cambridge Univ Pr, 2004.
[4] R.P. Feynman, A.R. Hibbs, and D.F. Styer. Quantum Mechanics and Path Integrals: Emended Edition. Dover Publications, 2010.
[5] S. García-Díez, E. Vandenbussche, and M. Saerens. A continuous-state version of discrete randomized shortest-paths, with application to path planning. In CDC and ECC, 2011.
[6] E.T. Jaynes. Information theory and statistical mechanics. The Physical Review, 106(4):620–630, 1957.
[7] J. Ko and D. Fox. GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Autonomous Robots, 27(1):75–90, 2009.
[8] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[9] J. Lafferty. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[10] N.D. Lawrence and J. Quiñonero-Candela. Local distance preservation in the GP-LVM through back constraints. In Proceedings of the 23rd International Conference on Machine Learning, pages 513–520. ACM, 2006.
[11] A. Mantrach, L. Yen, J. Callut, K. Francoisse, M. Shimbo, and M. Saerens. The sum-over-paths covariance kernel: A novel covariance measure between nodes of a directed graph. PAMI, 32(6):1112–1126, 2010.
[12] B.K. Øksendal. Stochastic differential equations: an introduction with applications. Springer Verlag, 2003.
[13] P. Vernaza, D.D. Lee, and S.J. Yi. Learning and planning high-dimensional physical trajectories via structured Lagrangians. In ICRA, pages 846–852. IEEE, 2010.
[14] J. Wang, D. Fleet, and A. Hertzmann. Gaussian process dynamical models. NIPS, 18:1441, 2006.
[15] Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433–1438, 2008.
| 2012 | 20 | 4,564 |
Multiresolution analysis on the symmetric group Risi Kondor and Walter Dempsey Department of Statistics and Department of Computer Science The University of Chicago {risi,wdempsey}@uchicago.edu Abstract There is no generally accepted way to define wavelets on permutations. We address this issue by introducing the notion of coset based multiresolution analysis (CMRA) on the symmetric group, find the corresponding wavelet functions, and describe a fast wavelet transform for sparse signals. We discuss potential applications in ranking, sparse approximation, and multi-object tracking. 1 Introduction A variety of problems in machine learning, from ranking to multi-object tracking, involve inference over permutations. Invariably, the bottleneck in such problems is that the number of permutations grows with n!, ruling out the possibility of representing generic functions or distributions over permutations explicitly, as soon as n exceeds about ten or twelve. Recently, a number of authors have advocated approximations based on a type of generalized Fourier transform [1][2][3][4][5][6]. On the group Sn of permutations of n objects, this takes the form f̂(λ) = Σσ∈Sn f(σ) ρλ(σ), (1) where λ plays the role of frequency, while the matrix-valued functions ρλ, called irreducible representations, are similar to the e^{−i2πkx/N} factors in ordinary Fourier analysis. It is possible to show that, just as in classical Fourier analysis, the Fourier matrices f̂(λ) correspond to components of f at different levels of smoothness with respect to the underlying permutation topology [2][7]. Ordering the λ's from smooth to rough as λ1 ⋞ λ2 ⋞ . . . , one is thus led to "band-limited" approximations of f via the nested sequence of spaces Vµ = { f ∈ R^Sn | f̂(λ) = 0 for all λ ≻ µ }. While this framework is attractive mathematically, it suffers from the same disease as classical Fourier approximations, namely its inability to handle discontinuities with grace.
In applications such as multi-object tracking this is a particularly serious issue, because each observation of the form "object i is at track j" introduces a new discontinuity into the assignment distribution, and the resulting Gibbs phenomenon makes it difficult to ensure even that f(σ) remains positive. The time-honored solution is to use wavelets. However, in the absence of a natural dilation operator, defining wavelets on a discrete space is not trivial. Recently, Gavish et al. defined an analog of Haar wavelets on trees [8], while Coifman and Maggioni [9] and Hammond et al. [10] managed to define wavelets on general graphs. In this paper we attempt to do the same on the much more structured domain of permutations by introducing an altogether new notion of multiresolution analysis, which we call coset-based multiresolution (CMRA). [Figure 1: Multiresolution. The chain of spaces . . . → V0 → V−1 → V−2 → V−3 → . . . , with wavelet spaces W−1, W−2, W−3, W−4 splitting off at each level.] 2 Multiresolution analysis and the multiscale structure of Sn The notion of multiresolution analysis on the real line was first formalized by Mallat [11]: a nested sequence of function spaces . . . ⊂ V−1 ⊂ V0 ⊂ V1 ⊂ V2 ⊂ . . . is said to constitute a multiresolution analysis (MRA) for L2(R) if it satisfies the following axioms: MRA1. ⋂k Vk = {0}, MRA2. the closure of ⋃k Vk is L2(R), MRA3. for any f ∈ Vk and any m ∈ Z, the function f′(x) = f(x − m·2^{−k}) is also in Vk, MRA4. for any f ∈ Vk, the function f′(x) = f(2x) is in Vk+1. Setting Vk+1 = Vk ⊕ Wk and starting with, say, Vℓ, the process of moving up the chain of spaces can be thought of as splitting Vℓ into a smoother part Vℓ−1 (called the scaling space) and a rougher part Wℓ−1 (called the wavelet space), and then repeating this process recursively for Vℓ−1, Vℓ−2, and so on (Figure 1). To get an actual wavelet transform, one needs to define appropriate bases for the {Vi} and {Wi} spaces.
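The split Vk+1 = Vk ⊕ Wk is easiest to see in the classical Haar case, where the scaling coefficients are pairwise averages (the smoother part) and the wavelet coefficients are pairwise differences (the rougher part). A minimal sketch of one such split and its inverse, for an ordinary discrete signal rather than the paper's Sn construction (function names are ours):

```python
def haar_split(signal):
    """One Haar MRA step: split a length-2m signal into a scaling part
    (pairwise averages, living in V_k) and a wavelet part (pairwise
    differences, living in W_k)."""
    assert len(signal) % 2 == 0
    scaling = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    wavelet = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return scaling, wavelet

def haar_merge(scaling, wavelet):
    """Inverse of the split: reconstruct V_{k+1} = V_k (+) W_k."""
    out = []
    for s, w in zip(scaling, wavelet):
        out += [s + w, s - w]
    return out
```

Iterating `haar_split` on the scaling part alone walks down the chain of spaces in Figure 1.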
In the simplest case, a single function φ, called the scaling function, is sufficient to generate an orthonormal basis for V0, and a single function ψ, called the mother wavelet generates an orthonormal basis for W0. In this case, defining φk,m(x) = 2k/2 φ(2k x −m), and ψk,m(x) = 2k/2 ψ(2k x−m), we find that {φk,m}m∈Z and {ψk,m}m∈Z will be orthonormal bases for Vk and Wk, respectively. Moreover, {ψk,m}k,m∈Z is an orthonormal basis for the whole of L2(R). By the wavelet transform of f we mean its expansion in this basis. The difficulty in defining multiresolution analysis on discrete spaces is that there is no natural analog of dilation, as required by Mallat’s fourth axiom. However, in the specific case of the symmetric group, we do at least have a natural multiscale structure on our domain. Our goal in this paper is to find an analog of Mallat’s axioms that can take advantage of this structure. 2.1 Two decompositions of RSn A permutation of n objects is a bijective mapping {1, 2, . . . , n} →{1, 2, . . . , n}. With respect to the natural notion of multiplication (σ2σ1)(i) = σ2(σ1(i)), the n! different permutations of {1, . . . , n} form a group, called the symmetric group of degree n, which we denote Sn. Our MRA on Sn is born of the tension between two different ways of carving up RSn into orthogonal sums of subspaces: one corresponding to subdivision in “time”, the other in “frequency”. The first of these is easier to describe, since it is based on recursively partitioning Sn according to the hierarchy of sets Si1 = { σ ∈Sn | σ(n) = i1 } i1 ∈{1, . . . , n} Si1,i2 = { σ ∈Sn | σ(n) = i1, σ(n−1) = i2 } i1 ̸= i2, i1, i2 ∈{1, . . . , n} , and so on, down to sets of the form Si1...in−1, which only have a single element. Intuitively, this tree of nested sets captures the way in which we zoom in on a particular permutation σ by first fixing σ(n), then σ(n−1), etc. (see Figure 2 in Appendix B in the Supplement). 
From the algebraic point of view, Si1,...,ik is a so-called (left) Sn−k–coset µi1,...,ikSn−k := { µi1...ikτ | τ ∈Sn−k } , (2) 2 where µi1...ik is a permutation mapping n 7→i1, ..., n−k +1 7→ik. This emphasizes that in some sense each Si1,...,ik is just a “copy” of Sn−k inside Sn. The first important system of subspaces of RSn for our purposes are the window spaces Si1...ik = { f | supp(f) ⊆Si1...ik } 0 ≤k ≤n−1, {i1, . . . , ik} ⊆{1, . . . , n} . Clearly, for any given k, RSn = L i1,...,ik Si1...ik. The second system of spaces is related to the behavior of functions under translation. In fact, there are two distinct ways in which a given f ∈RSn can be translated by some τ ∈Sn: left–translation, f 7→Tτf, where (Tτf)(σ) = f(τ −1σ), and right–translation f 7→T R τ f, where (T R τ f)(σ) = f(στ −1). For now we focus on the former. We say that a space V ⊆RSn is a left Sn–module if it is invariant to left-translation in the sense that for any f ∈V and τ ∈Sn, Tτf ∈V . A fundamental result in representation theory tells us that if V is reducible in the sense that it has a proper subset V1 that is fixed by left-translation, then V = V1 ⊕V2, where V1 and V2 are both (left Sn–)modules. In particular, RSn is a (left Sn–)invariant space, therefore RSn = M t∈Tn Mt (3) for some set {Mt} of irreducible modules. This is our second important system of spaces. To understand the interplay between modules and window spaces, observe that each coset µi1...ikSn−k has an internal notion of left–translation (T i1...ik τ f)(σ) = f(µi1...ikτ −1µ−1 i1...ikσ), τ ∈Sn−k, (4) which fixes Si1...ik. Therefore, Si1...ik must be decomposable into a sum of irreducible Sn−k– modules, Si1...ik = M t∈Tn−k M i1...ik t . (5) Furthermore, the modules of different window spaces can be defined in such a way that M i′ 1,...,i′ k t = µi′ 1,...,i′ kµ−1 i1...ikM i1...ik t . 
(Note that each M i1...ik t is an Sn−k–module in the sense of being invariant to the internal translation action (4), and this action depends on i1 . . . ik.) Now, for any fixed t, the space U = L i1,...,ikM i1...ik t , is fully Sn–invariant, and therefore we must also have U = L α∈AMα, where the Mα are now irreducible Sn–modules. Whenever a relationship of this type holds between two sets of irreducible Sn– resp. Sn−k–modules, we say that the {Mα} modules are induced by {M i1...ik t }. The situation is complicated by the fact that decompositions like (3) and (5) are not unique. In particular, there is no guarantee that the {Mα} induced modules will be amongst the modules featured in (3). However, there is a unique, so-called adapted system of modules, for which this issue does not arise. Specifically, if, as is usually done, we let the indexing set Tm be the set of Standard Young Tableaux (SYT) of size m (see Appendix A in the supplementary materials for the exact definition), such as t = 1 3 5 6 7 2 4 8 ∈T8, . then the adapted modules at different levels of the coset tree are connected via M i1...ik M i1...ik t = M t′∈t↑n Mt′ ∀t ∈Tn−k, (6) where t ↑n:= { t′ ∈Tn | t′ ↓n−k= t } and t′↓n−k is the tableau that we get by removing the boxes containing n−k +1, . . . , n from t′. We also extend these relationships to sets in the obvious way: µ ↓n−k:= { t′↓n−k | t′ ∈µ } and ν ↑n:= S t∈ν t ↑n. We will give an explicit description of the adapted modules in Section 4. For now abstract relationships of the type (6) will suffice. 3 Coset based multiresolution analysis on Sn Our guiding principle in defining an analog of Mallat’s axioms for permutations is that the resulting multiresolution analysis should reflect the multiscale structure of the tree of cosets. At the same time, we also want the {Vk} spaces to be invariant to translation. Letting P be the projection operator 3 (Pi1...ikf)(σ) := f(σ) if σ ∈µi1...ikSn−k, 0 otherwise, (7) we propose the following definition. 
Definition 1 We say that a sequence of spaces V0 ⊆V1 ⊆. . . ⊆Vn−1 = RSn forms a left-invariant coset based multiresolution analysis (L-CMRA) for Sn if L1. for any f ∈Vk and any τ ∈Sn, we have Tτf ∈Vk, L2. if f ∈Vk, then Pi1...ik+1f ∈Vk+1, for any i1, . . . , ik+1, and L3. if g ∈Vk+1, then for any i1, . . . , ik+1 there is an f ∈Vk such that Pi1...ik+1f = g. Given any left-translation invariant space Vk, the unique Vk+1 that satisfies axioms L1–L3 is Vk+1 := L i1...ik+1Pi1...ik+1Vk. Applying this formula recursively, we find that Vk = M i1...ik Pi1...ikV0, (8) so V0 determines the entire sequence of spaces V0, V1, . . . , Vn−1. In contrast to most classical MRAs, however, this relationship is not bidirectional: Vk does not determine V0, . . . , Vk−1. To gain a better understanding of L-CMRA, we exploit that (by axiom L1) each Vk is Sn–invariant, and is therefore a sum of irreducible Sn–modules. By the following proposition, if V0 is a sum of adapted modules, then V1, . . . , Vn−1 are easy to describe. Proposition 1 If {Mt}t∈Tn are the adapted left Sn–modules of RSn, and V0 = L t∈ν0Mt for some ν0 ⊆Tn, then Vk = M t ∈νk Mt, Wk = M t ∈νk+1\νk Mt, where νk = ν0↓n−k↑n, (9) for any k ∈{0, 1, . . . , n−1}. Proof. By (6) Pi1...ik[L t′∈t↑nMt′] = M i1...ik t . Therefore, for any t′ ∈(t↑n∩ν0) there must be some f ∈Mt′ ⊆V0 such that for some i1 . . . ik, Pi1...ikf ∈M i1...ik t (and Pi1...ikf is non-zero). By Lemmas 1 and 2 in Appendix D, this implies that M i1...ik t ⊆Vk for all i1 . . . ik. On the other hand, from (6) it is also clear that if t′ ̸∈ν0, then M i1...ik t ∩Vk = {0}. Therefore, Vk = M t∈ν0↓n−k M i1...ik M i1...ik t = M t′′∈ν0↓n−k↑n Mt′′ . The expression for Wk follows from the general formula Vk+1 = Vk ⊕Wk. ■ Example 1 The simplest case of L-CMRA is when ν0 = { 1 2 · · · n }. In this case, setting m = n −k, we find that ν0 ↓m= { 1 2 · · · m}, and νk = ν0 ↓m↑n is the set of all Young tableaux whose first row starts with the numbers 1, 2, . . . , m. 
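In the Haar-like instance of Example 1, Vk consists of the functions that are constant on each left Sn−k coset, so the orthogonal projection onto Vk is simply coset averaging. A brute-force sketch for small n, enumerating all n! permutations (0-indexed tuples; names are ours):

```python
from itertools import permutations

def coset_key(sigma, k):
    """mu_{i1...ik} S_{n-k} is the set of sigma with sigma(n) = i1, ...,
    sigma(n-k+1) = ik; with 0-indexed tuples these are the last k entries,
    read from the right."""
    n = len(sigma)
    return tuple(sigma[n - 1 - j] for j in range(k))

def project_Vk(f, n, k):
    """Project onto the Haar-like V_k of Example 1 by averaging f over
    each left S_{n-k} coset.  f is a dict from permutation tuples to reals."""
    cosets = {}
    for sigma in permutations(range(n)):
        cosets.setdefault(coset_key(sigma, k), []).append(sigma)
    g = {}
    for members in cosets.values():
        avg = sum(f[s] for s in members) / len(members)
        for s in members:
            g[s] = avg
    return g
```

There are n!/(n−k)! cosets of size (n−k)! each, and the projection is idempotent, as a projection should be.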
It so happens that M i1...ik 1 2 · · m is just the trivial invariant subspace of constant functions on µi1...ikSn−k. Therefore, this instance of L-CMRA is an exact analog of Haar wavelets: Vk will consist of all functions that are constant on each left Sn−k–coset. Some more interesting examples of adapted L-CMRAs are described in Appendix C. ⌟ When V0 cannot be written as a direct sum of adapted modules, the analysis becomes significantly more complicated. Due to space limitations, we leave the discussion of this case to the Appendix. 3.1 Bi-invariant multiresolution analysis The left-invariant multiresolution of Definition 1 is appropriate for problems like ranking, where we have a natural permutation invariance with respect to relabeling the objects to be ranked, but not the ranks themselves. In contrast, in problems like multi-object tracking, we want our V0 ⊂. . . ⊂Vn−1 hierarchy to be invariant on both the left and the right. This leads to the following definition. 4 Definition 2 We say that a sequence of spaces V0 ⊆V1 ⊆. . . ⊆Vn−1 = RSn forms a bi-invariant coset based multiresolution analysis (Bi-CMRA) for Sn if Bi1. for any f ∈Vk and any τ ∈Sn, we have Tτf ∈Vk and T R τ f ∈Vk Bi2. if f ∈Vk−1, then Pi1...ikf ∈Vk, for any i1, . . . , ik; and Bi3. Vk is the smallest subspace of RSn satisfying Bi1 and Bi2. Note that the third axiom had to be modified somewhat compared to Definition 1, but essentially it serves the same purpose as L3. A subspace U that is invariant to both left- and right-translation (i.e., for any f ∈U and any σ, τ ∈Sn both Tσf ∈U and T R τ f ∈U) is called a two-sided module. The main reason that Bi-CMRA is easier to describe than L-CMRA is that the irreducible two-sided modules in RSn, called isotypic subspaces, are unique. In particular, the isotypics turn out to be Uλ = M t∈Tn : λ(t)=λ Mt λ ∈Λn, where λ(t) is the vector (λ1, . . . , λp) in which λi is the number of boxes in row i of t. For t to be a valid SYT, we must have λ1 ≥λ2 ≥. . . 
≥ λp ≥ 1, and λ1 + · · · + λp = n. We use Λn to denote the set of all such p–tuples, called integer partitions of n. Bi-CMRA is a much more constrained framework than L-CMRA because (by axiom Bi1) each Vk space must be of the form Vk = ⊕λ∈νk Uλ. It should come as no surprise that the way that ν0 determines ν1, . . . , νn−1 is related to restriction and extension relationships between partitions. We write λ′ ≤ λ if λ′i ≤ λi for all i (assuming λ′ is padded with zeros to make it the same length as λ), and for m ≤ n, we define λ↓m := { λ′ ∈ Λm | λ′ ≤ λ }, and λ′↑n := { λ ∈ Λn | λ ≥ λ′ }. Again, these operators are extended to sets of partitions by µ↓m := ⋃λ∈µ λ↓m and ν↑n := ⋃λ∈ν λ↑n. (See Figure 3 in Appendix B.) Proposition 2 Given a set of partitions ν0 ⊆ Λn, the corresponding Bi-CMRA comprises the spaces Vk = ⊕λ∈νk Uλ, Wk = ⊕λ∈νk+1\νk Uλ, where νk = ν0↓n−k↑n. (10) Moreover, any system of spaces satisfying Definition 2 is of this form for some ν0 ⊆ Λn. Example 2 The simplest case of Bi-CMRA corresponds to taking ν0 = {(n)}. In this case ν0↓n−k = {(n − k)}, and νk = { λ ∈ Λn | λ1 ≥ n−k }. In Section 6 we discuss that Vk = ⊕λ∈νk Uλ has a clear interpretation as the subspace of RSn determined by up to k'th order interactions between elements of the set {1, . . . , n}. ⌟ 4 Wavelets As mentioned in Section 2, to go from multiresolution analysis to orthogonal wavelets, one needs to define appropriate bases for the spaces V0, W0, W1, . . . , Wn−2. This can be done via the close connection between irreducible modules and the {ρλ} irreducible representations (irreps) that we encountered in the context of the Fourier transform (1).
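The restriction and extension operators on partitions are straightforward to prototype. The sketch below (function names ours) reproduces Example 2: starting from ν0 = {(n)}, restriction to Λn−k followed by extension back to Λn yields νk = { λ ∈ Λn | λ1 ≥ n−k }:

```python
def partitions(n, max_part=None):
    """All integer partitions of n as non-increasing tuples (the set Lambda_n)."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for head in range(min(n, max_part), 0, -1):
        for tail in partitions(n - head, head):
            yield (head,) + tail

def leq(lam1, lam2):
    """lam1 <= lam2 componentwise, padding the shorter partition with zeros."""
    p = max(len(lam1), len(lam2))
    a = lam1 + (0,) * (p - len(lam1))
    b = lam2 + (0,) * (p - len(lam2))
    return all(x <= y for x, y in zip(a, b))

def restrict(nu, m):
    """nu 'down-arrow' m: partitions of m lying below some member of nu."""
    return {lp for lp in partitions(m) if any(leq(lp, lam) for lam in nu)}

def extend(nu, n):
    """nu 'up-arrow' n: partitions of n lying above some member of nu."""
    return {lam for lam in partitions(n) if any(leq(lp, lam) for lp in nu)}
```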
As explained in Appendix A, each integer partition λ ∈Λn has a corresponding irrep ρλ : Sn →Rdλ×dλ; the rows and columns of the ρλ(σ) matrices are labeled by the set Tλ of standard Young tableaux of shape λ; and if the ρλ are defined according to Young’s Orthogonal Representation (YOR), then for any t ∈Tn and t′ ∈Tλ(t), the functions ϕt′(σ) = [ρλ(t)(σ)]t′,t form a basis for the adapted module Mt. Thus, the orthonormal system of functions φt,t′(σ) = p dλ/n! [ρλ(σ)]t′,t t ∈ν0 λ = λ(t) t′ ∈Tλ (11) ψk t,t′(σ) = p dλ/n! [ρλ(σ)]t′,t t ∈νk+1\νk λ = λ(t) t′ ∈Tλ, (12) seems to be a natural choice of scaling resp. wavelet functions for the L-CMRA of Proposition 1. Similarly, we can take φt,t′(σ) = p dλ/n! [ρλ(σ)]t′,t λ ∈ν0 t, t′ ∈Tλ (13) ψk t,t′(σ) = p dλ/n! [ρλ(σ)]t′,t λ ∈νk+1\νk t, t′ ∈Tλ, (14) 5 as a basis for the Bi-CMRA of Proposition 2. Comparing with (1), we find that if we use these bases to compute the wavelet transform of a function, then the wavelet coefficients will just be rescaled versions of specific columns of the Fourier transform. From the computational point of view, this is encouraging, because there are well-known and practical fast Fourier transforms (FFTs) available for Sn [12][13]. On the other hand, it is also somewhat of a letdown, since it suggests that all that we have gained so far is a way to reinterpret parts of the Fourier transform as wavelet coefficients. An even more serious concern is that the ψk t,t′ functions are not at all localized in the spatial domain, largely contradicting the very idea of wavelets. 
A solution to this dilemma emerges when we consider that since νk+1 \ νk = (ν0 ↓n−k−1↑n) \ (ν0 ↓n−k↑n) = (ν0 ↓n−k−1↑n−k) \ (ν0 ↓n−k) ↑n, each of the Wk wavelet spaces of Proposition 1 can be rewritten as Wk = M i1...ik M t∈ωk M i1...ik t ωk = (ν0 ↓n−k−1↑n−k) \ (ν0 ↓n−k), (15) and similarly, the wavelet spaces of Proposition 2 can be rewritten as Wk = M i1...ik M λ∈ωk U i1...ik λ ωk = (ν0 ↓n−k−1↑n−k) \ (ν0 ↓n−k), (16) where U i1...ik λ are now the “local isotypics” U i1...ik λ := L t∈TλM i1...ik t . An orthonormal basis for the M i1...ik spaces is provided by the local Fourier basis functions ψi1...ik t,t′ (σ) := p dλ(t)/(n−k)! [ρλ(t)(µ−1 i1...ikσ)]t′,t σ ∈µi1...ikSn−k 0 otherwise, (17) which are localized both in “frequency” and in “space”. This basis also affirms the multiscale nature of our wavelet spaces, since projecting onto the wavelet functions ψi1...ik t1,t′ 1 of a specific shape, say, λ1 = (n−k −2, 2) captures very similar information about functions in Si1...ik as projecting onto the analogous ψ j′ 1...j′ k′ t2,t′ 2 for functions in Sj1,...,jk′ if t2 and t′ 2 are of shape λ2 = (n−k′ −2, 2). Taking (17) as our wavelet functions, we define the L-CMRA wavelet transform of a function f : Sn →R as the collection of column vectors w∗ f(t) := (⟨f, φt,t′⟩)⊤ t′∈λ(t) t ∈ν0 (18) wf(t; i1, . . . , ik) := (⟨f, ψi1...ik t,t′ ⟩)⊤ t′∈λ(t) t ∈ωk {i1, . . . , ik} ⊂{1, . . . , n} , (19) where 0 ≤k ≤n−2, and ωk is as in (15). Similarly, we define the Bi-CMRA wavelet transform of f as the collection of matrices w∗ f(λ) := (⟨f, φt,t′⟩)t,t′∈λ λ ∈ν0 (20) wf(λ; i1, . . . , ik) := (⟨f, ψi1...ik t,t′ ⟩)t,t′∈λ λ ∈ωk {i1, . . . , ik} ⊂{1, . . . , n} , (21) where 0 ≤k ≤n−2, and ωk is as in (16). 4.1 Overcomplete wavelet bases While the wavelet spaces W0, . . . , Wk−1 of Bi-CMRA are left- and right-invariant, the wavelets (17) still carry the mark of the coset tree, which is not a right-invariant object, since it branches in the specific order n, n−1, n−2, . . .. 
In contexts where wavelets are used as a means of promoting sparsity, this will bias us towards sparsity patterns that match the particular cosets featured in the coset tree. The only way to avoid this phenomenon is to span W0, . . . , Wk−1 with the overcomplete system of wavelets ψi1...ik j1...jk,t,t′(σ) := p dλ(t)/(n−k)! [ρλ(t)(µ−1 i1...ikσ µj1...jk)]t′,t σ ∈µi1...ikSn−k µj1...jk 0 otherwise, where now both {i1, . . . , ik} and {j1, . . . , jk} are allowed to run over all k–element subsets of {1, . . . , n}. While sacrificing orthogonality, such a basis is extremely well suited for sparse modeling in various applications. 6 5 Fast wavelet transforms In the absence of fast wavelet transforms, multiresolution analysis would only be of theoretical interest. Fortunately, our wavelet transforms naturally lend themselves to efficient recursive computation along branches of the coset tree. This is especially attractive when dealing with functions that are sparse, since subtrees that only have zeros at their leaves can be eliminated from the transform altogether. 1: function FastLCWT(f, ν, (i1 . . . ik)) { 2: if k = n−1 then 3: return(Scalingν(v(f))) 4: end if 5: v ←0 6: for each ik+1 ̸∈{i1 . . . ik} do 7: if Pi1...ik+1f ̸= 0 then 8: v ←v + Φik(FastLCWT(f↓i1...ik+1, ν ↓n−k−1, (i1 . . . ik+1))) 9: end if 10: end for 11: output Waveletν↓n−k−1↑n−k\ν(v) 12: return Scalingν(v) } Algorithm 1: A high level description of a recursive algorithm that computes the wavelet transform (18)–(19). The function is called as FastLCWT(f, ν0, ()). The symbol v stands for the collection of coefficient vectors {wf(t; i1 . . . ik)}t∈ν↓n−k−1↑n−k. The function Scaling selects the subset of these vectors that are scaling coefficients, whereas Wavelet selects the wavelet coefficients. f ↓i1...ik : Sn−k →R is the restriction of f to µi1...ikSn−k, i.e., f↓i1...ik (τ) = f(µi1...ikτ). 
A very high level sketch of the resulting algorithm is given in Algorithm 1, while a more detailed description in terms of actual coefficient matrices is in Appendix E. Bi-CMRA would lead to a similar algorithm, which we omit for brevity. A key component of these algorithms is the function Φik, which serves to convert the coefficient vectors representing any g ∈Si1...ik+1 in terms of the basis {ψi1...ik+1 t,t′ }t,t′ to the coefficient vectors representing the same g in terms of {ψi1...ik t,t′ }t,t′. While in general this can be a complicated and expensive linear transformation, due to the special properties of Young’s orthogonal representation, in our case it reduces to wg(t; i1 . . . ik) = q dλ′(n−k) dλ ρλ(Jik+1, n −kK) wg(t′; i1 . . . ik+1)↑t , (22) where t′ = t↓n−k−1; λ = λ(t); λ′ = λ(t′); Jik+1, kK is a special permutation, called a contiguous cycle, that maps k to ik+1; and ↑t is a copy operation that promotes its argument to a dλ–dimensional vector by wg(t′; . . .)↑t t′′ = [wg(t′; . . .)]t′′↓n−k−1 if t′′ ↓n−k−1∈Tλ′ 0 otherwise. Clausen’s FFT [12] uses essentially the same elementary transformations to compute (1). However, whereas the FFT runs in O(n3n!) operations, by working with the local wavelet functions (17) as opposed to (12) and (14), if f is sparse, Algorithm 1 needs only polynomial time. Proposition 3 Given f : Sn →R such that |supp(f)| ≤q, and ν0 ⊆Tn, Algorithm 1 can compute the L-CMRA wavelet coefficients (18)–(19) in n2Nq scalar operations, where N = P t∈ν1 dλ(t). The analogous Bi-CMRA transform runs in n2Mq time, where M = P λ∈ν1 d2 λ. To estimate the N and M constants in this result, note that for partitions with λ1 >> λ2, λ3, . . ., dλ = O(nn−λ1). For example, d(n−1,1) = n−1, d(n−2,2) = n(n−3)/2, etc.. The inverse wavelet transforms essentially follow the same computations in reverse and have similar complexity bounds. 
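The dimensions dλ quoted above can be checked with the hook length formula dλ = n!/∏(hook lengths), a standard fact about Sn irreducibles that the text uses implicitly:

```python
from math import factorial

def dim_irrep(lam):
    """Dimension d_lambda of the irreducible S_n representation indexed by
    the partition lam, via the hook length formula d = n! / prod(hooks)."""
    n = sum(lam)
    ncols = lam[0] if lam else 0
    col = [0] * ncols                     # col[j] = length of column j
    for row_len in lam:
        for j in range(row_len):
            col[j] += 1
    hooks = 1
    for i, row_len in enumerate(lam):
        for j in range(row_len):
            # arm + leg + 1 for the box in row i, column j
            hooks *= (row_len - j) + (col[j] - i) - 1
    return factorial(n) // hooks
```

For n = 6 this recovers d(5,1) = 5 = n−1 and d(4,2) = 9 = n(n−3)/2, matching the values stated in the text.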
6 Applications There is a range of applied problems involving permutations that could benefit from the wavelets defined in this paper. In this section we mention just two potential applications. 6.1 Spectral analysis of ranking data Given a distribution p over permutations, the matrix Mk of k'th order marginals is [Mk]j1...jk;i1...ik = p( σ(i1) = j1, . . . , σ(ik) = jk ) = Σσ∈S^{j1...jk}_{i1...ik} p(σ), where S^{j1...jk}_{i1...ik} is the two-sided coset µj1...jk Sn−k µ−1 i1...ik := { µj1...jk τ µ−1 i1...ik | τ ∈ Sn−k }. Clearly, these matrices satisfy a number of linear equations, and are therefore redundant. However, it can be shown that for some appropriate basis transformation matrix Tk, Mk = T⊤k ( ⊕λ∈Λn : λ1≥n−k p̂(λ) ) Tk, i.e., the Fourier matrices {p̂(λ)}λ : λ1=n−k capture exactly the "pure k'th order effects" in the distribution p. In the spectral analysis of rankings, as advocated, e.g., in [7], there is a lot of emphasis on projecting data to this space, Margk, but using an FFT this takes around O(n^2 n!) time. On the other hand, Margk is exactly the wavelet space Wk−1 of the Bi-CMRA generated by ν0 = {(n)} of Example 2. Therefore, when p is q–sparse, noting that d(n−1,1) = n−1, by using the methods of the previous section, we can find its projection to each of these spaces in just O(n^4 q) time. 6.2 Multi-object tracking In multi-object tracking, as mentioned in the Introduction, the first few Fourier coefficients {p̂(λ)}λ∈ξ (w.r.t. the majorizing order on permutations) provide an optimal approximation to the assignment distribution p between targets and tracks in the face of a random noise process [2][1]. However, observing target i at track j will zero out p everywhere outside the coset µj Sn−k µ−1 i, which is difficult for the Fourier approach to handle. In fact, by analogy with (7), denoting the operator that projects to the space of functions supported on this coset by Pij, the new distribution will just be Pij p.
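Both the observation operator Pij and the k = 1 marginal matrix M1[j][i] = p(σ(i) = j) of Section 6.1 are easy to prototype by brute force (0-indexed sketch; names are ours):

```python
from itertools import permutations

def first_order_marginals(p, n):
    """M[j][i] = p(sigma(i) = j): the k = 1 case of the marginal matrices M_k,
    for a distribution p given as a dict from permutation tuples to probabilities."""
    M = [[0.0] * n for _ in range(n)]
    for sigma, prob in p.items():
        for i in range(n):
            M[sigma[i]][i] += prob
    return M

def observe(p, i, j):
    """P_ij: zero out p everywhere outside the coset { sigma : sigma(i) = j },
    i.e., keep only the assignments consistent with 'target i is at track j'."""
    return {sigma: prob for sigma, prob in p.items() if sigma[i] == j}
```

For a probability distribution, M1 is doubly stochastic; after an observation, all remaining mass is concentrated on a single coset of size (n−1)!.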
Thus, if we set ν0 = ξ, after any single observation, our distribution will lie in V1 of the corresponding Bi-CMRA. Unfortunately, after a second observation, p will fall in V2, etc., leading to a combinatorial explosion in the size of the space needed to represent p. However, while each observation makes p less smooth, it also makes it more concentrated, suggesting that this problem is ideally suited to a sparse representation in terms of the overcomplete basis functions of Section 4.1. The important departure from the fast wavelet transforms of Section 5 is that now, to find the optimally sparse representation of p, we must allow branching to two-sided cosets of the form µj1...jk Sn−k µi1...ik, which are no longer mutually disjoint. 7 Conclusions Starting from the self-similar structure of the Sn−k coset tree, we developed a framework for wavelet analysis on the symmetric group. Our framework resembles Mallat's multiresolution analysis in its axiomatic foundations, yet is closer to continuous wavelet transforms in its invariance properties. It also has strong ties to the "separation of variables" technique of non-commutative FFTs [14]. In a certain special case we recover the analog of Haar wavelets on the coset tree. In general, wavelets can circumvent the rigidity of the Fourier approach when dealing with functions that are sparse and/or have discontinuities, and, in contrast to the O(n^2 n!) complexity of the best FFTs, for sparse functions and a reasonable choice of ν0, our fast wavelet transform runs in O(n^p) time for some small p. Importantly, wavelets also provide a natural basis for sparse approximations, which have hitherto not been explored much in the context of permutations. Finally, much of our framework is applicable not just to the symmetric group, but to other finite groups as well. References [1] J. Huang, C. Guestrin, and L. Guibas. Fourier Theoretic Probabilistic Inference over Permutations.
Journal of Machine Learning Research, 10:997–1070, 2009. [2] R. Kondor, A. Howard, and T. Jebara. Multi-object tracking with representations of the symmetric group. In Artificial Intelligence and Statistics (AISTATS), 2007. [3] S. Jagabathula and D. Shah. Inferring Rankings under Constrained Sensing. In In Advances in Neural Information Processing Systems (NIPS), 2008. [4] J. Huang, C. Guestrin, X. Jiang, and L. Guibas. Exploiting Probabilistic Independence for Permutations. In Artificial Intelligence and Statistics (AISTATS), 2009. [5] X. Jiang, J. Huang, and L. Guibas. Fourier-information duality in the identity management problem. In In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Athens, Greece, September 2011. [6] D. Rockmore, P. Kostelec, W. Hordijk, and P. F. Stadler. Fast Fourier Transforms for Fitness Landscapes. Applied and Computational Harmonic Analysis, 12(1):57–76, 2002. [7] P. Diaconis. Group representations in probability and statistics. Institute of Mathematical Statistics, 1988. [8] M. Gavish, B. Nadler, and R. R. Coifman. Multiscale Wavelets on Trees, Graphs and High Dimensional Data: Theory and Applications to Semi Supervised Learning. In International Conference on Machine Learning (ICML), 2010. [9] R. R. Coifman and M. Maggioni. Diffusion wavelets. Applied and Computational Harmonic Analysis, 21, 2006. [10] D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30:129–150, 2011. [11] S. G. Mallat. A Theory for Multiresolution Signal Decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11:674–693, 1989. [12] M. Clausen. Fast generalized Fourier transforms. Theor. Comput. Sci., 67(1):55–63, 1989. [13] D. Maslen and D. Rockmore. Generalized FFTs – a survey of some recent results. In Groups and Computation II, volume 28 of DIMACS Ser. 
Discrete Math. Theor. Comput. Sci., pages 183–287. AMS, Providence, RI, 1997. [14] D. K. Maslen and D. N. Rockmore. Separation of Variables and the Computation of Fourier Transforms on Finite Groups, I. Journal of the American Mathematical Society, 10:169–214, 1997.
| 2012 | 200 | 4,565 |
Learned Prioritization for Trading Off Accuracy and Speed∗ Jiarong Jiang∗ Adam Teichert† Hal Daumé III∗ Jason Eisner† ∗Department of Computer Science University of Maryland College Park, MD 20742 {jiarong,hal}@umiacs.umd.edu †Department of Computer Science Johns Hopkins University Baltimore, MD 21218 {teichert,eisner}@jhu.edu Abstract Users want inference to be both fast and accurate, but quality often comes at the cost of speed. The field has experimented with approximate inference algorithms that make different speed-accuracy tradeoffs (for particular problems and datasets). We aim to explore this space automatically, focusing here on the case of agenda-based syntactic parsing [12]. Unfortunately, off-the-shelf reinforcement learning techniques fail to learn good policies: the state space is simply too large to explore naively. An attempt to counteract this by applying imitation learning algorithms also fails: the "teacher" follows a far better policy than anything in our learner's policy space, free of the speed-accuracy tradeoff that arises when oracle information is unavailable, and thus largely insensitive to the known reward function. We propose a hybrid reinforcement/apprenticeship learning algorithm that learns to speed up an initial policy, trading off accuracy for speed according to various settings of a speed term in the loss function. 1 Introduction The nominal goal of predictive inference is to achieve high accuracy. Unfortunately, high accuracy often comes at the price of slow computation. In practice one wants a "reasonable" tradeoff between accuracy and speed. But the definition of "reasonable" varies with the application. Our goal is to optimize a system with respect to a user-specified speed/accuracy tradeoff, on a user-specified data distribution. We formalize our problem in terms of learning priority functions for generic inference algorithms (Section 2).
Much research in natural language processing (NLP) has been dedicated to finding speedups for exact or approximate computation in a wide range of inference problems including sequence tagging, constituent parsing, dependency parsing, and machine translation. Many of the speedup strategies in the literature can be expressed as pruning or prioritization heuristics. Prioritization heuristics govern the order in which search actions are taken, while pruning heuristics explicitly dictate whether particular actions should be taken at all. Examples of prioritization include A∗ [13] and Hierarchical A∗ [19] heuristics, which, in the case of agenda-based parsing, prioritize parse actions so as to reduce work while maintaining the guarantee that the most likely parse is found. Alternatively, coarse-to-fine pruning [21], classifier-based pruning [23, 22], beam-width prediction [3], etc. can result in even faster inference if a small amount of search error can be tolerated. Unfortunately, deciding which techniques to use for a specific setting can be difficult: it is impractical to "try everything." In the same way that statistical learning has dramatically improved the accuracy of NLP applications, we seek to develop statistical learning technology that can dramatically improve their speed while maintaining tolerable accuracy. By combining reinforcement learning and imitation learning methods, we develop an algorithm that can successfully learn such a tradeoff in the context of constituency parsing. Although this paper focuses on parsing, we expect the approach to transfer to prioritization in other agenda-based algorithms, such as machine translation and residual belief propagation. We give a broader discussion of this setting in [8]. ∗This material is based upon work supported by the National Science Foundation under Grant No. 0964681. 2 Priority-based Inference Inference algorithms in NLP (e.g.
parsers, taggers, or translation systems) as well as more broadly in artificial intelligence (e.g., planners) often rely on prioritized exploration. For concreteness, we describe inference in the context of parsing, though it is well known that this setting captures all the essential structure of a much larger family of “deductive inference” problems [12, 9].

2.1 Prioritized Parsing

Given a probabilistic context-free grammar, one approach to inferring the best parse tree for a given sentence is to build the tree from the bottom up by dynamic programming, as in CKY [29]. When a prospective constituent such as “NP from 3 to 8” is built, its Viterbi inside score is the log-probability of the best known subparse that matches that description.1 A standard extension of the CKY algorithm [12] uses an agenda—a priority queue of constituents built so far—to decide which constituent is most promising to extend next, as detailed in Section 2.2 below. The success of the inference algorithm in terms of speed and accuracy hinges on its ability to prioritize “good” actions before “bad” actions. In our context, a constituent is “good” if it somehow leads to a high-accuracy solution, quickly.

Running Example 1. Either CKY or an agenda-based parser that prioritizes by Viterbi inside score will find the highest-scoring parse. This achieves a percentage accuracy of 93.3, given the very large grammar and experimental conditions described in Section 6. However, the agenda-based parser is over an order of magnitude faster than CKY (wall clock time) because it stops as soon as it finds a parse, without building further constituents. With mild pruning according to Viterbi inside score, the accuracy remains 93.3 and the speed triples. With more aggressive pruning, the accuracy drops to 92.0 and the speed triples again. Our goal is to learn a prioritization function that achieves such speed-accuracy tradeoffs automatically.
In order to operationalize this approach, we need to define the test-time objective function we wish to optimize; we choose a simple linear interpolation of accuracy and speed:

quality = accuracy − λ × time    (1)

where we can choose a λ that reflects our true preferences. The goal of λ is to encode “how much more time am I willing to spend to achieve an additional unit of accuracy?” In this paper, we consider a very simple notion of time: the number of constituents popped from/pushed into the agenda during inference, halting inference as soon as the parser pops its first complete parse. When considering how to optimize the expectation of Eq (1) over test data, several challenges present themselves. First, this is a sequential decision process: the parsing decisions made at a given time may affect both the availability and goodness of future decisions. Second, the parser’s total runtime and accuracy on a sentence are unknown until parsing is complete, making this an instance of delayed reward. These considerations lead us to formulate this problem as a Markov Decision Process (MDP), a well-studied model of decision processes.

2.2 Inference as a Markov Decision Process

A Markov Decision Process (MDP) is a formalization of a memoryless search process. An MDP consists of a state space S, an action space A, and a transition function T. An agent in an MDP observes the current state s ∈ S and chooses an action a ∈ A. The environment responds by transitioning to a state s′ ∈ S, sampled from the transition distribution T(s′ | s, a). The agent then observes its new state and chooses a new action. An agent’s policy π describes how the (memoryless) agent chooses an action based on its current state, where π is either a deterministic function of the state (i.e., π(s) ↦ a) or a stochastic distribution over actions (i.e., π(a | s)). For parsing, the state is the full current chart and agenda (and is astronomically large: roughly 10^17 states for average sentences).
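The pop/combine/push dynamics of agenda-based inference can be sketched as a short loop. This is a generic illustration under stated assumptions, not the paper's implementation: `combine`, `priority`, and `is_complete_parse` are hypothetical stand-ins for the grammar's deduction rules, the learned priority θ · φ(a, s), and the halting test.

```python
import heapq

def agenda_parse(initial_items, combine, priority, is_complete_parse):
    """Minimal sketch of an agenda-based inference loop.

    `combine(item, chart)` yields the new items licensed by combining `item`
    with adjacent chart entries; `priority` plays the role of theta . phi(a, s).
    All three callables are hypothetical stand-ins, not the authors' code.
    """
    chart = set()
    # heapq is a min-heap, so push negated priorities to pop the max first.
    agenda = [(-priority(it), it) for it in initial_items]
    heapq.heapify(agenda)
    pops = 0
    while agenda:
        _, item = heapq.heappop(agenda)
        pops += 1
        if item in chart:          # duplicate already popped; skip
            continue
        chart.add(item)
        if is_complete_parse(item):
            return item, pops      # halt at the first complete parse
        for new in combine(item, chart):
            heapq.heappush(agenda, (-priority(new), new))
    return None, pops
```

With items represented as toy spans and a `combine` that concatenates adjacent spans, the loop halts at the first complete parse and reports the number of pops, the paper's measure of time.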
The agent controls which item (constituent) to “pop” from the agenda. The initial state has an agenda consisting of all single-word constituents, and an empty chart of previously popped constituents. Possible actions correspond to items currently on the agenda. When the agent chooses to pop item y, the environment deterministically adds y to the chart, combines y as licensed by the grammar with adjacent items z in the chart, and places each resulting new item x on the agenda. [Footnote 1: E.g., the maximum log-probability of generating some tree whose fringe is the substring spanning words (3,8], given that NP (noun phrase) is the root nonterminal. This is the total log-probability of rules in the tree.] (Duplicates in the chart or agenda are merged: the one of highest Viterbi inside score is kept.) The only stochasticity is the initial draw of a new sentence to be parsed. We are interested in learning a deterministic policy that always pops the highest-priority available action. Thus, learning a policy corresponds to learning a priority function. We define the priority of action a in state s as the dot product of a feature vector φ(a, s) with the weight vector θ; our features are described in Section 2.3. Formally, our policy is

πθ(s) = argmax_a θ · φ(a, s)    (2)

An admissible policy in the sense of A∗ search [13] would guarantee that we always return the parse of highest Viterbi inside score—but we do not require this, instead aiming to optimize Eq (1).

2.3 Features for Prioritized Parsing

We use the following simple features to prioritize a possible constituent: (1) Viterbi inside score; (2) constituent touches start of sentence; (3) constituent touches end of sentence; (4) constituent length; (5) constituent length / sentence length; (6) log p(constituent label | prev.
word POS tag) and log p(constituent label | next word POS tag), where the part-of-speech (POS) tag of w is taken to be argmax_t p(w | t) under the grammar; (7) 12 features indicating whether the constituent’s {preceding, following, initial} word starts with an {uppercase, lowercase, number, symbol} character; (8) the 5 most positive and 5 most negative punctuation features from [14], which consider the placement of punctuation marks within the constituent. The log-probability features (1), (6) are inspired by work on figures of merit for agenda-based parsing [4], while case and punctuation patterns (7), (8) are inspired by structure-free parsing [14].

3 Reinforcement Learning

Reinforcement learning (RL) provides a generic solution to solving learning problems with delayed reward [25]. The reward function takes a state of the world s and an agent’s chosen action a and returns a real value r that indicates the “immediate reward” the agent receives for taking that action. In general the reward function may be stochastic, but in our case, it is deterministic: r(s, a) ∈ R. The reward function we consider is:

r(s, a) = acc(a) − λ · time(s)   if a is a full parse tree
          0                      otherwise                   (3)

Here, acc(a) measures the accuracy of the full parse tree popped by the action a (against a gold standard) and time(s) is a user-defined measure of time. In words, when the parser completes parsing, it receives the reward given by Eq (1); at all other times, it receives no reward.

3.1 Boltzmann Exploration

At test time, the transition between states is deterministic: our policy always chooses the action a that has highest priority in the current state s. However, during training, we promote exploration of policy space by running with stochastic policies πθ(a | s). Thus, there is some chance of popping a lower-priority action, to find out if it is useful and should be given higher priority. In particular, we use Boltzmann exploration to construct a stochastic policy with a Gibbs distribution.
Our policy is:

πθ(a | s) = (1 / Z(s)) exp((1 / temp) θ · φ(a, s)),  with Z(s) as the appropriate normalizing constant    (4)

That is, the log-likelihood of action a at state s is an affine function of its priority. The temperature temp controls the amount of exploration. As temp → 0, πθ approaches the deterministic policy in Eq (2); as temp → ∞, πθ approaches the uniform distribution over available actions. During training, temp can be decreased to shift from exploration to exploitation. A trajectory τ is the complete sequence of state/action/reward triples from parsing a single sentence. As is common, we denote τ = ⟨s0, a0, r0, s1, a1, r1, . . . , sT, aT, rT⟩, where: s0 is the starting state; at is chosen by the agent by πθ(at | st); rt = r(st, at); and st+1 is drawn by the environment from T(st+1 | st, at), deterministically in our case. At a given temperature, the weight vector θ gives rise to a distribution over trajectories and hence to an expected total reward:

R = E_{τ∼πθ}[R(τ)] = E_{τ∼πθ}[ Σ_{t=0}^{T} rt ]    (5)

where τ is a random trajectory chosen by policy πθ, and rt is the reward at step t of τ.

3.2 Policy Gradient

Given our features, we wish to find parameters that yield the highest possible expected reward. We carry out this optimization using a stochastic gradient ascent algorithm known as policy gradient [27, 26]. This operates by taking steps in the direction of ∇θR:

∇θ Eτ[R(τ)] = Eτ[ (∇θ pθ(τ) / pθ(τ)) R(τ) ] = Eτ[ R(τ) ∇θ log pθ(τ) ] = Eτ[ R(τ) Σ_{t=0}^{T} ∇θ log πθ(at | st) ]    (6)

The expectation can be approximated by sampling trajectories. It also requires computing the gradient of each policy decision, which, by Eq (4), is:

∇θ log πθ(at | st) = (1 / temp) ( φ(at, st) − Σ_{a′∈A} πθ(a′ | st) φ(a′, st) )    (7)

Combining Eq (6) and Eq (7) gives the form of the gradient with respect to a single trajectory. The policy gradient algorithm samples one trajectory (or several) according to the current πθ, and then takes a gradient step according to Eq (6).
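A minimal sketch of this sampled-gradient update (Eqs (4), (6), (7)), using toy list-based feature vectors rather than the parser's actual chart/agenda state; all names and data structures here are illustrative assumptions, not the authors' code.

```python
import math

def boltzmann_probs(theta, feats, temp=1.0):
    """Gibbs distribution over available actions (Eq 4); feats[a] is phi(a, s)."""
    scores = {a: sum(t * f for t, f in zip(theta, phi)) / temp
              for a, phi in feats.items()}
    m = max(scores.values())                     # subtract max for stability
    exps = {a: math.exp(s - m) for a, s in scores.items()}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

def policy_gradient_step(theta, trajectory, temp=1.0, lr=0.1):
    """One REINFORCE-style update (Eqs 6-7) from a sampled trajectory.

    `trajectory` is a list of (feats, action, reward) triples, where feats maps
    each action available in that state to its feature vector -- a toy stand-in
    for the parser's agenda states.
    """
    total_reward = sum(r for _, _, r in trajectory)
    grad = [0.0] * len(theta)
    for feats, action, _ in trajectory:
        probs = boltzmann_probs(theta, feats, temp)
        for i in range(len(theta)):
            expected = sum(probs[a] * feats[a][i] for a in feats)
            grad[i] += (feats[action][i] - expected) / temp   # Eq (7)
    # Weight the whole trajectory's gradient by its total reward (Eq 6).
    return [t + lr * total_reward * g for t, g in zip(theta, grad)]
```

Each update pushes up the probability of actions taken on high-reward trajectories and pushes it down on low-reward ones, exactly the behavior the next paragraph describes.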
This increases the probability of actions on high-reward trajectories more than actions on low-reward trajectories.

Running Example 2. The baseline system from Running Example 1 always returns the target parse (the complete parse with maximum Viterbi inside score). This achieves an accuracy of 93.3 (percent recall) and speed of 1.5 mpops (million pops) on training data. Unfortunately, running policy gradient from this starting point degrades speed and accuracy. Training is not practically feasible: even the first pass over 100 training sentences (sampling 5 trajectories per sentence) takes over a day.

3.3 Analysis

One might wonder why policy gradient performed so poorly on this problem. One hypothesis is that it is the fault of stochastic gradient descent: the optimization problem was too hard or our step sizes were chosen poorly. To address this, we attempted an experiment where we added a “cheating” feature to the model, which had a value of one for constituents that should be in the final parse, and zero otherwise. Under almost every condition, policy gradient was able to learn a near-optimal policy by placing high weight on this cheating feature. An alternative hypothesis is overfitting to the training data. However, we were unable to achieve significantly higher accuracy even when evaluating on our training data—indeed, even for a single train/test sentence. The main difficulty with policy gradient is credit assignment: it has no way to determine which actions were “responsible” for a trajectory’s reward. Without causal reasoning, we need to sample many trajectories in order to distinguish which actions are reliably associated with higher reward. This is a significant problem for us, since the average trajectory length of an A∗0 parser on a 15-word sentence is about 30,000 steps, only about 40 of which (less than 0.15%) are actually needed to successfully complete the parse optimally.
3.4 Reward Shaping

A classic approach to attenuating the credit assignment problem when one has some knowledge about the domain is reward shaping [10]. The goal of reward shaping is to heuristically associate portions of the total reward with specific time steps, and to favor actions that are observed to be soon followed by a reward, on the assumption that they caused that reward. If speed is measured by the number of popped items and accuracy is measured by labeled constituent recall of the first-popped complete parse (compared to the gold-standard parse), one natural way to shape rewards is to give an immediate penalty for the time incurred in performing the action while giving an immediate positive reward for actions that build constituents of the gold parse. Since only some of the correct constituents built may actually make it into the returned tree, we can correct for having “incorrectly” rewarded the others by penalizing the final action. Thus, the shaped reward:

r̃(s, a) = 1 − ∆(s, a) − λ   if a pops a complete parse (causing the parser to halt and return a)
           1 − λ             if a pops a labeled constituent that appears in the gold parse
           −λ                otherwise                                                           (8)

λ is from Eq (1), penalizing the runtime of each step. The 1 rewards a correct constituent. The correction ∆(s, a) is the number of correct constituents popped into the chart of s that were not in the first-popped parse a. It is easy to see that for any trajectory ending in a complete parse, the total shaped and unshaped rewards along a trajectory are equal (i.e., r(τ) = r̃(τ)). We now modify the total reward to use temporal discounting. Let 0 ≤ γ ≤ 1 be a discount factor. When rewards are discounted over time, the policy gradient becomes

∇θ E_{τ∼πθ}[R̃γ(τ)] = E_{τ∼πθ}[ Σ_{t=0}^{T} ( Σ_{t′=t}^{T} γ^{t′−t} r̃_{t′} ) ∇θ log πθ(at | st) ]    (9)

where r̃_{t′} = r̃(s_{t′}, a_{t′}). When γ = 1, the gradient of the above turns out to be equivalent to Eq (6) [20, section 3.1], and therefore following the gradient is equivalent to policy gradient.
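The way γ interpolates between delayed and immediate reward can be made concrete with a small helper that computes, for each step t, the inner sum Σ_{t′≥t} γ^{t′−t} r̃_{t′} from Eq (9). This is a generic sketch, not the paper's code.

```python
def discounted_returns(rewards, gamma):
    """Per-step credit used in Eq (9): each action is credited with the
    discounted sum of the shaped rewards that follow it (including its own).

    gamma = 1 recovers the undiscounted policy-gradient credit (every action
    gets the full future reward); gamma = 0 gives purely immediate reward.
    """
    returns = [0.0] * len(rewards)
    acc = 0.0
    for t in reversed(range(len(rewards))):  # backward pass reuses the suffix sum
        acc = rewards[t] + gamma * acc
        returns[t] = acc
    return returns
```

For example, with shaped rewards [1, 0, 2], γ = 1 credits every step with all future reward, while γ = 0 credits each step only with its own reward, matching the two limiting cases discussed around Eq (9).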
When γ = 0, the parser gets only immediate reward—and in general, a small γ assigns the credit for a local reward r̃_{t′} mainly to actions at closely preceding times. This gradient step can now achieve some credit assignment. If an action is on a good trajectory but occurs after most of the useful actions (pops of correct constituents), then it does not receive credit for those previously occurring actions. However, if it occurs before useful actions, it still does receive credit, because we do not know (without additional simulation) whether it was a necessary step toward those actions.

Running Example 3. Reward shaping helps significantly, but not enough to be competitive. As the parser speeds up, training is about 10 times faster than before. The best setting (γ = 0, λ = 10^−6) achieves an accuracy in the mid-70s with only about 0.2 mpops. No settings were able to achieve higher accuracy.

4 Apprenticeship Learning

In reinforcement learning, an agent interacts with an environment and attempts to learn to maximize its reward by repeating actions that led to high reward in the past. In apprenticeship learning, we assume access to a collection of trajectories taken by an optimal policy and attempt to learn to mimic those trajectories. The learner’s only goal is to behave like the teacher at every step: it does not have any notion of reward. In contrast, the related task of inverse reinforcement learning/optimal control [17, 11] attempts to infer a reward function from the teacher’s optimal behavior. Many algorithms exist for apprenticeship learning. Some of them work by first executing inverse reinforcement learning [11, 17] to induce a reward function and then feeding this reward function into an off-the-shelf reinforcement learning algorithm like policy gradient to learn an approximately optimal agent [1]. Alternatively, one can directly learn to mimic an optimal demonstrator, without going through the side task of trying to induce its reward function [7, 24].
4.1 Oracle Actions

With a teacher to help guide the learning process, we would like to explore more intelligently than Boltzmann exploration, in particular focusing on high-reward regions of policy space. We introduce oracle actions as guidance for areas to explore. Ideally, oracle actions should lead to a maximum-reward tree. In training, we will identify oracle actions to be those that build items in the maximum-likelihood parse consistent with the gold parse. When multiple oracle actions are available on the agenda, we will break ties according to the priority assigned by the current policy (i.e., choose the oracle action that it currently likes best).

4.2 Apprenticeship Learning via Classification

Given a notion of oracle actions, a straightforward approach to policy learning is to simply train a classifier to follow the oracle—a popular approach in incremental parsing [6, 5]. Indeed, this serves as the initial iteration of the state-of-the-art apprenticeship learning algorithm, DAGGER [24]. We train a classifier as follows. Trajectories are generated by following oracle actions, breaking ties using the initial policy (Viterbi inside score) when multiple oracle actions are available. These trajectories are incredibly short (roughly double the number of words in the sentence). At each step (st, at) in the trajectory, a classification example is generated, where the action taken by the oracle (at) is considered the correct class and all other available actions are considered incorrect. The classifier that we train on these examples is a maximum entropy classifier, so it has exactly the same form as the Boltzmann exploration model (Eq (4)) but without the temperature control. In fact, the gradient of this classifier (Eq (10)) is nearly identical to the policy gradient (Eq (6)) except that τ is distributed differently and the total reward R(τ) does not appear: instead of mimicking high-reward trajectories, we now try to mimic oracle trajectories.
E_{τ∼π∗}[ Σ_{t=0}^{T} ( φ(at, st) − Σ_{a′∈A} πθ(a′ | st) φ(a′, st) ) ]    (10)

where π∗ denotes the oracle policy, so at is the oracle action. The potential benefit of the classifier-based approach over policy gradient with shaped rewards is increased credit assignment. In policy gradient with reward shaping, an action gets credit for all future reward (though no past reward). In the classifier-based approach, it gets credit for exactly whether or not it builds an item that is in the true parse.

Running Example 4. The classifier-based approach performs only marginally better than policy gradient with shaped rewards. The best accuracy we can obtain is 76.5 with 0.19 mpops.

To execute the DAGGER algorithm, we would continue in the next iteration by following the trajectories learned by the classifier and generating new classification examples on those states. Unfortunately, this is not computationally feasible due to the poor quality of the policy learned in the first iteration. Attempting to follow the learned policy essentially tries to build all possible constituents licensed by the grammar, which can be prohibitively expensive. We will remedy this in Section 5.

4.3 What’s Wrong With Apprenticeship Learning

An obvious practical issue with the classifier-based approach is that it trains the classifier only at states visited by the oracle. This leads to the well-known problem that it is unable to learn to recover from past errors [2, 28, 7, 24]. Even though our current feature set depends only on the action and not on the state, making action scores independent of the current state, there is still an issue, since the set of actions to choose from does depend on the state. That is, the classifier is trained to discriminate only among the small set of agenda items available on the oracle trajectory (which are always combinations of correct constituents). But the action sets the parser faces at test time are much larger and more diverse.
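The classifier update of Eq (10) can be sketched in the same toy representation as the policy-gradient sketch: the only differences are that the steps come from oracle trajectories and that there is no reward weighting or temperature. The data structures below are hypothetical stand-ins, not the authors' code.

```python
import math

def maxent_oracle_step(theta, oracle_steps, lr=0.1):
    """One gradient step of the classifier-based approach (Eq 10).

    Each element of `oracle_steps` is (feats, a_star): feats maps every action
    available in that state to its feature vector phi(a, s), and a_star is the
    action the oracle took there.
    """
    grad = [0.0] * len(theta)
    for feats, a_star in oracle_steps:
        # Max-ent probabilities over the available actions (Eq 4 with temp = 1).
        scores = {a: sum(t * f for t, f in zip(theta, phi))
                  for a, phi in feats.items()}
        m = max(scores.values())
        exps = {a: math.exp(s - m) for a, s in scores.items()}
        z = sum(exps.values())
        probs = {a: e / z for a, e in exps.items()}
        # Push up the oracle action's features, push down the model expectation.
        for i in range(len(theta)):
            grad[i] += feats[a_star][i] - sum(probs[a] * feats[a][i] for a in feats)
    return [t + lr * g for t, g in zip(theta, grad)]
```

Comparing this with the policy-gradient update makes the text's point concrete: the two updates share the φ(a_t, s_t) minus expected-features form; only the trajectory distribution and the reward weighting differ.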
An additional objection to classifiers is that not all errors are created equal. Some incorrect actions are more expensive than others, if they create constituents that can be combined in many locally-attractive ways and hence slow the parser down or result in errors. Our classification problem does not distinguish among incorrect actions. The SEARN algorithm [7] would distinguish them by explicitly evaluating the future reward of each possible action (instead of using a teacher) and incorporating this into the classification problem. But explicit evaluation is computationally infeasible in our setting (at each time step, it must roll out a full future trajectory for each possible action from the agenda). Policy gradient provides another approach by observing which actions are good or bad across many random trajectories, but recall that we found it impractical as well. We do not further address this problem in this paper, but in [8] we suggested explicit causality analysis. A final issue has to do with the nature of the oracle. Recall that the oracle is “supposed to” choose optimal actions for the given reward. Also recall that our oracle always picks correct constituents. There seems to be a contradiction here: our oracle action selector ignores λ, the tradeoff between accuracy and speed, and only focuses on accuracy. This happens because for any reasonable setting of λ, the optimal thing to do is always to just build the correct tree without building any extra constituents. Only for very large values of λ is it optimal to do anything else, and for such values of λ, the learned model will have hugely negative reward. This means that under the apprenticeship learning setting, we are actually never going to be able to learn to trade off accuracy and speed: as far as the oracle is concerned, you can have both! The tradeoff only appears because our model cannot come remotely close to mimicking the oracle. 
5 Oracle-Infused Policy Gradient

The failure of both standard reinforcement learning algorithms and standard apprenticeship learning algorithms on our problem leads us to develop a new approach. We start with the policy gradient algorithm (Section 3.2) and use ideas from apprenticeship learning to improve it. Our formulation preserves the reinforcement learning flavor of our overall setting, which involves delayed reward for a known reward function. Our approach is specifically designed for the non-deterministic nature of the agenda-based parsing setting [8]: once some action a becomes available (appears on the agenda), it never goes away until it is taken. This makes the notion of “interleaving” oracle actions with policy actions both feasible and sensible. Like policy gradient, we draw trajectories from a policy and take gradient steps that favor actions with high reward under reward shaping. Like SEARN and DAGGER, we begin by exploring the space around the optimal policy and slowly explore out from there. To achieve this, we define the notion of an oracle-infused policy. Let π be an arbitrary policy and let δ ∈ [0, 1]. We define the oracle-infused policy π+_δ as follows:

π+_δ(a | s) = δ π∗(a | s) + (1 − δ) π(a | s)    (11)

In other words, when choosing an action, π+_δ explores the policy space with probability 1 − δ (according to its current model), but with probability δ, we force it to take an oracle action. Our algorithm takes policy gradient steps with reward shaping (Eqs (9) and (7)), but with respect to trajectories drawn from π+_δ rather than π. If δ = 0, it reduces to policy gradient, with reward shaping if γ < 1 and immediate reward if γ = 0. For δ = 1, the γ = 0 case reduces to the classifier-based approach with π∗ (which in turn breaks ties by choosing the best action under π).
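Sampling from the mixture policy of Eq (11), together with the annealing schedule described in the next paragraph, can be sketched in a few lines. `pi_star` and `pi_theta` are hypothetical callables standing in for the oracle and learned policies.

```python
import random

def oracle_infused_action(state, pi_star, pi_theta, delta, rng=random):
    """Sample from the mixture policy of Eq (11): with probability delta take
    the oracle's action, otherwise act according to the learned policy."""
    if rng.random() < delta:
        return pi_star(state)
    return pi_theta(state)

def delta_schedule(epoch, base=0.8):
    """The paper's annealing schedule, delta = 0.8**epoch: full oracle
    supervision on the initial pass, decaying toward pure policy gradient."""
    return base ** epoch
```

At δ = 1 every action is the oracle's (the classifier-based setting); as δ → 0 the trajectories are drawn entirely from the learned policy, recovering policy gradient.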
Similar to DAGGER and SEARN, we do not stay at δ = 1, but wean our learner off the oracle supervision as it starts to find a good policy π that imitates the classifier reasonably well. We use δ = 0.8^epoch, where epoch is the total number of passes made through the training set at that point (so δ = 0.8^0 = 1 on the initial pass). Over time, δ → 0, so that eventually we are training the policy to do well on the same distribution of states that it will pass through at test time (as in policy gradient). With intermediate values of δ (and γ ≈ 1), an iteration behaves similarly to an iteration of SEARN, except that it “rolls out” the consequences of an action chosen randomly from (11) instead of evaluating all possible actions in parallel.

Running Example 5. Oracle-infusion gives a competitive speed and accuracy tradeoff. A typical result is 91.2 with 0.68 mpops.

6 Experiments

All of our experiments (including those discussed earlier) are based on the Wall Street Journal portion of the Penn Treebank [15]. We use a probabilistic context-free grammar with 370,396 rules—enough to make the baseline system accurate but slow. We obtained it as a latent-variable grammar [16] using 5 split-merge iterations [21] on sections 2–20 of the Treebank, reserving section 22 for learning the parameters of our policy. All approaches to trading off speed and accuracy are trained on section 22; in particular, for the running example and Section 6.2, the same 100 sentences of at most 15 words from that section were used for training and test. We measure accuracy in terms of labeled recall (including preterminals) and measure speed in terms of the number of pops from the agenda. The limitation to relatively short sentences is purely for improved efficiency at training time.

6.1 Baseline Approaches

Our baseline approaches trade off speed and accuracy not by learning to prioritize, but by varying the pruning level ∆.
A constituent is pruned if its Viterbi inside score is more than ∆ worse than that of some other constituent that covers the same substring. Our baselines are: (HA∗) a Hierarchical A∗ parser [18] with the same pruning threshold at each hierarchy level; (A∗0) an A∗ parser with a 0 heuristic function plus pruning; (IDA∗0) an iterative deepening A∗ algorithm, in which a failure to find any parse causes us to increase ∆ and try again with less aggressive pruning (note that this is not the traditional meaning of IDA∗); and (CTF) the default coarse-to-fine parser in the Berkeley parser [21]. Several of these algorithms can make multiple passes, in which case the runtime (number of pops) is assessed cumulatively.

6.2 Learned Prioritization Approaches

Model              # of pops   Recall   F1
A∗0 (no pruning)   1,496,080   93.34   93.19
D−                   686,641   56.35   58.74
I−                   187,403   76.48   76.92
D+                 1,275,292   84.17   83.38
I+                   682,540   91.16   91.33

Figure 1: Performance on 100 sentences.

We explored four variants of our oracle-infused policy gradient with λ = 10^−6. Figure 1 shows the results on the 100 training sentences. The “−” tests are the degenerate case of δ = 1, or apprenticeship learning (Section 4.2), while the “+” tests use δ = 0.8^epoch as recommended in Section 5. Temperature matters for the “+” tests, and we use temp = 1. We performed stochastic gradient descent for 25 passes over the data, sampling 5 trajectories in a row for each sentence (when δ < 1, so that trajectories are random). We can see that the classifier-based approaches “−” perform poorly: when training trajectories consist of only oracle actions, learning is severely biased. Yet we saw in Section 3.2 that without any help from the oracle actions, we suffer from such large variance in the training trajectories that performance degrades rapidly and learning does not converge even after days of training.
Our “oracle-infused” compromise “+” uses some oracle actions: after several passes through the data, the parser learns to make good decisions without help from the oracle.

[Figure 2: Pareto frontiers: Our I+ parser at different values of λ, against the baselines (A∗0, IDA∗0, CTF, HA∗) at different pruning levels. The plot shows recall against # of pops.]

The other axis of variation is that the “D” tests (delayed reward) use γ = 1, while the “I” tests (immediate reward) use γ = 0. Note that I+ attempts a form of credit assignment and works better than D+.2 We were not able to get better results with intermediate values of γ, presumably because this crudely assigns credit for a reward (a correct constituent) to the actions that closely preceded it, whereas in our agenda-based parser the causes of the reward (pops of correct subconstituents and related actions) may have happened much earlier [8].

6.3 Pareto Frontier

Our final evaluation is on the held-out test set (length-limited sentences from Section 23). A 5-split grammar trained on sections 2–21 is used. Given our previous results in Figure 1, we only consider the I+ model: immediate reward with oracle infusion. To investigate trading off speed and accuracy, we learn and then evaluate a policy for each of several settings of the tradeoff parameter λ. We train our policy using sentences of at most 15 words from Section 22 and evaluate the learned policy on the held-out data (from Section 23). We measure accuracy as labeled constituent recall and evaluate speed in terms of the number of pops (or pushes) performed on the agenda. Figure 2 shows the baselines at different pruning thresholds as well as the performance of our policies trained using I+ for λ ∈ {10^−3, 10^−4, . . . , 10^−8}, using agenda pops as the measure of time. I+ is about 3 times as fast as unpruned A∗0 at the cost of about a 1% drop in accuracy (F-score from 94.58 to 93.56).
Thus, I+ achieves the same accuracy as the pruned version of A∗0 while still being twice as fast. I+ also improves upon HA∗ and IDA∗0 with respect to speed, at 60% of the pops. I+ always does better than the coarse-to-fine parser (CTF) in terms of both speed and accuracy, though using the number of agenda pops as our measure of speed puts both of our hierarchical baselines at a disadvantage. We also ran experiments using the number of agenda pushes as a more accurate measure of time, again sweeping over settings of λ. Since our reward shaping was crafted with agenda pops in mind, perhaps it is not surprising that learning performs relatively poorly in this setting. Still, we do manage to learn to trade off speed and accuracy. With a 1% drop in recall (F-score from 94.58 to 93.54), we speed up from A∗0 by a factor of 4 (from around 8 billion pushes to 2 billion). Note that known pruning methods could also be employed in conjunction with learned prioritization.

7 Conclusions and Future Work

In this paper, we considered the application of both reinforcement learning and apprenticeship learning to prioritize search in a way that is sensitive to a user-defined tradeoff between speed and accuracy. We found that a novel oracle-infused variant of the policy gradient algorithm for reinforcement learning is effective for learning a fast and accurate parser with only a simple set of features. In addition, we uncovered many properties of this problem that separate it from more standard learning scenarios, and designed experiments to determine the reasons off-the-shelf learning algorithms fail. An important avenue for future work is to consider better credit assignment. We are also very interested in designing richer feature sets, including “dynamic” features that depend on both the action and the state of the chart and agenda. One role for dynamic features is to decide when to halt.
The parser might decide to continue working past the first complete parse, or give up (returning a partial or default parse) before any complete parse is found.

[Footnote 2: The D− and I− approaches are quite similar to each other. Both train on oracle trajectories where all actions receive a reward of 1 − λ, and simply try to make these oracle actions probable. However, D− trains more aggressively on long trajectories, since (9) implies that it weights a given training action by T − t + 1, the number of future actions on that trajectory. The difference between D+ and I+ is more interesting because the trajectory includes non-oracle actions as well.]

References

[1] Pieter Abbeel and Andrew Ng. Apprenticeship learning via inverse reinforcement learning. In ICML, 2004.
[2] J. Andrew Bagnell. Robust supervised learning. In AAAI, 2005.
[3] Nathan Bodenstab, Aaron Dunlop, Keith Hall, and Brian Roark. Beam-width prediction for efficient CYK parsing. In ACL, 2011.
[4] Sharon A. Caraballo and Eugene Charniak. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24(2):275–298, 1998.
[5] Eugene Charniak. Top-down nearly-context-sensitive parsing. In EMNLP, 2010.
[6] Michael Collins and Brian Roark. Incremental parsing with the perceptron algorithm. In ACL, 2004.
[7] Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 75(3):297–325, 2009.
[8] Jason Eisner and Hal Daumé III. Learning speed-accuracy tradeoffs in nondeterministic inference algorithms. In COST: NIPS Workshop on Computational Trade-offs in Statistical Learning, 2011.
[9] Joshua Goodman. Semiring parsing. Computational Linguistics, 25(4):573–605, December 1999.
[10] V. Gullapalli and A. G. Barto. Shaping as a method for accelerating reinforcement learning. In Proceedings of the IEEE International Symposium on Intelligent Control, 1992.
[11] R. Kalman. Contributions to the theory of optimal control. Bol. Soc. Mat.
Mexicana, 5:558–563, 1968. [12] Martin Kay. Algorithm schemata and data structures in syntactic processing. In B. J. Grosz, K. Sparck Jones, and B. L. Webber, editors, Readings in Natural Language Processing, pages 35–70. Kaufmann, 1986. First published (1980) as Xerox PARC TR CSL-80-12. [13] Dan Klein and Chris Manning. A* parsing: Fast exact Viterbi parse selection. In NAACL/HLT, 2003. [14] Percy Liang, Hal Daum´e III, and Dan Klein. Structure compilation: Trading structure for features. In ICML, Helsinki, Finland, 2008. [15] M.P. Marcus, M.A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational linguistics, 19(2):330, 1993. [16] Takuya Matsuzaki, Yusuke Miyao, and Junichi Tsujii. Probabilistic CFG with latent annotations. In ACL, 2005. [17] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In ICML, 2000. [18] A. Pauls and D. Klein. Hierarchical search for parsing. In NAACL/HLT, pages 557–565. Association for Computational Linguistics, 2009. [19] A. Pauls and D. Klein. Hierarchical A* parsing with bridge outside scores. In ACL, pages 348–352. Association for Computational Linguistics, 2010. [20] Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4), 2008. [21] S. Petrov and D. Klein. Improved inference for unlexicalized parsing. In NAACL/HLT, pages 404–411, 2007. [22] B. Roark, K. Hollingshead, and N. Bodenstab. Finite-state chart constraints for reduced complexity context-free parsing pipelines. Computational Linguistics, Early Access:1–35, 2012. [23] Brian Roark and Kristy Hollingshead. Classifying chart cells for quadratic complexity context-free inference. In COLING, pages 745–752, Manchester, UK, August 2008. Coling 2008 Organizing Committee. [24] Stephane Ross, Geoff J. Gordon, and J. Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AI-Stats, 2011. 
[25] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. [26] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pages 1057–1063. MIT Press, 2000. [27] R.J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(23), 1992. [28] Yuehua Xu and Alan Fern. On learning linear ranking functions for beam search. In ICML, pages 1047– 1054, 2007. [29] D. H. Younger. Recognition and parsing of context-free languages in time n3. Information and Control, 10(2):189–208, February 1967. 9
|
2012
|
201
|
4,566
|
Learning as MAP Inference in Discrete Graphical Models

Xianghang Liu, NICTA/UNSW, Sydney, Australia, xianghang.liu@nicta.com.au
James Petterson, NICTA/ANU, Canberra, Australia, james.petterson@nicta.com.au
Tiberio S. Caetano, NICTA/ANU/University of Sydney, Canberra and Sydney, Australia, tiberio.caetano@nicta.com.au

Abstract

We present a new formulation for binary classification. Instead of relying on convex losses and regularizers such as in SVMs, logistic regression and boosting, or on non-convex but continuous formulations such as those encountered in neural networks and deep belief networks, our framework entails a non-convex but discrete formulation, where estimation amounts to finding a MAP configuration in a graphical model whose potential functions are low-dimensional discrete surrogates for the misclassification loss. We argue that such a discrete formulation can naturally account for a number of issues that are typically encountered in either the convex or the continuous non-convex approaches, or both. By reducing the learning problem to a MAP inference problem, we can immediately translate the guarantees available for many inference settings to the learning problem itself. We empirically demonstrate in a number of experiments that this approach is promising in dealing with issues such as severe label noise, while still having global optimality guarantees. Due to the discrete nature of the formulation, it also allows for direct regularization through cardinality-based penalties, such as the ℓ0 pseudo-norm, thus providing the ability to perform feature selection and to trade off interpretability and predictability in a principled manner. We also outline a number of open problems arising from the formulation.

1 Introduction

A large fraction of the machine learning community concerns itself with the formulation of a learning problem as a single, well-defined optimization problem.
This is the case for many popular techniques, including those associated with margin- or likelihood-based estimators, such as SVMs, logistic regression, boosting, CRFs and deep belief networks. Among these optimization-based frameworks for learning, two paradigms stand out: the one based on convex formulations (such as SVMs) and the one based on non-convex formulations (such as deep belief networks). The main argument in favor of convex formulations is that we can effectively decouple modeling from optimization, which has substantial theoretical and practical benefits. In particular, it is of great value in terms of reproducibility, modularity and ease of use. Coming from the other end, the main argument for non-convexity is that a convex formulation very often fails to capture fundamental properties of a real problem (e.g., see [1, 2] for examples of some fundamental limitations of convex loss functions).

The motivation for this paper starts from the observation that the above tension is not really between convexity and non-convexity, but between convexity and continuous non-convexity. Historically, the optimization-based approach to machine learning has been virtually a synonym of continuous optimization. Estimation in continuous parameter spaces in some cases allows for closed-form solutions (such as in least-squares regression); otherwise we can resort to computing gradients (for smooth continuous functions) or subgradients (for non-smooth continuous functions), which give us a generic tool for finding a local optimum of an arbitrary continuous function (a global optimum if the function is convex). On the contrary, unless P = NP there is no general tool to efficiently optimize discrete functions. We suspect this is one of the reasons why machine learning has traditionally been formulated in terms of continuous optimization: it is indeed convenient to compute gradients or subgradients and delegate optimization to some off-the-shelf gradient-based algorithm.
The formulation we introduce in this paper is non-convex, but discrete rather than continuous. By being non-convex we will attempt to capture some of the expressive power of continuous non-convex formulations (such as robustness to labeling noise), and by being discrete we will retain the ability of convex formulations to provide theoretical guarantees in optimization. There are highly non-trivial classes of non-convex discrete functions defined over exponentially large discrete spaces which can be optimized efficiently. This is, after all, the main topic of combinatorial optimization. Discrete functions factored over cliques of low-treewidth graphs can be optimized efficiently via dynamic programming [3]. Arbitrary submodular functions can be minimized in polynomial time [4]. Particular submodular functions can be optimized very efficiently using max-flow algorithms [5]. Discrete functions defined over other particular classes of graphs also have polynomial-time algorithms (planar graphs [6], perfect graphs [7]). And although many discrete optimization problems are NP-hard, several have efficient constant-factor approximations [8]. In addition, much progress has been made recently on developing tight LP relaxations for hard combinatorial problems [9]. Although all these discrete approaches have been widely used for solving inference problems in machine learning settings, we argue in this paper that they should also be used to solve estimation problems, i.e., learning per se. The discrete approach does pose several new questions, though, which we list at the end. Our contribution is to outline the overall framework in terms of a few key ideas and assumptions, as well as to empirically evaluate particular model instances of the framework on real-world datasets. Although these instances are very simple, they already display important desirable behavior that is missing in state-of-the-art estimators such as SVMs.
2 Desiderata

We want to rethink the problem of learning a linear binary classifier. In this section we list the features that we would like a general-purpose learning machine for this problem to possess. These features essentially guide the assumptions behind our framework.

Option to decouple modeling from optimization: As discussed in the introduction, this is the great appeal of convex formulations, and we would like to retain it. Note, however, that we want the option, not necessarily a mandate, of always decoupling modeling from optimization. We want to be able to please the user who is not an optimization expert or doesn't have the time or resources to refine the optimizer, by having the option of requesting the learning machine to configure itself in a mode in which global optimization is guaranteed and the runtime of optimization is precisely predictable. However, we also want to please the user who is an expert, and is willing to spend a lot of time refining the optimizer, to achieve the best possible results regardless of training time considerations. In our framework, we have the option to explore the spectrum between simpler models, for which we can generate precise estimates of the runtime of the whole algorithm, and more complex models, where we can focus on boosted performance at the expense of runtime predictability or demand for expert-exclusive fine-tuning skills.

Option of simplicity: This point is related to the previous one, but it is more general. The complexity of a learning algorithm is a great barrier for its dissemination, even if it promises exceptional results once properly implemented. Most users of machine learning are not machine learning experts themselves, and for them in particular the cost of getting a complex algorithm to work often outweighs the accuracy gains, especially if a reasonably good solution can be obtained with a very simple algorithm.
For instance, in our framework the user has the option of reducing the learning algorithm to a series of matrix multiplications and lookup operations, while having a precise estimate of the total runtime of the algorithm and retaining good performance.

Robustness to label noise: SVMs are considered state-of-the-art estimators for binary classifiers, as are boosting and logistic regression. All of these optimize convex loss functions. However, when label noise is present, convex loss functions inflict an arbitrarily large penalty on misclassifications because they are unbounded. In other words, in high label noise settings these convex loss functions become poor proxies for the 0/1 loss (the loss we really care about). This fundamental limitation of convex loss functions is well understood theoretically [1]. The fact that the loss function of interest is itself discrete is indeed a hint that maybe we should investigate discrete rather than continuous surrogates for the 0/1 loss: optimizing discrete functions over continuous spaces is hard, but not necessarily over discrete spaces. In our framework we directly address this issue.

Ability to achieve sparsity: Often we need to estimate sparse models. This can be for several reasons, including interpretability (to be able to tell which are the ‘most important’ features), efficiency (at prediction time we can only afford to use a limited number of features) or, importantly, purely statistical reasons (constraining the solution to low-dimensional subspaces has a regularization effect). The standard convex approach uses ℓ1 regularization. However, the assumptions required to make ℓ1-regularized models actually be good proxies for the support cardinality function (the ℓ0 pseudo-norm) are very strong and in practice rarely met [10].
In fact this has motivated an entire new line of work on structured sparsity, which tries to further regularize the solution so as to obtain better statistical properties in high dimensions [11, 12, 13]. This however comes at the price of more expensive optimization algorithms. Ideally we would like to regularize with ℓ0 directly; maybe this suggests the possibility of exploring an inherently discrete formulation? In our approach we have the ability to perform direct regularization via the ℓ0 pseudo-norm, or other scale-invariant regularizers.

Leverage the power of low-dimensional approximations: Machine learning folklore has it that the Naive Bayes assumption (features conditionally independent given the class label) often produces remarkably good classifiers. So a natural question is: is it really necessary to work directly in the original high-dimensional space, as SVMs do? A key aspect of our framework is that we explicitly exploit the concept of composing a high-dimensional model from low-dimensional pieces. However, we go beyond the Naive Bayes assumption by constructing graphs that model dependencies between variables. By varying the properties of these graphs we can trade off model complexity and optimization efficiency in a straightforward manner.

3 Basic Setting

Much of current machine learning research studies estimators of the type

$\operatorname*{argmin}_{\theta \in \Theta} \sum_n \ell(y_n, f(x_n; \theta)) + \lambda \Omega(\theta) \qquad (1)$

where {x_n, y_n} is a training set of inputs x ∈ X and outputs y ∈ Y, assumed sampled independently from an unknown probability measure P on X × Y. Here f : X → Y is a member of a given class of predictors parameterized by θ, Θ is a continuous space such as a Hilbert space, and ℓ as well as Ω are continuous and convex functions of θ. ℓ is a loss function which enforces a penalty whenever f(x_n) ≠ y_n, and therefore the first term in (1) measures the total loss incurred by predictor f on the training sample {x_n, y_n} under parameterization θ.
Ω controls the complexity of θ so as to avoid overfitting, and λ trades off the importance of a good fit to the training set versus model parsimony, so that good generalization is hopefully achieved. Problem (1) is often called regularized empirical risk minimization, since the first term is the risk (expected loss) under the empirical distribution of the training data, and the second is a regularizer. This formulation is used for regression (Y continuous) as well as classification and structured prediction (Y discrete). Logistic Regression, Regularized Least-Squares Regression, SVMs, CRFs, structured SVMs, Lasso, Group Lasso and a variety of other estimators are all instances of (1) for particular choices of ℓ, f, Θ and Ω. The formulation in (1) is a very general formulation for machine learning under the i.i.d. assumption. In this paper we study problem (1) under the assumption that the parameter space Θ is discrete and finite, focusing on binary classification, where Y = {−1, 1}.

4 Formulation

Our formulation departs from the one in (1) in two ways. The first assumption is that both the loss ℓ and the regularizer Ω are additive over low-dimensional functions defined by a graph G = (V, E), i.e.,

$\ell(y, f(x; \theta)) = \sum_{c \in C} \ell_c(y, f_c(x; \theta_c)) \qquad (2)$

$\Omega(\theta) = \sum_{c \in C'} \Omega_c(\theta_c) \qquad (3)$

where C ∪ C′ is the set of maximal cliques in G. Note that (3) is standard: the ℓ1 and ℓ2 norms, for example, are both additive over singletons (in which case C′ = V). The arguably strong assumption here is (2). C is the set of parts, where each part c is, in principle, an arbitrary subset of {1, . . . , D}, where D is the dimensionality of the parameterization, i.e., θ = (θ1, . . . , θD). ℓc is a low-dimensional discrete surrogate for ℓ, and fc is a low-dimensional predictor, both to be defined below. Note that in general two parameter subvectors θci and θcj are not independent, since the cliques ci and cj can overlap.
Indeed, one of the key reasons sustaining the power of this formulation is that all θc are coupled either directly or indirectly through the connected graph G = (V, E). The second assumption is that Θ is discrete, and therefore the vector θ = (θ1, . . . , θD) is discrete in the sense that each θi is only allowed to take on finitely many values, including the value 0 (this will be important when we discuss regularization). For simplicity of exposition let's assume that the number of discrete values (bins) for each θi is the same: B. B can potentially be quite large; for example, it can be in the hundreds.

Random Projections. An instance x above is in reality not the raw feature vector but a random projection of it into a space of the same or higher dimension; i.e., we effectively apply X = RX′, where X′ is the original data matrix, R is a random matrix with entries drawn from N(0, 1), and X is the new data matrix. This often improves performance for our model due to the spreading of higher-order dependencies over lower-order cliques (when mapping to a higher-dimensional space), and is also motivated by a theoretical argument (Section 6). In what follows, x is the feature vector after the projection.

Low-Dimensional Predictor. We will assume a standard linear predictor of the kind

$f_c(x; \theta) = \operatorname*{argmax}_{y \in \{-1,1\}} y \, \langle x_c, \theta_c \rangle = \operatorname{sign} \langle x_c, \theta_c \rangle \qquad (4)$

In other words, we have a linear classifier that only considers the features in clique c.¹

Low-Dimensional Discrete Surrogates for the 0/1 Loss. The low-dimensional discrete surrogate for the 0/1 loss is simply defined as the 0/1 loss incurred by predictor fc:

$\ell_c(y, f_c(x; \theta)) = (1 - y \, f_c(x; \theta))/2 \qquad (5)$

A key observation now is that fc, and therefore ℓc, can be computed in O(B^k) time by full enumeration over the B^k instantiations of θc, where k is the size of clique c. In other words, the 0/1 loss constrained to the discretized subspace defined by clique c can be exactly and efficiently computed (for small cliques).

Regularization.
One critical technical issue is that linear predictors of the kind argmax_y ⟨φ(x, y), θ⟩ are insensitive to scalings of θ [14]. Therefore, the loss ℓ will be such that ℓ(y, f(x; αθ)) = ℓ(y, f(x; θ)) for α ≠ 0. This means that any regularizer that depends on scale (such as the ℓ1 and ℓ2 norms) is effectively meaningless, since the minimization in (1) will drive Ω(θ) to 0 (as this doesn't affect the loss). In other words, in such a discrete setting we need a scale-invariant regularizer, such as the ℓ0 pseudo-norm. Note that ℓ0 is trivial to implement in this formulation, as we have enforced that the zero value must be included in the set of B values attainable by each θi:

$\Omega(\theta) = \ell_0(\theta) = \sum_i 1_{\theta_i \neq 0} \qquad (6)$

In addition, since this regularizer is additive over the singletons θi, we get for free that it does not contribute to the complexity of inference in the graphical model (i.e., it is a unary potential), which is a convenient property. Nothing prevents us, however, from having group regularizers, for example of the form $\sum_{c \in C'} \lambda_c 1_{\theta_c \neq 0}$. Again, we can trade off model simplicity and optimization efficiency by controlling the size of the maximal clique in C′.

Final Optimization Problem. After compiling the low-dimensional discrete proxies for the 0/1 loss (the functions ℓc) and incorporating our regularizer, we can assemble the following optimization problem

$\operatorname*{argmin}_{\theta \in \Theta} \; \sum_{c \in C} \underbrace{\sum_{n=1}^{N} \ell_c(y_n, f_c(x_n; \theta_c))}_{=: -N\psi_c(\theta_c)} \;+\; \sum_{i=1}^{D} \underbrace{\lambda \, 1_{\theta_i \neq 0}}_{=: -\lambda\varphi_i(\theta_i)} \qquad (7)$

which is a relaxation of (1) under all of the above assumptions. The critical observation now is that (7) is a MAP inference problem in a discrete graphical model with clique set C, high-order clique potentials ψc(θc) and unary potentials φi(θi) [15]. Therefore we can resort to the vast literature on inference in graphical models to find exact or approximate solutions to (7).

¹For notational simplicity we assume an offset parameter is already included in θc and a corresponding entry of 1 is appended to the vector xc.
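The compilation step just described — tabulating, for each clique, the 0/1 loss over all B^k discretized instantiations of θc, together with the ℓ0 unaries — can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' code: the function `compile_potentials` and its argument names are ours, and the bias/offset entry is omitted for brevity.

```python
import itertools
import numpy as np

def compile_potentials(X, y, cliques, bins, lam):
    """Compile the clique-loss tables and l0 unary costs of problem (7).

    X: (N, D) data matrix (possibly after a random projection X = R X').
    y: (N,) labels in {-1, +1}.
    cliques: list of tuples of feature indices (the set C).
    bins: the B discrete values each theta_i may take; must contain 0.
    lam: the l0 regularization weight.
    """
    B = len(bins)
    clique_tables = []
    for c in cliques:
        k = len(c)
        table = np.empty([B] * k)
        Xc = X[:, list(c)]                  # restrict data to the clique's features
        # O(B^k): full enumeration of the discretized theta_c
        for idx in itertools.product(range(B), repeat=k):
            theta_c = np.array([bins[i] for i in idx])
            pred = np.sign(Xc @ theta_c)
            pred[pred == 0] = 1             # break sign ties toward +1
            table[idx] = np.sum(pred != y)  # 0/1 loss restricted to clique c
        clique_tables.append(table)
    # l0 unary cost: lam for every non-zero bin value (same for each theta_i)
    unary = np.array([0.0 if b == 0 else lam for b in bins])
    return clique_tables, unary
```

For a random chain one would take `cliques = [(0, 1), (1, 2), ...]`; the resulting tables are exactly the (negated, scaled) potentials ψc and φi over which MAP inference is run.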
For example, if G = (V, E) is a tree, then (7) can be solved exactly and efficiently using a dynamic programming algorithm that only requires matrix-vector multiplications in the (min, +) semiring, in addition to elementary lookup operations [3]. For more general graphs problem (7) can become NP-hard, but even in that case there are several principled approaches that often find excellent solutions, such as those based on linear programming relaxations [9] for tightly outer-bounding the marginal polytope [16]. In the experimental section we explore several options for constructing G, from simply generating a random chain (where MAP inference can be solved efficiently by dynamic programming) to generating dense random graphs (where MAP inference requires a more sophisticated approach such as an LP relaxation).

5 Related Work

The most closely related work we found is a recent paper by Potetz [17]. In a similar spirit to our approach, it also addresses the problem of estimating linear binary classifiers in a discrete formulation. However, instead of composing low-dimensional discrete surrogates of the 0/1 loss as we do, it uses a fully connected factor graph and performs inference by estimating the mean of the max-marginals rather than the MAP. Inference is approached using message-passing, which for the fully connected graph reduces to an intractable knapsack problem. In order to obtain a tractable model, the problem is then relaxed to a linear multiple-choice knapsack problem, which can be solved efficiently. All the experiments, though, are performed on very low-dimensional datasets² and it is unclear how this approach would scale to high dimensionality while keeping a fully connected graph.

6 Analysis

Here we sketch arguments supporting the assumptions driving our formulation. Obtaining a rigorous theoretical analysis is left as an open problem for future research. Our assumptions involve three approximations of the problem of 0/1 loss minimization.
First, the discretization of the parameter space. Second, the computation of low-dimensional proxies for the 0/1 loss rather than attacking the 0/1 loss directly in the resulting discrete space. Finally, the use of a graph G = (V, E) which in general will be sparse, i.e., not fully connected. We now discuss each of these assumptions.

6.1 Discretization of the parameter space

The explicit enforcement of a finite number of possible values for each parameter may seem at first a strong assumption. However, a key observation here is that we are restricting ourselves to linear predictors, which basically means that, for any sample, small perturbations of a random hyperplane will with high probability induce at most small changes in the 0/1 loss. Therefore there are good reasons to believe that, for linear predictors, increasing the number of bins has diminishing returns, and after only a moderate number of bins not much improvement can be obtained. This assumption is also used in [17].

6.2 Low-dimensional proxies for the 0/1 loss

This assumption can be justified using recent results stating that the margin is well preserved under random projections to low-dimensional subspaces [18, 19]. For instance, Theorem 6 in [19] shows that the margin is preserved with high probability for embeddings with dimension only logarithmic in the sample size (a result similar in spirit to the Johnson-Lindenstrauss Lemma [20]). Since the (soft) margin upper-bounds the 0/1 loss, the latter should also be preserved with at least equivalent guarantees.

6.3 Graph sparsity

This is apparently the strongest assumption. In our formulation, we impose conditional independence assumptions on the set of random variables used as features. There are two main observations. The first is that in real high-dimensional data the existence of (approximate) conditional independences is more of a rule than an exception.

²Seven datasets with dimensionalities 7, 9, 10, 11, 14, 15 and 61. See [17].
This is directly related to the fact that high-dimensional data usually inhabit low-dimensional manifolds or subspaces. In our case, we have a graph with the nodes representing different features, and this can be seen as a patching of low-dimensional subspaces, where each subspace is defined by one of the cliques in the graph. We do not address in this work how to optimally determine a subgraph, leaving that as an open problem in this framework. Rather, we show that even with random subgraphs, and in particular subgraphs as simple as chains, we can obtain models that have high accuracy and remarkable robustness to high degrees of label noise. The second observation is that nothing prevents us from using quite dense graphs and seeking approximate rather than exact MAP inference, say through LP relaxations [9]. Indeed we illustrate this possibility in the experimental section below.

7 Experiments

Settings. To evaluate our method (DISCRETE) for binary classification problems, we apply it to real-world datasets and compare it to linear Support Vector Machines (SVM), a state-of-the-art estimator for linear classifiers. We note that although both use linear predictors, the model classes are not identical: since we use discretization, the set of hyperplanes our estimator optimizes over is strictly smaller. We run these algorithms on publicly available datasets from the UCI machine learning repository [21]. See Table 1 for the details of these datasets. For both algorithms, the only hyperparameter is the trade-off between the loss and the regularization term. We run 5-fold cross-validation for both methods to select the optimal hyperparameters. The number of bins used for discretization may affect the accuracy of DISCRETE. For the experiments, we fix it to 11, since for larger values there was negligible improvement (which supports our argument from Section 6.1).

Robustness to Label Noise.
In the first experiment, we test the robustness of the different methods to increasing label noise. We first flip the labels of the training data with increasing probability from 0 to 0.4 and then run the algorithms on the noisy training data. The plots of the classification accuracy at each noise level are shown in Figure 1. For DISCRETE, we used as the graph G a random chain, i.e., the simplest possible option for a connected graph. In this case, optimization is straightforward via a Viterbi algorithm: a sequence of matrix-vector multiplications in the (min, +) semiring with trivial bookkeeping and subsequent lookup, which runs in O(B²D) since we have B states per variable and D variables. To assess the effect of randomization, we run on 20 random chains and plot both the average and the standard error obtained. The impact of randomization seems negligible.

[Figure 1: Comparison of the Discrete Method and Linear SVM. Panels: (a) GISETTE, (b) MNIST 5 vs 6, (c) A2A, (d) USPS 8 vs 9, (e) ISOLET, (f) ACOUSTIC.]

From Figure 1, DISCRETE demonstrates classification accuracy only slightly inferior to SVM in the noiseless regime (i.e., when the hinge loss is a good proxy for the 0/1 loss). However, as soon as a significant amount of label noise is present, SVM degrades substantially while DISCRETE remains remarkably stable, delivering high accuracy even after flipping labels with 40% probability. We believe these are significant results given the truly elementary nature of the optimization procedure: the method is simple, fast, and its runtime can be predicted with high accuracy since there is a fixed number of operations; 2(D − 1) messages are passed, each with worst-case runtime O(B²) determined by the matrix-vector multiplication. Note in particular how this differs from continuous optimization settings, in which the analysis is in terms of rate of convergence rather than the precise number of discrete operations performed.
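The random-chain decoding just described — 2(D − 1) (min, +) messages of worst-case cost O(B²) each, followed by backtracking — can be sketched as below. This is our own minimal sketch of a standard Viterbi-style MAP solver, not the authors' implementation; the function `chain_map` and its argument layout are hypothetical.

```python
import numpy as np

def chain_map(pair_cost, unary_cost):
    """Exact MAP over a chain-structured discrete model in the (min,+) semiring.

    pair_cost: list of D-1 arrays of shape (B, B); pair_cost[i][a, b] is the
      clique cost for theta_i in bin a and theta_{i+1} in bin b.
    unary_cost: (D, B) array of per-parameter costs (e.g. the l0 regularizer).
    Returns the cost-minimizing bin index for each of the D parameters.
    """
    D, B = unary_cost.shape
    msg = unary_cost[0].copy()       # best cost of a prefix ending in each bin
    back = []
    for i in range(D - 1):
        # (min,+) "matrix-vector" product: O(B^2) per message
        total = msg[:, None] + pair_cost[i] + unary_cost[i + 1][None, :]
        back.append(np.argmin(total, axis=0))
        msg = np.min(total, axis=0)
    # backtrack the optimal assignment from the last parameter
    states = [int(np.argmin(msg))]
    for bp in reversed(back):
        states.append(int(bp[states[-1]]))
    return states[::-1]
```

Plugging in tables built by enumerating the per-clique 0/1 losses (plus the ℓ0 unaries) yields the full learning procedure: a fixed sequence of matrix operations and lookups whose total runtime is known in advance.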
It is also interesting to observe that for different values of the cross-validation parameter our algorithm runs in precisely the same amount of time, while for SVMs convergence will be much slower for small scalings of the regularizer, since the relative importance of the non-differentiable hinge loss over the strongly convex quadratic term increases. This experiment shows that even in the simplest setting of our formulation (random chains, which come with very fast and exact MAP inference) we can still obtain results that are close or similar to those obtained by the state-of-the-art linear SVM classifier in the noiseless case, and superior for high levels of label noise.

Evaluation without Noise. As seen in Figure 1, in the noiseless (or small-noise) regime SVM is often slightly superior to our random chain model. A natural question to ask is therefore how more complex graph topologies would perform. Here we run experiments on two other types of graphs: a random 2-chain (i.e., a random junction tree with cliques {i, i + 1, i + 2}) and a random k-regular graph, where k is set such that the resulting graph has 10% of the possible edges. For the 2-chain, the optimization algorithm is exact inference via (min, +) message-passing, just as the Viterbi algorithm, but now applied to a larger clique, which increases the memory and runtime cost by a factor of O(B). For the random graph, we obtain a more complex topology in which exact inference is intractable. In our experiments we used the approximate inference algorithm from [22], which optimally and efficiently solves an LP relaxation via the alternating direction method of multipliers, ADMM [23].

Table 1: Datasets used for the experiments in Figure 1

            GISETTE  MNIST  A2A    USPS  ISOLET  ACOUSTIC
# Train     6000     10205  2265   950   480     19705
# Test      1000     1134   30296  237   120     78823
# Features  5000     784    123    256   617     50

Table 2: Classification accuracy (%) of different methods for binary classification, without label noise.
In this setting, the hinge loss used by SVM is an excellent proxy for the 0/1 loss. Yet, the proposed variants (top 3 rows) are still competitive in most datasets.

                GISETTE  MNIST  A2A    USPS   ISOLET  ACOUSTIC
random chain    89.23    93.79  82.55  97.51  100     76.01
random 2-chain  89       94.47  82.65  97.78  100     76.55
random graph    88.6     94.89  83.17  97.44  100     74.80
SVM             97.7     96.47  83.88  98.4   100     76.01

8 Extensions and Open Problems

Clearly the results in this paper are only a first step in the direction proposed. Several questions arise from this formulation.

Theory. In Section 6 we only sketched the reasons why we pursued the assumptions laid out in this paper. We did not present any rigorous quantitative arguments analyzing the limitations of our formulation. This is left as an open problem. However, we believe Section 6 does point to the key ideas that will ultimately underlie a quantitative theory.

Extension to multi-class and structured prediction. In this work we only study binary classification problems. The extension to multi-class and structured prediction, as well as other learning settings, is an open problem.

Adaptive binning. When discretizing the parameters, we used a fixed number of bins. This can be made more elaborate through the use of adaptive binning techniques that depend on the information content of each variable.

Informative graph construction. We only explored randomly generated graphs. The problem of selecting a graph topology in an informative way is highly relevant and is left open. For example, B-matching can be used to generate an informative regular graph [24]. This problem is essentially a manifold learning problem and there are several ways it could be approached. Existing work on supervised manifold learning is very relevant here.

Nonparametric extension. We considered only linear parametric models.
It would be interesting to consider nonparametric models, where the discretization occurs at the level of parameters associated with each training instance (as in the dual formulation of SVMs).

9 Conclusion

We presented a discrete formulation for learning linear binary classifiers. Parameters associated with features of the linear model are discretized into bins, and low-dimensional discrete surrogates of the 0/1 loss restricted to small groups of features are constructed. This results in a data structure that can be seen as a graphical model, in which regularized risk minimization can be performed via MAP inference. We sketched theoretical arguments supporting the assumptions underlying our proposal and presented empirical evidence that very simple, easily and quickly trainable models estimated with such a procedure can deliver results that are often comparable to those obtained by linear SVMs in noiseless scenarios, and superior under moderate to severe label noise.

Acknowledgements

We thank E. Bonilla, A. Defazio, D. García-García, S. Gould, J. McAuley, S. Nowozin, M. Reid, S. Sanner and B. Williamson for discussions. NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.

References

[1] P. M. Long and R. A. Servedio, “Random classification noise defeats all convex potential boosters,” Machine Learning, vol. 78, no. 3, pp. 287–304, 2010.
[2] P. M. Long and R. A. Servedio, “Learning large-margin halfspaces with more malicious noise,” in NIPS, 2011.
[3] S. M. Aji and R. J. McEliece, “The generalized distributive law,” IEEE Trans. Inform. Theory, vol. 46, no. 2, pp. 325–343, 2000.
[4] B. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms. Springer, 4th ed., 2007.
[5] V. Kolmogorov and R.
Zabih, “What energy functions can be minimized via graph cuts?,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 147–159, 2004. [6] A. Globerson and T. S. Jaakkola, “Approximate inference using planar graph decomposition,” in Advances in Neural Information Processing Systems 19 (B. Schölkopf, J. Platt, and T. Hoffman, eds.), pp. 473–480, Cambridge, MA: MIT Press, 2007. [7] T. Jebara, “Perfect graphs and graphical modeling.” To appear in Tractability, Cambridge University Press, 2012. [8] V. V. Vazirani, Approximation Algorithms. Springer, 2004. [9] D. Sontag, Approximate Inference in Graphical Models using LP Relaxations. PhD thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2010. [10] P. Zhao and B. Yu, “On model selection consistency of lasso,” J. Mach. Learn. Res., vol. 7, pp. 2541–2563, Dec. 2006. [11] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski, “Structured sparsity through convex optimization.” Technical report, HAL 00621245-v2, to appear in Statistical Science, 2012. [12] J. Huang, T. Zhang, and D. Metaxas, “Learning with structured sparsity,” in Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, (New York, NY, USA), pp. 417–424, ACM, 2009. [13] F. R. Bach, “Structured sparsity-inducing norms through submodular functions,” in NIPS, pp. 118–126, 2010. [14] D. McAllester and J. Keshet, “Generalization bounds and consistency for latent structural probit and ramp loss,” in Advances in Neural Information Processing Systems 24 (J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds.), pp. 2205–2212, 2011. [15] D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009. [16] M. J. Wainwright and M. I. Jordan, Graphical Models, Exponential Families, and Variational Inference. Hanover, MA, USA: Now Publishers Inc., 2008. [17] B.
Potetz, “Estimating the Bayes point using linear knapsack problems,” in ICML, pp. 257–264, 2011. [18] M.-F. Balcan, A. Blum, and S. Vempala, “Kernels as features: On kernels, margins, and low-dimensional mappings,” Machine Learning, vol. 65, no. 1, pp. 79–94, 2006. [19] Q. Shi, C. Chen, R. Hill, and A. van den Hengel, “Is margin preserved after random projection?,” in ICML, 2012. [20] S. Dasgupta and A. Gupta, “An elementary proof of a theorem of Johnson and Lindenstrauss,” Random Struct. Algorithms, vol. 22, pp. 60–65, Jan. 2003. [21] A. Frank and A. Asuncion, “UCI machine learning repository,” 2010. [22] O. Meshi and A. Globerson, “An alternating direction method for dual MAP LP relaxation,” in Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II, ECML PKDD’11, (Berlin, Heidelberg), pp. 470–483, Springer-Verlag, 2011. [23] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, 2011. [24] T. Jebara, J. Wang, and S. Chang, “Graph construction and b-matching for semi-supervised learning,” in ICML, 2009.
Hierarchical Optimistic Region Selection driven by Curiosity

Odalric-Ambrym Maillard
Lehrstuhl für Informationstechnologie, Montanuniversität Leoben, Leoben, A-8700, Austria
odalricambrym.maillard@gmail.com

Abstract

This paper aims to take a step towards making the term "intrinsic motivation" from reinforcement learning theoretically well-founded, focusing on curiosity-driven learning. To that end, we consider the setting where, given a fixed partition P of a continuous space X and an unknown process ν defined on X, we must sequentially decide which cell of the partition to select, as well as where to sample ν in that cell, in order to minimize a loss function inspired by previous work on curiosity-driven learning. The loss on each cell consists of one term measuring a simple worst-case quadratic sampling error, and a penalty term proportional to the range of the variance in that cell. The corresponding problem formulation extends the setting known as active learning for multi-armed bandits to the case when each arm is a continuous region, and we show how an adaptation of recent algorithms for that problem, and of hierarchical optimistic sampling algorithms for optimization, can be used to solve it. The resulting procedure, called Hierarchical Optimistic Region SElection driven by Curiosity (HORSE.C), is provided together with a finite-time regret analysis.

1 Introduction

In this paper, we focus on the setting of intrinsically motivated reinforcement learning (see Oudeyer and Kaplan [2007], Baranes and Oudeyer [2009], Schmidhuber [2010], Graziano et al. [2011]), an important emergent topic that proposes new, difficult and interesting challenges for the theorist. Indeed, although some formal objective criteria have been proposed to implement specific notions of intrinsic rewards (see Jung et al. [2011], Martius et al.
[2007]), the work carried out on this problem so far has been exclusively experimental, often with interesting output (see Graziano et al. [2011], Mugan [2010], Konidaris [2011]) but unfortunately without any performance guarantee validating a proposed approach. Providing such an analysis may thus have great immediate consequences for validating some experimental studies.

Motivation. A typical example is the work of Baranes and Oudeyer [2009] on curiosity-driven learning (and later Graziano et al. [2011], Mugan [2010], Konidaris [2011]), where a precise algorithm is defined together with an experimental study, yet no formal goal is defined and no analysis is performed either. They consider a so-called sensory-motor space X := S × M ⊂ [0, 1]^d, where S is a (continuous) state space and M is a (continuous) action space. There is no reward, yet one can consider that the goal is to actively select and sample subregions of X for which a notion of "learning progress" (intuitively, the decay of some notion of error when successively sampling in one subregion) is maximal. Two key components are advocated in Baranes and Oudeyer [2009] in order to achieve successful results (even though "success" remains a fuzzy notion):

• The use of a hierarchy of regions, where each region is progressively split into sub-regions.
• Splitting leaf-regions in two based on the optimization of the dissimilarity, amongst the regions, of the learning progress.

The idea is to identify regions with a learning complexity that is globally constant in each region, which also provides better justification for allocating samples between the identified regions. We believe it is possible to go one step towards a full performance analysis of such algorithms by relating the corresponding active sampling problem to existing frameworks.

Contribution.
This paper aims to take a step towards making the term "intrinsic motivation" from reinforcement learning theoretically well-founded, focusing on curiosity-driven learning. We introduce a mathematical framework in which a metric space (which intuitively plays the role of the state-action space) is divided into regions and a learner has to sample from an unknown random function in a way that reduces a certain error measure the most. This error consists of two terms: the first is a robust measure of the quadratic error between the observed samples and their unknown means; the second penalizes regions with non-constant learning complexity, thus enforcing the notion of curiosity. The paper focuses on how to choose the region to sample from, when a partition of the space is provided. The resulting problem formulation can be seen as a nontrivial extension of the setting of active learning in multi-armed bandits (see Carpentier et al. [2011] or Antos et al. [2010]), where the main idea is to estimate the variance of each arm and sample proportionally to it, to the case when each arm is a region as opposed to a point. In order to deal with this difficulty, the maximal and minimal variance inside each region are tracked by means of a hierarchical optimization procedure, in the spirit of the HOO algorithm from Bubeck et al. [2011]. This leads to a new procedure called Hierarchical Optimistic Region SElection driven by Curiosity (HORSE.C), for which we provide a theoretical performance analysis.

Outline. The outline of the paper is the following. In Section 2 we introduce the precise setting and define the objective function. Section 3 states our assumptions. Then in Section 4 we present the HORSE.C algorithm. Finally, in Section 5, we provide the main Theorem 1 that gives performance guarantees for the proposed algorithm.

2 Setting: robust region allocation with a curiosity-inducing penalty
Let X be a metric space and let Y ⊂ R^d be a normed space, equipped with the Euclidean norm ||·||. We consider an unknown Y-valued process defined on X, written ν : X → M₁⁺(Y), where M₁⁺(Y) refers to the set of all probability measures on Y, such that for all x ∈ X, the random variable Y ∼ ν(x) has mean µ(x) ∈ R^d and covariance matrix Σ(x) ∈ M_{d,d}(R), assumed to be diagonal. For convenience, we introduce the notation ρ(x) := trace(Σ(x)), where trace is the trace operator (this corresponds to the variance in dimension 1). We call X the input space or sampling space, and Y the output space or value space.

Intuition. Intuitively, when applied to the setting of Baranes and Oudeyer [2009], X := S × A is the space of state-action pairs, where S is a continuous state space and A a continuous action space, ν is the transition kernel of an unknown MDP, and finally Y := S. This is the reason why we consider Y ⊂ R^d and not only Y ⊂ R, as would seem more natural. One difference is that we assume (see Section 3) that we can sample anywhere in X, which is a restrictive yet common assumption in the reinforcement learning literature. How to get rid of this assumption is an open and challenging question that is left for future work.

Sampling error and robustness. Let us consider a sequential sampling process on X, i.e. a process that samples at time t a value Yt ∼ ν(Xt) at point Xt, where Xt ∈ F_{<t} is a measurable function of the past inputs and outputs {(Xs, Ys)}_{s<t}. It is natural to look at the following quantity, which we call the average noise vector ηt:

  ηt := (1/t) Σ_{s=1}^t (Ys − µ(Xs)) ∈ R^d.

One interesting property is that the summands form a martingale difference sequence, which means that the norm of this vector enjoys a concentration property. More precisely (see [Maillard, 2012, Lemma 1] in the extended version of the paper), we have for all deterministic t > 0

  E[ ||ηt||² ] = (1/t) E[ (1/t) Σ_{s=1}^t ρ(Xs) ].
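The identity above is easy to check numerically. The following is a minimal Monte Carlo sketch (not from the paper; the mean function mu and variance function rho are hand-picked toy choices) showing that the empirical value of E[||η_t||²] matches (1/t)·(1/t)·Σ_s ρ(X_s) for fixed sampling points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy process on X = [0, 1]: mean mu(x), variance rho(x).
mu  = lambda x: np.sin(2 * np.pi * x)
rho = lambda x: 0.1 + x**2

t = 50
X = rng.random(t)                       # fixed sampling points X_1..X_t

# Monte Carlo estimate of E[||eta_t||^2] over many independent runs.
runs = 20000
Y = rng.normal(mu(X), np.sqrt(rho(X)), size=(runs, t))
eta = (Y - mu(X)).mean(axis=1)          # average noise vector, one per run
empirical = np.mean(eta**2)

# Identity: E[||eta_t||^2] = (1/t) * (1/t) * sum_s rho(X_s).
predicted = rho(X).sum() / t**2

assert abs(empirical - predicted) / predicted < 0.05
```

Here the points X_s are deterministic, which is the case covered by the displayed identity; for adapted sampling strategies the expectation on the right-hand side is over the (random) sampled points.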
A similar property holds for a region R ⊂ X that has been sampled n_t(R) times, and in order to be robust against a bad sampling strategy inside a region, it is natural to look at the worst-case error, which we define as

  e_R(n_t) := sup_{x∈R} ρ(x) / n_t(R).

One reason for looking at robustness is that, for instance, when working with an MDP we are generally not completely free to choose the sample Xs ∈ S × A: we can only choose the action, and the next state is generally given by Nature. Thus, it is important to be able to estimate this worst-case error so as to guard against bad situations.

Goal. Now let P be a fixed, known partition of the space X and consider the following game. The goal of an algorithm is, at each time step t, to propose one point xt where to sample the space X, so that its allocation of samples {n_t(R)}_{R∈P} (that is, the number of points sampled in each region) minimizes some objective function. Thus, the algorithm is free to sample anywhere in each region, with the goal that the total number of points chosen in each region is optimal in some sense. A simple candidate for this objective function would be

  L_P(n_t) := max{ e_R(n_t) ; R ∈ P },

however, in order to incorporate a notion of curiosity, we would also like to penalize regions whose variance term ρ is non-homogeneous (i.e. the less homogeneous, the more samples we allocate). Indeed, if a region has constant variance, then we do not really need to understand its internal structure any further, and it is thus better to focus on another region with very heterogeneous variance. For instance, one would like to split such a region into several homogeneous parts, which is essentially the idea behind section C.3 of Baranes and Oudeyer [2009].
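As a toy illustration of e_R(n_t) and the simple objective above (not from the paper; the variance function and cell boundaries are made up, and the supremum is approximated on a grid), the following sketch checks that skewing the allocation toward high-variance cells lowers the worst-case objective:

```python
import numpy as np

# Hypothetical variance function on X = [0, 1] and a 4-cell partition P.
rho = lambda x: 0.1 + x**2
cells = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]

def worst_case_error(cell, n, grid=1001):
    """e_R(n) = sup_{x in R} rho(x) / n(R); the sup is taken on a grid."""
    lo, hi = cell
    return rho(np.linspace(lo, hi, grid)).max() / n

def simple_loss(alloc):
    """Simple objective L_P(n) = max over cells R of e_R(n)."""
    return max(worst_case_error(c, n) for c, n in zip(cells, alloc))

# rho grows with x, so the rightmost cell has the largest sup; allocating
# roughly proportionally to sup rho beats a uniform allocation.
uniform  = [25, 25, 25, 25]
weighted = [12, 17, 28, 43]
assert simple_loss(weighted) < simple_loss(uniform)
```

This is exactly the trade-off that the active-learning-in-bandits literature exploits in the finite case: sample each arm roughly in proportion to its variance.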
We thus add a curiosity-penalization term to the previous objective function, which leads us to define the pseudo-loss of an allocation n_t := {n_t(R)}_{R∈P} in the following way:

  L_P(n_t) := max{ e_R(n_t) + λ|R| ( max_{x∈R} ρ(x) − min_{x∈R} ρ(x) ) ; R ∈ P }.  (1)

Indeed, this means that we do not want to focus only on regions with high variance, but also to trade off with highly heterogeneous regions, which is coherent with the notion of curiosity (see Oudeyer and Kaplan [2007]). For convenience, we also define the pseudo-loss of a region R by

  L_R(n_t) := e_R(n_t) + λ|R| ( max_{x∈R} ρ(x) − min_{x∈R} ρ(x) ).

Regret. The regret (or loss) of an allocation algorithm at time T is defined as the difference between the cumulated pseudo-loss of the allocations n_t = {n_{R,t}}_{R∈P} proposed by the algorithm and that of the best allocation strategy n*_t = {n*_{R,t}}_{R∈P} at each time step; we define

  R_T := Σ_{t=|P|}^T [ L_P(n_t) − L_P(n*_t) ],

where an optimal allocation at time t is defined by

  n*_t ∈ argmin{ L_P(n_t) ; {n_t(R)}_{R∈P} such that Σ_{R∈P} n_t(R) = t }.

Note that the sum starts at t = |P| for a technical reason: for t < |P|, whatever the allocation, there is always at least one region with no sample, and thus L_P(n_t) = ∞.

Example 1. In the special case when X = {1, . . . , K} is finite with K ≪ T, and when P is the complete partition (each cell corresponds to exactly one point), the penalization term vanishes. Thus the problem reduces to the choice of the quantities n_t(i) for each arm i, and the loss of an allocation simply becomes

  L(n_t) := max{ ρ(i) / n_t(i) ; 1 ≤ i ≤ K }.

This almost corresponds to the already challenging setting analyzed, for instance, in Carpentier et al. [2011] or Antos et al. [2010]. The difference is that we are interested in the cumulative regret of our allocation instead of only the regret suffered on the last round, as considered in Carpentier et al. [2011] or Antos et al. [2010].
Also, we directly target ρ(i)/n_t(i), whereas they consider the mean sampling error (but both terms are actually of the same order). Thus the setting we consider can be seen as a generalization of these works to the case when each arm corresponds to a continuous sampling domain.

3 Assumptions

In this section, we introduce some mild assumptions. We essentially assume that the unknown distribution has sub-Gaussian noise and smooth mean and variance functions. These are actually very mild assumptions. Concerning the algorithm, we assume it can use a partition tree of the space, and that this tree is essentially non-degenerate (a typical binary tree that satisfies all the following assumptions is one in which each cell is split into two children of equal volume). Such assumptions on trees have been extensively discussed, for instance, in Bubeck et al. [2011].

Sampling. At any time, we assume that we are able to sample at any point in X, i.e. we assume we have a generative model¹ of the unknown distribution ν.

Unknown distribution. We assume that ν is sub-Gaussian, meaning that for all fixed x ∈ X,

  for all λ ∈ R^d,  ln E exp(⟨λ, Y − µ(x)⟩) ≤ λᵀ Σ(x) λ / 2,

and has a diagonal covariance matrix at each point². The function µ is assumed to be Lipschitz w.r.t. a metric ℓ₁, i.e. it satisfies

  for all x, x′ ∈ X,  ||µ(x) − µ(x′)|| ≤ ℓ₁(x, x′).

Similarly, the function ρ is assumed to be Lipschitz w.r.t. a metric ℓ₂, i.e. it satisfies

  for all x, x′ ∈ X,  |ρ(x) − ρ(x′)| ≤ ℓ₂(x, x′).

Hierarchy. We assume that Y is a convex and compact subset of [0, 1]^d. We consider an infinite binary tree T whose nodes correspond to regions of X. A node is indexed by a pair (h, i), where h ≥ 0 is the depth of the node in T and 0 ≤ i < 2^h is the position of the node at depth h. We write R(h, i) ⊂ X for the region associated with node (h, i).
The regions are fixed in advance, are all assumed to be measurable with positive measure, and must satisfy that for each h ≥ 1, {R(h, i)}_{0≤i<2^h} is a partition of X that is compatible with depth h − 1, where R(0, 0) := X; in particular, for all h ≥ 0 and all 0 ≤ i < 2^h,

  R(h, i) = R(h + 1, 2i) ∪ R(h + 1, 2i + 1).

In dimension d, a standard way to define such a tree is to split each parent node in half along the largest side of the corresponding hyper-rectangle; see Bubeck et al. [2011] for details. For a finite sub-tree Tt of T , we write Leaf(Tt) for the set of all leaves of Tt. For a region (h, i) ∈ Tt, we denote by Ct(h, i) the set of its children in Tt, and by Tt(h, i) the subtree of Tt rooted at node (h, i).

Algorithm and partition. The partition P is assumed to be such that each of its regions R corresponds to one region R(h, i) ∈ T ; equivalently, there exists a finite sub-tree T0 ⊂ T such that Leaf(T0) = P. An algorithm is only allowed to expand one node of Tt at each time step t. In the sequel, we write interchangeably P ∈ T and (h, i) ∈ T , or P and R(h, i) ⊂ X, to refer to the partition or one of its cells.

Exponential decays. Finally, we assume that the ℓ₁- and ℓ₂-diameters of the region R(h, i), as well as its volume |R(h, i)|, decay at an exponential rate, in the sense that there exist positive constants γ, γ₁, γ₂ and c, c₁, c₂ such that for all h ≥ 0,

  |R(h, i)| ≤ c γ^h,  max_{x,x′∈R(h,i)} ℓ₁(x, x′) ≤ c₁ γ₁^h  and  max_{x,x′∈R(h,i)} ℓ₂(x, x′) ≤ c₂ γ₂^h.

Similarly, we assume that there exist positive constants c′ ≤ c, c′₁ ≤ c₁ and c′₂ ≤ c₂ such that for all h ≥ 0,

  |R(h, i)| ≥ c′ γ^h,  max_{x,x′∈R(h,i)} ℓ₁(x, x′) ≥ c′₁ γ₁^h  and  max_{x,x′∈R(h,i)} ℓ₂(x, x′) ≥ c′₂ γ₂^h.

This assumption is made to avoid degenerate trees and for general purposes only. It actually holds for any reasonable binary tree.

¹ Using the standard terminology in reinforcement learning.
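The standard tree construction mentioned above (splitting each parent hyper-rectangle in half along its largest side) can be sketched as follows; this is an illustrative toy implementation, not code from the paper:

```python
import numpy as np

def split(cell):
    """Split a hyper-rectangle (lo, hi) in half along its largest side,
    as in the standard tree construction of Bubeck et al. [2011]."""
    lo, hi = np.array(cell[0], float), np.array(cell[1], float)
    axis = int(np.argmax(hi - lo))      # split along the longest side
    mid = (lo[axis] + hi[axis]) / 2.0
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[axis] = mid
    right_lo[axis] = mid
    return (lo, left_hi), (right_lo, hi)

vol = lambda c: float(np.prod(c[1] - c[0]))

# Each split halves the volume, so depth-h cells of the unit square have
# volume 2^{-h}: the exponential-decay assumption holds with gamma = 1/2.
cell = (np.zeros(2), np.ones(2))
for h in range(1, 5):
    cell, _ = split(cell)
    assert vol(cell) == 0.5 ** h
```

The diameter bounds in the exponential-decay assumption hold similarly for this construction, with rates depending on the dimension d.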
² This assumption is only here to make calculations easier and avoid nasty technical considerations that do not affect the order of the final regret bound but only concern second-order terms.

4 Allocation algorithm

In this section, we introduce the main algorithm of this paper, designed to solve the problem considered in Section 2. It is called Hierarchical Optimistic Region SElection driven by Curiosity. Before proceeding, we need to define some quantities.

4.1 High-probability upper- and lower-bound estimations

Let us consider the following (biased) estimator:

  σ̂²_t(R) := (1/N_t(R)) Σ_{s=1}^t ||Y_s||² I{X_s ∈ R} − || (1/N_t(R)) Σ_{s=1}^t Y_s I{X_s ∈ R} ||².

Apart from a small multiplicative bias by a factor (N_t(R) − 1)/N_t(R), it has, more importantly, a positive bias due to the fact that the random variables do not share the same mean; this phenomenon is the same as in the estimation of the average variance for independent but non-i.i.d. variables with different means {µ_i}_{i≤n}, where the bias would be given by (1/n) Σ_{i=1}^n [µ_i − (1/n) Σ_{j=1}^n µ_j]² (see Lemma 5). In our case, it is thus always non-negative, and under the assumption that µ is Lipschitz w.r.t. the metric ℓ₁, it is fortunately bounded by d₁(R)², where d₁(R) is the diameter of R w.r.t. the metric ℓ₁. We then introduce the following two key quantities, defined for all x ∈ R and δ ∈ [0, 1] by

  U_t(R, x, δ) := σ̂²_t(R) + (1 + 2√d) √( d ln(2d/δ) / (2N_t(R)) ) + d ln(2d/δ) / (2N_t(R)) + (1/N_t(R)) Σ_{s=1}^t ℓ₂(X_s, x) I{X_s ∈ R},

  L_t(R, x, δ) := σ̂²_t(R) − (1 + 2√d) √( d ln(2d/δ) / (2N_t(R)) ) − d₁(R)² − (1/N_t(R)) Σ_{s=1}^t ℓ₂(X_s, x) I{X_s ∈ R}.

Note that we would have preferred to replace the terms involving ln(2d/δ) with a term depending on the empirical variance, in the spirit of Carpentier et al. [2011] or Antos et al. [2010].
However, contrary to the estimation of the mean, extending the standard results valid for i.i.d. data to the case of a martingale difference sequence is nontrivial for the estimation of the variance, especially due to the additive bias resulting from the fact that the variables may not share the same mean, but also to the absence of such results for U-statistics (to the author's knowledge). For that reason such an extension is left for future work. The following results (we provide the proof in [Maillard, 2012, Appendix A.3]) show that U_t(R, x, δ) is a high-probability upper bound on ρ(x), while L_t(R, x, δ) is a high-probability lower bound on ρ(x).

Proposition 1. Under the assumptions that Y is a convex subset of [0, 1]^d, ν is sub-Gaussian, ρ is Lipschitz w.r.t. ℓ₂ and R ⊂ X is compact and convex, then

  P( ∃x ∈ X : U_t(R, x, δ) ≤ ρ(x) ) ≤ tδ.

Similarly, under the same assumptions,

  P( ∃x ∈ X : L_t(R, x, δ) ≤ ρ(x) − b(x, R, N_t(R), δ) ) ≤ tδ,

where we introduced for convenience the quantity

  b(x, R, n, δ) := 2 max_{x′∈R} ℓ₂(x, x′) + d₁(R)² + 2(1 + 2√d) √( d ln(2d/δ) / (2n) ) + d ln(2d/δ) / (2n).

On the other hand, we have (see the proof in [Maillard, 2012, Appendix A.3]):

Proposition 2. Under the assumptions that Y is a convex subset of [0, 1]^d, ν is sub-Gaussian, µ is Lipschitz w.r.t. ℓ₁, ρ is Lipschitz w.r.t. ℓ₂ and R ⊂ X is compact and convex, then

  P( ∃x ∈ X : U_t(R, x, δ) ≥ ρ(x) + b(x, R, N_t(R), δ) ) ≤ tδ.

Similarly, under the same assumptions,

  P( ∃x ∈ X : L_t(R, x, δ) ≥ ρ(x) ) ≤ tδ.

4.2 Hierarchical Optimistic Region SElection driven by Curiosity (HORSE.C)

The pseudo-code of the HORSE.C algorithm is presented in Figure 1 below. This algorithm relies on the estimation of the quantities max_{x∈R} ρ(x) and min_{x∈R} ρ(x) in order to decide which point X_{t+1} to sample at time t + 1. The point is chosen by expanding a leaf of a hierarchical tree T_t ⊂ T in an optimistic way, starting with a tree T_0 whose leaves correspond to the partition P.
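As a sanity check on the plug-in variance estimator σ̂²_t(R) defined above (the confidence-radius and Lipschitz terms of U_t and L_t are omitted here; the data are synthetic, not from the paper), the following sketch illustrates both its concentration in the i.i.d. case and the positive bias that appears when the means differ across a region:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma2_hat(Y):
    """Plug-in estimator (1/N) sum_s ||Y_s||^2 - ||(1/N) sum_s Y_s||^2
    over the N samples that fell in a region (rows of Y, shape (N, d))."""
    Y = np.atleast_2d(Y)
    return float(np.mean(np.sum(Y**2, axis=1)) - np.sum(Y.mean(axis=0)**2))

# i.i.d. case: the estimator concentrates around the true variance rho = 4.
Y = rng.normal(0.0, 2.0, size=(200000, 1))
assert abs(sigma2_hat(Y) - 4.0) < 0.1

# Heterogeneous means inside the region: an upward bias appears, equal to
# (1/n) sum_i (mu_i - mu_bar)^2 (= 1 here), on top of the true rho (= 1).
Yh = np.vstack([rng.normal(-1.0, 1.0, size=(100000, 1)),
                rng.normal(+1.0, 1.0, size=(100000, 1))])
assert 1.8 < sigma2_hat(Yh) < 2.2
```

The second assertion is exactly the bias phenomenon discussed in Section 4.1: under the Lipschitz assumption on µ, this bias is controlled by d₁(R)², which is why that term appears in L_t.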
The intuition is the following: consider a node (h, i) of the tree T_t expanded by the algorithm at time t. The maximum value of ρ over R(h, i) is achieved in one of its children nodes (h′, i′) ∈ C_t(h, i). Thus, if we have computed an upper bound on the maximal value of ρ in each child, then we have an upper bound on the maximum value of ρ over R(h, i). Proceeding in a similar way for the lower bound, this motivates the following two recursive definitions:

  ρ̂⁺_t(h, i; δ) := min{ max_{x∈R(h,i)} U_t(R(h, i), x, δ) , max{ ρ̂⁺_t(h′, i′; δ) ; (h′, i′) ∈ C_t(h, i) } },

  ρ̂⁻_t(h, i; δ) := max{ min_{x∈R(h,i)} L_t(R(h, i), x, δ) , min{ ρ̂⁻_t(h′, i′; δ) ; (h′, i′) ∈ C_t(h, i) } }.

These values are used to build an optimistic estimate of the quantity L_{R(h,i)}(N_t) in region (h, i) (step 4), and then to select in which cell of the partition we should sample (step 5). The algorithm then chooses where to sample in the selected region so as to improve the estimates ρ̂⁺_t and ρ̂⁻_t. This is done by alternating (step 6) between expanding a leaf following a path that is optimistic according to ρ̂⁺_t (steps 7, 8, 9) and one that is optimistic according to ρ̂⁻_t (step 11). Thus, at a high level, the algorithm performs, on each cell (h, i) ∈ P of the given partition, two hierarchical searches: one for the maximum value of ρ in region R(h, i) and one for its minimum value. This can be seen as an adaptation of the algorithm HOO from Bubeck et al. [2011], with the main difference that we target the variance and not just the mean (which is more difficult). On the other hand, there is a strong link between step 5, where we decide how to allocate samples between the regions {R(h, i)}_{(h,i)∈P}, and the CH-AS algorithm from Carpentier et al. [2011].

5 Performance analysis of the HORSE.C algorithm

In this section, we provide the main theorem of this paper, a regret bound on the performance of the HORSE.C algorithm, which is the main contribution of this work. To this end, we make use of the notion of near-optimality dimension, introduced in Bubeck et al. [2011], which measures a notion of intrinsic dimension of the maximization problem.

Definition (Near-optimality dimension). For c > 0, the c-optimality dimension of ρ restricted to the region R with respect to the pseudo-metric ℓ₂ is defined as

  max{ lim sup_{ε→0} ln N(R_{cε}, ℓ₂, ε) / ln(ε⁻¹) , 0 },

where R_{cε} := { x ∈ R ; ρ(x) ≥ max_{x′∈R} ρ(x′) − cε }, and where N(R_{cε}, ℓ₂, ε) is the ε-packing number of the region R_{cε}. Let d⁺(h₀, i₀) be the c-optimality dimension of ρ restricted to the region R(h₀, i₀) (see e.g. Bubeck et al. [2011]), with the constant c := 4(2c₂ + c₁²)/c′₂.
Similarly, let d⁻(h₀, i₀) be the c-optimality dimension of −ρ restricted to the region R(h₀, i₀). Let us finally define the largest near-optimality dimension of ρ over the cells of the partition P as

  d_ρ := max{ max{ d⁺(h₀, i₀), d⁻(h₀, i₀) } ; (h₀, i₀) ∈ P }.

Theorem 1 (Regret bound for HORSE.C). Under the assumptions of Section 3, and if moreover γ₁² ≤ γ₂, then for all δ ∈ [0, 1], the regret of the Hierarchical Optimistic Region SElection driven by Curiosity procedure parameterized with δ is bounded, with probability higher than 1 − 2δ, as follows:

  R_T ≤ Σ_{t=|P|}^T max_{(h₀,i₀)∈P} [ 1/n*_t(h₀, i₀) + 2λ c γ^{h₀} ] B( h₀, n*_t(h₀, i₀), δ_t ),

Algorithm 1: The HORSE.C algorithm.
Require: An infinite binary tree T , a partition P ⊂ T , δ ∈ [0, 1], λ ≥ 0.
1: Let T₀ be such that Leaf(T₀) = P, let δ_{i,t} = 6δ / (π² i² (2t+1) |P| t³), and set t := 0.
2: while true do
3:   define for each region (h, i) ∈ T_t the estimated loss
       L̂_t(h, i) := ρ̂⁺_t(h, i; δ) / N_t(R(h, i)) + λ |R(h, i)| ( ρ̂⁺_t(h, i; δ) − ρ̂⁻_t(h, i; δ) ),
     where δ = δ_{N_t(R(h,i)),t}, and where by convention L̂_t(h, i) = ∞ if it is undefined.
4:   choose the next region of the current partition P ⊂ T to sample:
       (H_{t+1}, I_{t+1}) := argmax{ L̂_t(h, i) ; (h, i) ∈ P }.
5:   if N_t(R(H_{t+1}, I_{t+1})) = n is odd then
6:     sequentially select a path of children of (H_{t+1}, I_{t+1}) in T_t, defined by the initial node (H⁰_{t+1}, I⁰_{t+1}) := (H_{t+1}, I_{t+1}) and then
         (H^{j+1}_{t+1}, I^{j+1}_{t+1}) := argmax{ ρ̂⁺_t(h, i; δ_{n,t}) ; (h, i) ∈ C_t(H^j_{t+1}, I^j_{t+1}) },
       until j = j_{t+1} is such that (H^{j_{t+1}}_{t+1}, I^{j_{t+1}}_{t+1}) ∈ Leaf(T_t).
7:     expand the node (H^{j_{t+1}}_{t+1}, I^{j_{t+1}}_{t+1}) in order to define T_{t+1}, and then define the candidate child
         (h_{t+1}, i_{t+1}) := argmax{ ρ̂⁺_t(h, i; δ_{n,t}) ; (h, i) ∈ C_{t+1}(H^{j_{t+1}}_{t+1}, I^{j_{t+1}}_{t+1}) }.
8:     sample at point X_{t+1} and receive the value Y_{t+1} ∼ ν(X_{t+1}), where
         X_{t+1} := argmax{ U_t(R(h_{t+1}, i_{t+1}), x, δ_{n,t}) ; x ∈ R(h_{t+1}, i_{t+1}) }.
9:   else
10:    proceed as in steps 6, 7, 8 with ρ̂⁺_t replaced by ρ̂⁻_t.
11:  end if
12:  t := t + 1.
13: end while

where δ_t is a shorthand notation for the quantity δ_{n*_t(h₀,i₀),t−1}, where n*_t(h₀, i₀) is the optimal allocation at round t for the region (h₀, i₀) ∈ P, and where

  B(h₀, k, δ_{k,t}) := min_{h≥h₀} [ 2c₂γ₂^h + c₁²γ₁^{2h} + 2(1 + 2√d) √( d ln(2d/δ_{k,t}) / (2N_{h₀}(h, k)) ) + d ln(2d/δ_{k,t}) / (2N_{h₀}(h, k)) ],

in which we have used the quantity

  N_{h₀}(h, k) := ( k − 2^{h−h₀} [2 + 4√d + d ln(2d/δ_{k,t})/2]² d ln(2d/δ_{k,t}) / (2(2c₂γ₂^h + c₁²γ₁^{2h})²) ) / ( C (c′₂γ₂^h)^{−d_ρ} ).

Note that the assumption γ₁² ≤ γ₂ is only there so that d_ρ can be defined w.r.t. the metric ℓ₂ alone. We could remove it at the price of using instead a metric mixing ℓ₁ and ℓ₂ together, and of much more technical considerations. Similarly, we could have expressed the result using the local values d⁺(h₀, i₀) instead of the less precise d_ρ (neither those, nor d_ρ, need to be known by the algorithm). The full proof of this theorem is reported in the appendix. The main steps of the proof are as follows. First, we provide upper and lower confidence bounds for the estimation of the quantities U_t(R, x, δ) and L_t(R, x, δ). Then, we lower-bound the depth of the subtree of each region (h₀, i₀) ∈ P that contains a maximal point argmax_{x∈R(h₀,i₀)} ρ(x), and proceed similarly for a minimal point. This uses the near-optimality dimension of ρ and −ρ in the region R(h₀, i₀), and enables us to provide an upper bound on ρ̂⁺_t(h, i; δ) as well as a lower bound on ρ̂⁻_t(h, i; δ). This then enables us to deduce bounds relating the estimated loss L̂_t(h, i) to the true loss L_{R(h,i)}(N_t). Finally, we relate the true loss of the current allocation to that of the optimal allocation n*_{t+1}(h₀, i₀), by discussing whether a region has been over- or under-sampled. This final part is close in spirit to the proof of the regret bound for CH-AS in Carpentier et al. [2011]. In order to better understand the gain in Theorem 1, we provide the following corollary, which gives more insight into the order of magnitude of the regret.
Corollary 1. Let β := 1 + ln max{2, γ₂^{−d_ρ}}. Under the assumptions of Theorem 1, and assuming that the partition P of the space X is well behaved, i.e. that for all (h₀, i₀) ∈ P, n*_{t+1}(h₀, i₀) grows at least at speed O( ln(t) γ₂^{−2h₀β} ), then for all δ ∈ [0, 1], with probability higher than 1 − 2δ, we have

  R_T = O( Σ_{t=|P|}^T max_{(h₀,i₀)∈P} [ 1/n*_t(h₀, i₀) + 2λ c γ^{h₀} ] ( ln(t) / n*_t(h₀, i₀) )^{1/(2β)} ).

This regret term has to be compared with the typical range of the cumulative loss of the optimal allocation strategy, which is given by

  Σ_{t=|P|}^T L_P(n*_t) = Σ_{t=|P|}^T max_{(h₀,i₀)∈P} [ ρ⁺_{(h₀,i₀)} / n*_t(h₀, i₀) + 2λ c γ^{h₀} ( ρ⁺_{(h₀,i₀)} − ρ⁻_{(h₀,i₀)} ) ],

where ρ⁺_{(h₀,i₀)} := max_{x∈R(h₀,i₀)} ρ(x), and similarly ρ⁻_{(h₀,i₀)} := min_{x∈R(h₀,i₀)} ρ(x). Thus, after normalization, the relative regret on each cell (h₀, i₀) is roughly of order (1/ρ⁺_{(h₀,i₀)}) ( ln(t) / n*_t(h₀, i₀) )^{1/(2β)}, i.e. it decays at speed n*_t(h₀, i₀)^{−1/(2β)}. This shows that we are not only able to compete with the performance of the best allocation strategy, but we actually achieve the exact same performance with multiplicative factor 1, up to a second-order term. Note also that, when specialized to the case of Example 1, the order of this regret is competitive with the standard results from Carpentier et al. [2011]. The loss of the variance term ρ⁺(h₀, i₀)⁻¹ (which is actually a constant) here comes from the fact that we are only able to use Hoeffding-like bounds for the estimation of the variance. In order to remove it, one would need empirical Bernstein bounds for variance estimation in the case of martingale difference sequences. This is postponed to future work.

6 Discussion

In this paper, we have provided an algorithm together with a regret analysis for a problem of online allocation of samples in a fixed partition, where the objective is to minimize a loss that contains a penalty term driven by a notion of curiosity.
A very specific case (finite state space) already corresponds to a difficult question known as active learning in the multi-armed bandit setting, and has been previously addressed in the literature (e.g. Antos et al. [2010], Carpentier et al. [2011]). We have considered an extension of this problem to a continuous domain where a fixed partition of the space as well as a generative model of the unknown dynamics are given, using our curiosity-driven loss function as a measure of performance. Our main result is a regret bound for that problem, which shows that our procedure is first-order optimal, i.e. it achieves the same performance as the best possible allocation (thus with multiplicative constant 1). We believe this result contributes to filling the important gap that exists between existing algorithms for the challenging setting of intrinsic reinforcement learning and a theoretical analysis thereof, the HORSE.C algorithm being related in spirit to, yet simpler and less ambitious than, the RIAC algorithm from Baranes and Oudeyer [2009]. Indeed, in order to achieve the objective that RIAC tries to address, one should first remove the assumption that the partition is given: one trivial solution is to run the HORSE.C algorithm in episodes of doubling length, starting with the trivial partition, and to select at the end of each episode a possibly better partition based on computed confidence intervals; however, making efficient use of previous samples while avoiding a blow-up in the number of candidate partitions happens to be a challenging question. One should then relax the generative-model assumption (i.e. that we can sample wherever we want), a question that shares links with a problem called autonomous exploration. Thus, even if the regret analysis of the HORSE.C algorithm is already a strong new result that is interesting independently of such difficult specific goals and of the reinforcement learning framework (no MDP structure is required), those questions are naturally left for future work.
Acknowledgements

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreements no 270327 (CompLACS) and no 216886 (PASCAL2).

References

András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411(29-30):2712–2728, 2010.

A. Baranes and P.-Y. Oudeyer. R-IAC: Robust Intrinsically Motivated Exploration and Active Learning. IEEE Transactions on Autonomous Mental Development, 1(3):155–169, October 2009.

Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1655–1695, 2011.

Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, and Peter Auer. Upper-confidence-bound algorithms for active learning in multi-armed bandits. In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 189–203. Springer Berlin / Heidelberg, 2011.

Vincent Graziano, Tobias Glasmachers, Tom Schaul, Leo Pape, Giuseppe Cuccu, J. Leitner, and J. Schmidhuber. Artificial Curiosity for Autonomous Space Exploration. Acta Futura (in press), (1), 2011.

Tobias Jung, Daniel Polani, and Peter Stone. Empowerment for continuous agent-environment systems. Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems, 19(1):16–39, 2011.

G.D. Konidaris. Autonomous robot skill acquisition. PhD thesis, University of Massachusetts Amherst, 2011.

Odalric-Ambrym Maillard. Hierarchical optimistic region selection driven by curiosity. HAL, 2012. URL http://hal.archives-ouvertes.fr/hal-00740418.

Georg Martius, J. Michael Herrmann, and Ralf Der. Guided self-organisation for autonomous robot development. In Proceedings of the 9th European conference on Advances in artificial life, ECAL'07, pages 766–775, Berlin, Heidelberg, 2007. Springer-Verlag.
Jonathan Mugan. Autonomous Qualitative Learning of Distinctions and Actions in a Developing Agent. PhD thesis, University of Texas at Austin, 2010.

Pierre-Yves Oudeyer and Frederic Kaplan. What is Intrinsic Motivation? A Typology of Computational Approaches. Frontiers in Neurorobotics, 1(November):6, January 2007.

J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). Autonomous Mental Development, IEEE Transactions on, 2(3):230–247, 2010.
Trajectory-Based Short-Sighted Probabilistic Planning

Felipe W. Trevizan, Machine Learning Department
Manuela M. Veloso, Computer Science Department
Carnegie Mellon University - Pittsburgh, PA
{fwt,mmv}@cs.cmu.edu

Abstract

Probabilistic planning captures the uncertainty of plan execution by probabilistically modeling the effects of actions in the environment, and therefore the probability of reaching different states from a given state and action. In order to compute a solution for a probabilistic planning problem, planners need to manage the uncertainty associated with the different paths from the initial state to a goal state. Several approaches to manage uncertainty were proposed, e.g., consider all paths at once, perform determinization of actions, and sampling. In this paper, we introduce trajectory-based short-sighted Stochastic Shortest Path Problems (SSPs), a novel approach to manage uncertainty for probabilistic planning problems in which states reachable with low probability are substituted by artificial goals that heuristically estimate their cost to reach a goal state. We also extend the theoretical results of the Short-Sighted Probabilistic Planner (SSiPP) [1] by proving that SSiPP always finishes and is asymptotically optimal under sufficient conditions on the structure of short-sighted SSPs. We empirically compare SSiPP using trajectory-based short-sighted SSPs with the winners of the previous probabilistic planning competitions and other state-of-the-art planners in the triangle tireworld problems. Trajectory-based SSiPP outperforms all the competitors and is the only planner able to scale up to problem number 60, a problem in which the optimal solution contains approximately 10^70 states.

1 Introduction

The uncertainty of plan execution can be modeled by using probabilistic effects in actions, and therefore the probability of reaching different states from a given state and action.
This search space, defined by the probabilistic paths from the initial state to a goal state, challenges the scalability of planners. Planners manage the uncertainty by choosing a search strategy to explore the space. In this work, we present a novel approach to manage uncertainty for probabilistic planning problems that improves scalability while remaining optimal. One approach to manage uncertainty while searching for the solution of probabilistic planning problems is to consider the complete search space at once. Examples of such algorithms are value iteration and policy iteration [2]. Planners based on these algorithms return a closed policy, i.e., a universal mapping function from every state to the optimal action that leads to a goal state. Assuming the model correctly captures the cost and uncertainty of the actions in the environment, closed policies are extremely powerful, as their execution never "fails" and the planner does not need to be re-invoked. Unfortunately, the computation of such policies becomes prohibitively expensive as problems scale up. Value iteration based probabilistic planners can be improved by combining asynchronous updates and heuristic search [3–7]. Although these techniques allow planners to compute compact policies, in the worst case these policies are still linear in the size of the state space, which itself can be exponential in the size of the states or goals. Another approach to manage uncertainty is to ignore uncertainty during planning, i.e., to approximate the probabilistic actions as deterministic actions. Examples of replanners based on determinization are FF-Replan [8], the winner of the first International Probabilistic Planning Competition (IPPC) [9]; Robust FF [10], the winner of the third IPPC [11]; and FF-Hindsight [12, 13].
Despite the major success of determinization, this simplification of the action space results in algorithms oblivious to probabilities and dead-ends, leading to poor performance in specific problems, e.g., the triangle tireworld [14]. Besides action space simplification, uncertainty management can be performed by simplifying the problem horizon, i.e., look-ahead search [15]. Based on sampling, the Upper Confidence bound for Trees (UCT) algorithm [16] approximates the look-ahead search by focusing the search on the most promising nodes. The state space can also be simplified to manage uncertainty in probabilistic planning. One example of such an approach is Envelope Propagation (EP) [17]. EP computes an initial partial policy π and then prunes all the states not considered by π. The pruned states are represented by a special meta state. EP then iteratively improves its approximation of the state space. Previously, we introduced short-sighted planning [1], a new approach to manage uncertainty in planning problems: given a state s, only the uncertainty structure of the problem in the neighborhood of s is taken into account, and the remaining states are approximated by artificial goals that heuristically estimate their cost to reach a goal state. In this paper, we introduce trajectory-based short-sighted Stochastic Shortest Path Problems (SSPs), a novel model to manage uncertainty in probabilistic planning problems. Trajectory-based short-sighted SSPs manage uncertainty by pruning the state space based on the most likely trajectory between states and defining artificial goal states that guide the solution towards the original goal. We also contribute by defining a class of short-sighted models and proving that the Short-Sighted Probabilistic Planner (SSiPP) [1] always terminates and is asymptotically optimal for models in this class of short-sighted models. The remainder of this paper is organized as follows: Section 2 introduces the basic concepts and notation.
Section 3 formally defines trajectory-based short-sighted SSPs. Section 4 presents our new theoretical results for SSiPP. Section 5 empirically evaluates SSiPP using trajectory-based short-sighted SSPs against the winners of the previous IPPCs and other state-of-the-art planners. Section 6 concludes the paper.

2 Background

A Stochastic Shortest Path Problem (SSP) is defined by the tuple S = ⟨S, s0, G, A, P, C⟩, in which [1, 18]: S is the finite set of states; s0 ∈ S is the initial state; G ⊆ S is the set of goal states; A is the finite set of actions; P(s′|s, a) represents the probability that s′ ∈ S is reached after applying action a ∈ A in state s ∈ S; and C(s, a, s′) ∈ (0, +∞) is the cost incurred when state s′ is reached after applying action a in state s, a function required to be defined for all s ∈ S, a ∈ A, s′ ∈ S such that P(s′|s, a) > 0. A solution to an SSP is a policy π, i.e., a mapping from S to A. If π is defined over the entire space S, then π is a closed policy. A policy π defined only for the states reachable from s0 when following π is a closed policy w.r.t. s0, and S(π, s0) denotes this set of reachable states. For instance, in the SSP depicted in Figure 1(a), the policy π0 = {(s0, a0), (s′1, a0), (s′2, a0), (s′3, a0)} is a closed policy w.r.t. s0 and S(π0, s0) = {s0, s′1, s′2, s′3, sG}. Given a policy π, we define a trajectory as a sequence Tπ = ⟨s(0), . . . , s(k)⟩ such that, for all i ∈ {0, · · · , k − 1}, π(s(i)) is defined and P(s(i+1)|s(i), π(s(i))) > 0. The probability of a trajectory Tπ is defined as P(Tπ) = ∏_{i=0}^{k−1} P(s(i+1)|s(i), π(s(i))), and the maximum probability of a trajectory between two states, Pmax(s, s′), is defined as maxπ P(Tπ = ⟨s, . . . , s′⟩). An optimal policy π∗ for an SSP is any policy that always reaches a goal state when followed from s0 and also minimizes the expected cost of Tπ∗.
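The quantities just defined can be made concrete with a small sketch. Everything below is a hypothetical illustration (the SSP encoding, state names, and probabilities are not the SSP of Figure 1): `transitions[s][a]` maps each successor of applying action `a` in state `s` to its probability, `trajectory_probability` computes P(Tπ) as the product of transition probabilities along a trajectory, and `p_max` computes Pmax(s, s′) with a best-first search that always expands the state of highest trajectory probability.

```python
import heapq

# Hypothetical SSP: transitions[s][a] maps successor states to probabilities;
# "d" is a dead-end, "sG" the goal, and every transition costs 1.
transitions = {
    "s0": {"a0": {"s1": 0.6, "d": 0.4}},
    "s1": {"a0": {"sG": 0.75, "d": 0.25}},
    "d":  {"a0": {"d": 1.0}},
    "sG": {},
}

def trajectory_probability(pi, traj):
    """P(T_pi) = product over i of P(s_(i+1) | s_(i), pi(s_(i)))."""
    p = 1.0
    for s, s_next in zip(traj, traj[1:]):
        p *= transitions[s][pi[s]].get(s_next, 0.0)
    return p

def p_max(src, dst):
    """Pmax(src, dst): best-first search maximizing trajectory probability."""
    best = {src: 1.0}
    heap = [(-1.0, src)]
    while heap:
        neg_p, s = heapq.heappop(heap)
        p = -neg_p
        if s == dst:
            return p
        if p < best.get(s, 0.0):
            continue  # stale heap entry
        for succs in transitions[s].values():
            for s2, pr in succs.items():
                q = p * pr
                if q > best.get(s2, 0.0):
                    best[s2] = q
                    heapq.heappush(heap, (-q, s2))
    return 0.0

pi = {"s0": "a0", "s1": "a0"}
print(trajectory_probability(pi, ["s0", "s1", "sG"]))  # 0.6 * 0.75 = 0.45
print(p_max("s0", "sG"))                               # 0.45
```

Since probabilities multiply along a trajectory, maximizing them is the same single-source shortest-path computation as Dijkstra's algorithm on costs −log P, which is why the greedy best-first expansion above is correct.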
For a given SSP, π∗ might not be unique; however, the optimal value function V∗, i.e., the mapping from states to the minimum expected cost to reach a goal state, is unique. V∗ is the fixed point of the set of equations defined by (1) for all s ∈ S \ G, with V∗(s) = 0 for all s ∈ G. Notice that under the optimality criterion given by (1), SSPs are more general than Markov Decision Processes (MDPs) [19]; therefore, all the work presented here is directly applicable to MDPs.

$$V^*(s) = \min_{a\in A} \sum_{s'\in S} \Big[ C(s,a,s') + P(s'|s,a)\,V^*(s') \Big] \qquad (1)$$

Figure 1: (a) Example of an SSP. The initial state is s0, the goal state is sG, and C(s, a, s′) = 1 for all s ∈ S, a ∈ A, s′ ∈ S. (b) State-space partition of (a) according to depth-based short-sighted SSPs: Gs0,t contains all the states in the dotted regions whose conditions hold for the given value of t. (c) State-space partition of (a) according to trajectory-based short-sighted SSPs: Gs0,ρ contains all the states in the dotted regions whose conditions hold for the given value of ρ.

Definition 1 (reachability assumption). An SSP satisfies the reachability assumption if, for all s ∈ S, there exists sG ∈ G such that Pmax(s, sG) > 0.

Given an SSP S, if a goal state can be reached with positive probability from every state s ∈ S, then the reachability assumption (Definition 1) holds for S and 0 ≤ V∗(s) < ∞ [19]. Once V∗ is known, any optimal policy π∗ can be extracted from V∗ by substituting the operator min by argmin in equation (1). A possible approach to compute V∗ is the value iteration algorithm: define V^(i+1)(s) as in (1) with V^i on the right-hand side instead of V∗; the sequence ⟨V^0, V^1, . . . , V^k⟩ converges to V∗ as k → ∞ [19]. The process of computing V^(i+1) from V^i is known as a Bellman update, and V^0(s) can be initialized with an admissible heuristic H(s), i.e., a lower bound on V∗. In practice we are interested in reaching ε-convergence, that is, given ε, find V such that

$$\max_{s} \Big| V(s) - \min_{a\in A} \sum_{s'\in S} \big[ C(s,a,s') + P(s'|s,a)\,V(s') \big] \Big| \le \varepsilon.$$

The following well-known result is necessary in most of our proofs [2, Assumption 2.2 and Lemma 2.1]:

Theorem 1. Given an SSP S, if the reachability assumption holds for S, then the admissibility and monotonicity of V are preserved through Bellman updates.

3 Trajectory-Based Short-Sighted Stochastic SSPs

Short-sighted Stochastic Shortest Path Problems (short-sighted SSPs) [1] are a special case of SSPs in which the original problem is transformed into a smaller one by: (i) pruning the state space; and (ii) adding artificial goal states to heuristically guide the search towards the goals of the original problem. Depth-based short-sighted SSPs are defined based on the action-distance between states [1]:

Definition 2 (action-distance). The non-symmetric action-distance δ(s, s′) between two states s and s′ is argmin_k {Tπ = ⟨s, s(1), . . . , s(k−1), s′⟩ | ∃π and Tπ is a trajectory}.

Definition 3 (Depth-Based Short-Sighted SSP). Given an SSP S = ⟨S, s0, G, A, P, C⟩, a state s ∈ S, t > 0 and a heuristic H, the (s, t)-depth-based short-sighted SSP Ss,t = ⟨Ss,t, s, Gs,t, A, P, Cs,t⟩ associated with S is defined as:
• Ss,t = {s′ ∈ S | δ(s, s′) ≤ t};
• Gs,t = {s′ ∈ S | δ(s, s′) = t} ∪ (G ∩ Ss,t);
• Cs,t(s′, a, s′′) = C(s′, a, s′′) + H(s′′) if s′′ ∈ Gs,t, and C(s′, a, s′′) otherwise, for all s′ ∈ Ss,t, a ∈ A, s′′ ∈ Ss,t.

Figure 1(b) shows, for different values of t, Ss0,t for the SSP in Figure 1(a); for instance, if t = 2 then Ss0,2 = {s0, s1, s′1, s2, s′2} and Gs0,2 = {s2, s′2}.
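Since action-distance ignores probabilities, the sets of Definition 3 can be computed by a plain breadth-first search. The sketch below is a hypothetical illustration (the SSP encoding and state names are invented, not those of Figure 1): states at distance exactly t form the artificial-goal fringe.

```python
from collections import deque

# Hypothetical SSP: transitions[s][a] maps successor states to probabilities.
transitions = {
    "s0":  {"a0": {"s1": 0.5, "s1'": 0.5}},
    "s1":  {"a0": {"s2": 1.0}},
    "s1'": {"a0": {"s2": 1.0}},
    "s2":  {"a0": {"sG": 1.0}},
    "sG":  {},
}

def depth_based_spaces(s, t, goals=frozenset()):
    """Compute S_{s,t} and G_{s,t} of Definition 3 by BFS: S_{s,t} holds the
    states within action-distance t of s; G_{s,t} holds the fringe at distance
    exactly t, plus any original goals that fall inside S_{s,t}."""
    dist = {s: 0}
    frontier = deque([s])
    while frontier:
        u = frontier.popleft()
        if dist[u] == t:
            continue  # states at distance t become artificial goals
        for succs in transitions[u].values():
            for v in succs:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    frontier.append(v)
    S_st = set(dist)
    G_st = {v for v, d in dist.items() if d == t} | (set(goals) & S_st)
    return S_st, G_st

S2, G2 = depth_based_spaces("s0", 2, goals={"sG"})
print(sorted(S2))  # ['s0', 's1', "s1'", 's2']
print(sorted(G2))  # ['s2']
```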
In the example shown in Figure 1(b), we can see that the generation of Ss0,t is independent of trajectory probabilities: for t = 2, s2 ∈ Ss0,2 and s′3 ∉ Ss0,2, even though Pmax(s0, s2) = 0.16 < Pmax(s0, s′3) = 0.75^3 ≈ 0.42.

Definition 4 (Trajectory-Based Short-Sighted SSP). Given an SSP S = ⟨S, s0, G, A, P, C⟩, a state s ∈ S, ρ ∈ [0, 1] and a heuristic H, the (s, ρ)-trajectory-based short-sighted SSP Ss,ρ = ⟨Ss,ρ, s, Gs,ρ, A, P, Cs,ρ⟩ associated with S is defined as:
• Ss,ρ = {s′ ∈ S | ∃ŝ ∈ S and a ∈ A s.t. Pmax(s, ŝ) ≥ ρ and P(s′|ŝ, a) > 0};
• Gs,ρ = (G ∩ Ss,ρ) ∪ (Ss,ρ ∩ {s′ ∈ S | Pmax(s, s′) < ρ});
• Cs,ρ(s′, a, s′′) = C(s′, a, s′′) + H(s′′) if s′′ ∈ Gs,ρ, and C(s′, a, s′′) otherwise, for all s′ ∈ Ss,ρ, a ∈ A, s′′ ∈ Ss,ρ.

For simplicity, when H is neither clear from context nor explicit, H(s) = 0 for all s ∈ S. Our novel model, the trajectory-based short-sighted SSP (Definition 4), addresses the issue of states with low trajectory probability by explicitly defining its state space Ss,ρ based on the maximum probability of a trajectory between s and each candidate state s′ (Pmax(s, s′)). Figure 1(c) shows, for all values of ρ ∈ [0, 1], the trajectory-based Ss0,ρ for the SSP in Figure 1(a): for instance, if ρ = 0.75^3 then Ss0,ρ = {s0, s1, s′1, s′2, s′3, sG} and Gs0,ρ = {s1, sG}. This example shows how trajectory-based short-sighted SSPs can manage uncertainty efficiently: for ρ = 0.75^3, |Ss0,ρ| = 6 and the goal of the original SSP, sG, is already included in Ss0,ρ, while for depth-based short-sighted SSPs, sG ∈ Ss0,t only for t ≥ 4, in which case |Ss0,t| = |S| = 8. Notice that the definition of Ss,ρ cannot be simplified to {ŝ ∈ S | Pmax(s, ŝ) ≥ ρ}, since not all the resulting states of actions would be included in Ss,ρ. For example, consider S = {s, s′, s′′}, P(s′|s, a) = 0.9 and P(s′′|s, a) = 0.1; then for ρ ∈ (0.1, 1], {ŝ ∈ S | Pmax(s, ŝ) ≥ ρ} = {s, s′}, generating an invalid SSP since not all the resulting states of a would be contained in the model.
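Definition 4 can likewise be sketched in code. The example below is hypothetical (the SSP encoding, names, and numbers are invented): it first computes Pmax(s, ·) for every state by the same best-first search used for single-pair Pmax, then assembles Ss,ρ as the set of successors of states with Pmax at least ρ, and Gs,ρ as the original goals inside Ss,ρ together with the states whose Pmax falls below ρ.

```python
import heapq

# Hypothetical SSP: transitions[s][a] maps successor states to probabilities.
transitions = {
    "s0": {"a0": {"s1": 0.9, "d": 0.1}},
    "s1": {"a0": {"sG": 0.9, "d": 0.1}},
    "d":  {"a0": {"d": 1.0}},
    "sG": {},
}
goals = {"sG"}

def all_p_max(src):
    """Pmax(src, .) for every state, by best-first search on probabilities."""
    best = {src: 1.0}
    heap = [(-1.0, src)]
    while heap:
        neg_p, s = heapq.heappop(heap)
        p = -neg_p
        if p < best[s]:
            continue  # stale heap entry
        for succs in transitions[s].values():
            for s2, pr in succs.items():
                q = p * pr
                if q > best.get(s2, 0.0):
                    best[s2] = q
                    heapq.heappush(heap, (-q, s2))
    return best

def trajectory_based_spaces(s, rho):
    pmax = all_p_max(s)
    # S_{s,rho}: s itself plus every successor of a state reachable with
    # maximum trajectory probability >= rho (Definition 4, first bullet).
    S = {s}
    for shat, p in pmax.items():
        if p >= rho:
            for succs in transitions[shat].values():
                S.update(succs)
    # G_{s,rho}: original goals in S, plus states whose Pmax drops below rho.
    G = (goals & S) | {x for x in S if pmax.get(x, 0.0) < rho}
    return S, G

S, G = trajectory_based_spaces("s0", 0.9)
print(sorted(S), sorted(G))  # ['d', 's0', 's1', 'sG'] ['d', 'sG']
```

Note how this reproduces the invalid-SSP caveat from the text: the dead-end `d` enters Ss,ρ only because it is a successor of a retained state, and it immediately becomes an artificial goal.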
4 Short-Sighted Probabilistic Planner

The Short-Sighted Probabilistic Planner (SSiPP) is an algorithm that solves SSPs based on short-sighted SSPs [1]. SSiPP is reviewed in Algorithm 1 and consists of iteratively generating and solving short-sighted SSPs of the given SSP. Due to the reduced size of the short-sighted problems, SSiPP solves each of them by computing a closed policy w.r.t. its initial state. We therefore obtain a "fail-proof" solution for each short-sighted SSP; thus, if this solution is directly executed in the environment, replanning is not needed until an artificial goal is reached. Alternatively, an anytime behavior is obtained if the execution of the computed closed policy for the short-sighted SSP is simulated (Algorithm 1, line 4) until an artificial goal sa is reached, and this procedure is repeated, starting from sa, until convergence or an interruption. In [1], we proved that SSiPP always terminates and is asymptotically optimal for depth-based short-sighted SSPs. We generalize the results regarding SSiPP by: (i) providing sufficient conditions for the generation of short-sighted problems (Algorithm 1, line 1) in Definition 5; and (ii) proving that SSiPP always terminates (Theorem 3) and is asymptotically optimal (Corollary 4) when the short-sighted SSP generator respects Definition 5. Notice that, by definition, both depth-based and trajectory-based short-sighted SSPs meet the sufficient conditions presented in Definition 5.

Definition 5. Given an SSP ⟨S, s0, G, A, P, C⟩, the sufficient conditions on the short-sighted SSPs ⟨S′, ŝ, G′, A, P′, C′⟩ returned by the generator in Algorithm 1, line 1 are:
1. G ∩ S′ ⊆ G′;
2. ŝ ∉ G → ŝ ∉ G′; and
3. for all s ∈ S, a ∈ A and s′ ∈ S′ \ G′, if P(s|s′, a) > 0 then s ∈ S′ and P′(s|s′, a) = P(s|s′, a).

Lemma 2. SSiPP performs Bellman updates on the original SSP S.
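The generate/solve/execute cycle described above can be sketched on a toy problem. Everything below is a deliberately minimal illustration, not the paper's implementation: the SSP is a deterministic 5-state chain with unit costs, the short-sighted generator is depth-based with t = 1 (so each sub-problem is solved exactly by a single Bellman backup with the artificial goal priced by the current V), and V starts from the zero heuristic and improves across rounds, mirroring the lower-bound behavior of Lemma 2.

```python
# Toy deterministic chain SSP: states 0..4, goal 4, one action moving
# right at cost 1, so V*(i) = 4 - i.
GOAL = 4

def generate_short_sighted(s):
    # Depth-1 short-sighted SSP: the only non-goal state is s itself;
    # its successor s + 1 is an artificial goal priced by the current V.
    return s, s + 1

def ssipp_round(V):
    s = 0
    while s != GOAL:
        s0, g = generate_short_sighted(s)
        # Solving the sub-problem optimally reduces here to one Bellman
        # backup, with the artificial goal's heuristic cost V[g] as the
        # terminal cost (the role of OPTIMAL-SSP-SOLVER).
        V[s0] = 1 + V[g]
        s = g  # execute the computed policy until an artificial goal
    return V

V = [0] * (GOAL + 1)  # zero heuristic (admissible)
for _ in range(10):   # learning is allowed across rounds
    ssipp_round(V)
print(V)              # converges to V* = [4, 3, 2, 1, 0]
```

Each round reaches the goal, and each assignment to V is a Bellman update on the original chain, so the sequence of value functions behaves like asynchronous value iteration, which is exactly the structure exploited by Lemma 2 and Corollary 4 below.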
SSIPP(SSP S = ⟨S, s0, G, A, P, C⟩; H, a heuristic for V∗; params, the parameters used to generate short-sighted SSPs)
begin
      V ← value function for S initialized by H
      s ← s0
      while s ∉ G do
1         ⟨S′, s, G′, A, P, C′⟩ ← GENERATE-SHORT-SIGHTED-SSP(S, s, V, params)
          (π̂∗, V̂∗) ← OPTIMAL-SSP-SOLVER(⟨S′, s, G′, A, P, C′⟩, V)
2         forall s′ ∈ S′(π̂∗, s) do
              V(s′) ← V̂∗(s′)
3         while s ∉ G′ do
4             s ← execute-action(π̂∗(s))
      return V
end

Algorithm 1: SSiPP algorithm [1]. GENERATE-SHORT-SIGHTED-SSP represents a procedure to generate short-sighted SSPs, either depth-based or trajectory-based; in the former case params = t, and in the latter params = ρ. OPTIMAL-SSP-SOLVER returns an optimal policy π∗ w.r.t. s0 for S and the V∗ associated with π∗, i.e., V∗ needs to be defined only for s ∈ S(π∗, s0).

Proof. To show that SSiPP performs Bellman updates implicitly, consider the loop in line 2 of Algorithm 1. Since OPTIMAL-SSP-SOLVER computes V̂∗, by the definition of short-sighted SSPs: (i) V̂∗(sG) equals V(sG) for all sG ∈ G′, therefore the value of V(sG) remains the same; and (ii) min_{a∈A} Σ_{s′∈S} [C(s, a, s′) + P(s′|s, a)V(s′)] ≤ V̂∗(s) for s ∈ S′ \ G′, i.e., the assignment V(s) ← V̂∗(s) is equivalent to at least one Bellman update on V(s), because V is a lower bound on V̂∗ and by Theorem 1. Because s ∉ G′ and by Definition 5, min_{a∈A} Σ_{s′∈S} [C(s, a, s′) + P(s′|s, a)V(s′)] ≤ V̂∗(s) is equivalent to one Bellman update on the original SSP S.

Theorem 3. Given an SSP S = ⟨S, s0, G, A, P, C⟩ such that the reachability assumption holds, an admissible heuristic H and a short-sighted problem generator that respects Definition 5, SSiPP always terminates.

Proof. Since OPTIMAL-SSP-SOLVER always finishes and the short-sighted SSP is an SSP by definition, a goal state sG of the short-sighted SSP is always reached; therefore the loop in line 3 of Algorithm 1 always finishes. If sG ∈ G, then SSiPP terminates in this iteration.
Otherwise, sG is an artificial goal and sG ≠ s (Definition 5), i.e., sG differs from the state s used as the initial state for the short-sighted SSP generation. Thus another iteration of SSiPP is performed using sG as s. Suppose, for the purpose of contradiction, that every goal state reached during the execution of SSiPP is an artificial goal, i.e., SSiPP does not terminate. Then infinitely many short-sighted SSPs are solved. Since S is finite, there exists s ∈ S that is updated infinitely often, therefore V(s) → ∞. However, V∗(s) < ∞ by the reachability assumption. Since SSiPP performs Bellman updates (Lemma 2), V(s) ≤ V∗(s) by the monotonicity of Bellman updates (Theorem 1) and the admissibility of H, a contradiction. Thus every execution of SSiPP reaches a goal state s′G ∈ G and therefore terminates.

Corollary 4. Under the same assumptions as Theorem 3, the sequence ⟨V^0, V^1, · · · , V^t⟩, where V^0 = H and V^t = SSiPP(S, t, V^{t−1}), converges to V∗ as t → ∞ for all s ∈ S(π∗, s0).

Proof. Let S∗ ⊆ S be the set of states visited infinitely many times. Clearly, S(π∗, s0) ⊆ S∗, since a partial policy cannot be executed ad infinitum without reaching a state in which it is not defined. Since SSiPP performs Bellman updates in the original SSP space (Lemma 2) and every execution of SSiPP terminates (Theorem 3), we can view the sequence of lower bounds ⟨V^0, V^1, · · · , V^t⟩ generated by SSiPP as asynchronous value iteration. The convergence of V^{t−1}(s) to V∗(s) as t → ∞ for all s ∈ S(π∗, s0) ⊆ S∗ follows from [2, Proposition 2.2, p. 27] and guarantees the convergence of SSiPP.

Figure 2: (a) Map of the triangle tireworld for sizes 1, 2 and 3. Circles (squares) represent locations in which there is one (no) spare tire.
The shades of gray represent, for each location l, maxπ P(car reaches l and the tire is not flat when following the policy π from s0). (b) Log-lin plot of the state space size (|S|) and the number of states reachable from s0 when following the optimal policy π∗ (|S(π∗, s0)|) versus the triangle tireworld problem number.

5 Experiments

We present two sets of experiments using the triangle tireworld problems [9, 11, 20], a series of probabilistically interesting problems [14] in which a car has to travel between locations in order to reach a goal location from its initial location. The roads are represented as a directed graph in the shape of a triangle and, every time the car moves between locations, a flat tire happens with probability 0.5. Some locations have a spare tire, and in these locations the car can deterministically replace its flat tire with a new one. When the car has a flat tire, it cannot change its location; therefore the car can get stuck in locations that do not have a spare tire (dead-ends). Figure 2(a) depicts the map of the triangle tireworld problems 1, 2 and 3, and Figure 2(b) shows the sizes of S and S(π∗, s0) for problems up to size 60. For example, problem number 3 has 28 locations, i.e., 28 nodes in the corresponding graph in Figure 2(a); its state space has 19562 states and its optimal policy reaches 8190 states. Every triangle tireworld problem is a probabilistically interesting problem [14]: there is only one policy that reaches the goal with probability 1, and all the other policies have probability at most 0.5 of reaching the goal. Moreover, the solution based on the shortest path has probability 0.5^(2n−1) of reaching the goal, where n is the problem number. This property is illustrated by the shades of gray in Figure 2(a), which represent, for each location l, maxπ P(car reaches l and the tire is not flat when following the policy π from s0).
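The 0.5^(2n−1) figure quoted above is easy to check numerically. The loop below is only an illustration (the move count 2n − 1 is inferred from that formula, with each move flatting a tire with probability 0.5 and the short path having no spares); it shows how quickly determinization-style planners that follow the shortest path become hopeless as n grows.

```python
# Success probability of the shortest-path policy in triangle tireworld
# problem n: 2n - 1 moves, each keeping the tire intact with probability 0.5.
for n in (1, 2, 3, 10, 60):
    print(n, 0.5 ** (2 * n - 1))
```

Already at n = 10 the shortest-path policy succeeds with probability below 10^-5, while the unique safe (longest-path) policy still reaches the goal with probability 1.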
For the experiments in this section, we use the zero heuristic for all the planners, i.e., V(s) = 0 for all s ∈ S, and LRTDP [4] as the OPTIMAL-SSP-SOLVER for SSiPP. For all planners, the parameter ε (for ε-convergence) is set to 10^−4. For UCT, we disabled the random rollouts because the probability of any policy other than the optimal policy reaching a dead-end is at least 0.5; therefore, with high probability, UCT would assign ∞ (the cost of a dead-end) as the cost of all the states, including the initial state. The experiments are conducted on a Linux machine with 4 cores running at 3.07GHz, using MDPSIM [9] as the environment simulator. The following terminology is used for describing the experiments: a round is the computation of a solution for the given SSP; a run is a set of rounds in which learning is allowed between rounds, i.e., the knowledge obtained from one round can be used to solve subsequent rounds. The solution computed during one round is simulated by MDPSIM in a client-server loop: MDPSIM sends a state s and requests an action from the planner; the planner replies by sending the action a to be executed in s. The evaluation is done by the number of rounds simulated by MDPSIM that reached a goal state. The maximum number of actions allowed per round is 2000, and rounds that exceed this limit are stopped by MDPSIM and declared failures, i.e., goal not reached.

                           Triangle Tireworld Problem Number
Planner              5    10    15    20    25    30    35    40    45    50    55    60
SSiPP depth=8     50.0  40.7  41.2  40.8  41.1  41.0  40.9  40.0  40.6  40.8  40.3  40.4
UCT               50.0  50.0  50.0  50.0  50.0  43.1  15.7  12.1   8.2   6.8   5.0   4.0
SSiPP trajectory  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0

Table 1: Number of rounds solved out of 50 for the experiment in Section 5.1. Results are averaged over 10 runs and the 95% confidence interval is always less than 1.0.
In all the problems, SSiPP using trajectory-based short-sighted SSPs solves all the 50 rounds in all the 10 runs; therefore its 95% confidence interval is 0.0 for all the problems. Best results are shown in bold font.

                           Triangle Tireworld Problem Number
Planner              5    10    15    20    25    30    35    40    45    50    55    60
SSiPP depth=8     50.0  45.4  41.2  42.3  41.2  44.1  42.4  32.7  20.6  14.1   9.9   7.0
LRTDP             50.0  23.0  14.1   0.3     –     –     –     –     –     –     –     –
UCT (4, 100)      50.0  50.0  50.0  48.8  24.0  12.3   6.5   4.0   2.5   1.3   1.0   0.7
UCT (8, 100)      50.0  50.0  50.0  46.3  24.0  12.3   6.7   3.7   2.2   1.2   1.0   0.6
UCT (2, 100)      50.0  50.0  50.0  49.5  23.2  12.0   7.5   3.5   2.2   1.2   1.0   0.6
SSiPP ρ = 1.0     50.0  27.9  29.1  26.8  26.0  26.6  28.6  27.2  26.6  27.6  26.2  26.9
SSiPP ρ = 0.50    50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0
SSiPP ρ = 0.25    50.0  50.0  50.0  50.0  47.6  45.0  41.1  42.7  41.9  40.7  40.1  40.4
SSiPP ρ = 0.125   50.0  50.0  50.0  50.0  50.0  50.0  50.0  50.0  49.8  37.4  26.4  18.9

Table 2: Number of rounds solved out of 50 for the experiment in Section 5.2. Results are averaged over 10 runs and the 95% confidence interval is always less than 2.6. UCT (c, w) represents UCT using c as the bias parameter and w samples per decision. In all the problems, trajectory-based SSiPP with ρ = 0.5 solves all the 50 rounds in all the 10 runs; therefore its 95% confidence interval is 0.0 for all the problems. Best results are shown in bold font.

5.1 Fixed number of search nodes per decision

In this experiment, we compare the performance of UCT, depth-based SSiPP, and trajectory-based SSiPP with respect to the number of nodes explored by depth-based SSiPP. Formally, to decide what action to apply in a given state s, each planner is allowed to use at most B = |Ss,t| search nodes, i.e., the size of the search space is bounded by the size of the equivalent (s, t)-short-sighted SSP. We choose t = 8 since it obtains the best performance in the triangle tireworld problems [1].
Given the search-node budget B, for UCT we sample the environment until the search tree contains B nodes, and for trajectory-based SSiPP we use ρ = argmax_ρ {|Ss,ρ| s.t. |Ss,ρ| ≤ B}. The methodology for this experiment is as follows: for each problem, 10 runs of 50 rounds are performed by each planner using the search-node budget B. The results, averaged over the 10 runs, are presented in Table 1. We set the time and memory cut-offs to 8 hours and 8 Gb, respectively; UCT on problems 35 to 60 was the only planner preempted by the time cut-off. Trajectory-based SSiPP outperforms both depth-based SSiPP and UCT, solving all the 50 rounds in all the 10 runs for all the problems.

5.2 Fixed maximum planning time

In this experiment, we compare planners by limiting the maximum planning time. The methodology used in this experiment is similar to the one in IPPC'04 and IPPC'06: for each problem, planners need to solve 1 run of 50 rounds in 20 minutes. For this experiment, the planners are allowed to perform internal simulations; for instance, a planner can spend 15 minutes solving rounds using internal simulations and then use the computed policy to solve the required 50 rounds through MDPSIM in the remaining 5 minutes. The memory cut-off is 3 Gb. For this experiment, we consider the following planners: depth-based SSiPP for t = 8 [1], trajectory-based SSiPP for ρ ∈ {1.0, 0.5, 0.25, 0.125}, LRTDP using 3-look-ahead [1], and 12 different parametrizations of UCT obtained by using the bias parameter c ∈ {1, 2, 4, 8} and the number of samples per decision w ∈ {10, 100, 1000}. The winners of IPPC'04, IPPC'06 and IPPC'08 are omitted since their performance on the triangle tireworld problems is strictly dominated by depth-based SSiPP for t = 8. Table 2 shows the results of this experiment; due to space limitations, we show only the top 3 parametrizations of UCT: 1st (c = 4, w = 100); 2nd (c = 8, w = 100); and 3rd (c = 2, w = 100).
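The budget-matching rule from Section 5.1 selects the largest trajectory-based space that still fits within B nodes. Since |Ss,ρ| only changes at the distinct Pmax values, it suffices to scan those candidates. The helper below and its size profile are hypothetical, written only to illustrate the argmax rule:

```python
def pick_rho(space_size, pmax_values, budget):
    """rho = argmax_rho { |S_{s,rho}| : |S_{s,rho}| <= budget }.
    space_size(rho) returns |S_{s,rho}|; pmax_values are the distinct
    Pmax(s, .) values, the only points where the space can change."""
    best_rho, best_size = 1.0, 0
    for rho in sorted(set(pmax_values), reverse=True):
        size = space_size(rho)
        if size <= budget and size > best_size:
            best_rho, best_size = rho, size
    return best_rho

# Toy monotone size profile: smaller rho means a larger short-sighted space.
sizes = {1.0: 2, 0.5: 4, 0.25: 9, 0.125: 20}
rho = pick_rho(lambda r: sizes[r], sizes.keys(), budget=10)
print(rho)  # 0.25: the largest space that still fits the 10-node budget
```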
All four parametrizations of trajectory-based SSiPP outperform the other planners for problems of size equal to or greater than 45. Trajectory-based SSiPP using ρ = 0.5 is especially noteworthy because it achieves a perfect score in all problems, i.e., it reaches a goal state in all the 50 rounds in all the 10 runs for all the problems. The same happens for ρ = 0.125 on problems up to size 40. For larger problems, trajectory-based SSiPP using ρ = 0.125 reaches the 20-minute time cut-off before solving 50 rounds; however, all the solved rounds successfully reach the goal. This interesting behavior of trajectory-based SSiPP in the triangle tireworld can be explained by the following theorem:

Theorem 5. For the triangle tireworld, trajectory-based SSiPP using an admissible heuristic never falls into a dead-end for ρ ∈ (0.5^(i+1), 0.5^i] with i ∈ {1, 3, 5, . . . }.

Proof Sketch. The optimal policy for the triangle tireworld is to follow the longest path: move from the initial location l0 to the goal location lG passing through location lc, where l0, lc and lG are the vertices of the triangle formed by the problem's map. The path from lc to lG is unique, i.e., there is only one applicable move-car action for all the locations in this path. Therefore all the decision making to find the optimal policy happens between the locations l0 and lc. Each location l′ on the path from l0 to lc has either two or three applicable move-car actions, and we refer to the set of locations l′ with three applicable move-car actions as N. Every location l′ ∈ N is reachable from l0 by applying an even number of move-car actions (Figure 2(a)), and the three applicable move-car actions in l′ are: (i) the optimal action ac, i.e., move the car towards lc; (ii) the action aG that moves the car towards lG; and (iii) the action ap that moves the car parallel to the shortest path from l0 to lG.
The location reached by ap does not have a spare tire; therefore ap is never selected by a greedy choice over any admissible heuristic, since it reaches a dead-end with probability 0.5. The locations reached by applying either ac or aG have a spare tire, and the greedy choice between them depends on the admissible heuristic used, thus aG might be selected instead of ac. However, after applying aG, only one move-car action a is available, and it reaches a location that does not have a spare tire. Therefore, the greedy choice between ac and aG considering two or more move-car actions is optimal under any admissible heuristic: every sequence of actions ⟨aG, a, . . . ⟩ reaches a dead-end with probability at least 0.5, while at least one sequence of actions starting with ac, e.g., the optimal solution, has probability 0 of reaching a dead-end. Given ρ, we denote by Ls,ρ the set of all locations corresponding to states in Ss,ρ and by ls the location corresponding to the state s. Thus, Ls,ρ contains all the locations reachable from ls using up to m = ⌊log_0.5 ρ⌋ + 1 move-car actions. If m is even and ls ∈ N, then every location in Ls,ρ ∩ N represents a state that is either in Gs,ρ or at least two move-car actions away from any state in Gs,ρ. Therefore the solution of the (s, ρ)-trajectory-based short-sighted SSP only chooses the action ac to move the car. Also, since m is even, every state s used by SSiPP for generating (s, ρ)-trajectory-based short-sighted SSPs has ls ∈ N. Therefore, for even values of m, i.e., for ρ ∈ (0.5^(i+1), 0.5^i] with i ∈ {1, 3, 5, . . . }, trajectory-based SSiPP always chooses the actions ac to move the car to lc, thus avoiding all dead-ends.

6 Conclusion

In this paper, we introduced trajectory-based short-sighted SSPs, a new model to manage uncertainty in probabilistic planning problems.
This approach consists of pruning the state space based on the most likely trajectory between states and defining artificial goal states that guide the solution towards the original goals. We also defined a class of short-sighted models that includes depth-based and trajectory-based short-sighted SSPs and proved that SSiPP always terminates and is asymptotically optimal for short-sighted models in this class. We empirically compared trajectory-based SSiPP with depth-based SSiPP and other state-of-the-art planners in the triangle tireworld. Trajectory-based SSiPP outperforms all the other planners and is the only planner able to scale up to problem number 60, a problem in which the optimal solution contains approximately 10^70 states, under the IPPC evaluation methodology. References [1] F. W. Trevizan and M. M. Veloso. Short-sighted stochastic shortest path problems. In Proc. of the 22nd International Conference on Automated Planning and Scheduling (ICAPS), 2012. [2] D. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996. [3] A. G. Barto, S. J. Bradtke, and S. P. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence, 72(1-2):81–138, 1995. [4] B. Bonet and H. Geffner. Labeled RTDP: Improving the convergence of real-time dynamic programming. In Proc. of the 13th International Conference on Automated Planning and Scheduling (ICAPS), 2003. [5] H. B. McMahan, M. Likhachev, and G. J. Gordon. Bounded real-time dynamic programming: RTDP with monotone upper bounds and performance guarantees. In Proc. of the 22nd International Conference on Machine Learning (ICML), 2005. [6] Trey Smith and Reid G. Simmons. Focused Real-Time Dynamic Programming for MDPs: Squeezing More Out of a Heuristic. In Proc. of the 21st National Conference on Artificial Intelligence (AAAI), 2006. [7] S. Sanner, R. Goetschalckx, K. Driessens, and G. Shani. Bayesian real-time dynamic programming. In Proc.
of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 2009. [8] S. Yoon, A. Fern, and R. Givan. FF-Replan: A baseline for probabilistic planning. In Proc. of the 17th International Conference on Automated Planning and Scheduling (ICAPS), 2007. [9] H. L. S. Younes, M. L. Littman, D. Weissman, and J. Asmuth. The first probabilistic track of the international planning competition. Journal of Artificial Intelligence Research, 24(1):851–887, 2005. [10] F. Teichteil-Koenigsbuch, G. Infantes, and U. Kuter. RFF: A robust, FF-based MDP planning algorithm for generating policies with low probability of failure. 3rd International Planning Competition (IPPC-ICAPS), 2008. [11] D. Bryce and O. Buffet. 6th International Planning Competition: Uncertainty Track. In 3rd International Probabilistic Planning Competition (IPPC-ICAPS), 2008. [12] S. Yoon, A. Fern, R. Givan, and S. Kambhampati. Probabilistic planning via determinization in hindsight. In Proc. of the 23rd National Conference on Artificial Intelligence (AAAI), 2008. [13] S. Yoon, W. Ruml, J. Benton, and M. B. Do. Improving Determinization in Hindsight for Online Probabilistic Planning. In Proc. of the 20th International Conference on Automated Planning and Scheduling (ICAPS), 2010. [14] I. Little and S. Thiébaux. Probabilistic planning vs replanning. In Proc. of ICAPS Workshop on IPC: Past, Present and Future, 2007. [15] J. Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, Menlo Park, California, 1985. [16] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo Planning. In Proc. of the European Conference on Machine Learning (ECML), 2006. [17] T. Dean, L. P. Kaelbling, J. Kirman, and A. Nicholson. Planning under time constraints in stochastic domains. Artificial Intelligence, 76(1-2):35–74, 1995. [18] D. P. Bertsekas and J. N. Tsitsiklis. An analysis of stochastic shortest path problems. Mathematics of Operations Research, 16(3):580–595, 1991. [19] D. P.
Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995. [20] Blai Bonet and Robert Givan. 2nd International Probabilistic Planning Competition (IPPC-ICAPS). http://www.ldc.usb.ve/˜bonet/ipc5/ (accessed on Dec 13, 2011), 2007.
Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence Victor Gabillon Mohammad Ghavamzadeh Alessandro Lazaric INRIA Lille - Nord Europe, Team SequeL Abstract We study the problem of identifying the best arm(s) in the stochastic multi-armed bandit setting. This problem has been studied in the literature from two different perspectives: fixed budget and fixed confidence. We propose a unifying approach that leads to a meta-algorithm called unified gap-based exploration (UGapE), with a common structure and similar theoretical analysis for these two settings. We prove a performance bound for the two versions of the algorithm showing that the two problems are characterized by the same notion of complexity. We also show how the UGapE algorithm as well as its theoretical analysis can be extended to take into account the variance of the arms and to multiple bandits. Finally, we evaluate the performance of UGapE and compare it with a number of existing fixed budget and fixed confidence algorithms. 1 Introduction The problem of best arm(s) identification [6, 3, 1] in the stochastic multi-armed bandit setting has recently received much attention. In this problem, a forecaster repeatedly selects an arm and observes a sample drawn from its reward distribution during an exploration phase, and then is asked to return the best arm(s). Unlike the standard multi-armed bandit problem, where the goal is to maximize the cumulative sum of rewards obtained by the forecaster (see e.g., [15, 2]), in this problem the forecaster is evaluated on the quality of the arm(s) returned at the end of the exploration phase. This abstract problem models a wide range of applications. For instance, let us consider a company that has K different variants of a product and needs to identify the best one(s) before actually placing it on the market.
The company sets up a testing phase in which the products are tested by potential customers. Each customer tests one product at a time and gives it a score (a reward). The objective of the company is to return a product at the end of the test phase which is likely to be successful once placed on the market (i.e., the best arm identification), and it is not interested in the scores collected during the test phase (i.e., the cumulative reward). The problem of best arm(s) identification has been studied in two distinct settings in the literature. Fixed budget. In the fixed budget setting (see e.g., [3, 1]), the number of rounds of the exploration phase is fixed and is known by the forecaster, and the objective is to maximize the probability of returning the best arm(s). In the above example, the company fixes the length of the test phase beforehand (e.g., enrolls a fixed number of customers) and defines a strategy to choose which products to show to the testers so that the final selected product is the best with the highest probability. Audibert et al. [1] proposed two different strategies to solve this problem. They defined a strategy based on upper confidence bounds, called UCB-E, whose optimal parameterization is strictly related to a measure of the complexity of the problem. They also introduced an elimination algorithm, called Successive Rejects, which divides the budget n in phases and discards one arm per phase. Both algorithms were shown to have nearly optimal probability of returning the best arm. Deng et al. [5] and Gabillon et al. [8] considered the extension of the best arm identification problem to the multi-bandit setting, where the objective is to return the best arm for each bandit. Recently, Bubeck et al. [4] extended the previous results to the problem of m-best arm identification and introduced a new version of the Successive Rejects algorithm (with accept and reject) that is able to return the set of the m best arms with high probability.
Fixed confidence. In the fixed confidence setting (see e.g., [12, 6]), the forecaster tries to minimize the number of rounds needed to achieve a fixed confidence about the quality of the returned arm(s). In the above example, the company keeps enrolling customers in the test until it is, e.g., 95% confident that the best product has been identified. Maron & Moore [12] considered a slightly different setting where besides a fixed confidence also the maximum number of rounds is fixed. They designed an elimination algorithm, called Hoeffding Races, based on progressively discarding the arms that are suboptimal with enough confidence. Mnih et al. [14] introduced an improved algorithm, built on the Bernstein concentration inequality, which takes into account the empirical variance of each arm. Even-Dar et al. [6] studied the fixed confidence setting without any budget constraint and designed an elimination algorithm able to return an arm with a required accuracy ϵ (i.e., whose performance is at least ϵ-close to the optimal arm). Kalyanakrishnan & Stone [10] further extended this approach to the case where the m-best arms must be returned with a given confidence. Finally, Kalyanakrishnan et al. [11] recently introduced an algorithm for the case of m-best arm identification along with a thorough theoretical analysis showing the number of rounds needed to achieve the desired confidence. Although the fixed budget and fixed confidence problems have been studied separately, they display several similarities. In this paper, we propose a unified approach to these two settings in the general case of m-best arm identification with accuracy ϵ.1 The main contributions of the paper can be summarized as follows: Algorithm. In Section 3, we propose a novel meta-algorithm, called unified gap-based exploration (UGapE), which uses the same arm selection and (arm) return strategies for the two settings. 
This algorithm allows us to solve settings that have not been covered in the previous work (e.g., the case of ϵ ̸= 0 has not been studied in the fixed budget setting). Furthermore, we show in Appendix C of [7] that UGapE outperforms existing algorithms in some settings (e.g., it improves the performance of the algorithm by Mnih et al. [14] in the fixed confidence setting). We also provide a thorough empirical evaluation of UGapE and compare it with a number of existing fixed budget and fixed confidence algorithms in Appendix C of [7]. Theoretical analysis. Similar to the algorithmic contribution, in Section 4, we show that a large portion of the theoretical analysis required to study the behavior of the two settings of the UGapE algorithm can be unified in a series of lemmas. The final theoretical guarantees are thus a direct consequence of these lemmas when used in the two specific settings. Problem complexity. In Section 4.4, we show that the theoretical analysis indicates that the two problems share exactly the same definition of complexity. In particular, we show that the probability of success in the fixed budget setting as well as the sample complexity in the fixed confidence setting strictly depend on the inverse of the gaps of the arms and the desired accuracy ϵ. Extensions. Finally, in Appendix B of [7], we discuss how the proposed algorithm and analysis can be extended to improved definitions of confidence interval (e.g., Bernstein-based bounds) and to more complex settings, such as the multi-bandit best arm identification problem introduced in [8]. 2 Problem Formulation In this section, we introduce the notation used throughout the paper. Let A = {1, . . . , K} be the set of arms such that each arm k ∈A is characterized by a distribution νk bounded in [0, b] with mean µk and variance σ2 k. 
We define the m-max and m-argmax operators as µ(m) = m-max_{k∈A} µk and (m) = arg m-max_{k∈A} µk, where (m) denotes the index of the m-th best arm in A and µ(m) is its corresponding mean, so that µ(1) ≥ µ(2) ≥ . . . ≥ µ(K). (Ties are broken in an arbitrary but consistent manner.) We denote by Sm ⊂ A any subset of m arms (i.e., |Sm| = m < K) and by Sm,∗ the subset of the m best arms (i.e., k ∈ Sm,∗ iff µk ≥ µ(m)). Without loss of generality, we assume there exists a unique set Sm,∗. In the following we drop the superscript m and use S = Sm and S∗ = Sm,∗ whenever m is clear from the context. With a slight abuse of notation we further extend the m-max operator to an operator returning a set of arms, such that {µ(1), . . . , µ(m)} = 1..m-max_{k∈A} µk and S∗ = arg 1..m-max_{k∈A} µk. For each arm k ∈ A, we define the gap ∆k as ∆k = µk − µ(m+1) if k ∈ S∗, and ∆k = µ(m) − µk if k ∉ S∗. This definition of gap indicates that if k ∈ S∗, ∆k represents the “advantage” of arm k over the suboptimal arms, and if k ∉ S∗, ∆k denotes how suboptimal arm k is. Note that we can also write the gap as ∆k = |m-max_{i≠k} µi − µk|. Given an accuracy ϵ and a number of arms m, we say that an arm k is (ϵ,m)-optimal if µk ≥ µ(m) − ϵ. Thus, we define the (ϵ,m)-best arm identification problem as the problem of finding a set S of m (ϵ,m)-optimal arms. (Note that when ϵ = 0 and m = 1 this reduces to the standard best arm identification problem.) The (ϵ,m)-best arm identification problem can be formalized as a game between a stochastic bandit environment and a forecaster. The distributions {νk} are unknown to the forecaster. At each round t, the forecaster pulls an arm I(t) ∈ A and observes an independent sample drawn from the distribution νI(t). The forecaster estimates the expected value of each arm by computing the average of the samples observed over time.
Let Tk(t) be the number of times that arm k has been pulled by the end of round t; then the mean of this arm is estimated as µ̂k(t) = (1/Tk(t)) Σ_{s=1}^{Tk(t)} Xk(s), where Xk(s) is the s-th sample observed from νk. For any arm k ∈ A, we define the arm simple regret as rk = µ(m) − µk, (1) and for any set S ⊂ A of m arms, we define the simple regret as rS = max_{k∈S} rk = µ(m) − min_{k∈S} µk. (2) We denote by Ω(t) ⊂ A the set of m arms returned by the forecaster at the end of the exploration phase (when the algorithm stops after t rounds), and by rΩ(t) its corresponding simple regret. Returning m (ϵ,m)-optimal arms is then equivalent to having rΩ(t) smaller than ϵ. Given an accuracy ϵ and a number of arms m to return, we now formalize the two settings of fixed budget and fixed confidence. Fixed budget. The objective is to design a forecaster capable of returning a set of m (ϵ,m)-optimal arms with the largest possible confidence using a fixed budget of n rounds. More formally, given a budget n, the performance of the forecaster is measured by the probability δ̃ of not meeting the (ϵ,m) requirement, i.e., δ̃ = P(rΩ(n) ≥ ϵ); the smaller δ̃, the better the algorithm. Fixed confidence. The goal is to design a forecaster that stops as soon as possible and returns a set of m (ϵ,m)-optimal arms with a fixed confidence. We denote by ñ the time when the algorithm stops and by Ω(ñ) its set of returned arms. Given a confidence level δ, the forecaster has to guarantee that P(rΩ(ñ) ≥ ϵ) ≤ δ. The performance of the forecaster is then measured by the number of rounds ñ, either in expectation or in high probability. Although these settings have been considered as two distinct problems, in Section 3 we introduce a unified arm-selection strategy that can be used in both cases by simply changing the stopping criterion.
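To make the definitions above concrete, here is a minimal sketch (not from the paper; the arm means in the example are made-up) of the gap ∆k and the simple regrets of Eqs. 1 and 2:

```python
def gaps(mu, m):
    """Gap of arm k for m-best identification:
    Delta_k = mu_k - mu_(m+1) if k is among the m best, else mu_(m) - mu_k."""
    order = sorted(range(len(mu)), key=lambda k: -mu[k])
    mu_m, mu_m1 = mu[order[m - 1]], mu[order[m]]  # m-th and (m+1)-th best means
    best = set(order[:m])                          # the set S* of the m best arms
    return [mu[k] - mu_m1 if k in best else mu_m - mu[k] for k in range(len(mu))]

def simple_regret(mu, m, S):
    """r_S = mu_(m) - min_{k in S} mu_k (Eq. 2); zero iff S attains the m-th best mean."""
    mu_m = sorted(mu, reverse=True)[m - 1]
    return mu_m - min(mu[k] for k in S)
```

With means (0.9, 0.8, 0.5, 0.4) and m = 2, the gaps are (0.4, 0.3, 0.3, 0.4), and returning S = {0, 2} gives simple regret 0.8 − 0.5 = 0.3, so S is (ϵ, 2)-optimal only for ϵ ≥ 0.3.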
Moreover, we show in Section 4 that the bounds on the performance of the algorithm in the two settings share the same notion of complexity and can be derived using very similar arguments. 3 Unified Gap-based Exploration Algorithm In this section, we describe the unified gap-based exploration (UGapE) meta-algorithm and show how it is implemented in the fixed-budget and fixed-confidence settings. As shown in Figure 1, both fixed-budget (UGapEb) and fixed-confidence (UGapEc) instances of UGapE use the same arm-selection strategy, SELECT-ARM (described in Figure 2), and upon stopping, return the m best arms in the same manner (using Ω). The two algorithms only differ in their stopping criteria. More precisely, both algorithms receive as input the definition of the problem (ϵ, m), a constraint (the budget n in UGapEb and the confidence level δ in UGapEc), and a parameter (a or c). While UGapEb runs for n rounds and then returns the set of arms Ω(n), UGapEc runs until it achieves the desired accuracy ϵ with the requested confidence level δ. This difference is due to the two different objectives targeted by the algorithms; while UGapEc optimizes its budget for a given confidence level, UGapEb's goal is to optimize the quality of its recommendation for a fixed budget.

UGapEb(ϵ, m, n, a)
  Parameters: accuracy ϵ, number of arms m, budget n, exploration parameter a
  Initialize: pull each arm k once, update µ̂k(K), and set Tk(K) = 1
  for t = K + 1, . . . , n do SELECT-ARM(t) end for
  Return Ω(n) = arg min_{J(t)} B_{J(t)}(t)

UGapEc(ϵ, m, δ, c)
  Parameters: accuracy ϵ, number of arms m, confidence level δ, exploration parameter c
  Initialize: pull each arm k once, update µ̂k(K), set Tk(K) = 1, and t ← K + 1
  while B_{J(t)}(t) ≥ ϵ do SELECT-ARM(t); t ← t + 1 end while
  Return Ω(t) = J(t)

Figure 1: The pseudo-code for the UGapE algorithm in the fixed-budget (UGapEb) and fixed-confidence (UGapEc) settings.
SELECT-ARM(t)
  Compute Bk(t) for each arm k ∈ A
  Identify the set of m arms J(t) ∈ arg 1..m-min_{k∈A} Bk(t)
  Pull the arm I(t) = arg max_{k∈{lt,ut}} βk(t − 1)
  Observe X_{I(t)}(T_{I(t)}(t − 1) + 1) ∼ ν_{I(t)}
  Update µ̂_{I(t)}(t) and T_{I(t)}(t)

Figure 2: The pseudo-code for UGapE's arm-selection strategy. This routine is used in both the UGapEb and UGapEc instances of UGapE.

Regardless of the final objective, how to select an arm at each round (the arm-selection strategy) is the key component of any multi-armed bandit algorithm. One of the most important features of UGapE is having a unique arm-selection strategy for the fixed-budget and fixed-confidence settings. We now describe UGapE's arm-selection strategy, whose pseudo-code is reported in Figure 2. At each time step t, UGapE first uses the observations up to time t − 1 and computes an index Bk(t) = m-max_{i≠k} Ui(t) − Lk(t) for each arm k ∈ A, where for all t and all k ∈ A, Uk(t) = µ̂k(t − 1) + βk(t − 1), Lk(t) = µ̂k(t − 1) − βk(t − 1). (3) In Eq. 3, βk(t − 1) is a confidence interval, and Uk(t) and Lk(t) are high-probability upper and lower bounds on the mean of arm k, µk, after t − 1 rounds. Note that the parameters a and c are used in the definition of the confidence interval βk, whose shape strictly depends on the concentration bound used by the algorithm. For example, we can derive βk from the Chernoff-Hoeffding bound as UGapEb: βk(t − 1) = b √(a / Tk(t − 1)), UGapEc: βk(t − 1) = b √(c log(4K(t − 1)³/δ) / Tk(t − 1)). (4) In Sec. 4, we discuss how the parameters a and c can be tuned, and we show that while a should be tuned as a function of n and ϵ in UGapEb, c = 1/2 is always a good choice for UGapEc. Defining the confidence interval in the general form βk(t − 1) allows us to easily extend the algorithm by taking into account different (higher) moments of the arms (see Appendix B of [7] for the case of variance, where βk(t − 1) is obtained from the Bernstein inequality). From Eq.
3, we may see that the index Bk(t) is an upper bound on the simple regret rk of the k-th arm (see Eq. 1). We also define an index for a set S as BS(t) = max_{i∈S} Bi(t). Similar to the arm index, BS is defined so as to upper-bound the simple regret rS with high probability (see Lemma 1). After computing the arm indices, UGapE finds a set of m arms J(t) with minimum upper bound on their simple regrets, i.e., J(t) = arg 1..m-min_{k∈A} Bk(t). From J(t), it computes two arm indices ut = arg max_{j∉J(t)} Uj(t) and lt = arg min_{i∈J(t)} Li(t), where in both cases ties are broken in favor of the arm with the largest uncertainty βk(t − 1). (To be more precise, βk(t − 1) is the width of a confidence interval, or a confidence radius.) Arms lt and ut are the worst possible arm among those in J(t) and the best possible arm left outside J(t), respectively, and together they represent how bad the choice of J(t) could be. Finally, the algorithm selects and pulls the arm I(t) as the arm with the larger β(t − 1) among ut and lt, observes a sample X_{I(t)}(T_{I(t)}(t − 1) + 1) from the distribution ν_{I(t)}, and updates the empirical mean µ̂_{I(t)}(t) and the number of pulls T_{I(t)}(t) of the selected arm I(t). There are two more points that need to be discussed about the UGapE algorithm. 1) While UGapEc defines the set of returned arms as Ω(t) = J(t), UGapEb returns the set of arms J(t) with the smallest index, i.e., Ω(n) = arg min_{J(t)} B_{J(t)}(t), t ∈ {1, . . . , n}. 2) UGapEc stops (we refer to the number of rounds before stopping as ñ) when B_{J(ñ+1)}(ñ + 1) is less than the given accuracy ϵ, i.e., when even the m-th worst upper bound on the arm simple regret among all the arms in the selected set J(ñ + 1) is smaller than ϵ. This guarantees that the simple regret (see Eq. 2) of the set returned by the algorithm, Ω(ñ) = J(ñ + 1), is smaller than ϵ with probability larger than 1 − δ.
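As a concrete illustration of the routine just described (a sketch under our own naming, not the authors' code), one call of SELECT-ARM with the fixed-budget confidence radius of Eq. 4 can be written as:

```python
import math

def beta(a, b, T):
    """Fixed-budget confidence radius of Eq. 4: beta_k(t-1) = b * sqrt(a / T_k(t-1))."""
    return b * math.sqrt(a / T)

def select_arm(mu_hat, T, m, a, b=1.0):
    """One step of SELECT-ARM (Figure 2); returns the arm to pull and B_{J(t)}(t)."""
    K = len(mu_hat)
    U = [mu_hat[k] + beta(a, b, T[k]) for k in range(K)]  # upper bounds, Eq. 3
    L = [mu_hat[k] - beta(a, b, T[k]) for k in range(K)]  # lower bounds, Eq. 3
    # B_k(t) = (m-th largest U_i over i != k) - L_k
    B = [sorted(U[:k] + U[k + 1:], reverse=True)[m - 1] - L[k] for k in range(K)]
    J = sorted(range(K), key=lambda k: B[k])[:m]      # m arms with smallest index
    u = max((k for k in range(K) if k not in J), key=lambda k: U[k])  # u_t
    l = min(J, key=lambda k: L[k])                                     # l_t
    I = max((u, l), key=lambda k: beta(a, b, T[k]))   # pull the more uncertain one
    return I, max(B[k] for k in J)
```

For instance, with µ̂ = (0.9, 0.5, 0.1), T = (10, 10, 1), m = 1 and a = 0.5, arm 2 is pulled: it sits outside J(t) = {0}, but its single sample makes its upper bound the largest among the excluded arms and its radius the widest.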
4 Theoretical Analysis In this section, we provide high-probability upper bounds on the performance of the two instances of the UGapE algorithm, UGapEb and UGapEc, introduced in Section 3. An important feature of UGapE is that since its fixed-budget and fixed-confidence versions share the same arm-selection strategy, a large part of their theoretical analysis can be unified. We first report this unified part of the proof in Section 4.1, and then provide the final performance bound for each of the algorithms, UGapEb and UGapEc, separately, in Sections 4.2 and 4.3, respectively. Before moving to the main results, we define additional notation used in the analysis. We first define the event E as E = {∀k ∈ A, ∀t ∈ {1, . . . , T}, |µ̂k(t) − µk| < βk(t)}, (5) where the values of T and βk are defined for each specific setting separately. Note that the event E plays an important role in the sequel, since it allows us to first derive a series of results which are directly implied by E and to postpone the study of the stochastic nature of the problem (i.e., the probability of E) to the two specific settings. In particular, when E holds, we have that for any arm k ∈ A and at any time t, Lk(t) ≤ µk ≤ Uk(t). Finally, we define the complexity of the problem as Hϵ = Σ_{i=1}^{K} b² / max((∆i + ϵ)/2, ϵ)². (6) Note that although the complexity has an explicit dependence on ϵ, it also depends on the number of arms m through the definition of the gaps ∆i, thus making it a complexity measure of the (ϵ,m)-best arm identification problem. In Section 4.4, we will discuss why the complexity of the two instances of the problem is measured by this quantity. 4.1 Analysis of the Arm-Selection Strategy Here we report lower (Lemma 1) and upper (Lemma 2) bounds for the indices BS on the event E, which show their connection with the regret and the gaps. The technical lemmas used in the proofs (Lemmas 3 and 4 and Corollary 1) are reported in Appendix A of [7].
We first prove that for any set S ≠ S∗ and any time t ∈ {1, . . . , T}, the index BS(t) is an upper bound on the simple regret rS of this set. Lemma 1. On the event E, for any set S ≠ S∗ and any time t ∈ {1, . . . , T}, we have BS(t) ≥ rS. Proof. On the event E, for any arm i ∉ S∗ and each time t ∈ {1, . . . , T}, we may write Bi(t) = m-max_{j≠i} Uj(t) − Li(t) = m-max_{j≠i} (µ̂j(t − 1) + βj(t − 1)) − (µ̂i(t − 1) − βi(t − 1)) ≥ m-max_{j≠i} µj − µi = µ(m) − µi = ri. (7) Using Eq. 7, we have BS(t) = max_{i∈S} Bi(t) ≥ max_{i∈(S−S∗)} Bi(t) ≥ max_{i∈(S−S∗)} ri = rS, where the last step follows from the fact that ri ≤ 0 for any i ∈ S∗. Lemma 2. On the event E, if arm k ∈ {lt, ut} is pulled at time t ∈ {1, . . . , T}, we have B_{J(t)}(t) ≤ min(0, −∆k + 2βk(t − 1)) + 2βk(t − 1). (8) Proof. We first prove the statement for B(t) = U_{ut}(t) − L_{lt}(t), i.e., B(t) ≤ min(0, −∆k + 2βk(t − 1)) + 2βk(t − 1). (9) We consider the following cases: Case 1. k = ut: Case 1.1. ut ∈ S∗: Since by definition ut ∉ J(t), there exists an arm j ∉ S∗ such that j ∈ J(t). Now we may write µ(m+1) ≥ µj ≥(a) Lj(t) ≥(b) L_{lt}(t) ≥(c) L_{ut}(t) = µ̂k(t − 1) − βk(t − 1) ≥(d) µk − 2βk(t − 1), (10) where (a) and (d) hold because of the event E, (b) follows from the fact that j ∈ J(t) and from the definition of lt, and (c) is the result of Lemma 4. From Eq. 10, we may deduce that −∆k + 2βk(t − 1) ≥ 0, which together with Corollary 1 gives us the desired result (Eq. 9). Case 1.2. ut ∉ S∗: Case 1.2.1. lt ∈ S∗: In this case, we may write B(t) = U_{ut}(t) − L_{lt}(t) ≤(a) µ_{ut} + 2β_{ut}(t − 1) − µ_{lt} + 2β_{lt}(t − 1) ≤(b) µ_{ut} + 2β_{ut}(t − 1) − µ(m) + 2β_{lt}(t − 1) ≤(c) −∆_{ut} + 4β_{ut}(t − 1), (11) where (a) holds because of the event E, (b) follows from the fact that lt ∈ S∗, and (c) holds because ut is pulled, and thus β_{ut}(t − 1) ≥ β_{lt}(t − 1). The final result follows from Eq. 11 and Corollary 1. Case 1.2.2. lt ∉ S∗: Since lt ∉ S∗ and, by definition, lt ∈ J(t), there exists an arm j ∈ S∗ such that j ∉ J(t).
Now we may write µ_{ut} + 2β_{ut}(t − 1) ≥(a) U_{ut}(t) ≥(b) Uj(t) ≥(c) µj ≥(d) µ(m), (12) where (a) and (c) hold because of the event E, (b) follows from the definition of ut and the fact that j ∉ J(t), and (d) holds because j ∈ S∗. From Eq. 12, we may deduce that −∆_{ut} + 2β_{ut}(t − 1) ≥ 0, which together with Corollary 1 gives us the final result (Eq. 9). With similar arguments and cases, we prove the result of Eq. 9 for k = lt. The final statement of the lemma (Eq. 8) follows directly from B(t) ≥ B_{J(t)}(t), as shown in Lemma 3. Using Lemmas 1 and 2, we obtain upper and lower bounds on B_{J(t)} in terms of quantities related to the regret of J(t). Lemma 1 confirms the intuition that the B-values upper-bound the regret of the corresponding set of arms (with high probability). Unfortunately, this is not enough to claim that selecting J(t) as the set of arms with the smallest B-values actually corresponds to arms with small regret, since B_{J(t)} could be an arbitrarily loose bound on the regret. Lemma 2 provides this complementary guarantee specifically for the set J(t), in the form of an upper bound on B_{J(t)} w.r.t. the gap of k ∈ {ut, lt}. This implies that as the algorithm runs, the choice of J(t) becomes more and more accurate, since B_{J(t)} is constrained between r_{J(t)} and a quantity (Eq. 8) that gets smaller and smaller, thus implying that the arms with the smallest B-values, i.e., the set J(t), correspond to those which actually have the smallest regret, i.e., the arms in S∗. This argument will be implicitly at the basis of the proofs of the two following theorems. 4.2 Regret Bound for the Fixed-Budget Setting Here we prove an upper bound on the simple regret of UGapEb. Since the setting considered by the algorithm is fixed-budget, we may set T = n. From the definition of the confidence interval βi(t) in Eq. 4 and a union bound, we have that P(E) ≥ 1 − 2Kn exp(−2a). We now have all the tools needed to prove the performance of UGapEb for the (ϵ,m)-best arm identification problem.
(The extension to a confidence interval that takes into account the variance of the arms is discussed in Appendix B of [7].) Theorem 1. If we run UGapEb with parameter 0 < a ≤ (n − K)/(4Hϵ), its simple regret rΩ(n) satisfies δ̃ = P(rΩ(n) ≥ ϵ) ≤ 2Kn exp(−2a), and in particular this probability is minimized for a = (n − K)/(4Hϵ). Proof. The proof is by contradiction. We assume that rΩ(n) > ϵ on the event E and consider the following two steps: Step 1: Here we show that on the event E, we have the following upper bound on the number of pulls of any arm i ∈ A: Ti(n) < 4ab²/max((∆i + ϵ)/2, ϵ)² + 1. (13) Let ti be the last time that arm i is pulled. If arm i has been pulled only during the initialization phase, Ti(n) = 1 and Eq. 13 trivially holds. If i has been selected by SELECT-ARM, then we have min(−∆i + 2βi(ti − 1), 0) + 2βi(ti − 1) ≥(a) B(ti) ≥(b) B_{J(ti)}(ti) ≥(c) B_{Ω(n)}(tℓ) >(d) ϵ, (14) where tℓ ∈ {1, . . . , n} is the time such that Ω(n) = J(tℓ). Here (a) and (b) are the results of Lemmas 2 and 3, (c) is by the definition of Ω(n), and (d) holds because, using Lemma 1, we know that if the algorithm suffers a simple regret rΩ(n) > ϵ (as assumed at the beginning of the proof), then BΩ(n)(t) > ϵ for all t = 1, . . . , n + 1. By the definition of ti, we know Ti(n) = Ti(ti − 1) + 1. Using this fact, the definition of βi(ti − 1), and Eq. 14, it is straightforward to show that Eq. 13 holds. Step 2: We know that Σ_{i=1}^{K} Ti(n) = n. Using Eq. 13, we have Σ_{i=1}^{K} 4ab²/max((∆i + ϵ)/2, ϵ)² + K > n on the event E. It is easy to see that by selecting a ≤ (n − K)/(4Hϵ), the left-hand side of this inequality will be smaller than or equal to n, which is a contradiction. Thus, we conclude that rΩ(n) ≤ ϵ on the event E. The final result follows from the probability of the event E defined at the beginning of this section. 4.3 Regret Bound for the Fixed-Confidence Setting Here we prove an upper bound on the simple regret of UGapEc. Since the setting considered by the algorithm is fixed-confidence, we may set T = +∞.
From the definition of the confidence interval βi(t) in Eq. 4 and a union bound over Tk(t) ∈ {0, . . . , t}, t = 1, . . . , ∞, we have that P(E) ≥ 1 − δ. Theorem 2. The UGapEc algorithm stops after ñ rounds and returns a set of m arms, Ω(ñ), that satisfies P(rΩ(ñ+1) ≤ ϵ ∧ ñ ≤ N) ≥ 1 − δ, where N = K + O(Hϵ log(Hϵ/δ)) and c has been set to its optimal value 1/2. Proof. We first prove the bound on the simple regret of UGapEc. Using Lemma 1, we have that on the event E, the simple regret of UGapEc upon stopping satisfies B_{J(ñ+1)}(ñ + 1) = B_{Ω(ñ+1)}(ñ + 1) ≥ rΩ(ñ+1). As a result, on the event E, the regret of UGapEc cannot be bigger than ϵ, because that would contradict the stopping condition of the algorithm, i.e., B_{J(ñ+1)}(ñ + 1) < ϵ. Therefore, we have P(rΩ(ñ+1) ≤ ϵ) ≥ 1 − δ. Now we prove the bound on the sample complexity. Similar to the proof of Theorem 1, we consider the following two steps: Step 1: Here we show that on the event E, we have the following upper bound on the number of pulls of any arm i ∈ A: Ti(ñ) ≤ 2b² log(4K(ñ − 1)³/δ)/max((∆i + ϵ)/2, ϵ)² + 1. (15) Let ti be the last time that arm i is pulled. If arm i has been pulled only during the initialization phase, Ti(ñ) = 1 and Eq. 15 trivially holds. If i has been selected by SELECT-ARM, then we have B_{J(ti)}(ti) ≥ ϵ. Now using Lemma 2, we may write B_{J(ti)}(ti) ≤ min(0, −∆i + 2βi(ti − 1)) + 2βi(ti − 1). (16) We can prove Eq. 15 by plugging the value of βi(ti − 1) from Eq. 4 into Eq. 16 and solving for Ti(ti), taking into account that Ti(ti − 1) + 1 = Ti(ti). Step 2: We know that Σ_{i=1}^{K} Ti(ñ) = ñ. Using Eq. 15, on the event E, we have 2Hϵ log(4K(ñ − 1)³/δ) + K ≥ ñ. Solving this inequality gives us ñ ≤ N. 4.4 Problem Complexity Theorems 1 and 2 indicate that both the probability of success and the sample complexity of UGapE are directly related to the complexity Hϵ defined by Eq. 6. This implies that Hϵ captures the intrinsic difficulty of the (ϵ,m)-best arm(s) identification problem independently of the specific setting considered.
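The shared dependence on Hϵ can be checked numerically for the fixed-budget case. The sketch below (gap values are made-up; function names are ours) computes Hϵ from Eq. 6, the optimal parameter a of Theorem 1, and the per-arm pull caps of Eq. 13; with a = (n − K)/(4Hϵ), the caps sum exactly to the budget n, which is what Step 2 of the proof of Theorem 1 exploits:

```python
def complexity(gaps, eps, b=1.0):
    """H_eps = sum_i b^2 / max((Delta_i + eps)/2, eps)^2 (Eq. 6).
    Each term is b^2/eps^2 when Delta_i <= eps, else 4 b^2/(Delta_i + eps)^2."""
    return sum(b ** 2 / max((d + eps) / 2, eps) ** 2 for d in gaps)

def budget_parameter(n, K, H_eps):
    """Optimal exploration parameter of Theorem 1: a = (n - K) / (4 H_eps)."""
    return (n - K) / (4 * H_eps)

def pull_cap(a, delta_i, eps, b=1.0):
    """Bound of Eq. 13 on T_i(n): 4 a b^2 / max((Delta_i + eps)/2, eps)^2 + 1."""
    return 4 * a * b ** 2 / max((delta_i + eps) / 2, eps) ** 2 + 1
```

Summing pull_cap over the K arms gives 4aHϵ + K, which equals n exactly at the recommended a; any larger a would force the total above n, the contradiction used in the proof.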
Furthermore, note that this definition generalizes existing notions of complexity. For example, for ϵ = 0 and m = 1 we recover the complexity used in the definition of UCB-E [1] for the fixed budget setting and the one defined in [6] for the fixed accuracy problem. Let us analyze Hϵ in the general case of ϵ > 0. We define the complexity of a single arm i ∈ A as Hϵ,i = b²/max((∆i + ϵ)/2, ϵ)². When the gap ∆i is smaller than the desired accuracy ϵ, i.e., ∆i ≤ ϵ, the complexity reduces to Hϵ,i = b²/ϵ². In fact, the algorithm can stop as soon as the desired accuracy ϵ is achieved, which means that there is no need to exactly discriminate between arm i and the best arm. On the other hand, when ∆i > ϵ, the complexity becomes Hϵ,i = 4b²/(∆i + ϵ)². This shows that when the desired accuracy is smaller than the gap, the complexity of the problem is smaller than in the case of ϵ = 0, for which we have H0,i = 4b²/∆i². More generally, the analysis reported in the paper suggests that the performance of an upper-confidence-bound-based algorithm such as UGapE is characterized by the same notion of complexity in both settings. Thus, whenever the complexity is known, it is possible to exploit the theoretical analysis (bounds on the performance) to easily switch from one setting to the other. For instance, as also suggested in Section 5.4 of [9], if the complexity H is known, an algorithm like UGapEc can be adapted to run in the fixed budget setting by inverting the bound on its sample complexity. This would lead to an algorithm similar to UGapEb with similar performance, although the parameter tuning could be more difficult because of the intrinsically poor accuracy of the constants in the bound. On the other hand, it is an open question whether it is possible to find an "equivalence" between algorithms for the two different settings when the complexity is not known.
In particular, it would be important to derive a distribution-dependent lower bound of the form of the one reported in [1], for the general case of ϵ ≥ 0 and m ≥ 1, for both the fixed budget and fixed confidence settings.

5 Summary and Discussion

We proposed a meta-algorithm, called unified gap-based exploration (UGapE), that unifies the two settings of the best arm(s) identification problem in stochastic multi-armed bandits: fixed budget and fixed confidence. UGapE can be instantiated as two algorithms with a common structure (the same arm-selection and arm-return strategies) corresponding to these two settings, whose performance can be analyzed in a unified way, i.e., a large portion of their theoretical analysis can be unified in a series of lemmas. We proved a performance bound for the UGapE algorithm in the two settings. We also showed how UGapE and its theoretical analysis can be extended to take into account the variance of the arms and to handle multiple bandits. Finally, we evaluated the performance of UGapE and compared it with a number of existing fixed budget and fixed confidence algorithms. This unification is important for both theoretical and algorithmic reasons. Despite their similarities, the fixed budget and fixed confidence settings have been treated differently in the literature. We believe that this unification provides a better understanding of the intrinsic difficulties of the best arm(s) identification problem. In particular, our analysis showed that the same complexity term characterizes the hardness of both settings. As mentioned in the introduction, there was no algorithm available for several settings considered in this paper, e.g., (ϵ,m)-best arm identification with fixed budget. With UGapE, we introduced an algorithm that can be easily adapted to all these settings.
Acknowledgments

This work was supported by the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council and FEDER through the "contrat de projets état région 2007–2013", the French National Research Agency (ANR) under project LAMPADA n° ANR-09-EMER-007, the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 270327, and the PASCAL2 European Network of Excellence.

References

[1] J.-Y. Audibert, S. Bubeck, and R. Munos. Best arm identification in multi-armed bandits. In Proceedings of the Twenty-Third Annual Conference on Learning Theory, pages 41–53, 2010.
[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235–256, 2002.
[3] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandit problems. In Proceedings of the Twentieth International Conference on Algorithmic Learning Theory, pages 23–37, 2009.
[4] S. Bubeck, T. Wang, and N. Viswanathan. Multiple identifications in multi-armed bandits. CoRR, abs/1205.3181, 2012.
[5] K. Deng, J. Pineau, and S. Murphy. Active learning for developing personalized treatment. In Proceedings of the Twenty-Seventh International Conference on Uncertainty in Artificial Intelligence, pages 161–168, 2011.
[6] E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079–1105, 2006.
[7] V. Gabillon, M. Ghavamzadeh, and A. Lazaric. Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence. Technical report 00747005, October 2012.
[8] V. Gabillon, M. Ghavamzadeh, A. Lazaric, and S. Bubeck. Multi-bandit best arm identification. In Proceedings of Advances in Neural Information Processing Systems 25, pages 2222–2230, 2011.
[9] S. Kalyanakrishnan. Learning Methods for Sequential Decision Making with Imperfect Representations.
PhD thesis, Department of Computer Science, The University of Texas at Austin, Austin, Texas, USA, December 2011. Published as UT Austin Computer Science Technical Report TR-11-41.
[10] S. Kalyanakrishnan and P. Stone. Efficient selection of multiple bandit arms: Theory and practice. In Proceedings of the Twenty-Seventh International Conference on Machine Learning, pages 511–518, 2010.
[11] S. Kalyanakrishnan, A. Tewari, P. Auer, and P. Stone. PAC subset selection in stochastic multi-armed bandits. In Proceedings of the Twenty-Ninth International Conference on Machine Learning, 2012.
[12] O. Maron and A. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In Proceedings of Advances in Neural Information Processing Systems 6, pages 59–66, 1993.
[13] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[14] V. Mnih, Cs. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In Proceedings of the Twenty-Fifth International Conference on Machine Learning, pages 672–679, 2008.
[15] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.
On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes Bruno Scherrer Inria, Villers-lès-Nancy, F-54600, France bruno.scherrer@inria.fr Boris Lesner Inria, Villers-lès-Nancy, F-54600, France boris.lesner@inria.fr

Abstract

We consider infinite-horizon stationary γ-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. Using Value and Policy Iteration with some error ϵ at each iteration, it is well known that one can compute stationary policies that are 2γ/(1−γ)² ϵ-optimal. After arguing that this guarantee is tight, we develop variations of Value and Policy Iteration for computing non-stationary policies that can be up to 2γ/(1−γ) ϵ-optimal, which constitutes a significant improvement in the usual situation when γ is close to 1. Surprisingly, this shows that the problem of "computing near-optimal non-stationary policies" is much simpler than that of "computing near-optimal stationary policies".

1 Introduction

Given an infinite-horizon stationary γ-discounted Markov Decision Process [24, 4], we consider approximate versions of the standard Dynamic Programming algorithms, Policy and Value Iteration, that build sequences of value functions vk and policies πk as follows:

Approximate Value Iteration (AVI): vk+1 ← T vk + ϵk+1 (1)
Approximate Policy Iteration (API): vk ← vπk + ϵk; πk+1 ← any element of G(vk) (2)

where v0 and π0 are arbitrary, T is the Bellman optimality operator, vπk is the value of policy πk, and G(vk) is the set of policies that are greedy with respect to vk. At each iteration k, the term ϵk accounts for a possible approximation of the Bellman operator (for AVI) or of the evaluation of vπk (for API). Throughout the paper, we will assume that the error terms ϵk satisfy ∥ϵk∥∞ ≤ ϵ for all k, for some ϵ ≥ 0. Under this assumption, it is well known that both algorithms share the following performance bound (see [25, 11, 4] for AVI and [4] for API):

Theorem 1. For API (resp.
AVI), the loss due to running policy πk (resp. any policy πk in G(vk−1)) instead of the optimal policy π∗ satisfies

lim sup_{k→∞} ∥v∗ − vπk∥∞ ≤ 2γ/(1−γ)² ϵ.

The constant 2γ/(1−γ)² can be very large, in particular when γ is close to 1, and consequently the above bound is commonly believed to be conservative for practical applications. Interestingly, this very constant 2γ/(1−γ)² appears in many works analyzing AVI algorithms [25, 11, 27, 12, 13, 23, 7, 6, 20, 21, 22, 9], API algorithms [15, 19, 16, 1, 8, 18, 5, 17, 10, 3, 9, 2] and in one of their generalizations [26], suggesting that it cannot be improved. Indeed, the bound (and the 2γ/(1−γ)² constant) is tight for API [4, Example 6.4], and we will show in Section 3 – to our knowledge, this has never been argued in the literature – that it is also tight for AVI. Even though the theory of optimal control states that there exists a stationary policy that is optimal, the main contribution of our paper is to show that looking for a non-stationary policy (instead of a stationary one) may lead to a much better performance bound. In Section 4, we will show how to deduce such a non-stationary policy from a run of AVI. In Section 5, we will describe two original Policy Iteration variations that compute non-stationary policies. For all these algorithms, we will prove a performance bound that can be reduced down to 2γ/(1−γ) ϵ. This is a factor 1/(1−γ) better than the standard bound of Theorem 1, which is significant when γ is close to 1. Surprisingly, this shows that the problem of "computing near-optimal non-stationary policies" is much simpler than that of "computing near-optimal stationary policies". Before we present these contributions, the next section precisely describes our setting.
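To make update (1) concrete, the following is a minimal tabular sketch of AVI with a bounded error term ϵk injected at each iteration (the MDP, its sizes, and the seed are all made up); along the run, the iterates stay within the elementary contraction bound γ^k ∥v∗ − v0∥∞ + (1−γ^k)/(1−γ) ϵ:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, eps = 5, 3, 0.9, 0.01
# Random transition kernel P[a, s, s'] (rows sum to 1) and rewards r[s, a]
P = rng.dirichlet(np.ones(nS), size=(nA, nS))
r = rng.uniform(0.0, 1.0, size=(nS, nA))

def bellman_T(v):
    # (Tv)(s) = max_a [ r(s,a) + gamma * sum_s' P(s'|s,a) v(s') ]
    q = r + gamma * np.einsum('ast,t->sa', P, v)
    return q.max(axis=1)

# v* via (nearly) exact value iteration
v_star = np.zeros(nS)
for _ in range(2000):
    v_star = bellman_T(v_star)

# AVI: v_{k+1} = T v_k + eps_k with ||eps_k||_inf <= eps, started at v_0 = 0
v = np.zeros(nS)
for k in range(1, 51):
    v = bellman_T(v) + rng.uniform(-eps, eps, size=nS)
    bound = gamma**k * np.abs(v_star).max() + (1 - gamma**k) / (1 - gamma) * eps
    assert np.abs(v_star - v).max() <= bound + 1e-8
```

The in-loop bound follows by induction from ∥v∗ − vk∥∞ ≤ γ∥v∗ − vk−1∥∞ + ϵ, the contraction argument used in the paper's proofs.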
2 Background

We consider an infinite-horizon discounted Markov Decision Process [24, 4] (S, A, P, r, γ), where S is a possibly infinite state space, A is a finite action space, P(ds′|s, a), for all (s, a), is a probability kernel on S, r : S × A → R is a reward function bounded in max-norm by Rmax, and γ ∈ (0, 1) is a discount factor. A stationary deterministic policy π : S → A maps states to actions. We write rπ(s) = r(s, π(s)) and Pπ(ds′|s) = P(ds′|s, π(s)) for the immediate reward and the stochastic kernel associated with policy π. The value vπ of a policy π is a function mapping states to the expected discounted sum of rewards received when following π from any state: for all s ∈ S,

vπ(s) = E[ Σ_{t=0}^∞ γ^t rπ(st) | s0 = s, st+1 ∼ Pπ(·|st) ].

The value vπ is clearly bounded by Vmax = Rmax/(1−γ). It is well known that vπ can be characterized as the unique fixed point of the linear Bellman operator associated with a policy π: Tπ : v ↦ rπ + γPπv. Similarly, the Bellman optimality operator T : v ↦ maxπ Tπv has as unique fixed point the optimal value v∗ = maxπ vπ. A policy π is greedy w.r.t. a value function v if Tπv = Tv; the set of such greedy policies is written G(v). Finally, a policy π∗ is optimal, with value vπ∗ = v∗, iff π∗ ∈ G(v∗), or equivalently Tπ∗v∗ = v∗. Though it is known [24, 4] that there always exists a deterministic stationary policy that is optimal, we will, in this article, consider non-stationary policies and now introduce the related notation. Given a sequence π1, π2, . . . , πk of k stationary policies (this sequence will be clear from the context described later), and for any 1 ≤ m ≤ k, we denote by πk,m the periodic non-stationary policy that takes the first action according to πk, the second according to πk−1, . . . , the mth according to πk−m+1, and then starts again.
Formally, this can be written as πk,m = πk πk−1 · · · πk−m+1 πk πk−1 · · · πk−m+1 · · · . It is straightforward to show that the value vπk,m of this periodic non-stationary policy πk,m is the unique fixed point of the following operator: Tk,m = Tπk Tπk−1 · · · Tπk−m+1. Finally, it will be convenient to introduce the following discounted kernel: Γk,m = (γPπk)(γPπk−1) · · · (γPπk−m+1). In particular, for any pair of values v and v′, it can easily be seen that Tk,m v − Tk,m v′ = Γk,m(v − v′).

3 Tightness of the performance bound of Theorem 1

The bound of Theorem 1 is tight for API in the sense that there exists an MDP [4, Example 6.4] for which the bound is reached. To the best of our knowledge, a similar argument has never been provided for AVI in the literature. It turns out that the MDP that is used for showing the tightness for API also applies to AVI. This is what we show in this section.

Example 1. Consider the γ-discounted deterministic MDP from [4, Example 6.4] depicted in Figure 1. It involves states 1, 2, . . .. In state 1 there is only one self-loop action with zero reward; for each state i > 1 there are two possible choices: either move to state i−1 with zero reward, or stay with reward ri = −2 (γ−γ^i)/(1−γ) ϵ, with ϵ ≥ 0.

[Figure 1: The deterministic MDP for which the bound of Theorem 1 is tight for Value and Policy Iteration. States 1, 2, 3, . . . , k, . . . ; the move actions have reward 0 and the stay rewards are −2γϵ, −2(γ+γ²)ϵ, . . . , −2 (γ−γ^k)/(1−γ) ϵ.]

Clearly the optimal policy in all states i > 1 is to move to i−1, and the optimal value function v∗ is 0 in all states. Starting with v0 = v∗, we are going to show that for all iterations k ≥ 1 it is possible to have a policy πk+1 ∈ G(vk) which moves in every state but k+1 and is thus such that vπk+1(k+1) = r_{k+1}/(1−γ) = −2 (γ−γ^{k+1})/(1−γ)² ϵ, which meets the bound of Theorem 1 when k tends to infinity. To do so, we assume that the following approximation errors are made at each iteration k > 0: ϵk(i) = −ϵ if i = k; ϵ if i = k+1; and 0 otherwise.
With this error, we now prove by induction on k that for all k ≥ 1:

vk(i) = −γ^{k−1}ϵ if i < k; rk/2 − ϵ if i = k; −(rk/2 − ϵ) if i = k+1; and 0 otherwise.

Since v0 = 0, the best action is clearly to move in every state i ≥ 2, which gives v1 = v0 + ϵ1 = ϵ1; this establishes the claim for k = 1. Assuming that our induction claim holds for k, we now show that it also holds for k+1. For the move action, write q^m_k for its action-value function. For all i > 1 we have q^m_k(i) = 0 + γ vk(i−1), hence

q^m_k(i) = γ(−γ^{k−1}ϵ) = −γ^k ϵ if i = 2, . . . , k; γ(rk/2 − ϵ) = r_{k+1}/2 if i = k+1; −γ(rk/2 − ϵ) = −r_{k+1}/2 if i = k+2; and 0 otherwise.

For the stay action, write q^s_k for its action-value function. For all i > 0 we have q^s_k(i) = ri + γ vk(i), hence

q^s_k(i) = ri − γ^k ϵ if i = 1, . . . , k−1; rk + γ(rk/2 − ϵ) = rk + r_{k+1}/2 if i = k; r_{k+1} − r_{k+1}/2 = r_{k+1}/2 if i = k+1; r_{k+2} + γ·0 = r_{k+2} if i = k+2; and 0 otherwise.

First, only the stay action is available in state 1; hence, since r1 = 0 and ϵ_{k+1}(1) = 0, we have v_{k+1}(1) = q^s_k(1) + ϵ_{k+1}(1) = −γ^k ϵ, as desired. Second, since ri < 0 for all i > 1, we have q^m_k(i) > q^s_k(i) in all these states but k+1, where q^m_k(k+1) = q^s_k(k+1) = r_{k+1}/2. Using the fact that v_{k+1} = max(q^m_k, q^s_k) + ϵ_{k+1} gives the result for v_{k+1}. The fact that for i > 1 we have q^m_k(i) ≥ q^s_k(i), with equality only at i = k+1, implies that there exists a policy π_{k+1} greedy for vk which takes the optimal move action in all states but k+1, where the stay action has the same value, leaving the algorithm the possibility of choosing the suboptimal stay action in this state and yielding the value vπ_{k+1}(k+1) that matches the upper bound as k goes to infinity.

Since Example 1 shows that the bound of Theorem 1 is tight, improving the performance bound requires modifying the algorithms. The following sections of the paper show that considering non-stationary policies instead of stationary policies is an interesting path to follow.
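A quick numerical sanity check of Example 1 (with hypothetical γ and ϵ): the loss of the greedy policy that stays in state k+1 indeed approaches the constant 2γ/(1−γ)² ϵ of Theorem 1 as k grows:

```python
gamma, eps = 0.9, 0.1   # hypothetical discount factor and error level

def r(i):
    # stay-action reward of state i (state 1 only has the zero-reward self-loop)
    return 0.0 if i == 1 else -2 * (gamma - gamma ** i) / (1 - gamma) * eps

def loss(k):
    # loss of the policy that stays in state k+1 and moves everywhere else:
    # |v*(k+1) - v_pi(k+1)| = -r(k+1)/(1-gamma) = 2 (gamma - gamma^{k+1})/(1-gamma)^2 eps
    return -r(k + 1) / (1 - gamma)

limit = 2 * gamma / (1 - gamma) ** 2 * eps   # the constant of Theorem 1
losses = [loss(k) for k in (1, 10, 200)]
assert losses[0] < losses[1] < losses[2] <= limit
assert abs(losses[2] - limit) < 1e-6         # the bound is approached as k grows
```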
4 Deducing a non-stationary policy from AVI

While AVI (Equation (1)) is usually considered as generating a sequence of values v0, v1, . . . , vk−1, it also implicitly produces a sequence¹ of policies π1, π2, . . . , πk, where for i = 0, . . . , k−1, πi+1 ∈ G(vi). Instead of outputting only the last policy πk, we here simply propose to output the periodic non-stationary policy πk,m that loops over the last m generated policies. The following theorem shows that this is indeed a good idea.

Theorem 2. For all iterations k and m such that 1 ≤ m ≤ k, the loss of running the non-stationary policy πk,m instead of the optimal policy π∗ satisfies:

∥v∗ − vπk,m∥∞ ≤ 2/(1−γ^m) ( (γ−γ^k)/(1−γ) ϵ + γ^k ∥v∗ − v0∥∞ ).

When m = 1 and k tends to infinity, one exactly recovers the result of Theorem 1. For general m, this new bound is a factor (1−γ^m)/(1−γ) better than the standard bound of Theorem 1. The choice that optimizes the bound, m = k, which consists in looping over all the policies generated from the very start, leads to the following bound:

∥v∗ − vπk,k∥∞ ≤ 2( γ/(1−γ) − γ^k/(1−γ^k) ) ϵ + 2γ^k/(1−γ^k) ∥v∗ − v0∥∞,

which tends to 2γ/(1−γ) ϵ when k tends to ∞. The rest of the section is devoted to the proof of Theorem 2. An important step of our proof lies in the following lemma, which implies that for sufficiently big m, vk = Tvk−1 + ϵk is a rather good approximation (of the order ϵ/(1−γ)) of the value vπk,m of the non-stationary policy πk,m (whereas in general it is a much poorer approximation of the value vπk of the last stationary policy πk).

Lemma 1. For all m and k such that 1 ≤ m ≤ k,

∥Tvk−1 − vπk,m∥∞ ≤ γ^m ∥vk−m − vπk,m∥∞ + (γ−γ^m)/(1−γ) ϵ.

Proof of Lemma 1. The value of πk,m satisfies:

vπk,m = Tπk Tπk−1 · · · Tπk−m+1 vπk,m. (3)

By induction, it can be shown that the sequence of values generated by AVI satisfies:

Tπk vk−1 = Tπk Tπk−1 · · · Tπk−m+1 vk−m + Σ_{i=1}^{m−1} Γk,i ϵk−i.
(4)

Subtracting Equation (3) from Equation (4), one obtains:

Tvk−1 − vπk,m = Tπk vk−1 − vπk,m = Γk,m(vk−m − vπk,m) + Σ_{i=1}^{m−1} Γk,i ϵk−i,

and the result follows by taking the norm and using the fact that for all i, ∥Γk,i∥∞ = γ^i.

We are now ready to prove the main result of this section.

Proof of Theorem 2. Using the fact that T is a contraction in max-norm, we have:

∥v∗ − vk∥∞ = ∥v∗ − (Tvk−1 + ϵk)∥∞ ≤ ∥Tv∗ − Tvk−1∥∞ + ϵ ≤ γ∥v∗ − vk−1∥∞ + ϵ.

(¹ A given sequence of value functions may induce many sequences of policies since more than one greedy policy may exist for one particular value function. Our results hold for all such possible choices of greedy policies.)

Then, by induction on k, we have that for all k ≥ 1,

∥v∗ − vk∥∞ ≤ γ^k ∥v∗ − v0∥∞ + (1−γ^k)/(1−γ) ϵ. (5)

Using Lemma 1 and Equation (5) twice, we can conclude by observing that

∥v∗ − vπk,m∥∞ ≤ ∥Tv∗ − Tvk−1∥∞ + ∥Tvk−1 − vπk,m∥∞
≤ γ∥v∗ − vk−1∥∞ + γ^m ∥vk−m − vπk,m∥∞ + (γ−γ^m)/(1−γ) ϵ
≤ γ( γ^{k−1}∥v∗ − v0∥∞ + (1−γ^{k−1})/(1−γ) ϵ ) + γ^m ( ∥vk−m − v∗∥∞ + ∥v∗ − vπk,m∥∞ ) + (γ−γ^m)/(1−γ) ϵ
≤ γ^k ∥v∗ − v0∥∞ + (γ−γ^k)/(1−γ) ϵ + γ^m ( γ^{k−m}∥v∗ − v0∥∞ + (1−γ^{k−m})/(1−γ) ϵ + ∥v∗ − vπk,m∥∞ ) + (γ−γ^m)/(1−γ) ϵ
= γ^m ∥v∗ − vπk,m∥∞ + 2γ^k ∥v∗ − v0∥∞ + 2(γ−γ^k)/(1−γ) ϵ,

from which we get ∥v∗ − vπk,m∥∞ ≤ 2/(1−γ^m) ( (γ−γ^k)/(1−γ) ϵ + γ^k ∥v∗ − v0∥∞ ).

5 API algorithms for computing non-stationary policies

We now present similar results that have a Policy Iteration flavour. Unlike in the previous section, where only the output of AVI needed to be changed, improving the bound for an API-like algorithm is slightly more involved. In this section, we describe and analyze two API algorithms that output non-stationary policies with improved performance bounds.

API with a non-stationary policy of growing period. Following our findings on non-stationary policies for AVI, we consider the following variation of API, where at each iteration, instead of computing the value of the last stationary policy πk, we compute that of the periodic non-stationary policy πk,k that loops over all the policies π1, . . .
, πk generated from the very start:

vk ← vπk,k + ϵk; πk+1 ← any element of G(vk),

where the initial (stationary) policy π1,1 is chosen arbitrarily. Thus, iteration after iteration, the non-stationary policy πk,k is made of more and more stationary policies, and this is why we refer to it as having a growing period. We can prove the following performance bound for this algorithm:

Theorem 3. After k iterations, the loss of running the non-stationary policy πk,k instead of the optimal policy π∗ satisfies:

∥v∗ − vπk,k∥∞ ≤ 2(γ−γ^k)/(1−γ) ϵ + γ^{k−1}∥v∗ − vπ1,1∥∞ + 2(k−1)γ^k Vmax.

When k tends to infinity, this bound tends to 2γ/(1−γ) ϵ, and is thus again a factor 1/(1−γ) better than the original API bound.

Proof of Theorem 3. Using the facts that Tk+1,k+1 vπk,k = Tπk+1 Tk,k vπk,k = Tπk+1 vπk,k and Tπk+1 vk ≥ Tπ∗ vk (since πk+1 ∈ G(vk)), we have:

v∗ − vπk+1,k+1 = Tπ∗ v∗ − Tk+1,k+1 vπk+1,k+1
= Tπ∗ v∗ − Tπ∗ vπk,k + Tπ∗ vπk,k − Tk+1,k+1 vπk,k + Tk+1,k+1 vπk,k − Tk+1,k+1 vπk+1,k+1
= γPπ∗(v∗ − vπk,k) + Tπ∗ vπk,k − Tπk+1 vπk,k + Γk+1,k+1(vπk,k − vπk+1,k+1)
= γPπ∗(v∗ − vπk,k) + Tπ∗ vk − Tπk+1 vk + γ(Pπk+1 − Pπ∗)ϵk + Γk+1,k+1(vπk,k − vπk+1,k+1)
≤ γPπ∗(v∗ − vπk,k) + γ(Pπk+1 − Pπ∗)ϵk + Γk+1,k+1(vπk,k − vπk+1,k+1).

By taking the norm, and using the facts that ∥vπk,k∥∞ ≤ Vmax, ∥vπk+1,k+1∥∞ ≤ Vmax, and ∥Γk+1,k+1∥∞ = γ^{k+1}, we get:

∥v∗ − vπk+1,k+1∥∞ ≤ γ∥v∗ − vπk,k∥∞ + 2γϵ + 2γ^{k+1} Vmax.

Finally, by induction on k, we obtain:

∥v∗ − vπk,k∥∞ ≤ 2(γ−γ^k)/(1−γ) ϵ + γ^{k−1}∥v∗ − vπ1,1∥∞ + 2(k−1)γ^k Vmax.

Though it has an improved asymptotic performance bound, the API algorithm we have just described has two (related) drawbacks: 1) its finite-iteration bound has a somewhat unsatisfactory term of the form 2(k−1)γ^k Vmax, and 2) even when there is no error (when ϵ = 0), we cannot guarantee that, similarly to standard Policy Iteration, it generates a sequence of policies of increasing values (it is easy to see that, in general, we do not have vπk+1,k+1 ≥ vπk,k). These two points motivate the introduction of another API algorithm.
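To make the periodic-policy machinery used in this section concrete, the value vπk,m can be computed numerically as the fixed point of the composed operator Tk,m = Tπk · · · Tπk−m+1 introduced in Section 2; a minimal sketch on a randomly generated MDP (sizes, seed, and policies are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nA, nS))  # P[a, s, :] = P(.|s, a)
r = rng.uniform(0.0, 1.0, size=(nS, nA))       # r[s, a], rewards in [0, 1]

def T_pi(pi, v):
    # Linear Bellman operator of a deterministic stationary policy pi: S -> A
    idx = np.arange(nS)
    return r[idx, pi] + gamma * P[pi, idx, :] @ v

def value_periodic(policies, iters=2000):
    # Value of the periodic policy cycling through `policies` (first entry is
    # the policy executed first): fixed point of T_{pi_k} ... T_{pi_{k-m+1}}.
    v = np.zeros(nS)
    for _ in range(iters):
        for pi in reversed(policies):  # innermost operator is applied first
            v = T_pi(pi, v)
    return v

# Three hypothetical stationary policies pi_k, pi_{k-1}, pi_{k-2}
pis = [rng.integers(0, nA, size=nS) for _ in range(3)]
v = value_periodic(pis)

# v is indeed a fixed point: one more application of T_{k,m} leaves it unchanged
w = v.copy()
for pi in reversed(pis):
    w = T_pi(pi, w)
assert np.abs(w - v).max() < 1e-8
```

Since each sweep applies m γ-contractions, the composed operator contracts by γ^m per cycle, so the iteration converges quickly.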
API with a non-stationary policy of fixed period. We now consider another variation of API, parameterized by m ≥ 1, that iterates as follows for k ≥ m:

vk ← vπk,m + ϵk; πk+1 ← any element of G(vk),

where the initial non-stationary policy πm,m is built from a sequence of m arbitrary stationary policies π1, π2, · · · , πm. Unlike the previous API algorithm, the non-stationary policy πk,m here only involves the last m greedy stationary policies instead of all of them, and is thus of fixed period. This is a strict generalization of the standard API algorithm, with which it coincides when m = 1. For this algorithm, we can prove the following performance bound:

Theorem 4. For all m and all k ≥ m, the loss of running the non-stationary policy πk,m instead of the optimal policy π∗ satisfies:

∥v∗ − vπk,m∥∞ ≤ γ^{k−m}∥v∗ − vπm,m∥∞ + 2(γ−γ^{k+1−m})/((1−γ)(1−γ^m)) ϵ.

When m = 1 and k tends to infinity, we recover exactly the bound of Theorem 1. When m > 1 and k tends to infinity, this bound coincides with that of Theorem 2 for our non-stationary version of AVI: it is a factor (1−γ^m)/(1−γ) better than the standard bound of Theorem 1. The rest of this section develops the proof of this performance bound. A central argument of our proof is the following lemma, which shows that, similarly to standard API, our new algorithm has an (approximate) policy improvement property.

Lemma 2. At each iteration of the algorithm, the value vπk+1,m of the non-stationary policy πk+1,m = πk+1 πk . . . πk+2−m πk+1 πk . . . πk+2−m . . . cannot be much worse than the value vπ′k,m of the non-stationary policy π′k,m = πk−m+1 πk . . . πk+2−m πk−m+1 πk . . . πk+2−m . . . , in the precise following sense:

vπk+1,m ≥ vπ′k,m − 2γ/(1−γ^m) ϵ.

The policy π′k,m differs from πk+1,m in that every m steps, it chooses the oldest policy πk−m+1 instead of the newest one πk+1. Also, π′k,m is related to πk,m as follows: π′k,m takes the first action according to πk−m+1 and then runs πk,m; equivalently, since πk,m loops over πk πk−1 . . .
πk−m+1, π′ k,m = πk−m+1πk,m can be seen as a 1-step right rotation of πk,m. When there is no error (when ϵ = 0), this shows that the new policy πk+1,m is better than a “rotation” of πk,m. When m = 1, πk+1,m = πk+1 and π′ k,m = πk and we thus recover the well-known (approximate) policy improvement theorem for standard API (see for instance [4, Lemma 6.1]). Proof of Lemma 2. Since π′ k,m takes the first action with respect to πk−m+1 and then runs πk,m, we have vπ′ k,m = Tπk−m+1vπk,m. Now, since πk+1 ∈G(vk), we have Tπk+1vk ≥Tπk−m+1vk and vπ′ k,m −vπk+1,m = Tπk−m+1vπk,m −vπk+1,m = Tπk−m+1vk −γPπk−m+1ϵk −vπk+1,m ≤Tπk+1vk −γPπk−m+1ϵk −vπk+1,m = Tπk+1vπk,m + γ(Pπk+1 −Pπk−m+1)ϵk −vπk+1,m = Tπk+1Tk,mvπk,m −Tk+1,mvπk+1,m + γ(Pπk+1 −Pπk−m+1)ϵk = Tk+1,mTπk−m+1vπk,m −Tk+1,mvπk+1,m + γ(Pπk+1 −Pπk−m+1)ϵk = Γk+1,m(Tπk−m+1vπk,m −vπk+1,m) + γ(Pπk+1 −Pπk−m+1)ϵk = Γk+1,m(vπ′ k,m −vπk+1,m) + γ(Pπk+1 −Pπk−m+1)ϵk. from which we deduce that: vπ′ k,m −vπk+1,m ≤(I −Γk+1,m)−1γ(Pπk+1 −Pπk−m+1)ϵk and the result follows by using the facts that ∥ϵk∥∞≤ϵ and ∥(I −Γk+1,m)−1∥∞= 1 1−γm . We are now ready to prove the main result of this section. Proof of Theorem 4. Using the facts that 1) Tk+1,m+1vπk,m = Tπk+1Tk,mvπk,m = Tπk+1vπk,m and 2) Tπk+1vk ≥Tπ∗vk (since πk+1 ∈G(vk)), we have for k ≥m, v∗−vπk+1,m = Tπ∗v∗−Tk+1,mvπk+1,m = Tπ∗v∗−Tπ∗vπk,m + Tπ∗vπk,m −Tk+1,m+1vπk,m + Tk+1,m+1vπk,m −Tk+1,mvπk+1,m = γPπ∗(v∗−vπk,m) + Tπ∗vπk,m −Tπk+1vπk,m + Γk+1,m(Tπk−m+1vπk,m −vπk+1,m) ≤γPπ∗(v∗−vπk,m) + Tπ∗vk −Tπk+1vk + γ(Pπk+1 −Pπ∗)ϵk + Γk+1,m(Tπk−m+1vπk,m −vπk+1,m) ≤γPπ∗(v∗−vπk,m) + γ(Pπk+1 −Pπ∗)ϵk + Γk+1,m(Tπk−m+1vπk,m −vπk+1,m). (6) Consider the policy π′ k,m defined in Lemma 2. Observing as in the beginning of the proof of Lemma 2 that Tπk−m+1vπk,m = vπ′ k,m, Equation (6) can be rewritten as follows: v∗−vπk+1,m ≤γPπ∗(v∗−vπk,m) + γ(Pπk+1 −Pπ∗)ϵk + Γk+1,m(vπ′ k,m −vπk+1,m). By using the facts that v∗≥vπk,m, v∗≥vπk+1,m and Lemma 2, we get ∥v∗−vπk+1,m∥∞≤γ∥v∗−vπk,m∥∞+ 2γϵ + γm(2γϵ) 1 −γm = γ∥v∗−vπk,m∥∞+ 2γ 1 −γm ϵ. 
Finally, we obtain by induction that for all k ≥ m,

∥v∗ − vπk,m∥∞ ≤ γ^{k−m}∥v∗ − vπm,m∥∞ + 2(γ−γ^{k+1−m})/((1−γ)(1−γ^m)) ϵ.

6 Discussion, conclusion and future work

We recalled in Theorem 1 the standard performance bound when computing an approximately optimal stationary policy with the standard AVI and API algorithms. After arguing that this bound is tight – in particular by providing an original argument for AVI – we proposed three new dynamic programming algorithms (one based on AVI and two on API) that output non-stationary policies for which the performance bound can be significantly reduced (by a factor 1/(1−γ)). From a bibliographical point of view, it is the work of [14] that made us think that non-stationary policies may lead to better performance bounds. In that work, the author considers problems with a finite horizon T for which one computes non-stationary policies with performance bounds in O(Tϵ), and infinite-horizon problems for which one computes stationary policies with performance bounds in O(ϵ/(1−γ)²). Using the informal equivalence of the horizons T ≃ 1/(1−γ), one sees that non-stationary policies look better than stationary policies. In [14], non-stationary policies are only computed in the context of finite-horizon (and thus non-stationary) problems; the fact that non-stationary policies can also be useful in an infinite-horizon stationary context is, to our knowledge, completely new. The best performance improvements are obtained when our algorithms consider periodic non-stationary policies whose period grows to infinity, and thus require an infinite memory, which may look like a practical limitation. However, in two of the proposed algorithms, a parameter m allows a trade-off between the quality of approximation 2γ/((1−γ^m)(1−γ)) ϵ and the amount of memory O(m) required.
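A quick numerical illustration of this trade-off (the helper name and the value of γ are made up):

```python
import math

def bound_factor(gamma, m):
    # Coefficient of eps in the bound: 2*gamma / ((1 - gamma^m) * (1 - gamma))
    return 2 * gamma / ((1 - gamma ** m) * (1 - gamma))

gamma = 0.99
# m = 1 recovers the standard constant 2*gamma/(1-gamma)^2 of Theorem 1
assert abs(bound_factor(gamma, 1) - 2 * gamma / ((1 - gamma) * (1 - gamma))) < 1e-6

# m = ceil(1/(1-gamma)) gives gamma^m <= 1/e, hence a factor <= 3.164*gamma/(1-gamma)
m = math.ceil(1 / (1 - gamma))
assert gamma ** m <= math.exp(-1)
assert bound_factor(gamma, m) <= 3.164 * gamma / (1 - gamma)
```

Larger m moves the factor from the 1/(1−γ)² regime toward the 1/(1−γ) regime, at the price of storing m policies.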
In practice, it is easy to see that by choosing m = ⌈1/(1−γ)⌉, that is, a memory that scales linearly with the horizon (and thus the difficulty) of the problem, one can get a performance bound of² 2γ/((1−e^{−1})(1−γ)) ϵ ≤ 3.164γ/(1−γ) ϵ. We conjecture that our asymptotic bound of 2γ/(1−γ) ϵ, and the non-asymptotic bounds of Theorems 2 and 4, are tight. The actual proof of this conjecture is left for future work. Important recent works in the literature study performance bounds when the errors are controlled in Lp norms instead of the max-norm [19, 20, 21, 1, 8, 18, 17], which is natural when supervised learning algorithms are used to approximate the evaluation steps of AVI and API. Since our proofs are based on componentwise bounds like those of the pioneering works in this topic [19, 20], we believe that the extension of our analysis to Lp norms is straightforward. Last but not least, an important research direction that we plan to follow consists in revisiting the many implementations of AVI and API for building stationary policies (see the list in the introduction), turning them into algorithms that look for non-stationary policies, and studying them precisely, analytically as well as empirically.

References

[1] A. Antos, Cs. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, 2008.
[2] M. Gheshlaghi Azar, V. Gómez, and H.J. Kappen. Dynamic Policy Programming with Function Approximation. In 14th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 15, Fort Lauderdale, FL, USA, 2011.
[3] D.P. Bertsekas. Approximate policy iteration: a survey and some new methods. Journal of Control Theory and Applications, 9:310–335, 2011.
[4] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[5] L. Busoniu, A. Lazaric, M. Ghavamzadeh, R. Munos, R. Babuska, and B. De Schutter.
Least-squares methods for Policy Iteration. In M. Wiering and M. van Otterlo, editors, Reinforcement Learning: State of the Art. Springer, 2011.
[6] D. Ernst, P. Geurts, and L. Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research (JMLR), 6, 2005.
(² With this choice of m, we have m ≥ 1/log(1/γ) and thus 2/(1−γ^m) ≤ 2/(1−e^{−1}) ≤ 3.164.)
[7] E. Even-Dar. Planning in POMDPs using multiplicity automata. In Uncertainty in Artificial Intelligence (UAI), pages 185–192, 2005.
[8] A.M. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. Regularized policy iteration. Advances in Neural Information Processing Systems, 21:441–448, 2009.
[9] A.M. Farahmand, R. Munos, and Cs. Szepesvári. Error propagation for approximate policy and value iteration (extended version). In NIPS, December 2010.
[10] V. Gabillon, A. Lazaric, M. Ghavamzadeh, and B. Scherrer. Classification-based Policy Iteration with a Critic. In International Conference on Machine Learning (ICML), pages 1049–1056, Seattle, USA, June 2011.
[11] G.J. Gordon. Stable Function Approximation in Dynamic Programming. In ICML, pages 261–268, 1995.
[12] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. In International Joint Conference on Artificial Intelligence, volume 17-1, pages 673–682, 2001.
[13] C. Guestrin, D. Koller, R. Parr, and S. Venkataraman. Efficient Solution Algorithms for Factored MDPs. Journal of Artificial Intelligence Research (JAIR), 19:399–468, 2003.
[14] S.M. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[15] S.M. Kakade and J. Langford. Approximately Optimal Approximate Reinforcement Learning. In International Conference on Machine Learning (ICML), pages 267–274, 2002.
[16] M.G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research (JMLR), 4:1107–1149, 2003.
[17] A. Lazaric, M. Ghavamzadeh, and R. Munos.
Finite-Sample Analysis of Least-Squares Policy Iteration. To appear in Journal of Machine Learning Research (JMLR), 2011.
[18] O.A. Maillard, R. Munos, A. Lazaric, and M. Ghavamzadeh. Finite Sample Analysis of Bellman Residual Minimization. In Masashi Sugiyama and Qiang Yang, editors, Asian Conference on Machine Learning, JMLR: Workshop and Conference Proceedings, volume 13, pages 309–324, 2010.
[19] R. Munos. Error Bounds for Approximate Policy Iteration. In International Conference on Machine Learning (ICML), pages 560–567, 2003.
[20] R. Munos. Performance Bounds in Lp norm for Approximate Value Iteration. SIAM J. Control and Optimization, 2007.
[21] R. Munos and Cs. Szepesvári. Finite time bounds for sampling based fitted value iteration. Journal of Machine Learning Research (JMLR), 9:815–857, 2008.
[22] M. Petrik and B. Scherrer. Biasing Approximate Dynamic Programming with a Lower Discount Factor. In Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS), Vancouver, Canada, 2008.
[23] J. Pineau, G.J. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence, volume 18, pages 1025–1032, 2003.
[24] M. Puterman. Markov Decision Processes. Wiley, New York, 1994.
[25] S. Singh and R. Yee. An Upper Bound on the Loss from Approximate Optimal-Value Functions. Machine Learning, 16-3:227–233, 1994.
[26] C. Thiery and B. Scherrer. Least-Squares λ Policy Iteration: Bias-Variance Trade-off in Control Problems. In International Conference on Machine Learning, Haifa, Israel, 2010.
[27] J.N. Tsitsiklis and B. Van Roy. Feature-Based Methods for Large Scale Dynamic Programming. Machine Learning, 22(1-3):59–94, 1996.
Deep Spatio-Temporal Architectures and Learning for Protein Structure Prediction Pietro Di Lena, Ken Nagata, Pierre Baldi Department of Computer Science, Institute for Genomics and Bioinformatics University of California, Irvine {pdilena,knagata,pfbaldi}@[ics.]uci.edu

Abstract

Residue-residue contact prediction is a fundamental problem in protein structure prediction. However, despite considerable research efforts, contact prediction methods are still largely unreliable. Here we introduce a novel deep machine-learning architecture which consists of a multidimensional stack of learning modules. For contact prediction, the idea is implemented as a three-dimensional stack of Neural Networks NN^k_{ij}, where i and j index the spatial coordinates of the contact map and k indexes "time". The temporal dimension is introduced to capture the fact that protein folding is not an instantaneous process, but rather a progressive refinement. Networks at level k in the stack can be trained in supervised fashion to refine the predictions produced by the previous level, hence addressing the problem of vanishing gradients, typical of deep architectures. The increased accuracy and generalization capabilities of this approach are established by rigorous comparison with other classical machine learning approaches for contact prediction. The deep approach leads to an accuracy for difficult long-range contacts of about 30%, roughly 10% above the state of the art. Many variations in the architectures and the training algorithms are possible, leaving room for further improvements. Furthermore, the approach is applicable to other problems with strong underlying spatial and temporal components.

1 Introduction

Protein structure prediction from the amino acid sequence is one of the grand challenges in Bioinformatics and Computational Biology. To date, the most accurate and reliable computational methods for protein structure prediction are based on homology modeling [27].
Homology-based methods use similarity to model the unknown target structure using known template structures. However, when good templates do not exist in protein structure repositories or when sequence similarity is very poor–which is often the case–homology modeling is no longer effective. This is the realm of ab initio modeling methods, which attempt to recover three-dimensional protein models more or less from scratch. Because the structure of proteins is invariant under translations and rotations, it is useful to consider structural representations that do not depend on Cartesian coordinates. One such representation is the contact map, essentially a sparse binary matrix representing which amino acids are in contact in the 3D structure. While contact map prediction can be viewed as a sub-problem in protein structure prediction, it is well known that it is essentially equivalent to protein structure prediction, since 3D structures can be completely recovered from sufficiently large subsets of true contacts [20, 26, 23]. Furthermore, even small sets of correctly predicted contacts can be useful for improving ab initio methods [25]. In short, contact map prediction plays a fundamental role in protein structure prediction, and most of the state-of-the-art contact predictors use some form of machine learning. Contact prediction is assessed every two years in the CASP experiments [9, 15]. However, despite considerable efforts, the accuracy of the best predictors at CASP rarely exceeds 20% for long-range contacts, suggesting major room for improvements. Simulations suggest that this accuracy ought to be increased to about 35% in order to be able to recover good 3D structures. There are two main issues arising in contact prediction that have not been addressed systematically: (1) Residue contacts are not randomly distributed in native protein structures; rather, they are spatially correlated.
Current contact predictors generally do not take these correlations into account, not even at the local level, since the contact probability for a residue pair is typically learned/inferred independently of the contact probabilities in the neighborhood of the pair. (2) Proteins do not assume a 3D conformation instantaneously, but rather through a dynamic folding process that progressively refines the structure. In contrast, current machine learning approaches attempt to learn contact map probabilities in a single step. To address these issues, here we introduce a new deep machine-learning architecture, designed as a deep stack of neural networks, in such a way that each level in the stack receives as input, and refines, the predictions produced at the previous level. Each level can be trained in a fully supervised fashion on the same set of target contacts/non-contacts, thus overcoming the vanishing gradient problem, typical of deep architectures. The idea of layering learning modules, such that the outputs of previous layers are fed as input to the next layers, is not completely new and has been applied in different contexts, particularly to computer vision detection problems [4, 10, 12, 22]. However, the techniques developed in visual detection cannot be directly applied to contact prediction due to the intrinsic differences between these problems: protein sequences have different lengths, thus it is not possible to process the entire sequence at once in the network input, as is done for images. The present work represents, to our knowledge, the first attempt to introduce spatial correlation in protein contact prediction. 2 Data preparation 2.1 Contact definition and evaluation criteria We define two residues to be in contact if the Euclidean distance between their Cβ atoms (Cα for Glycines) is lower than 8 Å. This is the contact definition adopted for the contact prediction assessment in CASP experiments [15].
The protein contact map provides a two-dimensional translation- and rotation-invariant representation of the protein three-dimensional structure. The information content of the contact map is not uniform within different regions of the map. Three distinct classes of contacts can be defined, depending on the linear sequence separation between the residues: (1) long-range contacts, with separation ≥24 residues; (2) medium-range contacts, with separation between 12 and 23 residues; and (3) short-range contacts, with separation between 6 and 11 residues. Contacts between residues separated by less than 6 residues are dense and can be easily predicted from the secondary structure. Conversely, the sparse long-range contacts are the most informative and also the most difficult to predict. Thus, as in the CASP experiments, we focus primarily on long-range contact prediction for performance assessment. The contact prediction performance is evaluated using the standard accuracy measure [15]: Acc = TP/(TP+FP), where TP and FP are the true positive and false positive predicted contacts, respectively. The Acc measure is computed for the sets of L/5, L/10, and 5 top-scored predicted pairs, where L is the length of the domain sequence. The most widely accepted measure of performance for contact prediction assessment is Acc for L/5 pairs and sequence separation ≥24 [15]. 2.2 Training and test sets In order to assess the performance of our method, a training set and a test set of protein domains are derived from the ASTRAL database [6]. We extract from the ASTRAL release 1.73 the precompiled set of protein domains with less than 20% pairwise sequence identity. We select only the domains belonging to the main SCOP [17] classes (All-Alpha, All-Beta, Alpha/Beta and Alpha+Beta). We exclude domains of length less than 50 residues, domains with multiple 3D structures, as well as non-contiguous domains (including those with missing backbone atoms).
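To make the contact definition of Section 2.1 and the Acc measure concrete, here is a minimal sketch. The function names and array layout are ours, not from the paper; `coords` is assumed to hold one representative atom (Cβ, or Cα for glycines) per residue.

```python
import numpy as np

def contact_map(coords, threshold=8.0):
    """Binary contact map: residues i and j are in contact if the
    Euclidean distance between their representative atoms is below
    the 8-angstrom CASP threshold. coords: (L, 3) array."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist < threshold

def accuracy_at(scores, native, L, frac=5, min_sep=24):
    """Acc = TP / (TP + FP) over the L/frac top-scored pairs with
    sequence separation >= min_sep (24 for long-range contacts).
    scores: dict mapping (i, j) with i < j to a predicted score."""
    pairs = [(s, i, j) for (i, j), s in scores.items() if j - i >= min_sep]
    pairs.sort(reverse=True)
    top = pairs[: max(L // frac, 1)]
    tp = sum(bool(native[i, j]) for _, i, j in top)
    return tp / len(top)
```

With `frac=5` and `min_sep=24` this reproduces the headline "Acc on L/5 pairs at separation ≥24" evaluation; `frac=10` gives the L/10 variant.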
We further filter this list by selecting just one representative domain–the shortest one–per SCOP family. This yields a training set of 2,191 structures (the list of protein domains can be found as supplementary material of [8]). For performance assessment purposes, this set is partitioned into 10 disjoint groups of roughly the same size and average domain lengths, so that no domains from two distinct groups belong to the same SCOP fold. As a result, the 10 sets do not share any structural or sequence similarity, providing a high-quality benchmark for ab initio prediction. Model performance is assessed using a standard 10-fold cross-validation procedure. In all our tests, the accuracy results on training/test are averaged over the 10 cross-validation experiments. 2.3 Feature and training example selection In this work, we do not attempt to determine the best static input features for contact prediction. Rather, we focus on a minimal set of features commonly used in machine learning-based contact prediction [11, 2, 21, 7, 24, 5]. Each residue in the protein sequence is described by a feature vector encoding three sources of information (for a total of 25 values): evolutionary information in the form of profiles (20 values, one for each amino acid type), predicted secondary structure (3 binary values, β-sheet or α-helix or coil), and predicted solvent accessibility (2 binary values, buried or exposed). The profiles are computed using PSI-BLAST [1] with an E-value cutoff equal to 0.001 and up to ten iterations against the non-redundant protein sequence database (NR). The secondary structure is predicted with SSPRO [18] and the solvent accessibility with ACCPRO [19]. For a pair of residues, these features are included in the network input by using a 9-residue long sliding window centered at each residue in the pair. In our Deep NN, these features represent the spatial features (Section 3).
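The per-residue encoding and the 9-residue sliding windows of Section 2.3 can be sketched as below. This is an illustrative sketch under our own naming (the paper publishes no code); zero-padding at the sequence ends is our assumption for windows that fall off the chain.

```python
import numpy as np

def pair_spatial_features(per_residue, i, j, window=9):
    """Spatial input for a residue pair (i, j): the 25 per-residue
    values (20 profile + 3 predicted secondary structure + 2 predicted
    solvent accessibility) over a 9-residue window centered at each
    residue of the pair. per_residue: (L, 25) array; returns a flat
    vector of 25 * 9 * 2 = 450 values."""
    L, d = per_residue.shape
    half = window // 2

    def window_feats(center):
        out = np.zeros((window, d))  # zero-padded beyond chain ends
        for w, pos in enumerate(range(center - half, center + half + 1)):
            if 0 <= pos < L:
                out[w] = per_residue[pos]
        return out

    return np.concatenate([window_feats(i).ravel(), window_feats(j).ravel()])
```

The resulting 450-value vector corresponds to the purely spatial part of each network's input; the temporal part is described in Section 3.1.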
The uneven distribution of positive (residue pairs in contact) and negative (residue pairs not in contact) examples in native protein structures requires some rebalancing of the training data. For each training structure we randomly select 20% of the negative examples, while keeping all the positive examples. We do not include in our set of selected examples residue pairs with sequence separation less than 6. All the methods compared in Section 4 are trained on exactly the same sets of examples. 3 Deep Spatio-Temporal Neural Network (DST-NN) architecture In the specific implementation used in the simulations, the DST-NN architecture consists of a three-dimensional stack of neural networks NN^k_ij, where i and j are the usual spatial coordinates of the contact map, and k is a “temporal” index. All the neural networks in the stack have the same topology (same input, hidden, and output layer sizes) with a single hidden layer, and a single sigmoidal output unit estimating the probability of contact between i and j at level k (Figure 1(a) and 1(b)). Furthermore, in this implementation, all the networks at level k have the same weights (weight sharing). Each level k can be trained in a fully supervised fashion, using the same contact maps as targets. In this way, each level of the deep architecture represents a distinct contact predictor. The inputs into NN^k_ij can be separated into purely spatial inputs, and temporal inputs (which are not purely temporal but also include a spatial component). For fixed i and j, the purely spatial inputs are identical for all levels k in the stack, hence they do not depend on “time”. These purely spatial inputs include evolutionary profiles, predicted secondary structure, and solvent accessibility in a window around residue i and residue j. These are the standard inputs used by most other predictors, which attempt to predict contacts in one shot, and are described in more detail in Section 2.3.
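The example-selection scheme of Section 2.3 (keep all positives, sample 20% of the negatives, drop pairs with separation below 6) can be sketched as follows; the function name and the seeded RNG are ours, added for reproducibility.

```python
import random

def select_training_pairs(native, L, neg_frac=0.2, min_sep=6, seed=0):
    """Rebalanced examples for one training structure: keep every
    contacting pair (positive) and a random 20% of the non-contacting
    pairs (negative); pairs with sequence separation < 6 are skipped."""
    rng = random.Random(seed)
    pos, neg = [], []
    for i in range(L):
        for j in range(i + min_sep, L):
            (pos if native[i][j] else neg).append((i, j))
    return pos, rng.sample(neg, int(len(neg) * neg_frac))
```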
The temporal inputs, on the other hand, are novel. 3.1 Temporal Features The temporal inputs for NN^k_ij correspond to the outputs of the networks NN^(k-1)_rs at the previous level in the stack, where r and s range over a neighborhood of i and j. Here we use a neighborhood of radius 4 centered at (i, j). The temporal features capture the idea that residue contacts are not randomly distributed in native protein structures; rather, they are spatially correlated: a contacting residue pair is very likely to be in the proximity of a different pair of contacting residues. For instance, a comparison of the contact proximity distribution (data not shown) for long-range residue pairs in contact and not in contact shows that over 98% of the contacting residue pairs are in the proximity of at least one additional contact, compared to 30% for non-contacting residue pairs, within a neighborhood of radius 4. Although the contact predictions at a given level of the stack are inaccurate, the contact probabilities included in the temporal feature vector can still provide some rough estimation of the contact distribution in a given neighborhood. Thus, in short, while our model is not necessarily meant to simulate the physical folding process, the stack is used to organize the prediction in such a way that each level in the stack is meant to refine the predictions produced by the previous levels, integrating information over both space and time. In particular, through the temporal inputs the architecture ought to be able to capture spatial correlations between contacts, at least over some range.
Figure 1: DST-NN architecture. (a) Overview. Each NN^k_ij represents a feed-forward neural network trainable by back-propagation. (b) For a pair of residues (i, j), the temporal inputs into NN^(k+1)_ij consist of the contact probabilities produced by the network at the previous level over a neighborhood of (i, j). 3.2 Deep Learning Training deep multi-layered neural networks is generally hard, since the error gradient tends to vanish or explode with a high number of layers [16]. In contrast, in the proposed model, the learning capabilities are not directly degraded by the depth of the stack, since each level of the stack can be trained in a supervised fashion using true contact maps to provide the targets. In this way, training can be performed incrementally, by adding a new layer to the stack. More precisely, the weights of the first-level network, NN^1_ij, are randomly initialized and the temporal feature vector is set to 0. The first network NN^1_ij is then trained for one epoch on the given set of examples. The weights of NN^1_ij are then used to initialize the weights of NN^2_ij, and the predictions obtained with NN^1_ij are used to set up the temporal feature vector of NN^2_ij. The network NN^2_ij is then trained for one epoch on the same set of examples used for NN^1_ij, and this procedure is repeated up to a certain depth. We have experimented with several variations of this training procedure, such as randomization of the weights for each new network in the stack, training each network in the stack for more than one epoch, growing the stack up to a maximum number of training epochs (one network for each epoch), or growing it to a smaller depth but then repeating the training procedure through one or more epochs.
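The level-by-level training procedure of Section 3.2 can be sketched as follows. This is a deliberately simplified sketch, not the paper's implementation: each level is modeled by a single logistic unit instead of a one-hidden-layer network, and the temporal input is just the previous level's prediction for the same example rather than predictions over a radius-4 neighborhood; all names are ours.

```python
import numpy as np

class TinyNet:
    """Stand-in for one level of the stack: a single logistic unit
    trained by one epoch of stochastic gradient descent."""
    def __init__(self, dim, rng):
        self.w = rng.normal(0.0, 0.01, dim)

    def copy_from(self, other):
        self.w = other.w.copy()

    def train_epoch(self, X, y, lr=0.1):
        for x, t in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-x @ self.w))
            self.w += lr * (t - p) * x

    def predict(self, X):
        return 1.0 / (1.0 + np.exp(-X @ self.w))

def train_stack(spatial, targets, depth, temporal_dim=1):
    """Incremental supervised training of the stack: level 1 sees an
    all-zero temporal vector; level k+1 is initialized with level k's
    weights and receives level k's predictions as temporal input.
    Every level is trained for one epoch on the same true targets."""
    rng = np.random.default_rng(0)
    n, d = spatial.shape
    temporal = np.zeros((n, temporal_dim))
    nets, preds = [], None
    for k in range(depth):
        net = TinyNet(d + temporal_dim, rng)
        if nets:                        # initialize from level k-1
            net.copy_from(nets[-1])
        X = np.hstack([spatial, temporal])
        net.train_epoch(X, targets)     # one epoch per level
        preds = net.predict(X)
        temporal = np.repeat(preds[:, None], temporal_dim, axis=1)
        nets.append(net)
    return nets, preds
```

Because every level is trained against the true targets, the gradient never has to flow through more than one level at a time, which is the sense in which the stack sidesteps the vanishing gradient problem.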
In Section 4.2 we discuss and compare such different training strategies. In Section 5 we discuss some possible variants and generalizations of the full architecture. In any case, this approach enables training very deep networks (e.g. with maximal values of k up to 100, corresponding to a global neural network architecture with 300 layers). 4 Results 4.1 Performance comparison Here we investigate the learning and generalization capabilities of the DST-NN model, and compare it with plain three-layer Neural Network (NN) models, as well as 2D Recurrent Neural Network (RNN) models, which are two of the most widely used machine learning approaches for contact prediction [11, 2, 21, 24]. Here, the NN model is perfectly equivalent to the NNs implemented in the DST-NN architecture, except for the temporal feature vector (which is missing in the NN implementation). All three methods are trained with a standard on-line back-propagation procedure using exactly the same set of examples and the same input features (Section 2.3). One of the most typical problems in neural network design is choosing, for a given classification problem, the most appropriate network size (i.e. typically the hidden layer size, which affects the total number of connections in the network). The learning time and the generalization capabilities of a particular neural network model are highly affected by the network size parameter. In order to take into account the intrinsically incomparable capacities of the different DST-NN, NN, and RNN architectures, we perform our tests by considering a range of exponentially increasing hidden layer sizes (4, 8, 16, 32, 64, and 128 units) for each architecture. The total number of connection weights for each architecture as a function of the hidden layer size, as well as the time needed to perform one training epoch, are shown in Table 1.
Figure 2 shows the learning curves of the three methods as a function of the training epoch and the different hidden layer sizes. We show the cross-training average accuracy on both training sets (continuous line) and test sets (dotted line). The learning curves in Figure 2 show the generalization performance with respect to the contact prediction accuracy on L/5 long-range contacts; the accuracy of prediction on long-range contacts is the most widely accepted evaluation measure for contact prediction and it provides a better estimate of the prediction performance than the training/testing error. Since very large numbers of training epochs are infeasible in terms of time for the RNN model (see Table 1), for the sake of comparison, we trained each method for a maximum of 100 epochs. In Table 2 we summarize the prediction performance of the three machine learning methods by showing the maximum average accuracy achieved in testing over 100 training epochs. From Figure 2, the DST-NN has overall higher storage and generalization capacity than NN and RNN. In particular, for hidden layer sizes larger than or equal to 8, the performance of the DST-NN is superior to that of NN and RNN, regardless of their sizes. Moreover, note that hidden layer sizes larger than 32 do not increase the generalization capabilities of any one of the three methods (Table 2). The counterintuitive learning curves of the RNN for hidden layer sizes larger than 8 can be explained by considering the structure of the RNN architecture. The RNN model exploits a recursive architecture that suffers, as deep architectures in general, from the problem of gradient vanishing/explosion. In order to overcome this problem, the authors of [2] use a modified form of gradient descent, by which the delta-error for back-propagation is mapped into a piecewise linear interval; this prevents the delta-error from becoming too small or too large.
The boundaries of the interval have been tuned for very small hidden layers (private communication). In our experiments, we use the same boundaries for all the tested hidden layer sizes and, apparently, these proved to be ineffective for hidden layer sizes larger than or equal to 16. In comparison, we remark again that the DST-NN is unaffected by the vanishing gradient problem, even for very deep stacks. From Figure 2, we notice that the DST-NN tends to overfit the training data more easily than the NN. For instance, we notice some small overfitting for the DST-NN starting with hidden layer size 32, while the NN starts to show some small overfitting only at hidden layer size 128. On the contrary, the RNN does not show any sign of overfitting in 100 epochs of training, regardless of the hidden layer size in the tested range, and the performance in training is roughly equivalent to the performance in testing. As a final consideration, from Table 2, the best NN and RNN performance on L/5 long-range contacts reflects quite well the state-of-the-art in contact prediction [9, 15], with an accuracy in the 21-23% range. In contrast, the DST-NN architecture achieves a maximum accuracy of 29%, which represents a significant improvement over the state-of-the-art. As a visual example, Figure 3 shows the best predictions obtained by each method on a target domain in our data set. Although the three methods achieve exactly the same accuracy (0.6) on the top-scored L/5 long-range contacts, it is evident that the DST-NN provides an overall better prediction of the contact map topology.
Table 1: Connection weights and training times

HL size | DST-NN #Conn / Time | NN #Conn / Time | RNN #Conn / Time
4       | 2,133 / ∼6m         | 1,809 / ∼1m     | 17,169 / ∼1h30min
8       | 4,265 / ∼10m        | 3,617 / ∼3m     | 19,105 / ∼2h
16      | 8,529 / ∼15m        | 7,233 / ∼5m     | 22,977 / ∼2h40m
32      | 17,057 / ∼26m       | 14,465 / ∼8m    | 30,721 / ∼3h20m
64      | 34,113 / ∼1h20m     | 28,929 / ∼15m   | 46,209 / ∼4h50m
128     | 68,225 / ∼2h        | 57,857 / ∼28m   | 77,185 / ∼7h

Figure 2: Learning curves of different machine learning methods, for hidden layer sizes (a) 4, (b) 8, (c) 16, (d) 32, (e) 64, and (f) 128.

Table 2: Best prediction performance

HL size | DST-NN (L/5, L/10, Best5) | NN (L/5, L/10, Best5) | RNN (L/5, L/10, Best5)
4       | 0.21, 0.23, 0.26          | 0.21, 0.24, 0.27      | 0.21, 0.23, 0.25
8       | 0.25, 0.27, 0.29          | 0.21, 0.24, 0.27      | 0.23, 0.26, 0.29
16      | 0.27, 0.30, 0.33          | 0.23, 0.26, 0.28      | 0.22, 0.25, 0.29
32      | 0.29, 0.32, 0.35          | 0.23, 0.26, 0.29      | 0.23, 0.26, 0.29
64      | 0.29, 0.33, 0.37          | 0.23, 0.25, 0.28      | 0.22, 0.25, 0.28
128     | 0.29, 0.33, 0.36          | 0.23, 0.25, 0.28      | 0.22, 0.25, 0.28

4.2 Training strategies comparison Here we compare the generalization performance of the DST-NN under different training strategies. Since the training time for the DST-NN increases substantially with the size of the hidden layers, in these tests we consider only hidden layers of size 16 and 32. On the other hand, as shown in Table 2, a hidden layer of size 32 does not limit the generalization performance of our method in comparison to larger sizes.
As in the previous section, we show the performance of the different training strategies in terms of learning curves (Figure 4) and maximum achievable accuracy in testing (Table 3). Recall that, according to our general training strategy, when a new network is added to the stack its initial connection weights are copied from the previous-level network in the stack. Moreover, each network is trained on exactly the same set of examples. Thus, a natural question is to what extent the randomization, in terms of both connection weights and training examples, affects the network learning capabilities. As shown in Figure 4(a)(b), under weight randomization (DST-NN1), the DST-NN gets stuck in local minima and the best prediction performance is comparable to that of NN and RNN (Table 2 and Table 3).

Figure 3: Predicted contacts at sequence separation ≥6 for the d1igqa domain, for (a) DST-NN, (b) NN, and (c) RNN. In all three figures, the lower triangle shows the native contacts (black dots). The blue and red dots in the upper triangle represent the correctly (blue) and incorrectly (red) predicted contacts among the N top-scored residue pairs, where N is the number of native contacts at sequence separation ≥6. All three methods achieve 0.6 accuracy on the top L/5 long-range contacts.

Figure 4: Learning curves of different training strategies, for hidden layer sizes (a) 16 and (b) 32.

On the other hand, under weight randomization, the DST-NN does not show any sign of overfitting and the training performance is similar to the testing performance, as for the RNN in the previous section.
Conversely, randomized selection of the training examples (DST-NN2) does not affect the performance of the DST-NN. However, this training strategy seems to be slightly less stable than our general strategy, since the standard deviation of the accuracy over the ten training/testing sets is slightly higher (data not shown). In these tests, according to our general training strategy, each network in the stack has been trained for a single epoch. The approach of training each network for more than a single epoch leads to slightly better accuracy (< 1% of improvement) at the cost of a longer training time (data not shown). Another natural issue concerning DST-NNs is whether the depth of the stack affects the generalization capabilities of the model. To assess this issue, we train a new DST-NN by limiting the depth of the stack to a fixed number of networks and then repeating the training procedure up to 100 epochs (DST-NN3). For this test, we use a limit size of 20 networks, which roughly corresponds to the interval with the highest learning peaks for hidden layer size 16 (see Figure 2). Due to the increased training time for this model (20 times slower), testing different stack depths is not practical. For this training strategy, the randomization of the weights for each newly added network in the stack does not produce any dramatic loss in prediction accuracy, although the performance results are slightly lower than those obtained by using our general weight initialization strategy (data not shown). As shown in Figure 4 and Table 3, although more time consuming, this training technique allows an improvement of approximately 2 percentage points of accuracy with respect to our general training approach (at least for a hidden layer of size 16). For this reason, restarting the training on a fixed-size stack is more advantageous in terms of prediction performance than having a very deep stack.
Unfortunately, the optimal stack depth is very likely related to the specific classification problem and cannot be inferred a priori from the architecture topology.

Table 3: Best prediction performance

Method  | HL 16 (L/5, L/10, Best5) | HL 32 (L/5, L/10, Best5)
DST-NN  | 0.27, 0.30, 0.33         | 0.29, 0.32, 0.35
DST-NN1 | 0.24, 0.27, 0.30         | 0.24, 0.27, 0.29
DST-NN2 | 0.27, 0.30, 0.33         | 0.29, 0.33, 0.36
DST-NN3 | 0.29, 0.32, 0.35         | 0.30, 0.33, 0.37

5 Concluding remarks We have presented a novel and general deep machine-learning architecture for contact prediction, implemented as a stack of Neural Networks NN^k_ij with two spatial dimensions and one temporal dimension. The stack architecture is used to organize the prediction in such a way that each level in the stack receives as input, through the temporal feature vectors, the predictions produced by the previous stages in the stack, and refines them. This approach is closer to the characteristics of the folding process, where the folded state is dynamically attained through a series of local refinements. While our architecture is not meant to simulate the folding process, the idea of modeling contact prediction in a multi-level fashion seems more natural than the traditional single-shot approach. This is confirmed by the improved generalization capabilities and accuracy of the DST-NN model, which have been demonstrated by rigorous comparison against other approaches. The proposed architecture is fairly general, and it can be adopted as a starting point for more sophisticated methods for contact prediction or other problems. For instance, while the elementary learning modules of the architecture are implemented using neural networks, it is clear that these could be replaced by other models, such as SVMs. Moreover, here we considered a simple square neighborhood for encoding the contact predictions in the temporal feature vector; more complex relationships could be discovered by exploiting different topologies for such feature vectors.
For example, different secondary structure elements tend to form specific contacting patterns and such patterns could be directly implemented in one or more specific feature vectors (see, for example, [8]). Another property of our DST-NN approach is that each level can be trained in supervised fashion. While we have used the true contact map as the target for all the levels in the architecture, it is clear that different targets could be used at different levels [3]. For instance, experimental or simulation data1 on protein folding could be used to generate contact maps at different stages of folding and use those as targets. Different variations based on these ideas are currently under investigation. The DST-NN approach is in fact a special case of the DAG-RNN approach described in [2] and relies on an underlying directed acyclic graph (DAG) to organize the computations. For these reasons, one could also imagine architectures based on a higher-dimensional stack of learning modules, for instance a stack of the form NN^lm_ijk where the spatial coordinates are three-dimensional, and the “temporal” coordinates are two-dimensional with a connectivity that ensures the absence of directed cycles (the temporal connections running only from the “past” towards the “future”). DST-NNs of the form NN^k_i, with one spatial and one temporal coordinate, could be applied to sequence problems, for instance to the prediction of secondary structure or relative solvent accessibility. Likewise, DST-NNs of the form NN^l_ijk, with three spatial and one temporal coordinate, could be applied, for instance, to problems in weather forecasting [13] or trajectory prediction in robot movements [14]. References [1] Altschul,S.F., Madden,T.L., Schäffer,A.A., Zhang,J., Zhang,Z., Miller,W., Lipman,D.J. (1997) Gapped BLAST and PSI-BLAST: a new generation of protein database search programs, Nucleic Acids Res., 25(17), 3389-3402. [2] Baldi,P., Pollastri,G.
(2003) The Principled Design of Large-Scale Recursive Neural Network Architectures-DAG-RNNs and the Protein Structure Prediction Problem, Journal of Machine Learning Research, 4, 575-602. 1http://www.dynameomics.org [3] Baldi,P. (2012) Boolean Autoencoders and Hypercube Clustering Complexity, Designs, Codes, and Cryptography, 65, 383-403. [4] Bengio,Y., Lamblin,P., Popovici,D., Larochelle,H. (2006) Greedy Layer-Wise Training of Deep Networks. Proceedings of the 20th Annual Conference on Neural Information Processing Systems (NIPS 2006), 153-160. [5] Björkholm,P., Daniluk,P., Kryshtafovych,A., Fidelis,K., Andersson,R., Hvidsten,T.R. (2009) Using multi-data hidden Markov models trained on local neighborhoods of protein structure to predict residue-residue contacts. Bioinformatics, 25, 1264-1270. [6] Chandonia,J.M., Hon,G., Walker,N.S., Lo Conte,L., Koehl,P., Levitt,M., Brenner,S.E. (2004) The ASTRAL Compendium in 2004, Nucl. Acids Res., 32(suppl 1), D189-D192. [7] Cheng,J., Baldi,P. (2007) Improved residue contact prediction using support vector machines and a large feature set, BMC Bioinformatics, 8, 113. [8] Di Lena,P., Nagata,K., Baldi,P. (2012) Deep Architectures for Protein Contact Map Prediction, Bioinformatics, 28, 2449-2457. [9] Ezkurdia,I., Graña,O., Izarzugaza,J.M., Tress,M.L. (2009) Assessment of domain boundary predictions and the prediction of intramolecular contacts in CASP8, Proteins, 77(suppl 9), 196-209. [10] Farabet,C., Couprie,C., Najman,L., LeCun,Y. (2012) Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers. Proceedings of the 29th International Conference on Machine Learning (ICML 2012). [11] Fariselli,P., Olmea,O., Valencia,A., Casadio,R. (2001) Progress in predicting inter-residue contacts of proteins with neural networks and correlated mutations. Proteins, 5, 157-162. [12] Heitz,G., Gould,S., Saxena,A., Koller,D. (2008) Cascaded Classification Models: Combining Models for Holistic Scene Understanding.
Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS 2008), 641-648. [13] Hsieh,W. (2009) Machine Learning Methods in the Environmental Sciences: Neural Networks and Kernels. Cambridge University Press, NY, USA. [14] Jetchev,N., Toussaint,M. (2009) Trajectory prediction: learning to map situations to robot trajectories. Proceedings of the 26th Annual International Conference on Machine Learning, 449-456. [15] Kryshtafovych,A., Fidelis,K., Moult,J. (2011) CASP9 results compared to those of previous CASP experiments, Proteins, In press. [16] Larochelle,H., Bengio,Y., Louradour,J., Lamblin,P. (2009) Exploring Strategies for Training Deep Neural Networks, Journal of Machine Learning Research, 10, 1-40. [17] Murzin,A.G., Brenner,S.E., Hubbard,T., Chothia,C. (1995) SCOP: a structural classification of proteins database for the investigation of sequences and structures, J. Mol. Biol., 247(4), 536-540. [18] Pollastri,G., Przybylski,D., Rost,B., Baldi,P. (2002) Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles, Proteins, 47(2), 228-235. [19] Pollastri,G., Baldi,P., Fariselli,P., Casadio,R. (2002) Prediction of Coordination Number and Relative Solvent Accessibility in Proteins, Proteins, 47(2), 142-153. [20] Porto,M., Bastolla,U., Roman,H.E., Vendruscolo,M. (2004) Reconstruction of protein structures from a vectorial representation, Phys. Rev. Lett., 92, 218101. [21] Punta,M., Rost,B. (2005) PROFcon: novel prediction of long-range contacts, Bioinformatics, 21, 2960-2968. [22] Ross,S., Munoz,D., Hebert,M., Bagnell,J.A. (2011) Learning message-passing inference machines for structured prediction, Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, 2737-2744. [23] Sathyapriya,R., Duarte,J.M., Stehr,H., Filippis,I., Lappe,M. (2009) Defining an Essence of Structure Determining Residue Contacts in Proteins.
PLoS Comput Biol, 5(12), e1000584. [24] Shackelford,G., Karplus, K. (2007) Contact prediction using mutual information and neural nets.Proteins, 69,159-164. [25] Tress,M.L., Valencia,A. (2010) Predicted residue-residue contacts can help the scoring of 3D models. Proteins, 78(8), 1980-1991. [26] Vassura,M., Margara,L., Di Lena,P., Medri,F., Fariselli,P. , Casadio,R. (2008) FT-COMAR: fault tolerant three-dimensional structure reconstruction from protein contact maps. Bioinformatics, 24, 1313-1315. [27] Zhang,Y. (2008) Progress and challenges in protein structure prediction. Curr Opin Struct Biol., 18(3), 342-348. 9
| 2012 | 207 | 4,572 |
Isotropic Hashing Weihao Kong, Wu-Jun Li Shanghai Key Laboratory of Scalable Computing and Systems Department of Computer Science and Engineering, Shanghai Jiao Tong University, China {kongweihao,liwujun}@cs.sjtu.edu.cn Abstract Most existing hashing methods adopt some projection functions to project the original data into several dimensions of real values, and then each of these projected dimensions is quantized into one bit (zero or one) by thresholding. Typically, the variances of different projected dimensions are different for existing projection functions such as principal component analysis (PCA). Using the same number of bits for different projected dimensions is unreasonable because larger-variance dimensions will carry more information. Although this viewpoint has been widely accepted by many researchers, it is still not verified by either theory or experiment because no methods have been proposed to find a projection with equal variances for different dimensions. In this paper, we propose a novel method, called isotropic hashing (IsoHash), to learn projection functions which can produce projected dimensions with isotropic variances (equal variances). Experimental results on real data sets show that IsoHash can outperform its counterpart with different variances for different dimensions, which verifies the viewpoint that projections with isotropic variances will be better than those with anisotropic variances. 1 Introduction Due to its fast query speed and low storage cost, hashing [1, 5] has been successfully used for approximate nearest neighbor (ANN) search [28]. The basic idea of hashing is to learn similarity-preserving binary codes for data representation. More specifically, each data point will be hashed into a compact binary string, and similar points in the original feature space should be hashed into close points in the hashcode space. Compared with the original feature representation, hashing has two advantages.
One is the reduced storage cost, and the other is the constant or sub-linear query time complexity [28]. These two advantages make hashing a promising choice for efficient ANN search in massive data sets [1, 5, 6, 9, 10, 14, 15, 17, 20, 21, 23, 26, 29, 30, 31, 32, 33, 34]. Most existing hashing methods adopt some projection functions to project the original data into several dimensions of real values, and then each of these projected dimensions is quantized into one bit (zero or one) by thresholding. Locality-sensitive hashing (LSH) [1, 5] and its extensions [4, 18, 19, 22, 25] use simple random projections for hash functions. These methods are called data-independent methods because the projection functions are independent of training data. Another class of methods are called data-dependent methods, whose projection functions are learned from training data. Representative data-dependent methods include spectral hashing (SH) [31], anchor graph hashing (AGH) [21], sequential projection learning (SPL) [29], principal component analysis [13] based hashing (PCAH) [7], and iterative quantization (ITQ) [7, 8]. SH learns the hashing functions based on spectral graph partitioning. AGH adopts anchor graphs to speed up the computation of graph Laplacian eigenvectors, based on which the Nyström method is used to compute projection functions. SPL learns the projection functions in a sequential way such that each function is designed to correct the errors caused by the previous one. PCAH adopts principal component analysis (PCA) to learn the projection functions. ITQ tries to learn an orthogonal rotation matrix to refine the initial projection matrix learned by PCA so that the quantization error of mapping the data to the vertices of the binary hypercube is minimized. Compared to the data-dependent methods, the data-independent methods need longer codes to achieve satisfactory performance [7].
For most existing projection functions such as those mentioned above, the variances of different projected dimensions are different. Many researchers [7, 12, 21] have argued that using the same number of bits for different projected dimensions with unequal variances is unreasonable because larger-variance dimensions will carry more information. Some methods [7, 12] apply an orthogonal transformation to the PCA-projected data with the expectation of balancing the variances of different PCA dimensions, and achieve better performance than the original PCA based hashing. However, to the best of our knowledge, there exist no methods which can guarantee to learn a projection with equal variances for different dimensions. Hence, the viewpoint that using the same number of bits for different projected dimensions is unreasonable has still not been verified by either theory or experiment. In this paper, a novel hashing method, called isotropic hashing (IsoHash), is proposed to learn a projection function which can produce projected dimensions with isotropic variances (equal variances). To the best of our knowledge, this is the first work which can learn projections with isotropic variances for hashing. Experimental results on real data sets show that IsoHash can outperform its counterpart with anisotropic variances for different dimensions, which verifies the intuitive viewpoint that projections with isotropic variances will be better than those with anisotropic variances. Furthermore, the performance of IsoHash is also comparable, if not superior, to the state-of-the-art methods. 2 Isotropic Hashing 2.1 Problem Statement Assume we are given $n$ data points $\{x_1, x_2, \cdots, x_n\}$ with $x_i \in \mathbb{R}^d$, which form the columns of the data matrix $X \in \mathbb{R}^{d \times n}$. Without loss of generality, in this paper the data are assumed to be zero-centered, which means $\sum_{i=1}^{n} x_i = 0$. The basic idea of hashing is to map each point $x_i$ into a binary string $y_i \in \{0, 1\}^m$ with $m$ denoting the code size.
Furthermore, close points in the original space $\mathbb{R}^d$ should be hashed into similar binary codes in the code space $\{0, 1\}^m$ to preserve the similarity structure in the original space. In general, we compute the binary code of $x_i$ as $y_i = [h_1(x_i), h_2(x_i), \cdots, h_m(x_i)]^T$ with $m$ binary hash functions $\{h_k(\cdot)\}_{k=1}^{m}$. Because it is NP-hard to directly compute the best binary functions $h_k(\cdot)$ for a given data set [31], most hashing methods adopt a two-stage strategy to learn $h_k(\cdot)$. In the projection stage, $m$ real-valued projection functions $\{f_k(x)\}_{k=1}^{m}$ are learned and each function can generate one real value. Hence, we have $m$ projected dimensions each of which corresponds to one projection function. In the quantization stage, the real values are quantized into a binary string by thresholding. Currently, most methods use one bit to quantize each projected dimension. More specifically, $h_k(x_i) = \mathrm{sgn}(f_k(x_i))$ where $\mathrm{sgn}(x) = 1$ if $x \ge 0$ and $0$ otherwise. The only exceptions among the quantization methods are AGH [21], DBQ [14] and MH [15], which use two bits to quantize each dimension. In sum, all of these methods adopt the same number (either one or two) of bits for different projected dimensions. However, the variances of different projected dimensions are unequal, and larger-variance dimensions typically carry more information. Hence, using the same number of bits for different projected dimensions with unequal variances is unreasonable, which has also been argued by many researchers [7, 12, 21]. Unfortunately, there exist no methods which can learn projection functions with equal variances for different dimensions. In the remainder of this section, we present a novel model to learn projections with isotropic variances. 2.2 Model Formulation The idea of our IsoHash method is to learn an orthogonal matrix to rotate the PCA projection matrix.
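The two-stage strategy above, projection followed by sign quantization, can be sketched in a few lines of NumPy (a minimal illustration in the style of PCAH; the function names are ours, not the paper's):

```python
import numpy as np

def learn_pca_projection(X, m):
    """Projection stage: top-m PCA directions of the zero-centered d x n data X.
    Columns of the returned W (d x m) are eigenvectors of X X^T for the m
    largest eigenvalues, giving f_k(x) = w_k^T x."""
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)   # eigh returns ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:m]          # keep the m largest
    return eigvecs[:, idx]

def hash_codes(X, W):
    """Quantization stage: one bit per projected dimension, h_k(x) = sgn(w_k^T x)."""
    return (W.T @ X >= 0).astype(np.uint8)       # m x n matrix of 0/1 bits

# Toy usage: 500 points in R^16 hashed to 8-bit codes.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 500))
X -= X.mean(axis=1, keepdims=True)               # zero-center, as assumed in the text
W = learn_pca_projection(X, m=8)
Y = hash_codes(X, W)
```

Each row of `Y` is one projected dimension quantized to one bit; the variances of the rows of `W.T @ X` are exactly the anisotropic eigenvalues that motivate IsoHash.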
To generate a code of $m$ bits, PCAH performs PCA on $X$, and then uses the top $m$ eigenvectors of the covariance matrix $XX^T$ as columns of the projection matrix $W \in \mathbb{R}^{d \times m}$. Here, the top $m$ eigenvectors are those corresponding to the $m$ largest eigenvalues $\{\lambda_k\}_{k=1}^{m}$, generally arranged in the non-increasing order $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$. Hence, the projection functions of PCAH are defined as follows: $f_k(x) = w_k^T x$, where $w_k$ is the $k$th column of $W$. Let $\lambda = [\lambda_1, \lambda_2, \cdots, \lambda_m]^T$ and $\Lambda = \mathrm{diag}(\lambda)$, where $\mathrm{diag}(\lambda)$ denotes the diagonal matrix whose diagonal entries are formed from the vector $\lambda$. It is easy to prove that $W^T XX^T W = \Lambda$. Hence, the variance of the values $\{f_k(x_i)\}_{i=1}^{n}$ on the $k$th projected dimension, which corresponds to the $k$th row of $W^T X$, is $\lambda_k$. Obviously, the variances for different PCA dimensions are anisotropic. To get isotropic projection functions, the idea of our IsoHash method is to learn an orthogonal matrix $Q \in \mathbb{R}^{m \times m}$ which makes $Q^T W^T XX^T WQ$ become a matrix with equal diagonal values, i.e., $[Q^T W^T XX^T WQ]_{11} = [Q^T W^T XX^T WQ]_{22} = \cdots = [Q^T W^T XX^T WQ]_{mm}$. Here, $A_{ii}$ denotes the $i$th diagonal entry of a square matrix $A$, and a matrix $Q$ is said to be orthogonal if $Q^T Q = I$ where $I$ is an identity matrix whose dimensionality depends on the context. The effect of the orthogonal matrix $Q$ is to rotate the coordinate axes while keeping the Euclidean distances between any two points unchanged. It is easy to prove that the new projection functions of IsoHash are $f_k(x) = (WQ)_k^T x$, which have the same (isotropic) variance. Here $(WQ)_k$ denotes the $k$th column of $WQ$. If we use $\mathrm{tr}(A)$ to denote the trace of a symmetric matrix $A$, we have the following Lemma 1. Lemma 1. If $Q^T Q = I$, $\mathrm{tr}(Q^T A Q) = \mathrm{tr}(A)$. Based on Lemma 1, we have $\mathrm{tr}(Q^T W^T XX^T WQ) = \mathrm{tr}(W^T XX^T W) = \mathrm{tr}(\Lambda) = \sum_{i=1}^{m} \lambda_i$ if $Q^T Q = I$. Hence, to make $Q^T W^T XX^T WQ$ become a matrix with equal diagonal values, we should set this diagonal value to $a = \frac{\sum_{i=1}^{m} \lambda_i}{m}$.
Let $a = [a_1, a_2, \cdots, a_m]$ with $a_i = a = \frac{\sum_{i=1}^{m} \lambda_i}{m}$, (1) and $\mathcal{T}(z) = \{T \in \mathbb{R}^{m \times m} \mid \mathrm{diag}(T) = \mathrm{diag}(z)\}$, where $z$ is a vector of length $m$ and $\mathrm{diag}(T)$ is overloaded to denote a diagonal matrix with the same diagonal entries as the matrix $T$. Based on our motivation for IsoHash, we can define the problem of IsoHash as follows: Problem 1. The problem of IsoHash is to find an orthogonal matrix $Q$ making $Q^T W^T XX^T WQ \in \mathcal{T}(a)$, where $a$ is defined in (1). Then, we have the following Theorem 1: Theorem 1. Assume $Q^T Q = I$ and $T \in \mathcal{T}(a)$. If $Q^T \Lambda Q = T$, $Q$ will be a solution to the problem of IsoHash. Proof. Because $W^T XX^T W = \Lambda$, we have $Q^T \Lambda Q = Q^T [W^T XX^T W] Q$. It is obvious that $Q$ will be a solution to the problem of IsoHash. As in [2], we define $\mathcal{M}(\Lambda) = \{Q^T \Lambda Q \mid Q \in O(m)\}$, (2) where $O(m)$ is the set of all orthogonal matrices in $\mathbb{R}^{m \times m}$, i.e., $Q^T Q = I$. According to Theorem 1, the problem of IsoHash is equivalent to finding an orthogonal matrix $Q$ for the following equation [2]: $\|T - Z\|_F = 0$, (3) where $T \in \mathcal{T}(a)$, $Z \in \mathcal{M}(\Lambda)$, and $\|\cdot\|_F$ denotes the Frobenius norm. Please note that for ease of understanding, we use the same notations as those in [2]. In the following content, we will use the Schur-Horn lemma [11] to prove that we can always find a solution to problem (3). Lemma 2. [Schur-Horn Lemma] Let $c = \{c_i\} \in \mathbb{R}^m$ and $b = \{b_i\} \in \mathbb{R}^m$ be real vectors in non-increasing order respectively¹, i.e., $c_1 \ge c_2 \ge \cdots \ge c_m$, $b_1 \ge b_2 \ge \cdots \ge b_m$. There exists a Hermitian matrix $H$ with eigenvalues $c$ and diagonal values $b$ if and only if $\sum_{i=1}^{k} b_i \le \sum_{i=1}^{k} c_i$ for any $k = 1, 2, \ldots, m$, and $\sum_{i=1}^{m} b_i = \sum_{i=1}^{m} c_i$. Proof. Please refer to Horn's article [11]. Based on Lemma 2, we have the following Theorem 2. Theorem 2. There exists a solution to the IsoHash problem in (3). And this solution is in the intersection of $\mathcal{T}(a)$ and $\mathcal{M}(\Lambda)$. Proof. Because $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$ and $a_1 = a_2 = \cdots = a_m = \frac{\sum_{i=1}^{m} \lambda_i}{m}$, it is easy to prove that $\frac{\sum_{i=1}^{k} \lambda_i}{k} \ge \frac{\sum_{i=1}^{m} \lambda_i}{m}$ for any $k$.
Hence, $\sum_{i=1}^{k} \lambda_i = k \times \frac{\sum_{i=1}^{k} \lambda_i}{k} \ge k \times \frac{\sum_{i=1}^{m} \lambda_i}{m} = \sum_{i=1}^{k} a_i$. Furthermore, we can prove that $\sum_{i=1}^{m} \lambda_i = \sum_{i=1}^{m} a_i$. According to Lemma 2, there exists a Hermitian matrix $H$ with eigenvalues $\lambda$ and diagonal values $a$. Moreover, we can prove that $H$ is in the intersection of $\mathcal{T}(a)$ and $\mathcal{M}(\Lambda)$, i.e., $H \in \mathcal{T}(a)$ and $H \in \mathcal{M}(\Lambda)$. According to Theorem 2, finding a $Q$ solving the problem in (3) is equivalent to finding the intersection point of $\mathcal{T}(a)$ and $\mathcal{M}(\Lambda)$, which is just an inverse eigenvalue problem called SHIEP in [2]. 2.3 Learning The problem in (3) can be reformulated as the following optimization problem: $\operatorname{argmin}_{Q:\, T \in \mathcal{T}(a),\, Z \in \mathcal{M}(\Lambda)} \|T - Z\|_F$. (4) As in [2], we propose two algorithms to learn $Q$: one is called lift and projection (LP), and the other is called gradient flow (GF). For ease of understanding, we use the same notations as those in [2], and some proofs of theorems are omitted. The readers can refer to [2] for the details. 2.3.1 Lift and Projection The main idea of the lift and projection (LP) algorithm is to alternate between the following two steps: • Lift step: Given a $T^{(k)} \in \mathcal{T}(a)$, we find the point $Z^{(k)} \in \mathcal{M}(\Lambda)$ such that $\|T^{(k)} - Z^{(k)}\|_F = \mathrm{dist}(T^{(k)}, \mathcal{M}(\Lambda))$, where $\mathrm{dist}(T^{(k)}, \mathcal{M}(\Lambda))$ denotes the minimum distance between $T^{(k)}$ and the points in $\mathcal{M}(\Lambda)$. • Projection step: Given a $Z^{(k)}$, we find $T^{(k+1)} \in \mathcal{T}(a)$ such that $\|T^{(k+1)} - Z^{(k)}\|_F = \mathrm{dist}(\mathcal{T}(a), Z^{(k)})$, where $\mathrm{dist}(\mathcal{T}(a), Z^{(k)})$ denotes the minimum distance between $Z^{(k)}$ and the points in $\mathcal{T}(a)$. ¹Please note that in [2] the values are in increasing order. It is easy to prove that our presentation of the Schur-Horn lemma is equivalent to that in [2]. The non-increasing order is chosen here just because it will facilitate our following presentation due to the non-increasing order of the eigenvalues in $\Lambda$. We call $Z^{(k)}$ a lift of $T^{(k)}$ onto $\mathcal{M}(\Lambda)$ and $T^{(k+1)}$ a projection of $Z^{(k)}$ onto $\mathcal{T}(a)$. The projection operation is easy to complete.
Suppose $Z^{(k)} = [z_{ij}]$; then $T^{(k+1)} = [t_{ij}]$ must be given by $t_{ij} = z_{ij}$ if $i \ne j$, and $t_{ij} = a_i$ if $i = j$. (5) For the lift operation, we have the following Theorem 3. Theorem 3. Suppose $T = Q^T D Q$ is an eigen-decomposition of $T$, where $D = \mathrm{diag}(d)$ with $d = [d_1, d_2, \ldots, d_m]^T$ being $T$'s eigenvalues, ordered as $d_1 \ge d_2 \ge \cdots \ge d_m$. Then the nearest neighbor of $T$ in $\mathcal{M}(\Lambda)$ is given by $Z = Q^T \Lambda Q$. (6) Proof. See Theorem 4.1 in [3]. Since in each step we minimize the distance between $T$ and $Z$, we have $\|T^{(k)} - Z^{(k)}\|_F \ge \|T^{(k+1)} - Z^{(k)}\|_F \ge \|T^{(k+1)} - Z^{(k+1)}\|_F$. It is easy to see that $(T^{(k)}, Z^{(k)})$ will converge to a stationary point. The whole IsoHash algorithm based on LP, abbreviated as IsoHash-LP, is briefly summarized in Algorithm 1. Algorithm 1 Lift and projection based IsoHash (IsoHash-LP) Input: $X \in \mathbb{R}^{d \times n}$, $m \in \mathbb{N}^+$, $t \in \mathbb{N}^+$ • $[\Lambda, W] = \mathrm{PCA}(X, m)$, as stated in Section 2.2. • Generate a random orthogonal matrix $Q_0 \in \mathbb{R}^{m \times m}$. • $Z^{(0)} \leftarrow Q_0^T \Lambda Q_0$. • for $k = 1 \to t$ do: calculate $T^{(k)}$ from $Z^{(k-1)}$ by equation (5); perform eigen-decomposition of $T^{(k)}$ to get $Q_k^T D Q_k = T^{(k)}$; calculate $Z^{(k)}$ from $Q_k$ and $\Lambda$ by equation (6). • end for • $Y = \mathrm{sgn}(Q_t^T W^T X)$. Output: $Y$ Because $\mathcal{M}(\Lambda)$ is not a convex set, the stationary point we find is not necessarily inside the intersection of $\mathcal{T}(a)$ and $\mathcal{M}(\Lambda)$. For example, if we set $Z^{(0)} = \Lambda$, the lift and projection learning algorithm would make no progress because $Z$ and $T$ are already at a stationary point. To avoid such degenerate solutions, we initialize $Z$ as $\Lambda$ transformed by some random orthogonal matrix $Q_0$, as illustrated in Algorithm 1. 2.3.2 Gradient Flow Another learning algorithm is a continuous one based on the construction of a gradient flow (GF) on the surface $\mathcal{M}(\Lambda)$ that moves towards the desired intersection point. Because there always exists a solution for the problem in (3) according to Theorem 2, the objective function in (4) can be reformulated as follows [2]: $\min_{Q \in O(m)} F(Q) = \frac{1}{2}\|\mathrm{diag}(Q^T \Lambda Q) - \mathrm{diag}(a)\|_F^2$.
(7) The details about how to optimize (7) can be found in [2]. We just show some key steps of the learning algorithm in the following content. The gradient $\nabla F$ at $Q$ can be calculated as $\nabla F(Q) = 2\Lambda Q \beta(Q)$, (8) where $\beta(Q) = \mathrm{diag}(Q^T \Lambda Q) - \mathrm{diag}(a)$. Once we have computed the gradient of $F$, it can be projected onto the manifold $O(m)$ according to the following Theorem 4. Theorem 4. The projection of $\nabla F(Q)$ onto $O(m)$ is given by $g(Q) = Q[Q^T \Lambda Q, \beta(Q)]$, (9) where $[A, B] = AB - BA$ is the Lie bracket. Proof. See the formulas (20), (21) and (22) in [3]. The vector field $\dot{Q} = -g(Q)$ defines a steepest descent flow on the manifold $O(m)$ for the function $F(Q)$. Letting $Z = Q^T \Lambda Q$ and $\alpha(Z) = \beta(Q)$, we get $\dot{Z} = [Z, [\alpha(Z), Z]]$, (10) where $\dot{Z}$ is an isospectral flow that moves to reduce the objective function $F(Q)$. As stated by Theorems 3.3 and 3.4 in [2], a stable equilibrium point of (10) must satisfy $\beta(Q) = 0$, which means that $F(Q)$ has decreased to zero. Hence, the gradient flow method can always find an intersection point as the solution. The whole IsoHash algorithm based on GF, abbreviated as IsoHash-GF, is briefly summarized in Algorithm 2. Algorithm 2 Gradient flow based IsoHash (IsoHash-GF) Input: $X \in \mathbb{R}^{d \times n}$, $m \in \mathbb{N}^+$ • $[\Lambda, W] = \mathrm{PCA}(X, m)$, as stated in Section 2.2. • Generate a random orthogonal matrix $Q_0 \in \mathbb{R}^{m \times m}$. • $Z^{(0)} \leftarrow Q_0^T \Lambda Q_0$. • Start integration from $Z = Z^{(0)}$ with the gradient computed from equation (10). • Stop integration when reaching a stable equilibrium point. • Perform eigen-decomposition of $Z$ to get $Q^T \Lambda Q = Z$. • $Y = \mathrm{sgn}(Q^T W^T X)$. Output: $Y$ We now discuss some implementation details of IsoHash-GF. Since all diagonal matrices in $\mathcal{M}(\Lambda)$ result in $\dot{Z} = 0$, one should not use $\Lambda$ as the starting point. In our implementation, we use the same method as that in IsoHash-LP to avoid this degenerate case, i.e., a random orthogonal transformation matrix $Q_0$ is used to rotate $\Lambda$.
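Both learning procedures can be sketched numerically. The following is a rough stand-in, not the authors' implementation: a plain forward-Euler step replaces the ODE solver for the flow (10), the eigenvalues are toy values, and the helper names are ours:

```python
import numpy as np

def random_orthogonal(m, rng):
    # QR factorization of a Gaussian matrix yields a random orthogonal matrix
    Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
    return Q

def isohash_lp(lam, t=100, seed=0):
    """Lift and projection: alternate the projection step (5) and lift step (6).
    lam: eigenvalues in non-increasing order. Returns Z, the target a, and the
    error ||diag(Z) - a|| after each iteration."""
    m = len(lam)
    Lam = np.diag(lam)
    a = np.full(m, lam.sum() / m)
    Q0 = random_orthogonal(m, np.random.default_rng(seed))
    Z = Q0.T @ Lam @ Q0                   # random start avoids the degenerate Z = Lam
    errs = [np.linalg.norm(np.diag(Z) - a)]
    for _ in range(t):
        T = Z.copy()
        np.fill_diagonal(T, a)            # projection step, eq. (5)
        _, V = np.linalg.eigh(T)          # eigenvalues ascending
        V = V[:, ::-1]                    # reorder to non-increasing, pairing with Lam
        Z = V @ Lam @ V.T                 # lift step, eq. (6)
        errs.append(np.linalg.norm(np.diag(Z) - a))
    return Z, a, errs

def isohash_gf(lam, step=1e-4, iters=3000, seed=0):
    """Forward-Euler stand-in for the isospectral flow (10)."""
    m = len(lam)
    Lam = np.diag(lam)
    a = np.full(m, lam.sum() / m)
    Q0 = random_orthogonal(m, np.random.default_rng(seed))
    Z = Q0.T @ Lam @ Q0
    F = [0.5 * np.sum((np.diag(Z) - a) ** 2)]   # objective (7) as a function of Z
    for _ in range(iters):
        alpha = np.diag(np.diag(Z) - a)   # beta(Q) written as a diagonal matrix
        B = alpha @ Z - Z @ alpha         # Lie bracket [alpha(Z), Z]
        Z = Z + step * (Z @ B - B @ Z)    # Euler step of Zdot = [Z, [alpha(Z), Z]]
        F.append(0.5 * np.sum((np.diag(Z) - a) ** 2))
    return Z, a, F

lam = np.array([4.0, 3.0, 2.0, 1.0])      # toy PCA eigenvalues
Z_lp, a, lp_errs = isohash_lp(lam)
Z_gf, _, gf_F = isohash_gf(lam)
```

The LP iterate stays exactly on $\mathcal{M}(\Lambda)$ (its spectrum equals $\lambda$ by construction) and its recorded errors $\|\mathrm{diag}(Z) - a\|$ are non-increasing; the Euler iterate preserves the spectrum only approximately but still drives the objective down.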
To integrate $Z$ with the gradient in (10), we use the Adams-Bashforth-Moulton PECE solver in [27], where the parameter RelTol is set to $10^{-3}$. The relative error of the algorithm is computed by comparing the diagonal entries of $Z$ to the target $\mathrm{diag}(a)$. The whole integration process will be terminated when their relative error is below $10^{-7}$. 2.4 Complexity Analysis The learning of our IsoHash method contains two phases: the first phase is PCA and the second phase is LP or GF. The time complexity of PCA is $O(\min(n^2 d, nd^2))$. The time complexity of LP after PCA is $O(m^3 t)$, and that of GF after PCA is $O(m^3)$. In our experiments, $t$ is set to 100 because good performance can be achieved at this setting. Because $m$ is typically set to be a very small number like 64 or 128, the main time complexity of IsoHash is from the PCA phase. In general, the training of IsoHash-GF will be faster than IsoHash-LP in our experiments. One promising property of both LP and GF is that the time complexity after PCA is independent of the number of training data, which makes them scalable to large-scale data sets. 3 Relation to Existing Works The most related method to IsoHash is ITQ [7], because both ITQ and IsoHash have to learn an orthogonal matrix.
However, IsoHash is different from ITQ in many aspects: firstly, the goal of IsoHash is to learn a projection with isotropic variances, but the results of ITQ cannot necessarily guarantee isotropic variances; secondly, IsoHash directly learns the orthogonal matrix from the eigenvalues and eigenvectors of PCA, but ITQ first quantizes the PCA results to get some binary codes, and then learns the orthogonal matrix based on the resulting binary codes; thirdly, IsoHash has an explicit objective function to optimize, but ITQ uses a two-step heuristic strategy whose goal cannot be formulated by a single objective function; fourthly, to learn the orthogonal matrix, IsoHash uses Lift and Projection or Gradient Flow, but ITQ uses the Procrustes method, which is much slower than IsoHash. From the experimental results which will be presented in the next section, we can find that IsoHash can achieve accuracy comparable to ITQ with much faster training speed. 4 Experiment 4.1 Data Sets We evaluate our methods on two widely used data sets, CIFAR [16] and LabelMe [28]. The first data set is CIFAR-10 [16], which consists of 60,000 images. These images are manually labeled into 10 classes, which are airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The size of each image is 32×32 pixels. We represent them with 256-dimensional gray-scale GIST descriptors [24]. The second data set is 22K LabelMe used in [23, 28], which contains 22,019 images sampled from the large LabelMe data set. As in [28], the images are scaled to 32×32 pixels, and then represented by 512-dimensional GIST descriptors [24]. 4.2 Evaluation Protocols and Baselines Following the protocols widely used in recent papers [7, 23, 25, 31], Euclidean neighbors in the original space are considered as ground truth. More specifically, a threshold of the average distance to the 50th nearest neighbor is used to define whether a point is a true positive or not.
Based on the Euclidean ground truth, we compute the precision-recall curve and mean average precision (mAP) [7, 21]. For all experiments, we randomly select 1000 points as queries, and leave the rest as the training set to learn the hash functions. All the experimental results are averaged over 10 random training/test partitions. Although a lot of hashing methods have been proposed, some of them are either supervised [23] or semi-supervised [29]. Our IsoHash method is essentially an unsupervised one. Hence, for fair comparison, we select the most representative unsupervised methods for evaluation, which include PCAH [7], ITQ [7], SH [31], LSH [1], and SIKH [25]. Among these methods, PCAH, ITQ and SH are data-dependent methods, while SIKH and LSH are data-independent methods. All experiments are conducted on our workstation with an Intel(R) Xeon(R) CPU X7560@2.27GHz and 64GB memory. 4.3 Accuracy Table 1 shows the Hamming ranking performance measured by mAP on LabelMe and CIFAR. It is clear that our IsoHash methods, including both IsoHash-GF and IsoHash-LP, achieve far better performance than PCAH. The main difference between IsoHash and PCAH is that the PCAH dimensions have anisotropic variances while IsoHash dimensions have isotropic variances. Hence, the intuitive viewpoint that using the same number of bits for different projected dimensions with anisotropic variances is unreasonable has been successfully verified by our experiments. Furthermore, the performance of IsoHash is also comparable, if not superior, to the state-of-the-art methods, such as ITQ. Figure 1 illustrates the precision-recall curves on the LabelMe data set with different code sizes. The relative performance in the precision-recall curves on CIFAR is similar to that on LabelMe. We omit the results on CIFAR due to space limitations. Once again, we can find that our IsoHash methods can achieve performance which is far better than PCAH and comparable to the state-of-the-art.
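The evaluation protocol described above, Euclidean ground truth from the average distance to the 50th nearest neighbor followed by Hamming ranking and mAP, can be sketched as follows (our own minimal implementation on synthetic data, with random LSH-style codes standing in for the learned ones):

```python
import numpy as np

def average_precision(ranked_relevance):
    """Average precision of one ranked list of 0/1 relevance flags."""
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    hits = np.flatnonzero(rel)                       # positions of relevant items
    precision_at_hits = (np.arange(len(hits)) + 1) / (hits + 1)
    return float(precision_at_hits.mean())

def hamming_map(codes_base, codes_query, X_base, X_query, k=50):
    """mAP of Hamming ranking against Euclidean ground truth: a base point is
    relevant to a query if their distance is below the average distance to the
    k-th nearest neighbor, the thresholding rule described in the text."""
    D = np.linalg.norm(X_query[:, None, :] - X_base[None, :, :], axis=2)
    thresh = np.sort(D, axis=1)[:, k - 1].mean()     # average k-th NN distance
    relevant = D < thresh
    ham = (codes_query[:, None, :] != codes_base[None, :, :]).sum(axis=2)
    aps = [average_precision(relevant[i][np.argsort(ham[i], kind="stable")])
           for i in range(len(X_query))]
    return float(np.mean(aps))

# Toy demo: random sign-projection codes on synthetic data.
rng = np.random.default_rng(1)
X_base = rng.standard_normal((200, 8))
X_query = rng.standard_normal((10, 8))
P = rng.standard_normal((8, 16))
codes_base = (X_base @ P >= 0).astype(np.uint8)
codes_query = (X_query @ P >= 0).astype(np.uint8)
score = hamming_map(codes_base, codes_query, X_base, X_query, k=50)
```

Swapping the random projection `P` for a learned one (e.g. PCA with or without the IsoHash rotation) is all that changes when comparing methods under this protocol.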
Table 1: mAP on the LabelMe and CIFAR data sets.

| Method     | LabelMe |        |        |        |        | CIFAR  |        |        |        |        |
| # bits     | 32      | 64     | 96     | 128    | 256    | 32     | 64     | 96     | 128    | 256    |
| IsoHash-GF | 0.2580  | 0.3269 | 0.3528 | 0.3662 | 0.3889 | 0.2249 | 0.2969 | 0.3256 | 0.3357 | 0.3600 |
| IsoHash-LP | 0.2534  | 0.3223 | 0.3577 | 0.3826 | 0.4274 | 0.1907 | 0.2624 | 0.3027 | 0.3223 | 0.3651 |
| PCAH       | 0.0516  | 0.0401 | 0.0341 | 0.0307 | 0.0232 | 0.0319 | 0.0274 | 0.0241 | 0.0216 | 0.0168 |
| ITQ        | 0.2786  | 0.3328 | 0.3504 | 0.3615 | 0.3728 | 0.2490 | 0.3051 | 0.3238 | 0.3319 | 0.3436 |
| SH         | 0.0826  | 0.1034 | 0.1447 | 0.1653 | 0.2080 | 0.0510 | 0.0589 | 0.0802 | 0.1121 | 0.1535 |
| SIKH       | 0.0590  | 0.1482 | 0.2074 | 0.2526 | 0.4488 | 0.0353 | 0.0902 | 0.1245 | 0.1909 | 0.3614 |
| LSH        | 0.1549  | 0.2574 | 0.3147 | 0.3375 | 0.4034 | 0.1052 | 0.1907 | 0.2396 | 0.2776 | 0.3432 |

[Figure 1: Precision-recall curves on the LabelMe data set; panels (a) 32 bits, (b) 64 bits, (c) 96 bits, (d) 256 bits, each plotting precision vs. recall for IsoHash-GF, IsoHash-LP, ITQ, SH, SIKH, LSH, and PCAH.]

4.4 Computational Cost Table 2 shows the training time on CIFAR. We can see that our IsoHash methods are much faster than ITQ. The time complexity of ITQ also contains two parts: the first part is PCA, which is the same as that in IsoHash, and the second part is an iterative algorithm to rotate the original PCA matrix with time complexity $O(nm^2)$, where $n$ is the number of training points and $m$ is the number of bits in the binary code. Hence, as the number of training points increases, the second-part time complexity of ITQ will increase linearly with the number of training points. But the time complexity of IsoHash after PCA is independent of the number of training points. Hence, IsoHash will be much faster than ITQ, particularly in the case with a large number of training points.
This is clearly shown in Figure 2, which illustrates the training time as the number of training points varies.

Table 2: Training time (in seconds) on CIFAR.

| # bits     | 32   | 64   | 96   | 128   | 256   |
| IsoHash-GF | 2.48 | 2.45 | 2.70 | 3.00  | 5.55  |
| IsoHash-LP | 2.14 | 2.43 | 2.94 | 3.47  | 8.83  |
| PCAH       | 1.84 | 2.14 | 2.23 | 2.36  | 2.92  |
| ITQ        | 4.35 | 6.33 | 9.73 | 12.40 | 29.25 |
| SH         | 1.60 | 3.41 | 8.37 | 13.66 | 49.44 |
| SIKH       | 1.30 | 1.44 | 1.57 | 1.55  | 2.20  |
| LSH        | 0.05 | 0.08 | 0.11 | 0.19  | 0.31  |

[Figure 2: Training time (in seconds) on CIFAR vs. number of training data (up to 6×10⁴) for IsoHash-GF, IsoHash-LP, ITQ, SH, SIKH, LSH, and PCAH.]

5 Conclusion Although many researchers have intuitively argued that using the same number of bits for different projected dimensions with anisotropic variances is unreasonable, this viewpoint has still not been verified by either theory or experiment because no methods have been proposed to find projection functions with isotropic variances for different dimensions. The proposed IsoHash method in this paper is the first work to learn projection functions which can produce projected dimensions with isotropic variances (equal variances). Experimental results on real data sets have successfully verified the viewpoint that projections with isotropic variances will be better than those with anisotropic variances. Furthermore, IsoHash can achieve accuracy comparable to the state-of-the-art methods with faster training speed. 6 Acknowledgments This work is supported by the NSFC (No. 61100125), the 863 Program of China (No. 2011AA01A202, No. 2012AA011003), and the Program for Changjiang Scholars and Innovative Research Team in University of China (IRT1158, PCSIRT). References [1] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM, 51(1):117–122, 2008. [2] M.T. Chu. Constructing a Hermitian matrix from its diagonal entries and eigenvalues. SIAM Journal on Matrix Analysis and Applications, 16(1):207–217, 1995. [3] M.T. Chu and K.R. Driessel.
The projected gradient method for least squares matrix approximations with spectral constraints. SIAM Journal on Numerical Analysis, pages 1050–1060, 1990. [4] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the ACM Symposium on Computational Geometry, 2004. [5] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In VLDB, 1999. [6] Y. Gong, S. Kumar, V. Verma, and S. Lazebnik. Angular quantization based binary codes for fast similarity search. In NIPS, 2012. [7] Y. Gong and S. Lazebnik. Iterative quantization: A Procrustean approach to learning binary codes. In CVPR, 2011. [8] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A Procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 2012. [9] J. He, W. Liu, and S.-F. Chang. Scalable similarity search with optimized kernel hashing. In KDD, 2010. [10] J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon. Spherical hashing. In CVPR, 2012. [11] A. Horn. Doubly stochastic matrices and the diagonal of a rotation matrix. American Journal of Mathematics, 76(3):620–630, 1954. [12] H. Jegou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In CVPR, 2010. [13] I. Jolliffe. Principal Component Analysis. Springer, 2002. [14] W. Kong and W.-J. Li. Double-bit quantization for hashing. In AAAI, 2012. [15] W. Kong, W.-J. Li, and M. Guo. Manhattan hashing for large-scale image retrieval. In SIGIR, 2012. [16] A. Krizhevsky. Learning multiple layers of features from tiny images. Tech report, University of Toronto, 2009. [17] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In NIPS, 2009. [18] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In ICCV, 2009. [19] B. Kulis, P. Jain, and K. Grauman.
Fast similarity search for learned metrics. IEEE Trans. Pattern Anal. Mach. Intell., 31(12):2143–2157, 2009. [20] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In CVPR, 2012. [21] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. In ICML, 2011. [22] Y. Mu and S. Yan. Non-metric locality-sensitive hashing. In AAAI, 2010. [23] M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. In ICML, 2011. [24] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145–175, 2001. [25] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In NIPS, 2009. [26] R. Salakhutdinov and G. E. Hinton. Semantic hashing. Int. J. Approx. Reasoning, 50(7):969–978, 2009. [27] L.F. Shampine and M.K. Gordon. Computer solution of ordinary differential equations: the initial value problem. Freeman, San Francisco, California, 1975. [28] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In CVPR, 2008. [29] J. Wang, S. Kumar, and S.-F. Chang. Sequential projection learning for hashing with compact codes. In ICML, 2010. [30] J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for large-scale search. IEEE Trans. Pattern Anal. Mach. Intell., 34(12):2393–2406, 2012. [31] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2008. [32] H. Xu, J. Wang, Z. Li, G. Zeng, S. Li, and N. Yu. Complementary hashing for approximate nearest neighbor search. In ICCV, 2011. [33] D. Zhang, F. Wang, and L. Si. Composite hashing with multiple information sources. In SIGIR, 2011. [34] Y. Zhen and D.-Y. Yeung. A probabilistic model for multimodal hash function learning. In KDD, 2012.
| 2012 | 208 | 4,573 |
Repulsive Mixtures Francesca Petralia Department of Statistical Science Duke University fp12@duke.edu Vinayak Rao Gatsby Computational Neuroscience Unit University College London vrao@gatsby.ucl.ac.uk David B. Dunson Department of Statistical Science Duke University dunson@stat.duke.edu Abstract Discrete mixtures are used routinely in broad sweeping applications ranging from unsupervised settings to fully supervised multi-task learning. Indeed, finite mixtures and infinite mixtures, relying on Dirichlet processes and modifications, have become a standard tool. One important issue that arises in using discrete mixtures is low separation in the components; in particular, different components can be introduced that are very similar and hence redundant. Such redundancy leads to too many clusters that are too similar, degrading performance in unsupervised learning and leading to computational problems and an unnecessarily complex model in supervised settings. Redundancy can arise in the absence of a penalty on components placed close together even when a Bayesian approach is used to learn the number of components. To solve this problem, we propose a novel prior that generates components from a repulsive process, automatically penalizing redundant components. We characterize this repulsive prior theoretically and propose a Markov chain Monte Carlo sampling algorithm for posterior computation. The methods are illustrated using synthetic examples and an iris data set. Key Words: Bayesian nonparametrics; Dirichlet process; Gaussian mixture model; Model-based clustering; Repulsive point process; Well separated mixture. 1 Introduction Discrete mixture models characterize the density of $y \in \mathcal{Y} \subset \Re^m$ as $f(y) = \sum_{h=1}^{k} p_h \phi(y; \gamma_h)$, (1) where $p = (p_1, \ldots, p_k)^T$ is a vector of probabilities summing to one, and $\phi(\cdot; \gamma)$ is a kernel depending on parameters $\gamma \in \Gamma$, which may consist of location and scale parameters.
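As a concrete instance of (1), the univariate Gaussian case can be evaluated and sampled in a few lines; the weights and component parameters below are illustrative only, with two nearly coincident components mimicking the redundancy discussed next:

```python
import numpy as np

def mixture_pdf(y, weights, means, sds):
    """Density of a univariate Gaussian mixture, f(y) = sum_h p_h phi(y; mu_h, sigma_h)."""
    y = np.atleast_1d(np.asarray(y, dtype=float))[:, None]
    phi = np.exp(-0.5 * ((y - means) / sds) ** 2) / (np.sqrt(2.0 * np.pi) * sds)
    return (phi * weights).sum(axis=1)

def mixture_sample(n, weights, means, sds, rng):
    """Sample by first picking a component h with probability p_h, then drawing
    from its kernel phi(.; gamma_h)."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comps], sds[comps])

# Illustrative parameters: components 2 and 3 are nearly coincident (redundant).
weights = np.array([0.5, 0.3, 0.2])
means = np.array([-2.0, 2.0, 2.1])
sds = np.array([0.5, 0.5, 0.5])
```

Samples from this mixture look bimodal even though three components are present, which is exactly the kind of redundancy the repulsive prior is designed to discourage.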
In analyses of finite mixture models, a common concern is over-fitting in which redundant mixture components located close together are introduced. Over-fitting can have an adverse impact on predictions and degrade unsupervised learning. In particular, introducing components located close together can lead to splitting of well separated clusters into a larger number of closely overlapping clusters. Ideally, the criteria for selecting k in a frequentist analysis and the prior on k and {γh} in a Bayesian analysis should guard against such over-fitting. However, the impact of the criteria used and prior chosen can be subtle. Recently, [1] studied the asymptotic behavior of the posterior distribution in over-fitted Bayesian mixture models having more components than needed. They showed that a carefully chosen prior will lead to asymptotic emptying of the redundant components. However, several challenging practical issues arise. For their prior and in standard Bayesian practice, one assumes that γh ∼ P0 independently a priori. For example, if we consider a finite location-scale mixture of multivariate Gaussians, one may choose P0 to be multivariate Gaussian-inverse Wishart. However, the behavior of the posterior can be sensitive to P0 for finite samples, with higher variance P0 favoring allocation to fewer clusters. In addition, drawing the component-specific parameters from a common prior tends to favor components located close together unless the variance is high. Sensitivity to P0 is just one of the issues. For finite samples, the weight assigned to redundant components is often substantial. This can be attributed to non- or weak identifiability. Each mixture component can potentially be split into multiple components having the same parameters. Even if exact equivalence is ruled out, it can be difficult to distinguish between models having different degrees of splitting of well-separated components into components located close together.
This issue can lead to an unnecessarily complex model, and creates difficulties in estimating the number of components and component-specific parameters. Existing strategies, such as the incorporation of order constraints, do not adequately address this issue, since it is difficult to choose reasonable constraints in multivariate problems and even with constraints, the components can be close together. The problem of separating components has been studied for Gaussian mixture models ([2]; [3]). Two Gaussians can be separated by placing an arbitrarily chosen lower bound on the distance between their means. Separated Gaussians have been mainly utilized to speed up convergence of the Expectation-Maximization (EM) algorithm. In choosing a minimal separation level, it is not clear how to obtain a good compromise between values that are too low to solve the problem and ones that are so large that one obtains a poor fit. To avoid such arbitrary hard separation thresholds, we instead propose a repulsive prior that smoothly pushes components apart. In contrast to the vast majority of the recent Bayesian literature on discrete mixture models, instead of drawing the component-specific parameters {γh} independently from a common prior P0, we propose a joint prior for {γ1, . . . , γk} that is chosen to assign low density to γhs located close together. The deviation from independence is specified a priori by a pair of repulsion parameters. The proposed class of repulsive mixture models will only place components close together if it results in a substantial gain in model fit. As we illustrate, the prior will favor a more parsimonious representation of densities, while improving practical performance in unsupervised learning. We provide strong theoretical results on rates of posterior convergence and develop Markov chain Monte Carlo algorithms for posterior computation. 
2 Bayesian repulsive mixture models

2.1 Background on Bayesian mixture modeling

Considering the finite mixture model in expression (1), a Bayesian specification is completed by choosing priors for the number of components k, the probability weights p, and the component-specific parameters γ = (γ1, . . . , γk)^T. Typically, k is assigned a Poisson or multinomial prior, p a Dirichlet(α) prior with α = (α1, . . . , αk)^T, and γh ∼ P0 independently, with P0 often chosen to be conjugate to the kernel φ. Posterior computation can proceed via a reversible jump Markov chain Monte Carlo algorithm involving moves for adding or deleting mixture components. Unfortunately, in making a k → k + 1 change in model dimension, efficient moves critically depend on the choice of proposal density. [4] proposed an alternate Markov chain Monte Carlo method, which treats the parameters as a marked point process, but does not have clear computational advantages relative to reversible jump. It has become popular to use over-fitted mixture models in which k is chosen as a conservative upper bound on the number of components under the expectation that only relatively few of the components will be occupied by subjects in the sample. From a practical perspective, the success of over-fitted mixture models has been largely due to ease in computation. As motivated in [5], simply letting αh = c/k for h = 1, . . . , k and a constant c > 0 leads to an approximation to a Dirichlet process mixture model for the density of y, which is obtained in the limit as k approaches infinity. An alternative finite approximation to a Dirichlet process mixture is obtained by truncating the stick-breaking representation of [6], leading to a similarly simple Gibbs sampling algorithm [7]. These approaches are now used routinely in practice.

2.2 Repulsive densities

We seek a prior on the component parameters in (1) that automatically favors spread out components near the support of the data.
Instead of generating the atoms γh independently from P0, one could generate them from a repulsive process that automatically pushes the atoms apart. This idea is conceptually related to the literature on repulsive point processes [8]. In the spatial statistics literature, a variety of repulsive processes have been proposed. One such model assumes that points are clustered spatially, with the cluster centers having a Strauss density [9], that is

$$p(k, \gamma) \propto \beta^{k} \rho^{r(\gamma)}$$

where k is the number of clusters, β > 0, 0 < ρ ≤ 1 and r(γ) is the number of pairwise centers that lie within a pre-specified distance r of each other. A possibly unappealing feature is that repulsion is not directly dependent on the pairwise distances between the clusters. We propose an alternative class of priors, which smoothly push apart components based on pairwise distances.

Definition 1. A density h(γ) is repulsive if for any δ > 0 there is a corresponding ϵ > 0 such that h(γ) < δ for all γ ∈ Γ \ G_ϵ, where G_ϵ = {γ : d(γs, γi) > ϵ; s = 1, . . . , k; i < s} and d is a metric.

Depending on the specification of the metric d(γs, γj), a prior satisfying definition 1 may limit over-fitting or favor well separated clusters. When d(γs, γj) is the distance between sub-vectors of γs and γj corresponding only to locations, the proposed prior favors well separated clusters. Instead, when d(γs, γj) is the distance between the sth and jth kernel, a prior satisfying definition 1 limits over-fitting in density estimation. Though both cases can be implemented, in this paper we will focus exclusively on the clustering problem. As a convenient class of repulsive priors which smoothly push components apart, we propose

$$\pi(\gamma) = c_1 \left( \prod_{h=1}^{k} g_0(\gamma_h) \right) h(\gamma), \quad (2)$$

with c1 being the normalizing constant that depends on the number of components k. The proposed prior is related to a class of point processes from the statistical physics and spatial statistics literature referred to as Gibbs processes [10].
We assume g0 : Γ → ℜ+ and h : Γ^k → [0, ∞) are continuous with respect to Lebesgue measure, and h is bounded above by a positive constant c2 and is repulsive according to definition 1. It follows that the density π defined in (2) is also repulsive. A special hardcore repulsion is produced if the repulsion function is zero when at least one pairwise distance is smaller than a pre-specified threshold. Such a density implies choosing a minimal separation level between the atoms. As mentioned in the introduction, we avoid such arbitrary hard separation thresholds by considering repulsive priors that smoothly push components apart. In particular, we propose two repulsion functions defined as

$$h(\gamma) = \prod_{(s,j) \in A} g\{d(\gamma_s, \gamma_j)\} \quad (3)$$

$$h(\gamma) = \min_{(s,j) \in A} g\{d(\gamma_s, \gamma_j)\} \quad (4)$$

with A = {(s, j) : s = 1, . . . , k; j < s} and g : ℜ+ → [0, M] a strictly monotone differentiable function with g(0) = 0, g(x) > 0 for all x > 0 and M < ∞. It is straightforward to show that h in (3) and (4) is integrable and satisfies definition 1. The two alternative repulsion functions differ in their dependence on the relative distances between components, with all the pairwise distances playing a role in (3), while (4) only depends on the minimal separation. A flexible choice of g corresponds to

$$g\{d(\gamma_s, \gamma_j)\} = \exp\left[ -\tau \{d(\gamma_s, \gamma_j)\}^{-\nu} \right], \quad (5)$$

where τ > 0 is a scale parameter and ν is a positive integer controlling the rate at which g approaches zero as d(γs, γj) decreases. Figure 1 shows contour plots of the prior π(γ1, γ2) defined as (2) with g0 being the standard normal density, the repulsive function defined as (3) or (4) and g defined as (5) for different values of (τ, ν). As τ and ν increase, the prior increasingly favors well separated components.
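A minimal numerical sketch of the two repulsion functions (3) and (4) with g as in (5); the function names and the `mode` switch are our own, and the example atoms are illustrative:

```python
import numpy as np

def g(d, tau=1.0, nu=2):
    """Repulsion factor (5): exp(-tau * d^(-nu)); tends to 0 as d -> 0
    and to 1 as d -> infinity."""
    return np.exp(-tau * d ** (-float(nu)))

def repulsion(gammas, tau=1.0, nu=2, mode="product"):
    """h(gamma) under (3) (product over pairs) or (4) (minimum over pairs)."""
    gammas = np.asarray(gammas, dtype=float)
    factors = [g(np.linalg.norm(gammas[s] - gammas[j]), tau, nu)
               for s in range(len(gammas)) for j in range(s)]
    return np.prod(factors) if mode == "product" else min(factors)
```

With close atoms the repulsion is near zero (such configurations get near-zero prior density), while well separated atoms give a value near one; the product form (3) is always at most the minimum form (4), since every factor lies in [0, 1].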
Figure 1: Contour plots of the repulsive prior π(γ1, γ2) defined as (2), with h defined as either (3) or (4) and g defined as (5), with hyperparameters (τ, ν) equal to (I) (1, 2), (II) (1, 4), (III) (5, 2) and (IV) (5, 4).

2.3 Theoretical properties

Let the true density f0 : ℜ^m → ℜ+ be defined as $f_0 = \sum_{h=1}^{k_0} p_{0h} \phi(\gamma_{0h})$ with γ0h ∈ Γ, and let the γ0j's be such that there exists an ϵ1 > 0 with $\min_{\{(s,j): s<j\}} d(\gamma_{0s}, \gamma_{0j}) \ge \epsilon_1$, d being the Euclidean distance. Let $f = \sum_{h=1}^{k} p_h \phi(\gamma_h)$ with γh ∈ Γ. Let γ ∼ π with γ = (γ1, . . . , γk)^T and π satisfying definition 1. Let p ∼ λ with λ = Dirichlet(α) and k ∼ µ with µ(k = k0) > 0. Let θ = (p, γ). These assumptions on f0 and f will be referred to as condition B0. Let Π be the prior induced on $\cup_{k=1}^{\infty} \mathcal{F}_k$, where $\mathcal{F}_k$ is the space of all distributions defined as (1). We will focus on γ being a location parameter, though the results can be extended to location-scale kernels. Let | · |1 denote the L1 norm and $KL(f_0, f) = \int f_0 \log(f_0/f)$ refer to the Kullback-Leibler (K-L) divergence between f0 and f. Density f0 belongs to the K-L support of the prior Π if Π{f : KL(f0, f) < ϵ} > 0 for all ϵ > 0. The next lemma provides sufficient conditions under which the true density is in the K-L support of the prior.

Lemma 1. Assume condition B0 is satisfied with m = 1. Let D0 be a compact set containing parameters (γ01, . . . , γ0k0). Suppose γ ∼ π with π satisfying definition 1. Let φ and π satisfy the following conditions:

A1. for any y ∈ Y, the map γ → φ(y; γ) is uniformly continuous

A2. for any y ∈ Y, φ(y; γ) is bounded above by a constant

A3. $\int f_0 \left[ \log\{\sup_{\gamma \in D_0} \phi(\gamma)\} - \log\{\inf_{\gamma \in D_0} \phi(\gamma)\} \right] < \infty$

A4. π is continuous with respect to Lebesgue measure and for any vector x ∈ Γ^k with $\min_{\{(s,j): s<j\}} d(x_s, x_j) \ge \upsilon$ for some υ > 0 there is a δ > 0 such that π(γ) > 0 for all γ satisfying $||\gamma - x||_1 < \delta$

Then f0 is in the K-L support of the prior Π.

Lemma 2.
The repulsive density in (2) with h defined as either (3) or (4) satisfies condition A4 in lemma 1.

The next lemma formalizes the posterior rate of concentration for univariate location mixtures of Gaussians.

Lemma 3. Let condition B0 be satisfied, let m = 1 and φ be the normal kernel depending on a location parameter γ and a scale parameter σ. Assume that conditions (i), (ii) and (iii) of theorem 3.1 in [11] and assumption A4 in lemma 1 are satisfied. Furthermore, assume that

C1) the joint density π leads to exchangeable random variables and for all k the marginal density of the location parameter γ1 satisfies $\pi_m(|\gamma_1| \ge t) \lesssim \exp(-q_1 t^2)$ for a given q1 > 0

C2) there are constants u1, u2, u3 > 0, possibly depending on f0, such that for any ϵ ≤ u3, $\pi(||\gamma - \gamma_0||_1 \le \epsilon) \ge u_1 \exp(-u_2 k_0 \log(1/\epsilon))$

Then the posterior rate of convergence relative to the L1 metric is $\epsilon_n = n^{-1/2} \log n$.

Lemma 3 is essentially a modification of theorem 3.1 in [11] to the proposed repulsive mixture model. Lemma 4 gives sufficient conditions for π to satisfy conditions C1 and C2 in lemma 3.

Lemma 4. Let π be defined as (2) and h be defined as either (3) or (4); then π satisfies condition C2 in lemma 3. Furthermore, if for a positive constant n1 the function g0 satisfies $g_0(|x| \ge t) \lesssim \exp(-n_1 t^2)$, π satisfies condition C1 in lemma 3.

As motivated above, when the number of mixture components is chosen to be unnecessarily large, it is appealing for the posterior distribution of the weights of the extra components to be concentrated near zero. Theorem 1 formalizes the rate of concentration with increasing sample size n. One of the main assumptions required in theorem 1 is that the posterior rate of convergence relative to the L1 metric is $\delta_n = n^{-1/2}(\log n)^q$ with q ≥ 0. We provided the contraction rate, under the proposed prior specification and univariate Gaussian kernel, in lemma 3. However, theorem 1 is a more general statement and it applies to multivariate mixture densities of any kernel.

Theorem 1.
Let assumptions B0−B5 be satisfied. Let π be defined as (2) and h be defined as either (3) or (4). If $\bar{\alpha} = \max(\alpha_1, \ldots, \alpha_k) < m/2$ and for positive constants r1, r2, r3 the function g satisfies $g(x) \le r_1 x^{r_2}$ for 0 ≤ x < r3, then

$$\lim_{M \to \infty} \limsup_{n \to \infty} E_0^n \left[ P\left\{ \min_{\sigma \in S_k} \left( \sum_{i=k_0+1}^{k} p_{\sigma(i)} \right) > M n^{-1/2} (\log n)^{q(1 + s(k_0,\alpha)/s_{r_2})} \right\} \right] = 0$$

with $s(k_0, \alpha) = k_0 - 1 + m k_0 + \bar{\alpha}(k - k_0)$, $s_{r_2} = r_2 + m/2 - \bar{\alpha}$ and $S_k$ the set of all possible permutations of {1, . . . , k}. Assumptions (B1−B5) can be found in the supplementary material.

Theorem 1 is a modification of theorem 1 in [1] to the proposed repulsive mixture model. Theorem 1 implies that the posterior expectation of the weights of the extra components is of order $O\!\left(n^{-1/2}(\log n)^{q(1 + s(k_0,\alpha)/s_{r_2})}\right)$. When g is defined as (5), parameters r1 and r2 can be chosen such that r1 = τ and r2 = ν. When the number of components is unknown, with only an upper bound known, the posterior rate of convergence is equivalent to the parametric rate n^{−1/2} [12]. In this case, the rate in theorem 1 is n^{−1/2} under usual priors or the repulsive prior. However, in our experience using usual priors, the sum of the extra components can be substantial in small to moderate sample sizes, and often has high variability. As we show in Section 3, for repulsive priors the sum of the extra component weights is close to zero and has small variance for small as well as large sample sizes. On the other hand, when an upper bound on the number of components is unknown, the posterior rate of concentration is $n^{-1/2}(\log n)^q$ with q > 0. In this case, according to theorem 1, using the proposed prior specification the logarithmic factor in theorem 1 of [1] can be improved.

2.4 Parameter calibration and posterior computation

The parameters involved in the repulsion function h are chosen such that a priori, with high probability, the clusters will be adequately separated.
Consider the case where φ is a location-scale kernel with location and scale parameters (γ, Σ) and is symmetric about γ. Here, it is natural to relate the separation of two densities to the distance between their location parameters. The following definition introduces the concept of separation level between two densities.

Definition 2. Let f1 and f2 be two densities having location-scale parameters (γ1, Σ1) and (γ2, Σ2) respectively, with γ1, γ2 ∈ Γ and Σ1, Σ2 ∈ Ω. Given a metric t(·, ·), a positive constant c and a function ω : Ω × Ω → ℜ+, f1 and f2 are c-separated if

$$t(\gamma_1, \gamma_2) \ge c\, \omega(\Sigma_1, \Sigma_2)^{1/2}$$

Definition 2 is in the spirit of [2] but generalized to any symmetric location-scale kernel. A mixture of k densities is c-separated if all pairs of densities are c-separated.

Figure 2: (I) Student's t density; (II) two-component mixtures of poorly (solid) and well separated (dot-dash) Gaussian densities, referred to as (IIa, IIb); (III) mixtures of poorly (dot-dash) and well separated (solid) Gaussian and Pearson densities, referred to as (IIIa, IIIb); (IV) two-component mixture of two-dimensional non-spherical Gaussians.

The parameters of the repulsion function, (τ, ν), will be chosen such that, for an a priori chosen separation level c, definition 2 is satisfied with high probability. In practice, for a given pair (τ, ν), we estimate the probability of pairwise c-separation empirically by simulating N replicates of (γh, Σh) for each component h = 1, . . . , k from the prior. The appropriate values (τ, ν) are obtained by starting with small values, and increasing until the pre-specified pairwise c-separated probability is reached. In practice, only τ will be calibrated to reach a particular probability value. This is because ν controls the rate at which the density tends to zero as two components approach but not the separation level across them.
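The calibration procedure just described can be sketched as follows. This is a simplified illustration, not the authors' implementation: it assumes a univariate location parameter with g0 = N(0, 1), unit scales (so pairwise c-separation in Definition 2 reduces to a minimum pairwise distance of c), draws from the repulsive prior (2) with h as in (3) by rejection sampling (valid because h ≤ 1 when g is given by (5)), and doubles τ until the target separation probability is reached; all function names are our own.

```python
import numpy as np

def repulsive_prior_sample(k, tau, nu, rng):
    """Rejection sampler for (2) with h as in (3): propose atoms iid from
    g0 = N(0, 1) and accept with probability h(gamma), which lies in [0, 1]."""
    while True:
        gam = rng.normal(size=k)
        dists = [abs(gam[s] - gam[j]) for s in range(k) for j in range(s)]
        h = np.prod([np.exp(-tau * d ** (-float(nu))) for d in dists])
        if rng.random() < h:
            return gam

def calibrate_tau(k, c, target, nu=2, n_sims=100, seed=0):
    """Double tau until the estimated prior probability that all pairwise
    distances exceed c (c-separation with unit scales) reaches `target`."""
    rng = np.random.default_rng(seed)
    tau = 0.25
    while True:
        hits = 0
        for _ in range(n_sims):
            gam = repulsive_prior_sample(k, tau, nu, rng)
            if min(abs(gam[s] - gam[j]) for s in range(k) for j in range(s)) >= c:
                hits += 1
        if hits / n_sims >= target:
            return tau
        tau *= 2.0

tau_hat = calibrate_tau(k=2, c=1.0, target=0.8)
```

Since larger τ tilts the prior toward well separated atoms, the doubling loop terminates for any fixed c and target below one; in a real calibration one would use many more Monte Carlo replicates.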
In practice we have found that ν = 2 provides a good default value and we fix ν at this value in all our applications below. A possible issue with the proposed repulsive mixture prior is that the full conditionals are nonstandard, complicating posterior computation. To address this, we propose a data augmentation scheme, introducing auxiliary slice variables to facilitate sampling [13]. This algorithm is straightforward to implement and is efficient by MCMC standards. Further details can be found in the supplementary material. It will be interesting in future work to develop fast approximations to MCMC for implementation of repulsive mixture models, such as variational methods for approximating the full posterior and optimization methods for obtaining a maximum a posteriori estimate. The latter approach would provide an alternative to usual maximum likelihood estimation via the EM algorithm, which provides a penalty on components located close together.

3 Synthetic examples

Synthetic toy examples were considered to assess the performance of the repulsive prior in density estimation, classification and emptying the extra components. Figure 2 plots the true densities in the various synthetic cases that we considered. For each synthetic dataset, repulsive and non-repulsive mixture models were compared considering a fixed upper bound on the number of components; extra components should be assigned small probabilities and hence effectively excluded. The auxiliary variable sampler was run for 10,000 iterations with a burn-in of 5,000. The chain was thinned by keeping every 10th simulated draw. To overcome the label switching problem, the samples were post-processed following the algorithm of [14]. Details on parameters involved in the true densities and choice of prior distributions can be found in the supplementary material.
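The K-L divergences reported in Table 1 compare a fitted density to the truth via $KL(f_0, f) = \int f_0 \log(f_0/f)$. A Monte Carlo sketch of this quantity, shown here for two Gaussians where a closed form is available as a check (all names are our own illustration):

```python
import numpy as np

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mc_kl(f0_sampler, f0_pdf, f_pdf, n=200_000, seed=0):
    """Monte Carlo estimate of KL(f0, f) = E_{f0}[log f0(Y) - log f(Y)]."""
    y = f0_sampler(np.random.default_rng(seed), n)
    return float(np.mean(np.log(f0_pdf(y)) - np.log(f_pdf(y))))

# KL(N(0,1) || N(1,1)) has the closed form (m0 - m1)^2 / 2 = 0.5.
kl_hat = mc_kl(lambda rng, n: rng.normal(0.0, 1.0, n),
               lambda y: normal_pdf(y, 0.0, 1.0),
               lambda y: normal_pdf(y, 1.0, 1.0))
```

The same estimator applies with `mixture_density`-style pdfs in place of the Gaussians, which is how a divergence between a fitted mixture and the true mixture would be approximated.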
Table 1 shows summary statistics of the K-L divergence, the misclassification error and the sum of extra weights under repulsive and non-repulsive mixtures with six mixture components as the upper bound. Table 1 also shows the misclassification error resulting from hierarchical clustering [15]. In practice, observations drawn from the same mixture component were considered as belonging to the same category and for each dataset a similarity matrix was constructed. The misclassification error was established in terms of divergence between the true similarity matrix and the posterior similarity matrix. As shown in table 1, the K-L divergences under repulsive and non-repulsive mixtures become more similar as the sample size increases. For smaller sample sizes, the results are more similar when components are very well separated. Since a repulsive prior tends to discourage overlapping mixture components, a repulsive model might not estimate the density quite as accurately when a mixture of closely overlapping components is needed. However, as the sample size increases, the fitted density approaches the true density regardless of the degree of closeness among clusters. Again, though repulsive and non-repulsive mixtures perform similarly in estimating the true density, repulsive mixtures place considerably less probability on extra components, leading to more interpretable clusters. In terms of misclassification error, the repulsive model outperforms the other two approaches while, in most cases, the worst performance was obtained by the non-repulsive model. Potentially, one may favor fewer clusters, and hence possibly better separated clusters, by penalizing the introduction of new clusters more through modifying the precision in the Dirichlet prior for the weights; in the supplemental materials, we demonstrate that this cannot solve the problem.
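The similarity-matrix construction used for the misclassification error can be sketched as follows; comparing off-diagonal entries makes the measure invariant to relabeling of the clusters, which matters under label switching. Function names are our own.

```python
import numpy as np

def similarity_matrix(labels):
    """S[i, j] = 1 if observations i and j share a cluster label, else 0."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def misclassification_error(true_labels, posterior_label_draws):
    """Mean absolute divergence between the true similarity matrix and the
    posterior similarity matrix (averaged over MCMC label draws)."""
    s_true = similarity_matrix(true_labels)
    s_post = np.mean([similarity_matrix(z) for z in posterior_label_draws], axis=0)
    n = len(true_labels)
    off = ~np.eye(n, dtype=bool)  # compare off-diagonal entries only
    return float(np.mean(np.abs(s_true - s_post)[off]))
```

A draw that merely permutes the cluster labels incurs zero error, while a draw that merges two true clusters is penalized in proportion to the cross-cluster pairs it incorrectly links.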
Table 1: Mean and standard deviation of K-L divergence, misclassification error and sum of extra weights resulting from non-repulsive (N-R) and repulsive (R) mixtures with a maximum number of clusters equal to six under different synthetic data scenarios

                             n = 100                               n = 1000
                 I     IIa   IIb   IIIa  IIIb  IV      I     IIa   IIb   IIIa  IIIb  IV
K-L divergence
  N-R  mean    0·05  0·03  0·07  0·05  0·08  0·22    0·00  0·01  0·01  0·00  0·01  0·02
       s.d.    0·03  0·01  0·02  0·02  0·03  0·04    0·00  0·00  0·00  0·00  0·00  0·00
  R    mean    0·03  0·08  0·09  0·07  0·09  0·24    0·01  0·01  0·01  0·01  0·01  0·03
       s.d.    0·02  0·02  0·03  0·03  0·03  0·04    0·00  0·00  0·00  0·00  0·00  0·00
Misclassification
  HCT          0·12  0·11  0·41  0·12  0·78  0·21    0·45  0·42  0·14  0·42  0·09  0·20
  N-R  mean    0·68  0·26  0·06  0·17  0·05  0·13    0·65  0·24  0·03  0·14  0·02  0·19
       s.d.    0·09  0·10  0·05  0·09  0·06  0·05    0·11  0·08  0·04  0·08  0·03  0·02
  R    mean    0·06  0·09  0·00  0·05  0·00  0·09    0·05  0·08  0·00  0·03  0·00  0·18
       s.d.    0·05  0·04  0·02  0·03  0·01  0·03    0·05  0·02  0·02  0·03  0·01  0·01
Sum of extra weights
  N-R  mean    0·30  0·21  0·09  0·16  0·07  0·13    0·30  0·21  0·03  0·16  0·03  0·29
       s.d.    0·10  0·11  0·07  0·09  0·07  0·07    0·11  0·11  0·04  0·10  0·03  0·03
  R    mean    0·01  0·01  0·01  0·01  0·01  0·08    0·01  0·00  0·00  0·00  0·00  0·26
       s.d.    0·01  0·01  0·01  0·01  0·01  0·05    0·01  0·00  0·00  0·00  0·00  0·03

4 Real data

We assessed the clustering performance of the proposed method on a real dataset. This dataset consists of 150 observations from three different species of iris, each with four measurements. This dataset was previously analyzed by [16] and [17], who proposed new methods to estimate the number of clusters based on minimizing loss functions. They concluded the optimal number of clusters was two. This result did not agree with the number of species due to low separation in the data between two of the species. Such point estimates of the number of clusters do not provide a characterization of uncertainty in clustering, in contrast to Bayesian approaches. Repulsive and non-repulsive mixtures were fitted under different choices of upper bound on the number of components.
Since the data contains three true biological clusters, with two of these having similar distributions of the available features, we would expect the posterior to concentrate on two or three components. Posterior means and standard deviations of the three highest weights were (0·30, 0·23, 0·13) and (0·05, 0·04, 0·04) for non-repulsive and (0·60, 0·30, 0·04) and (0·04, 0·03, 0·02) for repulsive mixtures under six components. Clearly, repulsive priors lead to a posterior more concentrated on two components, and assign low probability to more than three components.

Figure 3: Posterior density of the total probability weight assigned to more than three components in the Iris data under a max of 6 or 10 components for non-repulsive (6: solid, 10: dash-dot) and repulsive (6: dash, 10: dot) mixtures.

Figure 3 shows the density of the total probability assigned to the extra components. This quantity was computed considering the number of species as the true number of clusters. According to figure 3, our repulsive prior specification leads to extra component weights very close to zero regardless of the upper bound on the number of components. The posterior uncertainty is also small. Non-repulsive mixtures assign large weight to extra components, with posterior uncertainty increasing considerably as the number of components increases.

Discussions

We have proposed a new repulsive mixture modeling framework, which should lead to substantially improved unsupervised learning (clustering) performance in general applications. A key aspect is soft penalization of components located close together to favor, without sharply enforcing, well separated clusters that should be more likely to correspond to the true missing labels. We have focused on Bayesian MCMC-based methods, but there are numerous interesting directions for ongoing research, including fast optimization-based approaches for learning mixture models with repulsive penalties.
Acknowledgments

This research was partially supported by grant 5R01-ES-017436-04 from the National Institute of Environmental Health Sciences (NIEHS) of the National Institutes of Health (NIH) and DARPA MSEE.

References

[1] J. Rousseau and K. Mengersen. Asymptotic Behaviour of the Posterior Distribution in Over-Fitted Models. Journal of the Royal Statistical Society B, 73:689–710, 2011.
[2] S. Dasgupta. Learning Mixtures of Gaussians. Proceedings of the 40th Annual Symposium on Foundations of Computer Science, pages 633–644, 1999.
[3] S. Dasgupta and L. Schulman. A Probabilistic Analysis of EM for Mixtures of Separated, Spherical Gaussians. The Journal of Machine Learning Research, 8:203–226, 2007.
[4] M. Stephens. Bayesian Analysis of Mixture Models with an Unknown Number of Components - An Alternative to Reversible Jump Methods. The Annals of Statistics, 28:40–74, 2000.
[5] H. Ishwaran and M. Zarepour. Dirichlet Prior Sieves in Finite Normal Mixtures. Statistica Sinica, 12:941–963, 2002.
[6] J. Sethuraman. A Constructive Definition of Dirichlet Priors. Statistica Sinica, 4:639–650, 1994.
[7] H. Ishwaran and L. F. James. Gibbs Sampling Methods for Stick-Breaking Priors. Journal of the American Statistical Association, 96:161–173, 2001.
[8] M. L. Huber and R. L. Wolpert. Likelihood-Based Inference for Matern Type-III Repulsive Point Processes. Advances in Applied Probability, 41:958–977, 2009.
[9] A. Lawson and A. Clark. Spatial Cluster Modeling. Chapman & Hall/CRC, London, UK, 2002.
[10] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer, 2008.
[11] C. Scricciolo. Posterior Rates of Convergence for Dirichlet Mixtures of Exponential Power Densities. Electronic Journal of Statistics, 5:270–308, 2011.
[12] H. Ishwaran, L. F. James, and J. Sun. Bayesian Model Selection in Finite Mixtures by Marginal Density Decompositions. Journal of the American Statistical Association, 96:1316–1332, 2001.
[13] P. Damien, J. Wakefield, and S. Walker. Gibbs Sampling for Bayesian Non-Conjugate and Hierarchical Models by Using Auxiliary Variables. Journal of the Royal Statistical Society B, 61:331–344, 1999.
[14] M. Stephens. Dealing with Label Switching in Mixture Models. Journal of the Royal Statistical Society B, 62:795–810, 2000.
[15] H. Locarek-Junge and C. Weihs. Classification as a Tool for Research. Springer, 2009.
[16] C. Sugar and G. James. Finding the Number of Clusters in a Data Set: An Information Theoretic Approach. Journal of the American Statistical Association, 98:750–763, 2003.
[17] J. Wang. Consistent Selection of the Number of Clusters via Cross-Validation. Biometrika, 97:893–904, 2010.
Topic-Partitioned Multinetwork Embeddings

Peter Krafft* (CSAIL, MIT, pkrafft@mit.edu), Juston Moore†, Bruce Desmarais‡, Hanna Wallach† (†Department of Computer Science, ‡Department of Political Science, University of Massachusetts Amherst; †{jmoore, wallach}@cs.umass.edu, ‡desmarais@polsci.umass.edu)

Abstract

We introduce a new Bayesian admixture model intended for exploratory analysis of communication networks—specifically, the discovery and visualization of topic-specific subnetworks in email data sets. Our model produces principled visualizations of email networks, i.e., visualizations that have precise mathematical interpretations in terms of our model and its relationship to the observed data. We validate our modeling assumptions by demonstrating that our model achieves better link prediction performance than three state-of-the-art network models and exhibits topic coherence comparable to that of latent Dirichlet allocation. We showcase our model's ability to discover and visualize topic-specific communication patterns using a new email data set: the New Hanover County email network. We provide an extensive analysis of these communication patterns, leading us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. Finally, we advocate for principled visualization as a primary objective in the development of new network models.

1 Introduction

The structures of organizational communication networks are critical to collaborative problem solving [1]. Although it is seldom possible for researchers to directly observe complete organizational communication networks, email data sets provide one means by which they can at least partially observe and reason about them.
As a result—and especially in light of their rich textual detail, existing infrastructure, and widespread usage—email data sets hold the potential to answer many important scientific and practical questions within the organizational and social sciences. While some questions may be answered by studying the structure of an email network as a whole, other, more nuanced, questions can only be answered at finer levels of granularity—specifically, by studying topic-specific subnetworks. For example, breaks in communication (or duplicated communication) about particular topics may indicate a need for some form of organizational restructuring. In order to facilitate the study of these kinds of questions, we present a new Bayesian admixture model intended for discovering and summarizing topic-specific communication subnetworks in email data sets. There are a number of probabilistic models that incorporate both network and text data. Although some of these models are specifically for email networks (e.g., McCallum et al.'s author–recipient–topic model [2]), most are intended for networks of documents, such as web pages and the links between them [3] or academic papers and their citations [4]. In contrast, an email network is more naturally viewed as a network of actors exchanging documents, i.e., actors are associated with nodes while documents are associated with edges. In other words, an email network defines a multinetwork in which there may be multiple edges (one per email) between any pair of actors. Perhaps more importantly, much of the recent work on modeling networks and text has focused on tasks such as predicting links or detecting communities.

*Work done at the University of Massachusetts Amherst

Figure 1: Our model partitions an observed email network (left) into topic-specific subnetworks (right) by associating each author–recipient edge in the observed network with a single topic.
Instead, we take a complementary approach and focus on exploratory analysis. Specifically, our goal is to discover and visualize topic-specific subnetworks. Rather than taking a two-stage approach in which subnetworks are discovered using one model and visualized using another, we present a single probabilistic model that partitions an observed email network into topic-specific subnetworks while simultaneously producing a visual representation of each subnetwork. If network modeling and visualization are undertaken separately, the resultant visualizations may not directly reflect the model and its relationship to the observed data. Rather, these visualizations provide a view of the model and the data seen through the lens of the visualization algorithm and its associated assumptions, so any conclusions drawn from such visualizations can be biased by artifacts of the visualization algorithm. Producing principled visualizations of networks, i.e., visualizations that have precise interpretations in terms of an associated network model and its relationship to the observed data, remains an open challenge in statistical network modeling [5]. Addressing this open challenge was a primary objective in the development of our new model. In order to discover and visualize topic-specific subnetworks, our model must associate each author– recipient edge in the observed email network with a topic, as shown in Figure 1. Our model draws upon ideas from latent Dirichlet allocation (LDA) [6] to identify a set of corpus-wide topics of communication, as well as the subset of topics that best describe each observed email. We model network structure using an approach similar to that of Hoff et al.’s latent space model (LSM) [7] so as to facilitate visualization. Given an observed network, LSM associates each actor in the network with a point in K-dimensional Euclidean space. For any pair of actors, the smaller the distance between their points, the more likely they are to interact. 
If K = 2 or K = 3, these interaction probabilities, collectively known as a “communication pattern”, can be directly visualized in 2- or 3-dimensional space via the locations of the actor-specific points. Our model extends this idea by associating a K-dimensional Euclidean space with each topic. Observed author–recipient edges are explicitly associated with topics via the K-dimensional topic-specific communication patterns. In the next section, we present the mathematical details of our new model and outline a corresponding inference algorithm. We then introduce a new email data set: the New Hanover County (NHC) email network. Although our model is intended for exploratory analysis, we test our modeling assumptions via three validation tasks. In Section 4.1, we show that our model achieves better link prediction performance than three state-of-the-art network models. We also demonstrate that our model is capable of inferring topics that are as coherent as those inferred using LDA. Together, these experiments indicate that our model is an appropriate model of network structure and that modeling this structure does not compromise topic quality. As a final validation experiment, we show that synthetic data generated using our model possesses similar network statistics to those of the NHC email network. In Section 4.4, we showcase our model’s ability to discover and visualize topic-specific communication patterns using the NHC network. We give an extensive analysis of these communication patterns and demonstrate that they provide accessible visualizations of email-based collaboration while possessing precise, meaningful interpretations within the mathematical framework of our model. These findings lead us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. Finally, we advocate for principled visualization as a primary objective in the development of new network models.
2 Topic-Partitioned Multinetwork Embeddings

In this section, we present our new probabilistic generative model (and associated inference algorithm) for communication networks. For concreteness, we frame our discussion of this model in terms of email data, although it is generally applicable to any similarly-structured communication data. The generative process and graphical model are provided in the supplementary materials. A single email, indexed by d, is represented by a set of tokens w(d) = {w(d) n }N (d) n=1 that comprise the text of that email, an integer a(d) ∈ {1, ..., A} indicating the identity of that email’s author, and a set of binary variables y(d) = {y(d) r }A r=1 indicating whether each of the A actors in the network is a recipient of that email. For simplicity, we assume that authors do not send emails to themselves (i.e., y(d) r = 0 if r = a(d)). Given a real-world email data set D = {w(d), a(d), y(d)}D d=1, our model permits inference of the topics expressed in the text of the emails, a set of topic-specific K-dimensional embeddings (i.e., points in K-dimensional Euclidean space) of the A actors in the network, and a partition of the full communication network into a set of topic-specific subnetworks. As in LDA [6], a “topic” t is characterized by a discrete distribution over V word types with probability vector φ(t). A symmetric Dirichlet prior with concentration parameter β is placed over Φ = {φ(1), ..., φ(T )}. To capture the relationship between the topics expressed in an email and that email’s recipients, each topic t is also associated with a “communication pattern”: an A × A matrix of probabilities P (t). Given an email about topic t, authored by actor a, element p(t) ar is the probability of actor a including actor r as a recipient of that email.
Inspired by LSM [7], each communication pattern P (t) is represented implicitly via a set of A points in K-dimensional Euclidean space S(t) = {s(t) a }A a=1 and a scalar bias term b(t) such that p(t) ar = p(t) ra = σ(b(t) −∥s(t) a −s(t) r ∥) with s(t) a ∼N(0, σ2 1I) and b(t) ∼N(µ, σ2 2).1 If K = 2 or K = 3, this representation enables each topic-specific communication pattern to be visualized in 2- or 3-dimensional space via the locations of the points associated with the A actors. It is worth noting that the dimensions of each K-dimensional space have no inherent meaning. In isolation, each point s(t) a conveys no information; however, the distance between any two points has a precise and meaningful interpretation in the generative process. Specifically, the recipients of any email associated with topic t are more likely to be those actors near to the email’s author in the Euclidean space corresponding to that topic. Each email, indexed by d, has a discrete distribution over topics θ(d). A symmetric Dirichlet prior with concentration parameter α is placed over Θ = {θ(1), ..., θ(D)}. Each token w(d) n is associated with a topic assignment z(d) n , such that z(d) n ∼θ(d) and w(d) n ∼φ(t) for z(d) n = t. Our model does not include a distribution over authors; the generative process is conditioned upon their identities. The email-specific binary variables y(d) = {y(d) r }A r=1 indicate the recipients of email d and thus the presence (or absence) of email-specific edges from author a(d) to each of the A −1 other actors. Consequently, there may be multiple edges (one per email) between any pair of actors, and D defines a multinetwork over the entire set of actors. We assume that the complete multinetwork comprises T topic-specific subnetworks. In other words, each y(d) r is associated with some topic t and therefore with topic-specific communication pattern P (t) such that y(d) r ∼Bern(p(t) ar ) for a(d) =a. 
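The latent-space link function above can be made concrete in a few lines. Below is a minimal NumPy sketch (the function and argument names are ours, not from any released implementation): given one topic's actor embedding and scalar bias, it returns the probability that one actor includes another as a recipient, p_ar = σ(b − ‖s_a − s_r‖).

```python
import numpy as np

def edge_prob(s, b, a, r):
    """Probability that actor a includes actor r as a recipient, given one
    topic's embedding s (an A x K array of actor points) and scalar bias b:
    p_ar = sigma(b - ||s_a - s_r||). Symmetric in a and r by construction."""
    dist = np.linalg.norm(s[a] - s[r])
    return 1.0 / (1.0 + np.exp(-(b - dist)))
```

Actors whose points coincide interact with probability σ(b), and the probability decays toward zero as their distance grows, which is exactly why nearby points in the visualization read as likely correspondents.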
A natural way to associate each y(d) r with a topic would be to draw a topic assignment from θ(d) in a manner analogous to the generation of z(d) n ; however, as outlined by Blei and Jordan [8], this approach can result in the undesirable scenario in which one subset of topics is associated with tokens, while another (disjoint) subset is associated with edges. Additionally, models of annotated data that possess this exchangeable structure tend to exhibit poor generalization [3, 8]. A better approach, advocated by Blei and Jordan, is to draw a topic assignment for each y(d) r from the empirical distribution over topics defined by z(d). By definition, the set of topics associated with edges will therefore be a subset of the topics associated with tokens. One way of simulating this generative process is to associate each y(d) r with a position n = 1, . . . , max (1, N (d)) and therefore with the topic assignment z(d) n at that position2 by drawing a position assignment x(d) r ∼ U(1, . . . , max (1, N (d))) for each y(d) r . This indirect procedure ensures that y(d) r ∼ Bern(p(t) ar ) for a(d) = a, x(d) r = n, and z(d) n = t, as desired.
1 The function σ(·) is the logistic function, while the function ∥·∥ is the l2-norm.
2 Emails that do not contain any text (i.e., N (d) = 0) convey information about the frequencies of communication between their authors and recipients. As a result, we do not omit such emails from D; instead, we augment each one with a single, “dummy” topic assignment z(d) 1 for which there is no associated token w(d) 1 .

2.1 Inference

For real-world data D = {w(d), a(d), y(d)}D d=1, the tokens W = {w(d)}D d=1, authors A = {a(d)}D d=1, and recipients Y = {y(d)}D d=1 are observed, while Φ, Θ, S = {S(t)}T t=1, B = {b(t)}T t=1, Z = {z(d)}D d=1, and X = {x(d)}D d=1 are unobserved.
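The indirect position-assignment procedure described above is simple to simulate. A minimal sketch (names are illustrative), assuming z is an email's list of token-level topic assignments, or the single "dummy" assignment for an email with no text:

```python
import random

def edge_topic(z, rng=random):
    """Associate one recipient variable y_r with a topic: draw a position
    x_r uniformly over the email's max(1, N) token positions and inherit
    that position's topic assignment z_n. For an empty email, z is assumed
    to hold the single "dummy" assignment z_1."""
    n = rng.randrange(max(1, len(z)))  # x_r ~ Uniform(1, ..., max(1, N))
    return z[n]
```

Because the edge's topic is always inherited from a token's assignment, the topics used for edges are by construction a subset of the topics used for tokens.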
Dirichlet–multinomial conjugacy allows Φ and Θ to be marginalized out [9], while typical values for the remaining unobserved variables can be sampled from their joint posterior distribution using Markov chain Monte Carlo methods. In this section, we outline a Metropolis-within-Gibbs sampling algorithm that operates by sequentially resampling the value of each latent variable (i.e., s(t) a , b(t), z(d) n , or x(d) r ) from its conditional posterior. Since z(d) n is a discrete random variable, new values may be sampled directly using

$$P(z_n^{(d)} = t \mid w_n^{(d)} = v, \mathcal{W}_{\backslash d,n}, \mathcal{A}, \mathcal{Y}, \mathcal{S}, \mathcal{B}, \mathcal{Z}_{\backslash d,n}, \mathcal{X}, \alpha, \beta) \propto \Big(N^{(t|d)}_{\backslash d,n} + \frac{\alpha}{T}\Big)\, \frac{N^{(v|t)}_{\backslash d,n} + \frac{\beta}{V}}{N^{(t)}_{\backslash d,n} + \beta} \times \begin{cases} \prod_{r : x_r^{(d)} = n} \big(p^{(t)}_{a^{(d)} r}\big)^{y_r^{(d)}} \big(1 - p^{(t)}_{a^{(d)} r}\big)^{1 - y_r^{(d)}} & \text{for } N^{(d)} > 0, \\ \prod_{r : r \neq a^{(d)}} \big(p^{(t)}_{a^{(d)} r}\big)^{y_r^{(d)}} \big(1 - p^{(t)}_{a^{(d)} r}\big)^{1 - y_r^{(d)}} & \text{otherwise}, \end{cases}$$

where subscript “\d, n” denotes a quantity excluding data from position n in email d. Count N (t) is the total number of tokens in W assigned to topic t by Z, of which N (v|t) are of type v and N (t|d) belong to email d. New values for discrete random variable x(d) r may be sampled directly using

$$P(x_r^{(d)} = n \mid \mathcal{A}, \mathcal{Y}, \mathcal{S}, \mathcal{B}, z_n^{(d)} = t, \mathcal{Z}_{\backslash d,n}) \propto \big(p^{(t)}_{a^{(d)} r}\big)^{y_r^{(d)}} \big(1 - p^{(t)}_{a^{(d)} r}\big)^{1 - y_r^{(d)}}.$$

New values for continuous random variables s(t) a and b(t) cannot be sampled directly from their conditional posteriors, but may instead be obtained using the Metropolis–Hastings algorithm. With a non-informative prior over s(t) a (i.e., s(t) a ∼ N(0, ∞)), the conditional posterior over s(t) a is

$$P(s_a^{(t)} \mid \mathcal{A}, \mathcal{Y}, \mathcal{S}^{(t)}_{\backslash a}, b^{(t)}, \mathcal{Z}, \mathcal{X}) \propto \prod_{r : r \neq a} \big(p^{(t)}_{ar}\big)^{N^{(1|a,r,t)} + N^{(1|r,a,t)}} \big(1 - p^{(t)}_{ar}\big)^{N^{(0|a,r,t)} + N^{(0|r,a,t)}},$$

where count $N^{(1|a,r,t)} = \sum_{d=1}^{D} \mathbf{1}(a^{(d)} = a)\, \mathbf{1}(y_r^{(d)} = 1) \sum_{n=1}^{N^{(d)}} \mathbf{1}(x_r^{(d)} = n)\, \mathbf{1}(z_n^{(d)} = t)$.3 Counts N (1|r,a,t), N (0|a,r,t), and N (0|r,a,t) are defined similarly.
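The count-based factor of the conditional posterior over z is plain arithmetic once the counts are in hand. A hedged sketch with hypothetical argument names (N_td, N_vt, and N_t correspond to N^(t|d), N^(v|t), and N^(t), all excluding the token being resampled; edge_lik stands for the Bernoulli product over any edges whose position assignments point at this token):

```python
def topic_weight(N_td, N_vt, N_t, alpha, beta, T, V, edge_lik=1.0):
    """Unnormalized conditional posterior weight for assigning topic t to
    one token: (N^(t|d) + alpha/T) * (N^(v|t) + beta/V) / (N^(t) + beta),
    times the edge likelihood. Sampling z_n = t proportionally to these
    weights over t = 1, ..., T gives one Gibbs update."""
    return (N_td + alpha / T) * (N_vt + beta / V) / (N_t + beta) * edge_lik
```

Computing this weight for every topic and normalizing yields the categorical distribution from which the new assignment is drawn.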
Likewise, with an improper, noninformative prior over b(t) (i.e., b(t) ∼ N(0, ∞)), the conditional posterior over b(t) is

$$P(b^{(t)} \mid \mathcal{A}, \mathcal{Y}, \mathcal{S}^{(t)}, \mathcal{Z}, \mathcal{X}) \propto \prod_{a=1}^{A} \prod_{r : r < a} \big(p^{(t)}_{ar}\big)^{N^{(1|a,r,t)} + N^{(1|r,a,t)}} \big(1 - p^{(t)}_{ar}\big)^{N^{(0|a,r,t)} + N^{(0|r,a,t)}}.$$

3 Data

Due to a variety of factors involving personal privacy concerns and the ownership of content by email service providers, academic researchers rarely have access to organizational email data. For example, the Enron data set [10]—arguably the most widely studied email data set—was only released because of a court order. The public record is an alternative source of organizational email data. Public record data sets are widely available and can be continually updated, yet remain relatively untapped by the academic community. We therefore introduce and analyze a new public record email data set relevant to researchers in the organizational and social sciences as well as machine learning researchers. This data set consists of emails between the managers of the departments that constitute the executive arm of government at the county level for New Hanover County, North Carolina. In this semi-autonomous local government, county managers act as executives, and the individual departments are synonymous with the individual departments and agencies in, for instance, the U.S. federal government. Therefore, not only does this email data set offer a view into the communication patterns of the managers of New Hanover County, but analyses of it also serve as case studies in modeling inter-agency communications in the U.S. federal government administration.
3 The function 1(·) evaluates to one if its argument evaluates to true and evaluates to zero otherwise.
Figure 2: Average link prediction performance for (a) the NHC email network and (b) the Enron data set. For MMSB and LSM, we only report results obtained using the best-performing hyperparameter values. Average topic coherence scores for (c) the NHC email network and (d) the Enron data set.
The New Hanover County (NHC) email network comprises the complete inboxes and outboxes of 30 department managers from the month of February, 2011. In total, there are 30,909 emails, of which 8,097 were authored by managers. Of these 8,097 emails, 1,739 were sent to other managers (via the “To” or “Cc” fields), excluding any emails sent from a manager to him- or herself only. For our experiments, we used these 1,739 emails between 30 actors. To verify that our model is applicable beyond the NHC email network, we also performed two validation experiments using the Enron email data set [10]. For this data set, we treated each unique @enron email address as an actor and used only those emails between the 50 most active actors (determined by the total numbers of emails sent and received). Emails that were not sent to at least one other active actor (via the “To” or “Cc” fields) were discarded.
To avoid duplicate emails, we retained only those emails from “sent mail”, “sent”, or “sent items” folders. These steps resulted in a total of 8,322 emails involving 50 actors. Both data sets were preprocessed to concatenate the text of subject lines and message bodies and to remove any stop words, URLs, quoted text, and (where possible) signatures.

4 Experiments

Our model is primarily intended as an exploratory analysis tool for organizational communication networks. In this section, we use the NHC email network to showcase our model’s ability to discover and visualize topic-specific communication subnetworks. First, however, we test our underlying modeling assumptions via three quantitative validation tasks, as recommended by Schrodt [11].

4.1 Link Prediction

In order to gauge our model’s predictive performance, we evaluated its ability to predict the recipients of “test” emails, from either the NHC email network or the Enron data set, conditioned on the text of those emails and the identities of their authors. For each test email d, the binary variables indicating the recipients of that email, i.e., {y(d) r }A r=1, were treated as unobserved. Typical values for these variables were sampled from their joint posterior distribution and compared to the true values to yield an F1 score. We formed a test split of each data set by randomly selecting emails with probability 0.1. For each data set, we averaged the F1 scores over five random test splits. We compared our model’s performance with that of two baselines and three existing network models, thereby situating it within the existing literature. Given a test email authored by actor a, our simplest baseline naïvely predicts that actor a will include actor r as a recipient of that email with probability equal to the number of non-test emails sent from actor a to actor r divided by the total number of non-test emails sent by actor a.
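The simplest baseline amounts to a frequency ratio over the training (non-test) emails. A minimal sketch, assuming emails are represented as dicts with hypothetical 'author' and 'recipients' fields:

```python
def baseline_prob(train_emails, a, r):
    """Naive baseline: the fraction of non-test emails authored by actor a
    that list actor r as a recipient. Returns 0 if a sent no such emails."""
    sent = [e for e in train_emails if e['author'] == a]
    if not sent:
        return 0.0
    return sum(r in e['recipients'] for e in sent) / len(sent)
```

This ignores email content entirely, which is what makes it a useful floor for the text-aware models.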
Our second baseline is a variant of our model in which each topic-specific communication pattern P (t) is represented explicitly via A(A + 1) / 2 probabilities drawn from a symmetric Beta prior with concentration parameter γ. Comparing our model to this variant enables us to validate our assumption that topic-specific communication patterns can indeed be accurately represented by a set of A points (one per actor) in K-dimensional Euclidean space. We also compared our model’s performance to that of three existing network models: a variant of Erosheva et al.’s model for analyzing scientific publications [4], LSM [7], and the 5 mixed-membership stochastic blockmodel (MMSB) [12]. Erosheva et al.’s model can be viewed as a variant of our model in which the topic assignment for each y(d) r is drawn from θ(d) instead of the empirical distribution over topics defined by z(d). Like our second baseline, each topic-specific communication pattern is represented explicitly via probabilities drawn from a symmetric Beta prior with concentration parameter γ; however, unlike this baseline, each one is represented using A probabilities such that p(t) ar = p(t) r . LSM can be viewed as a network-only variant of our model in which text is not modeled. As a result, there are no topics and a single communication pattern P . This pattern is represented implicitly via a set of A actor-specific points in K-dimensional Euclidean space. Finally, MMSB is a widely-used model for mixed-membership community discovery in networks. For our model and all its variants, typical values for {y(d) r }A r=1 can be sampled from their joint posterior distribution using an appropriately-modified version of the Metropolis-within-Gibbs algorithm in Section 2.1. In all our experiments, we ran this algorithm for 40,000–50,000 iterations. 
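Within this Metropolis-within-Gibbs sampler, the continuous latent variables (the actor points and biases) are updated with Metropolis steps. A minimal illustrative sketch, not the authors' code: log_post stands for the relevant unnormalized log conditional posterior, and the symmetric Gaussian proposal uses the shrinking covariance schedule max(1, 100/i)·I described in the experimental setup.

```python
import numpy as np

def mh_step(rng, x, log_post, i):
    """One Metropolis update for a continuous latent variable (e.g. an actor
    point s_a or bias b) at sampler iteration i. Proposal covariance is
    max(1, 100/i) * I, so early iterations take larger steps."""
    std = np.sqrt(max(1.0, 100.0 / i))
    proposal = x + std * rng.standard_normal(np.shape(x))
    # Symmetric proposal: the acceptance ratio reduces to the posterior ratio.
    if np.log(rng.uniform()) < log_post(proposal) - log_post(x):
        return proposal
    return x
```

Shrinking the proposal over time encourages broad exploration early on and finer local moves once the chain has settled.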
On iteration i, we defined each proposal distribution to be a Gaussian distribution centered on the value from iteration i−1 with covariance matrix max (1, 100 / i) I, thereby resulting in larger covariances for earlier iterations. Beta–binomial conjugacy allows the elements of P (t) to be marginalized out in both our second baseline and Erosheva et al.’s model. For MMSB, typical values can be sampled using a modified version of Chang’s Gibbs sampling algorithm [13]. We ran this algorithm for 5,000 iterations. For all models involving topics, we set concentration parameter α to 1 for the NHC network and 2 for the Enron data set. For both data sets, we set concentration parameter β to 0.01V.4 We varied the number of topics from 1 to 200. In order to facilitate visualization, we used 2-dimensional Euclidean spaces for our model. For LSM, however, we varied the dimensionality of the Euclidean space from 1 to 200. We report only those results obtained using the best-performing dimensionality. For our second baseline and Erosheva et al.’s model, we set concentration parameter γ to 0.02. For MMSB, we performed a grid search over all hyperparameter values and the number of blocks and, as with LSM, report only those results obtained using the best-performing values.5 F1 scores, averaged over five random test splits of each data set, are shown in Figure 2. Although our model is intended for exploratory analysis, it achieves better link prediction performance than the other models. Furthermore, the fact that our model outperforms our second baseline and Erosheva et al.’s model validates our assumption that topic-specific communication patterns can indeed be accurately represented by a set of A actor-specific points in 2-dimensional Euclidean space.

4.2 Topic Coherence

When evaluating unsupervised topic models, topic coherence metrics [14, 15] are often used as a proxy for subjective evaluation of semantic coherence.
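One widely used such metric, Mimno et al.'s coherence [14], scores a topic's most probable words by document co-occurrence: it sums log((D(v_m, v_l) + 1) / D(v_l)) over ordered word pairs, where D(·) counts the documents containing all of its arguments. A minimal sketch from this definition (assuming documents are given as sets of word types and every top word occurs in at least one document):

```python
import math

def topic_coherence(top_words, docs):
    """Mimno et al.-style coherence for one topic: for each pair (v_m, v_l)
    with l < m among the topic's top words, add
    log((D(v_m, v_l) + 1) / D(v_l)), where D is co-document frequency."""
    def D(*words):
        return sum(all(w in doc for w in words) for doc in docs)
    score = 0.0
    for m in range(1, len(top_words)):
        for l in range(m):
            score += math.log((D(top_words[m], top_words[l]) + 1) / D(top_words[l]))
    return score
```

Higher (less negative) scores indicate that a topic's top words tend to appear together in the same documents.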
In order to demonstrate that incorporating network data does not impair our model’s ability to model text, we compared the coherence of topics inferred using our model with the coherence of topics inferred using LDA, our second baseline, and Erosheva et al.’s model. For each model, we varied the number of topics from 1 to 200 and drew five samples from the joint posterior distribution over the unobserved random variables in that model. We evaluated the topics resulting from each sample using Mimno et al.’s coherence metric [14]. Topic coherence, averaged over the five samples, is shown in Figure 2. Our model achieves coherence comparable to that of LDA. This result, when combined with the results in Section 4.1, demonstrates that our model can achieve state-of-the-art predictive performance while producing coherent topics.

4.3 Posterior Predictive Checks

We used posterior predictive checking to assess the extent to which our model is a “good fit” for the NHC email network [16, 17]. Specifically, we defined four network statistics (i.e., four discrepancy functions) that summarize meaningful aspects of the NHC network: generalized graph transitivity, the dyad intensity distribution, the vertex degree distribution, and the geodesic distance distribution.6
4 These values were obtained by slice sampling typical values for the concentration parameters in LDA. They are consistent with the concentration parameter values used in previous work [9].
5 These values correspond to a Dir(0.1, . . . , 0.1) prior over block memberships, a Beta(0.1, 0.1) prior over diagonal entries of the blockmodel, a Beta(0.01, 0.01) prior over off-diagonal entries, and 30 blocks.
6 These statistics are defined in the supplementary materials.
We then generated 1,000 synthetic networks from the posterior predictive distribution implied by our
model and the NHC network. We applied each discrepancy function to each synthetic network to yield four distributions over the values of the four network statistics. If our model is a “good fit” for the NHC network, these distributions should be centered around the values of the corresponding discrepancy functions when computed using the observed NHC network. As shown in Figure 3, our model generates synthetic networks with dyad intensity, vertex degree, and geodesic distance distributions that are very similar to those of the NHC network. The distribution over synthetic graph transitivity values is not centered around the observed graph transitivity, but the observed transitivity is not sufficiently far into the tail of the distribution to warrant reparameterization of our model.
Figure 3: Four posterior predictive checks of our model using the NHC email network and 100 topics: (a) a histogram of the graph transitivity of the synthetic networks, with the graph transitivity of the NHC email network indicated by a vertical line; (b) a quantile–quantile plot comparing the distribution of dyadic intensities in the synthetic networks to that of the observed network; (c) a box plot indicating the sampled degree of each manager in the synthetic networks, with managers sorted from highest to lowest observed degree and their observed degrees indicated by a line; and (d) a quantile–quantile plot comparing the observed and synthetic geodesic distance distributions.
4.4 Exploratory Analysis

In order to demonstrate our model’s novel ability to discover and visualize topic-specific communication patterns, we performed an exploratory analysis of four such patterns inferred from the NHC email network using our model. These patterns are visualized in Figure 4. Each pattern is represented implicitly via a single set of A points in 2-dimensional Euclidean space drawn from their joint posterior distribution. The recipients of any email associated with topic t are more likely to be those actors near to the email’s author in the Euclidean space corresponding to that topic. We selected the patterns in Figure 4 so as to highlight the types of insights that can be obtained using our model. Although many structural properties may be of interest, we focus on modularity and assortativity. For each topic-specific communication pattern, we examined whether there are active, disconnected components in that topic’s Euclidean space (i.e., high modularity). The presence of such components indicates that there are groups of actors who engage in within- but not between-group communication about that topic. We also used a combination of node proximity and node coloration to determine whether there is more communication between departments that belong to the same “division” in the New Hanover County government organizational chart than between departments within different divisions (i.e., assortativity). In Figure 4, we show one topic that exhibits strong modularity and little assortativity (the “Public Signage” topic), one topic that exhibits strong assortativity and little modularity (the “Broadcast Messages” topic), and one topic that exhibits both strong assortativity and strong modularity (the “Meeting Scheduling” topic). The “Public Relations” topic, which includes communication with news agencies, is mostly dominated by a cluster involving many departments.
Finally, the “Meeting Scheduling” topic displays hierarchical structure, with two assistant county managers located at the centers of groups that correspond to their divisions. Exploratory analysis of communication patterns is a powerful tool for understanding organizational communication networks. For example, examining assortativity can reveal whether actual communication patterns resemble official organizational structures. Similarly, if a communication pattern exhibits modularity, each disconnected component may benefit from organizational efforts to facilitate inter-component communication. Finally, structural properties other than assortativity and modularity may also yield scientific or practical insights, depending on organizational needs.
[Figure 4 plots omitted: four panels titled “Public Signage”, “Broadcast Messages”, “Public Relations”, and “Meeting Scheduling”, each headed by that topic’s most probable words (change, signs, sign, process, ordinance; fw, fyi, bulletin, summary, week, legislative; city, breakdown, information, give; meeting, march, board, agenda, week); plotted point coordinates and actor acronyms omitted.]
[Figure 4 legend omitted: manager acronyms (AM, BG, CE, CA, CC, CM, DS, EL, EM, EG, EV, FN, FS, HL, HR, IT, LB, MS, PG, PI, PS, PM, RD, RM, SF, SS, TX, VS, YS) map to New Hanover County department names, from Assistant County Manager through Youth Empowerment Services.]
Figure 4: Four topic-specific communication patterns inferred from the NHC email network. Each pattern is labeled with a human-selected name for the corresponding topic, along with that topic’s most probable words in order of decreasing probability. The size of each manager’s acronym in topic t’s pattern (given by 0.45 + 1.25 √(d(t) a / maxa d(t) a ), where d(t) a is the degree of actor a in that subnetwork) indicates how often that manager communicates about that topic. Managers’ acronyms are colored according to their respective division in the New Hanover County organizational chart. The acronym “AM” appears twice in all plots because there are two assistant county managers.

5 Conclusions

We introduced a new Bayesian admixture model for the discovery and visualization of topic-specific communication subnetworks. Although our model is intended for exploratory analysis, the validation experiments described in Sections 4.1 and 4.2 demonstrate that our model can achieve state-of-the-art predictive performance while exhibiting topic coherence comparable to that of LDA. To showcase our model’s ability to discover and visualize topic-specific communication patterns, we introduced a new data set (the NHC email network) and analyzed four such patterns inferred from this data set using our model. Via this analysis, we are able to examine the extent to which actual communication patterns resemble official organizational structures and identify groups of managers who engage in within- but not between-group communication about certain topics.
Together, these predictive and exploratory analyses lead us to recommend our model for any exploratory analysis of email networks or other similarly-structured communication data. Finally, our model is capable of producing principled visualizations of email networks, i.e., visualizations that have precise mathematical interpretations in terms of this model and its relationship to the observed data. We advocate for principled visualization as a primary objective in the development of new network models.

Acknowledgments

This work was supported in part by the Center for Intelligent Information Retrieval and in part by the NSF GRFP under grant #1122374. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors.

References

[1] W. Mason and D.J. Watts. Collaborative learning in networks. Proceedings of the National Academy of Sciences, 109(3):764–769, 2012.
[2] A. McCallum, A. Corrada-Emmanuel, and X. Wang. Topic and role discovery in social networks. In Proceedings of the International Joint Conference on Artificial Intelligence, 2005.
[3] J. Chang and D.M. Blei. Relational topic models for document networks. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, 2009.
[4] E. Erosheva, S. Fienberg, and J. Lafferty. Mixed-membership models of scientific publications. Proceedings of the National Academy of Sciences, 101(Suppl. 1), 2004.
[5] S.E. Fienberg. A brief history of statistical models for network analysis and open challenges. Journal of Computational and Graphical Statistics, 22, 2012.
[6] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[7] P.D. Hoff, A.E. Raftery, and M.S. Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[8] D.M. Blei and M.I. Jordan.
Modeling annotated data. In Proceedings of the Twenty-Sixth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 127–134, 2003.
[9] T.L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(Suppl. 1), 2004.
[10] B. Klimt and Y. Yang. Introducing the Enron corpus. In Proceedings of the First Conference on Email and Anti-Spam, 2004.
[11] P.A. Schrodt. Seven deadly sins of contemporary quantitative political analysis. In Proceedings of the Annual American Political Science Association Meeting and Exhibition, 2010.
[12] E.M. Airoldi, D.M. Blei, S.E. Fienberg, and E.P. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981–2014, 2008.
[13] J. Chang. Uncovering, Understanding, and Predicting Links. PhD thesis, Princeton University, 2011.
[14] D. Mimno, H.M. Wallach, E.T.M. Leenders, and A. McCallum. Optimizing semantic coherence in topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2011.
[15] D. Newman, J.H. Lau, K. Grieser, and T. Baldwin. Automatic evaluation of topic coherence. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 100–108, 2010.
[16] D.R. Hunter, M.S. Handcock, C.T. Butts, S.M. Goodreau, and M. Morris. ergm: A package to fit, simulate and diagnose exponential-family models for networks. Journal of Statistical Software, 24(3):1–29, 2008.
[17] D. Mimno and D.M. Blei. Bayesian checking for topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 227–237, 2011.
Forward-Backward Activation Algorithm for Hierarchical Hidden Markov Models

Kei Wakabayashi, Faculty of Library, Information and Media Science, University of Tsukuba, Japan, kwakaba@slis.tsukuba.ac.jp
Takao Miura, Department of Engineering, Hosei University, Japan, miurat@hosei.ac.jp

Abstract

Hierarchical Hidden Markov Models (HHMMs) are sophisticated stochastic models that enable us to capture a hierarchical context characterization of sequence data. However, existing HHMM parameter estimation methods require large computations of time complexity O(TN^{2D}) at least for model inference, where D is the depth of the hierarchy, N is the number of states in each level, and T is the sequence length. In this paper, we propose a new inference method for HHMMs whose time complexity is O(TN^{D+1}). A key idea of our algorithm is the application of the forward-backward algorithm to state activation probabilities. The notion of a state activation, which offers a simple formalization of the hierarchical transition behavior of HHMMs, enables us to conduct model inference efficiently. We present experiments demonstrating that our proposed method estimates HHMM parameters more efficiently than existing methods such as the flattening method and the Gibbs sampling method.

1 Introduction

Latent structure analysis of sequence data is an important technique for many applications such as speech recognition, bioinformatics, and natural language processing. Hidden Markov Models (HMMs) play a key role in solving these problems. HMMs assume a single Markov chain of hidden states as the latent structure of sequence data. Because of this simple assumption, HMMs tend to capture only local context patterns of sequence data. Hierarchical Hidden Markov Models (HHMMs) are stochastic models which assume hierarchical Markov chains of hidden states as the latent structure of sequence data [3].
HHMMs have a hierarchical state transition mechanism that yields the capability of capturing global and local sequence patterns at various granularities. By their nature, HHMMs are applicable to problems of many kinds, including handwritten letter recognition [3], information extraction from documents [11], musical pitch structure modeling [12], video structure modeling [13], and human activity modeling [8, 6]. For conventional HMMs, we can conduct unsupervised learning efficiently using the forward-backward algorithm, which is a kind of dynamic programming [9]. In situations where few or no supervised data are available, the existence of an efficient unsupervised learning algorithm is a salient advantage of using HMMs. The unsupervised learning of HHMMs is an important technique, as it is for HMMs. In this paper, we discuss unsupervised learning techniques for HHMMs. We introduce a key notion, the activation probability, to formalize the hierarchical transition mechanism naturally. Using this notion, we propose a new exact inference algorithm with lower time complexity than existing methods. The remainder of the paper is organized as follows. In section 2, we overview HHMMs. In section 3, we survey HHMM parameter estimation techniques proposed to date. In section 4, we introduce our parameter estimation algorithm. Section 5 presents experiments to show the effectiveness of our algorithm. We conclude our discussion in section 6.

Figure 1: (left) Dynamic Bayesian network of the HHMM. (top-right) Tree representation of the HHMM state space. (bottom-right) State identification by the absolute path of the tree.

2 Hierarchical Hidden Markov Models

Let O = {O_1, ..., O_t, ..., O_T} be a sequence of observations in which subscript t denotes the time in the sequence. We designate time as an integer index of observation numbered from the beginning of the sequence.
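The flat-HMM forward recursion referred to above, the O(TN^2) dynamic program that later sections build on, can be sketched in a few lines. The parameters (pi, A, B) and observation sequence below are an invented toy example, not taken from the paper:

```python
# Minimal forward pass for a flat (non-hierarchical) HMM: computes
# alpha_t(i) = p(O_1:t, Q_t = i) and returns the sequence likelihood p(O_1:T).
def hmm_forward(pi, A, B, obs):
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]     # initialization
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(N)) * B[i][o]
                 for i in range(N)]                      # recursion step
    return sum(alpha)

# Toy two-state model with a binary observation alphabet.
pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.4, 0.6]]    # A[i][j] = p(Q_t+1 = j | Q_t = i)
B  = [[0.9, 0.1], [0.2, 0.8]]    # B[i][v] = p(O_t = v | Q_t = i)
likelihood = hmm_forward(pi, A, B, [0, 1, 0])
```

The same T-step sweep over N states per step is what the paper generalizes to the hierarchical case.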
HHMMs define Q^d_t for 1 ≤ t ≤ T, 1 ≤ d ≤ D as a hidden state at time t and level d, where d = 1 represents the top level and d = D represents the bottom level. HHMMs also define binary variables F^d_t, called termination indicators. If F^d_t = 1, then the Markov chain of level d terminates at time t. In HHMMs, a state transition at level d is permitted only when the Markov chain of level d + 1 terminates, i.e., Q^d_t = Q^d_{t−1} if F^{d+1}_{t−1} = 0. A terminated Markov chain is initialized again at the next time step. Figure 1 (left) presents a Dynamic Bayesian Network (DBN) expression for an HHMM of hierarchical depth D = 3. The conditional probability distributions of Q, F and O are defined as follows [7].

p(Q^d_t = j | Q^d_{t−1} = i, F^{d+1}_{t−1} = b, F^d_{t−1} = f, Q^{1:d−1}_t = k) =
    δ(i, j)          if b = 0
    A^d_k(i, j)      if b = 1, f = 0
    π^d_k(j)         if b = 1, f = 1

p(F^d_t = 1 | Q^d_t = i, Q^{1:d−1}_t = k, F^{d+1}_t = b) =
    0                if b = 0
    A^d_k(i, end)    if b = 1

p(O_t = v | Q^{1:D}_t = k) = B_k(v)

We use the notation Q^{1:d−1}_t for the combination of states {Q^1_t, ..., Q^{d−1}_t}. Probabilities of the initialization and the state transition of Markov chains at level d depend on all higher states Q^{1:d−1}. A^d_k(i, j) is a model parameter: the transition probability at level d from state i to j when Q^{1:d−1}_t = k. A^d_k(i, end) denotes the termination probability that state i terminates the Markov chain at level d when Q^{1:d−1}_t = k. π^d_k(j) is the initial state probability of state j at level d when Q^{1:d−1}_t = k. B_k(v) is the output probability of observation v when Q^{1:D}_t = k. The state space of an HHMM is expressed as a tree structure [3]. Figure 1 (top-right) presents a tree expression of the state space of an HHMM with depth D = 3 and N = 3 states in each level. The level of the tree corresponds to the level of HHMM states. Each node at level d corresponds to a combination of states Q^{1:d}. Each node has N children because there are N possible states at each level.
The rectangles in the figure denote local HMMs in which nodes can transition among one another directly using the transition probability A. For the analysis described herein, we assume a balanced N-ary tree to simplify discussions of computational complexity. However, arbitrary state space trees do not change the substance of what follows. The behavior of the Markov chain at level d depends on the combination of all higher-up states Q^{1:d−1}, not only on the individual Q^d. In the tree structure, the absolute path, which corresponds to Q^{1:d}, is meaningful, rather than the relative path, which corresponds to Q^d. We refer to Q^{1:d} as Z^d and call it the absolute path state. Figure 1 (bottom-right) presents the absolute path state identification. The set of values taken by an absolute path state at level d, denoted by Ω^d, contains N^d elements in the balanced N-ary tree state space. We define a function parent(Z^d) to obtain the parent absolute path state of Z^d. Similarly, we define child(Z^d) to obtain the set of child absolute path states of Z^d, and sib(Z^d) = child(parent(Z^d)) to obtain the set of siblings of Z^d.

Table 1: Notation for HHMMs.

  D                   Depth of hierarchy
  N                   Number of states in each level
  Ω^d                 Set of values taken by the absolute path state at level d
  Z^d_t ∈ Ω^d         Absolute path state at time t and level d
  F^d_t ∈ {0, 1}      Termination indicator at time t and level d
  O_t ∈ {1, ..., V}   Observation at time t
  A_{dij}             State transition probability from state Z^d_t = i to Z^d_{t+1} = j at level d
  A_{diEnd}           Termination probability of the Markov chain at level d from state Z^d_t = i
  π_{di}              Initial state probability of state Z^d = i at level d
  B_{iv}              Output probability of observation v with Z^D = i

Table 1 presents the notation used for the HHMM description. We use the notation of the absolute path state Z^d rather than Q^d throughout the paper. Therefore, we define compatible notations for the model parameters.
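The balanced N-ary state space can be made concrete with a small sketch: absolute path states encoded as tuples, with helpers named after the parent/child/sib functions defined above (the tuple encoding itself is our own choice for illustration):

```python
from itertools import product

N = 3  # states per level, matching the example in Figure 1 (top-right)

def states(d):
    """Omega^d: all absolute path states at level d, as tuples of length d."""
    return list(product(range(N), repeat=d))

def parent(z):
    """Parent absolute path state: drop the last component."""
    return z[:-1]

def children(z):
    """All N child absolute path states of z."""
    return [z + (q,) for q in range(N)]

def siblings(z):
    """sib(z) = child(parent(z)); note this set includes z itself."""
    return children(parent(z))

omega2 = states(2)   # |Omega^d| = N^d, here 3^2 = 9 states
```

This makes the N^d growth of |Ω^d| with depth, central to the complexity analysis later, directly visible.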
Whereas the conventional notation π^d_k(j) denotes the initial state probability of Q^d = j when Q^{1:d−1} = k, we aggregate Q^d and Q^{1:d−1} into Q^{1:d} = Z^d and define π_{di} as the initial state probability of Z^d = i. Similarly, we define A_{dij} as the state transition probability from Z^d = i to j. Note that Σ_{i′∈sib(i)} π_{di′} = 1 and Σ_{j′∈{sib(i)∪End}} A_{dij′} = 1.

3 Existing Parameter Estimation Methods for HHMMs

The first work on HHMMs [3] proposed the generalized Baum-Welch algorithm. This algorithm is based on the inside-outside algorithm used for inference of probabilistic context-free grammars. It takes O(T^3) time, which is not practical for long sequence data. A more efficient approach is the flattening method [7]. The hierarchical state sequence can be reduced to a single sequence of bottom-level absolute path states {Z^D_1, ..., Z^D_T}. If we regard Z^D as a flat HMM state, then we can conduct the inference using the forward-backward algorithm with O(TN^{2D}) time complexity, since |Ω^D| = N^D. Notice that the flat state Z^D can transit to any other flat state, so we cannot apply efficient algorithms for HMMs with sparse transition matrices. In the flattening method, we must place a weak constraint on the HHMM parameters, called minimally self-referential (MinSR) [12], which forbids self-transitions at higher levels, i.e., A_{dii} = 0 for 1 ≤ d ≤ D−1. The MinSR constraint enables us to identify uniquely the path connecting two flat states; this property is necessary for estimating HHMM parameters with the flattening method. We also discuss a sampling approach as an alternative parameter estimation technique. Gibbs sampling is often used for parameter estimation of probabilistic models with latent variables [4]. We can estimate HMM parameters using a Gibbs sampler, which samples each hidden state iteratively. This method is applicable to inference of HHMMs in a straightforward manner on the flat HMM.
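The gap between the flattening method's cost and the proposed method's cost is easy to make concrete with a rough operation count (constants ignored; the concrete T, N, D values below are invented for illustration):

```python
def flat_cost(T, N, D):
    """Order of operations for forward-backward on the flattened HMM: O(T N^2D)."""
    return T * N ** (2 * D)

def activation_cost(T, N, D):
    """Order of operations for the forward-backward activation algorithm: O(T N^(D+1))."""
    return T * N ** (D + 1)

# The ratio flat/activation is N^(D-1), so the saving grows quickly with depth.
speedup = flat_cost(1000, 4, 4) // activation_cost(1000, 4, 4)
```

For T = 1000, N = 4, D = 4 this gives a 4^3 = 64-fold reduction in the dominant term, which is consistent in spirit with the empirical timing gap the paper reports between FFB and FBA.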
This straightforward approach, called the Direct Gibbs Sampler (DGS), takes O(TN^D) time for a single iteration. The convergence of the posterior distribution under DGS is known to be extremely slow for HMMs [10] because DGS ignores long-range time dependencies. Chib [2] introduced an alternative method, called the Forward-Backward Gibbs Sampler (FBS), which calculates forward probabilities in advance; FBS then samples hidden states from the end of the sequence using these forward probabilities. The FBS method requires more computation per iteration than DGS does, but it can bring the posterior of hidden states to its stationary distribution in fewer iterations [10]. Heller et al. [5] proposed Infinite Hierarchical Hidden Markov Models (IHHMMs), which can have an infinitely large depth by weakening the dependency between the states at different levels. They proposed an inference method for IHHMMs based on a blocked Gibbs sampler whose sampling unit is a state sequence from t = 1 to T at a single level. This inference takes only O(TD) time per iteration. In HHMMs, however, the states in each level are strongly dependent, so resampling a state at an intermediate level forces all lower states into a configuration with completely different behavior. Therefore, in terms of convergence speed, it is not practical to apply this Gibbs sampler to HHMMs.

4 Forward-Backward Activation Algorithm

In this section, we introduce a new parameter estimation algorithm for HHMMs, which theoretically has O(TN^{D+1}) time complexity. The basic idea of our algorithm is a decomposition of the flat transition probability distribution p(Z^D_{t+1} | Z^D_t), which the flattening method calculates directly for all pairs of flat states. We can rewrite the flat transition probability distribution as a sum over the two cases in which the Markov chain at level D terminates or does not, as follows.
p(Z^D_{t+1} | Z^D_t) = p(Z^D_{t+1} | Z^D_t, F^D_t = 0) p(F^D_t = 0 | Z^D_t)
                     + p(Z^D_{t+1} | Z^{D−1}_{t+1}, F^D_t = 1) p(Z^{D−1}_{t+1} | Z^{D−1}_t, F^D_t = 1) p(F^D_t = 1 | Z^D_t)

The first term corresponds to the direct transition without termination of the Markov chain. The actual computational complexity of calculating this term is O(N^{D+1}) because direct transitions are permitted only between sibling states, i.e., A_{Dij} = 0 if j ∉ sib(i). The second term, corresponding to the case in which the Markov chain terminates at level D, contains two factors: the upper-level transition probability p(Z^{D−1}_{t+1} | Z^{D−1}_t, F^D_t = 1) and the state initialization probability for the terminated Markov chain, p(Z^D_{t+1} | Z^{D−1}_{t+1}, F^D_t = 1). We attempt to compute these probability distributions efficiently in a dynamic programming manner. The transition probability at level d has the form p(Z^d_{t+1} | Z^d_t, F^{d+1}_t = 1). We define the ending activation e^d_t as the condition of the transition probability from Z^d_t, formally:

p(e^d_t = i) =
    p(Z^d_t = i, F^{d+1}_t = 1)    if i ≠ null and d < D
    p(Z^d_t = i)                   if i ≠ null and d = D
    p(F^{d+1}_t = 0)               if i = null

The null value of e^d_t indicates that the Markov chain at level d + 1 does not terminate at time t. The state initialization probability for level d + 1 has the form p(Z^{d+1}_t | Z^d_t, F^{d+1}_{t−1} = 1). We define the beginning activation b^d_t as the condition of the state initialization probability from Z^d_t, formally:

p(b^d_t = i) =
    p(Z^d_t = i, F^{d+1}_{t−1} = 1)    if i ≠ null and d < D and t > 1
    p(Z^d_t = i)                       if i ≠ null and (d = D or t = 1)
    p(F^{d+1}_{t−1} = 0)               if i = null

The null value of b^d_t indicates that the Markov chain at level d + 1 does not terminate at time t − 1. Using these notations, we can represent the flat transition with propagations of activation probabilities, as shown in Figure 2 (left), because p(Z^D_{t+1} | Z^D_t) = p(b^D_{t+1} | e^D_t).
This representation naturally describes the decomposition of the flat transition probability distribution discussed above, and it enables us to apply the decomposition recursively at all levels. We can derive the conditional probability distributions of e^d_t and b^d_{t+1} as

p(e^d_t = i | e^{d+1}_t) =
    Σ_{c∈child(i)} p(e^{d+1}_t = c) A_{(d+1)cEnd}                                  if i ≠ null
    Σ_{c∈Ω^{d+1}} p(e^{d+1}_t = c)(1 − A_{(d+1)cEnd}) + p(e^{d+1}_t = null)        if i = null

p(b^d_{t+1} = i | e^d_t, b^{d−1}_{t+1}) =
    p(b^{d−1}_{t+1} = parent(i)) π_{di} + Σ_{j∈sib(i)} p(e^d_t = j) A_{dji}        if i ≠ null
    p(e^d_t = null)                                                               if i = null

In the following subsections, we show the efficient inference algorithm and the parameter estimation algorithm using the activation probabilities.

4.1 Inference using Forward and Backward Activation Probabilities

We can translate the DBN of HHMMs in Figure 1 (left) equivalently into a simpler DBN using activation probabilities. The translated DBN is portrayed in Figure 2 (right). The inference algorithm proposed herein is based on a forward-backward calculation over this DBN. We define the forward activation probability α and the backward activation probability β as follows.

αe^d_t(i) = p(e^d_t = i, O_{1:t})
αb^d_t(i) = p(b^d_t = i, O_{1:t−1})
βe^d_t(i) = p(O_{t+1:T}, F^1_T = 1 | e^d_t = i)
βb^d_t(i) = p(O_{t:T}, F^1_T = 1 | b^d_t = i)

Figure 2: (left) Propagation of activation probabilities for calculating the flat transition probability from time t to t + 1. (right) Equivalent DBN of the HHMM using activation probabilities.
Algorithm 1 Calculate forward activation probabilities
 1: for t = 1 to T do
 2:   if t = 1 then
 3:     αb^1_1(i ∈ Ω^1) = π_{1i}
 4:     for d = 2 to D do
 5:       αb^d_1(i ∈ Ω^d) = αb^{d−1}_1(parent(i)) π_{di}
 6:     end for
 7:   else
 8:     αb^1_t(i ∈ Ω^1) = Σ_{j∈sib(i)} αe^1_{t−1}(j) A_{1ji}
 9:     for d = 2 to D do
10:       αb^d_t(i ∈ Ω^d) = αb^{d−1}_t(parent(i)) π_{di} + Σ_{j∈sib(i)} αe^d_{t−1}(j) A_{dji}
11:     end for
12:   end if
13:   αe^D_t(i ∈ Ω^D) = αb^D_t(i) B_{iO_t}
14:   for d = D−1 to 1 do
15:     αe^d_t(i ∈ Ω^d) = Σ_{c∈child(i)} αe^{d+1}_t(c) A_{(d+1)cEnd}
16:   end for
17: end for

These probabilities are efficiently calculable in a dynamic programming manner. Algorithm 1 presents the pseudocode to calculate all α. The αb^d_t are derived downward from αb^1_t to αb^D_t by summing the initialization probability from the parent and the transition probabilities from the siblings (Lines 8 to 11). The αe^d_t are propagated upward from αe^D_t to αe^1_t by summing the probabilities of child Markov chain termination (Lines 13 to 16). This algorithm computes |Ω^d| = N^d quantities, each involving a summation of |sib(i)| = N terms, for d = 1 to D and for t = 1 to T. Therefore, the time complexity of Algorithm 1 is O(T Σ_{d=1}^{D} N^{d+1}) = O(TN^{D+1}). Algorithm 2 propagates the backward activation probabilities similarly, in backward order. We can derive the conditional independence of O_{1:t} and {O_{t+1:T}, F^1_T = 1} given e^d_t ≠ null or b^d_{t+1} ≠ null, because either condition indicates that the Markov chains at levels d + 1, ..., D terminate at time t. On the basis of this conditional independence, the exact posterior of the activation probabilities can be obtained using α and β as presented below.
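The downward propagation in Lines 3 to 6 of Algorithm 1 (the t = 1 initialization) can be checked with a small sketch. The toy parameters π below are randomly generated and normalized over each sibling group, as the model requires; the sketch covers only this initialization step, not the full algorithm:

```python
import random
from itertools import product

N, D = 3, 3
random.seed(0)

# pi[z]: initial probability of absolute path state z given parent(z),
# normalized so that each sibling group sums to one.
pi = {}
for d in range(1, D + 1):
    for par in product(range(N), repeat=d - 1):
        w = [random.random() for _ in range(N)]
        for q in range(N):
            pi[par + (q,)] = w[q] / sum(w)

# Lines 3-6: alpha-b at level d is the parent's alpha-b times pi.
alpha_b = {(): 1.0}               # a level-0 "root" with probability 1
for d in range(1, D + 1):
    for z in product(range(N), repeat=d):
        alpha_b[z] = alpha_b[z[:-1]] * pi[z]

# Because pi is normalized per sibling group, the mass at each level is 1
# (no observation has been folded in yet at t = 1 before Line 13).
level_D_mass = sum(alpha_b[z] for z in product(range(N), repeat=D))
```

The same parent-times-π pattern is what Line 10 adds sibling transitions to for t > 1.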
p(e^d_t = i | O_{1:T}, F^1_T = 1) ∝ p(e^d_t = i, O_{1:t}) p(O_{t+1:T}, F^1_T = 1 | e^d_t = i) = αe^d_t(i) βe^d_t(i)

p(b^d_t = i | O_{1:T}, F^1_T = 1) ∝ p(b^d_t = i, O_{1:t−1}) p(O_{t:T}, F^1_T = 1 | b^d_t = i) = αb^d_t(i) βb^d_t(i)

The inference of the flat state p(Z^D_t | O_{1:T}, F^1_T = 1) is identical to that of the bottom-level activation probability p(e^D_t | O_{1:T}, F^1_T = 1). We can calculate the likelihood of the whole observation as follows.

p(O_{1:T}, F^1_T = 1) = Σ_{i∈Ω^1} p(e^1_T = i, O_{1:T}) p(F^1_T = 1 | e^1_T = i) = Σ_{i∈Ω^1} αe^1_T(i) βe^1_T(i)

Algorithm 2 Calculate backward activation probabilities
 1: for t = T to 1 do
 2:   if t = T then
 3:     βe^1_T(i ∈ Ω^1) = A_{1iEnd}
 4:     for d = 2 to D do
 5:       βe^d_T(i ∈ Ω^d) = βe^{d−1}_T(parent(i)) A_{diEnd}
 6:     end for
 7:   else
 8:     βe^1_t(i ∈ Ω^1) = Σ_{j∈sib(i)} βb^1_{t+1}(j) A_{1ij}
 9:     for d = 2 to D do
10:       βe^d_t(i ∈ Ω^d) = βe^{d−1}_t(parent(i)) A_{diEnd} + Σ_{j∈sib(i)} βb^d_{t+1}(j) A_{dij}
11:     end for
12:   end if
13:   βb^D_t(i ∈ Ω^D) = βe^D_t(i) B_{iO_t}
14:   for d = D−1 to 1 do
15:     βb^d_t(i ∈ Ω^d) = Σ_{c∈child(i)} βb^{d+1}_t(c) π_{(d+1)c}
16:   end for
17: end for

4.2 Updating Parameters

Using the forward and backward activation probabilities, we can estimate HHMM parameters efficiently in the EM framework. In the EM algorithm, the function Q(θ, θ̄) is defined as below, where θ is the parameter set before updating and θ̄ is the parameter set after updating.

Q(θ, θ̄) = Σ_Y p_θ(Y | X) log p_θ̄(X, Y)

In this equation, X represents the set of observed variables and Y the set of latent variables. The difference in log-likelihood between the models of θ and θ̄ is known to be at least Q(θ, θ̄) − Q(θ, θ) [1]. For this reason, we can increase the likelihood monotonically by selecting a new parameter θ̄ that maximizes the function Q. For HHMMs, the set of parameters is θ = {A, π, B}, the set of observed variables is X = {O_{1:T}, F^1_T = 1}, and the set of latent variables is Y = {Z^{1:D}_{1:T}, F^{1:D}_{1:T−1}}. Therefore, the function Q can be represented as shown below.
Table 2: Log-likelihood achieved at each iteration.

Iteration         1        2        3        4        5       10       50      100
FBA w/o MinSR  -773.47  -672.44  -668.50  -631.30  -610.63  -577.33  -457.66  -447.90
FBA with MinSR -773.89  -672.47  -670.40  -643.62  -614.98  -573.84  -453.09  -448.52
FFB            -773.89  -672.47  -670.40  -643.62  -614.98  -573.84  -453.09  -448.52

Q(θ, θ̄) ∝ Σ_{Z^{1:D}_{1:T}, F^{1:D}_{1:T−1}} p_θ(O_{1:T}, F^1_T = 1, Z^{1:D}_{1:T}, F^{1:D}_{1:T−1}) log p_θ̄(O_{1:T}, F^1_T = 1, Z^{1:D}_{1:T}, F^{1:D}_{1:T−1})   (1)

The joint probability of observed variables and latent variables is given below.

p_θ(O_{1:T}, F^1_T = 1, Z^{1:D}_{1:T}, F^{1:D}_{1:T−1}) = [Π_{d=1}^{D} π_{dZ^d_1}] [Π_{t=1}^{T−1} Π_{d=1}^{D} (A_{dZ^d_t End})^{F^d_t} (A_{dZ^d_t Z^d_{t+1}})^{F^{d+1}_t(1−F^d_t)} (π_{dZ^d_{t+1}})^{F^d_t}] [Π_{d=1}^{D} A_{dZ^d_T End}] [Π_{t=1}^{T} B_{Z^D_t O_t}]

We substitute this expression for the joint probability in equation (1), integrate out the irrelevant variables, and organize around each parameter, thereby obtaining the following.

Q(θ, θ̄) ∝ Σ_{d=1}^{D} Σ_{i∈Ω^d} gπ_{di} log π̄_{di} + Σ_{d=1}^{D} Σ_{i∈Ω^d} Σ_{j∈{sib(i)∪End}} gA_{dij} log Ā_{dij} + Σ_{i∈Ω^D} Σ_{v=1}^{V} gB_{iv} log B̄_{iv}

Therein, gπ_{di}, gA_{dij}, and gB_{iv} are given by equations (2)-(5); all are calculable from the forward and backward activation probabilities.

gπ_{di} = αb^d_1(i) βb^d_1(i) + Σ_{t=1}^{T−1} αb^{d−1}_{t+1}(parent(i)) π_{di} βb^d_{t+1}(i)   (2)

gA_{diEnd} = Σ_{t=1}^{T−1} αe^d_t(i) A_{diEnd} βe^{d−1}_t(parent(i)) + αe^d_T(i) βe^d_T(i)   (3)

gA_{dij} = Σ_{t=1}^{T−1} αe^d_t(i) A_{dij} βb^d_{t+1}(j)   (4)

gB_{iv} = Σ_{t: O_t = v} αe^D_t(i) βe^D_t(i)   (5)

Using Lagrange multipliers, we can obtain the parameters π̄, Ā, B̄ that maximize the function Q under the constraints Σ_{i′∈sib(i)} π̄_{di′} = 1, Σ_{j′∈{sib(i)∪End}} Ā_{dij′} = 1, and Σ_v B̄_{iv} = 1, as shown below.

π̄_{di} = gπ_{di} / Σ_{i′∈sib(i)} gπ_{di′},   Ā_{dij} = gA_{dij} / Σ_{j′∈{sib(i)∪End}} gA_{dij′},   B̄_{iv} = gB_{iv} / Σ_{v′} gB_{iv′}

Consequently, we can calculate the updated parameters using α and β. The time complexity of a single EM iteration is O(TN^{D+1}), identical to that of calculating the forward and backward activation probabilities.
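The closing M-step above is mechanical: each expected count g is divided by the total over its constraint group. A tiny sketch with invented counts for one transition row (the siblings of a state i plus the End event):

```python
def normalize(counts):
    """Map expected counts {event: g} to probabilities g / sum(g)."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Hypothetical expected counts gA for one state i at one level:
# two sibling transitions plus the chain-termination event "End".
gA_row = {"sib1": 2.5, "sib2": 0.5, "End": 1.0}
A_row = normalize(gA_row)
```

The same per-group normalization yields π̄ (over sibling groups) and B̄ (over the observation alphabet).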
5 Experiments

Firstly, we experimentally confirm that the forward-backward activation algorithm yields parameter estimates exactly identical to those of the flattening method. Recall that we must place the MinSR constraint on the HHMM parameter set in the flattening method (see section 3). We compare three parameter estimation algorithms: our forward-backward activation algorithm for a MinSR HHMM (FBA with MinSR), the same algorithm for an HHMM without MinSR (FBA w/o MinSR), and the flattening method (FFB). The training dataset includes 5 sequences of length 10, artificially generated by a MinSR HHMM with a biased parameter set. We execute the three algorithms and examine the log-likelihood achieved at each iteration. Table 2 presents the result. FBA with MinSR and FFB achieve identical log-likelihoods throughout the training. This result provides experimental evidence that our algorithm estimates HHMM parameters exactly as the flattening method does. Furthermore, FBA enables us to conduct parameter estimation for HHMMs with non-zero self-transition parameters. To evaluate the computational costs empirically, we compare four methods of HHMM parameter estimation. Two are based on the EM algorithm, with inference by the forward-backward activation algorithm (FBA) and by the flattening forward-backward method (FFB). The other two are based on a sampling approach: direct Gibbs sampling on the flat HMM (DGS) and forward-backward activation sampling (FBAS). FBAS is a straightforward application of the forward-backward sampling scheme to the translated DBN presented in Figure 2: we first calculate the forward activation probabilities, then sample state activation variables from e^1_T to b^1_1 in backward order with respect to the forward activation probabilities. We evaluate the four methods based on three aspects: execution time, convergence speed, and scalability with the state space size.
We apply each method to four different HHMMs of (D = 3, N = 3), (D = 3, N = 4), (D = 4, N = 3), and (D = 4, N = 4). We examine the log-likelihood of the training dataset achieved at each iteration to ascertain the learning convergence. As a training dataset, we use 100 documents from the Reuters corpus as word sequences. The dataset includes 36,262 words in all, with a 4,899-word vocabulary. Figure 3 presents the log-likelihood of the training data. The horizontal axis shows the logarithmically scaled execution time. Table 3 presents the average execution time for a single iteration. From these results, we can say primarily that FBA outperforms FFB in terms of execution time. The improvement is remarkable, especially for the HHMMs of large state space size, because FBA has lower time complexity in N and D than FFB has.

Figure 3: Convergence of log-likelihood for the training data on the Reuters corpus. Log-likelihood (vertical) is shown against the log-scaled execution time (horizontal) to display the execution time necessary to converge the learning of each algorithm. (top-left) HHMM of D = 3, N = 3. (top-right) D = 3, N = 4.
(bottom-left) D = 4, N = 3. (bottom-right) HHMM of D = 4, N = 4.

Table 3: Average execution time for a single iteration (ms).

Method   D = 3, N = 3   D = 3, N = 4   D = 4, N = 3   D = 4, N = 4
         (N^D = 27)     (N^D = 64)     (N^D = 81)     (N^D = 256)
FBA         186.65         391.73         476.92        1652.03
FFB        1729.90        9242.35       19257.80      220224.00
FBAS         82.45         142.20         183.39         581.58
DGS          24.19          37.50          45.43         265.98

The results show that the likelihood convergence using DGS is much slower than that of the other methods. The per-iteration execution time of DGS is the lowest of the four, but this cannot compensate for its low convergence speed. However, FBAS achieves a likelihood competitive with FBA. The results also suggest that FBAS might be preferable in some situations, because FBAS finds a better solution than FBA does in some of the runs.

6 Conclusion

In this work, we proposed a new inference algorithm for HHMMs based on the activation probability. Results show that the performance of our proposed algorithm surpasses that of existing methods. The forward-backward activation algorithm described herein enables us to conduct unsupervised parameter learning with a practical computational cost for HHMMs of larger state space size.

References
[1] C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007.
[2] S. Chib. Calculating posterior distributions and modal estimates in Markov mixture models. Journal of Econometrics, 1996.
[3] S. Fine, Y. Singer, and N. Tishby. The hierarchical hidden Markov model: Analysis and applications. Machine Learning, 1998.
[4] T. Griffiths and M. Steyvers. Finding scientific topics. Proc. National Academy of Sciences of the United States of America, 2004.
[5] K. Heller, Y. Teh, and D. Gorur. Infinite hierarchical hidden Markov models. In Proc. International Conference on Artificial Intelligence and Statistics, 2009.
[6] S. Luhr, H. Bui, S. Venkatesh, and G. West. Recognition of human activity through hierarchical stochastic learning. In Proc.
Pervasive Computing and Communication, 2003.
[7] K. Murphy and M. Paskin. Linear time inference in hierarchical HMMs. In Proc. Neural Information Processing Systems, 2001.
[8] N. Nguyen, D. Phung, and S. Venkatesh. Learning and detecting activities from movement trajectories using the hierarchical hidden Markov models. In Proc. Computer Vision and Pattern Recognition, 2005.
[9] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 1989.
[10] S. Scott. Bayesian methods for hidden Markov models: Recursive computing in the 21st century. Journal of the American Statistical Association, 2002.
[11] M. Skounakis, M. Craven, and S. Ray. Hierarchical hidden Markov models for information extraction. In Proc. International Joint Conference on Artificial Intelligence, 2003.
[12] M. Weiland, A. Smaill, and P. Nelson. Learning musical pitch structures with hierarchical hidden Markov models. In Proc. Journees Informatiques Musicales, 2005.
[13] L. Xie, S. Chang, A. Divakaran, and H. Sun. Learning hierarchical hidden Markov models for video structure discovery. Technical report, Columbia University, 2002.
Finding Exemplars from Pairwise Dissimilarities via Simultaneous Sparse Recovery

Ehsan Elhamifar, EECS Department, University of California, Berkeley
Guillermo Sapiro, ECE, CS Department, Duke University
René Vidal, Center for Imaging Science, Johns Hopkins University

Abstract

Given pairwise dissimilarities between data points, we consider the problem of finding a subset of data points, called representatives or exemplars, that can efficiently describe the data collection. We formulate the problem as a row-sparsity regularized trace minimization problem that can be solved efficiently using convex programming. The solution of the proposed optimization program finds the representatives and the probability that each data point is associated with each one of the representatives. We obtain the range of the regularization parameter for which the solution of the proposed optimization program changes from selecting one representative for all data points to selecting all data points as representatives. When data points are distributed around multiple clusters according to the dissimilarities, we show that the data points in each cluster select representatives only from that cluster. Unlike metric-based methods, our algorithm can be applied to dissimilarities that are asymmetric or violate the triangle inequality, i.e., it does not require that the pairwise dissimilarities come from a metric. We demonstrate the effectiveness of the proposed algorithm on synthetic data as well as real-world image and text data.

1 Introduction

Finding a subset of data points, called representatives or exemplars, which can efficiently describe the data collection, is an important problem in scientific data analysis with applications in machine learning, computer vision, information retrieval, etc. Representatives help to summarize and visualize datasets of images, videos, text and web documents.
Computational time and memory requirements of classification algorithms improve by working on representatives, which contain much of the information of the original data collection. For example, the efficiency of the NN method improves [1] by comparing test samples to K representatives as opposed to all N training samples, where typically we have K ≪ N. Representatives provide a clustering of data points and, as the most prototypical data points, can be used for efficient synthesis/generation of new data points. The problem of finding representative data has been well studied in the literature [2, 3, 4, 5, 6, 7, 8]. Depending on the type of information that should be preserved by the representatives, algorithms can be divided into two categories. The first group of algorithms finds representatives from data that lie in one or multiple low-dimensional subspaces and typically operates on the measurement data vectors directly [5, 6, 7, 8, 9, 10, 11]. The Rank Revealing QR (RRQR) algorithm [6, 9] assumes that the data come from a low-rank model and tries to find a subset of columns of the data matrix that corresponds to the best-conditioned submatrix. Randomized and greedy algorithms have also been proposed to find a subset of the columns of a low-rank matrix [5, 8, 10]. Assuming that the data can be expressed as a linear combination of the representatives, [7, 11] formulate the problem of finding representatives as a joint-sparse recovery problem, with [7] showing that when the data lie in a union of low-rank models, the algorithm finds representatives from each low-rank model. The second group of algorithms finds representatives by assuming that there is a natural grouping of the data collection based on an appropriate measure of similarity between pairs of data points [2, 4, 12, 13, 14]. As a result, such algorithms typically operate on similarities/dissimilarities between data points.
The Kmedoids algorithm [2] tries to find K representatives from pairwise dissimilarities between data points. As solving the original optimization program is, in general, NP-hard [12], an iterative approach is employed. The performance of Kmedoids, similar to Kmeans [15], depends on initialization and decreases as the number of representatives, K, increases. The Affinity Propagation (AP) algorithm [4, 13, 14] tries to find representatives from pairwise similarities between data points by using a message passing algorithm. While AP has suboptimal properties and finds approximate solutions, it does not require initialization and has been shown to perform well in problems such as unsupervised image categorization [16] and facility location problems [17]. In this paper, we propose an algorithm for selecting representatives of a data collection given dissimilarities between pairs of data points. We propose a row-sparsity regularized [18, 19] trace minimization program whose objective is to find a few representatives that encode well the collection of data points according to the provided dissimilarities. The solution of the proposed optimization program finds the representatives and the probability that each data point is associated with each one of the representatives. Instead of choosing the number of representatives, the regularization parameter introduces a trade-off between the number of representatives and the encoding cost of the data points via the representatives based on the dissimilarities. We obtain the range of the regularization parameter over which the solution of the proposed optimization program changes from selecting one representative for all data points to selecting each data point as a representative. When there is a clustering of data points, defined based on their dissimilarities, we show that, for a suitable range of the regularization parameter, the algorithm finds representatives from each cluster.
Moreover, data points in each cluster select representatives only from the same cluster. Unlike metric-based methods, we do not require that the dissimilarities come from a metric. Specifically, the dissimilarities can be asymmetric or can violate the triangle inequality. We demonstrate the effectiveness of the proposed algorithm on synthetic data and real-world image and text data.

2 Problem Statement

We consider the problem of finding representatives from a collection of N data points. Assume we are given a set of nonnegative dissimilarities {dij}i,j=1,...,N between every pair of data points i and j. The dissimilarity dij indicates how well data point i is suited to be a representative of data point j. More specifically, the smaller the value of dij is, the better data point i represents data point j.¹ Such dissimilarities can be built from measured data points, e.g., by using the Euclidean/geodesic distances or the inner products between data points. Dissimilarities can also be given directly, without accessing or measuring the data points; e.g., they can be subjective measurements of the relationships between different objects. We can arrange the dissimilarities into a matrix of the form

\[
D \triangleq \begin{bmatrix} d_1^\top \\ \vdots \\ d_N^\top \end{bmatrix}
= \begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1N} \\ \vdots & \vdots & & \vdots \\ d_{N1} & d_{N2} & \cdots & d_{NN} \end{bmatrix} \in \mathbb{R}^{N \times N}, \tag{1}
\]

where di ∈ R^N denotes the i-th row of D.

Remark 1 We do not require the dissimilarities to satisfy the triangle inequality. In addition, we do not assume symmetry of the pairwise dissimilarities: D can be asymmetric, with dij ≠ dji for some pairs of data points. In other words, how well data point i represents data point j can be different from how well j represents i. In the experiments, we will show an example of asymmetric dissimilarities for finding representative sentences in text documents.

Given D, our goal is to select a subset of data points, called representatives or exemplars, that efficiently represent the collection of data points.
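When the data points themselves are available, one standard choice mentioned above is to take dij as pairwise Euclidean distances. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def dissimilarity_matrix(X):
    """Pairwise Euclidean distances d_ij = ||x_i - x_j||_2, arranged as in (1).

    X is an N x p array of data points; the paper also allows dissimilarities
    that are given directly, without any underlying measurement vectors."""
    sq = np.sum(X ** 2, axis=1)
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.sqrt(np.maximum(D2, 0.0))  # clip tiny negatives from round-off

# Toy example: five points on a line.
D = dissimilarity_matrix(np.arange(5.0).reshape(-1, 1))
```

Any other dissimilarity (geodesic distances, inner products, subjective scores) can be substituted; nothing downstream requires symmetry.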
We consider an optimization program that promotes selecting a few data points that can well encode all data points via the dissimilarities. To do so, we associate a variable zij with each dissimilarity dij and denote the matrix of all variables by

\[
Z \triangleq \begin{bmatrix} z_1^\top \\ \vdots \\ z_N^\top \end{bmatrix}
= \begin{bmatrix} z_{11} & z_{12} & \cdots & z_{1N} \\ \vdots & \vdots & & \vdots \\ z_{N1} & z_{N2} & \cdots & z_{NN} \end{bmatrix} \in \mathbb{R}^{N \times N}, \tag{2}
\]

where zi ∈ R^N denotes the i-th row of Z. We interpret zij as the probability that data point i is a representative for data point j, hence zij ∈ [0, 1]. A data point j can have multiple representatives, in which case zij > 0 for all the indices i of the representatives. As a result, we must have \sum_{i=1}^{N} z_{ij} = 1, which ensures that the total probability of data point j choosing all its representatives is equal to one.

¹dii can be set to have a nonzero value, as we will show in the experiments on the text data.

Figure 1: Data points (blue dots) in two clusters and the representatives (red circles) found by the proposed optimization program in (4) for several values of λ, with λmax,q defined in (6). Top: q = 2. Bottom: q = ∞.
Our goal is to select a few representatives that well encode the data collection according to the dissimilarities. To do so, we propose a row-sparsity regularized trace minimization program on Z that consists of two terms. First, we want the representatives to encode well all data points via the dissimilarities. If data point i is chosen to be a representative of data point j with probability zij, the cost of encoding j with i is dij zij ∈ [0, dij]. Hence, the total cost of encoding j using all its representatives is \sum_{i=1}^{N} d_{ij} z_{ij}. Second, we would like to have as few representatives as possible for all the data points. When data point i is a representative of some of the data points, we have zi ≠ 0, i.e., the i-th row of Z is nonzero. Having a few representatives then corresponds to having a few nonzero rows in the matrix Z. Putting these two goals together, we consider the following minimization program

\[
\min_{Z} \; \sum_{j=1}^{N} \sum_{i=1}^{N} d_{ij} z_{ij} + \lambda \sum_{i=1}^{N} I(\|z_i\|_q)
\quad \text{s.t.} \quad z_{ij} \ge 0, \ \forall i, j; \quad \sum_{i=1}^{N} z_{ij} = 1, \ \forall j, \tag{3}
\]

where I(·) denotes the indicator function, which is zero when its argument is zero and one otherwise. The first term in the objective function corresponds to the total cost of encoding all data points using the representatives, and the second term corresponds to the cost associated with the number of the representatives. The parameter λ > 0 sets the trade-off between the two terms. Since the minimization in (3), which involves counting the number of nonzero rows of Z, is in general NP-hard, we consider the following standard convex relaxation

\[
\min_{Z} \; \sum_{j=1}^{N} \sum_{i=1}^{N} d_{ij} z_{ij} + \lambda \sum_{i=1}^{N} \|z_i\|_q
\quad \text{s.t.} \quad z_{ij} \ge 0, \ \forall i, j; \quad \sum_{i=1}^{N} z_{ij} = 1, \ \forall j, \tag{4}
\]

where, instead of counting the number of nonzero rows of Z, we use the sum of the ℓq-norms of the rows of Z. Typically, we choose q ∈ {2, ∞}, for which the optimization program (4) is convex.² Note that the optimization program (4) can be rewritten in matrix form as

\[
\min_{Z} \; \mathrm{tr}(D^\top Z) + \lambda \|Z\|_{1,q}
\quad \text{s.t.} \quad Z \ge 0, \; \mathbf{1}^\top Z = \mathbf{1}^\top, \tag{5}
\]

where tr(·) denotes the trace operator, \|Z\|_{1,q} ≜ \sum_{i=1}^{N} \|z_i\|_q, and 1 denotes the N-dimensional vector whose elements are all equal to one.

²It is typically the case that q = ∞ favors having 0 and 1 elements in Z, while q = 2 allows elements that more often take other values in [0, 1]. Note that q = 1 also imposes sparsity within the nonzero rows of Z, which is not desirable, since it promotes only a few data points to be associated with each representative.

Figure 2: For the data points shown in Fig. 1, the matrix Z obtained by the proposed optimization program in (4) is shown for several values of λ, where λmax,q is defined in (6). Top: q = 2. Bottom: q = ∞.

As we change the regularization parameter λ in (4), the number of representatives found by the algorithm changes. For small values of λ, where we put more emphasis on better encoding data points via representatives, we obtain more representatives. In the limiting case of λ → 0, all points are selected as representatives, each point being the representative of itself, i.e., zii = 1 for all i.
On the other hand, for large values of λ, where we put more emphasis on the row-sparsity of Z, we select a small number of representatives. In the limiting case of λ → ∞, we select only one representative for all data points. Figures 1 and 2 illustrate the representatives and the matrix Z, respectively, for several values of λ. In Section 3, we compute the range of λ for which the solution of (4) changes from a single representative to all points being representatives. Note that, similar to the relationship between sparse dictionary learning [20] and Kmeans, there is a relationship between our method and Kmedoids. A discussion of this is part of a future publication. Once we have solved the optimization program (4), we can find the representative indices from the nonzero rows of Z. We can also obtain the clustering of data points into K clusters associated with K representatives by assigning each data point to its closest representative. More specifically, if i1, ..., iK denote the indices of the representatives, data point j is assigned to the representative R(j) according to R(j) = argmin_{ℓ ∈ {i1, ..., iK}} dℓj. As mentioned before, the solution Z gives the probability that each data point is associated with each one of the representatives, which also provides a soft clustering of data points to the representatives. In Section 3 we show that when there is a clustering of data points based on their dissimilarities (see Definition 1), each point selects representatives from its own cluster.

3 Theoretical Analysis

In this section, we consider the optimization program (4) and study the behavior of its solution as a function of the regularization parameter. First, we analyze the solution of (4) for a sufficiently large value of λ. We obtain a threshold value on λ after which the solution of (4) remains the same, selecting only one representative data point. More specifically, we show the following result.

Theorem 1 Consider the optimization program (4).
Let ℓ ≜ argmin_i 1^⊤ d_i and

\[
\lambda_{\max,2} \triangleq \max_{i \ne \ell} \; \frac{\sqrt{N}}{2} \cdot \frac{\|d_i - d_\ell\|_2^2}{\mathbf{1}^\top (d_i - d_\ell)},
\qquad
\lambda_{\max,\infty} \triangleq \max_{i \ne \ell} \; \frac{\|d_i - d_\ell\|_1}{2}. \tag{6}
\]

For q ∈ {2, ∞}, when λ ≥ λmax,q, the solution of the optimization program (4) is equal to Z = eℓ 1^⊤, where eℓ denotes the vector whose elements are all zero except its ℓ-th element, which is equal to 1. In other words, the solution of (4) for λ ≥ λmax,q corresponds to choosing only the ℓ-th data point as the representative of all the data points.

Note that the threshold value of the regularization parameter for which we obtain only one representative is different for q = 2 and q = ∞. However, the two cases obtain the same representative, given by the data point for which 1^⊤ d_i is minimum, i.e., the data point with the smallest sum of dissimilarities to other data points. Notice also that when the dissimilarities are the Euclidean distances between the data points, the single representative corresponds to the data point closest to the geometric median of all data points, as shown in the right plot of Figure 1.

Figure 3: Data points in two clusters with dissimilarities given by pairwise Euclidean distances. For λ < ∆ − max{δ1, δ2}, in the solution of the optimization program (4), points in each cluster are represented by representatives from the same cluster.

When the regularization parameter λ is smaller than the threshold in (6), the optimization program in (4) can find multiple representatives for each data point. However, when there is a clustering of data points based on their dissimilarities (see Definition 1), we expect to select representatives from each cluster. In addition, we expect that the data points in each cluster be associated with the representatives in that cluster only.
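The thresholds in (6) can be evaluated directly from D. A numpy sketch (the function name is ours) that returns ℓ together with λmax,2 and λmax,∞:

```python
import numpy as np

def lambda_max(D):
    """Compute ell = argmin_i 1^T d_i and the thresholds of (6)."""
    N = D.shape[0]
    row_sums = D.sum(axis=1)          # 1^T d_i for each row d_i of D
    ell = int(np.argmin(row_sums))    # smallest sum of dissimilarities
    diff = D - D[ell]                 # rows d_i - d_ell
    others = [i for i in range(N) if i != ell]
    lam2 = max(np.sqrt(N) / 2.0 * np.sum(diff[i] ** 2) / np.sum(diff[i])
               for i in others)
    lam_inf = max(np.abs(diff[i]).sum() / 2.0 for i in others)
    return ell, lam2, lam_inf

# Tiny asymmetric example (values invented for illustration).
ell, lam2, lam_inf = lambda_max(np.array([[0.0, 1.0], [2.0, 0.0]]))
```

Note that the denominator 1^⊤(d_i − d_ℓ) is positive for i ≠ ℓ whenever the minimizer ℓ is unique, so λmax,2 is well defined.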
Definition 1 Given dissimilarities {dij}i,j=1,...,N between N data points, we say that the data partitions into n clusters {Ci}_{i=1}^{n} according to the dissimilarities if, for any data point j′ in any Cj, the largest dissimilarity to other data points in Cj is strictly smaller than the smallest dissimilarity to the data points in any Ci different from Cj, i.e.,

\[
\max_{i' \in C_j} d_{i'j'} < \min_{i \ne j} \min_{i' \in C_i} d_{i'j'},
\qquad \forall j = 1, \ldots, n, \; \forall j' \in C_j. \tag{7}
\]

In other words, the data partitions into clusters {Ci}_{i=1}^{n} when the intraclass dissimilarities are smaller than the interclass dissimilarities. Next, we show that for a suitable range of the regularization parameter, which depends on the intraclass and interclass dissimilarities, the probability that a point chooses representatives from other clusters is zero. More precisely, we have the following result.

Theorem 2 Given dissimilarities {dij}i,j=1,...,N between N data points, assume that the data partitions into n clusters {Ci}_{i=1}^{n} according to Definition 1. Let λc be defined as

\[
\lambda_c \triangleq \min_{j} \min_{j' \in C_j} \Big( \min_{i \ne j} \min_{i' \in C_i} d_{i'j'} - \max_{i' \in C_j} d_{i'j'} \Big). \tag{8}
\]

Then for λ ≤ λc, the optimization program (4) finds representatives in each cluster, where the data points in every Ci select representatives only from Ci.

A less tight clustering threshold λ′c ≤ λc on the regularization parameter is given by

\[
\lambda'_c \triangleq \min_{i \ne j} \; \min_{i' \in C_i, \, j' \in C_j} d_{i'j'}
\; - \; \max_{i} \; \max_{i' \ne j' \in C_i} d_{i'j'}. \tag{9}
\]

The first term on the right-hand side of (9) is the minimum dissimilarity between data points in two different clusters. The second term on the right-hand side of (9) is the maximum, over all clusters, of the dissimilarity between different data points in each cluster. When λc or λ′c increases, e.g., when the interclass dissimilarities increase or the intraclass dissimilarities decrease, the maximum possible λ for which we obtain clustering increases.
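Given a candidate partition, the threshold λc in (8) is straightforward to evaluate. A numpy sketch (names are ours), illustrated on two one-dimensional clusters:

```python
import numpy as np

def clustering_threshold(D, clusters):
    """lambda_c of (8): for every point j' in its cluster C_j, the gap between
    its smallest dissimilarity from the other clusters and its largest
    dissimilarity from its own cluster; lambda_c is the smallest such gap.
    `clusters` is a list of index arrays partitioning {0, ..., N-1}."""
    lam_c = np.inf
    for j, Cj in enumerate(clusters):
        rest = np.concatenate([C for i, C in enumerate(clusters) if i != j])
        for jp in Cj:
            lam_c = min(lam_c, D[rest, jp].min() - D[Cj, jp].max())
    return lam_c

# Two well-separated clusters on a line: {0, 1} and {10, 11}.
x = np.array([0.0, 1.0, 10.0, 11.0])
D = np.abs(x[:, None] - x[None, :])
lam_c = clustering_threshold(D, [np.array([0, 1]), np.array([2, 3])])
```

A positive value of lam_c certifies that the data satisfy Definition 1 for the given partition; a nonpositive value means the partition violates (7).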
As an illustrative example, consider Figure 3, where data points are distributed in two clusters according to the dissimilarities given by the pairwise Euclidean distances of the data points. Let δi denote the diameter of cluster i and ∆ be the minimum distance among pairs of data points in different clusters. Assuming max{δ1, δ2} < ∆, for λ < ∆ − max{δ1, δ2}, the solution of the optimization program (4) is of the form

\[
Z = \Gamma \begin{bmatrix} Z_1 & 0 \\ 0 & Z_2 \end{bmatrix},
\]

where Γ ∈ R^{N×N} is a permutation matrix corresponding to the separation of the data into the two clusters.

Figure 4: Number of representatives obtained by the proposed optimization program in (4) for data points in the two clusters shown in Fig. 1, as a function of the regularization parameter λ = α λmax,q with q ∈ {2, ∞}.

Figure 5: Representatives and the probability matrix Z obtained by our proposed algorithm in (4) for q = ∞. 20 random data points are added to 120 data points generated by a mixture of 3 Gaussian distributions.

Remark 2 The results of Theorems 1 and 2 suggest that there is a range of the regularization parameter for which we obtain only one representative from each cluster. In other words, if λmax,q(Ci) denotes the threshold on λ after which we obtain only one representative from Ci, then for max_i λmax,q(Ci) ≤ λ < λc, the data points in each Ci select only one representative, which is in Ci. As we will show in the experiments, such an interval often exists and can, in fact, be large.

For a sufficiently small value of λ, where we put less emphasis on the row-sparsity term in the optimization program (4), each data point becomes a representative, i.e., zii = 1 for all i. In such a case, each data point forms its own cluster. From the result in Theorem 2, we obtain a threshold λmin such that for λ ≤ λmin the solution Z is equal to the identity matrix.

Corollary 1 Let λmin,q ≜ min_j ( min_{i≠j} dij − djj ) for q ∈ {2, ∞}. For λ ≤ λmin,q, the solution of the optimization program (4) for q ∈ {2, ∞} is equal to the identity matrix. In other words, each data point is the representative of itself.

4 Experiments

In this section, we evaluate the performance of the proposed algorithm on synthetic and real datasets. As scaling D and λ by the same value does not change the solution of (4), we always scale the dissimilarities to lie in [0, 1] by dividing the elements of D by its largest element. Unless stated otherwise, we typically set λ = α λmax,q with α ∈ [0.01, 0.1], for which we obtain good results.

4.1 Experiments on Synthetic Data

We consider the synthetic dataset shown in Figure 1, which consists of data points distributed around two clusters. We run the proposed optimization program in (4) for both q = 2 and q = ∞ for several values of λ. Figures 1 and 2 show the representatives and the matrix of variables Z, respectively, for several values of the regularization parameter. Notice that, as discussed before, for small values of λ we obtain more representatives, and as we increase λ, the number of representatives decreases. When the regularization parameter reaches λmax,q, computed using our theoretical analysis, we obtain only one representative for the dataset.
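The paper does not prescribe a particular solver for (4); any off-the-shelf convex solver applies, since the feasible set is simply a product of probability simplices (one per column of Z). Purely as an illustration, here is a plain projected-subgradient sketch for q = 2 (step size, iteration count, and the row-norm tolerance are ad hoc choices of ours), together with the extraction of representatives from the nonzero rows of Z and the assignment R(j):

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of a vector onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def solve_q2(D, lam, n_iter=2000, step=0.01):
    """Projected subgradient for (4) with q = 2: z_i / ||z_i||_2 is a
    subgradient of ||z_i||_2 away from zero, and each column of Z is
    projected back onto the simplex after every step."""
    N = D.shape[0]
    Z = np.full((N, N), 1.0 / N)
    for _ in range(n_iter):
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        G = np.divide(Z, norms, out=np.zeros_like(Z), where=norms > 1e-12)
        Z = Z - step * (D + lam * G)
        Z = np.apply_along_axis(project_simplex, 0, Z)
    return Z

def representatives_and_assignment(Z, D, tol=1e-2):
    reps = np.nonzero(np.linalg.norm(Z, axis=1) > tol)[0]  # nonzero rows of Z
    R = reps[np.argmin(D[reps], axis=0)]                   # R(j) = argmin_l d_lj
    return reps, R

# Toy run on two clusters of a line.
x = np.array([0.0, 0.5, 5.0, 5.5])
D = np.abs(x[:, None] - x[None, :])
Z = solve_q2(D, lam=1.0, n_iter=500)
reps, R = representatives_and_assignment(Z, D)
```

A subgradient method converges slowly and only drives small rows toward zero rather than exactly to zero, which is why a tolerance is used when reading off the representatives; a dedicated solver (e.g., ADMM) would be preferable in practice.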
It is important to note that, as we showed in the theoretical analysis, when the regularization parameter is sufficiently small, data points in each cluster only select representatives from that cluster (see Figure 2), i.e., Z has a block-diagonal structure when its columns are permuted according to the clusters. Moreover, as Figure 2 shows, for a sufficiently large range of the regularization parameter, we obtain only one representative from each cluster. To better see this, we run the optimization program with λ = α λmax,q for different values of α. The two left-hand side plots in Figure 4 show the number of the representatives for q = 2 and q = ∞, respectively, from each of the two clusters. As shown, when λ gets larger than λmax,q, we obtain only one representative from the right cluster and no representative from the left cluster, i.e., as expected, we obtain one representative for all the data points. Also, when λ gets smaller than λmin,q, all data points become representatives, as expected from our theoretical result.

Figure 6: Classification error on the USPS (left) and ISOLET (right) datasets using representatives obtained by different algorithms. Horizontal axis shows the percentage of the selected representatives from each class (averaged over all classes). Dashed line shows the classification error (%) using all the training samples.

It is also important to note that, for a sufficiently large range of the values of λ, we select only one representative from each cluster. The two right-hand side plots in Figure 4 show the number of the representatives when we increase the distance between the two clusters.
Notice that we obtain similar results as before, except that the range of λ for which we select one representative from each cluster has increased. This is also expected from our theoretical analysis, since λc in (8) increases as the distance between the two clusters increases. Note that we also obtain similar results for larger numbers of clusters; for better visualization, we have shown the results for only two clusters. Also, when there is not a clear partitioning of the data points into clusters according to Definition 1, e.g., when there are data points distributed between different clusters as shown in Figure 5, we still obtain results similar to what we have discussed in our theoretical analysis. This suggests the existence of stronger theoretical guarantees for our proposed algorithm, which is the subject of our future work.

4.2 Experiments on Real Data

In this section, we evaluate the performance of our proposed algorithm on real image and text data. We report the results for q = ∞, as it typically obtains better results than q = 2.

4.2.1 NN Classification using Representatives

First, we consider the problem of finding prototypes for classification using the nearest neighbor (NN) algorithm [15]. Finding representatives that correspond to the modes of the data distribution helps to significantly reduce the computational cost and memory requirements of classification algorithms, while maintaining their performance. To investigate the effectiveness of our proposed method for finding informative prototypes for classification, we consider the USPS [21] and ISOLET [22] datasets. We find the representatives of the training data in each class of a dataset and use the representatives as a reduced training set to perform NN classification on the test data. We obtain the representatives by taking the dissimilarities to be pairwise Euclidean distances between data points.
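The evaluation protocol above can be sketched in a few lines; here the representative indices are hand-picked rather than computed from (4), just to show how the reduced training set is used (all names and data below are ours):

```python
import numpy as np

def nn_classify(X_train, y_train, X_test):
    """1-NN: label each test sample by its nearest training sample."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[np.argmin(d2, axis=1)]

X_train = np.array([[0.0], [0.2], [5.0], [5.2]])
y_train = np.array([0, 0, 1, 1])
rep_idx = np.array([0, 2])  # hypothetical: one representative per class
y_pred = nn_classify(X_train[rep_idx], y_train[rep_idx],
                     np.array([[0.1], [5.1]]))
```

In the actual experiments, rep_idx would hold the nonzero rows of the Z computed on the training data of each class, so that only K ≪ N samples are compared against at test time.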
We compare our proposed algorithm with AP [4], Kmedoids [2], and random selection of data points (Rand) as the baseline. Since Kmedoids depends on initialization, we run the algorithm 1000 times with different random initializations and report the results corresponding to the best solution (lowest energy) and the worst solution (highest energy) as Kmedoids-b and Kmedoids-w, respectively. To have a fair comparison, we run all algorithms so that they obtain the same number of representatives. Figure 6 shows the average classification errors using the NN method for the two datasets. The classification error using all training samples of each dataset is also shown with a black dashed line. As the results show, the classification performance using the representatives found by our proposed algorithm is close to that of using all the training samples. Specifically, in the USPS dataset, using representatives found by our proposed method, which consist of only 16% of the training samples, we obtain 6.2% classification error compared to 4.7% error obtained using all the training samples. In the ISOLET dataset, with representatives corresponding to less than half of the training samples, we obtain classification performance very close to using all the training samples (12.4% error compared to 11.4% error). Notice that when the number of representatives decreases, as expected, the classification performance also decreases. However, in all cases, our proposed algorithm as well as AP are less affected by the decrease in the number of the representatives.

Figure 7: Some frames of a political debate video, which consists of multiple shots, and the automatically computed representatives (inside red rectangles) of the whole video sequence using our proposed algorithm.

4.2.2 Video Summarization using Representatives

We now evaluate our proposed algorithm for finding representative frames of video sequences.
We take a political debate video [7], downsample the frames to 80 × 100 pixels, and convert each frame to a grayscale image. Each data point then corresponds to an 8000-dimensional vector obtained by vectorizing each grayscale downsampled frame. We set the dissimilarities to be the Euclidean distances between pairs of data points. Figure 7 shows some frames of the video and the representatives computed by our method. Notice that we obtain a representative for each shot of the video. It is worth mentioning that the computed representatives do not change for λ ∈ [2.68, 6.55].

4.2.3 Finding Representative Sentences in Text Documents

As we discussed earlier, our proposed algorithm can deal with dissimilarities that are not necessarily metric, i.e., that can be asymmetric or violate the triangle inequality. We now consider an example of asymmetric dissimilarities, where we find representative sentences in the text document of this paper. We compute the dissimilarities between sentences using an information theory-based criterion, as follows [4]: we treat each sentence as a “bag of words” and compute dij (how well sentence i represents sentence j) based on the sum of the costs of encoding every word in sentence j using the words in sentence i. More precisely, for the sentences in the text of the paper, we extract the words delimited by spaces, remove all punctuation, and eliminate words that have fewer than 5 characters. For each word in sentence j, if the word matches³ a word in sentence i, we set the encoding cost for the word to the logarithm of the number of words in sentence i, which is the cost of encoding the index of the matched word. Otherwise, we set the encoding cost for the word to the logarithm of the number of words in the text dictionary, which is the cost of encoding the index of the word in all the text. We also compute dii using the same procedure, i.e., dii ≠ 0, which penalizes selecting very long sentences.
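A minimal sketch of this encoding-cost dissimilarity (the tokenization details and the dictionary size below are our own simplifications):

```python
import math
import re

def words(sentence):
    # split on non-letters and keep words with at least 5 characters
    return [w for w in re.findall(r"[a-z]+", sentence.lower()) if len(w) >= 5]

def matches(w, v):
    # the paper's rule (footnote 3): either word is a substring of the other
    return w in v or v in w

def encoding_cost(wi, wj, dict_size):
    """d_ij: cost of encoding every word of sentence j via sentence i."""
    cost = 0.0
    for w in wj:
        if any(matches(w, v) for v in wi):
            cost += math.log(len(wi))     # index of the matched word in i
        else:
            cost += math.log(dict_size)   # index of the word in the dictionary
    return cost

si = words("representatives encode the collection efficiently")
sj = words("representatives summarize collections")
d_ij = encoding_cost(si, sj, dict_size=1000)  # hypothetical dictionary size
d_ji = encoding_cost(sj, si, dict_size=1000)
```

Because the cost of encoding j via i depends on the length and vocabulary of sentence i, d_ij and d_ji generally differ, which is exactly the asymmetry the algorithm is designed to handle.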
We found that 96% of the dissimilarities are asymmetric. The four representative sentences obtained by our algorithm summarize the paper as follows:

– Given pairwise dissimilarities between data points, we consider the problem of finding a subset of data points, called representatives or exemplars, that can efficiently describe the data collection.
– We obtain the range of the regularization parameter for which the solution of the proposed optimization program changes from selecting one representative for all data points to selecting all data points as representatives.
– When there is a clustering of data points, defined based on their dissimilarities, we show that, for a suitable range of the regularization parameter, the algorithm finds representatives from each cluster.
– As the results show, the classification performance using the representatives found by our proposed algorithm is close to that of using all the training samples.

Acknowledgment

E. Elhamifar and R. Vidal are supported by grants NSF CNS-0931805, NSF ECCS-0941463, NSF OIA-0941362, and ONR N00014-09-10839. G. Sapiro acknowledges partial support by ONR, DARPA, NSF, NGA, and AFOSR grants.

³We consider a word to match another word if either word is a substring of the other.

References

[1] S. Garcia, J. Derrac, J. R. Cano, and F. Herrera, “Prototype selection for nearest neighbor classification: Taxonomy and empirical study,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 417–435, 2012.
[2] L. Kaufman and P. Rousseeuw, “Clustering by means of medoids,” in Y. Dodge (Ed.), Statistical Data Analysis based on the L1 Norm, North-Holland, Amsterdam, pp. 405–416, 1987.
[3] M. Gu and S. C. Eisenstat, “Efficient algorithms for computing a strong rank-revealing QR factorization,” SIAM Journal on Scientific Computing, vol. 17, pp. 848–869, 1996.
[4] B. J. Frey and D. Dueck, “Clustering by passing messages between data points,” Science, vol. 315, pp. 972–976, 2007.
[5] J. A. Tropp, “Column subset selection, matrix factorization, and eigenvalue optimization,” in ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 978–986, 2009.
[6] C. Boutsidis, M. W. Mahoney, and P. Drineas, “An improved approximation algorithm for the column subset selection problem,” in Proceedings of SODA, pp. 968–977, 2009.
[7] E. Elhamifar, G. Sapiro, and R. Vidal, “See all by looking at a few: Sparse modeling for finding representative objects,” in IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[8] J. Bien, Y. Xu, and M. W. Mahoney, “CUR from a sparse optimization viewpoint,” in Neural Information Processing Systems, 2010.
[9] T. Chan, “Rank revealing QR factorizations,” Linear Algebra and its Applications, vol. 88–89, pp. 67–82, 1987.
[10] L. Balzano, R. Nowak, and W. Bajwa, “Column subset selection with missing data,” in NIPS Workshop on Low-Rank Methods for Large-Scale Machine Learning, 2010.
[11] E. Esser, M. Moller, S. Osher, G. Sapiro, and J. Xin, “A convex model for non-negative matrix factorization and dimensionality reduction on physical space,” IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3239–3252, 2012.
[12] M. Charikar, S. Guha, A. Tardos, and D. B. Shmoys, “A constant-factor approximation algorithm for the k-median problem,” Journal of Computer and System Sciences, vol. 65, no. 1, pp. 129–149, 2002.
[13] B. J. Frey and D. Dueck, “Mixture modeling by affinity propagation,” in Neural Information Processing Systems, 2006.
[14] I. E. Givoni, C. Chung, and B. J. Frey, “Hierarchical affinity propagation,” in Conference on Uncertainty in Artificial Intelligence, 2011.
[15] R. Duda, P. Hart, and D. Stork, Pattern Classification. Wiley-Interscience, 2004.
[16] D. Dueck and B. J. Frey, “Non-metric affinity propagation for unsupervised image categorization,” in International Conference on Computer Vision, 2007.
[17] N. Lazic, B. J. Frey, and P. Aarabi, “Solving the uncapacitated facility location problem using message passing algorithms,” in International Conference on Artificial Intelligence and Statistics, 2007.
[18] R. Jenatton, J. Y. Audibert, and F. Bach, “Structured variable selection with sparsity-inducing norms,” Journal of Machine Learning Research, vol. 12, pp. 2777–2824, 2011.
[19] J. A. Tropp, “Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,” Signal Processing, special issue on sparse approximations in signal and image processing, vol. 86, pp. 589–602, 2006.
[20] M. Aharon, M. Elad, and A. M. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[21] J. J. Hull, “A database for handwritten text recognition research,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 5, pp. 550–554, 1994.
[22] M. Fanty and R. Cole, “Spoken letter recognition,” in Neural Information Processing Systems, 1991.
Mirror Descent Meets Fixed Share (and feels no regret)

Nicolò Cesa-Bianchi, Università degli Studi di Milano, nicolo.cesa-bianchi@unimi.it
Pierre Gaillard, Ecole Normale Supérieure*, Paris, pierre.gaillard@ens.fr
Gábor Lugosi, ICREA & Universitat Pompeu Fabra, Barcelona, gabor.lugosi@upf.edu
Gilles Stoltz, Ecole Normale Supérieure*, Paris & HEC Paris, Jouy-en-Josas, France, gilles.stoltz@ens.fr

Abstract

Mirror descent with an entropic regularizer is known to achieve shifting regret bounds that are logarithmic in the dimension. This is done using either a carefully designed projection or a weight-sharing technique. Via a novel unified analysis, we show that these two approaches deliver essentially equivalent bounds on a notion of regret generalizing shifting, adaptive, discounted, and other related regrets. Our analysis also captures and extends the generalized weight sharing technique of Bousquet and Warmuth, and can be refined in several ways, including improvements for small losses and adaptive tuning of parameters.

1 Introduction

Online convex optimization is a sequential prediction paradigm in which, at each time step, the learner chooses an element from a fixed convex set S and is then given access to a convex loss function defined on the same set. The value of the function on the chosen element is the learner's loss. Many problems such as prediction with expert advice, sequential investment, and online regression/classification can be viewed as special cases of this general framework. Online learning algorithms are designed to minimize the regret. The standard notion of regret is the difference between the learner's cumulative loss and the cumulative loss of the single best element in S. A much harder criterion to minimize is shifting regret, which is defined as the difference between the learner's cumulative loss and the cumulative loss of an arbitrary sequence of elements in S.
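On the simplex, both notions of regret are straightforward to compute for a given sequence of plays. A toy numpy illustration (the data are random and the learner is fixed to uniform weights, purely for concreteness):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 100, 3
losses = rng.uniform(size=(T, d))   # loss vectors l_t in [0, 1]^d
plays = np.full((T, d), 1.0 / d)    # the learner's elements p_t (uniform here)

learner = np.sum(plays * losses)    # cumulative loss: sum_t p_t^T l_t

# Standard regret: against the single best fixed element of the simplex,
# which for linear losses is a corner, i.e., the best fixed component.
static_regret = learner - losses.sum(axis=0).min()

# Shifting regret: against an arbitrary comparator sequence q_1, ..., q_T;
# here the per-round minimizer, the hardest such sequence.
shifting_regret = learner - losses.min(axis=1).sum()
```

Since the per-round minimizer can shift every round, shifting regret upper-bounds the standard regret, which is why meaningful shifting bounds must charge the comparator for its shift.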
Shifting regret bounds are typically expressed in terms of the shift, a notion of regularity measuring the length of the trajectory in S described by the comparison sequence (i.e., the sequence of elements against which the regret is evaluated). In online convex optimization, shifting regret bounds for convex subsets S ⊆ R^d are obtained for the projected online mirror descent (or follow-the-regularized-leader) algorithm. In this case the shift is typically computed in terms of the p-norm of the difference of consecutive elements in the comparison sequence, see [1, 2] and [3]. We focus on the important special case when S is the simplex. In [1], shifting bounds are shown for projected mirror descent with entropic regularizers, using a 1-norm to measure the shift.¹ When the comparison sequence is restricted to the corners of the simplex (which is the setting of prediction with expert advice), then the shift is naturally defined to be the number of times the trajectory moves to a different corner. This problem is often called “tracking the best expert”, see, e.g., [4, 5, 1, 6, 7], and it is well known that exponential weights with weight sharing, which corresponds to the fixed-share algorithm of [4], achieves a good shifting bound in this setting. In [6] the authors introduce a generalization of the fixed-share algorithm and prove various shifting bounds for any trajectory in the simplex. However, their bounds are expressed using a quantity that corresponds to a proper shift only for trajectories on the simplex corners.

*Ecole Normale Supérieure, Paris – CNRS – INRIA, within the project-team CLASSIC
¹Similar 1-norm shifting bounds can also be proven using the analysis of [2]. However, without using entropic regularizers it is not clear how to achieve a logarithmic dependence on the dimension, which is one of the advantages of working in the simplex.
In this paper we offer a unified analysis of mirror descent, fixed share, and the generalized fixed share of [6] for the setting of online convex optimization in the simplex. Our bounds are expressed in terms of a notion of shift based on the total variation distance. Our analysis relies on a generalized notion of shifting regret which includes, as special cases, related notions of regret such as adaptive regret, discounted regret, and regret with time-selection functions. Perhaps surprisingly, we show that projected mirror descent and fixed share achieve essentially the same generalized regret bound. Finally, we show that widespread techniques in online learning, such as improvements for small losses and adaptive tuning of parameters, are all easily captured by our analysis.

2 Preliminaries

For simplicity, we derive our results in the setting of online linear optimization. As we show in the supplementary material, these results can be easily extended to the more general setting of online convex optimization through a standard linearization step. Online linear optimization may be cast as a repeated game between the forecaster and the environment as follows. We use ∆_d to denote the simplex {q ∈ [0, 1]^d : ∥q∥_1 = 1}.

Online linear optimization in the simplex. For each round t = 1, ..., T:
1. The forecaster chooses \hat{p}_t = (\hat{p}_{1,t}, ..., \hat{p}_{d,t}) ∈ ∆_d;
2. The environment chooses a loss vector ℓ_t = (ℓ_{1,t}, ..., ℓ_{d,t}) ∈ [0, 1]^d;
3. The forecaster suffers loss \hat{p}_t^⊤ ℓ_t.

The goal of the forecaster is to minimize the accumulated loss \hat{L}_T = \sum_{t=1}^T \hat{p}_t^⊤ ℓ_t. In the now classical problem of prediction with expert advice, the goal of the forecaster is to compete with the best fixed component (often called "expert") chosen in hindsight, that is, with \min_{i=1,...,d} \sum_{t=1}^T ℓ_{i,t}; or even to compete with a richer class of sequences of components. In Section 3 we state more specifically the goals considered in this paper.
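The repeated game above can be sketched in a few lines. The forecaster below is a plain exponential-weights learner over the simplex; this particular update is only an illustration of the protocol (the paper's generalized share algorithm, with its extra mixing step, comes later), and all function names are ours.

```python
import math

def play_round(p_hat, loss):
    """One round: the forecaster suffers the linear loss <p_hat, loss>."""
    return sum(p * l for p, l in zip(p_hat, loss))

def exp_weights(d, losses, eta):
    """Illustrative exponential-weights forecaster over the simplex Delta_d."""
    p_hat = [1.0 / d] * d          # uniform initial prediction
    total = 0.0
    for loss in losses:
        total += play_round(p_hat, loss)
        # multiplicative update followed by normalization
        w = [p * math.exp(-eta * l) for p, l in zip(p_hat, loss)]
        z = sum(w)
        p_hat = [wi / z for wi in w]
    return total, p_hat

# toy run: two experts, three rounds
cum_loss, p = exp_weights(2, [[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]], eta=1.0)
```

After the first two rounds the forecaster concentrates on expert 1, so it pays most of the round-3 loss; the regret against the best fixed expert stays logarithmic in d.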
We start by introducing our main algorithmic tool, described in Figure 1, a share algorithm whose formulation generalizes the seemingly unrelated formulations of the algorithms studied in [4, 1, 6]. It is parameterized by the "mixing functions" ψ_t : [0, 1]^{td} → ∆_d for t ⩾ 2 that assign probabilities to past "pre-weights" as defined below. In all examples discussed in this paper, these mixing functions are quite simple, but working with such a general model makes the main ideas more transparent. We then provide a simple lemma that serves as the starting point for analyzing different instances of this generalized share algorithm. (We only deal with linear losses in this paper; however, it is straightforward that for sequences of η-exp-concave loss functions, the additional term η/8 in the bound is no longer needed.)

Lemma 1. For all t ⩾ 1 and for all q_t ∈ ∆_d, Algorithm 1 satisfies
  (\hat{p}_t − q_t)^⊤ ℓ_t ⩽ (1/η) \sum_{i=1}^d q_{i,t} \ln(v_{i,t+1}/\hat{p}_{i,t}) + η/8.

Proof. By Hoeffding's inequality (see, e.g., [3, Section A.1.1]),
  \sum_{j=1}^d \hat{p}_{j,t} ℓ_{j,t} ⩽ −(1/η) \ln\Big( \sum_{j=1}^d \hat{p}_{j,t} e^{−η ℓ_{j,t}} \Big) + η/8.   (1)
By definition of v_{i,t+1}, for all i = 1, ..., d we then have \sum_{j=1}^d \hat{p}_{j,t} e^{−η ℓ_{j,t}} = \hat{p}_{i,t} e^{−η ℓ_{i,t}} / v_{i,t+1}, which implies \hat{p}_t^⊤ ℓ_t ⩽ ℓ_{i,t} + (1/η) \ln(v_{i,t+1}/\hat{p}_{i,t}) + η/8. The proof is concluded by taking a convex aggregation with respect to q_t.

Parameters: learning rate η > 0 and mixing functions ψ_t for t ⩾ 2.
Initialization: \hat{p}_1 = v_1 = (1/d, ..., 1/d).
For each round t = 1, ..., T:
1. Predict \hat{p}_t;
2. Observe loss ℓ_t ∈ [0, 1]^d;
3. [loss update] For each j = 1, ..., d define the current pre-weights
     v_{j,t+1} = \hat{p}_{j,t} e^{−η ℓ_{j,t}} \Big/ \sum_{i=1}^d \hat{p}_{i,t} e^{−η ℓ_{i,t}},
   set v_{t+1} = (v_{1,t+1}, ..., v_{d,t+1}), and let V_{t+1} = (v_{i,s})_{1⩽i⩽d, 1⩽s⩽t+1} be the d × (t+1) matrix of all past and current pre-weights;
4. [shared update] Define \hat{p}_{t+1} = ψ_{t+1}(V_{t+1}).
Algorithm 1: The generalized share algorithm.
3 A generalized shifting regret for the simplex

We now introduce a generalized notion of shifting regret which unifies and generalizes the notions of discounted regret (see [3, Section 2.11]), adaptive regret (see [8]), and shifting regret (see [2]). For a fixed horizon T, a sequence of discount factors β_{t,T} ⩾ 0 for t = 1, ..., T assigns varying weights to the instantaneous losses suffered at each round. We compare the total loss of the forecaster with the loss of an arbitrary sequence of vectors q_1, ..., q_T in the simplex ∆_d. Our goal is to bound the regret
  \sum_{t=1}^T β_{t,T} \hat{p}_t^⊤ ℓ_t − \sum_{t=1}^T β_{t,T} q_t^⊤ ℓ_t
in terms of the "regularity" of the comparison sequence q_1, ..., q_T and of the variations of the discounting weights β_{t,T}. By setting u_t = β_{t,T} q_t ∈ R^d_+, we can rephrase the above regret as
  \sum_{t=1}^T ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − \sum_{t=1}^T u_t^⊤ ℓ_t.   (2)
In the literature on tracking the best expert [4, 5, 1, 6], the regularity of the sequence u_1, ..., u_T is measured as the number of times u_t ≠ u_{t+1}. We introduce the following regularity measure
  m(u_1^T) = \sum_{t=2}^T D_{TV}(u_t, u_{t−1}),   (3)
where for x = (x_1, ..., x_d), y = (y_1, ..., y_d) ∈ R^d_+ we define D_{TV}(x, y) = \sum_{i : x_i ⩾ y_i} (x_i − y_i). Note that when x, y ∈ ∆_d, we recover the total variation distance D_{TV}(x, y) = (1/2) ∥x − y∥_1, while for general x, y ∈ R^d_+, the quantity D_{TV}(x, y) is not necessarily symmetric and is always bounded by ∥x − y∥_1. The traditional shifting regret of [4, 5, 1, 6] is obtained from (2) when all u_t are such that ∥u_t∥_1 = 1.

4 Projected update

The shifting variant of the EG algorithm analyzed in [1] is a special case of the generalized share algorithm in which the function ψ_{t+1} performs a projection of the pre-weights on the convex set ∆^α_d = [α/d, 1]^d ∩ ∆_d. Here α ∈ (0, 1) is a fixed parameter. We can prove (using techniques similar to the ones shown in the next section; see the supplementary material) the following bound, which generalizes [1, Theorem 16].

Theorem 1. For all T ⩾ 1, for all sequences ℓ_1, ...
, ℓ_T ∈ [0, 1]^d of loss vectors, and for all u_1, ..., u_T ∈ R^d_+, if Algorithm 1 is run with the above update, then
  \sum_{t=1}^T ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − \sum_{t=1}^T u_t^⊤ ℓ_t ⩽ (∥u_1∥_1 \ln d)/η + (m(u_1^T)/η) \ln(d/α) + (η/8 + α) \sum_{t=1}^T ∥u_t∥_1.   (4)
This bound can be optimized by a proper tuning of the parameters α and η. We show a similarly tuned (and slightly better) bound in Corollary 1.

5 Fixed-share update

Next, we consider a different instance of the generalized share algorithm corresponding to the update
  \hat{p}_{j,t+1} = \sum_{i=1}^d ( α/d + (1 − α) 1_{i=j} ) v_{i,t+1} = α/d + (1 − α) v_{j,t+1},  0 ⩽ α ⩽ 1.   (5)
Despite seemingly different statements, this update in Algorithm 1 can be seen to lead exactly to the fixed-share algorithm of [4] for prediction with expert advice. We now show that this update delivers a bound on the regret almost equivalent to (though slightly better than) that achieved by projection on the subset ∆^α_d of the simplex.

Theorem 2. With the above update, for all T ⩾ 1, for all sequences ℓ_1, ..., ℓ_T of loss vectors ℓ_t ∈ [0, 1]^d, and for all u_1, ..., u_T ∈ R^d_+,
  \sum_{t=1}^T ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − \sum_{t=1}^T u_t^⊤ ℓ_t ⩽ (∥u_1∥_1 \ln d)/η + (η/8) \sum_{t=1}^T ∥u_t∥_1 + (m(u_1^T)/η) \ln(d/α) + \Big( \Big( \sum_{t=2}^T ∥u_t∥_1 − m(u_1^T) \Big) \Big/ η \Big) \ln(1/(1−α)).

Note that if we only consider vectors of the form u_t = q_t = (0, ..., 0, 1, 0, ..., 0), then m(q_1^T) corresponds to the number of times q_{t+1} ≠ q_t in the sequence q_1^T. We thus recover [4, Theorem 1] and [6, Lemma 6] from the much more general Theorem 2. The fixed-share forecaster does not need to "know" anything in advance about the sequence of the norms ∥u_t∥_1 for the bound above to be valid. Of course, in order to minimize the obtained upper bound, the tuning parameters α, η need to be optimized, and their values will depend on the maximal values of m(u_1^T) and \sum_{t=1}^T ∥u_t∥_1 for the sequences one wishes to compete against. This is illustrated in the following corollary, whose proof is omitted. Therein, h(x) = −x \ln x − (1 − x) \ln(1 − x) denotes the binary entropy function for x ∈ [0, 1].
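A minimal sketch of one round of Algorithm 1 with the fixed-share update (5): the loss update produces the pre-weights v_{·,t+1}, and the shared update mixes in a fraction α of the uniform distribution, which keeps every coordinate above α/d. Function names are ours; this is an illustration, not the authors' code.

```python
import math

def fixed_share_step(p_hat, loss, eta, alpha):
    """One round of fixed share: loss update (pre-weights), then shared update (5)."""
    d = len(p_hat)
    # loss update: exponential reweighting, normalized to the simplex
    w = [p * math.exp(-eta * l) for p, l in zip(p_hat, loss)]
    z = sum(w)
    v = [wi / z for wi in w]                      # pre-weights v_{., t+1}
    # shared update (5): mix with the uniform distribution
    return [alpha / d + (1 - alpha) * vi for vi in v]

p = fixed_share_step([0.25] * 4, [1.0, 0.0, 0.0, 0.0], eta=2.0, alpha=0.1)
```

The floor α/d on each weight is what allows the forecaster to "recover" quickly when the best expert changes, which is exactly the mechanism behind the shifting bound of Theorem 2.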
We recall that h(x) ⩽ x \ln(e/x) for x ∈ [0, 1], as can be seen by noting that \ln(1/(1−x)) < x/(1−x).

Corollary 1. Suppose Algorithm 1 is run with the update (5). Let m_0 > 0 and U_0 > 0. For all T ⩾ 1, for all sequences ℓ_1, ..., ℓ_T of loss vectors ℓ_t ∈ [0, 1]^d, and for all sequences u_1, ..., u_T ∈ R^d_+ with ∥u_1∥_1 + m(u_1^T) ⩽ m_0 and \sum_{t=1}^T ∥u_t∥_1 ⩽ U_0,
  \sum_{t=1}^T ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − \sum_{t=1}^T u_t^⊤ ℓ_t ⩽ \sqrt{ (U_0/2) \big( m_0 \ln d + U_0 \, h(m_0/U_0) \big) } ⩽ \sqrt{ (U_0 m_0/2) \big( \ln d + \ln(e U_0/m_0) \big) }
whenever η and α are optimally chosen in terms of m_0 and U_0.

Proof of Theorem 2. Applying Lemma 1 with q_t = u_t / ∥u_t∥_1, and multiplying by ∥u_t∥_1, we get for all t ⩾ 1 and u_t ∈ R^d_+
  ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − u_t^⊤ ℓ_t ⩽ (1/η) \sum_{i=1}^d u_{i,t} \ln(v_{i,t+1}/\hat{p}_{i,t}) + (η/8) ∥u_t∥_1.   (6)
We now examine
  \sum_{i=1}^d u_{i,t} \ln(v_{i,t+1}/\hat{p}_{i,t}) = \sum_{i=1}^d \big( u_{i,t} \ln(1/\hat{p}_{i,t}) − u_{i,t−1} \ln(1/v_{i,t}) \big) + \sum_{i=1}^d \big( u_{i,t−1} \ln(1/v_{i,t}) − u_{i,t} \ln(1/v_{i,t+1}) \big).   (7)
For the first term on the right-hand side, we have
  \sum_{i=1}^d \big( u_{i,t} \ln(1/\hat{p}_{i,t}) − u_{i,t−1} \ln(1/v_{i,t}) \big)
  = \sum_{i : u_{i,t} ⩾ u_{i,t−1}} \big( (u_{i,t} − u_{i,t−1}) \ln(1/\hat{p}_{i,t}) + u_{i,t−1} \ln(v_{i,t}/\hat{p}_{i,t}) \big)
  + \sum_{i : u_{i,t} < u_{i,t−1}} \big( (u_{i,t} − u_{i,t−1}) \ln(1/v_{i,t}) + u_{i,t} \ln(v_{i,t}/\hat{p}_{i,t}) \big),   (8)
where the terms (u_{i,t} − u_{i,t−1}) \ln(1/v_{i,t}) in the second sum are ⩽ 0. In view of the update (5), we have 1/\hat{p}_{i,t} ⩽ d/α and v_{i,t}/\hat{p}_{i,t} ⩽ 1/(1−α). Substituting in (8), we get
  \sum_{i=1}^d \big( u_{i,t} \ln(1/\hat{p}_{i,t}) − u_{i,t−1} \ln(1/v_{i,t}) \big)
  ⩽ \sum_{i : u_{i,t} ⩾ u_{i,t−1}} (u_{i,t} − u_{i,t−1}) \ln(d/α) + \Big( \sum_{i : u_{i,t} ⩾ u_{i,t−1}} u_{i,t−1} + \sum_{i : u_{i,t} < u_{i,t−1}} u_{i,t} \Big) \ln(1/(1−α))
  = D_{TV}(u_t, u_{t−1}) \ln(d/α) + \Big( \sum_{i=1}^d u_{i,t} − \sum_{i : u_{i,t} ⩾ u_{i,t−1}} (u_{i,t} − u_{i,t−1}) \Big) \ln(1/(1−α)),
where the last factor equals ∥u_t∥_1 − D_{TV}(u_t, u_{t−1}). The sum of the second term in (7) telescopes. Substituting the obtained bounds in the first sum of the right-hand side in (7), and summing over t = 2, ..., T, leads to
  \sum_{t=2}^T \sum_{i=1}^d u_{i,t} \ln(v_{i,t+1}/\hat{p}_{i,t}) ⩽ m(u_1^T) \ln(d/α) + \Big( \sum_{t=2}^T ∥u_t∥_1 − m(u_1^T) \Big) \ln(1/(1−α)) + \sum_{i=1}^d u_{i,1} \ln(1/v_{i,2}) − \sum_{i=1}^d u_{i,T} \ln(1/v_{i,T+1}),
where the last term is ⩽ 0. We hence get from (6), which we use in particular for t = 1,
  \sum_{t=1}^T \big( ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − u_t^⊤ ℓ_t \big) ⩽ (1/η) \sum_{i=1}^d u_{i,1} \ln(1/\hat{p}_{i,1}) + (η/8) \sum_{t=1}^T ∥u_t∥_1 + (m(u_1^T)/η) \ln(d/α) + \Big( \Big( \sum_{t=2}^T ∥u_t∥_1 − m(u_1^T) \Big) \Big/ η \Big) \ln(1/(1−α)).
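The distance D_TV and the regularity measure m(u_1^T) that drive the bound above transcribe directly into code (function names are ours):

```python
def d_tv(x, y):
    """Generalized total-variation distance for nonnegative vectors:
    D_TV(x, y) = sum over i with x_i >= y_i of (x_i - y_i)."""
    return sum(xi - yi for xi, yi in zip(x, y) if xi >= yi)

def shift_measure(us):
    """Regularity measure (3): m(u_1^T) = sum_{t=2}^T D_TV(u_t, u_{t-1})."""
    return sum(d_tv(us[t], us[t - 1]) for t in range(1, len(us)))
```

On the simplex, d_tv is half the 1-norm distance; on corner sequences, shift_measure counts switches, matching the classical shift of the tracking literature. For general nonnegative vectors it is asymmetric, as noted in the text.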
6 Applications

We now show how our regret bounds can be specialized to obtain bounds on adaptive and discounted regret, and on regret with time-selection functions. We show regret bounds only for the specific instance of the generalized share algorithm using update (5); but the discussion below also holds up to minor modifications for the forecaster studied in Theorem 1. Adaptive regret was introduced by [8] and can be viewed as a variant of discounted regret where the monotonicity assumption is dropped. For τ_0 ∈ {1, ..., T}, the τ_0-adaptive regret of a forecaster is defined by
  R_T^{τ_0-adapt} = \max_{[r,s] ⊆ [1,T],\; s+1−r ⩽ τ_0} \Big( \sum_{t=r}^s \hat{p}_t^⊤ ℓ_t − \min_{q ∈ ∆_d} \sum_{t=r}^s q^⊤ ℓ_t \Big).   (9)
The fact that this is a special case of (2) clearly emerges from the proof of Corollary 2 below. Adaptive regret is an alternative way to measure the performance of a forecaster against a changing environment. It is a straightforward observation that adaptive regret bounds also lead to shifting regret bounds (in terms of hard shifts). In this paper we note that these two notions of regret share an even tighter connection, as they can both be viewed as instances of the same alma mater notion of regret, i.e., the generalized shifting regret introduced in Section 3. The work [8] essentially considered the case of online convex optimization with exp-concave loss functions; in the case of general convex functions, they also mentioned that the greedy projection forecaster of [2] enjoys adaptive regret guarantees. This is obtained in much the same way as we obtain an adaptive regret bound for the fixed-share forecaster in the next result.

Corollary 2. Suppose that Algorithm 1 is run with the shared update (5). Then for all T ⩾ 1, for all sequences ℓ_1, ..., ℓ_T of loss vectors ℓ_t ∈ [0, 1]^d, and for all τ_0 ∈ {1, ..., T},
  R_T^{τ_0-adapt} ⩽ \sqrt{ (τ_0/2) \big( τ_0 \, h(1/τ_0) + \ln d \big) } ⩽ \sqrt{ (τ_0/2) \ln(e d τ_0) }
whenever η and α are chosen optimally (depending on τ_0 and T).
As mentioned in [8], standard lower bounds on the regret show that the obtained bound is optimal up to the logarithmic factors.

Proof. For 1 ⩽ r ⩽ s ⩽ T and q ∈ ∆_d, the regret in the right-hand side of (9) equals the regret considered in Theorem 2 against the sequence u_1^T defined as u_t = q for t = r, ..., s and u_t = 0 = (0, ..., 0) for the remaining t. When r ⩾ 2, this sequence is such that D_{TV}(u_r, u_{r−1}) = D_{TV}(q, 0) = 1 and D_{TV}(u_{s+1}, u_s) = D_{TV}(0, q) = 0, so that m(u_1^T) = 1, while ∥u_1∥_1 = 0. When r = 1, we have ∥u_1∥_1 = 1 and m(u_1^T) = 0. In all cases, m(u_1^T) + ∥u_1∥_1 = 1, that is, m_0 = 1. Specializing the bound of Theorem 2 with the additional choice U_0 = τ_0 gives the result.

Discounted regret was introduced in [3, Section 2.11] and is defined by
  \max_{q ∈ ∆_d} \sum_{t=1}^T β_{t,T} \big( \hat{p}_t^⊤ ℓ_t − q^⊤ ℓ_t \big).   (10)
The discount factors β_{t,T} measure the relative importance of more recent losses to older losses. For instance, for a given horizon T, the discounts β_{t,T} may be larger as t is closer to T. On the contrary, in a game-theoretic setting, the earlier losses may matter more than the more recent ones (because of interest rates), in which case β_{t,T} would be smaller as t gets closer to T. We mostly consider below monotonic sequences of discounts (both non-decreasing and non-increasing). Up to a normalization, we assume that all discounts β_{t,T} are in [0, 1]. As shown in [3], a minimal requirement to get nontrivial bounds is that the sum of the discounts satisfies U_T = \sum_{t ⩽ T} β_{t,T} → ∞ as T → ∞. A natural objective is to show that the quantity in (10) is o(U_T), for instance, by bounding it by something of the order of \sqrt{U_T}. We claim that Corollary 1 does so, at least whenever the sequences (β_{t,T}) are monotonic for all T. To support this claim, we only need to show that m_0 = 1 is a suitable value to deal with (10).
Indeed, for all T ⩾ 1 and for all q ∈ ∆_d, the measure of regularity involved in the corollary satisfies
  ∥β_{1,T} \, q∥_1 + m\big( (β_{t,T} \, q)_{t ⩽ T} \big) = β_{1,T} + \sum_{t=2}^T \big( β_{t,T} − β_{t−1,T} \big)_+ = \max\{ β_{1,T}, β_{T,T} \} ⩽ 1,
where the second equality follows from the monotonicity assumption on the discounts. The values of the discounts for all t and T are usually known in advance. However, the horizon T is not; hence, a calibration issue may arise. The online tuning of the parameters α and η shown in Section 7.3 entails a forecaster that can get discounted regret bounds of the order of \sqrt{U_T} for all T. The fundamental reason for this is that the discounts only come in the definition of the fixed-share forecaster via their sums. In contrast, the forecaster discussed in [3, Section 2.11] weighs each instance t directly with β_{t,T} (i.e., in the very definition of the forecaster) and therefore enjoys no regret guarantees for horizons other than T (neither before T nor after T). Therein, the knowledge of the horizon T is so crucial that it cannot be dealt with easily, not even with online calibration of the parameters or with a doubling trick. We insist that for the fixed-share forecaster, much flexibility is gained as some of the discounts β_{t,T} can change in a drastic manner from a round T to values β_{t,T+1} for the next round. However, we must admit that the bound of [3, Section 2.11] is smaller than the one obtained above, as it is of the order of \sqrt{ \sum_{t ⩽ T} β_{t,T}^2 }, in contrast to our \sqrt{ \sum_{t ⩽ T} β_{t,T} } bound. Again, this improvement was made possible because of the knowledge of the time horizon. As for the comparison to the setting of discounted losses of [9], we note that the latter can be cast as a special case of our setting (since the discounting weights take the special form β_{t,T} = γ_t · · · γ_{T−1} therein, for some sequence (γ_s) of positive numbers). In particular, the fixed-share forecaster can satisfy the bound stated in [9, Theorem 2], for instance, by using the online tuning techniques of Section 7.3.
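The one-line regularity computation above is easy to check numerically. The helper below (ours, for illustration only) evaluates ∥β_{1,T} q∥_1 + m((β_{t,T} q)_{t⩽T}) for a scalar discount sequence and a fixed q in the simplex; for monotone discounts it equals max{β_{1,T}, β_{T,T}}.

```python
def discounted_regularity(betas, q):
    """Compute ||beta_1 * q||_1 + sum_{t>=2} D_TV(beta_t * q, beta_{t-1} * q)
    for a fixed q in the simplex and scalar discounts beta_t in [0, 1]."""
    def d_tv(x, y):
        # generalized total-variation distance on nonnegative vectors
        return sum(xi - yi for xi, yi in zip(x, y) if xi >= yi)
    us = [[b * qi for qi in q] for b in betas]
    return sum(us[0]) + sum(d_tv(us[t], us[t - 1]) for t in range(1, len(us)))
```

For an increasing sequence the positive increments telescope to β_{T,T}; for a decreasing one every increment vanishes and only β_{1,T} remains, which is the max-formula in the display above.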
A final reference to mention is the setting of time-selection functions of [10, Section 6], which basically corresponds to knowing in advance the weights ∥u_t∥_1 of the comparison sequence u_1, ..., u_T the forecaster will be evaluated against. We thus generalize their results as well.

7 Refinements and extensions

We now show that techniques for refining the standard online analysis can be easily applied to our framework. We focus on the following: improvement for small losses, sparse target sequences, and dynamic tuning of parameters. Not all of them were within reach of previous analyses.

7.1 Improvement for small losses

The regret bounds of the fixed-share forecaster can be significantly improved when the cumulative loss of the best sequence of experts is small. The next result improves on Corollary 1 whenever L_0 ≪ U_0. For concreteness, we focus on the fixed-share update (5).

Corollary 3. Suppose Algorithm 1 is run with the update (5). Let m_0 > 0, U_0 > 0, and L_0 > 0. For all T ⩾ 1, for all sequences ℓ_1, ..., ℓ_T of loss vectors ℓ_t ∈ [0, 1]^d, and for all sequences u_1, ..., u_T ∈ R^d_+ with ∥u_1∥_1 + m(u_1^T) ⩽ m_0, \sum_{t=1}^T ∥u_t∥_1 ⩽ U_0, and \sum_{t=1}^T u_t^⊤ ℓ_t ⩽ L_0,
  \sum_{t=1}^T ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − \sum_{t=1}^T u_t^⊤ ℓ_t ⩽ \sqrt{ L_0 \, m_0 \big( \ln d + \ln(e U_0/m_0) \big) } + \ln d + \ln(e U_0/m_0)
whenever η and α are optimally chosen in terms of m_0, U_0, and L_0.

Here again, the parameters α and η may be tuned online using the techniques shown in Section 7.3. The above refinement is obtained by mimicking the analysis of Hedge forecasters for small losses (see, e.g., [3, Section 2.4]). In particular, one should substitute Lemma 1 with the following lemma in the analysis carried out in Section 5; its proof follows from the mere replacement of Hoeffding's inequality by [3, Lemma A.3], which states that for all η ∈ R and for all random variables X taking values in [0, 1], one has \ln E[e^{−ηX}] ⩽ (e^{−η} − 1) E[X].

Lemma 2. Algorithm 1 satisfies, for all q_t ∈ ∆_d,
  \big( (1 − e^{−η})/η \big) \, \hat{p}_t^⊤ ℓ_t − q_t^⊤ ℓ_t ⩽ (1/η) \sum_{i=1}^d q_{i,t} \ln(v_{i,t+1}/\hat{p}_{i,t}).
7.2 Sparse target sequences

The work [6] introduced forecasters that are able to efficiently compete with the best sequence of experts among all those sequences that only switch a bounded number of times and also take a small number of different values. Such "sparse" sequences of experts appear naturally in many applications. In this section we show that their algorithms in fact work very well in comparison with a much larger class of sequences u_1, ..., u_T that are "regular" (that is, m(u_1^T), defined in (3), is small) and "sparse" in the sense that the quantity n(u_1^T) = \sum_{i=1}^d \max_{t=1,...,T} u_{i,t} is small. Note that when q_t ∈ ∆_d for all t, two interesting upper bounds can be provided. First, denoting the union of the supports of these convex combinations by S ⊆ {1, ..., d}, we have n(q_1^T) ⩽ |S|, the cardinality of S. Also, n(q_1^T) ⩽ |{q_t : t = 1, ..., T}|, the cardinality of the pool of convex combinations. Thus, n(u_1^T) generalizes the notion of sparsity of [6]. Here we consider a family of shared updates of the form
  \hat{p}_{j,t} = (1 − α) v_{j,t} + α \, w_{j,t}/Z_t,  0 ⩽ α ⩽ 1,   (11)
where the w_{j,t} are nonnegative weights that may depend on past and current pre-weights and Z_t = \sum_{i=1}^d w_{i,t} is a normalization constant. Shared updates of this form were proposed by [6, Sections 3 and 5.2]. Apart from generalizing the regret bounds of [6], we believe that the analysis given below is significantly simpler and more transparent. We are also able to slightly improve their original bounds. We focus on choices of the weights w_{j,t} that satisfy the following conditions: there exists a constant C ⩾ 1 such that for all j = 1, ..., d and t = 1, ..., T,
  v_{j,t} ⩽ w_{j,t} ⩽ 1  and  C \, w_{j,t+1} ⩾ w_{j,t}.   (12)
The next result improves on Theorem 2 when T ≪ d and n(u_1^T) ≪ m(u_1^T), that is, when the dimension (or number of experts) d is large but the sequence u_1^T is sparse. Its proof can be found in the supplementary material; it is a variation on the proof of Theorem 2.

Theorem 3.
Suppose Algorithm 1 is run with the shared update (11), with weights satisfying the conditions (12). Then for all T ⩾ 1, for all sequences ℓ_1, ..., ℓ_T of loss vectors ℓ_t ∈ [0, 1]^d, and for all sequences u_1, ..., u_T ∈ R^d_+,
  \sum_{t=1}^T ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − \sum_{t=1}^T u_t^⊤ ℓ_t ⩽ (n(u_1^T) \ln d)/η + (n(u_1^T) \, T \ln C)/η + (η/8) \sum_{t=1}^T ∥u_t∥_1 + (m(u_1^T)/η) \ln\big( \max_{t ⩽ T} Z_t / α \big) + \Big( \Big( \sum_{t=2}^T ∥u_t∥_1 − m(u_1^T) \Big) \Big/ η \Big) \ln(1/(1−α)).

Corollaries 8 and 9 of [6] can now be generalized (and even improved); we do so, in the supplementary material, by showing two specific instances of the generic update (11) that satisfy (12).

7.3 Online tuning of the parameters

The forecasters studied above need their parameters η and α to be tuned according to various quantities, including the time horizon T. We show here how the trick of [11] of having these parameters vary over time can be extended to our setting. For the sake of concreteness we focus on the fixed-share update, i.e., Algorithm 1 run with the update (5). We respectively replace steps 3 and 4 of its description by the loss and shared updates
  v_{j,t+1} = \hat{p}_{j,t}^{η_t/η_{t−1}} e^{−η_t ℓ_{j,t}} \Big/ \sum_{i=1}^d \hat{p}_{i,t}^{η_t/η_{t−1}} e^{−η_t ℓ_{i,t}}  and  \hat{p}_{j,t+1} = α_t/d + (1 − α_t) v_{j,t+1},   (13)
for all t ⩾ 1 and all j ∈ {1, ..., d}, where (η_τ) and (α_τ) are two sequences of positive numbers, indexed by τ ⩾ 1. We also conventionally define η_0 = η_1. Theorem 2 is then adapted in the following way (when η_t ≡ η and α_t ≡ α, Theorem 2 is exactly recovered).

Theorem 4. The forecaster based on the updates (13) is such that whenever η_t ⩽ η_{t−1} and α_t ⩽ α_{t−1} for all t ⩾ 1, the following performance bound is achieved: for all T ⩾ 1, for all sequences ℓ_1, ..., ℓ_T of loss vectors ℓ_t ∈ [0, 1]^d, and for all u_1, ..., u_T ∈ R^d_+,
  \sum_{t=1}^T ∥u_t∥_1 \hat{p}_t^⊤ ℓ_t − \sum_{t=1}^T u_t^⊤ ℓ_t ⩽ \Big( ∥u_1∥_1/η_1 + \sum_{t=2}^T ∥u_t∥_1 \big( 1/η_t − 1/η_{t−1} \big) \Big) \ln d + (m(u_1^T)/η_T) \ln\big( d(1−α_T)/α_T \big) + \sum_{t=2}^T (∥u_t∥_1/η_{t−1}) \ln(1/(1−α_t)) + \sum_{t=1}^T (η_{t−1}/8) ∥u_t∥_1.

Due to space constraints, we provide an illustration of this bound only in the supplementary material.
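One round of the update (13) with time-varying parameters can be sketched as follows (function names are ours; the η_t/η_{t−1} exponent on the previous weights is the only change from the fixed-parameter loss update):

```python
import math

def tuned_fixed_share_step(p_hat, loss, eta_t, eta_prev, alpha_t):
    """Fixed share with time-varying parameters, as in update (13):
    v_{j,t+1} proportional to p_hat_{j,t}^(eta_t/eta_{t-1}) * exp(-eta_t * l_{j,t}),
    then the shared update mixes in alpha_t of the uniform distribution."""
    d = len(p_hat)
    r = eta_t / eta_prev
    w = [(p ** r) * math.exp(-eta_t * l) for p, l in zip(p_hat, loss)]
    z = sum(w)
    return [alpha_t / d + (1 - alpha_t) * wi / z for wi in w]
```

With eta_t == eta_prev and alpha_t constant, this reduces to the fixed-parameter update (5), consistent with the remark that Theorem 2 is exactly recovered in that case.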
Acknowledgments

The authors acknowledge support from the French National Research Agency (ANR) under grant EXPLO/RA ("Exploration–exploitation for efficient resource allocation") and by the PASCAL2 Network of Excellence under EC grant no. 506778.

References
[1] M. Herbster and M. Warmuth. Tracking the best linear predictor. Journal of Machine Learning Research, 1:281–309, 2001.
[2] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML 2003), 2003.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[4] M. Herbster and M. Warmuth. Tracking the best expert. Machine Learning, 32:151–178, 1998.
[5] V. Vovk. Derandomizing stochastic prediction strategies. Machine Learning, 35(3):247–282, Jun. 1999.
[6] O. Bousquet and M.K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3:363–396, 2002.
[7] A. György, T. Linder, and G. Lugosi. Tracking the best of many experts. In Proceedings of the 18th Annual Conference on Learning Theory (COLT), pages 204–216, Bertinoro, Italy, Jun. 2005. Springer.
[8] E. Hazan and C. Seshadhri. Efficient learning algorithms for changing environments. In Proceedings of the 26th International Conference on Machine Learning (ICML), 2009.
[9] A. Chernov and F. Zhdanov. Prediction with expert advice under discounted loss. In Proceedings of the 21st International Conference on Algorithmic Learning Theory (ALT 2010), pages 255–269. Springer, 2010.
[10] A. Blum and Y. Mansour. From external to internal regret. Journal of Machine Learning Research, 8:1307–1324, 2007.
[11] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64:48–75, 2002.
Semi-Supervised Domain Adaptation with Non-Parametric Copulas David Lopez-Paz MPI for Intelligent Systems dlopez@tue.mpg.de José Miguel Hernández-Lobato University of Cambridge jmh233@cam.ac.uk Bernhard Schölkopf MPI for Intelligent Systems bs@tue.mpg.de

Abstract

A new framework based on the theory of copulas is proposed to address semi-supervised domain adaptation problems. The presented method factorizes any multivariate density into a product of marginal distributions and bivariate copula functions. Therefore, changes in each of these factors can be detected and corrected to adapt a density model across different learning domains. Importantly, we introduce a novel vine copula model, which allows for this factorization in a non-parametric manner. Experimental results on regression problems with real-world data illustrate the efficacy of the proposed approach when compared to state-of-the-art techniques.

1 Introduction

When humans address a new learning problem, they often use knowledge acquired while learning different but related tasks in the past. For example, when learning a second language, people rely on grammar rules and word derivations from their mother tongue. This is called language transfer [19]. However, in machine learning, most of the traditional methods are not able to exploit similarities between different learning tasks. These techniques only achieve good performance when the data distribution is stable between training and test phases. When this is not the case, it is necessary to a) collect and label additional data and b) re-run the learning algorithm. However, these operations are not affordable in most practical scenarios. Domain adaptation, transfer learning, or multitask learning frameworks [17, 2, 5, 13] confront these issues by, first, building a notion of task relatedness and, second, providing mechanisms to transfer knowledge between similar tasks.
Generally, we are interested in improving predictive performance on a target task by using knowledge obtained when solving another related source task. Domain adaptation methods are concerned with what knowledge we can share between different tasks, how we can transfer this knowledge, and when we should do so or not, to avoid additional damage [4]. In this work, we study semi-supervised domain adaptation for regression tasks. In these problems, the object of interest (the mechanism that maps a set of inputs to a set of outputs) can be stated as a conditional density function. The data available for solving each learning task is assumed to be sampled from modified versions of a common multivariate distribution. Therefore, we are interested in sharing the "common pieces" of this generative model between tasks, and using the data from each individual task to detect, learn, and adapt the varying parts of the model. To do so, we must find a decomposition of multivariate distributions into simpler building blocks that may be studied separately across different domains. The theory of copulas provides such representations [18]. Copulas are statistical tools that factorize multivariate distributions into the product of their marginals and a function that captures any possible form of dependence among them. This function is referred to as the copula, and it links the marginals together into the joint multivariate model. First introduced by Sklar [22], copulas have been successfully used in a wide range of applications, including finance, time series, and natural phenomena modeling [12]. Recently, a new family of copulas named vines has gained interest in the statistics literature [1]. These are methods that factorize multivariate densities into a product of marginal distributions and bivariate copula functions. Each of these factors corresponds to one of the building blocks that we assume either constant or varying across different learning domains.
The contributions of this paper are two-fold. First, we propose a non-parametric vine copula model which can be used as a high-dimensional density estimator. Second, by making use of this method, we present a new framework to address semi-supervised domain adaptation problems, whose performance is validated in a series of experiments with real-world data against competing state-of-the-art techniques. The rest of the paper is organized as follows: Section 2 provides a brief introduction to copulas, and describes a non-parametric estimator for the bivariate case. Section 3 introduces a novel non-parametric vine copula model, which is formed by the described bivariate non-parametric copulas. Section 4 describes a new framework to address semi-supervised domain adaptation problems using the proposed vine method. Finally, Section 5 describes a series of experiments that validate the proposed approach on regression problems with real-world data.

2 Copulas

When the components of x = (x_1, ..., x_d) are jointly independent, their density function p(x) can be written as
  p(x) = \prod_{i=1}^d p(x_i).   (1)
This equality does not hold when x_1, ..., x_d are not independent. Nevertheless, the differences can be corrected if we multiply the right-hand side of (1) by a specific function that fully describes any possible dependence between x_1, ..., x_d. This function is called the copula of p(x) [18] and satisfies
  p(x) = \prod_{i=1}^d p(x_i) \cdot c(P(x_1), ..., P(x_d)),   (2)
where the second factor is the copula. The copula c is the joint density of P(x_1), ..., P(x_d), where P(x_i) is the marginal cdf of the random variable x_i. This density has uniform marginals, since P(z) ∼ U[0, 1] for any continuous random variable z. That is, when we apply the transformation P(x_1), ..., P(x_d) to x_1, ..., x_d, we are eliminating all information about the marginal distributions.
Therefore, the copula captures any distributional pattern that does not depend on the specific form of the marginals, or, in other words, all the information regarding the dependencies between x_1, ..., x_d. When P(x_1), ..., P(x_d) are continuous, the copula c is unique [22]. However, infinitely many multivariate models share the same underlying copula function, as illustrated in Figure 1. The main advantage of copulas is that they allow us to model separately the marginal distributions and the dependencies linking them together to produce the multivariate model subject of study. Given a sample from (2), we can estimate p(x) as follows. First, we construct estimates of the marginal pdfs, \hat{p}(x_1), ..., \hat{p}(x_d), which also provide estimates of the corresponding marginal cdfs, \hat{P}(x_1), ..., \hat{P}(x_d). These cdf estimates are used to map the data to the d-dimensional unit hypercube. The transformed data are then used to obtain an estimate \hat{c} for the copula of p(x). Finally, (2) is approximated as
  \hat{p}(x) = \prod_{i=1}^d \hat{p}(x_i) \, \hat{c}(\hat{P}(x_1), ..., \hat{P}(x_d)).   (3)
The estimation of marginal pdfs and cdfs can be implemented in a non-parametric manner by using unidimensional kernel density estimates. By contrast, it is common practice to assume a parametric model for the estimation of the copula function. Some examples of parametric copulas are Gaussian, Gumbel, Frank, Clayton, or Student copulas [18]. Nevertheless, real-world data often exhibit complex dependencies which cannot be correctly described by these parametric copula models. This lack of flexibility of parametric copulas is illustrated in Figure 2.
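A two-dimensional sketch of the recipe leading to (3): map the data through estimated marginal cdfs, then evaluate a copula density on the resulting uniforms. For brevity this sketch uses a parametric Gaussian copula, precisely the kind of restrictive choice the paper argues against and later replaces with a non-parametric estimator; all names are ours.

```python
import math
from statistics import NormalDist

N = NormalDist()

def ecdf(sample):
    """Empirical marginal cdf estimate, rescaled to (0, 1) so that the
    probit transform below stays finite."""
    n = len(sample)
    s = sorted(sample)
    def P(x):
        rank = sum(1 for v in s if v <= x)
        return (rank + 0.5) / (n + 1)
    return P

def gaussian_copula_density(u, v, rho):
    """Density of the bivariate Gaussian copula with correlation rho,
    evaluated at uniforms (u, v): phi2(a, b; rho) / (phi(a) * phi(b))
    with a = Phi^{-1}(u), b = Phi^{-1}(v)."""
    a, b = N.inv_cdf(u), N.inv_cdf(v)
    det = 1.0 - rho * rho
    expo = -(rho * rho * (a * a + b * b) - 2.0 * rho * a * b) / (2.0 * det)
    return math.exp(expo) / math.sqrt(det)
```

With rho = 0 the copula density is identically 1, recovering the independent factorization (1); the marginal cdf estimates carry all the information the copula has deliberately discarded.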
As an alternative, we propose
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 
G G G G G G G G G G G G G G G 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 0 2 4 6 −2 0 2 G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G GG G G G G G G G G G G GG G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G 
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G 0.0 2.5 5.0 0.0 2.5 5.0 Figure 1: Left, sample from a Gaussian copula with correlation ρ = 0.8. Middle and right, two samples drawn from multivariate models with this same copula but different marginal distributions, depicted as rug plots. GGG G G G GGGG G GG G GG G G GGGG G GG G GG GG G G GG GGG GGGGGG G G GG G G G GG G G G G G G G G GG G G G G GG G G G GG G GG G G GG G G G GGG G G G G G GG GG G GG G G G G G G G G G G G G G GG G G G G GG G GG GGGG GG G G G G G GGG G GG G GGG G GG G GG GG G G G G GG G GGGG G GG G G G G G GG G G G G G G GG G G G GG G G GG G G G GGG GGG G G G G G G GG G G G GG G G GG GG G G G G G G G G G G GG G G G GG G GG G G G G G GG G G G G GG G G G G G G G GG G G G G G G G G GG G G GGG GG G G GG G G GGG G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G GG GGG GGGGG GG G G G G G GG GG G G GGG G GG GGG G G G G G G GGGG GG GG G GGG G G G G G G G G G GG G GGG G G G G G GGG G G G G G GG GG G G G G GG GG G G GG G G G GG GG G G GG G GG GG GG GG G G G G G G G G GGGG G G G G G G G G G G G G G G GGG GG G GG G GG G G G G G G GG G G G G G G GG G G G GG GGG GGG G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G GG G G G GG G G G G G GG G G G G G G G G G G G GGG G G GG G G G GGG G G G G G G G G G G G G G GG G G G G G G G G G G GG G G GG G G G G GG G G G G G G G G G GG G G G G G G G G G G G G G G GG G GG G GG G GG G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GGG G GG G G G G G G G G G G GGG G G G G G G GG GG G G G G G G GG G G G G G G G G GG G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G GG G G G G G G GG G G G G G G G G G G G G GG G G G G G GG G G G G GG G G G G GG G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G 
G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GGG G G G G G G G G G G G G G G G G G GGG G G G GG G G G G GG G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G GG G G G G G G G G G G GG GG G G G G G G G G G G G G G G G G GG G G G G G G G G G GG G G G G G G GG G G G GG G G G G GG G G G G G G G G G G G G G G GG G G G G G GG G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G GG G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G GG G G G G G G GG G G G G G G GG GG G GG G G G G GG G G G G G G G GG G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G GG G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G GG G G G G G G G G G GG G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G GG G G GG G G GGG G G G G G G G G G G G G G G G G G G G G GGG G G G G G G G G GG G G G G GG G G GG G G G G G G G G G G G GGGG G G GG G G G G G G GG G GG G G G GG G G GG G G G G G G G G G G GG G G G G G G G G G G GG G G G GG G G G G G G G G G G G G G G G G G G G GG G GG G G G G G G G G G G G G G GGG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G 
G G G G G GG G G G GG GG G G GG G G GGGG G G G G G G G G G G G GG G G GG G GG G GGGG G G GG G G G G GG G G G GG G G G G G G G GG G G G G G GGG G G G G G GG GG G G G G GG G G G G G G G G G G G G G G GG G GG GGGG G G G G G G G G G G G G GG G G G G G G G G G G G G G G G GG G GG G G GG G G G GG GGG G G GGG GG G G G GG GG G G G G G GGGG G GG GGGG G G GG GG GG G G GG G G G G G GG G G GGG G G G G G GG G GG G G G G G G GG G G G G GG G G GGGG GGGG G G G G G G G G G G GG G GGG G G G GG G G G GG GG GG G GG G GG GG GG G G GGG G G G GG G G GG GG G G G G G GG G GG G G G GG GG G G GGGGG G G GG G G GGG GG G G G G GG GG G G GGG GGG G GG G GG GG G G G G GG GG G GG G G G G G G GG GG G G GGGG GGGG GG G G G G GG G GGG G G GG GG G GGGGG G G GGG G G G G G G G G GG G G G G G G G G G G G GG G G G GG GG GG G GGG G G GG GGG G G G GG G GG GGG GG GG G G G GG G GG GG G GG G G G GGG G G G G GG G G GG G G GG G G G G G G GG G G G GGG G G G G G G G G GG G G G GGGG G GG G G G GG GGG G GG GG GGGG GG G G GGG G G G G G G G G GG G GG G GG G G G G G G G G GGG G G G G G G GGG G G G GG G G GG G G GG G G G G GG G G G GG GG G GG G G GGG G GGG G G GG G G G GG G G G G G GG GGGG G G G G G G GG G GG G G G G G GG G G G G GG G GGG GGGGGGG G G G G G G GG G G G G GG G G G G G G G GG G G G G GG G G G G G G G G G G G G GGG G GG G GG G G GG GG GG GGG G G GGGG G G GG G G G GG GG G G G GG G G G G G G G G G GG G G GG G GG G GG G G G G GG GGG G G G GG G G G G G G G G G GG G G G G G GG G GG G G G G GG GGG GGG G G G G GG G GGGGG GG GG GG GG G G G G G GG G G G GG GGG G G GG GGG G GG G GG GGGG GGG G G G G G GG GGG GG G GGG G G G G G G GG GGG G GGGG G G G G GG GG GG G G GG G G GG G GG GG GG G G GG G GG GGG G G G G G G G G GG GG GG GG GGG G G G G GG GG G G G G G G G GG G G G G GGG G GGGG G G GGG G G G G G GG GGG G GGG G GG GG G GG GG GG GGGGG G GGG GG GG GG GG G GG GGGG G G GG G G G G G G GGG G GG G G GG GGG G G G GG GGG GGG GG G GGG GGGG G G GGG GGG G G G GGGGG GGGG G GG G G G G G G G G G G G GG G G GG GG GGG G GGG GGG G GG GG 
G GG G G GGG G GG G G G GG G G G G G GGG GG G GGGG G GGGG G GG G G G G G GG G GGG GGG G G GGG G GG G GGG G G G G GG GG GGGG GG G G G GGG G GG G G G G G GG G G G G G G G G G G GG GGGGG G G GG GG GG GG GGG GG GG GG G GGG GG GG GG GGGG G G G GG GG G G GG GG G G G G G GGGGGGGG GG G G G GG G GG G GGGG G G GGGG GG GG G GG G GG G GG GGGGG G GG G G GGG GG GG GGG G GGG G GG G G GG GGG G GG GGG G G GGG G GG GGG G GGG GGG G G G G G G GG GGG G GG G G GGGGGGG GG GGG G GG GG GG G GGG GGG G G GG GG G G GG GGG GGG GGG G G GGG G GGGGG GGG G GG GGG GG G GG GG GG G GG GG G GGG G G G GG G G GGGG G GGG GG GG G GG GG GGGG GG G GG GG GGG GG G GGG GG GGGG GG GG G GG GGG GG GGGGG G GG G G GG GGG G GG G GGGGGG G G G GGG G GGG GGG GG GG G GG GG G G G G GGG GG GG G GG GGG GG GG GGG GG G GGG GGGG G G G GG GGGGG G GGG GG GGG G GGG G G GG GGG GGG GGG G G GG GG G GGG G G GGGG GGGGG GGG G GG GG G GG GG G GG G G GGGGG G G G GGG GGGGG G G GGG G GG GG G G GGG GG G G G G G G GG GGG GG GGG GG G GGG G G GGG G GG G G GG G G GGG GG GG GG GGG G GGG GG GGGG G GG G G G G G G G GG GGG G G GGGG GG G GGGG G G GG GGGG G GG G GGG G G G GG GG G G G G GG G G G G G GG G GGGG GGG GGG G G GG GG GG G G G G GG G GG GG GG GG GG GGG GGGG GGG GGGGGGGG GGG GG G G G G G G G G G G G G GGG G G GG GGG G GGGGGG G GG GGGG GGG GG G GGGGGGG GG G GG G GG G GG G GG GGGGGG GG GG GGGG GGG GG GGG GGG GG G G GG G G GGGG GGGG G GG GG G GG G G G GG G GG GG G GGG G G G G GGG GG G GGG GG GGGG GG GG GGGG GG GGG G GG G GG GG G GGGG GG G GG GG GG G GGGGGG G GG G GG G GG G GGGGGG G GGGG GG GG G G GG GGGG G GG GG G GG G GG GG G GG G GGG G G GG G GG G G G GG G GGG G G G GG GG G G G GGGG G GGG GG G GG G G GG G G G G GG G G G GGGGGG G G G GGGG GG G GG G G G G GG GG GGGG GG G G G G G G G G GGGGG G G G GGG G G G G GGG GG GG G G G G GG GG G GG G G G G GG GG G GGG GG GG G G G G GGG GGG GG G GG GGGG GG G G G G GGG G G G GG G G G GG G G G G G G GG GG GG G G G GG G GG G G G G GG G GGGG G G G G G G GG GG G G GGG GG GGG G G G G GG G GG G GGGG GG GG GG G GG 
GG G G G GG GGG GGGGG G G GG GGG G GG G G G G G G G GG GG G G G GG G G G G GG GG G GG G G G G GG G G GG GG G GG G G G G G G G GG G G G G G GG G G G GG G G GG G G GG G G G G G GG G G G G G G G G G G G GG GG G G G G G G G GG G G G G G G G GG G G G G GG G G G G G G G G GG G G G GG G G G GG G G GGG G G G G G G G GG G GG G G G GG G G G G G GGG GG G G G GG G GGG G G GGGG GG G GGG GGGG G G G G G G G G GG GG G G GGG GG G G GGG G G GG G GGG G G G GG GGG G G G G G G GGG G G G G G G G G GGGG GGG G G G GGG GG G G G G GGG G G G GG GG GG GGG G G GG GG GG G GG G G G GG GG G G G G G G G GG GGGG G GG G GGG G GG G G G G G GG G G G G GG GG G G GG GG G G GG G GGG GGG G G G GG GG G G G G G G G G G G G GGG G G GGGG GG GG G G G G GG G G G GGG GGGGG G G G G G G G GG G G G G G G G G G G G G G G G GG G G G G GG G G G G GGG GG G G GG GG GGG G GG GG G G G G G G G G GG G G G G GG G G G G G G GG G G GG G G G G G G G G G G G G GG G G G G GG GG G GGG G G GG G G G G G G G GG GG G G GG G G G GGG G G G GG G G GGG GG GG G G G GG G G G G GGG G G G GG G G G G G GG G G G GG GG G G GGG GG G G G G GGG G G GGG G G GGGG GG G G GG G G G G G G G G GG G GGG GGG G G GG G G GGG G G G G G G GG G G G G G G G G G GG G G GG GGG G G G GGG G G G G G G G G G G GG G G G G G G GG G G GG G G G G G G G G GGG GG GGG G G G G GGGGG G G G GG G G G G G G G G G GG G G G G G GG G G GG G GG GG G G G G G G GGG G G G G GG G G G G G G G G GG G G G GG G GG GG G G GGG G G GGG GG G G GG G GGG GGG G G G G G GG G G G G G GG G G G GG G GG G GG G GG G G G G G G GG G G GGG G G G G G G G GG GG G G G G G GG G G G GG G G GGG G GG G GG G GG G GG G GG G G GG GG G G GGGGG GG G GG GG G GG G G GG GG G GGGG G GGG GGG GGGG G GG G GG G GG G G G G GG GG G G G G G GGGG GGGG GG G GGG GGG G GG GG GG G G G G G G G G GG G GG G GG G G GG GG GG G G GG G G G G GGG GG GG GGGG GG GG G G G G GG GG G GG GG G G G G GGG G G G G GG G G GG GG G G G G G G G G GG GG GG GG GGGG GGG GG GGGGG G G G G G G G GG G GG G G G G G G G G GG GGGG G G G G G GGG GG GG G G G G G G G G 
G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G GG GGG G G G GG G G G G G G GG G G G G G G G G G G G G GG GG G G G G G G G G G G G G G G G G G GGG G G G G G G G G G G G G G GG G GG G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G GGG G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G GGG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GGG G G G G G GG G G GG GGG GG G G G G G G G G G G G G G G G G G G GG G G G G G GG G G GG GG G G G G G G G G G GGG G G G G G G G G G G G G G G G G GG G G G G G G GG GG G G G G G G G G G G G G GG G G GG GG G G G G GG G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G GG G GG GG GG G G G G G G G G G G G G G G G GG G G G G G G G G G GG G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G GG G G GG G G G G G G G G G G G G G G G G G G G GG G GG G G GG GG G G GG G G G G G G G G G G G GG G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G GG G G G G G G GG G G G G GG GG G G G G G G G G G G G GG GG G G G G G G G G G G G G G G G GGG G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G GG G G G G G G G G G G G GG G G G G G G G G G G GG G G G G G G G G G G G G G G GG G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G 
G G G G G G G G G GG G G G G G G G G G G G G G G G G G GG G G G G G G G GG G G G G G G G G G G GG G G G G G G G G G G G G G GG G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GGG G G G GG G G G G G G G G G GGG G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G G GG G G G GG G GG G G G G G G G G G G G G G GG G GG G G G G G G GG G G G G G G G G GG G G G G G G G G G G G G G G G G G G G G G G G GG G G G G GG GG G G G G G G G GG G G G G G GG G G G GG G G G G GG G GG G G GG GG G GGG G 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 0 25 50 75 100 0 25 50 75 100 0 25 50 75 100 0 25 50 75 100 Figure 2: Left, sample from the copula linking variables 4 and 11 in the WIRELESS dataset. Middle, density estimate generated by a Gaussian copula model when fitted to the data. This technique is unable to capture the complex patterns present in the data. Right, copula density estimate generated by the non-parametric method described in section 2.1. to approximate the copula function in a non-parametric manner. Kernel density estimates can also be used to generate non-parametric approximations of copulas, as described in [8]. The following section reviews this method for the two-dimensional case. 2.1 Non-parametric Bivariate Copulas We now elaborate on how to non-parametrically estimate the copula of a given bivariate density p(x, y). Recall that this density can be factorized as the product of its marginals and its copula p(x, y) = p(x) p(y) c(P(x), P(y)). 
(4)

Additionally, given a sample {(x_i, y_i)}_{i=1}^n from p(x, y), we can obtain a pseudo-sample from its copula c by mapping each observation to the unit square using estimates of the marginal cdfs, namely

    \{(u_i, v_i)\}_{i=1}^n := \{(\hat{P}(x_i), \hat{P}(y_i))\}_{i=1}^n.   (5)

These are approximate observations from the uniformly distributed random variables u = P(x) and v = P(y), whose joint density is the copula function c(u, v). We could try to approximate this density function by placing Gaussian kernels on each observation (u_i, v_i). However, the resulting density estimate would have support on R², while the support of c is the unit square. A solution is to perform the density estimation in a transformed space. For this, we select some continuous distribution with support on R, strictly positive density φ, cumulative distribution Φ and quantile function Φ⁻¹. Let z and w be two new random variables given by z = Φ⁻¹(u) and w = Φ⁻¹(v). Then, the joint density of z and w is

    p(z, w) = \varphi(z)\, \varphi(w)\, c(\Phi(z), \Phi(w)).   (6)

The copula of this new density is identical to the copula of (4), since the performed transformations are marginal-wise. The support of (6) is now R²; therefore, we can approximate it with Gaussian kernels. Let z_i = Φ⁻¹(u_i) and w_i = Φ⁻¹(v_i). Then,

    \hat{p}(z, w) = \frac{1}{n} \sum_{i=1}^{n} N(z, w \mid z_i, w_i, \Sigma),   (7)

where N(·, · | ν₁, ν₂, Σ) is a two-dimensional Gaussian density with mean (ν₁, ν₂) and covariance matrix Σ. For convenience, we select φ, Φ and Φ⁻¹ to be the standard Gaussian pdf, cdf and quantile function, respectively. Finally, the copula density c(u, v) is approximated by combining (6) with (7):

    \hat{c}(u, v) = \frac{\hat{p}(\Phi^{-1}(u), \Phi^{-1}(v))}{\varphi(\Phi^{-1}(u))\, \varphi(\Phi^{-1}(v))} = \frac{1}{n} \sum_{i=1}^{n} \frac{N(\Phi^{-1}(u), \Phi^{-1}(v) \mid \Phi^{-1}(u_i), \Phi^{-1}(v_i), \Sigma)}{\varphi(\Phi^{-1}(u))\, \varphi(\Phi^{-1}(v))}.   (8)

3 Regular Vines

The method described above can be generalized to the estimation of copulas of more than two random variables.
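Before generalizing to higher dimensions, the bivariate estimator of equations (5)–(8) can be sketched numerically. The following is a minimal illustration, not the authors' implementation; the fixed bandwidth matrix and the rank-based marginal cdf estimate are assumptions made here for concreteness.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def pseudo_observations(x, y):
    """Pseudo-sample from the copula (eq. 5): map each observation to the unit
    square with the empirical marginal cdfs. Ranks are divided by n + 1 so the
    result stays strictly inside (0, 1)."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1.0) / (n + 1)
    v = (np.argsort(np.argsort(y)) + 1.0) / (n + 1)
    return u, v

def copula_density(u, v, ui, vi, Sigma):
    """Gaussian-kernel copula density estimate (eq. 8) at a single point (u, v)."""
    centres = np.column_stack([norm.ppf(ui), norm.ppf(vi)])  # (z_i, w_i) in R^2
    point = np.array([norm.ppf(u), norm.ppf(v)])             # (z, w) in R^2
    # Kernel density estimate of p(z, w) in the transformed space (eq. 7) ...
    p_zw = multivariate_normal(mean=np.zeros(2), cov=Sigma).pdf(point - centres).mean()
    # ... divided by the standard Gaussian marginals to recover c(u, v) (eq. 8).
    return p_zw / (norm.pdf(point[0]) * norm.pdf(point[1]))

# Toy usage on strongly correlated data: the estimated copula density at
# (0.5, 0.5) should exceed 1 (it would equal 1 everywhere under independence).
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * x + 0.6 * rng.normal(size=500)
ui, vi = pseudo_observations(x, y)
Sigma = 0.1 * np.eye(2)   # illustrative fixed bandwidth, not a tuned choice
c_est = copula_density(0.5, 0.5, ui, vi, Sigma)
print(c_est)
```

Note that the density estimation happens entirely in the transformed z–w space, which is what keeps the estimate supported on the unit square after mapping back.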
However, although kernel density estimates can be successful in spaces of one or two dimensions, as the number of variables increases, these methods start to be significantly affected by the curse of dimensionality and tend to overfit the training data. Additionally, for addressing domain adaptation problems, we are interested in factorizing these high-dimensional copulas into simpler building blocks transferable across learning domains. These two drawbacks can be addressed by recent methods in copula modelling called vines [1]. Vines decompose any high-dimensional copula density as a product of bivariate copula densities that can be approximated using the non-parametric model described above. These bivariate copulas (as well as the marginals) correspond to the simple building blocks that we plan to transfer from one learning domain to another.

Different types of vines have been proposed in the literature. Some examples are canonical vines, D-vines and regular vines [16, 1]. In this work we focus on regular vines (R-vines), since they are the most general models. An R-vine V for a probability density p(x_1, ..., x_d) with variable set V = {1, ..., d} is formed by a set of undirected trees T_1, ..., T_{d-1}, each of them with a corresponding set of nodes V_i and set of edges E_i, where V_i = E_{i-1} for i ∈ [2, d-1]. Any edge e ∈ E_i has three associated sets C(e), D(e), N(e) ⊆ V, called the conditioned, conditioning and constraint sets of e, respectively. Initially, T_1 is inferred from a complete graph with a node associated with each element of V; for any e ∈ T_1 joining nodes V_j and V_k, C(e) = N(e) = {V_j, V_k} and D(e) = ∅. The trees T_2, ..., T_{d-1} are constructed so that, for i ≥ 2, each e ∈ E_i is formed by joining two edges e_1, e_2 ∈ E_{i-1} which share a common node. The new edge e has conditioned, conditioning and constraint sets given by

    C(e) = N(e_1) Δ N(e_2),   D(e) = N(e_1) ∩ N(e_2),   N(e) = N(e_1) ∪ N(e_2),

where Δ denotes the symmetric difference operator.
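The bookkeeping behind these three sets reduces to elementary set operations. A minimal sketch (not the authors' code) of how a new edge of tree T_i is formed from two edges of T_{i-1}:

```python
# Combine two edges e1, e2 of tree T_{i-1} that share a node into a new edge
# of tree T_i, given their constraint sets N(e1) and N(e2).
def join_edges(N1, N2):
    """N1, N2: constraint sets of the parent edges, given as Python sets."""
    C = N1 ^ N2   # conditioned set:  symmetric difference N(e1) Δ N(e2)
    D = N1 & N2   # conditioning set: intersection N(e1) ∩ N(e2)
    N = N1 | N2   # constraint set:   union N(e1) ∪ N(e2)
    return C, D, N

# Joining the tree-T1 edges with constraint sets {1, 2} and {1, 3} (they share
# node 1) produces the tree-T2 edge of the pair copula c_{23|1}.
C, D, N = join_edges({1, 2}, {1, 3})
print(C, D, N)   # {2, 3} {1} {1, 2, 3}
```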
Figure 3 illustrates this procedure for an R-vine with 4 variables. For any edge e(j, k) ∈ Ti, i = 1, . . . , d − 1, with conditioned set C(e) = {j, k} and conditioning set D(e), let cjk|D(e) be the value of the copula density for the conditional distribution of xj and xk when conditioning on {xi : i ∈ D(e)}, that is,

cjk|D(e) := c(Pj|D(e), Pk|D(e) | xi : i ∈ D(e)),   (9)

where Pj|D(e) := P(xj | xi : i ∈ D(e)) is the conditional cdf of xj when conditioning on {xi : i ∈ D(e)}. Kurowicka and Cooke [16] indicate that any probability density function p(x1, . . . , xd) can then be factorized as

p(x) = Π_{i=1}^d p(xi) · Π_{i=1}^{d−1} Π_{e(j,k)∈Ei} cjk|D(e),   (10)

where E1, . . . , Ed−1 are the edge sets of the R-vine V for p(x1, . . . , xd). In particular, each of the edges in the trees from V specifies a different conditional copula density in (10). For d variables, the density in (10) is formed by d(d − 1)/2 factors. Changes in each of these factors can be detected and independently transferred across different learning domains to improve the estimation of the target density function. The definition of cjk|D(e) in (9) requires the calculation of conditional marginal cdfs. For this, we use the following recursive identity introduced by Joe [14]:

Pj|D(e) = ∂Cjk|D(e)\k / ∂Pk|D(e)\k,   (11)

which holds for any k ∈ D(e), where D(e) \ k = {i : i ∈ D(e) ∧ i ≠ k} and Cjk|D(e)\k is the cdf of cjk|D(e)\k.

[Figure 3: Example of the hierarchical construction of an R-vine copula for a system of four variables, with trees T1–T3 and factorization p1234 = p1 · p2 · p3 · p4 (marginals) · c12 · c13 · c34 (Tree 1) · c23|1 · c14|3 (Tree 2) · c24|13 (Tree 3). The edges selected to form each tree are highlighted in bold; conditioned and conditioning sets for each node and edge are shown as C(e)|D(e). Each edge in bold corresponds to a different bivariate copula function.]
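The set bookkeeping that defines each new tree, C(e) = N(e1) ∆ N(e2), D(e) = N(e1) ∩ N(e2), N(e) = N(e1) ∪ N(e2), is easy to state in code. The sketch below (hypothetical helper names, frozensets as sets) reproduces the conditioned and conditioning sets of the four-variable example in Figure 3.

```python
def base_edge(j, k):
    # An edge of the first tree T_1: C(e) = N(e) = {j, k}, D(e) = {}.
    s = frozenset({j, k})
    return {"C": s, "D": frozenset(), "N": s}

def join_edges(e1, e2):
    # Join two edges of T_{i-1} that share a node into an edge of T_i:
    # conditioned set = symmetric difference of the constraint sets,
    # conditioning set = their intersection, constraint set = their union.
    n1, n2 = e1["N"], e2["N"]
    return {"C": n1 ^ n2, "D": n1 & n2, "N": n1 | n2}
```

Joining the T1 edges {1, 2} and {1, 3} yields the T2 edge with C = {2, 3} and D = {1}, i.e. the copula c23|1 of Figure 3; joining the two T2 edges yields c24|13.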
One major advantage of vines is that they can model high-dimensional data by estimating density functions of only one or two random variables. For this reason, these techniques are significantly less affected by the curse of dimensionality than regular kernel-based density estimators, as we show in Section 5. So far, vines have generally been constructed using parametric models for the estimation of bivariate copulas. In the following, we describe a novel method for the construction of non-parametric regular vines.

3.1 Non-parametric Regular Vines

In this section, we introduce a vine distribution in which all participant bivariate copulas can be estimated in a non-parametric manner. To do so, we model each of the copulas in (10) using the non-parametric method described in Section 2.1. Let {(ui, vi)}_{i=1}^n be a sample from the copula density c(u, v). The basic operation needed for the implementation of the proposed method is the evaluation of the conditional cdf P(u|v) using the recursive equation (11). Define w = Φ⁻¹(v), zi = Φ⁻¹(ui) and wi = Φ⁻¹(vi). Combining (8) and (11) we obtain

P̂(u|v) = ∫₀ᵘ ĉ(x, v) dx = (1 / (n φ(w))) Σ_{i=1}^n ∫₀ᵘ N(Φ⁻¹(x), w | zi, wi, Σ) / φ(Φ⁻¹(x)) dx = (1 / (n φ(w))) Σ_{i=1}^n N(w | wi, σw²) Φ((Φ⁻¹(u) − μzi|wi) / σzi|wi),   (12)

where N(· | μ, σ²) denotes a Gaussian density with mean μ and variance σ², Σ = [[σz², γ], [γ, σw²]] is the kernel bandwidth matrix, μzi|wi = zi + (σz/σw) γ (w − wi) and σ²zi|wi = σz²(1 − γ²). Equation (12) can be used to approximate any conditional cdf Pj|D(e). For this, we use the fact that P(xj | xi : i ∈ D(e)) = P(uj | ui : i ∈ D(e)), where ui = P(xi), for i = 1, . . . , d, and recursively apply rule (11), using equation (12) to compute P̂(uj | ui : i ∈ D(e)). To complete the inference recipe for the non-parametric regular vine, we must specify how to construct the hierarchy of trees T1, . . . , Td−1. In other words, we must define a procedure to select the edges (bivariate copulas) that will form each tree.
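A direct implementation of the conditional cdf (12) is sketched below. We treat γ as the kernel correlation (so that μ_{zi|wi} = zi + (σz/σw)γ(w − wi) and σ²_{zi|wi} = σz²(1 − γ²) as in the text), standardise the Φ argument by the conditional standard deviation, and normalise by the kernel weight sum, which forces P̂(1|v) = 1 exactly; up to the KDE's marginal approximation this agrees with dividing by nφ(w). The function name is ours.

```python
import numpy as np
from scipy.stats import norm

def h_function(u, v, u_obs, v_obs, sz=0.3, sw=0.3, gamma=0.0):
    # Estimate P(u | v) under the Gaussian-kernel copula, eq. (12).
    zi, wi = norm.ppf(np.asarray(u_obs)), norm.ppf(np.asarray(v_obs))
    z, w = norm.ppf(u), norm.ppf(v)
    mu = zi + (sz / sw) * gamma * (w - wi)        # mu_{z_i | w_i}
    sigma = sz * np.sqrt(1.0 - gamma ** 2)        # sigma_{z_i | w_i}
    weights = norm.pdf(w, loc=wi, scale=sw)       # N(w | w_i, sw^2)
    return float((weights * norm.cdf((z - mu) / sigma)).sum() / weights.sum())
```

Applied recursively through (11), this h-function supplies the conditional cdfs Pj|D(e) needed by every copula beyond the first tree.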
We have a total of d(d − 1)/2 bivariate copulas which should be distributed among the different trees. Ideally, we would like to include in the first trees of the hierarchy the copulas with the strongest dependence level. This will allow us to prune the model by assuming independence in the last k < d trees, since the density function for the independent copula is constant and equal to 1. To construct the trees T1, . . . , Td−1, we assign a weight to each edge e(j, k) (copula) according to the level of dependence between the random variables xj and xk. A common practice is to fix this weight to the empirical estimate of Kendall's τ for the two random variables under consideration [1]¹. Given these weights for each edge, we propose to solve the edge selection problem by obtaining d − 1 maximum spanning trees. Prim's algorithm [20] can be used to solve this problem efficiently.

4 Domain Adaptation with Regular Vines

In this section we describe how regular vines can be used to address domain adaptation problems in the non-linear regression setting with continuous data. The proposed approach could easily be extended to other problems such as density estimation or classification. In regression problems, we are interested in inferring the mapping mechanism or conditional distribution with density p(y|x) that maps a feature vector x = (x1, . . . , xd) ∈ Rd into a target scalar value y ∈ R. Rephrased into the copula framework, this conditional density can be expressed as

p(y|x) ∝ p(y) Π_{i=1}^d Π_{e(j,k)∈Ei} cjk|D(e),   (13)

where E1, . . . , Ed are the edge sets of an R-vine for p(x, y). Note that the normalization of the right part of (13) is relatively easy since y is scalar. In the classic domain adaptation setup we usually have large amounts of data for solving a source task characterized by the density function ps(x, y). However, only a partial or reduced sample is available for solving a target task with density pt(x, y).
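The edge-selection step described above can be sketched as follows: fill a weight matrix with empirical |Kendall's τ| values and extract a maximum spanning tree (here only the first tree T1). A maximum spanning tree is a minimum spanning tree of the negated weights; we use scipy's solver rather than a hand-rolled Prim's algorithm, with the caveat that scipy treats exact zero entries as missing edges. The helper name is ours.

```python
import numpy as np
from scipy.stats import kendalltau
from scipy.sparse.csgraph import minimum_spanning_tree

def first_tree(X):
    """Edges of T_1: a maximum spanning tree under |Kendall's tau| weights.

    X is an (n, d) data matrix; returns the d - 1 edges as (j, k) pairs.
    """
    n, d = X.shape
    W = np.zeros((d, d))
    for j in range(d):
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            W[j, k] = abs(tau)        # upper triangle is enough (undirected)
    mst = minimum_spanning_tree(-W).tocoo()   # max tree == min tree of -W
    return sorted((int(min(i, j)), int(max(i, j)))
                  for i, j in zip(mst.row, mst.col))
```

On data where one variable drives two others, the two strongly dependent pairs should be kept in T1 while the redundant third pair (which would close a cycle) is dropped.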
Given the data available for both tasks, our objective is to build a good estimate for the conditional density pt(y|x). To address this domain adaptation problem, we assume that pt is a modified version of ps. In particular, we assume that pt is obtained in two steps from ps. First, ps is expressed using an R-vine representation as in (10) and second, some of the factors included in that representation (marginal distributions or pairwise copulas) are modified to derive pt. All we need to address the adaptation across domains is to reconstruct the R-vine representation of ps using data from the source task, and then identify which of the factors have been modified to produce pt. These factors are corrected using data from the target task. In the following, we describe how to identify and correct these modified factors. Marginal distributions can change between source and target tasks (also known as covariate shift). In this case, Ps(xi) ̸= Pt(xi), for i = 1, . . . , d, or Ps(y) ̸= Pt(y), and we need to re-generate the estimates of the affected marginals using data from the target task. Additionally, some of the bivariate copulas cjk|D(e) may differ from source to target tasks. In this case, we also re-estimate the affected copulas using data from the target task. Simultaneous changes in both copulas and marginals can occur. However, there is no limitation in updating each of the modified components separately. Finally, if some of the factors remain constant across domains, we can use the available data from the target task to improve the estimates obtained using only the data from the source task. Note that we are addressing a more general problem than covariate shift. Besides identifying and correcting changes in marginal distributions, we also consider changes in any possible form of dependence (conditional distributions) between random variables. 
For the implementation of the strategy mentioned above, we need to identify when two samples come from the same distribution or not. For this, we propose to use the non-parametric two-sample test Maximum Mean Discrepancy (MMD) [10]. MMD will return low p-values when two samples are unlikely to have been drawn from the same distribution. Specifically, given samples from two distributions P and Q, MMD will determine P ̸= Q if the distance between the embeddings of the empirical distributions for these two samples in a RKHS is significantly large. 1We have tried more general dependence measures such as the HSIC (Hilbert-Schmidt Independence Criterion) without observing gains that justify the increase of computational costs. 6 Table 1: Average TLL obtained by NPRV, GRV and KDE on six different UCI datasets. Dataset Auto Cloud Housing Magic Page-Blocks Wireless No. of variables 8 10 14 11 10 11 KDE 1.32 ± 0.06 3.25 ± 0.10 1.96 ± 0.17 1.13 ± 0.11 1.90 ± 0.13 0.98 ± 0.06 GRV 1.84 ± 0.08 5.00 ± 0.12 1.68 ± 0.11 2.09 ± 0.08 4.69 ± 0.20 0.36 ± 0.08 NPRV 2.07 ± 0.07 4.54 ± 0.13 3.18 ± 0.17 2.72 ± 0.17 5.64 ± 0.14 2.17 ± 0.13 Semi-supervised and unsupervised domain adaptation: The proposed approach can be easily extended to take advantage of additional unlabeled data to improve the estimation of our model. Specifically, extra unlabeled target task data can be used to refine the factors in the R-Vine decomposition of pt which do not depend on y. This is still valid even in the limiting case of not having access to labeled data from the target task at training time (unsupervised domain adaptation). 5 Experiments To validate the proposed method, we run two series of experiments using real world data. The first series illustrates the accuracy of the density estimates generated by the proposed non-parametric vine method. The second series validates the effectiveness of the proposed framework for domain adaptation problems in the non-linear regression setting. 
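As a sketch of the test used here, the (biased) squared MMD between two samples under an RBF kernel can be computed directly from the three kernel blocks; in practice a p-value would be obtained by comparing the statistic against a permutation null, which we omit. The function name and the fixed bandwidth are our choices, not details from the paper.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared MMD with a Gaussian RBF kernel:
    # mean k(X, X) + mean k(Y, Y) - 2 * mean k(X, Y).
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

A sample compared with itself gives exactly zero, while samples from clearly different distributions give a visibly larger statistic than samples from the same distribution.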
In all experiments, kernel bandwidth matrices are selected using Silverman’s rule-of-thumb [21]. For comparative purposes, we include the results of different state-of-the-art domain adaptation methods whose parameters are selected by a 10-fold cross validation process on the training data. Approximations: A complete R-Vine requires the use of conditional copula functions, which are challenging to learn. A common approximation is to ignore any dependence between the copula functional form and its set of conditioning variables. Note that the copula functions arguments remain to be conditioned cdfs. Moreover, to avoid excesive computational costs, we consider only the first tree (d −1 copulas) of the R-Vine, which is the one containing the most amount of dependence between the distribution variables. Increasing the number of considered trees did not lead to significant performance improvements. 5.1 Accuracy of Non-parametric Regular Vines for Density Estimation The density estimates generated by the new non-parametric R-vine method (NPRV) are evaluated on data from six normalized UCI datasets [9]. We compare against a standard density estimator based on Gaussian kernels (KDE), and a parametric vine method based on bivariate Gaussian copulas (GRV). From each dataset, we extract 50 random samples of size 1000. Training is performed using 30% of each random sample. Average test log-likelihoods and corresponding standard deviations on the remaining 70% of the random sample are summarized in Table 1 for each technique. In these experiments, NPRV obtains the highest average test log-likelihood in all cases except one, where it is outperformed by GRV. KDE shows the worst performance, due to its direct exposure to the curse of dimensionality. 5.2 Comparison with other Domain Adaptation Methods NPRV is analyzed in a series of experiments for domain adaptation on the non-linear regression setting with real-world data. 
Detailed descriptions of the 6 UCI selected datasets and their domains are available in the supplementary material. The proposed technique is compared with different benchmark methods. The first two, GP-SOURCE and GP-ALL, are considered baselines. They are two gaussian process (GP) methods, the first one trained only with data from the source task, and the second one trained with the normalized union of data from both source and target problems. The other five methods are considered state-of-the-art domain adaptation techniques. DAUME [7] performs a feature augmentation such that the kernel function evaluated at two points from the same 7 Table 2: Average NMSE and standard deviation for all algorithms and UCI datasets. Dataset Wine Sarcos Rocks-Mines Hill-Valleys Axis-Slice Isolet No. of variables 12 21 60 100 386 617 GP-Source 0.86 ± 0.02 1.80 ± 0.04 0.90 ± 0.01 1.00 ± 0.00 1.52 ± 0.02 1.59 ± 0.02 GP-All 0.83 ± 0.03 1.69 ± 0.04 1.10 ± 0.08 0.87 ± 0.06 1.27 ± 0.07 1.58 ± 0.02 Daume 0.97 ± 0.03 0.88 ± 0.02 0.72 ± 0.09 0.99 ± 0.03 0.95 ± 0.02 0.99 ± 0.00 SSL-Daume 0.82 ± 0.05 0.74 ± 0.08 0.59 ± 0.07 0.82 ± 0.07 0.65 ± 0.04 0.64 ± 0.02 ATGP 0.86 ± 0.08 0.79 ± 0.07 0.56 ± 0.10 0.15 ± 0.07 1.00 ± 0.01 1.00 ± 0.00 KMM 1.03 ± 0.01 1.00 ± 0.00 1.00 ± 0.00 1.00 ± 0.00 1.00 ± 0.00 1.00 ± 0.00 KuLSIF 0.91 ± 0.08 1.67 ± 0.06 0.65 ± 0.10 0.80 ± 0.11 0.98 ± 0.07 0.58 ± 0.02 NPRV 0.73 ± 0.07 0.61 ± 0.10 0.72 ± 0.13 0.15 ± 0.07 0.38 ± 0.07 0.46 ± 0.09 UNPRV 0.76 ± 0.06 0.62 ± 0.13 0.72 ± 0.15 0.19 ± 0.09 0.37 ± 0.07 0.42 ± 0.04 Av. Ch. Mar. 10 1 38 100 226 89 Av. Ch. Cop. 5 8 49 34 155 474 domain is twice larger than when these two points come from different domains. SSL-DAUME [6] is a SSL extension of DAUME which takes into account unlabeled data from the target domain. ATGP [4] models the source and target task data using a single GP, but learns additional kernel parameters to correlate input vectors between domains. 
This method outperforms others like the one proposed by Bonilla et al. [3]. KMM [11] minimizes the distance of marginal distributions in source and target domains by matching their means when mapped into an universal RKHS. Finally, KULSIF [15] operates in a similar way as KMM. Besides NPRV, we also include in the experiments its fully unsupervised variant, UNPRV, which ignores any labeled data from the target task. For training, we randomly sample 1000 data points for both source and target tasks, where all the data in the source task and 5% of the data in the target task are labeled. The test set contains 1000 points from the target task. Table 2 summarizes the average test normalized mean square error (NMSE) and corresponding standard deviation for each method in each dataset across 30 random repetitions of the experiment. The proposed methods obtain the best results in 5 out of 6 cases. Notably, UNPRV (Unsupervised NPRV), which ignores labeled data from the target task, also outperforms the other benchmark methods in most cases. Finally, the two bottom rows in Table 2 show the average number of marginals and bivariate copulas which are updated in each dataset during the execution of NPRV, respectively. Computational Costs: Running NPRV requires to fill in a weight matrix of size O(d2) with the empirical estimates of Kendall’s τ for any two random variables. The computation of each of these estimates can be done efficiently with cost O(n log n), where n is the number of available data points. Therefore, the final training cost of NPRV is O(d2n log n). In practice, we obtain competitive training times. Training NPRV for the Isolet dataset took about 3 minutes on a regular laptop computer. Predictions made by a single level NPRV have cost O(nd). Parametric copulas may be used to reduce the computational demands. 6 Conclusions We have proposed a novel non-parametric domain adaptation strategy based on copulas. 
The new approach works by decomposing any multivariate density into a product of marginal densities and bivariate copula functions. Changes in these factors across different domains can be detected using two sample tests, and transferred across domains in order to adapt the target task density model. A novel non-parametric vine method has been introduced for the practical implementation of this method. This technique leads to better density estimates than standard parametric vines or KDE, and is also able to outperform a large number of alternative domain adaptation methods in a collection of regression problems with real-world data. 8 References [1] K. Aas, C. Czado, A. Frigessi, and H. Bakken. Pair-copula constructions of multiple dependence. Insurance: Mathematics and Economics, 44(2):182–198, 2006. [2] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. A theory of learning from different domains. Machine Learning, 79(1):151–175, 2010. [3] E. Bonilla, K. Chai, and C. Williams. Multi-task gaussian process prediction. NIPS, 2008. [4] B. Cao, S. Jialin, Y. Zhang, D. Yeung, and Q. Yang. Adaptive transfer learning. AAAI, 2010. [5] C. Cortes and M. Mohri. Domain adaptation in regression. In Proceedings of the 22nd international conference on Algorithmic learning theory, ALT’11, pages 308–323, Berlin, Heidelberg, 2011. Springer-Verlag. [6] H. Daum´e, III, Abhishek Kumar, and Avishek Saha. Frustratingly easy semi-supervised domain adaptation. Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 53–59, 2010. [7] H. Daum´e III. Frustratingly easy domain adaptation. Association of Computational Linguistics, pages 256–263, 2007. [8] J. Fermanian and O. Scaillet. The estimation of copulas: Theory and practice. Copulas: From Theory to Application in Finance, pages 35–60, 2007. [9] A. Frank and A. Asuncion. UCI machine learning repository, 2010. [10] A. Gretton, K. Borgwardt, M. Rasch, B. Scholkopf, and A. 
Smola. A kernel method for the two-sample-problem. NIPS, pages 513–520, 2007. [11] J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Schoelkopf. Correcting sample selection bias by unlabeled data. NIPS, pages 601–608, 2007. [12] P. Jaworski, F. Durante, W.K. H¨ardle, and T. Rychlik. Copula Theory and Its Applications. Lecture Notes in Statistics. Springer, 2010. [13] S. Jialin-Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010. [14] H. Joe. Families of m-variate distributions with given margins and m(m −1)/2 bivariate dependence parameters. Distributions with Fixed Marginals and Related Topics, 1996. [15] T. Kanamori, T. Suzuki, and M. Sugiyama. Statistical analysis of kernel-based least-squares density-ratio estimation. Machine Learning, 86(3):335–367, 2012. [16] D. Kurowicka and R. Cooke. Uncertainty Analysis with High Dimensional Dependence Modelling. Wiley Series in Probability and Statistics, 1st edition, 2006. [17] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In COLT, 2009. [18] R. Nelsen. An Introduction to Copulas. Springer Series in Statistics, 2nd edition, 2006. [19] S. Nitschke, E. Kidd, and L. Serratrice. First language transfer and long-term structural priming in comprehension. Language and Cognitive Processes, 5(1):94–114, 2010. [20] R. C. Prim. Shortest connection networks and some generalizations. Bell System Technology Journal, 36:1389–1401, 1957. [21] B.W. Silverman. Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman and Hall, 1986. [22] A. Sklar. Fonctions de repartition `a n dimension set leurs marges. Publ. Inst. Statis. Univ. Paris, 8(1):229–231, 1959. 9
The Lov´asz ϑ function, SVMs and finding large dense subgraphs Vinay Jethava ∗ Computer Science & Engineering Department, Chalmers University of Technology 412 96, Goteborg, SWEDEN jethava@chalmers.se Anders Martinsson Department of Mathematics, Chalmers University of Technology 412 96, Goteborg, SWEDEN andemar@student.chalmers.se Chiranjib Bhattacharyya Department of CSA, Indian Institute of Science Bangalore, 560012, INDIA chiru@csa.iisc.ernet.in Devdatt Dubhashi Computer Science & Engineering Department, Chalmers University of Technology 412 96, Goteborg, SWEDEN dubhashi@chalmers.se Abstract The Lov´asz ϑ function of a graph, a fundamental tool in combinatorial optimization and approximation algorithms, is computed by solving a SDP. In this paper we establish that the Lov´asz ϑ function is equivalent to a kernel learning problem related to one class SVM. This interesting connection opens up many opportunities bridging graph theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM −ϑ graphs, on which the Lov´asz ϑ function can be approximated well by a one-class SVM. This leads to novel use of SVM techniques for solving algorithmic problems in large graphs e.g. identifying a planted clique of size Θ(√n) in a random graph G(n, 1 2). A classic approach for this problem involves computing the ϑ function, however it is not scalable due to SDP computation. We show that the random graph with a planted clique is an example of SVM −ϑ graph. As a consequence a SVM based approach easily identifies the clique in large graphs and is competitive with the state-of-the-art. We introduce the notion of common orthogonal labelling and show that it can be computed by solving a Multiple Kernel learning problem. It is further shown that such a labelling is extremely useful in identifying a large common dense subgraph in multiple graphs, which is known to be a computationally difficult problem. 
The proposed algorithm achieves an order of magnitude scalability compared to state of the art methods. 1 Introduction The Lov´asz ϑ function [19] plays a fundamental role in modern combinatorial optimization and in various approximation algorithms on graphs, indeed Goemans was led to say It seems all roads lead to ϑ [10]. The function is an instance of semidefinite programming(SDP) and hence computing it is an extremely demanding task even for moderately sized graphs. In this paper we establish that the ϑ function is equivalent to solving a kernel learning problem in the one-class SVM setting. This surprising connection opens up many opportunities which can benefit both graph theory and machine learning. In this paper we exploit this novel connection to show an interesting application of the SVM setup for identfying large dense subgraphs. More specifically we make the following contributions. ∗Relevant code and datasets can be found on http://www.cse.chalmers.se/e jethava/svm-theta.html 1 1.1 Contributions: 1.We give a new SDP characterization of Lov´asz ϑ function, min K∈K(G) ω(K) = ϑ(G) where ω(K) is computed by solving an one-class SVM. The matrix K is a kernel matrix, associated with any orthogonal labelling of G. This is discussed in Section 2. 2. Using an easy to compute orthogonal labelling we show that there exist graphs, which we call SVM −ϑ graphs, on which Lov´asz ϑ function can be well approximated by solving an one-class SVM. This is discussed in Section 3. 3. The problem of finding a large common dense subgraph in multiple graphs arises in a variety of domains including Biology, Internet, Social Sciences [18]. Existing state-of-the-art methods [14] are enumerative in nature and has complexity exponential in the size of the subgraph. We introduce the notion of common orthogonal labelling which can be used to develop a formulation which is close in spirit to a Multiple Kernel Learning based formulation. 
Our results on the well known DIMACS benchmark dataset show that it can identify large common dense subgraphs in wide variety of settings, beyond the realm of state-of-the-art methods. This is discussed in Section 4. 4. Lastly, in Section 5, we show that the famous planted clique problem, can be easily solved for large graphs by solving an one-class SVM. Many problems of interest in the area of machine learning can be reduced to the problem of detecting planted clique, e.g detecting correlations [1, section 4.6], correlation clustering [21] etc. The planted clique problem consists of identifying a large clique in a random graph. There is an elegant approach for identifying the planted clique by computing the Lov´asz ϑ function [8], however it is not practical for large graphs as it requires solving an SDP. We show that the graph associated with the planted clique problem is a SVM −ϑ graph, paving the way for identifying the clique by solving an one-class SVM. Apart from the method based on computing the ϑ function, there are other methods for planted clique identification, which do not require solving an SDP [2, 7, 24]. Our result is also competitive with the state-of-the-art non-SDP based approaches [24]. Notation We denote the Euclidean norm by ∥· ∥and the infinity norm by ∥· ∥∞. Let Sd−1 = {u ∈Rd| ∥u∥= 1} denote a d dimensional sphere. Let Sn denote the set of n×n square symmetric matrices and S+ n denote n × n square symmetric positive semidefinite matrices. For any A ∈Sn we denote the eigenvalues λ1(A) ≥. . . ≥λn(A). diag(r) will denote a diagonal matrix with diagonal entries defined by components of r. We denote the one-class SVM objective function by ω(K) = max αi≥0,i=1,...,n 2 n X i=1 αi − n X i=1 αiαjKij ! | {z } f(α;K) (1) where K ∈S+ n . Let G = (V, E) be a graph on vertices V = {1, . . . , n} and edge set E. Let A ∈Sn denote the adjacency matrix of G where Aij = 1 if edge (i, j) ∈E, and 0 otherwise. 
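Since f(α; K) in (1) is a concave quadratic over the nonnegative orthant, ω(K) can be computed with any convex QP solver. A minimal projected-gradient sketch (our implementation, not the paper's) is:

```python
import numpy as np

def omega(K, iters=2000):
    # omega(K) = max_{alpha >= 0} 2 * sum(alpha) - alpha' K alpha  (eq. 1).
    # Projected gradient ascent with step 1 / (2 * lambda_max(K)),
    # since the gradient 2(e - K alpha) is 2 * lambda_max(K)-Lipschitz.
    n = K.shape[0]
    lam = max(np.linalg.eigvalsh(K).max(), 1e-9)
    alpha = np.zeros(n)
    for _ in range(iters):
        grad = 2.0 * (np.ones(n) - K @ alpha)
        alpha = np.maximum(alpha + grad / (2.0 * lam), 0.0)
    return 2.0 * alpha.sum() - alpha @ K @ alpha
```

For example, K = I belongs to K(G) for every graph G, and ω(I) = n, matching the trivial bound ϑ(G) ≤ n; the all-ones 2×2 matrix gives ω = 1.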
An eigenvalue of graph G means an eigenvalue of the adjacency matrix of G. Let ¯G denote the complement graph of G. The adjacency matrix of ¯G is ¯A = ee⊤ − I − A, where e = [1, 1, . . . , 1]⊤ is a vector of length n containing all 1's, and I denotes the identity matrix. Let GS = (S, ES) denote the subgraph induced by S ⊆ V in graph G; its density is γ(GS) = |ES| / (|S| choose 2). Let Ni(G) = {j ∈ V : (i, j) ∈ E} denote the set of neighbours of vertex i in graph G, and the degree of node i is di(G) = |Ni(G)|. An independent set in G (a clique in ¯G) is a subset of vertices S ⊆ V for which no (every) pair of vertices has an edge in G (in ¯G). The notation is standard, e.g. see [3].

2 Lovász ϑ function and Kernel learning

Consider the problem of embedding a graph G = (V, E) on a d-dimensional unit sphere Sd−1. The study of this problem was initiated in [19], which introduced the idea of an orthogonal labelling: an orthogonal labelling of a graph G = (V, E) with |V| = n is a matrix U = [u1, . . . , un] ∈ Rd×n such that u⊤i uj = 0 whenever (i, j) ∉ E and ui ∈ Sd−1 for all i = 1, . . . , n. An orthogonal labelling defines an embedding of a graph on a d-dimensional unit sphere: for every vertex i there is a vector ui on the unit sphere, and for every (i, j) ∉ E, ui and uj are orthogonal. Using the notion of orthogonal labellings, [19] defined a function, famously known as the Lovász ϑ function, which upper bounds the size of the maximum independent set. More specifically, for any graph G: ALPHA(G) ≤ ϑ(G), where ALPHA(G) is the size of the largest independent set. Finding large independent sets is a fundamental problem in algorithm design and analysis, and computing ALPHA(G) is a classic NP-hard problem which is very hard even to approximate [11]. However, the Lovász function ϑ(G) gives a tractable upper bound, and since then the Lovász ϑ function has been extensively used in solving a variety of algorithmic problems, e.g. [6].
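Two of the quantities in the notation above, the complement adjacency ¯A = ee⊤ − I − A and the induced-subgraph density γ(GS) = |ES| / (|S| choose 2), can be sketched directly; the helper names below are ours.

```python
import numpy as np
from itertools import combinations

def complement_adjacency(A):
    # A_bar = ee' - I - A: flips every off-diagonal edge/non-edge.
    n = A.shape[0]
    return np.ones((n, n)) - np.eye(n) - A

def subgraph_density(A, S):
    # gamma(G_S) = |E_S| / binom(|S|, 2) for the subgraph induced by S.
    S = sorted(S)
    edges = sum(A[i, j] for i, j in combinations(S, 2))
    return edges / (len(S) * (len(S) - 1) / 2.0)
```

A triangle has induced density 1, and adding an isolated vertex to two of its nodes drops the density of that triple to 1/3.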
It maybe useful to recall the definition of Lov´asz ϑ function. Denote the set of all possible orthogonal labellings of G by Lab(G) = {U = [u1, . . . , un]|ui ∈Sd−1, u⊤ i uj = 0 ∀(i, j) ̸∈E}. ϑ(G) = min U∈Lab(G) min c∈Sd−1 max i 1 (c⊤ui)2 (2) There exist several other equivalent definitions of ϑ, for a comprehensive discussion see [16]. However computation of Lov´asz ϑ function is not practical even for moderately sized graphs as it requires solving a semidefinite program on a matrix which is of the size of the graph. In the following theorem, we show that there exist connections between the ϑ function and the SVM formulation. Theorem 2.1. For a undirected graph G = (V, E), with |V | = n, let K(G) := {K ∈S+ n | Kii = 1, i ∈[n], Kij = 0, (i, j) ̸∈E} Then, ϑ(G) = minK∈K(G) ω(K) Proof. We begin by noting that any K ∈K(G) is positive semidefinite and hence there exists U ∈Rd×n such that K = U⊤U. Note that Kij = u⊤ i uj where ui is a column of U. Hence by inspection it is clear that the columns of U defines an orthogonal labelling on G, i.e U ∈Lab(G). Using a similar argument we can show that for any U ∈Lab(G), the matrix K = U⊤U, is an element of K(G). The set of valid kernel matrices K(G) is thus equivalent to Lab(G). Note that if U is a labelling then U = Udiag(ϵ) is also an orthogonal labelling for any ϵ⊤= [ϵ1, . . . , ϵn], ϵi = ±1 i = 1, . . . , n. It thus suffices to consider only those labellings for which c⊤ui ≥0 ∀i = 1, . . . , n holds. For a fixed c one can write maxi 1 (c⊤ui)2 = mint t2 subject to 1 c⊤ui ≤t. This is true because the minimum over t is attained at maxi 1 c⊤ui . Setting w = 2tc yields the following relation minc∈Sd−1 maxi 1 (c⊤ui)2 = minw∈Rd ∥w∥2 4 with constraints w⊤ui ≥2. This establishes that for a labelling, U, the optimal c is obtained by solving an one-class SVM. Application of strong duality immediately leads to the claim minc∈Sd−1 maxi 1 (c⊤ui)2 = ω(K) where K = U⊤U and ω(K) is defined in (1). 
As there is a correspondence between each element of Lab(G) and K minimization of ω(K) over K is equivalent to computing the ϑ(G) function. This is a significant result which establishes connections between two well studied formulations, namely ϑ function and the SVM formulation. An important consequence of Theorem 2.1 is an easily computable upperbound on ϑ(G) namely that for any graph G ALPHA(G) ≤ϑ(G) ≤ω(K) ∀K ∈K(G) (3) Since solving ω(K) is a convex quadratic program, it is indeed a computationally efficient alternative to the ϑ function. In fact we will show that there exist families of graphs for which ϑ(G) can be approximated to within a constant factor by ω(K) for suitable K. Theorem 2.1 is closely related to the following result proved in [20]. Theorem 2.2. [20] For a graph G = (V, E) with |V | = n let C ∈Sn matrix with Cij = 0 whenever (i, j) ̸∈E. Then, ϑ(G) = minC v(G, C) = max x≥0 2x⊤e −x⊤ C −λn(C) + I x 3 Proof. See [20] See that for any feasible C the matrix I + C −λn(C) ∈K(G). Theorem 2.1 is a restatement of Theorem 2.2, but has the additional advantage that the stated optimization problem can be solved as an SDP. The optimization problem minCv(G, C) with constraints on C is not an SDP. If we fix C = A, the adjacency matrix, we obtain a very interesting orthogonal labelling, which we will refer to as LS labelling, introduced in [20]. Indeed there exists family of graphs, called Q graphs for which LS labelling yields the interesting result ALPHA(G) = v(G, A), see [20]. Indeed on a Q graph one does not need to compute a SDP, but can solve an one-class SVM, which has obvious computational benefits. Inspired by this result, in the remaining part of the paper, we study this labelling more closely. As a labelling is completely defined by the associated kernel matrix, we refer to the following kernel as the LS labelling, K = A ρ + I where ρ ≥−λn(A). 
(4)

3 SVM−ϑ graphs: graphs where the ϑ function can be approximated by SVM

We now introduce a class of graphs on which the ϑ function can be well approximated by ω(K) for K defined by (4). In the spirit of approximation algorithms we define:

Definition 3.1. A graph G is an SVM−ϑ graph if ω(K) ≤ (1 + O(1)) ϑ(G), where K is a LS labelling.

Such classes of graphs are interesting because on them one can approximate the Lovász ϑ function by solving an SVM instead of an SDP, which in turn can be extremely useful in the design and analysis of approximation algorithms. We will demonstrate two examples of SVM−ϑ graphs, namely (a) the Erdős–Rényi random graph G(n, 1/2) and (b) a planted variation. Here the relaxation ω(K) could be used in place of ϑ(G), resulting in algorithms with the same quality guarantees but with faster running time; in particular, this will allow the algorithms to be scaled to large graphs. The classical Erdős–Rényi random graph G(n, 1/2) has n vertices and each edge (i, j) is present independently with probability 1/2. We list a few facts about G(n, 1/2) that will be used repeatedly.

Fact 3.1. For G(n, 1/2),
• With probability 1 − O(1/n), the degree of each vertex is in the range n/2 ± O(√(n log n)).
• With probability 1 − e^{−n^c} for some c > 0, the maximum eigenvalue is n/2 ± o(n) and the minimum eigenvalue is in the range [−√n, √n] [9].

Theorem 3.1. Let ϵ > √2 − 1. For G = G(n, 1/2), with probability 1 − O(1/n), ω(K) ≤ (1 + ϵ) ϑ(G), where K is defined in (4) with ρ = ((1 + ϵ)/√2) √n.

Proof. We begin by considering the case ρ = (1 + δ/2)√n. By Fact 3.1, for all choices of δ > 0 the minimum eigenvalue of (1/ρ)A + I is, almost surely, greater than 0, which implies that f(α, K) (see (1)) is strongly concave. For such functions the KKT conditions are necessary and sufficient for optimality.
The KKT conditions for G(n, 1/2) are given by

$$\alpha_i + \frac{1}{\rho}\sum_{(i,j)\in E} A_{ij}\alpha_j = 1 + \mu_i, \qquad \mu_i\alpha_i = 0, \qquad \mu_i \ge 0 \qquad (5)$$

As A is random, we begin by analyzing the case of the expectation of A. Let $E(A) = \frac{1}{2}(ee^\top - I)$ be the expectation of A. For the given choice of ρ, the matrix $\tilde K = \frac{E(A)}{\rho} + I$ is positive definite. More importantly, $f(\alpha, \tilde K)$ is again strongly concave and attains its maximum at a KKT point. By direct verification, $\hat\alpha = \hat\beta e$ where $\hat\beta = \frac{2\rho}{n-1+2\rho}$ satisfies

$$\alpha + \frac{1}{\rho}E(A)\alpha = e. \qquad (6)$$

Thus $\hat\alpha$ is the KKT point for the problem

$$\bar f = \max_{\alpha\ge 0} f(\alpha, \tilde K) = \sum_{i=1}^n 2\hat\alpha_i - \hat\alpha^\top\left(\frac{E(A)}{\rho} + I\right)\hat\alpha = n\hat\beta \qquad (7)$$

with optimal objective function value $\bar f$. By the choice $\rho = (1+\frac{\delta}{2})\sqrt{n}$ we can write $\hat\beta = 2\rho/n + O(1/n)$. Using the fact about degrees of vertices in G(n, 1/2), we know that

$$a_i^\top e = \frac{n-1}{2} + \Delta_i \quad\text{with } |\Delta_i| \le \sqrt{n\log n} \qquad (8)$$

where $a_i^\top$ is the ith row of the adjacency matrix A. As a consequence we note that

$$\hat\alpha_i + \frac{1}{\rho}\sum_j A_{ij}\hat\alpha_j - 1 = \frac{\hat\beta}{\rho}\Delta_i \qquad (9)$$

Recalling the definition of f and using the above equation along with (8) gives

$$|f(\hat\alpha; K) - \bar f| \le n\frac{\hat\beta^2}{\rho}\sqrt{n\log n} \qquad (10)$$

As noted before, the function f(α; K) is strongly concave with $\nabla^2_\alpha f(\alpha; K) \preceq -\frac{\delta}{2+\delta}I$ for all feasible α. Recalling a useful result from convex optimization (see Lemma 3.1), we obtain

$$\omega(K) - f(\hat\alpha; K) \le \left(1 + \frac{1}{\delta}\right)\|\nabla f(\hat\alpha; K)\|^2 \qquad (11)$$

Observing that $\nabla f(\alpha; K) = 2\left(e - \alpha - \frac{A}{\rho}\alpha\right)$ and using the relation between the $\|\cdot\|_\infty$ and 2 norms along with (9) and (8) gives $\|\nabla f(\hat\alpha; K)\| \le \sqrt{n}\,\|\nabla f(\hat\alpha; K)\|_\infty = 2n\frac{\hat\beta}{\rho}\sqrt{\log n}$. Plugging this estimate into (11) and using equation (10), we obtain $\omega(K) \le \bar f + O(\log n) = (2+\delta)\sqrt{n} + O(\log n)$, where the second equality follows by plugging the value of $\hat\beta$ into (7). It is well known [6] that $\vartheta(G) = \sqrt{2}\,\sqrt{n}$ for G(n, 1/2) with high probability. One concludes that $\omega(K) \le \frac{2+\delta}{\sqrt{2}}\vartheta(G) + o(\sqrt{n})$, and the theorem follows by the choice of δ.

Discussion: Theorem 3.1 establishes that instead of an SDP one can solve an SVM to evaluate the ϑ function on G(n, 1/2).
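Since ω(K) is just a box-constrained concave quadratic program, it can be evaluated with a few lines of generic numerical code. The sketch below (an illustration under our own naming, not the authors' implementation) builds the LS labelling $K = A/\rho + I$ and maximizes $f(\alpha; K) = 2e^\top\alpha - \alpha^\top K\alpha$ over $\alpha \ge 0$ by projected gradient ascent:

```python
import numpy as np

def ls_labelling(A, rho=None):
    """LS labelling K = A/rho + I; rho must satisfy rho >= -lambda_n(A)."""
    lam_min = np.linalg.eigvalsh(A)[0]
    if rho is None:
        rho = max(-lam_min, 1e-9) + 1e-6  # small slack keeps K positive semidefinite
    return A / rho + np.eye(A.shape[0])

def omega(K, iters=5000):
    """omega(K) = max_{alpha >= 0} 2 e^T alpha - alpha^T K alpha,
    solved by projected gradient ascent (step size from the Lipschitz constant)."""
    n = K.shape[0]
    lr = 1.0 / (2.0 * np.linalg.eigvalsh(K)[-1])
    alpha = np.zeros(n)
    for _ in range(iters):
        grad = 2.0 * (1.0 - K @ alpha)
        alpha = np.maximum(alpha + lr * grad, 0.0)  # project onto alpha >= 0
    return 2.0 * alpha.sum() - alpha @ K @ alpha, alpha
```

For instance, the edgeless graph on n vertices gives ω = n (every vertex is independent), while the complete graph gives ω ≈ 1, matching the sandwich bound α(G) ≤ ω(K).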
Although it is well known that $\alpha(G(n, 1/2)) = 2\log n$ with high probability, there is no known polynomial-time algorithm for computing the maximum independent set. [6] gives an approximation algorithm that finds an independent set in G(n, p), which runs in expected polynomial time via a computation of ϑ(G(n, p)), and which also applies to p = 1/2. The ϑ function also serves as a guarantee of the approximation, which simpler algorithms, such as a greedy algorithm, cannot give. Theorem 3.1 allows us to obtain similar guarantees but without the computational overhead of solving an SDP. Apart from finding independent sets, computing ϑ(G(n, p)) is also used as a subroutine in colorability [6], and here again one can use the SVM-based approach to approximate the ϑ function. Similar arguments show that other families of graphs, such as the 11 families of pseudo-random graphs described in [17], are also SVM−ϑ graphs.

Lemma 3.1. [4] A function $g : C \subset \mathbb{R}^d \to \mathbb{R}$ is said to be strongly concave over C if there exists t > 0 such that $\nabla^2 g(x) \preceq -tI$ for all $x \in C$. For such functions one can show that if $p^* = \max_{x\in C} g(x) < \infty$, then for all $x \in C$, $p^* - g(x) \le \frac{1}{2t}\|\nabla g(x)\|^2$.

4 Dense common subgraph detection

The problem of finding a large dense subgraph in multiple graphs has many applications [23, 22, 18]. We introduce the notion of a common orthogonal labelling, and show that it is indeed possible to recover dense regions in large graphs by solving an MKL problem. This constitutes significant progress with respect to state-of-the-art enumerative methods [14].

Problem definition: Let G = {G(1), . . . , G(M)} be a set of simple, undirected graphs G(m) = (V, E(m)) defined on the vertex set V = {1, . . . , n}. Find a common subgraph which is dense in all the graphs. Most algorithms which attempt the problem of finding a dense region are enumerative in nature and hence do not scale well to finding large cliques. [14] first studied a related problem of finding all possible common subgraphs for a given choice of parameters {γ(1), . . .
, γ(M)}, which is at least γ(i)-dense in G(i). In the worst case, the algorithm performs depth-first search over the space of $\binom{n}{n_T}$ possible cliques of size $n_T$. This has $\Theta\left(\binom{n}{n_T}\right)$ space and time complexity, which makes it impractical for moderately large $n_T$. For example, finding quasi-cliques of size 60 requires 8 hours (see Section 6). In the remainder of this section, we focus on finding a large common sparse subgraph in a given collection of graphs, with the observation that this is equivalent to finding a large common dense subgraph in the set of complement graphs. To this end we introduce the following definition.

Definition 4.1. Given simple unweighted graphs G(m) = (V, E(m)), m = 1, . . . , M, on a common vertex set V with |V| = n, a common orthogonal labelling is given by vectors $u_i \in S^{d-1}$ such that $u_i^\top u_j = 0$ if $(i, j) \notin E^{(m)}$ for all m = 1, . . . , M.¹

Following the arguments of Section 2, it is immediate that the size of the largest common independent set is upper bounded by $\min_{K\in L}\omega(K)$, where $L = \{K \in S_n^+ : K_{ii} = 1\ \forall i \in [n],\ K_{ij} = 0 \text{ whenever } (i, j) \notin E^{(m)}\ \forall m = 1, \ldots, M\}$. We wish to exploit this fact in identifying large common sparse regions in general graphs. Unfortunately this problem is an SDP and will not scale well to large graphs. Taking a cue from the MKL literature, we pose a restricted version of the problem, namely

$$\min_{K = \sum_{m=1}^M \delta_m K^{(m)},\ \delta_m \ge 0,\ \sum_{m=1}^M \delta_m = 1} \omega(K) \qquad (12)$$

where K(m) is an orthogonal labelling of G(m). Direct verification shows that any feasible K is also a common orthogonal labelling. Using the fact that for all $x \in \mathbb{R}^M$, $\min_{p_m \ge 0,\, \sum_{m=1}^M p_m = 1} p^\top x = \min_m x_m = \max\{t \mid x_m \ge t\ \forall m = 1, \ldots, M\}$, one can recast the optimization problem in (12) as follows:

$$\max_{t\in\mathbb{R},\ \alpha_i \ge 0} t \quad \text{s.t.}\quad f(\alpha; K^{(m)}) \ge t \quad \forall m = 1, \ldots, M \qquad (13)$$

where K(m) is the LS labelling for G(m), for all m = 1, . . . , M. The above optimization can be readily solved by state-of-the-art MKL solvers.
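Problem (13) maximizes the worst-case objective $\min_m f(\alpha; K^{(m)})$ over α ≥ 0. As a minimal stand-in for a dedicated MKL solver (our own illustrative sketch, not the SimpleMKL code used later), this max–min problem can be attacked directly with projected subgradient ascent on the active (currently worst) graph:

```python
import numpy as np

def max_min_f(kernels, iters=3000, lr=0.01):
    """Sketch of problem (13): maximize t = min_m f(alpha; K^(m)) over alpha >= 0,
    where f(alpha; K) = 2 e^T alpha - alpha^T K alpha.
    A generic subgradient method, not a production MKL solver."""
    n = kernels[0].shape[0]
    alpha = np.full(n, 1.0 / n)
    for _ in range(iters):
        vals = [2.0 * alpha.sum() - alpha @ K @ alpha for K in kernels]
        m = int(np.argmin(vals))                  # active (worst) graph
        grad = 2.0 * (1.0 - kernels[m] @ alpha)   # subgradient of the min
        alpha = np.maximum(alpha + lr * grad, 0.0)
    return min(2.0 * alpha.sum() - alpha @ K @ alpha for K in kernels), alpha
```

With a single kernel this reduces to the one-class SVM objective ω(K); with several kernels the returned t lower-bounds each individual f(α; K(m)).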
This result allows us to build a parameter-free common sparse subgraph (CSS) algorithm, shown in Figure 1, having the following advantages: it provides a theoretical bound on subgraph density (Claim 4.1 below), and it requires no parameters from the user beyond the set of graphs G(1), . . . , G(M). Let α∗ be the optimal solution of (13); let SV = {i : α∗_i > 0} and S1 = {i : α∗_i = 1}, with cardinalities nsv = |SV| and n1 = |S1| respectively. Let

$$\bar\alpha^{(m)}_{\min,S} = \min_{i\in S} \frac{\sum_{j\in N_i(G^{(m)}_S)} \alpha^*_j}{d_i(G^{(m)}_S)}$$

denote the minimum over vertices of the average of the support vector coefficients in the neighbourhood $N_i(G^{(m)}_S)$ of vertex i in the induced subgraph $G^{(m)}_S$, having degree $d_i(G^{(m)}_S) = |N_i(G^{(m)}_S)|$. We define

$$T^{(m)} = \left\{ i \in SV : d_i(G^{(m)}_{SV}) < \frac{(1-c)\rho^{(m)}}{\bar\alpha^{(m)}_{\min,SV}} \right\} \quad\text{where } c = \min_{i\in SV} \alpha^*_i \qquad (14)$$

Figure 1: Algorithm for finding the common sparse subgraph, T = CSS(G(1), . . . , G(M)):
1. α∗ ← solution of eqn. (13), obtained with an MKL solver
2. T ← ∩m T(m) (eqn. (14))
3. Return T

Claim 4.1. Let T ⊆ V be computed as in Algorithm 1. The subgraph $G^{(m)}_T$ induced by T in graph G(m) has density at most γ(m), where $\gamma^{(m)} = \frac{(1-c)\rho^{(m)}}{\bar\alpha^{(m)}_{\min,SV}(n_T - 1)}$.

Proof. (Sketch) At optimality, $t = \sum_{i} \alpha^*_i$. This allows us to write $0 \le \sum_{i\in SV} \alpha^*_i\left(2 - \alpha^*_i - \sum_{j\ne i} K^{(m)}_{ij}\alpha^*_j\right) - t$ as $0 \le \sum_{i\in T}\left(1 - c - \frac{d_i(G^{(m)}_T)\,\bar\alpha^{(m)}_{\min,SV}}{\rho^{(m)}}\right)$. Dividing by $n_T^2$ completes the proof.

¹This is equivalent to defining an orthogonal labelling on the union graph of G(1), . . . , G(M).

5 Finding Planted Cliques in G(n, 1/2) graphs

Finding large cliques or independent sets is a computationally difficult problem even in random graphs. While it is known that the size of the largest clique or independent set in G(n, 1/2) is 2 log n with high probability, there is no known efficient algorithm to find a clique of size significantly larger than log n – even a cryptographic application was suggested based on this (see the discussion and references in the introduction of [8]).
Hidden planted clique: A random graph G(n, 1/2) is chosen first and then a clique of size k is introduced on the first 1, . . . , k vertices. The problem is to identify the clique. [8] showed that if $k = \Omega(\sqrt{n})$, then the hidden clique can be discovered in polynomial time by computing the Lovász ϑ function. There are other approaches [2, 7, 24] which do not require computing the ϑ function. We consider the (equivalent) complement model G(n, 1/2, k) where an independent set is planted on a set of k vertices. We show that in the regime $k = \Omega(\sqrt{n})$, $\bar G(n, 1/2, k)$ is an SVM−ϑ graph. We will further demonstrate that, as a consequence, one can identify the hidden independent set with high probability by solving an SVM. The following is the main result of the section.

Theorem 5.1. For $G = \bar G(n, 1/2, k)$ and $k = 2t\sqrt{n}$ for a large enough constant t ≥ 1, with K as in (4) and $\rho = \sqrt{n} + k/2$,

$$\omega(K) = 2(t+1)\sqrt{n} + O(\log n) = \left(1 + \frac{1}{t} + o(1)\right)\vartheta(G)$$

with probability at least 1 − O(1/n).

Proof. The proof is analogous to that of Theorem 3.1. Note that $|\lambda_n(G)| \le \sqrt{n} + k/2$. First we consider the expected case, where all vertices outside the planted part S are adjacent to k/2 vertices in S and (n − k)/2 vertices outside S, and all vertices in the planted part have degree (n − k)/2. We check that $\alpha_i = 2(t+1)/\sqrt{n}$ for i ∉ S and $\alpha_i = 2(t+1)^2/\sqrt{n}$ for i ∈ S satisfy the KKT conditions with an error of $O(1/\sqrt{n})$. Now apply Chernoff bounds to conclude that with high probability all vertices in S have degree $(n-k)/2 \pm \sqrt{(n-k)\log(n-k)}$, and those outside S are adjacent to $k/2 \pm \sqrt{k\log k}$ vertices in S and to $(n-k)/2 \pm \sqrt{(n-k)\log(n-k)}$ vertices outside S. Now we check that the same solution satisfies the KKT conditions of $\bar G(n, 1/2, k)$ with an error of $\epsilon = O\left(\sqrt{\frac{\log n}{n}}\right)$. Using the same arguments as in the proof of Theorem 3.1, we conclude that $\omega(K) \le 2(t+1)\sqrt{n} + O(\log n)$. Since $\vartheta(G) = 2t\sqrt{n}$ in this case [8], the result follows.
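The recovery procedure implied by this analysis — solve the SVM on the LS labelling and keep the k vertices with the largest coefficients — can be sketched end-to-end as follows. This is our own illustrative re-implementation with a simple projected-gradient solver (the paper's experiments use Libsvm); the chosen n, t and random seed are arbitrary:

```python
import numpy as np

def recover_planted_set(A, k, rho, iters=2000):
    """Projected gradient ascent on f(alpha) = 2 e^T alpha - alpha^T (A/rho + I) alpha
    over alpha >= 0; return the k vertices with the largest coefficients."""
    n = A.shape[0]
    K = A / rho + np.eye(n)
    lr = 1.0 / (2.0 * np.linalg.eigvalsh(K)[-1])
    alpha = np.zeros(n)
    for _ in range(iters):
        alpha = np.maximum(alpha + lr * 2.0 * (1.0 - K @ alpha), 0.0)
    return set(np.argsort(alpha)[-k:].tolist())

# Plant an independent set on the first k vertices of G(n, 1/2), in the regime of Theorem 5.1.
rng = np.random.default_rng(0)
n, t = 400, 2
k = int(2 * t * np.sqrt(n))                  # k = 2 t sqrt(n)
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
A = A + A.T
A[:k, :k] = 0.0                              # remove all edges inside the planted set
found = recover_planted_set(A, k, rho=np.sqrt(n) + k / 2)
overlap = len(found & set(range(k))) / k     # fraction of planted vertices recovered
```

At these sizes the planted coefficients concentrate near $2(t+1)^2/\sqrt{n}$ while the rest sit near $2(t+1)/\sqrt{n}$, so sorting α separates the two groups.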
The above theorem suggests that the planted independent set can be recovered by taking the top k values in the optimal solution. In the experimental section we will discuss the performance of this recovery algorithm. The runtime of this algorithm is one call to an SVM solver, which is considerably cheaper than the SDP option. Indeed, the algorithm due to [8] requires computation of the ϑ function, and the current best known algorithm for ϑ computation has an O(n⁵ log n) run-time complexity [5]. In contrast, the proposed approach needs to solve an SVM and hence scales well to large graphs. Our approach is competitive with the state of the art [24], as it gives the same high-probability guarantees and has the same running time, O(n²). Here we have assumed that we are working with an SVM solver which has a time complexity of O(n²) [13].

6 Experimental evaluation

Comparison with exhaustive approach [14]: We generate m = 3 synthetic random graphs over n vertices with average density δ = 0.2, having a single (common) quasi-clique of size k = 2√n with density γ = 0.95 in all three graphs. This is similar to the synthetic graphs generated in the original paper [see 14, Section 6.1.2]. We note that both our MKL-based approach and the exhaustive search in [14] recover the quasi-clique. However, the time requirements are drastically different. All experiments were conducted on a computer with 16 GB RAM and an Intel X3470 quad-core processor running at 2.93 GHz. Three values of k, namely k = 50, 60 and k = 100, were used. It is interesting to note that CROCHET [14] took 2 hours and 9 hours for the k = 50 and k = 60 sized cliques, and failed to find a clique of size 100. The corresponding times for MKL are 47.5, 54.8 and 137.6 seconds respectively.

Common dense subgraph detection: We evaluate our algorithm for finding large dense regions on the DIMACS Challenge graphs² [15], which is a comprehensive benchmark for testing clique finding and related algorithms.
For the families of dense graphs (brock, san, sanr), we focus on finding a large dense region in the complement of the original graphs. We run Algorithm 1 using SimpleMKL³ to find a large common dense subgraph. In order to evaluate the performance of our algorithm, we compute $\bar a = \max_m a^{(m)}$ and $\underline{a} = \min_m a^{(m)}$, where $a^{(m)} = \gamma(G^{(m)}_T)/\gamma(G^{(m)})$ is the relative density of the induced subgraph (compared to the original graph density), and $n_T/N$ is the relative size of the induced subgraph compared to the original graph size. We want a high value of $n_T/N$, while $\underline{a}$ should not be lower than 1. Table 1 shows the evaluation of Algorithm 1 on the DIMACS dataset. We note that our algorithm finds a large subgraph (large $n_T/N$) with higher density compared to the original graph in all of the DIMACS graph classes, making it suitable for finding large dense regions in multiple graphs. In all cases the size of the subgraph, $n_T$, was more than 100. The MKL experiments reported in Table 1 took less than 1 minute (for each graph family), while the algorithm in [14] aborts after several hours due to memory constraints.

Graph family   N      M   nT/N   ā      a
c-fat200       200    3   0.50   2.12   0.99
c-fat500       500    4   0.31   3.57   1.01
brock200‡      200    4   0.41   1.36   0.99
brock400‡      400    4   0.50   1.15   1.05
brock800‡      800    4   0.50   1.08   1.01
p_hat300       300    3   0.53   1.53   1.15
p_hat500       500    3   0.48   1.55   1.17
p_hat700       700    3   0.45   1.58   1.18
p_hat1000      1000   3   0.43   1.60   1.19
p_hat1500      1500   3   0.38   1.63   1.20
san200‡        200    5   0.50   1.51   1.08
san400‡        400    3   0.42   1.19   1.02
sanr200‡       200    2   0.39   1.86   1.04
sanr400‡       400    2   0.43   1.20   1.02

Table 1: Common dense subgraph recovery on multiple graphs in the DIMACS dataset. Here ā and a denote the maximum and minimum relative density of the induced subgraph (relative to the density of the original graph), and nT/N is the relative size of the induced subgraph compared to the original graph size.

Planted clique recovery: We generate 100 random graphs based on the planted clique model G(n, 1/2, k), where n = 30000 and the hidden clique size is k = 2t√n, for each choice of t.
We evaluate the recovery algorithm discussed in Section 5. The SVM problem is solved using Libsvm⁴. For t ≥ 2 we find perfect recovery of the clique on all the graphs, which is in agreement with Theorem 5.1. It is worth noting that the approach takes 10 minutes to recover the clique in this graph of 30000 vertices, which is far beyond the scope of SDP-based procedures.

7 Conclusion

In this paper we have established that the Lovász ϑ function, well studied in graph theory, can be linked to the one-class SVM formulation. This link allows us to design scalable algorithms for computationally difficult problems. In particular, we have demonstrated that finding a common dense region in multiple graphs can be solved as an MKL problem, while finding a large planted clique can be solved with a one-class SVM.

Acknowledgements

CB is grateful to the Department of CSE, Chalmers University of Technology for their hospitality, and was supported by grants from the ICT and Transport Areas of Advance, Chalmers University. VJ and DD were supported by SSF grant Data Driven Secure Business Intelligence.

² ftp://dimacs.rutgers.edu/pub/challenge/graph/benchmarks/clique/
³ http://asi.insa-rouen.fr/enseignants/˜arakotom/code/mklindex.html
⁴ http://www.csie.ntu.edu.tw/˜cjlin/libsvm/

References

[1] Louigi Addario-Berry, Nicolas Broutin, Gábor Lugosi, and Luc Devroye. Combinatorial testing problems. Annals of Statistics, 38:3063–3092, 2010.
[2] Noga Alon, Michael Krivelevich, and Benny Sudakov. Finding a large hidden clique in a random graph. Random Structures and Algorithms, pages 457–466, 1998.
[3] B. Bollobás. Modern Graph Theory, volume 184. Springer Verlag, 1998.
[4] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
[5] T.-H. Hubert Chan, Kevin L. Chang, and Rajiv Raman. An SDP primal-dual algorithm for approximating the Lovász theta function. In ISIT, 2009.
[6] Amin Coja-Oghlan and Anusch Taraz.
Exact and approximative algorithms for coloring G(n, p). Random Structures and Algorithms, 24(3):259–278, 2004.
[7] U. Feige and D. Ron. Finding hidden cliques in linear time. In AofA10, 2010.
[8] Uriel Feige and Robert Krauthgamer. Finding and certifying a large hidden clique in a semirandom graph. Random Structures and Algorithms, 16:195–208, March 2000.
[9] Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica, 1:233–241, 1981.
[10] Michel X. Goemans. Semidefinite programming in combinatorial optimization. Mathematical Programming, 79:143–161, 1997.
[11] J. Håstad. Clique is hard to approximate within n^{1−ε}. Acta Mathematica, 182(1):105–142, 1999.
[12] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[13] Don R. Hush, Patrick Kelly, Clint Scovel, and Ingo Steinwart. QP algorithms with guaranteed accuracy and run time for support vector machines. Journal of Machine Learning Research, 7:733–769, 2006.
[14] D. Jiang and J. Pei. Mining frequent cross-graph quasi-cliques. ACM Transactions on Knowledge Discovery from Data (TKDD), 2(4):16, 2009.
[15] D.S. Johnson and M.A. Trick. Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, October 11–13, 1993, volume 26. American Mathematical Society, 1996.
[16] Donald Knuth. The sandwich theorem. Electronic Journal of Combinatorics, 1(A1), 1994.
[17] Michael Krivelevich and Benny Sudakov. Pseudo-random graphs. In More Sets, Graphs and Numbers, volume 15 of Bolyai Society Mathematical Studies, pages 199–262. Springer Berlin Heidelberg, 2006.
[18] V.E. Lee, N. Ruan, R. Jin, and C. Aggarwal. A survey of algorithms for dense subgraph discovery. Managing and Mining Graph Data, pages 303–336, 2010.
[19] L. Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory, 25(1):1–7, 1979.
[20] C.J. Luz and A. Schrijver. A convex quadratic characterization of the Lovász theta number. SIAM Journal on Discrete Mathematics, 19(2):382–387, 2006.
[21] Claire Mathieu and Warren Schudy. Correlation clustering with noisy input. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '10, pages 712–728, Philadelphia, PA, USA, 2010. Society for Industrial and Applied Mathematics.
[22] P. Pardalos and S. Rebennack. Computational challenges with cliques, quasi-cliques and clique partitions in graphs. Experimental Algorithms, pages 13–22, 2010.
[23] V. Spirin and L.A. Mirny. Protein complexes and functional modules in molecular networks. Proceedings of the National Academy of Sciences, 100(21):12123, 2003.
[24] Yael Dekel, Ori Gurel-Gurevich, and Yuval Peres. Finding hidden cliques in linear time with high probability. In ANALCO11, 2011.
Slice sampling normalized kernel-weighted completely random measure mixture models Nicholas J. Foti Department of Computer Science Dartmouth College Hanover, NH 03755 nfoti@cs.dartmouth.edu Sinead A. Williamson Department of Machine Learning Carnegie Mellon University Pittsburgh, PA 15213 sinead@cs.cmu.edu Abstract A number of dependent nonparametric processes have been proposed to model non-stationary data with unknown latent dimensionality. However, the inference algorithms are often slow and unwieldy, and are in general highly specific to a given model formulation. In this paper, we describe a large class of dependent nonparametric processes, including several existing models, and present a slice sampler that allows efficient inference across this class of models. 1 Introduction Nonparametric mixture models allow us to bypass the issue of model selection, by modeling data using a random number of mixture components that can grow if we observe more data. However, such models work on the assumption that data can be considered exchangeable. This assumption often does not hold in practice as distributions commonly vary with some covariate. For example, the proportions of different species may vary across geographic regions, and the distribution over topics discussed on Twitter is likely to evolve over time. Recently, there has been increasing interest in dependent nonparametric processes [1], that extend existing nonparametric distributions to non-stationary data. While a nonparametric process is a distribution over a single measure, a dependent nonparametric process is a distribution over a collection of measures, which may be associated with values in a covariate space. The key property of a dependent nonparametric process is that the measure at each covariate value is marginally distributed according to a known nonparametric process. A number of dependent nonparametric processes have been developed in the literature ([2] §6). 
For example, the single-p DDP [1] defines a collection of Dirichlet processes with common atom sizes but variable atom locations. The order-based DDP [3] constructs a collection of Dirichlet processes using a common set of beta random variables, but permutes the order in which they are used in a stick-breaking construction. The Spatial Normalized Gamma Process (SNGP) [4] defines a gamma process on an augmented space, such that at each covariate location a subset of the atoms are available. This creates a dependent gamma process, which can be normalized to obtain a dependent Dirichlet process. The kernel beta process (KBP) [5] defines a beta process on an augmented space, and at each covariate location modulates the atom sizes using a collection of kernels, to create a collection of dependent beta processes. Unfortunately, while such models have a number of appealing properties, inference can be challenging. While there are many similarities between existing dependent nonparametric processes, most of the inference schemes that have been proposed are highly specific and cannot be generally applied without significant modification. The contributions of this paper are twofold. First, in Section 2 we describe a general class of dependent nonparametric processes, based on defining completely random measures on an extended space. This class of models includes the SNGP and the KBP as special cases. Second, we develop a slice sampler that is applicable to all the dependent probability measures in this framework. We compare our slice sampler to existing inference algorithms, and show that we are able to achieve superior performance over existing algorithms. Further, the generality of our algorithm means we are able to easily modify the assumptions of existing models to better fit the data, without the need to significantly modify our sampler.
2 Constructing dependent nonparametric models using kernels

In this section, we describe a general class of dependent completely random measures that includes the kernel beta process as a special case. We then describe the class of dependent normalized random measures obtained by normalizing these dependent completely random measures, and show that the SNGP lies in this framework.

2.1 Kernel CRMs

A completely random measure (CRM) [6, 7] is a distribution over discrete¹ measures B on some measurable space Ω such that, for any disjoint subsets $A_k \subset \Omega$, the masses $B(A_k)$ are independent. Commonly used examples of CRMs include the gamma process, the generalized gamma process, the beta process, and the stable process. A CRM is uniquely characterized by a Lévy measure ν(dω, dπ) on Ω × R⁺, which controls the location and size of the jumps. We can interpret a CRM as a Poisson process on Ω × R⁺ with mean measure ν(dω, dπ). Let Ω = X × Θ, and let $\Pi = \{(\mu_k, \theta_k, \pi_k)\}_{k=1}^\infty$ be a Poisson process on the space X × Θ × R⁺ with the associated product σ-algebra. The space has three components: X, a bounded space of covariates; Θ, a space of parameter values; and R⁺, the space of atom masses. Let the mean measure of Π be described by the positive Lévy measure ν(dµ, dθ, dπ). While the construction herein applies for any such Lévy measure, we focus on the class of Lévy measures that factorize as ν(dµ, dθ, dπ) = R₀(dµ)H₀(dθ)ν₀(dπ). This corresponds to the class of homogeneous CRMs, where the size of an atom is independent of its location in Θ × X, and covers most CRMs encountered in the literature. We assume that X is a discrete space with P unique values, $\mu^*_p$, in order to simplify the exposition, and without loss of generality we assume that R₀(X) = 1. Additionally, let K(·, ·) : X × X → [0, 1] be a bounded kernel function.
Though any such kernel may be used, for concreteness we only consider the box kernel and the square exponential kernel, defined as
• Box kernel: $K(x, \mu) = \mathbb{1}(\|x - \mu\| < W)$, where we call W the width.
• Square exponential kernel: $K(x, \mu) = \exp(-\psi\|x - \mu\|^2)$, for ‖·‖ a dissimilarity measure and ψ > 0 a fixed constant.

Using the setup above, we define a kernel-weighted CRM (KCRM) at a fixed covariate x ∈ X and for A measurable as

$$B_x(A) = \sum_{m=1}^\infty K(x, \mu_m)\pi_m\delta_{\theta_m}(A) \qquad (1)$$

which is seen to be a CRM on Θ by the mapping theorem for Poisson processes [8]. For a fixed set of observations $(x_1, \ldots, x_G)^\top$ we define $B(A) = (B_{x_1}(A), \ldots, B_{x_G}(A))^\top$ as the vector of measures of the KCRM at the observed covariates. CRMs are characterized by their characteristic function (CF) [9], which for the CRM B can be written as

$$E\left[\exp(-v^\top B(A))\right] = \exp\left(-\int_{X\times A\times\mathbb{R}^+}\left(1 - \exp(-v^\top K_\mu\pi)\right)\nu(d\mu, d\theta, d\pi)\right) \qquad (2)$$

where $v \in \mathbb{R}^G$ and $K_\mu = (K(x_1, \mu), \ldots, K(x_G, \mu))^\top$. Equation 2 is easily derived from the general form of the CF of a Poisson process [8] and by noting that the one-dimensional CFs are exactly those of the individual $B_{x_i}(A)$. See [5] for a discussion of the dependence structure between $B_x$ and $B_{x'}$ for x, x′ ∈ X.

¹with, possibly, a deterministic continuous component

Taking ν₀ to be the Lévy measure of a beta process [10] results in the KBP. Alternatively, taking ν₀ as the Lévy measure of a gamma process, νGaP [11], and K(·, ·) as the box kernel, we recover the unnormalized form of the SNGP.

2.2 Kernel NRMs

A distribution over probability measures can be obtained by starting from a CRM and normalizing the resulting random measure. Such distributions are often referred to as normalized random measures (NRM) [12]. The most commonly used example of an NRM is the Dirichlet process, which can be obtained as a normalized gamma process [11]. Other CRMs yield NRMs with different properties – for example, a normalized generalized gamma process can have heavier tails than a Dirichlet process [13].
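The two kernels and the (truncated) sum in Eq. (1) are simple to write down concretely. The following sketch works with a finite set of atoms, which is only an approximation of the infinite sum in Eq. (1); the default width and bandwidth values are arbitrary choices for illustration:

```python
import numpy as np

def box_kernel(x, mu, W=0.25):
    """Box kernel: 1(|x - mu| < W)."""
    return (np.abs(x - mu) < W).astype(float)

def sq_exp_kernel(x, mu, psi=5.0):
    """Square exponential kernel: exp(-psi * |x - mu|^2), bounded in [0, 1]."""
    return np.exp(-psi * (x - mu) ** 2)

def kcrm_mass(x, mus, pis, kernel):
    """Total mass B_x(Theta) of a truncated KCRM, Eq. (1): sum_m K(x, mu_m) * pi_m."""
    return float(np.sum(kernel(x, mus) * pis))
```

Under the box kernel only atoms within width W of the covariate x contribute, which is exactly the mechanism that recovers the SNGP.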
We can define a class of dependent NRMs in a similar manner, starting from the KCRM defined above. Since each marginal measure $B_x$ of B is a CRM, we can normalize it by its total mass, $B_x(\Theta)$, to produce an NRM

$$P_x(A) = B_x(A)/B_x(\Theta) = \sum_{m=1}^\infty \frac{K(x, \mu_m)\pi_m}{\sum_{l=1}^\infty K(x, \mu_l)\pi_l}\delta_{\theta_m}(A) \qquad (3)$$

This formulation of a kernel NRM (KNRM) is similar to that in [14] for Ornstein–Uhlenbeck NRMs (OUNRM). While the OUNRM framework allows for arbitrary CRMs in theory, extending it to arbitrary kernel functions is non-trivial. A fundamental difference between OUNRMs and normalized KCRMs is that the marginals of an OUNRM follow a specified process, whereas the marginals of a KCRM may be different from the underlying CRM. A common use in statistics and machine learning for NRMs is as prior distributions for mixture models with an unbounded number of components [15]. Analogously, covariate-dependent NRMs can be used as priors for mixture models where the probability of being associated with a mixture component varies with the covariate [4, 14]. For concreteness, we limit ourselves to a kernel gamma process (KGaP), which we denote B ∼ KGaP(K, R₀, H₀, νGaP), although the slice sampler can be adapted to any normalized KCRM. Specifically, we observe data $\{(x_j, y_j)\}_{j=1}^N$, where $x_j \in X$ denotes the covariate of observation j and $y_j \in \mathbb{R}^d$ denotes the quantities we wish to model. Let $x^*_g$ denote the gth unique covariate value among all the $x_j$, which induces a partition on the observations so that observation j belongs to group g if $x_j = x^*_g$. We denote the ith observation corresponding to $x^*_g$ as $y_{g,i}$. Each observation is associated with a mixture component, which we denote $s_{g,i}$, drawn according to a normalized KGaP on a parameter space Θ such that (θ, φ) ∈ Θ, where θ is a mean and φ a precision. Conditional on $s_{g,i}$, each observation is then drawn from some density q(·|θ, φ), which we assume to be N(θ, φ⁻¹).
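Under a finite (truncated) atom set, the normalization in Eq. (3) reduces to a per-covariate reweighting of the atom masses. A minimal sketch, assuming the square exponential kernel and an arbitrary bandwidth:

```python
import numpy as np

def dependent_weights(x, mus, pis, psi=5.0):
    """Covariate-dependent mixture weights of Eq. (3) for a truncated atom set,
    using the square exponential kernel (an illustrative choice)."""
    w = np.exp(-psi * (x - mus) ** 2) * pis  # kernelized atom sizes K(x, mu_m) * pi_m
    return w / w.sum()                        # normalize by B_x(Theta)
```

The weights sum to one at every covariate value, and atoms whose locations μ_m lie close to x receive relatively more mass, which is exactly the dependence structure the KNRM induces.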
The full model can then be specified as

$$P_g(A)\,|\,B \sim B_g(A)/B_g(\Theta), \qquad s_{g,i}\,|\,P_g \sim \sum_{m=1}^\infty \frac{K(x^*_g, \mu_m)\pi_m}{\sum_{l=1}^\infty K(x^*_g, \mu_l)\pi_l}\,\delta_m$$
$$(\theta^*_m, \phi^*_m) \sim H_0(d\theta, d\phi), \qquad y_{g,i}\,|\,s_{g,i}, \{(\theta^*, \phi^*)\} \sim q\left(y_{g,i}\,|\,\theta^*_{s_{g,i}}, \phi^*_{s_{g,i}}\right) \qquad (4)$$

If K(·, ·) is a box kernel, Eq. 4 describes an SNGP mixture model [4].

3 A slice sampler for dependent NRMs

The slice sampler of [16] allows us to perform inference in arbitrary NRMs. We extend this slice sampler to perform inference in the class of dependent NRMs described in Sec. 2.2. The slice sampler can be used with any underlying CRM, but for simplicity we concentrate on an underlying gamma process, as described in Eq. 4. In the supplement we also derive a Rao–Blackwellized estimator of the predictive density for unobserved data using the output from the slice sampler. We use this estimator to compute predictive densities in the experiments. Analogously to [16], we introduce a set of auxiliary slice variables – one for each data point. Each data point can only belong to clusters corresponding to atoms larger than its slice variable. The set of slice variables thus defines a minimum atom size that need be represented, ensuring a finite number of instantiated atoms. We extend this idea to the KNRM framework. Note that, in this case, an atom will exhibit different sizes at different covariate locations. We refer to these sizes as the kernelized atom sizes, $K(x^*_g, \mu)\pi$, obtained by applying a kernel K, evaluated at location $x^*_g$, to the raw atom size π. Following [16], we introduce a local slice variable $u_{g,i}$. This allows us to write the joint distribution over the data points $y_{g,i}$, their cluster allocations $s_{g,i}$ and their slice variables $u_{g,i}$ as

$$f(y, u, s\,|\,\pi, \mu, \theta, \phi) = \prod_{g=1}^G V_g^{n_g-1}e^{-V_g B_{Tg}}\prod_{i=1}^{n_g}\mathbb{1}\left(u_{g,i} < K(x^*_g, \mu_{s_{g,i}})\pi_{s_{g,i}}\right)q\left(y_{g,i}\,|\,\theta_{s_{g,i}}, \phi_{s_{g,i}}\right) \qquad (5)$$

where $B_{Tg} = B_{x^*_g}(\Theta) = \sum_{m=1}^\infty K(x^*_g, \mu_m)\pi_m$ and $V_g \sim \mathrm{Ga}(n_g, B_{Tg})$ is an auxiliary variable². See the supplement and [16, 17] for a complete derivation. In order to evaluate Eq.
5, we need to evaluate $B_{Tg}$, the total mass of the unnormalized CRM at each covariate value. This involves summing over an infinite number of atoms – which we do not wish to represent. Define $0 < L = \min\{u_{g,i}\}$. This gives the smallest possible (kernelized) atom size to which data can be attached. Therefore, if we instantiate all atoms with raw size greater than L, we will include all atoms associated with occupied clusters. For any value of L, there will be a finite number M of atoms above this threshold. From these M raw atoms, we can obtain the kernelized atoms above the slice corresponding to a given data point. We must obtain the remaining mass by marginalizing over all kernelized atoms that are below the slice (see the supplement). We can split this mass into (a) the mass due to atoms that are not instantiated (i.e. whose kernelized value is below the slice at all covariate locations) and (b) the mass due to currently instantiated atoms (i.e. atoms whose kernelized value is above the slice at at least one covariate location)³. As we show in the supplement, the first term, (a), corresponds to atoms (π, µ) where π < L, the mass of which can be written as

$$\sum_{\mu^*\in X} R_0(\mu^*)\left(\int_0^L\left(1 - \exp(-V^\top K_{\mu^*}\pi)\right)\nu_0(d\pi)\right) \qquad (6)$$

where $V = (V_1, \ldots, V_G)^\top$. This can be evaluated numerically for many CRMs, including gamma and generalized gamma processes [16]. The second term, (b), consists of realized atoms $\{(\pi_k, \mu_k)\}$ such that $K(x^*_g, \mu_k)\pi_k < L$ at covariate $x^*_g$. We use a Monte Carlo estimate for (b) that we describe in the supplement. For box kernels term (b) vanishes, and we have found that even for the square exponential kernel, ignoring this term yields good results.
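The inner integral in Eq. (6) has no closed form in general, but it is a one-dimensional integral and cheap to evaluate by quadrature. A rough Riemann-sum sketch for the gamma-process case, $\nu_0(d\pi) = \alpha\,\pi^{-1}e^{-\pi}d\pi$ (here `vK` stands in for the scalar $V^\top K_{\mu^*}$ at one fixed atom location; the grid size is an arbitrary accuracy choice):

```python
import numpy as np

def mass_below_slice(L, vK, alpha=1.0, grid=20000):
    """Numerically evaluate int_0^L (1 - exp(-vK * pi)) nu0(dpi) for a
    gamma-process Levy measure nu0(dpi) = alpha * pi^{-1} exp(-pi) dpi.
    The integrand tends to alpha * vK as pi -> 0, so starting the grid
    just above zero is safe."""
    step = L / grid
    pis = np.linspace(step, L, grid)
    integrand = (1.0 - np.exp(-vK * pis)) * alpha * np.exp(-pis) / pis
    return float(integrand.sum() * step)
```

For small L the integral behaves like $\alpha\,vK\,L$, which gives a quick sanity check on the quadrature.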
3.1 Sampling equations

Having specified the joint distribution in terms of a finite measure with a random truncation point L, we can now describe a sampler that samples in turn from the conditional distributions of the auxiliary variables $V_g$, the gamma process parameter $\alpha = H_0(\Theta)$, the instantiated raw atom sizes $\pi_m$ and corresponding locations in covariate space $\mu_m$ and in parameter space $(\theta_m, \phi_m)$, and the slice variables $u_{g,i}$. We define some simplifying notation: $K_\mu = (K(x^*_1, \mu), \ldots, K(x^*_G, \mu))^\top$; $B_+ = (B_{+1}, \ldots, B_{+G})^\top$ and $B_* = (B_{*1}, \ldots, B_{*G})^\top$, where $B_{+g} = \sum_{m=1}^M K(x^*_g, \mu_m)\pi_m$ and $B_{*g} = \sum_{m=M+1}^\infty K(x^*_g, \mu_m)\pi_m$, so that $B_{Tg} = B_{+g} + B_{*g}$; and $n_{g,m} = |\{s_{g,i} : s_{g,i} = m,\ i \in 1, \ldots, n_g\}|$.

• Auxiliary variables $V_g$: The full conditional distribution for $V_g$ is given by

$$p(V_g\,|\,n_g, V_{-g}, B_+, B_*) \propto V_g^{n_g-1}\exp(-V^\top B_+)\,E[\exp(-V^\top B_*)], \qquad V_g > 0 \qquad (7)$$

which we sample using Metropolis–Hastings moves, as in [18].

²We parametrize the gamma distribution so that X ∼ Ga(a, b) has mean a/b and variance a/b².
³If X were not bounded there would be a third term consisting of raw atoms > L that, when kernelized, fall below the slice everywhere. These can be ignored by a judicious choice of the space X and the allowable kernel widths.

• Gamma process parameter α: The conditional distribution for α is given by

$$p(\alpha\,|\,K, V, \mu, \pi) \propto p(\alpha)\,\alpha^K e^{-\alpha\left[\int_L^\infty \nu_0(d\pi) + \int_0^L\int_X \left(1 - \exp(-V^\top K_\mu\pi)\right)R_0(d\mu)\nu_0(d\pi)\right]} \qquad (8)$$

If $p(\alpha) = \mathrm{Ga}(a_0, b_0)$, then the posterior is also a gamma distribution, with parameters

$$a = a_0 + K \qquad (9)$$
$$b = b_0 + \int_L^\infty \nu_0(d\pi) + \int_X\int_0^L\left(1 - \exp(-V^\top K_\mu\pi)\right)\nu_0(d\pi)R_0(d\mu) \qquad (10)$$

where the first integral in Eq. 10 can be evaluated for many processes of interest, and the second integral can be evaluated as in Eq. 6.

• Raw atom sizes $\pi_m$: The posterior for atoms associated with occupied clusters is given by

$$p(\pi_m\,|\,n_{g,m}, \mu_m, V, B_+) \propto \pi_m^{\sum_{g=1}^G n_{g,m}}\exp\left(-\pi_m\sum_{g=1}^G V_g K(x^*_g, \mu_m)\right)
ν0(πm) (11) For an underlying gamma or generalized gamma process, the posterior of πm will be given by a gamma distribution due to conjugacy [16]. There will also be a number of atoms with raw size πm > L that do not have associated data. The number of such atoms is Poisson distributed with mean α R A exp(−V T Kµπ)ν0(dπ)R0(dπ), where A = {(µ, π) : K(x∗ g, µ)π > L, for some g} and which can be computed using the approach described for Eq. 6. • Raw atom covariate locations µm: Since we assume a finite set of covariate locations, we can sample µm according to the discrete distribution p(µm | ng,m, V, B+) ∝ G Y g=1 K(x∗ g, µk)ng,m exp −πm K X g=1 VgK(x∗ g, µm) ! R0(µm) (12) • Slice variables ug,i: Sampled as ug,i|{π}, {µ}, sg,i ∼Un[0, K(x∗ g, µsg,i)πsg,i]. • Cluster allocations sg,i: The prior on sg,i cancels with the prior on ug,i, yielding p(sg,i = m | yg,i, ug,i, θm, πm, µm) ∝q(yg,i|θm, φm)1 ug,i < K(x∗ g, µm)πm (13) where only a finite number of m need be evaluated. • Parameter locations: Can be sampled as in a standard mixture model [16]. 4 Experiments We evaluate the performance of the proposed slice sampler in the setting of covariate dependent density estimation. We assume the statistical model in Eq. 4 and consider a univariate Gaussian distribution as the data generating distribution. We use both synthetic and real data sets in our experiments and compare the slice sampler to a Gibbs sampler for a finite approximation to the model (see the supplement for details of the model and sampler) and to the original SNGP sampler. We assess the mixing characteristics of the sampler using the integrated autocorrelation time τ of the number of clusters used by the sampler at each iteration after a burn-in period, and by the predictive quality of the collective samples on held-out data. The integrated autocorrelation time of samples drawn from an MCMC algorithm controls the Monte Carlo error inherent in a sample drawn from the MCMC algorithm. 
It can be shown that in a set of T samples from the MCMC algorithm, there are in effect only T/(2τ) "independent" samples. Therefore, lower values of τ are deemed better. We obtain an estimate τ̂ of the integrated autocorrelation time following [19]. We assess the predictive performance of the collected samples from the various algorithms by computing a Monte Carlo estimate of the predictive log-likelihood of a held-out data point under the model. Specifically, for a held-out point $y^*$ we have

$\log p(y^* \mid y) \approx \frac{1}{T} \sum_{t=1}^{T} \log \sum_{m=1}^{M^{(t)}} w_m^{(t)}\, q\!\left(y^* \mid \theta_m^{(t)}, \phi_m^{(t)}\right).$  (14)

Table 1: Results of the samplers using different kernels. Entries are of the form "average predictive density / average number of clusters used / τ̂", where two standard errors are shown in parentheses. Results are averaged over 5 hold-out data sets.

            Synthetic                     CMB                           Motorcycle
Slice Box   -2.70 (0.12) / 11.6 / 2442    -0.15 (0.11) / 14.4 / 2465    -0.90 (0.28) / 10.3 / 2414
SNGP        -2.67 (0.12) / 43.3 / 2488    -0.22 (0.14) / 79.1 / 2495    NA
Finite Box  -2.78 (0.15) / 11.7 / 2497    -0.41 (0.14) / 18.2 / 2444    -1.19 (0.16) / 16.4 / 2352
Slice SE    NA                            -0.28 (0.07) / 14.7 / 2447    -0.87 (0.28) / 8.2 / 2377
Finite SE   NA                            -0.29 (0.05) / 9.5 / 2491     -0.99 (0.19) / 7.3 / 2159

Figure 1: Left: Synthetic data. Middle: Trace plots of the number of clusters used by the three samplers. Right: Histogram of truncation point L.

The weight $w_m^{(t)}$ is the probability of choosing atom m for sample t. We did not use the Rao-Blackwellized estimator to compute Eq. 14 for the slice sampler, to achieve fair comparisons (see the supplement for the results using the Rao-Blackwellized estimator).

4.1 Synthetic data

We generated synthetic data from a dynamic mixture model with 12 components (Figure 1).
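For intuition, a minimal initial-sequence estimator of the integrated autocorrelation time can be sketched as below. Truncating the sum at the first non-positive autocorrelation is a simplification of this paper's procedure, which follows [19].

```python
def autocorr(x, k):
    # Lag-k sample autocorrelation (biased normalization by n)
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    if var == 0.0:
        return 0.0
    return sum((x[j] - mean) * (x[j + k] - mean) for j in range(n - k)) / (n * var)

def iact(x):
    # Integrated autocorrelation time: tau = 1/2 + sum_{k>=1} rho_k,
    # truncated at the first non-positive rho_k (simple initial-sequence rule)
    tau = 0.5
    for k in range(1, len(x) // 2):
        rho = autocorr(x, k)
        if rho <= 0.0:
            break
        tau += rho
    return tau
```

A chain of T samples then carries roughly T/(2·iact(x)) effectively independent samples, matching the T/(2τ) rule above.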
Each component has an associated location, $\mu_k$, that can take the value of any of ten uniformly spaced time stamps, $t_j \in [0, 1]$. The components are active according to the kernel $K(x, \mu_k) = \mathbb{1}(|x - \mu_k| < 0.2)$, i.e., components are active for two time stamps around their location. At each time stamp, $t_j$, we generate 60 data points. For each data point we choose a component, k, such that $|t_j - \mu_k| < 0.2$, and then generate that data point from a Gaussian distribution with mean $\mu_k$ and variance 10. We use 50 of the generated data points per time stamp as a training set and hold out 10 data points for prediction. Since the SNGP is a special case of the normalized KGaP, we compare the finite and slice samplers, which are both conditional samplers, to the original marginal sampler proposed in [4]. We use the basic version of the SNGP that uses fixed-width kernels, as we assume fixed-width kernel functions for simplicity. The implementation of the SNGP sampler we used also only allows fixed component variances, so we fix all $\phi_k = 1/10$, the true data-generating precision. We use the true kernel function that was used to generate the data as the kernel for the normalized KGaP model. We ran the slice sampler for 10,000 burn-in iterations and subsequently collected 5,000 samples. We truncated the finite version of the model to 100 atoms and ran the sampler for 5,000 burn-in iterations and collected 5,000 samples. The SNGP sampler was run for 2,000 burn-in iterations and 5,000 samples were collected4. The predictive log-likelihood, mean number of clusters used, and τ̂ are shown in the "Synthetic" column of Table 1. We see that all three algorithms find a region of the posterior that gives predictive estimates of a similar quality. The autocorrelation estimates for the three samplers are also very similar. This might seem surprising, since the SNGP sampler uses sophisticated split-merge moves to improve mixing, which have no analogue in the slice sampler.
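A minimal script reproducing the synthetic-data setup above can be sketched as follows. The box-kernel activation width (0.2), the ten stamps, the 60 points per stamp, and the variance of 10 come from the description; the random placement of the 12 component locations and the seed are illustrative assumptions.

```python
import random

def generate_synthetic(n_components=12, n_per_stamp=60, width=0.2,
                       variance=10.0, seed=0):
    # Ten uniformly spaced time stamps in [0, 1]; component locations mu_k are
    # drawn from the stamps, redrawn until every stamp has at least one
    # active component so each data point has a component to attach to.
    rng = random.Random(seed)
    stamps = [j / 9.0 for j in range(10)]
    while True:
        locs = [rng.choice(stamps) for _ in range(n_components)]
        if all(any(abs(t - mu) < width for mu in locs) for t in stamps):
            break
    data = []
    for t in stamps:
        active = [mu for mu in locs if abs(t - mu) < width]
        for _ in range(n_per_stamp):
            mu = rng.choice(active)               # pick an active component
            data.append((t, rng.gauss(mu, variance ** 0.5)))
    return data
```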
4No thinning was performed in any of the experiments in this paper.

In addition, we note that although the per-iteration mixing performance is comparable, the average time per 100 iterations was ∼10 seconds for the slice sampler, ∼30 seconds for the SNGP sampler, and ∼200 seconds for the finite sampler. Even with only 100 atoms, the finite sampler is much more expensive than the slice and SNGP5 samplers. We also observe (Figure 1) that both the slice and finite samplers use essentially the true number of components underlying the data, and that the SNGP sampler uses on average twice as many components. The finite sampler finds a posterior mode with 13 clusters and rarely makes small moves from that mode. The slice sampler explores modes with 10-17 clusters, but never makes large jumps away from this region. The SNGP sampler explores the largest number of used clusters, ranging from 23-40; however, it has not explored regions that use fewer clusters. Figure 1 also depicts the distribution of the variable truncation level L over all samples in the slice sampler. This suggests that a finite model that discards atoms with $\pi_k < 10^{-18}$ introduces negligible truncation error. However, this value of L corresponds to roughly $10^{18}$ atoms in the finite model, which is computationally intractable. To keep the computation times reasonable we were only able to use 100 atoms, a far cry from the number implied by L. In Figure 2 (Left) we plot estimates of the predictive density at each time stamp for the slice (a), finite (b) and SNGP (c) samplers. All three samplers capture the evolving structure of the distribution. However, the finite sampler seems unable to discard unneeded components. This is evidenced by the small mass of probability that spans times [0, 0.8] when the data that the component explains only exists at times [0.2, 0.5].
The slice and SNGP samplers seem to both provide reasonable explanations for the distribution, with the slice sampler tending to provide smoother estimates.

4.2 Real data

As well as providing an alternative inference method for existing models, our slice sampler can be used in a range of models that fall under the general class of KNRMs. To demonstrate this, we use the finite and slice versions of our sampler to learn two kernel DPs, one using a box kernel, $K(x, \mu) = \mathbb{1}(|x - \mu| < 0.2)$ (the setting in the SNGP), and the other using a square exponential kernel, $K(x, \mu) = \exp(-200(x - \mu)^2)$, which has support approximately on $[\mu - 0.2, \mu + 0.2]$. The kernel was chosen to be somewhat comparable to the box kernel; however, this kernel allows the influence of an atom to diminish gradually, as opposed to being constant. We compare to the SNGP sampler for the box kernel model, but note that this sampler is not applicable to the exponential kernel model. We compare these approaches on two real-world datasets:

• Cosmic microwave background radiation (CMB) [20]: TT power spectrum measurements, η, from the cosmic microwave background radiation at various 'multipole moments', denoted M. Both variables are considered continuous and exhibit dependence. We rescale M to be in [0, 1] and standardize η to have mean 0 and unit variance.
• Motorcycle crash data [21]: This data set records the head acceleration, A, at various times during a simulated motorcycle crash. We normalize time to [0, 1] and standardize A to have mean 0 and unit variance.

Both datasets exhibit local heteroskedasticity, which cannot be captured using the SNGP. For the CMB data, we consider only the first 600 multipole moments, where the variance is approximately constant, allowing us to compare the SNGP sampler to the other algorithms. For all models we fixed the observation variance to 0.02, which we estimated from the standardized data.
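The two kernels compared above can be written down directly; the parameter values (window 0.2, length-scale 200) are those stated in the text, and the function names are illustrative.

```python
import math

def box_kernel(x, mu, width=0.2):
    # K(x, mu) = 1(|x - mu| < width): an atom is fully active inside the
    # window and completely inactive outside it
    return 1.0 if abs(x - mu) < width else 0.0

def sq_exp_kernel(x, mu, scale=200.0):
    # K(x, mu) = exp(-scale * (x - mu)^2): influence decays smoothly;
    # with scale = 200, the value at |x - mu| = 0.2 is exp(-8), about 3.4e-4,
    # so the effective support is roughly [mu - 0.2, mu + 0.2]
    return math.exp(-scale * (x - mu) ** 2)
```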
To ease the computational burden of the samplers we picked 18 time stamps in [0.05, 0.95], equally spaced 0.05 apart, and assigned each observation to the time stamp closest to its associated value of M. This step is by no means necessary, but the running time of the algorithms improves significantly.

5Sampling the cluster means and assignments is the slowest step for the SNGP sampler, taking about 3 seconds. The times reported here only performed this step every 25 iterations, achieving reasonable results. If this step were performed every iteration the results may improve, but the computation time will explode.

Figure 2: Left: Predictive density at each time stamp for synthetic data using the slice (a), finite (b) and SNGP (c) samplers. The scales of all three axes are identical. Middle: Mean and 95% CI of the predictive distribution for all three samplers on CMB data using the box kernel. Right: Mean and 95% CI of the predictive distribution using the square exponential kernel.

For the motorcycle data, there was no regime of constant variance, so we only compare the slice and finite truncation samplers6. For each dataset and each model/sampler, the held-out predictive log-likelihood, the mean number of used clusters, and τ̂ are reported in Table 1. The mixing characteristics of the chain are similar to those obtained for the synthetic data. We see in Table 1 that the box kernel and the square exponential kernel produce similar results on the CMB data. However, the kernel width was not optimized, and different values may prove to yield superior results. For the motorcycle data we see a noticeable difference between using the box and square exponential kernels, where using the latter improves the held-out predictive likelihood and results in both samplers using fewer components on average.
Figure 2 shows the predictive distributions obtained on the CMB data. Looking at the mean and 95% CI of the predictive distribution (middle) we see that when using the box kernel the SNGP actually fits the data the best. This is most likely due to the fact that the SNGP is using more atoms than the slice or finite samplers. We show that the square exponential kernel (right) gives much smoother estimates and appears to fit the data better, using the same number of atoms as were learned with the box kernel (see Table 1). We note that the slice sampler took ∼20 seconds per 100 iterations while the finite sampler used ∼150 seconds.

5 Conclusion

We presented the class of normalized kernel CRMs, a type of dependent normalized random measure. This class generalizes previous work by allowing more flexibility in the underlying CRM and kernel function used to induce dependence. We developed a slice sampler to perform inference on the infinite dimensional measure and compared this method with samplers for a finite approximation and for the SNGP. We found that the slice sampler yields samples with competitive predictive accuracy at a fraction of the computational cost. There are many directions for future research. Incorporating reversible-jump moves [22] such as split-merge proposals should allow the slice sampler to explore larger regions of the parameter space with a limited decrease in computational efficiency. A similar methodology may yield efficient inference algorithms for KCRMs such as the KBP, extending the existing slice sampler for the Indian Buffet Process [23].

Acknowledgments

NF was funded by grant AFOSR FA9550-11-1-0166. SW was funded by grants NIH R01GM087694 and AFOSR FA9550010247.

6The SNGP could still be used to model this data; however, then we would be comparing the models as opposed to the samplers.

References

[1] S.N. MacEachern. Dependent nonparametric processes. In ASA Proceedings of the Section on Bayesian Statistical Science, 1999.
[2] D. Dunson.
Nonparametric Bayes applications to biostatistics. In N. L. Hjort, C. Holmes, P. Müller, and S. G. Walker, editors, Bayesian Nonparametrics. Cambridge University Press, 2010.
[3] J.E. Griffin and M.F.J. Steel. Order-based dependent Dirichlet processes. JASA, 101(473):179–194, 2006.
[4] V. Rao and Y.W. Teh. Spatial normalized gamma processes. In NIPS, 2009.
[5] L. Ren, Y. Wang, D. Dunson, and L. Carin. The kernel beta process. In NIPS, 2011.
[6] J.F.C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967.
[7] A. Lijoi and I. Prünster. Models beyond the Dirichlet process. Technical Report 129, Collegio Carlo Alberto, 2009.
[8] J.F.C. Kingman. Poisson Processes. OUP, 1993.
[9] B. Fristedt and L.F. Gray. A Modern Approach to Probability Theory. Probability and Its Applications. Birkhäuser, 1997.
[10] N.L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. Annals of Statistics, 18:1259–1294, 1990.
[11] T.S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209–230, 1973.
[12] E. Regazzini, A. Lijoi, and I. Prünster. Distributional results for means of normalized random measures with independent increments. Annals of Statistics, 31(2):560–585, 2003.
[13] A. Lijoi, R.H. Mena, and I. Prünster. Controlling the reinforcement in Bayesian non-parametric mixture models. JRSS B, 69(4):715–740, 2007.
[14] J.E. Griffin. The Ornstein-Uhlenbeck Dirichlet process and other time-varying processes for Bayesian nonparametric inference. Technical report, Department of Statistics, University of Warwick, 2007.
[15] S. Favaro and Y.W. Teh. MCMC for normalized random measure mixture models. Submitted, 2012.
[16] J.E. Griffin and S.G. Walker. Posterior simulation of normalized random measure mixtures. Journal of Computational and Graphical Statistics, 20(1):241–259, 2011.
[17] L.F. James, A. Lijoi, and I. Prünster.
Posterior analysis for normalized random measures with independent increments. Scandinavian Journal of Statistics, 36(1):76–97, 2009.
[18] J.E. Griffin, M. Kolossiatis, and M.F.J. Steel. Comparing distributions using dependent normalized random measure mixtures. Technical report, University of Warwick, 2010.
[19] M. Kalli, J.E. Griffin, and S.G. Walker. Slice sampling mixture models. Statistics and Computing, 21(1):93–105, 2011.
[20] C.L. Bennett et al. First year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Preliminary maps and basic results. Astrophysics Journal Supplement, 148:1, 2003.
[21] B.W. Silverman. Some aspects of the spline smoothing approach to non-parametric curve fitting. JRSS B, 47:1–52, 1985.
[22] P.J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.
[23] Y.W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In AISTATS, volume 11, 2007.
Automatic Feature Induction for Stagewise Collaborative Filtering

Joonseok Lee^a, Mingxuan Sun^a, Seungyeon Kim^a, Guy Lebanon^{a,b}
^a College of Computing, Georgia Institute of Technology, Atlanta, GA 30332
^b Google Research, Mountain View, CA 94043
{jlee716, msun3, seungyeon.kim}@gatech.edu, lebanon@cc.gatech.edu

Abstract

Recent approaches to collaborative filtering have concentrated on estimating an algebraic or statistical model, and using the model for predicting missing ratings. In this paper we observe that different models have relative advantages in different regions of the input space. This motivates our approach of using stagewise linear combinations of collaborative filtering algorithms, with non-constant combination coefficients based on kernel smoothing. The resulting stagewise model is computationally scalable and outperforms a wide selection of state-of-the-art collaborative filtering algorithms.

1 Introduction

Recent approaches to collaborative filtering (CF) have concentrated on estimating an algebraic or statistical model, and using the model for predicting the missing rating of user u on item i. We denote CF methods as f(u, i), and the family of potential CF methods as F. Ensemble methods, which combine multiple models from F into a "meta-model", have been a significant research direction in classification and regression. Linear combinations of K models

$F^{(K)}(x) = \sum_{k=1}^{K} \alpha_k f_k(x)$  (1)

where $\alpha_1, \ldots, \alpha_K \in \mathbb{R}$ and $f_1, \ldots, f_K \in F$, such as boosting or stagewise linear regression and stagewise logistic regression, enjoy a significant performance boost over the single top-performing model. This is not surprising, since (1) includes as a degenerate case each of the models $f \in F$ by itself. Stagewise models are greedy incremental models of the form

$(\alpha_k, f_k) = \arg\min_{\alpha_k \in \mathbb{R},\ f_k \in F} \mathrm{Risk}\left(F^{(k-1)} + \alpha_k f_k\right), \qquad k = 1, \ldots, K,$  (2)

where the parameters of $F^{(K)}$ are estimated one by one, without modifying previously selected parameters.
Stagewise models have two important benefits: (a) a significant resistance to overfitting, and (b) computational scalability to large data and high K. It is somewhat surprising that ensemble methods have had relatively little success in the collaborative filtering literature. Generally speaking, ensemble or combination methods have shown only a minor improvement over the top-performing CF methods. The cases where ensemble methods did show an improvement (for example, the Netflix prize winner [10] and runner-up) relied heavily on manual feature engineering, manual parameter setting, and other tinkering. This paper follows up on an experimental discovery: different recommendation systems perform better than others for some users and items but not for others. In other words, the relative strengths of two distinct CF models $f_1(u, i), f_2(u, i) \in F$ depend on the user u and the item i whose rating is being predicted.

Figure 1: Test set loss (mean absolute error) of two simple algorithms (user average and item average) on items with different numbers of ratings.

One example of two such systems appears in Figure 1, which graphs the test-set loss of two recommendation rules (user average and item average) as a function of the number of available ratings for the recommended item i. The two recommendation rules outperform each other, depending on whether the item in question has few or many ratings in the training data. We conclude from this graph and other comprehensive experiments [14] that algorithms that are inferior in some circumstances may be superior in other circumstances. The inescapable conclusion is that the weights $\alpha_k$ in the combination should be functions of u and i rather than constants:

$F^{(K)}(u, i) = \sum_{k=1}^{K} \alpha_k(u, i) f_k(u, i)$  (3)

where $\alpha_k(u, i) \in \mathbb{R}$ and $f_k \in F$ for $k = 1, \ldots, K$.
In this paper we explore the use of such models for collaborative filtering, where the weight functions $\alpha_k(u, i)$ are learned from data. A major part of our contribution is a feature induction strategy to identify feature functions expressing useful locality information. Our experimental study shows that the proposed method outperforms a wide variety of state-of-the-art and traditional methods, and also outperforms other CF ensemble methods.

2 Related Work

Many memory-based CF methods predict the rating of items based on the similarity of the test user and the training users [21, 3, 6]. Similarity measures include Pearson correlation [21] and vector cosine similarity [3, 6]. Other memory-based CF methods include item-based CF [25] and a nonparametric probabilistic model based on ranking preference similarities [28]. Model-based CF includes user and item clustering [3, 29, 32], Bayesian networks [3], dependence networks [5], and probabilistic latent variable models [19, 17, 33]. Slope-one [16] achieves fast and reasonably accurate prediction. The state-of-the-art methods, including the Netflix competition winner, are based on matrix factorization. The factorized matrix can be used to fill out the unobserved entries of the user-rating matrix in a way similar to latent factor analysis [20, 12, 9, 13, 24, 23, 11]. Some recent work suggested that combining different CF models may improve the prediction accuracy. Specifically, a memory-based method linearly combined with a latent factor method [1, 8] retains the advantages of both models. Ensembles of maximum margin matrix factorizations were explored to improve the result of a single MMMF model in [4]. A mixture-of-experts model is proposed in [27] to linearly combine the prediction results of more than two models. In many cases, there is significant manual intervention, such as setting the combination weights manually. Feature-weighted linear stacking [26] is the ensemble method most closely related to our approach.
The primary difference is the manual selection of features in [26], as opposed to the automatic induction of local features in our paper, which leads to a significant improvement in prediction quality. Model combination based on locality has been proposed in other machine learning topics, such as classification [31, 18] or sensitivity estimation [2].

3 Combination of CF Methods with Non-Constant Weights

Recalling the linear combination (3) from Section 1, we define non-constant combination weights $\alpha_k(u, i)$ that are functions of the user and item that are being predicted. We propose the following algebraic form

$\alpha_k(u, i) = \beta_k h_k(u, i), \qquad \beta_k \in \mathbb{R},\ h_k \in H$  (4)

where $\beta_k$ is a parameter and $h_k$ is a function selected from a family H of candidate feature functions. The combination (3) with non-constant weights (4) enables some CF methods $f_k$ to be emphasized for some user-item combinations through an appropriate selection of the $\beta_k$ parameters. We assume that H contains the constant function, capturing the constant-weight combination within our model. Substituting (4) into (3) we get

$F^{(K)}(u, i) = \sum_{k=1}^{K} \beta_k h_k(u, i) f_k(u, i), \qquad \beta_k \in \mathbb{R},\ h_k \in H,\ f_k \in F.$  (5)

Note that since $h_k$ and $f_k$ are selected from the sets of feature functions and CF methods respectively, we may have $f_j = f_l$ or $h_j = h_l$ for $j \neq l$. This is similar to boosting and other stagewise algorithms, where one feature or base learner may be chosen multiple times, effectively updating its associated feature functions and parameters. The total weight function associated with a particular $f \in F$ is $\sum_{k : f_k = f} \beta_k h_k(u, i)$. A simple way to fit $\beta = (\beta_1, \ldots, \beta_K)$ is least squares:

$\beta^* = \arg\min_{\beta \in C} \sum_{u,i} \left( F^{(K)}(u, i) - R_{u,i} \right)^2,$  (6)

where $R_{u,i}$ denotes the rating of user u on item i in the training data and the summation ranges over all ratings in the training set.
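The least-squares fit of β in Eq. 6 amounts to solving the normal equations of a linear regression whose regressors are the products $h_k(u, i) f_k(u, i)$. A self-contained, unconstrained sketch in pure Python (helper names are hypothetical):

```python
def fit_beta(ratings, predictors, features):
    # ratings: list of (u, i, r); predictors: list of f_k(u, i); features: h_k(u, i)
    # Solve Eq. 6: beta* = argmin sum_{u,i} (sum_k beta_k h_k f_k - R_{u,i})^2
    K = len(predictors)
    # Accumulate the normal equations  A beta = b  with A = X^T X, b = X^T r
    A = [[0.0] * K for _ in range(K)]
    b = [0.0] * K
    for (u, i, r) in ratings:
        x = [features[k](u, i) * predictors[k](u, i) for k in range(K)]
        for j in range(K):
            b[j] += x[j] * r
            for l in range(K):
                A[j][l] += x[j] * x[l]
    # Gaussian elimination with partial pivoting
    for col in range(K):
        p = max(range(col, K), key=lambda row: abs(A[row][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for row in range(col + 1, K):
            m = A[row][col] / A[col][col]
            for l in range(col, K):
                A[row][l] -= m * A[col][l]
            b[row] -= m * b[col]
    beta = [0.0] * K
    for row in range(K - 1, -1, -1):
        s = b[row] - sum(A[row][l] * beta[l] for l in range(row + 1, K))
        beta[row] = s / A[row][row]
    return beta
```

In practice one would add regularization or use a linear algebra library; the sketch only illustrates the shape of the optimization.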
A variation of (6), where β is constrained such that $\alpha_k(u, i) \ge 0$ and $\sum_{k=1}^{K} \alpha_k(u, i) = 1$, endows F with the following probabilistic interpretation:

$F(u, i) = E_p\{f \mid u, i\},$  (7)

where f represents a random draw from F, with probabilities $p(f \mid u, i)$ proportional to $\sum_{k : f_k = f} \beta_k h_k(u, i)$. In contrast to standard combination models with fixed weights, (7) forms a conditional expectation, rather than an expectation.

4 Inducing Local Features

In contrast to [26], which manually defined 25 features, we induce the features $h_k$ from data. The features $h_k(u, i)$ should emphasize users u and items i that are likely to lead to variations in the relative strength of the $f_1, \ldots, f_K$. We consider below two issues: (i) defining the set H of candidate features, and (ii) a strategy for selecting features from H to add to the combination F.

4.1 Candidate Feature Families H

We denote the sets of users and items by U and I respectively, and the domain of $f \in F$ and $h \in H$ as $\Omega = U \times I$. The set $R \subset \Omega$ is the set of user-item pairs present in the training set, and the user-item pairs being predicted lie in $\Omega \setminus R$. We consider the following three unimodal functions on Ω, parameterized by a location parameter or mode $\omega^* = (u^*, i^*) \in \Omega$ and a bandwidth $h > 0$:

$K^{(1)}_{h,(u^*,i^*)}(u, i) \propto \left(1 - \frac{d(u^*, u)}{h}\right) I(d(u^*, u) \le h),$
$K^{(2)}_{h,(u^*,i^*)}(u, i) \propto \left(1 - \frac{d(i^*, i)}{h}\right) I(d(i^*, i) \le h),$
$K^{(3)}_{h,(u^*,i^*)}(u, i) \propto \left(1 - \frac{d(u^*, u)}{h}\right) I(d(u^*, u) \le h) \cdot \left(1 - \frac{d(i^*, i)}{h}\right) I(d(i^*, i) \le h),$  (8)

where $I(A) = 1$ if A holds and 0 otherwise. The first function is unimodal in u, centered around $u^*$, and constant in i. The second function is unimodal in i, centered around $i^*$, and constant in u. The third is unimodal in u and i, centered around $(u^*, i^*)$. There are several possible choices for the distance functions in (8) between users and between items.
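The triangular feature functions of Eq. 8, and the max-mixture of Eq. 11, can be sketched as follows. The functions are written without the omitted proportionality constant, and the `dist`, `center`, and `on` names are illustrative; an additive mixture (Eq. 10) would simply replace `max` with a sum.

```python
def triangular_feature(dist, center, h, on="user"):
    # Discrete analog of a triangular kernel (Eq. 8): the value decays
    # linearly with distance from the mode `center`, truncated at bandwidth h.
    def K(u, i):
        t = u if on == "user" else i
        d = dist(center, t)
        return 1.0 - d / h if d <= h else 0.0
    return K

def max_mixture(kernels):
    # Multi-mode feature (Eq. 11): maximum over several single-mode kernels
    def K(u, i):
        return max(k(u, i) for k in kernels)
    return K
```

With the angular distance of Eq. 9 plugged in as `dist`, these functions measure proximity to a mode user or item, exactly the locality information the induction step searches over.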
For simplicity, we use in our experiments the angular distance

$d(x, y) = \arccos \frac{\langle x, y \rangle}{\|x\| \cdot \|y\|}$  (9)

where the inner products above are computed based on the user-item rating matrix expressing the training set (ignoring entries not present in both arguments). The functions (8) are the discrete analogs of the triangular kernel $K_h(x) = h^{-1}(1 - |x - x^*|/h)\, I(|x - x^*| \le h)$ used in non-parametric kernel smoothing [30]. Their values decay linearly with the distance from their mode (truncated at zero), and feature a bandwidth parameter h controlling the rate of decay. As h increases, the support size $|\{\omega \in \Omega : K(\omega) > 0\}|$ increases and $\max_{\omega \in \Omega} K(\omega)$ decreases. The unimodal feature functions (8) capture locality in the Ω space by measuring proximity to a mode, representing a user $u^*$, an item $i^*$, or a user-item pair. We define the family of candidate features H as all possible additive mixtures or max-mixtures of the functions (8), parameterized by a set of multiple modes $\omega^* = \{\omega^*_1, \ldots, \omega^*_r\}$:

$K_{\omega^*}(u, i) \propto \sum_{j=1}^{r} K_{\omega^*_j}(u, i)$  (10)

$K_{\omega^*}(u, i) \propto \max_{j=1,\ldots,r} K_{\omega^*_j}(u, i).$  (11)

Using this definition, feature functions $h_k(u, i) \in H$ are able to express a wide variety of locality information involving multiple potential modes. We discuss next the strategy for identifying useful features from H and adding them to the model F in a stagewise manner.

4.2 Feature Induction Strategy

Adapting the stagewise learning approach to the model (5), we have

$F^{(K)}(u, i) = \sum_{k=1}^{K} \beta_k h_k(u, i) f_k(u, i),$  (12)

$(\beta_k, h_k, f_k) = \arg\min_{\beta_k \in \mathbb{R},\ h_k \in H,\ f_k \in F} \sum_{(u,i) \in R} \left( F^{(k-1)}(u, i) + \beta_k h_k(u, i) f_k(u, i) - R_{u,i} \right)^2.$

It is a well-known fact that stagewise algorithms sometimes outperform non-greedy algorithms due to resistance to overfitting (see [22], for example). This explains the good generalization ability of boosting and stage-wise linear regression. From a computational standpoint, (12) scales nicely with K and with the training set size.
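A sketch of the greedy loop implementing Eq. 12: for every candidate pair (h, f), the optimal β given the current residuals has a closed form (a one-dimensional least-squares solution), and the pair with the smallest squared error is appended without revisiting earlier terms. Function names are hypothetical.

```python
def stagewise_fit(ratings, cf_models, feature_pool, n_stages=10):
    # ratings: list of (u, i, r); cf_models / feature_pool: callables (u, i) -> float
    # Greedy stagewise selection: at each stage pick (beta, h, f) minimizing
    # squared error, leaving all previously selected terms fixed (Eq. 12).
    pred = [0.0] * len(ratings)
    terms = []
    for _ in range(n_stages):
        best = None
        for f in cf_models:
            for h in feature_pool:
                x = [h(u, i) * f(u, i) for (u, i, _) in ratings]
                xx = sum(v * v for v in x)
                if xx == 0.0:
                    continue
                # closed-form 1-d least squares for beta on current residuals
                beta = sum(v * (r - p)
                           for v, (_, _, r), p in zip(x, ratings, pred)) / xx
                sse = sum((p + beta * v - r) ** 2
                          for v, (_, _, r), p in zip(x, ratings, pred))
                if best is None or sse < best[0]:
                    best = (sse, beta, h, f, x)
        if best is None:
            break
        sse, beta, h, f, x = best
        pred = [p + beta * v for p, v in zip(pred, x)]
        terms.append((beta, h, f))
    return terms, pred
```

The brute-force double loop over F and H is exactly the O(|H|·|F|·|R|) per-iteration cost discussed next; the paper controls it by sampling a small random subset of H each iteration.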
The one-dimensional quadratic optimization with respect to β is solved in closed form, but the optimization over F and H has to be done by brute force or by some approximate method such as sampling. The computational complexity of each iteration is thus $O(|H| \cdot |F| \cdot |R|)$, assuming no approximations are performed. Since we consider relatively small families F of CF methods, the optimization over F does not pose a substantial problem. The optimization over H is more problematic, since H is potentially infinite, or otherwise very large. We address this difficulty by restricting H to a finite collection of additive or max-mixture kernels with r modes, randomly sampled from the users or items present in the training data. Our experiments conclude that it is possible to find useful features from a surprisingly small number of randomly-chosen samples.

5 Experiments

We describe below the experimental setting, followed by the experimental results and conclusions.

5.1 Experimental Design

We used the recommendation algorithm toolkit PREA [15] for candidate algorithms, including three simple baselines (Constant model, User Average, and Item Average), five matrix-factorization methods (Regularized SVD, NMF [13], PMF [24], Bayesian PMF [23], and Non-Linear PMF [12]), and Slope-one [16]. This list includes traditional baselines as well as state-of-the-art CF methods that were proposed recently in the research literature. We evaluate the performance using the Root Mean Squared Error (RMSE), measured on the test set. Table 1 lists 5 experimental settings. SINGLE runs each CF algorithm individually, and chooses the one with the best average performance. CONST combines all candidate algorithms with constant weights as in (1). FWLS combines all candidate algorithms with non-constant weights as in (3) [26]. For CONST and FWLS, the weights are estimated from data by solving a least-squares problem. STAGE combines CF algorithms in a stage-wise manner.
FEAT applies the feature induction techniques discussed in Section 4. To evaluate whether the automatic feature induction in FEAT works better or worse than manually constructed features, we used in FWLS and STAGE manual features similar to the ones in [26] (excluding features requiring temporal data). Examples include the number of movies rated per user, the number of users rating each movie, the standard deviation of the user's ratings, and the standard deviation of the item's ratings. The feature induction in FEAT used a feature space H with additive multi-mode smoothing kernels as described in Section 4 (for simplicity we avoided kernels unimodal in both u and i). The family H included 200 randomly sampled features (a new sample was taken for each of the iterations in the stagewise algorithms). The r in (11) was set to 5% of the user or item count, and we used bandwidth h values of 0.05 (an extreme case where most features have value either 0 or 1) and 0.8 (each user or item has moderate similarity values). The stagewise algorithm continues until either five consecutive trials fail to improve the RMSE on the validation set, or the iteration number reaches 100, which occurs only in a few cases. We used similar L2 regularization for all methods (both stagewise and non-stagewise), where the regularization parameter was selected among 5 different values based on a validation set. We experimented with the two standard MovieLens datasets, 100K and 1M, and with the Netflix dataset. In the Netflix dataset experiments, we sub-sampled the data since (a) running state-of-the-art candidate algorithms on the full Netflix data takes too long - for example, Bayesian PMF was reported to take 188 hours [23] - and (b) it enables us to run extensive experiments measuring the performance of the CF algorithms as a function of the number of users, the number of items, and voting sparsity, and it facilitates cross-validation and statistical tests.
More specifically, we sub-sampled from the most active M users and the most often rated N items to obtain pre-specified data density levels $|R|/|\Omega|$. As shown in Table 2, we varied either the user or item count in the set {1000, 1500, 2000, 2500, 3000}, holding the other variable fixed at 1000 and the density at 1%, which is comparable to the density of the original Netflix dataset. We also conducted an experiment where the data density varied in the set {1%, 1.5%, 2%, 2.5%} with user and item counts fixed at 1000 each. We set aside a randomly chosen 20% as a test set, and used the remaining 80% both for training the individual recommenders and for learning the ensemble model. It is possible, and perhaps preferable, to use two distinct training sets for the CF models and the ensemble. However, in our case, we obtained high performance even when using the same training dataset in both stages.

Method   C  W  S  I   Explanation
SINGLE                Best-performing single CF algorithm
CONST    O            Mixture of CF without features
FWLS     O  O         Mixture of CF with manually-designed features
STAGE    O  O  O      Stagewise mixture with manual features
FEAT     O  O  O  O   Stagewise mixture with induced features

Table 1: Experimental setting.
(C: Combination of multiple algorithms, W: Weights varying with features, S: Stage-wise algorithm, I: Induced features)

Dataset        Netflix                                                          MovieLens
                                                                                100K    1M
User Count     1000    2000    3000    1000    1000    1000    1000    1000    943     6039
Item Count     1000    1000    1000    2000    3000    1000    1000    1000    1682    3883
Density        1.0%    1.0%    1.0%    1.0%    1.0%    1.5%    2.0%    2.5%    6.3%    4.3%

Single CF
Constant       1.2188  1.2013  1.2072  1.1964  1.1888  1.2188  1.2235  1.2113  1.2408  1.2590
UserAvg        1.0566  1.0513  1.0375  1.0359  1.0174  1.0566  1.0318  1.0252  1.0408  1.0352
ItemAvg        1.1260  1.0611  1.0445  1.1221  1.1444  1.1260  1.1029  1.0900  1.0183  0.9789
Slope1         1.4490  1.4012  1.3321  1.4049  1.3196  1.4490  1.3505  1.0725  0.9371  0.9017
RegSVD         1.0623  1.0155  1.0083  1.0354  1.0289  1.0343  1.0154  1.0020  0.9098  0.8671
NMF            1.0784  1.0205  1.0069  1.0423  1.0298  1.0406  1.0151  1.0091  0.9601  0.9268
PMF            1.6180  1.4824  1.4081  1.4953  1.4804  1.4903  1.3594  1.1818  0.9328  0.9623
BPMF           1.3973  1.2951  1.2949  1.2566  1.2102  1.3160  1.2021  1.1514  0.9629  0.9000
NLPMF          1.0561  1.0507  1.0382  1.0361  1.0471  1.0436  1.0382  1.0523  0.9560  0.9415

Combined
SINGLE         1.0561  1.0155  1.0069  1.0354  1.0174  1.0343  1.0151  1.0020  0.9098  0.8671
CONST          1.0429  1.0072  0.9963  1.0198  1.0102  1.0255  0.9968  0.9824  0.9073  0.8660
FWLS           1.0288  1.0050  0.9946  1.0089  1.0016  1.0179  0.9935  0.9802  0.9010  0.8649
STAGE          1.0036  0.9784  0.9668  0.9967  0.9821  0.9935  0.9846  0.9769  0.8961  0.8623
FEAT           0.9862  0.9607  0.9607  0.9740  0.9717  0.9703  0.9589  0.9492  0.8949  0.8569

p-Value        0.0028  0.0001  0.0003  0.0008  0.0014  0.0002  0.0019  0.0013  0.0014  0.0023

Table 2: Test error in RMSE (lower values are better) for the single CF algorithms used as candidates and for the combined models. Columns where M or N is 1500 or 2500 are omitted due to lack of space; those results are shown in Figure 2. The best-performing method in each group is indicated in italics. The last row gives the p-value for the statistical test of the hypothesis FEAT ≻ FWLS.
[Figure 2: Performance trend with varied user count (left), item count (middle), and density (right) on the Netflix dataset. Each panel plots RMSE (y axis, roughly 0.94 to 1.10) for the methods SINGLE, CONST, FWLS, STAGE, and FEAT.]

For the stagewise methods, the 80% train set was divided into a 60% training set and a 20% validation set, the latter used to determine when to stop the stagewise addition process. The non-stagewise methods used the entire 80% for training. For both stagewise and non-stagewise methods, 10% of the training set was used to select the regularization parameter. The results were averaged over 10 random data samples.

6 Results and Discussion

6.1 Performance Analysis and Example

Table 2 displays the performance in RMSE of each combination method, as well as of the individual algorithms. Examining it, we observe the following partial order with respect to prediction accuracy: FEAT ≻ STAGE ≻ FWLS ≻ CONST ≻ SINGLE.

• FWLS ≻ CONST ≻ SINGLE: Combining CF algorithms (even with constant weights only) produces better predictions than the best single CF method. Also, using non-constant weights improves performance further. This result is consistent with what has been reported in the literature [7, 26].

• STAGE ≻ FWLS: Figure 2 indicates that stagewise combinations, in which features are chosen with replacement, are more accurate. Selection with replacement allows certain features to be selected more than once, correcting a previously inaccurate parameter setting.

• FEAT ≻ STAGE: Making use of induced features improves prediction accuracy further over stagewise optimization with manually-designed features.
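The stagewise selection-with-replacement procedure discussed in the observations above can be sketched in a few lines. This is a hedged illustration only: the synthetic 1-D data, the threshold features standing in for the kernel-smoothed features of Section 4, the candidate-sample size of 20, and all other choices are assumptions for the example, not the paper's implementation.

```python
# Sketch: greedy stagewise least-squares where each iteration samples a
# small random subset of candidate features (the paper sampled 200) and,
# with replacement, adds the one that most reduces the squared error.
import random

random.seed(0)

# Synthetic regression targets on a 1-D input; threshold features below
# stand in for the paper's kernel-smoothed features (an assumption).
xs = [i / 100.0 for i in range(100)]
ys = [2.0 * (x > 0.3) - 1.0 * (x > 0.7) for x in xs]

def make_feature(t):
    return lambda x: 1.0 if x > t else 0.0

def rmse(pred, target):
    return (sum((p - y) ** 2 for p, y in zip(pred, target)) / len(target)) ** 0.5

pred = [0.0] * len(xs)              # current ensemble prediction
for it in range(50):                # stagewise iterations
    cands = [make_feature(random.random()) for _ in range(20)]
    best = None
    for h in cands:
        hv = [h(x) for x in xs]
        denom = sum(v * v for v in hv)
        if denom == 0:
            continue
        # least-squares optimal coefficient of this single feature
        # against the current residual
        alpha = sum(v * (y - p) for v, y, p in zip(hv, ys, pred)) / denom
        new_pred = [p + alpha * v for p, v in zip(pred, hv)]
        err = rmse(new_pred, ys)
        if best is None or err < best[0]:
            best = (err, new_pred)
    if best is not None and best[0] < rmse(pred, ys):
        pred = best[1]              # greedily accept the best candidate

print(rmse(pred, ys) < 1.0)
```

Because a feature can be re-drawn and re-selected in a later iteration, an earlier coefficient can effectively be corrected, which is the "with replacement" property credited above for STAGE's advantage over FWLS.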
Overall, our experiments indicate that the combination with non-constant weights and feature induction (FEAT) outperforms three baselines: the best single method, standard combinations with constant weights, and the FWLS method using manually constructed features [26].

[Figure 3: Average weight values of each item (top row) and each user (bottom row), with the x axis sorted from high to low weight of the selected algorithm (left: algorithm 1, middle: algorithm 2, right: algorithm 8). Note that the sorting order is similar between algorithm 1 (User Average) and algorithm 2 (Item Average). In contrast, algorithm 8 (NLPMF) has the opposite order, and is weighted higher in different parts of the data than algorithms 1 and 2.]

We tested the hypothesis RMSE_FEAT < RMSE_FWLS with a paired t-test. Based on the p-values (see the last row of Table 2), we can reject the null hypothesis at the 99% significance level. We conclude that our proposed combination outperforms the state-of-the-art single methods as well as several previously proposed combination methods. To see how feature induction works in detail, we illustrate an example for the case where the user count and the item count both equal 1000. Figure 3 shows the average weight distribution that each user or item receives under three CF methods: user average, item average, and NLPMF. We focused on these three methods since they are frequently selected by the stagewise algorithm.
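The one-sided paired t-test used above can be reproduced with the standard library alone. The ten per-split RMSE values below are hypothetical placeholders, not the paper's numbers; the test statistic and the 1%-level critical value for 9 degrees of freedom are the only fixed quantities.

```python
# Hedged sketch: one-sided paired t-test of RMSE_FEAT < RMSE_FWLS over
# per-split RMSE values (illustrative values, not the paper's data).
import math
import statistics

rmse_fwls = [1.0288, 1.0301, 1.0275, 1.0290, 1.0296,
             1.0283, 1.0299, 1.0287, 1.0292, 1.0285]
rmse_feat = [0.9862, 0.9871, 0.9858, 0.9869, 0.9860,
             0.9874, 0.9855, 0.9866, 0.9863, 0.9859]

# Paired differences; positive values favor FEAT.
diffs = [a - b for a, b in zip(rmse_fwls, rmse_feat)]
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# One-sided critical value of Student's t with n-1 = 9 degrees of
# freedom at the 1% level is about 2.821; t above it rejects the null.
print(t > 2.821)
```

With real data one would feed in the ten RMSE values per column of Table 2; a library routine such as a paired t-test from a statistics package would give the exact p-values reported in the table's last row.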
The x-axis variables in the three panels are sorted in order of decreasing weight of the selected algorithm. Note that in each figure one curve is monotonically decaying, showing the weights of the CF method according to which the sorting was done. An interesting observation is that algorithm 1 (User Average) and algorithm 2 (Item Average) have a similar sorting order in Figure 3 (right column). In other words, these two algorithms are similar in nature, and are relatively stronger or weaker in similar regions of Ω. Algorithm 8 (NLPMF), on the other hand, has a very different relative strength pattern.

6.2 Trend Analysis

Figure 2 graphs the RMSE of the different combination methods as a function of the user count, item count, and density. We make the following observations.

• As expected, prediction accuracy for all combination methods and for the top single method improves with the user count, item count, and density.

• The performance gap between the best single algorithm and the combinations tends to decrease with larger user and item counts. This is a manifestation of the law of diminishing returns, and of the fact that the size of a suitable family H capturing locality information increases with the user and item counts. Thus, the stagewise procedure becomes more challenging computationally, and less accurate, since in our experiment we sampled the same number of compositions from H rather than increasing it for larger data.

• We note that all combination methods and the single best CF method improve performance as the density increases. The improvement seems to be most pronounced for the single best algorithm and for the FEAT method, indicating that FEAT scales up its performance aggressively with increasing density levels.

• Comparing the left and middle panels of Figure 2 implies that having more users is more informative than having more items.
In other words, for equal total dataset size M × N, performance tends to be better when M > N (left panel of Figure 2) than when M < N (middle panel of Figure 2).

6.3 Scalability

Our proposed stagewise algorithm is very efficient compared to other feature selection schemes such as stepwise or subset selection. Nevertheless, the large number of possible features may cause computational issues. In our experiments, we sampled from the space of candidate features a small subset that was considered for addition (a different random subset in each iteration of the stagewise algorithm). In the limit K → ∞, such a sampling scheme would recover the optimal ensemble, as each feature would be selected for consideration infinitely often. Our experiments show that this scheme also works well in practice, and results in significant improvement over the state-of-the-art even for a relatively small sample of feature candidates such as 200. Viewed from another perspective, this implies that randomly selecting such a small subset of features at each iteration suffices to select useful features. In fact, the features induced in this manner were found to be more useful than the manually crafted features in the FWLS algorithm [26].

7 Summary

We started from the observation that the relative performance of different candidate recommendation systems f(u, i) depends on u and i, for example on the activity level of user u and the popularity of item i. This motivated the development of combinations of recommendation systems with non-constant weights that emphasize different candidates based on their relative strengths in the feature space. In contrast to the FWLS method, which focused on manual construction of features, we developed a feature induction algorithm that works in conjunction with stagewise least-squares. We formulate a family of feature functions based on a discrete analog of triangular kernel smoothing.
This family captures a wide variety of local information and is thus able to model the relative strengths of the different CF methods and how they change across Ω. The combination with induced features outperformed each of the base candidates as well as other combination methods in the literature. This includes the recently proposed FWLS method, which uses manually constructed feature functions. As our candidates included many of the recently proposed state-of-the-art recommendation systems, our conclusions are significant for the engineering community as well as for recommendation system scientists.

References

[1] R. Bell, Y. Koren, and C. Volinsky. Modeling relationships at multiple scales to improve accuracy of large recommender systems. In Proc. of the ACM SIGKDD, 2007.
[2] P. Bennett. Neighborhood-based local sensitivity. In Proc. of the European Conference on Machine Learning, 2007.
[3] J. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Uncertainty in Artificial Intelligence, 1998.
[4] D. DeCoste. Collaborative prediction using ensembles of maximum margin matrix factorizations. In Proc. of the International Conference on Machine Learning, 2006.
[5] D. Heckerman, D. Maxwell Chickering, C. Meek, R. Rounthwaite, and C. Kadie. Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1, 2000.
[6] J. L. Herlocker, J. A. Konstan, A. Borchers, and J. Riedl. An algorithmic framework for performing collaborative filtering. In Proc. of ACM SIGIR Conference, 1999.
[7] M. Jahrer, A. Töscher, and R. Legenstein. Combining predictions for accurate recommender systems. In Proc. of the ACM SIGKDD, 2010.
[8] Y. Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proc. of the ACM SIGKDD, 2008.
[9] Y. Koren. Factor in the neighbors: Scalable and accurate collaborative filtering.
ACM Transactions on Knowledge Discovery from Data, 4(1):1–24, 2010.
[10] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[11] B. Lakshminarayanan, G. Bouchard, and C. Archambeau. Robust Bayesian matrix factorisation. In Proc. of the International Conference on Artificial Intelligence and Statistics, 2011.
[12] N. D. Lawrence and R. Urtasun. Non-linear matrix factorization with Gaussian processes. In Proc. of the International Conference on Machine Learning, 2009.
[13] D. Lee and H. Seung. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems, 2001.
[14] J. Lee, M. Sun, and G. Lebanon. A comparative study of collaborative filtering algorithms. ArXiv Report 1205.3193, 2012.
[15] J. Lee, M. Sun, and G. Lebanon. Prea: Personalized recommendation algorithms toolkit. Journal of Machine Learning Research, 13:2699–2703, 2012.
[16] D. Lemire and A. Maclachlan. Slope one predictors for online rating-based collaborative filtering. Society for Industrial Mathematics, 5:471–480, 2005.
[17] B. Marlin. Modeling user rating profiles for collaborative filtering. In Advances in Neural Information Processing Systems, 2004.
[18] C. J. Merz. Dynamical selection of learning algorithms. Lecture Notes in Statistics, pages 281–290, 1996.
[19] D. M. Pennock, E. Horvitz, S. Lawrence, and C. L. Giles. Collaborative filtering by personality diagnosis: A hybrid memory- and model-based approach. In Uncertainty in Artificial Intelligence, 2000.
[20] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proc. of the International Conference on Machine Learning, 2005.
[21] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. Grouplens: an open architecture for collaborative filtering of netnews. In Proc. of the Conference on CSCW, 1994.
[22] L. Reyzin and R. E. Schapire.
How boosting the margin can also boost classifier complexity. In Proc. of the International Conference on Machine Learning, 2006.
[23] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proc. of the International Conference on Machine Learning, 2008.
[24] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, 2008.
[25] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Item-based collaborative filtering recommendation algorithms. In Proc. of the International Conference on World Wide Web, 2001.
[26] J. Sill, G. Takacs, L. Mackey, and D. Lin. Feature-weighted linear stacking. ArXiv Report 0911.0460, 2009.
[27] X. Su, R. Greiner, T. M. Khoshgoftaar, and X. Zhu. Hybrid collaborative filtering algorithms using a mixture of experts. In Proc. of the IEEE/WIC/ACM International Conference on Web Intelligence, 2007.
[28] M. Sun, G. Lebanon, and P. Kidwell. Estimating probabilities in recommendation systems. In Proc. of the International Conference on Artificial Intelligence and Statistics, 2011.
[29] L. H. Ungar and D. P. Foster. Clustering methods for collaborative filtering. In AAAI Workshop on Recommendation Systems, 1998.
[30] M. P. Wand and M. C. Jones. Kernel Smoothing. Chapman and Hall/CRC, 1995.
[31] K. Woods, W. P. Kegelmeyer Jr, and K. Bowyer. Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):405–410, 1997.
[32] G. R. Xue, C. Lin, Q. Yang, W. S. Xi, H. J. Zeng, Y. Yu, and Z. Chen. Scalable collaborative filtering using cluster-based smoothing. In Proc. of ACM SIGIR Conference, 2005.
[33] K. Yu, S. Zhu, J. Lafferty, and Y. Gong. Fast nonparametric matrix factorization for large-scale collaborative filtering. In Proc. of ACM SIGIR Conference, 2009.
A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets

Nicolas Le Roux, SIERRA Project-Team, INRIA - ENS, Paris, France. nicolas@le-roux.name
Mark Schmidt, SIERRA Project-Team, INRIA - ENS, Paris, France. mark.schmidt@inria.fr
Francis Bach, SIERRA Project-Team, INRIA - ENS, Paris, France. francis.bach@ens.fr

Abstract

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.

1 Introduction

A plethora of the problems arising in machine learning involve computing an approximate minimizer of the sum of a loss function over a large number of training examples, where there is a large amount of redundancy between examples. The most widely successful class of algorithms for taking advantage of this type of problem structure are stochastic gradient (SG) methods [1, 2]. Although the theory behind SG methods allows them to be applied more generally, in the context of machine learning SG methods are typically used to solve the problem of optimizing a sample average over a finite training set, i.e.,

$$\mathop{\mathrm{minimize}}_{x \in \mathbb{R}^p} \quad g(x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x). \tag{1}$$

In this work, we focus on such finite training data problems where each $f_i$ is smooth and the average function $g$ is strongly convex. As an example, in the case of $\ell_2$-regularized logistic regression we have $f_i(x) := \frac{\lambda}{2}\|x\|^2 + \log(1 + \exp(-b_i a_i^\top x))$, where $a_i \in \mathbb{R}^p$ and $b_i \in \{-1, 1\}$ are the training examples associated with a binary classification problem and $\lambda$ is a regularization parameter.
More generally, any $\ell_2$-regularized empirical risk minimization problem of the form

$$\mathop{\mathrm{minimize}}_{x \in \mathbb{R}^p} \quad \frac{\lambda}{2}\|x\|^2 + \frac{1}{n} \sum_{i=1}^{n} l_i(x), \tag{2}$$

falls in the framework of (1) provided that the loss functions $l_i$ are convex and smooth. An extensive list of convex loss functions used in machine learning is given by [3], and we can even include non-smooth loss functions (or regularizers) by using smooth approximations. The standard full gradient (FG) method, which dates back to [4], uses iterations of the form

$$x^{k+1} = x^k - \alpha_k g'(x^k) = x^k - \frac{\alpha_k}{n} \sum_{i=1}^{n} f_i'(x^k). \tag{3}$$

Using $x^*$ to denote the unique minimizer of $g$, the FG method with a constant step size achieves a linear convergence rate: $g(x^k) - g(x^*) = O(\rho^k)$, for some $\rho < 1$ which depends on the condition number of $g$ [5, Theorem 2.1.15]. Linear convergence is also known as geometric or exponential convergence, because the cost is cut by a fixed fraction on each iteration. Despite the fast convergence rate of the FG method, it can be unappealing when $n$ is large because its iteration cost scales linearly in $n$. SG methods, on the other hand, have an iteration cost which is independent of $n$, making them well suited for that setting. The basic SG method for optimizing (1) uses iterations of the form

$$x^{k+1} = x^k - \alpha_k f_{i_k}'(x^k), \tag{4}$$

where $\alpha_k$ is a step size and a training example $i_k$ is selected uniformly from the set $\{1, \ldots, n\}$. The randomly chosen gradient $f_{i_k}'(x^k)$ yields an unbiased estimate of the true gradient $g'(x^k)$, and one can show under standard assumptions that, for a suitably chosen decreasing step-size sequence $\{\alpha_k\}$, the SG iterations achieve the sublinear convergence rate $\mathbb{E}[g(x^k)] - g(x^*) = O(1/k)$, where the expectation is taken with respect to the selection of the $i_k$ variables. Under certain assumptions this convergence rate is optimal for strongly-convex optimization in a model of computation where the algorithm only accesses the function through unbiased measurements of its objective and gradient (see [6, 7, 8]).
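The basic SG iteration (4), applied to the $\ell_2$-regularized logistic regression objective from the introduction, can be sketched in a few lines. The toy data, the classic $\alpha_k = 1/(\lambda k)$ step-size schedule, and all constants are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch of SG iteration (4) on l2-regularized logistic regression:
# f_i(x) = (lam/2)||x||^2 + log(1 + exp(-b_i a_i^T x)).
import math
import random

random.seed(1)
n, lam = 200, 0.1

# Toy 2-D binary classification data (an assumption for illustration).
A = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
b = [1 if a[0] + 0.5 * a[1] > 0 else -1 for a in A]

def grad_fi(x, i):
    # Gradient of f_i: lam*x - b_i*a_i / (1 + exp(b_i a_i^T x)).
    ai, bi = A[i], b[i]
    z = bi * sum(w * v for w, v in zip(x, ai))
    coef = -bi / (1.0 + math.exp(z))
    return [lam * w + coef * v for w, v in zip(x, ai)]

def g(x):
    # Average objective over the n training examples.
    total = 0.0
    for i in range(n):
        z = b[i] * sum(w * v for w, v in zip(x, A[i]))
        total += lam / 2 * sum(w * w for w in x) + math.log1p(math.exp(-z))
    return total / n

x = [0.0, 0.0]
for k in range(1, 5001):
    i = random.randrange(n)          # uniform sample, as in (4)
    alpha = 1.0 / (lam * k)          # classic decreasing O(1/k) schedule
    gi = grad_fi(x, i)
    x = [w - alpha * gval for w, gval in zip(x, gi)]

print(g(x) < g([0.0, 0.0]))          # objective decreased from the origin
```

Each iteration touches a single example, which is exactly the $O(1)$-in-$n$ iteration cost that makes SG attractive, and the decreasing step size is what limits the method to the sublinear $O(1/k)$ rate discussed above.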
Thus, we cannot hope to obtain a better convergence rate if the algorithm only relies on unbiased gradient measurements. Nevertheless, by using the stronger assumption that the functions are sampled from a finite dataset, in this paper we show that we can achieve an exponential convergence rate while preserving the iteration cost of SG methods. The primary contribution of this work is the analysis of a new algorithm that we call the stochastic average gradient (SAG) method, a randomized variant of the incremental aggregated gradient (IAG) method of [9], which combines the low iteration cost of SG methods with a linear convergence rate as in FG methods. The SAG method uses iterations of the form

$$x^{k+1} = x^k - \frac{\alpha_k}{n} \sum_{i=1}^{n} y_i^k, \tag{5}$$

where at each iteration a random training example $i_k$ is selected and we set

$$y_i^k = \begin{cases} f_i'(x^k) & \text{if } i = i_k, \\ y_i^{k-1} & \text{otherwise.} \end{cases}$$

That is, like the FG method, the step incorporates a gradient with respect to each training example. But, like the SG method, each iteration only computes the gradient with respect to a single training example and the cost of the iterations is independent of $n$. Despite the low cost of the SAG iterations, in this paper we show that the SAG iterations have a linear convergence rate, like the FG method. That is, by having access to $i_k$ and by keeping a memory of the most recent gradient value computed for each training example $i$, this iteration achieves a faster convergence rate than is possible for standard SG methods. Further, in terms of effective passes through the data, we also show that for certain problems the convergence rate of SAG is faster than is possible for the standard FG method. In a machine learning context where $g(x)$ is a training cost associated with a predictor parameterized by $x$, we are often ultimately interested in the testing cost, the expected loss on unseen data points.
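The SAG update (5) above can be implemented so that each iteration still touches only one example: keep a running sum of the stored $y_i$ and adjust it when one entry is refreshed. The sketch below is a hedged illustration on a tiny 1-D least-squares problem (so $g$ is strongly convex and the solution is known); the quadratic $f_i$, the problem sizes, and the iteration count are assumptions, not the paper's experiments.

```python
# Hedged sketch of the SAG iteration (5) with a gradient memory y_i and a
# cheaply maintained running sum, on f_i(x) = 0.5*(a_i*x - b_i)^2 in 1-D.
import random

random.seed(0)
n = 50
a = [random.uniform(0.5, 1.5) for _ in range(n)]
bt = [2.0 * ai for ai in a]              # exact solution is x* = 2

y = [0.0] * n                            # memory: last gradient per example
ysum = 0.0                               # running sum of the stored y_i
x = 0.0
L = max(ai * ai for ai in a)             # Lipschitz constant of the f_i'
alpha = 1.0 / (2 * n * L)                # constant step size (Proposition 1)

for k in range(20000):
    i = random.randrange(n)              # sample one training example
    gi = a[i] * (a[i] * x - bt[i])       # fresh gradient f_i'(x)
    ysum += gi - y[i]                    # O(1) update of the sum
    y[i] = gi
    x -= alpha * ysum / n                # step along the average of all y_i

print(abs(x - 2.0) < 1e-2)
```

The key difference from plain SG is the last line: the step uses the average of the stored gradients of all $n$ examples, even though only one of them was recomputed, which is what allows a constant step size and the linear rate analyzed in Section 3.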
Note that a linear convergence rate for the training cost does not translate into a similar rate for the testing cost, and an appealing property of SG methods is that they achieve the optimal O(1/k) rate for the testing cost as long as every datapoint is seen only once. However, as is common in machine learning, we assume that we are only given a finite training data set and thus that datapoints are revisited multiple times. In this context, the analysis of SG methods only applies to the training cost and, although our analysis also focuses on the training cost, in our experiments the SAG method typically reached the optimal testing cost faster than both FG and SG methods. The next section reviews closely-related algorithms from the literature, including previous attempts to combine the appealing aspects of FG and SG methods. However, despite 60 years of extensive research on SG methods, with most of the applications focusing on finite datasets, we are not aware of any other SG method that achieves a linear convergence rate while preserving the iteration cost of standard SG methods. Section 3 states the (standard) assumptions underlying our analysis and gives the main technical results; we first give a slow linear convergence rate that applies for any problem, and then give a very fast linear convergence rate that applies when $n$ is sufficiently large. Section 4 discusses practical implementation issues, including how to reduce the storage cost from $O(np)$ to $O(n)$ when each $f_i$ only depends on a linear combination of $x$. Section 5 presents a numerical comparison of an implementation based on SAG to SG and FG methods, indicating that the method may be very useful for problems where we can only afford to do a few passes through a data set.

2 Related Work

There is a large variety of approaches available to accelerate the convergence of SG methods, and a full review of this immense literature would be outside the scope of this work.
Below, we comment on the relationships between the new method and several of the most closely-related ideas.

Momentum: SG methods that incorporate a momentum term use iterations of the form

$$x^{k+1} = x^k - \alpha_k f_{i_k}'(x^k) + \beta_k(x^k - x^{k-1}),$$

see [10]. It is common to set all $\beta_k = \beta$ for some constant $\beta$, and in this case we can rewrite the SG with momentum method as

$$x^{k+1} = x^k - \sum_{j=1}^{k} \alpha_j \beta^{k-j} f_{i_j}'(x^j).$$

We can re-write the SAG updates (5) in a similar form as

$$x^{k+1} = x^k - \sum_{j=1}^{k} \alpha_k S(j, i_{1:k}) f_{i_j}'(x^j), \tag{6}$$

where the selection function $S(j, i_{1:k})$ is equal to $1/n$ if $j$ corresponds to the last iteration where $j = i_k$ and is set to 0 otherwise. Thus, momentum uses a geometric weighting of previous gradients while the SAG iterations select and average the most recent evaluation of each previous gradient. While momentum can lead to improved practical performance, it still requires the use of a decreasing sequence of step sizes and is not known to lead to a faster convergence rate.

Gradient Averaging: Closely related to momentum is using the sample average of all previous gradients,

$$x^{k+1} = x^k - \frac{\alpha_k}{k} \sum_{j=1}^{k} f_{i_j}'(x^j),$$

which is similar to the SAG iteration in the form (5) but where all previous gradients are used. This approach is used in the dual averaging method [11] and, while this averaging procedure leads to convergence for a constant step size and can improve the constants in the convergence rate [12], it does not improve on the O(1/k) rate.

Iterate Averaging: Rather than averaging the gradients, some authors use the basic SG iteration but take an average over $x^k$ values. With a suitable choice of step sizes, this gives the same asymptotic efficiency as Newton-like second-order SG methods and also leads to increased robustness of the convergence rate to the exact sequence of step sizes [13]. Baher's method [14, §1.3.4] combines gradient averaging with online iterate averaging, and also displays appealing asymptotic properties.
The epoch SG method uses averaging to obtain the O(1/k) rate even for non-smooth objectives [15]. However, the convergence rates of these averaging methods remain sublinear. Stochastic versions of FG methods: Various options are available to accelerate the convergence of the FG method for smooth functions, such as the accelerated full gradient (AFG) method of Nesterov [16], as well as classical techniques based on quadratic approximations such as non-linear conjugate gradient, quasi-Newton, and Hessian-free Newton methods. Several authors have analyzed stochastic variants of these algorithms [17, 18, 19, 20, 12]. Under certain conditions these variants are convergent with an O(1/k) rate [18]. Alternately, if we split the convergence rate into a deterministic and stochastic part, these methods can improve the dependency on the deterministic part [19, 12]. However, as with all other methods we have discussed thus far in this section, we are not aware of any existing method of this flavor that improves on the O(1/k) rate. Constant step size: If the SG iterations are used with a constant step size (rather than a decreasing sequence), then the convergence rate of the method can be split into two parts [21, Proposition 2.4], where the first part depends on k and converges linearly to 0 and the second part is independent of k but does not converge to 0. Thus, with a constant step size the SG iterations have a linear convergence rate up to some tolerance, and in general after this point the iterations do not make further progress. Indeed, convergence of the basic SG method with a constant step size has only been shown under extremely strong assumptions about the relationship between the functions fi [22]. This contrasts with the method we present in this work which converges to the optimal solution using a constant step size and does so with a linear rate (without additional assumptions). 
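The constant-step-size behaviour described above is easy to see empirically: plain SG with a fixed step size contracts quickly at first, then stalls at a noise floor determined by the gradient variance, rather than converging to $x^*$. The toy 1-D least-squares problem below is an assumption for illustration only.

```python
# Hedged illustration: constant-step-size SG converges linearly up to a
# tolerance, then stops making progress (stays in a noise ball around x*).
import random

random.seed(3)
n = 100
bt = [random.gauss(2.0, 1.0) for _ in range(n)]   # f_i(x) = 0.5*(x - b_i)^2
xstar = sum(bt) / n                                # minimizer of the average

x, alpha = 0.0, 0.1
errs = []
for k in range(5000):
    i = random.randrange(n)
    x -= alpha * (x - bt[i])                       # f_i'(x) = x - b_i
    if k >= 4000:                                  # record late iterations
        errs.append(abs(x - xstar))

floor = sum(errs) / len(errs)
print(1e-3 < floor < 1.0)    # stuck at a noise floor, not at x*
```

This is the phenomenon from [21, Proposition 2.4] quoted above: the part of the error depending on $k$ vanishes linearly, but the variance-driven part does not, which is precisely what the SAG memory is designed to remove.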
Accelerated methods: Accelerated SG methods, which despite their name are not related to the aforementioned AFG method, take advantage of the fast convergence rate of SG methods with a constant step size. In particular, accelerated SG methods use a constant step size by default, and only decrease the step size on iterations where the inner product between successive gradient estimates is negative [23, 24]. This leads to convergence of the method and allows it to potentially achieve periods of linear convergence where the step size stays constant. However, the overall convergence rate of the method remains sublinear.

Hybrid Methods: Some authors have proposed variants of the SG method for problems of the form (1) that seek to gradually transform the iterates into the FG method in order to achieve a linear convergence rate. Bertsekas proposes to go through the data cyclically with a specialized weighting that allows the method to achieve a linear convergence rate for strongly-convex quadratic functions [25]. However, the weighting is numerically unstable and the linear convergence rate treats full passes through the data as iterations. A related strategy is to group the $f_i$ functions into 'batches' of increasing size and perform SG iterations on the batches [26]. In both cases, the iterations that achieve the linear rate have a cost that is not independent of $n$, as opposed to SAG.

Incremental Aggregated Gradient: Finally, Blatt et al. present the most closely-related algorithm, the IAG method [9]. This method is identical to the SAG iteration (5), but uses a cyclic choice of $i_k$ rather than sampling the $i_k$ values. This distinction has several important consequences. In particular, Blatt et al. are only able to show that the convergence rate is linear for strongly-convex quadratic functions (without deriving an explicit rate), and their analysis treats full passes through the data as iterations.
Using a non-trivial extension of their analysis and a proof technique involving bounding the gradients and iterates simultaneously by a Lyapunov potential function, in this work we give an explicit linear convergence rate for general strongly-convex functions using the SAG iterations that only examine a single training example. Further, as our analysis and experiments show, when the number of training examples is sufficiently large, the SAG iterations achieve a linear convergence rate under a much larger set of step sizes than the IAG method. This leads to more robustness to the selection of the step size and also, if the step size is suitably chosen, to a faster convergence rate and improved practical performance. We also emphasize that in our experiments IAG and the basic FG method perform similarly, while SAG performs much better, showing that the simple change (random selection vs. cycling) can dramatically improve optimization performance.

3 Convergence Analysis

In our analysis we assume that each function $f_i$ in (1) is differentiable and that each gradient $f_i'$ is Lipschitz-continuous with constant $L$, meaning that for all $x$ and $y$ in $\mathbb{R}^p$ we have

$$\|f_i'(x) - f_i'(y)\| \le L \|x - y\|.$$

This is a fairly weak assumption on the $f_i$ functions, and in cases where the $f_i$ are twice-differentiable it is equivalent to saying that the eigenvalues of the Hessians of each $f_i$ are bounded above by $L$. In addition, we also assume that the average function $g = \frac{1}{n}\sum_{i=1}^{n} f_i$ is strongly convex with constant $\mu > 0$, meaning that the function $x \mapsto g(x) - \frac{\mu}{2}\|x\|^2$ is convex. This is a stronger assumption and is not satisfied by all machine learning models. However, note that in machine learning we are typically free to choose the regularizer, and we can always add an $\ell_2$-regularization term as in Eq. (2) to transform any convex problem into a strongly-convex problem (in this case we have $\mu \ge \lambda$).
Note that strong convexity implies that the problem is solvable, meaning that there exists some unique $x^*$ that achieves the optimal function value. Our convergence results assume that we initialize $y_i^0$ to a zero vector for all $i$, and our results depend on the variance of the gradient norms at the optimum $x^*$, denoted by $\sigma^2 = \frac{1}{n}\sum_i \|f_i'(x^*)\|^2$. Finally, all our convergence results consider expectations with respect to the internal randomization of the algorithm, and not with respect to the data (which are assumed to be deterministic and fixed). We first consider the convergence rate of the method when using a constant step size of $\alpha_k = \frac{1}{2nL}$, which is similar to the step size needed for convergence of the IAG method in practice.

Proposition 1. With a constant step size of $\alpha_k = \frac{1}{2nL}$, the SAG iterations satisfy for $k \ge 1$:

$$\mathbb{E}\left[\|x^k - x^*\|^2\right] \le \left(1 - \frac{\mu}{8Ln}\right)^k \left[3\|x^0 - x^*\|^2 + \frac{9\sigma^2}{4L^2}\right].$$

The proof is given in the supplementary material. Note that the SAG iterations also trivially obtain the O(1/k) rate achieved by SG methods, since

$$\left(1 - \frac{\mu}{8Ln}\right)^k \le \exp\left(-\frac{k\mu}{8Ln}\right) \le \frac{8Ln}{k\mu} = O(n/k),$$

albeit with a constant which is proportional to $n$. Despite this constant, they are advantageous over SG methods in later iterations because they obtain an exponential convergence rate as in FG methods. We also note that an exponential convergence rate is obtained for any constant step size smaller than $\frac{1}{2nL}$. In terms of passes through the data, the rate in Proposition 1 is similar to that achieved by the basic FG method. However, our next result shows that, if the number of training examples is slightly larger than $L/\mu$ (which will often be the case, as discussed in Section 6), then the SAG iterations can use a larger step size and obtain a better convergence rate that is independent of $\mu$ and $L$ (see proof in the supplementary material).
Proposition 2 If n ≥ 8L/µ, with a step size of α_k = 1/(2nµ) the SAG iterations satisfy for k ≥ n:

E[g(x^k) − g(x*)] ≤ C (1 − 1/(8n))^k, with C = (16L/(3n)) ∥x^0 − x*∥² + (4σ²/(3nµ)) (8 log(1 + µn/(4L)) + 1).

We state this result for k ≥ n because we assume that the first n iterations of the algorithm use an SG method and that we initialize the subsequent SAG iterations with the average of the iterates, which leads to an O((log n)/k) rate. In contrast, using the SAG iterations from the beginning gives the same rate but with a constant proportional to n. Note that this bound is obtained when initializing all y_i to zero after the SG phase.¹ However, in our experiments we do not use the SG initialization but rather use a minor variant of SAG (discussed in the next section), which appears more difficult to analyze but which gives better performance. It is interesting to compare this convergence rate with the known convergence rates of first-order methods [5, see §2]. For example, if we take n = 100000, L = 100, and µ = 0.01, then the basic FG method has a rate of ((L − µ)/(L + µ))² = 0.9996 and the ‘optimal’ AFG method has a faster rate of (1 − √(µ/L)) = 0.9900. In contrast, running n iterations of SAG has a much faster rate of (1 − 1/(8n))^n = 0.8825 using the same number of evaluations of f_i'. Further, the lower bound for a black-box first-order method is ((√L − √µ)/(√L + √µ))² = 0.9608, indicating that SAG can be substantially faster than any FG method that does not use the structure of the problem.² In the supplementary material, we compare Propositions 1 and 2 to the rates of primal and dual FG and coordinate-wise methods for the special case of ℓ2-regularized least squares. Even though n appears in the convergence rate, if we perform n iterations of SAG (i.e., one effective pass through the data), the error is multiplied by (1 − 1/(8n))^n ≤ exp(−1/8), which is independent of n.
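The numerical rate comparison above is easy to verify; a quick check of the arithmetic, using the constants from the text:

```python
import math

n, L, mu = 100000, 100.0, 0.01

fg  = ((L - mu) / (L + mu)) ** 2                     # basic FG method
afg = 1 - math.sqrt(mu / L)                          # 'optimal' AFG method
sag = (1 - 1 / (8 * n)) ** n                         # n SAG iterations
lb  = ((math.sqrt(L) - math.sqrt(mu))
       / (math.sqrt(L) + math.sqrt(mu))) ** 2        # black-box lower bound
```

Rounding to four digits recovers the rates 0.9996, 0.9900, 0.8825, and 0.9608 quoted in the text.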
Thus, each pass through the data reduces the excess cost by a constant multiplicative factor that is independent of the problem, as long as n ≥ 8L/µ. Further, while the step size in Proposition 2 depends on µ and n, we can obtain the same convergence rate by using a step size as large as α_k = 1/(16L). This is because the proposition is true for all values of µ satisfying µ/L ≥ 8/n, so we can choose the smallest possible value of µ = 8L/n. We have observed in practice that the IAG method with a step size of α_k = 1/(2nµ) may diverge, even under these assumptions. Thus, for certain problems the SAG iterations can tolerate a much larger step size, which leads to increased robustness to the selection of the step size. Further, as our analysis and experiments indicate, the ability to use a large step size leads to improved performance of the SAG iterations. While we have stated Proposition 1 in terms of the iterates and Proposition 2 in terms of the function values, the rates obtained on iterates and function values are equivalent because, by the Lipschitz and strong-convexity assumptions, we have (µ/2)∥x^k − x*∥² ≤ g(x^k) − g(x*) ≤ (L/2)∥x^k − x*∥².

4 Implementation Details

In this section we describe modifications that substantially reduce the SAG iteration’s memory requirements, as well as modifications that lead to better practical performance.

Structured gradients: For many problems the storage cost of O(np) for the y_i^k vectors is prohibitive, but we can often use structure in the f_i' to reduce this cost. For example, many loss functions f_i take the form f_i(a_i^T x) for a vector a_i. Since a_i is constant, for these problems we only

¹ While it may appear suboptimal to not use the gradients computed during the n iterations of stochastic gradient descent, using them only improves the bound by a constant.
² Note that L in the SAG rates is based on the f_i' functions, while in the FG methods it is based on g', which can be much smaller.
need to store the scalar f_ik'(u_ik^k) for u_ik^k = a_ik^T x^k rather than the full gradient a_i f_i'(u_i^k), reducing the storage cost to O(n). Further, because of the simple form of the SAG updates, if a_i is sparse we can use ‘lazy updates’ in order to reduce the iteration cost from O(p) down to the sparsity level of a_i.

Mini-batches: To employ vectorization and parallelism, practical SG implementations often group training examples into ‘mini-batches’ and perform SG iterations on the mini-batches. We can also use mini-batches within the SAG iterations, and for problems with dense gradients this decreases the storage requirements of the algorithm since we only need a y_i^k for each mini-batch. Thus, for example, using mini-batches of size 100 leads to a 100-fold reduction in the storage cost.

Step-size re-weighting: On early iterations of the SAG algorithm, when most y_i^k are set to the uninformative zero vector, rather than dividing α_k in (5) by n we found it was more effective to divide by m, the number of unique i_k values that we have sampled so far (which converges to n). This modification appears more difficult to analyze, but with this modification we found that the SAG algorithm outperformed the SG/SAG hybrid algorithm analyzed in Proposition 2.

Exact regularization: For regularized objectives like (2) we can use the exact gradient of the regularizer, rather than approximating it. For example, our experiments on ℓ2-regularized optimization problems used the recursion

d ← d − y_i, y_i ← l_i'(x^k), d ← d + y_i, x ← (1 − αλ)x − (α/m)d. (7)

This can be implemented efficiently for sparse data sets by using the representation x = κz, where κ is a scalar and z is a vector, since the update based on the regularizer simply updates κ.

Large step sizes: Proposition 1 requires α_k ≤ 1/(2nL), while under an additional assumption Proposition 2 allows α_k ≤ 1/(16L). In practice we observed better performance using step sizes of α_k = 1/L and α_k = 2/(L + nµ).
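A one-dimensional sketch of the exact-regularization recursion (7); the names are ours, m is fixed at n for simplicity, and the stored gradients are initialized by one pass over the data:

```python
import random

def sag_l2_step(x, d, y, i, grad_loss, alpha, lam, m):
    """Recursion (7): d tracks the sum of stored LOSS gradients only,
    while the l2 regularizer is applied exactly via the shrinkage factor."""
    d -= y[i]
    y[i] = grad_loss(i, x)
    d += y[i]
    x = (1 - alpha * lam) * x - (alpha / m) * d
    return x, d

a = [1.0, 2.0, 3.0, 6.0]
n, lam = len(a), 0.1
alpha = 1.0 / (2 * n * 1.0)                 # 1/(2nL) with L = 1
grad_loss = lambda i, x: x - a[i]           # gradient of the loss term only
y = [grad_loss(i, 0.0) for i in range(n)]   # stored gradients at x0 = 0
d, x = sum(y), 0.0
rng = random.Random(1)
for _ in range(5000):
    x, d = sag_l2_step(x, d, y, rng.randrange(n), grad_loss, alpha, lam, n)

x_star = (sum(a) / n) / (1 + lam)  # minimizer of (1/n)sum_i f_i + (lam/2)x^2
```

For sparse data, the shrinkage x ← (1 − αλ)x can be folded into the scalar κ of the representation x = κz, so the regularizer update costs O(1) regardless of the dimension.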
These step sizes seem to work even when the additional assumption of Proposition 2 is not satisfied, and we conjecture that the convergence rates under these step sizes are much faster than the rate obtained in Proposition 1 for the general case.

Line search: Since L is generally not known, we experimented with a basic line-search, where we start with an initial estimate L_0, and we double this estimate whenever we do not satisfy the instantiated Lipschitz inequality

f_ik(x^k − (1/L_k) f_ik'(x^k)) ≤ f_ik(x^k) − (1/(2L_k)) ∥f_ik'(x^k)∥².

To avoid instability caused by comparing very small numbers, we only do this test when ∥f_ik'(x^k)∥² > 10⁻⁸. To allow the algorithm to potentially achieve a faster rate due to a higher degree of local smoothness, we multiply L_k by 2^(−1/n) after each iteration.

5 Experimental Results

Our experiments compared an extensive variety of competitive FG and SG methods. In the supplementary material we compare to the IAG method and an extensive variety of SG methods, and we allow these competing methods to choose the best step size in hindsight. However, our experiments in the main paper focus on the following methods, which we chose because they have no dataset-dependent tuning parameters:

– Steepest: The full gradient method described by iteration (3), with a line-search that uses cubic Hermite polynomial interpolation to find a step size satisfying the strong Wolfe conditions, and where the parameters of the line-search were tuned for the problems at hand.

– AFG: Nesterov’s accelerated full gradient method [16], where iterations of (3) with a fixed step size are interleaved with an extrapolation step, and we use an adaptive line-search based on [27].

– L-BFGS: A publicly-available limited-memory quasi-Newton method that has been tuned for log-linear models.³ This method is by far the most complicated method we considered.
– Pegasos: The state-of-the-art SG method described by iteration (4) with a step size of α_k = 1/(µk) and a projection step onto a norm-ball known to contain the optimal solution [28].

– RDA: The regularized dual averaging method of [12], another recent state-of-the-art SG method.

– ESG: The epoch SG method of [15], which runs SG with a constant step size and averaging in a series of epochs, and is optimal for non-smooth stochastic strongly-convex optimization.

– NOSG: The nearly-optimal SG method of [19], which combines ideas from SG and AFG methods to obtain a nearly-optimal dependency on a variety of problem-dependent constants.

– SAG: The proposed stochastic average gradient method described by iteration (5) using the modifications discussed in the previous section.

³ http://www.di.ens.fr/~mschmidt/Software/minFunc.html

[Figure 1: Comparison of optimization strategies for ℓ2-regularized logistic regression. Top: training excess cost. Bottom: testing cost. From left to right are the results on the protein, rcv1 and covertype data sets. This figure is best viewed in colour.]
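The doubling line-search described in Section 4 can be sketched as follows (a one-dimensional sketch with our own names; the toy function f_i(x) = 2x² has true Lipschitz constant 4):

```python
def estimate_L(f_i, grad_i, x, L0=1.0, n=100):
    """Double L_k until the instantiated Lipschitz inequality
    f_i(x - g/L_k) <= f_i(x) - ||g||^2 / (2 L_k) holds, skipping the test
    for tiny gradients, then decay by 2**(-1/n) to track local smoothness."""
    g = grad_i(x)
    g2 = g * g                       # ||g||^2 in one dimension
    Lk = L0
    if g2 > 1e-8:                    # avoid comparing very small numbers
        while f_i(x - g / Lk) > f_i(x) - g2 / (2 * Lk):
            Lk *= 2
    return Lk * 2 ** (-1.0 / n)

Lk = estimate_L(lambda x: 2 * x * x, lambda x: 4 * x, x=1.0)
```

Starting from L_0 = 1, the doubling stops at L_k = 4 for this quadratic, and the 2^(−1/n) decay nudges the estimate slightly below it.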
We used a step size of α_k = 2/(L_k + nλ), where L_k is either set constant to the global Lipschitz constant (SAG-C) or set by adaptively estimating the constant with respect to the logistic loss function using the line-search described in the previous section (SAG-LS). The SAG-LS method was initialized with L_0 = 1. In all the experiments, we measure the training and testing costs as a function of the number of effective passes through the data, measured as the number of f_i' evaluations divided by n. These results are thus independent of the practical implementation of the algorithms. The theoretical convergence rates suggest the following strategies for deciding on whether to use an FG or an SG method:

1. If we can only afford one pass through the data, then an SG method should be used.
2. If we can afford to do many passes through the data (say, several hundred), then an FG method should be used.

We expect that the SAG iterations will be most useful between these two extremes, where we can afford to do more than one pass through the data but cannot afford to do enough passes to warrant using FG algorithms like L-BFGS. To test whether this is indeed the case on real data sets, we performed experiments on a set of freely available benchmark binary classification data sets. The protein (n = 145751, p = 74) data set was obtained from the KDD Cup 2004 website,⁴ while the rcv1 (n = 20242, p = 47236) and covertype (n = 581012, p = 54) data sets were obtained from the LIBSVM data website.⁵ Although our method can be applied to any differentiable function, on these data sets we focus on an ℓ2-regularized logistic regression problem, with λ = 1/n. We split each dataset in two, training on one half and testing on the other half. We added a (regularized) bias term to all data sets, and for dense features we standardized so that they would have a mean of zero and a variance of one.
We plot the training and testing costs of the different methods for 30 effective passes through the data in Figure 1. In the supplementary material, we present additional experimental results including the test classification accuracy and results on different data sets. We can observe several trends across the experiments from both the main paper and the supplementary material.

⁴ http://osmot.cs.cornell.edu/kddcup
⁵ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets

– FG vs. SG: Although the performance of SG methods can be catastrophic if the step size is not chosen carefully (e.g., the covertype data), with a carefully-chosen step size the SG methods do substantially better than FG methods on the first few passes through the data (e.g., the rcv1 data). In contrast, FG methods are not sensitive to the step size and because of their steady progress we also see that FG methods slowly catch up to the SG methods and eventually (or will eventually) pass them (e.g., the protein data).

– (FG and SG) vs. SAG: The SAG iterations seem to achieve the best of both worlds. They start out substantially better than FG methods, but continue to make steady (linear) progress which leads to better performance than SG methods. In some cases (protein and covertype), the significant speed-up observed for SAG in reaching low training costs also translates to reaching the optimal testing cost more quickly than the other methods.

– IAG vs. SAG: Our experiments (in the supplementary material) show that the IAG method performs similarly to the regular FG method, and they also show the surprising result that the randomized SAG method outperforms the closely-related deterministic IAG method by a very large margin. This is due to the larger step sizes used by the SAG iterations, which would cause IAG to diverge.

6 Discussion

Optimal regularization strength: One might wonder if the additional hypothesis in Proposition 2 is satisfied in practice.
In a learning context, where each function f_i is the loss associated to a single data point, L is equal to the largest value of the loss second derivative ξ (1 for the square loss, 1/4 for the logistic loss) times R², where R is the uniform bound on the norm of each data point. Thus, the constraint µ/L ≥ 8/n is satisfied when λ ≥ 8ξR²/n. In low-dimensional settings, the optimal regularization parameter is of the form C/n [29], where C is a scalar constant, and may thus violate the constraint. However, the improvement with respect to regularization parameters of the form λ = C/√n is known to be asymptotically negligible, and in any case in such low-dimensional settings, regular stochastic or batch gradient descent may be efficient enough in practice. In the more interesting high-dimensional settings where the dimension p of our covariates is not small compared to the sample size n, all theoretical analyses we are aware of advocate settings of λ which satisfy this constraint. For example, [30] considers parameters of the form λ = C/√n in the parametric setting, while [31] considers λ = C/n^β with β < 1 in a non-parametric setting.

Training cost vs. testing cost: The theoretical contribution of this work is limited to the convergence rate of the training cost. Though there are several settings where this is the metric of interest (e.g., variational inference in graphical models), in many cases one will be interested in the convergence speed of the testing cost. Since the O(1/k) convergence rate of the testing cost, achieved by SG methods with decreasing step sizes (and a single pass through the data), is provably optimal when the algorithm only accesses the function through unbiased measurements of the objective and its gradient, it is unlikely that one can obtain a linear convergence rate for the testing cost with the SAG iterations.
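Returning to the regularization-strength condition above: a small helper (ours, purely illustrative) makes the requirement λ ≥ 8ξR²/n concrete, with ξ = 1/4 for the logistic loss:

```python
import math

def n_needed(lam, R, xi=0.25):
    """Smallest n satisfying lam >= 8*xi*R^2/n, i.e. the Proposition 2
    condition mu/L >= 8/n with L = xi*R^2 and mu >= lam."""
    return math.ceil(8 * xi * R * R / lam)

# e.g. logistic loss, data norms bounded by R = 1, lam = 0.01
n_min = n_needed(0.01, 1.0)
```

So with λ = 0.01 and unit-norm logistic-loss data, the larger step size of Proposition 2 is justified once the training set has a couple of hundred examples.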
However, as shown in our experiments, the testing cost of the SAG iterates often reaches its minimum quicker than existing SG methods, and we could expect to improve the constant in the O(1/k) convergence rate, as is the case with online second-order methods [32].

Step-size selection and termination criteria: The three major disadvantages of SG methods are: (i) the slow convergence rate, (ii) deciding when to terminate the algorithm, and (iii) choosing the step size while running the algorithm. This paper showed that the SAG iterations achieve a much faster convergence rate, but the SAG iterations may also be advantageous in terms of tuning step sizes and designing termination criteria. In particular, the SAG iterations suggest a natural termination criterion; since the average of the y_i^k variables converges to g'(x^k) as ∥x^k − x^{k−1}∥ converges to zero, we can use (1/n)∥Σ_i y_i^k∥ as an approximation of the optimality of x^k. Further, while SG methods require specifying a sequence of step sizes and misspecifying this sequence can have a disastrous effect on the convergence rate [7, §2.1], our theory shows that the SAG iterations achieve a linear convergence rate for any sufficiently small constant step size and our experiments indicate that a simple line-search gives strong performance.

Acknowledgements

Nicolas Le Roux, Mark Schmidt, and Francis Bach are supported by the European Research Council (SIERRA-ERC-239993). Mark Schmidt is also supported by a postdoctoral fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC).

References

[1] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400–407, 1951.
[2] L. Bottou and Y. LeCun. Large scale online learning. NIPS, 2003.
[3] C. H. Teo, Q. Le, A. J. Smola, and S. V. N. Vishwanathan. A scalable modular convex solver for regularized risk minimization. KDD, 2007.
[4] M. A. Cauchy.
Méthode générale pour la résolution des systèmes d’équations simultanées. Comptes rendus des séances de l’Académie des sciences de Paris, 25:536–538, 1847.
[5] Y. Nesterov. Introductory lectures on convex optimization: A basic course. Springer, 2004.
[6] A. Nemirovski and D. B. Yudin. Problem complexity and method efficiency in optimization. Wiley, 1983.
[7] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[8] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5), 2012.
[9] D. Blatt, A. O. Hero, and H. Gauchman. A convergent incremental gradient method with a constant step size. SIAM Journal on Optimization, 18(1):29–51, 2007.
[10] P. Tseng. An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM Journal on Optimization, 8(2):506–531, 1998.
[11] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
[12] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543–2596, 2010.
[13] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
[14] H. J. Kushner and G. Yin. Stochastic approximation and recursive algorithms and applications. Springer-Verlag, second edition, 2003.
[15] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. COLT, 2011.
[16] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR, 269(3):543–547, 1983.
[17] N. N. Schraudolph.
Local gain adaptation in stochastic gradient descent. ICANN, 1999.
[18] P. Sunehag, J. Trumpf, S. V. N. Vishwanathan, and N. Schraudolph. Variable metric stochastic approximation theory. International Conference on Artificial Intelligence and Statistics, 2009.
[19] S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. Optimization Online, July 2010.
[20] J. Martens. Deep learning via Hessian-free optimization. ICML, 2010.
[21] A. Nedic and D. Bertsekas. Convergence rate of incremental subgradient algorithms. In Stochastic Optimization: Algorithms and Applications, pages 263–304. Kluwer Academic, 2000.
[22] M. V. Solodov. Incremental gradient algorithms with stepsizes bounded away from zero. Computational Optimization and Applications, 11(1):23–35, 1998.
[23] H. Kesten. Accelerated stochastic approximation. Annals of Mathematical Statistics, 29(1):41–59, 1958.
[24] B. Delyon and A. Juditsky. Accelerated stochastic approximation. SIAM Journal on Optimization, 3(4):868–881, 1993.
[25] D. P. Bertsekas. A new class of incremental gradient methods for least squares problems. SIAM Journal on Optimization, 7(4):913–926, 1997.
[26] M. P. Friedlander and M. Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal of Scientific Computing, 34(3):A1351–A1379, 2012.
[27] J. Liu, J. Chen, and J. Ye. Large-scale sparse logistic regression. KDD, 2009.
[28] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. ICML, 2007.
[29] P. Liang, F. Bach, and M. I. Jordan. Asymptotically optimal regularization in smooth parametric models. NIPS, 2009.
[30] K. Sridharan, S. Shalev-Shwartz, and N. Srebro. Fast rates for regularized objectives. NIPS, 2008.
[31] M. Eberts and I. Steinwart. Optimal learning rates for least squares SVMs using Gaussian kernels. NIPS, 2011.
[32] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. NIPS, 2007.
Monte Carlo Methods for Maximum Margin Supervised Topic Models

Qixia Jiang†‡, Jun Zhu†‡, Maosong Sun†, and Eric P. Xing∗
†Department of Computer Science & Technology, Tsinghua National TNList Lab,
†State Key Lab of Intelligent Tech. & Sys., Tsinghua University, Beijing 100084, China
∗School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
{qixia,dcszj,sms}@mail.tsinghua.edu.cn; epxing@cs.cmu.edu

Abstract

An effective strategy to exploit the supervising side information for discovering predictive topic representations is to impose discriminative constraints induced by such information on the posterior distributions under a topic model. This strategy has been adopted by a number of supervised topic models, such as MedLDA, which employs max-margin posterior constraints. However, unlike the likelihood-based supervised topic models, of which posterior inference can be carried out using the Bayes’ rule, the max-margin posterior constraints have made Monte Carlo methods infeasible or at least not directly applicable, thereby limiting the choice of inference algorithms to those based on variational approximation with strict mean-field assumptions. In this paper, we develop two efficient Monte Carlo methods under much weaker assumptions for max-margin supervised topic models based on an importance sampler and a collapsed Gibbs sampler, respectively, in a convex dual formulation. We report thorough experimental results that compare our approach favorably against existing alternatives in both accuracy and efficiency.

1 Introduction

Topic models, such as Latent Dirichlet Allocation (LDA) [3], have shown great promise in discovering latent semantic representations of large collections of text documents. In order to fit data better, LDA has been successfully extended in various ways. One notable extension is supervised topic models, which were developed to incorporate supervising side information for discovering predictive latent topic representations.
Representative methods include supervised LDA (sLDA) [2, 12], discriminative LDA (DiscLDA) [8], and max-entropy discrimination LDA (MedLDA) [16]. MedLDA differs from its counterpart supervised topic models by imposing discriminative constraints (i.e., max-margin constraints) directly on the desired posterior distributions, instead of defining a normalized likelihood model as in sLDA and DiscLDA. Such topic models with max-margin posterior constraints have shown superior performance in various settings [16, 14, 13, 9]. However, their constrained formulations, especially when using soft margin constraints for inseparable practical problems, make it infeasible or at least hard if possible at all¹ to directly apply Monte Carlo (MC) methods [10], which have been widely used in the posterior inference of likelihood-based models, such as the collapsed Gibbs sampling methods for LDA [5]. Previous inference methods for such models with max-margin posterior constraints have been exclusively based on variational methods [7], usually with a strict mean-field assumption. Although factorized variational methods often seek faster approximation solutions, they could be inaccurate or obtain too compact results [1].

∗‡ indicates equal contributions from these authors.
¹ Rejection sampling can be applied when the constraints are hard, e.g., for separable problems. But it would be inefficient when the sample space is large.

In this paper, we develop efficient Monte Carlo methods for max-margin supervised topic models, which we believe is crucial for a highly scalable implementation, and further performance enhancement of this class of models. Specifically, we first provide a new and equivalent formulation of MedLDA as a regularized Bayesian model with max-margin posterior constraints, based on Zellner’s interpretation of Bayes’ rule as a learning model [15] and the recent development of regularized Bayesian inference [17].
This interpretation is arguably more natural than the original formulation of MedLDA as a hybrid max-likelihood and max-margin learning, where the log-likelihood is approximated by a variational upper bound for computational tractability. Then, we deal with the set of soft max-margin constraints with convex duality methods and derive the optimal solutions of the desired posterior distributions. To effectively reduce the size of the sampling space, we develop two samplers, namely, an importance sampler and a collapsed Gibbs sampler [4, 1], with a much weaker assumption on the desired posterior distribution compared to the mean-field methods in [16]. We note that the work [11] presents a duality method to handle moment-matching constraints in maximum entropy models. Our work is an extension of their results to learn topic models, which have nontrivially structured latent variables and also use the general soft margin constraints.

2 Latent Dirichlet Allocation

LDA [3] is a hierarchical Bayesian model that posits each document as an admixture of K topics, where each topic Φ_k is a multinomial distribution over a V-word vocabulary. For document d, its topic proportion θ_d is a multinomial distribution drawn from a Dirichlet prior. Let w_d = {w_dn}_{n=1}^N denote the words appearing in document d. For the n-th word w_dn, a topic assignment z_dn = k is drawn from θ_d and w_dn is drawn from Φ_k. In short, the generative process of d is

θ_d ∼ Dir(α), z_dn = k ∼ Mult(θ_d), w_dn ∼ Mult(Φ_k), (1)

where Dir(·) is a Dirichlet and Mult(·) is a multinomial. For fully-Bayesian LDA, the topics are also random samples drawn from a Dirichlet prior, i.e., Φ_k ∼ Dir(β). Let W = {w_d}_{d=1}^D denote all the words in a corpus with D documents, and define z_d = {z_dn}_{n=1}^N, Z = {z_d}_{d=1}^D, Θ = {θ_d}_{d=1}^D. The goal of LDA is to infer the posterior distribution

p(Θ, Z, Φ | W, α, β) = p_0(Θ, Z, Φ | α, β) p(W | Θ, Z, Φ) / p(W | α, β).
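The generative process (1) is short enough to write out directly; a pure-Python sketch (helper names ours, with Dirichlet sampling via normalized Gamma draws):

```python
import random

def dirichlet(alpha, rng):
    """Sample from Dir(alpha) by normalizing independent Gamma draws."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def categorical(p, rng):
    """Draw an index k with probability p[k] (inverse-CDF on the simplex)."""
    u, c = rng.random(), 0.0
    for k, pk in enumerate(p):
        c += pk
        if u <= c:
            return k
    return len(p) - 1

def generate_doc(Phi, alpha, N, rng):
    """Process (1): theta_d ~ Dir(alpha); for each position n,
    z_dn ~ Mult(theta_d) picks a topic and w_dn ~ Mult(Phi[z_dn]) a word."""
    theta = dirichlet(alpha, rng)
    return [(z, categorical(Phi[z], rng))
            for z in (categorical(theta, rng) for _ in range(N))]

rng = random.Random(0)
Phi = [[0.7, 0.3, 0.0, 0.0],   # topic 0: mass on words 0 and 1 only
       [0.0, 0.0, 0.5, 0.5]]   # topic 1: mass on words 2 and 3 only
doc = generate_doc(Phi, alpha=[0.5, 0.5], N=50, rng=rng)
```

Because the two toy topics have disjoint supports, every generated word identifies its topic assignment, which makes the sketch easy to check.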
(2)

Since inferring the true posterior distribution is intractable, researchers must resort to variational [3] or Monte Carlo [5] approximate methods. Although both methods have shown success in various scenarios, they have complementary advantages. For example, variational methods (e.g., mean-field) can be generally more efficient, while MC methods can obtain more accurate estimates.

3 MedLDA: a supervised topic model with max-margin constraints

MedLDA extends LDA by integrating max-margin learning into the procedure of discovering latent topic representations, in order to learn latent representations that are good for predicting class labels or rating scores of a document. Empirically, MedLDA and its various extensions [14, 13, 9] have demonstrated promise in learning more discriminative topic representations. The original MedLDA was designed as a hybrid max-likelihood and max-margin learning, where the intractable log-likelihood is approximated by a variational bound. To derive our sampling methods, we present a new interpretation of MedLDA from the perspective of regularized Bayesian inference [17].

3.1 Bayesian inference as a learning model

As shown in Eq. (2), Bayesian inference is an information processing rule that projects the prior p_0 and empirical evidence to a post-data posterior distribution via the Bayes’ rule. It is the core for likelihood-based supervised topic models [2, 12]. A fresh interpretation of Bayesian inference was given by Zellner [15], which leads to our novel interpretation of MedLDA. Specifically, Zellner showed that the posterior distribution by Bayes’ rule is the solution of an optimization problem. For instance, the posterior p(Θ, Z, Φ | W) of LDA is equivalent to the optimum solution of

min_{p(Θ,Z,Φ)∈P} KL[p(Θ, Z, Φ) ∥ p_0(Θ, Z, Φ)] − E_p[log p(W | Θ, Z, Φ)], (3)

where KL(q∥p) is the Kullback-Leibler divergence from q to p, and P is the space of probability distributions. We will use L(p(Θ, Z, Φ)) to denote the objective function.
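Zellner's equivalence can be seen in two lines of algebra (our notation, abbreviating the latent variables): the objective in (3) equals a KL divergence to the Bayes posterior up to an additive constant,

```latex
L(p) = \mathrm{KL}\big[p \,\|\, p_0\big] - \mathbb{E}_p\big[\log p(W|\Theta,Z,\Phi)\big]
     = \mathbb{E}_p\!\left[\log \frac{p(\Theta,Z,\Phi)}{p_0(\Theta,Z,\Phi)\, p(W|\Theta,Z,\Phi)}\right]
     = \mathrm{KL}\!\left[p \,\Big\|\, \frac{p_0(\Theta,Z,\Phi)\, p(W|\Theta,Z,\Phi)}{p(W)}\right] - \log p(W).
```

Since −log p(W) does not depend on p, the minimizer over p ∈ P is exactly the Bayes posterior (2), where the remaining KL term vanishes.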
3.2 MedLDA: a regularized Bayesian model

For brevity, we consider the classification model. Let D = {(w_d, y_d)}_{d=1}^D be a given fully-labeled training set, where the response variable Y takes values from a finite set Y = {1, . . . , M}. MedLDA consists of two parts. The first part is an LDA likelihood model for describing input documents. As in previous work, we use the partial² likelihood model for W. The second part is a mechanism to consider the supervising signal. Since our goal is to discover latent representations Z that are good for classification, one natural solution is to connect Z directly to our ultimate goal. MedLDA obtains such a goal by building a classification model on Z. One good candidate for the classification model is the max-margin methods, which avoid defining a normalized likelihood model [12]. Formally, let η denote the parameters of the classification model. To make the model fully-Bayesian, we also treat η as random. Then, we want to infer the joint posterior distribution p(η, Θ, Z, Φ | D). For classification, MedLDA defines the following discrimination function

F(y, η, z; w) = η^T f(y, z̄), F(y; w) = E_{p(η,z|w)}[F(y, η, z; w)], (4)

where z̄ is a K-dim vector whose element z̄_k equals (1/N) Σ_{n=1}^N I(z_n = k), and I(x) is an indicator function which equals 1 when x is true and 0 otherwise; f(y, z̄) is an MK-dim vector whose elements from (y − 1)K to yK are z̄ and all others are zero; and η is an MK-dimensional vector concatenating M class-specific sub-vectors. With the above definitions, a natural prediction rule is

ŷ = argmax_y F(y; w), (5)

and we would like to “regularize” the properties of the latent topic representations to make them suitable for a classification task.
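A small sketch of the quantities in (4)–(5), with names of our own choosing: building z̄ from topic assignments, the MK-dimensional feature map f(y, z̄), and the prediction rule:

```python
def zbar(z, K):
    """Empirical topic proportions: zbar_k = (1/N) * #{n : z_n = k}."""
    v = [0.0] * K
    for zn in z:
        v[zn] += 1.0 / len(z)
    return v

def feat(y, zb, M):
    """MK-dim f(y, zbar): zbar placed in the y-th K-block (classes 1..M),
    zeros elsewhere."""
    K = len(zb)
    f = [0.0] * (M * K)
    f[(y - 1) * K:y * K] = zb
    return f

def predict(eta, zb, M):
    """Rule (5): argmax_y eta . f(y, zbar), i.e. the best class block."""
    K = len(zb)
    scores = [sum(eta[(y - 1) * K + k] * zb[k] for k in range(K))
              for y in range(1, M + 1)]
    return 1 + scores.index(max(scores))

zb = zbar([0, 0, 1], K=3)                 # three words, topics 0, 0, 1
eta = [1.0, 0.0, 0.0,  0.0, 0.0, 1.0]     # M = 2 class-specific sub-vectors
```

Because f(y, z̄) is block-sparse, the inner product η·f(y, z̄) only reads the y-th sub-vector of η, which is what the score in `predict` computes directly.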
One way to achieve that goal is to take the optimization view of Bayes’ theorem and impose the following max-margin constraints on problem (3):

F(y_d; w_d) − F(y; w_d) ≥ ℓ_d(y) − ξ_d, ∀y ∈ Y, ∀d, (6)

where ℓ_d(y) is a non-negative function that penalizes wrong predictions, and ξ = {ξ_d}_{d=1}^D are non-negative slack variables for inseparable cases. Let L(p) = KL(p ∥ p_0(η, Θ, Z, Φ)) − E_p[log p(W | Z, Φ)] and ∆f(y, z̄_d) = f(y_d, z̄_d) − f(y, z̄_d). Then, we define the soft-margin MedLDA as solving

min_{p(η,Θ,Z,Φ)∈P, ξ} L(p(η, Θ, Z, Φ)) + (C/D) Σ_{d=1}^D ξ_d
s.t.: E_p[η^T ∆f(y, z̄_d)] ≥ ℓ_d(y) − ξ_d, ξ_d ≥ 0, ∀d, ∀y, (7)

where the prior is p_0(η, Θ, Z, Φ) = p_0(η) p_0(Θ, Z, Φ). With the above discussions, we can see that MedLDA is an instance of regularized Bayesian models [17]. Also, problem (7) can be equivalently written as

min_{p(η,Θ,Z,Φ)∈P} L(p(η, Θ, Z, Φ)) + C R(p(η, Θ, Z, Φ)), (8)

where R = (1/D) Σ_d max_y (ℓ_d(y) − E_p[η^T ∆f(y, z̄_d)]) is the hinge loss, an upper bound of the prediction error on training data.

4 Monte Carlo methods for MedLDA

As in other variants of topic models, it is intractable to solve problem (7) or the equivalent problem (8) directly. Previous solutions resort to variational mean-field approximation methods. It is easy to show that the variational EM method in [16] is a coordinate descent algorithm to solve problem (7), with the additional fully-factorized mean-field constraint

p(η, Θ, Z, Φ) = p(η) (Π_d p(θ_d) Π_n p(z_dn)) Π_k p(Φ_k). (9)

Below, we present two MC sampling methods to solve the MedLDA problem, with much weaker constraints on p, and thus they could be expected to produce more accurate solutions. Specifically, we assume p(η, Θ, Z, Φ) = p(η) p(Θ, Z, Φ). Then, the general procedure is to alternately solve problem (8) by performing the following two steps.

² A full likelihood model on both W and Y can be defined as in [12]. But its normalization constant (a function of Z) could make the problem hard to solve.
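To illustrate how the hinge-loss term R in (8) acts on the classifier, here is a plain subgradient sketch over the posterior mean κ = E_p[η]; with a Gaussian choice p(η) = N(κ, I), the KL term reduces to ½∥κ∥². All names and the toy data are ours, ℓ_d(y) = I(y ≠ y_d), and a real implementation would instead use an off-the-shelf multi-class SVM solver:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def feat(y, zb, M):
    """MK-dim f(y, zbar): zbar in the y-th K-block (classes 1..M)."""
    K = len(zb)
    f = [0.0] * (M * K)
    f[(y - 1) * K:y * K] = zb
    return f

def train_kappa(zbars, labels, M, C, lr, steps):
    """Subgradient descent on 0.5*||kappa||^2 + (C/D) * sum_d max_y
    [ l_d(y) - kappa . (f(y_d, zb_d) - f(y, zb_d)) ]."""
    K, D = len(zbars[0]), len(zbars)
    kappa = [0.0] * (M * K)
    for _ in range(steps):
        grad = list(kappa)                   # gradient of 0.5*||kappa||^2
        for zb, yd in zip(zbars, labels):
            fyd = feat(yd, zb, M)
            # most violated label under cost-augmented scoring
            v, y_star = max(
                ((0.0 if y == yd else 1.0)
                 + dot(kappa, feat(y, zb, M)) - dot(kappa, fyd), y)
                for y in range(1, M + 1))
            if y_star != yd and v > 0:       # hinge active: add subgradient
                fy = feat(y_star, zb, M)
                for j in range(M * K):
                    grad[j] += (C / D) * (fy[j] - fyd[j])
        kappa = [k - lr * g for k, g in zip(kappa, grad)]
    return kappa

zbars = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]  # E[zbar_d], K = 2
labels = [1, 1, 2, 2]
kappa = train_kappa(zbars, labels, M=2, C=10.0, lr=0.1, steps=200)
preds = [max(range(1, 3), key=lambda y: dot(kappa, feat(y, zb, 2)))
         for zb in zbars]
```

On this separable toy data the learned κ classifies all documents correctly, mirroring how the dual multipliers of the max-margin constraints concentrate on boundary points.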
Estimate p(η): Given p(Θ, Z, Φ), the subproblem (in an equivalent constrained form) is to solve

min_{p(η), ξ}  KL(p(η) ∥ p_0(η)) + (C/D) ∑_{d=1}^D ξ_d
s.t.:  E_p[η]⊤Δf(y, E[z̄_d]) ≥ ℓ_d(y) − ξ_d,  ξ_d ≥ 0,  ∀d, ∀y.   (10)

Using Lagrangian methods with multipliers λ, we obtain the optimum posterior distribution

p(η) ∝ p_0(η) exp(η⊤ · ∑_{d=1}^D ∑_y λ_d^y Δf(y, E[z̄_d])).   (11)

For the prior p_0, for simplicity, we choose the standard normal prior, i.e., p_0(η) = N(0, I). In this case, p(η) = N(κ, I) and the dual problem is

max_λ  −(1/2) κ⊤κ + ∑_{d=1}^D ∑_y λ_d^y ℓ_d(y)
s.t.:  ∑_y λ_d^y ∈ [0, C/D],  ∀d,   (12)

where κ = ∑_{d=1}^D ∑_y λ_d^y Δf(y, E[z̄_d]). Note that κ is the posterior mean of the classifier parameters η, and the element κ_{yk} represents the contribution of topic k in classifying a data point into category y. This problem is the dual of a multi-class SVM [6], and we can solve it (or its primal form) efficiently using existing high-performance SVM learners. We denote the optimum solution of this problem by (p*(η), κ*, ξ*, λ*).

Estimate p(Θ, Z, Φ): Given p(η), the subproblem (in an equivalent constrained form) is to solve

min_{p(Θ,Z,Φ), ξ}  L(p(Θ, Z, Φ)) + (C/D) ∑_{d=1}^D ξ_d
s.t.:  (κ*)⊤Δf(y, E_p[z̄_d]) ≥ ℓ_d(y) − ξ_d,  ξ_d ≥ 0,  ∀d, ∀y.   (13)

Although in theory we could again solve this subproblem using Lagrangian dual methods, it would be hard (if possible at all) to derive the dual objective function. Here, we use the same strategy as in [16]: we update p(Θ, Z, Φ) for only one step with ξ fixed at ξ* (the optimum solution of the previous step). It is easy to show that, by fixing ξ at ξ*, we obtain the optimum solution

p(Θ, Z, Φ) ∝ p(W, Z, Θ, Φ) exp((κ*)⊤ ∑_{d,y} (λ_d^y)* Δf(y, z̄_d)).   (14)

The differences between MedLDA and LDA lie in the above posterior distribution. The first term is the same as the posterior of LDA (the evidence p(W) can be absorbed into the normalization constant). The second term captures the regularization effect of the max-margin posterior constraints, which is consistent with our intuition.
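Given the optimal multipliers of the dual (12), the posterior mean κ is a simple weighted sum of feature differences Δf(y, E[z̄_d]). The sketch below is our own illustration, using a hypothetical sparse representation (a dict of the non-zero multipliers, which the text later notes are very sparse in practice).

```python
import numpy as np

def posterior_mean_kappa(lambdas, zbar_exp, labels, M):
    """kappa = sum_d sum_y lambda_d^y * Delta f(y, E[zbar_d])  (below Eq. 12).
    lambdas: dict mapping (d, y) -> multiplier value (non-zeros only);
    zbar_exp: list of E[zbar_d] arrays; labels: 1-indexed true classes y_d."""
    K = len(zbar_exp[0])
    kappa = np.zeros(M * K)
    for (d, y), lam in lambdas.items():
        yd = labels[d]
        delta = np.zeros(M * K)
        delta[(yd - 1) * K : yd * K] += zbar_exp[d]  # f(y_d, zbar_d)
        delta[(y - 1) * K : y * K] -= zbar_exp[d]    # - f(y, zbar_d)
        kappa += lam * delta
    return kappa
```

Each block of κ of length K corresponds to one class, so κ_{yk} can be read off as `kappa[(y - 1) * K + k - 1]` in this 0-indexed layout.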
Specifically, for those data with non-zero Lagrange multipliers (i.e., data around the decision boundary or misclassified), the second term biases the model towards a new posterior distribution that favors more discriminative representations of these "hard" data points.

The remaining problem is how to efficiently draw samples from p(Θ, Z, Φ) and estimate the expectations E[z̄], which are needed in learning the classification model, as accurately as possible. Below, we present two representative samplers: an importance sampler and a collapsed Gibbs sampler.

4.1 Importance sampler

To avoid dealing with the intractable normalization constant of p(Θ, Z, Φ), one natural choice is importance sampling, which draws samples from a "simple" distribution and estimates the expectation as a weighted average over those samples. However, directly applying importance sampling to p(Θ, Z, Φ) may cause problems, since importance sampling suffers from severe limitations in large sample spaces. Alternatively, since the distribution p(Θ, Z, Φ) in Eq. (14) has the factorized form p(Θ, Z, Φ) = p_0(Θ, Φ)p(Z|Θ, Φ), another possible method is an ancestral sampling strategy: draw a sample (Θ̂, Φ̂) from p_0(Θ, Φ) and then draw samples from p(Z|Θ̂, Φ̂). Although it is easy to draw a sample from the Dirichlet prior p_0(Θ, Φ) = Dir(α)Dir(β), a large number of samples would be required to obtain a robust estimate of the expectations E[Z]. Below, we present one solution that reduces the sample space.

One feasible way to reduce the sample space is to collapse (Θ, Φ) out and directly draw samples from the marginal distribution p(Z). However, this introduces tight couplings between the elements of Z and makes the number of samples an importance sampler needs to estimate the expectation grow exponentially with the dimensionality of Z. A practical sampler for this collapsed distribution is a Markov chain, as we present in the next section.
Here, we propose to use the MAP estimate of (Θ, Φ) as their "single sample"3 and proceed to draw samples of Z. Specifically, given (Θ̂, Φ̂), we have the conditional distribution

p(Z|Θ̂, Φ̂) ∝ p(W, Z|Θ̂, Φ̂) exp((κ*)⊤ ∑_{d,y} (λ_d^y)* Δf(y, z̄_d)) = ∏_{d=1}^D ∏_{n=1}^{N_d} p(z_dn|θ̂_d, Φ̂),   (15)

where

p(z_dn = k|θ̂_d, Φ̂, w_dn = t) = (1/Z_dn) φ̂_kt θ̂_dk exp((1/N_d) ∑_y (λ_d^y)* (κ*_{y_d k} − κ*_{yk})),   (16)

Z_dn is a normalization constant, and κ*_{yk} is the [(y−1)K + k]-th element of κ*. The difference (κ*_{y_d k} − κ*_{yk}) represents the different contributions of topic k in classifying d into the true category y_d and into a wrong category y. If the difference is positive, topic k contributes to a correct prediction for d; otherwise, it contributes to a wrong prediction. Then, we draw J samples {z_dn^(j)}_{j=1}^J from a proposal distribution g(z) and compute the expectations

E[z̄_dk] = (1/N_d) ∑_{n=1}^{N_d} E[z_dn],  ∀z̄_dk ∈ z̄_d,  and  E[z_dn] ≈ ∑_{j=1}^J (γ_dn^j / ∑_{j'=1}^J γ_dn^{j'}) z_dn^(j),   (17)

where the importance weight γ_dn^j is

γ_dn^j = ∏_{k=1}^K ( (θ̂_dk φ̂_{k w_dn} / g(k)) exp((1/N_d) ∑_y (λ_d^y)* (κ*_{y_d k} − κ*_{yk})) )^{I(z_dn^(j) = k)}.   (18)

With the J samples, we update the MAP estimate (Θ̂, Φ̂):

θ̂_dk ∝ (1/J) ∑_{n=1}^{N_d} (∑_{j=1}^J γ_dn^j I(z_dn^(j) = k)) / (∑_{j=1}^J γ_dn^j) + α_k,
φ̂_kt ∝ (1/J) ∑_{d=1}^D ∑_{n=1}^{N_d} (∑_{j=1}^J γ_dn^j I(z_dn^(j) = k) I(w_dn = t)) / (∑_{j=1}^J γ_dn^j) + β_t.   (19)

The above two steps are repeated until convergence, with (Θ̂, Φ̂) initialized to be uniform, and the samples from the last iteration are used to estimate the expectation statistics needed in the problem of inferring p(η).

4.2 Collapsed Gibbs sampler

As stated above, another way to effectively reduce the sample space is to integrate out the intermediate variables (Θ, Φ) and build a Markov chain whose equilibrium distribution is the resulting marginal distribution p(Z). We propose to use collapsed Gibbs sampling, which has been successfully used for LDA [5].

3 This collapses the sample space of (Θ, Φ) to a single point.
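Before detailing the collapsed Gibbs sampler, the importance-sampling estimate of Sec. 4.1 can be sketched in code. This is our own illustrative sketch, not the authors' implementation: it uses a uniform proposal g (as in the experiments), self-normalized weights as in Eq. (17), and assumes the max-margin exponent of Eq. (16) has been precomputed into `margin_d` (zero for documents with zero Lagrange multipliers).

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_estimate_zdn(theta_d, phi_t, margin_d, J=1000):
    """Self-normalized importance estimate of E[z_dn] under Eq. (16).
    theta_d: current MAP theta_hat_d (length K); phi_t: column phi_hat[:, t]
    for the observed word t; margin_d[k] holds the precomputed exponent
    (1/N_d) * sum_y lambda* (kappa*_{y_d k} - kappa*_{y k})."""
    K = len(theta_d)
    target = theta_d * phi_t * np.exp(margin_d)   # unnormalized p(z_dn = k)
    samples = rng.integers(K, size=J)             # uniform proposal g(k) = 1/K
    weights = target[samples] / (1.0 / K)         # gamma_dn^j, Eq. (18)
    est = np.zeros(K)
    for k in range(K):
        est[k] = weights[samples == k].sum()
    return est / weights.sum()                    # normalized as in Eq. (17)
```

Since every weight lands in some topic bucket, the estimate is a proper distribution over topics, and it converges to the normalized target as J grows.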
For MedLDA, we integrate out (Θ, Φ) and obtain the marginalized posterior distribution

p(Z) = (p(W, Z|α, β)/Z_q) exp((κ*)⊤ ∑_d ∑_y (λ_d^y)* Δf(y, z̄_d))
     = (1/Z) [∏_{d=1}^D (δ(C_d + α)/δ(α)) exp((κ*)⊤ ∑_y (λ_d^y)* Δf(y, z̄_d))] [∏_{k=1}^K δ(C_k + β)/δ(β)],   (20)

where δ(x) = (∏_{i=1}^{dim(x)} Γ(x_i)) / Γ(∑_{i=1}^{dim(x)} x_i); C_k^t is the number of times term t is assigned to topic k over the whole corpus, with C_k = {C_k^t}_{t=1}^V; and C_d^k is the number of times terms are associated with topic k within the d-th document, with C_d = {C_d^k}_{k=1}^K. We can also derive the transition probability of one variable z_dn given the others, which we denote by Z_¬:

p(z_dn = k|Z_¬, W_¬, w_dn = t) ∝ ((C_{k,¬n}^t + β_t) / (∑_t C_{k,¬n}^t + ∑_{t=1}^V β_t)) (C_{d,¬n}^k + α_k) exp((1/N_d) ∑_y (λ_d^y)* (κ*_{y_d k} − κ*_{yk})),   (21)

where C_{·,¬n}^· indicates that term n is excluded from the corresponding document or topic. Again, we can see the difference between MedLDA and LDA (using collapsed Gibbs sampling) in the additional last term in Eq. (21), which is due to the max-margin posterior constraints. For those data on the margin or misclassified (with non-zero Lagrange multipliers), the last term is non-zero and acts as a regularizer directly affecting the topic assignments of these difficult data. We then use the transition distribution in Eq. (21) to construct a Markov chain. After this Markov chain has converged (i.e., finished the burn-in stage), we draw J samples {Z^(j)} and estimate the expectation statistics

E[z̄_dk] = (1/N_d) ∑_{n=1}^{N_d} E[z_dn],  ∀z̄_dk ∈ z̄_d,  and  E[z_dn] = (1/J) ∑_{j=1}^J z_dn^(j).   (22)

4.3 Prediction

To make predictions on unlabeled test data using the prediction rule (5), we take the approach adopted for the variational MedLDA, which uses a point estimate of the topics Φ from the training data and makes predictions based on it. Specifically, we use the MAP estimate Φ̂ to replace the probability distribution p(Φ). For the importance sampler, Φ̂ is computed as in Eq. (19).
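Returning to the collapsed Gibbs sampler: the transition probability in Eq. (21) is cheap to evaluate from the count matrices. The sketch below is our own illustration (variable names are ours; the max-margin exponent is assumed precomputed into `margin_d`, and it is zero for documents with zero Lagrange multipliers, in which case the update reduces to the standard LDA conditional).

```python
import numpy as np

def gibbs_conditional(Ctk_neg, Ckd_neg, t, alpha, beta, margin_d):
    """Normalized transition probabilities p(z_dn = k | Z_neg) from Eq. (21).
    Ctk_neg: K x V topic-term counts with token n removed;
    Ckd_neg: length-K doc-topic counts with token n removed;
    t: observed word index; margin_d[k]: max-margin term in the exponent."""
    term = (Ctk_neg[:, t] + beta[t]) / (Ctk_neg.sum(axis=1) + beta.sum())
    probs = term * (Ckd_neg + alpha) * np.exp(margin_d)
    return probs / probs.sum()
```

A single Gibbs sweep then removes token n from the counts, samples a new topic from this distribution, and adds the token back under the sampled topic.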
For the collapsed Gibbs sampler, an estimate of Φ̂ from the samples is φ̂_kt ∝ (1/J) ∑_{j=1}^J C_k^{t(j)} + β_t, where C_k^{t(j)} is the number of times term t is assigned to topic k in the j-th sample. Given a new document w to be predicted, for the importance sampler the importance weight is altered to γ_n^j = ∏_{k=1}^K (θ_k φ̂_{k w_n} / g(k))^{I(z_n^(j) = k)}; we then approximate the expectation of z as in Eq. (17). For the Gibbs sampler, we infer the latent components z using the obtained Φ̂ as p(z_n = k|z_¬n) ∝ φ̂_{k w_n} (C_¬n^k + α_k), where C_¬n^k is the number of times the terms in this document w are assigned to topic k, with the n-th term excluded. We then approximate E[z̄] as in Eq. (22).

5 Experiments

We empirically evaluate the importance sampler and the Gibbs sampler for MedLDA (denoted by iMedLDA and gMedLDA, respectively) on the 20 Newsgroups data set with a standard list of stop words4 removed. This data set contains about 20K postings in 20 groups. Due to space limitations, we focus on the multi-class setting. We use the cutting-plane algorithm [6] to solve the multi-class SVM that infers p(η) and solves for the Lagrange multipliers λ in MedLDA. For simplicity, we use the uniform proposal distribution g in iMedLDA. In this case, we can globally draw J (e.g., 3K) samples {Z^(j)}_{j=1}^J from g(z) outside the iteration loop and only update the importance weights, to save time. For gMedLDA, we keep J (e.g., 20) adjacent samples after gMedLDA has converged to estimate the expectation statistics. To be fair, we use the same C for the different MedLDA methods. The optimum C is chosen via 5-fold cross-validation during the training procedure of fMedLDA from {a^2 : a = 1, ..., 8}. We use symmetric Dirichlet priors for all LDA topic models, i.e., α = α e_K and β = β e_V, where e_n is an n-dimensional vector with every entry equal to 1.
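The point estimate of the topics used at prediction time (φ̂_kt ∝ (1/J) ∑_j C_k^{t(j)} + β_t, Sec. 4.3) amounts to averaging the sampled count matrices, smoothing by β, and normalizing each topic row. A sketch under our own naming:

```python
import numpy as np

def estimate_phi(count_samples, beta):
    """phi_hat_{kt} proportional to (1/J) * sum_j C_k^{t,(j)} + beta_t (Sec. 4.3).
    count_samples: J matrices of shape K x V with topic-term counts;
    beta: length-V smoothing vector, broadcast over the K topic rows."""
    avg = np.mean(count_samples, axis=0) + beta
    return avg / avg.sum(axis=1, keepdims=True)
```

Each row of the result is a distribution over the vocabulary, so it can be plugged directly into the per-token prediction rules of Sec. 4.3.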
We assess the convergence of a Markov chain when (1) it has run for a maximum number of iterations (e.g., 100), or (2) the relative change in its objective, i.e., |L_{t+1} − L_t| / |L_t|, is less than a tolerance threshold ϵ (e.g., ϵ = 10^{-4}). We use the same strategy to judge whether the overall inference algorithm has converged. We randomly select 7,505 documents from the whole set as the test set and use the rest as training data. We set the cost parameter ℓ_d(y) in problem (7) to 16, which produces better classification performance than the standard 0/1 cost [16]. To measure the sparsity of the latent representations of documents, we compute the average entropy over test documents: (1/|D_t|) ∑_{d∈D_t} H(θ_d). We also measure the sparsity of the inferred topic distributions Φ in terms of the average entropy over topics, i.e., (1/K) ∑_{k=1}^K H(Φ_k). All experiments are carried out on a PC with a 2.2GHz CPU and 3.6GB RAM. We report the mean and standard deviation for each model over 4 randomly initialized runs.

4 http://mallet.cs.umass.edu/

Figure 1: Performance of multi-class classification of different topic models with different topic numbers on the 20-Newsgroups data set: (a) classification accuracy, (b) the average entropy of Θ over test documents, and (c) the average entropy of topic distributions Φ.

5.1 Performance with different topic numbers

This section compares gMedLDA and iMedLDA with baseline methods. MedLDA was shown to outperform sLDA for document classification. Here, we focus on comparing the performance of MedLDA and LDA under different inference algorithms. Specifically, we compare with the
LDA model that uses collapsed Gibbs sampling [5] (denoted by gLDA) and the LDA model that uses fully-factorized variational methods [3] (denoted by fLDA). For the LDA models, we discover the latent representations of the training documents and use them to build a multi-class SVM classifier. For MedLDA, we report the results obtained with fully-factorized variational methods (denoted by fMedLDA) as in [16]. Furthermore, fMedLDA and fLDA optimize the hyper-parameter α using the Newton-Raphson method [3], while gMedLDA, iMedLDA and gLDA determine α by 5-fold cross-validation. We tested a wide range of values of β (e.g., 10^{-16} to 10^3) and found that the performance of iMedLDA degrades seriously when β is larger than 10^{-3}. Therefore, we set β to 10^{-5} for iMedLDA and to 0.01 for the other topic models, as in the literature [5].

Fig. 1(a) shows the accuracy. We can see that Monte Carlo methods generally outperform the fully-factorized mean-field methods, mainly because of their weaker factorization assumptions. The superior performance of iMedLDA over gMedLDA is probably because iMedLDA is more effective at dealing with sample sparsity issues. More insights are provided in Section 5.2.

Fig. 1(b) shows the average entropy of the latent representations Θ over test documents. We find that the entropies of gMedLDA and iMedLDA are smaller than those of gLDA and fLDA, especially for (relatively) large K. This implies that sampling methods for MedLDA can effectively concentrate the probability mass on just a few topics and thus discover more predictive topic representations. However, fMedLDA yields the smallest entropy, mainly because fully-factorized variational methods tend to produce overly compact results, e.g., sparse local optima.

Fig. 1(c) shows the average entropy of the topic distributions Φ. We can see that gMedLDA improves the sparsity of Φ over fMedLDA. However, gMedLDA's entropy is larger than gLDA's.
This is because, for those "hard" documents, the exponential component in Eq. (21) "regularizes" the conditional probability p(z_dn|Z_¬) and leads to a smoother estimate of Φ. On the other hand, we find that iMedLDA has the largest entropy. This is probably because many of the samples (topic assignments) generated by the proposal distribution are "incorrect", but the importance sampler still assigns weights to these samples. As a result, the inferred topic distributions are very dense and thus have a large entropy. Moreover, in the above experiments, we found that the Lagrange multipliers in MedLDA are very sparse (about 1% non-zeros for both iMedLDA and gMedLDA; about 1.5% for fMedLDA), much sparser than those of an SVM built on the raw input data (about 8% non-zeros).

5.2 Sensitivity analysis with respect to key parameters

Sensitivity to α. Fig. 2(a) shows the classification performance of gMedLDA and iMedLDA for different values of α. We can see that the performance of gMedLDA increases as α becomes larger and remains stable once α exceeds 0.1. In contrast, the accuracy of iMedLDA decreases a bit (especially for small K) when α becomes large, but is relatively stable when α is small (e.g., ≤ 0.01). This is probably because, with a finite number of samples, the Gibbs sampler tends to produce a too-sparse estimate of E[Z], and a slightly stronger prior helps deal with the sample sparsity issue. In contrast, the importance sampler avoids this sparsity issue by using a uniform proposal distribution, which makes the samples cover all topic dimensions well. Thus, a small prior is sufficient for good performance, and increasing the prior's strength can hurt.

Sensitivity to sample size J. For sampling methods, we always need to decide how many samples (sample size J) to keep to ensure sufficient statistical power. Fig. 2(b) shows the classification accuracy of both gMedLDA and iMedLDA for different sample sizes J when α = 10^{-2}/K and C = 16.
Figure 2: Sensitivity study of iMedLDA and gMedLDA: (a) classification accuracy with different α for different topic numbers, (b) classification accuracy with different sample sizes J, (c) classification accuracy with different convergence criteria ϵ for gMedLDA, and (d) classification accuracy of the different methods as a function of iterations when the topic number is 30.

For gMedLDA, we tested different values of J for training and prediction. We found that the sample size in the training process has almost no influence on the prediction accuracy, even when it equals 1. Hence, for efficiency, we set J to 1 during training. The figure shows that gMedLDA is relatively stable when J is larger than about 20 at prediction time. For iMedLDA, Fig. 2(b) shows that it becomes stable when the prediction sample size J is larger than 3K.

Sensitivity to convergence criterion ϵ. For gMedLDA, we have to judge whether a Markov chain has reached stationarity. The relative change in the objective is a commonly used convergence diagnostic, and we study the influence of ϵ. In this experiment, we do not bound the maximum number of iterations and allow the Gibbs sampler to run until the tolerance ϵ is reached. Fig. 2(c) shows the accuracy of gMedLDA for different values of ϵ. We can see that gMedLDA is relatively insensitive to ϵ. This is mainly because gMedLDA alternately updates the posterior distribution and the Lagrange multipliers.
Thus, it runs Gibbs sampling many times, which compensates for Markov chains that have not reached stationarity. On the other hand, small values of ϵ can greatly slow convergence. For instance, when the topic number is 90, gMedLDA takes 11,986 seconds for training when ϵ = 10^{-4} but 1,795 seconds when ϵ = 10^{-2}. These results imply that we can loosen the convergence criterion to speed up training while still obtaining a good model.

Sensitivity to iteration. Fig. 2(d) shows the classification accuracy of MedLDA with the various inference methods as a function of iteration when the topic number is set to 30. We can see that all the MedLDA variants converge quite quickly to good accuracy. Compared to fMedLDA, which uses mean-field variational inference, the two MedLDA models using Monte Carlo methods (i.e., iMedLDA and gMedLDA) reach stable prediction performance slightly faster.

5.3 Time efficiency

Figure 3: Training time.

Although gMedLDA can obtain good results even under a loose convergence criterion ϵ, as discussed in Sec. 5.2, we set ϵ to 10^{-4} for all methods in order to get a more objective comparison. Fig. 3 reports the total training time of the different models, which includes two phases: inferring the latent topic representations and training the SVMs. We find that iMedLDA is the most efficient, which benefits from (1) generating samples outside the iteration loop and using them for all iterations; and (2) using the MAP estimates to collapse the sample space of (Θ, Φ) to a "single sample". In contrast, both gMedLDA and fMedLDA have to iteratively update the variables or variational parameters. gMedLDA requires more time than fMedLDA but is comparable when ϵ is set to 0.01. Under the equivalent 1-slack formulation, about 76% of the training time is spent on inference for iMedLDA and 90% for gMedLDA.
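The stopping rule used throughout these experiments (relative objective change below ϵ, capped at a maximum number of iterations) can be sketched in a few lines; the function name and argument layout below are our own, with the defaults taken from the text.

```python
def converged(objectives, tol=1e-4, max_iter=100):
    """Stopping rule from Sec. 5: stop when |L_{t+1} - L_t| / |L_t| < tol,
    or after max_iter recorded iterations. objectives: list of objective
    values, one per iteration, most recent last."""
    if len(objectives) >= max_iter:
        return True
    if len(objectives) < 2:
        return False
    prev, curr = objectives[-2], objectives[-1]
    return abs(curr - prev) / abs(prev) < tol
```

Raising `tol` to 10^{-2} implements the looser criterion that the text reports cuts training from 11,986 to 1,795 seconds at K = 90 with little loss in accuracy.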
For prediction, both iMedLDA and gMedLDA are slightly slower than fMedLDA.

6 Conclusions

We have presented two Monte Carlo methods for MedLDA, a supervised topic model that imposes max-margin constraints directly on the desired posterior distributions in order to discover predictive latent topic representations. Our methods are based on a novel interpretation of MedLDA as a regularized Bayesian model and on a convex dual formulation for handling the soft-margin constraints. Experimental results on the 20 Newsgroups data set show that the Monte Carlo methods are robust to hyper-parameters and can yield very competitive results for such max-margin topic models.

Acknowledgements

Part of the work was done when QJ was visiting CMU. JZ and MS are supported by the National Basic Research Program of China (No. 2013CB329403 and 2012CB316301), the National Natural Science Foundation of China (No. 91120011, 61273023 and 61170196), and Tsinghua Initiative Scientific Research Program No. 20121088071. EX is supported by AFOSR FA95501010247, ONR N000140910758, NSF Career DBI-0546594 and an Alfred P. Sloan Research Fellowship.

References
[1] C.M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[2] D.M. Blei and J.D. McAuliffe. Supervised topic models. In NIPS, pages 121–128, 2007.
[3] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[4] A. Gelman, J.B. Carlin, H.S. Stern, and D.B. Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, Boca Raton, FL, 2004.
[5] T.L. Griffiths and M. Steyvers. Finding scientific topics. Proc. of the National Academy of Sciences, pages 5228–5235, 2004.
[6] T. Joachims, T. Finley, and C.N.J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27–59, 2009.
[7] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, and L.K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[8] S. Lacoste-Julien, F. Sha, and M.I. Jordan.
DiscLDA: Discriminative learning for dimensionality reduction and classification. In NIPS, pages 897–904, 2009.
[9] D. Li, S. Somasundaran, and A. Chakraborty. A combination of topic models with max-margin learning for relation detection. In ACL TextGraphs-6 Workshop, 2011.
[10] R.Y. Rubinstein and D.P. Kroese. Simulation and the Monte Carlo Method. Wiley-Interscience, 2008.
[11] E. Schofield. Fitting maximum-entropy models on large sample spaces. PhD thesis, Department of Computing, Imperial College London, 2006.
[12] C. Wang, D.M. Blei, and F.F. Li. Simultaneous image classification and annotation. In CVPR, 2009.
[13] Y. Wang and G. Mori. Max-margin latent Dirichlet allocation for image classification and annotation. In BMVC, 2011.
[14] S. Yang, J. Bian, and H. Zha. Hybrid generative/discriminative learning for automatic image annotation. In UAI, 2010.
[15] A. Zellner. Optimal information processing and Bayes's theorem. American Statistician, pages 278–280, 1988.
[16] J. Zhu, A. Ahmed, and E.P. Xing. MedLDA: maximum margin supervised topic models for regression and classification. In ICML, pages 1257–1264, 2009.
[17] J. Zhu, N. Chen, and E.P. Xing. Infinite latent SVM for classification and multi-task learning. In NIPS, 2011.
Analyzing 3D Objects in Cluttered Images

Mohsen Hejrati, UC Irvine, shejrati@ics.uci.edu
Deva Ramanan, UC Irvine, dramanan@ics.uci.edu

Abstract

We present an approach to detecting and analyzing the 3D configuration of objects in real-world images with heavy occlusion and clutter. We focus on the application of finding and analyzing cars. We do so with a two-stage model; the first stage reasons about 2D shape and appearance variation due to within-class variation (station wagons look different than sedans) and changes in viewpoint. Rather than using a view-based model, we describe a compositional representation that models a large number of effective views and shapes using a small number of local view-based templates. We use this model to propose candidate detections and 2D estimates of shape. These estimates are then refined by our second stage, which uses an explicit 3D model of shape and viewpoint. We use a morphable model to capture 3D within-class variation, and a weak-perspective camera model to capture viewpoint. We learn all model parameters from 2D annotations. We demonstrate state-of-the-art accuracy for detection, viewpoint estimation, and 3D shape reconstruction on challenging images from the PASCAL VOC 2011 dataset.

1 Introduction

Figure 1: We describe two-stage models for detecting and analyzing the 3D shape of objects in unconstrained images. In the first stage, our models reason about 2D appearance and shape using variants of deformable part models (DPMs). We use global mixtures of trees with local mixtures of gradient-based part templates (top-left). Global mixtures capture constraints on visibility and shape (headlights are only visible in certain views at certain locations), while local mixtures capture constraints on appearance (headlights look different in different views). Our 2D models localize even fully-occluded landmarks, shown as hollow circles and dashed lines (top-middle).
We feed this output to our second stage, which directly reasons about 3D shape and camera viewpoint. We show the reconstructed 3D model and the associated ground plane (assumed parallel to the car body) at top-right. The bottom row shows 3D reconstructions from four novel viewpoints.

A grand challenge in machine vision is the task of understanding 3D objects from 2D images. Classic approaches based on 3D geometric models [2] could sometimes exhibit brittle behavior on cluttered, "in-the-wild" images. Contemporary recognition methods tend to build statistical models of 2D appearance, consisting of classifiers trained on large training sets using engineered appearance features. Successful examples include face detectors [30], pedestrian detectors [7], and general object-category detectors [10]. Such methods seem to work well even in cluttered scenes, but are usually limited to coarse 2D output, such as bounding boxes. Our work is an attempt to combine the two approaches, with a focus on statistical, 3D geometric models of objects. Specifically, we focus on the practical application of detecting and analyzing cars in cluttered, unconstrained images. We refer the reader to our results (Fig. 4) for a sampling of the cluttered images that we consider. We develop a model that detects cars, estimates camera viewpoint, and recovers 3D landmark configurations and their visibility with state-of-the-art accuracy. It does so by reasoning about appearance, 3D shape, and camera viewpoint through the use of 2D structured, relational classifiers and 3D geometric subspace models.

While deformable models and pictorial structures [10, 31, 11] are known to successfully model articulation, 3D viewpoint is still not well understood. The typical solution is to "discretize" viewpoint and build multiple view-based models, each tuned for one view (frontal, side, 3/4, ...).
One advantage of such a "brute-force" approach is that it is computationally efficient, at least for a small number of views. Fine-grained 3D shape estimation may still be difficult with such a strategy. On the other hand, it is difficult to build models that reason directly in 3D, because the "inverse-rendering" problem is hard to solve. We introduce a two-stage approach that first reasons about 2D shape and appearance variation, and then reasons explicitly about 3D shape and viewpoint given 2D correspondences from the first stage. We show that "inverse rendering" is feasible by way of 2D correspondences.

2D shape and appearance: Our first stage models 2D shape and appearance using a variant of deformable part models (DPMs) designed to produce reliable 2D landmark correspondences. Our approach differs from traditional view-based models in that it is compositional; it "cuts and pastes" together different sets of local view-based templates to model a large set of global viewpoints. We use global mixtures of trees with local mixtures of "part" or landmark templates. Global mixtures capture constraints on visibility and shape (headlights are only visible in certain views at certain locations), while local mixtures capture constraints on appearance (headlights look different in different views). We use this model to efficiently generate candidate 2D detections that are refined by our second, 3D stage. One salient aspect of our 2D model is that it reports the 2D locations of all landmarks, including occluded ones, each augmented with a visibility flag.

3D shape and viewpoint: Our second layer processes the 2D output of our first stage, incorporating global shape constraints arising from 3D shape variation and viewpoint. To capture viewpoint constraints, we model landmarks as weak-perspective projections of a 3D object. To capture within-class variation, we model the 3D shape of any object instance as a linear combination of 3D basis shapes.
We use tools from nonrigid structure-from-motion (SFM) to both learn and enforce such models using 2D correspondences. Crucially, we make use of the occlusion reports generated by our local view-based templates to estimate morphable 3D shape and camera viewpoint.

2 Related Work

We focus mostly on recognition methods that deal explicitly with 3D viewpoint variation.

Voting-based methods: One approach to detection and viewpoint classification is based on bottom-up geometric voting, using a Hough transform or geometric hashing. Images are first processed to obtain a set of local feature detections. Each detection can then vote for both an object location and a viewpoint. Examples include [12] and implicit shape models [1, 26]. Our approach differs in that we require no initial feature detection stage; instead, we reason about all possible geometric configurations and occlusion states.

View-based models: Early successful approaches included multiview face detection [24, 17]. Recent approaches based on view-based deformable part models include [19, 13, 10]. Our model differs in that we use a single representation that directly generates multiple views. One can augment view-based models to share local parts across views [27, 21, 32]. This typically requires reasoning about topological changes in viewpoint; certain parts or features are only visible in certain views due to self-occlusion. One classic representation for encoding such visibility constraints is an aspect graph [5]. [33] model such topological constraints with global mixtures having varying tree structures. Our model is similar to such approaches, except that we use a decomposable notion of aspect; we simultaneously reason about global and semi-local changes in visibility using local part mixtures with global co-occurrence constraints.

3D models: One can also directly reason about local features and their geometric arrangement in a 3D coordinate system [23, 25, 34].
Though such models are three-dimensional in terms of their underlying representation, run-time inference usually proceeds in a bottom-up manner, where detected features vote for object locations. To handle non-Gaussian observation models, [18] evaluate randomly sampled model estimates within a RANSAC search. Our approach is closely related to the recent work of [22], which also uses a deformable part model (DPM) to capture viewpoint variation in cars. Though they learn spatial constraints in a 3D coordinate frame, their model at run-time is equivalent to a view-based model, where each view is modeled with a star-structured DPM. Our model differs in that we directly reason about the locations of fully-occluded landmarks, we model an exponential number of viewpoints using a compositional representation, and we produce continuous 3D shapes and camera viewpoints for each detection using only 2D training data. Finally, we represent the space of 3D models of an object category using a set of basis shapes, similar to the morphable models of [3]. To estimate such models from 2D data, we adapt methods designed for tracking morphable shapes to 3D object category recognition [29, 28].

3 2D Shape and Appearance

We first describe our 2D model of shape and appearance. We write it as a scoring function with linear parameters. Our model can be seen as an extension of the flexible mixtures-of-parts model [31], which itself augments a deformable part model (DPM) [10] to reason about local mixtures. Our model differs in its encoding of occlusion states using local mixtures, as well as in its introduction of global mixtures that enforce occlusions and spatial geometry consistent with changes in 3D viewpoint. We take care to design our model so as to allow efficient dynamic-programming algorithms for inference.

Let I be an image, p_i = (x, y) the pixel location of part i, and t_i ∈ {1, ..., T} the local mixture component of part i.
As an example, part i may correspond to a front-left headlight, and t_i can correspond to different appearances of a headlight in frontal, side, or three-quarter views. A notable aspect of our model is that we estimate landmark locations for all parts in all views, even when they are fully occluded. We will show that local mixture variables perform surprisingly well at modeling complex appearances arising from occlusions. Let i ∈ V, where V is the set of all landmarks. We consider different relational graphs G_m = (V, E_m), where E_m connects pairs of landmarks constrained to have consistent locations and local mixtures in global mixture m. We can loosely think of m as a "global viewpoint", though it will be latently estimated from the data. We use the lack of subscript to denote the set of variables obtained by iterating over that subscript; e.g., p = {p_i : i ∈ V}. Given an image, we score a collection of landmark locations and mixture variables as

S(I, p, t, m) = \sum_{i \in V} \left[ \alpha_i^{t_i} \cdot \phi(I, p_i) \right] + \sum_{ij \in E_m} \left[ \beta_{ijm}^{t_i, t_j} \cdot \psi(p_i - p_j) + \gamma_{ijm}^{t_i, t_j} \right]   (1)

Local model: The first term scores the appearance evidence for placing a template α_i^{t_i} for part i, tuned for mixture t_i, at location p_i. We write φ(I, p_i) for the feature vector (e.g., a HOG descriptor [7]) extracted from pixel location p_i in image I. Note that we define a template even for mixtures t_i corresponding to fully-occluded states. One may argue that no image evidence should be scored during an occlusion; we take the view that the learning algorithm can decide for itself. It may choose to learn a template of all zeros (essentially ignoring image evidence), or it may find gradient features statistically correlated with occlusions (such as t-junctions). Unlike the remaining terms in our scoring function, the local appearance model does not depend on the global mixture/viewpoint. We show that this independence allows our model to compose together different local mixtures to model a single global viewpoint.
Relational model: The second term scores relational constraints between pairs of parts. We write ψ(p_i − p_j) = [dx, dx², dy, dy²]^T, a vector of relative offsets between part i and part j. We can interpret β_{ijm}^{t_i,t_j} as the parameters of a spring specifying the relative rest location and a quadratic spring penalty for deviating from that rest location. Notably, this spring depends on parts i and j, the local mixture components of parts i and j, and the global mixture m. This dependency captures many natural constraints due to self-occlusion; for example, if a car's left-front wheel lies to the right of the other front wheel (in image space), then it is likely self-occluded. Hence it is crucial that local appearance and geometry depend on each other. The last term γ_{ijm}^{t_i,t_j} defines a co-occurrence score associated with instancing local mixtures t_i and t_j, and global mixture m. This encodes the constraint that, if the left front headlight is occluded due to self-occlusion, the left front wheel is also likely occluded. Global model: We define different graphs G_m = (V, E_m) corresponding to different global mixtures. We can loosely think of the global variable m as capturing a coarse, quantized viewpoint. To ensure tractability, we force all edge structures to be tree-structured. Intuitively, different relational structures may help because occluded landmarks tend to be localized with less reliability. One may expect occluded/unreliable parts to have fewer connections (lower degrees in G_m) than reliable parts. Even for a fixed global mixture m, our model can generate an exponentially large set of T^{|V|} appearances, where T is the number of local mixture types. We show such a model outperforms a naive view-based model in our experiments.
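To make the spring interpretation concrete: a deformation score β · [dx, dx², dy, dy²] with negative quadratic weights is maximized at a rest offset encoded in the linear terms. A minimal sketch (the numeric weights below are illustrative stand-ins, not learned parameters):

```python
import numpy as np

def spring_score(pi, pj, beta):
    """Relational score beta . [dx, dx^2, dy, dy^2] between parts i and j."""
    dx, dy = pi[0] - pj[0], pi[1] - pj[1]
    return float(np.dot(beta, [dx, dx * dx, dy, dy * dy]))

# A spring with rest offset (mx, my) = (3, -2) and unit rigidity:
# -(dx - 3)^2 - (dy + 2)^2 expands to 6*dx - dx^2 - 4*dy - dy^2 + const,
# so beta = [6, -1, -4, -1] (the constant is absorbed into gamma).
beta = np.array([6.0, -1.0, -4.0, -1.0])
at_rest = spring_score((3, -2), (0, 0), beta)    # score at the rest offset
off_rest = spring_score((4, -2), (0, 0), beta)   # score one pixel away
```

During learning, the max-margin training described below chooses such weights separately for each part pair, local-mixture pair, and global mixture, which is what lets geometry and occlusion state interact.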
3.1 Inference

Inference corresponds to maximizing (1) with respect to landmark locations p, local mixtures t, and global mixtures m:

S^*(I) = \max_m \left[ \max_{p,t} S(I, p, t, m) \right]   (2)

We optimize the above equation by enumerating all global mixtures m and, for each global mixture, finding the optimal combination of landmark locations p and local mixtures t by dynamic programming (DP). To see that the inner maximization can be optimized by DP, let us define z_i = (p_i, t_i) to denote both the discrete pixel position and the discrete mixture type of part i. We can rewrite the score from (1), for a fixed image I and global mixture m with edge structure E, as

S(z) = \sum_{i \in V} \phi_i(z_i) + \sum_{ij \in E} \psi_{ij}(z_i, z_j)   (3)

where φ_i(z_i) = α_i^{t_i} · φ(I, p_i) and ψ_{ij}(z_i, z_j) = β_{ijm}^{t_i,t_j} · ψ(p_i − p_j) + γ_{ijm}^{t_i,t_j}. From this perspective, it is clear that our model (conditioned on I and m) is a discrete, pairwise Markov random field (MRF). When G = (V, E) is tree-structured, one can compute max_z S(z) with dynamic programming [31].

3.2 Learning

We assume we are given training data consisting of image-landmark triplets {I_n, p_in, o_in}, where landmarks are augmented with an additional discrete visibility flag o_in. With a slight abuse of notation, we use n to denote an instance of a training image. We use o_in ∈ {0, 1, 2} to denote visible, self-occluded, and other-occluded respectively, where other-occlusion corresponds to a landmark that is occluded by another object (or the image border). We now show how to augment this training set with local mixture labels t_in, global mixture labels m_n, and global edge structures E_m. Essentially, we infer such mixture labels using probabilistic algorithms for generating local/global clusters of 2D landmark configurations. We then use these inferred mixture labels to train the linear parameters of the scoring function (1) using supervised, max-margin methods.
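The tree-structured max-sum problem of Section 3.1 can be solved by leaf-to-root message passing followed by top-down decoding. A minimal sketch on a toy tree (the states, scores, and edge tables are illustrative, not real part templates):

```python
import numpy as np

def max_sum_tree(unary, edges, pairwise, root=0):
    """Maximize sum_i unary[i][z_i] + sum_{(i,j)} pairwise[(i,j)][z_i, z_j]
    over discrete states z on a tree rooted at `root`, by dynamic programming.
    edges: (parent, child) pairs; pairwise: (parent, child) -> score matrix."""
    children = {i: [] for i in range(len(unary))}
    for p, c in edges:
        children[p].append(c)

    total, back = {}, {}

    def collect(i):
        # Best score of the subtree rooted at i, as a function of z_i.
        s = np.asarray(unary[i], dtype=float).copy()
        for c in children[i]:
            collect(c)
            tbl = pairwise[(i, c)] + total[c][None, :]
            s += tbl.max(axis=1)           # message from child c to parent i
            back[c] = tbl.argmax(axis=1)   # best z_c for each value of z_i
        total[i] = s

    collect(root)
    best = {root: int(total[root].argmax())}
    stack = [root]
    while stack:                           # decode the argmax top-down
        i = stack.pop()
        for c in children[i]:
            best[c] = int(back[c][best[i]])
            stack.append(c)
    return float(total[root].max()), best

# Toy example: 3 parts with 2 states each, star-shaped tree rooted at part 0.
unary = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([0.5, 0.0])]
edges = [(0, 1), (0, 2)]
pairwise = {(0, 1): np.array([[0.0, 1.0], [1.0, 0.0]]),
            (0, 2): np.array([[0.5, 0.0], [0.0, 0.5]])}
score, states = max_sum_tree(unary, edges, pairwise)
```

In the paper's setting, each state z_i ranges over all pixel positions and local mixtures, and the quadratic spring form of ψ additionally allows the inner max to be accelerated with distance transforms.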
Learning local mixtures: We use the clustering algorithm described in [8, 4] to learn local part mixtures. We construct a "local-geometric-context" vector for each part, and obtain landmark mixture labels by grouping landmark instances with similar local geometry. Specifically, for each landmark i and image n, we construct a K-element vector g_in that defines the 2D relative location of a landmark with respect to the other K landmarks in instance n, normalized for the size of that training instance. We construct sets of features Set_ij = {g_in : n ∈ 1..N and o_in = j} corresponding to each part i and occlusion state j. We separately cluster each set of vectors using K-means, and then interpret cluster membership as the mixture label t_in. This means that, for landmark i, a third of its T local mixtures will model visible instances in the training set, a third will model self-occlusions, and a third will capture other-occlusions. Learning relational structure: Given local mixture labels t_in, we simultaneously learn global mixtures m_n and edge structures E_m with a probabilistic model of z_in = (p_in, t_in). We find the global mixtures and edge structure that maximize the probability of the observed {z_in} labels. Probabilistically speaking, our spatial spring model is equivalent to a Gaussian model (whose mean and covariance correspond to the rest location and rigidity), making estimation relatively straightforward. We first describe the special case of a single global mixture, for which the most-likely tree E can be obtained by maximizing the mutual information of the labels using the Chow-Liu algorithm [6, 15]. In our case, we find the maximum-weight spanning tree in a fully connected graph whose edges are labeled with the mutual information (MI) between z_i = (p_i, t_i) and z_j = (p_j, t_j):

MI(z_i, z_j) = MI(t_i, t_j) + \sum_{t_i, t_j} P(t_i, t_j) \, MI(p_i, p_j \mid t_i, t_j)   (4)

MI(t_i, t_j) can be directly computed from the empirical joint frequency of mixture labels in the training set.
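The Chow-Liu step amounts to computing pairwise empirical mutual information and extracting a maximum-weight spanning tree. A minimal sketch over discrete mixture labels only (the Gaussian location term of (4) is omitted for brevity, and the toy labels below are illustrative):

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (in nats) of two discrete label sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p = c / n
        mi += p * np.log(p / ((px[a] / n) * (py[b] / n)))
    return mi

def chow_liu_tree(labels):
    """labels: (N, K) array of discrete mixture labels for K landmarks.
    Returns the edges of a maximum-MI spanning tree (Kruskal's algorithm)."""
    K = labels.shape[1]
    weights = sorted(
        ((mutual_information(labels[:, i], labels[:, j]), i, j)
         for i in range(K) for j in range(i + 1, K)),
        reverse=True)
    parent = list(range(K))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    edges = []
    for w, i, j in weights:
        ri, rj = find(i), find(j)
        if ri != rj:                  # add the edge if it joins two components
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Toy data: landmarks 0 and 1 have perfectly correlated mixture labels,
# landmark 2 is independent, so the tree must keep the (0, 1) edge.
labels = np.array([[0, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1]])
edges = chow_liu_tree(labels)
```

The full algorithm simply replaces the edge weight above with the combined label-plus-location MI of equation (4).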
MI(p_i, p_j | t_i, t_j) is the mutual information of the Gaussian random variables for the locations of landmarks i and j given a fixed pair of discrete mixture types t_i, t_j; this again is readily obtained by computing the determinant of the sample covariance of the locations of landmarks i and j, estimated from the training data. Hence both spatial consistency and mixture consistency are used when learning our relational structure. Learning structure and global mixtures: To simultaneously learn global mixture labels m_n and the edge structures E_m associated with each mixture, we use an EM algorithm for learning mixtures of trees [20, 15]. Specifically, Meila and Jordan [20] describe an EM algorithm that iterates between inferring distributions over tree mixture assignments (the E-step) and estimating the tree structure (the M-step). One can write the expected complete log-likelihood of the observed labels {z}, where θ are the model parameters (Gaussian spatial models, local mixture co-occurrences, and global mixture priors) to be maximized and the global mixture assignment variables {m_n} are the hidden variables to be marginalized. Notably, the M-step makes use of the Chow-Liu algorithm. We omit detailed equations for lack of space, but note that this is a relatively straightforward application of [20]. We demonstrate that our latently-estimated global mixtures are crucial for high performance in 3D reasoning. Learning parameters: The previous steps produce local/global mixture labels and edge structures. Treating these as "ground truth", we now define a supervised max-margin framework for learning model parameters. To do so, let us write the landmark position labels p_n, local mixture labels t_n, and global mixture label m_n collectively as y_n. Given a training set of positive images with labels {I_n, y_n} and negative images not containing the object of interest, we define a structured prediction objective function similar to one proposed in [31].
The scoring function in (1) is linear in the parameters w = {α, β, γ}, and therefore can be expressed as S(I_n, y_n) = w · Φ(I_n, y_n). We learn a model of the form:

\arg\min_{w, \; \xi_n \ge 0} \; \frac{1}{2} w^T w + C \sum_n \xi_n   (5)
s.t. ∀n ∈ positive images:  w · Φ(I_n, y_n) ≥ 1 − ξ_n
     ∀n ∈ negative images, ∀y:  w · Φ(I_n, y) ≤ −1 + ξ_n

The above constraints state that positive examples should score better than 1 (the margin), while negative examples, for all configurations of part positions and mixtures, should score less than −1. We collect negative examples from images that do not contain any cars. This form of learning problem is known as a structural SVM, and there exist many well-tuned solvers, such as the cutting-plane solver of SVMStruct [16] and the stochastic gradient descent solver in [10]. We use the dual coordinate-descent QP solver of [31]. We show an example of a learned model and its learned tree structure in Fig. 1.

4 3D Shape and Viewpoint

The previous section describes our 2D model of appearance and shape. We use it to propose detections with associated landmark positions p*. In this section, we describe a 3D shape and viewpoint model for refining p*. Consider 2D views of a single rigid object; 2D landmark positions must obey epipolar geometry constraints. In our case, we must account for within-class shape variation as well (e.g., sedans look different than station wagons). To do so, we make two simplifying assumptions: (1) We assume the depth variation of our objects is small compared to the distance from the camera, which corresponds to a weak-perspective camera model. (2) We assume the 3D landmarks of all object instances can be written as linear combinations of a few basis shapes. Let us write the set of detected landmark positions p* as a 2 × K matrix, where K = |V|.
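Returning briefly to the max-margin objective (5) above: its constraints can be folded into hinge losses, where for each negative image only the highest-scoring configuration matters. A schematic sketch of evaluating this objective (the feature vectors below are toy placeholders; this is not the cutting-plane or dual coordinate-descent solver used in the paper):

```python
import numpy as np

def structural_svm_objective(w, pos_feats, neg_feats_list, C=1.0):
    """Hinge-loss form of (5): positives should score >= 1, and for each
    negative image every configuration y should score <= -1, so only the
    highest-scoring negative configuration contributes to the loss."""
    reg = 0.5 * float(w @ w)
    loss = sum(max(0.0, 1.0 - float(w @ f)) for f in pos_feats)
    loss += sum(max(0.0, 1.0 + max(float(w @ f) for f in feats))
                for feats in neg_feats_list)
    return reg + C * loss

w = np.array([1.0, 0.0])
pos = [np.array([2.0, 0.0])]                             # scores 2 >= 1: no loss
negs = [[np.array([-2.0, 0.0]), np.array([-3.0, 1.0])]]  # best negative scores -2
obj = structural_svm_objective(w, pos, negs)             # only 0.5 * ||w||^2 remains
```

Because the inner max over negative configurations is exactly the inference problem of Section 3.1, the same dynamic program used at test time also drives training.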
We now describe a procedure for refining p* to be consistent with these two assumptions:

\min_{R, \alpha} \; \left\| p^* - R \sum_i \alpha_i B_i \right\|^2, \quad \text{where } p^* \in \mathbb{R}^{2 \times K}, \; R \in \mathbb{R}^{2 \times 3}, \; R R^T = \mathrm{Id}, \; B_i \in \mathbb{R}^{3 \times K}   (6)

Here, R is an orthonormal camera projection matrix, B_i is the i-th basis shape, and Id is the identity matrix. We factor out camera translations by working with mean-centered points p*, and let α directly model weak-perspective scalings. Inference: Given 2D landmark locations p* and a known set of 3D basis shapes B_i, inference corresponds to minimizing (6). For a single basis shape (n_B = 1), this problem is equivalent to the well-known "extrinsic orientation" problem of registering a 3D point cloud to a 2D point cloud with known correspondence [14]. Because the squared error is linear in α_i and R, we solve for the coefficients and rotation with an iterative least-squares algorithm. We enforce the orthonormality of R with a nonlinear optimization, initialized by the least-squares solution [14]. This means that we can associate each detection with shape basis coefficients α_i (which allow us to reconstruct the 3D shape) and a camera viewpoint R. One could combine the reprojection error of (6) with our original scoring function from (1) into a single objective that jointly searches over all 2D and 3D unknowns. However, inference would then be exponential in K. We find a two-layer inference algorithm to be computationally efficient but still effective. Learning: The above inference algorithm requires the morphable 3D basis B_i at test time. One can estimate such a basis given training data with labeled 2D landmark positions by casting this as a nonrigid structure-from-motion (SFM) problem. Stack all 2D landmarks from N training images into a 2N × K matrix. In the noise-free case, this matrix has rank 3n_B (where n_B is the number of basis shapes), since each row can be written as a linear combination of the 3D coordinates of n_B basis shapes. This means that one can use rank constraints to learn a 3D morphable basis.
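For the single-basis case (n_B = 1) with noise-free correspondences, an unconstrained least-squares camera followed by an SVD projection onto scaled row-orthonormal matrices recovers the weak-perspective camera exactly. A minimal sketch on synthetic data (a simplification, not the full iterative scheme of [14] used in the paper):

```python
import numpy as np

def fit_camera(p2d, B):
    """Least-squares 2x3 camera M minimizing ||p2d - M @ B||, then an SVD
    projection of M onto scaled row-orthonormal matrices, i.e. the
    weak-perspective form M = s * R with R R^T = I_2."""
    M, *_ = np.linalg.lstsq(B.T, p2d.T, rcond=None)
    M = M.T                                   # 2x3 unconstrained camera
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    s = S.mean()                              # single isotropic scale
    R = U @ Vt                                # nearest row-orthonormal 2x3
    return s, R

# Synthetic noise-free example: known weak-perspective camera, random shape.
rng = np.random.default_rng(0)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0]])
B = rng.standard_normal((3, 20))              # one basis shape, 20 landmarks
p2d = 2.0 * (R_true @ B)                      # projected points, scale s = 2
s, R = fit_camera(p2d, B)
err = np.linalg.norm(p2d - s * (R @ B))       # reprojection error
```

With multiple basis shapes, this camera step alternates with a linear solve for the coefficients α_i, which is the iterative least-squares scheme described above.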
We use the publicly-available nonrigid SFM code of [28]. By slightly modifying it to estimate "motion" given a known "structure", we can also use it to perform the previous projection step during inference. Occlusion: A well-known limitation of SFM methods is their restricted success under heavy occlusion. Notably, our 2D appearance model provides location estimates for occluded landmarks. Many SFM methods (including [28]) can deal with limited occlusion through the use of low-rank constraints; essentially, one can still estimate low-rank approximations of matrices with some missing entries. We can use this property to learn models from partially-labeled training sets. Recall that our learning formulation requires all landmarks (including occluded ones) to be labeled in training data. Manually labeling the positions of occluded landmarks can be ambiguous. Instead, we use the estimated shape basis and camera viewpoints to infer/correct the locations of occluded landmarks.

5 Experiments

Datasets: To evaluate our model, we focus on car detection and 3D landmark estimation in cluttered, real-world datasets with severe occlusions. We labeled a subset of 500 images from the PASCAL VOC 2011 dataset [9] with the locations and visibility states of 20 car landmarks. Our dataset contains 723 car instances. 36% of landmarks are not visible due to self-occlusion, while 21% of landmarks are not visible due to occlusion by another object (or truncation at the image border). Hence over half our landmarks are occluded, making our dataset considerably more difficult than those typically used for landmark localization or 3D viewpoint estimation. We evenly split the images into a train/test set. We also compare results on a more standard viewpoint dataset from [1], which consists of 200 relatively "clean" cars from the PASCAL VOC 2007 dataset, marked with 40 discrete viewpoint class labels.
Implementation: We modify the publicly-available code of [31] and [28] to learn our models, setting the number of local mixtures T = 9, the number of global mixtures M = 50, and the number of basis shapes n_B = 5. We found results relatively robust to these settings. Learning our 2D deformable model takes roughly 4 hours, while learning our 3D shape model takes less than a minute. Our model is defined at a canonical scale, so we search over an image pyramid to find detections at multiple scales. Total run-time for a test image (including both 2D and 3D processing over all scales) is 10 seconds. Evaluation: Given an image, our algorithm produces multiple detections, each with 3D landmark locations, visibility flags, and camera viewpoints. We qualitatively visualize such output in Fig. 4. To evaluate our output, we assume test images are marked with ground-truth cars, each annotated with ground-truth 2D landmarks and visibility flags. We measure the performance of our algorithm on four tasks. We evaluate object detection (AP) using the PASCAL criterion of Average Precision [9], defining a detection to be correct if its bounding box overlaps the ground truth by 50% or more. We evaluate 2D landmark localization (LP) by counting the fraction of predicted landmarks that lie within .5x pixels of the ground truth, where x is the diameter of the associated ground-truth wheel. We evaluate landmark visibility prediction (VP) by counting the number of landmarks whose predicted visibility state matches the ground truth, where landmarks may be "visible", "self-occluded", or "other-occluded". Our 3D shape model refines only LP and VP, so AP is determined solely by our 2D (mixtures of trees) model. To avoid conflating the evaluation measures, we evaluate LP and VP assuming bounding-box correspondences between candidates and ground-truth instances are provided. Finally, to evaluate viewpoint classification (VC), we compare predicted camera viewpoints with ground-truth viewpoints on the standard benchmark of [1].

Figure 2: We report histograms of viewpoint label errors for the dataset of [1]. We compare to the reported performance of [1] and [12]. Our model reduces the median error (right) by a factor of 2.

Figure 3: We compare our model with various view-based baselines in (a) Baseline comparison (precision-recall: DPM = 63.6%, MV star = 74.0%, MV tree = 72.3%, Us = 72.5%), and examine various components of our model through a diagnostic analysis in (b) Diagnostic analysis (Local = 69%, Global = 72.5%). We refer the reader to the text for a detailed analysis, but our model outperforms many state-of-the-art view-based baselines based on trees, stars, and latent parts. We also find that modeling the effects of shape due to global changes in 3D viewpoint is crucial for both detection and landmark localization.

Viewpoint Classification: We first present results for viewpoint classification in Fig. 2 on the benchmark of [1]. Given a test instance, we run our detector, estimate the camera rotation R, and report the reconstructed 2D landmarks generated using the estimated R.
Then we produce a quantized viewpoint label by matching the reconstructions to landmark locations for a reference image (provided in the dataset). We found this approach more reliable than directly matching 3D rotation matrices (for which metric distances are hard to define). We produce a median error of 9 degrees, a factor of 2 improvement over the state-of-the-art. This suggests our model does accurately capture viewpoints. We next turn to a detailed analysis on our new cluttered dataset. Baselines: We compare the performance of our overall system to several existing approaches for multiview detection in Fig. 3(a). We first compare to the widely-used latent deformable part model (DPM) of [10], trained on the exact same data as our model. A supervised DPM (MV-star) considerably improves performance from 63 to 74% AP, where supervision is provided for (view-specific) root mixtures and part locations. This latter model is equivalent in structure to a state-of-the-art model for car detection and viewpoint estimation [22], which trains a DPM using supervision provided by a 3D CAD model. By allowing for tree-structured relations in each view-specific global mixture (MV-tree), we see a small drop to AP = 72.3%. Our final model is similar in terms of detection performance (AP = 72.5%), but does noticeably better than both view-based models at landmark prediction. We correctly localize landmarks 69.5% of the time, while MV-tree and MV-star score 65.7% and 64.7%, respectively. We produce landmark visibility (VP) estimates from our multiview baselines by predicting a fixed set of visibility labels conditioned on the view-based mixture. We should note that accurate landmark localization is crucial for estimating the 3D shape of the detected instance. We attribute our improvement to the fact that our model can represent a large number of global viewpoints by composing together different local view-based templates.
Figure 4: Sample results of our system on real images with heavy clutter and occlusion. We show pairs of images corresponding to detections that matched ground-truth annotations. The top image (in each pair) shows the output of our tree model, and the bottom shows our 3D shape reconstruction, following the notational conventions of Fig. 1. Our system estimates the 3D shapes of multiple cars under heavy clutter and occlusion, even in cases where more than 50% of a car is occluded. Our morphable 3D model adapts to the shape of the car, producing different reconstructions for SUVs and sedans (row 2, columns 2-3). Recall that our tree model explicitly reasons about changes in visibility due to self-occlusions versus occlusions from other objects, manifested as local mixture templates. This allows our 3D reconstructions to model occlusions due to other objects (e.g., the rear of the car in row 2, column 3). In some cases, the estimated 3D shape is misaligned due to extreme shape variation of the car instance (e.g., the folding doors on the lower-right). Diagnostics: We compare various aspects of our model in Fig. 3(b). "Local" refers to a single tree model with local mixtures only, while "Global" refers to our global mixtures of trees. We see a small improvement in terms of AP, from 69% for "Local" to 72.5% for "Global". However, in terms of landmark prediction, "Global" strongly outperforms "Local", 69.4% to 57.2%. We use these predicted landmarks to estimate 3D shape below. 3D Shape: Our 3D shape model reports back a z depth value for each landmark (x, y) position. Unfortunately, depth is hard to evaluate without ground-truth 3D annotations. Instead, we evaluate the improvement in re-projected VP and LP due to our 3D shape model; we see a small 2% improvement in LP accuracy, from 69.4% to 71.2%.
We further analyze this by looking at the improvement in localization accuracy of ground-truth landmarks that are visible (73.3 to 74.8%), self-occluded (70.5 to 72.5%), and other-occluded (22.5 to 23.4%). We see the largest improvement for occluded parts, which makes intuitive sense: local templates corresponding to occluded mixtures will be less accurate, and so benefit more from a 3D shape model. Conclusion: We have described a geometric model for detecting and estimating the 3D shape of objects in heavily cluttered, occluded, real-world images. Our model differs from typical multiview approaches by reasoning about local changes in landmark appearance and global changes in visibility and shape, through the aid of a morphable 3D model. While our model is similar to prior work in terms of detection performance, it produces significantly better estimates of 2D/3D landmarks and camera positions, and quantifiably improves localization of occluded landmarks. Though we have focused on the application of analyzing cars, we believe our method could apply to other geometrically-constrained objects.

References
[1] M. Arie-Nachimson and R. Basri. Constructing implicit 3d shape models for pose estimation. In ICCV, 2009.
[2] T. Binford. Survey of model-based image analysis systems. The International Journal of Robotics Research, 1(1):18–64, 1982.
[3] V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 187–194. ACM Press/Addison-Wesley Publishing Co., 1999.
[4] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3d human pose annotations. In Computer Vision, 2009 IEEE 12th International Conference on, pages 1365–1372. IEEE, 2009.
[5] K. Bowyer and C. Dyer. Aspect graphs: An introduction and survey of recent results. International Journal of Imaging Systems and Technology, 2(4):315–328, 1990.
[6] C. Chow and C. Liu.
Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462–467, 1968.
[7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[8] C. Desai and D. Ramanan. Detecting actions, poses, and objects with relational phraselets. In ECCV, 2012.
[9] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results. http://www.pascalnetwork.org/challenges/VOC/voc2011/workshop/index.html.
[10] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. IEEE PAMI, 99(1), 5555.
[11] R. Girshick, P. Felzenszwalb, and D. McAllester. Object detection with grammar models. In NIPS, 2011.
[12] D. Glasner, M. Galun, S. Alpert, R. Basri, and G. Shakhnarovich. Viewpoint-aware object detection and pose estimation. In ICCV, pages 1275–1282. IEEE, 2011.
[13] C. Gu and X. Ren. Discriminative mixture-of-templates for viewpoint classification. In ECCV, pages 408–421, 2010.
[14] B. Horn. Robot vision. The MIT Press, 1986.
[15] S. Ioffe and D. Forsyth. Mixtures of trees for object recognition. In CVPR, 2001.
[16] T. Joachims, T. Finley, and C. Yu. Cutting plane training of structural SVMs. Machine Learning, 2009.
[17] M. Jones and P. Viola. Fast multi-view face detection. In CVPR, 2003.
[18] Y. Li, L. Gu, and T. Kanade. A robust shape model for multi-view car alignment. In CVPR, 2009.
[19] R. Lopez-Sastre, T. Tuytelaars, and S. Savarese. Deformable part models revisited: A performance evaluation for object category pose estimation. In Computer Vision Workshops (ICCV Workshops), 2011.
[20] M. Meila and M. Jordan. Learning with mixtures of trees. JMLR, 1:1–48, 2001.
[21] P. Ott and M. Everingham. Shared parts for deformable part-based models. In CVPR, 2011.
[22] B. Pepik, M. Stark, P. Gehler, and B. Schiele. Teaching geometry to deformable part models.
In CVPR, 2012.
[23] S. Savarese and L. Fei-Fei. 3d generic object categorization, localization and pose estimation. In ICCV, pages 1–8. IEEE, 2007.
[24] H. Schneiderman and T. Kanade. A statistical method for 3d object detection applied to faces and cars. In CVPR, volume 1, pages 746–751. IEEE, 2000.
[25] M. Sun, H. Su, S. Savarese, and L. Fei-Fei. A multi-view probabilistic model for 3d object classes. In CVPR, pages 1247–1254. IEEE, 2009.
[26] A. Thomas, V. Ferrari, B. Leibe, T. Tuytelaars, B. Schiele, and L. Van Gool. Towards multi-view object class detection. In CVPR, volume 2, pages 1589–1596. IEEE, 2006.
[27] A. Torralba, K. Murphy, and W. Freeman. Sharing visual features for multiclass and multiview object detection. PAMI, 29(5):854–869, 2007.
[28] L. Torresani, A. Hertzmann, and C. Bregler. Learning non-rigid 3d shape from 2d motion. In Advances in Neural Information Processing Systems, 16, 2003.
[29] L. Torresani, D. Yang, E. Alexander, and C. Bregler. Tracking and modeling non-rigid objects with rank constraints. In CVPR, volume 1, pages I–493. IEEE, 2001.
[30] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, volume 1, pages I–511. IEEE, 2001.
[31] Y. Yang and D. Ramanan. Articulated pose estimation with flexible mixtures-of-parts. In CVPR, 2011.
[32] L. Zhu, Y. Chen, A. Torralba, W. Freeman, and A. Yuille. Part and appearance sharing: Recursive compositional models for multi-view multi-object detection. Pattern Recognition, 2010.
[33] X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In CVPR, 2012.
[34] M. Zia, M. Stark, B. Schiele, and K. Schindler. Revisiting 3d geometric models for accurate object shape and pose. In ICCV Workshops, pages 569–576. IEEE, 2011.
Recovery of Sparse Probability Measures via Convex Programming

Mert Pilanci and Laurent El Ghaoui
Electrical Engineering and Computer Science, University of California Berkeley, Berkeley, CA 94720
{mert,elghaoui}@eecs.berkeley.edu

Venkat Chandrasekaran
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
venkatc@caltech.edu

Abstract

We consider the problem of cardinality-penalized optimization of a convex function over the probability simplex with additional convex constraints. The classical ℓ1 regularizer fails to promote sparsity on the probability simplex, since the ℓ1 norm is trivially constant there. We propose a direct relaxation of the minimum cardinality problem and show that it can be efficiently solved using convex programming. As a first application we consider recovering a sparse probability measure given moment constraints, in which case our formulation becomes a linear program, and hence can be solved very efficiently. A sufficient condition for exact recovery of the minimum cardinality solution is derived for arbitrary affine constraints. We then develop a penalized version for the noisy setting which can be solved using second-order cone programs. The proposed method outperforms known rescaling heuristics based on the ℓ1 norm. As a second application we consider convex clustering using a sparse Gaussian mixture and compare our results with the well-known soft k-means algorithm.

1 Introduction

We consider optimization problems of the following form:

p^* = \min_{x \in C, \; 1^T x = 1, \; x \ge 0} \; f(x) + \lambda \, \mathrm{card}(x)

where f is a convex function, C is a convex set, card(x) denotes the number of nonzero elements of x, and λ ≥ 0 is a given tradeoff parameter for adjusting the desired sparsity. Since the cardinality penalty is inherently combinatorial in nature, these problems are in general not solvable in polynomial time.
In recent years, ℓ1-norm penalization as a proxy for penalizing cardinality has attracted a great deal of attention in machine learning, statistics, engineering, and applied mathematics [1], [2], [3], [4]. However, the aforementioned types of sparse probability optimization problems are not amenable to the ℓ1 heuristic, since ∥x∥_1 = 1^T x = 1 is constant on the probability simplex. Numerous problems in machine learning, statistics, finance, and signal processing fall into this category; however, to the authors' knowledge there is no known general convex optimization strategy for such problems constrained to the probability simplex. The aim of this paper is to claim that the reciprocal of the infinity norm, i.e., 1/max_i x_i, can be used as a convex heuristic for penalizing cardinality on the probability simplex, and that the resulting relaxations can be solved via convex optimization.

Figure 1: Probability simplex and the reciprocal of the infinity norm. (a) Level sets of the regularization function 1/max_i x_i on the probability simplex. (b) The sparsest probability distribution on the set C is x* (green), which also minimizes 1/max_i x_i on the intersection (red).

Figures 1(a) and 1(b) depict the level sets and an example of a sparse probability measure which has maximal infinity norm. In the following sections we expand our discussion by exploring two specific problems: recovering a measure from given moments, where f = 0 and C is affine, and convex clustering, where f is a log-likelihood and C = R^n. For the former case we give a sufficient condition for this convex relaxation to exactly recover the minimal-cardinality solution of p*. We then present numerical simulations for both problems, which suggest that the proposed scheme offers a very efficient convex relaxation for penalizing cardinality on the probability simplex.
2 Optimizing over sparse probability measures

We begin the discussion by first taking an alternative approach to the cardinality-penalized optimization, directly lower-bounding the original hard problem using the relation

∥x∥_1 = Σ_{i=1}^n |x_i| ≤ card(x) max_i |x_i| ≤ card(x) ∥x∥_∞,

which is essentially one of the core motivations for using the ℓ1 penalty as a proxy for cardinality. When constrained to the probability simplex, the lower bound on the cardinality simply becomes

1/max_i x_i ≤ card(x).

Using this bound on the cardinality, we immediately have a lower bound on our original NP-hard problem, which we denote by p*_∞:

p* ≥ p*_∞ := min_{x ∈ C, 1^T x = 1, x ≥ 0} f(x) + λ / max_i x_i.  (1)

The function 1/max_i x_i is concave, and hence the above lower-bounding problem is not a convex optimization problem. However, below we show that it can be solved exactly using convex programming.

Proposition 2.1. The lower-bounding problem defined by p*_∞ can be globally solved using the following n convex programs in n + 1 dimensions:

p* ≥ p*_∞ = min_{i=1,...,n} min_{x ∈ C, 1^T x = 1, x ≥ 0, t ≥ 0} { f(x) + t : x_i ≥ λ/t }.  (2)

Note that the constraint x_i ≥ λ/t is jointly convex since 1/t is convex in t ∈ R_+, and it can be handled in most general-purpose convex optimizers, e.g. cvx, using either the positive inverse function or rotated cone constraints.

Proof.

p*_∞ = min_{x ∈ C, 1^T x = 1, x ≥ 0} f(x) + min_i λ/x_i  (3)
     = min_i min_{x ∈ C, 1^T x = 1, x ≥ 0} f(x) + λ/x_i  (4)
     = min_i min_{x ∈ C, 1^T x = 1, x ≥ 0, t ≥ 0} f(x) + t  s.t.  λ/x_i ≤ t  (5)

The above formulation can be used to efficiently approximate the original cardinality-constrained problem by lower-bounding it, for arbitrary convex f and C. In the next section we show how to compute the quality of the approximation.

2.1 Computing a bound on the quality of approximation

By virtue of being a relaxation of the original cardinality problem, we have the following remarkable property.
Let x̂ be an optimal solution to the convex program p*_∞; then we have the relation

f(x̂) + λ card(x̂) ≥ p* ≥ p*_∞.  (6)

Since the left-hand side and the right-hand side of the above bound are readily available once p*_∞ defined in (2) is solved, we immediately have a bound on the quality of the relaxation. More specifically, the relaxation is exact, i.e., we find a solution of the original cardinality-penalized problem, if the following holds:

f(x̂) + λ card(x̂) = p*_∞.

It should be noted that for general cardinality-penalized problems, the ℓ1 heuristic does not yield such a quality bound, since it is neither a lower nor an upper bound in general. Moreover, most of the known equivalence conditions for ℓ1 heuristics, such as the Restricted Isometry Property and its variants, are NP-hard to check. A remarkable property of the proposed scheme is therefore that it comes with a simple computable bound on the quality of approximation.

3 Recovering a Sparse Measure

Suppose that µ is a discrete probability measure and we would like to find the sparsest measure satisfying some arbitrary moment constraints:

p* = min_µ card(µ) : E_µ[X_i] = b_i, i = 1, . . . , m,

where the X_i are random variables and E_µ denotes expectation with respect to the measure µ. One motivation for the above problem is the fact that it upper-bounds the minimum entropy power problem:

p* ≥ min_µ exp H(µ) : E_µ[X_i] = b_i, i = 1, . . . , m,

where H(µ) := −Σ_i µ_i log µ_i is the Shannon entropy. Both of the above problems are non-convex and in general very hard to solve.
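Before proceeding, the elementary bound of Section 2, 1/max_i x_i ≤ card(x) on the simplex, is easy to verify numerically. The following is an illustrative sketch of our own (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    k = int(rng.integers(1, 10))      # intended cardinality of the vector
    x = np.zeros(10)
    support = rng.choice(10, size=k, replace=False)
    w = rng.random(k)
    x[support] = w / w.sum()          # random point on the probability simplex
    card = np.count_nonzero(x)
    # since the entries sum to 1 over card(x) nonzeros, max_i x_i >= 1/card(x),
    # hence 1/max_i x_i is a lower bound on the cardinality
    assert 1.0 / x.max() <= card + 1e-9
```

The bound is tight exactly when the nonzero weights are uniform on the support, which is why it serves as a meaningful surrogate for cardinality on the simplex.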
When viewed as a finite-dimensional optimization problem, the minimum cardinality problem can be cast as a linear sparse recovery problem:

p* = min_{1^T x = 1, x ≥ 0} card(x) : Ax = b.  (7)

As noted previously, applying the ℓ1 heuristic does not work, and it does not even yield a unique solution when the problem is underdetermined, since it simply solves a feasibility problem:

p*_1 = min_{1^T x = 1, x ≥ 0} ∥x∥_1 : Ax = b  (8)
     = min_{1^T x = 1, x ≥ 0} 1 : Ax = b,  (9)

and recovers the true minimum cardinality solution if and only if the set {x : 1^T x = 1, x ≥ 0, Ax = b} is a singleton. This condition may hold in some cases, e.g. when the first 2k − 1 moments are available, in which case A is a Vandermonde matrix and k = card(x) [6]. In general, however, this set is a polyhedron containing dense vectors. Below we show how the proposed scheme applies to this problem. Using the general form in (2), the proposed relaxation is given by

(p*)^{-1} ≤ (p*_∞)^{-1} = max_{i=1,...,n} max_{1^T x = 1, x ≥ 0} x_i : Ax = b,  (10)

which can be solved very efficiently as n linear programs in n variables. The total complexity is at most O(n^4) using a primal-dual LP solver. It is easy to check that strong duality holds, and the dual problems are given by

(p*_∞)^{-1} = max_{i=1,...,n} min_{w, λ} w^T b + λ : A^T w + λ1 ≥ e_i,  (11)

where 1 is the all-ones vector and e_i is all zeros with a one in the i-th coordinate.

3.1 An alternative minimal cardinality selection scheme

When the desired criterion is to find a minimum cardinality probability vector satisfying Ax = b, the following alternative selection scheme offers a further refinement: pick the lowest-cardinality solution among the n linear programming solutions. Define

x̂_i := arg max_{1^T x = 1, x ≥ 0} x_i : Ax = b  (12)
x̂_min := arg min_{i=1,...,n} card(x̂_i).  (13)

The following theorem gives a sufficient condition for the recovery of a sparse measure using the above method.

Theorem 3.1. Assume that the solution to p* in (7) is unique and given by x*.
If the following condition holds:

min { x_i : A_S x = A_{S^c} y, 1^T x = 1, y ≥ 0, 1^T y = 1 } > 0,

where b = Ax*, A_S is the submatrix containing the columns of A corresponding to the non-zero elements of x*, and A_{S^c} is the submatrix of the remaining columns, then the convex linear program

max_{1^T x = 1, x ≥ 0} x_i : Ax = b

has a unique solution given by x*.

Let Conv(a_1, . . . , a_m) denote the convex hull of the m vectors {a_1, . . . , a_m}. The following corollary gives a geometric condition for recovery.

Corollary 3.2. If Conv(A_{S^c}) does not intersect an extreme point of Conv(A_S), then x̂_min = x*, i.e., we recover the minimum cardinality solution using n linear programs.

Proof Outline: Consider the k-th inner linear program defined in the problem p*_∞. Using the optimality conditions of the primal-dual linear program pair in (10) and (11), it can be shown that the existence of a pair (w, λ) satisfying

A_S^T w + λ1 = e_k  (14)
A_{S^c}^T w + λ1 > 0  (15)

implies that the support of the solution of the linear program is exactly equal to the support of x*; in particular they have the same cardinality. Since the solution of p* is unique and has minimum cardinality, we conclude that x* is indeed the unique solution of the k-th linear program. Applying Farkas' lemma and duality theory, we arrive at the conditions stated in Theorem 3.1. The corollary follows by observing that the condition of Theorem 3.1 is satisfied whenever Conv(A_{S^c}) does not intersect an extreme point of Conv(A_S). Finally, observe that if any of the n linear programs recovers the minimal cardinality solution, then x̂_min = x*, since card(x̂_min) ≤ card(x̂_k) for all k.

3.2 Noisy measure recovery

When the data contain noise and inaccuracies, as is the case when empirical moments are used instead of exact moments, we propose the following noise-aware robust version, which follows from the general recipe given in the first section:

min_{i=1,...,n} min_{1^T x = 1, x ≥ 0, t ≥ 0} ∥Ax − b∥_2^2 + t : x_i ≥ λ/t,  (16)

where λ ≥ 0 is a penalty parameter encouraging sparsity.
The above problem can be solved using n second-order cone programs in n + 1 variables, and hence has O(n^4) worst-case complexity. The proposed measure recovery algorithms are investigated and compared with a known suboptimal heuristic in Section 6.

4 Convex Clustering

In this section we base our discussion on the exemplar-based convex clustering framework of [8]. Given a set of data points {z_1, . . . , z_n} of d-dimensional vectors, the task of clustering is to fit a mixture probability model maximizing the log-likelihood function

L := (1/n) Σ_{i=1}^n log Σ_{j=1}^k x_j f(z_i; m_j),

where f(z; m) is an exponential family distribution on Z with parameter m, and x is a k-dimensional vector on the probability simplex denoting the mixture weights. For the standard multivariate normal distribution we have f(z_i; m_j) = exp(−β∥z_i − m_j∥_2^2) for some parameter β > 0. As in [8], we further assume that the mean parameter m_j is one of the examples z_i, which is unknown a priori. This assumption simplifies the log-likelihood, whose data dependence is now only through a kernel matrix K_ij := exp(−β∥z_i − z_j∥_2^2):

L = (1/n) Σ_{i=1}^n log Σ_{j=1}^k x_j exp(−β∥z_i − z_j∥_2^2)  (17)
  = (1/n) Σ_{i=1}^n log Σ_{j=1}^k x_j K_ij.  (18)

Partitioning the data {z_1, . . . , z_n} into a few clusters is equivalent to having a sparse mixture x, i.e., each example is assigned to a few centers (which are themselves examples). Therefore, to cluster the data we propose to approximate the following cardinality-penalized problem:

p*_c := max_{1^T x = 1, x ≥ 0} Σ_{i=1}^n log Σ_{j=1}^k x_j K_ij − λ card(x).  (19)

As hinted previously, the above problem can be seen as a lower bound for the entropy-penalized problem

p*_c ≤ max_{1^T x = 1, x ≥ 0} Σ_{i=1}^n log Σ_{j=1}^k x_j K_ij − λ exp H(x),  (20)

where H(x) is the Shannon entropy of the mixture probability vector.
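As a concrete illustration of the objective in Eqs. (18)-(19), the kernel matrix and the cardinality-penalized clustering objective can be evaluated as follows. This is our own sketch, not the authors' code; here every example is allowed to serve as a center, so k = n, and all parameter values are arbitrary:

```python
import numpy as np

def clustering_objective(Z, x, beta=1.0, lam=1.0):
    """Cardinality-penalized convex-clustering objective, Eq. (19)."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-beta * d2)                  # K_ij = exp(-beta ||z_i - z_j||^2)
    return np.log(K @ x).sum() - lam * np.count_nonzero(x)

rng = np.random.default_rng(0)
Z = rng.standard_normal((20, 2))            # toy data, one candidate center per example
n = len(Z)
uniform = np.ones(n) / n                    # dense mixture: pays a penalty of lam * n
one_hot = np.zeros(n); one_hot[0] = 1.0     # sparsest mixture: pays a penalty of lam * 1
# With lam = 0 the objective is a sum of logs of values in (0, 1], hence non-positive
assert clustering_objective(Z, uniform, lam=0.0) <= 0.0
# The cardinality penalty separates dense and sparse mixtures by lam * (n - 1)
gap = clustering_objective(Z, uniform, lam=1.0) - clustering_objective(Z, uniform, lam=0.0)
assert np.isclose(gap, -n)
```

The mixture vector x, not the data, carries the sparsity: a one-hot x corresponds to a single cluster center, while the uniform x uses every example as a center.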
Applying our convexification strategy, we arrive at another upper bound which can be computed via convex optimization:

p*_c ≤ p*_∞ := max_{1^T x = 1, x ≥ 0} Σ_{i=1}^n log Σ_{j=1}^k x_j K_ij − λ / max_i x_i.  (21)

We investigate this approach in a numerical example in Section 6 and compare it with the well-known soft k-means algorithm.

5 Algorithms

5.1 Exponentiated Gradient

Exponentiated gradient [7] is a proximal algorithm for optimization over the probability simplex which employs the Kullback-Leibler divergence D(x, y) = Σ_i x_i log(x_i/y_i) between two probability distributions. For minimizing a convex function ψ, the exponentiated gradient updates are given by

x^{k+1} = arg min_x ψ(x^k) + ∇ψ(x^k)^T (x − x^k) + (1/α) D(x, x^k).

When applied to the general form (2), this yields the following updates for solving the i-th problem of p*_∞:

x_j^{k+1} = r_j^k x_j^k / Σ_l r_l^k x_l^k,

where the weights r^k are exponentiated negative gradients of the objective ψ(x) = f(x) + λ/x_i:

r_j^k = exp(−α ∇_j ψ(x^k)),  with ∇_j ψ(x^k) = ∇_j f(x^k) − (λ/(x_i^k)^2) δ_{ij}.

We also note that the above updates can be run in parallel for the n convex programs, and they are guaranteed to converge to the optimum.

6 Numerical Results

6.1 Recovering a Measure from Gaussian Measurements

Here we show that the proposed recovery scheme recovers a sparse measure exactly with overwhelming probability when the matrix A ∈ R^{m×n} is chosen from the independent Gaussian ensemble, i.e., A_{i,j} ∼ N(0, 1) i.i.d. As an alternative method we consider a commonly employed simple heuristic for optimizing over a probability measure, which first drops the constraint 1^T x = 1, solves the corresponding ℓ1-penalized problem, and finally rescales the optimal x so that 1^T x = 1. In the worst case, this procedure recovers the true solution whenever minimizing the ℓ1 norm recovers the solution, i.e., when there is only one feasible vector satisfying Ax = b, x ≥ 0, and 1^T x = 1. This is clearly a suboptimal approach, and we will refer to it as the rescaling heuristic.
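The exponentiated gradient updates of Section 5.1 are easy to prototype. The following is our own sketch with hypothetical parameter choices; it runs EG on the i-th subproblem min f(x) + λ/x_i over the simplex, for a simple quadratic f:

```python
import numpy as np

def exponentiated_gradient(grad_f, i, n, lam=0.01, alpha=0.1, iters=300):
    """Minimize f(x) + lam / x_i over the probability simplex via EG updates."""
    x = np.ones(n) / n                       # start at the simplex barycenter
    for _ in range(iters):
        g = grad_f(x)
        g[i] -= lam / x[i] ** 2              # gradient of the lam / x_i term
        x = x * np.exp(-alpha * g)           # multiplicative (exponentiated) step
        x /= x.sum()                         # renormalize onto the simplex
    return x

c = np.array([0.1, 0.2, 0.3, 0.4])
grad_f = lambda x: x - c                     # f(x) = 0.5 * ||x - c||^2
x = exponentiated_gradient(grad_f, i=3, n=4)
assert np.isclose(x.sum(), 1.0) and np.all(x > 0)
obj = lambda x: 0.5 * np.sum((x - c) ** 2) + 0.01 / x[3]
assert obj(x) <= obj(np.ones(4) / 4)         # objective decreased from the starting point
```

Note that the multiplicative update keeps every coordinate strictly positive, so the 1/x_i term never diverges along the iterates; the n subproblems are independent and can be run in parallel, as the text remarks.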
We set n = 50, randomly pick a probability vector x* which is k-sparse, let b = Ax* be m noiseless measurements, and then check the probability of recovery, i.e., x̂ = x*, where x̂ is the solution to

max_{i=1,...,n} max_{1^T x = 1, x ≥ 0} x_i : Ax = b.  (22)

Figure 2(a) shows the probability of exact recovery as a function of m, the number of measurements, over 100 independent realizations of A, for the proposed LP formulation and the rescaling heuristic. As can be seen in Figure 2(a), the proposed method recovers the correct measure with probability almost 1 when m ≥ 5. Quite interestingly, the rescaling heuristic fails to recover the true measure with high probability even for a cardinality-2 vector. We then add normally distributed noise with standard deviation 0.1 to the observations and solve

min_{i=1,...,n} min_{1^T x = 1, x ≥ 0, t ≥ 0} ∥Ax − b∥_2^2 + t : x_i ≥ λ/t.  (23)

We compare the above approach with the corresponding rescaling heuristic, which first solves a nonnegative Lasso,

min_{x ≥ 0} ∥Ax − b∥_2^2 + λ∥x∥_1,  (24)

and then rescales x so that 1^T x = 1. For each realization of A and measurement noise, we run both methods using a primal-dual interior point solver for 30 equally spaced values of λ ∈ [0, 10] and record the minimum error ∥x̂ − x*∥_1. The average error over 100 realizations is shown in Figure 2(b). As can be seen in the figure, the proposed scheme clearly outperforms the rescaling heuristic, since it can exploit the fact that x is on the probability simplex without trivializing its complexity regularizer.
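The noiseless recovery experiment just described can be reproduced in miniature with off-the-shelf LP software. The sketch below is our own illustration (not the authors' code) using SciPy's `linprog`; the function name and the small test matrix are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

def max_coordinate_lp(A, b):
    """Solve max x_i s.t. Ax = b, 1^T x = 1, x >= 0 for each i (Eq. 22),
    returning the best objective value and the corresponding solution."""
    m, n = A.shape
    A_eq = np.vstack([A, np.ones((1, n))])   # moment constraints plus simplex sum
    b_eq = np.append(b, 1.0)
    best_val, best_x = -np.inf, None
    for i in range(n):
        c = np.zeros(n)
        c[i] = -1.0                           # linprog minimizes, so negate x_i
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        if res.success and -res.fun > best_val:
            best_val, best_x = -res.fun, res.x
    return best_val, best_x

# Tiny sanity check with a sparse measure of our own choosing
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 8))
x_true = np.zeros(8); x_true[[2, 5]] = [0.7, 0.3]
b = A @ x_true
val, x_hat = max_coordinate_lp(A, b)
assert abs(x_hat.sum() - 1) < 1e-6 and np.allclose(A @ x_hat, b, atol=1e-6)
assert val >= x_true.max() - 1e-6   # the true measure is feasible, so its max is a lower bound
```

The assertions check only feasibility and the optimal-value bound, which are guaranteed; whether the optimizer is exactly x_true depends on the realization of A, as in the paper's Figure 2(a).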
Figure 2: A comparison of the exact recovery probability in the noiseless setting (a) and the estimation error in the noisy setting (b) for the proposed approach and the rescaled ℓ1 heuristic. (a) Probability of exact recovery in 100 independent trials of A, as a function of m, the number of measurements (moment constraints). (b) Average error ∥x̂ − x*∥_1 for noisy recovery, as a function of m.

6.2 Convex Clustering

We generate synthetic data using a Gaussian mixture of 10 components with identity covariances and cluster the data using the proposed method; the resulting clusters given by the mixture density are presented in Figure 3. The centers of the circles represent the means of the mixture components, and the radii are proportional to the respective mixture weights. We then repeat the clustering procedure using the well-known soft k-means algorithm and present the results in Figure 4. As can be seen from the figures, the proposed convex relaxation is able to penalize the cardinality of the mixture probability vector and produces clusters significantly better than the soft k-means algorithm. Note that soft k-means is a non-convex procedure whose performance depends heavily on the initialization, whereas the proposed approach is convex and hence insensitive to initialization. Note also that in [8] the number of clusters is adjusted indirectly by varying the β parameter of the distribution. In contrast, our approach implicitly optimizes the likelihood/cardinality tradeoff by varying λ. Hence, when the number of clusters is unknown, choosing a value of λ is usually easier than specifying a value of k for the k-means algorithm.
7 Conclusions and Future Directions

We presented a convex cardinality penalization scheme for problems constrained to the probability simplex. We then derived a sufficient condition for recovering the sparsest probability measure in an affine space using the proposed method. The geometric interpretation suggests that it holds for a large class of matrices. An open theoretical question is to analyze the probability of exact recovery for a normally distributed A. Another interesting direction is to extend the recovery analysis to the noisy setting and to arbitrary functions, such as the log-likelihood in the clustering example. There may also be other problems where the proposed approach could be practically useful, such as portfolio optimization, where a sparse convex combination of assets is sought, or sparse multiple kernel learning.

Figure 3: Proposed convex clustering scheme. (a) λ = 1000. (b) λ = 300. (c) λ = 100. (d) λ = 45.

Figure 4: Soft k-means algorithm. (a) k = 3. (b) k = 4. (c) k = 8. (d) k = 10.

Acknowledgements

This work is partially supported by the National Science Foundation under Grants No. CMMI-0969923, FRG-1160319, and SES-0835531, as well as by a University of California CITRIS seed grant and a NASA grant No. NAS2-03144. The authors would like to thank the Area Editor and the reviewers for their careful review of our submission.

References

[1] E.J. Candès and T. Tao. "Decoding by linear programming". IEEE Trans. Inform. Theory, 51:4203-4215, 2005.

[2] S. Chen, D.
Donoho, and M. Saunders. "Atomic decomposition by basis pursuit". SIAM Review, 43(1):129-159, 2001.

[3] A. Bruckstein, D. Donoho, and M. Elad. "From sparse solutions of systems of equations to sparse modeling of signals and images". SIAM Review, 2007.

[4] V. Chandrasekaran, B. Recht, P.A. Parrilo, and A.S. Willsky. "The convex algebraic geometry of linear inverse problems". In Communication, Control, and Computing (Allerton), 48th Annual Allerton Conference on, pages 699-703, 2010.

[5] S. Boyd and L. Vandenberghe. "Convex Optimization". Cambridge, U.K.: Cambridge Univ. Press, 2003.

[6] A. Cohen and A. Yeredor. "On the use of sparsity for recovering discrete probability distributions from their moments". In Statistical Signal Processing Workshop (SSP), IEEE, 2011.

[7] J. Kivinen and M. Warmuth. "Exponentiated gradient versus gradient descent for linear predictors". Information and Computation, 132(1):1-63, 1997.

[8] D. Lashkari and P. Golland. "Convex clustering with exemplar-based models". In NIPS, 2008.
A mechanistic model of early sensory processing based on subtracting sparse representations Shaul Druckmann* Tao Hu* Dmitri B. Chklovskii * - Equal contribution Janelia Farm Research Campus {druckmanns, hut, mitya}@janelia.hhmi.org Abstract Early stages of sensory systems face the challenge of compressing information from numerous receptors onto a much smaller number of projection neurons, a so-called communication bottleneck. To make more efficient use of limited bandwidth, compression may be achieved using predictive coding, whereby predictable, or redundant, components of the stimulus are removed. In the case of the retina, Srinivasan et al. (1982) suggested that feedforward inhibitory connections subtracting a linear prediction generated from nearby receptors implement such compression, resulting in biphasic center-surround receptive fields. However, feedback inhibitory circuits are common in early sensory circuits, and furthermore their dynamics may be nonlinear. Can such circuits implement predictive coding as well? Here, by solving the transient dynamics of nonlinear reciprocal feedback circuits through an analogy to a signal-processing algorithm called linearized Bregman iteration, we show that nonlinear predictive coding can be implemented in an inhibitory feedback circuit. In response to a step stimulus, interneuron activity in time constructs progressively less sparse but more accurate representations of the stimulus, a temporally evolving prediction. This analysis provides a powerful theoretical framework for interpreting and understanding the dynamics of early sensory processing in a variety of physiological experiments and yields novel predictions regarding the relation between activity and stimulus statistics.
1 Introduction

Receptor neurons in early sensory systems are more numerous than the projection neurons that transmit sensory information to higher brain areas, implying that sensory signals must be compressed to pass through a limited bandwidth channel known as "Barlow's bottleneck" [1]. Since natural signals arise from physical objects, which are contiguous in space and time, they are highly spatially and temporally correlated [2-4]. Such signals are ideally suited for predictive coding, a compression strategy borrowed from engineering whereby redundant, or predictable, components of the signal are subtracted and only the residual is transmitted [5]. Consider, for example, the processing of natural images in the retina. Instead of transmitting photoreceptor signals, which are highly correlated in space and time, ganglion cells can transmit differences in signal between nearby pixels or consecutive time points. The seminal work of Srinivasan et al. introduced predictive coding to neuroscience, proposing that feedforward inhibition could implement predictive coding by subtracting a prediction for the activity of a given photoreceptor generated from the activity of nearby receptors [6]. Indeed, the well-known center-surround spatial receptive fields and biphasic temporal receptive fields of ganglion cells [7] may be viewed as evidence of predictive coding, because they effectively code such differences [6, 8-10]. Although the Srinivasan et al. model captured the essence of predictive coding, it does not reflect two important biological facts. First, in the retina and other early sensory systems, inhibition has a significant feedback component [11-13]. Second, interneuron transfer functions are often non-linear [14-16]. Here, we demonstrate that feedback circuits can be viewed as implementing predictive coding.
Surprisingly, by taking advantage of recent developments in applied mathematics and signal processing, we are able to solve the non-linear recurrent dynamics of such a circuit for an arbitrary number of sensory channels and interneurons, allowing us to address in detail the circuit dynamics and consequently the temporal and stimulus dependencies. Moreover, introducing nonlinear feedback dramatically changes the nature of the predictions. Instead of a static relation between stimulus and prediction, we find that the prediction becomes both stimulus and time dependent.

2 Model

2.1 Dynamics of the linear single-channel feedback circuit

We start by considering predictive coding in feedback circuits, where principal neurons are reciprocally connected with inhibitory interneurons forming a negative feedback loop. Much of the intuition can be developed from linear circuits, and we start from this point. Consider a negative feedback circuit composed of a single principal neuron, p, and a single interneuron, n (Fig. 1a). Assuming that both types of neurons are linear first-order elements, their dynamics are given by:

C_m^p dp/dt = −g_m^p p(t) + g_s^p [s(t) − w n(t)],
C_m^n dn/dt = −g_m^n n(t) + g_s^n w p(t),  (1)

where g_m is the membrane conductance (inverse of membrane resistance), C_m the membrane capacitance, g_s the synaptic conductance, the superscript designates the neuron class (principal or interneuron), and w in the second equation is the weight of the synapse from the principal neuron to the interneuron. For simplicity, we assumed that the weight of the synapse from the interneuron to the principal neuron has the same magnitude but negative sign, −w. Although we do not necessarily expect the brain to fully reconstruct the stimulus on the receiving side, we must still ensure that the transmitted signal is decodable. To guarantee that this is the case, the prediction made by the interneuron must be strictly causal.
In other words, there must be a delay between the input to the interneuron, w p(t), and the output of the interneuron, n(t + δ). Given that the feedback signal passes through a synapse, such a delay is biologically plausible. When discussing analytical solutions below, we let δ → 0 to avoid clutter and do not explicitly indicate the time dependence of p, s, and n. By rearranging the terms in Eq. 1 we obtain:

τ_p dp/dt = −p + (g_s^p / g_m^p)(s − w n),
τ_n dn/dt = −n + (g_s^n / g_m^n) w p,  (2)

where τ = RC is the membrane time constant. Since principal neurons should be able to transmit fast changes in the stimuli, we assume that the time constant of the principal cells is small compared to that of the interneurons. Therefore, we can assume that the first equation reaches equilibrium instantaneously:

p = α (s − w n),
τ_n dn/dt = −n + (g_s^n / g_m^n) w p,  (3)

where we defined α = g_s^p / g_m^p. As the purpose of interneuron integration is to construct a stimulus representation, the integration time should be on the order of the auto-correlation time of the stimulus. Since here we study the simplified case of the semi-infinite step stimulus, the time constant of the interneuron should approach infinity. We assume this occurs by the interneurons having a very large membrane resistance (correspondingly, a very small conductance) and moderate capacitance. Therefore, the leakage term, −n, which is the only term in the second line of Eq. 3 that does not grow with the membrane resistance, can be neglected in the dynamics of the interneurons. Under this assumption, and substituting the first equation into the second, we find:

p = α (s − w n),
C_m^n dn/dt = g_s^n w α (s − w n).  (4)

Defining the effective time constant τ = C_m^n / g_s^n, we have:

p = α (s − w n),
τ dn/dt = w p.  (5)

In response to a step stimulus, s(t) = θ(t) s, where θ(t) is the Heaviside function, the dynamics of Eq. 5 are straightforward to solve, yielding:

n(t) = (s/w) θ(t) [1 − exp(−w²α t / τ)],
p(t) = α s θ(t) exp(−w²α t / τ)
(6)

The interneuron's activity, n(t), grows with time as it integrates the output of the principal neuron, p(t), Fig. 1a. In turn, the principal neuron's output, p(t), is the difference between the incoming stimulus and the interneuron's activity, n(t), i.e. a residual, which decays with time from the onset of the stimulus. In the limit considered here (infinite interneuron time constant), the interneuron's feedback will approach the incoming stimulus and the residual will decay to zero. To summarize, one can view the interneuron's activity as a series of progressively more accurate predictions of the stimulus. The principal neuron subtracts these predictions and sends the series of residuals to higher brain areas, a more efficient approach than direct transmission (Fig. 1a).

Figure 1: Schematic view of early processing in a single sensory channel in response to a step stimulus. a. A predictive coding model consists of a coding circuit, a transmission channel and, for theoretical analysis only, a virtual decoding circuit. Coding is performed in a negative feedback circuit containing a principal neuron, p, and an inhibitory interneuron, n. In response to a step stimulus (top left) the interneuron charges up with time (top right) until it reaches the value of the stimulus. The principal neuron (middle left) transmits the difference between the interneuron activity and the stimulus, resulting in a transient signal. b. Direct transmission.

The transient response to a step stimulus (Fig. 1a left) is consistent with electrophysiological measurements from principal neurons in the invertebrate and vertebrate retina [10, 17]. For example, in flies, cells post-synaptic to photoreceptors (the LMCs) have graded potential responses consistent with Equation 5. In the vertebrate retina, most recordings are performed on ganglion cells, which read out signals from bipolar cells. In response to a step stimulus, the firing rate of ganglion cells is consistent with Equation 6 [17].
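The single-channel dynamics of Eq. 5 and their closed-form step response in Eq. 6 can be checked with a few lines of forward-Euler simulation. This is our own sketch; the parameter values are arbitrary:

```python
import numpy as np

# Hypothetical parameters: gain alpha, synaptic weight w, interneuron time constant tau
alpha, w, tau, s = 1.0, 1.0, 1.0, 2.0
dt, T = 1e-3, 5.0
n = 0.0
for _ in range(int(T / dt)):
    p = alpha * (s - w * n)        # principal neuron: instantaneous residual (Eq. 5)
    n += dt * w * p / tau          # interneuron integrates the residual
# Analytical step response of Eq. 6 (for t > 0)
n_exact = (s / w) * (1 - np.exp(-w**2 * alpha * T / tau))
p_exact = alpha * s * np.exp(-w**2 * alpha * T / tau)
assert abs(n - n_exact) < 1e-2
assert abs(alpha * (s - w * n) - p_exact) < 1e-2
```

As the text describes, the interneuron charges up toward s/w while the transmitted residual decays exponentially toward zero.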
2.2 Dynamics of the linear multi-channel feedback circuit

In most sensory systems, stimuli are transmitted along multiple parallel sensory channels, such as mitral cells in the olfactory bulb or bipolar cells in the retina. Although a circuit could implement predictive coding by replicating the negative feedback loop in each channel, this solution is likely suboptimal due to the contiguous nature of objects in space, which often results in stimuli correlated across different channels. Therefore, interneurons that combine inputs across channels may generate an accurate prediction more rapidly. The dynamics of a multichannel linear negative feedback circuit are given by:

p = s − W n,
τ dn/dt = W^T p,  (7)

where boldface lowercase letters are column vectors representing the stimulus, s = (s_1, s_2, s_3, …)^T, the activity of principal neurons, p, and of interneurons, n, Fig. 2a. Boldface uppercase letters designate synaptic weight matrices. Synaptic weights from principal neurons to interneurons are W^T, and synaptic weights from interneurons to principal neurons are, for simplicity, symmetric but with negative sign, −W. Such symmetry was suggested for the olfactory bulb, considering dendrodendritic synapses [18]. Each column of W contains the weights of synapses from correlated principal neurons to a given interneuron, thus defining that interneuron's feature vector (Fig. 2b). The linear dynamics of the feedback circuit in response to a multi-dimensional step stimulus can be solved in the standard manner, similarly to Equation 6:

n(t) = (W^T W)^{-1} [I − exp(−W^T W t/τ)] W^T s,
p(t) = s − W (W^T W)^{-1} [I − exp(−W^T W t/τ)] W^T s,  (8)

provided W^T W is invertible. When the matrix W^T W
is not full rank, for instance if the number of interneurons exceeds the number of sensory channels, the solution of Equation 7 is given by:

n(t) = W^T (W W^T)^{-1} [I − exp(−W W^T t/τ)] s,
p(t) = exp(−W W^T t/τ) s.  (9)

Recapitulating the equations in words: as above, one can view the interneurons' activity as a series of progressively more accurate stimulus predictions, ŝ = W n. The principal neurons send the series of residuals of these predictions, p = s − ŝ, to higher brain areas, and the dynamics result in the transmitted residual decreasing in time [19-22] (Fig. 2c,d).

2.3 Dynamics of the non-linear multi-channel feedback circuit

Our solution of the circuit dynamics in the previous sub-section relied on the assumption that neurons act as linear elements, which, in view of the non-linearities of real neurons, represents a drastic simplification. We now extend this analysis to the non-linear circuit. A typical neural response non-linearity is the existence of a non-zero input threshold below which neurons do not respond. A pair of such on- and off-neurons is described by a threshold function (Fig. 2e) that has a "gap" or "deadzone" around zero activity and is not equivalent to a linear neuron:

Thresh_λ(n) = n − λ if n > λ;  0 if |n| ≤ λ;  n + λ if n < −λ.  (10)

Accordingly, the dynamics are given by:

p = s − W a,
τ dn/dt = W^T p,
a = Thresh_λ(n).  (11)

The central contribution of this paper is an analysis of predictive coding in a feedback circuit with threshold-linear interneurons, inspired by the equivalence of the network dynamics to a signal-processing algorithm called linearized Bregman iteration [23, 24]. Before showing the equivalence, we first describe linearized Bregman iteration. This algorithm constructs a faithful representation of an input as a linear sum over dictionary elements while minimizing the L1-L2 norm of the representation [25]. Formally, the problem is defined as follows: for

J(x) ≡ λ ∥x∥_1 + (1/2) ∥x∥_2²,   min_x J(x)  s.t.  W x = s.
(12)

Remarkably, this high-dimensional non-linear optimization problem can be solved by a simple iterative scheme (see Appendix):

n^{k+1} = n^k + δ W^T (s − W a^k),
a^{k+1} = Thresh_λ(n^{k+1}),  (13)

combining a linear step, which resembles gradient descent on the representation error, and a component-wise threshold-linear step. Eq. 11, the network dynamics, is the continuous version of the linearized Bregman iteration, Eq. 13. Intuitively speaking, the dynamics of the network play the role of the iterations of the algorithm. Having identified this equivalence, we are able to both solve and interpret the transient non-linear dynamics (see supplementary materials for further details). The analytical solution allows us a deeper understanding, for instance of the convergence of the algorithm. We note that if the interneuron feature vectors span the stimulus space, the steady-state activity will be zero for any stimulus and thus non-informative. Therefore, solving the transient dynamics, as opposed to just the steady-state activity [18, 19, 21, 26], was particularly crucial in this case. Next, we describe in words the mathematical expressions for the response of the feedback circuit to a step stimulus (see Supplement for the dynamics equations), Fig. 2f-g. Unlike in the linear circuit, interneurons do not inhibit principal neurons until their internal activity crosses threshold, Fig. 2f. Therefore, their internal activity initially grows at a rate proportional to the projection of the sensory stimulus onto their feature vectors, W^T s. With time, interneurons cross threshold and contribute to the stimulus representation, thereby constructing a more accurate representation of the stimulus, Fig. 2f,g. The first interneuron to cross threshold is the one for which the projection of the sensory stimulus onto its feature vector, (W^T s)_i, is highest. As its contribution is subtracted from the activity of the principal neurons, the driving force on the other interneurons, W^T (s − W a), changes.
Therefore, the order in which interneurons cross threshold also depends on the correlations between the feature vectors, Fig. 2b,f.

Figure 2. Predictive coding in a feedback circuit in response to a step stimulus at time zero. a. Circuit diagram of the feedback circuit. b. Stimulus (grayscale in black box, left) and a subset of the interneurons' feature vectors (grayscale in boxes). c-d. Response of the linear feedback circuit to a step stimulus at time zero, in interneurons (c) and principal neurons (d). e. Threshold-linear transfer function relating the internal, n, and external, a, activity of interneurons; dashed line shows the diagonal. Firing rates cannot be negative, and therefore the threshold-linear function should be thought of as combining a pair of on- and off-cells. f-h. Response of interneurons (f-g) and principal neurons (h) to a step stimulus at time zero. f. Expanded view of the internal activity of the interneurons (only some are shown; grayscale in boxes color-coded to match b) at early times. g. External activity of a larger subset of interneurons over a longer time period; grayscale boxes show the stimulus represented by the interneuron layer at various times marked by arrows. h. Principal neuron activity as a function of time.
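The threshold function of Eq. 10 (panel e of the figure) amounts to a componentwise soft threshold. A minimal numerical sketch (Python; the names are ours, and the on/off pair is folded into one signed output):

```python
import numpy as np

def thresh(n, lam):
    """Deadzone nonlinearity of Eq. 10: silent inside [-lam, lam],
    shifted-linear outside (on/off cell pair folded into a signed value)."""
    return np.sign(n) * np.maximum(np.abs(n) - lam, 0.0)

n = np.linspace(-2, 2, 9)
print(thresh(n, lam=1.0))  # zero inside the gap, slope-1 linear outside
```

The "gap" around zero is what makes the circuit genuinely non-linear: weakly driven interneurons stay silent and contribute nothing to the prediction.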
As interneurons cross threshold, they represent the stimulus more closely and cancel out more of the principal cell activity. Eventually, the interneuron representation (right box in g) is nearly identical to the stimulus, and the principal neurons' activity drops almost to zero. Collectively, the representation progresses from sparse to dense, but individual interneurons may first be active and then become silent. Eventually, the interneurons will accurately represent the input with their activity, $W a = s$, and will fully subtract it from the principal cells' activity, resulting in no further excitation of the interneurons, Fig. 2g,h. However, this description leads to an immediate puzzle. Namely, the algorithm builds a representation of the stimulus in the activity of interneurons. Yet, interneurons are local circuit elements whose activity is not transmitted outside the circuit. Why would a representation be built if it is available only locally within the neural circuit? The answer to this conundrum is found by considering the notion of predictive coding in early sensory circuits presented in the introduction. The interneurons serve as the predictor, and the principal neurons transmit the prediction residual. As expected in the framework of predictive coding, at each point in time the circuit subtracts the prediction, $p = W a$, constructed in the interneurons from previous incoming sensory signals, from the current sensory stimulus, and the principal neurons transmit the residual, $r = s - p$, to higher brain areas. We note that initially the interneurons are silent and the principal neurons transmit the stimulus directly. If there were no bandwidth limitation, the stimulus could be decoded from this initial transmission alone. However, the bandwidth limitation results in coarse, or noisy, principal neuron transmission, an issue we will return to later.
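Before turning to the results, the iterative scheme of Eq. 13 can be sketched numerically. In this toy sketch (Python), the dictionary W, the sparse ground truth, and the step-size and threshold values are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def thresh(n, lam):
    # componentwise deadzone nonlinearity (Eq. 10)
    return np.sign(n) * np.maximum(np.abs(n) - lam, 0.0)

def linearized_bregman(W, s, lam, delta, iters):
    """Iterate Eq. 13: n <- n + delta * W^T (s - W a), a <- Thresh_lam(n)."""
    n = np.zeros(W.shape[1])
    a = np.zeros(W.shape[1])
    for _ in range(iters):
        n = n + delta * W.T @ (s - W @ a)
        a = thresh(n, lam)
    return a

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 50)) / np.sqrt(20)  # overcomplete dictionary
a_true = np.zeros(50)
a_true[[3, 17, 31]] = [1.0, -2.0, 1.5]           # sparse ground truth
s = W @ a_true                                   # "stimulus"
a_hat = linearized_bregman(W, s, lam=0.05, delta=0.2, iters=5000)
print(np.linalg.norm(s - W @ a_hat) / np.linalg.norm(s))  # residual -> small
```

At convergence the representation satisfies Wa ≈ s, mirroring the interneurons eventually cancelling the stimulus drive to the principal neurons.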
3 Results

In neuroscience, the predictive coding strategy was originally suggested to allow efficient transmission through a limited-bandwidth channel (Srinivasan et al. [6]). Our main result is the solution of the transient dynamics given in the section above. Understanding the circuit dynamics in the predictive coding framework allows us to make a prediction about the length of the transient activity for different types of stimuli. Recall that the time from stimulus onset to cancellation of the stimulus depends on the rate of the interneurons' activation, which in turn is proportional to the projection of the stimulus on the interneurons' feature vectors. Presumably, interneuron feature vectors are adapted to the most common stimuli, e.g., natural images in the case of the retina; therefore this type of stimulus should be cancelled out relatively quickly. In contrast, non-natural stimuli, like white-noise patterns, will be less well captured by the interneuron receptive fields, and their activation will occur after a longer delay. Accordingly, it will take longer to cancel out non-natural stimuli, leading to longer principal neuron transients. Below, we show that the feedback circuit with threshold-linear neurons is indeed more efficient than the existing alternatives. We first consider a scenario in which an effective bandwidth limitation is imposed through the addition of noise. Second, we consider a more biologically relevant model, in which the transmission bandwidth is set by the discreteness of Poisson neural activity. We find that threshold-linear interneurons achieve more accurate predictions when faced with a stimulus corrupted by i.i.d. Gaussian noise. The intuition behind this result is that of sparse denoising [23].
Namely, if the signal can be expressed as a sparse sum of strongly activated dictionary elements, whereas the noise requires a large number of weakly activated elements, then thresholding the elements will suppress the noise more than the signal, yielding denoising. We note that this fact alone does not in itself argue for the biological plausibility of this network, but threshold-linear dynamics are a common approximation in neural networks.

Figure 3. Predictions by the negative feedback circuit. Left: Relative prediction error $\|s - p\|_2 / \|s\|_2$, where $p = W a$, as a function of time for a stimulus consisting of an image patch corrupted by i.i.d. Gaussian noise at every time point. Right: An image is sent through principal neurons whose transmission is Poisson. The reconstruction error as a function of time following the presentation of the stimulus is shown for the full non-linear negative feedback circuit (black), for a linear negative feedback circuit (red), for a direct transmission circuit (blue), and for a circuit in which the sparse approximation itself is transmitted instead of the residual (green). Time on the x-axis is measured in units of the time in which a single noisy transmission occurs. Inset shows a log-log plot.

In addition to considering transmission of stimuli corrupted by Gaussian noise, we also studied a different model in which the bandwidth limitation is set by the discreteness of spiking, modeled by a Poisson process. Although the discreteness of transmission can be overcome by averaging over time, this comes at the cost of longer perceptual delays, or lower transmission rates, as longer integration takes place. Therefore, we characterize transmission efficiency by the reconstruction error as a function of time, Fig. 3. We find that, for Poisson transmission, predictive coding provides more accurate stimulus reconstruction than direct transmission at all times except the brief interval until the first interneuron has crossed threshold (Fig. 3).
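The sparse-denoising intuition can be checked directly: soft-thresholding a vector with a few strong (signal) coefficients plus dense weak noise removes far more noise than signal. A toy sketch (sizes, amplitudes, and the threshold are arbitrary choices of ours):

```python
import numpy as np

def thresh(n, lam):
    # componentwise deadzone / soft threshold
    return np.sign(n) * np.maximum(np.abs(n) - lam, 0.0)

rng = np.random.default_rng(1)
signal = np.zeros(1000)
signal[:10] = 5.0                        # sparse, strongly activated elements
noise = 0.2 * rng.standard_normal(1000)  # dense, weakly activated elements
noisy = signal + noise

denoised = thresh(noisy, lam=1.0)
err_raw = np.linalg.norm(noisy - signal)
err_den = np.linalg.norm(denoised - signal)
print(err_raw, err_den)  # thresholding suppresses the dense noise floor
```

The weak noise coefficients fall inside the deadzone and are set to zero, while the strong signal coefficients survive (with a modest shrinkage bias), so the overall error drops.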
4 Discussion

By solving the dynamics of the negative feedback circuit through its equivalence to linearized Bregman iteration, we have shown that the development of activity in a simplified early sensory circuit can be viewed as implementing an efficient, non-linear, intrinsically parallel algorithm for predictive coding. Our study maps the steps of the algorithm onto specific neuronal substrates, providing a solid theoretical framework for understanding physiological experiments on early sensory processing, as well as for experimentally testing predictive coding ideas on a finer, more quantitative level. Recently, sparse representations were studied in a single-layer circuit with lateral inhibitory connections, proposed as a model of a different brain area, namely primary sensory cortex. That circuit constructs the stimulus representation in the projection neurons themselves and directly transmits it downstream [27, 28]. We believe it does not model early sensory systems as well as the negative feedback circuit, for several reasons. First, anatomical data are more consistent with a reciprocally connected interneuron layer than with lateral connections between principal neurons [11, 13]. Second, direct transmission of the representation would result in greater perceptual delays after stimulus onset, since no information is transmitted while the representation is being built up in the sub-threshold range. In contrast, in the predictive coding model the projection neurons pass forth (a coarse and possibly noisy version of) the input stimulus from the very beginning. We note that adding a non-linearity to the principal neurons would result in a delay in transmission in both models. Although there is no biological justification for introducing a threshold to interneurons only, the availability of an analytically solvable model justifies this abstraction. The dynamics of a circuit with a threshold on the principal neurons will be explored elsewhere.
From a computational point of view, there are three main advantages to overcompleteness in the negative feedback circuit. First, the delay until subtraction of the prediction, which occurs when the first interneuron crosses threshold, becomes briefer as the number of feature vectors grows, since the maximal projection of the stimulus onto the interneurons' feature vectors will be higher. Second, the larger the number of feature vectors, the fewer the interneurons with supra-threshold activity, which may be energetically more efficient. Third, if stimuli come from different statistical ensembles, it could be advantageous to have feature vectors tailored to the different stimulus ensembles, which may require more feature vectors, i.e., more interneurons than principal neurons. Our study considered responses to step-like stimuli. If the sensory environment changes on slow time scales, a series of step-like responses may be taken as an approximation to the true signal. Naturally, the extension of our framework to fully time-varying stimuli is an important research direction.

Acknowledgements

We thank S. Baccus, A. Genkin, V. Goyal, A. Koulakov, M. Meister, G. Murphy, D. Rinberg, and R. Wilson for valuable discussions and their input.

Appendix: Derivation of linearized Bregman iteration

Here, inspired by [23, 24], we solve the following basis-pursuit-like optimization problem: for

$J(a) \equiv \lambda \|a\|_1 + \tfrac{1}{2\delta} \|a\|_2^2, \qquad \min_a J(a) \ \ \text{s.t.} \ \ W a = s.$ (A1)

The idea behind linearized Bregman iteration is to start with $a^0 = 0$ and, at each iteration, to update $a$ so as to minimize a linearization of the squared representation error plus the Bregman distance from the previous value of $a$. Thus, we perform the following update:

$a^{k+1} = \arg\min_a \ D_J^{p^k}(a, a^k) + \langle a,\, W^\top (W a^k - s) \rangle,$ (A2)

where we use the notation $D_J^{p^k}(a, b)$ for the Bregman divergence [29] between the two points $a$ and $b$ induced by the convex function $J$. The Bregman divergence is an appropriate measure for such problems because it can handle the non-differentiable nature of the cost.
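For reference, the Bregman divergence used in Eq. A2 can be computed explicitly for the elastic-net cost of Eq. A1. In this sketch the subgradient at b is taken as λ·sign(b) + b/δ (a valid subgradient choice; parameter values are illustrative):

```python
import numpy as np

lam, delta = 0.5, 1.0

def J(a):
    # elastic-net cost of Eq. A1
    return lam * np.abs(a).sum() + np.dot(a, a) / (2 * delta)

def bregman_div(a, b):
    """D_J(a, b) = J(a) - J(b) - <p, a - b>, with p a subgradient of J at b.
    We take p = lam * sign(b) + b / delta (sign(0) = 0 is a valid choice)."""
    p = lam * np.sign(b) + b / delta
    return J(a) - J(b) - np.dot(p, a - b)

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.5, -1.0, 2.0])
print(bregman_div(a, b))  # nonnegative by convexity; zero when a == b
```

Unlike a norm, the divergence is asymmetric in its arguments, but it is always nonnegative and vanishes at a = b, which is what the iteration needs.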
It is defined by the following expression: $D_J^p(a, b) = J(a) - J(b) - \langle p, a - b \rangle$, where $p \in \partial J(b)$ is an element of the subgradient of $J$ at the point $b$. The Bregman divergence for the elastic-net cost function $J$ defined in Eq. A1 is:

$D_J^{p^k}(a, a^k) = \lambda \|a\|_1 - \lambda \|a^k\|_1 + \tfrac{1}{2\delta} \|a\|_2^2 - \tfrac{1}{2\delta} \|a^k\|_2^2 - \langle p^k, a - a^k \rangle,$ (A3)

where $p^k$ is a subgradient of $J$ at $a^k$. The condition for the minimum in Eq. A2 is:

$\partial \left[ \lambda \|a^{k+1}\|_1 + \tfrac{1}{2\delta} \|a^{k+1}\|_2^2 \right] \ni p^k + W^\top (s - W a^k),$ (A4)

where $\partial[\cdot]$ designates the subdifferential. Consistency of the iteration scheme requires that the update $p^{k+1}$ be a subgradient of $J$ as well:

$\partial \left[ \lambda \|a^{k+1}\|_1 + \tfrac{1}{2\delta} \|a^{k+1}\|_2^2 \right] \ni p^{k+1}.$ (A5)

By combining Eqs. A4, A5 we set:

$p^{k+1} = p^k + W^\top (s - W a^k).$ (A6)

By substituting Eq. A6 into Eq. A4 and simplifying, we get:

$a^{k+1} = \arg\min_a \ \lambda \|a\|_1 + \tfrac{1}{2\delta} \|a - \delta p^{k+1}\|_2^2,$ (A7)

which has the explicit solution:

$a^{k+1} = \mathrm{Thresh}_{\lambda\delta}(\delta p^{k+1}).$ (A8)

By defining $n^k = \delta p^k$ and substituting it into Eqs. A6, A8, we get:

$n^{k+1} = n^k + \delta\, W^\top (s - W a^k), \qquad a^{k+1} = \mathrm{Thresh}_{\lambda\delta}(n^{k+1}).$ (A9)

Eq. A9 is the linearized Bregman iteration algorithm (main text Eq. 13, with the threshold parameter there absorbing the factor $\delta$), thereby showing that the iterative scheme indeed finds a minimum of Eq. A2 at every step. The proof of convergence of the sequence [23, 24] is beyond the scope of this paper.

References

1. Barlow, H.B. and W.R. Levick, Threshold setting by the surround of cat retinal ganglion cells. The Journal of Physiology, 1976. 259(3): p. 737-57.
2. Dong, D.W. and J.J. Atick, Statistics of natural time-varying images. Network: Computation in Neural Systems, 1995. 6(3): p. 345-358.
3. Field, D.J., Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A, 1987. 4(12): p. 2379-94.
4. Ruderman, D.L. and W. Bialek, Statistics of natural images: Scaling in the woods. Physical Review Letters, 1994. 73(6): p. 814-817.
5. Elias, P., Predictive coding. IRE Transactions on Information Theory, 1955. 1(1): p. 16-24.
6. Srinivasan, M.V., S.B.
Laughlin, and A. Dubs, Predictive coding: a fresh view of inhibition in the retina. Proc R Soc Lond B Biol Sci, 1982. p. 427-59.
7. Victor, J.D., Temporal aspects of neural coding in the retina and lateral geniculate. Network: Computation in Neural Systems, 1999. 10(4): p. R1-R66.
8. Hosoya, T., S.A. Baccus, and M. Meister, Dynamic predictive coding by the retina. Nature, 2005. 436(7047): p. 71-7.
9. Huang, Y. and R.P.N. Rao, Predictive coding. Wiley Interdisciplinary Reviews: Cognitive Science, 2011. 2(5): p. 580-593.
10. Laughlin, S., A simple coding procedure enhances a neuron's information capacity. Zeitschrift für Naturforschung C, 1981. 36(9-10): p. 910-2.
11. Masland, R.H., The fundamental plan of the retina. Nature Neuroscience, 2001. 4(9): p. 877-86.
12. Olsen, S.R., V. Bhandawat, and R.I. Wilson, Divisive normalization in olfactory population codes. Neuron, 2010. 66(2): p. 287-99.
13. Shepherd, G.M., et al., The olfactory granule cell: from classical enigma to central role in olfactory processing. Brain Research Reviews, 2007. 55(2): p. 373-82.
14. Arevian, A.C., V. Kapoor, and N.N. Urban, Activity-dependent gating of lateral inhibition in the mouse olfactory bulb. Nature Neuroscience, 2008. 11(1): p. 80-7.
15. Baccus, S.A., Timing and computation in inner retinal circuitry. Annual Review of Physiology, 2007. 69: p. 271-90.
16. Rieke, F. and G. Schwartz, Nonlinear spatial encoding by retinal ganglion cells: when 1 + 1 ≠ 2. Journal of General Physiology, 2011. 138(3): p. 283-290.
17. Shapley, R.M. and J.D. Victor, The effect of contrast on the transfer properties of cat retinal ganglion cells. The Journal of Physiology, 1978. 285: p. 275-98.
18. Koulakov, A.A. and D. Rinberg, Sparse incomplete representations: a potential role of olfactory granule cells. Neuron, 2011. 72(1): p. 124-136.
19. Lee, D.D. and H.S. Seung, Unsupervised learning by convex and conic coding. Advances in Neural Information Processing Systems, 1997: p. 515-521.
20.
Lochmann, T. and S. Deneve, Neural processing as causal inference. Current Opinion in Neurobiology, 2011.
21. Olshausen, B.A. and D.J. Field, Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 1997. 37(23): p. 3311-25.
22. Rao, R.P.N. and D.H. Ballard, Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 1999. 2: p. 79-87.
23. Osher, S., et al., Fast linearized Bregman iteration for compressive sensing and sparse denoising. Communications in Mathematical Sciences, 2009.
24. Yin, W., et al., Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM Journal on Imaging Sciences, 2008. 1(1): p. 143-168.
25. Zou, H. and T. Hastie, Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2005. 67(2): p. 301-320.
26. Dayan, P., Recurrent sampling models for the Helmholtz machine. Neural Computation, 1999. 11(3): p. 653-78.
27. Rehn, M. and F.T. Sommer, A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. Journal of Computational Neuroscience, 2007. 22(2): p. 135-46.
28. Rozell, C.J., et al., Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 2008. 20(10): p. 2526-63.
29. Bregman, L.M., The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 1967. 7(3): p. 200-217.
Ensemble weighted kernel estimators for multivariate entropy estimation

Kumar Sricharan, Alfred O. Hero III
Department of EECS, University of Michigan, Ann Arbor, MI 48104
{kksreddy,hero}@umich.edu

Abstract

The problem of estimation of entropy functionals of probability densities has received much attention in the information theory, machine learning and statistics communities. Kernel density plug-in estimators are simple, easy to implement and widely used for estimation of entropy. However, for large feature dimension d, kernel plug-in estimators suffer from the curse of dimensionality: the MSE rate of convergence is glacially slow, of order $O(T^{-\gamma/d})$, where T is the number of samples, and $\gamma > 0$ is a rate parameter. In this paper, it is shown that for sufficiently smooth densities, an ensemble of kernel plug-in estimators can be combined via a weighted convex combination, such that the resulting weighted estimator has a superior parametric MSE rate of convergence of order $O(T^{-1})$. Furthermore, it is shown that these optimal weights can be determined by solving a convex optimization problem which does not require training data or knowledge of the underlying density, and can therefore be performed offline. This novel result is remarkable in that, while each of the individual kernel plug-in estimators belonging to the ensemble suffers from the curse of dimensionality, by appropriate ensemble averaging we can achieve parametric convergence rates.

1 Introduction

Non-linear entropy functionals of a multivariate density f of the form $\int g(f(x), x)\, f(x)\, dx$ arise in applications including machine learning, signal processing, mathematical statistics, and statistical communication theory. Important examples of such functionals include Shannon and Rényi entropy. Entropy based applications include image registration and texture classification, ICA, anomaly detection, data and image compression, testing of statistical models, and parameter estimation.
For details and other applications, see, for example, Beirlant et al. [2] and Leonenko et al. [18]. In these applications, the functional of interest must be estimated empirically from sample realizations of the underlying densities. Several estimators of entropy measures have been proposed for general multivariate densities f. These include consistent estimators based on histograms [10, 2], kernel density plug-in estimators, entropic graphs [5, 20], gap estimators [24] and nearest neighbor distances [8, 18, 19]. Kernel density plug-in estimators [1, 6, 11, 15, 12] are simple, easy to implement, computationally fast and therefore widely used for estimation of entropy [2, 23, 14, 4, 13]. However, these estimators suffer from mean squared error (MSE) rates which typically grow with feature dimension d as $O(T^{-\gamma/d})$, where T is the number of samples and $\gamma$ is a positive rate parameter.

In this paper, we propose a novel weighted ensemble kernel density plug-in estimator of entropy, $\hat G_w$, that achieves parametric MSE rates of $O(T^{-1})$ when the feature density is smooth. The estimator is constructed as a weighted convex combination $\hat G_w = \sum_{l \in \bar l} w(l)\, \hat G_{k(l)}$ of individual kernel density plug-in estimators $\hat G_{k(l)}$ with respect to the weights $\{w(l);\ l \in \bar l\}$. Here, $\bar l$ is a vector of indices $\{l_1, \ldots, l_L\}$ and $k(l) = l\sqrt{T/2}$ is proportional to the volume of the kernel bins used in evaluating $\hat G_{k(l)}$. The individual kernel estimators $\hat G_{k(l)}$ are similar to the data-split kernel estimator of Györfi and van der Meulen [11], and have slow MSE rates of convergence of order $O(T^{-1/(1+d)})$. Please refer to Section 2 for the exact definition of $\hat G_{k(l)}$. The principal result presented in this paper is as follows. It is shown that the weights $\{w(l);\ l \in \bar l\}$ can be chosen so as to significantly improve the rate of MSE convergence of the weighted estimator $\hat G_w$. In fact, our ensemble averaging method can improve the MSE convergence of $\hat G_w$ to the parametric rate $O(T^{-1})$.
These optimal weights can be determined by solving a convex optimization problem. Furthermore, this optimization problem does not involve any density-dependent parameters and can therefore be performed offline.

1.1 Related work

Ensemble based methods have been previously proposed in the context of classification. For example, in both boosting [21] and multiple kernel learning [16] algorithms, lower complexity weak learners are combined to produce classifiers with higher accuracy. Our work differs from these methods in several ways. First and foremost, our proposed method performs estimation rather than classification. An important consequence of this is that the weights we use are data independent, while the weights in boosting and multiple kernel learning must be estimated from training data, since they depend on the unknown distribution. Birgé and Massart [3] show that for a density f in a Hölder smoothness class with s derivatives, the minimax MSE rate for estimation of a smooth functional is $T^{-2\gamma}$, where $\gamma = \min\{1/2,\ 4s/(4s + d)\}$. This means that for $s > d/4$, parametric rates are achievable. The kernel estimators proposed in this paper require higher-order smoothness conditions on the density, i.e., the density must be $s > d$ times differentiable. While there exist other estimators [17, 7] that achieve parametric MSE rates of $O(1/T)$ when $s > d/4$, these estimators are more difficult to implement than kernel density estimators, which are a staple of many toolboxes in machine learning, pattern recognition, and statistics. The proposed ensemble weighted estimator is a simple weighted combination of off-the-shelf kernel density estimators.

1.2 Organization

The remainder of the paper is organized as follows. We formally describe the kernel plug-in entropy estimators in Section 2 and discuss the MSE convergence properties of these estimators. In particular, we establish that these estimators have an MSE rate which decays as $O(T^{-1/(1+d)})$.
Next, we propose the weighted ensemble of kernel entropy estimators in Section 3. Subsequently, we provide an MSE-optimal set of weights as the solution to a convex optimization problem (3.4) and show that the resulting optimally weighted estimator has an MSE of $O(T^{-1})$. We present simulation results in Section 4 that illustrate the superior performance of this ensemble entropy estimator in the context of (i) estimation of the Panter-Dite distortion-rate factor [9] and (ii) testing the probability distribution of a random sample. We conclude the paper in Section 5.

Notation. We will use boldface type to indicate random variables and random vectors and regular typeface for constants. We denote the expectation operator by the symbol E, the variance operator by $V[X] = E[(X - E[X])^2]$, and the bias of an estimator by B.

2 Entropy estimation

This paper focuses on the estimation of general non-linear functionals G(f) of d-dimensional multivariate densities f with known support $S = [a, b]^d$, where G(f) has the form

$G(f) = \int g(f(x), x)\, f(x)\, d\mu(x),$ (2.1)

for some smooth function g(f, x). Let B denote the boundary of S. Here, $\mu$ denotes the Lebesgue measure and E denotes statistical expectation with respect to the density f. Assume that $T = N + M$ i.i.d. realizations of feature vectors $\{X_1, \ldots, X_N, X_{N+1}, \ldots, X_{N+M}\}$ are available from the density f. In the sequel, f will be called the feature density.

2.1 Plug-in estimators of entropy

A truncated kernel density estimator with uniform kernel is defined below. Our proposed weighted ensemble method applies to other types of kernels as well, but we specialize to uniform kernels as it makes the derivations clearer. For integer $1 \le k \le M$, define the distance $d_k = (k/M)^{1/d}$. Define the truncated kernel bin region for each $X \in S$ to be $S_k(X) = \{Y \in S : \|X - Y\|_1 \le d_k/2\}$, and the volume of the truncated kernel bins to be $V_k(X) = \int_{S_k(X)} dz$. Note that when the smallest distance from X to the boundary B is greater than $d_k$, $V_k(X) = d_k^d = k/M$.
Let $l_k(X)$ denote the number of points falling in $S_k(X)$: $l_k(X) = \sum_{i=1}^{M} 1_{\{X_i \in S_k(X)\}}$. The truncated kernel density estimator is defined as

$\hat f_k(X) = \frac{l_k(X)}{M\, V_k(X)}.$ (2.2)

The plug-in estimator of the density functional is constructed using a data-splitting approach as follows. The data is randomly subdivided into two parts, $\{X_1, \ldots, X_N\}$ and $\{X_{N+1}, \ldots, X_{N+M}\}$, of N and M points respectively. In the first stage, we compute the kernel density estimate $\hat f_k$ at the N points $\{X_1, \ldots, X_N\}$ using the M realizations $\{X_{N+1}, \ldots, X_{N+M}\}$. Subsequently, we use the N samples $\{X_1, \ldots, X_N\}$ to approximate the functional G(f) and obtain the plug-in estimator:

$\hat G_k = \frac{1}{N} \sum_{i=1}^{N} g(\hat f_k(X_i), X_i).$ (2.3)

Also define a standard kernel density estimator with uniform kernel, $\tilde f_k(X)$, which is identical to $\hat f_k(X)$ except that the volume is always set to $V_k(X) = k/M$. Define

$\tilde G_k = \frac{1}{N} \sum_{i=1}^{N} g(\tilde f_k(X_i), X_i).$ (2.4)

The estimator $\tilde G_k$ is identical to the estimator of Györfi and van der Meulen [11]. Observe that the implementation of $\tilde G_k$, unlike $\hat G_k$, does not require knowledge about the support of the density.

2.1.1 Assumptions

We make a number of technical assumptions that will allow us to obtain tight MSE convergence rates for the kernel density estimators defined above. These assumptions are comparable to other rigorous treatments of entropy estimation; please refer to Section II of [2] for details. (A.0): Assume that the kernel bandwidth satisfies $k = k_0 M^\beta$ for a rate constant $0 < \beta < 1$, and assume that M, N and T are linearly related through the proportionality constant $\alpha_{frac}$, with $0 < \alpha_{frac} < 1$, $M = \alpha_{frac} T$ and $N = (1 - \alpha_{frac}) T$. (A.1): Let the feature density f be uniformly bounded away from 0 and upper bounded on the set S, i.e., there exist constants $\epsilon_0, \epsilon_\infty$ such that $0 < \epsilon_0 \le f(x) \le \epsilon_\infty < \infty$ for all $x \in S$. (A.2): Assume that the density f has continuous partial derivatives of order d in the interior of the set S, and that these derivatives are upper bounded.
(A.3): Assume that the function g(f, x) has $\max\{\lambda, d\}$ partial derivatives with respect to the argument f, where $\lambda$ satisfies $\lambda\beta > 1$. Denote the n-th partial derivative of g(f, x) with respect to f by $g^{(n)}(f, x)$. Also, let $g'(f, x) := g^{(1)}(f, x)$ and $g''(f, x) := g^{(2)}(f, x)$. (A.4): Assume that the absolute values of the functional g(f, x) and of its partial derivatives are strictly bounded away from $\infty$ in the range $\epsilon_0 < f < \epsilon_\infty$ for all x. (A.5): Let $\epsilon \in (0, 1)$ and $\delta \in (2/3, 1)$. Let C(M) be a positive function satisfying $C(M) = O(\exp(-M^{\beta(1-\delta)}))$. For some fixed $0 < \epsilon < 1$, define $p_l = (1 - \epsilon)\epsilon_0$ and $p_u = (1 + \epsilon)\epsilon_\infty$. Assume that the following four conditions are satisfied by $h(f, x) = g(f, x)$, $g^{(3)}(f, x)$ and $g^{(\lambda)}(f, x)$: (i) $\sup_x |h(0, x)| = G_1 < \infty$, (ii) $\sup_{f \in (p_l, p_u),\, x} |h(f, x)| = G_2/4 < \infty$, (iii) $\sup_{f \in (1/k, p_u),\, x} |h(f, x)|\, C(M) = G_3 < \infty$, and (iv) $E[\sup_{f \in (p_l, 2^d M/k),\, x} |h(f, x)|]\, C(M) = G_4 < \infty$.

2.1.2 Analysis of MSE

Under these assumptions, we have shown the following (please see [22] for the proof):

Theorem 1. The bias of the plug-in estimators $\hat G_k$, $\tilde G_k$ is given by

$B(\hat G_k) = \sum_{i \in I} c_{1,i} \left( \frac{k}{M} \right)^{i/d} + \frac{c_2}{k} + o\!\left( \frac{1}{k} + \frac{k}{M} \right),$
$B(\tilde G_k) = c_1 \left( \frac{k}{M} \right)^{1/d} + \frac{c_2}{k} + o\!\left( \frac{1}{k} + \frac{k}{M} \right).$

Theorem 2. The variance of the plug-in estimators $\hat G_k$, $\tilde G_k$ is given by

$V(\hat G_k) = \frac{c_4}{N} + \frac{c_5}{M} + o\!\left( \frac{1}{M} + \frac{1}{N} \right),$
$V(\tilde G_k) = \frac{c_4}{N} + \frac{c_5}{M} + o\!\left( \frac{1}{M} + \frac{1}{N} \right).$

In the above expressions, $c_{1,i}$, $c_1$, $c_2$, $c_4$ and $c_5$ are constants that depend only on g, f and their partial derivatives, and $I = \{1, \ldots, d\}$. In particular, the constants $c_{1,i}$, $c_1$, $c_2$, $c_4$ and $c_5$ are independent of k, N and M.

2.1.3 Optimal MSE rate

From Theorem 1, we require $k \to \infty$ and $k/M \to 0$ for the estimators $\hat G_k$ and $\tilde G_k$ to be asymptotically unbiased. Likewise, from Theorem 2, we require $N \to \infty$ and $M \to \infty$ for the variance of the estimators to converge to 0. We can optimize the choice of bandwidth k and the data-splitting proportions $N/(N+M)$, $M/(N+M)$ for minimum MSE. Minimizing the MSE over k is equivalent to minimizing the bias over k.
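Before carrying out this bandwidth optimization, a compact numerical sketch of the data-split plug-in estimator of Eqs. (2.2)-(2.4) may help fix ideas. Here we take the Shannon case g(f, x) = -log f, use an untruncated sup-norm box kernel, and choose all sizes ourselves (so this illustrates the standard estimator of Eq. (2.4), not the boundary-truncated one):

```python
import numpy as np

def kernel_plugin_entropy(X_eval, X_kde, k):
    """Data-split plug-in estimate of the Shannon entropy -E[log f],
    using a uniform (box) kernel of side d_k = (k/M)^(1/d)."""
    M, d = X_kde.shape
    dk = (k / M) ** (1.0 / d)
    Vk = dk ** d                       # untruncated bin volume, equals k/M
    vals = []
    for x in X_eval:
        # count KDE samples inside the box of side dk centered at x
        lk = np.sum(np.max(np.abs(X_kde - x), axis=1) <= dk / 2)
        f_hat = max(lk, 1) / (M * Vk)  # clip to avoid log(0)
        vals.append(-np.log(f_hat))
    return float(np.mean(vals))

rng = np.random.default_rng(0)
T, d = 4000, 2
X = rng.random((T, d))                 # uniform on [0,1]^2: true entropy is 0
M = T // 2
k = int(M ** (1.0 / (1 + d)))          # the optimal rate k = O(M^{1/(1+d)})
H = kernel_plugin_entropy(X[:M], X[M:], k)
print(H)                               # close to 0 for the uniform density
```

The small residual error here is exactly the boundary bias that the truncated estimator of Eq. (2.2) is designed to reduce.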
The optimal choice of k is given by $k_{opt} = O(M^{1/(1+d)})$, and the bias evaluated at $k_{opt}$ is $O(M^{-1/(1+d)})$. Also observe that the MSE of $\hat G_k$ and $\tilde G_k$ is dominated by the squared bias, $O(M^{-2/(1+d)})$, as contrasted with the variance, $O(1/N + 1/M)$. This implies that the asymptotic MSE rate of convergence is invariant to the selected proportionality constant $\alpha_{frac}$. The optimal MSE for the estimators $\hat G_k$ and $\tilde G_k$ is therefore achieved for the choice $k = O(M^{1/(1+d)})$, and is given by $O(T^{-2/(1+d)})$. In particular, observe that both $\hat G_k$ and $\tilde G_k$ have identical optimal rates of MSE convergence. Our goal is to reduce the estimator MSE to $O(T^{-1})$. We do so by applying the method of weighted ensembles described next in Section 3.

3 Ensemble estimators

For a positive integer $L > d$, choose $\bar l = \{l_1, \ldots, l_L\}$ to be a vector of distinct positive real numbers. Define the mapping $k(l) = l\sqrt{M}$ and let $\bar k = \{k(l);\ l \in \bar l\}$. Observe that any $k \in \bar k$ corresponds to the rate constant $\beta = 0.5$, and that $N = \Theta(T)$ and $M = \Theta(T)$. Define the weighted ensemble estimator

$\hat G_w = \sum_{l \in \bar l} w(l)\, \hat G_{k(l)}.$ (3.1)

Theorem 3. There exists a weight vector $w^*$ such that $E[(\hat G_{w^*} - G(f))^2] = O(1/T)$. This weight vector can be found by solving a convex optimization. Furthermore, this optimal weight vector does not depend on the unknown feature density f or the samples $\{X_1, \ldots, X_{N+M}\}$, and hence can be computed off-line.

Proof. For each $i \in I$, define $\gamma_w(i) = \sum_{l \in \bar l} w(l)\, l^{i/d}$. The bias of the ensemble estimator follows from Theorem 1 and is given by

$B[\hat G_w] = \sum_{i \in I} c_{1,i}\, \gamma_w(i)\, M^{-i/2d} + O\!\left( \frac{1}{\sqrt{T}} \right).$ (3.2)

Denote the covariance matrix of $\{\hat G_{k(l)};\ l \in \bar l\}$ by $\Sigma_L$, and let $\bar\Sigma_L = \Sigma_L T$. Observe that by (2.5) and the Cauchy-Schwarz inequality, the entries of $\bar\Sigma_L$ are O(1). The variance of the weighted estimator $\hat G_w$ can then be bounded as follows:

$V[\hat G_w] = V\!\left[ \sum_{l \in \bar l} w(l)\, \hat G_{k(l)} \right] = w' \Sigma_L w = \frac{w' \bar\Sigma_L w}{T} \le \frac{\lambda_{\max}(\bar\Sigma_L)\, \|w\|_2^2}{T}.$ (3.3)
We seek a weight vector w that (i) ensures that the bias of the weighted estimator is $O(T^{-1/2})$ and (ii) has low $\ell_2$ norm $\|w\|_2$, in order to limit the contribution of the variance of the weighted estimator. To this end, let $w^*$ be the solution to the convex optimization problem

minimize $\|w\|_2$ subject to $\sum_{l \in \bar l} w(l) = 1$ and $\gamma_w(i) = 0$, $i \in I$. (3.4)

This problem is equivalent to minimizing $\|w\|_2$ subject to $A_0 w = b$, where $A_0$ and b are defined below. Let $a_0$ be the vector of ones, $[1, 1, \ldots, 1]_{1 \times L}$, and for $i \in I$ let $a_i = [l_1^{i/d}, \ldots, l_L^{i/d}]$. Define $A_0 = [a_0', a_1', \ldots, a_d']'$, $A_1 = [a_1', \ldots, a_d']'$ and $b = [1; 0; 0; \ldots; 0]_{(d+1) \times 1}$. Observe that the entries of $A_0$ and b are O(1), and therefore the entries of the solution $w^*$ are O(1). Consequently, by (3.2), the bias satisfies $B[\hat G_{w^*}] = O(1/\sqrt{T})$. Furthermore, the optimal minimum $\eta(d) := \|w^*\|_2$ is given by $\eta(d) = \sqrt{\det(A_1 A_1') / \det(A_0 A_0')}$. By (3.3), the estimator variance $V[\hat G_{w^*}]$ is of order $O(\eta(d)^2/T)$. This concludes the proof.

While we have illustrated the weighted ensemble method only in the context of kernel estimators, this method can be applied to any general ensemble of estimators that satisfy the bias and variance conditions C.1 and C.2 in [22].

4 Experiments

We illustrate the superior performance of the proposed weighted ensemble estimator on two applications: (i) estimation of the Panter-Dite rate-distortion factor, and (ii) estimation of entropy to test for randomness of a random sample.

4.1 Panter-Dite factor estimation

For a d-dimensional source with underlying density f, the Panter-Dite distortion-rate function [9] for a q-dimensional vector quantizer with n levels of quantization is given by $\delta(n) = n^{-2/q} \int f^{q/(q+2)}(x)\, dx$.
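The minimum-norm program (3.4) used in these experiments has the closed-form solution $w^* = A_0'(A_0 A_0')^{-1} b$, which the pseudoinverse computes directly. A sketch (the basis points l and the dimension d are illustrative choices):

```python
import numpy as np

def optimal_weights(l, d):
    """Solve (3.4): minimize ||w||_2 subject to sum_l w(l) = 1 and
    gamma_w(i) = sum_l w(l) * l^{i/d} = 0 for i = 1..d."""
    l = np.asarray(l, dtype=float)
    A0 = np.vstack([np.ones_like(l)] + [l ** (i / d) for i in range(1, d + 1)])
    b = np.zeros(d + 1)
    b[0] = 1.0
    # minimum-norm solution of the underdetermined consistent system A0 w = b
    return np.linalg.pinv(A0) @ b

d = 3
l = np.arange(1.0, 8.0)   # L = 7 > d distinct positive basis points
w = optimal_weights(l, d)
print(w)
```

The constraints cancel the leading bias terms of (3.2), while the minimum-norm objective keeps the variance inflation factor $\|w\|_2^2$ in (3.3) small. No data or density knowledge enters, so the weights can indeed be computed offline.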
The Panter-Dite factor corresponds to the functional G(f) with $g(f, x) = n^{-2/q} f^{-2/(q+2)} I(f > 0) + I(f = 0)$, where I(.) is the indicator function. The Panter-Dite factor is directly related to the Rényi α-entropy, for which several other estimators have been proposed.

[Figure 1: Variation of MSE of Panter-Dite factor estimates using the standard kernel plug-in estimator [12], truncated kernel plug-in estimator (2.3), histogram plug-in estimator [11], k-NN estimator [19], entropic graph estimator [6,21] and the weighted ensemble estimator (3.1). (a) MSE as a function of sample size T: the proposed weighted estimator has the fastest MSE rate of convergence with respect to T. (b) MSE as a function of dimension d: the MSE of the proposed weighted estimator has the slowest rate of growth with increasing d.]
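To make the plug-in construction concrete before turning to the simulations, here is a minimal one-dimensional sketch (our own, not the authors' code) of a kernel plug-in estimator of $G(f) = E[g(f(X), X)]$ with a uniform (boxcar) kernel; the split into M density-estimation and N evaluation samples mirrors the text, while the bandwidth h and the entropy functional $g(f, x) = -\log(f) I(f > 0) + I(f = 0)$ used in Section 4.2 are illustrative choices:

```python
import numpy as np

def boxcar_kde(centers, x, h):
    """Uniform-kernel density estimate at points x from 1-d samples `centers`."""
    inside = np.abs(x[:, None] - centers[None, :]) <= h
    return inside.sum(axis=1) / (len(centers) * 2.0 * h)

def plugin_entropy(sample, h=0.3):
    """Plug-in estimate of E[-log f(X)]: estimate f on the first M = T/2
    samples and average g(f_hat, x) over the remaining N = T/2 samples."""
    M = len(sample) // 2
    density_part, eval_part = sample[:M], sample[M:]
    f_hat = boxcar_kde(density_part, eval_part, h)
    # g(f, x) = -log(f) I(f > 0) + I(f = 0); the clip avoids log(0) warnings.
    g = np.where(f_hat > 0, -np.log(np.clip(f_hat, 1e-300, None)), 1.0)
    return float(g.mean())

rng = np.random.default_rng(1)
# True differential entropy of N(0, 1) is 0.5 * log(2*pi*e), about 1.419.
estimate = plugin_entropy(rng.normal(size=8000))
```

The bias of such a plug-in estimator shrinks only slowly with T, which is precisely what the weighted ensemble of Section 3 is designed to correct.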
In our simulations we compare six different choices of functional estimators: the three estimators previously introduced, (i) the standard kernel plug-in estimator $\hat{G}_k$, (ii) the boundary-truncated plug-in estimator $\tilde{G}_k$ and (iii) the weighted estimator $\hat{G}_w$ with optimal weight $w = w^*$ given by (3.4), and in addition the following popular entropy estimators: (iv) the histogram plug-in estimator [10], (v) the k-nearest neighbor (k-NN) entropy estimator [18] and (vi) the entropic k-NN graph estimator [5, 20]. For both $\tilde{G}_k$ and $\hat{G}_k$, we select the bandwidth parameter k as a function of M according to the optimal proportionality $k = M^{1/(1+d)}$, with $N = M = T/2$. To illustrate the weighted estimator of the Panter-Dite factor we assume that f is the d = 6 dimensional mixture density $f(a, b, p, d) = p f_\beta(a, b, d) + (1 - p) f_u(d)$, where $f_\beta(a, b, d)$ is a d-dimensional Beta density with parameters a = 6, b = 6, $f_u(d)$ is a d-dimensional uniform density, and the mixing ratio p is 0.8. 4.1.1 Variation of MSE with sample size T The MSE results of these different estimators are shown in Fig. 1(a) as a function of sample size T. It is clear from the figure that the proposed ensemble estimator $\hat{G}_w$ has a significantly faster rate of convergence, while the MSE of the rest of the estimators, including the truncated kernel plug-in estimator, exhibits similar, slow rates of convergence. It is therefore clear that the proposed optimal ensemble averaging significantly accelerates the MSE convergence rate. 4.1.2 Variation of MSE with dimension d The MSE results of these different estimators are shown in Fig. 1(b) as a function of dimension d, for fixed sample size T = 3000. For the standard kernel plug-in estimator and the truncated kernel plug-in estimator, the MSE varies exponentially with d as expected. The MSE of the histogram and k-NN estimators increases at a similar rate, indicating that these estimators suffer from the curse of dimensionality as well. The MSE of the weighted estimator, on the other hand, increases at a slower rate, which is in agreement with our theory that the MSE is $O(\eta(d)/T)$, observing that $\eta(d)$ is an increasing function of d. Also observe that the MSE of the weighted estimator is significantly smaller than the MSE of the other estimators for all dimensions d > 3.

[Figure 2: Entropy estimates using the standard kernel plug-in estimator, the truncated kernel plug-in estimator and the weighted estimator, for random samples corresponding to hypotheses H0 and H1. (a) Entropy estimates for the individual random samples. (b) Histogram envelopes of the entropy estimates under H0 (blue) and H1 (red). The weighted estimator provides better discrimination ability by suppressing the bias, at the cost of some additional variance.]

4.2 Distribution testing In this section, Shannon differential entropy is estimated using the function $g(f, x) = -\log(f) I(f > 0) + I(f = 0)$ and used as a test statistic to test for the underlying probability distribution of a random sample. In particular, we draw 500 instances each of random samples of size $10^3$ from the probability distribution f(a, b, p, d), described in Sec. 4.1, with fixed d = 6, p = 0.75, for two sets of values of a, b under the null and alternate hypotheses, $H_0: a = a_0, b = b_0$ versus $H_1: a = a_1, b = b_1$. First, we fix $a_0 = b_0 = 6$ and $a_1 = b_1 = 5$.
We note that the underlying density under the null hypothesis, f(6, 6, 0.75, 6), has greater curvature relative to f(5, 5, 0.75, 6) and therefore has smaller entropy (randomness). The true entropy, and the entropy estimates using $\tilde{G}_k$, $\hat{G}_k$ and $\hat{G}_w$, for each of the 500 instances of hypotheses H0 and H1, are shown in Fig. 2(a). From this figure, it is apparent that the weighted estimator provides better discrimination ability by suppressing the bias, at the cost of some additional variance. To demonstrate that the weighted estimator provides better discrimination, we plot the histogram envelope of the entropy estimates using the standard kernel plug-in estimator, the truncated kernel plug-in estimator and the weighted estimator for the cases corresponding to hypothesis H0 (color coded blue) and H1 (color coded red) in Fig. 2(b). Furthermore, we quantitatively measure the discriminative ability of the different estimators using the deflection statistic $d_s = |\mu_1 - \mu_0| / \sqrt{\sigma_0^2 + \sigma_1^2}$, where $\mu_0$ and $\sigma_0$ (respectively $\mu_1$ and $\sigma_1$) are the sample mean and standard deviation of the entropy estimates. The deflection statistic was found to be 1.49, 1.60 and 1.89 for the standard kernel plug-in estimator, the truncated kernel plug-in estimator and the weighted estimator, respectively. The receiver operating curves (ROC) for this test using these three different estimators are shown in Fig. 3(a). The corresponding areas under the ROC curves (AUC) are 0.9271, 0.9459 and 0.9619.

[Figure 3: Comparison of performance in terms of ROC for the distribution testing problem. (a) ROC curves corresponding to entropy estimates obtained using the standard and truncated kernel plug-in estimators and the weighted estimator; the corresponding AUC are 0.9271, 0.9459 and 0.9619. (b) Variation of AUC versus $\delta$ ($= a_0 - a_1 = b_0 - b_1$) for the Neyman-Pearson omniscient test and for the entropy estimates using the standard and truncated kernel plug-in estimators and the weighted estimator. The weighted estimator uniformly outperforms the individual plug-in estimators.]

In our final experiment, we fix $a_0 = b_0 = 10$ and set $a_1 = b_1 = 10 - \delta$, draw 500 instances each of random samples of size $5 \times 10^3$ under the null and alternate hypotheses, and plot the AUC as $\delta$ varies from 0 to 1 in Fig. 3(b). For comparison, we also plot the AUC for the Neyman-Pearson likelihood ratio test. The Neyman-Pearson likelihood ratio test, unlike the Shannon entropy based tests, is an omniscient test that assumes knowledge of both the underlying beta-uniform mixture parametric model of the density and the parameter values $a_0, b_0$ and $a_1, b_1$ under the null and alternate hypotheses, respectively. Fig. 3(b) shows that the weighted estimator uniformly and significantly outperforms the individual plug-in estimators and is closest to the performance of the omniscient Neyman-Pearson likelihood test. The relatively superior performance of the Neyman-Pearson likelihood test is due to the fact that the weighted estimator is a nonparametric estimator that has marginally higher variance (proportional to $\|w^*\|_2^2$) compared to the underlying parametric model, for which the Neyman-Pearson test statistic provides the most powerful test. 5 Conclusions A novel method of weighted ensemble estimation was proposed in this paper. This method combines slowly converging individual estimators to produce a new estimator with a faster MSE rate of convergence.
In this paper, we applied weighted ensembles to improve the MSE of a set of uniform kernel density estimators with different kernel width parameters. We showed by theory and in simulation that that the improved ensemble estimator achieves parametric MSE convergence rate O(T −1). The optimal weights are determined by solving a convex optimization problem which does not require training data and can be performed offline. The superior performance of the weighted ensemble entropy estimator was verified in the context of two important problems: (i) estimation of the Panter-Dite factor and (ii) non-parametric hypothesis testing. Acknowledgments This work was partially supported by ARO grant W911NF-12-1-0443. 8 References [1] I. Ahmad and Pi-Erh Lin. A nonparametric estimation of the entropy for absolutely continuous distributions (corresp.). Information Theory, IEEE Trans. on, 22(3):372 – 375, May 1976. [2] J. Beirlant, EJ Dudewicz, L. Gy¨orfi, and EC Van der Meulen. Nonparametric entropy estimation: An overview. Intl. Journal of Mathematical and Statistical Sciences, 6:17–40, 1997. [3] L. Birge and P. Massart. Estimation of integral functions of a density. The Annals of Statistics, 23(1):11–29, 1995. [4] D. Chauveau and P. Vandekerkhove. Selection of a MCMC simulation strategy via an entropy convergence criterion. ArXiv Mathematics e-prints, May 2006. [5] J.A. Costa and A.O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. Signal Processing, IEEE Transactions on, 52(8):2210–2221, 2004. [6] P. B. Eggermont and V. N. LaRiccia. Best asymptotic normality of the kernel density entropy estimator for smooth densities. Information Theory, IEEE Trans. on, 45(4):1321 –1326, May 1999. [7] E. Gin´e and D.M. Mason. Uniform in bandwidth estimation of integral functionals of the density function. Scandinavian Journal of Statistics, 35:739761, 2008. [8] M. Goria, N. Leonenko, V. Mergel, and P. L. Novi Inverardi. 
A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Nonparametric Statistics, 2004. [9] R. Gupta. Quantization Strategies for Low-Power Communications. PhD thesis, University of Michigan, Ann Arbor, 2001. [10] L. Gy¨orfiand E. C. van der Meulen. Density-free convergence properties of various estimators of entropy. Comput. Statist. Data Anal., pages 425–436, 1987. [11] L. Gy¨orfiand E. C. van der Meulen. An entropy estimate based on a kernel density estimation. Limit Theorems in Probability and Statistics, pages 229–240, 1989. [12] P. Hall and S. C. Morton. On the estimation of the entropy. Ann. Inst. Statist. Meth., 45:69–88, 1993. [13] K. Hlav´aˇckov´a-Schindler, M. Paluˇs, M. Vejmelka, and J. Bhattacharya. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441(1):1– 46, 2007. [14] A.T. Ihler, J.W. Fisher III, and A.S. Willsky. Nonparametric estimators for online signature authentication. In Acoustics, Speech, and Signal Processing, 2001. Proceedings.(ICASSP’01). 2001 IEEE International Conference on, volume 6, pages 3473–3476. IEEE, 2001. [15] H. Joe. Estimation of entropy and other functionals of a multivariate density. Annals of the Institute of Statistical Mathematics, 41(4):683–697, 1989. [16] G. Lanckriet, N. Cristianini, P. Bartlett, and L. El Ghaoui. Learning the kernel matrix with semi-definite programming. Journal of Machine Learning Research, 5:2004, 2002. [17] B. Laurent. Efficient estimation of integral functionals of a density. The Annals of Statistics, 24(2):659–681, 1996. [18] N. Leonenko, L. Prozanto, and V. Savani. A class of R´enyi information estimators for multidimensional densities. Annals of Statistics, 36:2153–2182, 2008. [19] E. Liiti¨ainen, A. Lendasse, and F. Corona. On the statistical estimation of r´enyi entropies. 
In Proceedings of IEEE/MLSP 2009 International Workshop on Machine Learning for Signal Processing, Grenoble (France), September 2-4 2009. [20] D. Pal, B. Poczos, and C. Szepesvari. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Proc. Advances in Neural Information Processing Systems (NIPS). MIT Press, 2010. [21] Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, June 1990. [22] K. Sricharan and A. O. Hero, III. Ensemble estimators for multivariate entropy estimation. ArXiv e-prints, March 2012. [23] C. Studholme, C. Drapaca, B. Iordanova, and V. Cardenas. Deformation-based mapping of volume change from serial brain MRI in the presence of local tissue contrast change. Medical Imaging, IEEE Transactions on, 25(5):626–639, 2006. [24] B. van Es. Estimating functionals related to a density by a class of statistics based on spacings. Scandinavian Journal of Statistics, 1992.
Active Comparison of Prediction Models Christoph Sawade, Niels Landwehr, and Tobias Scheffer University of Potsdam Department of Computer Science August-Bebel-Strasse 89, 14482 Potsdam, Germany {sawade, landwehr, scheffer}@cs.uni-potsdam.de Abstract We address the problem of comparing the risks of two given predictive models—for instance, a baseline model and a challenger—as confidently as possible on a fixed labeling budget. This problem occurs whenever models cannot be compared on held-out training data, possibly because the training data are unavailable or do not reflect the desired test distribution. In this case, new test instances have to be drawn and labeled at a cost. We devise an active comparison method that selects instances according to an instrumental sampling distribution. We derive the sampling distribution that maximizes the power of a statistical test applied to the observed empirical risks, and thereby minimizes the likelihood of choosing the inferior model. Empirically, we investigate model selection problems on several classification and regression tasks and study the accuracy of the resulting p-values. 1 Introduction We address situations in which an informed choice between candidate predictive models—for instance, a baseline method and a challenger—has to be made. In practice, it is not always possible to compare the models’ risks on held-out training data. For example, in computer vision it is common to acquire pre-trained object or face recognizers from third parties. Such recognizers do not typically come with the image databases that have been used to train them. The suppliers of the models could provide risk estimates based on held-out training data; however, such estimates might be biased because the training data would not necessarily reflect the distribution of images the deployed models will be exposed to. 
Another example is domains where the input distribution changes over a period of time in which a baseline model, e.g., a spam filter, has been employed. By the time a new predictive model is considered, a previous risk estimate of the baseline model may no longer be accurate. In these example scenarios, new test data have to be drawn and labeled. The standard approach to comparing models would be to draw n test instances according to the test distribution which the model is exposed to in practice, label these data, and calculate the difference of the empirical risks $\hat{\Delta}_n$ and the sample variance $S_n^2$. Then, under the null hypothesis of identical risks, $\sqrt{n}\, \hat{\Delta}_n / S_n$ is asymptotically governed by a standard normal distribution, and we can compute a p-value which quantifies the likelihood that an observed empirical difference is due to chance, indicating how confidently the decision to prefer the apparently better model can be made. In many application scenarios, unlabeled test instances are readily available whereas the process of labeling data is costly. We study an active model comparison process that, in analogy to active learning, selects instances from a pool of unlabeled test data and queries their labels. Instances are selected according to an instrumental sampling distribution q. The empirical difference of the models' risks is weighted appropriately to compensate for the discrepancy between instrumental and test distributions, which leads to consistent—that is, asymptotically unbiased—risk estimates.
The active comparison problem that we study can be seen as an extreme case of active learning, in which the model space contains only two (or, more generally, a small number of) models. For the special case of classification with zero-one loss and two models under study, a simplified version of the sampling distribution we derive coincides with the sampling distribution used in the A 2 and IWAL active learning algorithms proposed by Balcan et al. [1] and Beygelzimer et al. [2]. For A 2 and IWAL, the derivation of this distribution is based on finite-sample complexity bounds, while in our approach, it is based on maximizing the power of a statistical test comparing the models under study. The latter approach has the advantage that it directly generalizes to regression problems. A further difference to active learning is that our goal is not only to choose the best model, but also to obtain a well-calibrated p-value indicating the confidence with which this decision can be made. Our method is also related to recent work on active data acquisition strategies for the evaluation of a single predictive model, in terms of standard risks [8] or generalized risks that subsume precision, recall, and f-measure [9]. The problem addressed in this paper is different in that we seek to assess the relative performance of two models, without necessarily determining absolute risks precisely. Madani et al. have studied active model selection, where the goal is also to identify a model with lowest risk [5]. However, in their setting costs are associated with obtaining predictions ˆy = f(x), while in our setting costs are associated with obtaining labels y ∼p(y|x). Hoeffding races [6] and sequential sampling algorithms [10] perform efficient model selection by keeping track of risk bounds for candidate models and removing models that are clearly outperformed from consideration. The goal of these methods is to reduce computational complexity, not labeling effort. 
The rest of this paper is organized as follows. The problem setting is laid out in Section 2. Section 3 derives the instrumental distribution and details our theoretical findings. Section 4 explores active model comparison experimentally. Section 5 concludes. 2 Problem Setting Let X denote the feature space and Y the label space; an unknown test distribution p(x, y) is defined over X × Y. Let $p(y|x; \theta_1)$ and $p(y|x; \theta_2)$ be given θ-parameterized models of p(y|x) and let $f_j: X \to Y$ with $f_j(x) = \arg\max_y p(y|x; \theta_j)$ be the corresponding predictive functions. The risks of $f_1, f_2$ are given by $R[f_j] = \iint \ell(f_j(x), y)\, p(x, y)\, dy\, dx$ (1) for a loss function $\ell: Y \times Y \to \mathbb{R}$. In a classification setting, the integral over Y reduces to a sum. The standard approach to comparing models is to compare empirical risk estimates $\hat{R}_n[f_j] = \frac{1}{n} \sum_{i=1}^n \ell(f_j(x_i), y_i)$, (2) where the n test instances $(x_i, y_i)$ are drawn from p(x, y) = p(x)p(y|x). We assume that unlabeled data are readily available, but acquiring labels y for selected instances x according to p(y|x) is a costly process that may involve a query to a human labeler. Test instances need not necessarily be drawn according to the input distribution p(x). We will focus on a data labeling process that draws test instances according to an instrumental distribution q(x) rather than p(x). Intuitively, q(x) should be designed so as to prefer instances that highlight differences between the models $f_1$ and $f_2$. Let q(x) denote an instrumental distribution with the property that p(x) > 0 implies q(x) > 0 for all $x \in X$. A consistent risk estimate is then given by $\hat{R}_{n,q}[f_j] = \frac{1}{W} \sum_{i=1}^n \frac{p(x_i)}{q(x_i)}\, \ell(f_j(x_i), y_i)$, (3) where $(x_i, y_i) \sim q(x)p(y|x)$ and $W = \sum_{i=1}^n \frac{p(x_i)}{q(x_i)}$. The weighting factors $\frac{p(x_i)}{q(x_i)}$ compensate for the discrepancy between test and instrumental distribution, and the normalizer is the sum of the weights. Because of the weighting factors, Equation 3 defines a consistent risk estimate (see [4], Chapter 2).
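A minimal sketch of the comparison machinery of Equations 3-5 (our own illustration; the function and variable names are ours): given paired losses of the two models at instances sampled from q(x)p(y|x), it returns the weighted risk estimates and the p-value of the two-sided paired Wald test:

```python
import math
import numpy as np

def compare_risks(loss1, loss2, p, q):
    """Weighted risk estimates (Equation 3) and Wald-test p-value (Equations 4-5).

    loss1, loss2: losses of f1 and f2 at the sampled instances;
    p, q: test and instrumental densities at those instances."""
    w = p / q                                   # importance weights p(x_i)/q(x_i)
    W = w.sum()
    n = len(w)
    R1 = np.sum(w * loss1) / W
    R2 = np.sum(w * loss2) / W
    delta = R1 - R2                             # hat-Delta_{n,q}
    # Consistent variance estimate (Equation 4).
    S2 = np.sum(w ** 2 * (loss1 - loss2 - delta) ** 2) / W
    t = math.sqrt(n) * abs(delta) / math.sqrt(S2)
    # Two-sided p-value 2 * (1 - Phi(t)) via the standard normal CDF (Equation 5).
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
    return R1, R2, p_value
```

With q = p, all weights equal one and Equation 3 reduces to the plain empirical risk (2), so the function also covers the passive comparison described in the introduction.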
Consistency means that the expected value of $\hat{R}_{n,q}[f_j]$ converges to the true risk $R[f_j]$ for $n \to \infty$. Given estimates $\hat{R}_{n,q}[f_1]$ and $\hat{R}_{n,q}[f_2]$, the difference $\hat{\Delta}_{n,q} = \hat{R}_{n,q}[f_1] - \hat{R}_{n,q}[f_2]$ provides evidence on which model is preferable; a positive $\hat{\Delta}_{n,q}$ argues in favor of $f_2$. In preferring one model over the other, one rejects the null hypothesis that the observed difference $\hat{\Delta}_{n,q}$ is only a random effect, and $R[f_1] = R[f_2]$ holds. The null hypothesis implies that the mean of $\hat{\Delta}_{n,q}$ is asymptotically zero. Because $\hat{\Delta}_{n,q}$ is asymptotically normally distributed (see, e.g., [3]), it further implies that the statistic $\frac{\sqrt{n}\, \hat{\Delta}_{n,q}}{\sigma_{n,q}} \sim N(0, 1)$ is asymptotically standard-normally distributed, where $\frac{1}{n}\sigma_{n,q}^2 = \mathrm{Var}[\hat{\Delta}_{n,q}]$ denotes the variance of $\hat{\Delta}_{n,q}$. In practice, $\sigma_{n,q}^2$ is unknown. A consistent estimator of $\sigma_{n,q}^2$ is given by $S_{n,q}^2 = \frac{1}{W} \sum_{i=1}^n \frac{p(x_i)^2}{q(x_i)^2} \left( \ell(f_1(x_i), y_i) - \ell(f_2(x_i), y_i) - \hat{\Delta}_{n,q} \right)^2$, (4) as shown, for example, by Geweke [3]. Substituting the empirical for the true standard deviation yields an observable statistic $\frac{\sqrt{n}\, \hat{\Delta}_{n,q}}{S_{n,q}}$. Because $S_{n,q}^2$ consistently estimates $\sigma_{n,q}^2$, the null hypothesis also implies that the observable statistic is asymptotically standard-normally distributed, $\frac{\sqrt{n}\, \hat{\Delta}_{n,q}}{S_{n,q}} \sim N(0, 1)$. Let Φ denote the cumulative distribution function of the standard normal distribution. Then, $2\left(1 - \Phi\left(\frac{\sqrt{n}\, |\hat{\Delta}_{n,q}|}{S_{n,q}}\right)\right)$ (5) is called the p-value of a two-sided paired Wald test (see, e.g., [12], Chapter 10). The p-value quantifies the likelihood of observing the given absolute value of the test statistic, or a higher value, by chance under the null hypothesis. Student's t-distribution is a more popular approximation of the distribution of a test statistic under the null hypothesis, resulting in the common t-test. Note, however, that $S_{n,q}$ would have to be a sum of squared, normally distributed random variables for the test statistic to be asymptotically governed by the t-distribution.
This assumption is reasonable for regression, but not for classification, and only for the case of p = q. If the null hypothesis does not hold and the two models incur different risks, the distribution of the test statistic depends on the chosen sampling distribution q(x). Our goal is to find a distribution q(x) that allows us to tell the risks of f1 and f2 apart with high confidence. More formally, the power of a test when sampling from q(x) is the likelihood that the null hypothesis can be rejected, that is, the likelihood that the p-value falls below a pre-specified confidence threshold α. Our goal is to find the sampling distribution q that maximizes test power: q∗= arg max q p 2 1 −Φ √n| ˆ∆n,q| Sn,q !! ≤α ! . (6) 3 Active Model Comparison We now turn towards deriving an optimal sampling distribution q∗according to Equation 6. Section 3.1 analytically derives an asymptotically optimal sampling distribution. Section 3.2 discusses the sampling distribution in a pool-based setting and presents the active comparison algorithm. 3 3.1 Asymptotically Optimal Sampling Let ∆= R[f1] −R[f2] denote the true risk difference, and assume ∆̸= 0. Given a confidence threshold α, the test power equals the probability that the absolute value of the test statistic exceeds the corresponding critical value zα = Φ−1 1 −α 2 : p 2 −2Φ √n| ˆ∆n,q| Sn,q ! ≤α ! = p √n| ˆ∆n,q| Sn,q ≥zα ! . (7) Asymptotically, it holds that √n( ˆ∆n,q −∆) σn,q ∼N(0, 1). Since Sn,q consistently estimates σn,q, it follows that for large n the statistic √n ˆ∆n,q Sn,q is normally distributed with mean √n∆ σn,q and unit variance, √n ˆ∆n,q Sn,q ∼N √n∆ σn,q , 1 . (8) Equation 8 implies that the absolute value √n | ˆ∆n,q| Sn,q of the test statistic follows a folded normal distribution with location parameter √n∆ σn,q and scale parameter one. According to Equation 7, test power can thus be approximated in terms of the cumulative distribution of this folded normal distribution, p 2 −2Φ √n| ˆ∆n,q| Sn,q ! ≤α ! 
≈1 − Z zα 0 f T; √n∆ σn,q , 1 dT, (9) where f (T; µ, 1) = 1 √ 2π exp −1 2 (T + µ)2 + 1 √ 2π exp −1 2 (T −µ)2 denotes the density of a folded normal distribution with location parameter µ and scale parameter one. We define the shorthand βn,q = 1 − Z zα 0 f T; √n∆ σn,q , 1 dT for the approximation of test power given by Equation 9. In the following, we derive a sampling distribution maximizing βn,q, thereby approximately solving the optimization problem of Equation 6. Theorem 1 (Optimal Sampling Distribution). Let ∆= R[f1] −R[f2] with ∆̸= 0. The distribution q∗(x) ∝p(x) sZ (ℓ(f1(x), y) −ℓ(f2(x), y) −∆)2 p(y|x)dy asymptotically maximizes βn,q; that is, for any other sampling distribution q ̸= q∗it holds that βn,q < βn,q∗for sufficiently large n. Before we prove Theorem 1, we show that a sampling distribution asymptotically maximizes βn,q if and only if it minimizes the asymptotic variance of the estimator ˆ∆n,q. Lemma 2 (Variance Optimality). Let q, q′ denote two sampling distributions. Then it holds that βn,q > βn,q′ for sufficiently large n if and only if lim n→∞n Var h ˆ∆n,q i < lim n→∞n Var h ˆ∆n,q′ i . (10) A proof is included in the online appendix. Lemma 2 shows that in order to solve the optimization problem given by Equation 6, we need to find the sampling distribution minimizing the asymptotic variance of the estimator ˆ∆n,q. This asymptotic variance is characterized by the following Lemma. 4 Lemma 3 (Asymptotic Variance). The asymptotic variance σ2 q = lim n→∞n Var[ ˆ∆n,q] of ˆ∆n,q is given by σ2 q = ZZ p(x)2 q(x)2 (ℓ(f1(x), y) −ℓ(f2(x), y) −∆)2 p(y|x)q(x)dy dx. A proof of Lemma 3 is included in the online appendix. Proof of Theorem 1. We can now prove Theorem 1 by deriving the distribution q∗that minimizes the asymptotic variance σ2 q as given by Lemma 3. We minimize the functional σ2 q in terms of q under the constraint R q(x)dx = 1 using a Lagrange multiplier β. 
L [q, β] = σ2 q + β Z q(x)dx −1 = Z c(x) q(x) + β (q(x) −p(x)) dx where c(x) = p(x)2 R (ℓ(f1(x), y) −ℓ(f2(x), y) −∆)2 p(y|x)dy. The optimal point for the constrained problem satisfies the Euler-Lagrange equation ∂ ∂q(x) c(x) q(x) + β (q(x) −p(x)) = −c(x) q(x)2 + β = 0. (11) A solution for Equation 11 with respect to the normalization constraint is given by q∗(x) = p c(x) R p c(x)dx . (12) Resubstitution of c(x) into Equation 12 implies the theorem. 3.2 Empirical Sampling Distribution The distribution q∗also depends on the true conditional p(y|x) and the true difference in risks ∆. In order to implement the method, we have to approximate these quantities. Note that as long as p(x) > 0 implies q(x) > 0, any choice of q will yield consistent risk estimates because weighting factors account for the discrepancy between sampling and test distribution (Equation 3). That is, ˆ∆n,q is guaranteed to converge to ∆as n grows large; any approximation employed to compute q∗ will only affect the number of test examples required to reach a certain level of estimation accuracy. To approximate the true conditional p(y|x), we use the given predictive models p(y|x; θ1) and p(y|x; θ2), and assume a mixture distribution giving equal weight to both models: p(y|x) ≈1 2p(y|x; θ1) + 1 2p(y|x; θ2). (13) The risk difference ∆is replaced by a difference ∆θ of introspective risks calculated from Equation 1, where the integral over X is replaced by a sum over the pool, p(x) = 1 m, and p(y|x) is approximated by Equation 13. We will now derive the empirical sampling distribution for two standard loss functions. Derivation 4 (Sampling for Zero-one Loss). Let ℓbe the zero-one loss for a binary prediction problem with label space Y = {0, 1}. 
When p(y|x) is approximated as in Equation 13, the sampling distribution asymptotically maximizing βn,q in a pool-based setting resolves to q∗(x)∝ |∆θ| : f1(x) = f2(x) q 1 −2∆θ(1 −2p(y = 1|x; θ)) + ∆θ 2 : f1(x) > f2(x) q 1 + 2∆θ(1 −2p(y = 1|x; θ)) + ∆θ 2 : f1(x) < f2(x) for all x ∈D. A proof is included in the online appendix. Instead of using Approximation 13, an uninformative approximation p(y = 1|x) ≈0.5 may be used. In this case q∗degenerates to uniform sampling from the subset of the pool where f1(x) ̸= f2(x). We denote this baseline as active̸=. This baseline coincides with the A 2 as well as the IWAL active learning algorithms, applied to the model space {f1, f2}, as can be seen from inspection of Algorithm 1 in [1] and Algorithms 1 and 2 in [2]. We now derive the optimal sampling distribution for regression problems with a squared loss function, assuming that the predictive distributions p(y|x; θ1) and p(y|x; θ2) are Gaussian: 5 Algorithm 1 Active Model Comparison input Models f1, f2 with distributions p(y|x; θ1), p(y|x; θ2); pool D; labeling budget n. 1: Compute sampling distribution q∗(Derivation 4 or 5). 2: for i = 1, . . . , n do 3: Draw xi ∼q∗(x) from D with replacement. 4: Query label yi ∼p(y|xi) from oracle. 5: end for 6: Compute ˆRn,q[f1] and ˆRn,q[f2] (Equation 3). 7: Determine f ∗←arg minf∈{f1,f2} ˆRn,q[f], compute p-value for sample (Equation 5) output f ∗, p-value. Derivation 5 (Sampling for Squared Loss). Let ℓbe the squared loss, and let p(y|x; θ1) and p(y|x; θ2) be Gaussian. When p(y|x) is approximated as in Equation 13, then the sampling distribution asymptotically maximizing βn,q in a pool-based setting resolves to q∗(x) ∝ s 2 (f1(x) −f2(x))2 (f 2 1 (x) + f 2 2 (x) + τ 2x)−(f 2 1 (x) −f 2 2 (x))2 (14) for all x ∈D, where τ 2 x denotes the sum of the variances of the predictive distributions at x ∈D. A proof is given in the online appendix. 
Variances of predictive distributions at instance x would be available from a probabilistic model such as a Gaussian process [7]. If only predictions fj(x) but no predictive distribution is available, we can assume peaked distributions with τ 2 x →0, leading to q∗(x) ∝(f1(x) −f2(x))2, or we can assume infinitely broad predictive distributions with τ 2 x →∞, leading to q∗(x) ∝|f1(x) −f2(x)|. We refer to these baselines as active0 and active∞. Algorithm 1 summarizes the active model comparison algorithm. It samples n instances with replacement from the pool according to the distribution prescribed by Derivations 4 (for zero-one loss) or 5 (for squared loss) and queries their label. Note that instances can be drawn more than once; in the special case that the labeling process is deterministic, the actual labeling costs may thus stay below the sample size. In this case, the loop is continued until the labeling budget is exhausted. We have so far focused on the problem of comparing the risks of two prediction models, such as a baseline and a challenger. We might also consider several alternative models; the objective of an evaluation could be to rank the models according to the risk incurred or to identify the model with lowest risk. Standard generalizations of the Wald test that compare multiple alternatives—for instance, within-subject ANOVA [11]—try to reject the null hypothesis that the means of all considered alternatives are equal. Rejection does not imply that all empirically observed differences are significant; for instance, the test could become significant because one of the alternatives performs clearly worst. Choosing a sampling distribution q that maximizes the power of such a test would thus in general not reflect the objectives of the empirical evaluation. In practice, researchers often resort to pairwise hypothesis testing when comparing multiple prediction models. 
Accordingly, we derive a heuristic sampling distribution for the comparison of multiple models θ1, ..., θk as a mixture of pairwise-optimal sampling distributions,

q*(x) = 1/(k(k−1)) Σ_{i≠j} q*_{i,j}(x),   (15)

where q*_{i,j} denotes the optimal distribution for comparing the models θi and θj given by Theorem 1. When comparing multiple models, we replace Equation 13 by a mixture over all models θ1, ..., θk.

4 Empirical Results

We study the empirical behavior of active comparison (Algorithm 1, labeled active in all diagrams) relative to a risk comparison based on a test sample drawn uniformly from the pool (labeled passive) and the baselines active≠, active0, and active∞ discussed in Section 3.2.

Figure 1: Model selection accuracy over labeling costs for the comparison of two prediction models (top row: Spam Filtering classification with 2 models; Abalone regression with 2 models; Inverse Dynamics regression with 2 models) and of multiple prediction models (bottom row: Object Recognition classification with 13 models; Abalone regression with 5 models; Inverse Dynamics regression with 5 models). Error bars indicate the standard error.
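Equation 15 is a plain average over the pairwise-optimal distributions; a minimal sketch (the helper name is ours):

```python
def mixture_distribution(pairwise_q, pool):
    # Equation 15: uniform mixture of the pairwise-optimal sampling
    # distributions q*_{i,j} over all ordered pairs i != j; when all
    # k(k-1) ordered pairs are present, len(pairwise_q) = k(k-1).
    n_pairs = len(pairwise_q)
    return {x: sum(qij[x] for qij in pairwise_q.values()) / n_pairs
            for x in pool}
```

Since each q*_{i,j} is a probability distribution over the pool, the mixture is again a valid sampling distribution.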
We also include the active risk estimator presented in [8] in our study, which infers optimal sampling distributions q*_1 and q*_2 for individually estimating the risks of the models θ1 and θ2. Test instances are sampled from the mixture distribution q*(x) = (1/2)q*_1(x) + (1/2)q*_2(x) (labeled ARE). Each comparison method returns the model with lower empirical risk and the p-value of a paired two-sided test. When studying classification, we also include the active learning algorithms A² [1] and IWAL [2] as baselines by using them to sample test instances. Their model space is the set of predictive models that are to be compared. We conduct experiments in two classification domains (spam filtering, object recognition) and two regression domains (inverse dynamics, Abalone), ranging from 4,109 to 169,612 instances. Kernelized logistic regression is employed for classification; Gaussian processes are employed for regression. In the spam filtering domain, we compare models that differ in the recency of their training data. In the object recognition domain, we compare SIFT-based recognizers using different interest point detectors (Harris operator, Canny edge detector, Förstner operator) and visual vocabularies. For regression, we compare models that differ in the choice of their kernel function (linear versus Matérn; polynomial kernels of different degrees). Models are trained on part of the available data; the rest of the data serves as the pool of unlabeled test instances for which labels can be queried. Results are averaged over 5,000 repetitions of the evaluation process. Further details on the datasets and experimental setup are included in the online appendix.

4.1 Identifying the Model With Lower True Risk

We measure model selection accuracy, defined as the fraction of experiments in which an evaluation method correctly identifies the model with lower true risk. The true risk is taken to be the risk over all test instances in the pool.
Figure 1 (top) shows that for the comparison of two models active results in significantly higher model selection accuracy than passive, or, equivalently, saves between 70% and 90% of labeling effort. Differences between active and the simplified variants active0, active∞, and active≠ are marginal. These variants do not require an estimate of p(y|x); the method is thus applicable even if no such estimate is available. A² and IWAL coincide with active≠ (cf. Section 3.2). Figure 1 (bottom) shows results when comparing multiple models. In the object recognition domain, active saves approximately 70% of labeling effort compared to passive. A² and IWAL outperform passive but are less accurate than active. For the regression domains, active saves between 60% and 85% of labeling effort compared to passive.

4.2 Significance Testing: Type I and Type II Errors

We now study how often a comparison method is able to reject the null hypothesis that two predictive models incur identical risks, and the calibration of the resulting p-values.

Figure 2: True-positive significance rate for different test levels α (left, left-center). Average p-value over labeling costs n (right-center, right). Error bars indicate the standard error.

For classification, the
method active≠ is equivalent to passive applied to D≠ = {x ∈ D | f1(x) ≠ f2(x)} (see Section 3.2).

Figure 3: False-positive significance rate over test level α (left, left-center). False-positive significance rate over labeling costs n (right-center, right). Error bars indicate the standard error.

Labeling effort is thus simply reduced by a factor of |D≠|/|D|. For regression, the analysis is less straightforward, as typically D = D≠. In this section we therefore focus on regression problems. Figure 2 (left, left-center) shows how often the active and passive comparison methods are able to reject the null hypothesis that the two models incur identical risk. The true risks incurred are never equal in these experiments. We observe that active is able to reject the null hypothesis more often and with a higher confidence. In the Abalone domain, active rejects the null hypothesis at α = 0.001 more often than passive is able to reject it at α = 0.1. Figure 2 (right-center, right) shows that active comparison also results in lower average p-values, in particular for large n. We also conduct experiments under the null hypothesis. Whenever a test instance x is sampled and the predictions y = f1(x) and y′ = f2(x) are queried, the predicted labels y and y′ are swapped with probability 0.5; this ensures that the true risks of f1 and f2 coincide.
Figure 3 (left, left-center) shows that Type I errors are well calibrated for both tests, as the false-positive rate stays below the (ideal) diagonal line when plotted against α. Figure 3 (right-center, right) shows that both tests are slightly conservative for small n, and approach the expected false-positive rate as n grows larger. We finally study a protocol in which test instances are drawn and labeled until the null hypothesis can be rejected or the labeling budget is exhausted. Results (included in the online appendix) indicate that active incurs the lowest average labeling costs, obtains significance results most often, and has the lowest likelihood of incorrectly choosing the model with higher true risk.

5 Conclusion

We have derived the sampling distribution that asymptotically maximizes the power of a statistical test that compares the risks of two predictive models. The sampling distribution intuitively gives preference to test instances on which the models disagree strongly. Empirically, we observed that the resulting active comparison method consistently outperforms a traditional comparison based on a uniform sample of test instances. Active comparison identifies the model with lower true risk more often, and is able to detect significant differences between the risks of two given models more quickly. In the four experimental domains that we studied, performing active comparison resulted in saved labeling effort of between 60% and over 90%. We also performed experiments under the null hypothesis that both models incur identical risks, and verified that active comparison does not lead to increased false-positive significance results.

Acknowledgements

We wish to thank Paul Prasse for his help with the experiments on object recognition data.

References

[1] M. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[2] A. Beygelzimer, S. Dasgupta, and J. Langford.
Importance weighted active learning. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[3] J. Geweke. Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57(6):1317–1339, 1989.
[4] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer, 2001.
[5] O. Madani, D. J. Lizotte, and R. Greiner. Active model selection. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, 2004.
[6] O. Maron and A. W. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In Proceedings of the 6th Annual Conference on Neural Information Processing Systems, 1993.
[7] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[8] C. Sawade, N. Landwehr, S. Bickel, and T. Scheffer. Active risk estimation. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[9] C. Sawade, N. Landwehr, and T. Scheffer. Active estimation of f-measures. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems, 2010.
[10] T. Scheffer and S. Wrobel. Finding the most interesting patterns in a database quickly by using sequential sampling. Journal of Machine Learning Research, 3:833–862, 2003.
[11] D. Sheskin. Handbook of Parametric and Nonparametric Statistical Procedures. Chapman & Hall, 2004.
[12] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2004.
Reducing statistical time-series problems to binary classification

Daniil Ryabko, SequeL-INRIA/LIFL-CNRS, Université de Lille, France. daniil@ryabko.net
Jérémie Mary, SequeL-INRIA/LIFL-CNRS, Université de Lille, France. Jeremie.Mary@inria.fr

Abstract

We show how binary classification methods developed to work on i.i.d. data can be used for solving statistical problems that are seemingly unrelated to classification and concern highly-dependent time series. Specifically, the problems of time-series clustering, homogeneity testing and the three-sample problem are addressed. The algorithms that we construct for solving these problems are based on a new metric between time-series distributions, which can be evaluated using binary classification methods. Universal consistency of the proposed algorithms is proven under most general assumptions. The theoretical results are illustrated with experiments on synthetic and real-world data.

1 Introduction

Binary classification is one of the most well-understood problems of machine learning and statistics: a wealth of efficient classification algorithms has been developed and applied to a wide range of applications. Perhaps one of the reasons for this is that binary classification is conceptually one of the simplest statistical learning problems. It is thus natural to try and use it as a building block for solving other, more complex, newer or just different problems; in other words, one can try to obtain efficient algorithms for different learning problems by reducing them to binary classification. This approach has been applied to many different problems, starting with multi-class classification, and including regression and ranking [3, 16], to give just a few examples. However, all of these problems are formulated in terms of independent and identically distributed (i.i.d.) samples. This is also the assumption underlying the theoretical analysis of most of the classification algorithms.
In this work we consider learning problems that concern time-series data for which independence assumptions do not hold. The series can exhibit arbitrary long-range dependence, and different time-series samples may be interdependent as well. Moreover, the learning problems that we consider — the three-sample problem, time-series clustering, and homogeneity testing — at first glance seem completely unrelated to classification. We show how the considered problems can be reduced to binary classification methods. The results include asymptotically consistent algorithms, as well as finite-sample analysis. To establish the consistency of the suggested methods, for clustering and the three-sample problem the only assumption that we make on the data is that the distributions generating the samples are stationary ergodic; this is one of the weakest assumptions used in statistics. For homogeneity testing we have to make some mixing assumptions in order to obtain consistency results (this is indeed unavoidable [22]). Mixing conditions are also used to obtain finite-sample performance guarantees for the first two problems. The proposed approach is based on a new distance between time-series distributions (that is, between probability distributions on the space of infinite sequences), which we call the telescope distance. This distance can be evaluated using binary classification methods, and its finite-sample estimates are shown to be asymptotically consistent. Three main building blocks are used to construct the telescope distance. The first one is a distance on finite-dimensional marginal distributions. The distance we use for this is the following: d_H(P, Q) := sup_{h∈H} |E_P h − E_Q h|, where P, Q are distributions and H is a set of functions. This distance can be estimated using binary classification methods, and thus can be used to reduce various statistical problems to the classification problem.
This distance was previously applied to such statistical problems as homogeneity testing and change-point estimation [14]. However, these applications so far have only concerned i.i.d. data, whereas we want to work with highly-dependent time series. Thus, the second building block is the recent results of [1, 2], which show that empirical estimates of d_H are consistent (under certain conditions on H) for arbitrary stationary ergodic distributions. This, however, is not enough: evaluating d_H for (stationary ergodic) time-series distributions means measuring the distance between their finite-dimensional marginals, and not between the distributions themselves. Finally, the third step in constructing the distance is what we call telescoping. It consists in summing the distances for all the (infinitely many) finite-dimensional marginals with decreasing weights. We show that the resulting distance (the telescope distance) can indeed be consistently estimated based on sampling, for arbitrary stationary ergodic distributions. Further, we show how this fact can be used to construct consistent algorithms for the considered problems on time series. Thus we can harness binary classification methods to solve statistical learning problems concerning time series. To illustrate the theoretical results in an experimental setting, we chose the problem of time-series clustering, since it is a difficult unsupervised problem which seems most different from the problem of binary classification. Experiments on both synthetic and real-world data are provided. The real-world setting concerns brain-computer interface (BCI) data, which is a notoriously challenging application, and on which the presented algorithm demonstrates competitive performance.
A related approach to address the problems considered here, as well as some related problems concerning stationary ergodic time series, is based on (consistent) empirical estimates of the distributional distance; see [23, 21, 13], and [8] about the distributional distance. The empirical distance is based on counting frequencies of bins of decreasing sizes and "telescoping." A similar telescoping trick is used in different problems, e.g. sequence prediction [19]. Another related approach to time-series analysis involves a different reduction, namely, that to data compression [20].

Organisation. Section 2 is preliminary. In Section 3 we introduce and discuss the telescope distance. Section 4 explains how this distance can be calculated using binary classification methods. Sections 5 and 6 are devoted to the three-sample problem and clustering, respectively. In Section 7, under some mixing conditions, we address the problems of homogeneity testing, clustering with unknown k, and finite-sample performance guarantees. Section 8 presents the experimental evaluation. Some proofs are deferred to the supplementary material.

2 Notation and definitions

Let (X, F_X) be a measurable space (the domain). Time-series (or process) distributions are probability measures on the space (X^N, F_N) of one-way infinite sequences (where F_N is the Borel sigma-algebra of X^N). We use the abbreviation X_{1..k} for X_1, . . . , X_k. All sets and functions introduced below (in particular, the sets H_k and their elements) are assumed measurable. A distribution ρ is stationary if ρ(X_{1..k} ∈ A) = ρ(X_{n+1..n+k} ∈ A) for all A ∈ F_X^k, k, n ∈ N (with F_X^k being the sigma-algebra of X^k). A stationary distribution is called (stationary) ergodic if

lim_{n→∞} (1/n) Σ_{i=1}^{n−k+1} I{X_{i..i+k−1} ∈ A} = ρ(A)  ρ-a.s.

for every A ∈ F_X^k, k ∈ N. (This definition, which is more suited for the purposes of this work, is equivalent to the usual one expressed in terms of invariant sets; see e.g. [8].)
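The ergodic averages in this definition are plain empirical frequencies of k-blocks and can be computed directly; a minimal sketch (the helper name is ours, with the set A passed as a predicate on k-tuples):

```python
def block_frequency(xs, k, event):
    # Empirical counterpart of the ergodic average:
    # (1/n) * sum over i = 1..n-k+1 of I{ X_{i..i+k-1} in A }.
    n = len(xs)
    hits = sum(1 for i in range(n - k + 1) if event(tuple(xs[i:i + k])))
    return hits / n
```

For a stationary ergodic ρ, these frequencies converge almost surely to ρ(A) as the sample grows.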
3 A distance between time-series distributions

We start with a distance between distributions on X, and then we will extend it to distributions on X^∞. For two probability distributions P and Q on (X, F) and a set H of measurable functions on X, one can define the distance

d_H(P, Q) := sup_{h∈H} |E_P h − E_Q h|.

Special cases of this distance are the Kolmogorov-Smirnov [15], Kantorovich-Rubinstein [11] and Fortet-Mourier [7] metrics; the general case has been studied since at least [26]. We will be interested in the cases where d_H(P, Q) = 0 implies P = Q. Note that in this case d_H is a metric (the rest of the properties are easy to see). For reasons that will become apparent shortly (see the Remark below), we will be mainly interested in the sets H that consist of indicator functions. In this case we can identify each f ∈ H with the set {x : f(x) = 1} ⊂ X and (by a slight abuse of notation) write d_H(P, Q) := sup_{h∈H} |P(h) − Q(h)|. It is easy to check that in this case d_H is a metric if and only if H generates F. The latter property is often easy to verify directly. First of all, it trivially holds for the case where H is the set of halfspaces in a Euclidean X. It is also easy to check that it holds if H is the set of halfspaces in the feature space of most commonly used kernels (provided the feature space is of the same or higher dimension than the input space), such as polynomial and Gaussian kernels. Based on d_H we can construct a distance between time-series probability distributions. For two time-series distributions ρ1, ρ2 we take the d_H between the k-dimensional marginal distributions of ρ1 and ρ2 for each k ∈ N, and sum them all up with decreasing weights.

Definition 1 (telescope distance D). For two time-series distributions ρ1 and ρ2 on the space (X^∞, F^∞) and a sequence of sets of functions H = (H1, H2, . . . ), define the telescope distance

D_H(ρ1, ρ2) := Σ_{k=1}^{∞} w_k sup_{h∈H_k} |E_{ρ1} h(X_1, . . . , X_k) − E_{ρ2} h(Y_1, . . . , Y_k)|,   (1)

where w_k, k ∈ N, is a sequence of positive summable real weights (e.g. w_k = 1/k²).

Lemma 1. D_H is a metric if and only if d_{H_k} is a metric for every k ∈ N.

Proof. The statement follows from the fact that two process distributions are the same if and only if all their finite-dimensional marginals coincide.

Definition 2 (empirical telescope distance D̂). For a pair of samples X_{1..n} and Y_{1..m} define the empirical telescope distance as

D̂_H(X_{1..n}, Y_{1..m}) := Σ_{k=1}^{min{m,n}} w_k sup_{h∈H_k} | (1/(n−k+1)) Σ_{i=1}^{n−k+1} h(X_{i..i+k−1}) − (1/(m−k+1)) Σ_{i=1}^{m−k+1} h(Y_{i..i+k−1}) |.   (2)

All the methods presented in this work are based on the empirical telescope distance. The key fact is that it is an asymptotically consistent estimate of the telescope distance, that is, the latter can be consistently estimated based on sampling.

Theorem 1. Let H = (H1, H2, . . . ), H_k ⊂ X^k, k ∈ N, be a sequence of separable sets of indicator functions of finite VC dimension such that H_k generates F_X^k. Then, for every pair of stationary ergodic time-series distributions ρ_X and ρ_Y generating samples X_{1..n} and Y_{1..m} we have

lim_{n,m→∞} D̂_H(X_{1..n}, Y_{1..m}) = D_H(ρ_X, ρ_Y).   (3)

The proof is deferred to the supplementary material. Note that D̂_H is a biased estimate of D_H, and, unlike in the i.i.d. case, the bias may depend on the distributions; however, the bias is o(n).

Remark. The condition that the sets H_k are sets of indicator functions of finite VC dimension comes from [2], where it is shown that for any stationary ergodic distribution ρ, under these conditions, sup_{h∈H_k} (1/(n−k+1)) Σ_{i=1}^{n−k+1} h(X_{i..i+k−1}) is an asymptotically consistent estimate of E_ρ h(X_1, . . . , X_k). This fact implies that d_H can be consistently estimated, from which the theorem is derived.

4 Calculating D̂_H using binary classification methods

The methods for solving various statistical problems that we suggest are all based on D̂_H. The main appeal of this approach is that D̂_H can be calculated using binary classification methods. Here we explain how to do it.
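For intuition, the empirical telescope distance of Definition 2 can be computed exactly when each H_k is a small, explicitly enumerated family of indicator functions. The sketch below (our own toy stand-in for the VC classes of Theorem 1) takes the supremum by enumeration rather than by a learning algorithm:

```python
def empirical_telescope(X, Y, Hks, w):
    # Empirical telescope distance (Equation 2). Hks maps k to a
    # finite list of 0/1-valued functions on k-tuples; w(k) gives
    # the weight w_k (e.g. 1/k**2). Missing H_k are skipped.
    total = 0.0
    for k in range(1, min(len(X), len(Y)) + 1):
        if k not in Hks:
            continue
        def avg(Z, h):
            m = len(Z) - k + 1
            return sum(h(tuple(Z[i:i + k])) for i in range(m)) / m
        total += w(k) * max(abs(avg(X, h) - avg(Y, h)) for h in Hks[k])
    return total
```

In practice the enumeration over H_k is replaced by empirical risk minimization, as explained next in the text.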
The definition (2) of D̂_H involves calculating l summands (where l := min{n, m}), that is,

sup_{h∈H_k} | (1/(n−k+1)) Σ_{i=1}^{n−k+1} h(X_{i..i+k−1}) − (1/(m−k+1)) Σ_{i=1}^{m−k+1} h(Y_{i..i+k−1}) |   (4)

for each k = 1..l. Assuming that the h ∈ H_k are indicator functions, calculating each of the summands amounts to solving the following k-dimensional binary classification problem. Consider X_{i..i+k−1}, i = 1..n−k+1, as class-1 examples and Y_{i..i+k−1}, i = 1..m−k+1, as class-0 examples. The supremum (4) is attained on the h ∈ H_k that minimizes the empirical risk, with examples weighted with respect to the sample size. Indeed, we can define the weighted empirical risk of any h ∈ H_k as

(1/(n−k+1)) Σ_{i=1}^{n−k+1} (1 − h(X_{i..i+k−1})) + (1/(m−k+1)) Σ_{i=1}^{m−k+1} h(Y_{i..i+k−1}),

which is obviously minimized by any h ∈ H_k that attains (4). Thus, as long as we have a way to find an h ∈ H_k that minimizes the empirical risk, we have a consistent estimate of D_H(ρ_X, ρ_Y), under the mild conditions on H required by Theorem 1. Since the dimension of the resulting classification problems grows with the length of the sequences, one should prefer methods that work well in high dimensions, such as soft-margin SVMs [6]. A particularly remarkable feature is that the choice of H_k is much easier for the problems that we consider than in the binary classification problem. Specifically, if (for some fixed k) the classifier that achieves the minimal (Bayes) error for the classification problem is not in H_k, then obviously the error of an empirical risk minimizer will not tend to zero, no matter how much data we have. In contrast, all we need to achieve an asymptotically zero error in estimating D̂ (and therefore, in the learning problems considered below) is that the sets H_k generate F_X^k and have a finite VC dimension (for each k). This is the case already for the set of hyperplanes in R^k!
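The relation between the supremum (4) and the weighted empirical risk can be checked directly on a toy class (the helper name and the tiny class are ours; exhaustive search stands in for an actual learner). Without the absolute value, the identity sup = 1 − min_risk holds for any H; with a class closed under complement, the supremum with absolute values coincides with it.

```python
def sup_and_risk(X_blocks, Y_blocks, H):
    # X_blocks are the class-1 k-grams, Y_blocks the class-0 k-grams.
    # Returns the supremum of the averaged difference and the minimal
    # weighted empirical risk over the 0/1-valued class H.
    def avg(blocks, h):
        return sum(h(b) for b in blocks) / len(blocks)
    sup_diff = max(avg(X_blocks, h) - avg(Y_blocks, h) for h in H)
    min_risk = min((1 - avg(X_blocks, h)) + avg(Y_blocks, h) for h in H)
    return sup_diff, min_risk
```

This is exactly why any empirical-risk-minimizing classifier can be used to evaluate the summands of (2).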
Thus, while the choice of H_k (or, say, of the kernel to use in an SVM) is still important from the practical point of view, it is almost irrelevant for the theoretical consistency results. We thus have the following.

Claim 1. The approximation error |D_H(P, Q) − D̂_H(X, Y)|, and thus the error of the algorithms below, can be much smaller than the error of the classification algorithms used to calculate D̂_H(X, Y).

Finally, we remark that while in (2) the number of summands is l, it can be replaced with any γ_l such that γ_l → ∞, without affecting any asymptotic consistency results. A practically viable choice is γ_l = log l; in fact, there is no reason to choose a faster-growing γ_l, since the estimates for higher-order summands will not have enough data to converge. This is also the value we use in the experiments.

5 The three-sample problem

We start with a conceptually simple problem known in statistics as the three-sample problem (sometimes also called time-series classification). We are given three samples X = (X_1, . . . , X_n), Y = (Y_1, . . . , Y_m) and Z = (Z_1, . . . , Z_l). It is known that X and Y were generated by different time-series distributions, whereas Z was generated by the same distribution as either X or Y. It is required to find out which one is the case. Both distributions are assumed to be stationary ergodic, but no further assumptions are made about them (no independence, mixing or memory assumptions). The three-sample problem for dependent time series has been addressed in [9] for Markov processes and in [23] for stationary ergodic time series. The latter work uses an approach based on the distributional distance. Indeed, to solve this problem it suffices to have consistent estimates of some distance between time-series distributions. Thus, we can use the telescope distance. The following statement is a simple corollary of Theorem 1.

Theorem 2. Let the samples X = (X_1, . . . , X_n), Y = (Y_1, . . . , Y_m) and Z = (Z_1, . . .
, Z_l) be generated by stationary ergodic distributions ρ_X, ρ_Y and ρ_Z, with ρ_X ≠ ρ_Y and either (i) ρ_Z = ρ_X or (ii) ρ_Z = ρ_Y. Assume that the sets H_k ⊂ X^k, k ∈ N, are separable sets of indicator functions of finite VC dimension such that H_k generates F_X^k. A test that declares (i) if D̂_H(Z, X) ≤ D̂_H(Z, Y) and (ii) otherwise makes only finitely many errors with probability 1 as n, m, l → ∞.

It is straightforward to extend this theorem to more than two classes; in other words, instead of X and Y one can have an arbitrary number of samples from different stationary ergodic distributions.

6 Clustering time series

We are given N samples X^1 = (X^1_1, . . . , X^1_{n_1}), . . . , X^N = (X^N_1, . . . , X^N_{n_N}), generated by k different stationary ergodic time-series distributions ρ_1, . . . , ρ_k. The number k is known, but the distributions are not. It is required to group the N samples into k groups (clusters), that is, to output a partitioning of {X^1, . . . , X^N} into k sets. While there may be many different approaches to define what a good clustering is (and, in general, deciding what a good clustering is is a difficult problem), for the problem of clustering time-series samples there is a natural choice, proposed in [21]: those samples should be put together that were generated by the same distribution. Thus, define the target clustering as the partitioning in which those and only those samples that were generated by the same distribution are placed in the same cluster. A clustering algorithm is called asymptotically consistent if with probability 1 there is an n′ such that the algorithm produces the target clustering whenever max_{i=1..N} n_i ≥ n′. Again, to solve this problem it is enough to have a metric between time-series distributions that can be consistently estimated. Our approach here is based on the telescope distance, and thus we use D̂.
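The test of Theorem 2 is a one-line decision rule once a distance estimate is available; a minimal sketch (the function name is ours, and any consistent sample distance, e.g. the empirical telescope distance, can be plugged in):

```python
def three_sample_test(X, Y, Z, dist):
    # Declare that Z was generated by the distribution of X (case (i))
    # iff Z is at least as close to X as to Y under `dist`.
    return "X" if dist(Z, X) <= dist(Z, Y) else "Y"
```

Consistency of the distance estimate directly yields the finitely-many-errors guarantee of the theorem.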
The clustering problem is relatively simple if the target clustering has what is called the strict separation property [4]: every two points in the same target cluster are closer to each other than to any point from a different target cluster. The following statement is an easy corollary of Theorem 1.

Theorem 3. Assume that the sets H_k ⊂ X^k, k ∈ N, are separable sets of indicator functions of finite VC dimension, such that H_k generates F_X^k. If the distributions ρ_1, . . . , ρ_k generating the samples X^1 = (X^1_1, . . . , X^1_{n_1}), . . . , X^N = (X^N_1, . . . , X^N_{n_N}) are stationary ergodic, then with probability 1 from some n := max_{i=1..N} n_i on, the target clustering has the strict separation property with respect to D̂_H.

With the strict separation property at hand, it is easy to find asymptotically consistent algorithms. We will give some simple examples, but the theorem below can be extended to many other distance-based clustering algorithms. The average linkage algorithm works as follows. The distance between clusters is defined as the average distance between points in these clusters. First, put each point into a separate cluster. Then, merge the two closest clusters; repeat the last step until the total number of clusters is k. The farthest point clustering works as follows. Assign c_1 := X^1 to the first cluster. For i = 2..k, find the point X^j, j ∈ {1..N}, that maximizes the distance min_{t=1..i−1} D̂_H(X^j, c_t) (to the points already assigned to clusters) and assign c_i := X^j to cluster i. Then assign each of the remaining points to the nearest cluster. The following statement is a corollary of Theorem 3.

Theorem 4. Under the conditions of Theorem 3, average linkage and farthest point clusterings are asymptotically consistent.

Note that we do not require the samples to be independent; the joint distributions of the samples may be completely arbitrary, as long as the marginal distribution of each sample is stationary ergodic.
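The farthest point clustering described above can be sketched over any pairwise distance between samples (the function name is ours; in the setting of Theorem 4 one would pass the empirical telescope distance):

```python
def farthest_point_clustering(samples, k, dist):
    # Assign c_1 := first sample; then repeatedly pick the sample
    # farthest from the centers chosen so far, and finally assign
    # every sample to its nearest center. Returns one center index
    # per sample.
    centers = [0]
    for _ in range(1, k):
        j = max(range(len(samples)),
                key=lambda i: min(dist(samples[i], samples[c])
                                  for c in centers))
        centers.append(j)
    return [min(centers, key=lambda c: dist(samples[i], samples[c]))
            for i in range(len(samples))]
```

Under strict separation, this procedure recovers the target clustering.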
These results can be extended to the online setting in the spirit of [13].

7 Speed of convergence

The results established so far are asymptotic out of necessity: they are established under the assumption that the distributions involved are stationary ergodic, which is too general to allow for any meaningful finite-time performance guarantees. Moreover, some statistical problems, such as homogeneity testing or clustering when the number of clusters is unknown, are provably impossible to solve under this assumption [22]. While it is interesting to be able to establish consistency results under such general assumptions, it is also interesting to see what results can be obtained under stronger assumptions. Moreover, since it is usually not known in advance whether the data at hand satisfy given assumptions or not, it appears important to have methods that are both asymptotically consistent in the general setting and have finite-time performance guarantees under stronger assumptions. In this section we will look at the speed of convergence of D̂ under certain mixing conditions, and use it to construct solutions for the problems of homogeneity testing and clustering with an unknown number of clusters, as well as to establish finite-time performance guarantees for the methods presented in the previous sections. A stationary distribution on the space of one-way infinite sequences (X^N, F_N) can be uniquely extended to a stationary distribution on the space of two-way infinite sequences (X^Z, F_Z) of the form . . . , X_{−1}, X_0, X_1, . . . .

Definition 3 (β-mixing coefficients). For a process distribution ρ define the mixing coefficients

β(ρ, k) := sup_{A ∈ σ(X_{−∞..0}), B ∈ σ(X_{k..∞})} |ρ(A ∩ B) − ρ(A)ρ(B)|,

where σ(..) denotes the sigma-algebra generated by the random variables in brackets. When β(ρ, k) → 0 the process ρ is called absolutely regular; this condition is much stronger than ergodicity, but is much weaker than the i.i.d. assumption.
7.1 Speed of convergence of D̂

Assume that a sample X_{1..n} is generated by a distribution ρ that is uniformly β-mixing with coefficients β(ρ, k). Assume further that H^k is a set of indicator functions with finite VC dimension d_k, for each k ∈ N. The general tool that we use to obtain performance guarantees in this section is the following bound, which can be obtained from the results of [12]:
$$q_n(\rho, \mathcal{H}^k, \varepsilon) := \rho\Big(\sup_{h \in \mathcal{H}^k} \Big|\frac{1}{n-k+1}\sum_{i=1}^{n-k+1} h(X_{i..i+k-1}) - \mathbb{E}_\rho h(X_{1..k})\Big| > \varepsilon\Big) \le n\beta(\rho, t_n - k) + 8\, t_n^{\,d_k+1} e^{-l_n \varepsilon^2/8}, \qquad (5)$$
where t_n are any integers in 1..n and l_n = n/t_n. The parameters t_n should be set according to the values of β in order to optimize the bound. One can use similar bounds for classes of finite Pollard dimension [18], or more general bounds expressed in terms of covering numbers, such as those given in [12]. Here we consider classes of finite VC dimension only for ease of exposition and for the sake of continuity with the previous section (where it was necessary).

Furthermore, for the rest of this section we assume geometrically β-mixing distributions, that is, β(ρ, t) ≤ γ^t for some γ < 1. Letting l_n = t_n = √n, the bound (5) becomes
$$q_n(\rho, \mathcal{H}^k, \varepsilon) \le n\gamma^{\sqrt{n}-k} + 8\, n^{(d_k+1)/2} e^{-\sqrt{n}\,\varepsilon^2/8}. \qquad (6)$$

Lemma 2. Let two samples X_{1..n} and Y_{1..m} be generated by stationary distributions ρ_X and ρ_Y whose β-mixing coefficients satisfy β(ρ_·, t) ≤ γ^t for some γ < 1. Let H^k, k ∈ N, be sets of indicator functions on X^k whose VC dimension d_k is finite and non-decreasing with k. Then
$$P\big(|\hat{D}_H(X_{1..n}, Y_{1..m}) - D_H(\rho_X, \rho_Y)| > \varepsilon\big) \le 2\Delta(\varepsilon/4, n'), \qquad (7)$$
where n' := min{n, m}, the probability is with respect to ρ_X × ρ_Y, and
$$\Delta(\varepsilon, n) := -\log\varepsilon\,\big(n\gamma^{\sqrt{n}+\log\varepsilon} + 8\, n^{(d_{-\log\varepsilon}+1)/2} e^{-\sqrt{n}\,\varepsilon^2/8}\big). \qquad (8)$$

7.2 Homogeneity testing

Given two samples X_{1..n} and Y_{1..m} generated by distributions ρ_X and ρ_Y respectively, the problem of homogeneity testing (or the two-sample problem) consists in deciding whether ρ_X = ρ_Y.
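To get a feel for the bound ∆(ε, n) of Eq. (8), it can be evaluated numerically. The following sketch is ours, not the paper's: the paper leaves the logarithm base implicit, so the natural logarithm is an assumption here, as is treating the VC dimension as a function of a real-valued index.

```python
import math

def delta_bound(eps, n, gamma, d_of_k):
    """Numerically evaluate Delta(eps, n) from Eq. (8).

    gamma < 1 is the geometric beta-mixing rate; d_of_k maps an index k
    to the VC dimension d_k.  Natural log is an assumption (the paper
    does not fix the base), as is evaluating d at the real value -log(eps).
    """
    k = -math.log(eps)
    d = d_of_k(k)
    return k * (n * gamma ** (math.sqrt(n) + math.log(eps))
                + 8 * n ** ((d + 1) / 2)
                * math.exp(-math.sqrt(n) * eps ** 2 / 8))

# with halfspaces (d_k = k + 1) the bound decays as the sample grows
b_short = delta_bound(0.5, 10**2, 0.9, lambda k: k + 1)
b_long = delta_bound(0.5, 10**6, 0.9, lambda k: k + 1)
```

For short samples the bound is vacuous (much larger than 1), and it only becomes meaningful once √n ε²/8 dominates the polynomial factor, which is the trade-off the threshold sequences ε_n below have to respect.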
A test is called (asymptotically) consistent if its probability of error goes to zero as n' := min{m, n} goes to infinity. In general, for stationary ergodic time-series distributions there is no asymptotically consistent test for homogeneity [22], so stronger assumptions are in order. Homogeneity testing is one of the classical problems of mathematical statistics, and one of the most studied ones. A vast literature exists on homogeneity testing for i.i.d. data, and for dependent processes as well; we do not attempt to survey it here. Our contribution to this line of research is to show that, for strongly dependent processes satisfying certain mixing conditions, this problem can be reduced (via the telescope distance) to binary classification.

It is easy to see that under the mixing conditions of Lemma 2 a consistent test for homogeneity exists, and finite-sample performance guarantees can be obtained. It is enough to find a sequence ε_n → 0 such that ∆(ε_n, n) → 0 (see (8)). The test can then be constructed as follows: say that the two sequences X_{1..n} and Y_{1..m} were generated by the same distribution if D̂_H(X_{1..n}, Y_{1..m}) < ε_{min{n,m}}; otherwise say that they were generated by different distributions. The following statement is an immediate consequence of Lemma 2.

Theorem 5. Under the conditions of Lemma 2, the probability of Type I error (the distributions are the same but the test says they are different) of the described test is upper-bounded by 4∆(ε/8, n'). The probability of Type II error (the distributions are different but the test says they are the same) is upper-bounded by 4∆(δ − ε/8, n'), where δ := D_H(ρ_X, ρ_Y)/2.

The optimal choice of ε_n may depend on the speed at which d_k (the VC dimension of H^k) grows; however, for most natural cases (recall that the H^k are also parameters of the algorithm) this growth is polynomial, so the main term to control is e^{−√n ε²/8}.
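The test just described is a one-line threshold rule. A minimal sketch, with function names of our own choosing; the default threshold ε_n = n^(−1/8) is the choice suggested in the text for halfspaces, and the distance estimate is taken as given:

```python
def homogeneity_test(d_hat, n, m, eps_of_n=lambda n: n ** (-1.0 / 8)):
    """Declare the two samples to come from the same distribution iff the
    empirical telescope distance d_hat = D_H(X_{1..n}, Y_{1..m}) falls
    below the threshold eps_{min(n, m)}."""
    return d_hat < eps_of_n(min(n, m))

# with n = m = 2**16 the default threshold is 2**(-2) = 0.25
same = homogeneity_test(0.01, 2**16, 2**16)
different = not homogeneity_test(0.40, 2**16, 2**16)
```

All of the statistical work is hidden in the estimate d_hat; the point of Theorem 5 is that, under the mixing conditions, both error probabilities of this rule are controlled by the ∆ bound.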
For example, if H^k is the set of halfspaces in X^k = R^k, then d_k = k + 1 and one can choose ε_n := n^{−1/8}. The resulting probability of Type I error decreases as exp(−n^{1/4}).

7.3 Clustering with a known or unknown number of clusters

If the distributions generating the samples satisfy certain mixing conditions, then we can augment Theorems 3 and 4 with finite-sample performance guarantees.

Theorem 6. Let the distributions ρ_1, ..., ρ_k generating the samples X^1 = (X^1_1, ..., X^1_{n_1}), ..., X^N = (X^N_1, ..., X^N_{n_N}) satisfy the conditions of Lemma 2. Define δ := min_{i,j=1..N, i≠j} D_H(ρ_i, ρ_j) and n := min_{i=1..N} n_i. Then with probability at least 1 − N(N−1)∆(δ/4, n)/2 the target clustering of the samples has the strict separation property. In this case single linkage and farthest point algorithms output the target clustering.

Proof. Note that a sufficient condition for the strict separation property to hold is that, for every one of the N(N−1)/2 pairs of samples, the estimate D̂_H(X^i, X^j), i, j = 1..N, is within δ/4 of the D_H distance between the corresponding distributions. It remains to apply Lemma 2 to obtain the first statement; the second statement is obvious (cf. Theorem 4).

As with homogeneity testing, while in the general case of stationary ergodic distributions it is impossible to have a consistent clustering algorithm when the number of clusters k is unknown, the situation changes if the distributions satisfy certain mixing conditions. In this case a consistent clustering algorithm can be obtained as follows. Assign to the same cluster all samples that are at most ε_n-far from each other, where the threshold ε_n is selected the same way as for homogeneity testing: ε_n → 0 and ∆(ε_n, n) → 0. The optimal choice of this parameter depends on the choice of H^k through the speed of growth of the VC dimension d_k of these sets. Theorem 7.
Given N samples generated by k different stationary distributions ρ_i, i = 1..k (k unknown), all satisfying the conditions of Lemma 2, the probability of error (misclustering at least one sample) of the described algorithm is upper-bounded by 2N(N−1) max{∆(ε/8, n), ∆(δ − ε/8, n)}, where δ := min_{i,j=1..k, i≠j} D_H(ρ_i, ρ_j) and n = min_{i=1..N} n_i, with n_i, i = 1..N, being the lengths of the samples.

8 Experiments

For experimental evaluation we chose the problem of time-series clustering. Average-linkage clustering is used, with the telescope distance between samples calculated using an SVM, as described in Section 4. In all experiments, the SVM is used with a radial basis kernel, with the default parameters of libsvm [5].

8.1 Synthetic data

For the artificial setting we have chosen highly dependent time-series distributions which have the same single-dimensional marginals and which cannot be well approximated by finite- or countable-state models. The distributions ρ(α), α ∈ (0, 1), are constructed as follows. Select r_0 ∈ [0, 1] uniformly at random; then, for each i = 1..n, obtain r_i by shifting r_{i−1} by α to the right and removing the integer part, i.e., r_i = (r_{i−1} + α) mod 1. The time series (X_1, X_2, ...) is then obtained from the r_i by drawing a point from a distribution N_1 if r_i < 0.5 and from N_2 otherwise. N_1 is a 3-dimensional Gaussian with mean 0 and covariance matrix Id × 1/4; N_2 is the same but with mean 1. If α is irrational¹ then the distribution ρ(α) is stationary ergodic, but does not belong to any simpler natural distribution family [25]. The single-dimensional marginal is the same for all values of α. The latter two properties make all parametric and most non-parametric methods inapplicable to this problem. In our experiments, we use two process distributions ρ(α_i), i ∈ {1, 2}, with α_1 = 0.31... and α_2 = 0.35.... The dependence of the error rate on the length of the time series is shown in Figure 1. One clustering experiment on sequences of length 1000 takes about 5 minutes on a standard laptop.
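The construction of ρ(α) is easy to reproduce. A sketch of a generator, assuming the description above; the function name and the use of Python's random module are our own choices:

```python
import random

def sample_rho_alpha(alpha, n, dim=3, rng=random):
    """Draw one length-n trajectory from rho(alpha) as described above:
    r_0 uniform on [0, 1], r_i = (r_{i-1} + alpha) mod 1, and X_i drawn
    from N_1 (mean 0) if r_i < 0.5, from N_2 (mean 1) otherwise, both
    with covariance Id * 1/4, i.e. per-coordinate std 0.5."""
    r = rng.random()
    xs = []
    for _ in range(n):
        r = (r + alpha) % 1.0
        mean = 0.0 if r < 0.5 else 1.0
        xs.append([rng.gauss(mean, 0.5) for _ in range(dim)])
    return xs

series = sample_rho_alpha(0.31, 1000, rng=random.Random(0))
```

Note why this family is hard: the marginal of each X_i is the same half-and-half Gaussian mixture for every α, so only the long-range dependence structure distinguishes ρ(α_1) from ρ(α_2).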
8.2 Real data

To demonstrate the applicability of the proposed methods to realistic scenarios, we chose the brain-computer interface data from BCI competition III [17]. The dataset consists of (pre-processed) BCI recordings of mental imagery: a person is thinking about one of three subjects (left foot, right foot, a random letter). Originally, each time series consisted of several consecutive sequences of different classes, and the problem was supervised: three time series for training and one for testing. We split each of the original time series into classes, and then used our clustering algorithm in a completely unsupervised setting. The original problem is 96-dimensional, but we used only the first 3 dimensions (using all 96 gives worse performance). The typical sequence length is 300. The performance is reported in Table 1, labeled TSSVM. All the computation for this experiment takes approximately 6 minutes on a standard laptop.

The following methods were used for comparison. First, we used dynamic time warping (DTW) [24], which is a popular baseline approach for time-series clustering. The other two methods in Table 1 are from [10]. The comparison is not fully relevant, since the results in [10] are for different settings: KCpA was used for change-point estimation (a different but also unsupervised setting), and SVM was used in a supervised setting. The latter is of particular interest since the classification method we use inside the telescope distance is also an SVM, but our setting is unsupervised (clustering).

Figure 1: Error of two-class clustering using TSSVM (error rate vs. time of observation); 10 time series in each target cluster, averaged over 20 runs.

Table 1: Clustering accuracy in the BCI dataset; 3 subjects (columns), 4 methods (rows). Our method is TSSVM.

            s1     s2     s3
  TSSVM     84%    81%    61%
  DTW       46%    41%    36%
  KCpA      79%    74%    61%
  SVM       76%    69%    60%

Acknowledgments.
This research was funded by the Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council and FEDER (Contrat de Projets Etat Region CPER 2007-2013), ANR projects EXPLO-RA (ANR-08-COSI-004), Lampada (ANR-09-EMER-007) and CoAdapt, and by the European Community's FP7 Program under grant agreements n° 216886 (PASCAL2) and n° 270327 (CompLACS).

¹ In the experiments, α is simulated by a long double with a long mantissa.

References

[1] T. M. Adams and A. B. Nobel. Uniform convergence of Vapnik-Chervonenkis classes under ergodic sampling. The Annals of Probability, 38:1345–1367, 2010.
[2] T. M. Adams and A. B. Nobel. Uniform approximation of Vapnik-Chervonenkis classes. Bernoulli, 18(4):1310–1319, 2012.
[3] M.-F. Balcan, N. Bansal, A. Beygelzimer, D. Coppersmith, J. Langford, and G. Sorkin. Robust reductions from ranking to classification. In COLT'07, v. 4539 of LNCS, pages 604–619, 2007.
[4] M.-F. Balcan, A. Blum, and S. Vempala. A discriminative framework for clustering via similarity functions. In STOC, pages 671–680. ACM, 2008.
[5] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[6] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[7] R. Fortet and E. Mourier. Convergence de la répartition empirique vers la répartition théorique. Ann. Sci. Ec. Norm. Super., III. Ser, 70(3):267–285, 1953.
[8] R. Gray. Probability, Random Processes, and Ergodic Properties. Springer Verlag, 1988.
[9] M. Gutman. Asymptotically optimal classification for multiple tests with empirically observed statistics. IEEE Transactions on Information Theory, 35(2):402–408, 1989.
[10] Z. Harchaoui, F. Bach, and E. Moulines. Kernel change-point analysis. In NIPS, pages 609–616, 2008.
[11] L. V. Kantorovich and G. S. Rubinstein. On a function space in certain extremal problems. Dokl. Akad. Nauk USSR, 115(6):1058–1061, 1957.
[12] R. L. Karandikar and M. Vidyasagar. Rates of uniform convergence of empirical means with mixing processes. Statistics and Probability Letters, 58:297–307, 2002.
[13] A. Khaleghi, D. Ryabko, J. Mary, and P. Preux. Online clustering of processes. In AISTATS, JMLR W&CP 22, pages 601–609, 2012.
[14] D. Kifer, S. Ben-David, and J. Gehrke. Detecting change in data streams. In VLDB (v. 30), pages 180–191, 2004.
[15] A. N. Kolmogorov. Sulla determinazione empirica di una legge di distribuzione. G. Inst. Ital. Attuari, pages 83–91, 1933.
[16] John Langford, Roberto Oliveira, and Bianca Zadrozny. Predicting conditional quantiles via reduction to classification. In UAI, 2006.
[17] José del R. Millán. On the need for on-line learning in brain-computer interfaces. In Proc. of the Int. Joint Conf. on Neural Networks, 2004.
[18] D. Pollard. Convergence of Stochastic Processes. Springer, 1984.
[19] B. Ryabko. Prediction of random sequences and universal coding. Problems of Information Transmission, 24:87–96, 1988.
[20] B. Ryabko. Compression-based methods for nonparametric prediction and estimation of some characteristics of time series. IEEE Transactions on Information Theory, 55:4309–4315, 2009.
[21] D. Ryabko. Clustering processes. In Proc. ICML 2010, pages 919–926, Haifa, Israel, 2010.
[22] D. Ryabko. Discrimination between B-processes is impossible. Journal of Theoretical Probability, 23(2):565–575, 2010.
[23] D. Ryabko and B. Ryabko. Nonparametric statistical inference for ergodic processes. IEEE Transactions on Information Theory, 56(3):1430–1435, 2010.
[24] H. Sakoe and S. Chiba. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech and Signal Processing, 26(1):43–49, 1978.
[25] P. Shields. The Ergodic Theory of Discrete Sample Paths. AMS Bookstore, 1996.
[26] V. M. Zolotarev. Metric distances in spaces of random variables and their distributions. Math. USSR-Sb, 30(3):373–401, 1976.
Max-Margin Structured Output Regression for Spatio-Temporal Action Localization

Du Tran and Junsong Yuan
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
trandu@gmail.com, jsyuan@ntu.edu.sg

Abstract

Structured output learning has been successfully applied to object localization, where the mapping between an image and an object bounding box can be well captured. Its extension to action localization in videos, however, is much more challenging, because we need to predict the locations of the action patterns both spatially and temporally, i.e., identify a sequence of bounding boxes that tracks the action in the video. The problem becomes intractable due to the exponentially large size of the structured video space in which actions could occur. We propose a novel structured learning approach for spatio-temporal action localization: the mapping between a video and a spatio-temporal action trajectory is learned. The intractable inference and learning problems are addressed by leveraging an efficient Max-Path search method, making it feasible to optimize the model over the whole structured space. Experiments on two challenging benchmark datasets show that our proposed method outperforms the state-of-the-art methods.

1 Introduction

Blaschko and Lampert have recently shown that object localization can be approached as a structured regression problem [2]. Instead of modeling object localization as binary classification and treating every bounding box independently, their method trains a discriminant function directly for predicting the bounding boxes of objects located in images. Compared with the conventional sliding-window approach, it considers the correlations among the output variables and avoids an exhaustive search over subwindows for object detection.
Motivated by the successful application of structured regression to object localization [2], it is natural to ask whether we can perform structured regression for action localization in videos. Although this idea looks plausible, the extension from object localization to action localization is non-trivial. Different from object localization, where a visual object can be well localized by a 2-dimensional (2D) subwindow, human actions cannot be tightly bounded in a similar way, i.e., using a 3-dimensional (3D) subvolume. Although many current methods for action detection are based on this 3D subvolume assumption [6, 9, 20, 29] and search for video subvolumes to detect actions, such an assumption is only reasonable for "static" actions, where the subjects do not move globally (e.g., pick-up or kiss). For "dynamic" actions, where the subjects can move globally (e.g., walk, run, or dive), the subvolume constraint is no longer suitable. Thus, a more accurate localization scheme that can track the actions is required for localizing dynamic actions in videos. For example, one can localize an action by a 2D bounding box in each frame and track it as the action moves across frames. This structured localization output generates a smooth spatio-temporal path of connected 2D bounding boxes. Such a spatio-temporal path can tightly bound the actions in the video space and provides a more accurate spatio-temporal localization of actions.

Figure 1: Complexities of object and action localization: a) object localization is of O(n^4); b) action localization by subvolume search is of O(n^6); c) spatio-temporal action localization has a much larger search space.

However, as the video space is much larger than the image space, spatio-temporal action localization has a much larger structured space compared with object localization.
For a video of size w × h × n, the search space for 3D subvolumes and 2D subwindows is only O(w^2 h^2 n^2) and O(w^2 h^2), respectively (Figure 1). However, the search space of possible spatio-temporal paths in the video space is exponential, O(whnk^n) [23], if we do not know the start and end points of the path (k is the number of incoming edges per node). Any one of these paths can be a candidate for spatio-temporal action localization, so an exhaustive search is infeasible. This huge structured space has kept structured learning approaches from being practical for spatio-temporal action localization, due to intractable inference.

This paper proposes a new approach for spatio-temporal action localization which addresses the above problems. Instead of using the 3D subvolume localization scheme, we precisely locate and track the action by finding an optimal spatio-temporal path. The mapping between a video and a spatio-temporal action trajectory is learned. By leveraging an efficient Max-Path search method [23], the intractable inference and learning problems can be addressed, which makes our approach practical and effective even though the structured space is very large. Being solved as a structured learning problem, our method can well exploit the correlations between local dependent video features, and therefore optimizes the structured output. Experiments on two challenging benchmark datasets show that our method significantly outperforms the state-of-the-art methods.

1.1 Related work

Human action detection is traditionally approached by spatio-temporal video volume matching using different features: space-time orientation [6], volumetric features [9], action MACH [20], and HOG3D [10]. The sliding-window scheme is then applied to locate actions, which is ineffective and time-consuming. Different matching and learning models have also been introduced. Boiman and Irani proposed ensembles of patches to detect irregularities in images and videos [3].
Hu et al. used multiple-instance learning to detect actions [8]. Mahadevan et al. used mixtures of dynamic textures to detect anomalous events [15]. Le et al. used deep learning to learn unsupervised features for recognizing human activities [14]. Niebles et al. used a probabilistic latent semantic analysis model for recognizing actions [17]. Yao et al. trained probabilistic non-linear latent variable models to track complex activities [28]. Yuan et al. extended the branch-and-bound subwindow search [11] to subvolume search for action detection [29]. Recently, Tran and Yuan relaxed the 3D bounding box constraint for detecting and localizing medium and long video events [23]. Despite the improvements over 3D-subvolume-based approaches, this method did not fully utilize the correlations between local part detectors, as they were independently trained.

Max-margin structured output learning [19, 21, 24] was recently proposed and has demonstrated its success in many applications. One of its attractive features is that, although the structured space can be very large, whenever inference is tractable, learning is also tractable. Finley and Joachims further showed that overgenerating (e.g., relaxation) algorithms have theoretical advantages over undergenerating (e.g., greedy) methods when exact inference is intractable [7]. Various structured-learning-based approaches have been proposed to solve computer vision problems including pedestrian detection [22], object detection [2, 25], object segmentation [1], facial action unit detection [16], human interaction recognition [18], group activity recognition [13], and human pose parsing [27]. More recently, Lan et al. used a latent SVM to jointly detect and recognize actions in videos [12]. Among these works, that of Lan et al. is most similar to ours. However, their method requires a reliable human detector in both inference and learning, so it is not applicable to "dynamic" actions where the human poses vary significantly.
Moreover, because it uses HOG3D [26], it only detects actions in a sparse subset of frames where the interest points are present.

2 Spatio-Temporal Action Localization as Structured Output Regression

Given a video x of size w × h × m, where w × h is the frame size and m is its length, to localize actions one needs to predict a structured object y which is a smooth spatio-temporal path in the video space. We denote a path y = {(l, t, r, b)_{i=1..m}}, where (l, t, r, b)_i are respectively the left, top, right, and bottom of the rectangle that bounds the action in the i-th frame. The values of (l, t, r, b) are all set to zero when there is no action in the frame. Because of the spatio-temporal smoothness constraint, the boxes in y are necessarily smoothed over the spatio-temporal video space. Let us denote by X ⊂ [0, 255]^{3whm} the set of color videos, and by Y ⊂ R^{4m} the set of all smooth spatio-temporal paths in the video space. The problem of spatio-temporal action localization then becomes learning a structured prediction function f : X → Y.

2.1 Structured Output Learning

Let {x_1, ..., x_n} ⊂ X be the training videos, and {y_1, ..., y_n} ⊂ Y be their corresponding annotated ground truths. We formulate the action localization problem using structured learning as presented in [24]. Instead of searching for f directly, we learn a discriminant function F : X × Y → R. F is a compatibility function which measures how well the localization y suits the given input video x. If the model uses a parameter set w, then we write F(x, y; w) = ⟨w, φ(x, y)⟩, a family of functions parameterized by w, where φ(x, y) is a joint kernel feature map which represents the spatio-temporal features of y given x. Once F is trained, meaning the optimal parameter w* is determined, the final prediction ŷ is obtained by maximizing F over Y for a specific input x.
$$\hat{y} = f(x; w^*) = \operatorname*{argmax}_{y \in \mathcal{Y}} F(x, y; w^*) = \operatorname*{argmax}_{y \in \mathcal{Y}} \langle w^*, \varphi(x, y)\rangle \qquad (1)$$

The optimal parameter set w* is selected by solving the convex optimization problem in Eq. 2:
$$\min_{w, \xi} \;\; \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n} \xi_i \quad \text{s.t.} \quad \langle w, \varphi(x_i, y_i) - \varphi(x_i, y)\rangle \ge \Delta(y_i, y) - \xi_i, \;\; \forall i, \forall y \in \mathcal{Y}\setminus y_i; \qquad \xi_i \ge 0, \;\forall i. \qquad (2)$$

Eq. 2 optimizes w such that the score of the true structure y_i of x_i is larger than that of any other structure y by a margin rescaled by the loss ∆(y_i, y). The loss function is defined in Section 2.3. This optimization is similar to the traditional support vector machine (SVM) formulation, except for two differences. First, the number of constraints is much larger due to the huge size of the structured space Y. Second, the margins are rescaled differently by each constraint's loss ∆(y_i, y). Because of the large number of constraints, the problem in Eq. 2 cannot be solved directly, although it is convex. Alternatively, one can solve it with the cutting plane algorithm [24] or subgradient methods [19, 21]. We use the cutting plane algorithm. It starts with a random parameter w and an empty constraint set. At each round, it searches for the most violated constraint and adds it to the constraint set; this step searches for the y that maximizes the violation value ξ_i (Eq. 3). When a new constraint is found, the optimization is applied to update w. The process is repeated until no more constraints are added. This algorithm is proven to converge [24], normally within a small number of constraints due to the sparsity of the structured space.
$$\xi_i \ge \Delta(y_i, y) + \langle w, \varphi(x_i, y)\rangle - \langle w, \varphi(x_i, y_i)\rangle, \quad \forall y \in \mathcal{Y}\setminus y_i \qquad (3)$$

2.2 The Joint Kernel Feature Map for Action Localization

Let us denote by x|y the video portion cut out from x by the path y, namely the stack of images cropped by the bounding boxes b_{1..m} of y. We also denote by ϕ(b_i) ∈ R^k a feature map for a 2D box b_i. It is worth noting that ϕ(b_i) can be represented by either local features (e.g.,
local interest points) or global features (e.g., HOG, HOF) of the whole box b_i. We thus have a feature map for x|y, written φ(x, y), which is also a vector in R^k:
$$\varphi(x, y) = \frac{1}{m}\sum_{i=1}^{m} \phi(b_i) \qquad (4)$$
Finally, the decision function of our structured prediction takes the form of Eq. 5:
$$F(x, y; w) = \langle w, \varphi(x, y)\rangle = \frac{1}{m}\sum_{i=1}^{m} \langle w, \phi(b_i)\rangle. \qquad (5)$$

2.3 Loss Function

We define a Hinge loss function ∆ : Y × Y → [0, 1] for evaluating the loss induced by a predicted structure ŷ compared with a true structure label y. We denote y = {b_{i=1..m}}, where b_i = (l, t, r, b)_i is the ground-truth box of the i-th frame; similarly, ŷ = {b̂_{i=1..m}} is the predicted structure. The loss function is defined as follows:
$$\Delta(y, \hat{y}) = \frac{1}{m}\sum_{i=1}^{m} \delta(b_i, \hat{b}_i). \qquad (6)$$
$$\delta(b, \hat{b}) = \begin{cases} 1 - \dfrac{\mathrm{Area}(b \cap \hat{b})}{\mathrm{Area}(b \cup \hat{b})} & \text{if } l_b = l_{\hat{b}} = 1, \\[4pt] 1 - \tfrac{1}{2}(l_b l_{\hat{b}} + 1) & \text{otherwise.} \end{cases} \qquad (7)$$
$$l_b = \begin{cases} -1 & \text{if } b = (0, 0, 0, 0), \\ \phantom{-}1 & \text{otherwise.} \end{cases} \qquad (8)$$

3 Inference and Learning

We need a feasible way to perform the inference in Eq. 1 during testing, which can be rewritten as Eq. 9:
$$\hat{y} = \operatorname*{argmax}_{y \in \mathcal{Y}} \langle w, \varphi(x, y)\rangle = \operatorname*{argmax}_{y \in \mathcal{Y}} \frac{1}{m}\sum_{i=1}^{m} \langle w, \phi(b_i)\rangle. \qquad (9)$$
During training, we need to search for the most violated constraints by maximizing the right-hand side of Eq. 3, which is equivalent to Eq. 10. From now on, we write ȳ for y_i in Eq. 2, because the example index i is no longer important.
$$\max_{y \in \mathcal{Y}} \{\Delta(y, \bar{y}) + \langle w, \varphi(x, y)\rangle\} \qquad (10)$$
$$= \max_{y \in \mathcal{Y}} \Big\{\frac{1}{m}\sum_{i=1}^{m} \delta(b_i, \bar{b}_i) + \frac{1}{m}\sum_{i=1}^{m} \langle w, \phi(b_i)\rangle\Big\} \qquad (11)$$
$$= \frac{1}{m}\max_{y \in \mathcal{Y}} \Big\{\sum_{i=1}^{m} \big(\delta(b_i, \bar{b}_i) + \langle w, \phi(b_i)\rangle\big)\Big\} \qquad (12)$$
To solve Eq. 9 and Eq. 12, one needs to search for a smooth path y* in the spatio-temporal video space Y which gives the maximum total score. Both equations are difficult due to the large size of Y, e.g., the exponential number of possible spatio-temporal paths in Y (see supplemental material). We now show that both problems in Eq. 9 and Eq. 12 can be reduced to the Max-Path search problem and solved efficiently by [23]. The Max-Path algorithm [23] was proposed to detect dynamic video events.
It is guaranteed to obtain the best spatio-temporal path in the video space, provided that the local window scores can be precomputed. The algorithm takes a 3D trellis of local window scores as input and outputs the path with the maximum total score. In testing, the trellis's local scores are ⟨w, ϕ(b_i)⟩, where b_i is the local window; these values are easily evaluated given w and a feature map ϕ. In training, the values of the trellis are δ(b_i, b̄_i) + ⟨w, ϕ(b_i)⟩, which are also computable given the parameter w, the feature map ϕ, and the ground truth b̄_i. After the trellis is constructed, the Max-Path algorithm is employed to find the best path; therefore we can identify the smooth spatio-temporal path y* that maximizes Eq. 9 and Eq. 12.

3.1 Constraint Enforcement

Let us consider one constraint in Eq. 2; here we drop the example index i for simplicity and use ȳ as the ground truth for example x. We also denote y = b_{1..m} and ȳ = b̄_{1..m}.
$$\langle w, \varphi(x, \bar{y})\rangle - \langle w, \varphi(x, y)\rangle \ge \Delta(\bar{y}, y) - \xi, \quad \forall y \in \mathcal{Y}\setminus\bar{y} \qquad (13)$$
$$\Leftrightarrow \frac{1}{m}\sum_{i=1}^{m} \langle w, \phi(\bar{b}_i)\rangle - \frac{1}{m}\sum_{i=1}^{m} \langle w, \phi(b_i)\rangle \ge \frac{1}{m}\sum_{i=1}^{m} \delta(b_i, \bar{b}_i) - \xi, \quad \forall y \in \mathcal{Y}\setminus\bar{y} \qquad (14)$$
$$\Leftrightarrow \sum_{i=1}^{m} \langle w, \phi(\bar{b}_i)\rangle - \sum_{i=1}^{m} \langle w, \phi(b_i)\rangle \ge \sum_{i=1}^{m} \delta(b_i, \bar{b}_i) - m\xi, \quad \forall y \in \mathcal{Y}\setminus\bar{y} \qquad (15)$$
The constraint in Eq. 15 can be split into the m constraints in Eq. 16, which are harder; satisfying these m constraints therefore implies satisfying the constraint of Eq. 15:
$$\langle w, \phi(\bar{b}_i)\rangle - \langle w, \phi(b_i)\rangle \ge \delta(b_i, \bar{b}_i) - \xi, \quad \forall i \in [1..m], \; \forall y \in \mathcal{Y}\setminus\bar{y} \qquad (16)$$
In training, instead of solving Eq. 2 with the constraints in Eq. 13, we solve it with the set of constraints in Eq. 16. The problem is harder because of the tighter constraints. However, the important benefit of this enforcement is that, instead of comparing the features of two different spatio-temporal paths y and ȳ, one can compare the features of the individual box pairs (b_i, b̄_i) of those two paths. This helps the training algorithm avoid comparing features of two paths of different lengths, which is unstable due to feature normalization.
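The role of the trellis can be illustrated with a simplified, single-scale best-path dynamic program. This is our own toy sketch, not the actual Max-Path implementation of [23]: scores[t][j] plays the role of the local score of candidate box j in frame t (⟨w, ϕ(b)⟩ at test time, plus the loss term during training), neighbors(j) encodes the smoothness constraint between consecutive frames, and the path may start and end at any frame.

```python
def max_path(scores, neighbors):
    """Best-path dynamic program over a trellis of local scores.

    scores[t][j]: local score of candidate box j in frame t.
    neighbors(j): indices in the previous frame that box j may follow.
    Returns (best total score, path), the path as one box per covered frame.
    """
    best_score, best_path = float('-inf'), []
    prev = {}                      # box index -> (running score, path so far)
    for frame in scores:
        cur = {}
        for j, s in enumerate(frame):
            # best predecessor among allowed neighbors, if worth keeping
            ext_score, ext_path = max(
                (prev[i] for i in neighbors(j) if i in prev),
                default=(0.0, []), key=lambda p: p[0])
            if ext_score > 0:
                cur[j] = (s + ext_score, ext_path + [j])
            else:
                cur[j] = (s, [j])  # start a new path at this frame
            if cur[j][0] > best_score:
                best_score, best_path = cur[j]
        prev = cur
    return best_score, best_path

# two candidate boxes per frame; a box may follow itself or a spatial neighbor
scores = [[1.0, -2.0], [-0.5, 2.0], [0.5, 1.0]]
score, path = max_path(scores, lambda j: [j - 1, j, j + 1])
```

Because each cell only consults its neighbors in the previous frame, the search is linear in the trellis size rather than exponential in the number of paths, which is what makes both inference (Eq. 9) and loss-augmented inference (Eq. 12) tractable.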
4 Experimental Setup

Datasets: we conduct experiments on two datasets: UCF-Sport [20] and Oxford-TV [18]. The UCF-Sport dataset consists of 150 video sequences of 10 different action classes. We use the same split as in [12] for training and testing. On this dataset, we detect three different actions: horse-riding, running, and diving. We chose these actions because they have different levels of body movement: horse-riding is relatively rigid; running is more deformable; diving is extremely deforming in terms of articulated body movements. The Oxford-TV dataset consists of 300 videos taken from real TV programs. It has 4 classes of actions (hand-shake, high-five, hug, kiss) and a set of 100 negative videos. As in [18], this dataset is divided into two equal subsets; we use set 1 for training and set 2 for testing. We perform the task of kiss detection and localization on this dataset. Kissing is more challenging than the other action classes in this dataset due to weaker motion and appearance cues.

Features and Parameters: our algorithm needs a feature representation ϕ(b) of a cropped image b. We use a global representation for ϕ(b) based on Histograms of Oriented Gradients (HOG) [4] and Histograms of Flows (HOF) [5]. The cropped image b is divided into h × v half-overlapped blocks; each block has 2 × 2 cells, and each cell is represented by a 9-bin histogram. The feature vector's length thus becomes h × v × 2 × 2 × 9 × 2 = 72 × h × v for HOG and HOF combined. (h, v) can differ per class due to the different shape ratios of the actions (e.g., rectangular boxes for horse-riding and running, square boxes for diving). More specifically, we use (7, 15) for horse-riding and running, (11, 11) for diving, and (9, 7) for kissing. The regularization parameter C in Eq. 2 is set to 1 in all cases.

Evaluation Metrics: we quantitatively evaluate the different methods on both detection and localization.
As used in [12], the video localization score is computed by averaging its frame localization scores, each of which is the overlap area divided by the union area of the predicted and ground-truth boxes. A prediction is then considered correct if its localization score is greater than or equal to σ = 0.2. It is worth noting that detection evaluations are applied to both positive and negative testing examples, while localization evaluations are applied only to positive ones. As a result, the detection metric measures the reliability of the detections (precision/recall), whereas the localization metric indicates the quality of the detections, e.g., how accurate the predicted spatio-temporal paths are compared with the ground truth. More specifically, detection answers the question "Is there any action of interest in this video?", while localization answers "Provided that one action instance appears in this video, where is it?".

Figure 2: Action detection results on UCF-Sport: detection curves (precision vs. recall) of our proposed method compared with [12] and [23] for horse-riding, running, and diving. Upper plots are detection results evaluated on the subset of frames given by [12], while lower plots are the results of all-frame evaluations. Except for diving, our proposed method significantly improves on the other methods.
Eval. Set   Method   H-Ride   Run     Dive    Average
Subset      [12]     21.75    19.60   42.67   28.01
            [23]     62.19    50.20   16.41   42.93
            Our      68.06    61.41   36.54   55.34
All         [12]     N/A      N/A     N/A     N/A
            [23]     63.06    48.09   22.64   44.60
            Our      64.01    61.86   37.03   54.30

Table 1: Action localization results on UCF-Sport: comparisons among our proposed method, [12], and [23]. The upper section presents results evaluated on a subset of frames given by [12], while the lower section reports results from evaluating on all frames. Our method improves 27.33% over [12] and 12.41% over [23] on subset evaluations, and improves 9.7% over [23] on all-frame evaluations. N/A indicates not applicable.

5 Experimental Results

UCF-Sport: we compare our method with two current approaches: Lan et al [12] and Tran and Yuan [23]. The output predictions of Lan et al are obtained directly from [12]. For [23], we train a linear SVM detector for each action class using the same features as ours; the Max-Path algorithm is then applied to detect the actions of interest. The method of [12] uses HOG3D [26], so it is only able to detect and localize actions at the sparse set of frames where HOG3D interest points are present. To provide a fair comparison with [12], we report two sets of evaluations: the first is applied only to the subset of frames where [12] reports detections, and the second takes all frames into consideration. Table 1 reports the action localization results of the different methods and action classes. On average, our method improves 27.33% over [12] and 12.41% over [23] on subset evaluations, and improves 9.7% over [23] on all-frame evaluations. Figure 2 shows detection results of the different methods on the UCF-Sport dataset. Our method significantly improves over [23] for all three action classes on both subset and all-frame evaluations. Compared with [12] on subset evaluations, our method improves significantly on horse-riding and running detection.
However, [12] provides better detection results than ours on diving. This is because their interest-point-based sparse features are better suited to highly deformable actions such as diving. For a complete presentation, we visualize localization results of our method compared with those of [12] and [23] on a diving sequence (Figure 3). All predicted boxes are plotted together with ground-truth boxes for comparison. It is worth noting that [12] only makes predictions at a sparse set of frames; its results are therefore visualized as discrete blue dots while the other methods are shown as continuous curves. Our method (red curve) localizes the diving action much more accurately than [23] (green curve). [12] localizes the diving action fairly well; however, it is not applicable when more accurate localizations (e.g. all-frame predictions) are required.

Figure 3: Visualization of diving localization: plots of the localization scores of different methods on a diving video sequence. Lan et al's [12] results are visualized in blue, Tran and Yuan's [23] in green, ours in red, and ground truth as black boxes. Best viewed in color.

Figure 4: Action detection and localization on UCF-Sport: Lan et al's [12] results are visualized in blue, Tran and Yuan's [23] in green, ours in red, and ground truth in black. Our method and [23] can detect multiple instances of actions (two bottom-left images).

Oxford-TV: we compare our method with [23] on both detection and localization tasks. For detection, we report two quantitative evaluations: equal precision-recall (EPR) and area under the ROC curve (AUC). For localization, besides the spatial localization (SL) metric used in the UCF-Sport experiments, we also evaluate methods by a temporal localization (TL) metric.
This metric is not applicable to the UCF-Sport dataset because most action instances there start at the first frame and end at the last frame. Temporal localization is computed as the length (measured in frames) of the intersection divided by the union of the detection and the ground truth.

Method   EPR(%)   AUC    SL(%)   TL(%)
[18]     32.50*   N/A    N/A     N/A
[23]     24.14    0.27   18.46   40.09
Our      38.89    0.42   39.52   45.30

Table 2: Kiss detection and localization results. We improve 14.74% in equal precision/recall detection rate, 0.15 in area under the ROC curve, 21.06% in spatial localization, and 5.21% in temporal localization over [23]. *The result of [18] is not directly comparable. N/A indicates not applicable.

Figure 5: Visualization of kiss detection: our results are visualized in red; ground truths are in green. The upper two rows show some correct detections while the last row shows false or missed detections.

Figure 6: Kiss detection results: a) Precision-recall curves with σ = 0.2. b) Precision-recall curves with σ = 0.4. c) ROC curves with σ = 0.2. Numbers inside the legends are the best precision-recall values (a and b) and the area under the ROC curve (c).

Table 2 presents detection and localization results of our proposed method compared with [23]. On the localization task, our method improves 21.06% in spatial localization and 5.21% in temporal localization over [23]. On the detection task, with the cut-off threshold σ = 0.2, our method improves 14.74% in equal precision-recall rate and 0.15 in area under the ROC curve over [23] (Figure 6a and 6c). One may further ask "what if we need more accurate detections?".
Interestingly, when we increase the cut-off threshold σ to 0.4, [23] drops significantly from 24.14% to 8.82% while our method remains at 29.03% (Figure 6b), which demonstrates that our method can simultaneously detect and localize actions with high accuracy.

6 Conclusions

We have proposed a novel structured learning approach for spatio-temporal action localization in videos. While most current approaches detect actions as 3D subvolumes [6, 9, 20, 29] or at a sparse subset of frames [12], our method can precisely detect and track actions in both space and time. Although [23] is also applicable to spatio-temporal action detection, it cannot be optimized over the large video space because its detectors are trained independently. Our approach significantly outperforms [23] thanks to the structured optimization; this improvement gap is also consistent with the theoretical analysis in [7]. Moreover, being free from person detection and background subtraction, our approach can efficiently handle unconstrained videos and can easily be extended to detect other spatio-temporal video patterns. Strong experimental results on two challenging benchmark datasets demonstrate that our proposed method significantly outperforms the state of the art.

Acknowledgments

The authors would like to thank Tian Lan for reproducing [12]'s results on the UCF-Sport dataset and Minh Hoai Nguyen for useful discussions about the cutting-plane algorithm. This work is supported in part by the Nanyang Assistant Professorship (SUG M58040015) to Dr. Junsong Yuan.

References

[1] L. Bertelli, T. Yu, D. Vu, and S. Gokturk. Kernelized structural SVM learning for supervised object segmentation. CVPR, 2011.
[2] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. ECCV, 2008.
[3] O. Boiman and M. Irani. Detecting irregularities in images and in video. IJCV, 2007.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005.
[5] N.
Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. ECCV, 2006.
[6] K. Derpanis, M. Sizintsev, K. Cannons, and P. Wildes. Efficient action spotting based on a space-time oriented structure representation. CVPR, 2010.
[7] T. Finley and T. Joachims. Training structural SVMs when exact inference is intractable. ICML, 2008.
[8] Y. Hu, L. Cao, F. Lv, S. Yan, Y. Gong, and T. S. Huang. Action detection in complex scenes with spatial and temporal ambiguities. ICCV, 2009.
[9] Y. Ke, R. Sukthankar, and M. Hebert. Volumetric features for video event detection. IJCV, 2010.
[10] A. Klaser, M. Marszalek, and C. Schmid. A spatio-temporal descriptor based on 3d-gradients. BMVC, 2008.
[11] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Efficient subwindow search: A branch and bound framework for object localization. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2009.
[12] T. Lan, Y. Wang, and G. Mori. Discriminative figure-centric models for joint action localization and recognition. ICCV, 2011.
[13] T. Lan, Y. Wang, W. Yang, and G. Mori. Beyond actions: Discriminative models for contextual group activities. NIPS, 2010.
[14] Q. Le, W. Zou, S. Yeung, and A. Ng. Learning hierarchical spatio-temporal features for action recognition with independent subspace analysis. CVPR, 2011.
[15] V. Mahadevan, W. Li, V. Bhalodia, and N. Vasconcelos. Anomaly detection in crowded scenes. CVPR, 2010.
[16] M. H. Nguyen, T. Simon, F. De la Torre, and J. Cohn. Action unit detection with segment-based SVMs. CVPR, 2010.
[17] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. International Journal of Computer Vision, 2008.
[18] A. Patron-Perez, M. Marszalek, A. Zisserman, and I. Reid. High five: Recognising human interactions in tv shows. BMVC, 2010.
[19] N. Ratliff, J. A. Bagnell, and M. Zinkevich. Subgradient methods for maximum margin structured learning.
ICML 2006 Workshop on Learning in Structured Output Spaces, 2006.
[20] M. D. Rodriguez, J. Ahmed, and M. Shah. Action MACH: A spatio-temporal maximum average correlation height filter for action recognition. CVPR, 2008.
[21] B. Taskar, S. Lacoste-Julien, and M. Jordan. Structured prediction via the extragradient method. NIPS, 2005.
[22] D. Tran and D. Forsyth. Configuration estimates improve pedestrian finding. NIPS, 2007.
[23] D. Tran and J. Yuan. Optimal spatio-temporal path discovery for video event detection. CVPR, pages 3321–3328, 2011.
[24] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 2005.
[25] A. Vedaldi and A. Zisserman. Structured output regression for detection with partial truncation. NIPS, 2009.
[26] H. Wang, M. M. Ullah, A. Klaser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features for action recognition. BMVC, 2009.
[27] Y. Wang, D. Tran, and Z. Liao. Learning hierarchical poselets for human parsing. CVPR, 2011.
[28] A. Yao, J. Gall, L. V. Gool, and R. Urtasun. Learning probabilistic non-linear latent variable models for tracking complex activities. NIPS, 2011.
[29] J. Yuan, Z. Liu, and Y. Wu. Discriminative video pattern search for efficient action detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2011.
Minimizing Sparse High-Order Energies by Submodular Vertex-Cover

Andrew Delong, University of Toronto, andrew.delong@gmail.com
Olga Veksler, Western University, olga@csd.uwo.ca
Anton Osokin, Moscow State University, anton.osokin@gmail.com
Yuri Boykov, Western University, yuri@csd.uwo.ca

Abstract

Inference in high-order graphical models has become important in recent years. Several approaches are based, for example, on generalized message-passing or on transformation to a pairwise model with extra 'auxiliary' variables. We focus on a special case where a much more efficient transformation is possible. Instead of adding variables, we transform the original problem into a comparatively small instance of submodular vertex-cover. These vertex-cover instances can then be attacked by existing algorithms (e.g. belief propagation, QPBO), where they often run 4–15 times faster and find better solutions than when applied to the original problem. We evaluate our approach on synthetic data, then show applications within a fast hierarchical clustering and model-fitting framework.

1 Introduction

MAP inference on graphical models is a central problem in machine learning, pattern recognition, and computer vision. Several algorithms have emerged as practical tools for inference, especially for graphs containing only unary and pairwise factors. Prominent examples include belief propagation [30], more advanced message-passing methods like TRW-S [21] or MPLP [33], and combinatorial methods like α-expansion [6] (for 'metric' factors) and QPBO [32] (mainly for binary problems). In terms of optimization, these algorithms are designed to minimize objective functions (energies) containing unary and pairwise terms. Many inference problems must be modeled using high-order terms, not just pairwise, and such problems are increasingly important for many applications.
Recent developments in high-order inference include, for example, high-arity CRF potentials [19, 38, 25, 31], cardinality-based potentials [13, 34], global potentials controlling the appearance of labels [24, 26, 7], and learning with high-order loss functions [35], among many others. One standard approach to high-order inference is to transform the problem to the pairwise case and then simply apply one of the aforementioned 'pairwise' algorithms. These transformations add many 'auxiliary' variables to the problem but, if the high-order terms are sparse in the sense suggested by Rother et al. [31], this can still be a very efficient approach. There can be several equivalent high-order-to-pairwise transformations, and this choice affects the difficulty of the resulting pairwise inference problem. Choosing the 'easiest' transformation is not trivial and has been explicitly studied, for example, by Gallagher et al. [11].

Our work is about fast energy minimization (MAP inference) for the particularly sparse, high-order "pattern potentials" used in [25, 31, 29]: each energy term prefers a specific (but arbitrary) assignment to its subset of variables. Instead of directly transforming the high-order problem to pairwise, we transform the entire problem to a comparatively small instance of submodular vertex-cover (SVC). The vertex-cover implicitly provides a solution to the original high-order problem. The SVC instance can itself be converted to pairwise, and standard inference techniques run much faster and are often more effective on this compact representation. We also show that our 'sparse' high-order energies naturally appear when trying to solve hierarchical clustering problems using the algorithmic approach called fusion moves [27], also conceptually known as optimized crossover [1]. Fusion is a powerful very-large-scale neighborhood search technique [3] that in some sense generalizes α-expansion.
The fusion approach is not standard for the kind of clustering objective we will consider, but we believe it is an interesting optimization strategy. The remainder of the paper is organized as follows. Section 2 introduces the class of high-order energies we consider, then derives the transformation to SVC and the subsequent decoding. Section 3 contains experiments that suggest significant speedups, and discusses possible applications.

2 Sparse High-Order Energies Reducible to SVC

In what follows we use x to denote a vector of binary variables, x_P to denote the product ∏_{i∈P} x_i, and x̄_Q to denote ∏_{i∈Q} x̄_i, where x̄_i = 1 − x_i. It will be convenient to adopt the convention that x_{∅} = 1 and x̄_{∅} = 1. We always use i to denote a variable index from I, and j to denote a clique index from V. It is well known that any pseudo-boolean function (binary energy) can be written in the form

F(x) = ∑_{i∈I} a_i x_i − ∑_{j∈V} b_j x_{P_j} x̄_{Q_j}    (1)

where each clique j has coefficient −b_j with b_j ≥ 0, and is defined over variables in sets P_j, Q_j ⊆ I. Our approach will be of practical interest only when, roughly speaking, |V| ≪ |I|. For example, if x = (x_1, ..., x_7) then a clique j with P_j = {2, 3} and Q_j = {4, 5, 6} will explicitly reward the binary configuration (·, 1, 1, 0, 0, 0, ·) by the amount b_j (depicted as b_1 in Figure 1). If there are several overlapping (and conflicting) cliques, then the minimization problem can be difficult. A standard way to minimize F(x) would be to substitute each −b_j x_{P_j} x̄_{Q_j} term with a collection of equivalent pairwise terms. In our experiments we used the substitution

−x_{P_j} x̄_{Q_j} = −1 + min_{y∈{0,1}} ( ȳ + ∑_{i∈P_j} x̄_i y + ∑_{i∈Q_j} x_i y )

where y is an auxiliary variable. This is like the Type-II transformation in [31], and we found that it worked better than Type-I in our experiments. However, we aim to minimize F(x) in a novel way, so first we review the submodular vertex-cover problem.
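Because the complemented factors in this substitution (ȳ and the x̄_i) are exactly the pieces most easily lost in print, a brute-force check is a useful sanity test. The sketch below is illustrative only (the clique sets are made up), writing the complements out as (1 − y) and (1 − x_i):

```python
import itertools

# Brute-force check of the substitution for one high-order term -x_P * xbar_Q,
# writing the complemented factors explicitly as (1 - y) and (1 - x_i):
#   -x_P * xbar_Q = -1 + min_y [ (1 - y) + sum_{i in P}(1 - x_i)*y + sum_{i in Q} x_i*y ]
P, Q, n = [0, 1], [2, 3], 4    # example clique over 4 variables (made-up sets)

for x in itertools.product([0, 1], repeat=n):
    # original term: -1 iff all of P are 1 and all of Q are 0, else 0
    high_order = -int(all(x[i] for i in P) and not any(x[i] for i in Q))
    # pairwise form, minimized over the auxiliary variable y
    pairwise = -1 + min((1 - y)
                        + sum((1 - x[i]) * y for i in P)
                        + sum(x[i] * y for i in Q)
                        for y in (0, 1))
    assert high_order == pairwise
```

The check enumerates all 2^4 assignments, so any dropped complement would trigger the assertion.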
2.1 Review of Submodular Vertex-Cover

The classic minimum-weighted vertex-cover (VC) problem can be stated as a 0-1 integer program where variable u_j = 1 if and only if vertex j is included in the cover:

(VC)  minimize ∑_{j∈V} w_j u_j                        (2)
      subject to u_j + u_{j′} ≥ 1  ∀{j, j′} ∈ E       (3)
                 u_j ∈ {0, 1}.

Without loss of generality one can assume w_j > 0 and j ≠ j′ for all {j, j′} ∈ E. If the graph (V, E) is bipartite, we call the specialized problem VC-B; it can be solved very efficiently by specialized bipartite maximum-flow algorithms such as [2]. A function f(x) is called submodular if f(x∧y) + f(x∨y) ≤ f(x) + f(y) for all x, y ∈ {0,1}^V, where (x∧y)_j = x_j y_j and (x∨y)_j = 1 − x̄_j ȳ_j. A submodular function can be minimized in strongly polynomial time by combinatorial methods [17], but minimization becomes NP-hard when subject to arbitrary covering constraints like (3). The submodular vertex-cover (SVC) problem generalizes VC by replacing the linear (modular) objective (2) with an arbitrary submodular objective:

(SVC)  minimize f(u)                                   (4)
       subject to u_j + u_{j′} ≥ 1  ∀{j, j′} ∈ E
                  u_j ∈ {0, 1}.

Iwata & Nagano [18] recently showed that when f(·) ≥ 0 a 2-approximation can be found in polynomial time, and that this is the best constant-ratio bound achievable. It turns out that a half-integral relaxation u_j ∈ {0, 1/2, 1} (call this problem SVC-H), followed by upward rounding, gives a 2-approximation much like for standard VC. They also show how to transform any SVC-H instance into a bipartite instance of SVC (see below); this extends a classic result by Nemhauser & Trotter [28], allowing specialized combinatorial algorithms like [17] to solve the relaxation. In the bipartite submodular vertex-cover (SVC-B) problem, the graph nodes V can be partitioned into sets J, K so that the binary variables are u ∈ {0,1}^J, v ∈ {0,1}^K and we solve

(SVC-B)  minimize f(u) + g(v)                          (5)
         subject to u_j + v_k ≥ 1  ∀{j, k} ∈ E
                    u_j, v_k ∈ {0, 1}  ∀j ∈ J, k ∈ K

where both f(·) and g(·) are submodular functions.
This SVC-B formulation is a trivial extension of the construction in [18] (they assume g = f), and their proof of tractability extends easily to (5).

2.2 Solving Bipartite SVC with Min-Cut

It will be useful to note that if f and g above can be written in a special manner, SVC-B can be solved by fast s-t minimum cut instead of by [17, 15]. Suppose we have an SVC-B instance (J, K, E, f, g) where we can write submodular f and g as

f(u) = ∑_{S∈S0} w_S u_S,  and  g(v) = ∑_{S∈S1} w_S v̄_S.    (6)

Here S0 and S1 are collections of subsets of J and K respectively, and u_S denotes the product ∏_{j∈S} u_j throughout (as distinct from the boldface u, which denotes a vector).

Proposition 1. If w_S ≤ 0 for all |S| ≥ 2 in (6), then SVC-B reduces to s-t minimum cut.

Proof. We can define an equivalent problem over variables u_j and z_k = v̄_k. With this substitution, the covering constraints become u_j ≥ z_k. Since "g(v) submodular in v" implies "g(1−v) submodular in v," letting ḡ(z) = g(z̄) = g(v) means ḡ(z) is submodular as a function of z. Minimizing f(u) + ḡ(z) subject to u_j ≥ z_k is equivalent to our original problem. Since u_j ≥ z_k can be enforced by a large (submodular) penalty on the assignment ū_j z_k, SVC-B is equivalent to

minimize f(u) + ḡ(z) + ∑_{(j,k)∈E} η ū_j z_k,  where η = ∞.    (7)

When f and g take the form (6), we have ḡ(z) = ∑_{S∈S1} w_S z_S, where z_S denotes the product ∏_{k∈S} z_k. If w_S ≤ 0 for all |S| ≥ 2, we can build an s-t minimum-cut graph corresponding to (7) by directly applying the constructions in [23, 10]. We can do this because each term has coefficient w_S ≤ 0 when written as u_1 · · · u_{|S|} or z_1 · · · z_{|S|}, i.e. with its variables (in terms of u and v) either all uncomplemented or all complemented.

2.3 Transforming F(x) to SVC

To get a sense for how our transformation works, see Figure 1. The transformation is reminiscent of the binary dual of a Constraint Satisfaction Problem (CSP) [37]. The vertex-cover construction of [4] is actually a special linear (modular) case of our transformation (details in Proposition 2).
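Before turning to the transformation itself, the penalty construction (7) from Section 2.2 can be sanity-checked by enumeration on a tiny bipartite instance (an illustrative sketch with made-up weights and a large finite η standing in for η = ∞; a real implementation would use s-t min-cut instead):

```python
import itertools

# tiny SVC-B instance: J = {0,1}, K = {0,1}, one covering edge between u_0 and v_0
Ejk = [(0, 0)]

def f(u):                      # submodular: multi-variable coefficient is negative
    return 2.0 * u[0] - 1.5 * u[0] * u[1]

def g(v):                      # written over complemented v's, as in (6)
    return 1.0 * (1 - v[0]) - 0.5 * (1 - v[0]) * (1 - v[1])

# constrained form: minimize f(u) + g(v) subject to u_j + v_k >= 1
best_con = min(f(u) + g(v)
               for u in itertools.product([0, 1], repeat=2)
               for v in itertools.product([0, 1], repeat=2)
               if all(u[j] + v[k] >= 1 for j, k in Ejk))

# penalized form (7): substitute z = 1 - v and pay eta whenever u_j = 0, z_k = 1
eta = 1e6                      # large finite penalty standing in for eta = infinity
best_pen = min(f(u) + g(tuple(1 - zk for zk in z))
               + sum(eta * (1 - u[j]) * z[k] for j, k in Ejk)
               for u in itertools.product([0, 1], repeat=2)
               for z in itertools.product([0, 1], repeat=2))

assert best_con == best_pen
```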
Figure 1: Left: the factor graph of F(x) = ∑_{i=1}^{7} a_i x_i − b_1 x_2 x_3 x_4 x_5 x_6 − b_2 x_1 x_2 x_3 x_4 x_5 − b_3 x_3 x_4 x_5 x_6 x_7. A small white square indicates a_i > 0, a black square a_i < 0. A hollow edge connecting x_i to factor j indicates i ∈ P_j, and a filled-in edge indicates i ∈ Q_j. Right: the factor graph of our corresponding SVC instance. High-order factors of the original problem, shown as gray squares on the left, become variables of the SVC problem. Covering constraints are shown as dashed lines. Two pairwise factors are formed, with coefficients w_{1,3} = −a_3 and w_{1,2} = a_4 + a_5, both ≤ 0.

Theorem 1. For any F(x) there exists an instance of SVC such that an optimum x* ∈ {0,1}^I for F can be computed from an optimal vertex-cover u* ∈ {0,1}^V.

Proof. First we give the construction for the SVC instance (V, E, f). Introduce auxiliary binary variables u ∈ {0,1}^V where ū_j = x_{P_j} x̄_{Q_j}. Because each b_j ≥ 0, minimizing F(x) is equivalent to the 0-1 integer program with non-linear constraints

minimize F(x, u)  subject to  ū_j ≤ x_{P_j} x̄_{Q_j}  ∀j ∈ V.    (8)

Inequality (8) is sufficient when b_j ≥ 0 because, for any fixed x, the equality ū_j = x_{P_j} x̄_{Q_j} holds for some u that minimizes F(x, u). We try to formulate a minimization problem solely over u. As a consequence of (8) we have u_j = 0 ⇒ x_{P_j} = 1 and x̄_{Q_j} = 1. (Here x_S and x̄_S always denote scalar products over the set S, not vectors.) Notice that, when some P_j and Q_{j′} overlap, not all u ∈ {0,1}^V can be feasible with respect to assignments x ∈ {0,1}^I. For each i ∈ I, let us collect the cliques that i participates in: define sets J_i, K_i ⊆ V where J_i = {j | i ∈ P_j} and K_i = {j | i ∈ Q_j}. We show that u can be feasible if and only if u_{J_i} + u_{K_i} ≥ 1 for all i ∈ I, where u_S denotes a product. In other words, u can be feasible if and only if, for each i,

∃ u_j = 0, j ∈ J_i  ⟹  u_k = 1 ∀k ∈ K_i
∃ u_k = 0, k ∈ K_i  ⟹  u_j = 1 ∀j ∈ J_i.
(9)

(⇒) If ū_j ≤ x_{P_j} x̄_{Q_j} for all j ∈ V, then having u_{J_i} + u_{K_i} ≥ 1 is necessary: if both u_{J_i} = 0 and u_{K_i} = 0 for some i, there would exist j ∈ J_i and k ∈ K_i for which x_{P_j} = 1 and x̄_{Q_k} = 1, contradicting any unique assignment to x_i.

(⇐) If u_{J_i} + u_{K_i} ≥ 1 for all i ∈ I, then we can always choose some x ∈ {0,1}^I for which every ū_j ≤ x_{P_j} x̄_{Q_j}. It will be convenient to choose a minimum-cost assignment for each x_i, subject to the constraints u_{J_i} = 0 ⇒ x_i = 1 and u_{K_i} = 0 ⇒ x_i = 0. If both u_{J_i} = u_{K_i} = 1 then x_i could be either 0 or 1, so choose the best, giving

x(u)_i = { 0 if u_{K_i} = 0;  1 if u_{J_i} = 0;  [a_i < 0] otherwise. }    (10)

The assignment x(u) is feasible with respect to (8) because for any ū_j = 1 we have x(u)_{P_j} = 1 and x̄(u)_{Q_j} = 1. We have completed the proof that u can be feasible if and only if u_{J_i} + u_{K_i} ≥ 1. To express the minimization of F solely in terms of u, first write (10) in the equivalent form

x(u)_i = { u_{K_i} if a_i < 0;  1 − u_{J_i} otherwise. }    (11)

Again, this definition of x(u) minimizes F(x, u) over all x satisfying inequality (8). Using (11), write the new SVC objective f(u) = F(x(u), u), which becomes

f(u) = ∑_{i: a_i>0} a_i (1 − u_{J_i}) + ∑_{i: a_i<0} a_i u_{K_i} − ∑_{j∈V} b_j (1 − u_j)
     = ∑_{i: a_i>0} −a_i u_{J_i} + ∑_{i: a_i<0} a_i u_{K_i} + ∑_{j∈V} b_j u_j + const.    (12)

To collect coefficients in the first two summands of (12), we must group them by each unique clique set that appears. We define the set S = {S ⊆ V | (∃ J_i = S) ∨ (∃ K_i = S)} and write

f(u) = ∑_{S∈S} w_S u_S + const    (13)

where

w_S = ∑_{i: a_i>0, J_i=S} −a_i + ∑_{i: a_i<0, K_i=S} a_i  ( + b_j if S = {j} ).    (14)

Since the high-order terms u_S in (13) have non-positive coefficients w_S ≤ 0, f(u) is submodular [5]. Also note that for each i at most one of J_i or K_i contributes to the sum, so there are at most |S| ≤ |I| unique terms u_S with w_S ≠ 0. If |S|, |V| ≪ |I| then our SVC instance will be small. Finally, to ensure (9) holds we add a covering constraint u_j + u_k ≥ 1 whenever there exists i such that j ∈ J_i and k ∈ K_i. For this SVC instance, an optimal covering u minimizes F(x(u), u).
The construction in Theorem 1 suggests the entire minimization procedure below.

MINIMIZE-BY-SVC(F)    where F is a pseudo-boolean function in the form of (1)
1  w_{j} := b_j  ∀j ∈ V
2  for i ∈ I do
3      if a_i > 0 then w_{J_i} := w_{J_i} − a_i    (distribute a_i to high-order SVC coefficients)
4      else if a_i < 0 then w_{K_i} := w_{K_i} + a_i    (index sets J_i and K_i defined in Theorem 1)
5      E := E ∪ {{j, k}}  ∀j ∈ J_i, k ∈ K_i    (add covering constraints to enforce u_{J_i} + u_{K_i} ≥ 1)
6  let f(u) = ∑_{S∈S} w_S u_S    (define SVC objective over clique indices V)
7  u* := SOLVE-SVC(V, E, f)    (solve with BP, QPBO, Iwata, etc.)
8  return x(u*)    (decode the covering as in (10))

One reviewer suggested an extension that scales better with the number of overlapping cliques. The idea is to formulate SVC over the elements of S rather than V. Specifically, let y ∈ {0,1}^S and use the submodular objective f(y) = ∑_{S∈S} [ w_S y_S + ∑_{j∈S} (b_j + 1) y_S ȳ_{{j}} ], where the inner sum ensures y_S = ∏_{j∈S} y_{{j}} at a local minimum because w_{{j}} ≤ b_j. For each unique pair {J_i, K_i}, add a covering constraint y_{J_i} + y_{K_i} ≥ 1 (instead of O(|J_i|·|K_i|) constraints). An optimal covering y* of S then gives an optimal covering of V by assigning u_j = y*_{{j}}. Here we use the original construction, and still report significant speedups. See [8] for a discussion of efficient implementation, and an alternate proof of Theorem 1 based on LP relaxation.

2.4 Special Cases of Note

Proposition 2. If {P_j}_{j∈V} are disjoint and, separately, {Q_j}_{j∈V} are disjoint (equivalently, each |J_i|, |K_i| ≤ 1), then the SVC instance in Theorem 1 reduces to standard VC.

Proof. Each S ∈ S in objective (13) must be S = {j} for some j ∈ V. The objective then becomes f(u) = ∑_{j∈V} w_{{j}} u_j + const, a form of standard VC.

Proposition 2 shows that the main result of [4] is a special case of our Theorem 1 when J_i = {j} and K_i = {k}, with j, k determined by the two labelings being 'fused'. In Section 3, this generalization of [4] will allow us to apply a similar fusion-based algorithm to hierarchical clustering problems.
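A minimal Python rendering of MINIMIZE-BY-SVC, with brute-force enumeration standing in for SOLVE-SVC (an illustrative sketch only: the toy instance and variable names are ours, and no attempt is made at efficiency):

```python
import itertools

def minimize_by_svc(a, cliques):
    """MINIMIZE-BY-SVC for F(x) = sum_i a[i]*x[i] - sum_j b_j*[x_{P_j} all 1, x_{Q_j} all 0],
    with cliques[j] = (P_j, Q_j, b_j), b_j >= 0 and P_j, Q_j disjoint."""
    n, m = len(a), len(cliques)
    J = [frozenset(j for j in range(m) if i in cliques[j][0]) for i in range(n)]
    K = [frozenset(j for j in range(m) if i in cliques[j][1]) for i in range(n)]
    w = {frozenset([j]): float(cliques[j][2]) for j in range(m)}  # w_{j} := b_j
    for i in range(n):                    # distribute a_i to high-order coefficients
        if a[i] > 0 and J[i]:
            w[J[i]] = w.get(J[i], 0.0) - a[i]
        elif a[i] < 0 and K[i]:
            w[K[i]] = w.get(K[i], 0.0) + a[i]
    E = {(j, k) for i in range(n) for j in J[i] for k in K[i]}  # covering constraints
    # stand-in for SOLVE-SVC: brute-force the cheapest feasible cover u
    best_u, best_f = None, float("inf")
    for u in itertools.product([0, 1], repeat=m):
        if any(u[j] + u[k] < 1 for j, k in E):
            continue
        fu = sum(ws for S, ws in w.items() if all(u[j] for j in S))
        if fu < best_f:
            best_f, best_u = fu, u
    # decode the cover as in (10)/(11); empty products equal 1 by convention
    return [int(all(best_u[k] for k in K[i])) if a[i] < 0
            else 1 - int(all(best_u[j] for j in J[i])) for i in range(n)]

def F(x, a, cliques):
    """Evaluate the original high-order energy."""
    val = sum(ai * xi for ai, xi in zip(a, x))
    for P, Q, b in cliques:
        if all(x[i] for i in P) and not any(x[i] for i in Q):
            val -= b
    return val

# toy instance (hypothetical): 4 variables, 3 overlapping cliques
a = [3, -2, 1, -1]
cliques = [({0, 1}, {2}, 4), ({2}, {0, 3}, 3), ({3}, {1}, 2)]
x = minimize_by_svc(a, cliques)
assert F(x, a, cliques) == min(F(z, a, cliques)
                               for z in itertools.product([0, 1], repeat=len(a)))
```

On the toy instance, the decoded labeling attains the brute-force minimum of F, as Theorem 1 guarantees.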
Proposition 3. If each particular j ∈ V has either P_j = {} or Q_j = {}, then the construction in Theorem 1 is an instance of SVC-B. Moreover, it is reducible to s-t minimum cut.

Proof. In this case J_i is disjoint from K_{i′} for any i, i′ ∈ I, so the sets J = {j : |P_j| ≥ 1} and K = {j : |Q_j| ≥ 1} are disjoint. Since E contains pairs (j, k) with j ∈ J and k ∈ K, the graph (V, E) is bipartite. By the disjointness of any J_i and K_{i′}, the unique clique sets S can be partitioned into S0 = {S ⊆ J | ∃ J_i = S} and S1 = {S ⊆ K | ∃ K_i = S} so that (13) can be written as in Proposition 1 and thereby reduced to s-t minimum cut.

Corollary 1. If the sets {P_j}_{j∈V} and {Q_j}_{j∈V} satisfy the conditions of Propositions 2 and 3, then minimizing F(x) reduces to an instance of VC-B and can be solved by bipartite maximum flow.

We should note that even though SVC has a 2-approximation algorithm [18], this does not give us a 2-approximation for minimizing F in general. Even if F(x) ≥ 0 for all x, it does not imply f(u) ≥ 0 for configurations of u that violate the covering constraints, as would be required.

3 Applications

Even though any pseudo-boolean function can be expressed in form (1), many interesting problems would require an exponential number of terms to be expressed in that form. Only certain specific applications will naturally have |V| ≪ |I|, so this is the main limitation of our approach. There may be applications in high-order segmentation. For example, when P^n-Potts potentials [19] are incorporated into α-expansion, the resulting expansion step contains high-order terms that are compact in this form; in the absence of pairwise CRF terms, Proposition 3 would apply. The α-expansion algorithm has also been extended to optimize the facility location objective [7] commonly used for clustering (e.g. [24]). The resulting high-order terms inside the expansion step
The resulting high-order terms inside the expansion step 5 0.4 0.6 0.8 1 ICM BP TRWS MPLP QPBO QPBOP lb+5 +7300 +20000 +56000 +120000 0 0.2 1 2 4 8 16 QPBOI λ = ICM SVC-ICM SVC-BP SVC-TRWS SVC-MPLP SVC-QPBO SVC-QPBOP lb+5 +7300 +20000 +56000 +120000 1 2 4 8 16 SVC-QPBOP SVC-QPBOI SVC-Iwata λ = Figure 2: Effectiveness of each algorithm as strength of high-order coefficients is increased by factor of λ ∈{1..16}. For a fixed λ, the final energy of each algorithm was normalized between 0.0 (best lower bound) and 1.0 (baseline ICM energy); the true energy gap between lower bound and baseline is indicated at top, e.g. for λ = 1 the “lb+5” means ICM was typically within 5 of the lower bound. also take the form (1) (in fact, Corollary 1 applies here); with no need to build the ‘full’ high-order graph, this would allow α-expansion to work as a fast alternative to the classic greedy algorithm for facility location, very similar to the fusion-based algorithm in [4]. However, in Section 3.2 we show that our generalized transformation allows for a novel way to optimize a hierarchical facility location objective. We will use a recent geometric image parsing model [36] as a specific example. First, Section 3.1 compares a number of methods on synthetic instances of energy (1). 3.1 Results on Synthetic Instances Each instance is a function F(x) where x represents a 100 × 100 grid of binary variables with random unary coefficients ai ∈[−10, 10]. Each instance also has |J | = 50 high-order cliques with bj ∈[250λ, 500λ] (we will vary λ), where variable sets Pj and Qj each cover a random nj ×nj and mj × mj region respectively (here the region size nj, mj ∈{10, . . . , 15} is chosen randomly). If Pj and Qj are not disjoint, then either Pj := Pj \ Qj or Qj := Qj \ Pj, as determined by a coin flip. We tested the following algorithms: BP [30], TRW-S [21], MPLP [33], QPBO [14], and extensions QPBO-P and QPBO-I [32]. 
For BP we actually used the implementation provided by [21], which is very fast but, we should note, does not support message damping; convergence of BP might be more reliable if it did. The algorithms were configured as follows: BP for 25 iterations (more did not help); TRW-S for 800 iterations (epsilon 1); MPLP for 2000 initial iterations + 20 clusters added + 100 iterations per tightening; QPBO-I with 5 random improve steps. We ran MPLP for a particularly long time to ensure it had ample time to tighten and converge; indeed, it always yielded the best lower bound. We also tested MINIMIZE-BY-SVC by applying each of these algorithms to solve the resulting SVC problem, and in this case we also tried the Iwata-Nagano construction [18]. To transform high-order potentials to quadratic, we report results using the Type-II binary reduction [31] because for TRW-S/MPLP it dominated the Type-I reduction in our experiments, and for BP and the others it made no difference. This runs counter to the conventional use of "number of supermodular terms" as an estimate of difficulty: the Type-I reduction would generate one supermodular edge per high-order term, whereas Type-II generates |P_j| supermodular edges for each term (∑_{i∈P_j} x̄_i y). One minor detail is how to evaluate the 'partial' labelings returned by QPBO and QPBO-P. In the case of minimizing F directly, we simply assigned such variables x_i = [a_i < 0]. In the case of MINIMIZE-BY-SVC we included all unlabeled nodes in the cover, which means a variable x_i with u_{J_i} and u_{K_i} all unlabeled will similarly be assigned x_i = [a_i < 0]. Figure 2 shows the relative performance of each algorithm, on average. When λ = 1 the high-order coefficients are relatively weak compared to the unary terms, so even ICM succeeds at finding a near-optimal energy. For larger λ the high-order terms become more important, and we make a number of observations:

– ICM, BP, TRW-S, MPLP all perform much better when applied to the SVC problem.
– QPBO-based methods do not perform better when applied to the SVC problem.
– QPBO-I consistently gives good results; BP also gives good results if applied to SVC.
– The Iwata-Nagano construction is effectively the same as QPBO applied to SVC.

We also observed that the TRW-S lower bound was the same with or without transformation to SVC, but convergence took many fewer iterations when applied to SVC. In principle, TRW on binary problems solves the same LP relaxation as QPBO [22]. The TRW-S code finds much better solutions because, unlike QPBO, it uses the final messages as hints to decode a good solution. Table 1 gives typical running times for each of the cases in Figure 2 on a 2.66 GHz Intel Core2 processor. Code was written in C++, but the SVC transformation was not optimized at all. Still, SVC-QPBOI is 20 times faster than QPBOI while giving similar energies on average. The overall results suggest that SVC-BP or SVC-QPBOI are the fastest ways to find a low-energy solution (bold in Table 1) on problems containing many conflicting high-order terms of the form (1). Running times were relatively consistent for all λ ≥ 2.

Table 1: Typical running times of each algorithm. First row uses Type-II binary reduction on F, then directly runs each algorithm. Second row first transforms to SVC, does Type-II reduction, runs the algorithm, and decodes the result; times shown include all these steps.

                        BP      TRW-S   MPLP    QPBO    QPBO-P   QPBO-I   Iwata
directly minimize F     22ms    670ms   25min   30ms    25sec    140ms    N/A
MINIMIZE-BY-SVC(F)      5.2ms   19ms    80sec   5.4ms   99ms     7.2ms    5ms

3.2 Application: Hierarchical Model-Estimation / Clustering

In clustering and multi-model estimation, it is quite common either to explicitly constrain the number of clusters or—more relevant to our work—to penalize the number of clusters in a solution. Penalizing the number of clusters is a kind of complexity penalty on the solution.
Recent examples include [24, 7, 26], but the basic idea has been used in many contexts over a long period. A classic operations research problem with the same fundamental components is facility location: the clients (data points) must be assigned to a nearby facility (cluster), but each facility costs money to open. This can be thought of as a labeling problem, where each data point is a variable, and there is a label for each cluster. For hard optimization problems there is a particular algorithmic approach called fusion [27] or optimized crossover [1]. The basic idea is to take two candidate solutions (e.g. two attempts at clustering) and to ‘fuse’ the best parts of each solution, effectively stitching them together. To see this more concretely, imagine a labeling problem where we wish to minimize E(l), where l = (li)_{i∈I} is a vector of label assignments. If l0 is the first candidate labeling, and l1 is the second candidate labeling, a fusion operation seeks a binary string x∗ such that the crossover labeling l(x) = (li^{xi})_{i∈I}—that is, li(x) = li0 if xi = 0 and li(x) = li1 if xi = 1—minimizes E(l(x)). In other words, x∗ identifies the best possible ‘stitching’ of the two candidate solutions with respect to the energy. In [4] we derived a fusion operation based on the greedy formulation of facility location, and found that the subproblem reduced to minimum-weighted vertex cover. We will now show that the fusion operation for hierarchical facility location objectives requires minimizing an energy of the form (1), which we have already shown can be transformed to a submodular vertex-cover problem. Givoni et al. [12] recently proposed a message-passing scheme for hierarchical facility location, with experiments on synthetic and HIV strain data. We focus on a more computer-vision-centric application: detecting a hierarchy of lines and vanishing points in images using the geometric image parsing objective proposed by Tretyak et al. [36].
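A toy illustration of the crossover idea, with a brute-force fusion over all binary strings (the paper instead solves this subproblem via submodular vertex cover; the tiny facility-location energy below is our own example, not from the paper):

```python
from itertools import product

def crossover(l0, l1, x):
    """l(x): take l0[i] where x[i] == 0 and l1[i] where x[i] == 1."""
    return [l1[i] if xi else l0[i] for i, xi in enumerate(x)]

def fuse(l0, l1, energy):
    """Best crossover of two candidate labelings w.r.t. an energy (exhaustive)."""
    best_E, best_x = min(
        (energy(crossover(l0, l1, x)), x)
        for x in product([0, 1], repeat=len(l0)))
    return crossover(l0, l1, best_x), best_E

def facility_energy(data_cost, open_cost):
    """E(l) = sum_i data_cost[i][l_i] + open_cost * (number of labels used)."""
    return lambda l: sum(c[li] for c, li in zip(data_cost, l)) + open_cost * len(set(l))
```

For instance, with data costs [[0, 5], [5, 0], [5, 0]] and open cost 1, the candidates l0 = [0, 0, 0] and l1 = [1, 1, 1] have energies 11 and 6, while the fused labeling [0, 1, 1] attains energy 2—strictly better than either candidate.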
The hierarchical energy proposed by [36] contains five ‘layers’: edges, line segments, lines, vanishing points, and horizon. Each layer provides evidence for subsequent (higher) layers, and at each level there is a complexity cost that regulates how much evidence is needed to detect a line, to detect a vanishing point, etc. For simplicity we only model edges, lines, and vanishing points, but our fusion-based framework easily extends to the full model. The purpose of our experiments is, first and foremost, to demonstrate that MINIMIZE-BY-SVC speeds up inference and, secondly, to suggest that a hierarchical clustering framework based on fusion operations (similar to the non-hierarchical [4]) is an interesting and potentially worthwhile alternative to the greedy and local optimization used in state-of-the-art methods like [36]. Let {yi}_{i∈I} be a set of oriented edges yi = (xi, yi, ψi), where (x, y) is position in the image and ψ is an angle; these bottom-level features are generated by a Canny edge detector. Let L be a set of candidate lines, and let V be a set of candidate vanishing points. These sets are built by random sampling: one oriented edge to generate each candidate line, and pairs of lines to generate each candidate vanishing point. Each line j ∈ L is associated with one vanishing point kj ∈ V. (If a line passes close to multiple vanishing points, a copy of the line is made for each.) We seek a labeling l where li ∈ L ∪ ⊘ identifies the line (and vanishing point) that edge i belongs to, or assigns the outlier label ⊘. Let Di(j) = distj(xi, yi) + distj(ψi) denote the spatial distance and angular deviation of edge yi to line j, and let the outlier cost be Di(⊘) = const. Similarly, let Dj = distj(kj) be the distance between line j and its associated vanishing point, projected onto the Gaussian sphere (see [36]). Finally, let Cl and Cv denote positive constants that penalize the detection of a line and a vanishing point respectively.
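The exact distance functions are those of [36]; purely for illustration, a simplified stand-in for the data term Di(j) (perpendicular point-to-line distance plus wrapped angular deviation; all names, weights, and the outlier constant here are our own assumptions) might look like:

```python
import math

def data_term(edge, line, w_ang=1.0, outlier_cost=10.0):
    """Simplified D_i(j): perpendicular distance of the edge position to the
    line, plus angular deviation; `line is None` encodes the outlier label."""
    if line is None:
        return outlier_cost
    x, y, psi = edge                 # oriented edge: position + angle
    lx, ly, lpsi = line              # line through (lx, ly) at angle lpsi
    dx, dy = x - lx, y - ly
    spatial = abs(-math.sin(lpsi) * dx + math.cos(lpsi) * dy)
    d = abs(psi - lpsi) % math.pi    # lines are undirected: wrap deviation
    angular = min(d, math.pi - d)    # to the range [0, pi/2]
    return spatial + w_ang * angular
```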
The hierarchical energy we minimize is

E(l) = Σ_{i∈I} Di(li) + Σ_{j∈L} (Cl + Dj)·[∃i : li = j] + Σ_{k∈V} Cv·[∃i : k_{li} = k].     (15)

This energy penalizes the number of unique lines, and the number of unique vanishing points, that labeling l depends on. Given two candidate labelings l0, l1, writing the fusion energy for (15) gives

E(l(x)) = Σ_{i∈I} [Di^0 + (Di^1 − Di^0)·xi] + Σ_{j∈L} (Cl + Dj)·(1 − x_{Pj} x_{Qj}) + Σ_{k∈V} Cv·(1 − x_{Pk} x_{Qk})     (16)

where Pj = {i | li^0 = j}, Qj = {i | li^1 = j}, Pk = {i | k_{li^0} = k}, and Qk = {i | k_{li^1} = k}. Notice that the sets {Pj} are mutually disjoint, but each Pj is nested in subset Pkj, so overall Proposition 2 does not apply, and so neither does the algorithm in [4]. For each image we used 10,000 edges, and generated 8,000 candidate lines and 150 candidate vanishing points. We then generated 4 candidate labelings, each by allowing vanishing points to be detected in randomized order and their associated lines to be detected in greedy order, and then we fused the labelings together by minimizing (16). Overall inference with QPBOI took 2–6 seconds per image, whereas SVC-QPBOI took 0.5–0.9 seconds per image, a relative speedup of 4–6 times. The simplified model is enough to show that hierarchical clustering can be done in this new and potentially powerful way. As argued in [27], fusion is a robust approach because it combines the strengths—quite literally—of all methods used to generate candidates.

Figure 3: (Best seen in color.) Edge features color-coded by their detected vanishing point. Not shown are the detected lines that make up the intermediate layer of inference (similar to [36]). Images taken from the York [9] and Eurasia [36] datasets.

Acknowledgements

We thank Danny Tarlow for helpful discussion regarding MPLP, and an anonymous reviewer for suggesting a more efficient way to enforce covering constraints(!). This work was supported by NSERC Discovery Grant R3584A02, the Canadian Foundation for Innovation (CFI), and an Early Researcher Award (ERA).
References

[1] Aggarwal, C.C., Orlin, J.B., & Tai, R.P. (1997) Optimized Crossover for the Independent Set Problem. Operations Research 45(2):226–234.
[2] Ahuja, R.K., Orlin, J.B., Stein, C., & Tarjan, R.E. (1994) Improved algorithms for bipartite network flow. SIAM Journal on Computing 23(5):906–933.
[3] Ahuja, R.K., Ergun, Ö., Orlin, J.B., & Punnen, A.P. (2002) A survey of very large-scale neighborhood search techniques. Discrete Applied Mathematics 123(1–3):75–102.
[4] Delong, A., Veksler, O., & Boykov, Y. (2012) Fast Fusion Moves for Multi-Model Estimation. European Conference on Computer Vision.
[5] Boros, E. & Hammer, P.L. (2002) Pseudo-Boolean Optimization. Discrete Applied Mathematics 123(1–3):155–225.
[6] Boykov, Y., Veksler, O., & Zabih, R. (2001) Fast Approximate Energy Minimization via Graph Cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(11):1222–1239.
[7] Delong, A., Osokin, A., Isack, H.N., & Boykov, Y. (2012) Fast Approximate Energy Minimization with Label Costs. International Journal of Computer Vision 96(1):1–27. Earlier version in CVPR 2010.
[8] Delong, A., Veksler, O., Osokin, A., & Boykov, Y. (2012) Minimizing Sparse High-Order Energies by Submodular Vertex-Cover. Technical Report, Western University.
[9] Denis, P., Elder, J., & Estrada, F. (2008) Efficient Edge-Based Methods for Estimating Manhattan Frames in Urban Imagery. European Conference on Computer Vision.
[10] Freedman, D. & Drineas, P. (2005) Energy minimization via graph cuts: settling what is possible. IEEE Conference on Computer Vision and Pattern Recognition.
[11] Gallagher, A.C., Batra, D., & Parikh, D. (2011) Inference for order reduction in Markov random fields. IEEE Conference on Computer Vision and Pattern Recognition.
[12] Givoni, I.E., Chung, C., & Frey, B.J. (2011) Hierarchical Affinity Propagation. Uncertainty in AI.
[13] Gupta, R., Diwan, A., & Sarawagi, S. (2007) Efficient inference with cardinality-based clique potentials.
International Conference on Machine Learning.
[14] Hammer, P.L., Hansen, P., & Simeone, B. (1984) Roof duality, complementation and persistency in quadratic 0-1 optimization. Mathematical Programming 28:121–155.
[15] Hochbaum, D.S. (2010) Submodular problems – approximations and algorithms. Arxiv preprint arXiv:1010.1945.
[16] Iwata, S., Fleischer, L., & Fujishige, S. (2001) A combinatorial, strongly polynomial-time algorithm for minimizing submodular functions. Journal of the ACM 48:761–777.
[17] Iwata, S. & Orlin, J.B. (2009) A simple combinatorial algorithm for submodular function minimization. ACM-SIAM Symposium on Discrete Algorithms.
[18] Iwata, S. & Nagano, K. (2009) Submodular Function Minimization under Covering Constraints. IEEE Symposium on Foundations of Computer Science.
[19] Kohli, P., Kumar, M.P., & Torr, P.H.S. (2007) P3 & Beyond: Solving Energies with Higher Order Cliques. IEEE Conference on Computer Vision and Pattern Recognition.
[20] Kolmogorov, V. (2010) Minimizing a sum of submodular functions. Arxiv preprint arXiv:1006.1990.
[21] Kolmogorov, V. (2006) Convergent Tree-Reweighted Message Passing for Energy Minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(10):1568–1583.
[22] Kolmogorov, V. & Wainwright, M.J. (2005) On the optimality of tree-reweighted max-product message passing. Uncertainty in Artificial Intelligence.
[23] Kolmogorov, V. & Zabih, R. (2004) What Energy Functions Can Be Optimized via Graph Cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence 26(2):147–159.
[24] Komodakis, N., Paragios, N., & Tziritas, G. (2008) Clustering via LP-based Stabilities. Neural Information Processing Systems.
[25] Komodakis, N. & Paragios, N. (2009) Beyond pairwise energies: Efficient optimization for higher-order MRFs. IEEE Computer Vision and Pattern Recognition.
[26] Ladický, L., Russell, C., Kohli, P., & Torr, P.H.S. (2010) Graph Cut based Inference with Co-occurrence Statistics.
European Conference on Computer Vision.
[27] Lempitsky, V., Rother, C., Roth, S., & Blake, A. (2010) Fusion Moves for Markov Random Field Optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(9):1392–1405.
[28] Nemhauser, G.L. & Trotter, L.E. (1975) Vertex packings: Structural properties and algorithms. Mathematical Programming 8(1):232–248.
[29] Osokin, A. & Vetrov, D. (2012) Submodular relaxations for MRFs with high-order potentials. HiPot: ECCV Workshop on Higher-Order Models and Global Constraints in Computer Vision.
[30] Pearl, J. (1986) Fusion, propagation, and structuring in belief networks. Artificial Intelligence 29(3):241–288.
[31] Rother, C., Kohli, P., Feng, W., & Jia, J. (2009) Minimizing sparse higher order energy functions of discrete variables. IEEE Conference on Computer Vision and Pattern Recognition.
[32] Rother, C., Kolmogorov, V., Lempitsky, V., & Szummer, M. (2007) Optimizing Binary MRFs via Extended Roof Duality. IEEE Conference on Computer Vision and Pattern Recognition.
[33] Sontag, D., Meltzer, T., Globerson, A., Jaakkola, T., & Weiss, Y. (2008) Tightening LP relaxations for MAP using message passing. Uncertainty in Artificial Intelligence.
[34] Tarlow, D., Givoni, I.E., & Zemel, R.S. (2010) HOP-MAP: Efficient message passing with high order potentials. International Conference on Artificial Intelligence and Statistics.
[35] Tarlow, D. & Zemel, R. (2012) Structured Output Learning with High Order Loss Functions. International Conference on Artificial Intelligence and Statistics.
[36] Tretyak, E., Barinova, O., Kohli, P., & Lempitsky, V. (2011) Geometric Image Parsing in Man-Made Environments. International Journal of Computer Vision 97(3):305–321.
[37] Tsang, E. (1993) Foundations of constraint satisfaction. Academic Press, London.
[38] Werner, T. (2008) High-arity Interactions, Polyhedral Relaxations, and Cutting Plane Algorithm for Soft Constraint Optimisation (MAP-MRF).
IEEE Conference on Computer Vision and Pattern Recognition.
A new metric on the manifold of kernel matrices with application to matrix geometric means

Suvrit Sra
Max Planck Institute for Intelligent Systems
72076 Tübingen, Germany
suvrit@tuebingen.mpg.de

Abstract

Symmetric positive definite (spd) matrices pervade numerous scientific disciplines, including machine learning and optimization. We consider the key task of measuring distances between two spd matrices; a task that is often nontrivial whenever the distance function must respect the non-Euclidean geometry of spd matrices. Typical non-Euclidean distance measures, such as the Riemannian metric δR(X, Y) = ∥log(Y^{−1/2} X Y^{−1/2})∥F, are computationally demanding and also complicated to use. To allay some of these difficulties, we introduce a new metric on spd matrices, which not only respects non-Euclidean geometry but also offers faster computation than δR while being less complicated to use. We support our claims theoretically by listing a set of theorems that relate our metric to δR(X, Y), and experimentally by studying the nonconvex problem of computing matrix geometric means based on squared distances.

1 Introduction

Symmetric positive definite (spd) matrices1 are remarkably pervasive in a multitude of areas, especially machine learning and optimization. Several applications in these areas require an answer to the fundamental question: how to measure a distance between two spd matrices? This question arises, for instance, when optimizing over the set of spd matrices. To judge convergence of an optimization procedure or in the design of algorithms we may need to compute distances between spd matrices [1–3]. As a more concrete example, suppose we wish to retrieve from a large database of spd matrices the “closest” spd matrix to an input query. The quality of such a retrieval depends crucially on the distance function used to measure closeness; a choice that also dramatically impacts the actual search algorithm itself [4, 5].
Another familiar setting is that of computing statistical metrics for multivariate Gaussian distributions [6], or more recently, quantum statistics [7]. Several other applications depend on being able to effectively measure distances between spd matrices–see e.g., [8–10] and references therein. In many of these domains, viewing spd matrices as members of a Euclidean vector space is insufficient, and the non-Euclidean geometry conferred by a suitable metric is of great importance. Indeed, the set of (strict) spd matrices forms a differentiable Riemannian manifold [11, 10] that is perhaps the most studied example of a manifold of nonpositive curvature [12; Ch.10]. These matrices also form a convex cone, and the set of spd matrices in fact serves as a canonical higher-rank symmetric space [13]. The conic view is of great importance in convex optimization [14–16], symmetric spaces are important in algebra and analysis [13, 17], and in optimization [14, 18], while the manifold and other views are also widely important—see e.g., [11; Ch.6] for an overview. 1We could equally consider Hermitian matrices, but for simplicity we consider only real matrices. 1 The starting point for this paper is the manifold view. For space reasons, we limit our discussion to P(n) as a Riemannian manifold, noting that most of the discussion could also be set in terms of Finsler manifolds. But before we go further, let us fix basic notation. Notation. Let Sn denote the set of n × n real symmetric matrices. A matrix A ∈Sn is called positive (we drop the word “definite” for brevity) if ⟨x, Ax⟩> 0 for all x ̸= 0; also denoted as A > 0. (1) We denote the set of n × n positive matrices by Pn. If only the non-strict inequality ⟨x, Ax⟩≥0 holds (for all x ∈Rn) we say A is positive semidefinite; this is also denoted as A ≥0. For two matrices A, B ∈Sn, the operator inequality A ≥B means that the difference A −B ≥0. 
The Frobenius norm of a matrix X ∈ R^{m×n} is defined as ∥X∥F = √(tr(X^T X)), while ∥X∥ denotes the standard operator norm. For an analytic function f on C, and a diagonalizable matrix A = UΛU^T, f(A) := U f(Λ) U^T. Let λ(X) denote the vector of eigenvalues of X (in any order) and Eig(X) a diagonal matrix that has λ(X) as its diagonal. We also use λ↓(X) to denote a sorted (in descending order) version of λ(X), and λ↑(X) is defined likewise. Finally, we define Eig↓(X) and Eig↑(X) as the corresponding diagonal matrices.

Background. The set Pn is a canonical higher-rank symmetric space that is actually an open set within Sn, and thereby a differentiable manifold of dimension n(n + 1)/2. The tangent space at a point A ∈ Pn can be identified with Sn, so a suitable inner product on Sn leads to the Riemannian distance on Pn [11; Ch.6]. At the point A this metric is induced by the differential form

ds² = ∥A^{−1/2} dA A^{−1/2}∥F² = tr(A^{−1} dA A^{−1} dA).     (2)

For A, B ∈ Pn, it is known that there is a unique geodesic joining them, given by [11; Thm.6.1.6]:

γ(t) := A ♯t B := A^{1/2}(A^{−1/2} B A^{−1/2})^t A^{1/2},  0 ≤ t ≤ 1,     (3)

and its midpoint γ(1/2) is the geometric mean of A and B. The associated Riemannian metric is

δR(A, B) := ∥log(A^{−1/2} B A^{−1/2})∥F,  for A, B > 0.     (4)

From definition (4) it is apparent that computing δR will be computationally demanding and requires care. Indeed, to compute (4) we must essentially compute generalized eigenvalues of A and B. For an application that must repeatedly compute distances between numerous pairs of matrices, this computational burden can be excessive [4]. Driven by such computational concerns, Cherian et al. [4] introduced a symmetrized “log-det” based matrix divergence:

J(A, B) = log det((A + B)/2) − (1/2) log det(AB),  for A, B > 0.     (5)

This divergence was used as a proxy for δR, and it was observed that J(A, B) offers the same level of performance on a difficult nearest-neighbor retrieval task as δR, while being many times faster!
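To make the computational contrast concrete, here is a minimal numpy sketch (ours, not the paper's code; function names are our own) of both quantities: δR needs a full generalized eigendecomposition, while √J needs only determinants, obtainable from Cholesky-level factorizations:

```python
import numpy as np

def delta_R(A, B):
    """Riemannian metric (4): ||log(A^{-1/2} B A^{-1/2})||_F."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = (V / np.sqrt(w)) @ V.T          # A^{-1/2}
    lam = np.linalg.eigvalsh(A_inv_sqrt @ B @ A_inv_sqrt)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def delta_ld(A, B):
    """sqrt of the log-det divergence (5); no eigenvalues needed."""
    J = np.linalg.slogdet((A + B) / 2)[1] \
        - 0.5 * (np.linalg.slogdet(A)[1] + np.linalg.slogdet(B)[1])
    return np.sqrt(max(J, 0.0))                  # clamp tiny negative round-off

def random_spd(n, rng):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)
```

Both distances are invariant under congruence transforms X*AX (cf. Prop. 2 later in the paper), which makes a convenient numerical sanity check.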
Among other reasons, a large part of their speedup was attributed to the avoidance of eigenvalue computations for obtaining J(A, B) or its derivatives, a luxury the δR does not permit. Independently, Chebbi and Moahker [2] also introduced a slightly generalized version of (5) and studied some of its properties, especially computation of “centroids” of positive matrices using their matrix divergence. Interestingly, Cherian et al. [4] claimed that p J(A, B) might not be metric, whereas Chebbi and Moahker [2] conjectured that p J(A, B) is a metric. We resolve this uncertainty and prove that p J(A, B) is indeed a metric, albeit not one that embeds isometrically into a Hilbert space. Due to space constraints, we only summarily mention several of the properties that this metric satisfies, primarily to help develop intuition that motivates √ J as a good proxy for the Riemannian metric δR. We apply these insights to study “matrix geometric means” of set of positive matrices: a problem also studied in [4, 2]. Both cited papers have some gaps in their claims, which we fill by proving that even though computing the geometric mean is a nonconvex problem, we can still compute it efficiently and optimally. 2 2 The δℓd metric The main result of this paper is Theorem 1. Theorem 1. Let J be as in (5), and define δℓd := √ J. Then, δℓd is a metric on Pn. Our proof of Theorem 1 depends on several key steps. Due to restrictions on space we cannot include full proofs of all the results, and refer the reader to the longer article [19] instead. We do, however, provide sketches for the crucial steps in our proof. Proposition 2. Let A, B, C ∈Pn. Then, (i) δℓd(I, A) = δℓd(I, Eig(A)); (ii) for P, Q ∈GL(n, C), δℓd(PAQ, PBQ) = δℓd(A, B); (iii) for X ∈GL(n, C), δℓd(X∗AX, X∗BX) = δℓd(A, B); (iv) δℓd(A, B) = δℓd(A−1, B−1); (v) δℓd(A ⊗B, A ⊗C) = √nδℓd(B, C), where ⊗denotes the Kronecker or tensor product. The first crucial result is that for positive scalars, δℓd is indeed a metric. 
To prove this, recall the notion of negative definite functions (Def. 3), and a related classical result of Schoenberg (Thm. 4). Definition 3 ([20; Def. 1.1]). Let X be a nonempty set. A function ψ : X × X →R is said to be negative definite if for all x, y ∈X it is symmetric (ψ(x, y) = ψ(y, x)), and satisfies the inequality Xn i,j=1 cicjψ(xi, xj) ≤0, (6) for all integers n ≥2, and subsets {xi}n i=1 ⊆X, {ci}n i=1 ⊆R with Pn i=1 ci = 0. Theorem 4 ([20; Prop. 3.2, Chap. 3]). Let ψ : X × X →R be negative definite. Then, there is a Hilbert space H ⊆RX and a mapping x 7→ϕ(x) from X →H such that we have the equality ∥ϕ(x) −ϕ(y)∥2 H = ψ(x, y) −1 2(ψ(x, x) + ψ(y, y)). (7) Moreover, negative definiteness of ψ is necessary for such a mapping to exist. Theorem 5 (Scalar case). Define δ2 s(x, y) := log[(x + y)/(2√xy)] for scalars x, y > 0. Then, δs(x, y) ≤δs(x, z) + δs(y, z) for all x, y, z > 0. (8) Proof. We show that ψ(x, y) = log x+y 2 is negative definite. Since δ2 s(x, y) = ψ(x, y) − 1 2(ψ(x, x)+ψ(y, y)), Thm. 4 then implies the triangle inequality (8). To prove ψ is negative definite, by [Thm. 2.2, Chap. 3, 20] we may equivalently show that e−βψ(x,y) = ((x + y)/2)−β is a positive definite function for β > 0, and all x, y > 0. To that end, it suffices to show that the matrix H = [hij] = (xi + xj)−β , 1 ≤i, j ≤n, is positive definite for every integer n ≥1, and positive numbers {xi}n i=1. Now, observe that hij = 1 (xi + xj)β = 1 Γ(β) Z ∞ 0 e−t(xi+xj)tβ−1dt, (9) where Γ(β) = R ∞ 0 e−ttβ−1dt is the well-known Gamma function. Thus, with fi(t) = e−txit β−1 2 ∈ L2([0, ∞)), we see that [hij] equals the Gram matrix [⟨fi, fj⟩], whereby H > 0. Using Thm. 5 we obtain the following simple but important “Minkowsi” inequality for δs. Corollary 6. Let x, y, z > 0 be scalars, and let p ≥1. Then, Xn i=1 δp s(xi, yi) 1/p ≤ Xn i=1 δp s(xi, zi) 1/p + Xn i=1 δp s(yi, zi) 1/p . (10) Corollary 7. Let X, Y, Z > 0 be diagonal matrices. 
Then, δℓd(X, Y ) ≤δℓd(X, Z) + δℓd(Y, Z) (11) Next, we recall a fundamental determinantal inequality. Theorem 8 ([21; Exercise VI.7.2]). Let A, B ∈Pn. Then, Yn i=1(λ↓ i (A) + λ↓ i (B)) ≤det(A + B) ≤ Yn i=1(λ↓ i (A) + λ↑ i (B)). (12) 3 Corollary 9. Let A, B > 0. Then, δℓd(Eig↓(A), Eig↓(B)) ≤ δℓd(A, B) ≤ δℓd(Eig↓(A), Eig↑(B)) The final result that we need is a well-known fact from linear algebra (our own proof is in [19]). Lemma 10 ([e.g., 22; p.58]). Let A > 0, and let B be Hermitian. There is a matrix P for which P ∗AP = I, and P ∗BP = D, and D is diagonal. (13) With all these theorems and lemmas in hand, we are now finally ready to prove Thm. 1. Proof. (Theorem 1). We must prove that δℓd is symmetric, nonnegative, definite, and that is satisfies the triangle inequality. Symmetry is immediate from definition. Nonnegativity and definiteness follow from the strict log-concavity (on Pn) of the determinant, whereby det X+Y 2 ≥det(X)1/2 det(Y )1/2, which equality iff X = Y , which in turn implies that δℓd(X, Y ) ≥0 with equality iff X = Y . The only hard part is to prove the triangle inequality, a result that has eluded previous attempts [4, 2]. Let X, Y, Z > 0 be arbitrary. From Lemma 10 we know that there is a matrix P such that P ∗XP = I and P ∗Y P = D. Since Z > 0 is arbitrary, and congruence preserves positive definiteness, we may write just Z instead of P ∗ZP. Also, since δℓd(P ∗XP, P ∗Y P) = δℓd(X, Y ) (see Prop. 2), proving the triangle inequality reduces to showing that δℓd(I, D) ≤δℓd(I, Z) + δℓd(D, Z). (14) Consider now the diagonal matrices D↓and Eig↓(Z). Corollary 7 asserts the inequality δℓd(I, D↓) ≤δℓd(I, Eig↓(Z)) + δℓd(D↓, Eig↓(Z)). (15) Prop. 2(i) implies that δℓd(I, D) = δℓd(I, D↓) and δℓd(I, Z) = δℓd(I, Eig↓(Z)), while Cor. 9 shows that δℓd(D↓, Eig↓(Z)) ≤δℓd(D, Z). Combining these inequalities, we obtain (14), as desired. Although the metric space (Pn, δℓd) has numerous fascinating properties, due to space concerns, we do not discuss it further. 
Instead we discuss a connection more important to machine learning and related areas: kernel functions arising from δℓd. Indeed, some of connections (e.g., Thm. 11) have already been successfully applied very recently in computer vision [23]. 2.1 Hilbert space embedding of δℓd Theorem 1 shows that δℓd is a metric and Theorem 5 shows that actually for positive scalars, the metric space (R++, δs) embeds isometrically into a Hilbert space. It is, therefore, natural to ask whether (Pn, δℓd) also admits such an embedding? Theorem 4 says that such a kernel exists if and only if δ2 ℓd is negative definite; equivalently, iff e−βδ2 ℓd(X,Y ) = det(XY )β det((X+Y )/2)β , (16) is a positive definite kernel for all β > 0. To verify this, it suffices to check if the matrix Hβ = [hij] := h 1 det(Xi+Xj)β i , 1 ≤i, j ≤m, (17) is positive for every integer m ≥1 and arbitrary positive matrices X1, . . . , Xm. Unfortunately, a numerical experiment (see [19]) reveals that Hβ is not always positive. This implies that (Pd, δℓd) cannot embed isometrically into a Hilbert space. Undeterred, we still ask: For what choices of β is Hβ positive? Surprisingly, this question admits a complete answer. Theorem 11 characterizes the values of β necessary and sufficient for Hβ to be positive. We note here that the case β = 1 was essentially treated in [24], in the context of semigroup kernels on measures. Theorem 11. Let X1, . . . , Xm ∈Pn. The matrix Hβ defined by (17) is positive, if and only if β ∈ j 2 : j ∈N, and 1 ≤j ≤(n −1) ∪ γ : γ ∈R, and γ > 1 2(n −1) . (18) 4 Proof. We first prove the “if” part. Define the function fi := 1 πn/4 e−xT Xix (for 1 ≤i ≤m). Then, fi ∈L2(Rn), where the inner-product is given by the Gaussian integral ⟨fi, fj⟩:= 1 πd/2 Z Rn e−xT (Xi+Xj)xdx = 1 det(Xi+Xj)1/2 . (19) From (19) it follows that H1/2 is positive. Since the Schur (elementwise) product of two positive matrices is again positive, it follows that Hβ > 0 whenever β is an integer multiple of 1/2. 
To extend the result to all β covered by (18), we need a more intricate integral representation, namely the multivariate Gamma function, defined as [25; §2.1.2] Γn(β) := Z Pn e−tr(A) det(A)β−(n+1)/2dA, (20) where the integral converges for β > 1 2(n −1). Define for each i the function fi := ce−tr(AXi) (c > 0 is a constant). Then, fi ∈L2(Pn), which we equip with the inner product ⟨fi, fj⟩:= c2 Z Pn e−tr(A(Xi+Xj)) det(A)β−(n+1)/2dA = det(Xi + Xj)−β, and it exists whenever β > 1 2(n −1). Consequently, Hβ is positive for all β defined by (18). The “only if” part follows from deeper results in the rich theory of symmetric spaces.2 Specifically, since Pn is a symmetric cone, and 1/ det(X) is a decreasing function on this cone, (i.e., 1/ det(X + Y ) ≤1/ det(X) for all X, Y > 0), an appeal to [26; VII.3.1] grants our claim. Remark 12. Readers versed in stochastic processes will recognize that the above result provides a different perspective on a classical result concerning infinite divisibility of Wishart processes [27], where the set (18) also arises as a consequence of Gindikin’s theorem [28]. At this point, it is worth mentioning the following “obvious” result. Theorem 13. Let X be a set of positive matrices that commute with each other. Then, (X, δℓd) can be isometrically embedded into some Hilbert space. Proof. The proof follows because a commuting set of matrices can be simultaneously diagonalized, and for diagonal matrices, δ2 ℓd(X, Y ) = P i δ2 s(Xii, Yii), which is a nonnegative sum of negative definite kernels and is therefore itself negative definite. 3 Connections between δℓd and δR After showing that δℓd is a metric and studying its relation to kernel functions, let us now return to our original motivation: introducing δℓd as a reasonable alternative to the widely used Riemannian metric δR. We note here that Cherian et al. [4; 29] offer strong experimental evidence supporting δℓd as an alternative; we offer more theoretical results. 
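The positivity question for Hβ from Section 2.1 is also easy to probe empirically; a small sketch (our own, not the experiment of [19]) that builds Hβ from (17) and inspects its smallest eigenvalue:

```python
import numpy as np

def min_eig_H(mats, beta):
    """Smallest eigenvalue of H_beta = [det(X_i + X_j)^{-beta}] from (17)."""
    m = len(mats)
    H = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            H[i, j] = np.linalg.det(mats[i] + mats[j]) ** (-beta)
    return np.linalg.eigvalsh(H)[0]
```

For β in the set (18) the smallest eigenvalue is nonnegative for every draw; for a disallowed β (e.g. β = 0.75 with n = 3, which lies strictly between the admissible values 1/2 and 1), Theorem 11 guarantees that some collection of matrices yields a negative eigenvalue, which random search can exhibit.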
Our theoretical results are based around showing that δℓd fulfills several properties akin to those displayed by δR. Due to lack of space, we present only a summary of our results in Table 1, and cite the corresponding theorems in the longer article [19] for proofs. While the actual proofs are valuable and instructive, the key message worth noting is: both δR and δℓd express the (negatively curved) non-Euclidean geometry of their respective metric spaces by displaying similar properties. 4 Application: computing geometric means In this section we turn our attention to an object that perhaps connects δR and δℓd most intimately: the operator geometric mean (GM), which is given by the midpoint of the geodesic (3), denoted as A♯B := γ(1/2) = A1/2(A−1/2BA−1/2)1/2A1/2. (21) 2Specifically, the set (18) is identical to the Wallach set which is important in the study of Hilbert spaces of holomorphic functions over symmetric domains [26; Ch.XIII]. 5 Riemannian metric Ref. δℓd-metric Ref. δR(X∗AX, X∗BX) = δR(A, B) [11; Ch.6] δℓd(X∗AX, X∗BX) = δℓd(A, B) Prop. 2 δR(A−1, B−1) = δR(A, B) [11; Ch.6] δℓd(A−1, B−1) = δℓd(A, B) Prop. 2 δR(At, Bt) ≤tδR(A, B) [11; Ex.6.5.4] δℓd(At, Bt) ≤ √ tδℓd(A, B) [19; Th.4.6] δR(As, Bs) ≤(s/u)δR(Au, Bu) [19; Th.4.11] δℓd(As, Bs) ≤ p s/uδℓd(Au, Bu) [19; Th.4.11] δR(A, A♯B) = δR(B, A♯B) Trivial δℓd(A, A♯B) = δℓd(B, A♯B) Th.14 δR(A, A♯tB) = tδR(A, B) [11; Th.6.1.6] δℓd(A, A♯tB) ≤ √ tδℓd(A, B) [19; Th.4.7] δR(A♯tB, A♯tC) ≤tδR(B, C) [11; Th.6.1.2] δℓd(A♯tB, A♯tC) ≤ √ tδℓd(B, C) [19; Th.4.8] δ2 R(X, A) + δ2 R(X, B) min 7→GM [11; Ch. 6] δ2 ℓd(X, A) + δ2 ℓd(X, B) min 7→GM Th.14 δR(A + X, A + Y ) ≤δR(X, Y ) [3] δℓd(A + X, A + Y ) ≤δℓd(X, Y ) [19; Th.4.9] Table 1: Some of the similarities between δR and δℓd. All matrices are assumed to be in Pn. The scalars t, s, u satisfy 0 < t ≤1, 1 ≤s ≤u < ∞. 
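Several tabulated properties are easy to verify numerically; for example, a minimal numpy sketch (ours, not the paper's code) computing A♯B from (21) and checking the equidistance row δℓd(A, A♯B) = δℓd(B, A♯B):

```python
import numpy as np

def spd_sqrt(A):
    """Symmetric square root of an spd matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def sharp(A, B):
    """Geometric mean A#B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, eq. (21)."""
    As = spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As

def delta_ld(A, B):
    J = np.linalg.slogdet((A + B) / 2)[1] \
        - 0.5 * (np.linalg.slogdet(A)[1] + np.linalg.slogdet(B)[1])
    return np.sqrt(max(J, 0.0))
```

Besides equidistance, A♯B is symmetric in its arguments and solves the Riccati equation X A^{-1} X = B, both of which serve as correctness checks.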
The GM (21) has numerous attractive properties—see for instance [30]—among these, the following variational characterization is very important [31, 32], A♯B = argminX>0 δ2 R(A, X) + δ2 R(B, X). (22) especially because it generalizes the matrix geometric mean to more than two matrices. Specifically, this “natural” generalization is the Karcher mean (Fr´echet mean) [31, 32, 11]: GM(A1, . . . , Am) := argminX>0 Xm i=1 δ2 R(X, Ai). (23) This multivariable generalization is in fact a well-studied difficult problem—see e.g., [33] for information on state-of-the-art. Indeed, its inordinate computational expenses motivated Cherian et al. [4] to study the alternative mean GMℓd(A1, . . . , Am) := argmin X>0 φ(X) := Xm i=1 δ2 ℓd(X, Ai), (24) which has also been more thoroughly studied by Chebbi and Moahker [2]. Although the mean (24) was previously studied in [4, 2], some crucial aspects were missing. Specifically, Cherian et al. [4] only proved their solution to be a stationary point of φ(X); they did not prove either global or local optimality. Although Chebbi and Moahker [2] showed that (24) has a unique solution, like [4] they too only proved stationarity, neither global nor local optimality. We fill these gaps, and we make the following main contributions below: 1. We connect (24) to the Karcher mean more closely, where in Theorem 14 we shows that for the two matrix case both problems have the same solution; 2. We show that the unique positive solution to (24) is globally optimal; this result is particularly interesting because φ(X) is nonconvex. We begin by looking at the two variable case of GMℓd (24). Theorem 14. Let A, B > 0. Then, A♯B = argminX>0 φ(X) := δ2 ℓd(X, A) + δ2 ℓd(X, B). (25) Moreover, A♯B is equidistant from A and B, i.e., δℓd(A, A♯B) = δℓd(B, A♯B). Proof. If A = B, then clearly X = A minimizes φ(X). Assume therefore, that A ̸= B. Ignoring the constraint X > 0 momentarily, we see that any stationary point must satisfy ∇φ(X) = 0. 
Thus,

∇φ(X) = ½((X + A)/2)^{−1} + ½((X + B)/2)^{−1} − X^{−1} = 0
⟹ (X + A)X^{−1}(X + B) = 2X + A + B
⟹ B = XA^{−1}X.  (26)

The latter equation is a Riccati equation that is known to have a unique, positive definite solution given by the matrix GM (21) (see [11; Prop 1.2.13]). All that remains to show is that this GM is in fact a local minimizer. To that end, we must show that the Hessian ∇²φ(X) > 0 at X = A♯B; but this claim is immediate from Theorem 18. So A♯B is a strict local minimum of (25), which is actually a global minimum because it is the unique positive solution to ∇φ(X) = 0. Finally, the equidistance property follows after some algebraic manipulations; we omit the details for brevity [19].

Let us now turn to the general case (24). The first-order optimality condition is

∇φ(X) = Σ_{i=1}^m ½((X + A_i)/2)^{−1} − (m/2) X^{−1} = 0,  X > 0.  (27)

From (27), using Lemma 15, it can be inferred [see also 2, 4] that any critical point X of (24) lies in the convex, compact set specified by ((1/m) Σ_{i=1}^m A_i^{−1})^{−1} ⪯ X ⪯ (1/m) Σ_{i=1}^m A_i.

Lemma 15 ([21; Ch.5]). The map X ↦ X^{−1} on Pn is order reversing and operator convex. That is, for X, Y ∈ Pn, if X ≥ Y, then X^{−1} ≤ Y^{−1}; and for t ∈ [0, 1], (tX + (1 − t)Y)^{−1} ≤ tX^{−1} + (1 − t)Y^{−1}.

Lemma 16 ([19]). Let A, B, C, D ∈ Pn, so that A ≥ B and C ≥ D. Then, A ⊗ C ≥ B ⊗ D.

Lemma 17 (Uniqueness [2]). The nonlinear equation (27) has a unique positive solution.

Using the above results, we can finally prove the main theorem of this section.

Theorem 18. Let X be a matrix satisfying (27). Then, it is the unique global minimizer of (24).

Proof. The objective function φ(X) in (24) has only one positive stationary point, which follows from Lemma 17. Let X be this stationary point satisfying (27). We show that X is actually a local minimum; global optimality is then immediate from the uniqueness of X. To show local optimality, we prove that the Hessian ∇²φ(X) > 0. Ignoring constants, showing positivity of the Hessian reduces to proving that

m X^{−1} ⊗ X^{−1} − Σ_{i=1}^m ½((X + A_i)/2)^{−1} ⊗ ((X + A_i)/2)^{−1} > 0.
(28)

Now replace mX^{−1} in (28) using condition (27), which gives mX^{−1} = Σ_{i=1}^m ((X + A_i)/2)^{−1}; since ½((X + A_i)/2)^{−1} = (X + A_i)^{−1}, inequality (28) turns into

Σ_{i=1}^m ((X + A_i)/2)^{−1} ⊗ X^{−1} > Σ_{i=1}^m ((X + A_i)/2)^{−1} ⊗ (X + A_i)^{−1}.  (29)

From Lemma 15 we know that X^{−1} > (X + A_i)^{−1}, so an application of Lemma 16 shows that ((X + A_i)/2)^{−1} ⊗ X^{−1} > ((X + A_i)/2)^{−1} ⊗ (X + A_i)^{−1} for 1 ≤ i ≤ m. Summing up, we obtain (29), which implies the desired local (and by uniqueness, global) optimality of X.

Remark 19. It is worth noting that Theorem 18 establishes that solving (27) yields the global minimum of a nonconvex optimization problem. This result is all the more remarkable because, unlike CAT(0)-metrics such as δR, the metric δℓd is not geodesically convex.

4.1 Numerical Results

We present a key numerical result to illustrate the large savings in running time when computing with δℓd compared with δR. To compute the Karcher mean we downloaded the "Matrix Means Toolbox" of Bini and Iannazzo from http://bezout.dm.unipi.it/software/mmtoolbox/. In particular, we use the file rich.m, which implements a state-of-the-art method [33]. The first plot in Fig. 1 indicates that δℓd can be around 5 times faster than δR2 and up to 50 times faster than δR1. The second plot shows how expensive it can be to compute GM (23) as opposed to GMℓd (24)—up to 1000 times! The former was computed using the method of [33], while the latter runs the fixed-point iteration proposed in [2] (the iteration was run until ∥∇φ(X)∥ fell below 10^{−10}). The key point here is not that the fixed-point iteration is faster, but rather that (24) is a much simpler problem thanks to the convenient eigenvalue-free structure of δℓd.

5 Conclusions and future work

We presented a new metric on the manifold of positive definite matrices, and related it to the classical Riemannian metric on this manifold.
Empirically, our new metric was shown to lead to large computational gains, while theoretically, a series of theorems demonstrated how it expresses the negatively curved non-Euclidean geometry in a manner analogous to the Riemannian metric.

Figure 1: Running time comparisons between δR and δℓd. The left panel shows the time (in seconds) taken to compute δR and δℓd, averaged over 10 runs to reduce variance; in the plot, δR1 refers to the implementation of δR in the matrix means toolbox [33], while δR2 is our own implementation. The right panel shows the time taken to compute GM and GMℓd for 10 matrices.

At this point, there are several directions of future work opened by our paper. We mention some of the most relevant ones below. (i) Study further geometric properties of the metric space (Pn, δℓd); (ii) further enrich the connections to δR, and to other (Finsler) metrics on Pn; (iii) study properties of the geometric mean GMℓd (24), including faster algorithms to compute it; (iv) akin to [4], apply δℓd in settings where δR has so far been dominant. We plan to tackle some of these problems, and hope that our paper encourages other researchers in machine learning and optimization to also study them.

References
[1] H. Lee and Y. Lim. Invariant metrics, contractions and nonlinear matrix equations. Nonlinearity, 21:857–878, 2008.
[2] Z. Chebbi and M. Moakher. Means of Hermitian positive-definite matrices based on the log-determinant α-divergence function. Linear Algebra and its Applications, 436:1872–1889, 2012.
[3] P. Bougerol. Kalman Filtering with Random Coefficients and Contractions. SIAM J. Control Optim., 31(4):942–959, 1993.
[4] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos.
Efficient Similarity Search for Covariance Matrices via the Jensen-Bregman LogDet Divergence. In International Conference on Computer Vision (ICCV), Nov. 2011.
[5] F. Porikli, O. Tuzel, and P. Meer. Covariance Tracking using Model Update Based on Lie Algebra. In IEEE CVPR, 2006.
[6] L. T. Skovgaard. A Riemannian Geometry of the Multivariate Normal Model. Scandinavian Journal of Statistics, 11(4):211–223, 1984.
[7] D. Petz. Quantum Information Theory and Quantum Statistics. Springer, 2008.
[8] I. Dryden, A. Koloydenko, and D. Zhou. Non-Euclidean statistics for covariance matrices, with applications to diffusion tensor imaging. Annals of Applied Statistics, 3(3):1102–1123, 2009.
[9] H. Zhu, H. Zhang, J. G. Ibrahim, and B. S. Peterson. Statistical Analysis of Diffusion Tensors in Diffusion-Weighted Magnetic Resonance Imaging Data. Journal of the American Statistical Association, 102(480):1085–1102, 2007.
[10] F. Hiai and D. Petz. Riemannian metrics on positive definite matrices related to means. Linear Algebra and its Applications, 430:3105–3130, 2009.
[11] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
[12] M. R. Bridson and A. Haefliger. Metric Spaces of Non-Positive Curvature. Springer, 1999.
[13] A. Terras. Harmonic Analysis on Symmetric Spaces and Applications, volume II. Springer, 1988.
[14] Yu. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM, 1987.
[15] A. Ben-Tal and A. Nemirovski. Lectures on modern convex optimization: Analysis, algorithms, and engineering applications. SIAM, 2001.
[16] Yu. Nesterov and M. J. Todd. On the Riemannian geometry defined for self-concordant barriers and interior point methods. Found. Comput. Math., 2:333–361, 2002.
[17] S. Helgason. Geometric Analysis on Symmetric Spaces. Number 39 in Mathematical Surveys and Monographs. AMS, second edition, 2008.
[18] H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors.
Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. Kluwer Academic, 2000.
[19] S. Sra. Positive definite matrices and the Symmetric Stein Divergence. arXiv:1110.1773, October 2012.
[20] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic analysis on semigroups: theory of positive definite and related functions, volume 100 of GTM. Springer, 1984.
[21] R. Bhatia. Matrix Analysis. Springer, 1997.
[22] R. Bellman. Introduction to Matrix Analysis. SIAM, second edition, 1970.
[23] M. Harandi, C. Sanderson, R. Hartley, and B. Lovell. Sparse Coding and Dictionary Learning for Symmetric Positive Definite Matrices: A Kernel Approach. In European Conference on Computer Vision (ECCV), 2012.
[24] M. Cuturi, K. Fukumizu, and J. P. Vert. Semigroup kernels on measures. JMLR, 6:1169–1198, 2005.
[25] R. J. Muirhead. Aspects of multivariate statistical theory. Wiley Interscience, 1982.
[26] J. Faraut and A. Korányi. Analysis on Symmetric Cones. Clarendon Press, 1994.
[27] M.-F. Bru. Wishart Processes. J. Theoretical Probability, 4(4), 1991.
[28] S. G. Gindikin. Invariant generalized functions in homogeneous domains. Functional Analysis and its Applications, 9:50–52, 1975.
[29] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos. Jensen-Bregman LogDet Divergence with Application to Efficient Similarity Search for Covariance Matrices. IEEE TPAMI, 2012. Submitted.
[30] T. Ando. Concavity of certain maps on positive definite matrices and applications to Hadamard products. Linear Algebra and its Applications, 26(0):203–241, 1979.
[31] R. Bhatia and J. A. R. Holbrook. Riemannian geometry and matrix geometric means. Linear Algebra Appl., 413:594–618, 2006.
[32] M. Moakher. A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM J. Matrix Anal. Appl. (SIMAX), 26:735–747, 2005.
[33] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices.
Linear Algebra and its Applications, Oct. 2011. Available online.
Wavelet based multi-scale shape features on arbitrary surfaces for cortical thickness discrimination

Won Hwa Kim†¶∗ Deepti Pachauri† Charles Hatt‡ Moo K. Chung§ Sterling C. Johnson∗¶ Vikas Singh§†∗¶
†Dept. of Computer Sciences, University of Wisconsin, Madison, WI
§Dept. of Biostatistics & Med. Informatics, University of Wisconsin, Madison, WI
‡Dept. of Biomedical Engineering, University of Wisconsin, Madison, WI
¶Wisconsin Alzheimer’s Disease Research Center, University of Wisconsin, Madison, WI
∗GRECC, William S. Middleton VA Hospital, Madison, WI
{wonhwa, pachauri}@cs.wisc.edu {hatt, mkchung}@wisc.edu scj@medicine.wisc.edu vsingh@biostat.wisc.edu

Abstract

Hypothesis testing on signals defined on surfaces (such as the cortical surface) is a fundamental component of a variety of studies in Neuroscience. The goal here is to identify regions that exhibit changes as a function of the clinical condition under study. As the clinical questions of interest move towards identifying very early signs of diseases, the corresponding statistical differences at the group level invariably become weaker and increasingly hard to identify. Indeed, after a multiple comparisons correction is adopted (to account for correlated statistical tests over all surface points), very few regions may survive. In contrast to hypothesis tests on point-wise measurements, in this paper, we make the case for performing statistical analysis on multi-scale shape descriptors that characterize the local topological context of the signal around each surface vertex. Our descriptors are based on recent results from harmonic analysis that show how wavelet theory extends to non-Euclidean settings (i.e., irregular weighted graphs). We provide strong evidence that these descriptors successfully pick up group-wise differences, where traditional methods either fail or yield unsatisfactory results.
Beyond this primary application, we show how the framework allows performing cortical surface smoothing in the native space without mapping to a unit sphere.

1 Introduction

Cortical thickness measures the distance between the outer and inner cortical surfaces (see Fig. 1). It is an important biomarker implicated in brain development and disorders [3]. Since 2011, more than 1000 articles (from a search on Google Scholar and/or Pubmed) tie cortical thickness to conditions ranging from Alzheimer’s disease (AD), to Schizophrenia and Traumatic Brain Injury (TBI) [9, 14, 13]. Many of these results show how cortical thickness also correlates with brain growth (and atrophy) during adolescence (and aging) respectively [22, 20, 7]. Given that brain function and pathology manifest strongly as changes in the cortical thickness, the statistical analysis of such data (to find group level differences in clinically disparate populations) plays a central role in structural neuroimaging studies. In typical cortical thickness studies, magnetic resonance images (MRI) are acquired for two populations: clinical and normal. A sequence of image processing steps is performed to segment the cortical surfaces and establish vertex-to-vertex correspondence across surface meshes [15]. Then, a group-level analysis is performed at each vertex. That is, we can ask if there are statistically significant differences in the signal between the two groups. Since there are multiple correlated statistical tests over all voxels, a Bonferroni-type multiple comparisons correction is required [4]. If many vertices survive the correction (i.e., differences are strong enough), the analysis will reveal a set of discriminative cortical surface regions, which may be positively or negatively correlated with the clinical condition of interest. This procedure is well understood and routinely used in practice.
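The per-vertex testing pipeline just described can be sketched generically. The simulated data, group sizes, normal approximation to the t null, and the Benjamini-Hochberg FDR procedure below are our illustrative choices, not details taken from the paper (which uses its own processing pipeline and, later, Hotelling's T² tests).

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n_vert, n_a, n_b = 2000, 50, 50

# ground truth: the first 100 vertices carry a true group difference (thinning)
effect = np.zeros(n_vert)
effect[:100] = 0.8

thick_ctrl = rng.normal(2.5, 0.5, size=(n_a, n_vert))            # controls
thick_clin = rng.normal(2.5, 0.5, size=(n_b, n_vert)) - effect   # clinical group

# two-sample t statistic at every vertex
mean_diff = thick_ctrl.mean(0) - thick_clin.mean(0)
se = np.sqrt(thick_ctrl.var(0, ddof=1) / n_a + thick_clin.var(0, ddof=1) / n_b)
t = mean_diff / se
# two-sided p-value, normal approximation to the t null (fine for n ~ 50)
p = np.array([erfc(abs(v) / sqrt(2)) for v in t])

def benjamini_hochberg(p, q):
    """Boolean mask of tests surviving FDR control at level q."""
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()   # largest rank still under the BH line
        keep[order[:k + 1]] = True
    return keep

survivors = benjamini_hochberg(p, q=0.05)
```

With a strong simulated effect most affected vertices survive; as the effect size shrinks toward pre-clinical levels, fewer and fewer do, which is exactly the regime the paper targets.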
Figure 1: Cortical thickness illustration: the outer cortical surface (in yellow) and the inner cortical surface (in blue). The distance between the two surfaces is the cortical thickness.

In the last five years, a significant majority of research has shifted towards investigations focused on the pre-clinical stages of diseases [16, 23, 17]. For instance, we may be interested in identifying early signs of dementia by analyzing cortical surfaces (e.g., by comparing subjects that carry a certain gene versus those who do not). In this regime, the differences are weaker, and the cortical differences may be too subtle to be detected. In a statistically under-powered cortical thickness analysis, few vertices may survive the multiple comparisons correction. Another aspect that makes this task challenging is that the cortical thickness data (obtained from state-of-the-art tools) is still inherently noisy. The standard approach for filtering cortical surface noise is to adopt an appropriate parameterization to model the signal, followed by a diffusion-type smoothing [6]. The primary difficulty is that most (if not all) widely used parameterizations operate in a spherical coordinate system using spherical harmonic (SPHARM) basis functions [6]. As a result, one must first project the signal on the surface to a unit sphere. This “ballooning” process introduces serious metric distortions. Second, the SPHARM parameterization usually suffers from ringing artifacts (i.e., Gibbs phenomena) when used to fit rapidly changing localized cortical measurements [10]. Third, SPHARM uses global basis functions, which typically require a large number of terms in the expansion to model cortical surface signals to high fidelity. Subsequently, even if the globally-based coefficients exhibit statistical differences, interpreting which brain regions contribute to these variations is difficult.
As a result, the coefficients of the model cannot be used directly to localize variations in the cortical signal. This paper is motivated by the simple observation that statistical inference on surface-based signals should be based not on a single scalar measurement, but on multivariate descriptors that characterize the topologically localized context around each point sample. This view insures against signal noise at individual vertices, and should offer the tools to meaningfully compare the behavior of the signal at multiple resolutions of the topological feature, across multiple subjects. The ability to perform the analysis in a multi-resolution manner, it seems, is addressable if one makes use of wavelet-based methods (e.g., scaleograms [19]). Unfortunately, the non-regular structure of the topology makes this problematic. In our neuroimaging application, samples are not drawn on a regular grid, but are instead governed entirely by the underlying cortical surface mesh of the participant. To get around this difficulty, we make use of some recent results from the harmonic analysis literature [8], which suggest how wavelet analysis can be extended to arbitrary weighted graphs with irregular topology. We show how these ideas can be used to derive a wavelet multi-scale descriptor for statistical analysis of signals defined on surfaces. This framework yields rather surprising improvements in discrimination power and promises immediate benefits for structural neuroimaging studies. Contributions. We derive wavelet-based multi-scale representations of surface-based signals. Our representation has varying levels of local support, and as a result can characterize the local context around a vertex at varying levels of granularity. We show how this facilitates statistical analysis of signals defined on arbitrary topologies (instead of the lattice setup used in image processing).
(i) We show how the new model significantly extends the operating range of analysis of cortical surface signals (such as cortical thickness). At a pre-specified significance level, we can detect a much stronger signal showing group differences that are barely detectable using existing approaches. This is the main experimental result of this paper. (ii) We illustrate how the procedure of smoothing cortical surfaces (and shapes) can completely bypass the mapping onto a sphere, since smoothing can now be performed in the native space.

2 A Brief Review of Wavelets in Signal Processing

Recall that the celebrated Fourier series representation of a periodic function is expressed via a superposition of sines and cosines, which is widely used in signal processing for representing a signal in the frequency domain and obtaining meaningful information from it. Wavelets are conceptually similar to the Fourier series transform, in that they can be used to extract information from many different kinds of data; however, unlike the Fourier transform, which is localized in frequency only, wavelets can be localized in both time and frequency [12], and extend frequency analysis to the notion of scale. The construction of wavelets is defined by a wavelet function ψ (called an analyzing wavelet or a mother wavelet) and a scaling function φ. Here, ψ serves as a band-pass filter and φ operates as a low-pass filter covering the low frequency components of the signal which cannot be tackled by the band-pass filters. When the band-pass filter is transformed back by the inverse transform and translated, it becomes a localized oscillating function with finite duration, providing very compact (local) support in the original domain [21]. This indicates that points in the original domain which are far apart have negligible impact on one another.
Note the contrast with the Fourier series representation of a short pulse, which suffers from issues due to the nonlocal support of sin(·) with infinite duration. Formally, the wavelet function ψ is a function of two parameters, the scale s and the translation a:

ψ_{s,a}(x) = (1/a) ψ((x − a)/s).  (1)

Varying scales control the dilation of the wavelet, and together with the translation parameter, constitute the key building blocks for approximating a signal using a wavelet expansion. The function ψ_{s,a}(x) forms a basis for the signal and can be used with other bases at different scales to decompose a signal, similar to the Fourier transform. The wavelet transform of a signal f(x) is defined as the inner product of the wavelet and the signal:

W_f(s, a) = ⟨f, ψ_{s,a}⟩ = (1/a) ∫ f(x) ψ*((x − a)/s) dx,  (2)

where W_f(s, a) is the wavelet coefficient at scale s and location a, and ψ* denotes the complex conjugate of ψ. Such a transform is invertible, that is,

f(x) = (1/C_ψ) ∬ W_f(s, a) ψ_{s,a}(x) da ds,  (3)

where C_ψ = ∫ (|Ψ(jω)|²/|ω|) dω is called the admissibility condition constant, Ψ is the Fourier transform of the wavelet [21], and ω denotes frequency. As mentioned earlier, the scale parameter s controls the dilation of the basis and can be used to produce both short and long basis functions. While short basis functions correspond to high frequency components and are useful to isolate signal discontinuities, longer basis functions, corresponding to lower frequencies, are also required to obtain detailed frequency analysis. Indeed, wavelet transforms have an infinite set of possible basis functions, unlike the single set of basis functions (sine and cosine) in the Fourier transform. Before concluding this section, we note that while wavelet-based analysis for image processing is a mature field, most of these results are not directly applicable to non-uniform topologies such as those encountered in shape meshes and surfaces in Fig. 1.
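As a small self-contained illustration of (1)-(2), the sketch below discretizes a Ricker (Mexican-hat) mother wavelet (our choice, not the paper's) and evaluates W_f(s, a) for a signal with one sharp bump, showing how the coefficient magnitude localizes the bump in the original domain.

```python
import numpy as np

# a signal with a sharp localized bump at x = 3
x = np.linspace(0.0, 10.0, 2048)
dx = x[1] - x[0]
f = np.exp(-((x - 3.0) ** 2) / 0.02)

def mexican_hat(u):
    """Ricker ("Mexican hat") mother wavelet: a classic band-pass psi."""
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def wavelet_coeff(f, s, a):
    """W_f(s, a) = (1/a) * integral f(x) psi((x - a)/s) dx, cf. eq. (2)."""
    return np.sum(f * mexican_hat((x - a) / s)) * dx / a

# scan translations at a fixed scale: the response localizes around the bump
a_grid = np.linspace(1.0, 9.0, 161)
W = np.array([wavelet_coeff(f, s=0.5, a=a) for a in a_grid])
peak = a_grid[np.argmax(np.abs(W))]
```

Repeating the scan over several scales s gives the familiar time-scale picture; it is exactly this dilation-translation machinery that has no direct analog on an irregular mesh, motivating the next section.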
3 Defining Wavelets on Arbitrary Graphs

Note that the topology of a brain surface is naturally modeled as a weighted graph. However, the application of wavelets to this setting is not straightforward, as wavelets have traditionally been limited to the Euclidean space setting. Extending the notion of wavelets to a non-Euclidean setting, particularly to weighted graphs, requires deriving a multi-scale representation of a function defined on the vertices. The first bottleneck here is to come up with analogs of dilation and translation on the graph. To address this problem, in [8], the authors introduce Diffusion Wavelets on manifolds. The basic idea is related to some well known results from machine learning, especially the eigenmaps framework by Belkin and Niyogi [1]. It also has a strong relationship with random walks on a weighted graph. Briefly, a graph G = (V, E, w) with vertex set V, edge set E and symmetric edge weights w has an associated random walk R. The walk R, when represented as a matrix, is conjugate to a self-adjoint matrix T, which can be interpreted as an operator associated with a diffusion process, explaining how the random walk propagates from one node to another. Higher powers of T (given as T^t) induce a dilation (or scaling) process on the function to which they are applied, and describe the behavior of the diffusion at varying time scales (t). This is equivalent to iteratively performing a random walk for a certain number of steps and collecting together random walks into representatives [8]. Note that the orthonormalization of the columns of T induces the effect of “compression”, and corresponds to downsampling in the function space [5]. In fact, the powers of T are low rank (since the spectrum of T decays), and this ties back to the compressibility behavior of classical wavelets used in image processing applications (e.g., the JPEG standard).
In this way, the formalization in [8] obtains all wavelet-specific properties including dilations, translations, and downsampling.

3.1 Constructing Wavelet Multiscale Descriptors (WMD)

Very recently, [11] showed that while the orthonormalization above is useful for iteratively obtaining compression (i.e., coarser subspaces), it complicates the construction of the transform and only provides limited control over scale selection. These issues are critical in practice, especially when adopting this framework for the analysis of surface meshes with ∼200,000 vertices and a wide spectrum of frequencies (which can benefit from finer control over scale). The solution proposed in [11] discards the repeated application of the diffusion operator T, and instead relies on the graph Laplacian to derive a spectral graph wavelet transform (SGWT). To do this, [11] uses a form of the wavelet operator in the Fourier domain, and generalizes it to graphs. Particularly, SGWT takes the Fourier transform of the graph by using the properties of the Laplacian L (since the eigenvectors of L are analogous to the Fourier basis elements). The formalization is shown to preserve the localization properties at fine scales as well as other wavelet-specific properties. Beyond constructing the transform, operator-valued functions of the Laplacian are very useful for deriving a powerful multiscale shape descriptor, localized at different frequencies, which performs very well in experiments. For a function f(m) defined on a vertex m of a graph, interpreting f(sm) for a scaling constant s is not meaningful on its own. SGWT gets around this problem by operating in the dual domain – by taking the graph Fourier transform. In this scenario, the spectrum of the Laplacian is analogous to the frequency domain, where scales can be defined (seen in (6) later). This provides a multiresolution view of the signal localized at m.
By analyzing the entire spectrum at once, we can obtain a handle on which scale best characterizes the signal of interest. Indeed, for graphs, this provides a mechanism for simultaneously analyzing various local topologically-based contexts around each vertex. And for a specific scale s, we can now construct band-pass filters g in the frequency domain which suppress the influence of scales s′ ≠ s. When transformed back to the original domain, we directly obtain a representation of the signal for that scale. Repeating this process for multiple scales, the set of coefficients obtained for S scales comprises our multiscale descriptor for that vertex. Given a mesh with N vertices, we first obtain the complete orthonormal basis χ_l and eigenvalues λ_l, l ∈ {0, 1, · · · , N − 1}, of the graph Laplacian. Using these bases, the forward and inverse graph Fourier transforms are defined via the eigenvalues and eigenvectors of L as

f̂(l) = ⟨χ_l, f⟩ = Σ_{n=1}^N χ*_l(n) f(n),  and  f(n) = Σ_{l=0}^{N−1} f̂(l) χ_l(n).  (4)

Using the transforms above, we construct spectral graph wavelets by applying band-pass filters at multiple scales and localizing them with an impulse function. Since the transformed impulse function in the frequency domain is equivalent to a unit function, the wavelet ψ localized at vertex n is defined as

ψ_{s,n}(m) = Σ_{l=0}^{N−1} g(sλ_l) χ*_l(n) χ_l(m),  (5)

where m is a vertex index on the graph. The wavelet coefficients of a given function f(n) can be easily generated from the inner product of the wavelets and the given function:

W_f(s, n) = ⟨ψ_{s,n}, f⟩ = Σ_{l=0}^{N−1} g(sλ_l) f̂(l) χ_l(n).  (6)

The coefficients obtained from the transformation yield the Wavelet Multiscale Descriptor (WMD) as a set of wavelet coefficients at each vertex n over the set of scales S:

WMD_f(n) = {W_f(s, n) | s ∈ S}.  (7)

In the following sections, we make use of the multi-scale descriptor for the statistical analysis of signals defined on surfaces (i.e., standard structured meshes).
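The pipeline of (4)-(7) fits in a few lines of NumPy on a toy graph. The path graph, the particular band-pass kernel g, and the scale set below are our illustrative choices; [11] constructs specific kernels and uses a Chebyshev polynomial approximation to avoid the full eigendecomposition done here.

```python
import numpy as np

# toy "mesh": a path graph with N vertices and unit edge weights
N = 40
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # unnormalized graph Laplacian

lam, chi = np.linalg.eigh(L)              # eigenvalues and graph Fourier basis

# a signal on the vertices (smooth trend plus noise)
f = np.sin(np.arange(N) / 4.0) + 0.3 * np.random.default_rng(2).standard_normal(N)
fhat = chi.T @ f                          # graph Fourier transform, eq. (4)

def g(u):
    """A simple band-pass kernel: g(0) = 0, peak at u = 1 (our choice of g)."""
    return u * np.exp(1.0 - u)

scales = [5.0, 2.0, 1.0, 0.5]             # coarse -> fine
# W_f(s, n) = sum_l g(s * lam_l) fhat(l) chi_l(n), eq. (6), for all n at once
W = np.stack([chi @ (g(s * lam) * fhat) for s in scales])
WMD = W.T                                 # eq. (7): one row of S coefficients per vertex
```

Since g(0) = 0, constant signals produce zero coefficients at every scale, so the descriptor responds to local variation of the signal rather than to its mean level.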
We will discuss shortly how many of the low-level operations in obtaining the wavelet coefficients can be expressed as linear algebra primitives that translate well onto the CUDA architecture.

4 Applications of Multiscale Shape Features

In this section, we present extensive experimental results demonstrating the applicability of the descriptors described above. Our core application domain is Neuroimaging. In this context, we first test whether the multi-scale shape descriptors can drive significant improvements in the statistical analysis of cortical surface measurements. Then, we use these ideas to perform smoothing of cortical surface meshes without first projecting them onto a spherical coordinate system (the conventional approach).

4.1 Cortical Thickness Discrimination: Group Analysis for Alzheimer’s disease (AD) studies

As we briefly discussed in Section 1, the identification of group differences between cortical surface signals is based on comparing the distribution of the signal across the two groups at each vertex. This can be done either by using the signal (cortical thickness) obtained from the segmentation directly, or by using a spherical harmonic (SPHARM) or spherical wavelet approach to first parameterize and then smooth the signal, followed by a vertex-wise T-test on the smoothed signal. These spherical approaches change the domain of the data from manifolds to a sphere, introducing distortion. In contrast, our multi-scale descriptor is well defined for characterizing the shape (and the signal) on the native graph domain itself. We employ hypothesis testing using the original cortical thickness and SPHARM as the two baselines for comparison when presenting our experiments below. Data and Pre-processing. We used Magnetic Resonance (MR) images acquired as part of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Our data included brain images from 356 participants: 160 Alzheimer’s disease subjects (AD) and 196 healthy controls (CN). Details of the dataset are given in Table 1.
Table 1: Demographic details and baseline cognitive status measures for the ADNI dataset
Category | AD (mean) | AD (s.d.) | Ctrl (mean) | Ctrl (s.d.)
# of Subjects | 160 | – | 196 | –
Age | 75.53 | 7.41 | 76.09 | 5.13
Gender (M/F) | 86 / 74 | – | 101 / 95 | –
MMSE at Baseline | 21.83 | 5.98 | 28.87 | 3.09
Years of Education | 13.81 | 4.61 | 15.87 | 3.23

This dataset was pre-processed using a standard image processing pipeline, and the Freesurfer algorithm [18] was used to segment the cortical surfaces, calculate the cortical thickness values, and provide vertex-to-vertex correspondences across brain surfaces. The data was then analyzed using our algorithm and the two baseline algorithms mentioned above. We constructed WMDs for each vertex on the template cortical surface at 6 different scales, and used Hotelling's T²-test for group analysis. The same procedure was repeated using the cortical thickness measurements (from Freesurfer) and the smoothed signal obtained from SPHARM. The resulting p-value map was corrected for multiple comparisons over all vertices using the false discovery rate (FDR) method [2]. Fig. 2 summarizes the results of our analysis. The first row corresponds to group analysis using the original cortical thickness values (CT). Here, while we see some discriminative regions, group differences are weak and statistically significant in only a small region. The second row shows results pertaining to SPHARM, which indicate a significant improvement over the baseline, partly due to the effect of noise filtering. Finally, the bottom row in Fig. 2 shows that performing the statistical tests using our multi-scale descriptor gives substantially larger regions with much lower p-values. To further investigate this behavior, we repeated these experiments by making the significance level more conservative. These results (after FDR correction) are shown in Fig. 4. Again, we can directly compare CT, SPHARM and WMD for a different FDR.
A very conservative FDR of q = 10^{-7} was used on the uncorrected p-values from the hypothesis test, and the q-values after the correction were projected back onto the template mesh. Similar to Fig. 2, we see that relative to CT and SPHARM, several more regions (with substantially improved q-values) are recovered using the multi-scale descriptor. To quantitatively compare the behavior above, we evaluated the uncorrected p-values over all vertices and sorted them in increasing order. Recall that any p-value below the FDR threshold is considered significant, and gives q-values. Fig. 3 shows the sorted p-values, where the blue/black dotted lines are the FDR thresholds identifying significant vertices.

Figure 2: Normalized log-scale p-values after FDR correction at q = 10^{-5}, projected back on a brain mesh and displayed. Row 1: Original cortical thickness, Row 2: SPHARM, Row 3: Wavelet Multiscale Descriptor.

Figure 3: Sorted p-values from statistical analysis of sampled vertices from the left hemisphere using cortical thickness (CT), SPHARM, and WMD for FDR q = 10^{-3} (black) and q = 10^{-4} (blue).

As seen in Figs. 2, 3 and 5, the number of significant vertices is far larger for WMD compared to CT and SPHARM. At the FDR 10^{-4} level, there are in total 6,943 (CT), 28,789 (SPHARM) and 40,548 (WMD) significant vertices, showing that WMD finds 51.3% and 17.9% more discriminative vertices than the CT and SPHARM methods. In Fig. 5, we can see the effect of the FDR correction. With FDR set to 10^{-3}, 10^{-5} and 10^{-7}, the number of vertices that survive the correction threshold decreases to 51,929, 28,606 and 13,226 respectively. Finally, we evaluated the regions identified by these tests in the context of their relevance to Alzheimer's disease. We found that the identified regions are those that might be expected to be atrophic in AD.
All three methods identified the anterior entorhinal cortex in the mesial temporal lobe, but at the prespecified threshold, the WMD method was more sensitive to changes in this location as well as in the posterior cingulate, precuneus, lateral parietal lobe, and dorsolateral frontal lobe. These are regions that are commonly implicated in AD, and tie strongly to known results from neuroscience. Remarks. When we compare two clinically different groups of brain subjects at the opposite ends of the disease spectrum (AD versus controls), the tests help identify which brain regions are severely affected. Then, if the analysis of mild AD versus controls reveals the same regions, we know that the new method is indeed picking up the legitimate regions. The ADNI dataset comprises mild (and relatively younger) AD subjects, and the result from our method identifies regions which are known to be affected by AD. Our experiments suggest that for a study where group differences are expected to be weak, WMDs can facilitate identification of important variations which may be missed by the current state of the art, and can improve the statistical power of the experiment. Figure 4: Normalized log scale p-values after FDR correction on the left hemisphere with q = 10^-7 on cortical thickness (left column), SPHARM (middle column), and WMD (right column) respectively, showing both inner and outer sides of the hemisphere. Figure 5: Normalized log scale p-values showing the effect of FDR correction on the template left hemisphere using WMD with FDR q = 10^-3 (left column), q = 10^-5 (middle column) and q = 10^-7 (right column) respectively, showing both inner and outer sides of the hemisphere. 4.2 Cortical Surface Smoothing without Sphere Mapping Existing methods for smoothing cortical surfaces and the signals defined on them, such as spherical harmonics, explicitly represent the cortical surface as a combination of basis functions defined over regular Euclidean spaces.
Such methods have been shown to be quite powerful, but invariably cause information loss due to the spherical mapping. Our goal was to evaluate whether the ideas introduced here can avoid this compromise by being able to represent (and smooth) the signal defined on an arbitrarily shaped mesh using the basis in Section 3.1. A small set of experiments were performed to evaluate this idea. We used wavelets of varying scales to localize the structure of the brain mesh. An inverse wavelet transformation of the resultant function provides the smooth estimate of the cortical surface at various scales. The same process can be applied to the signal defined on the surface as well. Let us rewrite (3) in terms of the graph Fourier basis, $\frac{1}{C_g} \sum_l \left( \int_0^\infty \frac{g^2(s\lambda_l)}{s}\, ds \right) \hat{f}(l)\, \chi_l(m)$, which integrates over the entire range of scales s. Interestingly, in our case, the set of scales directly controls the spatial smoothness of the surface. In contrast, existing methods introduce an additional smoothness parameter (e.g., σ in the case of the heat kernel). Coarser spectral scales overlap less and smooth out higher frequencies. At the finest scale, the complete spectrum is used and recovers the original surface to high fidelity. An optimal choice of scale removes noisy high frequency variations and provides the true underlying signal. Representative examples are shown in Fig. 6, where we illustrate the process of reconstructing the surface of a brain mesh (and the cortical thickness signal) from coarse to finer scales. The final reconstruction of the sample brain surface from the inverse transformation using five scales of wavelets and one scaling function gives a total error of 2.5855 in the x coordinate, 2.2407 in the y coordinate and 2.4594 in the z coordinate respectively over the entire set of 136228 vertices. The combined error of all three coordinates per vertex is 5.346 × 10^-5, which is small. Qualitatively, we found that the results compare favorably with [6, 24] but do not need a transformation to a spherical coordinate system.
Figure 6: Structural smoothing on a brain mesh. Top row: Structural smoothing from coarse to finer scales. Bottom row: Smoothed cortical thickness displayed on the surface. Implementation. Processing large surface meshes with ∼200000 vertices is computationally intensive. A key bottleneck is the diagonalization of the Laplacian, which can be avoided by a clever use of a Chebyshev polynomial approximation method, as suggested by [11]. It turns out that this procedure basically consists of n iterative sparse matrix-vector multiplications and scalar-vector multiplications, where n is the degree of the polynomial. Figure 7: Running times to process a single brain dataset using native MATLAB code, Jacket, and our own implementation. With some manipulations (details in the code release), the processes above translate nicely onto the GPU architecture. Using the cusparse and cublas libraries, we derived a specialized procedure for computing the wavelet transform, which makes heavy use of commodity graphics-card hardware. Fig. 7 provides a comparison of our results to the serial MATLAB implementation and code using the commercial Jacket toolbox, for processing one brain with 166367 vertices over 6 wavelet scales as a function of polynomial degree. We see that a dataset can be processed in less than 10 seconds (even with a high polynomial order) using our implementation. 5 Conclusions We showed that shape descriptors based on multi-scale representations of surface based signals are a powerful tool for performing multivariate analysis of such data at various resolutions. Using a large and well characterized neuroimaging dataset, we showed how the framework improves statistical power in hypothesis testing of cortical thickness signals. We expect that in many cases, this form of analysis can detect group differences where traditional methods fail. This is the primary experimental result of the paper.
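The Chebyshev scheme of [11] mentioned above replaces diagonalization of the Laplacian with n repeated matrix-vector products against a shifted, rescaled operator. A simplified NumPy sketch of applying a spectral kernel g(L) to a signal f this way (function names, the choice of g, and the degree are our own illustration; a production version would use sparse matrices):

```python
import numpy as np

def cheb_coeffs(g, m, lmax):
    """Chebyshev interpolation coefficients of g on [0, lmax] (m + 1 terms)."""
    a = lmax / 2.0
    j = np.arange(m + 1)
    theta = np.pi * (j + 0.5) / (m + 1)          # Chebyshev nodes
    gv = g(a * (np.cos(theta) + 1.0))            # g sampled on [0, lmax]
    return np.array([2.0 / (m + 1) * np.sum(gv * np.cos(k * theta))
                     for k in range(m + 1)])

def cheb_apply(L, f, g, m, lmax):
    """Approximate g(L) f with m matrix-vector products, no eigendecomposition.
    L is shifted/scaled so its spectrum maps into [-1, 1]."""
    a = lmax / 2.0
    c = cheb_coeffs(g, m, lmax)
    t_prev = f                                   # T_0 applied to f
    t_cur = (L @ f) / a - f                      # T_1 of the shifted operator
    out = 0.5 * c[0] * t_prev + c[1] * t_cur
    for k in range(2, m + 1):
        # three-term Chebyshev recurrence: T_k = 2 L' T_{k-1} - T_{k-2}
        t_next = 2.0 * ((L @ t_cur) / a - t_cur) - t_prev
        out = out + c[k] * t_next
        t_prev, t_cur = t_cur, t_next
    return out
```

Each iteration touches only L through one multiply, which is exactly the structure that maps well onto sparse GPU kernels as described in the text.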
We also demonstrated how the idea is applicable to cortical surface smoothing and yields competitive results without a spherical coordinate transformation. The implementation will be publicly distributed as a supplement to our paper. Acknowledgments This research was supported by funding from NIH R01AG040396, NIH R01AG021155, NSF RI 1116584, the Wisconsin Partnership Proposal, UW ADRC, and UW ICTR (1UL1RR025011). The authors are grateful to Lopa Mukherjee for much help in improving the presentation of this paper. References [1] M. Belkin and P. Niyogi. Laplacian Eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003. [2] Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, 57(1):289–300, 1995. [3] R. Brown, N. Colter, and J. Corsellis. Postmortem evidence of structural brain changes in Schizophrenia: differences in brain weight, temporal horn area, and parahippocampal gyrus compared with affective disorder. Arch Gen Psychiatry, 43:36–42, 1986. [4] R. Cabin and R. Mitchell. To Bonferroni or not to Bonferroni: when and how are the questions. Bulletin of the Ecological Society of America, 81(3):246–248, 2000. [5] H. Cheng, Z. Gimbutas, P. G. Martinsson, and V. Rokhlin. On the compression of low rank matrices. SIAM J. Sci. Comput., 26(4):1389–1404, 2005. [6] M. Chung, K. Dalton, S. Li, et al. Weighted Fourier series representation and its application to quantifying the amount of gray matter. Med. Imaging, IEEE Trans. on, 26(4):566–581, 2007. [7] M. Chung, K. Worsley, S. Robbins, et al. Deformation-based surface morphometry applied to gray matter deformation. NeuroImage, 18(2):198–213, 2003. [8] R. Coifman and M. Maggioni. Diffusion wavelets. Applied and Computational Harmonic Analysis, 21(1):53–94, 2006. [9] S. DeKosky and S. Scheff.
Synapse loss in frontal cortex biopsies in Alzheimer's disease: Correlation with cognitive severity. Annals of Neurology, 27(5):457–464, 1990. [10] A. Gelb. The resolution of the Gibbs phenomenon for spherical harmonics. Mathematics of Computation, 66:699–717, 1997. [11] D. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011. [12] S. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. Pattern Analysis and Machine Intelligence, IEEE Trans. on, 11(7):674–693, 1989. [13] T. Merkley, E. Bigler, E. Wilde, et al. Short communication: Diffuse changes in cortical thickness in pediatric moderate-to-severe traumatic brain injury. Journal of Neurotrauma, 25(11):1343–1345, 2008. [14] K. Narr, R. Bilder, A. Toga, et al. Mapping cortical thickness and gray matter concentration in first episode Schizophrenia. Cerebral Cortex, 15(6):708–719, 2005. [15] D. Pachauri, C. Hinrichs, M. Chung, et al. Topology-based kernels with application to inference problems in Alzheimer's disease. Medical Imaging, IEEE Transactions on, 30(10):1760–1770, 2011. [16] S. Peng, J. Wuu, E. Mufson, et al. Precursor form of brain-derived neurotrophic factor and mature brain-derived neurotrophic factor are decreased in the pre-clinical stages of Alzheimer's disease. Journal of Neurochemistry, 93(6):1412–21, 2005. [17] E. Reiman, R. Caselli, L. Yun, et al. Preclinical evidence of Alzheimer's disease in persons homozygous for the ε4 allele for apolipoprotein E. New England Journal of Medicine, 334(12):752–758, 1996. [18] M. Reuter, H. D. Rosas, and B. Fischl. Highly accurate inverse consistent registration: A robust approach. NeuroImage, 53(4):1181–1196, 2010. [19] O. Rioul and M. Vetterli. Wavelets and signal processing. Signal Processing Magazine, 8(4):14–38, 1991. [20] P. Shaw, D. Greenstein, J. Lerch, et al.
Intellectual ability and cortical development in children and adolescents. Nature, 440:676–679, 2006. [21] S. Haykin and B. Van Veen. Signals and Systems, 2nd Edition. Wiley, 2005. [22] E. Sowell, P. Thompson, C. Leonard, et al. Longitudinal mapping of cortical thickness and brain growth in normal children. The Journal of Neuroscience, 24:8223–8231, 2004. [23] R. Sperling, P. Aisen, L. Beckett, et al. Toward defining the preclinical stages of Alzheimer's disease: Recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's and Dementia, 7(3):280–292, 2011. [24] P. Yu, P. Grant, Y. Qi, et al. Cortical surface shape analysis based on spherical wavelets. Med. Imaging, IEEE Trans. on, 26(4):582–597, 2007.
Cardinality Restricted Boltzmann Machines Kevin Swersky Daniel Tarlow Ilya Sutskever Dept. of Computer Science University of Toronto [kswersky,dtarlow,ilya]@cs.toronto.edu Ruslan Salakhutdinov†,‡ Richard S. Zemel† Dept. of Computer Science† and Statistics‡ University of Toronto [rsalakhu,zemel]@cs.toronto.edu Ryan P. Adams School of Eng. and Appl. Sciences Harvard University rpa@seas.harvard.edu Abstract The Restricted Boltzmann Machine (RBM) is a popular density model that is also good for extracting features. A main source of tractability in RBM models is that, given an input, the posterior distribution over hidden variables is factorizable and can be easily computed and sampled from. Sparsity and competition in the hidden representation is beneficial, and while an RBM with competition among its hidden units would acquire some of the attractive properties of sparse coding, such constraints are typically not added, as the resulting posterior over the hidden units seemingly becomes intractable. In this paper we show that a dynamic programming algorithm can be used to implement exact sparsity in the RBM’s hidden units. We also show how to pass derivatives through the resulting posterior marginals, which makes it possible to fine-tune a pre-trained neural network with sparse hidden layers. 1 Introduction The Restricted Boltzmann Machine (RBM) [1, 2] is an important class of probabilistic graphical models. Although it is a capable density estimator, it is most often used as a building block for deep belief networks (DBNs). The benefit of using RBMs as building blocks for a DBN is that they often provide a good initialization for feed-forward neural networks, and they can effectively utilize large amounts of unlabeled data, which has led to success in a variety of application domains [3]. 
Despite the benefits of this approach, there is a disconnect between the unsupervised nature of RBMs and the final discriminative task (e.g., classification) for which the learned features are used. This disconnect has motivated the search for ways to improve task-specific performance, while still retaining the unsupervised nature of the original model [4, 5]. One effective method for improving performance has been the incorporation of sparsity into the learned representation. Approaches that learn and use sparse representations have achieved good results on a number of tasks [6], and in the context of computer vision, sparsity has been linked with learning features that are invariant to local transformations [7]. Sparse features are also often more interpretable than dense representations after unsupervised learning. For directed models, such as sparse coding [8], sparsity can be enforced using a Laplace or spike and slab prior [9]. For undirected models, introducing hard sparsity constraints directly into the energy function often results in non-trivial dependencies between hidden units that make inference intractable. The most common way around this is to encourage sparsity during training by way of a penalty function on the expected conditional hidden unit activations given data [10]. However, this training-time procedure is a heuristic and does not guarantee sparsity at test time. Recently, methods for efficiently dealing with highly structured global interactions within the graphical modeling framework have received considerable interest. One class of these interactions is based on assigning preferences to counts over subsets of binary variables [11, 12]. These are known as cardinality potentials. For example, the softmax distribution can be seen as arising from a cardinality potential that forces exactly one binary variable to be active.
For general potentials over counts, it would seem that the cost of inference would grow exponentially with the number of binary variables. However, efficient algorithms have been proposed that compute exact marginals for many higher-order potentials of interest [12]. For achieving sparsity in RBMs, it turns out that a relatively simple dynamic programming algorithm by Gail et al. [13] contains the key ingredients necessary to make inference and learning efficient. The main idea behind these algorithms is the introduction of auxiliary variables that store cumulative sums in the form of a chain or a tree. In this paper, we show how to combine these higher-order potentials with RBMs by placing a cardinality potential directly over the hidden units to form a Cardinality-RBM (CaRBM) model. This will allow us to obtain genuinely sparse representations, where only a small number of units are allowed to be active. We further show how gradients can be backpropagated through inference using a recently proposed finite-difference method [14]. On a benchmark suite of classification experiments, the CaRBM is competitive with current approaches that do not enforce sparsity at test-time. 2 Background 2.1 Restricted Boltzmann Machines A Restricted Boltzmann Machine is a particular type of Markov random field that has a two-layer architecture, in which the visible, stochastic units v ∈ {0, 1}^{N_v} are connected to hidden stochastic units h ∈ {0, 1}^{N_h}. The probability of the joint configuration {v, h} is given by:

P(v, h) = \frac{1}{Z} \exp(v^\top W h + v^\top b_v + h^\top b_h),   (1)

where Z is the normalizing constant, and {W ∈ R^{N_v × N_h}, b_v ∈ R^{N_v}, b_h ∈ R^{N_h}} are the model parameters, with W representing visible-to-hidden symmetric interaction terms, and b_v, b_h representing visible and hidden biases respectively. The derivative of the log-likelihood with respect to the model parameter W (the derivatives with respect to the bias terms take a similar form) can be obtained from Eq.
1:

\frac{\partial \log P(v; \theta)}{\partial W} = E_{P_{\mathrm{data}}}[v h^\top] - E_{P_{\mathrm{model}}}[v h^\top],   (2)

where E_{P_{\mathrm{data}}}[\cdot] denotes an expectation with respect to the data distribution

P_{\mathrm{data}}(h, v; \theta) = P(h | v; \theta)\, P_{\mathrm{data}}(v),   (3)

where P_{\mathrm{data}}(v) = \frac{1}{N} \sum_n \delta(v - v_n) represents the empirical distribution, and E_{P_{\mathrm{model}}}[\cdot] is an expectation with respect to the distribution defined by the model, as in Eq. 1. Exact maximum likelihood learning in this model is intractable because exact computation of the expectation E_{P_{\mathrm{model}}}[\cdot] takes time that is exponential in the number of visible or hidden units. Instead, learning can be performed by following an approximation to the gradient, the "Contrastive Divergence" (CD) objective [15]. After learning, the hidden units of the RBM can be thought of as features extracted from the input data. Quite often, they are used to initialize a deep belief network (DBN), or they can be used directly as inputs to some other learning system. 2.2 The Sparse RBM (SpRBM) For many challenging tasks, such as object or speech recognition, a desirable property for the hidden variables is to encode the data using sparse representations. That is, given an input vector v, we would like the corresponding distribution P(h|v) to favour sparse configurations. The resulting features are often more interpretable and tend to improve performance of the learning systems that use these features as inputs. On its own, it is highly unlikely that the RBM will produce sparse features. However, suppose we have some desired target expected sparsity ρ. If q_j represents a running average of the hidden unit marginals, q_j = \frac{1}{N} \sum_n P(h_j = 1 | v_n), then we can add the following penalty term to the log-likelihood objective [16]:

\lambda \left( \rho \log q_j + (1 - \rho) \log(1 - q_j) \right),   (4)

where λ represents the strength of the penalty. This penalty is proportional to the negative of the KL divergence between the hidden unit marginals and the target sparsity probability.
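As a concrete illustration of the CD-1 procedure described above, here is a minimal NumPy sketch of one parameter update for a binary-binary RBM, where the model expectation in Eq. 2 is approximated with a single Gibbs reconstruction (the function name, minibatch layout, and learning rate are our own illustrative choices, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, bv, bh, v0, lr=0.1):
    """One CD-1 update on a minibatch v0 (rows are examples)."""
    ph0 = sigmoid(v0 @ W + bh)                  # P(h = 1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sample binary hidden states
    pv1 = sigmoid(h0 @ W.T + bv)                # reconstruction P(v = 1 | h0)
    ph1 = sigmoid(pv1 @ W + bh)                 # hidden probs for reconstruction
    n = v0.shape[0]
    # gradient of Eq. 2, with the model term replaced by the reconstruction term
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
    bv += lr * (v0 - pv1).mean(axis=0)
    bh += lr * (ph0 - ph1).mean(axis=0)
    return W, bv, bh
```

Using probabilities rather than samples for the final statistics (ph1, pv1) is a common variance-reduction choice in CD implementations.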
The derivative with respect to the activity on any case n is proportional to λ(ρ - q_j). Note that this is applied to each hidden unit independently and has the intuitive property of encouraging each hidden unit to activate with proportion ρ across the dataset. If the hidden unit activations are stored in a matrix where each row corresponds to a training example, and each column corresponds to a hidden unit, then this is enforcing sparsity in the columns of the matrix. This is also referred to as lifetime sparsity. When using the SpRBM model, the hope is that each individual example will be encoded by a sparse vector, corresponding to sparsity across the rows, or population sparsity. 3 The Cardinality Potential Consider a distribution of the form

q(x) = \frac{1}{Z}\, \psi\!\left( \sum_{j=1}^{N} x_j \right) \prod_{j=1}^{N} \phi_j(x_j),   (5)

where x is a binary vector and Z is the normalizing constant. This distribution consists of non-interacting terms, with the exception of the ψ(·) potential, which couples all of the variables together. This is a cardinality potential (or "counts potential"), because it depends only on the number of 1's in the vector x, but not on their identity. This distribution is useful for imposing sparsity because it allows us to represent the constraint that the vector x can have at most k elements set to one. There is an efficient exact inference algorithm for computing the normalizing constant and marginals of this distribution. This can be interpreted as a dynamic programming algorithm [13, 17], or as an instance of the sum-product algorithm [18]. We prefer the sum-product interpretation because it makes clear how to compute marginal distributions over binary variables, how to compute marginal distributions over total counts, and how to draw an exact joint sample from the model (pass messages forwards, then sample backwards), and it also lends itself towards extensions. In this view, we create N auxiliary variables z_j ∈ {1, ..., N}.
The auxiliary variables are then deterministically related to the x variables by setting z_j = \sum_{k=1}^{j} x_k, where z_j represents the cumulative sum of the first j binary variables. More formally, consider the following joint distribution ˆq(x, z):

\hat{q}(x, z) = \prod_{j=1}^{N} \phi_j(x_j) \cdot \prod_{j=2}^{N} \gamma(x_j, z_j, z_{j-1}) \cdot \psi(z_N).   (6)

We let γ(x_j, z_j, z_{j-1}) be a deterministic "addition potential", which assigns the value one to any triplet (x, z, z′) satisfying z = x + z′ and zero otherwise. Note that the second product ranges from j = 2, and that z_1 is replaced with x_1. This notation represents the observation that z_j can be computed either as z_j = \sum_{k=1}^{j} x_k, or more simply as z_j = z_{j-1} + x_j. The latter is preferable, because it induces a chain-structured dependency graph amongst the z and x variables. Thus, the distribution ˆq(x, z) has two important properties. First, it is chain-structured, and therefore we can perform exact inference using the sum-product algorithm. By leveraging the fact that at most k units are allowed to be on, the runtime can be made to be O(Nk) by reducing the range of each z_i from {1, ..., N} to {1, ..., k + 1}. Second, the posterior ˆq(z|x) assigns a probability of 1 to the configuration z* that is given by z*_j = \sum_{k=1}^{j} x_k for all j. This is a direct consequence of the sum-potentials γ(·) enforcing the constraint z*_j = x_j + z*_{j-1}. Since z*_N = \sum_{j=1}^{N} x_j, it follows that q(x) = ˆq(x, z*), and since ˆq(z|x) concentrates all of its mass on z*, we obtain:

\hat{q}(x) = \sum_z \hat{q}(x, z) = \sum_z \hat{q}(z|x)\, \hat{q}(x) = \hat{q}(x, z^*) = q(x).   (7)

This shows that q(x) is the marginal distribution of the chain-structured distribution ˆq(x, z). By running the sum-product algorithm on ˆq we can recover the singleton marginals µ_j(x_j), which are also the marginals of q(·). We can likewise sample from q by computing all of the pairwise marginals µ_{j+1,j}(z_{j+1}, z_j), computing the pairwise conditionals µ_{j+1,j}(z_{j+1} | z_j), and sampling each z_j sequentially, given z_{j-1}, to obtain a sample z.
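The chain recursion above is easy to state concretely. The following NumPy sketch (our own illustration; the function name is not from the paper) computes the exact singleton marginals of q(x) for the at-most-k potential in O(Nk) time, using forward and backward partial-sum tables in place of explicit sum-product messages:

```python
import numpy as np

def cardinality_marginals(logits, k):
    """Exact marginals P(x_j = 1) under
    q(x) ∝ exp(sum_j logits_j x_j) * 1[sum_j x_j <= k]."""
    n = len(logits)
    w = np.exp(np.asarray(logits, dtype=float))   # odds φ_j(1)/φ_j(0)
    # fwd[j, c]: unnormalized mass of x_1..x_j configurations with c ones
    fwd = np.zeros((n + 1, k + 1))
    fwd[0, 0] = 1.0
    for j in range(1, n + 1):
        fwd[j, :] = fwd[j - 1, :]
        fwd[j, 1:] += w[j - 1] * fwd[j - 1, :-1]
    # bwd[j, c]: mass of x_{j+1}..x_n configurations with c ones
    bwd = np.zeros((n + 1, k + 1))
    bwd[n, 0] = 1.0
    for j in range(n - 1, -1, -1):
        bwd[j, :] = bwd[j + 1, :]
        bwd[j, 1:] += w[j] * bwd[j + 1, :-1]
    Z = fwd[n].sum()
    # P(x_j = 1) sums over prefix count a and suffix count b with a + 1 + b <= k
    marg = np.zeros(n)
    for j in range(1, n + 1):
        B = np.cumsum(bwd[j, :])
        s = sum(fwd[j - 1, a] * B[k - 1 - a] for a in range(k))
        marg[j - 1] = w[j - 1] * s / Z
    return marg
```

The quadratic-looking inner sum is over at most k terms, so the whole computation stays O(Nk), matching the complexity claim in the text.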
The vector x can be recovered via x_j = z_j - z_{j-1}. The basic idea behind this algorithm is given in [13] and the sum-product interpretation is elaborated upon in [18]. There are many algorithmic extensions, such as performing summations in tree-structured distributions, which allow for more efficient inference with very large N (e.g. N > 1000) using fast Fourier transforms [19, 18]. But in this work we only use the chain-structured distribution ˆq described above with the restriction that there are only k states. 4 The Cardinality RBM (CaRBM) The Cardinality Restricted Boltzmann Machine is defined as follows:

P(v, h) = \frac{1}{Z} \exp(v^\top W h + v^\top b_v + h^\top b_h) \cdot \psi_k\!\left( \sum_{j=1}^{N_h} h_j \right),   (8)

where ψ_k is a potential given by ψ_k(c) = 1 if c ≤ k and 0 otherwise. Observe that the conditional distribution P(h|v) assigns a non-zero probability mass to a vector h only if |h| ≤ k. The cardinality potential implements competition in the hidden layer because now, a data vector v can be explained by at most k hidden units. This form of competition is similar to sparse coding in that there may be many non-sparse configurations that assign high probability to the data, however only sparse configurations are allowed to be used. Unlike sparse coding, however, the CaRBM learning problem involves maximizing the likelihood of the training data, rather than minimizing a reconstruction cost. Using the techniques from the previous section, computing the conditional distribution P(h|v) is tractable, allowing us to use learning algorithms like CD or stochastic maximum likelihood [20]. The conditional distribution P(v|h) is still factorial and easy to sample from. Perhaps the best way to view the effect of the cardinality potential is to consider the case of k = 1 with the further restriction that configurations with 0 active hidden units are disallowed. In this case, the CaRBM reduces to an ordinary RBM with a single multinomial hidden unit.
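The "pass messages forwards, then sample backwards" procedure is exactly what drawing h ~ P(h|v) in the CaRBM requires, with logits given by the total inputs W^⊤v + b_h. A sketch of the exact joint sampler (our own illustrative code, reusing the forward partial-sum table from the marginal computation):

```python
import numpy as np

def sample_cardinality(logits, k, rng):
    """Draw an exact joint sample from q(x) ∝ exp(logits · x) * 1[sum(x) <= k]
    by forward filtering over partial sums, then backward sampling."""
    n = len(logits)
    w = np.exp(np.asarray(logits, dtype=float))
    # forward pass: fwd[j, c] = mass of x_1..x_j configurations with c ones
    fwd = np.zeros((n + 1, k + 1))
    fwd[0, 0] = 1.0
    for j in range(1, n + 1):
        fwd[j, :] = fwd[j - 1, :]
        fwd[j, 1:] += w[j - 1] * fwd[j - 1, :-1]
    # sample the total count z_N, then walk backwards through the chain
    c = rng.choice(k + 1, p=fwd[n] / fwd[n].sum())
    x = np.zeros(n, dtype=int)
    for j in range(n, 0, -1):
        p1 = w[j - 1] * fwd[j - 1, c - 1] if c > 0 else 0.0
        p0 = fwd[j - 1, c]
        if rng.random() < p1 / (p0 + p1):
            x[j - 1] = 1      # x_j = z_j - z_{j-1} = 1
            c -= 1
    return x
```

Because the backward pass conditions on the sampled count at each step, every draw satisfies the cardinality constraint by construction.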
A similar model to the CaRBM is the Boltzmann Perceptron [21], which also introduces a term in the energy function to promote competition between units; however, they do not provide a way to efficiently compute marginals or draw joint samples from P(h|v). Another similar line of work is the Restricted Boltzmann Forest [22], which uses k groups of multinomial hidden units. We should note that the actual marginal probabilities of the hidden units given the visible units are not guaranteed to be sparse, but rather the distribution assigns zero mass to any hidden configuration that is not sparse. In practice though, we find that after learning, the marginal probabilities do tend to have low entropy. Understanding this as a form of regularization is a topic left for future work. 4.1 The Cardinality Marginal Nonlinearity One of the most common ways to use an RBM is to consider it as a pre-training method for a deep belief network [2]. After one or several RBMs are trained in a greedy layer-wise fashion, the network is converted into a deterministic feed-forward neural network that is fine-tuned with the backpropagation algorithm. The fine-tuning step is important for getting the best results with a DBN model [23]. While it is easy to convert a stack of standard RBMs into a feed-forward neural network, turning a stack of CaRBMs into a feed-forward neural network is less obvious, because it is not clear what nonlinearity should be used. Observe that in the case of a standard, binary-binary RBM, the selected nonlinearity is the sigmoid σ(x) ≡ 1/(1 + exp(−x)). We can justify this choice by noticing that it is the expectation of the conditional distribution P(h|v), namely

\sigma(W^\top v + b_h) = E_{P(h|v)}[h],   (9)

where the sigmoid is applied to the vector in an element-wise fashion. In particular, using the conditional expectation as the nonlinearity is a fundamental ingredient in the variational lower bound that justifies the greedy layer-wise procedure [2].
It also appears naturally when the score matching estimator is applied to RBMs over Gaussian-distributed visible units [24, 25]. This justification suggests that for the CaRBM, we should choose a nonlinearity µ(·) which will satisfy the following equality:

\mu(W^\top v + b_h) = E_{P(h|v)}[h],   (10)

where the conditional P(h|v) can be derived from Eq. 8. First note that such a nonlinear function exists, because the distribution P(h|v) is completely determined by the total input W^\top v + b_h. Therefore, the feed-forward neural network that is obtained from a stack of CaRBMs uses a message-passing algorithm to compute the nonlinearity µ(·). We should note that µ depends on k, the number of units that can take on the value 1, but this is a constant that is independent of the input. In practice, we keep k fixed to the k that was used in unsupervised training. To compute gradients for learning the network, it is necessary to "backpropagate" through µ, which is equivalent to multiplying by the Jacobian of µ. Analytic computation of the Jacobian, however, results in an overly expensive O(N^2) algorithm. We also note that it is possible to manually differentiate the computational graph of µ by passing the derivatives back through the sum-product algorithm. While this approach is correct, it is difficult to implement and can be numerically unstable. We propose an alternative approach to multiplying by the Jacobian of µ. Let x = W^\top v + b_h be the total input to the RBM's hidden units; then the Jacobian J(x) is given by:

J(x) = E_{P(h|v)}[h h^\top] - E_{P(h|v)}[h]\, E_{P(h|v)}[h^\top] = E_{P(h|v)}[h h^\top] - \mu(x)\mu(x)^\top.   (11)

We need to multiply by the transpose of the Jacobian from the right, since by the chain rule,

\frac{\partial L}{\partial x} = \left( \frac{\partial \mu}{\partial x} \right)^{\!\top} \frac{\partial L}{\partial \mu} = J(x)^\top \frac{\partial L}{\partial \mu},   (12)

where L is the corresponding loss function. One way to do this is to reuse the sample h ∼ P(h|v) in order to obtain a rank-one unbiased estimate of E_{P(h|v)}[h h^\top], but we found this to be inaccurate. Luckily, Domke [14] makes two critical observations.
First, the Jacobian J(x) is symmetric (see Eq. 11). Second, it is easy to multiply by the Jacobian of any function using numerical differentiation, because multiplication by the Jacobian (without a transpose) is precisely a directional derivative. More formally, let f(x) be any differentiable function and J be its Jacobian. For any vector ℓ, it can be easily verified that:

\lim_{\epsilon \to 0} \frac{f(x + \epsilon \ell) - f(x)}{\epsilon} = \lim_{\epsilon \to 0} \frac{f(x) + \epsilon J \ell + o(\epsilon) - f(x)}{\epsilon} = \lim_{\epsilon \to 0} \left( \frac{o(\epsilon)}{\epsilon} + J \ell \right) = J \ell.   (13)

Since µ is a differentiable function, we can compute J(x)ℓ by a finite difference formula:

J(x)\, \ell \approx \frac{\mu(x + \epsilon \ell) - \mu(x - \epsilon \ell)}{2 \epsilon}.   (14)

Using the symmetry of the Jacobian of µ, we can backpropagate a vector of derivatives ∂L/∂µ using Eq. 14. Of the approaches we tried, we found this approach to provide the best combination of speed and accuracy. 5 Experiments The majority of our experiments were carried out on various binary datasets from Larochelle et al. [26], hence referred to as the Montreal datasets. Each model was trained using the CD-1 algorithm with stochastic gradient descent on mini-batches. For training the SpRBM, we followed the guidelines from Hinton [27]. 5.1 Training CaRBMs One issue when training a model with lateral inhibition is that in the initial learning epochs, a small group of hidden units can learn global features of the data and effectively suppress the other hidden units, leading to "dead units". This effect has been noted before in energy-based models with competition [22]. One option is to augment the log-likelihood with the KL penalty given in Eq. 4. In the SpRBM, this penalty term is used to encourage each hidden unit to be active a small number of times across the training set, which indirectly provides sparsity per-example. In the CaRBM it is used to ensure that each hidden unit is used roughly equally across the training set, while the per-example sparsity is directly controlled.
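The finite-difference Jacobian-vector product of Eq. 14 is essentially a one-liner. The sketch below (our own illustration) tests it against the analytic Jacobian of the softmax, which is the conditional expectation µ for the k = 1, exactly-one-on case mentioned in Section 4; its Jacobian diag(s) − ss^⊤ has exactly the E[hh^⊤] − µµ^⊤ form of Eq. 11:

```python
import numpy as np

def jac_vec_fd(mu, x, ell, eps=1e-5):
    """Central-difference estimate of J(x) @ ell (Eq. 14): two evaluations
    of mu, never forming the Jacobian explicitly."""
    return (mu(x + eps * ell) - mu(x - eps * ell)) / (2.0 * eps)

def softmax(x):
    # µ for a single multinomial (exactly-one-on) hidden group
    e = np.exp(x - np.max(x))
    return e / e.sum()
```

Since J(x) is symmetric (Eq. 11), the same call also computes the J(x)^⊤ ∂L/∂µ product needed for backpropagation in Eq. 12.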
We observed that dead units occurred only with a random initialization of the parameters and that this was no longer an issue once the weights had been properly initialized. In our experiments, we used the KL penalty during unsupervised learning, but not during supervised fine-tuning. A related issue with SpRBMs is that if the KL penalty is set too high then it can create dead examples (examples that activate no hidden units). Note that the KL penalty will not penalize this case as long as the inter-example activations match the target probability ρ. 5.2 Comparing CaRBM with SpRBM Both the CaRBM and SpRBM models attempt to achieve the same goal of sparsity in the hidden unit activations. However, the way in which they accomplish this is fundamentally different. For datasets such as MNIST, we found the two models to give qualitatively similar results. Indeed, this seemed to be the case for several datasets. On the convex dataset, however, we noticed that the models produced quite different results. The convex dataset consists of binary 28 × 28-pixel images of polygons (sometimes with multiple polygons per image). Figure 1 (a) shows several examples from this dataset. Unlike the MNIST dataset, there is a large variation in the number of active pixels in the inputs. Figure 1 (e) shows the distribution of the number of pixels taking the value 1.
However, the SpRBM exhibits a heavy-tailed distribution over activations, with some examples activating over half of the hidden units. By comparison, all inputs activate the maximum number of allowable hidden units in the CaRBM, generating a spike at 10%. Indeed, in the CaRBM, the hidden units suppress each other through competition, while in the SpRBM there is no such direct competition. Figure 1 (b) and (f) display the learned weights. Both models appear to give qualitatively similar results, although the CaRBM weights appear to model slightly more localized features at this level of sparsity. 5.3 Classification Performance To evaluate the classification performance of CaRBMs, we performed a set of experiments on the Montreal datasets. We conducted a random search over hyperparameter settings as recommended by Bergstra & Bengio [28], and set the target sparsity to be between 2.5% and 10%. Table 1 shows that the CarBM and SpRBM achieve comparable performance. On this suite we found that the validation sets were quite small and prone to overfitting. For example, both the SpRBM and CaRBM achieve 0.5% validation error on the rectangles dataset. Interestingly, for the convex dataset, the SpRBM model, chosen by cross-validation, used a weak penalty strength and only achieved a population sparsity of 25%. As we increased the strength of the sparsity penalty, classification performance in the SpRBM degraded, but the desired sparsity level was still not achieved. 5.4 CIFAR-10 Patches We extracted 16 × 16 whitened image patches from the CIFAR-10 dataset [29] and trained both models. Figure 2 (a) shows learned filters of the CaRBM model (both models behave similarly 6 (a) (b) 0.0 0.2 0.5 (c) 0.0 0.3 0.6 (d) 0 400 800 (e) (f) 0.05 0.10 0.15 (g) 0.0 0.5 1.0 (h) Figure 1: (a),(e) Samples from the Convex dataset and the distribution of the number of pixels in each image with the value 1. 
(b),(f) Visualization of the incoming weights to 25 randomly selected hidden units in the SpRBM and CaRBM models respectively. (c),(g) The distribution of the mean lifetime activations (across examples) of the hidden units in the SpRBM and CaRBM respectively. (d),(h) The distribution of the mean population activations (within examples) of the hidden units in the SpRBM and CaRBM respectively.

Table 1: Test-set classification errors on the Montreal datasets.

Dataset            RBM      SpRBM    CaRBM
rectangles         4.05%    2.66%    5.60%
background im      23.78%   23.49%   22.16%
background im rot  58.21%   56.48%   56.39%
rectangles im      24.24%   22.50%   22.56%
convex             20.66%   18.52%   21.13%
mnist basic        4.42%    3.84%    3.65%
mnist rot          14.83%   13.11%   12.40%
background rand    12.96%   12.97%   12.67%

and so we just display the CaRBM weights). Observe that the learned weights resemble Gabor-like filters. These features are often considered to be beneficial for classification when modeling images.

5.5 Topic Modeling with the NIPS Dataset

One form of data with highly variable inputs is text, because some words are used much more frequently than others. We applied the SpRBM and CaRBM to the NIPS dataset², which consists of 13649 words and 1740 papers from NIPS conferences from 1987 to 1999. Each row corresponds to a paper, each column corresponds to a word, and the entries are the number of times each word appears in each paper. We binarized the dataset by truncating the word counts and trained the SpRBM and CaRBM models with 50 hidden units, searching over learning rates and KL penalty strengths until 10% sparsity was achieved without dead units or examples. Once a model is learned, we define a topic for a hidden unit by considering the 5 words with the highest connections to that unit. We conjecture that sparse RBMs should be beneficial in learning interpretable topics because there will be fewer ways for hidden units to collude in order to model a given input.
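The topic-extraction step described above (the 5 words with the highest connections to each hidden unit) can be sketched as follows; the weight-matrix layout (one weight vector over the vocabulary per hidden unit) is a hypothetical choice, and the function name is ours:

```python
def unit_topics(weights, vocab, top_n=5):
    """For each hidden unit, return the `top_n` words with the largest
    incoming weights -- the topic definition used in Section 5.5.

    `weights` is a list of per-unit weight vectors over the vocabulary."""
    topics = []
    for w in weights:
        ranked = sorted(range(len(vocab)), key=lambda j: w[j], reverse=True)
        topics.append([vocab[ranked[j]] for j in range(top_n)])
    return topics
```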
Table 2 shows the result of picking a general topic and finding the closest matching hidden unit from each model. While all models discover meaningful topics, we found that the grouping of words produced by the RBM tended to be less cohesive than those produced by the SpRBM or CaRBM. For example, many of the hidden units contain the words ‘abstract’ and ‘reference’, both of which appear in nearly every paper. Figure 2 (b)-(d) displays the effect that the KL penalty λ has on the population sparsity of the SpRBM. The usable range of λ is fairly narrow: if λ is too small then the desired sparsity level will not be met.

Table 2: Topics learned by each model on the NIPS dataset. Each column corresponds to a chosen topic, and each cell corresponds to a single hidden unit. The hidden unit is chosen as the best match to the given topic from amongst all of the hidden units learned by the model in the row.

Model  | Computer Vision                             | Neuroscience                                         | Bayesian Inference
RBM    | images, pixel, computer, quickly, stanford  | inhibitory, organization, neurons, synaptic, explain | probability, bayesian, priors, likelihood, covariance
SpRBM  | visual, object, objects, images, vision     | neurons, biology, spike, synaptic, realistic         | conditional, probability, bayesian, hidden, mackay
CaRBM  | image, images, pixels, objects, recognition | membrane, resting, inhibitory, physiol, excitatory   | likelihood, hyperparameters, monte, variational, neal

Figure 2: (a) Weights of the CaRBM learned on 16×16 image patches sampled from the CIFAR-10 dataset. (b)-(d) Change in population sparsity with increasing KL penalty λ (λ = 0.1, 0.5, 1) on the NIPS dataset. The SpRBM is sensitive to λ, and can fail to model certain examples if λ is set too high. As it is increased, the lifetime sparsity better matches the target but at the cost of an increasing number of dead examples. This may hurt the generative performance of the SpRBM.

2http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm
6 Conclusion

We have introduced cardinality potentials into the energy function of a Restricted Boltzmann Machine in order to enforce sparsity in the hidden representation. We showed how to use an auxiliary variable representation in order to perform efficient posterior inference and sampling. Furthermore, we showed how the marginal probabilities can be treated as nonlinearities, and how a simple finite-difference trick from Domke [14] can be used to backpropagate through the network. We found that the CaRBM performs similarly to an RBM that has been trained with a sparsity-encouraging regularizer, with the exception being datasets that exhibit a wide range of variability in the number of active inputs (e.g. text), where the SpRBM seems to have difficulty matching the target sparsity. It is possible that this effect may be significant in other kinds of data, such as images with high amounts of lighting variation. There are a number of possible extensions to the CaRBM. For example, the cardinality potentials can be relaxed to encourage sparsity, but not enforce it, and they can be learned along with the other model parameters. It would also be interesting to see if other high order potentials could be used within the RBM framework. Finally, it would be worth exploring the use of the sparse marginal nonlinearity in auto-encoder architectures and in the deeper layers of a deep belief network.

References

[1] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pages 194–281. MIT Press, 1986. [2] G.E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006. [3] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning, 2009. [4] Y.
Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, 2007. [5] J. Snoek, R. P. Adams, and H. Larochelle. Nonparametric guidance of autoencoder representations using label information. Journal of Machine Learning Research, 13:2567–2588, 2012. [6] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image classification. In Computer Vision and Pattern Recognition, 2009. [7] I. Goodfellow, Q. Le, A. Saxe, H. Lee, and A.Y. Ng. Measuring invariances in deep networks. Advances in Neural Information Processing Systems, 2009. [8] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997. [9] I. Goodfellow, A. Courville, and Y. Bengio. Large-scale feature learning with spike-and-slab sparse coding. International Conference on Machine Learning, 2012. [10] H. Lee, C. Ekanadham, and A. Ng. Sparse deep belief net model for visual area V2. Advances in Neural Information Processing Systems, 2007. [11] R. Gupta, A. Diwan, and S. Sarawagi. Efficient inference with cardinality-based clique potentials. In International Conference on Machine Learning, 2007. [12] D. Tarlow, I. Givoni, and R. Zemel. HOP-MAP: Efficient message passing for high order potentials. In Artificial Intelligence and Statistics, 2010. [13] M. H. Gail, J. H. Lubin, and L. V. Rubinstein. Likelihood calculations for matched case-control studies and survival studies with tied death times. Biometrika, 68:703–707, 1981. [14] J. Domke. Implicit differentiation by perturbation. Advances in Neural Information Processing Systems, 2010. [15] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002. [16] V. Nair and G.E. Hinton. 3d object recognition with deep belief nets. Advances in Neural Information Processing Systems, 2009. [17] R. E. 
Barlow and K. D. Heidtmann. Computing k-out-of-n system reliability. IEEE Transactions on Reliability, 33:322–323, 1984. [18] D. Tarlow, K. Swersky, R. Zemel, R.P. Adams, and B. Frey. Fast exact inference for recursive cardinality models. In Uncertainty in Artificial Intelligence, 2012. [19] L. Belfore. An O(n) log2(n) algorithm for computing the reliability of k-out-of-n:G and k-to-l-out-of-n:G systems. IEEE Transactions on Reliability, 44(1), 1995. [20] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In International Conference on Machine Learning, 2008. [21] H.J. Kappen. Deterministic learning rules for Boltzmann machines. Neural Networks, 8(4):537–548, 1995. [22] H. Larochelle, Y. Bengio, and J. Turian. Tractable multivariate binary density estimation and the restricted Boltzmann forest. Neural Computation, 22(9):2285–2307, 2010. [23] G.E. Hinton and R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. [24] K. Swersky, M. Ranzato, D. Buchman, B.M. Marlin, and N. de Freitas. On autoencoders and score matching for energy based models. In International Conference on Machine Learning, 2011. [25] P. Vincent. A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661–1674, 2011. [26] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In International Conference on Machine Learning, 2007. [27] G.E. Hinton. A practical guide to training restricted Boltzmann machines. Technical Report UTML-TR 2010003, Department of Computer Science, University of Toronto, 2010. [28] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13:281–305, 2012. [29] A. Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto, 2009.
Sparse Prediction with the k-Support Norm

Andreas Argyriou, École Centrale Paris, argyrioua@ecp.fr
Rina Foygel, Department of Statistics, Stanford University, rinafb@stanford.edu
Nathan Srebro, Toyota Technological Institute at Chicago, nati@ttic.edu

Abstract

We derive a novel norm that corresponds to the tightest convex relaxation of sparsity combined with an ℓ2 penalty. We show that this new k-support norm provides a tighter relaxation than the elastic net and can thus be advantageous in sparse prediction problems. We also bound the looseness of the elastic net, thus shedding new light on it and providing justification for its use.

1 Introduction

Regularizing with the ℓ1 norm, when we expect a sparse solution to a regression problem, is often justified by ∥w∥_1 being the “convex envelope” of ∥w∥_0 (the number of non-zero coordinates of a vector w ∈ R^d). That is, ∥w∥_1 is the tightest convex lower bound on ∥w∥_0. But we must be careful with this statement: for sparse vectors with large entries, ∥w∥_0 can be small while ∥w∥_1 is large. In order to discuss convex lower bounds on ∥w∥_0, we must impose some scale constraint. A more accurate statement is that ∥w∥_1 ≤ ∥w∥_∞ ∥w∥_0, and so, when the magnitudes of entries in w are bounded by 1, then ∥w∥_1 ≤ ∥w∥_0, and indeed it is the largest such convex lower bound. Viewed as a convex outer relaxation,

  S^(∞)_k := {w : ∥w∥_0 ≤ k, ∥w∥_∞ ≤ 1} ⊆ {w : ∥w∥_1 ≤ k}.

Intersecting the right-hand side with the ℓ∞ unit ball, we get the tightest convex outer bound (convex hull) of S^(∞)_k:

  {w : ∥w∥_1 ≤ k, ∥w∥_∞ ≤ 1} = conv(S^(∞)_k).

However, in our view, this relationship between ∥w∥_1 and ∥w∥_0 yields disappointing learning guarantees, and does not appropriately capture the success of the ℓ1 norm as a surrogate for sparsity.
In particular, the sample complexity¹ of learning a linear predictor with k non-zero entries by empirical risk minimization inside this class (an NP-hard optimization problem) scales as O(k log d), but relaxing to the constraint ∥w∥_1 ≤ k yields a sample complexity which scales as O(k² log d), because the sample complexity of ℓ1-regularized learning scales quadratically with the ℓ1 norm [11, 20]. Perhaps a better reason for the ℓ1 norm being a good surrogate for sparsity is that, not only do we expect the magnitude of each entry of w to be bounded, but we further expect ∥w∥_2 to be small. In a regression setting, with a vector of features x, this can be justified when E[(x^⊤w)²] is bounded (a reasonable assumption) and the features are not too correlated; see, e.g. [15]. More broadly, especially in the presence of correlations, we might require this as a modeling assumption to aid in robustness and generalization. In any case, we have ∥w∥_1 ≤ ∥w∥_2 √(∥w∥_0), and so if we are interested in predictors with bounded ℓ2 norm, we can motivate the ℓ1 norm through the following relaxation of sparsity, where the scale is now set by the ℓ2 norm:

  {w : ∥w∥_0 ≤ k, ∥w∥_2 ≤ B} ⊆ {w : ∥w∥_1 ≤ B√k}.

The sample complexity when using the relaxation now scales as² O(k log d).

Sparse + ℓ2 constraint. Our starting point is then that of combining sparsity and ℓ2 regularization, and learning a sparse predictor with small ℓ2 norm. We are thus interested in classes of the form

  S^(2)_k := {w : ∥w∥_0 ≤ k, ∥w∥_2 ≤ 1}.

As discussed above, the class {w : ∥w∥_1 ≤ √k} (corresponding to the standard Lasso) provides a convex relaxation of S^(2)_k.

¹We define this as the number of observations needed in order to ensure expected prediction error no more than ϵ worse than that of the best k-sparse predictor, for an arbitrary constant ϵ (that is, we suppress the dependence on ϵ and focus on the dependence on the sparsity k and dimensionality d).
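The inequality ∥w∥_1 ≤ ∥w∥_2 √(∥w∥_0) invoked above is Cauchy-Schwarz applied on the support of w; a quick numerical check of it on random sparse vectors (our own sketch, not from the paper):

```python
import math
import random

def norms(w):
    """Return (l1, l2, l0) for a vector given as a list of floats."""
    l1 = sum(abs(x) for x in w)
    l2 = math.sqrt(sum(x * x for x in w))
    l0 = sum(1 for x in w if x != 0)
    return l1, l2, l0

# Check ||w||_1 <= sqrt(||w||_0) * ||w||_2 on random sparse vectors.
random.seed(0)
for _ in range(100):
    w = [random.gauss(0, 1) if random.random() < 0.3 else 0.0 for _ in range(20)]
    l1, l2, l0 = norms(w)
    assert l1 <= math.sqrt(max(l0, 1)) * l2 + 1e-9
```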
But clearly we can get a tighter relaxation by keeping the ℓ2 constraint:

  conv(S^(2)_k) ⊆ {w : ∥w∥_1 ≤ √k, ∥w∥_2 ≤ 1} ⊊ {w : ∥w∥_1 ≤ √k}.   (1)

Constraining (or equivalently, penalizing) both the ℓ1 and ℓ2 norms, as in (1), is known as the “elastic net” [5, 21] and has indeed been advocated as a better alternative to the Lasso. In this paper, we ask whether the elastic net is the tightest convex relaxation to sparsity plus ℓ2 (that is, to S^(2)_k) or whether a tighter, and better, convex relaxation is possible.

A new norm. We consider the convex hull (tightest convex outer bound) of S^(2)_k,

  C_k := conv(S^(2)_k) = conv{w : ∥w∥_0 ≤ k, ∥w∥_2 ≤ 1}.   (2)

We study the gauge function associated with this convex set, that is, the norm whose unit ball is given by (2), which we call the k-support norm. We show that, for k > 1, this is indeed a tighter convex relaxation than the elastic net (that is, both inequalities in (1) are in fact strict inequalities), and is therefore a better convex constraint than the elastic net when seeking a sparse, low ℓ2-norm linear predictor. We thus advocate using it as a replacement for the elastic net. However, we also show that the gap between the elastic net and the k-support norm is at most a factor of √2, corresponding to a factor of two difference in the sample complexity. Thus, our work can also be interpreted as justifying the use of the elastic net, viewing it as a fairly good approximation to the tightest possible convex relaxation of sparsity intersected with an ℓ2 constraint. Still, even a factor of two should not necessarily be ignored and, as we show in our experiments, using the tighter k-support norm can indeed be beneficial. To better understand the k-support norm, we show in Section 2 that it can also be described as the group lasso with overlaps norm [10] corresponding to all (d choose k) subsets of k features.
Despite the exponential number of groups in this description, we show that the k-support norm can be calculated efficiently in time O(d log d) and that its dual is given simply by the ℓ2 norm of the k largest entries. We also provide efficient first-order optimization algorithms for learning with the k-support norm.

Related Work

In many learning problems of interest, Lasso has been observed to shrink too many of the variables of w to zero. In particular, in many applications, when a group of variables is highly correlated, the Lasso may prefer a sparse solution, but we might gain more predictive accuracy by including all the correlated variables in our model. These drawbacks have recently motivated the use of various other regularization methods, such as the elastic net [21], which penalizes the regression coefficients w with a combination of ℓ1 and ℓ2 norms:

  min{ (1/2)∥Xw − y∥² + λ_1 ∥w∥_1 + λ_2 ∥w∥_2² : w ∈ R^d },   (3)

where for a sample of size n, y ∈ R^n is the vector of response values, and X ∈ R^{n×d} is a matrix with column j containing the values of feature j. The elastic net can be viewed as a trade-off between ℓ1 regularization (the Lasso) and ℓ2 regularization (Ridge regression [9]), depending on the relative values of λ_1 and λ_2. In particular, when λ_2 = 0, (3) is equivalent to the Lasso. This method, and the other methods discussed below, have been observed to significantly outperform Lasso in many real applications.

²More precisely, the sample complexity is O(B²k log d), where the dependence on B² is to be expected. Note that if feature vectors are ℓ∞-bounded (i.e. individual features are bounded), the sample complexity when using only ∥w∥_2 ≤ B (without a sparsity or ℓ1 constraint) scales as O(B²d). That is, even after identifying the correct support, we still need a sample complexity that scales with B².
The pairwise elastic net (PEN) [13] is a penalty function that accounts for similarity among features:

  ∥w∥^PEN_R = ∥w∥_2² + ∥w∥_1² − |w|^⊤ R |w|,

where R ∈ [0, 1]^{p×p} is a matrix with R_jk measuring similarity between features X_j and X_k. The trace Lasso [6] is a second method proposed to handle correlations within X, defined by

  ∥w∥^trace_X = ∥X diag(w)∥_*,

where ∥·∥_* denotes the matrix trace-norm (the sum of the singular values) and promotes a low-rank solution. If the features are orthogonal, then both the PEN and the Trace Lasso are equivalent to the Lasso. If the features are all identical, then both penalties are equivalent to Ridge regression (penalizing ∥w∥_2). Another existing penalty is OSCAR [3], given by

  ∥w∥^OSCAR_c = ∥w∥_1 + c Σ_{j<k} max{|w_j|, |w_k|}.

Like the elastic net, each one of these three methods also “prefers” averaging similar features over selecting a single feature.

2 The k-Support Norm

One argument for the elastic net has been the flexibility of tuning the cardinality k of the regression vector w. Thus, when groups of correlated variables are present, a larger k may be learned, which corresponds to a higher λ_2 in (3). A more natural way to obtain such an effect of tuning the cardinality is to consider the convex hull of cardinality k vectors,

  C_k = conv(S^(2)_k) = conv{w ∈ R^d : ∥w∥_0 ≤ k, ∥w∥_2 ≤ 1}.

Clearly the sets C_k are nested, and C_1 and C_d are the unit balls for the ℓ1 and ℓ2 norms, respectively. Consequently we define the k-support norm as the norm whose unit ball equals C_k (the gauge function associated with the C_k ball).³ An equivalent definition is the following variational formula:

Definition 2.1. Let k ∈ {1, …, d}. The k-support norm ∥·∥^sp_k is defined, for every w ∈ R^d, as

  ∥w∥^sp_k := min{ Σ_{I∈G_k} ∥v_I∥_2 : supp(v_I) ⊆ I, Σ_{I∈G_k} v_I = w },

where G_k denotes the set of all subsets of {1, …, d} of cardinality at most k. The equivalence is immediate by rewriting v_I = µ_I z_I in the above definition, where µ_I ≥ 0, z_I ∈ C_k for all I ∈ G_k, and Σ_{I∈G_k} µ_I = 1.
In addition, this immediately implies that ∥·∥^sp_k is indeed a norm. In fact, the k-support norm is equivalent to the norm used by the group lasso with overlaps [10], when the set of overlapping groups is chosen to be G_k (however, the group lasso has traditionally been used for applications with some specific known group structure, unlike the case considered here). Although the variational Definition 2.1 is not amenable to computation because of the exponential growth of the set of groups G_k, the k-support norm is computationally very tractable, with an O(d log d) algorithm described in Section 2.2. As already mentioned, ∥·∥^sp_1 = ∥·∥_1 and ∥·∥^sp_d = ∥·∥_2. The unit ball of this new norm in R³ for k = 2 is depicted in Figure 1. We immediately notice several differences between this unit ball and the elastic net unit ball. For example, at points with cardinality k and ℓ2 norm equal to 1, the k-support norm is not differentiable, but unlike the ℓ1 or elastic-net norm, it is differentiable at points with cardinality less than k. Thus, the k-support norm is less “biased” towards sparse vectors than the elastic net and the ℓ1 norm.

³The gauge function γ_{C_k} : R^d → R ∪ {+∞} is defined as γ_{C_k}(x) = inf{λ ∈ R_+ : x ∈ λC_k}.

Figure 1: Unit ball of the 2-support norm (left) and of the elastic net (right) on R³.

2.1 The Dual Norm

It is interesting and useful to compute the dual of the k-support norm. For w ∈ R^d, denote |w| for the vector of absolute values, and w↓_i for the i-th largest element of w [2]. We have

  ∥u∥^{sp*}_k = max{ ⟨w, u⟩ : ∥w∥^sp_k ≤ 1 } = max{ (Σ_{i∈I} u_i²)^{1/2} : I ∈ G_k } = ( Σ_{i=1}^{k} (|u|↓_i)² )^{1/2} =: ∥u∥^(2)_(k).

This is the ℓ2-norm of the largest k entries in u, and is known as the 2-k symmetric gauge norm [2]. Not surprisingly, this dual norm interpolates between the ℓ2 norm (when k = d and all entries are taken) and the ℓ∞ norm (when k = 1 and only the largest entry is taken). This parallels the interpolation of the k-support norm between the ℓ1 and ℓ2 norms.
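The dual norm has a direct implementation: sort the magnitudes and take the ℓ2 norm of the top k. A sketch (the function name is ours):

```python
import math

def ksupport_dual(u, k):
    """Dual of the k-support norm: the l2 norm of the k largest
    entries of |u| (the 2-k symmetric gauge norm)."""
    mags = sorted((abs(x) for x in u), reverse=True)
    return math.sqrt(sum(x * x for x in mags[:k]))
```

With k = 1 this reduces to the ℓ∞ norm and with k = len(u) to the ℓ2 norm, matching the interpolation noted above.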
2.2 Computation of the Norm

In this section, we derive an alternative formula for the k-support norm, which leads to computation of the value of the norm in O(d log d) steps.

Proposition 2.1. For every w ∈ R^d,

  ∥w∥^sp_k = ( Σ_{i=1}^{k−r−1} (|w|↓_i)² + (1/(r+1)) ( Σ_{i=k−r}^{d} |w|↓_i )² )^{1/2},

where, letting |w|↓_0 denote +∞, r is the unique integer in {0, …, k − 1} satisfying

  |w|↓_{k−r−1} > (1/(r+1)) Σ_{i=k−r}^{d} |w|↓_i ≥ |w|↓_{k−r}.   (4)

This result shows that ∥·∥^sp_k trades off between the ℓ1 and ℓ2 norms in a way that favors sparse vectors but allows for cardinality larger than k. It combines the uniform shrinkage of an ℓ2 penalty for the largest components, with the sparse shrinkage of an ℓ1 penalty for the smallest components.

Proof of Proposition 2.1. We will use the inequality ⟨w, u⟩ ≤ ⟨w↓, u↓⟩ [7]. We have

  (1/2)(∥w∥^sp_k)² = max{ ⟨u, w⟩ − (1/2)(∥u∥^(2)_(k))² : u ∈ R^d }
   = max{ Σ_{i=1}^{d} α_i |w|↓_i − (1/2) Σ_{i=1}^{k} α_i² : α_1 ≥ ⋯ ≥ α_d ≥ 0 }
   = max{ Σ_{i=1}^{k−1} α_i |w|↓_i + α_k Σ_{i=k}^{d} |w|↓_i − (1/2) Σ_{i=1}^{k} α_i² : α_1 ≥ ⋯ ≥ α_k ≥ 0 }.

Let A_r := Σ_{i=k−r}^{d} |w|↓_i for r ∈ {0, …, k − 1}. If A_0 < |w|↓_{k−1} then the solution α is given by α_i = |w|↓_i for i = 1, …, k − 1, and α_i = A_0 for i = k, …, d. If A_0 ≥ |w|↓_{k−1} then the optimal α_k, α_{k−1} lie between |w|↓_{k−1} and A_0, and have to be equal. So, the maximization becomes

  max{ Σ_{i=1}^{k−2} α_i |w|↓_i − (1/2) Σ_{i=1}^{k−2} α_i² + A_1 α_{k−1} − α_{k−1}² : α_1 ≥ ⋯ ≥ α_{k−1} ≥ 0 }.

If A_0 ≥ |w|↓_{k−1} and |w|↓_{k−2} > A_1/2 then the solution is α_i = |w|↓_i for i = 1, …, k − 2, and α_i = A_1/2 for i = k − 1, …, d. Otherwise we proceed as before and continue this process. At stage r the process terminates if A_0 ≥ |w|↓_{k−1}, …, A_{r−1}/r ≥ |w|↓_{k−r}, and A_r/(r+1) < |w|↓_{k−r−1}; all but the last two inequalities are redundant. Hence the condition can be rewritten as (4). One optimal solution is α_i = |w|↓_i for i = 1, …, k − r − 1, and α_i = A_r/(r+1) for i = k − r, …, d. This proves the claim.
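Proposition 2.1 yields an O(d log d) evaluation of the norm. A sketch (a 0-indexed translation of the 1-indexed statement; names are ours): sort the magnitudes, then search for the unique r satisfying (4). For k = 1 the formula reduces to the ℓ1 norm, and for k = d to the ℓ2 norm:

```python
import math

def ksupport_norm(w, k):
    """k-support norm via the closed form in Proposition 2.1."""
    z = sorted((abs(x) for x in w), reverse=True)   # |w| in decreasing order
    for r in range(k):
        ar = sum(z[k - r - 1:])                     # A_r: the d-k+r+1 smallest magnitudes
        left = math.inf if k - r - 1 == 0 else z[k - r - 2]   # |w|_0 := +inf
        if left > ar / (r + 1) >= z[k - r - 1]:     # condition (4)
            head = sum(z[i] ** 2 for i in range(k - r - 1))
            return math.sqrt(head + ar ** 2 / (r + 1))
    raise ValueError("no valid r found; check input")
```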
2.3 Learning with the k-support norm

We thus propose using learning rules with k-support norm regularization. These are appropriate when we would like to learn a sparse predictor that also has low ℓ2 norm, and are especially relevant when features might be correlated (that is, in almost all learning tasks) but the correlation structure is not known in advance. E.g., for squared error regression problems we have:

  min{ (1/2)∥Xw − y∥² + (λ/2)(∥w∥^sp_k)² : w ∈ R^d }   (5)

with λ > 0 a regularization parameter and k ∈ {1, …, d} also a parameter to be tuned. As typical in regularization-based methods, both λ and k can be selected by cross validation [8]. Despite the relationship to S^(2)_k, the parameter k does not necessarily correspond to the sparsity of the actual minimizer of (5), and should be chosen via cross-validation rather than set to the desired sparsity.

3 Relation to the Elastic Net

Recall that the elastic net with penalty parameters λ_1 and λ_2 selects a vector of coefficients given by

  arg min{ (1/2)∥Xw − y∥² + λ_1 ∥w∥_1 + λ_2 ∥w∥_2² }.   (6)

For ease of comparison with the k-support norm, we first show that the set of optimal solutions for the elastic net, when the parameters are varied, is the same as for the norm

  ∥w∥^el_k := max{ ∥w∥_2, ∥w∥_1/√k },

when k ∈ [1, d], corresponding to the unit ball in (1) (note that k is not necessarily an integer). To see this, let ŵ be a solution to (6), and let k := (∥ŵ∥_1/∥ŵ∥_2)² ∈ [1, d]. Now for any w ≠ ŵ, if ∥w∥^el_k ≤ ∥ŵ∥^el_k, then ∥w∥_p ≤ ∥ŵ∥_p for p = 1, 2. Since ŵ is a solution to (6), therefore, ∥Xw − y∥_2² ≥ ∥Xŵ − y∥_2². This proves that, for some constraint parameter B,

  ŵ = arg min{ (1/n)∥Xw − y∥_2² : ∥w∥^el_k ≤ B }.

Like the k-support norm, the elastic net interpolates between the ℓ1 and ℓ2 norms. In fact, when k is an integer, any k-sparse unit vector w ∈ R^d must lie in the unit ball of ∥·∥^el_k. Since the k-support norm gives the convex hull of all k-sparse unit vectors, this immediately implies that

  ∥w∥^el_k ≤ ∥w∥^sp_k  for all w ∈ R^d.
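This containment, and the √2 worst-case gap exhibited next, can be checked numerically. The sketch below uses the construction w = (k^1.5, 1, …, 1) and the dual certificate u from the text; the arithmetic and variable names are ours:

```python
import math

k = 100
w = [k ** 1.5] + [1.0] * (k * k)                       # d = 1 + k^2
u = [1 / math.sqrt(2)] + [1 / math.sqrt(2 * k)] * (k * k)

# ||w||^el_k = max(||w||_2, ||w||_1 / sqrt(k))
el = max(math.sqrt(sum(x * x for x in w)),
         sum(abs(x) for x in w) / math.sqrt(k))

# u is a dual certificate: its 2-k symmetric gauge norm is below 1,
# so <w, u> lower-bounds the k-support norm of w.
topk = sorted((abs(x) for x in u), reverse=True)[:k]
assert math.sqrt(sum(x * x for x in topk)) < 1.0

ratio = sum(a * b for a, b in zip(w, u)) / el          # about 1.286 at k = 100
```

The ratio of the lower bound to the elastic-net norm is √2/(1 + 1/√k), which tends to √2 as k grows.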
The two norms are not equal, however. The difference between the two is illustrated in Figure 1, where we see that the k-support norm is more “rounded”. To see an example where the two norms are not equal, we set d = 1 + k² for some large k, and let w = (k^{1.5}, 1, 1, …, 1)^⊤ ∈ R^d. Then

  ∥w∥^el_k = max{ √(k³ + k²), (k^{1.5} + k²)/√k } = k^{1.5}(1 + 1/√k).

Taking u = (1/√2, 1/√(2k), 1/√(2k), …, 1/√(2k))^⊤, we have ∥u∥^(2)_(k) < 1, and recalling this norm is dual to the k-support norm:

  ∥w∥^sp_k > ⟨w, u⟩ = k^{1.5}/√2 + k² · (1/√(2k)) = √2 · k^{1.5}.

In this example, we see that the two norms can differ by as much as a factor of √2. We now show that this is actually the most by which they can differ.

Proposition 3.1. ∥·∥^el_k ≤ ∥·∥^sp_k < √2 ∥·∥^el_k.

Proof. We show that these bounds hold in the duals of the two norms. First, since ∥·∥^el_k is a maximum over the ℓ1 and ℓ2 norms, its dual is given by

  ∥u∥^{(el)*}_k := inf_{a∈R^d} { ∥a∥_2 + √k · ∥u − a∥_∞ }.

Now take any u ∈ R^d. First we show ∥u∥^(2)_(k) ≤ ∥u∥^{(el)*}_k. Without loss of generality, we take u_1 ≥ ⋯ ≥ u_d ≥ 0. For any a ∈ R^d,

  ∥u∥^(2)_(k) = ∥u_{1:k}∥_2 ≤ ∥a_{1:k}∥_2 + ∥u_{1:k} − a_{1:k}∥_2 ≤ ∥a∥_2 + √k ∥u − a∥_∞.

Finally, we show that ∥u∥^{(el)*}_k < √2 ∥u∥^(2)_(k). Let a = (u_1 − u_{k+1}, …, u_k − u_{k+1}, 0, …, 0)^⊤. Then

  ∥u∥^{(el)*}_k ≤ ∥a∥_2 + √k · ∥u − a∥_∞ = √( Σ_{i=1}^{k} (u_i − u_{k+1})² ) + √k · |u_{k+1}|
   ≤ √( Σ_{i=1}^{k} (u_i² − u_{k+1}²) ) + √(k u_{k+1}²)
   ≤ √2 · √( Σ_{i=1}^{k} (u_i² − u_{k+1}²) + k u_{k+1}² ) = √2 ∥u∥^(2)_(k).

Furthermore, this yields a strict inequality, because if u_1 > u_{k+1}, the next-to-last inequality is strict, while if u_1 = ⋯ = u_{k+1}, then the last inequality is strict.

4 Optimization

Solving the optimization problem (5) efficiently can be done with a first-order proximal algorithm. Proximal methods – see [1, 4, 14, 18, 19] and references therein – are used to solve composite problems of the form min{f(x) + ω(x) : x ∈ R^d}, where the loss function f(x) and the regularizer ω(x) are convex functions, and f is smooth with an L-Lipschitz gradient.
These methods require fast computation of the gradient ∇f and the proximity operator

  prox_ω(x) := argmin{ (1/2)∥u − x∥² + ω(u) : u ∈ R^d }.

To obtain a proximal method for k-support regularization, it suffices to compute the proximity map of g = (1/(2β))(∥·∥^sp_k)², for any β > 0 (in particular, for problem (5) β corresponds to L/λ). This computation can be done in O(d(k + log d)) steps with Algorithm 1.

Algorithm 1 Computation of the proximity operator.
  Input: v ∈ R^d
  Output: q = prox_{(1/(2β))(∥·∥^sp_k)²}(v)
  Find r ∈ {0, …, k−1}, ℓ ∈ {k, …, d} such that, with D := ℓ − k + (β+1)r + β + 1,
    (1/(β+1)) z_{k−r−1} > T_{r,ℓ}/D ≥ (1/(β+1)) z_{k−r}   (7)
    z_ℓ > T_{r,ℓ}/D ≥ z_{ℓ+1}   (8)
  where z := |v|↓, z_0 := +∞, z_{d+1} := −∞, T_{r,ℓ} := Σ_{i=k−r}^{ℓ} z_i
  q_i ← (β/(β+1)) z_i        if i = 1, …, k−r−1
  q_i ← z_i − T_{r,ℓ}/D      if i = k−r, …, ℓ
  q_i ← 0                    if i = ℓ+1, …, d
  Reorder and change signs of q to conform with v

Figure 2: Solutions learned for the synthetic data. Left to right: k-support, Lasso and elastic net.

Proof of Correctness of Algorithm 1. Since the support-norm is sign and permutation invariant, prox_g(v) has the same ordering and signs as v. Hence, without loss of generality, we may assume that v_1 ≥ ⋯ ≥ v_d ≥ 0 and require that q_1 ≥ ⋯ ≥ q_d ≥ 0, which follows from inequality (7) and the fact that z is ordered. Now, q = prox_g(v) is equivalent to βz − βq = βv − βq ∈ ∂(1/2)(∥·∥^sp_k)²(q). It suffices to show that, for w = q, βz − βq is an optimal α in the proof of Proposition 2.1. Indeed, A_r corresponds to

  Σ_{i=k−r}^{d} q_i = Σ_{i=k−r}^{ℓ} ( z_i − T_{r,ℓ}/D ) = T_{r,ℓ} − (ℓ − k + r + 1) T_{r,ℓ}/D = (r+1)β T_{r,ℓ}/D,

and (4) is equivalent to condition (7). For i ≤ k − r − 1, we have βz_i − βq_i = q_i. For k − r ≤ i ≤ ℓ, we have βz_i − βq_i = (1/(r+1)) A_r. For i ≥ ℓ + 1, since q_i = 0, we only need βz_i − βq_i ≤ (1/(r+1)) A_r, which is true by (8).
We can now apply a standard accelerated proximal method, such as FISTA [1], to (5), at each iteration using the gradient of the loss and performing a prox step using Algorithm 1. The FISTA guarantee ensures us that, with appropriate step sizes, after T such iterations, we have:

  (1/2)∥Xw_T − y∥² + (λ/2)(∥w_T∥^sp_k)² ≤ (1/2)∥Xw* − y∥² + (λ/2)(∥w*∥^sp_k)² + 2L∥w* − w_1∥²/(T + 1)².

5 Empirical Comparisons

Our theoretical analysis indicates that the k-support norm and the elastic net differ by at most a factor of √2, corresponding to at most a factor of two difference in their sample complexities and generalization guarantees. We thus do not expect huge differences between their actual performances, but would still like to see whether the tighter relaxation of the k-support norm does yield some gains.

Synthetic Data. For the first simulation we follow [21, Sec. 5, example 4]. In this experimental protocol, the target (oracle) vector equals w* = (3, …, 3, 0, …, 0), with 15 entries equal to 3 followed by 25 zeros, and y = (w*)^⊤x + N(0, 1). The input data X were generated from a normal distribution such that components 1, …, 5 have the same random mean Z_1 ∼ N(0, 1), components 6, …, 10 have mean Z_2 ∼ N(0, 1) and components 11, …, 15 have mean Z_3 ∼ N(0, 1). A total of 50 data sets were created in this way, each containing 50 training points, 50 validation points and 350 test points. The goal is to achieve good prediction performance on the test data. We compared the k-support norm with Lasso and the elastic net. We considered the range k ∈ {1, …, d} for k-support norm regularization, λ = 10^i, i ∈ {−15, …, 5}, for the regularization parameter of Lasso and k-support regularization, and the same range for the λ_1, λ_2 of the elastic net. For each method, the optimal set of parameters was selected based on mean squared error on the validation set.
The error reported in Table 1 is the mean squared error with respect to the oracle w*, namely MSE = (ŵ − w*)^⊤ V (ŵ − w*), where V is the population covariance matrix of X_test. To further illustrate the effect of the k-support norm, in Figure 2 we show the coefficients learned by each method, in absolute value. For each image, one row corresponds to the w learned for one of the 50 data sets. Whereas all three methods distinguish the 15 relevant variables, the elastic net result varies less within these variables.

South African Heart Data. This is a classification task which has been used in [8]. There are 9 variables and 462 examples, and the response is presence/absence of coronary heart disease. We normalized the data so that each predictor variable has zero mean and unit variance. We then split the data 50 times randomly into training, validation, and test sets of sizes 400, 30, and 32 respectively. For each method, parameters were selected using the validation data. In Table 1, we report the MSE and accuracy of each method on the test data. We observe that all three methods have identical performance.

Table 1: Mean squared errors and classification accuracy for the synthetic data (median over 50 repetitions), SA heart data (median over 50 replications) and for the “20 newsgroups” data set. (SE = standard error)

Method       | Synthetic MSE (SE) | Heart MSE (SE) | Heart Accuracy (SE) | Newsgroups MSE | Newsgroups Accuracy
Lasso        | 0.2685 (0.02)      | 0.18 (0.005)   | 66.41 (0.53)        | 0.70           | 73.02
Elastic net  | 0.2274 (0.02)      | 0.18 (0.005)   | 66.41 (0.53)        | 0.70           | 73.02
k-support    | 0.2143 (0.02)      | 0.18 (0.005)   | 66.41 (0.53)        | 0.69           | 73.40

20 Newsgroups. This is a binary classification version of 20 newsgroups created in [12] which can be found in the LIBSVM data repository.⁴ The positive class consists of the 10 groups with names of form sci.*, comp.*, or misc.forsale and the negative class consists of the other 10 groups. To reduce the number of features, we removed the words which appear in less than 3 documents.
We randomly split the data into a training, a validation and a test set of sizes 14000, 1000 and 4996, respectively. We report MSE and accuracy on the test data in Table 1. We found that k-support regularization gave improved prediction accuracy over both other methods.5

6 Summary

We introduced the k-support norm as the tightest convex relaxation of sparsity plus ℓ2 regularization, and showed that it is tighter than the elastic net by exactly a factor of √2. In our view, this sheds light on the elastic net as a close approximation to this tightest possible convex relaxation, and motivates using the k-support norm when a tighter relaxation is sought. This is also demonstrated in our empirical results.

We note that the k-support norm has better prediction properties, but not necessarily better sparsity-inducing properties, as evident from its more rounded unit ball. It is well understood that there is often a tradeoff between sparsity and good prediction, and that even if the population optimal predictor is sparse, a denser predictor often yields better predictive performance [3, 10, 21]. For example, in the presence of correlated features, it is often beneficial to include several highly correlated features rather than a single representative feature. This is exactly the behavior encouraged by ℓ2 norm regularization, and the elastic net is already known to yield less sparse (but more predictive) solutions. The k-support norm goes a step further in this direction, often yielding solutions that are even less sparse (but more predictive) compared to the elastic net. Nevertheless, it is interesting to consider whether compressed sensing results, where ℓ1 regularization is of course central, can be refined by using the k-support norm, which might be able to handle more correlation structure within the set of features.
Acknowledgements

The construction showing that the gap between the elastic net and the k-overlap norm can be as large as √2 is due to joint work with Ohad Shamir. Rina Foygel was supported by NSF grant DMS-1203762.

4 http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
5 Regarding other sparse prediction methods, we did not manage to compare with OSCAR, due to memory limitations, or to PEN or trace Lasso, which do not have code available online.

References

[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal of Imaging Sciences, 2(1):183–202, 2009.
[2] R. Bhatia. Matrix Analysis. Graduate Texts in Mathematics. Springer, 1997.
[3] H.D. Bondell and B.J. Reich. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics, 64(1):115–123, 2008.
[4] P.L. Combettes and V.R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168–1200, 2006.
[5] C. De Mol, E. De Vito, and L. Rosasco. Elastic-net regularization in learning theory. Journal of Complexity, 25(2):201–230, 2009.
[6] E. Grave, G. R. Obozinski, and F. Bach. Trace lasso: a trace norm regularization for correlated designs. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, 2011.
[7] G. H. Hardy, J. E. Littlewood, and G. Pólya. Inequalities. Cambridge University Press, 1934.
[8] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer Verlag Series in Statistics, 2001.
[9] A.E. Hoerl and R.W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, pages 55–67, 1970.
[10] L. Jacob, G. Obozinski, and J.P. Vert. Group Lasso with overlap and graph Lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433–440.
ACM, 2009.
[11] S.M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Advances in Neural Information Processing Systems, volume 22, 2008.
[12] S. S. Keerthi and D. DeCoste. A modified finite Newton method for fast solution of large scale linear SVMs. Journal of Machine Learning Research, 6:341–361, 2005.
[13] A. Lorbert, D. Eis, V. Kostina, D.M. Blei, and P.J. Ramadge. Exploiting covariate similarity in sparse regression via the pairwise elastic net. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010.
[14] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE, 2007.
[15] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low-noise and fast rates. In Advances in Neural Information Processing Systems 23, 2010.
[16] T. Suzuki and R. Tomioka. SpicyMKL: a fast algorithm for multiple kernel learning with thousands of kernels. Machine Learning, pages 1–32, 2011.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 58(1):267–288, 1996.
[18] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Preprint, 2008.
[19] P. Tseng. Approximation accuracy, gradient methods, and error bound for structured convex optimization. Mathematical Programming, 125(2):263–295, 2010.
[20] T. Zhang. Covering number bounds of certain regularized linear function classes. The Journal of Machine Learning Research, 2:527–550, 2002.
[21] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.
Proximal Newton-type methods for convex optimization

Jason D. Lee* and Yuekai Sun*
Institute for Computational and Mathematical Engineering
Stanford University, Stanford, CA
{jdl17,yuekai}@stanford.edu

Michael A. Saunders
Department of Management Science and Engineering
Stanford University, Stanford, CA
saunders@stanford.edu

Abstract

We seek to solve convex optimization problems in composite form:

$$\min_{x \in \mathbb{R}^n} \; f(x) := g(x) + h(x),$$

where g is convex and continuously differentiable and $h : \mathbb{R}^n \to \mathbb{R}$ is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. We prove such methods are globally convergent and achieve superlinear rates of convergence in the vicinity of an optimal solution. We also demonstrate the performance of these methods using problems of relevance in machine learning and statistics.

1 Introduction

Many problems of relevance in machine learning, signal processing, and high dimensional statistics can be posed in composite form:

$$\min_{x \in \mathbb{R}^n} \; f(x) := g(x) + h(x), \qquad (1)$$

where $g : \mathbb{R}^n \to \mathbb{R}$ is a convex, continuously differentiable loss function, and $h : \mathbb{R}^n \to \mathbb{R}$ is a convex, continuous, but not necessarily differentiable penalty function. Such problems include: (i) the lasso [23], (ii) multitask learning [14], and (iii) trace-norm matrix completion [6]. We describe a family of Newton-type methods tailored to these problems that achieve superlinear rates of convergence subject to standard assumptions. These methods can be interpreted as generalizations of the classic proximal gradient method that use the curvature of the objective function to select a search direction.

1.1 First-order methods

The most popular methods for solving convex optimization problems in composite form are first-order methods that use proximal mappings to handle the nonsmooth part.
* Equal contributors

SpaRSA is a generalized spectral projected gradient method that uses a spectral step length together with a nonmonotone line search to improve convergence [24]. TRIP by Kim et al. also uses a spectral step length but selects search directions using a trust-region strategy [12]. TRIP performs comparably with SpaRSA and the projected Newton-type methods we describe later. A closely related family of methods is the set of optimal first-order methods, also called accelerated first-order methods, which achieve ϵ-suboptimality within $O(1/\sqrt{\epsilon})$ iterations [22]. The two most popular methods in this family are Auslender and Teboulle's method [1] and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) by Beck and Teboulle [2]. These methods have been implemented in the software package TFOCS and used to solve problems that commonly arise in statistics, machine learning, and signal processing [3].

1.2 Newton-type methods

There are three classes of methods that generalize Newton-type methods to handle nonsmooth objective functions. The first are projected Newton-type methods for constrained optimization [20]. Such methods cannot handle nonsmooth objective functions; they tackle problems in composite form via constraints of the form h(x) ≤ τ. PQN is an implementation that uses a limited-memory quasi-Newton update and has both excellent empirical performance and theoretical properties [19, 18]. The second class of methods, by Yu et al. [25], uses a local quadratic approximation to the smooth part of the form

$$Q(d) := f(x) + \sup_{g \in \partial f(x)} g^T d + \frac{1}{2} d^T H d,$$

where ∂f(x) denotes the subdifferential of f at x. These methods achieve state-of-the-art performance on many problems of relevance, such as ℓ1-regularized logistic regression and ℓ2-regularized support vector machines. This paper focuses on proximal Newton-type methods that were previously studied in [16, 18] and are closely related to the methods of Fukushima and Mine [10] and Tseng and Yun [21].
Both use search directions ∆x that are solutions to subproblems of the form

$$\min_{d} \; \nabla g(x)^T d + \frac{1}{2} d^T H d + h(x + d),$$

where H is a positive definite matrix that approximates the Hessian $\nabla^2 g(x)$. Fukushima and Mine choose H to be a multiple of the identity, while Tseng and Yun set some components of the search direction ∆x to be zero to obtain a (block) coordinate descent direction. Proximal Newton-type methods were first studied empirically by Mark Schmidt in his Ph.D. thesis [18]. The methods GLMNET [9] (ℓ1-regularized regression), LIBLINEAR [26] (ℓ1-regularized classification), and QUIC and recent work by Olsen et al. [11, 15] (sparse inverse covariance estimation) are special cases of proximal Newton-type methods. These methods are considered state-of-the-art for their specific applications, often outperforming generic methods by orders of magnitude. QUIC and LIBLINEAR also achieve a quadratic rate of convergence, although these results rely crucially on the structure of the ℓ1 norm and do not generalize to generic nonsmooth regularizers. The quasi-Newton splitting method developed by Becker and Fadili is equivalent to a proximal quasi-Newton method with a rank-one Hessian approximation [4]. In this case, they can solve the subproblem via the solution of a single-variable root finding problem, making their method significantly more efficient than a generic proximal Newton-type method. The methods described in this paper are a special case of cost approximation (CA), a class of methods developed by Patriksson [16]. CA requires a CA function ϕ and selects search directions via subproblems of the form

$$\min_{d} \; g(x) + \phi(x + d) - \phi(x) + h(x + d) - \nabla g(x)^T d.$$

Cost approximation attains a linear convergence rate. Our methods are equivalent to using the CA function $\phi(x) := \frac{1}{2} x^T H x$. We refer to [16] for details about cost approximation and its convergence analysis.
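Throughout this family of methods, the canonical example of an h whose proximal mapping "can be evaluated efficiently" is the ℓ1 norm. As a standalone illustration (this snippet is ours, not the paper's), its prox is elementwise soft-thresholding, a standard closed form:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal mapping of t*||.||_1, i.e. argmin_y t*||y||_1 + 0.5*||y - v||^2.
    The closed form is elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# The prox is a cheap elementwise operation, even though ||.||_1 is nondifferentiable.
v = np.array([3.0, -0.5, 0.2, -2.0])
shrunk = prox_l1(v, 1.0)
```

Entries with magnitude below the threshold are set exactly to zero, which is what makes ℓ1-regularized subproblems produce sparse iterates.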
2 Proximal Newton-type methods

We seek to solve convex optimization problems in composite form:

$$\min_{x \in \mathbb{R}^n} \; f(x) := g(x) + h(x). \qquad (2)$$

We assume $g : \mathbb{R}^n \to \mathbb{R}$ is a closed, proper convex, continuously differentiable function, and its gradient ∇g is Lipschitz continuous with constant $L_1$; i.e.

$$\|\nabla g(x) - \nabla g(y)\| \leq L_1 \|x - y\|$$

for all x and y in $\mathbb{R}^n$. $h : \mathbb{R}^n \to \mathbb{R}$ is a closed and proper convex but not necessarily everywhere differentiable function whose proximal mapping can be evaluated efficiently. We also assume the optimal value, $f^\star$, is attained at some optimal solution $x^\star$, not necessarily unique.

2.1 The proximal gradient method

The proximal mapping of a convex function h at x is

$$\mathrm{prox}_h(x) = \arg\min_y \; h(y) + \frac{1}{2}\|y - x\|^2.$$

Proximal mappings can be interpreted as generalized projections because if h is the indicator function of a convex set, then $\mathrm{prox}_h(x)$ is the projection of x onto the set. The classic proximal gradient method for composite optimization uses proximal mappings to handle the nonsmooth part of the objective function and can be interpreted as minimizing the nonsmooth function h plus a simple quadratic approximation to the smooth function g during every iteration:

$$x_{k+1} = \mathrm{prox}_{t_k h}\left(x_k - t_k \nabla g(x_k)\right) = \arg\min_y \; \nabla g(x_k)^T (y - x_k) + \frac{1}{2 t_k}\|y - x_k\|^2 + h(y),$$

where $t_k$ denotes the k-th step length. We can also interpret the proximal gradient step as a generalized gradient step

$$G_f(x) = \mathrm{prox}_h(x - \nabla g(x)) - x. \qquad (3)$$

$G_f(x) = 0$ if and only if x minimizes f, so $\|G_f(x)\|$ generalizes the smooth first-order measure of optimality $\|\nabla f(x)\|$. Many state-of-the-art methods for problems in composite form, such as SpaRSA and the optimal first-order methods, are variants of this method. Our method uses a Newton-type approximation in lieu of the simple quadratic to achieve faster convergence.

2.2 The proximal Newton iteration

Definition 2.1 (Scaled proximal mappings). Let h be a convex function and H a positive definite matrix.
Then the scaled proximal mapping of h at x is defined to be

$$\mathrm{prox}^H_h(x) := \arg\min_y \; h(y) + \frac{1}{2}\|y - x\|^2_H. \qquad (4)$$

Proximal Newton-type methods use the iteration

$$x_{k+1} = x_k + t_k \Delta x_k, \qquad (5)$$
$$\Delta x_k := \mathrm{prox}^{H_k}_h\left(x_k - H_k^{-1} \nabla g(x_k)\right) - x_k, \qquad (6)$$

where $t_k > 0$ is the k-th step length, usually determined using a line search procedure, and $H_k$ is an approximation to the Hessian of g at $x_k$. We can interpret the search direction $\Delta x_k$ as a step to the minimizer of the nonsmooth function h plus a local quadratic approximation to g because

$$\mathrm{prox}^{H_k}_h\left(x_k - H_k^{-1}\nabla g(x_k)\right) = \arg\min_y \; h(y) + \frac{1}{2}\left\|(y - x_k) + H_k^{-1}\nabla g(x_k)\right\|^2_{H_k}$$
$$= \arg\min_y \; \nabla g(x_k)^T (y - x_k) + \frac{1}{2}(y - x_k)^T H_k (y - x_k) + h(y). \qquad (7)$$

Hence, the search direction solves the subproblem

$$\Delta x_k = \arg\min_d \; \nabla g(x_k)^T d + \frac{1}{2} d^T H_k d + h(x_k + d) = \arg\min_d \; Q_k(d) + h(x_k + d).$$

To simplify notation, we shall drop the subscripts and say $x_+ = x + t\Delta x$ in lieu of $x_{k+1} = x_k + t_k \Delta x_k$ when discussing a single iteration.

Lemma 2.2 (Search direction properties). If H is a positive definite matrix, then the search direction $\Delta x = \arg\min_d Q(d) + h(x + d)$ satisfies:

$$f(x_+) \leq f(x) + t\left(\nabla g(x)^T \Delta x + h(x + \Delta x) - h(x)\right) + O(t^2), \qquad (8)$$
$$\nabla g(x)^T \Delta x + h(x + \Delta x) - h(x) \leq -\Delta x^T H \Delta x. \qquad (9)$$

Lemma 2.2 implies the search direction is a descent direction for f because we can substitute (9) into (8) to obtain

$$f(x_+) \leq f(x) - t \Delta x^T H \Delta x + O(t^2).$$

We use a quasi-Newton approximation to the Hessian and a first-order method to solve the subproblem for a search direction, although the user is free to use a method of his or her choice. Empirically, we find that inexact solutions to the subproblem yield viable descent directions. We use a backtracking line search to select a step length t that satisfies a sufficient descent condition:

$$f(x_+) \leq f(x) + \alpha t \Delta, \qquad (10)$$
$$\Delta := \nabla g(x)^T \Delta x + h(x + \Delta x) - h(x), \qquad (11)$$

where $\alpha \in (0, 0.5)$. This sufficient descent condition is motivated by our convergence analysis but it also seems to perform well in practice.

Lemma 2.3 (Step length conditions).
Suppose $H \succeq mI$ for some $m > 0$ and ∇g is Lipschitz continuous with constant $L_1$. Then any step length

$$t \leq \min\left\{1, \; \frac{2m}{L_1}(1 - \alpha)\right\} \qquad (12)$$

satisfies the sufficient descent condition (10).

Algorithm 1 A generic proximal Newton-type method
Require: $x_0$ in dom f
1: repeat
2:   Update $H_k$ using a quasi-Newton update rule
3:   $z_k \leftarrow \mathrm{prox}^{H_k}_h\left(x_k - H_k^{-1}\nabla g(x_k)\right)$
4:   $\Delta x_k \leftarrow z_k - x_k$
5:   Conduct backtracking line search to select $t_k$
6:   $x_{k+1} \leftarrow x_k + t_k \Delta x_k$
7: until stopping conditions are satisfied

3 Convergence analysis

3.1 Global convergence

We assume our Hessian approximations are sufficiently positive definite; i.e. $H_k \succeq mI$, $k = 1, 2, \dots$ for some $m > 0$. This assumption guarantees the existence of step lengths that satisfy the sufficient decrease condition.

Lemma 3.1 (First-order optimality conditions). Suppose H is a positive definite matrix. Then x is a minimizer of f if and only if the search direction is zero at x; i.e.

$$0 = \arg\min_d \; Q(d) + h(x + d).$$

The global convergence of proximal Newton-type methods results from the fact that the search directions are descent directions and, if our Hessian approximations are sufficiently positive definite, the step lengths are bounded away from zero.

Theorem 3.2 (Global convergence). Suppose $H_k \succeq mI$, $k = 1, 2, \dots$ for some $m > 0$. Then the sequence $\{x_k\}$ generated by a proximal Newton-type method converges to a minimizer of f.

3.2 Convergence rate

If g is twice-continuously differentiable and we use the second-order Taylor approximation as our local quadratic approximation to g, then we can prove $\{x_k\}$ converges Q-quadratically to the optimal solution $x^\star$. We assume in a neighborhood of $x^\star$: (i) g is strongly convex with constant m, i.e. $\nabla^2 g(x) \succeq mI$ for $x \in N_\epsilon(x^\star)$, where $N_\epsilon(x^\star) := \{x \mid \|x - x^\star\| \leq \epsilon\}$; and (ii) $\nabla^2 g$ is Lipschitz continuous with constant $L_2$. This convergence analysis is similar to that of Fukushima and Mine [10] and Patriksson [16].
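Algorithm 1 can be sketched compactly. The following is our own minimal illustration, not the PNOPT implementation: we use exact Hessians instead of quasi-Newton updates, take h = λ‖·‖1, and solve the scaled-prox subproblem inexactly by inner proximal gradient iterations (the scaled prox has no closed form for general H). The backtracking loop enforces the sufficient descent condition (10)-(11).

```python
import numpy as np

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def solve_subproblem(grad, H, x, lam, inner_iters=500):
    """Approximately solve  min_d grad^T d + 0.5 d^T H d + lam*||x + d||_1
    by proximal gradient in u = x + d (the scaled prox step of Algorithm 1)."""
    u = x.copy()
    step = 1.0 / np.linalg.eigvalsh(H).max()
    for _ in range(inner_iters):
        u = prox_l1(u - step * (grad + H @ (u - x)), step * lam)
    return u - x

def proximal_newton(g, grad_g, hess_g, x0, lam, alpha=1e-4, iters=10):
    """Sketch of Algorithm 1 with exact Hessians and backtracking on (10)-(11)."""
    f = lambda z: g(z) + lam * np.abs(z).sum()
    x = x0.copy()
    for _ in range(iters):
        dx = solve_subproblem(grad_g(x), hess_g(x), x, lam)
        # Delta from (11): grad^T dx + h(x + dx) - h(x)
        delta = grad_g(x) @ dx + lam * (np.abs(x + dx).sum() - np.abs(x).sum())
        t = 1.0
        while f(x + t * dx) > f(x) + alpha * t * delta and t > 1e-12:
            t *= 0.5
        x = x + t * dx
    return x

# Lasso instance: g(x) = 0.5*||Ax - b||^2, so the Hessian is the constant A^T A.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 8))
b = A @ np.array([2.0, -1.0, 0, 0, 0, 0, 0, 0]) + 0.05 * rng.standard_normal(30)
g = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_g = lambda x: A.T @ (A @ x - b)
hess_g = lambda x: A.T @ A
x_hat = proximal_newton(g, grad_g, hess_g, np.zeros(8), lam=0.5)
```

Because g is quadratic here, one outer iteration with an accurate subproblem solve essentially reaches the optimum; the point of the sketch is the structure of the outer loop, which is unchanged for general smooth g.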
First, we state two lemmas: (i) one that says step lengths of unity satisfy the sufficient descent condition after sufficiently many iterations, and (ii) one that says the backward step is nonexpansive.

Lemma 3.3. Suppose (i) $\nabla^2 g \succeq mI$ and (ii) $\nabla^2 g$ is Lipschitz continuous with constant $L_2$. If we let $H_k = \nabla^2 g(x_k)$, $k = 1, 2, \dots$, then the step length $t_k = 1$ satisfies the sufficient decrease condition (10) for k sufficiently large.

We can characterize the solution of the subproblem using the first-order optimality conditions for (4). Let y denote $\mathrm{prox}^H_h\left(x - H^{-1}\nabla g(x)\right)$; then

$$H\left(x - H^{-1}\nabla g(x) - y\right) \in \partial h(y),$$

or equivalently

$$[H - \nabla g](x) \in [H + \partial h](y).$$

Let R(x) and S(x) denote $\left[\frac{1}{m}(H + \partial h)\right]^{-1}(x)$ and $\frac{1}{m}(H - \nabla g)(x)$ respectively, where m is the smallest eigenvalue of H. Then

$$y = [H + \partial h]^{-1}[H - \nabla g](x) = R \circ S(x).$$

Lemma 3.4. Suppose $R(x) = \left[\frac{1}{m}(H + \partial h)\right]^{-1}(x)$, where H is positive definite. Then R is firmly nonexpansive; i.e. for x and y in dom f, R satisfies

$$(R(x) - R(y))^T (x - y) \geq \|R(x) - R(y)\|^2.$$

We note that $x^\star$ is a fixed point of $R \circ S$; i.e. $R \circ S(x^\star) = x^\star$, so we can express $\|y - x^\star\|$ as

$$\|y - x^\star\| = \|R \circ S(x) - R \circ S(x^\star)\| \leq \|S(x) - S(x^\star)\|.$$

Theorem 3.5. Suppose (i) $\nabla^2 g \succeq mI$ and (ii) $\nabla^2 g$ is Lipschitz continuous with constant $L_2$. If we let $H_k = \nabla^2 g(x_k)$, $k = 1, 2, \dots$, then $\{x_k\}$ converges to $x^\star$ Q-quadratically; i.e.

$$\frac{\|x_{k+1} - x^\star\|}{\|x_k - x^\star\|^2} \to c.$$

We can also use the fact that the proximal Newton method converges quadratically to prove that a proximal quasi-Newton method converges superlinearly. We assume the quasi-Newton Hessian approximations satisfy the Dennis-Moré criterion [7]:
$$\frac{\left\|\left(H_k - \nabla^2 g(x^\star)\right)(x_{k+1} - x_k)\right\|}{\|x_{k+1} - x_k\|} \to 0. \qquad (13)$$

We first prove two lemmas: (i) step lengths of unity satisfy the sufficient descent condition after sufficiently many iterations, and (ii) the proximal quasi-Newton step is close to the proximal Newton step.

Lemma 3.6. Suppose g is twice-continuously differentiable and the eigenvalues of $H_k$, $k = 1, 2, \dots$ are bounded; i.e. there exist $M \geq m > 0$ such that $mI \preceq H_k \preceq MI$. If $\{H_k\}$ satisfy the Dennis-Moré criterion, then the unit step length satisfies the sufficient descent condition (10) after sufficiently many iterations.

Lemma 3.7. Suppose H and $\hat{H}$ are positive definite matrices with bounded eigenvalues; i.e. $mI \preceq H \preceq MI$ and $\hat{m}I \preceq \hat{H} \preceq \hat{M}I$. Let ∆x and $\Delta\hat{x}$ denote the search directions generated using H and $\hat{H}$ respectively; i.e.

$$\Delta x = \mathrm{prox}^H_h\left(x - H^{-1}\nabla g(x)\right) - x, \qquad \Delta\hat{x} = \mathrm{prox}^{\hat{H}}_h\left(x - \hat{H}^{-1}\nabla g(x)\right) - x.$$

Then these two search directions satisfy

$$\|\Delta x - \Delta\hat{x}\| \leq \sqrt{\frac{1 + c(H, \hat{H})}{m}} \, \left\|(\hat{H} - H)\Delta x\right\|^{1/2} \|\Delta x\|^{1/2},$$

where c is a constant that depends on H and $\hat{H}$.

Theorem 3.8. Suppose g is twice-continuously differentiable and the eigenvalues of $H_k$, $k = 1, 2, \dots$ are bounded. If $\{H_k\}$ satisfy the Dennis-Moré criterion, then the sequence $\{x_k\}$ converges to $x^\star$ Q-superlinearly; i.e.

$$\frac{\|x_{k+1} - x^\star\|}{\|x_k - x^\star\|} \to 0.$$

4 Computational experiments

4.1 PNOPT: Proximal Newton OPTimizer

PNOPT1 is a MATLAB package that uses proximal Newton-type methods to minimize convex objective functions in composite form. PNOPT can build BFGS and L-BFGS approximations to the Hessian (the user can also supply a Hessian approximation) and uses our implementation of SpaRSA or an optimal first-order method to solve the subproblem for a search direction. PNOPT uses an early stopping condition for the subproblem solver based on two ideas: (i) the subproblem should be solved to a higher accuracy if $Q_k$ is a good approximation to g, and (ii) near a solution, the subproblem should be solved almost exactly to achieve fast convergence. We thus require that the solution $y^\star_k$ to the k-th subproblem (7) satisfy

$$\|G_{Q_k + h}(y^\star_k)\| \leq \eta_k \|G_f(y^\star_k)\|, \qquad (14)$$

where $G_f(x)$ denotes the generalized gradient step at x (3) and $\eta_k$ is a forcing term. We choose forcing terms based on the agreement between g and the previous quadratic approximation to g, $Q_{k-1}$. We set $\eta_1 := 0.5$ and

$$\eta_k := \min\left\{0.5, \; \frac{\|\nabla g(x_k) - \nabla Q_{k-1}(x_k)\|}{\|\nabla g(x_k)\|}\right\}, \quad k = 2, 3, \dots \qquad (15)$$

This choice measures the agreement between $\nabla g(x_k)$ and $\nabla Q_{k-1}(x_k)$ and is borrowed from a choice of forcing terms for inexact Newton methods described by Eisenstat and Walker [8]. Empirically, we find that this choice avoids "oversolving" the subproblem and yields desirable convergence behavior. We compare the performance of PNOPT, our implementation of SpaRSA, and the TFOCS implementations of Auslender and Teboulle's method (AT) and FISTA on ℓ1-regularized logistic regression and Markov random field structure learning. We used the following settings: 1.
PNOPT: We use an L-BFGS approximation to the Hessian with L = 50 and set the sufficient decrease parameter to α = 0.0001. To solve the subproblem, we use the TFOCS implementation of FISTA.

1 PNOPT is available at www.stanford.edu/group/SOL/software/pnopt.html.

2. SpaRSA: We use a nonmonotone line search with a 10-iteration memory and also set the sufficient decrease parameter to α = 0.0001. Our implementation of SpaRSA is included in PNOPT as the default solver for the subproblem.

3. AT/FISTA: We set tfocsOpts.restart = -inf to turn on adaptive restarting and use default values for the rest of the settings.

These experiments were conducted on a machine running the 64-bit version of Ubuntu 12.04 with an Intel Core i7 870 CPU and 8 GB RAM.

[Figure 1: Figures 1a and 1b compare two variants of proximal Newton-type methods (PN100, PN15) with SpaRSA and TFOCS (FISTA, AT) on the MRF structure learning problem, plotting log(f − f*) against (a) iterations and (b) time in seconds.]

4.2 Markov random field structure learning

We seek the maximum likelihood estimates of the parameters of a Markov random field (MRF) subject to a group elastic-net penalty on the estimates. The objective function is given by

$$\min_{\theta} \; -\sum_{(r,j) \in E} \theta_{rj}(x_r, x_j) + \log Z(\theta) + \sum_{(r,j) \in E} \left(\lambda_1 \|\theta_{rj}\|_2 + \lambda_2 \|\theta_{rj}\|_F^2\right). \qquad (16)$$

$x_r$ is a k-state variable, $x_j$ is an l-state variable, and each parameter block $\theta_{rj}$ is a $k \times l$ matrix that is associated with an edge in the MRF. We randomly generate a graphical model with |V| = 12 and n = 300. The edges are sampled uniformly with p = 0.3. The parameters of the non-zero edges are sampled from a N(0, 1) distribution. The group elastic-net penalty regularizes the solution and promotes solutions with a few non-zero groups $\theta_{rj}$ corresponding to edges of the graphical model [27]. The regularization parameters were set to $\lambda_1 = \sqrt{n \log |V|}$ and $\lambda_2 = 0.1\lambda_1$.
These parameter settings are shown to be model selection consistent under certain irrepresentable conditions [17]. The algorithms for solving (16) require evaluating the value and gradient of the smooth part. For a discrete graphical model without special structure, the smooth part requires $O(k^{|V|})$ operations to evaluate, where k is the number of states per variable. Thus even for our small example, where k = 3 and |V| = 12, function and gradient evaluations dominate the computational expense required to solve (16). We see that for maximum likelihood learning in graphical models, it is important to minimize the number of function evaluations. Proximal Newton-type methods are well-suited to solve such problems because the main computational expense is shifted to solving the subproblems, which do not require function evaluations.

[Figure 2: Comparison of proximal Newton-type methods (PN) with SpaRSA and TFOCS (AT, FISTA) on ℓ1-regularized logistic regression, plotting relative suboptimality against (a) function evaluations and (b) time in seconds.]

4.3 ℓ1-regularized logistic regression

Given training data $(x_i, y_i)$, $i = 1, 2, \dots, n$, ℓ1-regularized logistic regression trains a classifier via the solution of the convex optimization problem

$$\min_{w \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^{n} \log\left(1 + \exp(-y_i w^T x_i)\right) + \lambda \|w\|_1 \qquad (17)$$

for a set of parameters w in $\mathbb{R}^p$. The regularization term $\|w\|_1$ avoids overfitting the training data and promotes sparse solutions; λ trades off between goodness-of-fit and model complexity. We use the dataset gisette, a handwritten digits dataset from the NIPS 2003 feature selection challenge. The dataset is available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets. We train our classifier using the original training set consisting of 6000 examples, starting at w = 0.
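The smooth and nonsmooth parts of objective (17) are easy to evaluate directly. Below is our own sketch on a tiny synthetic stand-in for the (much larger and denser) gisette problem; `np.logaddexp` computes log(1 + exp(·)) stably.

```python
import numpy as np

def l1_logreg_objective(w, X, y, lam):
    """Objective (17): average logistic loss plus lam*||w||_1."""
    margins = -y * (X @ w)
    return np.mean(np.logaddexp(0.0, margins)) + lam * np.abs(w).sum()

def l1_logreg_smooth_grad(w, X, y):
    """Gradient of the smooth part g(w) = mean log(1 + exp(-y_i w^T x_i))."""
    p = 1.0 / (1.0 + np.exp(y * (X @ w)))  # sigma(-y_i w^T x_i)
    return -(X.T @ (y * p)) / len(y)

# Tiny synthetic instance (NOT gisette) just to exercise the formulas.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = np.sign(X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(100))
w0 = np.zeros(20)
f0 = l1_logreg_objective(w0, X, y, lam=0.1)
```

At w = 0 every margin is zero, so the smooth part equals log 2, which is a convenient sanity check when starting a solver at the origin as the paper does.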
λ was chosen to match the value reported in [26], where it was chosen by five-fold cross validation on the training set. The gisette dataset is quite dense (3 million nonzeros in the 6000 × 5000 design matrix) and the evaluation of the log-likelihood requires many expensive exp/log operations. We see in Figure 2 that PNOPT outperforms the other methods because the computational expense is shifted to solving the subproblems, whose objective functions are cheap to evaluate.

5 Conclusion

Proximal Newton-type methods are natural generalizations of first-order methods that account for the curvature of the objective function. They share many of the desirable characteristics of traditional first-order methods for convex optimization problems in composite form and achieve superlinear rates of convergence subject to standard assumptions. These methods are especially suited to problems with expensive function evaluations because the main computational expense is shifted to solving subproblems that do not require function evaluations.

6 Acknowledgements

We wish to thank Trevor Hastie, Nick Henderson, Ernest Ryu, Ed Schmerling, Carlos Sing-Long, and Walter Murray for their insightful comments.

References

[1] A. Auslender and M. Teboulle, Interior gradient and proximal methods for convex and conic optimization, SIAM J. Optim., 16 (2006), pp. 697–725.
[2] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), pp. 183–202.
[3] S. R. Becker, E. J. Candès, and M. C. Grant, Templates for convex cone problems with applications to sparse signal recovery, Math. Program. Comput., 3 (2011), pp. 1–54.
[4] S. Becker and J. Fadili, A quasi-Newton proximal splitting method, NIPS, Lake Tahoe, California, 2012.
[5] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004.
[6] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput.
Math., 9 (2009), pp. 717–772.
[7] J. E. Dennis, Jr. and J. J. Moré, A characterization of superlinear convergence and its application to quasi-Newton methods, Math. Comp., 28 (1974), pp. 549–560.
[8] S. C. Eisenstat and H. F. Walker, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci. Comput., 17 (1996), pp. 16–32.
[9] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani, Pathwise coordinate optimization, Ann. Appl. Stat. (2007), pp. 302–332.
[10] M. Fukushima and H. Mine, A generalized proximal point algorithm for certain non-convex minimization problems, Internat. J. Systems Sci., 12 (1981), pp. 989–1000.
[11] C. J. Hsieh, M. A. Sustik, P. Ravikumar, and I. S. Dhillon, Sparse inverse covariance matrix estimation using quadratic approximation, NIPS, Granada, Spain, 2011.
[12] D. Kim, S. Sra, and I. S. Dhillon, A scalable trust-region algorithm with applications to mixed-norm regression, ICML, Haifa, Israel, 2010.
[13] Y. Nesterov, Gradient methods for minimizing composite objective function, CORE discussion paper, 2007.
[14] G. Obozinski, B. Taskar, and M. I. Jordan, Joint covariate selection and joint subspace selection for multiple classification problems, Stat. Comput. (2010), pp. 231–252.
[15] P. Olsen, F. Oztoprak, J. Nocedal, and S. Rennie, Newton-like methods for sparse inverse covariance estimation, NIPS, Lake Tahoe, California, 2012.
[16] M. Patriksson, Nonlinear Programming and Variational Inequality Problems, Kluwer Academic Publishers, The Netherlands, 1999.
[17] P. Ravikumar, M. J. Wainwright, and J. D. Lafferty, High-dimensional Ising model selection using ℓ1-regularized logistic regression, Ann. Statist. (2010), pp. 1287–1319.
[18] M. Schmidt, Graphical Model Structure Learning with ℓ1-Regularization, Ph.D. Thesis (2010), University of British Columbia.
[19] M. Schmidt, E. van den Berg, M. P. Friedlander, and K.
Murphy, Optimizing costly functions with simple constraints: a limited-memory projected quasi-Newton algorithm, AISTATS, Clearwater Beach, Florida, 2009.
[20] M. Schmidt, D. Kim, and S. Sra, Projected Newton-type methods in machine learning, in S. Sra, S. Nowozin, and S. Wright, editors, Optimization for Machine Learning, MIT Press (2011).
[21] P. Tseng and S. Yun, A coordinate gradient descent method for nonsmooth separable minimization, Math. Prog. Ser. B, 117 (2009), pp. 387–423.
[22] P. Tseng, On accelerated proximal gradient methods for convex-concave optimization, submitted to SIAM J. Optim. (2008).
[23] R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. Ser. B Stat. Methodol., 58 (1996), pp. 267–288.
[24] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, Sparse reconstruction by separable approximation, IEEE Trans. Signal Process., 57 (2009), pp. 2479–2493.
[25] J. Yu, S. V. N. Vishwanathan, S. Günter, and N. N. Schraudolph, A quasi-Newton approach to nonsmooth convex optimization, ICML, Helsinki, Finland, 2008.
[26] G. X. Yuan, C. H. Ho, and C. J. Lin, An improved GLMNET for ℓ1-regularized logistic regression and support vector machines, National Taiwan University, Tech. Report, 2011.
[27] H. Zou and T. Hastie, Regularization and variable selection via the elastic net, J. R. Stat. Soc. Ser. B Stat. Methodol., 67 (2005), pp. 301–320.
A Marginalized Particle Gaussian Process Regression

Yali Wang and Brahim Chaib-draa
Department of Computer Science
Laval University
Quebec, Quebec G1V 0A6
{wang,chaib}@damas.ift.ulaval.ca

Abstract

We present a novel marginalized particle Gaussian process (MPGP) regression, which provides a fast, accurate online Bayesian filtering framework to model the latent function. Using a state space model established by the data construction procedure, our MPGP recursively filters out the estimation of hidden function values by a Gaussian mixture. Meanwhile, it provides a new online method for training hyperparameters with a number of weighted particles. We demonstrate the estimation performance of our MPGP on both simulated and real large data sets. The results show that our MPGP is a robust estimation algorithm with high computational efficiency, which outperforms other state-of-the-art sparse GP methods.

1 Introduction

The Gaussian process (GP) is a popular nonparametric Bayesian method for nonlinear regression. However, the $O(n^3)$ computational load for training the GP model would severely limit its applicability in practice when the number of training points n is larger than a few thousand [1]. A number of attempts have been made to handle this with a small computational load. One typical method is the sparse pseudo-input Gaussian process (SPGP) [2], which uses a pseudo-input data set with m inputs (m ≪ n) to parameterize the GP predictive distribution and so reduce the computational burden. Then a sparse spectrum Gaussian process (SSGP) [3] was proposed to further improve the performance of SPGP while retaining the computational efficiency by using a stationary trigonometric Bayesian model with m basis functions. However, both SPGP and SSGP learn the hyperparameters offline by maximizing the marginal likelihood before making the inference, and so run the risk of falling into a local optimum.
Another recent model is the Kalman filter Gaussian process (KFGP) [4], which reduces the computational load by correlating function values of data subsets at each Kalman filter iteration. But it still underfits or overfits if the hyperparameters are badly learned offline. On the contrary, we propose in this paper an online marginalized particle filter to simultaneously learn the hyperparameters and hidden function values. By collecting small data subsets sequentially, we establish a novel state space model which allows us to estimate the marginal posterior distribution (not the marginal likelihood) of the hyperparameters online with a number of weighted particles. For each particle, a Kalman filter is applied to estimate the posterior distribution of hidden function values. We will later explain it in detail and show its validity via experiments on large datasets.

2 Data Construction

In practice, the whole training data set is usually constructed by gathering small subsets several times. For the t-th collection, the training subset (X_t, y_t) consists of n_t input-output pairs: {(x_t^1, y_t^1), · · · , (x_t^{n_t}, y_t^{n_t})}. Each scalar output y_t^i is generated from a nonlinear function f(x_t^i) of a d-dimensional input vector x_t^i with additive Gaussian noise N(0, a_0^2). All the pairs are separately organized as an input matrix X_t and an output vector y_t. For simplicity, the whole training data over T collections is symbolized as (X_{1:T}, y_{1:T}). The goal is a regression problem: estimating the value of f(x) at m test inputs X_⋆ = [x_⋆^1, · · · , x_⋆^m] given (X_{1:T}, y_{1:T}).

3 Gaussian Process Regression

A Gaussian process (GP) represents a distribution over functions, and is a generalization of the Gaussian distribution to an infinite-dimensional function space. Formally, it is a collection of random variables, any finite number of which have a joint Gaussian distribution [1].
Similar to a Gaussian distribution specified by a mean vector and covariance matrix, a GP is fully defined by a mean function m(x) = E[f(x)] and a covariance function k(x, x′) = E[(f(x) − m(x))(f(x′) − m(x′))]. Here we follow the common practical choice of setting m(x) to zero. Moreover, due to spatially nonstationary phenomena in the real world, we choose k(x, x′) = k_SE(x, x′) + k_NN(x, x′), where

k_SE(x, x′) = a_1^2 exp[−0.5 a_2^{−2} (x − x′)^T (x − x′)]

is the stationary squared exponential covariance function, and

k_NN(x, x′) = a_3^2 sin^{−1}[ a_4^{−2} x̃^T x̃′ ((1 + a_4^{−2} x̃^T x̃)(1 + a_4^{−2} x̃′^T x̃′))^{−0.5} ]

is the nonstationary neural network covariance function with the augmented input x̃ = [1 x^T]^T. For simplicity, all the hyperparameters are collected into a vector θ = [a_0 a_1 a_2 a_3 a_4]^T. The regression problem can be solved by the standard GP in two steps. First, learn θ given (X_{1:T}, y_{1:T}): one technique is to draw samples from p(θ|X_{1:T}, y_{1:T}) using Markov chain Monte Carlo (MCMC) [5, 6]; another popular way is to maximize the log evidence p(y_{1:T}|X_{1:T}, θ) via a gradient-based optimizer [1]. Second, estimate the distribution of the function values p(f(X_⋆)|X_{1:T}, y_{1:T}, X_⋆, θ). From the perspective of a GP, a function f(x) can be loosely considered as an infinitely long vector in which each random variable is the function value at an input x, and any finite set of function values is jointly Gaussian distributed. Hence, the joint distribution p(y_{1:T}, f(X_⋆)|X_{1:T}, X_⋆, θ) is a multivariate Gaussian distribution.
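The composite covariance function k_SE + k_NN above can be sketched numerically as follows. This is a hedged illustration only: the function names and the hyperparameter values in the usage example are our own placeholders, not values from the paper.

```python
import numpy as np

def k_se(x, xp, a1, a2):
    # Stationary squared exponential: a1^2 * exp(-0.5 * ||x - x'||^2 / a2^2)
    d = x - xp
    return a1**2 * np.exp(-0.5 * (d @ d) / a2**2)

def k_nn(x, xp, a3, a4):
    # Nonstationary neural network covariance with augmented input x~ = [1, x^T]^T
    xa, xpa = np.append(1.0, x), np.append(1.0, xp)
    s = (xa @ xpa) / a4**2
    norm = np.sqrt((1 + xa @ xa / a4**2) * (1 + xpa @ xpa / a4**2))
    return a3**2 * np.arcsin(s / norm)   # |s| <= norm by Cauchy-Schwarz

def k_senn(x, xp, theta):
    # theta = [a0, a1, a2, a3, a4] as in the paper; a0 is the noise scale
    a0, a1, a2, a3, a4 = theta
    return k_se(x, xp, a1, a2) + k_nn(x, xp, a3, a4)
```

Note that k_se(x, x) = a_1^2 regardless of x (stationarity), while k_nn(x, x) varies with x, which is what lets the sum adapt to spatially nonstationary data.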
Then, by the conditional property of the Gaussian distribution, p(f(X_⋆)|X_{1:T}, y_{1:T}, X_⋆, θ) is also Gaussian, with the following mean vector f̄(X_⋆) and covariance matrix P(X_⋆, X_⋆) [1, 7]:

f̄(X_⋆) = K_θ(X_⋆, X_{1:T}) [K_θ(X_{1:T}, X_{1:T}) + a_0^2 I]^{−1} y_{1:T}
P(X_⋆, X_⋆) = K_θ(X_⋆, X_⋆) − K_θ(X_⋆, X_{1:T}) [K_θ(X_{1:T}, X_{1:T}) + a_0^2 I]^{−1} K_θ(X_⋆, X_{1:T})^T

If there are n training inputs and m test inputs, then K_θ(X_⋆, X_{1:T}) denotes an m × n covariance matrix in which each entry is calculated by the covariance function k(x, x′) with the learned θ; K_θ(X_{1:T}, X_{1:T}) and K_θ(X_⋆, X_⋆) are constructed similarly.

4 Marginalized Particle Gaussian Process Regression

Even though the GP is an elegant nonparametric method for Bayesian regression, it is commonly infeasible for large data sets due to the O(n^3) scaling for learning the model. In order to derive a computationally tractable GP model which preserves the estimation accuracy, we first explore a state space model arising from the data construction procedure, and then propose a marginalized particle filter to estimate the hidden f(X_⋆) and θ in an online Bayesian filtering framework.

4.1 State Space Model

The standard state space model (SSM) consists of a state equation and an observation equation. The state equation reflects the Markovian evolution of the hidden states (the hyperparameters and function values). For the hidden static hyperparameter θ, a popular method in the filtering literature is to add an artificial evolution using kernel smoothing, which guarantees the estimation convergence [8, 9, 10]:

θ_t = b θ_{t−1} + (1 − b) θ̄_{t−1} + s_{t−1}    (1)

where b = (3δ − 1)/(2δ), δ is a discount factor typically around 0.95-0.99, θ̄_{t−1} is the Monte Carlo mean of θ at t − 1, and s_{t−1} ∼ N(0, r^2 Σ_{t−1}) with r^2 = 1 − b^2 and Σ_{t−1} the Monte Carlo variance matrix of θ at t − 1. For the hidden function values, we exploit the relation between the (t − 1)-th and t-th data subsets. For simplicity, we denote X^c_t = X_t ∪ X_⋆ and f^c_t = f(X^c_t).
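The kernel-smoothing artificial evolution in (1) can be sketched as follows. This is a hedged sketch: the function name is ours, the discount factor δ = 0.97 is an illustrative choice within the 0.95-0.99 range the text suggests, and in practice the hyperparameters would typically be evolved on the log scale so they stay positive.

```python
import numpy as np

def evolve_hyperparams(particles, delta=0.97, rng=np.random.default_rng(0)):
    """Apply the artificial evolution of eq. (1) to an N x d array of
    hyperparameter particles: shrink toward the Monte Carlo mean, then
    add noise s ~ N(0, r^2 * Sigma) with the Monte Carlo covariance Sigma."""
    b = (3 * delta - 1) / (2 * delta)
    r2 = 1 - b**2
    mean = particles.mean(axis=0)             # Monte Carlo mean theta_bar
    cov = np.cov(particles, rowvar=False)     # Monte Carlo covariance Sigma
    shrunk = b * particles + (1 - b) * mean
    noise = rng.multivariate_normal(np.zeros(particles.shape[1]),
                                    r2 * cov, size=particles.shape[0])
    return shrunk + noise
```

The shrink-then-jitter form keeps the particle cloud's first two moments approximately unchanged (since b^2 + r^2 = 1), which is the point of the kernel-smoothing construction.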
If f(x) ∼ GP(0, k(x, x′)), then the prior distribution p(f^c_t, f^c_{t−1}|X^c_{t−1}, X^c_t, θ_t) is jointly Gaussian with zero mean and block covariance

[ K_θt(X^c_t, X^c_t)        K_θt(X^c_t, X^c_{t−1})      ]
[ K_θt(X^c_t, X^c_{t−1})^T  K_θt(X^c_{t−1}, X^c_{t−1})  ]

Then, by the conditional property of the Gaussian distribution, we get

p(f^c_t | f^c_{t−1}, X^c_{t−1}, X^c_t, θ_t) = N(G(θ_t) f^c_{t−1}, Q(θ_t))    (2)

where

G(θ_t) = K_θt(X^c_t, X^c_{t−1}) K_θt(X^c_{t−1}, X^c_{t−1})^{−1}    (3)
Q(θ_t) = K_θt(X^c_t, X^c_t) − K_θt(X^c_t, X^c_{t−1}) K_θt(X^c_{t−1}, X^c_{t−1})^{−1} K_θt(X^c_t, X^c_{t−1})^T    (4)

This conditional density (2) can be written as a linear equation in the function values with additive Gaussian noise v^f_t ∼ N(0, Q(θ_t)):

f^c_t = G(θ_t) f^c_{t−1} + v^f_t    (5)

Finally, the observation (output) equation is obtained directly from the t-th data collection:

y_t = H_t f^c_t + v^y_t    (6)

where H_t = [I_{n_t} 0] is an index matrix that makes H_t f^c_t = f(X_t), since y_t is only observed at the t-th training inputs X_t. The noise v^y_t ∼ N(0, R(θ_t)), with R(θ_t) = a_{0,t}^2 I, follows from Section 2. Note that a_0 is a fixed unknown hyperparameter; we use the symbol a_{0,t} only for consistency with the artificial evolution of θ. To sum up, our SSM is fully specified by (1), (5), (6).

4.2 Bayesian Inference by Marginalized Particle Filter

In contrast to the two-step offline inference of the standard GP regression in Section 3, we propose an online filtering framework to simultaneously learn the hyperparameters and estimate the hidden function values. Given the SSM above, the inference problem is to compute the posterior distribution p(f^c_t, θ_{1:t}|X_{1:t}, X_⋆, y_{1:t}). One technique is MCMC, but MCMC usually suffers from a long convergence time. Hence we choose another popular technique, the particle filter. However, for our SSM, the traditional sampling importance resampling (SIR) particle filter would introduce unnecessary computational load, because (5) is a linear structure given θ_t.
This inspires us to apply a more efficient marginalized particle filter (also called a Rao-Blackwellised particle filter) [9, 11, 12, 13], which combines a Kalman filter with a particle filter. Using Bayes' rule, the posterior can be factorized as

p(f^c_t, θ_{1:t}|X_{1:t}, X_⋆, y_{1:t}) = p(θ_{1:t}|X_{1:t}, X_⋆, y_{1:t}) p(f^c_t|θ_{1:t}, X_{1:t}, X_⋆, y_{1:t})

Here p(θ_{1:t}|X_{1:t}, X_⋆, y_{1:t}) is a marginal posterior which can be handled by a particle filter. Given the estimate of θ_{1:t}, the second factor p(f^c_t|θ_{1:t}, X_{1:t}, X_⋆, y_{1:t}) can be computed by a Kalman filter, since f^c_t is the hidden state in the linear substructure (equation (5)) of the SSM. The detailed inference procedure is as follows. First, p(θ_{1:t}|X_{1:t}, X_⋆, y_{1:t}) is factorized in a recursive form so that it fits the sequential importance sampling framework:

p(θ_{1:t}|X_{1:t}, X_⋆, y_{1:t}) ∝ p(y_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆) p(θ_t|θ_{t−1}) p(θ_{1:t−1}|X_{1:t−1}, X_⋆, y_{1:t−1})

At each iteration of the sequential importance sampling, the particles for the hyperparameter vector are drawn from the proposal distribution p(θ_t|θ_{t−1}) (easily obtained from equation (1)); the importance weight for each particle at t is then computed from p(y_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆). This distribution can be solved analytically:

p(y_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆)
= ∫ p(y_t, f^c_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆) df^c_t
= ∫ p(y_t|f^c_t, θ_t, X_t, X_⋆) p(f^c_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆) df^c_t
= ∫ N(H_t f^c_t, R(θ_t)) N(f^c_{t|t−1}, P^c_{t|t−1}) df^c_t
= N(H_t f^c_{t|t−1}, H_t P^c_{t|t−1} H_t^T + R(θ_t))    (7)

where p(y_t|f^c_t, θ_t, X_t, X_⋆) is the Gaussian N(H_t f^c_t, R(θ_t)) from equation (6), and p(f^c_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆) = N(f^c_{t|t−1}, P^c_{t|t−1}) is the (Gaussian) prediction step of the Kalman filter for f^c_t, with predictive mean f^c_{t|t−1} and covariance P^c_{t|t−1}. Second, we explain how to compute p(f^c_t|θ_{1:t}, X_{1:t}, X_⋆, y_{1:t}) using the prediction-update Kalman filter.
According to recursive Bayesian filtering, this posterior is factorized as

p(f^c_t|θ_{1:t}, X_{1:t}, X_⋆, y_{1:t}) = p(y_t|f^c_t, θ_t, X_t, X_⋆) p(f^c_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆) / p(y_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆)    (8)

In the prediction step, the goal is to compute p(f^c_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆), which is an integral:

p(f^c_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆)
= ∫ p(f^c_t, f^c_{t−1}|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆) df^c_{t−1}
= ∫ p(f^c_t|f^c_{t−1}, θ_t, X_{t−1:t}, X_⋆) p(f^c_{t−1}|y_{1:t−1}, θ_{1:t−1}, X_{1:t−1}, X_⋆) df^c_{t−1}
= ∫ N(G(θ_t) f^c_{t−1}, Q(θ_t)) N(f^c_{t−1|t−1}, P^c_{t−1|t−1}) df^c_{t−1}
= N(G(θ_t) f^c_{t−1|t−1}, G(θ_t) P^c_{t−1|t−1} G(θ_t)^T + Q(θ_t))    (9)

where p(f^c_t|f^c_{t−1}, θ_t, X_{t−1:t}, X_⋆) comes directly from (2), and p(f^c_{t−1}|y_{1:t−1}, θ_{1:t−1}, X_{1:t−1}, X_⋆) = N(f^c_{t−1|t−1}, P^c_{t−1|t−1}) is the posterior estimate of f^c_{t−1}. Since p(f^c_t|y_{1:t−1}, θ_{1:t}, X_{1:t}, X_⋆) can also be expressed as N(f^c_{t|t−1}, P^c_{t|t−1}), the prediction step is summarized as:

f^c_{t|t−1} = G(θ_t) f^c_{t−1|t−1},    P^c_{t|t−1} = G(θ_t) P^c_{t−1|t−1} G(θ_t)^T + Q(θ_t)    (10)

In the update step, the current observation density p(y_t|f^c_t, θ_t, X_t, X_⋆) = N(H_t f^c_t, R(θ_t)) is used to correct the prediction. Putting (7) and (9) into (8), p(f^c_t|θ_{1:t}, X_{1:t}, X_⋆, y_{1:t}) = N(f^c_{t|t}, P^c_{t|t}) is Gaussian with the Kalman gain Γ_t, where:

Γ_t = P^c_{t|t−1} H_t^T (H_t P^c_{t|t−1} H_t^T + R(θ_t))^{−1}    (11)
f^c_{t|t} = f^c_{t|t−1} + Γ_t (y_t − H_t f^c_{t|t−1}),    P^c_{t|t} = P^c_{t|t−1} − Γ_t H_t P^c_{t|t−1}    (12)

Finally, the whole algorithm (t = 1, 2, 3, ....)
is summarized as follows:

• For i = 1, 2, ..., N:
  – Draw θ^i_t ∼ p(θ_t|θ̃^i_{t−1}) according to (1)
  – Use θ^i_t to specify k(x, x′) in the GP and construct G(θ^i_t), Q(θ^i_t), R(θ^i_t) in (3)-(4) and (6)
  – Kalman predict: plug f̃^{c,i}_{t−1|t−1}, P̃^{c,i}_{t−1|t−1} into (10) to compute f^{c,i}_{t|t−1}, P^{c,i}_{t|t−1}
  – Kalman update: plug f^{c,i}_{t|t−1} and P^{c,i}_{t|t−1} into (11) and (12) to compute f^{c,i}_{t|t} and P^{c,i}_{t|t}
  – Plug f^{c,i}_{t|t−1}, P^{c,i}_{t|t−1}, R(θ^i_t) into (7) to compute the importance weight w̄^i_t
• Normalize the weights: w^i_t = w̄^i_t / Σ_{i=1}^N w̄^i_t (i = 1, ..., N)
• Hyperparameter and hidden function value estimation:
  θ̂_t = Σ_{i=1}^N w^i_t θ^i_t,    f̂^c_{t|t} = Σ_{i=1}^N w^i_t f^{c,i}_{t|t}  ⇒  f̂^⋆_{t|t} = H^⋆_t f̂^c_{t|t}
  P̂^c_{t|t} = Σ_{i=1}^N w^i_t (P^{c,i}_{t|t} + (f^{c,i}_{t|t} − f̂^c_{t|t})(f^{c,i}_{t|t} − f̂^c_{t|t})^T)  ⇒  P̂^⋆_{t|t} = H^⋆_t P̂^c_{t|t} (H^⋆_t)^T
  where H^⋆_t = [0 I_m] is an index matrix that extracts the function value estimate at X_⋆
• Resampling: for i = 1, ..., N, resample θ^i_t, f^{c,i}_{t|t}, P^{c,i}_{t|t} with respect to the importance weights w^i_t to obtain θ̃^i_t, f̃^{c,i}_{t|t}, P̃^{c,i}_{t|t} for the next step

At each iteration, our marginalized particle Gaussian process (MPGP) uses a small training subset to estimate f(X_⋆) by Kalman filters, and learns the hyperparameters online with weighted particles. The computational cost of the marginalized particle filter is governed by O(N T S^3) [10], where N is the number of particles, T is the number of data collections, and S is the size of each collection. This largely reduces the computational load. Moreover, the MPGP propagates the previous estimation to improve the current accuracy in the recursive filtering framework. From the algorithm above, we also see that f(X_⋆) is estimated as a Gaussian mixture at each iteration, since each hyperparameter particle is accompanied by a Kalman filter for f(X_⋆).
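The per-particle Kalman predict-update cycle of (10)-(12), together with the log importance weight from (7), can be sketched as follows. This is a hedged sketch: the function name is ours, and the matrices G, Q, H, R would in practice be built from the kernel matrices as in (3)-(6).

```python
import numpy as np

def kalman_step(f_prev, P_prev, y, G, Q, H, R):
    """One predict-update cycle for a single hyperparameter particle.
    Returns the filtered mean/covariance and the log importance weight."""
    # Predict (eq. 10): f_{t|t-1} = G f_{t-1|t-1}, P_{t|t-1} = G P G^T + Q
    f_pred = G @ f_prev
    P_pred = G @ P_prev @ G.T + Q
    # Innovation covariance S = H P_{t|t-1} H^T + R (also the covariance in eq. 7)
    S = H @ P_pred @ H.T + R
    # Kalman gain (eq. 11)
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update (eq. 12)
    resid = y - H @ f_pred
    f_filt = f_pred + K @ resid
    P_filt = P_pred - K @ H @ P_pred
    # Log weight: log N(y; H f_{t|t-1}, S), evaluated stably via slogdet
    _, logdet = np.linalg.slogdet(2 * np.pi * S)
    logw = -0.5 * (logdet + resid @ np.linalg.solve(S, resid))
    return f_filt, P_filt, logw
```

Running this once per particle, normalizing the weights exp(logw), and resampling reproduces the loop structure summarized above.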
Hence the MPGP could accelerate the computational speed while preserving the accuracy. Additionally, it is worth mentioning that the Kalman filter GP (KFGP) [4] is a special case of our MPGP, since the KFGP first trains the hyperparameter vector offline and uses it to specify the SSM, then estimates p(f^c_t|θ_{1:t}, X_{1:t}, X_⋆, y_{1:t}) by a Kalman filter.

[Figure 1: Estimation result comparison. (a-b) show the estimation for f1 at t = 10 by SE-KFGP (blue line with blue dashed interval in (a)), SE-MPGP (red line with red dashed interval in (a)), SENN-KFGP (blue line with blue dashed interval in (b)), SENN-MPGP (red line with red dashed interval in (b)). The black crosses are the training outputs at t = 10, the black line is the true f(X⋆). The denotation of (c-d), (e-f), (g-h) is the same as (a-b) except that (c-d) are for f2 at t = 10, (e-f) are for f1 at t = 100, (g-h) are for f2 at t = 50. (i-m), (n-r) are the estimation of the log hyperparameters (log(a0) to log(a4)) for f1, f2 over time.]
But the offline learning procedure in KFGP will either take a long time with a large extra training set or fall into an unsatisfactory local optimum with a small one. In our MPGP, the local optimum can be used as the initial setting of the hyperparameters, and the underlying θ is then learned online by the marginalized particle filter to improve the performance. Finally, to avoid confusion, we should clarify the difference between our MPGP and the GP-modeled Bayesian filters [14, 15]: the goal of GP-modeled Bayesian filters is to use GP modeling for Bayesian filtering; on the contrary, our MPGP uses Bayesian filtering for GP modeling.

5 Experiments

Two Synthetic Datasets: The proposed MPGP is first evaluated on two simulated one-dimensional datasets. One is a function with a sharp peak which is spatially inhomogeneously smooth [16]: f1(x) = sin(x) + 2 exp(−30x^2). For f1(x), we gather the training data with 100 collections. For each collection, we randomly select 30 inputs from [−2, 2], then calculate their outputs by adding Gaussian noise N(0, 0.3^2) to their function values. The test inputs run from −2 to 2 with a 0.05 interval. The other function has a discontinuity [17]: if 0 ≤ x ≤ 0.3, f2(x) = N(x; 0.6, 0.2^2) + N(x; 0.15, 0.05^2); if 0.3 < x ≤ 1, f2(x) = N(x; 0.6, 0.2^2) + N(x; 0.15, 0.05^2) + 4. For f2(x), we gather the training data with 50 collections. For each collection, we randomly select 60 inputs from [0, 1], then calculate their outputs by adding Gaussian noise N(0, 0.8^2) to their function values. The test inputs run from 0 to 1 with a 0.02 interval. The first experiment evaluates the estimation performance in comparison with the KFGP of [4]. We denote by SE-KFGP and SENN-KFGP the KFGP with covariance function kSE and the KFGP with covariance function kSE + kNN, respectively.
Similarly, SE-MPGP and SENN-MPGP are the MPGP with kSE and the MPGP with kSE + kNN. The number of particles in the MPGP is set to 10. The evaluation criteria are the test Normalized Mean Square Error (NMSE) and the test Mean Negative Log Probability (MNLP), as suggested in [3]. First, Figure 1 shows that the estimation performance of both KFGP and MPGP improves and tends to converge over time (panels (a-h)), since the previous estimation is incorporated into the current estimation by the recursive Bayesian filtering. Second, for both f1 and f2, the estimation of MPGP is better than KFGP via the NMSE and MNLP comparison in Figure 2. The KFGP uses offline-learned hyperparameters all the time; on the contrary, MPGP initializes the hyperparameters with those of KFGP and then learns the true hyperparameters online (panels (i-r) in Figure 1). Hence the MNLP of MPGP is much lower than that of KFGP.

[Figure 2: The NMSE and MNLP of KFGP and MPGP for f1, f2 over time.]

[Figure 3: The NMSE and MNLP of MPGP as a function of the number of particles. The first row is for f1, the second row is for f2.]
Finally, focusing on our MPGP alone, we find that SENN-MPGP is better than SE-MPGP, since SENN-MPGP takes the spatial nonstationarity into account. The second experiment illustrates the average performance of SE-MPGP and SENN-MPGP as the number of particles increases. For each number of particles, we run SE-MPGP and SENN-MPGP 5 times and compute the average NMSE and MNLP. From Figure 3, we find: First, as the number of particles increases, the NMSE and MNLP of SE-MPGP and SENN-MPGP decrease at the beginning and then converge, while the running time keeps increasing. The reason is that both the estimation accuracy and the computational load of particle filters grow with the number of particles. Second, the average performance of SENN-MPGP is better than SE-MPGP since it captures the spatial nonstationarity, but SENN-MPGP needs more running time since the hyperparameter vector to be inferred is larger. The third experiment compares our MPGP with the benchmarks. The state-of-the-art sparse GP methods we choose are the sparse pseudo-input Gaussian process (SPGP) [2] and the sparse spectrum Gaussian process (SSGP) [3]. Moreover, we also want to examine the robustness of our MPGP, i.e., whether the good estimation of our MPGP heavily depends on the order of training data collection. Hence, we randomly shuffle the order of the training subsets used before, then run SPGP with 5 pseudo inputs (5-SPGP), SSGP with 10 basis functions (10-SSGP), SE-MPGP with 5 particles (5-SE-MPGP), and SENN-MPGP with 5 particles (5-SENN-MPGP). Table 1: Benchmarks Comparison for Synthetic Datasets.
The NMSE_i, MNLP_i, RTime_i represent the NMSE, MNLP and running time for the function fi (i = 1, 2).

Method        NMSE1    MNLP1    RTime1      NMSE2    MNLP2    RTime2
5-SPGP        0.2243   0.5409   28.6418s    0.5445   1.5950   30.3578s
10-SSGP       0.0887   0.1606   18.8605s    0.1144   1.1208   10.2025s
5-SE-MPGP     0.0880   1.6318   12.5737s    0.1687   1.3524   12.4801s
5-SENN-MPGP   0.0881   0.1820   18.7513s    0.1289   1.1782   11.5909s

Table 2: Benchmarks Comparison. Data1 is the temperature dataset. Data2 is the pendulum dataset.

Data1         NMSE   MNLP   RTime      Data2          NMSE   MNLP    RTime
5-SPGP        0.48   1.62   181.3s     10-SPGP        0.61   1.98    16.54s
10-SSGP       0.27   1.33   97.16s     10-SSGP        1.04   10.85   23.59s
5-SE-MPGP     0.11   1.05   50.99s     20-SE-MPGP     0.63   2.20    7.04s
5-SENN-MPGP   0.10   1.16   59.25s     20-SENN-MPGP   0.58   2.12    8.60s

In Table 1, our 5-SE-MPGP mainly outperforms SPGP, except that its MNLP1 is worse than that of SPGP. The reason is that the synthetic functions are nonstationary but SE-MPGP uses a stationary SE kernel. Hence we run 5-SENN-MPGP with a nonstationary kernel to show that our MPGP is competitive with SSGP, and much better, with a shorter running time, than SPGP.

Global Surface Temperature Dataset: We present here a preliminary analysis of the Global Surface Temperature Dataset in January 2011 (http://data.giss.nasa.gov/gistemp/). We first gather the training data with 100 collections. For each collection, we randomly select 90 data points, where the input vector is the longitude-latitude location and the output is the temperature (°C). There are two test data sets: the first one is a grid test input set (Longitude: −180:40:180, Latitude: −90:20:90) that is used to show the estimated surface temperature; the second test input set (100 points) is randomly selected from the data website after obtaining all the training data. The first experiment shows the predicted surface temperature at the grid test inputs. We set the number of particles in SE-MPGP and SENN-MPGP to 20.
From Figure 4, the KFGP methods get stuck in local optima: SE-KFGP seems to underfit, since it does not model the cold region around the location (100, 50), while SENN-KFGP seems to overfit, since it unexpectedly models a cold region around (−100, −50). On the contrary, SE-MPGP and SENN-MPGP suitably fit the data set via the online hyperparameter learning. The second experiment evaluates the estimation error of our MPGP using the second test set. We first run all the methods to compute the NMSE and MNLP over the iterations. From the first row of Figure 5, the NMSE and MNLP of MPGP are lower than those of KFGP. Moreover, SENN-MPGP is much lower than SE-MPGP, which shows that SENN-MPGP successfully models the spatial nonstationarity of the temperature data. Then we vary the number of particles. For each number, we run SE-MPGP and SENN-MPGP 3 times to evaluate the average NMSE, MNLP and running time. It shows that SENN-MPGP fits the data better than SE-MPGP, but the trade-off is a longer running time. The third experiment compares our MPGP with the benchmarks. All the denotations are the same as in the third experiment on the simulated data. We also randomly shuffle the order of the training subsets for the robustness consideration. From Table 2, the comparison results show that our MPGP achieves a shorter running time with a better estimation performance than SPGP and SSGP. Pendulum Dataset: This is a small data set which contains 315 training points. In [3], it is mentioned that the SSGP model seems to overfit on this data due to the gradient ascent optimization. We are interested in whether our method can successfully capture the nonlinear property of this pendulum data. We first collect the training data 9 times, with 35 training points per collection. Then, 100 test points are randomly selected for evaluating the performance.
From Table 2, our SENN-MPGP obtains the estimation with the fastest speed and the smallest NMSE among all the methods, and its MNLP is competitive with SPGP.

[Figure 4: The temperature estimation at t = 100. The first row (from left to right): the temperature value bar, the full training observation plot, the grid test output estimation by SE-KFGP, SENN-KFGP, SE-MPGP, SENN-MPGP. The black crosses are the observations at t = 100. The second row (from left to right) is the estimation of the log hyperparameters (log(a0) to log(a4)).]

[Figure 5: The NMSE and MNLP evaluation. The first row: the NMSE and MNLP over iteration. The second row: the average NMSE, MNLP, and running time as a function of the number of particles.]

6 Conclusion

We have proposed a novel Bayesian filtering framework for GP regression, which is a fast and accurate online method.
Our MPGP framework not only estimates the function values successfully, but also provides a new technique for learning the unknown static hyperparameters by online estimation of their marginal posterior. The small training set at each iteration largely reduces the computational load, while the estimation performance improves over the iterations, since recursive filtering propagates the previous estimation to enhance the current one. In comparison with other benchmarks, we have shown that our MPGP provides a robust estimation with a competitive computational speed. In the future, it would be interesting to explore time-varying function estimation with our MPGP.

References

[1] C. E. Rasmussen, C. K. I. Williams, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA, 2006.
[2] E. Snelson, Z. Ghahramani, Sparse Gaussian processes using pseudo-inputs, in: NIPS, 2006, pp. 1257–1264.
[3] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, A. R. Figueiras-Vidal, Sparse spectrum Gaussian process regression, Journal of Machine Learning Research 11 (2010) 1865–1881.
[4] S. Reece, S. Roberts, An introduction to Gaussian processes for the Kalman filter expert, in: FUSION, 2010.
[5] R. M. Neal, Monte Carlo implementation of Gaussian process models for Bayesian regression and classification, Tech. rep., Department of Statistics, University of Toronto (1997).
[6] D. J. C. MacKay, Introduction to Gaussian processes, in: Neural Networks and Machine Learning, 1998, pp. 133–165.
[7] M. P. Deisenroth, Efficient reinforcement learning using Gaussian processes, Ph.D. thesis, Karlsruhe Institute of Technology (2010).
[8] J. Liu, M. West, Combined parameter and state estimation in simulation-based filtering, in: Sequential Monte Carlo Methods in Practice, 2001, pp. 197–223.
[9] P. Li, R. Goodall, V.
Kadirkamanathan, Estimation of parameters in a linear state space model using a Rao-Blackwellised particle filter, IEE Proceedings on Control Theory and Applications 151 (2004) 727–738.
[10] N. Kantas, A. Doucet, S. S. Singh, J. M. Maciejowski, An overview of sequential Monte Carlo methods for parameter estimation in general state space models, in: 15th IFAC Symposium on System Identification, 2009.
[11] A. Doucet, N. de Freitas, K. Murphy, S. Russell, Rao-Blackwellised particle filtering for dynamic Bayesian networks, in: UAI, 2000, pp. 176–183.
[12] N. de Freitas, Rao-Blackwellised particle filtering for fault diagnosis, in: IEEE Aerospace Conference Proceedings, 2002, pp. 1767–1772.
[13] T. Schön, F. Gustafsson, P.-J. Nordlund, Marginalized particle filters for mixed linear/nonlinear state-space models, IEEE Transactions on Signal Processing 53 (2005) 2279–2289.
[14] J. Ko, D. Fox, GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models, in: IROS, 2008, pp. 3471–3476.
[15] M. P. Deisenroth, R. Turner, M. F. Huber, U. D. Hanebeck, C. E. Rasmussen, Robust filtering and smoothing with Gaussian processes, IEEE Transactions on Automatic Control.
[16] I. DiMatteo, C. R. Genovese, R. E. Kass, Bayesian curve fitting with free-knot splines, Biometrika 88 (2001) 1055–1071.
[17] S. A. Wood, Bayesian mixture of splines for spatially adaptive nonparametric regression, Biometrika 89 (2002) 513–528.
Iterative Ranking from Pair-wise Comparisons Sahand Negahban Department of EECS Massachusetts Institute of Technology sahandn@mit.edu Sewoong Oh Department of IESE University of Illinois at Urbana Champaign swoh@illinois.edu Devavrat Shah Department of EECS Massachusetts Institute of Technology devavrat@mit.edu

Abstract

The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g. MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding 'scores' for each object (e.g. a player's rating) is of interest for understanding the intensity of the preferences. In this paper, we propose a novel iterative rank aggregation algorithm for discovering scores for objects from pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects, with edges present between two objects if they are compared; the scores turn out to be the stationary probabilities of this random walk. The algorithm is model independent. To establish the efficacy of our method, however, we consider the popular Bradley-Terry-Luce (BTL) model, in which each object has an associated score which determines the probabilistic outcomes of pairwise comparisons between objects. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. This, in essence, yields an order-optimal dependence on the number of samples required for our algorithm to learn the scores well. Indeed, the experimental evaluation shows that our (model independent) algorithm performs as well as the Maximum Likelihood Estimator of the BTL model and outperforms a recently proposed algorithm by Ammar and Shah [1].
1 Introduction

Rank aggregation is an important task in a wide range of learning and social contexts arising in recommendation systems, information retrieval, and sports and competitions. Given n items, we wish to infer relevancy scores or an ordering on the items based on partial orderings provided through many (possibly contradictory) samples. Frequently, the available data presented to us is in the form of comparisons: player A defeats player B; book A is purchased when books A and B are displayed (a bigger collection of books implies multiple pairwise comparisons); movie A is liked more than movie B. From such partial preferences in the form of comparisons, we frequently wish to deduce not only the order of the underlying objects, but also the scores associated with the objects themselves, so as to deduce the intensity of the resulting preference order. For example, the Microsoft TrueSkill engine assigns scores to online gamers based on the outcomes of (pairwise) games between players. Indeed, it assumes that each player has an inherent "skill", and the outcomes of the games are used to learn these skill parameters, which in turn lead to scores associated with each player. In most such settings, similar model-based approaches are employed. In this paper, we have set out with the following goal: develop an algorithm for the above stated problem which (a) is computationally simple, (b) works with available (comparison) data only and does not try to fit any model per se, (c) makes sense in general, and (d) if the data indeed obeys a reasonable model, then the algorithm should do as well as the best model-aware algorithm. The main result of this paper is an affirmative answer to all these questions. Related work. Most rating-based systems rely on users to provide explicit numeric scores for their interests.
While these assumptions have led to a flurry of theoretical research for item recommendations based on matrix completion [2, 3, 4], it is widely believed that numeric scores provided by individual users are generally inconsistent. Furthermore, in a number of learning contexts as illustrated above, it is simply impractical to ask a user to provide explicit scores. These observations have led to the need to develop methods that can aggregate such forms of ordering information into relevance ratings. In general, however, designing consistent aggregation methods can be challenging due in part to possible contradictions between individual preferences. For example, if we consider items A, B, and C, one user might prefer A to B, while another prefers B to C, and a third user prefers C to A. Such problems have been well studied as in the work by Condorcet [5]. In the celebrated work by Arrow [6], existence of a rank aggregation algorithm with reasonable sets of properties (or axioms) was shown to be impossible. In this paper, we are interested in a more restrictive setting: we have outcomes of pairwise comparisons between pairs of items, rather than a complete ordering as considered in [6]. Based on those pairwise comparisons, we want to obtain a ranking of items along with a score for each item indicating the intensity of the preference. One reasonable way to think about our setting is to imagine that there is a distribution over orderings or rankings or permutations of items and every time a pair of items is compared, the outcome is generated as per this underlying distribution. With this, our question becomes even harder than the setting considered by Arrow [6] as, in that work, effectively the entire distribution over permutations was already known! Indeed, such hurdles have not stopped the scientific community as well as practical designers from designing such systems. Chess rating systems and the more recent MSR TrueSkill system are prime examples. 
Our work falls precisely into this realm: design algorithms that work well in practice, make sense in general, and perhaps more importantly, have attractive theoretical properties under common comparative judgment models. With this philosophy in mind, in recent work, Ammar and Shah [1] have presented an algorithm that tries to achieve the goal with which we have set out. However, their algorithm requires information about comparisons between all pairs, and for each pair it requires the exact pairwise comparison ‘marginal’ with respect to the underlying distribution over permutations. Indeed, in reality, not all pairs of items can typically be compared, and the number of times each pair is compared is also very small. Therefore, while an important step is taken in [1], it stops short of achieving the desired goal. In somewhat related work by Braverman and Mossel [7], the authors present an algorithm that produces an ordering based on O(n log n) pair-wise comparisons on adaptively selected pairs. They assume that there is an underlying true ranking and one observes noisy comparison results. Each time a pair is queried, we are given the true ordering of the pair with probability 1/2 + γ for some γ > 0 which does not depend on the items being compared. One limitation of this model is that it does not capture the fact that in many applications, like chess matches, the outcome of a comparison very much depends on the opponents that are competing. Such considerations have naturally led to the study of noise models induced by parametric distributions over permutations. An important and landmark model in this class is called the Bradley-Terry-Luce (BTL) model [8, 9], which is also known as the Multinomial Logit (MNL) model (cf. [10]). It has been the backbone of many practical system designs including pricing in the airline industry [11]. Adler et al. [12] used such models to design adaptive algorithms that select the winner from a small number of rounds. 
Interestingly enough, the (near-)optimal performance of their adaptive algorithm for winner selection is matched by our non-adaptive (model independent) algorithm for assigning scores to obtain global rankings of all players. Our contributions. In this paper, we provide an iterative algorithm that takes the noisy comparison answers between a subset of all possible pairs of items as input and produces scores for each item as the output. The proposed algorithm has a nice intuitive explanation. Consider a graph with nodes/vertices corresponding to the items of interest (e.g. players). Construct a random walk on this graph where at each time, the random walk is likely to go from vertex i to vertex j if items i and j were ever compared; and if so, the likelihood of going from i to j depends on how often i lost to j. That is, the random walk is more likely to move to a neighbor who has more “wins”. How frequently this walk visits a particular node in the long run, or equivalently the stationary distribution, is the score of the corresponding item. Thus, effectively this algorithm captures preference of the given item versus all of the others, not just immediate neighbors: the global effect induced by transitivity of comparisons is captured through the stationary distribution. Such an interpretation of the stationary distribution of a Markov chain or a random walk has been an effective measure of relative importance of a node in a wide class of graph problems, popularly known as network centrality [13]. Notable examples of such network centralities include the random surfer model on the web graph underlying PageRank [14], which computes the relative importance of a web page, and a model of a random crawler in a peer-to-peer file-sharing network to assign a trust value to each peer in EigenTrust [15]. The computation of the stationary distribution of the Markov chain boils down to ‘power iteration’ with the transition matrix, lending itself to a nice iterative algorithm. 
Thus, in effect, we have produced an algorithm that (a) is computationally simple and iterative, (b) is model independent and works with the data only, and (c) intuitively makes sense. To establish rigorous properties of the algorithm, we analyze its performance under the BTL model described in Section 2.1. Formally, we establish the following result: given n items, when comparison results between randomly chosen O(n poly(log n)) pairs of them are produced as per an (unknown) underlying BTL model, the stationary distribution produced by our algorithm (asymptotically) matches the true score (induced by the BTL model). It should be noted that Ω(n log n) is a necessary number of (random) comparisons for any algorithm to even produce a consistent ranking (due to the connectivity threshold of a random bipartite graph). In that sense, we will see that up to a poly(log n) factor, our algorithm is optimal in terms of sample complexity. Indeed, the empirical experimental study shows that the performance of our algorithm is identical to the ML estimation of the BTL model. Furthermore, it handsomely outperforms other popular choices including the algorithm by [1]. Some remarks about our analytic technique. Our analysis boils down to studying the induced stationary distribution of the random walk or Markov chain corresponding to the algorithm. Like most such scenarios, the only hope to obtain meaningful results for such a ‘random noisy’ Markov chain is to relate it to the stationary distribution of a known Markov chain. Through recent concentration of measure results for random matrices and a comparison technique using Dirichlet forms for characterizing the spectrum of reversible/self-adjoint operators, along with the known expansion property of the random graph, we obtain the eventual result. Indeed, it is the consequence of such powerful results that leads to near-optimal analytic results. The remainder of this paper is organized as follows. 
In Section 2 we will concretely introduce our model, the problem, and our algorithm. In Section 3 we will discuss our main theoretical results. The proofs will be presented in Section 4. Notation. We use C, C′, etc. to denote generic numerical constants. We use $A^T$ to denote the transpose of a matrix. The Euclidean norm of a vector is denoted by $\|x\| = \sqrt{\sum_i x_i^2}$, and the operator norm of a linear operator is denoted by $\|A\|_2 = \max_x x^T A x / x^T x$. Also define $[n] = \{1, 2, \ldots, n\}$ to be the set of all integers from 1 to n. 2 Model, Problem Statement, and Algorithm We now present a concrete exposition of our underlying probabilistic model and our problem. We then present our explicit random walk approach to ranking. 2.1 Bradley-Terry-Luce model for comparative judgment In this section we discuss our model of comparisons between various items. As alluded to above, for the purpose of establishing analytic properties of the algorithm, we will assume comparisons are governed by the BTL model of pairwise comparisons. However, the algorithm itself operates with data generated in an arbitrary manner. To begin with, there are n items of interest, represented as $[n] = \{1, \ldots, n\}$. We shall assume that for each item $i \in [n]$ there is an associated weight score $w_i \in \mathbb{R}_+$ (i.e., a strictly positive real number). Hence, we may consider the vector $w \in \mathbb{R}^n_+$ to be the associated weight vector of all items. Given a pair of items i and j, we will let $Y_{ij}^l$ be 1 if j is preferred over i and 0 otherwise during the $l$-th comparison, for $1 \le l \le k$, where k is the total number of comparisons for the pair. Under the BTL model we assume that $$P(Y_{ij}^l = 1) = \frac{w_j}{w_i + w_j}. \quad (1)$$ Furthermore, conditioned on the score vector w, we assume that the variables $Y_{ij}^l$ are independent for all i, j, and l. We further assume that, given some item i, we will compare item j to i with probability d/n. In our setting d will be poly-logarithmic in n. 
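As a concrete illustration of this sampling model, here is a minimal simulation sketch (the function name and the example constants are ours, not from the paper): each pair is included with probability d/n, and each included pair is compared k times with outcomes drawn from the BTL probability in (1).

```python
import numpy as np

def sample_btl_comparisons(w, d, k, rng):
    """Simulate the sampling model: each pair (i, j) is selected with
    probability d/n, and each selected pair is compared k times, with
    j beating i with probability w[j] / (w[i] + w[j]) as in (1)."""
    n = len(w)
    wins = np.zeros((n, n))                  # wins[i, j]: times j beat i
    compared = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < d / n:
                y = rng.binomial(k, w[j] / (w[i] + w[j]))
                wins[i, j], wins[j, i] = y, k - y
                compared[i, j] = compared[j, i] = True
    return wins, compared

rng = np.random.default_rng(0)
w = np.array([1.0, 2.0, 4.0])
wins, compared = sample_btl_comparisons(w, d=3.0, k=1000, rng=rng)
```

With d = n, every pair is compared, and the empirical win fractions approach the BTL probabilities as k grows.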
This model is a natural one to consider because over a population of individuals the comparisons cannot be adaptively selected. A more realistic model might incorporate selecting various items with different distributions: for example, the Netflix dataset demonstrates skews in the sampling distribution for different films [16]. Thus, given this model our goal is to recover the weight vector w given such pairwise comparisons. We now discuss our method for computing the scores $w_i$. 2.2 Random walk approach to ranking In our setting, we will assume that $a_{ij}$ represents the fraction of times object j has been preferred to object i, for example the fraction of times chess player j has defeated player i. Given the notation above, we have that $a_{ij} = (1/k) \sum_{l=1}^{k} Y_{ij}^l$. Consider a random walk on a weighted directed graph $G = ([n], E, A)$, where a pair $(i, j) \in E$ if and only if the pair has been compared. The edge weights are defined from the outcomes of the comparisons: $A_{ij} = a_{ij}/(a_{ij} + a_{ji})$ and $A_{ji} = a_{ji}/(a_{ij} + a_{ji})$. We let $A_{ij} = 0$ if the pair has not been compared. Note that by the Strong Law of Large Numbers, as the number of comparisons $k \to \infty$, the quantity $A_{ij}$ converges to $w_j/(w_i + w_j)$ almost surely. A random walk can be represented by a time-independent transition matrix P, where $P_{ij} = P(X_{t+1} = j \mid X_t = i)$. By definition, the entries of a transition matrix are non-negative and satisfy $\sum_j P_{ij} = 1$. One way to define a valid transition matrix of a random walk on G is to scale all the edge weights by $1/d_{\max}$, where we define $d_{\max}$ as the maximum out-degree of a node. This ensures that each row-sum is at most one. Finally, to ensure that each row-sum is exactly one, we add a self-loop to each node. More concretely, $$P_{ij} = \begin{cases} \frac{1}{d_{\max}} A_{ij} & \text{if } i \neq j, \\ 1 - \frac{1}{d_{\max}} \sum_{k \neq i} A_{ik} & \text{if } i = j. \end{cases} \quad (2)$$ The choice to construct our random walk as above is not arbitrary. In an ideal setting with infinite samples ($k \to \infty$) the transition matrix P would define a reversible Markov chain. 
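A sketch of this construction (helper names are ours): wins[i, j] counts how often j beat i, so that $A_{ij}$ is the fraction of comparisons between i and j that j won, and the transition matrix follows equation (2).

```python
import numpy as np

def transition_matrix(wins, compared):
    """Build the transition matrix of equation (2): for compared pairs,
    A[i, j] = wins[i, j] / (wins[i, j] + wins[j, i]); edge weights are
    scaled by 1/d_max and a self-loop absorbs the remaining mass."""
    n = wins.shape[0]
    A = np.zeros((n, n))
    totals = wins + wins.T
    mask = compared & (totals > 0)
    A[mask] = wins[mask] / totals[mask]
    d_max = max(int(np.count_nonzero(compared, axis=1).max()), 1)
    P = A / d_max
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    return P
```

Each row has at most d_max off-diagonal entries, each at most 1, so after scaling the row sums are at most one and the self-loop makes them exactly one.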
Recall that a Markov chain is reversible if it satisfies the detailed balance equation: there exists $v \in \mathbb{R}^n_+$ such that $v_i P_{ij} = v_j P_{ji}$ for all i, j; and in that case, $\pi \in \mathbb{R}^n_+$ defined as $\pi_i = v_i/\sum_j v_j$ is its unique stationary distribution. In the ideal setting (say $k \to \infty$), we will have $P_{ij} \equiv \tilde P_{ij} = (1/d_{\max}) \, w_j/(w_i + w_j)$. That is, the random walk will move from state i to state j with probability equal to the chance that item j is preferred to item i. In such a setting, it is clear that $v = w$ satisfies the reversibility conditions. Therefore, under these ideal conditions it immediately follows that the vector $w/\sum_i w_i$ acts as a valid stationary distribution for the Markov chain defined by $\tilde P$, the ideal matrix. Hence, as long as the graph G is connected and at least one node has a self-loop, we are guaranteed that our graph has a unique stationary distribution proportional to w. If the Markov chain is reversible then we may apply the spectral analysis of self-adjoint operators, which is crucial in the analysis when we repeatedly apply the operator $\tilde P$. In our setting, the matrix P is a noisy version (due to finite sample error) of the ideal matrix $\tilde P$ discussed above. Therefore, it naturally suggests the following algorithm as a surrogate. We estimate the probability distribution obtained by applying the matrix P repeatedly, starting from any initial condition. Precisely, let $p_t(i) = P(X_t = i)$ denote the distribution of the random walk at time t, with $p_0 = (p_0(i)) \in \mathbb{R}^n_+$ an arbitrary starting distribution on [n]. Then, $$p_{t+1}^T = p_t^T P. \quad (3)$$ Regardless of the starting distribution, when the transition matrix has a unique top eigenvalue, the random walk always converges to a unique distribution: the stationary distribution $\pi = \lim_{t\to\infty} p_t$. In linear algebra terms, this stationary distribution $\pi$ is the top left eigenvector of P, which makes computing $\pi$ a simple eigenvector computation. 
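The power iteration (3) is straightforward to sketch in code (function name and example weights are ours). Here it is run on the ideal chain $\tilde P$ for a small weight vector, whose stationary distribution should come out proportional to w:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=100_000):
    """Power iteration of equation (3): p_{t+1}^T = p_t^T P, started from
    the uniform distribution and run until the iterates stop moving."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        p_next = p @ P
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next

# Ideal chain for w = (1, 2, 4): P_ij = (1/d_max) w_j / (w_i + w_j), d_max = 2.
w = np.array([1.0, 2.0, 4.0])
P = w[None, :] / (w[:, None] + w[None, :]) / 2.0
np.fill_diagonal(P, 0.0)
np.fill_diagonal(P, 1.0 - P.sum(axis=1))
pi = stationary_distribution(P)
```

For this reversible chain the iterates converge to w / sum(w), matching the detailed-balance argument above.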
Formally, we state the algorithm, which assigns a numerical score to each node and which we call Rank Centrality:

Rank Centrality
Input: $G = ([n], E, A)$
Output: rank $\{\pi(i)\}_{i\in[n]}$
1: Compute the transition matrix P according to (2);
2: Compute the stationary distribution $\pi$.

The stationary distribution of the random walk is a fixed point of the following equation: $$\pi(i) = \sum_j \pi(j) \frac{A_{ji}}{\sum_\ell A_{i\ell}}.$$ This suggests an alternative intuitive justification: an object receives a high rank if it has been preferred to other high-ranking objects or if it has been preferred to many objects. One key question remains: does P have a well-defined stationary distribution? As discussed earlier, when G is connected, the idealized transition matrix $\tilde P$ has a stationary distribution with the desired properties. But due to noise, P may not be reversible and the arguments for the ideal $\tilde P$ do not apply to our setting. Indeed, it is the finite sample error that governs the noise. Therefore, by analyzing the effect of this noise (and hence the finite samples), it is likely that we can obtain the error bound on the performance of the algorithm. As an important contribution of this work, we will show that even the iterations (cf. (3)) induced by P are close enough to those induced by $\tilde P$. Subsequently, we can guarantee that the iterative algorithm will converge to a solution that is close to the ideal stationary distribution. 3 Main Results Our main result, Theorem 1, provides an upper bound on estimating the stationary distribution given the observation model presented above. The results demonstrate that even with random sampling we can estimate the underlying score with high probability with good accuracy. The bounds are presented as the rescaled Euclidean norm between our estimate $\pi$ and the underlying stationary distribution $\tilde\pi$. This error metric provides us with a means to quantify the relative certainty in guessing if one item is preferred over another. 
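As a quick numerical sanity check of this fixed-point equation, consider the ideal comparison matrix $A_{ij} = w_j/(w_i + w_j)$, for which the stationary distribution is known to be proportional to w (a toy example of ours):

```python
import numpy as np

# Check the fixed point pi(i) = sum_j pi(j) * A[j, i] / sum_l A[i, l]
# on the ideal comparison matrix A[i, j] = w_j / (w_i + w_j), whose
# stationary distribution is proportional to w (example weights are ours).
w = np.array([1.0, 2.0, 4.0])
n = len(w)
A = w[None, :] / (w[:, None] + w[None, :])
np.fill_diagonal(A, 0.0)
pi = w / w.sum()
rhs = np.array([(pi * A[:, i]).sum() / A[i].sum() for i in range(n)])
```

Plugging pi into the right-hand side returns pi itself, confirming it is a fixed point of the equation.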
Furthermore, producing such scores is ubiquitous [17], as they may also be used to calculate the desired rankings. After presenting our main theoretical result we will then provide simulations demonstrating the empirical performance of our algorithm in different contexts. 3.1 Error bound in stationary distribution recovery via Rank Centrality The theorem below presents our main recovery theorem under the sampling assumptions described above. It is worth noting that while the result presented below is for the specific sampling model described above, it can be extended to general graphs as long as the spectral gap of the corresponding Markov chain is well behaved. We will discuss this point further in the sequel. Theorem 1. Assume that, among n items, each pair is chosen with probability d/n and for each chosen pair we collect the outcomes of k comparisons according to the BTL model. Then, there exist positive universal constants C, C′, and C′′ such that when $d \ge C(\log n)^2$ and $kd \ge C b^5 \log n$, the following bound on the error rate holds with probability at least $1 - C''/n^3$:
$$\frac{\|\pi - \tilde\pi\|}{\|\tilde\pi\|} \le C' b^3 \sqrt{\frac{\log n}{kd}},$$ where $\tilde\pi(i) = w_i/\sum_\ell w_\ell$ and $b \equiv \max_{i,j} w_i/w_j$. Remarks. Some remarks are in order. First, the above result implies that as long as we choose $d = \Theta(\log^2 n)$ and $k = \omega(1)$ (i.e., large enough, say $k = \Theta(\log n)$), the error goes to 0 (with $k = \Theta(\log n)$, it goes down at rate $1/\sqrt{\log n}$) as n increases. Since we are sampling each of the $\binom{n}{2}$ pairs with probability d/n and then sampling them k times, we obtain $O(n \log^3 n)$ (with $k = \Theta(\log n)$) comparisons in total. Due to classical results on Erdos-Renyi graphs, the induced graph G is connected with high probability only when the total number of pairs sampled scales as $\Omega(n \log n)$; we need at least that many comparisons. Thus, our result can be sub-optimal only up to $\log^2 n$ ($\log^{1+\epsilon} n$ if $k = \log^\epsilon n$). Second, the b parameter should be treated as constant. It is the dynamic range in which we are trying to resolve the uncertainty between scores. If b were scaling with n, then it would be really easy to differentiate scores of items that are at the two opposite ends of the dynamic range; in which case one could focus on differentiating scores of items that have their parameter values nearby. Therefore, the interesting and challenging regime is where b is constant and not scaling. Finally, observe the interesting consequence that under the conditions on d, since the induced distribution $\pi$ is close to $\tilde\pi$, it implies connectivity of G. Thus, the analysis of our algorithm provides an alternative proof of connectivity in an Erdos-Renyi graph (of course, by using heavy machinery!). 3.2 Experimental Results Under the BTL model, define an error metric of an estimated ordering $\sigma$ as the weighted sum of pairs (i, j) whose ordering is incorrect: $$D_w(\sigma) = \left\{ \frac{1}{2n\|w\|^2} \sum_{i<j} (w_i - w_j)^2 \, \mathbb{I}\{(w_i - w_j)(\sigma_i - \sigma_j) > 0\} \right\}^{1/2},$$ where $\mathbb{I}(\cdot)$ is an indicator function. 
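A direct implementation sketch of this metric (function name is ours): here sigma[i] is item i's rank position, so that, per the indicator above, a pair counts as an error when the higher-weight item receives the larger, i.e. worse, rank position.

```python
import numpy as np

def dw_error(w, sigma):
    """Weighted pairwise disagreement D_w(sigma): sigma[i] is item i's
    rank position, and a pair (i, j) counts as an error when
    (w_i - w_j)(sigma_i - sigma_j) > 0, i.e. a higher-weight item is
    ranked worse.  Follows the displayed formula above."""
    w = np.asarray(w, dtype=float)
    r = np.asarray(sigma, dtype=float)
    n = len(w)
    dw = w[:, None] - w[None, :]
    ds = r[:, None] - r[None, :]
    errors = (dw * ds > 0)
    # the full symmetric matrix double-counts each unordered pair i < j
    total = (dw ** 2 * errors).sum() / 2.0
    return np.sqrt(total / (2 * n * np.dot(w, w)))
```

A perfect ordering gives zero error, while mistakes between items with very different weights are penalized most.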
This is a more natural error metric compared to the Kemeny distance, which is an unweighted version of the above sum, since $D_w(\cdot)$ is less sensitive to errors between pairs with similar weights. Further, assuming without loss of generality that w is normalized such that $\sum_i w_i = 1$, the next lemma connects the error in $D_w(\cdot)$ to the bound provided in Theorem 1. Hence, the same upper bound holds for the $D_w$ error. Due to space constraints, we refer to a longer version of this paper for a proof of this lemma. Lemma 3.1. Let $\sigma$ be an ordering of n items induced by a scoring $\pi$. Then, $D_w(\sigma) \le \|w - \pi\|/\|w\|$. For a fixed n = 400 and a fixed b = 10, Figure 1 illustrates how the error scales with two problem parameters: varying the number of comparisons per pair with fixed $d = 10 \log n$ (left) and varying the sampling probability with fixed k = 32 (right). The ML estimator directly maximizes the likelihood assuming the BTL model [18]. If we reparameterize the problem so that $\theta_i = \log(w_i)$, then we obtain our estimates $\hat\theta$ by solving the convex program $$\hat\theta \in \arg\min_\theta \sum_{(i,j)\in E} \sum_{l=1}^{k} \log(1 + \exp(\theta_j - \theta_i)) - Y_{ij}^l (\theta_j - \theta_i),$$ which is pairwise logistic regression. This choice is optimal in the asymptotic setting; however, for finite samples there do not exist theoretical guarantees for recovering the transformed scores $\theta_i$. The method Count Wins scores an item by counting the number of wins divided by the total number of comparisons [1]. Ratio Matrix assigns scores according to the top eigenvector of a matrix whose (i, j)-th entry is $a_{ij}/a_{ji}$ [19]. As we see in Figure 1, the error achieved by our Random Walk approach is comparable to that of the ML estimator, and vanishes at the rate of $1/\sqrt{k}$ as predicted by our main result. Interestingly, for fixed d, both the Count Wins and Ratio Matrix algorithms have strictly positive error even if we take $k \to \infty$. The figure on the right illustrates that the error scales as $1/\sqrt{d}$, as expected from our main result. 4 Proofs We may now present the proof of Theorem 1. 
As previously alluded to, the statement of Theorem 1 can be made more general. The result that we presented is a specific instance of a more general lemma that we state below, which shows that our algorithm enjoys convergence properties that result in useful upper bounds.

Figure 1: Average error $D_w(\sigma)$ of orderings from four rank aggregation algorithms (Ratio Matrix, Count Wins, Rank Centrality, ML estimate), averaged over 20 instances. In the figure on the left, d and n are fixed while k increases. The figure on the right takes k = 32 fixed and lets d increase.

The lemma is made general and uses standard techniques of spectral theory. The main difficulty arises in establishing that the Markov chain P satisfies certain properties that we will discuss below. In order to show that these properties hold we must rely on the specific model, which allows us to ultimately establish error bounds that hold with high probability. In what follows we present the lemma and defer the proofs of certain technical details to the longer version of the paper. 4.1 Algorithm convergence In this section, we characterize the error rate achieved by our ranking algorithm. Given the random Markov chain P, where the randomness comes from the outcome of the comparisons, we will show that it does not deviate too much from its expectation $\tilde P$, which we recall is defined as $$\tilde P_{ij} = \begin{cases} \frac{1}{d_{\max}} \frac{w_j}{w_i + w_j} & \text{if } i \neq j, \\ 1 - \frac{1}{d_{\max}} \sum_{\ell \neq i} \frac{w_\ell}{w_i + w_\ell} & \text{if } i = j \end{cases}$$ for all $(i, j) \in E$, and $\tilde P_{ij} = 0$ otherwise. Recall from the discussion following equation (2) that the transition matrix P used in our ranking algorithm has been carefully chosen such that the corresponding expected transition matrix $\tilde P$ has two important properties. First, the stationary distribution of $\tilde P$, which we denote by $\tilde\pi$, is proportional to the weight vector w. 
Furthermore, when the graph is connected and has self-loops (of which at least one exists), this Markov chain is irreducible and aperiodic, so that the stationary distribution is unique. The next important property of $\tilde P$ is that it is reversible: $\tilde\pi(i) \tilde P_{ij} = \tilde\pi(j) \tilde P_{ji}$. This observation implies that the operator $\tilde P$ is symmetric in an appropriately defined inner product space. The symmetry of the operator $\tilde P$ will be crucial in applying ideas from spectral analysis to prove our main results. Let $\Delta$ denote the fluctuation of the transition matrix around its mean, such that $\Delta \equiv P - \tilde P$. The following lemma bounds the deviation of the Markov chain after t steps in terms of two important quantities: the spectral radius of the fluctuation $\|\Delta\|_2$ and the spectral gap $1 - \lambda_{\max}(\tilde P)$, where $$\lambda_{\max}(\tilde P) \equiv \max\{\lambda_2(\tilde P), -\lambda_n(\tilde P)\}.$$ Lemma 4.1. For any Markov chain $P = \tilde P + \Delta$ with a reversible Markov chain $\tilde P$, let $p_t$ be the distribution of the Markov chain P when started with initial distribution $p_0$. Then,
$$\frac{\|p_t - \tilde\pi\|}{\|\tilde\pi\|} \le \rho^t \, \frac{\|p_0 - \tilde\pi\|}{\|\tilde\pi\|} \sqrt{\frac{\tilde\pi_{\max}}{\tilde\pi_{\min}}} + \frac{1}{1 - \rho} \|\Delta\|_2 \, \frac{\tilde\pi_{\max}}{\tilde\pi_{\min}}, \quad (4)$$ where $\tilde\pi$ is the stationary distribution of $\tilde P$, $\tilde\pi_{\min} = \min_i \tilde\pi(i)$, $\tilde\pi_{\max} = \max_i \tilde\pi(i)$, and $\rho = \lambda_{\max}(\tilde P) + \|\Delta\|_2 \sqrt{\tilde\pi_{\max}/\tilde\pi_{\min}}$. The above result provides a general mechanism for establishing error bounds between an estimated stationary distribution $\pi$ and the desired stationary distribution $\tilde\pi$. It is worth noting that the result only requires control on the quantities $\|\Delta\|_2$ and $1 - \rho$. We may now state two technical lemmas that provide control on the quantities $\|\Delta\|_2$ and $1 - \rho$, respectively. Lemma 4.2. Under the assumptions of Theorem 1, the error matrix $\Delta = P - \tilde P$ satisfies $$\|\Delta\|_2 \le C \sqrt{\frac{\log n}{kd}}$$ for some positive universal constant C, with probability at least $1 - 3n^{-4}$. The next lemma provides our desired bound on $1 - \rho$. Lemma 4.3. Under the assumptions of Theorem 1, the spectral radius satisfies $1 - \rho \ge C'/b^2$ with probability at least $1 - n^{-c}$, for some positive universal constants C′ and c. The constant c can be made as large as we want by increasing the constant C in $d \ge C \log n$. With the above results in hand we may now proceed with the proof of Theorem 1. When there is a positive spectral gap ($\rho < 1$), the first term in (4) vanishes as t grows. The rest of the first term is bounded and independent of t. Formally, we have $$\tilde\pi_{\max}/\tilde\pi_{\min} \le b, \qquad \|\tilde\pi\| \ge 1/\sqrt{n}, \qquad \|p_0 - \tilde\pi\| \le 2,$$ by the assumption that $\max_{i,j} w_i/w_j \le b$ and the fact that $\tilde\pi(i) = w_i/\sum_j w_j$. Hence, the error between the distribution at the t-th iteration $p_t$ and the true stationary distribution $\tilde\pi$ is dominated by the second term in equation (4). Therefore, in order to finish the proof of Theorem 1 we require bounds on $\|\Delta\|_2$ and $1 - \rho$. We recall that by Lemma 4.2 we have $\|\Delta\|_2 \le C\sqrt{\log n/(kd)}$ and from Lemma 4.3 that there is a positive spectral gap $1 - \rho \ge C'/b^2$ for some numerical constants C and C′. Given these observations, the dominant second term in equation (4) is bounded by $$\lim_{t\to\infty} \frac{\|p_t - \tilde\pi\|}{\|\tilde\pi\|} \le C b^3 \sqrt{\frac{\log n}{kd}}.$$ This finishes the proof of Theorem 1. 5 Discussion In this paper, we developed a novel iterative rank aggregation algorithm for discovering scores of objects given pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects, with edges present between two objects if they are compared; the scores turn out to be the stationary probability of this random walk. In light of recent works on network centrality, which are graph score functions primarily based on random walks, we call this algorithm Rank Centrality. The algorithm is model independent. We also established the efficacy of the algorithm by analyzing its performance when data is generated as per the popular Bradley-Terry-Luce (BTL) model. We have obtained an analytic bound on the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. As shown, these lead to order-optimal dependence on the number of samples required to learn the scores well by our algorithm. The experimental evaluation shows that our (model independent) algorithm performs as well as the Maximum Likelihood Estimator of the BTL model and outperforms other known competitors, including the recently proposed algorithm by Ammar and Shah [1]. Given the simplicity of the algorithm, the analytic guarantees, and the wide utility of the problem of rank aggregation, we believe that this algorithm will be of great practical value. References [1] A. Ammar and D. Shah. In 49th Annual Allerton Conference on Communication, Control, and Computing, pages 776–783, September 2011. [2] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009. [3] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, June 2010. [4] S. Negahban and M. J. Wainwright. 
Restricted strong convexity and (weighted) matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 2012. To appear; posted at http://arxiv.org/abs/1009.2118. [5] M. Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. l'Imprimerie Royale, 1785. [6] K. J. Arrow. Social Choice and Individual Values. Yale University Press, 1963. [7] M. Braverman and E. Mossel. Noisy sorting without resampling. In Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms, SODA '08, pages 268–276. Society for Industrial and Applied Mathematics, 2008. [8] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952. [9] D. R. Luce. Individual Choice Behavior. Wiley, New York, 1959. [10] D. McFadden. Conditional logit analysis of qualitative choice behavior. Frontiers in Econometrics, pages 105–142, 1973. [11] K. T. Talluri and G. Van Ryzin. The Theory and Practice of Revenue Management. Springer, 2005. [12] M. Adler, P. Gemmell, M. Harchol-Balter, R. M. Karp, and C. Kenyon. Selection in the presence of noise: the design of playoff systems. In Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms, SODA '94, pages 564–572. Society for Industrial and Applied Mathematics, 1994. [13] M. E. J. Newman. Networks: An Introduction. Oxford University Press, 2010. [14] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Seventh International World-Wide Web Conference (WWW 1998), 1998. [15] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina. The EigenTrust algorithm for reputation management in P2P networks. In Proceedings of the 12th international conference on World Wide Web, WWW '03, pages 640–651, New York, NY, USA, 2003. ACM. [16] R. Salakhutdinov and N. Srebro. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. 
Technical Report abs/1002.2780v1, Toyota Institute of Technology, 2010. [17] J. C. Duchi, L. Mackey, and M. I. Jordan. On the consistency of ranking algorithms. In Proceedings of the ICML Conference, Haifa, Israel, June 2010. [18] L. R. Ford Jr. Solution of a ranking problem from binary comparisons. The American Mathematical Monthly, 64(8):28–33, 1957. [19] T. L. Saaty. Decision-making with the AHP: Why is the principal eigenvector necessary. European Journal of Operational Research, 145:85–91, 2003.
2012
Training sparse natural image models with a fast Gibbs sampler of an extended state space Lucas Theis Werner Reichardt Centre for Integrative Neuroscience lucas@bethgelab.org Jascha Sohl-Dickstein Redwood Center for Theoretical Neuroscience jascha@berkeley.edu Matthias Bethge Werner Reichardt Centre for Integrative Neuroscience matthias@bethgelab.org Abstract We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomplete linear models. Particular emphasis is placed on statistical image modeling, where overcomplete models have played an important role in discovering sparse representations. Our Gibbs sampler is faster than general purpose sampling schemes while also requiring no tuning as it is free of parameters. Using the Gibbs sampler and a persistent variant of expectation maximization, we are able to extract highly sparse distributions over latent sources from data. When applied to natural images, our algorithm learns source distributions which resemble spike-and-slab distributions. We evaluate the likelihood and quantitatively compare the performance of the overcomplete linear model to its complete counterpart as well as a product of experts model, which represents another overcomplete generalization of the complete linear model. In contrast to previous claims, we find that overcomplete representations lead to significant improvements, but that the overcomplete linear model still underperforms other models. 1 Introduction Here we study learning and inference in the overcomplete linear model given by $$x = As, \qquad p(s) = \prod_i f_i(s_i), \quad (1)$$ where $A \in \mathbb{R}^{M \times N}$, $N \ge M$, and each marginal source distribution $f_i$ may depend on additional parameters. Our goal is to find parameters which maximize the model's log-likelihood, $\log p(x)$, for a given set of observations x. Most of the literature on overcomplete linear models assumes observations corrupted by additive Gaussian noise, that is, $x = As + \varepsilon$ for a Gaussian distributed random variable $\varepsilon$. 
Note that this is a special case of the model discussed here, as we can always represent this noise by making some of the sources Gaussian. When the observations are image patches, the source distributions $f_i(s_i)$ are typically assumed to be sparse or leptokurtotic [e.g., 2, 20, 28]. Examples include the Laplace distribution, the Cauchy distribution, and Student's t-distribution. A large family of leptokurtotic distributions which also contains the aforementioned distributions as a special case is formed by Gaussian scale mixtures (GSMs), $$f_i(s_i) = \int_0^\infty g_i(\lambda_i) \, \mathcal{N}(s_i; 0, \lambda_i^{-1}) \, d\lambda_i, \quad (2)$$ where $g_i(\lambda_i)$ is a univariate density over precisions $\lambda_i$.

Figure 1: A: In the noiseless overcomplete linear model, the posterior distribution over hidden sources s lives on a linear subspace. The two parallel lines indicate two different subspaces for different values of x. For sparse source distributions, the posterior will generally be heavy-tailed and multimodal, as can be seen on the right. B: A graphical model representation of the overcomplete linear model extended by two sets of auxiliary variables (Equations 2 and 3). We perform blocked Gibbs sampling between λ and z to sample from the posterior distribution over all latent variables given an observation x. For a given λ, the posterior over z becomes Gaussian, while for given z, the posterior over λ becomes factorial and is thus easy to sample from.

In the following, we will concentrate on linear models whose marginal source distributions can be represented as GSMs. For a detailed description of the representational power of GSMs, see Andrews and Mallows' paper [1]. Despite the apparent simplicity of the linear model, inference over the latent variables is computationally hard except for a few special cases, such as when all sources are Gaussian distributed. 
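The classic example is the Laplace distribution: it arises from (2) when the variance $v = \lambda^{-1}$ is exponentially distributed with mean $2b^2$ (a standard identity, not stated in the paper). The sketch below checks this numerically by integrating the mixture; the substitution v = t² is our trick to tame the $1/\sqrt{v}$ singularity for plain trapezoidal quadrature.

```python
import numpy as np

def gsm_mixture_pdf(s, b, num=200_000):
    """Numerically evaluate the scale mixture of equation (2) with an
    exponential density (mean 2*b**2) on the variance v = 1/lambda.
    Substituting v = t**2 (dv = 2t dt) removes the 1/sqrt(v) factor,
    so the integrand is smooth and trapezoidal integration is accurate."""
    t = np.linspace(0.0, 15.0 * b, num)
    v = np.maximum(t ** 2, 1e-300)                 # avoid 0/0 at t = 0
    gauss = np.exp(-np.asarray(s)[:, None] ** 2 / (2 * v[None, :]))
    mixing = np.exp(-v / (2 * b ** 2)) / (2 * b ** 2)
    return np.trapz(gauss * mixing[None, :] * 2 / np.sqrt(2 * np.pi), t, axis=1)

b = 1.5
s = np.array([0.0, 0.5, 1.0, 3.0])
mixture = gsm_mixture_pdf(s, b)
laplace = np.exp(-np.abs(s) / b) / (2 * b)         # Laplace density, scale b
```

The mixture evaluates, up to quadrature error, to the Laplace density, illustrating how heavy-tailed marginals hide a conditionally Gaussian structure that the blocked Gibbs sampler exploits.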
In particular, the posterior distribution over sources p(s | x) is constrained to a linear subspace and can have multiple modes with heavy tails (Figure 1A). Inference can be simplified by assuming additive Gaussian noise, constraining the source distributions to be log-concave or making crude approximations to the posterior. Here, however, we would like to exhaust the full potential of the linear model. On this account, we use Markov chain Monte Carlo (MCMC) methods to obtain samples with which we represent the posterior distribution. While computationally more demanding than many other methods, this allows us, at least in principle, to approximate the posterior to arbitrary precision. Other approximations often introduce strong biases and preclude learning of meaningful source distributions. Using MCMC, on the other hand, we can study the model's optimal sparseness and overcompleteness level in a more objective fashion as well as evaluate the model's log-likelihood. However, multiple modes and heavy tails also pose challenges to MCMC methods. General purpose methods are therefore likely to be slow. In the following, we will describe an efficient blocked Gibbs sampler which exploits the specific structure of the sparse linear model.

2 Sampling and inference

In this section, we first review the nullspace sampling algorithm of Chen and Wu [4], which solves the problem of sampling from a linear subspace in the noiseless case of the overcomplete linear model. We then introduce an additional set of auxiliary variables which leads to an efficient blocked Gibbs sampler.

2.1 Nullspace sampling

The basic idea behind the nullspace sampling algorithm is to extend the overcomplete linear model by an additional set of variables z which essentially makes it complete (Figure 1B),

[x; z] = [A; B] s,  (3)

where B ∈ R^{(N−M)×N} and square brackets denote concatenation.
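Numerically, a basis B for the nullspace of A can be read off the right singular vectors of A, and the sources are then exactly recoverable from the pair (x, z) via the pseudoinverses (Equation 4). A minimal NumPy sketch (the dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 7                       # observed and latent dimensionality

A = rng.standard_normal((M, N))

# Rows M..N-1 of V^T span the nullspace of A, so A @ B.T == 0.
_, _, Vt = np.linalg.svd(A)
B = Vt[M:]                        # (N - M) x N, orthonormal rows

s = rng.laplace(size=N)           # sparse sources
x = A @ s                         # observation
z = B @ s                         # auxiliary nullspace coordinates

# Equation 4: recover the sources exactly from (x, z).
s_hat = np.linalg.pinv(A) @ x + np.linalg.pinv(B) @ z
```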
If in addition to our observation x we knew the unobserved variables z, we could perform inference as in the complete case by simply solving the above linear system, provided the concatenation of A and B is invertible. If the rows of A and B are orthogonal, AB⊤ = 0, or, in other words, B spans the nullspace of A, we have

s = A⁺x + B⁺z,  (4)

where A⁺ and B⁺ are the pseudoinverses [24] of A and B, respectively. The marginal distributions over x and s do not depend on our choice of B, which means we can choose B freely. An orthogonal basis spanning the nullspace of A can be obtained from A's singular value decomposition [4]. Making use of Equation 4, we can equally well try to obtain samples from the posterior p(z | x) instead of p(s | x). In contrast to the latter, this distribution has full support and is not restricted to just a linear subspace,

p(z | x) ∝ p(z, x) ∝ p(s) = ∏i fi(wi⊤x + vi⊤z),  (5)

where wi⊤ and vi⊤ are the i-th rows of A⁺ and B⁺, respectively. Chen and Wu [4] used Metropolis-adjusted Langevin (MALA) sampling [25] to sample from p(z | x).

2.2 Blocked Gibbs sampling

The fact that the marginals fi(si) are expressed as Gaussian mixtures (Equation 2) can be used to derive an efficient blocked Gibbs sampler. The Gibbs sampler alternately samples nullspace representations z and precisions of the source marginals λ. The key observation here is that given the precisions λ, the distribution over x and z becomes Gaussian, which makes sampling from the posterior distribution tractable. A similar idea was pursued by Olshausen and Millman [21], who modeled the source distributions with mixtures of Gaussians and conditionally Gibbs sampled precisions one by one. However, a change in one of the precision variables entails larger computational costs, so that this algorithm is most efficient if only few Gaussians are used and the probability of changing precisions is small.
In contrast, here we update all precision variables in parallel by conditioning on the nullspace representation z. This makes it feasible to use a large or even infinite number of precisions. Conditioned on a data point x and a corresponding nullspace representation z, the distribution over precisions λ becomes factorial,

p(λ | x, z) = p(λ | s) ∝ p(s | λ) p(λ) = ∏i N(si; 0, λi^{-1}) gi(λi),  (6)

where we have used the fact that we can perfectly recover the sources given x and z (Equation 4). Using a finite number of precisions ϑik with prior probabilities πik, for example, the posterior probability of λi being ϑij becomes

p(λi = ϑij | x, z) = N(si; 0, ϑij^{-1}) πij / ∑k N(si; 0, ϑik^{-1}) πik.  (7)

Conditioned on λ, s is Gaussian distributed with diagonal covariance Λ^{-1} = diag(λ^{-1}). As a linear transformation of s, the distribution over x and z is also Gaussian with covariance

Σ = [AΛ^{-1}A⊤, AΛ^{-1}B⊤; BΛ^{-1}A⊤, BΛ^{-1}B⊤] = [Σxx, Σxz; Σxz⊤, Σzz].  (8)

Using standard Gaussian identities, we obtain

p(z | x, λ) = N(z; µz|x, Σz|x),  (9)

where µz|x = Σxz⊤ Σxx^{-1} x and Σz|x = Σzz − Σxz⊤ Σxx^{-1} Σxz. We use the following computationally efficient method to conditionally sample Gaussian distributions [8, 14]:

[x′; z′] ∼ N(0, Σ),  z = z′ + Σxz⊤ Σxx^{-1} (x − x′).  (10)

It can easily be shown that z has the desired distribution of Equation 9. Together, Equations 7 and 9 implement a rapidly mixing blocked Gibbs sampler. However, the computational cost of solving Equation 10 is larger than for a single Markov step in other sampling methods such as MALA. We empirically show in the results section that for natural image patches the benefits of blocked Gibbs sampling outweigh its computational costs. A closely related sampling algorithm was proposed by Park and Casella [23] for implementing Bayesian inference in the linear regression model with a Laplace prior.
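One full sweep of the blocked Gibbs sampler, i.e., the precision update of Equation 7 followed by the conditional Gaussian update of Equation 10, can be sketched as follows. This is an illustrative reimplementation assuming finite GSM marginals with candidate precisions ϑ and weights π per source, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_sweep(x, z, A, B, thetas, pis):
    """One blocked Gibbs update. thetas[i, k] is the k-th candidate
    precision of source i, pis[i, k] its prior weight."""
    # Recover the sources from (x, z) via Equation 4.
    s = np.linalg.pinv(A) @ x + np.linalg.pinv(B) @ z

    # Sample lambda | x, z (Equation 7): factorial over sources.
    log_p = 0.5 * np.log(thetas) - 0.5 * thetas * s[:, None] ** 2 + np.log(pis)
    p = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    lam = np.array([rng.choice(th, p=pi) for th, pi in zip(thetas, p)])

    # Sample z | x, lambda (Equations 8-10).
    C = np.vstack([A, B])
    Sigma = C @ np.diag(1 / lam) @ C.T
    m = A.shape[0]
    Sxx, Sxz = Sigma[:m, :m], Sigma[:m, m:]
    # Joint sample (x', z') ~ N(0, Sigma), then correct it (Equation 10).
    joint = np.linalg.cholesky(Sigma) @ rng.standard_normal(len(Sigma))
    xp, zp = joint[:m], joint[m:]
    return zp + Sxz.T @ np.linalg.solve(Sxx, x - xp), lam
```

Alternating such sweeps yields samples from the posterior over both λ and z given an observation x.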
The main differences to Park and Casella's sampler are that we also consider the noiseless case by exploiting the nullspace representation, that instead of using a fixed Laplace prior we use the sampler to learn the distribution over source variables, and that we apply the algorithm in the context of image modeling. Related ideas were also discussed by Papandreou and Yuille [22], Schmidt et al. [27], and others.

3 Learning

In the following, we describe a learning strategy for the overcomplete linear model based on the idea of persistent Markov chains [26, 32, 36], which has already led to improved learning strategies for a number of different models [e.g., 6, 12, 29, 32]. Following Girolami [11] and others, we use expectation maximization (EM) [7] to maximize the likelihood of the overcomplete linear model. Instead of a variational approximation, here we use the blocked Gibbs sampler to sample a hidden state z for every data point x in the E-step. Each M-step then reduces to maximum likelihood learning as in the complete case, for which many algorithms are available. Due to the sampling step, this variant of EM is known as Monte Carlo EM [34]. Despite our efforts to make sampling efficient, running the Markov chain until convergence can still be a costly operation due to the generally large number of data points and the high dimensionality of posterior samples. To further reduce computational costs, we developed a learning strategy which makes use of persistent Markov chains and only requires a few sampling steps in every iteration. Instead of starting the Markov chain anew in every iteration, we initialize the Markov chain with the samples of the previous iteration. This approach is based on the following intuition. First, if the model changes only slightly, the posterior will change only slightly. As a result, the samples from the previous iteration will provide a good initialization, and fewer updates of the Markov chain will be sufficient to reach convergence.
Second, if updating the Markov chain has only a small effect on the posterior samples z, the distribution of the complete data (x, z) will also change very little. Thus, the optimal parameters of the previous M-step will be close to optimal in the current M-step. This causes an inefficient Markov chain to automatically slow down the learning process, so that the posterior samples will always be close to the stationary distribution. Even updating the Markov chain only once results in a valid EM strategy, which can be seen as follows. EM can be viewed as alternately optimizing a lower bound to the log-likelihood with respect to model parameters θ and an approximating posterior distribution q [18]:

F[q, θ] = log p(x; θ) − DKL[q(z | x) || p(z | x, θ)].  (11)

Each M-step increases F for fixed q, while each E-step increases F for fixed θ. This is repeated until a local optimum is reached. Importantly, local maxima of F are also local maxima of the log-likelihood, log p(x; θ). Interestingly, improving the lower bound F with respect to q can be accomplished by driving the Markov chain with our Gibbs sampler or some other transition operator [26].

Figure 2: A: The average energy of posterior samples for different sampling methods after deterministic initialization. Depending on the initialization, the average energy can be initially too low or too high. Gray lines correspond to different hyperparameter choices for the HMC sampler; red and brown lines indicate the manually picked best performing HMC and MALA samplers. The dashed line represents an unbiased estimate of the true average posterior energy. B: Autocorrelation functions for Gibbs sampling and the best HMC and MALA samplers.

This can be seen
by using the fact that application of a transition operator T to any distribution cannot increase its Kullback-Leibler (KL) divergence to a stationary distribution [5, 15]:

DKL[Tq(z | x) || p(z | x, θ)] ≤ DKL[q(z | x) || p(z | x, θ)],  (12)

where Tq(z | x) = ∫ q(z0 | x) T(z | z0, x) dz0 and T(z | z0, x) is the probability density of making a transition from z0 to z. Hence, each Gibbs update of the hidden states implicitly increases F. In practice, of course, we only have access to samples from Tq and will never compute it explicitly. This shows that the algorithm converges provided the log-likelihood is bounded. This stands in contrast to other contexts where persistent Markov chains have been successful but training can diverge [10]. To guarantee not only convergence but convergence to a local optimum of F, we would also have to prove DKL[T^n q(z | x) || p(z | x, θ)] → 0 for n → ∞. Unfortunately, most results on MCMC convergence deal with convergence in total variation, which is weaker than convergence in KL divergence.

4 Results

We trained several linear models on log-transformed, centered and symmetrically whitened image patches extracted from van Hateren's dataset of natural images [33]. We explicitly modeled the DC component of the whitened image patches using a mixture of Gaussians and constrained the remaining components of the linear basis to be orthogonal to the DC component. For faster convergence, we initialized the linear basis with the sparse coding algorithm of Olshausen and Field [19], which corresponds to learning with MAP inference and fixed marginal source distributions. After initialization, we optimized the basis using L-BFGS [3] during each M-step and updated the representation of the posterior using 2 steps of Gibbs sampling in each E-step. To represent the source marginals, we used finite GSMs (see Equation 7) with 10 precisions ϑij each and equal prior weights, that is, πij = 0.1.
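The training loop of Section 3 can be condensed into a schematic skeleton. The callback signatures, the `dim_z` entry, and the loop counts below are illustrative assumptions; in the actual model the E-step is the blocked Gibbs sweep and the M-step an L-BFGS fit:

```python
import numpy as np

def persistent_mc_em(data, init_params, gibbs_update, m_step,
                     n_iter=100, n_gibbs=2):
    """Schematic persistent Monte Carlo EM.

    gibbs_update(x, z, params) -> z   : one Markov transition on the
                                        nullspace representation
    m_step(data, Z, params) -> params : complete-data maximum likelihood
    """
    params = init_params
    # Persistent chains: one state per data point, carried across
    # iterations instead of being re-initialized.
    Z = [np.zeros(params["dim_z"]) for _ in data]
    for _ in range(n_iter):
        # E-step: a few Gibbs updates starting from the previous states.
        for i, x in enumerate(data):
            for _ in range(n_gibbs):
                Z[i] = gibbs_update(x, Z[i], params)
        # M-step: refit parameters to the completed data (x, z).
        params = m_step(data, Z, params)
    return params, Z
```

Because the chains persist, even n_gibbs = 1 yields a valid (if slower-mixing) EM procedure, as argued above via the bound F.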
The source marginals were initialized by fitting them to samples from the Laplace distribution and later optimized using 10 iterations of standard EM at the beginning of each M-step.

4.1 Performance of the blocked Gibbs sampler

We compared the sampling performance of our Gibbs sampler to MALA sampling, as used by Chen and Wu [4], as well as to HMC sampling [9], which is a generalization of MALA. The HMC sampler has two parameters: a step width and a number of so-called leapfrog steps. In addition, we slightly randomized the step width to avoid problems with periodicity [17], which added an additional parameter to control the degree of randomization. After manually determining a reasonable range for the parameters of HMC, we picked 40 parameter sets for each model to test against our Gibbs sampler.

Figure 3: We trained models with up to four times overcomplete representations using either Laplace marginals or GSM marginals. A four times overcomplete basis set is shown in the center. Basis vectors were normalized so that the corresponding source distributions had unit variance. The left plot shows the norms of the learned basis vectors. With fixed Laplace marginals, the algorithm produces a basis which is barely overcomplete. However, with GSM marginals the model learns bases which are at least three times overcomplete. The right panel shows log-densities of the source distributions corresponding to basis vectors inside the dashed rectangle. For reference, each plot also contains a Laplace distribution of equal variance.

The algorithms were tested on one toy model and one two times overcomplete model trained on 8 × 8 image patches. The toy model employed 1 visible unit and 3 hidden units with exponential power distributions whose exponents were 0.5.
The entries of its basis matrix were randomly drawn from a Gaussian distribution with mean 1 and standard deviation 0.2. Figure 2 shows trace plots and autocorrelation functions for the different sampling methods. The trace plots were generated by measuring the negative log-density (or energy) of posterior samples for a fixed set of visible states over time, −log p(x, zt), and averaging over data points. Autocorrelation functions were estimated from single Markov chain runs of equal duration for each sampler and data point. All Markov chains were initialized using 100 burn-in steps of Gibbs sampling, independent of the sampler used to generate the autocorrelation functions. Finally, we averaged several autocorrelation functions corresponding to different data points (see Supplementary Section 1 for more information). For both models we observed faster convergence with Gibbs sampling than with the best MALA or HMC samplers (Figure 2). The image model in particular benefited from replacing MALA by HMC. Still, even the best HMC sampler produced more correlated samples than the blocked Gibbs sampler. While the best HMC sampler reached an autocorrelation of 0.05 after about 64 seconds, it took only about 26 seconds with the blocked Gibbs sampler (right-hand side of Figure 2B). All tests were performed on a single core of an AMD Opteron 6174 machine with 2.20 GHz and implementations written in Python and NumPy.

4.2 Sparsity and overcompleteness

Berkes et al. [2] found that even for very sparse choices of the Student-t prior, the representations learned by the linear model are barely overcomplete if a variational approximation to the posterior is used. Similar results and even undercomplete representations were obtained by Seeger [28] with the Laplace prior. The results of these studies suggest that the optimal basis set is not very overcomplete. On the other hand, basis sets obtained with other, often more crude approximations are often highly overcomplete.
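As an aside, the autocorrelation diagnostic used above can be estimated from a single chain trace with the standard empirical estimator; the paper's exact averaging procedure is in its supplement, so the following is a generic sketch, demonstrated on an AR(1) toy chain whose true autocorrelation at lag k is 0.9^k:

```python
import numpy as np

def autocorrelation(chain, max_lag):
    """Normalized empirical autocorrelation of a scalar chain trace."""
    chain = np.asarray(chain, dtype=float)
    chain = chain - chain.mean()
    var = chain.var()
    return np.array([
        np.mean(chain[:len(chain) - lag] * chain[lag:]) / var
        for lag in range(max_lag + 1)
    ])

# AR(1) toy chain: x_t = 0.9 * x_{t-1} + standard normal noise.
rng = np.random.default_rng(3)
x = np.zeros(50_000)
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

acf = autocorrelation(x, 10)
```

A slowly decaying autocorrelation function, as observed for the MALA chains in Figure 2B, means many Markov steps are needed per effectively independent sample.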
In the following, we revisit the question of optimal overcompleteness and support our findings with quantitative measurements. Consistent with the study of Seeger [28], if we fix the source distributions to be Laplacian, our algorithm learns representations which are only slightly overcomplete (Figure 3). However, much more overcomplete representations were obtained when the source distributions were learned from the data. This is in line with the results of Olshausen and Millman [21], who used mixtures of two and three Gaussians as source distributions and obtained two times overcomplete representations for 8 × 8 image patches.

Figure 4: A comparison of different models for natural image patches. While using overcomplete representations (OLM) yields substantial improvements over the complete linear model (LM), it still cannot compete with other models of natural image patches. GSM here refers to a single multivariate Gaussian scale mixture, that is, an elliptically contoured distribution with very few parameters (see Supplementary Section 3). Log-likelihoods are reported for non-whitened image patches. Average log-likelihood and standard error of the mean (SEM) were calculated from log-probabilities of 10000 test data points.

Figure 3 suggests that with GSMs as source distributions, the model can make use of three and up to four times overcomplete representations. Our quantitative evaluations confirmed a substantial improvement of the two-times overcomplete model over the complete model. Beyond this, however, the improvements quickly become negligible (Figure 4).
The source distributions discovered by our algorithm were extremely sparse and resembled spike-and-slab distributions, generating mostly values close to zero with the occasional outlier. Source distributions of low-frequency components generally had narrower peaks than those of high-frequency components (Figure 3).

4.3 Model comparison

To compare the performance of the overcomplete linear model to the complete linear model and other image models, we would like to evaluate the overcomplete linear model's log-likelihood on a test set of images. However, to do this, we would have to integrate out all hidden units, which we cannot do analytically. One way to nevertheless obtain an unbiased estimate of p(x) is by introducing a tractable distribution as follows:

p(x) = ∫ p(x, z) dz = ∫ q(z | x) [p(x, z) / q(z | x)] dz.  (13)

We can then estimate the above integral by sampling states zn from q(z | x) and averaging over p(x, zn)/q(zn | x), a technique called importance sampling. The closer q(z | x) is to p(z | x), the more efficient the estimator will be. A procedure for constructing distributions q(z | x) from transition operators such as our Gibbs sampling operator is annealed importance sampling (AIS) [16]. AIS starts with a simple and tractable distribution and successively brings it closer to p(z | x). The computational and statistical efficiency of the estimator depends on the efficiency of the transition operator. Here, we used our Gibbs sampler and constructed intermediate distributions by interpolating between a Gaussian distribution and the overcomplete linear model. For the four-times overcomplete model, we used 300 intermediate distributions and 300 importance samples to estimate the density of each data point. We find that the overcomplete linear model is still worse than, for example, a single multivariate GSM with a separately modeled DC component (Figure 4; see also Supplementary Section 3).
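The plain importance sampling identity in Equation 13 can be checked on a toy problem where the marginal likelihood is known in closed form. The Gaussian joint below is an illustrative assumption, not the image model, and the proposal only roughly matches the true posterior:

```python
import numpy as np

def normal_pdf(v, mean, std):
    return np.exp(-0.5 * ((v - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)

# Toy joint: z ~ N(0, 1) and x | z ~ N(z, 1), so that p(x) = N(x; 0, 2).
x = 0.7

def p_joint(x, z):
    return normal_pdf(z, 0.0, 1.0) * normal_pdf(x, z, 1.0)

# Proposal q(z | x); the exact posterior is N(x/2, 1/2), but a rough
# match also works, just with higher estimator variance.
q_mean, q_std = x / 2, 1.0
z = rng.normal(q_mean, q_std, size=100_000)
weights = p_joint(x, z) / normal_pdf(z, q_mean, q_std)
p_x_hat = weights.mean()    # unbiased estimate of p(x), Equation 13
```

AIS refines this idea by building the proposal from a sequence of intermediate distributions bridged by a transition operator, which keeps the importance weights well behaved in high dimensions.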
An alternative overcomplete generalization of the complete linear model is the family of products of experts (PoE) [13]. Instead of introducing additional source variables, a PoE can have more factors than visible units,

s = Wx,  p(x) ∝ ∏i fi(si),  (14)

where W ∈ R^{N×M} and each factor fi is also called an expert. For N = M, the PoE is equivalent to the linear model (Equation 1). In contrast to the overcomplete linear model, the prior over hidden sources s here is in general not factorial. A popular choice of PoE in the context of natural images is the product of Student-t (PoT) distributions, in which the experts have the form fi(si) = (1 + si^2)^{−αi} [35]. To train the PoT, we used a persistent variant of minimum probability flow learning [29, 31]. We used AIS in combination with HMC to evaluate each PoT model [30]. We find that the PoT is better suited for modeling the statistics of natural images and takes better advantage of overcomplete representations (Figure 4). While both the estimator for the PoT and the estimator for the overcomplete linear model are consistent, the former tends to overestimate and the latter tends to underestimate the average log-likelihood. It is thus crucial to test the convergence of both estimates if any meaningful comparison is to be made (see Supplementary Section 2).

5 Discussion

We have shown how to efficiently perform inference, training and evaluation in the sparse overcomplete linear model. While general purpose sampling algorithms such as MALA or HMC have the advantage of being more widely applicable, we showed that blocked Gibbs sampling can be much faster when the source distributions are sparse, as for natural images. Another advantage of our sampler is that it is parameter free. Choosing suboptimal parameters for the HMC sampler can lead to extremely poor performance. Which parameters are optimal can change from data point to data point and over time as the model is trained.
Furthermore, monitoring the convergence of the Markov chains can be problematic [28]. We showed that by training a model with a persistent variant of Monte Carlo EM, even the number of sampling steps performed in each E-step becomes much less crucial for the success of training. Optimizing and evaluating the likelihood of overcomplete linear models is a challenging problem. To our knowledge, our study is the first to show a clear advantage of the overcomplete linear model over its complete counterpart on natural images. At the same time, we demonstrated that under the assumption of a factorial prior, the overcomplete linear model underperforms other generalizations of the complete linear model. Yet it is easy to see how our algorithm could be extended to other, much better performing models, for instance, to models in which multiple sources are modeled jointly by a multivariate GSM, or to bilinear models with two sets of latent variables. Code for training and evaluating overcomplete linear models is available at http://bethgelab.org/code/theis2012d/.

Acknowledgments

The authors would like to thank Bruno Olshausen, Nicolas Heess and George Papandreou for helpful comments. This study was financially supported by the Bernstein award (BMBF; FKZ: 01GQ0601), the German Research Foundation (DFG; priority program 1527, BE 3848/2-1), and a DFG-NSF collaboration grant (TO 409/8-1).

References

[1] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society, Series B, 36(1):99–102, 1974.
[2] P. Berkes, R. Turner, and M. Sahani. On sparsity and overcompleteness in image models. Advances in Neural Information Processing Systems, 20, 2008.
[3] R. H. Byrd, P. Lu, and J. Nocedal. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific and Statistical Computing, 16(5):1190–1208, 1995.
[4] R.-B. Chen and Y. N. Wu. A null space method for over-complete blind source separation. Computational Statistics & Data Analysis, 51(12):5519–5536, 2007.
[5] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 1991.
[6] B. J. Culpepper, J. Sohl-Dickstein, and B. A. Olshausen. Building a better probabilistic model of images by factorization. Proceedings of the International Conference on Computer Vision, 13, 2011.
[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[8] A. Doucet. A note on efficient conditional simulation of Gaussian distributions, 2010.
[9] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216–222, 1987.
[10] A. Fischer and C. Igel. Empirical analysis of the divergence of Gibbs sampling based learning algorithms for restricted Boltzmann machines. Proceedings of the 20th International Conference on Artificial Neural Networks, 2010.
[11] M. Girolami. A variational method for learning sparse and overcomplete representations. Neural Computation, 13(11):2517–2532, 2001.
[12] N. Heess, N. Le Roux, and J. Winn. Weakly supervised learning of foreground-background segmentation using masked RBMs. International Conference on Artificial Neural Networks, 2011.
[13] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[14] Y. Hoffman and E. Ribak. Constrained realizations of Gaussian fields: a simple algorithm. The Astrophysical Journal, 380:L5–L8, 1991.
[15] I. Murray and R. Salakhutdinov. Notes on the KL-divergence between a Markov chain and its equilibrium distribution, 2008.
[16] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[17] R. M. Neal. MCMC using Hamiltonian dynamics, pages 113–162. Chapman & Hall/CRC Press, 2011.
[18] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants, pages 355–368. MIT Press, 1998.
[19] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[20] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
[21] B. A. Olshausen and K. J. Millman. Learning sparse codes with a mixture-of-Gaussians prior. Advances in Neural Information Processing Systems, 12, 2000.
[22] G. Papandreou and A. L. Yuille. Gaussian sampling by local perturbations. Advances in Neural Information Processing Systems, 23, 2010.
[23] T. Park and G. Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681–686, 2008.
[24] R. Penrose. A generalized inverse for matrices. Proceedings of the Cambridge Philosophical Society, 51:406–413, 1955.
[25] G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin diffusions and their discrete approximations. Bernoulli, 2(4):341–363, 1996.
[26] B. Sallans. A hierarchical community of experts. Master's thesis, University of Toronto, 1998.
[27] U. Schmidt, Q. Gao, and S. Roth. A generative perspective on MRFs in low-level vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[28] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. Journal of Machine Learning Research, 9:759–813, 2008.
[29] J. Sohl-Dickstein. Persistent minimum probability flow, 2011.
[30] J. Sohl-Dickstein and B. J. Culpepper. Hamiltonian annealed importance sampling for partition function estimation, 2012.
[31] J. Sohl-Dickstein, P. Battaglino, and M. R. DeWeese. Minimum probability flow learning. Proceedings of the 28th International Conference on Machine Learning, 2011.
[32] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. Proceedings of the 25th International Conference on Machine Learning, 2008.
[33] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society B: Biological Sciences, 265(1394), 1998.
[34] G. C. G. Wei and M. A. Tanner. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85(411):699–704, 1990.
[35] M. Welling, G. Hinton, and S. Osindero. Learning sparse topographic representations with products of Student-t distributions. Advances in Neural Information Processing Systems, 15, 2003.
[36] L. Younes. Parametric inference for imperfectly observed Gibbsian fields. Probability Theory and Related Fields, 1999.