No Unbiased Estimator of the Variance of K-Fold Cross-Validation

Yoshua Bengio and Yves Grandvalet
Dept. IRO, Université de Montréal
C.P. 6128, Montreal, Qc, H3C 3J7, Canada
{bengioy,grandvay}@iro.umontreal.ca

Abstract

Most machine learning researchers perform quantitative experiments to estimate generalization error and compare algorithm performances. In order to draw statistically convincing conclusions, it is important to estimate the uncertainty of such estimates. This paper studies the estimation of the uncertainty around the K-fold cross-validation estimator. The main theorem shows that there exists no universal unbiased estimator of the variance of K-fold cross-validation. An analysis based on the eigendecomposition of the covariance matrix of errors helps to better understand the nature of the problem and shows that naive estimators may grossly underestimate variance, as confirmed by numerical experiments.

1 Introduction

The standard measure of accuracy for trained models is the prediction error (PE), i.e. the expected loss on future examples. Learning algorithms themselves are often compared on their average performance, which estimates the expected value of the prediction error (EPE) over training sets. If the amount of data is large enough, PE can be estimated by the mean error over a hold-out test set. The hold-out technique does not account for the variance with respect to the training set, and may thus be considered inappropriate for the purpose of algorithm comparison [4]. Moreover, it makes an inefficient use of data, which forbids its application to small sample sizes. In this situation, one resorts to computer-intensive resampling methods such as cross-validation or the bootstrap to estimate PE or EPE. We focus here on K-fold cross-validation. While it is known that cross-validation provides an unbiased estimate of EPE, it is also known that its variance may be very large [2].
This variance should be estimated to provide faithful confidence intervals on PE or EPE, and to test the significance of observed differences between algorithms. This paper provides theoretical arguments showing the difficulty of this estimation. The difficulties of variance estimation have already been addressed [4, 7, 8]. Some distribution-free bounds on the deviations of cross-validation are available, but they are specific to locally defined classifiers, such as nearest neighbors [3]. This paper builds upon the work of Nadeau and Bengio [8], which investigated in detail the theoretical and practical merits of several estimators of the variance of cross-validation. Our analysis departs from this work in the sampling procedure defining the cross-validation estimate. While [8] considers K independent training and test splits, we focus on the standard K-fold cross-validation procedure, with no overlap between test sets: each example is used once and only once as a test example.

2 General Framework

Formally, we have a training set $D = \{z_1, \ldots, z_n\}$, with $z_i \in \mathcal{Z}$, assumed independently sampled from an unknown distribution $P$. We also have a learning algorithm $A : \mathcal{Z}^* \to \mathcal{F}$ which maps a data set to a function. Here we consider symmetric algorithms, i.e. $A$ is insensitive to the ordering of examples in the training set $D$. The discrepancy between the prediction and the observation $z$ is measured by a loss functional $L : \mathcal{F} \times \mathcal{Z} \to \mathbb{R}$. For example, one may take in regression $L(f, (x, y)) = (f(x) - y)^2$, and in classification $L(f, (x, y)) = \mathbf{1}_{f(x) \neq y}$. Let $f = A(D)$ be the function returned by algorithm $A$ on the training set $D$. In application-based evaluation, the goal of learning is usually stated as the minimization of the expected loss of $f = A(D)$ on future test examples:

$PE(D) = E[L(f, z)]$ ,   (1)

where the expectation is taken with respect to $z \sim P$.
To evaluate and compare learning algorithms [4], we care about the expected performance of learning algorithm $A$ over different training sets:

$EPE(n) = E[L(A(D), z)]$ ,   (2)

where the expectation is taken with respect to $D \times z$ independently sampled from $P^n \times P$. When $P$ is unknown, PE and EPE have to be estimated, and it is crucial to assess the uncertainty attached to this estimation. Although this point is often overlooked, estimating the variance of the estimates $\widehat{PE}$ and $\widehat{EPE}$ requires caution, as illustrated here.

2.1 Hold-out estimates of performance

The mean error over a hold-out test set estimates PE, and the variance of $\widehat{PE}$ is given by the usual variance estimate for means of independent variables. However, this variance estimator is not suited to $\widehat{EPE}$: the test errors are correlated when the training set is considered as a random variable. Figure 1 illustrates how crucial it is to take these correlations into account. The average ratio (estimator of variance/empirical variance) is displayed for two variance estimators, in an ideal situation where 10 independent training and test sets are available. The average of $\hat{\theta}_1/\theta$, the naive variance estimator ignoring correlations, shows that this estimate is highly down-biased, even for large sample sizes.

Figure 1: Average ratio (estimator of variance/empirical variance) over 100 000 experiments: $\hat{\theta}_1/\theta$ (ignoring correlations, lower curve) and $\hat{\theta}_2/\theta$ (taking into account correlations, upper curve) vs. sample size $n$. The error bars represent ±2 standard errors on the average value.

Experiment 1 (Ideal hold-out estimate of EPE). We have $K = 10$ independent training sets $D_1, \ldots, D_K$ of $n$ independent examples $z_i = (x_i, y_i)$, where $x_i = (x_{i1}, \ldots, x_{id})'$ is a $d$-dimensional centered, unit-covariance Gaussian variable ($d = 30$), and $y_i = \sqrt{3/d}\,\sum_{k=1}^{d} x_{ik} + \varepsilon_i$, with the $\varepsilon_i$ being independent, centered, unit-variance Gaussian variables (the $\sqrt{3/d}$ factor provides $R^2 \simeq 3/4$).
We also have $K$ independent test sets $T_1, \ldots, T_K$ of size $n$ sampled from the same distribution. The learning algorithm consists in fitting a line by ordinary least squares, and the estimate of EPE is the average quadratic loss on test examples, $\widehat{EPE} = \bar{L} = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{n} \sum_{z_i \in T_k} L_{ki}$, where $L_{ki} = L(A(D_k), z_i)$. The first estimate of the variance of $\widehat{EPE}$ is $\hat{\theta}_1 = \frac{1}{Kn(Kn-1)} \sum_{k=1}^{K} \sum_i (L_{ki} - \bar{L})^2$, which is unbiased provided there is no correlation between test errors. The second estimate is $\hat{\theta}_2 = \frac{1}{K(K-1)n^2} \sum_{k=1}^{K} \sum_{i,j} (L_{ki} - \bar{L})(L_{kj} - \bar{L})$, which estimates the correlations. Note that Figure 1 suggests that the naive estimator of variance $\hat{\theta}_1$ asymptotically converges to the true variance. This can be shown by taking advantage of the results in this paper, as long as the learning algorithm converges ($PE(D) \stackrel{a.s.}{\to} \lim_{n\to\infty} EPE(n)$), i.e. provided that the only source of variability of $\widehat{EPE}$ is due to the finite test size.

2.2 K-fold cross-validation estimates of performance

In K-fold cross-validation [9], the data set $D$ is first chunked into $K$ disjoint subsets (or blocks) of the same size $m = n/K$ (to simplify the analysis below, we assume that $n$ is a multiple of $K$). Let us write $T_k$ for the $k$-th such block, and $D_k$ for the training set obtained by removing the elements in $T_k$ from $D$. The estimator is

$CV = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{m} \sum_{z_i \in T_k} L(A(D_k), z_i)$ .   (3)

Under stability assumptions on $A$, CV estimates $PE(D)$ at least as accurately as the training error [6]. However, as CV is an average of unbiased estimates of $PE(D_1), \ldots, PE(D_K)$, a more general statement is that CV estimates $EPE(n-m)$ unbiasedly. Note that the forthcoming analysis also applies to the version of cross-validation dedicated to comparing algorithms, using matched pairs,

$\Delta CV = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{m} \sum_{z_i \in T_k} \left( L(A_1(D_k), z_i) - L(A_2(D_k), z_i) \right)$ ,

and to the delete-$m$ jackknife estimate of $PE(D)$ debiasing the training error (see e.g. [5]):

$JK = \frac{1}{n} \sum_{i=1}^{n} L(A(D), z_i) - (K-1) \left( \frac{1}{K(n-m)} \sum_{k=1}^{K} \sum_{z_i \in D_k} L(A(D_k), z_i) - \frac{1}{n} \sum_{i=1}^{n} L(A(D), z_i) \right)$ .
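The K-fold estimator of Eq. (3) is straightforward to implement. The sketch below is a minimal illustration (not the authors' code), using ordinary least squares and quadratic loss in the setting of Experiment 1; the data-generating constants follow the experiment description.

```python
import numpy as np

def kfold_cv(X, y, K, fit, loss):
    """K-fold cross-validation estimate (Eq. 3): average test loss
    over K disjoint blocks of size m = n/K."""
    n = len(y)
    assert n % K == 0, "the analysis assumes n is a multiple of K"
    m = n // K
    idx = np.arange(n)
    fold_means = []
    for k in range(K):
        test = idx[k * m:(k + 1) * m]
        train = np.setdiff1d(idx, test)
        f = fit(X[train], y[train])              # A(D_k)
        fold_means.append(loss(f, X[test], y[test]).mean())
    return np.mean(fold_means)                   # CV = (1/K) sum_k fold mean

# Ordinary least squares as the learning algorithm, quadratic loss.
fit_ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
sq_loss = lambda w, X, y: (X @ w - y) ** 2

rng = np.random.default_rng(0)
d, n = 30, 120
X = rng.standard_normal((n, d))
y = np.sqrt(3 / d) * X.sum(axis=1) + rng.standard_normal(n)
cv = kfold_cv(X, y, K=10, fit=fit_ols, loss=sq_loss)
```

With unit noise variance, the CV estimate lands near the EPE of OLS on training sets of size $n - m$, i.e. slightly above 1.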
In what follows, CV, $\Delta CV$ and JK will generically be denoted by $\hat{\mu}$:

$\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} e_i = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{m} \sum_{i \in T_k} e_i$ ,

where, slightly abusing notation, $i \in T_k$ means $z_i \in T_k$, and for all $i \in T_k$:

$e_i = L(A(D_k), z_i)$ for $\hat{\mu} = CV$ ,
$e_i = L(A_1(D_k), z_i) - L(A_2(D_k), z_i)$ for $\hat{\mu} = \Delta CV$ ,
$e_i = K\,L(A(D), z_i) - \sum_{\ell \neq k} L(A(D_\ell), z_i)$ for $\hat{\mu} = JK$ .

Note that $\hat{\mu}$ is the average of identically distributed (but dependent) variables. Thus, it asymptotically converges to a normally distributed variable, which is completely characterized by its expectation $E[\hat{\mu}]$ and its variance $Var[\hat{\mu}]$.

3 Structure of the Covariance Matrix

The variance of $\hat{\mu}$ is $\theta = \frac{1}{n^2} \sum_{i,j} Cov(e_i, e_j)$. By using symmetry over permutations of the examples in $D$, we show that the covariance matrix has a simple block structure.

Lemma 1. Using the notation introduced in Section 2.2: 1) all $e_i$ are identically distributed; 2) all pairs $(e_i, e_j)$ belonging to the same test block are jointly identically distributed; 3) all pairs $(e_i, e_j)$ belonging to different test blocks are jointly identically distributed.

Proof: derived immediately from the permutation-invariance of $P(D)$ and the symmetry of $A$. See [1] for details and for the proofs not shown here for lack of space.

Corollary 1. The covariance matrix $\Sigma$ of the cross-validation errors $e = (e_1, \ldots, e_n)'$ has the simple block structure depicted in Figure 2: 1) all diagonal elements are identical: $\forall i$, $Cov(e_i, e_i) = Var[e_i] = \sigma^2$; 2) all off-diagonal entries of the $K$ diagonal blocks of size $m \times m$ are identical: $\forall (i, j) \in T_k^2$, $j \neq i$, $Cov(e_i, e_j) = \omega$; 3) all remaining entries are identical: $\forall i \in T_k$, $\forall j \in T_\ell$, $\ell \neq k$, $Cov(e_i, e_j) = \gamma$.

Figure 2: Structure of the covariance matrix.
Corollary 2. The variance of the cross-validation estimator is a linear combination of three moments:

$\theta = \frac{1}{n^2} \sum_{i,j} Cov(e_i, e_j) = \frac{1}{n} \sigma^2 + \frac{m-1}{n} \omega + \frac{n-m}{n} \gamma$   (4)

Hence, the problem of estimating $\theta$ does not involve estimating $n(n+1)/2$ covariances, but it cannot be reduced to that of estimating a single variance parameter either. Three components intervene, which may be interpreted as follows when $\hat{\mu}$ is the K-fold cross-validation estimate of EPE:

1. the variance $\sigma^2$ is the average (taken over training sets) variance of errors for "true" test examples (i.e. examples sampled independently from the training sets) when algorithm $A$ is fed with training sets of size $m(K-1)$;
2. the within-block covariance $\omega$ would also apply to these "true" test examples; it arises from the dependence of test errors stemming from the common training set;
3. the between-blocks covariance $\gamma$ is due to the dependence of the training sets (which share $n(K-2)/K$ examples) and to the fact that test block $T_k$ appears in all the training sets $D_\ell$ for $\ell \neq k$.

4 No Unbiased Estimator of Var[$\hat{\mu}$] Exists

Consider a generic estimator $\hat{\theta}$ that depends on the sequence of cross-validation errors $e = (e_1, e_2, \ldots, e_n)'$. Assuming $\hat{\theta}$ is analytic in $e$, consider its Taylor expansion:

$\hat{\theta} = \alpha_0 + \sum_i \alpha_1(i) e_i + \sum_{i,j} \alpha_2(i,j) e_i e_j + \sum_{i,j,k} \alpha_3(i,j,k) e_i e_j e_k + \ldots$   (5)

We first show that for unbiased variance estimates (i.e. $E[\hat{\theta}] = Var[\hat{\mu}]$), all the coefficients must vanish except for the second-order coefficients $\alpha_2(i,j)$.

Lemma 2. There is no universal unbiased estimator of Var[$\hat{\mu}$] that involves the $e_i$ in a non-quadratic way.

Proof: take the expected value of $\hat{\theta}$ expressed as in (5), and equate it with Var[$\hat{\mu}$] (4).

Since estimators that include moments other than the second moments in their expectation are biased, we now focus on estimators which are quadratic forms of the errors, i.e.

$\hat{\theta} = e' W e = \sum_{i,j} W_{ij} e_i e_j$ .
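Equation (4) can be checked numerically by building a covariance matrix with the block structure of Corollary 1 and comparing $\mathbf{1}'\Sigma\mathbf{1}/n^2$ with the closed form. This is an illustrative sketch; the values of $\sigma^2$, $\omega$ and $\gamma$ are arbitrary admissible choices.

```python
import numpy as np

def block_cov(n, K, sigma2, omega, gamma):
    """Covariance matrix of Corollary 1: sigma2 on the diagonal, omega
    inside the K diagonal m x m blocks, gamma everywhere else."""
    m = n // K
    block = np.arange(n) // m                 # test-block index of each example
    Sigma = np.full((n, n), gamma)
    Sigma[block[:, None] == block[None, :]] = omega
    np.fill_diagonal(Sigma, sigma2)
    return Sigma

n, K = 12, 3
m = n // K
sigma2, omega, gamma = 1.0, 0.3, 0.1
Sigma = block_cov(n, K, sigma2, omega, gamma)

theta_direct = Sigma.sum() / n**2             # (1/n^2) sum_ij Cov(e_i, e_j)
theta_formula = sigma2 / n + (m - 1) / n * omega + (n - m) / n * gamma
```

Each row of $\Sigma$ sums to $\sigma^2 + (m-1)\omega + (n-m)\gamma$, which is why the two expressions agree exactly.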
(6)

Lemma 3. The expectation of quadratic estimators $\hat{\theta}$ defined as in (6) is a linear combination of only three terms,

$E[\hat{\theta}] = a(\sigma^2 + \mu^2) + b(\omega + \mu^2) + c(\gamma + \mu^2)$ ,   (7)

where $(a, b, c)$ are defined as follows:

$a = \sum_{i=1}^{n} W_{ii}$ ,
$b = \sum_{k=1}^{K} \sum_{i \in T_k} \sum_{j \in T_k: j \neq i} W_{ij}$ ,
$c = \sum_{k=1}^{K} \sum_{\ell \neq k} \sum_{i \in T_k} \sum_{j \in T_\ell} W_{ij}$ .

A "trivial" representer of estimators with this expected value is

$\hat{\theta} = a s_1 + b s_2 + c s_3$ ,   (8)

where $(s_1, s_2, s_3)$ are the only quadratic statistics of $e$ that are invariant to the within-block and between-block permutations described in Lemma 1:

$s_1 = \frac{1}{n} \sum_{i=1}^{n} e_i^2$ ,
$s_2 = \frac{1}{n(m-1)} \sum_{k=1}^{K} \sum_{i \in T_k} \sum_{j \in T_k: j \neq i} e_i e_j$ ,
$s_3 = \frac{1}{n(n-m)} \sum_{k=1}^{K} \sum_{\ell \neq k} \sum_{i \in T_k} \sum_{j \in T_\ell} e_i e_j$ .   (9)

Proof: in (6), group the terms that have the same expected values (from Corollary 1).

Theorem 1. There exists no universally unbiased estimator of Var[$\hat{\mu}$].

Proof: thanks to Lemmas 2 and 3, it is enough to show that $E[\hat{\theta}] = Var[\hat{\mu}]$ has no solution for quadratic estimators:

$E[\hat{\theta}] = Var[\hat{\mu}] \;\Leftrightarrow\; a(\sigma^2 + \mu^2) + b(\omega + \mu^2) + c(\gamma + \mu^2) = \frac{1}{n} \sigma^2 + \frac{m-1}{n} \omega + \frac{n-m}{n} \gamma$ .

Finding $(a, b, c)$ satisfying this equality for all admissible values of $(\mu, \sigma^2, \omega, \gamma)$ is impossible, since it is equivalent to solving the following overdetermined system:

$a = \frac{1}{n}$ ,  $b = \frac{m-1}{n}$ ,  $c = \frac{n-m}{n}$ ,  $a + b + c = 0$ .   (10)

Q.E.D.

5 Eigenanalysis of the covariance matrix

One way to gain insight into the origin of the negative statement of Theorem 1 is via the eigenanalysis of $\Sigma$, the covariance matrix of $e$. This decomposition can be performed analytically thanks to the very particular block structure displayed in Figure 2.

Lemma 4. Let $v_k$ be the binary vector indicating the membership of each example in test block $k$. The eigenvalues of $\Sigma$ are as follows:

- $\lambda_1 = \sigma^2 - \omega$, with multiplicity $n - K$ and eigenspace orthogonal to $\{v_k\}_{k=1}^{K}$;
- $\lambda_2 = \sigma^2 + (m-1)\omega - m\gamma$, with multiplicity $K - 1$ and eigenspace defined, in the orthogonal complement of $\mathbf{1}$, by the basis $\{v_k\}_{k=1}^{K}$;
- $\lambda_3 = \sigma^2 + (m-1)\omega + (n-m)\gamma$, with eigenvector $\mathbf{1}$.
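Lemma 4 can likewise be verified numerically: the spectrum of a block-structured $\Sigma$ consists of exactly the three values below, with the stated multiplicities. This is a sketch with arbitrary admissible parameter values.

```python
import numpy as np

n, K = 12, 3
m = n // K
sigma2, omega, gamma = 1.0, 0.3, 0.1

# Covariance matrix with the structure of Corollary 1.
block = np.arange(n) // m
Sigma = np.full((n, n), gamma)
Sigma[block[:, None] == block[None, :]] = omega
np.fill_diagonal(Sigma, sigma2)

lam1 = sigma2 - omega                               # multiplicity n - K
lam2 = sigma2 + (m - 1) * omega - m * gamma         # multiplicity K - 1
lam3 = sigma2 + (m - 1) * omega + (n - m) * gamma   # multiplicity 1
expected = sorted([lam1] * (n - K) + [lam2] * (K - 1) + [lam3])
eigvals = np.sort(np.linalg.eigvalsh(Sigma))
```

Note also that $\lambda_3 / n$ equals the $\theta$ of Eq. (4), consistent with $\hat{\mu}$ being the projection of $e$ on $\frac{1}{n}\mathbf{1}$.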
Lemma 4 states that the vector $e$ can be decomposed into three uncorrelated parts: $n - K$ projections onto the subspace orthogonal to $\{v_k\}_{k=1}^{K}$, $K - 1$ projections onto the subspace spanned by $\{v_k\}_{k=1}^{K}$ in the orthogonal complement of $\mathbf{1}$, and one projection onto $\mathbf{1}$. A single vector example with $n$ independent elements can be seen as $n$ independent examples. Similarly, the uncorrelated projections of $e$ can be equivalently represented by respectively $n - K$, $K - 1$ and one uncorrelated one-dimensional examples. In particular, for the projection onto $\mathbf{1}$, with a single example, the sample variance is null, resulting in the absence of an unbiased variance estimator of $\lambda_3$. The projection of $e$ onto the eigenvector $\frac{1}{n}\mathbf{1}$ is precisely $\hat{\mu}$. Hence there is no unbiased estimate of $Var[\hat{\mu}] = \frac{\lambda_3}{n}$ when we have only one realization of the vector $e$. For the same reason, even with simple parametric assumptions on $e$ (such as $e$ Gaussian), the maximum likelihood estimate of $\theta$ is not defined. Only $\lambda_1$ and $\lambda_2$ can be estimated unbiasedly. Note that this problem cannot be addressed by performing multiple K-fold splits of the data set: such a procedure would not provide independent realizations of $e$.

6 Possible values for ω and γ

Theorem 1 states that no estimator is unbiased, and its proof shows that the bias of any quadratic estimator is a linear combination of $\mu^2$, $\sigma^2$, $\omega$ and $\gamma$. Regarding estimation, it is thus interesting to see what constraints restrict their possible range.

Lemma 5. For $\hat{\mu} = CV$ and $\hat{\mu} = \Delta CV$, the following inequalities hold:

$0 \leq \omega \leq \sigma^2$ ,
$-\frac{1}{n-m}\left(\sigma^2 + (m-1)\omega\right) \leq \gamma \leq \frac{1}{m}\left(\sigma^2 + (m-1)\omega\right)$ ,

which imply

$0 \leq \omega \leq \sigma^2$ ,
$-\frac{m}{n-m}\,\sigma^2 \leq \gamma \leq \sigma^2$ .

The admissible $(\omega, \gamma)$ region is very large, and there is no constraint linking $\mu$ to $\sigma^2$. Hence, we cannot propose a variance estimate with universally small bias.

7 Experiments

The bias of any quadratic estimator is a linear combination of $\mu^2$, $\sigma^2$, $\omega$ and $\gamma$. The admissible values provided above suggest that $\omega$ and $\gamma$ cannot be proved to be negligible compared to $\sigma^2$.
This section illustrates that, in practice, the contribution to the variance of $\hat{\mu}$ due to $\omega$ and $\gamma$ (see Equation (4)) can be of the same order as the one due to $\sigma^2$. This confirms that estimators of $\theta$ should indeed take into account the correlations of the $e_i$.

Experiment 2 (True variance of K-fold cross-validation). We repeat the experimental setup of Experiment 1, except that only one sample of size $n$ is available. Since cross-validation is known to be sensitive to the instability of algorithms, in addition to this standard setup we also consider another one with outliers. The input $x_i = (x_{i1}, \ldots, x_{id})'$ is still 30-dimensional, but it is now a mixture of two centered Gaussians: let $t_i$ be a binary variable, with $P(t_i = 1) = p = 0.95$; $t_i = 1 \Rightarrow x_i \sim N(0, I)$, $t_i = 0 \Rightarrow x_i \sim N(0, 100 I)$; $y_i = \sqrt{3/(d(p + 100(1-p)))}\,\sum_{k=1}^{d} x_{ik} + \varepsilon_i$; $t_i = 1 \Rightarrow \varepsilon_i \sim N(0, 1/(p + 100(1-p)))$, $t_i = 0 \Rightarrow \varepsilon_i \sim N(0, 100/(p + 100(1-p)))$.

We now look at the variance of K-fold cross-validation ($K = 10$), and decompose it into the three orthogonal components $\sigma^2$, $\omega$ and $\gamma$. The results are shown in Figure 3.

Figure 3: Contributions of $(\sigma^2, \omega, \gamma)$ to the total variance $Var[CV]$ vs. $n - m$, without outliers and with outliers.

Without outliers, the contribution of $\gamma$ is very important for small sample sizes. For large sample sizes, the overall variance is considerably reduced and is mainly caused by $\sigma^2$, because the learning algorithm returns very similar answers for all training sets. When there are outliers, the contribution of $\gamma$ is of the same order as that of $\sigma^2$, even when the ratio of examples to free parameters is large (here up to 20). Thus, in difficult situations, where $A(D)$ varies according to the realization of $D$, neglecting the effect of $\omega$ and $\gamma$ can be expected to introduce a bias of the order of the true variance. It is also interesting to see how these quantities are affected by the number of folds $K$.
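The outlier setup of Experiment 2 can be reproduced as follows. This is a sketch of the data-generating process only, with the constants given above; the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, p = 30, 600, 0.95
scale = p + 100 * (1 - p)                 # p + 100(1-p) = 5.95

t = rng.random(n) < p                     # t_i = 1 with probability p
# x_i ~ N(0, I) if t_i = 1, else N(0, 100 I) (std 10 per coordinate)
X = np.where(t[:, None], 1.0, 10.0) * rng.standard_normal((n, d))
# eps_i ~ N(0, 1/scale) if t_i = 1, else N(0, 100/scale)
eps = np.where(t, 1.0, 10.0) / np.sqrt(scale) * rng.standard_normal(n)
y = np.sqrt(3 / (d * scale)) * X.sum(axis=1) + eps
```

The scaling by $p + 100(1-p)$ keeps the signal-to-noise ratio, and hence $R^2 \simeq 3/4$, the same as in Experiment 1.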
The decomposition of $\theta$ into $\sigma^2$, $\omega$ and $\gamma$ (4) does not imply that $K$ should be set either to $n$ or to 2 (according to the sign of $\omega - \gamma$) in order to minimize the variance of $\hat{\mu}$. Modifying $K$ affects $\sigma^2$, $\omega$ and $\gamma$ through the size and overlaps of the training sets $D_1, \ldots, D_K$, as illustrated in Figure 4. For a fixed sample size, the variance of $\hat{\mu}$ and the contributions of $\sigma^2$, $\omega$ and $\gamma$ vary smoothly with $K$ (of course, the mean of $\hat{\mu}$ is also affected in the process).

Figure 4: Contributions of $(\sigma^2, \omega, \gamma)$ to the total variance $Var[CV]$ vs. $K$ for $n = 120$, without outliers and with outliers.

8 Discussion

The analysis presented in this paper for K-fold cross-validation can be instantiated to several interesting cases. First, when having $K$ independent training and test sets ($K = 1$ being the realistic case), the structure of the hold-out errors resembles that of the cross-validation errors, with $\gamma = 0$. Knowing this allows one to build the unbiased estimate $\hat{\theta}_2$ given in Section 2.1: knowing that $\gamma = 0$ removes the third equation of system (10) in the proof of Theorem 1. Two-fold cross-validation has been advocated for hypothesis testing [4]. It is a special case of K-fold cross-validation where the training blocks are mutually independent, since they do not overlap. However, this independence does not modify the structure of $e$, in the sense that $\gamma$ is not null: the between-block correlation stems from the fact that the training block $D_1$ is the test block $T_2$ and vice versa. Finally, leave-one-out cross-validation is another particular case, with $K = n$. The structure of the covariance matrix is simplified, without diagonal blocks. The estimation difficulties however remain: even in this particular case, there is no unbiased estimate of variance. From the definition of $b$ in Lemma 3, we have $b = 0$, and with $m = 1$ the linear system (10) still admits no solution.
To summarize, it is known that K-fold cross-validation may suffer from high variability, which can be responsible for bad choices in model selection and erratic behavior in the estimated expected prediction error [2, 4, 8]. This paper demonstrates that estimating the variance of K-fold cross-validation is difficult. Not only is there no unbiased estimate of this variance, but we have no theoretical result showing that the bias should be negligible in the non-asymptotic regime. The eigenanalysis of the covariance matrix of errors traces the problem back to the dependencies between test-block errors, which induce the absence of redundant pieces of information regarding the average test error, i.e. the K-fold cross-validation estimate. It is clear that this absence of redundancy is bound to cause difficulties in the estimation of variance. Our experiments show that the bias incurred by ignoring test error dependencies can be of the order of the variance itself, even for large sample sizes. Thus, the assessment of the significance of observed differences in cross-validation scores should be treated with much caution. The next step of this study consists in building and comparing variance estimators dedicated to the very specific structure of the test-block error dependencies.

References

[1] Y. Bengio and Y. Grandvalet. No unbiased estimator of the variance of K-fold cross-validation. Journal of Machine Learning Research, 2003.
[2] L. Breiman. Heuristics of instability and stabilization in model selection. The Annals of Statistics, 24(6):2350–2383, 1996.
[3] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[4] T. G. Dietterich. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7):1895–1924, 1999.
[5] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap, volume 57 of Monographs on Statistics and Applied Probability. Chapman & Hall, 1993.
[6] M.
Kearns and D. Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation, 11(6):1427–1453, 1996.
[7] R. Kohavi. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pages 1137–1143, 1995.
[8] C. Nadeau and Y. Bengio. Inference for the generalization error. Machine Learning, 52(3):239–281, 2003.
[9] M. Stone. Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, B, 36(1):111–147, 1974.
GPPS: A Gaussian Process Positioning System for Cellular Networks

Anton Schwaighofer∗, Marian Grigoraş, Volker Tresp, Clemens Hoffmann
Siemens Corporate Technology, Information and Communications
81730 Munich, Germany
http://www.igi.tugraz.at/aschwaig

Abstract

In this article, we present a novel approach to solving the localization problem in cellular networks. The goal is to estimate a mobile user's position, based on measurements of the signal strengths received from network base stations. Our solution works by building Gaussian process models for the distribution of signal strengths, as obtained in a series of calibration measurements. In the localization stage, the user's position can be estimated by maximizing the likelihood of the received signal strengths with respect to the position. We investigate the accuracy of the proposed approach on data obtained within a large indoor cellular network.

1 Introduction

Cellular networks form the basis of modern wireless communication infrastructure. Examples include GSM and UMTS networks for mobile phones, wireless LAN (WLAN) for computer networks, and DECT for cordless phones. Within these networks, location-based services (services that are tailored specifically to the current position of the mobile user) have great potential. Examples of such services are guiding the user through a building or city, delivering the time-table of buses at the nearest bus stop, or simply answering the user's query "Where am I?". All such services crucially depend on methods to accurately estimate the position of the mobile user within the network ("localization", "positioning"). In this article, we present a novel approach to obtaining position estimates for the mobile user. Most importantly, this method is based solely on infrastructure that is already present in a typical cellular network, and thus incurs minimal extra cost. Furthermore, we focus on indoor networks, where a number of specific problems need to be addressed.
Since our approach relies heavily on Gaussian process models, we call it the "Gaussian process positioning system" (GPPS). We proceed by introducing the localization problem in detail in Sec. 1.1, and by giving a brief overview of previous approaches. Sec. 2 follows with a description of the Gaussian process positioning system (GPPS). Sec. 3 shows how the required calibration stage of the system can be performed in an optimal manner. Sec. 4 presents an evaluation of the GPPS in a DECT network environment. We show that the GPPS gives accurate location estimates, in particular when only very few calibration measurements are available. (∗Also with the Institute for Theoretical Computer Science, Graz University of Technology, Austria.)

1.1 Problem Description

Our overall goal is to develop a localization system for indoor cellular networks that is (in order to minimize cost) based solely on existing standard networking hardware. Location estimates can be based on different characteristics of the radio signal received at the mobile station (i.e., the laptop in a WLAN network, or the phone in a DECT network). Yet, in most hardware, the only available information about the radio signal is the received signal strength. Information such as phase or propagation time from the base station requires additional hardware, and can thus not be used. In general, estimating the user's position based only on measurements of the signal strength is known to be a very challenging task [7], in particular in indoor networks. Due to reflections, refraction, and scattering of the electromagnetic waves along structures of the building, the received signal is only a distorted version of the transmitted signal. Noise and co-channel interference further corrupt the signals [4]. Furthermore, when using standard hardware, we must expect a high level of measurement noise for the signal strength. Changes in the environment can also have a strong impact on signal propagation.
For example, in a WLAN environment [1], it has been noted that shielding by a single person can attenuate the signal by up to 3.5 dBm. Also, the localization system ought to be robust, since base stations may fail, be switched off, or be temporarily shielded for unknown reasons. In these cases, a sensible localization system should not draw the conclusion that the user is far from the respective base station. Due to the complex signal propagation behaviour, almost all previous approaches to indoor localization use an initial calibration stage. Calibration here means that the signal strengths received from the network base stations are measured at a number of points inside the building. Systems differ in their ways of using this calibration data. In principle, two basic approaches can be used here. In a "forward modelling" approach, a model of signal strength as a function of position is built first. The localization procedure then tries to find the location which best agrees with the measured signal strengths. Alternatively, the mapping from signal strengths to position can be modelled directly ("inverse modelling"). The RADAR system [1], one of the first indoor localization systems, is an inverse modelling approach using a nearest neighbor technique. [7] build simple probabilistic models from the calibration data (forward modelling), in conjunction with maximum likelihood position estimation. Bayesian networks have been considered by [2], with the states of a node corresponding to different locations (using a coarse discretization). Discrete locations, yet with a finer grid, are also considered in [5], in an approach inspired by robot navigation.

2 The Gaussian Process Positioning System

The difficulties of indoor localization, as mentioned in Sec. 1.1, call for a probabilistic method for localization.
The key idea of the Gaussian process positioning system (GPPS) is to use Gaussian process models for the signal strength received from each base station, and to obtain position estimates via maximum likelihood, i.e. by searching for the position which best fits the measured signal strengths. Consider a cellular network with a total of $B$ base stations. Assume that, for each of the base stations, we have a probabilistic model that describes the distribution of the received signal strength. More formally, we denote by $p_j(s_j \mid t)$ the likelihood of receiving a signal strength $s_j$ from the $j$-th base station at position $t$. With the models $p_j(s_j \mid t)$, $j = 1, \ldots, B$, given, localization can be done in a straightforward way. The user reports a vector $s$ (of length $B$) of signal strength measurements for all base stations. It may occur that no signal is received from some base stations (indicated by $s_j = \emptyset$), e.g., because the user is too far from the base station, or due to hardware failure. In the GPPS, the estimated position $\hat{t}$ is computed by maximizing the joint likelihood with respect to the unknown position,

$\hat{t} = \arg\max_t \prod_{j: s_j \neq \emptyset} p_j(s_j \mid t)$ .   (1)

In the above equation, we only use the likelihood contributions of those base stations that are actually received. Alternatively, one could use a very low signal strength as a default value for each base station that is not received [7]. We found that this can give high errors if a base station close to the user fails, since the low default value then indicates that one should expect the user to be far from the base station. Thus, by using the above expression, we also obtain a certain degree of robustness with respect to hardware failures and other unexpected effects. Yet, we still need to define and build suitable base station models $p_j(s_j \mid t)$, $j = 1, \ldots, B$. In the GPPS, we use Gaussian process (GP) models for this task, where each base station model is estimated from the calibration data.
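A toy sketch of Eq. (1) with grid-based likelihood evaluation. The per-station models here are simple Gaussians around an assumed linear path-loss mean; the base-station layout, decay constants, and noise model are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Hypothetical base-station positions (meters).
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])

def mean_strength(t, station):
    # Assumed linear path loss in dB with distance; constants are invented.
    return -40.0 - 2.0 * np.linalg.norm(t - station)

def neg_log_lik(t, s, received):
    """Sum of per-station Gaussian negative log-likelihoods,
    skipping stations that were not received (cf. Eq. 1)."""
    nll = 0.0
    for j, sj in enumerate(s):
        if received[j]:
            nll += 0.5 * (sj - mean_strength(t, stations[j])) ** 2
    return nll

true_t = np.array([3.0, 4.0])
s = np.array([mean_strength(true_t, st) for st in stations])
received = [True, True, True]

# Grid-based evaluation of the likelihood over the area of interest.
grid = [np.array([x, yv]) for x in np.linspace(0, 10, 101)
        for yv in np.linspace(0, 10, 101)]
t_hat = min(grid, key=lambda t: neg_log_lik(t, s, received))
```

Setting an entry of `received` to `False` simply drops that station's factor from the product, which is the robustness mechanism described above.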
Gaussian processes are particularly useful here for several reasons. Firstly, one obtains a full predictive distribution, as opposed to the point estimate output by other regression approaches. Secondly, GPs are a nonparametric method that can flexibly adapt to the complex signal propagation behaviour observed in indoor cellular networks. Mind that this approach opens a wide range of possibilities for further extensions. Due to particular project requirements, we currently only use the maximum likelihood position estimate of Eq. (1) ("one-shot localization" without error estimates). Instead of the implicitly assumed uninformative prior in Eq. (1), one could, for example, specify an informative prior based on known previous positions of the user, in conjunction with a motion model. Subsequently, the complete posterior distribution $p(t \mid s)$ can be evaluated for localization. In the following sections, we will describe the GP models in more detail, and also discuss the choice of kernel function, which is of great importance in order to build an accurate localization system.

2.1 Gaussian Process Models for Signal Strengths

In the GPPS, a Gaussian process (GP) approach is used for the models $p_j(s_j \mid t)$ that describe the signal strength received from a single base station $j$. Details on GP models can be found, for example, in [6]; we only give a brief summary here. Recall from Sec. 1.1 that the proposed GPPS is based on a set of calibration measurements, where the signal strength is measured at a number of points spread over the area to be covered. Consider now the calibration data for a single base station $j$. We denote this calibration data by $D_j = \{(x_i, y_i)\}_{i=1}^{N}$, meaning that a signal strength of $y_i$ has been measured at point $x_i$, with a total of $N$ calibration measurements.
For simplicity of computation, we use a GP model with Gaussian noise, i.e., the measured signal strength $y_i$ is composed of a "true" signal strength $s(x_i)$ plus independent Gaussian (measurement) noise $e_i$ of variance $\sigma^2$, with $y_i = s(x_i) + e_i$. The Gaussian process assumption for the true signal $s$ implies that the true signal strengths for all calibration points $(s(x_1), \ldots, s(x_N))$ are jointly Gaussian distributed, with zero mean and covariance matrix $K$. $K$ itself is given by the kernel (covariance) function $k$, with $K_{mn} = k(x_m, x_n)$, $m, n = 1, \ldots, N$. Given the calibration data $D_j$, the predictive distribution for the signal strength $s_j$ received at some arbitrary point $t$ turns out to be Gaussian. With $v(t) = (k(t, x_1), \ldots, k(t, x_N))^\top$, $y = (y_1, \ldots, y_N)^\top$ and $Q = K + \sigma^2 I$, the mean and variance of the prediction are

$E(s_j \mid D_j, t) = v(t)^\top Q^{-1} y$   (2)
$var(s_j \mid D_j, t) = k(t, t) - v(t)^\top Q^{-1} v(t)$   (3)

Using these expressions for the predictive distribution (a univariate Gaussian) in Eq. (1) becomes straightforward. Also, gradients of the likelihood with respect to the position $t$ can be derived easily [8]. Thus, the position estimate, Eq. (1), can be computed easily using either some standard optimization routine, or by evaluating the likelihood on a grid over the area of interest. An important issue is also the choice of the noise variance $\sigma^2$ and the parameters $\theta$ of the kernel function $k$ (which we have not explicitly denoted above). We set them by maximizing the marginal likelihood of the calibration data with respect to the model parameters, which turns out to be [6]

$(\hat{\sigma}^2, \hat{\theta}) = \arg\max_{\sigma^2, \theta} \left( -\log\det Q - y^\top Q^{-1} y \right)$ .   (4)

The model parameters $(\hat{\sigma}^2, \hat{\theta})$ are set individually for each base station.

Footnote (from Eq. (1)): Assuming independence of the individual measurements. One could also use a solution inspired by co-kriging, which takes into account the full dependence between the signals received from different base stations. We did not consider this solution for reasons of efficiency.
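Equations (2) and (3) in code: a minimal sketch (not the authors' implementation), using a squared-exponential kernel as a stand-in for the Matérn kernel the paper uses, on invented toy dB values.

```python
import numpy as np

def gp_predict(X, y, t, kernel, noise_var):
    """Predictive mean and variance, Eqs. (2)-(3), with Q = K + sigma^2 I."""
    K = kernel(X, X)
    Q = K + noise_var * np.eye(len(X))
    v = kernel(X, t[None, :])[:, 0]                      # v(t)
    mean = v @ np.linalg.solve(Q, y)                     # v(t)' Q^{-1} y
    var = kernel(t[None, :], t[None, :])[0, 0] - v @ np.linalg.solve(Q, v)
    return mean, var

# Squared-exponential kernel (stand-in; the paper argues for Matérn).
rbf = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])       # calibration points
y = np.array([-60.0, -70.0, -65.0])                      # toy strengths in dB
mean, var = gp_predict(X, y, np.array([0.0, 0.0]), rbf, noise_var=1e-6)
```

With near-zero noise the prediction at a calibration point reproduces the measured value, and the predictive variance there is close to zero, as Eqs. (2)-(3) require.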
2.2 The Matérn Kernel Function

In our GPPS application, with a 2-dimensional input space for the GP models, the choice of an appropriate kernel function is more critical than in typical machine learning applications with many input dimensions. For the commonly used squared exponential kernel, k(x, x′) = exp(−w‖x − x′‖²), it has been argued [9] that sample paths of such GP models are "infinitely smooth", often leading to unreasonably low predictive variance. In the GPPS, we instead use the Matérn class of kernel functions [9], which allows a continuous parameterization of the smoothness of the sample paths via its parameter ν. Its functional form is

k(x, x′) = M_ν(z) = \frac{2 (\sqrt{ν} z)^ν}{Γ(ν)} K_ν(2\sqrt{ν} z)   (5)

where Γ(ν) is the Gamma function and K_ν(r) is the modified Bessel function of the second kind of degree ν. The parameter ν determines the smoothness (fractal dimension) of the sample paths and can be estimated from the data using Eq. (4). We use an isotropic kernel function with length scale w, thus z² = w‖x − x′‖².

2.3 Learning GP Models with the Matérn Kernel

For efficient solution of Eq. (4), we require derivatives of the Matérn kernel function, Eq. (5), with respect to all its parameters ν, w. Numerical gradients, as used for example by [9], require a large number of evaluations of the Bessel functions and thus lead to a huge computational overhead. To compute the derivatives analytically, we use

∂Γ(ν)/∂ν = Γ(ν) Ψ(ν)   and   ∂K_ν(z)/∂z = −(1/2) (K_{ν−1}(z) + K_{ν+1}(z))   (6)

where Ψ(ν) is the Polygamma function of order 0. To the best of our knowledge, there is no closed-form expression for the derivative of the Bessel function K_ν(z) with respect to its degree ν. We approximate it by ∂K_ν(z)/∂ν = DK_ν(z) ≈ ε⁻¹ (K_{ν+ε}(z) − K_ν(z)). Using these identities, we find for the gradients of the Matérn function, Eq. (5),

∂M_ν(z)/∂z = \frac{ν}{z} M_ν(z) − 2\sqrt{ν} \frac{(\sqrt{ν} z)^ν}{Γ(ν)} [K_{ν−1}(2\sqrt{ν} z) + K_{ν+1}(2\sqrt{ν} z)]

∂M_ν(z)/∂ν = M_ν(z) [\frac{1}{2} + \log(\sqrt{ν} z) − Ψ(ν)] + \frac{2 (\sqrt{ν} z)^ν}{Γ(ν)} [ −\frac{z}{2\sqrt{ν}} (K_{ν−1}(2\sqrt{ν} z) + K_{ν+1}(2\sqrt{ν} z)) + DK_ν(2\sqrt{ν} z) ].   (7)

Based on the above equations, derivatives of Eq. (4) with respect to the model parameters σ², ν, w can be computed using standard matrix algebra; see [6].

3 Optimal Calibration and Model Building

To make the GPPS, as presented in Sec. 2, a practical system, two further issues need to be addressed. Firstly, taking calibration measurements is a very time-consuming (and thus expensive) task. The number of calibration points must therefore be kept as low as possible while retaining high localization accuracy. This question has been addressed in the literature under the name of optimal design: [3] showed that, in a 2-dimensional space, a hexagonal sampling design yields optimal results in terms of integrated mean square error when the covariance structure of the underlying Gaussian process is unknown. We adopt this optimal design for the GPPS system when evaluating it in Sec. 4. Secondly, we assumed a GP model with zero mean in Sec. 2.1, which clearly does not fit the propagation law of radio signals. In the actual GPPS, the GP mean is a linear function of the distance to the base station (when signal strength is given on a logarithmic scale).

The overall process of building the GPPS is summarized as follows. The starting point is the calibration data, with a total of C measurements. At calibration point x_i, i ∈ {1, ..., C}, we receive a signal strength of c_{ij} from base station j, j ∈ {1, ..., B}, or c_{ij} = ∅ if base station j has not been received at x_i (for example, due to signal obstruction). Signal strength is measured in dB; all model fitting is thus done on a logarithmic scale. The calibration data is then split into subsets D_j containing those points where base station j has actually been received, i.e., D_j = {(x_i, c_{ij}) : c_{ij} ≠ ∅}, corresponding to the D_j introduced in Sec. 2.1. For each base station, that is, for each data set D_j, we proceed as follows:

1.
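The kernel of Eq. (5) and its z-derivative from Eq. (7) can be checked numerically. This sketch (our own, not the authors' code) uses scipy's modified Bessel function `kv`; the value M_ν(0) = 1 follows from the small-argument limit of K_ν.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(z, nu):
    """Matern function of Eq. (5): M_nu(z) = 2 (sqrt(nu) z)^nu / Gamma(nu) * K_nu(2 sqrt(nu) z)."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    out = np.ones_like(z)                 # limiting value M_nu(0) = 1
    pos = z > 0
    a = np.sqrt(nu) * z[pos]
    out[pos] = 2.0 * a ** nu / gamma(nu) * kv(nu, 2.0 * a)
    return out

def dmatern_dz(z, nu):
    """Analytic z-derivative from Eq. (7)."""
    a = np.sqrt(nu) * z
    pref = a ** nu / gamma(nu)
    return nu / z * matern(z, nu) - 2.0 * np.sqrt(nu) * pref * (
        kv(nu - 1, 2.0 * a) + kv(nu + 1, 2.0 * a))

# Finite-difference check of the analytic gradient at an arbitrary point
z0, nu0, eps = 0.7, 1.5, 1e-6
numeric = (matern(z0 + eps, nu0)[0] - matern(z0 - eps, nu0)[0]) / (2 * eps)
analytic = dmatern_dz(np.array([z0]), nu0)[0]
```

The same finite-difference pattern is what the approximation DK_ν(z) ≈ ε⁻¹(K_{ν+ε}(z) − K_ν(z)) applies to the degree ν.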
Often, the exact position of base station j is not known.² In this case, we use a simple estimate for the base station position: the average of the 3 calibration points x_i with maximum signal strength y_i. This estimate is rather crude, yet we found it to give sensible results in all of the configurations we have considered. In particular with sparse calibration measurements, more sophisticated estimates of the base station position are difficult to come up with.

2. Compute the distance of each calibration point to the base station (using either the exact or the estimated position obtained in step 1). As the mean function of the GP model, fit a linear model³ to the received signal strength as a function of distance to the base station. Subtract the value of the mean function from the original measurements, and use the modified values in the subsequent GP model fitting procedure.

3. Use Eq. (4) to find optimal parameters for the GP model: the noise variance σ², the Matérn smoothness parameter ν, and the input length scale w.

² When setting up the network, or after modifying the network by moving base stations, the base station positions are often not recorded.
³ Alternatively, one could use a procedure similar to universal kriging, and combine fitting of the mean function with learning the parameters of the kernel function; see Eq. (4).

4 Evaluation in a DECT Network

We tested the accuracy of the GPPS in a large DECT cellular network. In a large assembly hall of 250 × 180 meters, measurements of the signal strengths received from DECT base stations were made at 650 points spread over the hall. In this environment, moving robots, metal constructions, corridors, office cubicles, etc., all affect signal propagation.
We observed very high fluctuation of received signals (up to ±10 dB when repeating measurements, while the total signal range is only −30 to −90 dB), both due to measurement noise and due to dynamical changes of the environment. We compare the GPPS with a nearest-neighbor based localization system (abbreviated NNLoc in the following) that is quite similar to the RADAR [1] approach.⁴ This system finds the calibration measurements that best match the signal strength received at test stage. The best matches are used in a weighted triangulation scheme to compute the location estimate. This method requires careful fine-tuning of parameters, and we have to omit details for brevity here.

Dense Calibration Points

In a first experiment, we investigate the achievable precision of location estimates when using the full set of calibration measurements. We evaluate both the GPPS and the nearest-neighbor based method in a 5-fold cross-validation scheme. The total set of measurements is split into five equally sized parts, four of which are used as the calibration set. The resulting positioning system is tested on the fifth, left-out part. This is repeated five times, so that each point is used as a test point exactly once. We found that, in this setting, the nearest-neighbor based method NNLoc works very well, providing an average localization error of 7 meters. The GPPS performs slightly worse, with an average error of 7.5 meters. With the GPPS, localization is typically based on around 15 base stations, that is, 15 likelihood terms contributing to Eq. (1). Unfortunately, such a high number of calibration measurements is unlikely to be available in practice. Taking calibration measurements is a very costly process, in particular if larger areas need to be covered. Thus, one is very much interested in keeping the number of calibration points as low as possible.
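The 5-fold cross-validation scheme described above is straightforward to implement. In this illustrative sketch, `build` and `evaluate` are placeholders standing in for fitting and testing an arbitrary positioning system; they are not part of the paper's implementation.

```python
import numpy as np

def five_fold_cv(X, y, build, evaluate, seed=0):
    """Split the data into five parts; build on four, test on the fifth, rotate."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(X)), 5)
    errors = []
    for k in range(5):
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        test = folds[k]                       # each point is a test point exactly once
        model = build(X[train], y[train])
        errors.append(evaluate(model, X[test], y[test]))
    return float(np.mean(errors))

# Dummy example: the "model" is just the mean signal strength of the calibration part
X = np.arange(20, dtype=float).reshape(-1, 1)
y = np.full(20, -60.0)
err = five_fold_cv(X, y,
                   build=lambda Xc, yc: yc.mean(),
                   evaluate=lambda m, Xt, yt: float(np.mean(np.abs(yt - m))))
```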
Experiments with Sparse Calibration Points

In the second experimental setup, we aim at building the positioning system with only a minimal number of calibration points. Again, 5-fold cross-validation was performed. After splitting the data into five parts, we select subsets of C̃ = 100, 50, 25, 12 points, either at random or simulating the optimal design, from the union of four of these parts. The localization system is built from these C̃ points and evaluated on the fifth part of the data. In order to simulate a near-optimal design (see Sec. 3), we superimpose a hexagonal grid with C̃ points on the area under consideration. Out of the given calibration measurements, we select those C̃ points that are closest (in terms of Euclidean distance) to the grid points.

In Fig. 1 we plot the localization accuracy, averaged over the 5-fold cross-validation, of the GPPS and the nearest-neighbor based system built on only C̃ ∈ {100, 50, 25, 12} calibration measurements. It can be clearly seen that the GPPS system (with optimal design) achieves high precision for its location estimates, even when using only a minimal number of calibration measurements. With only 12 calibration measurements, GPPS achieves an average error of around 17 meters, while the competing method reaches only 29 meters at best.

Figure 1: Mean localization error of the GPPS and the NNLoc method, as a function of the number of calibration points used. Vertical bars indicate ±1 standard deviation of the mean localization error. The calibration points are either selected at random or according to an optimal design criterion.

⁴ We also investigated localization using Eq. (1) with a simplistic propagation model, where the expected signal (on a log scale) is a linear function of the distance to the base station. Yet this approach led to very poor localization accuracy, and is thus not considered in more detail here.
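The near-optimal design simulation described above, superimposing a hexagonal grid of C̃ points and keeping the closest available measurements, might be sketched as follows. The grid construction here is one simple way to lay out an approximately hexagonal lattice, not necessarily the paper's exact procedure.

```python
import numpy as np

def hexagonal_grid(width, height, n_target):
    """Roughly n_target points on a hexagonal lattice covering a width x height area."""
    dx = np.sqrt(width * height / (n_target * np.sqrt(3) / 2))  # spacing within a row
    dy = dx * np.sqrt(3) / 2                                    # row spacing
    pts = []
    for r, yc in enumerate(np.arange(dy / 2, height, dy)):
        offset = dx / 2 if r % 2 else 0.0                       # shift alternate rows
        pts.extend((xc, yc) for xc in np.arange(dx / 2 + offset, width, dx))
    return np.array(pts)

def select_calibration_points(X, grid):
    """For each grid point, keep the closest available calibration measurement."""
    chosen = {int(np.argmin(np.linalg.norm(X - g, axis=1))) for g in grid}
    return sorted(chosen)

# 650 measurement points in a 250 x 180 m hall, thinned to ~25 near-hexagonal points
rng = np.random.default_rng(1)
X = rng.uniform([0, 0], [250, 180], size=(650, 2))
grid = hexagonal_grid(250, 180, 25)
subset = select_calibration_points(X, grid)
```

Note that two grid points may share a nearest measurement, so the selected subset can be slightly smaller than C̃.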
In this setting, the average distance between calibration measurements is around 75 meters. Both the NNLoc system and the GPPS system show large improvements in performance when the calibration points are selected according to the optimal design instead of purely at random. Also, note that the localization error of the GPPS system degrades only slowly as the number of calibration measurements is reduced. In contrast, the curves for the nearest-neighbor based method show a sharper increase in positioning error.

It is worth noticing that the choice of kernel function has a strong impact on the localization accuracy of the GPPS. In Fig. 2(a), we plot a comparison of the GPPS with either the Matérn kernel, Eq. (5), or an RBF kernel of the form k(x, x′) = exp(−w‖x − x′‖). GP models with RBF kernels tend to be over-optimistic [9] about the predictive variance, Eq. (3), which in turn leads to overly tight position estimates. Thus, the accuracy of GPPS with the RBF kernel is clearly inferior to that of GPPS with the Matérn kernel. It is also interesting to consider different methods for selecting the calibration points. Fig. 2(b) plots the accuracy obtained with GPPS when calibration points are placed randomly, on a hexagonal grid (the theoretically optimal procedure), or on a square grid. Somewhat counterintuitively, a square grid for calibration gives a performance that is no better than, and sometimes worse than, random placement. In contrast, localization with NNLoc performs about the same with either a hexagonal or a square grid (not plotted in the figure).

5 Conclusions

In this article, we presented a novel approach to solving the localization problem in indoor cellular networks. Gaussian process (GP) models with the Matérn kernel function were used as models for individual base stations, so that location estimates could be computed using maximum likelihood.
We showed that this new Gaussian process positioning system (GPPS) can provide sufficiently high accuracy when used within a DECT network. A particular advantage of the GPPS is that it can be based on only a small number of calibration measurements and yet retain high accuracy. Furthermore, we showed how calibration points can be chosen optimally in order to provide high-accuracy position estimates.

Figure 2: Average localization error of the GPPS method with different kernel functions (left) and different methods for placing calibration points (right). (a) GPPS using either the Matérn or the RBF kernel function; (b) GPPS with calibration measurements placed either randomly, on a square grid, or on a hexagonal grid (optimal design).

Acknowledgments

Anton Schwaighofer gratefully acknowledges support through an Ernst-von-Siemens scholarship.

References

[1] Bahl, P., Padmanabhan, V. N., and Balachandran, A. A software system for locating mobile users: Design, evaluation, and lessons, 2000. Revised version of Microsoft Research Technical Report MSR-TR-2000-12, available from the authors' web pages.
[2] Castro, P., Chiu, P., Kremenek, T., and Muntz, R. A probabilistic room location service for wireless network environments. In Proceedings of the 3rd International Conference on Ubiquitous Computing (Ubicomp 2001), 2001.
[3] Hamprecht, F. A. and Agrell, E. Exploring a space of materials: Spatial sampling design and subset selection. In J. N. Cawse, ed., Experimental Design for Combinatorial and High Throughput Materials Development. John Wiley & Sons, 2002.
[4] Hashemi, H. The indoor radio propagation channel. Proceedings of the IEEE, 81(7):943–968, 1993.
[5] Ladd, A. M., Bekris, K. E., Rudys, A., Marceau, G., Kavraki, L. E., and Wallach, D. S. Robotics-based location sensing using wireless ethernet. In Proceedings of the Eighth ACM International Conference on Mobile Computing and Networking (MOBICOM 2002), 2002.
[6] Rasmussen, C. E.
Evaluation of Gaussian Processes and other methods for non-linear regression. Ph.D. thesis, University of Toronto, 1996.
[7] Roos, T., Myllymäki, P., Tirri, H., Misikangas, P., and Sievänen, J. A probabilistic approach to WLAN user location estimation. International Journal of Wireless Information Networks, 9(3):155–164, 2002.
[8] Schwaighofer, A. Kernel Systems for Regression and Graphical Modelling. Ph.D. thesis, Institute for Theoretical Computer Science, Graz University of Technology, Austria, 2003.
[9] Stein, M. Interpolation of Spatial Data. Some Theory for Kriging. Springer Verlag, 1999.
Reconstructing MEG Sources with Unknown Correlations

Maneesh Sahani, W. M. Keck Foundation Center for Integrative Neuroscience, UC San Francisco, CA 94143-0732, maneesh@phy.ucsf.edu
Srikantan S. Nagarajan, Biomagnetic Imaging Laboratory, Department of Radiology, UC San Francisco, CA 94143-0628, sri@radiology.ucsf.edu

Abstract

Existing source location and recovery algorithms used in magnetoencephalographic imaging generally assume that the source activity at different brain locations is independent or that the correlation structure is known. However, electrophysiological recordings of local field potentials show strong correlations in aggregate activity over significant distances. Indeed, it seems very likely that stimulus-evoked activity would follow strongly correlated time-courses in different brain areas. Here, we present, and validate through simulations, a new approach to source reconstruction in which the correlation between sources is modelled and estimated explicitly by variational Bayesian methods, facilitating accurate recovery of source locations and the time-courses of their activation.

1 Introduction

The brain's neuronal activity generates weak magnetic fields (10 fT – 1 pT). Magnetoencephalography (MEG) is a non-invasive technique for detecting and characterising these magnetic fields. MEG sensors use super-conducting quantum interference devices (SQUIDs) to measure the changes in the brain's magnetic field on a millisecond time-scale. When combined with electromagnetic source localisation, magnetic source imaging (MSI) becomes a functional brain imaging method that allows us to characterise macroscopic dynamic neural information processing. In the past decade, the development of MSI source reconstruction algorithms has progressed significantly [1]. Currently, there are two general approaches to estimating MEG sources: parametric methods and imaging methods [2].
With parametric methods, a few current dipoles of unknown location and moment are assumed to represent the sources of activity in the brain. In this case, solving the inverse problem requires a non-linear optimisation to estimate the position and magnitude of an unknown number of dipoles. With imaging methods, a grid of voxels is used to represent the entire brain volume. The inverse problem is then to recover whole-brain activation images, represented by the time-dependent moment and magnitude of an elementary dipole source located at each voxel. This formulation leads to a linear forward model. However, the ill-posed nature of the problem leads to non-unique solutions, which must be distinguished by prior information, usually in the form of assumptions regarding the correlation between the sources. In this paper, we formulate a general spatiotemporal imaging model for MEG data. Our formulation makes no assumptions about the correlation of the sources; instead, we estimate the extent of the correlation by an evidence optimisation procedure within a variational Bayesian framework [3].

1.1 MEG imaging

Many standard MEG devices measure the radial gradient of the magnetic field at a number, db, of sensor locations (typically arranged on a segment of a sphere). Measurements made at a single time can be formed into a db-dimensional vector b; an experiment yields a series of N such samples, giving a db × N data matrix B. This measured field-gradient is affected by a number of different processes. The component we seek to isolate is stimulus- or event-related, and is presumably contributed to by significant activity at a relatively small number of locations in the brain. This signal is corrupted by thermal noise at the sensors, and by widespread spontaneous, unrelated brain activity.
For our purposes, these are both sources of noise, whose distributions are approximately normal [2] (in the case of the unrelated brain activity, the normality results from the fact that any one sensor sees the sum of effects from a large number of locations). The covariance matrix of this noise, Ψ, can be measured approximately by accumulating sensor readings in a quiescent state; simulations suggest that the techniques presented here are reasonably tolerant to mis-estimation of the noise level. Measurements are also affected by other forms of interference associated with experimental electronics or bio-magnetic activity external to the brain. We do not treat such interference explicitly here, instead assuming that major sources have been removed by preprocessing the measured data, e.g., by using blind source separation methods [4].

To represent the significant brain sources, we divide the volume of the brain (or a subsection of that volume that contains the sources) into a number of voxels and then calculate the lead-field matrix L that linearly relates the strength of a current dipole, in each orientation at each voxel, to the sensor measurements. For simplicity, we assume a spherical volume-conductor model, which permits analytical calculation of L independent of the tissue conductivity [2], and which is reasonably accurate for most brain regions [1]. (Non-uniform volume conduction properties of the brain and surrounding tissues can be explicitly accounted for by elaborating the lead-field matrix calculation, but they do not otherwise affect the analysis presented below.) In the simple model, only the two tangential components of the current dipole, which fall orthogonal to the radial direction, contribute to b, and so the source vector s has a dimension ds which is twice the number of voxels dv. The source matrix S associated with the N field measurements has dimensions ds × N.
Thus the probabilistic forward model for MEG measurements is given by

b ∼ N(Ls, Ψ).   (1)

Without considerable prior knowledge of the pattern of brain activation, the number of possible degrees of freedom in the source vector, ds, will be far greater than the number of measurements, db; and so there is no unique maximum-likelihood estimate of s. Instead, attempts at source recovery depend, either implicitly or explicitly, on the application of prior knowledge about the source distribution. Most existing methods constrain the source locations and/or activities in various ways: based on anatomical or fMRI data; by maximum entropy, minimum L1-norm, weighted-minimum L2-norm or maximum smoothness priors; or to achieve optimal resolution [1]. Most of these constraints can be formulated as priors for maximum a posteriori estimation of the sources (although the original statements do not always make such priors explicit). In addition, some studies have also included temporal constraints on sources, such as smoothness or phase-locking between sources [5].

Consider, for example, linear estimates of s given by ŝ = F′b. The optimal estimate (in a least-squares sense) is given by the Wiener filter:

F = ⟨bb′⟩⁻¹⟨bs′⟩ = ⟨bb′⟩⁻¹⟨(Ls + n)s′⟩ = ⟨bb′⟩⁻¹L⟨ss′⟩,   (2)

(where n ∼ N(0, Ψ) is a noise vector uncorrelated with s) and therefore requires knowledge of the source correlation matrix ⟨ss′⟩. One approach to source reconstruction, the minimum-variance adaptive beamformer (or "beamformer" for short), can be viewed as an approximation to the Wiener filter in which the correlation matrix of sensor measurements ⟨bb′⟩ is estimated by the observed correlation BB′/N, and the sources at each location are taken to be uncorrelated [6]. If the orientation of each source dipole is known or estimated independently (so that s contains only one magnitude at each location), then the source correlation matrix ⟨ss′⟩ reduces to a diagonal matrix of gain factors.
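A small numpy sketch of the Wiener filter of Eq. (2), assuming the model covariances are known: under the forward model of Eq. (1), ⟨bb′⟩ = L⟨ss′⟩L′ + Ψ. The toy dimensions and the rank-3 correlated-source construction below are illustrative choices of ours.

```python
import numpy as np

def wiener_filter(L, S_cov, Psi):
    """F = <bb'>^{-1} L <ss'>, with <bb'> = L <ss'> L' + Psi (Eqs. 1-2)."""
    B_cov = L @ S_cov @ L.T + Psi
    return np.linalg.solve(B_cov, L @ S_cov)

rng = np.random.default_rng(2)
d_b, d_s, d_z, N = 20, 50, 3, 200
L = rng.standard_normal((d_b, d_s))          # stand-in lead-field matrix
W = rng.standard_normal((d_s, d_z))          # sources confined to a rank-3 subspace
S = W @ rng.standard_normal((d_z, N))        # correlated source time-courses
Psi = 0.1 * np.eye(d_b)                      # sensor noise covariance
B = L @ S + np.sqrt(0.1) * rng.standard_normal((d_b, N))
F = wiener_filter(L, W @ W.T, Psi)           # uses the true correlation <ss'> = WW'
S_hat = F.T @ B                              # s_hat = F' b
```

With the true source correlation supplied, the recovery is accurate; the point of the paper is precisely that ⟨ss′⟩ is unknown in practice and must be estimated.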
For the beamformer, these factors are chosen to give a unit "loop gain" for each source, i.e., such that diag[F′L] = 1. It can be shown that the beamformer yields accurate results only when the number of active sources is small [7]. Thus, this approach makes two assumptions about the sources: an explicit one of decorrelation and an implicit one of sparse activation. Other techniques tend to make similar assumptions. A related algorithm using Multiple Signal Classification (MUSIC) also assumes sparsity and linear independence in the time-series of the sources [1]. Minimum-norm methods can also be viewed as making specific assumptions about the source correlation matrix [8].

In sharp contrast to the assumed independence or known correlation of brain activity in these algorithms, electrophysiological studies have shown pronounced and variable correlations in local potentials measured in different (sometimes widely separated) regions of the brain, and indeed have argued that these correlations reflect relevant aspects of brain processing [9, 10]. This simple observation has profound consequences for most current MEG imaging algorithms. Not only are they unable to access this source of temporal information about brain function (despite the temporal fidelity of the technique in other respects), but they may also provide inaccurate source localisations or reconstructions by dint of their incorrect assumptions regarding source correlation.

In this paper, we present a novel approach to source reconstruction. Our technique shares with many of the methods described above the assumption of sparsity in source activation. However, it dispenses entirely with the assumption of source independence. Instead, we estimate the source correlation matrix from the data by hyperparameter optimisation.
2 Model

To parameterise the source correlation matrix in a manner tractable for learning, we assume that the source activities s are formed by a linear combination, with weight matrix W, of dz independent unit-variance normal pre-sources z:

s = Wz,   z ∼ N(0, I),   (3)

so that learning the correlation matrix ⟨ss′⟩ = WW′ becomes equivalent to estimating the weights W.¹ The sources are not really expected to have the Gaussian amplitude distribution that this construction implies. Instead, the assumption forms a convenient fiction, making it easy to estimate the source correlation matrix. We show in simulations below that estimation in this framework can indeed yield accurate estimates of the correlation matrix even for non-normally distributed source activity. Once the correlation matrix has been established, estimation using the Wiener filter of (2) provides the best linear estimate of source activity (and would be the exact maximum a posteriori estimate if the sources really were normally distributed).

¹ This formulation is similar to that used in weighted minimum-norm methods, although there the weights W are fixed, implying a pre-determined source correlation matrix.

The model of (3) parameterises the source correlation in a general way, subject to a maximum rank of dz. This rank constraint does not by itself favour sparsity in the source distribution, and dz could easily be chosen equal to ds. Instead, sparsity emerges from a hyperparameter optimisation similar to the automatic relevance determination (ARD) of MacKay and Neal [11] (see also [12, 13]). Equation (3) defines a prior on s with parameters W. We now add a hyperprior on W under which the expected power of both tangential components at the vth voxel is determined by a hyperparameter αv. For notational convenience we collect the αv into a vector α and introduce a ds × dv indicator matrix J, with Jiv = 1 if the ith source is located in the vth voxel and 0 otherwise.
Thus, each column of J contains exactly two unit entries, one for each tangential component of the corresponding voxel dipole. Finally, we introduce a ds × ds diagonal matrix A with A_ii = (Jα)_i. Then

W_ij ∼ N(0, A_ii⁻¹).   (4)

Thus each αv sets a prior distribution on the length of the two rows of the weight matrix corresponding to the source components at the vth voxel. As in the original ARD models, optimisation of the marginal likelihood or evidence, P(B | α, L, Ψ), with respect to the αv results in a number of the hyperparameters diverging to infinity. This imposes a zero-centred delta-function prior on the corresponding row of W, in turn forcing the corresponding source power to vanish. It is this optimisation, then, which introduces the sparsity.

Before passing to the optimisation scheme, we summarise the model introduced above by the log joint probability it assigns to observations, pre-sources and weights (here, and below, we drop the explicit conditioning on the fixed parameters L and Ψ):

log P(B, Z, W | α) = −½ (N log|2πΨ| + Tr[(B − LWZ)′Ψ⁻¹(B − LWZ)])
  − ½ (N dz log(2π) + Tr[Z′Z])
  − ½ (dz log|2πA⁻¹| + Tr[W′AW])   (5)

3 Learning

Direct optimisation of the log marginal likelihood log ∫ dZ dW P(B, Z, W | α) proves to be intractable. Instead, we adopt the "variational Bayes" (VB) framework of [3, 12]. VB is a form of the Expectation-Maximisation (EM) algorithm for maximum-likelihood estimation. Given unknown distributions Qz(Z) and Qw(W), Jensen's inequality provides a bound on the log-likelihood:

log P(B | α) = log ∫ dZ dW Qz(Z)Qw(W) · P(B, Z, W | α) / (Qz(Z)Qw(W))
  ≥ ⟨log P(B, Z, W | α)⟩_{Qz(Z)Qw(W)} + H(Qz) + H(Qw)

(where H(·) represents the Shannon entropy). This bound can then be optimised by alternating maximisations with respect to Qz, Qw and the hyperparameters α. If, in place of the factored distribution Qz(Z)Qw(W), we had used a joint Q(Z, W), this procedure would be guaranteed to find a local maximum of the marginal likelihood (by analogy with EM).
As it is, the optimisation is only approximate, but has been found to yield good maxima in a factor analysis model very similar to the one we consider here [12]. In our experiments, a slight variant of the standard VB procedure, described below, improved further on the accuracy of the solutions found. Given estimates Qz^n, Qw^n and α^n at the nth step, the (n+1)th iteration is given by:

Qz^{n+1}(Z) ∝ exp ⟨log P(B, Z, W | α^n)⟩_{Qw^n} = N(Σz^{n+1} ⟨W⟩′_{Qw^n} L′Ψ⁻¹B, Σz^{n+1}),
  with Σz^{n+1} = ⟨W′L′Ψ⁻¹LW + I⟩⁻¹_{Qw^n};

Qw^{n+1}(W) ∝ exp ⟨log P(B, Z, W | α^n)⟩_{Qz^{n+1}} = N(Σw^{n+1} vec(L′Ψ⁻¹B ⟨Z′⟩_{Qz^{n+1}}), Σw^{n+1}),
  with Σw^{n+1} = (⟨ZZ′⟩_{Qz^{n+1}} ⊗ L′Ψ⁻¹L + I ⊗ A^n)⁻¹;

αv^n = dz [J′ diag(⟨W⟩_{Qw^{n+1}} ⟨W⟩′_{Qw^{n+1}})]v⁻¹ ((J′1)v − αv (J′ diag[Σw^{n+1}])v),

where the normal distribution on Z implies a normal distribution on each column z; the distribution on W is normal on vec(W)²; 1 is a vector of ones; and the diag[·] operator returns the main diagonal of its argument as a vector.

Our experience is that better results can be obtained if the posterior expectation of ZZ′ in the Qw update is replaced by its value under the prior on Z, NI. This variant appears to constrain the factored posterior to remain closer to the true joint distribution. It has the additional benefit of simplifying both the notational and computational complexities of the updates (for the latter, it reduces the complexity of the inversion needed to calculate Σw from (ds dz)³ to ds³). We can then rewrite the updates in a more compact form by using this assumption, and by evaluating the expectations, to obtain

Σz^{n+1} = (W^n′ L′Ψ⁻¹L W^n + Tr[L′Ψ⁻¹L Σw^n] I + I)⁻¹   (6a)
Σw^{n+1} = (N L′Ψ⁻¹L + A^n)⁻¹ = (A^n)⁻¹ − (A^n)⁻¹L′(N⁻¹Ψ + L(A^n)⁻¹L′)⁻¹L(A^n)⁻¹   (6b)
W^{n+1} = Σw^{n+1} L′Ψ⁻¹BB′Ψ⁻¹L W^n Σz^{n+1}   (6c)
αv^{n+1} = dz [J′ diag(W^{n+1} W^{n+1}′)]v⁻¹ ((J′1)v − αv^n (J′ diag[Σw^{n+1}])v)   (6d)

where W^n = ⟨W⟩_{Qw^n}.
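A direct numpy transcription of the compact updates (6a)-(6d) might look as follows. This is a sketch: it uses the plain inverse in (6b) rather than the inversion-lemma form, and the toy problem, the clipping of α, and the fixed iteration count are illustrative choices of ours, not part of the algorithm as stated.

```python
import numpy as np

def vb_updates(B, L, Psi, J, d_z, n_iter=50, seed=3):
    """Iterate Eqs. (6a)-(6d) to estimate W (hence <ss'> = WW') and alpha."""
    d_s, N = L.shape[1], B.shape[1]
    Pinv = np.linalg.inv(Psi)
    G = L.T @ Pinv @ L                        # L' Psi^{-1} L, reused throughout
    C = L.T @ Pinv @ B @ B.T @ Pinv @ L       # data enter only through BB'
    alpha = np.ones(J.shape[1])
    W = 0.01 * np.random.default_rng(seed).standard_normal((d_s, d_z))
    for _ in range(n_iter):
        A = np.diag(J @ alpha)                # A_ii = (J alpha)_i
        Sw = np.linalg.inv(N * G + A)                                             # (6b)
        Sz = np.linalg.inv(W.T @ G @ W + (np.trace(G @ Sw) + 1.0) * np.eye(d_z))  # (6a)
        W = Sw @ C @ W @ Sz                                                       # (6c)
        num = J.T @ np.ones(d_s) - alpha * (J.T @ np.diag(Sw))                    # (6d)
        alpha = np.clip(d_z * num / np.maximum(J.T @ np.diag(W @ W.T), 1e-12),
                        1e-6, 1e15)           # clip to keep the iteration stable
    return W, alpha

# Toy problem: 5 voxels (10 tangential components), only the first voxel active
rng = np.random.default_rng(4)
d_v, d_z, d_b, N = 5, 2, 8, 100
J = np.kron(np.eye(d_v), np.ones((2, 1)))    # two unit entries per column of J
L = rng.standard_normal((d_b, 2 * d_v))
S = np.zeros((2 * d_v, N))
S[:2] = rng.standard_normal((2, N))
B = L @ S + 0.1 * rng.standard_normal((d_b, N))
Psi = 0.01 * np.eye(d_b)
W, alpha = vb_updates(B, L, Psi, J, d_z)
```

Hyperparameters αv that diverge toward the upper clip value correspond to voxels pruned by the ARD mechanism described in Sec. 2.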
The use of the matrix inversion lemma in (6b) exploits the diagonality of A to reduce the computational complexity of the algorithm with respect to ds. The formulae of (6) are easily implemented and recover an estimate of W, and thus of the source correlation matrix, by iteration. The source activities can then be estimated using the Wiener filter (2).

The updates of (6) also demonstrate an important point concerning the validity of our Gaussian model. Note that the measured data enter the estimation procedure only through their correlation BB′. In other words, the hyperparameter optimisation stage of our algorithm is only being used to model the data correlation, not the data amplitudes. As a result, the effects of incorrectly assuming a Gaussian source amplitude distribution can be expected to remain relatively benign.

4 Simulations

Simulation studies provide an important tool for evaluating source recovery algorithms, in that they provide "sensor" data sets for which the correct answer (i.e. the true locations and time-courses of the sources) is known. We report here the results of simulations carried out using parameters similar to those that might be encountered in realistic recordings.

4.1 Methods

We simulated 100 1-s-long epochs of evoked response data. The sensor configuration was taken from a real experiment: two sensor arrays, with 37 gradiometer coils each, were located on either side of the head (see figure 1).² Candidate source dipoles were located on a grid with 1 cm spacing within a hemispherical brain volume with a radius of 8 cm, giving a total of 956 possible source locations. Significant (above background) evoked activity was simulated at 5 of these locations (see figure 1a), with random dipole orientations.

² For a discussion of the vec operator and the Kronecker product ⊗, see e.g. [14].
The evoked waveforms were similar in form to the evoked responses seen in many areas of the brain (see figure 2a), and were strongly correlated between the five sites (figure 3a). The two most lateral sites (one on each side) expressed bilateral primary sensory activation, and had identical time-courses with the shortest latency. Another lateral site, on the left side, had activity with the same waveform, but delayed by 50 ms. Two medial sites had slower and more delayed activation profiles. The dipole orientation at each site was chosen randomly in the plane parallel to the sensor tangent. Note that the amplitude distribution of these sources is strongly non-Gaussian; we will see, however, that they can be recovered successfully by the present technique despite its assumption of normality.

The simulated sensor recordings were corrupted by noise from two sources, both with Gaussian distributions. Background activity in the brain was simulated with equal power at every point on the grid of candidate sources, with a root-mean-square (RMS) amplitude 1.5 decades below that of the 5 significant sources. Although this background activity was uncorrelated between brain locations, it resulted in correlated disturbances at the magnetic sensors. Thermal noise in the sensors was uncorrelated, and had a magnitude (at the sensors) similar to that of the background noise.

The novel Bayesian estimation technique was applied to the raw simulated sensor trace rather than to epoch-averaged data. While in this simulation the evoked activity was identical in each trial, determining the correlation matrix from unaveraged data should, in the more general case, make single-trial reconstructions more accurate. Once reconstructed, the source time-courses were averaged, and are shown in figure 2. The number of pre-sources dz, a free parameter in the algorithm, was set to 10. Sources associated with inverse-variance hyperparameters αi above a threshold (here 10¹⁵) were taken to be inactive.
For comparison, we also reconstructed sources using the vector minimum-variance adaptive beamformer approach [15]. Note that this technique, along with many other existing reconstruction methods, assumes that sources at different locations are uncorrelated, and so it should not be expected to perform well under the conditions of our simulation.

4.2 Results

Figure 1 shows the source locations and powers reconstructed by the novel Bayesian approach developed here (b) and by the beamformer (c). The Bayesian approach identified the correct number of sources, at the correct locations and with approximately correct relative powers. By contrast, the beamformer approach, which assumes uncorrelated sources, entirely failed to locate the sources of activity. Figure 2b shows the average evoked-response reconstruction at each of the identified source locations (with the simulated waveforms shown in panel a). The general time-course of the activities has clearly been well characterised. The time-courses estimated by the vector beamformer are shown in figure 2c. As beamformer localisation proved to be unreliable, the time-courses shown are the reconstructions at the positions of the correct (simulated) sources. Nonetheless, the strong correlations in the sources have corrupted the reconstructions. Note that the only difference between the time-courses shown in figure 2b and c is premultiplication by the estimated source correlation matrix in b. Finally, figure 3 shows the correlation coefficient matrices for the dipole amplitude time-courses of the active sources shown in figure 2. We see that the Bayesian approach finds a reasonable approximation to the correct correlation structure. Again, however, the beamformer is unable to accurately characterise the correlation matrix.

Figure 1: Reconstructed source power. Each dot represents a single voxel; the size and shade of the superimposed circles indicate the relative power of the corresponding source.
Each column contains two orthogonal projections of the same source distribution: (a) simulated sources, (b) reconstruction by evidence optimisation, (c) beamformer reconstruction (powers have been compressed to make smaller sources more visible).

Figure 2: Source waveforms at active locations. Sources are numbered from left to right in the brain. The two traces for each location show the dipole components in two orthogonal directions; time within epoch runs from 0 to 1000 ms. (a) simulated waveforms; (b) waveforms reconstructed by our novel algorithm; (c) waveforms reconstructed by beamforming (at the simulated locations).

Figure 3: Source correlation coefficient matrices. Correlations were computed between epoch-averaged dipole amplitude time-courses at each location. The size of each square indicates the magnitude of the corresponding coefficient (the maximum value being 1), with white squares positive and black squares negative. (a) simulated sources; (b) sources reconstructed by our novel algorithm; (c) sources reconstructed by beamforming.

5 Conclusions

We have demonstrated a novel evidence-optimisation approach to the location and reconstruction of dipole sources contributing to MEG measurements. Unlike existing methods, this new technique does not assume a correlation structure for the sources, instead estimating it from the data. As such, this approach holds great promise for high-fidelity imaging of correlated magnetic activity in the brain.

Acknowledgements

We thank Dr. Sekihara for useful discussions.
This work is funded by grants from the Whitaker Foundation and from NIH (1R01004855-01A1).

References

[1] S. Baillet, J. C. Mosher, and R. M. Leahy. IEEE Signal Processing Magazine, 18(6):14–30, 2001.
[2] M. Hämäläinen, R. Hari, R. Ilmoniemi, J. Knuutila, and O. V. Lounasmaa. Rev. Mod. Phys., 65:413–97, 1993.
[3] H. Attias. In S. A. Solla, T. K. Leen, and K.-R. Müller, eds., Adv. Neural Info. Processing Sys., vol. 12. MIT Press, 2000.
[4] A. C. Tang, B. A. Pearlmutter, N. A. Malaszenko, D. B. Phung, and B. C. Reeb. Neural Comput., 14(8):1827–58, 2002.
[5] O. David, L. Garnero, D. Cosmelli, and F. J. Varela. IEEE Trans. Biomed. Eng., 49(9):975–87, 2002.
[6] K. Sekihara and B. Scholz. IEEE Trans. Biomed. Eng., 43(3):281–91, 1996.
[7] K. Sekihara, S. S. Nagarajan, D. Poeppel, and A. Marantz. IEEE Trans. Biomed. Eng., 49(12):1234–46, 2002.
[8] C. Phillips, M. D. Rugg, and K. J. Friston. Neuroimage, 16(3):678–95, 2002.
[9] E. Rodriguez, N. George, J. P. Lachaux, J. Martinerie, B. Renault, and F. J. Varela. Nature, 397(6718):430–3, 1999.
[10] C. Bernasconi, A. von Stein, and C. Chiang. Neuroreport, 11(4):689–92, 2000.
[11] D. J. C. MacKay. In ASHRAE Transactions, V.100, Pt.2, pp. 1053–1062. ASHRAE, 1994.
[12] Z. Ghahramani and M. Beal. In S. A. Solla, T. K. Leen, and K.-R. Müller, eds., Adv. Neural Info. Processing Sys., vol. 12. MIT Press, 2000.
[13] M. Sahani and J. F. Linden. In S. Becker, S. Thrun, and K. Obermayer, eds., Adv. Neural Info. Processing Sys., vol. 15. MIT Press, 2003.
[14] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. CUP, 1991.
[15] K. Sekihara, S. S. Nagarajan, D. Poeppel, A. Marantz, and Y. Miyashita. IEEE Trans. Biomed. Eng., 48(7):760–71, 2001.
Increase information transfer rates in BCI by CSP extension to multi-class

Guido Dornhege¹, Benjamin Blankertz¹, Gabriel Curio², Klaus-Robert Müller¹,³
¹Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
²Neurophysics Group, Dept. of Neurology, Klinikum Benjamin Franklin, Freie Universität Berlin, Hindenburgdamm 30, 12203 Berlin, Germany
³University of Potsdam, August-Bebel-Str. 89, 14482 Potsdam, Germany
{dornhege,blanker,klaus}@first.fraunhofer.de, curio@zedat.fu-berlin.de

Abstract

Brain-Computer Interfaces (BCI) are an interesting emerging technology that is driven by the motivation to develop an effective communication interface translating human intentions into a control signal for devices like computers or neuroprostheses. If this can be done bypassing the usual human output pathways like peripheral nerves and muscles, it can ultimately become a valuable tool for paralyzed patients. Most activity in BCI research is devoted to finding suitable features and algorithms to increase information transfer rates (ITRs). The present paper studies the implications of using more classes, e.g., left vs. right hand vs. foot, for operating a BCI. We contribute by (1) a theoretical study showing under some mild assumptions that it is practically not useful to employ more than three or four classes, (2) two extensions of the common spatial pattern (CSP) algorithm, one interestingly based on simultaneous diagonalization, and (3) controlled EEG experiments that underline our theoretical findings and show excellent improvements in ITRs.

1 Introduction

The goal of a Brain-Computer Interface (BCI) is to establish a communication channel for translating human intentions – reflected by suitable brain signals – into a control signal for, e.g., a computer application or a neuroprosthesis (cf. [1]).
If the brain signal is measured non-invasively by an electroencephalogram (EEG), if short training and preparation times are feasible, and if it is possible to achieve high information transfer rates (ITRs), this interface can become a useful tool for disabled patients or an interesting gadget in the context of computer games. Recently, some approaches have been presented (cf. [1, 2]) which are good candidates for successfully implementing such an interface. In a BCI system a subject tries to convey her/his intentions by behaving according to well-defined paradigms, like imagination of specific movements. An effective discrimination of different brain states is important in order to implement a suitable system for human subjects. Therefore appropriate features have to be chosen by signal processing techniques according to the selected paradigm. These features are translated into a control signal, either by simple threshold criteria (cf. [1]), or by machine learning techniques where the computer learns a decision function from some training data [1, 3, 4, 5, 6]. For non-invasive BCI systems that are based on discrimination of voluntarily induced brain states, three approaches are characteristic. (1) The Tübingen Thought Translation Device (TTD) [7] enables subjects to learn self-regulation of slow cortical potentials (SCP), i.e., electrocortical positivity and negativity. After some training in experiments with vertical cursor movement as feedback, navigated by the SCP from a central scalp position, patients are able to generate binary decisions at a 4–6 second pace with an accuracy of up to 85 %. (2) Users of the Albany BCI system [8] are able to control a cursor movement by their oscillatory brain activity into one of two or four possible targets on the computer screen, and achieve over 90 % hit rates after adapting to the system during many feedback sessions, with a selection rate of 4 to 5 seconds in the binary decision problem.
And (3), based on event-related modulations of the pericentral µ- and/or β-rhythms of sensorimotor cortices (with a focus on motor preparation and imagination), the Graz BCI system [9] obtains accuracies of over 96 % in a ternary classification task with a trial duration of 8 seconds by evaluation of adaptive auto-regressive models (AAR). Note that there are other BCI systems which rely on stimulus/response paradigms, e.g. P300; see [1] for an overview. In [10] an approach called Common Spatial Patterns (CSP) was suggested for use in a BCI context. This algorithm extracts event-related desynchronization (ERD) effects, i.e., event-related attenuations in some frequency bands, e.g., the µ/β-rhythm. However, the CSP algorithm can be used more generally; e.g., in [11] a suitable modification for movement-related potentials was presented. Further, in [12] a first multi-class extension of CSP was presented, based on pairwise classification and voting. In this paper we present further ways to extend this approach to many classes and compare them to prior work. By extending a BCI system to many classes a gain in performance can be obtained, since the ITR can increase even if the percentage of correct classifications decreases. In [13] a first study on increasing the number of classes was presented, based on a hidden Markov model approach. The authors conclude that three classes attain the highest ITR. We focus here on the same problem but use CSP-extracted features, and arrive at similar results. However, in a theoretical part we show that using more classes can be worth the effort if a suitable accuracy of all pairwise classifications is available. Consequently, extensions to multi-class settings are worthwhile for a BCI system if and only if a suitable number of effectively separable human brain states can be assigned.

2 How many brain states should be chosen?
Out of many different brain states (classes) our task is to find the subset of classes which is most profitable for the user of a BCI system. In this part we focus only on the information-theoretic perspective. Using more classes holds the potential to increase the ITR, even though the rate of correct classifications decreases. For the subsequent theoretical considerations we assume Gaussian distributions with equal covariance matrices for all classes, which is a reasonable assumption for a wide range of EEG features (see section 4.3). Furthermore we assume equal priors for all classes. For three classes and equal pairwise classification errors err, bounds for the expected classification error can be calculated in the following way: Let (X,Y) ∈ ℝⁿ × 𝒴 with 𝒴 = {1,2,3} be random variables with X | Y = i ∼ N(µᵢ, Σ). Scaling appropriately, we can assume Σ = I. We define the optimal classifier f*: ℝⁿ → 𝒴 by f* = argmin_{f∈F} P(f(X) ≠ Y), where F is some class of functions¹. Similarly, f*ᵢⱼ denotes the optimal classifier between classes i and j. Directly we get, for i ≠ j,

err := P(f*ᵢⱼ(X) ≠ Y) = G(‖µᵢ − µⱼ‖/2) with G(x) := (1/√(2π)) ∫ₓ^∞ exp(−t²/2) dt.

¹For the moment we pay no attention to whether such a function exists. In the current set-up F is usually the space of all linear classifiers, and under the probability assumptions mentioned above such a minimum exists.

Figure 1: The figure on the left visualizes the method used to estimate bounds for the ITR depending on the expected pairwise misclassification risk for three classes: the means µ1, µ2, µ3 lie at the corners of an equilateral triangle, with regions A, B, Cl, D, E such that R = A + B + Cl + D and Cu = Cl + D + E. The figure on the right shows the ITR [bits per decision] depending on the classification error [%] for simulated data for different numbers of classes (3–6 sim) and, for 2 classes, the exact values (2 calc). Additionally the expected range (see (1)) for three classes is visualized (3 range).
Therefore we get ‖µⱼ − µᵢ‖₂ = Φ for all i ≠ j with some Φ > 0, and finally, due to symmetry and equal priors, P(f*(X) ≠ Y) = Q(‖X‖₂ ≥ min_{j=2,3} ‖X − µⱼ + µ₁‖₂) where Q ∼ N(0, I). Since the evaluation of probabilities of polyhedra in Gaussian space is hard, we only estimate lower and upper bounds. We can directly reduce the problem to a 2-dimensional space by shifting and rotating and by Fubini’s theorem. Since ‖µⱼ − µᵢ‖₂ = Φ for all i ≠ j, the means lie at the corners of an equilateral triangle (see Figure 1). We define R := {x ∈ ℝ² | ‖x‖₂ ≥ ‖x − µⱼ + µ₁‖₂, j = 2,3}, and we can see after some calculation, or from Figure 1 (left) with the sets defined there, that A ∪ B ∪ Cl ⊂ R ⊂ A ∪ B ∪ Cu. Due to the symmetry of the equilateral triangle and a polar coordinate transformation we finally get

err + exp(−Φ²/6)/6 ≤ P(f*(X) ≠ Y) ≤ err + exp(−Φ²/8)/6.   (1)

To compare classification performances involving different numbers of classes, we use the ITR quantified as bit rate per decision I, following Shannon’s theorem:

I := log₂ N + p log₂(p) + (1 − p) log₂((1 − p)/(N − 1))

bits per decision, with N the number of classes and p the classification accuracy (cf. [14]). Figure 1 (right) shows the bounds in (1) for the ITR as a function of the expected pairwise misclassification error. Additionally the same values on simulated data (100000 data points for each class) under the assumptions described above (equal pairwise performance, Gaussian distributed, ...) are visualized for N = 2,...,6 classes. First of all, the figure confirms our estimated bounds. Furthermore the figure shows that under these strong assumptions extensions to multi-class are worthwhile. However, the gain of using more than 4 classes is tiny if the pairwise classification error is about 10 % or more. Under more realistic assumptions, i.e., when additional classes have increasing pairwise classification errors compared to a wisely chosen subset, it is improbable that the bit rate can be increased by raising the number of classes beyond three or four.
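The bit rate per decision I is straightforward to evaluate numerically. The sketch below (the helper name is ours) implements the formula above and illustrates the effect the theory predicts: three classes at a modest accuracy can carry more information per decision than two classes at a higher accuracy.

```python
import math

def itr_per_decision(N, p):
    """Bit rate per decision for N classes at classification accuracy p,
    as defined in the text (cf. Wolpaw et al. [14]). Valid for 0 < p <= 1."""
    if p == 1.0:
        return math.log2(N)  # limit case: p*log2(p) -> 0 contribution handled
    return (math.log2(N) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (N - 1)))

# three classes at 85% accuracy beat two classes at 90%:
# roughly 0.83 vs 0.53 bits per decision
i3 = itr_per_decision(3, 0.85)
i2 = itr_per_decision(2, 0.90)
```

At chance level (p = 1/N) the rate is zero, and it grows toward log₂ N as p approaches 1, which is why adding a well-separable class can pay off despite a lower accuracy.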
However, this depends strongly on the pairwise errors. If a suitable number of different brain states can be discriminated well, then extensions to more classes are indeed useful.

3 CSP and some multi-class extensions

The CSP algorithm in its original form can be utilized for brain states that are characterized by a decrease or increase of a cortical rhythm with a characteristic topographic pattern.

3.1 CSP in a binary problem

Let Σ₁,₂ be the centered covariance matrices, calculated in the standard way from a trial-concatenated vector of dimension [channels × concatenated timepoints] belonging to the respective label. The computation of Σ₁,₂ needs to be adapted to the paradigm, e.g., for slow cortical features such as the lateralized readiness potential (cf. [11]). The original CSP algorithm calculates a matrix R and a diagonal matrix D with elements in [0,1] such that

RΣ₁Rᵀ = D and RΣ₂Rᵀ = I − D,   (2)

which can easily be obtained by whitening and the spectral theorem. Only a few projections with the most extreme eigenvalue ratios (the lowest and highest) are selected. Intuitively, the CSP projections provide the scalp patterns which are most discriminative (see e.g. Figure 4).

3.2 Multi-class extensions

Using CSP within the classifier (IN): This algorithm reduces a multi-class problem to several binary problems (cf. [15]) and was suggested in [12] for CSP in a BCI context. For all combinations of two different classes the CSP patterns are calculated as described in Eq. (2). The variances of the projections onto the CSP patterns of every channel are used as input for an LDA classifier for each 2-class combination. New trials are projected onto these CSP patterns and assigned to the class for which most classifiers vote.

One versus rest CSP (OVR): We suggest a subtle modification of the approach above which permits computing the CSP patterns before classification. We compute spatial patterns for each class against all others².
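The "whitening and spectral theorem" route to the decomposition (2) can be sketched in a few lines of numpy. `csp_binary` is a hypothetical helper name, and the synthetic positive-definite covariances below merely verify the identities RΣ₁Rᵀ = D and RΣ₂Rᵀ = I − D:

```python
import numpy as np

def csp_binary(S1, S2):
    """Binary CSP: returns R and eigenvalues d with
    R S1 R^T = diag(d) and R S2 R^T = I - diag(d).
    Whiten the composite covariance S1+S2, then diagonalize whitened S1."""
    evals, evecs = np.linalg.eigh(S1 + S2)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # (S1 + S2)^{-1/2}
    d, B = np.linalg.eigh(W @ S1 @ W.T)            # spectral theorem
    R = B.T @ W                                     # rows = spatial filters
    return R, d                                     # d ascending in [0, 1]

# synthetic positive-definite 'covariances' for a 4-channel toy problem
rng = np.random.default_rng(1)
A1 = rng.standard_normal((4, 4))
A2 = rng.standard_normal((4, 4))
S1 = A1 @ A1.T + np.eye(4)
S2 = A2 @ A2.T + np.eye(4)
R, d = csp_binary(S1, S2)
```

Because W(Σ₁+Σ₂)Wᵀ = I, diagonalizing the whitened Σ₁ automatically diagonalizes Σ₂ with complementary eigenvalues, which is exactly the property (2) exploits; filters with eigenvalues near 0 or 1 are the discriminative ones.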
Then we project the EEG signals onto all these CSP patterns, calculate the variances as before, and then perform a multi-class LDA classification. The approach OVR appears rather similar to the approach IN, but there is in fact a large practical difference (in addition to the one-versus-rest strategy as opposed to pairwise binary subproblems): in the approach IN, classification is only done in a binary fashion on the CSP patterns according to the binary choice, whereas OVR does multi-class classification on all projected signals.

Simultaneous diagonalization (SIM): The main trick in the binary case is that the CSP algorithm finds a simultaneous diagonalization of both covariance matrices whose eigenvalues sum to one. Thus a possible extension to many classes, i.e., many covariances (Σᵢ)ᵢ₌₁,...,ᴺ, is to find a matrix R and diagonal matrices (Dᵢ)ᵢ₌₁,...,ᴺ with elements in [0,1] such that RΣᵢRᵀ = Dᵢ for all i = 1,...,N and ∑ᵢ₌₁ᴺ Dᵢ = I. Such a decomposition can only be approximated for N > 2. There are several algorithms for approximate simultaneous diagonalization (cf. [16, 17]); we use the algorithm described in [18] due to its speed and reliability. As opposed to the two-class problem there is no canonical way to choose the relevant CSP patterns. We explored several options such as using the highest or lowest eigenvalues. Finally, the best strategy was based on the assumption that two different eigenvalues for the same pattern have the same effect if their ratios to the mean of the eigenvalues of the other classes are multiplicatively inverse to each other, i.e., their product is 1. Thus all eigenvalues λ are mapped to max(λ, (1−λ)/(1−λ+(N−1)²λ)), and a specified number m of highest eigenvalues for each class are used as CSP patterns. It should be mentioned that each pattern is only used once, namely for the class which has the highest modified eigenvalue. If a second class would choose this pattern, it is skipped for that class and the next one is chosen.
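The eigenvalue mapping and the "each pattern used once" rule can be made concrete. The sketch below is one plausible reading of that rule (the helper names and the greedy global-ranking tie-break are our assumptions; the approximate joint diagonalization itself, e.g. via [18], is taken as given):

```python
def sim_score(lam, N):
    """Map an eigenvalue to its SIM selection score: lam and its
    'multiplicatively inverse' counterpart are scored identically."""
    return max(lam, (1 - lam) / (1 - lam + (N - 1) ** 2 * lam))

def select_patterns(D, m):
    """Greedy pattern assignment. D[i][k] is class i's eigenvalue for
    pattern k (rows of the jointly diagonalized system). Each pattern is
    assigned at most once, to the class ranking it highest; each class
    receives at most m patterns."""
    N = len(D)
    taken, chosen = set(), {i: [] for i in range(N)}
    triples = sorted(((sim_score(D[i][k], N), i, k)
                      for i in range(N) for k in range(len(D[i]))),
                     reverse=True)                  # best scores first
    for score, i, k in triples:
        if k not in taken and len(chosen[i]) < m:
            taken.add(k)
            chosen[i].append(k)
    return chosen
```

For N = 2 the score reduces to max(λ, 1−λ), recovering the usual binary-CSP preference for eigenvalues near 0 or 1.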
Finally, variances are computed on the projected trials as before and conventional multi-class LDA classification is done.

4 Data acquisition and analysis methods

4.1 Experiments

We recorded brain activity from 4 subjects (codes aa, af, ak and ar) with multi-channel EEG amplifiers using 64 channels (128 for aa), band-pass filtered between 0.05 and 200 Hz and sampled at 1000 Hz. For offline analysis all signals were downsampled to 100 Hz. Surface EMG at both forearms and one leg, as well as horizontal and vertical EOG signals, were recorded to check for muscle activation and eye movements, but no trial was rejected.

²Note that this can be done similarly with pairwise patterns, but in our studies no substantial difference was observable; one-versus-rest is therefore favourable, since it chooses fewer patterns.

The subjects in this experiment were sitting in a comfortable chair with arms lying relaxed on the armrests. Every 4.5 seconds one of 6 different letters appeared on the computer screen for 3 seconds. During this period the subject had to imagine one of 6 different actions according to the displayed letter: imagination of left or right hand or foot movement, or imagination of a visual, auditory or tactile sensation. Subject aa took part only in an experiment with the 3 classes l, r and f. 200 trials per class (160 for aa) were recorded. The aim of classification in these experiments is to discriminate trials of different classes using the whole period of imagination. A further reasonable objective, detecting a new brain state as fast as possible, was not an object of this particular study. Note that the classes v, a and t were originally not intended to be BCI paradigms.
Rather, these experiments were included to explore multi-class single-trial detection for brain states related to different sensory modalities, for which it can reasonably be assumed that the regional activations can be well differentiated at a macroscopic scale of several centimeters.

4.2 Feature Extraction

Since we focus on desynchronization effects (here the µ-rhythm), we first apply a causal band-pass filter of 8–15 Hz to the signals. Further, each trial consists of a two-second window starting 500 ms after the visual stimulus. Then the CSP algorithm is applied, and finally variances of the projected trials are calculated to acquire the feature vectors. Alternatively, to see how effective the CSP algorithm is, the projection is left out for the binary classification task and we instead use techniques like Laplace filtering or common average reference (CAR) with a regularized LDA classifier on the variances. The frequency band and the time period should be chosen individually by closer analysis of each data set. However, we are not focusing on this effect here, and therefore choose a setting which works well for all subjects. The number of chosen CSP patterns is a further variable. An extended search over different values could be done, but is omitted here. To have a similar number of patterns for each algorithm, we choose for IN 2 patterns from each side in each pairwise classification (resulting in 2N(N−1) patterns), for OVR 2 patterns from each side in each one-versus-rest choice, and for SIM 4 patterns for each class (both resulting in 4N patterns).

4.3 Classification and Validation

According to our studies, the assumption that the features we are using are Gaussian distributed with equal covariance matrices holds well [2]. In this case Linear Discriminant Analysis (LDA) is optimal for classification in the sense that it minimizes the risk of misclassification. Due to the low dimensionality of the CSP features, regularization is not required.
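The feature-extraction pipeline of section 4.2 can be sketched as follows, assuming a precomputed CSP matrix R (rows are spatial filters). The Butterworth filter order and the helper name are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.signal import butter, lfilter

def extract_features(trials, R, fs=100.0, band=(8.0, 15.0),
                     t_start=0.5, t_len=2.0, n_patterns=4):
    """Causal 8-15 Hz band-pass, 2-s window starting 500 ms after the
    stimulus, projection onto CSP filters, variance per projection.
    trials: (n_trials, n_channels, n_samples), time-locked to the stimulus."""
    b, a = butter(5, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = lfilter(b, a, trials, axis=-1)             # causal IIR filter
    i0 = int(t_start * fs)
    win = filtered[:, :, i0:i0 + int(t_len * fs)]         # imagination window
    proj = np.einsum("pc,ncs->nps", R[:n_patterns], win)  # CSP projections
    return proj.var(axis=-1)                              # (n_trials, n_patterns)

# toy usage: 3 trials, 6 channels, 3 s at 100 Hz, identity 'CSP' matrix
feats = extract_features(
    np.random.default_rng(0).standard_normal((3, 6, 300)), np.eye(6))
```

These variance features are then fed to LDA; note that in a cross-validation the CSP matrix R must itself be estimated on the training folds only, as section 4.3 stresses.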
To assess the classification performance, the generalization error was estimated by 10×10-fold cross-validation. Since the CSP algorithm depends on the class labels, the calculation of this projection is done within the cross-validation, on each training set. Doing it on the whole data set beforehand can result in overfitting, i.e., underestimating the generalization error. For the purpose of this paper the best configuration of classes should be found. The most sophisticated way in a BCI context would have been to perform many experiments with different sets of classes. Unfortunately this is very time consuming and not of interest for the BCI user. A more useful way is to perform, in a preliminary step, experiments with many classes and to choose the best subset in an offline analysis by testing all combinations. With the best class configuration the experiment should then be repeated to confirm the results. However, in this paper we present results of the simpler experiment, in fact following the setting in [13].

5 Results

Figure 2: In the scatter plot the ITRs [bits per decision] for all 2-class combinations for all subjects obtained by CSP are shown on the x-axis, while those by LAPLACE (dark points) resp. CAR (light points) are on the y-axis. For marks below the diagonal, CSP outperforms LAPLACE resp. CAR.

In Figure 2 the bit rates for all binary combinations of two classes and for all subjects are shown. The results for the CSP algorithm are contrasted in the plot with the results of LAPLACE/CAR in such a way that for points below the diagonal CSP is better and for points above the other algorithms are better. We can conclude that it is usually advantageous to use the CSP algorithm. Furthermore it is observable that the pairwise classification performances differ strongly.
According to our theoretical considerations we should therefore expect that in the multi-class case a configuration with 3 classes will perform best. Figure 3 shows the ITRs for all multi-class configurations (N = 3,...,6) for the different subjects. Results for the baseline method IN are compared to the new methods SIM and OVR. The latter methods are superior for those configurations whose results are below the diagonal in the scatter plot. For an overview, the upper plots show histograms of the differences in ITR between SIM/OVR and IN together with a Gaussian approximation. We can conclude from these figures that no algorithm is generally the best. SIM shows the best mean performance for subjects ak and ar, but the performance falls off for subject af. Since for aa only one three-class combination is available, we omit a visualization; however, SIM again performs best for this subject. Statistical tests of significance are omitted since the classification results are generally not independent, e.g., the classifications of {l,r,f} and {l,a,t} are dependent since the trials of class l are involved in both. For a given number of classes, Figure 4 shows the ITR obtained for the optimal subset of brain states by the best of the presented algorithms. As conjectured from the fluctuations in pairwise discriminability, the bit rates decrease when using more than three classes. In three out of four subjects the peak ITR is obtained with three classes; only for subject aa is pairwise classification better. Here one further strategy is helpful. In addition to the variance, autoregressive parameters can be calculated on the projections onto the CSP patterns, filtered here at 7–30 Hz, and used for classification.
In this case the pairwise classification errors are more balanced, such that we finally acquire an ITR of 0.76 bits per decision, whereas the best binary combination yields 0.6 bits per decision. The gain from using AR parameters for this subject stems from discriminative information lying in different frequency bands. For the other subjects similar gains could not be observed using AR parameters. Finally, the CSP algorithm has a further useful feature, namely that the spatial patterns can be plotted as scalp topographies. In Figure 4 the first pattern for each class of algorithm SIM is shown for subject aa. Evidently, this algorithm can reproduce neurophysiological prior knowledge about the location of ERD effects: for each activated limb the appropriate region of motor cortex is activated, e.g., a left (right) lateral site for the right (left) hand and an area closer to the central midline for the foot.

Figure 3: In the scatter plot the ITRs [bits per decision] obtained by the baseline method IN are shown on the y-axis, while those by SIM (+) and OVR (◦) are on the x-axis. For marks below the diagonal, SIM resp. OVR outperforms IN. For an overall overview, the upper plots show histograms of the differences in ITR between SIM/OVR and IN together with a Gaussian approximation; positive values correspond to good performance of SIM and OVR.

Figure 4: The figure on the left shows the ITR per trial for different numbers of classes with the best algorithm described above. The figure on the right visualizes the first pattern chosen by SIM for each class (left, right, foot) for subject aa.

Psychological perspective. In principle, multi-class decisions can be derived from a decision space natural to human subjects.
In a BCI context, such a set of decisions will be performed most ’intuitively’, i.e., without a need for prolonged training, if the differential brain states are naturally related to a set of intended actions. This is the case, e.g., for movements of different body parts, which have a somatotopically ordered lay-out in the primary motor cortex, resulting in spatially discriminable patterns of EEG signals such as readiness potentials or event-related desynchronizations specific for finger, elbow or shoulder movement intentions. In contrast, having to imagine a tune in order to move a cursor upwards vs. imagining a visual scene to induce a downward movement will produce spatially discriminable patterns of EEG signals related to either auditory or visual imagery, but its action-effect contingency would be counter-intuitive. While humans are able to adapt and to learn such complex tasks, this could take weeks of training before it would be performed fast, reliably and ’automatically’. Another important aspect of multi-class settings is that using additional classes which the BCI device discriminates only at lower accuracy is likely to confuse the user.

6 Concluding discussion

Current BCI research strives for enhanced information transfer rates. Several options are available: (1) training of the BCI users, which can be somewhat tedious if up to 300 hours of training are necessary, (2) invasive BCI techniques, which we consider not applicable for healthy human test subjects, (3) improved machine learning and signal processing methods where, e.g., new filtering, feature extraction and sophisticated classifiers are constantly tuned and improved³, (4) faster trial speeds, and finally (5) more classes among which the BCI user is choosing. This work analysed the theoretical and practical implications of using more than two classes, and psychological issues were also briefly discussed.
In essence, we found that a higher ITR is achieved with three classes; however, it seems unlikely that the ITR can be increased further by moving beyond four classes. This finding is confirmed in EEG experiments. As a further, more algorithmic, contribution we suggested two modifications of the CSP method for the multi-class case. As a side remark: our multi-class CSP algorithms also allow a significant speed-up in real-time feedback experiments, as filtering operations only need to be performed on very few CSP components (as opposed to all channels). Since this corresponds to an implicit dimensionality reduction, good results can also be achieved with CSP using fewer patterns/trials. Comparing the results of SIM, OVR and IN, we find that for most of the subjects SIM or OVR provide better results. Reassuringly, the algorithms SIM, OVR and IN allow the extraction of scalp patterns for the classification that match well with neurophysiological textbook knowledge (cf. Figure 4). In this paper the beneficial role of a third class was confirmed by an offline analysis. Future studies will therefore target online experiments with more than two classes; first experimental results are promising. Another line of study will explore information from complementary neurophysiological effects in the spirit of [19] in combination with multi-class paradigms. Finally, it would be useful to explore configurations with more than two classes which are more natural and also more user-friendly from the psychological perspective discussed above.

³See the 1st and 2nd BCI competitions: http://ida.first.fraunhofer.de/~blanker/competition/

Acknowledgments

We thank S. Harmeling, M. Kawanabe, A. Ziehe, G. Rätsch, S. Mika, P. Laskov, D. Tax, M. Kirsch, C. Schäfer and T. Zander for helpful discussions. The studies were supported by BMBF grants FKZ 01IBB02A and FKZ 01IBB02B.

References

[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M.
Vaughan, “Brain-computer interfaces for communication and control”, Clin. Neurophysiol., 113: 767–791, 2002. [2] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, “Boosting Bit Rates and Error Detection for the Classification of Fast-Paced Motor Commands Based on Single-Trial EEG Analysis”, IEEE Trans. Neural Sys. Rehab. Eng., 11(2): 127–131, 2003. [3] B. Blankertz, G. Curio, and K.-R. Müller, “Classifying Single Trial EEG: Towards Brain Computer Interfacing”, in: T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157–164, 2002. [4] L. Trejo, K. Wheeler, C. Jorgensen, R. Rosipal, S. Clanton, B. Matthews, A. Hibbs, R. Matthews, and M. Krupka, “Multimodal Neuroelectric Interface Development”, IEEE Trans. Neural Sys. Rehab. Eng., 2003, accepted. [5] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, “Linear spatial integration for single trial detection in encephalography”, NeuroImage, 2002, to appear. [6] W. D. Penny, S. J. Roberts, E. A. Curran, and M. J. Stokes, “EEG-Based Communication: A Pattern Recognition Approach”, IEEE Trans. Rehab. Eng., 8(2): 214–215, 2000. [7] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, “A spelling device for the paralysed”, Nature, 398: 297–298, 1999. [8] J. R. Wolpaw, D. J. McFarland, and T. M. Vaughan, “Brain-Computer Interface Research at the Wadsworth Center”, IEEE Trans. Rehab. Eng., 8(2): 222–226, 2000. [9] B. O. Peters, G. Pfurtscheller, and H. Flyvbjerg, “Automatic Differentiation of Multichannel EEG Signals”, IEEE Trans. Biomed. Eng., 48(1): 111–116, 2001. [10] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, “Optimal spatial filtering of single trial EEG during imagined hand movement”, IEEE Trans. Rehab. Eng., 8(4): 441–446, 2000. [11] G. Dornhege, B. Blankertz, and G.
Curio, “Speeding up classification of multi-channel Brain-Computer Interfaces: Common spatial patterns for slow cortical potentials”, in: Proceedings of the 1st International IEEE EMBS Conference on Neural Engineering. Capri 2003, 591–594, 2003. [12] J. Müller-Gerking, G. Pfurtscheller, and H. Flyvbjerg, “Designing optimal spatial filters for single-trial EEG classification in a movement task”, Clin. Neurophysiol., 110: 787–798, 1999. [13] B. Obermaier, C. Neuper, C. Guger, and G. Pfurtscheller, “Information Transfer Rate in a Five-Classes Brain-Computer Interface”, IEEE Trans. Neural Sys. Rehab. Eng., 9(3): 283–288, 2001. [14] J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E. Donchin, L. A. Quatrano, C. J. Robinson, and T. M. Vaughan, “Brain-Computer Interface Technology: A review of the First International Meeting”, IEEE Trans. Rehab. Eng., 8(2): 164–173, 2000. [15] E. Allwein, R. Schapire, and Y. Singer, “Reducing multiclass to binary: A unifying approach for margin classifiers”, Journal of Machine Learning Research, 1: 113–141, 2000. [16] J.-F. Cardoso and A. Souloumiac, “Jacobi angles for simultaneous diagonalization”, SIAM J.Mat.Anal.Appl., 17(1): 161 ff., 1996. [17] D.-T. Pham, “Joint Approximate Diagonalization of Positive Definite Matrices”, SIAM J. on Matrix Anal. and Appl., 22(4): 1136–1152, 2001. [18] A. Ziehe, P. Laskov, K.-R. Müller, and G. Nolte, “A Linear Least-Squares Algorithm for Joint Diagonalization”, in: Proc. 4th International Symposium on Independent Component Analysis and Blind Signal Separation (ICA2003), 469–474, Nara, Japan, 2003. [19] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, “Combining Features for BCI”, in: S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Inf. Proc. Systems (NIPS 02), vol. 15, MIT Press: Cambridge, MA, 2003.
Dopamine modulation in a basal ganglio-cortical network implements saliency-based gating of working memory Aaron J. Gruber1,2, Peter Dayan3, Boris S. Gutkin3, and Sara A. Solla2,4 Biomedical Engineering1, Physiology2, and Physics and Astronomy4, Northwestern University, Chicago, IL, USA. Gatsby Computational Neuroscience Unit3, University College London, London, UK. {a-gruber1,solla}@northwestern.edu, {dayan,boris}@gatsby.ucl.ac.uk Abstract Dopamine exerts two classes of effect on the sustained neural activity in prefrontal cortex that underlies working memory. Direct release in the cortex increases the contrast of prefrontal neurons, enhancing the robustness of storage. Release of dopamine in the striatum is associated with salient stimuli and makes medium spiny neurons bistable; this modulation of the output of spiny neurons affects prefrontal cortex so as to indirectly gate access to working memory and additionally damp sensitivity to noise. Existing models have treated dopamine in one or other structure, or have addressed basal ganglia gating of working memory exclusive of dopamine effects. In this paper we combine these mechanisms and explore their joint effect. We model a memory-guided saccade task to illustrate how dopamine’s actions lead to working memory that is selective for salient input and has increased robustness to distraction. 1 Introduction Ample evidence indicates that the maintenance of information in working memory (WM) is mediated by persistent neural activity in the prefrontal cortex (PFC) [9, 10]. Critical for such memories is controlling how salient external information is gated into storage, and limiting the effects of noise in the neural substrate of the memory itself. Experimental [15, 18] and theoretical [2, 13, 4, 17] studies implicate dopaminergic neuromodulation of PFC in information gating and noise control.
In addition, there is credible speculation [7] that input to the PFC from the basal ganglia (BG) should also exert gating effects. Since the striatum is also a major target of dopamine innervation, the nature of the interaction between these various control structures and mechanisms in manipulating WM is important. A wealth of mathematical and computational models bear on these questions. A recent cellular-level model, which includes many known effects of dopamine (DA) on ionic conductances, indicates that modulation of pyramidal neurons causes the pattern of network activity at a fixed point attractor to become more robust both to noise and to input-driven switching of attractor states [6]. This result is consistent with reported effects of DA in more abstract, spiking-based models [2] of WM, and provides a cellular substrate for network models that account for gating effects of DA in cognitive WM tasks [1]. Other network models [7] of cognitive tasks have concentrated on the input from the BG, arguing that it has a disinhibitory effect (as in models of motor output) that controls bistability in cortical neurons and thereby gates external input to WM. This approach emphasizes the role of dopamine in providing a training signal to the BG, in contrast to the modulatory effects of DA discussed here, which are important for on-line neural processing. Finally, dopaminergic neuromodulation in the striatum has itself been recently captured in a biophysically-grounded model [11], which describes how medium spiny neurons (MSNs) become bistable in elevated dopamine. As the output of a major subset of MSNs ultimately reaches PFC after further processing through other nuclei, this bistability can have potentially strong effects on WM. In this paper, we combine these various influences on working memory activity in the PFC. 
We model a memory-guided saccade task [8] in which subjects must fixate on a centrally located fixation spot while a visual target is flashed at a peripheral location. After a delay period of up to a few seconds, subjects must saccade to the remembered target location. Numerous experimental studies of the task show that memory is maintained through striatal and sustained prefrontal neuronal activity; this persistent activity is consistent with attractor dynamics. Robustness to noise is of particular importance in the WM storage of continuous scalar quantities such as the angular location of a saccade target, since internal noise in the attractor network can easily lead to drift in the activity encoding the memory. In successive sections of this paper, we consider the effect of DA on resistance to attractor switching in the isolated cortical network; the effect of MSN activity on gating and noise; and the effect of dopamine induced bistability in MSNs on WM activity associated with salient stimuli. We demonstrate that DA exerts complementary direct and indirect effects, which result in superior performance in memory-guided tasks. 2 Model description Figure 1: The network model consists of three modules: cortical input, basal ganglia (BG), and prefrontal cortex (PFC). Insets show the response functions of spiny (BG) and pyramidal (PFC) neurons for both low (dotted curves) and high (solid curves) dopamine. The components of the network model used to simulate the WM activity during a memory-guided saccade task are shown in Fig 1. The input module consists of a ring of 120 units that project both to the PFC and the BG modules. Input units are assigned firing rates $r^T_j$ to represent the sensory cortical response to visual targets.
Bumps of activity centered at different locations along the ring encode for the position of different targets around the circle, as characterized by an angle in the $[0, 2\pi)$ interval. The BG module consists of 24 medium spiny neurons (MSNs). Connections from the input units consist of Gaussian receptive fields that assign to each MSN a preferred direction; these preferred directions are monotonically and uniformly distributed. The dynamics of individual MSNs follow from a biophysically-grounded single compartment model [11], $$-C \dot{V}^S = \gamma\,(I_{IRK} + I_{LCa}) + I_{ORK} + I_L + I_T, \qquad (1)$$ which incorporates three crucial ionic currents: an inward rectifying K+ current ($I_{IRK}$), an outward rectifying K+ current ($I_{ORK}$), and an L-type Ca2+ current ($I_{LCa}$). The characterization of these currents is based on available biophysical data on MSNs. The factor $\gamma$ represents an increase in the magnitude of the $I_{IRK}$ and $I_{LCa}$ currents due to the activation of D1 dopamine receptors. This DA induced current enhancement renders the response function of MSNs bistable for $\gamma \gtrsim 1.2$ (see Fig 1 for $\gamma = 1.4$). The synaptic input $I_T$ is an ohmic term with conductance given by the weighted summed activity of the corresponding input unit; input to the j-th MSN is thus given by $I^T_j = \sum_i W^{ST}_{ji} r^T_i V^S_j$, where $W^{ST}_{ji}$ is the strength of the connection from the i-th input neuron to the j-th spiny neuron. The firing rate of MSNs is a logistic function of their membrane potential: $r^S_j = L(V^S_j)$. The MSNs provide excitatory inputs to the PFC; in the model, this monosynaptic projection represents the direct pathway through the globus pallidus/substantia nigra and thalamus. The PFC module implements a line attractor capable of sustaining a bump of activity that encodes for the value of an angular variable in $[0, 2\pi)$. ‘Bump’ networks like this have been used [3, 5] to model head direction and visual stimulus location characterized by a single angular variable.
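The Gaussian receptive fields mapping the 120-unit input ring onto the 24 MSNs, and the logistic firing rate $r^S_j = L(V^S_j)$, can be sketched as follows. The tuning widths, gain, and threshold are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

N_INPUT, N_MSN = 120, 24
theta_in = np.linspace(0.0, 2.0 * np.pi, N_INPUT, endpoint=False)
theta_msn = np.linspace(0.0, 2.0 * np.pi, N_MSN, endpoint=False)  # preferred directions

def circ_dist(a, b):
    """Shortest angular distance on the circle."""
    d = np.abs(a - b) % (2.0 * np.pi)
    return np.minimum(d, 2.0 * np.pi - d)

# W_ST[j, i]: Gaussian tuning of MSN j around its preferred direction
# (width sigma is an assumed value)
sigma = 0.3
W_ST = np.exp(-circ_dist(theta_msn[:, None], theta_in[None, :]) ** 2 / (2 * sigma ** 2))

def logistic(v, gain=4.0, v_half=0.5):   # assumed gain and threshold
    return 1.0 / (1.0 + np.exp(-gain * (v - v_half)))

# A bump of input activity centered at pi drives mainly the MSNs tuned near pi
r_T = np.exp(-circ_dist(theta_in, np.pi) ** 2 / (2 * 0.2 ** 2))
drive = W_ST @ r_T
r_S = logistic(drive / drive.max())       # MSN firing rates (ad hoc normalization)
best_msn = theta_msn[np.argmax(r_S)]
```

Because the 24 preferred directions sit on a subset of the 120 input angles, the most strongly driven MSN is exactly the one tuned to the stimulus location.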
The module consists of 120 excitatory units; each unit is assigned a preferred direction, uniformly covering the $[0, 2\pi)$ interval. Lateral connections between excitatory units are a Gaussian function of the angular difference between the corresponding preferred directions. A single inhibitory unit provides uniform global inhibition; the activity of the inhibitory unit is controlled by the total activity of the excitatory population. This type of connectivity guarantees that a localized bump of activity, once established, will persist beyond the disappearance of the external input that originated it (see Fig 2). One of the purposes of this paper is to investigate whether this persistent activity bump is robust to noise in the line attractor network. The excitatory units follow the stochastic differential equation $$\tau^E \dot{V}^E_j = -V^E_j + \sum_i W^{ES}_{ji} r^S_i + \sum_{i \neq j} W^{EE}_{ji} r^E_i - r^I + r^T_j + \sigma_e \eta. \qquad (2)$$ The first sum in Eq 2 represents inputs from the BG; the connections $W^{ES}_{ji}$ consist of Gaussian receptive fields centered to align with the preferred direction of the corresponding excitatory unit. The second sum represents inputs from other excitatory PFC units; note that self-connections are excluded. The following two terms represent input from the inhibitory PFC unit ($r^I$) and information about the visual target provided by the input module ($r^T_j$). Crucially, the last term provides a stochastic input that models fluctuations in the activities that contribute to the total input to the excitatory units. The random variable $\eta$ is drawn from a Gaussian distribution with zero mean and unit variance. The noise amplitude $\sigma_e$ scales like $(dt)^{-1/2}$, where $dt$ is the integration time step. The firing rate of the PFC excitatory units is a logistic function $r^E_j = L(V^E_j)$; as shown in Fig 1, the steepness of this response function is controlled by DA.
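A minimal Euler integration of the cortical dynamics in Eq 2 can be sketched as below. All weights, time constants, and the inhibitory coupling are illustrative stand-ins; they are not tuned to reproduce the paper's self-sustained bump, only to show the structure of the update (recurrent excitation, global inhibition, external input, and $(dt)^{-1/2}$-scaled noise):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 120
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred directions

def circ_dist(a, b):
    d = np.abs(a - b) % (2.0 * np.pi)
    return np.minimum(d, 2.0 * np.pi - d)

# Gaussian lateral excitation; self-connections excluded as in Eq 2
W_EE = np.exp(-circ_dist(theta[:, None], theta[None, :]) ** 2 / (2 * 0.4 ** 2))
np.fill_diagonal(W_EE, 0.0)

def logistic(v, gain=4.0, v_half=1.0):    # assumed gain/threshold; DA would raise gain
    return 1.0 / (1.0 + np.exp(-gain * (v - v_half)))

def step(V, r_T, dt=1.0, tau=10.0, sigma_e=0.05, w_inh=0.05):
    """One Euler step of Eq 2 (BG input term omitted for brevity)."""
    r = logistic(V)
    r_I = w_inh * r.sum()                          # uniform global inhibition
    eta = rng.standard_normal(N)                   # zero-mean, unit-variance noise
    dV = -V + W_EE @ r / N - r_I + r_T + sigma_e * eta / np.sqrt(dt)
    return V + dV * dt / tau

V = np.zeros(N)
r_T = 2.0 * np.exp(-circ_dist(theta, np.pi) ** 2 / (2 * 0.2 ** 2))  # stimulus at pi
for _ in range(100):
    V = step(V, r_T)
peak = theta[np.argmax(V)]                          # activity peaks near the stimulus
```

With the stimulus present, the membrane potentials settle into a bump centered on the stimulus angle; reproducing persistence after stimulus offset would require the paper's tuned connectivity.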
The dynamics of the inhibitory unit follow from $\tau^I \dot{V}^I = \sum_i r^E_i$, where the sum represents the total activity of the excitatory population. The firing rate $r^I$ of the inhibitory unit is a linear threshold function of $V^I$. Dopaminergic modulation of the PFC network is implemented through an increase in the steepness of the response function of the excitatory cortical units. Gain control of this form has been adopted in a previous, more abstract, network theory of WM [17], and is generally consistent with biophysically-grounded models [6, 2]. To investigate the properties of the network model represented in Fig 1, the system of equations summarized above is integrated numerically using a 5th order Runge-Kutta method with variable time step that ensures an error tolerance below 5 µV/ms. 3 Results 3.1 Dopamine effects on the cortex: increased memory robustness Figure 2: (A) Activity profile of the bump state in low DA (open dots) and high DA (full dots). (B) Robustness characteristics of bump activity in low DA (dashed curve) and high DA (solid curve). For reference, the thin dotted line indicates the identity ∆bθ = ∆dθ. The activity profile shown as a function of time in the inset (grey scale, white as most active) illustrates the displacement of the bump from its initial location at θ0 to a final location at θb due to a distractor input at θd. This case corresponds to the asterisk on the curves in B. We first investigate the properties of the cortical network isolated from the input and basal ganglia components. The connectivity among cortical units is set so there are two stable states of activity for the PFC network: either all excitatory units have very low activity level, or a subset of them participates in a localized bump of elevated activity (Fig 2A, open dots).
The bump can be translated to any position along the ring of cortical units, thus providing a way to encode a continuous variable, such as the angular position of a stimulus within a circle. The encoded angle corresponds to the location of the bump peak, and it can be read out by computing the population vector. The effect of DA on the PFC module, modeled here as an increase in the gain of the response function of the excitatory units, results in a narrower bump with a higher peak (Fig 2A, full dots). We measure the robustness of the location of the bump state against perturbative distractor inputs by applying a brief distractor at an angular distance ∆dθ from the current location of the bump and assessing the resulting angular displacement ∆bθ in the location of the bump 40 ms after the offset of the distractor. The procedure is illustrated in the inset of Fig 2B, which shows that a distractor current injection centered at a location θd causes a drift in bump location from its initial position θ0 to a final position θb, closer to the angular location of the distractor. If θd is close to θ0, the distractor is capable of moving the bump completely to the injection location, and ∆bθ is almost equal to ∆dθ. As shown in Fig 2B, the plot of ∆bθ versus ∆dθ remains close to the identity line for small ∆dθ. However, as ∆dθ increases the distractor becomes less and less effective, until the displacement ∆bθ of the bump decreases abruptly and becomes negligible. The generic features of bump stability shown in Fig 2B apply to both low DA (dashed curve) and high DA (solid curve) conditions. The difference between these two curves reveals that the dopamine induced increase in the gain of PFC units decreases the sensitivity of the bump to distractors, resulting in a consistently smaller bump displacement. 
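The population-vector readout mentioned above can be sketched as follows; the unit count matches the model's 120-unit ring, while the tuning width of the test bump is an illustrative choice:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)  # preferred directions

def population_vector(rates, prefs):
    """Decode the encoded angle as the direction of the rate-weighted vector sum."""
    x = np.sum(rates * np.cos(prefs))
    y = np.sum(rates * np.sin(prefs))
    return np.arctan2(y, x) % (2.0 * np.pi)

# A bump of activity centered at pi/2 decodes back to pi/2
d = np.abs(theta - np.pi / 2)
d = np.minimum(d, 2.0 * np.pi - d)
rates = np.exp(-d ** 2 / (2 * 0.3 ** 2))
decoded = population_vector(rates, theta)
```

Because the readout averages over the whole population, it is insensitive to independent noise on individual units, which is one reason it is a natural decoder for bump attractors.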
The actual location of these two curves can be altered by varying the intensity and/or the duration of the distractor input, but their features and relative order remain invariant. This numerical experiment demonstrates that DA increases the robustness of the encoded memory, consistent with other PFC models of DA effects on WM [2, 6]. 3.2 Basal ganglia effects on the cortex: increased memory robustness and input gating Next, we investigate the effects of BG input (both tonic and phasic) on the stability of PFC bump activity in the absence of DA modulation. Tonic input from a single MSN, whose preferred direction coincides with the angular location of the bump, anchors the bump at that location and increases memory robustness against both noise induced diffusion (Figs 3A and 3B) and distractors (Fig 3C). Figure 3: Diffusion of the bump location due to noise in low DA (grey traces in A; dashed curve in B) is greatly reduced by input from a single BG unit with the same preferred angular location (dark traces in A; solid curve in B). The robustness to distractor driven drift is also increased by BG input (C). Such localized tonic input to the PFC effectively breaks the symmetry of the line attractor, yielding a single fixed point for the cortical active state: a bump centered at the location of maximal BG input. This transition from a continuous line attractor to a fixed point attractor reduces the maximal deviation of the bump by a distractor. Active MSNs provide control over the encoded memory not only by enhancing robustness, as shown above for the case of tonic input to the PFC, but also by providing phasic input that can assist a relevant visual stimulus in switching the location of the PFC activity bump. We show in Fig 4 (top plots) the location of the activity bump θb as a function of time in response to two stimuli at different locations θs.
The nature of the PFC response to the second stimulus depends dramatically on whether it elicits activity in the MSNs. The initial stimulus activates a tight group of MSNs which encode for its angular position. It also causes activation of a group of PFC neurons whose population vector encodes for the same angular position. When the input disappears, the MSNs become inactive and the cortical layer relaxes to a characteristic bump state centered at the angular position of the stimulus. A second stimulus (distractor) that fails to activate BG units (Fig 4A) has only a minimal effect on the bump location. However, if the stimulus does activate the BG units (Fig 4B), then it causes a switch in bump location. In this case, the PFC memory is updated to encode for the location of the most recent stimulus. Thus a direct stimulus input to the PFC that by itself is not sufficient to switch attractor states can trigger a switch, provided it activates the BG, whose activity yields additional input to the PFC. Transient activation of MSNs thus effectively gates access to working memory. 3.3 Dopamine effects on the basal ganglia: saliency-based gating Ample evidence indicates that DA, the release of which is associated with the presentation of conditioned stimuli [16], modulates the activity of MSNs. Our previous computational model of MSNs [11] studied the apparently paradoxical effects of DA modulation, manifested in both suppression and enhancement of MSN activity in a complex reward-based saccade task [12]. We showed that DA can induce bistability in the response functions of MSNs, with important consequences. In high DA, the effective threshold for reaching the active ’up’ state is increased; the activity of units that do not exceed threshold is suppressed into a quiescent ’down’ state, while units that reach the up state exhibit a higher firing rate which is extended in duration due to effects of hysteresis. 
We now demonstrate that the dual enhancing/suppressing nature of DA modulation of MSN activity significantly affects the network’s response to stimuli. We show in Fig 5 (top plot) the location of the activity bump θb as a function of time in response to four stimuli at two different locations: θA, θB, θ*A, θB. Crucially, in this sequence, only θ*A is a conditioned stimulus that triggers DA release. Figure 4: Top plot shows the location θb of the encoded memory as determined from the population vector of the excitatory cortical units (thin black curve) and the location θs of stimuli as encoded by a Gaussian bump of activity in the input units (grey bars) as a function of time. The middle and bottom panels show the activity of the BG and the PFC modules, respectively. Dopamine level remains low. The first two stimuli activate appropriate MSNs, and are therefore gated into WM. The presentation of θ*A activates the same set of MSNs as θA, but the DA-modulated MSNs now become bistable: high activity is enhanced while intermediate activity is suppressed. Only the central MSN remains active with an enhanced amplitude; the two lateral MSNs that were transiently activated by θA in low DA are now suppressed. The activity of the central MSN suffices to gate the location of the new stimulus into WM; the location of the PFC activity bump switches accordingly. Interestingly, this switch from B to A occurs more slowly than the preceding switch from A to B. This effect is also attributable to DA: its release affects the response function of excitatory PFC units, making them less likely to react to a subsequent stimulus and thus enhancing the stability of the bump at the θB angular position.
Once the bump has switched to the angular location θ*A to encode for the conditioned stimulus, the subsequent presentation of θB does not activate MSNs since they are hysteretically locked in the inactive down state. The pattern of activity in the BG continues to encode for θA for as long as the DA level remains elevated, and the PFC activity bump continues to encode for θ*A. In sum, DA induced bistability of MSNs, associated with an expectation of reward, imparts salience selectivity to the gating function of the BG. By locking the activation of MSNs associated with salient input, the BG input prevents a switch in PFC bump activity and preserves the conditioned stimulus in WM. The robustness of the WM activity is enhanced by a combined effect of DA through both increasing the gain of PFC neurons and sustaining MSN input during the delay period (see Fig 5, bottom plot). 4 Discussion We have built a working memory model which links dopaminergic neuromodulation in the prefrontal cortex, bistability-inducing dopaminergic neuromodulation of striatal spiny neurons, and the effects of basal ganglia output on cortical persistence. Figure 5: Top plot shows the location θb of the encoded memory as determined from the population vector of the excitatory cortical units (thin black curve) and the location θs of stimuli as encoded by a Gaussian bump of activity in the input units (grey bars) as a function of time. The second and third panels show the activity of the BG and the PFC modules, respectively. Dopamine level increases in response to the conditioned stimulus. The bottom plot displays increased robustness of WM for conditioned (solid curve) as compared to unconditioned (dashed curve) stimuli. The resulting interactions provide a sophisticated control mechanism over the read-in to working memory and the elimination of noise.
We demonstrated the quality of the system in a model of a standard memory-guided saccade task. There are two central issues for models of working memory: robustness to external noise, such as explicit lures presented during the memory delay period, and robustness to internal noise, coming from unwarranted corruption of the neural substrate of persistent activity. Our model, along with various others, addresses these issues at a cortical level via two basic mechanisms: DA modulation, which changes the excitability of neurons in a particular way (units that are inactive are less excitable by input, while units that are active can become more active), and targeted input from the BG. However, models differ as to the nature and provenance of the BG input, and also its effects on the PFC. Ours is the first to consider the combined, complementary, effects of DA in the PFC and the BG. The requirements for a gating signal are that it be activated at the same time as the stimuli that are to be stored, and that it is a (possibly exclusive) means by which a WM state is established. Following the experimental evidence that perturbing DA leads to disruption of WM [18], a set of theories suggested that a phasic DA signal (as associated, for instance, with reward predicting conditioned stimuli [16]) acts as the gate in the cortex [4]. In various models [17, 2, 6], and also in ours, phasic DA is able to act as a gate through its contrast-enhancing effect on cortical activity. However, as discussed at length in Frank et al [7] (whose model does not incorporate the effect at all), this is unlikely to be the sole gating mechanism, since various stimuli that would not lead to the release of phasic DA still require storage in WM. In our model, even in low DA, the BG gates information by controlling the switching of the attractor state in response to inputs. 
Frank et al [7] point out the various advantages of this type of gating, largely associated with the opportunities for precise temporal and spatial gating specificity, based on information about the task context. Our BG gating mechanism simply involves additional targeted excitatory input to the cortex from the (currently over-simplified) output of striatal spiny neurons, coupled with a detailed account [11] of DA induced bistability in MSNs. This allows us to couple gating to motivationally salient stimuli that induce the release of DA. Since DA controls plasticity in cortico-striatal synapses [14], there is an available mechanism for learning the appropriate gating of salient stimuli, as well as motivationally neutral contextual stimuli that do not trigger DA release but are important to store. Robustness against noise that is internal to the WM is of particular importance for line or surface attractor memories, since they have one or more global directions of null stability and therefore exhibit propensity to diffuse. Rather than rely on bistability in cortical neurons [3], our model relies on input from the striatum to reduce drift. This mechanism is available in both high and low DA conditions. This additional input turns the line attractor into a point attractor at the given location, and thereby adds stability while it persists. The DA induced bistability of MSNs, for which there is now experimental evidence, enhances this stabilization effect. We have focused on the mechanisms by which DA and the BG can influence WM. An important direction for future work is to relate this material to our growing understanding of the provenance of the DA signal in terms of reward prediction errors and motivationally salient cues. References [1] Braver TS, Cohen JD (1999) Prog. Brain Res. 121:327-349. [2] Brunel N, Wang XJ (2001) J. Comp. Neurosci. 11:63-85. [3] Camperi M, Wang XJ (1998) J. Comp. Neurosci. 5:383-405. [4] Cohen JD, Braver TS, Brown JW (2002) Curr. Opin. 
Neurobiol. 12:223-229. [5] Compte A, Brunel N, Goldman-Rakic P, Wang XJ (2000) Cereb. Cortex 10:910-923. [6] Durstewitz D, Seamans J, Sejnowski T (2000) J. Neurophys. 83:1733-1750. [7] Frank M, Loughry B, O’Reilly RC (2001) Cog., Affective, & Behav. Neurosci. 1(2):137-160. [8] Funahashi S, Bruce CJ, Goldman-Rakic PS (1989) J. Neurophys. 255:556-559. [9] Fuster J (1995) Memory in the Cerebral Cortex MIT Press. [10] Goldman-Rakic PS (1995) Neuron 14:477-85. [11] Gruber AJ, Solla SA, Houk JC (2003). NIPS 15. [12] Kawagoe R, Takikawa Y, Hikosaka O (1998) Nat. Neurosci. 1:411-416. [13] O’Reilly RC, Noelle DC, Braver TS, Cohen JD (2002) Cerebral Cortex 12:246-257. [14] Reynolds JN, Wickens JR (2000) Neurosci. 99:199-203. [15] Sawaguchi T, Goldman-Rakic PS (1991) Science 251:947-950. [16] Schultz W, Apicella P, Ljungberg T (1993) J. Neurosci. 13:900-913. [17] Servan-Schreiber D, Printz H, Cohen J (1990) Science 249:892-895. [18] Williams GV, Goldman-Rakic PS (1995) Nature 376:572-575.
Approximate Planning in POMDPs with Macro-Actions Georgios Theocharous MIT AI Lab 200 Technology Square Cambridge, MA 02139 theochar@ai.mit.edu Leslie Pack Kaelbling MIT AI Lab 200 Technology Square Cambridge, MA 02139 lpk@ai.mit.edu Abstract Recent research has demonstrated that useful POMDP solutions do not require consideration of the entire belief space. We extend this idea with the notion of temporal abstraction. We present and explore a new reinforcement learning algorithm over grid-points in belief space, which uses macro-actions and Monte Carlo updates of the Q-values. We apply the algorithm to a large scale robot navigation task and demonstrate that with temporal abstraction we can consider an even smaller part of the belief space, we can learn POMDP policies faster, and we can do information gathering more efficiently. 1 Introduction A popular approach to artificial intelligence is to model an agent and its interaction with its environment through actions, perceptions, and rewards [10]. Intelligent agents should choose actions after every perception, such that their long-term reward is maximized. A well defined framework for this interaction is the partially observable Markov decision process (POMDP) model. Unfortunately solving POMDPs is an intractable problem mainly due to the fact that exact solutions rely on computing a policy over the entire belief-space [6, 3], which is a simplex of dimension equal to the number of states in the underlying Markov decision process (MDP). Recently researchers have proposed algorithms that take advantage of the fact that for most POMDP problems, a large proportion of the belief space is not experienced [7, 9]. In this paper we explore the same idea, but in combination with the notion of temporally extended actions (macro-actions). We propose and investigate a new model-based reinforcement learning algorithm over grid-points in belief space, which uses macro-actions and Monte Carlo updates of the Q-values. 
We apply our algorithm to large scale robot navigation and demonstrate the various advantages of macro-actions in POMDPs. Our experimental results show that with macro-actions an agent experiences a significantly smaller part of the belief space than with simple primitive actions. In addition, learning is faster because an agent can look further into the future and propagate values of belief points faster. And finally, well designed macros, such as macros that can easily take an agent from a high entropy belief state to a low entropy belief state (e.g., go down the corridor), enable agents to perform information gathering. 2 POMDP Planning with Macros We now describe our algorithm for finding an approximately optimal plan for a known POMDP with macro actions. It works by using a dynamically-created finite-grid approximation to the belief space, and then using model-based reinforcement learning to compute a value function at the grid points. Our algorithm takes as input a POMDP model, a resolution r, and a set of macro-actions (described as policies or finite state automata). The output is a set of grid-points (in belief space) and their associated action-values, which via interpolation specify an action-value function over the entire belief space, and therefore a complete policy for the POMDP. Dynamic Grid Approximation A standard method of finding approximate solutions to POMDPs is to discretize the belief space by covering it with a uniformly-spaced grid (otherwise called a regular grid, as shown in Figure 1), then solve an MDP that takes those grid points as states [1]. Unfortunately, the number of grid points required rises exponentially in the number of dimensions in the belief space, which corresponds to the number of states in the original space. Recent studies have shown that in many cases, an agent actually travels through a very small subpart of its entire belief space.
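The belief trajectories used to allocate grid points are generated by the standard Bayes filter over the hidden state. A minimal sketch with a toy two-state model; the transition and observation probabilities are illustrative:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Standard POMDP belief update: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] b(s)."""
    b_pred = b @ T[a]               # predict through the transition model
    b_new = O[a][:, o] * b_pred     # reweight by the observation likelihood
    return b_new / b_new.sum()      # renormalize onto the belief simplex

# Toy 2-state, 1-action, 2-observation POMDP (numbers are illustrative)
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}     # T[a][s, s'] = P(s' | s, a)
O = {0: np.array([[0.8, 0.2],
                  [0.3, 0.7]])}     # O[a][s', o] = P(o | s', a)
b = np.array([0.5, 0.5])
b = belief_update(b, 0, 0, T, O)    # observe o=0 after taking a=0
```

Each simulated step produces one such updated belief, which the dynamic grid then snaps to its nearest grid point.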
Roy and Gordon [9] find a low-dimensional subspace of the original belief space, then discretize that uniformly to get an MDP approximation to the original POMDP. This is an effective strategy, but it might be that the final uniform discretization is unnecessarily fine. Figure 1: The figure depicts various regular discretizations of a 3-dimensional belief simplex. The belief space is the surface of the triangle, while grid points are the intersections of the lines drawn within the triangles. Using resolutions that are powers of 2 allows finer discretizations to include the points of coarser discretizations. In our work, we allocate grid points from a uniformly-spaced grid dynamically by simulating trajectories of the agent through the belief space. At each belief state experienced, we find the grid point that is closest to that belief state and add it to the set of grid points that we explicitly consider. In this way, we develop a set of grid points that is typically a very small subset of the entire possible grid, which is adapted to the parts of the belief space typically inhabited by the agent. In particular, given a grid resolution r and a belief state b, we can compute the coordinates (grid points gi) of the belief simplex that contains b using an efficient method called Freudenthal triangulation [2]. In addition to the vertices of a sub-simplex, Freudenthal triangulation also produces barycentric coordinates λi with respect to the gi, which enable effective interpolation of the value of the belief state b from the values of the grid points gi [1]. Using the barycentric coordinates we can also decide which is the closest grid-point to be added to the state space. Macro Actions The semi-Markov decision process (SMDP) model has become the preferred method for modeling temporally extended actions.
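To make the grid mapping concrete, here is a minimal sketch that maps a belief vector onto the nearest point of a resolution-r grid. This is not the paper's Freudenthal triangulation; it is a simplified rounding stand-in (the function name and repair heuristic are ours):

```python
import numpy as np

def nearest_grid_point(b, r):
    """Map a belief vector b onto a nearby point of the resolution-r grid.

    Simplified stand-in for Freudenthal triangulation: scale by r, round,
    then repair rounding drift so the result stays on the belief simplex.
    """
    scaled = np.asarray(b, dtype=float) * r
    g = np.floor(scaled + 0.5).astype(int)   # round each coordinate
    drift = r - g.sum()                      # how far off the simplex we are
    if drift != 0:
        # Adjust the entries with the largest rounding error first.
        err = scaled - g
        order = np.argsort(err if drift > 0 else -err)[::-1]
        for i in order[:abs(drift)]:
            g[i] += 1 if drift > 0 else -1
    return g / r                             # back to a belief vector

b = np.array([0.20, 0.70, 0.10])
print(nearest_grid_point(b, 4))
```

At resolution 4 the belief (0.20, 0.70, 0.10) snaps to (0.25, 0.75, 0), i.e. to a point whose coordinates are multiples of 1/4 and still sum to one.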
An SMDP is defined as a five-tuple (S, A, P, R, F), where S is a finite set of states, A is the set of actions, P is the state and action transition probability function, R is the reward function, and F is a function giving the probability of transition times for each state-action pair. The transitions occur at decision epochs only. The SMDP represents snapshots of the system at decision points, whereas the so-called natural process [8] describes the evolution of the system over all times. Discrete-time SMDPs represent transition distributions as F(s′, N | s, a), which specifies the probability that action a will take N steps before terminating in state s′ when started in state s. Q-learning generalizes nicely to discrete SMDPs. The Q-learning rule for discrete-time discounted SMDPs is Qt+1(s, a) ← (1 − β)Qt(s, a) + β [R + γ^k max over a′ ∈ A(s′) of Qt(s′, a′)], where β ∈ (0, 1), and action a was initiated in state s, lasted for k steps, and terminated in state s′, while generating a total discounted sum of rewards of R. Several frameworks for hierarchical reinforcement learning have been proposed, all of which are variants of SMDPs, such as the “options” framework [11]. Macro-actions have been shown to be useful in a variety of MDP situations, but they have a special utility in POMDPs. For example, in a robot navigation task modeled as a POMDP, macro-actions can consist of small state machines, such as a simple policy for driving down a corridor without hitting the walls until the end is reached. Such actions may have the useful property of reducing the entropy of the belief space, by helping a robot to localize its position. In addition, they relieve us of the burden of having to choose another primitive action based on the new belief state. Using macro-actions tends to reduce the number of belief states that are visited by the agent.
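The SMDP Q-learning rule above can be sketched in a few lines of Python (the state and macro-action names, β, and γ are illustrative, not from the paper):

```python
from collections import defaultdict

def smdp_q_update(Q, s, a, R, s_next, k, beta=0.1, gamma=0.95):
    """One discrete-time SMDP Q-learning backup.

    R is the total discounted reward accumulated while macro-action a ran;
    k is the number of primitive steps it lasted before terminating in s_next.
    """
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] = (1 - beta) * Q[s][a] + beta * (R + gamma ** k * best_next)
    return Q[s][a]

# Nested dict-of-dicts Q-table, defaulting to 0 for unseen (state, action) pairs.
Q = defaultdict(lambda: defaultdict(float))
smdp_q_update(Q, "corridor", "go-down-corridor", R=3.0, s_next="junction", k=5)
```

Note the γ^k factor: because the macro-action lasted k steps, the value of the successor state is discounted over the whole duration, not just one step.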
If a robot navigates largely by using macro-actions to move to important landmarks, it will never be necessary to model the belief states that are concerned with where the robot is within a corridor, for example. Algorithm Our algorithm works by building a grid-based approximation of the belief space while executing a policy made up of macro-actions. The policy is determined by “solving” the finite MDP over the grid points. Computing a policy over grid points equally spaced in the belief simplex, otherwise called regular discretization, is computationally intractable since the number of grid-points grows exponentially with the resolution [2]. Nonetheless, the value of a belief point in a regular discretization can be interpolated efficiently from the values of the neighboring grid-points [2]. On the other hand, in variable-resolution non-regular grids, interpolation can be computationally expensive [1]. A better approach is variable resolution with regular discretization, which takes advantage of fast interpolation and increases resolution only in the necessary areas [12]. Our approach falls in this last category with the addition of macro-actions, which exhibit various advantages over approaches using primitive actions only. Specifically, we use a reinforcement-learning algorithm (rather than dynamic programming) to compute a value function over the MDP states. It works by generating trajectories through the belief space according to the current policy, with some added exploration. Reinforcement learning using a model, otherwise called real-time dynamic programming (RTDP), is not only better suited for huge spaces but in our case is also convenient for estimating the necessary models of our macro-actions over the experienced grid points. While Figure 2 gives a graphical explanation of the algorithm, below we sketch the entire algorithm in detail: 1. Assume a current true state s.
This is the physical true location of the agent, and it should have support in the current belief state b (that is, b(s) ≠ 0).

2. Discretize the current belief state b → gi, where gi is the closest grid-point (with the maximum barycentric coordinate) in a regular discretization of the belief space. If gi is missing, add it to the table. If the resolution is 1, initialize its value to zero; otherwise interpolate its initial value from coarser resolutions.

Figure 2: The agent finds itself at a belief state b. It maps b to the grid point g, which has the largest barycentric coordinate among the sub-simplex coordinates that contain b. Now, it needs to do a value backup for that grid point. It chooses a macro-action and executes it starting from the chosen grid-point, using the primitive actions and observations that it does along the way to update its belief state. It needs to get a value estimate for the resulting belief state b′′. It does so by using the barycentric coordinates from the grid to interpolate a value from nearby grid points g1, g2, and g3. In case the nearest grid-point gi is missing, it is interpolated from coarser resolutions and added to the representation. If the resolution is 1, the value of gi is initialized to zero. The agent executes the macro-action from the same grid point g multiple times so that it can approximate the probability distribution over the resulting belief states b′′. Finally, it can update the estimated value of the grid point g and execute the macro-action chosen from the true belief state b. The process repeats from the next true belief state b′.

3. Choose a random action ϵ% of the time. The rest of the time choose the best macro-action µ by interpolating over the Q-values of the vertices of the sub-simplex that contains b: µ = argmax over µ ∈ M of Σi=1..|S|+1 λi Q(gi, µ).

4.
Estimate E[R(gi, µ) + γ^t V(b′)] by sampling:

(a) Sample a state s from the current grid-belief state gi (which, like all belief states, represents a probability distribution over world states).
i. Set t = 0.
ii. Choose the appropriate primitive action a according to macro-action µ.
iii. Sample the next state s′ from the transition model T(s, a, ·).
iv. Sample an observation z from the observation model O(a, s′, ·).
v. Store the reward: R(gi, µ) := R(gi, µ) + γ^t R(s, a). For faster learning we use reward shaping: R(gi, µ) := R(gi, µ) + γ^(t+1) V(s′) − γ^t V(s), where V(s) are the values of the underlying MDP [5].
vi. Update the belief state: b′(j) := (1/α) O(a, j, z) Σi∈S T(i, a, j) b(i), for all states j, where α is a normalizing factor.
vii. Set t = t + 1, b = b′, s = s′ and repeat from step 4(a)ii until µ terminates.
(b) Compute the value of the resulting belief state b′ by interpolating over the vertices of the resulting belief sub-simplex: V(b′) = Σi=1..|S|+1 λi V(gi). If the closest grid-point (with the maximum barycentric coordinate) is missing, interpolate it from coarser resolutions, and add it to the hash-table.
(c) Repeat steps 4a and 4b multiple times, and average the estimates of R(gi, µ) + γ^t V(b′).

5. Update the state-action value: Q(gi, µ) = (1 − β)Q(gi, µ) + β[R + γ^t V(b′)].

6. Update the state value: V(gi) = max over µ ∈ M of Q(gi, µ).

7. Execute the macro-action µ starting from belief state b until termination. During execution, generate observations by sampling the POMDP model, starting from the true state s. Set b = b′ and s = s′ and go to step 2.

8. Repeat this learning epoch multiple times starting from the same b.

3 Experimental Results We tested this algorithm by applying it to the problem of robot navigation, which is a classic sequential decision-making problem under uncertainty. We performed experiments in a corridor environment, shown in Figure 3.
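The belief update of step 4(a)vi and the barycentric interpolation of step 4(b) above can be sketched as follows, on a hypothetical two-state, one-action, two-observation model with illustrative numbers (not the corridor POMDP):

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """Step 4(a)vi: b'(j) = (1/alpha) * O(a, j, z) * sum_i T(i, a, j) * b(i).

    T[a] is an |S|x|S| transition matrix; O[a] is an |S|x|Z| observation matrix.
    """
    b_next = O[a][:, z] * (T[a].T @ b)
    return b_next / b_next.sum()          # alpha is the normalizing factor

def interpolate_value(lams, grid_values):
    """Step 4(b): V(b') as the barycentric mix of grid-point values."""
    return float(np.dot(lams, grid_values))

# Tiny illustrative model.
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3], [0.4, 0.6]])}
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, z=0, T=T, O=O))
print(interpolate_value([0.25, 0.75], [2.0, 4.0]))
```

Observing z = 0, which is more likely in state 0 under this O, shifts the belief mass toward state 0; the interpolated value is simply the λ-weighted average of the neighboring grid-point values.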
Such a topological map can be compiled into a POMDP, in which the discrete states stand for regions in the robot’s pose space (for example, 2 square meters in position and 90 degrees in orientation). In such a representation, the robot can move through the different environment states by taking actions such as “go-forward”, “turn-left”, and “turn-right”. A macro-action is implemented as a behavior (which could be a POMDP policy) that takes observations as inputs and outputs actions. In our experiments we have a macro-action for going down the corridor until the end. In this navigation domain, our robot can only perceive sixteen possible observations, which indicate the presence of a wall or opening on the four sides of the robot. The observations are extracted from trained neural nets whose inputs are local occupancy grids constructed from sonar sensors and whose outputs are probabilities of walls and openings [4]. The POMDP model of the corridor environment has a reward function with value -1 in every state, except for -100 for going forward into a wall and +100 for taking any action from the four-way junction. Figure 3: The figure on the left shows the floor plan of our experimental environment. The figure on the right is a topological map representation of the floor, which compiles into a POMDP with 1068 world states. The numbers next to the edges are the distances between the nodes in meters. We ran the algorithm starting with resolution 1. When the average number of training steps stabilized we increased the resolution by multiplying it by 2. The maximum resolution we considered was 4. Each training episode started from the uniform initial belief state and was terminated when the four-way junction was reached or when more than 200 steps were taken. We ran the algorithm with and without the macro-action go-down-the-corridor.
We compared the results with the QMDP heuristic, which first solves the underlying MDP and then, given any belief state, chooses the action that maximizes the dot product of the belief and the Q-values of state-action pairs: aQMDP = argmax over a of Σs=1..|S| b(s) Q(s, a). With Reward Shaping The learning results in Figure 4 demonstrate that learning with macro-actions requires fewer training steps, which means the agent is getting to the goal faster. An exception is when the resolution is 1, where training with only primitive actions requires a small number of steps too. Nonetheless, as we increase the resolution, training with primitive actions only does not scale well, because the number of states increases dramatically. In general, the number of grid points used with or without macro-actions is significantly smaller than the total number of points allowed for regular discretization. For example, for a regular discretization the number of grid points can be computed by the formula given in [2], (r + |S| − 1)! / (r! (|S| − 1)!), which is about 5.4 × 10^10 for r = 4 and |S| = 1068. Our algorithm with macro-actions uses only about 3000 and with primitive actions only about 6500 grid points. Figure 4: The graph on the left shows the average number of training steps per episode as a function of the number of episodes. The graph on the right shows the number of grid-points added during learning. The sharp changes in the graph are due to the resolution increase.
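The grid-point count above is just a binomial coefficient, so it is easy to check directly (the function name is ours):

```python
from math import comb

def num_grid_points(r, n_states):
    """Number of points in a regular resolution-r discretization of the
    (n_states - 1)-dimensional belief simplex:
    (r + |S| - 1)! / (r! (|S| - 1)!) = C(r + |S| - 1, r)."""
    return comb(r + n_states - 1, r)

print(num_grid_points(4, 1068))   # roughly 5.4e10 for the corridor POMDP
```

The contrast with the roughly 3000-6500 grid points actually instantiated by the dynamic-grid algorithm shows how little of the regular grid the agent ever visits.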
We tested the policies that resulted from each algorithm by starting from a uniform initial belief state and a uniformly randomly chosen world state and simulating the greedy policy derived by interpolating the grid value function. We tested our plans over 200 different sampling sequences and report the results in Figure 5. A run was considered a success if the robot was able to reach the goal in fewer than 200 steps. Figure 5: The figure on the left shows the success percentage for the different methods during testing. The results are reported after training for each resolution. The graph on the right shows the number of steps during testing. For the primitive-actions-only algorithm we report the result for resolution 1 only, since it was as successful as the macro-action algorithm. From Figure 5 we can conclude that the QMDP approach can never be 100% successful, while the primitive-actions algorithm can perform quite well with resolution 1 in this environment. It is also evident from Figure 5 that as we increase the resolution, the macro-action algorithm maintains its robustness while the primitive-action algorithm performs considerably worse, mainly due to the fact that it requires more grid-points. In addition, when we compared the average number of testing steps for resolution 1, the macro-action algorithm seems to have learned a better policy. The macro-action policy seems to get worse for resolution 4 due to the increasing number of grid-points added to the representation. This means that more training is required. Without Reward Shaping We also performed experiments to investigate the effect of reward shaping. Figure 6 shows that with primitive actions only, the algorithm fails completely.
However, with macro-actions the algorithm still converges and is more successful than the QMDP heuristic. Figure 6: The graph on the left shows the average number of training steps (without reward shaping). The figure on the right shows the success percentage. Information Gathering Apart from simulated experiments, we also wanted to compare the performance of QMDP with the macro-action algorithm on a platform more closely related to a real robot. We used the Nomad 200 simulator and describe a test in Figure 7 to demonstrate how our algorithm is able to perform information gathering, as compared to QMDP. 4 Conclusions In this paper we have presented an approximate planning algorithm for POMDPs that uses macro-actions. Our algorithm is able to solve a difficult planning problem, namely the task of navigating to a goal in a POMDP with a huge state space starting from a uniform initial belief, which is more difficult than many of the tasks that similar algorithms are tested on. In addition, we have presented an effective reward-shaping approach to POMDPs that results in faster training (even without macro-actions). In general, macro-actions in POMDPs allow us to experience a smaller part of the state space, back up values faster, and do information gathering. As a result we can afford to allow for a higher grid resolution, which results in better performance. We cannot do this with only primitive actions (unless we use reward shaping), and it is completely out of the question for an exact solution over the entire regular grid. In our current research we are investigating methods for dynamic discovery of “good” macro-actions given a POMDP. References [1] M. Hauskrecht. Value-function approximations for partially observable Markov decision processes.
Journal of Artificial Intelligence Research, 13:33–94, 2000. [2] W. S. Lovejoy. Computationally feasible bounds for partially observed Markov decision processes. Operations Research, 39(1):162–175, January-February 1991. [3] O. Madani, S. Hanks, and A. Condon. On the undecidability of probabilistic planning and infinite-horizon partially observable Markov decision processes. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 409–416, 1999. Figure 7: The figure shows the actual floor as it was designed in the Nomad 200 simulator. For the QMDP approach the robot starts from START with a uniform initial belief. After reaching J2 the belief becomes bi-modal, concentrating on J1 and J2. The robot then keeps turning left and right. On the other hand, with our planning algorithm, the robot again starts from START and a uniform initial belief. Upon reaching J2 the belief becomes bi-modal over J1 and J2. The agent resolves its uncertainty by deciding that the best action to take is the go-down-the-corridor macro, at which point it encounters J3 and localizes. The robot then is able to reach its goal by traveling from J3 to J2, J1, J4, and J5. [4] S. Mahadevan, G. Theocharous, and N. Khaleeli. Fast concept learning for mobile robots. Machine Learning and Autonomous Robots Journal (joint issue), 31/5:239–251, 1998. [5] A. Y. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, 1999. [6] C. Papadimitriou and J. Tsitsiklis. The complexity of Markov decision processes. Mathematics of Operations Research, 12(3), 1987. [7] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence, 2003. [8] M. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley, 1994. [9] N. Roy and G. Gordon.
Exponential family PCA for belief compression in POMDPs. In Advances in Neural Information Processing Systems, 2003. [10] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2nd edition, 2003. [11] R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181–211, 1999. [12] R. Zhou and E. A. Hansen. An improved grid-based approximation algorithm for POMDPs. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-01), Seattle, WA, August 2001.
ARA*: Anytime A* with Provable Bounds on Sub-Optimality Maxim Likhachev, Geoff Gordon and Sebastian Thrun School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 {maxim+, ggordon, thrun}@cs.cmu.edu Abstract In real world planning problems, time for deliberation is often limited. Anytime planners are well suited for these problems: they find a feasible solution quickly and then continually work on improving it until time runs out. In this paper we propose an anytime heuristic search, ARA*, which tunes its performance bound based on available search time. It starts by finding a suboptimal solution quickly using a loose bound, then tightens the bound progressively as time allows. Given enough time it finds a provably optimal solution. While improving its bound, ARA* reuses previous search efforts and, as a result, is significantly more efficient than other anytime search methods. In addition to our theoretical analysis, we demonstrate the practical utility of ARA* with experiments on a simulated robot kinematic arm and a dynamic path planning problem for an outdoor rover. 1 Introduction Optimal search is often infeasible for real world problems, as we are given a limited amount of time for deliberation and want to find the best solution given the time provided. In these conditions anytime algorithms [9, 2] prove to be useful as they usually find a first, possibly highly suboptimal, solution very fast and then continually work on improving the solution until allocated time expires. Unfortunately, they can rarely provide bounds on the sub-optimality of their solutions unless the cost of an optimal solution is already known. Even less often can these algorithms control their sub-optimality. 
Providing suboptimality bounds is valuable, though: it allows one to judge the quality of the current plan, decide whether to continue or preempt search based on the current sub-optimality, and evaluate the quality of past planning episodes and allocate time for future planning episodes accordingly. Control over the sub-optimality bounds helps in adjusting the tradeoff between computation and plan quality. A* search with inflated heuristics (actual heuristic values are multiplied by an inflation factor ϵ > 1) is sub-optimal but proves to be fast for many domains [1, 5, 8] and also provides a bound on the sub-optimality, namely, the ϵ by which the heuristic is inflated [7]. To construct an anytime algorithm with sub-optimality bounds one could run a succession of these A* searches with decreasing inflation factors. This naive approach results in a series of solutions, each one with a sub-optimality factor equal to the corresponding inflation factor. This approach has control over the sub-optimality bound, but wastes a lot of computation since each search iteration duplicates most of the efforts of the previous searches. One could try to employ incremental heuristic searches (e.g., [4]), but the sub-optimality bounds for each search iteration would no longer be guaranteed. To this end we propose the ARA* (Anytime Repairing A*) algorithm, which is an efficient anytime heuristic search that also runs A* with inflated heuristics in succession but reuses search efforts from previous executions in such a way that the sub-optimality bounds are still satisfied. As a result, a substantial speedup is achieved by not re-computing the state values that have been correctly computed in the previous iterations. We show the efficiency of ARA* on two different domains. An evaluation of ARA* on a simulated robot kinematic arm with six degrees of freedom shows up to 6-fold speedup over the succession of A* searches. 
We also demonstrate ARA* on the problem of planning a path for a mobile robot that takes into account the robot’s dynamics. The only other anytime heuristic search known to us is Anytime A*, described in [8]. It also first executes an A* with inflated heuristics and then continues to improve a solution. However, the algorithm does not have control over its sub-optimality bound, except by selecting the inflation factor of the first search. Our experiments show that ARA* is able to decrease its bounds much more gradually and, moreover, does so significantly faster. Another advantage of ARA* is that it guarantees to examine each state at most once during its first search, unlike the algorithm of [8]. This property is important because it provides a bound on the amount of time before ARA* produces its first plan. Nevertheless, as mentioned later, [8] describes a number of very interesting ideas that are also applicable to ARA*. 2 The ARA* Algorithm 2.1 A* with Weighted Heuristic Normally, A* takes as input a heuristic h(s) which must be consistent. That is, h(s) ≤ c(s, s′) + h(s′) for any successor s′ of s if s ≠ sgoal, and h(s) = 0 if s = sgoal. Here c(s, s′) denotes the cost of an edge from s to s′ and has to be positive. Consistency, in turn, guarantees that the heuristic is admissible: h(s) is never larger than the true cost of reaching the goal from s. Inflating the heuristic (that is, using ϵ ∗ h(s) for ϵ > 1) often results in far fewer state expansions and consequently faster searches. However, inflating the heuristic may also violate the admissibility property, and as a result, a solution is no longer guaranteed to be optimal. The pseudocode of A* with an inflated heuristic is given in Figure 1 for easy comparison with our algorithm, ARA*, presented later.
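The consistency condition can be checked mechanically. The sketch below (a hypothetical three-node graph; all names ours) also shows how multiplying h by ϵ > 1 can break consistency:

```python
def is_consistent(h, edges, goal):
    """Check h(s) <= c(s, s') + h(s') for every edge, and h(goal) == 0.

    h is a dict from node to heuristic value; edges is a list of
    (source, target, cost) triples.
    """
    if h[goal] != 0:
        return False
    return all(h[s] <= c + h[t] for (s, t, c) in edges)

# Hypothetical chain a -> b -> g with unit edge costs.
edges = [("a", "b", 1), ("b", "g", 1)]
print(is_consistent({"a": 2, "b": 1, "g": 0}, edges, "g"))   # consistent
print(is_consistent({"a": 3, "b": 1, "g": 0}, edges, "g"))   # inflated h(a) breaks it
```

In the second call h(a) = 3 > c(a, b) + h(b) = 2, which is exactly the kind of violation an inflation factor ϵ > 1 can introduce.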
A* maintains two functions from states to real numbers: g(s) is the cost of the current path from the start node to s (it is assumed to be ∞ if no path to s has been found yet), and f(s) = g(s) + ϵ ∗ h(s) is an estimate of the total distance from start to goal going through s. A* also maintains a priority queue, OPEN, of states which it plans to expand. The OPEN queue is sorted by f(s), so that A* always expands next the state which appears to be on the shortest path from start to goal. A* initializes the OPEN list with the start state, sstart (line 02). Each time it expands a state s (lines 04-11), it removes s from OPEN. It then updates the g-values of all of s’s neighbors; if it decreases g(s′), it inserts s′ into OPEN. A* terminates as soon as the goal state is expanded.

01 g(sstart) = 0; OPEN = ∅;
02 insert sstart into OPEN with f(sstart) = ϵ ∗ h(sstart);
03 while (sgoal is not expanded)
04   remove s with the smallest f-value from OPEN;
05   for each successor s′ of s
06     if s′ was not visited before then
07       f(s′) = g(s′) = ∞;
08     if g(s′) > g(s) + c(s, s′)
09       g(s′) = g(s) + c(s, s′);
10       f(s′) = g(s′) + ϵ ∗ h(s′);
11       insert s′ into OPEN with f(s′);

Figure 1: A* with heuristic weighted by ϵ ≥ 1

Figure 2: Left three columns: A* searches with decreasing ϵ. Right three columns: the corresponding ARA* search iterations.

Setting ϵ to 1 results in standard A* with an uninflated heuristic; the resulting solution is guaranteed to be optimal. For ϵ > 1 a solution can be sub-optimal, but the sub-optimality is bounded by a factor of ϵ: the length of the found solution is no larger than ϵ times the length of the optimal solution [7]. The left three columns in Figure 2 show the operation of the A* algorithm with a heuristic inflated by ϵ = 2.5, ϵ = 1.5, and ϵ = 1 (no inflation) on a simple grid world. In this example we use an eight-connected grid with black cells being obstacles. S denotes a start state, while G denotes a goal state.
The cost of moving from one cell to its neighbor is one. The heuristic is the larger of the x and y distances from the cell to the goal. The cells which were expanded are shown in grey. (A* can stop search as soon as it is about to expand a goal state without actually expanding it. Thus, the goal state is not shown in grey.) The paths found by these searches are shown with grey arrows. The A* searches with inflated heuristics expand substantially fewer cells than A* with ϵ = 1, but their solution is sub-optimal. 2.2 ARA*: Reuse of Search Results ARA* works by executing A* multiple times, starting with a large ϵ and decreasing ϵ prior to each execution until ϵ = 1. As a result, after each search a solution is guaranteed to be within a factor ϵ of optimal. Running A* search from scratch every time we decrease ϵ, however, would be very expensive. We will now explain how ARA* reuses the results of the previous searches to save computation. We first explain the ImprovePath function (left column in Figure 3) that recomputes a path for a given ϵ. In the next section we explain the Main function of ARA* (right column in Figure 3) that repetitively calls the ImprovePath function with a series of decreasing ϵs. Let us first introduce the notion of local inconsistency (we borrow this term from [4]). A state is called locally inconsistent every time its g-value is decreased (line 09, Figure 1) and until the next time the state is expanded. That is, suppose that state s is the best predecessor of some state s′, so that g(s′) = min over s′′ ∈ pred(s′) of (g(s′′) + c(s′′, s′)) = g(s) + c(s, s′). Then, if g(s) decreases, we get g(s′) > min over s′′ ∈ pred(s′) of (g(s′′) + c(s′′, s′)). In other words, the decrease in g(s) introduces a local inconsistency between the g-value of s and the g-values of its successors. Whenever s is expanded, on the other hand, the inconsistency of s is corrected by re-evaluating the g-values of the successors of s (lines 08-09, Figure 1).
This in turn makes the successors of s locally inconsistent. In this way the local inconsistency is propagated to the children of s via a series of expansions. Eventually the children no longer rely on s, none of their g-values are lowered, and none of them are inserted into the OPEN list. Given this definition of local inconsistency it is clear that the OPEN list consists of exactly all locally inconsistent states: every time a g-value is lowered the state is inserted into OPEN, and every time a state is expanded it is removed from OPEN until the next time its g-value is lowered. Thus, the OPEN list can be viewed as a set of states from which we need to propagate local inconsistency. A* with a consistent heuristic is guaranteed not to expand any state more than once. Setting ϵ > 1, however, may violate consistency, and as a result A* search may re-expand states multiple times. It turns out that if we restrict each state to be expanded no more than once, then the sub-optimality bound of ϵ still holds. To implement this restriction we check any state whose g-value is lowered and insert it into OPEN only if it has not been previously expanded (line 10, Figure 3). The set of expanded states is maintained in the CLOSED variable. 
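As a concrete illustration of the weighted A* of Section 2.1, here is a minimal Python sketch (a hypothetical 4-connected grid map; tie-breaking and path reconstruction are simplified, so this is an illustration rather than the paper's implementation):

```python
import heapq

def weighted_astar(start, goal, succ, h, eps=1.0):
    """A* with heuristic inflated by eps: f(s) = g(s) + eps * h(s).

    succ(s) yields (s', cost) pairs. Returns (path cost, expansion count).
    The returned cost is at most eps times the optimal cost.
    """
    g = {start: 0.0}
    open_list = [(eps * h(start), start)]
    expanded = 0
    while open_list:
        f, s = heapq.heappop(open_list)
        if s == goal:
            return g[s], expanded
        if f > g[s] + eps * h(s):       # stale queue entry, skip it
            continue
        expanded += 1
        for s2, c in succ(s):
            if g[s] + c < g.get(s2, float("inf")):
                g[s2] = g[s] + c
                heapq.heappush(open_list, (g[s2] + eps * h(s2), s2))
    return float("inf"), expanded

# Hypothetical 4-connected grid; '#' cells are obstacles.
grid = ["....", ".##.", "...."]
def succ(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] != "#":
            yield (nx, ny), 1

goal = (3, 0)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan distance
print(weighted_astar((0, 2), goal, succ, h, eps=2.5))
```

With ϵ = 2.5 the search behaves greedily toward the goal and expands far fewer states than the ϵ = 1 run, at the price of only an ϵ-bounded guarantee on solution cost.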
procedure fvalue(s)
01 return g(s) + ϵ ∗ h(s);

procedure ImprovePath()
02 while (fvalue(sgoal) > min over s ∈ OPEN of fvalue(s))
03   remove s with the smallest fvalue(s) from OPEN;
04   CLOSED = CLOSED ∪ {s};
05   for each successor s′ of s
06     if s′ was not visited before then
07       g(s′) = ∞;
08     if g(s′) > g(s) + c(s, s′)
09       g(s′) = g(s) + c(s, s′);
10       if s′ ∉ CLOSED
11         insert s′ into OPEN with fvalue(s′);
12       else
13         insert s′ into INCONS;

procedure Main()
01’ g(sgoal) = ∞; g(sstart) = 0;
02’ OPEN = CLOSED = INCONS = ∅;
03’ insert sstart into OPEN with fvalue(sstart);
04’ ImprovePath();
05’ ϵ′ = min(ϵ, g(sgoal) / min over s ∈ OPEN ∪ INCONS of (g(s) + h(s)));
06’ publish current ϵ′-suboptimal solution;
07’ while ϵ′ > 1
08’   decrease ϵ;
09’   move states from INCONS into OPEN;
10’   update the priorities for all s ∈ OPEN according to fvalue(s);
11’   CLOSED = ∅;
12’   ImprovePath();
13’   ϵ′ = min(ϵ, g(sgoal) / min over s ∈ OPEN ∪ INCONS of (g(s) + h(s)));
14’   publish current ϵ′-suboptimal solution;

Figure 3: ARA*

With this restriction we will expand each state at most once, but OPEN may no longer contain all the locally inconsistent states. In fact, it will only contain the locally inconsistent states that have not yet been expanded. It is important, however, to keep track of all the locally inconsistent states as they will be the starting points for inconsistency propagation in the future search iterations. We do this by maintaining the set INCONS of all the locally inconsistent states that are not in OPEN (lines 12-13, Figure 3). Thus, the union of INCONS and OPEN is exactly the set of all locally inconsistent states, and can be used as a starting point for inconsistency propagation before each new search iteration. The only other difference between the ImprovePath function and A* is the termination condition. Since the ImprovePath function reuses search efforts from the previous executions, sgoal may never become locally inconsistent and thus may never be inserted into OPEN.
As a result, the termination condition of A* becomes invalid. A* search, however, can also stop as soon as f(sgoal) is equal to the minimal f-value among all the states on the OPEN list. This is the condition that we use in the ImprovePath function (line 02, Figure 3). It also allows us to avoid expanding sgoal as well as possibly some other states with the same f-value. (Note that ARA* no longer maintains f-values as variables: since ϵ changes in between the calls to the ImprovePath function, it would be prohibitively expensive to update the f-values of all the states. Instead, the fvalue(s) function is called to compute and return the f-values on demand, only for the states in OPEN and for sgoal.)

2.3 ARA*: Iterative Execution of Searches

We now introduce the main function of ARA* (right column in Figure 3), which performs a series of search iterations. It does initialization and then repeatedly calls the ImprovePath function with a series of decreasing ϵs. Before each call to the ImprovePath function a new OPEN list is constructed by moving into it the contents of the set INCONS. Since the OPEN list has to be sorted by the current f-values of states, it is also re-ordered (lines 09’-10’, Figure 3). Thus, after each call to the ImprovePath function we get a solution that is sub-optimal by at most a factor of ϵ. As suggested in [8], a sub-optimality bound can also be computed as the ratio between g(sgoal), which gives an upper bound on the cost of an optimal solution, and the minimum un-weighted f-value of a locally inconsistent state, which gives a lower bound on the cost of an optimal solution. (This is a valid sub-optimality bound as long as the ratio is larger than or equal to one; otherwise, g(sgoal) is already equal to the cost of an optimal solution.) Thus, the actual sub-optimality bound for ARA* is computed as the minimum of ϵ and this ratio (lines 05’ and 13’, Figure 3).
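The bound computation on lines 05’ and 13’ of Figure 3 can be sketched as follows (a hedged sketch with our own names; `g` and `h` are dicts here, and an empty OPEN ∪ INCONS is taken to mean the current solution is already optimal):

```python
def suboptimality_bound(eps, g, h, goal, OPEN, INCONS):
    """Compute eps' = min(eps, g(goal) / min_{s in OPEN ∪ INCONS}(g(s)+h(s))),
    mirroring lines 05'/13' of the ARA* pseudocode."""
    frontier = set(OPEN) | set(INCONS)
    if not frontier:
        return 1.0  # no locally inconsistent states left: solution is optimal
    lower_bound = min(g[s] + h[s] for s in frontier)  # lower bound on optimal cost
    return min(eps, g[goal] / lower_bound)
```

The denominator is the smallest un-weighted f-value over all locally inconsistent states, i.e., the lower bound on the optimal solution cost described above.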
At first, one may also think of using this actual sub-optimality bound in deciding how to decrease ϵ between search iterations (e.g., setting ϵ to ϵ′ minus a small delta). Experiments, however, seem to suggest that decreasing ϵ in small steps is still more beneficial. The reason is that a small decrease in ϵ often results in the improvement of the solution, despite the fact that the actual sub-optimality bound of the previous solution was already substantially less than the value of ϵ. A large decrease in ϵ, on the other hand, may often result in the expansion of too many states during the next search. (Another useful suggestion from [8], which we have not implemented in ARA*, is to prune OPEN so that it never contains a state whose un-weighted f-value is larger than or equal to g(sgoal).) Within each execution of the ImprovePath function we mainly save computation by not re-expanding the states which were locally consistent and whose g-values were already correct before the call to ImprovePath (Theorem 2 states this more precisely). For example, the right three columns in Figure 2 show a series of calls to the ImprovePath function. States that are locally inconsistent at the end of an iteration are shown with an asterisk. While the first call (ϵ = 2.5) is identical to the A* call with the same ϵ, the second call to the ImprovePath function (ϵ = 1.5) expands only 1 cell. This is in contrast to 15 cells expanded by A* search with the same ϵ. For both searches the sub-optimality factor, ϵ, decreases from 2.5 to 1.5. Finally, the third call to the ImprovePath function with ϵ set to 1 expands only 9 cells. The solution is now optimal, and the total number of expansions is 23. Only 2 cells are expanded more than once across all three calls to the ImprovePath function. Even a single optimal search from scratch expands 20 cells. 2.4 Theoretical Properties of the Algorithm We now present some of the theoretical properties of ARA*. 
For the proofs of these and other properties of the algorithm please refer to [6]. We use g*(s) to denote the cost of an optimal path from sstart to s. Let us also define a greedy path from sstart to s as a path that is computed by tracing it backward as follows: start at s, and at any state si pick a state si−1 = argmin_{s′∈pred(si)} (g(s′) + c(s′, si)) until si−1 = sstart.

Theorem 1 Whenever the ImprovePath function exits, for any state s with f(s) ≤ min_{s′∈OPEN} f(s′), we have g*(s) ≤ g(s) ≤ ϵ · g*(s), and the cost of a greedy path from sstart to s is no larger than g(s).

The correctness of ARA* follows from this theorem: each execution of the ImprovePath function terminates when f(sgoal) is no larger than the minimum f-value in OPEN, which means that the greedy path from start to goal that we have found is within a factor ϵ of optimal. Since ϵ is decreased before each iteration, and it is in turn an upper bound on ϵ′, ARA* gradually decreases the sub-optimality bound and finds new solutions to satisfy the bound.

Theorem 2 Within each call to ImprovePath() a state is expanded at most once, and only if it was locally inconsistent before the call to ImprovePath() or its g-value was lowered during the current execution of ImprovePath().

The second theorem formalizes where the computational savings of ARA* come from. Unlike A* search with an inflated heuristic, each search iteration in ARA* is guaranteed not to expand states more than once. Moreover, it also does not expand states whose g-values before a call to the ImprovePath function have already been correctly computed by some previous search iteration, unless they are already in the set of locally inconsistent states and thus need to update their neighbors (propagate local inconsistency).

3 Experimental Study

3.1 Robotic Arm

We first evaluate the performance of ARA* on simulated 6 and 20 degree of freedom (DOF) robotic arms (Figure 4).
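The greedy-path definition used in Theorem 1 can be sketched directly (a hedged Python sketch; `pred` maps each state to its list of predecessors and `cost` plays the role of c):

```python
def greedy_path(g, pred, cost, start, s):
    """Trace the greedy path from start to s backward: at each state s_i pick
    the predecessor minimizing g(s') + c(s', s_i), then reverse the result."""
    path = [s]
    while path[-1] != start:
        cur = path[-1]
        best = min(pred[cur], key=lambda p: g[p] + cost(p, cur))
        path.append(best)
    return list(reversed(path))
```

With consistent g-values this recovers a path whose cost is bounded by g(s), which is what Theorem 1 asserts.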
The base of the arm is fixed, and the task is to move its end-effector to the goal while navigating around obstacles (indicated by grey rectangles). An action is defined as a change of a global angle of any particular joint (i.e., the next joint further along the arm rotates in the opposite direction to maintain the global angle of the remaining joints). We discretize the workspace into 50 by 50 cells and compute a distance from each cell to the cell containing the goal, taking into account that some cells are occupied by obstacles. This distance is our heuristic. In order for the heuristic not to overestimate true costs, joint angles are discretized so as to never move the end-effector by more than one cell in a single action. The resulting state-space is over 3 billion states for a 6 DOF robot arm and over 10^26 states for a 20 DOF robot arm, and memory for states is allocated on demand.

Figure 4: Top row: 6D robot arm experiments ((a) 6D arm trajectory for ϵ = 3; (b) uniform costs; (c) non-uniform costs). Bottom row: 20D robot arm experiments ((d) both Anytime A* and A*, after 90 secs, cost=682, ϵ′=15.5; (e) ARA*, after 90 secs, cost=657, ϵ′=14.9; (f) non-uniform costs). The trajectories shown are downsampled by 6. Anytime A* is the algorithm in [8].

Figure 4a shows the planned trajectory of the robot arm after the initial search of ARA* with ϵ = 3.0. This search takes about 0.05 secs. (By comparison, a search for an optimal trajectory is infeasible as it runs out of memory very quickly.) The plot in Figure 4b shows that ARA* improves both the quality of the solution and the bound on its sub-optimality faster and in a more gradual manner than either a succession of A* searches or Anytime A* [8]. In this experiment ϵ is initially set to 3.0 for all three algorithms. For all the experiments in this section ϵ is decreased in steps of 0.02 (2% sub-optimality) for ARA* and the succession of A* searches.
Anytime A* does not control ϵ, and in this experiment it apparently performs a lot of computation that results in a large decrease of ϵ at the end. On the other hand, it does reach the optimal solution first this way. To evaluate the expense of the anytime property of ARA* we also ran ARA* and an optimal A* search in a slightly simpler environment (for the optimal search to be feasible). Optimal A* search required about 5.3 mins (2,202,666 states expanded) to find an optimal solution, while ARA* required about 5.5 mins (2,207,178 states expanded) to decrease ϵ in steps of 0.02 from 3.0 until a provably optimal solution was found (about 4% overhead). While in the experiment for Figure 4b all the actions have the same cost, in the experiment for Figure 4c actions have non-uniform costs: changing a joint angle closer to the base is more expensive than changing a higher joint angle. As a result of the non-uniform costs our heuristic becomes less informative, and so the search is much more expensive. In this experiment we start with ϵ = 10, and run all algorithms for 30 minutes. At the end, ARA* achieves a solution with a substantially smaller cost (200 vs. 220 for the succession of A* searches and 223 for Anytime A*) and a better sub-optimality bound (3.92 vs. 4.46 for both the succession of A* searches and Anytime A*). Also, since ARA* controls ϵ, it decreases the cost of the solution gradually. Reading the graph differently, ARA* reaches a sub-optimality bound of ϵ′ = 4.5 after about 59 thousand expansions and 11.7 secs, while the succession of A* searches reaches the same bound after 12.5 million expansions and 27.4 minutes (about a 140-fold speedup by ARA*) and Anytime A* reaches it after over 4 million expansions and 8.8 minutes (over a 44-fold speedup by ARA*). Similar results hold when comparing the amount of work each of the algorithms spends on obtaining a solution of cost 225.
While Figure 4 shows execution time, the comparison of states expanded (not shown) is almost identical. Additionally, to demonstrate the advantage of ARA* expanding each state no more than once per search iteration, we compare the first searches of ARA* and Anytime A*: the first search of ARA* performed 6,378 expansions, while Anytime A* performed 8,994 expansions, mainly because some of the states were expanded up to seven times before a first solution was found.

Figure 5: Outdoor robot navigation experiment (cross shows the position of the robot): (a) robot with laser scanner; (b) 3D map; (c) optimal 2D search; (d) optimal 4D search with A*, after 25 secs; (e) 4D search with ARA*, after 0.6 secs (ϵ = 2.5); (f) 4D search with ARA*, after 25 secs (ϵ = 1.0).

Figures 4d-f show the results of experiments done on a 20 DOF robot arm, with actions that have non-uniform costs. All three algorithms start with ϵ = 30. Figures 4d and 4e show that in 90 seconds of planning the cost of the trajectory found by ARA* and the sub-optimality bound it can guarantee are substantially smaller than for the other algorithms. For example, the trajectory in Figure 4d contains more steps and also makes one extra change in the angle of the third joint from the base of the arm (despite the fact that changing lower joint angles is very expensive) in comparison to the trajectory in Figure 4e. The graph in Figure 4f compares the performance of the three algorithms on twenty randomized environments similar to the environment in Figure 4d. The environments had random goal locations, and the obstacles were slid to random locations along the outside walls. The graph shows the additional time the other algorithms require to achieve the same sub-optimality bound that ARA* does. To make the results from different environments comparable we normalize the bound by dividing it by the maximum of the best bounds that the algorithms achieve before they run out of memory.
Averaging over all environments, the time for ARA* to achieve the best bound was 10.1 secs. Thus, the difference of 40 seconds at the end of the Anytime A* graph corresponds to an overhead of about a factor of 4.

3.2 Outdoor Robot Navigation

For us the motivation for this work was efficient path-planning for mobile robots in large outdoor environments, where optimal trajectories involve fast motion and sweeping turns at speed. In such environments it is particularly important to take advantage of the robot’s momentum and find dynamic rather than static plans. We use a 4D state space: xy position, orientation, and velocity. High dimensionality and large environments result in very large state-spaces for the planner and make it computationally infeasible for the robot to plan optimally every time it discovers new obstacles or modelling errors. To solve this problem we built a two-level planner: a 4D planner that uses ARA*, and a fast 2D (x, y) planner that uses A* search and whose results serve as the heuristic for the 4D planner.¹

¹To interleave search with the execution of the best plan so far we perform the 4D search backward. That is, the start of the search, sstart, is the actual goal state of the robot, while the goal of the search, sgoal, is the current state of the robot. Thus, sstart does not change as the robot moves and the search tree remains valid in between search iterations. Since the heuristics estimate distances to sgoal (the robot position), we have to recompute them during the reorder operation (line 10’, Figure 3).

In Figure 5 we show the robot we used for navigation and a 3D laser scan [3] constructed by the robot of the environment we tested our system in. The scan is converted into a map of the environment (Figure 5c, obstacles shown in black). The size of the environment is 91.2 by 94.4 meters, and the map is discretized into cells of 0.4 by 0.4 meters. Thus, the 2D state-space consists of 53,808 states.
The 4D state space has over 20 million states. The robot’s initial state is the upper circle, while its goal is the lower circle. To ensure safe operation we created a buffer zone with high costs around each obstacle. The squares in the upper-right corners of the figures show a magnified fragment of the map with grayscale proportional to cost. The 2D plan (Figure 5c) makes sharp 45 degree turns when going around the obstacles, requiring the robot to come to complete stops. The optimal 4D plan results in a wider turn, and the velocity of the robot remains high throughout the whole trajectory. In the first plan computed by ARA* starting at ϵ = 2.5 (Figure 5e) the trajectory is much better than the 2D plan, but somewhat worse than the optimal 4D plan. The time required for the optimal 4D planner was 11.196 secs, whereas the time for the 4D ARA* planner to generate the plan in Figure 5e was 556ms. As a result, the robot that runs ARA* can start executing its plan much earlier. A robot running the optimal 4D planner would still be near the beginning of its path 25 seconds after receiving a goal location (Figure 5d). In contrast, in the same amount of time the robot running ARA* has advanced much further (Figure 5f), and its plan by now has converged to optimal (ϵ has decreased to 1). 4 Conclusions We have presented the first anytime heuristic search that works by continually decreasing a sub-optimality bound on its solution and finding new solutions that satisfy the bound on the way. It executes a series of searches with decreasing sub-optimality bounds, and each search tries to reuse as much as possible of the results from previous searches. The experiments show that our algorithm is much more efficient than any of the previous anytime searches, and can successfully solve large robotic planning problems. Acknowledgments This work was supported by AFRL contract F30602–01–C–0219, DARPA’s MICA program. References [1] B. Bonet and H. Geffner. Planning as heuristic search. 
Artificial Intelligence, 129(1-2):5–33, 2001.
[2] T. L. Dean and M. Boddy. An analysis of time-dependent planning. In Proc. of the National Conference on Artificial Intelligence (AAAI), 1988.
[3] D. Haehnel. Personal communication, 2003.
[4] S. Koenig and M. Likhachev. Incremental A*. In Advances in Neural Information Processing Systems (NIPS) 14. Cambridge, MA: MIT Press, 2002.
[5] R. E. Korf. Linear-space best-first search. Artificial Intelligence, 62:41–78, 1993.
[6] M. Likhachev, G. Gordon, and S. Thrun. ARA*: Formal Analysis. Tech. Rep. CMU-CS-03-148, Carnegie Mellon University, Pittsburgh, PA, 2003.
[7] J. Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, 1984.
[8] R. Zhou and E. A. Hansen. Multiple sequence alignment using anytime A*. In Proc. of the National Conference on Artificial Intelligence (AAAI), 2002. Student abstract.
[9] S. Zilberstein and S. Russell. Approximate reasoning using anytime algorithms. In Imprecise and Approximate Computation. Kluwer Academic Publishers, 1995.
Salient Boundary Detection using Ratio Contour Song Wang, Toshiro Kubota Dept. Computer Science & Engineering University of South Carolina Columbia, SC 29208 {songwang|kubota}@cse.sc.edu Jeffrey Mark Siskind School Electrical & Comput. Engr. Purdue University West Lafayette, IN 47906 qobi@purdue.edu Abstract This paper presents a novel graph-theoretic approach, named ratio contour, to extract perceptually salient boundaries from a set of noisy boundary fragments detected in real images. The boundary saliency is defined using the Gestalt laws of closure, proximity, and continuity. This paper first constructs an undirected graph with two different sets of edges: solid edges and dashed edges. The weights of solid and dashed edges measure the local saliency in and between boundary fragments, respectively. Then the most salient boundary is detected by searching for an optimal cycle in this graph with minimum average weight. The proposed approach guarantees the global optimality without introducing any biases related to region area or boundary length. We collect a variety of images for testing the proposed approach with encouraging results. 1 Introduction Human vision and neural systems possess very strong capabilities of identifying salient structures from various images. Implementing such capabilities on a computer is an important but extremely challenging problem for artificial intelligence, computer vision, and machine learning. The main challenges come from two closely related aspects: (a) the definition of the structural saliency, and (b) the design of efficient algorithms for finding the salient structures. On one hand, we expect very comprehensive and advanced definitions of the saliency so that it models accurately the human perceptual and visual process. On the other hand, we expect simple definitions of saliency so that the global optimum can be found in polynomial time. Previous methods for salient-structure detection can be grouped into two classes. 
The first class of methods aims to directly group or segment all the image pixels into some disjoint regions, which are expected to coincide with the underlying salient structures. Earlier efforts include region-merging/splitting methods, watershed methods, and active-contour-like methods. Those methods usually have difficulty finding the globally optimal boundaries in terms of the selected saliency definitions. Recently we have witnessed some advanced methods, like ratio region [5], minimum cut [17], normalized cut [14], globally optimal region/cycle [9], and ratio cut [15], which aim to produce globally optimal boundaries. However, those pixel-grouping based methods usually have difficulty effectively incorporating perceptual rules, such as boundary smoothness, into their saliency definitions. Instead of operating directly on the image pixels, another class of methods is designed based on some pre-extracted boundary fragments (or for brevity, fragments)¹, which can be obtained using standard edge-detection methods like Canny detectors. As shown in Fig. 1(a), although those fragments are disconnected and contain serious noise, they provide abundant information on boundary length, tangent directions, and curvatures, which can greatly facilitate the incorporation of advanced perceptual rules like boundary smoothness. Shashua and Ullman [13] present a parallel network model for detecting salient boundaries based on fragment proximity, boundary length, and boundary smoothness. Recent developments in this class include Alter and Basri [2], Jacobs [8], Sarkar and Boyer [12], Guy and Medioni [7], Williams and Thornber [16, 11], and Amir and Lindenbaum [3]. However, many of them still have difficulty finding closed boundaries that are globally optimal with respect to the given boundary-saliency measure. Elder and Zucker [6] use the shortest-path algorithm to connect fragments to form salient closed boundaries.
However, the results are biased toward boundaries with shorter length. This paper presents a new graph-based approach to extracting salient closed boundaries from a set of fragments detected in real images. This approach seeks a good balance between the complexity of the saliency definition and the complexity of the optimization algorithm. The boundary saliency is based on the well-known Gestalt laws of closure, proximity, and continuity. To avoid the various biases present in Elder and Zucker [6], this paper defines the boundary saliency as the average saliency along the whole boundary. We finally formulate the salient-boundary detection problem as the problem of finding an optimal cycle in an undirected graph. We show that this problem has polynomial time-complexity and give an algorithm to solve it. The proposed algorithm is then tested on a variety of real images.

2 Problem Formulation

Figure 1: An illustration of detecting salient boundaries from some fragments. (a) Boundary fragments, (b) salient boundary by connecting some fragments with dashed curves, (c) a solid-dashed graph, and (d) an alternate cycle in (c).

The basic primitives in the ratio-contour approach are a set of noisy (boundary) fragments extracted by edge detection. For simplicity, here we assume each detected fragment is a continuous open curve segment with two endpoints, as shown in Fig. 1(a). Our goal is to identify and connect a subset of fragments to form the most salient structural boundary as shown in Fig. 1(b). In this paper, we measure the boundary saliency using simple Gestalt laws of closure, proximity, and continuity. Closure means that the salient boundary must be a closed contour. Proximity implies that we desire relatively small gaps (dashed curves in Fig. 1(b)) in connecting the fragments. Continuity indicates that the resulting contour should be continuous and sufficiently smooth. Let the parametric form of a boundary B be v(t), 0 ≤ t ≤ 1.
We have v(0) = v(1) as the boundary is closed. Considering the boundary proximity and continuity, we define its cost, which is negatively related to the boundary saliency, as

R(B) ≜ T(B)/L(B) = ∫_B [σ(t) + λ · κ²(t)] dt / ∫_B dt,   (1)

where σ(t) = 1 if v(t) is in a gap and σ(t) = 0 otherwise, and κ(t) is the curvature at v(t). We can see that the un-normalized cost T(B) combines the total gap-length and curvature along the boundary B and is biased toward producing a short boundary. This issue is addressed in (1) by normalizing T(B) by the boundary length L(B). The most salient boundary B is then the one with the minimum cost R(B). The parameter λ > 0 is set to balance the weight between proximity and continuity.

¹Most of the literature uses the terminology edge instead of fragment. However, in this paper edge has another specific meaning, in a graph model.

We can formulate the above cost in an undirected graph G = (V, E) with vertices V = {v1, v2, ..., vn} and edges E = {e1, e2, ..., em}. A unique vertex is constructed from each fragment endpoint. Two different kinds of edges, solid edges and dashed edges, are constructed between vertices. (a) If vi and vj correspond to the two endpoints of the same fragment, we construct a solid edge between vi and vj to model this fragment. (b) Between each possible vertex pair vi and vj, we construct a dashed edge to model the gap, or a virtual fragment (dashed curves in Fig. 1(b)). An example is shown in Fig. 1(c), which is made up of 3 solid edges for three fragments and all 15 possible dashed edges. For clarity, sometimes we call a boundary fragment a real fragment when both real and virtual fragments are involved. The constructed graph always has an even number of vertices, as each real fragment has two endpoints. More interestingly, no two solid edges are incident to the same vertex, and each vertex has exactly one incident solid edge. We name such a graph an (undirected) solid-dashed graph.
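For intuition, the normalized cost (1) can be evaluated numerically on a sampled boundary (our own discretization sketch, not the paper's implementation; the sample spacings `ds`, gap indicators `sigma`, and curvature samples `kappa` are assumed given):

```python
import numpy as np

def boundary_cost(sigma, kappa, ds, lam=1.0):
    """Discrete version of cost (1): R(B) = T(B)/L(B), with
    T(B) = sum((sigma + lam*kappa**2) * ds) and L(B) = sum(ds).
    sigma: 1 on gap samples, 0 on real-fragment samples;
    kappa: curvature samples; ds: per-sample arc lengths."""
    sigma = np.asarray(sigma, dtype=float)
    kappa = np.asarray(kappa, dtype=float)
    ds = np.asarray(ds, dtype=float)
    T = np.sum((sigma + lam * kappa**2) * ds)  # un-normalized cost T(B)
    L = np.sum(ds)                             # boundary length L(B)
    return T / L
```

For a gap-free circle of radius r the constant curvature is 1/r, so the cost evaluates to 1/r² regardless of the circle's length, which illustrates why the normalization removes the short-boundary bias.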
We further define an alternate cycle in a solid-dashed graph as a simple cycle that traverses solid edges and dashed edges alternately. Examples of a solid-dashed graph and an alternate cycle are given in Fig. 1(c) and (d), respectively. Since a boundary always traverses real fragments and virtual fragments alternately, it can be described by an alternate cycle. Note that not all cycles in a solid-dashed graph are alternate cycles, because two adjacent dashed edges can appear sequentially in the same cycle. According to the cost function (1), we define a weight function w(e) and a length function l(e) for each edge e. For convenience, we define B(e) as a function that gives the (real or virtual) fragment corresponding to an edge e. Then the weight w(e) ≜ T(B(e)) = ∫_{B(e)} [σ(t) + λ · κ²(t)] dt is the un-normalized cost of B(e). The edge length l(e) is defined as the length of B(e). We can see that the most salient boundary with minimum cost (1) corresponds to an alternate cycle C with minimum cycle ratio

CR(C) = Σ_{e∈C} w(e) / Σ_{e∈C} l(e).

Fragments extracted from real images usually contain noise, intersections, and even some closed curves, which cause difficulties in estimating the curve length, curvature, and therefore the edge weight and length. We will describe a spline-based method to address this problem in Section 4. In the following, we first present a polynomial-time algorithm to identify the alternate cycle with the minimum cycle ratio CR(C).

3 Ratio-Contour Algorithm

For simplicity, we denote the alternate cycle with minimum cycle ratio as the MRA (Minimum Ratio Alternate) cycle. In this section, we introduce a graph algorithm for finding the MRA cycle in polynomial time. This algorithm consists of three reductions. (a) Both the weight and edge length of the solid edges can be set to zero by merging them into the weight and length of their adjacent dashed edges, without changing the underlying MRA cycle.
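The cycle-ratio criterion can be checked by brute force over explicitly enumerated candidate cycles (a hypothetical helper for illustration only; the algorithm of Section 3 avoids such enumeration, which is exponential in general):

```python
def min_ratio_cycle(cycles, w, l):
    """Among candidate alternate cycles (each a list of edge keys), return
    the one minimizing CR(C) = sum_{e in C} w(e) / sum_{e in C} l(e)."""
    def ratio(C):
        return sum(w[e] for e in C) / sum(l[e] for e in C)
    return min(cycles, key=ratio)
```

Note that the ratio is not additive over edges, which is exactly why a shortest-path-style relaxation does not apply and the parametric reduction of Section 3.2 is needed.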
(b) The problem of finding an MRA cycle can be reduced to a problem of detecting a negative-weight alternate (NWA) cycle in the same graph. (c) Finding NWA cycles in a solid-dashed graph with zero solid-edge weights and zero solid-edge lengths can be reduced to finding a minimum-weight perfect matching (MWPM) in the same graph. Finding an MWPM has been shown to be of polynomial-time complexity, with various efficient algorithms available.

3.1 Setting Zero Weight and Zero Length for Solid Edges

As illustrated in Fig. 2(a) and (b), each solid edge e can only be adjacent to a set of dashed edges, say {e1, e2, ..., eK}, in a solid-dashed graph, and no two solid edges are adjacent to each other. Therefore, we can directly merge the solid-edge weight and length into its adjacent dashed edges by

w(ek) ← w(ek) + w(e)/Nk,   l(ek) ← l(ek) + l(e)/Nk,   k = 1, 2, ..., K,

where Nk = 2 if ek shares one vertex with e, as in Fig. 2(a), and Nk = 1 if ek shares both vertices with e, as in Fig. 2(b). Then we reset the weight and length of this solid edge to zero, i.e., w(e) = 0, l(e) = 0. This merging process is performed on all solid edges. Since solid and dashed edges are traversed alternately in an alternate cycle, it is not difficult to reach the following conclusion.

Lemma 3.1 The above processing of edge weights and edge lengths does not change the cycle ratio of any alternate cycle.

Figure 2: An illustration of reductions in the ratio-contour algorithm. (a) Merging the weight and length of a solid edge into its adjacent dashed edges. (b) A special case for weight merging. (c) A perfect matching in a solid-dashed graph. (d) Derived cycle from the perfect matching shown in (c).

3.2 Reducing to Negative-Alternate-Cycle Detection

The following lemma claims that MRA cycles are invariant to some more general linear edge-weight transforms.
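The merging step of Section 3.1 can be sketched as follows (a sketch with our own data layout; edges are plain dictionary keys, and `dashed_neighbors` supplies each adjacent dashed edge's Nk):

```python
def absorb_solid_edge(w, l, solid, dashed_neighbors):
    """Merge a solid edge's weight and length into its adjacent dashed
    edges and zero it out (reduction (a)). dashed_neighbors maps each
    adjacent dashed edge to N_k: 2 if it shares one vertex with the
    solid edge, 1 if it shares both."""
    for ek, Nk in dashed_neighbors.items():
        w[ek] += w[solid] / Nk
        l[ek] += l[solid] / Nk
    w[solid] = 0.0
    l[solid] = 0.0
```

Since every alternate cycle through a solid edge uses exactly one dashed edge at each of its two endpoints, each such cycle picks up the full solid-edge weight after merging, which is why cycle ratios are preserved (Lemma 3.1).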
Lemma 3.2 The MRA cycle in a solid-dashed graph G = (V, E) is invariant to the following linear transform on the edge weights:

w(e) ← w(e) − b · l(e),  ∀e ∈ E.   (2)

The proof of this lemma is similar to the one we gave for the general ratio-cycle detection problem [15]. Notice that all the edge lengths are non-negative. There always exists an optimal b = b* such that after the weight transform (2), the MRA cycle has a cycle ratio of zero. In this case, the MRA cycle is the same as the alternate cycle with total edge weight of zero. The detection of the optimal b* and the MRA cycle can then be reduced to the problem of finding an NWA (negative-weight alternate) cycle. Basically, if we can detect an NWA cycle after the edge-weight transform (2), we know b > b*. Otherwise, we know that b ≤ b*. With an NWA-cycle detection algorithm, we can use binary or sequential search to locate the optimal b* and the desired MRA cycle. This search process is polynomial if all the edge weights are integers [15]. In addition, with the first reduction mentioned in Section 3.1, it is easy to see that the linear transform (2) always preserves zero weight and zero length for all solid edges in this search process.

3.3 Reducing to Minimum-Weight Perfect Matching

The problem of detecting an NWA cycle in a solid-dashed graph can be reduced to the problem of finding a minimum-weight perfect matching (MWPM) in the same graph. A perfect matching in G denotes a subgraph that contains all the vertices in G while each vertex has only one incident edge. An example is shown in Fig. 2(c), where three thick edges together with their vertices form a perfect matching. The MWPM is the perfect matching with minimum total edge weight. As all the solid edges form a trivial perfect matching with total weight zero, the MWPM in our solid-dashed graph must have non-positive total weight.
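The binary search over b described above can be sketched against an abstract NWA-cycle oracle (all names are ours; the oracle itself, realized via perfect matching in Section 3.3, is assumed given):

```python
def find_b_star(has_nwa_cycle, b_lo=0.0, b_hi=1e6, tol=1e-6):
    """Binary search for b*: after the transform w(e) <- w(e) - b*l(e),
    an NWA cycle exists iff b > b*. has_nwa_cycle(b) is the oracle
    (e.g., via a minimum-weight perfect matching computation)."""
    while b_hi - b_lo > tol:
        mid = 0.5 * (b_lo + b_hi)
        if has_nwa_cycle(mid):
            b_hi = mid      # overshoot: b* lies below mid
        else:
            b_lo = mid      # no negative cycle yet: b* is at or above mid
    return 0.5 * (b_lo + b_hi)
```

With integer edge weights the search terminates exactly after polynomially many oracle calls, as noted in the text; the tolerance-based loop here is the continuous analogue.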
We can construct a set of cycles from a perfect matching P by (a) removing from P all the solid edges and their endpoints, and (b) adding to P any solid edges in the solid-dashed graph G whose two endpoints are still in P after the removal in (a). The remaining subgraph must consist of a set of cycles, because each remaining vertex has two incident edges: one solid and one dashed. This also confirms that all the resulting cycles are alternate cycles. An example of this reduction is shown in Fig. 2(d), which is constructed from (c). As all the solid edges have zero weight and zero length, it is not difficult to see that the total weight of the perfect matching is the same as the total weight of the resulting cycles. Therefore, the NWA-detection problem is reduced to the problem of finding a perfect matching with negative total weight. This is the same as the problem of finding the MWPM, which is of polynomial-time complexity [1].

4 Edge-Weight and Edge-Length Functions

We need to estimate the curvature and length of both real and virtual fragments to define w(e) and l(e) of solid and dashed edges. To deal with the noise and aliasing in detected fragments, we impose a pre-smoothing process on those fragments. In this paper, we approximate a fragment by a set of quadratic splines with the parametric form

(x_i(t_i), y_i(t_i))ᵀ = (x_i, y_i)ᵀ + [A_i B_i; C_i D_i] (t_i², t_i)ᵀ,

where 0 ≤ t_i ≤ 1 is the parameter for the spline. We developed an iterative algorithm [10] to estimate the optimal parameters x_i, y_i, A_i, B_i, C_i, and D_i by minimizing a comprehensive cost function that measures smoothness, under the constraint of C⁰ and C¹ continuity across the fragment. An example is illustrated in Fig. 3, where the solid curves in (a) and (b) are fragments before and after smoothing. More discussion and analysis of this curve-smoothing method can be found in our previous work [10].
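The cycle construction from a perfect matching can be sketched as follows (our own representation: each edge is a frozenset of its two endpoints, and the matching is assumed perfect):

```python
def cycles_from_matching(matching, solid_edges):
    """Build the alternate-cycle edge set from a perfect matching: drop
    matched solid edges together with their endpoints, then add back the
    solid edges joining the remaining (dashed-matched) vertices
    (reduction of Section 3.3)."""
    dashed_matched = [e for e in matching if e not in solid_edges]
    kept = set().union(*dashed_matched) if dashed_matched else set()
    added_solid = [e for e in solid_edges if e <= kept]
    return dashed_matched + added_solid   # every kept vertex has degree 2
```

Each kept vertex ends up with one dashed edge (from the matching) and one solid edge (added back), so the result decomposes into alternate cycles; the trivial all-solid matching yields no cycles at all.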
With the parametric form of quadratic splines, the total length and the total curvature along a real fragment can be computed by summing, over the splines, each spline's length and total squared curvature:

li = ∫₀¹ √((2Ai t + Bi)² + (2Ci t + Di)²) dt,

∫₀¹ κi²(t) dt = ∫₀¹ 4(Ai Di − Bi Ci)² / [(2Ai t + Bi)² + (2Ci t + Di)²]³ dt,

where li is the length and κi(t) is the curvature function of the ith spline. However, estimating these quantities for a virtual fragment is not trivial, as no information is given on what the virtual fragment should look like. We take the following approach to compute the dashed-edge weight. First, the pair of endpoints forming a particular dashed edge is connected with a straight line. Then a new curve segment is constructed by connecting this straight line with the adjacent fragments. The smoothing process described above is applied to this new curve segment. In the smoothed curve segment, the virtual fragment is the part corresponding to the straight line before smoothing. The dashed curve in Fig. 3(b) shows a resulting virtual fragment used for estimating curvature, length, and finally the edge weight.

Figure 3: An illustration of the edge-weight estimation process. (a) Two noisy fragments. (b) Smoothed real fragments and an estimated virtual fragment. (c) Fragments obtained by the Canny detector. (d) Smoothed fragments after breaking undesired connections, corresponding to the boxed portion of (c). Crossings specify the endpoints and breaking points.

In a real implementation, another issue is that the fragments detected by edge detectors may not be disjoint open curves, as assumed in Section 2. It is common that some fragments are connected in the form of intersections, attachments, and even undesired closures, as shown in Fig. 4. Therefore, we need to break those connections to construct the graph model. First, we identify the intersection points and split them to get multiple open fragments. An example is shown in Fig.
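The length integral above has no simple closed form in general, so it is natural to evaluate it numerically. The sketch below (function name and quadrature choice are our own, not the paper's) uses the composite trapezoid rule and checks the degenerate straight-line case Ai = Ci = 0, where the integral reduces to √(Bi² + Di²):

```python
import math

def spline_length(A, B, C, D, n=1000):
    """Numerically evaluate l = int_0^1 sqrt((2At+B)^2 + (2Ct+D)^2) dt
    for one quadratic spline, via the composite trapezoid rule."""
    def speed(t):
        # |d/dt (x(t), y(t))| for x = x0 + A t^2 + B t, y = y0 + C t^2 + D t
        return math.hypot(2 * A * t + B, 2 * C * t + D)
    h = 1.0 / n
    total = 0.5 * (speed(0.0) + speed(1.0))
    for k in range(1, n):
        total += speed(k * h)
    return total * h

# Sanity check: with A = C = 0 the spline is the straight segment
# (x, y) = (x0 + B t, y0 + D t), whose length is sqrt(B^2 + D^2).
straight = spline_length(0.0, 3.0, 0.0, 4.0)  # ~5.0
```

The squared-curvature integral can be handled the same way, integrating 4(AD − BC)² / speed(t)⁶ over [0, 1].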
4(a) and (d), where an intersection point is broken into three endpoints. In the constructed graph, these endpoints (u1, u2, and u3) are connected by dashed edges with zero weight and zero length. Attachment is the case where two fragments are undesirably connected into a single fragment, as shown in Fig. 4(b). This greatly hurts the reliability of salient-boundary detection, as attached fragments may exclude many desired dashed edges from the graph. We alleviate this problem by splitting all the fragments at their high-curvature points, as illustrated in Figs. 4(b) and (e). Similarly, we can break closed fragments into open fragments at high-curvature points, as shown in Figs. 4(c) and (f). Note that the identification of high-curvature points requires smoothing of the noisy fragments; we apply the same smoothing technique described above to each fragment for this purpose. Figures 3(c) and (d) show an example of dealing with these special cases.

5 Experiments and Discussion

In this section, we test the proposed ratio-contour algorithm by extracting salient boundaries from real images. For initial fragment detection, we use the standard Canny edge detector in Matlab with its default threshold settings. We also adopt the Blossom4 implementation [4] of minimum-weight perfect matching. One problem in the implementation is the construction of dashed edges, which may be very numerous (O(n²)) if we connect every pair of vertices. In this paper, we constrain proximity to reduce the number of dashed edges: for each vertex, we keep only a certain number of incident dashed edges with the smallest lengths.

Figure 4: An illustration of fragment identification and graph construction in some special cases.
(a), (b), and (c) show the detected fragments with intersections, attachments, and closures. (d), (e), and (f) are the graphs constructed from (a), (b), and (c), respectively.

This number is uniformly set to 20 in all experiments. Meanwhile, we set the parameter λ = 50 in the edge-weight definition. Figure 5 shows the salient boundaries detected from seven real images, together with the initial fragments from the Canny detector. It can be seen that the proposed method integrates the Gestalt laws of proximity, continuity, and closure well.

Figure 5: Salient boundaries detected from some real images using the proposed ratio-contour algorithm. Each subfigure from (a) to (g) contains three images. Left: original image; middle: Canny detection result; right: the detected most salient boundary.

6 Conclusions

We have presented a novel graph-theoretic approach, named ratio contour, for extracting perceptually salient boundaries from a set of noisy boundary fragments detected in real images. The approach guarantees global optimality without introducing any bias related to region area or boundary length, and exhibits promising performance in extracting salient objects from real cluttered images. One potential extension of this research is to extract multiple salient objects that overlap or share parts of their boundaries by performing the ratio-contour algorithm iteratively. We are currently investigating this extension and plan to report the results in the future.

Acknowledgements

The authors would like to thank David Jacobs and the anonymous reviewers for important comments. This work was funded, in part, by National Science Foundation grant EIA-0312861 and the USC SOM-COEIT research development fund.

References

[1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, Englewood Cliffs, 1993.
[2] T. Alter and R. Basri.
Extracting salient contours from images: An analysis of the saliency network. In IEEE Conference on Computer Vision and Pattern Recognition, pages 13–20, 1996.
[3] A. Amir and M. Lindenbaum. A generic grouping algorithm and its quantitative analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(2):168–185, 1998.
[4] W. Cook and A. Rohe. Computing minimum-weight perfect matchings. http://www.or.uni-bonn.de/home/rohe/matching.html, Aug. 1998.
[5] I. Cox, S. B. Rao, and Y. Zhong. Ratio regions: A technique for image segmentation. In International Conference on Pattern Recognition, pages 557–564, 1996.
[6] J. Elder and S. Zucker. Computing contour closure. In European Conference on Computer Vision, pages 399–412, 1996.
[7] G. Guy and G. Medioni. Inferring global perceptual contours from local features. International Journal of Computer Vision, 20(1):113–133, 1996.
[8] D. Jacobs. Robust and efficient detection of convex groups. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(1):23–37, 1996.
[9] I. H. Jermyn and H. Ishikawa. Globally optimal regions and boundaries as minimum ratio cycles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10):1075–1088, 2001.
[10] T. Kubota. Contextual and non-combinatorial approach to feature extraction. In Int'l Workshop on EMMCVPR, pages 467–482, 2003.
[11] S. Mahamud, L. R. Williams, K. K. Thornber, and K. Xu. Segmentation of multiple salient closed contours from real images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(4):433–444, 2003.
[12] S. Sarkar and K. Boyer. Quantitative measures of change based on feature organization: Eigenvalues and eigenvectors. In IEEE Conference on Computer Vision and Pattern Recognition, pages 478–483, 1996.
[13] A. Shashua and S. Ullman. Structural saliency: The detection of globally salient structures using a locally connected network. In International Conference on Computer Vision, pages 321–327, 1988.
[14] J. Shi and J.
Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[15] S. Wang and J. M. Siskind. Image segmentation with ratio cut. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(6):675–690, 2003.
[16] L. Williams and K. K. Thornber. A comparison of measures for detecting natural shapes in cluttered backgrounds. International Journal of Computer Vision, 34(2/3):81–96, 2000.
[17] Z. Wu and R. Leahy. An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11):1101–1113, 1993.
2003
A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning

Maneesh Sahani
W.M. Keck Foundation Center for Integrative Neuroscience, University of California, San Francisco, CA 94143-0732
maneesh@phy.ucsf.edu

Abstract

Significant plasticity in sensory cortical representations can be driven in mature animals either by behavioural tasks that pair sensory stimuli with reinforcement, or by electrophysiological experiments that pair sensory input with direct stimulation of neuromodulatory nuclei, but usually not by sensory stimuli presented alone. Biologically motivated theories of representational learning, however, have tended to focus on unsupervised mechanisms, which may play a significant role on evolutionary or developmental timescales, but which neglect this essential role of reinforcement in adult plasticity. By contrast, theoretical reinforcement learning has generally dealt with the acquisition of optimal policies for action in an uncertain world, rather than with the concurrent shaping of sensory representations. This paper develops a framework for representational learning which builds on the relative success of unsupervised generative-modelling accounts of cortical encodings to incorporate the effects of reinforcement in a biologically plausible way.

1 Introduction

A remarkable feature of the brain is its ability to adapt to, and learn from, experience. This learning has measurable physiological correlates in terms of changes in the stimulus-response properties of individual neurons in the sensory systems of the brain (as well as in many other areas). While passive exposure to sensory stimuli can have profound effects on the developing sensory cortex, significant plasticity in mature animals tends to be observed only in situations where sensory stimuli are associated with either behavioural or electrical reinforcement.
Considerable theoretical attention has been paid to unsupervised learning of representations adapted to natural sensory statistics, and to the learning of optimal policies of action for decision processes; however, relatively little work (particularly of a biological bent) has sought to understand the impact of reinforcement tasks on representation. To be complete, understanding of sensory plasticity must come at two different levels. At a mechanistic level, it is important to understand how synapses are modified, and how synaptic modifications can lead to observed changes in the response properties of cells. Numerous experiments and models have addressed these questions of how sensory plasticity occurs. However, a mechanistic description alone neglects the information-processing aspects of the brain’s function. Measured changes in sensory representation must underlie an adaptive change in neural information processing. If we can understand the processing goals of sensory systems, and therefore understand how changes in representation advance these goals in the face of changing experience, we will have shed light on the question of why sensory plasticity occurs. This is the goal of the current work. To approach this goal, we first construct a representational model and associated objective function which together isolate the question of how the reinforcement-related value of a stimulus is learned (the classic problem of reinforcement learning) from the question of how this value impacts the sensory representation. We show that the objective function can be optimised by an expectation-maximisation learning procedure, but suggest that direct optimisation is not biologically plausible, relying as it does on the availability of an exact posterior distribution over the cortical representation given both stimulus and reinforcement-value. 
We therefore develop and validate (through simulation) an alternative optimisation approach based on the statistical technique of importance sampling. 2 Model The standard algorithms of reinforcement learning (RL) deal with an agent that receives rewards or penalties as it interacts with a world of known structure and, generally Markovian, dynamics [1]. The agent passes through a series of “states”, choosing in each one an action which results (perhaps stochastically) in a payoff and in a transition to another state. Associated with each state (or state-action pair) and a given policy of action is a value, which represents the expected payoff that would be received if the policy were to be followed starting from that initial state (and initial action). Much work in RL has focused on learning the value function. Often the state that the agent occupies at each point in time is assumed to be directly observable. In other cases, the agent receives only partial information about the state it occupies, although in almost all studies the basic structure of the world is assumed to be known. In these partially observable models, then, the state information (which might be thought of as a form of sensory input) is used to estimate which one of a known group of states is currently occupied, and so a natural representation emerges in terms of a belief-distribution over states. In the general case, however, the state structure of the world, if indeed a division into discrete states makes sense at all, is unknown. Instead, the agent must simultaneously discover a representation of the sensory inputs suitable for predicting the reinforcement value, and learn the action-contingent value function itself. This general problem is quite difficult. In probabilistic terms, solving it exactly would require coping with a complicated joint distribution over representational structures and value functions. 
However, using an analogy to the variational inference methods of unsupervised learning [2], we might modularise our approach by factoring this joint into independent distributions over the sensory representation on the one hand and the value function on the other. In this framework, approximate estimation might proceed iteratively, using the current value function to tune the sensory representation, and then re-estimating the value function for the revised sensory encoding. The present work, being concerned with the way in which reinforcement guides sensory representational learning, focuses exclusively on the first of these two steps. Thus, we take the value associated with the current sensory input to be given. This value might represent a current estimate generated in the course of the iterative procedure described above. In many of the reinforcement schedules used in physiological experiments, however, the value is easily determined. For example, in a classical conditioning paradigm the value is independent of action, and is given by the sum of the current reinforcement and the discounted average reinforcement received. Our problem, then, is to develop a biologically plausible algorithm which is able to find a representation of the sensory input which facilitates prediction of the value. Although our eventual goal clearly fits well in the framework of RL, we find it useful to start from a standard theoretical account of unsupervised representational learning. The view we adopt fits well with a Helmholtzian account of perceptual processing, in which the sensory cortex interprets the activities of receptors so as to infer the state of the external world that must have given rise to the observed pattern of activation. Perception, by this account, may be thought of as a form of probabilistic inference in a generative model.
The general structure of such a model involves a set of latent variables or causes whose values directly reflect the underlying state of the world, along with a parameterisation of effects of these causes on immediate sensory experience. A generative model of visual sensation, for example, might contain a hierarchy of latent variables that, at the top, corresponded to the identities and poses of visual objects or the colour and direction of the illuminating light, and at lower levels, represented local consequences of these more basic causes, for example the orientation and contrast of local edges. Taken together, these variables would provide a causal account for observations that correspond to photoreceptor activation. To apply such a framework as a model for cortical processing, then, we take the sensory cortical activity to represent the inferred values of the latent variables. Thus, perceptual inference in this framework involves estimating the values of the causal variables that gave rise to the sensory input, while developmental (unsupervised) learning involves discovering the correct causal structure from sensory experience. Such a treatment has been used to account for the structure of simple-cell receptive fields in visual cortex [3, 4], and has been extended to further visual cortical response properties in subsequent studies. In the present work our goal is to consider how such a model might be affected by reinforcement. Thus, in addition to the latent causes Li that generate a sensory event Si, we consider an associated (possibly action-contingent) value Vi. This value is presumably more parsimoniously associated with the causes underlying the sensory experience, rather than with the details of the receptor activation, and so we take the sensory input and the corresponding value to be conditionally independent given the cortical representation:

Pθ(Si, Vi) = ∫ dLi Pθ(Si | Li) Pθ(Vi | Li) Pθ(Li),  (1)

where θ is a general vector of model parameters.
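The conditional-independence structure of the model can be checked concretely on a tiny discrete example. All the probability tables below are made-up illustrative numbers, not from the paper; the point is that P(V | S) is recovered exactly by marginalising over the latent cause, P(V | S) = Σ_L P(V | L) P(L | S):

```python
# Hypothetical discrete toy for the model of Eq. (1): S <- L -> V, so
# S and V are conditionally independent given L (illustrative numbers).
P_L = {0: 0.6, 1: 0.4}
P_S_given_L = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P(S|L)
P_V_given_L = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}   # P(V|L)

def joint(s, v):
    # Eq. (1): P(S, V) = sum_L P(S|L) P(V|L) P(L)
    return sum(P_S_given_L[l][s] * P_V_given_L[l][v] * P_L[l] for l in P_L)

def posterior_L(s):
    # P(L|S) by Bayes' rule in the generative model
    p = {l: P_S_given_L[l][s] * P_L[l] for l in P_L}
    z = sum(p.values())
    return {l: p[l] / z for l in p}

# Everything S says about V flows through L.
s = 0
pV_direct = {v: joint(s, v) / sum(joint(s, u) for u in (0, 1)) for v in (0, 1)}
pV_via_L = {v: sum(P_V_given_L[l][v] * posterior_L(s)[l] for l in P_L)
            for v in (0, 1)}
```

This is exactly the Markov-chain property the next paragraph relies on.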
Thus, the variables Si, Li and Vi form a Markov chain. In particular, this means that whatever information Si carries about Vi is expressed (if the model is well-fit) in the cortical representation Li, making this structure appropriate for value prediction. The causal variables Li have taken on the rôle of the "state" in standard RL.

3 Objective function

The natural objective in reinforcement learning is to maximise some form of accumulated reward. However, the model of (1) is, by itself, descriptive rather than prescriptive. That is, the parameters modelled (those determining the responses in the sensory cortex, rather than in associative or motor areas) do not directly control actions or policies of action. Instead, these descriptive parameters only influence the animal's accumulated reinforcement through the accuracy of the description they generate. As a result, even though the ultimate objective may be to maximise total reward, we need to use objective functions that are closer in spirit to the likelihoods common in probabilistic unsupervised learning. In particular, we consider functions of the form

L(θ) = Σi [ α(Vi) log Pθ(Si) + β(Vi) log Pθ(Vi | Si) ].  (2)

In this expression, the two log probabilities reflect the accuracy of stimulus representation, and of value prediction, respectively. These two terms would appear alone in a straightforward representational model of the joint distribution over sensory stimuli and values. However, in considering a representational subsystem within a reinforcement learning agent, where the overall goal is to maximise accumulated reward, it seems reasonable that the demand for representative or predictive fidelity depend on the value associated with the stimulus; this dependence is reflected here by a value-based weighting of the log probabilities, which we assume will weight the more valuable cases more heavily.
4 Learning

While the objective function (2) does not depend explicitly on the cortical representation variables, it does depend on their distributions, through the marginal likelihoods Pθ(Si) = ∫ dLi Pθ(Si, Li) and Pθ(Vi | Si) = ∫ dLi Pθ(Vi, Li | Si). For all but the simplest probabilistic models, optimising these integral expressions directly is computationally prohibitive. However, a standard technique called the Expectation-Maximisation (EM) algorithm can be extended in a straightforward way to facilitate optimisation of functions of the form we consider here. We introduce 2N unknown probability distributions over the cortical representation, Qα(Li) and Qβ(Li). Then, using Jensen's inequality for convex functions, we obtain a lower bound on the objective function:

L(θ) = Σi α(Vi) log ∫ dLi Qα(Li) [Pθ(Si, Li) / Qα(Li)] + β(Vi) log ∫ dLi Qβ(Li) [Pθ(Li, Vi | Si) / Qβ(Li)]
     ≥ Σi α(Vi) ( ⟨log Pθ(Si, Li)⟩Qα(Li) + H[Qα(Li)] ) + β(Vi) ( ⟨log Pθ(Li, Vi | Si)⟩Qβ(Li) + H[Qβ(Li)] )
     = F(θ, Qα(Li), Qβ(Li)).

It can be shown that, provided both functions are continuous and differentiable, local maxima of the "free energy" F with respect to all of its arguments correspond, in their optimal values of θ, to local maxima of L [5]. Thus, any hill-climbing technique applied to the free-energy functional can be used to find parameters that maximise the objective. In particular, the usual EM approach alternates maximisations (or just steps in the gradient direction) with respect to each of the arguments of F. In our case, this results in the following on-line learning updates, made after observing the ith data point:

Qα(Li) ← Pθ(Li | Si)  (3a)
Qβ(Li) ← Pθ(Li | Vi, Si)  (3b)
θ ← θ + η ∇θ [ α(Vi) ⟨log Pθ(Si, Li)⟩Qα(Li) + β(Vi) ⟨log Pθ(Li, Vi | Si)⟩Qβ(Li) ]  (3c)

where the first two equations represent exact maximisations, while the third is a gradient step with learning rate η.
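The Jensen bound can be verified numerically on a minimal example. The two-state model below is hypothetical (illustrative numbers only); it checks that the free energy never exceeds the log marginal likelihood, and is tight exactly at the posterior:

```python
# Numerical check of the Jensen lower bound: for any distribution Q over L,
#   log sum_L P(S, L) >= sum_L Q(L) log P(S, L) + H[Q],
# with equality at Q(L) = P(L|S).  Made-up two-state joint for one datum.
import math

P_SL = {0: 0.12, 1: 0.28}                      # joint P(S = s0, L)
log_marginal = math.log(sum(P_SL.values()))    # log P(S = s0)

def free_energy(Q):
    avg_log_joint = sum(Q[l] * math.log(P_SL[l]) for l in Q)
    entropy = -sum(Q[l] * math.log(Q[l]) for l in Q if Q[l] > 0)
    return avg_log_joint + entropy

# The bound is tight at the exact posterior, as in update (3a).
posterior = {l: P_SL[l] / sum(P_SL.values()) for l in P_SL}
```

This is the discrete analogue of why the E-step updates (3a) and (3b) are exact maximisations of F.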
It will be useful to rewrite (3c) as

θ ← θ + η [ α(Vi) ⟨∇θ log Pθ(Si, Li)⟩Qα(Li) + β(Vi) ⟨∇θ log Pθ(Li | Si)⟩Qβ(Li) + β(Vi) ⟨∇θ log Pθ(Vi | Li)⟩Qβ(Li) ]  (3c′)

where the conditioning on Si in the final term is not needed due to the Markovian structure of the model.

5 Biologically Plausible Learning

Could something like the updates of (3) underlie the task- or neuromodulator-driven changes that are seen in sensory cortex? Two out of the three steps seem plausible. In (3a), the distribution Pθ(Li | Si) represents the animal's beliefs about the latent causes that led to the current sensory experience, and as such is the usual product of perceptual inference. In (3c′), the various log probabilities involved are similarly natural products of perceptual or predictive computations. However, the calculation of the distribution Pθ(Li | Vi, Si) in (3b) is less easily reconciled with biological constraints. There are two difficulties. First, the sensory input Si and the information needed to assess its associated value Vi often arrive at quite different times. However, construction of the posterior distribution in its full detail requires simultaneous knowledge of both Si and Vi, and would therefore only be possible if rich information about the sensory stimulus were preserved until the associated value could be determined. The feasibility of such detailed persistence of sensory information is unclear. The second difficulty is an architectural one. The connections from receptor epithelium to sensory areas of cortex are extensive, easily capable of conveying the information needed to estimate P(L | S). By contrast, the brain structures that seem to be associated with the evaluation of reinforcement, such as the ventral tegmental area or nucleus basalis, make only sparse projections to early sensory cortex; and these projections are frequently modulatory in character, rather than synaptic.
Thus, exact computation of P(Li | Vi) (a component of the full P(Li | Vi, Si)) seems difficult to imagine. It might seem at first that the former of these two problems would also apply to the weight α(Vi) (in the first term of (3c′)), in that execution of this portion of the update would also need to be delayed until this value-dependent weight could be calculated. On closer examination, however, it becomes evident that this difficulty can be avoided. The trick is that in learning, the weight can be applied to the gradient. Thus, it is sufficient only to remember the gradient, or indeed the corresponding change in synaptic weights. One possible way to do this is to actually carry out an update of the weights when just the sensory stimulus is known, but then consolidate this learning (or not) as indicated by the value-related weight. Such a consolidation signal might easily be carried by a neuromodulatory projection from subcortical nuclei involved in the evaluation of reinforcement. We propose to solve the problem posed by P(L | S, V ) in essentially the same way, that is by using information about reinforcement-value to guide modulatory reweighting or consolidation of synaptic changes that are initially based on the sensory stimulus alone. Note that the expectations over P(Li | Si, Vi) that appear in (3c′) could, in principle, be replaced by sums over samples drawn from the distribution. Since learning is gradual and on-line, such a stochastic gradient ascent algorithm would still converge (in probability) to the optimum. Of course, sampling from this distribution is no more compatible with the foregoing biological constraints than integrating over it. However, consider drawing samples ˜Li from P(Li | Si), and then weighting the corresponding terms in the sum by w(˜Li) = P(Vi | ˜Li)/P(Vi | Si). 
Then we have, taking the second term in (3c′) for example,

⟨∇θ log Pθ(˜Li | Si) w(˜Li)⟩˜Li∼P(Li|Si)
  = ∫ d˜Li ∇θ log Pθ(˜Li | Si) [P(Vi | ˜Li) / P(Vi | Si)] P(˜Li | Si)
  = ∫ d˜Li ∇θ log Pθ(˜Li | Si) [P(Vi, ˜Li | Si) / P(Vi | Si)]
  = ⟨∇θ log Pθ(˜Li | Si)⟩˜Li∼P(Li|Si,Vi).

This approach to learning, which exploits the standard statistical technique of importance sampling [6], resolves both of the difficulties discussed above. It implies that reinforcement-related processing and learning in the sensory systems of the brain proceed in these stages:

1. The sensory input is processed to infer beliefs about the latent causes, Pθ(Li | Si). One or more samples ˜Li are drawn from this distribution.
2. Synaptic weights are updated to follow the gradients ⟨∇θ log Pθ(Si, Li)⟩Pθ(Li|Si) and ∇θ log Pθ(˜Li | Si) (corresponding to the first two terms of (3c′)).
3. The associated value is predicted, both on the basis of the full posterior, giving Pθ(Vi | Si), and on the basis of the sample(s), giving Pθ(Vi | ˜Li).
4. The actual value is observed or estimated, facilitating calculation of the weights α(Vi), β(Vi), and w(˜Li).
5. These weights are conveyed to sensory cortex and used to consolidate (or not) the synaptic changes of step 2.

This description does not encompass the updates corresponding to the third term of (3c′). Such updates could be undertaken once the associated value became apparent; however, the parameters that represent the explicit dependence of value on the latent variables are unlikely to lie in the sensory cortex itself (instead determining computations in subsequent processing).

5.1 Distributional Sampling

A commonly encountered difficulty with importance sampling has to do with the distribution of importance weights wi. If the range of weights is too extensive, the optimisation will be driven primarily by a few large weights, leading to slow and noisy learning.
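The importance-sampling identity above can be confirmed on a small discrete example. The tables below are hypothetical (made-up numbers for illustration); the check is that weighting by w(L) = P(V | L) / P(V | S) turns an expectation under P(L | S) into one under P(L | S, V):

```python
# Numerical check of the identity: for any f,
#   E_{L ~ P(L|S)}[ f(L) w(L) ] = E_{L ~ P(L|S,V)}[ f(L) ],
# with importance weights w(L) = P(V|L) / P(V|S).
P_L_given_S = {0: 0.3, 1: 0.7}     # posterior beliefs P(L|S)
P_V_given_L = {0: 0.9, 1: 0.2}     # likelihood of the observed value V
P_V_given_S = sum(P_V_given_L[l] * P_L_given_S[l] for l in P_L_given_S)

def f(l):
    # stand-in for the gradient term in (3c'); any function of L works
    return float(2 * l - 1)

w = {l: P_V_given_L[l] / P_V_given_S for l in P_L_given_S}
lhs = sum(f(l) * w[l] * P_L_given_S[l] for l in P_L_given_S)

# Exact posterior given both S and V: P(L|S,V) proportional to P(V|L) P(L|S).
P_L_given_SV = {l: P_V_given_L[l] * P_L_given_S[l] / P_V_given_S
                for l in P_L_given_S}
rhs = sum(f(l) * P_L_given_SV[l] for l in P_L_given_S)
```

In a sampled implementation the left-hand expectation would be approximated by weighting samples drawn from P(L | S), which is exactly what steps 1–5 above describe.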
Fortunately, it is possible to formulate an alternative, in which distributions over the cortical representational variables, rather than samples of the variables themselves, are randomly generated and weighted appropriately.¹ Let ˜Pi(L) be a distribution over the latent causes L, drawn randomly from a functional distribution P(˜Pi | Si), such that ⟨˜Pi(L)⟩P(˜Pi|Si) = P(Li | Si). Then, by analogy with the result above, it can be shown that, given importance weights

w(˜Pi) = [∫ dL P(Vi | L) ˜Pi(L)] / P(Vi | Si),  (4)

we have

⟨ ⟨∇θ log Pθ(˜Li | Si)⟩˜Pi(L) w(˜Pi) ⟩˜Pi∼P(˜Pi|Si) = ⟨∇θ log Pθ(Li | Si)⟩˜Li∼P(Li|Si,Vi).

These distributional samples can thus be used in almost exactly the same manner as the single-valued samples described above.

6 Simulation

A paradigmatic generative model structure is that underlying factor analysis (FA) [7], in which both latent and observed variables are normally distributed:

Pθ(Li) = N(0, I);  Pθ(Si | Li) = N(ΛS Li, ΨS);  Pθ(Vi | Li) = N(ΛV Li, ΨV).  (5)

¹ This sampling scheme can also be formalised as standard importance sampling carried out with a cortical representation re-expressed in terms of the parameters determining the distribution ˜Pi(L).

Figure 1: Generative and learned sensory weights: (a) generative weights, (b) unweighted learning, (c) weighted learning. Each panel plots relative amplitude against sensory input dimension. See text for details.

The parameters of the FA model (grouped here in θ) comprise two linear weight matrices, ΛS and ΛV, and two diagonal noise covariance matrices, ΨS and ΨV. This model is similar in its linear generative structure to the independent components analysis models that have previously been employed in accounts of unsupervised development of visual cortical properties [3, 4]; the only difference is in the assumed distribution of the latent variables. The unit-normal assumption of FA introduces a rotational degeneracy in solutions.
This can be resolved in general by constraining the weight matrix Λ = [ΛS, ΛV ] to be orthogonal – giving a version of FA known as principal factor analysis (PFA). We used a PFA-based simulation to verify that the distributional importance-weighted sampling procedure described here is indeed able to learn the correct model given sensory and reinforcement-value data. Random vectors representing sensory inputs and associated values were generated according to (5); these were then used as inputs to a learning system. The objective function optimised had both value-dependent weights α(Vi) and β(Vi) set to unity; thus the learning system simply attempted to model the joint distribution of sensory and reinforcement data. The generative model comprised 11 latent variables, 40 observed sensory variables (which were arranged linearly so as to represent 40 discrete values along a single sensory axis), and a single reinforcement variable. Ten of the latent variables only affected the sensory observations. The weight vectors corresponding to each of these are shown by the solid lines in figure 1a. These “tuning curves” were designed to be orthogonal. The curves shown in figure 1a have been rescaled to have equal maximal amplitude; in fact the amplitudes were randomly varied so that they formed a unique orthogonal basis for the data. These features of the generative weight matrix were essential for PFA to be able to recover the generative model uniquely. The final latent variable affected both reinforcement value and the sensory input at a single point (indicated by the dashed line in figure 1a). Since the output noise matrix in PFA can associate arbitrary variance with each sensory variable, a model fit to only the sensory data would treat the influence of this latent cause as noise. Only when the joint distribution over both sensory input and reinforcement is modelled will this aspect of the sensory data be captured in the model parameters. 
Learning was carried out by processing data generated by the model described above one sample at a time. The posterior distribution Pθ(Li | Si) for the PFA model is Gaussian, with covariance ΣL = (I + ΛSᵀ ΨS⁻¹ ΛS)⁻¹ and mean µL = ΣL ΛSᵀ ΨS⁻¹ Si. The distributional samples ˜Pi were also taken to be Gaussian. Each had covariance 0.6ΣL and mean drawn randomly from N(µL, 0.4ΣL). Two simulations were performed. In one case, learning proceeded according to the sampled distributions ˜Pi, with no importance weighting. In the other, learning was modulated by the importance weights given by (4). In all other regards the two simulations were identical. In particular, in both cases the reinforcement predictive weights ΛV were estimated, and in both cases the orthogonality constraint of PFA was applied to the combined estimated weight matrix [ΛS, ΛV].

Figures 1b and c show the sensory weights ΛS learnt by each of these procedures (again the curves have been rescaled to show relative weights). Both algorithms recovered the basic tuning properties; however, only the importance sampling algorithm was able to model the additional data feature that was linked to the prediction of reinforcement value. The fact that in all other regards the two learning simulations were identical demonstrates that the importance weighting procedure (rather than, say, the orthogonality constraint) was responsible for this difference.

7 Summary

This paper has presented a framework within which the experimentally observed impact of behavioural reinforcement on sensory plasticity might be understood. This framework rests on a similar foundation to the recent work that has related unsupervised learning to sensory response properties. It extends this foundation to consider prediction of the reinforcement value associated with sensory stimuli. Direct learning by expectation-maximisation within this framework poses difficulties regarding biological plausibility.
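The posterior formulas quoted above can be cross-checked against the standard result for conditioning a joint Gaussian, E[L | S = s] = Λᵀ(ΛΛᵀ + Ψ)⁻¹ s; the two agree by the matrix push-through (Woodbury-type) identity. The dimensions and random seed below are arbitrary choices for illustration:

```python
# Check that  Sigma_L = (I + Lam^T Psi^{-1} Lam)^{-1},
#             mu_L    = Sigma_L Lam^T Psi^{-1} s
# match direct Gaussian conditioning, E[L|s] = Lam^T (Lam Lam^T + Psi)^{-1} s.
import numpy as np

rng = np.random.default_rng(0)
k, d = 3, 7                                    # latent / sensory dims (arbitrary)
Lam = rng.normal(size=(d, k))                  # sensory weights Lambda_S
Psi = np.diag(rng.uniform(0.5, 2.0, size=d))   # diagonal sensory noise Psi_S
s = rng.normal(size=d)                         # one sensory observation

Psi_inv = np.linalg.inv(Psi)
Sigma_L = np.linalg.inv(np.eye(k) + Lam.T @ Psi_inv @ Lam)
mu_L = Sigma_L @ Lam.T @ Psi_inv @ s

# Direct conditioning of the joint Gaussian over (L, S).
mu_direct = Lam.T @ np.linalg.inv(Lam @ Lam.T + Psi) @ s
```

Working in the first form is cheaper when the latent dimension is much smaller than the sensory dimension, as in the 11-latent, 40-sensory simulation described in the text.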
However, these were resolved by the introduction of an importance-sampled approach, along with its extension to distributional sampling. Information about reinforcement is thus carried by a weighting signal that might be identified with the neuromodulatory signals in the brain.
2003
Semidefinite Programming by Perceptron Learning

Thore Graepel, Ralf Herbrich
Microsoft Research Ltd., Cambridge, UK
{thoreg,rherb}@microsoft.com

Andriy Kharechko, John Shawe-Taylor
Royal Holloway, University of London, UK
{ak03r,jst}@ecs.soton.ac.uk

Abstract

We present a modified version of the perceptron learning algorithm (PLA) which solves semidefinite programs (SDPs) in polynomial time. The algorithm is based on the following three observations: (i) semidefinite programs are linear programs with infinitely many (linear) constraints; (ii) every linear program can be solved by a sequence of constraint satisfaction problems with linear constraints; (iii) in general, the perceptron learning algorithm solves a constraint satisfaction problem with linear constraints in finitely many updates. Combining the PLA with a probabilistic rescaling algorithm (which, on average, increases the size of the feasible region) results in a probabilistic algorithm for solving SDPs that runs in polynomial time. We present preliminary results which demonstrate that the algorithm works, but is not competitive with state-of-the-art interior point methods.

1 Introduction

Semidefinite programming (SDP) is one of the most active research areas in optimisation. Its appeal derives from important applications in combinatorial optimisation and control theory, from the recent development of efficient algorithms for solving SDP problems, and from the depth and elegance of the underlying optimisation theory [14], which covers linear, quadratic, and second-order cone programming as special cases. Recently, semidefinite programming has been discovered as a useful toolkit in machine learning, with applications ranging from pattern separation via ellipsoids [4] to kernel matrix optimisation [5] and transformation-invariant learning [6]. Methods for solving SDPs have mostly been developed in analogy to linear programming.
Generalised simplex-like algorithms were developed for SDPs [11], but to the best of our knowledge these are currently merely of theoretical interest. The ellipsoid method works by searching for a feasible point via repeatedly "halving" an ellipsoid that encloses the affine space of constraint matrices, such that the centre of the ellipsoid is a feasible point [7]. However, this method shows poor performance in practice, as the running time usually attains its worst-case bound. A third set of methods for solving SDPs are interior point methods [14]. These methods minimise a linear function on convex sets provided the sets are endowed with self-concordant barrier functions. Since such a barrier function is known for SDPs, interior point methods are currently the most efficient methods for solving SDPs in practice. Considering the great generality of semidefinite programming and the complexity of state-of-the-art solution methods, it is quite surprising that the forty-year-old simple perceptron learning algorithm [12] can be modified so as to solve SDPs. In this paper we present a combination of the perceptron learning algorithm (PLA) with a rescaling algorithm (originally developed for LPs [3]) that is able to solve semidefinite programs in polynomial time. We start with a short introduction to semidefinite programming and the perceptron learning algorithm in Section 2. In Section 3 we present our main algorithm together with some performance guarantees, whose proofs we only sketch due to space restrictions. While our numerical results presented in Section 4 are very preliminary, they do give insights into the workings of the algorithm and demonstrate that machine learning may have something to offer to the field of convex optimisation. For the rest of the paper we denote matrices and vectors by bold face upper and lower case letters, e.g., A and x. We shall use x̄ := x/∥x∥ to denote the unit-length vector in the direction of x.
The notation A ⪰ 0 is used to denote x′Ax ≥ 0 for all x, that is, A is positive semidefinite.

2 Learning and Convex Optimisation

2.1 Semidefinite Programming

In semidefinite programming a linear objective function is minimised over the image of an affine transformation of the cone of semidefinite matrices, expressed by linear matrix inequalities (LMI):

minimise_{x ∈ R^n} c′x  subject to  F(x) := F_0 + Σ_{i=1}^n x_i F_i ⪰ 0,   (1)

where c ∈ R^n and F_i ∈ R^{m×m} for all i ∈ {0, . . . , n}. The following proposition shows that semidefinite programs are a direct generalisation of linear programs.

Proposition 1. Every semidefinite program is a linear program with infinitely many linear constraints.

Proof. Obviously, the objective function in (1) is linear in x. For any u ∈ R^m, define the vector a_u := (u′F_1u, . . . , u′F_nu). Then, the constraints in (1) can be written as

∀u ∈ R^m : u′F(x)u ≥ 0  ⇔  ∀u ∈ R^m : x′a_u ≥ −u′F_0u.   (2)

This is a linear constraint in x for all u ∈ R^m (of which there are infinitely many).

Since the objective function is linear in x, we can solve an SDP by a sequence of semidefinite constraint satisfaction problems (CSPs), introducing the additional constraint c′x ≤ c_0 and varying c_0 ∈ R. Moreover, we have the following proposition.

Proposition 2. Any SDP can be solved by a sequence of homogenised semidefinite CSPs of the following form: find x ∈ R^{n+1} subject to G(x) := Σ_{i=0}^n x_i G_i ≻ 0.

Algorithm 1 Perceptron Learning Algorithm
Require: A (possibly) infinite set A of vectors a ∈ R^n
  Set t ← 0 and x_t = 0
  while there exists a ∈ A such that x′_t a ≤ 0 do
    x_{t+1} = x_t + ā
    t ← t + 1
  end while
  return x_t

Proof. In order to make F_0 and c_0 dependent on the optimisation variables, we introduce an auxiliary variable x_0 > 0; the solution to the original problem is given by x_0^{−1} · x. Moreover, we can repose the two linear constraints c_0x_0 − c′x ≥ 0 and x_0 > 0 as an LMI using the fact that a block-diagonal matrix is positive (semi)definite if and only if every block is positive (semi)definite.
Thus, the following matrices are sufficient:

G_0 = [ F_0  0    0
        0′   c_0  0
        0′   0    1 ],   G_i = [ F_i  0     0
                                 0′   −c_i  0
                                 0′   0     0 ].

Given an upper and a lower bound on the objective function, repeated bisection can be used to determine the solution in O(log(1/ε)) steps to accuracy ε. In order to simplify notation, we will assume that n ← n+1 and m ← m+2 whenever we speak about a semidefinite CSP for an SDP in n variables with F_i ∈ R^{m×m}.

2.2 Perceptron Learning Algorithm

The perceptron learning algorithm (PLA) [12] is an online procedure which finds a linear separation of a set of points from the origin (see Algorithm 1). In machine learning this algorithm is usually applied to two sets A_{+1} and A_{−1} of points labelled +1 and −1 by multiplying every data vector a_i by its class label¹; the resulting vector x_t (often referred to as the weight vector in perceptron learning) is then read as the normal of a hyperplane which separates the sets A_{+1} and A_{−1}. A remarkable property of the perceptron learning algorithm is that the total number t of updates is independent of the cardinality of A but can be upper bounded simply in terms of the quantity

ρ(A) := max_{x ∈ R^n} ρ(A, x),  where  ρ(A, x) := min_{a ∈ A} ā′x̄.

This quantity is known as the (normalised) margin of A in the machine learning community, or as the radius of the feasible region in the optimisation community. It quantifies the radius of the largest ball that can be fitted in the convex region enclosed by all a ∈ A (the so-called feasible set). Then, the perceptron convergence theorem [10] states that t ≤ ρ^{−2}(A). For the purpose of this paper we observe that Algorithm 1 solves a linear CSP where the linear constraints are given by the vectors a ∈ A. Moreover, by the last argument we have the following proposition.

Proposition 3. If the feasible set has a positive radius, then the perceptron learning algorithm solves a linear CSP in finitely many steps.
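For concreteness, Algorithm 1 for a finite constraint set can be sketched in a few lines. This is a sketch only: variable names are ours, and we use the normalised update ā.

```python
def perceptron_csp(A, max_updates=10_000):
    """Find x with x.a > 0 for every constraint vector a in A, by
    repeatedly adding a (normalised) violated constraint, as in Algorithm 1."""
    x = [0.0] * len(A[0])
    for _ in range(max_updates):
        violated = next((a for a in A
                         if sum(xi * ai for xi, ai in zip(x, a)) <= 0.0), None)
        if violated is None:
            return x                      # all constraints strictly satisfied
        norm = sum(ai * ai for ai in violated) ** 0.5
        x = [xi + ai / norm for xi, ai in zip(x, violated)]
    return None                           # update bound exhausted
```

By the perceptron convergence theorem, the loop terminates after at most ρ^{−2}(A) updates whenever the feasible region has positive radius.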
It is worth mentioning that in the last few decades a series of modified PLAs have been developed (see [2] for a good overview) which mainly aim at guaranteeing not only feasibility of the solution x_t but also a lower bound on ρ(A, x_t). These guarantees usually come at the price of a slightly larger mistake bound, which we shall denote by M(A, ρ(A)), that is, t ≤ M(A, ρ(A)).

¹ Note that sometimes the update equation is given using the unnormalised vector a.

Algorithm 2 Rescaling algorithm
Require: A maximal number T ∈ N⁺ of steps and a parameter σ ∈ R⁺
  Set y uniformly at random in {z : ∥z∥ = 1}
  for t = 0, . . . , T do
    Find a_u such that ȳ′ā_u := u′G(ȳ)u / √(Σ_{j=1}^n (u′G_j u)²) ≤ −σ   (u ≈ smallest EV of G(ȳ))
    if no such u exists then
      Set ∀i ∈ {1, . . . , n} : G_i ← G_i + y_i G(y); return y
    end if
    y ← y − (y′ā_u) ā_u;  t ← t + 1
  end for
  return unsolved

3 Semidefinite Programming by Perceptron Learning

If we combine Propositions 1, 2 and 3 together with Equation (2), we obtain a perceptron algorithm that sequentially solves SDPs. However, there remain two problems: 1. How do we find a vector a ∈ A such that x′a ≤ 0? 2. How can we make the running time of this algorithm polynomial in the description length of the data?²

In order to address the first problem we notice that A in Algorithm 1 is not explicitly given but is defined by virtue of

A(G_1, . . . , G_n) := {a_u := (u′G_1u, . . . , u′G_nu) | u ∈ R^m}.

Hence, finding a vector a_u ∈ A such that x′a_u ≤ 0 is equivalent to identifying a vector u ∈ R^m such that Σ_{i=1}^n x_i u′G_iu = u′G(x)u ≤ 0. One possible way of finding such a vector u (and consequently a_u) for the current solution x_t in Algorithm 1 is to calculate the eigenvector corresponding to the smallest eigenvalue of G(x_t); if this eigenvalue is positive, the algorithm stops and outputs x_t. Note, however, that computationally easier procedures can be applied to find a suitable u ∈ R^m (see also Section 4).
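One concrete instance of such a cheaper procedure (the paper returns to it in Section 4) is based on a Cholesky-type factorisation: run an LDLᵀ factorisation of the symmetric matrix G(x_t), and when a pivot d_k turns non-positive, a direction u with u′G(x_t)u = d_k ≤ 0 is obtained by back substitution. A pure-Python sketch, with function and variable names of our choosing:

```python
def psd_or_witness(A, tol=1e-12):
    """Attempt an LDL^T factorisation of symmetric A.  If a pivot d_k is
    non-positive, return a vector u with u'Au = d_k <= 0 (a violated
    direction for the perceptron update); otherwise return None (A is pd)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for k in range(n):
        d[k] = A[k][k] - sum(L[k][j] ** 2 * d[j] for j in range(k))
        if d[k] <= tol:
            # Solve L^T u = e_k over the leading (k+1)-block; zeros beyond k.
            u = [0.0] * n
            u[k] = 1.0
            for i in range(k - 1, -1, -1):
                u[i] = -sum(L[j][i] * u[j] for j in range(i + 1, k + 1))
            return u
        for i in range(k + 1, n):
            L[i][k] = (A[i][k]
                       - sum(L[i][j] * L[k][j] * d[j] for j in range(k))) / d[k]
    return None
```

For the update in Algorithm 1 one then forms a_u = (u′G_1u, . . . , u′G_nu) from the returned direction.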
The second problem requires us to improve the dependency of the runtime from O(ρ^{−2}) to O(−log ρ). To this end we employ a probabilistic rescaling algorithm (see Algorithm 2) which was originally developed for LPs [3]. The purpose of this algorithm is to enlarge the feasible region (in terms of ρ(A(G_1, . . . , G_n))) by a constant factor κ, on average, which would imply a decrease in the number of updates of the perceptron algorithm exponential in the number of calls to this rescaling algorithm. This is achieved by running Algorithm 2. If the algorithm does not return unsolved, the rescaling procedure on the G_i has the effect that a_u changes into a_u + (y′a_u)y for every u ∈ R^m. In order to be able to reconstruct the solution x_t to the original problem, whenever we rescale the G_i we need to remember the vector y used for rescaling. In Figure 1 we have shown the effect of rescaling for three linear constraints in R³. The main idea of Algorithm 2 is to find a vector y that is σ-close to the current feasible region and hence leads to an increase in its radius when used for rescaling. The following property holds for Algorithm 2.

² Note that polynomial runtime is only guaranteed if ρ^{−2}(A(G_1, . . . , G_n)) is bounded by a polynomial function of the description length of the data.

Figure 1: Illustration of the rescaling procedure. Shown is the feasible region and one feasible point before (left) and after (right) rescaling with the feasible point.

Theorem 1. Assume Algorithm 2 did not return unsolved. Let σ ≤ 1/(32n), let ρ be the radius of the feasible set before rescaling and ρ′ the radius of the feasible set after rescaling, and assume that ρ ≤ 1/(4n). Then
1. ρ′ ≥ (1 − 1/(16n))ρ with probability at most 3/4.
2. ρ′ ≥ (1 + 1/(4n))ρ with probability at least 1/4.
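The rescaling step itself is a rank-one change of coordinates: every constraint vector a becomes a + (y′a)y, i.e. it is multiplied by the symmetric matrix I + yy′. Symmetry is what makes the bookkeeping of the rescaling vectors work, since x′((I + yy′)a) = ((I + yy′)x)′a, so a solution of the rescaled problem maps back to the original one. A minimal numerical check (values ours):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def rescale(a, y):
    """One rescaling step applied to a constraint vector: a -> a + (y.a) y."""
    ya = dot(y, a)
    return [ai + ya * yi for ai, yi in zip(a, y)]

x = [1.0, 2.0, 3.0]
a = [4.0, -1.0, 0.5]
y = [0.2, 0.3, -0.1]
# x . ((I + yy')a)  ==  ((I + yy')x) . a
lhs = dot(x, rescale(a, y))
rhs = dot(rescale(x, y), a)
```

This is the identity behind the accumulated matrix B used to reconstruct the solution.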
The probabilistic nature of the theorem stems from the fact that the rescaling can only be shown to increase the size of the feasible region if the (random) initial value y already points sufficiently closely to the feasible region. A consequence of this theorem is that, on average, the radius increases by a factor κ = (1 + 1/(64n)) > 1. Algorithm 3 combines rescaling and perceptron learning, which results in a probabilistic polynomial-runtime algorithm³ that alternates between calls to Algorithms 1 and 2. This algorithm may return infeasible in two cases: either T_i many calls to Algorithm 2 have returned unsolved, or L many calls of Algorithm 1 together with rescaling have not returned a solution. Each of these two conditions can happen either because of an "unlucky" draw of y in Algorithm 2 or because ρ(A(G_1, . . . , G_n)) is too small. Following the argument in [3], one can show that for L = −2048n · ln(ρ_min) the total probability of returning infeasible despite ρ(A(G_1, . . . , G_n)) > ρ_min cannot exceed exp(−n).

³ Note that we assume that the optimisation problem in line 3 of Algorithm 2 can be solved in polynomial time with algorithms such as Newton-Raphson.

Algorithm 3 Positive Definite Perceptron Algorithm
Require: G_1, . . . , G_n ∈ R^{m×m} and a maximal number of iterations L ∈ N⁺
  Set B = I_n
  for i = 1, . . . , L do
    Call Algorithm 1 for at most M(A, 1/(4n)) many updates
    if Algorithm 1 converged then return Bx
    Set δ_i = 3/(π²i²) and T_i = ln(δ_i)/ln(3/4)
    for j = 1, . . . , T_i do
      Call Algorithm 2 with T = 1024n² ln(n) and σ = 1/(32n)
      if Algorithm 2 returns y then B ← B(I_n + yy′); goto the outer for-loop
    end for
    return infeasible
  end for
  return infeasible

4 Experimental Results

The experiments reported in this section fall into two parts. Our initial aim was to demonstrate that the method works in practice and to assess its efficacy on a benchmark example from graph bisection [1].
These experiments would also indicate how competitive the baseline method is when compared to other solvers. The algorithm was implemented in MATLAB and all of the experiments were run on 1.7GHz machines. The time taken can be compared with a standard method, SDPT3 [13], partially implemented in C but running under MATLAB. We considered benchmark problems arising from semidefinite relaxations of the MAXCUT problem for weighted graphs, which is posed as finding a maximum-weight bisection of a graph. The benchmark MAXCUT problems have the following relaxed SDP form (see [8]):

minimise_{x ∈ R^n} 1′x  subject to  −(1/4)(diag(C1) − C) + diag(x) ⪰ 0,   (3)

where −(1/4)(diag(C1) − C) plays the role of F_0, diag(x) = Σ_i x_i F_i, and C ∈ R^{n×n} is the adjacency matrix of the graph with n vertices. The benchmark used was 'mcp100' provided by SDPLIB 1.2 [1]. For this problem, n = 100 and it is known that the optimal value of the objective function equals 226.1574. The baseline method used the bisection approach to identify the critical value of the objective, referred to throughout this section as c_0. Figure 2 (left) shows a plot of the time per iteration against the value of c_0 for the first four iterations of the bisection method. As can be seen from the plots, the time taken by the algorithm for each iteration is quite long, with the time of the fourth iteration being around 19,000 seconds. The initial value of 999 for c_0 was found without an objective constraint and converged within 0.012 secs. The bisection then started with the lower (infeasible) value of 0 and the upper value of 999. Iteration 1 was run with c_0 = 499.5, but the feasible solution had an objective value of 492. This was found in just 617 secs. The second iteration used a value of c_0 = 246, slightly above the optimum of 226. The third iteration was infeasible, but since it was quite far from the optimum, the algorithm was able to deduce this fact quite quickly. The final iteration was also infeasible, but much closer to the optimal value.
The running time suffered correspondingly, taking 5.36 hours. If we were to continue, the next iteration would also be infeasible but closer to the optimum, and so would take even longer. The first experiment demonstrated several things: first, that the method does indeed work as predicted; secondly, that the running times are very far from being competitive (SDPT3 takes under 12 seconds to solve this problem); and thirdly, that the running times increase as the value of c_0 approaches the optimum, with those iterations that must prove infeasibility being more costly than those that find a solution.

Figure 2: (Left) Four iterations of the bisection method showing time taken per iteration (outer for-loop in Algorithm 3) against the value of the objective constraint. (Right) Decay of the attained objective function value while iterating through Algorithm 3 with a non-zero threshold of τ = 500.

The final observation prompted our first adaptation of the base algorithm. Rather than perform the search using the bisection method, we implemented a non-zero threshold on the objective constraint (see the while-statement in Algorithm 1). The value of this threshold is denoted τ, following the notation introduced in [9]. Using a value of τ = 500 ensured that when a feasible solution is found, its objective value is significantly below that of the objective constraint c_0. Figure 2 (right) shows the values of c_0 as a function of the outer for-loops (iterations); the algorithm eventually approached its estimate of the optimal value at 228.106. This is within 1% of the optimum, though of course iterations could have been continued.
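The outer bisection over the objective bound c_0 can be sketched with a feasibility oracle standing in for a full perceptron run. The oracle below is a stand-in of ours; in the experiments each query costs a complete run of Algorithm 3:

```python
def bisect_optimum(feasible, lo, hi, eps=1e-4):
    """Shrink [lo, hi] around the optimal objective value.  `feasible(c0)` is
    True iff the CSP with the extra constraint c'x <= c0 has a solution,
    i.e. iff c0 is at or above the optimum."""
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid          # a feasible point with objective <= mid exists
        else:
            lo = mid          # mid lies below the optimal value
    return hi

# Stand-in oracle using the known mcp100 optimum 226.1574, starting from
# the bounds 0 and 999 used in the experiment.
opt = bisect_optimum(lambda c0: c0 >= 226.1574, 0.0, 999.0)
```

The number of oracle queries is O(log((hi − lo)/ε)), matching the O(log(1/ε)) step count quoted in Section 2.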
Despite the clear convergence, using this approach the running time to an accurate estimate of the solution is still prohibitive: overall the algorithm took approximately 60 hours of CPU time to find its solution. A profile of the execution, however, revealed that up to 93% of the execution time is spent in the eigenvalue decomposition used to identify u. Observe that we do not need a minimal eigenvector to perform an update, simply a vector u satisfying

u′G(x)u < 0.   (4)

A Cholesky decomposition will either return a u satisfying (4) or it will converge, indicating that G(x) is positive semidefinite and Algorithm 1 has converged.

5 Conclusions

Semidefinite programming has interesting applications in machine learning. In turn, we have shown how a simple learning algorithm can be modified to solve higher-order convex optimisation problems such as semidefinite programs. Although the experimental results given here suggest the approach is far from computationally competitive, the insights gained may lead to effective algorithms in concrete applications, in the same way that, for example, SMO is a competitive algorithm for solving the quadratic programming problems arising from support vector machines. While the optimisation setting leads to the somewhat artificial and inefficient bisection method, the positive definite perceptron algorithm excels at solving positive definite CSPs as found, e.g., in problems of transformation-invariant pattern recognition as solved by Semidefinite Programming Machines [6]. In future work it will be of interest to consider the combined primal-dual problem at a predefined level ε of granularity so as to avoid the necessity of bisection search.

Acknowledgments

We would like to thank J. Kandola, J. Dunagan, and A. Ambroladze for interesting discussions. This work was supported by EPSRC under grant number GR/R55948 and by Microsoft Research Cambridge.

References

[1] B. Borchers. SDPLIB 1.2, a library of semidefinite programming test problems.
Optimization Methods and Software, 11(1):683–690, 1999.
[2] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification and Scene Analysis. John Wiley and Sons, New York, 2nd edition, 2001.
[3] J. Dunagan and S. Vempala. A polynomial-time rescaling algorithm for solving linear programs. Technical Report MSR-TR-02-92, Microsoft Research, 2002.
[4] F. Glineur. Pattern separation via ellipsoids and conic programming. Mémoire de D.E.A., Faculté Polytechnique de Mons, Mons, Belgium, Sept. 1998.
[5] T. Graepel. Kernel matrix completion by semidefinite programming. In J. R. Dorronsoro, editor, Proceedings of the International Conference on Artificial Neural Networks, ICANN 2002, Lecture Notes in Computer Science, pages 694–699. Springer, 2002.
[6] T. Graepel and R. Herbrich. Invariant pattern recognition by Semidefinite Programming Machines. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, 2004.
[7] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2 of Algorithms and Combinatorics. Springer-Verlag, 1988.
[8] C. Helmberg. Semidefinite programming for combinatorial optimization. Technical Report ZR-00-34, Konrad-Zuse-Zentrum für Informationstechnik Berlin, Oct. 2000.
[9] Y. Li, H. Zaragoza, R. Herbrich, J. Shawe-Taylor, and J. Kandola. The perceptron algorithm with uneven margins. In Proceedings of the International Conference on Machine Learning (ICML 2002), pages 379–386, 2002.
[10] A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, volume 12, pages 615–622. Polytechnic Institute of Brooklyn, 1962.
[11] G. Pataki. Cone-LP's and semidefinite programs: facial structure, basic solutions, and the simplex method. Technical Report, GSIA, Carnegie Mellon University, 1995.
[12] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain.
Psychological Review, 65(6):386–408, 1958.
[13] K. C. Toh, M. Todd, and R. Tütüncü. SDPT3 – a MATLAB software package for semidefinite programming. Technical Report TR1177, Cornell University, 1996.
[14] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
2003
An iterative improvement procedure for hierarchical clustering

David Kauchak
Department of Computer Science, University of California, San Diego
dkauchak@cs.ucsd.edu

Sanjoy Dasgupta
Department of Computer Science, University of California, San Diego
dasgupta@cs.ucsd.edu

Abstract

We describe a procedure which finds a hierarchical clustering by hillclimbing. The cost function we use is a hierarchical extension of the k-means cost; our local moves are tree restructurings and node reorderings. We show these can be accomplished efficiently, by exploiting special properties of squared Euclidean distances and by using techniques from scheduling algorithms.

1 Introduction

A hierarchical clustering of n data points is a recursive partitioning of the data into 2, 3, 4, . . . and finally n clusters. Each intermediate clustering is made more fine-grained by splitting one of its clusters. It is natural to depict this process as a tree whose leaves are the data points and whose interior nodes represent intermediate clusters. Such hierarchical representations are very popular: they depict a data set at multiple levels of granularity, simultaneously; they require no prior specification of the number of clusters; and there are several simple heuristics for constructing them [2, 3]. Some of these heuristics – such as average-linkage – implicitly try to create clusters of small "radius" throughout the hierarchy. However, to the best of our knowledge, there is so far no procedure which specifically hillclimbs the space of hierarchical clusterings according to a precise objective function. Given the heuristic nature of existing algorithms, it would be most helpful to be able to call an iterative improvement procedure on their output. In particular, we seek an analogue of k-means for hierarchical clustering.
Taken literally this is possible only to a certain extent – the basic object we are dealing with is a tree rather than a partition – but k-means has closely informed many aspects of our procedure, and has determined our choice of objective function. We use a canonical tree representation of a hierarchical clustering, in which the leaves are data points, and the interior nodes are ordered; such a clustering is specified completely by a tree structure and by an ordering of nodes. Our cost function is a hierarchical extension of the k-means cost function, and is the same cost function which motivates average-linkage schemes. Our iterative procedure alternates between two simple moves:
1. The ordering of nodes is kept fixed, and one subtree is relocated. This is the natural generalization of a standard heuristic clustering move in which a data point is transferred from one cluster to another.
2. The tree structure is kept fixed, and its interior nodes are reordered optimally.

We show that by exploiting properties of Euclidean distance (which underlies the k-means cost function and therefore ours as well), these tasks can be performed efficiently. For instance, the second one can be transformed into a problem in VLSI design and job scheduling called minimum linear arrangement. In general this problem is NP-hard, but for our particular case it is known [4] to be efficiently solvable, in O(n log n) time. After motivating and describing our model and our algorithm, we end with some experimental results.

2 The model

2.1 The space of trees

A hierarchical clustering of n points contains n different clusterings, nested within each other. It is often depicted using a dendrogram, such as the one below on the left (for a data set of five points). We will use the term k-clustering, and the notation C_k, to denote the grouping into k clusters. One of these clusters is divided in two to yield the (k + 1)-clustering C_{k+1}, and so on.
Instead of a dendrogram, it is convenient to use a rooted binary tree (shown below on the right) in which the leaves are data points and internal nodes have exactly two children, so there are 2n − 1 nodes overall. Each internal node is annotated with a unique "split number" between 1 and n − 1. These satisfy the property that the split number of a parent is less than that of its children; so the root is numbered 1. The k-clustering is produced by removing the internal nodes numbered 1, 2, 3, . . . , k − 1; each cluster consists of (the leaves in) one of the resulting connected components.

(Figure: a dendrogram over the five points a, b, c, d, e on the left, and the corresponding split-numbered binary tree on the right.)
• 2-clustering: {a, b, e}, {c, d}
• 3-clustering: {a, b}, {e}, {c, d}
• 4-clustering: {a}, {b}, {e}, {c, d}

Henceforth we will use "node i" to mean "the internal node with split number i". The maximal subtree rooted at this node is T_i; the mean of its data points (leaves) is called µ_i. To summarize, a hierarchical clustering is specified by: a binary tree with the data points at the leaves; and an ordering of the internal nodes.

2.2 Cost function

If the clusters of C_k are S_1, S_2, . . . , S_k, then the k-means cost function is

cost(C_k) = Σ_{j=1}^k Σ_{x ∈ S_j} ∥x − µ(S_j)∥²,

where µ(S) is the mean of set S. To evaluate a hierarchical clustering, we need to combine the costs of all n intermediate clusterings, and we do so in the most obvious way, by a linear combination. We take the overall cost of the hierarchical clustering to be

Σ_{k=1}^n w_k · cost(C_k),

where the w_k are non-negative weights which add up to one. The default choice is to make all w_k = 1/n, but in general the specific application will dictate the choice of weights. A decreasing schedule w_1 > w_2 > w_3 > · · · > w_n places more emphasis upon coarser clusterings (ie. small k); a setting w_k = 1 singles out a particular intermediate clustering. Although many features of our cost function are familiar from the simpler k-means setting, there is one which is worth pointing out.
Consider the set of six points shown in the figure (omitted here). Under the k-means cost function, it is clear what the best 2-clustering is (three points in each cluster). It is similarly clear what the best 3-clustering is, but this cannot be nested within the best 2-clustering. In other words, the imposition of a hierarchical structure forces certain tradeoffs between the intermediate clusterings. This particular feature is fundamental to hierarchical clustering, and in our cost function it is laid bare. By adjusting the weights w_k, the user can bias this tradeoff according to his or her particular needs. It is worth pointing out that cost(C_k) decreases as k increases; as more clusters are allowed, the data can be modeled with less error. This means that even when all the weights w_k are identical, the smaller values of k contribute more to the cost function, and therefore a procedure for minimizing this function must implicitly focus a little more on smaller k than on larger k. This is the sort of bias we usually seek. If we wanted to further emphasize small values of k, we could for instance use an exponentially decreasing schedule of weights, ie. w_k = c · α^k, where α < 1 and where c is a normalization constant. Notice that any given subtree T_j can appear as an individual cluster in many of the clusterings C_k. If π(j) denotes the parent of j, then T_j first appears as its own cluster in C_{π(j)+1}, and is part of all the successive clusterings up to and including C_j. At that point, it gets split in two.

2.3 Relation to previous work

The most commonly used heuristics for hierarchical clustering are agglomerative. They work bottom-up, starting with each data point in its own cluster, and then repeatedly merging the two "closest" clusters until finally all the points are grouped together in one cluster. The different schemes are distinguished by their measure of closeness between clusters:
1. Single linkage – the distance between two clusters S and T is taken to be the distance between their closest pair of points, ie. min_{x∈S, y∈T} ∥x − y∥.
2. Complete linkage uses the distance between the farthest pair of points, ie. max_{x∈S, y∈T} ∥x − y∥.
3. Average linkage seems to have now become a generic term encompassing at least three different measures of distance between clusters:
   (a) (Sokal–Michener) ∥µ(S) − µ(T)∥²
   (b) (1/(|S| · |T|)) Σ_{x∈S, y∈T} ∥x − y∥²
   (c) (Ward's method) (|S| · |T| / (|S| + |T|)) · ∥µ(S) − µ(T)∥²

Average linkage appears to be the most widely used of these; for instance, it is a standard tool for analyzing gene expression data [1]. The three average-linkage distance functions are all trying to minimize something very much like our cost function. In particular, Ward's measure of the distance between two clusters is exactly the increase in k-means cost occasioned by merging those clusters. For our experimental comparisons, we have therefore chosen Ward's method.

3 Local moves

Each element of the search space is a tree structure in which the data points are leaves and in which the interior nodes are ordered. A quick calculation shows that this space has size n((n − 1)!)²/2^{n−1} (consider the sequence of n − 1 merge operations which create the tree from the data set). We consider two moves for navigating the space, along the lines of the standard "alternating optimization" paradigm of k-means and EM:
1. keep the structure fixed and reorder the internal nodes optimally;
2. keep the ordering of the internal nodes fixed and alter the structure by relocating some subtree.

A key concern in the design of these local moves is efficiency. A k-means update takes O(kn) time; in our situation the analogue would be O(n²) time since we are dealing with all values of k. Ideally, however, we'd like a faster update.
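The naive cost evaluation that these local moves are designed to beat follows directly from the definition. A sketch for one-dimensional points, with helper names of our choosing:

```python
def kmeans_cost(clusters):
    """Sum over clusters of squared distances to the cluster mean (1-D points)."""
    total = 0.0
    for S in clusters:
        mu = sum(S) / len(S)
        total += sum((x - mu) ** 2 for x in S)
    return total

def hierarchical_cost(clusterings, weights):
    """The weighted combination sum_k w_k * cost(C_k) over nested clusterings."""
    return sum(w * kmeans_cost(C) for w, C in zip(weights, clusterings))
```

Each cost(C_k) takes O(n) time this way, so evaluating all n clusterings from scratch costs O(n²) per candidate move, which is exactly the figure the faster updates below improve on.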
For our first move – reordering internal nodes – we show that a previously-known scheduling algorithm [4] can be adapted to solve this task (in the case of uniform weights) in just $O(n \log n)$ time. For the second move, we show that any given subtree can be relocated optimally in $O(n)$ time, using just a single pass through the tree. These efficiency results are nontrivial; a crucial step in obtaining them is to exploit special properties of squared Euclidean distance. In particular, we write our cost function in three different, but completely equivalent, ways, and we switch back and forth between these:

1. In the form given above (the definition).
2. We define the cost of a subtree $T_i$ to be $\mathrm{cost}(T_i) = \sum_{x \in T_i} \|x - \mu_i\|^2$ (where the sum is over leaf nodes), that is, the cost of the single cluster rooted at point $i$. Then the overall cost is a linear combination of subtree costs. Specifically, it is
$$\sum_{j=1}^{n-1} W_{\pi(j),j} \cdot \mathrm{cost}(T_j), \qquad (1)$$
where $\pi(j)$ is the parent of node $j$ and $W_{ij} = w_{i+1} + w_{i+2} + \cdots + w_j$.
3. We annotate each tree edge $(i, j)$ ($i$ is the parent of $j > i$) by $\|\mu_i - \mu_j\|^2$; the overall cost is also a linear combination of these edge weights, specifically
$$\sum_{(k,l) \in T} W_k \cdot n_l \cdot \|\mu_k - \mu_l\|^2, \qquad (2)$$
where $W_k = w_1 + w_2 + \cdots + w_k$ and $n_l$ is the number of leaves in subtree $T_l$.

All proofs are in a technical report [5] which can be obtained from the authors. To give a hint for why these alternative formulations of the cost function are true, we briefly mention a simple "bias-variance" decomposition of squared Euclidean distance: suppose $S$ is a set of points with mean $\mu_S$. Then for any $\mu$,
$$\sum_{x \in S} \|x - \mu\|^2 = \sum_{x \in S} \|x - \mu_S\|^2 + |S| \cdot \|\mu - \mu_S\|^2.$$

3.1 The graft

In a graft move, an entire subtree is moved to a different location, as shown below. The letters $a, b, i, \ldots$ denote split numbers of interior nodes; here the subtree $T_j$ is moved. The only prerequisite (to ensure a consistent ordering) is $a < i < b$.
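The bias-variance decomposition stated above can be verified numerically; the following is our own check, not code from the paper:

```python
# Check of the decomposition: for any reference point mu,
#   sum_x ||x - mu||^2 = sum_x ||x - mu_S||^2 + |S| * ||mu - mu_S||^2,
# where mu_S is the mean of the point set S.

def mean(points):
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(len(points[0]))]

def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

S = [(0.0, 1.0), (2.0, 3.0), (-1.0, 0.5), (4.0, -2.0)]
mu_S = mean(S)
mu = (10.0, -3.0)   # an arbitrary reference point

lhs = sum(sqdist(p, mu) for p in S)
rhs = sum(sqdist(p, mu_S) for p in S) + len(S) * sqdist(mu, mu_S)
assert abs(lhs - rhs) < 1e-9
```

Both alternative formulations of the cost function follow by applying this identity to the clusters along a tree path.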
(diagram of the graft move: the trees before and after subtree $T_j$ is relocated, with interior nodes labeled $a, b, h, i, j, k, \ldots$)

First of all, a basic sanity check: this move enables us to traverse the entire search space.

Claim. Any two hierarchical clusterings are connected by a sequence of graft operations.

It is important to find good grafts efficiently. Suppose we want to move a subtree $T_j$; what is the best place for it? Evaluating the cost of a hierarchical clustering takes $O(n)$ time using equation (1) and a single, bottom-up pass. Since there are $O(n)$ possible locations for $T_j$, naively it seems that evaluating all of them would take $O(n^2)$ time. In fact, the best relocation of $T_j$ can be computed in just $O(n)$ time, in a single pass over the tree. To see why this is possible, notice that in the diagram above, the movement of $T_j$ affects only the subtrees on the path between $a$ and $h$. Some of these subtrees get bigger ($T_j$ is added to them); others shrink ($T_j$ is removed). The precise change in cost of any given subtree $T_l$ on this path is easy to compute:

Claim. If subtree $T_j$ is merged into $T_l$, then the cost of $T_l$ goes up by
$$\Delta^+_l = \mathrm{cost}(T_l \cup T_j) - \mathrm{cost}(T_l) = \mathrm{cost}(T_j) + \frac{n_l\,n_j}{n_l + n_j} \cdot \|\mu_l - \mu_j\|^2.$$

Claim. If subtree $T_j \subset T_l$ is removed from $T_l$, then the cost of $T_l$ changes by
$$\Delta^-_l = \mathrm{cost}(T_l - T_j) - \mathrm{cost}(T_l) = -\mathrm{cost}(T_j) - \frac{n_j\,n_l}{n_l - n_j} \cdot \|\mu_l - \mu_j\|^2.$$

Using (1), the total change in cost from grafting $T_j$ between $a$ and $b$ (as depicted above) can be found by adding terms of the form $W_{\pi(l),l}\,\Delta^{\pm}_l$ for nodes $l$ on the path between $j$ and $a$. This suggests a two-pass algorithm for optimally relocating $T_j$: in the first pass over the tree, for each $T_l$, the potential cost change from adding/removing $T_j$ is computed. The second pass finds the best location. In fact, these can be combined into a single pass [5].

3.2 Reordering internal nodes

Let $V_{int}$ be the interior nodes of the tree; if there are $n$ data points (leaves), then $|V_{int}| = n - 1$. For any $x \in V_{int}$, let $T_x$ be the maximal subtree rooted at $x$, which contains all the descendants of $x$. Let $n_x$ be the number of leaves in this subtree.
If $x$ has children $y$ and $z$, then the goodness of split at $x$ is the reduction in cost obtained by splitting cluster $T_x$, $\mathrm{cost}(T_x) - (\mathrm{cost}(T_y) + \mathrm{cost}(T_z))$, which we henceforth denote $g(x)$ (for leaves, $g(x) = 0$). Again using properties of Euclidean distance, we can rewrite it thus: $g(x) = n_y\|\mu_x - \mu_y\|^2 + n_z\|\mu_x - \mu_z\|^2$.

Priority queue operations: makequeue, max, deletemax, union, insert. Linked list operations: ∘ (concatenation).

procedure reorder(T)
  u ← root of T
  Q ← makequeue(u)
  while Q is not empty:
    L ← deletemax(Q)
    output elements of list L, in order

function makequeue(x)
  if x is a leaf: return { }
  let y, z be the children of x
  Q ← union(makequeue(y), makequeue(z))
  r ← $n_y\|\mu_x - \mu_y\|^2 + n_z\|\mu_x - \mu_z\|^2$
  L ← [x]
  while r < r(max(Q)):
    L′ ← deletemax(Q)
    r ← (r·|L| + r(L′)·|L′|) / (|L| + |L′|)
    L ← L ∘ L′
  r(L) ← r
  insert(Q, L)
  return Q

Figure 1: The reordering move. Here Q is a priority queue of linked lists. Each list L has a value r(L), and Q is ordered according to these.

We wish to find a numbering $\sigma : V_{int} \to \{1, 2, \ldots, n-1\}$ which
– respects the precedence constraints of the tree: if $x$ is the parent of $y$ then $\sigma(x) < \sigma(y)$;
– minimizes the overall cost of the hierarchical clustering. Assuming uniform weights $w_k = 1/n$, this cost can be seen (by manipulating equation (2)) to be $\frac{1}{n}\sum_{x \in V_{int}} \sigma(x)\,g(x)$.

Notice that this is essentially a scheduling problem. There is a "task" (a split) corresponding to each $x \in V_{int}$. We would like to schedule the good tasks (with high $g(x)$) early on; in the language of clustering, if there are particularly useful splits (which lead to well-separated clusters), we would like to perform them early in the hierarchy. And there are precedence constraints which must be respected: certain splits must precede others. The naive greedy solution – always pick the node with highest $g(x)$, subject to precedence constraints – does not work. The reason: it is quite possible that a particular split has a low $g(x)$-value, but that it leads to other splits of very high value.
A greedy algorithm would schedule this split very late; an algorithm with some "lookahead" capability would realize the value of this split and schedule it early. Horn [4] gives a scheduling algorithm which obtains the optimal ordering, in the case where all the weights $w_k$ are equal, and can be implemented in $O(n \log n)$ time. We believe it can be extended to exponentially decaying, "memoryless" weights, i.e., $w_k = c \cdot \alpha^k$, where $\alpha < 1$ and $c$ is some normalization constant. We now present an overview of Horn's algorithm. For each node $x \in V$, define $r(x)$ to be the maximum, over all subtrees $T$ (not necessarily maximal) rooted at $x$, of $\frac{1}{|T|}\sum_{z \in T} g(z)$ (in words, the average of $g(\cdot)$ over nodes of $T$). This value $r(x)$ is a more reliable indicator of the utility of split $x$ than the immediate return $g(x)$. Once these $r(x)$ are known, the optimal numbering is easy to find: pick nodes in decreasing order of $r(\cdot)$ while respecting the precedence constraints. So the main goal is to compute $r(x)$ for all $x$ in the tree. This can be done by a short divide-and-conquer procedure in $O(n \log n)$ time (Figure 1).

Figure 2: (a) Five data points. (b)–(f) Iteratively improving the hierarchical clustering.

Figure 3: (a) Four points on a line (spacings 1.0, 0.8, 1.0). (b) Average linkage. (c) Optimal tree.

4 Experiments

In the experiments, we used uniform weights $w_k = 1/n$. In each iteration of our procedure, we did a reordering of the nodes and performed one graft – by trying each possible subtree (all $O(n)$ of them), determining the optimal move for that subtree, and greedily picking the best move. We would prefer a more efficient, randomized way to pick which subtree to graft – either completely randomly, or biased by a simple criterion like "amount it deviates from the center of its parent cluster"; this is future work.

Simple examples.
To give some concrete intuition, Figure 2 shows the sequence of moves taken on a toy example involving five data points in the plane. The initial tree (b) is random and has a cost of 62.25. A single graft (c) reduces the cost to 27. A reordering (d), swapping 2 and 3, reduces the cost to 25.5, and a further graft (e) and reordering (f) result in the final tree, which is optimal and has cost 21. Figure 3 demonstrates a typical failing of average linkage. The initial greedy merger of points b, c gives a small early benefit but later turns out to be a bad idea; yet the resulting tree is only one graft away from being optimal. Really bad cases for average linkage can be constructed by recursively compounding this simple instance.

A larger data set. Average linkage is often used in the analysis of gene expression data.

Figure 4: (a) On the left, a comparison with average linkage. (b) On the right, the behavior of the cost function over the 80 iterations required for convergence.

We tried our method on the yeast data of [1]. We randomly chose clean subsets (no missing entries) of varying sizes from this data set and ran the following on each: average linkage, our method initialized randomly, and our method initialized with average linkage. There were two clear trends. First: our method, whether initialized randomly or with average linkage, systematically did better than average linkage, not only for the particular aggregate cost function we are using, but across the whole spectrum of values of $k$. Figure 4(a), obtained on a 500-point data set, shows for each $k$ the percentage by which the (induced) k-clustering found by our method (initialized with average linkage) improved upon that found by average linkage; the metric here is the k-means cost function.
This is a fair comparison because both methods are explicitly trying to minimize this cost. An improvement in the aggregate (weighted average) is to be expected, since we are hill-climbing on this measure. What was reassuring was that the improvement appeared at almost all values of $k$ (especially the smaller ones), rather than arising from some unexpected tradeoff between different values of $k$. This experiment also indicates that, in general, the output of average linkage has real scope for improvement. Second, our method often took an order of magnitude (ten or more times) longer to converge when initialized randomly than when initialized with average linkage, even though better solutions were often found with random initialization. We therefore prefer starting with average linkage. On the examples we tried, there was a period of rapid improvement involving grafts of large subtrees, followed by a long series of minor "fixes"; see Figure 4(b), which refers again to the 500-point data set mentioned earlier.

References
[1] T.L. Ferea et al. Systematic changes in gene expression patterns following adaptive evolution in yeast. Proceedings of the National Academy of Sciences, 97, 1999.
[2] J.A. Hartigan. Clustering Algorithms. Wiley, 1975.
[3] J.A. Hartigan. Statistical theory in clustering. Journal of Classification, 1985.
[4] W.A. Horn. Single-machine job sequencing with treelike precedence ordering and linear delay penalties. SIAM Journal on Applied Mathematics, 23:189–202, 1972.
[5] D. Kauchak and S. Dasgupta. Manuscript, 2003.
Kernels for Structured Natural Language Data Jun Suzuki, Yutaka Sasaki, and Eisaku Maeda NTT Communication Science Laboratories, NTT Corp. 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237 Japan {jun, sasaki, maeda}@cslab.kecl.ntt.co.jp Abstract This paper devises a novel kernel function for structured natural language data. In the field of Natural Language Processing, feature extraction consists of the following two steps: (1) syntactically and semantically analyzing raw data, i.e., character strings, then representing the results as discrete structures, such as parse trees and dependency graphs with part-of-speech tags; (2) creating (possibly high-dimensional) numerical feature vectors from the discrete structures. The new kernels, called Hierarchical Directed Acyclic Graph (HDAG) kernels, directly accept DAGs whose nodes can contain DAGs. HDAG data structures are needed to fully reflect the syntactic and semantic structures that natural language data inherently have. In this paper, we define the kernel function and show how it permits efficient calculation. Experiments demonstrate that the proposed kernels are superior to existing kernel functions, e.g., sequence kernels, tree kernels, and bag-of-words kernels. 1 Introduction Recent developments in kernel technology enable us to handle discrete structures, such as sequences, trees, and graphs. Kernel functions suitable for Natural Language Processing (NLP) have recently been proposed. Convolution Kernels [4, 12] demonstrate how to build kernels over discrete structures. Since texts can be analyzed as discrete structures, these discrete kernels have been applied to NLP tasks, such as sequence kernels [8, 9] for text categorization and tree kernels [1, 2] for (shallow) parsing. In this paper, we focus on tasks in the application areas of NLP, such as Machine Translation, Text Summarization, Text Categorization and Question Answering. 
In these tasks, richer types of information within texts, such as syntactic and semantic information, are required for higher performance. However, syntactic and semantic information take the form of very complex structures that cannot be written as simple structures such as sequences and trees. The motivation of this paper is to propose kernels specifically suited to structured natural language data. The proposed kernels can handle several of the structures found within texts and calculate kernels over these structures at practical cost and time. Accordingly, these kernels can be efficiently applied to learning and clustering problems in NLP applications.

Figure 1: Examples of structures within texts as determined by basic NLP tools, for the text "Koizumi Junichiro is prime minister of Japan.": (1) the result of a part-of-speech tagger, (2) the result of a noun phrase chunker, (3) the result of a named entity tagger, (4) the result of a dependency structure analyzer, and (5) semantic information from a dictionary (e.g., WordNet), such as [executive], [Asian Country], and [number].

2 Structured Natural Language Data for Application Tasks in NLP

In general, natural language data contain many kinds of syntactic and semantic structures.
For example, texts have several levels of syntactic and semantic chunks, such as part-of-speech (POS) chunks, named entities (NEs), noun phrase (NP) chunks, sentences, and discourse segments, and these are bound by relation structures, such as dependency structures, anaphora, discourse relations, and coreference. These syntactic and semantic structures can provide important information for understanding natural language and, moreover, for tackling real tasks in application areas of NLP. The accuracies of basic NLP tools such as POS taggers, NP chunkers, NE taggers, and dependency structure analyzers have improved to the point that they can help to develop real applications. This paper proposes a method to handle these syntactic and semantic structures in a single framework: we combine the results of basic NLP tools to make one hierarchically structured data set. Figure 1 shows an example of structures within texts analyzed by basic NLP tools that are currently available and that offer easy use and high performance. As shown in Figure 1, structures in texts can be hierarchical or recursive "graphs in graphs". A certain node can be constructed or characterized by other graphs. Nodes usually have several kinds of attributes, such as words, POS tags, semantic information such as WordNet [3], and classes of named entities. Moreover, the relations between nodes are usually directed. Therefore, we should employ a (1) directed, (2) multi-labeled, and (3) hierarchically structured graph to model structured natural language data. Let $V$ be a set of vertices (or nodes) and $E$ be a set of edges (or links). Then, a graph $G = (V, E)$ is called a directed graph if $E$ is a set of directed links $E \subset V \times V$.

Definition 1 (Multi-Labeled Graph) Let $\Gamma$ be a set of labels (or attributes) and $M \subset V \times \Gamma$ be label allocations. Then, $G = (V, E, M)$ is called a multi-labeled graph.
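As a concrete (hypothetical) illustration of Definition 1, a multi-labeled directed graph could be represented as follows; the class and method names are ours, not the paper's:

```python
from dataclasses import dataclass, field

# Sketch of Definition 1: a multi-labeled directed graph G = (V, E, M),
# where M allocates a set of labels (attributes) to each node.
@dataclass
class MultiLabeledGraph:
    nodes: set = field(default_factory=set)     # V
    edges: set = field(default_factory=set)     # E, a subset of V x V (directed)
    labels: dict = field(default_factory=dict)  # M: node -> set of attributes

    def add_node(self, v, *attrs):
        self.nodes.add(v)
        self.labels.setdefault(v, set()).update(attrs)

    def add_edge(self, u, v):
        assert u in self.nodes and v in self.nodes
        self.edges.add((u, v))

g = MultiLabeledGraph()
g.add_node("q1", "Koizumi", "NNP")   # a node can carry several attributes
g.add_node("q2", "is", "VBZ")
g.add_edge("q1", "q2")               # a directed link
assert g.labels["q1"] == {"Koizumi", "NNP"}
```

Definition 2's vertical links would extend this with a map from nodes to subgraphs, one per node at most.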
Definition 2 (Hierarchically Structured Graph) Let $G_i = (V_i, E_i)$ be a subgraph of $G = (V, E)$, where $V_i \subseteq V$ and $E_i \subseteq E$, and let $\mathcal{G} = \{G_1, \ldots, G_n\}$ be a set of subgraphs of $G$. $F \subset V \times \mathcal{G}$ represents a set of vertical links from a node $v \in V$ to a subgraph $G_i \in \mathcal{G}$. Then, $G = (V, E, \mathcal{G}, F)$ is called a hierarchically structured graph if each node has at most one vertical edge. Intuitively, a vertical link $f_{i,G_j} \in F$ from node $v_i$ to graph $G_j$ indicates that node $v_i$ contains graph $G_j$.

Finally, in this paper, we represent structured natural language data by using a multi-labeled hierarchical directed graph.

Definition 3 (Multi-Labeled Hierarchical Directed Graph) $G = (V, E, M, \mathcal{G}, F)$ is called a multi-labeled hierarchical directed graph.

Figure 2: Examples of Hierarchical Directed Graph structures (these are also HDAGs); each letter represents an attribute, nodes $q_i, r_i$ are connected by directed links $e_{i,j}$ and linked to subgraphs by vertical links $f_{i,G_j}$, and in the corresponding graphical model of text, nodes are chunks and directed links are relations between chunks.

Figure 2 shows examples of multi-labeled hierarchical directed graphs. In this paper, we call a multi-labeled hierarchical directed graph simply a hierarchical directed graph.

3 Kernels on Hierarchical Directed Acyclic Graph

First, in order to calculate kernels efficiently, we add one constraint: the hierarchical directed graph must have no cyclic paths. We begin by defining a path on a hierarchical directed graph.
If a node has no vertical link, it is called a terminal node, denoted $T \subset V$; otherwise it is a non-terminal node, denoted $\bar{T} \subset V$.

Definition 4 (Hierarchical Path (HiP)) Let $p = \langle v_i, e_{i,j}, v_j, \ldots, v_k, e_{k,l}, v_l \rangle$ be a path. Let $\Upsilon(v)$ be a function that returns the subgraph $G_i$ linked with $v$ by a vertical link if $v \in \bar{T}$. Let $\mathcal{P}(G)$ be a function that returns the set of all HiPs in $G$, where links between $v \in G$ and $v \notin G$ are ignored. Then, $p^h = \langle h(v_i), e_{i,j}, h(v_j), \ldots, h(v_k), e_{k,l}, h(v_l) \rangle$ is defined as a HiP, where $h(v)$ returns $v\langle p^h_x \rangle$ with $p^h_x \in \mathcal{P}(G_x)$ s.t. $G_x = \Upsilon(v)$ if $v \in \bar{T}$, and otherwise returns $v$. Intuitively, a HiP is constructed by paths within the path structure, e.g., $p^h = \langle v_i, e_{i,j}, v_j\langle v_m, e_{m,n}, v_n \rangle, \ldots, v_k, e_{k,l}, v_l \rangle$.

Definition 5 (Hierarchical Directed Acyclic Graph (HDAG)) A hierarchical directed graph $G = (V, E, M, \mathcal{G}, F)$ is an HDAG if there is no HiP from any node $v$ to the same node $v$.

A primitive feature for defining kernels on HDAGs is a hierarchical attribute subsequence.

Definition 6 (Hierarchical Attribute Subsequence (HiAS)) A HiAS is defined as a list of attributes with hierarchical information extracted from nodes on HiPs. For example, let $p^h = \langle v_i, e_{i,j}, v_j\langle v_m, e_{m,n}, v_n \rangle, \ldots, v_k, e_{k,l}, v_l \rangle$ be a HiP; then the HiASs in $p^h$ are written as $\tau(p^h) = \langle a_i, a_j\langle a_m, a_n \rangle, \ldots, a_k, a_l \rangle$, that is, all combinations over all $a_i \in \tau(v_i)$, where $\tau(v)$ of node $v$ is a function that returns the set of attributes allocated to node $v$, and $\tau(p^h)$ of HiP $p^h$ is a function that returns all possible HiASs extracted from HiP $p^h$.

$\Gamma^*$ denotes all possible HiASs constructed from the attributes in $\Gamma$, and $\gamma_i \in \Gamma^*$ denotes the $i$'th HiAS. An explicit representation of a feature vector of an HDAG kernel is defined as $\phi(G) = (\phi_1(G), \ldots, \phi_{|\Gamma^*|}(G))$, where $\phi$ represents the explicit feature mapping from an HDAG to the numerical feature space. The value of $\phi_i(G)$ is the weighted number of occurrences of $\gamma_i$ in $G$.
According to this approach, the HDAG kernel
$$K(G_1, G_2) = \sum_{i=1}^{|\Gamma^*|} \phi_i(G_1)\,\phi_i(G_2)$$
calculates the inner product of the weighted common HiASs in two HDAGs, $G_1$ and $G_2$. In this paper, we use "|" to mean "such that," for simplicity.
$$K_{HDAG}(G_1, G_2) = \sum_{\gamma_i \in \Gamma^*} \;\; \sum_{\gamma_i \in \tau(p^h_1)\,|\,p^h_1 \in \mathcal{P}(G_1)} \;\; \sum_{\gamma_i \in \tau(p^h_2)\,|\,p^h_2 \in \mathcal{P}(G_2)} W_{\gamma_i}(p^h_1)\,W_{\gamma_i}(p^h_2), \qquad (1)$$
where $W_{\gamma_i}(p^h)$ represents the weight of HiAS $\gamma_i$ in HiP $p^h$. This weight is determined by
$$W_{\gamma_i}(p^h) = \prod_{v \in V(p^h)} W_V(v) \prod_{e_{i,j} \in E(p^h)} W_E(v_i, v_j) \prod_{f_{i,G_j} \in F(p^h)} W_F(v_i, G_j) \prod_{a \in \tau(\gamma_i)} W_\Gamma(a), \qquad (2)$$
where $W_V(v)$, $W_E(v_i, v_j)$, $W_F(v_i, G_j)$, and $W_\Gamma(a)$ represent the weights of node $v$, the link from $v_i$ to $v_j$, the vertical link from $v_i$ to subgraph $G_j$, and attribute $a$, respectively.

Figure 3: An example of a hierarchical directed graph $G$ with weight factors: each label $L_i$, node $v_i$, directed link $e_{i,j}$, and vertical link $f_{i,G_j}$ carries its own weight (e.g., $L_1{:}0.4$, $v_1{:}1.0$, $e_{2,4}{:}0.6$, $f_{1,G_1}{:}0.8$).

An example of how each weight factor is given is shown in Figure 3. In the case of NL data, for example, $W_\Gamma(a)$ might be given by a tf·idf score computed from large-scale documents, $W_V(v)$ by the type of chunk (word, phrase, or named entity), $W_E(v_i, v_j)$ by the type of relation between $v_i$ and $v_j$, and $W_F(v_i, G_j)$ by the number of nodes in $G_j$.

Soft Structural Matching Frameworks

Since HDAG kernels permit not only exact matching of substructures but also approximate matching, we add the frameworks of node skip and relaxation of hierarchical information. First, we discuss the framework of the node skip.
We introduce a decay function $\Lambda_V(v)$ ($0 < \Lambda_V(v) \le 1$), which represents the cost of skipping node $v$ when extracting HiASs from the HiPs; this is almost the same architecture as [8]. For example, a HiAS under node skips is written as $\langle *\langle a_2, a_3 \rangle, *, \langle a_5 \rangle \rangle$ for HiP $\langle v_1\langle v_2, v_3 \rangle, v_4, \langle v_5 \rangle \rangle$, where $*$ is the explicit representation of a skipped node. Next, in the case of the relaxation of hierarchical information, we perform two processes: (1) we form one hierarchy if there is multiple hierarchy information at the same point, for example, $\langle\langle\langle a_i, a_j \rangle\rangle, a_k \rangle$ becomes $\langle\langle a_i, a_j \rangle, a_k \rangle$; and (2) we delete hierarchical information if there exists only one node, for example, $\langle\langle a_i \rangle, a_j, a_k \rangle$ becomes $\langle a_i, a_j, a_k \rangle$. These two frameworks achieve approximate substructure matching automatically. Table 1 shows an explicit representation of the common HiASs (features) of $G_1$ and $G_2$ in Figure 2. For the sake of simplicity, all the weights $W_V(v)$, $W_E(v_i, v_j)$, $W_F(v_i, G_j)$, and $W_\Gamma(a)$ are taken as 1, and for all $v$, $\Lambda_V(v) = \lambda$ if $v$ has at least one attribute, otherwise $\Lambda_V(v) = 1$.

Efficient Recursive Computation

In general, when the dimension of the feature space $|\Gamma^*|$ becomes very high, it is computationally infeasible to generate the feature vector $\phi(G)$ explicitly. We therefore define an efficient calculation formula between HDAGs $G_1$ and $G_2$:
$$K_{HDAG}(G_1, G_2) = \sum_{q \in Q} \sum_{r \in R} K(q, r), \qquad (3)$$

Table 1: Common HiASs of $G_1$ and $G_2$ in Figure 2 (N.S.: node skip; H.R.: relaxation of hierarchical information); for each entry, the table lists the HiAS with $*$ and its value in $G_1$ and in $G_2$, and the common HiAS and its value under N.S. and under N.S.+H.R.
G1 | G2 | N.S. | N.S.+H.R.
HiAS with ∗ HiAS value HiAS with ∗ HiAS value common HiAS value common HiAS value ⟨P ⟩ ⟨P ⟩ 2 ⟨P ⟩ ⟨P ⟩ 1 ⟨P ⟩ 2 ⟨P ⟩ 2 ⟨N⟩ ⟨N⟩ 1 ⟨N⟩ ⟨N⟩ 1 ⟨N⟩ 1 ⟨N⟩ 1 ⟨a⟩ ⟨a⟩ 2 ⟨a⟩ ⟨a⟩ 1 ⟨a⟩ 2 ⟨a⟩ 2 ⟨b⟩ ⟨b⟩ 1 ⟨b⟩ ⟨b⟩ 1 ⟨b⟩ 1 ⟨b⟩ 1 ⟨c⟩ ⟨c⟩ 1 ⟨c⟩ ⟨c⟩ 1 ⟨c⟩ 1 ⟨c⟩ 1 ⟨d⟩ ⟨d⟩ 1 ⟨d⟩ ⟨d⟩ 1 ⟨d⟩ 1 ⟨d⟩ 1 ⟨c, b⟩ ⟨c, b⟩ 1 ⟨c, b⟩ ⟨c, b⟩ 1 ⟨c, b⟩ 1 ⟨c, b⟩ 1 ⟨d, b⟩ ⟨d, b⟩ 1 ⟨⟨d⟩, ⟨⟨b⟩⟩⟩ ⟨⟨d⟩, ⟨⟨b⟩⟩⟩ 1 0 ⟨b, d⟩ 1 P ⟨a⟩ P ⟨a⟩ 2 P ⟨a⟩ P ⟨a⟩ 1 P ⟨a⟩ 2 P ⟨a⟩ 2 P ⟨c⟩ P ⟨c⟩ 1 P ⟨⟨c⟩⟩ P ⟨⟨c⟩⟩ 1 0 P ⟨c⟩ 1 ⟨∗⟨N⟩, ⟨∗⟩, ∗⟨a⟩⟩ ⟨⟨N⟩, ⟨a⟩⟩ λ3 ⟨⟨N⟩, ∗⟨a⟩⟩ ⟨⟨N⟩, ⟨a⟩⟩ λ ⟨⟨N⟩, ⟨a⟩⟩ λ4 ⟨N, a⟩ λ4 ⟨∗⟨N⟩, ⟨∗⟩, P ⟩ ⟨⟨N⟩, P ⟩ λ2 ⟨⟨N⟩, P ⟩ ⟨⟨N⟩, P ⟩ 1 ⟨⟨N⟩, P ⟩ λ2 ⟨N, P ⟩ λ2 ⟨N, b⟩ ⟨N, b⟩ 1 ⟨N, b⟩ ⟨N, b⟩ 1 ⟨N, b⟩ 1 ⟨N, b⟩ 1 ⟨∗⟨N⟩, ⟨d⟩⟩ ⟨⟨N⟩, ⟨d⟩⟩ λ ⟨⟨N⟩, ∗⟨⟨d⟩⟩⟩ ⟨⟨N⟩, ⟨⟨d⟩⟩⟩ λ 0 ⟨N, d⟩ λ2 ⟨∗⟨b⟩, ⟨∗⟩, ∗⟨a⟩⟩ ⟨⟨b⟩, ⟨a⟩⟩ λ3 ⟨⟨b⟩, ∗⟨a⟩⟩ ⟨⟨b⟩, ⟨a⟩⟩ λ ⟨⟨b⟩, ⟨a⟩⟩ λ4 ⟨b, a⟩ λ4 ⟨∗⟨b⟩, ⟨∗⟩, P ⟩ ⟨⟨b⟩, P ⟩ λ2 ⟨⟨b⟩, P ⟩ ⟨⟨b⟩, P ⟩ 1 ⟨⟨b⟩, P ⟩ λ2 ⟨b, P ⟩ λ2 ⟨∗⟨b⟩, ⟨d⟩⟩ ⟨⟨b⟩, ⟨d⟩⟩ λ ⟨⟨b⟩, ∗⟨⟨d⟩⟩⟩ ⟨⟨b⟩, ⟨⟨d⟩⟩⟩ λ 0 ⟨b, d⟩ λ2 ⟨∗⟨c⟩, ⟨∗⟩, ∗⟨a⟩⟩ ⟨⟨c⟩, ⟨a⟩⟩ λ3 ⟨⟨c⟩, a⟩ ⟨⟨c⟩, a⟩ 1 0 ⟨c, a⟩ λ3 ⟨∗⟨c⟩, ⟨d⟩⟩ ⟨⟨c⟩, ⟨d⟩⟩ λ ⟨c, d⟩ ⟨c, d⟩ 1 0 ⟨c, d⟩ λ ⟨⟨d⟩, ∗⟨a⟩⟩ ⟨⟨d⟩, ⟨a⟩⟩ λ ⟨⟨d⟩, a⟩ ⟨⟨d⟩, a⟩ 1 0 ⟨d, a⟩ λ ⟨∗⟨N⟩, ⟨∗⟩, P ⟨a⟩⟩ ⟨⟨N⟩, P ⟨a⟩⟩ λ2 ⟨⟨N⟩, P ⟨a⟩⟩ ⟨⟨N⟩, P ⟨a⟩⟩ 1 ⟨⟨N⟩, P ⟨a⟩⟩ λ2 ⟨N, P ⟨a⟩⟩ λ2 ⟨∗⟨b⟩, ⟨∗⟩, P ⟨a⟩⟩ ⟨⟨b⟩, P ⟨a⟩⟩ λ2 ⟨⟨b⟩, P ⟨a⟩⟩ ⟨⟨b⟩, P ⟨a⟩⟩ 1 ⟨⟨b⟩, P ⟨a⟩⟩ λ2 ⟨b, P ⟨a⟩⟩ λ2 ⟨∗⟨N, b⟩, ⟨∗⟩, ∗⟨a⟩⟩ ⟨⟨N, b⟩, ⟨a⟩⟩ λ3 ⟨⟨N, b⟩, ∗⟨a⟩⟩ ⟨⟨N, b⟩, ⟨a⟩⟩ λ ⟨⟨N, b⟩, ⟨a⟩⟩ λ4 ⟨⟨N, b⟩, a⟩ λ4 ⟨∗⟨N, b⟩, ⟨∗⟩, P ⟩ ⟨⟨N, b⟩, P ⟩ λ2 ⟨⟨N, b⟩, P ⟩ ⟨⟨N, b⟩, P ⟩ 1 ⟨⟨N, b⟩, P ⟩ λ2 ⟨⟨N, b⟩, P ⟩ λ2 ⟨∗⟨N, b⟩, ⟨d⟩⟩ ⟨⟨N, b⟩, ⟨d⟩⟩ λ ⟨⟨N, b⟩, ∗⟨⟨d⟩⟩⟩⟨⟨N, b⟩, ⟨⟨d⟩⟩⟩ λ 0 ⟨⟨N, b⟩, d⟩ λ2 ⟨∗⟨N, b⟩, ⟨∗⟩, P ⟨a⟩⟩⟨⟨N, b⟩, P ⟨a⟩⟩λ2 ⟨⟨N, b⟩, P ⟨a⟩⟩⟨⟨N, b⟩, P ⟨a⟩⟩ 1 ⟨⟨N, b⟩, P ⟨a⟩⟩λ2 ⟨⟨N, b⟩, P ⟨a⟩⟩λ2 where Q = {q1, . . . , q|Q|} and R = {r1, . . . , r|R|} represent nodes in G1 and G2, respectively. K(q, r) represents the sum of the weighted common HiASs that are extracted from the HiPs whose sink nodes are q and r. 
$$K(q, r) = J''_{G_1,G_2}(q, r)\,H(q, r) + \hat{H}(q, r)\,I(q, r) + I(q, r) \qquad (4)$$
The function $I(q, r)$ returns the weighted number of common attributes of nodes $q$ and $r$:
$$I(q, r) = W_V(q)\,W_V(r) \sum_{a_1 \in \tau(q)} \sum_{a_2 \in \tau(r)} W_\Gamma(a_1)\,W_\Gamma(a_2)\,\delta(a_1, a_2), \qquad (5)$$
where $\delta(a_1, a_2) = 1$ if $a_1 = a_2$, and $0$ otherwise. Let $H(q, r)$ be a function that returns the sum of the weighted common HiASs between $q$ and $r$, including $\Upsilon(q)$ and $\Upsilon(r)$:
$$H(q, r) = \begin{cases} I(q, r) + \big(I(q, r) + \Lambda_V(q)\Lambda_V(r)\big)\hat{H}(q, r), & \text{if } q, r \in \bar{T} \\ I(q, r), & \text{otherwise} \end{cases} \qquad (6)$$
$$\hat{H}(q, r) = \sum_{s \in G^1_i | G^1_i = \Upsilon(q)} \; \sum_{t \in G^2_j | G^2_j = \Upsilon(r)} W_F(q, G^1_i)\,W_F(r, G^2_j)\,J_{G^1_i,G^2_j}(s, t) \qquad (7)$$
Let $J_{x,y}(q, r)$, $J'_{x,y}(q, r)$, and $J''_{x,y}(q, r)$, where $x, y$ are (sub)graphs, be recursive functions used to calculate $H(q, r)$ and $K(q, r)$:
$$J_{x,y}(q, r) = J''_{x,y}(q, r)\,H(q, r) + H(q, r) \qquad (8)$$
$$J'_{x,y}(q, r) = \begin{cases} \displaystyle\sum_{t \in \{\psi(r) \cap V(y)\}} W_E(q, t)\big(\Lambda'_V(t)\,J'_{x,y}(q, t) + J_{x,y}(q, t)\big), & \text{if } \psi(r) \neq \emptyset \\ 0, & \text{otherwise} \end{cases} \qquad (9)$$
$$J''_{x,y}(q, r) = \begin{cases} \displaystyle\sum_{s \in \{\psi(q) \cap V(x)\}} W_E(s, r)\big(\Lambda'_V(s)\,J''_{x,y}(s, r) + J'_{x,y}(s, r)\big), & \text{if } \psi(q) \neq \emptyset \\ 0, & \text{otherwise} \end{cases} \qquad (10)$$
where $\Lambda'_V(v) = \Lambda_V(v)\prod_{t \in G_i | G_i = \Upsilon(v)} \Lambda_V(t)$ if $v \in \bar{T}$, and $\Lambda'_V(v) = \Lambda_V(v)$ otherwise. The function $\psi(q)$ returns the set of nodes that have direct links to node $q$; $\psi(q) = \emptyset$ means that no node has a direct link to $q$. Next, we show the formulas for the framework of relaxation of hierarchical information. The functions have the same meanings as in the previous formulas. We denote $\tilde{H}(q, r) = H(q, r) + H'(q, r)$.
$$K(q, r) = J''_{G_1,G_2}(q, r)\,\tilde{H}(q, r) + \big(H'(q, r) + H''(q, r)\big)I(q, r) + I(q, r) \qquad (11)$$
$$H(q, r) = \big(H'(q, r) + H''(q, r)\big)I(q, r) + H''(q, r) + I(q, r) \qquad (12)$$
$$H'(q, r) = \begin{cases} \displaystyle\sum_{t \in G^2_j | G^2_j = \Upsilon(r)} W_F(r, G^2_j)\,\tilde{H}(q, t), & \text{if } r \in \bar{T} \\ 0, & \text{otherwise} \end{cases} \qquad (13)$$
$$H''(q, r) = \begin{cases} \displaystyle\sum_{s \in G^1_i | G^1_i = \Upsilon(q)} W_F(q, G^1_i)\,H(s, r) + \hat{H}(q, r), & \text{if } q, r \in \bar{T} \\ \displaystyle\sum_{s \in G^1_i | G^1_i = \Upsilon(q)} W_F(q, G^1_i)\,H(s, r), & \text{if } q \in \bar{T} \\ 0, & \text{otherwise} \end{cases} \qquad (14)$$
$$J_{x,y}(q, r) = J''_{x,y}(q, r)\,\tilde{H}(q, r) \qquad (15)$$
$$J'_{x,y}(q, r) = \begin{cases} \displaystyle\sum_{t \in \{\psi(r) \cap V(y)\}} W_E(q, t)\big(\Lambda'_V(t)\,J'_{x,y}(q, t) + J_{x,y}(q, t) + \tilde{H}(q, t)\big), & \text{if } \psi(r) \neq \emptyset \\ 0, & \text{otherwise} \end{cases} \qquad (16)$$
The functions $I(q, r)$, $J''_{x,y}(q, r)$, and $\hat{H}(q, r)$ are the same as those shown above. Given the recursive definition of $K(q, r)$, the value of equation (3) between two HDAGs can be calculated in time $O(|Q||R|)$. In actual use, we may want to evaluate only the subset of all HiASs whose sizes are at most $n$ when determining the kernel value, because of the problem discussed in [1]. This can simply be realized by not calculating those HiASs whose size exceeds $n$ when calculating $K(q, r)$; the calculation cost becomes $O(n|Q||R|)$. Finally, we normalize the values of the HDAG kernels to remove any bias introduced by the number of nodes in the graphs. This normalization corresponds to the standard unit-norm normalization of examples in the feature space corresponding to the kernel space, $\hat{K}(x, y) = K(x, y) \cdot (K(x, x)K(y, y))^{-1/2}$ [4]. We will now elucidate an efficient processing algorithm. First, as a pre-process, the nodes are sorted under two conditions, $V(\Upsilon(v)) \prec v$ and $\Psi(v) \prec v$, where $\Psi(v)$ represents all nodes that have a path to $v$. The dynamic programming technique can then be used to compute HDAG kernels very efficiently: by following the sorted order, the values needed to calculate $K(q, r)$ have already been computed in earlier steps.
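The unit-norm normalization $\hat{K}(x, y) = K(x, y)(K(x, x)K(y, y))^{-1/2}$ can be illustrated with a toy substructure-counting kernel. In this sketch (ours, not the paper's algorithm), the "substructures" are just attribute bigrams of a node sequence, a simple stand-in for HiASs:

```python
import math
from collections import Counter

# Toy kernel: inner product of substructure counts, here attribute bigrams,
# followed by the unit-norm normalization K^(x,y) = K(x,y)/sqrt(K(x,x)K(y,y)).
def features(seq):
    return Counter(zip(seq, seq[1:]))

def kernel(x, y):
    fx, fy = features(x), features(y)
    return sum(fx[s] * fy[s] for s in fx)

def normalized_kernel(x, y):
    return kernel(x, y) / math.sqrt(kernel(x, x) * kernel(y, y))

a = ["N", "P", "a", "b"]
b = ["N", "P", "a", "c"]
assert normalized_kernel(a, a) == 1.0      # self-similarity is exactly 1
assert 0 < normalized_kernel(a, b) < 1     # shared bigrams give partial overlap
```

The normalization removes the bias toward larger inputs, which simply contain more substructures; the recursive formulas above compute the same kind of inner product without ever enumerating the feature space.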
4 Experiments

Our aim was to test the efficiency of using the richer syntactic and semantic structures available within texts, which can be treated now for the first time by our proposed method. We evaluated the performance of the proposed method on the actual NLP task of Question Classification, which is similar to Text Classification except that it requires many more semantic features within texts [7, 10]. We used three different QA data sets written in Japanese [10].

Figure 4: Examples of input data of the comparison methods, for the question "Who is prime minister of Japan ?": word order of attributes (Seq-K); dependency structures of attributes (DS-K, DAG-K); and hierarchical chunks and their relations (HDAG-K), with attributes including POS tags, NP chunks, named entity classes (e.g., Country), and semantic information (e.g., [executive], [Asian Country], [number]).

Table 2: Results of question classification by SVM with the comparison kernel functions, evaluated by F-measure (columns are values of n):
TIME TOP:     HDAG-K .951 .942 .926 | DAG-K .946 .913 .869 | DS-K .615 .564 .403 | Seq-K .946 .910 .866 | BOW-K .899 .906 .885 .853
LOCATION:     HDAG-K .802 .813 .784 | DAG-K .803 .774 .729 | DS-K .544 .507 .466 | Seq-K .792 .774 .733 | BOW-K .748 .772 .757 .745
ORGANIZATION: HDAG-K .716 .712 .697 | DAG-K .704 .671 .610 | DS-K .535 .509 .419 | Seq-K .706 .668 .595 | BOW-K .638 .690 .633 .571
NUMEX:        HDAG-K .916 .922 .874 | DAG-K .912 .880 .813 | DS-K .602 .504 .424 | Seq-K .913 .885 .815 | BOW-K .841 .846 .804 .719
We compared the performance of the proposed kernel, the HDAG Kernel (HDAG-K), with DAG kernels (DAG-K), Dependency Structure kernels (DS-K) [2], and sequence kernels (Seq-K) [9]. Moreover, we evaluated the bag-of-words kernel (BOW-K) [6], that is, bag-of-words with polynomial kernels, as the baseline method. The main difference between the methods is their ability to treat syntactic and semantic information within texts. Figure 4 shows the differences in input objects between the methods. For better understanding, these examples are shown in English. We used words, named entity tags, and semantic information [5] as attributes. Seq-K treats only word order, DS-K and DAG-K treat dependency structures, and HDAG-K treats the NP and NE chunks with their dependency structures. We used the same formula as our proposed method for DAG-K. Comparing HDAG-K to DAG-K shows the difference in performance between handling the hierarchical structures and not handling them. We extended Seq-K and DS-K to improve their total performance and to establish a more equal evaluation, under the same conditions, against our proposed method. Note that though DAG-K and DS-K handle input objects of the same form, their kernel calculation methods differ, as do their return values. We used node skip parameter $\Lambda_V(v) = 0.5$ for all nodes $v$ in each comparison. We used SVM [11] as the kernel-based machine learning algorithm. We evaluated the performance of the comparison methods on the question types TIME TOP, ORGANIZATION, LOCATION, and NUMEX, which are defined in the CRL QA-data¹. Table 2 shows the average F-measure as evaluated by 5-fold cross-validation. n in Table 2 indicates the threshold on the number of attributes; that is, we evaluated only those HiASs that contain fewer than n attributes for each kernel calculation. As shown in this table, HDAG-K showed the best performance in the experiments.
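The paper trains an SVM on precomputed kernel values. As a minimal stand-in for that setup (our illustration, not the authors' code), the sketch below runs a kernel perceptron on a precomputed Gram matrix; any PSD kernel matrix, such as one produced by the HDAG kernel, could be plugged in the same way:

```python
import numpy as np

# Toy stand-in for kernel-based classification with a precomputed Gram matrix.
# The paper uses SVM; a kernel perceptron keeps the sketch short.
def kernel_perceptron(K, y, epochs=10):
    """K: n x n Gram matrix, y: labels in {-1, +1}; returns dual coefficients."""
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        for i in range(len(y)):
            # update on a mistake (or a point exactly on the boundary)
            if y[i] * np.dot(alpha * y, K[:, i]) <= 0:
                alpha[i] += 1.0
    return alpha

# Separable 1-d toy data; the bias is absorbed as an extra constant feature.
X = np.array([[0.0, 1.0], [1.0, 1.0], [3.0, 1.0], [4.0, 1.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
K = X @ X.T                       # linear kernel; any PSD kernel matrix works
alpha = kernel_perceptron(K, y)
pred = np.where(K @ (alpha * y) > 0, 1.0, -1.0)
assert (pred == y).all()
```

Because the learner touches the data only through the Gram matrix, swapping the linear kernel for a structured one changes nothing else in the training loop.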
The experiments in this paper were designed to investigate how to improve performance by using the richer syntactic and semantic structures within texts. In the task of Question Classification, a given question is classified into a Question Type, which reflects the intention of the question. These results indicate that our approach, incorporating richer structural features within texts, is well suited to tasks in NLP applications. The original DS-K requires exact matching of the tree structure, even when it is extended for more flexible matching; this is why DS-K showed the worst performance in our experiments. The sequence, DAG, and HDAG kernels offer approximate matching through the framework of node skips, which produces better performance in tasks that evaluate the intention of the texts. The structure of HDAG approaches that of DAG if we do not consider the hierarchical structure. In addition, the structures of sequences and trees are entirely included in that of DAG. Thus, the HDAG kernel subsumes some of the discrete kernels, such as sequence, tree, and graph kernels.

1 http://www.cs.nyu.edu/~sekine/PROJECT/CRLQA/

5 Conclusions

This paper proposed HDAG kernels, which can handle more of the rich syntactic and semantic information present within texts. Our proposed method is a very general framework for handling structured natural language data. We evaluated the performance of HDAG kernels on the real NLP task of question classification. Our experiments showed that HDAG kernels offer better performance than sequence kernels, tree kernels, and the baseline bag-of-words kernel when the target task requires the use of the richer information within texts.

References

[1] M. Collins and N. Duffy. Convolution Kernels for Natural Language. In Proc. of Neural Information Processing Systems (NIPS 2001), 2001.
[2] M. Collins and N. Duffy. Parsing with a Single Neuron: Convolution Kernels for Natural Language Problems. In Technical Report UCSC-CRL-01-10.
UC Santa Cruz, 2001.
[3] C. Fellbaum. WordNet: An Electronic Lexical Database. MIT Press, 1998.
[4] D. Haussler. Convolution Kernels on Discrete Structures. In Technical Report UCSC-CRL-99-10. UC Santa Cruz, 1999.
[5] S. Ikehara, M. Miyazaki, S. Shirai, A. Yokoo, H. Nakaiwa, K. Ogura, Y. Oyama, and Y. Hayashi, editors. The Semantic Attribute System, Goi-Taikei — A Japanese Lexicon, volume 1. Iwanami Publishing, 1997. (in Japanese).
[6] T. Joachims. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. of the European Conference on Machine Learning (ECML '98), pages 137–142, 1998.
[7] X. Li and D. Roth. Learning Question Classifiers. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), pages 556–562, 2002.
[8] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text Classification Using String Kernels. Journal of Machine Learning Research, 2:419–444, 2002.
[9] N. Cancedda, E. Gaussier, C. Goutte, and J.-M. Renders. Word-Sequence Kernels. Journal of Machine Learning Research, 3:1059–1082, 2003.
[10] J. Suzuki, H. Taira, Y. Sasaki, and E. Maeda. Question Classification Using HDAG Kernel. In Workshop on Multilingual Summarization and Question Answering (2003), pages 61–68, 2003.
[11] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[12] C. Watkins. Dynamic Alignment Kernels. In Technical Report CSD-TR-98-11. Royal Holloway, University of London, Computer Science Department, 1999.
Near-Minimax Optimal Classification with Dyadic Classification Trees

Clayton Scott
Electrical and Computer Engineering, Rice University, Houston, TX 77005
cscott@rice.edu

Robert Nowak
Electrical and Computer Engineering, University of Wisconsin, Madison, WI 53706
nowak@engr.wisc.edu

Abstract

This paper reports on a family of computationally practical classifiers that converge to the Bayes error at near-minimax optimal rates for a variety of distributions. The classifiers are based on dyadic classification trees (DCTs), which involve adaptively pruned partitions of the feature space. A key aspect of DCTs is their spatial adaptivity, which enables local (rather than global) fitting of the decision boundary. Our risk analysis involves a spatial decomposition of the usual concentration inequalities, leading to a spatially adaptive, data-dependent pruning criterion. For any distribution on (X, Y) whose Bayes decision boundary behaves locally like a Lipschitz smooth function, we show that the DCT error converges to the Bayes error at a rate within a logarithmic factor of the minimax optimal rate. We also study DCTs equipped with polynomial classification rules at each leaf, and show that as the smoothness of the boundary increases their errors converge to the Bayes error at a rate approaching n^{-1/2}, the parametric rate. We are not aware of any other practical classifiers that provide similar rate of convergence guarantees. Fast algorithms for tree pruning are discussed.

1 Introduction

We previously studied dyadic classification trees, equipped with simple binary decision rules at each leaf, in [1]. There we applied standard structural risk minimization to derive a pruning rule that minimizes the empirical error plus a complexity penalty proportional to the square root of the size of the tree. Our main result concerned the rate of convergence of the expected error probability of our pruned dyadic classification tree to the Bayes error for a certain class of problems.
This class, which essentially requires the Bayes decision boundary to be locally Lipschitz, had previously been studied by Mammen and Tsybakov [2]. They showed the minimax rate of convergence for this class to be n^{-1/d}, where n is the number of labeled training samples and d is the dimension of each sample. They also demonstrated a classification rule achieving this rate, but the rule requires minimization of the empirical error over the entire class of decision boundaries, an infeasible task in practice. In contrast, DCTs are computationally efficient, but converge at a slower rate of n^{-1/(d+1)}. In this paper we exhibit a new pruning strategy that is both computationally efficient and realizes the minimax rate to within a log factor. Our approach is motivated by recent results from Kearns and Mansour [3] and Mansour and McAllester [4]. Those works develop a theory of local uniform convergence, which allows the error to be decomposed in a spatially adaptive way (unlike conventional structural risk minimization). In essence, the associated pruning rules allow a more refined partition in regions where the classification problem is harder (i.e., near the decision boundary). Heuristic arguments and anecdotal evidence in both [3] and [4] suggest that spatially adaptive penalties lead to improved performance compared to “global” penalties. In this work, we give theoretical support to this claim (for a specific kind of classification tree, the DCT) by showing a superior rate of convergence for DCTs pruned according to spatially adaptive penalties. We go on to study DCTs equipped with polynomial classification rules at each leaf. This provides more flexible classifiers that can take advantage of additional smoothness in the Bayes decision boundary. We call such a classifier a polynomial-decorated DCT (PDCT). PDCTs can be practically implemented by employing polynomial kernel SVMs at each leaf node of a pruned DCT.
For any distribution whose Bayes decision boundary behaves locally like a Hölder-γ smooth function, we show that the PDCT error converges to the Bayes error at a rate no slower than O((log n/n)^{γ/(d+2γ−2)}). As γ → ∞, the rate tends to within a log factor of the parametric rate, n^{-1/2}. Perceptron trees, tree classifiers having linear splits at each node, have been investigated by many authors; in particular we point to the works [5, 6]. Those works consider optimization methods and generalization errors associated with perceptron trees, but do not address rates of approximation and convergence. A key aspect of PDCTs is their spatial adaptivity, which enables local (rather than global) polynomial fitting of the decision boundary. Traditional polynomial kernel-based methods are not capable of achieving such rates of convergence due to their lack of spatial adaptivity, and it is unlikely that other kernels can solve this problem for the same reason. Consider approximating a Hölder-γ smooth function on a bounded domain with a single polynomial. The approximation error is then O(1), a constant, which is the best one could hope for in learning a Hölder smooth boundary with a traditional polynomial kernel scheme. On the other hand, if we partition the domain into hypercubes of side length O(1/m) and fit an individual polynomial on each hypercube, then the approximation error decays like O(m^{-γ}). Letting m grow with the sample size n guarantees that the approximation error will tend to zero; on the other hand, pruning back the partition helps to avoid overfitting. This is precisely the idea behind the PDCT.

2 Dyadic Classification Trees

In this section we review our earlier results on dyadic classification trees. Let X be a d-dimensional observation and Y ∈ {0, 1} its class label. Assume X ∈ [0, 1]^d. This is a realistic assumption for real-world data, provided appropriate translation and scaling have been applied.
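The O(m^{-γ}) decay of the piecewise approximation error discussed above can be checked numerically for the simplest case γ = 1 (a Lipschitz function) using piecewise-constant fits; this sketch is illustrative only and is not part of the authors' method:

```python
import math

def piecewise_const_error(f, m, samples_per_cell=100):
    # Worst-case error of the best piecewise-constant fit to f on m equal
    # cells of [0, 1]; the best constant on a cell is the midrange value.
    err = 0.0
    for i in range(m):
        lo, hi = i / m, (i + 1) / m
        xs = [lo + (hi - lo) * k / samples_per_cell
              for k in range(samples_per_cell + 1)]
        vals = [f(x) for x in xs]
        err = max(err, (max(vals) - min(vals)) / 2)
    return err

e8 = piecewise_const_error(math.sin, 8)
e16 = piecewise_const_error(math.sin, 16)
# The error roughly halves when m doubles, matching O(m^{-gamma}) with gamma = 1.
```

Fitting a degree ⌈γ⌉ − 1 polynomial per cell instead of a constant gives the faster O(m^{-γ}) decay for smoother functions, which is the role of the polynomial leaf rules in a PDCT.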
DCTs are based on the concept of a cyclic dyadic partition (CDP). Let P = {R_1, ..., R_k} be a tree-structured partition of the input space, where each R_i is a hyperrectangle with sides parallel to the coordinate axes. Given an integer ℓ, let [ℓ]_d denote the element of {1, ..., d} that is congruent to ℓ modulo d. If R_i ∈ P is a cell at depth j in the tree, let R_i^{(1)} and R_i^{(2)} be the rectangles formed by splitting R_i at its midpoint along coordinate [j+1]_d. A CDP is a partition P constructed according to the rules: (i) the trivial partition P = {[0, 1]^d} is a CDP; (ii) if {R_1, ..., R_k} is a CDP, then so is {R_1, ..., R_{i−1}, R_i^{(1)}, R_i^{(2)}, R_{i+1}, ..., R_k}, where 1 ≤ i ≤ k. The term "cyclic" refers to how the splits cycle through the coordinates of the input space as one traverses a path down the tree. We define a dyadic classification tree (DCT) to be a cyclic dyadic partition with a class label (0 or 1) assigned to each node in the tree. We use the notation T to denote a DCT.

Figure 1: Example of a dyadic classification tree when d = 2. (a) Training samples from two classes, and Bayes decision boundary. (b) Initial dyadic partition. (c) Pruned dyadic classification tree. Polynomial-decorated DCTs, discussed in Section 4, are similar in structure, but a polynomial decision rule is employed at each leaf of the pruned tree, instead of a simple binary label.

Figure 1(c) shows an example of a DCT in the two-dimensional case. Previously we presented a rule for pruning DCTs with consistency and rate of convergence properties. In this section we review those results, setting the stage for our main result in the next section. Let m = 2^J be a dyadic integer, and define T_0 to be the DCT that has every leaf node at depth dJ. Then each leaf of T_0 corresponds to a cube of side length 1/m, and T_0 has m^d total leaf nodes.
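The cyclic splitting rule can be sketched as follows (a minimal illustration, not the authors' code); with 0-indexed coordinates, the coordinate [j+1]_d for a cell at depth j becomes j mod d:

```python
def split_cell(cell, depth):
    # cell: list of (lo, hi) intervals, one per coordinate of the hyperrectangle.
    # A cell at depth j is bisected at its midpoint along coordinate [j+1]_d,
    # i.e. depth % d with 0-indexed axes, so the splits cycle through the axes.
    d = len(cell)
    axis = depth % d
    lo, hi = cell[axis]
    mid = (lo + hi) / 2
    left, right = list(cell), list(cell)
    left[axis] = (lo, mid)
    right[axis] = (mid, hi)
    return left, right
```

Splitting the unit square at depth 0 halves the first coordinate; each child, at depth 1, is then split along the second coordinate, and so on cyclically.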
Assume a training sample of size n is given, and each node of T_0 is labeled according to a majority vote with respect to the training data reaching that node. A subtree T of T_0 is referred to as a pruned subtree, denoted T ≤ T_0, if T includes the root of T_0, if every internal node of T has both its children in T, and if the nodes of T inherit their labels from T_0. The size of a tree T, denoted |T|, is the number of leaf nodes. We defined the complexity penalized dyadic classification tree T'_n to be the solution of

    T'_n = arg min_{T ≤ T_0} ϵ̂(T) + α_n √|T|,    (1)

where α_n = √(32 log(en)/n), and ϵ̂(T) is the empirical error, i.e., the fraction of training data misclassified by T. (The solution to this pruning problem can be computed efficiently [7].) We showed that if X ∈ [0, 1]^d with probability one, and m^d = o(n/log n), then E{ϵ(T'_n)} → ϵ* with probability one (i.e., T'_n is consistent). Here, ϵ(T) = P{T(X) ≠ Y} is the true error probability for T, and ϵ* is the Bayes error, i.e., the minimum error probability over all classifiers (not just trees). We also demonstrated a rate of convergence result for T'_n, under certain assumptions on the distribution of (X, Y). Let us recall the definition of this class of distributions. Again, let X ∈ [0, 1]^d with probability one.

Definition 1 Let c_1, c_2 > 0, and let m_0 be a dyadic integer. Define F = F(c_1, c_2, m_0) to be the collection of all distributions on (X, Y) such that

A1 (Bounded density): For any measurable set A, P{X ∈ A} ≤ c_1 λ(A), where λ denotes the Lebesgue measure.

A2 (Regularity): For all dyadic integers m ≥ m_0, if we subdivide the unit cube into cubes of side length 1/m, the Bayes decision boundary passes through at most c_2 m^{d−1} of the resulting m^d cubes.

These assumptions are satisfied when the density of X is essentially bounded with respect to Lebesgue measure, and when the Bayes decision boundary for the distribution on (X, Y) behaves locally like a Lipschitz function.
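The penalized objective of rule (1) can be transcribed directly (an illustrative sketch only; the efficient pruning algorithm of [7] is not shown):

```python
from math import e, log, sqrt

def penalized_risk(emp_err, tree_size, n):
    # Empirical error plus the complexity penalty alpha_n * sqrt(|T|) of
    # rule (1), with alpha_n = sqrt(32 * log(e * n) / n).
    alpha_n = sqrt(32 * log(e * n) / n)
    return emp_err + alpha_n * sqrt(tree_size)
```

Among candidate pruned subtrees, rule (1) selects the one minimizing this value, so a larger tree survives pruning only if it reduces the empirical error enough to offset the square-root penalty.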
See, for example, the boundary fragment class of [2] with γ = 1 therein. In [1], we showed that if the distribution of (X, Y) belongs to F, and m ∼ (n/log n)^{1/(d+1)}, then E{ϵ(T'_n)} − ϵ* = O((log n/n)^{1/(d+1)}). However, this upper bound on the rate of convergence is not tight. The results of Mammen and Tsybakov [2] show that the minimax rate of convergence, inf_{φ_n} sup_F E{ϵ(φ_n)} − ϵ*, is on the order of n^{-1/d} (here φ_n ranges over all possible discrimination rules). In the next section, we introduce a new strategy for pruning DCTs, which leads to an improved rate of convergence of (log n/n)^{1/d} (i.e., within a logarithmic factor of the minimax rate). We are not aware of other practically implementable classifiers that can achieve this rate.

3 Improved Tree Pruning with Spatially Adaptive Penalties

An improved rate of convergence is achieved by pruning the initial tree T_0 using a new complexity penalty. Given a node v in a tree T, let T_v denote the subtree of T rooted at v. Let S denote the training data, and let n_v denote the number of training samples reaching node v. Let R denote a pruned subtree of T; in the language of [4], R is called a root fragment. Let L(R) denote the set of leaf nodes of R. Consider the pruning rule that selects

    T_n = arg min_{T ≤ T_0} { ϵ̂(T) + min_{R ≤ T} ∆(T, S, R) },    (2)

where

    ∆(T, S, R) = Σ_{v ∈ L(R)} (1/n) [ √(48 n_v |T_v| log(2n)) + √(48 n_v d log(m)) ].

Observe that the penalty is data-dependent (since n_v depends on S) and spatially adaptive (choosing R ≤ T to minimize ∆). The penalty can be interpreted as follows. The first term in the penalty may be written Σ_{v ∈ L(R)} p̂_v √(48 |T_v| log(2n)/n_v), where p̂_v = n_v/n. This can be viewed as an empirical average of the complexity penalties for each of the subtrees T_v, which depend on the local data associated with each subtree. The second term can be interpreted as the "cost" of spatially decomposing the bound on the generalization error. The penalty has the following property.
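The spatially adaptive penalty ∆(T, S, R) is a sum over the leaves of the root fragment R; a minimal sketch (not the authors' code) makes its dependence on the local sample counts explicit:

```python
from math import log, sqrt

def spatial_penalty(leaf_stats, n, d, m):
    # leaf_stats: list of (n_v, size_Tv) pairs, one per leaf v of the root
    # fragment R, where n_v is the number of training samples reaching v and
    # size_Tv is the number of leaves of the subtree T_v hanging below v.
    total = 0.0
    for n_v, size_Tv in leaf_stats:
        total += (sqrt(48 * n_v * size_Tv * log(2 * n))
                  + sqrt(48 * n_v * d * log(m)))
    return total / n
```

Leaves reached by little data (small n_v) contribute little to the penalty, which is why the rule tolerates deep, finely resolved subtrees near the decision boundary.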
Consider pruning one of two subtrees, both of the same size, and assume that both options result in the same increase in the empirical error. Then the subtree with more data is selected for pruning. Since deeper nodes typically have less data, this shows that the penalty favors unbalanced trees, which may promote higher resolution (deeper leaf nodes) in the vicinity of the decision boundary. In contrast, the pruning rule (1) penalizes balanced and unbalanced trees (of the same size) equally. The following theorem bounds the expected error of T_n. This kind of bound is known as an index of resolvability result [3, 8]. Recall that m specifies the depth of the initial tree T_0.

Theorem 1 If m ∼ (n/log n)^{1/d}, then

    E{ϵ(T_n) − ϵ*} ≤ min_{T ≤ T_0} { (ϵ(T) − ϵ*) + E{ min_{R ≤ T} ∆(T, S, R) } } + O(√(log n / n)).

The first term in braces on the right is the approximation error. The remaining terms on the right-hand side bound the estimation error. Since the bound holds for all T, one feature of the pruning rule (2) is that T_n performs at least as well as the subtree T ≤ T_0 that minimizes the bound. This theorem may be applied to give us our desired rate of convergence result.

Theorem 2 Assume the distribution of (X, Y) belongs to F. If m ∼ (n/log n)^{1/d}, then E{ϵ(T_n)} − ϵ* = O((log n/n)^{1/d}).

In other words, the pruning rule (2) comes within a log factor of the minimax rate. These theorems are proved in the last section.

4 Faster Rates for Smoother Boundaries

In this section we extend Theorem 2 to the case of smoother decision boundaries. Define G = G(γ, c_1, c_2, m_0) ⊂ F(c_1, c_2, m_0) to be those distributions on (X, Y) satisfying the following additional assumption, where γ ≥ 1 is fixed.

A3 (γ-regularity): Subdivide [0, 1]^d into cubes of side length 1/m, m ≥ m_0. Within each cube the Bayes decision boundary is described by a function (one coordinate is a function of the others) with Hölder regularity γ.
The collection G contains all distributions whose Bayes decision boundaries behave locally like the graph of a function with Hölder regularity γ. The "boundary fragments" class of Mammen and Tsybakov is a special case of boundaries satisfying A1 and A3. We propose a classifier, called a polynomial-decorated dyadic classification tree (PDCT), that achieves fast rates of convergence for distributions satisfying A3. Given a positive integer r, a PDCT of degree r is a DCT with class labels at each leaf node assigned by a degree r polynomial classifier. Consider the pruning rule that selects

    T_{n,r} = arg min_{T ≤ T_0} { ϵ̂(T) + min_{R ≤ T} ∆_r(T, S, R) },    (3)

where

    ∆_r(T, S, R) = Σ_{v ∈ L(R)} (1/n) [ √(48 n_v V_{d,r} |T_v| log(2n)) + √(48 n_v (d + γ) log(m)) ].

Here V_{d,r} = (d+r choose r) is the VC dimension of the collection of degree r polynomial classifiers in d dimensions. Also, the notation T ≤ T_0 in (3) is rough: we actually consider a search over all pruned subtrees of T_0, with all possible configurations of degree r polynomial classifiers at the leaf nodes. An index of resolvability result analogous to Theorem 1 for T_{n,r} can be derived. Moreover, if r = ⌈γ⌉ − 1, then a decision boundary with Hölder regularity γ is well approximated by a PDCT of degree r. In this case, T_{n,r} converges to the Bayes risk at rates bounded by the next theorem.

Theorem 3 Assume the distribution of (X, Y) belongs to G and that r = ⌈γ⌉ − 1. If m ∼ (n/log n)^{1/(d+2γ−2)}, then E{ϵ(T_{n,r})} − ϵ* = O((log n/n)^{γ/(d+2γ−2)}).

Note that in the case γ = 1 this result coincides with the near-minimax rate in Theorem 2. Also notice that as γ → ∞, the rate of convergence comes within a logarithmic factor of the parametric rate n^{-1/2}. The proof is discussed in the final section.

5 Efficient Algorithms

The optimally pruned subtree T_n of rule (2) can be computed exactly in O(|T_0|^2) operations. This follows from a simple bottom-up dynamic programming algorithm, which we describe below, and uses a method for "square-root" pruning studied in [7].
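The VC dimension V_{d,r} is simply a binomial coefficient, equal to the number of monomials of degree at most r in d variables; a one-line check, illustrative only:

```python
from math import comb

def vc_dim_poly(d, r):
    # VC dimension of degree-r polynomial classifiers in d dimensions:
    # the binomial coefficient C(d + r, r), i.e. the number of monomials
    # of degree <= r in d variables.
    return comb(d + r, r)
```

For example, vc_dim_poly(2, 1) recovers the familiar VC dimension 3 of linear classifiers (with offset) in the plane; higher degrees enlarge the first penalty term in (3) accordingly.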
In the context of Theorem 2, we have |T_0| = m^d ∼ n, so the algorithm runs in time O(n^2). Note that an algorithm for finding the optimal R ≤ T was provided in [4]. We now describe an algorithm for finding both the optimal T ≤ T_0 and R ≤ T solving (2). Given a node v ∈ T_0, let T*_v be the subtree of T_0 rooted at v that minimizes the objective function of (2), and let R*_v be the associated subtree that minimizes ∆(T*_v, S, R). The problem is solved by finding T*_root and R*_root using a bottom-up procedure. If v is a leaf node of T_0, then clearly T*_v = R*_v = {v}. If v is an internal node, denote the children of v by u and w. There are three cases for T*_v and R*_v: (i) |T*_v| = |R*_v| = 1, in which case T*_v = R*_v = {v}; (ii) |T*_v| ≥ |R*_v| > 1, in which case T*_v and R*_v can be computed by merging T*_u with T*_w and R*_u with R*_w, respectively; (iii) |T*_v| > |R*_v| = 1, in which case R*_v = {v}, and T*_v is determined by solving a square-root pruning problem, just like the one in (1). At each node, these three candidates are determined, and T*_v and R*_v are the candidates minimizing the objective function (empirical error plus penalty) at that node. Using the first algorithm in [7], the overall pruning procedure may be accomplished in O(|T_0|^2) operations. Determining the optimally pruned degree r PDCT is more challenging. The problem requires the construction, at each node of T_0, of a polynomial classifier of degree r having minimum empirical error. Unfortunately, this task is computationally infeasible for large sample sizes. As an alternative, we recommend the use of polynomial support vector machines. SVMs are well known for their good generalization ability in practical problems. Moreover, linear SVMs in perceptron trees have been shown to work well [6].

6 Conclusions

A key aspect of DCTs is their spatial adaptivity, which enables local (rather than global) fitting of the decision boundary.
Our risk analysis involves a spatial decomposition of the usual concentration inequalities, leading to a spatially adaptive, data-dependent pruning criterion that promotes unbalanced trees focused on the decision boundary. For distributions on (X, Y) whose Bayes decision boundary behaves locally like a Hölder-γ smooth function, we show that the PDCT error converges to the Bayes error at a rate no slower than O((log n/n)^{γ/(d+2γ−2)}). Polynomial kernel methods are not capable of achieving such rates due to their lack of spatial adaptivity. When γ = 1, the DCT convergence rate is within a logarithmic factor of the minimax optimal rate. As γ → ∞, the rate tends to within a log factor of n^{-1/2}, the parametric rate. However, the rates for γ > 1 are not within a logarithmic factor of the minimax rate [2]. It may be possible to tighten the bounds further. On the other hand, near-minimax rates might not be achievable using rectangular partitions, and more flexible partitioning schemes, such as adaptive triangulations, may be required.

7 Proof Sketches

The key to proving Theorem 1 is the following result, which is a modified version of a theorem of Mansour and McAllester [4].

Lemma 1 Let δ ∈ (0, 1). With probability at least 1 − δ, every T ≤ T_0 satisfies

    ϵ(T) ≤ ϵ̂(T) + min_{R ≤ T} f(T, S, R, δ),

where

    f(T, S, R, δ) = Σ_{v ∈ L(R)} (1/n) [ √(48 n_v |T_v| log(2n)) + √(24 n_v [d log(m) + log(3/δ)]) + 2[d log(m) + log(3/δ)] ].

Our primary modification to the lemma is to replace one local uniform deviation inequality (which holds for countable collections of classifiers [4, Lemma 4]) with another (which holds for infinite collections of classifiers [3, Lemma 2]). This eases our extension to polynomial-decorated DCTs in Section 4, by allowing us to avoid tedious quantization arguments. To prove Theorem 1, define the event Ω_m to be the collection of all training samples S such that for all T ≤ T_0, the bound of Lemma 1 holds with δ = 3/m^d. By that lemma, P(Ω_m) ≥ 1 − 3/m^d. Let T ≤ T_0 be arbitrary.
We have

    E{ϵ(T_n) − ϵ(T)} = P(Ω_m) E{ϵ(T_n) − ϵ(T) | Ω_m} + P(Ω_m^c) E{ϵ(T_n) − ϵ(T) | Ω_m^c}
                     ≤ E{ϵ(T_n) − ϵ(T) | Ω_m} + 3/m^d.

Given S ∈ Ω_m, we know

    ϵ(T_n) ≤ ϵ̂(T_n) + min_{R ≤ T_n} f(T_n, S, R, 3m^{−d})
           = ϵ̂(T_n) + min_{R ≤ T_n} ∆(T_n, S, R) + 4d log(m)/n
           ≤ ϵ̂(T) + min_{R ≤ T} ∆(T, S, R) + 4d log(m)/n,

where the last inequality comes from the definition of T_n. From Chernoff's inequality, we know P{ϵ̂(T) ≥ ϵ(T) + t} ≤ e^{−2nt^2}. By applying this bound, and the fact E{Z} ≤ ∫_0^∞ P{Z > t} dt, the theorem is proved. □

7.1 Proof of Theorem 2

By Theorem 1, it suffices to find a tree T* ≤ T_0 such that

    E{ min_{R ≤ T*} ∆(T*, S, R) } + (ϵ(T*) − ϵ*) = O((log n/n)^{1/d}).

Define T* to be the tree obtained by pruning back T_0 at every node (thought of as a region of space) that does not intersect the Bayes decision boundary. It can be shown without much difficulty that ϵ(T*) − ϵ* = O((log n/n)^{1/d}) [9, Lemma 1]. It remains to bound the estimation error. Recall that T_0 (and hence T*) has depth Jd, where J = log_2(m). Define R* to be the pruned subtree of T* consisting of all nodes in T* up to depth j_0 d, where j_0 = J − (1/d) log_2(J) (truncated if necessary). Let Ω_v be the set of all training samples such that √n_v ≤ 2√(n p_v). Let Ω be the set of all training samples S such that S ∈ Ω_v for all v ∈ L(R*). Now

    E{ min_{R ≤ T*} ∆(T*, S, R) } ≤ P(Ω) E{ min_{R ≤ T*} ∆(T*, S, R) | Ω } + P(Ω^c) E{ min_{R ≤ T*} ∆(T*, S, R) | Ω^c }.

It can be shown, by applying the union bound, A2, and a theorem of Okamoto [10], that P(Ω^c) = O((log n/n)^{1/d}). Moreover, the second expectation on the right is easily seen to be O(1) by considering the root fragment consisting of only the root node. Hence it remains to bound the first term on the right-hand side. We use P(Ω) ≤ 1, and focus on bounding the expectation. It can be shown, assuming S ∈ Ω, that ∆(T*, S, R*) = O((log n/n)^{1/d}). It suffices to bound the first term of ∆(T*, S, R*), which clearly dominates the second term.
The first term, consisting of a sum of terms over the leaf nodes of R*, is dominated by the sum of those terms over the leaf nodes of R* at depth j_0 d. The number of such nodes may be bounded by assumption A2. The remaining expression is bounded using assumptions A1 and A2, as well as the definitions of T*, R*, and Ω.

7.2 Proof of Theorem 3

The estimation error is increased by a constant factor ∝ √V_{d,r}, so its asymptotic analysis remains unchanged. The only significant change is in the analysis of the approximation error. The tree T* is defined as in the previous proof. Recall that the leaf nodes of T* at maximum depth are cells of side length 1/m. By a simple Taylor series argument, the approximation error ϵ(T*) − ϵ* behaves like m^{−γ}. The remainder of the proof is essentially the same as the proof of Theorem 2.

Acknowledgments

This work was partially supported by the National Science Foundation, grant nos. MIP-9701692 and ANI-0099148, the Army Research Office, grant no. DAAD19-99-1-0349, and the Office of Naval Research, grant no. N00014-00-1-0390.

References

[1] C. Scott and R. Nowak, “Dyadic classification trees via structural risk minimization,” in Advances in Neural Information Processing Systems 14, S. Becker, S. Thrun, and K. Obermayer, Eds., Cambridge, MA, 2002, MIT Press.
[2] E. Mammen and A. B. Tsybakov, “Smooth discrimination analysis,” Annals of Statistics, vol. 27, pp. 1808–1829, 1999.
[3] M. Kearns and Y. Mansour, “A fast, bottom-up decision tree pruning algorithm with near-optimal generalization,” in International Conference on Machine Learning, 1998, pp. 269–277.
[4] Y. Mansour and D. McAllester, “Generalization bounds for decision trees,” in Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, Palo Alto, California, Nicolò Cesa-Bianchi and Sally A. Goldman, Eds., 2000, pp. 69–74.
[5] K. Bennett and J.
Blue, “A support vector machine approach to decision trees,” in Proceedings of the IEEE International Joint Conference on Neural Networks, Anchorage, Alaska, 1998, vol. 41, pp. 2396–2401.
[6] K. Bennett, N. Cristianini, J. Shawe-Taylor, and D. Wu, “Enlarging the margins in perceptron decision trees,” Machine Learning, vol. 41, pp. 295–313, 2000.
[7] C. Scott, “Tree pruning using a non-additive penalty,” Tech. Rep. TREE 0301, Rice University, 2003, available at http://www.dsp.rice.edu/~cscott/pubs.html.
[8] A. Barron, “Complexity regularization with application to artificial neural networks,” in Nonparametric Functional Estimation and Related Topics, G. Roussas, Ed., pp. 561–576. NATO ASI Series, Kluwer Academic Publishers, Dordrecht, 1991.
[9] C. Scott and R. Nowak, “Complexity-regularized dyadic classification trees: Efficient pruning and rates of convergence,” Tech. Rep. TREE0201, Rice University, 2002, available at http://www.dsp.rice.edu/~cscott/pubs.html.
[10] M. Okamoto, “Some inequalities relating to the partial sum of binomial probabilities,” Annals of the Institute of Statistical Mathematics, vol. 10, pp. 29–35, 1958.
Bounded invariance and the formation of place fields

Reto Wyss and Paul F.M.J. Verschure
Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
rwyss,pfmjv@ini.phys.ethz.ch

Abstract

One current explanation of the view-independent representation of space by the place cells of the hippocampus is that they arise out of the summation of view-dependent Gaussians. This proposal assumes that visual representations show bounded invariance. Here we investigate whether a recently proposed visual encoding scheme called the temporal population code can provide such representations. Our analysis is based on the behavior of a simulated robot in a virtual environment containing specific visual cues. Our results show that the temporal population code provides a representational substrate that can naturally account for the formation of place fields.

1 Introduction

Pyramidal cells in the CA3 and CA1 regions of the rat hippocampus have been shown to be selectively active depending on the animal's position within an environment [1]. The ensemble of locations where such a cell fires – the place field – can be determined by a combination of different environmental and internal cues [2], where vision has been shown to be of particular importance [3]. This raises the question of how egocentric representations of visual cues can give rise to an allocentric representation of space. Recently it has been proposed that a place field is formed by the summation of Gaussian tuning curves, each oriented perpendicular to a wall of the environment and peaked at a fixed distance from it [4, 5, 6]. While this proposal tries to explain the actual transformation from one coordinate system to another, it does not account for the problem of how appropriate egocentric representations of the environment are formed.
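The summation-of-Gaussians proposal [4, 5, 6] can be sketched as follows; the peak distances and width below are hypothetical values chosen for illustration, not parameters from the cited models:

```python
import math

def place_field(x, y, peak=(0.3, 0.7), sigma=0.1):
    # Sum of two Gaussian tuning curves, one per wall direction, each peaked
    # at a fixed (hypothetical) distance from its wall; x and y are the
    # distances to two perpendicular walls of a unit square arena.
    gx = math.exp(-(x - peak[0]) ** 2 / (2 * sigma ** 2))
    gy = math.exp(-(y - peak[1]) ** 2 / (2 * sigma ** 2))
    return gx + gy
```

The cell's response peaks at a single location in the arena, which is what makes the summed tuning curves behave like a place field; the open question addressed in this paper is where the required wall-distance signals come from.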
Thus, it is unclear how the information about a rat's distance to different walls becomes available, and in particular how this proposal would generalize to other environments where more advanced visual skills, such as cue identification, are required. For an agent moving in an environment, visual percepts of objects/cues undergo a combination of transformations comprising zooming and rotation in depth. Thus, the question arises of how to construct a visual detector which has a Gaussian-like tuning with regard to the positions within the environment from which snapshots of a visual cue are taken. The internal representation of a stimulus, upon which such a detector is based, should be tolerant to certain degrees of visual deformation without losing specificity or, in other words, should show a bounded invariance.

Figure 1: Place cells from multiple snapshots. The robot is placed in a virtual square environment with four patterns on the walls, i.e. a square, a triangle, a Z and an X. The robot scans the environment for salient stimuli by rotating in place. A saliency detector triggers the acquisition of visual snapshots which are subsequently transformed into TPCs. A place cell is defined through its associated TPC templates.

In this study we show that a recently proposed cortical model of visual pattern encoding, the temporal population code (TPC), directly supports this notion of bounded invariance [7]. The TPC is based on the notion that a cortical network can be seen to transform a spatial pattern into a purely temporal code. Here, we investigate to what extent the bounded invariance provided by the TPC can be exploited for the formation of place fields. We address this question in the context of a virtual robot behaving in an environment containing several visual cues.
Our results show that the combination of a simple saliency mechanism with the TPC naturally gives rise to allocentric representations of space, similar to the place fields observed in the hippocampus.

2 Methods

2.1 The experimental setup

Experiments are performed using a simulated version of the real-world robot Khepera (K-Team, Lausanne, Switzerland), programmed in C++ using OpenGL. The robot has a circular body with two wheels attached to its sides, each controlled by an individual motor. The visual input is provided by a camera with a viewing angle of 60° mounted on top of the robot. The neural networks are simulated on a Linux computer using a neural network simulator programmed in C++. The robot is placed in a square arena (fig. 1, left), and in the following, all lengths will be given in units of the side length of the square environment.

2.2 The temporal population code

Visual information is transformed into a TPC by a network of laterally coupled cortical columns, each selective to one of four orientations ψ ∈ {0°, 45°, 90°, 135°} and one of three spatial frequencies ν ∈ {high, medium, low} [7]. The outputs of the network are twelve vectors A_{ψ,ν}, each reflecting the average population activity recorded over 100 time-steps for each type of cortical column. These vectors are reduced to three vectors A_ν by concatenating the four orientations. This set of vectors forms the TPC, which represents a single snapshot of a visual scene. The similarity S(s_1, s_2) between two snapshots s_1 and s_2 is defined as the average correlation ρ between the corresponding vectors, i.e.

    S(s_1, s_2) = ⟨ Z(ρ(A_ν^{s_1}, A_ν^{s_2})) ⟩_{∀ν}    (1)

where Z is the Fisher Z-transform, given by Z(ρ) = (1/2) ln((1 + ρ)/(1 − ρ)), which transforms a typically skewed distribution of correlation coefficients ρ into an approximately normal distribution of coefficients. Thus, Z(ρ) becomes a measure on a proportional scale such that mean values are well defined.
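A minimal sketch of the similarity of eq. (1) (illustrative only; the TPC activity vectors themselves come from the cortical network model, which is not shown here):

```python
import math

def fisher_z(rho):
    # Fisher Z-transform: Z(rho) = 0.5 * ln((1 + rho) / (1 - rho)).
    return 0.5 * math.log((1 + rho) / (1 - rho))

def correlation(a, b):
    # Pearson correlation coefficient of two equal-length activity vectors.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def similarity(s1, s2):
    # s1, s2: lists of per-frequency vectors A_nu for two snapshots; the
    # similarity is the Z-transformed correlation averaged over nu (eq. 1).
    return sum(fisher_z(correlation(a, b)) for a, b in zip(s1, s2)) / len(s1)
```

Averaging in Z-space rather than averaging the raw correlations is what makes the mean well defined, since Z maps the bounded, skewed coefficients onto an approximately normal scale.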
2.3 Place cells from multiple snapshots

In this study, the response properties of a place cell are given by the similarity between incoming snapshots of the environment and template snapshots associated with the place cell when it was constructed. Thus, for both the acquisition of place cells and their exploitation, the system needs to be provided with snapshots of its environment that contain visual features. For this purpose, the robot is equipped with a simple visual saliency detector s(t) that selects scenes with high central contrast:

s(t) = Σ_y e^{−|y|²} c(y, t)² / Σ_y c(y, t)²

where c(y, t) denotes the contrast at location y ∈ [−1, +1]² in the image at time t. At each point in time where s(t) > θ_saliency, a new snapshot is acquired with a probability of 0.1. A place cell k is defined by n snapshots called templates t_i^k with i = 1 . . . n. Whenever the robot tries to localize itself, it scans the environment by rotating in place and taking snapshots of visually salient scenes (fig. 1). The similarity S between each incoming snapshot s_j with j = 1 . . . m and every template t_i^k is determined using eq. 1. The activation a_k of place cell k for a series of m snapshots s_j is then given by a sigmoidal function

a_k(i_k) = [1 + exp(−β(i_k − θ))]^{−1}  where  i_k = ⟨ max_i S(t_i^k, s_j) ⟩_j.    (2)

i_k represents the input to the place cell, which is computed by determining the maximal similarity of each snapshot to any template of the place cell and subsequently averaging, i.e. ⟨·⟩_j corresponds to the average over all snapshots j.

2.4 Position reconstruction

There are many different approaches to the problem of position reconstruction or decoding from place cell activity [8]. A basis function method uses a linear combination of basis functions φ_k(x) with coefficients proportional to the activities of the place cells a_k. Here we use a direct basis approach, i.e. the basis function φ_k(x) directly corresponds to the average activation a_k of place cell k at position x within the environment.
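The activation rule of eq. 2 can be sketched as follows (the gain β and threshold θ values are illustrative, as the text does not specify them; the similarity function is passed in as an argument):

```python
import numpy as np

def place_cell_activation(templates, snapshots, similarity, beta=5.0, theta=0.5):
    # Eq. 2: for each incoming snapshot, take the maximal similarity to
    # any template of the cell, average over all snapshots to obtain the
    # input i_k, and pass it through a sigmoid with gain beta and
    # threshold theta.
    per_snapshot = [max(similarity(t, s) for t in templates) for s in snapshots]
    i_k = float(np.mean(per_snapshot))
    return 1.0 / (1.0 + np.exp(-beta * (i_k - theta)))
```

For instance, with a toy similarity based on negative distance, snapshots taken near one of the cell's templates drive the cell harder than distant ones.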
The reconstructed position x̂ is then given by

x̂ = argmax_x Σ_k a_k φ_k(x)

The reconstruction error is given by the distance between the reconstructed and true position, averaged over all positions within the environment.

Figure 2: Similarity surfaces for the four different cues. Similarity between a reference snapshot of the different cues taken at the position marked by the white cross and all the other positions surrounding the reference location.

2.5 Place field shape and size

In order to investigate the shape of a place field φ(x), and in particular to determine its degree of asymmetry and its size, we computed the two-dimensional normalized inertia tensor I given by

I_ij = Σ_r φ(r) (δ_ij r² − r_i r_j) / Σ_r φ(r)  with  r = (r1, r2) = x − x̂

where x̂ = Σ_x x φ(x) / Σ_x φ(x) corresponds to the "center of gravity" and δ_ij is the Kronecker delta. I is symmetric and can therefore be diagonalized, i.e. I = VᵀDV, such that V is an orthonormal transformation matrix and D_ii > 0 for i = 1, 2. A measure of the half-width of the place field along its two principal axes is then d_i = √(2 D_ii), such that a measure of asymmetry is given by

0 ≤ (d1 − d2)/(d1 + d2) ≤ 1

This measure becomes zero for symmetric place fields while approaching one for asymmetric ones. In addition, we can estimate the size of the place field by approximating its shape by an ellipse, i.e. π d1 d2.

3 Results

3.1 Bounded invariance

Initially, we investigate the topological properties of the temporal population coding space. Depending on the position within an environment, visual stimuli undergo a geometric transformation which is a combination of scaling and rotation in depth. Fig. 2 shows the similarity to a reference snapshot taken at the location of the white cross for the four different cues. Although the precise shape of the similarity surface differs, the similarity decreases smoothly and monotonically with increasing distance to the reference point for all stimuli.
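The inertia-tensor measures of section 2.5 can be sketched numerically (a sketch assuming the place field φ is sampled on a regular grid):

```python
import numpy as np

def place_field_shape(phi, xs, ys):
    # Normalized 2-D inertia tensor
    #   I_ij = sum_r phi(r) (delta_ij r^2 - r_i r_j) / sum_r phi(r),
    # with r measured from the field's center of gravity. Returns the
    # asymmetry (d1 - d2)/(d1 + d2) and the ellipse-area size pi*d1*d2,
    # where d_i = sqrt(2 * D_ii) are the eigenvalue-derived half-widths.
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    w = phi / phi.sum()
    rx, ry = X - (w * X).sum(), Y - (w * Y).sum()   # offsets from center of gravity
    r2 = rx**2 + ry**2
    I = np.array([[(w * (r2 - rx * rx)).sum(), -(w * rx * ry).sum()],
                  [-(w * ry * rx).sum(),        (w * (r2 - ry * ry)).sum()]])
    D = np.sort(np.linalg.eigvalsh(I))[::-1]        # eigenvalues, descending
    d = np.sqrt(2.0 * D)
    return (d[0] - d[1]) / (d[0] + d[1]), np.pi * d[0] * d[1]
```

A rotationally symmetric Gaussian field yields an asymmetry near zero, while an elongated field yields a value closer to one, as the text describes.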
The similarity surface for different locations of the reference point is shown in fig. 3 for the Z cue. Although the Z cue has no vertical mirror symmetry, the similarity surfaces are nearly symmetric with respect to the vertical center line. Thus, using a single cue, localization is only possible modulo a mirror along the vertical center. The implications of this will be discussed later.

Figure 3: Similarity surface of the Z cue for different reference points. The distance/angle of the reference point to the cue is kept constant along the rows/columns respectively.

Concerning different distances of the reference point to the stimulus, fig. 3 (along the columns) shows that the specificity of the similarity measure is large for small distances while the tuning becomes broader for large distances. This is a natural consequence of the perspective projection, which implies that the changes in visual perception due to different viewing positions are inversely proportional to the viewing distance.

3.2 Place cells from multiple snapshots

The response of a place cell is determined by eq. 2 based on four associated snapshots/templates taken at the same location within the environment. The templates for each place cell are chosen by the saliency detector, and therefore there is no explicit control over the actual snapshots defining a place cell, i.e. some place cells are defined based on two or more templates of the same cue. Furthermore, the stochastic nature of the saliency detector does not allow for any control over the precise position of the stimulus within the visual field. This is where the intrinsic translation invariance of the temporal population code plays an important role, i.e.
the precise position of the stimulus within the visual field at the time of the snapshot has no effect on the resulting encoding as long as the whole stimulus is visible. Fig. 4 shows examples of the receptive fields (subsequently also called place fields) of such place cells acquired at the nodes of a regular 5 × 5 lattice within the environment. Most of the place fields have a Gaussian-like tuning which is compatible with single-cell recordings from pyramidal cells in CA3 and CA1 [2], i.e. the place cells respond maximally close to their associated positions and degrade smoothly and monotonically for increasing distances. Some place cells have multiple subfields in that they respond to different locations in the environment with a similar amplitude.

3.3 Position reconstruction

Subsequently, we determine the accuracy with which the robot can be localized within the environment. To this end, we use the direct basis approach for position reconstruction as described in the Methods. As basis functions we take the normalized response profiles of place cells constructed from four templates taken at the nodes of a regular lattice covering the environment.

Figure 4: Place fields of 5 × 5 place cells. The small squares show the average response of 5 × 5 different place cells for all the positions of the robot within the environment. Darker regions correspond to stronger responses. The relative location of each square within the figure corresponds to the associated location of the place cell within the environment. All place fields are scaled to a common maximal response.

Fig. 5a shows the reconstruction error averaged over the environment as a function of the number of place cells as well as the number of snapshots taken at each location. The reconstruction error decreases monotonically both with an increasing number of place cells and an increasing number of snapshots. An asymptotic reconstruction error is approached very fast, i.e.
for more than 25 place cells and more than two snapshots per location. Thus, for a behaving organism exploring an unknown environment, this implies that a relatively sparse exploration strategy suffices to create a complete representation of the new environment. Above we have seen that localization with a single snapshot is only possible modulo a mirror along the axis where the cue is located. The systematic reconstruction error introduced by this shortcoming can be determined analytically and is ≈ 0.13 in units of the side length of the square environment. For an increasing number of snapshots, the probability that all snapshots are from the same pair of opposite cues decreases exponentially fast, and we therefore expect the systematic error to vanish as well. Considering 100 place cells, the difference in reconstruction error between 1 and 10 snapshots amounts to 0.147 ± 0.008 (mean ± SD), which is close to the predicted systematic error due to the effect discussed above. Thus, an increasing number of snapshots primarily helps to resolve ambiguities due to the symmetry properties of the TPC.

3.4 Place field shape

Fig. 5b-c shows scatter plots of both place field asymmetry and size versus the distance of the place field's associated location from the center of the environment. There is a tendency for off-center place cells to have more asymmetric place fields than cells closer to the center (r=0.32), which is in accordance with experimental results [5]. Regarding place field size, there is no direct relation to the associated position of the place field (r=0.08), apart from the fact that the variance is maximal for intermediate distances from the center. It must be noted, however, that the size of the place field critically depends on the choice of the threshold θ in eq. 2.
Indeed, different relations between place field size and location can be achieved by assuming non-homogeneous thresholds, which for example might be determined for each place cell individually based on its range of inputs. The measure for place field asymmetry, in contrast, has proven to be more stable in this respect (data not shown).

Figure 5: (a) Position reconstruction error. The average error in position reconstruction as a function of the number of snapshots and the number of place cells considered. (b-c) Scatter plots of the place field asymmetry/size versus the distance of the place field's associated location to the center of the environment. The correlation coefficients are r=0.32/0.08 respectively.

4 Discussion

We have shown that the bounded invariance properties of visual stimuli encoded in a TPC are well suited for the formation of place fields. More specifically, the preservation of similarity topology across different viewing angles and distances allows a direct translation of the visual similarity between two views into their relative location within an environment. Therefore, only a small number of place cells is required for position reconstruction. Regarding the shape of the place fields, only weak correlations between their asymmetry and their distance to the center of the environment have been found. As opposed to the present approach, experimental results suggest that place field formation in the hippocampus relies on multiple sensory modalities and not only vision.
Although it was shown that vision may play an important role [3], proprioceptive stimuli, for example, can become important in situations where visual information is either not available, such as in the dark, or in the presence of visual singularities, where two different locations elicit the same visual sensation [9]. A type of information strongly related to proprioceptive stimuli is the causal structure of behavior, which imposes continuous movement in both space and time, i.e. the information about the last location can be of great importance for estimating the current location [10]. Indeed, a recent study has shown that the position reconstruction error is greatly reduced if this additional constraint is taken into account [8]. In the present approach we analyzed the properties of place cells in the absence of a behavioral paradigm. Thus, it is not meaningful to integrate information over different locations. We expect, however, that for a continuously behaving robot this type of information would be particularly useful to resolve the ambiguities introduced by the mirror invariance in the case of a single visual snapshot. As opposed to the large field of view of rats (≈ 320° [11]), the robot used in this study has a very restricted field of view. This has direct implications on the robot's behavior. The advantage of only considering a 60° field of view is, however, that the amount of information contributed by single cues can be investigated. We have shown that a single view allows for localization modulo a mirror along the orientation of the corresponding stimulus. This ambiguity can be resolved by taking additional snapshots into account. In this context, maximal additional information can be gained if a new snapshot is taken along a direction orthogonal to the first snapshot, which is also more efficient from a behavioral point of view than using stimuli from opposite directions.
The acquisition of place cells was supervised, in that their associated locations were assumed to correspond to the nodes of a regular lattice spanning the environment. While this allows for a controlled statistical analysis of the place cell properties, it is not very likely that an autonomously behaving agent can acquire place cells in such a regular fashion. Rather, place cells have to be acquired incrementally based on purely local information. Information about the number of place cells responding, or the maximal response of any place cell for a particular location, is locally available to the agent and can therefore be used to selectively trigger the acquisition of new place cells. In general, the representation will most likely also reflect further behavioral requirements, in that important locations where decisions need to be taken will be represented by a high density of place cells.

Acknowledgments

This work was supported by the European Community/Bundesamt für Bildung und Wissenschaft Grant IST-2001-33066 (to P.V.). The authors thank Peter König for valuable discussions and contributions to this study.

References

[1] J. O'Keefe and J. Dostrovsky. The hippocampus as a spatial map: preliminary evidence from unit activity in the freely moving rat. Brain Res., 34:171–5, 1971.

[2] J. O'Keefe and L. Nadel. The hippocampus as a cognitive map. Clarendon Press, Oxford, 1978.

[3] J. Knierim, H. Kudrimoti, and B. McNaughton. Place cells, head direction cells, and the learning of landmark stability. J. Neurosci., 15:1648–59, 1995.

[4] J. O'Keefe and N. Burgess. Geometric determinants of the place fields of hippocampal neurons. Nature, 381(6581):425–8, 1996.

[5] J. O'Keefe, N. Burgess, J.G. Donnett, K.J. Jeffery, and E.A. Maguire. Place cells, navigational accuracy, and the human hippocampus. Philos. Trans. R. Soc. Lond. B Biol. Sci., 353(1373):1333–40, 1998.

[6] N. Burgess, J.G. Donnett, K.J. Jeffery, and J. O'Keefe.
Robotic and neuronal simulation of the hippocampus and rat navigation. Philos. Trans. R. Soc. Lond. B Biol. Sci., 352(1360):1535–43, 1997.

[7] R. Wyss, P. König, and P.F.M.J. Verschure. Invariant representations of visual patterns in a temporal population code. Proc. Natl. Acad. Sci. USA, 100(1):324–9, 2003.

[8] K. Zhang, I. Ginzburg, B.L. McNaughton, and T.J. Sejnowski. Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells. J. Neurophysiol., 79(2):1017–44, 1998.

[9] A. Arleo and W. Gerstner. Spatial cognition and neuro-mimetic navigation: a model of hippocampal place cell activity. Biol. Cybern., 83(3):287–99, 2000.

[10] G. Quirk, R. Muller, and J. Kubie. The firing of hippocampal place cells in the dark depends on the rat's recent experience. J. Neurosci., 10:2008–17, 1990.

[11] A. Hughes. A schematic eye for the rat. Vision Res., 19:569–88, 1979.
Learning curves for stochastic gradient descent in linear feedforward networks Justin Werfel Dept. of EECS MIT Cambridge, MA 02139 jkwerfel@mit.edu Xiaohui Xie Dept. of Molecular Biology Princeton University Princeton, NJ 08544 xhx@princeton.edu H. Sebastian Seung HHMI Dept. of Brain & Cog. Sci. MIT Cambridge, MA 02139 seung@mit.edu Abstract Gradient-following learning methods can encounter problems of implementation in many applications, and stochastic variants are frequently used to overcome these difficulties. We derive quantitative learning curves for three online training methods used with a linear perceptron: direct gradient descent, node perturbation, and weight perturbation. The maximum learning rate for the stochastic methods scales inversely with the first power of the dimensionality of the noise injected into the system; with sufficiently small learning rate, all three methods give identical learning curves. These results suggest guidelines for when these stochastic methods will be limited in their utility, and considerations for architectures in which they will be effective. 1 Introduction Learning in artificial systems can be formulated as optimization of an objective function which quantifies the system’s performance. A typical approach to this optimization is to follow the gradient of the objective function with respect to the tunable parameters of the system. Frequently this is accomplished directly, by calculating the gradient explicitly and updating the parameters by a small step in the direction of locally greatest improvement. In many circumstances, however, attempts at direct gradient-following can encounter problems. In VLSI and other hardware implementations, computation of the gradient may be excessively unwieldy, if not impossible due to unavoidable imperfections in manufacturing [1]-[5]. 
In some cases, as with many where the reinforcement learning framework is used, there may be no explicit form for the objective function and hence no way of calculating its gradient [6]. And in biological systems, any argument that direct gradient calculation might be what the system is actually doing typically encounters severe obstacles. For instance, backpropagation, the standard method for training artificial neural networks, requires two-way, multipurpose synapses, units with global knowledge about the system that are able to recognize different kinds of signals and treat them in very different ways, and (in the case of trajectory learning) the ability to run backwards in time, all of which strain the bounds of biological plausibility [1, 7]. For reasons such as these, there has been broad interest in stochastic methods which approximate the gradient on average. Compared to a method that follows the true gradient directly, we would intuitively expect a stochastic gradient-following approach to learn more slowly. The stochastic algorithms in this study use a reinforcement-learning framework with a single reward signal, which is assigned based on the contributions of all the tunable parameters of the system; that single reward is all that is available to evaluate how every one of the parameters should be updated, in contrast to a true-gradient method where the optimal updates are all specified. Moreover, if the network is made larger and the number of parameters thereby increased, this credit assignment problem becomes still more difficult; thus we expect the performance of stochastic gradient methods to scale up with network size more poorly than deterministic methods. However, under some circumstances stochastic methods can be equally as effective as direct ones in training even large networks, generating near-identical learning curves (see, e.g., Fig. 2 below).
Under what circumstances, then, will stochastic gradient descent have performance comparable to that of the deterministic variety? And how good can that performance be? In this paper, we investigate these issues quantitatively by calculating the learning curves for a linear perceptron using a direct gradient method and two stochastic methods, node perturbation and weight perturbation. We find that the maximum learning speed for each algorithm scales inversely with the first power of the dimensionality of the noise injected into the system; this result is in contradiction to previous work, which reported maximum learning speed scaling inversely with the square root of the dimensionality of the injected noise [4]. Additionally, when learning rates are chosen to be very low, and such that the weight updates prescribed by each method are equal on average, we find that all three methods give identical learning curves.

2 Perceptron comparison

Direct and stochastic gradient approaches are general classes of training methods. We study the operation of exemplars of both on a feedforward linear perceptron, which has the advantage over the nonlinear case that the learning curves can be calculated exactly [8]. We have N input units and M output units, connected by a weight matrix w of MN elements; outputs in response to an input x are given by y = wx. For the ensemble of possible inputs, we want to train the network to produce desired corresponding outputs y = d; in order to ensure that this task is realizable by the network, we assume the existence of a teacher network w∗ such that d = w∗x. We use the squared error function

E = (1/2)|y − d|² = (1/2)|(w − w∗)x|² = (1/2)|Wx|²    (1)

where we have defined the matrix W ≡ w − w∗. We train the network with an online approach, choosing at each time step an input vector x with components drawn from a Gaussian distribution with mean 0 and unit variance, and using it to construct a weight update according to one of the three prescriptions below.
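As a concrete sketch of this setup (with illustrative dimensions; `w_star` plays the role of the teacher w∗):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 3                               # input / output dimensions (illustrative)
w_star = rng.standard_normal((M, N))      # teacher network: d = w* x
w = np.zeros((M, N))                      # student network

def squared_error(w, x):
    # E = 1/2 |(w - w*) x|^2 = 1/2 |W x|^2, eq. 1
    return 0.5 * np.sum(((w - w_star) @ x) ** 2)

x = rng.standard_normal(N)                # one Gaussian input example
```

Because the task is realizable, the error vanishes exactly when the student matches the teacher: `squared_error(w_star, x)` is zero.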
The online gradient-following approach explicitly uses the gradient of the error function for a given input to determine the weight update:

ΔW_OL = −η∇E

where η > 0 is the learning rate. This is the approach taken, e.g., by backpropagation. In the stochastic algorithms, the gradient is not calculated directly; instead, some noise is introduced into the system, affecting its error for a given input, and the difference between the error with and without noise is used to estimate the gradient. The simplest case is when noise is added directly to the weight matrix:

E′_WP = (1/2)|(W + ψ)x|²

Such an approach is sometimes termed 'weight perturbation' [2, 4]. We choose each element of the noise matrix ψ from a Gaussian distribution with mean 0 and variance σ². Intuitively, if the addition of the noise lowers the error, that perturbation to the weight matrix is retained, which will mean lower error for that input in future. Conversely, if the noise leads to an increase in error, the opposite change is made to the weights; the effect of small noise on error can be approximated as linear, and the opposite change in weights will lead to the opposite change in error, again decreasing error for that input in future. These two cases can be combined into the single weight update

ΔW_WP = −(η/σ²)(E′_WP − E)ψ

A more subtle way to introduce stochasticity involves adding the noise to the output of each output unit rather than to every weight:

E′_NP = (1/2)|Wx + ξ|²

Such an approach is sometimes called 'node perturbation' [1, 3]. Here if the noise leads to a decrease in error, the weights are adjusted in such a way as to move the outputs in the direction of that noise. The degree of freedom for each output unit corresponds to the adjustment of its threshold, making the unit more or less responsive to a given pattern of input activity. The elements of ξ are again chosen independently from a Gaussian distribution with variance σ²; here ξ has M elements, whereas ψ in the previous case had MN.
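The direct-gradient and weight-perturbation updates can be sketched directly in terms of the difference matrix W = w − w∗:

```python
import numpy as np

def ol_update(W, x, eta):
    # Online gradient step on E = 1/2 |W x|^2: Delta W = -eta * W x x^T.
    return W - eta * np.outer(W @ x, x)

def wp_update(W, x, eta, sigma, rng):
    # Weight perturbation: perturb every element of W with Gaussian noise
    # psi, compare perturbed and unperturbed error, and step by
    # -(eta/sigma^2) (E' - E) psi.
    E = 0.5 * np.sum((W @ x) ** 2)
    psi = sigma * rng.standard_normal(W.shape)
    E_pert = 0.5 * np.sum(((W + psi) @ x) ** 2)
    return W - (eta / sigma**2) * (E_pert - E) * psi
```

A single gradient step with small η always reduces the error on the input it was computed from, while a single weight-perturbation step only does so on average over the noise.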
The REINFORCE framework [9] gives for the weight update

ΔW_NP = −(η/σ²)(E′_NP − E)ξxᵀ

These stochastic frameworks produce weight updates identical to that of direct gradient descent on the error function when averaged over all values of the noise [4, 9], which is the sense in which they constitute stochastic gradient descent. This result is easy to verify in the particular forms taken by ΔW_NP and ΔW_WP here, shown below.

2.1 Online gradient method

Taking the gradient of the error function of Eq. 1 gives

ΔW_OL = −ηWxxᵀ    (2)

as the individual weight update for particular values of W and x. This rule lets us calculate a recursion relation specifying how ∥W∥² changes from one time step to the next:

Σ_ij ⟨(W_ij^{(t)})²⟩_t = (1 − 2η + (N + 2)η²) Σ_ij (W_ij^{(t−1)})²    (3)

where the parenthesized superscript is a time index, and the subscripted angle brackets denote an average over the ensemble of all inputs at that time. Applying this recursion relation gives an expression for the average error as a function of time, where the unsubscripted brackets indicate a mean taken over all inputs at every time step:

⟨E_OL^{(t)}⟩ = (1 − 2η + (N + 2)η²)^t E^{(0)}

In a single online learning run, E^{(t)} would depend on the particular values of x that were randomly chosen; averaging over the ensemble of possible inputs x removes this variation. We therefore use this averaged error ⟨E^{(t)}⟩ as the learning curve measuring the performance of the system. We have the condition for convergence of the average error

η < 2/(N + 2)

The limit on η has this dependence on N because of the randomness inherent in an online training regimen; the exact gradient for error due to a given single input x will not in general match that for error averaged over the entire ensemble of inputs.
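The recursion relation (3) can be checked numerically: after one online gradient step with a fresh Gaussian input, the expected squared Frobenius norm of W shrinks by the factor 1 − 2η + (N + 2)η². A Monte Carlo sketch (with illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, eta = 5, 8, 0.02
W0 = rng.standard_normal((M, N))
trials = 50000
total = 0.0
for _ in range(trials):
    x = rng.standard_normal(N)            # fresh Gaussian input each step
    W1 = W0 - eta * np.outer(W0 @ x, x)   # one online gradient step, eq. 2
    total += np.sum(W1**2)
shrink_empirical = total / trials / np.sum(W0**2)
shrink_predicted = 1.0 - 2.0 * eta + (N + 2) * eta**2
```

With η = 0.02 and N = 8 the predicted factor is 0.964, and the empirical average matches it to within Monte Carlo error; note η is well below the convergence bound 2/(N + 2).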
We can write an expression for the ij-component of the weight update, explicitly in terms of 'gradient signal' (the term multiplying W_ij) plus 'gradient noise' [1] (contamination from other components of W due to projection onto x):

ΔW_ij = −η ( W_ij x_j² + Σ_{k≠j} W_ik x_k x_j )

We can similarly rewrite Eq. 3 as

Σ_ij ⟨(W_ij^{(1)})²⟩ = Σ_ij (W_ij^{(0)})² (1 − 2η + 3η²) + η²(N − 1) Σ_ij (W_ij^{(0)})²

where the first term is due entirely to the gradient signal and the second to the gradient noise; choosing η ≲ 1/N allows the signal to be revealed via averaging over ≳ N samples (see also the Discussion). This gradient noise is common to all three algorithms considered here.

2.2 Node perturbation

Here averages are taken at each step not only over the inputs x but also over the noise ξ. The weight update, recursion relation, learning curve, and convergence condition are

ΔW_NP = −(η/σ²)(ξᵀWx + (1/2)ξᵀξ)ξxᵀ

Σ_ij ⟨(W_ij^{(t)})²⟩_t = Σ_ij (W_ij^{(t−1)})² (1 − 2η + η²(M + 2)(N + 2)) + (1/4)η²σ²MN(M + 2)(M + 4)

⟨E_NP^{(t)}⟩ = ( E^{(0)} − [ησ²(M + 2)(M + 4)MN/8] / [2 − (N + 2)(M + 2)η] ) (1 − 2η + (M + 2)(N + 2)η²)^t + [ησ²(M + 2)(M + 4)MN/8] / [2 − (M + 2)(N + 2)η]

η < 2 / ((M + 2)(N + 2))

In this case the recursion relation has not only a multiplicative term as before but also an additive one. The latter is a result of the noise ξ; when W is far from the minimum of the objective function, ξ will typically be small in comparison to Wx and the additive term will be negligible, but close to the minimum the noise will prevent the system from attaining arbitrarily low error. This effect appears also in the learning curve. The limit on η is stricter by a factor of M, the dimensionality of the noise, as discussed below.
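A sketch of the node-perturbation update in its REINFORCE form, together with a Monte Carlo confirmation that it equals the direct gradient step −ηWxxᵀ when averaged over the noise ξ:

```python
import numpy as np

def np_update(W, x, eta, sigma, rng):
    # Node perturbation: add Gaussian noise xi to the M outputs and step
    # by -(eta/sigma^2) (E' - E) xi x^T, i.e. the REINFORCE form
    # Delta W = -(eta/sigma^2) (xi^T W x + 1/2 xi^T xi) xi x^T.
    E = 0.5 * np.sum((W @ x) ** 2)
    xi = sigma * rng.standard_normal(W.shape[0])
    E_pert = 0.5 * np.sum((W @ x + xi) ** 2)
    return W - (eta / sigma**2) * (E_pert - E) * np.outer(xi, x)
```

The (1/2)ξᵀξ contribution is odd (cubic) in ξ and so averages away, leaving exactly the gradient step in expectation.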
2.3 Weight perturbation

The same approach as before gives in this case

ΔW_WP = −(η/σ²)(xᵀψᵀWx + (1/2)xᵀψᵀψx)ψ

Σ_ij ⟨(W_ij^{(t)})²⟩_t = Σ_ij (W_ij^{(t−1)})² (1 − 2η + η²(MN + 2)(N + 2)) + (1/4)η²σ²(M³N³ + 2M²N³ + 2M³N² + 16M²N² + 24MN)

⟨E_WP^{(t)}⟩ = ( E^{(0)} − ησ²MN(MN(M + 2)(N + 2) + 12(MN + 2)) / [8(2 − (N + 2)(MN + 2)η)] ) · (1 − 2η + (N + 2)(MN + 2)η²)^t + ησ²MN(MN(M + 2)(N + 2) + 12(MN + 2)) / [8(2 − (N + 2)(MN + 2)η)]

η < 2 / ((MN + 2)(N + 2))

As with node perturbation, the recursion relation involves both multiplicative and additive terms, and the learning curve shows nonzero residual error even at infinite time. The limit on η is a further factor of N smaller, corresponding to the greater dimensionality of ψ compared to ξ.

3 Comparison of learning curves

All three of the above learning curves ⟨E^{(t)}⟩ take the form Ē(a(η))^t + b(η, σ), where b is the residual error which the network will approach as t → ∞ if learning converges, Ē ≡ E^{(0)} − b is the transient error, and a is a multiplicative factor by which Ē changes at each time step. The magnitude of a, which depends on the parameter η but not on σ, determines whether the average error will converge and the rate at which it will do so. For the online gradient method, b = 0; a network trained this way, if it converges, will approach zero error as t → ∞. The stochastic algorithms have positive residual noise b, which depends on both η and σ; in the limit σ → 0, this residual error vanishes. Of course, σ cannot be set directly to 0 or the stochastic algorithms will cease to function.

3.1 Maximal learning rates

The analysis of the previous section suggests at least two reasonable ways to compare these different algorithms with respect to performance. One is to choose the optimal learning rate for each, that value of η for which the average error converges most quickly.
The learning curves, to highest order in η, M, and N, then become

⟨E_OL^{(t)}⟩ = Ē (1 − 1/N)^t
⟨E_NP^{(t)}⟩ = Ē (1 − 1/(MN))^t + (1/8)σ²M²
⟨E_WP^{(t)}⟩ = Ē (1 − 1/(MN²))^t + (1/8)σ²M²N

Direct gradient descent, then, can train a network faster than can node perturbation, which in turn is faster than weight perturbation. The noise takes different forms in the two stochastic variants. For node perturbation, ξ_i is added directly to the ith output unit; for weight perturbation, the quantity added to the same output unit is Σ_j ψ_ij x_j. By the central limit theorem, the latter approaches a Gaussian with mean 0 and variance Nσ² for large N. For the most direct comparison of the two stochastic variants, therefore, σ for ξ should be chosen a factor √N larger than for ψ. With this choice, the residual error for the two stochastic variants becomes identical, and the learning curves differ only in their rates of convergence.

3.2 Equal average updates

A second way to compare the algorithms is to choose learning rates such that all three have the same average weight update. As noted above, choosing the same value of η in all three cases will ensure this condition. That common value of η must be small enough that all three algorithms converge; if we take η ≪ 1/(MN²), the learning curves become

⟨E_OL^{(t)}⟩ = Ē (1 − 2η)^t
⟨E_NP^{(t)}⟩ = Ē (1 − 2η)^t + (1/16)ησ²M³N
⟨E_WP^{(t)}⟩ = Ē (1 − 2η)^t + (1/16)ησ²M³N³

We began by saying that, because of the credit assignment problem of choosing updates to many parameters based on a single reward signal, intuition is that a stochastic gradient-following approach should learn more slowly than a direct one. However, for equal small η, the average error for all three algorithms converges at the same rate. Weight perturbation approaches a larger value of residual error than does node perturbation; however, in the σ → 0 limit, the residual error vanishes for both.
4 Discussion

In a linear feedforward network of N input and M output units, in terms of the maximum possible rate of convergence of average error, online gradient descent on a squared error function is faster by a factor of M than node perturbation, which in turn is faster by a factor of N than weight perturbation. The factor separating the rates of convergence is the dimensionality of the noise. Weight perturbation operates by explicit exploration of the entire MN-dimensional weight space; only one component of a particular update will be in the direction of the true gradient for a given input, while the other components can be viewed as noise masking that signal. That is, an update can be written as ΔW = ⟨ΔW⟩ (the 'learning signal', the actual gradient) + (ΔW − ⟨ΔW⟩) (the 'learning noise'), where the average is taken over all values of ψ. This learning noise will typically have magnitude √(MN) larger than the learning signal, and so MN samples are required in order to average it away. Direct gradient descent gives weight updates that are purely signal in this sense; while still occurring in an MN-dimensional space, they are by definition exactly in the direction of the gradient for a given input. Thus no exploration of the weight space nor averaging over multiple samples is necessary, and the maximum learning speed is correspondingly greater. Node perturbation is a stochastic algorithm like weight perturbation, but it explores the M-dimensional output space rather than the larger weight space; the learning noise is of lower dimension, and correspondingly fewer samples need to be averaged to reveal a learning signal of a given size. It has previously been argued that the maximum learning rate should scale, not with the dimensionality of the update as shown here, but with the square root of that dimensionality [4].
That claim is based on the fact that the squared magnitude of the update goes as the number of dimensions, and for a given error landscape and position in parameter space, there will be a maximum update size, greater than which instability will result. However, a more quantitative approach is to examine the conditions under which error will decrease, as we have done above. Rather than stopping with the statement that the size of the weight update scales as the square root of the number of dimensions, we have shown that this fact implies that the restriction on convergence scales with the first power of the dimensionality. Numerical simulations of error curves, averaged over many individual trials with online updating, support these conclusions with respect to both the quantitative shapes of the learning curves and the scaling behavior of the conditions on convergence (Fig. 1).

[Three panels — online gradient method, node perturbation, weight perturbation — each plotting error (arbitrary units) against number of examples on logarithmic axes.] Figure 1: Sample learning curves for the three algorithms applied to a linear feedforward network as described in the text, showing the agreement between theory (black) and experiment (gray). In each case, a network of linear units with N = 20, M = 25, σ = 10−3, and optimal η was trained on successive input examples for the number of iterations shown. 100 such runs were averaged together in each case; the three gray lines show the mean (solid) and standard deviation (dashed) of squared error among those runs.

This scaling result means that, for these stochastic methods, there is no net advantage in speed of training when all degrees of freedom are varied at the same time, compared to when they are varied sequentially, in terms of scaling with M and N.
For instance, in the case of weight perturbation, varying only one weight at a time would allow the learning rate to be increased by a factor on the order of MN; but each of the MN weights would need to be trained in this way, so that the total training time required would scale in the same way as if all were varied at once. (The speed of learning for parallel vs. sequential variation, however, can differ by a constant ratio, though we do not pursue this issue here.) The analysis here describes the behavior in a worst case of sorts, where the objective function and distribution of inputs are isotropic. In the anisotropic case, where the problem is effectively lower-dimensional, the scaling behavior of all three methods can be correspondingly more favorable than that derived here, and the relative performance of the stochastic methods can be better. The results described in this paper extend at least qualitatively to more complicated networks and architectures. For instance, Fig. 2 shows learning curves that result from applying the three algorithms to a two-layer feedforward network of nonlinear units. All three algorithms give identical learning curves if the learning rate is set small enough; as η is increased, the weight perturbation curve fails to converge to low error, while the other two curves continue to match; increasing η further leads to the node perturbation curve also failing to converge. In the above, we have shown that stochastic gradient descent techniques can be expected to scale with increasing network size more poorly than direct ones, in terms of maximum learning rate. This may serve as a caution regarding the size of networks they may usefully be applied to. However, with learning rates in the regime where error converges, equal learning curves in each of the three will follow from equal learning rates, although individual weight updates will typically be considerably different. 
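The claim that the learning noise is a factor of √(MN) larger than the learning signal can be checked by Monte Carlo. For a fixed gradient G, an idealized weight-perturbation update ΔW = (ψ·G/σ²)ψ has mean G, and the expected squared norm of its noise component works out to roughly MN‖G‖², so the noise magnitude is about √(MN) times the signal. A sketch (sizes and sample count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, sigma = 5, 8, 0.1           # illustrative sizes and perturbation scale
G = rng.standard_normal((M, N))   # a fixed "true gradient" (learning signal)
g2 = float(np.sum(G * G))

K = 20000
noise_power = 0.0
for _ in range(K):
    psi = sigma * rng.standard_normal((M, N))
    dW = (np.sum(psi * G) / sigma**2) * psi   # idealized WP update (η omitted)
    noise_power += np.sum((dW - G) ** 2)      # squared norm of learning noise

ratio = noise_power / K / g2      # expected to be close to MN
```

The estimated `ratio` of noise power to signal power should land near MN, matching the statement that MN samples are needed to average the noise away.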
This is because for correspondingly small adjustments to the weights, only the component parallel to the gradient will have a significant effect on error; orthogonal components will not affect the error to first order. Moreover, node perturbation can have performance comparable to that of direct gradient descent even in training very large networks, so long as the number of output units is small [6]. Thus these stochastic methods may be of considerable utility for training networks in some situations, particularly in reinforcement learning frameworks and those where the gradient of the objective function is difficult or impossible to calculate, for mathematical or practical reasons.

[Three panels, one per learning rate η = 4 × 10−4, 4 × 10−3, 4 × 10−2; each plots error (arbitrary units) against number of examples on logarithmic axes.] Figure 2: Sample learning curves for the three algorithms applied to a two-layer nonlinear feedforward network (gradient descent, black dotted; node perturbation, dark gray dashed; weight perturbation, light gray solid). The input, hidden, and output layers each had 10 units, whose output was equal to the hyperbolic tangent of their weighted input. Inputs and noises were drawn from the same distributions as in the linear case; σ = 10−3, and η had the value shown for all three algorithms in each panel. In each case, the network was trained on successive input examples for the number of iterations shown; curves show single representative runs. Error was evaluated based on the total squared difference between the output of the network and that of a teacher network with randomly chosen weights; the test error shown was the mean of that for 100 random inputs not used in training.

Acknowledgments

We thank Ila Fiete and Gert Cauwenberghs for useful discussions and comments. This work was supported in part by a Packard Foundation Fellowship (to H.S. Seung) and NIH grants (GM07484 to MIT and MH60651 to H.S. Seung).

References

[1] Widrow, B. & Lehr, M. A.
30 years of adaptive neural networks: Perceptron, Madaline, and backpropagation. Proc. IEEE 78(9):1415–1442, 1990.
[2] Jabri, M. & Flower, B. Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayered networks. IEEE Transactions on Neural Networks 3(1):154–157, 1992.
[3] Flower, B. & Jabri, M. Summed weight neuron perturbation: an O(n) improvement over weight perturbation. In Advances in Neural Information Processing Systems 5, San Mateo, CA: Morgan Kaufmann Publishers: 212–219, 1993.
[4] Cauwenberghs, G. A fast stochastic error-descent algorithm for supervised learning and optimization. In Advances in Neural Information Processing Systems 5, San Mateo, CA: Morgan Kaufmann Publishers: 244–251, 1993.
[5] Cauwenberghs, G. An analog VLSI recurrent neural network learning a continuous-time trajectory. IEEE Transactions on Neural Networks 7(2):346–361, 1996.
[6] Fiete, I. Private communication.
[7] Bartlett, P. & Baxter, J. Hebbian synaptic modifications in spiking neurons that learn. Technical report, November 27, 1999.
[8] Baldi, P. & Hornik, K. Learning in linear neural networks: a survey. IEEE Transactions on Neural Networks 6(4):837–858, 1995.
[9] Williams, R.J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8:229–256, 1992.
2003
Learning Bounds for a Generalized Family of Bayesian Posterior Distributions

Tong Zhang
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
tzhang@watson.ibm.com

Abstract

In this paper we obtain convergence bounds for the concentration of Bayesian posterior distributions (around the true distribution) using a novel method that simplifies and enhances previous results. Based on the analysis, we also introduce a generalized family of Bayesian posteriors, and show that the convergence behavior of these generalized posteriors is completely determined by the local prior structure around the true distribution. This important and surprising robustness property does not hold for the standard Bayesian posterior, in that it may not concentrate when there exist "bad" prior structures even at places far away from the true distribution.

1 Introduction

Consider a sample space X and a measure λ on X (with respect to some σ-field). In statistical inference, nature picks a probability measure Q on X which is unknown. We assume that Q has a density q with respect to λ. In the Bayesian paradigm, the statistician considers a set of probability densities p(·|θ) (with respect to λ on X) indexed by θ ∈ Γ, and makes the assumption (see footnote 1) that the true density q can be represented as p(·|θ) with θ randomly picked from Γ according to a prior distribution π on Γ. Throughout the paper, all quantities appearing in the derivations are assumed to be measurable. Given a set of samples X = {X1, . . . , Xn} ∈ X^n, where each Xi is independently drawn from (the unknown distribution) Q, the optimal Bayesian method can be derived as the optimal inference with respect to the posterior distribution. Although a Bayesian procedure is optimal only when nature picks the same prior as the statistician (which is very unlikely), it is known that procedures with desirable properties from the frequentist point of view (such as minimaxity and admissibility) are often Bayesian [6].
From a theoretical point of view, it is necessary to understand the behavior of Bayesian methods without the assumption that nature picks the same prior as the statistician. In this respect, the most fundamental issue in Bayesian analysis is whether the Bayesian inference based on the posterior distribution will converge to the corresponding inference of the true (but unknown) distribution when the number of observations approaches infinity. A more general question is whether the Bayesian posterior distribution will be concentrated around the true underlying distribution when the sample size is large. This is often referred to as the consistency of the Bayesian posterior distribution, which is certainly the most fundamental issue for understanding the behavior of Bayesian methods. This problem has drawn considerable attention in statistics. The classical results include average consistency results such as Doob's consistency theorem and asymptotic convergence results such as the Bernstein–von Mises theorem for parametric problems. For infinite-dimensional problems, one has to choose the prior very carefully, or the Bayesian posterior may not concentrate around the true underlying distribution, which leads to inconsistency [1, 2]. In [1], the authors also gave conditions that guarantee the consistency of Bayesian posterior distributions, although convergence rates were not obtained. The convergence rates were studied in two recent works [3, 8] by using heavy machinery from empirical process theory. The purpose of this paper is to develop finite-sample convergence bounds for Bayesian posterior distributions using a novel approach that not only simplifies the analysis given in [3, 8], but also leads to tighter bounds.

Footnote 1: In this paper, we view the Bayesian paradigm as a method to generate statistical inference procedures, and thus do not assume that the Bayesian prior assumption has to be true. In particular, we do not even assume that q ∈ {p(·|θ) : θ ∈ Γ}.
At the heart of our approach are some new posterior averaging bounds that are related to the PAC-Bayes analyses that appeared in some recent machine learning works. These new bounds are of independent interest (though we cannot fully explore their consequences here) since they can be used to obtain correct convergence rates for other statistical estimation problems such as least squares regression. Motivated by our learning bounds, we introduce a generalized family of Bayesian methods, and show that their convergence behavior relies only on the prior mass in a small neighborhood around the true distribution. This is rather surprising when we consider the example given in [1], which shows that for the (standard) Bayesian method, even if one puts a positive prior mass around the true distribution, one may still get an inconsistent posterior when there exist undesirable prior structures far away from the true distribution.

2 The regularization formulation of Bayesian posterior measure

Assume we observe n samples X = {X1, . . . , Xn} ∈ X^n, independently drawn from the true underlying distribution Q. We shall call any probability density ŵ_X(θ) with respect to π that depends on the observation X (and is measurable on X^n × Γ) a posterior distribution. For all α ∈ (0, 1], we define a generalized Bayesian posterior π_α(·|X) with respect to π as:

\[ \pi_\alpha(\theta|X) = \frac{\prod_{i=1}^n p^\alpha(X_i|\theta)}{\int_\Gamma \prod_{i=1}^n p^\alpha(X_i|\theta)\, d\pi(\theta)}. \tag{1} \]

We call π_α the α-Bayesian posterior. The standard Bayesian posterior is denoted as π(·|X) = π_1(·|X). Given a probability density w(·) on Γ with respect to π, we define the KL-divergence KL(wdπ||dπ) as:

\[ \mathrm{KL}(w\,d\pi\,\|\,d\pi) = \int_\Gamma w(\theta) \ln w(\theta)\, d\pi(\theta). \]

Consider a real-valued function f(θ) on Γ; we denote by E_π f(θ) the expectation of f(·) with respect to π. Similarly, for a real-valued function ℓ(x) on X, we denote by E_q ℓ(x) the expectation of ℓ(·) with respect to the true underlying distribution q. We also use E_X to denote the expectation with respect to the observation X.
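On a finite parameter grid, the α-posterior of Eq. (1) reduces to a tempered-likelihood reweighting and takes only a few lines of code. The sketch below uses a hypothetical Bernoulli model family with a uniform prior (none of this setup comes from the paper): α = 1 recovers the standard Bayesian posterior, while α = 0.5 gives a flatter generalized posterior concentrated around the same parameter.

```python
import math, random

random.seed(0)
q_true = 0.7                                # true Bernoulli parameter (illustrative)
X = [1 if random.random() < q_true else 0 for _ in range(500)]

thetas = [i / 100 for i in range(1, 100)]   # parameter grid Γ
n1 = sum(X)

def alpha_posterior(alpha):
    # π_α(θ|X) ∝ π(θ) ∏_i p^α(X_i|θ); uniform prior, log-sum-exp normalization
    log_lik = [n1 * math.log(t) + (len(X) - n1) * math.log(1 - t) for t in thetas]
    log_w = [alpha * ll for ll in log_lik]
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    Z = sum(w)
    return [wi / Z for wi in w]

def moments(post):
    mean = sum(t * p for t, p in zip(thetas, post))
    var = sum(p * (t - mean) ** 2 for t, p in zip(thetas, post))
    return mean, var

post1 = alpha_posterior(1.0)                # standard Bayesian posterior
post_half = alpha_posterior(0.5)            # a generalized (tempered) posterior
mean1, var1 = moments(post1)
mean_half, var_half = moments(post_half)
```

Tempering with α < 1 only flattens the likelihood, so the generalized posterior stays centered near the same parameter but spreads out; this is one concrete reading of the "robustness" discussed later.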
The key starting point of our analysis is the following simple observation that relates the Bayesian posterior to the solution of an entropy-regularized density (with respect to π) estimation. Under this formulation, techniques for analyzing regularized risk minimization problems, such as those recently investigated by the author, can be applied to obtain sample complexity bounds for Bayesian posterior distributions. The proof of the following regularization formulation is straightforward, and we shall skip it due to the space limitation.

Proposition 2.1 For any density w on Γ with respect to π, let

\[ \hat{R}^\alpha_X(w) = \alpha\, \frac{1}{n} \sum_{i=1}^n E_\pi\, w(\theta) \ln \frac{q(X_i)}{p(X_i|\theta)} + \frac{1}{n} \mathrm{KL}(w\,d\pi\,\|\,d\pi). \]

Then \( \hat{R}^\alpha_X(\pi_\alpha(\cdot|X)) = \inf_w \hat{R}^\alpha_X(w) \).

The above proposition indicates that the generalized Bayesian posterior minimizes the regularized empirical risk \( \hat{R}^\alpha_X(w) \) among all possible densities w with respect to the prior π. We thus only need to study the behavior of this regularized empirical risk minimization problem. One may define the true risk of w by replacing the empirical expectation \( \hat{E}_X \) with the expectation with respect to the true underlying distribution q:

\[ R^\alpha_q(w) = \alpha\, E_\pi\, w(\theta)\, \mathrm{KL}(q\,\|\,p(\cdot|\theta)) + \frac{1}{n} \mathrm{KL}(w\,d\pi\,\|\,d\pi), \tag{2} \]

where KL(q||p) = E_q ln(q(x)/p(x)) is the KL-divergence between q and p, which is always a non-negative number. This quantity is widely used to measure the closeness of two distributions p and q. Clearly the Bayesian posterior is an approximate solution to (2) using the empirical expectation. The first term of R^α_q(w) measures the average KL-divergence of q and p under the w-density. Since both the first term and the second term are non-negative, we know immediately that if R^α_q(w) ≈ 0, then the distribution w is concentrated around q. Using empirical process techniques, one would typically expect to bound R^α_q(w) in terms of \( \hat{R}^\alpha_X(w) \). Unfortunately, this does not work in our case since KL(q||p) is not well-defined for all p.
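Proposition 2.1 can be verified numerically on a finite grid: the α-posterior, viewed as a density with respect to π, attains the infimum of the regularized empirical risk, and that infimum has the closed form −(1/n) ln E_π exp(−α Σ_i ln(q(X_i)/p(X_i|θ))). A self-contained check with an illustrative Bernoulli setup (all names and values are hypothetical, not from the paper):

```python
import math, random

random.seed(1)
q_true = 0.6
X = [1 if random.random() < q_true else 0 for _ in range(200)]
n, alpha = len(X), 0.5
thetas = [i / 50 for i in range(1, 50)]
K = len(thetas)                              # uniform prior: π(θ) = 1/K

n1 = sum(X)
log_lik = [n1 * math.log(t) + (n - n1) * math.log(1 - t) for t in thetas]
log_q = n1 * math.log(q_true) + (n - n1) * math.log(1 - q_true)

def risk(w):
    # \hat R^α_X(w) = (α/n) Σ_i E_π w(θ) ln(q(X_i)/p(X_i|θ)) + (1/n) KL(wdπ||dπ),
    # where w is a density with respect to π (so E_π w = 1)
    fit = sum(w[k] * (log_q - log_lik[k]) for k in range(K)) / K
    kl = sum(w[k] * math.log(w[k]) for k in range(K) if w[k] > 0) / K
    return (alpha / n) * fit + kl / n

# the α-posterior as a density with respect to π: w_α(θ) ∝ exp(α · log-likelihood)
m = max(log_lik)
u = [math.exp(alpha * (ll - m)) for ll in log_lik]
Z = sum(u) / K
w_alpha = [ui / Z for ui in u]               # normalized so E_π w_alpha = 1

# closed-form infimum: -(1/n) ln E_π exp(-α Σ_i ln(q(X_i)/p(X_i|θ)))
b = [alpha * (ll - log_q) for ll in log_lik]
mm = max(b)
inf_val = -(mm + math.log(sum(math.exp(bi - mm) for bi in b) / K)) / n

r_star = risk(w_alpha)
r_prior = risk([1.0] * K)                    # any other density, e.g. the prior itself
```

The Gibbs form of the minimizer is the standard variational fact behind the proposition; the prior (w ≡ 1) serves as an arbitrary competitor.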
This implies that as long as w has non-zero concentration around a density p with KL(q||p) = +∞, then R^α_q(w) = +∞. Therefore we may have R^α_q(π(·|X)) = +∞ with non-zero probability even when the sample size approaches infinity. A remedy is to consider a distance function that is always well-defined. In statistics, one often considers the ρ-divergence for ρ ∈ (0, 1), which is defined as:

\[ D_\rho(q\,\|\,p) = \frac{1}{\rho(1-\rho)}\, E_q \left[ 1 - \left( \frac{p(x)}{q(x)} \right)^\rho \right]. \tag{3} \]

This divergence is always well-defined and KL(q||p) = lim_{ρ→0} D_ρ(q||p). In the statistical literature, convergence results were often specified under the Hellinger distance (ρ = 0.5). We would also like to mention that our learning bound derived later becomes trivial when ρ → 0. This is consistent with the above discussion since R^α_q (corresponding to ρ = 0) may not converge at all. However, under additional assumptions, such as the boundedness of q/p, KL(q||p) exists and can be bounded using the ρ-divergence D_ρ(q||p).

3 Posterior averaging bounds under entropy regularization

The following inequality follows directly from a well-known convex duality. For example, see [5, 7] for an explanation.

Proposition 3.1 Assume that f(θ) is a measurable real-valued function on Γ, and w(θ) is a density with respect to π; then

\[ E_\pi\, w(\theta) f(\theta) \le \mathrm{KL}(w\,d\pi\,\|\,d\pi) + \ln E_\pi \exp(f(\theta)). \]

The main technical result which forms the basis of the paper is given by the following lemma, where we assume that ŵ_X(θ) is a posterior (a density with respect to π that depends on X and is measurable on X^n × Γ).

Lemma 3.1 Consider any posterior ŵ_X(θ). The following inequality holds for all measurable real-valued functions L_X(θ) on X^n × Γ:

\[ E_X \exp\left[ E_\pi\, \hat{w}_X(\theta)\left( L_X(\theta) - \ln E_X e^{L_X(\theta)} \right) - \mathrm{KL}(\hat{w}_X\,d\pi\,\|\,d\pi) \right] \le 1, \]

where E_X is the expectation with respect to the observation X.

Proof. From Proposition 3.1, we obtain

\[ \hat{L}(X) = E_\pi\, \hat{w}_X(\theta)\left( L_X(\theta) - \ln E_X e^{L_X(\theta)} \right) - \mathrm{KL}(\hat{w}_X\,d\pi\,\|\,d\pi) \le \ln E_\pi \exp\left( L_X(\theta) - \ln E_X e^{L_X(\theta)} \right). \]
Now applying Fubini's theorem to interchange the order of integration, we have:

\[ E_X e^{\hat{L}(X)} \le E_X E_\pi\, e^{L_X(\theta) - \ln E_X \exp(L_X(\theta))} = E_\pi E_X\, e^{L_X(\theta) - \ln E_X \exp(L_X(\theta))} = 1. \;\Box \]

The following corollary is a straightforward consequence of Lemma 3.1. Note that for the Bayesian method, the loss ℓ_θ(x) has the form ℓ(p(x|θ)).

Theorem 3.1 (Posterior Averaging Bounds) Under the notation of Lemma 3.1, let X = {X1, . . . , Xn} be n samples that are independently drawn from q. Consider a measurable function ℓ_θ(x) : Γ × X → R. Then for all t > 0 and any real number ρ, the following event holds with probability at least 1 − exp(−t):

\[ -E_\pi\, \hat{w}_X(\theta) \ln E_q \exp(-\rho\, \ell_\theta(x)) \le \frac{\rho \sum_{i=1}^n E_\pi\, \hat{w}_X(\theta)\, \ell_\theta(X_i) + \mathrm{KL}(\hat{w}_X\,d\pi\,\|\,d\pi) + t}{n}. \]

Moreover, we have the following expected risk bound:

\[ -E_X E_\pi\, \hat{w}_X(\theta) \ln E_q \exp(-\rho\, \ell_\theta(x)) \le E_X\, \frac{\rho \sum_{i=1}^n E_\pi\, \hat{w}_X(\theta)\, \ell_\theta(X_i) + \mathrm{KL}(\hat{w}_X\,d\pi\,\|\,d\pi)}{n}. \]

Proof Sketch. The first bound is a direct consequence of the Markov inequality. The second bound can be obtained by using the fact E_X exp(∆_X) ≥ exp(E_X ∆_X), which follows from Jensen's inequality. □

The above bounds are immediately applicable to the Bayesian posterior distribution. The first leads to an exponential tail inequality, and the second leads to an expected risk bound. Before analyzing Bayesian methods in detail in the next section, we shall briefly compare the above results to the so-called PAC-Bayes bounds, which can be obtained by estimating the left-hand side using Hoeffding's inequality with an appropriately chosen ρ. However, in the following, we shall estimate the left-hand side using a Bernstein-style bound, which is much more useful for general statistical estimation problems:

Corollary 3.1 Under the notation of Theorem 3.1, assume that sup_{θ,x1,x2} |ℓ_θ(x1) − ℓ_θ(x2)| ≤ 1. Then for all t, ρ > 0, with probability at least 1 − exp(−t):

\[ E_\pi\, \hat{w}_X(\theta)\, E_q \ell_\theta(x) - \rho\,\phi(\rho)\, E_\pi\, \hat{w}_X(\theta)\, \mathrm{Var}_q\, \ell_\theta(x) \le \frac{1}{n} \sum_{i=1}^n E_\pi\, \hat{w}_X(\theta)\, \ell_\theta(X_i) + \frac{\mathrm{KL}(\hat{w}_X\,d\pi\,\|\,d\pi) + t}{\rho n}, \]

where φ(x) = (exp(x) − x − 1)/x² and Var_q ℓ_θ(x) = E_q(ℓ_θ(x) − E_q ℓ_θ(x))².

Proof Sketch.
We follow one of the standard derivations of the Bernstein inequality outlined below: it is well known that φ(x) is non-decreasing in x, which in turn implies that

\[ \ln E_q \exp(-\rho\, \ell_\theta(x)) \le -\rho\, E_q \ell_\theta(x) + \rho^2 \phi(\rho)\, E_q(\ell_\theta(x) - E_q \ell_\theta(x))^2. \]

Now applying this bound to the left-hand side of Theorem 3.1, we finish the proof. □

One may use the simple bound Var_q ℓ_θ(x) ≤ 1/4 and obtain (see footnote 2):

\[ E_\pi\, \hat{w}_X(\theta)\, E_q \ell_\theta(x) \le E_\pi\, \hat{w}_X(\theta) \sum_{i=1}^n \frac{\ell_\theta(X_i)}{n} + \frac{\rho\,\phi(\rho)}{4} + \frac{\mathrm{KL}(\hat{w}_X\,d\pi\,\|\,d\pi) + t}{\rho n}. \tag{4} \]

This inequality holds for any data-independent choice of ρ. However, one may easily turn it into a bound which allows ρ to depend on the data using well-known techniques (see [5], for example). After we optimize ρ, the resulting bound becomes similar to the PAC-Bayes bound [4]. Typically the optimal ρ is on the order of \( \sqrt{\mathrm{KL}(\hat{w}_X\,d\pi\,\|\,d\pi)/n} \), and hence the rate of convergence given on the right-hand side is no better than O(\( \sqrt{1/n} \)). However, the more interesting case is when there exists a constant b ≥ 0 such that

\[ E_q(\ell_\theta(x) - E_q \ell_\theta(x))^2 \le b\, E_q \ell_\theta(x). \tag{5} \]

This condition appears in the theoretical analysis of many statistical estimation problems, such as least squares regression, and when the loss function is non-negative (such as classification). It also appears in some analyses of maximum-likelihood estimation (log-loss), though as we shall see, log-loss can be much more directly handled in our framework using Theorem 3.1. A modified version of this condition also occurs in some recent analyses of classification problems even when the problem is not separable. We shall now assume that (5) holds. It follows from Corollary 3.1 that for all ρ > 0 such that ρφ(ρ) ≤ 1/b, we have

\[ E_\pi\, \hat{w}_X(\theta)\, E_q \ell_\theta(x) \le \frac{\rho\, E_\pi\, \hat{w}_X(\theta) \sum_{i=1}^n \ell_\theta(X_i) + \mathrm{KL}(\hat{w}_X\,d\pi\,\|\,d\pi) + t}{\rho(1 - b\rho\phi(\rho))\, n}. \tag{6} \]

Again the above inequality holds for any data-independent ρ, but we can easily turn it into a bound that allows ρ to depend on X using standard techniques. However, we shall not list the final result here since this is not the purpose of the paper.
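Two ingredients of this section lend themselves to direct numerical checks: the limit KL(q‖p) = lim_{ρ→0} D_ρ(q‖p) for the divergence of Eq. (3), and the convex duality of Proposition 3.1, where equality is attained by the Gibbs density w(θ) ∝ exp(f(θ)). A sketch on small discrete examples (all distributions and values are illustrative):

```python
import math, random

# --- check 1: D_rho(q||p) approaches KL(q||p) as rho -> 0 (Eq. (3)) ---
q = [0.5, 0.3, 0.2]                  # illustrative discrete distributions
p = [0.2, 0.5, 0.3]

kl_qp = sum(qi * math.log(qi / pi) for qi, pi in zip(q, p))

def d_rho(rho):
    # D_rho(q||p) = (1/(rho(1-rho))) E_q [ 1 - (p/q)^rho ]
    e = sum(qi * (1 - (pi / qi) ** rho) for qi, pi in zip(q, p))
    return e / (rho * (1 - rho))

# --- check 2: Proposition 3.1 on a finite Γ with a uniform base measure π ---
random.seed(2)
K = 20
pi_base = [1 / K] * K
f = [random.uniform(-2, 2) for _ in range(K)]

def lhs(w):                          # E_π[w(θ) f(θ)]
    return sum(pi_base[k] * w[k] * f[k] for k in range(K))

def rhs(w):                          # KL(wdπ||dπ) + ln E_π exp(f)
    kl = sum(pi_base[k] * w[k] * math.log(w[k]) for k in range(K))
    return kl + math.log(sum(pi_base[k] * math.exp(f[k]) for k in range(K)))

u = [random.uniform(0.1, 1.0) for _ in range(K)]   # arbitrary density w.r.t. π
s = sum(pi_base[k] * u[k] for k in range(K))
w_any = [uk / s for uk in u]

Z = sum(pi_base[k] * math.exp(f[k]) for k in range(K))
w_gibbs = [math.exp(fk) / Z for fk in f]           # the equality case
```

The Gibbs density turns the inequality into an identity, which is exactly why the entropy-regularized risks of Section 2 have Gibbs-form minimizers.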
The parameter ρ can be optimized, and it is not hard to check that the resulting bound is significantly better than (4) when \( E_\pi\, \hat{w}_X(\theta)\, \frac{1}{n}\sum_{i=1}^n \ell_\theta(X_i) \approx 0 \). The "self-bounding" condition (5) holds in the theoretical analysis of many statistical estimation problems. To obtain the correct convergence behavior in such cases (including the Bayesian method in which we are interested here), inequality (4) is inadequate, and it is essential to use a Bernstein-type bound such as (6). It is also useful to point out that to analyze such problems, one actually only needs (6) with an appropriately chosen data-independent ρ, which will lead to the correct (minimax) rate of convergence. Note that if we choose ρ to be a constant, then it is possible to achieve a bound that converges as fast as O(1/n). We shall point out that in [7], a KL-divergence version of the PAC-Bayes bound was developed for the 0–1 loss using related techniques, which can lead to a rate as fast as O(ln n/n) if we make near-zero errors. However, the Bernstein-style bound given here is more generally applicable and is necessary for more complicated statistical estimation problems such as least squares regression.

4 Convergence bounds for Bayesian posterior distributions

We shall now analyze the finite-sample convergence behavior of Bayesian posterior distributions using Theorem 3.1. Although the exponential tail inequality provides more detailed information, our discussion will be based on the expected risk bound for simplicity. To analyze the Bayesian method, we let ℓ_θ(x) = ln(q(x)/p(x|θ)) in Theorem 3.1. Consider ρ ∈ (0, 1). We also let ŵ_X(θ) be the Bayesian posterior π_α(θ|X) with parameter α ∈ [ρ, 1] defined in (1).

Footnote 2: In this case, slightly tighter results can be obtained by applying Hoeffding's exponential inequality directly to the left-hand side of Theorem 3.1, instead of the method used in Corollary 3.1.
Consider an arbitrary data-independent density w(θ) with respect to π. Using (3), we can obtain from Theorem 3.1 the following chain of (in)equalities:

\[ E_X E_\pi\, \pi_\alpha(\theta|X) \ln \frac{1}{1 - \rho(1-\rho)\, D_\rho(q\,\|\,p(\cdot|\theta))} = -E_X E_\pi\, \pi_\alpha(\theta|X) \ln E_q \exp\left( -\rho \ln \frac{q(x)}{p(x|\theta)} \right) \]
\[ \le E_X \left[ \rho\, E_\pi\, \pi_\alpha(\theta|X)\, \frac{1}{n} \sum_{i=1}^n \ln \frac{q(X_i)}{p(X_i|\theta)} + \frac{\mathrm{KL}(\pi_\alpha(\theta|X)\,d\pi\,\|\,d\pi)}{n} \right] \]
\[ \le E_X \left[ \alpha\, E_\pi\, w(\theta)\, \frac{1}{n} \sum_{i=1}^n \ln \frac{q(X_i)}{p(X_i|\theta)} + \frac{\mathrm{KL}(w\,d\pi\,\|\,d\pi)}{n} \right] + \frac{\alpha - \rho}{n}\, E_X \sup_\theta \sum_{i=1}^n \ln \frac{p(X_i|\theta)}{q(X_i)} \]
\[ = R^\alpha_q(w) + \frac{\alpha - \rho}{n}\, E_X \sup_\theta \sum_{i=1}^n \ln \frac{p(X_i|\theta)}{q(X_i)}, \]

where R^α_q(w) is defined in (2). Note that the first inequality follows from Theorem 3.1, and the second inequality follows from Proposition 2.1. The empirical process bound in the second term can be improved using a more precise bounding method, but we shall skip it here due to the lack of space. It is not difficult to see (also see Proposition 2.1 and Proposition 3.1) that (we skip the derivation due to the space limitation):

\[ \inf_w R^\alpha_q(w) = -\frac{1}{n} \ln E_\pi \exp\left(-\alpha n\, \mathrm{KL}(q\,\|\,p(\cdot|\theta))\right). \]

Using the fact −ln(1 − x) ≥ x to simplify the left-hand side, we thus obtain:

\[ E_X E_\pi\, \pi_\alpha(\theta|X)\, D_\rho(q\,\|\,p(\cdot|\theta)) \le \frac{-\ln E_\pi\, e^{-\alpha n \mathrm{KL}(q\|p(\cdot|\theta))} + (\alpha - \rho)\, E_X \sup_\theta \sum_{i=1}^n \ln \frac{p(X_i|\theta)}{q(X_i)}}{\rho(1-\rho)\, n}. \tag{7} \]

In the following, we shall compare our analysis with previous results. To be consistent with the concepts used in these previous studies, we shall consider the following quantity:

\[ m^{\alpha,\rho}_\pi(X, \epsilon) = E_\pi\, \pi_\alpha(\theta|X)\, 1\left( D_\rho(q\,\|\,p(\cdot|\theta)) \ge \epsilon \right), \]

where 1 is the set indicator function. Intuitively, m^{α,ρ}_π(X, ε) is the probability mass of the α-Bayesian posterior π_α(·|X) in the region of p(·|θ) that is at least ε away from q in D_ρ-divergence. Using the Markov inequality, we immediately obtain from (7) the following bound for m^{α,ρ}_π(X, ε):

\[ E_X\, m^{\alpha,\rho}_\pi(X, \epsilon) \le \frac{-\ln E_\pi\, e^{-\alpha n \mathrm{KL}(q\|p(\cdot|\theta))} + (\alpha - \rho)\, E_X \sup_\theta \sum_{i=1}^n \ln \frac{p(X_i|\theta)}{q(X_i)}}{\rho(1-\rho)\, n\, \epsilon}. \tag{8} \]

Next we would like to estimate the right-hand side of (8).
Due to the limitation of space, we shall only consider a simple truncation estimate, which leads to the correct convergence rate for non-parametric problems but yields an unnecessary ln n factor for parametric problems (which can be correctly handled with a more precise estimate). We introduce the following notation, which is essentially the prior measure of an ε-radius KL-ball around q:

\[ M^{\mathrm{KL}}_\pi(\epsilon) = \pi\left( \mathrm{KL}(q\,\|\,p(\cdot|\theta)) \le \epsilon \right) = E_\pi\, 1\left( \mathrm{KL}(q\,\|\,p(\cdot|\theta)) \le \epsilon \right). \]

Using this definition, we have

\[ E_\pi\, e^{-\alpha n \mathrm{KL}(q\|p(\cdot|\theta))} \ge M^{\mathrm{KL}}_\pi(\epsilon)\, e^{-\alpha n \epsilon}. \]

In addition, we shall define the ε-upper bracketing of Γ (introduced in [1]), denoted by N(Γ, ε), as the minimum number of non-negative functions {f_i} on X with respect to λ such that E_q(f_i/q) = 1 + ε, and for all θ ∈ Γ there exists i such that p(x|θ) ≤ f_i(x) a.e. [λ]. We have

\[ \frac{1}{n}\, E_X \sup_\theta \sum_{i=1}^n \ln \frac{p(X_i|\theta)}{q(X_i)} \le \frac{1}{n}\, E_X \ln \sum_{j=1}^{N(\Gamma,\epsilon)} e^{\sum_{i=1}^n \ln \frac{f_j(X_i)}{q(X_i)}} \le \frac{1}{n} \ln \sum_{j=1}^{N(\Gamma,\epsilon)} E_X\, e^{\sum_{i=1}^n \ln \frac{f_j(X_i)}{q(X_i)}} = \frac{\ln N(\Gamma, \epsilon)}{n} + \ln(1 + \epsilon). \]

Therefore we obtain from (8) that for all s > 0:

\[ \rho(1-\rho)\, s\, E_X\, m^{\alpha,\rho}_\pi(X, s\epsilon) \le \alpha - \frac{1}{n\epsilon} \ln M^{\mathrm{KL}}_\pi(\epsilon) + (\alpha - \rho)\, \frac{\ln N(\Gamma, \epsilon) + n\epsilon}{n\epsilon}. \]

The above bound immediately implies the following consistency and convergence rate theorem for Bayesian posterior distributions:

Theorem 4.1 Consider a sequence of Bayesian prior distributions π_n on parameter spaces Γ_n, which may be different for different sample sizes. Consider a sequence of positive numbers {ε_n} such that

\[ \sup_n\; -\frac{1}{n\epsilon_n} \ln M^{\mathrm{KL}}_{\pi_n}(\epsilon_n) < \infty; \tag{9} \]

then for all s_n > 0 such that s_n → ∞, and for all α ∈ (0, 1), \( m^{\alpha,\alpha}_{\pi_n}(X, s_n\epsilon_n) \to 0 \) in probability. Moreover, if

\[ \sup_n\; \frac{\ln N(\Gamma_n, \epsilon_n)}{n\epsilon_n} < \infty, \tag{10} \]

then for all s_n > 0 such that s_n → ∞, and for all ρ ∈ (0, 1), \( m^{1,\rho}_{\pi_n}(X, s_n\epsilon_n) \to 0 \) in probability.

The first claim implies that for all α < 1, the α-Bayesian posterior π_α is concentrated in an ε_n-ball around q in D_α-divergence, and the rate of convergence is O_p(ε_n). Note that ε_n is determined only by the local property of π_n around the true distribution q.
It also immediately implies that as long as M^{KL}_{π_n}(ε) > 0 for all ε > 0, the α-Bayesian method with α < 1 is consistent. The second claim applies to the standard Bayesian method. Its consistency requires an additional assumption (10), which depends on global properties of the prior π_n. This may seem somewhat surprising at first, but the condition is necessary. In fact, the counterexample given in [1] shows that the standard Bayesian method can be inconsistent even under the condition M^{KL}_{π_n}(ε) > 0 for all ε > 0. Therefore a standard Bayesian procedure can be ill-behaved even if we put a sufficient amount of prior mass around the true distribution. The consistency theorem given in [1] also relies on the upper entropy number N(Γ, ε); however, no convergence rates were established there. Here we obtained a rate-of-convergence result for the standard Bayesian method using their covering definitions. Other definitions of covering (e.g., Hellinger covering) were used in more recent works to obtain rates of convergence for non-parametric Bayesian methods [3, 8]. Although it is possible to derive bounds using those different covering definitions in our analysis, we shall not work out the details here. However, we shall point out that these works made assumptions that are not completely necessary. For example, in [3], the definition of M^{KL}_π(ε) requires the additional assumption that E_q [ln(q/p(·|θ))]² ≤ ε². This stronger condition is not needed in our analysis. Finally, we shall mention that bounds of the form in Theorem 4.1 are known to produce optimal convergence rates for non-parametric problems (see [3, 8] for examples).

5 Conclusion

In this paper, we formulated an extended family of Bayesian algorithms as empirical log-risk minimization under entropy regularization. We then derived general posterior averaging bounds under entropy regularization that are suitable for analyzing Bayesian methods.
These new bounds are of independent interest since they lead to Bernstein-style exponential inequalities, which are crucial for obtaining the correct convergence behavior for many statistical estimation problems such as least squares regression. Using the posterior averaging bounds, we obtained new convergence results for a generalized family of Bayesian posterior distributions. Our results imply that the α-Bayesian method with α < 1 is more robust than the standard Bayesian method, since its convergence behavior is completely determined by the local prior density around the true distribution. Although the standard Bayesian method is "optimal" in a certain averaging sense, its behavior is heavily dependent on the regularity of the prior distribution globally. What happens is that the standard Bayesian method can put too much emphasis on the difficult part of the prior distribution, which degrades the estimation quality in the easier parts, in which we are actually more interested. Therefore even if one is able to guess the true distribution by putting a large prior mass around its neighborhood, the Bayesian method can still be ill-behaved if one accidentally makes bad choices elsewhere. It is thus difficult to design good Bayesian priors. The new theoretical insights obtained here imply that unless one completely understands the impact of the prior, it is much safer to use an α-Bayesian method.

Acknowledgments

The author would like to thank Andrew Barron, Ron Meir, and Matthias Seeger for helpful discussions and comments.

References

[1] Andrew Barron, Mark J. Schervish, and Larry Wasserman. The consistency of posterior distributions in nonparametric problems. Ann. Statist., 27(2):536–561, 1999.
[2] Persi Diaconis and David Freedman. On the consistency of Bayes estimates. Ann. Statist., 14(1):1–67, 1986. With a discussion and a rejoinder by the authors.
[3] Subhashis Ghosal, Jayanta K. Ghosh, and Aad W. van der Vaart. Convergence rates of posterior distributions. Ann.
Statist., 28(2):500–531, 2000.
[4] D. McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51(1):5–21, 2003.
[5] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4:839–860, 2003.
[6] C. P. Robert. The Bayesian Choice: A Decision Theoretic Motivation. Springer-Verlag, New York, 1994.
[7] M. Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research, 3:233–269, 2002.
[8] Xiaotong Shen and Larry Wasserman. Rates of convergence of posterior distributions. Ann. Statist., 29(3):687–714, 2001.
2003
A Functional Architecture for Motion Pattern Processing in MSTd Scott A. Beardsley Lucia M. Vaina Dept. of Biomedical Engineering Dept. of Biomedical Engineering Boston University Boston University Boston, MA 02215 Boston, MA 02215 sbeardsl@bu.edu vaina@bu.edu Abstract Psychophysical studies suggest the existence of specialized detectors for component motion patterns (radial, circular, and spiral), that are consistent with the visual motion properties of cells in the dorsal medial superior temporal area (MSTd) of non-human primates. Here we use a biologically constrained model of visual motion processing in MSTd, in conjunction with psychophysical performance on two motion pattern tasks, to elucidate the computational mechanisms associated with the processing of widefield motion patterns encountered during self-motion. In both tasks discrimination thresholds varied significantly with the type of motion pattern presented, suggesting perceptual correlates to the preferred motion bias reported in MSTd. Through the model we demonstrate that while independently responding motion pattern units are capable of encoding information relevant to the visual motion tasks, equivalent psychophysical performance can only be achieved using interconnected neural populations that systematically inhibit non-responsive units. These results suggest the cyclic trends in psychophysical performance may be mediated, in part, by recurrent connections within motion pattern responsive areas whose structure is a function of the similarity in preferred motion patterns and receptive field locations between units. 1 Introduction A major challenge in computational neuroscience is to elucidate the architecture of the cortical circuits for sensory processing and their effective role in mediating behavior. 
In the visual motion system, biologically constrained models are playing an increasingly important role in this endeavor by providing an explanatory substrate linking perceptual performance and the visual properties of single cells. Single cell studies indicate the presence of complex interconnected structures in middle temporal and primary visual cortex whose most basic horizontal connections can impart considerable computational power to the underlying neural population [1, 2]. Combined psychophysical and computational studies support these findings and suggest that recurrent connections may play a significant role in encoding the visual motion properties associated with various psychophysical tasks [3, 4]. Using this methodology our goal is to elucidate the computational mechanisms associated with the processing of wide-field motion patterns encountered during self-motion. In the human visual motion system, psychophysical studies suggest the existence of specialized detectors for the motion pattern components (i.e., radial, circular and spiral motions) associated with self-motion [5, 6]. Neurophysiological studies reporting neurons sensitive to motion patterns in the dorsal medial superior temporal area (MSTd) support the existence of such mechanisms [7-10], and in conjunction with psychophysical studies suggest a strong link between the patterns of neural activity and motion-based perceptual performance [11, 12]. Through the combination of human psychophysical performance and biologically constrained modeling we investigate the computational role of simple recurrent connections within a population of MSTd-like units. Based on the known visual motion properties within MSTd we ask what neural structures are computationally sufficient to encode psychophysical performance on a series of motion pattern tasks. 
2 Motion pattern discrimination

Using motion pattern stimuli consistent with previous studies [5, 6], we have developed a set of novel psychophysical tasks designed to facilitate a more direct comparison between human perceptual performance and the visual motion properties of cells in MSTd that have been found to underlie the discrimination of motion patterns [11, 12]. The psychophysical tasks, referred to as the graded motion pattern (GMP) and shifted center-of-motion (COM) tasks, are outlined in Fig. 1. Using a temporal two-alternative-forced-choice task we measured discrimination thresholds to global changes in the patterns of complex motion (GMP task) [13], and shifts in the center-of-motion (COM task). Stimuli were presented with central fixation using a constant stimulus paradigm and consisted of dynamic random dot displays presented in a 24° annular region (central 4° removed). In each task, the stimulus duration was randomly perturbed across presentations (440±40 msec) to control for timing-based cues, and dots moved coherently through a radial speed gradient in directions consistent with the global motion pattern presented. Discrimination thresholds were obtained across eight 'test' motions corresponding to expansion, contraction, CW and CCW rotation, and the four intermediate spiral motions.

Figure 1: a) Schematic of the graded motion pattern (GMP) task. Discrimination pairs of stimuli were created by perturbing the flow angle (φ) of each 'test' motion (with average dot speed, vav) by ±φp in the stimulus space spanned by radial and circular motions. b) Schematic of the shifted center-of-motion (COM) task. Discrimination pairs of stimuli were created by shifting the COM of the 'test' motion to the left and right of a central fixation point. For each motion pattern the COM was shifted within the illusory inner aperture and was never explicitly visible.
To minimize adaptation to specific motion patterns, opposing motions (e.g., expansion/contraction) were interleaved across paired presentations.

2.1 Results

Discrimination thresholds are reported here from a subset of the observer population consisting of three experienced psychophysical observers, one of whom was naïve to the purpose of the psychophysical tasks. For each condition, performance is reported as the mean and standard error averaged across 8-12 thresholds. Across observers and dot speeds, GMP thresholds followed a distinct trend in the stimulus space [13], with radial motions (expansion/contraction) significantly lower than circular motions (CW/CCW rotation), (p<0.001; t(37)=3.39), (Fig. 2a). While thresholds for the intermediate spiral motions were not significantly different from circular motions (p=0.223, t(60)=0.74), the trends across 'test' motions were well fit within the stimulus space (SB: r>0.82, SC: r>0.77) by sinusoids whose period and phase were 196 ± 10° and −72 ± 20° respectively (Fig. 1a). When the radial speed gradient was removed by randomizing the spatial distribution of dot speeds, threshold performance increased significantly across observers (p<0.05; t(17)=1.91), particularly for circular motions (p<0.005; t(25)=3.31), (data not shown). Such performance suggests a perceptual contribution associated with the presence of the speed gradient and is particularly interesting given the fact that the speed gradient did not contribute computationally relevant information to the task. However, the speed gradient did convey information regarding the integrative structure of the global motion field and as such suggests a preference of the underlying motion mechanisms for spatially structured speed information. Similar trends in performance were observed in the COM task across observers and dot speeds.
Discrimination thresholds varied continuously as a function of the 'test' motion, with thresholds for radial motions significantly lower than those for circular motions (p<0.001; t(37)=4.47), and could be well fit by a sinusoidal trend line (e.g. SB at 3 deg/s: r>0.91, period = 178 ± 10° and phase = −70 ± 25°), (Fig. 2b).

Figure 2: a) GMP thresholds across 8 'test' motions at two mean dot speeds for two observers. Performance varied continuously with thresholds for radial motions (φ=0°, 180°) significantly lower than those for circular motions (φ=90°, 270°), (p<0.001; t(37)=3.39). b) COM thresholds at three mean dot speeds for two observers. As with the GMP task, performance varied continuously with thresholds for radial motions significantly lower than those for circular motions, (p<0.001; t(37)=4.47).

2.2 A local or global task?

The consistency of the cyclic threshold profile in stimuli that restricted the temporal integration of individual dot motions [13], and simultaneously contained all directions of motion, generally argues against a primary role for local motion mechanisms in the psychophysical tasks. While the psychophysical literature has reported a wide variety of "local" motion direction anisotropies whose properties are reminiscent of the results observed here, e.g. [14], all would predict equivalent thresholds for radial and circular motions for a set of uniformly distributed and/or spatially restricted motion direction mechanisms. Together with the computational impact of the speed gradient and psychophysical studies supporting the existence of wide-field motion pattern mechanisms [5, 6], these results suggest that the threshold differences across the GMP and COM tasks may be associated with variations in the computational properties across a series of specialized motion pattern mechanisms.
3 A computational model

The similarities between the motion pattern stimuli used to quantify human perception and the visual motion properties of cells in MSTd suggest that MSTd may play a computational role in the psychophysical tasks. To examine this hypothesis, we constructed a population of MSTd-like units whose visual motion properties were consistent with the reported neurophysiology (see [13] for details). Across the population, the distribution of receptive field centers was uniform across polar angle and followed a gamma distribution Γ(5,6) across eccentricity [7]. For each unit, visual motion responses followed a gaussian tuning profile as a function of the stimulus flow angle G(φ), (σi=60±30°; [10]), and the distance of the stimulus COM from the unit's receptive field center Gsat(xi, yi, σs=19°), Eq. 1, such that its preferred motion response was position invariant to small shifts in the COM [10] and degraded continuously for large shifts [9]. Within the model, simulations were categorized according to the distribution of preferred motions represented across the population (one reported in MSTd and a uniform control). The first distribution simulated an expansion bias in which the density of preferred motions decreased symmetrically from expansion to contraction [10]. The second distribution simulated a uniform preference for all motions and was used as a control to quantify the effects of an expansion bias on psychophysical performance. Throughout the paper we refer to simulations containing these distributions as 'Expansion-biased' and 'Uniform' respectively.
3.1 Extracting perceptual estimates from the neural code

For each stimulus presentation, the ith unit's response was calculated as the average firing rate, Ri, from the product of its motion pattern and spatial tuning profiles,

$$R_i = R_{\max}\, G\big(\min[\phi - \phi_i];\, \sigma_{ti}\big)\, G_{sat}\big(x - x_i,\, y - y_i;\, \sigma_s\big) + P(\lambda = 12) \qquad (1)$$

where Rmax is the maximum preferred stimulus response (spikes/s), min[ ] refers to the minimum angular distance between the stimulus flow angle φ and the unit's preferred motion φi, Gsat is the unit's spatial tuning profile saturated within the central 5±3°, σti and σs are the standard deviations of the unit's motion pattern and spatial tuning profiles respectively, (xi,yi) is the spatial location of the unit's receptive field center, (x,y) is the spatial location of the stimulus COM, and P(λ=12) is the background activity simulated as an uncorrelated Poisson process.

The psychophysical tasks were simulated using a modified center-of-gravity approach to decode estimates of the stimulus properties, i.e. flow angle ($\hat\phi$) and COM location in the visual field ($\hat{x}, \hat{y}$), from the neural population

$$(\hat{x},\, \hat{y},\, \hat\phi) = \left( \frac{\sum_i x_i R_i}{\sum_i R_i},\ \frac{\sum_i y_i R_i}{\sum_i R_i},\ \frac{\sum_i \vec{\phi}_i R_i}{\sum_i R_i} \right) \qquad (2)$$

where $\vec{\phi}_i$ is the unit vector in the stimulus space (Fig. 1a) corresponding to the unit's preferred motion. For each set of paired stimuli, psychophysical judgments were made by comparing the estimated stimulus properties according to the discrimination criteria specified in the psychophysical tasks. As with the psychophysical experiments, discrimination thresholds were computed using a least-squares fit to percent correct performance across constant stimulus levels.

3.2 Simulation 1: Independent neural responses

In the first series of simulations, GMP and COM thresholds were quantified across three populations (500, 1000, and 2000 units) of independently responding units for each simulated distribution (Expansion-biased and Uniform).
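The response and decoding model of Eqs. 1 and 2 can be sketched in a few lines. The sketch below follows the text where it gives values (Γ(5,6) eccentricities, σs = 19°, Poisson background with λ = 12) but is otherwise our own simplification: the population size, Rmax, a single tuning width, and the omission of the central saturation in Gsat are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MSTd-like population (size and Rmax are illustrative choices).
N = 1000
phi_pref = rng.uniform(0.0, 360.0, N)        # preferred flow angles (deg)
theta = rng.uniform(0.0, 2.0 * np.pi, N)     # RF centers: uniform polar angle,
ecc = rng.gamma(5.0, 6.0, N)                 # gamma(5,6) eccentricity (deg)
cx, cy = ecc * np.cos(theta), ecc * np.sin(theta)

R_MAX, SIG_T, SIG_S = 30.0, 60.0, 19.0       # spikes/s and tuning widths (deg)

def responses(phi, x, y):
    """Eq. 1 sketch: motion-pattern tuning x spatial tuning + Poisson background
    (the central saturation of G_sat is omitted for brevity)."""
    dphi = np.abs(phi - phi_pref)
    dphi = np.minimum(dphi, 360.0 - dphi)    # minimum angular distance
    g_motion = np.exp(-dphi**2 / (2.0 * SIG_T**2))
    d2 = (x - cx)**2 + (y - cy)**2
    g_space = np.exp(-d2 / (2.0 * SIG_S**2))
    return R_MAX * g_motion * g_space + rng.poisson(12.0, N)

def decode(R):
    """Eq. 2 sketch: response-weighted center of gravity over the population."""
    w = R / R.sum()
    vx = w @ np.cos(np.deg2rad(phi_pref))    # vector average of preferred
    vy = w @ np.sin(np.deg2rad(phi_pref))    # motions in the stimulus space
    phi_hat = np.rad2deg(np.arctan2(vy, vx)) % 360.0
    return phi_hat, w @ cx, w @ cy
```

With these illustrative parameters the decoded flow angle tracks the stimulus flow angle only coarsely, which is consistent with the point of Simulation 1: independent responses alone leave considerable work for the recurrent structure introduced later.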
Across simulations, both the range in thresholds and their trends across 'test' motions were compared with human psychophysical performance to quantify the effects of population size and an expansion-biased preferred motion distribution on model performance. Over the psychophysical range of interest (φp ± 7°), GMP thresholds for contracting motions were at chance across all Expansion-biased populations, (Fig. 3a). While thresholds for expanding motions were generally consistent with those for human observers, those for circular motions remained significantly higher for all but the largest populations. Similar trends in performance were observed for the COM task, (Fig. 3b). Here the range of COM thresholds was well matched with human performance for simulations containing 1000 units; however, the trends across motion patterns remained inconsistent even for the largest populations.

Figure 3: Model vs. psychophysical performance for independently responding units. Model thresholds are reported as the average (±1 S.E.) across five simulated populations. a) GMP thresholds were highest for contracting motions and lowest for expanding motions across all Expansion-biased populations. b) Comparable trends in performance were observed for COM thresholds. Comparison with the Uniform control simulations in both tasks (2000 units shown here) indicates that thresholds closely followed the distribution of preferred motions simulated within the model.

For simulations containing a uniform distribution of preferred motions, the threshold range was consistent with human performance on both tasks; however, the trend across motion patterns was generally flat. What variability did occur was due primarily to the discrete sampling of preferred motions across the population. Comparison of the discrimination thresholds for the Expansion-biased and Uniform populations indicates that the trend across thresholds was closely matched to the underlying distributions of preferred motions.
This result is due in part to the near-equal weighting of independently responding units and can be explained to a first approximation by the proportional increase in the signal-to-noise ratio across the population as a function of the density of units responsive to a given 'test' motion.

3.3 Simulation 2: An interconnected neural structure

In a second series of simulations, we examined the computational effect of adding recurrent connections between units. If the distribution of preferred motions in MSTd is in fact biased towards expansions, as the neurophysiology suggests, it seems unlikely that independent estimates of the visual motion information would be sufficient to yield the threshold profiles observed in the psychophysical tasks. We hypothesize that a simple fixed architecture of excitatory and/or inhibitory connections is sufficient to account for the cyclic trends in discrimination thresholds. Specifically, we propose that a recurrent connection profile whose strength varies as a function of (a) the similarity between preferred motion patterns and (b) the distance between receptive field centers, is computationally sufficient to recover the trends in GMP/COM performance (Fig. 4),

$$w_{ij} = S_R\, e^{-\frac{(x_i - x_j)^2 + (y_i - y_j)^2}{2\sigma_{Re}^2}} - S_R\, e^{-\frac{(x_i - x_j)^2 + (y_i - y_j)^2}{2\sigma_{Ri}^2}} - S_{\phi}\, e^{-\frac{(\min[\phi_i - \phi_j] - 180^\circ)^2}{2\sigma_I^2}} \qquad (3)$$

Figure 4: Proposed recurrent connection profile between motion pattern units. a) Across the motion pattern space connection strength followed an inverse gaussian profile such that the ith unit (with preferred motion φi) systematically inhibited units with anti-preferred motions centered at 180°+φi. b) Across the visual field connection strength followed a difference-of-gaussians profile as a function of the relative distance between receptive field centers such that spatially local units were mutually excitatory (σRe=10°) and more distant units were mutually inhibitory (σRi=80°).
where wij is the strength of the recurrent connection between the ith and jth units, (xi,yi) and (xj,yj) denote the spatial locations of their receptive field centers, σRe (=10°) and σRi (=80°) together define the spatial extent of a difference-of-gaussians interaction between receptive field centers, and SR and Sφ scale the connection strength. To examine the effects of the spread of motion pattern-specific inhibition and connection strength in the model, σI, Sφ, and SR were considered free parameters. Within the parameter space used to define recurrent connections (i.e., σI, Sφ and SR), Monte Carlo simulations of Expansion-biased model performance (1000 units) yielded regions of high correlation on both tasks (with respect to the psychophysical thresholds, r>0.7) that were consistent across independently simulated populations. Typically these regions were well defined over a broad range such that there was significant overlap between tasks (e.g., for the GMP task (SR=0.03), σI=[45°,120°], Sφ=[0.03,0.3] and for the COM task (σI=80°), Sφ=[0.03,0.08], SR=[0.005,0.04]). Fig. 5 shows averaged threshold performance for simulations of interconnected units drawn from the highly correlated regions of the (σI, Sφ, SR) parameter space. For populations not explicitly examined in the Monte Carlo simulations, connection strengths (Sφ, SR) were scaled inversely with population size to maintain an equivalent level of recurrent activity. With the incorporation of recurrent connections, the sinusoidal trend in GMP and COM thresholds emerged for Expansion-biased populations as the number of units increased. In both tasks the cyclic threshold profiles were established for 1000 units and were well fit (r>0.9) by sinusoids whose periods and phases were consistent with human performance. Unlike the Expansion-biased populations, Uniform populations were not significantly affected by the presence of recurrent connections (Fig. 5).
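The connection profile of Eq. 3, as reconstructed here, can be sketched directly: a difference of Gaussians over receptive-field distance (local excitation at σRe = 10°, surround inhibition at σRi = 80°) combined with Gaussian inhibition centered on anti-preferred motions (σI = 80°). The strengths SR and Sφ are free parameters in the paper; the values below are illustrative picks from within the reported high-correlation region.

```python
import numpy as np

SIG_RE, SIG_RI, SIG_I = 10.0, 80.0, 80.0   # deg, values from the text
S_R, S_PHI = 0.03, 0.05                    # connection strengths: our choices

def recurrent_weights(phi_pref, cx, cy):
    """Eq. 3 sketch: w_ij from preferred-motion similarity and RF distance."""
    dphi = np.abs(phi_pref[:, None] - phi_pref[None, :])
    dphi = np.minimum(dphi, 360.0 - dphi)             # min angular distance
    d2 = (cx[:, None] - cx[None, :])**2 + (cy[:, None] - cy[None, :])**2
    return (S_R * np.exp(-d2 / (2 * SIG_RE**2))       # local excitation
            - S_R * np.exp(-d2 / (2 * SIG_RI**2))     # distant inhibition
            - S_PHI * np.exp(-(dphi - 180.0)**2 / (2 * SIG_I**2)))  # anti-preferred
```

Under this profile a unit most strongly inhibits units tuned to the opposite motion pattern, which is the systematic inhibition of non-responsive units that the simulations identify as necessary for the cyclic threshold trend.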
Both the range in thresholds and the flat trend across motion patterns were well matched to those in Section 3.2. Together these results suggest that the sinusoidal trends in GMP and COM performance may be mediated by the combined contribution of the recurrent interconnections and the bias in preferred motions across the population.

4 Discussion

Using a biologically constrained computational model in conjunction with human psychophysical performance on two motion pattern tasks, we have shown that the visual motion information encoded across an interconnected population of cells responsive to motion patterns, such as those in MSTd, is computationally sufficient to extract perceptual estimates consistent with human performance. Specifically, we have shown that the cyclic trend in psychophysical performance observed across tasks (a) cannot be reproduced using populations of independently responding units and (b) is dependent, in part, on the presence of an expanding motion bias in the distribution of preferred motions across the neural population. The model's performance suggests the presence of specific recurrent structures within motion pattern responsive areas, such as MSTd, whose strength varies as a function of the similarity between preferred motion patterns and the distance between receptive field centers.

Figure 5: Model vs. psychophysical performance for populations containing recurrent connections (σI=80°). As the number of units increased for Expansion-biased populations, discrimination thresholds decreased to psychophysical levels and the sinusoidal trend in thresholds emerged for both the (a) GMP and (b) COM tasks. Sinusoidal trends were established for as few as 1000 units and were well fit (r>0.9) by sinusoids whose periods and phases were (193.8 ± 11.7°, −70.0 ± 22.6°) and (168.2 ± 13.7°, −118.8 ± 31.8°) for the GMP and COM tasks respectively.
While such structures have not been explicitly examined in MSTd and other higher visual motion areas, there is anecdotal support for the presence of inhibitory connections [8]. Together, these results suggest that robust processing of the motion patterns associated with self-motion and optic flow may be mediated, in part, by recurrent structures in extrastriate visual motion areas whose distributions of preferred motions are biased strongly in favor of expanding motions.

Acknowledgments

This work was supported by National Institutes of Health grant EY-2R01-07861-13 to L.M.V.

References

[1] Malach, R., Schirman, T., Harel, M., Tootell, R., & Malonek, D., (1997), Cerebral Cortex, 7(4): 386-393.
[2] Gilbert, C. D., (1992), Neuron, 9: 1-13.
[3] Koechlin, E., Anton, J., & Burnod, Y., (1999), Biological Cybernetics, 80: 25-44.
[4] Stemmler, M., Usher, M., & Niebur, E., (1995), Science, 269: 1877-1880.
[5] Burr, D. C., Morrone, M. C., & Vaina, L. M., (1998), Vision Research, 38(12): 1731-1743.
[6] Meese, T. S. & Harris, S. J., (2002), Vision Research, 42: 1073-1080.
[7] Tanaka, K. & Saito, H. A., (1989), Journal of Neurophysiology, 62(3): 626-641.
[8] Duffy, C. J. & Wurtz, R. H., (1991), Journal of Neurophysiology, 65(6): 1346-1359.
[9] Duffy, C. J. & Wurtz, R. H., (1995), Journal of Neuroscience, 15(7): 5192-5208.
[10] Graziano, M. S., Anderson, R. A., & Snowden, R., (1994), Journal of Neuroscience, 14(1): 54-67.
[11] Celebrini, S. & Newsome, W., (1994), Journal of Neuroscience, 14(7): 4109-4124.
[12] Celebrini, S. & Newsome, W. T., (1995), Journal of Neurophysiology, 73(2): 437-448.
[13] Beardsley, S. A. & Vaina, L. M., (2001), Journal of Computational Neuroscience, 10: 255-280.
[14] Matthews, N. & Qian, N., (1999), Vision Research, 39: 2205-2211.
Online Learning of Non-stationary Sequences

Claire Monteleoni and Tommi Jaakkola
MIT Computer Science and Artificial Intelligence Laboratory
200 Technology Square
Cambridge, MA 02139
{cmontel,tommi}@ai.mit.edu

Abstract

We consider an online learning scenario in which the learner can make predictions on the basis of a fixed set of experts. We derive upper and lower relative loss bounds for a class of universal learning algorithms involving a switching dynamics over the choice of the experts. On the basis of the performance bounds we provide the optimal a priori discretization for learning the parameter that governs the switching dynamics. We demonstrate the new algorithm in the context of wireless networks.

1 Introduction

We focus on the online learning framework in which the learner has access to a set of experts but possesses no other a priori information relating to the observation sequence. In such a scenario the learner may choose to quickly identify a single best expert to rely on [12], or switch from one expert to another in response to perceived changes in the observation sequence [8], thus making assumptions about the switching dynamics. The ability to shift emphasis from one "expert" to another, in response to changes in the observations, is valuable in many applications, including energy management in wireless networks. Many algorithms developed for universal prediction on the basis of a set of experts have clear performance guarantees (e.g., [12, 6, 8, 14]). The performance bounds characterize the regret relative to the best expert, or best sequence of experts, chosen in hindsight. Algorithms with such relative loss guarantees have also been developed for adaptive game playing [5], online portfolio management [7], paging [3] and the k-armed bandit problem [1]. Other relative performance measures for universal prediction involve comparing across systematic variations in the sequence [4].
Here we extend the class of algorithms considered in [8] by learning the switching-rate parameter online, at the optimal resolution. Our goal of removing the switching-rate as a parameter is similar to Vovk's in [14], though the approach and the comparison class for the bounds differ. We provide upper and lower performance bounds, and demonstrate the utility of these algorithms in the context of wireless networks.

2 Algorithms and performance guarantees

The learner has access to n experts, a1, . . . , an, and each expert makes a prediction at each time-step over a finite (known) time period t = 1, . . . , T. We denote the ith expert at time t as ai,t to suppress any details about how the experts arrive at their predictions and what information is available to facilitate the predictions. These details may vary from one expert to another and may change over time. We denote the non-negative prediction loss of expert i at time t as L(i, t), where the loss, a function of t, naturally depends on the observation yt ∈ Y at time t. We consider here algorithms that provide a distribution pt(i), i = 1, . . . , n, over the experts at each time point. The prediction loss of such an algorithm is denoted by L(pt, t). For the purpose of deriving learning algorithms such as Static-expert and Fixed-share described in [8], we associate the loss of each expert with a predictive probability so that −log p(yt|yt−1, . . . , y1, i) = L(i, t). We define the loss of any probabilistic prediction to be the log-loss:

$$L(p_t, t) = -\log \sum_{i=1}^{n} p_t(i)\, p(y_t | i, y_1, \ldots, y_{t-1}) = -\log \sum_{i=1}^{n} p_t(i)\, e^{-L(i,t)} \qquad (1)$$

Many other definitions of the loss corresponding to pt(·) can be bounded by a scaled log-loss [6, 8]. We omit such modifications here as they do not change the essential nature of the algorithms nor their analysis. The algorithms combining expert predictions can now be derived as simple Bayesian estimation methods calculating the distribution pt(i) = P(i|y1, . . .
, yt−1) over the experts on the basis of the observations seen so far. We set p1(i) = 1/n for any such method, as any other initial bias could be detrimental in terms of relative performance guarantees. Updating pt(·) involves assumptions about how the optimal choice of expert can change with time. For simplicity, we consider here only a Markov dynamics, defined by p(it|it−1; α), where α parameterizes the one-step transition probabilities. Allowing switches at rate α, we define¹

$$p(i_t | i_{t-1}; \alpha) = (1 - \alpha)\, \delta(i_t, i_{t-1}) + \frac{\alpha}{n-1}\, [1 - \delta(i_t, i_{t-1})] \qquad (2)$$

which corresponds to the Fixed-share algorithm, and yields the Static-expert algorithm when α = 0. The Bayesian algorithm updating pt(·) is defined analogously to forward propagation in generalized HMMs (allowing observation dependence on the past):

$$p_t(i; \alpha) = \frac{1}{Z_t} \sum_{j=1}^{n} p_{t-1}(j; \alpha)\, e^{-L(j,t-1)}\, p(i|j; \alpha) \qquad (3)$$

where Zt normalizes the distribution. While we have made various probabilistic assumptions (e.g., conditional independence of expert predictions) in deriving the algorithm, the algorithms can be used in a context where no such statistical assumptions about the observation sequence or the experts are warranted. The performance guarantees we provide below for these algorithms do not require these assumptions.

2.1 Relative loss bounds

The existing upper bound on the relative loss of the Fixed-share algorithm [8] is expressed in terms of the loss of the algorithm relative to the loss of the best k-partition of the observation sequence, where the best expert is assigned to each segment. We start by providing here a similar guarantee but characterizing the regret relative to the best Fixed-share algorithm, parameterized by α*, where α* is chosen in hindsight after having seen the observation sequence. Our proof technique is different from [8] and gives rise to simple guarantees for a wider class of prediction methods, along with a lower bound on this regret.

¹where δ(·, ·) is the Kronecker delta.
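Eqs. 1-3 translate into a few lines of code. The sketch below uses our own notation (`losses` is a T×n array of the per-expert losses L(i, t)); it runs Fixed-share and returns the cumulative log-loss LT(α), with α = 0 recovering Static-expert.

```python
import numpy as np

def fixed_share(losses, alpha):
    """Cumulative log-loss of Fixed-share (Eqs. 1-3) with switching rate alpha."""
    T, n = losses.shape
    p = np.full(n, 1.0 / n)                 # uniform initial weights p_1(i)
    total = 0.0
    for t in range(T):
        like = np.exp(-losses[t])           # e^{-L(i,t)}
        mix = p @ like
        total -= np.log(mix)                # Eq. 1: log-loss of the mixture
        post = p * like / mix               # Bayesian posterior over experts
        # Eqs. 2-3: share weight at rate alpha among the other n-1 experts
        p = (1.0 - alpha) * post + alpha * (1.0 - post) / (n - 1)
    return total
```

For instance, with one expert that is always perfect and no switching in the data, Static-expert (α = 0) incurs at most log n extra loss, as the standard single-expert bound predicts, while any α > 0 pays for weight leaked to the bad expert.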
Lemma 1 Let $L_T(\alpha) = \sum_{t=1}^{T} L(p_{t;\alpha}, t)$, α ∈ [0, 1], be the cumulative loss of the Fixed-share algorithm on an arbitrary sequence of observations. Then for any α, α*:

$$L_T(\alpha) - L_T(\alpha^*) = -\log\, \mathrm{E}_{\hat\alpha \sim Q}\!\left[ e^{(T-1)\,[D(\hat\alpha \| \alpha^*) - D(\hat\alpha \| \alpha)]} \right] \qquad (4)$$

Proof: The cumulative log-loss of the Bayesian algorithm can be expressed in terms of the negative log-probability of all the observations:

$$L_T(\alpha) = -\log\left[ \sum_{\vec{s}} \phi(\vec{s})\, p(\vec{s}; \alpha) \right] \qquad (5)$$

where $\vec{s} = \{i_1, \ldots, i_T\}$, $\phi(\vec{s}) = \prod_{t=1}^{T} e^{-L(i_t, t)}$, and $p(\vec{s}; \alpha) = p_1(i_1) \prod_{t=2}^{T} p(i_t | i_{t-1}; \alpha)$. Consequently,

$$L_T(\alpha) - L_T(\alpha^*) = -\log \frac{\sum_{\vec{s}} \phi(\vec{s})\, p(\vec{s}; \alpha)}{\sum_{\vec{r}} \phi(\vec{r})\, p(\vec{r}; \alpha^*)} = -\log \sum_{\vec{s}} \left[ \frac{\phi(\vec{s})\, p(\vec{s}; \alpha^*)}{\sum_{\vec{r}} \phi(\vec{r})\, p(\vec{r}; \alpha^*)} \right] \frac{p(\vec{s}; \alpha)}{p(\vec{s}; \alpha^*)}$$
$$= -\log \sum_{\vec{s}} Q(\vec{s}; \alpha^*)\, \frac{p(\vec{s}; \alpha)}{p(\vec{s}; \alpha^*)} = -\log \sum_{\vec{s}} Q(\vec{s}; \alpha^*)\, e^{\log \frac{p(\vec{s}; \alpha)}{p(\vec{s}; \alpha^*)}}$$
$$= -\log \sum_{\vec{s}} Q(\vec{s}; \alpha^*)\, e^{(T-1)\left( \hat\alpha(\vec{s}) \log \frac{\alpha}{\alpha^*} + (1 - \hat\alpha(\vec{s})) \log \frac{1-\alpha}{1-\alpha^*} \right)}$$

where Q(s⃗; α*) is the posterior probability over the choices of experts along the sequence, induced by the hindsight-optimal switching-rate α*, and α̂(s⃗) is the empirical fraction of non-self-transitions in the selection sequence s⃗. This can be rewritten as an expectation under the distribution Q that Q(s⃗; α*) induces on α̂, yielding Equation 4. □

We obtain upper and lower bounds on the regret by optimizing the expression for the regret over Q in 𝒬, the set of all distributions over α̂ ∈ [0, 1].

2.1.1 Upper bound

The upper bound follows from solving

$$\max_{Q \in \mathcal{Q}}\ \left\{ -\log\, \mathrm{E}_{\hat\alpha \sim Q}\!\left[ e^{(T-1)\,[D(\hat\alpha \| \alpha^*) - D(\hat\alpha \| \alpha)]} \right] \right\}$$

subject to the constraint that α* has to be the hindsight-optimal switching-rate, i.e. that:

(C1) $\frac{d}{d\alpha}\left( L_T(\alpha) - L_T(\alpha^*) \right)\big|_{\alpha=\alpha^*} = 0$

Theorem 1 Let $L_T(\alpha^*) = \min_{\alpha} L_T(\alpha)$ be the loss of the best Fixed-share algorithm chosen in hindsight. Then for any α ∈ [0, 1], $L_T(\alpha) - L_T(\alpha^*) \le (T-1)\, D(\alpha^* \| \alpha)$, where D(α*∥α) is the relative entropy between Bernoulli distributions defined by α* and α. The bound vanishes when α = α* and does not depend directly on the number of experts. The dependence on n may appear indirectly through α*, however.
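Theorem 1 is easy to check numerically. The sketch below (toy data of our own; it re-implements Fixed-share inline so as to be self-contained) computes LT(α) on a fine grid for a random loss sequence, locates the hindsight-optimal α*, and compares the regret at a few test points against (T−1)D(α*∥α).

```python
import numpy as np

def fixed_share_loss(losses, alpha):
    """Cumulative log-loss L_T(alpha) of Fixed-share (Eqs. 1-3)."""
    T, n = losses.shape
    p = np.full(n, 1.0 / n)
    total = 0.0
    for t in range(T):
        like = np.exp(-losses[t])
        mix = p @ like
        total -= np.log(mix)
        post = p * like / mix
        p = (1.0 - alpha) * post + alpha * (1.0 - post) / (n - 1)
    return total

def bern_kl(a, b):
    """Relative entropy D(a||b) between Bernoulli distributions."""
    return a * np.log(a / b) + (1.0 - a) * np.log((1.0 - a) / (1.0 - b))

# Toy loss sequence (synthetic, our own): T = 50 steps, n = 4 experts.
rng = np.random.default_rng(1)
losses = rng.uniform(0.0, 1.0, size=(50, 4))
T = losses.shape[0]

# Hindsight-optimal switching rate, located on a fine grid.
grid = np.linspace(0.001, 0.999, 999)
curve = np.array([fixed_share_loss(losses, a) for a in grid])
a_star, L_star = grid[curve.argmin()], curve.min()

# Theorem 1: regret at any alpha is at most (T-1) D(alpha* || alpha).
test_alphas = np.array([0.05, 0.2, 0.5, 0.8, 0.95])
regrets = np.array([fixed_share_loss(losses, a) - L_star for a in test_alphas])
bounds = (T - 1) * bern_kl(a_star, test_alphas)
```

On such sequences every regret value sits below its bound (up to the grid resolution in α*), and typically well below it, reflecting a β* far from 1.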
While the regret appears proportional to T, this dependence vanishes for any reasonable learning algorithm that is guaranteed to find α such that D(α*∥α) ≤ O(1/T), as we will show in Section 3. Theorem 1 follows, as a special case, from an analogous result for algorithms based on arbitrary first-order Markov transition dynamics. In the general case, the regret bound is $(T-1) \max_i D(P(j|i; \alpha^*)\, \| \, P(j|i; \alpha))$, where α, α* are now transition matrices, and D(·∥·) is the relative entropy between discrete distributions. For brevity, we provide only the proof of the scalar case of Theorem 1.

Proof: Constraint (C1) can be expressed simply as $\frac{d}{d\alpha} L_T(\alpha)\big|_{\alpha=\alpha^*} = 0$, which is equivalent to $\mathrm{E}_{\hat\alpha \sim Q}\{\hat\alpha\} = \alpha^*$. Taking the expectation outside the logarithm, in Equation 4, results in the upper bound. □

2.1.2 Lower bound

The relative losses obviously satisfy LT(α) − LT(α*) ≥ 0, providing a trivial lower bound. Any non-trivial lower bound on the regret cannot be expressed only in terms of α and α*, but needs to incorporate some additional information about the losses along the observation sequence. We express the lower bound on the regret as a function of the relative quality β* of the minimum α*:

$$\beta^* = \frac{\alpha^*(1-\alpha^*)}{T-1}\, \frac{d^2}{d\alpha^2} L_T(\alpha)\Big|_{\alpha=\alpha^*} \qquad (6)$$

where the normalization guarantees that β* ≤ 1; β* ≥ 0 for any α* that minimizes LT(α). The lower bound is found by solving

$$\min_{Q \in \mathcal{Q}}\ \left\{ -\log\, \mathrm{E}_{\hat\alpha \sim Q}\!\left[ e^{(T-1)\,[D(\hat\alpha \| \alpha^*) - D(\hat\alpha \| \alpha)]} \right] \right\}$$

subject to both constraint (C1) and

(C2) $\frac{d^2}{d\alpha^2}\left( L_T(\alpha) - L_T(\alpha^*) \right)\big|_{\alpha=\alpha^*} = \frac{\beta^*(T-1)}{\alpha^*(1-\alpha^*)}$

Theorem 2 Let β* and α* be defined as above based on an arbitrary observation sequence, and let $q_1 = \left[1 + \frac{T-1}{1-\beta^*}\, \frac{1-\alpha^*}{\alpha^*}\right]^{-1}$ and $q_0 = \left[1 + \frac{T-1}{1-\beta^*}\, \frac{\alpha^*}{1-\alpha^*}\right]^{-1}$. Then

$$L_T(\alpha) - L_T(\alpha^*) \ge -\log\, \mathrm{E}_{\hat\alpha \sim Q}\!\left[ e^{(T-1)\,[D(\hat\alpha \| \alpha^*) - D(\hat\alpha \| \alpha)]} \right] \qquad (7)$$

where Q(1) = q1 and Q((α* − q1)/(1 − q1)) = 1 − q1 whenever α ≥ α*; Q(0) = q0 and Q(α*/(1 − q0)) = 1 − q0 otherwise. Proof omitted due to space constraints.

The upper and lower bounds agree for all α, α* ∈ (0, 1) when β* → 1.
Thus there may exist observation sequences on which Fixed-share, using α ≠ α*, must incur regret linear in T.

2.2 Algorithm Learn-α

We now give an algorithm that learns the switching-rate simultaneously with updating the probability weighting over the experts. Since the cumulative loss Lt(α) of each Fixed-share algorithm running with switching parameter α can be interpreted as a negative log-probability, the posterior distribution over the switching-rate becomes

$$p_t(\alpha) = P(\alpha | y_{t-1}, \ldots, y_1) \propto e^{-L_{t-1}(\alpha)} \qquad (8)$$

assuming a uniform prior over α ∈ [0, 1]. As a predictive distribution, pt(α) does not include the observation at the same time point. We can view this algorithm as finding the single best "α-expert," where the collection of α-experts is given by Fixed-share algorithms running with different switching-rates, α. We will consider a finite-resolution version of this algorithm, allowing only m possible choices for the switching-rate, {α1, . . . , αm}. For a sufficiently large m and appropriately chosen values {αj}, we expect to be able to always find αj ≈ α* and suffer only a minimal additional loss due to not being able to represent the hindsight-optimal value exactly. Let pt,j(i) be the distribution over experts defined by the jth Fixed-share algorithm corresponding to αj, and let $p^{top}_t(j)$ be the top-level algorithm producing a weighting over such Fixed-share experts. The top-level algorithm is given by

$$p^{top}_t(j) = \frac{1}{Z_t}\, p^{top}_{t-1}(j)\, e^{-L(p_{t-1,j},\, t-1)} \qquad (9)$$

where $p^{top}_1(j) = 1/m$, and the loss per time-step becomes

$$L^{top}(p^{top}_t, t) = -\log \sum_{j=1}^{m} p^{top}_t(j)\, e^{-L(p_{t,j},\, t)} = -\log \sum_{j=1}^{m} \sum_{i=1}^{n} p^{top}_t(j)\, p_{t,j}(i)\, e^{-L(i,t)} \qquad (10)$$

as is appropriate for a hierarchical Bayesian method.

3 Relative loss and optimal discretization

We derive here the optimal choice of the discrete set {α1, . . . , αm} on the basis of the upper bound on relative loss. We begin by extending Theorem 1 to provide an analogous guarantee for the Learn-α algorithm.
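The hierarchical algorithm of Eqs. 8-10 is compact to implement: run one Fixed-share per candidate switching-rate and track the best of them with a top-level Bayesian update. A self-contained sketch in our own notation (`losses` is T×n as before):

```python
import numpy as np

def learn_alpha(losses, alphas):
    """Cumulative log-loss of the hierarchical Learn-alpha algorithm (Eqs. 8-10)."""
    T, n = losses.shape
    m = len(alphas)
    P = np.full((m, n), 1.0 / n)       # expert weights p_{t,j} per alpha-expert
    p_top = np.full(m, 1.0 / m)        # uniform prior over switching rates
    total = 0.0
    for t in range(T):
        like = np.exp(-losses[t])
        sub = P @ like                 # e^{-L(p_{t,j}, t)} for each j
        total -= np.log(p_top @ sub)   # Eq. 10: hierarchical log-loss
        p_top = p_top * sub            # Eq. 9 (normalization below)
        p_top /= p_top.sum()
        post = P * like / sub[:, None] # per-alpha posterior over experts
        a = np.asarray(alphas)[:, None]
        P = (1.0 - a) * post + a * (1.0 - post) / (n - 1)  # Fixed-share step
    return total
```

Consistent with the corollary below, the top level costs at most an extra log m over the best α-expert: with one always-perfect expert, no switching, and α = 0 among the candidates, the total loss stays below log m + log n.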
Corollary to Theorem 1 Let $L^{top}_T$ be the cumulative loss of the hierarchical Learn-α algorithm using $\{\alpha_1, \dots, \alpha_m\}$. Then

$L^{top}_T - L_T(\alpha^*) \le \log(m) + (T-1) \min_{j=1,\dots,m} D(\alpha^* \| \alpha_j)$  (11)

The hierarchical algorithm involves two competing goals that manifest themselves in the regret: 1) the ability to identify the best Fixed-share expert, which degrades for larger $m$, and 2) the ability to find $\alpha_j$ whose loss is close to the optimal $\alpha$ for that sequence, which improves for larger $m$. The additional regret arising from having to consider a number of non-optimal values of the parameter in the search comes from the relative loss bound for the Static-expert algorithm, i.e. the relative loss associated with tracking the best single expert [8, 12]. This regret is simply $\log(m)$ in our context. More precisely, the corollary follows directly from successive application of that single-expert relative loss bound, and then our Fixed-share relative loss bound (Theorem 1):

$L^{top}_T - L_T(\alpha^*) \le \log(m) + \min_{j=1,\dots,m} L_T(\alpha_j) - L_T(\alpha^*)$  (12)

$\le \log(m) + (T-1) \min_{j=1,\dots,m} D(\alpha^* \| \alpha_j)$  (13)

3.1 Optimal discretization

We start by finding the smallest discrete set of switching-rate parameters so that any additional regret due to discretization does not exceed $(T-1)\delta$, for some threshold $\delta$. In other words, we find $m = m(\delta)$ values $\alpha_1, \dots, \alpha_{m(\delta)}$ such that

$\max_{\alpha^* \in [0,1]} \; \min_{j=1,\dots,m(\delta)} D(\alpha^* \| \alpha_j) = \delta$  (14)

The resulting discretization, a function of $\delta$, can be found algorithmically as follows. First, we set $\alpha_1$ so that $\max_{\alpha^* \in [0,\alpha_1]} D(\alpha^* \| \alpha_1) = D(0 \| \alpha_1) = \delta$, implying that $\alpha_1 = 1 - e^{-\delta}$. Each subsequent $\alpha_j$ is found conditionally on $\alpha_{j-1}$ so that

$\max_{\alpha^* \in [\alpha_{j-1}, \alpha_j]} \; \min\{ D(\alpha^* \| \alpha_{j-1}),\; D(\alpha^* \| \alpha_j) \} = \delta$  (15)

The maximizing $\alpha^*$ can be solved for explicitly by equating the two relative entropies, giving

$\alpha^* = \log\Big(\frac{1-\alpha_{j-1}}{1-\alpha_j}\Big)\, \Big[ \log\Big(\frac{\alpha_j}{\alpha_{j-1}}\,\frac{1-\alpha_{j-1}}{1-\alpha_j}\Big) \Big]^{-1}$  (16)

which lies within $[\alpha_{j-1}, \alpha_j]$ and is an increasing function of the new point $\alpha_j$. Substituting this $\alpha^*$ back into one of the relative entropies, we can set $\alpha_j$ so that $D(\alpha^* \| \alpha_{j-1}) = \delta$.
The relative entropy is an increasing function of $\alpha_j$ (through $\alpha^*$), and the solution is obtained easily via, e.g., bisection search. The iterative procedure of generating new values $\alpha_j$ can be stopped after the new point exceeds $1/2$; the remaining levels can be filled in by symmetry, so long as we also include $1/2$. The resulting discretization is not uniform but denser towards the edges; the spacing around the edges is $O(\delta)$, and $O(\sqrt{\delta})$ around $1/2$. For small values of $\delta$, the logarithm of the number of resulting discretization levels, $\log m(\delta)$, closely approximates $-\frac{1}{2}\log\delta$. We can then optimize the regret bound (11), $-\frac{1}{2}\log\delta + (T-1)\delta$, yielding $\delta^* = 1/(2T)$ and $m(\delta^*) = \sqrt{2T}$. Thus we will need $O(\sqrt{T})$ settings of $\alpha$, as in the case of choosing the levels uniformly with spacing $\sqrt{\delta}$. The uniform discretization would not, however, possess the same regret guarantee, resulting in a higher than necessary loss due to discretization.

3.1.1 Optimized regret bound for Learn-α

The optimized regret bound for Learn-α($\delta^*$) is thus (approximately) $\frac{1}{2}\log T + c$, which is comparable to the analysis of universal coding for word-length $T$ [11]. The optimal discretization for learning the parameter is not affected by $n$, the number of original experts. Unlike regret bounds for Fixed-share, the value of the bound does not depend on the observation sequence. And notably, in comparison to the lower bound on Fixed-share's performance, Learn-α's regret is at most logarithmic in $T$.

4 Application to wireless networks

We applied the Learn-α algorithm to an open problem in computer networks: managing the tradeoff between energy consumption and performance in wireless nodes of the IEEE 802.11 standard [9]. Since a node cannot receive packets while asleep, yet maintaining the awake state drains energy, the existing standard uses a fixed polling time at which a node should wake from the sleep state to poll its neighbors for buffered packets.
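The discretization procedure above can be sketched directly: set $\alpha_1 = 1 - e^{-\delta}$, repeatedly solve for the next level by bisection on the crossover condition (16), stop once a point passes $1/2$, and fill in the upper half by symmetry. This is our own sketch for small $\delta$ (all names ours), not the authors' code:

```python
import math

def kl(p, q):
    """Bernoulli relative entropy D(p||q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def crossover(a_prev, a_next):
    """The alpha* in [a_prev, a_next] where D(.||a_prev) = D(.||a_next), Eq. (16)."""
    num = math.log((1 - a_prev) / (1 - a_next))
    den = math.log((a_next / a_prev) * ((1 - a_prev) / (1 - a_next)))
    return num / den

def discretize(delta):
    """Generate {alpha_j} so the worst-case per-step discretization term is delta."""
    alphas = [1.0 - math.exp(-delta)]              # D(0 || alpha_1) = delta
    while alphas[-1] < 0.5:
        a_prev = alphas[-1]
        lo, hi = a_prev + 1e-9, 1.0 - 1e-12
        # bisection on a_next: the crossover KL is increasing in a_next
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if kl(crossover(a_prev, mid), a_prev) < delta:
                lo = mid
            else:
                hi = mid
        alphas.append(0.5 * (lo + hi))
    # mirror the lower half around 1/2, including 1/2 itself
    lower = [a for a in alphas if a < 0.5]
    return sorted(set(lower + [0.5] + [1.0 - a for a in lower]))
```

As the text predicts, the levels produced are denser near the edges than near $1/2$.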
Polling at fixed intervals, however, does not respond optimally to current network activity. This problem is clearly an appropriate application for an online learning algorithm, such as Fixed-share due to [8]. Since we are concerned with wireless, mobile nodes, there is no principled way to set the switching-rate parameter a priori, as network activity varies not only over time but across location, and the location of the mobile node is allowed to change. We can therefore expect an additional benefit from learning the switching-rate. Previous work includes Krashinsky and Balakrishnan's [10] Bounded Slowdown algorithm, which uses an adaptive control loop to change polling time based on network conditions. This algorithm uses parameterized exploration intervals, and the tradeoff is not managed optimally. Steinbach applied reinforcement learning [13] to this problem, yet required an unrealistic assumption: that network activity possesses the Markov property. We instantiate the experts as deterministic algorithms assuming constant polling times. Thus we use $n$ experts, each corresponding to a different but fixed polling time in milliseconds (ms): $T_i$, $i \in \{1, \dots, n\}$. The experts form a discretization over the range of possible polling times. We then apply the Learn-α algorithm exactly as in our previous exposition, using the discretization defined by $\delta^*$, and thus running $m(\delta^*)$ sub-algorithms, each running Fixed-share with a different $\alpha_j$. In this application, the learning algorithm can only receive observations, and perform learning updates, when it is awake. So our subscript $t$ here signifies only wake times, not every time epoch at which bytes might arrive. We define the loss function, $L$, to reflect the tradeoff inherent in the conflicting goals of minimizing both the energy usage of the node and the network latency it introduces by sleeping. We propose a loss function that is one of many functions proportional to this tradeoff.
We define the loss per expert $i$ as

$\mathrm{Loss}(i, t) = \gamma\,\frac{I_t\, T_i^2}{2 T_t} + \frac{1}{T_i}$  (17)

where $I_t$ is the observation the node receives, of how many bytes arrived upon awakening at time $t$, and $T_t$ is the length of time that the node just slept. The first term models the average latency introduced into the network by buffering those bytes, and scales $I_t$ to the number of bytes that would have arrived had the node slept for time $T_i$ instead of $T_t$, under the assumption that the bytes arrived at a uniform rate. The second term models the energy consumption of the node, based on the design that the node wakes only after an interval $T_t$ to poll for buffered bytes, and the fact that it consumes less energy when asleep than awake. The objective function is a sum of convex functions and thus admits a unique minimum. $\gamma > 0$ allows for scaling between the units of information and time, and the ability to encode the ratio between energy and latency that the user favors.

Figure 1: a) Cumulative loss of Fixed-share(α) as a function of α, compared to the cumulative loss on the same trace of the IEEE 802.11 protocol, Static-expert, and Learn-α(δ*). b) Zoom on the first 0.002 of the α range. c) Cumulative loss of Learn-α(δ), as a function of 1/δ, when n = 10, and d) n = 5. Circles mark 1/δ* = 2T.
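The per-expert loss (17) is straightforward to evaluate. A sketch (function name ours; the default γ matches the value reported for the experiments):

```python
def polling_loss(i_bytes, t_slept, t_expert, gamma=1e-7):
    """Loss (17) for an expert that polls every t_expert ms, after the
    node slept t_slept ms and found i_bytes buffered upon waking.
    First term: average latency, with i_bytes rescaled to a sleep of
    t_expert under a uniform arrival rate; second term: energy cost of
    waking every t_expert ms."""
    return gamma * i_bytes * t_expert ** 2 / (2.0 * t_slept) + 1.0 / t_expert
```

With no arriving bytes the loss reduces to the pure energy term $1/T_i$, and it grows with $I_t$, reflecting the latency side of the tradeoff.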
4.0.2 Experiments

We used traces of real network activity from [2], a UC Berkeley home dial-up server that monitored users accessing HTTP files from home. Multiple overlapping connections, passing through a collection node over several days, were recorded by start and end times and number of bytes transferred. Per connection, we smoothed the total number of bytes uniformly over 10ms intervals spanning its duration. We set $\gamma = 1.0 \times 10^{-7}$, calibrated to attain polling times within the range of the existing protocol. Figures 1a) and b) compare the cumulative loss of the various algorithms on a 4 hour trace, with observation epochs every 10ms. This corresponds to approximately 26,100 training iterations for the learning algorithms. In the typical online learning scenario, $T$, the number of learning iterations, i.e. the time horizon parameter to the loss bounds, is just the number of observation epochs. In this application, the number of training epochs need not match the number of observation epochs, since the application involves sleeping during many observation epochs, and learning is only done upon awakening. Since in these experiments the performance of the three learning algorithms is compared with each algorithm using $n$ experts spanning the range of 1000ms at regularly spaced intervals of 100ms, to obtain a prior estimate of $T$ we assume a mean sleep interval of 550ms, the mean of the experts. The Static-expert algorithm achieved lower cumulative loss than the best expert, since it can attain the optimal smoothed value over the desired range of polling times, whereas the expert values just form a discretization. On this trace, the optimal $\alpha$ for Fixed-share turns out to be extremely low. So for most settings of $\alpha$ one would be better off using a Static-expert model, yet as the second graph shows, there is a value of $\alpha$ below which it is beneficial to use Fixed-share.
This lends validity to our fundamental goal of being able to quantify the level of non-stationarity of a process, in order to better model it. Moreover there is a clear advantage to using Learn-α, since without prior knowledge of the stochastic process to be observed, there is no optimal way to set $\alpha$. Figures 1c) and d) show the cumulative loss of Learn-α as a function of $1/\delta$. We see that choosing $\delta = \frac{1}{2T}$ matches the point in the curve beyond which one cannot significantly reduce cumulative loss by decreasing $\delta$. As expected, the performance of the algorithm levels off after the optimal $\delta$ that we can compute a priori. Our results also verify that the optimal $\delta$ is not significantly affected by the number of experts $n$.

5 Conclusion

We proved upper and lower bounds on the regret for a class of online learning algorithms, applicable to any sequence of observations. The bounds extend to richer models of non-stationary sequences, allowing the switching dynamics to be governed by an arbitrary transition matrix. We derived the regret-optimal discretization (including the overall resolution) for learning the switching-rate parameter in a simple switching dynamics, yielding an algorithm with stronger guarantees than previous algorithms. We exemplified the approach in the context of energy management in wireless networks. In future work, we hope to extend the online estimation of $\alpha$ and the optimal discretization to learning a full transition matrix.

References

[1] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: the adversarial multi-armed bandit problem. In Proc. of the 36th Annual Symposium on Foundations of Computer Science, pages 322–331, 1995.
[2] Berkeley. UC Berkeley home IP web traces. http://ita.ee.lbl.gov/html/contrib/UCB.homeIP-HTTP.html, 1996.
[3] A. Blum, C. Burch, and A. Kalai. Finely-competitive paging. In IEEE 40th Annual Symposium on Foundations of Computer Science, page 450, New York, New York, October 1999.
[4] D. P.
Foster and R. Vohra. Regret in the on-line decision problem. Games and Economic Behavior, 29:7–35, 1999. [5] Y. Freund and R. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999. [6] D. Haussler, J. Kivinen, and M. K. Warmuth. Sequential prediction of individual sequences under general loss functions. IEEE Trans. on Information Theory, 44(5):1906–1925, 1998. [7] D. P. Helmbold, R. E. Schapire, Y. Singer, and M. K. Warmuth. On-line portfolio selection using multiplicative updates. In International Conference on Machine Learning, pages 243– 251, 1996. [8] M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32:151–178, 1998. [9] IEEE. Computer society LAN MAN standards committee. In IEEE Std 802.11: Wireless LAN Medium Access Control and Physical Layer Specifications, August 1999. [10] R. Krashinsky and H. Balakrishnan. Minimizing energy for wireless web access with bounded slowdown. In MobiCom 2002, Atlanta, GA, September 2002. [11] R. Krichevsky and V. Trofimov. The performance of universal encoding. IEEE Trans. on Information Theory, 27(2):199–207, 1981. [12] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. In IEEE Symposium on Foundations of Computer Science, pages 256–261, 1989. [13] C. Steinbach. A reinforcement-learning approach to power management. In AI Technical Report, M.Eng Thesis, Artificial Intelligence Laboratory, MIT, May 2002. [14] V. Vovk. Derandomizing stochastic prediction strategies. Machine Learning, 35:247–282, 1999.
Linear Response for Approximate Inference Max Welling Department of Computer Science University of Toronto Toronto M5S 3G4 Canada welling@cs.utoronto.ca Yee Whye Teh Computer Science Division University of California at Berkeley Berkeley CA94720 USA ywteh@eecs.berkeley.edu Abstract Belief propagation on cyclic graphs is an efficient algorithm for computing approximate marginal probability distributions over single nodes and neighboring nodes in the graph. In this paper we propose two new algorithms for approximating joint probabilities of arbitrary pairs of nodes and prove a number of desirable properties that these estimates fulfill. The first algorithm is a propagation algorithm which is shown to converge if belief propagation converges to a stable fixed point. The second algorithm is based on matrix inversion. Experiments compare a number of competing methods. 1 Introduction Belief propagation (BP) has become an important tool for approximate inference on graphs with cycles. Especially in the field of “error correction decoding”, it has brought performance very close to the Shannon limit. BP was studied in a number of papers which have gradually increased our understanding of the convergence properties and accuracy of the algorithm. In particular, recent developments show that the stable fixed points are local minima of the Bethe free energy [10, 1], which paved the way for more accurate “generalized belief propagation” algorithms and convergent alternatives to BP [11, 6]. Despite its success, BP does not provide a prescription to compute joint probabilities over pairs of non-neighboring nodes in the graph. When the graph is a tree, there is a single chain connecting any two nodes, and dynamic programming can be used to efficiently integrate out the internal variables. However, when cycles exist, it is not clear what approximate procedure is appropriate. It is precisely this problem that we will address in this paper. 
We show that the required estimates can be obtained by computing the "sensitivity" of the node marginals to small changes in the node potentials. Based on this idea, we present two algorithms to estimate the joint probabilities of arbitrary pairs of nodes. These results are interesting in the inference domain but may also have future applications to learning graphical models from data. For instance, information about dependencies between random variables is relevant for learning the structure of a graph and the parameters encoding the interactions.

2 Belief Propagation on Factor Graphs

Let $V$ index a collection of random variables $\{X_i\}_{i \in V}$ and let $x_i$ denote values of $X_i$. For a subset of nodes $\alpha \subset V$, let $X_\alpha = \{X_i\}_{i \in \alpha}$ be the variable associated with that subset, and $x_\alpha$ be values of $X_\alpha$. Let $A$ be a family of such subsets of $V$. The probability distribution over $X \doteq X_V$ is assumed to have the following form,

$P_X(X = x) = \frac{1}{Z} \prod_{\alpha \in A} \psi_\alpha(x_\alpha) \prod_{i \in V} \psi_i(x_i)$  (1)

where $Z$ is the normalization constant (the partition function) and $\psi_\alpha, \psi_i$ are positive potential functions defined on subsets and single nodes respectively. In the following we will write $P(x) \doteq P_X(X = x)$ for notational simplicity. The decomposition of (1) is consistent with a factor graph with function nodes over $X_\alpha$ and variable nodes $X_i$. For each $i \in V$, denote its neighbors by $N_i = \{\alpha \in A : \alpha \ni i\}$, and for each subset $\alpha$ its neighbors are simply $N_\alpha = \{i \in \alpha\}$. Factor graphs are a convenient representation for structured probabilistic models and subsume undirected graphical models and acyclic directed graphical models [3]. Further, there is a simple message passing algorithm for approximate inference that generalizes the belief propagation algorithms on both undirected and acyclic directed graphical models,

$n_{i\alpha}(x_i) \leftarrow \psi_i(x_i) \prod_{\beta \in N_i \setminus \alpha} m_{\beta i}(x_i) \qquad m_{\alpha i}(x_i) \leftarrow \sum_{x_{\alpha \setminus i}} \psi_\alpha(x_\alpha) \prod_{j \in N_\alpha \setminus i} n_{j\alpha}(x_j)$  (2)

where $n_{i\alpha}(x_i)$ represents a message from variable node $i$ to factor node $\alpha$, and vice versa for message $m_{\alpha i}(x_i)$.
Marginal distributions over factor nodes and variable nodes are expressed in terms of these messages as follows,

$b_\alpha(x_\alpha) = \frac{1}{\gamma_\alpha}\, \psi_\alpha(x_\alpha) \prod_{i \in N_\alpha} n_{i\alpha}(x_i) \qquad b_i(x_i) = \frac{1}{\gamma_i}\, \psi_i(x_i) \prod_{\alpha \in N_i} m_{\alpha i}(x_i)$  (3)

where $\gamma_i, \gamma_\alpha$ are normalization constants. It was recently established in [10, 1] that stable fixed points of these update equations correspond to local minima of the Bethe-Gibbs free energy given by,

$G_{BP}(\{b^{BP}_i, b^{BP}_\alpha\}) = \sum_\alpha \sum_{x_\alpha} b^{BP}_\alpha(x_\alpha) \log \frac{b^{BP}_\alpha(x_\alpha)}{\psi_\alpha(x_\alpha)} + \sum_i \sum_{x_i} b^{BP}_i(x_i) \log \frac{b^{BP}_i(x_i)^{c_i}}{\psi_i(x_i)}$  (4)

with $c_i = 1 - |N_i|$, and the marginals are subject to the following local constraints:

$\sum_{x_{\alpha \setminus i}} b^{BP}_\alpha(x_\alpha) = b^{BP}_i(x_i), \qquad \sum_{x_\alpha} b_\alpha(x_\alpha) = 1, \qquad \forall \alpha \in A,\; i \in \alpha$  (5)

Since only local constraints are enforced, it is no longer guaranteed that the set of marginals $\{b^{BP}_i, b^{BP}_\alpha\}$ is consistent with a single joint distribution $B(x)$.

3 Linear Response

In the following we will be interested in computing estimates of joint probability distributions for arbitrary pairs of nodes. We propose a method based on the linear response theorem. The idea is to study changes in the system when we perturb single node potentials,

$\log \psi_i(x_i) = \log \psi^0_i(x_i) + \theta_i(x_i)$  (6)

The superscript $0$ indicates unperturbed quantities in (6) and the following. Let $\theta = \{\theta_i\}$ and define the cumulant generating function of $P(X)$ (up to a constant) as,

$F(\theta) = -\log \sum_x \prod_{\alpha \in A} \psi_\alpha(x_\alpha) \prod_{i \in V} \psi^0_i(x_i)\, e^{\theta_i(x_i)}$  (7)

Differentiating $F(\theta)$ with respect to $\theta$ gives the cumulants of $P(x)$,

$-\frac{\partial F(\theta)}{\partial \theta_j(x_j)}\Big|_{\theta=0} = p_j(x_j)$  (8)

$-\frac{\partial^2 F(\theta)}{\partial \theta_i(x_i)\, \partial \theta_j(x_j)}\Big|_{\theta=0} = \frac{\partial p_j(x_j)}{\partial \theta_i(x_i)}\Big|_{\theta=0} = \begin{cases} p_{ij}(x_i, x_j) - p_i(x_i)\, p_j(x_j) & \text{if } i \ne j \\ p_i(x_i)\, \delta_{x_i, x_j} - p_i(x_i)\, p_j(x_j) & \text{if } i = j \end{cases}$  (9)

where $p_i, p_{ij}$ are single and pairwise marginals of $P(x)$. Expressions for higher order cumulants can be derived by taking further derivatives of $-F(\theta)$. Notice from (9) that the covariance estimates are obtained by studying the perturbations in $p_j(x_j)$ as we vary $\theta_i(x_i)$. This is not practical in general since calculating $p_j(x_j)$ itself is intractable.
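As a concrete illustration of the updates (2) and beliefs (3), consider the smallest factor graph: two variables connected by one pairwise factor. The sketch below is ours (not the authors' implementation); on this tree BP is exact, so the beliefs can be checked against brute-force marginals.

```python
import numpy as np

def bp_chain(psi_i, psi_j, psi_ij, iters=10):
    """Sum-product (Eqs. 2-3) on the factor graph i -- a -- j with a single
    pairwise factor psi_ij. Each variable's only other factor is its node
    potential, so n_{ia} = psi_i and n_{ja} = psi_j; the factor-to-variable
    messages sum out the other variable."""
    m_ai = np.ones_like(psi_i)
    m_aj = np.ones_like(psi_j)
    for _ in range(iters):                 # one sweep is already exact on a tree
        m_aj = psi_ij.T @ psi_i            # m_{aj}(x_j) = sum_{x_i} psi_ij n_{ia}
        m_ai = psi_ij @ psi_j              # m_{ai}(x_i) = sum_{x_j} psi_ij n_{ja}
    b_i = psi_i * m_ai
    b_j = psi_j * m_aj
    return b_i / b_i.sum(), b_j / b_j.sum()
```

On graphs with cycles the same message-passing schedule only yields approximate marginals, which is the regime the paper is concerned with.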
Instead, we consider perturbations of approximate marginal distributions $\{b_j\}$. In the following we will assume that $b_j(x_j; \theta)$ (with the dependence on $\theta$ made explicit) are the beliefs at a local minimum of the BP-Gibbs free energy (subject to constraints). In analogy to (9), let $C_{ij}(x_i, x_j) = \frac{\partial b_j(x_j;\theta)}{\partial \theta_i(x_i)}\big|_{\theta=0}$ be the linear response estimated covariance, and define the linear response estimated joint pairwise marginal as

$b^{LR}_{ij}(x_i, x_j) = b^0_i(x_i)\, b^0_j(x_j) + C_{ij}(x_i, x_j)$  (10)

where $b^0_i(x_i) \doteq b_i(x_i; \theta = 0)$. We will show that $b^{LR}_{ij}$ and $C_{ij}$ satisfy a number of important properties which make them suitable as approximations of joint marginals and covariances.

First we show that $C_{ij}(x_i, x_j)$ can be interpreted as the Hessian of a well-behaved convex function. Let $\mathcal{C}$ be the set of beliefs that satisfy the constraints (5). The approximate marginals $\{b^0_i\}$ along with the joint marginals $\{b^0_\alpha\}$ form a local minimum of the Bethe-Gibbs free energy (subject to $b^0 \doteq \{b^0_i, b^0_\alpha\} \in \mathcal{C}$). Assume that $b^0$ is a strict local minimum of $G_{BP}$ (the strict local minimality is in fact attained if we use loopy belief propagation [1]). That is, there is an open domain $D$ containing $b^0$ such that $G_{BP}(b^0) < G_{BP}(b)$ for each $b \in D \cap \mathcal{C} \setminus b^0$. Now we can define

$G^*(\theta) = \inf_{b \in D \cap \mathcal{C}} \Big( G_{BP}(b) - \sum_{i, x_i} b_i(x_i)\, \theta_i(x_i) \Big)$  (11)

$G^*(\theta)$ is a concave function since it is the infimum of a set of linear functions in $\theta$. Further, $G^*(0) = G_{BP}(b^0)$. Since $b^0$ is a strict local minimum when $\theta = 0$, small perturbations in $\theta$ will result in small perturbations in $b^0$, so that $G^*$ is well-behaved on an open neighborhood around $\theta = 0$. Differentiating $G^*(\theta)$, we get $\frac{\partial G^*(\theta)}{\partial \theta_j(x_j)} = -b_j(x_j; \theta)$, so we now have

$C_{ij}(x_i, x_j) = \frac{\partial b_j(x_j;\theta)}{\partial \theta_i(x_i)}\Big|_{\theta=0} = -\frac{\partial^2 G^*(\theta)}{\partial \theta_i(x_i)\, \partial \theta_j(x_j)}\Big|_{\theta=0}$  (12)

In essence, we can interpret $G^*(\theta)$ as a local convex dual of $G_{BP}(b)$ (by restricting attention to $D$).
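The sensitivity definition behind (12) can be checked numerically on a model small enough for exact marginals: perturb a single $\theta_1(x_1)$, recompute the marginals, and take a finite difference. By Eq. (9) the result should equal the exact covariance $p_{12} - p_1 p_2$. The sketch below is ours; all names are hypothetical.

```python
import numpy as np

def marginals(theta1, theta2, psi12):
    """Exact marginals of P(x1,x2) proportional to
    exp(theta1[x1]) * exp(theta2[x2]) * psi12[x1,x2]."""
    P = np.exp(theta1)[:, None] * np.exp(theta2)[None, :] * psi12
    P /= P.sum()
    return P, P.sum(1), P.sum(0)

# Finite-difference version of Eq. (9): d p2(x2) / d theta1(x1) = p12 - p1 p2.
psi = np.array([[1.0, 0.5], [2.0, 1.0]])
t1, t2 = np.zeros(2), np.zeros(2)
P, p1, p2 = marginals(t1, t2, psi)
eps = 1e-6
C = np.zeros((2, 2))
for x1 in range(2):
    tp = t1.copy()
    tp[x1] += eps
    _, _, p2_plus = marginals(tp, t2, psi)
    C[x1] = (p2_plus - p2) / eps        # row x1: sensitivity of p2 to theta1(x1)
```

The linear response algorithms of the paper compute the same derivative, but with the intractable exact marginals replaced by the BP beliefs.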
Since $G_{BP}$ is an approximation to the exact Gibbs free energy [8], which is in turn dual to $F(\theta)$ [4], $G^*(\theta)$ can be seen as an approximation to $F(\theta)$ for small values of $\theta$. For that reason we can take its second derivatives $C_{ij}(x_i, x_j)$ as approximations to the exact covariances (which are second derivatives of $-F(\theta)$).

Theorem 1 The approximate covariance satisfies the following symmetry:

$C_{ij}(x_i, x_j) = C_{ji}(x_j, x_i)$  (13)

Proof: The covariances are second derivatives of $-G^*(\theta)$ at $\theta = 0$, so we can interchange the order of the derivatives since $G^*(\theta)$ is well-behaved on a neighborhood around $\theta = 0$. $\Box$

Theorem 2 The approximate covariance satisfies the following "marginalization" conditions for each $x_i, x_j$:

$\sum_{x'_i} C_{ij}(x'_i, x_j) = \sum_{x'_j} C_{ij}(x_i, x'_j) = 0$  (14)

As a result, the approximate joint marginals satisfy local marginalization constraints:

$\sum_{x'_i} b^{LR}_{ij}(x'_i, x_j) = b^0_j(x_j) \qquad \sum_{x'_j} b^{LR}_{ij}(x_i, x'_j) = b^0_i(x_i)$  (15)

Proof: Using the definition of $C_{ij}(x_i, x_j)$ and the marginalization constraints for $b^0_j$,

$\sum_{x'_j} C_{ij}(x_i, x'_j) = \sum_{x'_j} \frac{\partial b_j(x'_j;\theta)}{\partial \theta_i(x_i)}\Big|_{\theta=0} = \frac{\partial \sum_{x'_j} b_j(x'_j;\theta)}{\partial \theta_i(x_i)}\Big|_{\theta=0} = \frac{\partial}{\partial \theta_i(x_i)} 1 \Big|_{\theta=0} = 0$  (16)

The constraint $\sum_{x'_i} C_{ij}(x'_i, x_j) = 0$ follows from the symmetry (13), while the corresponding marginalization (15) follows from (14) and the definition of $b^{LR}_{ij}$. $\Box$

Since $-F(\theta)$ is convex, its Hessian matrix with entries given in (9) is positive semi-definite. Similarly, since the approximate covariances $C_{ij}(x_i, x_j)$ are second derivatives of a convex function $-G^*(\theta)$, we have:

Theorem 3 The matrix formed from the approximate covariances $C_{ij}(x_i, x_j)$ by varying $i$ and $x_i$ over the rows and varying $j, x_j$ over the columns is positive semi-definite.

Using the above results, we can reinterpret the linear response correction as a "projection" of the (only locally consistent) beliefs $\{b^0_i, b^0_\alpha\}$ onto a set of beliefs $\{b^0_i, b^{LR}_{ij}\}$ that is both locally consistent (Theorem 2) and satisfies the global constraint of being positive semi-definite (Theorem 3).¹
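Theorems 1-3 mirror properties that the exact covariance (9) satisfies by construction. The following sketch (ours, not the paper's code) builds the flattened covariance matrix of Eq. (9) for a two-variable joint, so that symmetry, the zero "marginalization" sums of (14), and positive semi-definiteness can be checked numerically:

```python
import numpy as np

def pair_covariance(P):
    """Flattened covariance matrix of Eq. (9) for an exact joint P(x1, x2):
    off-diagonal blocks p12 - p1 p2, diagonal blocks diag(p) - p p^T."""
    d1, d2 = P.shape
    p1, p2 = P.sum(1), P.sum(0)
    C = np.zeros((d1 + d2, d1 + d2))
    C[:d1, :d1] = np.diag(p1) - np.outer(p1, p1)   # i = j block for variable 1
    C[d1:, d1:] = np.diag(p2) - np.outer(p2, p2)   # i = j block for variable 2
    C[:d1, d1:] = P - np.outer(p1, p2)             # i != j block
    C[d1:, :d1] = C[:d1, d1:].T                    # symmetry (Theorem 1)
    return C
```

For any valid joint, the resulting matrix is symmetric, its rows and columns sum to zero, and its eigenvalues are non-negative; the point of the theorems is that the linear response estimates inherit these properties even though they come from an approximation.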
4 Propagating Perturbations for Linear Response

Recall from (10) that we need the first derivative of $b_i(x_i; \theta)$ with respect to $\theta_j(x_j)$ at $\theta = 0$. This does not automatically imply that we need an analytic expression for $b_i(x_i; \theta)$ in terms of $\theta$. In this section we show how we may compute these first derivatives by expanding all quantities and equations up to first order in $\theta$ and keeping track of first order dependencies. First we assume that belief propagation has converged to a stable fixed point. We expand the beliefs and messages up to first order as²

$b_i(x_i; \theta) = b^0_i(x_i) \Big( 1 + \sum_{j, y_j} R_{ij}(x_i, y_j)\, \theta_j(y_j) \Big)$  (17)

$n_{i\alpha}(x_i) = n^0_{i\alpha}(x_i) \Big( 1 + \sum_{k, y_k} N_{i\alpha,k}(x_i, y_k)\, \theta_k(y_k) \Big)$  (18)

$m_{\alpha i}(x_i) = m^0_{\alpha i}(x_i) \Big( 1 + \sum_{k, y_k} M_{\alpha i,k}(x_i, y_k)\, \theta_k(y_k) \Big)$  (19)

¹In extreme cases it is however possible that some entries of $b^{LR}_{ij}$ become negative.
²The unconventional form of this expansion will make subsequent derivations more transparent.

The "response matrices" $R_{ij}$, $N_{i\alpha,j}$, $M_{\alpha i,j}$ measure the sensitivities of the corresponding logarithms of beliefs and messages to changes in the log potentials $\log \psi_j(y_j)$ at node $j$. Next, inserting the expansions (6,18,19) into the belief propagation equations (2) and matching first order terms, we arrive at the following update equations for the "super-messages" $M_{\alpha i,k}(x_i, y_k)$ and $N_{i\alpha,k}(x_i, y_k)$,

$N_{i\alpha,k}(x_i, y_k) \leftarrow \delta_{ik}\, \delta_{x_i y_k} + \sum_{\beta \in N_i \setminus \alpha} M_{\beta i,k}(x_i, y_k)$  (20)

$M_{\alpha i,k}(x_i, y_k) \leftarrow \sum_{x_{\alpha \setminus i}} \frac{\psi_\alpha(x_\alpha)}{m^0_{\alpha i}(x_i)} \prod_{j \in N_\alpha \setminus i} n^0_{j\alpha}(x_j) \sum_{j \in N_\alpha \setminus i} N_{j\alpha,k}(x_j, y_k)$  (21)

The super-messages are initialized at $M_{\alpha i,k} = N_{i\alpha,k} = 0$ and updated using (20,21) until convergence. Just as for belief propagation, where messages are normalized to avoid numerical over- or underflow, after each update the super-messages are "normalized" as follows,

$M_{\alpha i,k}(x_i, y_k) \leftarrow M_{\alpha i,k}(x_i, y_k) - \sum_{x_i} M_{\alpha i,k}(x_i, y_k)$  (22)

and similarly for $N_{i\alpha,k}$.
After the above fixed point equations have converged, we compute the response matrix $R_{ij}(x_i, x_j)$ by again inserting the expansions (6,17,19) into (3) and matching first order terms,

$R_{ij}(x_i, x_j) = \delta_{ij}\, \delta_{x_i x_j} + \sum_{\alpha \in N_i} M_{\alpha i,j}(x_i, x_j)$  (23)

The constraints (14) (which follow from the normalization of $b_i(x_i; \theta)$ and $b^0_i(x_i)$) translate into $\sum_{x_i} b^0_i(x_i)\, R_{ij}(x_i, y_j) = 0$, and it is not hard to verify that the following shift can be applied to accomplish this,

$R_{ij}(x_i, y_j) \leftarrow R_{ij}(x_i, y_j) - \sum_{x_i} b^0_i(x_i)\, R_{ij}(x_i, y_j)$  (24)

Finally, combining (17) with (12), we get

$C_{ij}(x_i, x_j) = b^0_i(x_i)\, R_{ij}(x_i, x_j)$  (25)

Theorem 4 If the factor graph has no loops then the linear response estimates defined in (25) are exact. Moreover, there exists a scheduling of the super-messages such that the algorithm converges after just one iteration (i.e. every message is updated just once).

Sketch of Proof: Both results follow from the fact that belief propagation on tree structured factor graphs computes the exact single node marginals for arbitrary $\theta$. Since the super-messages are the first order terms of the BP updates with arbitrary $\theta$, we can invoke the exact linear response theorem given by (8) and (9) to claim that the algorithm converges to the exact joint pairwise marginal distributions. $\Box$

For graphs with cycles, BP is not guaranteed to converge. We can however still prove the following strong result.

Theorem 5 If the messages $\{m^0_{\alpha i}(x_i), n^0_{i\alpha}(x_i)\}$ have converged to a stable fixed point, then the update equations for the super-messages (20,21,22) will also converge to a unique stable fixed point, using any scheduling of the super-messages.

Sketch of Proof³: We first note that the updates (20,21,22) form a linear system of equations which can only have one stable fixed point. The existence and stability of this fixed point is proven by observing that the first order term is identical to the one obtained from a linear expansion of the BP equations (2) around its stable fixed point. Finally, the Stein-Rosenberg theorem guarantees that any scheduling will converge to the same fixed point. $\Box$

³For a more detailed proof of the above two theorems we refer to [9].

5 Inverting Matrices for Linear Response

In this section we describe an alternative method to compute $\frac{\partial b_i(x_i)}{\partial \theta_k(x_k)}$ by first computing $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$ and then inverting the matrix formed by flattening $\{i, x_i\}$ into a row index and $\{k, x_k\}$ into a column index. This method is a direct extension of [2]. The intuition is that while perturbations in a single $\theta_i(x_i)$ affect the whole system, perturbations in a single $b_i(x_i)$ (while keeping the others fixed) affect each subsystem $\alpha \in A$ independently (see [8]). This makes it easier to compute $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$ than to compute $\frac{\partial b_i(x_i)}{\partial \theta_k(x_k)}$. First we propose minimal representations for $b_i$, $\theta_i$ and the messages. We assume that for each node $i$ there is a distinguished value $x_i = 0$. Set $\theta_i(0) = 0$ while functionally defining $b_i(0) = 1 - \sum_{x_i \ne 0} b_i(x_i)$. Now the matrix formed by $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$ for each $i, k$ and $x_i, x_k \ne 0$ is invertible, and its inverse gives us the desired covariances for $x_i, x_k \ne 0$. Values for $x_i = 0$ or $x_k = 0$ can then be computed using (14). We will also need minimal representations for the messages. This can be achieved by defining new quantities $\lambda_{i\alpha}(x_i) = \log \frac{n_{i\alpha}(x_i)}{n_{i\alpha}(0)}$ for all $i$ and $x_i \ne 0$. The $\lambda_{i\alpha}$'s can be interpreted as Lagrange multipliers that enforce the consistency constraints (5) [10]. We will use these multipliers instead of the messages in this section.
Re-expressing the fixed point equations (2,3) in terms of the $b_i$'s and $\lambda_{i\alpha}$'s only, and introducing the perturbations $\theta_i$, we get:

$\Big(\frac{b_i(x_i)}{b_i(0)}\Big)^{c_i} = \frac{\psi_i(x_i)}{\psi_i(0)}\, e^{\theta_i(x_i)} \prod_{\alpha \in N_i} e^{-\lambda_{i\alpha}(x_i)} \qquad \text{for all } i,\; x_i \ne 0$  (26)

$b_i(x_i) = \frac{\sum_{x_{\alpha \setminus i}} \psi_\alpha(x_\alpha) \prod_{j \in N_\alpha} e^{\lambda_{j\alpha}(x_j)}}{\sum_{x_\alpha} \psi_\alpha(x_\alpha) \prod_{j \in N_\alpha} e^{\lambda_{j\alpha}(x_j)}} \qquad \text{for all } i,\; \alpha \in N_i,\; x_i \ne 0$  (27)

Differentiating the logarithm of (26) with respect to $b_k(x_k)$, we get

$\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)} = c_i\, \delta_{ik} \Big( \frac{\delta_{x_i x_k}}{b_i(x_i)} + \frac{1}{b_i(0)} \Big) + \sum_{\alpha \in N_i} \frac{\partial \lambda_{i\alpha}(x_i)}{\partial b_k(x_k)}$  (28)

remembering that $b_i(0)$ is a function of $b_i(x_i)$, $x_i \ne 0$. Notice that we need values for $\frac{\partial \lambda_{i\alpha}(x_i)}{\partial b_k(x_k)}$ in order to solve for $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$. Since perturbations in $b_k(x_k)$ (while keeping the other $b_j$'s fixed) do not affect nodes not directly connected to $k$, we have $\frac{\partial \lambda_{i\alpha}(x_i)}{\partial b_k(x_k)} = 0$ for $k \notin \alpha$. When $k \in \alpha$, these can in turn be obtained by solving, for each $\alpha$, a matrix inverse. Differentiating (27) by $b_k(x_k)$, we obtain

$\delta_{ik}\, \delta_{x_i x_k} = \sum_{j \in \alpha} \sum_{x_j \ne 0} C^\alpha_{ij}(x_i, x_j)\, \frac{\partial \lambda_{j\alpha}(x_j)}{\partial b_k(x_k)}$  (29)

$C^\alpha_{ij}(x_i, x_j) = \begin{cases} b_\alpha(x_i, x_j) - b_i(x_i)\, b_j(x_j) & \text{if } i \ne j \\ b_i(x_i)\, \delta_{x_i x_j} - b_i(x_i)\, b_j(x_j) & \text{if } i = j \end{cases}$  (30)

for each $i, k \in N_\alpha$ and $x_i, x_k \ne 0$. Flattening the indices in (29) (varying $i, x_i$ over rows and $k, x_k$ over columns), the LHS becomes the identity matrix, while the RHS is a product of two matrices. The first is a covariance matrix $C_\alpha$ whose $ij$th block is $C^\alpha_{ij}(x_i, x_j)$, while the second matrix consists of all the desired derivatives $\frac{\partial \lambda_{j\alpha}(x_j)}{\partial b_k(x_k)}$.

Figure 1: $L_1$-error in covariances for MF+LR, BP, BP+LR and "conditioning". The dashed line is the baseline ($C = 0$). The results are separately plotted for neighboring nodes (a), next-to-nearest neighboring nodes (b) and the remaining nodes (c).
Hence the derivatives are given as elements of the inverse covariance matrix $C^{-1}_\alpha$. Finally, plugging the values of $\frac{\partial \lambda_{j\alpha}(x_j)}{\partial b_k(x_k)}$ into (28) now gives $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$, and inverting that matrix will now give us the desired approximate covariances over the whole graph. Interestingly, the method only requires access to the beliefs at the local minimum, not to the potentials or Lagrange multipliers.

6 Experiment

The accuracy of the estimated covariances $C_{ij}(x_i, x_j)$ in the LR approximation was studied on a 6×6 square grid with only nearest neighbors connected and 3 states per node. The solid curves in figure 1 represent the error in the estimates for: 1) mean field + LR approximation [2, 9], 2) BP estimates for neighboring nodes with $b_{EDGE} = b_\alpha$ in equation (3), 3) BP+LR, and 4) "conditioning", where $b_{ij}(x_i, x_j) = b_{i|j}(x_i|x_j)\, b^{BP}_j(x_j)$ and $b_{i|j}(x_i|x_j)$ is computed by running BP $N \cdot D$ times with $x_j$ clamped at a specific state (this has the same computational complexity as BP+LR). $C$ was computed as $C_{ij} = b_{ij} - b_i b_j$, with $\{b_i, b_j\}$ the marginals of $b_{ij}$, and symmetrizing the result. The error was computed as the absolute difference between the estimated and the true values, averaged over pairs of nodes and their possible states, and averaged over 25 random draws of the network. An instantiation of a network was generated by randomly drawing the logarithm of the edge potentials from a zero mean Gaussian with a standard deviation ranging between $[0, 2]$. The node potentials were set to 1. From these experiments we conclude that "conditioning" and BP+LR have similar accuracy and significantly outperform MF+LR and BP, while "conditioning" performs slightly better than BP+LR. The latter does however satisfy some desirable properties which are violated by conditioning (see section 7 for further discussion).
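The covariance computation used in these experiments, $C_{ij} = b_{ij} - b_i b_j$ followed by symmetrization, can be sketched as follows (function name ours; we symmetrize by averaging the estimates for the two orderings of the pair, which is one reasonable reading of "symmetrizing the result"):

```python
import numpy as np

def symmetrized_covariance(b_ij, b_ji):
    """C_ij = b_ij - b_i b_j with b_i, b_j the marginals of the pairwise
    estimate, averaged with the transpose of the reversed-pair estimate."""
    C_ij = b_ij - np.outer(b_ij.sum(1), b_ij.sum(0))
    C_ji = b_ji - np.outer(b_ji.sum(1), b_ji.sum(0))
    return 0.5 * (C_ij + C_ji.T)
```

When the two pairwise estimates are already consistent (b_ji equal to the transpose of b_ij), the symmetrization changes nothing; it only matters for estimates, like conditioning, that can disagree between the two orderings.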
7 Discussion

In this paper we propose to estimate covariances as follows: first observe that the log partition function is the cumulant generating function; next define its conjugate dual, the Gibbs free energy, and approximate it; finally transform back to obtain a local convex approximation to the log partition function, from which the covariances can be estimated. The computational complexity of the iterative linear response algorithm scales as O(N·E·D³) per iteration (N = #nodes, E = #edges, D = #states per node). The noniterative algorithm scales slightly worse, O(N³·D³), but is based on a matrix inverse for which very efficient implementations exist. A question that remains open is whether we can improve the efficiency of the iterative algorithm when we are only interested in the joint distributions of neighboring nodes. There are still a number of generalizations worth mentioning. Firstly, the same ideas can be applied to the MF approximation [9] and the Kikuchi approximation (see also [5]). Secondly, the presented method easily generalizes to the computation of higher order cumulants. Thirdly, when applying the same techniques to Gaussian random fields, a propagation algorithm results that computes the inverse of the weight matrix exactly [9]. In the case of more general continuous random field models we are investigating whether linear response algorithms can be applied to the fixed points of expectation propagation. The most important distinguishing feature between the proposed LR algorithm and the conditioning procedure described in section 6 is the fact that the covariance estimate is automatically positive semi-definite. Indeed, the idea to include global constraints such as positive semi-definiteness in approximate inference algorithms was proposed in [7].
Other differences include automatic consistency between joint pairwise marginals from LR and node marginals from BP (not true for conditioning) and a convergence proof for the LR algorithm (absent for conditioning, although convergence was not observed to be a problem experimentally). Finally, the non-iterative algorithm is applicable to all local minima of the Bethe-Gibbs free energy, even those that correspond to unstable fixed points of BP.

Acknowledgements

We would like to thank Martin Wainwright for discussion. MW would like to thank Geoffrey Hinton for support. YWT would like to thank Mike Jordan for support.

References

[1] T. Heskes. Stable fixed points of loopy belief propagation are minima of the Bethe free energy. In Advances in Neural Information Processing Systems, volume 15, Vancouver, CA, 2003.
[2] H.J. Kappen and F.B. Rodriguez. Efficient learning in Boltzmann machines using linear response theory. Neural Computation, 10:1137–1156, 1998.
[3] F.R. Kschischang, B. Frey, and H.A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[4] M. Opper and O. Winther. From naive mean field theory to the TAP equations. In Advanced Mean Field Methods: Theory and Practice. MIT Press, 2001.
[5] K. Tanaka. Probabilistic inference by means of cluster variation method and linear response theory. IEICE Transactions on Information and Systems, E86-D(7):1228–1242, 2003.
[6] Y.W. Teh and M. Welling. The unified propagation and scaling algorithm. In Advances in Neural Information Processing Systems, 2001.
[7] M.J. Wainwright and M.I. Jordan. Semidefinite relaxations for approximate inference on graphs with cycles. Technical Report UCB/CSD-3-1226, Computer Science Division, University of California Berkeley, 2003.
[8] M. Welling and Y.W. Teh. Approximate inference in Boltzmann machines. Artificial Intelligence, 143:19–50, 2003.
[9] M. Welling and Y.W. Teh. Linear response algorithms for approximate inference in graphical models. Neural Computation, 16:197–221, 2004.
[10] J.S. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems, volume 13, 2000.
[11] A.L. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, 14(7):1691–1722, 2002.
2003
Online Classification on a Budget

Koby Crammer, Computer Sci. & Eng., Hebrew University, Jerusalem 91904, Israel. kobics@cs.huji.ac.il
Jaz Kandola, Royal Holloway, University of London, Egham, UK. jaz@cs.rhul.ac.uk
Yoram Singer, Computer Sci. & Eng., Hebrew University, Jerusalem 91904, Israel. singer@cs.huji.ac.il

Abstract

Online algorithms for classification often require vast amounts of memory and computation time when employed in conjunction with kernel functions. In this paper we describe and analyze a simple approach for an on-the-fly reduction of the number of past examples used for prediction. Experiments performed with real datasets show that using the proposed algorithmic approach with a single epoch is competitive with the support vector machine (SVM), although the latter, being a batch algorithm, accesses each training example multiple times.

1 Introduction and Motivation

Kernel-based methods are widely used for data modeling and prediction because of their conceptual simplicity and outstanding performance on many real-world tasks. The support vector machine (SVM) is a well known algorithm for finding kernel-based linear classifiers with maximal margin [7]. The kernel trick can be used to provide an effective method to deal with very high dimensional feature spaces as well as to model complex input phenomena via embedding into inner product spaces. However, despite the generalization error being upper bounded by a function of the margin of a linear classifier, it is notoriously difficult to implement such classifiers efficiently. Empirically this often translates into very long training times. A number of alternative algorithms exist for finding a maximal margin hyperplane, many of which have been inspired by Rosenblatt's Perceptron algorithm [6], an online learning algorithm for linear classifiers. The work on SVMs has inspired a number of modifications and enhancements to the original Perceptron algorithm.
These incorporate the notion of margin into the learning and prediction processes whilst exhibiting good empirical performance in practice. Examples of such algorithms include the Relaxed Online Maximum Margin Algorithm (ROMMA) [4], the Approximate Maximal Margin Classification Algorithm (ALMA) [2], and the Margin Infused Relaxed Algorithm (MIRA) [1], all of which can be used in conjunction with kernel functions. A notable limitation of kernel based methods is their computational complexity, since the amount of computer memory that they require to store the so-called support patterns grows linearly with the number of prediction errors. A number of attempts have been made to speed up the training and testing of SVMs by enforcing a sparsity condition. In this paper we devise an online algorithm that is not only sparse but also generalizes well. To achieve this goal our algorithm employs an insertion and deletion process. Informally, it can be thought of as revising the weight vector after each example on which a prediction mistake has been made. Once such an event occurs, the algorithm adds the new erroneous example (the insertion phase), and then immediately searches for past examples that appear to be redundant given the recent addition (the deletion phase). As we describe later, making this adjustment to the algorithm allows us to modify the standard online proof techniques so as to provide a bound on the total number of examples the algorithm keeps. This paper is organized as follows. In Sec. 2 we formalize the problem setting and provide a brief outline of our method for obtaining a sparse set of support patterns in an online setting. In Sec. 3 we present both theoretical and algorithmic details of our approach and provide a bound on the number of support patterns that constitute the cache. Sec. 4 provides experimental details, evaluated on three real world datasets, to illustrate the performance and merits of our sparse online algorithm.
We end the paper with conclusions and ideas for future work.

2 Problem Setting and Algorithms

This work focuses on online additive algorithms for classification tasks. In such problems we are typically given a stream of instance-label pairs (x1, y1), . . . , (xt, yt), . . .. We assume that each instance is a vector xt ∈ Rn and each label belongs to a finite set Y. In this and the next section we assume that Y = {−1, +1}, but relax this assumption in Sec. 4, where we describe experiments with datasets consisting of more than two labels. When dealing with the task of predicting new labels, thresholded linear classifiers of the form h(x) = sign(w · x) are commonly employed. The vector w is typically represented as a weighted linear combination of the examples, namely w = Σt αtytxt where αt ≥ 0. The instances for which αt > 0 are referred to as support patterns. Under this representation, the output of the classifier depends solely on inner-products of the form x · xt; hence kernel functions can easily be employed simply by replacing the standard scalar product with a function K(·, ·) which satisfies Mercer's conditions [7]. The resulting classification rule takes the form h(x) = sign(w · x) = sign(Σt αtytK(x, xt)). The majority of additive online algorithms for classification, for example the well known Perceptron [6], share a common algorithmic structure. These online algorithms typically work in rounds. On the tth round, an online algorithm receives an instance xt, computes the inner-products st = Σi<t αiyiK(xi, xt) and sets the predicted label to be sign(st). The algorithm then receives the correct label yt and evaluates whether ytst ≤ βt. The exact value of the parameter βt depends on the specific algorithm being used for classification. If the result of this test is negative, the algorithm does not modify wt and thus αt is implicitly set to 0. Otherwise, the algorithm modifies its classifier using a predetermined update rule.
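The generic round structure just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code; with β = 0 and unit updates it reduces to the kernel Perceptron, and the data at the bottom is a made-up separable stream.

```python
import numpy as np

def linear_kernel(a, b):
    return float(np.dot(a, b))

def online_additive(X, y, beta=0.0, K=linear_kernel):
    """One pass of the generic additive online scheme: on round t, compute
    s_t from past support patterns, predict sign(s_t), then test y_t s_t."""
    alpha = np.zeros(len(X))
    updates = 0
    for t in range(len(X)):
        s_t = sum(alpha[i] * y[i] * K(X[i], X[t]) for i in range(t))
        if y[t] * s_t <= beta:   # margin test triggers an update
            alpha[t] = 1.0       # Perceptron choice: alpha_t = 1
            updates += 1
    return alpha, updates

# Made-up linearly separable stream
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
alpha, n_updates = online_additive(X, y)
```

On this stream only the first example triggers an update, and the resulting classifier sign(Σi αi yi K(xi, x)) labels all four points correctly.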
Informally we can consider this update to be decomposed into three stages. Firstly, the algorithm chooses a non-negative value for αt (again, the exact choice of the parameter αt is algorithm dependent). Secondly, the prediction vector is replaced with a linear combination of the current vector wt and the example, wt+1 = wt + αtytxt. In a third, optional stage (see for example [4]), the norm of the newly updated weight vector is scaled, wt+1 ← ctwt+1 for some ct > 0. The various online algorithms differ in the way the values of the parameters βt, αt and ct are set. A notable example of an online algorithm is the Perceptron algorithm [6], for which we set βt = 0, αt = 1 and ct = 1. More recent algorithms such as the Relaxed Online Maximum Margin Algorithm (ROMMA) [4], the Approximate Maximal Margin Classification Algorithm (ALMA) [2] and the Margin Infused Relaxed Algorithm (MIRA) [1] can also be described in this framework, although the constants βt, αt and ct are not as simple as the ones employed by the Perceptron algorithm. An important computational consideration needs to be made when employing kernel functions for machine learning tasks, because the amount of memory required to store the so-called support patterns grows linearly with the number of prediction errors.

Input: Tolerance β.
Initialize: Set ∀t αt = 0, w0 = 0, C0 = ∅.
Loop: For t = 1, 2, . . . , T
 • Get a new instance xt ∈ Rn.
 • Predict ŷt = sign(xt · wt−1).
 • Get a new label yt.
 • If yt(xt · wt−1) ≤ β update:
  1. Insert Ct ← Ct−1 ∪ {t}.
  2. Set αt = 1.
  3. Compute wt ← wt−1 + ytαtxt.
  4. DistillCache(Ct, wt, (α1, . . . , αt)).
Output: H(x) = sign(wT · x).

Figure 1: The aggressive Perceptron algorithm with a variable-size cache.

In this paper we shift the focus to the problem of devising online algorithms which are budget-conscious, in that they attempt to keep the number of support patterns small. The approach is attractive for at least two reasons.
Firstly, both the training time and the classification time can be reduced significantly if we store only a fraction of the potential support patterns. Secondly, a classifier with a small number of support patterns is intuitively "simpler", and hence is more likely to exhibit good generalization properties than a complex classifier with a large number of support patterns. (See for instance [7] for formal results connecting the number of support patterns to the generalization error.)

Input: C, w, (α1, . . . , αt).
Loop:
 • Choose i ∈ C such that β ≤ yi((w − αiyixi) · xi).
 • If no such i exists then return.
 • Remove the example i:
  1. w ← w − αiyixi.
  2. αi = 0.
  3. C ← C/{i}.
Return: C, w, (α1, . . . , αt).

Figure 2: The DistillCache procedure.

In Sec. 3 we present a formal analysis and the algorithmic details of our approach. Let us now provide a general overview of how to restrict the number of support patterns in an online setting. Denote by Ct the set of indices of the patterns which constitute the classification vector wt. That is, i ∈ Ct if and only if αi > 0 on round t when xt is received. The online classification algorithms discussed above keep enlarging Ct: once an example is added to Ct it is never deleted. However, as the online algorithm receives more examples, the performance of the classifier improves, and some of the past examples may become redundant and hence can be removed. Put another way, old examples may have been inserted into the cache simply due to the lack of support patterns in the early rounds. As more examples are observed, the old examples may be replaced with new examples whose location is closer to the decision boundary induced by the online classifier. We thus add a new stage to the online algorithm in which we discard a few old examples from the cache Ct. We suggest a modification of the online algorithm structure as follows.
Whenever yt Σi<t αiyiK(xt, xi) ≤ βt, then after adding xt to w and inserting the index t into Ct, we scan the cache Ct for seemingly redundant examples by examining the margin conditions of the old examples in Ct. If such an example is found, we discard it from both the classifier and the cache by updating wt ← wt − αiyixi and setting Ct ← Ct/{i}. The pseudocode for this "budget-conscious" version of the aggressive Perceptron algorithm [3] is given in Fig. 1. We say that the algorithm employs a variable-size cache since we do not explicitly limit the number of support patterns, though we do attempt to discard as many patterns as possible from the cache. A similar modification to the one described for the aggressive Perceptron can be made to all of the online classification algorithms outlined above. In particular, we use a modification of the MIRA [1] algorithm in our experiments.

3 Analysis

In this section we provide our main formal result for the algorithm described in the previous section. Informally, the theorem below states that the actual size of the cache that the algorithm builds is inversely proportional to the square of the best margin that can be achieved on the data. This form of bound is common to numerous online learning algorithms for classification. However, here the bound is on the size of the cache, whereas in common settings the corresponding bounds are on the number of prediction mistakes. The bound also depends on β, the margin used by the algorithm to check whether a new example should be added to the cache and whether old examples attaining a large margin should be discarded. Clearly, the larger the value of β the more often we add examples to the cache.

Theorem 1 Let (x1, y1), . . . , (xT, yT) be an input sequence for the algorithm given in Fig. 1, where xt ∈ Rn and yt ∈ {−1, +1}. Denote by R = maxt ∥xt∥. Assume that there exists a vector u of unit norm (∥u∥ = 1) which classifies the entire sequence correctly with a margin γ = mint yt(u · xt) > 0.
Then the number of support patterns constituting the cache is at most S ≤ (R² + 2β)/γ².

Proof: The proof of the theorem is based on the mistake bound of the Perceptron algorithm [5]. To prove the theorem we bound ∥w_T∥² from above and below and compare the bounds. Denote by α_i^t the weight of the ith example at the end of round t (after stage 4 of the algorithm). Similarly, denote by α̃_i^t the weight of the ith example on round t after stage 3, before calling the DistillCache procedure (Fig. 2). We analogously denote by w_t and w̃_t the corresponding instantaneous classifiers. First, we derive a lower bound on ∥w_T∥ by bounding the term w_T · u from below:

w_T · u = Σ_{t∈C_T} α_t^T y_t (x_t · u) ≥ γ Σ_{t∈C_T} α_t^T = γS.   (1)

We now turn to upper bounding ∥w_T∥². Recall that each example may be added to the cache and removed from the cache a single time. Let us write ∥w_T∥² as a telescopic sum,

∥w_T∥² = (∥w_T∥² − ∥w̃_T∥²) + (∥w̃_T∥² − ∥w_{T−1}∥²) + · · · + (∥w̃_1∥² − ∥w_0∥²).   (2)

We now consider three different scenarios that may occur for each new example. The first case is when we did not insert the tth example into the cache at all. In this case, ∥w̃_t∥² − ∥w_{t−1}∥² = 0. The second scenario is when an example is inserted into the cache and is never discarded in future rounds; thus,

∥w̃_t∥² = ∥w_{t−1} + y_t x_t∥² = ∥w_{t−1}∥² + 2y_t(w_{t−1} · x_t) + ∥x_t∥².

Since we inserted (x_t, y_t), the condition y_t(w_{t−1} · x_t) ≤ β must hold. Combining this with the assumption that the examples are enclosed in a ball of radius R we get ∥w̃_t∥² − ∥w_{t−1}∥² ≤ 2β + R². The last scenario occurs when an example is inserted into the cache on some round t, and is later removed from the cache on round t + p for p > 0. As in the previous case we can bound the value of the summands in Eq. (2):

(∥w̃_t∥² − ∥w_{t−1}∥²) + (∥w_{t+p}∥² − ∥w̃_{t+p}∥²)
 = 2y_t(w_{t−1} · x_t) + ∥x_t∥² − 2y_t(w̃_{t+p} · x_t) + ∥x_t∥²
 = 2[y_t(w_{t−1} · x_t) − y_t((w̃_{t+p} − y_t x_t) · x_t)]
 ≤ 2[β − y_t((w̃_{t+p} − y_t x_t) · x_t)].

Based on the form of the cache update we know that y_t((w̃_{t+p} − y_t x_t) · x_t) ≥ β, and thus (∥w̃_t∥² − ∥w_{t−1}∥²) + (∥w_{t+p}∥² − ∥w̃_{t+p}∥²) ≤ 0. Summarizing all three cases, we see that only the examples which persist in the cache contribute a factor of R² + 2β each to the bound on the telescopic sum of Eq. (2), while the rest of the examples contribute nothing. Hence, we can bound the norm of w_T as follows:

∥w_T∥² ≤ S(R² + 2β).   (3)

We finish the proof by applying the Cauchy-Schwarz inequality and the assumption ∥u∥ = 1. Combining Eq. (1) and Eq. (3) we get

γ²S² ≤ (w_T · u)² ≤ ∥w_T∥² ∥u∥² ≤ S(2β + R²),

which gives the desired bound.

Input: Tolerance β, cache limit n.
Initialize: Set ∀t αt = 0, w0 = 0, C0 = ∅.
Loop: For t = 1, 2, . . . , T
 • Get a new instance xt ∈ Rn.
 • Predict ŷt = sign(xt · wt−1).
 • Get a new label yt.
 • If yt(xt · wt−1) ≤ β update:
  1. If |Ct−1| = n remove one example:
   (a) Find i = arg maxj∈Ct−1 yj((wt−1 − αjyjxj) · xj).
   (b) Update wt−1 ← wt−1 − αiyixi.
   (c) Remove Ct−1 ← Ct−1/{i}.
  2. Insert Ct ← Ct−1 ∪ {t}.
  3. Set αt = 1.
  4. Compute wt ← wt−1 + ytαtxt.
Output: H(x) = sign(wT · x).

Figure 3: The aggressive Perceptron algorithm with a fixed-size cache.

4 Experiments

In this section we describe the experimental methods that were used to compare the performance of standard online algorithms with the new algorithm described above. We also briefly describe another variant that sets a hard limit on the number of support patterns. The experiments were designed with the aim of answering the following questions. First, what is the effect of the number of support patterns on the generalization error (measured in terms of classification accuracy on unseen data)? Second, would the algorithm described in Fig. 2 be able to find an optimal cache size that achieves the best generalization performance? To examine each question separately we used a modified version of the algorithm described by Fig.
2 in which we restricted ourselves to a fixed bounded cache. This modified algorithm (which we refer to as the fixed budget Perceptron) simulates the original Perceptron algorithm with one notable difference: when the number of support patterns exceeds a pre-determined limit, it chooses a support pattern from the cache and discards it. With this modification the number of support patterns can never exceed the pre-determined limit. This modified algorithm is described in Fig. 3. It deletes the example which seemingly attains the highest margin after the removal of the example itself (line 1(a) in Fig. 3).

Name     Training Examples    Test Examples    Classes    Attributes
mnist    60000                10000            10         784
letter   16000                4000             26         16
usps     7291                 2007             10         256

Table 1: Description of the datasets used in the experiments.

Despite the simplicity of the original Perceptron algorithm [6], its good generalization performance on many datasets is remarkable. During the last few years a number of other additive online algorithms have been developed [4, 2, 1] that have shown better performance on a number of tasks. In this paper, we have preferred to embed these ideas into another online algorithm and so start with a higher baseline performance. We have chosen the Margin Infused Relaxed Algorithm (MIRA) as our baseline algorithm since it has exhibited good generalization performance in previous experiments [1] and has the additional advantage that it is designed to solve multiclass classification problems directly, without any recourse to performing reductions. The algorithms were evaluated on three natural datasets: mnist1, usps2 and letter3. The characteristics of these datasets are summarized in Table 1. A comprehensive overview of the performance of various algorithms on these datasets can be found in a recent paper [2].
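The two cache policies can be sketched together in code. This is a minimal illustrative reconstruction, not the authors' implementation, assuming a linear kernel and a made-up separable stream: `budget_perceptron` follows Fig. 1 with the Fig. 2 distillation step, `evict_max_margin` implements the deletion rule of line 1(a) in Fig. 3, and the Theorem 1 bound S ≤ (R² + 2β)/γ² is evaluated on the same data.

```python
import numpy as np

def distill_cache(C, w, alpha, X, y, beta):
    """Fig. 2 sketch: repeatedly discard cached examples whose margin is
    still at least beta after removing their own contribution from w."""
    changed = True
    while changed:
        changed = False
        for i in list(C):
            w_minus = w - alpha[i] * y[i] * X[i]
            if y[i] * np.dot(w_minus, X[i]) >= beta:
                w = w_minus
                alpha[i] = 0.0
                C.remove(i)
                changed = True
    return w

def budget_perceptron(X, y, beta):
    """Fig. 1 sketch: aggressive Perceptron with a variable-size cache."""
    alpha = np.zeros(len(X))
    w = np.zeros(X.shape[1])
    C = set()
    for t in range(len(X)):
        if y[t] * np.dot(X[t], w) <= beta:   # margin test triggers an update
            C.add(t)
            alpha[t] = 1.0
            w = w + y[t] * X[t]
            w = distill_cache(C, w, alpha, X, y, beta)
    return w, C, alpha

def evict_max_margin(C, w, alpha, X, y):
    """Fig. 3, line 1(a) sketch: delete the cached example attaining the
    highest margin once its own contribution is subtracted."""
    i = max(C, key=lambda j: y[j] * np.dot(w - alpha[j] * y[j] * X[j], X[j]))
    w = w - alpha[i] * y[i] * X[i]
    alpha[i] = 0.0
    C.remove(i)
    return w

# Made-up separable stream; u = (1, 0) separates it with margin gamma = 1.
X = np.array([[1.0, 0.5], [2.0, 1.0], [-1.5, -0.5], [-1.0, -1.0]])
y = np.array([1, 1, -1, -1])
beta = 0.1

w, C, alpha = budget_perceptron(X, y, beta)
R2 = max(float(np.dot(x, x)) for x in X)       # R^2
gamma = min(yt * xt[0] for xt, yt in zip(X, y))  # margin of u = (1, 0)
S_bound = (R2 + 2 * beta) / gamma ** 2          # Theorem 1: |C| <= S_bound
```

On this toy stream a single example remains in the cache, well below the Theorem 1 bound (R² + 2β)/γ² = 5.2.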
Since all of the algorithms that we evaluated are online, it is not implausible for the specific ordering of examples to affect the generalization performance. We thus report results averaged over 11 random permutations for usps and letter and over 5 random permutations for mnist. No free parameter optimization was carried out; instead we simply used the values reported in [1]. More specifically, the margin parameter was set to β = 0.01 for all algorithms and for all datasets. A homogeneous polynomial kernel of degree 9 was used when training on the mnist and usps data sets, and an RBF kernel for the letter data set. (The variance of the RBF kernel was identical to the one used in [1].) We evaluated the performance of four algorithms in total. The first algorithm was the standard MIRA online algorithm, which does not incorporate any budget constraints. The second algorithm is the version of MIRA described in Fig. 3 which uses a fixed limited budget; here we enumerated over the cache size limit in each experiment we performed. The different sizes that we tested are dataset dependent, but for each dataset we evaluated at least 10 different sizes. We would like to note that such an enumeration cannot be done in an online fashion, and the goal of employing the algorithm with a fixed-size cache is to underscore the merit of the truly adaptive algorithm. The third algorithm is the version of MIRA described in Fig. 2 that adapts the cache size during the running of the algorithm. We also report additional results for a multiclass version of the SVM [1].
1 Available from http://www.research.att.com/~yann
2 Available from ftp.kyb.tuebingen.mpg.de
3 Available from http://www.ics.uci.edu/~mlearn/MLRepository.html

Figure 4: Results on the three data sets: mnist (left), usps (center) and letter (right). Each point in a plot designates the test error (y-axis) vs. the number of support patterns used (x-axis). Four algorithms are compared: SVM, MIRA, MIRA with a fixed cache size and MIRA with a variable cache size.

Whilst this algorithm is not online and during the training process it considers all the examples at once, it serves as our gold-standard algorithm against which we want to compare performance.
Note that for the multiclass SVM we report the results using the best set of parameters, which does not coincide with the set of parameters used for the online training. The results are summarized in Fig. 4. This figure is composed of plots organized in three columns, each corresponding to a different dataset: mnist (left), usps (center) and letter (right). In each plot the x-axis designates the number of support patterns the algorithm uses. The results for the fixed-size cache are connected with a line to emphasize the dependency of the performance on the size of the cache. The top row of the three columns shows the generalization error; the y-axis thus designates the test error of an algorithm on unseen data at the end of training. Looking at the error of the algorithm with a fixed-size cache reveals that there is a broad range of cache sizes for which the algorithm exhibits good performance. In fact, for mnist and usps there are sizes for which the test error of the algorithm is better than the SVM's test error. Naturally, we cannot fix the correct size in hindsight, so the question is whether the algorithm with a variable cache size is a viable automatic size-selection method. Analyzing each of the datasets in turn reveals that this is indeed the case: the algorithm obtains a very similar number of support patterns and test error when compared to the SVM method. The results are somewhat less impressive for the letter dataset, which contains fewer examples per class. One possible explanation is that the algorithm had fewer chances to modify and distill the cache. Nonetheless, overall the results are remarkable given that all the online algorithms make a single pass through the data and the variable-size method finds a very good cache size while remaining comparable to the SVM in terms of performance.
The MIRA algorithm, which does not incorporate any form of example insertion or deletion in its algorithmic structure, obtains the poorest level of performance, not only in terms of generalization error but also in terms of the number of support patterns. The plot of online training error against the number of support patterns, in row 2 of Fig. 4, can be considered a good on-the-fly validation of generalization performance. As the plots indicate, for the fixed and adaptive versions of the algorithm, on all the datasets, a low online training error translates into good generalization performance. Comparing the test error plots with the online error plots we see a close similarity between the qualitative behavior of the two errors. Hence, one can use the online error, which is easy to evaluate, to choose a good cache size for the fixed-size algorithm. The third row gives the online training margin errors, which translate directly into the number of insertions into the cache. Here we see that the good test error and compactness of the algorithm with a variable cache size come at a price: the algorithm makes significantly more insertions into the cache than the fixed-size version of the algorithm. However, as the upper two sets of plots indicate, the surplus in insertions is later taken care of by excess deletions, and the end result is very good overall performance. In summary, the online algorithm with a variable cache and the SVM obtain similar levels of generalization error and similar numbers of support patterns. While the SVM is still somewhat better in both respects on the letter dataset, the online algorithm is much simpler to implement and performs a single sweep through the training data.

5 Summary

We have described and analyzed a new sparse online algorithm that attempts to deal with the computational problems implicit in classification algorithms such as the SVM.
The proposed method was empirically tested; both the size of the resulting classifier and its error rate are comparable to those of the SVM. There are a few possible extensions and enhancements. We are currently looking at alternative criteria for the deletion of examples from the cache. For instance, the weights of examples might convey information on their importance for accurate classification. Incorporating prior knowledge into the insertion and deletion scheme might also prove important. We hope that such enhancements will make the proposed approach a viable alternative to the SVM and other batch algorithms.

Acknowledgements: The authors would like to thank John Shawe-Taylor for many helpful comments and discussions. This research was partially funded by the EU project KerMIT No. IST-2000-25341.

References

[1] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991, 2003.
[2] C. Gentile. A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2:213–242, 2001.
[3] W. Krauth and M. Mézard. Learning algorithms with optimal stability in neural networks. Journal of Physics A, 20:745, 1987.
[4] Y. Li and P. M. Long. The relaxed online maximum margin algorithm. Machine Learning, 46(1–3):361–387, 2002.
[5] A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, volume XII, pages 615–622, 1962.
[6] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958. (Reprinted in Neurocomputing, MIT Press, 1988.)
[7] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
2003
Pairwise Clustering and Graphical Models Noam Shental Computer Science & Eng. Center for Neural Computation Hebrew University of Jerusalem Jerusalem, Israel 91904 fenoam@cs.huji.ac.il Assaf Zomet Computer Science & Eng. Hebrew University of Jerusalem Jerusalem, Israel 91904 zomet@cs.huji.ac.il Tomer Hertz Computer Science & Eng. Center for Neural Computation Hebrew University of Jerusalem Jerusalem, Israel 91904 tomboy@cs.huji.ac.il Yair Weiss Computer Science & Eng. Center for Neural Computation Hebrew University of Jerusalem Jerusalem, Israel 91904 yweiss@cs.huji.ac.il Abstract Significant progress in clustering has been achieved by algorithms that are based on pairwise affinities between the datapoints. In particular, spectral clustering methods have the advantage of being able to divide arbitrarily shaped clusters and are based on efficient eigenvector calculations. However, spectral methods lack a straightforward probabilistic interpretation which makes it difficult to automatically set parameters using training data. In this paper we use the previously proposed typical cut framework for pairwise clustering. We show an equivalence between calculating the typical cut and inference in an undirected graphical model. We show that for clustering problems with hundreds of datapoints exact inference may still be possible. For more complicated datasets, we show that loopy belief propagation (BP) and generalized belief propagation (GBP) can give excellent results on challenging clustering problems. We also use graphical models to derive a learning algorithm for affinity matrices based on labeled data. 1 Introduction Consider the set of points shown in figure 1a. Datasets of this type, where the two clusters are not easily described by a parametric model can be successfully clustered using pairwise clustering algorithms [4, 6, 3]. 
These algorithms start by building a graph whose vertices correspond to datapoints; edges exist between nearby points, with a weight that decreases with distance. Clustering the points is then equivalent to graph partitioning. Figure 1: Clustering as graph partitioning (following [8]). Vertices correspond to datapoints and edges between nearby points are weighted by the distance. A single isolated datapoint is marked by an arrow. How would we define a good partitioning? One option is the minimal cut criterion. Define: cut(A, B) = \sum_{i \in A, j \in B} W(i, j)   (1) where W(i, j) is the strength of the weight between nodes i and j in the graph. The minimal cut criterion finds clusterings that minimize cut(A, B). The advantage of using the minimal cut criterion is that the optimal segmentation can be computed in polynomial time. A disadvantage, pointed out by Shi and Malik [8], is that it will often produce trivial segmentations. Since the cut value grows linearly with the number of edges cut, a single datapoint cut from its neighbors will often have a lower cut value than the desired clustering (e.g., the minimal cut solution separates the full dot in figure 1, instead of producing the desired ‘N’ and ‘I’ clusters). In order to avoid these trivial clusterings, several graph partitioning criteria have been proposed. Shi and Malik suggested the normalized cut criterion, which directly penalizes partitions where one of the groups is small; hence a separation of a single isolated datapoint is not favored. Minimization of the normalized cut criterion is NP-complete, but it can be approximated using spectral methods. Despite the success of spectral methods in a wide range of clustering problems, several problems remain. Perhaps the most important one is the lack of a straightforward probabilistic interpretation.
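The minimal cut pathology is easy to see numerically. The following sketch (our own illustration; the toy graph and weights are invented) evaluates equation 1 on a graph with one weakly attached point:

```python
def cut_value(W, A, B):
    """cut(A, B) = sum of edge weights crossing the partition (equation 1).
    W stores each undirected edge once as {(i, j): weight}."""
    return sum(W.get((i, j), 0.0) + W.get((j, i), 0.0) for i in A for j in B)

# Toy graph: a strongly connected triangle {0, 1, 2}, plus one weakly
# attached point 3 (the "isolated datapoint" of Figure 1).
W = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0, (2, 3): 0.1}

balanced = cut_value(W, {0, 1}, {2, 3})   # cuts two strong edges
isolated = cut_value(W, {0, 1, 2}, {3})   # cuts only the weak edge
print(balanced, isolated)  # → 2.0 0.1
```

Because `isolated < balanced`, minimizing the cut prefers splitting off the single point 3, exactly the trivial segmentation the text describes.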
However, interesting progress in this direction has been made by Meila and Shi [4], who showed a relation between the top eigenvectors and the equilibrium distribution of a random walk on the graph. The typical cut criterion, suggested by Blatt et al [1] and later by Gdalyahu et al [2], is based on a simple probabilistic model. Blatt et al first define a probability distribution over possible partitions by: Pr(A, B) = \frac{1}{Z} e^{-cut(A, B)/T}   (2) where Z is a normalizing constant, and the “temperature” T serves as a free parameter. Under this probability distribution, the most probable partition is simply the minimal cut. Thus performing MAP inference under this probability distribution will still lead to trivial segmentations. However, as Blatt et al pointed out, there is far more information in the full probability distribution over partitions than solely in the MAP partition. For example, consider the pairwise correlation p(i, j), defined for any two neighboring nodes in the graph as the probability that they belong to the same segment: p(i, j) = \sum_{A, B} Pr(A, B) \, SAME(i, j; A, B)   (3) with SAME(i, j; A, B) defined as 1 iff i and j fall on the same side of the partition (both in A, or both in B), and 0 otherwise. Returning to the single isolated datapoint in figure 1: while that datapoint and its neighbors do not appear in the same cluster in the most probable partition, they do appear in the same cluster in the vast majority of partitions. Thus we would expect p(i, j) > 1/2 for that datapoint and its neighbors. Hence the typical cut algorithm of Blatt et al consists of three stages: • Preprocessing: Construct the affinity matrix W so that each node is connected to at most K neighbors. Define the affinities as W(i, j) = e^{-d(i, j)^2 / \sigma^2}, where d(i, j) is the distance between points i and j, and σ is the mean distance to the K-th neighbor. • Estimating pairwise correlations: Use a Markov chain Monte Carlo (MCMC) sampling method to estimate p(i, j) at each temperature T.
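The preprocessing stage can be sketched as follows. This is a minimal dense O(n²) implementation written for this note, not the authors' code; it follows the recipe above (Gaussian affinities, σ set to the mean distance to the K-th neighbor, edges restricted to K nearest neighbors):

```python
import numpy as np

def knn_affinity(X, K):
    """Gaussian affinities W(i, j) = exp(-d(i, j)^2 / sigma^2), keeping only
    each point's K nearest neighbors; sigma is the mean distance to the
    K-th neighbor, following the preprocessing step of Blatt et al."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # pairwise distances
    n = len(X)
    # distance to each point's K-th nearest neighbor (index 0 is the point itself)
    sigma = np.sort(D, axis=1)[:, K].mean()
    W = np.exp(-D ** 2 / sigma ** 2)
    np.fill_diagonal(W, 0.0)
    # keep only the K nearest neighbors of each point, then symmetrize
    mask = np.zeros_like(W, dtype=bool)
    order = np.argsort(D, axis=1)
    for i in range(n):
        mask[i, order[i, 1:K + 1]] = True
    W *= (mask | mask.T)
    return W
```

The symmetrization (`mask | mask.T`) keeps an edge if either endpoint counts the other among its K nearest neighbors; that detail is our assumption, as the paper does not specify it.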
• Postprocessing: Define the typical cut partition as the connected components of the graph after removing any links for which p(i, j) < 1/2. For a given W(i, j) the algorithm has a single free temperature parameter T (see eq. 5). This parameter implicitly defines the number of clusters. At zero temperature all the datapoints reside in one cluster (this trivially minimizes the cut value), and at high temperatures every datapoint forms a separate cluster. In this paper we show that calculating the typical cut is equivalent to performing inference in an undirected graphical model. We use this equivalence to show that in problems with hundreds of datapoints, the typical cut may be calculated exactly. We show that when exact inference is impossible, loopy belief propagation (BP) and generalized belief propagation (GBP) may give an excellent approximation at very little computational cost. Finally, we use the standard algorithm for ML estimation in graphical models to derive a learning algorithm for affinity matrices based on labeled data.¹ 2 The connection between typical cuts and graphical models An undirected graphical model with pairwise potentials (see [10] for a review) consists of a graph G and potential functions \Psi_{ij}(x_i, x_j) such that the probability of an assignment x is given by: Pr(x) = \frac{1}{Z} \prod_{<ij>} \Psi_{ij}(x_i, x_j)   (4) where the product is taken over nodes that are connected in the graph G. To connect this to typical cuts, we first define for every partition (A, B) a binary vector x such that x(i) = 0 if i ∈ A and x(i) = 1 if i ∈ B. We then define: \Psi_{ij}(x_i, x_j) = \begin{pmatrix} 1 & e^{-W(i,j)/T} \\ e^{-W(i,j)/T} & 1 \end{pmatrix}   (5) Observation 1: The typical cut probability distribution (equation 2) is equivalent to that induced by a pairwise undirected graphical model (equation 4) whose graph G is the same as the graph used for graph partitioning and whose potentials are given by equation 5.
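Observation 1 can be checked mechanically on a tiny graph: for every binary assignment, the product of the potentials of equation 5 equals e^{-cut/T}, since same-side pairs contribute a factor of 1 and cut pairs contribute e^{-W(i,j)/T}. A brute-force sketch (toy graph and temperature are invented):

```python
import itertools, math

def cut_of(x, W):
    # cut value of the partition encoded by binary labels x
    return sum(w for (i, j), w in W.items() if x[i] != x[j])

def potential_product(x, W, T):
    # product of the pairwise potentials Psi_ij of equation 5 over the edges
    prod = 1.0
    for (i, j), w in W.items():
        prod *= 1.0 if x[i] == x[j] else math.exp(-w / T)
    return prod

W = {(0, 1): 1.0, (1, 2): 0.5, (0, 2): 0.3}
T = 0.7
for x in itertools.product([0, 1], repeat=3):
    assert abs(potential_product(x, W, T) - math.exp(-cut_of(x, W) / T)) < 1e-12
print("Observation 1 verified on all 8 assignments")
```

Here the two sides agree exactly (not merely up to Z), because both are unnormalized; normalizing each by its own sum gives identical distributions.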
So far we have focused on partitioning the graph into two segments, but the equivalence holds for any number of segments q. Let (A_1, A_2, ..., A_q) be a partitioning of the graph into q segments (note that these segments need not be connected in G). Define cut(A_1, A_2, ..., A_q) in direct analogy to equation 1, and: Pr(A_1, A_2, ..., A_q) = \frac{1}{Z} e^{-cut(A_1, A_2, ..., A_q)/T}   (6) The implication of observation 1 is that we can use the powerful tools of graphical models in the context of pairwise clustering. In subsequent sections we provide examples of the benefits of using graphical models to compute typical cuts. ¹Parts of this work appeared previously in [7]. 3 Computing typical cuts using inference in a graphical model Typical cuts have been successfully used for clustering datapoints in R^n [1], using an expensive MCMC procedure to calculate the pairwise correlations p(i, j). Using inference algorithms, we provide a deterministic and more efficient estimate of p(i, j). More specifically, we use inference algorithms to compute the pairwise beliefs over neighboring nodes, b_{ij}(x_i, x_j), and calculate the pairwise correlation as p(i, j) = \sum_{t=1}^{q} b_{ij}(t, t). In cases where the maximal clique size is small enough, we can calculate p(i, j) exactly using the junction tree algorithm. In all other cases we must resort to approximate inference using the BP and GBP algorithms. The following subsections discuss exact and approximate inference for computing typical cuts. 3.1 Exact inference for typical cut clustering The nature of real life clustering problems seems to suggest that exact inference would be intractable due to the clique size of the junction tree. Surprisingly, in our empirical studies we discovered that on many datasets, including benchmark problems from the UCI repository, we obtain “thin” junction trees (with maximal clique size less than 20). Figure 2a shows a representative two-dimensional result.
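For graphs small enough to enumerate, exact correlations can be computed directly from the definition, without a junction tree. The sketch below (our own stand-in for exact inference, on an invented toy graph) also reproduces the behavior described in section 1: the weakly attached point keeps p(i, j) > 1/2 with its neighbor, so the typical cut does not split it off:

```python
import itertools, math

def exact_correlations(W, n, T, q=2):
    """Exact pairwise correlations p(i, j) = Pr(x_i = x_j) under the typical
    cut distribution Pr(x) ∝ exp(-cut(x)/T), by brute-force enumeration of
    all q^n assignments (feasible only for tiny graphs)."""
    Z = 0.0
    corr = {e: 0.0 for e in W}
    for x in itertools.product(range(q), repeat=n):
        p = math.exp(-sum(w for (i, j), w in W.items() if x[i] != x[j]) / T)
        Z += p
        for (i, j) in W:
            if x[i] == x[j]:
                corr[(i, j)] += p
    return {e: c / Z for e, c in corr.items()}

# a strongly connected triangle plus a weakly attached point 3
W = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0, (2, 3): 0.2}
p = exact_correlations(W, n=4, T=0.3)
print(p)
```

At this temperature every edge, including the weak one, has correlation above 1/2, so thresholding at 1/2 leaves all four points in one cluster; raising T would eventually drop p(2, 3) below 1/2 and detach point 3.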
The temperature parameter T was automatically chosen to provide two large clusters. As shown previously by Gdalyahu et al, the typical cut criterion does sensible things: it does not favor segmentation of individual datapoints (as in minimal cut), nor is it fooled by narrow bridges between clusters (as in simple connected components). However, while previous typical cut algorithms approximate p(i, j) using MCMC, in some cases the framework of graphical models lets us calculate p(i, j) exactly and efficiently. Figure 2: Clustering examples with clusters indicated by different markers. In example (a) the pairwise correlations were calculated exactly, while in example (b) we used BP. 3.2 Approximate inference for typical cut clustering Although exact inference is sometimes possible, in the more common case it is infeasible, and p(i, j) can only be estimated using approximate inference algorithms. In this section we discuss approximate inference using the BP and GBP algorithms. Approximate inference using Belief Propagation In BP the pairwise beliefs over neighboring nodes, b_{ij}, are defined using the messages as: b_{ij}(x_i, x_j) = \alpha \Psi_{ij}(x_i, x_j) \prod_{x_k \in N(x_i) \setminus x_j} m_{ki}(x_i) \prod_{x_k \in N(x_j) \setminus x_i} m_{kj}(x_j)   (7) Can this be used as an approximation for pairwise clustering? Observation 2: In the case where the messages are initialized uniformly, the pairwise beliefs calculated by BP are only a function of the local potentials, i.e., b_{ij}(x_i, x_j) ∝ \Psi_{ij}(x_i, x_j). Proof: Due to the symmetry of the potentials, and since the messages are initialized uniformly, all the messages in BP remain uniform. Thus equation 7 simply gives the normalized local potentials. A consequence of observation 2 is that we need to break the symmetry of the problem in order to use BP. We use here the method of conditioning.
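Observation 2 and the need for conditioning can be demonstrated with a minimal parallel loopy BP implementation (a sketch written for this note, not the authors' code; conditioning is implemented as a hard local potential on the clamped node):

```python
import numpy as np

def loopy_bp(psi, n_iter=50, clamp=None):
    """Parallel loopy BP on a binary pairwise model.
    psi maps an edge (i, j) to a 2x2 potential array (rows: x_i, cols: x_j).
    clamp = {node: state} conditions a node via a hard local potential.
    Returns the normalized pairwise beliefs b_ij of equation 7."""
    edges = list(psi)
    nodes = sorted({u for e in edges for u in e})
    nbrs, pot = {u: [] for u in nodes}, {}
    for (i, j), m in psi.items():
        nbrs[i].append(j)
        nbrs[j].append(i)
        pot[(i, j)], pot[(j, i)] = m, m.T
    msg = {(i, j): np.full(2, 0.5) for i in nodes for j in nbrs[i]}  # uniform init
    local = {u: np.ones(2) for u in nodes}
    for u, s in (clamp or {}).items():
        local[u] = np.eye(2)[s]
    def incoming(i, exclude):  # local evidence times messages into i, except from `exclude`
        v = local[i].copy()
        for k in nbrs[i]:
            if k != exclude:
                v *= msg[(k, i)]
        return v
    for _ in range(n_iter):
        new = {}
        for (i, j) in msg:
            m = pot[(i, j)].T @ incoming(i, j)
            new[(i, j)] = m / m.sum()
        msg = new
    beliefs = {}
    for (i, j) in edges:
        b = psi[(i, j)] * np.outer(incoming(i, j), incoming(j, i))
        beliefs[(i, j)] = b / b.sum()
    return beliefs

# Symmetric typical cut potentials on a triangle: with uniform initialization
# the beliefs collapse to the normalized local potentials (observation 2) ...
a = np.exp(-1.0)
P = np.array([[1.0, a], [a, 1.0]])
psi = {(0, 1): P, (1, 2): P, (0, 2): P}
b_free = loopy_bp(psi)
# ... while clamping node 0 (conditioning) breaks the symmetry:
b_cond = loopy_bp(psi, clamp={0: 0})
```

Without conditioning, `b_free[(1, 2)]` is just `P / P.sum()`; after clamping node 0, the belief places more mass on the agreeing states, which is what makes the estimated correlations informative.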
Due to the symmetry of the potentials, if exact inference is used then conditioning on a single node x_c = 1 and calculating the conditional correlations P(x_i = x_j | x_c = 1) gives exactly the same answer as the unconditional correlations p(i, j) = P(x_i = x_j). However, when BP inference is used, clamping the value of x_c causes its outgoing messages to be nonuniform, and as these messages propagate through the graph they break the symmetry used in the proof of observation 2. Empirically, this yields much better approximations of the correlations. In some cases (e.g., when the graph is disconnected) conditioning on a single point does not break the symmetry throughout the graph, and additional points need to be clamped. In order to evaluate the quality of the approximation provided by BP, we compared BP using conditioning against exact inference on the dataset shown in figure 2a. Figure 3 displays the results at two different temperatures, “low” and “high”. Each row presents the clustering solution of exact inference and of BP, and a scatter plot of the correlations over all of the edges using the two methods. At the “low” temperature the approximation almost coincides with the exact values, but at the “high” temperature BP overestimates the correlation values. Figure 3: Clustering results at a “low” temperature (upper row) and a “high” temperature (lower row). The left and middle columns present clustering results of exact inference and of BP, respectively.
The right column compares the values of the correlations provided by the two methods. Each dot corresponds to an edge in the graph. At the “low” temperature most of the correlations are close to 1, hence many edges appear as a single dot. Approximate inference using Generalized Belief Propagation Generalized belief propagation (GBP) algorithms [10] extend the BP algorithm by sending messages that are functions of clusters of variables, and have been shown to provide a better approximation than BP in many applications. Can GBP improve the approximation of pairwise correlations in typical cuts? Our empirical studies show that the performance and convergence of GBP on a general graph, obtained from arbitrary points in R^n, strongly depend on the initial choice of clusters (regions). As also observed by Minka and Qi [5], a specific choice of clusters may yield worse results than BP, or may even cause GBP not to converge. However, it is far from obvious how to choose these clusters. In previous uses of GBP [10] the basic clusters were chosen by hand. In order to use GBP to approximate p(i, j) in a general graph, one must obtain a useful automatic procedure for selecting these initial clusters. We have experimented with various heuristics, but none of them gave good performance. However, in the case of ordered graphs such as 2D grids and images, we have found that GBP gives an excellent approximation when using four neighboring grid points as a region. Figure 4a shows results of GBP approximations for a 30x30 uniform 2D grid. The clique size in a junction tree is of order 2^30, hence exact inference is infeasible. We compare the correlations p(i, j) calculated using an extensive MCMC sampling procedure [9] to those calculated using GBP with the clusters being four neighboring pixels in the graph. GBP converges in only 10 iterations and can be seen to provide an excellent approximation.
Figure 4c presents a comparison of the MCMC correlations with those calculated by GBP on the real 120x80 image shown in figure 4b, with affinity based on color similarity. Figure 4d presents the clustering results, which provide a segmentation of the image. Figure 4: (a) Scatter plot of pairwise correlations in a 30x30 grid, using MCMC [9] and GBP. Each dot corresponds to the pairwise correlation of one edge at a specific temperature. Notice the excellent correspondence between GBP and MCMC. (c) The same comparison performed over the image in (b). (d) A gray level map of the 15 largest clusters. 4 Learning Affinity Matrices from Labeled Datasets As noted in the introduction, using graphical models to compute typical cuts can also be advantageous for other aspects of the clustering problem, apart from computing p(i, j). One such important advantage is learning the affinity matrix W(i, j) from labeled data. In many problems, there are multiple ways to define affinities between any two datapoints. For example, in image segmentation, where the nodes are pixels, one can define affinity based on color similarity, texture similarity, or some combination of the two. Our goal is to use a labeled training set of manually segmented images to learn the “right” affinities. More specifically, let us assume the “correct” affinity is a linear combination of a set of known affinity functions \{f_k\}_{k=1}^{K}, each corresponding to a different feature of the data. Hence the affinity between neighboring points i and j is defined by: W(i, j) = \sum_{k=1}^{K} \alpha_k f_k(i, j). In addition, assume we are given a labeled training sample, which consists of the following: (i) a graph in which neighboring nodes are connected by edges; (ii) affinity values f_k(i, j); (iii) a partition of the graph, x. Our goal is to estimate the affinity mixing coefficients α_k.
This problem can be solved using the graphical model defined by the typical cut probability distribution (equation 6). Recall that the probability of a partition x is defined as P(x) = \frac{1}{Z} e^{-cut(x)} = \frac{1}{Z} e^{-\sum_{<ij>} (1 - \delta(x_i - x_j)) W(i, j)} = \frac{1}{Z(\alpha)} e^{-\sum_{k=1}^{K} \alpha_k fcut_k(x)}   (8) where we have defined fcut_k(x) = \sum_{<ij>} (1 - \delta(x_i - x_j)) f_k(i, j). fcut_k(x) is the cut value defined by x when only taking into account the affinity function f_k; hence it can be computed from the training sample. Differentiating the log likelihood with respect to α_k gives the exponential family equation: \frac{\partial \ln P(x)}{\partial \alpha_k} = -fcut_k(x) + \langle fcut_k \rangle_\alpha   (9) Equation 9 gives an intuitive characterization of the optimal α: it is the one for which \langle fcut_k \rangle_\alpha = fcut_k(x), i.e., at the optimal α the expected value of the cut for each feature separately matches exactly the value of that cut in the training set. Since we are dealing with the exponential family, the log likelihood is concave and the ML solution can be found using gradient ascent. To calculate the gradient explicitly, we use the linearity of expectation: \langle fcut_k \rangle_\alpha = \sum_{<ij>} \langle 1 - \delta(x_i - x_j) \rangle_\alpha f_k(i, j) = \sum_{<ij>} (1 - p(i, j)_\alpha) f_k(i, j) where p(i, j)_\alpha are the pairwise correlations for the given values of α. Equation 9 is visually similar to the learning rule derived by Meila and Shi [4], but the cost function they minimize is actually different, hence the expectations are taken with respect to completely different distributions. 4.1 Combining learning and GBP approximate inference We experimented with the learning algorithm on images, with the pixel grid as the graph and using GBP for approximating p(i, j)_\alpha. The three pixel affinity functions, \{f_k\}_{k=1}^{3}, correspond to the intensity differences in the R, G, B color channels. We used a standard transformation of intensity difference to an affinity function by a Gaussian kernel. The left pane in figure 5 shows a synthetic example. There is one training image (figure 5a) but two different manual segmentations (figures 5b,c).
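The gradient ascent of equation 9 can be sketched end to end on a toy problem (our own illustration; the features, training partition, and step size are invented, and brute-force enumeration stands in for GBP when computing the expectations):

```python
import itertools, math

def fcut(x, f):
    # fcut_k(x): sum of f_k(i, j) over the edges cut by partition x
    return sum(v for (i, j), v in f.items() if x[i] != x[j])

def expected_fcuts(alpha, feats, n):
    """Brute-force <fcut_k>_alpha under P(x) ∝ exp(-sum_k alpha_k fcut_k(x)),
    enumerating all 2^n partitions (feasible only at toy sizes)."""
    Z, exp_f = 0.0, [0.0] * len(feats)
    for x in itertools.product([0, 1], repeat=n):
        p = math.exp(-sum(a * fcut(x, f) for a, f in zip(alpha, feats)))
        Z += p
        for k, f in enumerate(feats):
            exp_f[k] += p * fcut(x, f)
    return [e / Z for e in exp_f]

# two made-up affinity features on a 4-node path, and a training partition
feats = [{(0, 1): 1.0, (1, 2): 0.2, (2, 3): 1.0},
         {(0, 1): 0.1, (1, 2): 1.0, (2, 3): 0.1}]
x_train = (0, 0, 1, 1)          # the "correct" clustering cuts only edge (1, 2)
alpha = [0.5, 0.5]
for _ in range(200):            # gradient ascent on the log likelihood (eq. 9)
    grads = [e - fcut(x_train, f)
             for e, f in zip(expected_fcuts(alpha, feats, 4), feats)]
    alpha = [a + 0.1 * g for a, g in zip(alpha, grads)]
print(alpha)
```

As training proceeds, the expected feature cuts move toward the training values: the weight on feature 1 (whose strong edges are uncut in x_train) grows, while the weight on feature 2 (whose strong edge is cut) shrinks.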
The first and second training segmentations are based on illumination-covariant and illumination-invariant affinities, respectively. We used gradient ascent as given by equation 9. Figure 5d shows a novel image, and figures 5e,f show two different pairwise correlations of this image using the learned α. Indeed, the algorithm learns either to ignore or to attend to illumination, depending on the training set. The right pane in figure 5 shows results on real images. For real images, we found that a preprocessing of the image colors is required in order to learn a shadow-invariant linear transformation. This was done by saturating the image colors. The training segmentation (figures 5a,b,c) ignores shadows. In the novel image (figure 5d) the most salient edge is a shadow on the face. Nevertheless, the segmentation based on the learned affinity (figure 5e) ignores the shadows and segments the facial features from each other. In contrast, a typical cut segmentation which uses a naive affinity function (combining the three color channels with uniform weights) segments mostly based on shadows (figure 5f). 5 Discussion Pairwise clustering algorithms have a wide range of applicability due to their ability to find clusters with arbitrary shapes. In this paper we have shown how pairwise clustering can be Figure 5: Left pane: A synthetic example of learning the affinity function. The top row presents the training set: the input image (a), and the clusters of the first (b) and second (c) experiments. The bottom row presents the result of the learning algorithm: the input image (d), and the marginal probabilities p(i, j) (eqn. 3) in the first (e) and second (f) experiments. Right pane: Learning a color affinity function which is invariant to shadows. The top row shows the training data set: the input image (a), the pre-processed image (b), and the manual segmentation (invariant to shadows) (c).
The bottom row presents, from left to right, the pre-processed test image (d), an edge map produced by learning the shadow-invariant affinity (e), and an edge map produced by a naive affinity function combining the 3 color channels with uniform weights (f). The edge maps were computed by thresholding the pairwise correlations p(i, j) (eqn. 3). See text for details. Both illustrations are better viewed in color. mapped to an inference problem in a graphical model. This equivalence allowed us to use the standard tools of graphical models: exact and approximate inference, and ML learning. We showed how to combine approximate inference and ML learning in the challenging problem of learning affinities for images from labeled data. We have only begun to use the many tools of graphical models. We are currently working on learning from unlabeled sets and on other approximate inference algorithms. References [1] M. Blatt, S. Wiseman, and E. Domany. Data clustering using a model granular magnet. Neural Computation, 9:1805–1842, 1997. [2] Y. Gdalyahu, D. Weinshall, and M. Werman. Self organization in vision: Stochastic clustering for image segmentation, perceptual grouping, and image database organization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10):1053–1074, 2001. [3] T. Hofmann and J. M. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1):1–14, 1997. [4] M. Meila and J. Shi. Learning segmentation by random walks. In Advances in Neural Information Processing Systems 14, 2001. [5] T. Minka and Y. Qi. Tree-structured approximations by expectation propagation. In Advances in Neural Information Processing Systems 16, 2003. [6] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2001. [7] N. Shental, A. Zomet, T. Hertz, and Y. Weiss. Learning and inferring image segmentations using the GBP typical cut.
In Proceedings of the 9th International Conference on Computer Vision, 2003. [8] J. Shi and J. Malik. Normalized cuts and image segmentation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 731–737, 1997. [9] J. S. Wang and R. H. Swendsen. Cluster Monte Carlo algorithms. Physica A, 167:565–579, 1990. [10] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. In G. Lakemeyer and B. Nebel, editors, Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2003.
Link Prediction in Relational Data Ben Taskar Ming-Fai Wong Pieter Abbeel Daphne Koller {btaskar, mingfai.wong, abbeel, koller}@cs.stanford.edu Stanford University Abstract Many real-world domains are relational in nature, consisting of a set of objects related to each other in complex ways. This paper focuses on predicting the existence and the type of links between entities in such domains. We apply the relational Markov network framework of Taskar et al. to define a joint probabilistic model over the entire link graph — entity attributes and links. The application of the RMN algorithm to this task requires the definition of probabilistic patterns over subgraph structures. We apply this method to two new relational datasets, one involving university webpages, and the other a social network. We show that the collective classification approach of RMNs, and the introduction of subgraph patterns over link labels, provide significant improvements in accuracy over flat classification, which attempts to predict each link in isolation. 1 Introduction Many real world domains are richly structured, involving entities of multiple types that are related to each other through a network of different types of links. Such data poses new challenges to machine learning. One challenge arises from the task of predicting which entities are related to which others and what are the types of these relationships. For example, in a data set consisting of a set of hyperlinked university webpages, we might want to predict not just which page belongs to a professor and which to a student, but also which professor is which student’s advisor. In some cases, the existence of a relationship will be predicted by the presence of a hyperlink between the pages, and we will have only to decide whether the link reflects an advisor-advisee relationship. In other cases, we might have to infer the very existence of a link from indirect evidence, such as a large number of co-authored papers. 
In a very different application, we might want to predict links representing participation of individuals in certain terrorist activities. One possible approach to this task is to consider the presence and/or type of the link using only attributes of the potentially linked entities and of the link itself. For example, in our university example, we might try to predict and classify the link using the words on the two webpages, and the anchor words on the link (if present). This approach has the advantage that it reduces to a simple classification task and we can apply standard machine learning techniques. However, it completely ignores a rich source of information that is unique to this task — the graph structure of the link graph. For example, a strong predictor of an advisor-advisee link between a professor and a student is the fact that they jointly participate in several projects. In general, the link graph typically reflects common patterns of interactions between the entities in the domain. Taking these patterns into consideration should allow us to provide a much better prediction for links. In this paper, we tackle this problem using the relational Markov network (RMN) framework of Taskar et al. [14]. We use this framework to define a single probabilistic model over the entire link graph, including both object labels (when relevant) and links between objects. The model parameters are trained discriminatively, to maximize the probability of the (object and) link labels given the known attributes (e.g., the words on the page, hyperlinks). The learned model is then applied, using probabilistic inference, to predict and classify links using any observed attributes and links. 2 Link Prediction A relational domain is described by a relational schema, which specifies a set of object types and attributes for them. 
In our web example, we have a Webpage type, where each page has a binary-valued attribute for each word in the dictionary, denoting whether the page contains the word. It also has an attribute representing the “class” of the webpage, e.g., a professor’s homepage, a student’s homepage, etc. To address the link prediction problem, we need to make links first-class citizens in our model. Following [5], we introduce into our schema object types that correspond to links between entities. Each link object ℓ is associated with a tuple of entity objects (o1, . . . , ok) that participate in the link. For example, a Hyperlink link object would be associated with a pair of entities — the linking page, and the linked-to page, which are part of the link definition. We note that link objects may also have other attributes; e.g., a hyperlink object might have attributes for the anchor words on the link. As our goal is to predict link existence, we must consider links that exist and links that do not. We therefore consider a set of potential links between entities. Each potential link is associated with a tuple of entity objects, but it may or may not actually exist. We denote this event using a binary existence attribute Exists, which is true if the link between the associated entities exists and false otherwise. In our example, our model may contain a potential link ℓ for each pair of webpages, and the value of the variable ℓ.Exists determines whether the link actually exists or not. The link prediction task now reduces to the problem of predicting the existence attributes of these link objects. An instantiation I specifies the set of entities of each entity type and the values of all attributes for all of the entities. For example, an instantiation of the hypertext schema is a collection of webpages, specifying their labels, the words they contain, and which links between them exist. A partial instantiation specifies the set of objects, and values for some of the attributes.
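One way to picture this schema concretely is as plain records; the sketch below is only an illustrative encoding (the class names, fields, and sample values are our own, not the paper's notation), with `exists = None` marking the existence attribute to be predicted:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Webpage:
    url: str
    words: set                       # binary word attributes, stored sparsely
    category: Optional[str] = None   # e.g. "faculty", "student"

@dataclass
class PotentialLink:
    src: Webpage
    dst: Webpage
    anchor_words: set = field(default_factory=set)
    exists: Optional[bool] = None    # the Exists attribute to be predicted

# a partial instantiation: attributes observed, link existence unknown
pages = [Webpage("p1", {"professor"}), Webpage("p2", {"student"})]
links = [PotentialLink(pages[0], pages[1])]
unknown = [l for l in links if l.exists is None]
```

The "partial instantiation" of the text corresponds to this state: all page and link attributes are filled in except the `exists` fields, which inference must complete.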
In the link prediction task, we might observe all of the attributes for all of the objects, except for the existence attributes for the links. Our goal is to predict these latter attributes given the rest. 3 Relational Markov Networks We begin with a brief review of the framework of undirected graphical models or Markov networks [13], and their extension to relational domains presented in [14]. Let V denote a set of discrete random variables and v an assignment of values to V. A Markov network for V defines a joint distribution over V. It consists of an undirected dependency graph and a set of parameters associated with the graph. For a graph G, a clique c is a set of nodes V_c in G, not necessarily maximal, such that every pair V_i, V_j ∈ V_c is connected by an edge in G. Each clique c is associated with a clique potential φ_c(V_c), which is a non-negative function defined on the joint domain of V_c. Letting C(G) be the set of cliques, the Markov network defines the distribution P(v) = \frac{1}{Z} \prod_{c \in C(G)} \phi_c(v_c), where Z is the standard normalizing partition function. A relational Markov network (RMN) [14] specifies the cliques and potentials between attributes of related entities at a template level, so a single model provides a coherent distribution for any collection of instances from the schema. RMNs specify the cliques using the notion of a relational clique template, which specifies tuples of variables in the instantiation using a relational query language. (See [14] for details.) For example, if we want to define cliques between the class labels of linked pages, we might define a clique template that applies to all pairs page1, page2 and link of types Webpage, Webpage and Hyperlink, respectively, such that link points from page1 to page2. We then define a potential template that will be used for all pairs of variables page1.Category and page2.Category for such page1 and page2.
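The Markov network distribution P(v) = (1/Z) Π φ_c(v_c) can be made concrete on a tiny model. This sketch (a generic toy network, not one of the paper's models) enumerates all assignments to compute Z explicitly:

```python
import itertools

def joint(v, potentials):
    # unnormalized product of the clique potentials phi_c(v_c)
    p = 1.0
    for clique, phi in potentials:
        p *= phi(*[v[i] for i in clique])
    return p

# a 3-node pairwise Markov network over binary variables: cliques {0,1} and {1,2}
potentials = [((0, 1), lambda a, b: 2.0 if a == b else 1.0),
              ((1, 2), lambda a, b: 3.0 if a == b else 1.0)]

states = list(itertools.product([0, 1], repeat=3))
Z = sum(joint(v, potentials) for v in states)          # the partition function
P = {v: joint(v, potentials) / Z for v in states}
print(Z, P[(0, 0, 0)])  # → 24.0 0.25
```

The explicit sum over 2^3 states is exactly what becomes intractable in the large, densely connected unrolled networks discussed below, which is why approximate inference is needed.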
Given a particular instantiation I of the schema, the RMN M produces an unrolled Markov network over the attributes of entities in I, in the obvious way. The cliques in the unrolled network are determined by the clique templates C. We have one clique for each c ∈ C(I), and all of these cliques are associated with the same clique potential φ_C. Taskar et al. show how the parameters of an RMN over a fixed set of clique templates can be learned from data. In this case, the training data is a single instantiation I, where the same parameters are used multiple times — once for each different entity that uses a feature. A choice of clique potential parameters w specifies a particular RMN, which induces a probability distribution P_w over the unrolled Markov network. Gradient descent over w is used to optimize the conditional likelihood of the target variables given the observed variables in the training set. The gradient involves a term which is the posterior probability of the target variables given the observed, whose computation requires that we run probabilistic inference over the entire unrolled Markov network. In relational domains, this network is typically large and densely connected, making exact inference intractable. Taskar et al. therefore propose the use of belief propagation [13, 17]. 4 Subgraph Templates in a Link Graph The structure of link graphs has been widely used to infer importance of documents in scientific publications [4] and hypertext (PageRank [12], Hubs and Authorities [8]). Social networks have been extensively analyzed in their own right in order to quantify trends in social interactions [16]. Link graph structure has also been used to improve document classification [7, 6, 15]. In our experiments, we found that the combination of a relational language with a probabilistic graphical model provides a very flexible framework for modeling complex patterns common in relational graphs. First, as observed by Getoor et al.
[5], there are often correlations between the attributes of entities and the relations in which they participate. For example, in a social network, people with the same hobby are more likely to be friends. We can also exploit correlations between the labels of entities and the relation type. For example, only students can be teaching assistants in a course. We can easily capture such correlations by introducing cliques that involve these attributes. Importantly, these cliques are informative even when attributes are not observed in the test data. For example, if we have evidence indicating an advisor-advisee relationship, our probability that X is a faculty member increases, and thereby our belief that X participates in a teaching assistant link with some entity Z decreases. We also found it useful to consider richer subgraph templates over the link graph. One useful type of template is a similarity template, where objects that share a certain graph-based property are more likely to have the same label. Consider, for example, a professor X and two other entities Y and Z. If X’s webpage mentions Y and Z in the same context, it is likely that the X-Y relation and the X-Z relation are of the same type; for example, if Y is Professor X’s advisee, then probably so is Z. Our framework accommodates these patterns easily, by introducing pairwise cliques between the appropriate relation variables. Another useful type of subgraph template involves transitivity patterns, where the presence of an A-B link and of a B-C link increases (or decreases) the likelihood of an A-C link. For example, students often assist in courses taught by their advisor. Note that this type of interaction cannot be accounted for just using pairwise cliques. By introducing cliques over triples of relations, we can capture such patterns as well.
We can incorporate even more complicated patterns, but of course we are limited by the ability of belief propagation to scale up as we introduce larger cliques and tighter loops in the Markov network. We note that our ability to model these more complex graph patterns relies on our use of an undirected Markov network as our probabilistic model.

Figure 1: (a) Relation prediction with entity labels given. Relational models on average performed better than the baseline Flat model. (b) Entity label prediction. The relational model Neigh performed significantly better. (c) Relation prediction without entity labels. Relational models performed better most of the time, even though for some schools some models performed worse.

In contrast, the approach of Getoor et al. uses directed graphical models (Bayesian networks and PRMs [9]) to represent a probabilistic model of both relations and attributes. Their approach easily captures the dependence of link existence on attributes of entities. But the constraint that the probabilistic dependency graph be a directed acyclic graph makes it hard to see how we would represent the subgraph patterns described above. For example, for the transitivity pattern, we might consider simply directing the correlation edges between link existence variables arbitrarily. However, it is not clear how we would then parameterize a link existence variable for a link that is involved in multiple triangles. See [15] for further discussion. 5 Experiments on Web Data We collected and manually labeled a new relational dataset inspired by WebKB [2].
Our dataset consists of Computer Science department webpages from 3 schools: Stanford, Berkeley, and MIT. A total of 2954 pages are labeled into one of eight categories: faculty, student, research scientist, staff, research group, research project, course and organization (organization refers to any large entity that is not a research group). Owned pages, which are owned by an entity but are not the main page for that entity, were manually assigned to that entity. The average distribution of classes across schools is: organization (9%), student (40%), research group (8%), faculty (11%), course (16%), research project (7%), research scientist (5%), and staff (3%). We established a set of candidate links between entities based on evidence of a relation between them. One type of evidence for a relation is a hyperlink from an entity page or one of its owned pages to the page of another entity. A second type of evidence is a virtual link: We assigned a number of aliases to each page using the page title, the anchor text of incoming links, and email addresses of the entity involved. Mentioning an alias of a page on another page constitutes a virtual link. The resulting set of 7161 candidate links were labeled as corresponding to one of five relation types — Advisor (faculty, student), Member (research group/project, student/faculty/research scientist), Teach (faculty/research scientist/staff, course), TA (student, course), Part-Of (research group, research project) — or "none", denoting that the link does not correspond to any of these relations. The observed attributes for each page are the words on the page itself and the "meta-words" on the page — the words in the title, section headings, and anchors to the page from other pages. For links, the observed attributes are the anchor text, the text just before the link (hyperlink or virtual link), and the heading of the section in which the link appears. Our task is to predict the relation type, if any, for all the candidate links.
We tried two settings for our experiments: with page categories observed (in the test data) and page categories unobserved. For all our experiments, we trained on two schools and tested on the remaining school. Observed Entity Labels. We first present results for the setting with observed page categories. Given the page labels, we can rule out many impossible relations; the resulting label breakdown among the candidate links is: none (38%), member (34%), part-of (4%), advisor (11%), teach (9%), TA (5%). There is a huge range of possible models that one can apply to this task. We selected a set of models that we felt represented some range of patterns that manifested in the data. Link-Flat is our baseline model, predicting links one at a time using multinomial logistic regression. This is a strong classifier, and its performance is competitive with other classifiers (e.g., support vector machines). The features used by this model are the labels of the two linked pages and the words on the links going from one page and its owned pages to the other page. The number of features is around 1000. The relational models try to improve upon the baseline model by modeling the interactions between relations and predicting relations jointly. The Section model introduces cliques over relations whose links appear consecutively in a section on a page. This model tries to capture the pattern that similarly related entities (e.g., advisees, members of projects) are often listed together on a webpage. This pattern is a type of similarity template, as described in Section 4. The Triad model is a type of transitivity template, as discussed in Section 4. Specifically, we introduce cliques over sets of three candidate links that form a triangle in the link graph. The Section + Triad model includes the cliques of the two models above. As shown in Fig. 
1(a), both the Section and Triad models outperform the flat model, and the combined model has an average accuracy gain of 2.26%, or 10.5% relative reduction in error. As we only have three runs (one for each school), we cannot meaningfully analyze the statistical significance of this improvement. As an example of the interesting inferences made by the models, we found a student-professor pair that was misclassified by the Flat model as none (there is only a single hyperlink from the student's page to the advisor's) but correctly identified by both the Section and Triad models. The Section model utilizes a paragraph on the student's webpage describing his research, with a section of links to his research groups and the link to his advisor. Examining the parameters of the Section model clique, we found that the model learned that it is likely for people to mention their research groups and advisors in the same section. By capturing this trend, the Section model is able to increase the confidence of the student-advisor relation. The Triad model corrects the same misclassification in a different way. Using the same example, the Triad model makes use of the information that both the student and the teacher belong to the same research group, and the student TAed a class taught by his advisor. It is important to note that none of the other relations are observed in the test data, but rather the model bootstraps its inferences. Unobserved Entity Labels. When the labels of pages are not known during relation prediction, we cannot rule out possible relations for candidate links based on the labels of participating entities. Thus, we have many more candidate links that do not correspond to any of our relation types (e.g., links between an organization and a student). This makes the existence of relations a very low probability event, with the following breakdown among the potential relations: none (71%), member (16%), part-of (2%), advisor (5%), teach (4%), TA (2%).
In addition, when we construct a Markov network in which page labels are not observed, the network is much larger and denser, making the (approximate) inference task much harder. Thus, in addition to models that try to predict page entity and relation labels simultaneously, we also tried a two-phase approach, where we first predict page categories, and then use the predicted labels as features for the model that predicts relations. For predicting page categories, we compared two models. The Entity-Flat model is multinomial logistic regression that uses words and "meta-words" from the page and its owned pages in separate "bags" of words. The number of features is roughly 10,000. The Neighbors model is a relational model that exploits another type of similarity template: pages with similar urls often belong to the same category or tightly linked categories (research group/project, professor/course). For each page, the two pages with urls closest in edit distance are selected as "neighbors", and we introduced pairwise cliques between "neighboring" pages.

Figure 2: (a) Average precision/recall breakeven point for 10%, 25%, 50% observed links. (b) Average precision/recall breakeven point for each fold of residences at 25% observed links.

Fig. 1(b) shows that the Neighbors model clearly outperforms the Flat model across all schools, by an average of 4.9% accuracy gain. Given the page categories, we can now apply the different models for link classification. Thus, the Phased (Flat/Flat) model uses the Entity-Flat model to classify the page labels, and then the Link-Flat model to classify the candidate links using the resulting entity labels.
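The "two closest urls in edit distance" neighbor rule used by the Neighbors model above can be sketched as follows (our own illustrative reimplementation; the function names and example urls are assumptions, not the authors'):

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def url_neighbors(urls, k=2):
    """For each url, pick the k other urls closest in edit distance;
    pairwise cliques would then be introduced between these pages."""
    return {u: sorted((v for v in urls if v != u),
                      key=lambda v: edit_distance(u, v))[:k]
            for u in urls}
```

Each returned pair would become one pairwise clique in the unrolled Markov network, encoding the pressure toward compatible labels for similarly-named pages.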
The Phased (Neighbors/Flat) model uses the Neighbors model to classify the entity labels, and then the Link-Flat model to classify the links. The Phased (Neighbors/Section) model uses the Neighbors model to classify the entity labels and then the Section model to classify the links. We also tried two models that predict page and relation labels simultaneously. The Joint + Neighbors model is simply the union of the Neighbors model for page categories and the Flat model for relation labels given the page categories. The Joint + Neighbors + Section model additionally introduces the cliques that appeared in the Section model between links that appear consecutively in a section on a page. We train the joint models to predict both page and relation labels simultaneously. As the proportion of the "none" relation is so large, we use the probability of "none" to define a precision-recall curve. If this probability is less than some threshold, we predict the most likely label (other than none), otherwise we predict the most likely label (including none). As usual, we report results at the precision-recall breakeven point on the test data. Fig. 1(c) shows the breakeven points achieved by the different models on the three schools. Relational models, both phased and joint, did better than flat models on average. However, performance varies from school to school and for both joint and phased models, performance on one of the schools is worse than that of the flat model. 6 Experiments on Social Network Data The second dataset we used was collected by a portal website at a large university that hosts an online community for students [1]. Among other services, it allows students to enter information about themselves, create lists of their friends and browse the social network. Personal information includes residence, gender, major and year, as well as favorite sports, music, books, social activities, etc.
We focused on the task of predicting the “friendship” links between students from their personal information and a subset of their links. We selected students living in sixteen different residences or dorms and restricted the data to the friendship links only within each residence, eliminating inter-residence links from the data to generate independent training/test splits. Each residence has about 15–25 students and an average student lists about 25% of his or her house-mates as friends. We used an eight-fold train-test split, where we trained on fourteen residences and tested on two. Predicting links between two students from just personal information alone is a very difficult task, so we tried a more realistic setting, where some proportion of the links is observed in the test data, and can be used as evidence for predicting the remaining links. We used the following proportions of observed links in the test data: 10%, 25%, and 50%. The observed links were selected at random, and the results we report are averaged over five folds of these random selection trials. Using just the observed portion of links, we constructed the following flat features: for each student, the proportion of students in the residence that list him/her and the proportion of students he/she lists; for each pair of students, the proportion of other students they have as common friends. The values of the proportions were discretized into four bins. These features capture some of the relational structure and dependencies between links: Students who list (or are listed by) many friends in the observed portion of the links tend to have links in the unobserved portion as well. More importantly, having friends in common increases the likelihood of a link between a pair of students. The Flat model uses logistic regression with the above features as well as personal information about each user. 
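The flat link features described above — list-in and list-out proportions per student and common-friend proportions per pair, each discretized into four bins — can be sketched as follows. This is our own illustrative code; the equal-width binning and function names are assumptions, not the paper's specification:

```python
def binned(p):
    """Discretize a proportion in [0, 1] into one of four bins."""
    return min(3, int(p * 4))

def flat_link_features(students, listed):
    """Flat features from the observed portion of friendship links.

    students: list of student ids.
    listed: dict mapping each student to the set of house-mates he or
    she lists as friends (restricted to the observed links)."""
    n = len(students) - 1  # number of possible house-mates per student
    # proportion of house-mates each student lists
    lists_out = {u: binned(len(listed.get(u, set())) / n) for u in students}
    # proportion of house-mates listing this student
    listed_by = {u: binned(sum(u in listed.get(v, set())
                               for v in students if v != u) / n)
                 for u in students}
    # per pair: proportion of other students both list as friends
    common = {}
    for u in students:
        for v in students:
            if u < v:
                shared = (listed.get(u, set()) & listed.get(v, set())) - {u, v}
                common[(u, v)] = binned(len(shared) / n)
    return lists_out, listed_by, common
```

These are exactly the kinds of features that let even the Flat model capture some of the relational structure noted in the text.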
In addition to individual characteristics of the two people, we also introduced a feature for each match of a characteristic, for example, both people are computer science majors or both are freshmen. The Compatibility model uses a type of similarity template, introducing cliques between each pair of links emanating from each person. Similarly to the Flat model, these cliques include a feature for each match of the characteristics of the two potential friends. This model captures the tendency of a person to have friends who share many characteristics (even though the person might not possess them). For example, a student may be friends with several CS majors, even though he is not a CS major himself. We also tried models that used transitivity templates, but the approximate inference with 3-cliques often failed to converge or produced erratic results. Fig. 2(a) compares the average precision/recall breakeven point achieved by the different models at the three different settings of observed links. Fig. 2(b) shows the performance on each of the eight folds containing two residences each. Using a paired t-test, the Compatibility model outperforms Flat with p-values 0.0036, 0.00064 and 0.054 respectively. 7 Discussion and Conclusions In this paper, we consider the problem of link prediction in relational domains. We focus on the task of collective link classification, where we are simultaneously trying to predict and classify an entire set of links in a link graph. We show that the use of a probabilistic model over link graphs allows us to represent and exploit interesting subgraph patterns in the link graph. Specifically, we have found two types of patterns that seem to be beneficial in several places. Similarity templates relate the classification of links or objects that share a certain graph-based property (e.g., links that share a common endpoint). Transitivity templates relate triples of objects and links organized in a triangle.
We show that the use of these patterns significantly improves the classification accuracy over flat models. Relational Markov networks are not the only method one might consider applying to the link prediction and classification task. We could, for example, build a link predictor that considers other links in the graph by converting graph features into flat features [11], as we did in the social network data. As our experiments show, even with these features, the collective prediction approach works better. Another approach is to use relational classifiers such as variants of inductive logic programming [10]. Generally, however, these methods have been applied to the problem of predicting or classifying a single link at a time. It is not clear how well they would extend to the task of simultaneously predicting an entire link graph. Finally, we could apply the directed PRM framework of [5]. However, as shown in [15], the discriminatively trained RMNs perform significantly better than generatively trained PRMs even on the simpler entity classification task. Furthermore, as we discussed, the PRM framework cannot represent (in any natural way) the type of subgraph patterns that seem prevalent in link graph data. Therefore, the RMN framework seems much more appropriate for this task. Although the RMN framework worked fairly well on this task, there is significant room for improvement. One of the key problems limiting the applicability of the approach is the reliance on belief propagation, which often does not converge in more complex problems. This problem is especially acute in the link prediction problem, where the presence of all potential links leads to densely connected Markov networks with many short loops. This problem can be addressed with heuristics that focus the search on links that are plausible (as we did in a very simple way in the webpage experiments). A more interesting solution would be to develop a more integrated approximate inference / learning algorithm.
Our results use a set of relational patterns that we have discovered to be useful in the domains that we have considered. However, many other rich and interesting patterns are possible. Thus, in the relational setting, even more so than in simpler tasks, the issue of feature construction is critical. It is therefore important to explore the problem of automatic feature induction, as in [3]. Finally, we believe that the problem of modeling link graphs has numerous other applications, including: analyzing communities of people and hierarchical structure of organizations, identifying people or objects that play certain key roles, predicting current and future interactions, and more. Acknowledgments. This work was supported by ONR Contract F3060-01-2-0564-P00002 under DARPA's EELD program. P. Abbeel was supported by a Siebel Grad. Fellowship. References [1] L. Adamic, O. Buyukkokten, and E. Adar. A social network caught in the web. http://www.hpl.hp.com/shl/papers/social/, 2002. [2] M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and S. Slattery. Learning to extract symbolic knowledge from the world wide web. In Proc. AAAI, 1998. [3] S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(4):380–393, 1997. [4] L. Egghe and R. Rousseau. Introduction to Informetrics. Elsevier, 1990. [5] L. Getoor, N. Friedman, D. Koller, and B. Taskar. Probabilistic models of relational structure. In Proc. ICML, 2001. [6] L. Getoor, E. Segal, B. Taskar, and D. Koller. Probabilistic models of text and link structure for hypertext classification. In IJCAI Workshop on Text Learning: Beyond Supervision, 2001. [7] R. Ghani, S. Slattery, and Y. Yang. Hypertext categorization using hyperlink patterns and meta data. In Proc. ICML, 2001. [8] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. JACM, 46(5):604–632, 1999. [9] D. Koller and A. Pfeffer.
Probabilistic frame-based systems. In Proc. AAAI, pages 580–587, 1998. [10] N. Lavrač and S. Džeroski. Inductive Logic Programming: Techniques and Applications. Ellis Horwood, 1994. [11] J. Neville and D. Jensen. Iterative classification in relational data. In AAAI Workshop on Learning Statistical Models from Relational Data, 2000. [12] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford University, 1998. [13] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988. [14] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proc. UAI, 2002. [15] B. Taskar, E. Segal, and D. Koller. Probabilistic classification and clustering in relational data. In Proc. IJCAI, pages 870–876, 2001. [16] S. Wasserman and P. Pattison. Logit models and logistic regression for social networks. Psychometrika, 61(3):401–425, 1996. [17] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In Proc. NIPS, 2000.
Human and Ideal Observers for Detecting Image Curves Alan Yuille Department of Statistics & Psychology University of California Los Angeles Los Angeles CA yuille@stat.ucla.edu Fang Fang Psychology, University of Minnesota Minneapolis MN 55455 fang0057@tc.umn.edu Paul Schrater Psychology, University of Minnesota Minneapolis MN 55455 schrater@umn.edu Daniel Kersten Psychology, University of Minnesota Minneapolis MN 55455 kersten@umn.edu Abstract This paper compares the ability of human observers to detect target image curves with that of an ideal observer. The target curves are sampled from a generative model which specifies (probabilistically) the geometry and local intensity properties of the curve. The ideal observer performs Bayesian inference on the generative model using MAP estimation. Varying the probability model for the curve geometry enables us to investigate whether human performance is best for target curves that obey specific shape statistics, in particular those observed on natural shapes. Experiments are performed with data on both rectangular and hexagonal lattices. Our results show that human observers' performance approaches that of the ideal observer and is, in general, closest to the ideal for conditions where the target curve tends to be straight or similar to natural statistics on curves. This suggests a bias of human observers towards straight curves and natural statistics. 1 Introduction Detecting curves in images is a fundamental visual task which requires combining local intensity cues with prior knowledge about the probable shape of the curve.
Curves with strong intensity edges are easy to detect, but those with weak intensity edges can only be found if we have strong prior knowledge of the shape, see figure (1). But, to the best of our knowledge, there have been no experimental studies which test the ability of human observers to perform curve detection for semi-realistic stimuli with locally ambiguous intensity cues, or which explore how the difficulty of the task varies with the geometry of the curve. This paper formulates curve detection as Bayesian inference. Following Geman and Jedynak [6] we define probability distributions PG(.) for the shape geometry of the target curve and Pon(.), Poff(.) for the intensity on and off the curve. Sampling this model gives us semi-realistic images defined on either rectangular or hexagonal grids.

Figure 1 (panels labeled Almost impossible, Easy, Intermediate): It is plausible that the human visual system is adapted to the shape statistics of curves and paths in images like these. Left panel illustrates the trade-off between the reliability of intensity measurements and priors on curve geometry. The tent is easy to detect because of the large intensity difference between it and the background, so little prior knowledge about its shape is required. But detecting the goat (above the tent) is harder and seems to require prior knowledge about its shape. Centre panel illustrates the experimental task of tracing a curve (or road) in clutter. Right panel shows that the first-order shape statistics from 49 object images (one datapoint per image) are clustered round P(straight) = 0.64 (with P(left) = 0.18 and P(right) = 0.18) for both rectangular and hexagonal lattices, see [1].

The human observer's task is to detect the target curve and to report it by tracking it with the (computer) mouse. Human performance is compared with that of an ideal observer which computes the target curve using Bayesian inference (implemented by a dynamic programming algorithm).
The ideal observer gives a benchmark against which human performance can be measured. By varying the probability distributions PG, Pon, Poff we can explore the ability of the human visual system to detect curves under a variety of conditions. For example, we can vary PG and determine what changes in Pon, Poff are required to maintain a pre-specified level of detection performance. In particular, we can investigate how human performance depends on the geometrical distribution PG of the curves. It is plausible that the human visual system has adapted to the statistics of the natural world, see figure (1), and in particular to the geometry of salient curves. Our measurements of natural image curves, see figure (1), and studies by [16], [10], [5] and [2], show distributions for shape statistics similar to those found for image intensity statistics [11, 9, 13]. We therefore investigate whether human performance approaches that of the ideal when the probability distribution PG is similar to that for curves in natural images. This investigation requires specifying performance measures to determine how close human performance is to the ideal (so that we can quantify whether humans do better or worse relative to the ideal for different shape distributions PG). We use two measures of performance. The first is an effective order parameter motivated by the order parameter theory for curve detection [14], [15] which shows that the detectability of target curves, by an ideal observer, depends only on an order parameter K which is a function of the probability distributions characterizing the problem. The second measure computes the value of the posterior distribution for the curves detected by the human and the ideal and takes the logarithm of their ratio. (For theoretical reasons this is expected to give a performance measure similar to the effective order parameter). The experiments are performed by human observers who are required to trace the target curve in the image.
We simulated the images first on a rectangular grid and then on a hexagonal grid to test the generality of the results. In these experiments we varied the probability distributions of the geometry PG and the distribution Pon of the intensity on the target curve to allow us to explore a range of different conditions (we kept the distribution Poff fixed). In section (2) we briefly review previous psychophysical studies on edge detection.

Figure 2: Left panel: the tree structure superimposed on the lattice. Centre panel: a pyramid structure used in the simulations on the rectangular grid. Right panel: typical distributions of Pon, Poff.

Section (3) describes our probabilistic model and specifies the ideal observer. In section (4), we describe the order parameter theory and define two performance measures. Sections (5,6) describe experimental results on rectangular and hexagonal grids respectively in terms of our two performance measures. 2 Previous Work Previous psychophysical studies have shown conditions for which the human visual system is able to effectively group contour fragments when embedded in an array of distracting fragments [3, 8]. Most of these studies have focused on the geometrical aspects of the grouping process. For example, it is known that the degree to which a target contour "pops out" depends on the degree of similarity of the orientation of neighboring fragments (typically Gabor patches) [3], and that global closure facilitates grouping [8]. Recently, several researchers have shown that psychophysical performance for contour grouping may be understood in terms of the statistical properties of natural contours [12, 5]. For example, Geisler [5] has shown that human contour detection for line segments can be quantitatively predicted from a local grouping rule derived from measurements of local edge statistics.
However, apart from studies that manipulate the contrast of Gabor patch tokens [4], there has been little work on how intensity and contour geometry information is combined by the visual system under conditions that begin to approximate those of natural contours. In this paper we attempt to fill this gap by using stimuli sampled from a generative model which enables us to quantitatively characterize the shape and intensity information available for detecting curves and compare human performance with that of an ideal detector. 3 The Probabilistic Model for Data Generation We now describe our model in detail. Following [6], we formulate target curve detection as tree search, see figure (2), through a Q-nary tree. The starting point and initial direction is specified and there are Q^N possible distinct paths down the tree. A target curve hypothesis consists of a set of connected straight-line segments. We can represent a path by a sequence of moves {t_i} on the tree. Each move t_i belongs to an alphabet {a_µ} of size Q. For example, the simplest case sets Q = 3 with an alphabet a_1, a_2, a_3 corresponding to the decisions: (i) a_1 – go straight (0 degrees), (ii) a_2 – go left (-5 degrees), or (iii) a_3 – go right (+5 degrees). This determines a path x_1, . . . , x_N in the image lattice where x_i, x_{i+1} indicate the start and end points of the ith segment. The relationship between the two representations is given by x_{i+1} = x_i + w(x_i − x_{i−1}, t_i), where w(x_i − x_{i−1}, t_i) is a vector of approximately fixed magnitude (chosen to ensure that the segment ends on a pixel) whose direction depends on the angle of the move t_i relative to the direction of the previous segment x_i − x_{i−1}. In this paper we restrict ourselves to Q = 3. We put a prior probability on the geometry of paths down the tree. This is of the form P({t_i}) = ∏_{i=1}^N P(t_i).
We will always require that the probabilities to go left or right are equal, and hence we can specify the distribution by the probability P(straight) that the curve goes straight. Our analysis of image curve statistics suggests that P(straight) = 0.64 for natural images, see figure (1). We specify the probability models Pon, Poff for the image intensity on and off the curve to be of Poisson form defined over the range (1, ..., 16), see figure (2). This reduced range means that the distributions are expressed as Pon(I = n) = (1/K_on) e^{−λ_on} λ_on^n / n! and Poff(I = n) = (1/K_off) e^{−λ_off} λ_off^n / n!, where K_on, K_off are normalization factors. We fix λ_off = 8.0 and will vary λ_on. The quantity λ_on − λ_off is a measure of the local intensity contrast of the target contour and so we informally refer to it as the signal-to-noise ratio (SNR). The Ideal Observer estimates the target curve trajectory by MAP estimation (which we compute using dynamic programming). As described in [6], MAP estimation corresponds to finding the path {t_i} with filter measurements {y_i} which maximizes the (scaled) log-likelihood ratio, or reward function,

r({t_i}, {y_i}) = (1/N) [ log P(Y|X) + log P(X) − Σ_{i=1}^N log U(t_i) ]
               = (1/N) Σ_{i=1}^N log [ Pon(y_i)/Poff(y_i) ] + (1/N) Σ_{i=1}^N log [ PG(t_i)/U(t_i) ],   (1)

where U(.) is the uniform distribution (i.e. U(t) = 1/3 for all t) and so Σ_{i=1}^N log U(t_i) = −N log 3, which is a constant. The length of the curve is N = 32 in our experiments. We implement this model on both rectangular and hexagonal lattices (the hexagonal lattices equate for contrast at borders, and are visually more realistic). The tree representation used by Geman and Jedynak must be modified when we map onto these lattices. For a rectangular lattice, the easiest way to do this involves defining a pyramid where paths start at the apex and the only allowable "moves" are: (i) one step down, (ii) one step down and one step left, and (iii) one step down and one step right.
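A minimal sketch of the dynamic-programming (Viterbi-style) computation of the MAP path on the rectangular pyramid. This is our own illustrative code, not the authors'; for simplicity the geometry prior here acts on the absolute move t ∈ {−1, 0, +1} (down-left, down, down-right), matching the pyramid construction above:

```python
import math

def map_path(log_lr, p_straight):
    """MAP path down a pyramid by dynamic programming.

    log_lr[r][c] = log(Pon(y)/Poff(y)) at row r, column c of the image.
    From (r, c) the path may move to (r+1, c-1), (r+1, c), (r+1, c+1),
    with prior p_left = p_right = (1 - p_straight)/2.  Start at the apex
    in the middle column of row 0.  Returns one column index per row."""
    n_rows, width = len(log_lr), len(log_lr[0])
    p_turn = (1.0 - p_straight) / 2.0
    log_g = {-1: math.log(p_turn), 0: math.log(p_straight), 1: math.log(p_turn)}
    NEG = float("-inf")
    score = [[NEG] * width for _ in range(n_rows)]
    back = [[0] * width for _ in range(n_rows)]
    c0 = width // 2
    score[0][c0] = log_lr[0][c0]
    for r in range(1, n_rows):
        for c in range(width):
            for t in (-1, 0, 1):
                pc = c - t  # predecessor column
                if 0 <= pc < width and score[r - 1][pc] > NEG:
                    s = score[r - 1][pc] + log_g[t] + log_lr[r][c]
                    if s > score[r][c]:
                        score[r][c], back[r][c] = s, pc
    # trace back from the best terminal column
    c = max(range(width), key=lambda j: score[-1][j])
    path = [c]
    for r in range(n_rows - 1, 0, -1):
        c = back[r][c]
        path.append(c)
    return list(reversed(path))
```

Maximizing the summed log-likelihood ratios plus geometry-prior terms is, up to the constant −N log 3 and the 1/N scaling, the same objective as the reward function in equation (1).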
This can be represented by x_{i+1} = x_i + w(t_i), where t_i ∈ {−1, 0, 1} and w(−1) = −i − j, w(0) = −j, w(1) = +i − j (where i, j are the unit vectors in the x, y directions on the lattice). A similar procedure is used on the hexagonal lattice. But for certain geometry probabilities we observed that the sampled curves had "clumping", where the path consists of a large number of zig-zags. This was sometimes confusing to the human observers. So we implemented a higher-order Markov model which explicitly forbade zig-zags. We show experimental results for both the Clumping and No-Clumping models. To obtain computer simulations of target curves in background clutter we proceed in two stages. In the first stage, we stochastically sample from the distribution PG(t) to produce a target curve in the pyramid (starting at the apex and moving downwards). In the second stage, we sample from the likelihood function to generate the image. So if a pixel x is on or off the target curve (which we generated in the first stage) then we sample the intensity I(x) from the distribution Pon(I) or Poff(I) respectively. 4 Order Parameters and Performance Measures Yuille et al. [14, 15] analyzed the Geman and Jedynak model [6] to determine how the ability to detect the target curve depended on the geometry PG and the intensity properties Pon, Poff. The analysis showed that the ability to detect the target curve behaves as e^{−KN}, where N is the length of the curve and K is an order parameter. The larger the value of K, the easier it is to detect the curve. The order parameter is given by K = D(Pon||Poff) + D(PG||U) − log Q [15], where U is the uniform distribution. If K > 0 then detecting the target curve is possible, but if K < 0 then it becomes impossible to find it (informally, it becomes like looking for a needle in a haystack). The order parameter illustrates the trade-off between shape and intensity cues and determines which types of curves are easiest to detect by an ideal observer.
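The two-stage sampling procedure can be sketched as follows (our own illustrative code; clamping the path at the lattice border is our assumption, not a detail given in the text):

```python
import math
import random

def sample_truncated_poisson(lam, lo=1, hi=16):
    """Draw from the paper's Pon/Poff form: a Poisson shape restricted
    to the range 1..16 and renormalized."""
    w = [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(lo, hi + 1)]
    r, acc = random.random() * sum(w), 0.0
    for n, wn in zip(range(lo, hi + 1), w):
        acc += wn
        if r <= acc:
            return n
    return hi

def sample_stimulus(n_rows, width, p_straight, lam_on, lam_off):
    """Stage 1: sample a path down the pyramid from PG.
       Stage 2: sample intensities from Pon on the path, Poff elsewhere."""
    p_turn = (1.0 - p_straight) / 2.0
    c = width // 2
    path = [c]
    for _ in range(n_rows - 1):
        t = random.choices((-1, 0, 1), weights=(p_turn, p_straight, p_turn))[0]
        c = min(max(c + t, 0), width - 1)  # clamp at the lattice border
        path.append(c)
    image = [[sample_truncated_poisson(lam_on) if col == path[r]
              else sample_truncated_poisson(lam_off)
              for col in range(width)] for r in range(n_rows)]
    return path, image
```

With λ_on close to λ_off = 8.0 the target is nearly invisible locally, and detection must lean on the geometry prior, which is exactly the trade-off the order parameter quantifies.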
The intensity cues are quantified by D(Pon||Poff) and the shape cues by D(PG||U). The easiest curves to detect are those which are straight lines (i.e. D(PG||U) takes its largest possible value). The hardest curves to detect are those for which the geometry is most random. The stronger the intensity cues (i.e. the bigger D(Pon||Poff)), the easier the detection becomes. So when comparing human performance to ideal observers we have to take into account that some types of curves are inherently easier to detect (i.e. they have larger K). Human observers are good at detecting straight-line curves, but so are ideal observers. We need performance measures to quantify the relative effectiveness of human and ideal observers. Otherwise, we will not be able to conclude that human observers are biased towards particular curve shapes (such as those occurring in natural images). We now define two performance measures to quantify the relative effectiveness of human and ideal observers. Our first measure is based on the hypothesis that human observers have an "effective order parameter". In other words, their performance on the target curve tracking task behaves like e^{-N KH}, where KH is an effective order parameter whose difference from the true order parameter K might reflect a human bias towards straight lines or ecological shape priors. We estimate the effective order parameters by fixing PG, Poff and adjusting Pon until the observers achieve a fixed performance level of at most 5 errors on a path of length 32. This gives distributions P^I_on, P^H_on for the ideal and human observers respectively. Then we set KH = K - D(P^H_on||Poff) + D(P^I_on||Poff), where P^H_on, P^I_on are the distributions used by the human and the ideal (respectively) to achieve similar performance. Our first performance measure is the difference ∆K = D(P^H_on||Poff) - D(P^I_on||Poff) between the effective and the true order parameters.
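Both K and ∆K reduce to sums of discrete KL divergences under the truncated-Poisson intensity model. A minimal sketch (λoff = 8.0 follows the paper; the λon values, including the human and ideal thresholds used for ∆K, are hypothetical placeholders, not measured data):

```python
import math

def trunc_poisson(lam, support=range(1, 17)):
    """Poisson restricted to {1, ..., 16} and renormalized (the paper's Pon/Poff form)."""
    p = [math.exp(-lam) * lam ** n / math.factorial(n) for n in support]
    z = sum(p)
    return [q / z for q in p]

def kl(p, q):
    """KL divergence D(p||q) = sum_i p_i log(p_i / q_i), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def order_parameter(lam_on, lam_off, p_straight):
    """K = D(Pon||Poff) + D(PG||U) - log Q, with Q = 3 moves per node."""
    pg = [p_straight, (1.0 - p_straight) / 2.0, (1.0 - p_straight) / 2.0]
    u = [1.0 / 3.0] * 3
    return (kl(trunc_poisson(lam_on), trunc_poisson(lam_off))
            + kl(pg, u) - math.log(3))

lam_off = 8.0
K = order_parameter(lam_on=11.0, lam_off=lam_off, p_straight=0.64)

# Delta-K compares the intensity terms at the human and ideal thresholds;
# lam_on = 10.0 (human) and 9.0 (ideal) are illustrative, not measured values.
dK = (kl(trunc_poisson(10.0), trunc_poisson(lam_off))
      - kl(trunc_poisson(9.0), trunc_poisson(lam_off)))
```

Since a human who needs a higher λon (more contrast) than the ideal to reach the same error criterion uses the intensity information less efficiently, ∆K grows with the size of that handicap.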
But the order parameter analysis should be regarded with caution for the curve detection task used in our experiments. The experimental criterion that the target path be found with 5 or fewer errors, see section (5), was not included in the theoretical analysis [14],[15]. Also, some small corrections need to be made to the order parameters due to the nature of the rectangular grid; see [15] for computer calculations of the size of these corrections. These two effects – the error criterion and the grid correction – mean that the order parameters are only approximate for these experimental conditions. This motivates a second performance measure, where we calculate the value of the posterior probability (proportional to the exponential of r in equation (1)) for the curve detected by the human and the ideal observer (for identical distributions PG, Pon, Poff). We measure the logarithm of the ratio of these values. (A theoretical relationship can be shown between these two measures.)

5 Experimental Results on Rectangular Grid

To assess human performance on the road tracking task, we first had a set of 7 observers find the target curve in a tree defined by a rectangular grid, figure (3)A.

Figure 3: A. Rectangular Grid Stimulus (Left), Example Path: Ideal (Center), Example Path: Human (Right). B & C. Hexagonal Grid Stimulus (Left), Example Path: Ideal (Center), Example Path: Human (Right). Panel C shows an example of a path with higher-order constraints to prevent "clumping". There were a number of other differences between the rectangular and hexagonal grid psychophysics: the rectangle samples were slightly smaller than the hexagons, feedback was presented to the observers without (rectangular) or with (hexagonal) the background, and the lowest P(straight) was 0.0 for the rectangular and 0.1 for the hexagonal grids.

The observer tracked the contour by starting at the far left corner and making a series of 32 key presses that moved the observer's tracked contour either left, right, or straight at each key press. Each contour estimate was scored by counting the number of positions at which the observer's contour was off the true path. Each observer had a training period in which the observer was shown examples of contours produced from the four different geometry distributions and practiced tracing in noise. During an experimental session, the geometry distribution was fixed at one of the four possible values and observers were told which geometry distribution was being used to generate the contours. The parameter λon of Pon was varied using an adaptive procedure until the human observer managed to repeatedly detect the target curve with at most five misclassified pixels. This gave a threshold of λon - λoff for each probability distribution defined by P(straight). This threshold could be compared to that of the Ideal Observer (obtained by using dynamic programming to estimate the ideal, also allowing for up to five errors). The process was repeated several times for the four geometry distribution conditions. The thresholds for the 7 observers and the ideal observer are shown in figure 4. These thresholds can be used to calculate our first performance measure (∆K) and determine how effectively observers are using the available image information at each P(straight). The results are illustrated in figure (4)B, where the human data was averaged over the seven subjects. They show that humans perform best for curves with P(straight) = 0.66, which is closest to the natural priors, see figure (1). Conversely, ∆K is biggest for the curves with P(straight) = 0.0, which is the condition that differs most from the natural statistics. We next compute our second performance measure (for which Pon, Poff, PG are the same for the ideal and the human observer).
The average difference of this performance measure for each geometry distribution is an alternative measure of how well observers are using the intensity information as a function of geometry, with a zero difference indicating optimal use of the information. The results are shown in figure (4)C. Notice that the best performance is achieved with P(straight) = 0.9. Observe that the two performance measures give different answers for this experiment. We conclude that our results are consistent either with a bias towards ecological statistics or with a bias towards straight lines. But the rectangular lattice has perceptual drawbacks, which motivated the following experiments.

6 Experiments on Hexagonal Lattices

In these experiments we used a hexagonal lattice because, for the human observers, the contrast at the edges corresponding to a left, straight, or right move is the same (in contrast to the rectangular grid, in which left and right moves only share a corner). We also use the same values of Pon, Poff, P(straight) for the humans and the ideal.

Figure 4: A-C. Psychophysical results on the rectangular grid. A. Threshold λon - λoff plotted against P(straight). The top seven curves are the results of the seven subjects. The bottom curve is for the ideal observer. B. The difference between human and ideal K order parameters. C. The average reward difference between ideal and human observers. D-I. Psychophysical results on a hexagonal grid. D-F are for the Clumping condition, and G-I for the No-Clumping condition, for which higher-order statistics prevented the sharp turns that result in "clumps".

We performed experiments on the hexagonal lattice under four different probabilities for the geometry. These were specified by P(straight) = 0.10, 0.33, 0.66, 0.90 (in other words, the straightest curves will be sampled when P(straight) = 0.90 and the least straight from P(straight) = 0.10). For the reasons described previously, we did the experiment in two conditions: (1) allowing zig-zags ("Clumping"), (2) forbidding zig-zags ("No-Clumping").
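The stimulus generation for the two conditions can be sketched as follows. Resampling an immediate left-right reversal is one simple stand-in for the paper's higher-order Markov constraint, and the λon value is an illustrative assumption; lattice rendering is reduced to the move sequence:

```python
import math
import random

def sample_path(n_moves, p_straight, forbid_zigzags=False):
    """Sample a move sequence t_i in {-1, 0, +1} (left, straight, right).

    Left and right are equally likely. In the No-Clumping condition an
    immediate reversal (left then right, or vice versa) is resampled.
    """
    path, prev = [], 0
    for _ in range(n_moves):
        while True:
            r = random.random()
            if r < p_straight:
                t = 0
            elif r < p_straight + (1.0 - p_straight) / 2.0:
                t = -1
            else:
                t = 1
            if not (forbid_zigzags and prev != 0 and t == -prev):
                break
        path.append(t)
        prev = t
    return path

def sample_intensity(lam, support=range(1, 17)):
    """Sample from a Poisson truncated to {1, ..., 16} and renormalized."""
    weights = [math.exp(-lam) * lam ** n / math.factorial(n) for n in support]
    return random.choices(list(support), weights=weights)[0]

# Clumping and No-Clumping target curves of length N = 32, with on-curve
# intensities at an assumed lam_on = 11.0 and off-curve lam_off = 8.0.
curve = sample_path(32, p_straight=0.66)
curve_nc = sample_path(32, p_straight=0.66, forbid_zigzags=True)
on_pixels = [sample_intensity(11.0) for _ in curve]
off_pixels = [sample_intensity(8.0) for _ in range(32)]
```

Rendering the sampled move sequence onto the rectangular or hexagonal lattice then follows the move-vector definitions given earlier.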
We show examples of the stimuli, the ideal results, and the human results (each indicated by a dotted path) for the Clumping and No-Clumping cases in figure (3B & C), respectively. The threshold SNR results for Clumping and No-Clumping are summarized in figures (4D & G). The average ∆K = Khuman - Kideal results for Clumping and No-Clumping are summarized in figures (4E & H). The average reward difference, ∆r = rideal - rhuman, results for Clumping and No-Clumping are summarized in figures (4F & I). Both performance measures give consistent results for the Clumping data, suggesting that humans are best when detecting the straightest lines (P(straight) = 0.9). But the situation is more complicated for the No-Clumping case, where human observers show preferences for P(straight) = 0.9 or P(straight) = 0.66.

7 Summary and Conclusions

The results of our experiments suggest that humans are most effective at detecting curves which are straight or which obey ecological statistics. But further experiments are needed to clarify this. Our two performance measures were not always consistent, particularly for the rectangular grid (we are analyzing this discrepancy theoretically). The first measure suggested a bias towards ecological statistics on the rectangular grid and for the No-Clumping stimuli on the hexagonal grid. The second measure showed a bias towards curves with P(straight) = 0.9 on both the rectangular and hexagonal grids. To our knowledge, this is the first experiment which tests the performance of human observers for detecting target curves by comparison with that of an ideal observer with ambiguous intensity data. Our novel experimental design and stimuli may cause artifacts due to the rectangular and hexagonal grids. Further experiments may need to "quantize" curves more carefully and reduce the effect of the grids. Further experiments performed on a larger number of subjects may be able to isolate more precisely the strategy that human observers employ.
Do they, for example, make use of a specific geometry prior based on empirical edge statistics [16], [10]? If so, this might account for the bias towards straightness and natural priors observed in the experiments reported here.

Acknowledgments

Supported by NIH RO1 EY11507-001, EY02587, EY12691 and EY013875-01A1, NSF SBR-9631682, 0240148.

References

[1] Brady, M. J. (1999). Psychophysical investigations of incomplete forms and forms with background. Ph.D. thesis, University of Minnesota.
[2] Elder, J. H. and Goldberg, R. M. Ecological statistics of Gestalt laws for the perceptual organization of contours. Journal of Vision, 2, 324-353. 2002.
[3] Field, D. J., Hayes, A., & Hess, R. F. Contour integration by the human visual system: evidence for a local "association field". Vision Res, 33(2), 173-93. 1993.
[4] Field, D. J., Hayes, A., & Hess, R. F. The roles of polarity and symmetry in the perceptual grouping of contour fragments. Spat Vis, 13(1), 51-66. 2000.
[5] Geisler, W. S., Perry, J. S., Super, B. J. and Gallogly, D. P. Edge co-occurrence in natural images predicts contour grouping performance. Vision Res, 41(6), 711-24. 2001.
[6] Geman, D. and Jedynak, B. "An active testing model for tracking roads from satellite images". IEEE Trans. Pattern Anal. Mach. Intell., 18, 1-14, 1996.
[7] Hess, R., & Field, D. Integration of contours: new insights. Trends Cogn Sci, 3(12), 480-486. 1999.
[8] Kovacs, I., & Julesz, B. A closed curve is much more than an incomplete one: effect of closure in figure-ground segmentation. Proc Natl Acad Sci U S A, 90(16), 7495-7. 1993.
[9] Lee, A. B., Huang, J. G., and Mumford, D. B. "Random collage model for natural images". Int'l J. of Computer Vision, Oct. 2000.
[10] Ren, X. and Malik, J. "A probabilistic multi-scale model for contour completion based on image statistics". In Proceedings ECCV. 2002.
[11] Ruderman, D. L. and Bialek, W. "Statistics of natural images: scaling in the woods". Phys. Rev. Lett., 73:814-817, 1994.
[12] Sigman, M., Cecchi, G. A., Gilbert, C. D., & Magnasco, M. O. On a common circle: natural scenes and Gestalt rules. Proc Natl Acad Sci U S A, 98(4), 1935-40. 2001.
[13] Wainwright, M. J. and Simoncelli, E. P. "Scale mixtures of Gaussians and the statistics of natural images". NIPS, 855-861, 2000.
[14] Yuille, A. L. and Coughlan, J. M. "Fundamental limits of Bayesian inference: order parameters and phase transitions for road tracking". IEEE PAMI, Vol. 22, No. 2, February 2000.
[15] Yuille, A. L., Coughlan, J. M., Wu, Y.-N. and Zhu, S. C. "Order parameters for minimax entropy distributions: when does high level knowledge help?" IJCV, 41(1/2), pp 9-33. 2001.
[16] Zhu, S. C. "Embedding Gestalt laws in Markov random fields – a theory for shape modeling and perceptual organization". IEEE PAMI, Vol. 21, No. 11, pp 1170-1187, Nov. 1999.
Linear Program Approximations for Factored Continuous-State Markov Decision Processes

Milos Hauskrecht and Branislav Kveton
Department of Computer Science and Intelligent Systems Program
University of Pittsburgh
{milos,bkveton}@cs.pitt.edu

Abstract

Approximate linear programming (ALP) has emerged recently as one of the most promising methods for solving complex factored MDPs with finite state spaces. In this work we show that ALP solutions are not limited to MDPs with finite state spaces, but that they can also be applied successfully to factored continuous-state MDPs (CMDPs). We show how one can build an ALP-based approximation for such a model and contrast it to existing solution methods. We argue that this approach offers a robust alternative for solving high-dimensional continuous-state space problems. The point is supported by experiments on three CMDP problems with 24-25 continuous state factors.

1 Introduction

Markov decision processes (MDPs) offer an elegant mathematical framework for representing and solving decision problems in the presence of uncertainty. While standard solution techniques, such as value and policy iteration, scale up well in terms of the number of states, the state space of more realistic MDP problems is factorized and thus becomes exponential in the number of state components. Much of the recent work in the AI community has focused on factored structured representations of finite-state MDPs and their efficient solutions. Approximate linear programming (ALP) has emerged recently as one of the most promising methods for solving complex factored MDPs with discrete state components. The approach uses a linear combination of local feature functions to model the value function. The coefficients of the model are fit using linear program methods. A number of refinements of the ALP approach have been developed over the past few years.
These include the work by Guestrin et al [8], de Farias and Van Roy [6, 5], Schuurmans and Patrascu [15], and others [11]. In this work we show how the same set of linear programming (LP) methods can be extended to solutions of factored continuous-state MDPs. (We assume that action spaces stay finite; Rust [14] calls such models discrete decision processes.) The optimal solution of a continuous-state MDP (CMDP) may not (and typically does not) have a finite support. To address this problem, CMDPs and their solutions are usually approximated and solved either through state space discretization or by fitting a surrogate (and often much simpler) parametric value function model. The two methods, described in more depth in Section 3, come with different advantages and limitations. The disadvantage of discretizations is their accuracy and the fact that higher-accuracy solutions are paid for by an exponential increase in the complexity of the discretization. On the other hand, parametric value-function approximations may become unstable when combined with dynamic programming methods and least-squares error [1]. The ALP solution that is developed in this work eliminates the disadvantages of the discretization and function approximation approaches while preserving their good properties. It extends the approach of Trick and Zin [17] to factored multidimensional continuous state spaces. Its main benefits are good running-time performance, stability of the solution, and good-quality policies. Factored models offer a more natural and compact way of parameterizing complex decision processes. However, not all CMDP models and related factorizations are equally suitable for the purpose of optimization. In this work we study factored CMDPs with state spaces restricted to [0, 1]^n. We show that the solution for such a model can be approximated by an ALP with an infinite number of constraints that decompose locally.
In addition, we show that by choosing transition models based on beta densities (or their mixtures) and basis functions defined by products of polynomials, one obtains an ALP in which both the objective function and the constraints are in closed form. To alleviate the problem of an infinite number of constraints, we develop and study an approximation based on constraint sampling [5, 6]. We show that even under a relatively simple random constraint sampling we are able to very quickly calculate solutions of a high quality, comparable to other existing CMDP solution methods. The text of the paper is organized as follows. First we review finite-state MDPs and the approximate linear programming (ALP) methods developed for their factored refinements. Next we show how to extend the LP approximations to factored continuous-state MDPs and discuss the assumptions underlying the model. Finally, we test the new method on a continuous-state version of the computer network problem [8, 15] and compare its performance to alternative CMDP methods.

2 Finite-state MDPs

A finite-state MDP defines a stochastic control process (S, A, P, R), where S is a finite set of states, A is a finite set of actions, P : S x A x S -> [0, 1] defines a probabilistic transition model P(s' | s, a) mapping a state to the next states given an action, and R : S x A -> IR defines a reward model for choosing an action in a specific state. Given an MDP, our objective is to find the policy π : S -> A maximizing the infinite-horizon discounted reward criterion E[sum_{t=0}^∞ γ^t r_t], where γ ∈ [0, 1) is a discount factor and r_t is the reward obtained in step t. The value of the optimal policy satisfies the Bellman fixed-point equation [12]:

  V*(s) = max_a [ R(s, a) + γ sum_{s'} P(s' | s, a) V*(s') ],   (1)

where V* is the value of the optimal policy and s' denotes the next state. For all states s ∈ S the equation can be written as V* = T V*, where T is the Bellman operator.
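The Bellman fixed point can be illustrated by value iteration on a small MDP. A minimal sketch; the transition and reward numbers below are illustrative assumptions, not a model from the paper:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP. P[a, s, s'] is P(s'|s, a), R[s, a] the reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

# Iterate V <- T V; T is a gamma-contraction, so this converges to V*.
V = np.zeros(2)
for _ in range(500):
    # Q[s, a] = R[s, a] + gamma * sum_s' P(s'|s, a) V(s')
    Q = R + gamma * np.tensordot(P, V, axes=([2], [0])).T
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
policy = Q.argmax(axis=1)   # greedy policy, i.e. the action optimizing Eqn 1
```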
Given the value function V*, the optimal policy π(s) is defined by the action optimizing Eqn 1. Methods for solving an MDP include value iteration, policy iteration, and linear programming [12, 2]. In the linear program (LP) formulation we solve the following problem:

  minimize sum_s V(s)   (2)
  subject to: V(s) >= R(s, a) + γ sum_{s'} P(s' | s, a) V(s')  for all s, a,

where the values V(s) for every state s are treated as variables.

Factorizations and LP approximations

In factored MDPs, the state space S is defined in terms of state variables x_1, x_2, ..., x_n. As a result, the state space becomes exponential in the number of variables. Compact parameterizations of MDPs based on dynamic belief networks [7] and decomposable reward functions are routinely used to represent such MDPs more efficiently. However, the presence of a compact model does not imply the existence of efficient optimal solutions. To address this problem, Koller and Parr [9] and Guestrin et al [8] propose to use a linear model [13]:

  V(x) = sum_i w_i f_i(x_i)

to approximate the value function V*(x). Here the w_i are the linear coefficients to be found (fit) and the f_i denote feature functions defined over subsets x_i of the state variables. Given a factored binary-state MDP, the coefficients of the linear model can be found by solving the following surrogate of the LP in Equation 2 [8]:

  minimize sum_i w_i 2^{-|x_i|} sum_{x_i} f_i(x_i)   (3)
  subject to: sum_i w_i [ f_i(x_i) - γ sum_{x'_i} P(x'_i | x_{i,a}, a) f_i(x'_i) ] >= R(x, a)  for all x, a,

where x_{i,a} are the parents of the state variables in x'_i under action a, and R(x, a) decomposes as sum_j R_j(x_{j,a}, a), with each R_j a local reward function defined over a subset of state variables. Note that while the objective function can be computed efficiently, the number of constraints one has to satisfy remains exponential in the number of random variables. However, only a subset of these constraints becomes active and affects the solution.
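The exact LP in Equation 2 can be handed directly to an off-the-shelf solver when the state space is small. A minimal sketch on a hypothetical flat MDP (all model numbers are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action MDP. P[a, s, s'] is P(s'|s, a), R[s, a] the reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])
gamma, nS, nA = 0.9, 2, 2

# LP (2): minimize sum_s V(s) s.t. V(s) - gamma * sum_s' P(s'|s, a) V(s') >= R(s, a).
# linprog expects A_ub @ x <= b_ub, so each constraint is negated.
A_ub, b_ub = [], []
for s in range(nS):
    for a in range(nA):
        A_ub.append(gamma * P[a, s] - np.eye(nS)[s])
        b_ub.append(-R[s, a])
res = linprog(c=np.ones(nS), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * nS)
V = res.x   # with all constraints present, the LP optimum is V*
```

With every constraint included and positive objective weights, the LP solution coincides with the value-iteration fixed point; the factored ALP of Equation 3 replaces the per-state variables V(s) with the small weight vector w.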
Guestrin et al [8] showed how to find the active constraints by solving a cost network problem. Unfortunately, the cost network formulation is NP-hard. An alternative approach for finding active constraints was devised by Schuurmans and Patrascu [15]. The approach implements a constraint generation method [17] and appears to give a very good performance on average. The idea is to greedily search for maximally violated constraints, which can be done efficiently by solving a linear optimization problem. These constraints are included in the linear program and the process is repeated until no violated constraints are found. De Farias and Van Roy [5] analyzed a Monte Carlo approach with randomly sampled constraints.

3 Factored continuous-state MDPs

Many stochastic controlled processes are more naturally defined using continuous state variables. In this work we focus on continuous-state MDPs (CMDPs) where state spaces are restricted to [0, 1]^n. (We note that in general any bounded subspace of IR^n can be transformed to [0, 1]^n.) We assume factored representations where the transition probabilities are defined in terms of densities over state-variable subspaces:

  p(x' | x, a) = prod_{j=1}^n p(x'_j | x, a),

where x' and x denote the current and previous states. Rewards are represented compactly over subsets of state variables, similarly to factored finite-state MDPs.

3.1 Solving continuous-state MDPs

The optimal value function for a continuous-state MDP satisfies the Bellman fixed-point equation:

  V*(x) = max_a [ R(x, a) + γ ∫ p(x' | x, a) V*(x') dx' ].

The problem with CMDPs is that in most cases the optimal value function does not have a finite support and cannot be computed. The solutions attempt to replace the value function or the optimal policy with a finite approximation.

Grid-based MDP (GMDP) discretizations. A typical solution is to discretize the state space to a set of grid points and approximate value functions over such points.
Unfortunately, classic grid algorithms scale up exponentially with the number of state variables [4]. Let G = {x^1, x^2, ..., x^m} be a set of grid points over the state space [0, 1]^n. Then the Bellman operator T can be approximated with an operator T_G that is restricted to the grid points G. One such operator has been studied by Rust [14] and is defined as:

  V_G(x^i) = max_a [ R(x^i, a) + γ sum_{j=1}^m P_G(x^j | x^i, a) V_G(x^j) ],   (4)

where P_G(x^j | x^i, a) = c_a(x^i) p(x^j | x^i, a) defines a normalized transition probability such that c_a(x^i) is a normalizing constant. Equation 4 applied to the grid points G defines a finite-state MDP with |G| states. The solution, V_G = T_G V_G, approximates the original continuous-state MDP. Convergence properties of the approximation scheme in Equation 4 for random or pseudo-random samples were analyzed by Rust [14].

Parametric function approximations. An alternative way to solve a continuous-state MDP is to approximate the optimal value function V*(x) with an appropriate parametric function model [3]. The parameters of the model are fitted iteratively by applying one-step Bellman backups to a finite set of state points arranged on a fixed grid or obtained through Monte Carlo sampling. A least-squares criterion is used to fit the parameters of the model. In addition to parallel updates and optimizations, on-line update schemes based on gradient descent [3, 16] are very popular and can be used to optimize the parameters. The disadvantage of these methods is their instability and possible divergence [1].

3.2 LP approximations of CMDPs

Our objective is to develop an alternative to the above solutions that is based on ALP techniques and that takes advantage of model factorizations. It is easy to see that for a general continuous-state model the exact solution cannot be formulated as a linear program as was done in Equation 2, since the number of states is infinite.
However, using linear representations of the value function, we need to optimize only over a finite number of weights combining the feature functions. Adopting the ALP approach from factored MDPs (Section 2), the CMDP problem can be formulated as:

  minimize sum_i w_i ∫ f_i(x_i) dx_i
  subject to: sum_i w_i [ f_i(x_i) - γ ∫ prod_{j ∈ x'_i} p(x'_j | x_{j,a}, a) f_i(x'_i) dx'_i ] >= R(x, a)  for all x, a.

The above formulation of the ALP builds upon our observation that linear models in combination with factored transitions are well behaved when integrated over the [0, 1]^n state space (or any bounded space) and nicely decompose along the state-variable subsets defining the feature functions, similarly to Equation 3. This simplification is a consequence of the following variable-elimination transformation:

  ∫_0^1 ... ∫_0^1 f(x_k) dx_1 ... dx_n = [ ∫_0^1 f(x_k) dx_k ] prod_{j ≠ k} ∫_0^1 dx_j = ∫_0^1 f(x_k) dx_k.

Despite the decomposition, the ALP formulation of the factored CMDP comes with two concerns. First, the integrals may be improper and not computable. Second, we need to satisfy an infinite number of constraints (for all values of x and a). In the following we give solutions to both issues.

Closed-form solutions

The integrals in the objective function and constraints depend on the choice of transition models and basis functions. We want all these integrals to be proper Riemannian integrals. We prefer integrals with closed-form expressions. To this point, we have identified conjugate classes of transition models and basis functions leading to closed-form expressions.

Beta transitions. To parameterize the transition model over [0, 1] we propose to use beta densities or their mixtures. The beta transition is defined as:

  p(x'_j | x_{j,a}, a) = Beta(x'_j | α_{j,a}(x_{j,a}), β_{j,a}(x_{j,a})),

where x_{j,a} is the parent set of variable x_j under action a, and α_{j,a}(x_{j,a}), β_{j,a}(x_{j,a}) > 0 for x_{j,a} ∈ [0, 1]^{|x_{j,a}|} define the parameters of the beta model.

Feature functions.
A feature function form that is particularly suitable for the ALP and matches beta transitions is a product of power functions:

  f_i(x_i) = prod_{j ∈ x_i} x_j^{m_{j,i}}.

It is easy to show that in this case the integrals in the objective function simplify to:

  ∫ f_i(x_i) dx_i = prod_{j ∈ x_i} ∫_0^1 x_j^{m_{j,i}} dx_j = prod_{j ∈ x_i} 1 / (m_{j,i} + 1).

Similarly, using our conjugate transition and basis models, the integrals in the constraints reduce to ratios of gamma functions Γ(.): for the beta transition above, each one-dimensional integral ∫_0^1 Beta(x'_j | α, β) x'_j^m dx'_j has the closed form Γ(α + m) Γ(α + β) / (Γ(α) Γ(α + β + m)). For example, assuming features that are products of state variables, f_i(x_i) = prod_{j ∈ x_i} x_j (i.e., all powers m_{j,i} = 1), the ALP formulation becomes:

  minimize sum_i w_i 2^{-|x_i|}   (5)
  subject to: sum_i w_i [ prod_{j ∈ x_i} x_j - γ prod_{j ∈ x'_i} α_{j,a}(x_{j,a}) / (α_{j,a}(x_{j,a}) + β_{j,a}(x_{j,a})) ] >= R(x, a)  for all x, a,

since E[x'_j] = α / (α + β) for a beta density.

ALP solution. Although the ALP uses infinitely many constraints, only a finite subset of constraints, the active constraints, is necessary to define the optimal solution. Existing ALP methods for factored finite-state MDPs search for this subset more efficiently by taking advantage of local constraint decompositions and various heuristics. However, in the end these methods always rely on the fact that the decompositions are defined on a finite state subspace. Unfortunately, the constraints in our model decompose over smaller but still continuous subspaces, so the existing solutions for finite-state MDPs cannot be applied directly.

Sampling constraints. To avoid the problem of continuous state spaces we approximate the ALP solution using a finite set of constraints defined by a finite set of state-space points and actions in A.
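The whole recipe — polynomial features, beta transitions with closed-form moments, and sampled constraints — can be sketched end-to-end on a hypothetical one-dimensional CMDP. All model parameters, rewards, and features below are illustrative assumptions, not the paper's network model:

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9

def beta_params(x, a):
    # a = 0 ("attend"): next state ~ Beta(5, 2), independent of x;
    # a = 1 ("idle"): next state depends on the current state x.
    return (5.0, 2.0) if a == 0 else (1.0 + 4.0 * x, 2.0)

def reward(x, a):
    return x ** 2 - (0.1 if a == 0 else 0.0)  # small cost for attending

powers = [0, 1, 2]  # features f_i(x) = x^i

def beta_moment(alpha, beta, m):
    # Closed form E[x'^m] under Beta(alpha, beta):
    # prod_{k=0}^{m-1} (alpha + k) / (alpha + beta + k).
    out = 1.0
    for k in range(m):
        out *= (alpha + k) / (alpha + beta + k)
    return out

# Objective: integral of x^m over [0, 1] is 1 / (m + 1).
c = np.array([1.0 / (m + 1) for m in powers])

# Sampled constraints sum_i w_i (f_i(x) - gamma E[f_i(x')]) >= R(x, a),
# rewritten as A_ub @ w <= b_ub for linprog.
A_ub, b_ub = [], []
for x in np.linspace(0.0, 1.0, 201):
    for a in (0, 1):
        alpha, beta = beta_params(x, a)
        A_ub.append([-(x ** m - gamma * beta_moment(alpha, beta, m))
                     for m in powers])
        b_ub.append(-reward(x, a))

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * len(powers))
w = res.x

def value(x):
    return sum(wi * x ** m for wi, m in zip(w, powers))
```

Here the constraint set is a dense deterministic grid; replacing `np.linspace` with uniformly drawn random states gives the randomly sampled variant discussed next.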
These state-space points can be defined by regular grids on state subspaces or via random sampling of states x ∈ [0, 1]^n. In this work we focus on and experiment with the random sampling approach. For finite state spaces such a technique has been devised and analyzed by de Farias and Van Roy [5]. We note that the blind sampling approach can be improved via various heuristics. However, despite many possible heuristic improvements, we believe that the crucial benefit comes from the ALP formulation that "fits" the linear model and the subsequent constraint and subspace decompositions.

4 Experiments

To test the ALP method we use a continuous-state modification of the computer network example proposed by Guestrin et al [8]. Figure 1a illustrates the three different network structures used in the experiments. Nodes in the graphs represent computers. The state of a machine is represented by a number between 0 and 1 reflecting its processing capacity (the ability to process tasks). The network performance can be controlled through the activities of a human operator: the operator can attend a machine (one at a time) or do nothing. Thus, there is a total of n + 1 actions, where n is the number of computers in the network. The processing capacity of a machine fluctuates randomly and is determined by: (1) a random event (e.g., a software bug), (2) the machines connected to it, and (3) the presence of the operator at the machine console. The transition model represents the dynamics of the computer network.

Figure 1: (a) Topologies of the computer networks used in the experiments. (b) Transition densities for the j-th computer and different previous-state/action combinations.
The model is factorized and defined in terms of beta densities: p(x'_j | x_{j,a}, a) = Beta(x'_j | α_{j,a}(x_{j,a}), β_{j,a}(x_{j,a})), where x'_j is the current state of the j-th computer and x_{j,a} describes the previous-step state of the computers affecting j. The parameters α and β are linear functions of x_j and of the product x_j x_{j-1} for transitions when the human does not attend the computer, and are constants when the operator is present at the computer. Figure 1b illustrates the transition densities for the j-th computer given different values of its parents (x_j, x_{j-1}) and actions. The goal is to maintain the processing ability of the network at the highest possible level over time. The preferences are expressed in the reward function, a weighted sum of squared processing capacities sum_j x_j^2 in which the server term x_1^2 receives a larger weight. A discount factor γ < 1 is used. To define the ALP approximation, we used a linear combination of linear (for every node) and quadratic (for every link) feature functions. To demonstrate the practical benefit of the approach, we have compared it to the grid-based approximation (Equation 4) and a least-squares value iteration approach (with the same linear value function model as in the ALP). The constraints in the ALP were sampled randomly. (Various constraint sampling heuristics are analyzed and reported in a separate work [10].) To make the comparison fair, the same sets of samples were shared by all three methods. The full comparison study was run on problems with the three network structures from Figure 1a, each with 24 or 25 nodes.

Figure 2: (a) Average values of control policies for the ALP, least-squares (LS), and grid (GMDP) approaches for different sample sizes. A random policy is used as a baseline. (b) Average running times.

Figure 2a illustrates the average quality (value) of a policy obtained by the different approximation methods while varying the number of samples. The average is computed over 30 solutions obtained for 30 different sample sets and 100 different (random) start states. Simulation trajectories of length 50 are used. Figure 2b illustrates the scale-up potential of the methods in terms of running times. Results are averaged over 30 solutions. Overall, the results of the experiments clearly demonstrate the benefit of the ALP with "local" feature functions. For the sample-size range tested, our ALP method came close to the least-squares (LS) approach in terms of quality. Both used the same value function model and both managed to fit the parameters well, hence we obtained results of comparable quality. However, the ALP was much better in terms of running time. Oscillations and the poor convergence behavior of the iterative LS method are responsible for the difference. The ALP outperformed the grid-based approach (GMDP) in both policy quality and running time. The gap in policy quality was more pronounced for smaller sample sizes. This can be explained by the ability of the model to "cover" the complete state space as opposed to individual grid points. The better running times of the ALP can be explained by the fact that the number of free variables to be optimized is fixed (they are equal to the weights w), while in grid methods the free variables correspond to grid samples and their number grows linearly.

5 Conclusions

We have extended the application of linear program approximation methods and their benefits to factored MDPs with continuous states. We have proposed a factored transition model based on beta densities and identified feature functions that match such a model well.
Our ALP solution offers numerous advantages over standard grid and function approximation approaches: (1) it takes advantage of the structure of the process; (2) it allows one to define non-linear value function models and avoids the instabilities associated with least-squares approximations; (3) it gives a more robust solution for small sample sizes when compared to grid methods and provides a better way of "smoothing" the value function to unseen examples; (4) its running time scales up better than grid methods. These advantages have been demonstrated experimentally on three large problems.

⁵We note that our CMDP solution paves the road to ALP solutions for factored hybrid-state MDPs.

Many interesting issues related to the new method remain to be addressed. First, the random sampling of constraints can be improved using various heuristics; we report results of some heuristic solutions in a separate work [10]. Second, we did not give any complexity bounds for the random constraint sampling approach. However, we expect that the proofs by de Farias and Van Roy [5] can be adapted to cover the CMDP case. Finally, our ALP method assumes a bounded subspace of ℝ^n; the important open question is how to extend the method to the whole of ℝ^n.

References
[1] D.P. Bertsekas. A counter-example to temporal differences learning. Neural Computation, 7:270–279, 1994.
[2] D.P. Bertsekas. Dynamic programming and optimal control. Athena Scientific, 1995.
[3] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-dynamic programming. Athena Scientific, 1996.
[4] C.S. Chow and J.N. Tsitsiklis. An optimal one-way multigrid algorithm for discrete-time stochastic control. IEEE Transactions on Automatic Control, 36:898–914, 1991.
[5] D.P. de Farias and B. Van Roy. On constraint sampling for the linear programming approach to approximate dynamic programming. Mathematics of Operations Research, submitted, 2001.
[6] D.P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming.
Operations Research, 51(6), 2003.
[7] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5:142–150, 1989.
[8] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, pages 673–682, 2001.
[9] D. Koller and R. Parr. Computing factored value functions for policies in structured MDPs. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pages 1332–1339, 1999.
[10] B. Kveton and M. Hauskrecht. Heuristic refinements of approximate linear programming for factored continuous-state Markov decision processes. In 14th International Conference on Automated Planning and Scheduling, to appear, 2004.
[11] P. Poupart, C. Boutilier, R. Patrascu, and D. Schuurmans. Piecewise linear value function approximation for factored MDPs. In Proceedings of the Eighteenth National Conference on AI, pages 292–299, 2002.
[12] M.L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley, New York, 1994.
[13] B. Van Roy. Learning and value function approximation in complex decision problems. PhD thesis, Massachusetts Institute of Technology, 1998.
[14] J. Rust. Using randomization to break the curse of dimensionality. Econometrica, 65:487–516, 1997.
[15] D. Schuurmans and R. Patrascu. Direct value-approximation for factored MDPs. In Advances in Neural Information Processing Systems 14, MIT Press, 2002.
[16] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[17] M. Trick and S.E. Zin. A linear programming approach to solving stochastic dynamic programs. Technical report, 1993.
Perception of the structure of the physical world using unknown multimodal sensors and effectors D. Philipona Sony CSL, 6 rue Amyot 75005 Paris, France david.philipona@m4x.org J.K. O'Regan Laboratoire de Psychologie Expérimentale, CNRS Université René Descartes, 71, avenue Edouard Vaillant 92774 Boulogne-Billancourt Cedex, France http://nivea.psycho.univ-paris5.fr J.-P. Nadal Laboratoire de Physique Statistique, ENS rue Lhomond 75231 Paris Cedex 05 O. J.-M. D. Coenen Sony CSL, 6 rue Amyot 75005 Paris, France Abstract Is there a way for an algorithm linked to an unknown body to infer by itself information about this body and the world it is in? Taking the case of space for example, is there a way for this algorithm to realize that its body is in a three dimensional world? Is it possible for this algorithm to discover how to move in a straight line? And more basically: do these questions make any sense at all given that the algorithm only has access to the very high-dimensional data consisting of its sensory inputs and motor outputs? We demonstrate in this article how these questions can be given a positive answer. We show that it is possible to make an algorithm that, by analyzing the law that links its motor outputs to its sensory inputs, discovers information about the structure of the world regardless of the devices constituting the body it is linked to. We present results from simulations demonstrating a way to issue motor orders resulting in "fundamental" movements of the body as regards the structure of the physical world. 1 Introduction What is it possible to discover from behind the interface of an unknown body, embedded in an unknown world? In previous work [4] we presented an algorithm that can deduce the dimensionality of the outside space in which it is embedded, by making random movements and studying the intrinsic properties of the relation linking outgoing motor orders to resulting changes of sensory inputs (the so-called sensorimotor law [3]).
In the present article we provide a more advanced mathematical overview together with a more robust algorithm, and we also present a multimodal simulation. The mathematical section provides a rigorous treatment, relying on concepts from differential geometry, of what are essentially two very simple ideas. The first idea is that transformations of the organism-environment system which leave the sensory inputs unchanged will do this independently of the code or the structure of sensors, and are in fact the only aspects of the sensorimotor law that are independent of the code (property 1). In a single given sensorimotor configuration the effects of such transformations induce what is called a tangent space, over which linear algebra can be used to extract a small number of independent basic elements, which we call "measuring rods". The second idea is that there is a way of applying these measuring rods globally (property 2) so as to discover an overall substructure in the set of transformations that the organism-environment system can undergo and that leave sensory inputs unchanged. Taken together these ideas make it possible, if the sensory devices are sufficiently informative, to extract an algebraic group structure corresponding to the intrinsic properties of the space in which the organism is embedded. The simulation section is for the moment limited to an implementation of the first idea. It presents briefly the main steps of an implementation giving access to the measuring rods, and presents the results of its application to a virtual rat with mixed visual, auditory and tactile sensors (see Figure 2). The group discovered reveals the properties of the Euclidean space implicit in the equations describing the physics of the simulated world. Figure 1: The virtual organism used for the simulations.
Random motor commands produce random changes in the rat's body configuration, involving uncoordinated movements of the head, changes in the gaze direction, and changes in the aperture of the eyelids and diaphragms.

2 Mathematical formulation

Let us note S the sensory inputs and M the motor outputs. They are the only things the algorithm can access. Let us note P the configurations of the body controlled by the algorithm and E the configurations of the environment. We will assume that the body position is controlled by the multidimensional motor outputs through some law ϕa, and that the sensory devices together deliver a multidimensional input that is a function ϕb of the configuration of the body and the configuration of the environment:

P = ϕa(M) and S = ϕb(P, E)

We shall write ϕ(M, E) := ϕb(ϕa(M), E), note S, M, P, E the sets of all S, M, P, E, and assume that M and E are manifolds.

2.1 Isotropy group of the sensorimotor law

Through time, the algorithm will be able to experiment with a set of sensorimotor laws linking its inputs to its outputs:

ϕ(·, E) := {M ↦ ϕ(M, E), E ∈ E}

These are a set of functions linking S to M, parametrized by the environmental state E. Our goal is to extract from this set something that does not depend on the way the sensory information is provided. In other words, something that would be the same for all h ∘ ϕ(·, E), where h is an invertible function corresponding to a change of encoding, including changes of the sensory devices (as long as they provide access to the same information). If we note Sym(X) := {f : X → X, f a one-to-one mapping} and consider:

Γ(ϕ) = {f ∈ Sym(M × E) such that ϕ ∘ f = ϕ}

then

Property 1 Γ(ϕ1) = Γ(ϕ2) ⇔ ∃f ∈ Sym(S) such that ϕ1 = f ∘ ϕ2

Thus Γ(ϕ) is invariant by change of encoding, and retains from ϕ all that is independent of the encoding. This result is easily understood using an example from physics: think of a light sensor with unknown characteristics in a world consisting of a single point light source.
The values of the measures are very dependent on the sensor, but the fact that they are equal on concentric spheres is an intrinsic property of the physics of the situation (Γ(ϕ), in this case, would be the group of rotations) and is independent of the code and of the sensor's characteristics. But how can we understand the transformations f which, first, involve a manifold E the algorithm does not know, and second, are invisible since ϕ ∘ f = ϕ? We will show that, under one reasonable assumption, there is an algorithm that can discover the Lie algebra of the Lie subgroups of Γ(ϕ) that have independent actions over M and E, i.e. Lie groups G such that g(M, E) = (g1(M), g2(E)) for any g ∈ G, with

ϕ(g1(M), g2(E)) = ϕ(M, E)  ∀g ∈ G   (1)

2.2 Fundamental vector fields over the sensory inputs

We will assume that the sensory inputs provide enough information to observe univocally the changes of the environment when the exteroceptive sensors do not move. In mathematical form, we will assume that:

Condition 1 There exists U × V ⊂ M × E such that ϕ(M, ·) is an injective immersion from V to S for any M ∈ U.

Under this condition, ϕ(M, V) is a manifold for any M ∈ U and ϕ(M, ·) is a diffeomorphism from V to ϕ(M, V). We shall write ϕ⁻¹(M, ·) for its inverse. Choosing M0 ∈ U, it is thus possible to define an action φ_M0 of G over the manifold ϕ(M0, V):

φ_M0(g, S) := ϕ(M0, g2(ϕ⁻¹(M0, S)))  ∀S ∈ ϕ(M0, V)

As a consequence (see for instance [2]), for any left-invariant vector field X on G there is an associated fundamental vector field X_S on ϕ(M0, V):¹

X_S(S) := (d/dt) φ_M0(e^{−tX}, S)|_{t=0}  ∀S ∈ ϕ(M0, V)

¹To avoid heavy notations we have written X_S instead of X_{ϕ(M0,V)}.

The key point for us is that this whole vector field can be discovered experimentally by the algorithm from one vector alone: let us suppose the algorithm knows the one vector (d/dt) φ1(e^{−tX}, M0)|_{t=0} ∈ TM|_{M0} (the tangent space of M at M0), which we will call a measuring rod.
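Returning to the light-sensor example, the code-invariance captured by Γ(ϕ) can be checked numerically in a toy setting. The exponential response and the two encodings below are arbitrary stand-ins for the unknown sensor characteristics; the point is only that rotations of the source about the sensor are invisible under any invertible recoding, while other displacements are not.

```python
import math

def raw_sensor(source):
    # Unknown sensor characteristics: some decreasing function of distance.
    d = math.sqrt(sum(c * c for c in source))
    return math.exp(-d)

def encoded_sensor(source, h):
    # h is an arbitrary invertible recoding of the raw measurement.
    return h(raw_sensor(source))

def rotate_z(p, angle):
    # A rotation of the environment about the sensor (an element of SO(3)).
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

h1 = lambda v: 3.0 * v + 1.0          # one possible neural encoding
h2 = lambda v: math.log(v) - 7.0      # another, very different, encoding

src = (1.0, 2.0, 2.0)
for h in (h1, h2):
    before = encoded_sensor(src, h)
    after = encoded_sensor(rotate_z(src, 0.7), h)
    assert abs(before - after) < 1e-12   # rotations are invisible under any code
```

Both encodings agree that readings are constant on concentric spheres, which is exactly the sense in which the rotation group belongs to Γ(ϕ) independently of the code.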
Then it can construct a motor command M_X(t) such that M_X(0) = M0 and Ṁ_X(0) = −(d/dt) φ1(e^{−tX}, M0)|_{t=0}, and observe the fundamental field, thanks to the property:

Property 2 X_S(S) = (d/dt) ϕ(M_X(t), ϕ⁻¹(M0, S))|_{t=0}  ∀S ∈ ϕ(M0, V)

Indeed the movements of the environment reveal a sub-manifold ϕ(M0, V) of the manifold S of all sensory inputs, and this means they allow us to transport the sensory image of the given measuring rod over this sub-manifold: X_S(S) is the time derivative of the sensory inputs at t = 0 in the movement implied by the motor command M_X in that configuration of the environment yielding S at t = 0. The fundamental vector fields are the key to our problem because [2]:

[X_S, Y_S] = [X, Y]_S

where the left term uses the bracket of vector fields on ϕ(M0, V) and the right term uses the bracket in the Lie algebra of G. Thus clearly we can get insight into the properties of the latter by the study of these fields. If the action φ_M0 is effective (and it is possible to show that for any G there is a subgroup such that it is), we have the additional properties:

1. X ↦ X_S is an injective Lie algebra morphism: we can understand the whole Lie algebra of G through the Lie bracket over the fundamental vector fields.

2. G is diffeomorphic to the group of finite compositions of fundamental flows: any element g of G can be written as g = e^{X1} e^{X2} … e^{Xk}, and φ_M0(g, S) = φ_M0(e^{X1}, φ_M0(e^{X2}, … φ_M0(e^{Xk}, S))).

2.3 Discovery of the measuring rods

Thus the question is: how can the algorithm come to know the measuring rods? If ϕ is not singular (that is, is a subimmersion on U × V, see [1]), then it can be demonstrated that:

Property 3 (∂ϕ/∂M)(M0, E0) [Ṁ − Ṁ_X] = 0 ⇒ (d/dt) ϕ(M(t), ·)|_{t=0} = X_S(ϕ(M0, ·))

This means that the particular choice of one vector of TM|_{M0}, among those that have the same sensory image as a given measuring rod, is of no importance for the construction of the associated vector field.
Consequently, the search for the measuring rods becomes the search for their sensory images, which form a linear subspace of the intersection of the tangent spaces of ϕ(M0, V) and ϕ(U, E0) (as a direct consequence of property 2):

∀X  (∂ϕ/∂M)(M0, E0) (d/dt) φ1(e^{−tX}, M0)|_{t=0} ∈ Tϕ(M0, V)|_{S0} ∩ Tϕ(U, E0)|_{S0}

But what about the rest of the intersection? Reciprocally, it can be shown that:

Property 4 Any measuring rod that has a sensory image in the intersection of the tangent spaces of ϕ(M0, V) and ϕ(U, E) for any E ∈ V reveals a monodimensional subgroup of transformations over V that is invariant under any change of encoding.

3 Simulation

3.1 Description of the virtual rat

We have applied these ideas to a virtual body satisfying the different necessary conditions for the theory to be applied. Though our approach would also apply to the situation where the sensorimotor law involves time-varying functions, for simplicity here we shall take the restricted case where S and M are linked by a non-delayed relationship. We thus implemented a rat's head with instantaneous reactions, so that M ∈ ℝ^m and S ∈ ℝ^s. In the simulation, m and s have been arbitrarily assigned the value 300. The head had visual, auditory and tactile input devices (see Figure 2). The visual device consisted of two eyes, each one constituted by 40 photosensitive cells randomly distributed on a planar retina, one lens, one diaphragm (or pupil) and two eyelids. The images of the 9 light sources constituting the environment were projected through the lens on the retina to locally stimulate photosensitive cells, with a total influx related to the aperture of the diaphragm and the eyelids. The auditory device was constituted by one amplitude sensor in each of the two ears, with a sensitivity profile favoring auditory sources with azimuth and elevation 0° with respect to the orientation of the head.
The tactile device was constituted by 4 whiskers on each side of the rat's jaw, which stuck to an object when touching it and delivered a signal related to the shift from rest position. The global sensory inputs of dimension 90 (2 × 40 photosensors plus 2 auditory sensors plus 8 tactile sensors) were delivered to the algorithm through a linear mixing of all the signals delivered by these sensors, using a random matrix W_S ∈ M(s, 90) representing some sensory neural encoding in dimension s = 300.

Figure 2: The sensory system. (a) The sensory part of both eyes is constituted of randomly distributed photosensitive cells (small dark dots). (b) The auditory sensors have a gain profile favoring sounds coming from the front of the ears. (c) Tactile devices stick to the sources they come into contact with.

The motor device was as follows. Sixteen control parameters were constructed from linear combinations of the motor outputs of dimension m = 300, using a random matrix W_M ∈ M(16, m) representing some motor neural code. The configuration of the rat's head was then computed from these sixteen variables in this way: six parameters controlled the position and orientation of the head, and, for each eye, three controlled the eye orientation plus two the aperture of the diaphragm and the eyelids. The whiskers were not controllable, but were fixed to the head. In the simulation we used linear encodings W_S and W_M in order to show that the algorithm worked even when the dimension of the sensory and motor vectors was high. Note first, however, that any continuous high-dimensional function, even a non-linear one, could have been used instead of the linear mixing matrices. More importantly, note that even when linear mixing is used, the sensorimotor law is highly nonlinear: the sensors deliver signals that are not linear with respect to the configuration of the rat's head, and this configuration is itself not linear with respect to the motor outputs.
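The delivery of the 90 raw sensor signals to the algorithm through a 300-dimensional random linear mixing can be sketched as a single matrix product. The dimensions come from the text; the raw signal values and the Gaussian entries are placeholders.

```python
import random

RAW_DIM, S_DIM = 90, 300  # 2x40 photo + 2 auditory + 8 tactile -> s = 300

random.seed(1)
# W_S: a fixed random mixing matrix standing for some sensory neural encoding.
W_S = [[random.gauss(0.0, 1.0) for _ in range(RAW_DIM)] for _ in range(S_DIM)]

def encode(raw):
    """Linear mixing S = W_S * raw; the algorithm only ever sees S."""
    assert len(raw) == RAW_DIM
    return [sum(w * r for w, r in zip(row, raw)) for row in W_S]

raw_signals = [random.random() for _ in range(RAW_DIM)]
S = encode(raw_signals)
print(len(S))  # 300
```

Because the mixing is fixed and invertible in the relevant sense (it preserves the information in the raw signals), it plays the role of the unknown encoding h that the theory says is irrelevant.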
3.2 The algorithm

The first important result of the mathematical section was that the sensory images of the measuring rods are in the intersection between the tangent space of the sensory inputs observed when issuing different motor outputs while the environment is immobile, and the tangent space of the sensory inputs observed when the command being issued is constant. In the present simulation we will only be making use of this point, but keep in mind that the second important result was the relation between the fundamental vector fields and these measuring rods. This implies that the tangent vectors we are going to find by an experiment for a given sensory input S0 = ϕ(M0, E0) can be transported in a particular way over the whole sub-manifold ϕ(M0, V), thereby generating the sensory consequences of any transformation of E associated with the Lie subgroup of Γ(ϕ) whose measuring rods have been found.

Figure 3: Amplitudes of the ratio of successive singular values of: (a) the estimated tangent sensorimotor law (when E is fixed at E0) during the bootstrapping process; (b) the matrix corresponding to an estimated generating family for the tangent space to the manifold of sensory inputs observed when M is fixed at M0; (c) the matrix constituted by concatenating the vectors found in the two previous cases. The nullspaces of the first two matrices reflect redundant variables; the nullspace of the last one is related to the intersection of the first two tangent spaces (see equation 2). The graphs show there are 14 control parameters with respect to the body, and 27 variables to parametrize the environment (see text). The nullspace of the last matrix leads to the computation of an intersection of dimension 6, reflecting the Lie group of Euclidean transformations SE(3) (see text).

In [4], the simulation aimed to demonstrate that the dimensions of the different vector spaces involved were accessible.
We now present a simulation that goes beyond this by estimating these vector spaces themselves, in particular Tϕ(M0, V)|_{S0} ∩ Tϕ(U, E0)|_{S0}, in the case of multimodal sensory inputs and with a robust algorithm. The method previously used to estimate the first tangent space, and more specifically its dimension, indeed required an unrealistic level of accuracy. One of the reasons was the poor behavior of the Singular Value Decomposition when dealing with badly conditioned matrices. We have developed a much more stable method that furthermore uses time derivatives as a more plausible way to estimate the differential than multivariate linear approximation. Indeed, the nonlinear functional relationship between the motor output and the sensory inputs implies an exact linear relationship between their respective time derivatives at a given motor output M0:

S(t) = ϕ(M(t), E0) ⇒ Ṡ(0) = (∂ϕ/∂M)(M0, E0) Ṁ(0)

and this linear relationship can be estimated as the linear mapping associating Ṁ(0), for any curve in the motor command space such that M(0) = M0, to the resulting Ṡ(0). The idea is then to use bootstrapping to estimate the time derivative of the "good" sensory input combinations along the "good" movements, so that this linear relation is diagonal and the decomposition unnecessary: the purpose of the SVD used at each step is to provide an indication of which vectors seem to be of interest. At the end of the process, when the linear relationship is judged to be sufficiently diagonal, the singular values are taken as the diagonal elements, and are thus estimated with the precision of the time-derivative estimator. Figure 3a presents the evolution of the estimated dimension of the tangent space during this bootstrapping process.
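The dimension-detection criterion used throughout Figure 3 (place the frontier between significant and negligible singular values at the largest ratio between successive ones) can be sketched with a plain SVD on synthetic data. The bootstrapping refinement is omitted here, numpy is assumed, and the 14-dimensional subspace below is a synthetic stand-in for the simulation's actual tangent vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_dimension(samples):
    """Estimate the dimension of the subspace spanned by noisy tangent
    vectors: the frontier is placed at the largest ratio of successive
    singular values."""
    s = np.linalg.svd(samples, compute_uv=False)
    ratios = s[:-1] / s[1:]
    return int(np.argmax(ratios)) + 1

# Noisy observations of time derivatives lying in a 14-dimensional
# subspace of a 300-dimensional sensory space (dimensions from the text).
basis = rng.standard_normal((14, 300))
coeffs = rng.standard_normal((200, 14))
samples = coeffs @ basis + 1e-6 * rng.standard_normal((200, 300))

print(estimated_dimension(samples))  # 14 on this synthetic data
```

The ratio criterion is scale-free, which is why it tolerates the unknown units of the mixed sensory signals.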
Using this method in the first stage of the experiment, when the environment is immobile, makes it possible for the algorithm, at the same time as it finds a basis for the tangent space, to calibrate the signals coming from the head: it extracts sensory input combinations that are meaningful as regards its own mobility. Then, during a second stage, using these combinations, it estimates the tangent space to sensory inputs resulting from movement of the environment while it keeps its motor output fixed at M0. Finally, using the tangent spaces estimated in these two stages, it computes their intersection: if T_SM is a matrix containing a basis of the first tangent space, and T_SE a basis of the second tangent space, then the nullspace of [T_SM, T_SE] allows us to generate the intersection of the two spaces:

[T_SM, T_SE] λ = 0 ⇒ T_SM λ_M = −T_SE λ_E, where λ = (λ_M^T, λ_E^T)^T   (2)

To conclude, using the pseudo-inverse of the tangent sensorimotor law, the algorithm computes measuring rods that have a sensory image in that intersection; and this computation is simple since the adaptation process made the tangent law diagonal.

3.3 Results²

Figure 3a demonstrates the evolution of the estimation of the ratio between successive singular values. The maximum of this ratio can be taken as the frontier between significantly non-zero values and zero ones, and thus reveals the dimension of the tangent space to the sensory inputs observed in an immobile environment. There are indeed 14 effective parameters of control of the body with respect to the sensory inputs: of the 16 parameters described in section 3.1, for each eye the two parameters controlling the aperture of the diaphragm and the eyelids combine into a single effective one characterizing the total incoming light influx.
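Equation 2 translates directly into a nullspace computation. The sketch below, with made-up subspaces and numpy assumed, stacks bases of two tangent spaces, extracts the nullspace of the concatenation, and maps the null vectors back through one basis to obtain a basis of the intersection.

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Columns of the returned matrix span the nullspace of A."""
    _, s, vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vt[rank:].T

def subspace_intersection(T_SM, T_SE):
    """Equation 2: [T_SM, T_SE] @ lam = 0 implies
    T_SM @ lam_M = -T_SE @ lam_E, a vector lying in both subspaces."""
    k1 = T_SM.shape[1]
    lam = nullspace(np.hstack([T_SM, T_SE]))
    return T_SM @ lam[:k1]

rng = np.random.default_rng(0)
E = np.eye(10)
# Two subspaces of a 10-dimensional space with a known 2-dim intersection:
# span(e1..e4) and span(e3..e6), each given by a randomly mixed basis.
T_SM = E[:, :4] @ rng.standard_normal((4, 4))
T_SE = E[:, 2:6] @ rng.standard_normal((4, 4))

inter = subspace_intersection(T_SM, T_SE)
print(inter.shape[1])  # 2: the intersection span(e3, e4) is recovered
```

In the paper's experiment the concatenated matrix has 41 columns and rank 35, so the recovered intersection is 6-dimensional, matching SE(3).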
After this adaptation process, the tangent space to sensory inputs observed for a fixed motor output M0 can be estimated without bootstrapping, as shown, as regards its dimension (27 = 9 × 3 for the 9 light sources moving in a three-dimensional space), in Figure 3b. The intersection is computed from the nullspace of the matrix constituted by concatenation of generating vectors of the two previous spaces, using equation 2. This nullspace is of dimension 41 − 35 = 6, as shown in Figure 3c. Note that the graph shows the ratio of successive singular values, and thus has one less value than the number of vectors.

²The Matlab code of the simulation can be downloaded at http://nivea.psycho.univ-paris5.fr/~philipona for further examination.

Figure 4: The effects of motor commands corresponding to a generating family of 6 independent measuring rods computed by the algorithm. They reveal control of the head in a rigid fashion. Without the Lie bracket to understand commutativity, these movements involve arbitrary compositions of translations and rotations.

Figure 4 demonstrates the movements of the rat's head associated with the measuring rods found using the pseudoinverse of the sensorimotor law. Contrast these with the non-rigid movements of the rat's head associated with the random motor commands of Figure 1.

4 Conclusion

We have shown that sensorimotor laws possess intrinsic properties related to the structure of the physical world in which an organism's body is embedded. These properties have an overall group structure, for which smoothly parametrizable subgroups that act separately on the body and on the environment can be discovered. We have briefly presented a simulation demonstrating the way to access the measuring rods of these subgroups. We are currently conducting our first successful experiments on the estimation of the Lie bracket, which will allow the groups whose measuring rods have been found to be decomposed.
It will then be possible for the algorithm to distinguish, for instance, between translations and rotations, and between rotations around different centers. The question now is to determine what can be done with these first results: is this intrinsic understanding of space enough to discover the subgroups of Γ(ϕ) that do not act on both the body and the environment? For example, those acting on the body alone should provide a decomposition of the body with respect to its articulations. The ultimate goal is to show that there is a way of extracting objects in the environment from the sensorimotor law, even though nothing is known about the sensors and effectors.

References
[1] N. Bourbaki. Variétés différentielles et analytiques. Fascicule de résultats. Hermann, 1971–1997.
[2] T. Masson. Géométrie différentielle, groupes et algèbres de Lie, fibrés et connexions. LPT, 2001.
[3] J.K. O'Regan and A. Noë. A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 2001.
[4] D. Philipona, J.K. O'Regan, and J.-P. Nadal. Is there something out there? Inferring space from sensorimotor dependencies. Neural Computation, 15(9), 2003.
Log-Linear Models for Label Ranking Ofer Dekel Computer Science & Eng. Hebrew University oferd@cs.huji.ac.il Christopher D. Manning Computer Science Dept. Stanford University manning@cs.stanford.edu Yoram Singer Computer Science & Eng. Hebrew University singer@cs.huji.ac.il Abstract Label ranking is the task of inferring a total order over a predefined set of labels for each given instance. We present a general framework for batch learning of label ranking functions from supervised data. We assume that each instance in the training data is associated with a list of preferences over the label-set, however we do not assume that this list is either complete or consistent. This enables us to accommodate a variety of ranking problems. In contrast to the general form of the supervision, our goal is to learn a ranking function that induces a total order over the entire set of labels. Special cases of our setting are multilabel categorization and hierarchical classification. We present a general boosting-based learning algorithm for the label ranking problem and prove a lower bound on the progress of each boosting iteration. The applicability of our approach is demonstrated with a set of experiments on a large-scale text corpus. 1 Introduction This paper discusses supervised learning of label rankings – the task of associating instances with a total order over a predefined set of labels. The ordering should be performed in accordance with some notion of relevance of the labels. That is, a label deemed relevant to an instance should be ranked higher than a label which is considered less relevant. With each training instance we receive supervision given as a set of preferences over the labels. Concretely, the supervision we receive with each instance is given in the form of a preference graph: a simple directed graph for which the labels are the graph vertices. 
A directed edge from a label y to another label y′ denotes that, according to the supervision, y is more relevant to the instance than y′. We do not impose any further constraints on the structure of the preference graph. The approach we employ distills and generalizes several learning settings. The simplest setting is multiclass categorization, in which each instance is associated with a single label out of k possible labels. Such a setting was discussed for instance in [10], where a boosting algorithm called AdaBoost.MR (MR stands for Multiclass Ranking) for solving this problem was described and analyzed. Using the graph representation for multiclass problems, the preference graph induced by the supervision has k vertices and k − 1 edges. A directed edge points from the (single) relevant label to each of the k − 1 irrelevant labels (Fig. 1a). An interesting and practical generalization of multiclass problems is multilabel problems [10, 6, 4], in which a set of relevant labels (rather than a single label) is associated with each instance. In this case the supervision is represented by a directed bipartite graph where the relevant labels constitute one side of the graph and the irrelevant labels the other side, and there is a directed edge from each relevant label to each irrelevant label (Fig. 1b).

Figure 1: The supervision provided to the algorithm associates every training instance with a preference graph. Different graph topologies define different learning problems. Examples that fit naturally in our generalized setting: (a) multiclass single-label categorization where 1 is the correct label. (b) multiclass multilabel categorization where {1, 2} is the set of correct labels. (c) a multi-layer graph that encodes three levels of label "goodness", useful for instance in hierarchical multiclass settings. (d) a general (possibly cyclic) preference graph with no predefined structure.
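The Fig. 1a–b topologies can be written down directly as edge sets. The sketch below uses an (init, term) tuple representation of edges, which is our convention for illustration rather than the paper's.

```python
def multiclass_graph(correct, k):
    """Fig. 1a: one relevant label, k-1 edges from it to every other label."""
    return {(correct, y) for y in range(1, k + 1) if y != correct}

def multilabel_graph(relevant, k):
    """Fig. 1b: bipartite edges from every relevant to every irrelevant label."""
    irrelevant = set(range(1, k + 1)) - set(relevant)
    return {(r, y) for r in relevant for y in irrelevant}

print(len(multiclass_graph(1, 5)))       # 4 edges for k = 5
print(len(multilabel_graph({1, 2}, 5)))  # 2 x 3 = 6 edges
```

The multi-layer and general graphs of Fig. 1c–d fit the same representation; only the edge set changes.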
Similar settings are also encountered in information retrieval and language processing tasks. In these settings the set of labels contains linguistic structures such as tags and parses [1, 12], and the goal is to produce a total order over, for instance, candidate parses. The supervision might consist of information that distinguishes three goodness levels (Fig. 1c); for instance, the Penn Treebank [13] has notations to mark not only the most likely correct parse (implicitly opposed to incorrect parses), but also other possibly correct parses involving different phrasal attachments (additional information that almost all previous work in parsing has ignored). Additionally, one can more fully rank the quality of the many candidate parses generated for a sentence based on how many constituents or dependencies each shares with the correct parse, much more directly and effectively approaching the metrics on which parser quality is usually assessed. For concreteness, we use the term label ranking for all of these problems. Our learning framework decomposes each preference graph into subgraphs, where the graph decomposition procedure may take a general form and can change as a function of the instances. Ranking algorithms, especially in multilabel categorization problems, often reduce the ranking task to multiple binary decision problems by enumerating over all pairs of labels [7, 6, 4]. Such a reduction can easily be accommodated within our framework by decomposing the preference graph into elementary subgraphs, each consisting of a single edge. Another approach is to compare a highly preferred label (such as the correct or best parse of a sentence) with less preferred labels. Such approaches can be analyzed within our framework by defining a graph decomposition procedure that generates a subgraph for each relevant label and the neighboring labels that it is preferred over.
Returning to multilabel settings, this decomposition amounts to a loss that counts the number of relevant labels which are wrongly ranked below irrelevant ones. The algorithmic core of this paper is based on boosting-style algorithms for exponential models [2, 8]. Specifically, the boosting-style updates we employ build upon the construction used in [2] for solving multiclass problems. Our framework employing graph decomposition can also be used in other settings such as element ranking via projections [3, 11]. Furthermore, settings in which a semi-metric is defined over the label-set can also be reduced to the problem of label ranking, such as the parse ordering case mentioned above or when the labels are arranged in a hierarchical structure. We employ such a reduction in the category ranking experiments described in Sec. 4. The paper is organized as follows: a formal description of our setting is given in Sec. 2. In Sec. 3 we present an algorithm for learning label ranking functions. We demonstrate the merits of our approach on the task of category ranking in Sec. 4 and conclude in Sec. 5.

2 Problem Setting

Let X be an instance domain and let Y be a set of labels, possibly of infinite cardinality. A label ranking for an instance x ∈ X is a total order over Y, where y ≻ y′ implies that y is preferred over y′ as a label for x. A label ranking function f : X × Y → ℝ induces a label ranking for x ∈ X by y ≻ y′ ⇔ f(x, y) > f(x, y′). Overloading our notation, we denote the label ranking induced by f for x by f(x). We assume that we are provided with a set of base label-ranking functions h1, …, hn, and aim to learn a linear combination of the form f(x, y) = Σ_{j=1}^n λ_j h_j(x, y). We are also provided with a training set S = {(x_i, G_i)}_{i=1}^m, where every example is comprised of an instance x_i ∈ X and a preference graph G_i.
As defined in the previous section, a preference graph is a directed graph G = (V, E), for which the set of vertices V is defined to be the set of labels Y and E is some finite set of directed edges. Every edge e ∈ E in a directed graph is associated with an initial vertex, init(e) ∈ V, and a terminal vertex, term(e) ∈ V. The existence of a directed edge between two labels in a preference graph indicates that init(e) is preferred over term(e) and should be ranked higher. We require preference graphs to be simple, namely to have no more than a single edge between any pair of vertices and to not contain any self-loops. However, we impose no additional constraints on the supervision, namely, the set of edges in a preference graph may be sparse and may even include cycles. This form of supervision was chosen for its generality and flexibility. If Y is very large (possibly infinite), it would be unreasonable to require that the training data contain a complete total order over Y for every instance. Informally, our goal is for the label ranking induced by f to be as consistent as possible with all of the preference graphs given in S. We say that f(x_i) disagrees with a preference graph G_i = (V_i, E_i) if there exists an edge e ∈ E_i for which f(x_i, init(e)) ≤ f(x_i, term(e)). Formally, we define a function δ that indicates when such a disagreement occurs:

δ(f(x), G) = 1 if ∃ e ∈ E s.t. f(x, init(e)) ≤ f(x, term(e)), and 0 otherwise.

A simple measure of empirical ranking accuracy immediately follows from the definition of δ: we define the 0−1 error attained by a ranking function f on a training set S to be the number of training examples for which f(x_i) disagrees with G_i, namely,

ε_{0−1}(f, S) = Σ_{i=1}^{m} δ(f(x_i), G_i).

The 0−1 error may be natural for certain ranking problems, however in general it is a rather crude measure of ranking inaccuracy, as it is invariant to the exact number of edges in G_i with which f(x_i) disagrees.
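To make the definitions concrete, here is a minimal sketch of the disagreement indicator δ and the 0−1 error. The helper names and the edge-list representation of a preference graph are our own, not code from the paper.

```python
# Sketch of the disagreement indicator delta and the 0-1 ranking error.
# A preference graph is represented here as a list of (init, term) edges,
# and a ranking function's output as a dict mapping labels to scores.

def delta(scores, edges):
    """1 if the scores disagree with any edge (init not ranked above term)."""
    return int(any(scores[init] <= scores[term] for init, term in edges))

def zero_one_error(score_fn, examples):
    """Number of examples whose induced ranking disagrees with its graph."""
    return sum(delta(score_fn(x), edges) for x, edges in examples)

# Toy example: labels 0..2, scores induce the order 0 > 1 > 2.
scores = {0: 3.0, 1: 2.0, 2: 1.0}
assert delta(scores, [(0, 1), (1, 2)]) == 0   # consistent graph
assert delta(scores, [(2, 0)]) == 1           # 2 is not ranked above 0
```

The 0−1 error then just counts how many training examples contain at least one violated edge.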
Many ranking problems require a more refined notion of ranking accuracy. Thus, we define the disagreement error attained by f(x_i) with respect to G_i to be the fraction of edges in E_i with which f(x_i) disagrees. The disagreement error attained on the entire training set is the sum of disagreement errors over all training examples. Formally, we define the disagreement error attained on S as

ε_dis(f, S) = Σ_{i=1}^{m} |{e ∈ E_i s.t. f(x_i, init(e)) ≤ f(x_i, term(e))}| / |E_i|.

Both the 0−1 error and the disagreement error are reasonable measures of ranking inaccuracy. It turns out that both are instances of a more general notion of ranking error of which additional meaningful instances exist. The definition of this generalized error is slightly more involved but enables us to present a unified account of different measures of error. The missing ingredient needed to define the generalized error is a graph decomposition procedure A that we assume is given together with the training data.

[Figure 2 graphics omitted. The depicted decompositions of a five-vertex preference graph attain ε_dis = 3/8, ε_Dom = 2/4 and ε_dom = 3/5.] Figure 2: Applying different graph decomposition procedures induces different error functions: A_1 induces ε_dis, A_2 induces ε_Dom and A_3 induces ε_dom. The errors above are with respect to the order 1 ≻ 2 ≻ 3 ≻ 4 ≻ 5. Dashed edges without arrowheads disagree with this total order, and the errors are the fraction of subgraphs that contain disagreeing edges.

A takes as its input a preference graph G_i and returns a set of s_i subgraphs of G_i, denoted {G_{i,1}, . . . , G_{i,s_i}}, where G_{i,k} = (V_i, E_{i,k}). Each subgraph G_{i,k} is itself a preference graph and therefore δ(f(x_i), G_{i,k}) is well defined. We now define the generalized error attained by f(x_i) with respect to G_i as the fraction of subgraphs in A(G_i) with which f(x_i) disagrees. The generalized error attained on S is the sum of generalized errors over all training instances.
Formally, the generalized ranking error is defined as

ε_gen(f, S, A) = Σ_{i=1}^{m} (1/s_i) Σ_{k=1}^{s_i} δ(f(x_i), G_{i,k}), where {G_{i,1}, . . . , G_{i,s_i}} = A(G_i). (1)

Previously used losses for label ranking are special cases of the generalized error and are derived by choosing an appropriate decomposition procedure A. For instance, when A is defined to be the identity transformation on graphs (A(G) = {G}), then the generalized ranking error is reduced to the 0−1 error. Alternatively, for a graph G with s edges, we can define A to return s different subgraphs of G, each consisting of a single edge from G (Fig. 2 top) and the generalized ranking error reduces to the disagreement error. An additional meaningful measure of error is the domination error. A vertex is said to dominate the set of neighboring vertices that are connected to its outgoing edges. We would like every vertex in the preference graph to be ranked above all of its dominated neighbors. The domination error attained by f(x_i) with respect to G_i is the fraction of vertices with outgoing edges which are not ranked above all of their dominated neighbors. Formally, let A be the procedure that takes a preference graph G = (V, E) and returns a subgraph for each vertex with outgoing edges, each such subgraph consisting of a dominating vertex, its dominated neighbors and edges between them (Fig. 2 middle). Now define

ε_Dom(f, S) = ε_gen(f, S, A).

Minimizing the domination error is useful for solving multilabel classification problems. In these problems Y is of finite cardinality and every instance x_i is associated with a set of correct labels Y_i ⊆ Y. In order to reduce this problem to a ranking problem, we construct preference graphs G_i = (Y, E_i), where E_i contains edges from every vertex in Y_i to every vertex in Y \ Y_i. In this case, the domination loss simply counts the number of labels in Y_i that are not ranked above all of the labels in Y \ Y_i. A final interesting measure of error is the dominated error, denoted ε_dom.
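The decomposition procedures above fit in a few lines of code. The sketch below (our own helper names, with a preference graph given as an edge list) reproduces the 0−1, disagreement and domination errors as instances of Eq. (1):

```python
# Sketch of the generalized ranking error of Eq. (1) for three decomposition
# procedures: identity (0-1 error), single edges (disagreement error), and
# one subgraph per dominating vertex (domination error). Names are ours.
from collections import defaultdict

def disagrees(scores, edges):
    return any(scores[u] <= scores[v] for u, v in edges)

def A_identity(edges):
    return [edges]

def A_single_edges(edges):
    return [[e] for e in edges]

def A_domination(edges):
    out = defaultdict(list)
    for u, v in edges:
        out[u].append((u, v))           # group edges by dominating vertex
    return list(out.values())

def generalized_error(examples, A):
    total = 0.0
    for scores, edges in examples:
        subgraphs = A(edges)
        total += sum(disagrees(scores, g) for g in subgraphs) / len(subgraphs)
    return total

scores = {1: 5, 2: 4, 3: 3}
edges = [(1, 2), (1, 3), (3, 2)]        # the edge (3, 2) is violated
S = [(scores, edges)]
assert generalized_error(S, A_identity) == 1.0                   # 0-1 error
assert abs(generalized_error(S, A_single_edges) - 1/3) < 1e-12   # disagreement
assert generalized_error(S, A_domination) == 0.5                 # domination
```

On the toy graph, one of three edges is violated, so the three decompositions yield 1, 1/3 and 1/2 respectively.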
The dominated error is proportional to the number of labels with incoming edges that are not ranked below all of the labels that dominate them. Its graph decomposition procedure is depicted at the bottom of Fig. 2. Additional instances of the generalized ranking error exist, and can be tailored to fit most ranking problems. In the next section we set aside the specifics of the decomposition procedure and derive a minimization procedure for the generalized error.

Figure 3: A boosting based algorithm for generalized label ranking.
INPUT: training data S = {(x_i, G_i)}_{i=1}^{m} s.t. x_i ∈ X and G_i is a preference graph, a decomposition procedure A and a set of base ranking functions {h_1, . . . , h_n}.
INITIALIZE: λ_1 = (0, 0, . . . , 0)
  π_{i,e,j} = h_j(x_i, term(e)) − h_j(x_i, init(e))   [1 ≤ i ≤ m, e ∈ E_i, 1 ≤ j ≤ n]
  ρ = max_{i,e} Σ_j |π_{i,e,j}|
ITERATE: for t = 1, 2, . . .
  q_{t,i,e} = Σ_{k : e ∈ E_{i,k}} exp(λ_t · π_{i,e}) / (1 + Σ_{e′ ∈ E_{i,k}} exp(λ_t · π_{i,e′}))   [1 ≤ i ≤ m, e ∈ E_i]
  W⁺_{t,j} = Σ_{i,e : π_{i,e,j} > 0} q_{t,i,e} π_{i,e,j} / s_i ;  W⁻_{t,j} = Σ_{i,e : π_{i,e,j} < 0} −q_{t,i,e} π_{i,e,j} / s_i   [1 ≤ j ≤ n]
  Λ_{t,j} = (1/2) ln(W⁺_{t,j} / W⁻_{t,j})   [1 ≤ j ≤ n]
  λ_{t+1} = λ_t − Λ_t / ρ

3 Minimizing the Generalized Ranking Error

Our goal is to minimize the generalized error for a given training set S and graph decomposition procedure A. This task generalizes standard classification problems which are known to be NP-complete. Hence we do not attempt to minimize the error directly but rather minimize a smooth, strictly convex, upper bound on ε_gen. The disagreement of f(x_i) and a preference graph G_{i,k} = (V_{i,k}, E_{i,k}) can be upper bounded by

δ(f(x_i), G_{i,k}) ≤ log₂(1 + Σ_{e ∈ E_{i,k}} exp(f(x_i, term(e)) − f(x_i, init(e)))).

Denoting the right hand side of the above as L(f(x_i), G_{i,k}), we define the loss attained by f on the entire training set S to be

L(f, S, A) = Σ_{i=1}^{m} (1/s_i) Σ_{k=1}^{s_i} L(f(x_i), G_{i,k}), where {G_{i,1}, . . . , G_{i,s_i}} = A(G_i).

From the definition of the generalized error in Eq. (1), we conclude the upper bound ε_gen(f, S, A) ≤ L(f, S, A).
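The loss L(f, S, A) can be computed directly from ranking scores. The sketch below uses our own helper names and the single-edge decomposition for concreteness; it also illustrates why each subgraph's loss upper-bounds the 0/1 disagreement δ.

```python
# Sketch of the convex upper bound L(f, S, A) on the generalized error.
# A subgraph is a list of (init, term) edges; representation is ours.
import math

def subgraph_loss(scores, edges):
    # L(f(x), G') = log2(1 + sum_e exp(f(x, term(e)) - f(x, init(e))))
    return math.log2(1 + sum(math.exp(scores[v] - scores[u]) for u, v in edges))

def total_loss(examples, A):
    # L(f, S, A) = sum_i (1/s_i) sum_k L(f(x_i), G_{i,k})
    total = 0.0
    for scores, edges in examples:
        subgraphs = A(edges)
        total += sum(subgraph_loss(scores, g) for g in subgraphs) / len(subgraphs)
    return total

scores = {0: 2.0, 1: 1.0}
assert subgraph_loss(scores, [(0, 1)]) >= 0.0   # agreeing edge: delta = 0
assert subgraph_loss(scores, [(1, 0)]) >= 1.0   # disagreeing edge: delta = 1
```

When an edge disagrees, the corresponding exp term is at least 1, so the log₂ is at least 1, which is exactly what makes the bound δ ≤ L hold term by term.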
A boosting-based algorithm that globally minimizes the loss is given in Fig. 3. On every iteration, a weight q_{t,i,e} is calculated for every edge in the training data, and the algorithm focuses on satisfying each edge in proportion to its weight. This set of weights plays the role of the distribution vector common in boosting algorithms for classification. The following theorem bounds the decrease in loss on every iteration of the algorithm by a non-negative auxiliary function.

Theorem 1 Let S = {(x_i, G_i)}_{i=1}^{m} be a training set such that every x_i ∈ X and every G_i is a preference graph. Let A be a graph decomposition procedure that defines for each preference graph G_i a set of subgraphs {G_{i,1}, . . . , G_{i,s_i}} = A(G_i). Denote by f_t the ranking function obtained at iteration t of the algorithm given in Fig. 3 (f_t = Σ_j λ_{t,j} h_j). Using the notation defined in Fig. 3, the decrease in loss on iteration t is bounded by

L(f_t, S, A) − L(f_{t+1}, S, A) ≥ (1/ρ) Σ_{j=1}^{n} (√W⁺_{t,j} − √W⁻_{t,j})².

Proof Define ∆_{t,i,k} to be the difference between the loss attained by f_t and the loss attained by f_{t+1} on (x_i, G_{i,k}), that is, ∆_{t,i,k} = L(f_t(x_i), G_{i,k}) − L(f_{t+1}(x_i), G_{i,k}), and define φ_{t,i,k} = Σ_{e ∈ E_{i,k}} exp(λ_t · π_{i,e}). We can now rewrite L(f_t(x_i), G_{i,k}) as log(1 + φ_{t,i,k}). Using the inequality −log(1 − a) ≥ a (which holds whenever log(1 − a) is defined), we get

∆_{t,i,k} = log(1 + φ_{t,i,k}) − log(1 + φ_{t+1,i,k}) = −log(1 − (φ_{t,i,k} − φ_{t+1,i,k})/(1 + φ_{t,i,k}))
  ≥ (φ_{t,i,k} − φ_{t+1,i,k})/(1 + φ_{t,i,k})
  = Σ_{e ∈ E_{i,k}} [exp(λ_t · π_{i,e}) − exp(λ_{t+1} · π_{i,e})] / (1 + Σ_{e′ ∈ E_{i,k}} exp(λ_t · π_{i,e′})). (3)

The algorithm sets λ_{t+1} = λ_t − (1/ρ)Λ_t and therefore exp(λ_{t+1} · π_{i,e}) in Eq. (3) can be replaced by exp(λ_t · π_{i,e}) exp(−(1/ρ)Λ_t · π_{i,e}), yielding

∆_{t,i,k} ≥ Σ_{e ∈ E_{i,k}} [exp(λ_t · π_{i,e}) / (1 + Σ_{e′ ∈ E_{i,k}} exp(λ_t · π_{i,e′}))] (1 − exp(−(1/ρ)Λ_t · π_{i,e})).
Summing both sides of the above over the subgraphs in A(G_i), and plugging in q_{t,i,e},

Σ_{k=1}^{s_i} ∆_{t,i,k} ≥ Σ_{e ∈ E_i} [Σ_{k : e ∈ E_{i,k}} exp(λ_t · π_{i,e}) / (1 + Σ_{e′ ∈ E_{i,k}} exp(λ_t · π_{i,e′}))] (1 − exp(−(1/ρ)Λ_t · π_{i,e}))
  = Σ_{e ∈ E_i} q_{t,i,e} (1 − exp(−(1/ρ)Λ_t · π_{i,e})). (4)

We now rewrite −(1/ρ)Λ_t · π_{i,e} in a more convenient form:

−(1/ρ)Λ_t · π_{i,e} = −Σ_{j=1}^{n} (1/ρ)Λ_{t,j} π_{i,e,j} = Σ_{j=1}^{n} (|π_{i,e,j}|/ρ)(−sign(π_{i,e,j})Λ_{t,j}). (5)

The rationale behind this rewriting is that we now think of (|π_{i,e,1}|/ρ), . . . , (|π_{i,e,n}|/ρ) as coefficients in a subconvex combination of (−sign(π_{i,e,1})Λ_{t,1}), . . . , (−sign(π_{i,e,n})Λ_{t,n}), since (|π_{i,e,j}|/ρ) ≥ 0 for all j and, from the definition of ρ, Σ_j (|π_{i,e,j}|/ρ) ≤ 1. Plugging Eq. (5) into Eq. (4) and using the concavity of the function 1 − exp(·) in Eq. (4), we obtain

Σ_{k=1}^{s_i} ∆_{t,i,k} ≥ Σ_{e ∈ E_i} q_{t,i,e} (1 − exp(Σ_{j=1}^{n} (|π_{i,e,j}|/ρ)(−sign(π_{i,e,j})Λ_{t,j})))
  ≥ Σ_{e ∈ E_i} Σ_{j=1}^{n} q_{t,i,e} (|π_{i,e,j}|/ρ)(1 − exp(−sign(π_{i,e,j})Λ_{t,j})).

Finally, we sum both sides of the above over all of S and plug in W⁺, W⁻ and Λ to get

L(f_t, S, A) − L(f_{t+1}, S, A) = Σ_{i=1}^{m} (1/s_i) Σ_{k=1}^{s_i} ∆_{t,i,k}
  ≥ (1/ρ) Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{e ∈ E_i} (q_{t,i,e} |π_{i,e,j}| / s_i)(1 − exp(−sign(π_{i,e,j})Λ_{t,j}))
  = (1/ρ) Σ_{j=1}^{n} [W⁺_{t,j}(1 − √(W⁻_{t,j}/W⁺_{t,j})) + W⁻_{t,j}(1 − √(W⁺_{t,j}/W⁻_{t,j}))]
  = (1/ρ) Σ_{j=1}^{n} (√W⁺_{t,j} − √W⁻_{t,j})².

Thm. 1 proves that the losses attained on each iteration form a monotonically nonincreasing sequence of positive numbers, which must therefore converge. However, we are interested in proving a stronger claim, namely that the vector sequence (λ_t)_{t=1}^{∞} converges to a globally optimal weight-vector λ⋆. Since the loss is a convex function, it suffices to show that the vector sequence converges to a stationary point of the loss. It is easily verified that the non-negative auxiliary function which bounds the decrease in loss equals zero only at stationary points of the loss. This fact implies that (λ_t)_{t=1}^{∞} indeed converges to λ⋆ if the set of all feasible values for λ is compact and the loss has a unique global minimum.
Compactness of the feasible set and uniqueness of the optimum can be explicitly enforced by adding a form of natural regularization to the boosting algorithm. The specifics of this technique exceed the scope of this paper and are discussed in [5]. In all, the boosting algorithm of Fig. 3 converges to the globally optimal weight-vector λ⋆.

4 Experiments

Figure 4: The test error averaged over 5-fold cross validation. The rows correspond to different optimization problems: minimizing ε_{0−1}, ε_dis, ε_Dom and ε_dom. Errors are measured using all 4 error measures.

  minimized   ε_{0−1}   ε_dis    ε_Dom   ε_dom
  0−1         0.63      0.068    0.42    0.12
  dis         0.73      0.063    0.51    0.14
  Dom         0.59      0.049    0.35    0.10
  dom         0.59      0.067    0.41    0.10

To demonstrate our framework, we chose to learn a category ranking problem on a subset of the Reuters Corpus, Vol. 1 [14]. The full Reuters corpus is comprised of approximately 800,000 textual news articles, collected over a period of 12 months in 1996–1997. Most of the articles are labeled by one or more categories. For the purpose of these experiments, we limited ourselves to the subset of articles collected during January 1997: approximately 66,000 articles labeled by 103 different categories. An interesting aspect of the Reuters corpus is that the categories are arranged in a hierarchy. The set of possible labels contains both general categories and more specific ones, where the specific categories refine the general categories. This concept is best explained with an example: three of the categories in the corpus are Economics, Government Finance and Government Borrowing. It would certainly be correct to categorize an article on government borrowing as either government finance or economics, however these general categories are less specific and do not describe the article as well. Furthermore, misclassifying such an article as government revenue is by far better than misclassifying it as sports. In summary, the category hierarchy induces a preference over the set of labels.
We exploit this property to generate supervision for the label ranking problem at hand. Formally, we view every category as a vertex in a rooted tree, where the tree root corresponds to a general abstract category that is relevant to all of the articles in the corpus and every category is a specific instance of its parent in the tree. The labels associated with an article constitute a set of paths from the tree root to a set of leaves. The original corpus is somewhat inconsistent in that not all paths end in a leaf, but rather end in some inner vertex. To fix this inconsistency, we added a dummy child vertex to every inner vertex and diverted all paths that originally ended in this inner vertex to its new child. Our learning problem then becomes the problem of ranking leaves. The severity of wrongly categorizing an article by a leaf is proportional to the graph distance between this leaf and the closest correct leaf given in the corpus. The preference graph that encodes this preference is a multi-layer graph where the top layer contains all of the correct labels, the second layer contains all of their sibling vertices in the tree and so on. Every vertex in the multi-layer preference graph has outgoing edges to all vertices in lower layers, but there are no edges between vertices in the same layer. For practical purposes, we conducted experiments using only 3-layer preference graphs generated by collapsing all of the layers below 3 to a single layer. All of the experiments were carried out using 5-fold cross validation. The word counts for each article were used to construct base ranking functions in the following way: for every word w and every category y, let w(x_i) denote the number of appearances of w in the article x_i. Then, define

h_{w,y}(x_i, y_i) = log(w(x_i)) + 1 if w(x_i) > 0 and y_i = y, and 0 otherwise. (6)

For each training set, we first applied a heuristic feature selection method common in boosting applications [10] to select some 3200 informative words.
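Eq. (6) translates directly into code. In the sketch below the word counts of an article are represented as a dictionary, which is our own choice of representation, not the paper's:

```python
# Sketch of the word-category base ranking functions of Eq. (6).
# An article is represented as a {word: count} dict (our assumption).
import math

def make_base_ranker(word, category):
    def h(word_counts, y):
        c = word_counts.get(word, 0)
        # log(count) + 1 only when the word occurs and the label matches
        return math.log(c) + 1.0 if (c > 0 and y == category) else 0.0
    return h

h = make_base_ranker("government", "Economics")
article = {"government": 3, "borrowing": 1}
assert h(article, "Economics") == math.log(3) + 1.0
assert h(article, "Sports") == 0.0
assert h({"sports": 2}, "Economics") == 0.0   # word absent -> 0
```

One such function exists for every (word, category) pair, which is how the 103 · 3200 base rankers mentioned next arise.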
These words then define 103 · 3200 base ranking functions as shown in Eq. (6). Next, we ran our learning algorithm using each of the 4 graph decomposition procedures discussed above: zero-one, disagreement, domination and dominated. After learning each problem, we calculated all four error measures on the test data. The results are presented in Fig. 4. Two points are worth noting. First, these results are not comparable with previous results for multilabel problems using this corpus, since label ranking is a more difficult task. For instance, an average preference graph in the test data has 820 edges, and the error for such a graph equals zero only if every single edge agrees with the ranking function. Second, the experiments clearly indicate that the results obtained by minimizing the domination loss are better than those obtained with the other ranking losses, no matter which error is used for evaluation. In particular, employing the domination loss yields significantly better results than using the disagreement loss, which has been the commonly used decomposition method in categorization problems [7, 10, 6, 4].

5 Summary

We presented a general framework for label ranking problems by means of preference graphs and the graph decomposition procedure. This framework was shown to generalize other decision problems, most notably multilabel categorization. We then described and analyzed a boosting algorithm that works with any choice of graph decomposition. We are currently exporting the approach to learning in inner product spaces, where different graph decomposition procedures result in different bindings of slack variables. Another interesting question is whether the graph decomposition approach can be combined with probabilistic models for orderings [9] to achieve algorithmic efficiency.

References
[1] M. Collins and N. Duffy. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In 30th Annual Meeting of the ACL, 2002.
[2] M.
Collins, R. E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 47(2/3):253–285, 2002.
[3] K. Crammer and Y. Singer. Pranking with ranking. NIPS 14, 2001.
[4] K. Crammer and Y. Singer. A new family of online algorithms for category ranking. Journal of Machine Learning Research, 3:1025–1058, 2003.
[5] O. Dekel, S. Shalev-Shwartz, and Y. Singer. Smooth epsilon-insensitive regression by loss symmetrization. COLT 16, 2003.
[6] A. Elisseeff and J. Weston. A kernel method for multi-labeled classification. NIPS 14, 2001.
[7] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. In Machine Learning: Proc. of the Fifteenth International Conference, 1998.
[8] G. Lebanon and J. Lafferty. Boosting and ML for exponential models. NIPS 14, 2001.
[9] G. Lebanon and J. Lafferty. Conditional models on the ranking poset. NIPS 15, 2002.
[10] R. E. Schapire and Y. Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 32(2/3), 2000.
[11] A. Shashua and A. Levin. Ranking with large margin principle. NIPS 15, 2002.
[12] K. Toutanova and C. D. Manning. Feature selection for a rich HPSG grammar using decision trees. In Proceedings of the Sixth Conference on Natural Language Learning (CoNLL), 2002.
[13] The Penn Treebank Project. http://www.cis.upenn.edu/~treebank/.
[14] Reuters Corpus Vol. 1. http://about.reuters.com/researchandstandards/corpus/.
Learning the k in k-means

Greg Hamerly, Charles Elkan
{ghamerly,elkan}@cs.ucsd.edu
Department of Computer Science and Engineering
University of California, San Diego
La Jolla, California 92093-0114

Abstract

When clustering a dataset, the right number k of clusters to use is often not obvious, and choosing k automatically is a hard algorithmic problem. In this paper we present an improved algorithm for learning k while clustering. The G-means algorithm is based on a statistical test for the hypothesis that a subset of data follows a Gaussian distribution. G-means runs k-means with increasing k in a hierarchical fashion until the test accepts the hypothesis that the data assigned to each k-means center are Gaussian. Two key advantages are that the hypothesis test does not limit the covariance of the data and does not compute a full covariance matrix. Additionally, G-means only requires one intuitive parameter, the standard statistical significance level α. We present results from experiments showing that the algorithm works well, and better than a recent method based on the BIC penalty for model complexity. In these experiments, we show that the BIC is ineffective as a scoring function, since it does not penalize the model's complexity strongly enough.

1 Introduction and related work

Clustering algorithms are useful tools for data mining, compression, probability density estimation, and many other important tasks. However, most clustering algorithms require the user to specify the number of clusters (called k), and it is not always clear what the best value for k is. Figure 1 shows examples where k has been improperly chosen. Choosing k is often an ad hoc decision based on prior knowledge, assumptions, and practical experience. Choosing k is made more difficult when the data has many dimensions, even when clusters are well-separated.
Center-based clustering algorithms (in particular k-means and Gaussian expectation-maximization) usually assume that each cluster adheres to a unimodal distribution, such as Gaussian. With these methods, only one center should be used to model each subset of data that follows a unimodal distribution. If multiple centers are used to describe data drawn from one mode, the centers are a needlessly complex description of the data, and in fact the multiple centers capture the truth about the subset less well than one center. In this paper we present a simple algorithm called G-means that discovers an appropriate k using a statistical test for deciding whether to split a k-means center into two centers. We describe examples and present experimental results that show that the new algorithm is successful.

[Figure 1 plots omitted.] Figure 1: Two clusterings where k was improperly chosen. Dark crosses are k-means centers. On the left, there are too few centers; five should be used. On the right, too many centers are used; one center is sufficient for representing the data. In general, one center should be used to represent one Gaussian cluster.

This technique is useful and applicable for many clustering algorithms other than k-means, but here we consider only the k-means algorithm for simplicity. Several algorithms have been proposed previously to determine k automatically. Like our method, most previous methods are wrappers around k-means or some other clustering algorithm for fixed k. Wrapper methods use splitting and/or merging rules for centers to increase or decrease k as the algorithm proceeds. Pelleg and Moore [14] proposed a regularization framework for learning k, which they call X-means.
The algorithm searches over many values of k and scores each clustering model using the so-called Bayesian Information Criterion [10]:

BIC(C|X) = L(X|C) − (p/2) log n,

where L(X|C) is the log-likelihood of the dataset X according to model C, p = k(d + 1) is the number of parameters in the model C with dimensionality d and k cluster centers, and n is the number of points in the dataset. X-means chooses the model with the best BIC score on the data. Aside from the BIC, other scoring functions are also available. Bischof et al. [1] use a minimum description length (MDL) framework, where the description length is a measure of how well the data are fit by the model. Their algorithm starts with a large value for k and removes centers (reduces k) whenever that choice reduces the description length. Between steps of reducing k, they use the k-means algorithm to optimize the model fit to the data. With hierarchical clustering algorithms, other methods may be employed to determine the best number of clusters. One is to build a merging tree ("dendrogram") of the data based on a cluster distance metric, and search for areas of the tree that are stable with respect to inter- and intra-cluster distances [9, Section 5.1]. This method of estimating k is best applied with domain-specific knowledge and human intuition.

2 The Gaussian-means (G-means) algorithm

The G-means algorithm starts with a small number of k-means centers, and grows the number of centers. Each iteration of the algorithm splits into two those centers whose data appear not to come from a Gaussian distribution. Between each round of splitting, we run k-means on the entire dataset and all the centers to refine the current solution. We can initialize with just k = 1, or we can choose some larger value of k if we have some prior knowledge about the range of k. G-means repeatedly makes decisions based on a statistical test for the data assigned to each center.
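For illustration, the BIC score above can be computed under a spherical-Gaussian likelihood. The sketch below is our own assumption-laden illustration (the exact likelihood X-means uses is described in [14], not reproduced here); it uses a pooled spherical variance estimate and the parameter count p = k(d + 1) from the text.

```python
# Sketch of a BIC score for a k-means clustering, assuming a spherical
# Gaussian likelihood. This is an illustrative implementation, not X-means.
import numpy as np

def bic_score(X, centers, labels):
    n, d = X.shape
    k = len(centers)
    # Pooled spherical variance estimate (MLE) over all clusters.
    sq = sum(np.sum((X[labels == j] - centers[j]) ** 2) for j in range(k))
    sigma2 = max(sq / (n * d), 1e-12)
    # Spherical Gaussian log-likelihood of the data under the model.
    loglik = -0.5 * n * d * np.log(2 * np.pi * sigma2) - 0.5 * sq / sigma2
    p = k * (d + 1)                 # k centers (d params each) + k variances
    return loglik - 0.5 * p * np.log(n)
```

With two well-separated clusters, the k = 2 model fits far better than k = 1, so its higher likelihood outweighs the larger parameter penalty and it wins the BIC comparison.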
If the data currently assigned to a k-means center appear to be Gaussian, then we want to represent that data with only one center. However, if the same data do not appear to be Gaussian, then we want to use multiple centers to model the data properly.

Algorithm 1 G-means(X, α)
1: Let C be the initial set of centers (usually C ← {x̄}).
2: C ← kmeans(C, X).
3: Let {x_i | class(x_i) = j} be the set of datapoints assigned to center c_j.
4: Use a statistical test to detect if each {x_i | class(x_i) = j} follows a Gaussian distribution (at confidence level α).
5: If the data look Gaussian, keep c_j. Otherwise replace c_j with two centers.
6: Repeat from step 2 until no more centers are added.

The algorithm will run k-means multiple times (up to k times when finding k centers), so the time complexity is at most O(k) times that of k-means. The k-means algorithm implicitly assumes that the datapoints in each cluster are spherically distributed around the center. Less restrictively, the Gaussian expectation-maximization algorithm assumes that the datapoints in each cluster have a multidimensional Gaussian distribution with a covariance matrix that may or may not be fixed, or shared. The Gaussian distribution test that we present below is valid for either covariance matrix assumption. The test also accounts for the number of datapoints n tested by incorporating n in the calculation of the critical value of the test (see Equation 2). This prevents the G-means algorithm from making bad decisions about clusters with few datapoints.

2.1 Testing clusters for Gaussian fit

To specify the G-means algorithm fully we need a test to detect whether the data assigned to a center are sampled from a Gaussian. The two alternative hypotheses are
• H0: The data around the center are sampled from a Gaussian.
• H1: The data around the center are not sampled from a Gaussian.
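The outer loop of Algorithm 1 can be sketched as follows. The k-means routine, the split rule and the Gaussian test are supplied by the caller, and all names are our own; the paper's concrete choices for the latter two are described in the following subsections.

```python
# Skeleton of the G-means outer loop (Algorithm 1). kmeans(C, X) refines
# centers; split(c, Xc) returns two child centers; looks_gaussian(Xc, c)
# is the statistical test at confidence level alpha.
import numpy as np

def g_means(X, kmeans, split, looks_gaussian):
    C = [X.mean(axis=0)]                     # start with one center (the mean)
    while True:
        C = kmeans(np.asarray(C), X)         # refine all centers on all data
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        new_C = []
        for j in range(len(C)):
            Xc = X[labels == j]
            if len(Xc) < 2 or looks_gaussian(Xc, C[j]):
                new_C.append(C[j])           # keep the center
            else:
                new_C.extend(split(C[j], Xc))  # replace with two children
        if len(new_C) == len(C):
            return np.asarray(C)             # no center was split: done
        C = new_C
```

A trivial way to exercise the skeleton is to pass a toy k-means and a size-based stand-in for the Gaussian test, which makes the loop split exactly once on two well-separated groups.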
If we accept the null hypothesis H0, then we believe that the one center is sufficient to model its data, and we should not split the cluster into two sub-clusters. If we reject H0 and accept H1, then we want to split the cluster. The test we use is based on the Anderson-Darling statistic. This one-dimensional test has been shown empirically to be the most powerful normality test that is based on the empirical cumulative distribution function (ECDF). Given a list of values x_i that have been converted to mean 0 and variance 1, let x_(i) be the ith ordered value. Let z_i = F(x_(i)), where F is the N(0, 1) cumulative distribution function. Then the statistic is

A²(Z) = −(1/n) Σ_{i=1}^{n} (2i − 1) [log(z_i) + log(1 − z_{n+1−i})] − n. (1)

Stephens [17] showed that for the case where µ and σ are estimated from the data (as in clustering), we must correct the statistic according to

A²_*(Z) = A²(Z)(1 + 4/n − 25/n²). (2)

Given a subset of data X in d dimensions that belongs to center c, the hypothesis test proceeds as follows:
1. Choose a significance level α for the test.
2. Initialize two centers, called "children" of c. See the text for good ways to do this.
3. Run k-means on these two centers in X. This can be run to completion, or to some early stopping point if desired. Let c1, c2 be the child centers chosen by k-means.
4. Let v = c1 − c2 be a d-dimensional vector that connects the two centers. This is the direction that k-means believes to be important for clustering. Then project X onto v: x′_i = ⟨x_i, v⟩/||v||². X′ is a 1-dimensional representation of the data projected onto v. Transform X′ so that it has mean 0 and variance 1.
5. Let z_i = F(x′_(i)). If A²_*(Z) is in the range of non-critical values at confidence level α, then accept H0, keep the original center, and discard {c1, c2}. Otherwise, reject H0 and keep {c1, c2} in place of the original center.
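Eqs. (1)–(2) need nothing beyond the standard normal CDF. The sketch below (function name is ours) standardizes its input, computes A²(Z), and applies the Stephens correction:

```python
# Sketch of the corrected Anderson-Darling statistic of Eqs. (1)-(2).
# Input is a 1-d sample; it is standardized to mean 0, variance 1 first.
import math

def anderson_darling(x):
    n = len(x)
    mu = sum(x) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    # z_i = F(x_(i)) with F the N(0,1) CDF, via the error function.
    z = sorted(0.5 * (1 + math.erf((v - mu) / (sd * math.sqrt(2)))) for v in x)
    a2 = -sum((2 * i - 1) * (math.log(z[i - 1]) + math.log(1 - z[n - i]))
              for i in range(1, n + 1)) / n - n
    return a2 * (1 + 4.0 / n - 25.0 / n ** 2)   # correction of Eq. (2)
```

A sample that sits exactly on normal quantiles yields a small statistic, well under the critical value 1.8692 quoted later for α = 0.0001, while a strongly bimodal sample yields a statistic far above it.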
A primary contribution of this work is simplifying the test for Gaussian fit by projecting the data to one dimension where the test is simple to apply. The authors of [5] also use this approach for online dimensionality reduction during clustering. The one-dimensional representation of the data allows us to consider only the data along the direction that k-means has found to be important for separating the data. This is related to the problem of projection pursuit [7], where here k-means searches for a direction in which the data appears non-Gaussian. We must choose the significance level of the test, α, which is the desired probability of making a Type I error (i.e. incorrectly rejecting H0). It is appropriate to use a Bonferroni adjustment to reduce the chance of making Type I errors over multiple tests. For example, if we want a 0.01 chance of making a Type I error in 100 tests, we should apply a Bonferroni adjustment to make each test use α = 0.01/100 = 0.0001. To find k final centers the G-means algorithm makes k statistical tests, so the Bonferroni correction does not need to be extreme. In our tests, we always use α = 0.0001. We consider two ways to initialize the two child centers. Both approaches initialize with c ± m, where c is a center and m is chosen. The first method chooses m as a random d-dimensional vector such that ||m|| is small compared to the distortion of the data. A second method finds the main principal component s of the data (having eigenvalue λ), and chooses m = s√(2λ/π). This deterministic method places the two centers in their expected locations under H0. The principal component calculations require O(nd² + d³) time and O(d²) space, but since we only want the main principal component, we can use fast methods like the power method, which takes time that is at most linear in the ratio of the two largest eigenvalues [4]. In this paper we use principal-component-based splitting.
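Steps 2–4 of the test, using the deterministic principal-component initialization m = s√(2λ/π) together with the projection x′_i = ⟨x_i, v⟩/||v||², might look like this (a sketch with our own function names):

```python
# Sketch of the principal-component split c +/- m and the projection of a
# cluster's points onto the vector connecting the two child centers.
import numpy as np

def pc_split(c, Xc):
    """Return the two child centers c - m and c + m (step 2)."""
    cov = np.atleast_2d(np.cov(Xc.T))
    eigvals, eigvecs = np.linalg.eigh(cov)
    lam, s = eigvals[-1], eigvecs[:, -1]      # main principal component
    m = s * np.sqrt(2 * lam / np.pi)
    return c - m, c + m

def project(Xc, c1, c2):
    """Project onto v = c1 - c2 and standardize to mean 0, variance 1 (step 4)."""
    v = c1 - c2
    x = Xc @ v / np.dot(v, v)                 # x'_i = <x_i, v> / ||v||^2
    return (x - x.mean()) / x.std()
```

On data stretched along one axis, the split lands along that high-variance axis, and the projected sample comes out standardized, ready for the Anderson-Darling test.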
2.2 An example

Figure 2 shows a run of the G-means algorithm on a synthetic dataset with two true clusters and 1000 points, using α = 0.0001. The critical value for the Anderson-Darling test at this confidence level is 1.8692. Starting with one center, after one iteration of G-means we have two centers, and the A²* statistic is 38.103. This is much larger than the critical value, so we reject H0 and accept the split. On the next iteration we split each new center and repeat the statistical test. The A²* values for the two splits are 0.386 and 0.496, both well below the critical value, so we accept H0 for both tests and discard these splits. Thus G-means gives a final answer of k = 2.

Figure 2: An example of running G-means for three iterations on a 2-dimensional dataset with two true clusters and 1000 points. Starting with one center (left plot), G-means splits into two centers (middle). The test for normality is significant, so G-means rejects H0 and keeps the split. After splitting each center again (right), the test values are not significant, so G-means accepts H0 for both tests and does not accept these splits. The middle plot is the G-means answer. See the text for further details.

2.3 Statistical power

Figure 3 compares the power of the Anderson-Darling test with that of the BIC; lower is better in both plots. We run 1000 tests for each data point plotted. In the left plot, for each test we generate n datapoints from a single true Gaussian distribution, and plot the frequency with which BIC and G-means choose k = 2 rather than k = 1 (i.e. commit a Type I error). BIC tends to overfit by choosing too many centers when the data is not strictly spherical, while G-means does not. This is consistent with the tests on real-world data in the next section. While G-means commits more Type II errors when n is small, this prevents it from overfitting the data.

Figure 3: A comparison of the power of the Anderson-Darling test versus the BIC. For the AD test we fix the significance level (α = 0.0001), while the BIC's significance level depends on n. The left plot shows the probability of incorrectly splitting (Type I error) one true 2-d cluster that is 5% elliptical. The right plot shows the probability of incorrectly not splitting two true clusters separated by 5σ (Type II error). Both plots are functions of n, and both show that the BIC overfits (splits clusters) when n is small.

The BIC can be considered a likelihood ratio test, but with a significance level that cannot be fixed: the level varies depending on n and ∆k (the change in the number of model parameters between the two models). As n or ∆k decreases, the significance level increases (the BIC becomes weaker as a statistical test) [10]. Figure 3 shows this effect for varying n. In [11] the authors show that penalty-based methods require problem-specific tuning and do not generalize as well as other methods, such as cross-validation.

3 Experiments

Table 1 shows the results from running G-means and X-means on many large synthetic datasets. On synthetic datasets with spherically distributed clusters, G-means and X-means do equally well at finding the correct k and maximizing the BIC statistic, so we do not show these results here. Most real-world data is not spherical, however. The synthetic datasets used here each have 5000 datapoints in d = 2/8/32 dimensions. The true ks are 5, 20, and 80. For each synthetic dataset type, we generate 30 datasets, with the true center means chosen uniformly at random from the unit hypercube, and σ chosen so that no two clusters are closer than 3σ apart.

Table 1: Results for many synthetic datasets. We report distortion relative to the optimum distortion for the correct clustering (closer to one is better); time is reported relative to k-means run with the correct k. For BIC, larger values are better, but it is clear that finding the correct clustering does not always coincide with finding a larger BIC. Items with a star are where X-means always chose the largest number of centers we allowed.

dataset          d   method    k found       distortion(× optimal)   BIC(×10^4)      time(× k-means)
synthetic k=5    2   G-means     9.1±  9.9    0.89±  0.23             -0.19± 2.70     13.2
                     X-means    18.1±  3.2    0.37±  0.12              0.70± 0.93      2.8
synthetic k=20   2   G-means    20.1±  0.6    0.99±  0.01              0.21± 0.18      2.1
                     X-means    70.5± 11.6    9.45± 28.02             14.83± 3.50      1.2
synthetic k=80   2   G-means    80.0±  0.2    1.00±  0.01              1.84± 0.12      2.2
                     X-means   171.7± 23.7   48.49± 70.04             40.16± 6.59      1.8
synthetic k=5    8   G-means     5.0±  0.0    1.00±  0.00             -0.74± 0.16      4.6
                     X-means   *20.0±  0.0    0.47±  0.03             -2.28± 0.20     11.0
synthetic k=20   8   G-means    20.0±  0.1    0.99±  0.00             -0.18± 0.17      2.6
                     X-means   *80.0±  0.0    0.47±  0.01             14.36± 0.21      4.0
synthetic k=80   8   G-means    80.2±  0.5    0.99±  0.00              1.45± 0.20      2.9
                     X-means   229.2± 36.8    0.57±  0.06             52.28± 9.26      6.5
synthetic k=5   32   G-means     5.0±  0.0    1.00±  0.00             -3.36± 0.21      4.4
                     X-means   *20.0±  0.0    0.76±  0.00            -27.92± 0.22     29.9
synthetic k=20  32   G-means    20.0±  0.0    1.00±  0.00             -2.73± 0.22      2.3
                     X-means   *80.0±  0.0    0.76±  0.01            -11.13± 0.23     21.2
synthetic k=80  32   G-means    80.0±  0.0    1.00±  0.00             -1.10± 0.16      2.8
                     X-means   171.5± 10.9    0.84±  0.01             11.78± 2.74     53.3

Figure 4: 2-d synthetic dataset with 5 true clusters. On the left, G-means correctly chooses 5 centers and deals well with non-spherical data. On the right, the BIC causes X-means to overfit the data, choosing 20 unevenly distributed clusters.
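The Anderson-Darling normality test at the core of these experiments can be sketched as follows. This is a generic textbook A² computation against a standard normal after standardizing the sample; the small-sample correction factor shown is an assumption on our part and may differ from the exact variant the paper uses, so the sketch illustrates the shape of the computation rather than reproducing the paper's critical values.

```python
import math

def anderson_darling_A2(x):
    """Anderson-Darling A^2 statistic of sample x against N(0,1) after
    standardizing. Sketch only; the small-sample correction factor applied
    at the end is one common variant and is an assumption on our part."""
    n = len(x)
    mu = sum(x) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / (n - 1))
    z = sorted((v - mu) / sd for v in x)

    def Phi(t):
        # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    s = sum((2 * i + 1) * (math.log(Phi(z[i])) + math.log(1.0 - Phi(z[n - 1 - i])))
            for i in range(n))
    A2 = -n - s / n
    return A2 * (1.0 + 4.0 / n - 25.0 / n ** 2)   # assumed correction form
```

On a unimodal Gaussian sample the statistic stays small; on a well-separated bimodal sample (two clusters merged into one center) it grows far beyond any reasonable critical value, which is exactly the signal G-means uses to accept a split.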
Each cluster is also given a transformation to make it non-spherical, by multiplying the data by a randomly chosen scaling and rotation matrix. We run G-means starting with one center, and we allow X-means to search between 2 and 4k centers (where k is the true number of clusters). The G-means algorithm clearly does better at finding the correct k on non-spherical data: its results are closer to the true distortions and the correct values of k. The BIC statistic that X-means uses has been formulated to maximize the likelihood of spherically distributed data, so it overestimates the number of true clusters in non-spherical data. This is especially evident when the number of points per cluster is small, as in the datasets with 80 true clusters. Because of this overestimation, X-means often hits our limit of 4k centers. Figure 4 shows an example of overfitting on a dataset with 5 true clusters: X-means chooses k = 20 while G-means finds all 5 true cluster centers. Also of note is that X-means does not distribute centers evenly among clusters; some clusters receive one center, but others receive many. G-means runs faster than X-means for 8 and 32 dimensions, which we expect, since the kd-tree structures that make X-means fast in low dimensions take time exponential in d, making them slow for more than 8 to 12 dimensions. All our code is written in Matlab; X-means is written in C.

Figure 5: NIST and Pendigits datasets: correspondence between each digit (row) and each cluster (column) found by G-means. G-means did not have the labels, yet it found meaningful clusters corresponding with the labels.

3.1 Discovering true clusters in labeled data

We tested these algorithms on two real-world datasets for handwritten digit recognition: the NIST dataset [12] and the Pendigits dataset [2].
The goal is to cluster the data without knowledge of the labels and measure how well the clustering captures the true labels. Both datasets have 10 true classes (digits 0-9). NIST has 60000 training examples and 784 dimensions (28×28 pixels). We use 6000 randomly chosen examples and reduce the dimension to 50 by random projection (following [3]). The Pendigits dataset has 7984 examples and 16 dimensions; we did not change the data in any way. We cluster each dataset with G-means and X-means, and measure performance by comparing the cluster labels Lc with the true labels Lt. We define the partition quality (PQ) as

pq = ( Σ_{i=1..kt} Σ_{j=1..kc} p(i,j)² ) / ( Σ_{i=1..kt} p(i)² ),

where kt is the true number of classes and kc is the number of clusters found by the algorithm. This metric is maximized when Lc induces the same partition of the data as Lt; in other words, when all points in each cluster have the same true label and the estimated k is the true k. The p(i,j) term is the frequency-based probability that a datapoint will be labeled i by Lt and j by Lc. The quality is normalized by the sum of squared true-class probabilities. This statistic is related to the Rand statistic for comparing partitions [8]. For the NIST dataset, G-means finds 31 clusters in 30 seconds with a PQ score of 0.177. X-means finds 715 clusters in 4149 seconds, and 369 of these clusters contain only one point, indicating an overestimation problem with the BIC; X-means receives a PQ score of 0.024. For the Pendigits dataset, G-means finds 69 clusters in 30 seconds with a PQ score of 0.196; X-means finds 235 clusters in 287 seconds with a PQ score of 0.057. Figure 5 shows Hinton diagrams of the G-means clusterings of both datasets, showing that G-means succeeds at identifying the true clusters concisely, without the aid of the labels. The confusions between different digits in the NIST dataset (seen in the off-diagonal elements) are common for other researchers using more sophisticated techniques; see [3].
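The PQ metric above can be computed directly from label counts. A minimal sketch (our own code; the function name is hypothetical):

```python
from collections import Counter

def partition_quality(true_labels, cluster_labels):
    """PQ = sum_ij p(i,j)^2 / sum_i p(i)^2 with frequency-based probabilities,
    following the definition in the text. Sketch, not the authors' code."""
    n = len(true_labels)
    joint = Counter(zip(true_labels, cluster_labels))   # counts of (Lt, Lc) pairs
    marginal = Counter(true_labels)                     # counts of true classes
    num = sum((c / n) ** 2 for c in joint.values())
    den = sum((c / n) ** 2 for c in marginal.values())
    return num / den
```

A perfect clustering (one cluster per true class, up to relabeling) scores 1; splitting a true class across clusters lowers the score.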
4 Discussion and conclusions

We have introduced the new G-means algorithm for learning k, based on a statistical test for determining whether datapoints are a random sample from a Gaussian distribution with arbitrary dimension and covariance matrix. The splitting procedure uses dimension reduction and a powerful test for Gaussian fit. G-means uses this statistical test as a wrapper around k-means to discover the number of clusters automatically. The only parameter supplied to the algorithm is the significance level of the statistical test, which can easily be set in a standard way. The G-means algorithm takes linear time and space (plus the cost of the splitting heuristic and test) in the number of datapoints and the dimension, since k-means is itself linear in time and space. Empirically, G-means finds the correct number of clusters and the locations of genuine cluster centers, and it remains effective in moderately high dimensions.

Clustering in high dimensions has been an open problem for many years. Recent research has shown that it may be preferable to use dimensionality-reduction techniques before clustering, and then use a low-dimensional clustering algorithm such as k-means, rather than clustering in the high dimension directly. In [3] the author shows that a simple, inexpensive linear projection preserves many of the properties of the data (such as cluster distances) while making it easier to find the clusters. Thus there is a need for good-quality, fast clustering algorithms for low-dimensional data; our work is a step in this direction. Additionally, recent image segmentation algorithms such as normalized cut [16, 13] are based on eigenvector computations on distance matrices. These "spectral" clustering algorithms still use k-means as a post-processing step to find the actual segmentation, and they require k to be specified. Thus we expect G-means to be useful in combination with spectral clustering.
References

[1] Horst Bischof, Aleš Leonardis, and Alexander Selb. MDL principle for robust vector quantisation. Pattern Analysis and Applications, 2:59-72, 1999.
[2] C.L. Blake and C.J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/∼mlearn/MLRepository.html.
[3] Sanjoy Dasgupta. Experiments with random projection. In Uncertainty in Artificial Intelligence: Proceedings of the Sixteenth Conference (UAI-2000), pages 143-151, San Francisco, CA, 2000. Morgan Kaufmann Publishers.
[4] Gianna M. Del Corso. Estimating an eigenvector by the power method with a random start. SIAM Journal on Matrix Analysis and Applications, 18(4):913-937, 1997.
[5] Chris Ding, Xiaofeng He, Hongyuan Zha, and Horst Simon. Adaptive dimension reduction for clustering high dimensional data. In Proceedings of the 2nd IEEE International Conference on Data Mining, 2002.
[6] Fredrik Farnstrom, James Lewis, and Charles Elkan. Scalability for clustering algorithms revisited. SIGKDD Explorations, 2(1):51-57, 2000.
[7] Peter J. Huber. Projection pursuit. Annals of Statistics, 13(2):435-475, June 1985.
[8] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2:193-218, 1985.
[9] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: a review. ACM Computing Surveys, 31(3):264-323, 1999.
[10] Robert E. Kass and Larry Wasserman. A reference Bayesian test for nested hypotheses and its relationship to the Schwarz criterion. Journal of the American Statistical Association, 90(431):928-934, 1995.
[11] Michael J. Kearns, Yishay Mansour, Andrew Y. Ng, and Dana Ron. An experimental and theoretical comparison of model selection methods. In Computational Learning Theory (COLT), pages 21-30, 1995.
[12] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[13] Andrew Ng, Michael Jordan, and Yair Weiss.
On spectral clustering: Analysis and an algorithm. Neural Information Processing Systems, 14, 2002.
[14] Dan Pelleg and Andrew Moore. X-means: Extending K-means with efficient estimation of the number of clusters. In Proceedings of the 17th International Conf. on Machine Learning, pages 727-734. Morgan Kaufmann, San Francisco, CA, 2000.
[15] Peter Sand and Andrew Moore. Repairing faulty mixture models using density estimation. In Proceedings of the 18th International Conf. on Machine Learning. Morgan Kaufmann, San Francisco, CA, 2001.
[16] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[17] M. A. Stephens. EDF statistics for goodness of fit and some comparisons. Journal of the American Statistical Association, 69(347):730-737, September 1974.
2003
Learning a Rare Event Detection Cascade by Direct Feature Selection

Jianxin Wu, James M. Rehg, Matthew D. Mullin
College of Computing and GVU Center, Georgia Institute of Technology
{wujx, rehg, mdmullin}@cc.gatech.edu

Abstract

Face detection is a canonical example of a rare event detection problem, in which target patterns occur with much lower frequency than nontargets. Out of millions of face-sized windows in an input image, for example, only a few will typically contain a face. Viola and Jones recently proposed a cascade architecture for face detection which successfully addresses the rare event nature of the task. A central part of their method is a feature selection algorithm based on AdaBoost. We present a novel cascade learning algorithm based on forward feature selection which is two orders of magnitude faster than the Viola-Jones approach and yields classifiers of equivalent quality. This faster method could be used for more demanding classification tasks, such as on-line learning.

1 Introduction

Fast and robust face detection is an important computer vision problem with applications to surveillance, multimedia processing, and HCI. Face detection is often formulated as a search and classification problem: a search strategy generates potential image regions and a classifier determines whether or not they contain a face. A standard approach is brute-force search, in which the image is scanned in raster order and every n × n window of pixels over multiple image scales is classified [1, 2, 3]. When a brute-force search strategy is used, face detection is a rare event detection problem, in the sense that among the millions of image regions, only very few contain faces. The resulting classifier design problem is very challenging: the detection rate must be very high in order to avoid missing any rare events. At the same time, the false positive rate must be very low (e.g. 10^-6) in order to dodge the flood of non-events.
From the computational standpoint, huge speed-ups are possible if the sparsity of faces in the input set can be exploited. In their seminal work [4], Viola and Jones proposed a face detection method based on a cascade of classifiers, illustrated in figure 1. Each classifier node is designed to reject a portion of the nonface regions and pass all of the faces. Most image regions are rejected quickly, resulting in very fast face detection performance. There are three elements in the Viola-Jones framework: the cascade architecture, a rich over-complete set of rectangle features, and an algorithm based on AdaBoost for constructing ensembles of rectangle features in each classifier node. Much of the recent work on face detection following Viola-Jones has explored alternative boosting algorithms such as FloatBoost [5], GentleBoost [6], and Asymmetric AdaBoost [7] (see [8] for a related method).

Figure 1: Illustration of the cascade architecture with n nodes. Each node Hi has detection rate di and false positive rate fi; regions rejected by any node are labeled non-faces, and regions that pass all n nodes are labeled faces.

This paper is motivated by the observation that the AdaBoost feature selection method is an indirect way to meet the learning goals of the cascade. It is also an expensive algorithm: for example, weeks of computation are required to produce the final cascade in [4]. In this paper we present a new cascade learning algorithm which uses direct forward feature selection to construct the ensemble classifiers in each node of the cascade. We demonstrate empirically that our algorithm is two orders of magnitude faster than the Viola-Jones algorithm and produces cascades which are indistinguishable in face detection performance. This faster method could be used for more demanding classification tasks, such as on-line learning or searching the space of classifier structures. Our results also suggest that a large portion of the effectiveness of the Viola-Jones detector should be attributed to the cascade design and the choice of the feature set.
2 Cascade Architecture for Rare Event Detection

The learning goal for the cascade in figure 1 is the construction of a set of classifiers {Hi}, i = 1, ..., n. Each Hi is required to have a very high detection rate, but only a moderate false positive rate (e.g. 50%). An input image region is passed from Hi to Hi+1 if it is classified as a face; otherwise it is rejected. If the {Hi} can be constructed to produce independent errors, then the overall detection rate and false positive rate for the cascade are d = ∏_{i=1..n} di and f = ∏_{i=1..n} fi, respectively. In a hypothetical example, a 20-node cascade with di = 0.999 and fi = 0.5 would have d = 0.98 and f = 9.6 × 10^-7.

As in [4], the overall cascade learning method in this paper is a stage-wise, greedy feature selection process. Nodes are constructed sequentially, starting with H1. Within a node Hi, features are added sequentially to form an ensemble. Following Viola-Jones, the training dataset is manipulated between nodes to encourage independent errors. Each node Hi is trained on all of the positive examples and a subset of the negative examples. In moving from node Hi to Hi+1 during training, negative examples that were classified successfully by the cascade are discarded and replaced with new ones, using the standard bootstrapping approach from [1]. The difference between our method and Viola-Jones is the feature selection algorithm for the individual nodes. The cascade architecture in figure 1 should be suitable for other rare event problems, such as network intrusion detection, in which an attack constitutes a few packets out of tens of millions; recent work in that community has also explored a cascade approach [9].

For each node in the cascade architecture, given a training set {xi, yi}, the learning objective is to select a set of weak classifiers {ht} from a total set of F features and combine them into an ensemble H with a high detection rate d and a moderate false positive rate f.
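The product-of-rates arithmetic above can be checked in a few lines; 0.999^20 ≈ 0.980 and 0.5^20 ≈ 9.54 × 10^-7 (which the paper reports rounded as 9.6 × 10^-7). The helper name is ours.

```python
def cascade_rates(per_node_d, per_node_f, n_nodes):
    """Overall detection and false positive rates of an n-node cascade,
    assuming each node's errors are independent (sketch; name is ours)."""
    return per_node_d ** n_nodes, per_node_f ** n_nodes

# The hypothetical 20-node example from the text:
d, f = cascade_rates(0.999, 0.5, 20)   # d ~ 0.980, f just under 1e-6
```

This illustrates the central asymmetry of cascade design: each node can be a weak filter (fi = 0.5), yet the cascade's overall false positive rate shrinks geometrically while the detection rate degrades only slowly.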
Figure 2: Diagram for training one node in the cascade architecture: (a) the Viola-Jones method, and (b) the proposed method. F and D are the false positive rate and detection rate goals, respectively. In (a), the weak classifiers are trained, the feature with minimum weighted error is added to the ensemble, and the ensemble threshold is adjusted to meet the detection rate goal; this repeats until the false positive goal F is met. In (b), the weak classifiers are trained once; then, until both goals are met, a feature is added to maximize the ensemble's detection rate while d ≤ D, and to minimize its false positive rate otherwise.

A weak classifier is formed from a rectangle feature by applying the feature to the input pattern and thresholding the result.¹ Training a weak classifier corresponds to setting its threshold. In [4], an algorithm based on AdaBoost trains weak classifiers, adds them to the ensemble, and computes the ensemble weights. AdaBoost [10] is an iterative method for obtaining an ensemble of weak classifiers by evolving a distribution of weights, Dt, over the training data. In the Viola-Jones approach, each iteration t of boosting adds the classifier ht with the lowest weighted error to the ensemble. After T rounds of boosting, the decision of the ensemble is defined as

H(x) = 1 if Σ_{t=1..T} αt ht(x) ≥ θ, and 0 otherwise,

where the αt are the standard AdaBoost ensemble weights and θ is the threshold of the ensemble. This threshold is adjusted to meet the detection rate goal. More features are then added if necessary to meet the false positive rate goal. The flowchart for the algorithm is given in figure 2(a).

The process of sequentially adding features which individually minimize the weighted error is at best an indirect way to meet the learning goals for the ensemble. For example, the false positive goal is relatively easy to meet, compared to the detection rate goal, which is near 100%. As a consequence, the threshold θ produced by AdaBoost must be discarded in favor of a threshold computed directly from the ensemble performance.
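The threshold-adjustment step can be illustrated as follows. This is our own hypothetical helper, not the authors' code: given the ensemble scores Σ αt ht(x) on the positive (face) training examples, it returns the largest θ that still accepts at least the required fraction of them.

```python
import numpy as np

def adjust_threshold(scores_pos, d_goal):
    """Pick the largest theta such that the fraction of positive examples
    whose ensemble score is >= theta is at least d_goal.
    Hypothetical helper illustrating the threshold adjustment step."""
    s = np.sort(np.asarray(scores_pos))[::-1]   # scores of faces, descending
    k = int(np.ceil(d_goal * len(s)))           # need at least k faces accepted
    return s[k - 1]
```

Choosing the largest admissible θ keeps the node's detection rate at the goal while rejecting as many negatives as the scores allow.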
Unfortunately, the weight distribution maintained by AdaBoost requires that the complete set of weak classifiers be retrained in each iteration. This is a computationally demanding task which sits in the inner loop of the feature selection algorithm. Beyond these concerns is a more basic question about the cascade learning problem: what is the role of boosting in forming an effective ensemble? Our hypothesis is that the overall success of the method depends upon having a sufficiently rich feature set, which defines the space of possible weak classifiers. From this perspective, a failure mode of the algorithm would be the inability to find sufficient features to meet the learning goal. The question then is to what extent boosting helps to avoid this problem. In the following section we describe a simple, direct feature selection algorithm that sheds some light on these issues.

3 Direct Feature Selection Method

We propose a new cascade learning algorithm based on forward feature selection [11]. Pseudo-code of the algorithm for building an ensemble classifier for a single node is given in table 1, and the corresponding flowchart is illustrated in figure 2(b).

¹ A feature and its corresponding classifier will be used interchangeably.

Table 1: The direct feature selection method for building an ensemble classifier.

1. Given a training set. Given d, the minimum detection rate, and f, the maximum false positive rate.
2. For every feature j, train a weak classifier hj whose false positive rate is f.
3. Initialize the ensemble H to the empty set, H ← ∅. Set t ← 0, d0 = 0.0, f0 = 1.0.
4. While dt < d or ft > f:
   (a) If dt < d, find the feature k such that adding it to H gives the new ensemble the largest detection rate dt+1.
   (b) Otherwise, find the feature k such that adding it to H gives the new ensemble the smallest false positive rate ft+1.
   (c) t ← t + 1, H ← H ∪ {hk}.
5. The decision of the ensemble classifier is a majority vote of the weak classifiers in H: H(x) = 1 if Σ_{hj∈H} hj(x) ≥ θ, and 0 otherwise, where θ = T/2. Decrease θ if necessary.

The first step in our algorithm is to train each of the weak classifiers to meet the false positive rate goal for the ensemble. The output of each weak classifier on each training data item is collected in a large lookup table. The core algorithm is an exhaustive search over possible classifiers: in each iteration, we consider adding each possible classifier to the ensemble and select the one which makes the largest improvement to the ensemble performance. The selection criterion directly maximizes the learning objective for the node. The lookup table, in conjunction with the majority vote rule, makes this feature search extremely fast.

The resulting algorithm is roughly 100 times faster than Viola-Jones. The key difference is that we train the weak classifiers only once per node, while in the Viola-Jones method they are trained once for each feature in the cascade. Let T be the training time for the weak classifiers² and F be the number of features in the final cascade. The learning time for Viola-Jones is roughly FT, which in [4] was on the order of weeks. Let N be the number of nodes in the cascade. Empirically, the learning time for our method is 2NT, which is on the order of hours in our experiments. For the cascade of 32 nodes with 4297 features in [4], the difference in learning time is dramatic.

The difficulty of the classifier design problem increases with the depth of the cascade, as the non-face patterns selected by bootstrapping become more challenging. A large number of features may be required to achieve the learning objectives when majority vote is used. In this case, a weighted ensemble could be advantageous. Once feature selection has been performed, a variant of the Viola-Jones algorithm can be used to obtain a weighted ensemble. Pseudo-code for this weight setting method is given in table 2.
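The selection loop of table 1, driven by a precomputed lookup table of weak-classifier outputs, can be sketched as follows. This is our own simplified code, not the authors' implementation; all names are hypothetical, and the per-feature false-positive pre-training of step 2 is assumed to have produced the 0/1 lookup table already.

```python
import numpy as np

def select_features(H_out, y, d_goal, f_goal, max_feats=200):
    """Greedy forward selection (table 1, steps 3-4). Sketch, names are ours.
    H_out: (n_features, n_examples) 0/1 lookup table of weak-classifier outputs.
    y: 0/1 labels (1 = face). Ensemble decision is a majority vote with
    threshold theta = t/2 after t features have been added."""
    pos, neg = (y == 1), (y == 0)
    chosen, votes = [], np.zeros(H_out.shape[1])
    d_t, f_t = 0.0, 1.0
    while (d_t < d_goal or f_t > f_goal) and len(chosen) < max_feats:
        cand = votes + H_out                  # vote totals if each feature were added
        theta = (len(chosen) + 1) / 2.0       # majority-vote threshold
        pred = cand >= theta                  # candidate ensemble decisions
        d_all = pred[:, pos].mean(axis=1)     # detection rate per candidate
        f_all = pred[:, neg].mean(axis=1)     # false positive rate per candidate
        k = int(np.argmax(d_all) if d_t < d_goal else np.argmin(f_all))
        chosen.append(k)
        votes = cand[k]
        d_t, f_t = d_all[k], f_all[k]
    return chosen, d_t, f_t
```

Because the lookup table is fixed, each candidate evaluation is a vectorized comparison rather than a retraining pass, which is the source of the speed-up over boosting-based selection.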
4 Experimental Results

We conducted three controlled experiments to compare our feature selection method to the Viola-Jones algorithm. The procedures and data sets were the same for all of the experiments.

² In our experiments, T is about 10 minutes.

Table 2: Weight setting algorithm after feature selection.

1. Given a training set, maintain a distribution D over it.
2. Select N features using the algorithm in table 1. These features form a set F.
3. Initialize the ensemble classifier to the empty set, H ← ∅.
4. For i = 1 : N,
   (a) Select the feature k from F that has the smallest error ε on the training set, weighted over the distribution D.
   (b) Update the distribution D according to the AdaBoost algorithm, as in [4].
   (c) Add the feature k and its associated weight αk = −log(ε/(1−ε)) to H, and remove the feature k from F.
5. The decision of the ensemble classifier is formed by a weighted average of the weak classifiers in H. Decrease the threshold θ until the ensemble reaches the detection rate goal.

Our training set contained 5000 example face images and 5000 initial non-face examples, all of size 24×24. We used approximately 2284 million non-face patches to bootstrap the non-face examples between nodes. We used 32466 features sampled uniformly from the entire set of rectangle features. For testing purposes we used the MIT+CMU frontal face test set [2] in all experiments. Although many researchers use automatic procedures to evaluate their algorithms, we decided to manually count the missed faces and false positives.³ When scanning a test image at different scales, the image is re-scaled repeatedly by a factor of 1.25. Post-processing is similar to [4].

In the first experiment we constructed three face detection cascades. One cascade used the direct feature selection method from table 1. The second cascade used the weight setting algorithm in table 2. The training algorithms stopped when they exhausted the set of non-face training examples.
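The weight setting procedure of table 2 can be sketched as follows. This is our own simplified code (names hypothetical): it restricts AdaBoost-style reweighting to the already-selected feature set, picking the lowest-weighted-error feature each round and assigning it αk = −log(ε/(1−ε)).

```python
import numpy as np

def set_weights(F_out, y):
    """Assign AdaBoost-style weights to an already-selected feature set
    (table 2, steps 3-4). Sketch only; names are ours.
    F_out: (n_selected, n_examples) 0/1 outputs of the selected weak classifiers.
    Returns (feature index, alpha) pairs in the order they were weighted."""
    n = len(y)
    D = np.full(n, 1.0 / n)                 # distribution over training examples
    remaining = list(range(F_out.shape[0]))
    weighted = []
    for _ in range(F_out.shape[0]):
        errs = [((F_out[k] != y) * D).sum() for k in remaining]
        i = int(np.argmin(errs))            # lowest weighted error among remaining
        eps = max(min(errs[i], 1 - 1e-12), 1e-12)   # clip to avoid log(0)
        k = remaining.pop(i)
        alpha = -np.log(eps / (1.0 - eps))  # alpha_k = -log(eps / (1 - eps))
        miss = (F_out[k] != y)
        D = D * np.exp(alpha * miss)        # upweight the examples k missed
        D /= D.sum()
        weighted.append((k, alpha))
    return weighted
```

Since selection has already happened, this loop only orders and weights the chosen features; it never revisits the full feature pool, which keeps the extra cost small.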
The third cascade used our implementation of the Viola-Jones algorithm. The three cascades had 38, 37, and 28 nodes, respectively. The third cascade was stopped after 28 nodes because the AdaBoost-based training algorithm could not meet the learning goal: with 200 features, when the detection rate is 99.9%, the AdaBoost ensemble's false positive rate is larger than 97%, and adding several hundred additional features did not change the outcome. ROC curves for cascades using our method and the Viola-Jones method are depicted in figure 3(a). We constructed the ROC curves by removing nodes from the cascade to generate points with increasing detection and false positive rates. These curves demonstrate that the test performance of our method is indistinguishable from that of the Viola-Jones method.

The second experiment explored the ability of the rectangle feature set to meet the detection rate goal for the ensemble on a difficult node. Figure 3(b) shows the false positive and detection rates for the ensemble (i.e., one node in the cascade architecture) as a function of the number of features added to the ensemble. The training set used was the bootstrapped training set for the 19th node in the cascade trained by the Viola-Jones method. Even for this difficult learning task, the algorithm can improve the detection rate from about 0.7 to 0.9 using only 13 features, without any significant increase in the false positive rate. This suggests that the rectangle feature set is sufficiently rich. Our hypothesis is that the strength of this feature set in the context of the cascade architecture is the key to the success of the Viola-Jones approach.

³ We found that the criterion for automatically finding detection errors in [6] was too loose. This criterion yielded higher detection rates and lower false positive rates than manual counting.

Figure 3: Experimental results. (a) ROC curves of the proposed method and the Viola-Jones method; (b) trend of the detection and false positive rates as more features are combined in one node.

We conducted a third experiment in which we focused on learning one node in the cascade architecture. Figure 4 shows ROC curves of the Viola-Jones, direct feature selection, and weight setting methods for one node of the cascade. The training set used in figure 4 was the same as in the second experiment. Unlike the ROC curves in figure 3(a), these curves show the performance of the node in isolation, using a validation set. They reinforce the similarity in performance between our method and Viola-Jones. In the region of interest (e.g. detection rate > 99%), our algorithms yield better ROC curve performance than the Viola-Jones method. Although figure 4 and figure 3(b) only show curves for one specific training set, the same patterns were found with other bootstrapped training sets in our experiments.

5 Related Work

A survey of face detection methods can be found in [12]. We restrict our attention here to frontal face detection algorithms related to the cascade idea. The neural network-based detector of Rowley et al. [2] incorporated a manually-designed two-node cascade. Other cascade structures have been constructed for SVM classifiers. In [13], a set of reduced set vectors is calculated from the support vectors. Each reduced set vector can be interpreted as a face or anti-face template. Since these reduced set vectors are applied sequentially to the input pattern, they can be viewed as nodes in a cascade.
An alternative cascade framework for SVM classifiers is proposed by Heisele et al. in [14]. Based on different assumptions, Keren et al. proposed another object detection method which consists of a series of antiface templates [15]. Carmichael and Hebert propose a hierarchical strategy for detecting chairs at different orientations and scales [16].

Following [4], several authors have developed alternative boosting algorithms for feature selection. Li et al. incorporated floating search into the AdaBoost algorithm (FloatBoost) and proposed some new features for detecting multi-view faces [5]. Lienhart et al. [6] experimentally evaluated different boosting algorithms and different weak classifiers; their results showed that Gentle AdaBoost and CART decision trees had the best performance. In an extension of their original work [7], Viola and Jones proposed an asymmetric AdaBoost algorithm in which false negatives are penalized more than false positives. This is an interesting attempt to incorporate the rare event observation more explicitly into their learning algorithm (see [8] for a related method).

Figure 4: Single-node ROC curves on a validation set.

All of these methods explore variations in AdaBoost-based feature selection, and their training times are similar to the original Viola-Jones algorithm. While all of the above methods adopt a brute-force search strategy for generating input regions, there has been some interesting work on generating candidate face hypotheses from more general interest operators; two examples are [17, 18].

6 Conclusions

Face detection is a canonical example of a rare event detection task, in which target patterns occur with much lower frequency than non-targets.
It results in a challenging classifier design problem: The detection rate must be very high in order to avoid missing any rare events and the false positive rate must be very low to dodge the flood of non-events. A cascade classifier architecture is well-suited to rare event detection. The Viola-Jones face detection framework consists of a cascade architecture, a rich overcomplete feature set, and a learning algorithm based on AdaBoost. We have demonstrated that a simpler direct algorithm based on forward feature selection can produce cascades of similar quality with two orders of magnitude less computation. Our algorithm directly optimizes the learning criteria for the ensemble, while the AdaBoost-based method is more indirect. This is because the learning goal is a highly-skewed tradeoff between detection rate and false positive rate which does not fit naturally into the weighted error framework of AdaBoost. Our experiments suggest that the feature set and cascade structure in the Viola-Jones framework are the key elements in the success of the method. Three issues that we plan to explore in future work are: the necessary properties for feature sets, global feature selection methods, and the incorporation of search into the cascade framework. The rectangle feature set seems particularly well-suited for face detection. What general properties must a feature set possess to be successful in the cascade framework? In other rare event detection tasks where a large set of diverse features is not naturally available, methods to create such a feature set may be useful (e.g. the random subspace method proposed by Ho [19]). In our current algorithm, both nodes and features are added sequentially and greedily to the cascade. More global techniques for forming ensembles could yield better results. Finally, the current detection method relies on a brute-force search strategy for generating candidate regions. 
We plan to explore the cascade architecture in conjunction with more general interest operators, such as those defined in [18, 20]. The authors are grateful to Mike Jones and Paul Viola for providing their training data, along with many valuable discussions. This work was supported by NSF grant IIS-0133779 and the Mitsubishi Electric Research Laboratory.

References

[1] K. Sung and T. Poggio. Example-based learning for view-based human face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(1):39–51, 1998. [2] H. A. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(1):23–38, 1998. [3] Henry Schneiderman and Takeo Kanade. A statistical model for 3d object detection applied to faces and cars. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2000. [4] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proc. CVPR, pages 511–518, 2001. [5] S.Z. Li, Z.Q. Zhang, Harry Shum, and H.J. Zhang. FloatBoost learning for classification. In S. Becker, S. Thrun, and K. Obermayer, editors, NIPS 15. MIT Press, December 2002. [6] R. Lienhart, A. Kuranov, and V. Pisarevsky. Empirical analysis of detection cascades of boosted classifiers for rapid object detection. Technical report, MRL, Intel Labs, 2002. [7] P. Viola and M. Jones. Fast and robust classification using asymmetric AdaBoost and a detector cascade. In NIPS 14, 2002. [8] G. J. Karakoulas and J. Shawe-Taylor. Optimizing classifiers for imbalanced training sets. In NIPS 11, pages 253–259, 1999. [9] W. Fan, W. Lee, S. J. Stolfo, and M. Miller. A multiple model cost-sensitive approach for intrusion detection. In Proc. 11th ECML, 2000. [10] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651–1686, 1998. [11] A. R. Webb.
Statistical Pattern Recognition. Oxford University Press, New York, 1999. [12] M.-H. Yang, D. J. Kriegman, and N. Ahuja. Detecting faces in images: a survey. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(1):34–58, 2002. [13] S. Romdhani, P. Torr, B. Schoelkopf, and A. Blake. Computationally efficient face detection. In Proc. Intl. Conf. Computer Vision, pages 695–700, 2001. [14] B. Heisele, T. Serre, S. Mukherjee, and T. Poggio. Feature reduction and hierarchy of classifiers for fast object detection in video images. In Proc. CVPR, volume 2, pages 18–24, 2001. [15] D. Keren, M. Osadchy, and C. Gotsman. Antifaces: A novel, fast method for image detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(7):747–761, 2001. [16] O. Carmichael and M. Hebert. Object recognition by a cascade of edge probes. In British Machine Vision Conference, volume 1, pages 103–112, September 2002. [17] T. Leung, M. Burl, and P. Perona. Finding faces in cluttered scenes using random labeled graph matching. In Proc. Intl. Conf. Computer Vision, pages 637–644, 1995. [18] S. Lazebnik, C. Schmid, and J. Ponce. Sparse texture representation using affine-invariant neighborhoods. In Proc. CVPR, 2003. [19] T. K. Ho. The random subspace method for constructing decision forests. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(8):832–844, 1998. [20] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(4):509–522, 2002.
Non-linear CCA and PCA by Alignment of Local Models Jakob J. Verbeek†, Sam T. Roweis‡, and Nikos Vlassis† † Informatics Institute, University of Amsterdam ‡ Department of Computer Science, University of Toronto

Abstract

We propose a non-linear Canonical Correlation Analysis (CCA) method which works by coordinating or aligning mixtures of linear models. In the same way that CCA extends the idea of PCA, our work extends recent methods for non-linear dimensionality reduction to the case where multiple embeddings of the same underlying low dimensional coordinates are observed, each lying on a different high dimensional manifold. We also show that a special case of our method, when applied to only a single manifold, reduces to the Laplacian Eigenmaps algorithm. As with previous alignment schemes, once the mixture models have been estimated, all of the parameters of our model can be estimated in closed form without local optima in the learning. Experimental results illustrate the viability of the approach as a non-linear extension of CCA.

1 Introduction

In this paper, we are interested in data that lies on or close to a low dimensional manifold embedded, possibly non-linearly, in a Euclidean space of much higher dimension. Data of this kind is often generated when our observations are very high dimensional but the number of underlying degrees of freedom is small. A typical example is images of an object under different conditions (e.g. pose and lighting). A simpler example is given in Fig. 1, where we have data in $\mathbb{R}^3$ which lies on a two dimensional manifold. We want to recover the structure of the data manifold, so that we can 'unroll' the data manifold and work with the data expressed in the underlying 'latent coordinates', i.e. coordinates on the manifold.
Learning low dimensional latent representations may be desirable for different reasons, such as compression for storage and communication, visualization of high dimensional data, or as preprocessing for further data analysis or prediction tasks. Recent work on unsupervised nonlinear feature extraction has pursued several complementary directions. Various nonparametric spectral methods, such as Isomap[1], LLE[2], Kernel PCA[3] and Laplacian Eigenmaps[4], have been proposed which reduce the dimensionality of a fixed training set in a way that maximally preserves certain inter-point relationships, but these methods do not generally provide functional mappings between the high and low dimensional spaces that are valid both on and off the training data. In this paper, we consider a method to integrate several local feature extractors into a single global representation, similar to the approaches of [5, 6, 7, 8]. These methods, as well as ours, deliver after training a functional mapping which can be used to convert previously unseen high dimensional observations into their low dimensional global coordinates. Like most of the above algorithms, our method performs non-linear feature extraction by minimizing a convex objective function whose critical points can be characterized as eigenvectors of some matrix. These algorithms are generally simple and efficient; one needs only to construct a matrix based on local feature analysis of the training data and then compute its largest or smallest eigenvectors using standard numerical methods. In contrast, methods like generative topographic mapping[9] and self-organizing maps[10] are prone to local optima in the objective function. Our method is based on the same intuitions as in earlier work: the idea is to learn a mixture of latent variable density models on the original training data so that each mixture component acts as a local feature extractor.
For example, we may use a mixture of factor analyzers or a mixture of principal component analyzers (PCA). After this mixture has been learned, the local feature extractors are 'coordinated' by finding, for each model, a suitable linear mapping (and offset) from its latent variable space into a single 'global' low-dimensional coordinate system. The local feature extractors together with the coordinating linear maps provide a global non-linear map from the data space to the latent space and back. Learning the mixture is driven by a density signal (we want to place models near the training points), while the post-coordination is driven by the idea that when two different models place significant weight on the same point, they should agree on its mapping into the global space. Our algorithm, developed in the following section, builds upon recent work on coordination methods. As in [6], we use a cross-entropy between a unimodal approximation and the true posterior over global coordinates to encourage agreement. However, we do not attempt to learn the mixture model and the coordination simultaneously, since this causes severe problems with local minima. Instead, as in [7, 8], we fix a specific mixture and then study the computations involved in coordinating its local representations. We extend the latter works as CCA extends PCA: rather than finding a projection of one set of points, we find projections for two sets of corresponding points $\{x_n\}$ and $\{y_n\}$ ($x_n$ corresponding to $y_n$) into a single latent space, such that corresponding points in the two point sets are projected as close together as possible. In this setting we begin by showing, in Section 3, how Laplacian Eigenmaps[4] are a special case of the algorithms presented here when they are applied to only a single manifold. We go on, in Section 4, to extend our algorithm to a setting in which multiple different observation spaces are available, each one related to the same underlying global space but through different nonlinear embeddings.
This naturally gives rise to a nonlinear version of weighted Canonical Correlation Analysis (CCA). We present results of several experiments in the same section and we conclude the paper with a general discussion in Section 5.

2 Non-linear PCA by aligning local feature extractors

Consider a given data set $X = \{x_1, \dots, x_N\}$ and a collection of k local feature extractors; $f_s(x)$ is a vector containing the (zero or more) features produced by model s. Each feature extractor also provides an "activity signal" $a_s(x)$ representing its confidence in modeling the point. We convert these activities into posterior responsibilities using a simple softmax: $p(s|x) = \exp(a_s(x)) / \sum_r \exp(a_r(x))$. If the experts are actually components of a mixture, then setting the activities to the logarithm of the posteriors under the mixture will recover exactly the same posteriors above. Next, we consider the relationship between the given representation of the data and the representation of the data in a global latent space, which we would like to find. Throughout, we will use g to denote latent 'global' coordinates for data. For the unobserved latent coordinate g corresponding to a data point $x_n$ and conditioned on s, we assume the density:

$$p(g|x_n, s) = \mathcal{N}(g;\, \kappa_s + A_s f_s(x_n),\, \sigma^2 I) = \mathcal{N}(g;\, g_{ns},\, \sigma^2 I), \qquad (1)$$

where $\mathcal{N}(g; \mu, \Sigma)$ is a Gaussian distribution on g with mean $\mu$ and covariance $\Sigma$. The mean $g_{ns}$ of $p(g|x_n, s)$ is the sum of the component offset $\kappa_s$ in the latent space and a linear transformation, implemented by $A_s$, of $f_s(x_n)$. From now on we will use homogeneous coordinates and write $L_s = [A_s \;\; \kappa_s]$ and $z_{ns} = [f_s(x_n)^\top \;\; 1]^\top$, and thus $g_{ns} = L_s z_{ns}$. Consider the posterior distribution on latent coordinates given some data:

$$p(g|x) = \sum_s p(s, g|x) = \sum_s p(s|x)\, p(g|x, s). \qquad (2)$$

Given a fixed set of local feature extractors and corresponding activities, we are interested in finding linear maps $L_s$ that give rise to 'consistent' projections of the data in the latent space.
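The quantities above (softmax responsibilities, homogeneous coordinates, and the per-component latent means $g_{ns} = L_s z_{ns}$) can be computed as in the following NumPy sketch; the function names and array layouts are illustrative choices, not part of the paper.

```python
import numpy as np

def responsibilities(activities):
    """Softmax over per-component activities a_s(x): one row p(.|x) per point."""
    a = activities - activities.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(a)
    return e / e.sum(axis=1, keepdims=True)

def project(features, L):
    """Per-component latent means g_ns = L_s z_ns, z_ns = [f_s(x_n); 1].

    features: list of k arrays, each (N, d_s), holding f_s(x_n).
    L:        list of k arrays, each (d, d_s + 1), holding [A_s  kappa_s].
    Returns an (N, k, d) array of latent means.
    """
    G = []
    for f_s, L_s in zip(features, L):
        z = np.hstack([f_s, np.ones((f_s.shape[0], 1))])  # homogeneous coords
        G.append(z @ L_s.T)                               # (N, d)
    return np.stack(G, axis=1)

def posterior_mean(q, G):
    """Expected latent coordinate g_n = sum_s q_ns g_ns."""
    return np.einsum('ns,nsd->nd', q, G)
```

With equal activities the responsibilities are uniform, and the posterior mean is the plain average of the component projections.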
By 'consistent', we mean that the $p(g|x, s)$ are similar for components with large posterior. If the predictions are in perfect agreement for a point $x_n$, then all the $g_{ns}$ are equal and the posterior $p(g|x)$ is Gaussian; in general $p(g|x)$ is a mixture of Gaussians. To measure the consistency, we define the following error function:

$$\Phi(\{L_1, \dots, L_k\}) = \min_{\{Q_1, \dots, Q_N\}} \sum_{n,s} q_{ns}\, D\big(Q_n(g)\, \|\, p(g|x_n, s)\big), \qquad (3)$$

where we use $q_{ns}$ as a shorthand for $p(s|x_n)$, and $Q_n$ is a Gaussian with mean $g_n$ and covariance matrix $\Sigma_n$. The objective sums, for each data point $x_n$ and model s, the Kullback-Leibler divergence D between a single Gaussian $Q_n(g)$ and the component densities $p(g|x, s)$, weighted by the posterior $p(s|x_n)$. It is easy to derive that in order to minimize the objective $\Phi$ w.r.t. $g_n$ and $\Sigma_n$ we obtain:

$$g_n = \sum_s q_{ns}\, g_{ns} \quad \text{and} \quad \Sigma_n = \sigma^2 I, \qquad (4)$$

where I denotes the identity matrix. Skipping some additive and multiplicative constants with respect to the linear maps $L_s$, the objective $\Phi$ then simplifies to:

$$\Phi = \sum_{n,s} q_{ns}\, \| g_n - g_{ns} \|^2 = \frac{1}{2} \sum_{n,s,t} q_{ns}\, q_{nt}\, \| g_{nt} - g_{ns} \|^2 \;\ge\; 0. \qquad (5)$$

The main attraction of this setup is that our objective is a quadratic function of the linear maps $L_s$, as in [7, 8]. Using some extra notation, we obtain a clearer form of the objective as a function of the linear maps. Let:

$$u_n = [q_{n1} z_{n1}^\top \;\dots\; q_{nk} z_{nk}^\top], \quad U = [u_1^\top \;\dots\; u_N^\top]^\top, \quad L = [L_1 \;\dots\; L_k]^\top. \qquad (6)$$

Note that from (4) and (6) we have $g_n = (u_n L)^\top$. The expected projection coordinates can thus be computed as $G = [g_1 \dots g_N]^\top = UL$. We define the block-diagonal matrix D with k blocks given by $D_s = \sum_n q_{ns}\, z_{ns} z_{ns}^\top$. The objective can now be written as:

$$\Phi = \mathrm{Tr}\{ L^\top (D - U^\top U)\, L \}. \qquad (7)$$

The objective function is invariant to translation and rotation of the global latent space, and re-scaling the latent space changes the objective monotonically, c.f. (5). To make solutions unique with respect to translation, rotation and scaling, we impose two constraints:

$$\text{(transl.):} \quad \bar{g} = \sum_n g_n / N = 0, \qquad \text{(rot. + scale):} \quad \Sigma_g = \sum_n (g_n - \bar{g})(g_n - \bar{g})^\top / N = I.$$
The columns of L minimizing $\Phi$ are characterized as the generalized eigenvectors:

$$(D - U^\top U)\, v = \lambda\, U^\top U\, v \;\iff\; D v = (\lambda + 1)\, U^\top U\, v. \qquad (8)$$

The value of the objective function is given by the sum of the corresponding eigenvalues $\lambda$. The smallest eigenvalue is always zero, corresponding to mapping all data into the same latent coordinate. This embedding is uninformative since it is constant; therefore we select the eigenvectors corresponding to the second up to the (d + 1)st smallest eigenvalues to obtain the best embedding in d dimensions. Note that, as mentioned in [7], this framework enables us to use feature extractors that provide different numbers of features.

[Figure 1: Data in $\mathbb{R}^3$ with local charts indicated by the axes (left). Data representation in $\mathbb{R}^2$ generated by optimizing our objective function; expected latent coordinates $g_n$ are plotted (right).]

In Fig. 1 we give an illustration of applying the above procedure to a simple manifold. The plots show the original data presented to the algorithm (left) and the 2-dimensional latent coordinates $g_n = \sum_s q_{ns} g_{ns}$ found by the algorithm (right).

3 Laplacian Eigenmaps as a special case

Consider the special case of the algorithm of Section 2, where no features are extracted. The only information the mixture model provides are the posterior probabilities collected in the matrix Q with $[Q]_{ns} = q_{ns} = p(s|x_n)$. In that case:

$$g_{ns} = \kappa_s, \quad U = Q, \quad L = [\kappa_1^\top \dots \kappa_k^\top]^\top, \qquad (9)$$

$$\Phi = \mathrm{Tr}\{ L^\top (D - A)\, L \} = \sum_{s,t} \| \kappa_s - \kappa_t \|^2 \sum_n q_{ns}\, q_{nt}, \qquad (10)$$

where $A = Q^\top Q$ is an adjacency matrix with $[A]_{st} = \sum_n q_{ns} q_{nt}$, and D is the diagonal degree matrix of A with $[D]_{ss} = \sum_t A_{st} = \sum_n q_{ns}$.
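A sketch of solving the generalized eigenproblem in (8) and reading off the embedding follows. This is an illustration, not the authors' code: the small ridge that keeps $U^\top U$ positive definite is an assumption of the sketch, and `scipy.linalg.eigh` is used for the generalized symmetric eigenproblem.

```python
import numpy as np
from scipy.linalg import eigh

def align_maps(U, D, d, ridge=1e-8):
    """Solve (D - U^T U) v = lambda U^T U v  (eq. 8, sketch).

    Keeps the eigenvectors for the 2nd..(d+1)st smallest eigenvalues; the
    smallest eigenvalue (zero) maps all points to one latent coordinate.
    Returns (L, G) with the stacked linear maps L and embedding G = U L.
    """
    UtU = U.T @ U + ridge * np.eye(U.shape[1])   # ridge: numerical assumption
    evals, evecs = eigh(D - UtU, UtU)            # ascending eigenvalues
    L = evecs[:, 1:d + 1]                        # drop the constant solution
    return L, U @ L
```

In the featureless special case of Section 3 (U = Q, D the degree matrix), points with identical responsibilities receive identical embedding coordinates, as expected from (9)-(10).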
Optimization under the constraints of zero mean and identity covariance leads to the generalized eigenproblem:

$$(D - A)\, v = \lambda A v \;\iff\; (D - A)\, v = \frac{\lambda}{1 + \lambda}\, D v. \qquad (11)$$

The optimization problem is exactly the Laplacian Eigenmaps algorithm[4], but applied on the mixture components instead of the data points. Since we do not use any feature extractors in this setting, it can be applied to mixture models that model data for which it is hard to design feature extractors, e.g. data that has (both numerical and) categorical features. Thus, we can use mixture densities without latent variables, e.g. mixtures of multinomials, mixtures of Hidden Markov Models, etc. Notice that in this manner the mixture model not only provides a soft grouping of the data through the posteriors, but also an adjacency matrix between the groups.

4 Non-linear CCA by aligning local feature extractors

Canonical Correlation Analysis (CCA) is a data analysis method that finds correspondences between two or more sets of measurements. The data are provided in tuples of corresponding measurements in the different spaces. The sets of measurements can be obtained by employing different sensors to make measurements of some phenomenon. Our main interest in this paper is to develop a nonlinear extension of CCA which works when the different measurements come from separate nonlinear manifolds that share an underlying global coordinate system. Non-linear CCA can be trained to find a shared low dimensional embedding for both manifolds, exploiting the pairwise correspondence provided by the data set. Such models can then be used for different purposes, like sensor fusion, denoising, filling in missing data, or predicting a measurement in one space given a measurement in the other space. Another important aspect of this learning setup is that the use of multiple sensors might also function as regularization, helping to avoid overfitting, c.f. [11]. In CCA, two (zero mean) sets of points are given: $X = \{x_1, \dots, x_N\} \subset \mathbb{R}^p$ and $Y = \{y_1, \dots, y_N\} \subset \mathbb{R}^q$. The aim is to find linear maps a and b, mapping members of X and Y respectively onto the real line, such that the correlation between the linearly transformed variables is maximized. This is easily shown to be equivalent to minimizing:

$$E = \frac{1}{2} \sum_n \left[ a x_n - b y_n \right]^2 \qquad (12)$$

under the constraint that $a \left[ \sum_n x_n x_n^\top \right] a^\top + b \left[ \sum_n y_n y_n^\top \right] b^\top = 1$. The above is easily generalized such that the sets do not need to be zero mean, allowing a translation as well. We can also generalize by mapping to $\mathbb{R}^d$ instead of the real line, and then requiring the sum of the covariance matrices of the projections to be identity. CCA can also be readily extended to take into account more than two point sets, as we now show. In the generalized CCA setting with multiple point-sets, allowing translations and linear mappings to $\mathbb{R}^d$, the objective is to minimize the squared distance between all pairs of projections under the same constraint as above. We denote the projection of the n-th point in the s-th point-set as $g_{ns}$ and let $g_n = \frac{1}{k} \sum_s g_{ns}$. We then minimize the error function:

$$\Phi_{\mathrm{CCA}} = \frac{1}{2k^2} \sum_{n,s,t} \| g_{ns} - g_{nt} \|^2 = \frac{1}{k} \sum_{n,s} \| g_{ns} - g_n \|^2. \qquad (13)$$

The objective $\Phi$ in equation (5) coincides with $\Phi_{\mathrm{CCA}}$ if $q_{ns} = 1/k$. The different constraints imposed upon the optimization by CCA and our objective of the previous sections are equivalent. We can thus regard the alignment procedure as a weighted form of CCA. This suggests using the coordination technique for non-linear CCA. This is achieved quite easily, without modifying the objective function (5). We consider different point sets, each having a mixture of locally valid linear projections into the 'global' latent space that is now shared by all mixture components and point sets. We minimize the weighted sum of the squared distances between all pairs of projections, i.e. we have pairs of projections due to the same point set and also pairs that combine projections from different point sets.
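For reference, classical linear CCA (the model whose least-squares form appears in eq. (12)) can be computed in closed form by whitening each set and taking an SVD of the cross-covariance. This is a standard textbook construction written as a sketch; the small ridge `reg` is a numerical-safety assumption, not part of the derivation.

```python
import numpy as np

def cca(X, Y, d=1, reg=1e-8):
    """Classical linear CCA via whitening + SVD (sketch).

    Returns projection matrices a (p, d) and b (q, d) for the d most
    correlated pairs of directions.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)

    def inv_sqrt(C):
        # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :d], Wy @ Vt.T[:, :d]
```

When Y is an exact linear transformation of X, the first canonical correlation is (up to the ridge) equal to one.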
We use c as an index ranging over the C different observation spaces, and write $q^c_{ns}$ for the posterior on component s for observation n in observation space c. Similarly, we use $g^c_{ns}$ to denote the projection due to component s from space c. The average projection due to observation space c is then denoted by $g^c_n = \sum_s q^c_{ns}\, g^c_{ns}$. We use the index r to range over all mixture components in all observation spaces, so that $q_{nr} = \frac{1}{C} p(s|x_n)$ if r corresponds to (c = 1, s) and $q_{nr} = \frac{1}{C} p(s|y_n)$ if r corresponds to (c = 2, s), i.e. $r \leftrightarrow (c, s)$. The overall average projection then becomes $g_n = \frac{1}{C} \sum_c g^c_n = \sum_r q_{nr}\, g_{nr}$. The objective (5) can now be rewritten as:

$$\Phi = \sum_{n,r} q_{nr}\, \| g_{nr} - g_n \|^2 = \frac{1}{C} \sum_{c,n} \| g_n - g^c_n \|^2 + \frac{1}{C} \sum_{c,n,s} q^c_{ns}\, \| g^c_n - g^c_{ns} \|^2. \qquad (14)$$

Observe how in (14) the objective sums between-point-set consistency of the projections (first summand) and within-point-set consistency of the projections (second summand).

[Figure 2: Data and charts, indicated by bars (left, middle). Latent coordinates (vertical) plotted against the coordinate on the generating curve (horizontal) (right).]

The above technique can also be used to get more stable results of the chart coordination procedure for a single manifold discussed in Section 2. Robustness to variation in the mixture fitting can be improved by using several sets of charts fitted to the same manifold. We can then align all these sets of charts by optimizing (14). This aligns the charts within each set and at the same time makes sure the different sets of aligned charts are aligned, providing important regularization, since now every point is modeled by several local models. Note that if the charts and responsibilities are obtained using a mixture of PCA or factor analyzers, the local linear mappings to the latent space induce a Gaussian mixture in the latent space.
This mixture can be used to compute responsibilities of components given latent coordinates. Also, for each linear map from the data to the latent space we can compute a pseudo-inverse projecting back. By averaging the individual back-projections with the responsibilities computed in latent space, we obtain a projection from the latent space to the data space. In total, we can thus map from one observation space into another. This is how we generated the reconstructions in the experiments reported below. When using linear CCA for data that is non-linearly embedded, reconstructions will be poor, since linear CCA can only map into a low dimensional linear subspace. As an illustrative example of the non-linear CCA, we used two point-sets in $\mathbb{R}^2$. The first point-set was generated on an S-shaped curve; the second point-set was generated along an arc, see Fig. 2. To both point sets we added Gaussian noise, and we learned a 10-component mixture model on both sets. In the rightmost panel of Fig. 2, the (clearly successfully) discovered latent coordinates are plotted against the coordinate on the generating curve. Below, we describe three more challenging experiments. In the first experiment we use two data sets which we know to share the same underlying degrees of freedom. We use images of a face varying its gaze left-right and up-down. We cut these images in half to obtain our two sets of images. We trained the system on 1500 image halves of 40×20 pixels each. Both image halves were modeled with a mixture of 40 components. In Fig. 3 some generated right half images based on the left half are shown. The second experiment concerns appearance based pose estimation of an object. One point set consists of a pixel representation of images of an object and the other point set contains the corresponding pose of the camera w.r.t. the object. For the pose parameters we used the identity to 'extract' features (i.e. we just used one component for this space).
The training data was collected¹ by moving a camera over the half-sphere centered at the object. A mixture of 40 PCAs was trained on the image data and aligned with the pose parameters in a 2-dimensional latent space. The right panel of Fig. 3 shows reconstructions of the images conditioned on various pose inputs (the left image of each pair is the reconstruction based on the pose of the right image). Going the other way, when we input an image and estimate the pose, the absolute errors in the longitude (0°–360°) were under 10° in over 80% of the cases, and for latitude (0°–90°) this was under 5° in over 90% of the cases.

¹ Thanks to G. Peters for sharing the images used in [12] and recorded at the Institute for Neural Computation, Ruhr-University Bochum, Germany.

[Figure 3: Right half of the images was generated given the left half using the trained model (left). Image reconstructions given pose parameters (right).]

In the third experiment we use the same images as in the second experiment, but replace the direct (low dimensional) supervision signal of the pose parameters with (high dimensional) correspondences in the form of images of another object in corresponding poses. We trained a mixture of 40 PCAs on both image sets (2000 images of 64×64 pixels in each set) and aligned these in a 3-dimensional latent space. Comparing the pose of an object to the pose of the nearest (in latent space) image from the other object, the std. dev. of the error in latitude is 2.0°. For longitude we found 4 errors of about 180° in our 500 test cases; the rest of the errors had std. dev. 3.9°. Given a view of one object we can reconstruct the corresponding view of the second object; Fig. 4 shows some of the obtained reconstruction results. All presented reconstructions were made for data not included in training.

5 Discussion

In this paper, we have extended alignment methods for single manifold nonlinear dimensionality reduction to perform non-linear CCA using measurements from multiple manifolds.
We have also shown the close relationship with Laplacian Eigenmaps[4] in the degenerate case of a single manifold and feature extractors of zero dimensionality. In [7] a related method to coordinate local charts is proposed, which is based on the LLE cost function as opposed to our cross-entropy term; this means that we need more than just a set of local feature extractors and their posteriors: we also need to be able to compute reconstruction weights, collected in an N × N weight matrix. The weights indicate how we can reconstruct each data point from its nearest neighbors. Computing these weights requires access to the original data directly, not just through the "interface" of the mixture model. Defining sensible weights and the 'right' number of neighbors might not be straightforward, especially for data in non-Euclidean spaces. Furthermore, computing the weights costs in principle O(N²) because we need to find nearest neighbors, whereas the presented work has running time linear in the number of data points. In [11] it is considered how to find low dimensional representations for multiple point sets simultaneously, given few correspondences between the point sets. The generalization of LLE presented there for this problem is closely related to our non-linear CCA model. The work presented here can also be extended to the case where we know only for a few points in one set to which points they correspond in the other set. The use of multiple sets of charts for one data set is similar in spirit to the self-correspondence technique of [11], where the data is split into several overlapping sets used to stabilize the generalized LLE.

[Figure 4: I1: image in first set (a); I2: corresponding image in second set (b); closest image in second set (in latent space) to I1 (c); reconstruction of I2 given I1 (d).]
Finally, it would be interesting to compare our approach with treating the data in the joint (x, y) space and employing techniques for a single point set[8, 7, 6]. In this case, points for which we do not have the correspondence can be treated as data with missing values. Acknowledgments JJV and NV are supported by the Technology Foundation STW (AIF4997), applied science division of NWO, and the technology program of the Dutch Ministry of Economic Affairs. STR is supported in part by the Learning Project of IRIS Canada and by NSERC. References [1] J.B. Tenenbaum, V. de Silva, and J.C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, December 2000. [2] S.T. Roweis and L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000. [3] B. Schölkopf, A.J. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998. [4] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, volume 14, 2002. [5] C. Bregler and S.M. Omohundro. Surface learning with applications to lipreading. In Advances in Neural Information Processing Systems, volume 6, 1994. [6] S.T. Roweis, L.K. Saul, and G.E. Hinton. Global coordination of local linear models. In Advances in Neural Information Processing Systems, volume 14, 2002. [7] Y.W. Teh and S.T. Roweis. Automatic alignment of local representations. In Advances in Neural Information Processing Systems, volume 15, 2003. [8] M. Brand. Charting a manifold. In Advances in Neural Information Processing Systems, volume 15, 2003. [9] C.M. Bishop, M. Svensén, and C.K.I. Williams. GTM: the generative topographic mapping. Neural Computation, 10:215–234, 1998. [10] T. Kohonen. Self-organizing maps. Springer, 2001. [11] J.H. Ham, D.D. Lee, and L.K. Saul.
Learning high dimensional correspondences from low dimensional manifolds. In ICML’03, workshop on the continuum from labeled to unlabeled data in machine learning and data mining, 2003. [12] G. Peters, B. Zitova, and C. von der Malsburg. How to measure the pose robustness of object views. Image and Vision Computing, 20(4):249–256, 2002.
Wormholes Improve Contrastive Divergence Geoffrey Hinton, Max Welling and Andriy Mnih Department of Computer Science, University of Toronto 10 King's College Road, Toronto, M5S 3G5 Canada {hinton,welling,amnih}@cs.toronto.edu

Abstract

In models that define probabilities via energies, maximum likelihood learning typically involves using Markov Chain Monte Carlo to sample from the model's distribution. If the Markov chain is started at the data distribution, learning often works well even if the chain is only run for a few time steps [3]. But if the data distribution contains modes separated by regions of very low density, brief MCMC will not ensure that different modes have the correct relative energies because it cannot move particles from one mode to another. We show how to improve brief MCMC by allowing long-range moves that are suggested by the data distribution. If the model is approximately correct, these long-range moves have a reasonable acceptance rate.

1 Introduction

One way to model the density of high-dimensional data is to use a set of parameters $\Theta$ to deterministically assign an energy $E(x|\Theta)$ to each possible datavector x [2]:

$$p(x|\Theta) = \frac{e^{-E(x|\Theta)}}{\int e^{-E(y|\Theta)}\, dy} \qquad (1)$$

The obvious way to fit such an energy-based model to a set of training data is to follow the gradient of the likelihood. The contribution of a training case x to the gradient is:

$$\frac{\partial \log p(x|\Theta)}{\partial \theta_j} = -\frac{\partial E(x|\Theta)}{\partial \theta_j} + \int p(y|\Theta)\, \frac{\partial E(y|\Theta)}{\partial \theta_j}\, dy \qquad (2)$$

The last term in equation 2 is an integral over all possible datavectors and is usually intractable, but it can be approximated by running a Markov chain to get samples from the Boltzmann distribution defined by the model's current parameters. The main problem with this approach is the time that it takes for the Markov chain to approach its stationary distribution. Fortunately, in [3] it was shown that if the chain is started at the data distribution, running the chain for just a few steps is often sufficient to provide a signal for learning.
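This brief-MCMC (contrastive divergence) update can be written generically as follows. The sketch is an illustration under stated assumptions, not the paper's implementation: the caller supplies the energy gradient w.r.t. the parameters and a one-step sampler started at the data, and both names are hypothetical.

```python
import numpy as np

def cd_update(theta, data, dE_dtheta, sample_step, lr=0.1):
    """One contrastive-divergence parameter update (sketch).

    dE_dtheta(x, theta):   gradient of the energy w.r.t. the parameters.
    sample_step(xs, theta): one brief MCMC step started at the data,
                            producing the 'confabulations'.
    Lowers the energy of the data and raises that of the confabulations.
    """
    confabs = sample_step(data, theta)
    g_data = np.mean([dE_dtheta(x, theta) for x in data])
    g_conf = np.mean([dE_dtheta(x, theta) for x in confabs])
    return theta + lr * (g_conf - g_data)
```

For a toy quadratic energy $E(x|\mu) = (x - \mu)^2 / 2$ with a noisy relaxation step as the sampler, repeated updates drive $\mu$ toward the data mean, as the fixed point of the rule requires the confabulations to match the data.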
The way in which the data distribution gets distorted by the model in the first few steps of the Markov chain provides enough information about how the model differs from reality to allow the parameters of the model to be improved by lowering the energy of the data and raising the energy of the "confabulations" produced by a few steps of the Markov chain. So the steepest ascent learning algorithm implied by equation 2 becomes

$$\Delta\theta_j \;\propto\; -\left\langle \frac{\partial E(\cdot|\Theta)}{\partial \theta_j} \right\rangle_{\text{data}} + \left\langle \frac{\partial E(\cdot|\Theta)}{\partial \theta_j} \right\rangle_{\text{confabulations}} \qquad (3)$$

where the angle brackets denote expected values under the distribution specified by the subscript.

[Figure 1: (a) shows a two-dimensional data distribution that has four well-separated modes. (b) shows a feedforward neural network that is used to assign an energy to a two-dimensional input vector. Each hidden unit takes a weighted sum of its inputs, adds a learned bias, and puts this sum through a logistic non-linearity to produce an output that is sent to the next layer. Each hidden unit makes a contribution to the global energy that is equal to its output times a learned scale factor. There are 20 units in the first hidden layer and 3 in the top layer.]

If we use a Markov chain that obeys detailed balance, it is clear that when the training data is dense and the model is perfect, the learning procedure in equation 3 will leave the parameters unchanged, because the Markov chain will already be at its stationary distribution, so the confabulations will have the same distribution as the training data. Unfortunately, real training sets may have modes that are separated by regions of very low density, and running the Markov chain for only a few steps may not allow it to move between these modes even when there is a lot of data.
As a result, the relative energies of data points in different modes can be completely wrong without affecting the learning signal given by equation 3. The point of this paper is to show that, in the context of model fitting, there are ways to use the known training data to introduce extra mode-hopping moves into the Markov chain. We rely on the observation that after some initial training, the training data itself provides useful suggestions about where the modes of the model are and how much probability mass there is in each mode.

2 A simple example of wormholes

Figure 1a shows some two-dimensional training data and a model that was used to model the density of the training data. The model is an unsupervised deterministic feedforward neural network with two hidden layers of logistic units. The parameters of the model are the weights and biases of the hidden units and one additional scale parameter per hidden unit, which is used to convert the output of the hidden unit into an additive contribution to the global energy. By using backpropagation through the model, it is easy to compute the derivatives of the global energy assigned to an input vector w.r.t. the parameters (needed in equation 3), and it is also easy to compute the gradient of the energy w.r.t. each component of the input vector (i.e. the slope of the energy surface at that point in dataspace). The latter gradient is needed for the 'Hybrid Monte Carlo' sampler that we discuss next. The model is trained on 1024 datapoints for 1000 parameter updates using equation 3. To produce the confabulations we start at the datapoints and use a Markov chain that is a simplified version of Hybrid Monte Carlo. Each datapoint is treated as a particle on the energy surface. The particle is given a random initial momentum chosen from a unit-variance isotropic Gaussian and its deterministic trajectory along the energy surface is then simulated for 10 time steps. If this simulation has no numerical errors, the increase, ΔE, in the combined potential and kinetic energy will be zero. If ΔE is positive, the particle is returned to its initial position with a probability of 1 - exp(-ΔE). The step size is adapted after each batch of trajectories so that only about 10% of the trajectories get rejected. Numerical errors up to second order are eliminated by using a "leapfrog" method [5], which uses the potential energy gradient at time t to compute the velocity increment between times t - 1/2 and t + 1/2, and uses the velocity at time t + 1/2 to compute the position increment between times t and t + 1.

Figure 2: (a) shows the probabilities learned by the network without using wormholes, displayed on a 32 × 32 grid in the dataspace. Some modes have much too little probability mass. (b) shows that the probability mass in the different minima matches the data distribution after 10 parameter updates using point-to-point wormholes defined by the vector differences between pairs of training points. The mode-hopping allowed by the wormholes increases the number of confabulations that end up in the deeper minima, which causes the learning algorithm to raise the energy of these minima.

Figure 2a shows the probability density over the two-dimensional space. Notice that the model assigns much more probability mass to some minima than to others. It is clear that the learning procedure in equation 3 would correct this imbalance if the confabulations were generated by a time-consuming Markov chain that was able to concentrate the confabulations in the deepest minima(1), but we want to make use of the data distribution to achieve the same goal much faster.
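The simplified Hybrid Monte Carlo procedure can be sketched in 1-D as follows. The quadratic toy energy and step size are assumptions of this sketch, but the structure (unit-variance initial momentum, leapfrog half-steps, return to the start with probability 1 - exp(-ΔE)) follows the description above.

```python
import math
import random

def energy(x, theta=1.0):
    # toy quadratic energy; stands in for the backpropagated network energy
    return 0.5 * theta * x * x

def grad_E(x, theta=1.0):
    return theta * x

def hmc_trajectory(x0, eps=0.1, n_steps=10, rng=random):
    # leapfrog: half-step the momentum, alternate full steps, half-step again
    p = rng.gauss(0.0, 1.0)                  # unit-variance initial momentum
    x = x0
    h0 = energy(x) + 0.5 * p * p             # initial potential + kinetic energy
    p -= 0.5 * eps * grad_E(x)
    for _ in range(n_steps - 1):
        x += eps * p
        p -= eps * grad_E(x)
    x += eps * p
    p -= 0.5 * eps * grad_E(x)
    dH = energy(x) + 0.5 * p * p - h0
    # return the particle to its start with probability 1 - exp(-dH) when dH > 0
    if dH > 0 and rng.random() > math.exp(-dH):
        return x0
    return x

rng = random.Random(2)
samples = [hmc_trajectory(1.0, rng=rng) for _ in range(200)]
```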
Figure 2b shows how the probability density is corrected by 10 parameter updates using a Markov chain that has been modified by adding an optional long-range jump at the end of each accepted trajectory. The candidate jump is simply the vector difference between two randomly selected training points. The jump is always accepted if it lowers the energy. If it raises the energy it is accepted with a probability of exp(-ΔE). Since the probability that point A in the space will be offered a jump to point B is the same as the probability that B will be offered a jump to A, the jumps do not affect detailed balance. One way to think about the jumps is to imagine that every point in the dataspace is connected by wormholes to n(n - 1) other points, so that it can move to any of these points in a single step.

To understand how the long-range moves deal with the trade-off between energy and entropy, consider a proposed move that is based on the vector offset between a training point that lies in a deep narrow energy minimum and a training point that lies in a broad shallow minimum. If the move is applied to a random point in the deep minimum, it stands a good chance of moving to a point within the broad shallow minimum, but it will probably be rejected because the energy has increased. If the opposite move is applied to a random point in the broad minimum, the resulting point is unlikely to fall within the narrow minimum, though if it does it is very likely to be accepted. If the two minima have the same free energy, these two effects exactly balance. Jumps generated by random pairs of datapoints work well if the minima are all the same shape, but in a high-dimensional space it is very unlikely that such a jump will be accepted if different energy minima are strongly elongated in different directions.

(1) Note that, depending on the height of the energy barrier between the modes, this may take too long for practical purposes.
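A minimal sketch of the point-to-point wormhole move, using a hypothetical 2-D double-well energy of our own: the proposal is the vector difference between two random training points, and because that proposal is symmetric, the plain Metropolis rule suffices.

```python
import math
import random

def energy(x, y):
    # toy 2-D energy with two minima at (-2, 0) and (2, 0) (illustrative only)
    return min((x + 2.0) ** 2 + y * y, (x - 2.0) ** 2 + y * y)

def wormhole_jump(point, data, rng):
    # candidate jump = vector difference between two random training points;
    # accept always if energy drops, else with probability exp(-dE)
    a, b = rng.choice(data), rng.choice(data)
    cand = (point[0] + a[0] - b[0], point[1] + a[1] - b[1])
    dE = energy(*cand) - energy(*point)
    if dE <= 0 or rng.random() < math.exp(-dE):
        return cand
    return point

rng = random.Random(3)
data = [(rng.gauss(-2, 0.1), rng.gauss(0, 0.1)) for _ in range(50)] + \
       [(rng.gauss(2, 0.1), rng.gauss(0, 0.1)) for _ in range(50)]

# count how often a particle hops between the two modes
x, hops = data[0], 0
for _ in range(100):
    nx = wormhole_jump(x, data, rng)
    if (nx[0] > 0) != (x[0] > 0):
        hops += 1
    x = nx
```

Because roughly half of the random pair differences connect the two modes, and the modes here have matching shapes, cross-mode jumps are accepted frequently.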
3 A local optimization-based method

In high dimensions the simple wormhole method will have a low acceptance rate because most jumps will land in high-energy regions. One way to avoid that is to use local optimization: after a jump has been made, descend into a nearby low-energy region. The obvious difficulty with this approach is that care must be taken to preserve detailed balance. We use a variation on the method proposed in [7]. It fits Gaussians to the detected low-energy regions in order to account for their volume. A Gaussian is fitted using the following procedure. Given a point x, let m_x be the point found by running a minimization algorithm on E(x) for a few steps (or until convergence) starting at x. Let H_x be the Hessian of E(x) at m_x, adjusted to ensure that it is positive definite by adding a multiple of the identity matrix to it. Let Σ_x be the inverse of H_x. A Gaussian density g_x(y) is then defined by the mean m_x and the covariance matrix Σ_x.

To generate a jump proposal, we make a forward jump by adding the vector difference d between two randomly selected data points to the initial point x_0, obtaining x. Then we compute m_x and Σ_x, and sample a proposed jump destination y from g_x(y). Then we make a backward jump by adding -d to y to obtain z, and compute m_z and Σ_z, specifying g_z(x). Finally, we accept the proposal y with probability

p = min(1, [exp(-E(y)) g_z(x_0)] / [exp(-E(x_0)) g_x(y)])

Our implementation of the algorithm executes 20 steps of steepest descent to find m_x and m_z. To save time, instead of computing the full Hessian, we compute a diagonal approximation to the Hessian using the method proposed in [1].

4 Gaping wormholes

In this section we describe a third method, based on "darting MCMC" [8], to jump between the modes of a distribution. The idea of this technique is to define spherical regions on the modes of the distribution and to jump only between corresponding points in those regions.
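The forward/backward jump procedure with fitted Gaussians can be sketched in 1-D, where the Hessian reduces to a single second derivative. The double-well energy, descent schedule, and finite-difference details here are illustrative choices, not the paper's implementation.

```python
import math
import random

def energy(x):
    # toy 1-D double well: minima near -2 and +2 with different widths (hypothetical)
    return min((x + 2.0) ** 2, 0.5 * (x - 2.0) ** 2)

def grad(x, h=1e-5):
    return (energy(x + h) - energy(x - h)) / (2.0 * h)

def descend(x, lr=0.1, steps=20):
    # a few steps of steepest descent to find the nearby minimum m_x
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def curvature(x, h=1e-4):
    # 1-D analogue of the Hessian, floored to stay positive definite
    c = (energy(x + h) - 2.0 * energy(x) + energy(x - h)) / (h * h)
    return max(c, 1e-3)

def log_gauss(y, m, var):
    return -0.5 * math.log(2.0 * math.pi * var) - (y - m) ** 2 / (2.0 * var)

def opt_wormhole_step(x0, d, rng):
    # forward jump x = x0 + d, fit g_x at the nearby minimum, sample y;
    # backward jump z = y - d, fit g_z; accept y with probability
    # min(1, exp(-E(y)) g_z(x0) / (exp(-E(x0)) g_x(y)))
    x = x0 + d
    mx = descend(x); vx = 1.0 / curvature(mx)
    y = rng.gauss(mx, math.sqrt(vx))
    z = y - d
    mz = descend(z); vz = 1.0 / curvature(mz)
    log_a = (-energy(y) + log_gauss(x0, mz, vz)) - (-energy(x0) + log_gauss(y, mx, vx))
    return y if (log_a >= 0 or rng.random() < math.exp(log_a)) else x0

rng = random.Random(4)
moves = [opt_wormhole_step(-2.0, 4.0, rng) for _ in range(20)]
```

The ratio of the two fitted Gaussian densities is what restores detailed balance after the deterministic descent.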
When we consider a long-range move we check whether or not we are inside a wormhole. When inside a wormhole we initiate a jump to some other wormhole (e.g. chosen uniformly); when outside we stay put in order to maintain detailed balance. If we make a jump we must also use the usual Metropolis rejection rule to decide whether to accept the jump. In high dimensional spaces this procedure may still lead to unacceptably high rejection rates because the modes will likely decay sharply in at least a few directions. Since these ridges of probability are likely to be uncorrelated across the modes, the proposed target location of the jump will most of the time have very low probability, resulting in almost certain rejection.

To deal with this problem, we propose a generalization of the described method, in which the wormholes can have arbitrary shapes and volumes. As before, when we are considering a long-range move we first check our position; if we are located inside a wormhole we initiate a jump (which may be rejected), while if we are located outside a wormhole we stay put. To maintain detailed balance between wormholes we need to compensate for their potentially different volume factors. To that end, we impose the constraint

V_i P_{i→j} = V_j P_{j→i}    (4)

on all pairs of wormholes, where P_{i→j} is a transition probability and V_i and V_j are the volumes of the wormholes i and j respectively. This in fact defines a separate Markov chain between the wormholes with equilibrium distribution

P_i^{EQ} = V_i / Σ_j V_j    (5)

The simplest method(2) to compensate for the different volume factors is therefore to sample a target wormhole from this distribution P^{EQ}. When the target wormhole has been determined, we can either sample a point uniformly within its volume or design some deterministic mapping (see also [4]).
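Sampling the target wormhole in proportion to its volume, as equations 4 and 5 require, is ordinary roulette-wheel sampling; a minimal sketch with made-up volumes:

```python
import random

def sample_target_wormhole(volumes, rng):
    # draw index i with probability V_i / sum_j V_j (equation 5)
    total = sum(volumes)
    r = rng.random() * total
    acc = 0.0
    for i, v in enumerate(volumes):
        acc += v
        if r < acc:
            return i
    return len(volumes) - 1  # guard against floating-point edge cases

rng = random.Random(5)
volumes = [1.0, 3.0, 6.0]   # hypothetical wormhole volumes
counts = [0, 0, 0]
for _ in range(10000):
    counts[sample_target_wormhole(volumes, rng)] += 1
```

Over many draws the counts approach the 1 : 3 : 6 ratio of the volumes, which is what makes the wormhole-to-wormhole chain satisfy equation 4.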
Finally, once the arrival point has been determined, we need to compensate for the fact that the probability of the point of departure is likely to be different from the probability of the point of arrival. The usual Metropolis rule applies in this case:

P_accept = min[1, P_arrive / P_depart]    (6)

This combined set of rules ensures that detailed balance holds and that the samples will eventually come from the correct probability distribution.

One way of employing this sampler in conjunction with contrastive divergence learning is to fit a "mixture of Gaussians" model to the data distribution in a preprocessing step. The region inside an iso-probability contour of each Gaussian mixture component defines an elliptical wormhole with volume

V_ellipse = π^{d/2} α^d (Π_{i=1}^{d} σ_i) / Γ(1 + d/2)    (7)

where Γ(x) is the gamma function, σ_i is the standard deviation of the i'th eigen-direction of the covariance matrix, and α is a free parameter controlling the size of the wormhole. These regions provide good jump points during CD-learning because it is expected that the valleys in the energy landscape correspond to the regions where the data cluster. To minimize the rejection rate we map points in one ellipse to "corresponding" points in another ellipse as follows. Let Σ_depart and Σ_arrive be the covariance matrices of the wormholes in question, and let Σ = U S U^T be an eigenvalue decomposition. The following transformation maps iso-probability contours in one wormhole to iso-probability contours in another:

x_arrive - μ_arrive = -U_arrive S_arrive^{1/2} S_depart^{-1/2} U_depart^T (x_depart - μ_depart)    (8)

with μ the center location of the ellipse. The negative sign in front of the transformation is to promote better exploration when the target wormhole turns out to be the same as the wormhole from which the jump is initiated. It is important to realize that although the mapping is one-to-one, we still need to satisfy the constraint in equation 4 because a volume element dx will change under the mapping.
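For axis-aligned wormholes the eigenvector matrices in equation 8 are identities and the mapping reduces to per-axis rescaling about the centers. A sketch with toy centers and variances of our choosing, showing that an iso-probability point of the departure ellipse lands on the matching iso-probability contour of the arrival ellipse:

```python
import math

def map_between_ellipses(x, mu_d, var_d, mu_a, var_a):
    # equation (8) with axis-aligned (diagonal-covariance) wormholes, so U = I:
    # x_arrive - mu_arrive = -S_arrive^{1/2} S_depart^{-1/2} (x_depart - mu_depart),
    # where the S entries are per-axis variances (eigenvalues of the covariance)
    return [ma - math.sqrt(sa / sd) * (xi - md)
            for xi, md, sd, ma, sa in zip(x, mu_d, var_d, mu_a, var_a)]

# a point on the 1-sigma contour of the departure ellipse (center [0,0],
# variances [1, 4]) maps onto the 1-sigma contour of the arrival ellipse
# (center [5,5], variances [9, 1]); all numbers here are hypothetical
x_dep = [1.0, 0.0]
x_arr = map_between_ellipses(x_dep, [0.0, 0.0], [1.0, 4.0],
                             [5.0, 5.0], [9.0, 1.0])
```

The negative sign reflects the point through the target center, which is the exploration trick mentioned above.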
Thus, wormholes are sampled from P^{EQ} and proposed moves are accepted according to equation 6.

For both the deterministic and the stochastic moves we may also want to consider regions that overlap. For instance, if we generate wormholes by fitting a mixture of Gaussians, it is very hard to check whether these regions overlap somewhere in space. Fortunately, we can adapt the sampling procedure to deal with this case as well. First define n_arrive as the total number of regions that contain the point x_arrive, and similarly for n_depart. Detailed balance can still be maintained for both deterministic and stochastic moves if we adapt the Metropolis acceptance rule as follows:

P_accept = min[1, (n_depart P_arrive) / (n_arrive P_depart)]    (9)

Further details can be found in [6].

(2) Other methods that respect the constraint 4 are possible, but they are suboptimal in the sense that they mix more slowly to the equilibrium distribution.

Figure 3: (a) Dataset of 1024 cases uniformly distributed on 2 orthogonal narrow rectangles. (b) Probability density of the model learned with contrastive divergence. The size of each square indicates the probability mass at the corresponding location.

5 An experimental comparison of the three methods

To highlight the difference between the point and the region wormhole sampler, we sampled 1024 data points along two very narrow orthogonal ridges (see figure 3a), with half of the cases in each mode. A model with the same architecture as depicted in figure 1 was learned using contrastive divergence, but with "Cauchy" nonlinearities of the form f(x) = log(1 + x^2) instead of the logistic function. The probability density of the model that resulted is shown in figure 3b. Clearly, the lack of mixing between the modes has resulted in one mode being much stronger than the other one. Subsequently, learning was resumed using a Markov chain that proposed a long-range jump for all confabulations after each brief HMC run.
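The overlap-corrected acceptance rule of equation 9 is a one-liner; the toy energies and region counts below are our own, chosen so that an arrival point covered by two overlapping regions is accepted about half as often.

```python
import math
import random

def accept_jump(E_depart, E_arrive, n_depart, n_arrive, rng):
    # equation (9): Metropolis rule corrected for overlapping wormhole regions,
    # with P proportional to exp(-E)
    p = min(1.0, (n_depart * math.exp(-E_arrive)) /
                 (n_arrive * math.exp(-E_depart)))
    return rng.random() < p

rng = random.Random(6)
# equal energies, but the arrival point lies inside two overlapping regions,
# so the acceptance probability drops to 1/2
accepts = sum(accept_jump(1.0, 1.0, 1, 2, rng) for _ in range(10000))
```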
The regions in the region wormhole sampler were generated by fitting a mixture of two Gaussians to the data using EM, and setting α = 10. Both the point wormhole method and the region wormhole method were able to correct the asymmetry in the solution, but the region method does so much faster, as shown in figure 4b. The reason is that a much smaller fraction of the confabulations succeed in making a long-range jump, as shown in figure 4a.

We then compared all three wormhole algorithms on a family of datasets of varying dimensionality. Each dataset contained 1024 n-dimensional points, where n was one of 2, 4, 8, 16, or 32. The first two components of each point were sampled uniformly from two axis-aligned narrow orthogonal ridges and then rotated by 45° around the origin to ensure that the diagonal approximation to the Hessian, used by the local optimization-based algorithm, was not unfairly accurate. The remaining n - 2 components of each data point were sampled independently from a sharp univariate Gaussian with mean 0 and std. 0.02.

Figure 4: (a) Number of successful jumps between the modes for point wormhole MCMC (dashed line) and region wormhole MCMC (solid line). (b) Log-odds of the probability masses contained in small volumes surrounding the two modes for the point wormhole method (dashed line) and the region wormhole method (solid line). The log-odds is zero when the probability mass is equal in both modes.

The networks used for comparison had architectures identical to the one depicted in Figure 1 in all respects except for the number and the type of units used. The second hidden layer consisted of Cauchy units, while the first hidden layer consisted of some Cauchy and some sigmoid units. The networks were trained for 2000 parameter updates using HMC without wormholes.
To speed up the training, an adaptive learning rate and a momentum of 0.95 were used. We also used a weight decay rate of 0.0001 for weights and 0.000001 for scales. Gaussian noise was added to the last n - 2 components of each data point. The std. of the noise started at 0.2 and was gradually decreased to zero as training progressed. This prevented HMC from being slowed down by the narrow energy ravines resulting from the tight constraints on the last n - 2 components. After the model was trained (without wormholes), we compared the performance of the three jump samplers by allowing each sampler to make a proposal for each training case and then comparing the acceptance rates. This was repeated 25 times to improve the estimate of the acceptance rate. In each sampler, HMC was run for 10 steps before offering points an opportunity to jump. The average number of successful jumps between modes per iteration is shown in the table below.

  Dimensionality      Network architecture   Simple wormholes   Optimization-based   Region wormholes
  2                   10+10, 2               10                 15                   372
  4                   20+10, 4               6                  17                   407
  8                   20+10, 6               3                  19                   397
  16                  40+10, 8               1                  13                   338
  32                  50+10, 10              1                  9                    295
  Relative run time                          1                  2.6                  1

The network architecture column shows the number of units in the hidden layers, with each entry giving the number of Cauchy units plus the number of sigmoid units in the first hidden layer and the number of Cauchy units in the second hidden layer.

6 Summary

Maximum likelihood learning of energy-based models is hard because the gradient of the log probability of the data with respect to the parameters depends on the distribution defined by the model, and it is computationally expensive to even get samples from this distribution. Minimizing contrastive divergence is much easier than maximizing likelihood, but the brief Markov chain does not have time to mix between separated modes in the distribution(3).
The result is that the local structure around each data cluster is modelled well, but the relative masses of different clusters are not. In this paper we proposed three algorithms to deal with this phenomenon. Their success relies on the fact that the data distribution provides valuable suggestions about the location of the modes of a good model. Since the probability of the model distribution is expected to be substantial in these regions, they can be successfully used as target locations for long-range moves in an MCMC sampler. The MCMC sampler with point-to-point wormholes is simple but has a high rejection rate when the modes are not aligned. Performing local gradient descent after a jump significantly increases the acceptance rate, but only leads to a modest improvement in efficiency because of the extra computations required to maintain detailed balance. The MCMC sampler with region-to-region wormholes targets its moves to regions that are likely to have high probability under the model and therefore has a much better acceptance rate, provided the distribution can be modelled well by a mixture. None of the methods we have proposed will work well for high-dimensional, approximately factorial distributions that have exponentially many modes formed by the cross-product of multiple lower-dimensional distributions.

Acknowledgements

This research was funded by NSERC, CFI, and OIT. We thank Radford Neal and Yee-Whye Teh for helpful advice and Sam Roweis for providing software.

References

[1] S. Becker and Y. LeCun. Improving the convergence of back-propagation learning with second-order methods. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proc. of the 1988 Connectionist Models Summer School, pages 29–37, San Mateo, 1989. Morgan Kaufmann.
[2] Y. Bengio, R. Ducharme, and P. Vincent. A neural probabilistic language model. In Advances in Neural Information Processing Systems, 2001.
[3] G.E. Hinton.
Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[4] C. Jarzynski. Targeted free energy perturbation. Technical Report LAUR-01-2157, Los Alamos National Laboratory, 2001.
[5] R.M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, University of Toronto, Computer Science, 1993.
[6] C. Sminchisescu, M. Welling, and G. Hinton. Generalized darting Monte Carlo. Technical Report CSRG-478, University of Toronto, 2003.
[7] H. Tjelmeland and B.K. Hegstad. Mode jumping proposals in MCMC. Technical Report Statistics no. 1/1999, Norwegian University of Science and Technology, Trondheim, Norway, 1999.
[8] A. Voter. A Monte Carlo method for determining free-energy differences and transition state theory rate constants. 82(4), 1985.

(3) However, note that in cases where the modes are well separated, even Markov chains that run for an extraordinarily long time will not mix properly between those modes, and the results of this paper become relevant.
Unsupervised context sensitive language acquisition from a large corpus Zach Solan, David Horn, Eytan Ruppin Sackler Faculty of Exact Sciences Tel Aviv University Tel Aviv, Israel 69978 {rsolan,horn,ruppin}@post.tau.ac.il Shimon Edelman Department of Psychology Cornell University Ithaca, NY 14853, USA se37@cornell.edu Abstract We describe a pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of linguistic structures from a plain natural-language corpus. This paper addresses the issues of learning structured knowledge from a large-scale natural language data set, and of generalization to unseen text. The implemented algorithm represents sentences as paths on a graph whose vertices are words (or parts of words). Significant patterns, determined by recursive context-sensitive statistical inference, form new vertices. Linguistic constructions are represented by trees composed of significant patterns and their associated equivalence classes. An input module allows the algorithm to be subjected to a standard test of English as a Second Language (ESL) proficiency. The results are encouraging: the model attains a level of performance considered to be “intermediate” for 9th-grade students, despite having been trained on a corpus (CHILDES) containing transcribed speech of parents directed to small children. 1 Introduction A central tenet of generative linguistics is that extensive innate knowledge of grammar is essential to explain the acquisition of language from positive-only data [1, 2]. Here, we explore an alternative hypothesis, according to which syntax is an abstraction that emerges from exposure to language [3], coexisting with the corpus data within the same representational mechanism. Far from parsimonious, the representation we introduce allows partial overlap of linguistic patterns or constructions [4]. 
The incremental process of acquisition of patterns is driven both by structural similarities and by statistical information inherent in the data, so that frequent strings of similar composition come to be represented by the same pattern. The degree of abstraction of a pattern varies: it may be high, as in the case of a frame with several slots, each occupied by a member of an equivalence class associated with it, or low, as in the extreme case of idioms or formulaic language snippets, where there is no abstraction at all [5, 6]. The acquired patterns represent fully the original data, and, crucially, enable structure-sensitive generalization in the production and the assimilation of unseen examples. Previous approaches to the acquisition of linguistic knowledge, such as n-gram Hidden Markov Models (HMMs) that use raw data, aimed not at grammar induction but rather at expressing the probability of a sentence in terms of the conditional probabilities of its constituents. In comparison, statistical grammar induction methods aim to identify the most probable grammar, given a corpus [7, 8]. Due to the difficulty of this task, a majority of such methods have focused on supervised learning [9]. Grammar induction methods that do attempt unsupervised learning can be categorized into two classes: those that use corpora tagged with part-of-speech information, and those that work with raw, untagged data. The former includes such recent work as alignment-based learning [10], regular expression (“local grammar”) extraction [11], and algorithms that rely on the Minimum Description Length (MDL) principle [12]. The present work extends an earlier study [13] which offered preliminary results demonstrating the feasibility of unsupervised learning of linguistic knowledge from raw data. 
Here, we describe a new learning model and its implementation and extensive testing on a large corpus of transcribed spoken language from the CHILDES collection [14] (the larger corpora used in many other computational studies do not focus on children-directed language). Our new results suggest that useful patterns embodying syntactic and semantic knowledge of language can indeed be extracted from untagged corpora in an unsupervised manner. 2 The ADIOS model The ADIOS (Automatic DIstillation Of Structure) model has two components: (1) a Representational Data Structure (RDS) graph, and (2) a Pattern Acquisition (PA) algorithm that progressively refines the RDS in an unsupervised fashion. The PA algorithm aims to detect significant patterns (SP): similarly structured sequences of primitives that recur in the corpus. Each SP has an associated equivalence class (EC), which is a set of alternative primitives that may fit into the slot in the SP to construct a given path through the graph (see Figure 1a). The manner whereby the model supports generalization is exemplified in Figure 1c. The algorithm requires neither prior classification of the primitives into syntactic categories, nor even a pre-setting of their scope: it can bootstrap itself from a corpus in which all the words have been broken down into their constituent characters. One of the few free parameters in the earlier version of the model, ADIOS1, was the length L of the typical pattern the system was expected to acquire. Although presetting the value of L sufficed to learn simple artificial grammars, it proved to be problematic for natural language corpora. On the one hand, a small value of L led to over-generalization, because of insufficient uniformity of ECs associated with short SPs (not enough context sensitivity). On the other hand, using large values of L in conjunction with the ADIOS1 statistical learning algorithm did not lead to the emergence of well-supported SPs. 
The ADIOS2 model addresses this issue by first identifying long significant paths (SPATHs) in the graph, then analyzing their k-gram statistics to identify short significant patterns (SPs).

2.1 Step 1: identifying a significant path

For each path_i (a sequence of elements e_1 → e_2 → ... → e_k) longer than a given threshold, the algorithm constructs a set P = {p_1, ..., p_m} of paths of the same length as path_i. Each of the paths in P(path_i) consists of the same non-empty prefix (some sequence of graph edges), an equivalence class of vertices, and the same non-empty suffix (another sequence of edges); as an example, consider the set of three paths starting with 'is' and ending with the end-of-sentence symbol 'END' in Figure 1. Each such set is assigned a score S(P) := Σ_j s(path_j), with s(·) defined by eq. 1. This score assesses the likelihood that P captures a significant regularity rather than a random fluctuation in the data. The set with the maximal score in a given pass over the corpus is the SPATH.

Figure 1: (a) A small portion of the RDS, which is a directed multi-graph, for a simple corpus containing sentences #101 (is that a cat?), #102 (is that a dog?), #103 (and is that a horse?), and #104 (where is the dog?). Each sentence is depicted by a solid colored line; edge direction is marked by arrows and is labeled by the sentence number and within-sentence index. The sentences in this example join a pattern is that a {dog, cat, horse} ?. (b)
The abstracted pattern and the equivalence class associated with it are highlighted (edges that belong to sequences not subsumed by this pattern, e.g., #104, are untouched). (c) The identification of new significant patterns is done using the acquired equivalence classes (e.g., #200). In this manner, the system "bootstraps" itself, recursively distilling more and more complex patterns. This kind of abstraction also supports generalization: the original three sentences (shaded paths) form a pattern with two equivalence classes, which can then potentially generate six new sentences (e.g., the cat is play-ing and the horse is eat-ing).

s(path_i) = P^(k)(path_i) log[P^(k)(path_i) / P^(2)(path_i)]    (1)

P^(k)(path_i) = P(e_1) P(e_2|e_1) P(e_3|e_1 → e_2) ... P(e_k|e_1 → e_2 → ... → e_{k-1})    (2)

P^(2)(path_i) = P(e_1) P(e_2|e_1) P(e_3|e_2) ... P(e_k|e_{k-1})    (3)

The algorithm estimates the probabilities of different paths from the respective k-gram statistics (k being the length of the paths in the set under consideration), as per eq. 2. We observe that P^(1)(path_i) corresponds to the "first order" probability of choosing the set of nodes e_1, ..., e_k without taking into account their sequential order along the path; thus, P^(1)(path_i) = P(e_1) P(e_2) P(e_3) ... P(e_k). In comparison, P^(2) (see eq. 3) is a better candidate for identifying significant strings, as opposed to mere sets of nodes, because it takes into account the sequence of nodes along the path.

2.2 Step 2: identifying a significant pattern

Once the SPATH set is determined, the algorithm calculates the degree of cohesion c_ij for each one of its member sub-paths, according to eq. 4. The k-gram matrix in eq. 4 accumulates all the statistics through order k - 1 of the SPATH embedded in the graph, with the zeroth-order statistics located on the diagonal. The sub-path with the highest c score is now tagged as a Significant Pattern.
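Equations 1-3 can be sketched directly from substring counts. The miniature corpus below is our own (loosely modeled on Figure 1), and estimating the conditional probabilities by count ratios is a simplification of the k-gram statistics the model maintains.

```python
import math

# toy corpus (our own, in the spirit of Figure 1)
corpus = [["is", "that", "a", "cat", "?"],
          ["is", "that", "a", "dog", "?"],
          ["a", "horse", "ran"],
          ["where", "is", "the", "dog", "?"]]

def count(seq):
    # occurrences of the sub-sequence anywhere in any sentence
    n = len(seq)
    return sum(1 for s in corpus for i in range(len(s) - n + 1)
               if s[i:i + n] == list(seq))

total = sum(len(s) for s in corpus)

def p_full(path):
    # P^(k): P(e1) P(e2|e1) P(e3|e1 e2) ...  using full-history counts (eq. 2)
    p = count(path[:1]) / total
    for i in range(1, len(path)):
        p *= count(path[:i + 1]) / count(path[:i])
    return p

def p_bigram(path):
    # P^(2): P(e1) P(e2|e1) P(e3|e2) ...  using bigram counts (eq. 3)
    p = count(path[:1]) / total
    for i in range(1, len(path)):
        p *= count(path[i - 1:i + 1]) / count(path[i - 1:i])
    return p

def score(path):
    # s(path) = P^(k) log(P^(k) / P^(2))  (eq. 1)
    pk, p2 = p_full(path), p_bigram(path)
    return pk * math.log(pk / p2)

# "that a cat" is more predictable from its full history than from bigrams
# alone, so its score is positive
s = score(["that", "a", "cat"])
```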
Our experience shows that the two-stage mechanism just described induces coherent equivalence classes, leading to the formation of meaningful short patterns. The new pattern is added as a new vertex to the RDS graph, replacing the elements and edges it subsumes (Figure 1(b)). Note that only those edges of the multi-graph that belong to the detected pattern are rewired; edges that belong to sequences not subsumed by the pattern are left intact. This highly context-sensitive method of pattern abstraction, which is unique to our approach, allows ADIOS to achieve a high degree of representational parsimony without sacrificing generalization power.

P = [ p(e_1)                     p(e_1|e_2)                 p(e_1|e_2 e_3)             ...  p(e_1|e_2 e_3 ... e_k)
      p(e_2|e_1)                 p(e_2)                     p(e_2|e_3)                 ...  p(e_2|e_3 e_4 ... e_k)
      p(e_3|e_1 e_2)             p(e_3|e_2)                 p(e_3)                     ...  p(e_3|e_4 e_5 ... e_k)
      ...                        ...                        ...                        ...
      p(e_k|e_1 e_2 ... e_{k-1}) p(e_k|e_2 e_3 ... e_{k-1}) p(e_k|e_3 e_4 ... e_{k-1}) ...  p(e_k) ]

c_ij = P_ij log(P_ij / P_{i,j+1})   for i > j    (4)

During the pass over the corpus, the list of equivalence sets is updated continuously; new significant patterns are found using the current equivalence classes. For each set of candidate paths, the algorithm tries to fit one or more equivalence classes from the pool it maintains. Because an element can appear in several classes, the algorithm must check different combinations of equivalence classes. The winning combination is always the largest class for which most of the members are found among the candidate paths in the set (the ratio between the number of members that have been found among the paths and the total number of members in the equivalence class is compared to a fixed threshold as one of the configuration acceptance criteria). When not all the members appear in an existing set, the algorithm creates a new equivalence class containing only those members that do. Thus, as the algorithm processes more and more text, it bootstraps itself and enriches the RDS graph structure with new SPs and their accompanying equivalence sets.
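The class-matching step just described (pick the largest stored equivalence class whose membership overlap with the candidate slot-fillers clears a threshold, else create a new class) can be sketched as follows; the threshold value and the toy classes are hypothetical, not taken from the paper.

```python
def best_equivalence_class(candidates, classes, threshold=0.65):
    # scan stored classes from largest to smallest; accept the first whose
    # overlap ratio with the candidate fillers clears the (hypothetical) threshold
    for ec in sorted(classes, key=len, reverse=True):
        found = candidates & ec
        if len(found) / len(ec) >= threshold:
            return ec, False           # reuse an existing class
    return set(candidates), True       # otherwise create a new class

classes = [{"cat", "dog", "horse"}, {"eat", "play", "sleep"}]

# 2 of the 3 members of {cat, dog, horse} appear among the slot-fillers,
# so that class wins and no new class is created
winner, is_new = best_equivalence_class({"cat", "dog"}, classes)
```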
The recursive nature of this process enables the algorithm to form more and more complex patterns, in a hierarchical manner. The relationships among the distilled patterns can be visualized in a tree format, with tree depth corresponding to the level of recursion (e.g., Figure 2). Such a tree can be seen as a blueprint for creating acceptable ("grammatical") sequences of elements (strings). The number of all possible string configurations can be estimated and compared to the number of examples seen in the training corpus. The reciprocal of their ratio, η, is the generalization factor, which can be calculated for each pattern in the RDS graph (e.g., in Figure 1(c), η = 0.33). Patterns whose significance score S and generalization factor η are beneath certain thresholds are rejected. The algorithm halts if it processes a given amount of text without finding a new significant pattern or equivalence set (in real language acquisition this process may never stop).

[Figure 2 (pattern tree diagrams omitted): Two typical patterns extracted from a subset of the CHILDES collection [14]. Hundreds of such patterns and equivalence classes (underscored) together constitute a concise representation of the raw data. Some of the phrases that can be described/generated by patterns #16555 and #16543 are: let's change her...; I thought you gonna change her...; I was going to go to the.... None of these sentences appear in the training data, illustrating the ability of ADIOS to generalize. The numbers in parentheses denote the generalization factor η of the patterns and their components (e.g., pattern #16555 generates 86% new strings, while pattern #16543 generates 75% new strings). The generation process, which operates as a depth-first search of the tree corresponding to a pattern, is illustrated on the left. For each non-terminal, the children are scanned from left to right; for each equivalence class (underscored), one member is chosen. The scan continues from the node corresponding to that member, with the elements reached at the terminal nodes being written out.]

2.3 The test module

A collection of patterns distilled from a corpus can be seen as a kind of empirically determined construction grammar; cf. [5], p. 63. The patterns can eventually become highly abstract, thus endowing the model with an ability to generalize to unseen inputs. In production, generalization is possible, for example, when two equivalence classes are placed next to each other in a pattern, creating new paths among the members of the equivalence classes. In comprehension, generalization can also ensue from partial activation of existing patterns by novel inputs. This function is supported by the test module, designed to process a novel sentence by forming its distributed representation in terms of activities of existing patterns (a similar approach has been proposed for novel object and scene representation in vision [15]). These values, which can be used to support grammaticality judgment, are computed by propagating activation from bottom (the terminals) to top (the patterns) of the RDS. The initial activities a_j of the terminals e_j are calculated given the novel stimulus s_1, ..., s_k as follows:

a_j = \max_{l=1 \ldots k} P(s_l, e_j) \log \frac{P(s_l, e_j)}{P(s_l)\,P(e_j)} \qquad (5)

where P(s_l, e_j) is the joint probability of s_l and e_j appearing in the same equivalence class, and P(s_l) and P(e_j) are the probabilities of s_l and e_j appearing in any equivalence class.
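Equation (5) can be computed directly from the joint and marginal probability tables. Below is a minimal sketch; the function name and the probability values in the test are illustrative placeholders, and terms with zero joint probability are simply skipped (returning 0.0 if none remain, a simplification):

```python
import math

def terminal_activation(stimulus, e_j, joint, marginal):
    """Initial activity a_j of terminal e_j for a novel stimulus s_1..s_k (Eq. 5).

    joint[(s, e)]: probability of s and e appearing in the same equivalence class
    marginal[x]:   probability of x appearing in any equivalence class
    """
    vals = []
    for s in stimulus:
        p = joint.get((s, e_j), 0.0)
        if p > 0.0:
            # pointwise-mutual-information-style term, weighted by the joint
            vals.append(p * math.log(p / (marginal[s] * marginal[e_j])))
    return max(vals, default=0.0)
```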
For an equivalence class, the value propagated upwards is the strongest non-zero activation of its members; for a pattern, it is the average weight of the children nodes, on the condition that all the children were activated by adjacent inputs. Activity propagation continues until it reaches the top nodes of the pattern lattice. When the algorithm encounters a novel word, all the members of the terminal equivalence class contribute a value of ε = 0.01, which is then propagated upwards as usual. This enables the model to make an educated guess as to the meaning of the unfamiliar word, by considering the patterns that become active.

3 Empirical results

3.1 Working with real data: the CHILDES' parents

To illustrate the scalability of our method, we describe here briefly the outcome of applying the PA algorithm to a subset of the CHILDES collection [14], which consists of transcribed speech produced by, or directed at, children. The corpus we selected contained 300,000 sentences (1.3 million tokens) produced by parents. The following results were derived from a snapshot of the algorithm's state after 14 real-time days. Working at a rate of 250 patterns per day, the algorithm identified 3400 patterns and 3200 equivalence classes, representing the corpus in terms of these elements. The outcome (for some examples, see Figure 2) was encouraging: the algorithm found intuitively significant SPs and produced semantically adequate corresponding equivalence sets. The algorithm's considerable ability to recombine and reuse the constructions it learns is illustrated by the following examples, in which a few of the sentences generated by ADIOS (left) are shown alongside sentences from CHILDES described by the same compositions of patterns (tokenization as in the corpus):

    ADIOS                                          CHILDES (parents' speech)
    what doe s Spot say ?                          where doe s it go ?
    I don 't think it ' s good !                   that ' s good !
    it ' s gon ta go first .                       dog ' s gon ta eat first .
    there ' s a cup and there ' s some lamb s .    there ' s a table and there ' s some chair s .

3.2 Novel inputs

We have assessed the ability of the ADIOS model to deal with novel inputs by training it on the CHILDES collection and then subjecting it to a grammaticality judgment test, in the form of multiple-choice questions used in English as a Second Language (ESL) classes. The particular test (http://www.forumeducation.net/servlet/pages/vi/mat/gram/dia001.htm) has been administered to more than 10,000 people in the Göteborg (Sweden) education system as a diagnostic tool when assessing students at upper secondary levels (that is, children who typically have had 9 years of school, but only 6-7 years of English; a test designed for assessing proficiency of younger subjects in their native language would be more suitable, but is not available). The test consists of 100 three-choice questions; a score lower than 50% is considered pre-intermediate, 50%-70% intermediate, and a score greater than 70% advanced, with 65% being the average score for the population mentioned. For each of the three choices in a given question, our algorithm provided a grammaticality score. The choice with the highest score was declared the winner; if two choices received the same top score, the answer was "don't know". The algorithm's performance in this test at different stages of learning is plotted in Figure 3 versus the number of corpus sentences that have been processed. Over the course of training, the proportion of questions that received a definite answer grew (solid curve), while the proportion of correct answers remained around 60% (dashed curve). The best results were achieved with an ensemble of patterns distilled from two separate runs (two different generalization factors, 0.01 and 0.05, were applied in the two runs). As a benchmark, we compared the performance of ADIOS in this test with that of a word bi-gram model.
The latter was tested using the same procedure as ADIOS, except that significant patterns in the bi-gram model were defined as all the word pairs in the corpus (we emphasize that there is no training phase in the bi-gram model, as all the "patterns" are already available in the raw data). ADIOS outperformed the bi-gram model by answering 60% of the questions with 60% hits, compared to 20% of the questions with only 45% hits for the latter (note that chance performance in this test is 33%).

[Figure 3: The performance of ADIOS in an ESL test based on grammaticality judgment, plotted against the number of sentences (paths) scanned during training. The solid curve represents the percentage of questions with a valid answer; the dashed curve shows the percentage of correct answers.]

4 Concluding remarks

The ADIOS model incrementally learns the (morpho)syntax of English from "raw" input by distilling structural regularities (which can be thought of as constructions [16, 4]) from the accrued statistical co-occurrence and contextual cues. The resulting pattern-based representations are more powerful than finite automata because of their potential for recursion. Their depth, however, is not unbounded (rather, it is driven by the demands of the training data), a limitation that actually makes ADIOS a better candidate model for psycholinguistics (cf. the human limitations on processing recursion [17]). The patterns learned by ADIOS are also more powerful than context-free rewriting rules, because of their conservative nature: members of an equivalence class are only ever considered interchangeable in a specific context, a characteristic that distinguishes ADIOS from related approaches [18, 10, 9]. On the one hand, this results in larger (but not unmanageable) demands on memory, since more patterns need to be stored; on the other hand, crucially, it leads to efficient unsupervised probabilistic learning, and subsequent judicious use, of linguistic knowledge.
The ultimate goal of this project is to address the entire spectrum of English syntax-related phenomena (and, eventually, semantics, which, as the construction grammarians hold, is intimately connected to syntax [16, 4]). With respect to some of these, the ADIOS model is already known to behave reasonably: for example, subject-verb agreement (even long-range) is captured properly, due to the conservative structured pattern abstraction. While providing empirical evidence that can be brought to bear on the poverty of the stimulus argument for innateness, our work does not, of course, resolve completely the outstanding issues. In particular, the treatment of many aspects of syntax such as anaphora, auxiliaries, wh-questions, passive, control, etc. [19], awaits both further computational experimentation and further theoretical work.

Acknowledgments. Supported by the US-Israel Binational Science Foundation, the Dan David Prize Foundation, and the Horowitz Center for Complexity Science. We thank Todd Siegel for helpful suggestions.

References

[1] N. Chomsky. Knowledge of language: its nature, origin, and use. Praeger, New York, 1986. [2] S. Pinker. The Language Instinct: How the Mind Creates Language. William Morrow, New York, NY, 1994. [3] P. J. Hopper. Emergent grammar. In M. Tomasello, editor, The new psychology of language, pp. 155-175. Erlbaum, Mahwah, NJ, 1998. [4] W. Croft. Radical Construction Grammar: syntactic theory in typological perspective. Oxford University Press, Oxford, 2001. [5] R. W. Langacker. Foundations of cognitive grammar, volume I: theoretical prerequisites. Stanford University Press, Stanford, CA, 1987. [6] A. Wray. Formulaic language and the lexicon. Cambridge University Press, Cambridge, UK, 2002. [7] K. Lari and S. J. Young. The estimation of stochastic context-free grammars using the Inside-Outside algorithm. Computer Speech and Language, 4:35-56, 1990. [8] F. Pereira and Y. Schabès.
Inside-Outside reestimation from partially bracketed corpora. In Annual Meeting of the ACL, pp. 128-135, 1992. [9] D. Klein and C. D. Manning. Natural language grammar induction using a constituent-context model. In T. G. Dietterich, S. Becker, and Z. Ghahramani, ed., Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press. [10] M. van Zaanen and P. Adriaans. Comparing two unsupervised grammar induction systems: Alignment-based learning vs. EMILE. Report 05, School of Computing, Leeds University, 2001. [11] M. Gross. The construction of local grammars. In E. Roche and Y. Schabès, ed., Finite-State Language Processing, pp. 329-354. MIT Press, Cambridge, MA, 1997. [12] J. G. Wolff. Learning syntax and meanings through optimization and distributional analysis. In Y. Levy, I. M. Schlesinger, and M. D. S. Braine, ed., Categories and Processes in Language Acquisition, pp. 179-215. Lawrence Erlbaum, Hillsdale, NJ, 1988. [13] Z. Solan, E. Ruppin, D. Horn, and S. Edelman. Automatic acquisition and efficient representation of syntactic structures. In S. Thrun, editor, Advances in Neural Information Processing, volume 15, Cambridge, MA, 2003. MIT Press. [14] B. MacWhinney and C. Snow. The Child Language Data Exchange System. Journal of Child Language, 12:271-296, 1985. [15] S. Edelman. Constraining the neural representation of the visual world. Trends in Cognitive Sciences, 6:125-131, 2002. [16] A. E. Goldberg. Constructions: A construction grammar approach to argument structure. University of Chicago Press, Chicago, 1995. [17] M. C. MacDonald and M. H. Christiansen. Reassessing working memory: A comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109:35-54, 2002. [18] A. Clark. Unsupervised Language Acquisition: Theory and Practice. PhD thesis, COGS, University of Sussex, 2001. [19] I. A. Sag and T. Wasow. Syntactic theory: a formal introduction. CSLI Publications, Stanford, CA, 1999.
Modeling User Rating Profiles For Collaborative Filtering

Benjamin Marlin
Department of Computer Science, University of Toronto
Toronto, ON, M5S 3H5, CANADA
marlin@cs.toronto.edu

Abstract

In this paper we present a generative latent variable model for rating-based collaborative filtering called the User Rating Profile model (URP). The generative process which underlies URP is designed to produce complete user rating profiles, an assignment of one rating to each item for each user. Our model represents each user as a mixture of user attitudes, and the mixing proportions are distributed according to a Dirichlet random variable. The rating for each item is generated by selecting a user attitude for the item, and then selecting a rating according to the preference pattern associated with that attitude. URP is related to several models including a multinomial mixture model, the aspect model [7], and LDA [1], but has clear advantages over each.

1 Introduction

In rating-based collaborative filtering, users express their preferences by explicitly assigning ratings to items that they have accessed, viewed, or purchased. We assume a set of N users {1, ..., N}, a set of M items {1, ..., M}, and a set of V discrete rating values {1, ..., V}. In the natural case where each user has at most one rating r^u_y for each item y, the ratings for each user form a vector with one component per item. Of course, the values of some components are not known. We refer to user u's rating vector as their rating profile, denoted r^u. Rating prediction is the elementary task performed with rating-based data. Given a particular item and user, the goal is to predict the user's true rating for the item in question. Early work on rating prediction focused on neighborhood-based methods such as the GroupLens algorithm [9]. Personalized recommendations can be generated for any user by first predicting ratings for all items the user has not rated, and recommending items with the highest predicted ratings.
The capability to predict ratings has other interesting applications. Rating predictions can be incorporated with content-based scores to create a preference-augmented search procedure [4]. Rating prediction also facilitates an active approach to collaborative filtering using expected value of information. In such a framework the predicted rating of each item is interpreted as its expected utility to the user [2]. In order to gain the maximum advantage from the expressive power of ratings, a probabilistic model must enable the calculation of the distribution over ratings, and thus the calculation of predicted ratings. A handful of such models exist, including the multinomial mixture model shown in figure 3, and the aspect model shown in figure 1 [7]. As latent variable models, both the aspect model and the multinomial mixture model have an intuitive appeal. They can be interpreted as decomposing user preference profiles into a set of typical preference patterns, and the degree to which each user participates in each preference pattern. The settings of the latent variable are casually referred to as user attitudes. The multinomial mixture model constrains all users to have the same prior distribution over user attitudes, while the aspect model allows each user to have a different prior distribution over user attitudes. The added flexibility of the aspect model is quite attractive, but the interpretation of the distribution over user attitudes as parameters instead of random variables induces several problems.1 First, the aspect model lacks a principled, maximum likelihood inference procedure for novel user profiles. Second, the number of parameters in the model grows linearly with the number of users in the data set. Recent research has seen the proposal of several generative latent variable models for discrete data, including Latent Dirichlet Allocation [1], shown in figure 2, and multinomial PCA (a generalization of LDA to priors other than Dirichlet) [3].
LDA and mPCA were both designed with co-occurrence data in mind (word-document pairs). They can only be applied to rating data if the data is first processed into user-item pairs using some type of thresholding operation on the rating values. These models can then be used to generate recommendations; however, they cannot be used to infer a distribution over ratings of items, or to predict the ratings of items. The contribution of this paper is a new generative, latent variable model that views rating-based data at the level of user rating profiles. The URP model incorporates proper generative semantics at the user level that are similar to those used in LDA and mPCA, while the inner workings of the model are designed specifically for rating profiles. Like the aspect model and the multinomial mixture model, the URP model can be interpreted in terms of decomposing rating profiles into typical preference patterns, and the degree to which each user participates in each pattern. In this paper we describe the URP model, give model fitting and initialization procedures, and present empirical results for two data sets.

2 The User Rating Profile Model

The graphical representations of the aspect, LDA, multinomial mixture, and URP models are shown in figures 1 through 4. In all models U is a user index, Y is an item index, Z is a user attitude, Z_y is the user attitude responsible for item y, R is a rating value, R_y is a rating value for item Y, and β_{vyz} is a multinomial parameter giving P(R_y = v | Z_y = z). In the aspect model θ is a set of multinomial parameters where θ^u_z represents P(Z = z | U = u). The number of these parameters obviously grows as the number of training users is increased. In the mixture of multinomials model θ is a single distribution over user attitudes where θ_z represents P(Z = z). This gives the multinomial mixture model correct, yet simplistic, generative semantics at the user level.
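As a concrete illustration, the user-level generative process of URP described in the introduction (sample mixing proportions θ from a Dirichlet, then for each item pick an attitude Z_y from θ and a rating from the corresponding entry of β) can be sketched as follows. This is a toy sketch, not the authors' code; the data layout of β is an assumption made for the example:

```python
import random

def sample_profile(alpha, beta, rng):
    """Sample one complete user rating profile from the URP generative process.

    alpha: Dirichlet parameters over the K user attitudes
    beta[z][y]: rating distribution (list over the V rating values)
                for attitude z and item y
    Returns a list with one rating index per item.
    """
    K, M = len(alpha), len(beta[0])
    # theta ~ Dirichlet(alpha), via normalized Gamma draws
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    theta = [x / sum(g) for x in g]
    profile = []
    for y in range(M):
        z = rng.choices(range(K), weights=theta)[0]         # attitude for item y
        dist = beta[z][y]
        v = rng.choices(range(len(dist)), weights=dist)[0]  # rating for item y
        profile.append(v)
    return profile
```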
In both LDA and URP θ is not a parameter, but a Dirichlet random variable with parameter α. A unique θ is sampled for each user, where θ_z gives P(Z = z) for that user. This gives URP much more powerful generative semantics at the user level than the multinomial mixture model. As with LDA, URP could be generalized to use any continuous distribution on the simplex, but in this case the Dirichlet leads to efficient prediction equations. Note that the bottom level of the LDA model consists of an item variable Y, and ratings do not come into LDA at any point.

[Footnote 1: Girolami and Kabán have recently shown that a co-occurrence version of the aspect model can be interpreted as a MAP/ML estimated LDA model under a uniform Dirichlet prior [5]. Essentially the same relationship holds between the aspect model for ratings shown in figure 1, and the URP model.]

[Figures 1-4 (graphical models omitted): Figure 1: Aspect Model. Figure 2: LDA Model. Figure 3: Multinomial Mixture Model. Figure 4: URP Model.]

The probability of observing a given user rating profile r^u under the URP model is shown in equation 1, where we define δ(r^u_y, v) to be equal to 1 if user u assigned rating v to item y, and 0 otherwise. Note that we assume unspecified ratings are missing at random. As in LDA, the Dirichlet prior renders the computation of the posterior distribution P(θ, z | r^u, α, β) = P(θ, z, r^u | α, β) / P(r^u | α, β) intractable.

P(r^u|\alpha, \beta) = \int_\theta P(\theta|\alpha) \prod_{y=1}^{M} \prod_{v=1}^{V} \left( \sum_{z=1}^{K} P(Z_y = z|\theta)\, P(R_y = v|Z_y = z, \beta) \right)^{\delta(r^u_y, v)} d\theta \qquad (1)

3 Parameter Estimation

The procedure we use for parameter estimation is a variational expectation maximization algorithm based on free energy maximization. As with LDA, other methods including expectation propagation could be applied. We choose to apply a fully factored variational q-distribution as shown in equation 2. We define q(θ|γ^u) to be a Dirichlet distribution with Dirichlet parameters γ^u_z, and q(Z_y|φ^u_y) to be a multinomial distribution with parameters φ^u_{zy}.
P(\theta, z|\alpha, \beta, r^u) \approx q(\theta, z|\gamma^u, \phi^u) = q(\theta|\gamma^u) \prod_{y=1}^{M} q(Z_y = z_y|\phi^u_y) \qquad (2)

A per-user free energy function F[γ^u, φ^u, α, β] provides a variational lower bound on the log likelihood log P(r^u|α, β) of a single user rating profile. The sum of the per-user free energy functions F[γ^u, φ^u, α, β] yields the total free energy function F[γ, φ, α, β], which is a lower bound on the log likelihood of a complete data set of user rating profiles. The variational and model parameter updates are obtained by expanding F[γ, φ, α, β] using the previously described distributions, and maximizing the result with respect to γ^u, φ^u, α and β. The variational parameter updates are shown in equations 3 and 4. Ψ denotes the first derivative of the log gamma function, also known as the digamma or psi function.

\phi^u_{zy} \propto \left[ \prod_{v=1}^{V} \beta_{vyz}^{\delta(r^u_y, v)} \right] \exp\!\left( \Psi(\gamma^u_z) - \Psi\!\left( \sum_{j=1}^{K} \gamma^u_j \right) \right) \qquad (3)

\gamma^u_z = \alpha_z + \sum_{y=1}^{M} \phi^u_{zy} \qquad (4)

By iterating the variational updates with fixed α and β for a particular user, we are guaranteed to reach a local maximum of the per-user free energy F[γ^u, φ^u, α, β]. This iteration is a well-defined approximate inference procedure for the URP model. The model multinomial update has a closed form solution as shown in equation 5. This is not the case for the model Dirichlet α due to coupling of its parameters. However, Minka has proposed two iterative methods for estimating a Dirichlet distribution from probability vectors that can be used here. We give Minka's fixed-point iteration in equations 6 and 7, which yields very similar results compared to the alternative Newton iteration. Details for both procedures, including the inversion of the digamma function, may be found in [8].
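A minimal per-user implementation of updates (3) and (4) can be sketched as follows. This is a toy sketch rather than the authors' code: the data layout, the pure-Python digamma approximation (upward recurrence plus a truncated asymptotic series), and all numbers in the test are assumptions made for illustration:

```python
import math

def digamma(x):
    # recurrence psi(x) = psi(x+1) - 1/x, then asymptotic series for large x
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12.0 - f * (1/120.0 - f / 252.0))

def variational_step(ratings, alpha, beta, gamma):
    """One pass of Eqs. (3)-(4) for a single user.

    ratings: dict mapping item y -> observed rating v
    beta[v][y][z] = P(R_y = v | Z_y = z); alpha, gamma: length-K vectors
    Returns (phi, new_gamma) with phi[y] normalized over the K attitudes.
    """
    K = len(alpha)
    psi_total = digamma(sum(gamma))
    phi = {}
    for y, v in ratings.items():
        # only the observed rating v contributes, since delta(r_y, v) selects it
        raw = [beta[v][y][z] * math.exp(digamma(gamma[z]) - psi_total)
               for z in range(K)]
        s = sum(raw)
        phi[y] = [r / s for r in raw]                                   # Eq. (3)
    new_gamma = [alpha[z] + sum(phi[y][z] for y in phi) for z in range(K)]  # Eq. (4)
    return phi, new_gamma
```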
\beta_{vyz} \propto \sum_{u=1}^{N} \phi^u_{zy}\, \delta(r^u_y, v) \qquad (5)

\Psi(\alpha_z) = \Psi\!\left( \sum_{j=1}^{K} \alpha_j \right) + \frac{1}{N} \sum_{u=1}^{N} \left( \Psi(\gamma^u_z) - \Psi\!\left( \sum_{j=1}^{K} \gamma^u_j \right) \right) \qquad (6)

\alpha_z = \Psi^{-1}(\Psi(\alpha_z)) \qquad (7)

4 Model Fitting and Initialization

We give a variational expectation maximization procedure for model fitting in this section, as well as an initialization method that has proved to be very effective for the URP model. Lastly, we discuss stopping criteria used for the EM iterations.

4.1 Model Fitting

The variational inference procedure should be run to convergence to ensure a maximum likelihood solution. However, if we are satisfied with simply increasing the free energy at each step, other fitting procedures are possible. In general, the number of steps of variational inference can be determined by a user-dependent heuristic function H(u). Buntine uses a single step of variational inference for each user to fit the mPCA model. At the other end of the spectrum, Blei et al. select a sufficient number of steps to achieve convergence when fitting the LDA model. Empirically, we have found that simple linear functions of the number of ratings in each user profile provide a good heuristic. The details of the fitting procedure are given below.

E-Step:
1. For all users u
2.   For h = 0 to H(u)
3.     φ^u_{zy} ∝ ∏_{v=1}^{V} β_{vyz}^{δ(r^u_y, v)} exp(Ψ(γ^u_z) − Ψ(Σ_{j=1}^{K} γ^u_j))
4.     γ^u_z = α_z + Σ_{y=1}^{M} φ^u_{zy}

M-Step:
1. For each v, y, z set β_{vyz} ∝ Σ_{u=1}^{N} φ^u_{zy} δ(r^u_y, v).
2. While not converged
3.   Ψ(α_z) = Ψ(Σ_{j=1}^{K} α_j) + (1/N) Σ_{u=1}^{N} (Ψ(γ^u_z) − Ψ(Σ_{j=1}^{K} γ^u_j))
4.   α_z = Ψ^{-1}(Ψ(α_z))

4.2 Initialization and Early Stopping

Fitting the URP model can be quite difficult starting from randomly initialized parameters. The initialization method we have adopted is to partially fit a multinomial mixture model with the same number of user attitudes as the URP model. Fitting the multinomial mixture model for a small number of EM iterations yields a set of multinomial distributions encoded by β′, as well as a single multinomial distribution over user attitudes encoded by θ′.
To initialize the URP model we set β = β′, α = κθ′ where κ is a positive constant. Letting κ = 1 appears to give good results in practice. Normally EM is run until the bound on log likelihood converges, but this tends to lead to overfitting in some models, including the aspect model. To combat this problem Hofmann suggests using early stopping of the EM iteration [7]. We implemented early stopping for all models using a separate validation set to allow for a fair comparison.

5 Prediction

The primary task for any model applied to the rating-based collaborative filtering problem is to predict ratings for the items a user has not rated, based on the ratings the user has specified. Assume we have a user u with rating profile r^u, and we wish to predict the user's rating r^u_y for an unrated item y. The distribution over ratings for the item y can be calculated using the model as follows:

P(R_y = v|r^u) = \int_\theta \sum_z P(R_y = v|Z_y = z)\, P(Z_y = z|\theta)\, P(\theta|r^u)\, d\theta \qquad (8)

This quantity may look quite difficult to compute, but by interchanging the sum and integral, and appealing to our variational approximation q(θ|γ^u) ≈ P(θ|r^u), we obtain an expression in terms of the model and variational parameters.

P(R_y = v|r^u) = \sum_{z=1}^{K} \beta_{vyz} \frac{\gamma^u_z}{\sum_{j=1}^{K} \gamma^u_j} \qquad (9)

To compute P(R_y = v|r^u) according to equation 9 given the model parameters α and β, it is necessary to apply our variational inference procedure to compute γ^u. However, this only needs to be done once for each user in order to predict all unknown ratings in the user's profile. Given the distribution P(R_y|r^u), various rules can be used to compute the predicted rating. One could predict the rating with maximal probability, predict the expected rating, or predict the median rating. Of course, each of these prediction rules minimizes a different prediction error measure. In particular, median prediction minimizes the mean absolute error and is the prediction rule we use in our experiments.
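Equation (9) and the median prediction rule can be sketched as below; the function names and the toy parameter values are illustrative assumptions, not part of the paper:

```python
def predict_rating_distribution(beta_y, gamma):
    """Eq. (9): P(R_y = v | r^u) = sum_z beta_{vyz} * gamma_z / sum_j gamma_j.

    beta_y[v][z] = P(R_y = v | Z_y = z) for the item of interest.
    """
    total = sum(gamma)
    return [sum(beta_y[v][z] * gamma[z] / total for z in range(len(gamma)))
            for v in range(len(beta_y))]

def median_rating(dist):
    """Smallest rating index whose cumulative probability reaches 0.5."""
    c = 0.0
    for v, p in enumerate(dist):
        c += p
        if c >= 0.5:
            return v
    return len(dist) - 1
```

With a single attitude (K = 1), the predicted distribution reduces to that attitude's rating pattern, and the median rule picks the value at which the cumulative probability first crosses one half.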
6 Experimentation

We consider two different experimental procedures that test the predictive ability of a rating-based collaborative filtering method. The first is a weak generalization all-but-1 experiment in which one of each user's ratings is held out. The model is then trained on the remaining observed ratings and tested on the held-out ratings. This experiment is designed to test the ability of a method to generalize to other items rated by the users it was trained on. We introduce a second experimental protocol for testing a stronger form of generalization. The model is first trained using all ratings from a set of training users. Once the model is trained, an all-but-1 experiment is performed using a separate set of test users. This experiment is designed to test the ability of the model to generalize to novel user profiles. Two different base data sets were used in the experiments: the well-known EachMovie data set, and the recently released million-rating MovieLens data set. Both data sets were filtered to contain users with at least 20 ratings. EachMovie was filtered to remove movies with fewer than 2 ratings, leaving 1621 movies. The MovieLens data was similarly filtered, leaving 3592 movies. The EachMovie training sets contained 30000 users while the test sets contained 5000 users. The MovieLens training sets contained 5000 users while the test sets contained 1000 users. The EachMovie rating scale is from 0 to 5, while the MovieLens rating scale is from 1 to 5. Both types of experiment were performed for a range of numbers of user attitudes. For each model and number of user attitudes, each experiment was repeated on three different random partitions of each base data set into known ratings, held-out ratings, validation ratings, training users and testing users. In the weak generalization experiments the aspect, multinomial mixture, and URP models were tested.
In the strong generalization experiments only the multinomial mixture and URP models were tested, since a trained aspect model cannot be applied to new user profiles. Also recall that LDA and mPCA cannot be used for rating prediction, so they were not tested in these experiments. We provide results obtained with a best-K-neighbors version of the GroupLens method for various values of K as a baseline method.

[Figures 5-8 (plots omitted): Figure 5: EachMovie weak generalization. Figure 6: EachMovie strong generalization. Figure 7: MovieLens weak generalization. Figure 8: MovieLens strong generalization. Each plot shows normalized mean absolute error against K for the neighborhood, multinomial mixture, and URP methods, plus the aspect model in the weak generalization plots.]

7 Results

Results are reported in figures 5 through 8 in terms of normalized mean absolute error (NMAE). We define our NMAE to be the standard MAE normalized by the expected value of the MAE assuming uniformly distributed rating values and rating predictions. For the EachMovie data set E[MAE] is 1.944..., and for the MovieLens data set it is 1.6. Note that our definition of NMAE differs from that used by Goldberg et al. [6]. Goldberg et al. take the normalizer to be the difference between the minimum and maximum ratings, which means most of the error scale corresponds to performing much worse than random.
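The uniform normalizer can be reproduced exactly: with independent, uniformly distributed rating values and predictions, E[MAE] is the mean absolute difference of two uniform draws over the rating scale. A quick check (the function name is illustrative; the scales are those stated above, 0-5 for EachMovie and 1-5 for MovieLens):

```python
from fractions import Fraction

def uniform_expected_mae(values):
    """E|X - Y| for X, Y independent and uniform over the given rating values."""
    n = len(values)
    return Fraction(sum(abs(a - b) for a in values for b in values), n * n)
```

For the 0-5 scale this gives 70/36 = 35/18 = 1.944..., and for the 1-5 scale 40/25 = 1.6, matching the normalizers quoted in the text.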
In both the weak and strong generalization experiments using the EachMovie data set, the URP model performs significantly better than the other methods, and obtains the lowest prediction error. The results obtained from the MovieLens data set do not show the same clean trends as the EachMovie data set for the weak generalization experiment. The smaller size of the MovieLens data set seems to cause URP to overfit for larger values of K, thus increasing its test error. Nevertheless, the lowest error attained by URP is not significantly different from that obtained by the aspect model. In the strong generalization experiment the URP model again outperforms the other methods.

8 Conclusions

In this paper we have presented the URP model for rating-based collaborative filtering. Our model combines the intuitive appeal of the multinomial mixture and aspect models with the strong high-level generative semantics of LDA and mPCA. As a result of being specially designed for collaborative filtering, our model also contains unique rating profile generative semantics not found in LDA or mPCA. This gives URP the capability to operate directly on ratings data, and to efficiently predict all missing ratings in a user profile. This means URP can be applied to recommendation, as well as many other tasks based on rating prediction. We have empirically demonstrated on two different data sets that the weak generalization performance of URP is at least as good as that of the aspect and multinomial mixture models. For online applications where it is impractical to refit the model each time a rating is supplied by a user, the result of interest is strong generalization performance. The aspect model cannot be applied in a principled manner in such a scenario, and we see that URP outperforms the other methods by a significant margin.
Acknowledgments

We thank the Compaq Computer Corporation for the use of the EachMovie data set, and the GroupLens Research Group at the University of Minnesota for use of the MovieLens data set. Many thanks go to Rich Zemel for helpful comments and numerous discussions about this work.

References

[1] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, Jan. 2003. [2] C. Boutilier, R. S. Zemel, and B. Marlin. Active collaborative filtering. In Proceedings of the Nineteenth Annual Conference on Uncertainty in Artificial Intelligence, pages 98-106, 2003. [3] W. Buntine. Variational extensions to EM and multinomial PCA. In Proceedings of the European Conference on Machine Learning, 2002. [4] M. Claypool, A. Gokhale, T. Miranda, P. Murnikov, D. Netes, and M. Sartin. Combining content-based and collaborative filters in an online newspaper. In Proceedings of ACM SIGIR Workshop on Recommender Systems, 1999. [5] M. Girolami and A. Kabán. On an equivalence between PLSI and LDA. In Proceedings of the ACM Conference on Research and Development in Information Retrieval, pages 433-434, 2003. [6] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval Journal, 4(2):133-151, July 2001. [7] T. Hofmann. Learning What People (Don't) Want. In Proceedings of the European Conference on Machine Learning, 2001. [8] T. Minka. Estimating a Dirichlet Distribution. Unpublished, 2003. [9] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. GroupLens: An Open Architecture for Collaborative Filtering of Netnews. In Proceedings of ACM 1994 Conference on Computer Supported Cooperative Work, pages 175-186, Chapel Hill, North Carolina, 1994. ACM.
2003
31
2,432
Learning a world model and planning with a self-organizing, dynamic neural system

Marc Toussaint
Institut für Neuroinformatik, Ruhr-Universität Bochum, ND 04, 44780 Bochum, Germany
mt@neuroinformatik.rub.de

Abstract

We present a connectionist architecture that can learn a model of the relations between perceptions and actions and use this model for behavior planning. State representations are learned with a growing self-organizing layer which is directly coupled to a perception and a motor layer. Knowledge about possible state transitions is encoded in the lateral connectivity. Motor signals modulate this lateral connectivity, and a dynamic field on the layer organizes a planning process. All mechanisms are local and adaptation is based on Hebbian ideas. The model is continuous in the action, perception, and time domain.

1 Introduction

Planning of behavior requires some knowledge about the consequences of actions in a given environment. A world model captures such knowledge. There is clear evidence that nervous systems use such internal models to perform predictive motor control, imagery, inference, and planning in a way that involves a simulation of actions and their perceptual implications [1, 2]. However, the level of abstraction, the representation, on which such simulation occurs is hardly the level of physical coordinates. A tempting hypothesis is that the representations the brain uses for reasoning and planning are particularly designed (by adaptation or evolution) for just this purpose. To address such ideas we first need a basic model for how a connectionist architecture can encode a world model and how self-organization of inherent representations is possible. In the field of machine learning, world models are a standard approach to handling behavior organization problems (for a comparison of model-based approaches to classical, model-free reinforcement learning see, e.g., [3]).
The basic idea of using neural networks to model the environment was given in [4, 5]. Our approach for a connectionist world model (CWM) is functionally similar to existing machine learning approaches with self-organizing state space models [6, 7]. It is able to grow neural representations for different world states and to learn the implications of actions in terms of state transitions. It differs though from classical approaches in some crucial points:

• The model is continuous in the action, the perception, as well as the time domain.

[Figure 1: Schema of the CWM architecture. A central layer with lateral weights $w_{ji}$ and activations $x_i$ is coupled to a perceptive layer (stimulus $s$, kernels $k_s(s_j, s)$) and a motor layer (motor signal $a$, kernels $k_a(a_{ji}, a)$).]

• All mechanisms are based on local interactions. The adaptation mechanisms are largely derived from the idea of Hebbian plasticity. E.g., the lateral connectivity, which encodes knowledge about possible state transitions, is adapted by a variant of the temporal Hebb rule and allows local adaptation of the world model to local world changes.

• The coupling to the motor system is fully integrated in the architecture via a mechanism incorporating modulating synapses (comparable to shunting mechanisms).

• The two dynamic processes on the CWM, the "tracking" process estimating the current state and the planning process (similar to Dynamic Programming), will be realized by activation dynamics on the architecture, incorporating in particular lateral interactions, inspired by neural fields [8].

The outline of the paper is as follows: In the next section we describe our architecture, the dynamics of activation, and the couplings to perception and motor layers. In section 3 we introduce a dynamic process that generates, as an attractor, a value field over the layer which is comparable to a state value function estimating the expected future return and allows for goal-oriented behavior organization. The self-organization process and adaptation mechanisms are described in section 4.
We demonstrate the features of the model on a maze problem in section 5 and finally discuss the results and the model in general terms.

2 The model

The core of the connectionist world model (CWM) is a neural layer which is coupled to a perceptual layer and a motor layer, see figure 1. Let us enumerate the units of the central layer by $i = 1, \dots, N$. Lateral connections within the layer may exist and we denote a connection from the $i$-th to the $j$-th unit by $(ji)$. E.g., "$\sum_{(ji)}$" means "summing over all existing connections $(ji)$". To every unit we associate an activation $x_j \in \mathbb{R}$ which is governed by the dynamics

$$\tau_x\, \dot{x}_j = -x_j + k_s(s_j, s) + \eta \sum_{(ji)} k_a(a_{ji}, a)\, w_{ji}\, x_i \,, \qquad (1)$$

which we will explain in detail in the following. First of all, the $x_i$ are the time-dependent activations, and the dot-notation $\tau_x \dot{x} = F(x)$ means a time derivative which we implement algorithmically by an Euler integration step $x(t) = x(t-1) + \frac{1}{\tau_x} F(x(t-1))$. The first term in (1) induces an exponential relaxation while the second and third terms are the inputs. $k_s(s_j, s)$ is the forward excitation that unit $j$ receives from the perceptive layer. Here, $s_j$ is the codebook vector (receptive field) of unit $j$ onto the perception layer, which is compared to the current stimulus $s$ via the kernel function $k_s$. We choose Gaussian kernels, as is the case, e.g., for typical radial basis function networks. The third term, $\sum_{(ji)} k_a(a_{ji}, a)\, w_{ji}\, x_i$, describes the lateral interaction on the central layer. Namely, unit $j$ receives lateral input from unit $i$ iff there exists a connection $(ji)$ from $i$ to $j$. This lateral input is weighted by the connection's synaptic strength $w_{ji}$. Additionally there is another term entering multiplicatively into this lateral interaction: lateral inputs are modulated depending on the current motor activation.
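As a concrete illustration, one Euler step of the activation dynamics (1) can be sketched as follows. This is a minimal sketch, not the paper's implementation: all function and argument names are hypothetical, and the default kernel widths loosely follow the parameter table in section 5 ($2\sigma_s^2 = 0.01$, $2\sigma_a^2 = 0.5$).

```python
import numpy as np

def cwm_activation_step(x, s, a, s_cb, conns, a_cb, w,
                        tau_x=2.0, eta=0.1, sig2_s=0.005, sig2_a=0.25):
    """One Euler step of eq. (1):
    tau_x * dx_j = -x_j + k_s(s_j, s) + eta * sum_(ji) k_a(a_ji, a) w_ji x_i.
    s_cb: perceptive codebook vectors s_j; conns: list of connections (j, i);
    a_cb, w: motor codebook vectors a_ji and weights w_ji keyed by (j, i)."""
    def k_s(sj):  # Gaussian kernel on the perceptive layer
        return np.exp(-np.sum((sj - s) ** 2) / (2 * sig2_s))
    def k_a(aji):  # Gaussian kernel on the motor layer
        return np.exp(-np.sum((aji - a) ** 2) / (2 * sig2_a))
    dx = -x + np.array([k_s(sj) for sj in s_cb])
    for (j, i) in conns:  # lateral input, gated by the motor match
        dx[j] += eta * k_a(a_cb[(j, i)]) * w[(j, i)] * x[i]
    return x + dx / tau_x
```

The unit whose codebook vector matches the current stimulus receives the strongest forward excitation, as intended by the first kernel term.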
We chose a modulation of the following kind: to every existing connection $(ji)$ we associate a codebook vector $a_{ji}$ onto the motor layer which is compared to the current motor activity $a$ via a Gaussian kernel function $k_a$. Due to the multiplicative coupling, a connection contributes to lateral inputs only when the current motor activity "matches" the codebook vector of this connection. The modulation of information transmission by multiplicative or divisive interactions is a fundamental principle in biological neural systems [9]. One example is shunting inhibition, where inhibitory synapses attach to regions of the dendritic tree near the soma and thereby modulate the transmission of the dendritic input [10]. In our architecture, a shunting synapse, receiving input from the motor layer, might attach to only one branch of a (lateral) dendritic tree and thereby multiplicatively modulate the lateral inputs summed up at this subtree.

For the following it is helpful to briefly discuss a certain relation between equation (1) and a classical probabilistic approach. Let us assume normalized kernel functions

$$k_s(s_j, s) = \frac{1}{\sqrt{2\pi}\,\sigma_s} \exp\!\Big(\!-\frac{(s_j - s)^2}{2\sigma_s^2}\Big) \,, \qquad k_a(a_{ji}, a) = \frac{1}{\sqrt{2\pi}\,\sigma_a} \exp\!\Big(\!-\frac{(a_{ji} - a)^2}{2\sigma_a^2}\Big) \,.$$

These kernel functions can directly be interpreted as probabilities: $k_s(s_j, s)$ represents the probability $P(s|j)$ that the stimulus is $s$ if $j$ is active, and $k_a(a_{ji}, a)$ the probability $P(a|j, i)$ that the action is $a$ if a transition $i \to j$ occurred. As for typical hidden Markov models we may derive the prior probability distribution $P(j|a)$, given the action:

$$P(j|a, i) = \frac{P(a|j, i)\, P(j|i)}{P(a|i)} = \frac{k_a(a_{ji}, a)\, P(j|i)}{P(a|i)} \,, \qquad P(j|a) = \sum_i \frac{k_a(a_{ji}, a)\, P(j|i)}{P(a|i)}\, P(i) \,.$$

$P(a|i)$ can be computed by normalizing $P(a|j, i)\, P(j|i)$ over $j$ such that $\sum_j P(j|a, i) = 1$.
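The prior $P(j|a)$ derived above can be computed directly from the kernel values. The sketch below assumes (hypothetically, for illustration only) that the kernel values and transition probabilities are given as dense arrays; the normalization over $j$ implements $P(a|i)$ exactly as described.

```python
import numpy as np

def transition_prior(k_a_vals, p_j_given_i, p_i):
    """Prior P(j|a) from motor-kernel values k_a(a_ji, a) (shape [J, I]),
    transition probabilities P(j|i) (shape [J, I], columns sum to 1)
    and a state prior P(i) (shape [I])."""
    joint = k_a_vals * p_j_given_i          # proportional to P(a|j,i) P(j|i)
    p_a_given_i = joint.sum(axis=0)         # normalizer, so sum_j P(j|a,i) = 1
    p_j_given_a_i = joint / p_a_given_i     # P(j|a,i)
    return p_j_given_a_i @ p_i              # P(j|a) = sum_i P(j|a,i) P(i)
```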
What we would like to point out here is that in equation (1), the lateral input $\sum_{(ji)} k_a(a_{ji}, a)\, w_{ji}\, x_i$ can be compared to the prior $P(j|a)$ under the assumption that $x_i$ is proportional to $P(i)$ and if we have an adaptation mechanism for $w_{ji}$ which converges to a value proportional to $P(j|i)$ and which also ensures normalization, i.e., $\sum_j k_a(a_{ji}, a)\, w_{ji} = 1$ for all $i$ and $a$. This insight will help to judge some details of the next two sections. The probabilistic interpretation can be further exploited, e.g., comparing the input of a unit $j$ (or, in the quasi-stationary case, $x_j$ itself) to the posterior and deriving theoretically grounded adaptation mechanisms. But this is not within the scope of this paper.

3 The dynamics of planning

To organize goal-oriented behavior we assume that, in parallel to the activation dynamics (1), there exists a second dynamic process which can be motivated from classical approaches to reinforcement learning [11, 12]. Recall the Bellman equation

$$V^*_\pi(i) = \sum_a \pi(a|i) \sum_j P(j|i, a)\, \big[ r(j) + \gamma\, V^*_\pi(j) \big] \,, \qquad (2)$$

yielded by the expectation $V^*(i)$ of the discounted future return $R(t) = \sum_{\tau=1}^{\infty} \gamma^{\tau-1} \varrho(t+\tau)$, which obeys $R(t) = \varrho(t+1) + \gamma R(t+1)$, when situated in state $i$. Here, $\gamma$ is the discount factor, and we presumed that the received rewards $\varrho(t)$ actually depend only on the state and thus enter equation (2) only in terms of the reward function $r(i)$ (we neglect here that rewards may directly depend on the action). Behavior is described by a stochastic policy $\pi(a|i)$, the probability of executing action $a$ in state $i$. Knowing the property (2) of $V^*$ it is straightforward to define a recursion algorithm for an approximation $V$ of $V^*$ such that $V$ converges to $V^*$. This recursion algorithm is called value iteration and reads

$$\tau_v\, \Delta V_\pi(i) = -V_\pi(i) + \sum_a \pi(a|i) \sum_j P(j|i, a)\, \big[ r(j) + \gamma\, V_\pi(j) \big] \,, \qquad (3)$$

with a "reciprocal learning rate" or time constant $\tau_v$. Note that (2) is the fixed point equation of (3).
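The recursion (3) can be sketched for a greedy policy as follows. This sketch assumes, unlike the CWM's learned representation, that the transition model is given as a dense array; names are illustrative.

```python
import numpy as np

def value_iteration(P, r, gamma=0.8, tau_v=2.0, n_steps=500):
    """Relaxation form of value iteration (eq. 3) under a greedy policy:
    V(i) <- V(i) + (1/tau_v) * (-V(i) + max_a sum_j P[a,j,i] * (r[j] + gamma*V[j])).
    P has shape [A, J, I] with P[a, j, i] = P(j | i, a)."""
    V = np.zeros_like(r, dtype=float)
    for _ in range(n_steps):
        backup = np.einsum('aji,j->ai', P, r + gamma * V).max(axis=0)
        V += (-V + backup) / tau_v
    return V
```

For a two-state problem where action 1 moves the agent into a rewarding absorbing state, the fixed point of (2) is reached to numerical precision.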
The practical meaning of the state-value function $V$ is that it quantifies how desirable and promising it is to reach a state $i$, also accounting for future rewards to be expected. In particular, if one knows the current state $i$ it is a simple and efficient rule of behavior to choose the action $a$ that leads to the neighbor state $j$ with maximal $V(j)$ (the greedy policy). In that sense, $V(i)$ provides a smooth gradient towards desirable goals. Note though that direct value iteration presumes that the state and action spaces are known and finite, and that the current state and the world model $P(j|i, a)$ are known.

How can we transfer these classical ideas to our model? We suppose that the CWM is given a goal stimulus $g$ from outside, i.e., it is given the command to reach a world state that corresponds to the stimulus $g$. This stimulus induces a reward excitation $r_i = k_s(s_i, g)$ for each unit $i$. Now, besides the activations $x_i$, we introduce another field over the CWM, the value field $v_i$, which is in analogy to the state-value function $V(i)$. The dynamics is

$$\tau_v\, \dot{v}_i = -v_i + r_i + \gamma \max_{(ji)} \big( w_{ji}\, v_j \big) \,, \qquad (4)$$

and is well comparable to (3): one difference is that $v_i$ estimates the "current-plus-future" reward $\varrho(t) + \gamma R(t)$ rather than the future reward only; in the upper notation this corresponds to the value iteration $\tau_v\, \Delta V_\pi(i) = -V_\pi(i) + r(i) + \sum_a \pi(a|i) \sum_j P(j|i, a)\, \gamma\, V_\pi(j)$. As is commonly done for value iteration, we assumed $\pi$ to be the greedy policy. More precisely, we considered only that action (i.e., that connection $(ji)$) that leads to the neighbor state $j$ with maximal value $w_{ji}\, v_j$. In effect, the summations over $a$ as well as over $j$ can be replaced by a maximization over $(ji)$. Finally we replaced the probability factor $P(j|i, a)$ by $w_{ji}$; we will see in the next section how $w_{ji}$ is learned and what it will converge to.
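A minimal sketch of relaxing the value field (4) to its fixed point, assuming (hypothetically) that the lateral connections are stored as a dictionary mapping $(j, i)$ to $w_{ji}$; units without outgoing connections simply receive no lateral term.

```python
import numpy as np

def value_field(conns, r, gamma=0.8, tau_v=2.0, n_steps=500):
    """Relax eq. (4), tau_v * dv_i = -v_i + r_i + gamma * max_(ji) (w_ji v_j),
    to its fixed point. conns maps a connection (j, i) to its weight w_ji;
    r holds the reward excitations r_i = k_s(s_i, g)."""
    v = np.zeros(len(r))
    for _ in range(n_steps):
        best = np.zeros(len(r))
        for (j, i), w_ji in conns.items():
            best[i] = max(best[i], w_ji * v[j])  # greedy max over connections
        v += (-v + r + gamma * best) / tau_v
    return v
```

On a three-unit chain with a self-loop at the rewarded goal unit, the field builds the expected discounted gradient towards the goal.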
In practice, the value field will relax quickly to its fixed point $v^*_i = r_i + \gamma \max_{(ji)}(w_{ji}\, v^*_j)$ and stay there if the goal does not change and if the world model is not re-adapted (see the experiments). The quasi-stationary value field $v_i$ together with the current (typically non-stationary) activations $x_i$ allow the system to generate a motor signal that guides towards the goal. More precisely, the value field $v_i$ determines for every unit $i$ the "best" neighbor unit $k_i = \arg\max_j w_{ji}\, v_j$. The output motor signal is then the activation average

$$a = \sum_i x_i\, a_{k_i i} \qquad (5)$$

of the motor codebook vectors $a_{k_i i}$ that have been learned for the corresponding connections. Hence, the information flow between the central layer and the motor system goes both ways: in the "tracking" process as given by equation (1), information flows from the motor layer to the central layer: motor signals activate the corresponding connections and cause lateral, predictive excitations. In the action selection process as given by equation (5), signals flow from the central layer back to the motor layer to induce the motor activity that should turn predictions into reality. Depending on the specific problem and the representation of motor commands on the motor layer, a post-processing of the motor signal $a$, e.g. a competition between contradictory motor units, might be necessary. In our experiments we have two motor units and always normalize the 2D vector $a$ to unit length.

4 Self-organization and adaptation

The self-organization process of the central layer combines techniques from standard self-organizing maps [13, 14] and their extensions w.r.t. growing representations [15, 16] and the learning of temporal dependencies in lateral connections [17, 18].
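Before turning to adaptation, the action selection of equation (5) above can be sketched as follows (hypothetical data layout, with the unit-length normalization used in the experiments; the paper does not specify an implementation at this level):

```python
import numpy as np

def select_action(x, v, conns, a_cb):
    """Eq. (5): for each unit i pick the best neighbor k_i = argmax_j w_ji * v_j,
    then average the motor codebook vectors a_{k_i i} weighted by the
    activations x_i, and normalize the 2D result to unit length."""
    a = np.zeros(2)
    for i in range(len(x)):
        best_j, best_val = None, -np.inf
        for (j, ii), w_ji in conns.items():
            if ii == i and w_ji * v[j] > best_val:
                best_j, best_val = j, w_ji * v[j]
        if best_j is not None:
            a += x[i] * a_cb[(best_j, i)]
    n = np.linalg.norm(a)
    return a / n if n > 0 else a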
The free variables of a CWM subject to adaptation are (1) the number of neurons and the lateral connectivity itself, (2) the codebook vectors $s_i$ and $a_{ji}$ to the perceptive and motor layers, respectively, and (3) the weights $w_{ji}$ of the lateral connections. The adaptation mechanisms we propose are based on three general principles: (1) the addition of units for representation of novel states (novelty), (2) the fine-tuning of the codebook vectors of units and connections (plasticity), and (3) the adaptation of lateral connections in favor of better prediction performance (prediction).

Novelty. Mechanisms similar to those of Fuzzy ARTMAPs [15] or Growing Neural Gas [16] account for the insertion of new units when novelty is detected. We detect novelty in a straightforward manner, namely when the difference between the actual perception and the best matching unit becomes too large. To make this detection more robust, we use a low-pass filter (leaky integrator). At a given time, let $z$ be the best matching unit, $z = \arg\max_i x_i$. For this unit we integrate the error measure $e_z$:

$$\tau_e\, \dot{e}_z = -e_z + \big( 1 - k_s(s_z, s) \big) \,.$$

We normalize $k_s(s_z, s)$ such that it equals 1 in the perfect matching case when $s_z = s$. Whenever this error measure exceeds a threshold called vigilance, $e_z > \nu$, $\nu \in [0, 1]$, we generate a new unit $j$ with codebook vector equal to the current perception, $s_j = s$, and a connection from the last best matching unit $z^\dagger$ with codebook vector equal to the current motor signal, $a_{j z^\dagger} = a$. The errors of both the new and the old unit are reset to zero, $e_z \leftarrow 0$, $e_j \leftarrow 0$.

Plasticity. We use simple Hebbian plasticity to fine-tune the representations of existing units and connections. Over time, the receptive fields of units and connections become more and more similar to the average stimuli that activated them. We use the update rules

$$\tau_s\, \dot{s}_z = -s_z + s \,, \qquad \tau_a\, \dot{a}_{z z^\dagger} = -a_{z z^\dagger} + a \,,$$

with learning time constants $\tau_s$ and $\tau_a$.

Prediction and a temporal Hebb rule.
Although perfect prediction is not the actual objective of the CWM, the predictive power is a measure of the correctness of the learned world model, and good predictive power goes hand in hand with good behavior planning. The first and simple mechanism to adapt the predictive power is to grow a new lateral connection between two successive best matching units $z^\dagger$ and $z$ if it does not yet exist. The new connection is initialized with $w_{z z^\dagger} = 1$ and $a_{z z^\dagger} = a$.

The second, more interesting mechanism addresses the adaptation of $w_{ji}$ based on new experiences and can be motivated as follows: the temporal Hebb rule strengthens a synapse if the pre- and post-synaptic neurons spike in sequence, depending on the inter-spike interval, and is supposed to roughly describe LTP and LTD (see, e.g., [19]). In a population code model, this corresponds to a measure of correlation between the pre-synaptic and the delayed post-synaptic activity. In our case we additionally have to account for the action-dependence of a lateral connection. We do so by considering the term $k_a(a_{ji}, a)\, x_i$ instead of only the pre-synaptic activity. As a measure of temporal correlation we choose to relate this term to the derivative $\dot{x}_j$ of the post-synaptic unit instead of its delayed activation; this saves us from specifying an ad-hoc "typical" delay and directly reflects that, in equation (1), lateral inputs relate to the derivative of $x_j$. Hence, we consider the product $\dot{x}_j\, k_a(a_{ji}, a)\, x_i$ as the measure of correlation. Our concrete implementation is a robust version of this idea:

$$\tau_w\, \dot{w}_{ji} = \kappa_{ji} \big[ c_{ji} - w_{ji}\, \kappa_{ji} \big] \,, \quad \text{where} \quad \tau_\kappa\, \dot{c}_{ji} = -c_{ji} + \dot{x}_j\, k_a(a_{ji}, a)\, x_i \,, \qquad \tau_\kappa\, \dot{\kappa}_{ji} = -\kappa_{ji} + k_a(a_{ji}, a)\, x_i \,.$$

Here, $c_{ji}$ and $\kappa_{ji}$ are simply low-pass filters of $\dot{x}_j\, k_a(a_{ji}, a)\, x_i$ and of $k_a(a_{ji}, a)\, x_i$. The term $w_{ji}\, \kappa_{ji}$ ensures convergence (assuming quasi-static $c_{ji}$ and $\kappa_{ji}$) of $w_{ji}$ towards $c_{ji} / \kappa_{ji}$. The time scale of adaptation is modulated by the recent activity $\kappa_{ji}$ of the connection.
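The low-pass-filtered Hebb rule above can be sketched for a single connection as follows; the class layout and names are hypothetical, and the time constants follow the parameter table in section 5.

```python
import numpy as np

class HebbWeight:
    """Temporal Hebb rule for one lateral connection (ji):
    tau_k * dc     = -c     + dx_j * k_a * x_i
    tau_k * dkappa = -kappa + k_a * x_i
    tau_w * dw     = kappa * (c - w * kappa)
    so that w converges towards c / kappa for quasi-static c and kappa."""
    def __init__(self, w0=1.0, tau_w=10.0, tau_k=100.0):
        self.w, self.c, self.kappa = w0, 0.0, 0.0
        self.tau_w, self.tau_k = tau_w, tau_k

    def step(self, dx_j, k_a, x_i):
        self.c += (-self.c + dx_j * k_a * x_i) / self.tau_k
        self.kappa += (-self.kappa + k_a * x_i) / self.tau_k
        self.w += self.kappa * (self.c - self.w * self.kappa) / self.tau_w
        return self.w
```

With constant pre-synaptic activity and a constant post-synaptic derivative, the weight indeed relaxes towards the ratio of the two filtered quantities.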
5 Experiments

To demonstrate the functionality of the CWM we consider a simple maze problem. The parameters we used are $\tau_x = 2$, $\eta = 0.1$, $2\sigma_s^2 = 0.01$, $2\sigma_a^2 = 0.5$, $\tau_v = 2$, $\gamma = 0.8$, $\tau_e = 10$, $\tau_s = 20$, $\tau_a = 5$, $\tau_w = 10$, $\tau_\kappa = 100$.

Figure 2a displays the geometry of the maze. The "agent" is allowed to move continuously in this maze. The motor signal is 2-dimensional and encodes the forces $f$ in $x$- and $y$-directions; the agent has momentum and friction according to $\ddot{x} = 0.2\,(f - \dot{x})$. As a stimulus, the CWM is given the 2D position $x$. Figure 2a also displays the (lateral) topology of the central layer after 30 000 time steps of self-organization, after which the system becomes quasi-stationary. The model is learned from scratch, initialized with one random unit. During this first phase, behavior planning is switched off and the maze is explored with a random walk that changes its direction only with probability 0.1 at a time. In the illustration, the positions of the units correspond to the codebook vectors that have been learned. The directedness and the codebook vectors of the connections cannot be displayed.

After the self-organization phase we switched on behavior planning. A goal stimulus corresponding to a random position in the maze is given and changed every time the agent reaches the goal. Generally, the agent has no problem finding a path to the goal. Figure 2b already displays a more interesting example. The agent has reached goal A and now seeks goal B. However, we blocked passage 1. Starting at A the agent moves normally until it reaches the blockade. It stays there and moves slowly up and down in front of the blockade for a while; this while is of the order of the low-pass filter time scale $\tau_\kappa$. During this time, the lateral weights of the connections pointing to the left are depressed, and after about 150 time steps this change of weights has enough influence on the value field dynamics (4) to let the agent choose the way around the bottom to goal B.
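The agent's point-mass dynamics $\ddot{x} = 0.2\,(f - \dot{x})$ quoted above can be sketched as a simple Euler step (a unit time step is assumed here purely for illustration; the paper does not state its integration step):

```python
import numpy as np

def agent_step(pos, vel, f, dt=1.0):
    """Euler step of ddot(x) = 0.2 * (f - dot(x)): the force f accelerates
    the agent while the -dot(x) term acts as friction on its velocity."""
    acc = 0.2 * (f - vel)
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel
```

Under a constant unit force the velocity saturates at 1, which is the friction-limited terminal speed of this dynamics.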
Figure 2c displays the next scene: starting at B, the agent tries to reach goal C again via blockade 1 (the previous adaptation depressed only the connections from right to left). Again, it reaches the blockade, stays there for a while, and then takes the way around to goal C. Figures 2d and 2e repeat this experiment with blockade 2. Starting at D, the agent reaches blockade 2 and eventually chooses the way around to goal E. Then, seeking goal F, the agent reaches the blockade first from the left, thereafter from the bottom, then from the right, then it tries from the bottom again, and finally learns that none of these paths are valid anymore and chooses the way all around to goal F. Figure 2f shows that, once the world model has re-adapted to account for these blockades, the agent will not forget about them: here, moving from G to H, it does not try to trespass block 2.

[Figure 2: The CWM on a maze problem: (a) the outcome of self-organization; (b-c) agent movements from goal A to B to C; here, passage 1 was blocked and requires re-adaptation of the world model; (d-f) agent movements that demonstrate adaptation to a second blockade. Please see the text for more explanations.]

The reader is encouraged to also refer to the movies of these experiments, deposited at www.marc-toussaint.net/03-cwm/, which visualize much better the dynamics of self-organization, the planning behavior, the dynamics of the value field, and the world model re-adaptation.

6 Discussion

The goal of this research is an understanding of how neural systems may learn and represent a world model that allows for the generation of goal-directed behavioral sequences. In our approach for a connectionist world model, a perceptual and a motor layer are coupled to self-organize a model of the perceptual implications of motor activity.
A dynamical value field on the learned world model organizes behavior planning, a method in principle borrowed from classical value iteration. A major feature of our model is its adaptability. The state space model is developed in a self-organizing way, and small world changes require only little re-adaptation of the CWM. The system is continuous in the action, perception, and time domain, and all dynamics and adaptivity rely on local interactions only. Future work will include the more rigorous probabilistic interpretation of CWMs which we already indicated in section 2. Another, rather straightforward extension will be to replace random-walk exploration by more directed, information-seeking exploration methods as they have already been developed for classical world models [20, 21].

Acknowledgments

I acknowledge support from the German Bundesministerium für Bildung und Forschung (BMBF).

References

[1] G. Hesslow. Conscious thought as simulation of behaviour and perception. Trends in Cognitive Sciences, 6:242–247, 2002.
[2] R. Grush. The emulation theory of representation: motor control, imagery, and perception. Behavioral and Brain Sciences, 2003. To appear.
[3] M. D. Majors and R. J. Richards. Comparing model-free and model-based reinforcement learning. Cambridge University Engineering Department Technical Report CUED/F-INFENG/TR.286, 1997.
[4] D. E. Rumelhart, P. Smolensky, J. L. McClelland, and G. E. Hinton. Schemata and sequential thought processes in PDP models. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, volume 2, pages 7–57. MIT Press, Cambridge, 1986.
[5] M. Jordan and D. Rumelhart. Forward models: Supervised learning with a distal teacher. Cognitive Science, 16:307–354, 1992.
[6] B. Kröse and M. Eecen. A self-organizing representation of sensor space for mobile robot navigation. In Proc. of Int. Conf. on Intelligent Robots and Systems (IROS 1994), 1994.
[7] U. Zimmer.
Robust world-modelling and navigation in a real world. NeuroComputing, 13:247–260, 1996.
[8] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 27:77–87, 1977.
[9] W. A. Phillips and W. Singer. In search of common foundations for cortical computation. Behavioral and Brain Sciences, 20:657–722, 1997.
[10] L. F. Abbott. Realistic synaptic inputs for network models. Network: Computation in Neural Systems, 2:245–258, 1991.
[11] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[12] R. S. Sutton and A. G. Barto. Reinforcement Learning. MIT Press, Cambridge, 1998.
[13] C. von der Malsburg. Self-organization of orientation-sensitive cells in the striate cortex. Kybernetik, 15:85–100, 1973.
[14] T. Kohonen. Self-Organizing Maps. Springer, Berlin, 1995.
[15] G. A. Carpenter, S. Grossberg, N. Markuzon, J. H. Reynolds, and D. B. Rosen. Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural Networks, 5:698–713, 1992.
[16] B. Fritzke. A growing neural gas network learns topologies. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 625–632. MIT Press, Cambridge, MA, 1995.
[17] C. M. Bishop, G. E. Hinton, and I. G. D. Strachan. GTM through time. In Proc. of IEEE Fifth Int. Conf. on Artificial Neural Networks, Cambridge, 1997.
[18] J. C. Wiemer. The time-organized map algorithm: Extending the self-organizing map to spatiotemporal signals. Neural Computation, 15:1143–1171, 2003.
[19] P. Dayan and L. F. Abbott. Theoretical Neuroscience. MIT Press, 2001.
[20] J. Schmidhuber. Adaptive confidence and adaptive curiosity. Technical Report FKI-149-91, Technical University Munich, 1991.
[21] N. Meuleau and P. Bourgine. Exploration of multi-state environments: Local measures and back-propagation of uncertainty. Machine Learning, 35:117–154, 1998.
Bias-Corrected Bootstrap and Model Uncertainty

Harald Steck*
MIT CSAIL, 200 Technology Square, Cambridge, MA 02139
harald@ai.mit.edu

Tommi S. Jaakkola
MIT CSAIL, 200 Technology Square, Cambridge, MA 02139
tommi@ai.mit.edu

Abstract

The bootstrap has become a popular method for exploring model (structure) uncertainty. Our experiments with artificial and real-world data demonstrate that the graphs learned from bootstrap samples can be severely biased towards too complex graphical models. Accounting for this bias is hence essential, e.g., when exploring model uncertainty. We find that this bias is intimately tied to (well-known) spurious dependences induced by the bootstrap. The leading-order bias-correction equals one half of Akaike's penalty for model complexity. We demonstrate the effect of this simple bias-correction in our experiments. We also relate this bias to the bias of the plug-in estimator for entropy, as well as to the difference between the expected test and training errors of a graphical model, which asymptotically equals Akaike's penalty (rather than one half).

1 Introduction

Efron's bootstrap is a powerful tool for estimating various properties of a given statistic, most commonly its bias and variance (cf. [5]). It quickly gained popularity also in the context of model selection. When learning the structure of graphical models from small data sets, like gene-expression data, it has been applied to explore model (structure) uncertainty [7, 6, 8, 12]. However, the bootstrap procedure also involves various problems (cf. [4] for an overview). For instance, in the non-parametric bootstrap, where bootstrap samples $D^{(b)}$ ($b = 1, \dots, B$) are generated by drawing the data points from the given data $D$ with replacement, each bootstrap sample $D^{(b)}$ often contains multiple identical data points, which is a typical property of discrete data.
When the given data $D$ is in fact continuous (with a vanishing probability of two data points being identical), e.g., as in gene-expression data, the bootstrap procedure introduces a spurious discreteness in the samples $D^{(b)}$. A statistic computed from these discrete bootstrap samples may differ from one based on the continuous data $D$. As noted in [4], however, the effects due to this induced spurious discreteness are typically negligible.

In this paper, we focus on the spurious dependences induced by the bootstrap procedure, even when given discrete data. We demonstrate that the consequences of those spurious dependences cannot be neglected when exploring model (structure) uncertainty by means of the bootstrap, whether parametric or non-parametric. Graphical models learned from the bootstrap samples are biased towards too complex models, and this bias can be considerably larger than the variability of the graph structure, especially in the interesting case of limited data. As a result, too many edges are present in the learned model structures, and the confidence in the presence of edges is overestimated. This suggests that a bias-corrected bootstrap procedure is essential for exploring model structure uncertainty. Similarly to the statistics literature, we give a derivation of the bias-correction term to amend several popular scoring functions when applied to bootstrap samples (cf. Section 3.2). This bias-correction term asymptotically equals one half of the penalty term for model complexity in the Akaike Information Criterion (AIC), cf. Section 3.2. The (huge) effects of this bias and the proposed bias-correction are illustrated in our experiments in Section 5.

*Now at: ETH Zurich, Institute for Computational Science, 8092 Zurich, Switzerland.
As the maximum likelihood score and the entropy are intimately tied to each other in the exponential family of probability distributions, we also relate this bias towards too complex models to the bias of the plug-in estimator for entropy (Section 3.1). Moreover, we show in Section 4, similarly to [13, 1], how the (bootstrap) bias-correction can be used to obtain a scoring function whose penalty for model complexity asymptotically equals Akaike's penalty (rather than one half of that).

2 Bootstrap Bias-Estimation and Bias-Correction

In this section, we introduce relevant notation and briefly review the bootstrap bias estimation of an arbitrary statistic as well as the bootstrap bias-correction (cf. also [5, 4]). The scoring functions commonly used for graphical models, such as the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Minimum Description Length (MDL), or the posterior probability, can be viewed as special cases of a statistic.

In a domain of $n$ discrete random variables, $X = (X_1, \dots, X_n)$, let $p(X)$ denote the (unknown) true distribution from which the given data $D$ has been sampled. The empirical distribution implied by $D$ is given by $\hat{p}(X)$, where $\hat{p}(x) = N(x)/N$, $N(x)$ is the frequency of state $X = x$, and $N = \sum_x N(x)$ is the sample size of $D$. A statistic $T$ is any number that can be computed from the given data $D$. Its bias is defined as

$$\mathrm{Bias}_T = \langle T(D) \rangle_{D \sim p} - T(p) \,,$$

where $\langle T(D) \rangle_{D \sim p}$ denotes the expectation over data sets $D$ of size $N$ sampled from the (unknown) true distribution $p$. While $T(D)$ is an arbitrary statistic, $T(p)$ is the associated, but possibly slightly different, statistic that can be computed from a (normalized) distribution. Since the true distribution $p$ is typically unknown, $\mathrm{Bias}_T$ cannot be computed.
However, it can be approximated by the bootstrap bias-estimate, where $p$ is replaced by the empirical distribution $\hat{p}$, and the average over the data sets $D$ is replaced by the one over the bootstrap samples $D^{(b)}$ generated from $\hat{p}$, where $b = 1, \dots, B$ with sufficiently large $B$ (cf. [5]):

$$\widehat{\mathrm{Bias}}_T = \langle T(D^{(b)}) \rangle_b - T(\hat{p}) \qquad (1)$$

The estimator $T(\hat{p})$ is a so-called plug-in statistic, as the empirical distribution is "plugged in" in place of the (unknown) true one. For example, $T_{\sigma^2}(\hat{p}) = \mathrm{E}(X^2) - \mathrm{E}(X)^2$ is the familiar plug-in statistic for the variance, while $T^{\mathrm{unbiased}}_{\sigma^2}(D) = \frac{N}{N-1}\, T_{\sigma^2}(\hat{p})$ is the unbiased estimator. Obviously, a plug-in statistic yields an unbiased estimate concerning the distribution that is plugged in. Consequently, when the empirical distribution is plugged in, a plug-in statistic typically does not give an unbiased estimate concerning the (unknown) true distribution. Only plug-in statistics that are linear functions of $\hat{p}(x)$ are inherently unbiased (e.g., the arithmetic mean). However, most statistics, including the above scoring functions, are non-linear functions of $\hat{p}(x)$ (or equivalently of $N(x)$). In this case, the bias does not vanish in general. In the special case where a plug-in statistic is a convex (concave) function of $\hat{p}$, it follows immediately from the Jensen inequality that its bias is positive (negative). For example, the statistic $T_{\sigma^2}(\hat{p})$ is a negative quadratic, and thus concave, function of $\hat{p}$, and hence underestimates the variance of the (unknown) true distribution.

The general procedure of bias-correction can be used to reduce the bias of a biased statistic considerably. The bootstrap bias-corrected estimator $T^{\mathrm{BC}}$ is given by

$$T^{\mathrm{BC}}(D) = T(D) - \widehat{\mathrm{Bias}}_T = 2\, T(D) - \langle T(D^{(b)}) \rangle_b \,, \qquad (2)$$

where $\widehat{\mathrm{Bias}}_T$ is the bootstrap bias estimate according to Eq. 1.¹ Typically, $T^{\mathrm{BC}}(D)$ agrees with the corresponding unbiased estimator in leading order in $N$ (cf., e.g., [5]). Higher-order corrections can be achieved by "bootstrapping the bootstrap" [5].
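The bias-correction of equation (2) is easy to sketch for the plug-in variance statistic discussed above; the resampling code and names below are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def plug_in_var(d):
    """Plug-in variance T_sigma2(p_hat) = E(X^2) - E(X)^2 (concave in p_hat,
    hence it underestimates the true variance)."""
    return np.mean(d ** 2) - np.mean(d) ** 2

def bootstrap_bias_corrected(d, stat, B=2000):
    """Eq. (2): T_BC(D) = 2*T(D) - <T(D^(b))>_b, where the average runs over
    B bootstrap resamples drawn from D with replacement."""
    boot = [stat(rng.choice(d, size=len(d), replace=True)) for _ in range(B)]
    return 2 * stat(d) - np.mean(boot)
```

For the variance, $T^{\mathrm{BC}}$ inflates the plug-in estimate by roughly the factor $1 + 1/N$, recovering (to leading order) the familiar $N/(N-1)$ correction.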
Bias-correction can be dangerous in practice (cf. [5]): even though T^BC(D) is less biased than T(D), the bias-corrected estimator may have a substantially larger variance. This is due to a possibly higher variability in the estimate of the bias, particularly when computed from small data sets. However, this is not an issue in this paper, since the "estimate" of the bias turns out to be independent of the empirical distribution (in leading order in N).

3 Bias-Corrected Scoring-Functions

In this section, we show that the above popular scoring-functions are (considerably) biased towards too complex models when applied to bootstrap samples (in place of the given data). These scoring functions can be amended by an additional penalty term that accounts for this bias. Using the bootstrap bias-correction in a slightly non-standard way, a simple expression for this penalty term follows easily (Section 3.2) from the well-known bias of the plug-in estimator of the entropy, which is reviewed in Section 3.1 (cf. also, e.g., [11, 2, 16]).

3.1 Bias-Corrected Estimator for True Entropy

The entropy of the (true) distribution p(X) is defined by H(p(X)) = −Σ_x p(x) log p(x). Since this is a concave function of the p's, the plug-in estimator H(p̂(X)) tends to underestimate the true entropy H(p(X)) (cf. Section 2). The bootstrap bias estimate of H(p̂(X)) is Bias-hat_H = ⟨H(D^(b))⟩_b − H(p̂), where

  ⟨H(D^(b))⟩_b = (1/B) Σ_{b=1}^B H(D^(b)(X)) = −Σ_x ⟨(ν(x)/N) log(ν(x)/N)⟩_{ν(x)∼Bin(N, p̂(x))},   (3)

where Bin(N, p̂(x)) denotes the Binomial distribution that originates from the resampling procedure in the bootstrap; N is the sample size; p̂(x) is the probability of sampling a data point with X = x. An exact evaluation of Eq. 3 is computationally prohibitive in most cases. Monte Carlo methods, while yielding accurate results, are computationally costly. An analytical approximation of Eq.
3 follows immediately from the second-order Taylor expansion of L(q(x)) := q(x) log q(x) about p̂(x), where q(x) = ν(x)/N (see Footnote 2):

  −Σ_x ⟨L(ν(x)/N)⟩_{ν(x)} = H(p̂(X)) − (1/2) Σ_x L''(p̂(x)) ⟨[ν(x)/N − p̂(x)]²⟩_{ν(x)} + O(1/N²)
                          = H(p̂(X)) − (1/(2N)) (|X| − 1) + O(1/N²),   (4)

[Footnote 1: Note that ⟨T(D^(b))⟩_b is not the bias-corrected statistic.]
[Footnote 2: Note that this approximation can be applied analogously to Bias_H (instead of the bootstrap estimate Bias-hat_H), and the same leading-order term is obtained.]

where L''(p̂(x)) = 1/p̂(x) is the observed Fisher information evaluated at the empirical value p̂(x), and ⟨[ν(x) − N p̂(x)]²⟩_{ν(x)} = N p̂(x)(1 − p̂(x)) is the well-known variance of the Binomial distribution, induced by the bootstrap. In Eq. 4, |X| is the number of (joint) states of X. The bootstrap bias-corrected estimator for the entropy of the (unknown true) distribution is thus given by

  H^BC(p̂(X)) = H(p̂(X)) + (1/(2N)) (|X| − 1) + O(1/N²).

3.2 Bias-Correction for Bootstrapped Scoring-Functions

This section is concerned with the bias of popular scoring functions that is induced by the bootstrap procedure. For the moment, let us focus on the BIC when learning a Bayesian network structure m,

  T_BIC(D, m) = N Σ_{i=1}^n Σ_{x_i, π_i} p̂(x_i, π_i) log [p̂(x_i, π_i) / p̂(π_i)] − (1/2) log N · |θ|.   (5)

The maximum likelihood involves a summation over all the variables (i = 1, ..., n) and all the joint states of each variable X_i and its parents Π_i according to graph m. The number of independent parameters in the Bayesian network is given by

  |θ| = Σ_{i=1}^n (|X_i| − 1) · |Π_i|,   (6)

where |X_i| denotes the number of states of variable X_i, and |Π_i| the number of (joint) states of its parents Π_i. Like other scoring-functions, the BIC is obviously intended to be applied to the given data. If done so, optimizing the BIC yields an "unbiased" estimate of the true network structure underlying the given data.
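The entropy correction of Eq. 4, the Miller-Madow correction [11], is easy to verify numerically. A minimal sketch, where the distribution, sample size, and trial count are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def plugin_entropy(counts):
    """Plug-in entropy H(p_hat) from a vector of state counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# True distribution over |X| = 8 states
p = np.full(8, 1 / 8)
H_true = -np.sum(p * np.log(p))   # = log 8

N, trials = 50, 2000
H_plug, H_bc = [], []
for _ in range(trials):
    counts = rng.multinomial(N, p)
    h = plugin_entropy(counts)
    H_plug.append(h)
    H_bc.append(h + (len(p) - 1) / (2 * N))  # Eq. 4 correction: +(|X|-1)/(2N)

# On average, H_plug underestimates H_true by about (|X|-1)/(2N);
# the corrected estimate H_bc is nearly unbiased.
```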
However, when the BIC is applied to a bootstrap sample D^(b) (instead of the given data D), the BIC cannot be expected to yield an "unbiased" estimate of the true graph. This is because the maximum likelihood term in the BIC is biased when computed from the bootstrap sample D^(b) instead of the given data D. This bias reads Bias-hat_{T_BIC} = ⟨T_BIC(D^(b))⟩_b − T_BIC(D). It differs conceptually from Eq. 1 in two ways. First, it is the (exact) bias induced by the bootstrap procedure, while Eq. 1 is a bootstrap approximation of the (unknown) true bias. Second, while Eq. 1 applies to a statistic in general, the last term in Eq. 1 necessarily has to be a plug-in statistic. In contrast, both terms involved in Bias-hat_{T_BIC} comprise the same general statistic. Since the maximum likelihood term is intimately tied to the entropy in the exponential family of probability distributions, the leading-order approximation of the bias of the entropy carries over (cf. Eq. 4):

  Bias-hat_{T_BIC} = (1/2) Σ_{i=1}^n [ (|X_i| · |Π_i| − 1) − (|Π_i| − 1) ] + O(1/N) = (1/2) |θ| + O(1/N),   (7)

where |θ| is the number of independent parameters in the model, as given in Eq. 6 for Bayesian networks. Note that this bias is identical to one half of the penalty for model complexity in the Akaike Information Criterion (AIC). Hence, this bias due to the bootstrap cannot be neglected compared to the penalty terms inherent in all popular scoring functions. Our experiments in Section 5 also confirm the dominating effect of this bias when exploring model uncertainty. This bias in the maximum likelihood gives rise to spurious dependences induced by the bootstrap (a well-known property). In this paper, we are mainly interested in structure learning of graphical models. In this context, the bootstrap procedure obviously gives rise to a (considerable) bias towards too complex models. As a consequence, too many edges are present in the learned graph structure, and the confidence in the presence of edges is overestimated.
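The parameter count of Eq. 6 and the resulting bootstrap penalty of Eq. 7 are simple to compute. A sketch with hypothetical helper names, for a toy chain network:

```python
import numpy as np

def num_params(card, parents):
    """Eq. 6: |theta| = sum_i (|X_i| - 1) * |Pi_i| for a Bayesian network.
    card[i] is the number of states of X_i; parents[i] the list of its parents."""
    total = 0
    for i, c in enumerate(card):
        n_parent_states = int(np.prod([card[j] for j in parents[i]])) if parents[i] else 1
        total += (c - 1) * n_parent_states
    return total

def bic(loglik, n_params, N):
    """Eq. 5 in generic form: max log-likelihood minus (1/2) log N * |theta|."""
    return loglik - 0.5 * np.log(N) * n_params

def bic_bootstrap_corrected(loglik, n_params, N):
    """BIC for a bootstrap sample, minus the extra |theta|/2 bias of Eq. 7."""
    return bic(loglik, n_params, N) - 0.5 * n_params

# Toy chain X0 -> X1 -> X2, all binary: (2-1)*1 + (2-1)*2 + (2-1)*2 parameters
card = [2, 2, 2]
parents = [[], [0], [1]]
theta = num_params(card, parents)   # = 5 for this chain
```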
Moreover, the (undesirable) additional directed edges in Bayesian networks tend to point towards variables that already have a large number of parents. This is because the bias is proportional to the number of joint states of the parents of a variable (cf. Eqs. 7 and 6). Hence, the amount of the induced bias generally varies among the different edges in the graph. Consequently, the BIC has to be amended when applied to a bootstrap sample D^(b) (instead of the given data D). The bias-corrected BIC reads

  T^BC_BIC(D^(b), m) = T_BIC(D^(b), m) − (1/2) |θ|

(in leading order in N). Since the bias originates from the maximum likelihood term involved in the BIC, the same bias-correction applies to the AIC and MDL scores. Moreover, as the BIC approximates the (Bayesian) log marginal likelihood, log p(D|m), for large N, the leading-order bias-correction in Eq. 7 can also be expected to account for most of the bias of log p(D^(b)|m) when applied to bootstrap samples D^(b).

4 Bias-Corrected Maximum-Likelihood

It may be surprising that the bias derived in Eq. 7 equals only one half of the AIC penalty. In this section, we demonstrate that this is indeed consistent with the AIC score. Using the standard bootstrap bias-correction procedure (cf. Section 2), we obtain a scoring function that asymptotically equals the AIC. This approach is similar to the ones in [1, 13]. Assume that we are given some data D sampled from the (unknown) true distribution p(X). The goal is to learn a Bayesian network model with p(X|θ̂, m), or p̂(X|m) in short, where m is the graph structure and θ̂ are the maximum likelihood parameter estimates, given data D. An information-theoretic measure for the quality of graph m is the KL divergence between the (unknown) true distribution p(X) and the one described by the Bayesian network, p̂(X|m) (cf. the approach in [1]).
Since the entropy of the true distribution p(X) is an irrelevant constant when comparing different graphs, minimizing the KL divergence is equivalent to minimizing the statistic

  T(p, p̂, m) = −Σ_x p(x) log p̂(x|m),   (8)

which is the test error of the learned model when using the log loss. When p is unknown, one cannot evaluate T(p, p̂, m), but can approximate it by the training error,

  T(p̂, m) = −Σ_x p̂(x) log p̂(x|m) = −Σ_x p̂(x|m) log p̂(x|m)   (9)

(assuming exponential family distributions). Note that T(p̂, m) is equal to the negative maximum log likelihood up to the irrelevant factor N. It is well known that the training error underestimates the test error. However, the "bias-corrected training error",

  T^BC(p̂, m) = T(p̂, m) − Bias_{T(p̂,m)},   (10)

can serve as a surrogate, (nearly) unbiased estimator for the unknown test error, T(p, p̂, m), and hence as a scoring function for model selection. The bias is given by the difference between the expected training error and the expected test error,

  Bias_T = Σ_x p(x|m) ⟨log p̂(x|m)⟩_{D∼p} − Σ_x ⟨p̂(x|m) log p̂(x|m)⟩_{D∼p} ≈ −(1/N) |θ|,   (11)

where the first term equals −H(p(X|m)) − (1/(2N))|θ| + O(1/N²) and the second term equals −H(p(X|m)) + (1/(2N))|θ| + O(1/N²). The expectation is taken over the various data sets D (of sample size N) sampled from the unknown true distribution p; H(p(X|m)) is the (unknown) conditional entropy of the true distribution. In the leading-order approximation in N (cf. also Section 3.1), the number of independent parameters of the model, |θ|, is given in Eq. 6 for Bayesian networks. Note that both the expected test error and the expected training error give rise to one half of the AIC penalty each. The overall bias amounts to |θ|/N, which exactly equals the AIC penalty for model complexity. Note that, while the AIC asymptotically favors the same models as cross-validation [15], it typically does not select the true model underlying the given data, but a more complex model. When the bootstrap estimate of the (exact) bias in Eq.
11 is inserted in the scoring function in Eq. 10, the resulting score may be viewed as the frequentist version of the (Bayesian) Deviance Information Criterion (DIC)[13] (up to a factor 2): while averaging over the distribution of the model parameters is natural in the Bayesian approach, this is mimicked by the bootstrap in the frequentist approach. 5 Experiments In our experiments with artificial and real-world data, we demonstrate the crucial effect of the bias induced by the bootstrap procedure, when exploring model uncertainty. We also show that the penalty term in Eq. 7 can compensate for most of this (possibly large) bias in structure learning of Bayesian networks. In the first experiment, we used data sampled from the alarm network (37 discrete variables, 46 edges). Comprising 300 and 1,000 data points, respectively, the generated data sets can be expected to entail some model structure uncertainty. We examined two different scoring functions, namely BIC and posterior probability (uniform prior over network structures, equivalent sample size α = 1, cf. [10]). We used the K2 search strategy [3] because of its computational efficiency and its accuracy in structure learning, which is high compared to local search (even when combined with simulated annealing) [10]. This accuracy is due to the additional input required by the K2 algorithm, namely a correct topological ordering of the variables according to the true network structure. Consequently, the reported variability in the learned network structures tends to be smaller than the uncertainty determined by local search (without this additional information). However, we are mainly interested in the bias induced by the bootstrap here, which can be expected to be largely unaffected by the search strategy. 
Although the true alarm network is known, we use the network structures learned from the given data D as a reference in our experiments: as expected, the optimal graphs learned from our small data sets tend to be sparser than the original graph in order to avoid over-fitting (cf. Table 1). [Footnote 3: Note that the greedy K2 algorithm yields exactly one graph from each given data set.] We generated 200 bootstrap samples from the given data D (as suggested in [5]), and then learned the network structure from each. Table 1 shows that the bias induced by the bootstrap procedure is considerable for both the BIC and the posterior probability: it cannot be neglected compared to the standard deviation of the distribution over the number of edges. Also note that, despite the small data sets, the bootstrap yields graphs that have even more edges than the true alarm network. In contrast, Table 1 illustrates that this bias towards too complex models can be reduced dramatically by the bias-correction outlined in Section 3.2. Note, however, that the bias-correction does not work perfectly, as it is only the leading-order correction in N (cf. Eq. 7). The jackknife is an alternative resampling method, and can be viewed as an approximation to the bootstrap (e.g., cf. [5]). In the delete-d jackknife procedure, subsamples are generated from the given data D by deleting d data points. [Footnote 4: As a consequence, unlike bootstrap samples, jackknife samples do not contain multiple identical data points when generated from a given continuous data set (cf. Section 1).] The choice d = 1 is most popular, but leads to inconsistencies for non-smooth statistics (e.g., cf. [5]). These inconsistencies can be resolved by choosing a larger value for
d, roughly speaking √N < d ≪ N, cf. [5].

              alarm network data                              pheromone
              N = 300                 N = 1,000               N = 320
              BIC        posterior    BIC         posterior   posterior
  data D      41         40           43          44          63.0 ± 1.5
  boot BC     40.7 ± 4.9  40.5 ± 3.5  44.2 ± 2.6  44.1 ± 2.9  57.8 ± 3.5
  boot        49.1 ± 11.5 47.8 ± 10.9 47.3 ± 4.6  47.9 ± 4.8  135.7 ± 51.1
  jack 1      41.0 ± 0.0  40.0 ± 0.0  43.0 ± 0.0  44.0 ± 0.0  63.2 ± 1.5
  jack d      41.1 ± 0.9  40.1 ± 0.3  43.1 ± 0.3  43.7 ± 0.4  63.1 ± 2.3

Table 1: Number of edges (mean ± standard deviation) in the network structures learned from the given data set D, and when using various resampling methods: bias-corrected bootstrap (boot BC), naive bootstrap (boot), delete-1 jackknife (jack 1), and delete-d jackknife (jack d; here d = N/10).

[Figure 1: three scatter plots of edge confidence (each axis from 0 to 1), comparing given data vs. corrected bootstrap, given data vs. bootstrap, and corrected bootstrap vs. bootstrap.] Figure 1: The axes of these scatter plots show the confidence in the presence of the edges in the graphs learned from the pheromone data. The vertical and horizontal lines indicate the threshold values according to the mean number of edges in the graphs determined by the three methods (cf. Table 1).

The underestimation of both the bias and the variance of a statistic is often considered a disadvantage of the jackknife procedure: the "raw" jackknife estimates of bias and variance typically have to be multiplied by a so-called "inflation factor", which is usually of the order of the sample size N. In the context of model selection, however, one may take advantage of the extremely small bias of the "raw" jackknife estimate when determining, e.g., the mean number of edges in the model. Table 1 shows that the "raw" jackknife is typically less biased than the bias-corrected bootstrap in our experiments. However, it is not clear in the context of model selection how meaningful the "raw" jackknife estimate of model variability is. Our second experiment essentially confirms the above results.
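The delete-d jackknife subsamples used in Table 1 can be generated as in the following sketch (data and sizes are illustrative); unlike bootstrap resamples, they never repeat a data point:

```python
import numpy as np

rng = np.random.default_rng(3)

def jackknife_samples(data, d, n_subsamples):
    """Delete-d jackknife: subsamples of size N - d drawn without replacement."""
    N = len(data)
    return [rng.choice(data, size=N - d, replace=False)
            for _ in range(n_subsamples)]

data = np.arange(100)
subs = jackknife_samples(data, d=10, n_subsamples=320)   # d = N/10, as in Table 1
```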
The yeast pheromone response data contains 33 variables and 320 data points (measurements) [9]. We discretized this gene-expression data using the average optimal number of discretization levels for each variable as determined in [14]. Unlike in [14], we simply discretized the data in a preprocessing step, and then conducted our experiments based on this discretized data set. [Footnote 5: Of course, the bias-correction according to Eq. 7 also applies to the joint optimization of the discretization and graph structure when given a bootstrap sample.] Since the correct network structure is unknown in this experiment, we used local search combined with simulated annealing in order to optimize the BIC score and the posterior probability (α = 25, cf. [14]). As a reference in this experiment, we used 320 network structures learned from the given (discretized) data D, each of which is the highest-scoring graph found in a run of local search combined with simulated annealing. [Footnote 6: Using the annealing parameters as suggested in [10], each run of simulated annealing resulted in a different network structure (local optimum) in practice.] Each resampling procedure is also based on 320 subsamples. While the pheromone data experiments in Table 1 qualitatively confirm the previous results, the bias induced by the bootstrap is even larger here. We suspect that this difference in the bias is caused by the rather extreme parameter values in the original alarm network model, which leads to a relatively large signal-to-noise ratio even in small data sets. In contrast, gene-expression data is known to be extremely noisy. Another effect of the spurious dependences induced by the bootstrap procedure is shown in Figure 1: the overestimation of the confidence in the presence of individual edges in the network structures. The confidence in an individual edge can be estimated as the ratio between the number of learned graphs in which that edge is present and the overall number of learned graphs.
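This edge-confidence estimate amounts to a frequency count over the learned graphs. A toy sketch with made-up graphs (the helper name is hypothetical):

```python
import numpy as np

def edge_confidence(graphs, n_vars):
    """Confidence in each edge = fraction of learned graphs containing it.
    Each graph is given as a set of directed edges (i, j)."""
    counts = np.zeros((n_vars, n_vars))
    for g in graphs:
        for (i, j) in g:
            counts[i, j] += 1
    return counts / len(graphs)

# Three graphs over 3 variables: edge (0,1) appears in all, (1,2) in one
graphs = [{(0, 1), (1, 2)}, {(0, 1)}, {(0, 1), (0, 2)}]
conf = edge_confidence(graphs, 3)
```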
Each mark in Figure 1 corresponds to an edge, and its coordinates reflect the confidence estimated by the different methods. Obviously, the naive application of the bootstrap leads to a considerable overestimation of the confidence in the presence of many edges in Figure 1, particularly of those whose absence is favored by both our reference and the bias-corrected bootstrap. In contrast, the confidence estimated by the bias-corrected bootstrap aligns quite well with the confidence determined by our reference in Figure 1, leading to more trustworthy results in our experiments.

References

[1] H. Akaike. Information theory and an extension of the maximum likelihood principle. International Symposium on Information Theory, pp. 267–81. 1973.
[2] Carlton. On the bias of information estimates. Psych. Bulletin, 71:108–13, 1969.
[3] G. Cooper and E. Herskovits. A Bayesian method for constructing Bayesian belief networks from databases. UAI, pp. 86–94. 1991.
[4] A. C. Davison and D. V. Hinkley. Bootstrap Methods and Their Application. 1997.
[5] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. 1993.
[6] N. Friedman, M. Goldszmidt, and A. Wyner. Data analysis with Bayesian networks: A bootstrap approach. UAI, pp. 196–205. 1999.
[7] N. Friedman, M. Goldszmidt, and A. Wyner. On the application of the bootstrap for computing confidence measures on features of induced Bayesian networks. AI & Stat., pp. 197–202. 1999.
[8] N. Friedman, M. Linial, I. Nachman, and D. Pe'er. Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7:601–20, 2000.
[9] A. J. Hartemink, D. K. Gifford, T. S. Jaakkola, and R. A. Young. Combining location and expression data for principled discovery of genetic regulatory networks. In Pacific Symposium on Biocomputing, 2002.
[10] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197–243, 1995.
[11] G. A. Miller.
Note on the bias of information estimates. Information Theory in Psychology: Problems and Methods, pages 95–100, 1955. [12] D. Pe’er, A. Regev, G. Elidan, and N. Friedman. Inferring subnetworks from perturbed expression profiles. Bioinformatics, 1:1–9, 2001. [13] D. J. Spiegelhalter, N. G. Best, B. P. Carlin, and A. van der Linde. Bayesian measures of model complexity and fit. J. R. Stat. Soc. B, 64:583–639, 2002. [14] H. Steck and T. S. Jaakkola. (Semi-)predictive discretization during model selection. AI Memo 2003-002, MIT, 2003. [15] M. Stone. An asymptotic equivalence of choice of model by cross-validation and Akaike’s criterion. J. R. Stat. Soc. B, 36:44–7, 1977. [16] J. D. Victor. Asymptotic bias in information estimates and the exponential (Bell) polynomials. Neural Computation, 12:2797–804, 2000.
Nonlinear processing in LGN neurons

Vincent Bonin*, Valerio Mante and Matteo Carandini
Smith-Kettlewell Eye Research Institute
2318 Fillmore Street
San Francisco, CA 94115, USA
Institute of Neuroinformatics
University of Zurich and ETH Zurich
Winterthurerstrasse 190
CH-8046 Zurich, Switzerland
{vincent,valerio,matteo}@ski.org

Abstract

According to a widely held view, neurons in lateral geniculate nucleus (LGN) operate on visual stimuli in a linear fashion. There is ample evidence, however, that LGN responses are not entirely linear. To account for nonlinearities we propose a model that synthesizes more than 30 years of research in the field. Model neurons have a linear receptive field, and a nonlinear, divisive suppressive field. The suppressive field computes local root-mean-square contrast. To test this model we recorded responses from LGN of anesthetized paralyzed cats. We estimate model parameters from a basic set of measurements and show that the model can accurately predict responses to novel stimuli. The model might serve as the new standard model of LGN responses. It specifies how visual processing in LGN involves both linear filtering and divisive gain control.

1 Introduction

According to a widely held view, neurons in lateral geniculate nucleus (LGN) operate linearly (Cai et al., 1997; Dan et al., 1996). Their response L(t) is the convolution of the map of stimulus contrast S(x,t) with a receptive field F(x,t):

  L(t) = [S ∗ F](0, t)

The receptive field F(x,t) is typically taken to be a difference of Gaussians in space (Rodieck, 1965) and a difference of Gamma functions in time (Cai et al., 1997). This linear model accurately predicts the selectivity of responses for spatiotemporal frequency as measured with gratings (Cai et al., 1997; Enroth-Cugell and Robson, 1966). It also predicts the main features of responses to complex dynamic video sequences (Dan et al., 1996).

[Figure 1: data and model firing-rate traces; scale bar, 150 spikes/s.] Figure 1.
Response of an LGN neuron to a dynamic video sequence along with the prediction made by the linear model. Stimuli were sequences from Walt Disney's "Tarzan". From Mante et al. (2002). The linear model, however, suffers from limitations. For example, consider the response of an LGN neuron to a complex dynamic video sequence (Figure 1). The response is characterized by long periods of relative silence interspersed with brief events of high firing rate (Figure 1, thick traces). The linear model (Figure 1, thin traces) successfully predicts the timing of these firing events but fails to account for their magnitude (Mante et al., 2002). The limitations of the linear model are not surprising since there is ample evidence that LGN responses are nonlinear. For instance, responses to drifting gratings saturate as contrast is increased (Sclar et al., 1990) and are reduced, or masked, by superposition of a second grating (Bonin et al., 2002). Moreover, responses are selective for stimulus size (Cleland et al., 1983; Hubel and Wiesel, 1961; Jones and Sillito, 1991) in a nonlinear manner (Solomon et al., 2002). We propose that these and other nonlinearities can be explained by a nonlinear model incorporating a nonlinear suppressive field. The qualitative notion of a suppressive field was proposed three decades ago by Levick and collaborators (1972). We propose that the suppressive field computes local root-mean-square contrast, and operates divisively on the receptive field output. Basic elements of this model appeared in studies of contrast gain control in retina (Shapley and Victor, 1978) and in primary visual cortex (Cavanaugh et al., 2002; Heeger, 1992; Schwartz and Simoncelli, 2001). Some of these notions have been applied to LGN (Solomon et al., 2002), to fit responses to a limited set of stimuli with tailored parameter sets. Here we show that a single model with fixed parameters predicts responses to a broad range of stimuli.
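To make the linear stage concrete, the following sketch convolves a contrast movie S(x,t) with a space-time separable receptive field: a difference of Gaussians in space and a biphasic, Gamma-like filter in time. All parameter values, grid sizes, and the separability assumption are illustrative, not the parameters of the recorded cells:

```python
import numpy as np

def dog(x, sigma_c=0.3, sigma_s=1.0, k_s=0.5):
    """Difference-of-Gaussians spatial profile (center minus weaker surround)."""
    g = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return g(sigma_c) - k_s * g(sigma_s)

def temporal_filter(t, tau1=0.02, tau2=0.04):
    """Biphasic impulse response built from two Gamma-like lobes."""
    h = (t / tau1) * np.exp(-t / tau1) - 0.7 * (t / tau2) * np.exp(-t / tau2)
    return np.where(t >= 0, h, 0.0)

dt, dx = 0.005, 0.1
t = np.arange(0, 0.3, dt)          # temporal support of the filter
x = np.arange(-3, 3, dx)           # 1-D spatial axis, for simplicity

S = np.random.default_rng(4).standard_normal((400, len(x)))  # contrast S(x, t)
spatial = S @ dog(x) * dx                                    # integrate over space
L = np.convolve(spatial, temporal_filter(t) * dt)[:len(spatial)]  # linear response L(t)
```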
2 Model

In the model (Figure 2), the linear response of the receptive field L(t) is divided by the output of the suppressive field. The latter is a measure of local root-mean-square contrast c_local. The result of the division is a generator potential

  V(t) = V_max L(t) / (c50 + c_local),

where c50 is a constant.

[Figure 2 diagram: the stimulus S(x,t) drives the receptive field F(x,t), yielding L(t); a filter H(x,t) yields S*(x,t), which drives the suppressive field, yielding c_local; division with c50, the offset V0, and rectification yield the firing rate R(t).] Figure 2. Nonlinear model of LGN responses.

The suppressive field operates on a filtered version of the stimulus, S* = S ∗ H, where H is a linear filter and ∗ denotes convolution. The squared output of the suppressive field is the local mean square (the local variance) of the filtered stimulus:

  c_local²(t) = ∫∫ G(x) S*(x,t)² dx dt,

where G(x) is a 2-dimensional Gaussian. Firing rate is a rectified version of the generator potential, with threshold V_thresh:

  R(t) = [V(t) − V_thresh]₊ .

To test the nonlinear model, we recorded responses from neurons in the LGN of anesthetized paralyzed cats. Methods for these recordings were described elsewhere (Freeman et al., 2002).

3 Results

We proceed in two steps: first we estimate model parameters by fitting the model to a large set of canonical data; second we fix model parameters and evaluate the model by predicting responses to a novel set of stimuli.

[Figure 3: responses (spikes/s) as a function of spatial frequency (cpd, panel A) and temporal frequency (Hz, panel B).] Figure 3. Estimating the receptive field in an example LGN cell. Stimuli are gratings varying in spatial (A) and temporal (B) frequency. Responses are the harmonic component of spike trains at the grating temporal frequency. Error bars represent standard deviation of responses. Curves indicate model fit.
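The nonlinear stage, division followed by rectification, can be sketched in a few lines; all parameter values below are made up for illustration:

```python
import numpy as np

def lgn_response(L, c_local, v_max=100.0, c50=0.1, v_thresh=0.0):
    """Divisive suppression then rectification:
    V(t) = V_max * L(t) / (c50 + c_local(t)),  R(t) = [V(t) - V_thresh]_+
    All parameter values are illustrative, not fitted to any cell."""
    V = v_max * np.asarray(L) / (c50 + np.asarray(c_local))
    return np.maximum(V - v_thresh, 0.0)

# Same receptive-field drive with increasing local contrast -> weaker response
L = np.array([0.02, 0.05, 0.05])
c_local = np.array([0.0, 0.0, 0.4])
R = lgn_response(L, c_local)
```

At low contrast c_local is much smaller than c50, so the division is nearly constant and the model reduces to the linear one.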
[Figure 4: responses (spikes/s) as a function of test contrast (panel A), mask contrast (B), mask diameter (deg, C), and mask spatial frequency (cpd, D).] Figure 4. Estimating the suppressive field in the example LGN cell. Stimuli are sums of a test grating and a mask grating. Responses are the harmonic component of spike trains at the temporal frequency of the test. A: Responses to test alone. B-D: Responses to test+mask as a function of three mask attributes: contrast (B), diameter (C) and spatial frequency (D). Gray areas indicate baseline response (test alone, 50% contrast). Dashed curves are predictions of the linear model. Solid curves indicate fit of the nonlinear model.

3.1 Characterizing the receptive field

We obtain the parameters of the receptive field F(x,t) from responses to large drifting gratings (Figure 3). These stimuli elicit approximately constant output in the suppressive field, so they allow us to characterize the receptive field. Responses to different spatial frequencies constrain F(x,t) in space (Figure 3A). Responses to different temporal frequencies constrain F(x,t) in time (Figure 3B).

3.2 Characterizing the suppressive field

To characterize the divisive stage, we start by measuring how responses saturate at high contrast (Figure 4A). A linear model cannot account for this contrast saturation (Figure 4A, dashed curve). The nonlinear model (Figure 4A, solid curve) captures saturation because increases in receptive field output are attenuated by increases in suppressive field output. At low contrast, no saturation is observed because the output of the suppressive field is dominated by the constant c50. From these data we estimate the value of c50.
To obtain the parameters of the suppressive field, we recorded responses to sums of two drifting gratings (Figure 4B-D): an optimal test grating at 50% contrast, which elicits a large baseline response, and a mask grating that modulates this response. Test and mask temporal frequencies are incommensurate so that they temporally label a test response (at the frequency of the test) and a mask response (at the frequency of the mask) (Bonds, 1989). We vary mask attributes and study how they affect the test responses. Increasing mask contrast progressively suppresses responses (Figure 4B). The linear model fails to account for this suppression (Figure 4B, dashed curve). The nonlinear model (Figure 4B, solid curve) captures it because increasing mask contrast increases the suppressive field output while the receptive field output (at the temporal frequency of the test) remains constant. With masks of low contrast there is little suppression because the output of the suppressive field is dominated by the constant c50. Similar effects are seen if we increase mask diameter. Responses decrease until they reach a plateau (Figure 4C). A linear model predicts no decrease (Figure 4C, dashed curve). The nonlinear model (Figure 4C, solid curve) captures it because increasing mask diameter increases the suppressive field output while it does not affect the receptive field output. A plateau is reached once masks extend beyond the suppressive field. From these data we estimate the size of the Gaussian envelope G(x) of the suppressive field. Finally, the strength of suppression depends on mask spatial frequency (Figure 4D). At high frequencies, no suppression is elicited. Reducing spatial frequency increases suppression. This dependence of suppression on spatial frequency is captured in the nonlinear model by the filter H(x,t). From these data we estimate the spatial characteristics of the filter. 
From similar experiments involving different temporal frequencies (not shown), we estimate the filter's selectivity for temporal frequency.

3.3 Predicting responses to novel stimuli

We have seen that with a fixed set of parameters the model provides a good fit to a large set of measurements (Figure 3 and Figure 4). We now test whether the model predicts responses to a set of novel stimuli: drifting gratings varying in contrast and diameter. Responses to high contrast stimuli exhibit size tuning (Figure 5A, squares): they grow with size for small diameters, reach a maximum value at intermediate diameter and are reduced for large diameters (Jones and Sillito, 1991). Size tuning, however, strongly depends on stimulus contrast (Solomon et al., 2002): no size tuning is observed at low contrast (Figure 5A, circles). The model predicts these effects (Figure 5A, curves). For large, high contrast stimuli the output of the suppressive field is dominated by c_local, resulting in suppression of responses. At low contrast, c_local is much smaller than c50, and the suppressive field does not affect responses. Similar considerations can be made by plotting these data as a function of contrast (Figure 5B). As predicted by the nonlinear model (Figure 5B, curves), the effect of increasing contrast depends on stimulus size: responses to large stimuli show strong saturation (Figure 5B, squares), whereas responses to small stimuli grow linearly (Figure 5B, circles). The model predicts these effects because only large, high contrast stimuli elicit large enough responses from the suppressive field to cause suppression. For small, low contrast stimuli, instead, the linear model is a good approximation.

[Figure 5: responses (spikes/s) as a function of diameter (deg, panel A) and contrast (panel B).] Figure 5. Predicting responses to novel stimuli in the example LGN cell.
Stimuli are gratings varying in diameter and contrast, and responses are the harmonic component of spike trains at the grating temporal frequency. Curves show model predictions based on parameters as estimated in previous figures, not fitted to these data. A: Responses as a function of diameter for different contrasts. B: Responses as a function of contrast for different diameters.

3.4 Model performance

To assess model performance across neurons we calculate the percentage of variance in the data that is explained by the model (see Freeman et al., 2002 for methods). The model provides good fits to the data used to characterize the suppressive field (Figure 4), explaining more than 90% of the variance in the data for 9/13 cells (Figure 6A). Model parameters are then held fixed, and the model is used to predict responses to gratings of different contrast and diameter (Figure 5). The model performs well, explaining above 90% of the variance in these novel data in 10/13 neurons (Figure 6B, shaded histogram). The agreement between the quality of the fits and the quality of the predictions suggests that model parameters are well constrained and rules out a role of overfitting in determining the quality of the fits. To further confirm the performance of the model, in an additional 54 cells we ran a subset of the whole protocol, involving only the experiment for characterizing the receptive field (Figure 3) and the experiment involving gratings of different contrast and diameter (Figure 5). For these cells we estimate the suppressive field by fitting the model directly to the latter measurements. The model explains above 90% of the variance in these data in 20/54 neurons and more than 70% in 39/54 neurons (Figure 6B, white histogram).
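The percentage-of-variance measure can be computed as below. This is one common definition (100 times one minus the ratio of residual to data variance); the exact formula of Freeman et al. (2002) may differ in detail:

```python
import numpy as np

def explained_variance(data, model):
    """Percentage of variance in the data explained by model predictions."""
    resid = np.asarray(data) - np.asarray(model)
    return 100.0 * (1.0 - np.var(resid) / np.var(data))

# Made-up responses and a model prediction that tracks them closely
data = np.array([10., 25., 40., 60., 55., 30.])
good = data + np.array([1., -2., 1., -1., 2., -1.])
ev = explained_variance(data, good)   # well above the 90% criterion
```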
Considering the large size of the data set (more than 100 stimuli, requiring several hours of recordings per neuron) and the small number of free parameters (only 6 for the purpose of this work), the overall quality of the model predictions is remarkable. [Figure 6: histograms of explained variance (%) over # cells; n=13 in panel A, n=54 in panel B; plot data omitted.] Figure 6. Percentage of variance in data explained by model. A: Experiments to estimate the suppressive field. B: Experiments to test the model. Gray histogram shows quality of predictions. White histogram shows quality of fits. 4 Conclusions The nonlinear model provides a unified description of visual processing in LGN neurons. Based on a fixed set of parameters, it can predict both linear properties (Figure 3) and nonlinear properties such as contrast saturation (Figure 4A) and masking (Figure 4B-D). Moreover, once the parameters are fixed, it predicts responses to novel stimuli (Figure 5). The model explains why responses are tuned for stimulus size at high contrast but not at low contrast, and it correctly predicts that only responses to large stimuli saturate with contrast, while responses to small stimuli grow linearly. The model implements a form of contrast gain control. A possible purpose of this gain control is to increase the range of contrasts that can be transmitted given the limited dynamic range of single neurons. Divisive gain control may also play a role in population coding: a similar model applied to responses of primary visual cortex was shown to maximize the independence of responses across neurons (Schwartz and Simoncelli, 2001). We are working towards improving the model in two ways. First, we are characterizing the dynamics of the suppressive field, e.g. to predict how it responds to transient stimuli.
Second, we are testing the assumption that the suppressive field computes root-mean-square contrast, a measure that solely depends on the second-order moments of the light distribution. Our ultimate goal is to predict responses to complex stimuli such as those shown in Figure 1 and to quantify to what degree the nonlinear model improves on the predictions of the linear model. Determining the role of visual nonlinearities under more natural stimulation conditions is also critical to understanding their function. The nonlinear model synthesizes more than 30 years of research. It is robust, tractable and generalizes to arbitrary stimuli. As a result it might serve as the new standard model of LGN responses. Because the nonlinearities we discussed are already present in the retina (Shapley and Victor, 1978), and tend to get stronger as one ascends the visual hierarchy (Sclar et al., 1990), it may also be used to study how responses take shape from one stage to another in the visual system. Acknowledgments This work was supported by the Swiss National Science Foundation and by the James S. McDonnell Foundation 21st Century Research Award in Bridging Brain, Mind & Behavior. References Bonds, A. B. (1989). Role of inhibition in the specification of orientation selectivity of cells in the cat striate cortex. Vis Neurosci 2, 41-55. Bonin, V., Mante, V., and Carandini, M. (2002). The contrast integration field of cat LGN neurons. Program No. 352.16. In Abstract Viewer/Itinerary Planner (Washington, DC, Society for Neuroscience). Cai, D., DeAngelis, G. C., and Freeman, R. D. (1997). Spatiotemporal receptive field organization in the lateral geniculate nucleus of cats and kittens. J Neurophysiol 78, 1045-1061. Cavanaugh, J. R., Bair, W., and Movshon, J. A. (2002). Selectivity and spatial distribution of signals from the receptive field surround in macaque V1 neurons. J Neurophysiol 88, 2547-2556. Cleland, B. G., Lee, B. B., and Vidyasagar, T. R. (1983).
Response of neurons in the cat's lateral geniculate nucleus to moving bars of different length. J Neurosci 3, 108-116. Dan, Y., Atick, J. J., and Reid, R. C. (1996). Efficient coding of natural scenes in the lateral geniculate nucleus: experimental test of a computational theory. J Neurosci 16, 3351-3362. Enroth-Cugell, C., and Robson, J. G. (1966). The contrast sensitivity of retinal ganglion cells of the cat. J Physiol (Lond) 187, 517-552. Freeman, T., Durand, S., Kiper, D., and Carandini, M. (2002). Suppression without Inhibition in Visual Cortex. Neuron 35, 759. Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. Vis Neurosci 9, 181-197. Hubel, D., and Wiesel, T. N. (1961). Integrative action in the cat's lateral geniculate body. J Physiol (Lond) 155, 385-398. Jones, H. E., and Sillito, A. M. (1991). The length-response properties of cells in the feline dorsal lateral geniculate nucleus. J Physiol (Lond) 444, 329-348. Levick, W. R., Cleland, B. G., and Dubin, M. W. (1972). Lateral geniculate neurons of cat: retinal inputs and physiology. Invest Ophthalmol 11, 302-311. Mante, V., Bonin, V., and Carandini, M. (2002). Responses of cat LGN neurons to plaids and movies. Program No. 352.15. In Abstract Viewer/Itinerary Planner (Washington, DC, Society for Neuroscience). Rodieck, R. W. (1965). Quantitative analysis of cat retina ganglion cell response to visual stimuli. Vision Res 5, 583-601. Schwartz, O., and Simoncelli, E. P. (2001). Natural signal statistics and sensory gain control. Nat Neurosci 4, 819-825. Sclar, G., Maunsell, J. H. R., and Lennie, P. (1990). Coding of image contrast in central visual pathways of the macaque monkey. Vision Res 30, 1-10. Shapley, R. M., and Victor, J. D. (1978). The effect of contrast on the transfer properties of cat retinal ganglion cells. J Physiol 285, 275-298. Solomon, S. G., White, A. J., and Martin, P. R. (2002). 
Extraclassical receptive field properties of parvocellular, magnocellular, and koniocellular cells in the primate lateral geniculate nucleus. J Neurosci 22, 338-349.
2003
Ranking on Data Manifolds Dengyong Zhou, Jason Weston, Arthur Gretton, Olivier Bousquet, and Bernhard Sch¨olkopf Max Planck Institute for Biological Cybernetics, 72076 Tuebingen, Germany {firstname.secondname}@tuebingen.mpg.de Abstract The Google search engine has enjoyed huge success with its web page ranking algorithm, which exploits the global, rather than local, hyperlink structure of the web using random walks. Here we propose a simple universal ranking algorithm for data lying in Euclidean space, such as text or image data. The core idea of our method is to rank the data with respect to the intrinsic manifold structure collectively revealed by a large amount of data. Encouraging experimental results on synthetic, image, and text data illustrate the validity of our method. 1 Introduction The Google search engine [2] accomplishes web page ranking using the PageRank algorithm, which exploits the global, rather than local, hyperlink structure of the web [1]. Intuitively, it can be thought of as modelling the behavior of a random surfer on the graph of the web, who simply keeps clicking on successive links at random and also periodically jumps to a random page. The web pages are ranked according to the stationary distribution of the random walk. Empirical results show that PageRank is superior to the naive ranking method in which web pages are simply ranked according to the number of inbound hyperlinks, so that only the local structure of the web is exploited. Our interest here is in the situation where the objects to be ranked are represented as vectors in Euclidean space, such as text or image data. Our goal is to rank the data with respect to the intrinsic global manifold structure [6, 7] collectively revealed by a large amount of data. We believe that for many real-world data types this should be superior to a local method, which ranks data simply by pairwise Euclidean distances or inner products. Let us consider a toy problem to explain our motivation.
We are given a set of points arranged in a two moons pattern (Figure 1(a)). A query is given in the upper moon, and the task is to rank the remaining points according to their relevance to the query. Intuitively, the relevance of points in the upper moon to the query should decrease along the moon shape. The same should hold for the points in the lower moon. Furthermore, all of the points in the upper moon should be more relevant to the query than the points in the lower moon. If we rank the points with respect to the query simply by Euclidean distance, then the left-most points in the lower moon will be more relevant to the query than the right-most points in the upper moon (Figure 1(b)). Clearly, this result is not consistent with our intuition (Figure 1(c)). We propose a simple universal ranking algorithm which can exploit the intrinsic manifold structure of data. Figure 1: Ranking on the two moons pattern. The marker sizes are proportional to the ranking in the last two figures. (a) toy data set with a single query; (b) ranking by Euclidean distances; (c) ideal ranking result we hope to obtain. This method is derived from our recent research on semi-supervised learning [8]. In fact, the ranking problem can be viewed as an extreme case of semi-supervised learning in which only positively labeled points are available. An intuitive description of our method is as follows. We first form a weighted network on the data, and assign a positive ranking score to each query and zero to the remaining points, which are ranked with respect to the queries. All points then spread their ranking scores to their nearby neighbors via the weighted network. The spreading process is repeated until a global stable state is achieved, and all points except the queries are ranked according to their final ranking scores. The rest of the paper is organized as follows.
Section 2 describes the ranking algorithm in detail. Section 3 discusses the connections with PageRank. Section 4 further introduces a variant of PageRank which can rank the data with respect to specific queries. Finally, Section 5 presents experimental results on toy data, on digit images, and on text documents, and Section 6 concludes the paper. 2 Algorithm Given a set of points X = {x1, ..., xq, xq+1, ..., xn} ⊂ R^m, the first q points are the queries and the rest are the points that we want to rank according to their relevance to the queries. Let d : X × X → R denote a metric on X, such as Euclidean distance, which assigns each pair of points xi and xj a distance d(xi, xj). Let f : X → R denote a ranking function which assigns to each point xi a ranking value fi. We can view f as a vector f = [f1, ..., fn]^T. We also define a vector y = [y1, ..., yn]^T, in which yi = 1 if xi is a query, and yi = 0 otherwise. If we have prior knowledge about the confidences of the queries, then we can assign different ranking scores to the queries, proportional to their respective confidences. The algorithm is as follows: 1. Sort the pairwise distances among points in ascending order. Repeatedly connect pairs of points with an edge, following this order, until a connected graph is obtained. 2. Form the affinity matrix W defined by Wij = exp[−d^2(xi, xj)/2σ^2] if there is an edge linking xi and xj. Note that Wii = 0 because there are no loops in the graph. 3. Symmetrically normalize W by S = D^{−1/2}WD^{−1/2}, in which D is the diagonal matrix with (i, i)-element equal to the sum of the i-th row of W. 4. Iterate f(t + 1) = αSf(t) + (1 − α)y until convergence, where α is a parameter in [0, 1). 5. Let f∗i denote the limit of the sequence {fi(t)}. Rank each point xi according to its ranking score f∗i (largest ranked first). This iterative algorithm can be understood intuitively. First, a connected network is formed in the first step.
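Steps 2-5 of the algorithm can be sketched directly in numpy. This is a minimal sketch on toy data of our own choosing; for brevity it skips the incremental edge insertion of step 1 and uses a fully connected graph (the cross-cluster affinities are then merely very small rather than zero):

```python
import numpy as np

def manifold_rank(X, query_idx, sigma=0.5, alpha=0.9, iters=500):
    """Steps 2-5: RBF affinity with zero diagonal, symmetric
    normalization, and the spreading iteration f <- alpha*S*f + (1-alpha)*y."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                              # no self-loops (step 2)
    Dinv_sqrt = np.diag(1.0 / np.sqrt(W.sum(1)))
    S = Dinv_sqrt @ W @ Dinv_sqrt                         # step 3
    y = np.zeros(n)
    y[query_idx] = 1.0
    f = np.zeros(n)
    for _ in range(iters):                                # step 4
        f = alpha * S @ f + (1 - alpha) * y
    return f

# Two tight clusters; the query sits in the first one.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 3.0]])
f = manifold_rank(X, query_idx=0)
order = np.argsort(-f)   # largest score ranked first (step 5)
```

As expected, the query's cluster is ranked above the far cluster, because the scores spread through strong within-cluster edges.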
The network is then weighted in the second step, and the weights are symmetrically normalized in the third step. The normalization in the third step is necessary to prove the algorithm’s convergence. In the fourth step, all points spread their ranking scores to their neighbors via the weighted network. The spreading process is repeated until a global stable state is achieved, and in the fifth step the points are ranked according to their final ranking scores. The parameter α specifies the relative contributions to the ranking scores from the neighbors and from the initial ranking scores. It is worth mentioning that self-reinforcement is avoided since the diagonal elements of the affinity matrix are set to zero in the second step. In addition, the information is spread symmetrically since S is a symmetric matrix. Concerning the convergence of this algorithm, we have the following theorem: Theorem 1 The sequence {f(t)} converges to f∗ = β(I − αS)−1y, where β = 1 − α. See [8] for the rigorous proof. Here we only demonstrate how to obtain such a closed form expression. Suppose f(t) converges to f∗. Substituting f∗ for f(t + 1) and f(t) in the iteration equation f(t + 1) = αSf(t) + (1 − α)y, we have f∗ = αSf∗ + (1 − α)y, (1) which can be transformed into (I − αS)f∗ = (1 − α)y. Since (I − αS) is invertible, we have f∗ = (1 − α)(I − αS)−1y. Clearly, the scaling factor β does not affect the ranking. Hence the closed form is equivalent to f∗ = (I − αS)−1y. (2) We can use this closed form to compute the ranking scores of points directly. In large-scale real-world problems, however, we prefer the iterative algorithm. Our experiments show that a few iterations are enough to yield high quality ranking results. 3 Connections with Google Let G = (V, E) denote a directed graph with n vertices. Let W denote the n × n adjacency matrix, in which Wij = 1 if there is a link in E from vertex xi to vertex xj, and Wij = 0 otherwise. Note that W is possibly asymmetric.
Define a random walk on G determined by the transition probability matrix P = (1 − ϵ)U + ϵD−1W, (3) where U is the matrix with all entries equal to 1/n. This can be interpreted as a probability ϵ of transitioning to an adjacent vertex, and a probability 1 − ϵ of jumping to any point on the graph uniformly at random. The ranking scores over V computed by PageRank are then given by the stationary distribution π of the random walk. In our case, we only consider graphs which are undirected and connected; clearly, W is symmetric in this situation. If we also rank all points without queries using our method, as is done by Google, then we have the following theorem: Theorem 2 For the task of ranking data represented by a connected and undirected graph without queries, f∗ and PageRank yield the same ranking list. Proof. We first show that the stationary distribution π of the random walk used in Google is proportional to the vertex degrees if the graph G is undirected and connected. Let 1 denote the 1 × n vector with all entries equal to 1. We have 1DP = 1D[(1 − ϵ)U + ϵD−1W] = (1 − ϵ)1DU + ϵ1DD−1W = (1 − ϵ)1D + ϵ1W = (1 − ϵ)1D + ϵ1D = 1D. Let vol G denote the volume of G, which is given by the sum of the vertex degrees. The stationary distribution is then π = 1D/vol G. (4) Note that π does not depend on ϵ. Hence π is also the stationary distribution of the random walk determined by the transition probability matrix D−1W. Now we consider the ranking result given by our method in the situation without queries. The iteration equation in the fourth step of our method becomes f(t + 1) = Sf(t). (5) A standard result [4] of linear algebra states that if f(0) is a vector not orthogonal to the principal eigenvector, then the sequence {f(t)} converges to the principal eigenvector of S. Let 1 denote the n × 1 vector with all entries equal to 1. Then SD1/21 = D−1/2WD−1/2D1/21 = D−1/2W1 = D−1/2D1 = D1/21.
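The fact used in the proof, that π = 1D/vol G is stationary for the walk with transition matrix D−1W on an undirected connected graph, is easy to check numerically; a quick sketch on a toy graph of our own choosing:

```python
import numpy as np

# Undirected, connected toy graph (symmetric 0/1 adjacency matrix W).
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
deg = W.sum(1)                 # vertex degrees (the row vector 1D in the paper's notation)
P = W / deg[:, None]           # row-stochastic transition matrix D^{-1} W
pi = deg / deg.sum()           # claimed stationary distribution: degrees / vol G
stationary_gap = np.abs(pi @ P - pi).max()   # should be ~0 if pi P = pi
```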
Further, noticing that the maximal eigenvalue of S is 1 [8], we know that the principal eigenvector of S is D1/21. Hence f∗ = D1/21. (6) Comparing (4) with (6), it is clear that f∗ and π give the same ranking list. This completes our proof. 4 Personalized Google Although PageRank is designed to rank all points without respect to any query, it is easy to modify it for query-based ranking problems. Let P = D−1W. The ranking scores given by PageRank are the elements of the convergence solution π∗ of the iteration equation π(t + 1) = P T π(t). (7) By analogy with the algorithm in Section 2, we can add a query term on the right-hand side of (7) for query-based ranking, π(t + 1) = αP T π(t) + (1 − α)y. (8) This can be viewed as a personalized version of PageRank. We can show that the sequence {π(t)} converges to π∗ = (1 − α)(I − αP T )−1y as before, which is equivalent to π∗ = (I − αP T )−1y. (9) Now let us analyze the connection between (2) and (9). Note that (9) can be transformed into π∗ = [(D − αW)D−1]−1y = D(D − αW)−1y. In addition, f∗ can be represented as f∗ = [D−1/2(D − αW)D−1/2]−1y = D1/2(D − αW)−1D1/2y. (10) Hence the main difference between π∗ and f∗ is that in the latter the initial ranking score yi of each query xi is weighted with respect to its degree. This observation motivates us to propose a more general personalized PageRank algorithm, π(t + 1) = αP T π(t) + (1 − α)Dky, (11) in which we assign different importance to queries with respect to their degrees. The closed form of (11) is given by π∗ = (I − αP T )−1Dky. (12) If k = 0, (12) is just (9); and if k = 1, we have π∗ = (I − αP T )−1Dy = D(D − αW)−1Dy, which is almost the same as (10). We can also use (12) for classification problems without any modification, by setting the elements of y to 1 or −1 according to the positive or negative classes of the labeled points, and to 0 for the unlabeled data. This shows that the ranking and classification problems are closely related.
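The identities rewriting (2), (9) and (10) are pure matrix algebra and can be verified numerically; a small sketch on a toy symmetric affinity matrix of our own choosing:

```python
import numpy as np

# Toy symmetric affinity matrix with zero diagonal.
W = np.array([[0., 2., 1.],
              [2., 0., 1.],
              [1., 1., 0.]])
D = np.diag(W.sum(1))
alpha = 0.9
y = np.array([1.0, 0.0, 0.0])   # a single query at the first point

Dh = np.sqrt(D)                   # D^{1/2} (D is diagonal)
Dhi = np.diag(1.0 / np.diag(Dh))  # D^{-1/2}
S = Dhi @ W @ Dhi
f_star = np.linalg.solve(np.eye(3) - alpha * S, y)     # eq. (2)
f_alt = Dh @ np.linalg.solve(D - alpha * W, Dh @ y)    # eq. (10)

P = np.diag(1.0 / np.diag(D)) @ W                      # P = D^{-1} W
pi_star = np.linalg.solve(np.eye(3) - alpha * P.T, y)  # eq. (9)
pi_alt = D @ np.linalg.solve(D - alpha * W, y)         # rewritten form of (9)
```

Here `f_star` matches `f_alt` and `pi_star` matches `pi_alt`, confirming the two rewritings.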
We can carry out a similar analysis of the relations to Kleinberg’s HITS [5], which is another popular web page ranking algorithm. The basic idea of this method is also to iteratively spread the ranking scores via the existing web graph. We omit further discussion of this method due to lack of space. 5 Experiments We validate our method using a toy problem and two real-world domains: image and text. In the following experiments we use the closed form expression, with α fixed at 0.99. As a true labeling is known in these problems, i.e. the image and document categories (which is not the case in real-world ranking problems), we can compute the ranking error using the Receiver Operating Characteristic (ROC) score [3] to evaluate ranking algorithms. The returned score is between 0 and 1, a score of 1 indicating a perfect ranking. 5.1 Toy Problem In this experiment we considered the toy ranking problem mentioned in the introduction. The connected graph described in the first step of our algorithm is shown in Figure 2(a). The ranking scores at the different time steps t = 5, 10, 50, 100 are shown in Figures 2(b)-(e). Note that the scores on each moon decrease along the moon shape away from the query, and the scores on the moon containing the query point are larger than on the other moon. Ranking by Euclidean distance is shown in Figure 2(f), which fails to capture the two moons structure. It is worth mentioning that simply ranking the data according to the shortest paths [7] on the graph does not work well. In particular, we draw the reader’s attention to the long edge in Figure 2(a) which links the two moons. It appears that shortest paths are sensitive to small changes in the graph. The robust solution is to assemble all paths between two points and weight them by a decreasing factor. This is exactly what we have done: note that the closed form can be expanded as f∗ = Σ_{i≥0} α^i S^i y.
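This path-summing interpretation can be made concrete: the closed form (I − αS)−1y agrees with the truncated series Σ_i α^i S^i y, which weights walks of length i by the decreasing factor α^i. A small numerical sketch (toy matrix of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 5))
W = (A + A.T) / 2           # symmetric toy affinity matrix
np.fill_diagonal(W, 0.0)
Dhi = np.diag(1.0 / np.sqrt(W.sum(1)))
S = Dhi @ W @ Dhi           # symmetrically normalized affinity
alpha = 0.5
y = np.eye(5)[0]            # single query

closed = np.linalg.solve(np.eye(5) - alpha * S, y)
# Truncated Neumann series: sum_i alpha^i S^i y (remainder ~ alpha^200, negligible).
series = sum((alpha ** i) * np.linalg.matrix_power(S, i) @ y for i in range(200))
```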
5.2 Image Ranking In this experiment we address the task of ranking on the USPS handwritten 16x16 digits dataset. We rank digits from 1 to 6 in our experiments. There are 1269, 929, 824, 852, 716 and 834 examples of each class, for a total of 5424 examples. Figure 2: Ranking on the pattern of two moons. (a) connected graph; (b)-(e) ranking at the different time steps t = 5, 10, 50, 100; (f) ranking by Euclidean distance. [Figure 3: six panels of ROC score versus number of queries for query digits 1 to 6, each comparing manifold ranking with Euclidean distance; plot data omitted.] Figure 3: ROC on USPS for queries from digits 1 to 6. Note that these experimental results also provide indirect proof of the intrinsic manifold structure in USPS. Figure 4: Ranking digits on USPS. The top-left digit in each panel is the query. The left panel shows the top 99 by the manifold ranking; the right panel shows the top 99 by the Euclidean distance based ranking. Note that there are many more 2s with knots in the right panel. We randomly select examples from one class of digits to be the query set over 30 trials, and then rank the remaining digits with respect to these sets. We use an RBF kernel with width σ = 1.25 to construct the affinity matrix W, with the diagonal elements set to zero.
The Euclidean distance based ranking method is used as the baseline: given a query set {xs} (s ∈ S), each point x is assigned the score mins∈S ∥x − xs∥, and the highest ranking is given to the point with the lowest score. The results, measured as ROC scores, are summarized in Figure 3; each plot corresponds to a different query class, from digit one to six respectively. Our algorithm is comparable to the baseline when a digit 1 is the query. For the other digits, however, our algorithm significantly outperforms the baseline. This experimental result also provides indirect proof of the underlying manifold structure in the USPS digit dataset [6, 7]. The top 99 images obtained by our algorithm and by Euclidean distance, with a random digit 2 as the query, are shown in Figure 4. The top-left digit in each panel is the query. Note that there are some 3s in the right panel. Furthermore, there are many curly 2s in the right panel which do not match the query well: the 2s in the left panel are more similar to the query than the 2s in the right panel. This subtle superiority matters a great deal in real-world ranking tasks, in which users are only interested in the few leading ranking results. The ROC measure, however, is too coarse to reflect this subtle superiority. 5.3 Text Ranking In this experiment, we investigate the task of text ranking using the 20-newsgroups dataset. We choose the topic rec, which contains autos, motorcycles, baseball and hockey, from the version 20-news-18828. The articles are processed by the Rainbow software package with the following options: (1) passing all words through the Porter stemmer before counting them; (2) tossing out any token which is on the stoplist of the SMART system; (3) skipping any headers; (4) ignoring words that occur in 5 or fewer documents. No further preprocessing was done. Removing the empty documents, we obtain 3970 document vectors in an 8014-dimensional space.
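The Euclidean baseline described above amounts to scoring each non-query point by its distance to the nearest query and sorting in ascending order; a minimal sketch on toy data of our own:

```python
import numpy as np

def euclidean_baseline_rank(X, query_idx):
    """Rank all non-query points by their distance to the nearest query:
    score(x) = min over queries s of ||x - x_s||; lowest score ranked first."""
    queries = X[query_idx]
    rest = np.delete(np.arange(len(X)), query_idx)
    scores = np.min(
        np.linalg.norm(X[rest][:, None, :] - queries[None, :, :], axis=2), axis=1
    )
    return rest[np.argsort(scores)]

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0], [0.5, 0.0]])
order = euclidean_baseline_rank(X, query_idx=[0])
```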
Finally, the documents are normalized into the TF-IDF representation. We use the ranking method based on normalized inner products as the baseline. The affinity matrix W is also constructed by inner products, i.e. a linear kernel. The ROC scores for 100 randomly selected queries for each class are given in Figure 5. [Figure 5: four scatter plots of manifold ranking versus inner product ROC scores, panels (a) autos, (b) motorcycles, (c) baseball, (d) hockey; plot data omitted.] Figure 5: ROC score scatter plots of 100 random queries from the categories autos, motorcycles, baseball and hockey contained in the 20-newsgroups dataset. 6 Conclusion Future research should address model selection. Potentially, if one were given a small labeled set or a query set of size greater than 1, one could use standard cross validation techniques. In addition, it may be possible to look to the theory of stability of algorithms to choose appropriate hyperparameters. There are also a number of possible extensions to the approach. For example, one could implement an iterative feedback framework: as the user provides positive feedback, this can be used to extend the query set and improve the ranking output. Finally, and most importantly, we are interested in applying this algorithm to wide-ranging real-world problems. References [1] R. Albert, H. Jeong, and A.-L. Barabási. Diameter of the world wide web. Nature, 401:130–131, 1999. [2] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proc. 7th International World Wide Web Conf., 1998. [3] R. Duda, P. Hart, and D. Stork. Pattern Classification. Wiley-Interscience, 2nd edition, 2000. [4] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 1989. [5] J.
Kleinberg. Authoritative sources in a hyperlinked environment. JACM, 46(5):604–632, 1999. [6] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000. [7] J. B. Tenenbaum, V. de Silva, and J. C. Langford. Global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000. [8] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Sch¨olkopf. Learning with local and global consistency. In 18th Annual Conf. on Neural Information Processing Systems, 2003.
Approximate Analytical Bootstrap Averages for Support Vector Classifiers D¨orthe Malzahn1,2 Manfred Opper3 1 Informatics and Mathematical Modelling, Technical University of Denmark, R.-Petersens-Plads, Building 321, Lyngby DK-2800, Denmark 2 Institute of Mathematical Stochastics, University of Karlsruhe, Englerstr. 2, Karlsruhe D-76131, Germany 3 Neural Computing Research Group, School of Engineering and Applied Science, Aston University, Birmingham B4 7ET, United Kingdom malzahnd@isp.imm.dtu.dk opperm@aston.ac.uk Abstract We compute approximate analytical bootstrap averages for support vector classification using a combination of the replica method of statistical physics and the TAP approach for approximate inference. We test our method on a few datasets and compare it with exact averages obtained by extensive Monte-Carlo sampling. 1 Introduction The bootstrap method [1, 2] is a widely applicable approach to assess the expected qualities of statistical estimators and predictors. Say, for example, in a supervised learning problem, we are interested in measuring the expected error of our favorite prediction method on test points¹ which are not contained in the training set D0. If we have no hold-out data, we can use the bootstrap approach to create artificial bootstrap data sets D by resampling training data with replacement from the original set D0. Each data point is taken with equal probability, i.e., some of the examples will appear several times in the bootstrap sample and others not at all. A proxy for the true average test error can be obtained by retraining the model on each bootstrap training set D, calculating the test error only on those points which are not contained in D, and finally averaging over all possible sets D.
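The Monte-Carlo bootstrap procedure just described can be sketched as follows. This is a hedged illustration: the 1-nearest-neighbour predictor and the toy data are our own stand-ins, not the paper's setup, and serve only to show the resample/retrain/out-of-bootstrap-test loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_nn_predict(Xtr, ytr, X):
    """1-nearest-neighbour classifier, standing in for 'our favorite prediction method'."""
    d = np.linalg.norm(X[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[np.argmin(d, axis=1)]

# Toy training pool D0: two well-separated Gaussian classes.
N = 40
X = np.vstack([rng.normal(0, 1, (N // 2, 2)), rng.normal(3, 1, (N // 2, 2))])
y = np.array([-1] * (N // 2) + [1] * (N // 2))

errs = []
for _ in range(200):                        # bootstrap sets D
    idx = rng.integers(0, N, size=N)        # resample with replacement from D0
    oob = np.setdiff1d(np.arange(N), idx)   # points not contained in D
    if len(oob) == 0:
        continue
    pred = one_nn_predict(X[idx], y[idx], X[oob])
    errs.append(np.mean(pred != y[oob]))    # test error only on out-of-bootstrap points
boot_err = float(np.mean(errs))             # Monte-Carlo proxy for the average test error
```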
While in general bootstrap averages can be approximated to any desired accuracy by the Monte-Carlo method, by generating a large enough number of random samples, it is useful to also have analytical approximations which avoid the time consuming retraining of the model for each new sample. Existing analytical approximations (based on asymptotic techniques), such as the delta method and the saddle point method, usually require explicit analytical formulas for the estimators of the parameters of a trained model (see e.g. [3]). These may not be easily obtained for more complex models in machine learning such as support vector machines (SVMs). Recently, we introduced a novel approach for the approximate calculation of bootstrap averages [4] which avoids explicit formulas for parameter estimates. Instead, we define statistical estimators and predictors implicitly as expectations with suitably defined pseudo-posterior Gibbs distributions over model parameters. (¹The average is over the unknown distribution of training data sets.) Within this formulation, it becomes possible to perform averages over bootstrap samples analytically using the so-called “replica trick” of statistical physics [5]. The latter involves a specific analytic continuation of the original statistical model. After the average, we are left with a typically intractable inference problem for an effective Bayesian probabilistic model. As a final step, we use techniques for approximate inference to treat the probabilistic model. This combination of techniques allows us to obtain approximate bootstrap averages by solving a set of nonlinear equations rather than by explicit sampling. Our method passed a first test successfully on the simple case of Gaussian process (GP) regression, where explicit predictions are still cheaply computed. Also, since the original model is a smooth probabilistic one, the success of approximate inference techniques may not be too surprising.
In this paper, we will address a more challenging problem: that of the support vector machine. In this case, the connection to a probabilistic model (a type of GP) can only be established by introducing a further parameter which must eventually diverge to obtain the SVM predictor. In this limit, the probabilistic model becomes highly nonregular and approaches a deterministic model. Hence it is not clear a priori whether our framework would survive these delicate limiting manipulations and still be able to give good approximate answers. 2 Hard Margin Support Vector Classifiers The hard margin SVM is a classifier which predicts binary class labels y = sign[ˆfD0(x)] ∈ {−1, 1} for inputs x ∈ IR^d based on a set of training points D0 = (z1, z2, . . . , zN), where zi = (xi, yi) (for details see [6]). The usually nonlinear activation function ˆfD0(x) (which we will call the “internal field”) is expressed as ˆfD0(x) = Σ_{i=1}^N yi αi K(x, xi), where K(x, x′) is a positive definite kernel and the set of αi’s is computed from D0 by solving a certain convex optimization problem. For bootstrap problems, we fix the pool of training data D0, and consider the statistics of the vectors ˆf_D = (ˆfD(x1), . . . , ˆfD(xN)) at all inputs xi ∈ D0, when the predictor ˆf is computed on randomly chosen subsets D of D0. Unfortunately, we do not have an explicit analytical expression for ˆfD; it is obtained implicitly as the vector f = (f1, . . . , fN) which solves the constrained optimization problem: Minimize f^T K^{−1} f subject to fi yi ≥ 1 for all i such that (xi, yi) ∈ D, (1) where K is the kernel matrix with elements K(xi, xj).
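Problem (1) is a convex quadratic program. As a hedged illustration (not the authors' solver), one can solve the standard bias-free dual of (1), minimize ½αᵀQα − Σiαi with αi ≥ 0 and Qij = yiyjK(xi, xj), by simple projected gradient descent, and then check the margin constraints yi ˆf(xi) ≥ 1:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix; a positive definite choice for K."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def hard_margin_svm(X, y, gamma=0.5, eta=0.1, iters=20000):
    """Minimize 0.5*a^T Q a - sum(a) subject to a >= 0, with
    Q_ij = y_i y_j K(x_i, x_j), by projected gradient descent
    (a minimal stand-in for a proper QP solver)."""
    K = rbf(X, X, gamma)
    Q = (y[:, None] * y[None, :]) * K
    a = np.zeros(len(y))
    for _ in range(iters):
        a = np.maximum(0.0, a - eta * (Q @ a - 1.0))  # gradient step, then project
    return a, K

# A tiny separable toy set (our own choice).
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha, K = hard_margin_svm(X, y)
f_hat = K @ (y * alpha)    # internal field at the training points
margins = y * f_hat        # KKT conditions give y_i f(x_i) >= 1 at the optimum
```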
3 Deriving Predictors from Gibbs Distributions In this section, we show how to obtain the SVM predictor ˆfD formally as the expectation over a certain type of Gibbs distribution over possible f’s, in the form ˆfD = ⟨f⟩ = ∫ df f P[f|D], (2) with respect to a density P[f|D] = (1/Z) µ[f] P(D|f) which is constructed from a suitable prior distribution µ[f], a certain type of “likelihood” P(D|f), and a normalizing partition function Z = ∫ df µ[f] P(D|f). (3) Our general notation suggests that this principle applies to a variety of estimators and predictors of the MAP type. To represent the SVM in this framework, we use a well established relation between SVMs and Gaussian process (GP) models (see e.g. [7, 8]). We choose the GP prior µ[f] = [(2π)^N β^{−N} det(K)]^{−1/2} exp(−(β/2) f^T K^{−1} f). (4) The pseudo-likelihood² is defined by P(D|f) = ∏_{j: zj∈D} P(zj|fj) = ∏_{j: zj∈D} Θ(yj fj − 1), (5) where Θ(u) = 1 for u > 0 and 0 otherwise. In the limit β → ∞, the measure P[f|D] ∝ µ[f] P(D|f) obviously concentrates at the vector ˆf which solves Eq. (1). 4 Analytical Bootstrap Averages Using the Replica Trick With the bootstrap method, we would like to compute average properties of the estimator ˆfD, Eq. (2), when the datasets D are random subsamples of D0. An important class of such averages is of the type of a generalization error ε, i.e., expectations of loss functions g(ˆfD(xi); xi, yi) over test points i, that is, those examples which are in D0 but not contained in the bootstrap training set D. Hence, we define ε ≐ (1/N) Σ_{i=1}^N ED[δ_{si,0} g(ˆfD(xi); xi, yi)] / ED[δ_{si,0}], (6) where ED[· · ·] denotes the expectation over random bootstrap samples D created from the original training set D0. Each sample D is represented by a vector of “occupation” numbers s = (s1, . . . , sN), where si is the number of times example zi appears in the set D and Σ_{i=1}^N si = S.
The Kronecker symbol, defined by δ_{s_i,0} = 1 for s_i = 0 and 0 otherwise, guarantees that only realizations of bootstrap training sets D which do not contain the test point contribute to Eq. (6). For fixed bootstrap sample size S, the distribution of the s_i's is multinomial. It is simpler (and does not make a big difference when S is sufficiently large) to work with a Poisson distribution for the size of the set D, with S as the mean number of data points in the sample. Then we get the simpler, factorizing joint distribution

    P(s) = ∏_{i=1}^N (S/N)^{s_i} e^{−S/N} / s_i!    (7)

for the occupation numbers s_i. From Eq. (7) we get E_D[δ_{s_i,0}] = e^{−S/N}. Since we can represent general loss functions g by their Taylor expansions in powers of f̂_D (or polynomial approximations in the case of non-smooth losses), it is sufficient to consider only monomials g(f̂_D(x); x, y) = (f̂_D(x))^r for arbitrary r in the following, and to regain the general case at the end by resumming the series. Using the definition of the estimator f̂_D, Eq. (2), the bootstrap expectation Eq. (6) can be rewritten as

    ε(S) = (1/N) Σ_{i=1}^N E_D[δ_{s_i,0} Z^{−r} ∫ ∏_{a=1}^r {df^a µ[f^a] f^a_i ∏_{j=1}^N (P(z_j|f^a_j))^{s_j}}] / E_D[δ_{s_i,0}],    (8)

which involves r copies, i.e. replicas f^1, ..., f^r, of the parameter vector f (the superscripts should not be confused with powers of the variables). If the partition functions Z in the numerator of Eq. (8) were raised to positive powers rather than negative ones, one could perform the bootstrap average over the distribution Eq. (7) analytically. To enable such an analytical average over the vector s (which is the "quenched disorder" in the language of statistical physics), one introduces the following "trick", extensively used in the statistical physics of amorphous systems [5].
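From Eq. (7), a point is absent from a Poisson bootstrap sample with probability E_D[δ_{s_i,0}] = e^{−S/N}. A quick simulation confirms this (our own sketch; Knuth's Poisson sampler is used because the standard random module lacks one):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's Poisson sampler; fine for the small rates used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def occupation_zero_fraction(S, N, trials, seed=0):
    rng = random.Random(seed)
    lam = S / N   # each s_i ~ Poisson(S/N) under the factorized model (7)
    zero = sum(1 for _ in range(trials) if poisson(lam, rng) == 0)
    return zero / trials

S, N = 200, 200
frac = occupation_zero_fraction(S, N, trials=100000)
print(frac, math.exp(-S / N))  # both close to e^{-1} ~ 0.368
```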
We introduce the auxiliary quantity εn(S) = 1 e−S N N N X i=1 ED  δsi,0 Zn−r Z r Y a=1   df a µ[f a] f a i N Y j=1 (P(zj|f a j ))sj      (9) for arbitrary real n, which allows to write ε(S) = lim n→0 εn(S). (10) The advantage of this definition is that for integers n ≥r, εn(S) can be represented in terms of n replicas f 1, f 2, . . . , f n of the original variable f for which an explicit average over si’s is possible. At the end of all calculations an analytical continuation to arbitrary real n and the limit n →0 must be performed. For integer n ≥r, we use the definition of the partition function Eq. (3), exchange the expectation over datasets with the expectation over f’s and use the explicit form of the distribution Eq. (7) to perform the average over bootstrap sets. The resulting expressions can be rewritten as 4 εn(S) = Ξ\i n N N X i=1 ** r Y a=1 f a i ++ \i , (11) where ⟨⟨· · · ⟩⟩\i denotes an average with respect to the so called cavity distribution P\i for replicated variables ⃗fi = (f 1 i , . . . , f n i ) defined by P\i(⃗fi) ∝ 1 Li(⃗fi) Z N Y j=1,j̸=i d⃗fj P(⃗f1, . . . , ⃗fN) . (12) The joint distribution of replica variables P( ⃗f1, . . . , ⃗fN) ∝Qn a=1 µ[f a] QN j=1 Lj(⃗fj) is defined by the new likelihoods Lj(⃗fj) = exp " −S N 1 − n Y a=1 P(zj|f a j ) !# . (13) 5 TAP Approximation We have mapped the original bootstrap problem to an inference problem for an effective Bayesian probabilistic model (the hidden variables have the dimensionality N × n) for which we have to find a tractable approximation which allows analytical continuation of n →0 and β →∞. We use the adaptive TAP approach of Opper and Winther [9] which is often found to give more accurate results than, e.g., a simple mean field or a variational Gaussian approximation. The ADATAP approach replaces the analytically intractable cavity distribution Eq. (12) by a Gaussian distribution. 
In our case this can be written as P\i(⃗fi) ∝e−1 2 ⃗f T Λc(i) ⃗f+γc(i)T ⃗f , (14) where the parameters Λc and γc are computed selfconsistently from the dataset D0 by solving a set of coupled nonlinear equations. Details are given in the appendix. The form Eq. (14) allows a simple way of dealing with the parameters n and β. We utilize the exchangeability of variables f 1 i , . . . , f n i and assume replica symmetry and further 4P\i(⃗fi), Eq. (12), has the normalizing partition function Ξ\i n where Ξ\i n →1 for n →0. introduce an explicit scaling of all parameters with β. This scaling was found to make all final expressions finite in the limit β →∞. We set Λab c (i) = Λc(i) = β2λc(i) for a ̸= b (15) Λaa c (i) = Λ0 c(i) = β2λ0 c(i) and γa c (i) = βγc(i) for all a = 1, . . . , n . We also assume that ∆λc(i) .= β−1(Λ0 c(i) −Λc(i)) remains finite for β →∞. The ansatz Eq. (15) keeps the number of adjustable parameters independent of n and allows to perform the “replica limit” n →0 and the “SVM-limit” β →∞in all equations analytically before we start the final numerical parameter optimization. Computing the expectation Eq. (11) with Eq. (14) and (15) and resumming the power series over r yields the final theoretical expression for Eq. (6) ε(S) = 1 N N X i=1 Z dG(u) g γc(i) + u p −λc(i) ∆λc(i) ; xi, yi ! (16) where dG(u) = du(2π)−1 2 e−u2 2 and g is an arbitrary loss function. With g( ˆfD(xi); xi, yi) = Θ(−yi ˆfD(xi)) we obtain the bootstrapped classification error ε(S) = 1 N N X i=1 Φ −yiγc(i) p −λc(i) ! (17) where Φ(x) = R x −∞dG(u). Besides the computation of generalization errors, we can use our method to quantify the uncertainty of the SVM prediction at test points. This can be obtained by computing the bootstrap distribution of the “internal fields” ˆfD(xi) at a test input xi. This is obtained from Eq. 
(16) by inserting g(f̂_D(x_i); x_i, y_i) = δ(f̂_D(x_i) − h), using the Dirac δ-function:

    ρ_i(h) = ∆λ_c(i) / √(−2πλ_c(i)) · exp(−(h ∆λ_c(i) − γ_c(i))² / (2(−λ_c(i)))),    (18)

i.e., m^c_i = γ_c(i)/∆λ_c(i) and V^c_ii = −λ_c(i)/(∆λ_c(i))² are the predicted mean and variance of the internal field. (The predicted posterior variance of the internal field is (β∆λ_c(i))^{−1} and goes to zero as β → ∞, indicating the transition to a deterministic model.) It is possible to extend the result Eq. (18) to "real" test inputs x ∉ D0, which is of greater importance for applications. This replaces ∆λ_c(i), γ_c(i), λ_c(i) by

    ∆λ_c(x) = (K(x, x) − Σ_{i=1}^N K(x, x_i) ∆λ(i) T_i(x))^{−1}    (19)
    γ_c(x) = ∆λ_c(x) Σ_{i=1}^N T_i(x) γ(i)
    λ_c(x) = (∆λ_c(x))² Σ_{i=1}^N (T_i(x))² λ(i)

with T_i(x) = Σ_{j=1}^N K(x, x_j)(I + diag(∆λ)K)^{−1}_{ji}. The parameters ∆λ(i), γ(i), λ(i) are determined from D0 according to Eq. (22), (23).

Figure 1: Left: Average bootstrapped classification error versus bootstrap sample size S for hard margin support vector classification on different data sets (Crabs, N=200; Pima, N=532; Sonar, N=208; Wisconsin, N=683; simulation: symbols, theory: lines). Right: Bootstrapped distribution (density) of the internal field for Sonar data at a test input x ∉ D0. Most distributions are Gaussian-like and in good agreement with the theory Eq. (18). We show an atypical case (simulation: histogram, theory: line) which nevertheless predicts the relative weights for both class labels fairly well (S: 0.376, T: 0.405). The inset shows true versus estimated values of the probability p(−1|x) for predicting label y = −1.

6 Results for Bootstrap of Hard Margin Support Vector Classifiers

We determined the set of theoretical parameters by solving Eq.
(21)-(23) for four benchmark data sets D0 [10] and different sample sizes S, using an RBF kernel K(x, x') = exp(−(1/2) Σ_{k=1}^d v_k (x_k − x'_k)²) with individually customized hyperparameters v_k. The left panel of Fig. 1 compares our theoretical results for the bootstrapped learning curves obtained by Eq. (17) (lines) with results from Monte-Carlo simulations (symbols). The Gaussian approximation of the cavity distribution is based on the assumption that the model prediction at a training input is influenced by a sufficiently large number of neighboring inputs. We expect it to work well for sufficiently broad kernel functions. This was the case for the Crabs and Wisconsin data sets, where our theory is very accurate. It correctly predicts the interesting non-monotonic learning curve for the Wisconsin data (inset of Fig. 1, left). In comparison, the Sonar and Pima data sets were learnt with narrow RBF kernels. Here, the quality of the TAP approximation deteriorates. However, our results still provide a reasonable estimate for the bootstrapped generalization error at sample size S = N. While for practical applications of estimating the "true" generalization error using Efron's 0.632 bootstrap estimator the case S = N is of main importance, it is also interesting to discuss the limit of extreme oversampling S → ∞. Since the hard margin SVM gains no additional information from multiple presentations of the same data point, in this limit all bootstrap sets D supply exactly the same information as the data set D0, and the data average E_D[...] becomes trivial. Variances with respect to E_D[...] go to zero. With Eq. (21)-(23), we can write the average prediction m_i at input x_i ∈ D0 as m_i = Σ_{j=1}^N y_j α_j K(x_i, x_j) with weights α_j = (∆λ(j)∆λ_c(j))/(∆λ(j)+∆λ_c(j)) · (y_j m_j − y_j m^c_j), and recover for S → ∞ the Kuhn-Tucker conditions α_i ≥ 0 and α_i Θ(y_i m_i − 1) = 0. The bootstrapped generalization error Eq.
(17) is found to converge to the approximate leave-one-out error of Opper and Winther [8]:

    lim_{S→∞} ε(S) = (1/N) Σ_{i=1}^N Θ(−y_i m^c_i) = (1/N) Σ_{i∈SV} Θ(α_i / [K^{−1}_{SV}]_{ii} − 1),    (20)

where the weights α_i are given by the SVM algorithm on D0 and K_{SV} is the kernel matrix on the set of SVs. While the leave-one-out estimate is a non-smooth function of the model parameters, Efron's 0.632 bootstrap estimate [2] of the generalization error, which uses ε(N), results within our theory in a differentiable expression, Eq. (17), which may be used for kernel hyperparameter estimation. Preliminary results are promising. The right panel of Fig. 1 shows results for the bootstrapped distribution of the internal field on test inputs x ∉ D0. The data set D0 contained N = 188 Sonar data points, and the bootstrap is at sample size S = N. We find that the true distribution is often very Gaussian-like and well described by the theory, Eq. (18). Figure 1 (right) shows a rare case where a bi-modal distribution (histogram) is found. Nevertheless, the Gaussian (line) predicted by our theory estimates the probability p(−1|x) of a negative output quite accurately in comparison to the probability obtained from the simulation. Both SVM training and the computation of our approximate SVM bootstrap require running iterative algorithms. We compared the time t_train for training a single SVM on each of the four benchmark data sets D0 with the time t_theo needed to solve our theory for SVM bootstrap estimates on these data for S = N. For sufficiently broad kernels we find t_train ≥ t_theo, and our theory is reliable. The exception is extremely narrow kernels. For the latter (Pima example in Fig. 1, left) we find t_theo > t_train; here our theory is still faster to compute than a good Monte-Carlo estimate of the bootstrap, but less reliable.
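The Monte-Carlo estimate of the bootstrapped generalization error (6), against which the theory is benchmarked, amounts to the following loop (our own sketch). A nearest-class-mean rule on 1-d toy data stands in for the much costlier SVM retraining:

```python
import random

def fit(sample):
    # stand-in for SVM training: nearest-class-mean classifier on 1-d inputs
    sums = {}
    for x, y in sample:
        s, n = sums.get(y, (0.0, 0))
        sums[y] = (s + x, n + 1)
    means = {y: s / n for y, (s, n) in sums.items()}
    return lambda x: min(means, key=lambda y: abs(x - means[y]))

def mc_bootstrap_error(data, S, trials=500, seed=0):
    # Monte-Carlo estimate of eps(S), Eq. (6): resample with replacement,
    # retrain, and average the error over points left out of each sample
    rng = random.Random(seed)
    N = len(data)
    errors = tests = 0
    for _ in range(trials):
        idx = [rng.randrange(N) for _ in range(S)]
        predict = fit([data[i] for i in idx])
        occupied = set(idx)
        for i in range(N):
            if i not in occupied:          # delta_{s_i,0}: genuine test points
                x, y = data[i]
                tests += 1
                errors += (predict(x) != y)
    return errors / tests

data = [(-2.0, -1), (-1.5, -1), (-1.0, -1), (1.0, 1), (1.5, 1), (2.0, 1)]
eps = mc_bootstrap_error(data, S=len(data))
print(eps)   # small for this well-separated toy problem
```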
7 Outlook

Our experiments on SVMs show that the approximate replica bootstrap approach is robust enough to apply even to models which fit into our framework only after some delicate limiting process. The SVM is also an important application because the prediction for each dataset requires the solution of a costly optimization problem. Experiments on benchmark data showed that our theory is appreciably faster to compute than a good Monte-Carlo estimate of the bootstrap and yields reliable results for kernels which are sufficiently broad. It will be interesting to apply our approach to other kernel methods such as kernel PCA. Since our method is based on a fairly general framework, we will also investigate whether it can be applied to models where the bootstrapped parameters have a more complicated structure, such as trees or hidden Markov models.

Acknowledgments

DM gratefully acknowledges financial support from the Copenhagen Image and Signal Processing Graduate School and from the Postgraduate Programme "Natural Disasters" at the University of Karlsruhe.

Appendix: TAP Equations

The ADATAP approach computes the set of parameters Λ_c(i), γ_c(i) by constructing an alternative set of tractable likelihoods L̂_j(f⃗) = exp(−(1/2) f⃗^T Λ(j) f⃗ + γ(j)^T f⃗), defining an auxiliary Gaussian joint distribution P_G(f⃗_1, ..., f⃗_N) ∝ ∏_{a=1}^n µ(f^a) ∏_{j=1}^N L̂_j(f⃗_j). We use replica symmetry and a specific scaling of the parameters with β: γ^a(j) = βγ(j), Λ^{aa}(j) = Λ^0(j) = β²λ^0(j) for all a, Λ^{ab}(j) = Λ(j) = β²λ(j) for a ≠ b, and ∆λ(j) = β^{−1}(Λ^0(j) − Λ(j)). All unknown parameters are found by moment matching: we assume that the first two marginal moments

    m_i = lim_{n→0} ⟨⟨f^a_i⟩⟩,  V_ii = lim_{n→0} ⟨⟨f^a_i f^b_i⟩⟩ − (m_i)²,  χ_ii = β lim_{n→0} (⟨⟨f^a_i f^a_i⟩⟩ − ⟨⟨f^a_i f^b_i⟩⟩)

of the variables f⃗_i can be computed 1) by marginalizing P_G and 2) by using the relations between the cavity distribution and the marginal distributions, P(f⃗_i) ∝ L_i(f⃗_i)P\i(f⃗_i) as well as P_G(f⃗_i) ∝ L̂_i(f⃗_i)P\i(f⃗_i), for all i = 1, ..., N.
This yields χii = χc ii  1 −(1 −e−S N )Φ(∆c i)  (21) mi = mc i  1 −(1 −e−S N )Φ(∆c i)  + yi(1 −e−S N ) Φ(∆c i) + p V c ii √ 2π e−1 2 (∆c i )2 ! Vii = V c ii  1 −(1 −e−S N )Φ(∆c i)  + (1 −yimi)(yimi −yimc i) where mc i = γc(i) ∆λc(i), V c ii = − λc(i) (∆λc(i))2 , χc ii = 1 ∆λc(i) and ∆c i = 1−yimc i √ V c ii . Further χii = (G)ii (22) mi = (G γ)i Vii = −(G diag(λ) G)ii with the N × N matrix G = (K−1 + diag(∆λ))−1 and χii = 1 ∆λ(i) + ∆λc(i) (23) mi = γ(i) + γc(i) ∆λ(i) + ∆λc(i) Vii = − λ(i) + λc(i) (∆λ(i) + ∆λc(i))2 We solve Eq. (21)-(23) by iteration using Eqs. (21) and (22) to evaluate the moments {mi, Vii, χii} and Eq. (23) to update the sets of parameters {γc(i), ∆λc(i), λc(i)} and {γ(i), ∆λ(i), λ(i)}, respectively. Reasonable start values are ∆λ(i) = ∆λ, λ(i) = −∆λ, γ(i) = yi∆λ where ∆λ is obtained as the root of 0 = 1 −1 N PN i=1 ωi∆λ 1+ωi∆λ −(1 −(1 − e−S/N)Φ(∆c)) with ∆c = −0.5 and ωi are the eigenvalues of kernel matrix K. References [1] B. Efron. Ann. Statist., 7: 1-26, 1979. [2] B. Efron, R. J. Tibshirani. An Introduction to the Bootstrap. Monographs on Statistics and Applied Probability 57, Chapman & Hall, 1993. [3] J. Shao, D. Tu, The Jackknife and Bootstrap, Springer Series in Statistics, Springer, 1995. [4] D. Malzahn, M. Opper, A statistical mechanics approach to approximate analytical Bootstrap averages, NIPS 15, S. Becker, S. Thrun, K. Obermayer eds., MIT Press, 2003. [5] M. M´ezard, G. Parisi, M. A. Virasoro, Spin Glass Theory and Beyond, Lecture Notes in Physics 9, World Scientific, 1987. [6] B. Sch¨olkopf, C. J. C. Burges, A. J. Smola (eds.), Advances in Kernel Methods: Support Vector Learning, MIT, Cambridge, MA, 1999. [7] P. Sollich, Probabilistic interpretation and Bayesian methods for Support Vector Machines, In: ICANN99, pp.91-96, Springer 1999. [8] M. Opper, O. Winther, Neural Computation, 12: 2655-2684, 2000. [9] M. Opper, O. Winther, Phys. Rev. Lett. , 86: 3695, 2001. 
[10] From http://www1.ics.uci.edu/~mlearn/MLSummary.html and http://www.stats.ox.ac.uk/pub/PRNN/.
An Infinity-sample Theory for Multi-category Large Margin Classification Tong Zhang IBM T.J. Watson Research Center Yorktown Heights, NY 10598 tzhang@watson.ibm.com Abstract The purpose of this paper is to investigate infinity-sample properties of risk minimization based multi-category classification methods. These methods can be considered as natural extensions to binary large margin classification. We establish conditions that guarantee the infinity-sample consistency of classifiers obtained in the risk minimization framework. Examples are provided for two specific forms of the general formulation, which extend a number of known methods. Using these examples, we show that some risk minimization formulations can also be used to obtain conditional probability estimates for the underlying problem. Such conditional probability information will be useful for statistical inferencing tasks beyond classification. 1 Motivation Consider a binary classification problem where we want to predict label y ∈{±1} based on observation x. One of the most significant achievements for binary classification in machine learning is the invention of large margin methods, which include support vector machines and boosting algorithms. Based on a set of observations (X1, Y1), . . . , (Xn, Yn), a large margin classification algorithm produces a decision function ˆfn by empirically minimizing a loss function that is often a convex upper bound of the binary classification error function. Given ˆfn, the binary decision rule is to predict y = 1 if ˆfn(x) ≥0, and to predict y = −1 otherwise (the decision rule at ˆfn(x) = 0 is not important). In the literature, the following form of large margin binary classification is often encountered: we minimize the empirical risk associated with a convex function φ in a pre-chosen function class Cn: ˆfn = arg min f∈Cn 1 n n X i=1 φ(f(Xi)Yi). 
(1) Originally such a scheme was regarded as a compromise to avoid computational difficulties associated with direct classification error minimization, which often leads to an NP-hard problem. The current view in the statistical literature interprets such methods as algorithms to obtain conditional probability estimates. For example, see [3, 6, 9, 11] for some related studies. This point of view allows people to show the consistency of various large margin methods: that is, in the large sample limit, the obtained classifiers achieve the optimal Bayes error rate. For example, see [1, 4, 7, 8, 10, 11]. The consistency of a learning method is certainly a very desirable property, and one may argue that a good classification method should be consistent in the large sample limit. Although statistical properties of binary classification algorithms based on the risk minimization formulation (1) are quite well-understood due to many recent works such as those mentioned above, there are much fewer studies on risk minimization based multicategory problems which generalizes the binary large margin method (1). The complexity of possible generalizations may be one reason. Another reason may be that one can always estimate the conditional probability for a multi-category problem using the binary classification formulation (1) for each category, and then pick the category with the highest estimated conditional probability (or score).1 However, it is still useful to understand whether there are more natural alternatives, and what kind of risk minimization formulation which generalizes (1) can be used to yield consistent classifiers in the large sample limit. An important step toward this direction has recently been taken in [5], where the authors proposed a multi-category extension of the support vector machine that is Bayes consistent (note that there were a number of earlier proposals that were not consistent). 
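A minimal instance of scheme (1) can be sketched as follows (our own example, not from the paper): take φ to be the logistic loss, take C_n to be a one-parameter class of linear functions f(x) = wx, and fit w by gradient descent on the empirical φ-risk over toy data:

```python
import math

def phi(v):
    # convex upper bound of the 0-1 loss: logistic loss
    return math.log(1.0 + math.exp(-v))

def fit_w(data, lr=0.1, steps=2000):
    w = 0.0
    n = len(data)
    for _ in range(steps):
        # gradient of (1/n) sum_i phi(w * x_i * y_i):  phi'(v) = -1/(1 + e^v)
        g = sum(-x * y / (1.0 + math.exp(w * x * y)) for x, y in data) / n
        w -= lr * g
    return w

data = [(-2.0, -1), (-0.5, -1), (0.7, 1), (1.8, 1)]
w = fit_w(data)
preds = [1 if w * x >= 0 else -1 for x, _ in data]
print(w > 0, preds)   # a positive slope separates this toy sample
```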
The purpose of this paper is to generalize their investigation so as to include a much wider class of risk minimization formulations that can lead to consistent classifiers in the infinity-sample limit. We shall see that there is a rich structure in risk minimization based multi-category classification formulations. Multi-category large margin methods have started to draw more attention recently. For example, in [2], learning bounds for some multi-category convex risk minimization methods were obtained, although the authors did not study possible choices of Bayes consistent formulations.

2 Multi-category classification

We consider the following K-class classification problem: we would like to predict the label y ∈ {1, ..., K} of an input vector x. In this paper, we only consider the simplest scenario with 0-1 classification loss: we have a loss of 0 for a correct prediction, and a loss of 1 for an incorrect prediction. In binary classification, the class label can be determined using the sign of a decision function. This can be generalized to the K-class classification problem as follows: we consider K decision functions fc(x), where c = 1, ..., K, and we predict the label y of x as:

    T(f(x)) = arg max_{c∈{1,...,K}} fc(x),    (2)

where we denote by f(x) the vector function f(x) = [f1(x), ..., fK(x)]. Note that if two or more components of f achieve the same maximum value, then we may choose any of them as T(f). In this framework, fc(x) is often regarded as a scoring function for category c that is correlated with how likely x belongs to category c (compared with the remaining K − 1 categories). The classification error is given by:

    ℓ(f) = 1 − EX P(Y = T(f(X))|X).

Note that only the relative strength of fc compared with the alternatives is important. In particular, the decision rule given in (2) does not change when we add the same numerical quantity to each component of f(x).
This allows us to impose one constraint on the vector f(x) which decreases the degree of freedom K of the K-component vector f(x) to K −1. 1This approach is often called one-versus-all or ranking in machine learning. Another main approach is to encode a multi-category classification problem into binary classification sub-problems. The consistency of such encoding schemes can be difficult to analyze, and we shall not discuss them. For example, in the binary classification case, we can enforce f1(x)+f2(x) = 0, and hence f(x) can be represented as [f1(x), −f1(x)]. The decision rule in (2), which compares f1(x) ≥f2(x), is equivalent to f1(x) ≥0. This leads to the binary classification rule mentioned in the introduction. In the multi-category case, one may also interpret the possible constraint on the vector function f, which reduces its degree of freedom from K to K −1 based on the following reasoning. In many cases, we seek fc(x) as a function of p(Y = c|x). Since we have a constraint PK c=1 p(Y = c|x) = 1 (implying that the degree of freedom for p(Y = c|x) is K −1), the degree of freedom for f is also K −1 (instead of K). However, we shall point out that in the algorithms we formulate below, we may either enforce such a constraint that reduces the degree of freedom of f, or we do not impose any constraint, which keeps the degree of freedom of f to be K. The advantage of the latter is that it allows the computation of each fc to be decoupled. It is thus much simpler both conceptually and numerically. Moreover, it directly handles multiple-label problems where we may assign each x to multiple labels of y ∈{1, . . . , K}. In this scenario, we do not have a constraint. In this paper, we consider an empirical risk minimization method to solve a multi-category problem, which is of the following general form: ˆfn = arg min f∈Cn 1 n n X i=1 ΨYi(f(Xi)). (3) As we shall see later, this method is a natural generalization of the binary classification method (1). 
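The decision rule (2) and its invariance under adding a common constant can be sketched directly (the toy scores are illustrative):

```python
def decide(scores):
    # T(f(x)) = argmax_c f_c(x); ties are broken toward the smallest index
    return max(range(len(scores)), key=lambda c: scores[c])

f = [0.2, 1.5, -0.3]
shifted = [v + 10.0 for v in f]
print(decide(f), decide(shifted))   # same predicted class either way
```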
Note that one may consider an even more general form with ΨY (f(X)) replaced by ΨY (f(X), X), which we don’t study in this paper. From the standard learning theory, one can expect that with appropriately chosen Cn, the solution ˆfn of (3) approximately minimizes the true risk R( ˆf) with respect to the unknown underlying distribution within the function class Cn, R(f) = EX,Y ΨY (f(X)) = EXL(P(·|X), f(X)), (4) where P(·|X) = [P(Y = 1|X), . . . , P(Y = K|X)] is the conditional probability, and L(q, f) = K X c=1 qcΨc(f). (5) In order to understand the large sample behavior of the algorithm based on solving (3), we first need to understand the behavior of a function f that approximately minimizes R(f). We introduce the following definition (also referred to as classification calibrated in [1]): Definition 2.1 Consider Ψc(f) in (4). We say that the formulation is admissible (classification calibrated) on a closed set Ω⊆[−∞, ∞]K if the following conditions hold: ∀c, Ψc(·) : Ω→(−∞, ∞] is bounded below and continuous; ∩c{f : Ψc(f) < ∞} is non-empty and dense in Ω; ∀q, if L(q, f ∗) = inff L(q, f), then f ∗ c = supk f ∗ k implies qc = supk qk. Since we allow Ψc(f) = ∞, we use the convention that qcΨc(f) = 0 when qc = 0 and Ψc(f) = ∞. The following result relates the approximate minimization of the Ψ risk to the approximate minimization of classification error: Theorem 2.1 Let B be the set of all Borel measurable functions. For a closed set Ω⊂ [−∞, ∞]K, let BΩ= {f ∈B : ∀x, f(x) ∈Ω}. If Ψc(·) is admissible on Ω, then for a Borel measurable distribution, R(f) →infg∈BΩR(g) implies ℓ(f) →infg∈B ℓ(g). Proof Sketch. First we show that the admissibility implies that ∀ϵ > 0, ∃δ > 0 such that ∀q and x: inf qc≤supk qk−ϵ{L(q, f) : fc = sup k fk} ≥inf g∈ΩL(q, g) + δ. (6) If (6) does not hold, then ∃ϵ > 0, and a sequence of (cm, f m, qm) with f m ∈Ωsuch that f m cm = supk f m k , qm cm ≤supk qm k −ϵ, and L(qm, f m) −infg∈ΩL(qm, g) →0. 
Taking a limit point of (cm, f m, qm), and using the continuity of Ψc(·), we obtain a contradiction (technical details handling the infinity case are skipped). Therefore (6) must be valid. Now we consider a vector function f(x) ∈ΩB. Let q(x) = P(·|x). Given X, if P(Y = T(f(X))|X) ≥P(Y = T(q(X))|X)+ϵ, then equation (6) implies that L(q(X), f(X)) ≥ infg∈ΩL(q(X), g) + δ. Therefore ℓ(f) −inf g∈B ℓ(g) =EX[P(Y = T(q(X))|X) −P(Y = T(f(X))|X)] ≤ϵ + EXI(P(Y = T(q(X))|X) −P(Y = T(f(X))|X) > ϵ) ≤ϵ + EX LX(q(X), f(X)) −infg∈BΩLX(q(X), g) δ =ϵ + R(f) −infg∈BΩR(g) δ . In the above derivation we use I to denote the indicator function. Since ϵ and δ are arbitrary, we obtain the theorem by letting ϵ →0. 2 Clearly, based on the above theorem, an admissible risk minimization formulation is suitable for multi-category classification problems. The classifier obtained from minimizing (3) can approach the Bayes error rate if we can show that with appropriately chosen function class Cn, approximate minimization of (3) implies approximate minimization of (4). Learning bounds of this forms have been very well-studied in statistics and machine learning. For example, for large margin binary classification, such bounds can be found in [4, 7, 8, 10, 11, 1], where they were used to prove the consistency of various large margin methods. In order to achieve consistency, it is also necessary to take a sequence of function classes Cn (C1 ⊂C2 ⊂· · · ) such that ∪nCn is dense in the set of Borel measurable functions. The set Cn has the effect of regularization, which ensures that R( ˆfn) ≈inff∈Cn R(f). It follows that as n →∞, R( ˆfn) P→inff∈B R(f). Theorem 2.1 then implies that ℓ( ˆfn) P→inff∈B ℓ(f). The purpose of this paper is not to study similar learning bounds that relate approximate minimization of (3) to the approximate minimization of (4). See [2] for a recent investigation. We shall focus on the choices of Ψ that lead to admissible formulations. 
We pay special attention to the case where each Ψc(f) is a convex function of f, so that the resulting formulation becomes computationally more tractable. Instead of working with the general form of Ψc in (4), we focus on two specific choices listed in the next two sections.

3 Unconstrained formulations

We consider the unconstrained formulation with the following choice of Ψ:

    Ψc(f) = φ(fc) + s(Σ_{k=1}^K t(fk)),    (7)

where φ, s and t are appropriately chosen continuously differentiable functions. The first term, which has a relatively simple form, depends on the label c. The second term is independent of the label, and can be regarded as a normalization term. Note that this function is symmetric with respect to the components of f; this choice treats all potential classes equally. It is also possible to treat different classes differently (e.g. replacing φ(fc) by φc(fc)), which can be useful if we associate different classification losses with different kinds of errors.

3.1 Optimality equation and probability model

Using (7), the conditional true risk (5) can be written as:

    L(q, f) = Σ_{c=1}^K qc φ(fc) + s(Σ_{c=1}^K t(fc)).

In the following, we study the property of the optimal vector f∗ that minimizes L(q, f) for a fixed q. Given q, the optimal solution f∗ of L(q, f) satisfies the following first-order condition:

    qc φ′(f∗c) + µ_{f∗} t′(f∗c) = 0   (c = 1, ..., K),    (8)

where the quantity µ_{f∗} = s′(Σ_{k=1}^K t(f∗k)) is independent of c. Clearly this equation relates qc to f∗c for each component c. The relationship of q and f∗ defined by (8) can be regarded as the (infinite sample-size) probability model associated with the learning method (3) with Ψ given by (7). The following result presents a simple criterion to check admissibility; we skip the proof for simplicity. Most of our examples satisfy the condition.

Proposition 3.1 Consider (7). Assume Ψc(f) is continuous on [−∞, ∞]^K and bounded below.
If s′(u) ≥0 and ∀p > 0, pφ′(f) + t′(f) = 0 has a unique solution fp that is an increasing function of p, then the formulation is admissible. If s(u) = u, the condition ∀p > 0 in Proposition 3.1 can be replaced by ∀p ∈(0, 1). 3.2 Decoupled formulations We let s(u) = u in (7). The optimality condition (8) becomes qcφ′(f ∗ c ) + t′(f ∗ c ) = 0 (c = 1, . . . , K). (9) This means that we have K decoupled equalities, one for each fc. This is the simplest and in the author’s opinion, the most interesting formulation. Since the estimation problem in (3) is also decoupled into K separate equations, one for each component of ˆfn, this class of methods are computationally relatively simple and easy to parallelize. Although this method seems to be preferable for multi-category problems, it is not the most efficient way for two-class problem (if we want to treat the two classes in a symmetric manner) since we have to solve two separate equations. We only need to deal with one equation in (1) due to the fact that an effective constraint f1 + f2 = 0 can be used to reduce the number of equations. This variable elimination has little impact if there are many categories. In the following, we list some examples of multi-category risk minimization formulations. They all satisfy the admissibility condition in Proposition 3.1. We focus on the relationship of the optimal optimizer function f∗(q) and the conditional probability q. For simplicity, we focus on the choice φ(u) = −u. 3.2.1 φ(u) = −u and t(u) = eu We obtain the following probability model: qc = ef ∗ c . This formulation is closely related to the maximum-likelihood estimate with conditional model qc = efc/ PK k=1 efk (logistic regression). In particular, if we choose a function class such that the normalization condition PK k=1 efk = 1 holds, then the two formulations are identical. However, they become different when we do not impose such a normalization condition. 
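The probability model of Section 3.2.1 can be checked numerically (our own sketch): for fixed q, gradient descent on L(q, f) = Σ_c(−q_c f_c + e^{f_c}) drives each e^{f_c} toward q_c, since the decoupled optimality condition (9) here reads −q_c + e^{f∗_c} = 0.

```python
import math

q = [0.2, 0.5, 0.3]                 # conditional class probabilities
f = [0.0, 0.0, 0.0]
for _ in range(5000):
    # dL/df_c = -q_c + e^{f_c}: one decoupled equation per class
    f = [fc - 0.1 * (math.exp(fc) - qc) for fc, qc in zip(f, q)]

print([round(math.exp(fc), 4) for fc in f])   # recovers q: [0.2, 0.5, 0.3]
```

Note that no normalization of f is enforced; the recovered q_c's sum to 1 only because the target q does, which is exactly the "unnormalized model" point made in the text.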
Another very important and closely related formulation is the choice φ(u) = −ln u and t(u) = u. This is an extension of the maximum-likelihood estimate with probability model qc = fc. The resulting method is identical to maximum-likelihood if we choose our function class such that Σk fk = 1. However, the formulation also allows us to use function classes that do not satisfy the normalization constraint Σk fk = 1; this method is therefore more flexible.

3.2.2 φ(u) = −u and t(u) = ln(1 + e^u)

This version uses the binary logistic regression loss, and we have the following probability model: qc = (1 + e^{−f∗c})^{−1}. Again this is an unnormalized model.

3.2.3 φ(u) = −u and t(u) = (1/p)|u|^p (p > 1)

We obtain the following probability model: qc = sign(f∗c)|f∗c|^{p−1}. This means that at the solution, f∗c ≥ 0. One may modify it such that we allow f∗c ≤ 0 to model the conditional probability qc = 0.

3.2.4 φ(u) = −u and t(u) = (1/p) max(u, 0)^p (p > 1)

In this probability model, we have the following relationship: qc = max(f∗c, 0)^{p−1}. The equation implies that we allow f∗c ≤ 0 to model the conditional probability qc = 0. Therefore, with a fixed function class, this model is more powerful than the previous one. However, at the optimal solution, f∗c ≤ 1. This requirement can be further alleviated with the following modification.

3.2.5 φ(u) = −u and t(u) = (1/p) min(max(u, 0)^p, p(u − 1) + 1) (p > 1)

In this probability model, we have the following relationship at the exact solution: qc = min(max(f∗c, 0), 1)^{p−1}. Clearly this model is more powerful than the previous one, since a function value f∗c ≥ 1 can be used to model qc = 1.

3.3 Coupled formulations

In the coupled formulation with s(u) ≠ u, the probability model can be normalized in a certain way. We list a few examples.

3.3.1 φ(u) = −u, t(u) = e^u, and s(u) = ln(u)

This is the standard logistic regression model. The probability model is:

    qc(x) = exp(f∗c(x)) (Σ_{k=1}^K exp(f∗k(x)))^{−1}.
The right hand side is always normalized (sums to one). Note that the model is not continuous at infinities, and thus not admissible in our definition. However, we may consider the region Ω = {f : sup_k f_k = 0}, and it is easy to check that the model is admissible on Ω. Let f^Ω_c = f_c − sup_k f_k ∈ Ω; then f^Ω has the same decision rule as f, and R(f) = R(f^Ω). Therefore Theorem 2.1 implies that R(f) → inf_{g∈B} R(g) implies ℓ(f) → inf_{g∈B} ℓ(g).

3.3.2 φ(u) = −u, t(u) = |u|^{p′}, and s(u) = (1/p)|u|^{p/p′} (p, p′ > 1)

The probability model is

    q_c(x) = (Σ_{k=1}^K |f*_k(x)|^{p′})^{(p−p′)/p′} sign(f*_c(x)) |f*_c(x)|^{p′−1}.

We may replace t(u) by t(u) = max(0, u)^{p′}, and the probability model becomes

    q_c(x) = (Σ_{k=1}^K max(f*_k(x), 0)^{p′})^{(p−p′)/p′} max(f*_c(x), 0)^{p′−1}.

These formulations do not seem to have advantages over their decoupled counterparts. Note that if we let p → 1, then the sum of the p′/(p′−1)-th powers of the right hand side tends to one. In a way, this means that the model is normalized in the limit p → 1.

4 Constrained formulations

As pointed out earlier, one may impose constraints on the possible choices of f. We could impose such a condition when we specify the function class C_n, but for clarity, we shall impose the condition directly in our formulation. If we impose a constraint in (7), its effect is rather similar to that of the second term in (7). In this section, we consider a direct extension of the binary large-margin method (1) to the multi-category case. The choice given below is motivated by [5], where an extension of the SVM was proposed. We use a risk formulation that is different from (7), and for simplicity we consider only a linear equality constraint:

    Ψ_c(f) = Σ_{k=1, k≠c}^K φ(−f_k),   s.t. f ∈ Ω,   (10)

where we define Ω as

    Ω = {f : Σ_{k=1}^K f_k = 0} ∪ {f : sup_k f_k = ∞}.

We may interpret the added constraint as a restriction on the function class C_n in (3) such that every f ∈ C_n satisfies the constraint. Note that with K = 2, this leads to the usual binary large-margin method.
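As a concrete numerical check of the constrained formulation (10) (a sketch of my own, not code from the paper): with the SVM hinge loss φ(u) = max(0, 1 − u) of [5], averaging Ψ_c(f) over c ∼ q gives the conditional risk Σ_c (1 − q_c) max(0, 1 + f_c), and its minimizer under Σ_k f_k = 0 places K − 1 on the most probable class and −1 elsewhere. The check below compares that candidate against random feasible points:

```python
import numpy as np

def constrained_risk(q, f):
    # sum_c (1 - q_c) * phi(-f_c) with phi(u) = max(0, 1 - u),
    # i.e. phi(-f_c) = max(0, 1 + f_c); f must satisfy sum_k f_k = 0.
    return np.sum((1.0 - q) * np.maximum(0.0, 1.0 + f))

rng = np.random.default_rng(0)
q = np.array([0.2, 0.5, 0.3])      # conditional class probabilities, K = 3
K = len(q)

# Candidate minimizer: K-1 on the argmax class, -1 elsewhere (sum is zero).
f_star = np.full(K, -1.0)
f_star[np.argmax(q)] = K - 1.0

best = constrained_risk(q, f_star)
for _ in range(10000):             # compare against random sum-zero vectors
    f = rng.normal(size=K) * 3.0
    f -= f.mean()                  # project onto the constraint sum_k f_k = 0
    assert constrained_risk(q, f) >= best - 1e-12

print(best)                        # = (1 - q_max) * K = 1.5 here
```

The minimum value (1 − q_max)·K follows because Σ_c (1 + f_c) = K on the constraint set and every term carries weight at least 1 − q_max, so the cheapest feasible allocation puts all the positive part on the most probable class.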
Using (10), the conditional true risk (5) can be written as

    L(q, f) = Σ_{c=1}^K (1 − q_c) φ(−f_c),   s.t. f ∈ Ω.   (11)

The following result provides a simple way to check the admissibility of (10).

Proposition 4.1 If φ is a convex function which is bounded below and φ′(0) < 0, then (10) is admissible on Ω.

Proof Sketch. The continuity condition is straightforward to verify. We may also assume φ(·) ≥ 0 without loss of generality. Now let f achieve the minimum of L(q, ·). If f_c = ∞, then it is clear that q_c = 1 and thus q_k = 0 for k ≠ c. This implies that for k ≠ c, φ(−f_k) = inf_f φ(−f), and thus f_k < 0. If f_c = sup_k f_k < ∞, then the constraint implies f_c ≥ 0. It is easy to see that q_c ≥ q_k for all k, since otherwise we must have φ(−f_k) > φ(−f_c), and thus φ′(−f_k) > 0 and φ′(−f_c) < 0, implying that for sufficiently small δ > 0, φ(−(f_k + δ)) < φ(−f_k) and φ(−(f_c − δ)) < φ(−f_c) — a contradiction. □

Using the above criterion, we can convert any admissible convex φ for the binary formulation (1) into an admissible multi-category classification formulation (10). In [5], the special case of the SVM (with loss function φ(u) = max(0, 1 − u)) was studied. The authors demonstrated admissibility by direct calculation, although no result similar to Theorem 2.1 was established; such a result is needed to prove consistency. The treatment presented here generalizes their study. Note that for the constrained formulation, it is more difficult to relate f_c at the optimal solution to a probability model, since such a model would have a much more complicated form than its unconstrained counterpart.

5 Conclusion

In this paper we proposed a family of risk minimization methods for multi-category classification problems, which are natural extensions of binary large-margin classification methods. We established admissibility conditions that ensure the consistency of the obtained classifiers in the large-sample limit.
Two specific forms of risk minimization were proposed, and examples were given to study the induced probability models. As an implication of this work, we see that it is possible to obtain consistent (conditional) density estimation using various non-maximum-likelihood estimation methods. One advantage of some of the newly proposed methods is that they allow us to model zero density directly. Note that for the maximum-likelihood method, near-zero density may cause serious robustness problems, at least in theory.

References

[1] P.L. Bartlett, M.I. Jordan, and J.D. McAuliffe. Convexity, classification, and risk bounds. Technical Report 638, Statistics Department, University of California, Berkeley, 2003.
[2] Ilya Desyatnikov and Ron Meir. Data-dependent bounds for multi-category classification based on convex losses. In COLT, 2003.
[3] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337–407, 2000. With discussion.
[4] W. Jiang. Process consistency for AdaBoost. The Annals of Statistics, 32, 2004. With discussion.
[5] Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines, theory, and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 2002. Accepted.
[6] Yi Lin. Support vector machines and the Bayes rule in classification. Data Mining and Knowledge Discovery, pages 259–275, 2002.
[7] G. Lugosi and N. Vayatis. On the Bayes-risk consistency of regularized boosting methods. The Annals of Statistics, 32, 2004. With discussion.
[8] Shie Mannor, Ron Meir, and Tong Zhang. Greedy algorithms for classification - consistency, convergence rates, and adaptivity. Journal of Machine Learning Research, 4:713–741, 2003.
[9] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297–336, 1999.
[10] Ingo Steinwart.
Support vector machines are universally consistent. J. Complexity, 18:768–791, 2002.
[11] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32, 2004. With discussion.
A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell

Eric K. C. Tsang and Bertram E. Shi
Dept. of Electrical and Electronic Engineering
Hong Kong University of Science and Technology
Kowloon, HONG KONG SAR
{eeeric,eebert}@ust.hk

Abstract

The relative depth of objects causes small shifts in the left and right retinal positions of these objects, called binocular disparity. Here, we describe a neuromorphic implementation of a disparity selective complex cell using the binocular energy model, which has been proposed to model the response of disparity selective cells in the visual cortex. Our system consists of two silicon chips containing spiking neurons with monocular Gabor-type spatial receptive fields (RF) and circuits that combine the spike outputs to compute a disparity selective complex cell response. The disparity selectivity of the cell can be adjusted by both position and phase shifts between the monocular RF profiles, which are both used in biology. Our neuromorphic system performs better with phase encoding, because the relative responses of neurons tuned to different disparities by phase shifts are better matched than the responses of neurons tuned by position shifts.

1 Introduction

The accurate perception of the relative depth of objects enables both biological organisms and artificial autonomous systems to interact successfully with their environment. Binocular disparity, the positional shift between corresponding points in two eyes or cameras caused by the difference in their vantage points, is one important cue that can be used to infer depth. In the mammalian visual system, neurons in the visual cortex combine signals from the left and right eyes to generate responses selective for a particular disparity [1]. Ohzawa et al. [2] proposed the binocular energy model to explain the responses of binocular complex cells in the cat visual cortex, and found that the predictions of this model are in good agreement with measured data. This model also matches data from the macaque [3]. In the energy model, a neuron achieves its particular disparity tuning by either a position or a phase shift between its monocular receptive field (RF) profiles for the left and right eyes. Based on an analysis of a population of binocular cells, Anzai et al. [4] suggest that the cat primarily encodes disparity via a phase shift, although position shifts may play a larger role at higher spatial frequencies. Computational studies show that it is possible to estimate disparity from the relative responses of model complex cells tuned to different disparities [5][6].

This paper describes a neuromorphic implementation of disparity tuned neurons constructed according to the binocular energy model. Section 2 reviews the binocular energy model and the encoding of disparity by position and phase shifts. Section 3 describes our implementation. Section 4 presents measured results from the system illustrating better performance for neurons tuned by phase than by position. This preference arises because the position-tuned neurons are more sensitive than the phase-tuned neurons to the mismatch in the circuits on the Gabor-type filter chip. We have characterized the mismatch on the chip, as well as its effect on the complex cell outputs, and found that the phase model is least sensitive to the parameters that vary most. Section 5 summarizes our results.

2 The Binocular Energy Model

Ohzawa et al. [2] proposed the binocular energy model to explain the response of binocular complex cells measured in the cat. Anzai et al. further refined the model in a series of papers [4][7][8]. In this model, the response of a binocular complex cell is the linear combination of the outputs of four binocular simple cells, as shown in Figure 1.
The response of a binocular simple cell is computed by applying a linear binocular filter to the input from the two eyes, followed by a half-squaring nonlinearity:

    r_s = ⌊b(x_R, x_L, φ_R, φ_L)⌋₊²,

where ⌊b⌋₊ = max{b, 0} is the positive half-wave rectifying nonlinearity. The linear binocular filter output is the sum of two monocular filter outputs,

    b(c_R, c_L, φ_R, φ_L) = m(c_R, φ_R, I_R) + m(c_L, φ_L, I_L),   (1)

where the monocular filters linearly combine the image intensity I(x) with a Gabor receptive field profile,

    m(c, φ, I) = Σ_x g(x, c, φ) I(x),
    g(x, c, φ) = κ exp(−½ (x − c)ᵀ C⁻¹ (x − c)) cos(Ωᵀ(x − c) + φ),

where x ∈ ℤ² indexes pixel position. The subscripts R and L denote parameters or image intensities from the right or left eye. The parameters Ω ∈ ℝ² and C ∈ ℝ^{2×2} control the spatial frequency and bandwidth of the filter, and κ controls the gain. These parameters are assumed to be the same in all of the simple cells that make up a complex cell. However, the center position c ∈ ℝ² and the phase φ ∈ ℝ vary, both between the two eyes and among the four simple cells.

Fig. 1: Binocular energy model of a complex cell.

While the response of simple cells depends heavily upon the stimulus phase and contrast, the response of complex cells is largely independent of the phase and contrast. The binocular energy model posits that complex cells achieve this invariance by linearly combining the outputs of four simple cell responses whose binocular filters are in quadrature phase, being identical except that they differ in phase by π/2. Because filters that differ in phase by π are identical except for a change in sign, we only require two unique binocular filters, the four required simple cell outputs being obtained by positive and negative half-squaring their outputs.
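A one-dimensional sketch of these equations (my own illustration; the array size, σ, and Ω are arbitrary choices, not the chip's parameters). It forms the two quadrature binocular filters, half-squares their positive and negative parts to get the four simple cells, and sums them into a complex cell; the combined response b₁² + b₂² is then nearly independent of the stimulus phase, as claimed:

```python
import numpy as np

N, c, sigma, Omega = 64, 32.0, 6.0, 2 * np.pi / 8

def gabor(phi, center=c):
    # g(x, c, phi) = exp(-(x-c)^2 / (2 sigma^2)) * cos(Omega (x-c) + phi)
    x = np.arange(N)
    return np.exp(-(x - center) ** 2 / (2 * sigma**2)) * np.cos(Omega * (x - center) + phi)

def complex_cell(I_L, I_R, dphi=0.0, dc=0.0):
    # Two quadrature binocular filters (left-eye phases 0 and -pi/2),
    # each the sum of a left and a right monocular filter output, as in (1).
    b1 = gabor(dphi, c + dc) @ I_R + gabor(0.0) @ I_L
    b2 = gabor(dphi - np.pi / 2, c + dc) @ I_R + gabor(-np.pi / 2) @ I_L
    # Four half-squared simple cells, summed: |b|+^2 + |-b|+^2 = b^2.
    halfsq = lambda b: max(b, 0.0) ** 2
    return halfsq(b1) + halfsq(-b1) + halfsq(b2) + halfsq(-b2)

# The complex response to a grating is (nearly) invariant to stimulus phase.
x = np.arange(N)
resp = [complex_cell(np.cos(Omega * x + th), np.cos(Omega * x + th))
        for th in np.linspace(0, 2 * np.pi, 16)]
print(max(resp) / min(resp))   # close to 1
```

The identity ⌊b⌋₊² + ⌊−b⌋₊² = b² is why only two unique binocular filters are needed: the four simple cells are just the positive and negative half-squared parts of the quadrature pair.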
Complex cells constructed according to the binocular energy model respond to disparities in the direction orthogonal to their preferred orientation. Their disparity tuning in this direction depends upon the relative center positions and the relative phases of the monocular filters. A binocular complex cell whose monocular filters are shifted by Δc = c_R − c_L and Δφ = φ_R − φ_L will respond maximally for an input disparity D_pref ≈ Δc − Δφ/Ω (i.e. I_R(x) ≈ I_L(x − D_pref)). Disparity is encoded by a position shift if Δc ≠ 0 and Δφ = 0. Disparity is encoded by a phase shift if Δc = 0 and Δφ ≠ 0. The cell uses a hybrid encoding if both Δc ≠ 0 and Δφ ≠ 0. Phase encoding and position encoding are equivalent for the zero-disparity tuned cell (Δc = 0 and Δφ = 0).

3 Neuromorphic Implementation

Figure 2 shows a block diagram of our binocular cell system, which uses a combination of analog and digital processing. At this time, we use a pattern generator to supply left and right eye input. This gives us precise electronic control over the spatial shift between the left and right eye inputs to the orientation selective neurons. We plan to replace the pattern generator with silicon retinae in the future. The left and right eye inputs are processed by two Gabor-type chips that contain retinotopic arrays of spiking neuron circuits whose spatial RF profiles are even and odd symmetric Gabor-type functions. The address filters extract spikes from four neurons in each chip whose output spike rates represent the positive and negative components of the odd and even symmetric filters centered at a desired retinal location. These spike trains are combined in the binocular combination block to implement the summation in (1). The complex cell computation block performs the half-squaring nonlinearity and linear summation. In the following, we detail the design of the major building blocks.

Fig. 2: System block diagram of a neuromorphic complex cell. The opposite direction arrows represent the AER handshaking protocol. The three groups of four parallel arrows represent spiking channels.
The labels “e/o” and “+/−” represent EVEN/ODD and ON/OFF. The top labels indicate the type of hardware used to implement each stage.

3.1 Gabor-type filtering chip

Images from each eye are passed to a Gabor-type filtering chip [9] that implements the monocular filtering required by the simple cells. Given a spike rate encoded 32 x 64 pixel image (I_L or I_R), each chip computes outputs m(c, φ, I) corresponding to a 32 x 64 array of center positions c and two phases, 0 and −π/2. All filters are designed to have the same gain, spatial frequency tuning and bandwidth. We refer to the φ = 0 filter as the EVEN symmetric filter and the φ = −π/2 filter as the ODD symmetric filter. Figure 3 shows the RF profile of the EVEN and ODD filters, which differ from a Gabor function because the function that modulates the cosine function is not a Gaussian; it decays faster at the origin and slower at the tails. This difference should not affect the resulting binocular complex cell responses significantly: Qian and Zhu [5] show that the binocular complex cell responses in the energy model are insensitive to the exact shape of the modulating envelope. The positive and negative components of each filter output are represented by a spike rate on separate ON and OFF channels. For example, for the right eye at center position c_R, the EVEN-ON spike rate is proportional to ⌊m(c_R, 0, I_R)⌋₊ and the EVEN-OFF spike rate to ⌊−m(c_R, 0, I_R)⌋₊. Spikes are encoded on a single asynchronous digital bus using the address event representation (AER) communication protocol.
The AER protocol signals the occurrence of a spike in the array by placing an address identifying the cell that spiked on the bus [10].

3.2 AER Address Filter

Each AER address filter extracts only those spikes corresponding to the four neurons whose RF profiles are centered at a desired retinal location and demultiplexes the spikes as voltage pulses on four separate wires. In our addressing scheme, every neuron is assigned a unique X (column) and Y (row) address. As addresses appear on the AER bus, two latches latch the row and column address of each spike, which are compared with the row and column address of the desired retinal location, encoded on bits 1-6 of the address. Bit 0 (the LSB) encodes the type of filter: EVEN/ODD on the row address and ON/OFF on the column address. Once the filter detects a spike from the desired retinal location, it generates a voltage pulse which is demultiplexed onto one of four output lines, depending upon the LSBs of the latched row and column addresses. To avoid losing events, we minimize the time the AER address filter requires to process each address by implementing it using a Xilinx XC9500 series Complex Programmable Logic Device (CPLD). We chose this series because of its speed and flexibility. The block delay in each macrocell is 7 ns. The series supports in-system programming, enabling rapid debugging during system design. Because the AER protocol is asynchronous, we paid particular attention to the timing in the signal path to ensure that addresses are latched correctly and to avoid glitches that could be interpreted as output spikes.

Fig. 3: The measured RF profile of the EVEN and ODD symmetric filters at the center pixel.

3.3 Binocular combination block

The binocular combination block combines eight spike trains to implement the summation operation in Eq. (1) for two phase quadrature binocular filters.
To compute the two binocular filter outputs required for a zero disparity tuned cell, we first set the AER address filters so that they extract spikes from monocular neurons with the same RF centers in the left and right eyes (Δc = 0). To compute the output of the first binocular filter B1, the binocular combination block sums the outputs of the left and right eye EVEN filters by merging spikes from the left and right EVEN-ON channels onto a positive output line, B1+ (shown in Fig. 2), and merging spikes from the left and right EVEN-OFF channels onto a negative output line, B1−. The difference between the spike rates on B1+ and B1− encodes the B1 filter output. However, the B1+ and B1− spike rates do not represent the ON (positive half-wave rectified) and OFF (negative half-wave rectified) components of the binocular filter outputs, since they may both be non-zero at the same time. To compute the output of the second filter, B2, the binocular combination block merges spikes from the left and right ODD channels similarly. The system can also implement binocular filter outputs for neurons tuned to non-zero disparities. For position encoding, we change the relative addresses selected by the AER address filters to set Δc ≠ 0, but leave the binocular combination block unchanged. If we fix the center location of the right eye RF to the center column of the chip (32), we can detect position disparities between -31 and 32 in unit pixel steps. For phase encoding, we leave the AER address filters unchanged and alter the routing in the binocular combination block. Because the RF profiles of the Gabor-type chips have two phase values, altering the routing as shown in Table 1 results in four distinct binocular filters with monocular filter phase shifts of Δφ = −π/2, 0, π/2, and π, which correspond to the tuned far, tuned excitatory, tuned near and tuned inhibitory disparity cells identified by Poggio et al. [11]. The binocular combination block uses the same type of Xilinx CPLD as the AER filter.
Inputs control the monocular phase shift of the resulting binocular filter by modifying the routing. For simplicity, we implement the merge using inclusive OR gates without arbitration. Although simultaneous spikes on the left and right channels will be merged into a single spike, the probability that this will happen is negligible, since the width of the voltage pulse that represents each spike (~32 ns) is much smaller than the inter-spike intervals, which are on the order of milliseconds.

Table 1: Signal combinations for phase disparity encoding. Each table entry represents the combination of right/left eye inputs merged onto a binocular output line to achieve a desired phase shift Δφ. We abbreviate EVEN/ODD by e/o and ON/OFF by +/−.

    Δφ     B1+    B1−    B2+    B2−
    −π/2   e+/o−  e−/o+  o+/e+  o−/e−
    0      e+/e+  e−/e−  o+/o+  o−/o−
    π/2    e+/o+  e−/o−  o+/e−  o−/e+
    π      e+/e−  e−/e+  o+/o−  o−/o+

3.4 Complex cell output

Since the spike rates at the four outputs of the binocular combination block are relatively low, e.g. 10-1000 Hz, we implement the final steps using an 8051 microcontroller (MCU) running at 24 MHz. Integrators count the number of spikes from each channel in a fixed time window, e.g. T = 40 ms, to estimate the average spike rate on each of the four lines. We generate the four binocular simple cell responses by positive and negative half-squaring the spike rate differences (B1+ − B1−) and (B2+ − B2−), and sum them to obtain the binocular complex cell output. The MCU computes one set of four simple cell and one complex cell outputs every T seconds, where T is the time window of the integration.

4 RESULTS

We use a pattern generator to supply the left and right eye inputs, which gives us precise control over the input disparity. In a loose biological analogy, we directly stimulate the optic nerve. The pattern generator simultaneously excites a pair of pixels in the left and right Gabor-type chips.
The two pixels lie in the same row, but are displaced by half the input disparity to the right of the center pixel in the right chip and by half the input disparity to the left of the center pixel in the left chip. The integration time window was 40 ms. Figure 4(a) shows the response of binocular complex cells tuned to three different disparities by phase encoding. The AER address filters selected spikes from the retina locations (32,16) in both chips. Consistent with theoretical predictions, the peaks of the non-zero disparity tuned cells are approximately the same height, but smaller than the peak of the zero disparity tuned filter because of the smaller size of the side peaks in the ODD filter response in comparison with the center peak in the EVEN filter. Figure 4(b) shows the response of binocular complex cells tuned to similar disparities by position encoding. The negative-disparity tuned cell combines the outputs of pixels (33,16) in the left chip and (31,16) in the right chip. The positive-disparity tuned cell combines the outputs of pixel (31,16) in the left chip and pixel (33,16) in the right chip. The zero-disparity tuned cells for position and phase encoding are identical. Theoretically, the position model should result in three identical peaks that are displaced in disparity. However, the measurements show a wide variation in the peak sizes. The responses of the phase-tuned neurons exhibit better matching, because they were all computed from the same two sets of pixel outputs. In contrast, the three position-tuned neurons combine the responses of the Gabor-type chip at six different pixels. Decreasing the time over which we integrate the spike outputs of the binocular combination stage results in faster disparity updates. However, Figure 4(c) shows that this also increases the variability in the response, measured as a percentage of the mean response.
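The shape of such tuning curves can be reproduced in simulation. The sketch below (my own illustration, with arbitrary 1-D parameters rather than the chip's) averages energy-model responses over random binary patterns for three phase-encoded cells (Δφ = π/2, 0, −π/2, i.e. preferred disparities near −2, 0, +2 for Ω = 2π/8), and checks where each averaged curve peaks:

```python
import numpy as np

N, c, sigma, Omega = 64, 32.0, 4.0, 2 * np.pi / 8

def gabor(phi):
    x = np.arange(N)
    return np.exp(-(x - c) ** 2 / (2 * sigma**2)) * np.cos(Omega * (x - c) + phi)

def complex_resp(I_L, I_R, dphi):
    # Quadrature pair of binocular filters; phase shift dphi applied to the right eye.
    b1 = gabor(dphi) @ I_R + gabor(0.0) @ I_L
    b2 = gabor(dphi - np.pi / 2) @ I_R + gabor(-np.pi / 2) @ I_L
    return b1**2 + b2**2   # equals the sum of the four half-squared simple cells

rng = np.random.default_rng(1)
disparities = np.arange(-6, 7)
curves = {dphi: np.zeros(len(disparities)) for dphi in (np.pi / 2, 0.0, -np.pi / 2)}
for _ in range(400):                        # average over random dot patterns
    I = rng.choice([-1.0, 1.0], size=N + 12)
    for j, D in enumerate(disparities):     # I_R(x) = I_L(x - D)
        I_L, I_R = I[6:6 + N], I[6 - D:6 - D + N]
        for dphi in curves:
            curves[dphi][j] += complex_resp(I_L, I_R, dphi)

# Peak of each averaged curve should sit near D_pref ≈ -dphi / Omega.
print({round(d, 2): int(disparities[np.argmax(v)]) for d, v in curves.items()})
```

Because all three cells here share the same center position, this corresponds to the phase-encoded case whose curves matched well on the hardware; the position-encoded mismatch in Figure 4(b) comes from combining six physically different pixels, which this idealized simulation does not model.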
Although they are nominally identical, the gain, damping (bandwidth), spatial frequency and offset of neurons at different retinal locations on the same chip vary due to transistor mismatch in the circuits used to implement them. We performed a numerical sensitivity analysis of the effect of variation in these parameters on the complex cell responses, by examining how much variations in them affected the locations at which the disparity tuning curves for neurons tuned to left and right disparities crossed the disparity tuning curve for the neuron tuned to zero disparity. These two locations form decision boundaries between near, zero and far disparities if we classify stimuli according to the disparity tuned neuron with the maximum response. We found that the distance between these points varied much more than their centroid. Figure 5(a) shows the sensitivity coefficients for the distance between these points, where the sensitivity coefficient is defined as the percentage variation in the distance per percentage variation in a RF parameter. We consider the response to be robust to variations if the sensitivity coefficient is less than 1. In most cases, we find that the position model is less robust than the phase model.

Fig. 4: (a) Response of three binocular complex cells tuned to three different disparities by phase encoding. (b) Response of three binocular complex cells tuned to three different disparities by position encoding. (c) Standard deviation of the response of the zero disparity complex cell, expressed as a percentage of the mean response at zero disparity, for integration windows of T = 40 ms (solid line) and T = 20 ms (dashed line). Statistics computed over 80 samples.
In addition, we characterized the variability in the RF parameters for neurons at different positions on the chip. We probed the response of seven individual spiking neurons to different spatial impulse inputs and fitted parameterized Gabor-type functions to the responses. We then computed the standard deviation in the parameters across the neurons probed, which we express as a percentage of the mean value. Figure 5(b) shows that the phase model is least sensitive to variations in the parameters that vary the most.

Fig. 5: (a) Sensitivity of the phase and position models to variations in the RF parameters of the neurons. (b) A comparison of the sensitivity of the phase model to the variability in the RF parameters. The line indicates the percentage standard deviation in the RF parameters. Error bars indicate the 95% confidence interval. Solid bars show the sensitivity of the phase model from (a).

5 CONCLUSION

We have replicated the disparity selectivity of complex cells in the visual cortex in a neuromorphic system based upon the disparity energy model. This system contains four silicon chips containing retinotopic arrays of neurons which communicate via the AER communication protocol, as well as circuits that combine the outputs of these chips to generate the response of a model binocular complex cell. We exploit the capability of the AER protocol for point-to-point communication, as well as the ability to reroute spikes. Our measurements indicate that our binocular complex cells are disparity selective and that their selectivity can be adjusted through both position and phase encoding. However, the relative responses of neurons tuned by phase encoding exhibit better matching than the relative responses of neurons tuned by position encoding, because neurons tuned to different disparities by position encoding integrate outputs from different pixels, while neurons tuned by phase encoding integrate outputs from the same pixels. This implementation is an initial step towards the development of a multi-chip neuromorphic system capable of extracting depth information about the visual environment using silicon neurons with physiologically-based functionality. The next step will be to extend the system from a single disparity tuned neuron to a set of retinotopic arrays of disparity tuned neurons. In order to do this, we will develop a mixed analog-digital chip whose architecture will be similar to that of the orientation tuned chip, and which will combine the outputs from left and right eye orientation-tuned chips to compute an array of neurons tuned to the same disparity but different retinal locations. The tuned disparity can be controlled by address remapping, so additional copies of the same chip could represent neurons tuned to other disparities. This chip will increase the number of neurons we compute simultaneously, as well as decrease the power consumption required to compute each neuron. In the current implementation, the digital circuits required to combine the monocular responses consume 1.2 W. In contrast, the Gabor chips and their associated external bias and interface circuits consume only 62 mW, with only about 4 mW required for each Gabor chip. We expect the power consumption of the binocular combination chip to be comparable. Computing the neuron outputs in parallel will enable us to investigate the roles of additional processing steps such as pooling [5], [6] and normalization [12], [13].

Acknowledgements

This work was supported in part by the Hong Kong Research Grants Council under Grant HKUST6218/01E. It was inspired by a project with Y. Miyawaki at the 2002 Telluride Neuromorphic Workshop. The authors would like to thank K. A.
Boahen for helpful discussions and for supplying the receiver board used in this work, and T. Choi for his assistance in building the system.

References

[1] Barlow, H. B., Blakemore, C., & Pettigrew, J. D. (1967) The neural mechanism of binocular depth discrimination. J. Physiol. Lond., 193, 327-342.
[2] Ohzawa, I., DeAngelis, G. C., & Freeman, R. D. (1990) Stereoscopic depth discrimination in the visual cortex: neurons ideally suited as disparity detectors. Science, 249, 1037-1041.
[3] Cumming, B. G. & Parker, A. J. (1997) Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature, 389, 280-283.
[4] Anzai, A., Ohzawa, I., & Freeman, R. D. (1999a) Neural mechanisms for encoding binocular disparity: position vs. phase. J. Neurophysiol., 82, 874-890.
[5] Qian, N., & Zhu, Y. (1997) Physiological computation of binocular disparity. Vision Res., 37, 1811-1827.
[6] Fleet, D. J., Wagner, H., & Heeger, D. J. (1996) Neural encoding of binocular disparity: energy models, position shifts and phase shifts. Vision Res., 36, 1839-1857.
[7] Anzai, A., Ohzawa, I., & Freeman, R. D. (1999b) Neural mechanisms for processing binocular information I. Simple cells. J. Neurophysiol., 82, 891-908.
[8] Anzai, A., Ohzawa, I., & Freeman, R. D. (1999c) Neural mechanisms for processing binocular information II. Complex cells. J. Neurophysiol., 82, 909-924.
[9] Choi, T. Y. W., Shi, B. E., & Boahen, K. (2003) An orientation selective 2D AER transceiver. Proceedings of the IEEE Intl. Conf. on Circuits and Systems, 4, 800-803.
[10] Boahen, K. A. (2000) Point-to-point connectivity between neuromorphic chips using address events. IEEE Transactions on Circuits and Systems-II: Analog and Digital Signal Processing, 47, 416-434.
[11] Poggio, G. F., Motter, B. C., Squatrito, S., & Trotter, Y. (1985) Responses of neurons in visual cortex (V1 and V2) of the alert macaque to dynamic random-dot stereograms. Vision Research, 25, 397-406.
[12] Albrecht, D. G.
& Geisler, W. S. (1991) Motion selectivity and the contrast response functions of simple cells in the visual cortex. Visual Neuroscience, 7, 531-546.
[13] Heeger, D. J. (1992) Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9, 181-197.
Robustness in Markov Decision Problems with Uncertain Transition Matrices∗

Arnab Nilim
Department of EECS†
University of California
Berkeley, CA 94720
nilim@eecs.berkeley.edu

Laurent El Ghaoui
Department of EECS
University of California
Berkeley, CA 94720
elghaoui@eecs.berkeley.edu

Abstract

Optimal solutions to Markov Decision Problems (MDPs) are very sensitive with respect to the state transition probabilities. In many practical problems, the estimation of those probabilities is far from accurate. Hence, estimation errors are limiting factors in applying MDPs to real-world problems. We propose an algorithm for solving finite-state and finite-action MDPs, where the solution is guaranteed to be robust with respect to estimation errors on the state transition probabilities. Our algorithm involves a statistically accurate yet numerically efficient representation of uncertainty, via Kullback-Leibler divergence bounds. The worst-case complexity of the robust algorithm is the same as that of the original Bellman recursion. Hence, robustness can be added at practically no extra computing cost.

1 Introduction

We consider a finite-state and finite-action Markov decision problem in which the transition probabilities themselves are uncertain, and seek a robust decision for it. Our work is motivated by the fact that in many practical problems, the transition matrices have to be estimated from data. This may be a difficult task, and the estimation errors may have a huge impact on the solution, which is often quite sensitive to changes in the transition probabilities [3]. A number of authors have addressed the issue of uncertainty in the transition matrices of an MDP. A Bayesian approach such as described by [9] requires perfect knowledge of the whole prior distribution on the transition matrix, making it difficult to apply in practice. Other authors have considered the transition matrix to lie in a given set, most typically a polytope: see [8, 10, 5].
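To illustrate the kind of uncertainty representation the abstract refers to, here is a sketch (my own illustration, not the paper's algorithm) of the basic robust ingredient: the worst-case expected value sup{pᵀv : KL(p‖p̂) ≤ β} over a Kullback-Leibler ball around an estimated transition row p̂. A standard way to solve it is via the exponentially tilted form p ∝ p̂ e^{v/λ}, finding the dual scalar λ > 0 by bisection on the KL constraint:

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def worst_case_expectation(v, p_hat, beta, iters=100):
    """sup { p.v : KL(p || p_hat) <= beta, p in simplex }, by bisection on the
    dual variable lam of the KL constraint (maximizer p_lam ∝ p_hat * e^{v/lam})."""
    def tilt(lam):
        w = p_hat * np.exp((v - v.max()) / lam)   # shift v for numerical safety
        return w / w.sum()
    lo, hi = 1e-6, 1e6                            # KL(tilt(lam) || p_hat) decreases in lam
    for _ in range(iters):
        lam = np.sqrt(lo * hi)                    # bisection in log space
        lo, hi = (lam, hi) if kl(tilt(lam), p_hat) > beta else (lo, lam)
    p = tilt(np.sqrt(lo * hi))
    return p @ v, p

v = np.array([1.0, 2.0, 5.0])                     # value of each successor state
p_hat = np.array([0.5, 0.3, 0.2])                 # estimated transition row
sigma, p = worst_case_expectation(v, p_hat, beta=0.05)
# The adversary shifts mass toward high-value states, within the KL budget:
print(p_hat @ v, sigma, v.max())                  # nominal <= robust <= max
```

Unlike an interval (polytopic) model, the KL ball penalizes deviations from p̂ in proportion to their statistical implausibility, which is the sense in which such sets are "statistically accurate yet numerically efficient".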
Although our approach allows us to describe the uncertainty on the transition matrix by a polytope, we may argue against choosing such a model for the uncertainty. First, a general polytope is often not a tractable way to address the robustness problem, as it incurs a significant additional computational effort to handle uncertainty. Perhaps more importantly, polytopic models, especially interval matrices, may be very poor representations of statistical uncertainty and lead to very conservative robust policies. (∗Research funded in part by Eurocontrol-014692, DARPA-F33615-01-C-3150, and NSF-ECS-9983874. †Electrical Engineering and Computer Sciences.) In [1], the authors consider a problem dual to ours, and provide a general statement according to which the cost of solving their problem is polynomial in problem size, provided the uncertainty on the transition matrices is described by convex sets, without proposing any specific algorithm. This paper is a short version of a longer report [2], which contains all the proofs of the results summarized here. Notation. P > 0 or P ≥ 0 refers to the strict or non-strict componentwise inequality for matrices or vectors. For a vector p > 0, log p refers to the componentwise operation. The notation 1 refers to the vector of ones, with size determined from context. The probability simplex in R^n is denoted ∆_n = {p ∈ R^n_+ : p^T 1 = 1}, while Θ_n is the set of n×n transition matrices (componentwise non-negative matrices with rows summing to one). We use σ_P to denote the support function of a set P ⊆ R^n, defined for v ∈ R^n by σ_P(v) := sup{p^T v : p ∈ P}. 2 The problem description We consider a finite-horizon Markov decision process with finite decision horizon T = {0, 1, 2, . . . , N−1}. At each stage, the system occupies a state i ∈ X, where n = |X| is finite, and a decision maker is allowed to choose an action a deterministically from a finite set of allowable actions A = {a_1, . . .
, a_m} (for notational simplicity we assume that A is not state-dependent). The system starts in a given initial state i_0. The states make Markov transitions according to a collection of (possibly time-dependent) transition matrices τ := (P^a_t)_{a∈A, t∈T}, where for every a ∈ A, t ∈ T, the n × n transition matrix P^a_t contains the probabilities of transition under action a at stage t. We denote by π = (a_0, . . . , a_{N−1}) a generic controller policy, where a_t(i) denotes the controller action when the system is in state i ∈ X at time t ∈ T. Let Π = A^{nN} be the corresponding strategy space. Define by c_t(i, a) the cost corresponding to state i ∈ X and action a ∈ A at time t ∈ T, and by c_N the cost function at the terminal stage. We assume that c_t(i, a) is non-negative and finite for every i ∈ X and a ∈ A. For a given set of transition matrices τ, we define the finite-horizon nominal problem by φ_N(Π, τ) := min_{π∈Π} C_N(π, τ), (1) where C_N(π, τ) denotes the expected total cost under controller policy π and transitions τ: C_N(π, τ) := E[ Σ_{t=0}^{N−1} c_t(i_t, a_t(i_t)) + c_N(i_N) ]. (2) A special case of interest is when the expected total cost function bears the form (2), where the terminal cost is zero, and c_t(i, a) = ν^t c(i, a), with c(i, a) now a constant cost function, which we assume non-negative and finite everywhere, and ν ∈ (0, 1) is a discount factor. We refer to this cost function as the discounted cost function, and denote by C_∞(π, τ) the limit of the discounted cost (2) as N → ∞. When the transition matrices are exactly known, the corresponding nominal problem can be solved via a dynamic programming algorithm, which has total complexity of nmN flops in the finite-horizon case. In the infinite-horizon case with a discounted cost function, the cost of computing an ϵ-suboptimal policy via the Bellman recursion is O(nm log(1/ϵ)); see [7] for more details.
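As a concrete illustration of the nominal backward recursion just described, here is a minimal sketch in Python (not the authors' code; the costs, transition matrices, horizon, and the function name `nominal_dp` are all made up for the example):

```python
import numpy as np

def nominal_dp(P, c, c_terminal):
    """Backward dynamic programming for the nominal finite-horizon MDP.

    P[t][a]: n x n transition matrix P^a_t; c[t]: n x m stage costs c_t(i, a);
    c_terminal: n-vector c_N.  Returns v_0 and a greedy policy per stage.
    """
    N = len(P)
    v = np.asarray(c_terminal, dtype=float)
    policy = []
    for t in reversed(range(N)):
        # Q[i, a] = c_t(i, a) + sum_j P^a_t(i, j) * v_{t+1}(j)
        Q = np.stack([c[t][:, a] + P[t][a] @ v for a in range(c[t].shape[1])], axis=1)
        policy.append(Q.argmin(axis=1))   # the controller minimizes expected cost
        v = Q.min(axis=1)
    policy.reverse()
    return v, policy

# Tiny 2-state, 2-action instance with stage-independent (made-up) data.
P0 = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.5, 0.5], [0.5, 0.5]])]
c0 = np.array([[1.0, 2.0], [4.0, 3.0]])
v0, pol = nominal_dp([P0] * 3, [c0] * 3, np.zeros(2))
```

Each stage costs one pass over states and actions, matching the linear-in-N complexity quoted above.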
2.1 The robust control problems We first assume that for each action a and time t, the corresponding transition matrix P^a_t is only known to lie in some given subset P^a. Two models for transition matrix uncertainty are possible, leading to two possible forms of finite-horizon robust control problems. In a first model, referred to as the stationary uncertainty model, the transition matrices are chosen by nature depending on the controller policy once and for all, and remain fixed thereafter. In a second model, which we refer to as the time-varying uncertainty model, the transition matrices can vary arbitrarily with time, within their prescribed bounds. Each problem leads to a game between the controller and nature, where the controller seeks to minimize the maximum expected cost, with nature being the maximizing player. Let us define our two problems more formally. A policy of nature refers to a specific collection of time-dependent transition matrices τ = (P^a_t)_{a∈A, t∈T} chosen by nature, and the set of admissible policies of nature is T := (⊗_{a∈A} P^a)^N. Denote by T_s the set of stationary admissible policies of nature: T_s = {τ = (P^a_t)_{a∈A, t∈T} ∈ T : P^a_t = P^a_s for every t, s ∈ T, a ∈ A}. The stationary uncertainty model leads to the problem φ_N(Π, T_s) := min_{π∈Π} max_{τ∈T_s} C_N(π, τ). (3) In contrast, the time-varying uncertainty model leads to a relaxed version of the above: φ_N(Π, T_s) ≤ φ_N(Π, T) := min_{π∈Π} max_{τ∈T} C_N(π, τ). (4) The first model is attractive for statistical reasons, as it is much easier to develop statistically accurate sets of confidence when the underlying process is time-invariant. Unfortunately, the resulting game (3) seems to be hard to solve. The second model is attractive as one can solve the corresponding game (4) using a variant of the dynamic programming algorithm seen later, but we are left with a difficult task, that of estimating a meaningful set of confidence for the time-varying matrices P^a_t.
In this paper we will use the first model of uncertainty in order to derive statistically meaningful sets of confidence for the transition matrices, based on likelihood or entropy bounds. Then, instead of solving the corresponding difficult control problem (3), we use an approximation that is common in robust control, and solve the time-varying upper bound (4), using the uncertainty sets P^a derived from a stationarity assumption about the transition matrices. We will also consider a variant of the finite-horizon time-varying problem (4), where controller and nature play alternately, leading to a repeated game φ^rep_N(Π, Q) := min_{a_0} max_{τ_0∈Q} min_{a_1} max_{τ_1∈Q} · · · min_{a_{N−1}} max_{τ_{N−1}∈Q} C_N(π, τ), (5) where the notation τ_t = (P^a_t)_{a∈A} denotes the collection of transition matrices at a given time t ∈ T, and Q := ⊗_{a∈A} P^a is the corresponding set of confidence. Finally, we will consider an infinite-horizon robust control problem, with the discounted cost function referred to above, and where we restrict control and nature policies to be stationary: φ_∞(Π_s, T_s) := min_{π∈Π_s} max_{τ∈T_s} C_∞(π, τ), (6) where Π_s denotes the space of stationary control policies. We define φ_∞(Π, T), φ_∞(Π, T_s) and φ_∞(Π_s, T) accordingly. In the sequel, for a given control policy π ∈ Π and subset S ⊆ T, the notation φ_N(π, S) := max_{τ∈S} C_N(π, τ) denotes the worst-case expected total cost for the finite-horizon problem, and φ_∞(π, S) is defined likewise. 2.2 Main results Our main contributions are as follows. First we provide a recursion, the “robust dynamic programming” algorithm, which solves the finite-horizon robust control problem (4). We provide a simple proof in [2] of the optimality of the recursion, where the main ingredient is to show that perfect duality holds in the game (4). As a corollary of this result, we obtain that the repeated game (5) is equivalent to its non-repeated counterpart (4). Second, we provide similar results for the infinite-horizon problem with discounted cost function, (6).
Moreover, we obtain that if we consider a finite-horizon problem with a discounted cost function, then the gap between the optimal value of the stationary uncertainty problem (3) and that of its time-varying counterpart (4) goes to zero as the horizon length goes to infinity, at a rate determined by the discount factor. Finally, we identify several classes of uncertainty models which result in an algorithm that is both statistically accurate and numerically tractable. We provide precise complexity results that imply that, with the proposed approach, robustness can be handled at practically no extra computing cost. 3 Finite-Horizon Robust MDP We consider the finite-horizon robust control problem defined in Section 2.1. For a given state i ∈ X, action a ∈ A, and P^a ∈ P^a, we denote by p^a_i the next-state distribution drawn from P^a corresponding to state i ∈ X; thus p^a_i is the i-th row of matrix P^a. We define P^a_i as the projection of the set P^a onto the set of p^a_i-variables. By assumption, these sets are included in the probability simplex of R^n, ∆_n; no other property is assumed. The following theorem is proved in [2]. Theorem 1 (robust dynamic programming) For the robust control problem (4), perfect duality holds: φ_N(Π, T) = min_{π∈Π} max_{τ∈T} C_N(π, τ) = max_{τ∈T} min_{π∈Π} C_N(π, τ) := ψ_N(Π, T). The problem can be solved via the recursion v_t(i) = min_{a∈A} ( c_t(i, a) + σ_{P^a_i}(v_{t+1}) ), i ∈ X, t ∈ T, (7) where σ_P(v) := sup{p^T v : p ∈ P} denotes the support function of a set P, and v_t(i) is the worst-case optimal value function in state i at stage t. A corresponding optimal control policy π* = (a*_0, . . . , a*_{N−1}) is obtained by setting a*_t(i) ∈ arg min_{a∈A} { c_t(i, a) + σ_{P^a_i}(v_{t+1}) }, i ∈ X. (8) The effect of uncertainty on a given strategy π = (a_0, . . . , a_N) can be evaluated by the following recursion: v^π_t(i) = c_t(i, a_t(i)) + σ_{P^{a_t(i)}_i}(v^π_{t+1}), i ∈ X, (9) which provides the worst-case value function v^π for the strategy π.
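The recursion (7) changes the nominal Bellman update only by replacing the expected cost-to-go with the support function σ_{P^a_i}(v_{t+1}). A hedged sketch of one such backward step, using the finite "scenario" sets mentioned in the remark of Section 5 as the uncertainty model (all data and names here are hypothetical), where the support function is just a maximum over candidate rows:

```python
import numpy as np

def robust_dp_step(scenarios, c, v_next):
    """One robust backward step of recursion (7) with finite scenario sets.

    scenarios[a][i]: list of candidate next-state distributions for row i
    under action a; sigma_{P^a_i}(v) is then a maximum over that list.
    """
    n, m = c.shape
    Q = np.empty((n, m))
    for i in range(n):
        for a in range(m):
            sigma = max(float(np.dot(p, v_next)) for p in scenarios[a][i])
            Q[i, a] = c[i, a] + sigma          # worst-case cost-to-go
    return Q.min(axis=1), Q.argmin(axis=1)     # value update (7), policy (8)

# Two states, one action; two scenarios for row 0, one for row 1 (made up).
scen = [[[np.array([1.0, 0.0]), np.array([0.5, 0.5])],
         [np.array([0.0, 1.0])]]]
c = np.array([[1.0], [2.0]])
v, a_star = robust_dp_step(scen, c, np.array([0.0, 10.0]))
```

For the KL-based sets of Section 5, the inner `max` would be replaced by the bisection step, leaving the outer recursion unchanged.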
The above result has a nice consequence for the repeated game (5): Corollary 2 The repeated game (5) is equivalent to the game (4): φ^rep_N(Π, Q) = φ_N(Π, T), and the optimal strategies for φ_N(Π, T) given in Theorem 1 are optimal for φ^rep_N(Π, Q) as well. The interpretation of the perfect duality result given in Theorem 1, and of its consequence given in Corollary 2, is that it does not matter whether the controller or nature plays first, or whether they play alternately; all these games are equivalent. Each step of the robust dynamic programming algorithm involves the solution of an optimization problem, referred to as the “inner problem”, of the form σ_{P^a_i}(v) = max_{p∈P^a_i} v^T p, (10) where P^a_i is the set that describes the uncertainty on the i-th row of the transition matrix P^a, and v contains the elements of the value function at some given stage. The complexity of the sets P^a_i for each i ∈ X and a ∈ A is a key component in the complexity of the robust dynamic programming algorithm. Beyond numerical tractability, an additional criterion for the choice of a specific uncertainty model is of course that the sets P^a should represent accurate (non-conservative) descriptions of the statistical uncertainty on the transition matrices. Perhaps surprisingly, there are statistical models of uncertainty, such as those described in Section 5, that are good on both counts. Precisely, these models result in inner problems (10) that can be solved in worst-case time of O(n log(v_max/δ)) via a simple bisection algorithm, where n is the size of the state space, v_max is a global upper bound on the value function, and δ > 0 specifies the accuracy at which the optimal value of the inner problem (10) is computed. In the finite-horizon case, we can bound v_max by O(N). Now consider the following algorithm, where the uncertainty is described in terms of one of the models described in Section 5: Robust Finite Horizon Dynamic Programming Algorithm 1. Set ϵ > 0.
Initialize the value function to its terminal value v̂_N = c_N. 2. Repeat until t = 0: (a) For every state i ∈ X and action a ∈ A, compute, using the bisection algorithm given in [2], a value σ̂^a_i such that σ̂^a_i − ϵ/N ≤ σ_{P^a_i}(v̂_t) ≤ σ̂^a_i. (b) Update the value function by v̂_{t−1}(i) = min_{a∈A} (c_{t−1}(i, a) + σ̂^a_i), i ∈ X. (c) Replace t by t − 1 and go to 2. 3. For every i ∈ X and t ∈ T, set π_ϵ = (a^ϵ_0, . . . , a^ϵ_{N−1}), where a^ϵ_t(i) = arg min_{a∈A} {c_{t−1}(i, a) + σ̂^a_i}, i ∈ X, a ∈ A. As shown in [2], the above algorithm provides an ϵ-suboptimal policy π_ϵ that achieves the exact optimum with prescribed accuracy ϵ, with a required number of flops bounded above by O(mnN log(N/ϵ)). This means that robustness is obtained at a relative increase of computational cost of only log(N/ϵ) with respect to the classical dynamic programming algorithm, which is small for moderate values of N. If N is very large, we can turn instead to the infinite-horizon problem examined in Section 4, where similar complexity results hold. 4 Infinite-Horizon MDP In this section, we address the infinite-horizon robust control problem, with a discounted cost function of the form (2), where the terminal cost is zero, and c_t(i, a) = ν^t c(i, a), where c(i, a) is now a constant cost function, which we assume non-negative and finite everywhere, and ν ∈ (0, 1) is a discount factor. We begin with the infinite-horizon problem involving stationary control and nature policies defined in (6). The following theorem is proved in [2]. Theorem 3 (Robust Bellman recursion) For the infinite-horizon robust control problem (6) with stationary uncertainty on the transition matrices, stationary control policies, and a discounted cost function with discount factor ν ∈ [0, 1), perfect duality holds: φ_∞(Π_s, T_s) = max_{τ∈T_s} min_{π∈Π_s} C_∞(π, τ) := ψ_∞(Π_s, T_s).
(11) The optimal value is given by φ_∞(Π_s, T_s) = v(i_0), where i_0 is the initial state, and where the value function v satisfies the optimality conditions v(i) = min_{a∈A} ( c(i, a) + ν σ_{P^a_i}(v) ), i ∈ X. (12) The value function is the unique limit value of the convergent vector sequence defined by v_{k+1}(i) = min_{a∈A} ( c(i, a) + ν σ_{P^a_i}(v_k) ), i ∈ X, k = 1, 2, . . . (13) A stationary, optimal control policy π = (a*, a*, . . .) is obtained as a*(i) ∈ arg min_{a∈A} { c(i, a) + ν σ_{P^a_i}(v) }, i ∈ X. (14) Note that the problem of computing the dual quantity ψ_∞(Π_s, T_s) given in (11) has been addressed in [1], where the authors provide the recursion (13) without proof. Theorem 3 leads to the following corollary, also proved in [2]. Corollary 4 In the infinite-horizon problem, we can without loss of generality assume that the control and nature policies are stationary, that is, φ_∞(Π, T) = φ_∞(Π_s, T_s) = φ_∞(Π_s, T) = φ_∞(Π, T_s). (15) Furthermore, in the finite-horizon case with a discounted cost function, the gap between the optimal values of the finite-horizon problems under stationary and time-varying uncertainty models, φ_N(Π, T) − φ_N(Π, T_s), goes to zero as the horizon length N goes to infinity, at a geometric rate ν. Now consider the following algorithm, where we describe the uncertainty using one of the models of Section 5. Robust Infinite Horizon Dynamic Programming Algorithm 1. Set ϵ > 0, initialize the value function v̂_1 > 0 and set k = 1. 2. (a) For all states i and controls a, compute, using the bisection algorithm given in [2], a value σ̂^a_i such that σ̂^a_i − δ ≤ σ_{P^a_i}(v̂_k) ≤ σ̂^a_i, where δ = (1 − ν)ϵ/(2ν). (b) For all states i and controls a, compute v̂_{k+1}(i) by v̂_{k+1}(i) = min_{a∈A} (c(i, a) + ν σ̂^a_i). 3. If ‖v̂_{k+1} − v̂_k‖ < (1 − ν)ϵ/(2ν), go to 4. Otherwise, replace k by k + 1 and go to 2. 4. For each i ∈ X, set π_ϵ = (a_ϵ, a_ϵ, . . .), where a_ϵ(i) = arg min_{a∈A} {c(i, a) + ν σ̂^a_i}, i ∈ X.
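The robust Bellman recursion (13) can be sketched in the same spirit. The version below again stands in a finite scenario set for P^a_i (the paper's KL-based sets would replace the inner max by the bisection step); it is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def robust_value_iteration(scenarios, c, nu, tol=1e-10, max_iter=100000):
    """Iterate v(i) <- min_a [c(i,a) + nu * sigma_{P^a_i}(v)] to its fixed
    point, as in recursion (13); finite scenario sets stand in for P^a_i."""
    n, m = c.shape
    v = np.zeros(n)
    for _ in range(max_iter):
        Q = np.empty((n, m))
        for i in range(n):
            for a in range(m):
                sigma = max(float(np.dot(p, v)) for p in scenarios[a][i])
                Q[i, a] = c[i, a] + nu * sigma
        v_new = Q.min(axis=1)
        if np.max(np.abs(v_new - v)) < tol:   # geometric convergence at rate nu
            return v_new
        v = v_new
    return v

# One state, one action, no real uncertainty: v = c / (1 - nu) in closed form.
v_inf = robust_value_iteration([[[np.array([1.0])]]], np.array([[1.0]]), nu=0.5)
```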
In [2], we establish that the above algorithm finds an ϵ-suboptimal robust policy in at most O(nm log(1/ϵ)²) flops. Thus, the extra computational cost incurred by robustness in the infinite-horizon case is only O(log(1/ϵ)). 5 Kullback-Leibler Divergence Uncertainty Models We now address the inner problem (10) for a specific action a ∈ A and state i ∈ X. Denote by D(p‖q) the Kullback-Leibler (KL) divergence (relative entropy) from the probability distribution q ∈ ∆_n to the probability distribution p ∈ ∆_n: D(p‖q) := Σ_j p(j) log(p(j)/q(j)). The above function provides a natural way to describe errors in (rows of) the transition matrices; examples of models based on this function are given below. Likelihood Models: Our first uncertainty model is derived from a controlled experiment starting from each state i = 1, 2, . . . , n and the count of the number of transitions to different states. We denote by F^a the matrix of empirical frequencies of transition with control a in the experiment, and by f^a_i its i-th row. We have F^a ≥ 0 and F^a 1 = 1, where 1 denotes the vector of ones. The “plug-in” estimate P̂^a = F^a is the solution to the maximum likelihood problem max_P Σ_{i,j} F^a(i, j) log P(i, j) : P ≥ 0, P1 = 1. (16) The optimal log-likelihood is β^a_max = Σ_{i,j} F^a(i, j) log F^a(i, j). A classical description of uncertainty in a maximum-likelihood setting is via the “likelihood region” [6] P^a = { P ∈ R^{n×n} : P ≥ 0, P1 = 1, Σ_{i,j} F^a(i, j) log P(i, j) ≥ β^a }, (17) where β^a < β^a_max is a pre-specified number, which represents the uncertainty level. In practice, the designer specifies an uncertainty level β^a based on re-sampling methods, or on a large-sample Gaussian approximation, so as to ensure that the set above achieves a desired level of confidence.
With the above model, we note that the inner problem (10) only involves the set P^a_i := { p^a_i ∈ R^n : p^a_i ≥ 0, (p^a_i)^T 1 = 1, Σ_j F^a(i, j) log p^a_i(j) ≥ β^a_i }, where the parameter β^a_i := β^a − Σ_{k≠i} Σ_j F^a(k, j) log F^a(k, j). The set P^a_i is the projection of the set described in (17) onto a specific axis of p^a_i-variables. Noting further that the likelihood function can be expressed in terms of KL divergence, the corresponding uncertainty model on the row p^a_i for given i ∈ X, a ∈ A, is given by a set of the form P^a_i = {p ∈ ∆_n : D(f^a_i‖p) ≤ γ^a_i}, where γ^a_i = Σ_j F^a(i, j) log F^a(i, j) − β^a_i is a function of the uncertainty level. Maximum A Posteriori (MAP) Models: a variation on likelihood models involves Maximum A Posteriori (MAP) estimates. If there exists prior information regarding the uncertainty on the i-th row of P^a, which can be described via a Dirichlet distribution [4] with parameter α^a_i, the resulting MAP estimation problem takes the form max_p (f^a_i + α^a_i − 1)^T log p : p^T 1 = 1, p ≥ 0. Thus, the MAP uncertainty model is equivalent to a likelihood model, with the sample distribution f^a_i replaced by f^a_i + α^a_i − 1, where α^a_i is the prior corresponding to state i and action a. Relative Entropy Models: Likelihood or MAP models involve the KL divergence from the unknown distribution to a reference distribution. We can also choose to describe uncertainty by exchanging the order of the arguments of the KL divergence. This results in a so-called “relative entropy” model, where the uncertainty on the i-th row of the transition matrix P^a is described by a set of the form P^a_i = {p ∈ ∆_n : D(p‖q^a_i) ≤ γ^a_i}, where γ^a_i > 0 is fixed and q^a_i > 0 is a given “reference” distribution (for example, the maximum likelihood distribution). Equipped with one of the above uncertainty models, we can address the inner problem (10). As shown in [2], the inner problem can be converted, by convex duality, to a problem of minimizing a single-variable convex function.
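For the relative entropy model P^a_i = {p ∈ ∆_n : D(p‖q) ≤ γ}, a standard Lagrangian computation (the duality step referred to above, not spelled out in this excerpt) gives the scalar dual σ(v) = min_{λ≥0} [λγ + λ log Σ_j q_j e^{v_j/λ}]. The sketch below minimizes this convex function of λ by ternary search rather than the authors' bisection scheme; function name, bounds, and tolerances are illustrative:

```python
import numpy as np

def sigma_rel_entropy(v, q, gamma, lam_max=100.0, iters=300):
    """Support function sigma(v) = max{ p.v : p in simplex, D(p||q) <= gamma },
    computed via its one-dimensional convex dual
    g(lam) = lam*gamma + lam*log sum_j q_j exp(v_j/lam)."""
    v, q = np.asarray(v, float), np.asarray(q, float)

    def dual(lam):
        m = v.max()
        # numerically stable evaluation of lam * log sum_j q_j exp(v_j / lam)
        return lam * gamma + m + lam * np.log(np.sum(q * np.exp((v - m) / lam)))

    lo, hi = 1e-9, lam_max
    for _ in range(iters):                 # ternary search on the convex dual
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if dual(m1) <= dual(m2):
            hi = m2
        else:
            lo = m1
    return float(dual(0.5 * (lo + hi)))

v = np.array([1.0, 3.0, 2.0])
q = np.array([0.5, 0.25, 0.25])
# gamma large enough to allow a point mass on argmax v: sigma -> max(v) = 3
s_loose = sigma_rel_entropy(v, q, gamma=2.0)
# small gamma keeps p near q: sigma stays close to q.v = 1.75
s_tight = sigma_rel_entropy(v, q, gamma=0.01)
```

Each dual evaluation is O(n), consistent with the O(n log(v_max/δ)) complexity quoted for the bisection scheme.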
In turn, this one-dimensional convex optimization problem can be solved via a bisection algorithm with a worst-case complexity of O(n log(v_max/δ)), where δ > 0 specifies the accuracy at which the optimal value of the inner problem (10) is computed, and v_max is a global upper bound on the value function. Remark: We can also use models where the uncertainty in the i-th row of the transition matrix P^a is described by a finite set of vectors, P^a_i = {p^{a,1}_i, . . . , p^{a,K}_i}. In this case the complexity of the corresponding robust dynamic programming algorithm is increased by a relative factor of K with respect to its classical counterpart, which makes the approach attractive when the number of “scenarios” K is moderate. 6 Concluding remarks We proposed a “robust dynamic programming” algorithm for solving finite-state and finite-action MDPs whose solutions are guaranteed to tolerate arbitrary changes of the transition probability matrices within given sets. We proposed models based on KL divergence, which is a natural way to describe estimation errors. The resulting robust dynamic programming algorithm has almost the same computational cost as the classical dynamic programming algorithm: the relative increase to compute an ϵ-suboptimal policy is log(N/ϵ) in the N-horizon case, and O(log(1/ϵ)) in the infinite-horizon case. References [1] J. Bagnell, A. Ng, and J. Schneider. Solving uncertain Markov decision problems. Technical Report CMU-RI-TR-01-25, Robotics Institute, Carnegie Mellon University, August 2001. [2] L. El Ghaoui and A. Nilim. Robust solution to Markov decision problems with uncertain transition matrices: proofs and complexity analysis. Technical Report UCB/ERL M04/07, Department of EECS, University of California, Berkeley, January 2004. A related version has been submitted to Operations Research in Dec. 2003. [3] E. Feinberg and A. Shwartz. Handbook of Markov Decision Processes, Methods and Applications. Kluwer Academic Publishers, Boston, 2002.
[4] T. Ferguson. Prior distributions on spaces of probability measures. The Annals of Statistics, 2(4):615–629, 1974. [5] R. Givan, S. Leach, and T. Dean. Bounded parameter Markov decision processes. In Fourth European Conference on Planning, pages 234–246, 1997. [6] E. Lehmann and G. Casella. Theory of Point Estimation. Springer-Verlag, New York, USA, 1998. [7] M. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience, New York, 1994. [8] J. K. Satia and R. L. Lave. Markov decision processes with uncertain transition probabilities. Operations Research, 21(3):728–740, 1973. [9] A. Shapiro and A. J. Kleywegt. Minimax analysis of stochastic problems. Optimization Methods and Software, 2002. To appear. [10] C. C. White and H. K. Eldeib. Markov decision processes with imprecise transition probabilities. Operations Research, 42(4):739–749, 1994.
2003
39
2,440
Probability Estimates for Multi-class Classification by Pairwise Coupling Ting-Fan Wu, Chih-Jen Lin, Department of Computer Science, National Taiwan University, Taipei 106, Taiwan; Ruby C. Weng, Department of Statistics, National Chengchi University, Taipei 116, Taiwan. Abstract: Pairwise coupling is a popular multi-class classification method that combines all pairwise comparisons for each pair of classes. This paper presents two approaches for obtaining class probabilities. Both methods can be reduced to linear systems and are easy to implement. We show conceptually and experimentally that the proposed approaches are more stable than two existing popular methods: voting and the method of [3]. 1 Introduction The multi-class classification problem refers to assigning each observation to one of k classes. As two-class problems are much easier to solve, many authors propose to use two-class classifiers for multi-class classification. In this paper we focus on techniques that provide a multi-class classification solution by combining all pairwise comparisons. A common way to combine pairwise comparisons is by voting [6, 2]. It constructs a rule for discriminating between every pair of classes and then selects the class with the most winning two-class decisions. Though the voting procedure requires just pairwise decisions, it only predicts a class label. In many scenarios, however, probability estimates are desired. As numerous (pairwise) classifiers do provide class probabilities, several authors [12, 11, 3] have proposed probability estimates obtained by combining the pairwise class probabilities. Given the observation x and the class label y, we assume that the estimated pairwise class probabilities r_ij of µ_ij = p(y = i | y = i or j, x) are available. Here the r_ij are obtained by some binary classifiers. Then, the goal is to estimate {p_i}_{i=1}^k, where p_i = p(y = i | x), i = 1, . . . , k.
We propose to obtain an approximate solution to an identity, and then select the label with the highest estimated class probability. The existence of the solution is guaranteed by the theory of finite Markov Chains. Motivated by the optimization formulation of this method, we propose a second approach. Interestingly, it can also be regarded as an improved version of the coupling approach given by [12]. Both of the proposed methods can be reduced to solving linear systems and are simple in practical implementation. Furthermore, from conceptual and experimental points of view, we show that the two proposed methods are more stable than voting and the method in [3]. We organize the paper as follows. In Section 2, we review two existing methods. Sections 3 and 4 detail the two proposed approaches. Section 5 presents the relationship among the four methods through their corresponding optimization formulas. In Section 6, we compare these methods using simulated and real data. The classifiers considered are support vector machines. Section 7 concludes the paper. Due to space limits, we omit all detailed proofs. A complete version of this work is available at http://www.csie.ntu.edu.tw/ ˜cjlin/papers/svmprob/svmprob.pdf. 2 Review of Two Methods Let r_ij be the estimates of µ_ij = p_i/(p_i + p_j). The voting rule [6, 2] is δ_V = argmax_i [Σ_{j:j≠i} I_{{r_ij > r_ji}}]. (1) A simple estimate of probabilities can be derived as p^v_i = 2 Σ_{j:j≠i} I_{{r_ij > r_ji}} / (k(k − 1)). The authors of [3] suggest another method to estimate class probabilities, and they claim that the resulting classification rule can outperform δ_V in some situations. Their approach is based on the minimization of the Kullback-Leibler (KL) distance between r_ij and µ_ij: l(p) = Σ_{i≠j} n_ij r_ij log(r_ij/µ_ij), (2) where Σ_{i=1}^k p_i = 1, p_i > 0, i = 1, . . . , k, and n_ij is the number of instances in class i or j. By setting ∇l(p) = 0, a nonlinear system has to be solved. [3] proposes an iterative procedure to find the minimum of (2).
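The voting rule (1) and the simple estimate p^v can be sketched as follows (illustrative code, not the authors'; `r` is a hypothetical matrix of pairwise estimates with r_ij + r_ji = 1 and an unused diagonal):

```python
import numpy as np

def voting(r):
    """Voting rule (1) and the probability estimate p^v from Section 2."""
    k = r.shape[0]
    off = ~np.eye(k, dtype=bool)                  # ignore the diagonal
    wins = ((r > r.T) & off).sum(axis=1)          # winning two-class decisions
    p_v = 2.0 * wins / (k * (k - 1))              # p^v_i = 2 sum_j I / (k(k-1))
    return wins.argmax(), p_v

# Hypothetical pairwise estimates for k = 3 classes.
r = np.array([[0.0, 0.7, 0.6],
              [0.3, 0.0, 0.6],
              [0.4, 0.4, 0.0]])
winner, p_v = voting(r)
```

Note that p^v sums to one by construction, since each of the k(k−1)/2 pairs contributes exactly one win.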
If r_ij > 0, ∀i ≠ j, the existence of a unique global minimal solution to (2) has been proved in [5] and references therein. Let p* denote this point. Then the resulting classification rule is δ_HT(x) = argmax_i [p*_i]. It is shown in Theorem 1 of [3] that p*_i > p*_j if and only if p̃_i > p̃_j, where p̃_j = 2 Σ_{s:s≠j} r_js / (k(k − 1)); (3) that is, the p̃_i are in the same order as the p*_i. Therefore, the p̃_i are sufficient if one only requires the classification rule. In fact, as pointed out by [3], p̃ can be derived as an approximation to the identity p_i = Σ_{j:j≠i} ((p_i + p_j)/(k − 1)) (p_i/(p_i + p_j)) = Σ_{j:j≠i} ((p_i + p_j)/(k − 1)) µ_ij (4) by replacing p_i + p_j with 2/k, and µ_ij with r_ij. 3 Our First Approach Note that δ_HT is essentially argmax_i [p̃_i], and p̃ is an approximate solution to (4). Instead of replacing p_i + p_j by 2/k, in this section we propose to solve the system p_i = Σ_{j:j≠i} ((p_i + p_j)/(k − 1)) r_ij, ∀i, subject to Σ_{i=1}^k p_i = 1, p_i ≥ 0, ∀i. (5) Let p̄ denote the solution to (5). Then the resulting decision rule is δ_1 = argmax_i [p̄_i]. As δ_HT relies on p_i + p_j ≈ 2/k, in Section 6.1 we use two examples to illustrate possible problems with this rule. To solve (5), we rewrite it as Qp = p, Σ_{i=1}^k p_i = 1, p_i ≥ 0, ∀i, where Q_ij = r_ij/(k − 1) if i ≠ j, and Q_ii = Σ_{s:s≠i} r_is/(k − 1). (6) Observe that Σ_{i=1}^k Q_ij = 1 for j = 1, . . . , k and 0 ≤ Q_ij ≤ 1 for i, j = 1, . . . , k, so Q^T is the transition matrix of a finite Markov Chain. Moreover, if r_ij > 0 for all i ≠ j, then Q_ij > 0, which implies this Markov Chain is irreducible and aperiodic. These conditions guarantee the existence of a unique stationary probability and all states being positive recurrent. Hence, we have the following theorem: Theorem 1 If r_ij > 0, i ≠ j, then (6) has a unique solution p with 0 < p_i < 1, ∀i. With Theorem 1 and some further analyses, if we remove the constraint p_i ≥ 0, ∀i, the linear system with k + 1 equations still has the same unique solution.
Furthermore, if any one of the k equalities Qp = p is removed, we have a system with k variables and k equalities which, again, has the same unique solution. Thus, (6) can be solved by Gaussian elimination. On the other hand, as the stationary distribution of a Markov Chain can be derived from the limit of the n-step transition probabilities, we can also solve for p by repeatedly multiplying Q with any initial vector. Now we reexamine this method to gain more insight. The following arguments show that the solution to (5) is a global minimum of a meaningful optimization problem. To begin, we express (5) as Σ_{j:j≠i} r_ji p_i − Σ_{j:j≠i} r_ij p_j = 0, i = 1, . . . , k, using the property that r_ij + r_ji = 1, ∀i ≠ j. Then the solution to (5) is in fact the global minimum of the following problem: min_p Σ_{i=1}^k (Σ_{j:j≠i} r_ji p_i − Σ_{j:j≠i} r_ij p_j)² subject to Σ_{i=1}^k p_i = 1, p_i ≥ 0, ∀i. (7) Indeed, the objective function is always nonnegative, and it attains zero under (5) and (6). 4 Our Second Approach Note that both approaches in Sections 2 and 3 involve solving optimization problems using relations like p_i/(p_i + p_j) ≈ r_ij or Σ_{j:j≠i} r_ji p_i ≈ Σ_{j:j≠i} r_ij p_j. Motivated by (7), we suggest another optimization formulation as follows: min_p (1/2) Σ_{i=1}^k Σ_{j:j≠i} (r_ji p_i − r_ij p_j)² subject to Σ_{i=1}^k p_i = 1, p_i ≥ 0, ∀i. (8) In related work, [12] proposes to solve a linear system consisting of Σ_{i=1}^k p_i = 1 and any k − 1 equations of the form r_ji p_i = r_ij p_j. However, as pointed out in [11], the results of [12] strongly depend on the selection of the k − 1 equations. In fact, as (8) considers all r_ij p_j − r_ji p_i, not just k − 1 of them, it can be viewed as an improved version of [12]. Let p† denote the corresponding solution. We then define the classification rule as δ_2 = argmax_i [p†_i]. Since (7) has a unique solution, which can be obtained by solving a simple linear system, it is desirable to see whether the minimization problem (8) has these nice properties. In the rest of the section, we show that this is true.
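The Gaussian-elimination route for the first approach can be sketched by assembling the k equalities Σ_{j≠i} r_ji p_i − Σ_{j≠i} r_ij p_j = 0, replacing one (redundant) equality with the normalization Σ_i p_i = 1, and solving the resulting square system; `np.linalg.solve` stands in for the elimination, and the example `r` is generated from a known p, which the method then recovers:

```python
import numpy as np

def coupling_first(r):
    """First approach: solve (sum_{j!=i} r_ji) p_i - sum_{j!=i} r_ij p_j = 0
    for all i, with the last equality replaced by sum_i p_i = 1."""
    k = r.shape[0]
    A = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i == j:
                A[i, i] = sum(r[s, i] for s in range(k) if s != i)
            else:
                A[i, j] = -r[i, j]
    A[-1, :] = 1.0                 # normalization replaces one equality
    b = np.zeros(k)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Consistent pairwise estimates generated from a known p: r_ij = p_i/(p_i+p_j).
p_true = np.array([0.5, 0.3, 0.2])
r = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            r[i, j] = p_true[i] / (p_true[i] + p_true[j])
p_bar = coupling_first(r)
```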
The following theorem shows that the nonnegativity constraints in (8) are redundant. Theorem 2 Problem (8) is equivalent to the simplified problem without the conditions p_i ≥ 0, ∀i. Note that we can rewrite the objective function of (8) as min_p (1/2) p^T Q p, where Q_ii = Σ_{s:s≠i} r_si² and Q_ij = −r_ji r_ij for i ≠ j. (9) From here we can show that Q is positive semi-definite. Therefore, without the constraints p_i ≥ 0, ∀i, (9) is a linearly-constrained convex quadratic programming problem. Consequently, a point p is a global minimum if and only if it satisfies the KKT optimality condition: there is a scalar b such that [Q e; e^T 0] [p; b] = [0; 1]. (10) Here e is the vector of all ones and b is the Lagrange multiplier of the equality constraint Σ_{i=1}^k p_i = 1. Thus, the solution of (8) can be obtained by solving the simple linear system (10). The existence of a unique solution is guaranteed by the invertibility of the matrix in (10). Moreover, if Q is positive definite (PD), this matrix is invertible. The following theorem shows that Q is PD under quite general conditions. Theorem 3 If for any i = 1, . . . , k, there are s ≠ i and j ≠ i such that r_si r_sj / r_is ≠ r_ji r_js / r_ij, then Q is positive definite. In addition to direct methods, next we propose a simple iterative method for solving (10): Algorithm 1 1. Start with some initial p_i ≥ 0, ∀i, and Σ_{i=1}^k p_i = 1. 2. Repeat (t = 1, . . . , k, 1, . . .) p_t ← (1/Q_tt) [−Σ_{j:j≠t} Q_tj p_j + p^T Q p] (11) normalize p (12) until (10) is satisfied. Theorem 4 If r_sj > 0, ∀s ≠ j, and {p^i}_{i=1}^∞ is the sequence generated by Algorithm 1, any convergent sub-sequence converges to a global minimum of (8). As Theorem 3 indicates that in general Q is positive definite, the sequence {p^i}_{i=1}^∞ from Algorithm 1 usually converges globally to the unique minimum of (8).
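The KKT system (10) is likewise a single linear solve. A sketch, with Q assembled as in (9) (assuming, as in the text, r_ij + r_ji = 1 with a zero diagonal on `r`; the consistent example again recovers the generating p, since the objective then attains zero at p itself):

```python
import numpy as np

def coupling_second(r):
    """Second approach: min (1/2) p^T Q p s.t. e^T p = 1, via KKT system (10).
    Q_ii = sum_{s!=i} r_si^2, Q_ij = -r_ji * r_ij for i != j."""
    k = r.shape[0]
    Q = -(r.T * r)                              # off-diagonal: -r_ji * r_ij
    np.fill_diagonal(Q, (r ** 2).sum(axis=0))   # column sums give sum_s r_si^2
    # KKT matrix [[Q, e], [e^T, 0]] of equation (10)
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = Q
    K[:k, k] = 1.0
    K[k, :k] = 1.0
    rhs = np.zeros(k + 1)
    rhs[k] = 1.0
    return np.linalg.solve(K, rhs)[:k]          # drop the multiplier b

# Consistent pairwise estimates generated from a known p.
p_true = np.array([0.5, 0.3, 0.2])
r = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            r[i, j] = p_true[i] / (p_true[i] + p_true[j])
p_dagger = coupling_second(r)
```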
5 Relations Among the Four Methods The four decision rules δ_HT, δ_1, δ_2, and δ_V can be written as argmax_i [p_i], where p is derived by the following four optimization formulations under the constraints Σ_{i=1}^k p_i = 1 and p_i ≥ 0, ∀i: δ_HT: min_p Σ_{i=1}^k [Σ_{j:j≠i} ((1/k) r_ij − (1/2) p_i)]², (13) δ_1: min_p Σ_{i=1}^k [Σ_{j:j≠i} (r_ij p_j − r_ji p_i)]², (14) δ_2: min_p Σ_{i=1}^k Σ_{j:j≠i} (r_ij p_j − r_ji p_i)², (15) δ_V: min_p Σ_{i=1}^k Σ_{j:j≠i} (I_{{r_ij > r_ji}} p_j − I_{{r_ji > r_ij}} p_i)². (16) Note that (13) can be easily verified, and that (14) and (15) have been explained in Sections 3 and 4. For (16), its solution is p_i = c / Σ_{j:j≠i} I_{{r_ji > r_ij}}, where c is the normalizing constant;∗ therefore, argmax_i [p_i] is the same as (1). Clearly, (13) can be obtained from (14) by letting p_j ≈ 1/k, ∀j, and r_ji ≈ 1/2, ∀i, j. Such approximations ignore the differences between the p_i. Similarly, (16) is obtained from (15) by taking the extreme values of r_ij: 0 or 1. As a result, (16) may enlarge the differences between the p_i. Next, compared with (15), (14) may tend to underestimate the differences between the p_i's. The reason is that (14) allows the differences between r_ij p_j and r_ji p_i to cancel first. Thus, conceptually, (13) and (16) are more extreme: the former tends to underestimate the differences between the p_i's, while the latter overestimates them. These arguments will be supported by simulated and real data in the next section. 6 Experiments 6.1 Simple Simulated Examples [3] designs a simple experiment in which all the p_i's are fairly close and their method δ_HT outperforms the voting strategy δ_V. We conduct this experiment first to assess the performance of our proposed methods. As in [3], we define class probabilities p_1 = 1.5/k, p_j = (1 − p_1)/(k − 1), j = 2, . . . , k, and then set r_ij = p_i/(p_i + p_j) + 0.1 z_ij if i > j, (17) r_ji = 1 − r_ij if j > i, (18) where the z_ij are standard normal variates. Since the r_ij are required to be within (0, 1), we truncate r_ij at ϵ below and 1 − ϵ above, with ϵ = 0.00001.
In this example, class 1 has the highest probability and hence is the correct class. Figure 1 shows accuracy rates for each of the four methods when k = 3, 5, 8, 10, 12, 15, 20, averaged over 1,000 replicates. Note that in this experiment all classes are quite competitive, so, when using δV, the highest vote sometimes occurs at two or more different classes. We handle this problem by randomly selecting one class from the ties. This partly explains why δV performs poorly. Another explanation is that the r_ij here are all close to 1/2, but (16) uses 1 or 0 instead; therefore, the solution may be severely biased. Besides δV, the other three rules have done very well in this example.

* For I to be well defined, we consider r_ij ≠ r_ji, which is generally true. In addition, if there is an i for which Σ_{j: j≠i} I{r_ji > r_ij} = 0, an optimal solution of (16) is p_i = 1 and p_j = 0, ∀j ≠ i; the resulting decision is the same as that of (1).

Figure 1: Accuracy of predicting the true class by the methods δHT (solid line, cross marked), δV (dashed line, square marked), δ1 (dotted line, circle marked), and δ2 (dashed line, asterisk marked) from simulated class probabilities p_i, i = 1, 2, . . . , k. Panels plot accuracy rate against log_2 k for (a) balanced p_i, (b) unbalanced p_i, and (c) highly unbalanced p_i.

Since δHT relies on the approximation p_i + p_j ≈ 2/k, this rule may suffer some losses if the class probabilities are not highly balanced. To examine this point, we consider the following two sets of class probabilities: (1) We let k_1 = k/2 if k is even, and (k + 1)/2 if k is odd; then we define p_1 = 0.95 × 1.5/k_1, p_i = (0.95 − p_1)/(k_1 − 1) for i = 2, . . . , k_1, and p_i = 0.05/(k − k_1) for i = k_1 + 1, . . . , k. (2) If k = 3, we define p_1 = 0.95 × 1.5/2, p_2 = 0.95 − p_1, and p_3 = 0.05.
If k > 3, we define p_1 = 0.475, p_2 = p_3 = 0.475/2, and p_i = 0.05/(k − 3) for i = 4, . . . , k. After setting the p_i, we define the pairwise comparisons r_ij as in (17)-(18). Both experiments are repeated 1,000 times. The accuracy rates are shown in Figures 1(b) and 1(c). In both scenarios the p_i are not balanced. As expected, δHT is quite sensitive to the imbalance of the p_i. The situation is much worse in Figure 1(c) because the approximation p_i + p_j ≈ 2/k is more seriously violated, especially when k is large. In summary, δ1 and δ2 are less sensitive to the p_i, and their overall performance is fairly stable. All features observed here agree with our analysis in Section 5.

6.2 Real Data

In this section we present experimental results on several multi-class problems: segment, satimage, and letter from the Statlog collection [9], USPS [4], and MNIST [7]. All data sets are available at http://www.csie.ntu.edu.tw/˜cjlin/libsvmtools/ t. Their numbers of classes are 7, 6, 26, 10, and 10, respectively. From the thousands of instances in each data set, we select 300 as our training set and 500 as our testing set. We consider support vector machines (SVM) with the RBF kernel e^{−γ||x_i − x_j||^2} as the binary classifier. The regularization parameter C and the kernel parameter γ are selected by cross-validation. To begin, for each training set, a five-fold cross-validation is conducted on the following grid of (C, γ): [2^−5, 2^−3, . . . , 2^15] × [2^−5, 2^−3, . . . , 2^15]. This is done by modifying LIBSVM [1], a library for SVM. At each (C, γ), four folds are sequentially used as the training set while the remaining fold serves as the validation set. The training on the four folds consists of k(k − 1)/2 binary SVMs. For the binary SVM of the ith and the jth classes, using the decision values f̂ of the training data, we employ an improved implementation [8] of Platt's posterior probabilities [10] to estimate r_ij:

r_ij = P(i | i or j, x) = 1 / (1 + e^{A f̂ + B}),   (19)

where A and B are estimated by minimizing the negative log-likelihood function.† Then, for each validation instance, we apply the four methods to obtain classification decisions.

Table 1: Testing errors (in percentage) by the four methods. Each row reports the testing errors for one pair of training and testing sets. The mean and std (standard deviation) are over five 5-fold cross-validation procedures used to select the best (C, γ).

Dataset  (k)  | δHT mean / std | δ1 mean / std  | δ2 mean / std  | δV mean / std
satimage (6)  | 14.080 / 1.306 | 14.600 / 0.938 | 14.760 / 0.784 | 15.400 / 0.219
              | 12.960 / 0.320 | 13.400 / 0.400 | 13.400 / 0.400 | 13.360 / 0.080
              | 14.520 / 0.968 | 14.760 / 1.637 | 13.880 / 0.392 | 14.080 / 0.240
              | 12.400 / 0.000 | 12.200 / 0.000 | 12.640 / 0.294 | 12.680 / 1.114
              | 16.160 / 0.294 | 16.400 / 0.379 | 16.120 / 0.299 | 16.160 / 0.344
segment  (7)  |  9.960 / 0.480 |  9.480 / 0.240 |  9.000 / 0.400 |  8.880 / 0.271
              |  6.040 / 0.528 |  6.280 / 0.299 |  6.200 / 0.456 |  6.760 / 0.445
              |  6.600 / 0.000 |  6.680 / 0.349 |  6.920 / 0.271 |  7.160 / 0.196
              |  5.520 / 0.466 |  5.200 / 0.420 |  5.400 / 0.580 |  5.480 / 0.588
              |  7.440 / 0.625 |  8.160 / 0.637 |  8.040 / 0.408 |  7.840 / 0.344
USPS     (10) | 14.840 / 0.388 | 13.520 / 0.560 | 12.760 / 0.233 | 12.520 / 0.160
              | 12.080 / 0.560 | 11.440 / 0.625 | 11.600 / 1.081 | 11.440 / 0.991
              | 10.640 / 0.933 | 10.000 / 0.657 |  9.920 / 0.483 | 10.320 / 0.744
              | 12.320 / 0.845 | 11.960 / 1.031 | 11.560 / 0.784 | 11.840 / 1.248
              | 13.400 / 0.310 | 12.640 / 0.080 | 12.920 / 0.299 | 12.520 / 0.917
MNIST    (10) | 17.400 / 0.000 | 16.560 / 0.080 | 15.760 / 0.196 | 15.960 / 0.463
              | 15.200 / 0.400 | 14.600 / 0.000 | 13.720 / 0.588 | 12.360 / 0.196
              | 17.320 / 1.608 | 14.280 / 0.560 | 13.400 / 0.657 | 13.760 / 0.794
              | 14.720 / 0.449 | 14.160 / 0.196 | 13.360 / 0.686 | 13.520 / 0.325
              | 12.560 / 0.294 | 12.600 / 0.000 | 13.080 / 0.560 | 12.440 / 0.233
letter   (26) | 39.880 / 1.412 | 37.160 / 1.106 | 34.560 / 2.144 | 33.480 / 0.325
              | 41.640 / 0.463 | 39.400 / 0.769 | 35.920 / 1.389 | 33.440 / 1.061
              | 41.320 / 1.700 | 38.920 / 0.854 | 35.800 / 1.453 | 35.000 / 1.066
              | 35.240 / 1.439 | 32.920 / 1.121 | 29.240 / 1.335 | 27.400 / 1.117
              | 43.240 / 0.637 | 40.360 / 1.472 | 36.960 / 1.741 | 34.520 / 1.001
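The sigmoid map (19) and the negative log-likelihood it is fit with are easy to code. A minimal sketch (the function names are ours, not from [8] or LIBSVM; fitting A and B would additionally require a numerical optimizer):

```python
import numpy as np

def sigmoid_r(f, A, B):
    """Map SVM decision values f to r_ij = P(i | i or j, x), eq. (19)."""
    return 1.0 / (1.0 + np.exp(A * f + B))

def neg_log_likelihood(A, B, f, t):
    """Negative log-likelihood of sigmoid parameters (A, B).

    f : decision values for the training examples of classes i and j
    t : targets, 1 for examples of class i and 0 for examples of class j
    """
    r = sigmoid_r(f, A, B)
    return -np.sum(t * np.log(r) + (1.0 - t) * np.log(1.0 - r))
```

With A = 1 and B = 0 this reduces to the plain logistic function of the decision value, so f = 0 maps to r_ij = 1/2, and strongly negative decision values (deep on class i's side, under this sign convention) map to r_ij near 1.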
The error over the five validation sets is thus the cross-validation error at (C, γ). After the cross-validation is done, each rule obtains its best (C, γ).‡ Using these parameters, we train on the whole training set to obtain the final model. Next, as in (19), the decision values from the training data are employed to find the r_ij. Then, the testing data are classified using each of the four rules. Due to the randomness of separating the training data into five folds for finding the best (C, γ), we repeat the five-fold cross-validation five times and obtain the mean and standard deviation of the testing error. Moreover, as the selection of 300 training and 500 testing instances from a larger data set is also random, we generate five such pairs. In Table 1, each row reports the testing errors based on one pair of training and testing sets. The results show that when the number of classes k is small, the four methods perform similarly; however, for problems with larger k, δHT is less competitive. In particular, for the problem letter, which has 26 classes, δ2 or δV outperforms δHT by at least 5%. It seems that for the problems here, the characteristics are closer to the setting of Figure 1(c) than to that of Figure 1(a). All these results agree with the previous findings in Sections 5 and 6.1. Note that in Table 1 some standard deviations are zero; that means the best (C, γ) found by the different cross-validations were all the same. Overall, the variation in parameter selection due to the randomness of cross-validation is not large.

† [10] suggests using f̂ from the validation data instead of the training data. However, this requires a further cross-validation on the four-fold data. For simplicity, we directly use f̂ from the training data.
‡ If more than one parameter set returns the smallest cross-validation error, we simply choose the one with the smallest C.
7 Discussions and Conclusions As the minimization of the KL distance is a well known criterion, some may wonder why the performance of δHT is not quite satisfactory in some of the examples. One possible explanation is that here KL distance is derived under the assumptions that nijrij ∼ Bin(nij, µij) and rij are independent; however, as pointed out in [3], neither of the assumptions holds in the classification problem. In conclusion, we have provided two methods which are shown to be more stable than both δHT and δV . In addition, the two proposed approaches require only solutions of linear systems instead of a nonlinear one in [3]. The authors thank S. Sathiya Keerthi for helpful comments. References [1] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/˜cjlin/libsvm. [2] J. Friedman. Another approach to polychotomous classification. Technical report, Department of Statistics, Stanford University, 1996. Available at http://www-stat.stanford.edu/reports/friedman/poly.ps.Z. [3] T. Hastie and R. Tibshirani. Classification by pairwise coupling. The Annals of Statistics, 26(1):451–471, 1998. [4] J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, May 1994. [5] D. R. Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 2004. To appear. [6] S. Knerr, L. Personnaz, and G. Dreyfus. Single-layer learning revisited: a stepwise procedure for building and training a neural network. In J. Fogelman, editor, Neurocomputing: Algorithms, Architectures and Applications. Springer-Verlag, 1990. [7] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998. MNIST database available at http://yann.lecun.com/exdb/mnist/. [8] H.-T. Lin, C.-J. Lin, and R. C. Weng. 
A note on Platt's probabilistic outputs for support vector machines. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, 2003. [9] D. Michie, D. J. Spiegelhalter, and C. C. Taylor. Machine Learning, Neural and Statistical Classification. Prentice Hall, Englewood Cliffs, N.J., 1994. Data available at http://www.ncc.up.pt/liacc/ML/statlog/datasets.html. [10] J. Platt. Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, Cambridge, MA, 2000. MIT Press. [11] D. Price, S. Knerr, L. Personnaz, and G. Dreyfus. Pairwise neural network classifiers with probabilistic outputs. In G. Tesauro, D. Touretzky, and T. Leen, editors, Neural Information Processing Systems, volume 7, pages 1109–1116. The MIT Press, 1995. [12] P. Refregier and F. Vallet. Probabilistic approach for multiclass classification with neural networks. In Proceedings of the International Conference on Artificial Neural Networks, pages 1003–1007, 1991.
2003
Parameterized Novelty Detection for Environmental Sensor Monitoring Cynthia Archer, Todd K. Leen, Antonio Baptista OGI School of Science & Engineering Oregon Health & Science University 20000 N. W. Walker Road Beaverton, OR 97006 archer@cse.ogi.edu, tleen@cse.ogi.edu, baptista@ccalmr.ogi.edu Abstract As part of an environmental observation and forecasting system, sensors deployed in the Columbia River Estuary (CORIE) gather information on physical dynamics and changes in estuary habitat. Of these, salinity sensors are particularly susceptible to biofouling, which gradually degrades sensor response and corrupts critical data. Automatic fault detectors have the capability to identify bio-fouling early and minimize data loss. Complicating the development of discriminatory classifiers is the scarcity of bio-fouling onset examples and the variability of the bio-fouling signature. To solve these problems, we take a novelty detection approach that incorporates a parameterized bio-fouling model. These detectors identify the occurrence of bio-fouling, and its onset time, as reliably as human experts. Real-time detectors installed during the summer of 2001 produced no false alarms, yet detected all episodes of sensor degradation before the field staff scheduled these sensors for cleaning. From this initial deployment through February 2003, our bio-fouling detectors have essentially doubled the amount of useful data coming from the CORIE sensors. 1 Introduction Environmental observation and forecasting systems (EOFS) gather, process, and deliver environmental information to facilitate sustainable development of natural resources. Our work is part of a pilot EOFS system being developed for the Columbia River Estuary (CORIE) [1]. This system uses data from sensors deployed throughout the estuary (Figure 1) to calibrate and verify numerical models of circulation and material transport.
CORIE scientists use these models to predict and evaluate the effects of development on the estuary environment (e.g. [2]). CORIE salinity sensors deployed in the estuary lose several months of data every year due to sensor degradation. Corrupted and missing field measurements compromise model calibration and verification, which can lead to invalid environmental forecasts. The most common form of salinity sensor degradation is bio-fouling, a reduction of the sensor response due to growth of biological material on the sensor. Prior to the deployment of the technology described here, CORIE salinity sensors suffered a 68% yearly data loss due to bio-fouling. Although bio-fouling degradation is a common problem for environmental sensors, there is apparently no previous work that develops automatic detectors of such degradation. Figure 1: Map of the Columbia River estuary marked with the locations of CORIE sensors. Early bio-fouling detection is made difficult by the normal variability of salinity measurements. Tides cause the measurements to vary from near river salinity to near ocean salinity twice a day. The temporal pattern of salinity penetration varies spatially in the estuary. In addition, upriver sites, such as AM169, show substantial variability with the 14 and 28 day spring-neap tidal cycle. Changes in weather (e.g. winds, precipitation) and ocean conditions cause additional variations in salinity. To complicate bio-fouling detection further, the bio-fouling signature also varies from episode to episode. The time from onset to complete bio-fouling can take anywhere from 3 weeks to 5 months depending on the season and type of growth. We observe two types of bio-fouling in the estuary: hard growth (e.g. barnacles), characterized by quick linear degradation, and soft growth (e.g. plant material), characterized by slow linear degradation with occasional interruptions in the downtrend.
Figure 2 illustrates tidal variations in salinity and the effect that bio-fouling has on these measurements. It contains salinity time series in practical salinity units (psu) from two sensors mounted at the Red26 station (Figure 1). The upper trace, from sensor CT1460, contains only clean measurements. The lower trace, from sensor CT1448, contains both clean and bio-fouled measurements. The first halves of the two time series are similar, but beginning on September 28th the salinity measurements diverge. The CT1448 sensor exhibits typical hard-growth bio-fouling degradation. The primary challenge to our work is to detect the degradation quickly, ideally within several diurnal cycles. Early detection will limit the use of corrupted data in on-line applications, and provide a basis to rapidly replace degrading sensors, and thus drastically reduce data loss.

Figure 2: Clean and bio-fouled salinity time series examples from the Red26 station, plotted against date (month/day) from September 10 through October 10. The upper time series is from the clean instrument CT1460. The lower time series, from instrument CT1448, shows degradation beginning on September 28, 2001. On removal, CT1448 was found to be bio-fouled.

Although the CORIE data archives contain many months of bio-fouled data, there are relatively few examples of the onset of degradation for most of the sensors
A model of the clean sensor behavior is fit to archival data. These models are used in a sequential likelihood test to provide detection of bio-fouling, and an estimation of the time at which the degradation began. Evaluations show that our detectors identify the onset of bio-fouling as reliably as human experts, and frequently within fewer tidal cycles of the onset. Our deployment of sensors throughout the estuary has resulted in an actual reduction of the error loss from 68% to 35%. However, this figure does not adequately reflect the efficacy of the detectors. Were it economical to replace sensors immediately upon detection of degradation, the data loss would have been reduced to 17%. 2 Salinity and Temperature Our detectors monitor maximum diurnal (md) salinity, defined as the maximum salinity near one of the two diurnal tidal floods. When the sensor is clean, the md salinity stays close to some mean value, with occasional dips of several psu caused by variations in the intrusion of salt water into the estuary. When the sensor biofouls, the md salinity gradually decreases to typically less than half its normal mean value, as seen in the Figure 2 example. Detectors that monitor salinity alone can not distinguish between normal decreases in salinity and early bio-fouling. This results in a high false alarm rate1. Natural salinity decreases can be recognized by monitoring a correlated source of information that is not corrupted by bio-fouling. Salinity and temperature at a station are products of the same mixing process of ocean and river waters, so we expect these values will be correlated. Assuming linear mixing of ocean and river waters, measured salinity Sm and temperature Tm are linear functions of ocean {So, To} and river {Sr, Tr} values Sm = α(t)So + (1 −α(t))Sr (1) Tm = α(t)To + (1 −α(t))Tr (2) where α(t) is the mixing coefficient at time t. River salinity Sr is close to zero. 
Consequently, the estimated mixing coefficient

α(t) = (T_r − T_m) / (T_r − T_o)   (3)

should be well correlated with salinity, S_m ≈ α S_o. The river temperature is measured at far upstream stations (Elliot or Woody). The ocean temperature is estimated from measurements at Sand Island, the outermost sensor station.

3 Bio-fouling Detection

Our early experiments with single-measurement detection suggested that we develop detectors that accrue information over time, similar to the standard sequential likelihood methods in classical pattern recognition. This is a natural framework for detecting degradation that grows with time. Assume a sequence of measurements (salinity and temperature) y_n, n = 1, . . . , N, where N is the current time. We construct probability densities for such sequences both for clean sensors, p(y_1, . . . , y_N | c), and for bio-fouled sensors, p(y_1, . . . , y_N | f). With these distributions, we construct a likelihood ratio test

h = ln [ p(y_1, . . . , y_N | f) / p(y_1, . . . , y_N | c) ]  ≷  λ,   (4)

deciding "fouled" when h > λ, where the threshold λ is chosen high enough to provide a specified false alarm rate (Neyman-Pearson test). We assume that the probability density for the measurement sequence from fouled detectors is parameterized by a vector of unknown parameters θ. The model is constructed such that at θ = 0 the density for the sequence assuming a fouled detector is equal to the density of the sequence assuming a clean detector:

p(y_1, . . . , y_N | f, θ = 0) = p(y_1, . . . , y_N | c)   (5)

Next, we suppose that a given sequence contains a bio-fouling event that is initiated at the unknown time τ. Under our density models (below), consecutive measurements in the sequence are independent conditioned on the state of the detector. Consequently, the likelihood ratio for the sequence (4) reduces to

h = ln [ p(y_1, . . . , y_N | f, τ, θ) / p(y_1, . . . , y_N | c) ]
  = ln [ p(y_1, . . . , y_{τ−1} | c) p(y_τ, . . . , y_N | τ, θ, f) / p(y_1, . . . , y_N | c) ]
  = Σ_{n=τ}^{N} ln [ p(y_n | τ, θ, f) / p(y_n | c) ]  ≷  λ   (6)

Finally, we fit the fouling model parameters θ and the onset time τ by maximizing the log-likelihood ln p(y_1, . . . , y_N | f, τ, θ) with respect to θ and τ. Since the clean detector model is independent of τ and θ, this is equivalent to maximizing the log-likelihood ratio in (6). Hence, we replace the latter with

h = max_{τ,θ} Σ_{n=τ}^{N} ln [ p(y_n | τ, θ, f) / p(y_n | c) ]  ≷  λ   (7)

If the sequence is coming from a clean sensor, the fit should give θ ≈ 0 and hence h ≈ 0 (cf. (5)), and we will detect no event (assuming λ > 0). This construction is a variant of the type of signal change detection discussed by Basseville [3].

3.1 Bio-fouling Fault Model

By parameterizing the bio-fouling model, we are able to develop detectors using only clean example data. In this parameterized novelty detector, the bio-fouled parameters θ are fit on-line to the data under test. To develop our classifier, we first define models of the clean and bio-fouled data. We model the true salinity, s, and the temperature-based mixing coefficient, α, as jointly Gaussian, p(s, α | c) = N(µ, Σ), where

µ = [ µ_s, µ_α ]^T  and  Σ = [ σ_s²  σ_sα ; σ_sα  σ_α² ].   (8)

This provides a regression of the salinity on α. The probability of the md salinity measurement conditioned on temperature when the sensor is clean is Gaussian, N(η, ρ²), with conditional mean

E[s | α, c] ≡ η = µ_s + (σ_sα / σ_α²)(α − µ_α)   (9)

and conditional variance

var[s | α, c] ≡ ρ² = σ_s² − σ_sα² / σ_α².   (10)

When bio-fouling occurs, the salinity measurement is suppressed relative to the true value. We model this suppression as a linear downtrend with (unknown) rate (slope) m that begins at the (unknown) time τ. The model of the measured md salinity value for a fouled detector is

x_n = g(n) s_n,   (11)

where the suppression factor g(n) is

g(n) = 1 for n < τ,  and  g(n) = 1 − m(n − τ) for n ≥ τ,   (12)

and m is the bio-fouling rate (1/sec).

¹ Equivalently, if the alarm threshold is increased to maintain a low false alarm rate, the rate of proper detections is decreased.
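The clean-sensor conditional statistics (9)-(10) and the suppression factor (12) are both small closed-form computations. A sketch under the notation above (function names are ours):

```python
import numpy as np

def clean_conditional(alpha, mu_s, mu_a, var_s, var_a, cov_sa):
    """Conditional mean eta and variance rho^2 of clean md salinity
    given the mixing coefficient alpha, eqs. (9)-(10)."""
    eta = mu_s + (cov_sa / var_a) * (alpha - mu_a)
    rho2 = var_s - cov_sa**2 / var_a
    return eta, rho2

def suppression(n, m, tau):
    """Linear bio-fouling suppression factor g(n), eq. (12):
    1 before the onset tau, then a downtrend with slope m."""
    n = np.asarray(n, dtype=float)
    return np.where(n < tau, 1.0, 1.0 - m * (n - tau))
```

Note that when the salinity-alpha covariance is zero, the regression (9) collapses to the unconditional mean and variance, as it should.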
Using this suppression factor g(n) of (12), the probability of the salinity measurement x conditioned on temperature is

p(x_n | α_n, m, τ, f) = N( g(n) η_n, g²(n) ρ² )   (13)

Note that since the temperature sensor is not susceptible to bio-fouling, we need not consider the case of both sensors degrading at the same time. The discriminant function in (7) depends on the parameters of the clean model, (9) and (10), which are estimated from historical data. It also depends on the slope parameter θ = m of the fouling model and the onset time τ, which are fit on-line as per (7). Applying our Gaussian models (8) and (13) to (7) gives

h = max_{τ,m} Σ_{n=τ}^{N} [ ln( 1 / (1 − m(n − τ)) ) + (x_n − η_n)² / (2ρ²) − (x_n − (1 − m(n − τ)) η_n)² / (2 (1 − m(n − τ))² ρ²) ]   (14)

When h is above our chosen threshold, the detector signals a bio-fouled sensor. The threshold λ is set to provide a maximum false alarm rate on historical data.

3.2 Model Fitting

We find maximum likelihood estimates for µ and Σ from clean archival time series data. For y_n = [s_n, α_n]^T and N training values, the mean is given by µ = (1/N) Σ_n y_n and the covariance matrix by Σ = (1/N) Σ_n (y_n − µ)(y_n − µ)^T. All other classifier parameter values, such as µ_s or E[s | α], can be extracted or calculated from µ and Σ. At each time step N, we determine the maximum likelihood estimates of the onset time τ and the bio-fouling rate m from the data under test. We find the maximum likelihood estimate of the bio-fouling rate m, for some onset time τ, by setting the first derivative of (14) with respect to m equal to zero. This operation yields the relation

m Σ_{k=τ+1}^{N} ((k − τ)² / ω_k²) η_k² = Σ_{k=τ+1}^{N} ((k − τ) / ω_k) [ (x_k − η_k) η_k / ω_k − ρ² + (x_k − ω_k η_k)² / ω_k² ]   (15)

where ω_k = 1 − m(k − τ) and N is the current time. Note that m appears both on the left-hand side of (15) and in the definition of ω, so we do not have a closed-form solution for m. However, the ω values act as weights that increase the importance of the most recent measurements.
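For a fixed slope m and onset τ, the discriminant (14) is a direct sum over the window. A sketch (NumPy; `discriminant` is our name, and the maximization over m and τ described in the text would wrap around it):

```python
import numpy as np

def discriminant(x, eta, rho2, m, tau):
    """Log-likelihood ratio h of eq. (14) for a fixed slope m and onset tau.

    x    : measured md salinity sequence, indices 0..N-1
    eta  : clean conditional means from eq. (9), same length as x
    rho2 : clean conditional variance from eq. (10)
    """
    N = len(x)
    n = np.arange(tau, N)
    g = 1.0 - m * (n - tau)          # suppression factor over the window
    xs, es = x[tau:], eta[tau:]
    return np.sum(np.log(1.0 / g)
                  + (xs - es) ** 2 / (2.0 * rho2)
                  - (xs - g * es) ** 2 / (2.0 * g ** 2 * rho2))
```

As a sanity check, m = 0 makes g ≡ 1 and every term of the sum vanishes, giving h = 0, which matches the construction in (5) that θ = 0 reduces the fouled model to the clean one; a genuinely suppressed sequence x_n = g(n) η_n yields h > 0.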
This weighting accounts for the expected decrease in measurement variance as bio-fouling progresses. To estimate m we take an iterative approach. First, initialize m to its minimum mean-squared error value, given by

m^(0) = − [ Σ_{k=τ+1}^{N} (k − τ)(x_k − η_k) η_k ] / [ Σ_{k=τ+1}^{N} (k − τ)² η_k² ]   (16)

Second, repeatedly solve (15) for m^(i), with ω calculated using the previous value m^(i−1). The estimated rate stops changing when h reaches a maximum. If we set the window length N − k to maximize the log-likelihood ratio h, the best estimate of the onset time is τ. To determine the onset time estimate τ, we search over all past time for the value of k that maximizes h in (14). For each possible window length, that is, k = 3, . . . , N, we determine the maximum likelihood estimate of m and then calculate the corresponding discriminant h. The estimated onset time τ is the window length N − k that gives the largest value of h. If this h is above our threshold, the current measurement is classified as bio-fouled.

4 On-line Bio-fouling Detectors

To see how well our classifiers work in practice, we implemented versions that operate on real-time salinity and temperature measurements. For all four instances of sensor degradation (three bio-fouling incidents and one instrument failure that mimicked bio-fouling) that occurred in the summer 2001 test period, our classifiers correctly indicated a sensor problem before the field staff was aware of it. In addition, the real-time classifiers produced no false alarms during the summer test period. A more in-depth discussion of the detector suite is given by Archer et al. in [4].

Figure 3: Bio-fouling indicators at (a) Red26 and (b) Tansy Point, plotted over September 5 through October 10. Top plots show maximum diurnal salinity.
Dotted lines indicate the historical no-false-alarm level (lower) and the 10% false alarm rate (upper). Field staff schedule sensors for cleaning when the maximum salinity drops "too low", roughly the no-false-alarm level. Bottom plots show the sequential likelihood discriminant for forty days of salinity and temperature measurements. Dotted lines indicate the historical no-false-alarm level (upper) and the 10% false alarm rate (lower). The × indicates the estimated bio-fouling onset time. The on-line monitor displays a bio-fouling indicator for the previous forty days of data. Figure 3 shows the on-line bio-fouling monitor during incidents at the Red26 CT1448 sensor and the Tansy Point CT1462 sensor. Since we had another sensor mounted at the Red26 site that did not bio-foul (Figure 2), we were able to estimate the bio-fouling time as September 28th. Our detector discriminant passed the no-false-alarm threshold five days after onset and roughly three days before the field staff decided the instrument needed cleaning. This reduction in time to detection corresponds to a reduced data loss of over 30%. In addition, the onset time estimate of September 29th was within a day of the true onset time. The Tansy Point CT1462 sensor began to bio-foul a few days after the Red26 CT1448 sensor. Our detector indicated that the Tansy Point sensor was bio-fouling on October 9th. Since the neighboring Red26 sensor was being replaced on October 11th, the field staff decided to retrieve the Tansy Point sensor as well. On removal, this sensor was found to be in the early stages of bio-fouling. In this case, indications from our classifier permitted the sensor to be replaced before the field staff would normally have scheduled it for retrieval. Experience with our on-line bio-fouling indicators demonstrates that these methods substantially reduce the time from bio-fouling onset to detection.
In addition to the events described above, we have fairly extensive experience with the on-line detectors since their initial deployment in the spring of 2001. At this writing we have bio-fouling detectors at all observing stations in the estuary and experience with events throughout the year. Near the end of October 2001 we experienced a false alarm in a sensor near the surface in the lower estuary. In this case, a steady downward trend in surface salinity, caused by several days of rain, triggered a detector response. Following cessation of the precipitation, the discriminant function h returned to sub-threshold levels. In a recent (February 2003) study of five sensor stations in the estuary, we compared data loss prior to the deployment of the bio-fouling detectors with data loss post-deployment. The pre-deployment period included approximately four years of data, from 1997 through the summer of 2001. The post-deployment period ran from the spring/summer of 2001 through February 2003. Neglecting seasonal variation, prior to the deployment of our detectors 68% of all the sensor data was corrupted by bio-fouling. Following deployment, the rate of data loss due to bio-fouling dropped to 35%. This is the actual data loss, and includes the delay in responding to event detection. Were it economical to replace the sensors immediately upon detection of bio-fouling, the data loss rate would have dropped further, to 17%. Even with the delay in responding to event detection, the detectors have more than doubled the amount of reliable data collected from the estuary. 5 Discussion CORIE salinity sensors lose several months of data every year due to sensor bio-fouling. Developing discriminatory fault detectors for these sensors is hampered by the variability of the bio-fouling time-signature and the dearth of bio-fouling onset example data for training. To solve this problem, we built parameterized novelty detectors.
Clean sensor models were developed from archival data, while bio-fouled sensor models are given a simple parametric form that is fit on-line. On-line bio-fouling detectors deployed during the summer of 2001 detected all episodes of sensor degradation several days before the field staff did, without generating any false alarms. The expanded installation of a suite of detectors throughout the estuary continues to successfully detect bio-fouling with minimal false alarm intrusion. The detector deployment has effectively doubled the amount of clean data available from the estuary salinity sensors. Acknowledgements We thank members of the CORIE team, Arun Chawla and Charles Seaton, for their help in acquiring appropriate sensor data, Michael Wilkin for his assistance in labeling the sensor data, and Haiming Zheng for carrying forward the sensor development and deployment and providing the comparison of data loss rates before and after the detector deployment. This work was supported by the National Science Foundation under grants ECS-9976452 and CCR-0082736. References [1] A. Baptista, M. Wilkin, P. Pearson, P. Turner, C. McCandlish, and P. Barrett. Coastal and estuarine forecast systems: A multipurpose infrastructure for the Columbia River. Earth System Monitor, 9(3), 1999. [2] U.S. Army Corps of Engineers. Biological assessment - Columbia River channel improvements project. Technical report, USACE Portland District, December 2001. [3] M. Basseville. Detecting changes in signals and systems - a survey. Automatica, 24(3):309–326, 1988. [4] C. Archer, A. Baptista, and T. K. Leen. Fault detection for salinity sensors in the Columbia River Estuary. Water Resources Research, 39, 2003.
2003
Sensory Modality Segregation Virginia R. de Sa Department of Cognitive Science University of California, San Diego La Jolla, CA 92093-0515 desa@ucsd.edu Abstract Why are sensory modalities segregated the way they are? In this paper we show that sensory modalities are well designed for self-supervised cross-modal learning. Using the Minimizing-Disagreement algorithm on an unsupervised speech categorization task with visual (moving lips) and auditory (sound signal) inputs, we show that very informative auditory dimensions actually harm performance when moved to the visual side of the network. It is better to throw them away than to consider them part of the "visual input". We explain this finding in terms of the statistical structure in sensory inputs. 1 Introduction In previous work [1, 2] we developed a simple neural network algorithm that learned categories from co-occurrences of patterns across different sensory modalities. Using only the co-occurring patterns of lip motion and acoustic signal, the network learned separate visual and auditory networks (subnets) to distinguish 5 consonant-vowel utterances. It performed almost as well as the corresponding supervised algorithm, where the utterance label is given, on the same data, and significantly better than a strategy of separate unsupervised clustering in each modality followed by clustering of these clusters (this strategy is used to initialize our algorithm). In this paper we show that the success of this biologically motivated algorithm depends crucially on the statistics of features derived from different sensory modalities. We do this by examining the performance when the two "network-modalities", or pseudo-modalities, are made up of inputs from the different sensory modalities.
The Minimizing-Disagreement Algorithm The Minimizing-Disagreement (M-D) algorithm is designed to allow two (or more) modalities (or subnets) to simultaneously train each other by finding a local minimum of the number of times the individual modalities disagree on their classification decision (see Figure 1). The modalities are essentially trained by running Kohonen's LVQ2.1 algorithm [3], but with the target class set by the output of the subnet of the other modality (receiving a co-occurring pattern) rather than by a supervisory external signal. The steps of the algorithm are as follows.

[Figure 1: The network for the Minimizing-Disagreement algorithm. An auditory subnet (Modality/Network 1) and a visual subnet (Modality/Network 2), each with its own hidden units, project to "class" units in a multi-sensory object area; the class picked by the auditory input is fed back as a label for the visual input, and vice versa. The weights from the hidden units to the output units determine the "labels" of the hidden units. These weights are updated throughout training to allow hidden units to change classes if needed. During training each modality creates an output label for the other. After training, each modality subnet is tested separately.]

1. Initialize hidden unit weight vectors in each modality (unsupervised clustering)
2. Initialize hidden unit labels using unsupervised clustering of the activity patterns across the hidden units from both modalities
3. Repeat for each presentation of input patterns X1(n) and X2(n) to their respective modalities
• For each modality, find the two nearest hidden unit weight vectors to the respective input pattern
• Find the hypothesized output class in each modality (as given by the label of the hidden unit with the closest weight vector). The label of a hidden unit is the output unit to which it projects most strongly.
• For each modality, update the hidden unit weight vectors according to the LVQ2.1 rule (only the rules for modality 1 are given below). Updates are performed only if the current pattern X1(n) falls within c(n) of the border between two hidden units of different classes (one of them agreeing with the output from the other modality). In this case

\vec{w}_{1i^*}(n) = \vec{w}_{1i^*}(n-1) + \varepsilon(n)\,\frac{X_1(n) - \vec{w}_{1i^*}(n-1)}{\|X_1(n) - \vec{w}_{1i^*}(n-1)\|}

\vec{w}_{1j^*}(n) = \vec{w}_{1j^*}(n-1) - \varepsilon(n)\,\frac{X_1(n) - \vec{w}_{1j^*}(n-1)}{\|X_1(n) - \vec{w}_{1j^*}(n-1)\|}

where \vec{w}_{1i^*} is the weight vector of the hidden unit with the same label, and \vec{w}_{1j^*} is the weight vector of the hidden unit with another label.
• Update the labeling weights using Hebbian learning between the winning hidden unit and the output of the other modality. In order to discourage runaway to one of the trivial global minima of disagreement, where both modalities only ever output one class, weights to the output class neurons are renormalized at each step. This normalization means that the algorithm is not modifying the output weights to minimize the disagreement but is instead clustering the hidden unit representation using the output class given by the other modality. This objective is better for these weights, as it balances the goal of agreement with the desire to avoid the trivial solution of all hidden units having the same label.

[Figure 2: An example Auditory and Visual pattern vector. The figure shows which dimensions went into each of Ax, Ay, Vx, and Vy: the auditory vector (time × frequency channels) is split into Ax and Ay, and the visual vector (time × image areas of motion magnitude) into Vx and Vy.]

2 Experiments 2.1 Creation of Sub-Modalities The original auditory and visual data were collected using an 8mm camcorder and a directional microphone. The speaker spoke 118 repetitions of /ba/, /va/, /da/, /ga/, and /wa/. The first 98 samples of each utterance class formed the training set and the remaining 20 the test set. The auditory feature vector was encoded using a 24-channel mel code1 over 20 msec windows overlapped by 10 msec.
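The LVQ2.1-style weight update above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: `md_lvq_update` is our own name, and the window test on c(n) and the label bookkeeping are assumed to happen outside the function.

```python
import numpy as np

def md_lvq_update(w_same, w_other, x, eps):
    """One LVQ2.1-style M-D step for a pattern x inside the window:
    attract the hidden unit whose label agrees with the other modality's
    output (w_same), repel the nearest disagreeing unit (w_other).
    Steps are normalized by the distance to x, as in the update rule."""
    d_s = x - w_same
    d_o = x - w_other
    return (w_same + eps * d_s / np.linalg.norm(d_s),
            w_other - eps * d_o / np.linalg.norm(d_o))
```

After the step, the agreeing unit has moved toward the input and the disagreeing unit away from it, shifting the decision border toward the label supplied by the other modality.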
This is a coarse short-time frequency encoding, which crudely approximates peripheral auditory processing. Each feature vector was linearly scaled so that all dimensions lie in the range [-1,1]. The final auditory code is a (24 channels × 9 windows) = 216-dimensional vector for each utterance. An example auditory feature vector is shown in Figure 2 (bottom). The visual data were processed using software designed and written by Ramprasad Polana [4]. Visual frames were digitized as 64 × 64 8-bit gray-level images using the Datacube MaxVideo system. Segments were taken as 6 frames before the acoustically determined utterance offset and 4 after. The normal flow was computed using differential techniques between successive frames. Each pair of frames was then averaged, these averaged frames were divided into 25 equal areas (5 × 5), and the motion magnitudes were averaged within each area. The final visual feature vector of dimension (5 frames × 25 areas) = 125 was linearly normalized as for the auditory vectors. An example visual feature vector is shown in Figure 2 (top). The original auditory and visual feature vectors were each divided into two parts (called Ax, Ay and Vx, Vy as shown in Figure 2). The partition was arbitrarily determined as a compromise between wanting a similar number of dimensions and similar information content in each part. (We did not search over partitions; the experiments below were performed only for this partition.) Our goal is to combine them in different ways and observe the performance of the minimizing-disagreement algorithm. We first benchmarked the divided "sub-modalities" to see how useful they were for the task. For this, we ran a supervised algorithm on each subset. The performance measurements are shown in Table 1.

(Footnote 1: linear spacing below 1000 Hz and logarithmic above 1000 Hz.)

Table 1: Supervised performance of each of the sub-modalities.

  Sub-Modality   Supervised Performance
  Ax             89 ± 2
  Ay             91 ± 2
  Vx             83 ± 2
  Vy             77 ± 3
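The 5 × 5 spatial pooling of motion magnitudes described above can be sketched as follows. `pool_motion` is a hypothetical helper name, and we assume the flow magnitudes arrive as a square array whose sides divide evenly by the grid size.

```python
import numpy as np

def pool_motion(frame_flow, grid=5):
    """Average motion magnitudes of a square flow-magnitude image over a
    grid x grid set of equal areas (a sketch of the 5x5 pooling used for
    the visual features; reshape groups pixels into blocks, mean pools)."""
    h, w = frame_flow.shape
    assert h % grid == 0 and w % grid == 0
    return frame_flow.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
```

Applied to the 5 averaged frame pairs, this yields the 5 × 25 = 125 visual dimensions before normalization.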
All numbers give percent correct classifications on independent test sets ± standard deviations. 2.2 Creation of Pseudo-Modalities Pseudo-modalities were created by combining all combinations (of three or fewer) of Ax, Ay, Vx and Vy; thus Ax+Vx+Vy (Ax+V) would be a pseudo-modality. The idea is to test all possible combinations of pseudo-modalities and compare the resulting performance of the final individual subnets with what a supervised algorithm could do with the same dimensions. 2.3 Pseudo-Modality Experiments In order to allow fair comparison, appropriate parameters were found for each modality division. The data were divided into 75% training and 25% test data. Optimal parameters were selected by observing performance on the training data, and performance is reported on the test data. The results for all possible divisions are presented in Figure 3. Each network has the following key. The light gray bar and number represent the test-set performance of the pseudo-modality consisting of the sub-modalities listed below it. The darker bar and number represent the test-set performance of the other pseudo-modality. The black outlines (and numbers above the outlines) give the performance of the corresponding supervised algorithm (LVQ2.1) with the same data. Thus, the empty area between the shaded area and the black outline represents the loss from lack of supervision. Looking at the figure, one can make several comparisons. For each sub-modality, we can ask: To get the best performance of a subnet using those dimensions, where should one put the other sub-modalities in an M-D network? For instance, to answer that question for Ax, one would compare the performance of the Ax subnet in the Ax/Ay+V network with that of the Ax+Ay subnet in the Ax+Ay/Vx+Vy network, with that of the Ax+Vx+Vy subnet in the Ax+Vx+Vy/Ay network, etc. The subnet containing Ax that performs the best is the Ax+Ay subnet (trained with co-modality Vx+Vy).
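The distinct two-sided divisions of {Ax, Ay, Vx, Vy} tested in Figure 3 can be enumerated mechanically. A sketch with our own function name, counting each unordered split once:

```python
from itertools import combinations

PARTS = ("Ax", "Ay", "Vx", "Vy")

def pseudo_modality_splits():
    """Enumerate the divisions of the four sub-modalities into two
    non-empty pseudo-modalities (each side gets 1-3 parts); each
    unordered split is counted once."""
    splits = []
    for r in (1, 2):
        for side1 in combinations(PARTS, r):
            side2 = tuple(p for p in PARTS if p not in side1)
            if r == 2 and side1 > side2:
                continue  # skip the mirror image of a 2-2 split
            splits.append((side1, side2))
    return splits
```

This yields four 1-vs-3 splits plus three 2-vs-2 splits, the seven divisions shown in the figure.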
In fact, it turns out that for each sub-modality, the architecture giving optimal post-training performance of the subnet containing that sub-modality is to put the dimensions from the same "real" modality on the same side and those from the other modality on the other side. This raises the question: Is performance better for the Ax+Ay/Vx+Vy network than for the Ax/Ay+Vx+Vy network because the benefit of having Ay with Ax is greater than that of having Ay with Vx and Vy (in other words, are there some higher-order relationships between dimensions in Ax and those in Ay that require both sets of dimensions to be learned by the same subnet), OR is it actually harmful to have Ay on the opposite side from Ax? We can answer this question by comparing the performance of the Ax/Ay+Vx+Vy network with that of the Ax/Vx+Vy network, as shown in Figure 4. For that particular division, the results are not significantly different (even though we have removed the most useful dimensions), but for all the other divisions, performance is improved when dimensions are removed so that only dimensions from one "real" sensory modality are on one side. For example, the two graphs in the second column show that it is actually harmful to include the very useful Ax dimensions on the visual side of the network – we do better when we throw them away. Note that this is true even though a supervised network with Ax+Vx+Vy does much better than a supervised network with Vx+Vy – this is not a simple feature selection result.

[Figure 3: Self-supervised and supervised performances for the various pseudo-modality divisions. Standard errors for the self-supervised performance means are ±1; those for the supervised performances are ±.5.]

2.4 Correlational structure is important Why do we get these results?
The answer is that the results are very dependent on the statistical structure between dimensions within and between different sensory modalities. Consider a simpler system of two 1-dimensional modalities and two classes of objects. Assume that the sensation detected by each modality has a probability density given by a Gaussian with a different mean for each class. The densities seen by each modality are shown in Figure 5. In part A) of the figure, the joint density for the stimuli to both modalities is shown for the case of conditionally uncorrelated stimuli (within each class, the inputs are uncorrelated). Parts C) and D) show the changing joint density as the sensations to the two modalities become more correlated within each class. Notice that the density changes from a "two blob" structure to more of a "ridge" structure. As it does this, the projection of the joint density gives less indication of the underlying bimodal structure, and the local minimum of the Minimizing-Disagreement energy function gets shallower and narrower. This means that the M-D algorithm would be less likely to find the correct boundary. A more intuitive explanation is shown in Figure 6. In the figure, imagine that there are two classes of objects, with densities given by the thick curve and the thin curve, and that this marginal density is the same in each one-dimensional modality. The line drawing below the densities shows two possible scenarios for how the "modalities" may be related. In the top case, the modalities are conditionally independent. Given that a "thick" object is present, the particular pattern to each modality is independent. The lines represent a possible sampling of data (where points are joined if they co-occurred). The minimizing-disagreement algorithm wants to find a line from top to bottom that crosses the fewest lines – within the pattern space, disagreement is minimized for the dashed line shown.
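The two-class, two-modality toy setting of Figure 5 is easy to simulate. A minimal sketch, assuming unit-variance Gaussians and our own function name, in which ρ = 0 gives the conditionally independent case and larger ρ gives the "ridge" structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_two_modalities(n, rho, class_means=(-1.0, 1.0)):
    """Draw n co-occurring sensation pairs per class from two 1-D
    'modalities'. Within each class the pair is bivariate normal with
    correlation rho, so the marginals seen by each modality are the
    same for every rho; only the joint density changes."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return np.vstack([rng.multivariate_normal([m, m], cov, size=n)
                      for m in class_means])
```

Varying ρ while holding the marginals fixed reproduces the progression from the "two blob" joint density to the ridge.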
[Figure 4: This figure shows the benefits of having a pseudo-modality composed of dimensions from only ONE real modality (even if this means throwing away useful dimensions). Standard errors for the self-supervised performance means are ±1; those for the supervised performances are ±.5.]

[Figure 5: Different joint densities with the same marginal densities.]

[Figure 6: Lines are joined between co-occurring patterns in two imaginary 1-D modalities (as shown at top). The M-D algorithm wants to find a partition that crosses the fewest lines.]

[Figure 7: Statistical structure of our data: conditional information I(X;Y|Class) (with diagonal zeroed) and within-class correlation coefficients (averaged over each class).]

In the bottom case, the modalities are strongly dependent. In this case there are many local minima of disagreement that are not closely related to the class boundary. It is easy for the networks to minimize the disagreement between the outputs of the modalities without paying attention to the class. Having two very strongly dependent variables, one on each side of the network, means that the network can minimize disagreement by simply listening to those units. To verify that our auditory-visual results were due to statistical differences between the dimensions, we examined the statistical structure of our data.
It turns out that, within a class, the correlation coefficient between most pairs of dimensions is fairly low. However, correlations are high for related auditory features (similar time and frequency band) and also for related visual features. This is shown in Figure 7. We also computed the conditional mutual information between each pair of features given the class, I(X;Y|Class). This is also shown in Figure 7. This value is 0 if and only if the two features are conditionally independent given the class. The graphs show that many of the auditory dimensions are highly dependent on each other (even given the class), as are many of the visual dimensions. This makes them unsuitable for serving on the other side of an M-D network. 2.5 Discussion The minimizing-disagreement algorithm was initially developed as a model of self-supervised cortical learning, and the importance of conditionally uncorrelated structure was mentioned in [5]. Since then, people have used similar partly-supervised algorithms to deal with limited labeled data in machine learning problems [6, 7]. They have also emphasized the importance of conditional independence between the two sides of the input. However, in co-training-style algorithms, inputs that are conditionally dependent are not helpful, but they are also not as harmful. Because the self-supervised algorithm depends on the class structure being evident in the joint space as its only source of supervision, it is very sensitive to conditionally dependent relationships between the modalities. We have shown that different sensory modalities are ideally suited for teaching each other. Sensory modalities are also composed of submodalities (e.g. color and motion for the visual modality), which are also likely to be conditionally independent (and indeed may be actively kept so [8, 9, 10]). We suggest that brain connectivity may be constrained not only by volume limits, but because limiting connectivity may be beneficial for learning.
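The conditional mutual information I(X;Y|Class) used above can be estimated from samples with a simple plug-in histogram estimator. This is a sketch under our own choices of function name and bin count (the paper does not specify its estimator):

```python
import numpy as np

def cond_mutual_info(x, y, c, bins=8):
    """Plug-in estimate of I(X;Y|Class) in nats: histogram X and Y within
    each class, compute the per-class mutual information, and average
    weighted by class frequency. Near zero when X and Y are
    conditionally independent given the class."""
    x, y, c = np.asarray(x), np.asarray(y), np.asarray(c)
    total = 0.0
    for cls in np.unique(c):
        xs, ys = x[c == cls], y[c == cls]
        pxy, _, _ = np.histogram2d(xs, ys, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal over y
        py = pxy.sum(axis=0, keepdims=True)   # marginal over x
        nz = pxy > 0
        total += (len(xs) / len(x)) * np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    return total
```

Like any plug-in estimator it is biased upward for small samples, so in practice the diagonal and near-zero entries are interpreted loosely.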
Acknowledgements A preliminary version of this work appeared as a book chapter [5] in Psychology of Learning and Motivation. This work is supported by NSF CAREER grant 0133996. References [1] Virginia R. de Sa. Learning classification with unlabeled data. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 112–119. Morgan Kaufmann, 1994. [2] Virginia R. de Sa and Dana H. Ballard. Category learning through multimodality sensing. Neural Computation, 10(5):1097–1117, 1998. [3] Teuvo Kohonen. Improved versions of learning vector quantization. In IJCNN International Joint Conference on Neural Networks, volume 1, pages I–545–I–550, 1990. [4] Ramprasad Polana. Temporal Texture and Activity Recognition. PhD thesis, Department of Computer Science, University of Rochester, 1994. [5] Virginia R. de Sa and Dana H. Ballard. Perceptual learning from cross-modal feedback. In R. L. Goldstone, P. G. Schyns, and D. L. Medin, editors, Psychology of Learning and Motivation, volume 36, pages 309–351. Academic Press, San Diego, CA, 1997. [6] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory (COLT-98), pages 92–100, 1998. [7] Ion Muslea, Steve Minton, and Craig Knoblock. Active + semi-supervised learning = robust multi-view learning. In Proceedings of the 19th International Conference on Machine Learning (ICML 2002), pages 435–442, 2002. [8] C. McCollough. Color adaptation of edge-detectors in the human visual system. Science, 149:1115–1116, 1965. [9] P. C. Dodwell and G. K. Humphrey. A functional theory of the McCollough effect. Psychological Review, 1990. [10] F. H. Durgin and D. R. Proffitt. Combining recalibration and learning accounts of contingent aftereffects. In Proceedings of the Annual Meeting of the Psychonomic Society.
Nonstationary Covariance Functions for Gaussian Process Regression Christopher J. Paciorek and Mark J. Schervish Department of Statistics Carnegie Mellon University Pittsburgh, PA 15213 paciorek@alumni.cmu.edu, mark@stat.cmu.edu Abstract We introduce a class of nonstationary covariance functions for Gaussian process (GP) regression. Nonstationary covariance functions allow the model to adapt to functions whose smoothness varies with the inputs. The class includes a nonstationary version of the stationary Matérn covariance, in which the differentiability of the regression function is controlled by a parameter, freeing one from fixing the differentiability in advance. In experiments, the nonstationary GP regression model performs well when the input space is two- or three-dimensional, outperforming a neural network model and Bayesian free-knot spline models, and is competitive with a Bayesian neural network, but it is outperformed in one dimension by a state-of-the-art Bayesian free-knot spline model. The model readily generalizes to non-Gaussian data. Use of computational methods for speeding GP fitting may allow implementation of the method on larger datasets. 1 Introduction Gaussian processes (GPs) have been used successfully for regression and classification tasks. Standard GP models use a stationary covariance, in which the covariance between any two points is a function of Euclidean distance. However, stationary GPs fail to adapt to variable smoothness in the function of interest [1, 2]. This is of particular importance in geophysical and other spatial datasets, in which domain knowledge suggests that the function may vary more quickly in some parts of the input space than in others. For example, in mountainous areas, environmental variables are likely to be much less smooth than in flat regions. Spatial statistics researchers have made some progress in defining nonstationary covariance structures for kriging, a form of GP regression.
We extend the nonstationary covariance structure of [3], of which [1] gives a special case, to a class of nonstationary covariance functions. The class includes a Matérn form, which, in contrast to most covariance functions, has the added flexibility of a parameter that controls the differentiability of sample functions drawn from the GP distribution. We use the nonstationary covariance structure for one-, two-, and three-dimensional input spaces in a standard GP regression model, as done previously only for one-dimensional input spaces [1]. The problem of variable smoothness has been attacked in spatial statistics by mapping the original input space to a new space in which stationarity is assumed, but research has focused on multiple noisy replicates of the regression function, with no development or assessment of the method in the standard regression setting [4, 5]. The issue has been addressed in regression spline models by choosing the knot locations during the fitting [6] and in smoothing splines by choosing an adaptive penalizer on the integrated squared derivative [7]. The general approach in spline and other models involves learning the underlying basis functions, either explicitly or implicitly, rather than fixing the functions in advance. One alternative to a nonstationary GP model is a mixture of stationary GPs [8, 9]. Such methods adapt to variable smoothness by using different stationary GPs in different parts of the input space. The main difficulty is that the class membership is a function of the inputs; this introduces additional unknown functions into the hierarchy of the model. One possibility is to use stationary GPs for these additional unknown functions [8], while [9] reduce computational complexity by using a local estimate of the class membership, but it is not known whether the resulting model is well-defined probabilistically. While the mixture approach is intriguing, neither [8] nor [9] compare their model to other methods.
In our model, there are unknown functions in the hierarchy of the model that determine the nonstationary covariance structure. We choose to fully model these functions as Gaussian processes themselves, but recognize the computational cost and suggest that simpler representations are worth investigating. 2 Covariance functions and sample function differentiability The covariance function is crucial in GP regression because it controls how much the data are smoothed in estimating the unknown function. GP distributions are distributions over functions; the covariance function determines the properties of sample functions drawn from the distribution. The stochastic process literature gives conditions for determining sample function properties of GPs based on the covariance function of the process, summarized in [10] for several common covariance functions. Stationary, isotropic covariance functions are functions only of Euclidean distance, τ. Of particular note, the squared exponential (also called the Gaussian) covariance function,

C(\tau) = \sigma^2 \exp\left(-(\tau/\kappa)^2\right),

where \sigma^2 is the variance and \kappa is a correlation scale parameter, has sample functions with infinitely many derivatives. In contrast, spline regression models have sample functions that are typically only twice differentiable. In addition to being of theoretical concern from an asymptotic perspective [11], other covariance forms might better fit real data, for which it is unlikely that the unknown function is so highly differentiable. In spatial statistics, the exponential covariance,

C(\tau) = \sigma^2 \exp\left(-\tau/\kappa\right),

is commonly used, but this form gives sample functions that, while continuous, are not differentiable. Recent work in spatial statistics has focused on the Matérn form,

C(\tau) = \sigma^2 \frac{1}{\Gamma(\nu)\, 2^{\nu-1}} \left(2\sqrt{\nu}\,\tau/\kappa\right)^{\nu} K_{\nu}\!\left(2\sqrt{\nu}\,\tau/\kappa\right),

where K_{\nu}(\cdot) is the modified Bessel function of the second kind, whose order is the differentiability parameter, \nu > 0.
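The Matérn form above can be evaluated with SciPy's modified Bessel function. A sketch (function name ours); C(0) = σ² is handled explicitly, since the Bessel expression is indeterminate at τ = 0:

```python
import numpy as np
from scipy.special import gamma, kv

def matern(tau, sigma2=1.0, kappa=1.0, nu=1.5):
    """Stationary Matern covariance in the paper's parameterization:
    C(tau) = sigma2 * (2 sqrt(nu) tau / kappa)^nu * K_nu(2 sqrt(nu) tau / kappa)
             / (Gamma(nu) * 2^(nu - 1)),
    with the tau = 0 limit C(0) = sigma2 filled in directly."""
    tau = np.asarray(tau, dtype=float)
    arg = 2.0 * np.sqrt(nu) * tau / kappa
    out = np.full_like(tau, sigma2)
    pos = arg > 0
    out[pos] = sigma2 * arg[pos] ** nu * kv(nu, arg[pos]) / (gamma(nu) * 2 ** (nu - 1))
    return out
```

For ν = 0.5 this reduces to an exponential in τ (σ² exp(−√2 τ/κ) in this scale convention), and as ν → ∞ it approaches the squared exponential, matching the limits discussed in the text.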
This form has the desirable property that sample functions are ⌈ν⌉ − 1 times differentiable. As ν → ∞, the Matérn approaches the squared exponential form, while for ν = 0.5, the Matérn takes the exponential form. Standard covariance functions require one to place all of one's prior probability on a particular degree of differentiability; use of the Matérn allows one to more accurately, yet easily, express prior lack of knowledge about sample function differentiability. One application for which this may be of particular interest is geophysical data. [12] suggest using the squared exponential covariance but with the anisotropic distance

\tau(x_i, x_j) = \sqrt{(x_i - x_j)^T \Delta^{-1} (x_i - x_j)},

where \Delta is an arbitrary positive definite matrix, rather than the standard diagonal matrix. This allows the GP model to more easily model interactions between the inputs. The nonstationary covariance function we introduce next builds on this more general form. 3 Nonstationary covariance functions One nonstationary covariance function, introduced by [3], is

C(x_i, x_j) = \int_{\Re^2} k_{x_i}(u)\, k_{x_j}(u)\, du,

where x_i, x_j, and u are locations in \Re^2, and k_x(\cdot) is a kernel function centered at x. One can show directly that C(x_i, x_j) is positive definite in \Re^p, p = 1, 2, \ldots [10]. For Gaussian kernels, the covariance takes the simple form

C^{NS}(x_i, x_j) = \sigma^2 |\Sigma_i|^{1/4} |\Sigma_j|^{1/4} \left| (\Sigma_i + \Sigma_j)/2 \right|^{-1/2} \exp(-Q_{ij}),   (1)

with quadratic form

Q_{ij} = (x_i - x_j)^T \left( (\Sigma_i + \Sigma_j)/2 \right)^{-1} (x_i - x_j),   (2)

where \Sigma_i, which we call the kernel matrix, is the covariance matrix of the Gaussian kernel at x_i. The form (1) is a squared exponential correlation function, but in place of a fixed matrix, \Delta, in the quadratic form, we average the kernel matrices for the two locations. The evolution of the kernel matrices in space produces nonstationary covariance, with kernels that drop off quickly producing locally short correlation scales. Independently, [1] derived a special case in which the kernel matrices are diagonal.
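Equations (1)-(2) are straightforward to evaluate for a single pair of inputs. A sketch (function name ours); when the two kernel matrices are equal, the determinant prefactor is 1 and the form reduces to the stationary anisotropic squared exponential:

```python
import numpy as np

def ns_sqexp(x1, x2, Sigma1, Sigma2, sigma2=1.0):
    """Nonstationary squared-exponential covariance, eqs. (1)-(2): the
    kernel matrices of the two locations are averaged inside the
    quadratic form, and the determinant prefactor keeps the resulting
    function positive definite."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    S = 0.5 * (Sigma1 + Sigma2)
    d = x1 - x2
    Q = d @ np.linalg.solve(S, d)                      # eq. (2)
    pref = (np.linalg.det(Sigma1) ** 0.25 * np.linalg.det(Sigma2) ** 0.25
            / np.sqrt(np.linalg.det(S)))
    return sigma2 * pref * np.exp(-Q)                  # eq. (1)
```

With kernel matrices that vary across the input space, the same code yields locally short correlation scales wherever the kernels drop off quickly.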
Unfortunately, so long as the kernel matrices vary smoothly in the input space, sample functions from GPs with the covariance (1) are infinitely differentiable [10], just as for the stationary squared exponential. To generalize (1) and introduce functions for which sample path differentiability varies, we extend (1) as proven in [10]:

Theorem 1 Let Q_{ij} be defined as in (2). If a stationary correlation function, R^S(\tau), is positive definite on \Re^p for every p = 1, 2, \ldots, then

R^{NS}(x_i, x_j) = |\Sigma_i|^{1/4} |\Sigma_j|^{1/4} \left| (\Sigma_i + \Sigma_j)/2 \right|^{-1/2} R^S\!\left(\sqrt{Q_{ij}}\right)   (3)

is a nonstationary correlation function, positive definite on \Re^p, p = 1, 2, \ldots.

One example of a nonstationary covariance function constructed in this way is a nonstationary version of the Matérn covariance,

C^{NS}(x_i, x_j) = \sigma^2 \frac{|\Sigma_i|^{1/4} |\Sigma_j|^{1/4}}{\Gamma(\nu)\, 2^{\nu-1}} \left| \frac{\Sigma_i + \Sigma_j}{2} \right|^{-1/2} \left( 2\sqrt{\nu Q_{ij}} \right)^{\nu} K_{\nu}\!\left( 2\sqrt{\nu Q_{ij}} \right).   (4)

Provided the kernel matrices vary smoothly in space, the sample function differentiability of the nonstationary form follows that of the stationary form, so for the nonstationary Matérn, the sample function differentiability increases with ν [10]. 4 Bayesian regression model and implementation Assume independent observations, Y_1, \ldots, Y_n, indexed by a vector of input or feature values, x_i \in \Re^P, with Y_i \sim N(f(x_i), \eta^2), where \eta^2 is the noise variance. Specify a Gaussian process prior, f(\cdot) \sim GP(\mu_f, C^{NS}_f(\cdot, \cdot)), where C^{NS}_f(\cdot, \cdot) is the nonstationary Matérn covariance function (4), constructed from a set of Gaussian kernels as described below. For the differentiability parameter, we use the prior \nu_f \sim U(0.5, 30), which varies between non-differentiability (0.5) and high differentiability. We use proper, but diffuse, priors for \mu_f, \sigma^2_f, and \eta^2. The main challenge is to parameterize the kernel matrices, since their evolution determines how quickly the covariance structure changes in the input space and the degree to which the model adapts to variable smoothness in the unknown function.
In many problems, it seems natural that the covariance structure would evolve smoothly; if so, the differentiability of the regression function will be determined by ν_f. We put a prior distribution on the kernel matrices as follows. Any location in the input space, x_i, has a Gaussian kernel with mean x_i and covariance (kernel) matrix, Σ_i. When the input space is one-dimensional, each kernel 'matrix' is just a scalar, the variance of the kernel, and we use a stationary Matérn GP prior on the log variance so that the variances evolve smoothly in the input space. Next consider multi-dimensional input spaces; since there are (implicitly) kernel matrices at each location in the input space, we have a multivariate process, the matrix-valued function, Σ(·). Parameterizing positive definite matrices as a function of the input space is a difficult problem; see [7]. We use the spectral decomposition of an individual covariance matrix, Σ_i,

\Sigma_i = \Gamma(\gamma_1(x_i), \ldots, \gamma_Q(x_i))\, D(\lambda_1(x_i), \ldots, \lambda_P(x_i))\, \Gamma(\gamma_1(x_i), \ldots, \gamma_Q(x_i))^T,   (5)

where D is a diagonal matrix of eigenvalues and Γ is an eigenvector matrix constructed as described below. The functions on the input space, λ_p(·), p = 1, \ldots, P, and γ_q(·), q = 1, \ldots, Q, construct Σ(·). We will refer to these as the eigenvalue and eigenvector processes, and to them collectively as the eigenprocesses. Let φ(·) ∈ {log(λ_1(·)), \ldots, log(λ_P(·)), γ_1(·), \ldots, γ_Q(·)} denote any one of these eigenprocesses. To have the kernel matrices vary smoothly, we ensure that their eigenvalues and eigenvectors vary smoothly by taking each φ(·) to have a GP prior with a single stationary, anisotropic Matérn correlation function, common to all the processes and described later. Using a shared correlation function gives us smoothly-varying kernels while limiting the number of parameters. We force the eigenprocesses to be very smooth by fixing ν = 30.
We do not let ν vary, because it should have minimal impact on the regression estimate and is not well-informed by the data. Parameterizing the eigenvectors of the kernel matrices using Givens angles, with each angle a function on \Re^P, the input space, is difficult, because the angle functions have range [0, 2π) ≡ S^1, which is not compatible with the range of a GP. To avoid this, we overparameterize the eigenvectors, using Q = P(P−1)/2 + P − 1 Gaussian processes, γ_q(·), that determine the directions of a set of orthogonal vectors. Here, we demonstrate the construction of the eigenvectors for x_i ∈ \Re^2 and x_i ∈ \Re^3; a similar approach, albeit with more parameters, applies to higher-dimensional spaces, but is probably infeasible in dimensions larger than five or so. In \Re^3, we construct an eigenvector matrix for an individual location as Γ = Γ_3 Γ_2, where

\Gamma_3 = \begin{pmatrix} a/l_{abc} & -b/l_{ab} & -ac/(l_{ab} l_{abc}) \\ b/l_{abc} & a/l_{ab} & -bc/(l_{ab} l_{abc}) \\ c/l_{abc} & 0 & l_{ab}/l_{abc} \end{pmatrix}, \qquad \Gamma_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & u/l_{uv} & -v/l_{uv} \\ 0 & v/l_{uv} & u/l_{uv} \end{pmatrix}.

The elements of Γ_3 are functions of three random variables, {A, B, C}, where l_{abc} = \sqrt{a^2 + b^2 + c^2} and l_{ab} = \sqrt{a^2 + b^2}; the elements of Γ_2 are based on two random variables, U and V, with l_{uv} = \sqrt{u^2 + v^2} defined analogously. (Γ_3)_{32} = 0 is a constraint that saves a degree of freedom for the two-dimensional subspace orthogonal to Γ_3. To have the matrices, Σ(·), vary smoothly in space, a, b, c, u, and v are the values of the processes γ_1(·), \ldots, γ_5(·) at the input of interest. One can integrate f, the function evaluated at the inputs, out of the GP model. In the stationary GP model, the marginal posterior contains a small number of hyperparameters to either optimize or sample via MCMC. In the nonstationary case, the presence of the additional GPs for the kernel matrices (5) precludes straightforward optimization, leaving MCMC. For each of the eigenprocesses, we reparameterize the vector, φ, of values of the process at the input locations, as φ = µ_φ + σ_φ L(∆(θ)) ω_φ, where ω_φ ∼ N(0, I) a priori and L is a matrix defined below.
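The eigenvector construction Γ = Γ₃Γ₂ can be checked numerically; the product should be orthogonal for any (a, b, c) with a² + b² > 0 and any (u, v) ≠ (0, 0). A sketch with our own function names:

```python
import numpy as np

def gamma3(a, b, c):
    """Gamma_3 from the text: the first column is (a, b, c) normalized,
    and the (3,2) entry is constrained to zero."""
    l_abc = np.sqrt(a * a + b * b + c * c)
    l_ab = np.sqrt(a * a + b * b)
    return np.array([
        [a / l_abc, -b / l_ab, -a * c / (l_ab * l_abc)],
        [b / l_abc,  a / l_ab, -b * c / (l_ab * l_abc)],
        [c / l_abc,  0.0,       l_ab / l_abc],
    ])

def gamma2(u, v):
    """Gamma_2: a rotation of the last two coordinates built from (u, v),
    normalized by l_uv = sqrt(u^2 + v^2)."""
    l_uv = np.sqrt(u * u + v * v)
    return np.array([
        [1.0, 0.0, 0.0],
        [0.0, u / l_uv, -v / l_uv],
        [0.0, v / l_uv,  u / l_uv],
    ])
```

A kernel matrix is then assembled as Σ = Γ D Γᵀ with D a diagonal matrix of positive eigenvalues, exactly as in (5).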
We sample µ_φ, σ_φ, and ω_φ via Metropolis-Hastings separately for each eigenprocess. The parameter vector θ, involving P correlation scale parameters and P(P−1)/2 Givens angles, is used to construct an anisotropic distance matrix, ∆(θ), shared by the φ vectors, creating a stationary, anisotropic correlation structure common to all the eigenprocesses. θ is also sampled via Metropolis-Hastings. L(∆(θ)) is a generalized Cholesky decomposition of the correlation matrix shared by the φ vectors that deals with numerically singular correlation matrices by setting the ith column of the matrix to all zeroes when φ_i is numerically a linear combination of φ_1, \ldots, φ_{i−1} [13]. One never calculates L(∆(θ))^{-1} or |L(∆(θ))|, which are not defined, and one does not need to introduce jitter, and therefore discontinuity in φ(·), into the covariance structure.

[Figure 1: On the left are the three test functions in one dimension, with one simulated set of observations (of the 50 used in the evaluation), while the right shows the test function with two inputs.]

5 Experiments For one-dimensional functions, we compare the nonstationary GP method to a stationary GP model1, two neural network implementations2, and Bayesian adaptive regression splines (BARS), a Bayesian free-knot spline model that has been very successful in comparisons in the statistical literature [6]. We use three test functions [6]: a smoothly-varying function, a spatially inhomogeneous function, and a function with a sharp jump (Figure 1a). For each, we generate 50 sets of noisy data and compare the models using the means, averaged over the 50 sets, of the standardized MSE,

\sum_i (\hat{f}_i - f_i)^2 \Big/ \sum_i (f_i - \bar{f})^2,

where \hat{f}_i is the posterior mean at x_i, and \bar{f} is the mean of the true values.
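The standardized MSE is a one-liner; a prediction equal to the true values scores 0, and predicting the constant mean f̄ scores 1 (function name ours):

```python
import numpy as np

def standardized_mse(f_hat, f_true):
    """Standardized MSE from Section 5: sum_i (fhat_i - f_i)^2 divided
    by sum_i (f_i - fbar)^2, where fbar is the mean of the true values."""
    f_hat = np.asarray(f_hat, dtype=float)
    f_true = np.asarray(f_true, dtype=float)
    return np.sum((f_hat - f_true) ** 2) / np.sum((f_true - f_true.mean()) ** 2)
```

Dividing by the variability of the true function makes the scores comparable across the three test functions.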
In the non-Bayesian neural network model, f̂_i is the fitted value and, as a simplification, we use a network with the optimal number of hidden units (3, 3, and 8 for the three functions), thereby giving an overly optimistic assessment of the performance. To avoid local minima, we used the network fit that minimized the MSE (relative to the data, with y_i in place of f_i in the expression for MSE) over five fits with different random seeds. For higher-dimensional inputs, we compare the nonstationary GP to the stationary GP, the neural network models, and two free-knot spline methods, Bayesian multivariate linear splines (BMLS) [14] and Bayesian multivariate automatic regression splines (BMARS) [15], a Bayesian version of MARS [16]. We choose to compare to neural networks and splines because they are popular and these particular implementations have the ability to adapt to variable smoothness. BMLS uses piecewise, continuous linear splines, while BMARS uses tensor products of univariate splines; both are fit via reversible jump MCMC.

¹We implement the stationary GP model by replacing C^NS_f(·,·) with the Matérn stationary correlation, still using a differentiability parameter, ν_f, that is allowed to vary.
²For a non-Bayesian model, we use the implementation in the statistical software R, which fits a multilayer perceptron with one hidden layer. For a Bayesian version, results from R. Neal's FBM software were kindly provided by A. Vehtari.

Table 1: Mean (over 50 data samples) and 95% confidence interval for standardized MSE for the five methods on the three test functions with one-dimensional input.

Method               Function 1             Function 2         Function 3
Stat. GP             .0083 (.0073,.0093)    .026 (.024,.029)   .071 (.067,.074)
Nonstat. GP          .0083 (.0073,.0093)    .015 (.013,.016)   .026 (.021,.030)
BARS                 .0081 (.0071,.0092)    .012 (.011,.013)   .0050 (.0043,.0056)
Bayes. neural net.   .0082 (.0072,.0093)    .011 (.010,.014)   .015 (.014,.016)
Neural network       .0108 (.0095,.012)     .013 (.012,.015)   .0095 (.0086,.010)
We use three datasets, the first a function with two inputs [14] (Figure 1b), for which we use 225 training inputs and test on 225 inputs, for each of 50 simulated datasets. The second is a real dataset of air temperature as a function of latitude and longitude [17] that allows assessment on a spatial dataset with distinct variable smoothness. We use a 109-observation subset of the original data, focusing on the Western hemisphere, 222.5°E–322.5°E and 62.5°S–82.5°N, and fit the models on 54 splits with 107 training examples and two test examples and one split with 108 training examples and one test example, thereby including each data point as a test point once. The third is a real dataset of 111 daily measurements of ozone [18] included in the S-plus statistical software. The goal is to predict the cube root of ozone based on three features: radiation, temperature, and wind speed. We do 55 splits with 109 training examples and two test examples and one split of 110 training examples and one test example. For the non-Bayesian neural network, 10, 50, and 3 hidden units were optimal for the three datasets, respectively.

Table 1 shows that the nonstationary GP does as well or better than the stationary GP, but that BARS does as well or better than the other methods on all three test functions with one input. Part of the difficulty for the nonstationary GP with the third function, which has the sharp jump, is that our parameterization forces smoothly-varying kernel matrices, which prevents our particular implementation from picking up sharp jumps. A potential improvement would be to parameterize kernel matrices that do not vary so smoothly. Table 2 shows that for the known function on two dimensions, the GP models outperform both the spline models and the non-Bayesian neural network, but not the Bayesian neural network. The stationary and nonstationary GPs are very similar, indicative of the relative homogeneity of the function.
For the two real datasets, the nonstationary GP model outperforms the other methods, except the Bayesian neural network on the temperature dataset. Predictive density calculations that assess the fits of the functions drawn during the MCMC are similar to the point-estimate MSE calculations in terms of model comparison, although we do not have predictive density values for the non-Bayesian neural network implementation.

Table 2: For the test function with two inputs, mean (over 50 data samples) and 95% confidence interval for standardized MSE at 225 test locations, and for the temperature and ozone datasets, cross-validated standardized MSE, for the six methods.

Method                    Function with 2 inputs   Temp. data   Ozone data
Stat. GP                  .024 (.021,.026)         .46          .33
Nonstat. GP               .023 (.020,.026)         .36          .29
Bayesian neural network   .020 (.019,.022)         .35          .32
Neural network            .040* (.033,.047)        .60          .34
BMARS                     .076 (.065,.087)         .53          .33
BMLS                      .033 (.029,.038)         .78          .33

* [14] report a value of .07 for a neural network implementation.

6 Non-Gaussian data

We can model non-Gaussian data using the usual extension from a linear model to a generalized linear model, for observations Y_i ∼ D(g(f(x_i))), where D(·) (g(·)) is an appropriate distribution (link) function, such as the Poisson (log) for count data or the binomial (logit) for binary data. Take f(·) to have a nonstationary GP prior; it cannot be integrated out of the model because of the lack of conjugacy, which causes slow MCMC mixing. [10] improves mixing, which remains slow, using a sampling scheme in which the hyperparameters (including the kernel structure for the nonstationarity) are sampled jointly with the function values, f, in a way that makes use of information in the likelihood. We fit the model to the Tokyo rainfall dataset [19]. The data are the presence of rainfall greater than 1 mm for every calendar day in 1983 and 1984.
Assuming independence between years [19], conditional on f(·) = logit(p(·)), the likelihood for a given calendar day, x_i, is binomial with two trials and unknown probability of rainfall, p(x_i). Figure 2a shows that the estimated function reasonably follows the data and is quite variable because the data in some areas are clustered. The model detects inhomogeneity in the function, with more smoothness in the first few months and less smoothness later (Figure 2b).

Figure 2: (a) Posterior mean estimate, from the nonstationary GP model, of p(·), the probability of rainfall as a function of calendar day, with 95% pointwise credible intervals. Dots are empirical probabilities of rainfall based on the two binomial trials. (b) Posterior geometric mean kernel size (square root of geometric mean kernel eigenvalue).

7 Discussion

We introduce a class of nonstationary covariance functions that can be used in GP regression (and classification) models and allow the model to adapt to variable smoothness in the unknown function. The nonstationary GPs improve on stationary GP models on several test datasets. In test functions on one-dimensional spaces, a state-of-the-art free-knot spline model outperforms the nonstationary GP, but in higher dimensions, the nonstationary GP outperforms two free-knot spline approaches and a non-Bayesian neural network, while being competitive with a Bayesian neural network. The nonstationary GP may be of particular interest for data indexed by spatial coordinates, where the low dimensionality keeps the parameter complexity manageable. Unfortunately, the nonstationary GP requires many more parameters than a stationary GP, particularly as the dimension grows, losing the attractive simplicity of the stationary GP model.
Use of GP priors in the hierarchy of the model to parameterize the nonstationary covariance results in slow computation, limiting the feasibility of the model to approximately n < 1000, because the Cholesky decomposition is O(n³). Our approach provides a general framework; work is ongoing on simpler, more computationally efficient parameterizations of the kernel matrices. Also, approaches that use low-rank approximations to the covariance matrix [20, 21] may speed fitting.

References

[1] M.N. Gibbs. Bayesian Gaussian Processes for Classification and Regression. PhD thesis, Univ. of Cambridge, Cambridge, U.K., 1997.
[2] D.J.C. MacKay. Introduction to Gaussian processes. Technical report, Univ. of Cambridge, 1997.
[3] D. Higdon, J. Swall, and J. Kern. Non-stationary spatial modeling. In J.M. Bernardo, J.O. Berger, A.P. Dawid, and A.F.M. Smith, editors, Bayesian Statistics 6, pages 761–768, Oxford, U.K., 1999. Oxford University Press.
[4] A.M. Schmidt and A. O'Hagan. Bayesian inference for nonstationary spatial covariance structure via spatial deformations. Technical Report 498/00, University of Sheffield, 2000.
[5] D. Damian, P.D. Sampson, and P. Guttorp. Bayesian estimation of semi-parametric non-stationary spatial covariance structure. Environmetrics, 12:161–178, 2001.
[6] I. DiMatteo, C.R. Genovese, and R.E. Kass. Bayesian curve-fitting with free-knot splines. Biometrika, 88:1055–1071, 2002.
[7] D. MacKay and R. Takeuchi. Interpolation models with multiple hyperparameters, 1995.
[8] V. Tresp. Mixtures of Gaussian processes. In T.K. Leen, T.G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 654–660. MIT Press, 2001.
[9] C.E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, Massachusetts, 2002. MIT Press.
[10] C.J. Paciorek.
Nonstationary Gaussian Processes for Regression and Spatial Modelling. PhD thesis, Carnegie Mellon University, Pittsburgh, Pennsylvania, 2003.
[11] M.L. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer, N.Y., 1999.
[12] F. Vivarelli and C.K.I. Williams. Discovering hidden features with Gaussian processes regression. In M.J. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems 11, 1999.
[13] J.R. Lockwood, M.J. Schervish, P.L. Gurian, and M.J. Small. Characterization of arsenic occurrence in source waters of U.S. community water systems. J. Am. Stat. Assoc., 96:1184–1193, 2001.
[14] C.C. Holmes and B.K. Mallick. Bayesian regression with multivariate linear splines. Journal of the Royal Statistical Society, Series B, 63:3–17, 2001.
[15] D.G.T. Denison, B.K. Mallick, and A.F.M. Smith. Bayesian MARS. Statistics and Computing, 8:337–346, 1998.
[16] J.H. Friedman. Multivariate adaptive regression splines. Annals of Statistics, 19:1–141, 1991.
[17] S.A. Wood, W.X. Jiang, and M. Tanner. Bayesian mixture of splines for spatially adaptive nonparametric regression. Biometrika, 89:513–528, 2002.
[18] S.M. Bruntz, W.S. Cleveland, B. Kleiner, and J.L. Warner. The dependence of ambient ozone on solar radiation, temperature, and mixing height. In American Meteorological Society, editor, Symposium on Atmospheric Diffusion and Air Pollution, pages 125–128, 1974.
[19] C. Biller. Adaptive Bayesian regression splines in semiparametric generalized linear models. Journal of Computational and Graphical Statistics, 9:122–140, 2000.
[20] A.J. Smola and P. Bartlett. Sparse greedy Gaussian process approximation. In T. Leen, T. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, Cambridge, Massachusetts, 2001. MIT Press.
[21] M. Seeger and C. Williams. Fast forward selection to speed up sparse Gaussian process regression. In Workshop on AI and Statistics 9, 2003.
Algorithms for Interdependent Security Games

Michael Kearns and Luis E. Ortiz
Department of Computer and Information Science, University of Pennsylvania

1 Introduction

Inspired by events ranging from 9/11 to the collapse of the accounting firm Arthur Andersen, economists Kunreuther and Heal [5] recently introduced an interesting game-theoretic model for problems of interdependent security (IDS), in which a large number of players must make individual investment decisions related to security — whether physical, financial, medical, or some other type — but in which the ultimate safety of each participant may depend in a complex way on the actions of the entire population. A simple example is the choice of whether to install a fire sprinkler system in an individual condominium in a large building. While such a system might greatly reduce the chances of the owner's property being destroyed by a fire originating within their own unit, it might do little or nothing to reduce the chances of damage caused by fires originating in other units (since sprinklers can usually only douse small fires early). If "enough" other unit owners have not made the investment in sprinklers, it may not be cost-effective for any individual to do so.
Kunreuther and Heal [5] observe that a great variety of natural problems share this basic interdependent structure, including investment decisions in airline baggage security (in which investments in new screening procedures may reduce the risk of directly checking suspicious cargo, but nearly all airlines accept transferred bags with no additional screening¹); risk management in corporations (in which individual business units have an incentive to avoid high-risk or illegal activities only if enough other units are similarly well-behaved); vaccination against infectious disease (where the fraction of the population choosing vaccination determines the need for or effectiveness of vaccination); certain problems in computer network security; and many others. All these problems share the following important properties:

• There is a "bad event" (condominium fire, airline explosion, corporate bankruptcy, infection, etc.) to be avoided, and the opportunity to reduce the risk of it via some kind of investment.
• The cost-effectiveness of the security investment for the individual is a function of the investment decisions made by the others in the population.

The original work by Kunreuther and Heal [5] proposed a parametric game-theoretic model for such problems, but left the interesting question of computing the equilibria of the model largely untouched. In this paper we examine such computational issues.

¹El Al airlines is the exception to this.

2 Definitions

In an IDS game, each player i must decide whether or not to invest in some abstract security mechanism or procedure that can reduce their risk of experiencing some abstract bad event. The cost of the investment to i is C_i, while the cost of experiencing the bad event is L_i; the interesting case is when L_i ≫ C_i. Thus, player i has two choices for his action a_i: a_i = 1 means the player makes the investment, while a_i = 0 means he does not.
It turns out that the important parameter is the ratio of the two costs, so we define R_i = C_i/L_i. For each player i, there is a parameter p_i, which is the probability that player i will experience the bad event due to internal contamination if a_i = 0 — for example, this is the probability of the condominium owner's unit burning down due to a fire originating in his own unit. We can also think of p_i as a measure of the direct risk to player i — as we shall see, it is that portion of his risk under his direct control. To model sources of indirect risk, for each pair of players i, j, i ≠ j, let q^j_i be the probability that player i experiences the bad event as a result of a transfer from player j — for example, this is the probability that the condominium of player i burns down due to a fire originating in the unit of player j. Note the implicit constraint that p_i + Σ_{j≠i} q^j_i < 1. An IDS game is thus given by the parameters p_i, q^j_i, L_i, and C_i for each player i, and the expected cost to player i under the model is defined to be

M_i(a⃗) = a_i C_i + (1 − a_i) p_i L_i + (1 − (1 − a_i) p_i) [1 − ∏_{j=1, j≠i}^{n} (1 − (1 − a_j) q^j_i)] L_i    (1)

Let us take a moment to parse and motivate this definition, which is the sum of three terms. The first term represents the amount invested in security by player i, and is either 0 (if a_i = 0) or C_i (if a_i = 1). The second term is the expected cost to i due to internal or direct risk of the bad event, and is either p_i L_i (the expected cost of internally generated bad events in the case a_i = 0) or 0 (in the case of investment, a_i = 1). Thus, there is a natural tension between the first two terms: players can either invest in security, which costs money but reduces risk, or gamble by not investing. Note that here we have assumed that security investment perfectly eradicates direct risk (but not indirect risk); generalizations are obviously possible, but have no qualitative effect on the model.
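Equation (1) can be transcribed directly; the sketch below (array and function names are ours) stores the transfer probability q^j_i as q[j, i]:

```python
import numpy as np

def expected_cost(i, a, p, q, C, L):
    """Expected cost M_i(a) from Equation (1): investment cost, plus direct
    (internal) risk, plus independently transferred risk; q[j, i] holds q_i^j."""
    n = len(a)
    direct_safe = 1.0 - (1 - a[i]) * p[i]   # prob. of no internal bad event
    transfer_safe = np.prod([1.0 - (1 - a[j]) * q[j, i]
                             for j in range(n) if j != i])
    return (a[i] * C[i]
            + (1 - a[i]) * p[i] * L[i]
            + direct_safe * (1.0 - transfer_safe) * L[i])

# Two players, neither investing: player 0 bears internal risk p[0] plus
# the transfer risk q[1, 0] from player 1.
p = np.array([0.01, 0.02]); C = np.array([1.0, 1.0]); L = np.array([100.0, 100.0])
q = np.array([[0.0, 0.005],
              [0.004, 0.0]])
m0 = expected_cost(0, np.array([0, 0]), p, q, C, L)
assert abs(m0 - 1.396) < 1e-9  # 0.01*100 + 0.99*0.004*100
```

Note that investing (a_i = 1) zeroes the first two risk terms for player i but leaves the transferred-risk term, exactly as the model prescribes.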
It is the third term of Equation (1) that expresses the interdependent nature of the problem. This term encodes the assumption that there are n sources of risk to player i — his own internal risk, and a specific transfer risk from each of the other n − 1 players — and that all these sources are statistically independent. The prefactor (1 − (1 − a_i)p_i) is simply the probability that player i does not experience the bad event due to direct risk. The bracketed expression is the probability that player i experiences a bad event due to transferred risk: each factor (1 − (1 − a_j)q^j_i) in the product is the probability that a bad event does not befall player i due to player j (and the product expresses the assumption that all of these possible transfer events are independent). Thus 1 minus this product is the probability of transferred contamination, and of course the product of the various risk probabilities is also multiplied by the cost L_i of the bad event. The model parameters and Equation (1) define a compact representation for a multi-player game in which each player's goal is to minimize their cost. Our interest is in the efficient computation of Nash equilibria (NE) of such games².

²See (for example) [4] for definitions of Nash and approximate Nash equilibria.

3 Algorithms

We begin with the observation that it is in fact computationally straightforward to find a single pure NE of any IDS game. To see this, it is easily verified that if there are any conditions under which player i prefers investing (a_i = 1) to not investing (a_i = 0) according to the expected costs given by Equation (1), then it is certainly the case that i will prefer to invest when all the other n − 1 players are doing so. Similarly, the most favorable conditions for not investing occur when no other players are investing. Thus, to find a pure NE, we can first check whether either all players investing, or no players investing, forms a NE. If so, we are finished.
If neither of these extremes is a NE, then there are some players for whom investing or not investing is a dominant strategy (a best response independent of the behavior of others). If we then "clamp" such players to their dominant strategies, we obtain a new IDS game with fewer players (only those without dominant strategies in the original game), and can again see if this modified game has any players with dominant strategies. At each stage of this iterative process we maintain the invariant that clamped players are playing a best response to any possible setting of the unclamped players.

Theorem 1 A pure NE for any n-player IDS game can be computed in time O(n²).

In a sense, the argument above demonstrates the fact that in most "interesting" IDS games (those in which each player is a true participant, and can have their behavior swayed by that of the overall population), there are two trivial pure NE (all invest and none invest). However, we are also interested in finding NE in which some players are choosing to invest and others not to (even though no player has a dominant strategy). A primary motivation for finding such NE is the appearance of such behavior in "real world" IDS settings, where individual parties do truly seem to make differing security investment choices (such as with sprinkler systems in large apartment buildings). Conceptually, the most straightforward way to discover such NE would be to compute all NE of the IDS game. As we shall eventually see, for computational efficiency such a demand requires restrictions on the parameters of the game, one natural example of which we now investigate.

3.1 Uniform Transfer IDS Games

A uniform transfer IDS game is one in which the transfer risks emanating from a given player are independent of the transfer destination. Thus, for any player j, we have that for all i ≠ j, q^j_i = δ_j for some value δ_j.
Note that the risk level δ_j presented to the population by different players j may still vary with j — but each player spreads their risk indiscriminately across the rest of the population. An example would be the assumption that each airline transferred bags with equal probability to all other airlines. In this section, we describe two different approaches for computing NE in uniform transfer IDS games. The first approach views a uniform transfer IDS game as a special type of summarization game, a class recently investigated by Kearns and Mansour [4]. In an n-player summarization game, the payoff of each player i is a function of the actions a⃗_{−i} of all the other players, but only through the value of a global and common real-valued summarization function S(a⃗). The main result of [4] gives an algorithm for computing approximate NE of summarization games, in which the quality of the approximation depends on the influence of the summarization function S. A well-known notion in discrete functional analysis, the influence of S is the maximum change in S that any input (player) can unilaterally cause. (See [4] for detailed definitions.) It can be shown (details omitted) that any uniform transfer IDS game is in fact a summarization game under the choice

S(a⃗) = ∏_{j=1}^{n} (1 − (1 − a_j) δ_j)    (2)

and that the influence of this function is bounded by the largest δ_j. We note that in many natural uniform transfer IDS settings, we expect this influence to diminish like 1/n with the number of players n. (This would be the case if the risk transfer comes about through physical objects like airline baggage, where each transfer event can have only a single destination.) Combined with the results of [4], the above discussion can be shown to yield the following result.

Theorem 2 There is an algorithm that takes as input any uniform transfer IDS game and any ε > 0, and computes an O(ρε + δmax)-NE, where ρ = max_j {(1 − p_j)/(1 − δ_j)} and δmax = max_j {δ_j}.
The running time of the algorithm is polynomial in n, 1/ε, and ρ. We note that in typical IDS settings we expect both the p_j and δ_j to be small (the bad event is relatively rare, regardless of its source), in which case ρ may be viewed as a constant. Furthermore, it can be verified that this algorithm will in fact be able to compute approximate NE in which some players choose to invest and others not to, even in the absence of any dominant strategies. While viewing uniform transfer IDS games as bounded-influence summarization games relates them to a standard class and yields a natural approximation algorithm, an improved approach is possible. We now present an algorithm (Algorithm UniformTransferIDSNash in Figure 1) that efficiently computes all NE for uniform transfer IDS games. The algorithm (indeed, even the representation of certain NE) requires the ability to compute mth roots. We may assume without loss of generality that for all players i, δ_i > 0 and p_i > 0. For a joint mixed strategy vector x⃗ ∈ [0, 1]^n, denote the set of (fully) investing players as I ≡ {i : x_i = 1}; the set of (fully) non-investing players as N ≡ {i : x_i = 0}; and the set of partially investing players as P ≡ {i : 0 < x_i < 1}. The correctness of algorithm UniformTransferIDSNash follows immediately from two lemmas that we now state without proof due to space considerations. The first lemma is a generalization of Proposition 2 of [2], and essentially establishes that the values R_i/p_i and (1 − δ_i) R_i/p_i determine a two-level ordering of the players' willingness to invest. This double ordering generates the outer and inner loops of algorithm UniformTransferIDSNash. Note that a player with small R_i/p_i has a combination of relatively low cost of investing compared to the loss of a bad event (recall R_i = C_i/L_i), and relatively high direct risk p_i, and thus intuitively should be more willing to invest than players with large R_i/p_i. The lemma makes this intuition precise.
Lemma 3 (Ordering Lemma) Let x⃗ be a NE for a uniform transfer IDS game G = (n, R⃗, p⃗, δ⃗). Then for any i ∈ I (an investing player), any j ∈ P (a partially investing player), and any k ∈ N (a non-investing player), the following conditions hold:

R_i/p_i < R_j/p_j
R_i/p_i ≤ (1 − δ_k) R_k/p_k < R_k/p_k
(1 − δ_j) R_j/p_j < (1 − δ_k) R_k/p_k

The second lemma establishes that if a NE contains some partially investing players, the values of their mixed strategies are in fact uniquely determined. The equations for these mixed strategies are exploited in the subroutine TestNash.

Algorithm UniformTransferIDSNash
Input: An n-player uniform transfer IDS game G with direct risk parameters p⃗, transfer risk parameters δ⃗, and cost parameters R⃗, where R_i = C_i/L_i.
Output: A set S of all exact connected sets of NE for G.
1. Initialize a partition of the players into three sets I, N, P (the investing, not investing, and partially investing players, respectively) and test if everybody investing is a NE:
   I ← {1, …, n}; N ← ∅; P ← ∅; S ← TestNash(G, I, N, P, S)
2. Let (i_1, i_2, …, i_n) be an ordering of the n players satisfying R_{i_1}/p_{i_1} ≤ … ≤ R_{i_n}/p_{i_n}. Call this the outer ordering.
3. for k = 1, …, n
   (a) Move the next player in the outer ordering from the investing to the partially-investing sets: P ← P ∪ {i_k}; I ← I − {i_k}
   (b) Let (j_1, …, j_k) be an ordering of the players in P satisfying (1 − δ_{j_1}) R_{j_1}/p_{j_1} ≤ … ≤ (1 − δ_{j_k}) R_{j_k}/p_{j_k}. Call this the inner ordering.
   (c) Consider a strategy with no not-investing players: N ← ∅; S ← TestNash(G, I, N, P, S)
   (d) for m = 1, …, k
       i.
Move the next player in the inner ordering from the partially-investing to the non-investing sets, and test if there is a NE consistent with the partition: N ← N ∪ {j_m}; P ← P − {j_m}; S ← TestNash(G, I, N, P, S)

Subroutine TestNash
Inputs: An n-player uniform transfer IDS game G; a partition of the players I, N, P (as above); S, the current discovered set of connected sets of NE for G.
Output: S with possibly one additional connected set of NE of G consistent with I, N, and P (assuming unit-time computation of mth roots of rational numbers).
1. Set pure strategies for not-investing and investing players, respectively: for all k ∈ N, x_k ← 0; for all i ∈ I, x_i ← 1.
2. if |P| = 1 (Lemma 4, part (a) applies)
   (a) Let P = {j}, U be as in Equation (3), and U′ = U ∩ (0, 1)
   (b) if R_j = p_j ∏_{k∈N} (1 − δ_k) (i.e., player j is indifferent) and U′ ≠ ∅, then return S ← S ∪ {{y⃗ : y_j ∈ U′, y⃗_{−j} = x⃗_{−j}}}
3. else (Lemma 4, part (b) applies)
   (a) Compute mixed strategies x_j for all j ∈ P as in Equation (4)
   (b) if there exists j ∈ P with x_j ≤ 0 or x_j ≥ 1, return S
   (c) if x⃗ is a NE for G then return S ← S ∪ {{x⃗}}
4. return S

Figure 1: Algorithm UniformTransferIDSNash

If I = [l, u] is an interval of ℜ with endpoints l and u, and a, b ∈ ℜ, then we define aI + b ≡ [al + b, au + b].

Lemma 4 (Partial Investment Lemma) Let x⃗ ∈ [0, 1]^n be a mixed strategy for a uniform transfer IDS game G = (n, R⃗, p⃗, δ⃗), and let P be the set of partially investing players in x⃗. Then
(a) if |P| = 1, then letting P = {j},

V = [max_{i∈I} R_i/p_i , min_{k∈N} (1 − δ_k) R_k/p_k], and U = ((p_j/R_j) V − (1 − δ_j)) / δ_j    (3)

it holds that x⃗ is a NE if and only if R_j = p_j ∏_{k∈N} (1 − δ_k) (i.e., player j is indifferent) and player j's mixed strategy satisfies x_j ∈ U; else,
(b) if |P| > 1, and x⃗ is a NE, then for all j ∈ P,

x_j = ((p_j/R_j) E − (1 − δ_j)) / δ_j    (4)

where E = (∏_{j∈P} (R_j/p_j) / ∏_{k∈N} (1 − δ_k))^{1/(|P|−1)}.

The next theorem summarizes our second algorithmic result on uniform transfer IDS games.
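Equation (4) of the Partial Investment Lemma can be sketched as follows (names are ours; this applies only when |P| > 1, and a candidate solution is a NE only if every x_j lands strictly inside (0, 1)):

```python
import numpy as np

def partial_investment_strategies(R, p, delta, P, N):
    """Mixed strategies for partially investing players (Equation (4)):
    x_j = ((p_j / R_j) E - (1 - delta_j)) / delta_j, where
    E = (prod_{j in P} R_j/p_j / prod_{k in N} (1 - delta_k))^(1/(|P|-1))."""
    num = np.prod([R[j] / p[j] for j in P])
    den = np.prod([1.0 - delta[k] for k in N])  # empty product is 1
    E = (num / den) ** (1.0 / (len(P) - 1))
    return {j: ((p[j] / R[j]) * E - (1.0 - delta[j])) / delta[j] for j in P}

# Symmetric two-player instance with R/p just below 1: both partially
# investing players mix with the same probability.
x = partial_investment_strategies(
    np.array([0.00995, 0.00995]), np.array([0.01, 0.01]),
    np.array([0.01, 0.01]), P=[0, 1], N=[])
assert abs(x[0] - 0.5) < 1e-9 and abs(x[1] - 0.5) < 1e-9
```

In TestNash, values falling outside (0, 1) cause the candidate partition to be rejected, which is what step 3(b) of the subroutine checks.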
The omitted proof follows from Lemmas 3 and 4.

Theorem 5 Algorithm UniformTransferIDSNash computes all exact (connected sets of) NE for uniform transfer IDS games in time polynomial in the size of the model.

We note that it follows immediately from the description and correctness of the algorithm that any n-player uniform transfer IDS game has at most n(n + 3)/2 + 1 connected sets of NE. In addition, each connected set of NE in a uniform transfer IDS game is either a singleton or a simple interval where n − 1 of the players play pure strategies and the remaining player has a simple interval in [0, 1] of probability values from which to choose its strategy. At most n of the connected sets of NE in a uniform transfer IDS game are simple intervals.

3.2 Hardness of General IDS Games

In light of the results of the preceding section, it is of course natural to consider the computational difficulty of unrestricted IDS games. We now show that even a slight generalization of uniform transfer IDS games, in which we allow each transfer parameter q^j_i to assume one of two fixed values instead of one, leads to the intractability of computing at least some of the NE. A graphical uniform transfer IDS game, so named because it can be viewed as a marriage between uniform transfer IDS games and the graphical games introduced in [3], is an IDS game with the restriction that for all players j, q^j_i ∈ {0, δ_j}, for some δ_j > 0. Let N(j) ≡ {i : q^j_i > 0} be the set of players that can be directly affected by player j's behavior. In other words, the transfer risk parameter q^j_i of player j with respect to player i is either zero, in which case player j has no direct effect on player i's behavior; or it is constant, in which case the public safety e^j_i = 1 − (1 − x_j)δ_j of player j with respect to player i ∈ N(j) is the same as for any other player in N(j).
The pure Nash extension problem for an n-player game with binary actions takes as input a description of the game and a partial assignment a⃗ ∈ {0, 1, *}^n. The output may be any complete assignment (joint action) b⃗ ∈ {0, 1}^n that agrees with a⃗ on all its 0 and 1 settings, and is a (pure) NE for the game; or "none" if no such NE exists. Clearly the problem of computing all the NE is at least as difficult as the pure Nash extension problem.

Theorem 6 The pure Nash extension problem for graphical uniform transfer IDS games is NP-complete, even if |N(j)| ≤ 3 for all j, and δ_j is some fixed value δ for all j.

The reduction (omitted) is from Monotone One-in-Three SAT [1].

4 Experimental Study: Airline Baggage Security

As an empirical demonstration of IDS games, we constructed and conducted experiments on an IDS game for airline security that is based on real industry data. We have access to a data set consisting of 35,362 records of actual civilian commercial flight reservations, both domestic and international, made on August 26, 2002. Since these records contain complete flight itineraries, they include passenger transfers between the 122 represented commercial air carriers. As described below, we used this data set to construct an IDS game in which the players are the 122 carriers, the "bad event" corresponds to a bomb exploding in a bag being transported in a carrier's airplane, and the transfer event is the physical transfer of a bag from one carrier to another. For each carrier pair (i, j), the transfer parameter q^j_i was set to be proportional to the count of transfers from carrier j to carrier i in the data set. We are thus using the rate of passenger transfers as a proxy for the rate of baggage transfers. The resulting parameters (details omitted) are, as expected, quite asymmetric, as there are highly structured patterns of transfers resulting from differing geographic coverages, alliances between carriers, etc.
The model is thus far from being a uniform transfer IDS game, so algorithm UniformTransferIDSNash cannot be applied; we instead used a simple gradient learning approach. The data set provides no guidance on reasonable values for the R_i and p_i, which quantify relative costs of a hypothetical new screening procedure and the direct risks of checking contaminated luggage, respectively; presumably R_i depends on the specific economics of the carrier, and p_i on some notion of the risk presented by the carrier's clientele, which might depend on the geographic area served. Thus, for illustrative purposes, an arbitrary value of p_i = 0.01 was chosen for all i,³ along with a common value for R_i of 0.009 (so an explosion is roughly 110 times more costly to a carrier than full investment in security). Since the asymmetries of the q_i^j preclude the use of algorithm UniformTransferIDSNash, we instead used a learning approach in which each player begins with a random initial investment strategy x_i ∈ [0, 1], and adjusts its degree of investment up or down according to the gradient dynamics x_i ← x_i − α∇_i, where ∇_i is determined by computing the derivative of Equation (1) with respect to x_i, and α = 0.05 was used in the experiments to be discussed.

Figure 2: (a) Simulation of the evolution of security investment strategies for the 49 busiest carriers using gradient dynamics under the IDS model. Above each plot is an index indicating the rank of the carrier in terms of overall volume in the data set. Each plot shows the investment level x_i (initialized randomly in [0, 1]) for carrier i over 500 simulation steps. (b) Tipping phenomena.
Simulation of the evolution of security investment strategies for the 49 busiest carriers, but with the three largest carriers (indices 1, 2 and 3) in the data set clamped (subsidized) at full investment. The plots are ordered as in (a), and again show 500 simulation steps under gradient dynamics.

Figure 2(a) shows the evolution, over 500 steps of simulation time, of the investment level x_i for the 49 busiest carriers.⁴ We have ordered the 49 plots with the least busy carrier (index 49) plotted in the upper left corner, and the busiest (index 1) in the lower right corner. The horizontal axes measure the 500 time steps, while the vertical axes go from 0 to 1. The axes are unlabeled for legibility. The most striking feature of the figure is the change in the evolution of the investment strategy as we move from less busy to more busy carriers. Broadly speaking, there is a large population of lower-volume carriers (indices 49 down to 34) that quickly converge to full investment (x_i = 1) regardless of initial conditions. The smallest carriers, not shown (ranks 122 down to 50), also all rapidly converge to full investment. There is then a set of medium-volume carriers whose limiting strategy is approached more slowly, and may eventually converge to either full or no investment (roughly indices 33 down to 14). Finally, the largest carriers (indices 13 and lower) again converge quickly, but to no investment (x_i = 0), because they have a high probability of having bags transferred from other carriers (even if they protect themselves against dangerous bags being loaded directly on their planes). Note also that the dynamics can yield complex, nonlinear behavior that includes reversals of strategy.

³This is (hopefully) an unrealistically large value for the real world; however, it is the relationship between the parameters and not their absolute magnitudes that is important in the model.
⁴According to the total volume of flights per carrier in the data set.
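A toy version of this simulation is easy to reproduce. The paper's Equation (1) is not shown in this excerpt, so the sketch below substitutes a stand-in expected cost of the Heal-Kunreuther form, c_i(x) = x_i R + [1 − (1 − (1 − x_i)p) · Π_{j≠i} (1 − q_i^j (1 − x_j) p)], with losses normalized to 1 and scalar p and R; the gradient is then ∂c_i/∂x_i = R − p · Π_{j≠i}(1 − q_i^j (1 − x_j) p). The function name, cost form, and parameters are our assumptions, not the paper's code.

```python
import random

def ids_gradient_sim(q, p, R, steps=500, eta=0.05, clamped=None, seed=0):
    """Gradient-dynamics simulation for a toy IDS game.  q[j][i] is the
    transfer risk from player j to player i; `clamped` players are held
    (subsidized) at full investment x_i = 1.  Each unclamped player moves
    down its cost gradient, clipped to [0, 1]."""
    n = len(q)
    rng = random.Random(seed)
    x = [1.0 if clamped and i in clamped else rng.random() for i in range(n)]
    for _ in range(steps):
        new_x = list(x)
        for i in range(n):
            if clamped and i in clamped:
                continue
            safe_from_others = 1.0
            for j in range(n):
                if j != i:
                    safe_from_others *= 1 - q[j][i] * (1 - x[j]) * p
            grad = R - p * safe_from_others   # d c_i / d x_i
            new_x[i] = min(1.0, max(0.0, x[i] - eta * grad))
        x = new_x
    return x
```

With the experiment's values p = 0.01 and R = 0.009, a player facing no incoming transfer risk has gradient R − p < 0 and climbs to full investment, while a heavily exposed hub can have a positive gradient and slide to no investment, qualitatively the pattern of Figure 2(a).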
The simulation eventually converges (within 2000 steps) to a (Nash) equilibrium in which some carriers are at full investment and the rest at no investment. This property is extremely robust across initial conditions and model parameters. The simulation model thus enables one to examine how subsidizing several airlines to invest in security can encourage others to do the same. This type of "tipping" behavior [6] can be the basis for developing strategies for inducing adoption of security measures short of formal regulations or requirements. Figure 2(b) shows the result of a simulation identical to the one discussed above, except that the three largest carriers (indices 1, 2 and 3) are now "clamped", or forced to be at full investment, during the entire simulation. Independent of initial conditions, the remaining population now invariably converges to full investment. Thus the model suggests that these three carriers form (one of perhaps many different) tipping sets: carriers whose decision to invest (due to subsidization or other exogenous forces) will create the economic incentive for a large population of otherwise skeptical carriers to follow. The dynamics also reveal a cascading effect; for example, carrier 5 moves towards full investment (after having settled comfortably at no investment) only after a number of larger and smaller carriers have done so.

Acknowledgements: We give warm thanks to Howard Kunreuther, Geoffrey Heal and Kilian Weinberger for many helpful discussions.

References
[1] Michael Garey and David Johnson. Computers and Intractability: A Guide to the Theory of NP-completeness. Freeman, 1979.
[2] Geoffrey Heal and Howard Kunreuther. You only die once: Managing discrete interdependent risks. Working paper, Columbia Business School and Wharton Risk Management and Decision Processes Center, 2003.
[3] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory.
In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 253–260, 2001. [4] M. Kearns and Y. Mansour. Efficient Nash computation in summarization games with bounded influence. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2002. [5] Howard Kunreuther and Geoffrey Heal. Interdependent security. Journal of Risk and Uncertainty (Special Issue on Terrorist Risks), 2003. In press. [6] Thomas Schelling. Micromotives and Macrobehavior. Norton, 1978.
How to Combine Expert (or Novice) Advice when Actions Impact the Environment

Daniela Pucci de Farias∗
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139
pucci@mit.edu

Nimrod Megiddo
IBM Almaden Research Center, 650 Harry Road, K53-B2, San Jose, CA 95120
megiddo@almaden.ibm.com

Abstract

The so-called "experts algorithms" constitute a methodology for choosing actions repeatedly, when the rewards depend both on the choice of action and on the unknown current state of the environment. An experts algorithm has access to a set of strategies ("experts"), each of which may recommend which action to choose. The algorithm learns how to combine the recommendations of individual experts so that, in the long run, for any fixed sequence of states of the environment, it does as well as the best expert would have done relative to the same sequence. This methodology may not be suitable for situations where the evolution of states of the environment depends on past chosen actions, as is usually the case, for example, in a repeated non-zero-sum game. A new experts algorithm is presented and analyzed in the context of repeated games. It is shown that asymptotically, under certain conditions, it performs as well as the best available expert. This algorithm is quite different from previously proposed experts algorithms. It represents a shift from the paradigms of regret minimization and myopic optimization to consideration of the long-term effect of a player's actions on the opponent's actions or the environment. The importance of this shift is demonstrated by the fact that this algorithm is capable of inducing cooperation in the repeated Prisoner's Dilemma game, whereas previous experts algorithms converge to the suboptimal non-cooperative play.

1 Introduction

Experts algorithms. A well-known class of methods in machine learning is the so-called experts algorithms.
The goal of these methods is to learn from experience how to combine advice from multiple experts in order to make sequential decisions in an online environment. The general idea can be described as follows. An agent has to choose repeatedly from a given set of actions. The reward in each stage is a function of the chosen action and the choices of Nature or the environment (also referred to as the "adversary" or the "opponent"). A set of strategies {1, . . . , r} is available for the agent to choose from. We refer to each such strategy as an "expert," even though some of them might be simple enough to be called a "novice." Each expert suggests a choice of an action based on the history of the process and the expert's own choice algorithm. After each stage, the agent observes his own reward. An experts algorithm directs the agent with regard to which expert to follow in the next stage, based on the past history of actions and rewards.

Minimum Regret. A popular criterion in decision processes is called Minimum Regret (MR). Regret is defined as the difference between the reward that could have been achieved, given the choices of Nature, and what was actually achieved. An expert selection rule is said to minimize regret if it yields an average reward as large as that of any single expert, against any fixed sequence of actions chosen by the opponent. Indeed, certain experts algorithms, which at each stage choose an expert from a probability distribution that is related to the reward accumulated by the expert prior to that stage, have been shown to minimize regret [1, 2]. It is crucial to note, though, that since the experts are compared on a sequence-by-sequence basis, the MR criterion ignores the possibility that different experts may induce different sequences of choices by the opponent.

∗Work done while at IBM Almaden Research Center, San Jose, California.
Thus, MR makes sense only under the assumption that Nature’s choices are independent of the decision maker’s choices. Repeated games. We consider a multi-agent interaction in the form of a repeated game. In repeated games, the assumption that the opponent’s choices are independent of the agent’s choices is not justified, because the opponent is likely to base his choices of actions on the past history of the game. This is evident in nonzero-sum games, where players are faced with issues such as how to coordinate actions, establish trust or induce cooperation. These goals require that they take each other’s past actions into account when making decisions. But even in the case of zero-sum games, the possibility that an opponent has bounded rationality may lead a player to look for patterns to be exploited in the opponent’s past actions. We illustrate some of the aforementioned issues with an example involving the Prisoner’s Dilemma game. The Prisoner’s Dilemma. In the single-stage Prisoner’s Dilemma (PD) game, each player can either cooperate (C) or defect (D). Defecting is better than cooperating regardless of what the opponent does, but it is better for both players if both cooperate than if both defect. Consider the repeated PD. Suppose the row player consults with a set of experts, including the “defecting expert,” who recommends defection all the time. Let the strategy of the column player in the repeated game be fixed. In particular, the column player may be very patient and cooperative, willing to wait for the row player to become cooperative, but eventually becoming non-cooperative if the row player does not seem to cooperate. Since defection is a dominant strategy in the stage game, the defecting expert achieves in each step a reward as high as any other expert against any sequence of choices of the column player, so the row player learns with the experts algorithm to defect all the time. 
Obviously, in retrospect, this seems to minimize regret, since for any fixed sequence of actions by the column player, constant defection is the best response. But constant defection is not the best response in the repeated game against many possible strategies of the column player. For instance, the row player would regret very much using the experts algorithm if he were told later that the column player had been playing a strategy such as Tit-for-Tat.¹ In this paper, we propose and analyze a new experts algorithm, which follows experts judiciously, attempting to maximize the long-term average reward. Our algorithm differs from previous approaches in at least two ways. First, each time an expert is selected, it is followed for multiple stages of the game rather than a single one. Second, our algorithm takes into account only the rewards that were actually achieved by an expert in the stages it was followed, rather than the reward that could have been obtained in any stage. Our algorithm enjoys the appealing simplicity of the previous algorithms, yet it leads to a qualitatively different behavior and improved average reward. We present two results:
1. A "worst-case" guarantee that, in any play of the game, our algorithm achieves an average reward that is asymptotically as large as that of the expert that did best in the rounds of the game when it was played. The worst-case guarantee holds without any assumptions on the opponent's or experts' strategies.
2. Under certain conditions, our algorithm achieves an average reward that is asymptotically as large as the average reward that could have been achieved by the best expert, had it been followed exclusively. The conditions are required in order to facilitate learning and for the notion of a "best expert" to be well-defined.

¹The Tit-for-Tat strategy is to play C in the first stage, and later play in every stage whatever the opponent played in the preceding stage.
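The introductory PD example can be checked numerically. The payoff values below are the conventional ones (3 for mutual cooperation, 1 for mutual defection, 5/0 for a lone defector/cooperator), not taken from the paper: against Tit-for-Tat, always defecting earns roughly the mutual-defection payoff, while always cooperating earns the mutual-cooperation payoff.

```python
def play_pd(row_strategy, col_strategy, stages):
    """Average payoff to the row player in the repeated Prisoner's
    Dilemma.  Each strategy maps the opponent's history to 'C' or 'D'."""
    payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
    total, row_hist, col_hist = 0, [], []
    for _ in range(stages):
        r, c = row_strategy(col_hist), col_strategy(row_hist)
        total += payoff[(r, c)]
        row_hist.append(r)
        col_hist.append(c)
    return total / stages

always_defect = lambda opp: 'D'
always_cooperate = lambda opp: 'C'
tit_for_tat = lambda opp: opp[-1] if opp else 'C'
```

Over 1000 stages against Tit-for-Tat, `always_cooperate` averages 3.0 while `always_defect` averages about 1.004 (one exploitative first stage, then mutual defection), even though defection is the stagewise dominant action.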
The effectiveness of the algorithm is demonstrated by its performance in the repeated PD game: it is capable of identifying the opponent's willingness to cooperate, and it induces cooperative behavior. The paper is organized as follows. The algorithm is described in Section 2. A bound based on actual expert performance is presented in Section 3. In Section 4, we introduce and discuss an assumption about the opponent. This assumption gives rise to asymptotic optimality, which is presented in Section 5.

2 The algorithm

We consider an "experts strategy" for the row player in a repeated two-person game in normal form. At each stage of the game, the row and column players choose actions i ∈ I and j ∈ J, respectively. The row player has a reward matrix R, with entries 0 ≤ R_ij ≤ u. The row player may consult at each stage with a set of experts {1, . . . , r}, before choosing an action for the next stage. We denote by σ_e the strategy proposed by expert e, i.e., σ_e = σ_e(h_s) is the proposed probability distribution over actions in stage s, given the history h_s. We refer to the row player as the agent and to the column player as the opponent. Usually, the form of experts algorithms found in the literature is as follows. Denote by M_e(s − 1) the average reward achieved by expert e prior to stage s of the game.² Then, a reasonable rule is to follow expert e in stage s with a probability that is proportional to some monotone function of M_e(s − 1). In particular, when this probability is proportional to exp{η_s M_e(s − 1)}, for a certain choice of η_s, this algorithm is known to minimize regret [1, 2]. Specifically, by letting j_s (s = 1, 2, . . .) denote the observed actions of the opponent up to stage s, and letting σ_X denote the strategy induced by the experts algorithm, we have

  (1/s) Σ_{s′=1}^{s} E[R(i, j_{s′}) : i ∼ σ_X(h_{s′})] ≥ sup_e (1/s) Σ_{s′=1}^{s} E[R(i, j_{s′}) : i ∼ σ_e(h_{s′})] − o(s).   (1)
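The stagewise rule just described, selecting expert e with probability proportional to exp{η M_e(s − 1)}, can be sketched in a few lines. A fixed η is used here for simplicity in place of the stage-dependent η_s of [1, 2]; the function name is ours.

```python
import math
import random

def exp_weights_select(avg_rewards, eta, rng=random):
    """Pick an expert index with probability proportional to
    exp(eta * M_e), where avg_rewards[e] plays the role of M_e(s-1)."""
    weights = [math.exp(eta * m) for m in avg_rewards]
    total = sum(weights)
    r = rng.random() * total
    for e, w in enumerate(weights):
        r -= w
        if r <= 0:
            return e
    return len(weights) - 1   # guard against floating-point round-off
```

With η = 0 the rule is uniform over experts; as η grows, it concentrates on the expert with the highest accumulated average reward, which is exactly the myopic behavior criticized below.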
The main deficiency of the regret minimization approach is that it fails to consider the influence of a player's chosen actions on the future choices of the opponent: inequality (1) holds for any fixed sequence (j_s) of the opponent's moves, but does not account for the fact that different choices of actions by the agent may induce different sequences of the opponent. This subtlety is also missing in the experts algorithm we described above. At each stage of the game, the selection of an expert is based solely on how well the various experts have, or could have, done so far. There is no notion of learning how an expert's actions affect the opponent's moves. For instance, in the repeated PD game described in the introduction, assuming that the opponent is playing Tit-for-Tat, the algorithm is unable to establish the connection between the opponent's cooperative moves and his own. Based on the previous observations, we propose a new experts algorithm, which takes into account how the opponent reacts to each of the experts. The idea is simple: instead of choosing a (potentially different) expert at each stage of the game, the number of stages an expert is followed, each time it is selected, increases gradually. We refer to each such set of stages as a "phase" of the algorithm. Following is the statement of the Strategic Experts Algorithm (SEA). The phase number is denoted by i. The number of phases during which expert e has been followed is denoted by N_e. The average payoff from phases in which expert e has been followed is denoted by M_e.

²In different variants of the algorithm, and depending on what information is available to the row player, M_e(s − 1) could be either an estimate of the average reward based on the reward achieved by expert e in the stages it was played, or the reward it could have obtained had it been played in all stages against the same history of play of the opponent.

Strategic Experts Algorithm (SEA):
1. For e = 1, . . . , r, set M_e = N_e = 0.
Set i = 1.
2. With probability 1/i perform an exploration phase: choose an expert e from the uniform distribution over {1, . . . , r}. Otherwise, perform an exploitation phase: choose an expert e from the uniform distribution over the set of experts e′ with maximum M_{e′}.
3. Set N_e = N_e + 1. Follow expert e's instructions for the next N_e stages. Denote by R̃ the average payoff accumulated during the current phase (i.e., these N_e stages), and set M_e = M_e + (2/(N_e + 1))(R̃ − M_e).
4. Set i = i + 1 and go to step 2.

Throughout the paper, s will denote a stage number, and i will denote a phase number. We denote by M_1(i), . . . , M_r(i) the values of the registers M_1, . . . , M_r, respectively, at the end of phase i. Similarly, we denote by N_1(i), . . . , N_r(i) the values of the registers N_1, . . . , N_r, respectively, at the end of phase i. Thus, M_e(i) and N_e(i) are, respectively, the average payoff accumulated by expert e and the total number of phases this expert was followed, on or before phase i. We will also let M(s) and M(i) denote, without confusion, the average payoff accumulated by the algorithm in the first s stages or first i phases of the game.

3 A bound based on actual expert performance

When the SEA is employed, the average reward M_e(i) that was actually achieved by each available expert e is being tracked. It is therefore interesting to compare the average reward M(s) achieved by the SEA with the averages achieved by the various experts. The following theorem states that, in the long run, the SEA obtains almost surely at least as much as the actual average reward obtained by any available expert during the same play.

Theorem 3.1.
  Pr( lim inf_{s→∞} M(s) ≥ max_e lim inf_{i→∞} M_e(i) ) = 1.   (2)
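The SEA of Section 2 translates almost line for line into code. In the sketch below, the actual game is abstracted behind a `play_phase` callback (an interface of ours, not the paper's) that follows a given expert for a given number of stages and reports the phase's average payoff.

```python
import random

def sea(experts, play_phase, num_phases, rng=None):
    """Strategic Experts Algorithm (Section 2).  `experts` is a list of
    expert identifiers; `play_phase(e, n_stages)` must run the game for
    n_stages stages following expert e and return the average payoff
    over those stages.  Returns the final registers (M, N)."""
    rng = rng or random.Random()
    r = len(experts)
    M = [0.0] * r   # average payoff per expert, over its own phases
    N = [0] * r     # number of phases each expert was followed
    for i in range(1, num_phases + 1):
        if rng.random() < 1.0 / i:           # exploration phase
            e = rng.randrange(r)
        else:                                 # exploitation phase
            best = max(M)
            e = rng.choice([k for k in range(r) if M[k] == best])
        N[e] += 1
        R_tilde = play_phase(experts[e], N[e])   # follow e for N[e] stages
        M[e] += 2.0 / (N[e] + 1) * (R_tilde - M[e])
    return M, N
```

With two experts whose phase payoffs are constant (a trivially flexible environment), exploitation quickly locks onto the better expert, since every exploration phase refreshes its register M_e.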
Although the claim of Theorem 3.1 seems very close to regret minimization, there is an essential difference, in that we compare the average reward of our algorithm with the average reward actually achieved by each expert in the stages when it was played, as opposed to the estimated average reward based on the whole history of play of the opponent. Note that the bound (2) is merely a statement about the average reward of the SEA in comparison to the average reward achieved by each expert; nothing is claimed about the limits themselves. Theorem 5.1 proposes an application of this bound in a case when an additional assumption about the experts' and opponent's strategies allows us to analyze convergence of the average reward for each expert. Another interesting case occurs when one of the experts plays a maximin strategy; in this case, bound (2) ensures that the SEA achieves at least the maximin value of the game. The same holds if one of the experts is a regret-minimizing experts algorithm, which is known to achieve at least the maximin value of the game. The remainder of this section consists of a sketch of the proof of Theorem 3.1.

Sketch of proof: Denote by V the random variable max_e lim inf_{i→∞} M_e(i), and denote by Ē the expert that achieves that maximum (if there is more than one, let Ē be the one with the least index). For any logical proposition L, let δ(L) = 1 if L is true; otherwise δ(L) = 0. The proof of Theorem 3.1 relies on establishing that, for all ϵ > 0 and any expert e,

  Pr( lim_{i→∞} N_e(i) · δ(M_e(i) ≤ V − ϵ) / i = 0 ) = 1.   (3)

In words, if the average reward of an expert falls below V by a non-negligible amount, it must have been followed only a small fraction of the total number of phases. There are three possible situations for any expert e: (a) When lim inf_{i→∞} M_e(i) > V − ϵ, the inequality is satisfied trivially.
(b) When lim sup_{i→∞} M_e(i) < V, there is a phase I such that for all i ≥ I, M_e(i) < M_Ē(i), so that expert e is played only on exploration phases, and a large-deviations argument establishes that (3) holds. (c) The most involved situation occurs when lim inf_{i→∞} M_e(i) ≤ V − ϵ and lim sup_{i→∞} M_e(i) ≥ V. To show that (3) holds in this case, we are going to focus on the trajectory of M_e(i) each time it goes from above V − ϵ/2 to below V − ϵ + δ/2, for some 0 < δ < ϵ. We offer the two following observations:

1. Let I_k be the k-th phase such that M_e(i) ≤ V − ϵ + δ/2, and let I_k^0 be the first phase before I_k such that M_e(i) ≥ V − ϵ/2. Then, between phases I_k^0 and I_k, expert e is selected at least N_e(I_k^0)(ϵ − δ)/(6u) times. Denoting by I_k^j, j = 1, . . . , P_k, the phases when expert e is selected between I_k^0 and I_k, we have

  M_e(I_k^j) ≥ M_e(I_k^{j−1}) · (N_e(I_k^0) + j − 1)(N_e(I_k^0) + j) / [(N_e(I_k^0) + j)(N_e(I_k^0) + j + 1)].

A simple induction argument shows that, in order to have

  M_e(I_k) ≤ V − ϵ/2 ≤ M_e(I_k^0) − (ϵ − δ)/2,

expert e must be selected a number of times P_k ≥ N_e(I_k^0)(ϵ − δ)/(6u).

2. For all large enough k, the phases I_k^j when expert e is selected are exclusively exploration phases. This follows trivially from the fact that, after a certain phase I, we have M_Ē(i) ≥ V − ϵ/2 for all i ≥ I, whereas M_e(i) < V − ϵ/2 for all i between I_k^0 and I_k.

From the first observation, we have

  N_e(I_k)/I_k ≤ (N_e(I_k^0) + P_k)/(I_k − I_k^0) ≤ (1 + 6u/(ϵ − δ)) · P_k/(I_k − I_k^0).

Since expert e is selected only during exploration phases between I_k^0 and I_k, a large-deviations argument allows us to conclude that the ratio of the number of times P_k expert e is selected, to the total number of phases I_k − I_k^0, converges to zero with probability one. We conclude that (3) holds. We now observe that

  M(i) = Σ_e N_e(i)(N_e(i) + 1) M_e(i) / Σ_e N_e(i)(N_e(i) + 1).   (4)

By a simple optimization argument, we can show that

  Σ_e N_e(i)(N_e(i) + 1) ≥ i(i/r + 1).   (5)
Using (3) and (5) to bound (4), we conclude that (2) holds for the subsequence of stages s corresponding to the end of each phase of the SEA. It is easy to show that the average reward M(s) at stages s in the middle of phase i becomes arbitrarily close to the average reward M(i) at the end of that phase, as i goes to infinity, and the theorem follows. □

4 The flexible opponent

In general, it is impossible for an experts algorithm to guarantee, against an unknown opponent, a reward close to what the best available expert would have achieved had it been the only expert. It is easy to construct examples which prove this impossibility.

Example: Repeated Matching Pennies. In the Matching Pennies (MP) game, each of the player and the adversary has to choose either H ("Heads") or T ("Tails"). If the choices match, the player loses 1; otherwise, he wins 1. A possible strategy for the adversary in the repeated MP game is:

Adversary: Fix a positive integer s and a string σ_s ∈ {H, T}^s. In each of the first s stages, play the 50:50 mixed strategy. In each of the stages s + 1, s + 2, . . . , if the sequence of choices of the player during the first s stages coincided with the string σ_s, then play T; otherwise, play the 50:50 mixed strategy.

Suppose each available expert e corresponds to a strategy of the form:

Expert: Fix a string σ_e ∈ {H, T}^s. During the first s stages play according to σ_e. In each of the stages s + 1, s + 2, . . . , play H.

Suppose an expert e∗ with σ_{e∗} = σ_s is available. Then, in order for an experts algorithm to achieve at least the reward of e∗, it needs to follow the string σ_s precisely during the first s stages. Of course, without knowing what σ_s is, the algorithm cannot play it with probability one, nor can it learn anything about it during the play. In view of the repeated MP example, some assumption about the opponent must be made in order for the player to be able to learn how to play against that opponent.
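The impossibility in the MP example is quantitative: an algorithm with no knowledge of σ_s can reproduce it during the first s stages with probability at most 2^{−s}, so no nontrivial guarantee relative to e∗ is possible. A one-line check (the function name is ours):

```python
from fractions import Fraction

def hidden_string_match_probability(s):
    """Probability that s uniformly random H/T choices reproduce a fixed
    hidden string of length s (the best an uninformed algorithm can do
    in the repeated Matching Pennies example)."""
    return Fraction(1, 2) ** s
```

Already for s = 10 the probability is below one in a thousand, and it decays exponentially with s.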
The essence of the difficulty with the above strategy of the opponent is that it is not flexible: the player has only one chance to guess who the best expert is, and thus cannot recover from a mistake. Here, we introduce the assumption of flexibility as a possible remedy to that problem. Under the assumption of flexibility, the SEA achieves an average reward that is asymptotically as high as what the best expert could be expected to achieve.

Definition 4.1 (Flexibility). (i) An opponent playing strategy π is said to be flexible with respect to expert e (e = 1, . . . , r) if there exist constants µ_e, τ > 0.25 and c such that for every stage s_0, every possible history h_{s_0} at stage s_0, and any number of stages s,

  E[ (1/s) Σ_{s′=s_0+1}^{s_0+s} R(a_e(s′), b(s′)) − µ_e : a_e(s′) ∼ σ_e(h_{s′}), b(s′) ∼ π(h_{s′}) ] ≤ c / s^τ.

(ii) Flexibility with respect to a set of experts is defined as flexibility with respect to every member of the set.

In words, the expected average reward during the s stages between stage s_0 and stage s_0 + s converges (as s tends to infinity) to a limit that does not depend on the history of the play prior to stage s_0.

Example 4.1: Finite Automata. In the literature on "bounded rationality", players are often modelled as finite automata. A probabilistic automaton strategy (PAS) is specified by a tuple A = ⟨M, O, A, σ, P⟩, where M = {1, . . . , m} is the finite set of internal states of the automaton, A is the set of possible actions, O is the set of possible outcomes, σ_i(a) is the probability of choosing action a while in state i (i = 1, . . . , m), and P^o = (P^o_{ij}) (1 ≤ i, j ≤ m) is the matrix of state transition probabilities, given an outcome o ∈ O. Thus, at any stage of the game, the automaton picks an action from a probability distribution associated with its current state and transitions into a new state, according to a probability distribution which depends on the outcome of the stage game.
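Example 4.1 is easy to make concrete. Below, a PAS is represented as a per-state action distribution plus outcome-indexed transition matrices (a representation of ours, not the paper's), and Tit-for-Tat appears as the deterministic two-state special case.

```python
import random

class ProbabilisticAutomaton:
    """A probabilistic automaton strategy (PAS) as in Example 4.1:
    sigma[i] is the action distribution in state i, and P[o][i][j] is
    the probability of moving from state i to state j on outcome o."""
    def __init__(self, sigma, P, state=0, rng=None):
        self.sigma, self.P, self.state = sigma, P, state
        self.rng = rng or random.Random()

    def act(self):
        dist = self.sigma[self.state]
        return self.rng.choices(list(dist), weights=list(dist.values()))[0]

    def observe(self, outcome):
        row = self.P[outcome][self.state]
        self.state = self.rng.choices(range(len(row)), weights=row)[0]

# Tit-for-Tat as a deterministic 2-state PAS: state 0 plays C, state 1
# plays D; the next state copies the opponent's last action.
tft = ProbabilisticAutomaton(
    sigma=[{'C': 1.0}, {'D': 1.0}],
    P={'C': [[1, 0], [1, 0]],     # opponent cooperated -> state 0
       'D': [[0, 1], [0, 1]]})    # opponent defected   -> state 1
```

When two such automata play each other, the pair of internal states evolves as the Markov chain described next, which is what makes the flexibility condition checkable.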
If both the opponent and an expert play PASs, then a Markov chain is induced over the set of pairs of the respective internal states. If this Markov chain has a single class of recurrent states, then the flexibility assumption holds. Note that we do not limit the size of the automata; a larger set of internal states implies a slower convergence of the average rewards, but does not affect the asymptotic results for the SEA.

Example 4.2: Bounded dependence on the history. The number of possible histories at stage s grows exponentially with s. Thus, it is reasonable to assume that the choice of action is based not on the exact detail of the history but rather on the empirical distribution of past actions or patterns of actions. If the opponent is believed not to be stationary, then discounting previous observations by recency may be sensible. For instance, if the frequency of play of action j by the opponent is relevant, the player might condition his choice at stage s + 1 on the quantities τ_j = Σ_{s′=1}^{s} β^{s−s′} δ_{j j_{s′}}, where β < 1 and δ is the Kronecker delta. In this case, only actions j_{s′} at stages s′ that are relatively recent have a significant impact on τ_j. Therefore strategies based on τ_j should exhibit behavior similar to that of bounded recall, and lead to flexibility in the same circumstances as the latter.

5 A bound based on expected expert performance

In this section we show that if the opponent is "flexible" with respect to the available experts, then the SEA achieves almost surely an average payoff that is asymptotically as large as what the best expert could achieve against the same opponent.

Theorem 5.1. If an opponent π is flexible with respect to the experts 1, . . . , r, then the average payoff M(s) up to stage s satisfies

  Pr( lim inf_{s→∞} M(s) ≥ max_e µ_e ) = 1.

Theorem 5.1 follows from Lemma 5.1, stated and proven below, and Theorem 3.1.
Flexibility comes into play as a way of ensuring that the value of following any given expert is well-defined, and can eventually be estimated as long as the SEA follows that expert sufficiently many times. In other words, flexibility ensures that there is a best expert to be learned, and that learning can effectively occur because actions taken by other experts, which could affect the behavior of the opponent, are eventually forgotten by the latter. We now present Lemma 5.1, which shows that, under the flexibility assumption, the average reward achieved by each expert is asymptotically almost surely the same as the reward that would have been achieved by the same expert, had he been the only available expert.

Lemma 5.1. If the opponent is flexible with respect to expert e, then with probability one, lim_{i→∞} M_e(i) = µ_e.

Sketch of proof: Let e be any expert. By the Borel-Cantelli lemma, exploration occurs infinitely many times, hence e is followed during infinitely many phases. Let I_j = I_j(e) (j = 1, 2, . . .) be the phase numbers in which e is followed. By Markov's inequality, for every ϵ > 0,

  Pr(|M_e(I_j) − µ_e| > ϵ) ≤ ϵ^{−4} E[(M_e(I_j) − µ_e)^4].

If we could show that

  Σ_{j=1}^{∞} E[(M_e(I_j) − µ_e)^4] < ∞,   (6)

then we could conclude, by the Borel-Cantelli lemma, that with probability one, the inequality |M_e(I_j) − µ_e| > ϵ holds only for finitely many values of j. This implies that, with probability one, lim_{i→∞} M_e(i) = µ_e. It follows that if the opponent is flexible with respect to expert e, then for some ν > 0, as j tends to infinity, E[(M_e(I_j) − µ_e)^4] = O(j^{−1−ν}), which suffices for (6). □

Example 5.1: Repeated Prisoner's Dilemma revisited. Consider playing the repeated PD game against an opponent who plays Tit-for-Tat, and suppose there are only two experts: "Always defect" (AD) and "Always cooperate" (AC). Thus, AC induces cooperation in every stage and yields a payoff higher than AD, which induces defection in every stage of the game except the first one.
It is easy to verify that Tit-for-Tat is flexible with respect to the experts AC and AD. Therefore, Theorem 5.1 holds, and the SEA achieves an average payoff at least as large as that of AC. By contrast, as mentioned in the introduction, in order to minimize regret, the standard experts algorithm must play D in almost every stage of the game, and therefore achieves a lower payoff.

References
[1] Auer, P., Cesa-Bianchi, N., Freund, Y. & Schapire, R.E. (1995) Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proc. 36th Annual IEEE Symp. on Foundations of Computer Science, pp. 322-331. Los Alamitos, CA: IEEE Computer Society Press.
[2] Freund, Y. & Schapire, R.E. (1999) Adaptive game playing using multiplicative weights. Games and Economic Behavior 29:79-103.
[3] Foster, D. & Vohra, R. (1999) Regret and the on-line decision problem. Games and Economic Behavior 29:7-35.
[4] Fudenberg, D. & Levine, D.K. (1997) The Theory of Learning in Games. Cambridge, MA: The MIT Press.
[5] Littlestone, N. & Warmuth, M.K. (1994) The weighted majority algorithm. Information and Computation 108(2):212-261.
2003
Reasoning about Time and Knowledge in Neural-Symbolic Learning Systems Artur S. d'Avila Garcez* and Luis C. Lamb† *Dept. of Computing, City University London, London, EC1V 0HB, UK (aag@soi.city.ac.uk) †Dept. of Computing Theory, PPGC-II-UFRGS, Porto Alegre, RS 91501-970, Brazil (lamb@inf.ufrgs.br) Abstract We show that temporal logic and combinations of temporal logics and modal logics of knowledge can be effectively represented in artificial neural networks. We present a Translation Algorithm from temporal rules to neural networks, and show that the networks compute a fixed-point semantics of the rules. We also apply the translation to the muddy children puzzle, which has been used as a testbed for distributed multi-agent systems. We provide a complete solution to the puzzle with the use of simple neural networks, capable of reasoning about time and of knowledge acquisition through inductive learning. 1 Introduction Hybrid neural-symbolic systems concern the use of problem-specific symbolic knowledge within the neurocomputing paradigm (d'Avila Garcez et al., 2002a). Typically, translation algorithms from a symbolic to a connectionist representation and vice-versa are employed to provide either (i) a neural implementation of a logic, (ii) a logical characterisation of a neural system, or (iii) a hybrid learning system that brings together features from connectionism and symbolic artificial intelligence (Holldobler, 1993). Until recently, neural-symbolic systems were not able to fully represent, reason about and learn expressive languages other than propositional logic and fragments of first-order logic (Cloete & Zurada, 2000). However, in (d'Avila Garcez et al., 2002b; d'Avila Garcez et al., 2002c; d'Avila Garcez et al., 2003), a new approach to knowledge representation and reasoning in neural-symbolic systems, based on neural network ensembles, has been introduced. This new approach shows that modal logics can be effectively represented in artificial neural networks.
In this paper, following the approach introduced in (d'Avila Garcez et al., 2002b; d'Avila Garcez et al., 2002c; d'Avila Garcez et al., 2003), we move one step further and show that temporal logics can be effectively represented in artificial neural networks. (Artur Garcez is partly supported by the Nuffield Foundation. Luis Lamb is partly supported by CNPq. The authors would like to thank the referees for their comments.) This is done by providing a translation algorithm from temporal logic theories to the initial architecture of a neural network. A theorem then shows that the translation is correct by proving that the network computes a fixed-point semantics of its corresponding temporal theory (van Emden & Kowalski, 1976). The result is a new learning system capable of reasoning about knowledge and time. We have validated the Connectionist Temporal Logic (CTL) proposed here by applying it to a distributed time and knowledge representation problem known as the muddy children puzzle (Fagin et al., 1995). CTL provides a combined (multi-modal) connectionist system of knowledge and time, which allows the modelling of evolving situations such as changing environments or possible worlds. Although a number of multi-modal systems - e.g., combining knowledge and time (Halpern & Vardi, 1986; Halpern et al., 2003) and combining beliefs, desires and intentions (Rao & Georgeff, 1998) - have been proposed for distributed knowledge representation, little attention has been paid to the integration of a learning component for knowledge acquisition. This work contributes to bridging this gap by allowing the knowledge representation to be integrated in a neural learning system. Purely from the point of view of knowledge representation in neural-symbolic systems, this work contributes to the long-term aim of representing expressive and computationally well-behaved symbolic formalisms in neural networks. The remainder of this paper is organised as follows.
We start, in Section 2, by describing the muddy children puzzle, and use it to exemplify the main features of CTL. In Section 3, we formally introduce CTL's Translation Algorithm, which maps knowledge and time theories into artificial neural networks, and prove that the translation is correct. In Section 4, we conclude and discuss directions for future work. 2 Connectionist Reasoning about Time and Knowledge Temporal logic and its combination with other modalities such as knowledge and belief operators have been the subject of intense investigation (Fagin et al., 1995). In this section, we use the muddy children puzzle, a testbed for distributed knowledge representation formalisms, to exemplify how knowledge and time can be expressed in a connectionist setting. We start by stating the puzzle (Fagin et al., 1995; Huth & Ryan, 2000). There is a number n of (truthful and intelligent) children playing in a garden. A certain number of children k (k ≤ n) have mud on their faces. Each child can see whether the others are muddy, but not themselves. Now, consider the following situation: a caretaker announces that at least one child is muddy (k ≥ 1) and asks: does any of you know whether you have mud on your own face? To help understand the puzzle, let us consider the cases in which k = 1, k = 2 and k = 3. If k = 1 (only one child is muddy), the muddy child answers yes at the first instance, since she cannot see any other muddy child. All the other children answer no at the first instance. If k = 2, suppose children 1 and 2 are muddy. At the first instance, all children can only answer no. This allows 1 to reason as follows: if 2 had said yes the first time, she would have been the only muddy child. Since 2 said no, she must be seeing someone else muddy; and since I cannot see anyone else muddy apart from 2, I myself must be muddy! Child 2 can reason analogously, and also answers yes the second time round. If k = 3, suppose children 1, 2 and 3 are muddy.
Every child can only answer no the first two times round. Again, this allows 1 to reason as follows: if 2 or 3 had said yes the second time, they would have been the only two muddy children. Thus, there must be a third person with mud. Since I can only see 2 and 3 with mud, this third person must be me! Children 2 and 3 can reason analogously to conclude as well that yes, they are muddy. The above cases clearly illustrate the need to distinguish between an agent's individual knowledge and common knowledge about the world in a particular situation. For example, when k = 2, after everybody says no at the first round, it becomes common knowledge that at least two children are muddy. Similarly, when k = 3, after everybody says no twice, it becomes common knowledge that at least three children are muddy, and so on. In other words, when it is common knowledge that there are at least k − 1 muddy children, then, after the announcement that nobody knows whether they are muddy or not, it becomes common knowledge that there are at least k muddy children, for if there were only k − 1 muddy children, all of them would know that they had mud on their faces. In what follows, a modality K_j is used to represent the knowledge of an agent j. In addition, the term p_i is used to denote that proposition p is true for agent i. For example, K_j p_i means that agent j knows that p is true for agent i. We use p_i to say that child i is muddy, and q_k to say that at least k children are muddy (k ≤ n). Let us consider the case in which three children are playing in the garden (n = 3). Rule r₁¹ below states that when child 1 knows that at least one child is muddy and that neither child 2 nor child 3 is muddy, then child 1 knows that she herself is muddy. Similarly, rules r₂¹ and r₃¹ state that if child 1 knows that there are at least two muddy children and she knows that one of the other children is not muddy, then she must also be able to know that she herself is muddy, and so on.
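The round-by-round reasoning above can be simulated directly. The sketch below is illustrative (not from the paper): a muddy child who sees m muddy faces, and has heard m rounds of "no", answers yes at round m + 1, once it is common knowledge that at least m + 1 children are muddy.

```python
def muddy_children(muddy):
    """muddy[i] is True if child i has mud on her face.

    Returns, for each child, the (1-based) round at which she first
    answers yes, or None (in this simple model, clean children never
    answer within the first n rounds).  A muddy child who sees m muddy
    faces answers yes at round m + 1.
    """
    n = len(muddy)
    answered = [None] * n
    for r in range(1, n + 1):
        for i in range(n):
            sees = sum(muddy[j] for j in range(n) if j != i)
            if answered[i] is None and muddy[i] and sees == r - 1:
                answered[i] = r
    return answered

# With k muddy children, all muddy children answer yes at round k:
rounds = muddy_children([True, True, False])   # k = 2, n = 3
```

Running this for k = 1, 2, 3 reproduces the case analysis in the text: the muddy children all say yes at round k.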
The rules for children 2 and 3 are interpreted analogously.

r₁¹: K₁q₁ ∧ K₁¬p₂ ∧ K₁¬p₃ → K₁p₁
r₂¹: K₁q₂ ∧ K₁¬p₃ → K₁p₁
r₃¹: K₁q₂ ∧ K₁¬p₂ → K₁p₁
r₄¹: K₁q₃ → K₁p₁

Table 1: Snapshot rules for agent (child) 1

Each set of snapshot rules r_m^l (1 ≤ l ≤ n, m ∈ ℕ⁺) can be implemented in a single hidden layer neural network N_l as follows. For each rule, a hidden neuron is created. Each rule antecedent (e.g., K₁q₁ in r₁¹) is associated with an input neuron. The rule consequent (K₁p₁) is associated with an output neuron. Finally, the input neurons are connected to the output neuron through the hidden neuron associated with the rule (r₁¹). In addition, weights and biases need to be set up to implement the meaning of the rule. When a neuron is activated (i.e., has activation above a given threshold), we say that its associated concept (e.g., K₁q₁) is true. Conversely, when a neuron is not activated, we say that its associated concept is false. As a result, each input vector of N_l can be associated with an interpretation (an assignment of truth-values) to the set of rules. Weights and biases must be such that the output neuron is activated if and only if the interpretation associated with the input vector satisfies the rule antecedent. In the case of rule r₁¹, the output neuron associated with K₁p₁ must be activated (true) if the input neuron associated with K₁q₁, the input neuron associated with K₁¬p₂, and the input neuron associated with K₁¬p₃ are all activated (true). The Connectionist Inductive Learning and Logic Programming (C-ILP) System (d'Avila Garcez et al., 2002a; d'Avila Garcez & Zaverucha, 1999) makes use of the above kind of translation. C-ILP is a massively parallel computational model based on an artificial neural network that integrates inductive learning from examples and background knowledge with deductive learning through logic programming.
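The rule-to-neuron translation can be sketched numerically. Below is an illustrative single-rule example (not the paper's code): one hidden neuron implements the conjunction of r₁¹'s three antecedents, using the bipolar activation h(x) = 2/(1 + e^{−βx}) − 1 and C-ILP-style thresholds (θ = ((1 + A_min)(k − 1)/2)·W for the hidden neuron). The particular values β = 1, A_min = 0.6 and W = 7 are assumptions, chosen to satisfy the weight constraints for a single rule with k = 3 antecedents.

```python
import math

BETA, A_MIN, W = 1.0, 0.6, 7.0   # assumed values satisfying the C-ILP constraints

def h(x):
    """Bipolar semi-linear activation, h(x) = 2/(1 + e^(-beta*x)) - 1."""
    return 2.0 / (1.0 + math.exp(-BETA * x)) - 1.0

def rule_network(inputs):
    """One hidden neuron encoding K1q1 AND K1~p2 AND K1~p3 -> K1p1.

    `inputs` holds bipolar truth values in {-1, 1} for the antecedents.
    Returns the activation of the output neuron associated with K1p1.
    """
    k, mu = len(inputs), 1                       # antecedents; rules with this consequent
    theta_hidden = (1 + A_MIN) * (k - 1) / 2 * W  # hidden (AND) threshold
    theta_out = (1 + A_MIN) * (1 - mu) / 2 * W    # output (OR) threshold
    hidden = h(sum(W * x for x in inputs) - theta_hidden)
    return h(W * hidden - theta_out)

def active(v):
    return v > A_MIN   # a concept is true when its neuron exceeds A_min

all_true = rule_network([1, 1, 1])    # all antecedents hold
one_false = rule_network([1, 1, -1])  # K1~p3 fails
```

The output neuron exceeds A_min exactly when all three antecedents hold, which is the "output active iff the antecedent is satisfied" requirement stated above.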
Following (Holldobler & Kalinke, 1994) (see also (Holldobler et al., 1999)), a Translation Algorithm maps any logic program P into a single hidden layer neural network N such that N computes the least fixed point of P. (Notice that the children's reasoning process can only start once it is common knowledge that at least one child is muddy, as announced by the caretaker.) This provides a massively parallel model for computing the stable model semantics of P (Lloyd, 1987). In addition, N can be trained with examples using, e.g., Backpropagation, using P as background knowledge (Pazzani & Kibler, 1992). The knowledge acquired by training can then be extracted (d'Avila Garcez et al., 2001), closing the learning cycle (as in (Towell & Shavlik, 1994)). For each agent (child), a C-ILP network can be created. Each network can be seen as representing a (learnable) possible world containing information about the knowledge held by an agent in a distributed system. Figure 1 shows the implementation of rules r₁¹ to r₄¹. In addition, it contains output neurons p₁ and Kq₁, Kq₂ and Kq₃, all represented as facts. This is highlighted in grey in Figure 1. Neurons that appear on both the input and output layers of a C-ILP network (e.g., Kq₁) are recurrently connected using weight one, as depicted in Figure 1. This allows the network to iterate the computation of truth-values when chains occur in the set of rules. For example, if a → b and b → c are rules of the theory, neuron b will appear on both the input and output layers of the network, and if a is activated then c will be activated through the activation of b. Figure 1: The implementation of rules {r₁¹, ..., r₄¹}. If child 1 is muddy, output neuron p₁ must be activated. Since children 2 and 3 can see child 1, they will know that child 1 is muddy. This can be represented as p₁ → K₂p₁ and p₁ → K₃p₁, and analogously for p₂ and p₃.
This means that the activation of output neurons K₁¬p₂ and K₁¬p₃ in Figure 1 depends on the activation of neurons that are not in this network (N₁), but in N₂ and N₃. We need, therefore, to model how the networks in the ensemble interact with each other. Figure 2 illustrates the interaction between three C-ILP networks in the muddy children puzzle. The arrows connecting the networks implement the fact that when a child is muddy, the other children can see her. So if, e.g., neuron p₁ is activated in N₁, neuron Kp₁ must be activated in N₂ and N₃. For the sake of clarity, the snapshot rules r_m^l shown in Figure 1 are omitted here, and this is indicated in Figure 2 by neurons highlighted in black. (Note that p₁ means 'child 1 is muddy', while Kp₁ means 'child 1 knows she is muddy'. A fact is normally represented as a rule with no antecedents; C-ILP represents facts by not connecting the rule's hidden neuron to any input neuron - in the case of fully-connected networks, weights with initial value zero are used.) In addition, only positive information about the problem is shown in Figure 2. Negative information such as ¬p₁, K¬p₁, K¬p₂ and K¬p₃ would be implemented analogously. Figure 2: Interaction between agents in the muddy children puzzle. Figure 2 illustrates well the idea behind this paper. By combining a number of simple C-ILP networks, we are able to model individual and common knowledge. Each network represents a possible world or an agent's current set of beliefs (d'Avila Garcez et al., 2002b). If we allow a number of ensembles like the one of Figure 2 to be combined, we can represent the evolution in time of an agent's set of beliefs. This is exactly what is required for a complete solution of the muddy children puzzle, as discussed below.
As we have seen, the solution to the muddy children puzzle illustrated in Figures 1 and 2 considers only snapshots of knowledge evolution along time rounds, without the addition of a time variable (Huth & Ryan, 2000). A complete solution, however, requires the addition of a temporal variable to allow reasoning about the knowledge acquired after each time round. The snapshot solution of Figures 1 and 2 should then be seen as representing the knowledge held by the agents at an arbitrary time t. The knowledge held by the agents at time t + 1 would then be represented by another set of C-ILP networks, appropriately connected to the original set of networks. Let us consider again the case where k = 3. There are alternative ways of representing that, but one possible representation for child 1 would be as follows:

t₁: ¬K₁p₁ ∧ ¬K₂p₂ ∧ ¬K₃p₃ → ○K₁q₂
t₂: ¬K₁p₁ ∧ ¬K₂p₂ ∧ ¬K₃p₃ → ○K₁q₃

Table 2: Temporal rules for agent (child) 1

Each temporal rule is labelled by a time point tᵢ in which the rule holds. In addition, if a rule labelled tᵢ makes use of the next time temporal operator ○, then whatever ○ qualifies refers to the next time tᵢ₊₁ in a linear time flow. As a result, the first temporal rule above states that if, at t₁, no child knows whether she is muddy or not, then, at t₂, child 1 will know that at least two children are muddy. Similarly, the second rule states that, at t₂, if still no child knows whether she is muddy or not, then, at t₃, child 1 will know that at least three children are muddy. As before, analogous temporal rules exist for agents (children) 2 and 3. The temporal rules, together with the snapshot rules, provide a complete solution to the puzzle. This is depicted in Figure 3 and discussed below. In Figure 3, networks are replicated to represent an agent's knowledge evolution in time.
A network represents an agent's knowledge today (or at t₁), a network represents the same agent's knowledge tomorrow (t₂), and the appropriate connections between networks model the relations between today and tomorrow according to ○. (It is worth noting that each network remains a simple, single hidden layer neural network that can be trained with the use of standard Backpropagation or another off-the-shelf learning algorithm.) Figure 3: Knowledge evolution of agent (child) 1 from time t₁ to time t₂ (the figure shows, at each time point, the connections to agents 2 and 3 via Kp₁, and from agents 2 and 3 via p₂ and p₃). In the case of t₁: ¬K₁p₁ ∧ ¬K₂p₂ ∧ ¬K₃p₃ → ○K₁q₂, for example, output neuron K₁p₁ of the network that represents agent 1 at t₁, output neuron K₂p₂ of the network that represents agent 2 at t₁, and output neuron K₃p₃ of the network that represents agent 3 at t₁ need to be connected to output neuron K₁q₂ of the network that represents agent 1 at t₂ (the next time), such that K₁q₂ is activated if K₁p₁, K₂p₂ and K₃p₃ are not activated. In conclusion, in order to represent time, in addition to knowledge, we need to use a two-dimensional C-ILP ensemble. In one dimension we encode the knowledge interaction between agents at a given time point, and in the other dimension we encode the agents' knowledge evolution through time. 3 Temporal Translation Algorithm In this section, we present an algorithm to translate temporal rules of the form t : ○K_a L₁, ..., ○K_b L_k → ○K_c L_{k+1}, where a, b, c, ... are agents and 1 ≤ t ≤ n, into (two-dimensional) C-ILP network ensembles. Let P represent a number q of ground temporal rules.
In such rules, we call Lᵢ (1 ≤ i ≤ k + 1) a literal, and call K_j Lᵢ (1 ≤ j ≤ m) an annotated literal. Each Lᵢ can be either a positive literal (p) or a negative literal (¬p). Similarly, K_j Lᵢ can be preceded by ¬. We use A_min to denote the minimum activation for a neuron to be considered active (true), A_min ∈ (0, 1). We number the (annotated) literals of P from 1 to v such that, when a C-ILP network N is created, the input and output layers of N are vectors of length v, where the i-th neuron represents the i-th (annotated) literal. (We use '(annotated) literals' to refer to any literal, annotated or not.) For convenience, we use a bipolar semi-linear activation function h(x) = 2/(1 + e^{−βx}) − 1, and inputs in {−1, 1}. Let k_l denote the number of (annotated) literals in the body of rule r_l; µ_l, the number of rules in P with the same (annotated) literal as consequent, for each rule r_l; MAX_{r_l}(k_l, µ_l), the greater element between k_l and µ_l for rule r_l; and MAX_P(k₁, ..., k_q, µ₁, ..., µ_q), the greatest element among all the k_l's and µ_l's of P. (There may be n + 1 time points since, e.g., t₁ : K_j α, K_k β → ○K_j γ means that if agent j knows α and agent k knows β at time t₁, then agent j knows γ at time t₂. Variables such as tᵢ are instantiated into the language's ground terms (t₁, t₂, t₃, ...).) We also use k⃗ as a shorthand for (k₁, ..., k_q), and µ⃗ as a shorthand for (µ₁, ..., µ_q). For example, for P = {r₁ : b ∧ c ∧ ¬d → a, r₂ : e ∧ f → a, r₃ : → b}, we have k₁ = 3, k₂ = 2, k₃ = 0, µ₁ = 2, µ₂ = 2, µ₃ = 1, MAX_{r₁}(k₁, µ₁) = 3, MAX_{r₂}(k₂, µ₂) = 2, MAX_{r₃}(k₃, µ₃) = 1 and MAX_P(k⃗, µ⃗) = 3. CTL Translation Algorithm: 1. For each time point t in P do: For each agent j in P do: Create a C-ILP neural network N_{j,t}. 2. Calculate W such that W ≥ (2/β) · [ln(1 + A_min) − ln(1 − A_min)] / [MAX_P(k⃗, µ⃗) · (A_min − 1) + A_min + 1]. 3. For each rule in P of the form t : ○K₁L₁, ...
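As a sanity check on these definitions, the bookkeeping for the example program P above can be computed directly. This is an illustrative sketch, not code from the paper; β = 1 and A_min = 0.8 are assumed values chosen to keep the denominator of the weight bound positive (which requires A_min > (MAX_P − 1)/(MAX_P + 1)).

```python
import math

# P = {r1: b & c & ~d -> a,  r2: e & f -> a,  r3: -> b}
rules = [(["b", "c", "~d"], "a"), (["e", "f"], "a"), ([], "b")]

k = [len(body) for body, _ in rules]                            # literals per rule body
mu = [sum(1 for _, c in rules if c == head) for _, head in rules]  # rules per consequent
max_p = max(k + mu)

beta, a_min = 1.0, 0.8   # assumed; must satisfy a_min > (max_p - 1)/(max_p + 1)
w_bound = (2.0 / beta) * (math.log(1 + a_min) - math.log(1 - a_min)) / (
    max_p * (a_min - 1) + a_min + 1)
```

With these values, k = (3, 2, 0), µ = (2, 2, 1) and MAX_P = 3, matching the worked example; any W at or above `w_bound` satisfies step 2 of the algorithm.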
, ○K_{m−1}L_k → ○K_m L_{k+1}, do: (a) Add a hidden neuron L^○ to N_{m,t+1} and set h(x) as the activation function of L^○; (b) Connect each neuron ○K_j Lᵢ (1 ≤ i ≤ k) in N_{j,t} to L^○. If Lᵢ is a positive (annotated) literal, then set the connection weight to W; otherwise, set the connection weight to −W. Set the threshold θ^○ of L^○ to θ^○ = ((1 + A_min)(k_l − 1)/2) · W; (c) Connect L^○ to K_m L_{k+1} in N_{m,t+1} and set the connection weight to W. Set the threshold θ_{k+1} of K_m L_{k+1} to θ_{k+1} = ((1 + A_min)(1 − µ_l)/2) · W; (d) Add a hidden neuron L^● to N_{m,t} and set h(x) as the activation function of L^●; (e) Connect neuron K_m L_{k+1} in N_{m,t+1} to L^● and set the connection weight to W. Set the threshold θ^● of L^● to zero; (f) Connect L^● to ○K_m L_{k+1} in N_{m,t} and set the connection weight to W. Set the threshold θ_{k+1} of ○K_m L_{k+1} to θ_{k+1} = ((1 + A_min)(1 − µ_l)/2) · W. 4. For each rule in P of the form t : ○K₁L₁, ..., ○K_{m−1}L_k → K_m L_{k+1}, do: (a) Add a hidden neuron L^○ to N_{m,t} and set h(x) as the activation function of L^○; (b) Connect each neuron ○K_j Lᵢ (1 ≤ i ≤ k) in N_{j,t} to L^○. If Lᵢ is a positive (annotated) literal, then set the connection weight to W; otherwise, set the connection weight to −W. Set the threshold θ^○ of L^○ to θ^○ = ((1 + A_min)(k_l − 1)/2) · W; (c) Connect L^○ to K_m L_{k+1} in N_{m,t} and set the connection weight to W. Set the threshold θ_{k+1} of K_m L_{k+1} to θ_{k+1} = ((1 + A_min)(1 − µ_l)/2) · W. 5. If N ought to be fully-connected, set all other connections to zero. In the above algorithm, it is worth noting that, whenever a rule consequent is preceded by ○, a forward connection from t to t + 1 and a feedback connection from t + 1 to t need to be added to the ensemble. For example, if t : a → ○b is a rule of P, then not only must the activation of neuron a at t activate neuron b at t + 1, but the activation of neuron b at t + 1 must also activate neuron ○b at t. This is implemented in steps 3(d) to 3(f) of the algorithm.
The remainder of the algorithm is concerned with the implementation of snapshot rules (as in Figure 1). (Note that ○ is not required to precede every rule antecedent. In the network, neurons are labelled as ○K₁L₁ or K₁L₁ to differentiate the two concepts.) The values of W and θ come from C-ILP's Translation Algorithm (d'Avila Garcez & Zaverucha, 1999), and are chosen so that the behaviour of the network matches that of the temporal rules, as the following theorem shows. Theorem 1 (Correctness of the Translation Algorithm) For each set of ground temporal rules P, there exists a neural network ensemble N such that N computes the fixed-point operator T_P of P. Proof (sketch). This follows directly from the proof of the analogous theorem for single C-ILP networks presented in (d'Avila Garcez & Zaverucha, 1999). This is so because C-ILP's definition of the W and θ values makes the hidden neurons L^○ and L^● behave like AND gates, while the output neurons behave like OR gates. □ 4 Conclusions In his seminal paper (Valiant, 1984), Valiant argues for the need of rich logic-based knowledge representation mechanisms within learning systems. In this paper, we have addressed such a need, while complying with important principles of connectionism such as massive parallelism. In particular, a very important feature of the system presented here (CTL) is the temporal dimension that can be combined with an epistemic dimension. This paper provides the first account of how to integrate such dimensions in a neural-symbolic learning system. The CTL framework opens up several interesting research avenues in the domain of neural-symbolic integration, allowing for the representation and learning of expressive formalisms. In this paper, we have illustrated this by providing a full solution to the muddy children puzzle, where agents reason about their knowledge at different time points. In the near future, we plan to also apply the system to a large, real-world case study.
References
Cloete, I., & Zurada, J. M. (Eds.). (2000). Knowledge-based neurocomputing. The MIT Press.
d'Avila Garcez, A. S., Broda, K., & Gabbay, D. M. (2001). Symbolic knowledge extraction from trained neural networks: A sound approach. Artificial Intelligence, 125, 155-207.
d'Avila Garcez, A. S., Broda, K., & Gabbay, D. M. (2002a). Neural-symbolic learning systems: Foundations and applications. Perspectives in Neural Computing. Springer-Verlag.
d'Avila Garcez, A. S., Lamb, L. C., Broda, K., & Gabbay, D. M. (2003). Distributed knowledge representation in neural-symbolic learning systems: a case study. Accepted for Proceedings of the 16th International FLAIRS Conference, St. Augustine, Florida.
d'Avila Garcez, A. S., Lamb, L. C., & Gabbay, D. M. (2002b). A connectionist inductive learning system for modal logic programming (Technical Report 2002/6). Department of Computing, Imperial College, London.
d'Avila Garcez, A. S., Lamb, L. C., & Gabbay, D. M. (2002c). A connectionist inductive learning system for modal logic programming. Proceedings of the IEEE International Conference on Neural Information Processing, ICONIP'02 (pp. 1992-1997), Singapore.
d'Avila Garcez, A. S., & Zaverucha, G. (1999). The connectionist inductive learning and logic programming system. Applied Intelligence Journal, Special Issue on Neural Networks and Structured Knowledge, 11, 59-77.
Fagin, R., Halpern, J., Moses, Y., & Vardi, M. (1995). Reasoning about knowledge. MIT Press.
Halpern, J. Y., van der Meyden, R., & Vardi, M. Y. (2003). Complete axiomatizations for reasoning about knowledge and time. SIAM Journal on Computing, to appear.
Halpern, J. Y., & Vardi, M. (1986). The complexity of reasoning about knowledge and time I: lower bounds. Journal of Computer and System Sciences, 38, 195-237.
Holldobler, S. (1993). Automated inferencing and connectionist models. Postdoctoral Thesis, Intellektik, Informatik, TH Darmstadt.
Holldobler, S., & Kalinke, Y. (1994). Toward a new massively parallel computational model for logic programming. Proceedings of the Workshop on Combining Symbolic and Connectionist Processing, ECAI 94 (pp. 68-77).
Holldobler, S., Kalinke, Y., & Storr, H. P. (1999). Approximating the semantics of logic programs by recurrent neural networks. Applied Intelligence Journal, Special Issue on Neural Networks and Structured Knowledge, 11, 45-58.
Huth, M. R. A., & Ryan, M. D. (2000). Logic in computer science: Modelling and reasoning about systems. Cambridge University Press.
Lloyd, J. W. (1987). Foundations of logic programming. Springer-Verlag.
Pazzani, M., & Kibler, D. (1992). The utility of knowledge in inductive learning. Machine Learning, 9, 57-94.
Rao, A. S., & Georgeff, M. P. (1998). Decision procedures for BDI logics. Journal of Logic and Computation, 8, 293-343.
Towell, G. G., & Shavlik, J. W. (1994). Knowledge-based artificial neural networks. Artificial Intelligence, 70, 119-165.
Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27, 1134-1142.
van Emden, M. H., & Kowalski, R. A. (1976). The semantics of predicate logic as a programming language. Journal of the ACM, 23, 733-742.
Policy search by dynamic programming J. Andrew Bagnell Carnegie Mellon University Pittsburgh, PA 15213 Sham Kakade University of Pennsylvania Philadelphia, PA 19104 Andrew Y. Ng Stanford University Stanford, CA 94305 Jeff Schneider Carnegie Mellon University Pittsburgh, PA 15213 Abstract We consider the policy search approach to reinforcement learning. We show that if a “baseline distribution” is given (indicating roughly how often we expect a good policy to visit each state), then we can derive a policy search algorithm that terminates in a finite number of steps, and for which we can provide non-trivial performance guarantees. We also demonstrate this algorithm on several grid-world POMDPs, a planar biped walking robot, and a double-pole balancing problem. 1 Introduction Policy search approaches to reinforcement learning represent a promising method for solving POMDPs and large MDPs. In the policy search setting, we assume that we are given some class Π of policies mapping from the states to the actions, and wish to find a good policy π ∈Π. A common problem with policy search is that the search through Π can be difficult and computationally expensive, and is thus typically based on local search heuristics that do not come with any performance guarantees. In this paper, we show that if we give the learning agent a “base distribution” on states (specifically, one that indicates how often we expect it to be in each state; cf. [5, 4]), then we can derive an efficient policy search algorithm that terminates after a polynomial number of steps. Our algorithm outputs a non-stationary policy, and each step in the algorithm requires only a minimization that can be performed or approximated via a call to a standard supervised learning algorithm. We also provide non-trivial guarantees on the quality of the policies found, and demonstrate the algorithm on several problems. 
2 Preliminaries We consider an MDP with state space S; initial state s₀ ∈ S; action space A; state transition probabilities {P_{sa}(·)} (here, P_{sa} is the next-state distribution on taking action a in state s); and reward function R : S → ℝ, which we assume to be bounded in the interval [0, 1]. In the setting in which the goal is to optimize the sum of discounted rewards over an infinite horizon, it is well known that an optimal policy which is both Markov and stationary (i.e., one where the action taken does not depend on the current time) always exists. For this reason, learning approaches to infinite-horizon discounted MDPs have typically focused on searching for stationary policies (e.g., [8, 5, 9]). In this work, we consider policy search in the space of non-stationary policies, and show how, with a base distribution, this allows us to derive an efficient algorithm. We consider a setting in which the goal is to maximize the sum of undiscounted rewards over a T-step horizon: (1/T) E[R(s₀) + R(s₁) + ... + R(s_{T−1})]. Clearly, by choosing T sufficiently large, a finite-horizon problem can also be used to approximate arbitrarily well an infinite-horizon discounted problem (e.g., [6]). Given a non-stationary policy (π_t, π_{t+1}, ..., π_{T−1}), where each π_t : S → A is a (stationary) policy, we define the value V_{π_t,...,π_{T−1}}(s) ≡ (1/T) E[R(s_t) + R(s_{t+1}) + ... + R(s_{T−1}) | s_t = s; (π_t, ..., π_{T−1})] as the expected (normalized) sum of rewards attained by starting at state s with the "clock" at time t, taking one action according to π_t, taking the next action according to π_{t+1}, and so on. Note that V_{π_t,...,π_{T−1}}(s) ≡ (1/T) R(s) + E_{s′∼P_{sπ_t(s)}}[V_{π_{t+1},...,π_{T−1}}(s′)], where the "s′ ∼ P_{sπ_t(s)}" subscript indicates that the expectation is with respect to s′ drawn from the state transition distribution P_{sπ_t(s)}.
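The recursion V_{π_t,…,π_{T−1}}(s) = (1/T) R(s) + E_{s′}[V_{π_{t+1},…,π_{T−1}}(s′)] can be evaluated exactly by a backward sweep over t. The tiny two-state MDP below is an invented example for illustration, not one from the paper.

```python
# Exact policy evaluation by backward recursion:
# V_t(s) = (1/T) R(s) + sum_{s'} P[s][a][s'] * V_{t+1}(s'),  where a = pi_t(s).

R = [0.0, 1.0]                                # reward per state
# P[s][a] = next-state distribution after taking action a in state s
P = [
    [[1.0, 0.0], [0.2, 0.8]],                 # from state 0
    [[1.0, 0.0], [0.2, 0.8]],                 # from state 1
]

def evaluate(pi, T):
    """pi[t][s] gives the action at time t in state s; returns V_0 over states."""
    V = [0.0, 0.0]                            # V_T = 0
    for t in reversed(range(T)):
        V = [R[s] / T + sum(p * v for p, v in zip(P[s][pi[t][s]], V))
             for s in range(2)]
    return V

V0 = evaluate(pi=[[1, 1], [1, 1]], T=2)       # non-stationary policy: always action 1
```

Here V_1(s) = R(s)/2 and V_0(s) = R(s)/2 + 0.8·V_1(1), so V_0 = (0.4, 0.9), which the sweep reproduces.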
In our policy search setting, we consider a restricted class of deterministic, stationary policies Π, where each π ∈Π is a map π : S 7→A, and a corresponding class of non-stationary policies ΠT = {(π0, π1, . . . , πT −1) | for all t, πt ∈Π}. In the partially observed, POMDP setting, we may restrict Π to contain policies that depend only on the observable aspects of the state, in which case we obtain a class of memoryless/reactive policies. Our goal is to find a non-stationary policy (π0, π1 . . . , πT −1) ∈ΠT which performs well under the performance measure Vπ0,π1...,πT −1(s0), which we abbreviate as Vπ(s0) when there is no risk of confusion. 3 The Policy Search Algorithm Following [5, 4], we assume that we are given a sequence of base distributions µ0, µ1, . . . , µT −1 over the states. Informally, we think of µt as indicating to the algorithm approximately how often we think a good policy visits each state at time t. Our algorithm (also given in [4]), which we call Policy Search by Dynamic Programming (PSDP) is in the spirit of the traditional dynamic programming approach to solving MDPs where values are “backed up.” In PSDP, it is the policy which is backed up. The algorithm begins by finding πT −1, then πT −2, . . . down to π0. Each policy πt is chosen from the stationary policy class Π. More formally, the algorithm is as follows: Algorithm 1 (PSDP) Given T, µt, and Π: for t = T −1, T −2, . . . , 0 Set πt = arg maxπ′∈ΠEs∼µt[Vπ′,πt+1...,πT −1(s)] In other words, we choose πt from Π so as to maximize the expected sum of future rewards for executing actions according to the policy sequence (πt, πt+1, . . . , πT −1) when starting from a random initial state s drawn from the baseline distribution µt. Since µ0, . . . 
, µ_{T−1} provides the distribution over the state space that the algorithm is optimizing with respect to, we might hope that if a good policy tends to visit the state space in a manner comparable to this base distribution, then PSDP will return a good policy. The following theorem formalizes this intuition. The theorem also allows for the situation where the maximization step in the algorithm (the arg max_{π′∈Π}) can be done only approximately. We later give specific examples showing settings in which this maximization can (approximately or exactly) be done efficiently. The following definitions will be useful. For a non-stationary policy π = (π₀, ..., π_{T−1}), define the future state distribution µ_{π,t}(s) = Pr(s_t = s | s₀, π); i.e., µ_{π,t}(s) is the probability that we will be in state s at time t if picking actions according to π and starting from state s₀. Also, given two T-step sequences of distributions over states µ = (µ₀, ..., µ_{T−1}) and µ′ = (µ′₀, ..., µ′_{T−1}), define the average variational distance between them to be d_var(µ, µ′) ≡ (1/T) ∑_{t=0}^{T−1} ∑_{s∈S} |µ_t(s) − µ′_t(s)|. Hence, if π_ref is some policy, then d_var(µ, µ_{π_ref}) represents how much the base distribution µ differs from the future state distribution of the policy π_ref. Theorem 1 (Performance Guarantee) Let π = (π₀, ..., π_{T−1}) be a non-stationary policy returned by an ε-approximate version of PSDP in which, on each step, the policy π_t found comes within ε of maximizing the value, i.e., E_{s∼µ_t}[V_{π_t,π_{t+1},...,π_{T−1}}(s)] ≥ max_{π′∈Π} E_{s∼µ_t}[V_{π′,π_{t+1},...,π_{T−1}}(s)] − ε. (1) Then for all π_ref ∈ Π_T we have that V_π(s₀) ≥ V_{π_ref}(s₀) − Tε − T·d_var(µ, µ_{π_ref}). Proof. This proof may also be found in [4], but for the sake of completeness, we also provide it here. Let P_t(s) = Pr(s_t = s | s₀, π_ref), π_ref = (π_{ref,0}, ..., π_{ref,T−1}) ∈ Π_T, and π = (π₀, ..., π_{T−1}) be the output of ε-PSDP.
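PSDP itself is only a few lines when Π and S are small enough to enumerate. The sketch below is an illustrative implementation on an invented two-state MDP (not from the paper), with Π taken to be all deterministic maps S → A and uniform base distributions µ_t.

```python
from itertools import product

# Invented toy MDP: 2 states, 2 actions.
R = [0.0, 1.0]
P = [[[0.9, 0.1], [0.1, 0.9]],      # P[s][a] = next-state distribution
     [[0.5, 0.5], [0.0, 1.0]]]
S, A, T = range(2), range(2), 3
PI = list(product(A, repeat=len(S)))            # Pi: all maps S -> A

def value(policy_tail, s, t):
    """V_{pi_t,...,pi_{T-1}}(s): expected normalized reward from (s, t)."""
    if t == T:
        return 0.0
    a = policy_tail[0][s]
    return R[s] / T + sum(p * value(policy_tail[1:], s2, t + 1)
                          for s2, p in enumerate(P[s][a]))

mu = [[0.5, 0.5]] * T                           # uniform base distributions
tail = []                                       # grows into (pi_t, ..., pi_{T-1})
for t in reversed(range(T)):                    # PSDP: back up the *policy*
    best = max(PI, key=lambda pi: sum(mu[t][s] * value((pi,) + tuple(tail), s, t)
                                      for s in S))
    tail = [best] + tail
v_psdp = value(tuple(tail), 0, 0)               # value of the PSDP policy from s0 = 0
```

Because Π here contains every deterministic map and each µ_t puts mass on every state, the per-state maximizations decouple and PSDP recovers an optimal non-stationary policy; with a restricted Π, the guarantee of Theorem 1 is what remains.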
We have

  V_{π_ref} − V_π
    = (1/T) Σ_{t=0}^{T−1} E_{s_t∼P_t}[R(s_t)] − V_{π_0,...}(s_0)
    = Σ_{t=0}^{T−1} E_{s_t∼P_t}[(1/T) R(s_t) + V_{π_t,...}(s_t) − V_{π_t,...}(s_t)] − V_{π_0,...}(s_0)
    = Σ_{t=0}^{T−1} E_{s_t∼P_t, s_{t+1}∼P_{s_t π_{ref,t}(s_t)}}[(1/T) R(s_t) + V_{π_{t+1},...}(s_{t+1}) − V_{π_t,...}(s_t)]
    = Σ_{t=0}^{T−1} E_{s_t∼P_t}[V_{π_{ref,t},π_{t+1},...,π_{T−1}}(s_t) − V_{π_t,π_{t+1},...,π_{T−1}}(s_t)]

It is well known that for any function f bounded in absolute value by B, it holds true that |E_{s∼µ_1}[f(s)] − E_{s∼µ_2}[f(s)]| ≤ B Σ_s |µ_1(s) − µ_2(s)|. Since the values are bounded in the interval [0, 1] and since P_t = µ_{π_ref,t},

  Σ_{t=0}^{T−1} E_{s_t∼P_t}[V_{π_{ref,t},π_{t+1},...,π_{T−1}}(s_t) − V_{π_t,π_{t+1},...,π_{T−1}}(s_t)]
    ≤ Σ_{t=0}^{T−1} E_{s∼µ_t}[V_{π_{ref,t},π_{t+1},...,π_{T−1}}(s) − V_{π_t,π_{t+1},...,π_{T−1}}(s)] + Σ_{t=0}^{T−1} Σ_s |P_t(s) − µ_t(s)|
    ≤ Σ_{t=0}^{T−1} max_{π′∈Π} E_{s∼µ_t}[V_{π′,π_{t+1},...,π_{T−1}}(s) − V_{π_t,π_{t+1},...,π_{T−1}}(s)] + T·d_var(µ_{π_ref}, µ)
    ≤ Tε + T·d_var(µ_{π_ref}, µ)

where we have used equation (1) and the fact that π_ref ∈ Π^T. The result now follows. □

This theorem shows that PSDP returns a policy whose performance competes favorably against those policies π_ref in Π^T whose future state distributions are close to µ. Hence, we expect our algorithm to provide a good policy if our prior knowledge allows us to choose a µ that is close to a future state distribution of a good policy in Π^T. It is also shown in [4] that the dependence on d_var is tight in the worst case. Furthermore, it is straightforward to show (cf. [6, 8]) that ε-approximate PSDP can be implemented using a number of samples that is linear in the VC dimension of Π, polynomial in T and 1/ε, but otherwise independent of the size of the state space. (See [4] for details.)

4 Instantiations

In this section, we provide detailed examples showing how PSDP may be applied to specific classes of policies, where we can demonstrate computational efficiency.

¹If S is continuous and µ_t and µ′_t are densities, the inner summation is replaced by an integral.

4.1 Discrete observation POMDPs

Finding memoryless policies for POMDPs represents a difficult and important problem.
Further, it is known that the best memoryless, stochastic, stationary policy can perform better, by an arbitrarily large amount, than the best memoryless, deterministic, stationary policy. This is frequently given as a reason for using stochastic policies. However, as we shortly show, there is no advantage to using stochastic (rather than deterministic) policies when we are searching for non-stationary policies. Four natural classes of memoryless policies to consider are as follows: stationary deterministic (SD), stationary stochastic (SS), non-stationary deterministic (ND), and non-stationary stochastic (NS). Let the operator opt return the value of the optimal policy in a class. The following specifies the relations among these classes.

Proposition 1 (Policy ordering) For any finite-state, finite-action POMDP,

  opt(SD) ≤ opt(SS) ≤ opt(ND) = opt(NS)

We now sketch a proof of this result. To see that opt(ND) = opt(NS), let µ_NS be the future state distribution of an optimal policy π_NS ∈ NS. Consider running PSDP with base distribution µ_NS. After the update at timestep t, the resulting policy (π_{NS,0}, . . . , π_{NS,t−1}, π_t, . . . , π_{T−1}) must be at least as good as π_NS. Essentially, we can consider PSDP as sweeping through each timestep and modifying the stochastic policy to be deterministic, while never decreasing performance. A similar argument shows that opt(SS) ≤ opt(ND), while a simple example POMDP in the next section demonstrates that this inequality can be strict. The potentially superior performance of non-stationary policies over stationary stochastic ones provides further justification for their use. Furthermore, the final equality shows that considering only deterministic policies is sufficient in the non-stationary regime. Unfortunately, one can show that it is NP-hard to exactly or approximately find the best policy in any of these classes (this was shown for SD in [7]).
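To make the backup loop of Algorithm 1 concrete, here is a minimal tabular sketch on a toy deterministic MDP. All of the specifics (the two-state transition table, the rewards, exhaustive enumeration of the policy class Π, and the uniform base distributions) are illustrative assumptions, not from the paper; values are normalized by 1/T so that they lie in [0, 1] as in the analysis above.

```python
import itertools

# Toy deterministic 2-state, 2-action MDP; all numbers are illustrative.
NEXT = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}   # NEXT[s][a] -> next state
REWARD = {0: 0.0, 1: 1.0}                    # R(s) in [0, 1]
STATES, ACTIONS, T = [0, 1], [0, 1], 3
# Stationary policy class Pi: every map from states to actions.
PI = [dict(zip(STATES, c)) for c in itertools.product(ACTIONS, repeat=len(STATES))]

def value(s, t, policy_seq):
    """(1/T) * sum of R(s_tau) for tau = t .. T-1, executing policy_seq[t:]."""
    total, state = 0.0, s
    for tau in range(t, T):
        total += REWARD[state]
        state = NEXT[state][policy_seq[tau][state]]
    return total / T

def psdp(mu):
    """Algorithm 1: back up policies from t = T-1 down to t = 0.

    mu is a list of T base distributions, mu[t][s] = probability of s.
    """
    pis = [None] * T
    for t in reversed(range(T)):
        def score(pi, t=t):
            # E_{s ~ mu_t}[ V_{pi, pi_{t+1}, ..., pi_{T-1}}(s) ]
            seq = [None] * t + [pi] + pis[t + 1:]
            return sum(mu[t][s] * value(s, t, seq) for s in STATES)
        pis[t] = max(PI, key=score)
    return pis
```

On this toy problem the backed-up policy sequence steers every state toward the rewarding state 1, which is what the exact (ε = 0) version of the algorithm should do.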
While many search heuristics have been proposed, we now show that PSDP offers a viable, computationally tractable alternative for finding a good policy for POMDPs, one which offers performance guarantees in the form of Theorem 1.

Proposition 2 (PSDP complexity) For any POMDP, exact PSDP (ε = 0) runs in time polynomial in the size of the state and observation spaces and in the horizon time T.

Under PSDP, the policy update is as follows:

  π_t(o) = arg max_a E_{s∼µ_t}[p(o|s) V_{a,π_{t+1},...,π_{T−1}}(s)] ,   (2)

where p(o|s) gives the observation probabilities of the POMDP and the policy sequence (a, π_{t+1}, . . . , π_{T−1}) always begins by taking action a. It is clear that, given the policies from time t + 1 onwards, V_{a,π_{t+1},...,π_{T−1}}(s) can be efficiently computed, and thus the update (2) can be performed in polynomial time in the relevant quantities. Intuitively, the distribution µ specifies here how to trade off the benefits of different underlying state-action pairs that share an observation. Ideally, it is the distribution provided by an optimal policy for ND that optimally specifies this tradeoff. This result does not contradict the NP-hardness results, because it requires that a good baseline distribution µ be provided to the algorithm. However, if µ is the future state distribution of the optimal policy in ND, then PSDP returns an optimal policy for this class in polynomial time. Furthermore, if the state space is too large to perform the exact update in equation (2), then Monte Carlo integration may be used to evaluate the expectation over the state space. This leads to an ε-approximate version of PSDP, in which one can obtain an algorithm with no dependence on the size of the state space and a polynomial dependence on the number of observations, T, and 1/ε (see discussion in [4]).
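A minimal sketch of the exact update in equation (2) for a discrete-observation POMDP. The helper names (`p_obs`, `q_value`) are illustrative assumptions: `p_obs(o, s)` stands in for the POMDP observation model p(o|s), and `q_value(a, s)` stands in for the already-computed value V of taking action a in s and then following (π_{t+1}, . . . , π_{T−1}).

```python
def psdp_pomdp_update(mu_t, observations, actions, p_obs, q_value):
    """One exact PSDP backup for a memoryless POMDP policy (equation (2)):

        pi_t(o) = argmax_a  sum_s mu_t(s) * p(o|s) * V_{a, pi_{t+1}, ...}(s)

    mu_t maps states to probabilities; p_obs and q_value are assumed given.
    """
    policy = {}
    for o in observations:
        policy[o] = max(
            actions,
            key=lambda a: sum(mu_t[s] * p_obs(o, s) * q_value(a, s)
                              for s in mu_t))
    return policy
```

The toy usage below illustrates the tradeoff the text describes: two underlying states share one observation, and µ_t weighs which state's preferred action wins.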
4.2 Action-value approximation

PSDP can also be efficiently implemented if it is possible to efficiently find an approximate action-value function Ṽ_{a,π_{t+1},...,π_{T−1}}(s), i.e., if at each timestep

  ε ≥ E_{s∼µ_t}[max_{a∈A} |Ṽ_{a,π_{t+1},...,π_{T−1}}(s) − V_{a,π_{t+1},...,π_{T−1}}(s)|] .

(Recall that the policy sequence (a, π_{t+1}, . . . , π_{T−1}) always begins by taking action a.) If the policy π_t is greedy with respect to the action value Ṽ_{a,π_{t+1},...,π_{T−1}}(s), then it follows immediately from Theorem 1 that our policy's value differs from the optimal one by at most 2Tε plus the µ-dependent variational penalty term. It is important to note that this error is phrased in terms of an average error over the state space, as opposed to the worst-case errors over the state space that are more standard in RL. We can intuitively grasp this by observing that value-iteration-style algorithms may amplify any small error in the value function by pushing more probability mass through the regions where these errors occur. PSDP, however, since it does not use value function backups, cannot make this same error; the use of the computed policies in the future keeps it honest. There are numerous efficient regression algorithms that can minimize this error, or approximations to it.

4.3 Linear policy MDPs

We now examine in detail a particular policy search example in which we have a two-action MDP and a linear policy class. This case is interesting because, if the term E_{s∼µ_t}[V_{π,π_{t+1},...,π_{T−1}}(s)] (from the maximization step in the algorithm) can be nearly maximized by some linear policy π, then a good approximation to π can be found. Let A = {a_1, a_2}, and Π = {π_θ(s) : θ ∈ R^n}, where π_θ(s) = a_1 if θ^T φ(s) ≥ 0, and π_θ(s) = a_2 otherwise. Here, φ(s) ∈ R^n is a vector of features of the state s. Consider the maximization step in the PSDP algorithm.
Letting 1{·} be the indicator function (1{True} = 1, 1{False} = 0), we have the following algorithm for performing the maximization:

Algorithm 2 (Linear maximization) Given m_1 and m_2:
  for i = 1 to m_1
    Sample s^(i) ∼ µ_t.
    Use m_2 Monte Carlo samples to estimate V_{a_1,π_{t+1},...,π_{T−1}}(s^(i)) and V_{a_2,π_{t+1},...,π_{T−1}}(s^(i)); call the resulting estimates q_1 and q_2.
    Let y^(i) = 1{q_1 > q_2}, and w^(i) = |q_1 − q_2|.
  Find θ = arg min_θ Σ_{i=1}^{m_1} w^(i) 1{1{θ^T φ(s^(i)) ≥ 0} ≠ y^(i)}.
  Output π_θ.

Figure 1: Illustrations of mazes: (a) Hallway (b) McCallum's Maze (c) Sutton's Maze

Intuitively, the algorithm does the following: it samples m_1 states s^(1), . . . , s^(m_1) from the distribution µ_t. Using m_2 Monte Carlo samples, it determines whether action a_1 or action a_2 is preferable from each state, and creates a "label" y^(i) for that state accordingly. Finally, it tries to find a linear decision boundary separating the states from which a_1 is better from the states from which a_2 is better. Further, the "importance" or "weight" w^(i) assigned to s^(i) is proportional to the difference in the values of the two actions from that state. The final maximization step can be approximated via a call to any standard supervised learning algorithm that tries to find linear decision boundaries, such as a support vector machine or logistic regression. In some of our experiments, we use a weighted logistic regression to perform this maximization. Alternatively, using linear programming, it is possible to approximate this maximization. Let

  T(θ) = Σ_{i=1}^{m_1} w^(i) 1{1{θ^T φ(s^(i)) ≥ 0} ≠ y^(i)}

be the objective in the minimization. If there is a value of θ that satisfies T(θ) = 0, then it can be found via linear programming. Specifically, for each value of i, we add the constraint

  θ^T φ(s^(i)) > κ    if y^(i) = 1
  θ^T φ(s^(i)) < −κ   otherwise,

where κ is any small positive constant. In the case in which these constraints cannot be simultaneously satisfied, it is NP-hard to find arg min_θ T(θ).
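The labeling step of Algorithm 2 and the objective T(θ) can be sketched directly. This is an illustrative implementation under simplifying assumptions: `q1`/`q2` are given per-state value estimates standing in for the Monte Carlo rollouts, and the one-dimensional feature map is hypothetical.

```python
def label_states(samples, q1, q2, phi):
    """Algorithm 2 data step: label each sampled state by its better action
    and weight it by the action-value gap |q1 - q2|. q1/q2 map a state to
    (estimated) values of actions a1/a2."""
    return [(phi(s), 1 if q1[s] > q2[s] else 0, abs(q1[s] - q2[s]))
            for s in samples]

def weighted_error(theta, data):
    """T(theta) = sum_i w_i * 1{ 1{theta^T phi(s_i) >= 0} != y_i }."""
    total = 0.0
    for x, y, w in data:
        pred = 1 if sum(tj * xj for tj, xj in zip(theta, x)) >= 0.0 else 0
        if pred != y:
            total += w
    return total
```

In practice the arg min over θ would be handed to a weighted classifier (e.g., weighted logistic regression) rather than evaluated by enumeration; `weighted_error` only makes the objective being approximated explicit.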
However, the optimal value can be approximated: specifically, if θ* = arg min_θ T(θ), then [1] presents a polynomial-time algorithm that finds θ such that T(θ) ≤ (n + 1)T(θ*), where n is the dimension of θ. Therefore, if there is a linear policy that does well, we also find a policy that does well. (Conversely, if there is no linear policy that does well, i.e., if T(θ*) above were large, then the bound would be very loose; however, in this setting there is no good linear policy, and hence we arguably should not be using a linear policy anyway, or should consider adding more features.)

5 Experiments

The experiments below demonstrate each of the instantiations described previously.

5.1 POMDP gridworld example

Here we apply PSDP to some simple maze POMDPs (Figure 1) to demonstrate its performance. In each, the robot can move in any of the 4 cardinal directions. Except in Figure 1c, the observation at each grid cell is simply the set of directions in which the robot can freely move. The goal in each maze is to reach the circled grid cell in the minimum total number of steps from each starting cell. First we consider the hallway maze in Figure 1a. The robot here is confounded by all the middle states appearing the same, and the optimal stochastic policy must take time at least quadratic in the length of the hallway to ensure it reaches the goal from both sides. PSDP deduces a non-stationary deterministic policy with much better performance: first clear the left half of the maze by always traveling right, and then the right half by always traveling left. McCallum's maze (Figure 1b) is discussed in the literature as admitting no satisficing deterministic reactive policy. When one allows non-stationary policies, however, solutions do exist: PSDP provides a policy with 55 total steps to goal. In our final benchmark, Sutton's maze (Figure 1c), the observations are determined by the openness of all eight connected directions.
Below we summarize the total number of steps to goal of our algorithm as compared with optimality for two classes of policy. Column 1 denotes PSDP performance using a uniform baseline distribution. The next column lists the performance of iterating PSDP, starting initially with a uniform baseline µ and then recomputing with a new baseline µ′ based on the previously constructed policy.² Column 3 corresponds to the optimal stationary deterministic policy, while the final column gives the best theoretically achievable performance given arbitrary memory. It is worthwhile to note that the PSDP computations are very fast in all of these problems, taking well under a second in an interpreted language.

              µ uniform   µ iterated   Optimal SD   Optimal
  Hallway         21          21           ∞           18
  McCallum        55          48           ∞           39
  Sutton         412         412          416         ≥408

²It can be shown that this procedure of refining µ based on previously learned policies will never decrease performance.

5.2 Robot walking

Our work is related in spirit to Atkeson and Morimoto [2], which describes a differential dynamic programming (DDP) algorithm that learns quadratic value functions along trajectories. These trajectories, which serve as an analog of our µ distribution, are then refined using the resulting policies. A central difference is their use of value function backups as opposed to policy backups. In tackling the control problem presented in [2], we demonstrate ways in which PSDP extends that work. [2] considers a planar biped robot that walks along a bar. The robot has two legs and a motor that applies torque where they meet. As the robot lacks knees, it walks by essentially brachiating (upside-down); a simple mechanism grabs the bar as a foot swings into position. The robot (excluding its horizontal position along the bar) can be described in a 5-dimensional state space using the angles and angular velocities measured from the foot grasping the bar. The control variable that needs to be determined is the hip torque.
In [2], significant manual "cost-function engineering" or "shaping" of the rewards was used to achieve walking at a fixed speed. Much of this is due to the limitations of differential dynamic programming, in which cost functions must always be locally quadratic. This rules out natural cost functions that directly penalize, for example, falling. As this limitation does not apply to our algorithm, we used a cost function that rewards the robot for each timestep it remains upright. In addition, we quadratically penalize deviation from the nominal horizontal velocity of 0.4 m/s, as well as the control effort applied. Samples of µ are generated in the same way that [2] generates initial trajectories, using a parametric policy search. For our policy, we approximated the action-value function with a locally weighted linear regression. PSDP's policy significantly improves performance over the parametric policy search; while both keep the robot walking, we note that PSDP incurs 31% less cost per step. DDP makes strong, perhaps unrealistic, assumptions about the observability of state variables. PSDP, in contrast, can learn policies with limited observability. By hiding state variables from the algorithm, this control problem demonstrates PSDP's leveraging of non-stationarity and its ability to cope with partial observability. PSDP can make the robot walk without any observations; open-loop control is sufficient to propel the robot, albeit at a significant reduction in performance and robustness. In Figure 2 we see the signal generated by the learned open-loop controller. This complex torque signal would be identical for arbitrary initial conditions, modulo sign reversals, as the applied torque at the hip is inverted from the control signal whenever the stance foot is switched.
5.3 Double-pole balancing

Our third problem, double-pole balancing, is similar to the standard inverted pendulum problem, except that two unactuated poles, rather than a single one, are attached to the cart, and it is our task to simultaneously keep both of them balanced. This makes the task significantly harder than the standard single-pole problem. Using the simulator provided by [3], we implemented PSDP for this problem. The state variables were the cart position x; the cart velocity ẋ; the two poles' angles φ_1 and φ_2; and the poles' angular velocities φ̇_1 and φ̇_2. The two actions are to accelerate left and to accelerate right.

Figure 2: (Left) Control signal from the open-loop learned controller. (Right) Resulting angle of one leg. The dashed line in each indicates which foot is grasping the bar at each time.

We used a linear policy class Π as described previously, with φ(s) = [x, ẋ, φ_1, φ̇_1, φ_2, φ̇_2]^T. By symmetry of the problem, a constant intercept term was unnecessary; leaving out the intercept enforces that if a_1 is the better action for some state s, then a_2 should be taken in the state −s. The algorithm we used for the optimization step was logistic regression.³ The baseline distribution µ that we chose was a zero-mean multivariate Gaussian distribution over all the state variables. Using a horizon of T = 2000 steps and 5000 Monte Carlo samples per iteration of the PSDP algorithm, we are able to successfully balance both poles.

Acknowledgments. We thank Chris Atkeson and John Langford for helpful conversations. J. Bagnell is supported by an NSF graduate fellowship. This work was also supported by NASA, and by the Department of the Interior/DARPA under contract number NBCH1020014.

References

[1] E. Amaldi and V. Kann.
On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems. Theoretical Computer Science, 1998.
[2] C. Atkeson and J. Morimoto. Nonparametric representation of policies and value functions: A trajectory-based approach. In NIPS 15, 2003.
[3] F. Gomez. http://www.cs.utexas.edu/users/nn/pages/software/software.html.
[4] Sham Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[5] Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In Proc. 19th International Conference on Machine Learning, 2002.
[6] Michael Kearns, Yishay Mansour, and Andrew Y. Ng. Approximate planning in large POMDPs via reusable trajectories. (Extended version of paper in NIPS 12), 1999.
[7] M. Littman. Memoryless policies: theoretical limitations and practical results. In Proc. 3rd Conference on Simulation of Adaptive Behavior, 1994.
[8] Andrew Y. Ng and Michael I. Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In Proc. 16th Conf. Uncertainty in Artificial Intelligence, 2000.
[9] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.

³In our setting, we use weighted logistic regression and minimize −ℓ(θ) = −Σ_i w^(i) log p(y^(i) | s^(i), θ), where p(y = 1 | s, θ) = 1/(1 + exp(−θ^T s)). It is straightforward to show that this is a (convex) upper bound on the objective function T(θ).
2003
A Holistic Approach to Compositional Semantics: a connectionist model and robot experiments

Yuuya Sugita, BSI, RIKEN, Hirosawa 2-1, Wako-shi, Saitama 3510198, Japan. sugita@bdc.brain.riken.go.jp
Jun Tani, BSI, RIKEN, Hirosawa 2-1, Wako-shi, Saitama 3510198, Japan. tani@bdc.brain.riken.go.jp

Abstract

We present a novel connectionist model for acquiring the semantics of a simple language through the behavioral experiences of a real robot. We focus on the “compositionality” of semantics, a fundamental characteristic of human language, which is the ability to understand the meaning of a sentence as a combination of the meanings of words. We also pay much attention to the “embodiment” of a robot, which means that the robot should acquire semantics which matches its body, or sensory-motor system. The essential claim is that an embodied compositional semantic representation can be self-organized from generalized correspondences between sentences and behavioral patterns. This claim is examined and confirmed through simple experiments in which a robot generates corresponding behaviors from unlearned sentences by analogy with the correspondences between learned sentences and behaviors.

1 Introduction

Implementing language acquisition systems is one of the most difficult problems, since not only the complexity of the syntactical structure, but also the diversity in the domain of meaning, make this problem complicated and intractable. In particular, how linguistic meaning can be represented in the system is crucial, and this problem has been investigated for many years. In this paper, we introduce a connectionist model to acquire the semantics of language with respect to the behavioral patterns of a real robot. An essential question is how embodied compositional semantics can be acquired in the proposed connectionist model without providing any representations of the meaning of a word or behavior routines a priori.
By “compositionality”, we refer to the fundamental human ability to understand a sentence from (1) the meanings of its constituents, and (2) the way in which they are put together. A language acquisition system that acquires compositional semantics can derive the meaning of an unknown sentence from the meanings of known sentences. Consider the unknown sentence “John likes birds.” It could be understood by learning these three sentences: “John likes cats.”; “Mary likes birds.”; and “Mary likes cats.” That is to say, generalization of meaning can be achieved through compositional semantics. From the point of view of compositionality, the symbolic representation of word meaning has much affinity with processing the linguistic meaning of sentences [4]. Following this observation, various learning models have been proposed to acquire the embodied semantics of language. For example, some models learn semantics in the form of correspondences between sentences and non-linguistic objects, i.e., visual images [10] or the sensory-motor patterns of a robot [7, 13]. In these works, the syntactic aspect of language was acquired through a pre-acquired lexicon. This means that the meanings of words (i.e., the lexicon) are acquired independently of the usages of words in sentences (i.e., syntax). Although this separated learning approach seems plausible given the requirements of compositionality, it causes inevitable difficulties in representing the meaning of a sentence. A priori separation of lexicon and syntax requires a pre-defined manner of combining word meanings into the meaning of a sentence. In Iwahashi’s model, the class of a word is assumed to be given prior to learning its meaning, because different acquisition algorithms are required for nouns and verbs (c.f., [12]). Moreover, the meaning of a sentence is obtained by filling a pre-defined template with the meanings of words.
Roy’s model does not require a priori knowledge of word classes, but it requires the strong assumption that the meaning of a word can be assigned to some pre-defined attributes of non-linguistic objects. This assumption is not realistic in more complex cases, such as when the meaning of a word needs to be extracted from non-linguistic spatio-temporal patterns, as in the case of learning verbs. In this paper, we discuss an essential mechanism for self-organizing embodied compositional semantic representations, in which separate treatments of words and syntax are not required. Our model implements compositional semantics by utilizing the generalization capability of an RNN, in which the meaning of each word cannot exist independently, but instead emerges from its relations with the other words (c.f., reverse compositionality, [3]). In this situation, a sort of generalization can be expected, such that the meanings of novel sentences can be inferred by analogy with learned ones. The experiments were conducted using a real mobile robot with an arm and with various sensors, including a vision system. A finite set of two-word sentences, each consisting of a verb followed by a noun, was considered. Our analysis will clarify what sorts of internal neural structures should be self-organized in order to achieve compositional semantics grounded in a robot’s behavioral experiences. Although our experimental design is limited, the current study suggests an essential mechanism for acquiring grounded compositional semantics with the minimal combinatorial structure of this finite language [2].

2 Task Design

The aim of our experimental task is to examine an essential mechanism for self-organizing compositional semantics based on the behavior of a robot. In the training phase, our robot learns the relationships between sentences and the corresponding behavioral sensory-motor sequences in a supervised manner. It is then tested on generating behavioral sequences from a given sentence.
We regard compositional semantics as being acquired if appropriate behavioral sequences can be generated from unlearned sentences by analogy with learned data. Our mobile robot has three actuators, with two wheels and a joint on the arm; a colored vision sensor; and two torque sensors, on the wheels and the arm (Figure 1a). The robot operates in an environment where three colored objects (red, blue, and green) are placed on the floor (Figure 1b). The positions of these objects can be varied so long as the robot sees the red object on the left side of its field of view, the green object in the middle, and the blue object on the right at the start of every trial of behavioral sequences. The robot thus learns nine categories of behavioral patterns, consisting of pointing at, pushing, and hitting each of the three objects, in a supervised manner. These categories are denoted POINT-R, POINT-B, POINT-G, PUSH-R, PUSH-B, PUSH-G, HIT-R, HIT-B, and HIT-G (Figure 1c-e). The robot also learns sentences which consist of one of 3 verbs (point, push, hit) followed by one of 6 nouns (red, left, blue, center, green, right). The meanings of these 18 possible sentences are given in terms of fixed correspondences with the 9 behavioral categories (Figure 2).

Figure 1: The mobile robot (a) starts from a fixed position in the environment and (b) ends each behavior by (c) pointing at, (d) pushing, or (e) hitting an object.

Figure 2: The correspondence between sentences and behavioral categories. Each behavioral category has two corresponding sentences.
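The fixed correspondence of Figure 2 can be written out explicitly. The encoding below is only an illustrative data representation (the category names follow the text; the function and dictionary names are ours), but it captures the key property that synonym pairs of nouns map to the same behavioral category.

```python
VERBS = ["point", "push", "hit"]
# "left", "center", "right" are synonyms of "red", "blue", "green" (Figure 2).
NOUN_TO_OBJECT = {"red": "R", "left": "R",
                  "blue": "B", "center": "B",
                  "green": "G", "right": "G"}

def behavior_category(sentence):
    """Map one of the 18 two-word sentences to its behavioral category,
    following the fixed correspondences of Figure 2."""
    verb, noun = sentence.split()
    assert verb in VERBS and noun in NOUN_TO_OBJECT
    return verb.upper() + "-" + NOUN_TO_OBJECT[noun]
```

All 18 sentences collapse onto the 9 categories, two sentences per category.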
For example, “point red” and “point left” correspond to POINT-R, “point blue” and “point center” to POINT-B, and so on. In these correspondences, “left,” “center,” and “right” have exactly the same meanings as “red,” “blue,” and “green” respectively. These synonyms are introduced to observe how behavioral similarity affects the acquired linguistic semantic structure.

3 Proposed Model

Our model employs two RNNs with parametric bias nodes (RNNPBs) [15] in order to implement a linguistic module and a behavioral module (Figure 3). The RNNPB, like the conventional Jordan-type RNN [8], is a connectionist model for learning time sequences. The linguistic module learns the above sentences represented as time sequences of words [1], while the behavioral module learns the behavioral sensory-motor sequences of the robot. To acquire the correspondences between the sentences and behavioral sequences, these two modules are connected to each other by using the parametric bias binding method. Before discussing this binding method in detail, we introduce the overall architecture of the RNNPB.

Figure 3: Our model is composed of two RNNs with parametric bias nodes (RNNPBs), one for a linguistic module and the other for a behavioral module. Both modules interact with each other during the learning process via the parametric bias binding method introduced in the text.

3.1 RNNPB

The RNNPB has the same neural architecture as the Jordan-type RNN except for the PB nodes in the input layer (c.f., each module of Figure 3). Unlike the other input nodes, these PB nodes take a specific constant vector throughout each time sequence, and are employed to implement a mapping between fixed-length vectors and time sequences.
Like the conventional Jordan-type RNN, the RNNPB learns time sequences in a supervised manner. The difference is that in the RNNPB, the vectors that encode the time sequences are self-organized in the PB nodes during the learning process. The common structural properties of all the training time sequences are acquired as connection weight values by using the back-propagation through time (BPTT) algorithm, as in the conventional RNN [8, 11]. Meanwhile, the specific properties of each individual time sequence are simultaneously encoded as PB vectors (c.f., [9]). As a result, the RNNPB self-organizes a mapping between the PB vectors and the time sequences. The learning algorithm for the PB vectors is a variant of the BPTT algorithm. For each of n training time sequences of real-numbered vectors x_0, · · · , x_{n−1}, the back-propagated errors with respect to the PB nodes are accumulated over all time steps to update the PB vectors. Formally, the update rule for the PB vector p_{x_i} encoding the i-th training time sequence x_i is given as follows:

  δ²p_{x_i} = (1/l_i) Σ_{t=0}^{l_i−1} error_{p_{x_i}}(t)   (1)
  δp_{x_i} = ε · δ²p_{x_i} + η · δp_{x_i}^old   (2)
  p_{x_i} = p_{x_i}^old + δp_{x_i}   (3)

In equation (1), the update of the PB vector, δ²p_{x_i}, is obtained from the average back-propagated error with respect to the PB nodes, error_{p_{x_i}}(t), over all time steps from t = 0 to l_i − 1, where l_i is the length of x_i. In equation (2), this update is low-pass filtered to inhibit frequent rapid changes in the PB vectors. After successfully learning the time sequences, the RNNPB can generate a time sequence x_i from its corresponding PB vector p_{x_i}. The actual generation process of a time sequence x_i is implemented by iteratively running the RNNPB with the corresponding PB vector p_{x_i}, a fixed initial context vector, and input vectors for each time step. Depending on the required functionality, both external information (e.g., sensory information) and internal predictions (e.g., motor commands) are employed as input vectors.
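The PB-vector update of equations (1)-(3) can be sketched in isolation. This is an illustrative fragment, not the full RNNPB: the back-propagated errors are taken as given inputs (in the real model they come from BPTT), and the function name and default ε, η values are our assumptions.

```python
def update_pb_vector(p, delta_p_old, errors, eps=0.1, eta=0.9):
    """One PB-vector update following equations (1)-(3).

    p            -- current PB vector (list of floats)
    delta_p_old  -- previous update delta_p (momentum term of eq. (2))
    errors       -- per-time-step back-propagated errors w.r.t. the PB
                    nodes: a list of length l_i of per-node error lists
    """
    li, n = len(errors), len(p)
    # (1) average the back-propagated error over all time steps
    delta2 = [sum(e[j] for e in errors) / li for j in range(n)]
    # (2) low-pass filter (momentum eta) to damp rapid changes
    delta_p = [eps * d2 + eta * dold for d2, dold in zip(delta2, delta_p_old)]
    # (3) apply the update
    p_new = [pj + dj for pj, dj in zip(p, delta_p)]
    return p_new, delta_p
```

The momentum term η · δp^old is what makes the PB trajectory smooth across training epochs rather than jumping with every batch of errors.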
Here, we introduce an abstracted operational notation for the RNNPB to facilitate the later explanation of our proposed method of binding language and behavior. By using an operator RNNPB, the generation of x_i from p_{x_i} is described as follows:

  RNNPB(p_{x_i}) → x_i,   i = 0, · · · , n − 1.   (4)

Furthermore, the RNNPB can be used not only for sequence generation processes but also for recognition processes. For a given sequence x_i, the corresponding PB vector p_{x_i} can be obtained by using the update rules for the PB vectors (equations (1) to (3)) without updating the connection weight values. This inverse of the generation operation is regarded as recognition, and is hence denoted as follows:

  RNNPB⁻¹(x_i) → p_{x_i},   i = 0, · · · , n − 1.   (5)

The other important characteristic of the RNNPB is that the relational structure among the training time sequences is acquired in the PB space through the learning process. This generalization capability of the RNNPB can be employed to generate and recognize unseen time sequences without any additional learning. For instance, by learning several cyclic time sequences of different frequencies, novel time sequences of intermediate frequency can be generated [6].

3.2 Binding

In the proposed model, corresponding sentences and behavioral sequences are constrained to have the same PB vectors in both modules. Under this condition, corresponding behavioral sequences can be generated naturally from sentences. When a sentence s_i and its corresponding behavioral sequence b_i have the same PB vector, we can obtain b_i from s_i as follows:

  RNNPB_B(RNNPB⁻¹_L(s_i)) → b_i   (6)

where RNNPB_L and RNNPB_B are abstracted operators for the linguistic module and the behavioral module, respectively. The PB vector p_{s_i} is obtained by recognizing the sentence s_i. Because of the constraint that corresponding sentences and behavioral sequences must have the same PB vectors, p_{b_i} is equal to p_{s_i}.
Therefore, we can obtain the corresponding behavioral sequence b_i by running the behavioral module with p_{b_i}. The binding constraint is implemented by introducing an interaction term into the PB-vector update rule (equation (3)):

    p_{s_i} = p_{s_i}^{old} + δp_{s_i} + γ_L · (p_{b_i}^{old} − p_{s_i}^{old})   (7)

    p_{b_i} = p_{b_i}^{old} + δp_{b_i} + γ_B · (p_{s_i}^{old} − p_{b_i}^{old})   (8)

where γ_L and γ_B are positive coefficients that determine the strength of the binding. Equations (7) and (8) are the constrained update rules for the linguistic module and the behavioral module, respectively. Under these rules, the PB vectors of a corresponding sentence s_i and behavioral sequence b_i attract each other. In fact, the corresponding PB vectors p_{s_i} and p_{b_i} need not become exactly equal for a correspondence to be learned: small residual differences between the PB vectors can be neglected because of the continuity of the PB spaces.

3.3 Generalization of Correspondences

As noted above, our model enables a robot to understand a sentence by means of a generated behavior, as if the meaning of the sentence were composed of the meanings of its constituents. That is to say, the robot can generate appropriate behavioral sequences from all sentences without learning all correspondences. To achieve this, an unlearned sentence and its corresponding behavioral sequence must have the same PB vector. Nevertheless, the PB binding method only equalizes the PB vectors of given corresponding sentences and behavioral sequences (cf. equations (7) and (8)). Implicit binding, in other words inter-module generalization of correspondences, is achieved by dynamic coordination between the PB binding method and the intra-module generalization of each module. The local effect of the PB binding method spreads over the whole PB space, because each individual PB vector depends on the others in order to self-organize PB structures reflecting the relationships among training data.
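The attraction dynamics of equations (7) and (8) can be sketched in a few lines of Python; the function name and the values of γ_L and γ_B are illustrative assumptions.

```python
import numpy as np

def binding_update(p_s_old, p_b_old, delta_s, delta_b, gamma_l=0.1, gamma_b=0.1):
    """Constrained PB updates of Eqs. (7)-(8): the PB vectors of a
    corresponding sentence (p_s) and behavioral sequence (p_b) are
    pulled toward each other with binding strengths gamma_l, gamma_b
    (illustrative values). Note both updates use the *old* vectors."""
    p_s = p_s_old + delta_s + gamma_l * (p_b_old - p_s_old)  # Eq. (7)
    p_b = p_b_old + delta_b + gamma_b * (p_s_old - p_b_old)  # Eq. (8)
    return p_s, p_b
```

With zero intrinsic deltas, the difference p_s − p_b shrinks by the factor (1 − γ_L − γ_B) at each step, so the two PB vectors converge toward each other without ever being forced exactly equal in finite time, which matches the tolerance for small residual differences noted above.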
Thus, the PB structures of both modules densely interact via the PB binding method. Finally, both PB structures converge to a common PB structure, and all corresponding sentences and behavioral sequences then share the same PB vectors automatically.

4 Experiments

In the learning phase, the robot learned 14 of 18 correspondences between sentences and behavioral patterns (cf. Figure 2). It was then tested on generating behavioral sequences from each of the remaining 4 sentences ("point green", "point right", "push red", and "push left"). To enable the robot to learn correspondences robustly, five corresponding sentences and behavioral sequences were associated by using the PB binding method for each of the 14 training correspondences. Thus, the linguistic module learned 70 sentences with PB binding. Meanwhile, the behavioral module learned the behavioral sequences of the 9 categories, including the 2 categories which had no corresponding sentences in the training set. The behavioral module learned 10 different sensory-motor sequences for each behavioral category. It therefore learned 70 behavioral sequences corresponding to the training sentences with PB binding, and the remaining 20 sequences independently. In addition, the behavioral module learned the same 90 behavioral sequences without binding. A sentence is represented as a time sequence of words, which starts with a fixed starting symbol. Each word is locally represented, such that each input node of the module corresponds to a specific word: a single input node takes a value of 1.0 while the others take 0.0 [1]. The linguistic module has 10 input nodes, one for each of the 9 words and the starting symbol. The module also has 6 parametric bias nodes, 4 context nodes, 50 hidden nodes, and 10 prediction output nodes. Thus, no a priori knowledge about the meanings of words is pre-programmed.
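The local word representation described above can be sketched as follows. The exact vocabulary and the starting-symbol name are illustrative assumptions inferred from the experiments (the paper specifies only that there are 9 words plus a starting symbol, one input node each).

```python
import numpy as np

# Illustrative vocabulary: a starting symbol plus 9 words, matching the
# 10 input nodes of the linguistic module (the word list is an assumption
# based on the behaviors described in the experiments).
VOCAB = ["<start>", "point", "push", "hit",
         "red", "blue", "green", "left", "center", "right"]

def encode_sentence(sentence):
    """Encode a sentence as a time sequence of locally-represented words:
    exactly one input node takes the value 1.0 at each step, all others 0.0."""
    words = ["<start>"] + sentence.split()
    seq = np.zeros((len(words), len(VOCAB)))
    for t, w in enumerate(words):
        seq[t, VOCAB.index(w)] = 1.0
    return seq
```

Because the representation is purely local (one-hot), no similarity between words is built in; any structure among word meanings must be self-organized during learning.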
A training behavioral sequence was created by sampling three sensory-motor vectors per second during a trial of the robot's human-guided behavior. For robust learning of behavior, each training behavioral sequence was generated under a slightly different environment, in which object positions were varied. The variation was at most 20 percent of the distance between the starting position of the robot and the original position of each object, in every direction (cf. Figure 1b). Typical behavioral sequences are about 5 to 25 seconds long, and therefore contain about 15 to 75 sensory-motor vectors. A sensory-motor vector is a real-numbered 26-dimensional vector consisting of 3 motor values (for the 2 wheels and the arm), 2 values from torque sensors (of the wheels and the arm), and 21 values encoding the visual image. The visual field is divided vertically into 7 regions, and each region is represented by (1) the fraction of the region covered by the object, (2) the dominant hue of the object in the region, and (3) the bottom border of the object in the region, which is proportional to the distance of the object from the camera. The behavioral module has 26 input nodes for sensory-motor input, 6 parametric bias nodes, 6 context nodes, 70 hidden nodes, and 6 output nodes for motor commands and partial prediction of the sensory image at the next time step.

5 Results and Analysis

In this section, we analyze the results of the experiment presented in the previous section. The analysis reveals that the inter-module generalization realized by the PB binding method fills an essential role in self-organizing the compositional semantics of the simple language through the behavioral experiences of the robot. As mentioned in the previous section, the training data for this experiment did not include all the correspondences.
As a result, although the behavioral module was trained with behavioral sequences from all behavioral categories, the sequences in two of the categories, whose corresponding sentences were not in the linguistic training set, could not be bound. The most important result was that these dangling behavioral sequences could nevertheless be bound to appropriate sentences: the robot could properly recognize the four unseen sentences and generate the corresponding behaviors. This means that both modules successfully came to share a common PB structure. Comparing the PB spaces of the two modules shows that they indeed shared a common structure as a result of binding. The linguistic PB vectors were computed by recognizing all 18 possible sentences, including the 4 unseen ones (Figure 4a), and the behavioral PB vectors were computed at the learning phase for all 90 corresponding behavioral sequences in the training data (Figure 4b). The acquired correspondences between sentences and behavioral sequences can be examined according to equation (6). In particular, the implicit binding of the four unlearned correspondences ("point green"↔POINT-G, "point right"↔POINT-G, "push red"↔PUSH-R, and "push left"↔PUSH-R) demonstrates acquisition of the underlying semantics, or the generalized correspondences. The acquired common structure has two striking characteristics: (1) a combinatorial structure originating from the linguistic module, and (2) a metric based on behavioral similarity originating from the behavioral module. The interaction between the modules enabled both PB spaces to acquire these two structural properties simultaneously. We can find three congruent sub-structures for each verb, and six congruent sub-structures for each noun, in the linguistic PB space.
[Figure 4: Plots of the bound linguistic module (a) and the bound behavioral module (b). Both plots are projections of the PB spaces onto the same surface determined by PCA (first and second principal components); the accumulated contribution rate is about 73%. Unlearned sentences and their corresponding behavioral categories are underlined.]

This congruency represents the underlying syntax structure of the training sentences. For example, it is possible to estimate the PB vector of "point green" from the relationship among the PB vectors of "point blue", "hit blue", and "hit green." This predictable geometric regularity could be acquired by independent learning of the linguistic module. However, it could not be acquired by independent learning of the behavioral module, because behavioral sequences cannot be decomposed into plausible primitives, unlike sentences, which can be broken down into words. We can also see a metric reflecting the similarity of behavioral sequences, not only in the behavioral module but also in the linguistic module. The PB vectors of sentences that correspond to the same behavioral category take similar values. For example, the two sentences corresponding to POINT-R ("point red" and "point left") are encoded as similar PB vectors. Such a metric structure could not be observed in independent learning of the linguistic module, in which all nouns were placed symmetrically in the PB space by the syntactic constraints.
The above observations thus confirm that the embodied compositional semantics was self-organized through the unification of the two modules, implemented by the PB binding method. We also conducted experiments with different test sentences, and confirmed that similar results were obtained.

6 Discussion and Summary

Our simple experiments showed that the minimal grounded compositional semantics of our language can be acquired by generalizing the correspondences between sentences and the behavioral sensory-motor sequences of a robot. Our experiments could not examine strong systematicity [4], but did address the combinatorial character of sentences. That is to say, the robot could understand relatively simple sentences in a systematic way, and could understand novel sentences. Therefore, our results elucidate some important issues about compositional semantic representation. We claim that the acquisition of word meaning and syntax cannot be separated, from the standpoint of the symbol grounding problem [5]. The meanings of words depend on each other to compose the meanings of sentences [16]. Consider the meaning of the word "red." The meaning of "red" must be something that combines with the meaning of "point", "push", or "hit" to form the grounded meanings of sentences. Therefore, an a priori definition of the meaning of "red" substantially affects the organization of the other parts of the system, and often results in further pre-programming. This means that it is inherently difficult to explicitly extract the meaning of a word from the meaning of a sentence. Our model avoids this difficulty by implementing the grounded meaning of a word implicitly, in terms of the relationships among the meanings of sentences based on behavioral experiences. Our model does not require any pre-programming of syntactic information, such as a symbolic representation of word meaning, a predefined combinatorial structure in the semantic domain, or behavior routines.
Instead, the essential structures accounting for compositionality are fully self-organized in the iterative dynamics of the RNN, through the structural interactions between language and behavior using the PB binding method. Thus, the robot can understand "red" through its behavioral interactions in the designed tasks in a bottom-up way [14]. A similar argument holds for verbs. For example, the robot understands "point" through pointing at red, blue, and green objects. In summary, the current study has shown the importance of generalizing the correspondences between sentences and behavioral patterns in the acquisition of an embodied language. In future studies, we plan to apply our model to larger language sets. In the current experiment, the training set consists of a large fraction of the legal input space, compared with related work. Such a large training set is needed because our model has no a priori knowledge of syntax and composition rules. However, we expect that our model would require a relatively smaller fraction of sentences to learn a larger language set, for a given degree of syntactic complexity.

References

[1] J. L. Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.
[2] G. Evans. Semantic theory and tacit knowledge. In S. Holzman and C. Leich, editors, Wittgenstein: To Follow a Rule. London: Routledge and Kegan Paul, 1981.
[3] J. Fodor. Why compositionality won't go away: reflections on Horwich's 'deflationary' theory. Technical Report 46, Rutgers University, 1999.
[4] R. F. Hadley. Systematicity revisited: reply to Christiansen and Chater and Niklasson and van Gelder. Mind and Language, 9:431-444, 1994.
[5] S. Harnad. The symbol grounding problem. Physica D, 42:335-346, 1990.
[6] M. Ito and J. Tani. Generalization and diversity in dynamic pattern learning and generation by distributed representation architecture. Technical Report 3, Lab. for BDC, Brain Science Institute, RIKEN, 2003.
[7] N. Iwahashi.
Language acquisition by robots: towards a new paradigm of language processing. Journal of the Japanese Society for Artificial Intelligence, 18(1):49-58, 2003.
[8] M. I. Jordan and D. E. Rumelhart. Forward models: supervised learning with a distal teacher. Cognitive Science, 16:307-354, 1992.
[9] R. Miikkulainen. Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory. MIT Press, 1993.
[10] D. K. Roy. Learning visually grounded words and syntax for a scene description task. Computer Speech and Language, 16, 2002.
[11] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing. Cambridge, MA: MIT Press, 1986.
[12] J. M. Siskind. Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. Journal of Artificial Intelligence Research, 15:31-90, 2001.
[13] L. Steels. The emergence of grammar in communicating autonomous robotic agents. In W. Horn, editor, Proceedings of the European Conference on Artificial Intelligence, pages 764-769. IOS Press, 2000.
[14] J. Tani. Model-based learning for mobile robot navigation from the dynamical systems perspective. IEEE Trans. on SMC (B), 26(3):421-436, 1996.
[15] J. Tani. Learning to generate articulated behavior through the bottom-up and the top-down interaction process. Neural Networks, 16:11-23, 2003.
[16] T. Winograd. Understanding natural language. Cognitive Psychology, 3(1):1-191, 1972.
Semidefinite relaxations for approximate inference on graphs with cycles

Martin J. Wainwright, Electrical Engineering and Computer Science, UC Berkeley, Berkeley, CA 94720, wainwrig@eecs.berkeley.edu
Michael I. Jordan, Computer Science and Statistics, UC Berkeley, Berkeley, CA 94720, jordan@cs.berkeley.edu

Abstract

We present a new method for calculating approximate marginals for probability distributions defined by graphs with cycles, based on a Gaussian entropy bound combined with a semidefinite outer bound on the marginal polytope. This combination leads to a log-determinant maximization problem that can be solved by efficient interior point methods [8]. As with the Bethe approximation and its generalizations [12], the optimizing arguments of this problem can be taken as approximations to the exact marginals. In contrast to Bethe/Kikuchi approaches, our variational problem is strictly convex and so has a unique global optimum. An additional desirable feature is that the value of the optimal solution is guaranteed to provide an upper bound on the log partition function. In experimental trials, the performance of the log-determinant relaxation is comparable to or better than the sum-product algorithm, and better by a substantial margin for certain problem classes. Finally, the zero-temperature limit of our log-determinant relaxation recovers a class of well-known semidefinite relaxations for integer programming [e.g., 3].

1 Introduction

Given a probability distribution defined by a graphical model (e.g., Markov random field, factor graph), a key problem is the computation of marginal distributions. Although highly efficient algorithms exist for trees, exact solutions are prohibitively complex for more general graphs of any substantial size. This difficulty motivates the use of algorithms for computing approximations to marginal distributions, a problem to which we refer as approximate inference. One widely-used algorithm is the belief propagation or sum-product algorithm.
As shown by Yedidia et al. [12], it can be interpreted as a method for attempting to solve a variational problem wherein the exact entropy is replaced by the Bethe approximation. Moreover, Yedidia et al. proposed extensions to the Bethe approximation based on clustering operations. An unattractive feature of the Bethe approach and its extensions is that, with certain exceptions [e.g., 6], the associated variational problems are typically not convex, thus leading to algorithmic complications and raising the possibility of multiple local optima. Secondly, in contrast to other variational methods (e.g., mean field [4]), the optimal values of Bethe-type variational problems fail to provide bounds on the log partition function. This function arises in various contexts, including approximate parameter estimation and large deviations exponents, so such bounds are of interest in their own right. This paper introduces a new class of variational problems that are both convex and provide upper bounds. Our derivation relies on a Gaussian upper bound on the discrete entropy of a suitably regularized random vector, and a semidefinite outer bound on the set of valid marginal distributions. The combination leads to a log-determinant maximization problem with a unique optimum that can be found by efficient interior point methods [8]. As with the Bethe/Kikuchi approximations and sum-product algorithms, the optimizing arguments of this problem can be taken as approximations to the marginal distributions of the underlying graphical model. Moreover, taking the "zero-temperature" limit recovers a class of well-known semidefinite programming relaxations for integer programming problems [e.g., 3].

2 Problem set-up

We consider an undirected graph G = (V, E) with n = |V| nodes. Associated with each vertex s ∈ V is a random variable x_s taking values in the discrete space X = {0, 1, ..., m − 1}.
We let x = {x_s | s ∈ V} denote a random vector taking values in the Cartesian product space X^n. Our analysis makes use of the following exponential representation of a graph-structured distribution p(x). For some index set I, we let φ = {φ_α | α ∈ I} denote a collection of potential functions associated with the cliques of G, and let θ = {θ_α | α ∈ I} be a vector of parameters associated with these potential functions. The exponential family determined by φ is the following collection:

    p(x; θ) = exp{ Σ_α θ_α φ_α(x) − Φ(θ) }   (1a)

    Φ(θ) = log Σ_{x ∈ X^n} exp{ Σ_α θ_α φ_α(x) }.   (1b)

Here Φ(θ) is the log partition function that serves to normalize the distribution. In a minimal representation, the functions {φ_α} are affinely independent, and d = |I| corresponds to the dimension of the family. For example, one minimal representation of a binary-valued random vector on a graph with pairwise cliques is the standard Ising model, in which φ = {x_s | s ∈ V} ∪ {x_s x_t | (s, t) ∈ E}. Here the index set is I = V ∪ E, and d = n + |E|. In order to incorporate higher-order interactions, we simply add higher-degree monomials (e.g., x_s x_t x_u for a third-order interaction) to the collection of potential functions. Similar representations exist for discrete processes on alphabets with m > 2 elements.

2.1 Duality and marginal polytopes

It is well known that Φ is convex in θ, and strictly so for a minimal representation. Accordingly, it is natural to consider its conjugate dual function, defined by the relation:

    Φ*(µ) = sup_{θ ∈ R^d} { ⟨µ, θ⟩ − Φ(θ) }.   (2)

Here the vector of dual variables µ has the same dimension as the exponential parameter θ (i.e., µ ∈ R^d). It is straightforward to show that the partial derivatives of Φ with respect to θ correspond to cumulants of φ(x); in particular, the first-order derivatives define mean parameters:

    ∂Φ/∂θ_α (θ) = Σ_{x ∈ X^n} p(x; θ) φ_α(x) = E_θ[φ_α(x)].   (3)

In order to compute Φ*(µ) for a given µ, we take the derivative with respect to θ of the quantity within curly braces in Eqn.
(2). Setting this derivative to zero and making use of Eqn. (3) yields defining conditions for a vector θ(µ) attaining the optimum in Eqn. (2):

    µ_α = E_{θ(µ)}[φ_α(x)]  ∀ α ∈ I.   (4)

It can be shown [10] that Eqn. (4) has a solution if and only if µ belongs to the relative interior of the set:

    MARG(G; φ) = { µ ∈ R^d | Σ_{x ∈ X^n} p(x) φ(x) = µ for some p(·) }.   (5)

Note that this set is equivalent to the convex hull of the finite collection of vectors {φ(x) | x ∈ X^n}; consequently, the Minkowski-Weyl theorem [7] guarantees that it can be characterized by a finite number of linear inequality constraints. We refer to this set as the marginal polytope¹ associated with the graph G and the potentials φ. In order to calculate an explicit form for Φ*(µ) for any µ ∈ MARG(G; φ), we substitute the relation in Eqn. (4) into the definition of Φ*, thereby obtaining:

    Φ*(µ) = ⟨µ, θ(µ)⟩ − Φ(θ(µ)) = Σ_{x ∈ X^n} p(x; θ(µ)) log p(x; θ(µ)).   (6)

This relation establishes that for µ in the relative interior of MARG(G; φ), the value of the conjugate dual Φ*(µ) is given by the negative entropy of the distribution p(x; θ(µ)), where the pair θ(µ) and µ are dually coupled via Eqn. (4). For µ ∉ cl MARG(G; φ), it can be shown [10] that the value of the dual is +∞. Since Φ is lower semi-continuous, taking the conjugate twice recovers the original function [7]; applying this fact to Φ* and Φ, we obtain the following relation:

    Φ(θ) = max_{µ ∈ MARG(G; φ)} { ⟨θ, µ⟩ − Φ*(µ) }.   (7)

Moreover, we are guaranteed that the optimum is attained uniquely at the exact marginals µ = {µ_α} of p(x; θ). This variational formulation plays a central role in our development in the sequel.

2.2 Challenges with the variational formulation

There are two difficulties associated with the variational formulation (7). First of all, observe that the (negative) entropy Φ*, as a function of only the mean parameters µ, is implicitly defined; indeed, it is typically impossible to specify an explicit form for Φ*.
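For intuition, the log partition function of Eqn. (1b) and the mean-parameter identity of Eqn. (3) can be checked by brute force on a small binary model. This enumeration (a sketch with our own names, feasible only for tiny n) is exactly the computation that the relaxation developed below is designed to avoid.

```python
import itertools
import numpy as np

def ising_log_partition(theta_s, theta_st):
    """Brute-force log partition function Phi(theta) of Eq. (1b) for the
    Ising model on x in {-1,+1}^n with potentials {x_s} and {x_s x_t}.
    theta_st is a dict mapping edges (s, t) to coupling parameters."""
    n = len(theta_s)
    energies = []
    for x in itertools.product([-1.0, 1.0], repeat=n):
        e = sum(theta_s[s] * x[s] for s in range(n))
        e += sum(th * x[s] * x[t] for (s, t), th in theta_st.items())
        energies.append(e)
    m = max(energies)  # log-sum-exp for numerical stability
    return m + np.log(sum(np.exp(e - m) for e in energies))
```

The derivative identity (3) can then be verified numerically: a finite difference of Φ in θ_s matches the exact mean E_θ[x_s] computed by enumeration.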
Key exceptions are trees and hypertrees, for which the entropy is well known to decompose into a sum of local entropies defined by local marginals on the (hyper)edges [1]. Secondly, for a general graph with cycles, the marginal polytope MARG(G; φ) is defined by a number of inequalities that grows rapidly with the graph size [e.g., 2]. Trees and hypertrees again are important exceptions: in this case, the junction tree theorem [e.g., 1] provides a compact representation of the associated marginal polytopes. The Bethe approach (and its generalizations) can be understood as consisting of two steps: (a) replacing the exact entropy −Φ* with a tree (or hypertree) approximation; and (b) replacing the marginal polytope MARG(G; φ) with constraint sets defined by tree (or hypertree) consistency conditions. However, since the (hyper)tree approximations used do not bound the exact entropy, the optimal values of Bethe-type variational problems do not provide a bound on the value of the log partition function Φ(θ). Bounding Φ requires both an outer bound on the marginal polytope and an upper bound on the entropy −Φ*.

¹When φ_α corresponds to an indicator function, µ_α is a marginal probability; otherwise, this choice entails a minor abuse of terminology.

3 Log-determinant relaxation

In this section, we state and prove a set of upper bounds based on the solution of a variational problem involving determinant maximization and semidefinite constraints. Although the ideas and methods described here are more generally applicable, for the sake of clarity in exposition we focus here on the case of a binary vector x ∈ {−1, +1}^n of "spins". It is also convenient to define all problems with respect to the complete graph K_n (i.e., fully connected). We use the standard (minimal) Ising representation for a binary problem, in terms of the potential functions φ = {x_s | s ∈ V} ∪ {x_s x_t | (s, t)}. On the complete graph, there are d = n + (n choose 2) such potential functions in total.
Of course, any problem can be embedded into the complete graph by setting a subset of the {θ_st} parameters to zero. (In particular, for a graph G = (V, E), we simply set θ_st = 0 for all pairs (s, t) ∉ E.)

3.1 Outer bounds on the marginal polytope

We first focus on the marginal polytope MARG(K_n) ≡ MARG(K_n; φ) of valid dual variables {µ_s, µ_st}, as defined in Eqn. (5). In this section, we describe a set of semidefinite and linear constraints that any valid dual vector µ ∈ MARG(K_n) must satisfy.

3.1.1 Semidefinite constraints

Given an arbitrary vector µ ∈ R^d, consider the following (n + 1) × (n + 1) symmetric matrix:

    M_1[µ] := [ 1      µ_1    µ_2    ···   µ_n
                µ_1    1      µ_12   ···   µ_1n
                µ_2    µ_21   1      ···   µ_2n
                ⋮      ⋮      ⋮      ⋱     ⋮
                µ_n    µ_n1   µ_n2   ···   1   ]   (8)

The motivation underlying this definition is the following: suppose that the given dual vector µ actually belongs to MARG(K_n), in which case there exists some distribution p(x; θ) such that µ_s = Σ_x p(x; θ) x_s and µ_st = Σ_x p(x; θ) x_s x_t. Thus, if µ ∈ MARG(K_n), the matrix M_1[µ] can be interpreted as the matrix of second-order moments of the vector (1, x), as computed under p(x; θ). (Note in particular that the diagonal elements are all one, since x_s² = 1 when x_s ∈ {−1, +1}.) Since any such moment matrix must be positive semidefinite,² we have established the following:

Lemma 1 (Semidefinite outer bound). The binary marginal polytope MARG(K_n) is contained within the semidefinite constraint set:

    SDEF_1 := { µ ∈ R^d | M_1[µ] ⪰ 0 }.   (9)

This semidefinite relaxation can be further strengthened by including higher-order terms in the moment matrices [5].

3.1.2 Additional linear constraints

It is straightforward to augment these semidefinite constraints with additional linear constraints. Here we focus in particular on two classes of constraints, referred to as rooted and unrooted triangle inequalities by Deza and Laurent [2], that are of especial relevance in the graphical model setting.
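The moment-matrix interpretation of M_1[µ] can be made concrete: the sketch below (names are ours) builds M_1[µ] from an explicit distribution over {−1, +1}^n, and the resulting matrix always has unit diagonal and is positive semidefinite, as Lemma 1 requires.

```python
import numpy as np

def moment_matrix(p):
    """Build the (n+1) x (n+1) moment matrix M1[mu] of Eq. (8) from a
    distribution p over {-1,+1}^n, given as a dict: configuration -> prob.
    Rows/columns are indexed by z = (1, x), so M1 = E[z z^T]."""
    n = len(next(iter(p)))
    M = np.zeros((n + 1, n + 1))
    for x, px in p.items():
        z = np.concatenate(([1.0], x))
        M += px * np.outer(z, z)
    return M
```

Conversely, a vector µ whose matrix M_1[µ] has a negative eigenvalue cannot arise from any distribution, which is exactly what the outer bound SDEF_1 excludes.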
²To be explicit, letting z = (1, x), for any vector a ∈ R^{n+1} we have aᵀ M_1[µ] a = aᵀ E[z zᵀ] a = E[(aᵀ z)²], which is certainly non-negative.

Pairwise edge constraints: It is natural to require that the subset of mean parameters associated with each pair of random variables (x_s, x_t) — namely, µ_s, µ_t, and µ_st — specify a valid pairwise marginal distribution. Letting (a, b) take values in {−1, +1}², consider the set of four linear constraints of the following form:

    1 + a µ_s + b µ_t + ab µ_st ≥ 0.   (10)

It can be shown [11, 10] that these constraints are necessary and sufficient to guarantee the existence of a consistent pairwise marginal. By the junction tree theorem [1], this pairwise consistency guarantees that the constraints of Eqn. (10) provide a complete description of the binary marginal polytope for any tree-structured graph. Moreover, for a general graph with cycles, they are equivalent to the tree-consistent constraint set used in the Bethe approach [12] when applied to a binary vector x ∈ {−1, +1}^n.

Triplet constraints: Local consistency can be extended to triplets {x_s, x_t, x_u}, and more generally to higher-order subsets. For the triplet case, consider the following set of constraints (and permutations thereof) among the pairwise mean parameters {µ_st, µ_su, µ_tu}:

    µ_st + µ_su + µ_tu ≥ −1,   µ_st − µ_su − µ_tu ≥ −1.   (11)

It can be shown [11, 10] that these constraints, in conjunction with the pairwise constraints (10), are necessary and sufficient to ensure that the collection of mean parameters {µ_s, µ_t, µ_u, µ_st, µ_su, µ_tu} uniquely determines a valid marginal over the triplet (x_s, x_t, x_u). Once again, by applying the junction tree theorem [1], we conclude that the constraints (10) and (11) provide a complete characterization of the binary marginal polytope for hypertrees of width two.
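The four pairwise constraints of Eqn. (10) are cheap to check directly; a small sketch (our naming):

```python
import itertools

def pairwise_ok(mu_s, mu_t, mu_st, tol=1e-12):
    """Check the four pairwise constraints of Eq. (10):
    1 + a*mu_s + b*mu_t + a*b*mu_st >= 0 for all (a, b) in {-1,+1}^2.
    These hold iff (mu_s, mu_t, mu_st) extend to a valid pairwise marginal."""
    return all(1 + a * mu_s + b * mu_t + a * b * mu_st >= -tol
               for a, b in itertools.product([-1, 1], repeat=2))
```

Each constraint is simply the requirement that the probability mass the marginal would place on the configuration (x_s, x_t) = (a, b), namely (1 + a µ_s + b µ_t + ab µ_st)/4, is non-negative.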
It is worthwhile observing that this set of constraints is equivalent to those that are implicitly enforced by any Kikuchi approximation [12] with clusters of size three (when applied to a binary problem).

3.2 Gaussian entropy bound

We now turn to the task of upper bounding the entropy. Our starting point is the familiar interpretation of the Gaussian as the maximum entropy distribution subject to covariance constraints:

Lemma 2. The (differential) entropy h(x̃) := −∫ p(x̃) log p(x̃) dx̃ is upper bounded by the entropy (1/2) log det cov(x̃) + (n/2) log(2πe) of a Gaussian with matched covariance.

Of interest to us is the discrete entropy of a discrete-valued random vector x ∈ {−1, +1}^n, whereas the Gaussian bound of Lemma 2 applies to the differential entropy of a continuous-valued random vector. Therefore, we need to convert our discrete vector to the continuous space. In order to do so, we define a new continuous random vector via x̃ = (1/2)x + u, where u is a random vector independent of x, with each element independently and identically distributed³ as u_s ∼ U[−1/2, 1/2]. The motivation for rescaling x by 1/2 is to pack the boxes together as tightly as possible.

Lemma 3. We have h(x̃) = H(x), where h and H denote the differential and discrete entropies of x̃ and x, respectively.

Proof. By construction, the differential entropy can be decomposed as a sum of integrals over hyperboxes of unit volume, one for each configuration, over which the probability density of x̃ is constant.

³The notation U[a, b] denotes the uniform distribution on the interval [a, b].

3.3 Log-determinant relaxation

Equipped with these building blocks, we are now ready to state and prove a log-determinant relaxation for the log partition function.

Theorem 1. Let x ∈ {−1, +1}^n, and let OUT(K_n) be any convex outer bound on MARG(K_n) that is contained within SDEF_1.
Then there holds

    Φ(θ) ≤ max_{µ ∈ OUT(K_n)} { ⟨θ, µ⟩ + (1/2) log det( M_1(µ) + (1/3) blkdiag[0, I_n] ) } + (n/2) log(πe/2)   (12)

where blkdiag[0, I_n] is an (n + 1) × (n + 1) block-diagonal matrix with a zero in the upper-left block and I_n in the lower-right block. Moreover, the optimum is attained at a unique µ̂ ∈ OUT(K_n).

Proof. For any µ ∈ MARG(K_n), let x be a random vector with these mean parameters. Consider the continuous-valued random vector x̃ = (1/2)x + u. From Lemma 3, we have H(x) = h(x̃); combining this equality with Lemma 2, we obtain the upper bound H(x) ≤ (1/2) log det cov(x̃) + (n/2) log(2πe). Since x and u are independent and u ∼ U[−1/2, 1/2], we can write cov(x̃) = (1/4) cov(x) + (1/12) I_n. Next we use the Schur complement formula [8] to express the log determinant as follows:

    log det cov(x̃) = log det( M_1[µ] + (1/3) blkdiag[0, I_n] ) + n log(1/4).   (13)

Combining Eqn. (13) with the Gaussian upper bound leads to the following expression:

    H(x) = −Φ*(µ) ≤ (1/2) log det( M_1[µ] + (1/3) blkdiag[0, I_n] ) + (n/2) log(πe/2).

Substituting this upper bound into the variational representation of Eqn. (7), and using the fact that OUT(K_n) is an outer bound on MARG(K_n), yields Eqn. (12). By construction, the cost function is strictly convex, so the optimum is unique.

The inclusion OUT(K_n) ⊆ SDEF_1 in the statement of Theorem 1 guarantees that the matrix M_1(µ) will always be positive semidefinite. Importantly, the optimization problem in Eqn. (12) is a determinant maximization problem, for which efficient interior point methods have been developed [e.g., 8].

4 Experimental results

The relevance of the log-determinant relaxation for applications is two-fold: it provides upper bounds on the log partition function, and the maximizing arguments µ̂ ∈ OUT(K_n) of Eqn. (12) can be taken as approximations to the exact marginals of the distribution p(x; θ). To test its performance in computing approximate marginals, we performed extensive experiments on the complete graph (fully connected) and on the 2-D nearest-neighbor lattice model.
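Before turning to the experiments, note that the bound of Theorem 1 can be sanity-checked without an SDP solver: since the exact marginals lie in MARG(K_n) ⊆ OUT(K_n), evaluating the objective of Eqn. (12) at the exact mean parameters already yields an upper bound on Φ(θ), by the entropy inequality in the proof. The sketch below (our naming; brute-force enumeration, so tiny n only) checks this on a 3-node Ising model.

```python
import numpy as np

def logdet_objective(mu, Sigma, theta_s, theta_st):
    """Objective of Eq. (12) evaluated at mean parameters mu[s] = E[x_s]
    and Sigma[s, t] = E[x_s x_t] (so Sigma has unit diagonal)."""
    n = len(mu)
    M1 = np.zeros((n + 1, n + 1))
    M1[0, 0] = 1.0
    M1[0, 1:] = M1[1:, 0] = mu
    M1[1:, 1:] = Sigma
    blk = np.zeros((n + 1, n + 1))
    blk[1:, 1:] = np.eye(n) / 3.0  # (1/3) blkdiag[0, I_n]
    lin = theta_s @ mu + sum(th * Sigma[s, t] for (s, t), th in theta_st.items())
    sign, logdet = np.linalg.slogdet(M1 + blk)
    return lin + 0.5 * logdet + 0.5 * n * np.log(np.pi * np.e / 2)
```

The gap between this value and the exact Φ(θ) reflects the slack of the Gaussian entropy bound at the chosen mean parameters.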
We treated relatively small problems with 16 nodes so as to enable comparison to the exact answer. For any given trial, we specified the distribution $p(x; \theta)$ by randomly choosing $\theta$ as follows. The single node parameters were chosen as $\theta_s \sim U[-0.25, 0.25]$ independently⁴ for each node. For a given coupling strength $d_{\mathrm{coup}} > 0$, we investigated three possible types of coupling: (a) for repulsive interactions, $\theta_{st} \sim U[-2d_{\mathrm{coup}}, 0]$; (b) for mixed interactions, $\theta_{st} \sim U[-d_{\mathrm{coup}}, +d_{\mathrm{coup}}]$; (c) for attractive interactions, $\theta_{st} \sim U[0, 2d_{\mathrm{coup}}]$.

⁴Here $U[a, b]$ denotes the uniform distribution on $[a, b]$.

For each distribution $p(x; \theta)$, we performed the following computations: (a) the exact marginal probability $p(x_s = 1; \theta)$ at each node; and (b) approximate marginals computed from the Bethe approximation with the sum-product algorithm, or (c) log-determinant approximate marginals from Theorem 1 using the outer bound $\mathrm{OUT}(K_n)$ given by the first semidefinite relaxation $\mathrm{SDEF}_1$ in conjunction with the pairwise linear constraints in Eqn. (10). We computed the exact marginal values either by exhaustive summation (complete graph), or by the junction tree algorithm (lattices). We used the standard parallel message-passing form of the sum-product algorithm with a damping factor⁵ $\gamma = 0.05$. The log-determinant problem of Theorem 1 was solved using interior point methods [8]. For each graph (fully connected or grid), we examined a total of 6 conditions: 2 different potential strengths (weak or strong) for each of the 3 types of coupling (attractive, mixed, and repulsive). We computed the $\ell_1$-error $\frac{1}{n} \sum_{s=1}^{n} |p(x_s = 1; \theta) - \hat{\mu}_s|$, where $\hat{\mu}_s$ was the approximate marginal computed either by SP or by LD.
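The exhaustive-summation ground truth used above is straightforward for small $n$. The sketch below (our own, with $n = 5$ and a "mixed" coupling strength standing in for the paper's 16-node setup) samples $\theta$ as described and computes the exact log partition function and marginals $p(x_s = 1; \theta)$ by enumerating all $2^n$ configurations:

```python
import itertools
import math
import random

random.seed(1)

# Illustrative small instance: complete graph on n = 5 nodes with
# "mixed" couplings sampled as in the text.
n = 5
d_coup = 0.5
theta_s = [random.uniform(-0.25, 0.25) for _ in range(n)]
theta_st = {(s, t): random.uniform(-d_coup, d_coup)
            for s in range(n) for t in range(s + 1, n)}

def score(x):
    """<theta, phi(x)> for the Ising model p(x; theta) proportional to exp(score(x))."""
    val = sum(theta_s[s] * x[s] for s in range(n))
    val += sum(th * x[s] * x[t] for (s, t), th in theta_st.items())
    return val

configs = list(itertools.product([-1, 1], repeat=n))
Z = sum(math.exp(score(x)) for x in configs)
log_partition = math.log(Z)  # the quantity the relaxation upper-bounds

# Exact marginals p(x_s = 1; theta) by exhaustive summation.
marginals = [sum(math.exp(score(x)) for x in configs if x[s] == 1) / Z
             for s in range(n)]
```

At $n = 16$ the same enumeration costs $2^{16}$ terms, which is exactly why the paper restricts the exact comparison to small problems.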
Graph  Coupling  Strength      SP Median  SP Range      LD Median  LD Range
Full   R         (0.25, 0.25)  0.035      [0.01, 0.10]  0.020      [0.01, 0.03]
Full   R         (0.25, 0.50)  0.066      [0.03, 0.20]  0.017      [0.01, 0.04]
Full   M*        (0.25, 0.25)  0.003      [0.00, 0.04]  0.019      [0.01, 0.03]
Full   M         (0.25, 0.50)  0.035      [0.01, 0.31]  0.010      [0.01, 0.06]
Full   A*        (0.25, 0.06)  0.021      [0.00, 0.08]  0.026      [0.01, 0.06]
Full   A         (0.25, 0.12)  0.422      [0.08, 0.86]  0.023      [0.01, 0.09]
Grid   R         (0.25, 1.0)   0.285      [0.04, 0.59]  0.041      [0.01, 0.12]
Grid   R         (0.25, 2.0)   0.342      [0.04, 0.78]  0.033      [0.00, 0.12]
Grid   M*        (0.25, 1.0)   0.008      [0.00, 0.20]  0.016      [0.01, 0.02]
Grid   M         (0.25, 2.0)   0.053      [0.01, 0.54]  0.032      [0.01, 0.11]
Grid   A         (0.25, 1.0)   0.404      [0.06, 0.90]  0.037      [0.01, 0.13]
Grid   A         (0.25, 2.0)   0.550      [0.06, 0.94]  0.031      [0.00, 0.12]

Table 1. Statistics of the $\ell_1$-approximation error for the sum-product (SP) and log-determinant (LD) methods for the fully connected graph $K_{16}$, as well as the 4-nearest neighbor grid with 16 nodes, with varying coupling and potential strengths.

Table 1 shows quantitative results for 100 trials performed in each of the 12 experimental conditions, including only those trials for which SP converged. The potential strength is given as the pair $(d_{\mathrm{obs}}, d_{\mathrm{coup}})$; note that $d_{\mathrm{obs}} = 0.25$ in all trials. For each method, we show the sample median, and the range [min, max] of the errors. Overall, the performance of LD is better than that of SP, and often substantially so. The performance of SP is slightly better in the regime of weak coupling and relatively strong observations ($\theta_s$ values); see the entries marked with * in the table. In the remaining cases, the LD method outperforms SP, and with a large margin for many examples with strong coupling. The two methods also differ substantially in the ranges of the approximation error.
The SP method exhibits some instability, with the error for certain problems being larger than 0.5; for the same problems, the LD error ranges are much smaller, with a worst-case maximum error over all trials and conditions of 0.13. In addition, the behavior of SP can change dramatically between the weakly coupled and strongly coupled conditions, whereas the LD results remain stable.

⁵More precisely, we updated messages in the log domain as $\gamma \log M_{st}^{\mathrm{new}} + (1 - \gamma) \log M_{st}^{\mathrm{old}}$.

5 Discussion

In this paper, we developed a new method for approximate inference based on the combination of a Gaussian entropy bound with semidefinite constraints on the marginal polytope. The resultant log-determinant maximization problem can be solved by efficient interior point methods [8]. In experimental trials, the log-determinant method was either comparable or better than the sum-product algorithm, and by a substantial margin for certain problem classes. Of particular interest is that, in contrast to the sum-product algorithm, the performance degrades gracefully as the interaction strength is increased. It can be shown [11, 10] that in the zero-temperature limit, the log-determinant relaxation (12) reduces to a class of semidefinite relaxations that are widely used in combinatorial optimization. One open question is whether techniques for bounding the performance of such semidefinite relaxations [e.g., 3] can be adapted to the finite temperature case. Although this paper focused exclusively on the binary problem, the methods described here can be extended to other classes of random variables. It remains to develop a deeper understanding of the interaction between the two components of these approximations (i.e., the entropy bound, and the outer bound on the marginal polytope), as well as how to tailor approximations to particular graph structures.
Finally, semidefinite constraints can be combined with entropy approximations (preferably convex) other than the Gaussian bound used in this paper, among them “convexified” Bethe/Kikuchi entropy approximations [9]. Acknowledgements: Thanks to Constantine Caramanis and Laurent El Ghaoui for helpful discussions. Work funded by NSF grant IIS-9988642, ARO MURI DAA19-02-1-0383, and a grant from Intel Corporation. References [1] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic networks and expert systems. Statistics for Engineering and Information Science. Springer-Verlag, 1999. [2] M. Deza and M. Laurent. Geometry of cuts and metric embeddings. Springer-Verlag, New York, 1997. [3] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42:1115– 1145, 1995. [4] M. Jordan, editor. Learning in graphical models. MIT Press, Cambridge, MA, 1999. [5] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001. [6] R. J. McEliece and M. Yildirim. Belief propagation on partially ordered sets. In D. Gilliam and J. Rosenthal, editors, Mathematical Theory of Systems and Networks. Institute for Mathematics and its Applications, 2002. [7] G. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970. [8] L. Vandenberghe, S. Boyd, and S. Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19:499–533, 1998. [9] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. In Uncertainty in Artificial Intelligence, volume 18, pages 536–543, August 2002. [10] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical report, UC Berkeley, Department of Statistics, No. 649, 2003. [11] M. J. 
Wainwright and M. I. Jordan. Semidefinite relaxations for approximate inference on graphs with cycles. Technical report, UC Berkeley, UCB/CSD-3-1226, January 2003. [12] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Technical Report TR2001-22, Mitsubishi Electric Research Labs, January 2002.
Discriminating deformable shape classes S. Ruiz-Correa†, L. G. Shapiro†, M. Meilă‡ and G. Berson£ †Department of Electrical Engineering ‡Department of Statistics £Division of Medical Genetics, School of Medicine University of Washington, Seattle, WA 98105

Abstract

We present and empirically test a novel approach for categorizing 3-D free form object shapes represented by range data. In contrast to traditional surface-signature based systems that use alignment to match specific objects, we adapted the newly introduced symbolic-signature representation to classify deformable shapes [10]. Our approach constructs an abstract description of shape classes using an ensemble of classifiers that learn object class parts and their corresponding geometrical relationships from a set of numeric and symbolic descriptors. We used our classification engine in a series of large scale discrimination experiments on two well-defined classes that share many common distinctive features. The experimental results suggest that our method outperforms traditional numeric signature-based methodologies.¹

1 Introduction

Categorizing objects from their shape is an unsolved problem in computer vision that entails the ability of a computer system to represent and generalize shape information on the basis of a finite amount of prior data. For automatic categorization to be of practical value, a number of important issues must be addressed. As pointed out in [10], how to construct a quantitative description of shape that accounts for the complexities in the categorization process is currently unknown. From a practical perspective, human perception, knowledge, and judgment are used to elaborate qualitative definitions of a class and to make distinctions among different classes.
Nevertheless, categorization in humans is a standing problem in Neurosciences and Psychology, and no one is certain what information is utilized and what kind of processing takes place when constructing object categories [8]. Consequently, the task of classifying object shapes is often cast in the framework of supervised learning. Most 3-D object recognition research in computer vision has heavily used the alignment-verification methodology [11] for recognizing and locating specific objects in the context of industrial machine vision. The number of successful approaches is rather diverse and spans many different axes. However, only a handful of studies have addressed the problem of categorizing shape classes containing a significant amount of shape variation and missing information frequently found in real range scenes. Recently, Osada et al. [9] developed a shape representation to match similar objects. The so-called shape distribution encodes the shape information of a complete 3-D object as a probability distribution sampled from a shape function. Discrimination between classes is attempted by comparing a deterministic similarity measure based on an Lp norm. Funkhouser et al. [1] extended the work on shape distribution by developing a representation of shape for object retrieval.

¹This research is based upon work supported by NSF Grant No. IIS-0097329 and NIH Grant No. P20LM007714. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF or NIH.

The representation is based on a spherical harmonics expansion of the points of a polygonal surface mesh rasterized into a voxel grid. Query objects are matched to the database using a nearest neighbor classifier. In [7], Martin et al. developed a physical model for studying neuropathological shape deformations using Principal Component Analysis and a Gaussian quadratic classifier.
Golland [2] introduced the discriminative direction for kernel classifiers for quantifying morphological differences between classes of anatomical structures. The method utilizes the distance-transform representation to characterize shape, but it is not directly applicable to range data due to the dependence of the representation on the global structure of the objects. In [10], we developed a shape novelty detector for recognizing classes of 3-D object shapes in cluttered scenes. The detector learns the components of a shape class and their corresponding geometric configuration from a set of surface signatures embedded in a Hilbert space. The numeric signatures encode characteristic surface features of the components, while the symbolic signatures describe their corresponding spatial arrangement. The encouraging results obtained with our novelty detector motivated us to take a step further and extend our algorithm to accommodate classification by developing a 3-D shape classifier to be described in the next section. The basic idea is to generalize existing surface representations that have proved effective in recognizing specific 3-D objects to the problem of object classes by using a “symbolic” representation that is resistant to deformation as opposed to a numeric representation that is tied to a specific shape. We were also motivated by applications in medical diagnosis and human interface design where 3-D shape information plays a significant role. Detecting congenital abnormalities from craniofacial features [3], identifying cancerous cells using microscopic tomography, and discriminating 3-D facial gestures are some of the driving applications. The paper is organized as follows. Section 2 describes our proposed method. Section 3 is devoted to the experimental results. Section 4 discusses relevant aspects of our work and concludes the paper.

2 Our Approach

We develop our shape classifier in this section.
For the sake of clarity we concentrate on the simplest architecture capable of performing binary classification. Nevertheless, the approach admits a straightforward extension to a multi-class setting. The basic architecture consists of a cascade of two classification modules. Both modules have the same structure (a bank of novelty detectors and a multi-class classifier) but operate on different input spaces. The first module processes numeric surface signatures and the second, symbolic ones. These shape descriptors characterize our classes at two different levels of abstraction. 2.1 Surface signatures The surface signatures developed by Johnson and Hebert [5] are used to encode surface shape of free form objects. In contrast to the shape distributions and harmonic descriptors, their spatial scale can be enlarged to take into account local and non-local effects, which makes them robust against the clutter and occlusion generally present in range data. Experimental evidence has shown that the spin image and some of its variants are the preferred choice for encoding surface shape whenever the normal vectors of the surfaces of the objects can be accurately estimated [11]. The symbolic signatures developed in [10] are used at the next level to describe the spatial configuration of labeled surface regions. Numeric surface signatures. A spin-image [5] is a two-dimensional histogram computed at an oriented point P of the surface mesh of an object (see Figure 1). The histogram accumulates the coordinates α and β of a set of contributing points Q on the mesh. Contributing points are those that are within a specified distance of P and for which the surface normal forms an angle of less than the specified size with the surface normal N of P. This angle is called the support angle. 
Figure 1: The spin image for point P is constructed by accumulating in a 2-D histogram the coordinates α and β of a set of contributing points (such as Q) on the mesh representing the object.

As shown in Figure 1, the coordinate α is the distance from P to the projection of Q onto the tangent plane TP at point P; β is the distance from Q to this plane. We use spin images as the numeric signatures in this work.

Symbolic surface signatures. Symbolic surface signatures (Fig. 2) are somewhat related to numeric surface signatures in that they also start with a point P on the surface mesh and consider a set of contributing points Q, which are still defined in terms of the distance from P and support angle. The main difference is that they are derived from a labeled surface mesh (shown in Figure 2a); each vertex of the mesh has an associated symbolic label referencing a surface region or component in which it lies. The components are constructed using a region growing algorithm to be described in Section 2.2. For symbolic surface signature construction, the vector PQ in Figure 2b is projected to the tangent plane at P where a set of orthogonal axes γ and δ have been defined. The direction of the δ-γ axes is arbitrarily defined since no curvature information was used to specify preferred directions. This ambiguity is resolved by the methods described in Section 2.2. The discretized version of the γ and δ coordinates of PQ are used to index a 2D array, and the indexed position of the array is set to the component label of Q. Note that it is possible that multiple points Q that have different labels project into the same bin. In this case, the label that appeared most frequently is assigned to the bin. The resultant array is the symbolic surface signature at point P. Note that the signature captures the relationships among the labeled regions on the mesh.
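The numeric spin-image accumulation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bin size, image width, and support-angle threshold are illustrative choices, and the distance cutoff for contributing points is folded into the histogram extent.

```python
import math

def spin_image(P, N, contributing, bin_size=1.0, width=8, support_angle_deg=60.0):
    """Accumulate the (alpha, beta) coordinates of contributing points into a
    2-D histogram about the oriented point (P, N), as in Figure 1.

    alpha: distance from P to the projection of Q onto the tangent plane at P
    beta:  signed distance from Q to that plane
    """
    image = [[0] * width for _ in range(width)]
    cos_min = math.cos(math.radians(support_angle_deg))
    for Q, NQ in contributing:
        # Support-angle test: keep Q only if its normal is close to N.
        if sum(N[i] * NQ[i] for i in range(3)) < cos_min:
            continue
        d = [Q[i] - P[i] for i in range(3)]
        beta = sum(d[i] * N[i] for i in range(3))
        alpha = math.sqrt(max(sum(c * c for c in d) - beta * beta, 0.0))
        col = int(alpha / bin_size)
        row = int(beta / bin_size + width / 2)  # center beta = 0 mid-image
        if 0 <= col < width and 0 <= row < width:
            image[row][col] += 1
    return image
```

For example, with P at the origin and N = (0, 0, 1), a contributing point Q = (3, 4, 0) with an agreeing normal lands at alpha = 5, beta = 0, while a point whose normal is perpendicular to N fails the support-angle test and is discarded.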
The signature is shown as a labeled color image in Figure 2c.

Figure 2: The symbolic surface signature for point P on a labeled surface mesh model of a human head. The signature is represented as a labeled color image for illustration purposes.

2.2 Classifying shape classes

We consider the classification task for which we are given a set of l surface meshes C = {C1, · · · , Cl} representing two classes of object shapes. Each surface mesh is labeled by y ∈ {±1}. The problem is to use the given meshes and the labels to construct an algorithm that predicts the label y of a new surface mesh C. We let C+1 (C−1) denote the shape class labeled with y = +1 (y = −1, respectively). We start by assuming that the correspondences between all the points of the instances for each class Cy are known. This can be achieved by using a morphable surface models technique such as the one described in [10].

Finding shape class components. Before shape class learning can take place, the salient feature components associated with C+1 and C−1 must be specified. Each component of a class is identified by a particular region located on the surface of the class members. For each class C+1 and C−1 the components are constructed one at a time using a region growing algorithm. This algorithm iteratively constructs a classification function (novelty detector), which captures regions in the space of numeric signatures S that approximately correspond to the support of an assumed probability distribution function FS associated with the class component under consideration. In this context, a shape class component is defined as the set of all mesh points of the surface meshes in a shape class whose numeric signatures lie inside of the support region estimated by the classification function. The region growing algorithm proceeds as follows.

Figure 3: The component R was grown around the critical point p using the algorithm described in the text. Six typical models of the training set are shown.
The numeric signatures for the critical point p of five of the models are also shown. Their image width is 70 pixels and their region of influence covers about three quarters of the surface mesh models.

Step I (Region Growing). The input of this phase is a set of surface meshes that are samples of an object class Cy.

1. Select a set of critical points on a training object for class Cy. Let my be the number of critical points per object. The number my and the locations of the critical points are chosen by hand at this time. Note that the critical points chosen for class C+ can differ from the critical points chosen for class C−.
2. Use known correspondences to find the corresponding critical points on all training instances in C belonging to Cy.
3. For each critical point p of a class Cy, compute the numeric signatures at the corresponding points of every training instance of Cy; this set of signatures is the training set Tp,y for critical point p of class Cy.
4. For each critical point p of class Cy, train a component detector (implemented as a ν-SVM novelty detector [12]) to learn a component about p, using the training set Tp,y.

The component detector will actually grow a region about p using the shape information of the numeric signatures in the training sample. The regions are grown for each critical point individually using the following growing phase. Let p be one of the m critical points. The performance of the component detector for point p can be quantified by calculating a bound on the expected probability of error E on the target set as $E = \#SV_p / |C_y|$, where $\#SV_p$ is the number of support vectors in the component detector for p, and $|C_y|$ the number of elements with label y in C. Using the classifier for point p, perform an iterative component growing operation to expand the component about p. Initially, the component consists only of point p. An iteration of the procedure consists of the following steps.
1) Select a point that is an immediate neighbor of one of the points in the component and is not yet in the component. 2) Retrain the classifier with the current component plus the new point. 3) Compute the error E′ for this classifier. 4) If the new error E′ is lower than the previous error E, add the new point to the component and set E = E′. 5) This continues until no more neighbors can be added to the component. This region growing approach is related to the one used by Heisele et al. [4] for categorizing objects in 2-D images. Figure 3 shows an example of a component grown by this technique about critical point p on a training set of 200 human faces from the University of South Florida database. At the end of step I, there are my component detectors, each of which can identify the component of a particular critical point of the object shape class Cy. That is, when applied to a surface mesh, each component detector will determine which vertices it thinks belong to its learned component (positive surface points), and which vertices do not.

Step II. The input of this step is the training set of numeric signatures and their corresponding labels for each of the $m = m_{+1} + m_{-1}$ components. The labels are determined by the step-I component detectors previously applied to C+1 and C−1. The output is a component classifier (multi-class ν-SVM) that, when given a positive surface point of a surface mesh previously processed with the bank of component detectors, will determine the particular component of the m components to which this point belongs.

Learning spatial relationships. The ensemble of component detectors and the component classifier described above define our classification module mentioned at the beginning of the section.
A central feature of this module is that it can be used for learning the spatial configuration of the labeled components just by providing as input the set C of training surface meshes with each vertex labeled with the label of its component or zero if it does not belong to a component. The algorithm proceeds in the same fashion as described above except that the classifiers operate on the symbolic surface signatures of the labeled mesh. The signatures are embedded in a Hilbert space by means of a Mercer kernel that is constructed as follows. Let A and B be two square matrices of dimension N storing arbitrary labels. Let A ∗ B denote a binary square matrix whose elements are defined as $[A * B]_{ij} = \mathrm{match}([A]_{ij}, [B]_{ij})$, where $\mathrm{match}(a, b) = 1$ if $a = b$, and 0 otherwise. The symmetric mapping $\langle A, B \rangle = (1/N^2) \sum_{ij} [A * B]_{ij}$, whose range is the interval [0, 1], can be interpreted as the cosine of the angle $\theta_{AB}$ between two unit vectors on the unit sphere lying within a single quadrant. The angle $\theta_{AB}$ is the geodesic distance between them. Our kernel function is defined as $k(A, B) = \exp(-\theta_{AB}^2 / \sigma^2)$. Since symbolic surface signatures are defined up to a rotation, we use the virtual SV method for training all the classifiers involved. The method consists of training a component detector on the signatures to calculate the support vectors. Once the support vectors are obtained, new virtual support vectors are extracted from the labeled surface mesh in order to include the desired invariance; that is, a number r of rotated versions of each support vector is generated by rotating the δ-γ coordinate system used to construct each symbolic signature (see Fig. 2). Finally, the novelty detector used by the algorithm is trained with the enlarged data set consisting of the original training data and the set of virtual support vectors.
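The match-based kernel above is simple to implement directly from the formulas. A minimal sketch follows; the function names are ours and σ = 0.5 is an illustrative value, since the paper does not fix it here:

```python
import math

def box_product_mean(A, B):
    """<A, B> = (1/N^2) * sum_ij match([A]_ij, [B]_ij), with values in [0, 1]."""
    N = len(A)
    hits = sum(1 for i in range(N) for j in range(N) if A[i][j] == B[i][j])
    return hits / float(N * N)

def symbolic_signature_kernel(A, B, sigma=0.5):
    """k(A, B) = exp(-theta_AB^2 / sigma^2), where theta_AB = arccos(<A, B>)
    is the geodesic distance between the two signatures on the unit sphere."""
    theta = math.acos(box_product_mean(A, B))
    return math.exp(-(theta ** 2) / (sigma ** 2))
```

Since the match fraction lies in [0, 1], the angle $\theta_{AB}$ lies in $[0, \pi/2]$ (a single quadrant), identical signatures give $k(A, A) = 1$, and the kernel decays smoothly as the label arrays disagree.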
The worst-case complexity of the classification module is $O(nc^2s)$, where n is the number of vertices of the input mesh, s is the size of the input signatures (either numeric or symbolic) and c is the number of novelty detectors. In the classification experiments to be described below, typical values for n, s and c are 10,000, 2,500 and 8, respectively.

A classification example. An architecture capable of discriminating two shape classes consists of a cascade of two classification modules. The first module identifies the components of each shape class, while the second verifies the geometric consistency (spatial relationships) of the components. Figure 4 illustrates the classification procedure on two sample surface meshes from a test set of 200 human heads. The first mesh (Figure 4 a) belongs to the class of healthy individuals, while the second (Figure 4 e) belongs to the class of individuals with a congenital syndrome that produces a pathological craniofacial deformation. The input classification module was trained with a set of 400 surface meshes and 4 critical points per class to recognize the eight components shown in Figure 4 b and f. The first four components are associated with healthy heads and the rest with the malformed ones. Each of the test surface meshes was individually processed as follows. Given an input surface mesh to the first classification module, the classifier ensemble (component detectors and component classifier) is applied to the numeric surface signatures of its points (Figure 4 a and e). A connected components algorithm is then applied to the result and components of size below a threshold (10 mesh points) are discarded. After this process the resulting labeled mesh is fed to the second classification module that was trained with 400 labeled meshes and two critical points to recognize two new components. The first component was grown around the point P in Figure 4 a. The second component was grown around point Q in Figure 4 e.
The symbolic signatures inside the region around P encode the geometric configuration of three of the four components learned by the first module (healthy heads), while the symbolic signatures around Q encode the geometric configuration of three of the remaining four components (malformed heads); see Figure 4 b and f. Consequently, the points of the output mesh of the second module will be set to “+1” if they belong to learned symbolic signatures associated with the healthy heads (Figure 4 c), and “-1” otherwise (Figure 4 g). Finally, the filtering algorithms described above are applied to the output mesh. Figure 4 c (g) shows the region found by our algorithm that corresponds to the shape class model of normal (respectively abnormal) head.

Figure 4: Binary classification example. a) and e) Mesh models of normal and abnormal heads, respectively. b) and f) Output of the first classification module. Components 1-4 are associated with healthy individuals while components 5-8, with unhealthy ones. Labeled points outside the bounded regions correspond to false positives. c) and g) Output of the second classification module. d) and h) Normalized classifier margin of the components associated with the second classification module. Red points represent high confidence values while blue points represent low values.

3 Experiments

We used our classifier in a series of discrimination tasks with deformable 3-D human heads and faces. All data sets were split into training and testing samples. For classification with human heads the data consisted of 600 surface mesh models (400 training samples and 200 testing samples). The models had a resolution of 1 mm (∼30,000 points). For the faces, the data sets consisted of 300 surface meshes (200 training samples and 100 testing samples). The corresponding mesh resolution was set to about 0.8 mm (∼70,000 points).
All the surface models considered here were obtained from range data scanners and all the deformable models were constructed using the methods described in [10]. We tested the stability in the formation of shape class components using the faces data set. This set contains a significant amount of shape variability. It includes models of real subjects of different gender, race, age (young and mature adults) and facial gesture (smiling vs. neutral). Typical samples are shown in Figure 3. The first module of our classifier must generate stable components to allow the second module to discriminate their corresponding geometric configurations. We trained the first classification module with a set of 200 faces using critical points arbitrarily located on the cheek, chin, forehead and philtrum of the surface models. The trained module was then applied to the testing faces to identify the corresponding components. The component associated with the forehead was correctly identified in 86% of the testing samples. This rate is reasonably high considering the amount of shape variability in the data set (Fig. 3). The percentages of identified components associated with the cheek, chin and philtrum were 86%, 89% and 82%, respectively. We performed classification of normal versus abnormal human heads, a task that often occurs in medical settings. The abnormalities considered are related to two genetic syndromes that can produce severe craniofacial deformities². Our goal was to evaluate the performance of our classifier in discriminating examples from two well-defined classes where only a very fine distinction exists. In our setup, the classes share many common features. This makes the classification difficult even for a trained physician. In Task I, the classifier attempted to discriminate between test samples that were 100% normal or 100% affected by each of the two model syndromes (Tasks I-A and I-B).
Task II was similar, except that the classifier was presented with examples with varying degrees of abnormality. The surface meshes of each of these examples were convex combinations of normal and abnormal heads. The degree of overlap between the resulting classes made the discrimination process more difficult. Our rationale was to drive a realistic task to its limit in order to evaluate the discrimination capabilities of the classifier. High discrimination power could be useful to quantitatively evaluate cases that are otherwise difficult to diagnose, even by human standards. The results of the experiments are summarized in Table 1. Our shape classifier was able to discriminate with high accuracy between normal and abnormal models. It was also able to discriminate classes that share a significant amount of common shape features (see II-B* in Table 1). We compared the performance of our approach with a signature-based method [11] that uses alignment for matching objects and is robust to scene clutter and occlusion. As we expected, a pilot study showed that the signature-based method performs poorly in Tasks I-A and I-B with an average classification rate close to 43%. The methods cited in the introduction were not considered for direct comparison, because they use global shape representations that were designed for classifying complete 3-D models. Our approach using symbolic signatures can operate on single-view data sets containing partial model information, as shown by the experimental results performed on several shape classes [10].

Test sample mixture                   Accuracy (%)
I-A   (100% normal - 0% abnormal)      98
I-B   (100% normal - 0% abnormal)     100
II-B  (65% normal - 35% abnormal)      98
II-B  (50% normal - 50% abnormal)      97
II-B* (25% normal - 75% abnormal)      92
II-B  (15% normal - 85% abnormal)      48

Table 1: Classification accuracy rate (%) for discrimination between the above test samples versus 100% abnormal test samples.
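The graded Task II examples are built as convex combinations of corresponded meshes. A minimal sketch of that blending step follows; the helper name and vertex layout are hypothetical, and the point correspondence is assumed to come from the morphable-model construction of [10]:

```python
def blend_meshes(verts_normal, verts_abnormal, t):
    """Vertex-wise convex combination (1 - t) * normal + t * abnormal.

    Assumes the two meshes are in full point correspondence; e.g. t = 0.75
    would yield a "25% normal - 75% abnormal" test example.
    """
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie in [0, 1] for a convex combination")
    if len(verts_normal) != len(verts_abnormal):
        raise ValueError("meshes must be in point correspondence")
    return [tuple((1.0 - t) * a + t * b for a, b in zip(va, vb))
            for va, vb in zip(verts_normal, verts_abnormal)]
```

Because the combination is convex, every intermediate mesh stays within the line segment joining the two corresponded shapes, which is what makes the 15/85 mixture genuinely hard to tell apart from a fully abnormal head.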
4 Discussion and Conclusion We presented a supervised approach to the classification of 3-D shapes represented by range data that learns class components and their geometrical relationships from surface descriptors. We performed preliminary classification experiments on models of human heads (normal vs. abnormal) and studied the stability of the formation of class components using a collection of real face models containing a large amount of shape variability. We obtained promising results: the classification rates were high, and the algorithm was able to grow consistent class components despite the variance. We want to stress which parts of our approach are essential and which are modifiable. The numeric and symbolic shape descriptors considered here are important. They are locally defined, but they convey a certain amount of global information. For example, the spin image defined on the forehead (point P) in Figure 3 encodes information about the shape of most of the face (including the chin). As the image width increases, the spin image becomes more descriptive. Spin images and some variants [11] are reliable for encoding surface shape in the present context. Other descriptors, such as curvature-based or harmonic signatures, are not descriptive enough or lack robustness to scene clutter and occlusion. In the classification experiments described above, we did not perform any kind of feature selection for choosing the critical points. Nevertheless, the shape descriptors captured enough global information to allow a classifier to discriminate between the distinctive features of normal and abnormal heads. The structure of the classification module (bank of novelty detectors and multi-class classifier) is important. (Footnote 2: Test samples were obtained from models with craniofacial features based upon either the Greig cephalopolysyndactyly (A) or the trisomy 9 mosaic (B) syndromes [6].) 
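As background, the spin-image descriptor discussed above can be sketched in the standard Johnson-Hebert formulation [5]: each surface point x is mapped, relative to an oriented basis point (p, n), to a radial distance alpha and a signed height beta, and the pairs are accumulated into a 2-D histogram. The bin size, image width and toy data below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def spin_image(points, p, n, bin_size=0.01, width=16):
    """Spin image at oriented point (p, n): alpha is the distance of each
    point to the line through p along n, beta is its signed height along n;
    (alpha, beta) pairs are binned into a 2-D histogram."""
    n = n / np.linalg.norm(n)
    v = points - p
    beta = v @ n                                           # signed height
    alpha = np.sqrt(np.maximum(np.sum(v * v, axis=1) - beta ** 2, 0.0))
    img = np.zeros((2 * width, width))
    i = np.floor(beta / bin_size).astype(int) + width      # rows: height
    j = np.floor(alpha / bin_size).astype(int)             # cols: radius
    keep = (i >= 0) & (i < 2 * width) & (j >= 0) & (j < width)
    np.add.at(img, (i[keep], j[keep]), 1)                  # accumulate counts
    return img

# Toy usage: a random surface patch around the basis point
pts = np.random.default_rng(4).normal(0.0, 0.05, size=(500, 3))
img = spin_image(pts, p=np.zeros(3), n=np.array([0.0, 0.0, 1.0]))
```

Because (alpha, beta) is invariant to rotations about n, the descriptor is pose-invariant in that axis, which is part of what makes it robust in cluttered scenes.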
The experimental results showed us that the output of the novelty detectors is not always reliable, and the multi-class classifier becomes critical for constructing stable and consistent class components. In the context of our medical application, the performance of our novelty detectors can be improved by incorporating prior information into the classification scheme. Maximum entropy classifiers and an extension of Bayes point machines to the one-class setting are being investigated as possible alternatives. The region-growing algorithm for finding class components is not critical. The essential point is to generate groups of neighboring surface points whose shape descriptors are similar but distinctive enough from the signatures of other components. There are several issues to investigate. 1) Our method is able to model shape classes containing significant shape variance and can absorb scale changes of about 20%. A multiresolution approach could be used for applications that require full scale invariance. 2) We used large range data sets for training our classifier. However, larger sets are required in order to capture the shape variability of the abnormal craniofacial features due to race, age and gender. We are currently collecting data from various medical sources to create a database for implementing and testing a semi-automated diagnosis system. The data includes 3-D models constructed from range data and CT scans. The usability of the system will be evaluated by a panel of expert geneticists.

References
[1] T. Funkhouser, P. Min, M. Kazhdan, J. Chen, A. Halderman, D. Dobkin, and D. Jacobs, “A Search Engine for 3D Models,” ACM Transactions on Graphics, 22(1), pp. 83-105, January 2003.
[2] P. Golland, “Discriminative Direction for Kernel Classifiers,” In: Advances in Neural Information Processing Systems, 13, Vancouver, Canada, 745-752, 2001.
[3] P. Hammond, T. J. Hunton, M. A. Patton, and J. E. Allanson, 
“Delineation and Visualization of Congenital Abnormality using 3-D Facial Images,” In: Intelligent Data Analysis in Medicine and Pharmacology, MEDINFO, 2001, London.
[4] B. Heisele, T. Serre, M. Pontil, T. Vetter, and T. Poggio, “Categorization by Learning and Combining Object Parts,” In: Advances in Neural Information Processing Systems, 14, Vancouver, Canada, Vol. 2, 1239-1245, 2002.
[5] A. E. Johnson and M. Hebert, “Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes,” IEEE Trans. Pattern Analysis and Machine Intelligence, 21(5), pp. 433-449, 1999.
[6] K. L. Jones, Smith's Recognizable Patterns of Human Malformation, 5th Ed., W. B. Saunders Company, 1999.
[7] J. Martin, A. Pentland, S. Sclaroff, and R. Kikinis, “Characterization of Neuropathological Shape Deformations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 2, 1998.
[8] D. L. Medin and C. M. Aguilar, “Categorization,” In: R. A. Wilson and F. C. Keil (Eds.), The MIT Encyclopedia of the Cognitive Sciences, Cambridge, MA, 1999.
[9] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, “Matching 3-D Models with Shape Distributions,” Shape Modeling International, 2001, pp. 154-166.
[10] S. Ruiz-Correa, L. G. Shapiro, and M. Meilă, “A New Paradigm for Recognizing 3-D Object Shapes from Range Data,” Proceedings of the IEEE Computer Society International Conference on Computer Vision 2003, Vol. 2, pp. 1126-1133.
[11] S. Ruiz-Correa, L. G. Shapiro, and M. Meilă, “A New Signature-based Method for Efficient 3-D Object Recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2001, Vol. 1, pp. 769-776.
[12] B. Schölkopf and A. J. Smola, Learning with Kernels, The MIT Press, Cambridge, MA, 2002.
Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons Thomas Natschläger, Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz A-8010 Graz, Austria {tnatschl, maass}@igi.tugraz.at Abstract We employ an efficient method using Bayesian and linear classifiers for analyzing the dynamics of information in high-dimensional states of generic cortical microcircuit models. It is shown that such recurrent circuits of spiking neurons have an inherent capability to carry out rapid computations on complex spike patterns, merging information contained in the order of spike arrival with previously acquired context information. 1 Introduction Common analytical tools of computational complexity theory cannot be applied to recurrent circuits with complex dynamic components, such as biologically realistic neuron models and dynamic synapses. In this article we explore the capability of information theoretic concepts to throw light on emergent computations in recurrent circuits of spiking neurons. This approach is attractive since it may potentially provide a solid mathematical basis for understanding such computations. But it is methodologically difficult because of systematic errors caused by under-sampling problems that are ubiquitous even in extensive computer simulations of relatively small circuits. Previous work on these methodological problems had focused on estimating the information in spike trains, i.e. temporally extended protocols of the activity of one or a few neurons. In contrast, this paper addresses methods for estimating the information that is instantly available to a neuron that has synaptic connections to a large number of neurons. We will define the specific circuit model used for our study in section 2 (although the methods that we apply appear to be useful for a much wider class of analog and digital recurrent circuits). 
The combination of information theoretic methods with methods from machine learning that we employ is discussed in section 3. The results of applications of these methods to the analysis of the distribution and dynamics of information in a generic recurrent circuit of spiking neurons are presented in section 4. Applications of these methods to the analysis of emergent computations are discussed in section 5.

Figure 1: Input distribution used throughout the paper. Each input consists of 5 spike trains of length 800 ms generated from 4 segments of length 200 ms each. A: For each segment, 2 templates 0 and 1 were generated randomly (Poisson spike trains with a frequency of 20 Hz). B: The actual input spike trains were generated by choosing randomly for each segment i, i = 1, ..., 4, one of the two associated templates (si = 0 or si = 1), and then generating a noisy version by moving each spike by an amount drawn from a Gaussian distribution with mean 0 and SD 4 ms.

2 Our study case: A Generic Neural Microcircuit Model As our study case for analyzing information in high-dimensional circuit states we used a randomly connected circuit with sparse, primarily local connectivity consisting of 800 leaky integrate-and-fire (I&F) neurons, 20% of which were randomly chosen to be inhibitory. The 800 neurons of the circuit were arranged on two 20 × 20 layers L1 and L2. Circuit inputs consisting of 5 spike trains were injected into a randomly chosen subset of neurons in layer L1 (the connection probability was set to 0.25 for each of the 5 input channels and each neuron in layer L1). 
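The input distribution of Fig. 1 can be sketched as follows. The template rate (20 Hz), segment length (200 ms) and jitter SD (4 ms) follow the caption; everything else (function names, the seed, the fixed bit sequence) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_template(rate_hz=20.0, duration_s=0.2):
    """Homogeneous Poisson spike train: sorted spike times in [0, duration)."""
    n = rng.poisson(rate_hz * duration_s)
    return np.sort(rng.uniform(0.0, duration_s, size=n))

# Two templates (bit 0 and bit 1) per 200 ms segment, for each of 5 channels.
templates = {(seg, bit): [poisson_template() for _ in range(5)]
             for seg in range(4) for bit in (0, 1)}

def make_input(bits, jitter_sd=0.004):
    """Concatenate the chosen templates per channel, jittering each spike
    by a Gaussian amount (SD 4 ms), as in Fig. 1B."""
    channels = []
    for ch in range(5):
        spikes = []
        for seg, bit in enumerate(bits):
            t = templates[(seg, bit)][ch] + 0.2 * seg     # shift into segment
            spikes.append(t + rng.normal(0.0, jitter_sd, size=t.shape))
        channels.append(np.sort(np.concatenate(spikes)))
    return channels

spike_trains = make_input(bits=(1, 0, 1, 0))  # the sequence shown in Fig. 1B
```

Each call to make_input with fresh jitter yields one sample of the input distribution; the bit sequence (s1, ..., s4) is the discrete information the circuit is later asked to retain.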
We modeled the (short term) dynamics of synapses according to the model proposed in [1], with the synaptic parameters U (use), D (time constant for depression), F (time constant for facilitation) randomly chosen from Gaussian distributions that model empirical data for such connections. Parameters of neurons and synapses were chosen as in [2] to fit data from microcircuits in rat somatosensory cortex (based on [3] and [1]). Since neural microcircuits in the nervous system often receive salient input in the form of spatio-temporal firing patterns (e.g. from arrays of sensory neurons, or from other brain areas), we have concentrated on circuit inputs of this type. Such firing patterns could for example represent visual information received during a saccade, or the neural representation of a phoneme or syllable in auditory cortex. Information dynamics and emergent computation in recurrent circuits of spiking neurons were investigated for input streams over 800 ms consisting of sequences of noisy versions of 4 such firing patterns. We restricted our analysis to the case where in each of the four 200 ms segments one of two template patterns is possible, see Fig. 1. In the following we write si = 1 (si = 0) if a noisy version of template 1 (0) is used in the i-th time segment of the circuit input. Fig. 2 shows the response of a circuit of spiking neurons (drawn from the distribution specified above) to the input stream exhibited in Fig. 1B. Each frame in Fig. 2 shows the current firing activity of one layer of the circuit at a particular point t in time. Since in such a rather small circuit (compared, for example, with the estimated 10^5 neurons below a mm² of cortical surface) very few neurons fire at any given ms, we have replaced each spike by a pulse whose amplitude decays exponentially with a time constant of 30 ms. This models the impact of a spike on the membrane potential of a generic postsynaptic neuron. The resulting vector r(t) = ⟨r1(t), . . . 
, r800(t)⟩ consisting of 800 analog values from the 800 neurons in the circuit is exactly the “liquid state” of the circuit at time t in the context of the abstract computational model introduced in [2]. In the subsequent sections we will analyze the temporal dynamics of the information contained in these momentary circuit states r(t) (footnote 1).

Figure 2: Snapshots of the first 400 components of the circuit state r(t) (corresponding to the neurons in layer L1) at various times t (t = 280, 290, 300, 310 ms) for the input shown at the bottom of Fig. 1. Black denotes high activity, white no activity. A spike at time ts ≤ t adds a value of exp(−(t − ts)/(30 ms)) to the corresponding component of the state r(t).

3 Methods for Analyzing the Information contained in Circuit States The mutual information MI(X, R) between two random variables X and R can be defined by MI(X, R) = H(X) − H(X|R), where H(X) = −∑_{x∈Range(X)} p(x) log p(x) is the entropy of X, and H(X|R) is the expected value (with respect to R) of the conditional entropy of X given R; see e.g. [4]. It is well known that empirical estimates of the entropy tend to underestimate the true entropy of a random variable (see e.g. [5, 6]). Hence, in situations where the true value of H(X) is known (as is typically the case in neuroscience applications, where X represents the stimulus, whose distribution is controlled by the experimentalist), the generic underestimate of H(X|R) yields a generic overestimate of the mutual information MI(X, R) = H(X) − H(X|R) for finite sample sizes. This undersampling effect has been addressed in a number of studies (see e.g. [7], [8] and [9] and the references therein), and has turned out to be a serious obstacle for a widespread application of information theoretic methods to the analysis of neural computation. The seriousness of this problem becomes obvious from results achieved for our study case of a generic neural microcircuit, shown in Fig. 3A. 
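The exponentially decaying state trace defined in the caption of Fig. 2 can be sketched as follows; the 30 ms time constant is from the caption, while the function name and the example spike times are illustrative:

```python
import numpy as np

def liquid_state(spike_times, t, tau=0.030):
    """One component of the circuit state r(t): every past spike at time
    ts <= t contributes exp(-(t - ts)/tau), with tau = 30 ms."""
    ts = np.asarray(spike_times, dtype=float)
    past = ts[ts <= t]                      # future spikes do not contribute
    return float(np.sum(np.exp(-(t - past) / tau)))

# A neuron that spiked at 250 ms and 295 ms, read out at t = 300 ms:
r_i = liquid_state([0.250, 0.295], t=0.300)
```

Applying this to all 800 neurons at a fixed t yields the state vector r(t) that the subsequent information analysis operates on.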
The dashed line shows the dependence of “raw” estimates MIraw of the mutual information MI(s2, R) on the sample size (footnote 2) N, which ranges here from 10^3 to 2·10^5. The raw estimate of MI(s2, R) results from a direct application of the definition of MI to the observed occupancy frequencies for a discrete set of bins (footnote 3), where R consists here of just d = 5 or d = 10 components of the 800-dimensional circuit state r(t) for t = 660 ms, and s2 is the bit encoded by the second input segment. For more components d of the current circuit state r(t), e.g. for estimating the mutual information MI(s2, R) between the preceding circuit input s2 and the current firing activity in a subcircuit consisting of d = 20 or more neurons, even sample sizes beyond 10^6 are likely to severely overestimate this mutual information.

Footnote 1: One should note that these circuit states do not reflect the complete current state of the underlying dynamical system, only those parts of the state of the dynamical system that are in principle “visible” to neurons outside the circuit. The current values of the membrane potentials of neurons in the circuit and the current values of internal variables of dynamic synapses of the circuit are not visible in this sense.
Footnote 2: In our case the sample size N refers to the number of computer simulations of the circuit response to new drawings of circuit inputs, with new drawings of temporal jitter in the input spike trains and initial conditions of the neurons in the circuit.
Footnote 3: For direct estimates of the MI, the analog value of each component of the circuit state r(t) has to be divided into discrete bins. We first linearly transformed each component of r(t) so that it has zero mean and variance σ² = 1.0. The transformed components are then binned with a resolution of ε = 0.5. This means that there are four bins in the range ±σ. 
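A plug-in (“raw”) MI estimate of the kind discussed above can be sketched as follows. The standardization and ε = 0.5 binning follow footnote 3; the toy data are illustrative. Note how the estimate is biased upward for independent variables at small N, which is exactly the undersampling effect described in the text:

```python
import numpy as np

def raw_mi(x, R, eps=0.5):
    """Plug-in ("raw") estimate of MI(x, R) in bits. x: (N,) discrete labels;
    R: (N, d) analog state components, standardized to zero mean and unit
    variance, then binned at resolution eps (cf. footnote 3)."""
    N = len(x)
    Rb = np.floor((R - R.mean(0)) / (R.std(0) + 1e-12) / eps).astype(int)
    joint, px, pr = {}, {}, {}
    for xi, row in zip(x, map(tuple, Rb)):       # occupancy counts
        joint[(xi, row)] = joint.get((xi, row), 0) + 1
        px[xi] = px.get(xi, 0) + 1
        pr[row] = pr.get(row, 0) + 1
    return sum((c / N) * np.log2(c * N / (px[xi] * pr[row]))
               for (xi, row), c in joint.items())

# Independent x and R: the true MI is 0, yet with d = 5 and N = 200 almost
# every binned state is unique, so the raw estimate is far above 0.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=200)
R = rng.normal(size=(200, 5))
print(raw_mi(x, R))  # noticeably above 0 despite independence
```

This is the same pathology the paper reports at much larger N: the number of occupied bins grows with d, so any fixed sample size is eventually "small".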
Figure 3: Estimated mutual information depends on sample size. In all panels, d denotes the number of components of the circuit state r(t) at time t = 660 ms (or equivalently the number of neurons considered). A: Dependence of the “raw” estimate MIraw and two corrected estimates MInaive and MIinfinity of the mutual information MI(s2, R) (see text). B: Lower bounds MI(s2, h(R)) for the mutual information obtained via classifiers h which are trained to predict the actual value of s2 given the circuit state r(t). Results are shown for a) an empirical Bayes classifier (discretization ε = 0.5, see footnotes 3 and 5), b) a linear classifier trained on the discrete (ε = 0.5) data, and c) a linear classifier trained on the analog data (ε = 0). In the case of the Bayes classifier, MI(s2, h(R)) was estimated by employing a leave-one-out procedure (which is computationally efficient for a Bayes classifier), whereas for the linear classifiers a test set of size 5·10^4 was used (hence no results beyond a sample size of 1.5·10^5). C: Same as B but for d = 10. D: Estimates of the entropies H(R) and H(R|X). The “raw” estimates are compared with the corresponding Ma-bounds (see text). The filled triangle marks the sample size from which on the Ma-bound is below the raw estimate. E: Same as B but for MI(s3, h(R)). F: Same as E but for d = 10.

Several methods for correcting this bias towards overestimation of MI have been suggested in the literature. 
In section 3.1 of [7] it is proposed to subtract one of two possible bias correction terms, Bnaive and Bfull, from the raw estimate MIraw of the mutual information. The effect of subtracting Bnaive is shown for d = 5 components of r(t) in Fig. 3A. This correction is too optimistic for these applications, since the corrected estimate MInaive = MIraw − Bnaive at small sample sizes (e.g. 10^4) is still substantially larger than the raw estimate MIraw at large sample sizes (e.g. 10^5). The subtraction of the second proposed term Bfull is not applicable in our situation, because the resulting MIfull = MIraw − Bfull is lower than zero for all considered sample sizes. The reason is that Bfull is proportional to the quotient “number of possible response bins” / N, and the number of possible response bins is of the order of 30^10 in this example. Another way to correct MIraw is proposed in [10]. This approach is based on a series expansion of MI in 1/N [6] and is effectively a method to get an empirical estimate MIinfinity of the mutual information for infinite sample size (N → ∞). It can be seen in Fig. 3A that for moderate sample sizes MIinfinity also yields too optimistic estimates of MI. Another method for dealing with generic overestimates of MI has also been proposed in [10]. This method is based on the equation MI(X, R) = H(R) − H(R|X): it compares the raw estimates of H(R) and H(R|X) with the so-called Ma-bounds, and suggests to judge raw estimates of H(R) and H(R|X), and hence raw estimates of MI(X, R) = H(R) − H(R|X), as trustworthy as soon as the sample size is so large that the corresponding Ma-bounds (which are conjectured to be less affected by undersampling) assume values below the raw estimates of H(R) and H(R|X). According to this criterion a sample size of 9·10^3 would be sufficient in the case of 5-neuron subcircuits (i.e., d = 5 components of r(t)); cf. Fig. 3D (footnote 4). However, Fig. 
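A first-order bias correction of the kind referred to above can be sketched as follows. This is the common Miller-Madow / Panzeri-Treves form, in which the upward bias of the plug-in MI estimate is approximated from the numbers of occupied bins; the precise Bnaive of [7] may differ in its details, and the example numbers are illustrative:

```python
import numpy as np

def naive_bias_bits(n_joint_bins, n_x_bins, n_r_bins, N):
    """First-order (Miller-Madow style) upward bias of the plug-in MI
    estimate, in bits. The arguments are the numbers of *occupied* bins of
    the joint and marginal distributions; subtracting this value from MI_raw
    gives a 'naive' corrected estimate."""
    return (n_joint_bins - n_x_bins - n_r_bins + 1) / (2 * N * np.log(2))

# Example: 2 stimulus values, 150 occupied response bins, 290 occupied
# joint bins, N = 10**4 trials:
correction = naive_bias_bits(290, 2, 150, 10_000)
```

Because the correction shrinks only like 1/N while the number of occupied bins grows rapidly with the state dimension d, such first-order terms are easily overwhelmed in the high-dimensional regime the paper considers, consistent with Fig. 3A.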
3A shows that the raw estimate MIraw is still too high for N = 9·10^3, since MIraw assumes a substantially smaller value at N = 2·10^5. In view of this unreliability of (even corrected) estimates of the mutual information, we have employed standard methods from machine learning to derive lower bounds for the MI (see for example [8] and [9] for references to preceding related work). This method is computationally feasible and yields reliable lower bounds for the MI with moderate sample sizes, even for large numbers of components of the circuit state. In fact, we will apply it in sections 4 and 5 even to the full 800-component circuit state r(t). The method is quite simple. According to the data processing inequality [4], one has MI(X, R) ≥ MI(X, h(R)) for any function h. Obviously MI(X, h(R)) is easier to estimate than MI(X, R) if the dimension of h(R) is substantially lower than that of R, especially if h(R) assumes just a few discrete values. Furthermore, the difference between MI(X, R) and MI(X, h(R)) is minimal if h(R) throws away only that information in R that is not relevant for predicting the value of X. Hence it makes sense to use as h a predictor or classifier that has been trained to predict the current value of X. Similar approaches for estimating a lower bound were motivated by the idea of predicting the stimulus (X) given the neural response (R) (see [8], [9] and the references therein). To get an unbiased estimate of MI(X, h(R)), one has to make sure that MI(X, h(R)) is estimated on data which have not been used for the training of h. To make the best use of the data one can alternatively use cross-validation or even leave-one-out (see [11]) to estimate MI(X, h(R)). Fig. 3B, 3C, 3E, and 3F show for 3 different predictors h how the resulting lower bounds for the MI depend on the sample size N. 
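The classifier-based lower bound can be sketched as follows: train a readout h on half of the data, then estimate MI(X, h(R)) from the joint counts of true labels and predictions on the held-out half. By the data processing inequality this is a valid lower bound on MI(X, R). The least-squares linear readout and the toy data below are illustrative choices, not the paper's exact classifier:

```python
import numpy as np

def mi_bits(counts):
    """Mutual information (bits) of a joint count table."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def classifier_lower_bound(X, R):
    """Lower bound MI(X, h(R)) <= MI(X, R) via a trained linear classifier h,
    with h fit on the first half of the data and MI estimated on the rest."""
    n = len(X) // 2
    Rtr, Rte, Xtr, Xte = R[:n], R[n:], X[:n], X[n:]
    A = np.hstack([Rtr, np.ones((n, 1))])            # linear readout + bias
    w, *_ = np.linalg.lstsq(A, Xtr.astype(float), rcond=None)
    pred = (np.hstack([Rte, np.ones((len(Rte), 1))]) @ w > 0.5).astype(int)
    counts = np.zeros((2, 2))
    for xt, pr in zip(Xte, pred):                    # label-vs-prediction table
        counts[xt, pr] += 1
    return mi_bits(counts)

# Toy check: R carries noisy information about a binary X.
rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=2000)
R = X[:, None] + rng.normal(0.0, 0.5, size=(2000, 10))
print(classifier_lower_bound(X, R))  # close to 1 bit for this easy task
```

Because h(R) takes only two values, the 2 × 2 table is well sampled even at modest N, which is the practical advantage over direct estimation on the 800-dimensional state.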
It is noteworthy that the lower bounds MI(X, h(R)) derived with the empirical Bayes classifier (footnote 5) increase significantly with the sample size (footnote 6) and converge quite well to the upper bounds MIraw(X, R). This reflects the fact that the estimated joint probability density between X and R gets more and more accurate. Furthermore, the computationally less demanding (footnote 7) use of linear classifiers h also yields significant lower bounds for MI(X, R), especially if the true value of MI(X, R) is not too small. In our application this does not even require high numerical precision, since a coarse binning (see footnote 3) of the analog components of r(t) suffices; see Fig. 3B, C, E, F. All estimates of MI(X, R) in the subsequent sections are lower bounds MI(X, h(R)) computed via linear classifiers h. These types of lower bounds for MI(X, R) are of particular interest from the point of view of neural computation, since a linear classifier can in principle be approximated by a neuron that is trained (for example by a suitable variation of the perceptron learning rule) to extract information about X from the current circuit state R. Hence a high value of a lower bound MI(X, h(R)) for such h shows not only that information about X is present in the current circuit state R, but also that this information is in principle accessible to other neurons.

Footnote 4: These kinds of results depend on a division of the space of circuit states into subspaces, which is required for the calculation of the Ma-bound. In our case we have chosen the subspaces such that the frequency counts of any two circuit states in the same subspace differ by at most 1.
Footnote 5: The empirical Bayes classifier operates as follows: given observed (and discretized) d components r(d)(t) of the state r(t), it predicts the input which was observed most frequently for the given state components r(d)(t) (maximum a posteriori classification, see e.g. [11]). If r(d)(t) was not observed so far, a random guess about the input is made.
Footnote 6: In fact, in the limit N → ∞ the Bayes classifier is the optimal classifier for the discretized data in the sense that it would yield the lowest classification error (and hence the highest lower bound on the mutual information) over all possible classifiers.
Footnote 7: In contrast to the Bayes classifier, the linear classifiers (both for analog and discrete data) yield good results already for relatively small sample sizes N, and these results do not improve much with increasing N.

Figure 4: Information in subsets of neurons. Shown are lower bounds for the mutual information MI(si, h(R)) obtained with a linear classifier h operating on d components of the circuit state r(t). The numbers a × d to the right of each panel specify the number of components d used by the linear classifier and for how many different choices a of such subsets of size d the results are plotted in that panel.

4 Distribution and Dynamics of Information in Circuit States We have applied the method of estimating lower bounds for mutual information via linear classifiers described in the preceding section to analyze the spatial distribution and temporal dynamics of information for our study case described in section 2. Fig. 4 shows the temporal dynamics of information (estimated every 20 ms as described in section 3) about input bits si (encoded as described in section 2) for different components of the circuit state r(t), corresponding to different randomly drawn subsets of neurons in the circuit. One sees that even subsets of just 5 neurons absorb substantial information about the input bits si, however with a rather slow onset of the information uptake at the beginning of a segment and little memory retention when this information is overwritten by the next input segment. 
By merging the information from different subsets of neurons, the uptake of new information gets faster and the memory retention grows. Note that for large sets of neurons (160 and 800) the information about each input bit si jumps up to its maximal value right at the beginning of the corresponding i-th segment of the input trains.

Figure 5: Emergent computations. A: Dynamics of information about input bits as in the bottom row of Fig. 4. H(s) denotes the entropy of a segment si (which is 1 bit for i = 1, 2, 3, 4). B, C, D: Lower bounds for the mutual information MI(f, h(R)) for various Boolean functions f(s1, ..., s4) obtained with a linear classifier h operating on the full 800-component circuit state R = r(t). H(f) denotes the entropy of a Boolean function f(s1, ..., s4) if the si are independently and uniformly drawn from {0, 1}.

5 Emergent Computation in Recurrent Circuits of Spiking Neurons In this section we apply the same method to analyze the mutual information between the current circuit state and the target outputs of various computations on the information contained in the sequence of spatio-temporal spike patterns in the input stream to the circuit. This provides an interesting new method for analyzing neural computation, rather than just neural communication and coding. There exist 16 different Boolean functions f(s1, s2) that depend just on the first two of the 4 bits s1, ..., s4. Fig. 5B, C shows that all these Boolean functions f are autonomously computed by the circuit, in the sense that the current circuit state contains high mutual information with the target output f(s1, s2) of this function f. 
Furthermore, the information about the result f(s1, s2) of this computation can be extracted linearly from the current circuit state r(t) (in spite of the fact that the computation of f(s1, s2) from the spike patterns in the input requires highly nonlinear computational operations). This is shown in Fig. 5B and 5C for those 5 Boolean functions of 2 variables that are nontrivial in the sense that their output really depends on both input variables. There exist 5 other Boolean functions which are nontrivial in this sense, which are just the negations of the 5 Boolean functions shown (and for which the mutual information analysis therefore yields exactly the same result). In Fig. 5D corresponding results are shown for parity functions that depend on three of the 4 bits s1, s2, s3, s4. These Boolean functions are the most difficult ones to compute in the sense that knowledge of just 1 or 2 of their input bits does not give any advantage in guessing the output bit. One noteworthy feature in all these emergent computations is that information about the result of the computation is already present in the current circuit state long before the complete spatio-temporal input patterns that encode the relevant input bits have been received by the circuit. In fact, the computation of f(s1, s2) automatically just uses the temporal order of the first spikes in the pattern encoding s2, and merges information contained in the order of these spikes with the “context” defined by the preceding input pattern. In this way the circuit automatically completes an ultra-rapid computation within just 20 ms of the beginning of the second pattern s2. The existence of such ultra-rapid neural computations has previously been inferred [12], but models that could explain how such ultra-rapid computations are possible on the basis of generic models of recurrent neural microcircuits have been missing. 
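The claim above that parity gives no advantage from partial knowledge can be checked directly: for any fixed value of one input bit, the parity of three bits is 0 for exactly half of the remaining input combinations, so a single known bit carries zero information about the output:

```python
from itertools import product

def parity(*bits):
    """Parity (XOR) of an arbitrary number of bits."""
    return sum(bits) % 2

# For each value of s1, parity(s1, s2, s3) is 0 for exactly half of the
# (s2, s3) combinations, so s1 alone tells us nothing about the output.
for s1 in (0, 1):
    outs = [parity(s1, s2, s3) for s2, s3 in product((0, 1), repeat=2)]
    assert sorted(outs) == [0, 0, 1, 1]
```

The same balance holds for any proper subset of the inputs, which is why the parity targets in Fig. 5D are the hardest test of linear extractability from the circuit state.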
6 Discussion We have analyzed the dynamics of information in high-dimensional circuit states of a generic neural microcircuit model. We have focused on that information which can be extracted by a linear classifier (a linear classifier may be viewed as a coarse model for the classification capability of a biological neuron). This approach also has the advantage that significant lower bounds for the information content of high-dimensional circuit states can already be achieved for relatively small sample sizes. Our results show that information about current and preceding circuit inputs is spread throughout the circuit in a rather uniform manner. Furthermore our results show that a generic neural microcircuit model has inherent capabilities to process new input in the context of other information that arrived several hundred ms ago, and that information about the outputs of numerous potentially interesting target functions automatically accumulates in the current circuit state. Such emergent computation in circuits of spiking neurons is extremely fast, and therefore provides an interesting alternative to models based on special-purpose constructions for explaining empirically observed [12] ultra-rapid computations in neural systems. The method for analyzing information contained in high-dimensional circuit states that we have explored in this article for a generic neural microcircuit model should also be applicable to biological data from multi-unit recordings, fMRI etc., since significant lower bounds for mutual information were achieved in our study case already for sample sizes in the range of a few hundred (see Fig. 3). In this way one could get insight into the dynamics of information and emergent computations in biological neural systems. Acknowledgement: We would like to thank Henry Markram for inspiring discussions. This research was partially supported by the Austrian Science Fund (FWF), project # P15386. References [1] H. Markram, Y. Wang, and M. Tsodyks. 
Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci., 95:5323-5328, 1998.
[2] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[3] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287:273-278, 2000.
[4] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[5] M. S. Roulston. Estimating the errors on measured entropy and mutual information. Physica D, 125:285-294, 1999.
[6] S. Panzeri and A. Treves. Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7:87-107, 1996.
[7] G. Pola, S. R. Schultz, R. S. Petersen, and S. Panzeri. A practical guide to information analysis of spike trains. In R. Kötter, editor, Neuroscience Databases: A Practical Guide, chapter 10, pages 139-153. Kluwer Academic Publishers, Boston, 2003.
[8] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1191-1253, 2003.
[9] J. Hertz. Reading the information in the outcome of neural computation. Available online at http://www.nordita.dk/~hertz/papers/infit.ps.gz.
[10] S. P. Strong, R. Koberle, R. R. de Ruyter van Steveninck, and W. Bialek. Entropy and information in neural spike trains. Physical Review Letters, 80(1):197-200, 1998.
[11] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, 2nd edition, 2001.
[12] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520-522, 1996.
Limiting form of the sample covariance eigenspectrum in PCA and kernel PCA

David C. Hoyle & Magnus Rattray
Department of Computer Science, University of Manchester, Manchester M13 9PL, UK.
david.c.hoyle@man.ac.uk magnus@cs.man.ac.uk

Abstract

We derive the limiting form of the eigenvalue spectrum for sample covariance matrices produced from non-isotropic data. For the analysis of standard PCA we study the case where the data has increased variance along a small number of symmetry-breaking directions. The spectrum depends on the strength of the symmetry-breaking signals and on a parameter α which is the ratio of sample size to data dimension. Results are derived in the limit of large data dimension while keeping α fixed. As α increases there are transitions in which delta functions emerge from the upper end of the bulk spectrum, corresponding to the symmetry-breaking directions in the data, and we calculate the bias in the corresponding eigenvalues. For kernel PCA the covariance matrix in feature space may contain symmetry-breaking structure even when the data components are independently distributed with equal variance. We show examples of phase-transition behaviour analogous to the PCA results in this case.

1 Introduction

A number of data analysis methods are based on the spectral decomposition of large matrices. Examples include Principal Component Analysis (PCA), kernel PCA and spectral clustering methods. PCA in particular is a ubiquitous method of data analysis [1]. The principal components are eigenvectors of the sample covariance matrix, ordered according to the size of the corresponding eigenvalues. In PCA the data is projected onto the subspace corresponding to the first n principal components, where n is chosen according to some model selection criterion. Most methods for model selection require only the eigenvalue spectrum of the sample covariance matrix.
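The object these model selection methods consume, the eigenvalue spectrum of the sample covariance matrix, is easy to compute directly; a minimal illustrative sketch (the parameters here are our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N-dimensional data, p samples, alpha = p/N = 0.5.
N, p = 200, 100
X = rng.standard_normal((p, N))          # isotropic Gaussian data

# Sample covariance \hat{C} = p^{-1} sum_mu x_mu x_mu^T (mean not subtracted,
# matching the definition used later in section 2 of the paper).
C_hat = X.T @ X / p

eigvals = np.sort(np.linalg.eigvalsh(C_hat))[::-1]   # descending order

# With p < N the sample covariance is rank-deficient: at most p eigenvalues
# are non-zero, so a proportion 1 - alpha of them vanish.
print(np.sum(eigvals > 1e-10))  # 100
```

This rank deficiency for α < 1 is exactly the delta function at zero that appears in the limiting spectrum derived in section 2.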
It is therefore useful to understand how the sample covariance spectrum behaves given a particular data distribution. Much is known about the asymptotic properties of the spectrum in the case where the data distribution is isotropic, e.g. for the Gaussian Orthogonal Ensemble (GOE), and this knowledge can be used to construct model selection methods (see e.g. [2] and references therein). However, it is also instructive to consider the limiting behaviour in the case where the data does contain some low-dimensional structure. This is interesting as it allows us to understand the limits of learnability, and previous studies have already shown phase-transition behaviour in PCA learning from data containing a single symmetry-breaking direction [3]. The analysis of data models which include a signal component is also useful if we are to correct for bias in the estimated eigenvalues corresponding to retained components. PCA has limited applicability because it is a globally linear method. A promising nonlinear alternative is kernel PCA [4], in which data is projected into a high-dimensional feature space and PCA is carried out in this feature space. The kernel trick allows all computations to be carried out efficiently so that the method is practical even when the feature space has a very high, or even infinite, dimension. In this case we are interested in properties of the eigenvalue spectrum of the sample covariance matrix in feature space. The covariance of the features will typically be non-isotropic even when the data itself has independently distributed components with equal variance. The sample covariance spectrum will therefore show quite rich behaviour even when the data itself has no structure. It is important to understand the expected behaviour in order to develop model selection methods for kernel PCA analogous to those used for standard PCA. Model selection methods based on data models with isotropic noise (e.g.
[2, 5]) are certainly not suitable for kernel PCA. In this paper we apply methods from statistical mechanics and random matrix theory to determine the limiting form of the eigenvalue spectrum for sample covariance matrices produced from data containing symmetry-breaking structure. We first show how the replica method can be used to derive the spectrum for Gaussian data with a finite number of symmetry-breaking directions. This result is confirmed and generalised by studying the Stieltjes transform of the eigenvalue spectrum, suggesting that it may be insensitive to details of the data distribution. We then show how the results can be used to derive the limiting form of the eigenvalue spectrum of the feature covariance matrix (or Gram matrix) in kernel PCA for the case of a polynomial kernel.

2 Statistical mechanics theory for Gaussian data

We first consider a data set of N-dimensional data vectors \{x_\mu\}_{\mu=1}^{p} containing a signal and noise component. Initially we restrict ourselves to the case where x_\mu is drawn from a Gaussian distribution whose covariance matrix C is isotropic except for a small number of orthogonal symmetry-breaking directions, i.e.,

C = \sigma^2 I + \sigma^2 \sum_{m=1}^{S} A_m B_m B_m^T , \quad B_n^T B_m = \delta_{nm} , \quad A_m > 0 .  (1)

We define the sample covariance \hat{C} = p^{-1} \sum_\mu x_\mu x_\mu^T and study its eigenvalue spectrum in the limit N \to \infty when the ratio \alpha = p/N is held fixed and the number of symmetry-breaking directions S is finite. We work with the trace of the resolvent G(\lambda) = (\lambda I - \hat{C})^{-1}, from which the density of eigenvalues \rho(\lambda) can be calculated,

\rho(\lambda) = \lim_{\epsilon \to 0^+} (N\pi)^{-1} \, \mathrm{Im} \, \mathrm{tr}\, G(\lambda - i\epsilon) \quad \text{where} \quad \mathrm{tr}\, G(\lambda) = \sum_{i=1}^{N} \frac{1}{\lambda - \lambda_i}  (2)

and \lambda_i are eigenvalues of \hat{C}. The trace of the resolvent can be represented as

\mathrm{tr}\, G(\lambda) = \frac{\partial}{\partial \lambda} \log \det(\lambda I - \hat{C}) = \frac{\partial}{\partial \lambda} \log Z(\lambda) .  (3)

Using the standard representation of the determinant of a matrix,

[\det A]^{-\frac{1}{2}} = (2\pi)^{-\frac{N}{2}} \int \exp\left( -\tfrac{1}{2} \phi^T A \phi \right) d\phi ,

we have

\log Z(\lambda) = N \log 2\pi - 2 \log \int \exp\left[ -\frac{\lambda}{2} \|\phi\|^2 + \frac{1}{2p} \sum_\mu (\phi \cdot x_\mu)^2 \right] d\phi .  (4)

We assume that the eigenvalue spectrum is self-averaging, so that the calculation for a specific realisation of the sample covariance can be replaced by an ensemble average for large N, which can be performed using the replica method (see e.g. [6]). Details are presented elsewhere [7]; here we simply state the results. The calculation is similar to that of Reimann et al. [3], who study the performance of PCA on Gaussian data with a single symmetry-breaking direction, although there are notable differences between the calculations. We find the following asymptotic result for the spectral density,

\rho(\lambda) = (1-\alpha)\Theta(1-\alpha)\delta(\lambda) + \frac{1}{N} \sum_{m=1}^{S} \delta(\lambda - \lambda_u(A_m, \sigma^2)) \Theta(\alpha - A_m^{-2}) + \left( 1 - \frac{1}{N} \sum_{m=1}^{S} \Theta(\alpha - A_m^{-2}) \right) \frac{\alpha}{2\pi\lambda\sigma^2} \sqrt{\max(0, (\lambda - \lambda_{min})(\lambda_{max} - \lambda))} ,  (5)

where we have defined

\lambda_{max,min} = \sigma^2 \alpha^{-1} (1 \pm \sqrt{\alpha})^2 , \quad \lambda_u(A, \sigma^2) = \sigma^2 (1 + A)\left(1 + \frac{1}{\alpha A}\right) .  (6)

The first term in equation (5) sets a proportion 1 - \alpha of eigenvalues to zero when the rank of \hat{C} is less than N, i.e. when \alpha < 1. The last term represents the bulk of the spectrum and is identical to the well-known Marčenko–Pastur law for isotropic data with variance \sigma^2 [8, 9]. In [7] we also give the O(1/N) corrections to this term, but here we are mainly interested in the leading order. The second term contains contributions due to the underlying structure in the data. The mth symmetry-breaking term in the data covariance C only contributes to the spectrum if \alpha > A_m^{-2}. This transition must be exceeded before signals of a given strength can be detected, i.e. the signal must be sufficiently strong or the data set sufficiently large. This corresponds to the same learning transition point observed in studies of PCA on Gaussian data with a single symmetry-breaking direction [3]. Above this transition the sample covariance eigenvalue over-estimates the true variance corresponding to this component by a factor 1 + 1/(\alpha A_m), which indicates a significant bias when the data set is small or the signal is relatively weak.
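Equation (6) lends itself to a direct numerical check; a hedged sketch with illustrative parameters of our own choosing (a single symmetry-breaking direction with A² = 20, the same strength as the strongest direction in figure 1's regime):

```python
import numpy as np

rng = np.random.default_rng(1)

sigma2 = 1.0
A = np.sqrt(20.0)        # symmetry-breaking strength, A^2 = 20
N, alpha = 2000, 0.5
p = int(alpha * N)
assert alpha > A ** -2   # above the learning transition

# Predictions from eq. (6): bulk edge and separated-eigenvalue location.
lam_max = sigma2 * (1 + np.sqrt(alpha)) ** 2 / alpha
lam_u = sigma2 * (1 + A) * (1 + 1 / (alpha * A))

# Gaussian data with variance sigma^2(1 + A) along the first coordinate axis.
X = np.sqrt(sigma2) * rng.standard_normal((p, N))
X[:, 0] += np.sqrt(sigma2 * A) * rng.standard_normal(p)

top = np.linalg.eigvalsh(X.T @ X / p)[-1]

# The top sample eigenvalue separates from the bulk and sits near lam_u,
# i.e. it over-estimates sigma^2(1 + A) by the factor 1 + 1/(alpha A).
print(lam_max, lam_u)    # 5.828..., 7.919...
print(top)               # close to lam_u, well above lam_max
```

At this dimension the separated eigenvalue fluctuates by a few percent around λ_u, so the check is approximate rather than exact.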
Our result provides a method of bias correction for the top eigenvalues in this case. In figure 1 we show results for Gaussian data with three symmetry-breaking directions, each above the transition point. On the left we show how the top eigenvalues separate from the bulk, while the inset compares the density of the bulk with the theoretical result, showing excellent agreement. On the right we show convergence to the theoretical result for \lambda_u(A, \sigma^2) in equation (6) as the data dimension N is increased for fixed \alpha.

3 Analysis of the Stieltjes transform

The statistical mechanics approach is useful because it allows the derivation of results from first principles, and it is possible to use this method to determine other self-averaging quantities of interest, e.g. the overlap between the leading eigenvectors of the sample and population covariances [3]. However, the method as presented here is restricted to Gaussian data. A number of results from the statistics literature have been derived under much weaker and often more explicit assumptions about the data distribution. It is therefore interesting to ask whether equation (5) can also be derived from these results. Marčenko and Pastur [8] studied the case of data with a general covariance matrix. The limiting distribution was shown to satisfy

\rho(\lambda) = \lim_{\epsilon \to 0^+} \pi^{-1} \, \mathrm{Im} \, \alpha \, m_\rho(\lambda + i\epsilon) \quad \text{where} \quad m_\rho(z) = \alpha^{-1} \int \frac{d\lambda \, \rho(\lambda)}{\lambda - z} .  (7)

Figure 1: In (a) we show eigenvalues of the sample covariance matrix for Gaussian data with \sigma^2 = 1, N = 2000 and \alpha = 0.5. The data contains three symmetry-breaking directions with strengths A_1^2 = 20, A_2^2 = 15 and A_3^2 = 10, all above the transition point. The inset shows the distribution of all non-zero eigenvalues except for the largest three, with the solid line showing the theoretical result. In (b) we show the fractional difference between the three largest eigenvalues \lambda_i and the theoretical value \lambda_u(A_i, \sigma^2) for i = 1, 2, 3. We set \alpha = 0.2, averaged \lambda_i over 1000 samples to get \langle\lambda_i\rangle, set \Delta\lambda_i = |1 - \langle\lambda_i\rangle/\lambda_u(A_i, \sigma^2)|, and set other values as in (a).

Here, m_\rho(z) is the Stieltjes transform of \alpha^{-1}\rho(\lambda) and is equal to -p^{-1}\,\mathrm{tr}\,G(z). The above equation is therefore exactly equivalent to equation (2), and we see that this approach starts from the same point as the statistical mechanics theory. Marčenko and Pastur showed that the Stieltjes transform satisfies the following relationship,

z(m_\rho) = -\frac{1}{m_\rho} + \alpha^{-1} \int \frac{dH(t)}{t^{-1} + m_\rho} .  (8)

The measure H(t) is defined such that N^{-1} \sum_i d_i^k converges to \int t^k dH(t) for all k, where d_i are the eigenvalues of C. An equivalent result is also derived by Wachter [10] and, more recently, by Sengupta and Mitra using the replica method [11] (for Gaussian data). Silverstein and Choi have shown that the support of \rho(\lambda) can be determined by the intervals between extrema of z(m_\rho) [12], and this has been used to determine the signal component of a spectrum when O(N) equal-strength symmetry-breaking directions are present [13]. Since C in equation (1) only contains a finite number of symmetry-breaking directions, in the limit N \to \infty these will have zero measure as defined by H. Thus, in this limit the eigenvalue density would appear to be identical to the isotropic case. However, it is the behaviour of the largest eigenvalues that we are most interested in, even though these may have vanishing measure. For the case of a single symmetry-breaking direction (S = 1, A_1 = A) we take dH(t) = (1 - \epsilon)\delta(t - \sigma^2)dt + \epsilon\delta(t - \sigma^2(1 + A))dt, with \epsilon \simeq 1/N. This gives

z(m_\rho) = -\frac{1}{m_\rho} + \frac{(1-\epsilon)\alpha^{-1}}{\sigma^{-2} + m_\rho} + \frac{\epsilon\alpha^{-1}}{\sigma^{-2}(1+A)^{-1} + m_\rho} ,  (9)

and stationary points satisfy

0 = \frac{1}{m_\rho^2} - \frac{(1-\epsilon)\alpha^{-1}}{(\sigma^{-2} + m_\rho)^2} - \frac{\epsilon\alpha^{-1}}{(\sigma^{-2}(1+A)^{-1} + m_\rho)^2} .  (10)

Since \epsilon \ll 1 we do not expect the behaviour of z(m_\rho) to be modified substantially in the interval [\lambda_{min}, \lambda_{max}].
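Equations (9) and (10) can also be probed numerically: keeping ε small but finite, a root of the stationarity condition appears just above the singularity, and its image under z(m_ρ) approaches λ_u(A, σ²) from equation (6) as ε → 0. A minimal sketch with illustrative parameters of our choosing (σ² = 1, A = 4, α = 1/2):

```python
import numpy as np
from scipy.optimize import brentq

sigma2, A, alpha = 1.0, 4.0, 0.5
m0 = -1.0 / (sigma2 * (1 + A))                      # singularity of eq. (9)
lam_u = sigma2 * (1 + A) * (1 + 1 / (alpha * A))    # eq. (6): 7.5 here

def z(m, eps):        # right-hand side of eq. (9)
    return (-1.0 / m
            + (1 - eps) / (alpha * (1 / sigma2 + m))
            + eps / (alpha * (1 / (sigma2 * (1 + A)) + m)))

def dz_zero(m, eps):  # stationarity condition, eq. (10)
    return (1.0 / m**2
            - (1 - eps) / (alpha * (1 / sigma2 + m)**2)
            - eps / (alpha * (1 / (sigma2 * (1 + A)) + m)**2))

for eps in (1e-4, 1e-6):
    # dz_zero -> -inf at the singularity and is positive a little further
    # out, so the extra stationary point is bracketed just above m0.
    m_star = brentq(dz_zero, m0 + 1e-9, m0 + 0.05, args=(eps,))
    print(eps, z(m_star, eps))   # tends to lam_u = 7.5 as eps -> 0
```

For these parameters the gap between z(m*) and λ_u shrinks roughly like ε^{1/2}, consistent with the perturbative analysis that follows.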
Therefore we look for additional stationary points close to the singularity at m_\rho = -\sigma^{-2}(1+A)^{-1}. Setting m_\rho = -\sigma^{-2}(1+A)^{-1} + \delta and expanding (10) yields

\delta = \frac{\epsilon^{1/2}}{\sigma^2 (1 + A) \sqrt{\alpha - A^{-2}}} + O(\epsilon) .

Substituting this into (9) gives z(-\sigma^{-2}(1+A)^{-1} + \delta) = \sigma^2(1 + A)(1 + (\alpha A)^{-1}) + O(\epsilon^{1/2}). Thus, as N \to \infty, if the stationary points at -\sigma^{-2}(1+A)^{-1} + \delta exist they will define a small interval of z centred on \lambda_u(A, \sigma^2), and so define an approximate contribution of N^{-1}\delta(\lambda - \lambda_u(A, \sigma^2)) to the spectrum, in agreement with the previous calculations using replicas. We also see that for \delta to be real requires \alpha > A^{-2}, in agreement with our previous calculation for the learning transition point. A similar perturbative analysis when C contains more than one symmetry-breaking direction gives a set of contributions N^{-1}\delta(\lambda - \lambda_u(A_m, \sigma^2)), m = 1, \ldots, S, to \rho(\lambda). Again this is in agreement with our previous replica analysis of the resolvent. The relationship in equation (8) can be obtained with only relatively weak conditions on the data distribution. One requirement is that the second moment of each element of \hat{C} exists. Bai has considered the case of data vectors with non-Gaussian i.i.d. components (e.g. [14]), while Marčenko and Pastur show that the data vector components do not have to be independently distributed for the relation to hold, and they give sufficient conditions on the 4th-order cross-moments of the data vector components [8]. In [7] we study PCA on some examples of non-Gaussian data with symmetry-breaking structure (non-Gaussian signal and noise) and show that the separated eigenvalues behave similarly to figure 1.

4 Eigenvalue spectra for kernel PCA

Equation (8) holds under quite weak conditions on the data distribution. It is therefore hoped that we can apply these results to the feature space of kernel PCA [4]. In kernel PCA the data x is transformed into a feature vector \phi(x) and standard PCA is carried out in the feature space.
The method requires that we can define a kernel function k(x, y) = \phi(x) \cdot \phi(y) that allows efficient computation of the dot-product in a high-, or even infinite-, dimensional space. The eigenvalues of the sample covariance in feature space are identical to the eigenvalues of the Gram matrix K_{\mu\nu} with entries k(x_\mu, x_\nu), and the eigenvalues can therefore be computed efficiently for arbitrary feature-space dimension as long as the number of samples p is not too large. (NB: the Gram matrix first has to be centred [4] so that the data has zero mean in the feature space.) One common choice of kernel function is the polynomial kernel k(x, y) = (c + x \cdot y)^d, in which case, for integer d, the features are all possible monomials up to order d involving components of x. We limit our attention here to the quadratic kernel (d = 2). We consider data vectors with components that are independently and symmetrically distributed with equal variance \sigma^2 and choose a set of features \phi(x) = (\sqrt{2c}\, x, \mathrm{Vec}[xx^T]) where \mathrm{Vec}[xx^T]_{j+N(i-1)} = x_i x_j. The covariance in feature space is block diagonal,

C = \begin{pmatrix} 2c\langle xx^T \rangle & 0 \\ 0 & \langle \mathrm{Vec}[xx^T]\mathrm{Vec}[xx^T]^T \rangle - \langle \mathrm{Vec}[xx^T] \rangle \langle \mathrm{Vec}[xx^T]^T \rangle \end{pmatrix}

with non-zero eigenvalues d_i and multiplicities:

d_i              number
2c\sigma^2       N
2\sigma^4        N(N-1)/2
2\sigma^4 + \kappa_4^i   N

where angled brackets denote expectations over the data distribution and \kappa_4^i = \langle x_i^4 \rangle - 3\sigma^4 is the 4th cumulant of the ith component of x. We see that although each component of the data is independently distributed with equal variance, the covariance structure in feature space may be quite complex.
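The identification of Gram-matrix and feature-covariance eigenvalues, and the role of centring, can be checked directly for the quadratic kernel; a small sketch with arbitrary sizes of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)

# For k(x,y) = (c + x.y)^2 the explicit feature map
# phi(x) = (sqrt(2c) x, Vec[x x^T]) reproduces the kernel up to the
# additive constant c^2, which the centring step removes.
N, p, c = 5, 8, 1.5
X = rng.standard_normal((p, N))

def phi(x):
    return np.concatenate([np.sqrt(2 * c) * x, np.outer(x, x).ravel()])

F = np.array([phi(x) for x in X])         # p explicit feature vectors
K = (c + X @ X.T) ** 2                    # Gram matrix from the kernel

# phi(x).phi(y) = 2c x.y + (x.y)^2 = k(x,y) - c^2
assert np.allclose(F @ F.T, K - c**2)

# Centre the Gram matrix so the features have zero mean in feature space.
J = np.eye(p) - np.ones((p, p)) / p
K_c = J @ K @ J

# Non-zero eigenvalues of the centred Gram matrix / p coincide with those
# of the centred sample covariance computed in the explicit feature space.
Fc = F - F.mean(axis=0)
cov_eigs = np.sort(np.linalg.eigvalsh(Fc.T @ Fc / p))[::-1]
gram_eigs = np.sort(np.linalg.eigvalsh(K_c / p))[::-1]
assert np.allclose(gram_eigs[: p - 1], cov_eigs[: p - 1])
```

The centring matrix J annihilates the constant c² term in K, which is why the constant offset in the polynomial kernel never appears in the centred spectrum.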
• Gaussian data, c = 0. For isotropic Gaussian data and c = 0 there is a single degenerate eigenvalue of C and the asymptotic result for the spectrum is identical to the case of an isotropic distribution [8, 9] with variance 2\sigma^4 and \alpha defined as the ratio of the number of examples p to the effective dimension in the feature space N(N+1)/2 (i.e. the degeneracy of the non-zero eigenvalue), so that \alpha = 2p/N(N+1) and p = O(N^2) is the appropriate scaling.

Figure 2: On the left we show the Gram matrix eigenspectrum for a sample data set and compare it to the theoretical result. The kernel is purely quadratic (c = 0) and we use isotropic Gaussian data with 2\sigma^4 = 1, N = 63 and p = 1000, so that \alpha \simeq 0.5. On the right we show the averaged top eigenvalue against p for fixed \alpha. Each point is averaged over 100 samples except for the right-most, which is averaged over 50. The dashed line shows the theoretical result \lambda_1 = 5.8284 and the inset is a log-log plot of the same data.

On the left of figure 2 we compare the spectra for a single sample data set to the theory for p = 1000 and N = 63, which corresponds to \alpha \simeq 0.50; the theoretical curve is almost identical to the one used in the inset to figure 1(a). The finite size effects are much larger than would be observed for PCA with isotropic data, and on the right of figure 2 we show the average of the top eigenvalue for this value of \alpha as p is increased, showing a very slow convergence to the asymptotic result.

• Gaussian data, c > 0. For isotropic Gaussian data and c > 0 there are two eigenvalues of C, with degeneracy N and N(N+1)/2 respectively. For large N and c > \sigma^2 the top N eigenvalues play an analogous role to the top S eigenvalues in the PCA data model defined in section 2.
A similar perturbative expansion to the one described in section 3 shows that when \alpha < (c/\sigma^2 - 1)^{-2} (where \alpha \simeq 2p/N^2 is defined relative to the feature space) the distribution is identical to the c = 0 case. For \alpha above this transition point the N top eigenvalues separate from the bulk. In the limit N \to \infty with p = O(N^2) the spread of the upper N eigenvalues will tend to zero and they will become localised at \lambda_u(c/\sigma^2 - 1, 2\sigma^4) as defined by equation (6). For finite N, and when the two components of the spectra are well separated, we can approximate the eigenvalue spectrum of the top N eigenvalues as though the data only contains these components, i.e. we model this cluster as isotropic data with \alpha = p/N and variance 2c\sigma^2. We obtain an improved approximation by correcting the mean of the separated cluster by the value predicted for the mean in the large-N limit. On the left of figure 3 we compare this approximation to the Gram matrix spectrum averaged over 300 data sets for large c, with the inset showing the separated cluster. The theory is shown by the solid line and provides a good qualitative fit to the data, although there are significant discrepancies. For the bulk we believe these to be due to finite size effects, but the theory for the spread of the upper N eigenvalues is only approximate, since the spread of this cluster will vanish as N \to \infty for fixed c and p = O(N^2). On the right of figure 3 we plot the average of the top N eigenvalues against c, showing good agreement with the theory. The top eigenvalue of the population covariance is shown by the line and the theory accurately predicts the bias in the sample estimate.

Figure 3: On the left we show the Gram matrix eigenvalue spectrum averaged over 300 data sets and compare it to the theoretical result. The inset shows the density of the top N eigenvalues, which are separated from the bulk. The kernel is quadratic with c = \sigma^2(1 + \sqrt{500}), with other parameters as in figure 2. On the right we show the average of the top N eigenvalues against the theoretical result as a function of c.

Figure 4: Results from a purely quadratic kernel (c = 0) on data containing a single dimension having positive kurtosis. We show the top 20 eigenvalues of the Gram matrix with the bulk spectrum as an inset. On the left \kappa_4 = 5 and we are above the transition, where the top eigenvalue is separated from the bulk. On the right \kappa_4 = 1 is below the transition. Other parameters were 2\sigma^4 = 1, N = 70, p = 1500, and results were averaged over 25 data sets.

• Non-Gaussian data, c = 0. If the data has components with positive kurtosis then these will break the symmetry of the covariance. This is analogous to the case for PCA studied in section 2, and the result for the limiting spectrum carries over. We have \alpha \simeq 2p/N^2 defined with respect to the dimension of the feature space. For each component of the data with \kappa_4^i > 2\sigma^4/\sqrt{\alpha} there will be a delta function in the spectrum at \lambda_u(\kappa_4^i/2\sigma^4, 2\sigma^4) as defined by equation (6). In figure 4 we show the Gram matrix eigenvalues for a data set containing a single dimension having positive kurtosis. On the left we have \kappa_4 = 5, which is above the transition. We have indicated with arrows the theoretical prediction for the top two eigenvalues and we see that there is a significant difference, although the separation is quite well described by the theory. We expect that these discrepancies are due to large finite size effects and further simulations are required to verify this.
On the right we have \kappa_4 = 1, which is below the transition, and the spectrum is very similar to the case of isotropic Gaussian data.

5 Conclusion

We studied the asymptotic form of the sample covariance eigenvalue spectrum from data with symmetry-breaking structure. For standard PCA the asymptotic results are very accurate even for moderate data dimension, but for kernel PCA with a quadratic kernel we found that convergence to the asymptotic result was slow. The limiting form of sample covariance spectra has previously been studied in the neural networks literature, where it can be used to determine the optimal batch learning rate for large linear perceptrons. Indeed, the results derived in section 2 for Gaussian data can also be derived by adapting an elegant method developed by Sollich [15], without recourse to the replica method. Halkjær & Winther used this approach to compute the spectral density for the case of a single symmetry-breaking direction and obtained a similar result to ours, except that the position of the separated eigenvalue was at \sigma^2(1 + A), which differs from our result [16]. In fact they assumed a large signal in their derivation, which can easily be adapted to obtain a result identical to ours. However this method, as well as the replica approach used here, is limited because it only applies to Gaussian data, while the Stieltjes transform relationship in equation (8) has been derived under much weaker conditions on the data distribution. Our current work is focussed on extending the analysis to more general kernels, such as the radial basis function (RBF) kernel, where the feature space dimension is infinite. In the general case we find that the Stieltjes transform can be derived by a variational mean-field theory and therefore provides a principled approximation to the average spectral density.

Acknowledgments

DCH was supported by an MRC(UK) Special Training Fellowship in Bioinformatics.
We would like to thank the anonymous reviewers for useful comments and for pointing out references [15] and [16].

References
[1] I.T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[2] I.M. Johnstone. Ann. Stat., 29, 2001.
[3] P. Reimann, C. Van den Broeck, and G.J. Bex. J. Phys. A: Math. Gen., 29:3521, 1996.
[4] B. Schölkopf, A. Smola, and K.-R. Müller. Neural Computation, 10:1299–1319, 1998.
[5] T.P. Minka. Automatic choice of dimensionality for PCA. In T.K. Leen, T.G. Dietterich, and V. Tresp, editors, NIPS 13, pages 598–604. MIT Press, 2001.
[6] A. Engel and C. Van den Broeck. Statistical Mechanics of Learning. Cambridge University Press, 2001.
[7] D.C. Hoyle and M. Rattray. Phys. Rev. E, in press.
[8] V.A. Marčenko and L.A. Pastur. Math. USSR-Sb, 1:507, 1967.
[9] A. Edelman. SIAM J. Matrix Anal. Appl., 9:543, 1988.
[10] K.W. Wachter. Ann. Probab., 6:1, 1978.
[11] A.M. Sengupta and P.P. Mitra. Phys. Rev. E, 60:3389, 1999.
[12] J.W. Silverstein and S. Choi. J. Multivariate Analysis, 54:295, 1995.
[13] J.W. Silverstein and P.L. Combettes. IEEE Trans. Signal Processing, 40:2100, 1992.
[14] Z.D. Bai. Ann. Probab., 21:649, 1993.
[15] P. Sollich. J. Phys. A, 27:7771, 1994.
[16] S. Halkjær and O. Winther. In M. Mozer, M. Jordan, and T. Petsche, editors, NIPS 9, page 169. MIT Press, 1997.
Auction Mechanism Design for Multi-Robot Coordination

Curt Bererton, Geoff Gordon, Sebastian Thrun, Pradeep Khosla
{curt,ggordon,thrun,pkk}@cs.cmu.edu
Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15217

Abstract

The design of cooperative multi-robot systems is a highly active research area in robotics. Two lines of research in particular have generated interest: the solution of large, weakly coupled MDPs, and the design and implementation of market architectures. We propose a new algorithm which joins together these two lines of research. For a class of coupled MDPs, our algorithm automatically designs a market architecture which causes a decentralized multi-robot system to converge to a consistent policy. We can show that this policy is the same as the one which would be produced by a particular centralized planning algorithm. We demonstrate the new algorithm on three simulation examples: multi-robot towing, multi-robot path planning with a limited fuel resource, and coordinating behaviors in a game of paintball.

1 Introduction

In recent years, the design of cooperative multi-robot systems has become a highly active research area within robotics [1, 2, 3, 4, 5, 6]. Many planning problems in robotics are best phrased as MDPs, defined over world states or, in the case of partial observability, belief states [7]. However, existing MDP planning techniques generally scale poorly to multi-robot systems because of the curse of dimensionality: in general, it is exponentially harder to solve an MDP for N agents than it is to solve a single-agent MDP, because the state and action space for N robots can be exponentially larger than for a single-robot system. This enormous complexity has confined MDP planning techniques largely to single-robot systems. In many cases, robots in a multi-robot system interact only in limited ways.
Robots might seek not to collide with each other [1], coordinate their locations to carry out a joint task [4, 6], or consume a joint resource with limited availability [8, 9, 10]. While these problems are not trivially decomposed, they do not necessarily have the worst-case exponential complexity that characterizes the general case. However, so far we lack effective mechanisms for cooperatively solving such MDPs. Handling this sort of limited interaction is exactly the strength of market-based planning algorithms [10, 12]: by focusing their attention on a limited set of important resources and ignoring all other interactions, these algorithms reduce the problem of cooperating with other robots to the problem of deciding which resources to produce or consume. Market-based algorithms are particularly attractive for multi-robot planning because many common types of interactions can be phrased as constraints on resources such as space (two robots can't occupy the same location at once) and time (a robot can only work on a limited number of tasks at once). From the point of view of these auction algorithms, the difficult part of the multi-robot planning problem is to compute the probability distribution of the price of each resource at every time step: the optimal price for a resource at time t depends on how much each robot produces or consumes between now and time t, and on what each robot's state is at time t. The resource usage and state depend on the robots' plans between now and time t, which in turn depend on the price. Worse yet, future resource usage depends on random events which can't be predicted exactly. In this paper, we bring together resource-allocation techniques from the auction and MDP literature. In particular, we propose a general technique for decomposing multi-robot MDP problems into "loosely coupled" MDPs which interact only through resource production and consumption constraints.
The decomposition works by turning all interactions into streams of payments between robots, thereby allowing each robot to learn its own local value function. Prices can be attached to any function of the visitation frequencies of each robot's states and actions. The actual prices for these resources are set by a "master" agent; the master agent takes into account the possibility of re-allocating resources at each step, but it approximates the effect of interactions between robots. Our approach generalizes a large body of previous literature in multi-robot systems, including prior work by Guestrin and Gordon [11]. Our algorithm can be distributed so that each robot reasons only about its own local interactions, and it always produces the same answer as a particular centralized planning algorithm.

2 MDPs, linear programs, and duals

A Markov Decision Process (MDP) is a tuple M = \{S, A, T, c, \gamma, s_0\}. S is a set of N states. A is a set of M actions. T is the dynamics, T(s', a, s) = p(s' \mid s, a). The reward function is c : S \times A \to \mathbb{R}. The discount factor is \gamma \in [0, 1]. Finally, s_0 \in S is the initial state. For any MDP there is a value function which indicates how desirable any state is. It is defined as

V(s) = \max_a \left( c(s, a) + \gamma \sum_{s'} p(s' \mid s, a) V(s') \right) .

We can compute V by solving the Bellman linear program (1). Once we have V, we can compute the optimal policy by one-step lookahead. Here V \in \mathbb{R}^N is the vector form of the value function, c_a \in \mathbb{R}^N is the immediate reward for taking action a, and T_a \in \mathbb{R}^{N \times N} is the matrix representation of the transition probabilities for action a. \alpha is an arbitrary probability distribution over S which represents the probability of the MDP starting in a particular state. Typically, \alpha is a vector in which one entry (the starting state) is set to one and all other entries are set to zero.
\min_V \; \alpha \cdot V \quad \text{s.t.} \quad \forall a : V \ge c_a + \gamma T_a V  (1)

\max_{f_a} \; \sum_a c_a \cdot f_a \quad \text{s.t.} \quad \sum_a f_a - \gamma \sum_a T_a^T f_a = \alpha , \quad \forall a : f_a \ge 0  (2)

The dual of the Bellman LP gives us an interesting alternative from which to view the problem of finding an optimal policy. The dual of the Bellman LP is shown in (2). The vector f_a represents the expected (discounted) number of times we perform action a from each state. For the remainder of the paper we will stack all of the f_a vectors into one large vector f, and collect the equality constraints in (2) into Af = b. Subscripts (e.g., f_i or A_i) will distinguish the planning problems for different robots.

3 Algorithm

3.1 Loosely coupled MDPs

Our algorithm is designed for multi-robot problems that can be decomposed into separate single-robot MDPs which interact through the production or consumption of fictitious resources. These resources may be physical goods such as fuel, or they may be logical resources such as the right to pass over a bridge at a particular time, the right to explore an area of the environment, or the right to collect reward for achieving a particular subgoal. Time may be part of the individual robot states, in which case a resource could be the right to consume a unit of fuel at a particular time (a futures contract). In more detail, each robot has a vector of state-action visitation frequencies f_i which must satisfy its own local dynamics A_i f_i = b_i. Its production or consumption of resources is defined by a matrix C_i: element (j, k) of C_i is the amount of resource j which is produced or consumed by robot i in state-action pair k. (So, C_i f_i is the vector of expected resource usages for robot i. The sign is arbitrary, so we will assume positive numbers correspond to consumption.) The robots interact through resource constraints: the instantaneous production and consumption of each resource must balance exactly.
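Each robot's local constraint set A_i f_i = b_i is exactly the dual Bellman LP (2). As a hedged illustration, here is a minimal single-MDP sketch with made-up numbers (a two-state "toggle" world; any LP solver works, we use scipy):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical MDP: states {0, 1}, actions {stay, go}. "go" toggles the
# state, "stay" keeps it, and only staying in state 1 pays reward 1.
gamma = 0.9
T = {'stay': np.eye(2), 'go': np.array([[0., 1.], [1., 0.]])}
c = {'stay': np.array([0., 1.]), 'go': np.array([0., 0.])}
alpha = np.array([1., 0.])               # start in state 0
actions = ['stay', 'go']

# Stack f = [f_stay; f_go]; equalities sum_a (I - gamma T_a^T) f_a = alpha.
A_eq = np.hstack([np.eye(2) - gamma * T[a].T for a in actions])
cost = -np.concatenate([c[a] for a in actions])   # linprog minimizes

res = linprog(cost, A_eq=A_eq, b_eq=alpha, method='highs')
f = res.x.reshape(len(actions), 2)

# Optimal discounted return alpha . V = c . f = gamma/(1 - gamma) = 9.
print(round(-res.fun, 6))                         # 9.0

# The optimal policy is read off from the visitation frequencies:
policy = [actions[np.argmax(f[:, s])] for s in range(2)]
print(policy)                                     # ['go', 'stay']
```

The visitation frequencies f come out as the discounted state-action occupancies of the optimal policy (go once from state 0, then stay in state 1 forever), and the LP objective matches α · V.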
This representation is in many ways related to an undirected dynamic Bayes network: each node of the network corresponds to the state and action of a single MDP, and a resource constraint involving a subset of the MDPs plays the role of a clique potential on the corresponding nodes. In this way it is similar to the representation of [11]; but we do not assume any particular form for the C_i matrices, while [11] effectively assumes that they are indicator functions of particular state or action variables. In the same (trivial) sense as Bayes nets, our representation is completely general: by collapsing all robots into a single giant agent we can represent an arbitrary MDP. More importantly, in the more typical case that some pieces of our model can be written as resource constraints, we can achieve an exponential saving in representation size compared to the monolithic planning problem.

3.2 Approximation

The resource constraints are what make loosely coupled MDPs difficult to solve. They make the value of a joint state depend in a non-linear way on the states of the individual robots. However, by making a simple approximation we can remove the nonlinearity and so factor our planning problem: we relax the resource constraints so that they must only be satisfied in expectation over all time steps, rather than deterministically on each time step. Under this assumption, knowing the expected resources available to a robot allows that robot to plan independently: since C_i f_i is the vector of expected resource usages for robot i, adding the constraint C_i f_i = k to equation (2) gives us the single-robot resource-constrained planning problem. The (approximate) global planning problem then becomes to determine an optimal resource allocation among robots and corresponding single-robot plans, or equivalently to determine the optimal resource prices and corresponding single-robot value functions.
More formally, the planning problem is to solve (3):

max_{f_i} Σ_i c_i · f_i    subject to    ∀i : A_i f_i = b_i,    (∗) Σ_i C_i f_i = d,    ∀i : f_i ≥ 0    (3)

Without the constraints marked (∗), this LP would represent a set of completely uncoupled robot planning problems. The constraints (∗) are the approximated resource constraints: they say that expected production must equal expected consumption for each resource. The resource prices are the dual variables for (∗), and the local value functions are the dual variables for the remaining equality constraints. The quality of our prices and value functions will depend on whether it is valid to assume a single price for each resource: if the prices stay constant then our approximate plan will translate perfectly to the physical world. On the other hand, if we are unlucky, we may find that prices are different from what we had planned when we need to buy or sell. In this case our computed plan will contain overoptimistic, counterintuitive sequences of actions; for example, in the problem of section 3.4, two robots might each plan to break down at the same time and be towed by the other. The only way to fix this problem is to make a more accurate model; in the worst case we will have to combine several robots into one large MDP so that we can track their joint allocation of resources at all times.

3.3 Action selection

Because the value functions incorporate information about future actions and random events, the robots only need to look ahead a short time to choose good actions. So, the robots can run a simple auction to determine their best joint action: each individual robot estimates its future cost for each action by a single-step backup from its value function. The difference between these future costs then tells the robot how much it is willing to bid for the right to execute each action. The optimal joint action is then the feasible action with the highest sum of bids.
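The bidding rule of section 3.3 can be sketched on a toy two-robot example; the Q-values, action set, and feasibility predicate below are invented for illustration and stand in for real one-step backups from the value functions:

```python
# Toy sketch of the auction for joint action selection: each robot bids its
# one-step backup values (relative to its worst action), and the feasible
# joint action with the highest total bid wins. All numbers are illustrative.
import numpy as np
from itertools import product

ACTIONS = ["move", "pickup", "request_tow"]
# Hypothetical one-step backups Q_i(s_i, a) for two robots in their current states
Q = [np.array([5.0, 8.0, 1.0]),    # robot 0 strongly prefers "pickup"
     np.array([4.0, 2.0, 9.0])]    # robot 1 strongly prefers "request_tow"
bids = [q - q.min() for q in Q]    # willingness to pay, relative to worst action

def feasible(joint):
    # Coupling constraint: every pickup must be matched by a tow request
    picks = sum(a == 1 for a in joint)
    reqs = sum(a == 2 for a in joint)
    return picks == reqs

best = max((j for j in product(range(3), repeat=2) if feasible(j)),
           key=lambda j: sum(b[a] for b, a in zip(bids, j)))
print([ACTIONS[a] for a in best])   # the winning feasible joint action
```

Here the winning joint action pairs robot 0's pickup with robot 1's tow request, exactly the kind of matched behavior the resource constraints encode.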
3.4 Example

Figure 1: A simple example (left panel): the objective is to have all robots (R1, R2, R3) reach the goal (G) where they receive a reward. Any action may result in a robot becoming disabled, in which case it must be towed to the repair area (Re) to continue with the task. The grid shown here is significantly smaller than the problem solved in our experiments (right panel).

Figure 1 shows a simulator which contains 3 robots. Each robot receives a large reward upon reaching the goal but incurs a small cost for each step it takes. Robots can break whenever they take a step, but a functioning robot may tow a failed robot to the repair area and the repaired robot may then proceed to the goal. Each robot has the action set A = {8-connected move, pickup for towing, request tow}. The state of each robot is its x position, its y position and its status {towing, going to goal, being towed, doing nothing}. If the grid is 300 by 300, then the state space size is |S| = 300 × 300 × 4 = 360,000. The action space size is |A| = 10. The joint state space of all three robots is |S_joint| = |S|^3 and the joint action space has size |A|^3 = 10^3. Clearly, this problem size is such that ordinary MDP solution methods will be insufficient to determine the optimal value function. However, this problem lends itself to resource-based decomposition because the robots only interact through towing. Specifically, we design our C_i matrices to represent the constraint that the expected number of times a robot executes a pickup action at a position should be equal to the expected number of times some other robot executes a request-tow action. Thus, we have a weakly coupled MDP with robot interactions that can be modeled by linear constraints.

  p ← 0
  G_i ← [ ], Φ_i ← [ ] for each robot i
  repeat
      done ← true
      for i ← 1 . . . n
          send prices p to robot i
          f ← frequencies from planning for robot i with costs c_i − C_i^T p
          send expected usage g = C_i f and cost φ = c_i · f to master
          if g is not already in G_i
              G_i ← [G_i, g]
              Φ_i ← [Φ_i, φ]
              done ← false
          end if
      end for
      p ← new dual variables from solving (4) with current G_i and Φ_i
  until done

Figure 2: The decentralized planning algorithm based on Dantzig-Wolfe decomposition.

3.5 Dantzig-Wolfe decomposition

We have reduced the multi-robot planning problem to the problem of solving the LP (3). So, one possible planning algorithm is just to pass this LP to a pre-packaged linear-program solver. This planning algorithm can be fairly efficient, but it is completely centralized: each agent must communicate its entire dynamics to a central location and wait to receive its value function in return. Instead of using this centralized algorithm, we want to produce the same outcome with a decentralized planner. To do so, we will apply Dantzig-Wolfe decomposition [13, chapter 24]. This decomposition splits our original LP (3) into a master LP (4) and one slave LP (5) for each robot i. It then solves each slave program repeatedly, generating a new value for f_i each time, and combines these solutions by inserting them into the master LP (Figure 2). The Dantzig-Wolfe decomposition algorithm is guaranteed to terminate in a finite number of steps with the correct solution to our original LP and therefore with the correct local value functions. Each slave LP is the same as the corresponding robot's MDP except that it has different state-action costs; so, the robots can run standard MDP planners (which are often much faster than general LP solvers) to produce their plans. And, instead of sending whole MDPs and value functions back and forth, the Dantzig-Wolfe decomposition only needs to send resource prices and expected usages. The master program can be located on a separate agent, or on an arbitrary robot.
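The price-search loop of Figure 2 can be sketched end-to-end on a toy two-robot problem. Everything below is illustrative, not the paper's implementation: the two "slaves" are tiny hand-built LPs solved by corner enumeration, the master is solved with `scipy`'s LP solver, and Big-M artificial variables keep the restricted master feasible before enough columns exist:

```python
# Toy Dantzig-Wolfe loop: robot 0 earns 3 by consuming one resource unit,
# robot 1 pays 1 to produce one; production must balance consumption (d = 0).
import numpy as np
from scipy.optimize import linprog

c = [np.array([3.0, 0.0]), np.array([-1.0, 0.0])]      # local rewards c_i
C = [np.array([[1.0, 0.0]]), np.array([[-1.0, 0.0]])]  # resource usage C_i
d = np.zeros(1)
BIG_M = 1e4                                            # artificial-variable penalty

def slave(i, p):
    # max (c_i - C_i^T p) . f  over the simplex f0 + f1 = 1, f >= 0: best corner
    adj = c[i] - C[i].T @ p
    f = np.eye(2)[adj.argmax()]
    return C[i] @ f, c[i] @ f                          # usage g, reward phi

def master(G, Phi):
    # max sum_i Phi_i . q_i  s.t.  sum_i G_i q_i (+ artificials) = d, sum_j q_ij = 1
    n0, n1 = len(Phi[0]), len(Phi[1])
    obj = -np.concatenate([Phi[0], Phi[1], [-BIG_M, -BIG_M]])
    A_eq = np.zeros((3, n0 + n1 + 2))
    A_eq[0, :n0] = np.concatenate(G[0])
    A_eq[0, n0:n0 + n1] = np.concatenate(G[1])
    A_eq[0, -2:] = [1.0, -1.0]                         # artificial slack, resource row
    A_eq[1, :n0] = 1.0                                 # convexity row, robot 0
    A_eq[2, n0:n0 + n1] = 1.0                          # convexity row, robot 1
    res = linprog(obj, A_eq=A_eq, b_eq=np.concatenate([d, [1.0, 1.0]]))
    return -res.eqlin.marginals[:1]                    # dual of resource row = price

p = np.zeros(1)
G, Phi = [[], []], [[], []]
while True:
    done = True
    for i in range(2):
        g, phi = slave(i, p)
        if not any(np.allclose(g, g_old) for g_old in G[i]):
            G[i].append(g); Phi[i].append(phi)
            done = False
    if done:
        break
    p = master(G, Phi)
print("equilibrium resource price:", p[0])
```

In this toy any price between 1 (robot 1's production cost) and 3 (robot 0's consumption value) supports the optimal plan, and the loop terminates once the slaves stop generating new columns.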
In more detail, the master and slave LPs are:

max_{q_i} Σ_i c_i^T F_i q_i    subject to    (∗) Σ_i C_i (F_i q_i) = d,    ∀i : q_i ≥ 0,    ∀i : Σ_j q_{ij} = 1    (4)

max_{f_i} (c_i^T − p^T C_i) f_i    subject to    A_i f_i = b_i,    f_i ≥ 0    (5)

The master LP is the same as the original problem (3) except that f_i has been replaced by F_i q_i. Each column of F_i is one of the solutions f_i which we have computed for the ith slave LP. (For efficiency, instead of storing F_i we keep G_i = C_i F_i and Φ_i = c_i^T F_i.) So, solving the master LP means finding a convex combination q_i of the known solutions for each slave LP. The slave LP is the same as a single-robot planning problem (2) except that its costs have been altered by subtracting p^T C_i. The vector p is the dual variable for the constraints (∗) from the last time we solved the master LP.

3.6 An economic interpretation

We have described how to use the Dantzig-Wolfe decomposition to derive an efficient distributed planning algorithm for loosely-coupled MDPs. In addition to being efficient and distributed, our algorithm has an intuitive economic interpretation which leads to interesting links with existing work on market architectures. It is well known that the dual variables of a linear program have economic significance [14, 15]. Associated with each row of the constraint matrices C_i in the master program (4) is a dual variable; that is, there is one dual variable p_j for each resource j. We can interpret this dual variable as a price for resource j. To see why, notice that the slave program charges robot i a cost of p_j [C_i]_{j,k} each time it visits state-action pair k, and that visiting state-action pair k consumes an amount [C_i]_{j,k} of resource j. The Dantzig-Wolfe algorithm can be interpreted as a search for optimal resource prices. The master agent repeatedly asks the robots what they would do if the prices were p, then tries to combine their answers to produce a good plan for all the robots together.
As it combines the single-robot plans, it notices whether it could achieve a higher reward by increasing or decreasing the supply of each resource; if there is an undersupply of a resource the master agent assigns it a high price, and if there is an oversupply the master agent assigns it a low price.

4 Experimental results

Figure 3: Auctions for multi-robot path planning with limited fuel usage. [Plot: performance relative to optimal over iterations 1-10; curve labeled "Auction outcome".] Left to right: in an auction based on the assumption of cheap fuel, all robots go to the globally most tempting goal. If we assume very expensive fuel, each robot crashes through obstacles and goes to its closest goal. With the optimal fuel price, the auction trades goal quality against distance to achieve the best possible total cost. As our algorithm learns better prices, the auction's outcomes approach the optimal policy.

Our experiments are divided into two groups. First, to investigate the convergence rate of our algorithm, we collected data from multiple runs on randomly-generated synthetic problems. Second, to investigate scaling, we applied the algorithm to a large, realistic problem taken from our ongoing research into robotic laser tag [16]. In our synthetic problem, we randomly place circular obstacles inside a bounded arena to create a maze. We then place 15 robots in random starting locations and ask them to plan paths to 10 random goals. Each robot can choose whichever goal it wants, but must pay a random goal-specific price. The robots are coupled through a constraint on fuel usage: there is a quadratic penalty on total path length. In this problem, our algorithm starts from an arbitrary initial guess at the value of a unit of fuel (which causes the individual robots to make poor policy decisions) and rapidly improves the estimated value by examining the individual robot plans.
We averaged the performance of our algorithm on 20 random instances; the results are shown in Figure 3. To demonstrate scaling, we used our learning algorithm to coordinate the robot towing problem in the simulation shown in Figure 4, with a grid size of 300 × 300 and 9 robots. Many more robots could be handled, but because we only coordinated towing and not path planning in this example, there was a bottleneck at the repair area due to the unmodeled coordination. The resulting paths executed in a sample problem are shown in Figure 4.

Figure 4: Left: an example of the output of the algorithm on a towing problem on a map generated using the robots on the right. Note that the nearest live robot (R1) tows the damaged robot to the repair area before heading to the goal. This type of problem was solved for up to 9 robots. Right: Multi-robot paint ball simulator.

Because our algorithm uses an arbitrary MDP planner as a subroutine, very large problems can be solved by combining our approach with fast planning algorithms. Figure 4 shows the simulator in which we applied the method to multi-robot paint ball. The rules of the game are that the last team standing wins and that it takes 4 hits to cause a robot to fail. There is a repair area to which a tagged teammate may be towed in order to repair it so that it may continue to play. Robots can only see each other when there are no obstacles between them. In this problem, we use our method to select and coordinate predefined policies. The policies used are: do nothing, attack target i, coordinated attack (with a teammate) target i, tow teammate i, and be repaired. Currently these policies are hand specified, but in future work we would like to apply policy search methods to learn these policies. The objective of our multi-robot planner is to determine at a given time which fixed policy each robot on the team should be executing so that the team will perform better.
Coordination constraints are that any coordinated attacks or towing/repairing must be consistent: if teammate 1 requests a tow from teammate 2, then teammate 2 must perform a tow of teammate 1. To solve the slave problems, we use rollouts of the given policies. This allows us to handle partial observability: each enemy is tracked with a particle filter, and the particle filter distribution is used when performing rollouts. Enemy positions are sampled from the particle filters at the beginning of each rollout, and each policy is evaluated over several possible enemy position combinations to determine its performance. The robots replan at fixed intervals; the simulation is halted while planning occurs. We compared our coordination planner to a similar planner without coordination. Each planner played against a default behavior of "attack nearest enemy" over 50 games. The uncoordinated planner won 42 of 50 games over the default behavior. The coordinated planner won 48 of 50 games against the default behavior. Thus, the addition of coordination (via our factored planning algorithm) significantly improved the performance.

5 Conclusions

We have developed a decentralized method for solving large loosely-coupled multi-robot planning problems. Our algorithm works by finding an optimal solution to an approximate planning problem in which resource constraints hold only in expectation. It has an intuitive economic interpretation which facilitates its application to new problems. And, it can be combined with previous MDP decomposition methods, allowing the user to mix and match which methods are best suited to their problem. We have applied our algorithm to multi-robot towing, optimal use of fuel in a multi-robot path planning problem, and planning for multi-robot paintball.

Acknowledgements

This project was supported by DARPA's MICA and MARS programs.

References

[1] M. Bennewitz, W. Burgard, and S. Thrun.
Optimizing schedules for prioritized path planning of multi-robot systems. In IEEE International Conference on Robotics and Automation (ICRA), Seoul, Korea, 2001.
[2] Y.U. Cao, A.S. Fukunaga, and A.B. Kahng. Cooperative mobile robotics: Antecedents and directions. Autonomous Robots, 4:1-23, 1997.
[3] D. Goldberg and M.J. Matarić. Robust behavior-based control for distributed multi-robot collection tasks. Technical Report IRIS-00-387, USC Institute for Robotics and Intelligent Systems, 2000.
[4] H. Kitano, editor. Proceedings of RoboCup-97: The First Robot World Cup Soccer Games and Conferences, Berlin, 1998. Springer Verlag.
[5] S.I. Roumeliotis and G.A. Bekey. Distributed multi-robot localization. In Proceedings of the International Symposium on Distributed Autonomous Robotic Systems (DARS 2000), pages 179-188, Knoxville, Tennessee, 2000.
[6] J. Salido, J. Dolan, J. Hampshire, and P.K. Khosla. A modified reactive control framework for cooperative mobile robots. In Proceedings of the International Conference on Sensor Fusion and Decentralized Control, pages 90-100, Pittsburgh, PA, 1997. SPIE.
[7] L.P. Kaelbling, M.L. Littman, and A.R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99-134, 1998.
[8] W. Burgard, D. Fox, M. Moors, R. Simmons, and S. Thrun. Collaborative multi-robot exploration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), San Francisco, CA, 2000. IEEE.
[9] L.E. Parker. On the design of behavior-based multi-robot teams. Journal of Advanced Robotics, 10(6), 1996.
[10] R. Zlot, A. Stentz, M. Dias, and S. Thayer. Multi-robot exploration controlled by a market economy, 2002.
[11] Carlos Guestrin and Geoffrey Gordon. Distributed planning in hierarchical factored MDPs. In A. Darwiche and N. Friedman, editors, Uncertainty in Artificial Intelligence (UAI), volume 18, 2002.
[12] Brian P. Gerkey and Maja J. Matarić. Sold!: Market methods for multi-robot control.
[13] George B. Dantzig. Linear Programming and Extensions. Princeton University Press, 1963.
[14] Ronald Rardin. Optimization in Operations Research. Prentice Hall, 1998.
[15] Vasek Chvatal. Linear Programming. W.H. Freeman and Company, 1983.
[16] Matthew Rosencrantz, Geoffrey Gordon, and Sebastian Thrun. Locating moving entities in dynamic indoor environments. In ACM AGENTS, 2003.
[17] M. Dias and A. Stentz. A market approach to multirobot coordination, 2001.
A Computational Geometric Approach to Shape Analysis in Images

Anuj Srivastava, Department of Statistics, Florida State University, Tallahassee, FL 32306, anuj@stat.fsu.edu
Washington Mio, Department of Mathematics, Florida State University, Tallahassee, FL 32306, mio@math.fsu.edu
Xiuwen Liu, Department of Computer Science, Florida State University, Tallahassee, FL 32306, liux@cs.fsu.edu
Eric Klassen, Department of Mathematics, Florida State University, Tallahassee, FL 32306, klassen@math.fsu.edu

Abstract

We present a geometric approach to statistical shape analysis of closed curves in images. The basic idea is to specify a space of closed curves satisfying given constraints, and exploit the differential geometry of this space to solve optimization and inference problems. We demonstrate this approach by: (i) defining and computing statistics of observed shapes, (ii) defining and learning a parametric probability model on shape space, and (iii) designing a binary hypothesis test on this space.

1 Introduction

An important goal in image understanding is to detect, track and label objects of interest present in observed images. Imaged objects can be characterized in many ways: according to their colors, textures, shapes, movements, and locations. The past decade has seen significant advances in the modeling and analysis of pixel values or textures to characterize objects in images, albeit with limited success. On the other hand, planar curves that represent contours of objects have been studied independently for a long time. An emerging opinion in the vision community is that global features such as shapes of contours should also be taken into account for the successful detection and recognition of objects. A common approach to analyzing curves in images is to treat them as level sets of functions, and algorithms involving such active contours are governed usually by partial differential equations (PDEs) driven by appropriate data terms and smoothness penalties (see for example [10]).
Regularized curve evolutions and region-based active contours offer alternatives in similar frameworks. This remarkable body of work contains various studies of curve evolution, each with relative strengths and drawbacks. In this paper, we present a framework for the algorithmic study of curves, their variations and statistics. In this approach, a fundamental element is a space of closed curves, with additional constraints to impose equivalence of shapes under rotation, translation, and scale. We exploit the geometry of these spaces using elements such as tangents, normals, geodesics and gradient flows, to solve optimization and statistical inference problems for a variety of cost functions and probability densities. This framework differs from those employed in previous works on “geometry-driven flows” [8] in the sense that here both the geometry of the curves and the geometry of spaces of curves are utilized. Here the dynamics of active contours is described by vector fields on spaces of curves. It is important to emphasize that a shape space is usually a non-linear, infinite-dimensional manifold, and its elements are the individual curves of interest. Several interesting applications can be addressed in this formulation, including: 1) Efficient deformations between any two curves are generated by geodesic paths connecting the elements they represent in the shape space. Geodesic lengths also provide a natural metric for shape comparisons. 2) Given a set of curves (or shapes), one can define the concepts of mean and covariance using geodesic paths, and thus develop statistical frameworks for studying shapes. Furthermore, one can define probabilities on a shape space to perform curve (or shape) classification via hypothesis testing. 
While these problems have been studied in the past with elegant solutions presented in the literature (examples include [9, 11, 7, 2, 5]), we demonstrate the strength of the proposed framework by addressing these problems using significantly different ideas. Given past achievements in PDE-based approaches to curve evolution, what is the need for newer frameworks? The study of the structure of the shape space provides new insights and solutions to problems involving dynamic contours and problems in quantitative shape analysis. Once the constraints are imposed in definitions of shape spaces, the resulting solutions automatically satisfy these constraints. It also complements existing methods of image processing and analysis well by realizing new computational efficiencies. The main strength of this approach is its exploitation of the differential geometry of the shape space. For instance, a geodesic or gradient flow X_t of an energy function E can be generated as a solution of an ordinary differential equation of the type

dX_t/dt = Π(∇E(X_t)) ,    (1)

where Π denotes an appropriate projection onto a tangent space. This contrasts with the nonlinear PDE-based curve evolutions of past works. The geometry of shape space also enables us to derive statistical elements: probability measures, means and covariances; these quantities have rarely been treated in previous studies. In shape extraction, the main focus in past works has been on solving PDEs driven by image features under smoothness constraints, and not on the statistical analysis of shapes of curves. The use of geodesic paths, or piecewise geodesic paths, has also seen limited use in the past. We should also point out the main limitations of the proposed framework. One drawback is that curve evolutions cannot handle certain changes in topology, which is one of the key features of level-set methods; a shape space is purposely set up to not allow curves to branch into several components.
Secondly, this idea does not extend easily to the analysis of surfaces in R^3. Despite these limitations, the proposed methodology provides powerful algorithms for the analysis of planar curves as demonstrated by the examples presented later. Moreover, even in applications where branching appears to be essential, the proposed methods may be applicable with additional developments. This paper is laid out as follows: Section 2 studies geometric representations of constrained curves as elements of a shape space. Geometric analysis tools on the shape space are presented in Section 3. Section 4 provides examples of statistical analysis on the shape space, while Section 5 concludes the paper with a brief summary.

2 Representations of Shapes

In this paper we restrict the discussion to curves in R^2, although curves in R^3 can be handled similarly. Let α : R → R^2 denote the coordinate function of a curve parametrized by arc length, i.e., satisfying ‖α̇(s)‖ = 1 for every s. A direction function θ(s) is a function satisfying α̇(s) = e^{jθ(s)}, where j = √(−1). θ captures the angle made by the velocity vector with the x-axis, and is defined up to the addition of integer multiples of 2π. The curvature function κ(s) = θ̇(s) can also be used to represent a curve. Consider the problem of studying shapes of contours or silhouettes of imaged objects as closed, planar curves in R^2, parametrized by arc length. Since shapes are invariant to rigid motions (rotations and translations) and uniform scaling, a shape representation should be insensitive to these transformations. Scaling can be resolved by fixing the length of α to be 2π, and translations by representing curves via their direction functions. Thus, we consider the space L^2 of all square integrable functions θ : [0, 2π] → R, with the usual inner product ⟨f, g⟩ = ∫_0^{2π} f(s) g(s) ds. To account for rotations and ambiguities in the choice of θ, we restrict direction functions to those having a fixed average, say, π.
For α to be closed, it must satisfy the closure condition ∫_0^{2π} e^{jθ(s)} ds = 0. Thus, we represent curves by direction functions satisfying the average-π and closure conditions; we call this space of direction functions D. Summarizing, D is the subspace of L^2 consisting of all (direction) functions satisfying the constraints

(1/2π) ∫_0^{2π} θ(s) ds = π ;    ∫_0^{2π} cos(θ(s)) ds = 0 ;    ∫_0^{2π} sin(θ(s)) ds = 0 .    (2)

It is still possible to have multiple continuous functions in D representing the same shape. This variability is due to the choice of the reference point (s = 0) along the curve. For x ∈ S^1 and θ ∈ D, define (x · θ) as a curve whose initial point (s = 0) is changed by a distance of x along the curve. We term this a re-parametrization of the curve. To remove the variability due to this re-parametrization group, define the quotient space C ≡ D/S^1 as the space of continuous, planar shapes. For details, please refer to the paper [4].

3 Geometric Tools for Shape Analysis

The main idea in the proposed framework is to use the geometric structure of a shape space to solve optimization and statistical inference problems on these spaces. This approach often leads to simple formulations of these problems and to more efficient vision algorithms. Thus, we must study issues related to the differential geometry and topology of a shape space. In this paper we restrict to the tangent and normal bundles, exponential maps, and their inverses on these spaces.

3.1 Tangents and Normals to Shape Space

The main reason for studying the tangential and normal structures is the following: We wish to employ iterative numerical methods in the simulation of geodesic and gradient flows on the shape space. At each step in the iteration, we first flow in the linear space L^2 using standard methods, and then project the new point back onto the shape space using our knowledge of the normal structure.
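As a quick sanity check (not from the paper), the three constraints in (2) can be verified numerically for the unit circle, whose direction function after fixing the average is simply θ(s) = s; the grid size and Riemann-sum discretization below are illustrative choices:

```python
# Numerically check that theta(s) = s (the circle) satisfies the constraints (2).
import numpy as np

n = 10000
s = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
ds = 2 * np.pi / n
theta = s                                    # direction function of the circle

mean_cond = theta.sum() * ds / (2 * np.pi)   # should equal pi (average condition)
close_x = np.cos(theta).sum() * ds           # closure: integral of cos(theta) = 0
close_y = np.sin(theta).sum() * ds           # closure: integral of sin(theta) = 0
print(mean_cond, close_x, close_y)
```

The average condition is met up to the O(1/n) discretization error, and both closure integrals vanish to machine precision since the grid samples a full period.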
For technical reasons, it is convenient to reduce optimization and inference problems on C to problems on the manifold D, so we study the latter. It is difficult to specify the tangent spaces to D directly, because they are infinite-dimensional. When working with finitely many constraints, as is the case here, it is easier to describe the space of normals to D in L^2 instead. It can be shown that a vector f ∈ L^2 is tangent to D at θ if and only if f is orthogonal to the subspace spanned by {1, sin θ, cos θ}. Hence, these three functions span the normal space to D at θ. Implicitly, the tangent space is given as:

T_θ(D) = {f ∈ L^2 | f ⊥ span{1, cos θ, sin θ}} .

Thus, the projection Π in Eqn. 1 can be specified by subtracting from a function (in L^2) its projection onto the space spanned by these three elements.

3.2 Exponential Maps

We first describe the computation of geodesics (or, one-parameter flows) in D with prescribed initial conditions. Geodesics on D are realized as exponential maps from tangent spaces to D. The intricate geometry of D disallows explicit analytic expressions. Therefore, we adopt an iterative strategy, where in each step, we first flow infinitesimally in the prescribed tangent direction in the space L^2, and then project the end point of the path to D. Next, we parallel transport the velocity vector to the new point by projecting the previous velocity orthogonally onto the tangent space of D at the new point. Again, this is done by subtracting normal components. The simplest implementation is to use Euler's method in L^2, i.e., to move in each step along short straight line segments in L^2 in the prescribed direction, and then project the path back onto D. Details of this numerical construction of geodesics are provided in [4]. A geodesic can be specified by an initial condition θ ∈ D and a direction f ∈ T_θ(D), the space of all tangent directions at θ. We will denote the corresponding geodesic by Ψ(θ, t, f), where t is the time parameter.
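The projection Π onto T_θ(D) described in section 3.1 can be sketched on a discretized grid. This is an illustrative implementation, not the paper's code: since {1, cos θ, sin θ} are not mutually orthogonal at a general θ, we use a least-squares projection rather than subtracting individual dot products:

```python
# Remove from f its component in span{1, cos(theta), sin(theta)} (the normal
# space to D at theta), leaving the tangential part. Discretized illustration.
import numpy as np

def project_to_tangent(f, theta):
    # Columns of B span the (discretized) normal space to D at theta
    B = np.stack([np.ones_like(theta), np.cos(theta), np.sin(theta)], axis=1)
    # Least-squares coefficients of f on span(B); on a uniform grid this
    # matches the L2 projection up to the constant grid spacing
    coef, *_ = np.linalg.lstsq(B, f, rcond=None)
    return f - B @ coef

n = 2000
s = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
theta = s + 0.3 * np.sin(2 * s)              # an arbitrary base point
g = project_to_tangent(np.sin(3 * s) + 0.5, theta)
# g should now be orthogonal to 1, cos(theta) and sin(theta)
residuals = [abs(b @ g) for b in (np.ones(n), np.cos(theta), np.sin(theta))]
print(residuals)
```

This is exactly the Π used in the iterative geodesic construction above: flow in L^2, then strip the normal components.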
The technique just described allows us to compute Ψ numerically. For t = 1, Ψ(θ, 1, f) is the exponential map from f ∈ T_θ(D) to D.

3.3 Shape Logarithms

Next, we focus on the problem of finding a geodesic path between any two given shapes θ_1, θ_2 ∈ D. This is akin to inverting the exponential map. The main issue is to find the appropriate direction f ∈ T_{θ_1}(D) such that a geodesic from θ_1 in that direction passes through θ_2 at time t = 1. In other words, the problem is to solve for an f ∈ T_{θ_1}(D) such that Ψ(θ_1, 0, f) = θ_1 and Ψ(θ_1, 1, f) = θ_2. One can treat the search for this direction as an optimization problem over the tangent space T_{θ_1}(D). The cost to be minimized is given by the functional H[f] = ‖Ψ(θ_1, 1, f) − θ_2‖^2, and we are looking for that f ∈ T_{θ_1}(D) for which: (i) H[f] is zero, and (ii) ‖f‖ is minimum among all such tangents. Since the space T_{θ_1}(D) is infinite dimensional, this optimization is not straightforward. However, since f ∈ L^2, it has a Fourier decomposition, and we can solve the optimization problem over a finite number of Fourier coefficients. For any two shapes θ_1, θ_2 ∈ D, we have used a shooting method to find the optimal f [4]. The basic idea is to choose an initial direction f specified by its Fourier coefficients and then use a gradient search to minimize H as a function of the Fourier coefficients. Finally, to find the shortest path between two shapes in C, we compute the shortest geodesic connecting representatives of the given shapes in D. This is a simple numerical problem, because C is the quotient of D by the 1-dimensional re-parametrization group S^1. Shown in Figure 1 are three examples of geodesic paths in C connecting given shapes. Drawn in between are shapes corresponding to equally spaced points along the geodesic paths.

4 Statistical Analysis on Shape Spaces

Our goal is to develop tools for statistical analysis of shapes. Towards that goal, we develop the following ideas.
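Returning to the shooting method of section 3.3: a toy scalar analogue (invented dynamics, not the shape-space implementation of [4]) illustrates the idea of choosing an initial direction v, flowing for unit time, and descending the endpoint mismatch H[v] = |Ψ(x_0, 1, v) − x_1|^2 by a finite-difference gradient:

```python
# Toy shooting method: find the initial direction v whose unit-time flow
# endpoint hits the target x1, by gradient descent on the squared mismatch.
import math

def flow(x0, v, steps=100):
    # Simple nonlinear dynamics standing in for the exponential map Psi
    x, dt = x0, 1.0 / steps
    for _ in range(steps):
        x += dt * (v - 0.5 * math.sin(x))
    return x

x0, x1 = 0.0, 2.0
v, lr, eps = 0.0, 0.5, 1e-5
for _ in range(200):
    H = (flow(x0, v) - x1) ** 2
    grad = ((flow(x0, v + eps) - x1) ** 2 - H) / eps   # finite-difference gradient
    v -= lr * grad
print(v, flow(x0, v))   # endpoint should now be close to x1
```

In the shape-space version the scalar v becomes a vector of Fourier coefficients of f, and the gradient search runs over those coefficients instead.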
Figure 1: Top panels show examples of shapes manually extracted from the images. Bottom panels show examples of evolving one shape into another via a geodesic path. In each case, the leftmost shape is θ_1, the rightmost curve is θ_2, and intermediate shapes are equi-spaced points along the geodesic.

4.1 Sample Means on Shape Spaces

Algorithms for finding geodesic paths on the shape space allow us to compute means and covariances in these spaces. We adopt a notion of mean known as the intrinsic mean or the Karcher mean ([3]) that is quite natural in our geometric framework. Let d(·, ·) be the shortest-path metric on C. To calculate the Karcher mean of shapes {θ_1, . . . , θ_n} in C, define a function V : C → R by V(θ) = Σ_{i=1}^n d(θ, θ_i)^2. Then, define the Karcher mean of the given shapes to be any point µ ∈ C for which V(µ) is a local minimum. In the case of Euclidean spaces this definition agrees with the usual definition µ = (1/n) Σ_{i=1}^n p_i. Since C is complete, the intrinsic mean as defined above always exists. However, there may be collections of shapes for which µ is not unique. An iterative algorithm for finding a Karcher mean of given shapes is given in [4].

4.2 Shape Learning

Another important problem in statistical analysis of shapes is to "learn" probability models from the observed shapes. Once the shapes are clustered, we assume that elements in the same cluster are (random) samples from the same probability model, and try to learn this model.
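The Karcher-mean iteration of section 4.1 can be sketched on the circle S^1, a simple stand-in for the shape space C (the shape-space version in [4] replaces the log and exponential maps below with the geodesic constructions of section 3; the data are illustrative):

```python
# Karcher mean on S^1: repeatedly average the log maps (tangent vectors toward
# the data) and shoot along the mean tangent direction (the exponential map).
import math

def log_map(mu, x):
    # Tangent vector at mu pointing toward x: shortest signed angular difference
    return (x - mu + math.pi) % (2 * math.pi) - math.pi

def karcher_mean(points, mu=0.0, step=1.0, iters=100):
    for _ in range(iters):
        g = sum(log_map(mu, x) for x in points) / len(points)
        mu = (mu + step * g) % (2 * math.pi)   # exponential map on the circle
        if abs(g) < 1e-12:                     # mean tangent vanishes: converged
            break
    return mu

angles = [0.1, 0.3, 6.2]        # 6.2 is just below 2*pi, i.e. wraps past zero
print(karcher_mean(angles))
```

Note the wrap-around point 6.2 pulls the mean toward zero, which a naive arithmetic average of the raw angles would get wrong; this is exactly why the intrinsic definition via V(θ) is needed on a curved space.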
These models can then be used for future Bayesian discoveries of shapes or for classification of new shapes. To learn a probability model amounts to estimating a probability density function on the shape space, a task that is rather difficult to perform precisely. The two main difficulties are the nonlinearity and infinite-dimensionality of C, and they are handled here as follows.

1. Tangent Space: Since C is a nonlinear manifold, we impose a probability density on a tangent space rather than on C directly. For a mean shape µ ∈ C, the space of all tangents to the shape space at µ, T_µ(C) ⊂ L^2, is an infinite-dimensional vector space. Similar to the ideas presented in [1], we impose a probability density function f on T_µ(C) in order to avoid dealing with the nonlinearity of C. The basic assumption here is that the support of f in T_µ(C) is sufficiently small so that the exponential map between the support and C has a well-defined inverse.

2. Finite-Dimensional Representation: Assume that the covariance operator of the probability distribution on T_µ(C) has finite spectrum, and thus admits a finite representation. We approximate a tangent function by a truncated Fourier series to obtain a finite-dimensional representation. We thus characterize a probability distribution on T_µ(C) as one on a finite-dimensional vector space.

Let a tangent element g ∈ T_µ(C) be represented by its Fourier expansion: g(s) = Σ_{i=1}^m x_i e_i(s), for a large positive integer m. Using the identification g ≡ x = {x_i} ∈ R^m, one can define a probability distribution on elements of T_µ(C) via a probability distribution on the coefficients x. We still have to decide what form the resulting probability distribution takes. One common approach is to assume a parametric form so that learning is reduced to an estimation of the relevant parameters. As an example, a popular idea is to assume a Gaussian distribution on the underlying space.
The variations of x are mostly restricted to an m1-dimensional subspace of Rm, called the principal subspace, for some m1 ≤ m. On this subspace we adopt a multivariate normal with mean µ ∈ Rm1 and covariance K ∈ Rm1×m1. Estimation of µ and K from the observed shapes follows the usual procedures. Computation of the mean shape µ is described in [4]. Using µ and any observed shape θj, we find the tangent vector gj ∈ Tµ(C) such that the geodesic from µ in the direction gj passes through θj in unit time. This tangent vector is actually computed via its finite-dimensional representation and results in the corresponding vector of coefficients xj. From the observed values of xj ∈ Rm, one can estimate the principal subspace and the covariance matrix. Extracting the dominant eigenvectors of the estimated covariance matrix, one can capture the dominant modes of variation. The density function associated with this family of shapes is given by: h(θ; µ, K) ≡ (2π)^{−m/2} det(K)^{−1/2} exp(−(x − µ)^T K^{−1} (x − µ)/2), (3) where Ψ(µ, g, 1) = θ and g = Σ_{i=1}^{m1} (xi − µi) ei(s). An example of this shape learning is shown in Figure 2. The top panels show infrared pictures of tanks, followed by their extracted contours in the second row of images. These contours are then used in analyzing the shapes of tanks. As an example, the 12 panels at bottom left show the observed contours of a tank viewed from a variety of angles, and we are interested in capturing this shape variation. Repeating the earlier process, the mean shape is shown in the top middle panel and the eigenvalues are plotted in the bottom middle panel. The twelve panels on the right show shapes generated randomly from the parametric model h(θ; µ, K). In Figure 3 we present an interesting example of samples from three different shape models. Let the original model be h(θ; µ, K), where µ and K are as shown in Figure 2. Six samples from this model are shown on the left of Figure 3.
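The learning step just described can be sketched numerically. In the snippet below (hypothetical names, numpy only), the mean, principal subspace, and within-subspace covariance are estimated from coefficient vectors xj, and random coefficient vectors are drawn from the resulting Gaussian model; mapping samples back to shapes through Ψ is omitted.

```python
import numpy as np

def learn_shape_model(X, m1):
    """X: n x m array of tangent Fourier coefficients, one row per shape.
    Returns the mean, the principal basis (m x m1), and the covariance
    expressed in principal-subspace coordinates."""
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False)               # m x m sample covariance
    w, V = np.linalg.eigh(C)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:m1]            # dominant modes of variation
    U = V[:, idx]                             # principal subspace basis
    K = np.diag(w[idx])                       # covariance in principal coords
    return mu, U, K

def sample_shapes(mu, U, K, n_samples, rng):
    """Draw coefficient vectors from the Gaussian model h(.; mu, K)."""
    z = rng.multivariate_normal(np.zeros(K.shape[0]), K, size=n_samples)
    return mu + z @ U.T                       # map back to R^m
```

Scaling K (as in Figure 3) simply shrinks or inflates the sampled variations around the mean shape.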
The middle shows samples from a probability density h(θ; µ, 0.2K) to demonstrate a smaller covariance; the samples here lie much closer to the mean shape. The right shows samples from a density whose covariance is isotropic in the principal subspace, i.e. the covariance is given by 0.4∥K∥2 times a matrix whose top-left block is a 12 × 12 identity matrix and whose remaining entries are zero. Figure 2: Top two rows: Infrared images and extracted contours of two tanks, M60 and T72, at different viewing angles. Bottom row: For the 12 observed M60 shapes shown on the left, the middle panels show the mean shape and the principal eigenvalues of the covariance, and the right panels show 12 random samples from the Gaussian model h(θ; µ, K). Figure 3: Comparison of samples from three families: (i) h(θ; µ, K), (ii) h(θ; µ, 0.2K), and (iii) h(θ; µ, 0.4∥K∥2I12). 4.3 Hypothesis Testing This framework of shape representations and statistical models on shape spaces has important applications in decision theory. One is to recognize an imaged object according to the shape of its boundary. Statistical analysis on shape spaces can be used to make a variety of decisions such as: Does this shape belong to a given family of shapes? Do these two families of shapes have similar means and variances? Given a test shape and two competing probability models, which one explains the test shape better? We restrict ourselves to the case of binary hypothesis testing since, for multiple hypotheses, one can find the best hypothesis using a sequence of binary hypothesis tests. Consider two shape families specified by their probability models h1 and h2. For an observed shape θ ∈ C, we are interested in selecting one of the two following hypotheses: H0 : θ ∼ h1 or H1 : θ ∼ h2. We select a hypothesis according to the likelihood ratio test: l(θ) ≡ log(h1(θ)/h2(θ)) ≷ 0. Substituting for the normal distributions (Eqn.
3) for h1 ≡ h(θ; µ1, Σ1) and h2 ≡ h(θ; µ2, Σ2), we can obtain sufficient statistics for this test. Let x1 be the vector of Fourier coefficients that encodes the tangent direction from µ1 to θ, and x2 the same for the direction from µ2 to θ. In other words, if we let g1 = Σ_{i=1}^{m} x_{1,i} ei and g2 = Σ_{i=1}^{m} x_{2,i} ei, then we have θ = Ψ(µ1, g1, 1) = Ψ(µ2, g2, 1). It follows that l(θ) = (x1^T Σ1^{−1} x1 − x2^T Σ2^{−1} x2) − (1/2)(log(det(Σ2)) − log(det(Σ1))). (4) In case the two covariances are equal to Σ, the hypothesis test reduces to l(θ) = (x1^T Σ^{−1} x1 − x2^T Σ^{−1} x2) ≷ 0, and when Σ is the identity, the log-likelihood ratio is given by l(θ) = ∥x1∥² − ∥x2∥². The curved nature of the shape space C makes the analysis of this test difficult. For instance, one may be interested in the probability of a type-I error, but that calculation requires a probability model on x2 when H0 is true. As a first-order approximation, one can write x2 ∼ N(¯x, Σ1), where ¯x is the coefficient vector of the tangent direction in Tµ2(C) that corresponds to the geodesic from µ2 to µ1. However, the validity of this approximation remains to be tested under experimental conditions. 5 Conclusion We have presented an overview of an ambitious framework for solving optimization and inference problems on a shape space. The main idea is to exploit the differential geometry of the manifold to obtain simpler solutions than those obtained with PDE-based methods. We have presented some applications of this framework in image understanding. In particular, these ideas lead to a novel statistical theory of shapes of planar objects with powerful tools for shape analysis. Acknowledgments This research was supported in part by grants NSF (FRG) DMS-0101429, NMA 201-012010, and NSF (ACT) DMS-0345242. References [1] I. L. Dryden and K. V. Mardia. Statistical Shape Analysis. John Wiley & Sons, 1998. [2] N. Duta, M. Sonka, and A. K. Jain. Learning shape models from examples using automatic shape clustering and Procrustes analysis.
In Proceedings of Information Processing in Medical Imaging, volume 1613 of Lecture Notes in Computer Science, pages 370–375. Springer, 1999. [3] H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics, 30:509–541, 1977. [4] E. Klassen, A. Srivastava, W. Mio, and S. Joshi. Analysis of planar shapes using geodesic paths on shape spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(3):to appear, March 2004. [5] H. Le. Locating Fréchet means with application to shape spaces. Advances in Applied Probability, 33(2):324–338, 2001. [6] W. Mio, A. Srivastava, and E. Klassen. Interpolation by elastica in Euclidean spaces. Quarterly of Applied Mathematics, to appear, 2003. [7] D. Mumford. Elastica and computer vision, pages 491–506. Springer, New York, 1994. [8] B. Romeny, editor. Geometry Driven Diffusions in Computer Vision. Kluwer, 1994. [9] T. B. Sebastian, P. N. Klein, and B. B. Kimia. On aligning curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(1):116–125, 2003. [10] J. Sethian. Level Set Methods: Evolving Interfaces in Geometry, Fluid Mechanics, Computer Vision, and Material Science. Cambridge University Press, 1996. [11] E. Sharon, A. Brandt, and R. Basri. Completion energies and scale. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(10):1117–1131, 2000. [12] L. Younes. Optimal matching between shapes via elastic deformations. Journal of Image and Vision Computing, 17(5/6):381–389, 1999.
2003
52
2,455
A Low-Power Analog VLSI Visual Collision Detector Reid R. Harrison Department of Electrical and Computer Engineering University of Utah Salt Lake City, UT 84112 harrison@ece.utah.edu Abstract We have designed and tested a single-chip analog VLSI sensor that detects imminent collisions by measuring radially expansive optic flow. The design of the chip is based on a model proposed to explain leg-extension behavior in flies during landing approaches. A new elementary motion detector (EMD) circuit was developed to measure optic flow. This EMD circuit models the bandpass nature of large monopolar cells (LMCs) immediately postsynaptic to photoreceptors in the fly visual system. A 16 × 16 array of 2-D motion detectors was fabricated on a 2.24 mm × 2.24 mm die in a standard 0.5-µm CMOS process. The chip consumes 140 µW of power from a 5 V supply. With the addition of wide-angle optics, the sensor is able to detect collisions around 500 ms before impact in complex, real-world scenes. 1 Introduction Many animals – from flies to humans – are capable of visually detecting imminent collisions caused either by a rapidly approaching object or self-motion towards an obstacle. Neurons dedicated to this task have been found in the locust [1] and the pigeon [2]. Borst and Bahde have shown that flies use visual information to time the extension of their legs on landing approaches [3]. While several models have been proposed to explain collision detection, the model proposed in [3] is particularly amenable to hardware implementation. The model, shown in Fig. 1, employs a radially-oriented array of motion detectors centered in the direction of flight. As the animal approaches a static object, an expansive optic flow field is produced on the retina. A wide angle field of view is useful since optic flow in the direction of flight will be zero. The response of this radial array of motion detectors is summed and then passed through a leaky integrator (a lowpass filter). 
If this response exceeds a fixed threshold, an imminent collision is detected and the animal can take evasive action or prepare for a landing. This expansive optic flow model has recently been used to explain landing and collision avoidance responses in the fruit fly [4]. A similar algorithm has been implemented in a traditional CPU for autonomous robot navigation [5]. In this work, we present a single-chip analog VLSI sensor developed to implement this model. Figure 1: Diagram of the collision detection algorithm. 2 Elementary Motion Detectors Our collision detection algorithm uses an array of radially-oriented elementary motion detectors (EMDs) to sense image expansion. Simulations by the author have shown that the structure and properties of the EMDs strongly affect the accuracy of this algorithm [6]. We use an enhanced version of the familiar delay-and-correlate or “Reichardt” EMD first proposed by Hassenstein and Reichardt in the 1950s to explain the optomotor response of beetles [7]. Fig. 2 shows a diagram of the EMD used in our collision sensor. The first stage of the EMD is photoreception, where light intensity is transduced to a signal vphoto. Since light intensity is a strictly positive value, the mean intensity of the scene must be subtracted. Since we are interested in motion, it is also advantageous to amplify transient signals. Suppressing dc illumination and enhancing ac components of photoreceptor signals is a common theme in many biological visual systems. In flies, large monopolar cells (LMCs) directly postsynaptic to photoreceptors exhibit transient biphasic impulse responses approximately 40-200 ms in duration [8], [9]. In the frequency domain, this can be seen as a bandpass filtering operation that attenuates dc signals while amplifying signals in the 2-40 Hz range [9], [10].
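The detection pipeline of Fig. 1 (summed radial EMD responses followed by a leaky integrator and a comparator) can be sketched in discrete time as follows; the time constant and threshold below are illustrative, not the chip's values.

```python
import numpy as np

def collision_detector(emd_sum, dt, tau, threshold):
    """Leaky integrator (first-order lowpass with time constant tau)
    followed by a comparator, applied to the summed radial EMD signal."""
    alpha = dt / (tau + dt)          # discrete-time lowpass coefficient
    y = 0.0
    flags = []
    for x in emd_sum:
        y += alpha * (x - y)         # leaky integration of the EMD sum
        flags.append(y > threshold)  # comparator: imminent-collision flag
    return np.array(flags)
```

On an expanding-flow approach, the summed EMD response ramps up, the integrator output crosses the threshold, and the flag asserts before contact.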
In the lateral geniculate nucleus of cats, “lagged” and “non-lagged” cells exhibit transient biphasic impulse responses 200-300 ms in duration and act as bandpass filters amplifying signals in the 1-10 Hz range [11]. This filtering has recently been explained in terms of temporal decorrelation, and can be seen as a way of removing redundant information from the photoreceptor signal before further processing [9], [12]. After this “transient enhancement”, or temporal decorrelation, the signals are delayed using the phase lag of a lowpass filter. While not a true time delay, the lowpass filter matches data from animal experiments and makes the Reichardt EMD equivalent to the oriented spatiotemporal energy filter proposed by Adelson and Bergen [13]. Before correlating the adjacent delayed and non-delayed signals, we apply a saturating static nonlinearity to each channel. Without such a nonlinearity, the delay-and-correlate EMD exhibits a quadratic dependence on image contrast. In fly tangential neurons, motion responses show a quadratic dependence only at very low contrasts, then quickly become largely independent of image contrast for contrasts above 30%. Egelhaaf and Borst proposed the presence of this nonlinearity in the biological EMD to explain this contrast independence [14]. Functionally, it is necessary to prevent high-contrast edges from dominating the summed output of the EMD array. Figure 2: Elaborated delay-and-correlate elementary motion detector (EMD). After correlation, opponent subtraction produces a strong directionally selective signal that is taken as the output of the EMD.
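The EMD of Fig. 2 can be simulated numerically. The sketch below is a behavioral model, not the circuit: simple first-order filters stand in for the LMC bandpass and the delay stage, and tanh provides the saturating nonlinearity; all parameters are illustrative.

```python
import numpy as np

def lowpass(x, alpha):
    """First-order lowpass, alpha = dt/(tau + dt); its phase lag is the
    'delay' of the delay-and-correlate scheme."""
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def bandpass(x, a_slow, a_fast):
    """Crude LMC-like bandpass: fast lowpass minus a slower lowpass."""
    return lowpass(x, a_fast) - lowpass(x, a_slow)

def reichardt_emd(left, right, a_delay=0.05, gain=5.0):
    """Delay-and-correlate EMD with saturating nonlinearity and opponent
    subtraction; output is positive (on average) for left-to-right motion."""
    l = np.tanh(gain * bandpass(left, 0.01, 0.2))   # transient enhancement
    r = np.tanh(gain * bandpass(right, 0.01, 0.2))  # + saturation
    dl, dr = lowpass(l, a_delay), lowpass(r, a_delay)
    return dl * r - dr * l                          # opponent subtraction
```

Feeding the two inputs a drifting sinusoid sampled at two nearby points makes the mean output change sign with the direction of motion, the directional selectivity the text describes.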
Unlike algorithms that find and track features in an image, the delay-and-correlate EMD does not measure true image velocity independent of the spatial structure of the image. However, recent work has shown that for natural scenes, these Reichardt EMDs give reliable estimates of image velocity [15]. This reliability is improved by the addition of LMC bandpass filters and saturating nonlinearities. Experiments using earlier versions of silicon EMDs have demonstrated the ability of delay-and-correlate motion detectors to work at very low signal-to-noise ratios [16]. 3 Integrated Circuit Implementation We adapted the EMD shown in Fig. 2 to a small, low-power CMOS integrated circuit. Fig. 3 shows a schematic of the photoreceptor and LMC bandpass filter. A 35 µm × 35 µm well-substrate photodiode with a diode-connected pMOS load converts the diode photocurrent into a voltage vphoto that is a logarithmic function of light intensity. A pMOS source follower biased by ISF = 700 pA buffers this signal so that the input capacitance of the LMC circuit does not load the photoreceptor. The LMC bandpass filter consists of two operational transconductance amplifiers (OTAs) and three capacitors. The OTAs in the circuit are implemented with pMOS differential pairs using diode-connected transistors for source degeneration for extended linear range (see inset, Fig. 3). The transfer function of the LMC circuit is given by v_LMC(s)/v_in(s) = −A · (τ1 s/Q)(1 − τ1 s/β) / (1 + τ1 s/Q + τ1² s²) (1) where τ0 = C/g_m (2) and β = ((A + 1)(K + 1) − 1)/N ≈ AK/N, if AK ≫ 1 (3) with τ1 = β τ0 (4) and Q = β/(K + N). (5) Figure 3: Schematic of photoreceptor/LMC circuit. Detail of operational transconductance amplifier (OTA, with gm = κIB/2UT) shown in inset. The output signal vLMC is centered around VREF, a dc voltage which was set to 1.0 V.
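The design relations quoted in the text can be checked numerically against the stated component values (C = 70 fF, A = 20, K = 5, N = 5, IB = 35 pA). In the sketch below, κ is an assumed value, and the transconductance of the source-degenerated OTA is taken as gm = (κ/(κ+1)) · IB/(2UT), an assumption consistent with the undegenerated gm = κIB/2UT noted for the plain OTA.

```python
import math

kappa = 0.7            # weak inversion slope (assumed; 0.6-0.9 is typical)
UT = 0.026             # thermal voltage kT/q at room temperature [V]
IB = 35e-12            # lower-OTA bias current [A]
C = 70e-15             # unit capacitance [F]
A, K, N = 20, 5, 5     # capacitor and current ratios from the text

gm = (kappa / (1 + kappa)) * IB / (2 * UT)   # degenerated-OTA gm (assumed form)
tau0 = C / gm                                # tau0 = C/gm
beta = ((A + 1) * (K + 1) - 1) / N           # evaluates to 25, as stated
tau1 = beta * tau0                           # tau1 = beta * tau0
Q = beta / (K + N)                           # evaluates to 2.5, as stated
f1 = 1 / (2 * math.pi * tau1)                # passband center, same order as
print(beta, Q, f1)                           # the ~20 Hz target
```

The capacitor and current ratios alone fix β and Q; the absolute center frequency then scales with IB/C, which is why the passband is tuned by the bias current.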
We sized the capacitors in our circuit to give A = 20 and K = 5 (with C = 70 fF). The transconductance of the lower OTA was set by adjusting its bias current IB: g_m = (κ/(κ + 1)) · IB/(2UT) (6) where κ is the weak inversion slope (typically between 0.6 and 0.9) and UT is the thermal voltage kT/q (approximately 26 mV at room temperature). We set the bias current in the upper OTA five times smaller to achieve N = 5. As we see from (1), the LMC circuit acts as an ac-coupled bandpass filter centered at f1 = 1/2πτ1, with a quality factor Q set to 2.5 by capacitor and current ratios. The circuit also has a zero at βf1, but since β = 25 in our circuit, the zero takes effect outside the passband and thus has little practical effect on the filter. We used a bias current of IB = 35 pA in the lower OTA and 7 pA in the upper OTA to center the passband near 20 Hz, which was chosen because it lies in the range of LMC responses measured in the fly. This LMC circuit represents a significant improvement over a previous silicon EMD design, which used only a first-order highpass filter to block dc illumination [16]. The LMC circuit presented here allows the designer to adjust the center frequency and Q factor to selectively amplify frequencies present in moving images. The LMC circuits from each photoreceptor pass their signals to the delay-and-correlate circuit shown in Fig. 4. The delay is implemented as a first-order lowpass filter. The OTAs in this circuit used two diode-connected transistors in series for extended linear range. The time constant of this filter is given by τLPF = CLPF / gm-LPF (7) Figure 4: Schematic of delay-and-correlate circuit. OTA-based gm-C filters are used as low-pass filters. Subthreshold CMOS Gilbert multipliers are used for correlation.
We used CLPF = 700 fF and set τLPF to around 25 ms, which is in the range of biological motion detectors. This required a bias current of 9 pA for each OTA. We implemented the correlation function using a CMOS Gilbert multiplier operating in subthreshold [17]. The output currents of the multipliers in Fig. 4 can be expressed as: i_outL+ − i_outL− = Imult · tanh(κ(v_delay-L − VREF)/2UT) · tanh(κ(v_LMC-R − VREF)/2UT) (8) i_outR+ − i_outR− = Imult · tanh(κ(v_delay-R − VREF)/2UT) · tanh(κ(v_LMC-L − VREF)/2UT) (9) For small differential input voltages, tanh(x) ≈ x and the circuit acts as a linear multiplier. As the input signals grow larger, the tanh nonlinearity dominates and the circuit acts more like a digital exclusive-or gate. We use this inherent circuit nonlinearity as the desired saturating nonlinearity in our EMD model (see Fig. 2). The preceding LMC circuit provides sufficient gain to ensure that we are usually operating well outside the linear range of the multipliers. Traditional CMOS Gilbert multipliers require that the dc level of the upper differential input be shifted relative to the dc level of the lower differential input. This is required to keep the transistors in saturation. To avoid the cost in chip area, power consumption, and mismatch associated with level shifters, we introduce a novel circuit modification that allows both the upper and lower differential inputs to operate at the same dc level. We lower the well potential of the lower pMOS transistors from VDD to a dc voltage VW (see Fig. 4). This lowered well voltage causes the sources of these transistors to operate at a lower potential, which keeps the upper transistors in saturation. We use VW = 2.5 V in our circuit. (Care must be taken not to make VW too low, as parasitic source-well-substrate pnp transistors can be activated.) Figure 5: EMD pattern on chip. Ultra-wide-angle optics gave the chip a field of view ranging from ±52° to ±74°.
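The multiplier of Eqs. (8)-(9) can be modeled behaviorally. The sketch below reproduces the tanh·tanh characteristic: near-linear for millivolt-scale differential inputs and saturating (XOR-like in sign) for large ones; the bias values are illustrative.

```python
import numpy as np

def gilbert_multiplier(v1, v2, I_mult=1e-9, kappa=0.7, UT=0.026, VREF=1.0):
    """Differential output current of a subthreshold Gilbert multiplier:
    I_mult * tanh(.) * tanh(.) of the two differential inputs (Eq. 8)."""
    return (I_mult
            * np.tanh(kappa * (v1 - VREF) / (2 * UT))
            * np.tanh(kappa * (v2 - VREF) / (2 * UT)))
```

For inputs a few mV from VREF the output scales bilinearly with both inputs; for inputs hundreds of mV away it clips to ±I_mult, which is exactly the saturating nonlinearity the EMD exploits to keep high-contrast edges from dominating.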
The output of the Gilbert multiplier is a differential current. The signals from the left and right correlators are easily subtracted by summing their currents appropriately. Similarly, current summation on two global wires is used to sum the motion signals over the entire EMD array. 4 Experimental Results We fabricated a 16 × 16 EMD array in a 0.5-µm 2-poly, 3-metal standard CMOS process. The 2.24 mm × 2.24 mm die contained a 17 × 17 array of “pixels,” each measuring 100 µm × 100 µm. Each pixel contained a photoreceptor, LMC circuit, lowpass “delay” filter, and four correlators. These correlators were used to implement two independent EMDs: a vertical motion detector connected to the pixel below and a horizontal motion detector connected to the pixel to the right. The output signals from a subset of the EMDs representing radial outward motion were connected to two global wires, giving a differential current signal that was taken off chip on two pins. Fig. 5 shows the EMDs that were summed to produce the global radial motion signal. Diagonally-oriented EMDs were derived from the sum of a horizontal and a vertical EMD. The center 4 × 4 pixels were ignored, as motion near the center of the field of view is typically very small in collision situations. We used custom-built ultra-wide-angle optics to give the chip a field of view ranging from ±52° at the sides to ±74° at the corners. Simulations revealed that a field of view of around ±60° was necessary for reasonable performance using this algorithm [6]. Before testing the array, we characterized an individual LMC circuit configured to have a voltage input vphoto provided from off chip using a function generator. We provided a 1.4 Hz, 100 mVpp square wave and observed the LMC circuit output. As shown in Fig. 6a, the LMC circuit exhibits a transient oscillatory step response similar to its biological counterpart. Using a spectrum analyzer, we measured the transfer function of the circuit (see Fig. 6b).
The LMC circuit acts as a bandpass filter centered at 19 Hz, with a measured Q of 2.3. Figure 6: Measurement of LMC circuit performance. (a) Step response of LMC circuit. (b) Frequency tuning of LMC circuit. The entire chip consumed 140 µW of power. Most of this was consumed by peripheral biasing circuits; the 17 × 17 pixel array used only 5.2 µW (18 nW per pixel). To test the complete collision detection chip, we implemented the leaky integrator (τleak = 50 ms) and comparator from Fig. 1 using off-chip components. In future implementations, these circuits could be built on chip using little power. We tested the chip by mounting it on a small motorized vehicle facing forward with the lens centered 11 cm above the floor. The vehicle traveled in a straight path at 28 cm/s. Fig. 7 shows the output from the leaky integrator as the chip moves across the floor and collides with the center of a 38 cm × 38 cm trash can in our lab. The peak response of the chip occurs approximately 500 ms before contact, which corresponds to a distance of 14 cm. At this point, the edges of the trash can subtend an angle of 54°. After this point, the edges of the can move beyond the chip’s field of view, and the response decays rapidly. The rebound in response observed in the last 100 ms may be due to the chip seeing the expanding shadow cast by its own lens on the side of the can just before contact. 5 Conclusions The response of our chip, which peaks and then collapses before impact, is similar to activity patterns observed in the LGMD neuron in locusts [1] and η neurons in pigeons [2] during simulated collisions. While more complex models positing the measurement of true image velocity and object size have been used to explain this peculiar time course [1], we observe that a simple model integrating the output of a radial EMD array gives qualitatively similar responses. 
We have demonstrated that this model of collision detection can be implemented in a small, low-power, single-chip sensor. Further testing of the chip on mobile platforms should better characterize its performance. Figure 7: Measured output of the collision detection chip. Acknowledgments This work was partially supported by a contract from the Naval Air Warfare Center, China Lake, CA. References [1] F. Gabbiani, H.G. Krapp, and G. Laurent, “Computation of object approach by a wide-field, motion-sensitive neuron,” J. Neurosci. 19:1122-1141, 1999. [2] H. Sun and B.J. Frost, “Computation of different optical variables of looming objects in pigeon nucleus rotundus neurons,” Nature Neurosci. 1:296-303, 1998. [3] A. Borst and S. Bahde, “Visual information processing in the fly’s landing system,” J. Comp. Physiol. A 163:167-173, 1988. [4] L.T. Tammero and M.H. Dickinson, “Collision-avoidance and landing responses are mediated by separate pathways in the fruit fly, Drosophila melanogaster,” J. Exp. Biol. 205:2785-2798, 2002. [5] A.P. Duchon, W.H. Warren, and L.P. Kaelbling, “Ecological robotics,” Adaptive Behavior 6:473-507, 1998. [6] R.R. Harrison, “An algorithm for visual collision detection in real-world scenes,” submitted to NIPS 2003. [7] B. Hassenstein and W. Reichardt, “Systemtheoretische Analyse der Zeit-, Reihenfolgen-, und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus,” Z. Naturforsch. 11b:513-524, 1956. [8] S.B. Laughlin, “Matching coding, circuits, cells, and molecules to signals – general principles of retinal design in the fly’s eye,” Progress in Ret. Eye Research 13:165-196, 1994. [9] J.H. van Hateren, “Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation,” J. Comp. Physiol. A 171:157-170, 1992. [10] J.H. van Hateren, “Processing of natural time series of intensities by the visual system of the blowfly,” Vision Res. 37:3407-3416, 1997. [11] A.B. Saul and A.L.
Humphrey, “Spatial and temporal response properties of lagged and nonlagged cells in cat lateral geniculate nucleus,” J. Neurophysiol. 64:206-224, 1990. [12] D.W. Dong and J.J. Atick, “Temporal decorrelation: a theory of lagged and nonlagged responses in the lateral geniculate nucleus,” Network 6:159-178, 1995. [13] E.H. Adelson and J.R. Bergen, “Spatiotemporal energy models for the perception of motion,” J. Opt. Soc. Am. A 2:284-299, 1985. [14] M. Egelhaaf and A. Borst, “Transient and steady-state response properties of movement detectors,” J. Opt. Soc. Am. A 6:116-127, 1989. [15] R.O. Dror, D.C. O’Carroll, and S.B. Laughlin, “Accuracy of velocity estimation by Reichardt correlators,” J. Opt. Soc. Am. A 18:241-252, 2001. [16] R.R. Harrison and C. Koch, “A robust analog VLSI Reichardt motion sensor,” Analog Integrated Circuits and Signal Processing 24:213-229, 2000. [17] C. Mead, Analog VLSI and Neural Systems, Reading, MA: Addison-Wesley, 1989.
An Improved Scheme for Detection and Labelling in Johansson Displays Claudio Fanti Computational Vision Lab, 136-93 California Institute of Technology Pasadena, CA 91125, USA fanti@vision.caltech.edu Marzia Polito Intel Corporation, SC12-303 2200 Mission College Blvd. Santa Clara, CA 95054, USA marzia.polito@intel.com Pietro Perona Computational Vision Lab, 136-93 California Institute of Technology Pasadena, CA 91125, USA perona@vision.caltech.edu Abstract Consider a number of moving points, where each point is attached to a joint of the human body and projected onto an image plane. Johansson showed that humans can effortlessly detect and recognize the presence of other humans from such displays. This is true even when some of the body points are missing (e.g. because of occlusion) and unrelated clutter points are added to the display. We are interested in replicating this ability in a machine. To this end, we present a labelling and detection scheme in a probabilistic framework. Our method is based on representing the joint probability density of positions and velocities of body points with a graphical model, and using Loopy Belief Propagation to calculate a likely interpretation of the scene. Furthermore, we introduce a global variable representing the body’s centroid. Experiments on one motion-captured sequence suggest that our scheme improves on the accuracy of a previous approach based on triangulated graphical models, especially when very few parts are visible. The improvement is due both to the more general graph structure we use and, more significantly, to the introduction of the centroid variable. 1 Introduction Perceiving and analyzing human motion is a natural and useful task for our visual system. Replicating this ability in machines is one of the most important and difficult goals of machine vision.
As Johansson's experiments show [4], the instantaneous positions and velocities of a few features, such as the joints of the body, provide sufficient information to detect human presence and understand the gist of human activity. This is true even if clutter features are detected in the scene, and if some body-part features are occluded (generalized Johansson display). Selecting features in a frame, as well as computing their velocity across frames, is a task for which good-quality solutions exist in the literature [5], and we will not consider it here. We therefore assume that a number of features that are associated to the body have been detected and their velocities have been computed. We will not assume that all such features have been found, nor that all the features that were detected are associated to the body. We study the interpretation of such a generalized Johansson display, i.e. the detection of the presence of a human in the scene and the labelling of the point features as parts of the body or as clutter. We generalize an approach presented in [3] where the pattern of point positions and velocities associated to human motion was modelled with a triangulated graphical model. We are interested here in exploring the benefit of allowing long-range connections, and therefore loops, in the graph representing correlations between cliques of variables. Furthermore, while [3] obtained translation invariance at the level of individual cliques, we study the possibility of obtaining translation invariance globally by introducing a variable representing the ensemble model of the body. Algorithms based on loopy belief propagation (LBP) are applied to efficiently compute high-likelihood interpretations of the scene, and therefore detection and labelling. 1.1 Notations We use bold-face letters x for random vectors and italic letters x for their sample values. The probability density (or mass) function for a variable x is denoted by fx(x).
When x is a random quantity we write the expectation as Efx[x]. An ordered set I = [i1 . . . iK] used as a vector’s subscript has the obvious meaning yI = [yi1 . . . yiK] or, when enclosed in squared brackets [I]s and applied to a dimension of a matrix V = [vij], it selects the s-dimensional members (specified by the subscript) of the matrix along that dimension; i.e., V[1:2]4[1:2]4 is the 8 × 8 matrix obtained by selecting the first two 4-dimensional rows and columns. 1.2 Problem Definition We identify M = 16 relevant body parts (intuitively corresponding to the main joints). Each marked point on a display (referred to as a detection or observation) is denoted by yi ∈ R4 and is endowed with four values, i.e. yi = [yi,a, yi,b, yi,va, yi,vb]T, corresponding to its horizontal and vertical positions and velocities. Our goal here is to find the most probable assignment of a subset of the detections to the body parts. For each display we call y = [y1T . . . yNT]T the 4N × 1 vector of all observations (on a frame) and we model each single observation as a 4 × 1 random vector yi. In general N ≥ M; however, some or all of the M parts might not be present in a given display. The binary random variable δi indicates whether the ith part has been detected or not (i ∈ {1 . . . M}). For i ∈ {1 . . . M}, a discrete random variable λi taking values in {1 . . . N} is used to further specify the correspondence of body part i to a particular detection λi. Since this makes sense only if the body part is detected, we assume by convention that λi = 0 if δi = 0. A pair h = [λ, δ] is called a labelling hypothesis. Any particular labelling hypothesis determines a partition of the set of indices corresponding to detections into foreground and background: [1 . . . N]T = F ∪ B, where F = [λi : δi = 1, i = 1 . . . M]T and B = [1 . . . N]T \ F. We say that m = |F| parts have been detected and M − m are missing.
Based on the partition induced on λ by δ, we can define two vectors λf = λF and λb = λB, identifying the detections that were assigned to the foreground and to the background respectively. Finally, the set of detections y remains partitioned into the vectors yλf and yλb of the foreground and background detections respectively. The foreground and background detections are assumed to be (conditionally) independent (given h), meaning that their joint distribution factorizes as follows: fy|λδ(y|λδ) = fyλf |λδ(yλf |λδ) · fyλb|λδ(yλb|λδ), where fyλf |λδ(yλf |λδ) is a Gaussian pdf, while fyλb|λδ(yλb|λδ) is the uniform pdf UN−m(A), with A determining the area of the position and velocity hyperplane for each of the N − m background parts. More specifically, when all M parts are observed (δ = [1 . . . 1]T) we have that fyλ[1:M]1 |λδ(yλ[1:M]1|λδ) is N(µ, Σ). When m ≤ M instead, the foreground model N(µf, Σf) is the version of the complete model N(µ, Σ) marginalized over the M − m missing parts. Our goal is to find a hypothesis ˆh = [ˆλ, ˆδ] such that [ˆλ, ˆδ] = arg max_{λ,δ} {Q(λ, δ)} = arg max_{λ,δ} {fyλ|λδ(yλ|λ, δ)}. (1) 2 Learning the Model’s Parameters and Structure In this section we will assume some familiarity with the connections between probability density functions and graphical models. Let us initially assume that the moving human being we want to detect is centrally positioned in the frame. We will then enhance the model in order to accommodate horizontal and vertical translations. In the learning process we want to estimate the parameters of fyλf |λδ(yλf |λδ), where the labelling of the training set is known, N = M (no clutter is present) and δ = [1 . . . 1]T (all parts are visible). A fully connected graphical model would be the most accurate description of the training set; however, the search for the optimal labelling, given a display, would be computationally infeasible.
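The factorized hypothesis likelihood above, with marginalization over missing parts amounting to selecting the corresponding rows and columns of µ and Σ, can be sketched as follows (hypothetical names; parts are assumed to occupy consecutive blocks of four coordinates, and at least one part is observed).

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log density of a multivariate normal, via slogdet and solve."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    k = len(x)
    return -0.5 * (k * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(cov, d))

def hypothesis_log_likelihood(y, lam, delta, mu, Sigma, A):
    """log f(y | lambda, delta): Gaussian over foreground detections
    (the body model marginalized to the observed parts) times a uniform
    term 1/A per clutter detection.
    y: N x 4 detections; mu: 4M mean; Sigma: 4M x 4M covariance."""
    parts = np.flatnonzero(delta)                     # observed body parts
    fg = lam[parts]                                   # their detections
    idx = np.concatenate([np.arange(4 * i, 4 * i + 4) for i in parts])
    mu_f = mu[idx]                                    # marginalization is
    Sigma_f = Sigma[np.ix_(idx, idx)]                 # row/column selection
    ll = gaussian_logpdf(y[fg].ravel(), mu_f, Sigma_f)
    n_bg = len(y) - len(fg)                           # clutter detections
    return ll + n_bg * np.log(1.0 / A)
```

Maximizing this score over all (λ, δ) is exactly the combinatorial problem of Eq. (1), which is why the paper resorts to graphical-model structure and belief propagation rather than enumeration.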
Additionally, by Occam’s razor, such model might not generalize as well as a simpler one. It is intuitive to think that some (conditional) independencies between the yi’s hold. We learn the model structure from the data, as well as the parameters. To limit the computational cost and to hope in a better generalizing model, we put an upper bound on the fan-in (number of incoming edges) of the nodes. In order to make the trade-offbetween complexity and likelihood explicit, we adopt the BIC (Bayesian Information Criterion) score. We recall that the BIC score is consistent, and that since the probability distribution factorizes family-wise, the score decomposes additively. An exhaustive search among graphs is infeasible. We therefore attempt to determine the highest scoring graph by mean of a greedy hillclimbing algorithm, with random restarts. Specifically, at each step the algorithm chooses the elementary operation (among adding, removing or inverting an edge of the graph) that results in the highest increase for the score. To prevent getting stuck in local maxima, we randomly restart a number of times once we cannot get any score improvements, and then we pick the graph achieving the highest score overall. We finally obtain our model by retaining the associated maximum likelihood parameters. As opposed to previous approaches [3], no decomposability of the graph is imposed, and exact belief propagation methods that pass through the construction of a junction tree are not applicable. When the junction property is satisfied, the maximum spanning tree algorithm allows an efficient construction of the junction tree. The tree with the most populated separators between cliques is produced in linear time. Here, we propose instead a construction of the junction graph that (greedily) attempts to minimize the complexity of the induced subgraph associated with each variable. Figure 1: Graphical Models. 
Light shaded vertices represent variables associated to different body parts, edges indicate conditional (in)dependencies, following the standard Graphical Models conventions. [Left] Hand made decomposable graph from [3], used for comparison. [Right] Model learned from data (sequence W1, see section 4), with max fan-in constrain of 2. 3 Detection and Labelling with Expectation Maximization One could solve the maximization problem (1) by means of Belief Propagation (BP), however, we require our system to be invariant with respect to translations in the first two coordinates (position) of the observations. To achieve this we introduce a new parameter γ = [γa, γb, 0, 0]T that represents the reference system’s origin, which we now allow to be different than zero. By introducing the centered observations ¯yλ = yλ −γ our model becomes f¯yλ|γh(¯y|γh) = f¯yλf |γλδ(¯yλf |γλδ) · f¯yλb|λδ(¯yλb|λδ). where in the second member the first factor is now N(¯µf, ¯Σf) while the second factor remains UN−m( ¯A). We finally use an EM-like procedure to estimate γ obtaining, as a by-product, the maximizing hypothesis h we are after. 3.1 E-Step As the hypothesis h is unobservable we replace the complete-data log-likelihood, with its expected value ˆLc( ˜f, h) = E ˜ fh[log f¯yλ|γ(¯yλ|γ)] (2) where the expectation is taken with respect to a generic distribution ˜fh(h). It’s known that the E-step maximizing solution is ˜f (k) h (h) ∝f¯yλ|γ(¯yλ|γ(k−1)). Since we will not be able to compute such distribution for all the assignments h of h, we will make a so-called hard assignment i.e. we will approximate f¯yλ|γ(¯yλ|γ(k−1)) with 1(h −h(k)), where h(k) = arg max h {f¯yλ|γ(¯yλ|γ(k−1))}. Given the current estimate γ(k−1) of γ, the hypothesis h(k) can be determined by maximizing the (discrete) potential Π(h) = log f¯yλf |γh(¯yλf |γ(k−1)h) · fyλb|h(yλb|h) with a Max-Sum Loopy Belief Propagation (LBP) on the associated junction graph. 
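The hard-assignment EM procedure just described alternates an argmax over hypotheses (E-step) with a re-estimate of γ (M-step). A minimal skeleton of that loop follows; both step functions are illustrative toy stand-ins, not the paper's BP-based machinery:

```python
# Hard EM: the E-step picks a single maximizing hypothesis given the current
# gamma, the M-step re-estimates gamma given that hypothesis, and the loop
# stops at a fixed point.

def hard_em(e_step, m_step, gamma0, iters=20):
    gamma, h = gamma0, None
    for _ in range(iters):
        h_new = e_step(gamma)        # argmax_h f(y | gamma, h), via (loopy) BP
        gamma_new = m_step(h_new)    # argmax_gamma of the complete-data term
        if h_new == h and gamma_new == gamma:
            break                    # fixed point reached
        h, gamma = h_new, gamma_new
    return gamma, h

# toy problem: observations near 5.0; a "hypothesis" is which subset of the
# observations is foreground (within distance 1 of the current gamma)
obs = [4.75, 5.25, 9.0]
e_step = lambda g: [i for i, y in enumerate(obs) if abs(y - g) < 1.0]
m_step = lambda h: sum(obs[i] for i in h) / len(h) if h else 0.0
print(hard_em(e_step, m_step, gamma0=4.0))  # -> (5.0, [0, 1])
```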
The potential above decomposes into a number of factors (or cliques). With the exception of root nodes, each family gives rise to a factor that we initialize to the family’s conditional probability mass function (pmf). For a root node, its marginal pmf is multiplied into one of its children. If LBP converges and the determined h(k) maximizes the expected log-likelihood ˆLc( ˜f (k), h(k−1)), then we are guaranteed (otherwise there is just reasonable1 hope) that EM will converge to the sought-after ML estimate of γ. 3.2 M-Step In the M-Step we maximize (2) with respect to γ, holding h = h(k), i.e. we compute γ(k+1) = arg max γ {log f¯yλ|γ(¯yλ(k)|γ)} (3) The maximizing γ can be obtained from 0 = ∇γ[(yλ −¯µ −Jγ)T ¯Σ−1(yλ −¯µ −Jγ)] (4) where J4 = diag(1, 1, 0, 0) and J = [ J4 J4 · · · J4 | {z }]T m . The solution involves the inversion of the matrix ¯Σ as a whole which is numerically instable given the minimal variance in the vertical component of the motion. We therefore approximate it with a block-diagonal version ˜Σ with ˜Σ[i]4[i]4 = I4 det(¯Σ[i]4[i]4) det(¯Σ) . (5) It’s easy to see that, for appropriate αi’s, γ(k+1) = J4 X δi=1 [αi(yλi −¯µi)] . (6) 3.3 Detection Criteria Let σ be a (discrete) indicator random variable for the event that the Johansson’s display represents a scene with a human body. So far, in our discussion we have implicitly assumed that σ = 1. In the following section we will describe a way for determining whether a human body is actually present (detection). By defining R(y) = fσ|y(1|y) fσ|y(0|y), we claim that a human body is present whenever R(y) > 1. By Bayes rule, R(y) can be rewritten as R(y) = fy|σ(y|1) fy|σ(y|0) · fσ(1) fσ(0) = fy|σ(y|1) fy|σ(y|0) · Rp 1Experimentally it is observed that when LBP converges, the determined maximum is either global or, although local, the potential’s value is very close to its global optimum. 
If the potential is increased (not necessarily maximized) by LBP, that suffices for EM to converge where Rp = P [σ=1] P [σ=0] is the contribution to R(y) due to the prior on σ. In order to compute the R(y) we marginalize over the labelling hypothesis h. When σ = 0, the only admissible hypotheses must have δ = 0T (no body parts are present) which translates into fδ|σ(δ|σ) = P[δ = δ|σ = 0] = 1k(δ −0T ). Also, fλ|δσ(λ|δ1) = N −N as no labelling is more likely than any other, before we have seen the detections. All N detections are labelled by λ as background and their conditional density is UN(A). Therefore, we have fy|σ(y|0) = 1 AN 1 N N where the summation is over the λ, δ compatible with σ = 0. When σ = 1, we have fδ|σ(δ|1) = P[δ = δ] = 2−M as we assume that each body part appears (or not) in a given display with probability 1 2, independently of all other parts. Also, fλ|δσ(λ|δ1) = N −N as before and therefore we can write fy|σ(y|1) = X λ,δ £ fy|λδσ(y|λδ1) ¤ 1 N N 1 2M where the summation is over the λ, δ compatible with σ = 1. We conclude that R(y) = Rp fy|σ(y|1) fy|σ(y|0) = Rp AN 2M X λ,δ £ fy|λδσ(y|λδ1) ¤ When implementing Loopy Belief Propagation, on a finite-precision computational architecture using Gaussian models, we are unable to perform marginalization as we can only represent log-probabilities. However, we will assume that the ML labelling ˆhσ is predominant over all other labelling, so that in the estimate of σ we can approximate marginalization with maximization and therefore write R(y) ≈Rp AN 2M fy|λδσ(y|ˆλˆδ1) where ˆλ, ˆδ is the maximizing hypothesis when σ = 1. 4 Experimental Results In our experiment we use two sequences W1 and W22 of about 7,000 frames each, representing a human subject walking back and forth along a straight line. Both sequences were acquired and labelled with a motion capture system. Each pair of consecutive frames is used to produce a Johannson display with positions and velocities. 
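Returning to the detection rule: once marginalization is approximated by maximization, the log-space test reduces to a handful of terms, log R(y) ≈ log Rp + N log A − M log 2 + log f(y | ĥ). A sketch, where `log_f_hat` (the log-density of the observations under the best hypothesis) is a hypothetical input:

```python
import math

# Declare a person present when log R(y) > 0; with a uniform prior on sigma,
# log_Rp = 0. All inputs below are illustrative.

def log_R(log_f_hat, N, M, A, log_Rp=0.0):
    return log_Rp + N * math.log(A) - M * math.log(2.0) + log_f_hat

def person_present(log_f_hat, N, M, A):
    return log_R(log_f_hat, N, M, A) > 0.0

# e.g. 20 detections, 16 parts, area A = 1000: the uniform-background
# alternative is so diffuse that even a modest Gaussian fit wins
print(person_present(log_f_hat=-100.0, N=20, M=16, A=1000.0))  # -> True
```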
W1 is used to learn the probabilistic model’s parameters and structure. A random sample of 700 frames from W2 is then used to test our algorithm. We evaluate the performance of our technique and compare it with the hand-made, decomposable graphical model of [3]. There, translation invariance is achieved by using relative positions within each clique. We refer to this as the local version of translation invariance (as opposed to the global version proposed in this paper). We first explore the benefits of just relaxing the decomposability constraint, while still implementing translation invariance locally. The lower two dashed curves of Figure 2 already show a noticeable improvement, especially when fewer body parts are visible. However, the biggest increase in performance is brought by global translation invariance, as is evident from the upper two curves of Figure 2. (Both sequences are available at http://www.vision.caltech.edu/fanti.) Figure 2: Detection and Labeling Performance. [Left] Labeling: On each display from the sequence W2, we randomly occlude between 3 and 10 parts and superimpose 30 randomly positioned clutter points. For any given number of visible parts, the four curves represent the percentage of correctly labeled parts out of the total labels in all 700 displays of W2. Each curve reflects a combination of either Local or Global translation invariance and a Decomposable or Loopy graph. [Right] Detection: For the same four combinations we plot Pdetection (Prob.
of detecting a person when the display shows one) for a fixed Pfalse-alarm = 10% (the probability of stating that a person is present when only 30 points of clutter are presented). Again, we vary the number of visible points between 4, 7 and 11. As with the dynamic programming algorithm of [3], the Loopy Belief Propagation algorithm runs in O(MN^3); however, 4 or 5 more iterations are needed for it to converge. Furthermore, to avoid local maxima, we restart the algorithm at most 10 times using a randomly generated schedule to pass the messages. Finally, when global invariance is used, we re-initialize γ up to 10 times, each time randomly picking a value within a different region of the display. On average, about 5 restarts for γ, 5 different schedules and 3 iterations of EM suffice to achieve a labeling with a likelihood comparable to that of the ground-truth labeling. 5 Discussion, Conclusions and Future Work Generalizing our model from decomposable [3] to loopy produced a gain in performance. Further improvement would be expected when allowing larger cliques in the junction graph, at a considerable computational cost. A more substantial improvement was obtained by adding a global variable modeling the centroid of the figure. Taking [3] as a reference, there is about a 10x increase in computational cost when we either allow a loopy graph or account for translations with the centroid. When both enhancements are present, the cost increase is between 100x and 1,000x. We believe that the combination of these two techniques points in the right direction. The local translation-invariance model required the computation of relative positions within the same clique. These could not be computed in the majority of cliques when a large number of body parts were occluded, even with the more accurate loopy graphical model. Moreover, the introduction of the centroid variable is also valuable in light of a possible extension of the algorithm to multi-frame tracking.
We should also note that the structure learning technique is sub-optimal due to the greediness of the algorithm. In addition, the model parameters and structure are estimated under the hypothesis of no occlusion or clutter. An algorithm that considers these two phenomena in the learning phase could likely achieve better results in realistic situations, when clutter and occlusion are significant. Finally, the step towards using displays directly obtained from gray-level image sequences remains a challenge that will be the goal of future work. 5.1 Acknowledgements We are very grateful to Max Welling, who first proposed the idea of using LBP to solve for the optimal labelling in a 2001 Research Note, and who gave many useful suggestions. Sequences W1 and W2 used in the experiments were collected by L. Goncalves and E. di Bernando. This work was partially funded by the NSF Center for Neuromorphic Systems Engineering grant EEC-9402726 and by the ONR MURI grant N00014-01-1-0890. References [1] Y. Song, L. Goncalves and P. Perona, “Learning Probabilistic Structure for Human Motion Detection”, Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol II, pages 771-777, Kauai, Hawaii, December 2001. [2] Y. Song, L. Goncalves and P. Perona, “Unsupervised Learning of Human Motion Models”, Advances in Neural Information Processing Systems 14, Vancouver, Canada, December 2001. [3] Y. Song, L. Goncalves, and P. Perona, “Monocular perception of biological motion clutter and partial occlusion”, Proc. of 6th European Conference on Computer Vision, vol II, pages 719-733, Dublin, Ireland, June/July 2000. [4] G. Johansson, “Visual Perception of Biological Motion and a Model For Its Analysis”, Perception and Psychophysics 14, 201-211, 1973. [5] C. Tomasi and T. Kanade, “Detection and tracking of point features”, Tech. Rep. CMU-CS-91-132, Carnegie Mellon University, 1991. [6] S.M. Aji and R.J. McEliece, “The generalized distributive law”, IEEE Trans. Info.
Theory, 46:325-343, March 2000. [7] P. Giudici and R. Castelo, “Improving Markov Chain Monte Carlo Model Search for Data Mining”, Machine Learning 50(1-2), 127-158, 2003. [8] W.T. Freeman and Y. Weiss, “On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs”, IEEE Transactions on Information Theory 47(2), pages 723-735, 2001. [9] J.S. Yedidia, W.T. Freeman and Y. Weiss, “Bethe free energy, Kikuchi approximations and belief propagation algorithms”, Advances in Neural Information Processing Systems 13, Vancouver, Canada, December 2000. [10] D. Chickering, “Optimal Structure Identification with Greedy Search”, Journal of Machine Learning Research 3, pages 507-554, 2002.
1-norm Support Vector Machines Ji Zhu, Saharon Rosset, Trevor Hastie, Rob Tibshirani Department of Statistics Stanford University Stanford, CA 94305 {jzhu,saharon,hastie,tibs}@stat.stanford.edu Abstract The standard 2-norm SVM is known for its good performance in two-class classification. In this paper, we consider the 1-norm SVM. We argue that the 1-norm SVM may have some advantage over the standard 2-norm SVM, especially when there are redundant noise features. We also propose an efficient algorithm that computes the whole solution path of the 1-norm SVM, hence facilitating adaptive selection of the tuning parameter for the 1-norm SVM. 1 Introduction In standard two-class classification problems, we are given a set of training data (x1, y1), . . . , (xn, yn), where the input xi ∈ R^p and the output yi ∈ {1, −1} is binary. We wish to find a classification rule from the training data, so that when given a new input x, we can assign a class y from {1, −1} to it. To handle this problem, we consider the 1-norm support vector machine (SVM):

\min_{\beta_0, \beta} \; \sum_{i=1}^n \Big[ 1 - y_i \Big( \beta_0 + \sum_{j=1}^q \beta_j h_j(x_i) \Big) \Big]_+    (1)
\text{s.t.} \quad \|\beta\|_1 = |\beta_1| + \cdots + |\beta_q| \le s,    (2)

where D = {h1(x), . . . , hq(x)} is a dictionary of basis functions and s is a tuning parameter. The solution is denoted as β̂0(s) and β̂(s); the fitted model is

\hat f(x) = \hat\beta_0 + \sum_{j=1}^q \hat\beta_j h_j(x).    (3)

The classification rule is given by sign[f̂(x)]. The 1-norm SVM has been successfully used in [1] and [9]. We argue in this paper that the 1-norm SVM may have some advantage over the standard 2-norm SVM, especially when there are redundant noise features. To get a good fitted model f̂(x) that performs well on future data, we also need to select an appropriate tuning parameter s. In practice, people usually pre-specify a finite set of values for s that covers a wide range, then either use a separate validation data set or use cross-validation to select a value for s that gives the best performance among the given set.
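For a fixed s, problem (1)-(2) can be handed to any linear-programming solver. For intuition, the sketch below instead minimizes the equivalent penalized objective (hinge loss plus λ‖β‖₁) by plain subgradient descent on a hypothetical 1-D toy problem; this is only an illustrative baseline, not the path algorithm this paper develops:

```python
# Subgradient descent on sum_i [1 - y_i (b0 + b1 x)]_+ + lam * |b1|.
# Toy 1-D data; all step sizes and iteration counts are arbitrary choices.

def fit_l1_svm(xs, ys, lam=0.1, lr=0.05, iters=500):
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            if 1.0 - y * (b0 + b1 * x) > 0.0:   # hinge-loss subgradient
                g0 -= y
                g1 -= y * x
        g1 += lam * (1.0 if b1 > 0 else -1.0 if b1 < 0 else 0.0)  # L1 term
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

xs, ys = [-2.0, -1.0, 1.0, 2.0], [-1, -1, 1, 1]
b0, b1 = fit_l1_svm(xs, ys)
preds = [1 if b0 + b1 * x > 0 else -1 for x in xs]
print(preds == ys)  # the toy data is separated
```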
In this paper, we illustrate that the solution path β̂(s) is piece-wise linear as a function of s (in the R^q space); we also propose an efficient algorithm to compute the exact whole solution path {β̂(s), 0 ≤ s ≤ ∞}, which helps us understand how the solution changes with s and facilitates the adaptive selection of the tuning parameter s. Under some mild assumptions, we show that the computational cost to compute the whole solution path β̂(s) is O(nq min(n, q)^2) in the worst case and O(nq) in the best case. Before delving into the technical details, we illustrate the concept of piece-wise linearity of the solution path β̂(s) with a simple example. We generate 10 training data in each of two classes. The first class has two standard normal independent inputs x1, x2. The second class also has two standard normal independent inputs, but conditioned on 4.5 ≤ x1^2 + x2^2 ≤ 8. The dictionary of basis functions is D = {√2 x1, √2 x2, √2 x1 x2, x1^2, x2^2}. The solution path β̂(s) as a function of s is shown in Figure 1. Any segment between two adjacent vertical lines is linear. Hence the right derivative of β̂(s) with respect to s is piece-wise constant (in R^q). The two solid paths are for x1^2 and x2^2, which are the two relevant features. Figure 1: The solution path β̂(s) as a function of s. In section 2, we motivate why we are interested in the 1-norm SVM. In section 3, we describe the algorithm that computes the whole solution path β̂(s). In section 4, we show some numerical results on both simulation data and real world data. 2 Regularized support vector machines The standard 2-norm SVM is equivalent to fitting a model via

\min_{\beta_0, \beta} \; \sum_{i=1}^n \Big[ 1 - y_i \Big( \beta_0 + \sum_{j=1}^q \beta_j h_j(x_i) \Big) \Big]_+ + \lambda \|\beta\|_2^2,    (4)

where λ is a tuning parameter. In practice, people usually choose the hj(x) to be the basis functions of a reproducing kernel Hilbert space.
Then a kernel trick allows the dimension of the transformed feature space to be very large, even infinite in some cases (i.e. q = ∞), without causing extra computational burden ([2] and [12]). In this paper, however, we will concentrate on the basis representation (3) rather than a kernel representation. Notice that (4) has the form loss + penalty, and λ is the tuning parameter that controls the tradeoff between loss and penalty. The loss (1 − yf)+ is called the hinge loss, and the penalty is called the ridge penalty. The idea of penalizing by the sum-of-squares of the parameters is also used in neural networks, where it is known as weight decay. The ridge penalty shrinks the fitted coefficients β̂ towards zero. It is well known that this shrinkage has the effect of controlling the variances of β̂, hence possibly improving the fitted model’s prediction accuracy, especially when there are many highly correlated features [6]. So from a statistical function estimation point of view, the ridge penalty could possibly explain the success of the SVM ([6] and [12]). On the other hand, computational learning theory has associated the good performance of the SVM with its margin-maximizing property [11], a property of the hinge loss. [8] makes some effort to build a connection between these two different views. In this paper, we replace the ridge penalty in (4) with the L1-norm of β, i.e. the lasso penalty [10], and consider the 1-norm SVM problem:

\min_{\beta_0, \beta} \; \sum_{i=1}^n \Big[ 1 - y_i \Big( \beta_0 + \sum_{j=1}^q \beta_j h_j(x_i) \Big) \Big]_+ + \lambda \|\beta\|_1,    (5)

which is an equivalent Lagrange version of the optimization problem (1)-(2). The lasso penalty was first proposed in [10] for regression problems, where the response y is continuous rather than categorical. It has also been used in [1] and [9] for classification problems under the framework of SVMs.
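The contrast between the ridge penalty in (4) and the lasso penalty in (5) is sharpest in the orthonormal-design special case, where ridge rescales each least-squares coefficient by 1/(1+λ) while the lasso soft-thresholds it. A quick illustration (these are standard textbook identities, not results from this paper):

```python
# Ridge shrinks every coefficient but never to exactly zero;
# the lasso's soft-thresholding sets small coefficients exactly to zero.

def ridge(beta_ols, lam):
    return [b / (1.0 + lam) for b in beta_ols]

def lasso(beta_ols, lam):
    # soft-thresholding: sign(b) * max(|b| - lam, 0)
    return [(1 if b > 0 else -1) * max(abs(b) - lam, 0.0) for b in beta_ols]

beta = [3.0, -0.4, 0.05]
print(ridge(beta, lam=0.5))  # every coefficient shrunk, none exactly zero
print(lasso(beta, lam=0.5))  # -> [2.5, -0.0, 0.0]: small ones dropped
```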
Similar to the ridge penalty, the lasso penalty also shrinks the fitted coefficients β̂ towards zero, hence (5) also benefits from the reduction in the fitted coefficients’ variances. Another property of the lasso penalty is that, because of its L1 nature, making λ sufficiently large, or equivalently s sufficiently small, will cause some of the coefficients β̂j to be exactly zero. For example, when s = 1 in Figure 1, only three fitted coefficients are non-zero. Thus the lasso penalty performs a kind of continuous feature selection, while this is not the case for the ridge penalty: in (4), none of the β̂j will be equal to zero. It is interesting to note that the ridge penalty corresponds to a Gaussian prior for the βj, while the lasso penalty corresponds to a double-exponential prior. The double-exponential density has heavier tails than the Gaussian density. This reflects the greater tendency of the lasso to produce some large fitted coefficients and leave others at 0, especially in high-dimensional problems. Recently, [3] consider a situation where we have a small number of training data, e.g. n = 100, and a large number of basis functions, e.g. q = 10,000. [3] argue that in the sparse scenario, i.e. when only a small number of the true coefficients βj are nonzero, the lasso penalty works better than the ridge penalty; while in the non-sparse scenario, e.g. when the true coefficients βj have a Gaussian distribution, neither the lasso penalty nor the ridge penalty will fit the coefficients well, since there is too little data from which to estimate these non-zero coefficients. This is the curse of dimensionality taking its toll. Based on these observations, [3] further propose the “bet on sparsity” principle for high-dimensional problems, which encourages using the lasso penalty. 3 Algorithm Section 2 gave the motivation for our interest in the 1-norm SVM.
To solve the 1-norm SVM for a fixed value of s, we can transform (1)-(2) into a linear programming problem and use standard software packages; but to get a good fitted model f̂(x) that performs well on future data, we need to select an appropriate value for the tuning parameter s. In this section, we propose an efficient algorithm that computes the whole solution path β̂(s), hence facilitating adaptive selection of s. 3.1 Piece-wise linearity If we follow the solution path β̂(s) of (1)-(2) as s increases, we will notice that since both Σ_i (1 − yi f̂i)+ and ‖β‖1 are piece-wise linear, the Karush-Kuhn-Tucker conditions will not change as s increases unless a residual (1 − yi f̂i) changes from non-zero to zero, or a fitted coefficient β̂j(s) changes from non-zero to zero; these correspond to the non-smooth points of Σ_i (1 − yi f̂i)+ and ‖β‖1. This implies that the derivative of β̂(s) with respect to s is piece-wise constant, because when the Karush-Kuhn-Tucker conditions do not change, the derivative of β̂(s) will not change either. Hence the whole solution path β̂(s) is piece-wise linear. See [13] for details. Thus to compute the whole solution path β̂(s), all we need to do is find the joints, i.e. the asterisk points in Figure 1, on this piece-wise linear path, then use straight lines to interpolate them; or, equivalently, start at β̂(0) = 0, find the right derivative of β̂(s), let s increase, and only change the derivative when β̂(s) reaches a joint. 3.2 Initial solution (i.e. s = 0) The following notation is used. Let V = {j : β̂j(s) ≠ 0}, E = {i : 1 − yi f̂i = 0}, L = {i : 1 − yi f̂i > 0}, and let u denote the right derivative of β̂V(s), with ‖u‖1 = 1, where β̂V(s) denotes the components of β̂(s) with indices in V. Without loss of generality, we assume #{yi = 1} ≥ #{yi = −1}; then β̂0(0) = 1, β̂j(0) = 0. To compute the path that β̂(s) follows, we need to compute the derivative of β̂(s) at 0.
We consider a modified problem:

\min_{\beta_0, \beta} \; \sum_{y_i = 1} (1 - y_i f_i)_+ + \sum_{y_i = -1} (1 - y_i f_i)    (6)
\text{s.t.} \quad \|\beta\|_1 \le \Delta s, \qquad f_i = \beta_0 + \sum_{j=1}^q \beta_j h_j(x_i).    (7)

Notice that if yi = 1, the loss is still (1 − yi fi)+; but if yi = −1, the loss becomes (1 − yi fi). In this setup, the derivative of β̂(Δs) with respect to Δs is the same no matter what value Δs takes, and one can show that it coincides with the right derivative of β̂(s) when s is sufficiently small. Hence this setup helps us find the initial derivative u of β̂(s). Solving (6)-(7), which can be transformed into a simple linear programming problem, we get the initial V, E and L. |V| should be equal to |E|. We also have:

\begin{pmatrix} \hat\beta_0(\Delta s) \\ \hat\beta_{\mathcal V}(\Delta s) \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \Delta s \cdot \begin{pmatrix} u_0 \\ u \end{pmatrix}.    (8)

Δs starts at 0 and increases. 3.3 Main algorithm The main algorithm that computes the whole solution path β̂(s) proceeds as follows: 1. Increase Δs until one of the following two events happens: • A training point hits E, i.e. 1 − yi fi ≠ 0 becomes 1 − yi fi = 0 for some i. • A basis function leaves V, i.e. β̂j ≠ 0 becomes β̂j = 0 for some j. Let the current β̂0, β̂ and s be denoted by β̂0^old, β̂^old and s^old. 2. For each j* ∉ V, we solve:

u_0 + \sum_{j \in \mathcal V} u_j h_j(x_i) + u_{j^*} h_{j^*}(x_i) = 0 \quad \text{for } i \in \mathcal E, \qquad \sum_{j \in \mathcal V} \mathrm{sign}(\hat\beta_j^{old}) u_j + |u_{j^*}| = 1,    (9)

where u0, the uj and uj* are the unknowns. We then compute:

\frac{\Delta \mathrm{loss}_{j^*}}{\Delta s} = -\sum_{i \in \mathcal L} y_i \Big( u_0 + \sum_{j \in \mathcal V} u_j h_j(x_i) + u_{j^*} h_{j^*}(x_i) \Big).    (10)

3. For each i′ ∈ E, we solve:

u_0 + \sum_{j \in \mathcal V} u_j h_j(x_i) = 0 \quad \text{for } i \in \mathcal E \setminus \{i'\}, \qquad \sum_{j \in \mathcal V} \mathrm{sign}(\hat\beta_j^{old}) u_j = 1,    (11)

where u0 and the uj are the unknowns. We then compute:

\frac{\Delta \mathrm{loss}_{i'}}{\Delta s} = -\sum_{i \in \mathcal L} y_i \Big( u_0 + \sum_{j \in \mathcal V} u_j h_j(x_i) \Big).    (12)

4. Compare the computed values of Δloss/Δs from step 2 and step 3. There are q − |V| + |E| = q + 1 such values. Choose the smallest negative Δloss/Δs. Hence, • If the smallest Δloss/Δs is non-negative, the algorithm terminates; else • If the smallest negative Δloss/Δs corresponds to a j* in step 2, we update

\mathcal V \leftarrow \mathcal V \cup \{j^*\}, \qquad u \leftarrow \begin{pmatrix} u \\ u_{j^*} \end{pmatrix}.    (13)

• If the smallest negative Δloss/Δs corresponds to an i′ in step 3, we update u and, if necessary, E ← E \ {i′}, L ← L ∪ {i′}.
(14) In either of the last two cases, β̂(s) changes as:

\begin{pmatrix} \hat\beta_0(s^{old} + \Delta s) \\ \hat\beta_{\mathcal V}(s^{old} + \Delta s) \end{pmatrix} = \begin{pmatrix} \hat\beta_0^{old} \\ \hat\beta_{\mathcal V}^{old} \end{pmatrix} + \Delta s \cdot \begin{pmatrix} u_0 \\ u \end{pmatrix},    (15)

and we go back to step 1. In the end, we get a path β̂(s), which is piece-wise linear. 3.4 Remarks Due to the page limit, we omit the proof that this algorithm does indeed give the exact whole solution path β̂(s) of (1)-(2) (see [13] for the detailed proof). Instead, we briefly explain what each step of the algorithm does. Step 1 indicates that β̂(s) reaches a joint on the solution path, and the right derivative of β̂(s) needs to change, when either a residual (1 − yi f̂i) changes from non-zero to zero or the coefficient β̂j(s) of a basis function changes from non-zero to zero as s increases. Then there are two possible types of action the algorithm can take: (1) add a basis function into V, or (2) remove a point from E. Step 2 computes the possible right derivative of β̂(s) if each basis function hj*(x) were added into V. Step 3 computes the possible right derivative of β̂(s) if each point i′ were removed from E. The possible right derivative of β̂(s) (determined by either (9) or (11)) is such that the training points in E are kept in E as s increases, until the next joint (step 1) occurs. Δloss/Δs indicates how fast the loss will decrease if β̂(s) changes according to u. Step 4 takes the action corresponding to the smallest negative Δloss/Δs. When the loss cannot be decreased, the algorithm terminates.
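Steps 2 and 3 solve linear systems that differ by a single row and column from one joint to the next; the cost analysis in section 3.5 exploits this via rank-one Sherman-Morrison updates, (A + uvᵀ)⁻¹ = A⁻¹ − A⁻¹uvᵀA⁻¹ / (1 + vᵀA⁻¹u), which refresh an inverse in O(m²) rather than O(m³). A small pure-Python check of the identity (illustrative only, not the paper's implementation):

```python
# Rank-one update of a matrix inverse via the Sherman-Morrison formula.

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def sherman_morrison(Ainv, u, v):
    Au = matvec(Ainv, u)                  # A^{-1} u
    vA = matvec(list(zip(*Ainv)), v)      # v^T A^{-1} (as a row vector)
    denom = 1.0 + sum(vi * ai for vi, ai in zip(v, Au))
    n = len(Ainv)
    return [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

# A = I, u = v = e0  =>  A + u v^T = diag(2, 1), whose inverse is diag(0.5, 1)
Ainv = [[1.0, 0.0], [0.0, 1.0]]
print(sherman_morrison(Ainv, [1.0, 0.0], [1.0, 0.0]))  # -> [[0.5, 0.0], [0.0, 1.0]]
```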
Table 1: Simulation results of 1-norm and 2-norm SVM

                        Test Error (SE)
  Simulation          1-norm         2-norm       No Penalty    |D|   # Joints
  1 No noise input    0.073 (0.010)  0.08 (0.02)  0.08 (0.01)     5    94 (13)
  2 2 noise inputs    0.074 (0.014)  0.10 (0.02)  0.12 (0.03)    14   149 (20)
  3 4 noise inputs    0.074 (0.009)  0.13 (0.03)  0.20 (0.05)    27   225 (30)
  4 6 noise inputs    0.082 (0.009)  0.15 (0.03)  0.22 (0.06)    44   374 (52)
  5 8 noise inputs    0.084 (0.011)  0.18 (0.03)  0.22 (0.06)    65   499 (67)

3.5 Computational cost We have proposed an algorithm that computes the whole solution path β̂(s). A natural question is then: what is the computational cost of this algorithm? Suppose |E| = m at a joint on the piece-wise linear solution path; then it takes O(qm^2) to compute step 2 and step 3 of the algorithm through the Sherman-Morrison updating formula. If we assume the training data are separable by the dictionary D, then all the training data eventually have loss (1 − yi f̂i)+ equal to zero. Hence it is reasonable to assume that the number of joints on the piece-wise linear solution path is O(n). Since the maximum value of m is min(n, q) and the minimum value of m is 1, the worst-case computational cost is O(nq min(n, q)^2) and the best-case cost is O(nq). Notice that this is a rough calculation of the computational cost under some mild assumptions. Simulation results (section 4) actually indicate that the number of joints tends to be O(min(n, q)). 4 Numerical results In this section, we use both simulation and real data results to illustrate the 1-norm SVM. 4.1 Simulation results The data generation mechanism is the same as the one described in section 1, except that we generate 50 training data in each of two classes, and, to make harder problems, we sequentially augment the inputs with an additional two, four, six and eight standard normal noise inputs. Hence the second class almost completely surrounds the first, like the skin surrounding the orange, in a two-dimensional subspace.
The Bayes error rate for this problem is 0.0435, irrespective of dimension. In the original input space, a hyperplane cannot separate the classes; we use an enlarged feature space corresponding to the 2nd degree polynomial kernel, hence the dictionary of basis functions is D = {√2 xj, √2 xj xj′, xj^2 ; j, j′ = 1, . . . , p}. We generate 1000 test data to compare the 1-norm SVM and the standard 2-norm SVM. The average test errors over 50 simulations, with different numbers of noise inputs, are shown in Table 1. For both the 1-norm SVM and the 2-norm SVM, we choose the tuning parameters to minimize the test error, to be as fair as possible to each method. For comparison, we also include the results for the non-penalized SVM. From Table 1 we can see that the non-penalized SVM performs significantly worse than the penalized ones; the 1-norm SVM and the 2-norm SVM perform similarly when there is no noise input (line 1), but the 2-norm SVM is adversely affected by noise inputs (lines 2-5). Since the 1-norm SVM has the ability to select relevant features and ignore redundant features, it does not suffer from the noise inputs as much as the 2-norm SVM. Table 1 also shows the number of basis functions q and the number of joints on the piece-wise linear solution path. Notice that q < n and there is a striking linear relationship between |D| and #Joints (Figure 2). Figure 2 also shows the 1-norm SVM result for one simulation. Figure 2: Left and middle panels: 1-norm SVM when there are 4 noise inputs. The left panel is the piece-wise linear solution path β̂(s). The two upper paths correspond to x1^2 and x2^2, which are the relevant features. The middle panel is the test error along the solution path. The dashed lines correspond to the minimum of the test error.
The right panel illustrates the linear relationship between the number of basis functions and the number of joints on the solution path when q < n. 4.2 Real data results In this section, we apply the 1-norm SVM to classification of gene microarrays. Classification of patient samples is an important aspect of cancer diagnosis and treatment. The 2-norm SVM has been successfully applied to microarray cancer diagnosis problems ([5] and [7]). However, one weakness of the 2-norm SVM is that it only predicts a cancer class label but does not automatically select relevant genes for the classification. Often a primary goal in microarray cancer diagnosis is to identify the genes responsible for the classification, rather than class prediction. [4] and [5] have proposed gene selection methods, which we call univariate ranking (UR) and recursive feature elimination (RFE) (see [14]), that can be combined with the 2-norm SVM. However, these are two-step procedures that depend on external gene selection methods. On the other hand, the 1-norm SVM has an inherent gene (feature) selection property due to the lasso penalty. Hence the 1-norm SVM achieves the goals of classification of patients and selection of genes simultaneously. We apply the 1-norm SVM to the leukemia data [4]. This data set consists of 38 training data and 34 test data of two types of acute leukemia, acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL). Each datum is a vector of p = 7,129 genes. We use the original input xj, i.e. the jth gene’s expression level, as the basis function, i.e. q = p. The tuning parameter is chosen according to 10-fold cross-validation; the final model is then fitted on all the training data and evaluated on the test data. The number of joints on the solution path is 104, which appears to be O(n) ≪ O(q). The results are summarized in Table 2.
We can see that the 1-norm SVM performs similarly to the other methods in classification, and it has the advantage of automatically selecting relevant genes. We should notice that the maximum number of genes that the 1-norm SVM can select is upper bounded by n, which is usually much less than q in microarray problems.

Table 2: Results on Microarray Classification
Method            CV Error   Test Error   # of Genes
2-norm SVM UR     2/38       3/34         22
2-norm SVM RFE    2/38       1/34         31
1-norm SVM        2/38       2/34         17

5 Conclusion We have considered the 1-norm SVM in this paper. We illustrate that the 1-norm SVM may have some advantage over the 2-norm SVM, especially when there are redundant features. The solution path β̂(s) of the 1-norm SVM is a piece-wise linear function in the tuning parameter s. We have proposed an efficient algorithm to compute the whole solution path β̂(s) of the 1-norm SVM, which facilitates adaptive selection of the tuning parameter s. Acknowledgments Hastie was partially supported by NSF grant DMS-0204162 and NIH grant R01-CA-72028-01. Tibshirani was partially supported by NSF grant DMS-9971405 and NIH grant R01-CA-72028. References [1] Bradley, P. & Mangasarian, O. (1998) Feature selection via concave minimization and support vector machines. In J. Shavlik (ed.), ICML'98. Morgan Kaufmann. [2] Evgeniou, T., Pontil, M. & Poggio, T. (1999) Regularization networks and support vector machines. Advances in Large Margin Classifiers. MIT Press. [3] Friedman, J., Hastie, T., Rosset, S., Tibshirani, R. & Zhu, J. (2004) Discussion of "Consistency in boosting" by W. Jiang, G. Lugosi, N. Vayatis and T. Zhang. Annals of Statistics. To appear. [4] Golub, T., Slonim, D., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J., Coller, H., Loh, M., Downing, J. & Caligiuri, M. (1999) Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286, 531-536. [5] Guyon, I., Weston, J., Barnhill, S. & Vapnik, V.
(2002) Gene selection for cancer classification using support vector machines. Machine Learning 46, 389-422. [6] Hastie, T., Tibshirani, R. & Friedman, J. (2001) The Elements of Statistical Learning. Springer-Verlag, New York. [7] Mukherjee, S., Tamayo, P., Slonim, D., Verri, A., Golub, T., Mesirov, J. & Poggio, T. (1999) Support vector machine classification of microarray data. Technical Report AI Memo 1677, MIT. [8] Rosset, S., Zhu, J. & Hastie, T. (2003) Boosting as a regularized path to a maximum margin classifier. Technical Report, Department of Statistics, Stanford University, CA. [9] Song, M., Breneman, C., Bi, J., Sukumar, N., Bennett, K., Cramer, S. & Tugcu, N. (2002) Prediction of protein retention times in anion-exchange chromatography systems using support vector regression. Journal of Chemical Information and Computer Sciences, September. [10] Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. J.R.S.S.B. 58, 267-288. [11] Vapnik, V. (1995) The Nature of Statistical Learning Theory. Springer-Verlag, New York. [12] Wahba, G. (1999) Support vector machines, reproducing kernel Hilbert spaces and the randomized GACV. Advances in Kernel Methods - Support Vector Learning, 69-88, MIT Press. [13] Zhu, J. (2003) Flexible statistical modeling. Ph.D. Thesis, Stanford University. [14] Zhu, J. & Hastie, T. (2003) Classification of gene microarrays by penalized logistic regression. Biostatistics. Accepted.
Envelope-based Planning in Relational MDPs Natalia H. Gardiol MIT AI Lab Cambridge, MA 02139 nhg@ai.mit.edu Leslie Pack Kaelbling MIT AI Lab Cambridge, MA 02139 lpk@ai.mit.edu Abstract A mobile robot acting in the world is faced with a large amount of sensory data and uncertainty in its action outcomes. Indeed, almost all interesting sequential decision-making domains involve large state spaces and large, stochastic action sets. We investigate a way to act intelligently as quickly as possible in domains where finding a complete policy would take a hopelessly long time. This approach, Relational Envelope-based Planning (REBP), tackles large, noisy problems along two axes. First, describing a domain as a relational MDP (instead of as an atomic or propositionally-factored MDP) allows problem structure and dynamics to be captured compactly with a small set of probabilistic, relational rules. Second, an envelope-based approach to planning lets an agent begin acting quickly within a restricted part of the full state space and judiciously expand its envelope as resources permit. 1 Introduction Quickly generating usable plans when the world abounds with uncertainty is an important and difficult enterprise. Consider the classic blocks world domain: the number of ways to make a stack of a certain height grows exponentially with the number of blocks on the table; and if the outcomes of actions are uncertain, the task becomes even more daunting. We want planning techniques that can deal with large state spaces and large, stochastic action sets since most compelling, realistic domains have these characteristics. In this paper we propose a method for planning in very large domains by using expressive rules to restrict attention to high-utility subsets of the state space. Much of the work in traditional planning techniques centers on propositional, deterministic domains. See Weld's survey [12] for an overview of the extensive work in this area.
Efforts to extend classical planning approaches into stochastic domains mainly include techniques that work with fully-ground state spaces [13, 2]. Conversely, efforts to move beyond propositional STRIPS-based planning involve work in mainly deterministic domains [6, 10]. But the world is not deterministic: for an agent to act robustly, it must handle uncertain dynamics as well as large state and action spaces. Markov decision theory provides techniques for dealing with uncertain outcomes in atomic-state contexts, and much work has been done in leveraging structured representations to solve very large MDPs and some POMDPs [9, 3, 7]. While these techniques have moved MDP techniques from atomic-state representations to factored ones, they still operate in fully-ground state spaces. In order to describe large stochastic domains compactly, we need relational structures that can represent uncertainty in the dynamics. Relational representations allow the structure of the domain to be expressed in terms of object properties rather than object identities and thus yield a much more compact representation of a domain than the equivalent propositional version can. Efficient solutions for probabilistic, first-order MDPs are difficult to come by, however. Boutilier et al. [3] find policies for first-order MDPs by solving for the value-function of a first-order domain: the approach manipulates logical expressions that stand for sets of underlying states, but keeping the value-function representation manageable requires complex theorem-proving. Other approaches in relational MDPs represent the value function as a decision-tree [5] or as a sum of local subfunctions [8]. Another recent body of work avoids learning the value function and learns policies directly from example policies [14]. These approaches all compute full policies over complete state and action spaces, however, and so are of a different spirit than the work presented here.
The underlying message is nevertheless clear: the more an agent can compute logically and the less it attends to particular domain objects, the more general its solutions will be. Since fully-ground representations grow too big to be useful and purely logical representations are as yet unwieldy, we propose a middle path: we agree to ground things out, but in a principled, restricted way. We represent world dynamics by a compact set of relational rules, and we extend the envelope method of Dean et al. [4] to use these structured dynamics. We quickly come up with an initial trajectory (an envelope of states) to the goal and then refine the policy by gradually incorporating nearby states into the envelope. This approach avoids the wild growth of purely propositional techniques by restricting attention to a useful subset of states. Our approach strikes a balance along two axes: between fully ground and purely logical representations, and between straight-line plans and full MDP policies. 2 Planning with an Envelope in Relational Domains The envelope method was initially designed for planning in atomic-state MDPs. Goals of achievement are encoded as reward functions, and planning now becomes finding a policy that maximizes a long-term measure of reward. Extending the approach to a relational setting lets us cast the problem of planning in stochastic, relational domains in terms of finding a policy for a restricted Markovian state space. 2.1 Encoding Markovian dynamics with rules The first step to extending the envelope method to relational domains is to encode the world dynamics relationally. We use a compact set of rules, as in Figure 1. Each rule, or operator, is denoted by an action symbol and a parameterized argument list. Its behavior is defined by a precondition and a set of outcomes, together called the rule schema. Each precondition and outcome is a conjunction of domain predicates.
A rule applies in a state if its precondition can be matched against some subset of the state ground predicates. Each outcome then describes the set of possible resulting ground states. Given this structured representation of action dynamics, we define a relational MDP as a tuple ⟨P, Z, O, T, R⟩: States: The set of states is defined by a finite set P of relational predicates, representing the properties and relations that can hold among the finite set of domain objects, O. Each RMDP state is a ground interpretation of the domain predicates over the domain objects. Actions: The set of ground actions depends on the set of rules Z and the objects in the world. For example, move(A, B) can be bound to the table arrangement in Figure 2(a) by binding A to block 1 and B to block 4 to yield the ground action move(1, 4). Transition Dynamics: For each action, the distribution over next states is given compactly by the distribution over outcomes encoded in the schema.

Figure 1: The set of relational rules, Z, for blocks-world dynamics. Each rule schema contains the action name, precondition, and a set of effects:

move(A, B)
  pre: (clear(B,t), hold(nil), height(B,H), incr(H,H′), clear(A,t), on(A,C), broke(f))
  eff: [0.70] (on(A,B), height(A,H), clear(A,t), clear(B,f), hold(nil), clear(C,t))
       [0.30] (on(A,table), clear(A,t), height(A,H), hold(nil), clear(C,t), broke(t))

fix()
  pre: (broke(t))
  eff: [0.97] (broke(f))
       [0.03] (broke(t))

stackon(B)
  pre: (clear(B,t), hold(A), height(B,H), incr(H,H′), broke(f))
  eff: [0.97] (on(A,B), height(A,H), clear(A,t), clear(B,f), hold(nil))
       [0.03] (on(A,table), clear(A,t), height(A,H′), hold(nil), broke(t))

stackon(table)
  pre: (clear(table,t), hold(A), broke(f))
  eff: [1.00] (on(A,table), height(A,0), clear(A,t), hold(nil))

pickup(A)
  pre: (clear(A,t), hold(nil), on(A,B), broke(f))
  eff: [1.00] (hold(A), clear(A,f), on(A,nil), clear(B,t), height(A,-1))

For example, executing
move(1, 4) yields a 0.7 chance of landing in a state where block 1 is correctly put on block 4, and a 0.3 chance of landing in a state where block 1 falls on the table. The rule outcomes themselves usually only specify a subset of the domain predicates, effectively describing a set of possible ground states. We assume a static frame: state predicates not directly changed by the rule are assumed to remain the same. Rewards: A state is deterministically mapped to a scalar reward according to the function R(s). 2.2 Initial trajectory planning The next step is finding an initial path. In a relational setting, when the underlying MDP space implied by the full instantiation of the representation is potentially huge, a good initial envelope is crucial. It determines the quality of the early envelope policies and sets the stage for more elaborate policies later on. For planning in traditional STRIPS domains, the Graphplan algorithm is known to be effective [1]. Graphplan finds the shortest straight-line plan by iteratively growing a forward-chaining structure called a plangraph and testing for the presence of goal conditions at each step. Blum and Langford [2] describe a probabilistic extension called TGraphplan (TGP) that works by returning a plan's probability of success rather than just a boolean flag. TGP can fairly quickly find straight-line plans from start to goal that satisfy a minimum probability of success. Given TGP's success in probabilistic STRIPS domains, a straightforward idea is to use the trajectory found by TGP to populate our initial envelope. Nevertheless, this should give us pause: we have just said that our relational MDP describes a large underlying MDP. TGP and other Graphplan descendants work by grounding out the rules and chaining them forward to construct the plangraph. Large numbers of actions cause severe problems for Graphplan-based planners [11] since the branching factor quickly chokes the forward-chaining plangraph construction. So how do we cope?
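Before turning to that question, the rule schemas of Section 2.1 and their sampled-outcome, static-frame semantics can be made concrete. The sketch below is a simplified, hypothetical encoding (predicates are collapsed to (name, value) pairs and the frame assumption is applied per predicate name), not the paper's implementation:

```python
# Sketch of the fix() rule schema from Figure 1 as a data structure, with a
# transition function that samples an outcome and applies the static frame
# assumption: predicates not overwritten by the effect carry over unchanged.
import random

# fix(): precondition broke(t); 0.97 -> broke(f), 0.03 -> broke(t)
FIX = {
    "pre": {("broke", "t")},
    "outcomes": [(0.97, {("broke", "f")}), (0.03, {("broke", "t")})],
}

def applicable(rule, state):
    # the precondition must match a subset of the state's ground predicates
    return rule["pre"] <= state

def step(rule, state, rng):
    assert applicable(rule, state)
    r, acc = rng.random(), 0.0
    for p, eff in rule["outcomes"]:  # sample one outcome by its probability
        acc += p
        if r < acc:
            break
    changed = {name for name, _ in eff}
    kept = {(name, v) for name, v in state if name not in changed}
    return kept | eff

rng = random.Random(0)
s = {("broke", "t"), ("hold", "nil"), ("clear", "table")}
s2 = step(FIX, s, rng)
print(s2)
```

Note how the frame assumption fills in hold(nil) and clear(table) in the successor state even though the rule's effect never mentions them.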
Figure 2: (a) Given this world configuration, the move action produces three types of effects. (b) 12 different groundings for the argument variables, but not all produce different groundings for the derived variables. (c) A plangraph fragment with a particular instance of move chained forward.

2.3 Equivalence-class sampling: reducing the planning action space STRIPS rules require every variable in the rule schema to appear in the argument list, so move(A, B) becomes move(A, B, H, H′, C). The meaning of the operator shifts from "move A onto B" to "move A at height H′ onto B at height H from C". Not only is this awkward, but specifying all the variables in the argument list yields an exponential number of ground actions as the number of domain objects grows. In contrast, the operators we defined above have argument lists containing only those variables that are free parameters. That is, when the operator move(A, B) takes two arguments, A and B, it means that the other variables (such as C, the block under A) are derivable from the relations in the rule schema. Guided by this observation, one can generalize among bindings that produce equivalent effects on the derivable properties. Consider executing the move(A, B) rule in the world configuration in Figure 2. This creates 12 fully-ground actions. However, examining the bindings reveals only three types of action-effects.
There is one group of actions that moves a block from one block and onto another; a group that moves a block from the table and onto a block of height zero; and another group that moves a block off the table and onto a block of height one. Except for the identities of the argument blocks A and B, the actions in each class produce equivalent groundings for the properties of the related domain objects. Rather than using all the actions, then, the plangraph can be constructed by chaining forward only a sampled action from each class. We call this equivalence-class sampling; the sampled action is representative of the effects of any action from that class. Sampling reduces the branching factor at each step in the plangraph, so significantly larger domains can be handled. 3 From a Planning Problem to a Policy Now we describe the approach in detail. We define a planning problem as containing: Rules: These are the relational operators that describe the action effects. In our system, they are designed by hand and the probabilities are specified by the programmer. Initial World State: The set of ground predicates that describes the starting state. REBP does not make the closed world assumption, so all predicates and objects required in the planning task must appear in the initial state. Goal Condition: A conjunction of relational predicates. The goal may contain variables — it does not need to be fully ground.

Figure 3: An initial envelope corresponding to the plangraph segment of Figure 2(c), followed by fringe sampling and envelope expansion.

Rewards: A list of conjunctions mapping matching states to a scalar reward value. If a state in the current MDP does not match a reward condition, the default value is 0.
Additionally, there must be a penalty associated with falling out of the envelope. This penalty is an estimate of the cost of having to recover from falling out (such as having to replan back to the envelope, for example). Given a planning problem, there are now three main components to REBP: finding an initial plan, converting the plan into an MDP, and envelope manipulation. A running example to illustrate the approach will be the tiny task of making a two-block stack in a domain with two blocks. Figure 3 illustrates output produced by a run of the algorithm. 3.1 Finding an initial plan The process for making the initial trajectory essentially follows the TGP algorithm described by Blum and Langford [2]. The TGP algorithm starts with the initial world state as the first layer in the graph, a minimum probability cutoff for the plan, and a maximum plan depth. We use the equivalence-class sampling technique discussed above to prune actions from the plangraph. Figure 2(c) shows one step of a plangraph construction. 3.2 Turning the initial plan into an MDP The TGP algorithm produces a sequence of actions. The next step is to turn the sequence of action-effects into a well-defined envelope MDP; that is, we must compute the set of states and the transitions. Usually, the sequence of action-effects alone leaves many state predicates unspecified. Currently, we assume a static frame, which implies that the value of a predicate remains the same unless it is known to have explicitly changed. The set of RMDP states is computed iteratively: first, the envelope is initialized with the initial world state; then, the next state in the envelope is found by applying the plan action to the previous state and "filling in" any missing predicates with their previous values; when the state containing the goal condition is reached, the set of states is complete.
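The equivalence-class sampling used above to prune the plangraph can be sketched as follows. The toy configuration mirrors Figure 2(a) (block 3 on block 2; blocks 1, 4, 5 on the table); grouping the bindings of move(A, B) by the derived heights recovers the paper's 12 ground actions falling into three classes. The encoding is our own simplification, and the representative is chosen deterministically rather than sampled:

```python
# Sketch of equivalence-class sampling: ground bindings of move(A, B) are
# grouped by the values of the derived variables (here, the heights of A
# and B), and only one representative per group is chained into the
# plangraph instead of every ground action.
from itertools import permutations

# toy configuration in the spirit of Figure 2(a): block 3 sits on block 2
height = {1: 0, 2: 0, 3: 1, 4: 0, 5: 0}
clear = {1, 3, 4, 5}                      # candidate A and B bindings

bindings = list(permutations(clear, 2))   # all fully-ground move(A, B)
classes = {}
for a, b in bindings:
    sig = (height[a], height[b])          # derived-variable signature
    classes.setdefault(sig, []).append((a, b))

representatives = [acts[0] for acts in classes.values()]
print(len(bindings), "ground actions ->", len(representatives), "classes")
```

Chaining forward three representatives instead of twelve ground actions is exactly the branching-factor reduction described in Section 2.3.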
To compute the set of actions, REBP loops through the list of operators and accumulates all the ground actions whose preconditions bind to any state in the envelope. Transitions that initiate in an envelope state but do not land in an envelope state are redirected to OUT. The leftmost MDP in Figure 3 shows the initial envelope corresponding to the one-step plan of Figure 2(c). 3.3 Envelope Expansion Envelope expansion, or deliberation, involves adding to the subset of world states under consideration. The decision of when and how long to deliberate must compare the expected utility of further thinking against the cost of doing so. Dean et al. [4] discuss this complex issue in depth. As a first step, we considered the simple precursor deliberation model, in which deliberation occurs for some number of rounds r and is completed before execution takes place. A round of deliberation involves sampling from the current policy to estimate which fringe states — states one step outside of the envelope — are likely. In each round, REBP draws d · M samples (drawing from an exploratory action with probability ϵ) and keeps counts of which fringe states are reached. The f · M most likely fringes are added to the envelope, where M is the number of states in the current envelope and d and f are scalars. After expansion, we recompute the set of actions and compute a new policy. Figure 3 shows a sequence of fringe sampling and envelope expansion. We see the incorporation of the fringe state in which the hand breaks as a result of move. With the new envelope, the policy is re-computed to include the fix action. This is a conditional plan that a straight-line planner could not find. 4 Experimental Domain To illustrate the behavior of REBP, we show preliminary results in a stochastic blocks world. While simple, blocks world is a reasonably interesting first domain because, with enough blocks, it exposes the weaknesses of purely propositional approaches.
Its regular dynamics, on the other hand, lend themselves to relational descriptions. This domain demonstrates the type of scaling that can be achieved with the REBP approach. The task at hand is to build a stack containing all the blocks on the table. In this domain, blocks are stacked on one another, with the top block in a stack being clear. Each block has a color and is at some height in the stack. There is a gripper that may or may not be broken. The pickup(A) action is deterministic and puts a clear block into the empty hand; a block in the hand is no longer clear, and its height and on-ness are no longer defined. The fix() action takes a broken hand and fixes it with some probability. The stackon() action comes in two flavors: the first, stackon(B), takes a block from the hand and puts it on block B, though the block may be dropped onto the table with a small probability; the second, stackon(table), always puts the block from the hand onto the table. The move(A, B) and stackon(B) actions also have some chance of breaking the hand. If the hand is broken, it must be fixed before any further actions can apply. The domain is formalized as follows (footnote 3): P: on(Block, Block), clear(Block, TorF), color(Block, Color), height(Block, Num), hold(Block), clear(table, TorF), broke(TorF). Z, T: The rules are shown in Figure 1. O: A set of n differently colored (red, green, blue) blocks. R(s): If ∃A height(A, n−1), then 1; if broke(t), then −2; if OUT, then −1. 5 Empirical Results We compared the quality of the policies generated by the following algorithms: REBP; envelope expansion starting from an empty initial plan (i.e., the initial envelope containing only the initial world state); and policy iteration on the fully ground MDP (footnote 4). In all cases, the policy was computed by simple policy iteration with a discount of 0.9 and a stopping threshold of 0.1. In the case of REBP, the number of deliberation rounds r was 10, d was 10, f was 0.3, and ϵ was 0.2.
In the case of the deliberation-only envelope, r was increased to 35. The runs were averaged over at least 7 trials in each case. We show numerical results for domains with 5 and 6 blocks. The size of the full MDP in each case is, respectively, 768 and 5,228 states, with 351 and 733 ground actions. A domain of 7 blocks results in an MDP of over 37,000 states with 1,191 actions, a combined state and action space that is too overwhelming for the full MDP solution. The REBP agent, on the other hand, is able to find plans for making stacks in domains of more than 12 blocks, which corresponds to an MDP of about 88,000 states and 3,000 ground actions.

Footnote 3: The predicates behave like functions in the sense that the nth argument represents the value of the relation for the first n−1 arguments. Thus, we say clear(block5, f) instead of ¬clear(block5).
Footnote 4: Starting with the initial state, the set of states is generated by exhaustively applying our operators until no more new states are found; this yields the true set of reachable states.

Figure 4: Results for the block-stacking tasks. The top plots show policy value against computation time for REBP and the full MDP. The bottom plots show policy value against number of states for REBP and deliberation only (empty initial plan).

The plots in Figure 4 show intuitive results. The top row shows the value of the policy against execution time (as measured by a monitoring package), showing that the REBP algorithm produces good quality plans quickly. For REBP, we start measuring the value of the policy at the point when initial trajectory finding ends and deliberation begins; for the full MDP solution, we measure the value of the policy at the end of each round of policy iteration. The full MDP takes a long time to find a policy, but eventually converges. Without the equivalence-class sampling, plangraph construction takes on the order of a couple of hours; with it, it takes a couple of minutes.
The bottom row shows the value of the policy against the number of states in the envelope so far, and shows that a good initial envelope is key for behaving well with fewer states. 6 Discussion and Conclusions Using the relational envelope method, we can take real advantage of relational generalization to produce good initial plans efficiently, and use envelope-growing techniques to improve the robustness of our plans incrementally as time permits. REBP is a planning system that tries to dynamically reformulate an apparently intractable problem into a small, easily handled problem at run time. However, there is plenty remaining to be done. The first thing needed is a more rigorous analysis of the equivalence-class sampling. Currently, the action sampling is a purely local decision made at each step of the plangraph. This works in the current setup because object identities do not matter and properties not mentioned in the operator outcomes are never part of the goal condition. If, on the other hand, the goal were to make a stack of height n−1 with a green block on top, it could be problematic to construct the plangraph without considering block color in the sampled actions. We are currently investigating what conditions are necessary for making general guarantees about the sampling approach. Furthermore, the current envelope-extension method is relatively undirected; it might be possible to diagnose more effectively which fringe states would be most profitable to add. In addition, techniques such as those used by Dean et al. [4] could be employed to decide when to stop envelope growth, and to manage the eventual interleaving of envelope-growth and execution. Currently the states in the envelope are essentially atomic; it ought to be possible to exploit the factored nature of relational representations to allow abstraction in the MDP model, with aggregate "states" in the MDP actually representing sets of states in the underlying world.
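For concreteness, the deliberation round of Section 3.3 (sampling trajectories under the current policy with ϵ-exploration and counting which fringe states are reached) can be sketched as below. The state names, toy transition table, and parameter values are illustrative only, not the paper's:

```python
# Simplified sketch of fringe sampling: roll out the current policy, and
# whenever a sampled successor falls outside the envelope, count it as a
# fringe state; the most frequently hit fringes are candidates to add.
import random
from collections import Counter

def fringe_counts(env, policy, trans, start, n_samples, eps, horizon, rng):
    """trans: {(state, action): [(prob, next_state), ...]}."""
    counts = Counter()
    for _ in range(n_samples):
        s = start
        for _ in range(horizon):
            acts = [a for (st, a) in trans if st == s]
            if not acts:
                break
            # epsilon-greedy: exploratory action with probability eps
            a = rng.choice(acts) if rng.random() < eps else policy[s]
            r, acc = rng.random(), 0.0
            for p, s2 in trans[(s, a)]:
                acc += p
                if r < acc:
                    break
            if s2 not in env:
                counts[s2] += 1   # fringe: one step outside the envelope
                break
            s = s2
    return counts

trans = {("s0", "move"): [(0.7, "goal"), (0.3, "broke")],
         ("goal", "noop"): [(1.0, "goal")]}
env = {"s0", "goal"}
policy = {"s0": "move", "goal": "noop"}
counts = fringe_counts(env, policy, trans, "s0", 200, 0.2, 10, random.Random(0))
print(counts.most_common(1))
```

Here the "hand breaks" state is the dominant fringe, which is exactly the state whose incorporation in Figure 3 lets the recomputed policy include the fix action.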
In summary, the REBP method provides a way to restrict attention to a small, useful subset of a large MDP space. It produces an initial plan quickly by taking advantage of generalization among action effects, and as a result behaves smarter in a large space much sooner than it could by waiting for a full solution. Acknowledgements This work was supported by an NSF Graduate Research Fellowship, by the Office of Naval Research contract #N00014-00-1-0298, and by NASA award #NCC2-1237. References [1] Avrim L. Blum and Merrick L. Furst. Fast planning through planning graph analysis. Artificial Intelligence, 90:281–300, 1997. [2] Avrim L. Blum and John C. Langford. Probabilistic planning in the graphplan framework. In 5th European Conference on Planning, 1999. [3] Craig Boutilier, Raymond Reiter, and Bob Price. Symbolic dynamic programming for first-order MDPs. In IJCAI, 2001. [4] Thomas Dean, Leslie Pack Kaelbling, Jak Kirman, and Ann Nicholson. Planning under time constraints in stochastic domains. Artificial Intelligence, 76, 1995. [5] Kurt Driessens, Jan Ramon, and Hendrik Blockeel. Speeding up relational reinforcement learning through the use of an incremental first order decision tree learner. In European Conference on Machine Learning, 2001. [6] B. Cenk Gazen and Craig A. Knoblock. Combining the expressivity of UCPOP with the efficiency of graphplan. In Proc. European Conference on Planning (ECP-97), 1997. [7] H. Geffner and B. Bonet. High-level planning and control with incomplete information using POMDPs. In Fall AAAI Symposium on Cognitive Robotics, 1998. [8] C. Guestrin, D. Koller, C. Gearhart, and N. Kanodia. Generalizing plans to new environments in relational MDPs. In International Joint Conference on Artificial Intelligence, 2003. [9] Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. SPUDD: Stochastic planning using decision diagrams. In Fifteenth Conference on Uncertainty in Artificial Intelligence, 1999. [10] J. Koehler, B. Nebel, J.
Hoffmann, and Y. Dimopoulos. Extending planning graphs to an ADL subset. In Proc. European Conference on Planning (ECP-97), 1997. [11] B. Nebel, J. Koehler, and Y. Dimopoulos. Ignoring irrelevant facts and operators in plan generation. In Proc. European Conference on Planning (ECP-97), 1997. [12] Daniel S. Weld. Recent advances in AI planning. AI Magazine, 20(2):93–123, 1999. [13] Daniel S. Weld, Corin R. Anderson, and David E. Smith. Extending graphplan to handle uncertainty and sensing actions. In Proceedings of AAAI ’98, 1998. [14] SungWook Yoon, Alan Fern, and Robert Givan. Inductive policy selection for first-order MDPs. In 18th International Conference on Uncertainty in Artificial Intelligence, 2002.
Necessary Intransitive Likelihood-Ratio Classifiers Gang Ji and Jeff Bilmes SSLI-Lab, Department of Electrical Engineering University of Washington Seattle, WA 98195-2500 {gang,bilmes}@ee.washington.edu Abstract In pattern classification tasks, errors are introduced because of differences between the true model and the one obtained via model estimation. Using likelihood-ratio based classification, it is possible to correct for this discrepancy by finding class-pair specific terms to adjust the likelihood ratio directly, and that can make class-pair preference relationships intransitive. In this work, we introduce new methodology that makes necessary corrections to the likelihood ratio, specifically those that are necessary to achieve perfect classification (but not perfect likelihood-ratio correction, which can be overkill). The new corrections, while weaker than previously reported such adjustments, are analytically challenging since they involve discontinuous functions, therefore requiring several approximations. We test a number of these new schemes on an isolated-word speech recognition task as well as on the UCI machine learning data sets. Results show that by using the bias terms calculated in this new way, classification accuracy can substantially improve over both the baseline and over our previous results. 1 Introduction Statistical pattern recognition is often based on Bayes decision theory [4], which aims to achieve minimum error rate classification. In previous work [2], we observed that multiclass Bayes classification can be viewed as a tournament-style game, where the winner between players is decided using log likelihood ratios. Supposing the classes (players) are {c1, c2, · · · , cM} and the observation (game) is x, the winner of each pair of classes is determined, with the assumption of equal priors, by the sign of the log likelihood ratio Lij(x) = ln(P(x|ci)/P(x|cj)): if Lij(x) > 0, class ci wins; otherwise class cj wins.
A practical game strategy can be obtained by fixing a comparison order, {i1, i2, · · · , iM}, as a permutation of {1, 2, · · · , M}, where class ci1 plays with class ci2, the winner plays with class ci3, and so on until a final winner is ultimately found. This yields a transitive game [8] — assuming no ties, the ultimate winner is identical regardless of the comparison order. To perform these procedures optimally, correct likelihood ratios are needed, which requires correct probabilistic models and sufficient training data. This is never the case given a finite amount of training data or the wrong model family, as is typical in practice. In previous work [2], we introduced a method to correct for the difference between the true and an approximate log likelihood ratio. In this work, we improve upon the correction method by using an expression that can still lead to perfect correction, but is weaker than what we used before. We show that this new condition can achieve a significant improvement over baseline results, both on a medium vocabulary isolated-word automatic speech recognition task and on the UCI machine learning data sets. The paper is organized as follows: Section 2 describes the general scheme and describes past work. Section 3 discusses the weaker correction condition and its approximations. Section 4 provides various experimental results on an isolated-word speech recognition task. Section 5 contains the experimental results on the UCI data. Finally, Section 6 concludes. 2 Background A common problem in many probabilistic machine learning settings is the lack of a correct statistical model. In a generative pattern classification setting, this occurs because only an estimated quantity P̂(x|c) (footnote 1) of a distribution is available, rather than the true class-conditional model P(x|c).
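The fixed-order tournament can be sketched in a few lines. The univariate Gaussian class-conditional models below are purely illustrative (not from the paper); with true, hence transitive, likelihood ratios the winner does not depend on the comparison order:

```python
# Sketch of the fixed-order tournament: each pairwise game is decided by
# the sign of the log likelihood ratio between the current winner and the
# next challenger. Gaussian class models are for illustration only.
import math

def log_lik(x, mean, var):
    # log density of N(mean, var) at x
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def tournament(x, models, order):
    winner = order[0]
    for challenger in order[1:]:
        L = log_lik(x, *models[winner]) - log_lik(x, *models[challenger])
        if L <= 0:          # challenger wins this pairwise game
            winner = challenger
    return winner

models = {0: (0.0, 1.0), 1: (3.0, 1.0), 2: (6.0, 1.0)}  # (mean, variance)
print(tournament(2.9, models, [0, 1, 2]))
```

With equal priors and these true models, any permutation of the comparison order yields the same winner, which is the transitivity property discussed above.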
In the likelihood ratio decision scheme described above, only an imperfect log likelihood ratio, ˆLij(x) = ln( ˆP(x|ci)/ ˆP(x|cj)), is available for decision making rather than the true log likelihood ratio Lij(x). One approach to correct for this inaccuracy is to use richer class-conditional likelihoods, more complicated parametric forms of Lij(x) itself, and/or more training data. In previous work [2], we proposed a different approach that requires no change in generative models, no increase in free parameters, and no additional training data, but still yields improved accuracy. The key idea is to compensate for the difference between Lij(x) and ˆLij(x) using a bias2 term αij(x) computed from test data such that:

Lij(x) − αij(x) = ˆLij(x). (1)

If it is assumed that a single bias term is used for all data, so that αij(x) = αij, we found that the best αij is as follows:

αij = (1/2)(D(i∥j) − D(j∥i)) − (1/2)( ˆD(i∥j) − ˆD(j∥i)), (2)

where D(i∥j) = E_{P(x|ci)}[Lij(x)] is the Kullback-Leibler (KL) divergence [3] between P(x|ci) and P(x|cj), and ˆD(i∥j) = E_{P(x|ci)}[ ˆLij(x)] is its estimate. Under the assumption (referred to as assumption A in Section 3.1) of symmetric KL-divergence for the true model (e.g., equal covariance matrices in the Gaussian case), the bias term can be written explicitly as

αij = −(1/2)( ˆD(i∥j) − ˆD(j∥i)). (3)

We saw how the augmented likelihood ratio Sij(x) = ˆLij(x) + αij can lead to an intransitive game [8, 13], since Sij(x) can specify intransitive preferences amongst the set {1, 2, · · · , M}. We therefore investigated a number of intransitive game-playing strategies. Moreover, we observed that if the correction were optimal, the true likelihood ratios would be obtained, which are clearly transitive. We therefore hypothesized, and experimentally verified, that the existence of intransitivity was a good indicator of the occurrence of a classification error. This general approach can be improved upon in several ways.
First, better intransitive strategies can be developed (for detecting, tolerating, and utilizing the intransitivity of a classifier); second, the assumption of symmetric KL-divergence could be relaxed; and third, the above criterion is stricter than required to obtain perfect correction. In this work, we advance on the latter two of the above three possible avenues for improvement.

1 In this paper, we use "hatted" letters to describe estimated quantities.
2 Note that by bias, we do not mean standard parameter bias in statistical parameter estimation.

3 Necessary Intransitive Scheme

An αij(x) that solves Equation 1 is a sufficient condition for a perfect correction of the estimated likelihood ratio since, given such a quantity, the true likelihood ratio would be attainable. This condition, however, is stricter than required because only the sign of the likelihood ratio is needed to decide the winning class. We therefore should ask for a condition that corrects only for the discrepancy in sign between the true and estimated ratio, i.e., we want to find a function αij(x) that minimizes

J[αij] = ∫_{R^n} { sgn[Lij(x) − αij(x)] − sgn ˆLij(x) }² Pij(x) dx.

Clearly, the αij(x) that minimizes J[αij] is one such that

sgn[Lij(x) − αij(x)] = sgn ˆLij(x), ∀x ∈ supp Pij = {x : Pij(x) ≠ 0}. (4)

As can be seen, this condition is weaker than Equation 1, in the sense that any solution to Equation 1 solves Equation 4 but not vice versa. Note also that Equation 4 provides necessary conditions for an additive bias term to achieve perfect correction, since any such correction must achieve parity in the sign. It might therefore be simpler to find a better bias term, since Equation 4 (and therefore the set of possible α values) is less constrained. As will be seen, however, analysis of this weaker condition is more difficult. In the following sections, we therefore introduce several approximations to this condition.
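For comparison with the approximations developed next, the earlier KL-divergence-based bias of Equation 3 can be estimated directly from sample averages. The deliberately misestimated 1-D Gaussian models below are hypothetical stand-ins for a real recognizer:

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = {"i": 0.0, "j": 3.0}    # unknown to the classifier
est_means = {"i": 0.5, "j": 3.0}     # deliberately misestimated model for class i

def llr_hat(x, a, b):
    # Estimated log likelihood ratio under the (wrong) unit-variance models.
    return -0.5 * (x - est_means[a]) ** 2 + 0.5 * (x - est_means[b]) ** 2

xi = rng.normal(true_means["i"], 1.0, 5000)   # samples from class i
xj = rng.normal(true_means["j"], 1.0, 5000)   # samples from class j

D_hat_ij = llr_hat(xi, "i", "j").mean()       # estimate of E_{P(x|ci)} L^_ij(x)
D_hat_ji = llr_hat(xj, "j", "i").mean()
alpha_ij = -0.5 * (D_hat_ij - D_hat_ji)       # Equation 3
```

With this particular mismatch, alpha_ij comes out negative, i.e. the correction biases decisions towards class j to compensate for class i's shifted model.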
Note that, as in previous work, we henceforth assume αij(x) = αij is a constant. In this case, the equation providing the best αij values is:

E_{Pij}{ sgn[Lij(x) − αij] } = E_{Pij}{ sgn ˆLij(x) }. (5)

3.1 The difficulty with the sign function

The main problem in trying to solve for αij in Equation 5 is the presence of a discontinuous function. In this section, we therefore work towards an analytically tractable approximation. The {−1, 0, 1}-valued sign function sgn(z) is defined as 2u(z) − 1, where u(z) is the Heaviside step function. We obtain an approximation via a Taylor expansion as follows:

sgn(z + ϵ) = sgn(z) + ϵ sgn′(z) + o(ϵ) = sgn(z) + 2ϵδ(z) + o(ϵ), (6)

where δ(z) is the Dirac delta function [7]. It can be defined as the derivative of the Heaviside step function, u′(z) = δ(z), and it satisfies the sifting property

∫_R f(z) δ(z − z0) dz = f(z0).

It therefore follows that [6, page 263]

∫_{R^n} f(z) δ[g(z)] dz = ∫_{Z_g} ( f(z) / |∇g(z)| ) dµ,

where ∇g is the gradient of g and Z_g = {z ∈ R^n : g(z) = 0} is the zero set of g with Lebesgue measure µ [12]. Of course, the Taylor expansion is valid only for a differentiable function; otherwise the error terms can be arbitrarily large. If, however, we find and use a suitable continuous and differentiable approximation rather than the discontinuous sign function, the above expansion becomes more appropriate. There exists a trade-off, however, between the quality of the sign-function approximation (a better approximation should yield a better approximation in Equation 4) and the error caused by the o(ϵ) term in Equation 6 (a better sign-function approximation will have a greater error when the higher-order Taylor terms are dropped). We therefore expect that ideally there will exist an optimal balance between the two. The shifted sigmoid with free parameter β (defined and used below) allows us to easily explore this trade-off simply by varying β.
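The trade-off can be seen numerically: the shifted sigmoid 2/(1 + e^{−βz}) − 1 approaches sgn(z) pointwise as β grows. A small sketch:

```python
import numpy as np

def soft_sign(z, beta):
    # Shifted sigmoid approximation: sgn(z) ≈ 2 / (1 + exp(-beta * z)) - 1.
    return 2.0 / (1.0 + np.exp(-beta * z)) - 1.0

z = np.linspace(-2.0, 2.0, 401)
# Worst-case deviation from sgn(z) over the grid, for increasing beta.
max_err = {b: np.abs(soft_sign(z, b) - np.sign(z)).max() for b in (1, 10, 100)}
```

The maximum deviation shrinks monotonically as β increases, but the approximation sharpens towards the discontinuity at z = 0, which is exactly where the dropped higher-order Taylor terms grow.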
Retaining the first-order Taylor term and applying this to the left side of Equation 5,

E_{Pij} sgn[Lij(x) − αij] ≈ E_{Pij} sgn Lij(x) − 2αij E_{Pij} δ[Lij(x)].

The distribution under which the expectation in Equation 5 is taken can also influence our results. If it is known that the true class of x is always ci, the ci-conditional distribution should be used, i.e., Pij(x) = P(x|ci), yielding a class-conditional correction term α(i)ij and a class-conditional likelihood-ratio correction S(i)ij(x) = ˆLij(x) + α(i)ij. The symmetric case arises when x is of class cj. If, on the other hand, neither ci nor cj is the true class (i.e., x is sampled from some other class-conditional distribution, say P(x|ck), k ≠ i, j), it does not matter which distribution is used for Pij(x) since, for a given comparison order in a game-playing strategy, the current winner will ultimately play using the true class distribution P(x|ck) of x (when one of i or j equals k). It is therefore valid to consider only the case where x is of class ci (we denote this event by Ci(x)) or x is of class cj (event Cj(x)). Note that these two events are disjoint. In practice, however, we do not know which of the two events holds. The ideal choice in either case can be expressed using indicators as follows:

Aij(x) = α(i)ij 1{Ci(x)} + α(j)ij 1{Cj(x)}.

Taking the expected value of Aij(X) with respect to p(x|Ci(x) ∨ Cj(x)) yields

αij = E_{p(x|Ci(x)∨Cj(x))}[Aij(X)] = ( α(i)ij P(ci) + α(j)ij P(cj) ) / ( P(ci) + P(cj) ).

This results in a single likelihood correction Sij(x) = ˆLij(x) + αij that is obtained simply by integrating in Equation 5 with respect to the average distribution over classes ci and cj, i.e.,

Pij(x) ≜ p(x|Ci(x) ∨ Cj(x)) = ( P(ci)P(x|ci) + P(cj)P(x|cj) ) / ( P(ci) + P(cj) ).
With these assumptions, and supposing the zero set Z_Lij = {x ∈ R^n : P(x|ci) = P(x|cj)} of Lij(x) is Lebesgue measurable with measure µ, we get:

∫_{R^n} { sgn Lij(x) − 2αij δ[Lij(x)] } Pij(x) dx = ∫_{R^n} sgn Lij(x) Pij(x) dx − 2Ψ(Pi, Pj) αij,

where

Ψ(Pi, Pj) = ∫_{R^n} Pij(x) δ[Lij(x)] dx = ∫_{Z_Lij} ( Pij(x) / |∇Lij(x)| ) dµ. (7)

Therefore,

αij = (1/Ψ(Pi, Pj)) ∫_{R^n} [ ( sgn Lij(x) − sgn ˆLij(x) ) / 2 ] Pij(x) dx.

As can be seen, αij is composed of two factors, the integral and the 1/Ψ(Pi, Pj) factor. The integral is bounded between −1 and 1 and determines the direction of the correction. When Lij(x) and ˆLij(x) always agree, the integral is zero and there is no correction. The correction favors i when αij is positive. This occurs when Lij is positive and ˆLij is negative more often than Lij is negative and ˆLij is positive, a situation improved upon by giving i "help." Similarly, when αij is negative, the correction biases towards j. The maximum amount of absolute likelihood correction possible is determined by the (always positive) 1/Ψ(Pi, Pj) factor. This is affected by two quantities: the mass around, and the log-likelihood-ratio gradient at, the decision boundary. Low mass at the decision boundary increases the maximum possible correction because any errors in the integral factor are being de-weighted. High gradient at the decision boundary also increases the maximum possible correction because any decision-boundary deviation causes a higher change in likelihood ratio than if the gradient was low. Since we are correcting the likelihood ratio directly, this needs to be reflected in αij. When P(x|ci) and P(x|cj) are multivariate Gaussians with means µi and µj, identical covariance matrices Σ, and equal priors, this becomes:

Ψ(Pi, Pj) = exp( −(1/8)(µi − µj)ᵀΣ⁻¹(µi − µj) ) / sqrt( 2π (µi − µj)ᵀΣ⁻¹(µi − µj) ).

As the means diverge from each other, the mass at the decision boundary decreases and the likelihood-ratio gradient increases, thereby increasing the maximum amount of correction.
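For the equal-covariance Gaussian case, the closed form above is straightforward to evaluate. A minimal sketch with illustrative parameters:

```python
import numpy as np

def psi_gaussian(mu_i, mu_j, Sigma):
    # Ψ(P_i, P_j) for equal-covariance, equal-prior multivariate Gaussians:
    # exp(-m2/8) / sqrt(2*pi*m2), with m2 the squared Mahalanobis distance.
    d = np.asarray(mu_i, float) - np.asarray(mu_j, float)
    m2 = float(d @ np.linalg.solve(Sigma, d))
    return np.exp(-m2 / 8.0) / np.sqrt(2.0 * np.pi * m2)
```

As the means diverge, psi_gaussian shrinks, so the maximum possible correction 1/Ψ grows, matching the discussion above.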
Unfortunately, it is quite difficult to evaluate Ψ(Pi, Pj) explicitly without knowing the true probability distributions. In this initial work, we therefore simplify our investigation by computing only the direction, and not the magnitude, of the correction. As will be seen, this assumption yields a likelihood-ratio adjustment that is similar in form to our previous KL-divergence based adjustment. More practically, the assumption significantly simplifies the derivation and still yields reasonable empirical results. Under this assumption, the expression for αij becomes:

αij = (1/2) E_{Pij(x)}[sgn Lij(x)] − (1/2) E_{Pij(x)}[sgn ˆLij(x)]. (8)

The first term on the right of the equality is quite similar to the first difference on the right of the equality in the KL-divergence case (Equation 2). Again, because we have no information about the true class-conditional models, we assume the first term in Equation 8 to be zero (we denote this as assumption B). Comparing this with the corresponding assumption for the KL-divergence case (assumption A, Equations 2 and 3), it can be shown that 1) they are not identical in general, and 2) in the Gaussian case, A implies B but not vice versa, meaning B is weaker than A. Under assumption B, an expression for the resulting αij can be derived using the weak law of large numbers, yielding:

αij ≈ (1/(2(Ni + Nj))) [ Σ_{x∈Ci} sgn ln( ˆP(x|cj)/ ˆP(x|ci) ) − Σ_{x∈Cj} sgn ln( ˆP(x|ci)/ ˆP(x|cj) ) ], (9)

where x ∈ Ci and x ∈ Cj correspond to the samples as they are classified in a previous recognition pass, and Ni and Nj are the number of samples from models ci and cj respectively. One can immediately see the similarity between this equation and the one using KLD [2]. As in [2], since the true classes are unknown, we perform a previous classification pass (e.g., using the original likelihood ratios) to get estimates and use these in Equation 9. Note that there are three potential sources of error in the analysis above. The first is the Ψ(Pi, Pj) factor that we neglected.
The second is assumption B, which (being weaker) can be less severe than the corresponding assumption in the KL-divergence case. The third is the error due to the discontinuity of the sign function. To address the third problem, rather than using the sign function in Equation 9, we can approximate it with a continuous differentiable function, with the goal of balancing the trade-off mentioned above. There are a number of possible sign-function approximations, including the hyperbolic tangent, the arc tangent, and the shifted sigmoid function, the last of which is the most flexible because of its free parameter β.3 Specifically, the sigmoid function has the form f(z) = 1/(1 + e^{−βz}), where the free parameter β (an inverse temperature) determines how well the curve approximates the discontinuous function. Using the sigmoid function, we can approximate the sign function as

sgn z ≈ 2/(1 + e^{−βz}) − 1.

Note that the approximation improves as β increases. Hence,

αij ≈ (1/(2(Ni + Nj))) [ Σ_{x∈ci} ( 1 − 2/(1 + e^{β ˆLji(x)}) ) − Σ_{x∈cj} ( 1 − 2/(1 + e^{β ˆLij(x)}) ) ]. (10)

4 Speech Recognition Evaluation

As in previous work [2], we implemented this technique on NYNEX PHONEBOOK [10, 1], a medium-vocabulary isolated-word speech corpus. Gaussian-mixture hidden Markov models (HMMs) produced probability scores ˆP(x|ci), where here x is a matrix of feature values (one dimension as MFCC features and the other as time frames) and ci is a word identity. The HMMs use four hidden states per phone and 12 Gaussian mixtures per state (standard for this task [10]). This yields approximately 200k free model parameters in total.
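Before turning to the experiments, Equations 9 and 10 can be put directly into code. The first-pass class assignments and the 1-D Gaussian models below are hypothetical stand-ins for a real recognizer's outputs:

```python
import numpy as np

est_means = {"i": 0.5, "j": 3.0}              # hypothetical estimated models

def llr_hat(x, a, b):
    # Estimated log likelihood ratio under unit-variance Gaussian models.
    return -0.5 * (x - est_means[a]) ** 2 + 0.5 * (x - est_means[b]) ** 2

def alpha_sign(Ci, Cj):
    # Equation 9: sign-based bias from first-pass class assignments.
    return (np.sign(llr_hat(Ci, "j", "i")).sum()
            - np.sign(llr_hat(Cj, "i", "j")).sum()) / (2.0 * (len(Ci) + len(Cj)))

def alpha_sigmoid(Ci, Cj, beta):
    # Equation 10: the same estimate with the sign smoothed by a shifted sigmoid.
    s = lambda z: 1.0 - 2.0 / (1.0 + np.exp(beta * z))
    return (s(llr_hat(Ci, "j", "i")).sum()
            - s(llr_hat(Cj, "i", "j")).sum()) / (2.0 * (len(Ci) + len(Cj)))
```

As β grows, alpha_sigmoid converges to alpha_sign, matching the observation that a larger β better approximates the sign function.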
In our experiments, the steps are: 1) calculate ˆP(x|ci) using full inference (no Viterbi approximation) for each test case and for each word; 2) classify the test examples using just the log likelihood ratios ˆLij = ln( ˆP(x|ci)/ ˆP(x|cj)); 3) using the hypothesized (and error-full) class labels, calculate the test-set bias term using one of the techniques described above; and 4) classify again using the augmented likelihood ratio Sij = ˆLij + αij. Since the procedure is no longer transitive, we run 1000 random tournament-style games (as in [2]) and choose the most frequent winner as the ultimate winner.

Table 1: Word error rates (%) on speech data with various sign approximations.

SIZE  ORIG  SIGN  TANH  ATAN  SIG(.1)  SIG(1)  SIG(10)  SIG(100)  SIG(200)  SIG(400)  KLD[2]
75    2.34  1.76  1.76  1.76  1.82     1.76    1.56     1.57      1.33      1.34      1.91
150   3.31  2.83  2.84  2.83  2.65     2.83    2.65     2.47      2.68      2.43      2.72
300   5.23  4.75  4.75  4.70  4.74     4.75    4.29     3.95      4.34      4.34      4.29
600   7.39  6.64  6.61  6.60  6.66     6.64    6.04     5.70      6.74      6.74      5.91

The results are shown in Table 1, where the first column gives the test-set vocabulary size (number of different classes). The second column shows the baseline word error rates (WERs) using only ˆLij. The remaining columns are the bias-corrected results with various sign approximations, namely sign (Equation 9), hyperbolic and arc tangent, and the shifted sigmoid with various β values (thus allowing us to investigate the trade-off mentioned in Section 3.1). From the results we can see that a larger-β sigmoid is usually better, with overall performance increasing with β. This is because with large β the shifted sigmoid curve better approximates the sign function. For β = 100, the results are even better than our previous KL-divergence (KLD) results reported in [2] (right-most column in the table). It can also be seen that when β is greater than 100, the WERs are not consistently better.
This indicates that the inaccuracies due to the Taylor error term start adversely affecting the results at around β = 100.

3 Note that the other soft sign functions can also be defined to utilize a β smoothness parameter.

5 UCI Dataset Evaluation

Table 2: Error rates in % (and std where applicable) on the UCI data. Columns 2-5 use the NN baseline; columns 6-9 use the NB baseline.

data           NN-baseline  NN-KLD      NN-sign     NN-sig(10)  NB-baseline  NB-KLD      NB-sign     NB-sig(10)
australian     16.75(3.51)  16.33(3.66) 16.17(3.63) 16.32(3.75) 14.89(1.97)  14.29(2.45) 14.76(2.45) 14.76(2.37)
breast         2.94(1.16)   2.62(1.15)  2.63(1.15)  2.65(1.15)  2.45(1.93)   2.29(2.02)  2.13(2.07)  1.86(2.07)
chess          0.56         0.46        0.47        0.37        12.66        12.76       13.04       12.85
cleve          25.67(3.40)  24.35(2.82) 24.01(2.27) 24.01(3.94) 17.91(2.37)  15.55(1.81) 15.22(1.82) 16.22(2.61)
corral         2.44(1.26)   1.82(1.16)  1.19(1.16)  1.19(1.16)  12.77(3.66)  9.57(2.12)  9.57(2.62)  12.05(4.80)
crx            17.41(3.18)  17.25(2.67) 17.11(2.91) 17.26(3.00) 15.05(3.67)  14.02(3.91) 13.06(3.67) 15.05(3.67)
diabetes       28.04(3.08)  26.88(3.56) 27.41(4.13) 27.18(1.98) 25.71(2.13)  24.79(2.68) 24.24(3.49) 24.66(2.59)
flare          20.98(2.26)  19.37(2.16) 18.29(2.25) 18.46(1.85) 20.24(2.31)  19.55(2.63) 18.70(1.87) 16.64(2.34)
german         29.96(3.49)  28.54(3.45) 28.82(2.53) 28.25(3.71) 24.58(2.57)  26.55(1.88) 24.79(2.30) 24.25(2.50)
glass          42.16(2.06)  39.63(1.76) 41.92(1.92) 40.95(2.00) 44.12(7.96)  42.24(8.64) 42.06(9.22) 42.28(7.93)
glass2         28.82(2.57)  26.23(2.61) 26.95(2.65) 26.23(2.57) 22.36(9.01)  21.15(9.25) 21.77(9.25) 22.36(9.01)
heart          21.83(3.77)  21.48(4.26) 21.19(4.52) 21.09(4.23) 15.50(6.01)  15.11(5.34) 15.11(5.72) 15.11(6.01)
hepatitis      19.46(7.10)  16.10(6.13) 17.16(6.92) 15.82(6.94) 16.18(5.92)  18.29(5.96) 18.04(5.92) 15.45(4.56)
iris           8.13(1.60)   6.84(1.44)  6.26(1.47)  6.84(1.44)  6.99(1.78)   6.99(1.78)  6.99(1.78)  6.99(1.78)
letter         38.66        34.66       37.10       37.00       30.68        30.88       30.48       30.64
lymphography   24.46(4.86)  23.81(4.57) 23.29(4.52) 23.29(4.86) 16.62(8.64)  18.27(9.25) 17.34(8.91) 15.31(8.91)
mofn-3-7-10    0            0           0           0           8.59         4.57        1.56        3.42
pima           25.96(2.01)  25.22(2.95) 24.82(2.87) 25.96(2.19) 25.71(2.13)  24.79(2.68) 24.24(3.49) 24.66(2.59)
satimage       15.80        14.25       14.40       14.25       19.15        19.35       19.25       18.70
segment        7.53         7.40        7.27        7.53        12.21        11.73       11.82       12.21
shuttle-small  0.87         0.77        0.87        0.77        1.40         1.41        1.50        1.50
soybean-large  8.47(1.31)   8.29(1.39)  7.18(1.08)  8.47(1.31)  8.71(2.70)   9.13(2.60)  8.35(2.65)  8.37(2.70)
vehicle        28.39(4.68)  28.15(4.62) 27.70(4.44) 28.39(4.75) 38.92(4.47)  38.59(5.05) 38.79(4.46) 37.84(4.43)
vote           7.40(2.22)   6.94(1.77)  6.94(1.77)  7.17(2.05)  9.91(1.72)   9.68(2.49)  9.68(1.72)  9.68(1.72)
waveform-21    26.21        26.17       26.12       26.14       21.45        21.11       20.15       21.40

In order to show that our methodology is general beyond isolated-word speech recognition, we also evaluated this technique on the entire UCI machine learning repository [9]. In our experiments, baseline classifiers are built using one of: 1) the Matlab neural network (NN) toolbox, with feed-forward 3-layer perceptrons having different numbers of hidden units and training epochs (optimized over a large set to achieve the best possible baseline for each test case) and trained using the Levenberg-Marquardt algorithm [11]; or 2) the MLC++ toolbox, to produce naïve Bayes (NB) classifiers that have been smoothed using Dirichlet priors. In each case (i.e., NN or NB), we augmented the resulting likelihood ratios with bias correction terms, thereby evaluating our technique using quite different forms of baseline classifiers. Unlike the above, with these data sets we have so far tried only one random tournament game to decide the winner. For the NN results, hidden units use a logistic sigmoid, and output units use a soft-max function, making the network outputs interpretable as posterior probabilities P(c|x), where x is the sample and c is the class.
While our bias correction described above is in terms of likelihood ratios Lij(x), posteriors can be used as well if they are divided by the priors, giving the relation P(c|x)/P(c) = P(x|c)/p(x) (i.e., scaled likelihoods), which produces the standard Lij(x) values when used in a likelihood ratio. As was done in [5], for the small data sets the experimental results use 5-fold cross-validation with randomly selected chunks; results show mean and standard deviation (std) in parentheses. For the larger data sets, we use the same held-out training/test sets as in [5] (so std is not shown). The experimental procedure is similar to that described in Section 4, except that scaled likelihoods are used for the NN baselines. Again, first-pass error-full test-set hypothesized answers are used to compute the bias corrections. Table 2 shows our results for both the NN (columns 2-5) and NB (columns 6-9) baseline classifiers. Within each baseline group, the first column shows the baseline error rate (with the standard deviation over the 5 folds when the data set is small). The second column shows results using KL-divergence based bias corrections; these are the first published KLD results on the UCI data. The third column shows results with sign-based correction (Equation 9), and the fourth column shows the sigmoid (β = 10) case (Equation 10). While not the point of this paper, one immediately sees that the NB baseline results are often better than the NN baseline results (15 out of 25 times). Using the NN as a baseline, the table shows that the KLD results are better than the baseline 24 times (out of 25). Also, the sign correction is better than the baseline 23 out of 25 times, and the sigmoid(10) results are better 20 times. Also (not shown in the table), we found that β = 10 is slightly better than β = 1 but there is no advantage to using β = 100.
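Returning to the posterior-to-scaled-likelihood conversion used for the NN baselines: dividing posteriors by priors makes the unknown p(x) cancel in the ratio, leaving the usual Lij(x). A minimal sketch with made-up numbers:

```python
import math

def posterior_llr(post, priors, i, j):
    # ln[(P(ci|x)/P(ci)) / (P(cj|x)/P(cj))]; the common p(x) factor cancels,
    # leaving L_ij(x) = ln(P(x|ci)/P(x|cj)).
    return math.log(post[i] / priors[i]) - math.log(post[j] / priors[j])

post = {"a": 0.7, "b": 0.2, "c": 0.1}      # hypothetical softmax outputs
priors = {"a": 0.5, "b": 0.25, "c": 0.25}  # hypothetical class priors
```

Note the result is antisymmetric in (i, j), as a log likelihood ratio must be.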
These results therefore show that the NN KLD correction typically beats the sign and sigmoid corrections, possibly owing to the error in the Taylor approximation. Using the NB classifier as the baseline, however, shows not only improved baseline results in general but also that the sigmoid(10) correction improves more often. Specifically, the KLD results are better than the baseline 16 times, sign is better than the baseline 18 times, and sigmoid(10) beats the baseline 19 times, suggesting that sigmoid(10) typically wins over the KLD case.

6 Discussion

We have introduced a new necessary intransitive likelihood-ratio classifier. This was done by using sign-based corrections to likelihood ratios and by using continuous differentiable approximations of the sign function in order to be able to vary the inherent trade-off between sign-function approximation accuracy and Taylor error. We have applied these techniques to both a speech recognition corpus and the UCI data sets, as well as applying previous KL-divergence based corrections to the latter data. Results on the UCI data sets confirm that our techniques generalize reasonably to data sets other than speech recognition. This suggests that the framework could be applied to other machine learning tasks.

This work was supported in part by NSF grants IIS-0093430 and IIS-0121396.

References

[1] Jeff Bilmes. Buried Markov models for speech recognition. In IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, March 1999.
[2] Jeff Bilmes, Gang Ji, and M. Meilă. Intransitive likelihood-ratio classifiers. In Neural Information Processing Systems: Natural and Synthetic, December 2001.
[3] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley and Sons, Inc., 1991.
[4] Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. John Wiley and Sons, second edition, 2001.
[5] Nir Friedman, Dan Geiger, and Moises Goldszmidt. Bayesian network classifiers. Machine Learning, 29(2-3):131–163, 1997.
[6] D. S. Jones. Generalised Functions. McGraw-Hill Publishing Company Limited, 1966.
[7] J. Kevorkian. Partial Differential Equations: Analytical Solution Techniques. New York: Springer, 2000.
[8] R. Duncan Luce and Howard Raiffa. Games and Decisions: Introduction and Critical Survey. Dover, 1957.
[9] P. M. Murphy and D. W. Aha. UCI Repository of Machine Learning Databases, 1995.
[10] J. Pitrelli, C. Fong, S. H. Wong, J. R. Spitz, and H. C. Leung. PhoneBook: a phonetically-rich isolated-word telephone-speech database. In IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 1995.
[11] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge, England, second edition, 1992.
[12] M. M. Rao. Measure Theory and Integration. John Wiley and Sons, 1987.
[13] P. D. Straffin. Game Theory and Strategy. The Mathematical Association of America, 1993.
Simplicial Mixtures of Markov Chains: Distributed Modelling of Dynamic User Profiles

Mark Girolami
Department of Computing Science
University of Glasgow
Glasgow, UK
girolami@dcs.gla.ac.uk

Ata Kabán
School of Computer Science
University of Birmingham
Birmingham, UK
a.kaban@cs.bham.ac.uk

Abstract

To provide a compact generative representation of the sequential activity of a number of individuals within a group, there is a trade-off between the definition of individual-specific and global models. This paper proposes a linear-time distributed model for finite-state symbolic sequences representing traces of individual user activity, by making the assumption that heterogeneous user behavior may be 'explained' by a relatively small number of common, structurally simple behavioral patterns which may interleave randomly in a user-specific proportion. The results of an empirical study on three different sources of user traces indicate that this modelling approach provides an efficient representation scheme, reflected by improved prediction performance as well as low-complexity and intuitively interpretable representations.

1 Introduction

The now commonplace ability to accurately and inexpensively log the activity of individuals in a digital environment makes available a variety of traces of user activity, and with it the necessity to develop efficient representations, or profiles, of individuals. Most often, such recordings take the form of streams of discrete symbols ordered in time. The modelling of time-dependent sequences of discrete symbols employing n-th order Markov chains has been extensively studied in a number of domains. The representation provided by such models is global in the sense that one generating process is assumed to underlie all observed sequences. To capture the possibly heterogeneous nature of the observed sequences, a model with a number of differing generating processes needs to be considered.
Indeed, the notion of a heterogeneous population, characterized for example by occupational mobility and consumer brand preferences, has been captured in the Mover-Stayer model [3]. This model is a discrete-time stochastic process that is a two-component mixture of first-order Markov chains, one of which is degenerate and possesses an identity transition matrix characterizing the stayers in the population. The original notion of a two-component mixture of Markov chains has recently been extended to the general form of a mixture model of Markov chains in [2]. Whilst the main motivation there was the visualization of the class structure inherent in the browsing patterns of visitors to a commercial website, each class of users being characterized by their global behavior, such mixture models are not appropriate for identifying the shared behavioral patterns which form the basis of multiple relationships between users and groups of users, and which may yield a more realistic model of the population. The purpose of this paper is to develop a dynamic user model for individuals within a group that explicitly captures the assumption of a common set of behavioral patterns: these patterns can be estimated from all observed users, along with each user's specific proportion of participation in them, and together they form the basis of individual profiles within the group. This is also a computationally attractive model, as simple structural characteristics may be assumed at the generative level, while allowing them to interleave randomly can account for more complex individual behavior. The resulting model is thus a distributed dynamic model which benefits from recent technical developments in distributed, parts-based modelling of static vectorial data [7, 9, 5, 1, 8], with various applications including image decomposition, document modelling, information retrieval, and collaborative filtering.
Consistent generative semantics similar to the recently introduced latent Dirichlet allocation (LDA) [1] will be adopted and, by analogy with [8], the resulting model will be referred to as a simplicial mixture.

2 Simplicial Mixtures of Markov Chains

Assume that a sequence of L symbols sL, sL−1, · · · , s0, denoted by s, can be drawn from a dictionary S by a process k, which has initial state probability P1(k) and |S|^{m+1} state transition probabilities denoted by T(sm, · · · , s1 → s0 | k). The number of times that the symbol s0 follows the state defined by the m-tuple of symbols sm, · · · , s1 within the n-th sequence is denoted r^{sm,···,s1→s0}_n, and so the probability of the sequence of symbols under the k'th m-th order Markov process is

P(s|k) = P1(k) Π_{sm=1}^{|S|} · · · Π_{s0=1}^{|S|} T(sm, · · · , s1 → s0 | k)^{r^{sm,···,s1→s0}}.

To introduce a more compact notation, we represent the elements of the state transition matrix for the k'th Markov process by Tm···0,k and the counts r^{sm,···,s1→s0} within the n'th observed sequence by r^{m···0}_n. In addition, we employ Start and Stop states in each symbol sequence sn and incorporate the initial state distribution of the Start state as the transition probabilities from this state within the state transition matrix Tk. We denote the set of all state transition matrices {T1, · · · , Tk, · · · , TK} by T. Suppose that we are given a set of symbolic trajectories {sn}n=1:N over a common finite state space, each having length Ln. As opposed, and somewhat complementary, to cluster models for trajectories, which try to model inter-sequence heterogeneities, our intuition is that sequences over a common finite state space, provided they are sufficiently long and possibly non-stationary, could have several randomly interleaved generator processes, some of which might be common to several sequences. To account for this idea, we will adopt a modelling strategy similar to LDA.
The complete generative semantics of LDA allows us to describe the process of sequence generation, where the mixing components λ = [λ1, · · · , λk, · · · , λK] are K-dimensional Dirichlet random variables and so are drawn from the (K − 1)-dimensional simplex defined by the Dirichlet distribution D(λ|α) with parameters α. These are then combined with the individual state-transition probabilities Tk, which are model parameters to be estimated, and yield the symbol transition probabilities Tm···0 = Σ_{k=1}^{K} Tm···0,k λk. The overall probability of a sequence sn under such a mixture, which we shall now refer to as a simplicial mixture [8] and denote P(sn|T, α), is

P(sn|T, α) = ∫_△ P(sn|T, λ) D(λ|α) dλ = ∫_△ dλ D(λ|α) Π_{sm=1}^{|S|} · · · Π_{s0=1}^{|S|} ( Σ_{k=1}^{K} Tm···0,k λk )^{r^{m···0}_n}. (1)

Each sequence will have its own expectation under the Dirichlet mixing coefficients, and so the ability of such a representation to model intra-sequence heterogeneity emerges naturally. The following subsections briefly present the details of the identification of this model, which also highlights the close relationship between two existing related models, probabilistic latent semantic analysis (PLSA) [5] and LDA [1], as instances of the same theoretical model differing only in the estimation procedure adopted [4].

2.1 Parameter Estimation and Inference

Exact inference within the LDA framework is not possible [1]; however, the likelihood can be lower-bounded by introducing a sequence-specific parameterised variational posterior Qn(λ) whose parameters depend on n:

log P(sn|T, α) ≥ E_{Qn(λ)}[ log { P(sn|T, λ) D(λ|α) / Qn(λ) } ], (2)

where E_{Qn(λ)} denotes expectation with respect to Qn(λ).
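Concretely, the likelihood P(sn|T, λ) appearing inside the bound mixes the component transition matrices before multiplying over observed transitions. A first-order sketch with hypothetical matrices:

```python
import numpy as np

# Two hypothetical first-order transition matrices over 3 symbols, shape (K, S, S).
T = np.array([[[0.8, 0.1, 0.1],
               [0.1, 0.8, 0.1],
               [0.1, 0.1, 0.8]],
              [[0.1, 0.45, 0.45],
               [0.45, 0.1, 0.45],
               [0.45, 0.45, 0.1]]])

def log_prob(seq, lam):
    # log P(s | T, lam): each transition uses  sum_k lam_k * T_k[s_1, s_0].
    Tmix = np.tensordot(lam, T, axes=1)          # mixed (S, S) transition matrix
    return float(sum(np.log(Tmix[a, b]) for a, b in zip(seq[:-1], seq[1:])))

seq = [0, 0, 0, 1, 1]
```

Setting lam to a vertex of the simplex recovers a single component chain; interior points blend the components' transition structure.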
The bound can be defined using the Maximum a Posteriori (MAP) estimator, such that Qn(λ) = δ(λ − λMAPn), in which case (2) equals log P(sn|T, λMAPn) + log D(λMAPn|α) + Hδ, where Hδ denotes the entropy of the delta function around λMAPn (which can be discarded in this setting, as it does not depend on the model parameters, although it amounts to minus infinity). Forming a Lagrangian from the above to enforce the constraint that λMAP is a sample point from a Dirichlet variable, then taking derivatives with respect to λMAPk, a convergent series of updates λ^t_{kn} is obtained, where the superscript denotes the t'th iteration. As in [7], for each observed sequence in the sample a MAP value for the variable λ is iteratively estimated by the following multiplicative updates:

˜λkn = (αk − 1) + λ^t_{kn} Σ_{sm=1}^{|S|} · · · Σ_{s0=1}^{|S|} r^{m···0}_n Tm···0,k / ( Σ_{l=1}^{K} Tm···0,l λ^t_{ln} );   λ^{t+1}_{kn} = ˜λkn / ( Ln + Σ_k (αk − 1) ), (3)

where Ln = Σ_{sm···s0} r^{m···0}_n is the length of the sequence sn. Once the MAP values λMAPn for each sn are obtained, a similar multiplicative iteration for the transition probabilities can be obtained:

˜Tm···0,k = T^t_{m···0,k} Σ_{n=1}^{N} r^{m···0}_n λMAPkn / ( Σ_{l=1}^{K} T^t_{m···0,l} λMAPln );   T^{t+1}_{m···0,k} = ˜Tm···0,k / Σ_{s′0=1}^{|S|} ˜Tm···0′,k. (4)

The final parameter is that of the prior Dirichlet distribution; maximum likelihood estimation yields the estimated distribution parameters α given the λMAPn [6, 1]. Note that both (3) and (4) require an elementwise matrix multiplication and division, so these iterations will scale linearly with the number of non-zero state-transition counts. It is interesting to note that the MAP estimator under a uniform Dirichlet distribution exactly recovers the aspect mixture model of [5] as a special case of the MAP-estimated LDA model.
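A first-order sketch of the multiplicative MAP update for λ (Equation 3), with random counts and transition matrices standing in for real data:

```python
import numpy as np

rng = np.random.default_rng(0)
K, S = 2, 3
T = rng.dirichlet(np.ones(S), size=(K, S))         # (K, S, S), rows sum to 1
r = rng.integers(0, 5, size=(S, S)).astype(float)  # transition counts, one sequence
alpha = np.full(K, 2.0)                            # Dirichlet hyperparameters

lam = np.full(K, 1.0 / K)                          # initial mixing proportions
for _ in range(50):
    # Multiplicative MAP update (Equation 3), first-order case.
    denom = np.einsum("k,kab->ab", lam, T)         # mixed transition probabilities
    lam_tilde = (alpha - 1.0) + lam * np.einsum("ab,kab->k", r / denom, T)
    lam = lam_tilde / (r.sum() + (alpha - 1.0).sum())
```

By construction each update keeps lam on the simplex: summing the numerator over k gives Σ(αk − 1) + Ln, which exactly matches the normaliser.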
2.1.1 Variational Parameter Estimation and Inference

While optimal for analyzing an existing data set, MAP estimators are notoriously prone to overfitting, especially where there is a paucity of available data [10], and so the variational Bayes (VB) approach detailed in [1] can be adopted by considering $Q_n(\lambda) = D(\lambda|\gamma_n)$, where $\gamma_n$ is a sequence-specific variational free parameter vector. The bound (2) can be further lower-bounded by noting that

$$\log P(s_n|T, \lambda) \geq \sum_{s_m=1}^{|S|} \cdots \sum_{s_0=1}^{|S|} \sum_{k=1}^{K} r_n^{m \cdots 0} Q_{m \cdots 0,n}(k) \log \left\{ \frac{\lambda_k T_{m \cdots 0,k}}{Q_{m \cdots 0,n}(k)} \right\} \qquad (5)$$

where $\sum_k Q_{m \cdots 0,n}(k) = 1$ and $Q_{m \cdots 0,n}(k) \geq 0$ are additional variational parameters. Alternatively, $Q_{m \cdots 0,n}(\cdot)$ can also be understood as a variational distribution on a discrete hidden variable with $K$ possible outcomes that selects which transition matrix is active at each time step of the generative process. Replacing (5) in (2), expanding and evaluating $\mathbb{E}_{D(\lambda|\gamma_n)}[\log \lambda_k] = \psi(\gamma_k) - \psi(\sum_{k'} \gamma_{k'})$, where $\psi$ denotes the digamma function, then solving for $Q_{m \cdots 0,n}(k)$ and $\gamma_{kn}$ and finally combining, yields the following multiplicative iterative update for the sequence-specific variational free parameter $\gamma_n$

$$\gamma_{kn}^{t+1} = \alpha_k + \exp\{\psi(\gamma_{kn}^t)\} \sum_{s_m=1}^{|S|} \cdots \sum_{s_0=1}^{|S|} r_n^{m \cdots 0} \frac{T_{m \cdots 0,k}}{\sum_{k'=1}^{K} T_{m \cdots 0,k'} \exp\{\psi(\gamma_{k'n}^t)\}} \qquad (6)$$

Solving for the transition probabilities and combining with the fixed-point solutions for each $Q_{m \cdots 0,n}(k)$ yields the following

$$\tilde{T}_{m \cdots 0,k} = T_{m \cdots 0,k}^t \sum_{n=1}^{N} r_n^{m \cdots 0} \frac{\exp\{\psi(\gamma_{kn}^t)\}}{\sum_{k'=1}^{K} T_{m \cdots 0,k'}^t \exp\{\psi(\gamma_{k'n}^t)\}}\,; \qquad T_{m \cdots 0,k}^{t+1} = \frac{\tilde{T}_{m \cdots 0,k}}{\sum_{s_0'} \tilde{T}_{m \cdots 0',k}} \qquad (7)$$

As before, the parameters $\alpha$ of the prior Dirichlet distribution, given the variational parameters $\gamma_n$, are estimated using standard methods [6, 1].

2.2 Prediction with Simplicial Mixtures

The predictive probability of observing symbol $s_{next}$ given a sequence of $L$ symbols $s_n = \{s_{L_n}, \cdots, s_1\}$ is given as $P(s_{next}|s_n) = \mathbb{E}_{P(\lambda|s_n)}\{P(s_{next}|s_m \cdots s_1, \lambda)\} \approx \sum_{k=1}^{K} T(s_{next}|s_m \cdots s_1, k)\, \mathbb{E}_{Q_n(\lambda)}\{\lambda_k\}$.
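The VB update (6) and the predictive rule above can be sketched similarly for first-order chains; again, the function names and array shapes are our own illustrative conventions:

```python
import numpy as np
from scipy.special import digamma

def vb_update_gamma(gamma, r_n, T_k, alpha):
    """One update of the sequence-specific variational Dirichlet
    parameters, eq. (6).  gamma, alpha: (K,); r_n: (S, S) transition
    counts; T_k: (S, S, K) component transition probabilities."""
    e = np.exp(digamma(gamma))                         # exp{psi(gamma_k)}
    denom = T_k @ e                                    # (S, S)
    grad = np.einsum('ij,ijk->k', r_n / np.maximum(denom, 1e-12), T_k)
    return alpha + e * grad

def predict_next(gamma, T_k, last_state):
    """Predictive distribution over the next symbol: component rows are
    mixed by E[lambda_k] = gamma_k / sum(gamma) (Section 2.2)."""
    w = gamma / gamma.sum()
    return T_k[last_state] @ w                         # (S,) distribution
```

A useful invariant for checking an implementation: after one update of (6), the $\gamma_{kn}$ sum to $\sum_k \alpha_k$ plus the sequence length $L_n$, since the responsibilities over components sum to one at every observed transition.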
It should be noted that while $m$-th order Markov chains form the basis of the representation, the resulting simplicial mixture is not $m$-th order Markov with any global transition model. Rather, it approximates the individual $m$-th order models while keeping the generative parameter set compact: the $m$-th order information of each individual's past behaviour is embodied in the individual-specific latent variable estimate. In a mixture model, on the other hand, one component is responsible for sequence generation, so within a cluster the representation is still globally $m$-th order. Employing the MAP approximation for the Dirichlet distribution, $\mathbb{E}_{Q_n(\lambda)}\{\lambda_k\} = \mathbb{E}_{\delta(\lambda - \lambda_n^{MAP})}\{\lambda_k\} = \lambda_{kn}^{MAP}$, where $\lambda_{kn}^{MAP}$ is the $k$-th dimension of $\lambda_n^{MAP}$. Employing the variational Dirichlet approximation, $\mathbb{E}_{Q_n(\lambda)}\{\lambda_k\} = \mathbb{E}_{D(\lambda|\gamma_n)}\{\lambda_k\} = \gamma_{kn}/\sum_{l=1}^{K} \gamma_{ln}$. Therefore, given a new sequence $s_{new}$, the symbol $s_{next}$ most likely to be suggested by the model as a continuation of the sequence is the maximum argument of $P(s_{next}|s_n)$.

3 Distributed Modelling of Dynamic Profiles

3.1 Datasets

3.1.1 Telephone Usage Modelling

The ability to model the usage of a telephone service is of importance at a number of levels, e.g. to obtain a predictive model of customer-specific activity and service usage for the purposes of service provision planning, resource management of switching capacity, and identification of fraudulent usage of services. A representative description can be based on the distribution of the destination numbers dialled and connected by the customer, in which case a multinomial distribution over the dialling codes can be employed. One method of encoding the destination numbers dialled by a customer is to capture the geographic location of the destination, or the mobile service provider if the call is not land-based. This is useful in determining the potential demand placed on the telecommunication switches which route traffic from various geographical regions on the service provider's network.
Two weeks of transactions from a UK telecommunications operator were logged during weekdays, amounting to 36,492,082 and 45,350,654 transactions in each week respectively. All transactions made by commercial customers in the Glasgow region of the UK were considered in this study. This amounts to 1,172,578 transactions from 12,202 high-usage customers in the first week considered and 1,753,304 transactions made in the following week. The mapping from dialling number to geographic region or mobile operator was encoded with 87 symbols, amounting to a possible 7,569 symbol transitions. Each customer's activity is defined by a sequence of symbols describing the calls made over each period considered, and these are employed to encode activity in a customer-specific generative representation.

3.1.2 Web Page Browsing

The second data set used in this study is a selected subset of the msnbc.com user navigation collection employed in [2]. Sequences of users who visited at least 9 of the overall 17 page categories (frontpage, news, tech, local, opinion, on-air, misc, weather, msn-news, health, living, business, msn-sports, sports, summary, bbs, travel) have been retained. This selection criterion is motivated by the observation that there would be little scope in trying to model interleaved dynamic behaviour in sequences which are too short to reveal any intra-sequence heterogeneity. The resulting data set, referred to as WEB, totals 119,667 page requests corresponding to 1,480 web browsing sessions.

Figure 1: Left: percentage of incorrect predictions against the number of model factors; right: predictive perplexity of each model against model order for the PHONE dataset.
Solid straight line: global first-order MC; dash: MAP-estimated simplicial mixture; solid line: VB-estimated simplicial mixture; dash-dot: mixture model.

3.2 Results

In each experiment the objective assessment of model performance is evaluated by the predictive perplexity, $\exp\{-\frac{1}{N_{test}} \sum_{m=1}^{N_{test}} \log P(s_{next}|s_m)\}$. In addition, the predictive accuracy of all models is measured under a 0-1 loss: given a number of previously unobserved truncated sequences, the number of times the model correctly predicts the symbol which follows in the sequence is counted. In all mixture models naive random initialization of the parameters was employed, and parameter estimation was halted when the in-sample likelihood did not improve by more than 0.001%; no annealing or early stopping was utilized, and fifteen randomly initialized parameter estimation runs were performed for each model. The number of mixture components for the models ranged from 2 up to 200. On the PHONE data set the parameters of a global first-order Markov chain (bigram), mixtures of Markov chains [2], and simplicial mixtures of Markov chains (using both the MAP and VB estimation procedures) are estimated using the first week of customer transactions, and the predictive capabilities of the models are assessed on the transactions from the following week.

Figure 2: Left: distribution of entropy rates for the transition matrices of 20-factor mixture and simplicial mixture (VB) models. Right: the expected value of the Dirichlet variable under the variational approximation for one customer, indicating the levels of participation in factor-specific behaviours.
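The predictive perplexity above is a one-liner. As a sanity check, a model assigning uniform probability $1/V$ to every held-out symbol scores exactly $V$ (87 for the PHONE alphabet), so perplexities near 7-11 in Figure 1 represent a substantial gain over an uninformed predictor:

```python
import numpy as np

def predictive_perplexity(probs):
    """probs[i] is the probability the model assigned to the i-th
    held-out next symbol; lower perplexity is better."""
    return float(np.exp(-np.mean(np.log(probs))))
```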
The results are summarized in Figure 1. From the predictive perplexity measures it is clear that the simplicial representation provides a statistically (tested at the 5% level using a Wilcoxon rank-sum test) and practically significant reduction in perplexity over the global and mixture models. This is also reflected in the levels of prediction error under each model; however, the mixture models tend to perform slightly worse than the global model. As expected, the MAP-estimated simplicial model performs slightly worse than that obtained using VB [1]. This also provides additional insight into why LDA models improve upon PLSA, as they are in fact the same model using different approximations to the likelihood; refer to [10] for an illustrative discussion of the weaknesses of MAP estimators. As a comparison to models of a different structure, hidden Markov models with a range of hidden states were also tested on this data set; the best results were obtained for a ten-state model, which achieved a predictive perplexity of (mean ± standard deviation) 11.119 ± 0.624 and a fraction prediction error of 0.674 ± 0.959, considerably poorer than that obtained by the models considered here. In addition to the predictive capability of a simplicial representation of a customer's activity, the cost of encoding such a representation can be assessed by measuring the entropy rate of each of the constituent transition matrices, which act as a basis in the representation of the individual-specific generative process. The left-hand plot of Figure 2 shows the distribution of the entropy rates for the transition probabilities in twenty-factor simplicial and mixture models; the results are obtained from fifty randomly initialized estimation procedures. The entropy rates for the simplicial mixture are significantly lower than those of a mixture model, indicating that the basis of each representation describes a number of simpler processes.
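The rank-sum test used for these comparisons is available off the shelf. A minimal usage sketch on synthetic per-run perplexities (the numbers below are illustrative only, not the paper's results):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# hypothetical per-run perplexities for two models (illustrative values)
perp_a = rng.normal(8.0, 0.3, size=15)   # e.g. simplicial mixture runs
perp_b = rng.normal(9.5, 0.3, size=15)   # e.g. plain mixture runs
stat, p = ranksums(perp_a, perp_b)
significant = p < 0.05                   # tested at the 5% level, as in the text
```

Because the test is applied to the fifteen independently initialized runs per model, it compares the distributions of run outcomes rather than a single pair of scores.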
The final experiment considers the WEB data set. The results of ten-fold cross-validated predictive perplexities again show a statistically significant improvement obtained with the VB-estimated simplicial mixture (again tested using the Wilcoxon rank-sum test at the 5% level). The results are summarized in Figure 3. Five of the estimated transition factors of a twenty-factor model are shown in Figure 4, demonstrating once more that the proposed model creates a low-entropy and easily interpretable dynamic factorial representation. The numbers on the axes of these charts correspond to the 17 page categories enumerated earlier, and the average strength of each of these factors amongst the full set of twenty, computed as $\frac{1}{N} \sum_{n=1}^{N} \mathbb{E}_{D(\lambda|\gamma_n)}\{\lambda_k\}$, is also given above each chart. We can see that one manifested behavioural feature is a keen interest in visiting pages about 'news' along with a quite dynamic transition model (left-hand chart), which characterizes around 12% of the behavioural patterns of the entire user population under consideration, while static state-repetition (second chart) or an almost exclusive interest in viewing the homepage (last chart) also represent relatively strong common characteristics of browsing behaviour. The distribution of the entropy rates of the full set of these twenty basis transitions, in comparison to those obtained from the mixture model, is given in the right-hand plot of Figure 3. Clearly, the coding efficiency of a simplicial mixture representation is significantly (statistically tested) superior. Note also that these basis transitions embody correlated transitions (transitions which appear in similar dynamical contexts and so have similar functionality), as can be seen from the multiplicative nature of the equations used for identifying the model. It is not surprising, then, that state repetitions or transitions which express focused interest in one of the topic categories appear together on distinct factors.
We can also see a joint interest in msn-news and msn-sports present together on the 4th chart of Figure 4; indeed, as the prefix of these page categories also indicates, these are related page categories.

Figure 3: Left: the predictive perplexity for the WEB data (straight line: global first-order Markov chain; dash-dot: mixture of Markov chains; dotted line: simplicial mixture estimated by MAP; solid line: simplicial mixture estimated by VB). Right: the distribution of entropy rates.

4 Conclusions

This paper has presented a linear-time method to model finite-state sequences of discrete symbols which may arise from user or customer activity traces. The main feature of the proposed approach has been the assumption that heterogeneous user behaviour may be 'explained' by the interleaved action of some structurally simple common generator processes.

Figure 4: State transition matrices of selected factors from a 20-factor run on WEB.

An empirical study has been conducted on two real-world collections of user activity, which has demonstrated this to be an efficient representation, as revealed by objective measures of prediction performance, low entropy rates, and the interpretable representations of the user profiles provided.

Acknowledgements

Mark Girolami is part of the DETECTOR project funded by the Department of Trade and Industry (DTI) Management of Information (LINK) Programme and the Engineering & Physical Sciences Research Council (EPSRC) grant GR/R55184.

References

[1] D. M. Blei, A. Y. Ng & M. I. Jordan, Latent Dirichlet Allocation, Journal of Machine Learning Research, 3(5):993–1022, 2003.

[2] I. Cadez, D. Heckerman, C.
Meek, P. Smyth & S. White, Model-based clustering and visualisation of navigation patterns on a web site, Journal of Data Mining and Knowledge Discovery, in press.

[3] H. Frydman, Maximum likelihood estimation in the mover-stayer model, Journal of the American Statistical Association, 79, 632–638, 1984.

[4] M. Girolami & A. Kabán, On an equivalence between PLSI and LDA, Proc. 26th Annual International ACM SIGIR Conference, 2003, pp. 433–434.

[5] T. Hofmann, Unsupervised learning by probabilistic latent semantic analysis, Machine Learning, 42, 177–196, 2001.

[6] G. Ronning, Maximum likelihood estimation of Dirichlet distributions, Journal of Statistical Computation and Simulation, 32:4, 215–221, 1989.

[7] D. Lee & H. Sebastian Seung, Algorithms for Non-negative Matrix Factorization, Advances in Neural Information Processing Systems 13, eds. T. K. Leen, T. G. Dietterich & V. Tresp, 556–562, MIT Press, 2001.

[8] T. Minka & J. Lafferty, Expectation-propagation for the generative aspect model, Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, 2002.

[9] D. A. Ross & R. S. Zemel, Multiple-cause vector quantization, Advances in Neural Information Processing Systems 15, 2003.

[10] H. Lappalainen & J. W. Miskin, Ensemble Learning, in M. Girolami, ed., Advances in Independent Component Analysis, 75–92, Springer-Verlag, 2000.
2003
Bounded Finite State Controllers

Pascal Poupart, Department of Computer Science, University of Toronto, Toronto, ON M5S 3H5, ppoupart@cs.toronto.edu

Craig Boutilier, Department of Computer Science, University of Toronto, Toronto, ON M5S 3H5, cebly@cs.toronto.edu

Abstract

We describe a new approximation algorithm for solving partially observable MDPs. Our bounded policy iteration approach searches through the space of bounded-size, stochastic finite state controllers, combining several advantages of gradient ascent (efficiency, search through restricted controller space) and policy iteration (less vulnerability to local optima).

1 Introduction

Finite state controllers (FSCs) provide a simple, convenient way of representing policies for partially observable Markov decision processes (POMDPs). Two general approaches are often used to construct good controllers: policy iteration (PI) [7] and gradient ascent (GA) [10, 11, 1]. The former is guaranteed to converge to an optimal policy; however, the size of the controller often grows intractably. In contrast, the latter restricts its search to controllers of a bounded size, but may get trapped in a local optimum. While locally optimal solutions are often acceptable, for many planning problems with a combinatorial flavor GA can easily get trapped by simple policies that are far from optimal. Consider a system engaged in preference elicitation, charged with discovering an optimal query policy to determine relevant aspects of a user's utility function. Often no single question yields information of much value, while a sequence of queries does. If each question has a cost, a system that locally optimizes the policy by GA may determine that the best course of action is to ask no questions (i.e., minimize cost given no information gain).
When an optimal policy consists of a sequence of actions such that any small perturbation results in a bad policy, there is little hope of finding this sequence using methods that greedily perform local perturbations, such as those employed by GA. In general, we would like the best of both worlds: bounded controller size and convergence to a global optimum. While achieving both is NP-hard for the class of deterministic controllers [10], one can hope for a tractable algorithm that at least avoids obvious local optima. We propose a new anytime algorithm, bounded policy iteration (BPI), that improves a policy much like Hansen's PI [7] while keeping the size of the controller fixed. Whenever the algorithm gets stuck in a local optimum, the controller is allowed to grow slightly by introducing one (or a few) node(s) to escape the local optimum. Following a brief review of FSCs (Sec. 2), we extend PI to stochastic controllers (Sec. 3), thus admitting smaller, high-quality controllers. We then derive the BPI algorithm by ensuring that the number of nodes remains unchanged (Sec. 4). We analyze the structure of local optima for BPI (Sec. 5), relate this analysis to GA, and use it to justify a new method to escape local optima. Finally, we report some preliminary experiments (Sec. 6).

2 Finite State Controllers for POMDPs

A POMDP is defined by a set of states $S$; a set of actions $A$; a set of observations $Z$; a transition function $T$, where $T(s, a, s')$ denotes the transition probability $\Pr(s'|s, a)$; an observation function $O$, where $O(s', a, z)$ denotes the probability $\Pr(z|s', a)$ of making observation $z$ in state $s'$ after taking action $a$; and a reward function $R$, where $R(s, a)$ denotes the immediate reward associated with state $s$ when executing action $a$. We assume discrete state, action and observation sets, and we focus on discounted, infinite-horizon POMDPs with discount factor $\gamma \in [0, 1)$. Since states are not directly observable in POMDPs, we define a belief state $b$ to be a distribution over states.
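The belief state is maintained by Bayes rule after each action-observation pair: the posterior is proportional to the observation likelihood times the predicted state distribution. A minimal sketch, with array layouts (`T[a, s, s']`, `O[a, s', z]`) that are our own convention rather than anything mandated by the paper:

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """b'(s') is proportional to O[a, s', z] * sum_s T[a, s, s'] * b(s);
    returns the normalised posterior after doing a and observing z."""
    b_new = O[a, :, z] * (b @ T[a])
    return b_new / b_new.sum()
```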
Belief state $b$ can be updated in response to an action-observation pair $(a, z)$ using Bayes rule. Policies represented by FSCs are defined by a (possibly cyclic) directed graph $G = (N, E)$, where each node $n \in N$ is labeled by an action $a$ and each edge $e \in E$ by an observation $z$. Each node has one outward edge per observation. The FSC can be viewed as a policy $\pi = (\psi, \eta)$, where the action strategy $\psi$ associates each node $n$ with an action $\psi(n) \in A$, and the observation strategy $\eta$ associates each node $n$ and observation $z$ with a successor node $\eta(n, z) \in N$ (corresponding to the edge from $n$ labeled with $z$). A policy is executed by taking the action associated with the "current node," and updating the current node by following the edge labeled by the observation made. The value function $V_\pi$ of an FSC $\pi$ is the expected discounted sum of rewards for executing its policy, and can be computed by solving a set of linear equations:

$$V_\pi(n, s) = R(s, \psi(n)) + \gamma \sum_{s'} \Pr(s'|s, \psi(n)) \sum_{z} \Pr(z|s', \psi(n)) V_\pi(\eta(n, z), s') \qquad (1)$$

Given an initial belief state $b$, an FSC's value at node $n$ is simply the expectation $V(n, b) = \sum_s b(s) V(n, s)$; the best starting node for a given $b$ is determined by $V(b) = \max_n V(n, b)$. As a result, the value $V(n, b)$ of each node $n$ is linear with respect to the belief state; hence the value function of the controller is piecewise-linear and convex. In Fig. 1(a), each linear segment corresponds to the value function of a node and the upper surface of these segments forms the controller value function. The optimal value function $V^*$ satisfies Bellman's equation:

$$V^*(b) = \max_a \sum_s b(s) R(s, a) + \gamma \sum_z \Pr(z|b, a) V^*(b_z^a) \qquad (2)$$

Policy iteration (PI) [7] incrementally improves a controller by alternating between two steps, policy improvement and policy evaluation, until convergence to an optimal policy. Policy evaluation solves Eq. 1 for a given policy. Policy improvement adds nodes to the controller by dynamic programming (DP) and removes other nodes. A DP backup applies the r.h.s. of Eq.
2 to the value function ($V$ in Fig. 2(a)) of the current controller to obtain a new, improved value function ($V'$ in Fig. 2(a)). Each linear segment of $V'$ corresponds to a new node added to the controller. Several algorithms can be used to perform DP backups, with incremental pruning [4] perhaps being the fastest. After the new nodes created by DP have been added, old nodes that are now pointwise dominated are removed. A node is pointwise dominated when its value is less than that of some other node at all belief states (e.g., $n_1$ is pointwise dominated by $n_4$ in Fig. 2(a)). The inward edges of a pointwise dominated node are re-directed to the dominating node since it offers better value (e.g., the inward arcs of $n_1$ are redirected to $n_4$ in Fig. 2(c)). The controller resulting from this policy improvement step is guaranteed to offer higher value at all belief states. On the other hand, up to $|A||N|^{|Z|}$ new nodes may be added with each DP backup, so the size of the controller quickly becomes intractable in many POMDPs.

Figure 1: a) Value function example; b) BPI local optimum: each linear segment of the value function is tangent to the backed-up value function.

Figure 2: a) Value function $V$ and the backed-up $V'$ obtained by DP; b) original controller ($n_1$ and $n_2$) with nodes added ($n_3$ and $n_4$) by DP; c) new controller once pointwise dominated node $n_1$ is removed and its inward arcs a, b, c are redirected to $n_4$.

3 Policy Iteration for Stochastic Controllers

Policy iteration only prunes nodes that are pointwise dominated, rather than all dominated nodes. This is because the algorithm is designed to produce controllers with deterministic observation strategies.
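Policy evaluation (solving Eq. 1) is a plain linear solve in the $|N| \cdot |S|$ unknowns $V(n, s)$. A minimal sketch for a deterministic controller, using illustrative array conventions (`psi[n]` is the node's action, `eta[n, z]` its successor, `T[a, s, s']`, `O[a, s', z]`, `R[s, a]`):

```python
import numpy as np

def evaluate_fsc(psi, eta, T, O, R, gamma):
    """Solve eq. (1): V(n,s) = R(s, psi[n]) + gamma * sum_{s', z}
    T[a, s, s'] * O[a, s', z] * V(eta[n, z], s').  Returns V, shape (N, S)."""
    Nn, Z = eta.shape
    S = T.shape[1]
    M = np.eye(Nn * S)          # (I - gamma * P) in flattened (node, state) space
    rhs = np.zeros(Nn * S)
    for n in range(Nn):
        a = psi[n]
        for s in range(S):
            rhs[n * S + s] = R[s, a]
            for s2 in range(S):
                for z in range(Z):
                    M[n * S + s, eta[n, z] * S + s2] -= gamma * T[a, s, s2] * O[a, s2, z]
    return np.linalg.solve(M, rhs).reshape(Nn, S)
```

Since $\gamma < 1$ the system matrix is always invertible, so the evaluation step never fails regardless of the controller's structure.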
A pointwise-dominated node can safely be pruned since its inward arcs are redirected to the dominating node (which has value at least as high as the dominated node at each state). In contrast, a node jointly dominated by several nodes (e.g., $n_2$ in Fig. 2(b) is jointly dominated by $n_3$ and $n_4$) cannot be pruned without its inward arcs being redirected to different nodes depending on the current belief state. This problem can be circumvented by allowing stochastic observation strategies. We revise the notion of observation strategy so that $\eta(n, z, n') = \Pr(n'|n, z)$ defines a distribution over successor nodes $n'$ for each $(n, z)$-pair. If the stochastic strategy is chosen carefully, the corresponding convex combination of dominating nodes may pointwise dominate the node we would like to prune. In Fig. 1(a), $n_1$ is dominated by $n_2$ and $n_3$ together (but by neither of them alone). Convex combinations of $n_2$ and $n_3$ correspond to all lines that pass through the intersection of $n_2$ and $n_3$. The dotted line illustrates one convex combination of $n_2$ and $n_3$ that pointwise dominates $n_1$: consequently, $n_1$ can be safely removed and its inward arcs re-directed to reflect this convex combination by setting the observation probabilities accordingly. In general, when a node is jointly dominated by a group of nodes, there exists a pointwise-dominating convex combination of this group.

Theorem 1 The value function $V(n, \cdot)$ of a node $n$ is jointly dominated by the value functions $V(n_1, \cdot), \ldots, V(n_k, \cdot)$ of nodes $n_1, \ldots, n_k$ if and only if there is a convex combination $\sum_i w_i V(n_i, \cdot)$ that dominates $V(n, \cdot)$.

$$\min_{b, \epsilon} \ \epsilon \quad \text{s.t.} \quad \sum_s b(s) V(n, s) + \epsilon \geq \sum_s b(s) V(n_i, s) \ \forall i; \quad \sum_s b(s) = 1; \quad b(s) \geq 0 \ \forall s$$

Table 1: Primal LP: $V(n, \cdot)$ is jointly dominated by $V(n_1, \cdot), \ldots, V(n_k, \cdot)$ when $\epsilon > 0$.

$$\max_{w, \epsilon} \ \epsilon \quad \text{s.t.} \quad V(n, s) + \epsilon \leq \sum_i w_i V(n_i, s) \ \forall s; \quad \sum_i w_i = 1; \quad w_i \geq 0 \ \forall i$$

Table 2: Dual LP: the convex combination $\sum_i w_i V(n_i, \cdot)$ dominates $V(n, \cdot)$ when $\epsilon > 0$.

Proof: $V(n, \cdot)$ is dominated by $V(n_1, \cdot), \ldots, V(n_k, \cdot)$ when the objective of the LP in Table 1 is positive.
This LP finds the belief state $b$ that minimizes the difference between $V(n, b)$ and the maximum of $V(n_1, b), \ldots, V(n_k, b)$. It turns out that the dual LP (Table 2) finds the most dominating convex combination parallel to $V(n, \cdot)$. Since the dual has a positive objective value exactly when the primal does, the theorem follows. $\square$

As argued in the proof of Thm. 1, the LP in Table 1 gives us an algorithm to find the most dominating convex combination parallel to a dominated node. In summary, by considering stochastic controllers, we can extend PI to prune all dominated nodes (pointwise or jointly) in the policy improvement step. This provides two advantages: controllers can be made smaller while improving their decision quality.

4 Bounded Policy Iteration

Although pruning all dominated nodes helps to keep the controller small, it may still grow substantially with each DP backup. Several heuristics are possible to bound the number of nodes. Feng and Hansen [6] proposed that one prune all nodes that improve the value function by less than some $\epsilon$ after each DP backup. Alternatively, instead of growing the controller with each backup and then pruning, we can do a partial DP backup that generates only a subset of the nodes, using Cheng's algorithm [5], the witness algorithm [9], or other heuristics [14]. In order to keep the controller bounded, for each node created in a partial DP backup, one node must be pruned and its inward arcs redirected to some dominating convex combination. In the event that no node is dominated, we can still prune a node and redirect its arcs to a good convex combination, but the resulting controller may have lower value at some belief states. We now propose a new algorithm called bounded policy iteration (BPI) that guarantees monotonic value improvement at all belief states while keeping the number of nodes fixed. BPI considers one node at a time and tries to improve it while keeping all other nodes fixed.
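The dual LP of Table 2 is small enough to pose directly to an off-the-shelf solver. A sketch using scipy, with our own variable layout (the scalar $\epsilon$ followed by the weights $w$); a positive returned $\epsilon$ certifies joint domination:

```python
import numpy as np
from scipy.optimize import linprog

def dominating_combination(v, V_others):
    """Dual LP of Table 2: maximise eps subject to
    v[s] + eps <= (w @ V_others)[s] for all s, with w a distribution.
    v: (S,) value vector of the candidate node; V_others: (M, S).
    Returns (eps, w); eps > 0 means v is jointly dominated."""
    M, S = V_others.shape
    c = np.zeros(M + 1)
    c[0] = -1.0                                        # maximise eps
    A_ub = np.hstack([np.ones((S, 1)), -V_others.T])   # eps - w.V(s) <= -v(s)
    b_ub = -np.asarray(v, dtype=float)
    A_eq = np.zeros((1, M + 1))
    A_eq[0, 1:] = 1.0                                  # weights sum to one
    bounds = [(None, None)] + [(0, None)] * M          # eps free, w >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[0], res.x[1:]
```

On the two-state example from Fig. 1(a)-style geometry, a node at value zero everywhere is dominated by the pair of vectors (2, 0) and (0, 2) through the equal-weight combination, even though neither vector dominates it alone.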
Improvement is achieved by replacing each node with a good convex combination of the nodes that would normally be created by a DP backup, but without actually performing a backup. Since the backed-up value function must dominate the controller's current value function, by Thm. 1 there must exist a convex combination of the backed-up nodes that pointwise dominates each node of the controller. Combining this idea with Eq. 2, we can directly compute such convex combinations with the LP in Table 3. This LP has $O(|A||N|^{|Z|})$ variables, corresponding to the probabilities of the convex combination as well as the $\epsilon$ variable measuring the value improvement. We can significantly reduce the number of variables by pushing the convex-combination variables as far as possible into the DP backup, resulting in the LP shown in Table 4. The key here is to realize that we can aggregate many variables, since we only care about the marginals $c_a$ (the total probability assigned to action $a$) and $c_{azn'}$ (the joint probability of executing $a$ and moving to node $n'$ when $z$ is observed).

$$\max_{\epsilon, c} \ \epsilon \quad \text{s.t.} \quad V(n, s) + \epsilon \leq \sum_{a, z_1 n_1 \cdots z_{|Z|} n_{|Z|}} c_{a, z_1 n_1 \cdots z_{|Z|} n_{|Z|}} \left[ R(s, a) + \gamma \sum_{s'} \sum_z \Pr(s'|s, a) \Pr(z|s', a) V(n_z, s') \right] \ \forall s; \quad \sum c = 1; \quad c \geq 0$$

Table 3: Naive LP to find a convex combination of backed-up nodes that dominates node $n$.

$$\max_{\epsilon, c} \ \epsilon \quad \text{s.t.} \quad V(n, s) + \epsilon \leq \sum_a \left[ c_a R(s, a) + \gamma \sum_{s'} \sum_z \sum_{n'} \Pr(s'|s, a) \Pr(z|s', a)\, c_{azn'} V(n', s') \right] \ \forall s;$$
$$\sum_a c_a = 1; \quad \sum_{n'} c_{azn'} = c_a \ \forall a, z; \quad c_a \geq 0 \ \forall a; \quad c_{azn'} \geq 0 \ \forall a, z, n'$$

Table 4: Efficient LP to find a convex combination of backed-up nodes that dominates node $n$.

The efficient LP in Table 4 has only $|A||Z||N| + |A| + 1$ variables.¹ Furthermore, the variables $c_a$ and $c_{azn'}$ have an intuitive interpretation w.r.t. the action and observation strategies of the improved node. Each $c_a$ variable indicates the probability of executing action $a$ (i.e., $\psi(n, a) = c_a$). Similarly, each $c_{azn'}$ variable indicates the (unnormalized) probability of reaching node $n'$ after executing $a$ and observing $z$ (i.e., $\eta(n, a, z, n') = c_{azn'}/c_a$).
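Under the same illustrative conventions used earlier (`T[a, s, s']`, `O[a, s', z]`, `R[s, a]`), the efficient LP of Table 4 can be assembled and solved with scipy; the variable layout below is our own choice, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def bpi_improve_node(v_n, V, T, O, R, gamma):
    """Efficient LP of Table 4 for one node.  v_n: (S,) value of the node
    being improved; V: (N, S) values of all nodes.  Variables are
    [eps, c_a..., c_azn...]; returns (eps, c_a, c_azn)."""
    nA, S, _ = T.shape
    nZ = O.shape[2]
    nN = V.shape[0]
    nvar = 1 + nA + nA * nZ * nN
    def ia(a): return 1 + a
    def ic(a, z, n): return 1 + nA + (a * nZ + z) * nN + n
    obj = np.zeros(nvar)
    obj[0] = -1.0                                      # maximise eps
    A_ub = np.zeros((S, nvar))                         # one constraint per state
    b_ub = -np.asarray(v_n, dtype=float)
    A_ub[:, 0] = 1.0
    for s in range(S):
        for a in range(nA):
            A_ub[s, ia(a)] -= R[s, a]
            for z in range(nZ):
                for n in range(nN):
                    A_ub[s, ic(a, z, n)] -= gamma * sum(
                        T[a, s, s2] * O[a, s2, z] * V[n, s2] for s2 in range(S))
    A_eq = np.zeros((1 + nA * nZ, nvar))
    b_eq = np.zeros(1 + nA * nZ)
    A_eq[0, 1:1 + nA] = 1.0                            # sum_a c_a = 1
    b_eq[0] = 1.0
    row = 1
    for a in range(nA):
        for z in range(nZ):
            A_eq[row, ia(a)] = -1.0                    # sum_n' c_azn' = c_a
            for n in range(nN):
                A_eq[row, ic(a, z, n)] = 1.0
            row += 1
    bounds = [(None, None)] + [(0, None)] * (nvar - 1)
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0], res.x[1:1 + nA], res.x[1 + nA:].reshape(nA, nZ, nN)
```

The returned `c_a` and `c_azn` marginals can be plugged in directly as the improved node's stochastic action and observation strategies, exactly as the interpretation above suggests.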
Note that we now use probabilistic action strategies and have extended probabilistic observation strategies to depend on the action executed. To summarize, BPI alternates between policy evaluation and improvement as in regular PI, but the policy improvement step simply tries to improve each node by solving the LP in Table 4. The $c_a$ and $c_{azn'}$ variables are used to set the probabilistic action and observation strategies of the new improved node.

5 Local Optima

BPI is a simple, efficient alternative to standard PI that monotonically improves an FSC while keeping its size constant. Unfortunately, it is only guaranteed to converge to a local optimum. We now characterize BPI's local optima and propose a method to escape them.

5.1 Characterization

Thm. 2 gives a necessary and sufficient condition characterizing BPI's local optima. Intuitively, a controller is a local optimum when each linear segment touches from below, or is tangent to, the controller's backed-up value function (see Fig. 1(b)).

Theorem 2 BPI has converged to a local optimum if and only if each node's value function is tangent to the backed-up value function.

Proof: Since the objective function of the LP in Table 4 seeks to maximize the improvement $\epsilon$, the resulting convex combination must be tangent to the upper surface of the backed-up value function. Conversely, the only time the LP cannot improve a node is when its vector is already tangent to the backed-up value function. $\square$

¹Actually, we do not need the $c_a$ variables, since they can be derived from the $c_{azn'}$ variables by summing out $n'$, so the number of variables can be reduced to $|A||Z||N| + 1$.

Interestingly, tangency is a necessary (but not sufficient) condition for GA's local optima.

Corollary 1 If GA has converged to a local optimum, then the value function of each node reachable from the initial belief state is tangent to the backed-up value function.

Proof: GA seeks to monotonically improve a controller in the direction of steepest ascent.
The LP of Table 4 also seeks a monotonically improving direction. Thus, if BPI can improve a controller by finding a direction of improvement using the LP of Table 4, then GA will also find it, or will find a steeper one. Conversely, when a controller is a local optimum for GA, there is no monotonic improvement possible in any direction. Since BPI can only improve a controller by following a direction of monotonic improvement, GA's local optima are a subset of BPI's local optima. Thus, tangency is a necessary, but not sufficient, condition of GA's local optima. $\square$

In the proof of Corollary 1, we argued that GA's local optima are a subset of BPI's local optima. This suggests that BPI is inferior to GA, since it can be trapped by more local optima than GA. However, we describe in the next section a simple technique that allows BPI to easily escape from local optima.

5.2 Escape Technique

The tangency condition characterizing local optima can be used to design an effective escape method for BPI. It essentially tells us that such tangent belief states are "bottlenecks" for further policy improvement. If we could improve the value at the tangent belief state(s) of some node, then we could break out of the local optimum. A simple method for doing so consists of a one-step lookahead search from the tangent belief states. Figure 1(b) illustrates how belief state $b'$ can be reached in one step from tangent belief state $b$, and how the backed-up value function improves $b'$'s current value. Thus, if we add a node to the controller that maximizes the value of $b'$, its improved value can subsequently be backed up to the tangent belief state $b$, breaking out of the local optimum. Our algorithm is summarized as follows: perform a one-step lookahead search from each tangent belief state; when a reachable belief state can be improved, add a new node to the controller that maximizes that belief state's value.
Interestingly, when no reachable belief state can be improved, the policy must be optimal at the tangent belief states. Theorem 3 If the backed up value function does not improve the value of any belief state reachable in one step from any tangent belief state, then the policy is optimal at the tangent belief states. Proof: By definition, belief states for which the backed up value function provides no improvement are tangent belief states. Hence, when all belief states reachable in one step are themselves tangent belief states, the set of tangent belief states is closed under every policy. Since there is no possibility of improvement, the current policy must be optimal at the tangent belief states. Although Thm. 3 guarantees an optimal solution only at the tangent belief states, in practice they rarely form a proper subset of the belief space (when none of the reachable belief states can be improved). Note also that the escape algorithm assumes knowledge of the tangent belief states. Fortunately, the solution to the dual of the LP in Table 4 is a tangent belief state. Since most commercial LP solvers return both the primal and the dual solutions, a tangent belief state is readily available for each node.2 2A node may have more than one tangent belief state when an interval of its linear segment is tangent to the backed up value function, indicating that it is identical to some backed up node. Figure 3: Experimental results for the maze and tag-avoid problems (expected rewards vs. number of nodes and vs. time in seconds, for Maze400 and Tag-Avoid). 6 Experiments We report some preliminary experiments with BPI and the escape method to assess their robustness against local optima, as well as their scalability to relatively large POMDPs.
In a first experiment, we ran BPI with escape on a preference elicitation problem and a modified version of the Heaven-and-Hell problem described in [3]. It consistently found the optimal policy, whereas GA settles for a local optimum on both problems. In a second experiment, we report the running time and decision quality of the controllers found for two large grid-world problems. The first is a 400-state extension of Hauskrecht's [8] 20-state maze problem, and the second is Pineau et al.'s [12] 870-state tag-avoid problem. In Figure 3, we report the expected return achieved w.r.t. time and number of nodes. For the maze problem, the expected return is averaged over all 400 states since BPI tries to optimize the policy for all belief states simultaneously. For comparison purposes, the expected return for the tag-avoid problem is measured at the same initial belief state used in [12] even though BPI doesn't tailor its policy exclusively to that belief state. In contrast, many point-based algorithms including PBVI [12] (which is perhaps the best such algorithm) optimize the policy for a single initial belief state, capitalizing on a hopefully small reachable belief region. BPI found a  ! -node controller in   with the same expected return of  (  achieved by PBVI in ( ! ! with a policy of (  linear segments. This suggests that most of the belief space is reachable in tag-avoid. We also ran BPI on the tiger-grid, hallway and hallway2 benchmark problems [12] and obtained ( !! -node controllers in (  ! ,   !  and   !  achieving expected returns of ( b( , ! ( , !  at the same initial belief states used in [12], but without using them to tailor the policy. In contrast, PBVI achieved expected returns of   , !  and ! in   ,   and  !  with policies of  ! , and   linear segments tailored to those initial belief states.
This suggests that only a small portion of the belief space is reachable. 7 Conclusion We have introduced the BPI algorithm, which guarantees monotonic improvement of the value function while keeping the controller size fixed. While quite efficient, the algorithm may get trapped in local optima. An analysis of such local optima reveals that the value function of each node is tangent to the backed up value function. This property can be successfully exploited in an algorithm that escapes local optima quite robustly. This research can be extended in a number of directions. State aggregation [2] and belief compression [13] techniques could be easily integrated with BPI to scale to problems with large state spaces. Also, since stochastic GA [11, 1] can tackle model-free problems (which BPI cannot), it would be interesting to see if tangent belief states could be computed for stochastic GA and used to design a heuristic to escape local optima similar to the one proposed for BPI. Acknowledgements We thank Darius Braziunas for his help with the implementation and the anonymous reviewers for their helpful comments. References [1] D. Aberdeen and J. Baxter. Scaling internal-state policy-gradient methods for POMDPs. Proc. ICML-02, pp.3-10, Sydney, Australia, 2002. [2] C. Boutilier and D. Poole. Computing optimal policies for partially observable decision processes using compact representations. Proc. AAAI-96, pp.1168-1175, Portland, OR, 1996. [3] D. Braziunas. Stochastic local search for POMDP controllers. Master's thesis, University of Toronto, Toronto, 2003. [4] A. R. Cassandra, M. L. Littman, and N. L. Zhang. Incremental pruning: A simple, fast, exact method for POMDPs. Proc. UAI-97, pp.54-61, Providence, RI, 1997. [5] H.-T. Cheng. Algorithms for Partially Observable Markov Decision Processes. PhD thesis, University of British Columbia, Vancouver, 1988. [6] Z. Feng and E. A. Hansen. Approximate planning for factored POMDPs. Proc. ECP-01, Toledo, Spain, 2001. [7] E. A. Hansen.
Solving POMDPs by searching in policy space. Proc. UAI-98, pp.211–219, Madison, Wisconsin, 1998. [8] M. Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13:33–94, 2000. [9] L. P. Kaelbling, M. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134, 1998. [10] N. Meuleau, K.-E. Kim, L. P. Kaelbling, and A. R. Cassandra. Solving POMDPs by searching the space of finite policies. Proc. UAI-99, pp.417–426, Stockholm, 1999. [11] N. Meuleau, L. Peshkin, K.-E. Kim, and L. P. Kaelbling. Learning finite-state controllers for partially observable environments. Proc. UAI-99, pp.427–436, Stockholm, 1999. [12] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: an anytime algorithm for POMDPs. In Proc. IJCAI-03, Acapulco, Mexico, 2003. [13] P. Poupart and C. Boutilier. Value-directed compressions of POMDPs. Proc. NIPS-02, pp.1547– 1554, Vancouver, Canada, 2002. [14] N. L. Zhang and W. Zhang. Speeding up the convergence of value-iteration in partially observable Markov decision processes. Journal of Artificial Intelligence Research, 14:29–51, 2001.
2003
Mutual Boosting for Contextual Inference Michael Fink Pietro Perona Center for Neural Computation Electrical Engineering Department Hebrew University of Jerusalem California Institute of Technology Jerusalem, Israel 91904 Pasadena, CA 91125 fink@huji.ac.il perona@vision.caltech.edu Abstract Mutual Boosting is a method aimed at incorporating contextual information to augment object detection. When multiple detectors of objects and parts are trained in parallel using AdaBoost [1], object detectors might use the remaining intermediate detectors to enrich the weak learner set. This method generalizes the efficient features suggested by Viola and Jones [2], thus enabling information inference between parts and objects in a compositional hierarchy. In our experiments eye-, nose-, mouth- and face detectors are trained using the Mutual Boosting framework. Results show that the method outperforms applications overlooking contextual information. We suggest that achieving contextual integration is a step toward human-like detection capabilities. 1 Introduction Classification of multiple objects in complex scenes is one of the next challenges facing the machine learning and computer vision communities. Although real-time detection of single object classes has been recently demonstrated [2], naïve duplication of these detectors to the multiclass case would be infeasible. Our goal is to propose an efficient method for detection of multiple objects in natural scenes. Hand-in-hand with the challenges entailed by multiclass detection, some distinct advantages emerge as well. Knowledge of the positions of several objects might shed light on the entire scene (Figure 1). Detection systems that do not exploit the information provided by objects in the neighboring scene will be suboptimal. Figure 1: Contextual spatial relationships assist detection. A. In the absence of facial components (whitened blocking box), faces can be detected by context (alignment of neighboring faces). B.
keyboards can be detected when they appear under monitors. Many human and computer vision models postulate explicitly or implicitly that vision follows a compositional hierarchy. Grounded features (that are innate/hardwired and are available prior to learning) are used to detect salient parts, these parts in turn enable detection of complex objects [3, 4], and finally objects are used to recognize the semantics of the entire scene. Yet, a more accurate assessment of human performance reveals that the visual system often violates this strictly hierarchical structure in two ways. First, part and whole detection are often evidently interacting [5, 6]. Second, several layers of the hierarchy are occasionally bypassed to enable swift direct detection. This phenomenon is demonstrated by gist recognition experiments where the semantic classification of an entire scene is performed using only minimal low level feature information [7]. The insights emerging from observing human perception were adopted by the object detection community. Many object detection algorithms bypass stages of a strict compositional hierarchy. The Viola & Jones (VJ) detector [2] is able to perform robust online face detection by directly agglomerating very low-level features (rectangle contrasts), without explicitly referring to facial parts. Gist detection from low-level spatial frequencies was demonstrated by Oliva and Torralba [8]. Recurrent optimization of parts and object constellation is also common in modern detection schemes [9]. Although Latent Semantic Analysis (making use of object cooccurrence information) has been adapted to images [10], the existing state of object detection methods is still far from unifying all the sources of visual contextual information integrated by the human perceptual system. 
Tackling the context integration problem and achieving robust multiclass object detection is a vital step for applications like image-content database indexing and autonomous robot navigation. We will propose a method termed Mutual Boosting to incorporate contextual information for object detection. Section 2 will start by posing the multiclass detection problem from labeled images. In Section 3 we characterize the feature sets implemented by Mutual Boosting and define an object's contextual neighborhood. Section 4 presents the Mutual Boosting framework aimed at integrating contextual information and inspired by the recurrent inferences dominating the human perceptual system. An application of the Mutual Boosting framework to facial component detection is presented in Section 5. We conclude with a discussion on the scope and limitations of the proposed framework. 2 Problem setting and basic notation Suppose we wish to detect multiple objects in natural scenes, and that these scenes are characterized by certain mutual positions between the composing objects. Could we make use of these objects' contextual relations to improve detection? Perceptual context might include multiple sources of information: information originating from the presence of existing parts, information derived from other objects in the perceptual vicinity and finally general visual knowledge on the scene. In order to incorporate these various sources of visual contextual information Mutual Boosting will treat parts, objects and scenes identically. We will therefore use the term object as a general term while referring to any entity in the compositional hierarchy. Let M denote the cardinality of the object set we wish to detect in natural scenes. Our goal is to optimize detection by exploiting contextual information while maintaining detection time comparable to M individual detectors trained without such information. 
We define the goal of the multiclass detection algorithm as generating M intensity maps Hm=1,..,M indicating the likelihood of object m appearing at different positions in a target image. We will use the following notation (Figure 2):
• H0+/H0-: raw image input with/without the trained objects (A1 & A2)
• Cm[i]: labeled position of instance i of object m in image H0+
• Hm: intensity map output indicating the likelihood of object m appearing at different positions in the image H0 (B)
Figure 2: A1 & A2. Input: position of positive and negative examples of eyes in natural images. B. Output: eye intensity (eyeness) detection map of image H0+. 3 Feature set and contextual window generalizations The VJ method for real-time object detection included three basic innovations. First, they presented the rectangle contrast features, features that are evaluated efficiently using an integral image. Second, VJ introduced AdaBoost [1] to object detection using rectangle features as weak learners. Finally, a cascade method was developed to chain a sequence of increasingly complex AdaBoost learners to enable rapid filtering of non-relevant sections in the target image. The resulting cascade of AdaBoost face detectors achieves a 15-frames-per-second detection speed, with a 90% detection rate and a 2×10−6 false alarm rate. This detection speed is currently unmatched. In order to maintain efficient detection and to benchmark the performance of Mutual Boosting, we adopt the rectangle contrast feature framework suggested by VJ. It should be noted that the grayscale rectangle features can be naturally extended to any image channel that preserves the semantics of summation. A diversified feature set (including color features, texture features, etc.) might saturate later than a homogeneous channel feature set. By making use of features that capture the object regularities well, one can improve performance or reduce detection time.
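The integral-image trick behind the rectangle contrast features can be sketched as follows; this is a minimal illustration rather than VJ's implementation, and the function names are ours:

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[:r+1, :c+1]; built in one pass with cumsum."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via at most 4 lookups on the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_contrast(ii, r0, c0, r1, c1):
    """A VJ-style two-rectangle contrast feature: left half minus right half."""
    cm = (c0 + c1) // 2
    return rect_sum(ii, r0, c0, r1, cm) - rect_sum(ii, r0, cm, r1, c1)
```

Any channel that "preserves the semantics of summation", including the detection maps introduced later, can be fed through the same machinery unchanged.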
VJ extract training windows that capture the exact area of the training faces. We term this the local window approach. A second approach, in line with our attempt to incorporate information from neighboring parts or objects, would be to make use of training windows that capture wide regions around the object (Figure 3)1. Figure 3: A local window (VJ) and a contextual window that captures relative position information from objects or parts around and within the detected object. 1 Contextual neighborhoods emerge by downscaling larger regions in the original image to a PxP resolution window. The contextual neighborhood approach contributes to detection when the applied channels require a wide contextual range, as will be demonstrated in the Mutual Boosting scheme presented in the following section2. 4 Mutual Boosting The AdaBoost algorithm maintains a clear distinction between the boosting level and the weak-learner training level. The basic insight guiding the Mutual Boosting method reexamines this distinction, stipulating that when multiple objects and parts are trained simultaneously using AdaBoost, any object detector might combine the previously evolving intermediate detectors to generate new weak learners. In order to elaborate this insight, it should first be noted that while training a strong learner using 100 iterations of AdaBoost (abbreviated AB100), one could calculate an intermediate strong learner at each step on the way (AB2 - AB99). To apply this observation to our multiclass detection problem, we simultaneously train M object detectors. At each boosting iteration t the M detectors (ABm t-1) emerging at the previous stage t-1 are used to filter positive and negative3 training images, thus producing intermediate m-detection maps Hm t-1 (likelihood of object m in the images4). Next, the Mutual Boosting stage takes place and all the existing Hm t-1 maps are used as additional channels out of which new contrast features are selected.
This process gradually enriches the initial grounded features with composite contextual features. The composite features are searched on a PxP wide contextual neighborhood region rather than the PxP local window (Figure 3). Following a dynamic programming approach in training and detection, Hm=1,..,M detection maps are constantly maintained and updated so that the recalculation of Hm t only requires the last chosen weak learner WLmn* t to be evaluated on channel Hn* t-1 of the training image (Figure 4). This evaluation produces a binary detection layer that will be weighted by the AdaBoost weak-learner weighting scheme and added to the previous stage map5. Although Mutual Boosting examines a larger feature set during training, an iteration of Mutual Boosting detection of M objects is as time-consuming as performing an AdaBoost detection iteration for M individual objects. The advantage of Mutual Boosting emerges from introducing highly informative feature sets that can enhance detection or require fewer boosting iterations. While most object detection applications extract a local window containing the object information and discard the remaining image (including the object positional information), Mutual Boosting processes the entire image during training and detection and makes constant use of the information characterizing objects' relative position in the training images. As we have previously stated, the detected objects might be in various levels of a compositional hierarchy (e.g. complex objects or parts of other objects). Nevertheless, Mutual Boosting provides a similar treatment to objects, parts and scenes, enabling any compositional structure of the data to naturally emerge. We will term any contextual reference that is not directly grounded in the basic features a cross-referencing of objects6. 2 The most efficient size of the contextual neighborhoods might vary, from the immediate to the entire image, and therefore should be empirically learned.
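The contextual-neighborhood construction (crop a wide region around the object, downscale it to a P×P window) can be sketched as below; block averaging stands in for whatever downscaling a VJ-style pipeline actually uses, and the names are ours:

```python
import numpy as np

def contextual_window(H, center, P=10, scale=5):
    """Cut a (scale*P x scale*P) neighborhood of `center` from map H,
    zero-padding at the borders, and downscale it to P x P by block averaging."""
    half = scale * P // 2
    r, c = center
    pad = np.pad(H, half)                      # zero-pad so border centers are valid
    win = pad[r:r + scale * P, c:c + scale * P]
    return win.reshape(P, scale, P, scale).mean(axis=(1, 3))
```

Setting scale=1 recovers a plain local window, so the same cutout routine serves both window types in the pseudocode below.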
3 Images without target objects (see experimental section below). 4 Unlike the weak learners, the intermediate strong learners do not apply a threshold. 5 In order to optimize the number of detection-map integral-image recalculations, these maps might be updated every k (e.g. 50) iterations rather than at each iteration. 6 Scenes can be cross-referenced as well if scene labels are available (office/lab etc.).
Input: H0+/0-: positive/negative raw images; Cm[i]: position of instance i of object m=1,..,M in image H0+
Initialization: initialize boosting weights of instances i of object m to 1; initialize detection maps Hm+0/Hm-0 to 0
For t=1,..,T:
  For m=1,..,M and n=0,..,M:
    (A) cut out & downscale local (n=0) or contextual (n>0) windows (WINm) of instances i of object m (at Cm[i]) from all existing images Hn t-1
  For m=1,..,M:
    normalize boosting weights of object m instances [1]
    (B1&2) select map Hn* t-1 and weak learner WLmn* that minimize error on WINm
    decrease boosting weights of instances that WLmn* labeled correctly [1]
    (C) DetectionLayermn* ← WLmn*(Hn* t-1)
    calculate αm t, the weak learner contribution factor, from the empirical error [1]
    (D) update m-detection map: Hm t ← Hm t-1 + αm t DetectionLayermn*
Return: strong learners ABm T including WLmn* 1,..,T and αm 1,..,T (m=1,..,M)
Figure 4: Mutual Boosting diagram & pseudocode. Each raw image H0 is analyzed by M object detection maps Hm=1,..,M, updated by iterating through four steps: (A) cut out & downscale from existing maps Hn=0,..,M t-1 a local (n=0) or contextual (n>0) PxP window containing a neighborhood of object m; (B1&2) select the best performing map Hn* and weak learner WLmn* that optimize object m detection; (C) run WLmn* on the Hn* map to generate a new binary m-detection layer; (D) add the m-detection layer to the existing detection map Hm.
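One selection-and-reweighting round (steps B1&2 plus the standard AdaBoost updates marked [1]) can be sketched with threshold stumps over multiple channels. This is an illustrative simplification, using one scalar feature per example and channel instead of windowed rectangle features, and all names are ours:

```python
import numpy as np

def boost_round(channels, y, w):
    """One simplified Mutual-Boosting-style round.

    channels: (n_channels, N) scalar feature per example and channel.
    y: (N,) labels in {-1, +1}.  w: (N,) normalized boosting weights.
    Picks the channel/threshold/polarity stump with the lowest weighted
    error, then applies the AdaBoost reweighting."""
    best = None
    for n, f in enumerate(channels):
        for thr in np.unique(f):
            for sign in (1, -1):
                pred = np.where(sign * (f - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, n, thr, sign, pred)
    err, n, thr, sign, pred = best
    err = max(err, 1e-12)                   # avoid log(0) on a perfect stump
    alpha = 0.5 * np.log((1 - err) / err)   # weak learner contribution factor
    w = w * np.exp(-alpha * y * pred)       # decrease weights of correct examples
    return n, thr, sign, alpha, w / w.sum()
```

The returned channel index n is what makes the round "mutual": once intermediate detection maps join the channel set, a detector is free to select another detector's output as its next weak-learner source.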
[1] Standard AdaBoost stages are not elaborated. To maintain local and global natural scene statistics, negative training examples are generated by pairing each image with an image of equal size that does not contain the target objects and by centering the local and contextual windows of the positive and negative examples on the object positions in the positive images (see Figure 2). By using parallel boosting and efficient rectangle contrast features, Mutual Boosting is capable of incorporating many information inferences (references in Figure 5):
• Features could be used to directly detect parts and objects (A & B)
• Objects could be used to detect other (or identical) objects in the image (C)
• Parts could be used to detect other (or identical) nearby parts (D & E)
• Parts could be used to detect objects (F)
• Objects could be used to detect parts
Figure 5: Emerging features of eyes, mouths and faces (presented on windows of raw images for legibility): A. eye feature from raw image; B. face feature from raw image; C. face feature from face detection image; D. eye feature from eye detection image; E. mouth feature from eye detection image; F. face feature from mouth detection image. The windows' scale is defined by the detected object size and by the map mode (local or contextual). In C, faces are detected using face detection maps HFace, exploiting the fact that faces tend to be horizontally aligned. 5 Experiments In order to test the contribution of the Mutual Boosting process, we focused on detection of objects in what we term a face-scene (right eye, left eye, nose, mouth and face). We chose to perform contextual detection in the face-scene for two main reasons. First, as detailed in Figure 5, face scenes demonstrate a range of potential part and object cross-references. Second, faces have been the focus of object detection research for many years, thus enabling a systematic result comparison.
Experiment 1 was aimed at comparing the performance of Mutual Boosting to that of naïve independently trained object detectors using local windows. Figure 6: A. Two examples of the CMU/MIT face database. B. Mutual Boosting and AdaBoost ROCs (Pd vs. Pfa) on the CMU/MIT face database. Face-scene images were downloaded from the web and manually labeled7. Training relied on 450 positive and negative examples (~4% of the images used by VJ). 400 iterations of local window AdaBoost and contextual window Mutual Boosting were performed on the same image set. Contextual windows encompassed a region five times larger in width and height than the local windows8 (see Figure 3). 7 By following CMU database conventions (R-eye, L-eye, Nose & Mouth positions) we derive both the local window position and the relative position of objects in the image. 8 Local windows were created by downscaling objects to 25x25 grids. Test image detection maps emerge from iteratively summing T m-detection layers (Mutual Boosting stages C & D). ROC performance on the CMU/MIT face database (see sample images in Figure 6A) was assessed using a threshold on position Cm[i] that best discriminated the final positive and negative detection maps Hm±T. Figure 6B demonstrates the superiority of Mutual Boosting over grounded-feature AdaBoost. Our second experiment was aimed at assessing the performance of Mutual Boosting as we change the detected configurations' variance. Assuming normal distribution of face configurations, we estimated (from our existing labeled set) the spatial covariance between four facial components (noses, mouths and both eyes). We then modified the covariance matrix, multiplying it by 0.25, 1 or 4, and generated 100 artificial configurations by positioning four contrasting rectangles in the estimated positions of facial components.
Although both Mutual Boosting and AdaBoost performance degraded as the configuration variance increased, the advantage of Mutual Boosting persists both in rigid and in varying configurations9 (Figure 7). Figure 7: A. Artificial face configurations with increasing covariance (COV 0.25, 1.00, 4.00). B. MB and AB equal-error-rate performance on configurations with varying covariance (sigma = 0.25, 1.00, 4.00) as a function of boosting iterations. 6 Discussion While evaluating the performance of Mutual Boosting it should be emphasized that we did not implement the VJ cascade approach; therefore we only attempt to demonstrate that the power of a single AdaBoost learner could be augmented by Mutual Boosting. The VJ detector is rescaled in order to perform efficient detection of objects in multiple scales. For simplicity, the scale of neighboring objects and parts was assumed to be fixed, so that a similar detector-rescaling approach could be followed. This assumption holds well for face-scenes, but if neighboring objects may vary in scale, a single m-detection map will not suffice. However, by transforming each m-detection image to an m-detection cube (having scale as the third dimension), multi-scale context detection could be achieved10. The dynamic programming characteristic of Mutual Boosting (simply reusing the multiple position and scale detections already performed by VJ) will ensure that the running time of varying-scale context will only be doubled. It should be noted that the face-scene is highly structured and therefore it is a good candidate for demonstrating 9 In this experiment the resolution of the MB windows (and the number of training features) was decreased so that information derived from the higher resolution of the parts would be ruled out as an explaining factor for the Mutual Boosting advantage.
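The m-detection-cube idea (scale as a third dimension) works because the integral image generalizes directly to an integral cube, where any axis-aligned box sum costs at most 8 lookups by 3-D inclusion-exclusion; a minimal sketch with names of ours:

```python
import numpy as np

def integral_cube(V):
    """ic[i, j, k] = sum of V[:i+1, :j+1, :k+1]."""
    return V.cumsum(0).cumsum(1).cumsum(2)

def cube_sum(ic, lo, hi):
    """Sum of V[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] via at most 8 lookups.
    Corners that fall outside the cube (index -1) contribute 0 and are skipped."""
    s = 0.0
    for d0 in (0, 1):
        for d1 in (0, 1):
            for d2 in (0, 1):
                i = hi[0] - 1 if d0 else lo[0] - 1
                j = hi[1] - 1 if d1 else lo[1] - 1
                k = hi[2] - 1 if d2 else lo[2] - 1
                if i < 0 or j < 0 or k < 0:
                    continue
                # Sign alternates with the number of "lo" corners picked.
                s += (-1) ** (3 - (d0 + d1 + d2)) * ic[i, j, k]
    return s
```

This matches the text's accounting: 8 access operations per cube feature, double the 4 needed in the 2-D integral-image case.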
This procedure explains the superior AdaBoost performance in the first boosting iteration. 10 By using an integral cube, calculating the sum of a cube feature (of any size) requires 8 access operations (only double the 4 operations required in the integral image case). Mutual Boosting; however, as suggested by Figure 7B, Mutual Boosting can handle highly varying configurations, and the proposed method needs no modification when applied to other scenes, like the office scene in Figure 1 (footnote 11). Notice that Mutual Boosting does not require a-priori knowledge of the compositional structure but rather permits structure to naturally emerge in the cross-referencing pattern (see examples in Figure 5). Mutual Boosting could be enhanced by unifying the selection of weak learners rather than selecting an individual weak learner for each object detector. Unified selection is aimed at choosing weak learners that maximize the entire object set detection rate, thus maximizing feature reuse [11]. This approach is optimal when many objects with common characteristics are trained. Is Mutual Boosting specific to image object detection? Indeed it requires labeled input of multiple objects in a scene, supplying a local description of the objects as well as information on their contextual mutual positioning. But these criteria are shared by other complex "scenes". DNA sequences include multiple objects (genes) in mutual positions, and therefore might be handled by a variant of Mutual Boosting. The remarkable success of the VJ method stems from abandoning the use of highly custom-tailored complex features in favor of numerous simple ones. Mutual Boosting combines parallel boosting with a similar feature approach to efficiently incorporate contextual information. We suggest that achieving wide contextual integration is one step towards human-like object detection capabilities. References [1] Freund, Y. and Schapire, R. E.
(1997) A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. JCSS 55(1): 119-139 [2] Viola, P. and Jones, M. (2001) Robust real-time object detection. IEEE ICCV Workshop on Stat. and Comp. Theories of Vision, Vancouver, Canada, July 13, 2001 [3] Tanaka, K., Saito, H., Fukada, Y. and Moriya, M. (1991) Coding visual images of objects in the inferotemporal cortex of the macaque monkey. J. Neurophys. 66:170-189 [4] Biederman, I. (1987) Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147 [5] Navon, D. (1977) Forest before trees: The precedence of global features in visual perception. Cog. Psych. 9, 353-383 [6] Biederman, I., Mezzanotte, R. J., & Rabinowitz, J. C. (1982) Scene perception: Detecting and judging objects undergoing relational violations. Cog. Psych. 14, 143-177 [7] Biederman, I. (1981) On the semantics of a glance at a scene. In M. Kubovy, & J. R. Pomerantz, Perceptual organization (pp. 213-253). Hillsdale, NJ: Erlbaum. [8] Oliva, A., Torralba, A. B. (2002) Scene-Centered Description from Spatial Envelope Properties. Biologically Motivated Computer Vision 2002: 263-272 [9] Weber, M., Welling, M., & Perona, P. (2000) Unsupervised Learning of Models for Recognition. ECCV (1) 2000: 18-32 [10] Barnard, K. and Forsyth, D. (2001) Learning the semantics of words and pictures. In IEEE ICCV, volume 2, pages 408-415, Vancouver, Canada, July 2001 [11] Schapire, R. E. and Singer, Y. (2000) Boostexter: A boosting-based system for text categorization. Machine Learning, 39(2-3):135-168, May/June 2000. 11 MB is currently aimed at detecting objects in office-scenes (Caltech 360° office DB)
2003
Sparseness of Support Vector Machines—Some Asymptotically Sharp Bounds Ingo Steinwart Modeling, Algorithms, and Informatics Group, CCS-3, Mail Stop B256 Los Alamos National Laboratory Los Alamos, NM 87545, USA ingo@lanl.gov Abstract The decision functions constructed by support vector machines (SVM's) usually depend only on a subset of the training set—the so-called support vectors. We derive asymptotically sharp lower and upper bounds on the number of support vectors for several standard types of SVM's. In particular, we show for the Gaussian RBF kernel that the fraction of support vectors tends to twice the Bayes risk for the L1-SVM, to the probability of noise for the L2-SVM, and to 1 for the LS-SVM. 1 Introduction Given a training set T = ((x1, y1), . . . , (xn, yn)) with xi ∈ X, yi ∈ Y := {−1, 1}, standard support vector machines (SVM's) for classification (cf. [1], [2]) solve

$$\arg\min_{f \in H,\; b \in \mathbb{R}} \; \lambda \|f\|_H^2 + \frac{1}{n} \sum_{i=1}^n L\big(y_i(f(x_i) + b)\big), \qquad (1)$$

where H is a reproducing kernel Hilbert space (RKHS) of a kernel k : X × X → R (cf. [3], [4]), λ > 0 is a free regularization parameter and L : R → [0, ∞) is a convex loss function. Common choices for L are the hinge loss function L(t) := max{0, 1−t}, the squared hinge loss function L(t) := (max{0, 1−t})2 and the least squares loss function L(t) := (1−t)2. The corresponding classifiers are called L1-SVM, L2-SVM and LS-SVM, respectively. Common choices of kernels are the Gaussian RBF k(x, x′) = exp(−σ2∥x − x′∥22) for x, x′ ∈ Rd and fixed σ > 0, and polynomial kernels k(x, x′) = (⟨x, x′⟩ + c)m for x, x′ ∈ Rd and fixed c ≥ 0, m ∈ N. If (fT,λ, bT,λ) ∈ H × R denotes a solution of (1), we have

$$f_{T,\lambda} = \frac{1}{2\lambda} \sum_{i=1}^n y_i \alpha_i^* k(x_i, \cdot) \qquad (2)$$

for suitable coefficients α∗1, . . . , α∗n ∈ R (cf. [5]). Obviously, only the samples xi with α∗i ≠ 0 have an impact on fT,λ. These samples are called support vectors. The fewer support vectors fT,λ has, the faster it can be evaluated.
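Equation (2) says the decision function is a kernel expansion over the support vectors only; a minimal numpy sketch of evaluating it (function names are ours, not the paper's):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian RBF kernel k(x, x') = exp(-sigma^2 * ||x - x'||_2^2)."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sigma**2 * sq)

def decision_function(x_new, X_train, y, alpha, b, lam, sigma=1.0):
    """Evaluate f(x) + b with f as in Eq. (2): f = (1/2λ) Σ_i y_i α*_i k(x_i, ·).
    Only the support vectors (α*_i != 0) contribute, so we drop the rest."""
    sv = alpha != 0
    K = rbf_kernel(x_new, X_train[sv], sigma)
    return K @ (y[sv] * alpha[sv]) / (2 * lam) + b
```

Evaluation cost scales with the number of support vectors #SV(f_{T,λ}), which is exactly why the bounds developed below matter in practice.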
Moreover, it is well known that the number of support vectors #SV(f_{T,λ}) of the representation of f_{T,λ} (cf. Section 3 for a brief discussion) also has a large impact on the time needed to solve (1) using the dual problem. It is therefore of high interest to know how many support vectors one can expect for a given classification problem. In this work we address this question by establishing asymptotically sharp lower and upper bounds on the number of support vectors for typical situations. The rest of the paper is organized as follows: in Section 2 we introduce some technical notions and recall recent results related to this work. In Section 3 our results are presented and discussed, and finally, in Section 4 their proofs can be found.

2 Notations and known results

The standard assumption in classification is that the training set T consists of i.i.d. pairs drawn from an unknown distribution P on X × Y. For technical reasons we assume throughout this paper that X is a compact metric space, e.g. a bounded, closed subset of R^d. A Bayes decision function (cf. [6]) f_P : X → Y is a function that P_X-a.s. equals 1 and −1 on C_1 := {x ∈ X : P(1|x) > 1/2} and C_{−1} := {x ∈ X : P(−1|x) > 1/2}, respectively. The corresponding classification error R_P of such a function is called the Bayes risk of P. Recall that the Bayes risk is the smallest possible classification error. A RKHS H is called universal if H is ||·||_∞-dense in the space of continuous functions C(X). The best known example of a universal kernel is the Gaussian RBF kernel (cf. [7]). Let us recall some results of the recent paper [8]. To simplify the statements, let us assume that P has no discrete components, i.e. P_X({x}) = 0 for all x ∈ X. Furthermore, let L be a continuous convex loss function satisfying some minor regularity conditions.
Then it was shown for universal RKHS's and strictly positive null sequences (λ_n) satisfying a regularity condition that the following statement holds for all ε > 0 and n → ∞:

  P^n({T ∈ (X × Y)^n : #SV(f_{T,λ_n}) ≥ (R_P − ε)n}) → 1.   (3)

In particular, this result holds for L1-SVM's. Furthermore, for L being also differentiable (e.g. L2-SVM's and LS-SVM's) it was proved that

  P^n({T ∈ (X × Y)^n : #SV(f_{T,λ_n}) ≥ (S_P − ε)n}) → 1,   (4)

where S_P := P_X({x ∈ X : 0 < P(1|x) < 1}) denotes the probability of the set of points where noise occurs. Obviously, we always have S_P ≥ 2R_P, and for noisy non-degenerate P, that is for P with P_X({x ∈ X : P(1|x) ∉ {0, 1/2, 1}}) > 0, this relation becomes a strict inequality. We shall prove in the next section that (3) can be significantly improved for the L1-SVM. We shall also show that this new lower bound is also an upper bound under moderate conditions on P and H. Furthermore, we prove that (4) is asymptotically optimal for the L2-SVM and show that it can be significantly improved for the LS-SVM.

3 New bounds

We begin with lower and upper bounds for the L1-SVM. Recall that the problem (1) for this classifier can be reformulated as

  minimize   λ⟨f, f⟩ + (1/n) Σ_{i=1}^n ξ_i    for f ∈ H, b ∈ R, ξ ∈ R^n
  subject to y_i(f(x_i) + b) ≥ 1 − ξ_i,  i = 1, ..., n,
             ξ_i ≥ 0,  i = 1, ..., n.   (5)

Instead of solving (5) directly, one usually solves the dual optimization problem (cf. [4])

  maximize   Σ_{i=1}^n α_i − (1/(4λ)) Σ_{i,j=1}^n y_i y_j α_i α_j k(x_i, x_j)    for α ∈ R^n
  subject to Σ_{i=1}^n y_i α_i = 0,  0 ≤ α_i ≤ 1/n,  i = 1, ..., n.   (6)

If (α*_1, ..., α*_n) ∈ R^n denotes a solution of (6) then f_{T,λ} can be computed by (2). Note that the representation of f_{T,λ} is not unique in general, i.e. using other algorithms for solving (5) can lead to possibly sparser representations. However, in contrast to the general case, the representation (2) of f_{T,λ} is P^n-almost surely (a.s.) unique if the kernel is universal and P has no discrete components (cf. [8]).
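For intuition, the dual problem (6) can be solved numerically on toy data with a generic constrained optimizer; the sketch below uses SciPy's SLSQP routine (an implementation choice assumed here, not something the paper prescribes) and then counts the support vectors, i.e. the samples with α*_i ≠ 0:

```python
import numpy as np
from scipy.optimize import minimize

def l1svm_dual(X, y, lam, sigma=1.0):
    """Solve the L1-SVM dual (6) for a small training set; a toy sketch,
    not a production QP solver."""
    n = len(y)
    # Gram matrix of the Gaussian RBF kernel k(x, x') = exp(-sigma^2 ||x - x'||^2).
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Q = (np.outer(y, y) * np.exp(-sigma**2 * sq)) / (4.0 * lam)

    # Maximize sum(alpha) - alpha^T Q alpha, i.e. minimize its negative.
    neg_obj = lambda a: -(a.sum() - a @ Q @ a)
    cons = {"type": "eq", "fun": lambda a: y @ a}   # sum_i y_i alpha_i = 0
    bounds = [(0.0, 1.0 / n)] * n                   # box constraint 0 <= alpha_i <= 1/n
    return minimize(neg_obj, np.full(n, 0.5 / n), bounds=bounds,
                    constraints=cons).x

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
alpha = l1svm_dual(X, y, lam=0.1)
print(f"{int(np.sum(alpha > 1e-6))} of {len(y)} samples are support vectors")
```

The box constraint α_i ≤ 1/n is exactly what the proof of Theorem 3.1 exploits: a lower bound on Σ α*_i directly forces a proportional number of nonzero coefficients.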
Since our results for the L1-SVM hold for general kernels we always assume that f_{T,λ} is found by (6). Finally, for a loss function L and a RKHS H we write

  R_{L,P,H} := inf_{f ∈ H, b ∈ R} R_{L,P}(f + b),   where   R_{L,P}(f) := E_{(x,y)∼P} L(y f(x)).

Note that f_{T,λ_n} + b_{T,λ_n} cannot achieve an L-risk better than R_{L,P,H} if H is the RKHS used in (1). Now, our first result is:

Theorem 3.1 Let k be a continuous kernel on X and P be a probability measure on X × Y with no discrete components. Then for the L1-SVM using a regularization sequence (λ_n) with λ_n → 0 and nλ_n^2 / log n → ∞, and for all ε > 0, we have

  P^n({T ∈ (X × Y)^n : #SV(f_{T,λ_n}) ≥ (R_{L,P,H} − ε)n}) → 1.

Remark 3.2 If k is a universal kernel we have R_{L,P,H} = 2R_P (cf. [9]) and thus Theorem 3.1 yields the announced improvement of (3). For non-universal kernels we even have R_{L,P,H} > 2R_P in general.

Remark 3.3 For specific kernels the regularity condition nλ_n^2 / log n → ∞ can be weakened. Namely, for the Gaussian RBF kernel on X ⊂ R^d it can be substituted by nλ_n |log λ_n|^{−d−1} → ∞. Only slightly stronger conditions are sufficient for C^∞-kernels. The interested reader can prove such conditions by establishing (9) using the results of [9].

Remark 3.4 If H is finite dimensional and n > dim H, the representation (2) of f_{T,λ_n} can be simplified such that at most dim H kernel evaluations are necessary. However, this simplification has no impact on the time needed for solving (6).

In order to formulate an upper bound on #SV(f_{T,λ_n}), recall that a function is called analytic if it can be locally represented by its Taylor series. Let L be a loss function, H be a RKHS over X and P be a probability measure on X × Y. We call the pair (H, P) non-trivial (with respect to L) if

  R_{L,P,H} < inf_{b ∈ R} R_{L,P}(b),

i.e. the incorporation of H has a non-trivial effect on the L-risk of P. If H is universal we have R_{L,P,H} = inf{R_{L,P}(f) : f : X → R} (cf. [9]) and therefore (H, P) is non-trivial if P has two non-vanishing classes, i.e. P_X(C_1) > 0 and P_X(C_{−1}) > 0.
Furthermore, we denote the open unit ball of R^d by B_{R^d}. Now our upper bound is:

Theorem 3.5 Let H be the RKHS of an analytic kernel on B_{R^d}. Furthermore, let X ⊂ B_{R^d} be a closed ball and P be a noisy non-degenerate probability measure on X × Y such that P_X has a density with respect to the Lebesgue measure on X and (H, P) is non-trivial. Then for the L1-SVM using a regularization sequence (λ_n) which tends sufficiently slowly to 0 we have

  #SV(f_{T,λ_n}) / n → R_{L,P,H}   in probability.

Probably the most restrictive condition on P in the above theorem is that P_X has to have a density with respect to the Lebesgue measure. Considering the proof, this condition can be slightly weakened to the assumption that every (d−1)-dimensional subset of X has measure zero. Although it would be desirable to exclude only probability measures with discrete components, it is almost obvious that such a condition cannot be sufficient for d > 1 (cf. [10, p. 32]). The assumption that P is noisy and non-degenerate is far less restrictive, since neither completely noise-free problems nor noisy problems with only "coin-flipping" noise often occur in practice. Finally, the condition that (H, P) is non-trivial is more or less implicitly assumed whenever one uses non-trivial classifiers.

Example 3.6 Theorem 3.5 directly applies to polynomial kernels. Note that the limit R_{L,P,H} depends on both P and the choice of the kernel.

Example 3.7 Let k be a Gaussian RBF kernel with RKHS H and X be a closed ball of R^d. Moreover, let P and (λ_n) be according to Theorem 3.5. Recall that k is universal and hence (H, P) is non-trivial iff P has two non-vanishing classes. Since k is also analytic on R^d we find

  #SV(f_{T,λ_n}) / n → 2R_P.

Therefore, (4) shows that in general this L1-SVM produces sparser decision functions than the L2-SVM and the LS-SVM based on a Gaussian RBF kernel (cf. also Theorem 3.11).
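Example 3.7 is easy to probe empirically. The sketch below (our illustration, using scikit-learn's hinge-loss SVC rather than the paper's formulation; the parameter choices are ad hoc) injects coin-flipping label noise at rate η everywhere, so that R_P = η and S_P = 1, and then measures the fraction of support vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Labels are flipped with probability eta everywhere, so the Bayes risk is
# R_P = eta while the noise set has full mass S_P = 1.  Example 3.7 predicts
# #SV/n -> 2*R_P for the L1-SVM with a Gaussian RBF kernel (as n grows and
# lambda_n -> 0 slowly).
rng = np.random.default_rng(0)
n, eta = 2000, 0.1
X = rng.uniform(-1, 1, size=(n, 2))
y = np.where(X[:, 0] > 0, 1, -1)
flip = rng.random(n) < eta
y[flip] = -y[flip]

clf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X, y)   # hinge-loss (L1) SVM
frac = clf.n_support_.sum() / n
print(f"support vector fraction: {frac:.2f}  (2 * Bayes risk = {2 * eta:.2f})")
```

With finite n and a fixed regularization constant the measured fraction sits above the asymptotic value 2R_P = 0.2, but it stays well below the limit of 1 that Theorems 3.9 and 3.11 predict for the L2-SVM and the LS-SVM in this fully noisy setting.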
Remark 3.8 A variant of the L1-SVM that is often considered in theoretical papers is based on the optimization problem (5) with a-priori fixed b := 0. Besides the constraint Σ_{i=1}^n y_i α_i = 0, which no longer appears, the corresponding dual problem is identical to (6). Hence it is easily seen that Theorem 3.1 also holds for this classifier. Moreover, for this modification Theorem 3.5 can be simplified. Namely, the assumption that P is noisy and non-degenerate is superfluous (cf. [8, Prop. 33] to guarantee (14)). In particular, for a Gaussian RBF kernel and noise-free problems P we then obtain

  #SV(f_{T,λ_n}) / n → 0,   (7)

i.e. the number of support vectors increases more slowly than linearly. This motivates the often claimed sparseness of SVM's.

The following theorem shows that the lower bound (4) on #SV(f_{T,λ_n}) for the L2-SVM is often asymptotically optimal. This result is independent of the optimization algorithm used, since we only consider universal kernels and measures with no discrete components.

Theorem 3.9 Let H be the RKHS of an analytic and universal kernel on B_{R^d}. Furthermore, let X ⊂ B_{R^d} be a closed ball and P be a probability measure on X × Y with R_P > 0 such that P_X has a density with respect to the Lebesgue measure on X and (H, P) is non-trivial. Then for the L2-SVM using a regularization sequence (λ_n) which tends sufficiently slowly to 0 we have

  #SV(f_{T,λ_n}) / n → S_P   in probability.

Remark 3.10 For the L2-SVM with fixed offset b := 0 the assumption R_P > 0 in the above theorem is superfluous (cf. proof of Theorem 3.9 and proof of [8, Prop. 33]). In particular, for a Gaussian RBF kernel and noise-free problems P we obtain (7), i.e. for noise-free problems this classifier also tends to produce sparse solutions in the sense of Remark 3.8.

Our last result shows that LS-SVM's often tend to use almost every sample as a support vector:

Theorem 3.11 Let H be the RKHS of an analytic and universal kernel on B_{R^d}.
Furthermore, let X ⊂ B_{R^d} be a closed ball and P be a probability measure on X × Y such that P_X has a density with respect to the Lebesgue measure on X and (H, P) is non-trivial. Then for the LS-SVM using a regularization sequence (λ_n) which tends sufficiently slowly to 0 we have

  #SV(f_{T,λ_n}) / n → 1   in probability.

Remark 3.12 Note that unlike the L1-SVM and the L2-SVM (with fixed offset), the LS-SVM does not tend to produce sparse decision functions for noise-free P. This still holds if one fixes the offset for LS-SVM's, i.e. one considers regularization networks (cf. [11]). The reason for the different behaviours is the margin, as already observed in [12]: the assumptions on H and P ensure that only a very small fraction of samples x_i can be mapped to ±1 by f_{T,λ_n} (cf. also Remark 4.1). For the L2-SVM this asymptotically ensures that most of the samples are mapped to values outside the margin, i.e. y_i f_{T,λ_n}(x_i) > 1 (cf. the properties of B_n \ A_δ in the proof of Theorem 3.9), and it is well known that such samples cannot be support vectors. In contrast to this, the LS-SVM has the property that every point not lying on the margin is a support vector. Using the techniques of our proofs it is fairly easy to see that the same reasoning holds for the hinge loss function compared to "modified hinge loss functions with no margin".

4 Proofs

Let L be a loss function and T be a training set. For a function f : X → R we denote the empirical L-risk of f by

  R_{L,T}(f + b) := (1/n) Σ_{i=1}^n L(y_i(f(x_i) + b)).

Proof of Theorem 3.1: Let (f_{T,λ_n}, b_{T,λ_n}, ξ*) ∈ H × R × R^n and α* ∈ R^n be solutions of (5) and (6) for the regularization parameter λ_n, respectively. Since there is no duality gap between (5) and (6) we have (cf. [4]):

  λ_n⟨f_{T,λ_n}, f_{T,λ_n}⟩ + (1/n) Σ_{i=1}^n ξ*_i = Σ_{i=1}^n α*_i − (1/(4λ_n)) Σ_{i,j=1}^n y_i y_j α*_i α*_j k(x_i, x_j).   (8)

By (2) this yields

  (1/n) Σ_{i=1}^n ξ*_i ≤ 2λ_n⟨f_{T,λ_n}, f_{T,λ_n}⟩ + (1/n) Σ_{i=1}^n ξ*_i = Σ_{i=1}^n α*_i.
Furthermore, recall that λ_n → 0 and nλ_n^2 / log n → ∞ implies

  (1/n) Σ_{i=1}^n ξ*_i = R_{L,T}(f_{T,λ_n} + b_{T,λ_n}) → R_{L,P,H}   (9)

in probability for n → ∞ (cf. [9]), and hence for all ε > 0 the probability of

  Σ_{i=1}^n α*_i ≥ R_{L,P,H} − ε   (10)

tends to 1 for n → ∞. Now let us assume that our training set satisfies (10). Since α*_i ≤ 1/n we then find

  R_{L,P,H} − ε ≤ Σ_{i=1}^n α*_i ≤ Σ_{α*_i ≠ 0} 1/n = (1/n) #SV(f_{T,λ_n}),

which finishes the proof.

For our further considerations we need to consider the optimization problem (1) with respect to P, i.e. we treat the (solvable, see [8]) problem

  (f_{P,λ}, b_{P,λ}) := arg min_{f ∈ H, b ∈ R} λ||f||_H^2 + R_{L,P}(f + b).   (11)

Proof of Theorem 3.5: Since H is the RKHS of an analytic kernel, every function f ∈ H is analytic. Using the holomorphic extension of a non-constant f ∈ H we see (after a suitable complex linear coordinate change, cf. [10, p. 31f]) that for c ∈ R and x_1, ..., x_{d−1} ∈ R the equation f(x_1, ..., x_{d−1}, x_d) = c has at most j solutions x_d, where j ≥ 0 is locally (with respect to x_1, ..., x_{d−1} ∈ R) constant. By a simple compactness argument we hence find

  P_X({x ∈ X : f(x) = c}) > 0  ⇒  f(x) = c  P_X-a.s.   (12)

for all f ∈ H and all c ∈ R. Now, let us suppose that

  P_X({x ∈ X : f_{P,λ}(x) + b_{P,λ} = f_P(x)}) > 0   (13)

for some λ > 0, where f_P denotes the Bayes decision function. Then we may assume without loss of generality that P_X({x ∈ X : f_{P,λ}(x) + b_{P,λ} = 1}) > 0 holds. By (12) this leads to f_{P,λ}(x) + b_{P,λ} = 1 P_X-a.s. However, since R_{L,P}(f_{P,λ} + b_{P,λ}) → R_{L,P,H} for λ → 0 (cf. [9]), we see that f_{P,λ} cannot be constant for small λ, since (H, P) was assumed to be non-trivial. Therefore (13) cannot hold for small λ > 0 and hence we may assume without loss of generality that

  P_X({x ∈ X : |f_{P,λ}(x) + b_{P,λ} − f_P(x)| = 0}) = 0

holds for all λ > 0. We define A_δ(λ) := {x ∈ X : |f_{P,λ}(x) + b_{P,λ} − f_P(x)| ≤ δ} for δ, λ > 0. Our above considerations show that for all λ > 0 there exists a δ > 0 with P_X(A_δ(λ)) ≤ ε. We write δ_λ := (1/2) sup{δ > 0 : P_X(A_δ(λ)) ≤ ε}.
We first show that there exists no sequence λ_n → λ ≠ 0 with δ_{λ_n} → 0. Let us assume the converse. Then there exists a subsequence with (f_{P,λ_{n_j}}, b_{P,λ_{n_j}}) → (f_{P,λ}, b_{P,λ}) weakly, and we have lim sup_{j→∞} A_{3δ_{λ_{n_j}}}(λ_{n_j}) ⊂ A_0(λ). By the construction we have P_X(A_{3δ_{λ_{n_j}}}(λ_{n_j})) ≥ ε and hence P_X(lim sup_{j→∞} A_{3δ_{λ_{n_j}}}(λ_{n_j})) ≥ ε by the Lemma of Fatou. This gives the contradiction P_X(A_0(λ)) ≥ ε. Thus, the increasing function λ ↦ m(λ) := inf{δ_{λ'} : λ' ≥ λ} satisfies m(λ) > 0 for all λ > 0.

We fix a T = ((x_1, y_1), ..., (x_n, y_n)) with

  ||f_{T,λ_n} + b_{T,λ_n} − f_{P,λ_n} − b_{P,λ_n}||_∞ ≤ δ_n,   (14)
  R_{L,T}(f_{T,λ_n} + b_{T,λ_n}) − R_{L,P}(f_{P,λ_n} + b_{P,λ_n}) ≤ ε   (15)

and #{i : x_i ∈ A_{δ_n}(n)} ≤ 2εn. If m^4(λ_n) λ_n^3 n → ∞, the results of [9] and [8] ensure that the probability of such a T converges to 1 for n → ∞. Moreover, by (8) we find

  2λ_n⟨f_{T,λ_n}, f_{T,λ_n}⟩ + R_{L,T}(f_{T,λ_n} + b_{T,λ_n}) = Σ_{i=1}^n α*_i.   (16)

Since f_{T,λ_n} + b_{T,λ_n} and f_{P,λ_n} + b_{P,λ_n} minimize the regularized risks, (15) implies

  λ_n||f_{T,λ_n}||_H^2 + R_{L,T}(f_{T,λ_n} + b_{T,λ_n}) − λ_n||f_{P,λ_n}||_H^2 − R_{L,P}(f_{P,λ_n} + b_{P,λ_n}) ≤ ε.   (17)

Furthermore, if n → ∞ we have

  λ_n||f_{P,λ_n}||_H^2 + R_{L,P}(f_{P,λ_n} + b_{P,λ_n}) → R_{L,P,H}   (18)

(cf. [9]) and therefore we obtain λ_n||f_{T,λ_n}||_H^2 + R_{L,T}(f_{T,λ_n} + b_{T,λ_n}) − R_{L,P,H} ≤ 2ε for large n. Now, (15), (17) and (18) imply λ_n⟨f_{T,λ_n}, f_{T,λ_n}⟩ ≤ 3ε for large n. Hence (16) yields

  R_{L,P,H} + 5ε ≥ Σ_{i=1}^n α*_i   (19)

if n is sufficiently large. Now let us suppose that we have a sample (x_i, y_i) of T with x_i ∉ A_{δ_n}(n). Then we have |f_{P,λ_n}(x_i) + b_{P,λ_n} − f_P(x_i)| > δ_n and hence f_{T,λ_n}(x_i) + b_{T,λ_n} ≠ ±1 by (14). By [4, p. 107] this means either α*_i = 0 or α*_i = 1/n. Therefore, by (19) we find

  R_{L,P,H} + 5ε ≥ Σ_{i=1}^n α*_i ≥ Σ_{x_i ∉ A_{δ_n}(n)} α*_i = (1/n) #{i : x_i ∉ A_{δ_n}(n) and α*_i ≠ 0}.

Since we have at most 2εn samples in A_{δ_n}(n), we finally obtain (1/n) #SV(f_{T,λ_n}) ≤ R_{L,P,H} + 7ε. Now the assertion follows by Theorem 3.1.

Remark 4.1 The proof of Theorem 3.5 is based on a kind of paradox: recall that it was shown in [8] that f_{T,λ_n} + b_{T,λ_n} → f_P on {x ∈ X : P(1|x) ∉ {0, 1/2, 1}} in probability.
However, the assumption on both H and P ensures that for typical T the sets {x ∈ X : |f_{T,λ_n}(x) + b_{T,λ_n} − f_P(x)| ≤ δ} become arbitrarily small for δ → 0. We will apply these seemingly contradicting properties in the following proofs, too.

Proof of Theorem 3.9: Let N := {x ∈ X : 0 < P(1|x) < 1} be the subset of X where P is noisy. Furthermore, let A_δ(n) be defined as in the proof of Theorem 3.5. We write

  B_δ(n) := {x ∈ C_1 \ N : f_{P,λ_n}(x) + b_{P,λ_n} ≥ 1 − δ} ∪ {x ∈ C_{−1} \ N : f_{P,λ_n}(x) + b_{P,λ_n} ≤ −1 + δ}.

By [8, Thm. 22], for all n ≥ 1 there exists a δ > 0 with P_X(B_δ(n)) ≥ P_X(X \ N) − ε. We define δ_n := (1/2) sup{δ > 0 : P_X(A_δ(n)) ≤ ε and P_X(B_δ(n)) ≥ P_X(X \ N) − ε}. Let us fix a training set T = ((x_1, y_1), ..., (x_n, y_n)) with

  ||f_{T,λ_n} + b_{T,λ_n} − f_{P,λ_n} − b_{P,λ_n}||_∞ ≤ δ_n,
  #{i : x_i ∈ B_{δ_n}(n) \ A_{δ_n}(n)} ≥ n(P_X(X \ N) − 3ε).

Again, the probability of such T converges to 1 for n → ∞ whenever (λ_n) converges sufficiently slowly to 0. In view of (4) it suffices to show that no sample x_i ∈ B_{δ_n}(n) \ A_{δ_n}(n) can be a support vector. Given an x_i ∈ B_{δ_n}(n) \ A_{δ_n}(n), we may assume without loss of generality that x_i ∈ C_1. Then x_i ∈ B_{δ_n}(n) implies f_{P,λ_n}(x_i) + b_{P,λ_n} ≥ 1 − δ_n, while x_i ∉ A_{δ_n}(n) yields |f_{P,λ_n}(x_i) + b_{P,λ_n} − 1| > δ_n. Hence we find f_{P,λ_n}(x_i) + b_{P,λ_n} > 1 + δ_n and thus f_{T,λ_n}(x_i) + b_{T,λ_n} > 1. By the Karush-Kuhn-Tucker conditions of the primal/dual optimization problem of the L2-SVM (cf. [4, p. 105]) this shows that x_i is not a support vector.

Proof of Theorem 3.11: Let A_δ(n) and δ_n be defined as in the proof of Theorem 3.5. Without loss of generality we may assume δ_n ∈ (0, 1/2). Let us define C_0 := {x ∈ X : P(1|x) = 1/2} and D_n := {x ∈ C_0 : |f_{P,λ_n}(x) + b_{P,λ_n}| ≤ 1/2}. By [8, Thm. 22] we may assume without loss of generality that P_X(D_n) ≥ P_X(C_0) − ε for all n ≥ 1. Now, let us fix a training set T = ((x_1, y_1), ..., (x_n, y_n)) with

  ||f_{T,λ_n} + b_{T,λ_n} − f_{P,λ_n} − b_{P,λ_n}||_∞ ≤ δ_n,
  #{i : x_i ∈ A_{δ_n}(n)} ≤ 2εn,
  #{i : x_i ∈ D_n} ≥ n(P_X(C_0) − 2ε).

Again, the probability of such T converges to 1 for n → ∞ whenever (λ_n) converges sufficiently slowly to 0.
Now let us consider a sample x_i ∈ (X \ A_{δ_n}(n)) ∩ C_1 of T. Then we have |f_{P,λ_n}(x_i) + b_{P,λ_n} − 1| > δ_n and hence f_{T,λ_n}(x_i) + b_{T,λ_n} ≠ 1. By [8, Cor. 32] this shows that x_i is a support vector. Obviously, the same holds true for samples x_i ∈ (X \ A_{δ_n}(n)) ∩ C_{−1}. Finally, for samples x_i ∈ D_n we have |f_{T,λ_n}(x_i) + b_{T,λ_n}| ≤ 1/2 + δ_n < 1 and hence these samples are always support vectors.

Acknowledgments

I would like to thank D. Hush and C. Scovel for helpful comments.

References

[1] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273–297, 1995.
[2] J.A.K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. Neural Processing Letters, 9:293–300, 1999.
[3] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337–404, 1950.
[4] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[5] B. Schölkopf, R. Herbrich, and A.J. Smola. A generalized representer theorem. In Proceedings of the 14th Annual Conference on Computational Learning Theory, pages 416–426. Lecture Notes in Artificial Intelligence 2111, 2001.
[6] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1997.
[7] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2001.
[8] I. Steinwart. Sparseness of support vector machines. Journal of Machine Learning Research, 4:1071–1105, 2003.
[9] I. Steinwart. Consistency of support vector machines and other regularized kernel machines. IEEE Transactions on Information Theory, to appear.
[10] R.M. Range. Holomorphic Functions and Integral Representations in Several Complex Variables. Springer, 1986.
[11] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7:219–269, 1995.
[12] A. Kowalczyk.
Sparsity of data representation of optimal kernel machine and leave-one-out estimator. In T.K. Leen, T.G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 252–258. MIT Press, 2001.
Towards social robots: Automatic evaluation of human-robot interaction by face detection and expression classification M.S. Bartlett , G. Littlewort , I. Fasel   , J. Chenu   , T. Kanda   , H. Ishiguro   , and J.R. Movellan   Institute for Neural Computation, University of California, San Diego  Intelligent Robotics and Communication Laboratory, ATR, Kyoto Japan. Email: gwen, marni, ian, joel, javier @inc.ucsd.edu Abstract Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life. Face to face communication is a real-time process operating at a time scale of less than a second. In this paper we present progress on a perceptual primitive to automatically detect frontal faces in the video stream and code them with respect to 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. The face finder employs a cascade of feature detectors trained with boosting techniques [13, 2]. The expression recognizer employs a novel combination of Adaboost and SVM’s. The generalization performance to new subjects for a 7-way forced choice was 93.3% and 97% correct on two publicly available datasets. The outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation to code facial expression dynamics in a fully automatic and unobtrusive manner. The system was deployed and evaluated for measuring spontaneous facial expressions in the field in an application for automatic assessment of human-robot interaction. 1 Introduction Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life. Face to face communication is a real-time process operating at a time scale of less than a second. 
Thus fulfilling the idea of machines that interact face to face with us requires development of robust real-time perceptive primitives. In this paper we present first steps towards the development of one such primitive: a system that automatically finds faces in the visual video stream and codes facial expression dynamics in real time. The system automatically detects frontal faces and codes them with respect to 7 dimensions: Joy, sadness, surprise, anger, disgust, fear, and neutral. Speed and accuracy are enhanced by a novel technique that combines feature selection based on Adaboost with feature integration based on support vector machines. We host an online demo of the system at http://mplab.ucsd.edu. The system was trained and tested on two publicly available datasets of facial expressions collected by experimental psychologists expert in facial behavior. In addition, we deployed and evaluated the system in an application for recognizing spontaneous facial expressions from continuous video in the field. We assess the system as a method for automatic measurement of human-robot interaction.

2 Face detection

We developed a real-time face-detection system based on [13] capable of detection and false positive rates equivalent to the best published results [11, 12, 10, 13]. The system consists of a cascade of classifiers trained by boosting techniques. Each classifier employs integral image filters reminiscent of Haar Basis functions, which can be computed very fast at any location and scale in constant time (see Figure 1). In a    pixel window, there are over 160,000 possible filters of this type. For each stage in the cascade, a subset of features is chosen using a feature selection procedure based on Adaboost [3].
We enhance the approach in [13] in the following ways: (1) Once a feature is selected by boosting, we refine the selection by finding the best performing single-feature classifier from a new set of filters generated by shifting and scaling the chosen filter by two pixels in each direction, as well as composite filters made by reflecting each shifted and scaled feature horizontally about the center and superimposing it on the original. This can be thought of as a single generation genetic algorithm, and is much faster than exhaustively searching for the best classifier among all 160,000 possible filters and their reflection-based cousins. (2) While [13] use Adaboost in their feature selection algorithm, which requires binary classifiers, we employed Gentleboost, described in [4], which uses real valued features. Figure 2 shows the first two filters chosen by the system along with the real valued output of the weak learners (or tuning curves) built on those filters. Note the bimodal distribution of filter 2. (3) We have also developed a training procedure so that after each single feature, the system can decide whether to test another feature or to make a decision. This system retains information about the continuous outputs of each feature detector rather than converting to binary decisions at each stage of the cascade. Preliminary results show potential for dramatic improvements in speed with no loss of accuracy over the current system. The face detector was trained on 5000 faces and millions of non-face patches from about 8000 images collected from the web by Compaq Research Laboratories. Accuracy on the CMU-MIT dataset (a standard, public data set for benchmarking frontal face detection systems) is comparable to [13]. 
Because the strong classifiers early in the sequence need very few features to achieve good performance (the first stage can reject  of the nonfaces using only  features, using only 20 simple operations, or about 60 microprocessor instructions), the average number of features that need to be evaluated for each window is very small, making the overall system very fast. The source code for the face detector is freely available at http://www.sourceforge.net/projects/kolmogorov.

3 Facial Expression Classification

3.1 Data set

The facial expression system was trained and tested on Cohn and Kanade's DFAT-504 dataset [6]. This dataset consists of 100 university students ranging in age from 18 to 30 years. 65% were female, 15% were African-American, and 3% were Asian or Latino. Videos were recorded in analog S-video using a camera located directly in front of the subject. Subjects were instructed by an experimenter to perform a series of 23 facial expressions. Subjects began and ended each display with a neutral face.

Figure 1: Integral image filters (after Viola & Jones, 2001 [13]). a. The value of the integral image at a pixel is the sum of all the pixels above and to the left. b. The sum of the pixels within a rectangle can be computed from four values of the integral image. c. Each feature is computed by taking the difference of the sums of the pixels in the white boxes and grey boxes. Features include those shown in (c), as in [13], plus (d) the same features superimposed on their reflection about the Y axis.

Figure 2: The first two features (a, c) and their respective tuning curves (b, d). Each feature is shown over the average face. The first tuning curve shows that a dark horizontal region over a bright horizontal region in the center of the window is evidence for a face, and for non-face otherwise. The output of the second filter is bimodal. Both a strong positive and a strong negative output is evidence for a face, while output closer to zero is evidence for non-face.
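The integral-image trick of Figure 1 is compact enough to sketch directly (our illustration, following the description above; the 4x4 test image is arbitrary):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so that ii[y, x] holds the sum of img over all pixels
    above and to the left, inclusive (Figure 1a)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from four lookups (Figure 1b)."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
# A two-rectangle Haar-like feature (Figure 1c): white box minus grey box,
# here top half minus bottom half: 28 - 92 = -64.
feature = rect_sum(ii, 0, 0, 1, 3) - rect_sum(ii, 2, 0, 3, 3)
print(feature)   # -64.0
```

Because every rectangle sum costs four lookups regardless of its size, the filters can indeed be evaluated at any location and scale in constant time.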
Before performing each display, an experimenter described and modeled the desired display. Image sequences from neutral to target display were digitized into 640 by 480 pixel arrays with 8-bit precision for grayscale values. For our study, we selected 313 sequences from the dataset. The only selection criterion was that a sequence be labeled as one of the 6 basic emotions. The sequences came from 90 subjects, with 1 to 6 emotions per subject. The first and last frames (neutral and peak) were used as training images and for testing generalization to new subjects, for a total of 625 examples. The trained classifiers were later applied to the entire sequence. All faces in this dataset were successfully detected. The automatically located faces were rescaled to 48x48 pixels. The typical distance between the centers of the eyes was roughly 24 pixels. A comparison was also made at double resolution (96x96). No further registration was performed. Other approaches to automatic facial expression recognition include explicit detection and alignment of internal facial features. The recognition system presented here performs well without that step, providing a considerable savings in processing time. The images were converted into a Gabor magnitude representation, using a bank of Gabor filters at 8 orientations and 5 spatial frequencies (4:16 pixels per cycle at 1/2 octave steps) [7].

4 SVM's and Adaboost

SVM performance was compared to Adaboost for emotion classification. The system performed a 7-way forced choice between the following emotion categories: Happiness, sadness, surprise, disgust, fear, anger, neutral. The classification was performed in two stages. First, seven binary classifiers were trained to discriminate each emotion from everything else. The emotion category decision was then implemented by choosing the classifier with the maximum output for the test example.
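The Gabor magnitude representation described above can be sketched as follows (our illustration; the kernel size and the envelope width tied to the wavelength are assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(wavelength, theta, size=15, sigma=None):
    # Complex Gabor filter: Gaussian envelope times a complex sinusoid along
    # orientation theta.  Tying sigma to the wavelength is a common (assumed) choice.
    sigma = sigma or 0.5 * wavelength
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def gabor_magnitudes(img, wavelengths=(4, 5.7, 8, 11.3, 16), n_orient=8):
    """Magnitude responses for a bank of 8 orientations x 5 frequencies
    (4:16 pixels per cycle at roughly 1/2-octave steps), as in the text."""
    out = []
    for lam in wavelengths:
        for k in range(n_orient):
            g = gabor_kernel(lam, np.pi * k / n_orient)
            out.append(np.abs(fftconvolve(img, g, mode="same")))
    return np.stack(out)             # shape: (40, H, W)

img = np.random.default_rng(0).random((48, 48))
rep = gabor_magnitudes(img)
print(rep.shape)   # (40, 48, 48)
```

Each 48x48 face thus yields 48 x 48 x 40 = 92,160 real-valued magnitude features, which is exactly the feature pool the Adaboost selection below operates on.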
Support vector machines (SVM's) are well suited to this task because the high dimensionality of the Gabor representation does not affect training time for kernel classifiers. Linear, polynomial, and RBF kernels with Laplacian and Gaussian basis functions were explored. Linear and RBF kernels employing a unit-width Gaussian performed best, and are presented here. Generalization to novel subjects was tested using leave-one-subject-out cross-validation. Results are presented in Table 1. The features employed for the Adaboost emotion classifier were the individual Gabor filters. There were 48x48x40 = 92160 possible features. A subset of these filters was chosen using Adaboost. On each training round, the threshold and scale parameter of each filter were optimized and the feature that provided best performance on the boosted distribution was chosen. During Adaboost, training for each emotion classifier continued until the distributions for the positive and negative samples were separated by a gap proportional to the widths of the two distributions. The total number of filters selected using this procedure was 538. Since Adaboost is significantly slower to train than SVM's, we did not do 'leave one subject out' cross validation. Instead we separated the subjects randomly into ten groups of roughly equal size and did 'leave one group out' cross validation. SVM performance for this training strategy is shown for comparison. Results are shown in Table 1. The generalization performance, 85.0%, was comparable to linear SVM performance on the leave-group-out testing paradigm, but Adaboost was substantially faster, as shown in Table 2. Here, the system calculated the output of Gabor filters less efficiently, as the convolutions were done in pixel space rather than Fourier space, but the use of 200 times fewer Gabor filters nevertheless resulted in a substantial speed benefit.
5 AdaSVM's

Adaboost provides an added value of choosing which features are most informative to test at each step in the cascade. Figure 3a illustrates the first 5 Gabor features chosen for each emotion. The chosen features show no preference for direction, but the highest frequencies are chosen more often. Figure 3b shows the number of chosen features at each of the 5 wavelengths used. A combination approach, in which the Gabor features chosen by Adaboost were used as a reduced representation for training SVM's (AdaSVM's), outperformed Adaboost by 3.8 percentage points, a difference that was statistically significant (z = 1.99, p = 0.02). AdaSVM's outperformed SVM's by an average of 2.7 percentage points, an improvement that was marginally significant (z = 1.55, p = 0.06). After examination of the frequency distribution of the Gabor filters selected by Adaboost, it became apparent that higher spatial frequency Gabors and higher resolution images could potentially improve performance. Indeed, doubling the resolution to 96x96 and increasing the number of Gabor wavelengths from 5 to 9, so that they spanned 2:32 pixels in 1/2 octave steps, improved performance of the nonlinear AdaSVM to 93.3% correct. As the resolution goes up, the speed benefit of AdaSVM's becomes even more apparent. At the higher resolution, the full Gabor representation increased by a factor of 7, whereas the number of Gabors selected by Adaboost only increased by a factor of 1.75.

Figure 3: a. Gabors selected by Adaboost for each expression (anger, disgust, fear, joy, sadness, surprise). White dots indicate locations of all selected Gabors. Below each expression is a linear combination of the real part of the first 5 Adaboost features selected for that expression. Faces shown are a mean of 10 individuals. b. Wavelength distribution of features selected by Adaboost.
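The AdaSVM combination can be mimicked on synthetic data with off-the-shelf components (a scikit-learn sketch of the idea, not the paper's Gabor-based implementation; depth-1 decision stumps stand in for the single-filter weak learners):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=200, n_informative=10,
                           random_state=0)

# Boosting over depth-1 stumps: each round picks one feature, so the set of
# root-split features plays the role of the Adaboost-chosen Gabor filters.
booster = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=50, random_state=0).fit(X, y)
chosen = np.unique([t.tree_.feature[0] for t in booster.estimators_])
chosen = chosen[chosen >= 0]                    # drop any leaf-only stumps
print(f"Adaboost selected {len(chosen)} of {X.shape[1]} features")

# 'AdaSVM': an SVM trained on the boosted feature subset only.
svm_full = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
svm_ada = cross_val_score(SVC(kernel="linear"), X[:, chosen], y, cv=5).mean()
print(f"SVM on all features: {svm_full:.3f}, AdaSVM: {svm_ada:.3f}")
```

The design rationale is the same as in the paper: boosting does the cheap, greedy feature selection, and the SVM then integrates the surviving features with a margin-based decision rule.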
Performance of the system was also evaluated on a second publicly available dataset, Pictures of Facial Affect [1]. We obtained 97% accuracy for generalization to novel subjects, trained by leave-one-subject-out cross-validation. This is about 10 percentage points higher than the best previously reported results on this dataset [9, 8]. An emergent property was that the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation to code facial expression dynamics in a fully automatic and unobtrusive manner. (See Figure 5.) In the next section, we apply this system to assessing spontaneous facial expressions in the field.

Table 1: Performance of Adaboost, SVM's and AdaSVM's (48x48 images).

            Leave-group-out        Leave-subject-out
            Adaboost    SVM        SVM       AdaSVM
  Linear    85.0        84.8       86.2      88.8
  RBF       --          86.9       88.0      90.7

Table 2: Processing time and memory considerations. Time t# includes the extra time to calculate the outputs of the 538 Gabors in pixel space for Adaboost and AdaSVM, rather than the full FFT employed by the SVM's.

            SVM Lin    SVM RBF    Adaboost    AdaSVM Lin    AdaSVM RBF
  Time      t          90t        0.01t       0.01t         0.0125t
  Time t#   t          90t        0.16t       0.16t         0.2t
  Memory    m          90m        3m          3m            3.3m

6 Deployment and evaluation: Automatic Evaluation of Human-Robot Interaction

We are currently evaluating the system as a tool for automatically measuring the quality of human-robot social interaction. This test involves recognition of spontaneous facial expressions in the continuous video stream during unconstrained interaction with RoboVie, a social robot under development at ATR and the University of Osaka [5]. This study was conducted at ATR in Kyoto, Japan. 14 participants, male and female, were instructed to interact with RoboVie for 5 minutes. Their facial expressions were recorded via 4 video cameras. The study was followed by a questionnaire in which the participants were asked to evaluate different aspects of their interaction with RoboVie.
Figure 4: Human response during interaction with the RoboVie robot at ATR is measured by automatic expression analysis.

Faces were automatically detected and facial expressions classified in the continuous video streams of each of the four cameras. With the multi-camera paradigm, one or more cameras often provides a better view than the others. When the face is rotated, partially occluded, or misaligned, the expression classification is less reliable. A confidence measure from the face detection step consisted of the final unthresholded output of the cascade passed through a softmax transform over the four cameras. This measure indicated how much like a frontal face the system determined the selected window from each camera to be. We compared the system's expression labels with a form of ground truth from human judgment. Four naive human observers were presented with the videos of each subject at 1/3 speed. The observers indicated the amount of happiness shown by the subject in each video by turning a dial. The outputs of the four cameras were integrated by training a linear regression on 32 numbers, the continuous outputs of the seven emotion classifiers (the margin) plus the confidence measure from the face detector for each of the four cameras, to predict the human facial expression judgments. Figure 5 compares the human judgments with the automated system. Preliminary results are promising. The automated system predicted the human expression judgments with a correlation coefficient of 0.87, which was within the agreement range of the four human observers.* We are also comparing facial expression measurements by both human and computer to the self-report questionnaires.

* These are results from one subject. Test results based on 14 subjects will be available in one week.

Figure 5: Human labels (blue/dark) compared to automated system labels (red/light) for 'joy' (one subject, one observer); the x-axis is the frame number.
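The camera-fusion step can be sketched as follows. The function names and the synthetic training data are ours; only the 32-number layout (7 margins x 4 cameras, plus one softmax confidence per camera) and the linear-regression fit come from the text:

```python
import numpy as np

def softmax(z):
    """Softmax over the four cameras' unthresholded face-detector outputs."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def fuse_cameras(margins, face_scores):
    """Build the 32-number feature vector: 7 emotion margins from each of
    4 cameras plus one softmax confidence per camera (function names ours)."""
    conf = softmax(np.asarray(face_scores, dtype=float))
    return np.concatenate([np.asarray(margins, dtype=float).ravel(), conf])

# Fit the fusion: ordinary least-squares regression from the 32 features to
# the human dial judgment, one row per frame (synthetic stand-in data here).
rng = np.random.default_rng(1)
F = rng.standard_normal((100, 32))
judgments = F @ rng.standard_normal(32) + 0.1 * rng.standard_normal(100)
w, *_ = np.linalg.lstsq(F, judgments, rcond=None)
```

The softmax confidences make the most frontal-looking camera dominate the regression input, which matches the stated purpose of the face-detector confidence measure.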
7 Conclusions Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life. Social robots and agents designed to recognize facial expression might provide a much more interesting and engaging social interaction, which can benefit applications from automated tutors to entertainment robots. Face to face communication is a real-time process operating at a time scale of less than a second. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive: Real time recognition of facial expressions. Our results suggest that user independent fully automatic real time coding of basic expressions is an achievable goal with present computer power, at least for applications in which frontal views or multiple cameras can be assumed. Good performance results were obtained for directly processing the output of an automatic face detector without the need for explicit detection and registration of facial features. A novel classification technique was presented that combines feature selection based on Adaboost with feature integration based on support vector machines. The AdaSVM’s outperformed Adaboost and SVM’s alone, and gave a considerable advantage in speed over SVM’s. Strong performance results, 93% and 97% accuracy for generalization to novel subjects, were presented for two publicly available datasets of facial expressions collected by experimental psychologists expert in facial expressions. We introduced a technique for automatically evaluating the quality of human-robot interaction based on the analysis of facial expressions. This test involved recognition of spontaneous facial expressions in the continuous video stream during unconstrained behavior. 
The system predicted human judgments of joy with a correlation of 0.87. Within the past decade, significant advances in machine learning and machine perception have opened up the possibility of automatic analysis of facial expressions. Automated systems will have a tremendous impact on basic research by making facial expression measurement more accessible as a behavioral measure, and by providing data on the dynamics of facial behavior at a resolution that was previously unavailable. Such systems will also lay the foundations for computers that can understand this critical aspect of human communication. Computer systems with this capability have a wide range of applications in basic and applied research areas, including man-machine communication, security, law enforcement, psychiatry, education, and telecommunications. Acknowledgments Support for this project was provided by ONR N00014-02-1-0616, NSF-ITR IIS-0220141 and IIS-0086107, DCI contract No. 2000-I-058500-000, California Digital Media Innovation Program DiMI 01-10130, and the MIND Institute. This research was supported in part by the Telecommunications Advancement Organization of Japan. References [1] P. Ekman and W. Friesen. Pictures of facial affect. Photographs, 1976. Available from Human Interaction Laboratory, UCSF, HIL-0984, San Francisco, CA 94143. [2] I. Fasel and J. R. Movellan. Comparison of neurally inspired face detection algorithms. In Proceedings of the International Conference on Artificial Neural Networks (ICANN 2002). UAM, 2002. [3] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In Proc. 13th International Conference on Machine Learning, pages 148–156. Morgan Kaufmann, 1996. [4] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. Annals of Statistics, 28(2):337–374, 2000. [5] H. Ishiguro, T. Ono, M. Imai, T. Maeda, T. Kanda, and R. Nakatsu. Robovie: an interactive humanoid robot. 28(6):498–503, 2001. [6] T. 
Kanade, J. F. Cohn, and Y. Tian. Comprehensive database for facial expression analysis. In Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (FG'00), pages 46–53, Grenoble, France, 2000. [7] M. Lades, J. Vorbrüggen, J. Buhmann, J. Lange, W. Konen, C. von der Malsburg, and R. Würtz. Distortion invariant object recognition in the dynamic link architecture. IEEE Transactions on Computers, 42(3):300–311, 1993. [8] M. Lyons, J. Budynek, A. Plante, and S. Akamatsu. Classifying facial attributes using a 2-D Gabor wavelet representation and discriminant analysis. In Proceedings of the 4th International Conference on Automatic Face and Gesture Recognition, pages 202–207, 2000. [9] C. Padgett and G. Cottrell. Representing face images for emotion classification. In M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9, Cambridge, MA, 1997. MIT Press. [10] H. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(1):23–38, 1998. [11] H. Schneiderman and T. Kanade. Probabilistic modeling of local appearance and spatial relationships for object recognition. In Proc. IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 45–51, 1998. [12] K. K. Sung and T. Poggio. Example based learning for view-based human face detection. Technical Report AIM-1521, 1994. [13] P. Viola and M. Jones. Robust real-time object detection. Technical Report CRL 2001/01, Cambridge Research Laboratory, 2001.
Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control Francesco Tenore1, Ralph Etienne-Cummings1,2, M. Anthony Lewis3 1Dept. of Electrical & Computer Eng., Johns Hopkins University, Baltimore, MD 21218 2Institute of Systems Research, University of Maryland, College Park, MD 20742 3Iguana Robotics, Inc., P.O. Box 625, Urbana, IL 61803 {fra, retienne}@jhu.edu, tlewis@iguana-robotics.com Abstract We have constructed a second generation CPG chip capable of generating the necessary timing to control the leg of a walking machine. We demonstrate improvements over a previous chip by moving toward a significantly more versatile device. This includes a larger number of silicon neurons, more sophisticated neurons including voltage dependent charging and relative and absolute refractory periods, and enhanced programmability of neural networks. This chip builds on the basic results achieved on a previous chip and expands its versatility to get closer to a self-contained locomotion controller for walking robots. 1 Introduction Legged locomotion is a system level behavior that engages most senses and activates most muscles in the human body. Understanding of biological systems is exceedingly difficult and usually defies any unifying analysis. Walking behavior is no exception. Theories of walking are likely incomplete, often in ways that are invisible to the scientists studying these behaviors in animal or human systems. Biological systems often fill in gaps and details. One way of exposing our incomplete understanding is through the process of synthesis. In this paper we report on continued progress in building the basic elements of a motor pattern generator sufficient to control a legged robot. The focus of this paper is on a 2nd generation chip that incorporates new features which we feel are important for legged locomotion. An essential element of most locomotory systems is the Central Pattern Generator (CPG). 
The CPG is a set of neural circuits found in the spinal cord, arranged to produce oscillatory periodic waveforms that activate muscles in a coordinated manner. CPGs are neural primitives used in most rhythmic biological systems, such as the respiratory, digestive, and locomotory systems. In the locomotory system, CPGs are constructed from neurons coupled together to produce the phasic relationships required to achieve coordinated gait-type movements. The CPG is more than a clock, or even a network of oscillators. Phenomena such as reflex reversal [7] can only be understood in terms of a system that has at least one additional state variable over sensory information alone. The CPG or similar circuits are certainly involved in modulation of sensory information from the periphery [5] and are of primary importance in providing phase information to the cerebellum. This information is necessary for coordination of the brain and the spinal cord [6]. Currently, there are two extremes in using CPGs for control of mechanical devices. The first is to be as faithful to the biology as possible, and then to discover how biological systems can assist in the control of complex machines. This approach is similar to that of Rasche et al. [1], based on the Hodgkin-Huxley model [3], and the one implemented by Simoni and DeWeerth [2], based on the Morris-Lecar model [4]. These ion-channel based models imply a very large parameter space, making them difficult to work with in silicon, yet inviting direct comparison with biological counterparts. Our approach starts from the other direction. A system of minimal complexity was built [8,9], and then the question was asked of what additional features should be added to this minimal system to enable a behavior that is missing in the previous design. Thus, the two approaches start from different philosophical grounds, but will, hopefully, converge on similar solutions. 
The motivation behind choosing a self-contained silicon system rather than a software implementation is that the former will use less power and be more compact and more amenable to the control of a power-autonomous robot. Previously, a minimal system chip was built using integrate-and-fire neurons controlling a rudimentary robot [8, 9]. The chip described in this paper is an evolution of that one. Its main differences from the previous version are the following. The previous chip contained 2 spiking motoneurons and 2 pacemaker neurons, whereas the current chip contains 10 neurons of either type. More importantly, all the synapse weights (22 per neuron) are on-chip and can be used to make a synapse excitatory or inhibitory, while the previous version weighted the synapse signals outside the chip. The current chip also has 10 feedback synapses, making all the neurons interconnected. Moreover, the current chip has the capability of receiving and weighting up to 8 external inputs (instead of 2), such as sensory feedback signals, to allow better control of the CPG. Better tuning of the pacemaker and spiking motoneurons created by the chip is achieved through direct modulation of the pulse width, of the absolute or relative refractory period, and of the discharge strength of each neuron. Finally, the charging and discharging of the neurons' membrane capacitance is an exponential function of time, as opposed to the linear function that the previous chip exhibited. This allows for better coupling between CPGs (unpublished observation). In this paper, after explaining the architecture of the chip and how simple networks can be created, a robotic application will be described. The paper will show, through analysis and experiments, that entrainment of multiple CPGs can be achieved by using direct coupling. 
Finally, the oscillatory patterns used to control a single-legged robot are implemented in this chip. 2 Architecture The CPG emulator chip was fabricated in silicon using a 0.5 µm CMOS process. The chip was designed to provide plausible electronic counterparts of biological elements, such as neurons, synapses, cell membranes, axons, and axon hillocks, for controlling motor systems. The chip also contains digital memories that can be used with synapses to modify weights or to modulate the membrane conductance. Through these components, it is possible to construct non-linear oscillators, which are based on the central pattern generators of biological organisms. The chip’s architecture can be seen in figure 1. It is made up of 10 fully interconnected “neurons” and 22 “synapses” per neuron. Communication with a particular neuron/synapse pair occurs through the address register, made up of the neuron/row register and the synapse/column register. Finally, a weight/data register allows a tunable amount of current to flow onto or away from the “neurons’ axons.” Figure 2 shows a detailed view of a single neuron. As can be seen, all neurons are integrate-and-fire type neurons, in which the current that flows on the axon charges up the membrane capacitor, Cmem. When the voltage across the capacitor reaches a certain threshold, Vthresh, the hysteretic comparator output goes high. The output of the comparator does not change if the discharge and refractory period controls are disabled. Normally, however, the discharge controller is active and its function is to decrease the voltage on the membrane capacitance until it drops below the hysteretic comparator’s lower threshold. The comparator output then goes low, the discharge is halted, and the capacitor can charge up again, thereby making the process start anew. 
The i-th neuron can be modeled through the following set of equations:

$$C^i_{mem} \frac{dV^i_{mem}}{dt} = \sum_j W^+_{ij} I_j - \sum_k W^-_{ik} I_k - S_i I_{dis} - S_i I_{refrac} \qquad (1)$$

$$S_i(t + dt) = \begin{cases} 1 & \text{if } \left(S_i(t) = 1 \wedge V^i_{mem} > V_T^-\right) \vee \left(V^i_{mem} > V_T^+\right) \\ 0 & \text{if } \left(S_i(t) = 0 \wedge V^i_{mem} < V_T^+\right) \vee \left(V^i_{mem} < V_T^-\right) \end{cases} \qquad (2)$$

where $C^i_{mem}$ is the membrane capacitance of the i-th neuron, $V_T^+$ and $V_T^-$ are respectively the high and low thresholds of the hysteretic comparator, $V^i_{mem}$ is the voltage on the capacitor, $S_i(t)$ is the state of the hysteretic comparator at time t, $W^+_{ij}$ is the excitatory weight on the j-th excitatory synapse of the i-th neuron, and similarly $W^-_{ik}$ is the inhibitory weight on the k-th inhibitory synapse of the i-th neuron. The discharge and refractory currents, $I_{dis}$ and $I_{refrac}$, correspond to the discharge and refractory period rates, respectively.

Figure 1. Top: chip micrograph, 3.3x2.1 mm². The 22 synapses per neuron (vertical lines) are distinguishable. Bottom: system block diagram (neuron/row select, synapse/column select, and weight value registers feeding the ten fully interconnected neurons, whose outputs Vout1–Vout10 are fed back).

The speed with which the comparator changes state depends on the amount of current that the weight, or weights, set on or remove from the "axon". The weights are set through 8-bit digital-to-analog converters (DACs) and stored in static random access memory (SRAM) cells. A ninth bit selects the type of weight, either excitatory or inhibitory. Finally, the three blocks that depend on the comparator output work as follows. A weight can be set on any one of these three blocks, just as was done for the synapses. This allows modulation of the discharge strength, of the refractory period, and of the pulse width. The refractory period control element prevents current from charging up the capacitor for as long as it is active. 
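A rough numerical sketch of equations (1)-(2) for a single uncoupled neuron, with illustrative parameter values rather than the chip's: a constant input current charges the membrane, and the hysteretic comparator plus discharge current produce the oscillation.

```python
def simulate_neuron(I_in, C=1.0, VT_hi=1.0, VT_lo=0.2, I_dis=5.0,
                    dt=1e-3, steps=5000):
    """Single uncoupled neuron from equations (1)-(2): the input current
    charges V until it crosses the comparator's high threshold, the state
    S latches high, and the discharge current pulls V back below the low
    threshold. All parameter values here are illustrative, not the chip's."""
    V, S, trace = 0.0, 0, []
    for _ in range(steps):
        V += (I_in - S * I_dis) / C * dt     # Euler step of equation (1)
        if S == 0 and V > VT_hi:             # hysteretic comparator, eq. (2)
            S = 1
        elif S == 1 and V < VT_lo:
            S = 0
        trace.append((V, S))
    return trace
```

With a slow charge/discharge this traces the pacemaker envelope behavior described below; with fast currents the same dynamics produce motor-neuron-like spiking.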
It can be both relative and absolute, depending on its weight. The pulse-width block allows independent control of the output duty cycle by modifying the amount of time the output is high. As can be seen in figure 2, the output from the PW control block is both the neuron output and the feedback signal to all the neurons, including itself (self-feedback). The chip is thus fully interconnected. From figure 2, four types of synapses can be identified. The first is the internal bias synapse, which allows current to flow onto or away from the membrane capacitor, depending on the type of bias it has, without requiring signals from inside the chip. The analog and digital synapses require the presence of an external analog or digital voltage to allow current to flow on the capacitor. The feedback synapses are also internal to the chip and allow the neurons to influence each other by modulating the charge-up of the membrane capacitors they are acting upon. One of these synapses thus provides self-feedback for a particular neuron. These synapses are dual mode, in that they can either excite or inhibit. The 3 final synapses are used to control the discharge strength, the refractory period, and the pulse width. It is thus possible to attain two types of waveforms at each neuron output, depending on the current charging the capacitor. If the current charges up and discharges the capacitor very quickly, the output is similar to that of a motor neuron. If the current charges and discharges the capacitor slowly, then the output is that of a pacemaker envelope neuron, which makes up the CPG. 3 Networks Two simple networks built with this chip are described in this section to illustrate how the chip operates. The first example is shown in figure 3. 
Figure 2. Block diagram of a single neuron. The neuron output is fed back to all the neurons including itself (Vout1 is also a feedback signal).

Figure 3. An envelope neuron exciting a motor neuron. The output waveforms are 180º out-of-phase.

Figure 4. Master-slave relationship. When the master spikes, the membrane potential increases for the duration of the spike.

A pacemaker neuron controls the spiking of a motor neuron such that the spiking only occurs if the envelope is high. This is done using the internal biasing synapse to charge up the membrane capacitance of the envelope neuron and the feedback synapse coming from the envelope neuron to charge up the capacitor of the motor neuron. Similarly, the envelope neuron can inhibit the spiking which would otherwise occur at a constant rate through the bias synapse. Note that the bias synapse can either be internally generated, as the one shown in figure 3, or be one of the external analog or digital synapses seen in figure 2. A second example, shown in figure 4, depicts the effects of a single spike on an envelope neuron. Depending on where the spike occurs with respect to the slave envelope neuron, it will either accelerate the charge-up or decelerate the discharge. In this example, the spike occurred during the membrane potential's discharge phase. 
The membrane potential's output voltage is shown within the slave output waveform. The two horizontal lines that delimit it represent the hysteretic comparator's threshold voltages. Thus, the slave stays high for a longer period of time, thereby decreasing its normal frequency of oscillation. It is therefore possible to entrain the slave oscillator to the frequency of the master. This can be done either by increasing the duration of the master spike, increasing the feedback weight with which the master controls the slave, or simply by increasing the spike frequency. For example, in this latter case, if the master frequency is higher than the slave's, then the spike will accelerate the slave such that it reaches the same period.

Figure 5. CPG entrainment.

Figure 6. Phase delay between master envelope and spike entrainer.

4 Analysis of pulse coupling

To show that it is possible to entrain two oscillators to have the same frequency but alter the phase at will, such that any phase between the two waveforms can be achieved, it is necessary to use a configuration similar to the one described in the previous section. A master and a slave oscillator with different frequencies, both with approximately 50% duty cycle, are set up as shown in figure 5. Another neuron is used to generate a single spike during the master's pulse width, called the entrainer spike. It is evoked by the input from the master and has the same frequency, but its phase depends on the strength of the feedback synapse between these two cells. The spike's discharge occurs very slowly, but to ensure that no residual charge is left on the capacitor, a fourth neuron, 180º out-of-phase with the master, is used. When this neuron is high, it sends a strong inhibition signal to the spike, thereby resetting it. 
At this point, the spike can be used for synchronizing the slave oscillator. As described previously, if the slave oscillator's frequency is lower than the master's (and therefore that of the spike's), the spike's effect is to accelerate the slave until the two are synchronized. This allows two pacemaker neurons to be out-of-phase by an arbitrary angle. This is shown in figure 6, where the coupling weight between master and slave was systematically altered and the resulting phase variation was recorded. To fine tune the slave oscillator's desired phase difference, once the spike master has been set, it is necessary to tune the feedback strength between the spike and the slave oscillator. A stronger feedback will allow the two signals to happen virtually at the same time; a weaker weight will cause some delay between the two. Lewis and Bekey show that adaptation of time is critical to controlling walking in a robot [10]. Finally, figure 7 shows a map function obtained using a 4.4 ms spike pulse width. A map function depicts the effect of a spike on a pacemaker neuron at all possible phases. The curve shows a slope smaller than 1 (in absolute value) in the transition region, which implies that the system is asymptotically stable [9].

Figure 7. Map function illustrating the coupling behavior between two neurons (4.4 ms pulse width); in the transition region the slope of the map is below 1 in absolute value.

5 Experiment

To build on all the results achieved, the oscillatory patterns necessary to control a single-legged robot were synthesized. Figure 8 shows the waveforms generated to control a hip's flexor and extensor muscles and the ipsilateral knee's flexor and extensor. These waveforms were generated using all 10 available neurons with the procedures described previously. The hip flexor and extensor are 180º out-of-phase to each other. 
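The stability argument can be illustrated with any return map whose slope has magnitude below 1. The linear map below is hypothetical (not measured from the chip); it only shows why such a slope makes the phase difference contract onto a fixed point under repeated spikes:

```python
def phase_map(phi, coupling=0.3):
    """Hypothetical linear phase-return map for a pulse-coupled oscillator
    pair: each master spike pulls the slave's phase offset toward the fixed
    point at 0. Purely illustrative of the |slope| < 1 stability criterion."""
    return ((1.0 - coupling) * phi) % 1.0

phi = 0.9
for _ in range(50):
    phi = phase_map(phi)
# with |slope| = 0.7 < 1 the iterates contract onto the fixed point
```

The measured map in figure 7 plays the same role: since its slope in the transition region is below 1 in absolute value, iterating it drives the phase error to zero.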
The left knee extensor is slightly out-of-phase with its respective hip muscle, but the width of the waveform's pulse is shorter than that of the hip extensor. As can be seen, the knee flexor has two bumps, where the purpose of the first bump is to stabilize the knee when the foot hits the substrate. The waveforms depicted are necessary to drive a robotic leg with a standard walking gait. Different gaits will have waveforms with different phase relationships. However, the results shown in the previous sections show that these waveforms, through simple variations of the timing parameters described, can be generated with ease. 6 Conclusions The waveforms needed to control a robotic leg can be generated using the silicon chip described in this paper. The phase differences between the waveforms, however, change depending on the type of gait that one wants to implement in a robot. The results obtained show that any phase difference between two or more waveforms can be achieved, thus making any gait effectively achievable. Furthermore, the map function that resulted from on-chip measurements showed that the chip has the capability of asymptotic coupling stability.

Figure 8. Waveforms generated to control a robotic leg.

References [1]. C. Rasche, R. Douglas, M. Mahowald, "Characterization of a pyramidal silicon neuron," Neuromorphic Systems: Engineering Silicon from Neurobiology, L. S. Smith and A. Hamilton, eds., World Scientific, 1st edition, 1998. [2]. M. Simoni, S. DeWeerth, "Adaptation in an aVLSI model of a neuron," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 46(7):967–970, 1999. [3]. A. L. Hodgkin, A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," Journal of Physiology (Lond.), 117:500–544, 1952. [4]. C. Morris, H. Lecar, "Voltage oscillations in the barnacle giant muscle fiber," Biophysics J., vol. 35, pp. 193–213, 1981. [5]. Y. I. Arshavsky, I. M. 
Gelfand, and G. N. Orlovsky, "The cerebellum and control of rhythmic movements," TINS, vol. 6, pp. 417–422, 1983. [6]. A. H. Cohen, D. L. Boothe, "Sensorimotor interactions during locomotion: principles derived from biological systems," Autonomous Robots, special issue on biomorphic robots, M. A. Lewis and M. A. Arbib, eds., vol. 7, pp. 225–238, 1999. [7]. H. Forssberg, S. Grillner, S. Rossignol, "Phase dependent reflex reversal during walking in chronic spinal cats," Brain Research, vol. 85, pp. 103–107, 1975. [8]. M. A. Lewis, R. Etienne-Cummings, A. H. Cohen, M. Hartmann, "Toward biomorphic control using custom aVLSI chips," Proceedings of the International Conference on Robotics and Automation, San Francisco, CA, 2000. [9]. M. A. Lewis, R. Etienne-Cummings, M. J. Hartmann, A. H. Cohen, Z. R. Xu, "An in silico central pattern generator: silicon oscillator, coupling, entrainment, and physical computation," Biological Cybernetics, 88(2):137–151, 2003. [10]. M. A. Lewis and G. A. Bekey, "Gait adaptation in a quadruped robot," Autonomous Robots, 12(3):301–312, 2002.
Fast Embedding of Sparse Music Similarity Graphs John C. Platt Microsoft Research 1 Microsoft Way Redmond, WA 98052 USA jplatt@microsoft.com Abstract This paper applies fast sparse multidimensional scaling (MDS) to a large graph of music similarity, with 267K vertices that represent artists, albums, and tracks; and 3.22M edges that represent similarity between those entities. Once vertices are assigned locations in a Euclidean space, the locations can be used to browse music and to generate playlists. MDS on very large sparse graphs can be effectively performed by a family of algorithms called Rectangular Dijkstra (RD) MDS algorithms. These RD algorithms operate on a dense rectangular slice of the distance matrix, created by calling Dijkstra a constant number of times. Two RD algorithms are compared: Landmark MDS, which uses the Nyström approximation to perform MDS; and a new algorithm called Fast Sparse Embedding, which uses FastMap. These algorithms compare favorably to Laplacian Eigenmaps, both in terms of speed and embedding quality. 1 Introduction This paper examines a general problem: given a sparse graph of similarities between a set of objects, quickly assign each object a location in a low-dimensional Euclidean space. This general problem can arise in several different applications: the paper addresses a specific application to music similarity. In the case of music similarity, a set of musical entities (i.e., artists, albums, tracks) must be placed in a low-dimensional space. Human editors have already supplied a graph of similarities, e.g., artist A is similar to artist B. There are three good reasons to embed a musical similarity graph: 1. Visualization — If a user's musical collection is placed in two dimensions, it can be easily visualized on a display. This visualization can aid musical browsing. 2. Interpolation — Given a graph of similarities, it is simple to find music that "sounds like" other music. 
However, once music is embedded in a low-dimensional space, new user interfaces are enabled. For example, a user can specify a playlist by starting at song A and ending at song B, with the songs in the playlist smoothly interpolating between A and B. 3. Compression — In order to estimate "sounds like" directly from a graph of music similarities, the user must have access to the graph of all known music. However, once all of the musical entities are embedded, the coordinates for the music in a user's collection can be shipped down to the user's computer. These coordinates are much smaller than the entire graph. It is important to have algorithms that exploit the sparseness of similarity graphs because large-scale databases of similarities are very often sparse. Human editors cannot create a dense N × N matrix of music similarity for large values of N. The best editors can do is identify similar artists, albums, and tracks. Furthermore, humans are poor at accurately estimating large distances between entities (e.g., which is farther away from The Beatles: Enya or Duke Ellington?) Hence, there is a definite need for a scalable embedding algorithm that can handle a sparse graph of similarities, generalizing to similarities not seen in the training set. 1.1 Structure of Paper Section 2 describes three existing approaches to the sparse embedding problem, and section 3 describes a new algorithm for solving it. Section 4.1 verifies that the new algorithm does not get stuck in local minima, and section 4.2 goes into further detail on the application of embedding musical similarity into a low-dimensional Euclidean space. 2 Methods for Sparse Embedding Multidimensional scaling (MDS) [4] is an established branch of statistics that deals with embedding objects in a low-dimensional Euclidean space based on a matrix of similarities. 
More specifically, MDS algorithms take a matrix of dissimilarities δ_rs and find vectors x_r whose inter-vector distances d_rs are well matched to δ_rs. A common flexible algorithm is called ALSCAL [13], which encourages the inter-vector distances to be near some ideal values:

$$\min_{\vec{x}_r} \sum_{rs} \left( d_{rs}^2 - \hat{d}_{rs}^2 \right)^2, \qquad (1)$$

where the d̂_rs are derived from the dissimilarities δ_rs, typically through a linear relationship. There are three existing approaches for applying MDS to large sparse dissimilarity matrices: 1. Apply an MDS algorithm to the sparse graph directly. Not all MDS algorithms require a dense matrix δ_rs. For example, ALSCAL can operate on a sparse matrix by ignoring missing terms in its cost function (1). However, as shown in section 4.1, ALSCAL cannot reconstruct the position of known data points given a sparse matrix of dissimilarities. 2. Use a graph algorithm to generate a full matrix of dissimilarities. The Isomap algorithm [14] finds an embedding of a sparse set of dissimilarities into a low-dimensional Euclidean space. Isomap first applies Floyd's shortest path algorithm [9] to find the shortest distance between any two points in the graph, and then uses these N × N distances as input to a full MDS algorithm. Once in the low-dimensional space, data can easily be interpolated or extrapolated. Note that the systems in [14] have N = 1000. For generalizing musical artist similarity, [7] also computes an N × N matrix of distances between all artists in a set, based on the shortest distance through a graph. The sparse graph in [7] was generated by human editors at the All Music Guide. [7] shows that human perception of artist similarity is well modeled by generalizing using the shortest graph distance. Similar to [14], [7] projects the N × N set of artist distances into a Euclidean space by a full MDS algorithm. Note that the MDS system in [7] has N = 412. The computational complexity of these methods inhibits their use on large data sets.
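To make the cost function in Eq. (1) concrete, here is a minimal, hypothetical gradient-descent sketch of the sparse ALSCAL-style objective, summing only over the observed pairs. The learning rate, iteration count, and random initialization are our assumptions, not details of ALSCAL itself:

```python
import random

def alscal_sparse(dissim, n, dim=2, lr=0.01, iters=1000, seed=0):
    """Gradient descent on sum over observed pairs of (d_rs^2 - delta_rs^2)^2.
    `dissim` maps an observed pair (r, s) to its dissimilarity delta_rs."""
    rng = random.Random(seed)
    x = [[rng.random() for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for (r, s), delta in dissim.items():
            diff = [a - b for a, b in zip(x[r], x[s])]
            d2 = sum(c * c for c in diff)
            # gradient of (d^2 - delta^2)^2 w.r.t. x_r is 4*(d^2 - delta^2)*(x_r - x_s)
            g = 4.0 * (d2 - delta * delta)
            for k in range(dim):
                x[r][k] -= lr * g * diff[k]
                x[s][k] += lr * g * diff[k]
    return x
```

With very few observed pairs, many configurations attain zero cost, which is one way to see why this kind of non-convex stress underconstrains the solution on sparse data.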
Let us analyze the complexity of each portion of this method. For finding all of the minimum distances, Floyd's algorithm operates on a dense matrix of distances and has computational complexity O(N^3). A better choice is to run Dijkstra's algorithm [6], which finds the minimum distances from a single vertex to all other vertices in the graph. Thus, Dijkstra's algorithm must be run N times. The complexity of one invocation of Dijkstra's algorithm (when implemented with a binary heap [11]) is O(M log N), where M is the number of edges in the graph. Running a standard MDS algorithm on a full N × N matrix of distances requires O(N^2 Kd) computation, where K is the number of iterations of the MDS algorithm and d is the dimensionality of the embedding. Therefore, the overall computational complexity of the approach is O(MN log N + N^2 Kd), which can be prohibitive for large N and M. 3. Use a graph algorithm to generate a thin dense rectangle of distances. One natural way to reduce the complexity of the graph traversal part of Isomap is to not run Dijkstra's algorithm N times. In other words, instead of generating the entire N × N matrix of dissimilarities, generate an interesting subset of n rows, n ≪ N. There is a family of MDS algorithms, here called Rectangular Dijkstra (RD) MDS algorithms, that operate on a dense rectangle of distances filled in by Dijkstra's algorithm. The first published member of this family was Landmark MDS (LMDS) [5]. Bengio et al. [2] show that LMDS is the Nyström approximation [1] combined with classical MDS [4] operating on the rectangular distance matrix. (See also [10] for Nyström applied to spectral clustering.) LMDS operates on a number of rows proportional to the embedding dimensionality, d. Thus, Dijkstra's algorithm gets called O(d) times. LMDS then centers the n × n distance submatrix, converting it into a kernel matrix K. The top d column eigenvectors v_i and eigenvalues λ_i of K are then computed.
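The rectangular slice that RD algorithms operate on can be built with a small number of single-source Dijkstra calls. A sketch (the adjacency-list graph representation and function names are our assumptions):

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths; adj maps vertex -> [(neighbor, weight), ...]."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def rectangular_distances(adj, landmarks, vertices):
    """Dense len(landmarks) x len(vertices) slice of the graph-distance matrix:
    one Dijkstra call per landmark row, not one per vertex."""
    rows = []
    for landmark in landmarks:
        dist = dijkstra(adj, landmark)
        rows.append([dist.get(v, float("inf")) for v in vertices])
    return rows
```

Each row costs O(M log N), so a constant (or O(d)) number of rows keeps the traversal cost at O(Md log N) rather than the O(MN log N) required for the full matrix.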
The embedding coordinate for the m-th point is thus

$$(\vec{x}_m)_i = \frac{1}{2} \sum_j M_{ij} \left( A_j - D_{jm} \right), \qquad (2)$$

where the i-th row of M is $\vec{v}_i^{\,T}/\sqrt{\lambda_i}$, A_j is the average distance in the j-th row of the rectangular distance matrix, and D_jm is the distance between the m-th point and the j-th point (j ∈ [1..n]). Thus, the computational complexity of LMDS is O(Md log N + Nd^2 + d^3). 3 New Algorithm: Fast Sparse Embedding LMDS requires the solution of an n × n eigenproblem. To avoid this eigenproblem, this paper presents a new RD MDS algorithm, called FSE (Fast Sparse Embedding). Instead of a Nyström approximation, FSE uses FastMap [8]: an MDS algorithm that takes a constant number of rows of the dissimilarity matrix. FastMap iterates over the dimensions of the projection, fixing the position of all vertices in each dimension in turn. FastMap thus approximates the solution of the eigenproblem through deflation. Consider the first dimension. Two vertices (x_a, x_b) are chosen and the dissimilarities from these two vertices to all other vertices i are computed: (δ_ai, δ_bi). In FSE, these dissimilarities are computed by Dijkstra's algorithm. During the first iteration (dimension), the distances (d_ai, d_bi) are set equal to the dissimilarities. The 2N distances can determine the location of the vertices along the dimension, up to a shift, through use of the law of cosines:

$$x_i = \frac{d_{ai}^2 - d_{bi}^2}{2\, d_{ab}}. \qquad (3)$$

For each subsequent dimension, two new vertices are chosen and new dissimilarities (δ_ai, δ_bi) are computed by Dijkstra's algorithm. The subsequent dimensions are assumed to be orthogonal to previous ones, so the distances for dimension N are computed from the dissimilarities via:

$$\delta_{ai}^2 = d_{ai}^2 + \sum_{n=1}^{N-1} (x_{an} - x_{in})^2 \;\Rightarrow\; d_{ai}^2 = \delta_{ai}^2 - \sum_{n=1}^{N-1} (x_{an} - x_{in})^2. \qquad (4)$$

Thus, each dimension accounts for a fraction of the dissimilarity matrix, analogous to PCA. Note that, except for d_ab, all other distances are needed only as squared distances, so only one square root per dimension is required.
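Equations (3) and (4) translate almost directly into code. A sketch with hypothetical helper names; `fastmap_coordinate` returns the coordinate up to the constant shift noted above:

```python
def fastmap_coordinate(d_ai, d_bi, d_ab):
    """Eq. (3): position of point i along the a-b pivot line, up to a shift."""
    return (d_ai ** 2 - d_bi ** 2) / (2.0 * d_ab)

def residual_sq(delta_ai_sq, x_a, x_i):
    """Eq. (4): squared distance remaining after subtracting the components
    along the dimensions already fixed (x_a, x_i are coordinate lists)."""
    return delta_ai_sq - sum((xa - xi) ** 2 for xa, xi in zip(x_a, x_i))
```

For example, collinear points with a at 0, i at 1, and b at 4 give fastmap_coordinate(1, 3, 4) = -1, which is the true coordinate of i shifted by the constant d_ab/2.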
The distances produced by Dijkstra's algorithm are the minimum graph distances, modified by equation (4) in order to reflect the projection used so far. For each dimension, the vertices a and b are heuristically chosen to be as far apart as possible. In order to avoid an O(N^2) step in choosing a and b, [8] recommends starting with an arbitrary point, finding the point farthest away from the current point, then setting the current point to that farthest point and repeating. The work of each Dijkstra call (including equation (4)) is O(M log N + Nd), so the complexity of the entire algorithm is O(Md log N + Nd^2). 4 Experimental Results 4.1 Artificial Data

Figure 1: Reconstructing a grid of points directly from a sparse distance matrix. On the left ("Output of ALSCAL"), ALSCAL cannot reconstruct the grid, while on the right ("Output of FSE"), FSE accurately reconstructs the grid.

An MDS algorithm needs to be tested on distance matrices that are computed from distances between real points, in order to verify that the algorithm quickly produces sensible results. FSE and ALSCAL were both tested on a set of 100 points in a 10 × 10 2D grid with unit spacing. The distances from each point to a random 10 of its 20 nearest other points were presented to each algorithm. The results are shown in Figure 1. Procrustes analysis [4] is applied to the output of each algorithm; the output is shown after the best orthogonal affine projection between the algorithm output and the original data. Figure 1 shows that ALSCAL does a very poor job of reconstructing the locations of the data points, while FSE accurately reconstructs the grid locations. ALSCAL's poor performance is caused by performing optimization on a non-convex cost function. When the dissimilarity matrix is very sparse, there are not enough constraints on the final solution, so ALSCAL gets stuck in a local minimum.
Similar results were seen from Sammon's method [4]. These results show that FSE (and other RD MDS algorithms) are preferable to using sparse MDS algorithms. FSE does not solve an optimization problem, hence does not get stuck in a local minimum. 4.2 Application: Generalizing Music Similarity This section presents the results of using RD MDS algorithms to project a large music dissimilarity graph into low-dimensional Euclidean space. This projection enables visualization and interpolation over music collections. The dissimilarity graph was derived from a music metadata database. The database consists of 10289 artists, 67799 albums, and 188749 tracks. Each track has subjective metadata assigned to it by human editors: style (specific style), subgenre (more general style), vocal code (gender of singer), and mood. See [12] for more details on the metadata. The database contains which tracks occur on which albums and which artists created those albums.

Relationship Between Entities                   Edge Distance in Graph
Two tracks have same style, vocal code, mood    1
Two tracks have same style                      2
Two tracks have same subgenre                   4
Track is on album                               1
Album is by artist                              2

Table 1: Mapping of relationship to edge distance.

A sparse similarity graph was extracted from the metadata database according to Table 1. Every track, album, and artist is represented by a vertex in the graph. Every track was connected to all albums it appeared on, while each album was connected to its artist. The track similarity edges were sampled randomly, to provide an average of 7 links of edges of distance 1, 2, and 4. The final graph contained 267K vertices and 3.22M edges. RD MDS enabled this experiment: the full distance matrix would have taken days to compute with 267K calls to Dijkstra's algorithm. Also, the graph distances were derived after some tuning (not on the test set): the speed of RD MDS enabled this tuning. One advantage of the music application is that the quality of the embedding can be tested externally.
A test set of 50 playlists, with 444 pairs of sequential songs, was gathered from real users who listened to these playlists. An embedding is considered good if sequential songs in the playlists are frequently closer to each other than random songs in the database. Table 2 shows the quality of the embedding as the fraction of random songs that are closer than sequential songs. The lower the fraction, the better the embedding, because the embedding more accurately reflects users' ideas of music similarity. This fraction is computed by treating the pairwise distances as scores from a classifier, computing an ROC curve, then computing 1.0 minus the area under the ROC curve [3].

Algorithm            n     Average % of Random Songs       CPU time (sec)
                           Closer than Sequential Songs
FSE                  60    5.0%                            52.8
LMDS                 60    4.5%                            52.7
LMDS                 100   4.1%                            87.4
LMDS                 200   3.3%                            175.0
LMDS                 400   3.2%                            355.1
Laplacian Eigenmaps  N/A   13.0%                           8003.4

Table 2: Speed and accuracy of music embedding for various algorithms. All embeddings are 20-dimensional (d = 20). The CPU time was measured on a 2.4 GHz Pentium 4.

FSE uses a fixed rectangle size n = 3d, so it has one entry in the table. For the same n, FSE and LMDS are competitive. However, LMDS can trade off speed for accuracy by increasing n. A Laplacian Eigenmap applied to the entire sparse similarity matrix was much slower than either of the RD MDS algorithms, and did not perform as well for this problem. A Gaussian kernel with σ = 2 was used to convert distances to similarities for the Laplacian Eigenmap. The slowness of the Laplacian Eigenmap prevented extensive tuning of the parameters.

Figure 2: LMDS projection of the entire music dissimilarity graph into 2D.
The coordinates of 23 artists are shown. Given that LMDS outperforms FSE for large n, this paper now presents qualitative results from the LMDS n = 400 projection. First, the top two dimensions are plotted to form a visualization of music space. This visualization is shown in Figure 2, which shows the coordinates of 23 artists that occur near the center of the space. Even restricted to the top two dimensions, the projection is sensible. For example, Tori Amos and Sarah McLachlan are mapped to be very close.

Artist 1        Track 1                     Artist 2          Track 2
Jimi Hendrix    Purple Haze                 Alanis            Hand In My Pocket
Jimi Hendrix    Fire                        Alanis            All I Really Want
Jimi Hendrix    Red House                   Alanis            You Oughta Know
Jimi Hendrix    I Don't Live Today          Alanis            Right Through You
Jimi Hendrix    Foxey Lady                  Alanis            You Learn
Jimi Hendrix    3rd Stone from the Sun      Alanis            Ironic
Doors           Waiting for the Sun         Sarah McLachlan   Full of Grace
Doors           LA Woman                    Sarah McLachlan   Hold On
Doors           Riders on the Storm         Sarah McLachlan   Good Enough
Doors           Love her Madly              Sarah McLachlan   The Path of Thorns
Cat Stevens     Ready                       Sarah McLachlan   Possession
Cat Stevens     Music                       Blondie           Tide is High
Cat Stevens     Jesus                       Sarah McLachlan   Ice Cream
Cat Stevens     King of Trees               Sarah McLachlan   Fumbling Towards Ecstasy
The Beatles     Octopus's Garden            Fiona Apple       Limp
The Beatles     I'm So Tired                Fiona Apple       Paper Bag
The Beatles     Revolution 9                Fiona Apple       Fast As You Can
The Beatles     Sgt. Pepper's Lonely        Blondie           Call Me
The Beatles     Please Please Me            Blondie           Hanging on the Telephone
The Beatles     Eleanor Rigby               Blondie           Rapture

Table 3: Two playlists produced by the system. Each playlist reads top to bottom. The playlists interpolate between the first and last songs.

The main application for the music graph projection is the generation of playlists. There are several different possible objectives for music playlists: background listening, dance mixes, music discovery.
One of the criteria for playlists is that they play similar music together (i.e., avoid distracting jumps, like New Age to Heavy Metal). The goal for this paper is to generate playlists for background listening. Therefore, the only criterion we use for generation is smoothness, and playlists are generated by linear interpolation in the embedding space. However, smoothness is not the only possible playlist generation mode: other criteria can be added (such as matching beats, artist self-avoidance, or minimum distance between songs). These criteria can be added on top of the smoothness criterion. Such criteria are a matter of subjective musical taste and are beyond the scope of this paper. Table 3 shows two background-listening playlists formed by interpolating in the projected space. The playlists were drawn from a collection of 3920 songs. Unlike the image interpolation in [14], not every point in the 20-dimensional space has a valid song attached to it. The interpolation was performed by first computing the line segment connecting the first and last song, and then placing K equally-spaced points along the line segment, where K is the number of slots in the playlist. For every slot, the location of the previous song is projected onto a hyperplane normal to the line segment that goes through the i-th point. The projected location is then moved halfway to the i-th point, and the nearest song to the moved location is placed into the playlist. This method provides smooth interpolation without large jumps, as can be seen in Table 3. 5 Discussion and Conclusions Music playlist generation and browsing can utilize a large sparse similarity graph designed by editors. In order to allow tractable computations on this graph, its vertices can be projected into a low-dimensional space. This projection enables smooth interpolation and two-dimensional display of music. Music similarity graphs are amongst the largest graphs ever to be embedded.
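For concreteness, the slot-by-slot interpolation procedure described in section 4.2 might be sketched as follows. The function and variable names are ours, and skipping already-used songs is our assumption, since the text does not say how repeats are handled:

```python
def interpolate_playlist(songs, start, end, k):
    """Generate a k-slot playlist between two songs: place k equally spaced
    waypoints on the segment, project the previous song onto the hyperplane
    through the i-th waypoint (normal to the segment), move halfway toward
    the waypoint, and pick the nearest unused song.
    `songs` maps song name -> coordinate vector."""
    def sub(u, v): return [a - b for a, b in zip(u, v)]
    def dot(u, v): return sum(a * b for a, b in zip(u, v))

    p0, p1 = songs[start], songs[end]
    axis = sub(p1, p0)
    axis_sq = dot(axis, axis)
    playlist, prev = [start], p0
    for i in range(1, k - 1):
        frac = i / (k - 1)
        target = [a + frac * ax for a, ax in zip(p0, axis)]
        # project prev onto the hyperplane through target, normal to axis
        t = dot(sub(prev, target), axis) / axis_sq
        proj = [p - t * ax for p, ax in zip(prev, axis)]
        # move halfway toward the waypoint
        loc = [(pj + tg) / 2.0 for pj, tg in zip(proj, target)]
        cands = [s for s in songs if s not in playlist and s != end]
        name = min(cands, key=lambda s: dot(sub(songs[s], loc), sub(songs[s], loc)))
        playlist.append(name)
        prev = songs[name]
    playlist.append(end)
    return playlist
```

On four evenly spaced songs in 2D, interpolating from the first to the last with k = 4 recovers them in order.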
Rectangular Dijkstra MDS algorithms can be used to efficiently embed these large sparse graphs. This paper showed that FSE and the Nyström (LMDS) technique are both efficient and have comparable performance for the same size of rectangle. Both algorithms are much more efficient than Laplacian Eigenmaps. However, LMDS permits an accuracy/speed trade-off that makes it preferable. Using LMDS, a music graph with 267K vertices and 3.22M edges can be embedded in approximately 6 minutes. References [1] C. Baker. The numerical treatment of integral equations. Clarendon Press, Oxford, 1977. [2] Y. Bengio, J.-F. Paiement, and P. Vincent. Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps and spectral clustering. In S. Thrun, L. Saul, and B. Schölkopf, editors, Proc. NIPS, volume 16, 2004. [3] A. P. Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30:1145–1159, 1997. [4] T. F. Cox and M. A. A. Cox. Multidimensional Scaling. Number 88 in Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, 2nd edition, 2001. [5] V. de Silva and J. B. Tenenbaum. Global versus local methods in nonlinear dimensionality reduction. In S. Becker, S. Thrun, and K. Obermayer, editors, Proc. NIPS, volume 15, pages 721–728, 2003. [6] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959. [7] D. P. W. Ellis, B. Whitman, A. Berenzweig, and S. Lawrence. The quest for ground truth in musical artist similarity. In Proc. International Conference on Music Information Retrieval (ISMIR), 2002. [8] C. Faloutsos and K.-I. Lin. FastMap: A fast algorithm for indexing, data-mining and visualization of traditional and multimedia databases. In Proc. ACM SIGMOD, pages 163–174, 1995. [9] R. Floyd. Algorithm 97 (shortest path). Communications of the ACM, 7:345, 1962. [10] C. Fowlkes, S. Belongie, and J. Malik. Efficient spatiotemporal grouping using the Nyström method. In Proc.
CVPR, volume 1, pages I–231–I–238, 2001. [11] D. B. Johnson. Efficient algorithms for shortest paths in sparse networks. JACM, 24:1–13, 1977. [12] J. C. Platt, C. J. C. Burges, S. Swenson, C. Weare, and A. Zheng. Learning a Gaussian process prior for automatically generating music playlists. In T. Dietterich, S. Becker, and Z. Ghahramani, editors, Proc. NIPS, volume 14, pages 1425–1432, 2002. [13] Y. Takane, F. W. Young, and J. de Leeuw. Nonmetric individual differences multidimensional scaling: an alternating least squares method with optimal scaling features. Psychometrika, 42:7–67, 1977. [14] J. B. Tenenbaum. Mapping a manifold of perceptual observations. In M. Jordan, M. Kearns, and S. Solla, editors, Proc. NIPS, volume 10, pages 682–688, 1998.
Online Passive-Aggressive Algorithms Koby Crammer Ofer Dekel Shai Shalev-Shwartz Yoram Singer School of Computer Science & Engineering The Hebrew University, Jerusalem 91904, Israel {kobics,oferd,shais,singer}@cs.huji.ac.il Abstract We present a unified view for online classification, regression, and uniclass problems. This view leads to a single algorithmic framework for the three problems. We prove worst case loss bounds for various algorithms for both the realizable case and the non-realizable case. A conversion of our main online algorithm to the setting of batch learning is also discussed. The end result is new algorithms and accompanying loss bounds for the hinge-loss. 1 Introduction In this paper we describe and analyze several learning tasks through the same algorithmic prism. Specifically, we discuss online classification, online regression, and online uniclass prediction. In all three settings we receive instances in a sequential manner. For concreteness we assume that these instances are vectors in R^n and denote the instance received on round t by x_t. In the classification problem our goal is to find a mapping from the instance space into the set of labels, {−1, +1}. In the regression problem the mapping is into R. Our goal in the uniclass problem is to find a center-point in R^n with a small Euclidean distance to all of the instances. We first describe the classification and regression problems. For classification and regression we restrict ourselves to mappings based on a weight vector w ∈ R^n, namely the mapping f : R^n → R takes the form f(x) = w · x. After receiving x_t we extend a prediction ŷ_t using f. For regression the prediction is simply ŷ_t = f(x_t), while for classification ŷ_t = sign(f(x_t)). After extending the prediction ŷ_t, we receive the true outcome y_t. We then suffer an instantaneous loss based on the discrepancy between y_t and f(x_t). The goal of the online learning algorithm is to minimize the cumulative loss.
The losses we discuss in this paper depend on a pre-defined insensitivity parameter ϵ and are denoted ℓ_ϵ(w; (x, y)). For regression the ϵ-insensitive loss is

$$\ell_\epsilon(w; (x, y)) = \begin{cases} 0 & |y - w \cdot x| \le \epsilon \\ |y - w \cdot x| - \epsilon & \text{otherwise,} \end{cases} \qquad (1)$$

while for classification the ϵ-insensitive loss is defined to be

$$\ell_\epsilon(w; (x, y)) = \begin{cases} 0 & y(w \cdot x) \ge \epsilon \\ \epsilon - y(w \cdot x) & \text{otherwise.} \end{cases} \qquad (2)$$

As in other online algorithms the weight vector w is updated after receiving the feedback y_t. Therefore, we denote by w_t the vector used for prediction on round t. We leave the details on the form this update takes to later sections.

Problem          Example (z_t)                  Discrepancy (δ)       Update Direction (v_t)
Classification   (x_t, y_t) ∈ R^n × {−1, +1}    −y_t(w_t · x_t)       y_t x_t
Regression       (x_t, y_t) ∈ R^n × R           |y_t − w_t · x_t|     sign(y_t − w_t · x_t) x_t
Uniclass         (x_t, y_t) ∈ R^n × {1}         ∥x_t − w_t∥           (x_t − w_t)/∥x_t − w_t∥

Table 1: Summary of the settings and parameters employed by the additive PA algorithm for classification, regression, and uniclass.

The setting for uniclass is slightly different as we only observe a sequence of instances. The goal of the uniclass algorithm is to find a center-point w such that all instances x_t fall within a radius of ϵ from w. Since we employ the framework of online learning, the vector w is constructed incrementally. The vector w_t therefore plays the role of the instantaneous center and is adapted after observing each instance x_t. If an example x_t falls within a Euclidean distance ϵ from w_t then we suffer no loss. Otherwise, the loss is the distance between x_t and a ball of radius ϵ centered at w_t. Formally, the uniclass loss is

$$\ell_\epsilon(w_t; x_t) = \begin{cases} 0 & \|x_t - w_t\| \le \epsilon \\ \|x_t - w_t\| - \epsilon & \text{otherwise.} \end{cases} \qquad (3)$$

In the next sections we give additive and multiplicative online algorithms for the above learning problems and prove respective online loss bounds. A common thread of our approach is a unified view of all three tasks, which leads to a single algorithmic framework with a common analysis. Related work: Our work builds on numerous techniques from online learning.
The updates we derive are based on an optimization problem directly related to the one employed by Support Vector Machines [15]. Li and Long [14] were among the first to suggest the idea of converting a batch optimization problem into an online task. Our work borrows ideas from the work of Warmuth and colleagues [11]. In particular, Gentile and Warmuth [6] generalized and adapted techniques from [11] to the hinge loss, which is closely related to the losses defined in Eqs. (1)-(3). Kivinen et al. [10] discussed a general framework for gradient-based online learning where some of their bounds bear similarities to the bounds presented in this paper. Our work also generalizes and greatly improves online loss bounds for classification given in [3]. Herbster [8] suggested an algorithm for classification and regression that is equivalent to one of the algorithms given in this paper; however, the loss bound derived by Herbster is somewhat weaker. Finally, we would like to note that similar algorithms have been devised in the convex optimization community (cf. [1, 2]). The main difference between these algorithms and the online algorithms presented in this paper lies in the analysis: while we derive worst case, finite horizon loss bounds, the optimization community is mostly concerned with asymptotic convergence properties. 2 A Unified Loss The three problems described in the previous section share common algebraic properties which we explore in this section. The end result is a common algorithmic framework that is applicable to all three problems and an accompanying analysis (Sec. 3). Let z_t = (x_t, y_t) denote the instance-target pair received on round t, where in the case of uniclass we set y_t = 1 as a placeholder. For a given example z_t, let δ(w; z_t) denote the discrepancy of w on z_t: for classification we set the discrepancy to be −y_t(w_t · x_t) (the negative of the margin), for regression it is |y_t − w_t · x_t|, and for uniclass it is ∥x_t − w_t∥.
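The three discrepancies just listed, combined with a single hinge function, yield all three losses. A minimal sketch (function names are ours):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Discrepancies delta from Table 1; y is a placeholder 1 for uniclass.
def delta_classification(w, x, y):
    return -y * dot(w, x)          # negative of the margin

def delta_regression(w, x, y):
    return abs(y - dot(w, x))

def delta_uniclass(w, x):
    return math.dist(x, w)

def unified_loss(delta, eps):
    """The common hinge form [delta - eps]_+ behind Eqs. (1)-(3)."""
    return max(delta - eps, 0.0)
```

Because the classification discrepancy is the negated margin, the ϵ ← −ϵ substitution described in the text means `unified_loss(delta_classification(w, x, y), -eps)` reproduces Eq. (2).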
Fixing z_t, we also view δ(w; z_t) as a convex function of w. Let [a]_+ be the function that equals a whenever a > 0 and otherwise equals zero. Using the discrepancies defined above, the three different losses given in Eqs. (1)-(3) can all be written as ℓ_ϵ(w; z) = [δ(w; z) − ϵ]_+, where for classification we set ϵ ← −ϵ since the discrepancy is defined as the negative of the margin. While this construction might seem a bit odd for classification, it is very useful in unifying the three problems. To conclude, the loss in all three problems can be derived by applying the same hinge loss to different (problem dependent) discrepancies. 3 An Additive Algorithm for the Realizable Case Equipped with the simple unified notion of loss, we describe in this section a single online algorithm that is applicable to all three problems. The algorithm and the analysis we present in this section assume that there exist a weight vector w⋆ and an insensitivity parameter ϵ⋆ for which the data is perfectly realizable. Namely, we assume that ℓ_ϵ⋆(w⋆; z_t) = 0 for all t, which implies that

$$y_t(w^\star \cdot x_t) \ge |\epsilon^\star| \;\;(\text{Class.}) \qquad |y_t - w^\star \cdot x_t| \le \epsilon^\star \;\;(\text{Reg.}) \qquad \|x_t - w^\star\| \le \epsilon^\star \;\;(\text{Unic.}) \qquad (4)$$

A modification of the algorithm for the unrealizable case is given in Sec. 5. The general method we use for deriving our on-line update rule is to define the new weight vector w_{t+1} as the solution to the following projection problem

$$w_{t+1} = \arg\min_w \tfrac{1}{2} \|w - w_t\|^2 \quad \text{s.t.} \quad \ell_\epsilon(w; z_t) = 0, \qquad (5)$$

namely, w_{t+1} is set to be the projection of w_t onto the set of all weight vectors that attain a loss of zero. We denote this set by C. For the case of classification, C is a half-space, C = {w : −y_t w · x_t ≤ ϵ}. For regression C is an ϵ-hyper-slab, C = {w : |w · x_t − y_t| ≤ ϵ}, and for uniclass it is a ball of radius ϵ centered at x_t, C = {w : ∥w − x_t∥ ≤ ϵ}. In Fig. 2 we illustrate the projection for the three cases. This optimization problem attempts to keep w_{t+1} as close to w_t as possible, while forcing w_{t+1} to achieve a zero loss on the most recent example.
The resulting algorithm is passive whenever the loss is zero, that is, w_{t+1} = w_t whenever ℓ_ϵ(w_t; z_t) = 0. In contrast, on rounds for which ℓ_ϵ(w_t; z_t) > 0 we aggressively force w_{t+1} to satisfy the constraint ℓ_ϵ(w_{t+1}; z_t) = 0.

Parameter: Insensitivity: ϵ
Initialize: Set w_1 = 0 (regression and classification); w_1 = x_0 (uniclass)
For t = 1, 2, ...
  • Get a new instance: z_t ∈ R^n
  • Suffer loss: ℓ_ϵ(w_t; z_t)
  • If ℓ_ϵ(w_t; z_t) > 0:
    1. Set v_t (see Table 1)
    2. Set τ_t = ℓ_ϵ(w_t; z_t)/∥v_t∥²
    3. Update: w_{t+1} = w_t + τ_t v_t

Figure 1: The additive PA algorithm.

Therefore we name the algorithm passive-aggressive, or PA for short. In the following we show that for the three problems described above the solution to the optimization problem in Eq. (5) yields the following update rule,

$$w_{t+1} = w_t + \tau_t v_t, \qquad (6)$$

where v_t is minus the gradient of the discrepancy and τ_t = ℓ_ϵ(w_t; z_t)/∥v_t∥². (Note that although the discrepancy might not be differentiable everywhere, its gradient exists whenever the loss is greater than zero.) To see that the update from Eq. (6) is the solution to the problem defined by Eq. (5), first note that the equality constraint ℓ_ϵ(w; z_t) = 0 is equivalent to the inequality constraint δ(w; z_t) ≤ ϵ. The Lagrangian of the optimization problem is

$$L(w, \tau) = \tfrac{1}{2} \|w - w_t\|^2 + \tau \left( \delta(w; z_t) - \epsilon \right), \qquad (7)$$

Figure 2: An illustration of the update: w_{t+1} is found by projecting the current vector w_t onto the set of vectors attaining a zero loss on z_t. This set is a stripe in the case of regression, a half-space for classification, and a ball for uniclass.

where τ ≥ 0 is a Lagrange multiplier. To find a saddle point of L we first differentiate L with respect to w and use the fact that v_t is minus the gradient of the discrepancy to get,

$$\nabla_w L = w - w_t + \tau \nabla_w \delta = 0 \;\;\Rightarrow\;\; w = w_t + \tau v_t.$$

To find the value of τ we use the KKT conditions. Hence, whenever τ is positive (as in the case of non-zero loss), the inequality constraint, δ(w; z_t) ≤ ϵ, becomes an equality.
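Putting Eq. (6) and Table 1 together, one round of the algorithm in Fig. 1 for the classification setting might look like this (a sketch under our naming, not the authors' code; `eps` plays the role of ϵ in Eq. (2)):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pa_step_classification(w, x, y, eps):
    """One passive-aggressive round: v_t = y*x and tau_t = loss/||v_t||^2,
    so after an aggressive step the margin constraint holds with equality."""
    loss = max(eps - y * dot(w, x), 0.0)   # eq. (2)
    if loss == 0.0:
        return w                            # passive round
    tau = loss / dot(x, x)                  # ||y*x||^2 = ||x||^2 for y in {-1, +1}
    return [wi + tau * y * xi for wi, xi in zip(w, x)]
```

After an aggressive step, y(w_{t+1} · x_t) = y(w_t · x_t) + τ_t ∥x_t∥² = ϵ, so the new loss on z_t is exactly zero.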
Simple algebraic manipulations yield that the value of τ for which δ(w; z_t) = ϵ for all three problems is equal to τ_t = ℓ_ϵ(w_t; z_t)/∥v_t∥². A summary of the discrepancy functions and their respective updates is given in Table 1. The pseudo-code of the additive algorithm for all three settings is given in Fig. 1. We now discuss the initialization of w_1. For classification and regression a reasonable choice for w_1 is the zero vector. However, in the case of uniclass, initializing w_1 to be the zero vector might incur large losses if, for instance, all the instances are located far away from the origin. A more sensible choice for uniclass is to initialize w_1 to be one of the examples. For simplicity of the description we assume that we are provided with an example x_0 prior to the run of the algorithm and initialize w_1 = x_0. To conclude this section we note that for all three cases the weight vector w_t is a linear combination of the instances. This representation enables us to employ kernels [15]. 4 Analysis The following theorem provides a unified loss bound for all three settings. After proving the theorem we discuss a few of its implications. Theorem 1 Let z_1, z_2, ..., z_t, ... be a sequence of examples for one of the problems described in Table 1. Assume that there exist w⋆ and ϵ⋆ such that ℓ_ϵ⋆(w⋆; z_t) = 0 for all t. Then if the additive PA algorithm is run with ϵ ≥ ϵ⋆, the following bound holds for any T ≥ 1:

$$\sum_{t=1}^{T} \left( \ell_\epsilon(w_t; z_t) \right)^2 + 2(\epsilon - \epsilon^\star) \sum_{t=1}^{T} \ell_\epsilon(w_t; z_t) \;\le\; B\, \|w^\star - w_1\|^2, \qquad (8)$$

where for classification and regression B is a bound on the squared norm of the instances (∀t : B ≥ ∥x_t∥²₂) and B = 1 for uniclass. Proof: Define Δ_t = ∥w_t − w⋆∥² − ∥w_{t+1} − w⋆∥². We prove the theorem by bounding Σ_{t=1}^T Δ_t from above and below. First note that Σ_{t=1}^T Δ_t is a telescopic sum and therefore

$$\sum_{t=1}^{T} \Delta_t = \|w_1 - w^\star\|^2 - \|w_{T+1} - w^\star\|^2 \;\le\; \|w_1 - w^\star\|^2. \qquad (9)$$

This provides an upper bound on Σ_t Δ_t. In the following we prove the lower bound

$$\Delta_t \;\ge\; \frac{\ell_\epsilon(w_t; z_t)}{B} \left( \ell_\epsilon(w_t; z_t) + 2(\epsilon - \epsilon^\star) \right). \qquad (10)$$
First note that we do not modify w_t if ℓ_ϵ(w_t; z_t) = 0. Therefore, this inequality trivially holds when ℓ_ϵ(w_t; z_t) = 0, and thus we can restrict ourselves to rounds on which the discrepancy is larger than ϵ, which implies that ℓ_ϵ(w_t; z_t) = δ(w_t; z_t) − ϵ. Let t be such a round; then by rewriting w_{t+1} as w_t + τ_t v_t we get,

$$\Delta_t = \|w_t - w^\star\|^2 - \|w_{t+1} - w^\star\|^2 = \|w_t - w^\star\|^2 - \|w_t + \tau_t v_t - w^\star\|^2 = \|w_t - w^\star\|^2 - \left( \tau_t^2 \|v_t\|^2 + 2\tau_t (v_t \cdot (w_t - w^\star)) + \|w_t - w^\star\|^2 \right) = -\tau_t^2 \|v_t\|^2 + 2\tau_t\, v_t \cdot (w^\star - w_t). \qquad (11)$$

Using the fact that −v_t is the gradient of the convex function δ(w; z_t) at w_t we have,

$$\delta(w^\star; z_t) - \delta(w_t; z_t) \;\ge\; (-v_t) \cdot (w^\star - w_t). \qquad (12)$$

Adding and subtracting ϵ from the left-hand side of Eq. (12) and rearranging we get,

$$v_t \cdot (w^\star - w_t) \;\ge\; \delta(w_t; z_t) - \epsilon + \epsilon - \delta(w^\star; z_t). \qquad (13)$$

Recall that δ(w_t; z_t) − ϵ = ℓ_ϵ(w_t; z_t) and that ϵ⋆ ≥ δ(w⋆; z_t). Therefore,

$$\left( \delta(w_t; z_t) - \epsilon \right) + \left( \epsilon - \delta(w^\star; z_t) \right) \;\ge\; \ell_\epsilon(w_t; z_t) + (\epsilon - \epsilon^\star). \qquad (14)$$

Combining Eq. (11) with Eqs. (13)-(14) we get

$$\Delta_t \;\ge\; -\tau_t^2 \|v_t\|^2 + 2\tau_t \left( \ell_\epsilon(w_t; z_t) + (\epsilon - \epsilon^\star) \right) = \tau_t \left( -\tau_t \|v_t\|^2 + 2\ell_\epsilon(w_t; z_t) + 2(\epsilon - \epsilon^\star) \right). \qquad (15)$$

Plugging τ_t = ℓ_ϵ(w_t; z_t)/∥v_t∥² into Eq. (15) we get

$$\Delta_t \;\ge\; \frac{\ell_\epsilon(w_t; z_t)}{\|v_t\|^2} \left( \ell_\epsilon(w_t; z_t) + 2(\epsilon - \epsilon^\star) \right).$$

For uniclass ∥v_t∥² is always equal to 1 by construction, and for classification and regression we have ∥v_t∥² = ∥x_t∥² ≤ B, which gives

$$\Delta_t \;\ge\; \frac{\ell_\epsilon(w_t; z_t)}{B} \left( \ell_\epsilon(w_t; z_t) + 2(\epsilon - \epsilon^\star) \right).$$

Comparing the above lower bound with the upper bound in Eq. (9) we get

$$\sum_{t=1}^{T} \left( \ell_\epsilon(w_t; z_t) \right)^2 + \sum_{t=1}^{T} 2(\epsilon - \epsilon^\star)\, \ell_\epsilon(w_t; z_t) \;\le\; B\, \|w^\star - w_1\|^2.$$

This concludes the proof. Let us now discuss the implications of Thm. 1. We first focus on the classification case. Due to the realizability assumption, there exist w⋆ and ϵ⋆ such that for all t, ℓ_ϵ⋆(w⋆; z_t) = 0, which implies that y_t(w⋆ · x_t) ≥ −ϵ⋆. Dividing w⋆ by its norm we can rewrite the latter as y_t(ŵ⋆ · x_t) ≥ ϵ̂⋆, where ŵ⋆ = w⋆/∥w⋆∥ and ϵ̂⋆ = |ϵ⋆|/∥w⋆∥. The parameter ϵ̂⋆ is often referred to as the margin of a unit-norm separating hyperplane. Now, setting ϵ = −1 we get that ℓ_ϵ(w; z) = [1 − y(w · x)]_+, the hinge loss for classification. We now use Thm.
1 to obtain two loss bounds for the hinge loss in a classification setting. First, note that by also setting w⋆= ˆw⋆/ˆϵ⋆and thus ϵ⋆= −1 we get that the second term on the left hand side of Eq. (8) vanishes as ϵ⋆= ϵ = −1 and thus, T X t=1 ([1 −yt(wt · xt)]+)2 ≤B ∥w⋆∥2 = B (ˆϵ⋆)2 . (17) We thus have obtained a bound on the squared hinge loss. The same bound was also derived by Herbster [8]. We can immediately use this bound to derive a mistake bound for the PA algorithm. Note that the algorithm makes a prediction mistake iff yt(wt · xt) ≤0. In this case, [1 −yt(wt · xt)]+ ≥1 and therefore the number of prediction mistakes is bounded by B/(ˆϵ⋆)2. This bound is common to online algorithms for classification such as ROMMA [14]. We can also manipulate the result of Thm. 1 to obtain a direct bound on the hinge loss. Using again ϵ = −1 and omitting the first term in the left hand side of Eq. (8) we get, 2(−1 −ϵ⋆) T X t=1 [1 −yt(wt · xt)]+ ≤B∥w⋆∥2 . By setting w⋆= 2 ˆw⋆/ˆϵ⋆, which implies that ϵ⋆= −2, we can further simplify the above to get a bound on the cumulative hinge loss, T X t=1 [1 −yt(wt · xt)]+ ≤2 B (ˆϵ⋆)2 . To conclude this section, we would like to point out that the PA online algorithm can also be used as a building block for a batch algorithm. Concretely, let S = {z1, . . . , zm} be a fixed training set and let β ∈R be a small positive number. We start with an initial weight vector w1 and then invoke the PA algorithm as follows. We choose an example z ∈S such that ℓϵ(w1; z)2 > β and present z to the PA algorithm. We repeat this process and obtain w2, w3, . . . until the T’th iteration on which for all z ∈S, ℓϵ(wT ; z)2 ≤β. The output of the batch algorithm is wT . Due to the bound of Thm. 1, T is at most ⌈B∥w⋆−w1∥2/β⌉and by construction the loss of wT on any z ∈S is at most √β. Moreover, in the following lemma we show that the norm of wT cannot be too large. 
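The batch use of the PA algorithm just described can be sketched in a few lines. This is a hedged illustration only, specialized to classification with $\epsilon = -1$ (hinge loss); all names (`pa_batch`, `pa_update`, the toy data) are ours, not the paper's implementation.

```python
# Sketch of the batch wrapper around the additive PA update
# (classification case, eps = -1, i.e. the hinge loss).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hinge_loss(w, x, y):
    # l_eps(w; z) with eps = -1: [1 - y (w . x)]_+
    return max(0.0, 1.0 - y * dot(w, x))

def pa_update(w, x, y):
    # Additive PA step: tau_t = l(w_t; z_t) / ||x_t||^2, then w <- w + tau y x
    loss = hinge_loss(w, x, y)
    if loss > 0.0:
        tau = loss / dot(x, x)
        w = [wi + tau * y * xi for wi, xi in zip(w, x)]
    return w

def pa_batch(S, beta, w=None):
    # Present any example whose squared loss exceeds beta until none remain;
    # Thm. 1 bounds the number of such presentations for realizable data.
    w = w if w is not None else [0.0] * len(S[0][0])
    while True:
        violators = [(x, y) for x, y in S if hinge_loss(w, x, y) ** 2 > beta]
        if not violators:
            return w
        w = pa_update(w, *violators[0])
```

On linearly separable toy data the loop terminates with every example's squared hinge loss at most $\beta$, matching the stopping condition described above.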
Since $\mathbf{w}_T$ achieves a small empirical loss and its norm is small, it can be shown using classical techniques (cf. [15]) that the loss of $\mathbf{w}_T$ on unseen data is small as well.

Lemma 2 Under the same conditions as Thm. 1, the following bound holds for any $T \ge 1$:
$$\|\mathbf{w}_T - \mathbf{w}_1\| \le 2\,\|\mathbf{w}^\star - \mathbf{w}_1\|.$$
Proof: First note that the inequality trivially holds for $T = 1$, and thus we focus on the case $T > 1$. We use the definition of $\Delta_t$ from the proof of Thm. 1. Eq. (10) implies that $\Delta_t$ is non-negative for all $t$. Therefore, we get from Eq. (9) that
$$0 \le \sum_{t=1}^{T-1} \Delta_t = \|\mathbf{w}_1 - \mathbf{w}^\star\|^2 - \|\mathbf{w}_T - \mathbf{w}^\star\|^2. \qquad (18)$$
Rearranging the terms in Eq. (18) we get that $\|\mathbf{w}_T - \mathbf{w}^\star\| \le \|\mathbf{w}^\star - \mathbf{w}_1\|$. Finally, we use the triangle inequality to get the bound
$$\|\mathbf{w}_T - \mathbf{w}_1\| = \|(\mathbf{w}_T - \mathbf{w}^\star) + (\mathbf{w}^\star - \mathbf{w}_1)\| \le \|\mathbf{w}_T - \mathbf{w}^\star\| + \|\mathbf{w}^\star - \mathbf{w}_1\| \le 2\,\|\mathbf{w}^\star - \mathbf{w}_1\|.$$
This concludes the proof.

5 A Modification for the Unrealizable Case

We now briefly describe an algorithm for the unrealizable case. This algorithm applies only to regression and classification problems. The case of uniclass is more involved and will be discussed in detail elsewhere. The algorithm employs two parameters. The first is the insensitivity parameter $\epsilon$, which defines the loss function as in the realizable case. However, in this case we do not assume that there exists a $\mathbf{w}^\star$ that achieves zero loss over the sequence. We instead measure the loss of the online algorithm relative to the loss of any vector $\mathbf{w}^\star$. The second parameter, $\gamma > 0$, is a relaxation parameter. Before describing the effect of this parameter we define the update step for the unrealizable case. As in the realizable case, the algorithm is conservative. That is, if the loss on example $\mathbf{z}_t$ is zero then $\mathbf{w}_{t+1} = \mathbf{w}_t$. In case the loss is positive, the update rule is $\mathbf{w}_{t+1} = \mathbf{w}_t + \tau_t \mathbf{v}_t$, where $\mathbf{v}_t$ is the same as in the realizable case. However, the scaling factor $\tau_t$ is modified and is set to
$$\tau_t = \frac{\ell_\epsilon(\mathbf{w}_t; \mathbf{z}_t)}{\|\mathbf{v}_t\|^2 + \gamma}.$$
The following theorem provides a loss bound for the online algorithm relative to the loss of any fixed weight vector $\mathbf{w}^\star$.
Theorem 3 Let $\mathbf{z}_1 = (\mathbf{x}_1, y_1), \mathbf{z}_2 = (\mathbf{x}_2, y_2), \ldots, \mathbf{z}_t = (\mathbf{x}_t, y_t), \ldots$ be a sequence of classification or regression examples. Let $\mathbf{w}^\star$ be any vector in $\mathbb{R}^n$. Then if the PA algorithm for the unrealizable case is run with $\epsilon$, and with $\gamma > 0$, the following bound holds for any $T \ge 1$ and any constant $B$ satisfying $B \ge \|\mathbf{x}_t\|^2$:
$$\sum_{t=1}^{T} \bigl(\ell_\epsilon(\mathbf{w}_t; \mathbf{z}_t)\bigr)^2 \le (\gamma + B)\,\|\mathbf{w}^\star - \mathbf{w}_1\|^2 + \left(1 + \frac{B}{\gamma}\right) \sum_{t=1}^{T} \bigl(\ell_\epsilon(\mathbf{w}^\star; \mathbf{z}_t)\bigr)^2. \qquad (19)$$
The proof of the theorem is based on a reduction to the realizable case (cf. [4, 13, 14]) and is omitted due to lack of space.

6 Extensions

There are numerous potential extensions to our approach. For instance, if all the components of the instances are non-negative we can derive a multiplicative version of the PA algorithm. The multiplicative PA algorithm maintains a weight vector $\mathbf{w}_t \in \mathbb{P}^n$, where $\mathbb{P}^n = \{\mathbf{x} \in \mathbb{R}_+^n : \sum_{j=1}^n x_j = 1\}$. The multiplicative update of $\mathbf{w}_t$ is
$$w_{t+1,j} = \frac{1}{Z_t}\, w_{t,j}\, e^{\tau_t v_{t,j}},$$
where $\mathbf{v}_t$ is the same as the one used in the additive algorithm (Table 1), $\tau_t$ now becomes $4\ell_\epsilon(\mathbf{w}_t; \mathbf{z}_t)/\|\mathbf{v}_t\|_\infty^2$ for regression and classification and $\ell_\epsilon(\mathbf{w}_t; \mathbf{z}_t)/(8\|\mathbf{v}_t\|_\infty^2)$ for uniclass, and $Z_t = \sum_{j=1}^n w_{t,j}\, e^{\tau_t v_{t,j}}$ is a normalization factor. For the multiplicative PA we can prove the following loss bound.

Theorem 4 Let $\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_t = (\mathbf{x}_t, y_t), \ldots$ be a sequence of examples such that $x_{t,j} \ge 0$ for all $t$. Let $D_{RE}(\mathbf{w} \| \mathbf{w}') = \sum_j w_j \log(w_j/w'_j)$ denote the relative entropy between $\mathbf{w}$ and $\mathbf{w}'$. Assume that there exist $\mathbf{w}^\star$ and $\epsilon^\star$ such that $\ell_{\epsilon^\star}(\mathbf{w}^\star; \mathbf{z}_t) = 0$ for all $t$. Then when the multiplicative version of the PA algorithm is run with $\epsilon > \epsilon^\star$, the following bound holds for any $T \ge 1$:
$$\sum_{t=1}^{T} \bigl(\ell_\epsilon(\mathbf{w}_t; \mathbf{z}_t)\bigr)^2 + 2(\epsilon - \epsilon^\star) \sum_{t=1}^{T} \ell_\epsilon(\mathbf{w}_t; \mathbf{z}_t) \le \frac{1}{2}\,B\, D_{RE}(\mathbf{w}^\star \| \mathbf{w}_1),$$
where for classification and regression $B$ is a bound on the square of the infinity norm of the instances ($\forall t: B \ge \|\mathbf{x}_t\|_\infty^2$) and $B = 16$ for uniclass. The proof of the theorem is rather technical and uses the proof technique of Thm. 1 in conjunction with inequalities on the logarithm of $Z_t$ (see for instance [7, 11, 9]).
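A minimal sketch of the multiplicative step for classification may help. It assumes the classification discrepancy is $\delta(\mathbf{w}; \mathbf{z}) = -y(\mathbf{w} \cdot \mathbf{x})$ (consistent with obtaining the hinge loss at $\epsilon = -1$); the function name and toy values are illustrative, not from the paper.

```python
import math

# Hedged sketch of one multiplicative PA step for classification:
# exponentiated update followed by normalization back onto the simplex.

def mult_pa_step(w, x, y, eps=-1.0):
    pred = sum(wj * xj for wj, xj in zip(w, x))
    loss = max(0.0, -eps - y * pred)          # l_eps(w; z) with delta = -y (w . x)
    if loss == 0.0:
        return w                              # conservative: no update on zero loss
    v = [y * xj for xj in x]                  # update direction v_t
    tau = 4.0 * loss / max(abs(vj) for vj in v) ** 2   # 4 l / ||v||_inf^2
    unnorm = [wj * math.exp(tau * vj) for wj, vj in zip(w, v)]
    Z = sum(unnorm)                           # Z_t keeps w on the simplex
    return [u / Z for u in unnorm]
```

Starting from the uniform vector, a single step shifts mass multiplicatively toward the coordinates that agree with the label while the normalization by $Z_t$ preserves $\sum_j w_j = 1$.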
An interesting question is whether the unified view of classification, regression, and uniclass can be exported and used with other algorithms for classification such as ROMMA [14] and ALMA [5]. Another, rather general direction for possible extension surfaces when replacing the Euclidean distance between $\mathbf{w}_{t+1}$ and $\mathbf{w}_t$ with other distances and divergences, such as the Bregman divergence. The resulting optimization problem may be solved via Bregman projections. In this case it might be possible to derive general loss bounds; see for example [12]. We are currently exploring generalizations of our framework to other decision tasks such as distance learning [16] and online convex programming [17].

References
[1] H. H. Bauschke and J. M. Borwein. On projection algorithms for solving convex feasibility problems. SIAM Review, 1996.
[2] Y. Censor and S. A. Zenios. Parallel Optimization. Oxford University Press, 1997.
[3] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991, 2003.
[4] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, 1999.
[5] C. Gentile. A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2:213–242, 2001.
[6] C. Gentile and M. Warmuth. Linear hinge loss and average margin. In NIPS'98.
[7] D. P. Helmbold, R. E. Schapire, Y. Singer, and M. K. Warmuth. A comparison of new and old algorithms for a mixture estimation problem. In COLT'95.
[8] M. Herbster. Learning additive models online with fast evaluating kernels. In COLT'01.
[9] J. Kivinen, D. P. Helmbold, and M. Warmuth. Relative loss bounds for single neurons. IEEE Transactions on Neural Networks, 10(6):1291–1304, 1999.
[10] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. In NIPS'02.
[11] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–64, January 1997.
[12] J. Kivinen and M. K. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45(3):301–329, July 2001.
[13] N. Klasner and H. U. Simon. From noise-free to noise-tolerant and from on-line to batch learning. In COLT'95.
[14] Y. Li and P. M. Long. The relaxed online maximum margin algorithm. Machine Learning, 46(1–3):361–387, 2002.
[15] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[16] E. Xing, A. Y. Ng, M. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In NIPS'03.
[17] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML'03.
2003
Training a Quantum Neural Network

Bob Ricks
Department of Computer Science, Brigham Young University, Provo, UT 84602
cyberbob@cs.byu.edu

Dan Ventura
Department of Computer Science, Brigham Young University, Provo, UT 84602
ventura@cs.byu.edu

Abstract

Most proposals for quantum neural networks have skipped over the problem of how to train the networks. The mechanics of quantum computing are different enough from classical computing that the issue of training should be treated in detail. We propose a simple quantum neural network and a training method for it. It can be shown that this algorithm works in quantum systems. Results on several real-world data sets show that this algorithm can train the proposed quantum neural networks, and that it has some advantages over classical learning algorithms.

1 Introduction

Many quantum neural networks have been proposed [1], but very few of these proposals have attempted to provide an in-depth method of training them. Most either do not mention how the network will be trained or simply state that they use a standard gradient descent algorithm. This assumes that training a quantum neural network will be straightforward and analogous to classical methods. While some quantum neural networks seem quite similar to classical networks [2], others have proposed quantum networks that are vastly different [3, 4, 5]. Several different network structures have been proposed, including lattices [6] and dots [4]. Several of these networks also employ methods which are speculative or difficult to do in quantum systems [7, 8]. These significant differences between classical networks and quantum neural networks, as well as the problems associated with quantum computation itself, require us to look more deeply at the issue of training quantum neural networks. Furthermore, no one has done empirical testing on their training methods to show that their methods work with real-world problems.
It is an open question what advantages a quantum neural network (QNN) would have over a classical network. It has been shown that QNNs should have roughly the same computational power as classical networks [7]. Other results have shown that QNNs may work best with some classical components as well as quantum components [2]. Quantum searches can be proven to be faster than comparable classical searches. We leverage this idea to propose a new training method for a simple QNN. This paper details such a network and how training could be done on it. Results from testing the algorithm on several real-world problems show that it works.

2 Quantum Computation

Several necessary ideas that form the basis for the study of quantum computation are briefly reviewed here. For a good treatment of the subject, see [9].

2.1 Linear Superposition

Linear superposition is closely related to the familiar mathematical principle of linear combination of vectors. Quantum systems are described by a wave function $\psi$ that exists in a Hilbert space. The Hilbert space has a set of states, $|\phi_i\rangle$, that form a basis, and the system is described by a quantum state $|\psi\rangle = \sum_i c_i\,|\phi_i\rangle$. $|\psi\rangle$ is said to be coherent or to be in a linear superposition of the basis states $|\phi_i\rangle$, and in general the coefficients $c_i$ are complex. A postulate of quantum mechanics is that if a coherent system interacts in any way with its environment (by being measured, for example), the superposition is destroyed. This loss of coherence is governed by the wave function $\psi$. The coefficients $c_i$ are called probability amplitudes, and $|c_i|^2$ gives the probability of $|\psi\rangle$ being measured in the state $|\phi_i\rangle$. Note that the wave function $\psi$ describes a real physical system that must collapse to exactly one basis state. Therefore, the probabilities governed by the amplitudes $c_i$ must sum to unity. A two-state quantum system is used as the basic unit of quantum computation.
Such a system is referred to as a quantum bit or qubit and, naming the two states $|0\rangle$ and $|1\rangle$, it is easy to see why this is so.

2.2 Operators

Operators on a Hilbert space describe how one wave function is changed into another, and they may be represented as matrices acting on vectors (the notation $|\cdot\rangle$ indicates a column vector and $\langle\cdot|$ a [complex conjugate] row vector). Using operators, an eigenvalue equation can be written $A|\phi_i\rangle = a_i|\phi_i\rangle$, where $a_i$ is the eigenvalue. The solutions $|\phi_i\rangle$ to such an equation are called eigenstates and can be used to construct the basis of a Hilbert space as discussed in Section 2.1. In the quantum formalism, all properties are represented as operators whose eigenstates are the basis for the Hilbert space associated with that property and whose eigenvalues are the quantum allowed values for that property. It is important to note that operators in quantum mechanics must be linear operators and further that they must be unitary.

2.3 Interference

Interference is a familiar wave phenomenon. Wave peaks that are in phase interfere constructively while those that are out of phase interfere destructively. This is a phenomenon common to all kinds of wave mechanics from water waves to optics. The well-known double slit experiment demonstrates empirically that at the quantum level interference also applies to the probability waves of quantum mechanics. The wave function interferes with itself through the action of an operator – the different parts of the wave function interfere constructively or destructively according to their relative phases just like any other kind of wave.

2.4 Entanglement

Entanglement is the potential for quantum systems to exhibit correlations that cannot be accounted for classically. From a computational standpoint, entanglement seems intuitive enough – it is simply the fact that correlations can exist between different qubits – for example, if one qubit is in the $|1\rangle$ state, another will be in the $|1\rangle$ state.
However, from a physical standpoint, entanglement is little understood. The questions of what exactly it is and how it works are still not resolved. What makes it so powerful (and so little understood) is the fact that since quantum states exist as superpositions, these correlations exist in superposition as well. When coherence is lost, the proper correlation is somehow communicated between the qubits, and it is this "communication" that is the crux of entanglement. Mathematically, entanglement may be described using the density matrix formalism. The density matrix $\rho_\psi$ of a quantum state $|\psi\rangle$ is defined as $\rho_\psi = |\psi\rangle\langle\psi|$. For example, the quantum state $|\xi\rangle = \frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|01\rangle$ appears in vector form as
$$|\xi\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix},$$
and it may also be represented as the density matrix
$$\rho_\xi = |\xi\rangle\langle\xi| = \frac{1}{2} \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$
while the state $|\psi\rangle = \frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle$ is represented as
$$\rho_\psi = |\psi\rangle\langle\psi| = \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix},$$
where the matrices and vectors are indexed by the state labels $00, \ldots, 11$. Notice that $\rho_\xi$ can be factorized as
$$\rho_\xi = \frac{1}{2} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \otimes \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},$$
where $\otimes$ is the normal tensor product. On the other hand, $\rho_\psi$ cannot be factorized. States that cannot be factorized are said to be entangled, while those that can be factorized are not. There are different degrees of entanglement and much work has been done on better understanding and quantifying it [10, 11]. Finally, it should be mentioned that while interference is a quantum property that has a classical cousin, entanglement is a completely quantum phenomenon for which there is no classical analog. It has proven to be a powerful computational resource in some cases and a major hindrance in others.
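The factorization distinction above can be checked numerically: tracing one qubit out of a product state leaves a pure reduced density matrix (trace of its square equals 1), while the entangled state leaves a mixed one. This is a pedagogical pure-Python sketch; the helper names are ours.

```python
# Purity test for the two example states discussed in the text.

def density(state):
    # rho = |s><s| for a real amplitude vector
    return [[a * b for b in state] for a in state]

def partial_trace_second(rho):
    # Trace out the second qubit of a two-qubit (4x4) density matrix
    return [[rho[2 * i][2 * j] + rho[2 * i + 1][2 * j + 1]
             for j in range(2)] for i in range(2)]

def purity(r):
    # Tr(r^2) for a 2x2 matrix
    return sum(r[i][k] * r[k][i] for i in range(2) for k in range(2))

s = 2 ** -0.5
xi = [s, s, 0.0, 0.0]    # (|00> + |01>)/sqrt(2): a product state
psi = [s, 0.0, 0.0, s]   # (|00> + |11>)/sqrt(2): entangled
```

For `xi` the reduced state has purity 1 (pure, so the full state factorizes), while for `psi` it has purity 1/2 (maximally mixed), mirroring the fact that $\rho_\psi$ admits no tensor factorization.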
To summarize, quantum computation can be defined as representing the problem to be solved in the language of quantum states and then producing operators that drive the system (via interference and entanglement) to a final state such that when the system is observed there is a high probability of finding a solution.

2.5 An Example – Quantum Search

One of the best known quantum algorithms searches an unordered database quadratically faster than any classical method [12, 13]. The algorithm begins with a superposition of all $N$ data items and depends upon an oracle that can recognize the target of the search. Classically, searching such a database requires $O(N)$ oracle calls; however, on a quantum computer, the task requires only $O(\sqrt{N})$ oracle calls. Each oracle call consists of a quantum operator that inverts the phase of the search target. An "inversion about average" operator then shifts amplitude towards the target state. After $\pi/4 \cdot \sqrt{N}$ repetitions of this process, the system is measured and, with high probability, the desired datum is the result.

3 A Simple Quantum Neural Network

We would like a QNN with features that make it easy for us to model, yet powerful enough to leverage quantum physics. We would like our QNN to:

• use known quantum algorithms and gates
• have weights which we can measure for each node
• work in classical simulations of reasonable size
• be able to transfer knowledge to classical systems

We propose a QNN that operates much like a classical ANN composed of several layers of perceptrons – an input layer, one or more hidden layers and an output layer. Each layer is fully connected to the previous layer. Each hidden layer computes a weighted sum of the outputs of the previous layer. If this sum is above a threshold, the node goes high; otherwise it stays low. The output layer does the same thing as the hidden layer(s), except that it also checks its accuracy against the target output of the network.
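The amplitude dynamics of the search described in Section 2.5 can be simulated classically. This sketch is illustrative only (real amplitudes, a single marked item); the function name is ours.

```python
import math

# Classical simulation of quantum search amplitudes: invert the target's
# phase, then invert all amplitudes about their average, and repeat
# roughly (pi/4) * sqrt(N) times.

def grover_probability(N, target):
    amps = [1.0 / math.sqrt(N)] * N                # uniform superposition
    for _ in range(int(math.pi / 4.0 * math.sqrt(N))):
        amps[target] = -amps[target]               # oracle: phase inversion
        mean = sum(amps) / N
        amps = [2.0 * mean - a for a in amps]      # inversion about average
    return amps[target] ** 2                       # probability of measuring the target
```

For example, with $N = 64$ only six iterations are performed, yet the probability of measuring the marked item ends up above 0.99, consistent with the quadratic speed-up claimed above.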
The network as a whole computes a function by checking which output bit is high. There are no checks to make sure exactly one output is high. This allows the network to learn data sets which have one output high or binary-encoded outputs.

Figure 1: Simple QNN to compute XOR function

The QNN in Figure 1 is an example of such a network, with sufficient complexity to compute the XOR function. Each input node $i$ is represented by a register, $|\alpha\rangle_i$. The two hidden nodes compute a weighted sum of the inputs, $|\psi\rangle_{i1}$ and $|\psi\rangle_{i2}$, and compare the sum to a threshold weight, $|\psi\rangle_{i0}$. If the weighted sum is greater than the threshold, the node goes high. The $|\beta\rangle_k$ represent internal calculations that take place at each node. The output layer works similarly, taking a weighted sum of the hidden nodes and checking against a threshold. The QNN then checks each computed output and compares it to the target output, $|\Omega\rangle_j$, sending $|\phi\rangle_j$ high when they are equivalent. The performance of the network is denoted by $|\rho\rangle$, which is the number of computed outputs equivalent to their corresponding target output. At the quantum gate level, the network will require $O(blm + m^2)$ gates for each node of the network. Here $b$ is the number of bits used for floating point arithmetic in $|\beta\rangle$, $l$ is the number of bits for each weight and $m$ is the number of inputs to the node [14, 15]. The overall network works as follows on a training set. In our example, the network has two input parameters, so all $n$ training examples will have two input registers. These are represented as $|\alpha\rangle_{11}$ to $|\alpha\rangle_{n2}$. The target answers are kept in registers $|\Omega\rangle_{11}$ to $|\Omega\rangle_{n2}$. Each hidden or output node has a weight vector, represented by $|\psi\rangle_i$, each vector containing weights for each of its inputs. After classifying a training example, the registers $|\phi\rangle_1$ and $|\phi\rangle_2$ reflect the network's ability to classify that training example. As a simple measure of performance, we increment $|\rho\rangle$ by the sum of all $|\phi\rangle_i$.
Figure 2: QNN Training

When all training examples have been classified, $|\rho\rangle$ will be the sum of the output nodes that have the correct answer throughout the training set and will range between zero and the number of training examples times the number of output nodes.

4 Using Quantum Search to Learn Network Weights

One possibility for training this kind of a network is to search through the possible weight vectors for one which is consistent with the training data. Quantum searches have been used already in quantum learning [16] and many of the problems associated with them have already been explored [17]. We would like to find a solution which classifies all training examples correctly; in other words, we would like $|\rho\rangle = n \cdot m$, where $n$ is the number of training examples and $m$ is the number of output nodes. Since we generally do not know how many weight vectors will do this, we use a generalization of the original search algorithm [18], intended for problems where the number of solutions $t$ is unknown. The basic idea is that we will put $|\psi\rangle$ into a superposition of all possible weight vectors and search for one which classifies all training examples correctly. We start out with $|\psi\rangle$ as a superposition of all possible weight vectors. All other registers ($|\beta\rangle$, $|\phi\rangle$, $|\rho\rangle$), besides the inputs and target outputs, are initialized to the state $|0\rangle$. We then classify each training example, updating the performance register $|\rho\rangle$. By using a superposition we classify the training examples with respect to every possible weight vector simultaneously. Each weight vector is now entangled with $|\rho\rangle$ in such a way that $|\rho\rangle$ corresponds with how well every weight vector classifies all the training data. In this case, the oracle for the quantum search is $|\rho\rangle = n \cdot m$, which corresponds to searching for a weight vector which correctly classifies the entire set.
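As a purely classical illustration of the oracle condition $|\rho\rangle = n \cdot m$, one can enumerate a tiny weight space exhaustively and compute the performance register for every candidate. The encoding below (three weight levels, one thresholded node learning AND) is a toy assumption of ours, not the paper's qubit-level construction; a quantum search would locate a satisfying vector in $O(\sqrt{2^b/t})$ oracle calls instead of scanning all of them.

```python
from itertools import product

# Toy enumeration of a thresholded-node weight space, counting how many
# training outputs each candidate gets right (the role of |rho>).

def node_output(weights, inputs):
    # weights = (w1, w2, threshold), mirroring the threshold weight above
    s = weights[0] * inputs[0] + weights[1] * inputs[1]
    return 1 if s > weights[2] else 0

def rho(weights, training_set):
    # Number of correct outputs over the training set (here m = 1 output)
    return sum(1 for x, target in training_set
               if node_output(weights, x) == target)

train = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
solutions = [w for w in product((-1, 0, 1), repeat=3)
             if rho(w, train) == len(train)]
```

The oracle of the quantum algorithm marks exactly the candidates collected in `solutions`; relaxing the condition to $\rho \ge n \cdot m \cdot p$ corresponds to loosening the final filter.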
Unfortunately, searching the weight vectors while entangled with $|\rho\rangle$ would cause unwanted weight vectors to grow that would be entangled with the performance metric we are looking for. The solution is to disentangle $|\psi\rangle$ from the other registers after inverting the phase of those weights which match the search criteria, based on $|\rho\rangle$. To do this the entire network will need to be uncomputed, which will unentangle all the registers and set them back to their initial values. This means that the network will need to be recomputed each time we make an oracle call and after each measurement. There are at least two things about this algorithm that are undesirable. First, not all training data will have any solution networks that correctly classify all training instances. This means that nothing will be marked by the search oracle, so every weight vector will have an equal chance of being measured. It is also possible that even when a solution does exist, it is not desirable because it overfits the training data. Second, the amount of time needed to find a vector which correctly classifies the training set is $O(\sqrt{2^b/t})$, which has exponential complexity with respect to the number of bits in the weight vector. One way to deal with the first problem is to search until we find a solution which covers an acceptable percentage, $p$, of the training data. In other words, the search oracle is modified to be $|\rho\rangle \ge n \cdot m \cdot p$. The second problem is addressed in the next section.

5 Piecewise Weight Learning

Our quantum search algorithm gives us a good polynomial speed-up to the exponential task of finding a solution to the QNN. This algorithm does not scale well; in fact, it is exponential in the total number of weights in the network and the bits per weight. Therefore, we propose a randomized training algorithm which searches each node's weight vector independently.
The network starts off, once again, with training examples in $|\alpha\rangle$, the corresponding answers in $|\Omega\rangle$, and zeros in all the other registers. A node is randomly selected and its weight vector, $|\psi\rangle_i$, is put into superposition. All other weight vectors start with random classical initial weights. We then search for a weight vector for this node that causes the entire network to classify a certain percentage, $p$, of the training examples correctly. This is repeated, iteratively decreasing $p$, until a new weight vector is found. That weight is fixed classically and the process is repeated randomly for the other nodes. Searching each node's weight vector separately is, in effect, a random search through the weight space where we select weight vectors which give a good level of performance for each node. Each node takes on weight vectors that tend to increase performance, with some amount of randomness that helps keep it out of local minima. This search can be terminated when an acceptable level of performance has been reached. There are a few improvements to the basic design which help speed convergence. First, to ensure that hidden nodes find weight vectors that compute something useful, a small performance penalty is added to weight vectors which cause a hidden node to output the same value for all training examples. This helps select weight vectors which contain useful information for the output nodes. Since each output node's performance is independent of the performance of all other output nodes, the algorithm only considers the accuracy of the output node being trained when training an output node.

6 Results

We first consider the canonical XOR problem. Each of the hidden and output nodes is a thresholded node with three weights, one for each input and one for the threshold. For each weight 2 bits are used. Quantum search did well on this problem, finding a solution in an average of 2.32 searches. The randomized search algorithm also did well on the XOR problem.
After an average of 58 weight updates, the algorithm was able to correctly classify the training data. Since this is a randomized algorithm, both in the number of iterations of the search algorithm before measuring and in the order in which nodes update their weight vectors, the standard deviation for this method was much higher, but still reasonable. In the randomized search algorithm, an epoch refers to finding and fixing the weight of a single node. We also tried the randomized search algorithm on a few real-world machine learning problems: the lenses, Hayes-Roth and iris datasets [19]. The lenses data set tries to predict whether people will need soft contact lenses, hard contact lenses or no contacts. The iris dataset details features of three different classes of irises. The Hayes-Roth dataset classifies people into different classes depending on several attributes.

Data Set     # Qubits   Weight Epochs   Weight Updates   Output Accuracy   Training Accuracy   Backprop
Iris         32         23,000          225              98.23%            97.79%              96%
Lenses       42         22,500          145              98.35%            100.0%              92%
Hayes-Roth   68         5 × 10^6        9,200            88.76%            82.98%              83%

Table 1: Training Results

The lenses data set can be solved with a network that has three hidden nodes. It usually finds a solution after between a few hundred and a few thousand iterations. This may be because it has a hard time with 2-bit weights, or because it is searching for perfect accuracy. The number of times a weight was fixed and updated was only 225 for this data set. The iris data set was normalized so that each input had a value between zero and one. The randomized search algorithm found the correct target for 97.79% of the output nodes. Our results for the Hayes-Roth problem were also quite good. We used four hidden nodes with two-bit weights for the hidden nodes. We had to normalize the inputs to range from zero to one once again so the larger inputs would not dominate the weight vectors.
The algorithm found the correct target for 88.86% of the output nodes in about 5,000,000 epochs. Note that this does not mean that it classified 88.86% of the training examples correctly, since we are checking each output node for accuracy on each training example. The algorithm actually classified 82.98% of the training set correctly, which compares well with backpropagation's 83% [20].

7 Conclusions and Future Work

This paper proposes a simple quantum neural network and a method of training it which works well in quantum systems. By using a quantum search we are able to use a well-known algorithm for quantum systems which has already been used for quantum learning. The algorithm is able to search for solutions that cover an arbitrary percentage of the training set. This could be very useful for problems which require a very accurate solution. The drawback is that it is an exponential algorithm, even with the significant quadratic speed-up. A randomized version avoids some of the exponential increases in complexity with problem size. This algorithm is exponential in the number of qubits of each node's weight vector instead of in the composite weight vector of the entire network. This means the complexity of the algorithm increases with the number of connections to a node and the precision of each individual weight, dramatically decreasing complexity for problems with large numbers of nodes. This could be a great improvement for larger problems. Preliminary results for both algorithms have been very positive. There may be quantum methods which could be used to improve current gradient descent and other learning algorithms. It may also be possible to combine some of these with a quantum search. An example would be to use gradient descent to refine a composite weight vector found by quantum search. Conversely, a quantum search could start with the weight vector of a gradient descent search.
This would allow the search to start with an accurate weight vector and search locally for weight vectors which improve overall performance. Finally, the two methods could be used simultaneously to try to take advantage of the benefits of each technique. Other types of QNNs may be able to use a quantum search as well, since the algorithm only requires a weight space which can be searched in superposition. In addition, more traditional gradient descent techniques might benefit from a quantum speed-up themselves.

References
[1] Alexandr Ezhov and Dan Ventura. Quantum neural networks. In N. Kasabov, editor, Future Directions for Intelligent Systems and Information Science. Physica-Verlag, 2000.
[2] Ajit Narayanan and Tammy Menneer. Quantum artificial neural network architectures and components. Information Sciences, 124(1-4):231–255, 2000.
[3] M. V. Altaisky. Quantum neural network. Technical report, 2001. http://xxx.lanl.gov/quant-ph/0107012.
[4] E. C. Behrman, J. Niemel, J. E. Steck, and S. R. Skinner. A quantum dot neural network. In Proceedings of the 4th Workshop on Physics of Computation, pages 22–24, Boston, 1996.
[5] Fariel Shafee. Neural networks with c-not gated nodes. Technical report, 2002. http://xxx.lanl.gov/quant-ph/0202016.
[6] Yukari Fujita and Tetsuo Matsui. Quantum gauged neural network: U(1) gauge theory. Technical report, 2002. http://xxx.lanl.gov/cond-mat/0207023.
[7] S. Gupta and R. K. P. Zia. Quantum neural networks. Journal of Computer and System Sciences, 63(3):355–383, 2001.
[8] E. C. Behrman, V. Chandrasheka, Z. Wank, C. K. Belur, J. E. Steck, and S. R. Skinner. A quantum neural network computes entanglement. Technical report, 2002. http://xxx.lanl.gov/quant-ph/0202131.
[9] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[10] V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight. Quantifying entanglement. Physical Review Letters, 78(12):2275–2279, 1997.
[11] R. Jozsa. Entanglement and quantum computation. In S. Hugget, L. Mason, K. P. Tod, T. Tsou, and N. M. J. Woodhouse, editors, The Geometric Universe, pages 369–379. Oxford University Press, 1998.
[12] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the 28th ACM STOC, pages 212–219, 1996.
[13] Lov K. Grover. Quantum mechanics helps in searching for a needle in a haystack. Physical Review Letters, 78:325–328, 1997.
[14] Peter Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal of Computing, 26(5):1484–1509, 1997.
[15] Vlatko Vedral, Adriano Barenco, and Artur Ekert. Quantum networks for elementary arithmetic operations. Physical Review A, 54(1):147–153, 1996.
[16] Dan Ventura and Tony Martinez. Quantum associative memory. Information Sciences, 124(1-4):273–296, 2000.
[17] Alexandr Ezhov, A. Nifanova, and Dan Ventura. Distributed queries for quantum associative memory. Information Sciences, 128(3-4):271–293, 2000.
[18] Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp. Tight bounds on quantum searching. In Proceedings of the Fourth Workshop on Physics and Computation, pages 36–43, 1996.
[19] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/∼mlearn/MLRepository.html.
[20] Frederick Zarndt. A comprehensive case study: An examination of machine learning and connectionist algorithms. Master's thesis, Brigham Young University, 1995.
2003
Perspectives on Sparse Bayesian Learning
David Wipf, Jason Palmer, and Bhaskar Rao
Department of Electrical and Computer Engineering, University of California, San Diego, CA 92092
{dwipf,japalmer}@ucsd.edu, brao@ece.ucsd.edu

Abstract
Recently, relevance vector machines (RVM) have been fashioned from a sparse Bayesian learning (SBL) framework to perform supervised learning using a weight prior that encourages sparsity of representation. The methodology incorporates an additional set of hyperparameters governing the prior, one for each weight, and then adopts a specific approximation to the full marginalization over all weights and hyperparameters. Despite its empirical success, however, no rigorous motivation for this particular approximation is currently available. To address this issue, we demonstrate that SBL can be recast as the application of a rigorous variational approximation to the full model by expressing the prior in a dual form. This formulation obviates the necessity of assuming any hyperpriors and leads to natural, intuitive explanations of why sparsity is achieved in practice.

1 Introduction
In an archetypical regression situation, we are presented with a collection of N regressor/target pairs $\{\phi_i \in \mathbb{R}^M, t_i \in \mathbb{R}\}_{i=1}^N$, and the goal is to find a vector of weights w such that, in some sense,
$$t_i \approx \phi_i^T w \ \ \forall i, \quad \text{or} \quad t \approx \Phi w, \qquad (1)$$
where $t \triangleq [t_1, \dots, t_N]^T$ and $\Phi \triangleq [\phi_1, \dots, \phi_N]^T \in \mathbb{R}^{N \times M}$. Ideally, we would like to learn this relationship such that, given a new training vector $\phi_*$, we can make accurate predictions of $t_*$, i.e., we would like to avoid overfitting. In practice, this requires some form of regularization, or a penalty on overly complex models. Recently, a sparse Bayesian learning (SBL) framework has been derived to find robust solutions to (1) [3, 7]. The key feature of this development is the incorporation of a prior on the weights that encourages sparsity in representation, i.e., few non-zero weights.
When Φ is square and formed from a positive-definite kernel function, we obtain the relevance vector machine (RVM), a Bayesian competitor of SVMs with several significant advantages.

1.1 Sparse Bayesian Learning
Given a new regressor vector $\phi_*$, the full Bayesian treatment of (1) involves finding the predictive distribution $p(t_*|t)$. (For simplicity, we omit explicit conditioning on $\Phi$ and $\phi_*$, i.e., $p(t_*|t) \equiv p(t_*|t, \Phi, \phi_*)$.) We typically compute this distribution by marginalizing over the model weights, i.e.,
$$p(t_*|t) = \frac{1}{p(t)} \int p(t_*|w)\, p(w,t)\, dw, \qquad (2)$$
where the joint density $p(w,t) = p(t|w)\,p(w)$ combines all relevant information from the training data (likelihood principle) with our prior beliefs about the model weights. The likelihood term $p(t|w)$ is assumed to be Gaussian,
$$p(t|w) = (2\pi\sigma^2)^{-N/2} \exp\left( -\frac{1}{2\sigma^2} \| t - \Phi w \|^2 \right), \qquad (3)$$
where for now we assume that the noise variance $\sigma^2$ is known. For sparse priors $p(w)$ (possibly improper), the required integrations, including the computation of the normalizing term $p(t)$, are typically intractable, and we are forced to accept some form of approximation to $p(w,t)$. Sparse Bayesian learning addresses this issue by introducing a set of hyperparameters into the specification of the problematic weight prior $p(w)$ before adopting a particular approximation. The key assumption is that $p(w)$ can be expressed as
$$p(w) = \prod_{i=1}^M p(w_i) = \prod_{i=1}^M \int p(w_i|\gamma_i)\, p(\gamma_i)\, d\gamma_i, \qquad (4)$$
where $\gamma = [\gamma_1, \dots, \gamma_M]^T$ represents a vector of hyperparameters (one for each weight). The implicit SBL derivation presented in [7] can then be reformulated as follows,
$$p(t_*|t) = \frac{1}{p(t)} \int p(t_*|w)\, p(t|w)\, p(w)\, dw = \frac{1}{p(t)} \iint p(t_*|w)\, p(t|w)\, p(w|\gamma)\, p(\gamma)\, dw\, d\gamma. \qquad (5)$$
Proceeding further, by applying Bayes' rule to this expression, we can exploit the plugin rule [2] via
$$p(t_*|t) = \iint p(t_*|w)\, p(t|w)\, p(w|\gamma)\, \frac{p(\gamma|t)}{p(t|\gamma)}\, dw\, d\gamma \approx \iint p(t_*|w)\, p(t|w)\, p(w|\gamma)\, \frac{\delta(\gamma - \gamma_{MAP})}{p(t|\gamma)}\, dw\, d\gamma = \frac{1}{p(t; \gamma_{MAP})} \int p(t_*|w)\, p(w,t; \gamma_{MAP})\, dw.$$
(6) The essential difference from (2) is that we have replaced $p(w,t)$ with the approximate distribution $p(w,t;\gamma_{MAP}) = p(t|w)\,p(w;\gamma_{MAP})$. Also, the normalizing term becomes $\int p(w,t;\gamma_{MAP})\,dw$, and we assume that all required integrations can now be handled in closed form. Of course the question remains: how do we structure this new set of parameters γ to accomplish this goal? The answer is that the hyperparameters enter as weight prior variances of the form
$$p(w_i|\gamma_i) = \mathcal{N}(0, \gamma_i). \qquad (7)$$
The hyperpriors are given by
$$p(\gamma_i^{-1}) \propto \gamma_i^{1-a} \exp(-b/\gamma_i), \qquad (8)$$
where $a, b > 0$ are constants. The crux of the actual learning procedure presented in [7] is to find some MAP estimate of γ (or, more accurately, of a function of γ). In practice, we find that many of the estimated $\gamma_i$'s converge to zero, leading to sparse solutions, since the corresponding weights, and therefore columns of Φ, can effectively be pruned from the model. The Gaussian assumptions, both on $p(t|w)$ and $p(w;\gamma)$, then facilitate direct, analytic computation of (6).

1.2 Ambiguities in Current SBL Derivation
Modern Bayesian analysis is primarily concerned with finding distributions and locations of significant probability mass, not just modes of distributions, which can be very misleading in many cases [6]. With SBL, the justification for the additional level of sophistication (i.e., the inclusion of hyperparameters) is that the adoption of the plugin rule (i.e., the approximation $p(w,t) \approx p(w,t;\gamma_{MAP})$) is reflective of the true mass, at least sufficiently so for predictive purposes. However, no rigorous motivation for this particular claim is currently available, nor is it immediately obvious exactly how the mass of this approximate distribution relates to the true mass. A more subtle difficulty arises because MAP estimation, and hence the plugin rule, is not invariant under a change in parameterization. Specifically, for an invertible function $f(\cdot)$,
$$[f(\gamma)]_{MAP} \neq f(\gamma_{MAP}).$$
(9) Different transformations lead to different modes and, ultimately, to different approximations to $p(w,t)$ and therefore $p(t_*|t)$. So how do we decide which one to use? The canonical form of SBL, and the one that has displayed remarkable success in the literature, does not in fact find a mode of $p(\gamma|t)$, but a mode of $p(-\log\gamma|t)$. But again, why should this mode necessarily be more reflective of the desired mass than any other? As already mentioned, SBL often leads to sparse results in practice; namely, the approximation $p(w,t;\gamma_{MAP})$ is typically nonzero only on a small subspace of M-dimensional w space. The question remains, however: why should an approximation to the full Bayesian treatment necessarily lead to sparse results in practice? To address all of these ambiguities, we will herein demonstrate that the sparse Bayesian learning procedure outlined above can be recast as the application of a rigorous variational approximation to the distribution $p(w,t)$. This will allow us to quantify the exact relationship between the true mass and the approximate mass of this distribution. In effect, we will demonstrate that SBL is attempting to directly capture significant portions of the probability mass of $p(w,t)$, while still allowing us to perform the required integrations. This framework also obviates the necessity of assuming any hyperprior $p(\gamma)$ and is independent of the (subjective) parameterization (e.g., γ or −log γ, etc.). Moreover, this perspective leads to natural, intuitive explanations of why sparsity is observed in practice and why, in general, this need not be the case.

2 A Variational Interpretation of Sparse Bayesian Learning
To begin, we note that the ultimate goal of this analysis is to find a well-motivated approximation to the distribution
$$p(t_*|t; H) \propto \int p(t_*|w)\, p(w,t; H)\, dw = \int p(t_*|w)\, p(t|w)\, p(w; H)\, dw, \qquad (10)$$
where we have explicitly denoted by H the hypothesis of a model with a sparsity-inducing (possibly improper) weight prior.
As already mentioned, the integration required by this form is analytically intractable, and we must resort to some form of approximation. To accomplish this, we appeal to variational methods to find a viable approximation to $p(w,t;H)$ [5]. We may then substitute this approximation into (10), leading to tractable integrations and analytic posterior distributions. To find a class of suitable approximations, we first express $p(w;H)$ in its dual form by introducing a set of variational parameters. This is similar to a procedure outlined in [4] in the context of independent component analysis. (We note that the analysis in this paper is different from [1], which derives an alternative SBL algorithm based on variational methods.)

2.1 Dual Form Representation of p(w; H)
At the heart of this methodology is the ability to represent a convex function in its dual form. For example, given a convex function $f(y): \mathbb{R} \to \mathbb{R}$, the dual form is given by
$$f(y) = \sup_{\lambda} \left[ \lambda y - f^*(\lambda) \right], \qquad (11)$$
where $f^*(\lambda)$ denotes the conjugate function. Geometrically, this can be interpreted as representing $f(y)$ as the upper envelope, or supremum, of a set of lines parameterized by λ. The selection of $f^*(\lambda)$ as the intercept term ensures that each line is tangent to $f(y)$. If we drop the maximization in (11), we obtain the bound
$$f(y) \geq \lambda y - f^*(\lambda). \qquad (12)$$
Thus, for any given λ, we have a lower bound on $f(y)$; we may then optimize over λ to find the optimal or tightest bound in a region of interest. To apply this theory to the problem at hand, we specify the form of our sparse prior $p(w;H) = \prod_{i=1}^M p(w_i;H)$. Using (7) and (8), we obtain the prior
$$p(w_i;H) = \int p(w_i|\gamma_i)\, p(\gamma_i)\, d\gamma_i = C \left( b + \frac{w_i^2}{2} \right)^{-(a+1/2)}, \qquad (13)$$
which for $a, b > 0$ is proportional to a Student-t density. The constant C is not chosen to enforce proper normalization; rather, it is chosen to facilitate the variational analysis below. Also, this density function can be seen to encourage sparsity since it has heavy tails and a sharp peak at zero.
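Before deriving the dual representation, it is worth noting that the end result, which is worked out in the Appendix, can be checked numerically: each member of a γ-indexed family of scaled Gaussians lower-bounds the Student-t prior (13), and maximizing over γ recovers the prior exactly. The sketch below is illustrative only; the values of a and b and the grid resolutions are arbitrary choices, not from the paper:

```python
import numpy as np

a, b = 0.5, 1.0  # hyperprior constants (arbitrary test values, a, b > 0)
# The constant C chosen as in the paper's Appendix (eq. 26)
C = (2 * np.pi) ** -0.5 * np.exp(-(a + 0.5)) * (a + 0.5) ** (a + 0.5)

def prior(w):
    """Student-t-type prior p(w; H) of eq. (13)."""
    return C * (b + w ** 2 / 2) ** -(a + 0.5)

def gaussian_bound(w, gamma):
    """Scaled Gaussian lower bound on p(w; H) for a fixed gamma (eq. 16)."""
    return ((2 * np.pi * gamma) ** -0.5 * np.exp(-w ** 2 / (2 * gamma))
            * np.exp(-b / gamma) * gamma ** -a)

w = np.linspace(-4.0, 4.0, 81)
gammas = np.logspace(-3, 3, 2000)            # dense grid of variational params
bounds = gaussian_bound(w[:, None], gammas[None, :])

# 1. Every fixed-gamma curve lies below the true prior (variational bound).
assert np.all(bounds <= prior(w)[:, None] * (1 + 1e-9))

# 2. Maximizing over gamma recovers the prior (eq. 15); the maximum is
#    attained at gamma* = (b + w^2/2) / (a + 1/2).
assert np.allclose(bounds.max(axis=1), prior(w), rtol=1e-4)
```

The closed-form maximizer γ* = (b + w²/2)/(a + 1/2) follows from setting the γ-derivative of the log of the bound to zero, which is the same stationarity condition used in the Appendix.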
Clearly $p(w_i;H)$ is not convex in $w_i$; however, if we let $y_i \triangleq w_i^2$ as suggested in [5] and define
$$f(y_i) \triangleq \log p(w_i;H) = \log\left[ C \left( b + \frac{y_i}{2} \right)^{-(a+1/2)} \right], \qquad (14)$$
we see that we now have a convex function in $y_i$ amenable to dual representation. By computing the conjugate function $f^*(\lambda_i)$, constructing the dual, and then transforming back to $p(w_i;H)$, we obtain the representation (see Appendix for details)
$$p(w_i;H) = \max_{\gamma_i \geq 0} \left[ (2\pi\gamma_i)^{-1/2} \exp\left( -\frac{w_i^2}{2\gamma_i} \right) \exp\left( -\frac{b}{\gamma_i} \right) \gamma_i^{-a} \right]. \qquad (15)$$
As $a, b \to 0$, it is readily apparent from (15) that what were straight lines in the $y_i$ domain are now Gaussian functions with variance $\gamma_i$ in the $w_i$ domain. Figure 1 illustrates this connection. When we drop the maximization, we obtain a lower bound on $p(w_i;H)$ of the form
$$p(w_i;H) \geq p(w_i;\hat{H}) \triangleq (2\pi\gamma_i)^{-1/2} \exp\left( -\frac{w_i^2}{2\gamma_i} \right) \exp\left( -\frac{b}{\gamma_i} \right) \gamma_i^{-a}, \qquad (16)$$
which serves as our approximate prior to $p(w;H)$. From this relationship, we see that $p(w_i;\hat{H})$ does not integrate to one, except in the special case when $a, b \to 0$. We will now incorporate these results into an algorithm for finding a good $\hat{H}$, or more accurately $\hat{H}(\gamma)$, since each candidate hypothesis is characterized by a different set of variational parameters.

2.2 Variational Approximation to p(w, t; H)
So now that we have a variational approximation to the problematic weight prior, we must return to our original problem of estimating $p(t_*|t;H)$. Since the integration is intractable under model hypothesis H, we will instead compute $p(t_*|t;\hat{H})$ using $p(w,t;\hat{H}) = p(t|w)\,p(w;\hat{H})$, with $p(w;\hat{H})$ defined as in (16). How do we choose this approximate
[Figure 1: Variational approximation example in both $y_i$ space and $w_i$ space for $a, b \to 0$. Left: dual forms in $y_i$ space.]
[Figure 1, continued: the solid line represents the plot of $f(y_i)$, while the dotted lines represent variational lower bounds in the dual representation for three different values of $\lambda_i$. Right: dual forms in $w_i$ space; the solid line represents the plot of $p(w_i;H)$, while the dotted lines represent Gaussian distributions with three different variances.]
model? In other words, given that different $\hat{H}$ are distinguished by a different set of variational parameters γ, how do we choose the most appropriate γ? Consistent with modern Bayesian analysis, we concern ourselves not with matching modes of distributions, but with aligning regions of significant probability mass. In choosing $p(w,t;\hat{H})$, we would therefore like to match, where possible, significant regions of probability mass in the true model $p(w,t;H)$. For a given t, an obvious way to do this is to select $\hat{H}$ by minimizing the sum of the misaligned mass, i.e.,
$$\hat{H} = \arg\min_{\hat{H}} \int \left| p(w,t;H) - p(w,t;\hat{H}) \right| dw = \arg\max_{\hat{H}} \int p(t|w)\, p(w;\hat{H})\, dw, \qquad (17)$$
where the variational assumptions have allowed us to remove the absolute value (since the argument must always be positive). Also, we note that (17) is tantamount to selecting the variational approximation with maximal Bayesian evidence [6]. In other words, we are selecting the $\hat{H}$, out of a class of variational approximations to H, that most probably explains the training data t, marginalized over the weights. From an implementational standpoint, (17) can be reexpressed using (16) as
$$\gamma = \arg\max_{\gamma} \log \int p(t|w) \prod_{i=1}^M p\left( w_i; \hat{H}(\gamma_i) \right) dw = \arg\max_{\gamma}\ -\frac{1}{2} \left[ \log|\Sigma_t| + t^T \Sigma_t^{-1} t \right] + \sum_{i=1}^M \left( -\frac{b}{\gamma_i} - a \log \gamma_i \right), \qquad (18)$$
where $\Sigma_t \triangleq \sigma^2 I + \Phi\, \mathrm{diag}(\gamma)\, \Phi^T$. This is the same cost function as in [7], only without the terms resulting from a prior on $\sigma^2$, which we will address later. Thus, the end result of this analysis is an evidence maximization procedure equivalent to the one in [7].
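The evidence term in (18) is straightforward to compute directly, which makes the pruning effect visible on a toy problem. In the sketch below (with a, b → 0, so only the Gaussian evidence term remains), a two-column design is compared under two hyperparameter settings: the setting whose single active $\gamma_i$ matches the column that actually generated the data attains higher evidence. All numeric values are illustrative choices, not from the paper:

```python
import numpy as np

def log_evidence(t, Phi, gamma, sigma2):
    """Gaussian evidence term of eq. (18): -1/2 (log|Sigma_t| + t' Sigma_t^{-1} t),
    with Sigma_t = sigma^2 I + Phi diag(gamma) Phi'."""
    N = len(t)
    Sigma = sigma2 * np.eye(N) + Phi @ np.diag(gamma) @ Phi.T
    sign, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (logdet + t @ np.linalg.solve(Sigma, t))

# Toy problem: t is generated by the first basis column only.
Phi = np.eye(2)                      # trivial 2x2 design for illustration
t = np.array([2.0, 0.0])
sigma2 = 0.01

good = log_evidence(t, Phi, [4.0, 1e-6], sigma2)   # prior variance on the true column
bad  = log_evidence(t, Phi, [1e-6, 4.0], sigma2)   # prior variance on the wrong column

assert good > bad   # evidence favors hyperparameters matching the true support
```

Maximizing (18) over γ by any standard means (the fixed-point updates of [7], or generic gradient ascent) drives the mismatched $\gamma_i$ toward zero, which is the pruning behavior analyzed in Section 3.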
The difference is that, where before we were optimizing over a somewhat arbitrary model parameterization, now we see that it is actually optimization over the space of variational approximations to a model with a sparse, regularizing prior. Also, we know from (17) that this procedure is effectively matching, as much as possible, the mass of the full model $p(w,t;H)$.

3 Analysis
While the variational perspective is interesting, two pertinent questions still remain:
1. Why should it be that approximating a sparse prior $p(w;H)$ leads to sparse representations in practice?
2. How do we extend these results to handle an unknown, random variance $\sigma^2$?
We first treat Question (1). In Figure 2 below, we have illustrated a 2D example of evidence maximization within the context of variational approximations to the sparse prior $p(w;H)$. For now, we will assume $a, b \to 0$, which from (13) implies that $p(w_i;H) \propto 1/|w_i|$ for each i. On the left, the shaded area represents the region of w space where both $p(w;H)$ and $p(t|w)$ (and therefore $p(w,t;H)$) have significant probability mass. Maximization of (17) involves finding an approximate distribution $p(w,t;\hat{H})$ with a substantial percentage of its mass in this region.
[Figure 2: Comparison between the full model and approximate models with $a, b \to 0$. Left: contours of equiprobability density for $p(w;H)$ and constant likelihood $p(t|w)$; the prominent density and likelihood lie within each region, respectively. The shaded region represents the area where both have significant mass. Right: here we have added the contours of $p(w;\hat{H})$ for two different values of γ, i.e., two approximate hypotheses denoted $\hat{H}_a$ and $\hat{H}_b$. The shaded region represents the area where both the likelihood and the approximate prior $\hat{H}_a$ have significant mass.]
In the plot on the right, we have graphed two approximate priors that satisfy the variational bound, i.e., they lie within the contours of $p(w;H)$. We see that the narrow prior that aligns with the horizontal spine of $p(w;H)$ places the largest percentage of its mass (and therefore the mass of $p(w,t;\hat{H}_a)$) in the shaded region. This corresponds to a prior of
$$p(w;\hat{H}_a) = p(w_1, w_2;\ \gamma_1 \gg 0,\ \gamma_2 \approx 0). \qquad (19)$$
This creates a long, narrow prior, since there is minimal variance along the $w_2$ axis. In fact, it can be shown that owing to the infinite density of the variational constraint along each axis (which is allowed as a and b go to zero), the maximum evidence is obtained when $\gamma_2$ is strictly equal to zero, giving the approximate prior infinite density along this axis as well. This implies that $w_2$ also equals zero and can be pruned from the model. In contrast, a model with significant prior variance along both axes, $\hat{H}_b$, is hampered because it cannot extend directly out (due to the dotted variational boundary) along the spine to penetrate the likelihood. Similar effective weight pruning occurs in higher-dimensional problems, as evidenced by simulation studies and the analysis in [3]. In higher dimensions, the algorithm only retains those weights associated with the prior spines that span a subspace penetrating the most prominent portion of the likelihood mass (i.e., a higher-dimensional analog of the shaded region already mentioned). The prior $p(w;\hat{H})$ navigates the variational constraints, placing as much of its mass as possible in this region, driving many of the $\gamma_i$'s to zero. In contrast, when $a, b > 0$, the situation is somewhat different. It is not difficult to show that, assuming a noise variance $\sigma^2 > 0$, the variational approximation to $p(w,t;H)$ with maximal evidence cannot have any $\gamma_i = w_i = 0$.
Intuitively, this occurs because the now finite spines of the prior $p(w;H)$, which bound the variational approximation, do not allow us to place infinite prior density in any region of weight space (as occurred previously when any $\gamma_i \to 0$). Consequently, if any $\gamma_i$ goes to zero with $a, b > 0$, the associated approximate prior mass, and therefore the approximate evidence, must also fall to zero by (16). As such, models with all non-zero weights will now be favored when we form the variational approximation. We therefore cannot assume that an approximation to a sparse prior will necessarily give us sparse results in practice.

We now address Question (2). Thus far, we have considered a known, fixed noise variance $\sigma^2$; however, what if $\sigma^2$ is unknown? SBL assumes it is unknown and random, with prior distribution $p(1/\sigma^2) \propto (\sigma^2)^{1-c} \exp(-d/\sigma^2)$, with $c, d > 0$. After integrating out the unknown $\sigma^2$, we arrive at the implicit likelihood equation
$$p(t|w) = \int p(t|w,\sigma^2)\, p(\sigma^2)\, d\sigma^2 \propto \left( d + \frac{1}{2} \| t - \Phi w \|^2 \right)^{-(\bar{c}+1/2)}, \qquad (20)$$
where $\bar{c} \triangleq c + (N-1)/2$. We may then form a variational approximation to the likelihood in a similar manner as before (with $w_i$ replaced by $\|t - \Phi w\|$), giving us
$$p(t|w) \geq (2\pi)^{-N/2} (\sigma^2)^{-1/2} \exp\left( -\frac{1}{2\sigma^2} \| t - \Phi w \|^2 \right) \exp\left( -\frac{d}{\sigma^2} \right) (\sigma^2)^{-\bar{c}} = (2\pi\sigma^2)^{-N/2} \exp\left( -\frac{1}{2\sigma^2} \| t - \Phi w \|^2 \right) \exp\left( -\frac{d}{\sigma^2} \right) (\sigma^2)^{-c}, \qquad (21)$$
where the second step follows by substituting back in for $\bar{c}$. By replacing $p(t|w)$ with the lower bound from (21), we then maximize over the variational parameters γ and $\sigma^2$ via
$$\gamma, \sigma^2 = \arg\max_{\gamma,\sigma^2}\ -\frac{1}{2} \left[ \log|\Sigma_t| + t^T \Sigma_t^{-1} t \right] + \sum_{i=1}^M \left( -\frac{b}{\gamma_i} - a \log \gamma_i \right) - \frac{d}{\sigma^2} - c \log \sigma^2, \qquad (22)$$
the exact SBL optimization procedure. Thus, we see that the entire SBL framework, including noise variance estimation, can be seen in variational terms.

4 Conclusions
The end result of this analysis is an evidence maximization procedure that is equivalent to the one originally formulated in [7].
The difference is that, where before we were optimizing over a somewhat arbitrary model parameterization, we now see that SBL is actually searching a space of variational approximations to find an alternative distribution that captures the significant mass of the full model. Moreover, from the vantage point afforded by this new perspective, we can better understand the sparsity properties of SBL and the relationship between sparse priors and approximations to sparse priors.

Appendix: Derivation of the Dual Form of p(wi; H)
To accommodate the variational analysis of Sec. 2.1, we require the dual representation of $p(w_i;H)$. As an intermediate step, we must find the dual representation of $f(y_i)$, where $y_i \triangleq w_i^2$ and
$$f(y_i) \triangleq \log p(w_i;H) = \log\left[ C \left( b + \frac{y_i}{2} \right)^{-(a+1/2)} \right]. \qquad (23)$$
To accomplish this, we find the conjugate function $f^*(\lambda_i)$ using the duality relation
$$f^*(\lambda_i) = \max_{y_i} \left[ \lambda_i y_i - f(y_i) \right] = \max_{y_i} \left[ \lambda_i y_i - \log C + \left( a + \frac{1}{2} \right) \log\left( b + \frac{y_i}{2} \right) \right]. \qquad (24)$$
To find the maximizing $y_i$, we take the gradient of the bracketed expression and set it to zero, giving us
$$y_i^{\max} = -\frac{a}{\lambda_i} - \frac{1}{2\lambda_i} - 2b. \qquad (25)$$
Substituting this value into the expression for $f^*(\lambda_i)$ and selecting
$$C = (2\pi)^{-1/2} \exp\left[ -\left( a + \frac{1}{2} \right) \right] \left( a + \frac{1}{2} \right)^{(a+1/2)}, \qquad (26)$$
we arrive at
$$f^*(\lambda_i) = \left( a + \frac{1}{2} \right) \log\left( -\frac{1}{2\lambda_i} \right) + \frac{1}{2} \log 2\pi - 2b\lambda_i. \qquad (27)$$
We are now ready to represent $f(y_i)$ in its dual form, observing first that we need only consider maximization over $\lambda_i \leq 0$, since $f(y_i)$ is a monotonically decreasing function (i.e., all tangent lines have negative slope). Proceeding forward, we have
$$f(y_i) = \max_{\lambda_i \leq 0} \left[ \lambda_i y_i - f^*(\lambda_i) \right] = \max_{\gamma_i \geq 0} \left[ -\frac{y_i}{2\gamma_i} - \left( a + \frac{1}{2} \right) \log \gamma_i - \frac{1}{2} \log 2\pi - \frac{b}{\gamma_i} \right], \qquad (28)$$
where we have used the monotonically increasing transformation $\lambda_i = -1/(2\gamma_i)$, $\gamma_i \geq 0$. The attendant dual representation of $p(w_i;H)$ can then be obtained by exponentiating both sides of (28) and substituting $y_i = w_i^2$:
$$p(w_i;H) = \max_{\gamma_i \geq 0} \left[ \frac{1}{\sqrt{2\pi\gamma_i}} \exp\left( -\frac{w_i^2}{2\gamma_i} \right) \exp\left( -\frac{b}{\gamma_i} \right) \gamma_i^{-a} \right].$$
(29)

Acknowledgments
This research was supported by DiMI grant #22-8376 sponsored by Nissan.

References
[1] C. Bishop and M. Tipping, "Variational relevance vector machines," Proc. 16th Conf. Uncertainty in Artificial Intelligence, pp. 46-53, 2000.
[2] R. Duda, P. Hart, and D. Stork, Pattern Classification, Wiley, New York, 2nd ed., 2001.
[3] A. C. Faul and M. E. Tipping, "Analysis of sparse Bayesian learning," Advances in Neural Information Processing Systems 14, pp. 383-389, 2002.
[4] M. Girolami, "A variational method for learning sparse and overcomplete representations," Neural Computation, vol. 13, no. 11, pp. 2517-2532, 2001.
[5] M. I. Jordan, Z. Ghahramani, T. Jaakkola, and L. K. Saul, "An introduction to variational methods for graphical models," Machine Learning, vol. 37, no. 2, pp. 183-233, 1999.
[6] D. J. C. MacKay, "Bayesian interpolation," Neural Computation, vol. 4, no. 3, pp. 415-447, 1992.
[7] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001.
2003
One microphone blind dereverberation based on quasi-periodicity of speech signals Tomohiro Nakatani, Masato Miyoshi, and Keisuke Kinoshita Speech Open Lab., NTT Communication Science Labs., NTT Corporation 2-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan {nak,miyo,kinoshita}@cslab.kecl.ntt.co.jp Abstract Speech dereverberation is desirable with a view to achieving, for example, robust speech recognition in the real world. However, it is still a challenging problem, especially when using a single microphone. Although blind equalization techniques have been exploited, they cannot deal with speech signals appropriately because their assumptions are not satisfied by speech signals. We propose a new dereverberation principle based on an inherent property of speech signals, namely quasi-periodicity. The present methods learn the dereverberation filter from a lot of speech data with no prior knowledge of the data, and can achieve high quality speech dereverberation especially when the reverberation time is long. 1 Introduction Although numerous studies have been undertaken on robust automatic speech recognition (ASR) in the real world, long reverberation is still a serious problem that severely degrades the ASR performance [1]. One simple way to overcome this problem is to dereverberate the speech signals prior to ASR, but this is also a challenging problem, especially when using a single microphone. For example, certain blind equalization methods, including independent component analysis (ICA), can estimate the inverse filter of an unknown impulse response convolved with target signals when the signals are statistically independent and identically distributed sequences [2]. However, these methods cannot appropriately deal with speech signals because speech signals have inherent properties, such as periodicity and formant structure, making their sequences statistically dependent. This approach inevitably destroys such essential properties of speech. 
Another approach that uses the properties of speech has also been proposed [3]. The basic idea involves adaptively detecting time regions in which signal-to-reverberation ratios become small, and attenuating speech signals in those regions. However, the precise separation of the signal and reverberation durations is difficult, therefore, this approach has achieved only moderate results so far. In this paper, we propose a new principle for estimating an inverse filter by using an essential property of speech signals, namely quasi-periodicity, as a clue. In general, voiced segments in an utterance have approximate periodicity in each local time region while the period gradually changes. Therefore, when a long reverberation is added to a speech signal, signals in different time regions with different periods are mixed, thus degrading the periodicity of the signals in local time regions. By contrast, we show that we can estimate an inverse filter for dereverberating a signal by enhancing the periodicity of the signal in each local time region. The estimated filter can dereverberate both the periodic and non-periodic parts of speech signals with no prior knowledge of the target signals, even though only the periodic parts of the signals are used for the estimation. 2 Quasi-periodicity based dereverberation We propose two dereverberation methods, referred to as Harmonicity based dEReverBeration (HERB) methods, based on the features of quasi-periodic signals: one based on an Average Transfer Function (ATF) that transforms reverberant signals into quasi-periodic components (ATF-HERB), and the other based on the Minimum Mean Squared Error (MMSE) criterion that evaluates the quasi-periodicity of target signals (MMSE-HERB). First, we briefly explain the features of quasi-periodic signals, and then describe the two methods. 
2.1 Features of quasi-periodic signals
When a source signal $s(n)$ is recorded in a reverberant room, the obtained signal $x(n)$ is represented as $x(n) = h(n) * s(n)$, where $h(n)$ is the impulse response of the room and "$*$" is a convolution operation. The goal of the dereverberation is to estimate a dereverberation filter, $w(n)$, for $-N < n < N$, that dereverberates $x(n)$, and to obtain the dereverberated signal $y(n)$ by:
$$y(n) = w(n) * x(n) = (w(n) * h(n)) * s(n) = q(n) * s(n), \qquad (1)$$
where $q(n) = w(n) * h(n)$ is referred to as the dereverberated impulse response. Here, we assume $s(n)$ is a quasi-periodic signal, which has the following features:
1. In each local time region around $n_0$ ($n_0 - \delta < n < n_0 + \delta$ for all $n_0$), $s(n)$ is approximately a periodic signal whose period is $T(n_0)$.
2. Outside the region ($|n' - n_0| > \delta$), $s(n')$ is also a periodic signal within its neighboring time region, but it often has a period that is different from $T(n_0)$.
These features make $x(n)$ a non-periodic signal even within local time regions when $h(m)$ contains non-zero values for $|m| > \delta$. This is because two (or more) periodic signals, $s(n)$ and $s(n-m)$, that have different periods are added to $x(n)$ with weights $h(0)$ and $h(m)$. Conversely, the goal of our dereverberation is to estimate a $w(n)$ that makes $y(n)$ a periodic signal in each local time region. Once such a filter is obtained, $q(m)$ must have zero values for $|m| > \delta$, and thus reverberant components longer than $\delta$ are eliminated from $y(n)$. An important additional feature of a quasi-periodic signal is that the quasi-periodic components in a source signal can be enhanced by an adaptive harmonic filter. An adaptive harmonic filter is a time-varying linear filter that enhances frequency components whose frequencies correspond to multiples of the fundamental frequency (F0) of the target signal, while preserving their phases and amplitudes. The filter values are adaptively modified according to F0.
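The filtering identity in (1) is simply associativity of convolution, which is easy to confirm numerically. Below is a minimal sketch with an arbitrary FIR room response and candidate filter; the signal, lengths, and decay constants are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Periodic source: 100 Hz sinusoid at an 8 kHz sampling rate
s = np.sin(2 * np.pi * 100 * np.arange(800) / 8000)
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 8.0)  # toy room impulse response
w = rng.standard_normal(32)                                  # some candidate filter

# y = w * (h * s) and y = (w * h) * s = q * s give the same signal (eq. 1)
x = np.convolve(h, s)
y1 = np.convolve(w, x)
q = np.convolve(w, h)        # dereverberated impulse response q(n) = w(n) * h(n)
y2 = np.convolve(q, s)

assert np.allclose(y1, y2)
```

So whatever filter $w(n)$ the learning procedure produces, its effect on the source is fully summarized by $q(n)$; the methods below aim to make $q(n)$ vanish outside $|n| \le \delta$.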
For example, a filter $F(f_0(n))[\cdot]$ can be implemented as follows:
$$\hat{x}(n) = F(f_0(n))[x(n)] \qquad (2)$$
$$= \sum_{n_0} g_2(n - n_0)\, \mathrm{Re}\left\{ x(n) * \left( g_1(n) \sum_k \exp(j 2\pi k f_0(n_0) n / f_s) \right) \right\}, \qquad (3)$$
where $n_0$ is the center time of each frame, $f_0(n_0)$ is the fundamental frequency (F0) of the signal at that frame, k is a harmonics index, $g_1(n)$ and $g_2(n)$ are analysis window functions, and $f_s$ is the sampling frequency. (In this paper, time-domain and frequency-domain signals are represented by non-capitalized and capitalized symbols, respectively. Arguments "(ω)" that represent the center frequencies of the discrete Fourier transformation bins are often omitted from frequency-domain signals. Also, the quasi-periodicity assumption is later extended so that $s(n)$ is composed of quasi-periodic components and non-periodic components in the case of speech signals.) Even when $x(n)$ contains a long reverberation, the reverberant components that have different frequencies from $s(n)$ are reduced by the harmonic filter, and thus the quasi-periodic components can be enhanced.

[Figure 1: Diagram of ATF-HERB (signal flow among $S(\omega)$, $H(\omega)$, $X(\omega)$, $F(f_0)$, $\hat{X}(\omega)$, $W(\omega) = E(\hat{X}/X)$, and $Y(\omega)$).]

2.2 ATF-HERB: average transfer function based dereverberation
Figure 1 is a diagram of ATF-HERB, which uses the average transfer function from reverberant signals to quasi-periodic signals. A speech signal, $S(\omega)$, can be modeled as the sum of the quasi-periodic (voiced) components, $S_h(\omega)$, and the non-periodic (unvoiced) components, $S_n(\omega)$, as in eq. (4). The reverberant observed signal, $X(\omega)$, is then represented as the product of S and the transfer function, $H(\omega)$, of the room, as in eq. (5). The transfer function H can also be divided into two functions, $D(\omega)$ and $R(\omega)$: the former transforms S into the direct signal, DS, and the latter into the reverberation part, RS, as shown in eq. (6). X is also represented as the sum of the direct signal of the quasi-periodic components, $DS_h$, and the other components, as in eq. (7).
$$S(\omega) = S_h(\omega) + S_n(\omega), \qquad (4)$$
$$X(\omega) = H(\omega) S(\omega) \qquad (5)$$
$$= (D(\omega) + R(\omega)) S(\omega) \qquad (6)$$
$$= DS_h + (RS_h + HS_n). \qquad (7)$$
Of these components, $DS_h$ can approximately be extracted from X by harmonic filtering. Although the frequencies of quasi-periodic components change dynamically according to the changes in their fundamental frequency (F0), their reverberation remains unchanged at the same frequency. Therefore, the direct quasi-periodic components, $DS_h$, can be enhanced by extracting frequency components located at multiples of the F0. This approximated direct signal, $\hat{X}(\omega)$, can be modeled as follows:
$$\hat{X}(\omega) = D(\omega) S_h(\omega) + (\hat{R}(\omega) S_h(\omega) + \hat{N}(\omega)), \qquad (8)$$
where $\hat{R}(\omega) S_h(\omega)$ and $\hat{N}(\omega)$ are the part of the reverberation of $S_h$ and the part of the direct signal and reverberation of $S_n$, respectively, that unexpectedly remain in $\hat{X}$ after the harmonic filtering. (Strictly speaking, $\hat{R}$ cannot be represented as a linear transformation because the reverberation included in $\hat{X}$ depends on the time pattern of $\hat{X}$; we introduce this approximation for simplicity.) We assume that all the estimation errors in $\hat{X}$ are caused by $\hat{R}S_h$ and $\hat{N}$ in eq. (8). The goal of ATF-HERB is to estimate $O(\hat{R}(\omega)) = (D(\omega) + \hat{R}(\omega))/H(\omega)$, referred to as a "dereverberation operator." This is because the signal $DS + \hat{R}S$, which can be obtained by multiplying $O(\hat{R})$ by X, becomes in a sense a dereverberated signal:
$$O(\hat{R}(\omega))\, X(\omega) = D(\omega) S(\omega) + \hat{R}(\omega) S(\omega), \qquad (9)$$
where the right side of eq. (9) is composed of the direct signal, DS, and certain parts of the reverberation, $\hat{R}S$. The rest of the reverberation included in $X (= DS + RS)$, namely $(R - \hat{R})S$, is eliminated by the dereverberation operator.

[Figure 2: Diagram of MMSE-HERB (signal flow among $S(\omega)$, $H(\omega)$, $X(\omega)$, $F(f_0)$, $\hat{X}(\omega)$, $W(\omega)$ chosen by MMSE, and $Y(\omega)$).]

To estimate the dereverberation operator, we use the output of the harmonic filter, $\hat{X}$. Suppose a number of X values are obtained and $\hat{X}$ values are calculated from the individual X values. Then the dereverberation operator, $O(\hat{R})$, can be approximated by the average of $\hat{X}/X$, i.e., $W(\omega) = E(\hat{X}/X)$.
$W(\omega)$ is shown to be a good estimate of $O(\hat{R})$ by substituting eqs. (4), (5), and (8) into $E(\hat{X}/X)$, as in eq. (11):
$$W(\omega) = E(\hat{X}/X) \qquad (10)$$
$$= O(\hat{R}(\omega))\, E\!\left( \frac{1}{1 + S_n/S_h} \right) + E\!\left( \frac{1}{1 + (X - \hat{N})/\hat{N}} \right) \qquad (11)$$
$$\simeq O(\hat{R}(\omega))\, P(|S_h(\omega)| > |S_n(\omega)|), \qquad (12)$$
where $P(\cdot)$ is a probability function. The arguments of the two averaging functions in eq. (11) have the form of a complex function, $f(z) = 1/(1+z)$. Using the residue theorem, $E(f(z))$ is easily proven to equal $P(|z| < 1)$, if it is assumed that the phase of z is uniformly distributed, that the phase of z and |z| are independent, and that $|z| \neq 1$. Based on this property, the second term of eq. (11) approximately equals zero, because $\hat{N}$ is a non-periodic component that the harmonic filter unexpectedly extracts, and thus the magnitude of $\hat{N}$ is almost always smaller than that of $(X - \hat{N})$ if a sufficiently long analysis window is used. Therefore, $W(\omega)$ can be approximated by eq. (12); that is, $W(\omega)$ has the value of the dereverberation operator multiplied by the probability that the harmonic components of speech have a larger magnitude than the non-periodic components. Once the dereverberation operator is calculated from the periodic parts of speech signals over almost all the frequency range, it can dereverberate both the periodic and non-periodic parts of the signals, because the inverse transfer function is independent of the source signal characteristics. On the other hand, the gain of $W(\omega)$ tends to decrease with frequency when using our method. This is because the magnitude of the non-periodic components relative to the periodic components tends to increase with frequency for a speech signal, and thus the $P(|S_h| > |S_n|)$ value becomes smaller as ω increases. To compensate for this decreasing gain, it may be useful to use the average attributes of speech on the probability $P(|S_h| > |S_n|)$. In our experiments in section 4, however, $W(\omega)$ itself was used as the dereverberation operator without any compensation.
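The core estimator of ATF-HERB, $W(\omega) = E(\hat{X}/X)$, can be sketched in a few lines. In this toy version the F0-adaptive harmonic filter is replaced by an oracle that returns the known direct-path spectrum plus a small non-periodic residue (a real harmonic filter is outside the scope of the sketch), and the room transfer function H and all magnitudes are invented test values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_utts = 64, 400

# Toy room transfer function H(w): random complex values with |H| in [0.8, 1.2]
H = (0.8 + 0.4 * rng.random(n_bins)) * np.exp(2j * np.pi * rng.random(n_bins))
D = np.ones(n_bins)                      # direct path taken as unity here

# For each "utterance": reverberant spectrum X = H*S, and an oracle stand-in
# for the harmonic-filter output Xhat = D*S + small non-periodic residue.
W_samples = []
for _ in range(n_utts):
    # keep |S| >= 1 so the per-utterance ratio Xhat/X stays well conditioned
    S = (1.0 + rng.random(n_bins)) * np.exp(2j * np.pi * rng.random(n_bins))
    X = H * S
    Xhat = D * S + 0.05 * (rng.standard_normal(n_bins)
                           + 1j * rng.standard_normal(n_bins))
    W_samples.append(Xhat / X)           # per-utterance transfer-function estimate

W = np.mean(W_samples, axis=0)           # ATF-HERB operator: W(w) = E(Xhat/X)

# W should approximate the dereverberation operator D/H, so W*X ~= D*S.
assert np.max(np.abs(W - D / H)) < 0.1
```

The averaging across utterances is what suppresses the residue term: each ratio contributes the same deterministic $D/H$ plus a zero-mean error, mirroring how the second term of (11) vanishes.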
2.3 MMSE-HERB: minimum mean squared error criterion based dereverberation

As discussed in section 2.1, quasi-periodic signals can be dereverberated simply by enhancing their quasi-periodicity. To implement this principle directly, we introduce a cost function, referred to as the minimum mean squared error (MMSE) criterion, that evaluates the quasi-periodicity of the signals:

C(w) = Σ_n (y(n) − F(f0(n))[y(n)])² = Σ_n (w(n) ∗ x(n) − F(f0(n))[w(n) ∗ x(n)])²,   (13)

where y(n) = w(n) ∗ x(n) is a target signal that should be dereverberated by controlling w(n), and F(f0(n))[y(n)] is the signal obtained by applying a harmonic filter to y(n). When y(n) is a quasi-periodic signal, y(n) approximately equals F(f0(n))[y(n)] by the defining property of quasi-periodic signals, so the cost function is expected to attain its minimum. Conversely, the filter w(n) that minimizes C(w) is expected to enhance the quasi-periodicity of x(n). Such filter parameters can, for example, be obtained with optimization algorithms such as hill climbing, using the derivatives of C(w):

∂C(w)/∂w(l) = 2 Σ_n (y(n) − F(f0(n))[y(n)]) (x(n−l) − F(f0(n))[x(n−l)]),   (14)

where F(f0(n))[x(n−l)] is the signal obtained by applying the adaptive harmonic filter to x(n−l)⁴. There are, however, several problems involved in directly using eq. (13) as the cost function.

1. As discussed in section 2.1, the values of the dereverberated impulse response, q(n), are expected to become zero with this method where |n| > δ; however, the values are not specifically determined where |n| < δ. This may cause unexpected spectral modification of the dereverberated signal. Additional constraints are required to specify these values.

2. The cost function has a self-evident solution, namely w(l) = 0 for all l. This solution means that the signal y(n) is always zero instead of being dereverberated, and should therefore be excluded.
Some constraints, such as Σ_l w(l)² = 1, may be useful for solving this problem.

3. The computational complexity of minimizing the cost function by repeated estimation increases as the dereverberation filter becomes longer, and the longer the reverberation, the longer the dereverberation filter must be.

To overcome these problems, we simplify the cost function in this paper. The new cost function is defined as follows:

C(W(ω)) = E((Y(ω) − X̂(ω))²) = E((W(ω)X(ω) − X̂(ω))²),   (15)

where Y(ω), X(ω), and X̂(ω) are the discrete Fourier transforms of y(n), x(n), and F(f0(n))[x(n)], respectively. The new cost function evaluates the quasi-periodicity not in the time domain but in the frequency domain, and uses a fixed quasi-periodic signal, X̂(ω), as the desired signal instead of the non-fixed quasi-periodic signal F(f0(n))[y(n)]. This modification allows us to solve the above problems. The use of the fixed desired signal specifically provides the dereverberated impulse response, q(n), with the desired values, even in the time region |n| < δ. In addition, the self-evident solution w(l) = 0 can no longer be optimal in terms of the cost function. Furthermore, the computational complexity is greatly reduced because the solution can be given analytically as follows:

W(ω) = E(X̂(ω)X*(ω)) / E(X(ω)X*(ω)).   (16)

A diagram of this simplified MMSE-HERB is shown in Fig. 2.

⁴F(f0(n))[x(n−l)] is not the same signal as x̂(n−l). When calculating F(f0(n))[x(n−l)], x(n) is time-shifted by l points while f0(n) of the adaptive harmonic filter is not time-shifted.

Figure 3: Processing flow of dereverberation.
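The closed-form solution of eq. (16) is simply a ratio of averaged cross- and auto-power spectra, and can be sketched as follows (variable names are our own; `X` holds reverberant frame spectra, `Xh` their harmonic-filtered versions):

```python
import numpy as np

def estimate_w(X, Xh, eps=1e-12):
    """MMSE estimate of the dereverberation operator, eq. (16).

    X, Xh : arrays of shape (n_frames, n_bins), complex spectra of the
            observed signal and of its harmonic-filtered version.
    Returns W of shape (n_bins,), minimizing E(|W*X - Xh|^2) per bin.
    """
    num = np.mean(Xh * np.conj(X), axis=0)   # E(Xh X*)
    den = np.mean(X * np.conj(X), axis=0)    # E(X X*)
    return num / (den + eps)

# sanity check: if Xh = c * X exactly, the estimate recovers W = c per bin
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8)) + 1j * rng.standard_normal((100, 8))
c = 0.5 - 0.25j
W = estimate_w(X, c * X)
```

The per-bin division is the frequency-domain Wiener solution of the quadratic cost, which is why no iterative optimization is needed.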
When we assume the model of X̂ in eq. (8), and E(Sh Sn*) = E(Sn Sh*) = E(N̂ Sh*) = 0, the resulting W in eq. (16) again approaches the dereverberation operator, O(R̂), presented in section 2.2:

W(ω) = O(R̂(ω)) E(Sh Sh*) / (E(Sh Sh*) + E(Sn Sn*)) + (1/H) E(N̂ Sn*) / (E(Sh Sh*) + E(Sn Sn*)),   (17)
     ≃ O(R̂(ω)) E(Sh Sh*) / (E(Sh Sh*) + E(Sn Sn*)).   (18)

Because N̂ represents non-periodic components that are included unexpectedly and at random in the output of the harmonic filter, the absolute value of the second term in eq. (17) is expected to be sufficiently small compared with that of the first term, and we therefore disregard it. Then W(ω) in eq. (16) becomes the dereverberation operator multiplied by the ratio of the expected power of the quasi-periodic components of the signal to that of the whole signal. As with the speech signals discussed in section 2.2, E(Sh Sh*)/(E(Sh Sh*) + E(Sn Sn*)) becomes smaller as ω increases, and thus the gain of W(ω) tends to decrease. The same frequency compensation scenario as in section 2.2 may therefore again be useful for the MMSE based dereverberation scheme.

3 Processing flow

Based on the above two methods, we constructed a dereverberation algorithm composed of two steps, as shown in Fig. 3. Both methods are implemented in the same processing flow; only the way the dereverberation operator is calculated differs. The flow is summarized as follows:

1. In the first step, F0 is estimated from the reverberant signal, X. The harmonic components included in X are then estimated as X̂₁ by adaptive harmonic filtering, and the dereverberation operator O(R̂₁) is calculated by ATF-HERB or MMSE-HERB from a number of reverberant speech signals. Finally, the dereverberated signal is obtained by multiplying O(R̂₁) by X.

2. The second step employs almost the same procedure as the first, except that the speech data dereverberated by the first step are used as the input signal.
The use of this dereverberated input signal means that the reverberant components, R̂₂X₂, inevitably included in eq. (8), can be attenuated, so a more effective dereverberation can be achieved in step 2. In our preliminary experiments, however, repeating step 2 did not always improve the quality of the dereverberated signals. This is because the estimation errors of the dereverberation operators accumulate in the dereverberated signals when the signals are multiplied by more than one dereverberation operator. In our experiments, therefore, we applied step 2 only once. A more detailed explanation of these processing steps is presented in [4].

Figure 4: Reverberation curves of the original impulse responses (thin line) and dereverberated impulse responses (male: thick dashed line, female: thick solid line) for different reverberation times (rtime).

Accurate F0 estimation is very important for achieving effective dereverberation with our methods in this processing flow. However, this is a difficult task with existing F0 estimators, especially for speech with long reverberation. To cope with this problem, we designed a simple filter that attenuates a signal that continues at the same frequency, and used it as a preprocessor for the F0 estimation [5]. In addition, the dereverberation operator, O(R̂₁), is itself a very effective preprocessor for an F0 estimator, because the reverberation of the speech can be directly reduced by the operator. This mechanism is already included in step 2 of the dereverberation procedure; that is, F0 estimation is applied to O(R̂₁)X. Therefore, a more accurate F0 can be obtained in step 2 than in step 1.
4 Experimental results

We examined the performance of the proposed dereverberation methods. Almost the same results were obtained with the two methods, so we describe only those obtained with ATF-HERB. We used 5240 Japanese word utterances provided by a male and a female speaker (MAU and FKM, 12 kHz sampling), included in the ATR database, as source signals, S(ω). We used four impulse responses measured in a reverberant room, whose reverberation times were about 0.1, 0.2, 0.5, and 1.0 sec, respectively. Reverberant signals, X(ω), were obtained by convolving S(ω) with the impulse responses. Figure 4 depicts the reverberation curves⁵ of the original impulse responses and of the dereverberated impulse responses obtained with ATF-HERB. The figure shows that the proposed methods could effectively reduce the reverberation in the impulse responses for the female speaker when the reverberation time (rtime) was longer than 0.1 sec. For the male speaker, the reverberation effect in the lower time region was also effectively reduced. This means that strong reverberant components were eliminated, and we can expect the intelligibility of the signals to be improved [6].

⁵The reverberation curve shows the reduction in the energy of a room impulse response with time [6].

Figure 5 shows spectrograms of reverberant and dereverberated speech signals when rtime was 1.0 sec. As shown in the figure, the reverberation of the signal was effectively reduced, and the formant structure of the signal was restored. Similar spectrogram features were observed under the other reverberation conditions, and an improvement in sound quality could clearly be recognized by listening to the dereverberated signals [7]. We also evaluated the quality of the dereverberated speech in terms of speaker-dependent word recognition rates
with an ASR system, and could achieve recognition rates of more than 95% under all the reverberation conditions with acoustic models trained using dereverberated speech signals. Detailed information on the ASR experiments is provided in [4].

Figure 5: Spectrograms of reverberant (left) and dereverberated (right) speech of a male speaker uttering "ba-ku-da-i".

5 Conclusion

A new blind dereverberation principle based on the quasi-periodicity of speech signals was proposed. We presented two types of dereverberation method, referred to as harmonicity based dereverberation (HERB) methods: one estimates the average filter function that transforms reverberant signals into quasi-periodic signals (ATF-HERB), and the other minimizes the MMSE criterion that evaluates the quasi-periodicity of signals (MMSE-HERB). We showed that ATF-HERB and a simplified version of MMSE-HERB are both capable of learning the dereverberation operator, which can reduce reverberant components in speech signals. Experimental results showed that a dereverberation operator trained with 5240 Japanese word utterances could achieve very high quality speech dereverberation. Future work will include an investigation of how such high quality speech dereverberation can be achieved with less speech data.

References

[1] Baba, A., Lee, A., Saruwatari, H., and Shikano, K., "Speech recognition by reverberation adapted acoustic model," Proc. of ASJ general meeting, pp. 27–28, Akita, Japan, Sep. 2002.
[2] Amari, S., Douglas, S. C., Cichocki, A., and Yang, H. H., "Multichannel blind deconvolution and equalization using the natural gradient," Proc. IEEE Workshop on Signal Processing Advances in Wireless Communications, Paris, pp. 101–104, April 1997.
[3] Yegnanarayana, B., and Murthy, P. S., "Enhancement of reverberant speech using LP residual signal," IEEE Trans. SAP, vol. 8, no. 3, pp. 267–281, 2000.
[4] Nakatani, T., Miyoshi, M., and Kinoshita, K., “Implementation and effects of single channel dereverberation based on the harmonic structure of speech,” Proc. IWAENC2003, Sep., 2003. [5] Nakatani, T., and Miyoshi, M., “Blind dereverberation of single channel speech signal based on harmonic structure,” Proc. ICASSP-2003, vol. 1, pp. 92–95, Apr., 2003. [6] Yegnanarayana, B., and Ramakrishna, B. S., “Intelligibility of speech under nonexponential decay conditions,” JASA, vol. 58, pp. 853–857, Oct. 1975. [7] http://www.kecl.ntt.co.jp/icl/signal/nakatani/sound-demos/dm/derev-demos.html
Multiple Instance Learning via Disjunctive Programming Boosting

Stuart Andrews, Department of Computer Science, Brown University, Providence, RI, 02912, stu@cs.brown.edu
Thomas Hofmann, Department of Computer Science, Brown University, Providence, RI, 02912, th@cs.brown.edu

Abstract

Learning from ambiguous training data is highly relevant in many applications. We present a new learning algorithm for classification problems where labels are associated with sets of patterns instead of individual patterns. This encompasses multiple instance learning as a special case. Our approach is based on a generalization of linear programming boosting and uses results from disjunctive programming to generate successively stronger linear relaxations of a discrete non-convex problem.

1 Introduction

In many applications of machine learning, it is inherently difficult or prohibitively expensive to generate large amounts of labeled training data. However, it is often considerably less challenging to provide weakly labeled data, where labels or annotations y are associated with sets of patterns, or bags, X instead of individual patterns x ∈ X. These bags reflect a fundamental ambiguity about the correspondence of patterns and the associated label, which can be expressed logically as a disjunction of the form ⋁_{x∈X} (x is an example of class y). In plain English, each labeled bag contains at least one pattern (but possibly more) belonging to this class, but the identities of these patterns are unknown. A special case of particular relevance is known as multiple instance learning (MIL) [5]. In MIL, labels are binary and the ambiguity is asymmetric in the sense that bags with negative labels are always of size one. Hence the label uncertainty is restricted to members of positive bags. There are many interesting problems where training data of this kind arises quite naturally, including drug activity prediction [5], content-based image indexing [10] and text categorization [1].
The ambiguity typically arises because of polymorphisms allowing multiple representations, e.g. a molecule that can be in different conformations, or because of a part/whole ambiguity, e.g. annotations may be associated with images or documents when they should be attached to objects in an image or passages in a document. Notice also that there are two intertwined objectives: the goal may be to learn a pattern-level classifier from ambiguous training examples, but sometimes one may be primarily interested in classifying new bags without necessarily resolving the ambiguity for individual patterns. A number of algorithms have been developed for MIL, including special purpose algorithms using axis-parallel rectangular hypotheses [5], diverse density [10, 14], neural networks [11], and kernel methods [6]. In [1], two versions of a maximum-margin learning architecture for solving the multiple instance learning problem were presented. Because of the combinatorial nature of the problem, a simple optimization heuristic was used in [1] to learn discriminant functions. In this paper, we take a more principled approach by carefully analyzing the nature of the resulting optimization problem and by deriving a sequence of successively stronger relaxations that can be used to compute lower and upper bounds on the objective. Since it turns out that exploiting sparseness is a crucial aspect, we have focused on a linear programming formulation obtained by generalizing the LPBoost algorithm [7, 12, 4]; we call the resulting method Disjunctive Programming Boosting (DPBoost).

2 Linear Programming Boosting

LPBoost is a linear programming approach to boosting, which aims at learning ensemble classifiers of the form G(x) = sgn F(x) with F(x) = Σ_k αk hk(x), where hk : ℝ^d → {−1, 1}, k = 1, ..., n, are the so-called base classifiers, weak hypotheses, or features, and αk ≥ 0 are combination weights. The ensemble margin of a labeled example (x, y) is defined as yF(x).
Given a set of labeled training examples {(x1, y1), ..., (xm, ym)}, LPBoost formulates the supervised learning problem using the 1-norm soft margin objective

min_{α,ξ}  Σ_{k=1}^n αk + C Σ_{i=1}^m ξi
s.t.  yi F(xi) ≥ 1 − ξi,  ξi ≥ 0, ∀i,   αk ≥ 0, ∀k.   (1)

Here C > 0 controls the tradeoff between the hinge loss and the L1 regularization term. Notice that this formulation remains meaningful even if all training examples are just negative or just positive [13]. Following [4], the dual program of Eq. (1) can be written as

max_u  Σ_{i=1}^m ui
s.t.  Σ_{i=1}^m ui yi hk(xi) ≤ 1, ∀k,   0 ≤ ui ≤ C, ∀i.   (2)

It is useful to take a closer look at the KKT complementarity conditions

ui (yi F(xi) + ξi − 1) = 0   and   αk ( Σ_{i=1}^m ui yi hk(xi) − 1 ) = 0.   (3)

Since the optimal values of the slack variables are implicitly determined by α as ξi(α) = [1 − yi F(xi)]+, the first set of conditions states that ui = 0 whenever yi F(xi) > 1. Since ui can be interpreted as the "misclassification" cost, this implies that only instances with tight margin constraints may have non-vanishing associated costs. The second set of conditions ensures that αk = 0 if Σ_{i=1}^m ui yi hk(xi) < 1, which states that a weak hypothesis hk is never included in the ensemble if its weighted score Σ_i ui yi hk(xi) is strictly below the maximum score of 1. So a typical LPBoost solution may be sparse in two ways: (i) only a small number of weak hypotheses with αk > 0 may contribute to the ensemble, and (ii) the solution may depend only on a subset of the training data, i.e. those instances with ui > 0. LPBoost exploits the sparseness of the ensemble by incrementally selecting columns from the simplex tableau and optimizing the smaller tableau. This amounts to finding, in each round, a hypothesis hk for which the constraint in Eq. (2) is violated, adding it to the ensemble, and re-optimizing the tableau with the selected columns. As a column selection heuristic, the authors of [4] propose to use the magnitude of the violation, i.e.
pick the weak hypothesis hk with maximal score Σ_i ui yi hk(xi).

3 Disjunctive Programming Boosting

In order to deal with pattern ambiguity, we employ the disjunctive programming framework [2, 9]. In the spirit of transductive large margin methods [8, 3], we propose to estimate the parameters α of the discriminant function in a way that achieves a large margin for at least one of the patterns in each bag. Applying this principle, we can compile the training data into a set of disjunctive constraints on α. To that end, let us define the following polyhedra:

Hi(x) ≡ { (α, ξ) : yi Σ_k αk hk(x) + ξi ≥ 1 },   Q ≡ { (α, ξ) : α, ξ ≥ 0 }.   (4)

Then we can formulate the following disjunctive program:

min_{α,ξ}  Σ_{k=1}^n αk + C Σ_{i=1}^m ξi,   s.t.  (α, ξ) ∈ Q ∩ ⋂_i ⋃_{x∈Xi} Hi(x).   (5)

Notice that if |Xi| ≥ 2, the constraint imposed by Xi is highly non-convex, since it is defined via a union of halfspaces. However, for trivial bags with |Xi| = 1, the resulting constraints are the same as in Eq. (1). Since we will handle these two cases quite differently in the sequel, let us introduce the index sets I = {i : |Xi| ≥ 2} and J = {j : |Xj| = 1}. A suitable way to define a relaxation of this non-convex optimization problem is to replace the disjunctive set in Eq. (5) by its convex hull. As shown in [2], a whole hierarchy of such relaxations can be built, using the fundamental fact that cl-conv(A) ∩ cl-conv(B) ⊇ cl-conv(A ∩ B), where cl-conv(A) denotes the closure of the convex hull of the limiting points of A. This means that a tighter convex relaxation is obtained if we intersect as many sets as possible before taking their convex hull. Since repeated intersection of disjunctive sets with more than one element each leads to a combinatorial blow-up in the number of constraints, we propose to intersect every ambiguous disjunctive constraint with every non-ambiguous constraint as well as with Q. This is also called a parallel reduction step [2].
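The LPBoost building blocks recalled in section 2, the soft-margin objective of Eq. (1) and violation-based column selection for the dual constraints of Eq. (2), can be sketched concretely as follows. The data and variable names are our own toy assumptions, not the paper's implementation:

```python
import numpy as np

def primal_objective(alpha, H, y, C):
    """1-norm soft-margin objective of Eq. (1).

    H holds the weak-hypothesis outputs, H[i, k] = h_k(x_i) in {-1, +1};
    slacks take their optimal values xi_i = [1 - y_i F(x_i)]_+.
    """
    F = H @ alpha
    xi = np.maximum(0.0, 1.0 - y * F)
    return alpha.sum() + C * xi.sum(), xi

def select_column(u, y, H):
    """Column selection: pick h_k with maximal score sum_i u_i y_i h_k(x_i).

    The column is worth adding only if its score exceeds 1, i.e. its
    dual constraint in Eq. (2) is violated.
    """
    scores = H.T @ (u * y)
    k = int(np.argmax(scores))
    return k, float(scores[k])

# toy data: 3 examples, 2 weak hypotheses
H = np.array([[ 1,  1],
              [ 1, -1],
              [-1,  1]])
y = np.array([1, 1, -1])

# h_1 (column 0) classifies all examples correctly with margin 1: no slack
obj, xi = primal_objective(np.array([1.0, 0.0]), H, y, C=10.0)

# with uniform dual weights, column 0 has the largest (violated) score
k, score = select_column(np.full(3, 0.5), y, H)
```

The same score also drives DPBoost's column generation, via the aggregated score S(k) defined later.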
It results in the following convex relaxation of the constraints in Eq. (5):

(α, ξ) ∈ ⋂_{i∈I} cl-conv( ⋃_{x∈Xi} ( Hi(x) ∩ Q ∩ ⋂_{j∈J} Hj(xj) ) ),   (6)

where we have abused notation slightly and identified Xj = {xj} for bags with one pattern. The rationale for using this relaxation is that the resulting convex optimization problem is tractable and may provide a reasonably accurate approximation to the original disjunctive program, which can be further strengthened by using it in combination with branch-and-bound search. There is a lift-and-project representation of the convex hulls in Eq. (6), i.e. one can characterize the feasible set as the projection of a higher-dimensional polyhedron which can be explicitly characterized [2].

Proposition 1. Assume a set of non-empty linear constraints Hi ≡ {z : Ai z ≥ bi} ≠ ∅ is given. Then z ∈ cl-conv(⋃_i Hi) if and only if there exist zj and ηj ≥ 0 such that

z = Σ_j zj,   Σ_j ηj = 1,   Aj zj ≥ ηj bj.

Proof. [2]

Let us pause here briefly and recapitulate what we have achieved so far. We have derived an LP relaxation of the original disjunctive program for boosting with ambiguity. This relaxation was obtained by a linearization of the original non-convex constraints. Furthermore, we have demonstrated how this relaxation can be improved using parallel reduction steps. Applying this linearization to every convex hull in Eq. (6) individually, notice that one needs to introduce duplicates α^x, ξ^x of the parameters α and slack variables ξ for every x ∈ Xi. In addition to the constraints α^x_k, ξ^x_i, ξ^x_j, η^x_i ≥ 0 and Σ_{x∈Xi} η^x_i = 1, the relevant constraint set for an ambiguous bag Xi, i ∈ I, of the resulting LP can be written as

∀x ∈ Xi:            yi Σ_k α^x_k hk(x) + ξ^x_i ≥ η^x_i,   (7a)
∀x ∈ Xi, ∀j ∈ J:    yj Σ_k α^x_k hk(xj) + ξ^x_j ≥ η^x_i,   (7b)
∀k, ∀j ∈ I ∪ J:     αk = Σ_{x∈Xi} α^x_k,   ξj = Σ_{x∈Xi} ξ^x_j.   (7c)

The first margin constraint, Eq. (7a), is the one associated with the specific pattern x, while the second set of margin constraints in Eq.
(7b) stems from the parallel reduction performed with the unambiguous bags. One can calculate the dual LP of the above relaxation; the derivation can be found in the appendix. The resulting program has a more complicated bound structure on the u-variables and the following crucial constraints involving the data:

∀i, ∀x ∈ Xi:   yi u^x_i hk(x) + Σ_{j∈J} yj u^x_j hk(xj) ≤ ρik,   Σ_{i∈I} ρik = 1.   (8)

However, the size of the resulting problem is significant. As a result of the linearization and parallel reductions, the number of parameters in the primal LP is now O(q·n + q·r), where q, r ≤ m denote the number of patterns in ambiguous and unambiguous bags, compared to O(n + m) for standard LPBoost. The number of constraints (variables in the dual) has also been inflated significantly, from O(m) to O(q·r + p·n), where p ≤ q is the number of ambiguous bags. In order to maintain the spirit of LPBoost in dealing efficiently with a large-scale linear program, we propose to keep the column selection scheme of selecting one or more α^x_k in every round. Notice that the column selection cannot proceed independently because of the equality constraints Σ_{x∈Xi} α^x_k = αk for all Xi; in particular, α^x_k > 0 implies αk > 0, so that α^z_k > 0 for at least some z ∈ Xi for each Xi, i ∈ I. We hence propose to simultaneously add all columns {α^x_k : x ∈ Xi, i ∈ I} involving the same weak hypothesis, and to prune these back after each boosting round in order to exploit the expected sparseness of the solution. In order to select a feature hk, we compute the following score:

S(k) = Σ_i ρ̄ik − 1,   ρ̄ik ≡ max_x ( yi u^x_i hk(x) + Σ_{j∈J} yj u^x_j hk(xj) ).   (9)

Notice that, due to the block structure of the tableau, working with a reduced set of columns also eliminates a large number of inequalities (rows). However, the large set of q·r inequalities for the parallel reductions is still prohibitive. In order to address this problem, we propose to perform incremental row selection in an outer loop.
Once we have converged to a column basis for the current relaxed LP, we add a subset of rows corresponding to the most useful parallel reductions. One can use the magnitude of the margin violation as a heuristic for this row selection. Hence we propose to use the following score:

T(x, j) = η^x_i − yj Σ_k α^x_k hk(xj),   where x ∈ Xi, i ∈ I, j ∈ J.   (10)

This means that, for the current values of the duplicated ensemble weights α^x_k, one selects the parallel-reduction margin constraint, associated with ambiguous pattern x and unambiguous pattern j, that is violated most strongly. Although the margin constraints imposed by unambiguous training instances (xj, yj) are redundant after the parallel reduction step in Eq. (6), we add them to the problem because this gives a better starting point for the row selection process and may lead to a sparser solution. We hence add the following constraints to the primal:

yj Σ_k αk hk(xj) + ξj ≥ 1,   ∀j ∈ J,   (11)

which introduces additional dual variables uj, j ∈ J. Notice that in the worst case, where all inequalities imposed by ambiguous training instances Xi are vacuous, this ensures that one recovers the standard LPBoost formulation on the unambiguous examples. One can then think of the row generation process as a way of deriving useful information from ambiguous examples. This information takes the form of linear inequalities in the high-dimensional representation of the convex hull and sequentially reduces the version space, i.e. the set of feasible (α, ξ) pairs.
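The row-selection heuristic of Eq. (10) amounts to scanning a matrix of violations. A minimal sketch (toy arrays; the index layout is our own assumption):

```python
import numpy as np

def select_row(eta, alpha_x, Hj, y_j):
    """Find the most violated parallel-reduction constraint, Eq. (10).

    eta     : (p,) values eta^x_i, one per ambiguous pattern x
    alpha_x : (p, n) duplicated ensemble weights alpha^x_k
    Hj      : (r, n) matrix with Hj[j, k] = h_k(x_j) for unambiguous x_j
    y_j     : (r,) labels of the unambiguous bags
    Returns the pair (x, j) maximizing T(x, j), plus the score itself.
    """
    margins = y_j[None, :] * (alpha_x @ Hj.T)   # (p, r): y_j sum_k a^x_k h_k(x_j)
    T = eta[:, None] - margins
    x, j = np.unravel_index(np.argmax(T), T.shape)
    return int(x), int(j), float(T[x, j])

eta = np.array([0.5, 0.5])
alpha_x = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
Hj = np.array([[ 1, -1],
               [-1,  1]])
y_j = np.array([1, -1])
x, j, t = select_row(eta, alpha_x, Hj, y_j)
```

Only pairs with T(x, j) > 0 correspond to genuinely violated constraints worth adding.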
Algorithm 1 DPBoost Algorithm
1: initialize H = ∅, C = {ξi : i ∈ I ∪ J}, R = {u^x_i : x ∈ Xi, i ∈ I} ∪ {uj : j ∈ J}
2: uj = 1/|J|, u^x_i = 0, ξi = 0
3: repeat
4:   repeat
5:     column selection: select hk ∉ H with maximal S(k)
6:     H = H ∪ {hk}
7:     C = C ∪ {αk} ∪ {α^x_k : ∀x ∈ Xi, ∀i ∈ I}
8:     solve LP(C, R)
9:   until max S(k) < ε
10:  row selection: select a set S of pairs (x, j) ∉ R with maximal T(x, j) > 0
11:  R = R ∪ {u^x_j : (x, j) ∈ S}, C = C ∪ {ξ^x_j : (x, j) ∈ S}
12:  solve LP(C, R)
13: until max T(x, j) < ε

Figure 1: (Left) Normalized intensity plot used to generate synthetic data sets. (Right) Performance relative to the degree of label ambiguity. Mean and standard deviation of the pattern-level classification accuracy plotted versus λ, for the perfect-knowledge (solid), perfect-selector (dotted), DPboost (dashed), and naive (dash-dot) algorithms. The three plots correspond to data sets of size |I| = 10, 20, 30.

4 Experiments

We generated a set of synthetic weakly labeled data sets to evaluate DPboost on a small scale. These were multiple-instance data sets, where the label uncertainty was asymmetric; the only ambiguous bags (|Xi| > 1) were positive. More specifically, we generated instances x ∈ [0, 1] × [0, 1] sampled uniformly at random from the white (yi = 1) and black (yi = −1) regions of Figure 1, leaving the intermediate gray area as a separating margin. The degree of ambiguity was controlled by generating ambiguous bags of size k ∼ Poisson(λ) having only one positive and k − 1 negative patterns. To control data set size, we generated a pre-specified number of ambiguous bags, and the same number of singleton unambiguous bags. As a proof-of-concept benchmark, we compared the classification performance of DPboost with three other LPboost variants: the perfect-knowledge, perfect-selector, and naive algorithms.
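The synthetic bag construction described above can be sketched as follows. This is a toy generator under our own assumptions about the sampling regions (a split unit square standing in for the paper's intensity plot) and about forcing k ≥ 1; it is not the authors' code:

```python
import numpy as np

def make_mil_bags(n_bags, lam, rng):
    """Generate asymmetric MIL bags in the style of the experiments.

    Positive instances come from the left strip of the unit square,
    negative ones from the right strip, with a gap as separating margin.
    Each ambiguous positive bag of size k ~ Poisson(lam) holds exactly
    one positive and k-1 negative patterns; an equal number of singleton
    bags carries unambiguous labels.
    """
    pos = lambda n: np.column_stack([rng.uniform(0.0, 0.4, n), rng.uniform(0, 1, n)])
    neg = lambda n: np.column_stack([rng.uniform(0.6, 1.0, n), rng.uniform(0, 1, n)])
    bags, labels = [], []
    for _ in range(n_bags):                      # ambiguous positive bags
        k = max(1, rng.poisson(lam))             # assumed: bags are non-empty
        bags.append(np.vstack([pos(1), neg(k - 1)]))
        labels.append(+1)
    for _ in range(n_bags):                      # unambiguous singleton bags
        y = int(rng.choice([-1, +1]))
        bags.append(pos(1) if y > 0 else neg(1))
        labels.append(y)
    return bags, labels

bags, labels = make_mil_bags(n_bags=10, lam=3, rng=np.random.default_rng(0))
```

Increasing lam makes each positive bag more ambiguous, which is exactly the knob varied in Figure 1 (right).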
All variants use LPboost as their base algorithm and have slightly different preprocessing steps to accommodate the MIL data sets. The first corresponds to the supervised LPboost algorithm, i.e. the true pattern-level labels are used. Since this algorithm does not have to deal with ambiguity, it will perform better than DPboost. The second uses the true pattern-level labels to prune the negative examples from ambiguous bags and solves the smaller supervised problem with LPboost as above. This algorithm provides an interesting benchmark, since its performance is the best we can hope for from DPboost. At the other extreme, the third variant assumes the ambiguous pattern labels are equal to their respective bag labels. For all algorithms, we used thresholded "RBF-like" features. Figure 2 shows the discriminant boundary (black line) learned by each of the four algorithms for a data set generated with λ = 3 and having 20 ambiguous bags (i.e. |I| = 20, no. ambig. = 71, no. total = 91). The ambiguous patterns are marked by "o", unambiguous ones by "x", and the background is shaded to indicate the value of the ensemble F(x) (clamped to [−3, 3]). It is clear from the shading that the ensemble has a small number of active features for the DPboost, perfect-selector and perfect-knowledge algorithms. For each classifier, we report the pattern-level classification accuracy on a uniform 21 x 21 grid of points. The sparsity of the dual variables was also verified; less than 20 percent of the dual variables and reductions were active. We ran 5-fold cross-validation on the synthetic data sets for λ = 1, 3, 5, 7 and for data sets having |I| = 10, 20, 30. Figure 1 (right side) shows the mean pattern-level classification accuracy, with error bars showing one standard deviation, as a function

Figure 2: Discriminant boundaries learned by the naive (accuracy = 53.3%), DPboost (85.3%), perfect-selector (86.6%) and perfect-knowledge (92.7%) algorithms.
of the parameter λ.

5 Conclusion

We have presented a new learning algorithm for classification problems where labels are associated with sets of patterns instead of individual patterns. Using synthetic data, the expected behaviour of the algorithm has been demonstrated. Our current implementation could not handle large data sets, so improvements, followed by a large-scale validation and comparison to other algorithms using benchmark MIL data sets, will follow.

Acknowledgments

Thanks to David Musicant for making his CPLEX MEX interface available online, and to Ioannis Tsochantaridis and Keith Hall for useful discussion and advice. This work was sponsored by an NSF-ITR grant, award number IIS-0085836.

References

[1] Stuart Andrews, Ioannis Tsochantaridis, and Thomas Hofmann. Support vector machines for multiple-instance learning. In Advances in Neural Information Processing Systems, volume 15. MIT Press, 2003.
[2] Egon Balas. Disjunctive programming and a hierarchy of relaxations for discrete optimization problems. SIAM Journal on Algebraic and Discrete Methods, 6(3):466–486, July 1985.
[3] A. Demirez and K. Bennett. Optimization approaches to semisupervised learning. In M. Ferris, O. Mangasarian, and J. Pang, editors, Applications and Algorithms of Complementarity. Kluwer Academic Publishers, Boston, 2000.
[4] Ayhan Demiriz, Kristin P. Bennett, and John Shawe-Taylor. Linear programming boosting via column generation. Machine Learning, 46(1-3):225–254, 2002.
[5] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Perez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31–71, 1997.
[6] T. Gärtner, P. A. Flach, A. Kowalczyk, and A. J. Smola. Multi-instance kernels. In Proc. 19th International Conf. on Machine Learning. Morgan Kaufmann, San Francisco, CA, 2002.
[7] A. J. Grove and D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, 1998.
[8] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings 16th International Conference on Machine Learning, pages 200–209. Morgan Kaufmann, San Francisco, CA, 1999.
[9] Sangbum Lee and Ignacio E. Grossmann. New algorithms for nonlinear generalized disjunctive programming. Computers and Chemical Engineering Journal, 24(9-10):2125–2141, October 2000.
[10] O. Maron and A. L. Ratan. Multiple-instance learning for natural scene classification. In Proc. 15th International Conf. on Machine Learning, pages 341–349. Morgan Kaufmann, San Francisco, CA, 1998.
[11] J. Ramon and L. De Raedt. Multi instance neural networks. In Proceedings of ICML-2000, Workshop on Attribute-Value and Relational Learning, 2000.
[12] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Technical Report NC-TR-1998-021, Department of Computer Science, Royal Holloway, University of London, Egham, UK, 1998.
[13] Gunnar Rätsch, Sebastian Mika, Bernhard Schölkopf, and Klaus-Robert Müller. Constructing boosting algorithms from SVMs: an application to one-class classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1184–1199, 2002.
[14] Qi Zhang and Sally A. Goldman. EM-DD: An improved multiple-instance learning technique. In Advances in Neural Information Processing Systems, volume 14. MIT Press, 2002.

Appendix

The primal variables are αk, α^x_k, ξi, ξ^x_i, ξ^x_j, and η^x_i. The dual variables are u^x_i and u^x_j for the margin constraints, and ρik, σi, and θi for the equality constraints on αk, ξ, and η, respectively. The Lagrangian is given by

L = Σ_k αk + C ( Σ_i ξi + Σ_j ξj )
  − Σ_i Σ_{x∈Xi} u^x_i ( yi Σ_k α^x_k hk(x) + ξ^x_i − η^x_i )
  − Σ_i Σ_{x∈Xi} Σ_j u^x_j ( yj Σ_k α^x_k hk(xj) + ξ^x_j − η^x_i )
  + Σ_i θi ( 1 − Σ_{x∈Xi} η^x_i )
  − Σ_{i,k} ρik ( αk − Σ_{x∈Xi} α^x_k )
  − Σ_i σi ( ξi − Σ_{x∈Xi} ξ^x_i )
  − Σ_{i,j} σij ( ξj − Σ_{x∈Xi} ξ^x_j )
  − Σ_i Σ_{x∈Xi} Σ_k α̃^x_k α^x_k − Σ_i Σ_{x∈Xi} ξ̃^x_i ξ^x_i − Σ_i Σ_{x∈Xi} Σ_j ξ̃^x_j ξ^x_j − Σ_i Σ_{x∈Xi} η̃^x_i η^x_i.
Taking derivatives w.r.t. the primal variables leads to the following dual:

$$\max \sum_i \theta_i \quad \text{s.t.} \quad \theta_i \le u_i^x + \sum_j u_j^x, \quad u_i^x \le C, \quad u_j^x \le \sigma_{ij}, \quad \sum_i \sigma_{ij} \le C,$$
$$y_i u_i^x h_k(x) + \sum_j y_j u_j^x h_k(x_j) \le \rho_{ik}, \quad \sum_i \rho_{ik} = 1 .$$
2003
68
2,472
An Autonomous Robotic System For Mapping Abandoned Mines D. Ferguson1, A. Morris1, D. Hähnel2, C. Baker1, Z. Omohundro1, C. Reverte1, S. Thayer1, C. Whittaker1, W. Whittaker1, W. Burgard2, S. Thrun3 1The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 2Computer Science Department, University of Freiburg, Freiburg, Germany 3Computer Science Department, Stanford University, Stanford, CA Abstract We present the software architecture of a robotic system for mapping abandoned mines. The software is capable of acquiring consistent 2D maps of large mines with many cycles, represented as Markov random fields. 3D C-space maps are acquired from local 3D range scans, which are used to identify navigable paths using A* search. Our system has been deployed in three abandoned mines, two of which are inaccessible to people, where it has acquired maps of unprecedented detail and accuracy. 1 Introduction This paper describes the navigation software of a deployed robotic system for mapping subterranean spaces such as abandoned mines. Subsidence of abandoned mines poses a major problem for society, as do ground water contamination, mine fires, and so on. Most abandoned mines are inaccessible to people, but some are accessible to robots. Autonomy is a key requirement for robots operating in such environments, due to a lack of wireless communication technology for subterranean spaces. Our vehicle, shown in Figure 1 (see [1] for a detailed hardware description), is equipped with two actuated laser range finders. When exploring and mapping unknown mines, it alternates short phases of motion guided by 2D range scans with phases in which the vehicle rests to acquire 3D range scans. An analysis of the 3D scans leads to a path that is then executed, using rapidly acquired 2D scans to determine the robot's motion relative to the 3D map. If no such path is found, a high-level control module adjusts the motion direction accordingly.
Acquiring consistent large-scale maps without external geo-referencing through GPS is largely considered an open research issue. Our approach relies on efficient statistical techniques for generating such maps in real-time. At the lowest level, we employ a fast scan matching algorithm for registering successive scans, thereby recovering robot odometry. Groups of scans are then converted into local maps, using Markov random field representations (MRFs) to characterize the residual path uncertainty. Loop closure is attained by adding constraints into those MRFs, based on a maximum likelihood (ML) estimator. However, the brittleness of the ML approach is overcome by a "lazy" data association mechanism that can undo and redo past associations so as to maximize the overall map consistency. To navigate, local 3D scans are mapped into 2½D terrain maps, by analyzing surface gradients and vertical clearance in the 3D scans. The result is subsequently transformed into cost functions expressed in the robot's three-dimensional configuration space, by convolving the 2½D terrain maps with kernels that describe the robot's footprint in different orientations. Fast A* planning is then employed in configuration space to generate paths, which are executed through PD control. Figure 1: The Groundhog robot is a 1,500 pound custom-built vehicle equipped with onboard computing, laser range sensing, gas and sinkage sensors, and video recording equipment. Its purpose is to explore and map abandoned mines. The system has been tested in a number of mines. Some of the results reported here were obtained via manual control in mines accessible to people. Others involved fully autonomous exploration, for which our robot operated fully self-guided for several hours beyond the reach of radio communication.
2 2D Mapping 2.1 Generating Locally Consistent Maps As in [6, 9], we apply an incremental scan matching technique for registering scans, acquired using a forward-pointed laser range finder while the vehicle is in motion. This algorithm aligns scans by iteratively identifying nearby points in pairs of consecutive range scans, and then calculating the relative displacement and orientation of these scans by minimizing the quadratic distance of these pairs of points [2]. This approach leads to the recovery of two quantities: locally consistent maps and an estimate of the robot's motion. It is well understood [3, 6], however, that local scan matching is incapable of achieving globally consistent maps. This is because of the residual error in scan matching, which accumulates over time. The limitation is apparent in the map shown in Figure 2a, which is the result of applying local scan matching in a mine that is approximately 250 meters wide. Our approach addresses this problem by explicitly representing the uncertainty in the map and the path using a Markov random field (MRF) [11]. More specifically, the data acquired through every five meters of consecutive robot motion is mapped into a local map [3]. Figure 3a shows such a local map. The absolute location and orientation of the k-th map will be denoted $\xi_k = (x_k \; y_k \; \theta_k)^T$; here $x$ and $y$ are the Cartesian coordinates and $\theta$ is the orientation. From the scan matcher, we can retrieve relative displacement information of the form $\delta_{k,k-1} = (\Delta x_{k,k-1} \; \Delta y_{k,k-1} \; \Delta\theta_{k,k-1})^T$ which, if scan matching were error-free, would enable us to recover absolute information via the following recursion (under the boundary condition $\xi_0 = (0, 0, 0)^T$):

$$\xi_k = f(\xi_{k-1}, \delta_{k,k-1}) = \begin{pmatrix} x_{k-1} + \Delta x_{k,k-1}\cos\theta_{k-1} + \Delta y_{k,k-1}\sin\theta_{k-1} \\ y_{k-1} - \Delta x_{k,k-1}\sin\theta_{k-1} + \Delta y_{k,k-1}\cos\theta_{k-1} \\ \theta_{k-1} + \Delta\theta_{k,k-1} \end{pmatrix} \quad (1)$$

However, scan matching is not without errors.
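As a concrete illustration, the recursion in (1) can be implemented directly to chain the scan matcher's relative displacements into absolute map poses (a minimal sketch, not the deployed code; the function names are our own, and the sign convention follows the equation as printed):

```python
import math

def compose(xi_prev, delta):
    # One application of recursion (1): chain the relative displacement
    # delta = (dx, dy, dtheta) onto the absolute pose xi_prev = (x, y, theta).
    x, y, th = xi_prev
    dx, dy, dth = delta
    return (x + dx * math.cos(th) + dy * math.sin(th),
            y - dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def integrate(deltas, xi0=(0.0, 0.0, 0.0)):
    # Recover absolute poses xi_0, ..., xi_K from scan-matcher displacements,
    # under the boundary condition xi_0 = (0, 0, 0)^T.
    poses = [xi0]
    for d in deltas:
        poses.append(compose(poses[-1], d))
    return poses
```

Because each step reuses the previous estimate, any scan-matching error propagates into all later poses, which is exactly the accumulation problem the MRF representation below is designed to address.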
To account for those errors, our approach generalizes this recursion into a Markov random field (MRF), in which each variable in $\Xi = \xi_1, \xi_2, \ldots$ is a (three-dimensional) node. This MRF is defined through the potentials

$$\phi(\xi_k, \xi_{k-1}) = \exp\Big\{-\tfrac{1}{2}\,(\xi_k - f(\xi_{k-1}, \delta_{k,k-1}))^T R_{k,k-1}\, (\xi_k - f(\xi_{k-1}, \delta_{k,k-1}))\Big\} \quad (2)$$

Here $R_{k,k-1}$ is the inverse covariance of the uncertainty associated with the transition $\delta_{k,k-1}$. Since the MRF is a linear chain without cycles, the mode of this MRF is the solution to the recursion defined in (1). Figure 3b shows the MRF for the data collected in the Bruceton Research Mine, over a distance of more than a mile. We note that this representation generalizes the one in [11], which represents posteriors by a local bank of Kalman filters. Figure 2: Mine map with incremental ML scan matching (left) and using our lazy data association approach (right). The map is approximately 250 meters wide. 2.2 Enforcing Global Consistency The key advantage of the MRF representation is that it encompasses the residual uncertainty in local scan matching. This enables us to alter the shape of the map in accordance with global consistency constraints. These constraints are obtained by matching local maps acquired at different points in time (e.g., when closing a large cycle). In particular, if the k-th map overlaps with some map j acquired at an earlier point in time, our approach localizes the robot relative to this map using once again local scan matching. As a result, it recovers a relative constraint $\phi(\xi_k, \xi_j)$ between the coordinates of non-adjacent maps $\xi_k$ and $\xi_j$. This constraint is of the same form as the local constraints in (2), hence is represented by a potential. For any fixed set of such potentials $\Phi = \{\phi(\xi_k, \xi_j)\}$, the resulting MRF is described through the following negative log-likelihood function

$$-\log p(\Xi) = \text{const.} + \tfrac{1}{2} \sum_{k,j} (\xi_k - f(\xi_j, \delta_{k,j}))^T R_{k,j}\, (\xi_k - f(\xi_j, \delta_{k,j})) \quad (3)$$

where $\Xi = \xi_1, \xi_2, \ldots$ is the set of all map poses, and $f$ is defined in (1).
Unfortunately, the resulting MRF is no longer a linear chain. Instead, it contains cycles. The variables $\Xi = \xi_1, \xi_2, \ldots$ can be recovered using any of the standard algorithms for inference on graphs with cycles, such as the popular loopy belief propagation algorithm and related techniques [5, 14, 17]. Our approach solves this problem by matrix inversion. In particular, we linearize the function $f$ using a Taylor expansion:

$$f(\xi_j, \delta_{k,j}) \approx f(\bar\xi_j) + F_{k,j}(\xi_j - \bar\xi_j) \quad (4)$$

where $\bar\xi_j$ denotes a momentary estimate of the variable $\xi_j$ (e.g., the solution of the recursion (1) without the additional data association constraints). The matrix $F_{k,j} = \nabla_{\xi_j} f(\bar\xi_j, \delta_{k,j})$ is the Jacobian of $f(\xi_j, \delta_{k,j})$ at $\bar\xi_j$:

$$F_{k,j} = \begin{pmatrix} 1 & 0 & -\Delta x_{k,j}\sin\bar\theta_j + \Delta y_{k,j}\cos\bar\theta_j \\ 0 & 1 & -\Delta x_{k,j}\cos\bar\theta_j - \Delta y_{k,j}\sin\bar\theta_j \\ 0 & 0 & 1 \end{pmatrix} \quad (5)$$

Figure 3: (a) Example of a local map. (b) The Markov random field: Each node is the center of a local map, acquired when traversing the Bruceton Research Mine near Pittsburgh, PA. The resulting negative log-likelihood

$$-\log p(\Xi) \approx \text{const.} + \tfrac{1}{2} \sum_{k,j} (\xi_k - f(\bar\xi_j) - F_{k,j}(\xi_j - \bar\xi_j))^T R_{k,j}\, (\xi_k - f(\bar\xi_j) - F_{k,j}(\xi_j - \bar\xi_j))$$

is quadratic in the variables $\Xi$, of the form $\text{const.} + (A\Xi - a)^T R\, (A\Xi - a)$, where $A$ is a matrix, $a$ is a vector, and $R$ is a sparse matrix that is non-zero for all elements $j, k$ in the set of potentials. The minimum of this function is attained at $(A^T R A)^{-1} A^T R a$. This solution requires the inversion of a sparse matrix. Empirically, we find that this inversion can be performed very efficiently using an inversion algorithm described in [15]; it only requires a few seconds for matrices composed of hundreds of local map positions (and it appears to be numerically more stable than the solution in [11, 6]). Iterative application of this linearized optimization quickly converges to the mode of the MRF, which is the set of locations and orientations $\Xi$.
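To make the linear-algebra step concrete, here is a toy version of the quadratic minimization: a one-dimensional pose chain with one loop-closure constraint, solved via the normal equations $(A^T R A)^{-1} A^T R a$. This is a schematic sketch using dense NumPy in place of the sparse inversion routine of [15], and all numbers are invented for illustration:

```python
import numpy as np

# Toy linearized system: three 1-D poses x0, x1, x2 with odometry-style
# constraints x1 - x0 = 1 and x2 - x1 = 1, plus a trusted loop-closure
# constraint x2 - x0 = 1.9.
A = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [-1.0, 0.0, 1.0]])
a = np.array([1.0, 1.0, 1.9])      # measured relative displacements
R = np.diag([1.0, 1.0, 10.0])      # inverse covariances (loop closure trusted)

# Anchor x0 = 0 with a strong prior to remove the gauge freedom.
A = np.vstack([A, [1.0, 0.0, 0.0]])
a = np.append(a, 0.0)
R = np.diag(np.append(np.diag(R), 100.0))

# Minimizer of (A xi - a)^T R (A xi - a):  xi = (A^T R A)^{-1} A^T R a.
xi = np.linalg.solve(A.T @ R @ A, A.T @ R @ a)
```

The confident loop closure pulls the endpoint estimate from 2.0 toward 1.9, deforming the whole chain — the same map-adjustment effect described above, in miniature.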
However, we conjecture that recent advances on inference in loopy graphs can further increase the efficiency of our approach. 2.3 Lazy Data Association Search Unfortunately, the approach described thus far leads to a consistent map only when the additional constraints $\phi(\xi_k, \xi_j)$ obtained after loop closure are correct. These constraints amount to a maximum likelihood solution for the challenging data association problem that arises when closing a loop. When loops are large, this ML solution might be wrong, a problem that has been the source of an entire literature on SLAM (simultaneous localization and mapping) algorithms. Figure 4a depicts such a situation, obtained when operating our vehicle in a large abandoned mine. The current best algorithms apply proactive particle filter (PF) techniques to solve this problem [4, 8, 12, 13]. PF techniques sample from the path posterior. When closing a loop, random variations in these samples lead to different loop closures. As long as the correct such closure is in the set of surviving particles, the correct map can be recovered. In the context of our present system, this approach suffers from two disadvantages: it is computationally expensive due to its proactive nature, and it provides no mechanism for recovery should the correct loop closure not be represented in the particle set. Our approach overcomes both of these limitations. When closing a loop, it always picks the most likely data association. However, it also provides a mechanism to undo and redo past data association decisions. The exact data association algorithm involves a step that monitors the likelihood of the most recent sensor measurement given the map. If this likelihood falls below a threshold, data association constraints are recursively undone and replaced by other constraints of decreasing likelihood (including the possibility of not generating a constraint at all).
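The undo/redo search just described can be sketched as a depth-first backtracking over candidate loop-closure constraints. This is a hypothetical skeleton of the control flow only; the deployed system interleaves the search with mapping and scan matching:

```python
def lazy_associate(loop_hypotheses, consistent):
    # Backtracking search over loop-closure data associations.
    # loop_hypotheses: for each detected loop, a list of candidate
    # constraints in decreasing likelihood; None means "create no constraint".
    # consistent(partial) plays the role of the measurement-likelihood
    # threshold test: it rejects assignments under which recent measurements
    # become grossly inconsistent with the map.
    def search(partial, rest):
        if not consistent(partial):
            return None                       # undo this branch
        if not rest:
            return partial                    # all loops associated
        for cand in list(rest[0]) + [None]:   # ML choice first, then redo
            sol = search(partial + [cand], rest[1:])
            if sol is not None:
                return sol
        return None
    return search([], loop_hypotheses)
```

With a single loop whose maximum-likelihood hypothesis ("two parallel hallways") is later contradicted, the search falls back to the next hypothesis ("one hallway"), mirroring the scenario of Figure 4.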
The search terminates if the likelihood of the most recent measurement exceeds the threshold [7]. In practice, the threshold test works well, since global inconsistencies tend to induce gross inconsistencies in the robot's measurements at some point in time. The algorithm is illustrated in Figure 4. Figure 4: Example of our lazy data association technique: when closing a large loop, the robot first erroneously assumes the existence of a second, parallel hallway (left panel, showing the start and the conflict). However, this model leads to a gross inconsistency as the robot encounters a corridor at a right angle. At this point, our approach recursively searches for improved data association decisions, arriving at the adjusted map shown on the right. The left panel shows the ML association after traversing a large loop inside a mine: at first, it appears that the existence of two adjacent corridors is more likely than a single one, according to the estimated robot motion. However, as the robot approaches a turn, a noticeable inconsistency is detected. Inconsistencies are found by monitoring the measurement likelihood, using a threshold for triggering an exception. As a result, our data association mechanism recursively removes past data association constraints back to the most recent loop closure, and then "tries" the second most likely hypothesis. The result of this backtracking step is shown in the right panel of Figure 4. The backtracking requires a fraction of a second, and with high likelihood leads to a globally consistent map and, as a side effect, to an improved estimate of the map coordinates $\Xi$. Figure 2b shows a prototypical corrected map, which is globally consistent. 3 Autonomous Navigation 2D maps are sufficient for localizing robots inside mines; however, they are insufficient for navigating a robot, due to the rugged nature of abandoned mines.
Our approach to navigation is based on 3D maps, acquired at periodic intervals while the vehicle suspends motion to scan its environment. A typical 3D scan is shown in Figure 5a; others are shown in Figure 7. 3.1 2½D Terrain Maps In a first processing step, the robot projects local 3D maps onto 2½D terrain maps, such as the one shown in Figure 5b. The gray level in this map illustrates the degree to which the map is traversable: the brighter a 2D location, the better suited it is for navigation. The terrain map is obtained by analyzing all measurements ⟨x, y, z⟩ in the 3D scan (where z is the vertical dimension). For each rectangular surface region $[x_{min}, x_{max}] \times [y_{min}, y_{max}]$, it identifies the minimum z-value, denoted $\underline{z}$. It then searches for the largest z-value in this region whose distance to $\underline{z}$ does not exceed the vehicle height (plus a safety margin); this value will be called $\bar{z}$. The difference $\bar{z} - \underline{z}$ is the navigational coefficient: it loosely corresponds to the ruggedness of the terrain under the height of the robot. If no measurement is available for the target region $[x_{min}, x_{max}] \times [y_{min}, y_{max}]$, the region is marked as unknown. For safety reasons, multiple regions overlap when building the terrain map. The terrain map is subsequently convolved with a narrow radial kernel that serves as a repellent potential field, to keep the robot clear of obstacles. 3.2 Configuration Space Maps The terrain map is used to construct a collection of maps that describe the robot's configuration space, or C-space [10]. The C-space is the three-dimensional space of poses that the vehicle can assume; it comprises the x-y location along with the vehicle's orientation θ. Figure 5: (a) A local 3D model of the mine corridor, obtained by a scanning laser range finder. (b) The corresponding 2½D terrain map extracted from this 3D snapshot: the brighter a location, the easier it is to navigate. (c) Kernels for generating directional C-space maps from the 2½D terrain map.
The two black bars in each kernel correspond to the vehicle's tires. Planning in these C-space maps ensures that the terrain under the tires is maximally navigable. The C-space maps are obtained by convolving the terrain map with oriented kernels that describe the robot's footprint. Figure 5c shows some of these kernels: most value is placed in the wheel area of the vehicle, with only a small portion assigned to the area in between, where the vehicle's clearance is approximately 30 centimeters. The intuition behind using such a kernel is as follows: abandoned mines often possess railroad tracks, and while it is perfectly acceptable to navigate with a track between the wheels, traversing or riding these tracks causes unnecessary damage to the tires and increases energy consumption. The result of this transformation is a collection of C-space maps, each of which applies to a different vehicle orientation. 3.3 Corridor Following Finally, A* search is employed in C-space to determine a path to an unexplored area. The A* search is initiated with an array of goal points, which places the highest value at locations at maximum distance straight down a mine corridor. This approach finds the best path to traverse, and then executes it using a PD controller. If no such path can be found even within a short range (2.5 meters), the robot decides that the hallway is not navigable and initiates a high-level decision to turn around. This technique has been sufficient for our autonomous exploration runs thus far (which involved straight hallway exploration), but it does not yet provide a viable solution for exploring multiple hallways connected by intersections (see [16] for recent work on this topic). 4 Results The approach was tested in multiple experiments, some of which were remotely operated, while in others the robot operated autonomously, outside the reach of radio communication.
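The terrain-map and C-space construction of Sections 3.1 and 3.2 can be sketched as follows. This is a simplified illustration with invented grid parameters; the deployed system additionally uses overlapping regions, a safety margin, and the repellent potential field:

```python
import numpy as np

def terrain_map(points, robot_height, cell):
    # 2.5D terrain map from 3D points (x, y, z).  Per grid cell: find the
    # lowest z, then the largest z within robot_height of it; the difference
    # is the navigational coefficient (ruggedness).  Cells without readings
    # remain unknown (NaN).
    idx = np.floor(points[:, :2] / cell).astype(int)
    idx -= idx.min(axis=0)
    rough = np.full(idx.max(axis=0) + 1, np.nan)
    for i, j in np.unique(idx, axis=0):
        zs = points[(idx == (i, j)).all(axis=1), 2]
        z_lo = zs.min()
        rough[i, j] = zs[zs <= z_lo + robot_height].max() - z_lo
    return rough

def cspace_layer(rough, footprint):
    # One directional C-space map: correlate the terrain map with an oriented
    # robot-footprint kernel (dense loop in place of a fast convolution).
    n, m = footprint.shape
    padded = np.pad(rough, ((n // 2, n // 2), (m // 2, m // 2)),
                    constant_values=np.nan)
    out = np.empty_like(rough)
    for i in range(rough.shape[0]):
        for j in range(rough.shape[1]):
            out[i, j] = np.nansum(padded[i:i + n, j:j + m] * footprint)
    return out
```

One such layer per discretized orientation yields the stack of C-space cost maps over which A* then searches.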
On October 27, 2002, Groundhog was driven under manual control into the Florence Mine near Burgettstown, PA. Figure 6b shows a picture of the tethered and remotely controlled vehicle inside this mine, which had not been entered by people for many decades. Its partially flooded nature prevented entry beyond approximately 40 meters. Maps acquired in this mine are shown in Figure 9. On May 30, 2003, Groundhog successfully explored an abandoned mine in fully autonomous mode. The mine, known as the Mathies Mine near Pittsburgh, is part of a large mine system near Courtney, PA. Existing maps for this mine are highly inaccurate, and the conditions inside the mine were unknown to us. Figure 6a shows the robot as it enters the mine, and Figure 7a depicts a typical 3D scan acquired in the entrance area. Figure 6: (a) The vehicle as it enters the Mathies Mine on May 30, 2003. It autonomously descended 308 meters into the mine before making the correct decision to turn around due to a blockage inside the mine. (b) The vehicle, as it negotiates acidic mud under manual remote control approximately 30 meters into the Florence Mine near Burgettstown, PA. Figure 7: 3D local maps: (a) a typical corridor map that is highly navigable; (b) a map of a broken ceiling bar that renders the corridor segment unnavigable. This obstacle was encountered 308 meters into the abandoned Mathies Mine. Figure 8: Fraction of the 2D mine map of the Mathies Mine, autonomously explored by the Groundhog vehicle. Also shown are the path of the robot and the locations at which it chose to take 3D scans. The protruding obstacle shows up as a small dot-like obstacle in the 2D map. Figure 9: (a) A small 2D map acquired by Groundhog in the Florence Mine near Burgettstown, PA. This remotely-controlled mission was aborted when the robot's computer was flooded by water and mud in the mine. (b) View of a local 3D map of the ceiling.
(c) Image acquired by Groundhog inside the Mathies Mine (a dry mine). After successfully descending 308 meters into the Mathies Mine, negotiating some rough terrain along the way, the robot encountered a broken ceiling beam that draped diagonally across its path. The corresponding 3D scan is shown in Figure 7b: it shows rubble on the ground, along with the ceiling bar and two ceiling cables dragged down by the bar. The robot's A* motion planner failed to identify a navigable path, and the robot made the appropriate decision to retreat. Figure 8 shows the corresponding 2D map; the entire map is 308 meters long, but here we only show the final section, along with the path and the location at which the robot stopped to take a 3D scan. An image acquired in this mine is depicted in Figure 9c. 5 Conclusion We have described the software architecture of a deployed system for robotic mine mapping. The most important algorithmic innovations of our approach are new, lazy techniques for data association, and a fast technique for navigating rugged terrain. The system has been tested under extreme conditions, and has generated accurate maps of abandoned mines inaccessible to people. Acknowledgements We acknowledge the contributions of the students of the class 16865 Mobile Robot Development at CMU who helped build Groundhog. We also acknowledge the assistance provided by the Bruceton Research Mine (Paul Stefko), MSHA, PA-DEP, Workhorse Technologies, and the various people in the mining industry who supported this work. Finally, we gratefully acknowledge financial support by DARPA's MARS program. References [1] C. Baker, Z. Omohundro, S. Thayer, W. Whittaker, M. Montemerlo, and S. Thrun. A case study in robotic mapping of abandoned mines. FSR-03. [2] P. Besl and N. McKay. A method for registration of 3D shapes. PAMI 14(2), 1992. [3] M. Bosse, P. Newman, M. Soika, W. Feiten, J. Leonard, and S. Teller. An atlas framework for scalable mapping. ICRA-03. [4] A. Eliazar and R. Parr.
DP-SLAM: Fast, robust simultaneous localization and mapping without predetermined landmarks. IJCAI-03. [5] Anshul Gupta, George Karypis, and Vipin Kumar. Highly scalable parallel algorithms for sparse matrix factorization. Trans. Parallel and Distrib. Systems, 8(5), 1997. [6] J.-S. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. CIRA-00. [7] D. Hähnel, W. Burgard, B. Wegbreit, and S. Thrun. Towards lazy data association in SLAM. 11th International Symposium of Robotics Research, Siena, 2003. [8] D. Hähnel, D. Fox, W. Burgard, and S. Thrun. A highly efficient FastSLAM algorithm for generating cyclic maps of large-scale environments from raw laser range measurements. Submitted to IROS-03. [9] D. Hähnel, D. Schulz, and W. Burgard. Map building with mobile robots in populated environments. IROS-02. [10] J.-C. Latombe. Robot Motion Planning. Kluwer, 1991. [11] F. Lu and E. Milios. Globally consistent range scan alignment for environment mapping. Autonomous Robots, 4, 1997. [12] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. IJCAI-03. [13] K. Murphy. Bayesian map learning in dynamic environments. NIPS-99. [14] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: An empirical study. UAI-99. [15] W. H. Press. Numerical Recipes in C: The Art of Scientific Computing. Cambridge Univ. Press, 1988. [16] R. Simmons, D. Apfelbaum, W. Burgard, D. Fox, M. Moors, S. Thrun, and H. Younes. Coordination for multi-robot exploration and mapping. AAAI-00. [17] M. J. Wainwright. Stochastic processes on graphs with cycles: geometric and variational approaches. PhD thesis, MIT, 2002.
2003
69
2,473
On the Dynamics of Boosting∗ Cynthia Rudin, Ingrid Daubechies Princeton University, Program in Applied & Computational Mathematics, Fine Hall, Washington Road, Princeton, NJ 08544-1000 {crudin,ingrid}@math.princeton.edu Robert E. Schapire Princeton University, Department of Computer Science, 35 Olden St., Princeton, NJ 08544 schapire@cs.princeton.edu Abstract In order to understand AdaBoost's dynamics, especially its ability to maximize margins, we derive an associated simplified nonlinear iterated map and analyze its behavior in low-dimensional cases. We find stable cycles for these cases, which can be used to solve explicitly for AdaBoost's output. By considering AdaBoost as a dynamical system, we are able to prove Rätsch and Warmuth's conjecture that AdaBoost may fail to converge to a maximal-margin combined classifier when given a 'non-optimal' weak learning algorithm. AdaBoost is known to be a coordinate descent method, but other known algorithms that explicitly aim to maximize the margin (such as AdaBoost∗ and arc-gv) are not. We consider a differentiable function for which coordinate ascent will yield a maximum margin solution. We then make a simple approximation to derive a new boosting algorithm whose updates are slightly more aggressive than those of arc-gv. 1 Introduction AdaBoost is an algorithm for constructing a "strong" classifier using only a training set and a "weak" learning algorithm. A "weak" classifier produced by the weak learning algorithm has a probability of misclassification that is slightly below 50%. A "strong" classifier has a much smaller probability of error on test data. Hence, AdaBoost "boosts" the weak learning algorithm to achieve a stronger classifier. AdaBoost was the first practical boosting algorithm, and due to its success, a number of similar boosting algorithms have since been introduced (see [1] for an introduction).
AdaBoost maintains a distribution (a set of weights) over the training examples, and requests a weak classifier from the weak learning algorithm at each iteration. Training examples that were misclassified by the weak classifier at the current iteration then receive higher weights at the following iteration. The end result is a final combined classifier, given by a thresholded linear combination of the weak classifiers. Often, AdaBoost does not empirically seem to suffer badly from overfitting, even after a large number of iterations. This lack of overfitting has been attributed to AdaBoost's ability to generate a large margin, leading to a better guarantee on the generalization performance. ∗This research was partially supported by NSF Grants IIS-0325500, CCR-0325463, ANI-0085984 and AFOSR Grant F49620-01-1-0099. When it is possible to achieve a positive margin, AdaBoost has been shown to approximately maximize the margin [2]. In particular, it is known that AdaBoost achieves a margin of at least ρ/2, where ρ is the largest margin that can possibly be attained by a combined classifier (other bounds appear in [3]). Many of the subsequent boosting algorithms that have emerged (such as AdaBoost∗ [4] and arc-gv [5]) have the same main outline as AdaBoost but attempt more explicitly to maximize the margin at the expense of lowering the convergence rate; the trick seems to be to design an update for the combined classifier that maximizes the margin, has a fast rate of convergence, and is robust. For all the extensive theoretical and empirical study of AdaBoost, it is still unknown whether AdaBoost achieves a maximal margin solution, and thus the best upper bound on the probability of error (for margin-based bounds). While the limiting dynamics of the linearly inseparable case (i.e., ρ = 0) are fully understood [6], other basic questions about the dynamics of AdaBoost in the more common case ρ > 0 remain open.
For instance, we do not know, in the limit of a large number of rounds, whether AdaBoost eventually cycles among the base classifiers, or whether its behavior is more chaotic. In this paper, we study the dynamics of AdaBoost. First we simplify the algorithm to reveal a nonlinear iterated map for AdaBoost's weight vector. This iterated map gives a direct relation between the weights at time t and the weights at time t + 1, including renormalization, thus providing a much more concise mapping than the original algorithm. We then provide a specific set of examples in which trajectories of this iterated map converge to a limit cycle, allowing us to calculate AdaBoost's output vector directly. There are two interesting cases governing the dynamics: the case where the optimal weak classifiers are chosen at each iteration (the 'optimal' case), and the case where permissible non-optimal weak classifiers may be chosen (the 'non-optimal' case). In the optimal case, the weak learning algorithm is required to choose a weak classifier with the largest edge at every iteration. In the non-optimal case, the weak learning algorithm may choose any weak classifier as long as its edge exceeds ρ, the maximum margin achievable by a combined classifier. This is a natural notion of non-optimality for boosting; thus it provides a natural sense in which to measure robustness. Based on large-scale experiments and a gap in theoretical bounds, Rätsch and Warmuth [3] conjectured that AdaBoost does not necessarily converge to a maximum margin classifier in the non-optimal case, i.e., that AdaBoost is not robust in this sense. In practice, the weak classifiers are generated by CART or another heuristic weak learning algorithm, implying that the choice need not always be optimal. In Section 3, we show this conjecture to be true using a low-dimensional example. Thus, our low-dimensional study provides insight into AdaBoost's large-scale dynamical behavior.
AdaBoost, as shown by Breiman [5] and others, is actually a coordinate descent algorithm on a particular exponential loss function. However, minimizing this function in other ways does not necessarily achieve large margins; the process of coordinate descent must somehow be responsible. In Section 4, we introduce a differentiable function that can be maximized to achieve maximal margins; performing coordinate ascent on this function yields a new boosting algorithm that directly maximizes margins. This new algorithm and AdaBoost use the same formula to choose a direction of ascent/descent at each iteration; thus AdaBoost chooses the optimal direction for this new setting. We approximate the update rule for coordinate ascent on this function and derive an algorithm with updates that are slightly more aggressive than those of arc-gv. We proceed as follows: in Section 2 we introduce some notation and state the AdaBoost algorithm. Then we decouple the dynamics for AdaBoost in the binary case to reveal a nonlinear iterated map. In Section 3, we analyze these dynamics for a simple case: the case where each hypothesis has one misclassified point. In a 3 × 3 example, we find two stable cycles. We use these cycles to show that AdaBoost produces a maximal margin solution in the optimal case; this result generalizes to m × m. Then, we produce the example promised above to show that AdaBoost does not necessarily converge to a maximal margin solution in the non-optimal case. In Section 4 we introduce a differentiable function that can be used to maximize the margin via coordinate ascent, and then approximate the coordinate ascent update step to derive a new algorithm. 2 Simplified Dynamics of AdaBoost The training set consists of $\{(x_i, y_i)\}_{i=1..m}$, where each example $(x_i, y_i) \in X \times \{-1, 1\}$. Denote by $d_t \in \mathbb{R}^m$ the distribution (weights) over the training examples at iteration t, expressed as a column vector, and by $d_t^T$ its transpose.
Denote by n the total number of classifiers that can be produced by the weak learning algorithm. Since our classifiers are binary, n is finite (at most $2^m$), but may be very large. The weak classifiers are denoted $h_1, \ldots, h_n$, with $h_j : X \to \{1, -1\}$; we assume that for every $h_j$ on this list, $-h_j$ also appears. We construct a matrix M so that $M_{ij} = y_i h_j(x_i)$, i.e., $M_{ij} = +1$ if training example i is classified correctly by hypothesis $h_j$, and $-1$ otherwise. The (unnormalized) coefficient of classifier $h_j$ for the final combined hypothesis is denoted $\lambda_j$, so that the final combined hypothesis is $f_{Ada}(x) = \sum_{j=1}^n (\lambda_j/\|\lambda\|_1)\, h_j(x)$, where $\|\lambda\|_1 = \sum_{j=1}^n \lambda_j$. (In this paper, either $h_j$ or $-h_j$ remains unused.) The simplex of n-dimensional vectors with positive entries that sum to 1 will be denoted $\Delta_n$. The margin of training example i is defined by $y_i f_{Ada}(x_i)$, or equivalently $(M\lambda)_i/\|\lambda\|_1$, and the edge of hypothesis j with respect to the training data (weighted by d) is $(d^T M)_j$, or $1 - 2 \times$ (probability of error of $h_j$ on the training set weighted by d). Our goal is to find a normalized vector $\tilde\lambda \in \Delta_n$ that maximizes $\min_i (M\tilde\lambda)_i$. We call this minimum margin over training examples the margin of classifier $\lambda$. Here is the AdaBoost algorithm and our reduction to an iterated map. AdaBoost ('optimal' case): 1. Input: Matrix M, number of iterations $t_{max}$ 2. Initialize: $\lambda_{1,j} = 0$ for $j = 1, \ldots, n$ 3. Loop for $t = 1, \ldots, t_{max}$: (a) $d_{t,i} = e^{-(M\lambda_t)_i} / \sum_{i'=1}^m e^{-(M\lambda_t)_{i'}}$ for $i = 1, \ldots, m$ (b) $j_t = \operatorname{argmax}_j (d_t^T M)_j$ (c) $r_t = (d_t^T M)_{j_t}$ (d) $\alpha_t = \frac{1}{2} \ln\left(\frac{1+r_t}{1-r_t}\right)$ (e) $\lambda_{t+1} = \lambda_t + \alpha_t e_{j_t}$, where $e_{j_t}$ is 1 in position $j_t$ and 0 elsewhere. 4. Output: $\lambda_{combined,j} = \lambda_{t_{max}+1,j}/\|\lambda_{t_{max}+1}\|_1$ Thus at each iteration, the distribution $d_t$ is computed (Step 3a), the classifier $j_t$ with maximum edge is selected (Step 3b), and the weight of that classifier is updated (Steps 3c, 3d, 3e). (Note that wlog one can omit from M all the unused columns.) AdaBoost can be reduced to the following iterated map for the $d_t$'s.
This map gives a direct relationship between d_t and d_{t+1}, taking the normalization of Step 3a into account automatically. Initialize d_{1,i} = 1/m for i = 1, ..., m, as in the first iteration of AdaBoost.

Reduced Iterated Map:
1. j_t = argmax_j (d_t^T M)_j
2. r_t = (d_t^T M)_{j_t}
3. d_{t+1,i} = d_{t,i} / (1 + M_{i,j_t} r_t) for i = 1, ..., m

To derive this map, consider the iteration defined by AdaBoost and reduce as follows:

d_{t+1,i} = e^{−(Mλ_t)_i} e^{−M_{i,j_t} α_t} / Σ_{i′=1}^m e^{−(Mλ_t)_{i′}} e^{−M_{i′,j_t} α_t},

where α_t = (1/2) ln((1 + r_t)/(1 − r_t)), so that

e^{−M_{i,j_t} α_t} = ((1 − r_t)/(1 + r_t))^{M_{i,j_t}/2} = ((1 − M_{i,j_t} r_t)/(1 + M_{i,j_t} r_t))^{1/2},

and thus

d_{t+1,i} = d_{t,i} ((1 − M_{i,j_t} r_t)/(1 + M_{i,j_t} r_t))^{1/2} / Σ_{i′=1}^m d_{t,i′} ((1 − M_{i′,j_t} r_t)/(1 + M_{i′,j_t} r_t))^{1/2}.

Define d_+ = Σ_{i : M_{i,j_t} = 1} d_{t,i} and d_− = 1 − d_+; thus d_+ = (1 + r_t)/2 and d_− = (1 − r_t)/2. For each i such that M_{i,j_t} = 1, we find

d_{t+1,i} = d_{t,i} / (d_+ + d_− (1 + r_t)/(1 − r_t)) = d_{t,i}/(1 + r_t).

Likewise, for each i such that M_{i,j_t} = −1, we find d_{t+1,i} = d_{t,i}/(1 − r_t). Our reduction is complete. To check that Σ_{i=1}^m d_{t+1,i} = 1, note that Σ_{i=1}^m d_{t+1,i} = d_+/(1 + r_t) + d_−/(1 − r_t) = d_+/(2d_+) + d_−/(2d_−) = 1.

3 The Dynamics of Low-Dimensional AdaBoost

First we will introduce a simple 3 × 3 input matrix and analyze the convergence of AdaBoost in the optimal case. Then we will consider a larger matrix and show that AdaBoost fails to converge to a maximum margin solution in the non-optimal case. Consider the following input matrix

M = [ −1  1  1
       1 −1  1
       1  1 −1 ]

corresponding to the case of three training examples, where each weak classifier misclassifies one example. (We could add additional hypotheses to M, but these would never be chosen by AdaBoost.) The maximum value of the margin for M is 1/3. How will AdaBoost achieve this result? We are in the optimal case, where j_t = argmax_j (d_t^T M)_j. Consider the dynamical system on the simplex Σ_{i=1}^3 d_{t,i} = 1, d_{t,i} > 0 ∀i, defined by our reduced map above. In the triangular region with vertices (0, 0, 1), (1/3, 1/3, 1/3), (0, 1, 0), j_t will be 1. Similarly, we have regions for j_t = 2 and j_t = 3 (see Figure 1(a)).
Since d_{t+1} will always satisfy (d_{t+1}^T M)_{j_t} = 0, the dynamics are restricted to the edges of a triangle with vertices (0, 1/2, 1/2), (1/2, 0, 1/2), (1/2, 1/2, 0) after the first iteration (see Figure 1(b)).

Figure 1: (a-Left) Regions of d_t-space where classifiers j_t = 1, 2, 3 will respectively be selected. (b-Right) All weight vectors d_2, ..., d_{t_max} are restricted to lie on the edges of the inner triangle.

Figure 2: (a-Upper Left) The iterated map on the unfolded triangle. Both axes give coordinates on the edges of the inner triangle in Figure 1(b). The plot shows where d_{t+1} will be, given d_t. (b-Upper Right) The map from (a) iterated twice, showing where d_{t+3} will be, given d_t. There are 6 stable fixed points, 3 for each cycle. (c-Lower Left) 50 timesteps of AdaBoost showing convergence of the d_t's to a cycle. Small rings indicate earlier timesteps of AdaBoost, while larger rings indicate later timesteps. There are many concentric rings at positions d_cyc^(1), d_cyc^(2), and d_cyc^(3). (d-Lower Right) 500 timesteps of AdaBoost on a random 11 × 21 matrix. The axes are d_{t,1} vs. d_{t,2}.

On this reduced 1-dimensional phase space, the iterated map has no stable fixed points or orbits of length 2. However, consider the following periodic orbit of length 3:

d_cyc^(1)T = ((3 − √5)/4, (√5 − 1)/4, 1/2), d_cyc^(2)T = (1/2, (3 − √5)/4, (√5 − 1)/4), d_cyc^(3)T = ((√5 − 1)/4, 1/2, (3 − √5)/4).
This is clearly a cycle, since starting from d_cyc^(1), AdaBoost will choose j_t = 1. Then r_1 = (d_cyc^(1)T M)_1 = (√5 − 1)/2. Now, computing d_cyc,i^(1)/(1 + M_{i,1} r_1) for each i yields d_cyc^(2). In this way, AdaBoost will cycle between hypotheses j = 1, 2, 3, 1, 2, 3, etc. There is in fact another 3-cycle:

d_cyc′^(1)T = ((3 − √5)/4, 1/2, (√5 − 1)/4), d_cyc′^(2)T = (1/2, (√5 − 1)/4, (3 − √5)/4), d_cyc′^(3)T = ((√5 − 1)/4, (3 − √5)/4, 1/2).

To find these cycles, we hypothesized only that a cycle of length 3 exists, visiting each hypothesis in turn, and used the reduced equations from Section 2 to solve for the cycle coordinates. We give the following outline of the proof for global stability. This map is a contraction, so any small perturbation from the cycle will diminish, yielding local stability of the cycles. One only needs to consider the one-dimensional map defined on the unfolded triangle, since within one iteration every trajectory lands on the triangle. This map and its iterates are piecewise continuous and monotonic in each piece, so one can find exactly where each interval will be mapped (see Figure 2(a)). Consider the second iteration of this map (Figure 2(b)). One can break the unfolded triangle into intervals and find the region of attraction of each fixed cycle; in fact the whole triangle is the union of the two regions of attraction. The convergence to one of these two 3-cycles is very fast; Figure 2(b) shows that the absolute slope of the second iterated map at the fixed points is much less than 1. The combined classifier that AdaBoost outputs is

λ_combined = ((d_cyc^(1)T M)_1, (d_cyc^(2)T M)_2, (d_cyc^(3)T M)_3)/normalization = (1/3, 1/3, 1/3),

and since min_i (Mλ_combined)_i = 1/3, AdaBoost produces a maximal margin solution. This 3 × 3 case can be generalized to m classifiers, each having one misclassified training example; in this case there will be periodic cycles of length m, and the contraction will also persist (the cycles will be stable).
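Both cycles can be checked by direct substitution into the reduced map; the following Python verification is illustrative (our own helper names), and is exact up to floating-point rounding:

```python
import math

M = [[-1, 1, 1], [1, -1, 1], [1, 1, -1]]
s5 = math.sqrt(5.0)

def step(d):
    """One application of the reduced iterated map of Section 2."""
    edges = [sum(d[i] * M[i][j] for i in range(3)) for j in range(3)]
    jt = max(range(3), key=lambda j: edges[j])
    rt = edges[jt]
    return [d[i] / (1 + M[i][jt] * rt) for i in range(3)]

# The first 3-cycle given above.
cycle = [
    [(3 - s5) / 4, (s5 - 1) / 4, 1 / 2],
    [1 / 2, (3 - s5) / 4, (s5 - 1) / 4],
    [(s5 - 1) / 4, 1 / 2, (3 - s5) / 4],
]
# step(d_cyc^(k)) reproduces d_cyc^(k+1), wrapping around after three steps.
for k in range(3):
    nxt = step(cycle[k])
    assert all(abs(a - b) < 1e-12 for a, b in zip(nxt, cycle[(k + 1) % 3]))
```

The same check succeeds for the second cycle with the coordinates permuted as listed above.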
We note that for every low-dimensional case we tried, periodic cycles of larger lengths seem to exist (such as in Figure 2(d)), but the contraction at each iteration does not persist, so it is harder to show stability. Now, we give an example to show that non-optimal AdaBoost does not necessarily converge to a maximal margin solution. Consider the following input matrix (again, omitting unused columns):

M = [ −1  1  1  1 −1
       1 −1  1  1 −1
       1  1 −1  1  1
       1  1  1 −1  1 ]

For this matrix, the maximal margin ρ is 1/2. In the optimal case, AdaBoost will produce this value by cycling among the first four columns of M. Recall that in the non-optimal case j_t ∈ {j : (d_t^T M)_j ≥ ρ}. Consider the following initial condition for the dynamics: d_1^T = ((3 − √5)/8, (3 − √5)/8, 1/2, (√5 − 1)/4). Since (d_1^T M)_5 > ρ, we are justified in choosing j_1 = 5, although here it is not the optimal choice. Another iteration yields d_2^T = (1/4, 1/4, (√5 − 1)/4, (3 − √5)/4), satisfying (d_2^T M)_4 > ρ, for which we choose j_2 = 4. At the third iteration, we choose j_3 = 3, and at the fourth iteration we find d_4 = d_1. This cycle is the same cycle as in our previous example (although there is one extra dimension). There is actually a whole manifold of 3-cycles in this non-optimal case, since d̃_1^T := (ϵ, (3 − √5)/4 − ϵ, 1/2, (√5 − 1)/4) lies on a cycle for any ϵ, 0 ≤ ϵ ≤ (3 − √5)/4. In any case, the value of the margin produced by this cycle is 1/3, not 1/2. We have thus established that AdaBoost is not robust in the sense we described; if the weak learner is not required to choose the optimal hypothesis at each iteration, but is only required to choose a sufficiently good weak classifier j_t ∈ {j : (d_t^T M)_j ≥ ρ}, then a maximum margin solution will not necessarily be attained.
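This non-optimal cycle can also be reproduced numerically. In the illustrative sketch below (variable names ours), we force the admissible but non-optimal choices j_t = 5, 4, 3 and recover both the 3-cycle and the deficient margin of 1/3:

```python
import math

# The 4 x 5 matrix above; its maximum margin is rho = 1/2.
M = [[-1,  1,  1,  1, -1],
     [ 1, -1,  1,  1, -1],
     [ 1,  1, -1,  1,  1],
     [ 1,  1,  1, -1,  1]]
rho = 0.5
s5 = math.sqrt(5.0)

d = [(3 - s5) / 8, (3 - s5) / 8, 1 / 2, (s5 - 1) / 4]   # d_1 above
d1 = list(d)
lam = [0.0] * 5
for jt in [4, 3, 2]:                  # columns 5, 4, 3 (0-based indices)
    r = sum(d[i] * M[i][jt] for i in range(4))
    assert r >= rho - 1e-12           # each choice is admissible: (d^T M)_jt >= rho
    lam[jt] += 0.5 * math.log((1 + r) / (1 - r))
    d = [d[i] / (1 + M[i][jt] * r) for i in range(4)]

# After three steps the weights return to d_1: a 3-cycle.
# Yet the resulting combined classifier has margin 1/3, not rho = 1/2.
norm = sum(lam)
margin = min(sum(M[i][j] * lam[j] for j in range(5)) for i in range(4)) / norm
```

The normalized coefficient vector is (0, 0, 1/3, 1/3, 1/3), which gives every example a margin of exactly 1/3.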
In practice, it may be possible for AdaBoost to converge to a maximum margin solution when hypotheses are chosen to be only slightly non-optimal; however, the notion of non-optimality we are using is a very natural one, and we have shown that AdaBoost may not converge to ρ here. Note that for some matrices M, a maximum margin solution may still be attained in the non-optimal case (for example, the simple 3 × 3 matrix we analyzed above), but it is not attained in general, as shown by our example. We are not saying that the only way for AdaBoost to converge to a non-optimal solution is to fall into the wrong cycle; there may be many other non-cyclic ways for the algorithm to fail to converge to a maximum margin solution. Also note that for the other algorithms mentioned in Section 1 and for the new algorithms in Section 4, there are fixed points rather than periodic orbits.

4 Coordinate Ascent for Maximum Margins

AdaBoost can be interpreted as an algorithm based on coordinate descent. There are other algorithms, such as AdaBoost∗ and arc-gv, that attempt to maximize the margin explicitly, but these are not based on coordinate descent. We now suggest a boosting algorithm that aims to maximize the margin explicitly (like arc-gv and AdaBoost∗) yet is based on coordinate ascent. An important note is that AdaBoost and our new algorithm choose the direction of descent/ascent (the value of j_t) using the same formula, j_t = argmax_j (d_t^T M)_j. This lends further credence to the conjecture that AdaBoost maximizes the margin in the optimal case, since the direction AdaBoost chooses is the same direction one would choose to maximize the margin directly via coordinate ascent. The function that AdaBoost minimizes via coordinate descent is F(λ) = Σ_{i=1}^m e^{−(Mλ)_i}. Consider any λ such that (Mλ)_i > 0 ∀i. Then lim_{a→∞} aλ will minimize F, yet the original normalized λ might not yield a maximum margin.
So it must be the process of coordinate descent which awards AdaBoost its ability to increase margins, not simply AdaBoost's ability to minimize F. Now consider a different function (which bears a resemblance to an ϵ-Boosting objective in [7]):

G(λ) = −(1/∥λ∥_1) ln F(λ) = −(1/∥λ∥_1) ln( Σ_{i=1}^m e^{−(Mλ)_i} ),  where ∥λ∥_1 := Σ_{j=1}^n λ_j.

It can be verified that G has many nice properties, e.g., G is a concave function for each fixed value of ∥λ∥_1, whose maximum only occurs in the limit as ∥λ∥_1 → ∞, and more importantly, as ∥λ∥_1 → ∞ we have G(λ) → µ(λ), where µ(λ) = (min_i (Mλ)_i)/∥λ∥_1, the margin of λ. That is,

m e^{−µ(λ)∥λ∥_1} ≥ Σ_{i=1}^m e^{−(Mλ)_i} > e^{−µ(λ)∥λ∥_1}   (1)
−(ln m)/∥λ∥_1 + µ(λ) ≤ G(λ) < µ(λ)   (2)

For (1), the first inequality becomes equality only when all m examples achieve the same minimal margin, and the second inequality holds since we kept only one term. Rather than performing coordinate descent on F as in AdaBoost, let us perform coordinate ascent on G. The choice of direction j_t at iteration t is:

argmax_j dG(λ_t + α e_j)/dα |_{α=0} = argmax_j [ (Σ_{i=1}^m e^{−(Mλ_t)_i} M_ij) / (F(λ_t) ∥λ_t∥_1) + (1/∥λ_t∥_1²) ln F(λ_t) ].

Of these two terms, the second does not depend on j, and the first is proportional to (d_t^T M)_j. Thus the same direction will be chosen here as for AdaBoost. Now consider the distance to travel along this direction. Ideally, we would like to maximize G(λ_t + α e_{j_t}) with respect to α, i.e., we would like:

0 = ∥λ_{t+1}∥_1 · dG(λ_t + α e_{j_t})/dα = (Σ_{i=1}^m e^{−(Mλ_t)_i} e^{−M_{i,j_t} α} M_{i,j_t}) / F(λ_t + α e_{j_t}) − G(λ_t + α e_{j_t}).

There is no analytical solution for α, but the maximization of G(λ_t + α e_{j_t}) is 1-dimensional, so it can be performed quickly. An approximate coordinate ascent algorithm which avoids this line search is the following approximation to this maximization problem:

0 ≈ (Σ_{i=1}^m e^{−(Mλ_t)_i} e^{−M_{i,j_t} α} M_{i,j_t}) / F(λ_t + α e_{j_t}) − G(λ_t).

We can solve for α_t analytically:

α_t = (1/2) ln((1 + r_t)/(1 − r_t)) − (1/2) ln((1 + g_t)/(1 − g_t)),  where g_t = max{0, G(λ_t)}.   (3)

Consider some properties of this iteration scheme.
The update α_t is strictly positive (in the case of positive margins) due to the von Neumann min-max theorem and equation (2); that is,

r_t ≥ ρ = min_{d∈∆_m} max_j (d^T M)_j = max_{λ̃∈∆_n} min_i (Mλ̃)_i ≥ min_i (Mλ_t)_i/∥λ_t∥_1 > G(λ_t),

and thus α_t > 0 for all t. We have preliminary proofs that the value of G increases at each iteration of our approximate coordinate ascent algorithm, and that our algorithms converge to a maximum margin solution, even in the non-optimal case. Our new update (3) is less aggressive than AdaBoost's, but slightly more aggressive than arc-gv's. The other algorithm we mention, AdaBoost∗, has a different sort of update. It converges to a combined classifier attaining a margin inside the interval [ρ − ν, ρ] within 2(log_2 m)/ν² steps, but does not guarantee asymptotic convergence to ρ for a fixed ν. There are many other boosting algorithms, but some of them require minimization over non-convex functions; here, we choose to compare with the simple updates of AdaBoost (due to its fast convergence rate), AdaBoost∗, and arc-gv. AdaBoost, arc-gv, and our algorithm have initially large updates, based on a conservative estimate of the margin. AdaBoost∗'s updates are initially small, based on an estimate of the edge.

Figure 3: (a-Left) Performance of all algorithms in the optimal case on a random 11 × 21 input matrix. (b-Right) AdaBoost, arc-gv, and approximate coordinate ascent on synthetic data.

Figure 3(a) shows the performance of AdaBoost, arc-gv, AdaBoost∗ (parameter ν set to .001), approximate coordinate ascent, and coordinate ascent on G (with a line search for α_t at every iteration) on a reduced randomly generated 11 × 21 matrix, in the optimal case.
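For concreteness, here is an illustrative Python sketch of the approximate coordinate ascent update (3); all names are ours (this is not a released implementation). On the 3 × 3 matrix of Section 3 it chooses the same directions as AdaBoost and drives the margin toward the optimum 1/3:

```python
import math

def approx_coord_ascent(M, t_max):
    """Sketch of the approximate coordinate ascent update (3): same direction
    as AdaBoost, but the step is shrunk by g_t = max(0, G(lambda_t))."""
    m, n = len(M), len(M[0])
    lam = [0.0] * n
    for _ in range(t_max):
        scores = [sum(M[i][j] * lam[j] for j in range(n)) for i in range(m)]
        F = sum(math.exp(-s) for s in scores)
        d = [math.exp(-s) / F for s in scores]
        edges = [sum(d[i] * M[i][j] for i in range(m)) for j in range(n)]
        jt = max(range(n), key=lambda j: edges[j])    # same direction as AdaBoost
        rt = edges[jt]
        norm = sum(lam)                               # lambda stays nonnegative here
        gt = max(0.0, -math.log(F) / norm) if norm > 0 else 0.0
        lam[jt] += 0.5 * math.log((1 + rt) / (1 - rt)) \
                 - 0.5 * math.log((1 + gt) / (1 - gt))
    total = sum(lam)
    return [v / total for v in lam]

M = [[-1, 1, 1], [1, -1, 1], [1, 1, -1]]   # Section 3 example, maximum margin 1/3
lam = approx_coord_ascent(M, 300)
margin = min(sum(M[i][j] * lam[j] for j in range(3)) for i in range(3))
```

Each step is strictly positive here, since r_t ≥ ρ > G(λ_t) on this matrix, and the achieved margin approaches 1/3.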
AdaBoost settles into a cycle (as shown in Figure 2(d)), so its updates remain consistently large, causing ∥λ_t∥_1 to grow faster and thus converge faster with respect to G. The values of r_t in the cycle happen to produce an optimal margin solution, so AdaBoost quickly converges to this solution. The approximate coordinate ascent algorithm has slightly less aggressive updates than AdaBoost, and is very closely aligned with coordinate ascent; arc-gv is slower. AdaBoost∗ has a more methodical convergence rate; convergence is initially slower but speeds up later. Artificial test data for Figure 3(b) were designed as follows: 50 example points were constructed randomly such that each x_i lies on a corner of the hypercube {−1, 1}^100. We set y_i = sign(Σ_{k=1}^{11} x_i(k)), where x_i(k) indicates the kth component of x_i. The jth weak learner is h_j(x) = x(j), thus M_ij = y_i x_i(j). As expected, the convergence rate of approximate coordinate ascent falls between AdaBoost and arc-gv.

5 Conclusions

We have used the nonlinear iterated map defined by AdaBoost to understand its update rule in low-dimensional cases and uncover cyclic dynamics. We produced an example to show that AdaBoost does not necessarily maximize the margin in the non-optimal case. Then, we introduced a coordinate ascent algorithm and an approximate coordinate ascent algorithm that aim to maximize the margin directly. Here, the direction of ascent agrees with the direction chosen by AdaBoost and other algorithms. It is an open problem to understand these dynamics in other cases.

References

[1] Robert E. Schapire. A brief introduction to boosting. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, 1999.
[2] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651–1686, October 1998.
[3] Gunnar Rätsch and Manfred Warmuth. Maximizing the margin with boosting.
In Proceedings of the 15th Annual Conference on Computational Learning Theory, pages 334–350, 2002.
[4] Gunnar Rätsch and Manfred Warmuth. Efficient margin maximizing with boosting. Journal of Machine Learning Research, submitted 2002.
[5] Leo Breiman. Prediction games and arcing classifiers. Neural Computation, 11(7):1493–1517, 1999.
[6] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1/2/3), 2002.
[7] Saharon Rosset, Ji Zhu, and Trevor Hastie. Boosting as a regularized path to a maximum margin classifier. Technical report, Department of Statistics, Stanford University, 2003.
Model Uncertainty in Classical Conditioning A. C. Courville*1,3, N. D. Daw2,3, G. J. Gordon4, and D. S. Touretzky2,3 1Robotics Institute, 2Computer Science Department, 3Center for the Neural Basis of Cognition, 4Center for Automated Learning and Discovery Carnegie Mellon University, Pittsburgh, PA 15213 {aaronc,daw,ggordon,dst}@cs.cmu.edu Abstract We develop a framework based on Bayesian model averaging to explain how animals cope with uncertainty about contingencies in classical conditioning experiments. Traditional accounts of conditioning fit parameters within a fixed generative model of reinforcer delivery; uncertainty over the model structure is not considered. We apply the theory to explain the puzzling relationship between second-order conditioning and conditioned inhibition, two similar conditioning regimes that nonetheless result in strongly divergent behavioral outcomes. According to the theory, second-order conditioning results when limited experience leads animals to prefer a simpler world model that produces spurious correlations; conditioned inhibition results when a more complex model is justified by additional experience. 1 Introduction Most theories of classical conditioning, exemplified by the classic model of Rescorla and Wagner [7], are wholly concerned with parameter learning. They assume a fixed (often implicit) generative model m of reinforcer delivery and treat conditioning as a process of estimating values for the parameters wm of that model. Typically, these parameters represent the rates of reinforcers delivered in the presence of various stimuli. Using the model and the parameters, the probability of reinforcer delivery can be estimated; such estimates are assumed to give rise to conditioned responses in behavioral experiments. More overtly statistical theories have treated uncertainty in the parameter estimates, which can influence predictions and learning [4]. 
In realistic situations, the underlying contingencies of the environment are complex and unobservable, and it can thus make sense to view the model m as itself uncertain and subject to learning, though (to our knowledge) no explicitly statistical theories of conditioning have yet done so. Under the standard Bayesian approach, such uncertainty can be treated analogously to parameter uncertainty, by representing knowledge about m as a distribution over a set of possible models, conditioned on evidence. Here we advance this idea as a high-level computational framework for the role of model learning in classical conditioning. We do not concentrate on how the brain might implement these processes, but rather explore the behavior that a system approximating Bayesian reasoning should exhibit. This work establishes a relationship between theories of animal learning and a recent line of theory by Tenenbaum and collaborators, which uses similar ideas about Bayesian model learning to explain human causal reasoning [9]. We have applied our theory to a variety of standard results in animal conditioning, including acquisition, negative and positive patterning, and forward and backward blocking. Here we present one of the most interesting and novel applications, an explanation of a rather mysterious classical conditioning phenomenon in which opposite predictions about the likelihood of reinforcement can arise from different amounts of otherwise identical experience [11]. The opposing effects, both well known, are called second-order conditioning and conditioned inhibition. The theory explains the phenomenon as resulting from a tradeoff between evidence and model complexity.

2 A Model of Classical Conditioning

In a conditioning trial, a set of conditioned stimuli CS ≡ {A, B, ...} is presented, potentially accompanied by an unconditioned stimulus or reinforcement signal, US.
We represent the jth stimulus with a binary random variable y_j such that y_j = 1 when the stimulus is present. Here the index j, 1 ≤ j ≤ s, ranges over both the (s − 1) conditioned stimuli and the unconditioned stimulus. The collection of trials within an experimental protocol constitutes a training data set, D = {y_jt}, indexed by stimulus j and trial t, 1 ≤ t ≤ T. We take the perspective that animals are attempting to recover the generative process underlying the observed stimuli. We claim they assert the existence of latent causes, represented by the binary variables x_i ∈ {0, 1}, responsible for evoking the observed stimuli. The relationship between the latent causes and observed stimuli is encoded with a sigmoid belief network. This particular class of models is not essential to our conclusions; many model classes should result in similar behavior.

Sigmoid Belief Networks

In sigmoid belief networks, local conditional probabilities are defined as functions of weighted sums of parent nodes. Using our notation,

P(y_j = 1 | x_1, ..., x_c, w_m, m) = (1 + exp(−Σ_i w_ij x_i − w_yj))^{−1},   (1)

and P(y_j = 0 | x_1, ..., x_c, w_m, m) = 1 − P(y_j = 1 | x_1, ..., x_c, w_m, m). The weight w_ij represents the influence of the parent node x_i on the child node y_j. The bias term w_yj encodes the probability of y_j in the absence of all parent nodes. The parameter vector w_m contains all model parameters for model structure m. The form of the sigmoid belief networks we consider is represented as a directed graphical model in Figure 1a, with the latent causes as parents of the observed stimuli. The latent causes encode the intra-trial correlations between stimuli; we do not model the temporal structure of events within a trial. Conditioned on the latent causes, the stimuli are mutually independent. We can express the conditional joint probability of the observed stimuli as Π_{j=1}^s P(y_j | x_1, ..., x_c, w_m, m). Similarly, we assume that trials are drawn from a stationary process.
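These two assumptions (logistic conditionals and independence of stimuli given the causes) are compact enough to sketch in code. The Python below is purely illustrative: the helper names are ours, and we assume P(x_i = 1) = sigmoid(w_xi) for the latent causes, matching the bias terms in the text. The likelihood of a full protocol is then a product of per-trial terms like this one, with the binary causes summed out by enumeration:

```python
import math
from itertools import product

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def p_trial(y, W, w_x, w_y):
    """Probability of one trial's observed stimuli y (a 0/1 list), with the
    binary latent causes marginalized out by brute-force enumeration.
    W[i][j]: weight from cause i to stimulus j; w_x: cause biases; w_y: stimulus biases."""
    c, s = len(w_x), len(w_y)
    total = 0.0
    for x in product([0, 1], repeat=c):
        p_x = 1.0
        for i in range(c):
            q = sigmoid(w_x[i])                # assumed prior on each cause
            p_x *= q if x[i] else (1.0 - q)
        p_y = 1.0
        for j in range(s):
            # Eq. (1): logistic conditional for each stimulus given the causes.
            q = sigmoid(sum(W[i][j] * x[i] for i in range(c)) + w_y[j])
            p_y *= q if y[j] else (1.0 - q)
        total += p_x * p_y
    return total

# A toy 2-cause, 3-stimulus network (weights chosen arbitrarily for illustration).
W = [[16.0, 0.0, 14.0], [0.0, 16.0, 15.0]]
w_x, w_y = [0.0, -1.0], [-15.0, -15.0, -15.0]
probs = [p_trial(list(y), W, w_x, w_y) for y in product([0, 1], repeat=3)]
```

Summed over all 2³ stimulus patterns these probabilities total 1, and with the strongly negative biases a stimulus is very unlikely unless one of its causes is active.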
We do not consider trial order effects, and we assume all trials are mutually independent. (Because of these simplifying assumptions, the present model cannot address a number of phenomena such as the difference between latent inhibition, partial reinforcement, and extinction.) The resulting likelihood function of the training data, with latent causes marginalized, is:

P(D | w_m, m) = Π_{t=1}^T Σ_x Π_{j=1}^s P(y_jt | x, w_m, m) P(x | w_m, m),   (2)

where the sum is over all combinations of values of x = [x_1, ..., x_c] and P(x | w_m, m) = Π_{i=1}^c (1 + exp((−1)^{x_i} w_xi))^{−1}.

Figure 1: (a) An example from the proposed set of models. Conditional dependencies are depicted as links between the latent causes (x_1, x_2) and the observed stimuli (A, B, US) during a trial. (b) Marginal likelihood of the data, D, for a simple model and a more complicated model (after MacKay [5]).

Sigmoid belief networks have a number of appealing properties for modeling conditioning. First, the sigmoid belief network is capable of compactly representing correlations between groups of observable stimuli. Without a latent cause, the number of parameters required to represent these correlations would scale exponentially with the number of stimuli. Second, the parent nodes, interacting additively, constitute a factored representation of state. This is advantageous as it permits generalization to novel combinations of factors. Such additivity has frequently been observed in conditioning experiments [7].

2.1 Prediction under Parameter Uncertainty

Consider a particular network structure, m, with parameters w_m. Given m and a set of trials, D, the uncertainty associated with the choice of parameters is represented in a posterior distribution over w_m.
This posterior is given by Bayes' rule, p(w_m | D, m) ∝ P(D | w_m, m) p(w_m | m), where P(D | w_m, m) is from Equation 2 and p(w_m | m) is the prior distribution over the parameters of m. We assume the model parameters are a priori independent: p(w_m | m) = Π_{ij} p(w_ij) Π_i p(w_xi) Π_j p(w_yj), with Gaussian priors for weights, p(w_ij) = N(0, 3), latent cause biases, p(w_xi) = N(0, 3), and stimulus biases, p(w_yj) = N(−15, 1), the latter reflecting an assumption that stimuli are rare in the absence of causes. In conditioning, the test trial measures the conditioned response (CR). This is taken to be a measure of the animal's estimate of the probability of reinforcement conditioned on the present conditioned stimuli CS. This probability is also conditioned on the absence of the remaining stimuli; however, in the interest of clarity, our notation suppresses these absent stimuli. In the Bayesian framework, given m, this probability, P(US | CS, m, D), is determined by integrating over all values of the parameters weighted by their posterior probability density:

P(US | CS, m, D) = ∫ P(US | CS, w_m, m, D) p(w_m | m, D) dw_m   (3)

2.2 Prediction under Model Uncertainty

In the face of uncertainty about which is the correct model of contingencies in the world (for instance, whether a reinforcer is independent of a tone stimulus), a standard Bayesian approach is to marginalize out the influence of the model choice:

P(US | CS, D) = Σ_m P(US | CS, m, D) P(m | D)   (4)
             = Σ_m ∫ P(US | CS, w_m, m, D) p(w_m | m, D) P(m | D) dw_m

The posterior over models, P(m | D), is given by:

P(m | D) = P(D | m) P(m) / Σ_{m′} P(D | m′) P(m′),   P(D | m) = ∫ P(D | w_m, m) p(w_m | m) dw_m

The marginal likelihood P(D | m) is the probability of the data under model m, marginalizing out the model parameters. The marginal likelihood famously confers an automatic Occam's razor effect on the average of Equation 4.
Under complex models, parameters can be found to boost the probability of particular data sets that would be unlikely under simpler models, but any particular parameter choice is also less likely in more complex models. Thus there is a tradeoff between model fidelity and complexity (Figure 1b). We also encode a further preference for simpler models through the prior over model structure, which we factor as P(m) = P(c) Π_{i=1}^c P(l_i), where c is the number of latent causes and l_i is the number of directed links emanating from x_i. The priors over c and l_i are in turn given by

P(c) = 10^{−3c} / Σ_{c′=0}^5 10^{−3c′} if 0 ≤ c ≤ 5, and 0 otherwise;
P(l_i) = 10^{−3l_i} / Σ_{l′_i=0}^s 10^{−3l′_i} if 0 ≤ l_i ≤ 4, and 0 otherwise.

In the Bayesian model average, we consider the set of sigmoid belief networks with a maximum of 4 stimuli and 5 latent causes. This strong prior over model structures is required in addition to the automatic Occam's razor effect in order to explain the animal behaviors we consider. This is probably due to the extreme abstraction of our setting. With generative models that included, e.g., temporal ordering effects and multiple perceptual dimensions, model shifts equivalent to the addition of a single latent variable in our setting would introduce a great deal of additional model complexity and require proportionally more evidential justification.

2.3 Monte Carlo Integration

In order to determine the predictive probability of reinforcement, Bayesian model averaging requires that we evaluate Equation 4. Unfortunately, the integral is not amenable to analytic solution. Hence we approximate the integral with a sum over samples from the posterior p(w_m, m | D). Acquiring samples is complicated by the need to sample over parameter spaces of different dimensions. In the simulations reported here, we solved this problem and obtained samples using a reversible jump Markov chain Monte Carlo (MCMC) method [2].
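The structure prior itself is trivial to compute; here is an illustrative helper (the function name is ours) reproducing the truncated 10^{−3c} form above:

```python
def structure_prior(c, links, c_max=5, l_max=4):
    """P(m) = P(c) * prod_i P(l_i), with P(c) proportional to 10^(-3c) on
    0 <= c <= 5 and P(l_i) proportional to 10^(-3 l_i) on 0 <= l_i <= 4.
    (Truncation constants follow the text; links[i] is the link count of cause i.)"""
    if not (0 <= c <= c_max) or any(l < 0 or l > l_max for l in links):
        return 0.0
    z_c = sum(10.0 ** (-3 * k) for k in range(c_max + 1))
    z_l = sum(10.0 ** (-3 * k) for k in range(l_max + 1))
    p = 10.0 ** (-3 * c) / z_c
    for l in links:
        p *= 10.0 ** (-3 * l) / z_l
    return p

# Each extra link costs three orders of magnitude of prior mass.
ratio = structure_prior(1, [2]) / structure_prior(1, [1])
```

This makes explicit how strongly the prior penalizes structural complexity: adding one latent cause or one link divides the prior mass by a thousand.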
A new sample in the chain is obtained by proposing perturbations to the current sample's model structure or parameters.¹ Jumps include the addition or removal of links or latent causes, or updates to the stimulus biases or weights. To improve mixing over the different modes of the target distribution, we used exchange MCMC, which enables fast mixing between modes through the coupling of parallel Markov chains [3].

¹The proposal acceptance probability satisfies detailed balance for each type of jump.

Table 1: A summary of some of the experiments of Yin et al. [11]. The US was a footshock; A = white noise or buzzer sound; X = tone; B = click train.

Group    A-US   A-X   B-US   Test → Result   Test → Result
No-X      96     0     8     X → −           XB → CR
Few-X     96     4     8     X → CR          XB → CR
Many-X    96    48     8     X → −           XB → −

3 Second-Order Conditioning and Conditioned Inhibition

We use the model to shed light on the relationship between two classical conditioning phenomena, second-order conditioning and conditioned inhibition. The procedures for establishing a second-order excitor and a conditioned inhibitor are similar, yet the results are drastically different. Both procedures involve two kinds of trials: a conditioned stimulus A is presented with the US (A-US); and A is also presented with a target conditioned stimulus X in unreinforced trials (A-X). In second-order conditioning, X becomes an excitor: it is associated with increased probability of reinforcement, demonstrated by conditioned responding. But in conditioned inhibition, X becomes an inhibitor, i.e., associated with decreased probability of reinforcement. Inhibition is probed with two tests: a transfer test, in which the inhibitor is paired with a second excitor B and shown to reduce conditioned responding, and a retardation test, in which the time course of response development under subsequent excitatory X-US training is retarded relative to naive animals. Yin et al.
[11] explored the dimensions of these two procedures in an effort to distill the essential requirements for each. Under previous theories [8], it might have seemed that the crucial distinction between second-order conditioning and conditioned inhibition had to do with either blocked versus interspersed trials, or with sequential versus simultaneous presentation of the CSes. However, they found that using only interspersed trials and simultaneous presentation of the conditioned stimuli, they were able to shift from second-order conditioning to conditioned inhibition simply by increasing the number of A-X pairings.² Table 1 summarizes the relevant details of the experiment. From a theoretical perspective, these results present a challenge for models of conditioning. Why do animals so drastically change their behavior regarding X given only more of the same kind of A-X experience? Bayesian model averaging offers some insight. We simulated the experiments of Yin et al., matching their numbers for each type of trial, as shown in Table 1. Results of the MCMC approximation of the Bayesian model average integration are shown in Figure 2. All MCMC runs were at least 5 × 10^6 iterations long, excluding a burn-in of 1 × 10^6 iterations; the sequences were subsampled down to 2.5 × 10^4 samples. In Figure 2a, we see that P(US | X, D) reveals significant second-order conditioning with few A-X trials. With more trials, the predicted probability of reinforcement quickly decreases. These results are consistent with the findings of Yin et al., as shown in Table 1. With few A-X trials there are insufficient data to justify a complicated model that accurately fits the data. Due to the automatic Occam's razor and the prior preference for simple models, high posterior density is inferred for the simple model of Figure 3a. This model combines the stimuli from all trial types and attributes them to a single latent cause.
²In other conditions, trial ordering was shown to have an additional effect; this is outside the scope of the present theory due to our stationarity assumptions.

Figure 2: A summary of the simulation results. Error bars indicate the 3σ margin in the standard error of the estimate (we omit very small error bars). (a) P(US | X, D) and P(US | A, D) as a function of A-X trials. For few trials (2 to 8), P(US | X, D) is high, indicative of second-order conditioning. (b) P(US | X, B, D) and P(US | B, D) as a function of number of A-X trials. After 10 trials, X is able to significantly reduce the predicted probability of reinforcement generated by the presentation of B. (c) Results of a retardation test. With many A-X trials, acquisition of an excitatory association to X is retarded.

When X is tested alone, its connection to the US through the latent cause results in a large P(US | X, D). With more training trials, the preference for simpler models is more successfully offset, and more complicated models, capable of describing the data more accurately, are given greater posterior density (Figure 3c). An example of such a model is shown in Figure 3b. In the model, X is made a conditioned inhibitor by a negative-valued weight between x_2 and X. In testing X with a transfer excitor B, as shown in Figure 2, this weight acts to cancel a positive correlation between B and the US. Note that the shift from excitation to inhibition is due to inclusion of uncertainty over models; inferring the parameters with the more complex model fixed would result in immediate inhibition. In their experiment, Yin et al. also conducted a retardation test of conditioned inhibition for X.
We follow their procedure and include in D three X-US trials. Our retardation test results are shown in Figure 2 and are in agreement with the findings of Yin et al. A further mystery about conditioned inhibitors, from the perspective of the benchmark theory of Rescorla and Wagner [7], is the nonextinction effect: repeated presentations of a conditioned inhibitor X, alone and unreinforced, do not extinguish its inhibitory properties. An experiment by Williams and Overmier [10] demonstrated that unpaired presentations of a conditioned inhibitor can actually enhance its ability to suppress responding in a transfer test. Our model shows the same effect, as illustrated with a dramatic test in Figure 4. Here we used the previous dataset with only 8 A-X pairings and added a number of unpaired presentations of X. The additional unpaired presentations shift the model from a second-order conditioning regime to a conditioned inhibition regime. The extinction trials suppress posterior density over simple models that exhibit a positive correlation between X and the US, shifting density to more complex models and unmasking the inhibitor.

4 Discussion

We have demonstrated our ideas in the context of a very abstract set of candidate models, ignoring the temporal arrangement of trials and of the events within them. Obviously, both of these issues have important effects, and the present framework can be straightforwardly generalized to account for them, with the addition of temporal dependencies to the latent variables [1] and the removal of the stationarity assumption [4]. An odd but key concept in early models of classical conditioning is the "configural unit," a detector for a conjunction of co-active stimuli. "Configural learning" theories (e.g.
[6]) rely on heuristics for creating such units in response to observations, a rough-and-ready sort of model structure learning. With a stimulus configuration represented through a latent cause, our theory provides a clearer prescription for how to reason about model structure. Our framework can be applied to a reservoir of configural learning experiments, including negative and positive patterning and a host of others. Another body of data on which our work may shed light is acquisition of a conditioned response. Recent theories of acquisition (e.g. [4]) propose that animals respond to a conditioned stimulus (CS) when the difference in the reinforcement rate between the presence and absence of the CS satisfies some test of significance. From the perspective of our model, this test looks like a heuristic for choosing between generative models of stimulus delivery that differ as to whether the CS and US are correlated through a shared hidden cause. To our knowledge, the relationship between second-order conditioning and conditioned inhibition has never been explicitly studied using previous theories. This is in part because the majority of classical conditioning theories do not account for second-order conditioning at all, since they typically consider learning only about CS-US but not CS-CS correlations.

[Figure 3: Sigmoid belief networks with high probability density under the posterior. (a) After a few A-X pairings: this model exhibits second-order conditioning. (b) After many A-X pairings: this model exhibits conditioned inhibition. (c) The average number of latent causes as a function of the number of A-X pairings.]
Models based on temporal difference learning [8] predict second-order conditioning, but only if the two CSes are presented sequentially (not true of the experiment considered here). Second-order conditioning can also be predicted if the A-X pairings cause some sort of representational change so that A's excitatory associations generalize to X. Yin et al. [11] suggest that if this representational learning is fast (as in [6], though that theory would need to be modified to include any second-order effects) and if conditioned inhibition accrues only gradually by error-driven learning [7], then second-order conditioning will dominate initially. The details of such an account seem never to have been worked out, and even if they were, such a mechanistic theory would be considerably less illuminating than our theory as to the normative reasons why the animals should predict as they do.

Acknowledgments

This work was supported by National Science Foundation grants IIS-9978403 and DGE-9987588, and by AFRL contract F30602-01-C-0219, DARPA's MICA program. We thank Peter Dayan and Maneesh Sahani for helpful discussions.

[Figure 4: Effect of adding unpaired presentations of X on the strength of X as an inhibitor. Error bars indicate the 3σ margin in the standard error of the estimate (very small error bars are omitted). (a) Posterior probability of models which predict different values of P(US | X, B). With only 1 unpaired presentation of X, most models predict a high probability of US (second-order conditioning). With 2 or 3 unpaired presentations of X, models which predict a low P(US | X, B) get more posterior weight (conditioned inhibition). (b) A plot contrasting P(US | B, D) and P(US | X, B, D) as a function of the number of unpaired X trials. The reduction in the probability of reinforcement indicates an enhancement of the inhibitory strength of X.]

References

[1] A. C. Courville and D. S. Touretzky. Modeling temporal structure in classical conditioning. In Advances in Neural Information Processing Systems 14, pages 3–10, Cambridge, MA, 2002. MIT Press.
[2] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711–732, 1995.
[3] Y. Iba. Extended ensemble Monte Carlo. International Journal of Modern Physics C, 12(5):623–656, 2001.
[4] S. Kakade and P. Dayan. Acquisition and extinction in autoshaping. Psychological Review, 109:533–544, 2002.
[5] D. J. C. MacKay. Bayesian model comparison and backprop nets. In Advances in Neural Information Processing Systems 4, Cambridge, MA, 1991. MIT Press.
[6] J. M. Pearce. Similarity and discrimination: A selective review and a connectionist model. Psychological Review, 101:587–607, 1994.
[7] R. A. Rescorla and A. R. Wagner. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black and W. F. Prokasy, editors, Classical Conditioning II. Appleton-Century-Crofts, 1972.
[8] R. S. Sutton and A. G. Barto. Time-derivative models of Pavlovian reinforcement. In M. Gabriel and J. Moore, editors, Learning and Computational Neuroscience: Foundations of Adaptive Networks, chapter 12, pages 497–537. MIT Press, 1990.
[9] J. Tenenbaum and T. Griffiths. Structure learning in human causal induction. In Advances in Neural Information Processing Systems 13, pages 59–65, Cambridge, MA, 2001. MIT Press.
[10] D. A. Williams and J. B. Overmier. Some types of conditioned inhibitors carry collateral excitatory associations. Learning and Motivation, 19:345–368, 1988.
[11] H. Yin, R. C. Barnet, and R. R. Miller. Second-order conditioning and Pavlovian conditioned inhibition: Operational similarities and differences. Journal of Experimental Psychology: Animal Behavior Processes, 20(4):419–428, 1994.
2003
Local Phase Coherence and the Perception of Blur

Zhou Wang and Eero P. Simoncelli
Howard Hughes Medical Institute, Center for Neural Science and Courant Institute of Mathematical Sciences, New York University, New York, NY 10003
zhouwang@ieee.org, eero.simoncelli@nyu.edu

Abstract

Humans are able to detect blurring of visual images, but the mechanism by which they do so is not clear. A traditional view is that a blurred image looks "unnatural" because of the reduction in energy (either globally or locally) at high frequencies. In this paper, we propose that the disruption of local phase can provide an alternative explanation for blur perception. We show that precisely localized features such as step edges result in strong local phase coherence structures across scale and space in the complex wavelet transform domain, and blurring causes loss of such phase coherence. We propose a technique for coarse-to-fine phase prediction of wavelet coefficients, and observe that (1) such predictions are highly effective in natural images, (2) phase coherence increases with the strength of image features, and (3) blurring disrupts the phase coherence relationship in images. We thus lay the groundwork for a new theory of perceptual blur estimation, as well as a variety of algorithms for restoration and manipulation of photographic images.

1 Introduction

Blur is one of the most common forms of image distortion. It can arise from a variety of sources, such as atmospheric scatter, lens defocus, optical aberrations of the lens, and spatial and temporal sensor integration. Human observers are bothered by blur, and our visual systems are quite good at reporting whether an image appears blurred (or sharpened) [1,2]. However, the mechanism by which this is accomplished is not well understood. Clearly, detection of blur requires some model of what constitutes an unblurred image.
In recent years, there has been a surge of interest in the modelling of natural images, both for purposes of improving the performance of image processing and computer vision systems, and also for furthering our understanding of biological visual systems. Early statistical models were almost exclusively based on a description of global Fourier power spectra. Specifically, image spectra are found to follow a power law [3–5]. This model leads to an obvious method of detecting and compensating for blur. Specifically, blurring usually reduces the energy of high frequency components, and thus the power spectrum of a blurry image should fall faster than a typical natural image. The standard formulation of the “deblurring” problem, due to Wiener [6], aims to restore those high frequency components to their original amplitude. But this proposal is problematic, since individual images show significant variability in their Fourier amplitudes, both in their shape and in the rate at which they fall [1]. In particular, simply reducing the number of sharp features (e.g., edges) in an image can lead to a steeper falloff in global amplitude spectrum, even though the image will still appear sharp [7]. Nevertheless, the visual system seems to be able to compensate for this when estimating blur [1,2,7]. Over the past two decades, researchers from many communities have converged on a view that images are better represented using bases of multi-scale bandpass oriented filters. These representations, loosely referred to as “wavelets”, are effective at decoupling the high-order statistical features of natural images. In addition, they provide the most basic model for neurons in the primary visual cortex of mammals, which are presumably adapted to efficiently represent the visually relevant features of images. 
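The claim that blurring reduces high-frequency energy, steepening the spectral falloff, is easy to demonstrate numerically. The sketch below is our own illustration (not from the paper): it blurs a random-walk signal, whose power spectrum falls roughly as a power law, with a symmetric low-pass kernel and compares high-frequency energy before and after.

```python
import numpy as np

# Illustration (ours, not the paper's): blurring with a symmetric low-pass
# kernel reduces energy at high frequencies.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(512))   # random walk: ~power-law spectrum
kernel = np.ones(9) / 9.0                      # simple symmetric box blur
blurred = np.convolve(signal, kernel, mode="same")

def high_freq_energy(x, cutoff_frac=0.25):
    """Total spectral power above cutoff_frac of the representable band."""
    power = np.abs(np.fft.rfft(x)) ** 2
    return power[int(len(power) * cutoff_frac):].sum()
```

The text's caveat applies here too: a signal with fewer sharp features would also show less high-frequency energy, which is why spectral falloff alone is an ambiguous cue for blur.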
Many recent statistical image models in the wavelet domain are based on the amplitudes of the coefficients, and the relationship between the amplitudes of coefficients in local neighborhoods or across different scales [e.g. 8]. In both human and computer vision, the amplitudes of complex wavelets have been widely used as a mechanism for localizing/representing features [e.g. 9–13]. It has also been shown that the relative wavelet amplitude as a function of scale can be used to explain a number of subjective experiments on the perception of blur [7]. In this paper, we propose the disruption of local phase as an alternative and effective measure for the detection of blur. This seems counterintuitive, because when an image is blurred through convolution with a symmetric linear filter, the phase information in the (global) Fourier transform domain does not change at all. But we show that this is not true for local phase information. In previous work, Fourier phase has been found to carry important information about image structures and features [14] and higher-order Fourier statistics have been used to examine the phase structure in natural images [15]. It has been pointed out that at the points of isolated even and odd symmetric features such as lines and step edges, the arrival phases of all Fourier harmonics are identical [11,16]. Phase congruency [11,17] provides a quantitative measure for the agreement of such phase alignment pattern. It has also been shown that maximum phase congruency feature detection is equivalent to maximum local energy model [18]. Local phase has been used in a number of machine vision and image processing applications, such as estimation of image motion [19] and disparity [20], description of image textures [21], and recognition of persons using iris patterns [22]. However, the behaviors of local phase at different scales in the vicinity of image features, and the means by which blur affects such behaviors have not been deeply investigated. 
2 Local Phase Coherence of Isolated Features

Wavelet transforms provide a convenient framework for localized representation of signals simultaneously in space and frequency. The wavelets are dilated/contracted and translated versions of a "mother wavelet" w(x). In this paper, we consider symmetric (linear-phase) wavelets whose mother wavelets may be written as a modulation of a low-pass filter:

w(x) = g(x) e^{jω_c x},   (1)

where ω_c is the center frequency of the modulated band-pass filter, and g(x) is a slowly varying and symmetric function. The family of wavelets derived from the mother wavelet is then

w_{s,p}(x) = (1/√s) w((x − p)/s) = (1/√s) g((x − p)/s) e^{jω_c (x − p)/s},   (2)

where s ∈ R⁺ is the scale factor and p ∈ R is the translation factor. Since g(−x) = g(x), the wavelet transform of a given real signal f(x) can be written as

F(s, p) = ∫ f(x) w*_{s,p}(x) dx = [ f(x) ∗ (1/√s) g(x/s) e^{jω_c x/s} ]_{x=p}.   (3)

Now assume that the signal f(x) being analyzed is localized near the position x_0, and rewrite it as a function f_0(x) that satisfies f(x) = f_0(x − x_0). Using the convolution theorem and the shifting and scaling properties of the Fourier transform, we can write

F(s, p) = (1/2π) ∫ F(ω) √s G(sω − ω_c) e^{jωp} dω
        = (1/2π) ∫ F_0(ω) √s G(sω − ω_c) e^{jω(p−x_0)} dω
        = (1/(2π√s)) ∫ F_0(ω/s) G(ω − ω_c) e^{jω(p−x_0)/s} dω,   (4)

where F(ω), F_0(ω) and G(ω) are the Fourier transforms of f(x), f_0(x) and g(x), respectively (all integrals are over the whole real line). We now examine how the phase of F(s, p) evolves across space p and scale s. From Eq. (4), we see that the phase of F(s, p) depends strongly on the nature of F_0(ω). If F_0(ω) is scale-invariant, meaning that

F_0(ω/s) = K(s) F_0(ω),   (5)

where K(s) is a real function of s alone, independent of ω, then from Eqs. (4) and (5) we obtain

F(s, p) = (K(s)/(2π√s)) ∫ F_0(ω) G(ω − ω_c) e^{jω(p−x_0)/s} dω = (K(s)/√s) F(1, x_0 + (p − x_0)/s).   (6)

Since both K(s) and s are real, we can write the phase as

Φ(F(s, p)) = Φ(F(1, x_0 + (p − x_0)/s)).   (7)
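Eqs. (3)-(7) can be checked numerically. The sketch below is our own construction, with an assumed Gaussian window g and center frequency ω_c = 5 (neither taken from the paper): it computes F(s, p) for an ideal step edge by direct integration and verifies that the phase at scale s = 2 matches the phase at scale 1 at the rescaled position x_0 + (p − x_0)/s, as Eq. (7) predicts.

```python
import numpy as np

# Numerical check of the phase-coherence relation of Eq. (7) for a step edge.
# The Gaussian window g and center frequency wc = 5 are our own choices.
def wavelet_coeff(f, x, s, p, wc=5.0, sigma=1.0):
    """F(s,p) = integral of f(x) * conj(w_{s,p}(x)) dx, with a Gaussian g."""
    u = (x - p) / s
    w_conj = np.exp(-u**2 / (2 * sigma**2)) * np.exp(-1j * wc * u) / np.sqrt(s)
    return np.sum(f * w_conj) * (x[1] - x[0])   # Riemann-sum integration

x = np.linspace(-40.0, 40.0, 20001)
x0 = 3.0
step = np.where(x >= x0, 0.5, -0.5)             # f0(x) = u(x) - 1/2, shifted to x0

p = 4.0                                         # probe position near the edge
lhs = wavelet_coeff(step, x, s=2.0, p=p)
rhs = wavelet_coeff(step, x, s=1.0, p=x0 + (p - x0) / 2.0)
phase_diff = np.angle(lhs * np.conj(rhs))       # ~0 when Eq. (7) holds
```

For the step edge, F_0(ω) = K/(jω) gives K(s) = s in Eq. (5), so by Eq. (6) the magnitudes should also differ by the factor K(s)/√s = √2 at s = 2.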
This equation suggests a strong phase coherence relationship across scale and space. An illustration is shown in Fig. 1(a), where it can be seen that equal-phase contours in the (s, p) plane form straight lines defined by

x_0 + (p − x_0)/s = C,   (8)

where C can be any real constant. Further, all these straight lines converge exactly at the location of the feature, x_0. More generally, the phase at any given scale may be computed from the phase at any other scale by simply rescaling the position axis. This phase coherence relationship relies on the scale-invariance property of Eq. (5). Analytically, the only type of continuous-spectrum signal that satisfies Eq. (5) follows a power law:

F_0(ω) = K_0 ω^P.   (9)

In the spatial domain, the functions f_0(x) that satisfy this scale-invariance condition include the step function f_0(x) = K(u(x) − 1/2) (where K is a constant and F_0(ω) = K/(jω)) and its derivatives, such as the delta function f_0(x) = Kδ(x) (where K is a constant and F_0(ω) = K). Notice that both functions are precisely localized in space. Figure 1(b) shows that this precisely convergent phase behavior is disrupted by blurring. Specifically, if we convolve a sharp feature (e.g., a step edge) with a low-pass filter, the resulting signal will no longer satisfy the scale-invariance property of Eq. (5) or the phase coherence relationship of Eq. (7). Thus, a measure of phase coherence can be used to detect blur. Note that the phase congruency relationship [11,17], which expresses the alignment of phase at the location of a feature, corresponds to the center (vertical) contour of Fig. 1, which remains intact after blurring. Thus, phase congruency measures [11,17] provide no information about blur.

[Fig. 1: Local phase coherence of precisely localized (scale-invariant) features, and the disruption of this coherence in the presence of blur.]
[(a) Precisely localized features. (b) Blurred features.]

3 Phase Prediction in Natural Images

In this section, we show that if the local image features are precisely localized (such as the delta and step functions), then in the discrete wavelet transform domain, the phase of nearby fine-scale coefficients can be well predicted from their coarser-scale parent coefficients. We then examine these phase predictions in both sharp and blurred natural images.

3.1 Coarse-to-fine Phase Prediction

From Eq. (3), it is straightforward to prove that for f_0(x) = Kδ(x),

Φ(F(1, p)) = −ω_c (p − x_0) + n_1 π,   (10)

where n_1 is an integer whose value depends on the range of ω_c (p − x_0) and the sign of K g(p − x_0). Using the phase coherence relation of Eq. (7), we have

Φ(F(s, p)) = −ω_c (p − x_0)/s + n_1 π.   (11)

It can also be shown that for a step function f_0(x) = K[u(x) − 1/2], when g(x) is slowly varying and p is located near the feature location x_0,

Φ(F(s, p)) ≈ ω_c (p − x_0)/s − π/2 + n_2 π,   (12)

where n_2 is similarly an integer. The discrete wavelet transform corresponds to a discrete sampling of the continuous wavelet transform F(s, p). A typical sampling grid is illustrated in Fig. 2(a), where between every two adjacent scales the scale factor s doubles and the spatial sampling rate is halved. Now consider three consecutive scales and group the neighboring coefficients {a, b_1, b_2, c_1, c_2, c_3, c_4} as shown in Fig. 2(a). It can then be shown that the phases of the finest-scale coefficients {c_1, c_2, c_3, c_4} can be well predicted from the coarser-scale coefficients {a, b_1, b_2}, provided the local phase satisfies the phase coherence relationship.

[Fig. 2: Discrete wavelet transform sampling grid in the continuous wavelet transform domain. (a) 1-D sampling; (b) 2-D sampling.]
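The linear phase behavior of Eqs. (10)-(11) is easy to verify for an impulse, for which F(s, p) is just the conjugate wavelet evaluated at x_0. The sketch below is our own construction (again with an assumed Gaussian g and ω_c = 5): it fits the phase of F(s, p) as a function of p and checks that the slope has magnitude ω_c/s, without committing to a sign convention.

```python
import numpy as np

# Numerical illustration of Eqs. (10)-(11): for an impulse at x0, the wavelet
# phase is linear in p with slope of magnitude wc/s. Gaussian g and wc = 5
# are our own choices, not the paper's.
wc, sigma, x0 = 5.0, 1.0, 0.0

def impulse_coeff(s, p):
    """F(s,p) for f(x) = delta(x - x0): the conjugate wavelet at x0."""
    u = (x0 - p) / s
    return np.exp(-u**2 / (2 * sigma**2)) * np.exp(-1j * wc * u) / np.sqrt(s)

s = 2.0
ps = np.linspace(-0.5, 0.5, 11)                  # positions near the impulse
phases = np.unwrap(np.angle([impulse_coeff(s, p) for p in ps]))
slope = np.polyfit(ps, phases, 1)[0]             # expect |slope| = wc / s
```

The π-jumps allowed by the n_1 term never occur here because the Gaussian g is strictly positive; a window that changes sign would add them.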
Specifically, the estimated phases Φ̂ for {c_1, c_2, c_3, c_4} can be expressed as

Φ̂([c_1, c_2, c_3, c_4]^T) = Φ((a*)^2 · [b_1^3, b_1^2 b_2, b_1 b_2^2, b_2^3]^T).   (13)

We can develop a similar technique for the two-dimensional case. As shown in Fig. 2(b), the phase prediction from the coarser-scale coefficients {a, b_11, b_12, b_21, b_22} to the group of finest-scale coefficients {c_ij} is as follows:

Φ̂({c_ij}) = Φ((a*)^2 ·
  [ b_11^3        b_11^2 b_12     b_11 b_12^2      b_12^3
    b_11^2 b_21   b_11^2 b_22     b_11 b_12 b_22   b_12^2 b_22
    b_11 b_21^2   b_11 b_21 b_22  b_11 b_22^2      b_12 b_22^2
    b_21^3        b_21^2 b_22     b_21 b_22^2      b_22^3 ]).   (14)

3.2 Image Statistics

We decompose the images using the "steerable pyramid" [23], a multi-scale wavelet decomposition whose basis functions are spatially localized, oriented, and roughly one octave in bandwidth. A 3-scale, 8-orientation pyramid is calculated for each image, resulting in 26 subbands (24 oriented, plus highpass and lowpass residuals). Using Eq. (14), the phase of each coefficient in the 8 oriented finest-scale subbands is predicted from the phases of its coarser-scale parent and grandparent coefficients, as illustrated in Fig. 2(b). We applied this phase prediction method to a dataset of 1000 high-resolution sharp images as well as their blurred versions, and then examined the errors between the predicted and true phases at the fine scale. The summary histograms are shown in Fig. 3. In order to demonstrate how blurring affects the phase prediction accuracy, in all these conditional histograms the magnitude axis corresponds to the coefficient magnitudes of the original image, so that the same column in the three histograms corresponds to the same set of coefficients in spatial location.
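The 1-D prediction of Eq. (13) can be sketched in a few lines: the predicted phases are the phases of (a*)² times cubic monomials in the parents b_1, b_2. The dyadic, left-aligned sampling geometry assumed below is our own reading of Fig. 2(a). For synthetic, perfectly phase-coherent coefficients with phase θ(p − x_0)/s, the prediction is exact.

```python
import numpy as np

def predict_fine_phases(a, b1, b2):
    """Eq. (13): predicted phases of the four finest-scale coefficients
    {c1..c4} from grandparent a and parents b1, b2."""
    monomials = np.array([b1**3, b1**2 * b2, b1 * b2**2, b2**3])
    return np.angle(np.conj(a)**2 * monomials)

# Synthetic perfectly phase-coherent coefficients. Assumed geometry (ours):
# a at scale 4 (position 0), b's at scale 2 (positions 0, 2), c's at scale 1
# (positions 0..3), all left-aligned; phase is theta*(p - x0)/s as for an
# ideal localized feature.
theta, x0 = 5.0, 0.3
phase = lambda s, p: theta * (p - x0) / s
a = np.exp(1j * phase(4, 0))
b1, b2 = np.exp(1j * phase(2, 0)), np.exp(1j * phase(2, 2))
pred = predict_fine_phases(a, b1, b2)
true = np.array([phase(1, p) for p in range(4)])   # compare modulo 2*pi
```

The comparison must be made modulo 2π, since np.angle returns values in (−π, π] while the true fine-scale phases can exceed that range.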
[Fig. 3: Local phase coherence statistics in sharp and blurred images. (a), (b), (c): example natural, blurred, and highly blurred images taken from the test image database of 1000 (512×512, 8 bits/pixel, gray-scale) natural images with a wide variety of contents (humans, animals, plants, landscapes, man-made objects, etc.). Images are cropped to 200×200 for visibility. (d), (e), (f): conditional histograms of phase prediction error as a function of the original coefficient magnitude for the three types of images. Each column of the histograms is scaled individually, such that the largest value of each column is mapped to white. (g) Phase prediction error histogram of significant coefficients (magnitude greater than 20).]

From Fig. 3, we observe that phase prediction is highly effective in natural images, and that the phase prediction error decreases as the coefficient magnitude increases: larger coefficients imply stronger local phase coherence. Furthermore, as expected, the blurring process clearly reduces the phase prediction accuracy. We thus hypothesize that it is perhaps this disruption of local phase coherence that the visual system senses as being "unnatural".

4 Discussion

This paper proposes a new view of image blur based on the observation that blur induces distortion of local phase, in addition to the widely noted loss of high-frequency energy. We have shown that isolated, precisely localized features create strong local phase coherence, and that blurring disrupts this phase coherence. We have also developed a particular measure of phase coherence based on coarse-to-fine phase prediction, and shown that this measure can serve as an indication of blur in natural images. In the future, it remains to be seen whether visual systems detect blur by comparing the relative amplitude of localized filters at different scales [7], or alternatively, by comparing the relative spread of local phase across scale and space.
The coarse-to-fine phase prediction method was developed in order to facilitate examination of phase coherence in real images, but the computations involved bear some resemblance to the behaviors of neurons in the primary visual cortex (area V1) of mammals. First, phase information is measured using pairs of localized bandpass filters in quadrature, as are widely used to describe the receptive field properties of V1 neurons [24]. Second, the responses of these filters must be exponentiated for comparison across different scales; many recent models of V1 response incorporate such exponentiation [25]. Finally, responses are seen to be normalized by the magnitudes of neighboring filter responses. Similar "divisive normalization" mechanisms have been successfully used to account for many nonlinear behaviors of both visual and auditory neurons [26, 27]. Thus, it seems that mammalian visual systems are equipped with the basic computational building blocks needed to process local phase coherence. The importance of local phase coherence in blur perception seems intuitively sensible from the perspective of visual function. In particular, the accurate localization of image features is critical to a variety of visual capabilities, including various forms of hyperacuity, stereopsis, and motion estimation. Since the localization of image features depends critically on phase coherence, and blurring disrupts phase coherence, blur would seem to be a particularly disturbing artifact. This perhaps explains the subjective feeling of frustration when confronted with a blurred image that cannot be corrected by visual accommodation. For purposes of machine vision and image processing applications, we view the results of this paper as an important step towards the incorporation of phase properties into statistical models for images.
We believe this is likely to lead to substantial improvements in a variety of applications, such as deblurring or sharpening by phase restoration, denoising by phase restoration, image compression, image quality assessment, and a variety of more creative photographic applications, such as image blending or compositing, reduction of dynamic range, or post-exposure adjustments of depth-of-field. Furthermore, if we would like to detect the position of an isolated, precisely localized feature from phase samples measured above a certain allowable scale, then infinite precision can be achieved using the phase convergence property illustrated in Fig. 1(a), provided the phase measurement is perfect. In other words, the detection precision is limited by the accuracy of phase measurement, rather than by the highest spatial sampling density. This provides a workable mechanism for "seeing beyond the Nyquist limit" [28], which could explain a number of visual hyperacuity phenomena [29,30], and may be used for the design of super-precision signal detection devices.

References

[1] Y. Tadmor and D. J. Tolhurst, "Discrimination of changes in the second-order statistics of natural and synthetic images," Vis Res, vol. 34, no. 4, pp. 541–554, 1994.
[2] M. A. Webster, M. A. Georgeson, and S. M. Webster, "Neural adjustments to image blur," Nature Neuroscience, vol. 5, no. 9, pp. 839–840, 2002.
[3] E. R. Kretzmer, "The statistics of television signals," Bell System Tech. J., vol. 31, pp. 751–763, 1952.
[4] D. J. Field, "Relations between the statistics of natural images and the response properties of cortical cells," J. Opt. Soc. America, vol. 4, pp. 2379–2394, 1987.
[5] D. L. Ruderman, "The statistics of natural images," Network: Computation in Neural Systems, vol. 5, pp. 517–548, 1996.
[6] N. Wiener, Nonlinear Problems in Random Theory. New York: John Wiley and Sons, 1958.
[7] D. J. Field and N. Brady, "Visual sensitivity, blur and the sources of variability in the amplitude spectra of natural scenes," Vis Res, vol. 37, no. 23, pp. 3367–3383, 1997.
[8] E. P. Simoncelli, "Statistical models for images: Compression, restoration and synthesis," in Proc 31st Asilomar Conf on Signals, Systems and Computers, (Pacific Grove, CA), pp. 673–678, Nov 1997.
[9] E. H. Adelson and J. R. Bergen, "Spatiotemporal energy models for the perception of motion," J Optical Society, vol. 2, pp. 284–299, Feb 1985.
[10] J. R. Bergen and E. H. Adelson, "Early vision and texture perception," Nature, vol. 333, pp. 363–364, 1988.
[11] M. C. Morrone and R. A. Owens, "Feature detection from local energy," Pattern Recognition Letters, vol. 6, pp. 303–313, 1987.
[12] N. Graham, Visual Pattern Analyzers. New York: Oxford University Press, 1989.
[13] P. Perona and J. Malik, "Detecting and localizing edges composed of steps, peaks and roofs," in Proc. 3rd Int'l Conf Comp Vision, (Osaka), pp. 52–57, 1990.
[14] A. V. Oppenheim and J. S. Lim, "The importance of phase in signals," Proc. of the IEEE, vol. 69, pp. 529–541, 1981.
[15] M. G. A. Thomson, "Visual coding and the phase structure of natural scenes," Network: Comput. Neural Syst., no. 10, pp. 123–132, 1999.
[16] M. C. Morrone and D. C. Burr, "Feature detection in human vision: A phase-dependent energy model," Proc. R. Soc. Lond. B, vol. 235, pp. 221–245, 1988.
[17] P. Kovesi, "Phase congruency: A low-level image invariant," Psych. Research, vol. 64, pp. 136–148, 2000.
[18] S. Venkatesh and R. A. Owens, "An energy feature detection scheme," Int'l Conf on Image Processing, pp. 553–557, 1989.
[19] D. J. Fleet and A. D. Jepson, "Computation of component image velocity from local phase information," Int'l J Computer Vision, no. 5, pp. 77–104, 1990.
[20] D. J. Fleet, "Phase-based disparity measurement," CVGIP: Image Understanding, no. 53, pp. 198–210, 1991.
[21] J. Portilla and E. P. Simoncelli, "A parametric texture model based on joint statistics of complex wavelet coefficients," Int'l J Computer Vision, vol. 40, pp. 49–71, 2000.
[22] J. Daugman, "Statistical richness of visual phase information: update on recognizing persons by iris patterns," Int'l J Computer Vision, no. 45, pp. 25–38, 2001.
[23] E. P. Simoncelli, W. T. Freeman, E. H. Adelson, and D. J. Heeger, "Shiftable multiscale transforms," IEEE Trans Information Theory, vol. 38, pp. 587–607, Mar 1992.
[24] D. A. Pollen and S. F. Ronner, "Phase relationships between adjacent simple cells in the cat," Science, no. 212, pp. 1409–1411, 1981.
[25] D. J. Heeger, "Half-squaring in responses of cat striate cells," Visual Neuroscience, no. 9, pp. 427–443, 1992.
[26] D. J. Heeger, "Normalization of cell responses in cat striate cortex," Visual Neuroscience, no. 9, pp. 181–197, 1992.
[27] O. Schwartz and E. P. Simoncelli, "Natural signal statistics and sensory gain control," Nature Neuroscience, no. 4, pp. 819–825, 2001.
[28] D. L. Ruderman and W. Bialek, "Seeing beyond the Nyquist limit," Neural Comp., no. 4, pp. 682–690, 1992.
[29] G. Westheimer and S. P. McKee, "Spatial configurations for visual hyperacuity," Vision Res., no. 17, pp. 941–947, 1977.
[30] W. S. Geisler, "Physical limits of acuity and hyperacuity," J. Opt. Soc. America, no. 1, pp. 775–782, 1984.
Efficient Exact k-NN and Nonparametric Classification in High Dimensions

Ting Liu, Computer Science Dept., Carnegie Mellon University, Pittsburgh, PA 15213, tingliu@cs.cmu.edu
Andrew W. Moore, Computer Science Dept., Carnegie Mellon University, Pittsburgh, PA 15213, awm@cs.cmu.edu
Alexander Gray, Computer Science Dept., Carnegie Mellon University, Pittsburgh, PA 15213, agray@cs.cmu.edu

Abstract

This paper is about non-approximate acceleration of high-dimensional nonparametric operations such as k nearest neighbor classifiers and the prediction phase of Support Vector Machine classifiers. We attempt to exploit the fact that even if we want exact answers to nonparametric queries, we usually do not need to explicitly find the datapoints close to the query, but merely need to ask questions about the properties of that set of datapoints. This offers a small amount of computational leeway, and we investigate how much that leeway can be exploited. For clarity, this paper concentrates on pure k-NN classification and the prediction phase of SVMs. We introduce new ball tree algorithms that on real-world datasets give accelerations from 2-fold up to 100-fold compared against highly optimized traditional ball-tree-based k-NN. These results include datasets with up to 10^6 dimensions and 10^5 records, and show non-trivial speedups while giving exact answers.

1 Introduction

Nonparametric models have become increasingly popular in the statistics and probabilistic AI communities. They remain hampered by their computational complexity. Spatial methods such as kd-trees [6, 17], R-trees [9], metric trees [18, 4] and ball trees [15] have been proposed and tested as a way of alleviating the computational cost of such statistics without resorting to approximate answers. They have been used in many different ways, with a variety of tree search algorithms and a variety of "cached sufficient statistics" decorating the internal leaves, for example in [14, 5, 16, 8].
The main concern with such accelerations is the extent to which they can survive high-dimensional data. Indeed, there are some datasets in this paper for which a highly optimized conventional k nearest neighbor search based on ball trees is on average more expensive than the naive linear search algorithm. But extracting the k nearest neighbors is often not needed, even for a k nearest neighbor classifier. This paper is about the consequences of the fact that the following three questions do not have the same precise meaning: (a) "What are the k nearest neighbors of t?" (b) "How many of the k nearest neighbors of t are from the positive class?" and (c) "Are at least q of the k nearest neighbors from the positive class?" The computational geometry community has focused on question (a), but uses of proximity queries in statistics far more frequently require (b)- and (c)-type computations. Further, in addition to traditional k-NN, the same insight applies to many other statistical computations such as nonparametric density estimation, locally weighted regression, mixture models, k-means and the prediction phase of SVM classification. 2 Ball trees A ball tree is a binary tree in which each node represents a set of points, called Points(Node). Given a dataset, the root node of a ball tree represents the full set of points in the dataset. A node can be either a leaf node or a non-leaf node. A leaf node explicitly contains a list of the points represented by the node. A non-leaf node does not explicitly contain a set of points. It has two child nodes, Node.child1 and Node.child2, where Points(Node.child1) ∩ Points(Node.child2) = ∅ and Points(Node.child1) ∪ Points(Node.child2) = Points(Node). Points are organized spatially. Each node has a distinguished point called a pivot. Depending on the implementation, the pivot may be one of the datapoints, or it may be the centroid of Points(Node). Each node records the maximum distance of the points it owns to its pivot.
Call this the radius of the node: Node.Radius = max_{x ∈ Points(Node)} |Node.Pivot − x|. Balls lower down the tree cover smaller volumes. This is achieved by insisting, at tree construction time, that x ∈ Points(Node.child1) ⇒ |x − Node.child1.Pivot| ≤ |x − Node.child2.Pivot| and x ∈ Points(Node.child2) ⇒ |x − Node.child2.Pivot| ≤ |x − Node.child1.Pivot|. Provided our distance function obeys the triangle inequality, this gives the ability to bound the distance from a target point t to any point in any ball tree node. If x ∈ Points(Node) then we can be sure that: |x − t| ≥ |t − Node.Pivot| − Node.Radius (1) |x − t| ≤ |t − Node.Pivot| + Node.Radius (2) Ball trees are constructed top-down. There are several ways to construct them, and practical algorithms trade off the cost of construction (it would be useless to be O(R²) for a dataset with R points, for example) against the tightness of the radius of the balls. [13] describes one fast way of constructing a ball tree appropriate for computational statistics. If a ball tree is balanced, then the construction time is O(CR log R), where C is the cost of a point-point distance computation (which is O(m) if there are m dense attributes, and O(fm) if the records are sparse with only a fraction f of attributes taking non-zero values). 2.1 KNS1: Conventional k nearest neighbor search with ball trees In this paper, we call conventional ball-tree-based search [18] KNS1. Let a pointset PS be a set of datapoints. We begin with the following definition: Say that PS consists of the k-NN of t in pointset V if and only if ((|V| ≥ k) ∧ (PS are the k-NN of t in V)) ∨ ((|V| < k) ∧ (PS = V)) (3) We now define a recursive procedure called BallKNN with the following inputs and output: PSout = BallKNN(PSin, Node). Let V = set of points searched so far, on entry. Assume PSin consists of the k-NN of t in V. This function efficiently ensures that, on exit, PSout consists of the k-NN of t in V ∪ Points(Node).
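Bounds (1) and (2) follow directly from the triangle inequality. As a quick numerical sanity check, the following short sketch (our own illustration, not the paper's code; the Gaussian point set and the centroid pivot are arbitrary choices) verifies them:

```python
import random
import math

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

random.seed(0)
points = [[random.gauss(0, 1) for _ in range(5)] for _ in range(200)]

# Pivot as the centroid of the node's points; radius per the definition above.
pivot = [sum(c) / len(points) for c in zip(*points)]
radius = max(dist(p, pivot) for p in points)

t = [random.gauss(0, 3) for _ in range(5)]  # an arbitrary target point
for x in points:
    d = dist(x, t)
    # Bounds (1) and (2): |t - Pivot| - Radius <= |x - t| <= |t - Pivot| + Radius
    assert dist(t, pivot) - radius <= d <= dist(t, pivot) + radius
```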
Let Dsofar = ∞ if |PSin| < k, and Dsofar = max_{x ∈ PSin} |x − t| if |PSin| = k. (4) Dsofar is the minimum distance within which points would become interesting to us. Let D^Node_minp = max(|t − Node.Pivot| − Node.Radius, D^{Node.parent}_minp) if Node ≠ Root, and D^Node_minp = max(|t − Node.Pivot| − Node.Radius, 0) if Node = Root. (5) D^Node_minp is the minimum possible distance from any point in Node to t.

Procedure BallKNN(PSin, Node)
begin
  if (D^Node_minp ≥ Dsofar) then exit, returning PSin unchanged.
  else if (Node is a leaf)
    PSout = PSin
    for all x ∈ Points(Node):
      if (|x − t| < Dsofar) then
        add x to PSout
        if (|PSout| = k + 1) then remove the furthest neighbor from PSout; update Dsofar
  else if (Node is a non-leaf)
    node1 = child of Node closest to t
    node2 = child of Node furthest from t
    PStemp = BallKNN(PSin, node1)
    PSout = BallKNN(PStemp, node2)
end

A call of BallKNN({}, Root) returns the k nearest neighbors of t in the ball tree. 2.2 KNS2: Faster k-NN classification for skewed-class data In several binary classification domains, one class is much more frequent than the other. For example, in High Throughput Screening datasets [19] it is far more common for the result of an experiment to be negative than positive. In fraud detection or intrusion detection, a non-attack is far more common than an attack. The new algorithm introduced in this section, KNS2, is designed to accelerate k-NN-based classification beyond the speedups already available by using KNS1 (conventional ball-tree-based k-NN). KNS2 attacks the problem by building two ball trees: Rootpos is the root of a (small) ball tree built from all the positive points in the dataset, and Rootneg is the root of a (large) ball tree built from all negative points. Then, when it is time to classify a new target point t, we compute q, the number of k nearest neighbors of t that are in the positive class, in the following fashion. • Step 1 — "Find positive": Find the k nearest positive-class neighbors of t (and their distances to t) using conventional ball tree search.
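The BallKNN recursion can be rendered as a compact runnable sketch. This is our own minimal reimplementation for illustration (the node-splitting rule and leaf size are arbitrary choices, not the construction of [13]); it is verified against brute-force search:

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

class Node:
    """A ball tree node: pivot, radius, and either points (leaf) or two children."""
    def __init__(self, points, leaf_size=5):
        self.pivot = [sum(c) / len(points) for c in zip(*points)]
        self.radius = max(dist(p, self.pivot) for p in points)
        if len(points) <= leaf_size:
            self.points, self.children = points, None
        else:
            # Simple split: pick two far-apart seeds, send each point to the
            # nearer seed (this enforces the construction-time property above).
            a = max(points, key=lambda p: dist(p, points[0]))
            b = max(points, key=lambda p: dist(p, a))
            left = [p for p in points if dist(p, a) <= dist(p, b)]
            right = [p for p in points if dist(p, a) > dist(p, b)]
            self.points = None
            self.children = (Node(left, leaf_size), Node(right, leaf_size))

def ball_knn(ps, node, t, k, d_minp=0.0):
    d_minp = max(dist(t, node.pivot) - node.radius, d_minp)  # Equation (5)
    d_sofar = dist(ps[-1], t) if len(ps) == k else math.inf   # Equation (4)
    if d_minp >= d_sofar:
        return ps                     # prune: no point in this ball can help
    if node.children is None:
        for x in node.points:
            if dist(x, t) < (dist(ps[-1], t) if len(ps) == k else math.inf):
                ps = sorted(ps + [x], key=lambda p: dist(p, t))[:k]
        return ps
    n1, n2 = sorted(node.children, key=lambda c: dist(t, c.pivot))
    return ball_knn(ball_knn(ps, n1, t, k, d_minp), n2, t, k, d_minp)

random.seed(1)
data = [[random.random() for _ in range(3)] for _ in range(300)]
t = [0.5, 0.5, 0.5]
knn = ball_knn([], Node(data), t, k=9)
brute = sorted(data, key=lambda p: dist(p, t))[:9]
assert knn == brute
```

Note how the pruning test uses only the pivot-and-radius bound of Equation (1), so the answer stays exact while whole balls are skipped.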
• Step 2 — "Insert negative": Do sufficient search of the negative tree to prove that the number of positive datapoints among the k nearest neighbors is q, for some value of q. Step 2 is achieved using a new recursive search called NegCount. In order to describe NegCount we need the following three definitions. • The Dists array. Dists is an array of elements Dists_1 ... Dists_k consisting of the distances to the k nearest positive neighbors of t, sorted in increasing order of distance. We will also write Dists_0 = 0 and Dists_{k+1} = ∞. • Pointsets. Define pointset V as the set of points in the negative balls visited so far. • The Counts array (n, C). Say that (n, C) summarize interesting negative points for pointset V if and only if 1. ∀i ∈ [0, n], C_i = |V ∩ {x : Dists_i ≤ |x − t| < Dists_{i+1}}| (6) 2. Σ_{i=0}^{n} C_i ≥ k and Σ_{i=0}^{n−1} C_i < k. This simply declares that the length n of the C array is as short as possible while accounting for the k members of V that are nearest to t. Step 2 of KNS2 is implemented by the recursive function (n_out, C_out) = NegCount(n_in, C_in, Node, Dists). Assume on entry that (n_in, C_in) summarize interesting negative points for pointset V, where V is the set of points visited so far during the search. This algorithm efficiently ensures that, on exit, (n_out, C_out) summarize interesting negative points for V ∪ Points(Node).

Procedure NegCount(n_in, C_in, Node, Dists)
begin
  n_out := n_in; C_out := C_in
  Let T = Σ_{i=0}^{n_in − 1} C_in_i (T is the total number of negative points closer than the n_in-th positive point)
  if (D^Node_minp ≥ Dists_{n_in}) then exit and return (n_out, C_out)
  else if (Node is a leaf)
    for all x ∈ Points(Node):
      use binary search to find j ∈ [0, n_out] such that Dists_j ≤ |x − t| < Dists_{j+1}
      C_out_j := C_out_j + 1; T := T + 1
      if T exceeds k, decrement n_out until T = Σ_{i=0}^{n_out − 1} C_out_i < k.
      Dists_{n_out + 1} := ∞
      if (n_out = 0) exit and return (0, C_out)
  else if (Node is a non-leaf)
    node1 := child of Node closest to t
    node2 := child of Node furthest from t
    (n_temp, C_temp) := NegCount(n_in, C_in, node1, Dists)
    if (n_temp = 0) exit and return (0, C_out)
    (n_out, C_out) := NegCount(n_temp, C_temp, node2, Dists)
end

We can stop the procedure when n_out becomes 0 (which means all the k nearest neighbors of t are in the negative class) or when we run out of nodes. The top-level call is NegCount(k, C0, NegTree.Root, Dists), where C0 is an array of zeroes and the Dists array (see Equation 6) is obtained by applying KNS1 to the (small) positive ball tree. 2.3 KNS3: Are at least q of the k nearest neighbors positive? Unfortunately, space constraints prevent us from describing the details of KNS3. KNS3 removes KNS2's constraint of an assumed skewedness in the class distribution, while introducing a new constraint: we answer the binary question "are at least q of the k nearest neighbors positive?" (where the questioner must supply q). This is often the most statistically relevant question, for example during classification with known false-positive and false-negative costs. KNS3 will be described fully in a journal-article-length version of this paper.¹ 2.4 SVP1: Faster Radial Basis SVM Prediction After an SVM [3] has been trained, we hit the prediction phase. Given a batch of query points q_1, q_2, ..., q_R, we wish to classify each q_j. Furthermore, in state-of-the-art training algorithms such as SMO, training time is dominated by SVM evaluation [12]. q_j should be classified according to this rule: ASUM(q_j) = Σ_{i ∈ posvecs} α_i K(|q_j − x_i|), BSUM(q_j) = Σ_{i ∈ negvecs} β_i K(|q_j − x_i|) (7) Class(q_j) = 1 if ASUM(q_j) − BSUM(q_j) ≥ −b, and Class(q_j) = 0 if ASUM(q_j) − BSUM(q_j) < −b, where the positive support vectors posvecs, the negative support vectors negvecs, the weights {α_i}, {β_i} and the constant term b are all obtained from SVM training. We place the queries (not the support vectors) into a ball tree. ¹ Available from www.autonlab.org.
We can then apply the same kinds of tricks as in KNS2 and KNS3, in which we do not need to find the explicit values of the ASUM and BSUM terms, but merely find balls in the tree for which we can prove that all query points satisfy one of the above inequalities. To classify all the points in a node called Node we do the following: 1. Compute values (ASUMLO, ASUMHI) such that we can be sure that ∀q_j ∈ Node: ASUMLO ≤ ASUM(q_j) ≤ ASUMHI (8) without iterating over the queries in Node. This is achieved simply; for example, if q_j ∈ Node we know ASUM(q_j) = Σ_{i ∈ posvecs} α_i K(|q_j − x_i|) ≥ Σ_{i ∈ posvecs} α_i K(|Node.Pivot − x_i| + Node.Radius) = ASUMLO. Similarly, ASUM(q_j) = Σ_{i ∈ posvecs} α_i K(|q_j − x_i|) ≤ Σ_{i ∈ posvecs} α_i K(max(|Node.Pivot − x_i| − Node.Radius, 0)) = ASUMHI, under the assumption that the kernel function is a decreasing function of distance. This is true, for example, for Gaussian Radial Basis function kernels. 2. Similarly compute values (BSUMLO, BSUMHI). 3. If ASUMLO − BSUMHI ≥ −b, we have proved that all queries in Node should be classified positively, and we can terminate this recursive call. 4. If ASUMHI − BSUMLO < −b, we have proved that all queries in Node should be classified negatively, and we can terminate this recursive call. 5. Else we recurse and apply the same procedure to the two children of Node, unless Node is a leaf node, in which case we must explicitly iterate over its members. 3 Experimental Results Table 1 is a summary of the datasets in the empirical analysis. Life Sciences: These were proprietary datasets (ds1 and ds2) similar to the publicly available Open Compound Database provided by the National Cancer Institute (NCI Open Compound Database, 2000). The two datasets are sparse. We also present results on datasets derived from ds1, denoted ds1.10pca, ds1.100pca and ds2.100anchor, obtained by linear projection using principal component analysis (PCA).
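The ASUMLO/ASUMHI bounds of step 1 can be checked numerically. The sketch below (our own illustration; the Gaussian kernel width, weights and geometry are arbitrary assumptions) confirms that every query inside a ball respects bounds computed from the pivot and radius alone:

```python
import math
import random

def K(d, sigma=1.0):
    # Gaussian RBF kernel: a decreasing function of distance, as required.
    return math.exp(-d * d / (2 * sigma ** 2))

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

random.seed(2)
posvecs = [[random.gauss(0, 1) for _ in range(4)] for _ in range(30)]
alphas = [random.random() for _ in posvecs]

# A ball of query points with a known pivot and radius.
pivot = [2.0, 0.0, 0.0, 0.0]
queries = [[p + random.uniform(-0.3, 0.3) for p in pivot] for _ in range(50)]
radius = max(dist(q, pivot) for q in queries)

# ASUMLO and ASUMHI per step 1, using only the pivot and radius.
asum_lo = sum(a * K(dist(pivot, x) + radius) for a, x in zip(alphas, posvecs))
asum_hi = sum(a * K(max(dist(pivot, x) - radius, 0.0))
              for a, x in zip(alphas, posvecs))

for q in queries:
    asum = sum(a * K(dist(q, x)) for a, x in zip(alphas, posvecs))
    assert asum_lo <= asum <= asum_hi   # the bounds of Equation (8)
```

When ASUMLO − BSUMHI already clears −b (or ASUMHI − BSUMLO falls below it), the whole ball is classified at once, which is exactly the pruning SVP1 exploits.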
Link Detection: The first, Citeseer, is derived from the Citeseer web site (Citeseer, 2002) and lists the names of collaborators on published materials. The goal is to predict whether J Lee (the most common name) was a collaborator for each work, based on who else is listed for that work. We use J Lee.100pca to represent the linear projection of the data to 100 dimensions using PCA. The second link detection dataset, denoted imdb, is derived from the Internet Movie Database (IMDB, 2002) using a similar approach, but the goal is to predict the participation of Mel Blanc (again the most common participant). UCI/KDD data: We use three large datasets from the KDD/UCI repository [2]. The datasets can be identified from their names. They were converted to binary classification problems. Each categorical input attribute was converted into n binary attributes by a 1-of-n encoding (where n is the attribute's arity). The post-processed versions of these datasets are at http://www.cs.cmu.edu/~awm/kns 1. Letter originally had 26 classes: A-Z. We performed binary classification using the letter A as the positive class and "Not A" as the negative class. 2. Movie is a dataset from [11], the TREC-2001 Video Track shot boundary task organized by NIST: 4 hours of video, or 13 MPEG-1 video files at slightly over 2 GB of data. 3. Ipums (from ipums.la.97). We predict farm status, which is binary. 4. Kdd99(10%) has a binary prediction: Normal vs. Attack.

Table 1: Datasets
Dataset        Num. records  Num. dimensions  Num. pos.
ds1            26733         6348             804
ds1.10pca      26733         10               804
ds1.100pca     26733         100              804
ds2            88358         1100000          211
ds2.100anchor  88358         100              211
J Lee.100pca   181395        100              299
Blanc Mel      186414        10               824
Letter         20000         16               790
Movie          38943         62               7620
Ipums          70187         60               119
Kdd99(10%)     494021        176              97278

For each dataset, we tested k = 9 and k = 101. For KNS3, we used q = ⌈k/2⌉ when k = 9, and q = ⌈pk/(n + p)⌉ when k = 101, where p = Num. positive in the dataset and n = Num. negative in the dataset.
(With q = ⌈k/2⌉, this corresponds to the usual majority rule: a datapoint is classified as positive iff the majority of its k nearest neighbors are positive.) Each experiment performed 10-fold cross-validation. Thus, each experiment required R k-NN classification queries (where R is the number of records in the dataset), and each query involved finding the k-NN among 0.9R records. A naive implementation with no ball trees would thus require 0.9R² distance computations. These algorithms are all exact: no approximations were used in the classifications. Table 2 shows the computational cost of naive k-NN, both in terms of the number of distance computations and the wall-clock time on an unloaded 2 GHz Pentium. We then examine the speedups of KNS1 (traditional use of ball trees) and our two new ball-tree methods (KNS2 and KNS3). It is notable that for some high-dimensional datasets, KNS1 does not produce an acceleration over naive search. KNS2 and KNS3 do, however, and in some cases they are hundreds of times faster than KNS1. The ds2 result is particularly interesting because it involves data in over a million dimensions. The first thing to notice is that conventional ball trees (KNS1) were slightly worse than the naive O(R²) algorithm. In only one case was KNS2 inferior to naive search, and KNS3 was always superior. On some datasets KNS2 and KNS3 gave dramatic speedups. Table 3 gives results for SVP1, the ball-tree-based accelerator for SVM prediction.² In general SVP1 appears to be 2-4 times faster than SVMlight [12], with two far more dramatic speedups in the case of two classification tasks where SVP1 quickly realizes that a large node near the top of its query tree can be pruned as negative. As with previous results, SVP1 is exact, and all predictions agree with SVMlight. All these experiments used Radial Basis kernels, with kernel width tuned for optimal test-set performance. ² Because training SVMs is so expensive, some of the results below used reduced training sets.
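As a cross-check of the naive-cost formula, 0.9R² reproduces the naive "dists" column of Table 2 for several datasets (a rough check against the table's rounded values; the three rows chosen here are just examples):

```python
# Naive 10-fold CV cost: R queries, each scanning the 0.9R training records.
for name, R, reported in [("Letter", 20000, 3.6e8),
                          ("ds1", 26733, 6.4e8),
                          ("Ipums", 70187, 4.4e9)]:
    naive = 0.9 * R * R
    # Agrees with the table to within its two-significant-figure precision.
    assert abs(naive - reported) / reported < 0.02
```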
Table 2: Number of distance computations and wall-clock time for naive k-NN classification (columns 2-3). Speedups of the conventional use of ball trees (KNS1) and of the new methods KNS2 and KNS3, in terms of both number of distances and time, in the remaining columns. Naive costs are independent of k.

Dataset (k)             NAIVE dists  NAIVE time(s)  KNS1 speedup (dists/time)  KNS2 speedup (dists/time)  KNS3 speedup (dists/time)
ds1 (k=9)               6.4×10^8     4830           1.6 / 1.0                  4.7 / 3.1                  12.8 / 5.8
ds1 (k=101)                                         1.0 / 0.7                  1.6 / 1.1                  10 / 4.2
ds1.10pca (k=9)         6.4×10^8     420            11.8 / 11.0                33.6 / 21.4                71 / 20
ds1.10pca (k=101)                                   4.6 / 3.4                  6.5 / 4.0                  40 / 6.1
ds1.100pca (k=9)        6.4×10^8     2190           1.7 / 1.8                  7.6 / 7.4                  23.7 / 29.6
ds1.100pca (k=101)                                  0.97 / 1.0                 1.6 / 1.6                  16.4 / 6.8
ds2 (k=9)               8.5×10^9     105500         0.64 / 0.24                14.0 / 2.8                 25.6 / 3.0
ds2 (k=101)                                         0.61 / 0.24                2.4 / 0.83                 28.7 / 3.3
ds2.100anchor (k=9)     7.0×10^9     24210          15.8 / 14.3                185.3 / 144                580 / 311
ds2.100anchor (k=101)                               10.9 / 14.3                23.0 / 19.4                612 / 248
J Lee.100pca (k=9)      3.6×10^10    142000         2.6 / 2.4                  28.4 / 27.2                15.6 / 12.6
J Lee.100pca (k=101)                                2.2 / 1.9                  12.6 / 11.6                37.4 / 27.2
Blanc Mel (k=9)         3.8×10^10    44300          3.0 / 3.0                  47.5 / 60.8                51.9 / 60.7
Blanc Mel (k=101)                                   2.9 / 3.1                  7.1 / 33                   203 / 134.0
Letter (k=9)            3.6×10^8     290            8.5 / 7.1                  42.9 / 26.4                94.2 / 25.5
Letter (k=101)                                      3.5 / 2.6                  9.0 / 5.7                  45.9 / 9.4
Movie (k=9)             1.4×10^9     3100           16.1 / 13.8                29.8 / 24.8                50.5 / 22.4
Movie (k=101)                                       9.1 / 7.7                  10.5 / 8.1                 33.3 / 11.6
Ipums (k=9)             4.4×10^9     9520           195 / 136                  665 / 501                  1003 / 515
Ipums (k=101)                                       69.1 / 50.4                144.6 / 121                5264 / 544
Kddcup99(10%) (k=9)     2.7×10^11    1670000        4.2 / 4.2                  574 / 702                  4 / 4.1
Kddcup99(10%) (k=101)                               4.2 / 4.2                  187.7 / 226.2              3.9 / 3.9

Table 3: Comparison between SVMlight and SVP1. We show the total number of distance computations made during the prediction phase for each method, and the total wall-clock time.
Dataset        SVMlight dists  SVP1 dists  SVMlight secs  SVP1 secs  speedup
ds1            6.4×10^7        1.8×10^7    394            171        2.3
ds1.10pca      6.4×10^7        1.8×10^7    60             23         2.6
ds1.100pca     6.4×10^7        2.3×10^7    259            92         2.8
ds2.100pca     7.0×10^8        1.4×10^8    2775           762        3.6
J Lee.100pca   6.4×10^6        2×10^6      31             7          4.4
Blanc Mel      1.2×10^8        3.6×10^7    61             26         2.3
Letter         2.6×10^7        1×10^7      21             11         1.9
Ipums          1.9×10^8        7.7×10^4    494            1          494
Movie          1.4×10^8        4.4×10^7    371            136        2.7
Kddcup99(10%)  6.3×10^6        2.8×10^5    69             1          69

4 Comments and related work Applicability of other proximity-query work: For the problem of "find the k nearest datapoints" (as opposed to our question of "perform k-NN or kernel classification") in high dimensions, the frequent failure of traditional ball trees to beat naive search has led to some innovative alternatives based on random projections, hashing discretized cubes, and acceptance of approximate answers. For example, [7] gives a hashing method that was demonstrated to provide speedups over a ball-tree-based approach in 64 dimensions by a factor of 2-5, depending on how much error in the approximate answer was permitted. Another approximate k-NN idea is in [1], one of the first k-NN approaches to use a priority queue of nodes, in this case achieving a 3-fold speedup with an approximation to the true k-NN. However, these approaches are based on the notion that any point falling within a factor of (1+ε) times the true nearest neighbor distance is an acceptable substitute for the true nearest neighbor. Noting in particular that distances in high-dimensional spaces tend to occupy a decreasing range of continuous values [10], it remains an open question whether schemes based upon the absolute values of the distances rather than their ranks are relevant to the classification task. Our approach, because it need not find the k-NN to answer the relevant statistical question, finds an answer without approximation.
The fact that our methods are easily modified to allow (1 + ε) approximation in the manner of [1] suggests an obvious avenue for future research. References [1] S. Arya, D. Mount, N. Netanyahu, R. Silverman, and A. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. Journal of the ACM, 45(6):891–923, 1998. [2] S. D. Bay. UCI KDD Archive [http://kdd.ics.uci.edu]. Irvine, CA: University of California, Dept. of Information and Computer Science, 1999. [3] C. Burges. A tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):955–974, 1998. [4] P. Ciaccia, M. Patella, and P. Zezula. M-tree: An efficient access method for similarity search in metric spaces. In Proceedings of the 23rd VLDB International Conference, September 1997. [5] K. Deng and A. W. Moore. Multiresolution Instance-based Learning. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 1233–1239, San Francisco, 1995. Morgan Kaufmann. [6] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209–226, September 1977. [7] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. In Proc. 25th VLDB Conference, 1999. [8] A. Gray and A. W. Moore. N-Body Problems in Statistical Learning. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13 (December 2000). MIT Press, 2001. [9] A. Guttman. R-trees: A dynamic index structure for spatial searching. In Proceedings of the Third ACM SIGACT-SIGMOD Symposium on Principles of Database Systems. Assn. for Computing Machinery, April 1984. [10] J. M. Hammersley. The Distribution of Distances in a Hypersphere. Annals of Mathematical Statistics, 21:447–452, 1950. [11] CMU Informedia digital video library project.
The TREC-2001 Video Track organized by NIST: shot boundary task, 2001. [12] T. Joachims. Making large-scale support vector machine learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods: Support Vector Machines. MIT Press, Cambridge, MA, 1998. [13] A. W. Moore. The Anchors Hierarchy: Using the Triangle Inequality to Survive High-Dimensional Data. In Twelfth Conference on Uncertainty in Artificial Intelligence. AAAI Press, 2000. [14] S. M. Omohundro. Efficient Algorithms with Neural Network Behaviour. Journal of Complex Systems, 1(2):273–347, 1987. [15] S. M. Omohundro. Bumptrees for Efficient Function, Constraint, and Classification Learning. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3. Morgan Kaufmann, 1991. [16] D. Pelleg and A. W. Moore. Accelerating Exact k-means Algorithms with Geometric Reasoning. In Proceedings of the Fifth International Conference on Knowledge Discovery and Data Mining. ACM, 1999. [17] F. P. Preparata and M. Shamos. Computational Geometry. Springer-Verlag, 1985. [18] J. K. Uhlmann. Satisfying general proximity/similarity queries with metric trees. Information Processing Letters, 40:175–179, 1991. [19] W. Zheng and A. Tropsha. A Novel Variable Selection QSAR Approach based on the K-Nearest Neighbor Principle. J. Chem. Inf. Comput. Sci., 40(1):185–194, 2000.
Large margin classifiers: convex loss, low noise, and convergence rates Peter L. Bartlett, Michael I. Jordan and Jon D. McAuliffe Division of Computer Science and Department of Statistics University of California, Berkeley Berkeley, CA 94720 {bartlett,jordan,jon}@stat.berkeley.edu Abstract Many classification algorithms, including the support vector machine, boosting and logistic regression, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0-1 loss function. We characterize the statistical consequences of using such a surrogate by providing a general quantitative relationship between the risk as assessed using the 0-1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial bounds under the weakest possible condition on the loss function: that it satisfy a pointwise form of Fisher consistency for classification. The relationship is based on a variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise. Finally, we present applications of our results to the estimation of convergence rates in the general setting of function classes that are scaled hulls of a finite-dimensional base class. 1 Introduction Convexity has played an increasingly important role in machine learning in recent years, echoing its growing prominence throughout applied mathematics (Boyd and Vandenberghe, 2003). In particular, a wide variety of two-class classification methods choose a real-valued classifier f based on the minimization of a convex surrogate φ(yf(x)) in the place of an intractable loss function 1(sign(f(x)) ≠ y). Examples of this tactic include the support vector machine, AdaBoost, and logistic regression, which are based on the hinge loss, the exponential loss and the logistic loss, respectively.
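Each of these surrogates, once rescaled so that φ(0) = 1 where necessary, upper-bounds the 0-1 loss 1(α ≤ 0) of the margin α = yf(x). A quick numerical check (our own sketch; the 1/ln 2 rescaling of the logistic loss is our choice, not the paper's):

```python
import math

# Surrogate losses phi(alpha), where alpha = y * f(x) is the margin.
exponential = lambda a: math.exp(-a)
hinge = lambda a: max(0.0, 1.0 - a)
# Logistic loss ln(1 + e^{-2a}), rescaled by 1/ln 2 so that phi(0) = 1.
logistic = lambda a: math.log(1.0 + math.exp(-2.0 * a)) / math.log(2.0)

zero_one = lambda a: 1.0 if a <= 0 else 0.0

# Verify phi(a) >= 1(a <= 0) on a grid of margins.
grid = [i / 100.0 for i in range(-500, 501)]
for phi in (exponential, hinge, logistic):
    assert all(phi(a) >= zero_one(a) for a in grid)
```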
What are the statistical consequences of choosing models and estimation procedures so as to exploit the computational advantages of convexity? In the setting of 0-1 loss, some basic answers have begun to emerge. In particular, it is possible to demonstrate the Bayes-risk consistency of methods based on minimizing convex surrogates for 0-1 loss, with appropriate regularization. Lugosi and Vayatis (2003) have provided such a result for any differentiable, monotone, strictly convex loss function φ that satisfies φ(0) = 1. This handles many common cases, although it does not handle the SVM. Steinwart (2002) has demonstrated consistency for the SVM as well, where F is a reproducing kernel Hilbert space and φ is continuous. Other results on Bayes-risk consistency have been presented by Jiang (2003), Zhang (2003), and Mannor et al. (2002). To carry this agenda further, it is necessary to find general quantitative relationships between the approximation and estimation errors associated with φ and those associated with 0-1 loss. This point has been emphasized by Zhang (2003), who has presented several examples of such relationships. We simplify and extend Zhang's results, developing a general methodology for finding quantitative relationships between the risk associated with φ and the risk associated with 0-1 loss. In particular, let R(f) denote the risk based on 0-1 loss and let R* = inf_f R(f) denote the Bayes risk. Similarly, let us refer to R_φ(f) = E φ(Y f(X)) as the "φ-risk," and let R*_φ = inf_f R_φ(f) denote the "optimal φ-risk." We show that, for all measurable f, ψ(R(f) − R*) ≤ R_φ(f) − R*_φ, (1) for a nondecreasing function ψ : [0, 1] → [0, ∞), and that no better bound is possible. Moreover, we present a general variational representation of ψ in terms of φ, and show how this representation allows us to infer various properties of ψ. This result suggests that if ψ is well-behaved, then minimization of R_φ(f) may provide a reasonable surrogate for minimization of R(f).
Moreover, the result provides a quantitative way to transfer assessments of statistical error in terms of "excess φ-risk" R_φ(f) − R*_φ into assessments of error in terms of "excess risk" R(f) − R*. Although our principal goal is to understand the implications of convexity in classification, we do not impose a convexity assumption on φ at the outset. Indeed, while conditions such as convexity, continuity, and differentiability of φ are easy to verify and have natural relationships to optimization procedures, it is not immediately obvious how to relate such conditions to their statistical consequences. Thus, in Section 2 we consider the weakest possible condition on φ — that it is "classification-calibrated," which is essentially a pointwise form of Fisher consistency for classification. We show that minimizing φ-risk leads to minimal risk precisely when φ is classification-calibrated. Building on (1), in Section 3 we study the low noise setting, in which the posterior probability η(X) is not too close to 1/2. We show that in this setting we are able to obtain an improvement in the relationship between excess φ-risk and excess risk. Section 4 turns to the estimation of convergence rates for empirical φ-risk minimization in the low noise setting. We find that for convex φ satisfying a certain uniform convexity condition, empirical φ-risk minimization yields convergence of misclassification risk to that of the best-performing classifier in F, and the rate of convergence can be strictly faster than the classical parametric rate of n^{−1/2}. 2 Relating excess risk to excess φ-risk There are three sources of error to be considered in a statistical analysis of classification problems: the classical estimation error due to finite sample size, the classical approximation error due to the size of the function space F, and an additional source of approximation error due to the use of a surrogate in place of the 0-1 loss function.
It is this last source of error that is our focus in this section. We give estimates for this error that are valid for any measurable function. Since the error is defined in terms of the probability distribution, we work with population expectations in this section. Fix an input space X and let (X, Y), (X_1, Y_1), ..., (X_n, Y_n) ∈ X × {±1} be i.i.d. with distribution P. Define η : X → [0, 1] as η(x) = P(Y = 1 | X = x). Define the {0, 1}-risk, or just risk, of f as R(f) = P(sign(f(X)) ≠ Y), where sign(α) = 1 for α > 0 and −1 otherwise. Based on the sample D_n = ((X_1, Y_1), ..., (X_n, Y_n)), we want to choose a function f_n with small risk. Define the Bayes risk R* = inf_f R(f), where the infimum is over all measurable f. Then any f satisfying sign(f(X)) = sign(η(X) − 1/2) a.s. on {η(X) ≠ 1/2} has R(f) = R*. Fix a function φ : ℝ → [0, ∞). Define the φ-risk of f as R_φ(f) = E φ(Y f(X)). We can view φ as specifying a contrast function that is minimized in determining a discriminant f. Define C_η(α) = ηφ(α) + (1 − η)φ(−α), so that the conditional φ-risk at x ∈ X is E(φ(Y f(X)) | X = x) = C_{η(x)}(f(x)) = η(x)φ(f(x)) + (1 − η(x))φ(−f(x)). As a useful illustration for the definitions that follow, consider a singleton domain X = {x_0}. Minimizing φ-risk corresponds to choosing f(x_0) to minimize C_{η(x_0)}(f(x_0)). For η ∈ [0, 1], define the optimal conditional φ-risk H(η) = inf_{α ∈ ℝ} C_η(α) = inf_{α ∈ ℝ} (ηφ(α) + (1 − η)φ(−α)). Then the optimal φ-risk satisfies R*_φ := inf_f R_φ(f) = E H(η(X)), where the infimum is over measurable functions. For η ∈ [0, 1], define H^−(η) = inf_{α : α(2η−1) ≤ 0} C_η(α) = inf_{α : α(2η−1) ≤ 0} (ηφ(α) + (1 − η)φ(−α)). This is the optimal value of the conditional φ-risk, under the constraint that the sign of the argument α disagrees with that of 2η − 1. We now turn to the basic condition we impose on φ. This condition generalizes the requirement that the minimizer of C_η(α) (if it exists) has the correct sign.
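For the exponential loss φ(α) = e^{−α}, a short calculation gives the closed forms H(η) = 2√(η(1 − η)) and H^−(η) = 1 (our own derivation, easily checked by calculus: the unconstrained minimizer is α* = (1/2) ln(η/(1 − η)), and the constrained infimum is attained at α = 0). The sketch below confirms both by minimizing C_η(α) over a grid:

```python
import math

def C(eta, alpha):
    # Conditional phi-risk for the exponential loss phi(a) = exp(-a).
    return eta * math.exp(-alpha) + (1 - eta) * math.exp(alpha)

alphas = [i / 1000.0 for i in range(-8000, 8001)]  # grid over [-8, 8]
for eta in (0.1, 0.3, 0.5, 0.7, 0.9):
    H = min(C(eta, a) for a in alphas)
    Hminus = min(C(eta, a) for a in alphas if a * (2 * eta - 1) <= 0)
    assert abs(H - 2 * math.sqrt(eta * (1 - eta))) < 1e-3
    assert abs(Hminus - 1.0) < 1e-3
    # The calibration gap: H^-(eta) > H(eta) whenever eta != 1/2.
    if eta != 0.5:
        assert Hminus > H
```

The strict gap H^−(η) > H(η) for η ≠ 1/2 is exactly the classification-calibration condition introduced next.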
This is a minimal condition that can be viewed as a form of Fisher consistency for classification (Lin, 2001). Definition 1. We say that φ is classification-calibrated if, for any η ≠ 1/2, H^−(η) > H(η). The following functional transform of the loss function will be useful in our main result. Definition 2. We define the ψ-transform of a loss function as follows. Given φ : ℝ → [0, ∞), define the function ψ : [0, 1] → [0, ∞) by ψ = ψ̃**, where ψ̃(θ) = H^−((1 + θ)/2) − H((1 + θ)/2), and g** : [0, 1] → ℝ is the Fenchel-Legendre biconjugate of g : [0, 1] → ℝ. Equivalently, the epigraph of g** is the closure of the convex hull of the epigraph of g. (Recall that the epigraph of a function g is the set {(x, t) : x ∈ [0, 1], g(x) ≤ t}.) It is immediate from the definitions that ψ̃ and ψ are nonnegative and that they are also continuous on [0, 1]. We calculate the ψ-transform for the exponential loss, logistic loss, quadratic loss and truncated quadratic loss, tabulating the results in Table 1. All of these loss functions can be verified to be classification-calibrated. (The other parameters listed in the table will be referred to later.)

Table 1: Four convex loss functions and the corresponding ψ-transform. On the interval [−B, B], each loss function has the indicated Lipschitz constant L_B and modulus of convexity δ(ε) with respect to d_φ. All have a quadratic modulus of convexity.

                     φ(α)                 ψ(θ)           L_B       δ(ε)
exponential          e^{−α}               1 − √(1 − θ²)  e^B       e^{−B} ε²/8
logistic             ln(1 + e^{−2α})      θ²/2           2         e^{−2B} ε²/4
quadratic            (1 − α)²             θ²             2(B + 1)  ε²/4
truncated quadratic  (max{0, 1 − α})²     θ²             2(B + 1)  ε²/4

The importance of the ψ-transform is shown by the following theorem. Theorem 3. 1. For any nonnegative loss function φ, any measurable f : X → ℝ and any probability distribution on X × {±1}, ψ(R(f) − R*) ≤ R_φ(f) − R*_φ. 2. Suppose |X| ≥ 2. For any nonnegative loss function φ, any ε > 0 and any θ ∈ [0, 1], there is a probability distribution on X × {±1} and a function f : X → ℝ such that R(f) − R* = θ and ψ(θ) ≤ R_φ(f) − R*_φ ≤ ψ(θ) + ε. 3.
The following conditions are equivalent. (a) φ is classification-calibrated. (b) For any sequence (θ_i) in [0, 1], ψ(θ_i) → 0 if and only if θ_i → 0. (c) For every sequence of measurable functions f_i : X → ℝ and every probability distribution on X × {±1}, R_φ(f_i) → R*_φ implies R(f_i) → R*. Remark: It can be shown that classification-calibration implies that ψ is invertible on [0, 1], in which case it is meaningful to write the upper bound on excess risk as ψ^{−1}(R_φ(f) − R*_φ). Remark: Zhang (2003) has given a comparison theorem like Part 1, for convex φ that satisfy certain conditions. Lugosi and Vayatis (2003) and Steinwart (2002) have shown limiting results like Part 3c under other conditions on φ. All of these conditions are stronger than the ones we assume here. The following lemma summarizes various useful properties of H, H^− and ψ. Lemma 4. The functions H, H^− and ψ have the following properties, for all η ∈ [0, 1]: 1. H and H^− are symmetric about 1/2: H(η) = H(1 − η), H^−(η) = H^−(1 − η). 2. H is concave and satisfies H(η) ≤ H(1/2) = H^−(1/2). 3. If φ is classification-calibrated, then H(η) < H(1/2) for η ≠ 1/2. 4. H^− is concave on [0, 1/2] and [1/2, 1], and satisfies H^−(η) ≥ H(η). 5. H, H^− and ψ̃ are continuous on [0, 1]. 6. ψ is continuous on [0, 1], ψ is nonnegative and minimal at 0, and ψ(0) = 0. 7. φ is classification-calibrated iff ψ(θ) > 0 for all θ ∈ (0, 1]. Proof. (Of Theorem 3.) For Part 1, it is straightforward to show that R(f) − R* = E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] |2η(X) − 1|), where 1[Φ] is 1 if the predicate Φ is true and 0 otherwise.
From the definition, ψ is convex, so we can apply Jensen's inequality, the fact that ψ(0) = 0 (Lemma 4, part 6) and the fact that ψ(θ) ≤ ψ̃(θ), to show that

    ψ(R(f) − R*) ≤ E ψ(1[sign(f(X)) ≠ sign(η(X) − 1/2)] |2η(X) − 1|)
                 = E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] ψ(|2η(X) − 1|))
                 ≤ E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] ψ̃(|2η(X) − 1|))
                 = E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] (H⁻(η(X)) − H(η(X))))
                 = E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] (inf_{α : α(2η(X)−1) ≤ 0} C_{η(X)}(α) − H(η(X))))
                 ≤ E(C_{η(X)}(f(X)) − H(η(X)))
                 = R_φ(f) − R*_φ,

where the last inequality used the fact that for any x, and in particular when sign(f(x)) = sign(η(x) − 1/2), we have C_{η(x)}(f(x)) ≥ H(η(x)).

For Part 2, the first inequality is from Part 1. For the second, fix ϵ > 0 and θ ∈ [0, 1]. From the definition of ψ, we can choose γ, α1, α2 ∈ [0, 1] for which θ = γα1 + (1 − γ)α2 and ψ(θ) ≥ γψ̃(α1) + (1 − γ)ψ̃(α2) − ϵ/2. Choose distinct x1, x2 ∈ X, and choose P_X such that P_X{x1} = γ, P_X{x2} = 1 − γ, η(x1) = (1 + α1)/2, and η(x2) = (1 + α2)/2. From the definition of H⁻, we can choose f : X → ℝ such that f(x1) ≤ 0, f(x2) ≤ 0, C_{η(x1)}(f(x1)) ≤ H⁻(η(x1)) + ϵ/2 and C_{η(x2)}(f(x2)) ≤ H⁻(η(x2)) + ϵ/2. Then it is easy to verify that R_φ(f) − R*_φ ≤ γψ̃(α1) + (1 − γ)ψ̃(α2) + ϵ/2 ≤ ψ(θ) + ϵ. Furthermore, since sign(f(xi)) = −1 but η(xi) ≥ 1/2, we have R(f) − R* = E|2η(X) − 1| = θ.

For Part 3, first note that, for any φ, ψ is continuous on [0, 1] and ψ(0) = 0 by Lemma 4, part 6, and hence θi → 0 implies ψ(θi) → 0. Thus, we can replace condition (3b) by

    (3b') For any sequence (θi) in [0, 1], ψ(θi) → 0 implies θi → 0.

To see that (3a) implies (3b'), let φ be classification-calibrated, and let (θi) be a sequence that does not converge to 0. Define c = lim sup θi > 0, and pass to a subsequence with lim θi = c. Then lim ψ(θi) = ψ(c) by continuity, and ψ(c) > 0 by classification-calibration (Lemma 4, part 7). Thus, for the original sequence (θi), we see lim sup ψ(θi) > 0, so we cannot have ψ(θi) → 0. Part 1 implies that (3b') implies (3c).
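As an aside, the ψ̃ used throughout this proof can be computed numerically straight from Definition 2. For the exponential loss, brute-force minimization of the conditional φ-risk C_η(α) = η φ(α) + (1 − η) φ(−α) over a grid recovers Table 1's entry (a sketch; the grid bounds and helper names are our own, and ψ = ψ̃ here because ψ̃ is convex for this loss):

```python
import numpy as np

# Recover the psi-transform of the exponential loss phi(alpha) = exp(-alpha)
# directly from Definition 2 by minimizing the conditional phi-risk
# C_eta(alpha) = eta * phi(alpha) + (1 - eta) * phi(-alpha) over a grid.
alphas = np.linspace(-10.0, 10.0, 200001)

def H(eta):
    """Optimal conditional phi-risk: inf over all alpha."""
    return np.min(eta * np.exp(-alphas) + (1 - eta) * np.exp(alphas))

def H_minus(eta):
    """Optimal risk under the constraint alpha * (2 eta - 1) <= 0."""
    feasible = alphas * (2 * eta - 1) <= 0
    return np.min((eta * np.exp(-alphas) + (1 - eta) * np.exp(alphas))[feasible])

def psi_tilde(theta):
    return H_minus((1 + theta) / 2) - H((1 + theta) / 2)

# Table 1 predicts psi(theta) = 1 - sqrt(1 - theta^2) for exponential loss.
for theta in [0.0, 0.3, 0.5, 0.9]:
    assert abs(psi_tilde(theta) - (1 - np.sqrt(1 - theta ** 2))) < 1e-4
```

Analytically, H(η) = 2√(η(1−η)) and H⁻(η) = 1 (attained at α = 0) for this loss, which gives ψ̃(θ) = 1 − √(1 − θ²) exactly.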
The proof that (3c) implies (3a) is straightforward; see Bartlett et al. (2003).

The following observation is easy to verify. It shows that if φ is convex, the classification-calibration condition is easy to verify and the ψ-transform is a little easier to compute.

Lemma 5. Suppose φ is convex. Then we have
1. φ is classification-calibrated if and only if it is differentiable at 0 and φ′(0) < 0.
2. If φ is classification-calibrated, then ψ̃ is convex, hence ψ = ψ̃.

All of the classification procedures mentioned in earlier sections utilize surrogate loss functions which are either upper bounds on 0-1 loss or can be transformed into upper bounds via a positive scaling factor. It is easy to verify that this is necessary.

Lemma 6. If φ : ℝ → [0, ∞) is classification-calibrated, then there is a γ > 0 such that γφ(α) ≥ 1[α ≤ 0] for all α ∈ ℝ.

3 Tighter bounds under low noise conditions

In a study of the convergence rate of empirical risk minimization, Tsybakov (2001) provided a useful condition on the behavior of the posterior probability near the optimal decision boundary {x : η(x) = 1/2}. Tsybakov's condition is useful in our setting as well; as we show in this section, it allows us to obtain a refinement of Theorem 3. Recall that

    R(f) − R* = E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] |2η(X) − 1|)
              ≤ P_X(sign(f(X)) ≠ sign(η(X) − 1/2)),                    (2)

with equality provided that η(X) is almost surely either 1 or 0. We say that P has noise exponent α ≥ 0 if there is a c > 0 such that every measurable f : X → ℝ has

    P_X(sign(f(X)) ≠ sign(η(X) − 1/2)) ≤ c (R(f) − R*)^α.              (3)

Notice that we must have α ≤ 1, in view of (2). If α = 0, this imposes no constraint on the noise: take c = 1 to see that every probability measure P satisfies (3). On the other hand, it is easy to verify that α = 1 if and only if |2η(X) − 1| ≥ 1/c a.s. [P_X].

Theorem 7. Suppose P has noise exponent 0 < α ≤ 1, and φ is classification-calibrated. Then there is a c > 0 such that for any f : X → ℝ,

    c (R(f) − R*)^α ψ((R(f) − R*)^{1−α} / (2c))
    ≤ R_φ(f) − R*_φ.

Furthermore, this never gives a worse rate than the result of Theorem 3, since

    (R(f) − R*)^α ψ((R(f) − R*)^{1−α} / (2c)) ≥ ψ((R(f) − R*) / (2c)).

The proof follows closely that of Theorem 3(1), with the modification that we approximate the error integral separately over subsets of the input space with low and high noise.

4 Estimation rates

Large margin algorithms choose f̂ from a class F to minimize empirical φ-risk,

    R̂_φ(f) = Ê φ(Y f(X)) = (1/n) Σ_{i=1}^n φ(Y_i f(X_i)).

We have seen how the excess risk depends on the excess φ-risk. In this section, we examine the convergence of f̂'s excess φ-risk, R_φ(f̂) − R*_φ. We can split this excess risk into an estimation error term and an approximation error term:

    R_φ(f̂) − R*_φ = (R_φ(f̂) − inf_{f∈F} R_φ(f)) + (inf_{f∈F} R_φ(f) − R*_φ).

We focus on the first term, the estimation error term. For simplicity, we assume throughout that some f* ∈ F achieves the infimum, R_φ(f*) = inf_{f∈F} R_φ(f). The simplest way to bound R_φ(f̂) − R_φ(f*) is to show that R̂_φ(f) and R_φ(f) are close, uniformly over F. This approach can give the wrong rate. For example, for a nontrivial class F, the resulting estimation error bound can decrease no faster than 1/√n. However, if F is a small class (for instance, a VC-class) and R_φ(f*) = 0, then R_φ(f̂) should decrease as log n / n. Lee et al. (1996) showed that fast rates are also possible for the quadratic loss φ(α) = (1 − α)² if F is convex, even if R_φ(f*) > 0. In particular, because the quadratic loss function is strictly convex, it is possible to bound the variance of the excess loss (the difference between the loss of a function f and that of the optimal f*) in terms of its expectation. Since the variance decreases as we approach the optimal f*, the risk of the empirical minimizer converges more quickly to the optimal risk than the simple uniform convergence results would suggest. Mendelson (2002) improved this result, and extended it from prediction in L₂(P_X) to prediction in L_p(P_X) for other values of p.
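Lemma 5's characterization of calibration for convex losses is easy to probe numerically: each loss in Table 1 has φ′(0) < 0, while a symmetric convex loss (our own non-calibrated example, not one from the table) has φ′(0) = 0. A minimal sketch:

```python
import math

# Numerical probe of Lemma 5, part 1: a convex phi is classification-
# calibrated iff it is differentiable at 0 with phi'(0) < 0. The final,
# symmetric loss is our own example of a convex but non-calibrated phi.
losses = {
    "exponential":         lambda a: math.exp(-a),
    "logistic":            lambda a: math.log(1 + math.exp(-2 * a)),
    "quadratic":           lambda a: (1 - a) ** 2,
    "truncated quadratic": lambda a: max(0.0, 1 - a) ** 2,
    "symmetric a^2":       lambda a: a * a,   # phi'(0) = 0: not calibrated
}

h = 1e-6
deriv_at_zero = {name: (phi(h) - phi(-h)) / (2 * h)   # central difference
                 for name, phi in losses.items()}
calibrated = {name: d < -1e-3 for name, d in deriv_at_zero.items()}
assert all(calibrated[n] for n in ("exponential", "logistic",
                                   "quadratic", "truncated quadratic"))
assert not calibrated["symmetric a^2"]
```

The central differences come out at approximately −1, −1, −2, −2 and 0 respectively, matching the exact derivatives at zero.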
The proof used the idea of the modulus of convexity of a norm. This idea can be used to give a simpler proof of a more general bound when the loss function satisfies a strict convexity condition, and we obtain risk bounds. The modulus of convexity of an arbitrary strictly convex function (rather than a norm) is a key notion in formulating our results.

Definition 8 (Modulus of convexity). Given a pseudometric d defined on a vector space S, and a convex function f : S → ℝ, the modulus of convexity of f with respect to d is the function δ : [0, ∞) → [0, ∞] satisfying

    δ(ϵ) = inf { (f(x1) + f(x2))/2 − f((x1 + x2)/2) : x1, x2 ∈ S, d(x1, x2) ≥ ϵ }.

If δ(ϵ) > 0 for all ϵ > 0, we say that f is strictly convex with respect to d.

We consider loss functions φ that also satisfy a Lipschitz condition with respect to a pseudometric d on ℝ: we say that φ : ℝ → ℝ is Lipschitz with respect to d, with constant L, if for all a, b ∈ ℝ, |φ(a) − φ(b)| ≤ L · d(a, b). (Note that if d is a metric and φ is convex, then φ necessarily satisfies a Lipschitz condition on any compact subset of ℝ.) We consider four loss functions that satisfy these conditions: the exponential loss function used in AdaBoost, the deviance function for logistic regression, the quadratic loss function, and the truncated quadratic loss function; see Table 1. We use the pseudometric

    d_φ(a, b) = inf { |a − α| + |β − b| : φ constant on (min{α, β}, max{α, β}) }.

For all except the truncated quadratic loss function, this corresponds to the standard metric on ℝ, d_φ(a, b) = |a − b|. In all cases, d_φ(a, b) ≤ |a − b|, but for the truncated quadratic, d_φ ignores differences to the right of 1. It is easy to calculate the Lipschitz constant and modulus of convexity for each of these loss functions. These parameters are given in Table 1. In the following result, we consider the function class used by algorithms such as AdaBoost: the class of linear combinations of classifiers from a fixed base class.
We assume that this base class has finite Vapnik-Chervonenkis dimension, and we constrain the size of the class by restricting the ℓ1 norm of the linear parameters. If G is the VC-class, we write F = B absconv(G), for some constant B, where B absconv(G) = ( m X i=1 αigi : m ∈ , αi ∈ , gi ∈G, ∥α∥1 = B ) . Theorem 9. Let φ : → be a convex loss function. Suppose that, on the interval [−B, B], φ is Lipschitz with constant LB and has modulus of convexity δ(ϵ) = aBϵ2 (both with respect to the pseudometric d). For any probability distribution P on X × Y that has noise exponent α = 1, there is a constant c′ for which the following is true. For i.i.d. data (X1, Y1), . . . , (Xn, Yn), let ˆf ∈F be the minimizer of the empirical φ-risk, Rφ(f) = ˆEφ(Y f(X)). Suppose that F = B absconv(G), where G ⊆{±1}X has dV C(G) = d, and ϵ∗≥BLB max (LBaB B 1/(d+1) , 1 ) n−(d+2)/(2d+2) Then with probability at least 1 −e−x, R( ˆf) ≤R∗+ c′  ϵ∗+ LB(LB/aB + B)x n + inf f∈F Rφ(f) −R∗ φ  . Notice that the rate obtained here is strictly faster than the classical n−1/2 parametric rate, even though the class is infinite dimensional and the optimal element of F can have risk larger than the Bayes risk. The key idea in the proof is similar to ideas from Lee et al. (1996), Mendelson (2002), but simpler. Let f ∗be the minimizer of φ-risk in a function class F. If the class F is convex and the loss function φ is strictly convex and Lipschitz, then the variance of the excess loss, gf(x, y) = φ(yf(x)) −φ(yf ∗(x)), decreases with its expectation. Thus, as a function f ∈F approaches the optimum, f ∗, the two losses φ(Y ˆf(X)) and φ(Y f ∗(X)) become strongly correlated. This leads to the faster rates. More formally, suppose that φ is L-Lipschitz and has modulus of convexity δ(ϵ) ≥cϵr with r ≤2. Then it is straightforward to show that Eg2 f ≤L2 (Egf/(2c))2/r. For the details, see Bartlett et al. (2003). 
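Table 1's modulus-of-convexity entries can also be checked by brute force against Definition 8. For the quadratic loss under the usual metric, the midpoint gap (φ(x₁) + φ(x₂))/2 − φ((x₁ + x₂)/2) equals (x₁ − x₂)²/4, so the infimum over d(x₁, x₂) ≥ ϵ is ϵ²/4 (a sketch; the choice of B and the grid resolution are our own):

```python
import numpy as np

# Brute-force the modulus of convexity (Definition 8) of the quadratic
# loss phi(a) = (1 - a)^2 on [-B, B] with d(a, b) = |a - b|, and compare
# against Table 1's delta(eps) = eps^2 / 4.
B = 2.0
grid = np.linspace(-B, B, 201)
phi = lambda a: (1 - a) ** 2

def modulus(eps):
    """Infimum of the midpoint gap over pairs at distance >= eps."""
    best = np.inf
    for x1 in grid:
        for x2 in grid:
            if abs(x1 - x2) >= eps - 1e-9:  # guard against float rounding
                gap = (phi(x1) + phi(x2)) / 2 - phi((x1 + x2) / 2)
                best = min(best, gap)
    return best

for eps in [0.5, 1.0, 2.0]:
    assert abs(modulus(eps) - eps ** 2 / 4) < 1e-2
```

The same brute-force loop applies to the other losses in Table 1, with the pseudometric d_φ substituted for |a − b| in the truncated quadratic case.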
5 Conclusions We have studied the relationship between properties of a nonnegative margin-based loss function φ and the statistical performance of the classifier which, based on an i.i.d. training set, minimizes empirical φ-risk over a class of functions. We first derived a universal upper bound on the population misclassification risk of any thresholded measurable classifier in terms of its corresponding population φ-risk. The bound is governed by the ψ-transform, a convexified variational transform of φ. It is the tightest possible upper bound uniform over all probability distributions and measurable functions in this setting. Using this upper bound, we characterized the class of loss functions which guarantee that every φ-risk consistent classifier sequence is also Bayes-risk consistent, under any population distribution. Here φ-risk consistency denotes sequential convergence of population φ-risks to the smallest possible φ-risk of any measurable classifier. The characteristic property of such a φ, which we term classification-calibration, is a kind of pointwise Fisher consistency for the conditional φ-risk at each x ∈X. The necessity of classification-calibration is apparent; the sufficiency underscores its fundamental importance in elaborating the statistical behavior of large-margin classifiers. Under the low noise assumption of Tsybakov (2001), we sharpened our original upper bound and studied the Bayes-risk consistency of ˆf, the minimizer of empirical φ-risk over a convex, bounded class of functions F which is not too complex. We found that, for convex φ satisfying a certain uniform strict convexity condition, empirical φ-risk minimization yields convergence of misclassification risk to that of the best-performing classifier in F, as the sample size grows. Furthermore, the rate of convergence can be strictly faster than the classical n−1/2, depending on the strictness of convexity of φ and the complexity of F. 
Acknowledgments We would like to thank Gilles Blanchard, Olivier Bousquet, Pascal Massart, Ron Meir, Shahar Mendelson, Martin Wainwright and Bin Yu for helpful discussions. References Bartlett, P. L., Jordan, M. I., and McAuliffe, J. M. (2003). Convexity, classification and risk bounds. Technical Report 638, Dept. of Statistics, UC Berkeley. [www.stat.berkeley.edu/tech-reports]. Boyd, S. and Vandenberghe, L. (2003). Convex Optimization. [www.stanford.edu/∼boyd]. Jiang, W. (2003). Process consistency for Adaboost. Annals of Statistics, in press. Lee, W. S., Bartlett, P. L., and Williamson, R. C. (1996). Efficient agnostic learning of neural networks with bounded fan-in. IEEE Transactions on Information Theory, 42(6):2118–2132. Lin, Y. (2001). A note on margin-based loss functions in classification. Technical Report 1044r, Department of Statistics, University of Wisconsin. Lugosi, G. and Vayatis, N. (2003). On the Bayes risk consistency of regularized boosting methods. Annals of Statistics, in press. Mannor, S., Meir, R., and Zhang, T. (2002). The consistency of greedy algorithms for classification. In Proceedings of the Annual Conference on Computational Learning Theory, pages 319–333. Mendelson, S. (2002). Improving the sample complexity using global data. IEEE Transactions on Information Theory, 48(7):1977–1991. Steinwart, I. (2002). Consistency of support vector machines and other regularized classifiers. Technical Report 02-03, University of Jena, Department of Mathematics and Computer Science. Tsybakov, A. (2001). Optimal aggregation of classifiers in statistical learning. Technical Report PMA-682, Universit´e Paris VI. Zhang, T. (2003). Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, in press.
Design of experiments via information theory*

Liam Paninski
Center for Neural Science, New York University
New York, NY 10003
liam@cns.nyu.edu

Abstract

We discuss an idea for collecting data in a relatively efficient manner. Our point of view is Bayesian and information-theoretic: on any given trial, we want to adaptively choose the input in such a way that the mutual information between the (unknown) state of the system and the (stochastic) output is maximal, given any prior information (including data collected on any previous trials). We prove a theorem that quantifies the effectiveness of this strategy and give a few illustrative examples comparing the performance of this adaptive technique to that of the more usual nonadaptive experimental design. For example, we are able to explicitly calculate the asymptotic relative efficiency of the "staircase method" widely employed in psychophysics research, and to demonstrate the dependence of this efficiency on the form of the "psychometric function" underlying the output responses.

1 Introduction

One simple model of experimental design (we have neurophysiological experiments in mind, but our results are general with respect to the identity of the system under study) is as follows. We have some set X of input stimuli, and some knowledge of how the system should respond to every stimulus, x, in X. This knowledge is summarized in the form of a prior distribution, p0(θ), on some space Θ of models θ. A model is a set of probabilistic input-output relationships: regular conditional distributions p(y|x, θ) on Y, the set of possible output responses, given each x in X. Thus the joint probability of stimulus and response is:

    p(x, y) = ∫ p(x, y, θ) dθ = ∫ p0(θ) p(x) p(y|θ, x) dθ.

The "design" of an experiment is given by the choice of input probability p(x). We want to design our experiment — choose p(x) — optimally in some sense.
One natural idea would be to choose p(x) in such a way that we learn as much as possible about the underlying model, on average. Information theory thus suggests we choose p(x) to optimize the following objective function:

    I({x, y}; θ) = ∫_{X×Y×Θ} p(x, y, θ) log [ p(x, y, θ) / (p(x, y) p(θ)) ],    (1)

where I(·;·) denotes mutual information. In other words, we want to maximize the information provided about θ by the pair {x, y}, given our current knowledge of the model as summarized in the posterior distribution given N samples of data: pN(θ) = p(θ|{xi, yi}_{1≤i≤N}). Similar ideas have seen application in a wide and somewhat scattered literature; for a partial bibliography, see the longer draft of this paper at http://www.cns.nyu.edu/∼liam. Somewhat surprisingly, we have not seen any applications of the information-theoretic objective function (1) to the design of neurophysiological experiments (although see the abstract by [7], who seem to have independently implemented the same idea in a simulation study). The primary goal of this paper is to elucidate the asymptotic behavior of the a posteriori density pN(θ) when we choose x according to the recipe outlined above; in particular, we want to compare the adaptive strategy to the more usual case, in which the stimuli are drawn i.i.d. (non-adaptively) from some fixed distribution p(x). Our main result (section 2) states that, under acceptably weak conditions on the models p(y|θ, x), the information-maximization strategy leads to consistent and efficient estimates of the true underlying model, in a natural sense. We also give a few simple examples to illustrate the applicability of our results (section 3).

* A longer version of this paper, including proofs, has been submitted and is available at http://www.cns.nyu.edu/∼liam.

2 Main Result

First, we note that the problem as posed in the introduction turns out to be slightly easier than one might have expected, because I({x, y}; θ) is linear in p(x).
This, in turn, implies that p(x) must be degenerate, concentrated on the points x where I is maximal. Thus, instead of finding optimal distributions p(x), we need only find optimal inputs x, in the sense of maximizing the conditional information between θ and y, given a single input x:

    I(y; θ|x) ≡ ∫_Y ∫_Θ pN(θ) p(y|θ, x) log [ p(y|x, θ) / ∫_Θ pN(θ) p(y|x, θ) dθ ].

Our main result is a "Bernstein-von Mises"-type theorem [12]. The classical form of this kind of result says, basically, that if the posterior distributions are consistent (in the sense that pN(U) → 1 for any neighborhood U of the true parameter θ0) and the relevant likelihood ratios are sufficiently smooth on average, then the posterior distributions pN(θ) are asymptotically normal, with easily calculable asymptotic mean and variance. We adapt this result to the present case, where x is chosen according to the information-maximization recipe. It turns out that the hard part is proving consistency (cf. section 4); we give the basic consistency lemma (interesting in its own right) first, from which the main theorem follows fairly easily.

Lemma 1 (Consistency). Assume the following conditions:
1. The parameter space Θ is compact.
2. The loglikelihood log p(y|x, θ) is Lipschitz in θ, uniformly in x, with respect to some dominating measure on Y.
3. The prior measure p0 assigns positive measure to any neighborhood of θ0.
4. The maximal divergence sup_x D_KL(θ0; θ|x) is positive for all θ ≠ θ0.
Then the posteriors are consistent: pN(U) → 1 in probability for any neighborhood U of θ0.

Theorem 2 (Asymptotic normality). Assume the conditions of Lemma 1, strengthened as follows:
1. Θ has a smooth, finite-dimensional manifold structure in a neighborhood of θ0.
2. The loglikelihood log p(y|x, θ) is uniformly C² in θ.
In particular, the Fisher information matrices

    I_θ(x) = ∫_Y ( ṗ(y|x, θ) / p(y|x, θ) )ᵗ ( ṗ(y|x, θ) / p(y|x, θ) ) p(y|θ, x),

where the differential ṗ is taken with respect to θ, are well-defined and continuous in θ, uniformly in (x, θ) in some neighborhood of θ0.
3. The prior measure p0 is absolutely continuous in some neighborhood of θ0, with a continuous positive density at θ0.
4. max_{C ∈ co(I_{θ0}(x))} det(C) > 0, where co(I_{θ0}(x)) denotes the convex closure of the set of Fisher information matrices I_{θ0}(x).
Then ||pN − N(µN, σ²_N)|| → 0 in probability, where ||·|| denotes variation distance, N(µN, σ²_N) denotes the normal density with mean µN and variance σ²_N, and µN is asymptotically normal with mean θ0 and variance σ²_N. Here

    (N σ²_N)⁻¹ → argmax_{C ∈ co(I_{θ0}(x))} det(C);

the maximum in the above expression is well-defined and unique.

Thus, under these conditions, the information-maximization strategy works, and works better than the i.i.d. x strategy (where the asymptotic variance σ² is inversely related to an average, not a maximum, over x, and is therefore generically larger). A few words about the assumptions are in order. Most should be fairly self-explanatory: the conditions on the priors, as usual, are there to ensure that the prior becomes irrelevant in the face of sufficient posterior evidence; the smoothness assumptions on the likelihood permit the local expansion which is the source of asymptotic normality; and the condition on the maximal divergence function sup_x D_KL(θ0; θ|x) ensures that distinct models θ0 and θ are identifiable. Finally, some form of monotonicity or compactness on Θ is necessary, mostly to bound the maximal divergence function sup_x D_KL(θ0; θ|x) and its inverse away from zero (the lower bound, again, is to ensure identifiability; the necessity of the upper bound, on the other hand, will become clear in section 4); also, compactness is useful (though not necessary) for adapting certain Glivenko-Cantelli bounds [12] for the consistency proof.
It should also be clear that we have not stated the results as generally as possible; we have chosen instead to use assumptions that are simple to understand and verify, and to leave the technical generalizations to the interested reader. Our assumptions should be weak enough for most neurophysiological and psychophysical situations, for example, by assuming that parameters take values in bounded (though possibly large) sets and that tuning curves are not infinitely steep. The proofs of these results are basically elaborations on Wald's consistency method and Le Cam's approach to the Bernstein-von Mises theorem [12].

3 Applications

3.1 Psychometric model

As noted in the introduction, psychophysicists have employed versions of the information-maximization procedure for some years [14, 9, 13, 6]. References in [13], for example, go back four decades, and while these earlier investigators usually couched their discussion in terms of variance instead of entropy, the basic idea is the same (note, for example, that minimizing entropy is asymptotically equivalent to minimizing variance, by our main theorem). Our results above allow us to precisely quantify the effectiveness of this strategy. The standard psychometric model is as follows. The response space Y is binary, corresponding to subjective "yes" or "no" detection responses. Let f be "sigmoidal": a uniformly smooth, monotonically increasing function on the line, such that f(0) = 1/2, lim_{t→−∞} f(t) = 0 and lim_{t→∞} f(t) = 1 (this function represents the detection probability when the subject is presented with a stimulus of strength t). Let f_{a,θ} = f((t − θ)/a); θ here serves as a location ("threshold") parameter, while a sets the scale (we assume a is known, for now, although of course this can be relaxed [6]). Finally, let p(x) and p0(θ) be some fixed sampling and prior distributions, respectively, both with smooth densities with respect to Lebesgue measure on some interval Θ.
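To make the recipe concrete, here is a minimal simulation of greedy information maximization for this psychometric model, with a logistic choice of f and discretized θ and x grids. All specific choices here (scale, grids, trial count, seed) are our own illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

a = 0.5                                   # known scale
f = lambda t: 1.0 / (1.0 + np.exp(-t / a))   # logistic psychometric function
thetas = np.linspace(-2, 2, 201)          # discretized parameter space Theta
xs = np.linspace(-3, 3, 121)              # candidate inputs
post = np.full(len(thetas), 1.0 / len(thetas))   # uniform prior p0(theta)
theta0 = 0.7                              # true threshold, unknown to the rule

def Hb(p):
    """Binary entropy in nats, clipped for numerical safety."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

for _ in range(200):
    py1 = f(xs[:, None] - thetas[None, :])   # p(y=1 | x, theta), (n_x, n_theta)
    pbar = py1 @ post                        # marginal p(y=1 | x) under posterior
    info = Hb(pbar) - Hb(py1) @ post         # I(y; theta | x) for each candidate x
    x = xs[np.argmax(info)]                  # greedy information-maximizing input
    y = rng.random() < f(x - theta0)         # simulate the subject's response
    lik = f(x - thetas) if y else 1.0 - f(x - thetas)
    post *= lik
    post /= post.sum()

theta_hat = thetas[np.argmax(post)]          # posterior mode, near theta0
```

Each trial picks the input maximizing I(y; θ|x) under the current posterior; in runs like this the posterior mode typically lands within a few grid steps of θ0.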
Now, for any fixed scale a, we want to compare the performance of the information-maximization strategy to that of the i.i.d. p(x) procedure. We have by Theorem 2 that the most efficient estimator of θ is asymptotically unbiased with asymptotic variance

    σ²_info ≈ (N sup_x I_{θ0}(x))⁻¹,

while the usual calculations show that the asymptotic variance of any efficient estimator based on i.i.d. samples from p(x) is given by

    σ²_iid ≈ (N ∫_X dp(x) I_{θ0}(x))⁻¹;

the key point, again, is that σ⁻²_iid is an average, while σ⁻²_info is a maximum, and hence σ_iid ≥ σ_info, with equality only in the exceptional case that the Fisher information I_{θ0}(x) is constant almost surely in p(x). The Fisher information here is easily calculated to be

    I_θ = (ḟ_{a,θ})² / (f_{a,θ}(1 − f_{a,θ})).

We can immediately derive two easy but important conclusions. First, there is just one function f* for which the i.i.d. sampling strategy is as asymptotically efficient as the information-maximization strategy; for all other f, information maximization is strictly more efficient. The extremal function f* is obtained by setting σ_iid = σ_info, implying that I_{θ0}(x) is constant a.e. [p(x)], and so f* is the unique solution of the differential equation

    df*/dt = c (f*(t)(1 − f*(t)))^{1/2},

where the auxiliary constant c = √I_θ uniquely fixes the scale a. After some calculus, we obtain

    f*(t) = (sin(ct) + 1) / 2

on the interval [−π/2c, π/2c] (and defined uniquely, by monotonicity, as 0 or 1 outside this interval). Since the support of the derivative of this function is compact, this result is quite dependent on the sampling density p(x); if p(x) places any of its mass outside of the interval [−π/2c, π/2c], then σ²_iid is always strictly greater than σ²_info. This recapitulates a basic theme from the psychophysical literature comparing adaptive and nonadaptive techniques: when the scale of the nonlinearity f is either unknown or smaller than the scale of the i.i.d. sampling density p(x), adaptive techniques are greatly preferable.
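As a quick check of the "First" conclusion above: plugging f*(t) = (sin(ct) + 1)/2 into I_θ = (f′)²/(f(1 − f)) gives (c² cos²(ct)/4)/(cos²(ct)/4) = c², constant on the interior of the interval, exactly as the extremal property requires (the value of c below is an arbitrary choice of ours):

```python
import numpy as np

# Verify that f*(t) = (sin(ct) + 1)/2 makes the Fisher information
# I_theta = (f')^2 / (f (1 - f)) constant (equal to c^2) on the
# interior of [-pi/2c, pi/2c].
c = 1.3
t = np.linspace(-np.pi / (2 * c) + 1e-3, np.pi / (2 * c) - 1e-3, 1001)
fstar = (np.sin(c * t) + 1.0) / 2.0
dfstar = c * np.cos(c * t) / 2.0            # derivative of f*
I = dfstar ** 2 / (fstar * (1.0 - fstar))   # Fisher information at each t
assert np.allclose(I, c ** 2)               # constant, as the ODE requires
```

Any other sigmoid run through the same three lines produces a non-constant I, which is exactly why i.i.d. sampling then averages over sub-maximal information values.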
Second, a crude analysis shows that, as the scale a of the nonlinearity shrinks, the ratio σ2 iid/σ2 info grows approximately as 1/a; this gives quantitative support to the intuition that the sharper the nonlinearity with respect to the scale of the sampling distribution p(x), the more we can expect the information-maximization strategy to help. 3.2 Linear-nonlinear cascade model We now consider a model that has received increasing attention from the neurophysiology community (see, e.g., [8] for some analysis and relevant references). The model is of cascade form, with a linear stage followed by a nonlinear stage: the input space X is a compact subset of d-dimensional Euclidean space (take X to be the unit sphere, for concreteness), and the firing rate of the model cell, given input ⃗x ∈X, is given by the simple form E(y|⃗x, θ) = f(< ⃗θ, ⃗x >). Here the linear filter ⃗θ is some unit vector in X′, the dual space of X (thus, the model space Θ is isomorphic to X), while the nonlinearity f is some nonconstant, nonnegative function on [−1, 1]. We assume that f is uniformly smooth, to satisfy the conditions of theorem 2; we also assume f is known, although, again, this can be relaxed. The response space Y — the space of possible spike counts, given the stimulus ⃗x — can be taken to be the nonnegative integers. For simplicity, let the conditional probabilities p(y|⃗x, θ) be parametrized uniquely by the mean firing rate f(< ⃗θ, ⃗x >); the most convenient model, as usual, is to assume that p(y|⃗x, θ) is Poisson with mean f(< ⃗θ, ⃗x >). Finally, we assume that the sampling density p(x) is uniform on the unit sphere (this choice is natural for several reasons, mainly involving symmetry; see, e.g., [2, 8]), and that the prior p0(θ) is positive and continuous (and is therefore bounded away from zero, by the compactness of Θ). 
The Fisher information for this model is easily calculated as

    I_θ(x) = [ f′(⟨θ⃗, x⃗⟩)² / f(⟨θ⃗, x⃗⟩) ] P_{x⃗,θ},

where f′ is the usual derivative of the real function f and P_{x⃗,θ} is the projection operator corresponding to x⃗, restricted to the (d − 1)-dimensional tangent space to the unit sphere at θ. Theorem 2 now implies that

    σ²_info ≈ ( N max_{t∈[−1,1]} f′(t)² g(t) / f(t) )⁻¹,

while

    σ²_iid ≈ ( N ∫_{[−1,1]} dp(t) f′(t)² g(t) / f(t) )⁻¹,

where g(t) = 1 − t², p(t) denotes the one-dimensional marginal measure induced on the interval [−1, 1] by the uniform measure p(x) on the unit sphere, and σ² in each of these two expressions multiplies the (d − 1)-dimensional identity matrix. Clearly, the arguments of subsection 3.1 apply here as well: the ratio σ²_iid/σ²_info grows roughly linearly in the inverse of the scale of the nonlinearity. The more interesting asymptotics here, though, are in d. This is because the unit sphere has a measure concentration property [11]: as d → ∞, the measure p(t) becomes exponentially concentrated around 0. In fact, it is easy to show directly that, in this limit, p(t) converges in distribution to the normal measure with mean zero and variance d⁻². The most surprising implication of this result is seen for nonlinearities f such that f′(0) = 0, f(0) > 0; we have in mind, for example, symmetric nonlinearities like those often used to model complex cells in visual cortex. For these nonlinearities,

    σ²_info / σ²_iid = O(d⁻²);

that is, the information-maximization strategy becomes infinitely more efficient than the usual i.i.d. approach as the dimensionality of the spaces X and Θ grows.

4 A Negative Example

Our next example is more negative and perhaps more surprising: it shows how the information-maximization strategy can fail, in a certain sense, if the conditions of the consistency lemma are not met.
Let Θ be multidimensional, with coordinates which are "independent" in a certain sense, and assume the expected information obtained from one coordinate of the parameter remains bounded strictly away from the expected information obtained from one of the other coordinates. For instance, consider the following model:

    p(1|x) = .5      for −1 < x ≤ θ₋₁,
             f₋₁     for θ₋₁ < x ≤ 0,
             .5      for 0 < x ≤ θ₁,
             f₁      for θ₁ < x ≤ 1,

where 0 ≤ f₋₁, f₁ ≤ 1 and |f₋₁ − .5| > |f₁ − .5| are known, and −1 < θ₋₁ < 0 and 0 < θ₁ < 1 are the parameters we want to learn. Let the initial prior be absolutely continuous with respect to Lebesgue measure; this implies that all posteriors will have the same property. Then, using the inverse cumulative probability transform and the fact that mutual information is invariant with respect to invertible mappings, it is easy to show that the maximal information we can obtain by sampling from the left is strictly greater than the maximal information obtainable from the right, uniformly in N. Thus the information-maximization strategy will sample from x < 0 forever, leading to a linear information growth rate (and easily-proven consistency) for the left parameter and non-convergence on the right. Compare the performance of the usual i.i.d. approach for choosing x (using any Lebesgue-dominating measure on the parameter space), which leads to the standard root-N rate for both parameters (i.e., is strongly consistent in posterior probability). Note that this kind of inconsistency problem does not occur in the case of sufficiently smooth p(y|x, θ), by our main theorem. Thus one way of avoiding this problem would be to fix a finite sampling scale for each coordinate (i.e., discretizing). Below this scale, no information can be extracted; therefore, when the algorithm hits this "floor" for one coordinate, it will switch to the other.
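A one-step numerical illustration of this example (the grids and the specific values f₋₁ = 0.9, f₁ = 0.6 are our own): under a uniform prior, the conditional information I(y; θ|x) is maximized by an input on the left, x < 0, exactly as the argument predicts.

```python
import numpy as np

# One step of the negative example with f_{-1} = 0.9, f_1 = 0.6
# (so |f_{-1} - .5| > |f_1 - .5|): under a uniform prior, the
# conditional information I(y; theta | x) is maximized on the left.
f_neg, f_pos = 0.9, 0.6
tn = np.linspace(-0.95, -0.05, 19)   # candidate values of theta_{-1}
tp = np.linspace(0.05, 0.95, 19)     # candidate values of theta_1
prior = np.full(19, 1.0 / 19)        # uniform, for each coordinate

def Hb(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def info(x):
    """I(y; theta | x); a query at x only informs one coordinate."""
    if x <= 0:
        py = np.where(x <= tn, 0.5, f_neg)
    else:
        py = np.where(x <= tp, 0.5, f_pos)
    pbar = py @ prior
    return Hb(pbar) - Hb(py) @ prior

xs = np.linspace(-0.99, 0.99, 199)
infos = np.array([info(x) for x in xs])
assert xs[np.argmax(infos)] < 0                   # info-max queries the left
assert infos[xs < 0].max() > infos[xs > 0].max()  # left dominates the right
```

With a continuous prior this dominance persists at every N, which is the source of the non-convergence; on the finite grid above, the left side eventually "bottoms out" and the discretization fix described in the text kicks in.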
However, it is possible to find other examples which show that the lack of consistency is not necessarily tied to the discontinuous nature of the conditional densities. 5 Directions In this paper, we have presented a rigorous theoretical framework for adaptively designing experiments using an information-theoretic objective function. Most importantly, we have offered some asymptotic results which clarify the effectiveness of adaptive experiment design using the information-theoretic objective function (1); in addition, we expect that our asymptotic approximations should find applications in approximative computational schemes for optimizing stimulus choice during this type of online experiment. For example, our theorem 2 might suggest the use of a mixture-of-Gaussians representation as an efficient approximation for the posteriors pN(θ) [5]. It should be clear that we have left several important questions open. Perhaps the most obvious such question concerns the use of non-information theoretic objective functions. It turns out that many of our results apply with only modest changes if the experiment is instead designed to minimize something like the Bayes mean-square error (perhaps defined only locally if Θ has a nontrivial manifold structure), for example: in this case, the results in sections 3.1 and 3.2 remain completely unchanged, while the statement of our main theorem requires only slight changes in the asymptotic variance formula (see http://www.cns.nyu.edu/∼liam). Thus it seems our results here can add very little to any discussion of what objective function is “best” in general. We briefly describe a few more open research directions below. 5.1 “Batch mode” and stimulus dependencies Perhaps our strongest assumption here is that the experimenter will be able to freely choose the stimuli on each trial. 
This might be inaccurate for a number of reasons: for example, computational demands might require that experiments be run in “batch mode,” with stimulus optimization taking place not after every trial, but perhaps only after each batch of k stimuli, all chosen according to some fixed distribution p(x). Another common situation involves stimuli which vary temporally, for which the system is commonly modelled as responding not just to a given stimulus x(t), but also to all of its time-translates x(t − τ). Finally, if there is some cost $C(x_0, x_1)$ associated with changing the state of the observational apparatus from the current state $x_0$ to $x_1$, the experimenter may wish to optimize an objective function which incorporates this cost, for example $I(y; \theta|x_1) - C(x_0, x_1)$. Each of these situations is clearly ripe for further study. Here we restrict ourselves to the first setting, and give a simple conjecture, based on the asymptotic results presented above and inspired by results like those of [1, 4, 10]. First, we state more precisely the optimization problem inherent in designing a “batch” experiment: we wish to choose some sequence, $\{x_i\}_{1\le i\le k}$, to maximize $I(\{x_i, y_i\}_{1\le i\le k}; \theta)$; the main difference here is that $\{x_i\}_{1\le i\le k}$ must be chosen nonadaptively, i.e., without sequential knowledge of the responses $\{y_i\}$. Clearly, the order of any sequence of optimal $\{x_i\}_{1\le i\le k}$ is irrelevant to the above objective function; in addition, it should be apparent that if no given piece of data (x, y) is too strong (for example, under Lipschitz conditions like those in lemma 1), then any given elements of such an optimal sequence $\{x_i\}_{1\le i\le k}$ should be asymptotically independent. (Without such a smoothness condition — for example, if some input x could definitively decide between some given $\theta_0$ and $\theta_1$ — then no such asymptotic independence statement can hold, since no more than one sample from such an x would be necessary.)
Thus, we can hope that we should be able to asymptotically approximate this optimal experiment by sampling in an i.i.d. manner from some well-chosen p(x). Moreover, we can make a guess as to the identity of this putative p(x):

Conjecture (“Batch” mode). Under suitable conditions, the empirical distribution corresponding to any optimal sequence $\{x_i\}_{1\le i\le k}$,

$$\hat{p}_k(x) \equiv \frac{1}{k}\sum_{i=1}^{k}\delta(x_i),$$

converges weakly as $k \to \infty$ to S, the convex set of maximizers in p(x) of

$$E_\theta \log\left(\det(E_x I_\theta(x))\right). \qquad (2)$$

Expression (2) above is an average over p(θ) of terms proportional to the negative entropy of the asymptotic Gaussian posterior distribution corresponding to each θ, and thus should be maximized by any optimal approximant distribution p(x). (Note also that expression (2) is concave in p(x), ensuring the tractability of the above maximization.) In fact, it is not difficult, using the results of Clarke and Barron [3], to prove the above conjecture under conditions like those of Theorem 2, assuming that X is finite (in which case weak convergence is equivalent to pointwise convergence); we leave generalizations for future work.

Acknowledgements

We thank R. Sussman, E. Simoncelli, C. Machens, and D. Pelli for helpful conversations. This work was partially supported by a predoctoral fellowship from HHMI.

References

[1] J. Berger, J. Bernardo, and M. Mendoza. Bayesian Statistics 4, chapter On priors that maximize expected information, pages 35–60. Oxford University Press, 1989.
[2] E. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199–213, 2001.
[3] B. Clarke and A. Barron. Information-theoretic asymptotics of Bayes methods. IEEE Transactions on Information Theory, 36:453–471, 1990.
[4] B. Clarke and A. Barron. Jeffreys’ prior is asymptotically least favorable under entropy risk. Journal of Statistical Planning and Inference, 41:37–60, 1994.
[5] P. Deignan, P. Meckl, M. Franchek, J. Abraham, and S. Jaliwala.
Using mutual information to pre-process input data for a virtual sensor. In ACC, number ASME0043 in American Control Conference, 2000.
[6] L. Kontsevich and C. Tyler. Bayesian adaptive estimation of psychometric slope and threshold. Vision Research, 39:2729–2737, 1999.
[7] M. Mascaro and D. Bradley. Optimized neuronal tuning algorithm for multichannel recording. Unpublished abstract at http://www.compscipreprints.com/, 2002.
[8] L. Paninski. Convergence properties of some spike-triggered analysis techniques. Network: Computation in Neural Systems, 14:437–464, 2003.
[9] D. Pelli. The ideal psychometric procedure. Investigative Ophthalmology and Visual Science (Supplement), 28:366, 1987.
[10] H. R. Scholl. Shannon optimal priors on i.i.d. statistical experiments converge weakly to Jeffreys’ prior. Available at citeseer.nj.nec.com/104699.html, 1998.
[11] M. Talagrand. Concentration of measure and isoperimetric inequalities in product spaces. Publ. Math. IHES, 81:73–205, 1995.
[12] A. van der Vaart. Asymptotic Statistics. Cambridge University Press, Cambridge, 1998.
[13] A. Watson and A. Fitzhugh. The method of constant stimuli is inefficient. Perception and Psychophysics, 47:87–91, 1990.
[14] A. Watson and D. Pelli. QUEST: a Bayesian adaptive psychophysical method. Perception and Psychophysics, 33:113–120, 1983.
2003
Minimax embeddings

Matthew Brand
Mitsubishi Electric Research Labs, Cambridge MA 02139 USA

Abstract

Spectral methods for nonlinear dimensionality reduction (NLDR) impose a neighborhood graph on point data and compute eigenfunctions of a quadratic form generated from the graph. We introduce a more general and more robust formulation of NLDR based on the singular value decomposition (SVD). In this framework, most spectral NLDR principles can be recovered by taking a subset of the constraints in a quadratic form built from local nullspaces on the manifold. The minimax formulation also opens up an interesting class of methods in which the graph is “decorated” with information at the vertices, offering discrete or continuous maps, reduced computational complexity, and immunity to some solution instabilities of eigenfunction approaches. Apropos, we show almost all NLDR methods based on eigenvalue decompositions (EVD) have a solution instability that increases faster than problem size. This pathology can be observed (and corrected via the minimax formulation) in problems as small as N < 100 points.

1 Nonlinear dimensionality reduction (NLDR)

Spectral NLDR methods are graph embedding problems where a set of N points $X \doteq [x_1,\cdots,x_N] \in \mathbb{R}^{D\times N}$ sampled from a low-dimensional manifold in an ambient space $\mathbb{R}^D$ is reparameterized by imposing a neighborhood graph G on X and embedding the graph with minimal distortion in a “parameterization” space $\mathbb{R}^d$, d < D. Typically the graph is sparse and local, with edges connecting points to their immediate neighbors. The embedding must keep these edges short or preserve their length (for isometry) or angles (for conformality). The graph-embedding problem was first introduced as a least-squares problem by Tutte [1], and as an eigenvalue problem by Fiedler [2]. The use of sparse graphs to generate metrics for least-squares problems has been studied intensely in the following three decades (see [3]).
Modern NLDR methods use graph constraints to generate a metric in a space of embeddings $\mathbb{R}^N$. Eigenvalue decomposition (EVD) gives the directions of least or greatest variance under this metric. Typically a subset of d extremal eigenvectors gives the embedding of N points in the $\mathbb{R}^d$ parameterization space. This includes the IsoMap family [4], the locally linear embedding (LLE) family [5,6], and Laplacian methods [7,8]. Using similar methods, the Automatic Alignment [6] and Charting [9] algorithms embed local subspaces instead of points, and by combining subspace projections thus obtain continuous maps between $\mathbb{R}^D$ and $\mathbb{R}^d$. This paper introduces a general algebraic framework for computing optimal embeddings directly from graph constraints. The aforementioned methods can be recovered as special cases. The framework also suggests some new methods with very attractive properties, including continuous maps, reduced computational complexity, and control over the degree of conformality/isometry in the desired map. It also eliminates a solution instability that is intrinsic to EVD-based approaches. A perturbational analysis quantifies the instability.

2 Minimax theorem for graph embeddings

We begin with a neighborhood graph specified by a nondiagonal weighted adjacency matrix $M \in \mathbb{R}^{N\times N}$ that has the data-reproducing property XM = X (this can be relaxed to XM ≈ X in practice). The graph-embedding and NLDR literatures offer various constructions of M, each appropriate to different sets of assumptions about the original embedding and its sampling X (e.g., isometry, local linearity, noiseless samples, regular sampling, etc.). Typically $M_{ij} \ne 0$ if points i, j are nearby on the intrinsic manifold and $|M_{ij}|$ is small or zero otherwise.
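As a concrete illustration (a sketch of my own, not code from the paper), here is one of the simplest such constructions under the relaxed property XM ≈ X: each column of M carries uniform barycentric weights 1/k over the k nearest neighbors of the corresponding point.

```python
import numpy as np

def barycentric_M(X, k):
    """Column j of M carries weight 1/k on the k nearest neighbours of
    point j, so that (XM)[:, j] is the neighbourhood average of x_j."""
    # All pairwise squared distances between columns of X (D x N)
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    N = X.shape[1]
    M = np.zeros((N, N))
    for j in range(N):
        nbrs = np.argsort(D2[:, j])[1:k + 1]   # skip the point itself
        M[nbrs, j] = 1.0 / k
    return M

t = np.linspace(0, 1, 60)
X = np.vstack([t, np.cos(np.pi * t)])          # samples from a smooth curve
M = barycentric_M(X, k=4)
residual = np.linalg.norm(X @ M - X)           # small away from the endpoints
```

On this densely sampled curve the residual is small in the interior; LLE-style constructions instead solve for optimal reconstruction weights within each neighborhood rather than using uniform ones.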
Each point is taken to be a linear or convex combination of its neighbors, and thus M specifies manifold connectivity in the sense that any nondegenerate embedding Y that satisfies YM ≈ Y with small residual $\|YM - Y\|_F$ will preserve this connectivity and the structure of local neighborhoods. For example, in barycentric embeddings, each point is the average of its neighbors and thus $M_{ij} = 1/k$ if vertex i is connected to vertex j (of degree k). We will also consider three optional constraints on the embedding:

1. A null-space restriction, where the solution must be orthogonal to the column-space of $C \in \mathbb{R}^{N\times M}$, M < N. For example, it is common to stipulate that the solution Y be centered, i.e., YC = 0 for C = 1, the constant vector.

2. A basis restriction, where the solution must be a linear combination of the rows of a basis $Z \in \mathbb{R}^{K\times N}$, K ≤ N. This can be thought of as information placed at the vertices of the graph that serves as example inputs for a target NLDR function. We will use this to construct dimension-reducing radial basis function networks.

3. A metric $\Sigma \in \mathbb{R}^{N\times N}$ that determines how error is distributed over the points. For example, it might be important that boundary points have less error. We assume that Σ is symmetric positive definite and has factorization $\Sigma = AA^\top$ (e.g., A could be a Cholesky factor of Σ).

In most settings, the optional matrices will default to the identity matrix. In this context, we define the per-dimension embedding error of row-vector $y_i \in \text{rows}(Y)$, $y_i \in \text{range}(Z)$, to be

$$E_M(y_i) \doteq \max_{D \in \mathbb{R}^{M\times N}} \frac{\|(y_i(M + CD) - y_i)A\|}{\|y_i A\|} \qquad (1)$$

where D is a matrix constructed by an adversary to maximize the error. The optimizing $y_i$ is a vector inside the subspace spanned by the rows of Z and outside the subspace spanned by the columns of C, for which the reconstruction residual $y_iM - y_i$ has smallest norm w.r.t. the metric Σ.
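Anticipating the closed-form answer derived next (the minimax solution below), here is a NumPy sketch of the full recipe for small dense problems; the SVD-based null-space computation and the rank tolerance are my own implementation choices.

```python
import numpy as np

def minimax_embedding(M, Z, C, A, d):
    """Sketch of the minimax recipe: returns the d-dimensional embedding Y
    and the ascending singular values s (per-dimension errors E_M(y_i))."""
    K, N = Z.shape
    # Q: column-orthonormal basis for the null space of (ZC)^T
    if C.shape[1] > 0:
        _, sv, Vt = np.linalg.svd((Z @ C).T)
        rank = int((sv > 1e-10).sum())
        Q = Vt[rank:].T                          # K x P
    else:
        Q = np.eye(K)
    # Square factor B with B^T B = Q^T Z Sigma Z^T Q   (Sigma = A A^T)
    B = np.linalg.cholesky(Q.T @ Z @ A @ A.T @ Z.T @ Q).T
    Binv_T = np.linalg.inv(B).T
    U, s, _ = np.linalg.svd(Binv_T @ Q.T @ Z @ (np.eye(N) - M) @ A)
    order = np.argsort(s)                        # ascending singular values
    U, s = U[:, order], s[order]
    Y = U[:, :d].T @ Binv_T @ Q.T @ Z
    return Y, s

# Example: Laplacian-style embedding of a 24-cycle, centered via C = 1
N = 24
M = np.zeros((N, N))
for i in range(N):
    M[(i - 1) % N, i] = M[(i + 1) % N, i] = 0.5
Y, s = minimax_embedding(M, np.eye(N), np.ones((N, 1)), np.eye(N), d=2)
```

With Z = A = I and C = 1 this reduces to a robustified Laplacian-eigenmap-style embedding whose rows are exactly centered, and each row's residual ratio equals the corresponding singular value.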
The following theorem identifies the optimal embedding Y for any choice of M, Z, C, Σ:

Minimax solution: Let $Q \in S^{K\times P}$ be a column-orthonormal basis of the null-space of the rows of ZC, with P = K − rank(C). Let $B \in \mathbb{R}^{P\times P}$ be a square factor satisfying $B^\top B = Q^\top Z\Sigma Z^\top Q$, e.g., a Cholesky factor (or the “R” factor in a QR-decomposition of $(Q^\top ZA)^\top$). Compute the left singular vectors U of

$$U\,\text{diag}(s)V^\top = B^{-\top}Q^\top Z(I - M)A,$$

with singular values $s^\top \doteq [s_1,\cdots,s_P]$ ordered $s_1 \le s_2 \le \cdots \le s_P$. Using the leading columns $U_{1:d}$ of U, set $Y = U_{1:d}^\top B^{-\top}Q^\top Z$.

Theorem 1. Y is the optimal (minimax) embedding in $\mathbb{R}^d$ with error $\|[s_1,\cdots,s_d]\|_2$:

$$Y \doteq U_{1:d}^\top B^{-\top}Q^\top Z = \arg\min_{Y \in \mathbb{R}^{d\times N}} \sum_{y_i \in \text{rows}(Y)} E_M(y_i)^2 \quad \text{with} \quad E_M(y_i) = s_i. \qquad (2)$$

Appendix A develops the proof and other error measures that are minimized. Local NLDR techniques are easily expressed in this framework. When Z = A = I, C = [], and M reproduces X through linear combinations with $M^\top 1 = 1$, we recover LLE [5]. When Z = I, C = [], I − M is the normalized graph Laplacian, and A is a diagonal matrix of vertex degrees, we recover Laplacian eigenmaps [7]. When further Z = X we recover locality preserving projections [8].

3 Analysis and generalization of charting

The minimax construction of charting [9] takes some development, but offers an interesting insight into the above-mentioned methods. Recall that charting first solves for a set of local affine subspace axes $S_1 \in \mathbb{R}^{D\times d}, S_2,\cdots$ at offsets $\mu_1 \in \mathbb{R}^D, \mu_2,\cdots$ that best cover the data and vary smoothly over the manifold. Each subspace offers a chart—a local parameterization of the data by projection onto the local axes. Charting then constructs a weighted mixture of affine projections that merges the charts into a global parameterization. If the data manifold is curved, each projection will assign a point a slightly different embedding, so the error is measured as the variance of these proposed embeddings about their mean.
This maximizes consistency and tends to produce isometric embeddings; [9] discusses ways to explicitly optimize the isometry of the embedding. Under the assumption of isometry, the charting error is equivalent to the sum-squared displacements of an embedded point relative to its immediate neighbors (summed over all neighborhoods). To construct the same error criterion in the minimax setting, let $x_{i-k},\cdots,x_i,\cdots,x_{i+k}$ denote points in the ith neighborhood and let the columns of $V_i \in \mathbb{R}^{(2k+1)\times d}$ be an orthonormal basis of the rows of the local parameterization $S_i^\top [x_{i-k},\cdots,x_i,\cdots,x_{i+k}]$. Then a nonzero reparameterization will satisfy $[y_{i-k},\cdots,y_i,\cdots,y_{i+k}]V_iV_i^\top = [y_{i-k},\cdots,y_i,\cdots,y_{i+k}]$ if and only if it preserves the relative position of the points in the local parameterization. Conversely, any relative displacements of the points are isolated by the formula $[y_{i-k},\cdots,y_i,\cdots,y_{i+k}](I - V_iV_i^\top)$. Minimizing the Frobenius norm of this expression is thus equivalent to minimizing the local error in charting. We sum these constraints over all neighborhoods to obtain the constraint matrix

$$M = I - \sum_i F_i(I - V_iV_i^\top)F_i^\top,$$

where $(F_i)_{kj} = 1$ iff the jth point of the ith neighborhood is the kth point of the dataset. Because $V_iV_i^\top$ and $(I - V_iV_i^\top)$ are complementary, it follows that the error criterion of any local NLDR method (e.g., LLE, Laplacian eigenmaps, etc.) must measure the projection of the embedding onto some subspace of $(I - V_iV_i^\top)$. To construct a continuous map, charting uses an overcomplete radial basis function (RBF) representation $Z = [z(x_1), z(x_2),\cdots z(x_N)]$, where z(x) is a vector that stacks $z_1(x), z_2(x)$, etc., and

$$z_m(x) \doteq \begin{bmatrix} K_m^\top(x - \mu_m) \\ 1 \end{bmatrix} \frac{p_m(x)}{\sum_m p_m(x)}, \qquad (3)$$

$$p_m(x) \doteq \mathcal{N}(x|\mu_m,\Sigma_m) \propto e^{-(x-\mu_m)^\top\Sigma_m^{-1}(x-\mu_m)/2}, \qquad (4)$$

and $K_m$ is any local linear dimensionality reducer, typically $S_m$ itself. Each column of Z contains many “views” of the same point that are combined to give its low-dimensional embedding.
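A sketch of the decorated-vertex features of eqs. (3)–(4); the kernel placements are toy values of my own choosing, and $K_m$ is taken as the identity rather than fitted local axes $S_m$.

```python
import numpy as np

def rbf_features(x, mus, Sigmas, Ks):
    """Feature vector z(x) per eqs. (3)-(4): stacks [K_m^T (x - mu_m); 1]
    weighted by the normalized Gaussian responsibilities p_m(x)."""
    p = np.array([np.exp(-0.5 * (x - mu) @ np.linalg.inv(S) @ (x - mu))
                  for mu, S in zip(mus, Sigmas)])
    w = p / p.sum()                          # responsibilities sum to 1
    parts = [np.append(K.T @ (x - mu), 1.0) * wm
             for K, mu, wm in zip(Ks, mus, w)]
    return np.concatenate(parts)

# Toy usage: three kernels in the plane, identity local reducers
mus = [np.zeros(2), np.ones(2), np.array([2.0, 0.0])]
Sigmas = [np.eye(2)] * 3
Ks = [np.eye(2)] * 3
z = rbf_features(np.array([0.5, 0.5]), mus, Sigmas, Ks)
# z has (d + 1) entries per kernel; the constant slots carry the weights
```

Stacking one such z(x) per data point as the columns of Z yields the overcomplete representation used above.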
Finally, we set C = 1, which forces the embedding of the full data to be centered. Applying the minimax solution to these constraints yields the RBF network mixing matrix, $f(x) \doteq U_{1:d}^\top B^{-\top}Q^\top z(x)$. Theorem 1 guarantees that the resulting embedding is least-squares optimal w.r.t. Z, M, C, A at the datapoints $f(x_i)$, and because f(·) is an affine transform of z(·) it smoothly interpolates the embedding between points. There are some interesting variants:

[Fig. 1. Minimax and generalized EVD solutions for a kernel eigenmap of a non-developable (twisted) swiss roll; panels show the generalized EVD and minimax SVD embeddings with lower-left and upper-right corner details. Points are connected into a grid which ideally should be regular. The EVD solution shows substantial degradation. Insets detail corners where the EVD solution crosses itself repeatedly. The border compression is characteristic of Laplacian constraints.]

One-shot charting: If we set the local dimensionality reducers to the identity matrix (all $K_m = I$), then the minimax method jointly optimizes the local dimensionality reduction to charts and the global coordination of the charts (under any choice of M). This requires that rows(Z) ≤ N for a fully determined solution.

Discrete isometric charting: If Z = I then we directly obtain a discrete isometric embedding of the data, rather than a continuous map, making this a local equivalent of IsoMap.

Reduced basis charting: Let Z be constructed using just a small number of kernels randomly placed on the data manifold, such that rows(Z) ≪ N. Then the size of the SVD problem is substantially reduced.

4 Numerical advantage of minimax method

Note that the minimax method projects the constraint matrix M into a subspace derived from C and Z and decomposes it there. This suppresses unwanted degrees of freedom (DOFs) admitted by the problem constraints, for example the trivial $\mathbb{R}^0$ embedding where all points are mapped to a single point $y_i = N^{-1/2}$.
The $\mathbb{R}^0$ embedding serves as a translational DOF in the solution. LLE- and eigenmap-based methods construct M to have a constant null-space so that the translational DOF will be isolated in the EVD as a null eigenvalue paired to a constant eigenvector, which is then discarded. However, section 4.1 shows that this construction makes the EVD increasingly unstable as problem size grows and/or the data becomes increasingly amenable to low-residual embeddings, ultimately causing solution collapse. As the next paragraph demonstrates, the problem is exacerbated when embedding w.r.t. a basis Z (via the equivalent generalized eigenproblem), partly because the eigenvector associated with the unwanted DOF can have arbitrary structure. In all cases the problem can be averted by using the minimax formulation with C = 1 to suppress the DOF. A 2D plane was embedded in 3D with a curl, a twist, and 2.5% Gaussian noise, then regularly sampled at 900 points. We computed a kernelized Laplacian eigenmap using 70 random points as RBF centers, i.e., a continuous map using M derived from the graph Laplacian and Z constructed as above. The map was computed both via the minimax (SVD) method and via the equivalent generalized eigenproblem, where the translational degree of freedom must be removed by discarding an eigenvector from the solution. The two solutions are algebraically equivalent in every other regard. A variety of eigensolvers were tried; we took the best result.

[Fig. 2. Excess energy in the eigenspectrum indicates that the translational DOF has contaminated many eigenvectors. If the EVD had successfully isolated the unwanted DOF, then its remaining eigenvalues should be identical to those derived from the minimax solution. The graph at left shows the difference in the eigenspectra. The graph at right shows the EVD solution’s deviation from the translational vector $y_0 = 1 \cdot N^{-1/2} \approx .03333$. If the numerics were perfect the line would be flat, but in practice the deviation is significant enough (roughly 1% of the diameter of the embedding) to noticeably perturb points in figure 1.]

Figure 1 shows that the EVD solution exhibits many defects, particularly a folding-over of the manifold at the top and bottom edges and at the corners. Figure 2 shows that the noisiness of the EVD solution is due largely to mutual contamination of numerically unstable eigenvectors.

4.1 Numerical instability of eigen-methods

The following theorem uses tools of matrix perturbation theory to show that as the problem size increases, the desired and unwanted eigenvectors become increasingly wobbly and gradually contaminate each other, leading to degraded solutions. More precisely, the low-order eigenvalues are ill-conditioned and exhibit multiplicities that may be true (due to noiseless samples from low-curvature manifolds) or false (due to numerical noise). Although in many cases some post-hoc algebra can “filter” the unwanted components out of the contaminated eigensolution, it is not hard to construct cases where the eigenvectors cannot be cleanly separated. The minimax formulation is immune to this problem because it explicitly suppresses the gratuitous component(s) before matrix decomposition.

Theorem 2.
For any finite numerical precision, as the number of points N increases, the Frobenius norm of numerical noise in the null eigenvector $v_0$ can grow as $O(N^{3/2})$, and the eigenvalue problem can approach a false multiplicity at a rate as fast as $O(N^{3/2})$, at which point the eigenvectors of interest—embedding and translational—are mutually contaminated and/or have an indeterminate eigenvalue ordering. Please see appendix B for the proof. This theorem essentially lower-bounds an upper bound on error; examples can be constructed in which the problem is worse. For example, it can be shown analytically that when embedding points drawn from the simple curve $x_i = [a, \cos\pi a]^\top$, $a \in [0,1]$ with K = 2 neighbors, instabilities cannot be bounded better than $O(N^{5/2})$; empirically we see eigenvector mixing with N < 100 points and we see it grow at the rate $\approx O(N^4)$—in many different eigensolvers. At very large scales, more pernicious instabilities set in. E.g., by N = 20000 points, the solution begins to fold over. Although algebraic multiplicity and instability of the eigenproblem is conceptually a minor oversight in the algorithmic realizations of eigenfunction embeddings, as theorem 2 shows, the consequences are eventually fatal.

5 Summary

One of the most appealing aspects of the spectral NLDR literature is that algorithms are usually motivated from analyses of linear operators on smooth differentiable manifolds, e.g., [7]. Understandably, these analyses rely on assumptions (e.g., smoothness or isometry or noiseless sampling) that make it difficult to predict what algorithmic realizations will do when real, noisy data violates these assumptions. The minimax embedding theorem provides a complete algebraic characterization of this discrete NLDR problem, and provides a solution that recovers numerically robustified versions of almost all known algorithms.
It offers a principled way of constructing new algorithms with clear optimality properties and good numerical conditioning—notably the construction of a continuous NLDR map (an RBF network) in a one-shot optimization (SVD). We have also shown how to cast several local NLDR principles in this framework, and upgrade these methods to give continuous maps. Working in the opposite direction, we sketched the minimax formulation of isometric charting and showed that its constraint matrix contains a superset of all the algebraic constraints used in local NLDR techniques.

References

1. W.T. Tutte. How to draw a graph. Proc. London Mathematical Society, 13:743–768, 1963.
2. Miroslav Fiedler. A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czech. Math. Journal, 25:619–633, 1975.
3. Fan R.K. Chung. Spectral graph theory, volume 92 of CBMS Regional Conference Series in Mathematics. American Mathematical Society, 1997.
4. Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, December 22 2000.
5. Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, December 22 2000.
6. Yee Whye Teh and Sam T. Roweis. Automatic alignment of hidden representations. In Proc. NIPS-15, 2003.
7. Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Volume 14 of Advances in Neural Information Processing Systems, 2002.
8. Xiaofei He and Partha Niyogi. Locality preserving projections. Technical Report TR-2002-09, University of Chicago Computer Science, October 2002.
9. Matthew Brand. Charting a manifold. Volume 15 of Advances in Neural Information Processing Systems, 2003.
10. G.W. Stewart and Ji-Guang Sun. Matrix perturbation theory. Academic Press, 1990.
A Proof of minimax embedding theorem (1)

The burden of this proof is carried by supporting lemmas, below. To emphasize the proof strategy, we give the proof first; supporting lemmas follow.

Proof. Setting $y_i = l_i^\top Z$, we will solve for $l_i \in \text{columns}(L)$. Writing the error in terms of $l_i$,

$$E_M(l_i) = \max_{K \in \mathbb{R}^{M\times N}} \frac{\|l_i^\top Z(I - M - CK)A\|}{\|l_i^\top ZA\|} = \max_{K \in \mathbb{R}^{M\times N}} \frac{\|l_i^\top Z(I - M)A - l_i^\top ZCKA\|}{\|l_i^\top ZA\|}. \qquad (5)$$

The term $l_i^\top ZCKA$ produces infinite error unless $l_i^\top ZC = 0$, so we accept this as a constraint and seek

$$\min_{l_i^\top ZC = 0} \frac{\|l_i^\top Z(I - M)A\|}{\|l_i^\top ZA\|}. \qquad (6)$$

By lemma 1, that orthogonality is satisfied by solving the problem in the space orthogonal to ZC; the basis for this space is given by the columns of $Q \doteq \text{null}((ZC)^\top)$. By lemma 2, the denominator of the error specifies the metric in solution space to be $ZAA^\top Z^\top$; when the problem is projected into the space orthogonal to ZC it becomes $Q^\top(ZAA^\top Z^\top)Q$. Nesting the “orthogonally-constrained-SVD” construction of lemma 1 inside the “SVD-under-a-metric” lemma 2, we obtain a solution that uses the correct metric in the orthogonal space:

$$B^\top B = Q^\top ZAA^\top Z^\top Q \qquad (7)$$
$$U\,\text{diag}(s)V^\top = B^{-\top}\{Q^\top(Z(I - M)A)\} \qquad (8)$$
$$L = QB^{-1}U \qquad (9)$$

where braces indicate the nesting of lemmas. By the “best-projection” lemma (#3), if we order the singular values by ascending magnitude,

$$L_{1:d} = \arg\min_{J \in \mathbb{R}^{N\times d}} \sqrt{\sum_{j_i \in \text{cols}(J)} \left(\|j_i^\top Z(I - M)A\| / \|j_i\|_{Z\Sigma Z^\top}\right)^2}. \qquad (10)$$

The proof is completed by making the substitutions $L^\top Z \to Y^\top$ and $\|x^\top A\| \to \|x\|_\Sigma$ (for $\Sigma = AA^\top$), and leaving off the final square root operation to obtain

$$(Y^\top)_{1:d} = \arg\min_{J \in \mathbb{R}^{N\times d}} \sum_{j_i \in \text{cols}(J)} \left(\|j_i^\top(I - M)\|_\Sigma / \|j_i\|_\Sigma\right)^2. \qquad (11)$$

Lemma 1. Orthogonally constrained SVD: The left singular vectors L of matrix M under the constraint $L^\top C = 0$ are calculated as $Q \doteq \text{null}(C^\top)$, $U\,\text{diag}(s)V^\top \xleftarrow{\text{SVD}} Q^\top M$, $L = QU$.

Proof. First observe that L is orthogonal to C: by definition, the null-space basis satisfies $Q^\top C = 0$, thus $L^\top C = U^\top Q^\top C = 0$. Let J be an orthonormal basis for C, with $J^\top J = I$ and $Q^\top J = 0$.
Then $L\,\text{diag}(s)V^\top = QQ^\top M = (I - JJ^\top)M$, the orthogonal projector of C applied to M, proving that the SVD captures the component of M that is orthogonal to C.

Lemma 2. SVD with respect to a metric: The vectors $l_i \in L$, $v_i \in V$ that diagonalize matrix M with respect to the positive definite column-space metric Σ are calculated as $B^\top B \leftarrow \Sigma$, $U\,\text{diag}(s)V^\top \xleftarrow{\text{SVD}} B^{-\top}M$, $L \doteq B^{-1}U$; they satisfy $\|l_i^\top M\|/\|l_i\|_\Sigma = s_i$ and extremize this form for the extremal singular values $s_{\min}, s_{\max}$.

Proof. By construction, L and V diagonalize M:

$$L^\top MV = (B^{-1}U)^\top MV = U^\top(B^{-\top}M)V = \text{diag}(s) \qquad (12)$$

and $\text{diag}(s)V^\top = B^{-\top}M$. Forming the Gram matrices of both sides of the last line, we obtain the identity $V\,\text{diag}(s)^2V^\top = M^\top B^{-1}B^{-\top}M = M^\top\Sigma^{-1}M$, which demonstrates that the $s_i \in s$ are the singular values of M w.r.t. the column-space metric Σ. Finally, L is orthonormal w.r.t. the metric Σ, because $\|L\|_\Sigma^2 = L^\top\Sigma L = U^\top B^{-\top}B^\top BB^{-1}U = I$. Consequently,

$$\|l_i^\top M\|/\|l_i\|_\Sigma = \|l_i^\top M\|/1 = \|s_i v_i^\top\| = s_i, \qquad (13)$$

and by the Courant–Hilbert theorem,

$$s_{\max} = \max_l \|l^\top M\|/\|l\|_\Sigma; \qquad s_{\min} = \min_l \|l^\top M\|/\|l\|_\Sigma. \qquad (14)$$

Lemma 3. Best projection: Taking L and s from lemma 2, let the columns of L and the elements of s be sorted so that $s_1 \ge s_2 \ge \cdots \ge s_N$. Then for any dimensionality $1 \le d \le N$,

$$L_{1:d} \doteq [l_1,\cdots,l_d] = \arg\max_{J \in \mathbb{R}^{N\times d}} \|J^\top M\|_{(J^\top\Sigma J)^{-1}} \qquad (15)$$
$$= \arg\max_{J \in \mathbb{R}^{N\times d} \,|\, J^\top\Sigma J = I} \|J^\top M\|_F \qquad (16)$$
$$= \arg\max_{J \in \mathbb{R}^{N\times d}} \sqrt{\sum_{j_i \in \text{cols}(J)} (\|j_i^\top M\|/\|j_i\|_\Sigma)^2} \qquad (17)$$

with the optimum value of all right-hand sides being $(\sum_{i=1}^{d} s_i^2)^{1/2}$. If the sort order is reversed, the minimum of this form is obtained.

Proof. By the Eckart–Young–Mirsky theorem, if $U^\top MV = \text{diag}(s)$ with singular values sorted in descending order, then $U_{1:d} \doteq [u_1,\cdots,u_d] = \arg\max_{U \in S^{N\times d}} \|U^\top M\|_F$. We first extend this to a non-orthogonal basis J under a Mahalanobis norm:

$$\max_{J \in \mathbb{R}^{N\times d}} \|J^\top M\|_{(J^\top J)^{-1}} = \max_{U \in S^{N\times d}} \|U^\top M\|_F \qquad (18)$$

because $\|J^\top M\|^2_{(J^\top J)^{-1}} = \text{trace}(M^\top J(J^\top J)^{-1}J^\top M) = \text{trace}(M^\top JJ^+(JJ^+)^\top M) = \|(JJ^+)M\|_F^2 = \|UU^\top M\|_F^2 = \|U^\top M\|_F^2$, since $JJ^+$ is a (symmetric) orthogonal projector having binary eigenvalues $\lambda \in \{0,1\}$ and is therefore the Gram matrix of a thin orthogonal matrix.
We then impose a metric Σ on the column-space of J to obtain the first criterion (equation 15), which asks what maximizes variance in $J^\top M$ while minimizing the norm of J w.r.t. the metric Σ. Here it suffices to substitute in the leading (resp., trailing) columns of L and verify that the norm is maximized (resp., minimized). Expanding,

$$\|L_{1:d}^\top M\|^2_{(L_{1:d}^\top\Sigma L_{1:d})^{-1}} = \text{trace}\big((L_{1:d}^\top M)^\top(L_{1:d}^\top\Sigma L_{1:d})^{-1}(L_{1:d}^\top M)\big) = \text{trace}\big((L_{1:d}^\top M)^\top I (L_{1:d}^\top M)\big) = \text{trace}\big((\text{diag}(s_{1:d})V_{1:d}^\top)^\top(\text{diag}(s_{1:d})V_{1:d}^\top)\big) = \|s_{1:d}\|^2.$$

Again, by the Eckart–Young–Mirsky theorem, these are the maximal variance-preserving projections, so the first criterion is indeed maximized by setting J to the columns in L corresponding to the largest values in s. Criterion #2 restates the first criterion with the set of candidates for J restricted to (the hyperelliptical manifold of) matrices that reduce the metric on the norm to the identity matrix (thereby recovering the Frobenius norm). Criterion #3 merely expands the above trace by individual singular values. Note that the numerator and denominator can have different metrics because they are norms in different spaces, possibly of different dimension. Finally, that the trailing d eigenvectors minimize these criteria follows directly from the fact that the leading N − d singular values account for the maximal part of the variance.

B Proof of instability theorem (2)

Proof. When generated from a sparse graph with average degree K, the weighted connectivity matrix W is sparse and has O(NK) entries. Since the graph vertices represent samples from a smooth manifold, increasing the sampling density N does not change the distribution of magnitudes in W. Consider a perturbation of the nonzero values in W, e.g., W → W + E due to numerical noise E created by finite machine precision. By the weak law of large numbers, the Frobenius norm of the sparse perturbation grows as $\|E\|_F \sim O(\sqrt{N})$.
However the t-th smallest nonzero eigenvalue $\lambda_t(W)$ scales as $\lambda_t(W) = v_t^\top W v_t \sim O(N^{-1})$, because elements of the corresponding eigenvector $v_t$ scale as $O(N^{-1/2})$ and only K of those elements are multiplied by nonzero values to form each element of $Wv_t$. In sum, the perturbation $\|E\|_F$ grows while the eigenvalue $\lambda_t(W)$ shrinks. In linear embedding algorithms, the eigengap of interest is $\lambda_{\text{gap}} \doteq \lambda_1 - \lambda_0$. The tail eigenvalue $\lambda_0 = 0$ by construction, but it is possible that $\lambda_0 > 0$ with numerical error, thus $\lambda_{\text{gap}} \le \lambda_1$. Combining these facts, the ratio between the perturbation and the eigengap grows as $\|E\|_F/\lambda_{\text{gap}} \sim O(N^{3/2})$ or faster. Now consider the shifted eigenproblem I − W with leading (maximal) eigenvalues $1 - \lambda_0 \ge 1 - \lambda_1 \ge \cdots$ and unchanged eigenvectors. From matrix perturbation theory [10, thm. V.2.8], when W is perturbed to $W' \doteq W + E$, the change in the leading eigenvalue from $1 - \lambda_0$ to $1 - \lambda_0'$ is bounded as $|\lambda_0' - \lambda_0| \le \sqrt{2}\|E\|_F$ and similarly $1 - \lambda_1' \le 1 - \lambda_1 + \sqrt{2}\|E\|_F$. Thus $\lambda_{\text{gap}}' \ge \lambda_{\text{gap}} - \sqrt{2}\|E\|_F$. Since $\|E\|_F/\lambda_{\text{gap}} \sim O(N^{3/2})$, the right hand side of the gap bound goes negative at a supralinear rate, implying that the eigenvalue ordering eventually becomes unstable with the possibility of the first and second eigenvalue/vector pairs being swapped. Mutual contamination of the eigenvectors happens well before: under general (dense) conditions, the change in the eigenvector $v_0$ is bounded as

$$\|v_0' - v_0\| \le \frac{4\|E\|_F}{|\lambda_0 - \lambda_1| - \sqrt{2}\|E\|_F}$$

[10, thm. V.2.8]. (This bound is often tight enough to serve as a good approximation.) Specializing this to the sparse embedding matrix, we find that the bound weakens to

$$\|v_0' - 1 \cdot N^{-1/2}\| \sim \frac{O(\sqrt{N})}{O(N^{-1}) - O(\sqrt{N})} > \frac{O(\sqrt{N})}{O(N^{-1})} = O(N^{3/2}).$$
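The scaling argument above can be checked empirically. The sketch below is a toy setup of my own: a k-regular cycle graph with barycentric weights, and a fixed `eps` standing in for machine-precision noise on the sparse support. It shows the eigengap of I − M shrinking while the noise-to-gap ratio grows supralinearly with N; the exact exponents depend on the graph.

```python
import numpy as np

def gap_vs_perturbation(N, k=2, eps=1e-8):
    """Eigengap of I - M for a k-regular cycle graph vs. the Frobenius
    norm of an eps-level perturbation on the nonzero entries of M."""
    M = np.zeros((N, N))
    for i in range(N):
        for j in range(1, k + 1):
            M[i, (i - j) % N] = M[i, (i + j) % N] = 1.0 / (2 * k)
    evals = np.sort(np.linalg.eigvalsh(np.eye(N) - M))
    gap = evals[1]                                # lambda_0 = 0 by construction
    E_norm = eps * np.sqrt(np.count_nonzero(M))   # ||E||_F ~ O(sqrt(N))
    return gap, E_norm / gap

gaps, ratios = zip(*(gap_vs_perturbation(N) for N in (100, 400, 1600)))
# gap shrinks with N while the noise/gap ratio grows supralinearly
```

On this graph the gap shrinks like $O(N^{-2})$ while the perturbation norm grows like $O(\sqrt{N})$, so the ratio that controls eigenvector contamination blows up even faster than the $O(N^{3/2})$ rate bounded in the proof.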
Efficient and Robust Feature Extraction by Maximum Margin Criterion

Haifeng Li, Tao Jiang
Department of Computer Science, University of California, Riverside, CA 92521
{hli,jiang}@cs.ucr.edu

Keshu Zhang
Department of Electrical Engineering, University of New Orleans, New Orleans, LA 70148
kzhang1@uno.edu

Abstract

A new feature extraction criterion, maximum margin criterion (MMC), is proposed in this paper. This new criterion is general in the sense that, when combined with a suitable constraint, it can actually give rise to the most popular feature extractor in the literature, linear discriminant analysis (LDA). We derive a new feature extractor based on MMC using a different constraint that does not depend on the nonsingularity of the within-class scatter matrix $S_w$. Such a dependence is a major drawback of LDA especially when the sample size is small. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in this paper. Our preliminary experimental results on face images demonstrate that the new feature extractors are efficient and stable.

1 Introduction

In statistical pattern recognition, high dimensionality is a major cause of the practical limitations of many pattern recognition technologies. In the past several decades, many dimensionality reduction techniques have been proposed. Linear discriminant analysis (LDA, also called Fisher’s Linear Discriminant) [1] is one of the most popular linear dimensionality reduction methods. In many applications, LDA has been proven to be very powerful.
LDA is given by a linear transformation matrix W ∈ R^{D×d} maximizing the so-called Fisher criterion (a kind of Rayleigh coefficient)

J_F(W) = (W^T S_b W) / (W^T S_w W)    (1)

where S_b = Σ_{i=1}^c p_i (m_i − m)(m_i − m)^T and S_w = Σ_{i=1}^c p_i S_i are the between-class scatter matrix and the within-class scatter matrix, respectively; c is the number of classes; m_i and p_i are the mean vector and a priori probability of class i, respectively; m = Σ_{i=1}^c p_i m_i is the overall mean vector; S_i is the within-class scatter matrix of class i; D and d are the dimensionalities of the data before and after the transformation, respectively. To maximize (1), the transformation matrix W must be constituted by the eigenvectors of S_w^{-1} S_b corresponding to its largest eigenvalues. The purpose of LDA is to maximize the between-class scatter while simultaneously minimizing the within-class scatter. The two-class LDA has a close connection to optimal linear Bayes classifiers. In the two-class case, the transformation matrix W is just a vector, which is in the same direction as the discriminant in the corresponding optimal Bayes classifier. However, it has been shown that LDA is suboptimal for multi-class problems [2]. A major drawback of LDA is that it cannot be applied when S_w is singular due to the small sample size problem [3]. The small sample size problem arises whenever the number of samples is smaller than the dimensionality of the samples. For example, a 64 × 64 image in a face recognition system has 4096 dimensions, which requires more than 4096 training samples to ensure that S_w is nonsingular. So, LDA is not a stable method in practice when the training data are scarce. In recent years, many researchers have noticed this problem and tried to overcome the computational difficulty with LDA. Tian et al. [4] used the pseudo-inverse matrix S_w^+ instead of the inverse matrix S_w^{-1}. For the same purpose, Hong and Yang [5] tried to add a singular value perturbation to S_w to make it nonsingular.
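For reference, the standard LDA solution above (eigenvectors of S_w^{-1} S_b with the largest eigenvalues) can be sketched numerically. This is a minimal illustration with NumPy on synthetic data, not the authors' code; it assumes S_w is nonsingular, which is exactly the caveat discussed in the text:

```python
import numpy as np

def lda_directions(X, y, d):
    """Top-d LDA directions: eigenvectors of Sw^{-1} Sb with largest eigenvalues."""
    n, D = X.shape
    m = X.mean(axis=0)
    Sb = np.zeros((D, D))
    Sw = np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c]
        p = len(Xc) / n                              # a priori class probability
        mc = Xc.mean(axis=0)
        Sb += p * np.outer(mc - m, mc - m)
        Sw += p * (Xc - mc).T @ (Xc - mc) / len(Xc)  # p_i * S_i
    # Requires Sw nonsingular -- the small-sample-size limitation of LDA.
    evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs[:, order[:d]].real

# Two classes in R^4 separated along the first coordinate
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)),
               rng.normal(0, 1, (50, 4)) + np.array([3.0, 0, 0, 0])])
y = np.repeat([0, 1], 50)
W = lda_directions(X, y, 1)
print(W.shape)  # (4, 1)
```

Since the classes differ only along the first axis, the leading LDA direction is dominated by that coordinate.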
Neither of these methods is theoretically sound, because Fisher's criterion is not valid when S_w is singular. When S_w is singular, any positive S_b makes Fisher's criterion infinitely large. Thus, these naive attempts to calculate the (pseudo or approximate) inverse of S_w may lead to arbitrary (meaningless) results. Besides, it is also known that an eigenvector can be very sensitive to small perturbations if its corresponding eigenvalue is close to another eigenvalue of the same matrix [6]. In 1992, Liu et al. [7] modified Fisher's criterion by using the total scatter matrix S_t = S_b + S_w as the denominator instead of S_w. It has been proven that the modified criterion is exactly equivalent to Fisher's criterion. However, when S_w is singular, the modified criterion reaches its maximum value (i.e., 1) no matter what the transformation W is. Such an arbitrary transformation cannot guarantee maximum class separability unless W^T S_b W is maximized. Besides, this method still needs to calculate an inverse matrix, which is time consuming. In 2000, Chen et al. [8] proposed the LDA+PCA method. When S_w is of full rank, the LDA+PCA method simply calculates the eigenvectors of S_t^{-1} S_b with the largest eigenvalues to form the transformation matrix. Otherwise, a two-stage procedure is employed. First, the data are transformed into the null space V0 of S_w. Second, it tries to maximize the between-class scatter in V0, which is accomplished by performing principal component analysis (PCA) on the between-class scatter matrix in V0. Although this method solves the small sample size problem, it is obviously suboptimal because it maximizes the between-class scatter in the null space of S_w instead of the original input space. Besides, the performance of the LDA+PCA method drops significantly when n − c is close to the dimensionality D, where n is the number of samples and c is the number of classes.
The reason is that the dimensionality of the null space V0 is too small in this situation, and too much information is lost when we try to extract the discriminant vectors in V0. LDA+PCA also needs to calculate the rank of S_w, which is an ill-defined operation due to floating-point imprecision. Finally, this method is complicated and slow because too much calculation is involved. Kernel Fisher's Discriminant (KFD) [9] is a well-known nonlinear extension of LDA. The instability problem is more severe for KFD because S_w in the (nonlinear) feature space F is always singular (the rank of S_w is n − c). Similar to [5], KFD simply adds a perturbation µI to S_w. Of course, it has the same stability problem as [5], because eigenvectors are sensitive to small perturbations. Although the authors also argued that this perturbation acts as some kind of regularization, i.e., a capacity control in F, the real influence of this regularization in this setting is not yet fully understood. Besides, it is hard to determine an optimal µ since there are no theoretical guidelines. In this paper, a simpler, more efficient, and stable method is proposed to calculate the most discriminant vectors based on a new feature extraction criterion, the maximum margin criterion (MMC). Based on MMC, new linear and nonlinear feature extractors are established. It can be shown that MMC represents class separability better than PCA. As a connection to Fisher's criterion, we may also derive LDA from MMC by incorporating a suitable constraint. On the other hand, the new feature extractors derived from MMC do not suffer from the small sample size problem, which is known to cause serious stability problems for LDA (based on Fisher's criterion). Unlike LDA+PCA, the new feature extractors based on MMC maximize the between-class scatter in the input space instead of the null space of S_w.
Hence, they have a better overall performance than LDA+PCA, as confirmed by our preliminary experimental results.

2 Maximum Margin Criterion Suppose that we are given the empirical data

(x_1, y_1), ..., (x_n, y_n) ∈ X × {C_1, ..., C_c}

Here, the domain X ⊆ R^D is some nonempty set from which the patterns x_i are taken. The y_i are called labels or targets. By studying these samples, we want to predict the label y ∈ {C_1, ..., C_c} of some new pattern x ∈ X. In other words, we choose y such that (x, y) is in some sense similar to the training examples. For this purpose, some measure needs to be employed to assess similarity or dissimilarity. We want to keep as much of this similarity/dissimilarity information as possible after the dimensionality reduction, i.e., after transforming x from R^D to R^d, where d ≪ D. If some distance metric is used to measure the dissimilarity, we would hope that a pattern is close to those in the same class but far from those in different classes. So, a good feature extractor should maximize the distances between classes after the transformation. Therefore, we may define the feature extraction criterion as

J = (1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j d(C_i, C_j)    (2)

We call (2) the maximum margin criterion (MMC). It is actually the summation of the c(c−1)/2 interclass margins. Like the weighted pairwise Fisher's criteria in [2], one may also define a weighted maximum margin criterion; due to the page limit, we omit that discussion in this paper. One may use the distance between mean vectors as the distance between classes, i.e.

d(C_i, C_j) = d(m_i, m_j)    (3)

where m_i and m_j are the mean vectors of the classes C_i and C_j, respectively. However, (3) is not suitable since it neglects the scatter of the classes. Even if the distance between the mean vectors is large, it is not easy to separate two classes that have a large spread and overlap with each other.
By considering the scatter of classes, we define the interclass distance (or margin) as

d(C_i, C_j) = d(m_i, m_j) − s(C_i) − s(C_j)    (4)

where s(C_i) is some measure of the scatter of the class C_i. In statistics, we usually use the generalized variance |S_i| or the overall variance tr(S_i) to measure the scatter of data. In this paper, we use the overall variance tr(S_i) because it is easy to analyze. The weakness of the overall variance is that it ignores the covariance structure altogether. Note that, by employing the overall/generalized variance, the expression (4) measures the "average margin" between two classes, whereas the minimum margin is used in support vector machines (SVMs) [10]. With (4) and s(C_i) = tr(S_i), we may decompose (2) into two parts:

J = (1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j (d(m_i, m_j) − tr(S_i) − tr(S_j))
  = (1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j d(m_i, m_j) − (1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j (tr(S_i) + tr(S_j))

The second part is easily simplified to tr(S_w):

(1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j (tr(S_i) + tr(S_j)) = Σ_{i=1}^c p_i tr(S_i) = tr( Σ_{i=1}^c p_i S_i ) = tr(S_w)    (5)

By employing the Euclidean distance, we may also simplify the first part to tr(S_b) as follows:

(1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j d(m_i, m_j)
  = (1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j (m_i − m_j)^T (m_i − m_j)
  = (1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j (m_i − m + m − m_j)^T (m_i − m + m − m_j)

After expanding, this simplifies to Σ_{i=1}^c p_i (m_i − m)^T (m_i − m) by using the fact that Σ_{j=1}^c p_j (m − m_j) = 0. So

(1/2) Σ_{i=1}^c Σ_{j=1}^c p_i p_j d(m_i, m_j) = tr( Σ_{i=1}^c p_i (m_i − m)(m_i − m)^T ) = tr(S_b)    (6)

Now we obtain

J = tr(S_b − S_w)    (7)

Since tr(S_b) measures the overall variance of the class mean vectors, a large tr(S_b) implies that the class mean vectors scatter in a large space. On the other hand, a small tr(S_w) implies that every class has a small spread. Thus, a large J indicates that patterns are close to each other if they are from the same class but far from each other if they are from different classes. Hence, this criterion may represent class separability better than PCA.
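The identity J = tr(S_b − S_w) is easy to check numerically. The following is a quick sketch on synthetic data (not from the paper; NumPy assumed), comparing the pairwise average-margin sum against the trace form:

```python
import numpy as np

rng = np.random.default_rng(1)
# Three classes with different means in R^5
X = np.vstack([rng.normal(mu, 1.0, (30, 5)) for mu in (0.0, 2.0, 5.0)])
y = np.repeat([0, 1, 2], 30)
n, D = X.shape
m = X.mean(axis=0)

classes = np.unique(y)
p, means, traces = [], [], []
Sb = np.zeros((D, D))
Sw = np.zeros((D, D))
for c in classes:
    Xc = X[y == c]
    pc = len(Xc) / n
    mc = Xc.mean(axis=0)
    Sc = (Xc - mc).T @ (Xc - mc) / len(Xc)   # within-class scatter S_i
    p.append(pc); means.append(mc); traces.append(np.trace(Sc))
    Sb += pc * np.outer(mc - m, mc - m)
    Sw += pc * Sc

# Left side: (1/2) sum over all i, j of p_i p_j (d(m_i, m_j) - tr S_i - tr S_j)
J = 0.5 * sum(
    p[i] * p[j] * (np.sum((means[i] - means[j]) ** 2) - traces[i] - traces[j])
    for i in range(len(classes)) for j in range(len(classes)))

print(np.isclose(J, np.trace(Sb - Sw)))  # the two sides agree
```

Note that the double sum includes the i = j terms, which contribute −2 tr(S_i); this is consistent with definition (4) and is needed for the second part to sum exactly to tr(S_w).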
Recall that PCA tries to maximize the total scatter after a linear transformation. But a data set with a large within-class scatter can also have a large total scatter even when it has a small between-class scatter, because S_t = S_b + S_w. Obviously, such data are not easy to classify. Compared with LDA+PCA, we maximize the between-class scatter in the input space rather than the null space of S_w when S_w is singular. So, our method can keep more discriminative information than LDA+PCA does.

3 Linear Feature Extraction When performing dimensionality reduction, we want to find a (linear or nonlinear) mapping from the measurement space M to some feature space F such that J is maximized after the transformation. In this section, we discuss how to find an optimal linear feature extractor. In the next section, we will generalize it to the nonlinear case. Consider a linear mapping W ∈ R^{D×d}. We would like to maximize J(W) = tr(S_b^W − S_w^W), where S_b^W and S_w^W are the between-class scatter matrix and within-class scatter matrix in the feature space F. Since W is a linear mapping, it is easy to show that S_b^W = W^T S_b W and S_w^W = W^T S_w W. So we have

J(W) = tr( W^T (S_b − S_w) W )    (8)

In this formulation, we have the freedom to multiply W by some nonzero constant. Thus, we additionally require that W be constituted by unit vectors, i.e. W = [w_1 w_2 ... w_d] with w_k^T w_k = 1. This means that we need to solve the following constrained optimization:

max Σ_{k=1}^d w_k^T (S_b − S_w) w_k   subject to   w_k^T w_k − 1 = 0,   k = 1, ..., d

Note that we may also use other constraints here. For example, we may require tr( W^T S_w W ) = 1 and then maximize tr( W^T S_b W ). It is easy to show that maximizing MMC with such a constraint in fact results in LDA. The only difference is that this involves a constrained optimization, whereas traditional LDA solves an unconstrained optimization.
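As the Lagrangian analysis that follows shows, this constrained problem reduces to an ordinary symmetric eigenproblem: take the d eigenvectors of S_b − S_w with the largest eigenvalues. A minimal NumPy sketch (illustrative, not the authors' implementation) that deliberately uses more dimensions than samples, where LDA would fail:

```python
import numpy as np

def mmc_directions(X, y, d):
    """Top-d eigenvectors of Sb - Sw; no inverse of Sw is needed."""
    n, D = X.shape
    m = X.mean(axis=0)
    Sb = np.zeros((D, D))
    Sw = np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c]
        p = len(Xc) / n
        mc = Xc.mean(axis=0)
        Sb += p * np.outer(mc - m, mc - m)
        Sw += p * (Xc - mc).T @ (Xc - mc) / len(Xc)
    # Sb - Sw is symmetric, so eigh applies; works even when Sw is singular.
    evals, evecs = np.linalg.eigh(Sb - Sw)
    return evecs[:, np.argsort(-evals)[:d]]  # columns are orthonormal unit vectors

# Small-sample-size setting: 20 dimensions, only 8 samples (Sw is singular)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (4, 20)), rng.normal(2, 1, (4, 20))])
y = np.repeat([0, 1], 4)
W = mmc_directions(X, y, 2)
print(W.shape)  # (20, 2)
```

Because `eigh` returns orthonormal eigenvectors, the unit-norm constraint w_k^T w_k = 1 is satisfied automatically.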
The motivation for using the constraint w_k^T w_k = 1 is that it allows us to avoid calculating the inverse of S_w and thus the potential small sample size problem. To solve the above optimization problem, we may introduce the Lagrangian

L(w_k, λ_k) = Σ_{k=1}^d w_k^T (S_b − S_w) w_k − λ_k (w_k^T w_k − 1)    (9)

with multipliers λ_k. The Lagrangian L has to be maximized with respect to λ_k and w_k. At the stationary point, the derivatives of L with respect to w_k must vanish:

∂L(w_k, λ_k)/∂w_k = ((S_b − S_w) − λ_k I) w_k = 0,   k = 1, ..., d    (10)

which leads to

(S_b − S_w) w_k = λ_k w_k,   k = 1, ..., d    (11)

This means that the λ_k are eigenvalues of S_b − S_w and the w_k are the corresponding eigenvectors. Thus

J(W) = Σ_{k=1}^d w_k^T (S_b − S_w) w_k = Σ_{k=1}^d λ_k w_k^T w_k = Σ_{k=1}^d λ_k    (12)

Therefore, J(W) is maximized when W is composed of the eigenvectors of S_b − S_w corresponding to its d largest eigenvalues. Here, we need not calculate the inverse of S_w, which allows us to avoid the small sample size problem easily. We may also require W to be orthonormal, which may help preserve the shape of the distribution.

4 Nonlinear Feature Extraction with Kernel In this section, we follow the approach of nonlinear SVMs [10] to kernelize the above linear feature extractor. More precisely, we first reformulate the maximum margin criterion in terms of only dot-products ⟨Φ(x), Φ(y)⟩ of input patterns. Then we replace the dot-product by some positive definite kernel k(x, y), e.g. the Gaussian kernel e^{−γ‖x−y‖²}. Consider the maximum margin criterion in the feature space F:

J_Φ(W) = Σ_{k=1}^d w_k^T (S_b^Φ − S_w^Φ) w_k

where S_b^Φ and S_w^Φ are the between-class scatter matrix and within-class scatter matrix in F, i.e., S_b^Φ = Σ_{i=1}^c p_i (m_i^Φ − m^Φ)(m_i^Φ − m^Φ)^T, S_w^Φ = Σ_{i=1}^c p_i S_i^Φ, and S_i^Φ = (1/n_i) Σ_{j=1}^{n_i} (Φ(x_j^{(i)}) − m_i^Φ)(Φ(x_j^{(i)}) − m_i^Φ)^T with m_i^Φ = (1/n_i) Σ_{j=1}^{n_i} Φ(x_j^{(i)}), m^Φ = Σ_{i=1}^c p_i m_i^Φ, and x_j^{(i)} the jth pattern of class C_i, which has n_i samples. For us, an important fact is that each w_k lies in the span of Φ(x_1), Φ(x_2), ..., Φ(x_n).
Therefore, we can find an expansion for w_k in the form w_k = Σ_{l=1}^n α_l^{(k)} Φ(x_l). Using this expansion and the definition of m_i^Φ, we have

w_k^T m_i^Φ = Σ_{l=1}^n α_l^{(k)} ( (1/n_i) Σ_{j=1}^{n_i} ⟨Φ(x_l), Φ(x_j^{(i)})⟩ )

Replacing the dot-product by some kernel function k(x, y) and defining (m̃_i)_l = (1/n_i) Σ_{j=1}^{n_i} k(x_l, x_j^{(i)}), we get w_k^T m_i^Φ = α_k^T m̃_i with (α_k)_l = α_l^{(k)}. Similarly, we have

w_k^T m^Φ = w_k^T Σ_{i=1}^c p_i m_i^Φ = α_k^T Σ_{i=1}^c p_i m̃_i = α_k^T m̃

with m̃ = Σ_{i=1}^c p_i m̃_i. This means w_k^T (m_i^Φ − m^Φ) = α_k^T (m̃_i − m̃), and

Σ_{k=1}^d w_k^T S_b^Φ w_k = Σ_{k=1}^d Σ_{i=1}^c p_i (w_k^T (m_i^Φ − m^Φ))(w_k^T (m_i^Φ − m^Φ))^T
  = Σ_{k=1}^d Σ_{i=1}^c p_i α_k^T (m̃_i − m̃)(m̃_i − m̃)^T α_k = Σ_{k=1}^d α_k^T S̃_b α_k

where S̃_b = Σ_{i=1}^c p_i (m̃_i − m̃)(m̃_i − m̃)^T. Similarly, one can simplify W^T S_w^Φ W. First, we have w_k^T (Φ(x_j^{(i)}) − m_i^Φ) = α_k^T (k_j^{(i)} − m̃_i) with (k_j^{(i)})_l = k(x_l, x_j^{(i)}). Considering w_k^T S_i^Φ w_k = (1/n_i) Σ_{j=1}^{n_i} (w_k^T (Φ(x_j^{(i)}) − m_i^Φ))(w_k^T (Φ(x_j^{(i)}) − m_i^Φ))^T, we have

w_k^T S_i^Φ w_k = (1/n_i) Σ_{j=1}^{n_i} α_k^T (k_j^{(i)} − m̃_i)(k_j^{(i)} − m̃_i)^T α_k
  = (1/n_i) Σ_{j=1}^{n_i} α_k^T S̃_i (e_j − (1/n_i) 1_{n_i})(e_j − (1/n_i) 1_{n_i})^T S̃_i^T α_k
  = (1/n_i) Σ_{j=1}^{n_i} α_k^T S̃_i (e_j e_j^T − (1/n_i) e_j 1_{n_i}^T − (1/n_i) 1_{n_i} e_j^T + (1/n_i²) 1_{n_i} 1_{n_i}^T) S̃_i^T α_k
  = (1/n_i) α_k^T S̃_i (I_{n_i×n_i} − (1/n_i) 1_{n_i} 1_{n_i}^T) S̃_i^T α_k

where (S̃_i)_{lj} = k(x_l, x_j^{(i)}), I_{n_i×n_i} is the n_i × n_i identity matrix, 1_{n_i} is the n_i-dimensional vector of 1's, and e_j is the jth canonical basis vector of n_i dimensions. Thus, we obtain

Σ_{k=1}^d w_k^T S_w^Φ w_k = Σ_{k=1}^d Σ_{i=1}^c p_i (1/n_i) α_k^T S̃_i (I_{n_i} − (1/n_i) 1_{n_i} 1_{n_i}^T) S̃_i^T α_k
  = Σ_{k=1}^d α_k^T ( Σ_{i=1}^c p_i (1/n_i) S̃_i (I_{n_i} − (1/n_i) 1_{n_i} 1_{n_i}^T) S̃_i^T ) α_k = Σ_{k=1}^d α_k^T S̃_w α_k

where S̃_w = Σ_{i=1}^c p_i (1/n_i) S̃_i (I_{n_i} − (1/n_i) 1_{n_i} 1_{n_i}^T) S̃_i^T. So the maximum margin criterion in the feature space F is

J(W) = Σ_{k=1}^d α_k^T (S̃_b − S̃_w) α_k    (13)

Similar to the observations in Section 3, the above criterion is maximized by the eigenvectors of S̃_b − S̃_w corresponding to the largest eigenvalues.

[Figure 1(a): comparison in terms of error rate (error rate vs. class no. for RAW, LDA+PCA, MMC, and KMMC).]
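The kernelized extractor of equation (13) can be sketched directly from these formulas. The following is an illustrative NumPy implementation (not the authors' code); the Gaussian-kernel width `gamma` and the toy data are made up for the example:

```python
import numpy as np

def kmmc_coefficients(X, y, d, gamma=0.5):
    """Expansion coefficients alpha_k: top-d eigenvectors of S~b - S~w,
    built from the Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    n = len(X)
    # Full kernel matrix K[l, j] = k(x_l, x_j)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    classes, counts = np.unique(y, return_counts=True)
    m_tilde = np.zeros(n)
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    class_means = []
    for c, nc in zip(classes, counts):
        Ki = K[:, y == c]                    # (S~_i)_{lj} = k(x_l, x_j^{(i)})
        mi = Ki.mean(axis=1)                 # m~_i
        p = nc / n
        class_means.append((p, mi))
        m_tilde += p * mi
        center = np.eye(nc) - np.ones((nc, nc)) / nc
        Sw += (p / nc) * Ki @ center @ Ki.T  # S~w term for class i
    for p, mi in class_means:
        Sb += p * np.outer(mi - m_tilde, mi - m_tilde)
    evals, evecs = np.linalg.eigh(Sb - Sw)
    return evecs[:, np.argsort(-evals)[:d]]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(2, 0.3, (10, 2))])
y = np.repeat([0, 1], 10)
A = kmmc_coefficients(X, y, 1)
print(A.shape)  # (20, 1)
```

Each column of the result holds the n expansion coefficients α_l^{(k)}; projecting a new pattern x then amounts to computing Σ_l α_l^{(k)} k(x_l, x).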
[Figure 1(b): comparison in terms of training time (seconds vs. class no. for LDA+PCA, MMC, and KMMC).] Figure 1: Experimental results obtained using a linear SVM on the original data (RAW), and on the data extracted by LDA+PCA, the linear feature extractor based on MMC (MMC), and the nonlinear feature extractor based on MMC (KMMC), which employs the Gaussian kernel with γ = 0.03125.

5 Experiments To evaluate the performance of our new methods (both the linear and the nonlinear feature extractors), we ran both LDA+PCA and our methods on the ORL face dataset [11]. The ORL dataset consists of 10 face images from each of 40 subjects, for a total of 400 images, with some variation in pose, facial expression, and details. The resolution of the images is 112 × 92, with 256 gray levels. First, we resized the images to 28 × 23 to reduce the experimental time. Then, we reduced the dimensionality of each image set to c − 1, where c is the number of classes. Finally, we trained and tested a linear SVM on the dimensionality-reduced data. As a control, we also trained and tested a linear SVM on the original data before its dimensionality was reduced. In order to demonstrate the effectiveness and efficiency of our methods, we conducted a series of experiments and compared our results with those obtained using LDA+PCA. The error rates are shown in Fig. 1(a). When trained with 3 samples and tested with 7 other samples per class, our method is generally better than LDA+PCA. In fact, our method is usually better than LDA+PCA for other numbers of training samples as well. To save space, we do not show all the results here. Note that our methods can even achieve lower error rates than a linear SVM on the original data (without dimensionality reduction). However, LDA+PCA does not demonstrate such a clear superiority over RAW. Fig.
1(a) also shows that the kernelized (nonlinear) feature extractor based on MMC is significantly better than the linear one, in particular when the number of classes c is large. Besides accuracy, our methods are also much more efficient than LDA+PCA in terms of the training time required. Fig. 1(b) shows that our linear feature extractor is about 4 times faster than LDA+PCA. The same speedup was observed for other numbers of training samples. Note that our nonlinear feature extractor is also faster than LDA+PCA in this case, although computing the kernel matrix is generally very time-consuming. An explanation of the speedup is that the kernel matrix size equals the number of samples, which is quite small in this case. Furthermore, our method performs much better than LDA+PCA when n − c is close to the dimensionality D. Because the amount of training data was limited, we resized the images to 168 dimensions to create such a situation. The experimental results are shown in Fig. 2. In this situation, the performance of LDA+PCA drops significantly because the null space of S_w has a small dimensionality. When LDA+PCA tries to maximize the between-class scatter in this small null space, it loses a lot of information. On the other hand, our method tries to maximize the between-class scatter in the original input space.

[Figure 2: Comparison between our new methods and LDA+PCA when n − c is close to D (error rate vs. class no. for LDA+PCA, MMC, and KMMC). (a) Each class contains three training samples. (b) Each class contains four training samples.]

From Fig. 2, we can see that LDA+PCA is ineffective in this situation, as it is even worse than a random guess. But our method still produced acceptable results. Thus, the experimental results show that our method is better than LDA+PCA in terms of both accuracy and efficiency.
6 Conclusion In this paper, we proposed both linear and nonlinear feature extractors based on the maximum margin criterion. The new methods do not suffer from the small sample size problem. The experimental results show that they are very efficient, accurate, and robust. Acknowledgments We thank D. Gunopulos, C. Domeniconi, and J. Peng for valuable discussions and comments. This work was partially supported by NSF grants CCR-9988353 and ACI-0085910. References [1] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179–188, 1936. [2] M. Loog, R. P. W. Duin, and R. Haeb-Umbach. Multiclass linear dimension reduction by weighted pairwise Fisher criteria. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(7):762–766, 2001. [3] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, New York, 2nd edition, 1990. [4] Q. Tian, M. Barbero, Z. Gu, and S. Lee. Image classification by the Foley-Sammon transform. Optical Engineering, 25(7):834–840, 1986. [5] Z. Hong and J. Yang. Optimal discriminant plane for a small number of samples and design method of classifier on the plane. Pattern Recognition, 24(4):317–324, 1991. [6] G. W. Stewart. Introduction to Matrix Computations. Academic Press, New York, 1973. [7] K. Liu, Y. Cheng, and J. Yang. A generalized optimal set of discriminant vectors. Pattern Recognition, 25(7):731–739, 1992. [8] L. Chen, H. Liao, M. Ko, J. Lin, and G. Yu. A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognition, 33(10):1713–1726, 2000. [9] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41–48. IEEE, 1999. [10] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998. [11] F. Samaria and A. Harter.
Parameterisation of a stochastic model for human face identification. In Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, Sarasota, FL, 1994.
2003
76
2,481
Training fMRI Classifiers to Discriminate Cognitive States across Multiple Subjects Xuerui Wang, Rebecca Hutchinson, and Tom M. Mitchell Center for Automated Learning and Discovery Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA 15213 {xuerui.wang, rebecca.hutchinson, tom.mitchell}@cs.cmu.edu Abstract We consider learning to classify cognitive states of human subjects, based on their brain activity observed via functional Magnetic Resonance Imaging (fMRI). This problem is important because such classifiers constitute “virtual sensors” of hidden cognitive states, which may be useful in cognitive science research and clinical applications. In recent work, Mitchell, et al. [6,7,9] have demonstrated the feasibility of training such classifiers for individual human subjects (e.g., to distinguish whether the subject is reading an ambiguous or unambiguous sentence, or whether they are reading a noun or a verb). Here we extend that line of research, exploring how to train classifiers that can be applied across multiple human subjects, including subjects who were not involved in training the classifier. We describe the design of several machine learning approaches to training multiple-subject classifiers, and report experimental results demonstrating the success of these methods in learning cross-subject classifiers for two different fMRI data sets. 1 Introduction The advent of functional Magnetic Resonance Imaging (fMRI) has made it possible to safely, non-invasively observe correlates of neural activity across the entire human brain at high spatial resolution. A typical fMRI session can produce a three dimensional image of brain activation once per second, with a spatial resolution of a few millimeters, yielding tens of millions of individual fMRI observations over the course of a twenty-minute session. This fMRI technology holds the potential to revolutionize studies of human cognitive processing, provided we can develop appropriate data analysis methods. 
Researchers have now employed fMRI to conduct hundreds of studies that identify which regions of the brain are activated on average when a human performs a particular cognitive task (e.g., reading, puzzle solving). Typical research publications describe summary statistics of brain activity in various locations, calculated by averaging together fMRI observations collected over multiple time intervals during which the subject responds to repeated stimuli of a particular type. Our interest here is in a different problem: training classifiers to automatically decode the subject’s cognitive state at a single instant or interval in time. If we can reliably train such classifiers, we may be able to use these as “virtual sensors” of hidden cognitive states, to observe previously hidden cognitive processes in the brain. In recent work [6,7,9], Mitchell et al. have demonstrated the feasibility of training such classifiers. Whereas their work focussed primarily on training a different classifier for each human subject, our focus in this paper is on training a single classifier that can be used across multiple human subjects, including humans not involved in the training process. This is challenging because different brains have substantially different sizes and shapes, and because different people may generate different brain activation given the same cognitive state. Below we briefly survey related work, describe a range of machine learning approaches to this problem, and present experimental results showing statistically significant cross-subject classifier accuracies for two different fMRI studies. 2 Related Work As noted above, Mitchell et al. [6,7,9] describe methods for training classifiers of cognitive states, focussing primarily on training subject-specific classifiers. 
More specifically, they train classifiers that distinguish among a set of predefined cognitive states, based on a single fMRI image or fixed window of fMRI images collected relative to the presentation of a particular stimulus. For example, they report on successful classifiers to distinguish whether the object presented to the subject is a sentence or a picture, whether the sentence being viewed is ambiguous or unambiguous, whether an isolated word is a noun or a verb, and whether an isolated noun is about a person, building, animal, etc. They used several different classifiers, and report that dimensionality reduction methods are essential given the high dimensional, sparse training data. They propose specific methods for dimensionality reduction that take advantage of data collected during rest periods between stimuli, and demonstrate that these outperform standard methods for feature selection such as those based on mutual information. Despite these positive results, there remain several limitations: classifiers are trained and applied over a fixed time window of data, classifiers are trained only to discriminate among predefined classes of cognitive states, and they deal only with single cognitive states rather than multiple states evolving over time. In earlier work, Wagner et al. [11] report that they have been able to predict whether a verbal experience will be remembered later, based on the magnitude of activity within certain parts of left prefrontal and temporal cortices during that experience. Haxby et al. [2] show that different patterns of fMRI activity are generated when a subject views a photograph of a face versus a house, etc., and show that by dividing the fMRI data for each photograph category into two samples, they could automatically match the data samples related to the same category. 
Recent work on brain computer interfaces (see, e.g., [8]) also seeks to decode observed brain activity (often EEG or direct neural recordings, rather than fMRI), typically for the purpose of controlling external devices.

3 Approach 3.1 Learning Method In this paper we explore the use of machine learning methods to approximate classification functions of the following form:

f : ⟨I1, ..., In⟩ → CognitiveState

where ⟨I1, ..., In⟩ is a sequence of n fMRI images collected during a contiguous time interval and CognitiveState is the set of cognitive states to be discriminated. We explore a number of classifier training methods, including:

• Gaussian Naive Bayes (GNB). This classifier learns a class-conditional Gaussian generative model for each feature1. New examples are classified using Bayes rule and the assumption that features are conditionally independent given the class (see, for instance, [5]).
• Support Vector Machine (SVM). We employ a linear kernel Support Vector Machine (see, for instance, [1]).
• k Nearest Neighbor (kNN). We use k Nearest Neighbor with a Euclidean distance metric, considering values of 1, 3, and 5 for k (see, for instance, [5]).

Classifiers were evaluated using a "leave one subject out" cross validation procedure, in which each of the m human subjects was used as a test subject while training on the remaining m − 1 subjects, and the mean accuracy over these held-out subjects was calculated.

3.2 Feature Extraction In general, each input image may contain many thousands of voxels. We explored a variety of approaches to reducing the dimensionality of the input feature vector, including methods that select a subset of the available features, methods that replace multiple feature values by their mean, and methods that combine both. In the latter two cases, we take means over values found within anatomically defined brain regions (e.g., dorsolateral prefrontal cortex), which are referred to as Regions of Interest (ROIs).
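The "leave one subject out" evaluation described above can be sketched as follows. This is a generic illustration on synthetic data with a simple GNB-style classifier, not the study's code; the subject grouping and feature dimensions are made up:

```python
import numpy as np

def gnb_fit_predict(Xtr, ytr, Xte):
    """Gaussian Naive Bayes: per-class, per-feature Gaussians plus Bayes rule."""
    classes = np.unique(ytr)
    stats = []
    for c in classes:
        Xc = Xtr[ytr == c]
        stats.append((np.log(len(Xc) / len(Xtr)),        # log prior
                      Xc.mean(0), Xc.std(0) + 1e-6))     # per-feature Gaussian
    preds = []
    for x in Xte:
        # log posterior (up to a constant), features independent given class
        scores = [prior - 0.5 * np.sum(((x - mu) / sd) ** 2 + 2 * np.log(sd))
                  for prior, mu, sd in stats]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(4)
subjects = np.repeat(np.arange(5), 20)         # 5 subjects, 20 trials each
y = np.tile(np.repeat([0, 1], 10), 5)          # 2 cognitive states per subject
X = rng.normal(0, 1, (100, 8)) + y[:, None] * 1.5

# Leave one subject out: train on m-1 subjects, test on the held-out one
accs = []
for s in np.unique(subjects):
    test = subjects == s
    pred = gnb_fit_predict(X[~test], y[~test], X[test])
    accs.append((pred == y[test]).mean())
print(round(float(np.mean(accs)), 3))
```

Grouping the split by subject (rather than by trial) is what makes the reported accuracy an estimate of performance on subjects never seen during training.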
We considered the following feature extraction methods:

• Average. For each ROI, calculate the mean activity over all voxels in the ROI. Use these ROI means as the input features.
• ActiveAvg(n). For each ROI, select the n most active voxels2, then calculate the mean of their values. Again, use these ROI means as the input features. Here the "most active" voxels are those whose activity while performing the task varies the most from their activity when the subject is at rest (see [7] for details).
• Active(n). Select the n most active voxels over the entire brain. Use only these n voxels as input features.

3.3 Registering Data from Multiple Subjects Given the different sizes and shapes of different brains, it is not possible to directly map the voxels in one brain to those in another. We considered two different methods for producing representations of fMRI data for use across multiple subjects:

• ROI Mapping. Abstract the voxel data in each brain using the Average or ActiveAvg(n) feature extraction method described above. Because each brain contains the same set of anatomically defined ROIs, we can use the resulting representation of average activity per ROI as a canonical representation across subjects.
• Talairach coordinates. The coordinate system of each brain is transformed (geometrically morphed) into the coordinate system of a standard brain (known as the Talairach-Tournoux coordinate system [10]). After this transformation, each brain has the same shape and size, though the transformation is usually imperfect.

[Footnote 1] It is well known that the Gaussian model does not accurately fit fMRI data. Some non-Gaussian models, such as the generalized Gaussian model, which makes use of the kurtosis of the data, and the t-distribution, which is more heavy-tailed, are in our future plans.
[Footnote 2] The fMRI data used here are first preprocessed by FIASCO (http://www.stat.cmu.edu/∼fiasco), and the active voxels are determined by a t-test.

There are significant differences between these two approaches.
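The Average and ActiveAvg(n) extractions can be sketched in a few lines. The voxel array, ROI labels, and activity scores below are made-up placeholders (a real pipeline would take them from preprocessed fMRI data):

```python
import numpy as np

def activeavg(voxels, roi_of_voxel, activity_score, n):
    """ActiveAvg(n): per ROI, mean of the n most active voxels.
    With n = None this reduces to the plain Average extraction."""
    feats = []
    for roi in np.unique(roi_of_voxel):
        idx = np.flatnonzero(roi_of_voxel == roi)
        if n is not None and len(idx) > n:
            # keep the n voxels whose task-vs-rest activity varies the most
            idx = idx[np.argsort(-activity_score[idx])[:n]]
        feats.append(voxels[idx].mean())
    return np.array(feats)

rng = np.random.default_rng(5)
voxels = rng.normal(size=300)                  # one image, 300 voxels
roi_of_voxel = rng.integers(0, 7, size=300)    # 7 hypothetical ROIs
activity = rng.random(300)                     # e.g. t-statistics vs. rest
x_avg = activeavg(voxels, roi_of_voxel, activity, None)  # Average
x_act = activeavg(voxels, roi_of_voxel, activity, 20)    # ActiveAvg(20)
print(x_avg.shape, x_act.shape)  # (7,) (7,)
```

Either way the output has one feature per ROI, which is what makes the representation comparable across subjects in the ROI Mapping approach.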
First, note that they differ in their spatial resolution and in the dimension of the resulting input feature vector. ROI Mapping results in just one feature per ROI (we work with at most 35 ROIs per brain) at each time point, whereas Talairach coordinates retain the voxel-level resolution (on the order of 15,000 voxels per brain). Second, the approaches have different noise characteristics. ROI Mapping reduces noise by averaging voxel activations, whereas the Talairach transformation effectively introduces new noise due to imperfections in the morphing transformation. Thus, the approaches have complementary advantages and disadvantages. Note that both of these transformations require background knowledge about brain anatomy in order to identify anatomical landmarks or ROIs.

4 Case Studies This section describes two fMRI case studies used for training classifiers (detailed in [7]). 4.1 Sentence versus Picture Study In this fMRI study [3], thirteen normal subjects performed a sequence of trials. During each trial they were first shown a sentence and a simple picture, then asked whether the sentence correctly described the picture. We used this data set to explore the feasibility of training classifiers to distinguish whether the subject is examining a sentence or a picture during a particular time interval. In half of the trials the picture was presented first, followed by the sentence; we refer to this as the PS data set. In the remaining trials the sentence was presented first, followed by the picture; we call this the SP data set. Pictures contained geometric arrangements of two of the following symbols: +, ∗, $. Sentences were descriptions such as "It is true that the star is below the plus," or "It is not true that the star is above the plus." The learning task we consider here is to train a classifier to determine, given a particular 16-image interval of fMRI data, whether the subject was viewing a sentence or a picture during this interval.
In other words, we wish to learn a classifier of the form: f : ⟨I1, ..., I16⟩→{Picture, Sentence} where I1 is the image captured at the time of stimulus (picture or sentence) onset. In this case we restrict the classifier input to 7 most relevant ROIs3 determined by a domain expert. 4.2 Syntactic Ambiguity Study In this fMRI study [4], subjects were presented with ambiguous and unambiguous sentences, and were asked to respond to a yes-no question about the content of each sentence. The questions were designed to ensure that the subject was in fact processing the sentence. Five normal subjects participated in this study, which we will refer to as SA data set. We are interested here in learning a classifier that takes as input an interval of fMRI activity, and determines whether the subject was currently reading an unambiguous or ambiguous sentence. An example ambiguous sentence is “The experienced soldiers warned about the dangers conducted the midnight raid.” An example of an unambiguous sentence is “The experienced soldiers spoke about the dangers before the midnight raid.” We train classifiers of the form f : ⟨I1, ..., I16⟩→{Ambiguous, Unambiguous} 3They are pars opercularis of the inferior frontal gyrus, pars triangularis of the inferior frontal gyrus, intra-parietal sulcus, inferior temporal gyri and sulci, inferior parietal lobule, dorsolateral prefrontal cortex, and an area around the calcarine sulcus, respectively. where I1 is the image captured at the time when the sentence is first presented to the subject. In this case we restrict the classifier input to 4 ROIs4 considered to be the most relevant. 5 Experimental Results The primary goal of this work is to determine whether and how it is possible to train classifiers of cognitive states across multiple human subjects. We experimented using data from the two case studies described above, measuring the accuracy of classifiers trained for single subjects, as well as those trained for multiple subjects. 
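Several of the classifiers compared in the tables below (GNB, SVM, kNN) operate directly on feature vectors of this kind. As an illustration, a Gaussian Naive Bayes classifier of the sort used in these experiments can be sketched as follows; the class implementation, data shapes, and numbers here are hypothetical stand-ins, not the studies' actual dimensions or data:

```python
import numpy as np

class SimpleGNB:
    """Minimal Gaussian Naive Bayes: one mean/variance per (class, feature)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log(np.array([np.mean(y == c) for c in self.classes]))
        return self

    def predict(self, X):
        # log N(x | mu, var) per class, summed over (assumed independent) features
        ll = -0.5 * (np.log(2 * np.pi * self.var[None]) +
                     (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

# Hypothetical example: 40 "trials" x 35 ROI-average features, two classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 35)), rng.normal(1.5, 1.0, (20, 35))])
y = np.array([0] * 20 + [1] * 20)
clf = SimpleGNB().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

With well-separated synthetic classes like these, training accuracy should be near perfect; on real fMRI intervals the features are far noisier and the feature independence assumption is only an approximation.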
Note we might expect the multiple-subject classification accuracies to be lower due to differences among subjects, or to be higher due to the larger number of training examples available. In order to test the statistical significance of our results, consider the 95% confidence intervals5 of the accuracies. Assuming that errors on test examples are i.i.d. Bernoulli(p) distributed, the number of observed correct classifications will follow a Binomial(n, p) distribution, where n is the number of test examples. Table 1 displays the lowest accuracies that are statistically significant at the 95% confidence level, where the expected accuracy due to chance is 0.5 given the equal number of examples from both classes. We will not report confidence intervals individually for each accuracy because they are very similar.

Table 1: The lowest accuracies that are significantly better than chance at the 95% level.

                    SP      PS      SP+PS   SA
  # of examples     520     520     1040    100
  Lowest accuracy   54.3%   54.3%   53.1%   59.7%

5.1 ROI Mapping
We first consider the ROI Mapping method for merging data from multiple subjects. Table 2 shows the classifier accuracies for the Sentence versus Picture study, when training across subjects and testing on the subject withheld from the training set. For comparison, it also shows (in parentheses) the average accuracy achieved by classifiers trained and tested on single subjects. All results are highly significant compared to the 50% accuracy expected by chance, demonstrating convincingly the feasibility of training classifiers to distinguish cognitive states in subjects beyond the training set. In fact, the accuracy achieved on the left-out subject for the multiple-subject classifiers is often very close to the average accuracy of the single-subject classifiers, and in several cases it is significantly better.
This surprisingly positive result indicates that the accuracy of the multiple-subject classifier, when tested on new subjects outside the training set, is comparable to the average accuracy achieved when training and testing using data from a single subject. Presumably this can be explained by the fact that it is trained using an order of magnitude more training examples, from twelve subjects rather than one. The increase in training set size apparently compensates for the variability among subjects. A second trend apparent in Table 2 is that the accuracies in SP or PS data sets are better than the accuracies when using their union (SP+PS). Presumably this is due to the fact that the context in which the stimulus (picture or sentence) appears is more consistent when we restrict to data in which these stimuli are presented in the same sequence. 4They include pars opercularis of the inferior frontal gyrus, pars triangularis of the inferior frontal gyrus, Wernicke’s area, and the superior temporal gyrus. 5Under cross validation, we learn m classifiers, and the accuracy we reported is the mean accuracy of these classifiers. The size of the confidence interval we compute is the upper bound of the size of Table 2: Multiple-subject accuracies in the Sentence versus Picture study (ROI mapping). Numbers in parenthesis are the corresponding mean accuracies of single-subject classifiers. 
  METHOD          CLASSIFIER   SP             PS             SP+PS
  Average         GNB          88.8% (90.6%)  82.3% (79.6%)  74.3% (66.5%)
  Average         SVM          86.5% (89.0%)  77.1% (83.7%)  75.3% (69.8%)
  Average         1NN          84.8% (86.5%)  73.8% (61.9%)  63.7% (59.7%)
  Average         3NN          86.5% (87.5%)  75.8% (69.2%)  67.3% (59.7%)
  Average         5NN          88.7% (89.4%)  78.7% (74.6%)  68.3% (60.4%)
  ActiveAvg(20)   GNB          92.5% (95.4%)  87.3% (88.1%)  72.8% (75.4%)
  ActiveAvg(20)   1NN          91.5% (94.4%)  83.8% (82.5%)  66.0% (71.2%)
  ActiveAvg(20)   3NN          93.1% (95.4%)  86.2% (83.7%)  71.5% (73.2%)
  ActiveAvg(20)   5NN          93.8% (95.0%)  87.5% (86.2%)  72.0% (73.2%)

Table 3: Multiple-subject accuracies in the Syntactic Ambiguity study (ROI mapping). Numbers in parenthesis are the corresponding mean accuracies of single-subject classifiers. To choose n in ActiveAvg(n), we explored all even numbers less than 50, reporting the best.

  METHOD          CLASSIFIER   ACCURACY
  Average         GNB          58.0% (61.0%)
  Average         SVM          54.0% (63.0%)
  Average         1NN          56.0% (54.0%)
  Average         3NN          57.0% (64.0%)
  Average         5NN          58.0% (60.0%)
  ActiveAvg(n)    GNB          64.0% (68.0%)
  ActiveAvg(n)    SVM          65.0% (71.0%)
  ActiveAvg(n)    1NN          64.0% (61.0%)
  ActiveAvg(n)    3NN          69.0% (60.0%)
  ActiveAvg(n)    5NN          62.0% (64.0%)

Classifier accuracies for the Syntactic Ambiguity study are shown in Table 3. Note accuracies above 59.7% are significantly better than chance. The accuracies for both single-subject and multiple-subject classifiers are lower than in the first study, perhaps due in part to the smaller number of subjects and training examples. Although we cannot draw strong conclusions from the results of this study, it provides modest additional support for the feasibility of training multiple-subject classifiers using ROI mapping. Note that accuracies of the multiple-subject classifiers are again comparable to those of single-subject classifiers.
5.2 Talairach Coordinates
Next we explore the Talairach coordinates method for merging data from multiple subjects. Here we consider the Syntactic Ambiguity study only6.
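Looking back at the significance thresholds of Table 1 (including the 59.7% figure just cited), they are consistent with asking for the lowest accuracy whose 95% normal-approximation confidence interval lies entirely above chance. A minimal sketch of that reconstruction (our reading of the criterion, not necessarily the authors' exact computation):

```python
import math

def lowest_significant_accuracy(n, chance=0.5, z=1.96):
    """Smallest accuracy a (in 0.1% steps) whose Wald interval
    a +/- z*sqrt(a(1-a)/n) lies entirely above chance."""
    for i in range(int(chance * 1000), 1001):
        a = i / 1000
        if a - z * math.sqrt(a * (1 - a) / n) > chance:
            return a
    return None

for n in (520, 1040, 100):  # SP/PS, SP+PS, and SA test-set sizes
    print(n, lowest_significant_accuracy(n))
```

Under this reading the thresholds come out to 0.543, 0.531, and 0.597 for n = 520, 1040, and 100, matching Table 1.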
Note one difficulty in utilizing the Talairach transformation here is that slightly different regions of the brain were scanned for different subjects. Figure 1 shows the portions of the brain that were scanned for two of the subjects along with the intersection of these regions from all five subjects. In combining data from multiple subjects, we used only the data in this intersection. the true confidence interval of the mean accuracy, which can be shown using the Lagrangian method. 6We experienced technical difficulties in applying the Talairach transformation software to the Sentence versus Picture study (see [3] for details). Subject 1 Subject 2 Intersecting all subjects Figure 1: The two leftmost panels show in color the scanned portion of the brain for two subjects (Syntactic Ambiguity study) in Talairach space in sagittal view. The rightmost panel shows the intersection of these scanned bands across all five subjects. The results of training multiple-subject classifiers based on the Talairach coordinates method are shown in Table 4. Notice the results are comparable to those achieved by the earlier ROI Mapping method in Table 3. Based on these results, we cannot state that one of these methods is significantly more accurate than the other. When using the Talairach method, we found the most effective feature extraction approach was the Active(n) feature selection approach, which chooses the n most active voxels from across the brain. Note that it is not possible to use this feature selection approach with the ROI Mapping method, because the individual voxels from different brains can only be aligned after performing the Talairach transformation. Table 4: Multiple-subject accuracies in the Syntactic Ambiguity study (Talairach coordinates). Numbers in parenthesis are the mean accuracies of single-subject classifiers. For n in Active(n), we explored all even numbers less than 200, reporting the best. 
  METHOD       CLASSIFIER   ACCURACY
  Active(n)    GNB          63.0% (72.0%)
  Active(n)    SVM          67.0% (71.0%)
  Active(n)    1NN          60.0% (64.0%)
  Active(n)    3NN          60.0% (69.0%)
  Active(n)    5NN          62.0% (69.0%)

6 Summary and Conclusions
The primary goal of this research was to determine whether it is feasible to use machine learning methods to decode mental states across multiple human subjects. The successful results for two case studies indicate that this is indeed feasible. Two methods were explored to train multiple-subject classifiers based on fMRI data. ROI mapping abstracts fMRI data by using the mean fMRI activity in each of several anatomically defined ROIs to map different brains in terms of ROIs. The transformation to Talairach coordinates morphs brains into a standard coordinate frame, retaining the approximate spatial resolution of the original data. Using these approaches, it was possible to train classifiers to distinguish, e.g., whether the subject was viewing a picture or a sentence describing a picture, and to apply these successfully to subjects outside the training set. In many cases, the classification accuracy for subjects outside the training set equalled or exceeded the accuracy achieved by training on data from just the single subject. The results using the two methods showed no statistically significant difference in the Syntactic Ambiguity study. It is important to note that while our empirical results demonstrate the ability to successfully distinguish among a predefined set of states occurring at specific times while the subject performs specific tasks, they do not yet demonstrate that trained classifiers can reliably detect cognitive states occurring at arbitrary times while the subject performs arbitrary tasks. We intend to pursue this more general goal in future work. We foresee many opportunities for future machine learning research in this area.
For example, we plan to next learn models of temporal behavior, in contrast to the work reported here which considers only data at a single time interval. Machine learning methods such as Hidden Markov Models and Dynamic Bayesian Networks appear relevant. A second research direction is to develop learning methods that take advantage of data from multiple studies, in contrast to the single study efforts described here. Acknowledgments We are grateful to Marcel Just for providing the fMRI data for these experiments, and for many valuable discussions and suggestions. We would like to thank Francisco Pereira and Radu S. Niculescu for providing much code to run our experiments, and Vladimir Cherkassky, Joel Welling, Erika Laing and Timothy Keller for their instruction on techniques related to Talairach transformation. References [1] Burges, C., A Tutorial on Support Vector Machines for Pattern Recognition, Journal of data Mining and Knowledge Discovery, 2(2),121-167, 1998. [2] Haxby, J., Gobbini, M., Furey, M., Ishai, A., Schouten, J., & Pietrini, P., Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex, Science, 293, 2425-2430, 2001. [3] Keller, T., Just, M., & Stenger, V., Reading Span and the Time-course of Cortical Activation in Sentence-Picture Verification,Annual Convention of the Psychonomic Society, Orlando, FL, 2001. [4] Mason, R., Just, M., Keller, T., & Carpenter, P., Ambiguity in the Brain: What Brain Imaging Reveals about the Processing of Syntactically Ambiguous Sentences, Journal of Experimental Psychology: Learning, Memory, and Cognition, in press, 2003. 
[5] Mitchell, T.M., Machine Learning, McGraw-Hill, 1997 [6] Mitchell, T.M., Hutchinson, R., Just, M., Niculescu, R., Pereira, F., & Wang, X., Classifying Instantaneous Cognitive States from fMRI Data, The American Medical Informatics Association 2003 Annual Symposium, to appear, 2003 [7] Mitchell, T.M., Hutchinson, R., Niculescu, R., Pereira, F., Wang, X., Just, M., & Newman, S., Learning to Decode Cognitive States from Brain Images, Machine Learning: Special Issue on Data Mining Lessons Learned, accepted, 2003 [8] NIPS 2001 Brain Computer Interface Workshop, Whistler, BC, Canada, December 2001. [9] Pereira, F., Just, M., & Mitchell, T.M., Distinguishing Natural Language Processes on the Basis of fMRI-measured Brain Activation, PKDD 2001, Freiburg, Germany, 2001. [10] Talairach, J., & Tournoux, P., Co-planar Stereotaxic Atlas of the Human Brain, Thieme, New York, 1988. [11] Wagner, A., Schacter, D., Rotte, M., Koutstaal, W., Maril, A., Dale, A., Rosen, B., & Buckner, R., Building Memories: Remembering and Forgetting of Verbal Experiences as Predicted by Brain Activity, Science, 281, 1188-1191, 1998.
Applying Metric-Trees to Belief-Point POMDPs Joelle Pineau, Geoffrey Gordon School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 {jpineau,ggordon}@cs.cmu.edu Sebastian Thrun Computer Science Department Stanford University Stanford, CA 94305 thrun@stanford.edu Abstract Recent developments in grid-based and point-based approximation algorithms for POMDPs have greatly improved the tractability of POMDP planning. These approaches operate on sets of belief points by individually learning a value function for each point. In reality, belief points exist in a highly-structured metric simplex, but current POMDP algorithms do not exploit this property. This paper presents a new metric-tree algorithm which can be used in the context of POMDP planning to sort belief points spatially, and then perform fast value function updates over groups of points. We present results showing that this approach can reduce computation in point-based POMDP algorithms for a wide range of problems. 1 Introduction Planning under uncertainty is a central problem in the field of robotics as well as many other AI applications. In terms of representational effectiveness, the Partially Observable Markov Decision Process (POMDP) is among the most promising frameworks for this problem. However the practical use of POMDPs has been severely limited by the computational requirement of planning in such a rich representation. POMDP planning is difficult because it involves learning action selection strategies contingent on all possible types of state uncertainty. This means that whenever the robot’s world state cannot be observed, the planner must maintain a belief (namely a probability distribution over possible states) to summarize the robot’s recent history of actions taken and observations received. The POMDP planner then learns an optimal future action selection for each possible belief. 
As the planning horizon grows (linearly), so does the number of possible beliefs (exponentially), which causes the computational intractability of exact POMDP planning. In recent years, a number of approximate algorithms have been proposed which overcome this issue by simply refusing to consider all possible beliefs, and instead selecting (and planning for) a small set of representative belief points. During execution, should the robot encounter a belief for which it has no plan, it finds the nearest known belief point and follows its plan. Such approaches, often known as grid-based [1, 4, 13], or point-based [8, 9] algorithms, have had significant success with increasingly large planning domains. They formulate the plan optimization problem as a value iteration procedure, and estimate the cost/reward of applying a sequence of actions from a given belief point. The value of each action sequence can be expressed as an α-vector, and a key step in many algorithms consists of evaluating many candidate α-vectors (set Γ) at each belief point (set B). These B × Γ (point-to-vector) comparisons—which are typically the main bottleneck in scaling point-based algorithms—are reminiscent of many M × N comparison problems that arise in statistical learning tasks, such as kNN, mixture models, kernel regression, etc. Recent work has shown that for these problems, one can significantly reduce the number of necessary comparisons by using appropriate metric data structures, such as KD-trees and ball-trees [3, 6, 12]. Given this insight, we extend the metric-tree approach to POMDP planning, with the specific goal of reducing the number of B × Γ comparisons. This paper describes our algorithm for building and searching a metric-tree over belief points. In addition to improving the scalability of POMDP planning, this approach features a number of interesting ideas for generalizing metric-tree algorithms. 
For example, when using trees for POMDPs, we move away from point-to-point search procedures for which the trees are typically used, and leverage metric constraints to prune point-to-vector comparisons. We show how it is often possible to evaluate the usefulness of an α-vector over an entire sub-region of the belief simplex without explicitly evaluating it at each belief point in that sub-region. While our new metric-tree approach offers significant potential for all point-based approaches, in this paper we apply it in the context of the PBVI algorithm [8], and show that it can effectively reduce computation without compromising plan quality.
2 Partially Observable Markov Decision Processes
We adopt the standard POMDP formulation [5], defining a problem by the n-tuple {S, A, Z, T, O, R, γ, b0}, where S is a set of (discrete) world states describing the problem domain, A is a set of possible actions, and Z is a set of possible observations providing (possibly noisy and/or partial) state information. The distribution T(s, a, s′) describes state-to-state transition probabilities; distribution O(s, a, z) describes observation emission probabilities; function R(s, a) represents the reward received for applying action a in state s; γ represents the discount factor; and b0 specifies the initial belief distribution. An |S|-dimensional vector, bt, represents the agent’s belief about the state of the world at time t, and is expressed as a probability distribution over states. This belief is updated after each time step—to reflect the latest pair (at−1, zt)—using a Bayesian filter:
$$b_t(s') := c\, O(s', a_{t-1}, z_t) \sum_{s \in S} T(s, a_{t-1}, s')\, b_{t-1}(s),$$
where c is a normalizing constant. The goal of POMDP planning is to find a sequence of actions maximizing the expected sum of rewards $E[\sum_t \gamma^t R(s_t, a_t)]$, for all beliefs.
The corresponding value function can be formulated as a Bellman equation:
$$V(b) = \max_{a \in A} \Big[ R(b, a) + \gamma \sum_{b' \in B} T(b, a, b')\, V(b') \Big]$$
By definition there exist infinitely many belief points. However when optimized exactly, the value function is always piecewise linear and convex in the belief (Fig. 1a). After n value iterations, the solution consists of a finite set of α-vectors: $V_n = \{\alpha_0, \alpha_1, \ldots, \alpha_m\}$. Each α-vector represents an |S|-dimensional hyper-plane, and defines the value function over a bounded region of the belief: $V_n(b) = \max_{\alpha \in V_n} \sum_{s \in S} \alpha(s)\, b(s)$. When performing exact value updates, the set of α-vectors can (and often does) grow exponentially with the planning horizon. Therefore exact algorithms tend to be impractical for all but the smallest problems. We leave out a full discussion of exact POMDP planning (see [5] for more) and focus instead on the much more tractable point-based approximate algorithm.
3 Point-based value iteration for POMDPs
The main motivation behind the point-based algorithm is to exploit the fact that most beliefs are never, or very rarely, encountered, and thus resources are better spent planning for those beliefs that are most likely to be reached. Many classical POMDP algorithms do not exploit this insight. Point-based value iteration algorithms on the other hand apply value backups only to a finite set of pre-selected (and likely to be encountered) belief points $B = \{b_0, b_1, \ldots, b_q\}$. They initialize a separate α-vector for each selected point, and repeatedly update the value of that α-vector. As shown in Figure 1b, by maintaining a full α-vector for each belief point, we can preserve the piecewise linearity and convexity of the value function, and define a value function over the entire belief simplex. This is an approximation, as some vectors may be missed, but by appropriately selecting points, we can bound the approximation error (see [8] for details).
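The Bayesian belief filter and the piecewise-linear α-vector value function described above translate directly into code. A minimal sketch with dense numpy arrays, where we choose the index conventions T[s, a, s'] and O[s', a, z] for illustration (the paper does not fix a storage layout):

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """b'(s') = c * O(s',a,z) * sum_s T(s,a,s') b(s), with c renormalizing."""
    b_next = O[:, a, z] * (b @ T[:, a, :])
    return b_next / b_next.sum()

def value(b, alphas):
    """V(b) = max over alpha-vectors of alpha . b (piecewise linear, convex)."""
    return max(alpha @ b for alpha in alphas)

# Tiny 2-state, 1-action, 2-observation sanity check (made-up numbers).
T = np.array([[[0.9, 0.1]], [[0.2, 0.8]]])   # T[s, a, s']
O = np.array([[[0.8, 0.2]], [[0.3, 0.7]]])   # O[s', a, z]
b1 = belief_update(np.array([0.5, 0.5]), 0, 0, T, O)
v1 = value(b1, [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

Here observing z=0 (more likely under s'=0) shifts the belief toward state 0, and the value is then the best of the two candidate α-vectors at that belief.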
Figure 1: (a) Value iteration with exact updates. (b) Value iteration with point-based updates.
There are generally two phases to point-based algorithms. First, a set of belief points is selected, and second, a series of backup operations are applied over α-vectors for that set of points. In practice, steps of value iteration and steps of belief set expansion can be repeatedly interleaved to produce an anytime algorithm that can gradually trade off computation time and solution quality. The question of how to best select belief points is somewhat orthogonal to the ideas in this paper and is discussed in detail in [8]. We therefore focus on describing how to do point-based value backups, before showing how this step can be significantly accelerated by the use of appropriate metric data structures. The traditional value iteration POMDP backup operation is formulated as a dynamic program, where we build the n-th horizon value function V from the previous solution V′:
$$V(b) = \max_{a \in A} \Big[ \sum_{s \in S} R(s, a)\, b(s) + \gamma \sum_{z \in Z} \max_{\alpha' \in V'} \sum_{s \in S} \sum_{s' \in S} T(s, a, s')\, O(z, s', a)\, \alpha'(s')\, b(s) \Big] \quad (1)$$
$$\phantom{V(b)} = \max_{a \in A} \sum_{z \in Z} \max_{\alpha' \in V'} \Big[ \sum_{s \in S} \frac{R(s, a)}{|Z|}\, b(s) + \gamma \sum_{s \in S} \sum_{s' \in S} T(s, a, s')\, O(z, s', a)\, \alpha'(s')\, b(s) \Big]$$
To plan for a finite set of belief points B, we can modify this operation such that only one α-vector per belief point is maintained and therefore we only consider V(b) at points b ∈ B. This is implemented using three steps. First, we take each vector in V′ and project it backward (according to the model) for a given action, observation pair. In doing so, we generate intermediate sets Γa,z, ∀a ∈ A, ∀z ∈ Z:
$$\Gamma^{a,z} \leftarrow \alpha_i^{a,z}(s) = \frac{R(s, a)}{|Z|} + \gamma \sum_{s' \in S} T(s, a, s')\, O(z, s', a)\, \alpha'_i(s'), \quad \forall \alpha'_i \in V' \qquad \text{(Step 1)} \quad (2)$$
Second, for each b ∈ B, we construct Γa (∀a ∈ A).
This sum over observations1 includes the maximum αa,z (at a given b) from each Γa,z:
$$\Gamma_b^a = \sum_{z \in Z} \operatorname*{argmax}_{\alpha \in \Gamma^{a,z}} (\alpha \cdot b) \qquad \text{(Step 2)} \quad (3)$$
1In exact updates, this step requires taking a cross-sum over observations, which is $O(|S|\,|A|\,|V'|^{|Z|})$. By operating over a finite set of points, the cross-sum reduces to a simple sum, which is the main reason behind the computational speed-up obtained in point-based algorithms.
Finally, we find the best action for each belief point:
$$V \leftarrow \operatorname*{argmax}_{\Gamma_b^a,\, \forall a \in A} (\Gamma_b^a \cdot b), \quad \forall b \in B \qquad \text{(Step 3)} \quad (4)$$
The main bottleneck in applying point-based algorithms to larger POMDPs is in step 2, where we perform a B × Γ comparison2: for every b ∈ B, we must find the best vector from a given set Γa,z. This is usually implemented as a sequential search, exhaustively comparing α · b for every b ∈ B and every α ∈ Γa,z, in order to find the best α at each b (with overall time complexity O(|A| |Z| |S| |B| |V′|)). While this is not entirely unreasonable, it is by far the slowest step. It also completely ignores the highly structured nature of the belief space. Belief points exist in a metric space and there is much to be gained from exploiting this property. For example, given the piecewise linearity and convexity of the value function, it is more likely that two nearby points will share similar values (and policies) than points that are far away. Consequently it could be much more efficient to evaluate an α-vector over sets of nearby points, rather than by exhaustively looking at all the points separately. In the next section, we describe a new type of metric-tree which structures data points based on a distance metric over the belief simplex. We then show how this kind of tree can be used to efficiently evaluate α-vectors over sets of belief points (or belief regions).
4 Metric-trees for belief spaces
Metric data structures offer a way to organize large sets of data points according to distances between the points.
By organizing the data appropriately, it is possible to satisfy many different statistical queries over the elements of the set, without explicitly considering all points. Instances of metric data structures such as KD-trees, ball-trees and metric-trees have been shown to be useful for a wide range of learning tasks (e.g. nearest-neighbor, kernel regression, mixture modeling), including some with high-dimensional and non-Euclidean spaces. The metric-tree [12] in particular offers a very general approach to the problem of structural data partitioning. It consists of a hierarchical tree built by recursively splitting the set of points into spatially tighter subsets, assuming only that the distance between points is a metric.
4.1 Building a metric-tree from belief points
Each node η in a metric-tree is represented by its center ηc, its radius ηr, and a set of points ηB that fall within its radius. To recursively construct the tree—starting with node η and building children nodes η1 and η2—we first pick two candidate centers (one per child) at the extremes of η’s region: $\eta^1_c = \operatorname*{argmax}_{b \in \eta_B} D(\eta_c, b)$, and $\eta^2_c = \operatorname*{argmax}_{b \in \eta_B} D(\eta^1_c, b)$. In a single-step approximation to k-nearest-neighbor (k=2), we then re-allocate each point in ηB to the child with the closest center (ties are broken randomly):
$$\eta^1_B \leftarrow b \ \text{ if } D(\eta^1_c, b) < D(\eta^2_c, b), \qquad \eta^2_B \leftarrow b \ \text{ if } D(\eta^1_c, b) > D(\eta^2_c, b) \quad (5)$$
Finally we update the centers and calculate the radius for each child:
$$\eta^1_c = \mathrm{Center}\{\eta^1_B\}, \qquad \eta^2_c = \mathrm{Center}\{\eta^2_B\} \quad (6)$$
$$\eta^1_r = \max_{b \in \eta^1_B} D(\eta^1_c, b), \qquad \eta^2_r = \max_{b \in \eta^2_B} D(\eta^2_c, b) \quad (7)$$
2Step 1 projects all vectors α ∈ V′ for any (a, z) pair. In the worst case, this has time complexity $O(|A|\,|Z|\,|S|^2\,|V'|)$, however most problems have very sparse transition matrices and this is typically much closer to O(|A| |Z| |S| |V′|). Step 3 is also relatively efficient at O(|A| |Z| |S| |B|).
The general metric-tree algorithm allows a variety of ways to calculate centers and distances.
For the centers, the most common choice is the centroid of the points and this is what we use when building a tree over belief points. We have tried other options, but with negligible impact. For the distance metric, we select the max-norm: $D(\eta_c, b) = \|\eta_c - b\|_\infty$, which allows for fast searching as described in the next section. While the radius determines the size of the region enclosed by each node, the choice of distance metric determines its shape (e.g. with Euclidean distance, we would get hyper-balls of radius ηr). In the case of the max-norm, each node defines an |S|-dimensional hyper-cube of side length 2ηr. Figure 2 shows how the first two levels of a tree are built, assuming a 3-state problem.
Figure 2: (a) Belief points. (b) Top node. (c) Level-1 left and right nodes. (d) Corresponding tree.
While we need to compute the center and radius for each node to build the tree, there are additional statistics which we also store about each node. These are specific to using trees in the context of belief-state planning, and are necessary to evaluate α-vectors over regions of the belief simplex. For a given node η containing data points ηB, we compute ηmin and ηmax, the vectors containing respectively the min and max belief in each dimension:
$$\eta_{\min}(s) = \min_{b \in \eta_B} b(s), \qquad \eta_{\max}(s) = \max_{b \in \eta_B} b(s), \qquad \forall s \in S \quad (8)$$
4.2 Searching over sub-regions of the simplex
Once the tree is built, it can be used for fast statistical queries. In our case, the goal is to compute $\operatorname{argmax}_{\alpha \in \Gamma^{a,z}} (\alpha \cdot b)$ for all belief points. To do this, we consider the α-vectors one at a time, and decide whether a new candidate αi is better than any of the previous vectors {α0 . . . αi−1}. With the belief points organized in a tree, we can often assess this over sets of points by consulting a high-level node η, rather than by assessing this for each belief point separately. We start at the root node of the tree.
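The recursive construction of Eqs. 5–7, together with the per-node min/max statistics of Eq. 8, can be sketched as follows; the leaf-size cutoff and deterministic tie-breaking are simplifications of ours, not choices made in the paper:

```python
import numpy as np

def build_tree(points):
    """Recursively split belief points (rows of `points`) into a metric-tree
    node storing center, radius, and per-dimension min/max (Eq. 8).
    Distances use the max-norm, as in the text."""
    center = points.mean(axis=0)                  # centroid as node center
    dists = np.abs(points - center).max(axis=1)   # max-norm distances to center
    node = {"center": center, "radius": dists.max(),
            "min": points.min(axis=0), "max": points.max(axis=0),
            "points": points, "children": []}
    if len(points) <= 2:                          # leaf-size cutoff (our choice)
        return node
    # Candidate child centers at the extremes of the node's region.
    c1 = points[np.argmax(dists)]
    c2 = points[np.argmax(np.abs(points - c1).max(axis=1))]
    # Re-allocate each point to the closer candidate (ties go to child 2 here).
    side = np.abs(points - c1).max(axis=1) < np.abs(points - c2).max(axis=1)
    if side.all() or (~side).all():               # degenerate split: stop
        return node
    node["children"] = [build_tree(points[side]), build_tree(points[~side])]
    return node

# Hypothetical: 20 random points on the 3-state belief simplex.
rng = np.random.default_rng(1)
pts = rng.dirichlet(np.ones(3), size=20)
root = build_tree(pts)
```

Each child's center is then recomputed as its own centroid on the recursive call, matching Eq. 6, and its radius follows Eq. 7.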
There are four different situations we can encounter as we traverse the tree: first, there might be no single previous α-vector that is best for all belief points below the current node (Fig. 3a). In this case we proceed to the children of the current node without performing any tests. In the other three cases there is a single dominant α-vector at the current node; the cases are that the newest vector αi dominates it (Fig. 3b), is dominated by it (Fig. 3c), or neither (Fig. 3d). If we can prove that αi dominates or is dominated by the previous one, we can prune the search and avoid checking the current node’s children; otherwise we must check the children recursively.
Figure 3: Possible scenarios when evaluating a new vector αi at a node η, assuming a 2-state domain. (a) η is a split node. (b) αi is dominant. (c) αi is dominated. (d) αi is partially dominant.
We seek an efficient test to determine whether one vector, αi, dominates another, αj, over the belief points contained within a node. The test must be conservative: it must never erroneously say that one vector dominates another. It is acceptable for the test to miss some pruning opportunities—the consequence is an increase in run-time as we check more nodes than necessary—but this is best avoided if possible. The most thorough test would check whether Δ · b is positive or negative at every belief sample b under the current node (where Δ = αi − αj). All positive would mean that αi dominates αj, all negative the reverse, and mixed positive and negative would mean that neither dominates the other. Of course, this test renders the tree useless, since all points are checked individually. Instead, we test whether Δ · b is positive or negative over a convex region R which includes all of the belief samples that belong to the current node.
The smaller the region, the more accurate our test will be; on the other hand, if the region is too complicated we won’t be able to carry out the test efficiently. (Note that we can always test some region R by solving one linear program to find $l = \min_{b \in R} b \cdot \Delta$, another to find $h = \max_{b \in R} b \cdot \Delta$, and testing whether l < 0 < h. But this is expensive and we prefer a more efficient test.)
Figure 4: Several possible convex regions over subsets of belief points, assuming a 3-state domain.
We tested several types of region. The simplest type is an axis-parallel bounding box (Fig. 4a), $\eta_{\min} \le b \le \eta_{\max}$ for vectors ηmin and ηmax (as defined in Eq. 8). We also tested the simplex defined by $b \ge \eta_{\min}$ and $\sum_{s \in S} b(s) = 1$ (Fig. 4b), as well as the simplex defined by $b \le \eta_{\max}$ and $\sum_{s \in S} b(s) = 1$ (Fig. 4c). The most effective test we discovered assumes R is the intersection of the bounding box $\eta_{\min} \le b \le \eta_{\max}$ with the plane $\sum_{s \in S} b(s) = 1$ (Fig. 4d). For each of these shapes, minimizing or maximizing $b \cdot \Delta$ takes time O(d) (where d = #states): for the box (Fig. 4a) we check each dimension independently, and for the simplices (Figs. 4b, 4c) we check each corner exhaustively. For the last shape (Fig. 4d), maximizing with respect to b is the same as computing δ s.t. b(s) = ηmin(s) if Δ(s) < δ and b(s) = ηmax(s) if Δ(s) > δ. We can find δ in expected time O(d) using a modification of the quick-median algorithm. In practice, not all O(d) algorithms are equivalent. Empirical results show that checking the corners of regions (b) and (c) and taking the tightest bounds provides the fastest algorithm. This is what we used for the results presented below.
5 Results and Discussion
We have conducted a set of experiments to test the effectiveness of the tree structure in reducing computations.
While still preliminary, these results illustrate a few interesting properties of metric-trees when used in conjunction with point-based POMDP planning. Figure 5 presents results for six well-known POMDP problems, ranging in size from 4 to 870 states (for problem descriptions see [2], except for Coffee [10] and Tag [8]). While all these problems have been successfully solved by previous approaches, it is interesting to observe the level of speed-up that can be obtained by leveraging metric-tree data structures. In Fig. 5(a)-(f) we show the number of B × Γ (point-to-vector) comparisons required, with and without a tree, for different numbers of belief points. In Fig. 5(g)-(h) we show the computation time (as a function of the number of belief points) required for two of the problems. The No-Tree results were generated by applying the original PBVI algorithm (Section 2, [8]). The Tree results (which count comparisons on both internal and leaf nodes) were generated by embedding the tree searching procedure described in Section 4.2 within the same point-based POMDP algorithm. For some of the problems, we also show performance using an ϵ-tree, where the test for vector dominance can reject (i.e. declare αi is dominated, Fig. 3c) a new vector that is within ϵ of the current best vector. 
[Figure 5 panels plot # comparisons (or, in (g)-(h), computation time in seconds) against # belief points, for the No Tree, Tree, and Epsilon-Tree variants: (a) Hanks, |S|=4; (b) SACI, |S|=12; (c) Coffee, |S|=32; (d) Tiger-grid, |S|=36; (e) Hallway, |S|=60; (f) Tag, |S|=870; (g) SACI, |S|=12; (h) Tag, |S|=870.]

Figure 5: Results of PBVI algorithm with and without metric-tree.

These early results show that, in various proportions, the tree can cut down on the number of comparisons. This illustrates how the use of metric-trees can effectively reduce POMDP computational load. The ϵ-tree is particularly effective at reducing the number of comparisons in some domains (e.g. SACI, Tag). The much smaller effect shown in the other problems may be attributed to a poorly tuned ϵ (we used ϵ = 0.01 in all experiments). The question of how to set ϵ such that we most reduce computation, while maintaining good control performance, tends to be highly problem-dependent. In keeping with other metric-tree applications, our results show that computational savings increase with the number of belief points. What is more surprising is to see the trees paying off with so few data points (most applications of KD-trees start seeing benefits with 1000+ data points). This may be partially attributed to the compactness of our convex test region (Fig. 4d), and to the fact that we do not search on split nodes (Fig.
3a); however, it is most likely due to the nature of our search problem: many α-vectors are accepted or rejected before visiting any leaf nodes, which is different from typical metric-tree applications. We are particularly encouraged to see trees having a noticeable effect with very few data points because, in some domains, good control policies can also be extracted with few data points. We notice that the effect of using trees is negligible in some larger problems (e.g. Tiger-grid), while still pronounced in others of equal or larger size (e.g. Coffee, Tag). This is likely due to the intrinsic dimensionality of each problem.³ Metric-trees often perform well on high-dimensional datasets with low intrinsic dimensionality; this also appears to be true of metric-trees applied to α-vector sorting. While this suggests that our current algorithm is not as effective in problems with high intrinsic dimensionality, a slightly different tree structure or search procedure may well help in those cases. Recent work has proposed new kinds of metric-trees that can better handle point-based searches in high dimensions [7], and some of this may be applicable to the POMDP α-vector sorting problem.

6 Conclusion

We have described a new type of metric-tree which can be used for sorting belief points and accelerating value updates in POMDPs. Early experiments indicate that the tree structure, by appropriately pruning unnecessary α-vectors over large regions of the belief space, can accelerate planning for a range of problems. The promising performance of the approach on the Tag domain opens the door to larger experiments.

Acknowledgments

This research was supported by DARPA (MARS program) and NSF (ITR initiative).

References

[1] R. I. Brafman. A heuristic variable grid solution method for POMDPs. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI), pages 727–733, 1997.
[2] A. Cassandra. http://www.cs.brown.edu/research/ai/pomdp/examples/index.html.
[3] J. H.
Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209–226, 1977.
[4] M. Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13:33–94, 2000.
[5] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134, 1998.
[6] A. W. Moore. Very fast EM-based mixture model clustering using multiresolution KD-trees. In Advances in Neural Information Processing Systems (NIPS), volume 11, 1999.
[7] A. W. Moore. The anchors hierarchy: Using the triangle inequality to survive high dimensional data. Technical Report CMU-RI-TR-00-05, Carnegie Mellon, 2000.
[8] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[9] K.-M. Poon. A fast heuristic algorithm for decision-theoretic planning. Master's thesis, The Hong Kong University of Science and Technology, 2001.
[10] P. Poupart and C. Boutilier. Value-directed compression of POMDPs. In Advances in Neural Information Processing Systems (NIPS), volume 15, 2003.
[11] N. Roy and G. Gordon. Exponential family PCA for belief compression in POMDPs. In Advances in Neural Information Processing Systems (NIPS), volume 15, 2003.
[12] J. K. Uhlmann. Satisfying general proximity/similarity queries with metric trees. Information Processing Letters, 40:175–179, 1991.
[13] R. Zhou and E. A. Hansen. An improved grid-based approximation algorithm for POMDPs. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI), 2001.

³The coffee domain is known to have an intrinsic dimensionality of 7 [10].
We do not know the intrinsic dimensionality of the Tag domain, but many robot applications produce belief points that lie on lower-dimensional manifolds [11].
2003
Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression R. Vollgraf1, M. Scholz1, I. A. Meinertzhagen2, K. Obermayer1 1Department of Electrical Engineering and Computer Science, Berlin University of Technology, Germany {vro,idefix,oby}@cs.tu-berlin.de 2Dalhousie University, Halifax, Canada iam@is.dal.ca

Abstract

Nonlinear filtering can solve very complex problems, but typically involves very time-consuming calculations. Here we show that for filters constructed as an RBF network with Gaussian basis functions, a decomposition into linear filters exists which can be computed efficiently in the frequency domain, yielding a dramatic improvement in speed. We present an application of this idea to image processing. In electron micrograph images of photoreceptor terminals of the fruit fly, Drosophila, synaptic vesicles containing neurotransmitter should be detected and labeled automatically. We use hand labels, provided by human experts, to learn an RBF filter using Support Vector Regression with Gaussian kernels. We show that the resulting nonlinear filter solves the task to a degree of accuracy close to what can be achieved by human experts. This allows the very time-consuming task of data evaluation to be carried out efficiently.

1 Introduction

Using filters for image processing can be understood as a supervised learning approach to the classification and segmentation of certain image elements. A given training image contains a target that should be approximated by some filter at every location. In principle, any machine-learning technique could be employed to learn the mapping from the input receptive field of the filter to the target value. The simplest filter is a linear mapping. It has the advantage that it can be computed very efficiently in the frequency domain. However, linear filters may not be complex enough for difficult problems.
The complexity of nonlinear filters is in principle unlimited (if we leave generalization issues aside), but the computation of the filter output can be very time consuming, since usually there is no shortcut in the frequency domain, as there is for linear filters. However, for nonlinear filters that are linear superpositions of Gaussian radial basis functions, there exists a decomposition into linear filters, allowing the filter output to be computed in reasonable time. This sort of nonlinear filtering is obtained, for example, when Support Vector Machines (SVM) with a Gaussian kernel are used for learning. SVM have been shown to yield good performance in many applications [1]. This, together with the ability to compute the filter output in affordable time, makes SVM interesting for nonlinear filtering in image-processing tasks. Here we apply this new method to the evaluation of electron micrograph images taken from the visual system of the fruit fly, Drosophila, as a means to analyze morphological phenotypes of new genetic mutants. Genetically manipulable organisms such as Drosophila provide means to address many current questions in neuroscience. The action, even of lethal genes, can be uncovered in photoreceptors by creating homozygous whole-eye mosaics in heterozygous flies [2]. Mutant synaptic phenotypes are then interpretable from detailed ultra-structural knowledge of the photoreceptor terminals R1-R6 in the lamina [3]. Electron microscopy (EM) alone offers the resolution required to analyze sub-cellular structure, even though this technique is tedious to undertake. In Drosophila genetics hundreds of mutants of the visual system have been isolated, many even from a single genetic screen. The task of analyzing each of these mutants manually is simply not feasible, hence reliable automatic (computer-assisted) methods are needed.
The focus here is just to count the number of synaptic vesicles, but in general the method proposed in this report could be extended to the analysis of other structures as well. As representative datasets showing the feasibility of the proposed method, we have chosen two datasets from wild type Drosophila (ter01 for training and ter04 for performance evaluation, cf. Fig. 1) and one from a visual system mutant (mutant, also for performance evaluation, cf. Fig. 2, left).

2 Learning the RBF Filter

Given an image x, we want to find an RBF filter with Gaussian basis functions whose output is closest to a target image y, in terms of some suitable distance measure. The filter is constrained to some receptive field P, so that its output at position r is, in the most general form,

z(r) = f_RBF(x(r)) = f_RBF( (x(r + ∆r_1), . . . , x(r + ∆r_M))^T ),

where P = {∆r_1, . . . , ∆r_M} is the neighborhood that forms the receptive field. In the following we will continue using boldface symbols to indicate a vector containing the neighborhood (patch) at some location, while lightface symbols indicate the value of the image itself. Individual elements of patches are addressed by a subscript, for example x_∆r(r) = x(r + ∆r). f_RBF is an RBF network with M input dimensions. It can be implemented as a feed-forward net with a single hidden layer containing a fixed number of RBF units and a linear output layer [4]. However, we would rather use the technique of Support Vector Regression (SVR) [5], as it has a number of advantages over RBF feed-forward networks. It offers adjustable model complexity depending on the training data, thus providing good generalization performance. The training of SVR is a quadratic, constrained optimization problem, which can be solved efficiently without being trapped in local minima. In the linear case the formulation of the ν-SVR, as introduced in [6], is

minimize  τ(w, ξ^(*), ε) = (1/2)‖w‖² + C · ( νε + (1/l) ∑_{i=1}^{l} (ξ_i + ξ_i^*) )   (1)

s.t.  ((w · x_i) + b) − y_i ≤ ε + ξ_i ,   y_i − ((w · x_i) + b) ≤ ε + ξ_i^* ,   (2)

      ξ_i^(*) ≥ 0 ,   ε ≥ 0 .   (3)

The constraints implement as a distance measure the ε-insensitive loss |y − f(x)|_ε = max{0, |y − f(x)| − ε}, which is a basic feature of SVR and has been shown to yield robust estimation. The objective itself provides a solution of low complexity (small ‖w‖²) and, at the same time, low errors, balanced by C. In contrast to ε-SVR, as first introduced in [5], parameterization with the hyperparameter ν also allows optimization of the width ε of the insensitive region. Interacting with C, ν controls the complexity of the model. It provides an upper bound on the fraction of outliers (samples that do not fall into the ε-tube) and a lower bound on the fraction of support vectors (SV); see [6] and [1] for further details. As usual for SVM, the system is transformed into a nonlinear regressor by replacing the scalar product with a kernel that fulfills Mercer's condition [7]. With a Gaussian kernel (RBF kernel) the regression function is

z(r) = ∑_{i=1}^{l} α_i^(*) z_i(r) + b ,   (4)

where

z_i(r) = k(x_i, x(r)) = exp( −(1/γ) ∑_{∆r∈P} (x_{i,∆r} − x(r + ∆r))² )   (5)

is the Gaussian or RBF kernel. The resulting SVs x_i are a subset of the training examples, for which one of the constraints (2) holds with equality. They correspond to Lagrange multipliers having α_i^(*) = (α_i − α_i^*) ≠ 0. In the analogy to an RBF network, the SVs are the centers of the basis functions, while the α_i^(*) are the weights of the output layer.

3 RBF Filtering

To evaluate an RBF network filter at location r, all the basis functions have to be evaluated for the neighborhood x(r). This calculation is computationally very expensive when computed in the straightforward way given by (5). If the squared sum is multiplied out, however, we can compute the kernel as

z_i(r) = exp( −(1/γ) ( ‖x_i‖² − 2z′_i(r) + z′′_i(r) ) ) ,   (6)

where

z′_i(r) = ∑_{∆r∈P} x_{i,∆r} x(r + ∆r)   and   z′′_i(r) = ∑_{∆r∈P} x(r + ∆r)² .   (7)

Now we are left with linear filtering operations only: the two cross-correlations z′ and z′′, which can be efficiently computed in the frequency domain, where the cross-correlation of a signal with some filter becomes a multiplication of the signal's spectrum with the complex conjugate spectrum of the filter. This operation is so much faster that the additional computational cost of the Fourier transform is negligible. Note that in fact z′′ is the cross-correlation of x² with the filter o, which is 1 for all ∆r ∈ P. We need to compute the following Fourier transforms:

X(jω) ≡ F[x(r)] ,   X^(2)(jω) ≡ F[x²(r)] ,   X_i(jω) ≡ F[x_i(r)] ,   O(jω) ≡ F[o(r)] .   (8)

Here x_i(r) and o(r) are the filters x_i and o, zero-filled for r ∉ P to the size of the image. It is necessary to take care of the placement of the origin ∆r = 0 and the mapping of negative offsets in P, which depends on the implementation of the Fourier transform. Now z_i is easily computed as

z_i(r) ≡ exp( −(1/γ) ( x_i^T x_i − F^{−1}[ 2 X_i^C(jω) X(jω) − O^C(jω) X^(2)(jω) ] ) ) ,   (9)

where (·)^C denotes the complex conjugate. Using the Fast Fourier Transform (FFT), the speed improvement is greatest when the size of x is a power of 2 [8]. Thus one should consider enlarging the image size by adding the appropriate number of zeros at the border. However, this can lead to large overhead regions when the image size is not close to the next power of 2. For this reason we use a tiling scheme, which processes the image in smaller parts of suitable size, which can cover the entire image more closely. It is important to be aware of the distorted margins of the image or its tiles when filtering is done in the frequency domain. Because the cross-correlation in the frequency domain is cyclic, points at the margin, for which the neighborhood P exceeds the image boundaries, have incorrect values in the filter's output.
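The decomposition (6)-(7) can be checked numerically in a few lines. The sketch below uses scipy's FFT-based convolution as a stand-in for an explicit hand-written frequency-domain implementation (cross-correlation is convolution with the patch flipped in both axes); the image, patch, and γ value are synthetic:

```python
import numpy as np
from scipy.signal import fftconvolve

def rbf_responses(x, xi, gamma):
    """Kernel map z_i(r) = exp(-(||xi||^2 - 2 z'(r) + z''(r)) / gamma).

    z'  : cross-correlation of the image x with the support-vector patch xi
    z'' : cross-correlation of x^2 with the all-ones window o over P
    Both are computed with FFT-based convolution; 'valid' mode restricts the
    output to positions where the patch fits entirely inside the image."""
    flip = xi[::-1, ::-1]                                   # correlation via convolution
    z1 = fftconvolve(x, flip, mode="valid")                 # z'
    z2 = fftconvolve(x * x, np.ones_like(xi), mode="valid") # z''
    return np.exp(-(np.sum(xi * xi) - 2.0 * z1 + z2) / gamma)

# sanity check against the direct evaluation of Eq. (5) at one location
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32))     # toy "image"
xi = rng.normal(size=(5, 5))      # one support vector as a 5x5 patch
z = rbf_responses(x, xi, gamma=50.0)
r, c = 7, 11
direct = np.exp(-np.sum((x[r:r + 5, c:c + 5] - xi) ** 2) / 50.0)
assert np.isclose(z[r, c], direct)
```

Because the argument of the exponential is a (negated) sum of squares, every response lies in (0, 1], which is a convenient additional sanity check on the sign conventions in (6) and (9).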
The margin distortion is particularly important for the tiling scheme, which has to provide sufficient overlap between the tiles, so that the image can be covered completely with the uncorrupted inner parts of the tiles. Table 1 summarizes the speed-up gained by the described filtering method. Most of the performance gain is obtained through the filtering in the frequency domain; however, splitting the image into tiles of appropriate size can improve speed even further.

Table 1: Computation time examples for different filtering methods.
  filtering according to (5):         6d 10h
  FFT filtering, whole image:         55m
  FFT filtering, tiles of 256 × 256:  24m
(Image size 1686 × 1681 pixels; 200 SVs of size 50 × 50 pixels; implementation in MATLAB; SUN F6800 / 750 MHz, 1 CPU.)

4 Experiments

To test the performance of the method we used two images of wild type and one of mutant photoreceptor terminals. The profiles of the terminals typically contain about 100 synaptic vesicles, the number of which could differ if the genes for membrane trafficking are mutated. Detecting such numerical differences is a simple but tedious task best suited to a computational approach. The wild type images came from electron micrographs of the same animal under the same physiological conditions. For all images, visual identification and hand labelings of the vesicles were made. Image ter01 (Fig. 1, left) was used for training. The validation error on ter04 (Fig. 1, right) was considered for model selection. Then the best model was tested on the mutant image (Fig. 2).

Figure 1: EM images of photoreceptor terminals of the wild type fruit fly, Drosophila melanogaster. The left image (ter01) was used for training, the right image (ter04) for validation. Arrow: individual synaptic vesicle, 30 nm in diameter.

4.1 Construction of the Target

ter01 contains 286 hand-labeled vesicles at discrete positions. To generate a smooth target image y, circular Gaussian blobs with σ² = 40 and a peak value of 1 were placed at every label. Training examples x(r) were then generated from ter01 by taking square patches centered around r. We set the patch size P = 50 × 50 pixels, to cover an entire vesicle plus a small surround. The corresponding values y(r) of the target image were used as targets for regression. The most complete training set would clearly contain patches from all locations, which however would be computationally infeasible. Instead we used patches from all hand-label positions and additionally 2000 patches from random positions. No patches exceeded the image boundaries. With these data the SVM was trained. We used the libsvm implementation [9], which also contains, among others, the ν-SVR. Mainly three parameters have to be adjusted for training the ν-SVR: the width of the RBF kernel γ and the parameters ν and C. Since the training dataset is small compared to the input dimensionality, the validation error on ter04 is subject to large variance. Therefore we cannot give a complete parameter exploration here, but we would expect a model with moderate complexity to give the best generalization. It turned out that, for the given conditions, a kernel size of γ = 20,000 together with a low value ν = 0.1 and C = 0.01 yields good validation results on ter04. The optimization returned 245 SVs, 185 of which were outliers. The kernel width is large compared with the average distance of the training examples in input space, which was < 2,000. Because the computation time of the filter grows linearly with the number of SVs, we are strongly interested in a solution with only few SVs. This requires small values of ν, since ν is a lower bound on the fraction of SVs. At the same time, small ν values provide a large ε and hence restrict the model complexity. After filtering, the decision as to which points in z correspond to vesicles has to be made. Although the regions of high amplitude form sharp peaks, they still have some spatial extension.
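For reference, an analogous ν-SVR fit can be set up with scikit-learn's NuSVR, a stand-in for the libsvm interface used here (the synthetic data, the dimensionality, and the γ value below are ours; only ν = 0.1 and C = 0.01 come from the text, and note that scikit-learn's `gamma` multiplies the squared distance, i.e. it corresponds to 1/γ in the kernel (5)):

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))              # stand-in for flattened image patches
y = np.exp(-np.sum(X * X, axis=1) / 16.0)   # smooth synthetic target in (0, 1]

# nu upper-bounds the fraction of outliers and lower-bounds the fraction of
# support vectors; keeping nu and C small keeps the model simple, as in the text.
model = NuSVR(kernel="rbf", nu=0.1, C=0.01, gamma=1.0 / 16.0)
model.fit(X, y)
n_sv = len(model.support_)                  # number of support vectors
```

Since filtering time grows linearly with the number of SVs, `n_sv` is the quantity to watch when tuning ν on real patch data.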
Because of this spatial extension, we first discriminate for the peak locations and then for the amplitude. In a first step, we determine those locations r for which z(r) is a local maximum in some neighborhood, determined roughly by the size of a vesicle, i.e. we consider the set

Q_d = { r : z(r) = max_{∆r: ‖∆r‖ ≤ d} z(r + ∆r) } .   (10)

Then a threshold is applied to the candidates in Q_d to yield the set of locations which are considered as detected vesicles,

Q_θ = { r ∈ Q_d : z(r) > θ } .   (11)

We set the parameter d = 15 constant in our experiments, and vary only the threshold θ.

4.2 Performance Evaluation

To evaluate the performance of the method, the set of detected vesicles Q_θ must be compared with the set Q_Exp, which contains the locations detected by a human expert. Clearly this is only meaningful when done on data which was not used to train the SVM. We note that the location of the same vesicle may vary slightly in Q_θ and Q_Exp, due to fluctuations in the manual labeling, for example. So we need to find the set Q_match, containing pairs (r_1, r_2) with r_1 ∈ Q_θ, r_2 ∈ Q_Exp, such that r_1 and r_2 are close to each other and describe the location of the same vesicle. We compute this with a simple, greedy but fast algorithm:

• compute the matrix D_ij = ‖r_i − r_j‖ for all r_i ∈ Q_θ, r_j ∈ Q_Exp
• while min_ij D_ij ≤ d_m:
  – put (r_i, r_j) into Q_match
  – fill the i-th row and j-th column of D with +∞

The resulting pairs of matching locations are closer than d_m, which should be set approximately to the radius of a vesicle. This algorithm does not generally find the globally optimal assignment, which would require solving a more expensive optimal assignment problem, but for low point densities the error made by this algorithm is usually low. Now we can evaluate the fraction of correctly detected vesicles and the fraction of false positives,

f_c = #Q_match / #Q_Exp ,   f_fp = 1 − #Q_match / #Q_θ ,   (12)

where # denotes the cardinality of the set. Depending on the threshold θ, #Q_θ may change and so does #Q_match. So we get different values for f_c and f_fp.
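The greedy matching step and the rates (12) translate directly into code (a minimal sketch; function and variable names are ours):

```python
import numpy as np

def greedy_match(detected, expert, d_m):
    """Greedily pair detections with expert labels: repeatedly take the
    globally closest unmatched pair while its distance is at most d_m.
    Marking a matched row and column with +inf enforces that each detection
    and each label is used at most once."""
    det = np.asarray(detected, dtype=float)
    exp = np.asarray(expert, dtype=float)
    D = np.linalg.norm(det[:, None, :] - exp[None, :, :], axis=2)
    matches = []
    while D.size > 0 and D.min() <= d_m:
        i, j = np.unravel_index(np.argmin(D), D.shape)
        matches.append((i, j))
        D[i, :] = np.inf
        D[:, j] = np.inf
    return matches

# toy example: three detections, two expert labels, matching radius 5
detected = [(0, 0), (10, 10), (30, 30)]
expert = [(1, 0), (30, 31)]
m = greedy_match(detected, expert, d_m=5.0)
f_c = len(m) / len(expert)            # fraction correctly detected
f_fp = 1.0 - len(m) / len(detected)   # fraction of false positives
```

In the toy data, both expert labels find a detection within the radius while the detection at (10, 10) stays unmatched, so f_c = 1 and f_fp = 1/3.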
We summarize these two rates in a diagram which we call, following [10], a Receiver Operating Characteristic (ROC). In comparison to [10], f_c represents the hit rate and f_fp represents the false alarm rate, cf. Fig. 3. However, our ROC differs in some aspects. f_c does not need to reach 1 for arbitrarily low thresholds, as it is restricted by the set Q_d, which need not contain a match for every element of Q_Exp. Furthermore, raising the threshold (decreasing #Q_θ) may occasionally increase #Q_match due to the greedy matching algorithm. These artifacts yield nonmonotonic parts in the ROC. If no a priori costs are assigned to f_c and f_fp, then a natural measure of quality is the area below the ROC, which would be close to 1 at best, and 0 if no match were contained in Q_d.

4.3 Results

The ROC of the validation with ter04 and mutant is shown in Fig. 3. The rates f_c and f_fp were computed for 50 different threshold values, covering the interval [min_{r∈Q_d} z(r), max_{r∈Q_d} z(r)]. For ter04 there exist four, and for mutant two, human expert labelings. Therefore we can plot either four or two curves, respectively, and get an impression of the variance of our performance measure, the area below the curve. Furthermore, the multiple hand labelings allow us to plot them against each other in the same figure (single crosses). They indicate what performance is achievable at best. A curve passing these points can be considered to do the task on average as well as a human does. One can see that for the wild type image the curve gets close to that region. For the mutant the performance is slightly worse, in terms of the area. In mutants not only the number of vesicles, but typically also their shape and appearance differ.

Figure 2: Left: photoreceptor terminal of a mutant type (mutant). Right: close-up of the left panel, showing labels set by a human (+) and labels found by our method (□). Threshold θ was 0.3, which yields f_c ≈ 1 − f_fp in this case.
This variability was not covered by the training set and had to be generalized from the wild type data.

5 Discussion

We showed that SVR, used as a nonlinear filter, was able to detect synaptic vesicles in electron micrographs with high accuracy. On the one hand, for good performance the ability of the SVR to learn the input/output mapping properly is crucial. On the other hand, it is necessary that a small neighborhood in the input image contains sufficient information about the target. Due to the "curse of dimensionality" (cf. [5]) the receptive field P must not be too large, unless there is a huge amount of training data. A smaller input dimension would make the learning easier, but if P is too small, the information that x(r) contains about y(r) may be too small and the performance poor. For the presented application the patch size P = 50 × 50 was a good tradeoff. Note that, since we do the filtering in the frequency domain, the size of P has, in contrast to the number of SVs, no direct influence on the computation time needed for filtering. Thus, we have a 2500-dimensional input space and only 286 points in this space that describe a vesicle. Clearly, only a model with low complexity can achieve acceptable generalization, and this is what we used. In fact the best linear SVR, i.e. the best linear filter, which has much lower complexity still, yields a performance of A_ter04 = 0.82 and A_mutant = 0.74 (cf. Fig. 3, where the nonlinear filter reaches A_ter04 = 0.85…0.89 and A_mutant = 0.76…0.83). However, for future work we plan to extend the training set significantly. To do so, we have access to hand labelings for a broad variety of images of different mutants, also including slightly different scalings. With such additional training data the nonlinear SVR can become more complex without loss of generalization performance. The capacity of the linear filter, however, cannot grow any further. Thus we expect the performance gap between nonlinear and linear filtering to grow significantly.
[Figure 3 plots. Left panel (ter04): A1 = 0.863, A2 = 0.848, A3 = 0.838, A4 = 0.889. Right panel (mutant): A1 = 0.826, A2 = 0.765.]

Figure 3: ROC of the validation with ter04 (left) and with mutant (right). For various thresholds θ, f_c is plotted on the x-axis versus 1 − f_fp on the y-axis. The single crosses show the fraction of matching labels for every pair of hand labels of ter04. For detailed explanation, see text.

Acknowledgments

Support contributed by: BMBF grant 0311559 (R.V., M.S., K.O.) and NIH grant EY-03592; Killam Trust (I.A.M.)

References

[1] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels. The MIT Press, 2002.
[2] R. S. Stowers and T. L. Schwarz. A genetic method for generating Drosophila eyes composed exclusively of mitotic clones of a single genotype. Genetics, (152):1631–1639, 1999.
[3] R. Fabian-Fine, P. Verstreken, P. R. Hiesinger, J. A. Horne, R. Kostyleva, H. J. Bellen, and I. A. Meinertzhagen. Endophilin acts after synaptic vesicle fission in Drosophila photoreceptor terminals. J. Neurosci., 2003. (in press).
[4] Simon S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, 1998.
[5] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[6] B. Schölkopf, A. Smola, R. Williamson, and P. Bartlett. New support vector algorithms. Neural Computation, 12(5):1207–1245, May 2000.
[7] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London A, 209:415–446, 1909.
[8] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C. Cambridge University Press, 2nd edition, 1992.
[9] Chih-Chung Chang and Chih-Jen Lin. LIBSVM – A Library for Support Vector Machines. http://www.csie.ntu.edu.tw/~cjlin/libsvm/, April 2003.
[10] L. O. Harvey, Jr. The critical operating characteristic and the evaluation of expert judgment.
Organizational Behavior and Human Decision Processes, 53(2):229–251, 1992.
2003
Distributed Optimization in Adaptive Networks Ciamac C. Moallemi Electrical Engineering Stanford University Stanford, CA 94305 ciamac@stanford.edu Benjamin Van Roy Management Science and Engineering and Electrical Engineering Stanford University Stanford, CA 94305 bvr@stanford.edu Abstract We develop a protocol for optimizing dynamic behavior of a network of simple electronic components, such as a sensor network, an ad hoc network of mobile devices, or a network of communication switches. This protocol requires only local communication and simple computations which are distributed among devices. The protocol is scalable to large networks. As a motivating example, we discuss a problem involving optimization of power consumption, delay, and buffer overflow in a sensor network. Our approach builds on policy gradient methods for optimization of Markov decision processes. The protocol can be viewed as an extension of policy gradient methods to a context involving a team of agents optimizing aggregate performance through asynchronous distributed communication and computation. We establish that the dynamics of the protocol approximate the solution to an ordinary differential equation that follows the gradient of the performance objective. 1 Introduction This paper is motivated by the potential of policy gradient methods as a general approach to designing simple scalable distributed optimization protocols for networks of electronic devices. We offer a general framework for such protocols that builds on ideas from the policy gradient literature. We also explore a specific example involving a network of sensors that aggregates data. In this context, we propose a distributed optimization protocol that minimizes power consumption, delay, and buffer overflow. The proposed approach for designing protocols based on policy gradient methods comprises one contribution of this paper. In addition, this paper offers fundamental contributions to the policy gradient literature. 
In particular, the kind of protocol we propose can be viewed as extending policy gradient methods to a context involving a team of agents optimizing system behavior through asynchronous distributed computation and parsimonious local communication. Our main theoretical contribution is to show that the dynamics of our protocol approximate the solution to an ordinary differential equation that follows the gradient of the performance objective.

2 A General Formulation

Consider a network consisting of a set of components V = {1, . . . , n}. Associated with this network is a discrete-time dynamical system with a finite state space W. Denote the state of the system at time k by w(k), for k = 0, 1, 2, . . .. There are n subsets W_1, . . . , W_n of W, each consisting of states associated with events at component i. Note that these subsets need not be mutually exclusive or totally exhaustive. At the kth epoch, there are n control actions a_1(k) ∈ A_1, . . . , a_n(k) ∈ A_n, where each A_i is a finite set of possible actions that can be taken by component i. We sometimes write these control actions in vector form, a(k) ∈ A = A_1 × · · · × A_n. The actions are governed by a set of policies π^1_{θ_1}, . . . , π^n_{θ_n}, parameterized by vectors θ_1 ∈ R^{N_1}, . . . , θ_n ∈ R^{N_n}. The ith action process only transitions when the state w(k) transitions to an element of W_i. At the time of transition, the probability that a_i(k) becomes any a_i ∈ A_i is given by π^i_{θ_i}(a_i|w(k)). The state transitions depend on the prior state and action vector. In particular, let P(w′, a′, w) be a transition kernel defining the probability of state w given prior state w′ and action a′. Letting θ = (θ_1, . . . , θ_n), we have

Pr{w(k) = w, a(k) = a | w(k−1) = w′, a(k−1) = a′, θ} = P(w′, a′, w) ∏_{i: w∈W_i} π^i_{θ_i}(a_i|w) ∏_{i: w∉W_i} 1{a′_i = a_i} .

Define F_k to be the σ-algebra generated by {(w(ℓ), a(ℓ)) | ℓ = 1, . . . , k}. While the system is in state w ∈ W and action a ∈ A is applied, each component i receives a reward r_i(w, a).
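The factored transition above (state sampled from P, then only the components whose event set W_i contains the new state re-draw their actions) can be sketched as a single simulation step. This is a toy illustration under assumed data layouts of our own choosing, not the authors' implementation:

```python
import numpy as np

def step(w_prev, a_prev, P, policies, W_sets, theta, rng):
    """One transition of the factored dynamics: sample w ~ P(w_prev, a_prev, .),
    then component i re-draws a_i from pi^i_theta_i(.|w) only if w is in W_i;
    all other components keep their previous action."""
    probs = P[(w_prev, *a_prev)]
    w = rng.choice(len(probs), p=probs)
    a = list(a_prev)
    for i, W_i in enumerate(W_sets):
        if w in W_i:
            pi = policies[i](theta[i], w)   # distribution over A_i
            a[i] = rng.choice(len(pi), p=pi)
    return w, a

# toy instance: 2 states, 2 components with binary actions
rng = np.random.default_rng(0)
P = np.zeros((2, 2, 2, 2))
P[..., 1] = 1.0                             # deterministic: always move to state 1
softmax = lambda v: np.exp(v) / np.exp(v).sum()
policies = [lambda th, w: softmax(th)] * 2  # state-independent softmax policies
theta = [np.zeros(2), np.zeros(2)]
W_sets = [{0}, {1}]                         # component 0 acts in state 0, component 1 in state 1
w, a = step(0, (0, 0), P, policies, W_sets, theta, rng)
```

With the deterministic kernel above, the chain moves to state 1, so only component 1 (whose event set contains state 1) resamples its action; component 0's action is carried over unchanged.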
The average reward received by the network is r(w, a) = (1/n) Σni=1 ri(w, a). Assumption 1. For every θ, the Markov chain w(k) is ergodic (aperiodic, irreducible). Given Assumption 1, for each fixed θ, there is a well-defined long-term average reward λ(θ) = limK→∞ (1/K) E[ΣK−1k=0 r(w(k), a(k))]. We will consider a stochastic approximation iteration (1) θi(k + 1) = θi(k) + ϵχi(k). Here, ϵ > 0 is a constant step size and χi(k) is a noisy estimate of the gradient ∇θiλ(θ(k)) computed at component i based on the component’s historically observed states, actions, and rewards, in addition to communication with other components. Our goal is to develop an estimator χi(k) that can be used in an adaptive, asynchronous, and decentralized context, and to establish the convergence of the resulting stochastic approximation scheme. Our approach builds on policy gradient algorithms that have been proposed in recent years ([5, 7, 8, 3, 4, 2]). As a starting point, consider a gradient estimation method that is a decentralized variation of the OLPOMDP algorithm of [3, 4, 1]. In this algorithm, each component i maintains and updates an eligibility vector zβi(k) ∈ RNi, defined by (2) zβi(k) = Σkℓ=0 βk−ℓ [∇θi πiθi(ℓ)(ai(ℓ)|w(ℓ)) / πiθi(ℓ)(ai(ℓ)|w(ℓ))] 1{w(ℓ)∈Wi}, for some β ∈ (0, 1). The algorithm generates an estimate ¯χi(k) = r(w(k), a(k)) zβi(k) of the local gradient ∇θiλ(θ(k)). Note that while the eligibility vector zβi(k) can be computed using only local information, the gradient estimate ¯χi(k) cannot be computed without knowledge of the global reward r(w(k), a(k)) at each time. In a fully decentralized environment, where components only have knowledge of their local rewards, this algorithm cannot be used. In this paper, we present a simple scalable distributed protocol through which rewards occurring locally at each node are communicated over time across the network and gradient estimates are generated at each node based on local information.
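The eligibility-trace recursion (2) and the OLPOMDP-style estimate ¯χi(k) = r(k) zβi(k) can be sketched for a single component with a scalar parameter and a Bernoulli (logistic) policy. The reward below is a toy choice, and every step is treated as a triggering event, so the indicator 1{w(ℓ)∈Wi} is always 1:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

beta = 0.9      # eligibility discount factor from (2)
theta = 0.0     # scalar policy parameter (toy)
z = 0.0         # eligibility trace z^beta
total = 0.0
N = 5000
for k in range(N):
    p = sigmoid(theta)
    a = 1 if random.random() < p else 0
    score = a - p            # grad_theta log pi_theta(a) for the logistic policy
    z = beta * z + score     # z^beta(k) = beta * z^beta(k-1) + score(k)
    r = float(a)             # toy reward: 1 whenever action 1 is taken
    total += r * z           # accumulate chi_bar(k) = r(k) * z(k)
grad_est = total / N         # long-run average of the gradient estimates
```

For this memoryless toy problem the true gradient of the expected reward is p(1 − p) = 0.25 at θ = 0, and the long-run average of ¯χ(k) concentrates near that value.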
A fundamental issue this raises is that rewards may incur large delays before being communicated across the network. Moreover, these delays may be random and may be correlated with the underlying events that occur in operation of the network. We address this issue and establish conditions for convergence. Another feature of the protocol is that it is completely decentralized – there is no central processor that aggregates and disseminates rewards. As such, the protocol is robust to isolated changes or failures in the network. In addition to the design of the protocol, a significant contribution is in the protocol’s analysis, which we believe requires new ideas beyond what has been employed in the prior policy gradient literature. 3 A General Framework for Protocols We will make the following assumption regarding the policies, which is common in the policy gradient literature ([7, 8, 3, 4, 2]). Assumption 2. For all i and every w ∈ Wi, ai ∈ Ai, πiθi(ai|w) is a continuously differentiable function of θi. Further, for every i, there exists a bounded function Li(w, ai, θ) such that for all w ∈ Wi, ai ∈ Ai, ∇θi πiθi(ai|w) = πiθi(ai|w) Li(w, ai, θ). The latter part of the assumption is satisfied, for example, if there exists a constant ϵ > 0 such that for each i, w ∈ Wi, ai ∈ Ai, either πiθi(ai|w) = 0 for every θi or πiθi(ai|w) ≥ ϵ for all θi. Consider the following gradient estimator: (3) χi(k) = zβi(k) (1/n) Σnj=1 Σkℓ=0 dαij(ℓ, k) rj(ℓ), where we use the shorthand rj(ℓ) = rj(w(ℓ), a(ℓ)). Here, the random variables {dαij(ℓ, k)}, with parameter α ∈ (0, 1), represent an arrival process describing the communication of rewards across the network. Indeed, dαij(ℓ, k) is the fraction of the reward rj(ℓ) at component j that is learned by component i at time k ≥ ℓ. We will assume the arrival process satisfies the following conditions. Assumption 3. For each i, j, ℓ, and α ∈ (0, 1), the process {dαji(ℓ, k) | k = ℓ, ℓ+1, ℓ+2, . . .} satisfies: 1. dαji(ℓ, k) is Fk-measurable. 2.
There exists a scalar γ ∈ (0, 1) and a random variable cℓ such that for all k ≥ ℓ, dαji(ℓ, k) / ((1 − α)αk−ℓ−1) < cℓ γk−ℓ, with probability 1. Further, we require that the distribution of cℓ given Fℓ depend only on (w(ℓ), a(ℓ)), and that there exist a constant ¯c such that E[cℓ | w(ℓ) = w, a(ℓ) = a] < ¯c < ∞, with probability 1 for all initial conditions w ∈ W and a ∈ A. 3. The distribution of {dαji(ℓ, k) | k = ℓ, ℓ+1, . . .} given Fℓ depends only on w(ℓ) and a(ℓ). The following result, proved in our appendix [9], establishes the convergence of the long-term sample averages of estimators χi(k) of the form (3) to an estimate of the gradient. This type of convergence is central to the convergence of the stochastic approximation iteration (1). Theorem 1. Holding θ fixed, the limit ∇αβθi λ(θ) = limK→∞ (1/K) E[ΣK−1k=0 χi(k)] exists. Further, limα↑1 lim supβ↑1 ∥∇αβθi λ(θ) − ∇θiλ(θ)∥ = 0. 4 Example: A Sensor Network In this section, we present a model of a wireless network of sensors that gathers and communicates data to a central base station. Our example is motivated by issues arising in the development of sensor network technology being carried out by commercial producers of electronic devices. However, we will not take into account the many complexities associated with real sensor networks. Rather, our objective is to pose a simplified model that motivates and provides a context for discussion of our distributed optimization protocol. 4.1 System Description Consider a network of n sensors and a central base station. Each sensor gathers packets of data through observation of its environment, and these packets of data are relayed through the network to the base station via multi-hop wireless communication. Each sensor retains a queue of packets, each obtained either through sensing or via transmission from another sensor. Packets in a queue are indistinguishable – each is of equal size and must be transferred to the central base station.
We take the state of a sensor to be the number of packets in the queue and denote the state of the ith sensor at time k by xi(k). The number of packets in a queue cannot exceed a finite buffer size, which we denote by x. A number of triggering events occur at any given device. These include (1) packetizing of an observation, (2) reception of a packet from another sensor, (3) transmission of a packet to another sensor, (4) awakening from a period of sleep, (5) termination of a period of attempted reception, (6) termination of a period of attempted transmission. At the time of a triggering event, the sensor must decide on its next action. Possible actions include (1) sleep, (2) attempt transmission, (3) attempt reception. When the buffer is full, options are limited to (1) and (2). When the buffer is empty, options are limited to (1) and (3). The action taken by the ith sensor at time k is denoted by ai(k). The base station will be thought of as a sensor that has an infinite buffer and perpetually attempts reception. For each i, there is a set N(i) of entities with which the ith sensor can directly communicate. If the ith sensor is attempting transmission of a packet and there is at least one element of N(i) that is simultaneously attempting reception and is closer to the base station than component i, the packet is transferred to the queue of that element. If there are multiple such elements, one of them is chosen randomly. Note that if all elements of N(i) that are attempting reception are further away from the base station than component i, no packet is transmitted. Observations are made and packetized by each sensor at random times. If a sensor’s buffer is not full when an observation is packetized, an element is added to the queue. Otherwise, the packet is dropped from the system. 4.2 Control Policies and Objective Every sensor employs a control policy that selects an action based on its queue length each time a triggering event occurs.
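The buffer-dependent action choice above can be sketched as follows. The transmit/receive probabilities stand in for the θ-parameterized policy described in Section 4.2, and the buffer size is an arbitrary illustrative value:

```python
import random

random.seed(2)

X_MAX = 20   # buffer size (illustrative value)

def choose_action(queue_len, theta1, theta2, rng=random):
    """Sketch of the per-sensor decision at a triggering event: transmit
    with probability theta1 when the queue is non-empty; otherwise attempt
    reception with probability theta2 when the queue is not full; otherwise
    sleep. A full buffer can never receive; an empty buffer can never
    transmit, matching the restricted option sets described above."""
    if queue_len > 0 and rng.random() < theta1:
        return "transmit"
    if queue_len < X_MAX and rng.random() < theta2:
        return "receive"
    return "sleep"

actions = [choose_action(5, 0.5, 0.5) for _ in range(100)]
```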
The action is maintained until occurrence of the next triggering event. The ith sensor’s control policy is parameterized by a vector θi ∈ R2. Given θi, at an event time, if the ith sensor has a non-empty queue, it chooses to transmit with probability θi1. If the ith sensor does not transmit and its queue is not full, it chooses to receive with probability θi2. If the sensor does not transmit or receive, then it sleeps. In order to satisfy Assumption 2, we constrain θi1 and θi2 to lie in an interval [θℓ, θh], where 0 < θℓ < θh < 1. Assume that each sensor has a finite power supply. In order to guarantee a minimum lifespan for the network, we will require that each sensor sleeps at least a fraction fs of the time. This is enforced by considering a time window of length Ts. If, at any given time, a sensor has not slept for a total fraction of at least fs of the preceding time Ts, it is forced to sleep and hence not allowed to transmit or receive. The objective is to minimize a weighted sum of the average delay and average number of dropped packets per unit of time. Delay can be thought of as the amount of time a packet spends in the network before arriving at the base station. Hence, the objective is: maxθ1,...,θn lim supK→∞ −(1/K) ΣK−1k=0 (1/n) Σni=1 (xi(k) + ξDi(k)), where Di(k) is the number of packets dropped by sensor i at time k, and ξ is a weight reflecting the relative importance of delay and dropped packets. 5 Distributed Optimization Protocol We now describe a simple protocol by which components of the network can communicate rewards, in a fashion that satisfies the requirements of Theorem 1 and hence will produce good gradient estimates. This protocol communicates the rewards across the network over time using a distributed averaging procedure. In order to motivate our protocol, consider a different problem. Imagine each component i in the network is given a real value Ri.
Our goal is to design an asynchronous distributed protocol through which each node will obtain the average R = Σni=1 Ri/n. To do this, define the vector Y(0) ∈ Rn by Yi(0) = Ri for all i. For each edge (i, j), define a function Q(i,j) : Rn → Rn by Q(i,j)ℓ(Y) = (Yi + Yj)/2 if ℓ ∈ {i, j}, and Q(i,j)ℓ(Y) = Yℓ otherwise. At each time k, choose an edge (i, j), and set Y(k + 1) = Q(i,j)(Y(k)). If the graph is connected and every edge is sampled infinitely often, then limk→∞ Y(k) = Y, where Yi = R. To see this, note that the operators Q(i,j) preserve the average value of the vector, hence Σni=1 Yi(k)/n = R. Further, for any k, either Y(k+1) = Y(k) or ∥Y(k+1) − Y∥ < ∥Y(k) − Y∥. Further, Y is the unique vector with average value R that is a fixed point for all operators Q(i,j). Hence, as long as the graph is connected and each edge is sampled infinitely often, Yi(k) → R as k → ∞ and the components agree on the common average R. In the context of our distributed optimization protocol, we will assume that each component i maintains a scalar value Yi(k) at time k representing an estimate of the global reward. We will define a structure by which components communicate. Define E to be the set of edges along which communication can occur. For an ordered set of distinct edges S = ((i1, j1), . . . , (i|S|, j|S|)), define a set WS ⊂ W. Let σ(E) be the set of all possible ordered sets of disjoint edges S, including the empty set. We will assume that the sets {WS | S ∈ σ(E)} are disjoint and together form a partition of W. If w(k) ∈ WS, for some set S, we will assume that the components along the edges in S communicate in the order specified by S. Define QS = Q(i|S|,j|S|) · · · Q(i1,j1), where the terms in the product are taken over the order specified by S. Define R(k) = (r1(k), . . . , rn(k)) to be the vector of rewards occurring at time k. The update rule for the vector Y(k) is given by Y(k + 1) = R(k + 1) + αQS(k+1)Y(k), where QS(k+1) = ΣS∈σ(E) 1{w(k+1)∈WS} QS. Let ˆE = {(i, j) | (i, j) ∈ S for some S ∈ σ(E) with WS ̸= ∅}.
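The pairwise-averaging argument above can be checked numerically. A minimal sketch with toy values on a connected line graph, where round-robin edge selection stands in for sampling every edge infinitely often:

```python
# Repeatedly applying the operators Q^{(i,j)} along edges of a connected
# graph drives every Y_i to the global average R-bar, while each update
# preserves the sum of the vector. Values and graph are illustrative.
R = [1.0, 5.0, 3.0, 7.0]             # local values R_i
Y = list(R)                          # Y(0) = R
edges = [(0, 1), (1, 2), (2, 3)]     # a connected line graph

for _ in range(200):                 # 200 round-robin sweeps
    for i, j in edges:
        Y[i] = Y[j] = (Y[i] + Y[j]) / 2.0   # apply Q^{(i,j)}

avg = sum(R) / len(R)                # the common limit R-bar
```

Each application of Q(i,j) leaves the sum unchanged and strictly shrinks the disagreement unless the two endpoints already agree, which is exactly the contraction argument in the text.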
We will make the following assumption. Assumption 4. The graph (V, ˆE) is connected. Since the process (w(k), a(k)) is aperiodic and irreducible (Assumption 1), this assumption guarantees that every edge on a connected subset of edges is sampled infinitely often. Policy parameters are updated at each component according to the rule: (4) θi(k + 1) = θi(k) + ϵ zβi(k)(1 − α)Yi(k). In relation to equations (1) and (3), we have (5) dαji(ℓ, k) = n(1 − α)αk−ℓ [ˆQ(ℓ, k)]ij, where ˆQ(ℓ, k) = QS(k−1) · · · QS(ℓ). The following theorem, which relies on a general stochastic approximation result from [6] together with custom analysis available in our appendix [9], establishes the convergence of the distributed stochastic iteration method defined by (4). Theorem 2. For each ϵ > 0, define {θϵ(k) | k = 0, 1, . . .} as the result of the stochastic approximation iteration (4) with the fixed value of ϵ. Assume the set {θϵ(k) | k, ϵ} is bounded. Define the continuous time interpolation ¯θϵ(t) by setting ¯θϵ(t) = θϵ(k) for t ∈ [kϵ, kϵ+ϵ). Then, for any sequence of processes {¯θϵ(t) | ϵ → 0} there exists a subsequence that weakly converges to ¯θ(t) as ϵ → 0, where ¯θ(t) is a solution to the ordinary differential equation (6) ˙¯θ(t) = ∇αβθ λ(¯θ(t)). Further, define L to be the set of limit points of (6), and for a δ > 0, Nδ(L) to be a neighborhood of radius δ about L. The fraction of time that ¯θϵ(t) spends in Nδ(L) over the time interval [0, T] goes to 1 in probability as ϵ → 0 and T → ∞. Note that since we are using a constant step-size ϵ, this type of weak convergence is the strongest one would expect. The parameters will typically oscillate in the neighborhood of a limit point, and only weak convergence to a distribution centered around a limit point can be established. An alternative would be to use a decreasing step size ϵ(k) → 0 in (4). In such instances, probability 1 convergence to a local optimum can often be established.
However, with decreasing step sizes, the adaptation of parameters becomes very slow as ϵ(k) decays. We expect our protocol to be used in an online fashion, where it is ideal to be adaptive to long-term changes in network topology or dynamics of the environment. Hence, the constant step size case is more appropriate as it provides such adaptivity. Also, a boundedness requirement on the iterates in Theorem 2 is necessary for the mathematical analysis of convergence. In practical numerical implementations, choices of the policy parameters θi would be constrained to bounded sets Hi ⊂ RNi. In such an implementation, the iteration (4) would be replaced with an iteration projected onto the set Hi. The conclusions of Theorem 2 would continue to hold, but with the ODE (6) replaced with an appropriate projected ODE. See [6] for further discussion. 5.1 Relation to the Example In the example of Section 4, one approach to implementing our distributed optimization protocol involves passing messages associated with the optimization protocol alongside normal network traffic, as we will now explain. Each sensor should maintain and update two vectors: a parameter vector θi(k) ∈ R2 and an eligibility vector zβi(k). If a triggering event occurs at sensor i at time k, the eligibility vector is updated according to zβi(k) = β zβi(k − 1) + ∇θi πiθi(k)(ai(k)|w(k)) / πiθi(k)(ai(k)|w(k)). Otherwise, zβi(k) = β zβi(k − 1). Furthermore, each sensor maintains an estimate Yi(k) of the global reward. At each time k, each sensor i observes a reward (negative cost) of ri(k) = −xi(k) − ξDi(k). If two neighboring sensors are both not asleep at a time k, they communicate their global reward estimates from the previous time. If the ith sensor is not involved in a reward communication event at that time, its global reward estimate is updated according to Yi(k) = αYi(k − 1) + ri(k).
On the other hand, at any time k that there is a communication event, its global reward estimate is updated according to Yi(k) = ri(k) + α(Yi(k − 1) + Yj(k − 1))/2, where j is the index of the sensor with which communication occurs. If communication occurs with multiple neighbors, the corresponding global reward estimates are averaged pairwise in an arbitrary order. Clearly this update process can be modeled in terms of the sets WS introduced in the previous section. In this context, the edge set ˆE contains an edge for each pair of neighbors in the sensor network, where the neighborhood relations are captured by N, as introduced in Section 4. To optimize performance over time, each sensor would update its parameter values according to our stochastic approximation iteration (4). To highlight the simplicity of this protocol, note that each sensor need only maintain and update a few numerical values. Furthermore, the only communication required by the optimization protocol is that an extra scalar numerical value be transmitted and an extra scalar numerical value be received during the reception or transmission of any packet. As a numerical example, consider the network topology in Figure 1. Here, at every time step, an observation arrives at a sensor with a 0.02 probability, and each sensor maintains a queue of up to 20 observations. Policy parameters θi1 and θi2 for each sensor i are constrained to lie in the interval [0.05, 0.95]. (Note that for this set of parameters, the chance of a buffer overflow is very small, and hence did not occur in our simulations.) A baseline policy is defined by having leaf nodes transmit with maximum probability, and interior nodes splitting their time roughly evenly between transmission and reception, when not forced to sleep by the power constraint. Applying our decentralized optimization method to this example, it is clear in Figure 2 that the performance of the network is quickly and dramatically improved.
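Combining the discounted reward accumulation with pairwise mixing gives the flavour of the full protocol. In the sketch below (toy constant rewards, one random communication event per step, all values illustrative), each node's rescaled estimate (1 − α)Yi settles near the global average reward r̄:

```python
import random

random.seed(3)

alpha = 0.9
r = [0.1, 0.2, 0.3, 0.4]           # constant local rewards r_i (toy values)
rbar = sum(r) / len(r)             # global average reward
n = len(r)
Y = [0.0] * n                      # per-node global-reward estimates
edges = [(0, 1), (1, 2), (2, 3)]   # connected line graph

for _ in range(300):
    i, j = random.choice(edges)    # one pairwise communication event
    Y[i] = Y[j] = (Y[i] + Y[j]) / 2.0
    # Y(k+1) = R + alpha * (mixed Y(k)), the discounted accumulation step
    Y = [r[m] + alpha * Y[m] for m in range(n)]

est = [(1 - alpha) * y for y in Y]  # rescaled estimates of r-bar
```

Because the pairwise averaging preserves the mean of Y, the average of the rescaled estimates converges to r̄ exactly, and the mixing keeps the per-node spread well below the raw spread of the local rewards.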
Over time, the algorithm converges to the neighborhood of a local optimum as expected. Further, the algorithm achieves qualitatively similar performance to gradient optimization using the centralized OLPOMDP method of [3, 4, 1]; hence decentralization comes at no cost. 6 Remarks and Further Issues We are encouraged by the simplicity and scalability of the distributed optimization protocol we have presented. We believe that this protocol represents both an interesting direction for practical applications involving networks of electronic devices and a significant step in the policy gradient literature. However, there is an important outstanding issue that needs to be addressed to assess the potential of this approach: whether or not parameters can be adapted fast enough for this protocol to be useful in applications. There are two dimensions to this issue: (1) variance of gradient estimates and (2) convergence rate of the underlying ODE. Both should be explored through experimentation with models that capture practical contexts. Also, there is room for research that explores how variance can be reduced and the convergence rate of the ODE can be accelerated.

Figure 1: Example network topology. Figure 2: Convergence of method (long-term average reward versus iteration for the centralized OLPOMDP method, the decentralized protocol, and the baseline policy).

Acknowledgements The authors thank Abbas El Gamal, Abtin Keshavarzian, Balaji Prabhakar, and Elif Uysal for stimulating conversations on sensor network models and applications. This research was supported by NSF CAREER Grant ECS-9985229 and by the ONR under grant MURI N00014-00-1-0637. The first author was also supported by a Benchmark Stanford Graduate Fellowship. References [1] P. L. Bartlett and J. Baxter. Stochastic Optimization of Controlled Markov Decision Processes. In IEEE Conference on Decision and Control, pages 124–129, 2000. [2] P. L. Bartlett and J. Baxter.
Estimation and Approximation Bounds for Gradient-Based Reinforcement Learning. Journal of Computer and System Sciences, 64:133–150, 2002. [3] J. Baxter and P. L. Bartlett. Infinite-Horizon Gradient-Based Policy Search. Journal of Artificial Intelligence Research, 15:319–350, 2001. [4] J. Baxter, P. L. Bartlett, and L. Weaver. Infinite-Horizon Gradient-Based Policy Search: II. Gradient Ascent Algorithms and Experiments. Journal of Artificial Intelligence Research, 15:351–381, 2001. [5] T. Jaakkola, S. P. Singh, and M. I. Jordan. Reinforcement Learning Algorithms for Partially Observable Markov Decision Problems. In Advances in Neural Information Processing Systems 7, pages 345–352, 1995. [6] H. J. Kushner and G. Yin. Stochastic Approximation Algorithms and Applications. Springer-Verlag, New York, NY, 1997. [7] P. Marbach, O. Mihatsch, and J. N. Tsitsiklis. Call Admission Control and Routing in Integrated Service Networks. In IEEE Conference on Decision and Control, 1998. [8] P. Marbach and J. N. Tsitsiklis. Simulation-Based Optimization of Markov Reward Processes. IEEE Transactions on Automatic Control, 46(2):191–209, 2001. [9] C. C. Moallemi and B. Van Roy. Appendix to NIPS Submission. URL: http://www.moallemi.com/ciamac/papers/nips-2003-appendix.pdf, 2003.
2003
Application of SVMs for Colour Classification and Collision Detection with AIBO Robots Michael J. Quinlan, Stephan K. Chalup and Richard H. Middleton∗ School of Electrical Engineering & Computer Science The University of Newcastle, Callaghan 2308, Australia {mquinlan,chalup,rick}@eecs.newcastle.edu.au Abstract This article addresses the issues of colour classification and collision detection as they occur in the legged league robot soccer environment of RoboCup. We show how the method of one-class classification with support vector machines (SVMs) can be applied to solve these tasks satisfactorily using the limited hardware capacity of the prescribed Sony AIBO quadruped robots. The experimental evaluation shows an improvement over our previous methods of ellipse fitting for colour classification and the statistical approach used for collision detection. 1 Introduction Autonomous agents offer a wide range of possibilities to apply and test machine learning algorithms, for example in vision, locomotion, and localisation. However, training-time requirements of sophisticated machine learning algorithms can overstrain the hardware of real world robots. Consequently, in most cases, ad hoc methods, hard coding of expert knowledge, and hand-tuning of parameters, or similar approaches were preferred over the use of learning algorithms on the robot. Application of the latter was often restricted to simulations which sometimes could support training or tuning of the real world robot parameters. However, often the gap between simulation and the real world was too wide so that a transfer of training results from the simulated to the real robot turned out to be useless. A few years ago it may have been regarded as infeasible to consider the use of support vector machines [1, 2, 3] on real world robots with restricted processing capabilities. 
∗ http://www.robots.newcastle.edu.au
During the first years after their invention, support vector machines had the reputation of being more a theoretical concept than a method which could be efficiently applied in real world situations. One of the main reasons for this was the complexity of the quadratic programming part. In recent years it has become possible to speed up optimisations for SVMs in various ways [4]. SVMs have since been successfully applied to many tasks, but primarily in the areas of data mining and pattern classification. With the present study we explore the feasibility and usefulness of one-class SVM classification [5] for tasks faced by AIBO robots within the legged league environment of RoboCup [6]. We focus on two particularly critical issues: detection of objects based on correct colour classification, and detection of robot-to-robot collisions. Both issues seemed not to be sufficiently solved and implemented by the teams of RoboCup 2002 and caused significant deterioration in the quality of play even in the world-best teams of that league. The article has five more sections, addressing the environment and tasks and the methods, followed by the experiments and applications for colour classification and collision detection, respectively. The article concludes with a summary. 2 Environment and tasks The restricted real world environment and the uniformly prescribed hardware of the legged league [6] of RoboCup provide a good compromise for testing machine learning algorithms on autonomous agents with a view towards possible applications in more general real world environments. A soccer team in the legged league consists of four robots, including one goal keeper. Each team is identified by robots wearing either a red or blue coloured ‘uniform’. The soccer matches take place on a green enclosed carpeted field with white boundaries. Two goals, a blue and a yellow, are positioned on opposite ends of the field.
To aid localisation, six beacons are placed regularly around the field, each uniquely identifiable by a specific colour pattern. The ball used is orange plastic and of a suitable size to be easily moved around by the robots. The games consist of two ten-minute halves under strict rules imposed by independent referees. The legged league of RoboCup 2003 prescribed the use of Sony AIBO entertainment robots, models ERS-210 or the newer ERS-210A. Both have an internal 64-bit RISC processor with clock speeds of 192MHz and 384MHz, respectively. The robots are programmed in a C++ software environment using Sony’s OPEN-R software development kit [7]. They have 16MB of memory accessible by user programs. The dimensions of the robot (width × height × length) are 154 mm × 266 mm × 274 mm (not including the tail and ears) and the mass is approximately 1.4 kg. The AIBO has 20 degrees of freedom (DOF): neck 3DOF (pan, tilt, and roll), ear 1DOF x 2, chin 1DOF, legs 3DOF (abductor, rotator, knee) x 4 and tail 2DOF (up-down, left-right). Among other sensors the AIBO has a 1/6 inch colour CMOS camera capable of 25 frames per second. The images are gathered at a resolution of 352(H) × 288(V) but middleware restricts the available resolution to a maximum of 176(H) × 144(V). The lens has an aperture of 2.0 and a focal length of 2.18 mm. Additionally, the camera has a field of view of 23.9◦ up and down and 28.8◦ left and right. To help achieve results in different lighting conditions the camera allows the modification of parameters: white balance, shutter speed, and gain. 2.1 Colour classification task The vision system for most teams consists of four main tasks: Colour Classification, Run Length Encoding, Blob Formation and Object Recognition (Figure 1). The classification process takes the image from the camera in a YUV bitmap format [8]. Each pixel in the image is assigned a colour label (i.e. ball orange, beacon pink etc.) based on its YUV values.
A lookup table (LUT) is used to determine which YUV values correspond to which colour labels. The critical point is the initial generation of the LUT. Since the robot is extremely reliant on colour for object detection, a new LUT has to be generated with any change in lighting conditions. Currently this is a manual task which requires a human to take hundreds of images and assign a colour label on a pixel-by-pixel basis. Using this method each LUT can take hours to create, yet it will still contain holes and classification errors.

Figure 1: Vision System of the NUbots Legged League Team [9]: the pipeline runs from the CMOS camera (176×144 pixels, 3×8-bit YUV) through a 3×6-bit YUV colour lookup table, run length encoding and blob formation, to object recognition (producing lists of blobs and objects).

2.2 Collision detection task The goal is to detect collisions using the limited sensors provided by the AIBO robot. The camera and infrared distance sensor on the AIBO don’t provide enough support in avoiding obstacles unless the speed of the robot is dramatically decreased. For these reasons we have chosen to use information obtained from the joint sensors (i.e. the angle of the joint) as the input to our collision detection system [10]. 3 One-class SVM classification method An approach to one-class SVM classification was proposed by Schölkopf et al. [5]. Their strategy is to map data into the feature space corresponding to the kernel function and to separate them from the origin with maximum margin. This implies the construction of a hyperplane such that w · Φ(xi) − p ≥ 0. The result is a function f that returns the value +1 in the region containing most of the data points and -1 elsewhere.
Assuming the use of an RBF kernel and i, j ∈ {1, . . . , ℓ}, we are presented with the dual problem: (1) minα (1/2) Σi,j αi αj k(xi, xj) subject to 0 ≤ αi ≤ 1/(νℓ), Σi αi = 1. The offset p can be found from the fact that for any αi strictly inside its bounds, the corresponding pattern xi satisfies p = Σj αj k(xj, xi). The resulting decision function f (the support of the distribution) is f(x) = sign(Σi αi k(xi, x) − p). An implementation of this approach is available in the LIBSVM library [11]. It solves a scaled version of (1): minα (1/2) Σi,j αi αj k(xi, xj) subject to 0 ≤ αi ≤ 1, Σi αi = νℓ. For our applications we use an RBF kernel with parameter γ in the form k(x, y) = e−γ∥x−y∥². The parameter ν approximates the fraction of outliers and support vectors [5]. 3.1 Method for colour classification The classification functions we seek take data that has been manually clustered to produce sets Xk = {xki ∈ R3; i = 1, . . . , Nk} of colour space data for each object colour k. Each Xk corresponds to sets of colour values in the YUV space corresponding to one of the known colour labels. An individual one-class SVM is created for each colour, with Xk being used as the training data (each element in the set is scaled between -1 and 1). By training with an extremely low ν and a large γ, the boundary formed by the decision function approximates the region that contains the majority (1 − ν) of the points in Xk. In addition the SVM has the advantage of simultaneously removing the outliers that occur during manual classification. The new colour set is constructed by attempting to classify every point in the YUV space (64³ elements). All points that return a value of +1 are inside the region and therefore deemed to be of colour k. One-class SVM was chosen because it allows us to optimally treat each individual colour. To avoid misclassification, each point in YUV space that does not strongly correspond to one of the known colours must remain classified as unknown.
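The decision function f(x) = sign(Σi αi k(xi, x) − p) can be evaluated directly once the αi are known. The sketch below skips the quadratic program entirely and uses the ν = 1 special case, where every αi sits at its bound 1/ℓ (a Parzen-window density estimate), choosing p as the smallest kernel sum over the training points so that all of them lie inside the region. It illustrates the decision rule only; it is not the LIBSVM solver, and the cluster values are toy data:

```python
import math

gamma = 2.0  # RBF parameter (illustrative)

def k(a, b):
    # RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

train = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05), (0.1, 0.1)]  # toy cluster
alpha = [1.0 / len(train)] * len(train)  # nu = 1 case: all alphas at 1/l

def kernel_sum(x):
    return sum(a * k(xi, x) for a, xi in zip(alpha, train))

# Threshold so every training point satisfies kernel_sum(x) - p >= 0.
p = min(kernel_sum(xi) for xi in train)

def f(x):
    """Decision function: +1 inside the estimated support, -1 outside."""
    return 1 if kernel_sum(x) - p >= 0 else -1
```

A point far from the cluster produces kernel values near zero, so its kernel sum falls below p and it is rejected, which is exactly the "remain classified as unknown" behaviour described above.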
In addition, the colours were originally selected because they are located in different areas of the YUV space. Because of this we can choose to treat each colour without regard to the location and shape of the other colours. For these reasons we are not interested in using a multi-class technique to form a hyperplane that provides an optimal separation between the colours. 3.2 Method for collision detection For collision detection the one-class SVM is employed as a novelty detection mechanism. In our implementation each training point is a vector containing thirteen elements. These include five walk parameters – stepFrequency, backStrideLength, turn, strafe and timeParameter – along with a sensor reading from the abductor and rotator joints on each of the four legs. Upon training, the SVM’s decision function will return +1 for all values that relate to a “normal” step, and -1 for all steps that contain a fault. Speed is of the greatest importance in the RoboCup domain. For this reason a collision detection system must attempt to minimise the generation of false positives (detecting a collision that we deemed not to have happened) while still finding a high percentage of actual collisions. Low false-positive rates are achieved by keeping the kernel parameter γ high, but this has the side effect of lowering the generalisation to the data set, which results in the need for an increased number of training points. In a real world robotic system the need for more training points greatly increases the training time and in turn the wear on the machinery. 4 Experiments and application to colour classification The SVM can be used in two situations during the colour classification procedure. The first is during the construction of a new LUT, where it can be applied to increase the speed of classification. By lowering γ while the number of training points is low, a rough estimation of the final shape can be obtained.
By continuing the manual classification and increasing γ, a closer approximation to the area containing the training data is obtained. In this manner a continually improving LUT can be constructed until it is deemed adequate. An extreme example of this application is during the set-up phase at a competition. In the past, when we arrived at a new venue all system testing was delayed until the generation of a LUT. Of critical importance is testing the locomotion engine on the new carpet, and in particular ball chasing. The task of ball chasing relies on the classification of ball orange, so a method of quickly but roughly classifying orange is valuable. By manually classifying a few images of the ball and then training the SVM with a very low γ, a sphere containing all possible values for the ball is generated. The second situation in which we use the one-class SVM is on a completed LUT. Either all colours in the table can be trained (i.e. updating an old table) or an individual colour is trained because of an initial classification error. This procedure can be performed either on the robot or on a remote computer. Empirical tests have indicated that ν = 0.025 and γ = 250 provide excellent results on a previously constructed LUT. The initial table contained 3329 entries, while after training the table contained 6989 entries. The most evident change can be seen in the classification of the colour white; see Figure 2. The LUTs were compared over 60 images, which equates to 1,520,640 individual pixel comparisons. The initial table generated 144,098 classification errors; the new LUT produced 117,652, an 18% reduction.

Figure 2: Image comparison: the left image is classified with the original LUT and the image on the right with the updated LUT. Black pixels indicate an unknown colour.
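The LUT-building step described in section 3.1 — classifying every point of the quantised YUV cube and keeping those labelled +1 — can be sketched as follows. The classifier here is a stand-in box test rather than a trained SVM, and a 16³ grid is used instead of the paper's 64³ to keep the example fast:

```python
def build_lut(classify, bins=64):
    """Build a colour LUT by classifying every point of the quantised
    YUV cube (bins^3 entries); keep only the points labelled +1."""
    lut = set()
    for y in range(bins):
        for u in range(bins):
            for v in range(bins):
                # Map bin indices to the centre of each cell in [0, 1]^3.
                point = ((y + 0.5) / bins, (u + 0.5) / bins, (v + 0.5) / bins)
                if classify(point) == 1:  # inside the learnt colour region
                    lut.add((y, u, v))
    return lut

# Stand-in classifier for illustration: accept a small box around (0.5, 0.5, 0.5).
toy = lambda p: 1 if all(abs(c - 0.5) < 0.1 for c in p) else -1
lut = build_lut(toy, bins=16)
print(len(lut))
```

In the real system `classify` would be the trained one-class SVM's decision function for one colour, and the resulting set of (y, u, v) indices is the LUT entry for that colour.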
4.1 Comparison with ellipsoid fitting

The previous method involved converting the existing LUT values from YUV to the HSI colour space [8] and fitting an ellipsoid E, which can be represented by the quadratic form:

E(x_0, Q) = \{ x ∈ ℝ³ : (x - x_0)^T Q^{-1} (x - x_0) \le 1 \} \qquad (2)

where x_0 is the centre of the ellipsoid, and the size, orientation and shape of the ellipsoid are contained in the symmetric positive definite matrix Q = Q^T ≻ 0, Q ∈ ℝ^{3×3}. Note that membership of the ellipsoid can alternatively be expressed as the linear matrix inequality (LMI):

x_i ∈ E \;\Leftrightarrow\; \begin{bmatrix} Q & (x_i - x_0) \\ (x_i - x_0)^T & 1 \end{bmatrix} \succeq 0 \qquad (3)

The LMI (3) is linear in the unknowns Q and x_0, and this therefore leads to the convex optimisation:

(Q, x_0) = \arg\min_{Q = Q^T ≻ 0,\; x_0 \,:\, (3) \text{ holds for } i = 1, \ldots, N_k} \operatorname{tr}(Q)

Note that minimising the trace of Q, tr(Q), is the same as minimising the sum of the diagonal elements of Q, which in turn is the same as minimising the sum of the squares of the lengths of the principal semi-axes of the ellipsoid. The ellipsoidal shape defined in (2) has the disadvantage of restricting the shape of possible regions in the colour space. However, it does have the advantage of a simple representation and a convex shape. Before the ellipsoid was fitted, potential outliers and duplicate points were identified and removed. The removal of outliers is important in avoiding too large a region. Duplicate points were removed since they increase computation without adding any information. For the comparison we use the initial LUT from the above example. Figure 3 shows the effects of each method on the colour white. To make the comparison with ellipsoids, the initial LUT and the LUT generated by the SVM procedure are shown in the HSI colour space.

Figure 3: Colour classification in HSI colour space: A) Points manually classified as white. B) Ellipsoid fitted to these white points. C) Result of the one-class SVM technique, ν = 0.025 and γ = 10. D) Result of the one-class SVM technique, ν = 0.025 and γ = 250.
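The membership test of eq. (2) is a single Mahalanobis-style evaluation. Below is a sketch with a hand-rolled 3×3 solve and a toy axis-aligned Q (the actual fitting of Q and x_0 is the semidefinite programme above and is not reproduced here):

```python
def solve3(A, b):
    """Solve the 3x3 system A s = b by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def in_ellipsoid(x, x0, Q):
    """Membership test (x - x0)^T Q^{-1} (x - x0) <= 1 from eq. (2)."""
    d = [a - b for a, b in zip(x, x0)]
    s = solve3(Q, d)  # s = Q^{-1} d, without forming the inverse explicitly
    return sum(di * si for di, si in zip(d, s)) <= 1.0

# Toy axis-aligned ellipsoid centred at the origin, semi-axes 2, 1 and 3.
Q = [[4.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 9.0]]
x0 = [0.0, 0.0, 0.0]
print(in_ellipsoid([1.0, 0.5, 1.0], x0, Q))
print(in_ellipsoid([3.0, 0.0, 0.0], x0, Q))
```

A point is classified as the colour in question exactly when the quadratic form evaluates to at most 1.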
It is evident that the manual classification of white is rather incomplete and contains many holes that should be classified as white. The negative effects of these holes can be seen as noise in the left image of Figure 2. Using the ellipsoid fitting method these holes are filled, but with the potential drawback of over-classification. From image B in Figure 3 it is evident that the top section and the bottom left of the ellipsoid contain no white entries, and it is therefore highly questionable that these areas should be classified as white. Images C and D in the figure show the results of our one-class SVM method. It is clear from image D that the area now classified as white is a region that tightly fits the original training set.

5 Experiments and application to collision detection

The collision detection system is designed with the aim that the entire system can be run on the robot. This means adhering to the memory and processing capabilities of the device. On the AIBO we have a maximum of 8MB of memory available for collision detection, enough for a total of 20,000 training points. This is the equivalent of 1000 steps, which equates to approximately 10 minutes of training time. The training set is generated by having the robot behave normally on the field, but with the stipulation that all collisions are avoided. The trained classifier analyses the on-line stream of joint data measurements in samples of ten consecutive data points. If more than two points in one sample are classified as -1, a collision is declared to be detected. Initial parameters of ν = 0.05 and γ = 5 were chosen, based on the assumption that a collision point would lie considerably outside the training set. The results from these parameters were less than satisfying: only the largest of collisions (i.e. physically holding multiple legs) were detected.
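The declaration rule above (windows of ten consecutive points, collision if more than two are labelled -1) is only a few lines of code; the stream below is synthetic:

```python
def collisions(labels, window=10, threshold=2):
    """Scan a stream of per-sample classifier outputs (+1 normal, -1 fault)
    in windows of `window` consecutive points; record a collision whenever
    more than `threshold` points in a window are -1."""
    hits = []
    for i in range(len(labels) - window + 1):
        if sum(1 for l in labels[i:i + window] if l == -1) > threshold:
            hits.append(i)
    return hits

# Synthetic stream: three -1 labels close together in the middle.
stream = [1] * 10 + [1, -1, 1, -1, 1, -1, 1, 1, 1, 1] + [1] * 10
print(collisions(stream))
```

Only the windows that cover all three faulty samples trigger a detection, so isolated spurious -1 labels are filtered out.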
The solution to this problem could involve increasing ν, on the grounds that the initial training set contained many outliers, and/or increasing γ to improve the tightness of the classification. Through a series of tests, all of which tended to lead to either over-classification or under-classification, parameters of ν = 0.05 and γ = 100 were settled on. In our system these parameters appear to give the best balance between minimising false positives and maximising correct detection of collisions.

5.1 Comparison with the previous statistical method

The previous method for collision detection, described in [10], involves observing a joint position that differs substantially from its expected value. In our case an empirical study found two standard deviations to be a practical measure; see Figure 4. Initially we considered a collision to have occurred if a single error was found, but further investigation showed that finding multiple errors (in most cases three) in quick succession is necessary to warrant a warning that can be acted upon by the robot's behaviour system.

Figure 4: Rear rotators for a forwards-walking boundary collision on both front legs, front right leg hitting first. The bold line shows the path of a collided motion. The dotted line represents the mean "normal" path of the joint (that is, during unobstructed motion), with the error bars indicating two standard deviations above and below.

One drawback of this method is that it relied on domain knowledge to arrive at two standard deviations. In addition it required considerable storage space to hold the table of means and standard deviations for each parameter combination. The previous statistical method had the advantage of extremely low computational expense; in fact it was a table look-up. The trade-off is increased space: this method required the allocation of approximately 6MB of memory during both the training and detection stages.
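The statistical check being compared against can be sketched as follows; the readings, means and deviations are invented for illustration:

```python
def flag_errors(readings, means, stds, k=2.0):
    """Flag a joint reading as an error when it lies more than k standard
    deviations from its expected ("normal") value."""
    return [abs(r - m) > k * s for r, m, s in zip(readings, means, stds)]

def collision(error_flags, needed=3):
    """Declare a collision only after `needed` errors in quick succession."""
    run = 0
    for e in error_flags:
        run = run + 1 if e else 0
        if run >= needed:
            return True
    return False

# Invented data: joint expected at 0.1 +/- 0.05, three large deviations in a row.
flags = flag_errors([0.10, 0.90, 0.95, 1.10],
                    [0.10, 0.10, 0.10, 0.10],
                    [0.05, 0.05, 0.05, 0.05])
print(collision(flags))
```

The per-step check is a table look-up of (mean, std) for the current walk-parameter combination, which is what drives the ~6MB storage cost mentioned above.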
Conversely, the SVM approach requires only about 1MB of memory during the detection phase, but this comes at the cost of increased computation. Since the SVM approach was capable of running without reducing the frame rate, the extra memory could now be used for other applications. With respect to accuracy, the SVM approach slightly outperformed the original statistical method for particular types of steps, including the common steps associated with chasing the ball. Other step types, such as an aggressive turn, did not show the same improvement. This is because the movement of the joints in some motions is more inconsistent, making accurate classification harder. A possible solution may involve using multiple SVMs associated with different combinations of walk parameters, allowing the tuning of parameters on a specific basis. This solution would have the downside of requiring more memory.

6 Summary

The method of one-class classification with SVMs was successfully applied to the tasks of colour classification and collision detection within the restricted memory and processing power of the AIBO hardware. It was possible to run the SVM algorithm implemented in the C++ libraries of LIBSVM both off and on the robot. In a comparison with previously used methods the SVM-based methods generated better results, and in the case of colour classification the SVM approach was more efficient and convenient.

Acknowledgments

We would like to thank William McMahan and Jared Bunting for their work on the previous vision classification method and Craig Murch for his extensive contributions to both the vision and locomotion systems. Michael J. Quinlan was supported by a University of Newcastle Postgraduate Research Scholarship.

References

[1] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D.
Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144–152, Pittsburgh, PA, July 1992. ACM Press. [2] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995. [3] V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995. [4] Bernhard Sch¨olkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. The MIT Press, 2002. [5] B. Sch¨olkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13:1443–1471, 2001. [6] RoboCup Legged League web site. http://www.openr.org/robocup/index.html. [7] OPEN-R SDK. http://openr.aibo.com. [8] Linda G. Shapiro and George C. Stockman. Computer Vision. Prentice Hall, 2001. [9] J. Bunting, S. Chalup, M. Freeston, W. McMahan, R. Middleton, C. Murch, M. Quinlan, C. Seysener, and G. Shanks. Return of the NUbots! The 2003 NUbots Team Report, 2003. http://robots.newcastle.edu.au/publications/NUbotFinalReport2003.pdf. [10] Michael J. Quinlan, Craig L. Murch, Richard H. Middleton, and Stephan K. Chalup. Traction monitoring for collision detection with legged robots. In RoboCup 2003 Symposium, 2003. [11] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
2003
Warped Gaussian Processes Edward Snelson∗ Carl Edward Rasmussen† Zoubin Ghahramani∗ ∗Gatsby Computational Neuroscience Unit University College London 17 Queen Square, London WC1N 3AR, UK {snelson,zoubin}@gatsby.ucl.ac.uk †Max Planck Institute for Biological Cybernetics Spemann Straße 38, 72076 T¨ubingen, Germany carl@tuebingen.mpg.de Abstract We generalise the Gaussian process (GP) framework for regression by learning a nonlinear transformation of the GP outputs. This allows for non-Gaussian processes and non-Gaussian noise. The learning algorithm chooses a nonlinear transformation such that transformed data is well-modelled by a GP. This can be seen as including a preprocessing transformation as an integral part of the probabilistic modelling problem, rather than as an ad-hoc step. We demonstrate on several real regression problems that learning the transformation can lead to significantly better performance than using a regular GP, or a GP with a fixed transformation. 1 Introduction A Gaussian process (GP) is an extremely concise and simple way of placing a prior on functions. Once this is done, GPs can be used as the basis for nonlinear nonparametric regression and classification, showing excellent performance on a wide variety of datasets [1, 2, 3]. Importantly they allow full Bayesian predictive distributions to be obtained, rather than merely point predictions. However, in their simplest form GPs are limited by the nature of their simplicity: they assume the target data to be distributed as a multivariate Gaussian, with Gaussian noise on the individual points. This simplicity enables predictions to be made easily using matrix manipulations, and of course the predictive distributions are Gaussian also. Often it is unreasonable to assume that, in the form the data is obtained, the noise will be Gaussian, and the data well modelled as a GP. 
For example, the observations may be positive quantities varying over many orders of magnitude, where it makes little sense to model these quantities directly assuming homoscedastic Gaussian noise. In these situations it is standard practice in the statistics literature to take the log of the data. Modelling then proceeds assuming that this transformed data has Gaussian noise and will be better modelled by the GP. The log is just one particular transformation that could be done; there is a continuum of transformations that could be applied to the observation space to bring the data into a form well modelled by a GP. Making such a transformation should really be a full part of the probabilistic modelling; it seems strange to first make an ad-hoc transformation, and then use a principled Bayesian probabilistic model. In this paper we show how such a transformation or ‘warping’ of the observation space can be made entirely automatically, fully encompassed into the probabilistic framework of the GP. The warped GP makes a transformation from a latent space to the observation space, such that the data is best modelled by a GP in the latent space. It can also be viewed as a generalisation of the GP, since in observation space it is a non-Gaussian process, with non-Gaussian and asymmetric noise in general. It is not however just a GP with a non-Gaussian noise model; see section 6 for further discussion. For an excellent review of Gaussian processes for regression and classification see [4]. We follow the notation there throughout this paper and present a brief summary of GP regression in section 2. We show in sections 4 and 5, with both toy and real data, that the warped GP can significantly improve predictive performance over a variety of measures, especially with regard to the whole predictive distribution, rather than just a single point prediction such as the mean or median. The transformation found also gives insight into the properties of the data.
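The log transform discussed above is the simplest fixed warping. The Jacobian factor |dz/dt| = 1/t is what turns a Gaussian density over the latent z = log t into a properly normalised density over the raw observation t; a small numerical sanity check, with placeholder mean and standard deviation:

```python
import math

def density_t(t, z_mean, z_std):
    """Density over a positive observation t induced by a Gaussian over
    z = log(t); the Jacobian |dz/dt| = 1/t keeps it normalised."""
    z = math.log(t)
    pz = (math.exp(-0.5 * ((z - z_mean) / z_std) ** 2)
          / (z_std * math.sqrt(2 * math.pi)))
    return pz / t

# Crude Riemann check that the induced (log-normal) density integrates to 1.
total = sum(density_t(0.001 + i * 0.01, 0.0, 0.5) * 0.01 for i in range(3000))
print(total)
```

The warped GP replaces the fixed log here with a learnt monotonic function, carrying exactly the same Jacobian bookkeeping.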
2 Nonlinear regression with Gaussian processes

Suppose we are given a dataset D, consisting of N pairs of input vectors X_N ≡ \{x^{(n)}\}_{n=1}^{N} and real-valued targets t_N ≡ \{t_n\}_{n=1}^{N}. We wish to predict the value of an observation t_{N+1} given a new input vector x^{(N+1)}, or rather the distribution P(t_{N+1} | x^{(N+1)}, D). We assume there is an underlying function y(x) which we are trying to model, and that the observations lie noisily around it. A GP places a prior directly on the space of functions by assuming that any finite selection of points X_N gives rise to a multivariate Gaussian distribution over the corresponding function values y_N. The covariance between the function values of y at two points x and x′ is modelled with a covariance function C(x, x′), which is usually assumed to have some simple parametric form. If the noise model is taken to be Gaussian, then the distribution over observations t_N is also Gaussian, with the entries of the covariance matrix C given by

C_{mn} = C(x^{(m)}, x^{(n)}; \Theta) + \delta_{mn} \, g(x^{(n)}; \Theta) , \qquad (1)

where Θ parameterises the covariance function, g is the noise model, and δ_{mn} is the Kronecker delta. Often the noise model is taken to be input-independent, and the covariance function is taken to be a Gaussian function of the difference in the input vectors (a stationary covariance function), although many other possibilities exist; see e.g. [5] for GPs with input-dependent noise. In this paper we consider only this popular choice, in which case the entries in the covariance matrix are given by

C_{mn} = v_1 \exp\left[ -\frac{1}{2} \sum_{d=1}^{D} \left( \frac{x_d^{(m)} - x_d^{(n)}}{r_d} \right)^2 \right] + v_0 \, \delta_{mn} . \qquad (2)

Here r_d is a width parameter expressing the scale over which typical functions vary in the d-th dimension, v_1 is a size parameter expressing the typical size of the overall process in y-space, v_0 is the noise variance of the observations, and Θ = \{v_0, v_1, r_1, \ldots, r_D\}. It is simple to show that the predictive distribution for a new point given the observed data, P(t_{N+1} | t_N, X_{N+1}), is Gaussian.
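Equation (2) translates directly into code; a minimal sketch with toy one-dimensional inputs and placeholder hyperparameter values:

```python
import math

def cov(x1, x2, v1, r):
    """Eq. (2) without noise: v1 * exp(-0.5 * sum_d ((x1_d - x2_d) / r_d)^2)."""
    return v1 * math.exp(-0.5 * sum(((a - b) / rd) ** 2
                                    for a, b, rd in zip(x1, x2, r)))

def cov_matrix(X, v0, v1, r):
    """Full covariance matrix, with noise variance v0 added on the diagonal."""
    n = len(X)
    return [[cov(X[m], X[k], v1, r) + (v0 if m == k else 0.0)
             for k in range(n)] for m in range(n)]

# Toy 1-D inputs; v0, v1 and the length scale r are placeholder values.
X = [(0.0,), (1.0,), (2.0,)]
C = cov_matrix(X, v0=0.1, v1=1.0, r=(1.0,))
print(C[0][0], round(C[0][1], 4))
```

Nearby inputs get covariance close to v_1, distant inputs decay towards zero, and the diagonal carries the extra v_0 noise term.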
The calculation of the mean and variance of this distribution involves inverting the covariance matrix C_N of the training inputs, which using standard exact methods incurs a computational cost of order N³. Learning, or ‘training’, a GP is usually achieved by finding a local maximum of the likelihood with respect to the hyperparameters Θ of the covariance matrix, using conjugate gradient methods. The negative log likelihood is given by

L = -\log P(t_N | X_N, \Theta) = \frac{1}{2} \log \det C_N + \frac{1}{2} t_N^\top C_N^{-1} t_N + \frac{N}{2} \log 2\pi . \qquad (3)

Once again, the evaluation of L, and of its gradients with respect to Θ, involves computing the inverse covariance matrix, incurring an order N³ cost. Rather than finding an ML estimate Θ_ML, a prior over Θ can be included to find a MAP estimate Θ_MAP, or better still Θ can be numerically integrated out when computing P(t_{N+1} | x^{(N+1)}, D), using for example hybrid Monte Carlo methods [2, 6].

3 Warping the observation space

In this section we present a method of warping the observation space through a nonlinear monotonic function to a latent space, whilst retaining the full probabilistic framework so that learning and prediction take place consistently. Let us consider a vector of latent targets z_N and suppose that this vector is modelled by a GP,

-\log P(z_N | X_N, \Theta) = \frac{1}{2} \log \det C_N + \frac{1}{2} z_N^\top C_N^{-1} z_N + \frac{N}{2} \log 2\pi . \qquad (4)

Now we make a transformation from the true observation space to the latent space by mapping each observation through the same monotonic function f,

z_n = f(t_n; \Psi) \quad \forall n , \qquad (5)

where Ψ parameterises the transformation. We require f to be monotonic and to map onto the whole of the real line; otherwise probability measure will not be conserved in the transformation, and we will not induce a valid distribution over the targets t_N.
Including the Jacobian term that takes the transformation into account, the negative log likelihood, -\log P(t_N | X_N, \Theta, \Psi), now becomes:

L = \frac{1}{2} \log \det C_N + \frac{1}{2} f(t_N)^\top C_N^{-1} f(t_N) - \sum_{n=1}^{N} \log \left. \frac{\partial f(t)}{\partial t} \right|_{t_n} + \frac{N}{2} \log 2\pi . \qquad (6)

3.1 Training the warped GP

Learning in this extended model is achieved by simply taking derivatives of the negative log likelihood (6) with respect to both the Θ and Ψ parameter vectors, and using a conjugate gradient method to compute ML parameter values. In this way the form of both the covariance matrix and the nonlinear transformation are learnt simultaneously under the same probabilistic framework. Since the computational limiter of a GP is inverting the covariance matrix, adding a few extra parameters to the likelihood costs essentially nothing. All we require is that the derivatives of f are easy to compute (both with respect to t and to Ψ), and that we do not introduce so many extra parameters that we run into over-fitting. Of course a prior over both Θ and Ψ may be included to compute a MAP estimate, or the parameters may be integrated out using a hybrid Monte Carlo method.

3.2 Predictions with the warped GP

For a particular setting of the covariance function hyperparameters Θ (for example Θ_ML or Θ_MAP), the predictive distribution at a new point in latent variable space is just as for a regular GP: a Gaussian whose mean and variance are calculated as in section 2;

P(z_{N+1} | x^{(N+1)}, D, \Theta) = \mathcal{N}\big( \hat{z}_{N+1}(\Theta), \sigma^2_{N+1}(\Theta) \big) . \qquad (7)

To find the distribution in the observation space we pass this Gaussian through the nonlinear warping function, giving

P(t_{N+1} | x^{(N+1)}, D, \Theta, \Psi) = \frac{f'(t_{N+1})}{\sqrt{2\pi \sigma^2_{N+1}}} \exp\left[ -\frac{1}{2} \left( \frac{f(t_{N+1}) - \hat{z}_{N+1}}{\sigma_{N+1}} \right)^2 \right] . \qquad (8)

The shape of this distribution depends on the form of the warping function f, but in general it may be asymmetric and multimodal. If we require a point prediction, rather than the whole distribution over t_{N+1}, then the value we should predict depends on our loss function.
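To see how the Jacobian term in (6) enters, the following sketch specialises the covariance to white noise, C = vI (purely for brevity; the full model uses the dense covariance of eq. (2)), with a toy cubic warp:

```python
import math

def warped_nll(t, f, fprime, v):
    """Eq. (6) specialised to a white-noise covariance C = v*I:
    the latent GP likelihood of z_n = f(t_n) plus the Jacobian correction
    -sum_n log f'(t_n)."""
    n = len(t)
    z = [f(tn) for tn in t]
    nll = (0.5 * n * math.log(v)                       # 0.5 * log det(v*I)
           + 0.5 * sum(zn * zn for zn in z) / v        # 0.5 * z^T (v*I)^{-1} z
           + 0.5 * n * math.log(2 * math.pi))
    return nll - sum(math.log(fprime(tn)) for tn in t)

# Toy cubic warp f(t) = t^3, so raw data of the form t = z^{1/3} has Gaussian latents.
f = lambda t: t ** 3
fp = lambda t: 3 * t ** 2
print(warped_nll([1.0, 1.1], f, fp, 1.0))
```

With the identity warp (f(t) = t, f'(t) = 1) the Jacobian term vanishes and the expression reduces exactly to the standard GP likelihood (3), which is a useful sanity check.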
If our loss function is absolute error, then the median of the distribution should be predicted, whereas if our loss function is squared error, then it is the mean. For a standard GP, where the predictive distribution is Gaussian, the median and mean coincide. For the warped GP they are in general different points. The median is particularly easy to calculate:

t^{\mathrm{med}}_{N+1} = f^{-1}(\hat{z}_{N+1}) . \qquad (9)

Notice that we need to compute the inverse warping function. In general we are unlikely to have an analytical form for f^{-1}, because we have parameterised the function in the opposite direction. However, since we have access to the derivatives of f, a few iterations of Newton-Raphson with a good enough starting point are sufficient. It is often useful to give an indication of the shape and range of the distribution by giving the positions of various ‘percentiles’. For example, we may want to know the positions of ‘2σ’ either side of the median, so that we can say that approximately 95% of the density lies between these bounds. These points in observation space are calculated in exactly the same way as the median: simply pass the values through the inverse function:

t^{\mathrm{med} \pm 2\sigma}_{N+1} = f^{-1}(\hat{z}_{N+1} \pm 2\sigma_{N+1}) . \qquad (10)

To calculate the mean, we need to integrate t_{N+1} over the density (8). Rewriting this integral in latent space we get

E(t_{N+1}) = \int dz \, f^{-1}(z) \, \mathcal{N}_z(\hat{z}_{N+1}, \sigma^2_{N+1}) = E(f^{-1}) . \qquad (11)

This is a simple one-dimensional integral under a Gaussian density, so Gauss-Hermite quadrature may be used to compute it accurately with a weighted sum of a small number of evaluations of the inverse function f^{-1} at appropriate places.

3.3 Choosing a monotonic warping function

We wish to design a warping function that allows for complex transformations, but we must constrain the function to be monotonic. There are various ways to do this, an obvious one being a neural-net style sum of tanh functions,

f(t; \Psi) = \sum_{i=1}^{I} a_i \tanh\big( b_i (t + c_i) \big), \quad a_i, b_i \ge 0 \;\; \forall i , \qquad (12)

where Ψ = \{a, b, c\}.
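The inversions in eqs. (9) and (10) are cheap with Newton-Raphson, since f' is available in closed form; the warp below is a toy monotonic choice for illustration, not one learnt from data:

```python
import math

def f_inverse(z, f, fprime, t0=0.0, iters=20):
    """Invert the monotonic warping function by Newton-Raphson:
    find t such that f(t) = z, starting from t0."""
    t = t0
    for _ in range(iters):
        t -= (f(t) - z) / fprime(t)
    return t

def percentiles(z_mean, z_std, f, fprime):
    """Median and approximate 95% bounds in observation space, eqs. (9)-(10):
    pass z_mean and z_mean +/- 2*z_std through f^{-1}."""
    return tuple(f_inverse(z, f, fprime, t0=z)
                 for z in (z_mean - 2 * z_std, z_mean, z_mean + 2 * z_std))

# Toy monotonic warp f(t) = t + tanh(t); f'(t) = 1 + sech^2(t) >= 1.
f = lambda t: t + math.tanh(t)
fp = lambda t: 1 + 1 / math.cosh(t) ** 2

lo, med, hi = percentiles(1.0, 0.5, f, fp)
print(lo, med, hi)
```

Because a sum-of-tanh parameterisation such as (12) keeps f' readily available, each inversion costs only a handful of scalar function evaluations.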
Equation (12) produces a series of smooth steps, with a controlling the size of the steps, b controlling their steepness, and c their positions. Of course the number of steps I needs to be set, and that will depend on how complex a function one wants.

Figure 1: A 1D regression task. The dotted lines show the true generating distribution, the dashed lines show a GP's predictions, and the solid lines show the warped GP's predictions. (a) The triplets of lines represent the median and 2σ percentiles in each case. (b) Predictive probability densities at x = −π/4, i.e. a cross-section through (a) at the solid grey line.

The derivatives of this function with respect to either t or the warping parameters Ψ are easy to compute. In the same spirit, sums of error functions or sums of logistic functions would produce a similar series of steps, and so these could be used instead. The problem with using (12) as it stands is that it is bounded; the inverse function f^{-1}(z) does not exist for values of z outside these bounds. As explained earlier, this will not lead to a proper density in t space, because the density in z space is Gaussian, which covers the whole of the real line. We can fix this by using instead:

f(t; \Psi) = t + \sum_{i=1}^{I} a_i \tanh\big( b_i (t + c_i) \big), \quad a_i, b_i \ge 0 \;\; \forall i , \qquad (13)

which has linear trends away from the tanh steps. In doing so, we have restricted ourselves to warping functions with f′ ≥ 1, but because the size parameter v_1 of the covariance function is free to vary, the effective gradient can be made arbitrarily small by simply making the range of the data in the latent space arbitrarily big. A more flexible system of linear trends may be made by including, in addition to the neural-net style function (12), some functions of the form

\frac{1}{\beta} \log\left( e^{\beta m_1 (t - d)} + e^{\beta m_2 (t - d)} \right) , \quad m_1, m_2 \ge 0 .
This function effectively splices two straight lines of gradients m_1 and m_2 smoothly together, with a ‘curvature’ parameter β and at position d. The sign of β determines whether the join is convex or concave.

4 A simple 1D regression task

A simple 1D regression task was created to show a situation where the warped GP should, and does, perform significantly better than the standard GP. 101 points, regularly spaced from −π to π on the x axis, were generated with Gaussian noise about a sine function. These points were then warped through the function t = z^{1/3}, to arrive at the dataset t shown as the dots in Figure 1(a).

Figure 2: Warping functions learnt for the four regression tasks carried out in this paper: (a) sine, (b) creep, (c) abalone, (d) ailerons. Each plot is made over the range of the observation data, from t_min to t_max.

A GP and a warped GP were trained independently on this dataset, using a conjugate gradient minimisation procedure and randomly initialised parameters, to obtain maximum likelihood parameters. For the warped GP, the warping function (13) was used with just two tanh functions. For both models the covariance matrix (2) was used. Hybrid Monte Carlo was also implemented to integrate over all the parameters, or just the warping parameters (much faster, since no matrix inversion is required at each step), but with this dataset (and the real datasets of section 5) no significant differences from ML were found. Predictions from the GP and warped GP were made, using the ML parameters, for 401 points regularly spaced over the range of x. The predictions made were the median and 2σ percentiles in each case, and these are plotted as triplets of lines in Figure 1(a). The predictions from the warped GP are found to be much closer to the true generating distribution than those of the standard GP, especially with regard to the 2σ lines. The mean line was also computed, and found to lie close to, but slightly skewed from, the median line.
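The warping function (13) and its everywhere-≥1 derivative are straightforward to write down; the parameter values here are arbitrary placeholders rather than learnt ones:

```python
import math

def warp(t, a, b, c):
    """Eq. (13): f(t) = t + sum_i a_i tanh(b_i (t + c_i)), with a_i, b_i >= 0."""
    return t + sum(ai * math.tanh(bi * (t + ci)) for ai, bi, ci in zip(a, b, c))

def warp_grad(t, a, b, c):
    """f'(t) = 1 + sum_i a_i b_i sech^2(b_i (t + c_i)) >= 1, so f is strictly
    monotonic and maps onto the whole real line."""
    return 1.0 + sum(ai * bi / math.cosh(bi * (t + ci)) ** 2
                     for ai, bi, ci in zip(a, b, c))

# Two tanh components, as used for the 1D task; values are placeholders.
a, b, c = [1.0, 0.5], [2.0, 1.0], [0.0, -1.0]
ts = [x / 10 - 3 for x in range(61)]
vals = [warp(t, a, b, c) for t in ts]
print(all(v2 > v1 for v1, v2 in zip(vals, vals[1:])))
```

The monotonicity check holds for any non-negative a_i, b_i, which is exactly why this parameterisation can be optimised freely without invalidating the induced density.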
Figure 1(b) emphasises the point that the warped GP finds the shape of the whole predictive distribution much better, not just the median or mean. In this plot, one particular point on the x axis is chosen, x = −π/4, and the predictive densities from the GP and warped GP are plotted alongside the true density (which can be written down analytically). Note that the standard GP must necessarily predict a symmetrical Gaussian density, even when the density from which the points are generated is highly asymmetrical, as in this case. Figure 2(a) shows the warping function learnt for this regression task. The tanh functions have adjusted themselves so that they mimic a t³ nonlinearity over the range of the observation space, thus inverting the z^{1/3} transformation imposed when generating the data.

5 Results for some real datasets

It is not surprising that the method works well on the toy dataset of section 4, since it was generated from a known nonlinear warping of a smooth function with Gaussian noise. To demonstrate that nonlinear transformations also help on real data sets we have run the warped GP, comparing its predictions to an ordinary GP, on three regression problems. These datasets are summarised in the following table, which shows the range of the targets (t_min, t_max), the number of input dimensions (D), and the size of the training and test sets (N_train, N_test) that we used.

Dataset    D    t_min          t_max          N_train   N_test
creep      30   18 MPa         530 MPa        800       1266
abalone    8    1 yr           29 yrs         1000      3177
ailerons   40   −3.0 × 10⁻³    −3.5 × 10⁻⁴    1000      6154

Dataset    Model       Absolute error   Squared error   −log P(t)
creep      GP          16.4             654             4.46
           GP + log    15.6             587             4.24
           warped GP   15.0             554             4.19
abalone    GP          1.53             4.79            2.19
           GP + log    1.48             4.62            2.01
           warped GP   1.47             4.63            1.96
ailerons   GP          1.23 × 10⁻⁴      3.05 × 10⁻⁸     −7.31
           warped GP   1.18 × 10⁻⁴      2.72 × 10⁻⁸     −7.45

Table 1: Results of testing the GP, warped GP, and GP with log transform, on three real datasets. The units for absolute error and squared error are as for the original data.
The dataset creep is a materials science set, with the objective of predicting creep rupture stress (in MPa) for steel given chemical composition and other inputs [7, 8]. With abalone the aim is to predict the age of abalone from various physical inputs [9]. ailerons is a simulated control problem, with the aim of predicting the control action on the ailerons of an F16 aircraft [10, 11]. For datasets creep and abalone, which consist of positive observations only, standard practice may be to model the log of the data with a GP. So for these datasets we have compared three models: a GP directly on the data, a GP on the fixed log-transformed data, and the warped GP directly on the data. The predictive points and densities were always compared in the original data space, accounting for the Jacobian of both the log and the warped transforms. The models were run as in the 1D task: ML parameter estimates only, covariance matrix (2), and warping function (13) with three tanh functions. The results we obtain for the three datasets are shown in Table 1. We show three measures of performance over independent test sets: mean absolute error, mean squared error, and the mean negative log predictive density evaluated at the test points. This final measure was included to give some idea of how well the model predicts the entire density, not just point predictions. On these three sets, the warped GP always performs significantly better than the standard GP. For creep and abalone, the fixed log transform clearly works well too, but particularly in the case of creep, the warped GP learns a better transformation. Figure 2 shows the warping functions learnt, and indeed 2(b) and 2(c) are clearly log-like in character. On the other hand 2(d), for the ailerons set, is exponential-like. This shows the warped GP is able to flexibly handle these different types of datasets. The shapes of the learnt warping functions were also found to be very robust to random initialisation of the parameters.
Finally, the warped GP also does a better job of predicting the distributions, as shown by the difference in values of the negative log density.

6 Conclusions, extensions, and related work

We have shown that the warped GP is a useful extension to the standard GP for regression, capable of finding extra structure in the data through the transformations it learns. From another viewpoint, it allows standard preprocessing transforms, such as log, to be discovered automatically and improved on, rather than applied in an ad-hoc manner. We have demonstrated an improvement in performance over the regular GP on several datasets. Of course some datasets are well modelled by a GP already, and applying the warped GP model simply results in a linear “warping” function. It has also been found that censored datasets, i.e. where many observations at the edge of the range lie at a single point, cause the warped GP problems. The warping function attempts to model the censoring by pushing those points far away from the rest of the data, and it suffers in performance especially for ML learning. To deal with this properly a censorship model is required. As a further extension, one might consider warping the input space in some nonlinear fashion. In the context of geostatistics this has actually been dealt with by O’Hagan [12], where a transformation is made from an input space which can have non-stationary and non-isotropic covariance structure, to a latent space in which the usual conditions of stationarity and isotropy hold. Gaussian process classifiers can also be thought of as warping the outputs of a GP, through a mapping onto the (0, 1) probability interval. However, the observations in classification are discrete, not points in this warped continuous space. Therefore the likelihood is different. Diggle et al. [13] consider various other fixed nonlinear transformations of GP outputs.
It should be emphasised that the presented method can be beneficial in situations where the noise variance depends on the output value. Gaussian processes where the noise variance depends on the inputs have been examined by e.g. [5]. Forms of non-Gaussianity which do not directly depend on the output values (such as heavy tailed noise) are also not captured by the method proposed here. We propose that the current method should be used in conjunction with methods targeted directly at these other issues. The strength of the method is that it is powerful, yet very easy and computationally cheap to apply. Acknowledgements. Many thanks to David MacKay for useful discussions, suggestions of warping functions and datasets to try. CER was supported by the German Research Council (DFG) through grant RA 1030/1. References [1] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems 8. MIT Press, 1996. [2] C. E. Rasmussen. Evaluation of Gaussian Processes and Other Methods for Non-Linear Regression. PhD thesis, University of Toronto, 1996. [3] M. N. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis, Cambridge University, 1997. [4] D. J. C. MacKay. Introduction to Gaussian processes. In C. M. Bishop, editor, Neural Networks and Machine Learning, NATO ASI Series, pages 133–166. Kluwer Academic Press, 1998. [5] Paul W. Goldberg, Christopher K. I. Williams, and Christopher M. Bishop. Regression with input-dependent noise: A Gaussian process treatment. In Advances in Neural Information Processing Systems 10. MIT Press, 1998. [6] Radford M. Neal. Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. Technical Report 9702, University of Toronto, 1997. [7] Materials algorithms project (MAP) program and data library. http://www.msm.cam.ac. uk/map/entry.html. [8] D. Cole, C.
Martin-Moran, A. G. Sheard, H. K. D. H. Bhadeshia, and D. J. C. MacKay. Modelling creep rupture strength of ferritic steel welds. Science and Technology of Welding and Joining, 5:81–90, 2000. [9] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998. http: //www.ics.uci.edu/˜mlearn/MLRepository.html. [10] L. Torgo. http://www.liacc.up.pt/˜ltorgo/Regression/. [11] R. Camacho. Inducing models of human control skills. PhD thesis, University of Porto, 2000. [12] A. O’Hagan and A. M. Schmidt. Bayesian inference for nonstationary spatial covariance structure via spatial deformations. Technical Report 498/00, University of Sheffield, 2000. [13] P. J. Diggle, J. A. Tawn, and R. A. Moyeed. Model-based geostatistics. Applied Statistics, 1998.
2003
Online Learning via Global Feedback for Phrase Recognition Xavier Carreras Lluís Màrquez TALP Research Center, LSI Department Technical University of Catalonia (UPC) Campus Nord UPC, E–08034 Barcelona {carreras,lluism}@lsi.upc.es Abstract This work presents an architecture based on perceptrons to recognize phrase structures, and an online learning algorithm to train the perceptrons together and dependently. The recognition strategy applies learning in two layers: a filtering layer, which reduces the search space by identifying plausible phrase candidates, and a ranking layer, which recursively builds the optimal phrase structure. We provide a recognition-based feedback rule which conveys to each local function the errors it has committed, from a global point of view, and allows them to be trained together online as perceptrons. Experimentation on a syntactic parsing problem, the recognition of clause hierarchies, improves state-of-the-art results and evinces the advantages of our global training method over optimizing each function locally and independently. 1 Introduction Over the past few years, many machine learning methods have been successfully applied to Natural Language tasks in which phrases of some type have to be recognized. Generally, given an input sentence —as a sequence of words— the task is to predict a bracketing for the sentence representing a structure of phrases, either sequential or hierarchical. For instance, syntactic analysis of Natural Language provides several problems of this type, such as partial parsing tasks [1, 2], or even full parsing [3]. The general approach consists of decomposing the global phrase recognition problem into a number of local learnable subproblems, and inferring the global solution from the outcomes of the local subproblems. For chunking problems —in which phrases are sequentially structured— the approach is typically to perform a tagging.
In this case, local subproblems include learning whether a word opens, closes, or is inside a phrase of some type (noun phrase, verb phrase, ...), and the inference process consists of sequentially computing the optimal tag sequence which encodes the phrases, by means of dynamic programming [1, 4, 5]. When hierarchical structure has to be recognized, additional local decisions are required to determine the embedding of phrases, resulting in a more complex inference process which recursively builds the global solution [3, 2, 6, 7]. In general, a learning system for these tasks makes use of several learned functions which interact in some way to determine the structure. A usual methodology for solving the local subproblems is to use a discriminative learning algorithm to learn a classifier for each local decision [1, 2]. Each individual classifier is trained separately from the others, maximizing some local measure such as the accuracy of the local decision. However, when performing the phrase recognition task, the classifiers are used together and dependently, in the sense that one classifier's predictions may affect those of another. Indeed, the global performance of a system is measured in terms of precision and recall of the recognized phrases, which, although related, is not the local classification accuracy measure for which the local classifiers are usually trained. In this direction, recent works in the area provide alternative strategies in which the learning process is driven from the global level. The general idea consists of moving the learning strategy from the binary classification setting to a general ranking context into which the global problem can be cast. Crammer and Singer [8] present a label-ranking algorithm, in which several perceptrons receive feedback from the ranking they produce over a training instance. Har-Peled et al.
[9] study a general learning framework in which the constraints between a number of linear functions and an output prediction allow a desired label-ranking function to be learned effectively. For structured outputs, and motivating this work, Collins [10] introduces a variant of the perceptron for tagging tasks, in which the learning feedback is globally given from the output of the Viterbi decoding algorithm. In this paper we present a global learning strategy for the general task of recognizing phrases in a sentence. We adopt the general phrase recognition strategy of our previous work [6]. Given a sentence, learning is first applied at word level to identify phrase candidates of the solution. Then, learning is applied at a higher-order level in which phrase candidates are scored to discriminate among competing ones. The overall strategy infers the global solution by exploring a number of plausible solutions with learning components. As a main contribution, we propose a recognition-based feedback rule which allows the decisions in the system to be learned as perceptrons, all in one go. The learning strategy works online at sentence level. When visiting a sentence, the perceptrons are first used to recognize the set of phrases, and then updated according to the correctness of the global solution. As a result, each local function is automatically adapted to the recognition strategy. Furthermore, following [11] the final model incorporates voted prediction methods for the perceptrons and the use of kernel functions. Experimenting on the Clause Identification problem [2] we show the effectiveness of our method, evincing the benefits over local learning strategies and improving the best results for the particular task. 2 Phrase Recognition 2.1 Formalization Let x be a sentence formed by n words xi, with i ranging from 0 to n − 1, belonging to the sentence space X. Let K be a predefined set of phrase categories.
For instance, in syntactic parsing K may include noun phrases, verb phrases, prepositional phrases and clauses, among others. A phrase, denoted as (s, e)k, is the sequence of consecutive words spanning from word xs to word xe, having s ≤ e, with category k ∈ K. Let ph1 = (s1, e1)k1 and ph2 = (s2, e2)k2 be two different phrases. We define that ph1 and ph2 overlap iff s1 < s2 ≤ e1 < e2 or s2 < s1 ≤ e2 < e1, and we denote it as ph1 ∼ ph2. Also, we define that ph1 is embedded in ph2 iff s2 ≤ s1 ≤ e1 ≤ e2, and we denote it as ph1 ≺ ph2. Let P be the set of all possible phrases, expressed as P = {(s, e)k | 0 ≤ s ≤ e, k ∈ K}. A solution for a phrase recognition problem is a set y of phrases which is coherent with respect to some constraints. We consider two types of constraints: overlapping and embedding. For the problem of recognizing sequentially organized phrases, often referred to as chunking, phrases are not allowed to overlap or embed. Thus, the solution space can be formally expressed as Y = {y ⊆ P | ∀ph1, ph2 ∈ y : ph1 ≁ ph2 ∧ ph1 ⊀ ph2}. More generally, for the problem of recognizing phrases organized hierarchically, a solution is a set of phrases which do not overlap but may be embedded. Formally, the solution space is Y = {y ⊆ P | ∀ph1, ph2 ∈ y : ph1 ≁ ph2}. In order to evaluate a phrase recognition system we use the standard measures for recognition tasks: precision (p) —the ratio of recognized phrases that are correct—, recall (r) —the ratio of correct phrases that are recognized— and their harmonic mean F1 = 2pr/(p + r). 2.2 Recognizing Phrases The mechanism to recognize phrases is described here as a function which, given a sentence x, identifies the set of phrases y of x: R : X → Y. We assume two components within this function, both being learning components of the recognizer. First, we assume a function P which, given a sentence x, identifies a set of candidate phrases, not necessarily coherent, for the sentence, P(x) ⊆ P.
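The overlap and embedding relations and the evaluation measures above translate directly into code. A minimal Python sketch (representing a phrase (s, e)k as an (s, e, k) tuple is our own choice):

```python
def overlaps(p1, p2):
    # ph1 ~ ph2  iff  s1 < s2 <= e1 < e2  or  s2 < s1 <= e2 < e1
    (s1, e1, _), (s2, e2, _) = p1, p2
    return s1 < s2 <= e1 < e2 or s2 < s1 <= e2 < e1

def embedded(p1, p2):
    # ph1 is embedded in ph2  iff  s2 <= s1 <= e1 <= e2
    (s1, e1, _), (s2, e2, _) = p1, p2
    return s2 <= s1 and e1 <= e2

def f1_score(predicted, gold):
    # precision, recall and their harmonic mean F1 = 2pr/(p+r)
    predicted, gold = set(predicted), set(gold)
    correct = len(predicted & gold)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```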
Second, we assume a score function which, given a phrase, produces a real-valued prediction indicating the plausibility of the phrase for the sentence. The phrase recognizer is a function which searches a coherent phrase set for a sentence x according to the following optimality criterion:
R(x) = arg max_{y ⊆ P(x), y ∈ Y} Σ_{(s,e)k ∈ y} score((s, e)k, x, y)   (1)
That is, among all the coherent subsets of candidate phrases, the optimal solution is defined as the one whose phrases maximize the summation of phrase scores. The function P is only used to reduce the search space of the R function. Note that the R function constructs the optimal phrase set by evaluating scores of phrase candidates, and, regarding the length of the sentence, there is a quadratic number of possible phrases, that is, the set P. Thus, considering all phrases in P straightforwardly would result in a very expensive exploration. The function P is intended to filter out phrase candidates from P by applying decisions at word level. A simple setting for this function is a start-end classification for each phrase type: each word of the sentence is tested as k-start —if it is likely to start phrases of type k— and as k-end —if it is likely to end phrases of type k. Each k-start word xs with each k-end word xe, having s ≤ e, forms a phrase candidate (s, e)k. Assuming start and end binary classification functions, hkS and hkE, for each type k ∈ K, the filtering function is expressed as: P(x) = {(s, e)k ∈ P | hkS(xs) = +1 ∧ hkE(xe) = +1}. Alternatives to this setting may be to consider a single pair of start-end classifiers, independent of phrase types, or to perform a different tagging for identifying phrases, such as the well-known begin-inside classification. In general, each classifier will be applied to each word in the sentence, and deciding the best strategy for identifying phrase candidates will depend on the sparseness of phrases in a sentence, the length of phrases and the number of categories.
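A minimal sketch of the filtering function P(x), assuming the start and end classifiers hkS, hkE are given as boolean predicates over words (names and interface are hypothetical):

```python
def candidate_phrases(words, starts, ends, categories):
    """Filtering layer P(x): pair every k-start word with every
    later-or-equal k-end word to form the candidates (s, e)^k.
    starts[k] / ends[k] are per-category boolean word classifiers."""
    cands = []
    for k in categories:
        start_pos = [i for i, w in enumerate(words) if starts[k](w)]
        end_pos = [i for i, w in enumerate(words) if ends[k](w)]
        cands.extend((s, e, k) for s in start_pos for e in end_pos if s <= e)
    return cands
```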
Once the phrase candidates are identified, the optimal coherent phrase set is selected according to (1). Due to its nature, there is no need to explicitly enumerate each possible coherent phrase set, which would result in an exponential exploration. Instead, by guiding the exploration through the problem constraints and using dynamic programming, the optimal coherent phrase set can be found in polynomial time over the sentence length. For chunking problems, the solution can be found in quadratic time by performing a Viterbi-style exploration from left to right [4]. When embedding of phrases is allowed, a cubic-time bottom-up exploration is required [6]. As noted above, in either case there will be the additional cost of applying a quadratic number of decisions for scoring phrases. Summarizing, the phrase recognition system is performed in two layers: the identification layer, which filters out phrase candidates in linear time, and the scoring layer, which selects the optimal phrase chunking in quadratic or cubic time. 3 Additive Online Learning via Recognition Feedback In this section we describe an online learning strategy for training the learning components of the Phrase Recognizer, namely the start-end classifiers in P and the score function. The learning challenge consists of approximating the functions so as to maximize the global F1 measure on the problem, taking into account that the functions interact. In particular, the start-end functions define the actual input space of the score function. Each function is implemented using a linear separator, hw : Rⁿ → R, operating in a feature space defined by a feature representation function, φ : X → Rⁿ, for some instance space X. The function P consists of two classifiers per phrase type: the start classifier (hkS) and the end classifier (hkE).
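For the chunking case, the Viterbi-style dynamic program can be sketched as follows (a simplified weighted-interval formulation of our own; helper names are hypothetical):

```python
def best_chunking(n_words, candidates, score):
    """DP for the chunking case (no overlap, no embedding): best[i] is the
    maximum total score over coherent phrase sets within words 0..i-1.
    A candidate is an (s, e, k) triple; score maps a candidate to a real."""
    by_end = {}
    for ph in candidates:
        by_end.setdefault(ph[1], []).append(ph)
    best = [0.0] * (n_words + 1)
    back = [None] * (n_words + 1)   # phrase ending at word i-1, if any
    for i in range(1, n_words + 1):
        best[i], back[i] = best[i - 1], None   # word i-1 outside any phrase
        for ph in by_end.get(i - 1, []):
            cand = best[ph[0]] + score(ph)
            if cand > best[i]:
                best[i], back[i] = cand, ph
    y, i = [], n_words                         # trace back the optimal set
    while i > 0:
        if back[i] is None:
            i -= 1
        else:
            y.append(back[i])
            i = back[i][0]
    return best[n_words], list(reversed(y))
```

The hierarchical case replaces this left-to-right recursion with the cubic-time bottom-up exploration of [6].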
Thus, the P function is formed by a prediction vector for each classifier, noted wkS or wkE, and a unique shared representation function φw which maps a word in context into a feature vector. A prediction is computed as hkS(x) = wkS · φw(x), and similarly for hkE, and the sign is taken as the binary classification. The score function computes a real-valued score for a phrase candidate (s, e)k. We implement this function with a prediction vector wk for each type k ∈ K, and also a shared representation function φp which maps a phrase into a feature vector. The score prediction is then given by the expression: score((s, e)k, x, y) = wk · φp((s, e)k, x, y). 3.1 The FR-Perceptron Learning Algorithm We propose a mistake-driven online learning algorithm for training the parameter vectors all together. We give the algorithm the name FR-Perceptron since it is a Perceptron-based learning algorithm that approximates the prediction vectors in P as Filters of words, and the score vectors as Rankers of phrases. The algorithm starts with all vectors initialized to 0, and then runs repeatedly in a number of epochs T through all the sentences in the training set. Given a sentence, it predicts its optimal phrase solution as specified in (1) using the current vectors. As in the traditional Perceptron algorithm, if the predicted phrase set is not perfect, the vectors responsible for the incorrect prediction are updated additively. The algorithm is as follows:
• Input: {(x1, y1), . . . , (xm, ym)}, where xi are sentences and yi are solutions in Y.
• Define: W = {wkS, wkE, wk | k ∈ K}.
• Initialize: ∀w ∈ W, w = 0.
• For t = 1 . . . T, for i = 1 . . . m:
  1. ŷ = RW(xi)
  2. recognition_learning_feedback(W, xi, yi, ŷ)
• Output: the vectors in W.
We now describe the recognition-based learning feedback. By analyzing the dependencies between each function and a solution, we derive a feedback rule which naturally fits the phrase recognition setting.
Let y∗ be the gold set of phrases for a sentence x, and ŷ the set predicted by the R function. Let goldS(xi, k) and goldE(xi, k) be, respectively, the perfect indicator functions for start and end boundaries of phrases of type k. That is, they return 1 if word xi starts/ends some k-phrase in y∗ and -1 otherwise. We differentiate three kinds of phrases in order to give feedback to the functions being learned:
• Phrases correctly identified, ∀(s, e)k ∈ y∗ ∩ ŷ:
  – Do nothing, since they are correct.
• Missed phrases, ∀(s, e)k ∈ y∗ \ ŷ:
  1. Update misclassified boundary words:
     if (wkS · φw(xs) ≤ 0) then wkS = wkS + φw(xs)
     if (wkE · φw(xe) ≤ 0) then wkE = wkE + φw(xe)
  2. Update the score function, if applied:
     if (wkS · φw(xs) > 0 ∧ wkE · φw(xe) > 0) then wk = wk + φp((s, e)k, x, y)
• Over-predicted phrases, ∀(s, e)k ∈ ŷ \ y∗:
  1. Update the score function: wk = wk − φp((s, e)k, x, y)
  2. Update words misclassified as S or E:
     if (goldS(xs, k) = −1) then wkS = wkS − φw(xs)
     if (goldE(xe, k) = −1) then wkE = wkE − φw(xe)
This feedback models the interaction between the two layers of the recognition process. The start-end layer filters out phrase candidates for the scoring layer. Thus, misclassifying the boundary words of a correct phrase blocks the generation of the candidate and produces a missed phrase. Therefore, we move the start or end prediction vectors toward the misclassified boundary words of a missed phrase. When an incorrect phrase is predicted, we move the prediction vectors away from its start or end words, provided that they are not boundary words of a phrase in the gold solution. Note that we deliberately do not care about false positive start or end words which do not finally over-produce a phrase. Regarding the scoring layer, each category prediction vector is moved toward missed phrases and away from over-predicted phrases.
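The feedback rule transcribes almost literally into code. The following is a sketch, not the authors' implementation: the helper names (phi_w, phi_p, gold_S, gold_E) are hypothetical, vectors are plain Python lists, and W holds the prediction vectors W['S'][k], W['E'][k], W['score'][k]:

```python
def fr_feedback(W, x, y_gold, y_hat, phi_w, phi_p, gold_S, gold_E):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    add = lambda u, v, sg=1: [a + sg * b for a, b in zip(u, v)]
    for (s, e, k) in set(y_gold) - set(y_hat):      # missed phrases
        if dot(W['S'][k], phi_w(x, s)) <= 0:        # boundary word misclassified
            W['S'][k] = add(W['S'][k], phi_w(x, s))
        if dot(W['E'][k], phi_w(x, e)) <= 0:
            W['E'][k] = add(W['E'][k], phi_w(x, e))
        if dot(W['S'][k], phi_w(x, s)) > 0 and dot(W['E'][k], phi_w(x, e)) > 0:
            W['score'][k] = add(W['score'][k], phi_p(x, (s, e, k)))
    for (s, e, k) in set(y_hat) - set(y_gold):      # over-predicted phrases
        W['score'][k] = add(W['score'][k], phi_p(x, (s, e, k)), -1)
        if not gold_S(x, s, k):                     # not a gold start word
            W['S'][k] = add(W['S'][k], phi_w(x, s), -1)
        if not gold_E(x, e, k):                     # not a gold end word
            W['E'][k] = add(W['E'][k], phi_w(x, e), -1)
```

Correctly identified phrases appear in neither set difference, so, as in the rule, they trigger no update.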
It is important to note that this feedback operates only on the basis of the predicted solution ŷ, avoiding updates for every individual prediction the functions have made. Thus, the learning strategy takes advantage of the recognition process, and concentrates on (i) assigning high scores to the correct phrases and (ii) making the incorrect competing phrases score lower than the correct ones. As a consequence, this feedback rule tends to approximate the desired behavior of the global R function, that is, to make the summation of the scores of the correct phrase set maximal with respect to other candidate phrase sets. This learning strategy is closely related to other recent works on learning ranking functions [10, 8, 9]. A Note on the Convergence Assuming linear separability for each start, end and score function, it can be shown that (i) the mistakes of the start-end filters are bounded (applying Novikoff’s proof); (ii) between two consecutive updates in the start-end layer, there is room for only a finite number of updates of the score function; and (iii) once the start-end filters have converged, the correct solution is always considered as a candidate in the score layer, and in this state the overall learning process converges (applying the proof of Collins for a perceptron tagger [10]). 4 Experiments on Clause Identification Clause Identification is the problem of recognizing the clauses of a sentence. A clause can be roughly defined as a phrase with a subject, possibly implicit, and a predicate. Clauses in a sentence form a hierarchical structure which constitutes the skeleton of the full syntactic tree. In the following example, the clauses are annotated with brackets: ( (When (you don’t have any other option)), it is easy (to fight) .) We followed the setting of the CoNLL-2001 competition 1. The problem consists of recognizing the set of clauses on the basis of words, part-of-speech tags (PoS), and syntactic base phrases (or chunks).
There is only one category of phrases to be considered, namely the clauses. The data consists of a training set (8,936 sentences, 24,841 clauses), a development set (2,012 sentences, 5,418 clauses) and a test set (1,671 sentences, 5,225 clauses). Representation Functions We now describe the representation functions φw and φp, which respectively map a word or a phrase, with its local context, into a feature vector in {0, 1}n. Their design is inspired by our previous work [6]. For the function φw(xi) we capture the form, PoS and chunk tags of words in a window around xi, that is, words xi+l with l ∈ [−Lw, +Lw]. Each attribute type, together with each relative position l and each returned value, forms a final binary indicator feature (for instance, “the word at position -2 is that” is a binary feature). Also, we consider the word decisions of the words to the left of xi, that is, binary flags indicating whether the [−Lw, −1] words in the window are starts and/or ends of a phrase. For the function φp(s, e) we represent the context of the phrase by capturing a [−Lp, 0] window of forms, PoS and chunks at the s word, and a separate [0, +Lp] window at the e word. Furthermore, we represent the (s, e) phrase itself by evaluating a pattern from s to e which captures the relevant elements in the sentence fragment from word s to word e 2. We experimentally set both Lw and Lp to 3. On this problem we were interested in comparing the FR-Perceptron algorithm against other alternative learning methods. The system to train was composed of the start and end functions which identify clause candidates, and a score function for clauses. As alternatives, we first considered a batch classification setting, in which each function is trained separately with a binary classification loss. To do so, we generated three data sets from the training examples, one for each function. For the start-end sets, we considered an example for each word in the data.
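A toy sketch of the word representation φw, restricted to the form and PoS window indicators (the chunk-tag and left-word-decision features described above are omitted for brevity; the naming scheme is ours):

```python
def window_features(words, pos_tags, i, L=3):
    """Binary indicator features for word forms and PoS tags in a
    [-L, +L] window around word i, returned as a set of active features."""
    feats = set()
    for l in range(-L, L + 1):
        j = i + l
        if 0 <= j < len(words):
            feats.add(f"word[{l}]={words[j]}")
            feats.add(f"pos[{l}]={pos_tags[j]}")
    return feats
```

Each active feature corresponds to one coordinate set to 1 in the {0, 1}n vector; a sparse set representation keeps the dot products of the perceptrons cheap.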
To train the score classifier, we generated only the phrase candidates formed with all pairs of correct phrase boundaries. This latter generation greatly reduces the real instance space in which the scoring function operates. The alternative of generating all possible phrases as examples would be more realistic, but infeasible for the learning algorithm, since it would produce 1,377,843 examples, 98.2% of which are negative. As a secondary, intermediate approach, we considered a simple model which learns all the functions online via a binary classification loss. That is, the training sentences are visited online as in the FR-Perceptron: first, the start-end functions are applied to each word, and according to their positive decisions, phrase examples are generated to train the score function. In this way, the input of the score function is dynamically adapted to the start-end behavior, but a classification feedback is given to each function for each decision taken. The functions of the system were actually modeled as Voted Perceptrons [11], which compute a prediction as an average of all vectors generated during training. For the batch classification setting, we modeled the functions as Voted Perceptrons and also as SVMs3. In all cases, a function can be expressed in dual form as a combination of training instances, which allows the use of kernel functions. We work with polynomial kernels of degree 2. 4 We trained the perceptron models for up to 20 epochs via the FR-Perceptron algorithm and via classification feedback, either online (CO-VP) or batch (CB-VP). We also trained SVM classifiers (Cl-SVM), adjusting the soft margin C parameter on the development set. 1Data and details at the CoNLL-2001 website: http://cnts.uia.ac.be/conll2001 .
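A generic dual-form (kernel) perceptron with a degree-2 polynomial kernel, of the kind the voted models above build on, can be sketched as follows (this is the plain variant without vote averaging, and our own illustration rather than the paper's code):

```python
def kernel_perceptron_train(X, y, epochs=20, degree=2):
    """Dual-form perceptron: each mistake on example i increments alpha[i].
    The kernel (u.v + 1)^degree implicitly expands to all feature products
    up to the given degree, so the model need never form them explicitly."""
    K = lambda u, v: (sum(a * b for a, b in zip(u, v)) + 1) ** degree
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, (x, t) in enumerate(zip(X, y)):
            pred = sum(a * tj * K(xj, x) for a, xj, tj in zip(alpha, X, y) if a)
            if t * pred <= 0:
                alpha[i] += 1
    return alpha, K

def kp_predict(alpha, K, X, y, x):
    """Real-valued prediction sum_i alpha_i y_i K(x_i, x)."""
    return sum(a * t * K(xi, x) for a, xi, t in zip(alpha, X, y))
```

The voted variant additionally averages the predictions of all intermediate hypotheses, weighted by their survival time [11].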
2 The following elements are considered in a pattern: a) Punctuation marks and coordinate conjunctions; b) The word that; c) Relative pronouns; d) Verb phrase chunks; and e) The top clauses within the s to e fragment, already recognized through the bottom-up search (a clause in a pattern reduces all the elements within it into an atomic element).
3 We used the SVMlight package available at http://svmlight.joachims.org .
4 Initial tests revealed poor performance for the linear case and no improvements for degrees > 2.
Figure 1: Performance on the development set with respect to the number of epochs. Top: global F1 (left) and precision/recall on starts (right). Bottom: given the start-end filters, upper bound on the global F1 (left) and number of proposed phrase candidates (right). [Four panels plot curves for FR-Perceptron, CO-VP, CB-VP and SVM against the number of epochs; axis-tick values omitted.]
Figure 1 (top, left) shows the performance curves in terms of the F1 measure with respect to the number of training epochs. Clearly, the FR-Perceptron model exhibits a much better curve than the classification models, being at any epoch more than 2 points higher than the online model, and far above the batch models. To get an idea of how the learning strategy behaves, it is interesting to look at the other plots of Figure 1. The top right plot shows the performance of the start function. The FR-Perceptron model exhibits the desirable filtering behavior for this local decision, which consists in maintaining a very high recall (so that no correct candidates are blocked) while increasing the precision during epochs.
In contrast, the CO-VP model concentrates mainly on the precision. The same behavior is observed for the other classification models, and also for the end local decision. The start-end behavior is also shown from a global point of view in the bottom plots. The left plot shows the maximum achievable global F1, assuming a perfect scorer, given the phrases proposed by the start-end functions. Additionally, the right plot depicts the filtering capabilities in terms of the number of phrase candidates produced, out of a total number of 300,511 possible phrases. The FR-Perceptron behavior in the filtering layer is clear: while it maintains a high recall on identifying correct phrases (above 95%), it substantially reduces the number of phrase candidates to explore in the scoring layer, and thus it progressively simplifies the input to the score function. In contrast, the classification-based models are not sensitive to the global performance in the filtering layer and, although they aggressively reduce the search space, provide only a moderate upper bound on the global F1. Table 1 shows the performance of each model, together with the results of our previous system [6], which held the best results on the problem. There, the same decisions were learned by AdaBoost classifiers working in a richer feature space. Also, the score function was a robust combination of several classifiers. These were trained taking into account the errors of the start-end classifiers, which required a tuning procedure to select the amount of introduced errors. Our new approach is much simpler to learn, since the interaction between functions is naturally ruled by the recognition feedback. Looking at the results, we substantially improve the global F1.

                      development                  test
                T    prec.  recall  Fβ=1    prec.  recall  Fβ=1
CB-VP           8    83.84  80.55   82.16   82.22  78.09   80.10
SVM                  84.31  82.83   83.57   83.19  80.00   81.57
CO-VP           19   91.06  80.62   85.52   89.25  77.62   83.03
FR-Perceptron   20   90.56  85.73   88.08   88.17  82.10   85.03
AdaBoost [6]    –    92.53  82.48   87.22   90.18  78.11   83.71

Table 1: Results of Clause Identification on the CoNLL-2001 development and test sets. The T column shows the optimal number of epochs on the development set.
5 Conclusion We have presented a global learning strategy for the general problem of recognizing structures of phrases, in which, typically, several different learning functions interact to explore and recognize the structure. The effectiveness of our method has been empirically demonstrated on the problem of clause identification, where we have shown that a considerable improvement can be obtained by exploiting high-order global dependencies in learning, in contrast to concentrating only on the local subproblems. These results suggest scaling up global learning strategies to more complex problems found in the natural language area (such as full parsing or machine translation), or to other structured domains. Acknowledgements Research partially funded by the European Commission (Meaning, IST-2001-34460) and the Spanish Research Department (Hermes, TIC2000-0335-C03-02; Petra, TIC2000-1735C02-02). Xavier Carreras is supported by a grant from the Catalan Research Department. References [1] E. F. Tjong Kim Sang and S. Buchholz. Introduction to the CoNLL-2000 Shared Task: Chunking. In Proc. of CoNLL-2000 and LLL-2000, 2000. [2] Erik F. Tjong Kim Sang and Hervé Déjean. Introduction to the CoNLL-2001 Shared Task: Clause Identification. In Proc. of CoNLL-2001, 2001. [3] A. Ratnaparkhi. Learning to Parse Natural Language with Maximum-Entropy Models. Machine Learning, 34(1):151–175, 1999. [4] V. Punyakanok and D. Roth. The Use of Classifiers in Sequential Inference. In Advances in Neural Information Processing Systems 13 (NIPS’00), 2001. [5] T. Kudo and Y. Matsumoto.
Chunking with Support Vector Machines. In Proc. of 2nd Conference of the North American Chapter of the Association for Computational Linguistics, 2001. [6] X. Carreras, L. Màrquez, V. Punyakanok, and D. Roth. Learning and Inference for Clause Identification. In Proceedings of the 14th ECML, Helsinki, Finland, 2002. [7] T. Kudo and Y. Matsumoto. Japanese Dependency Analysis using Cascaded Chunking. In Proc. of CoNLL-2002, 2002. [8] K. Crammer and Y. Singer. A Family of Additive Online Algorithms for Category Ranking. Journal of Machine Learning Research, 3:1025–1058, 2003. [9] S. Har-Peled, D. Roth, and D. Zimak. Constraint Classification for Multiclass Classification and Ranking. In Advances in Neural Information Processing Systems 15 (NIPS’02), 2003. [10] M. Collins. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of the EMNLP’02, 2002. [11] Y. Freund and R. E. Schapire. Large Margin Classification Using the Perceptron Algorithm. Machine Learning, 37(3):277–296, 1999.
Iterative scaled trust-region learning in Krylov subspaces via Pearlmutter’s implicit sparse Hessian-vector multiply Eiji Mizutani Department of Computer Science Tsing Hua University Hsinchu, 300 TAIWAN R.O.C. eiji@wayne.cs.nthu.edu.tw James W. Demmel Mathematics and Computer Science University of California at Berkeley, Berkeley, CA 94720 USA demmel@cs.berkeley.edu Abstract The online incremental gradient (or backpropagation) algorithm is widely considered to be the fastest method for solving large-scale neural-network (NN) learning problems. In contrast, we show that an appropriately implemented iterative batch-mode (or block-mode) learning method can be much faster. For example, it is three times faster in the UCI letter classification problem (26 outputs, 16,000 data items, 6,066 parameters with a two-hidden-layer multilayer perceptron) and 353 times faster in a nonlinear regression problem arising in color recipe prediction (10 outputs, 1,000 data items, 2,210 parameters with a neuro-fuzzy modular network). The three principal innovative ingredients in our algorithm are the following: First, we use scaled trust-region regularization with inner-outer iteration to solve the associated “overdetermined” nonlinear least squares problem, where the inner iteration performs a truncated (or inexact) Newton method. Second, we employ Pearlmutter’s implicit sparse Hessian matrix-vector multiply algorithm to construct the Krylov subspaces used to solve for the truncated Newton update. Third, we exploit sparsity (for preconditioning) in the matrices resulting from the NNs having many outputs. 1 Introduction Our objective function to be minimized for optimizing the n-dimensional parameter vector θ of an F-output NN model is the sum of squared residuals over all d data: E(θ) = (1/2)∥r(θ)∥₂² = (1/2) Σ_{i=1}^{m} r_i² = (1/2) Σ_{k=1}^{F} ∥r_k∥₂². Here, m ≡ Fd; r(θ) is the m-dimensional residual vector composed of all m residual elements: r_i (i = 1, . . .
, m); and rk the d-dimensional residual vector evaluated at terminal node k. The gradient vector and the Hessian matrix of E(„) are given by g ≡JT r and H ≡JT J + S, respectively, where J, the m×n (residual) Jacobian matrix of r, is readily obtainable from backpropagation (BP) process, and S is the matrix of second-derivative terms of r; i.e., S ≡m i=1 ri∇2ri. Most nonlinear least squares algorithms take advantage of information of J or its cross product called the Gauss-Newton (GN) Hessian JT J (or the Fisher information matrix for E(.) in Amari’s natural-gradient learning [1]), which is the important portion of H because influence of S becomes weaker and weaker as residuals become smaller while learning progresses. With multiple F-output nonlinear models (except fully-connected NNs), J is known to have the m × n block angular matrix form (see [7, 6] and references therein). For instance, consider a single-hidden layer S-H-F MLP (with S-input H-hidden F-output nodes); there are nA=F(H + 1) terminal parameters „A (including threshold parameters) on direct connections to F terminal nodes, each of which has CA(=H + 1) direct connections, and the rest of nB=H(S + 1) parameters are not directly connected to any terminal node; hence, nB hidden parameters „B. In other words, model’s parameters „ (n=FCA + nB in total) can separate as: „T= [„AT |„BT ] =[„AT 1 , · · · , „AT k , · · · , „AT F |„BT ], where „A k is a vector of the kth subset of CA terminal parameters directly linked to terminal node k (k = 1, · · · ,F). The associated residual Jacobian matrix J can be given in the block-angular form below left, and thus the (full) Hessian matrix H has the n × n sparse block arrow form below right (× denotes some non-zero block) as well as the GN-Hessian JT J: J  m×n =   A1 B1 A2 B2 ... ... AF BF  , H  n×n =   × × × × × × × × × × × × ×   . 
(1) Here in $J$, $A_k$ and $B_k$ are the $d \times C_A$ and $d \times n_B$ Jacobian matrices, respectively, of the $d$-dimensional residual vector $r_k$ evaluated at terminal node $k$. Notice that there are $F$ diagonal $A_k$ blocks [because the $(F-1)C_A$ terminal parameters excluding $\theta^A_k$ have no effect on $r_k$], and $F$ vertical $B_k$ blocks corresponding to the $n_B$ hidden parameters $\theta^B$ that contribute to minimizing all the residuals $r_k$ ($k = 1, \cdots, F$) evaluated at all $F$ terminal nodes. Therefore, the posed problem is overdetermined when $m > n$ (namely, $d > C_A + \frac{1}{F} n_B$) holds. In addition, when the terminal nodes have linear identity functions, the terminal parameters $\theta^A$ are linear, and thus all the $A_k$ blocks become identical, $A_1 = A_2 = \cdots = A_F$, with $H+1$ hidden-node outputs (including one constant bias-node output) in each row. For small- and medium-scale problems, direct batch-mode learning is recommendable with a suitable "direct" matrix factorization, but attention must be paid to exploiting the obvious sparsity in either the block-angular $J$ or the block-arrow $H$ so as to render the algorithms efficient in both memory and operation counts [7, 6]. Notice that $H^{-1}$ is dense even if $H$ has a nice block-arrow sparsity structure. For large-scale problems, Krylov subspace methods, which circumvent the need to perform time-consuming and memory-intensive direct matrix factorizations, can be employed to realize what we call iterative batch-mode learning. If the rows (or columns) of the matrices $A_k$ and $B_k$ are not needed explicitly, then Pearlmutter's method [11] can automatically exploit such sparsity to perform the sparse Hessian-vector product used in constructing a Krylov subspace for parameter optimization, as we describe in what follows together with our numerical evidence.

2 Inner-Outer Iterative Scaled Trust-Region Methods

Practical Newton methods enjoy both the global convergence property of the Cauchy (or steepest descent) method and the fast local convergence of the Newton method.
2.1 Outer iteration process in trust-region methods

One might consider a convex combination of the Cauchy step $\Delta\theta_{\rm Cauchy}$ and the Newton step $\Delta\theta_{\rm Newton}$, such as (using a scalar parameter $h$):
$$\Delta\theta_{\rm Dogleg} \stackrel{\rm def}{=} (1-h)\,\Delta\theta_{\rm Cauchy} + h\,\Delta\theta_{\rm Newton}, \qquad (2)$$
which is known as the dogleg step [4, 9]. This step yields a good approximate solution to the so-called "scaled 2-norm" or "$M$-norm" trust-region subproblem (e.g., see Chap. 7 in [2]) with Lagrange multiplier $\mu$ below:
$$\min_{\Delta\theta}\ q(\Delta\theta) \ \ \text{subject to} \ \ \|\Delta\theta\|_M \le R, \quad \text{or} \quad \min_{\Delta\theta}\ \left\{ q(\Delta\theta) + \frac{\mu}{2}\left(\Delta\theta^T M \Delta\theta - R^2\right) \right\}, \qquad (3)$$
where distances are measured in the $M$-norm, $\|x\|_M = \sqrt{x^T M x}$, with a symmetric positive definite matrix $M$, and $R$ (called the trust-region radius) signifies the trust-region size of the local quadratic model $q(\Delta\theta) \stackrel{\rm def}{=} E(\theta) + g^T \Delta\theta + \frac{1}{2}\Delta\theta^T H \Delta\theta$. The radius $R$ is controlled according to how well $q(.)$ predicts the behavior of $E(.)$ by checking the error reduction ratio:
$$\rho = \frac{\text{Actual error reduction}}{\text{Predicted error reduction}} = \frac{E(\theta_{\rm now}) - E(\theta_{\rm next})}{E(\theta_{\rm now}) - q(\Delta\theta)}. \qquad (4)$$
For more details, refer to [9, 2]. The posed constrained quadratic minimization can be solved with the Lagrange multiplier $\mu$: if $\Delta\theta$ is a solution to the posed problem, then $\Delta\theta$ satisfies $(H + \mu M)\Delta\theta = -g$, with $\mu(\|\Delta\theta\|_M - R) = 0$, $\mu \ge 0$, and $H + \mu M$ positive semidefinite. In the nonlinear least squares context, the nonnegative scalar $\mu$ is known as the Levenberg-Marquardt parameter. When $\mu = 0$ (namely, $R \ge \|\Delta\theta_{\rm Newton}\|_M$), the trust-region step $\Delta\theta$ becomes the Newton step $\Delta\theta_{\rm Newton} \stackrel{\rm def}{=} -H^{-1}g$, and, as $\mu$ increases (i.e., as $R$ decreases), $\Delta\theta$ gets closer to the (full) Cauchy step
$$\Delta\theta_{\rm Cauchy} \stackrel{\rm def}{=} -\left(\frac{g^T M^{-1} g}{g^T M^{-1} H M^{-1} g}\right) M^{-1} g.$$
When $R < \|\Delta\theta_{\rm Cauchy}\|_M$, the trust-region step $\Delta\theta$ reduces to the restricted Cauchy step $\Delta\theta_{\rm RC} \stackrel{\rm def}{=} -(R/\|\Delta\theta_{\rm Cauchy}\|_M)\,\Delta\theta_{\rm Cauchy}$. If $\|\Delta\theta_{\rm Cauchy}\|_M < R < \|\Delta\theta_{\rm Newton}\|_M$, then $\Delta\theta$ is the "dogleg step," intermediate between $\Delta\theta_{\rm Cauchy}$ and $\Delta\theta_{\rm Newton}$, as shown in Eq. (2), where the scalar $h$ ($0 < h < 1$) is the positive root of $\|s + hp\|_M = R$:
$$h = \frac{-s^T M p + \sqrt{(s^T M p)^2 + p^T M p\,(R^2 - s^T M s)}}{p^T M p}, \qquad (5)$$
with $s \stackrel{\rm def}{=} \Delta\theta_{\rm Cauchy}$ and $p \stackrel{\rm def}{=} \Delta\theta_{\rm Newton} - \Delta\theta_{\rm Cauchy}$ (when $p^T g < 0$). In this way, the trial step $\Delta\theta$ is subject to trust-region regularization. In large-scale problems, the linear-equation solution sequence $\{\Delta\theta_k\}$ is generated iteratively while seeking a trial step $\Delta\theta$ in the inner iteration process, and the parameter sequence $\{\theta_i\}$, whose two consecutive elements are denoted by $\theta_{\rm now}$ and $\theta_{\rm next}$, is produced by the outer iteration (i.e., epoch in batch mode). The outer iterative process updates parameters by $\theta_{\rm next} = \theta_{\rm now} + \Delta\theta$ without taking any uphill movement: that is, if the step is not satisfactory, then $R$ is decreased so as to realize an important Levenberg-Marquardt concept: the failed step is shortened and deflected towards the Cauchy-step direction simultaneously. For this purpose, the trust-region methods compute the gradient vector in batch mode or with a (sufficiently large) data block (i.e., block mode; see our demonstration in Section 3).

2.2 Inner iteration process with truncated preconditioned linear CG

We employ a preconditioned conjugate gradient (PCG) method (among many Krylov subspace methods; see Section 6.6 in [3] and Chapter 5 in [2]) with our symmetric positive definite preconditioner $M$ for solving the $M$-norm trust-region subproblem (3). This is the truncated PCG (also known as Steihaug-Toint CG), applicable even to nonconvex problems, which solves the Newton formula inexactly by the inner iterative process below (see pp. 628-629 in [10]; pp. 202-218 in [2]), based on the standard PCG algorithm (e.g., see page 317 in [3]):

Algorithm 1: The inner iteration process via preconditioned CG.
1. Initialization ($k = 0$): Set $\Delta\theta_0 = 0$ and $\delta_0 = -g$ ($= -g - H\Delta\theta_0$); solve $Mz = \delta_0$ for the pseudoresiduals: $z = M^{-1}\delta_0$; compute $\tau_0 = \delta_0^T z$; set $k = 1$ and $d_1 = z$, and then proceed to Step 2.
2. Matrix-vector product: $z = Hd_k = J^T(Jd_k) + Sd_k$ (see also Algorithm 2).
3.
Curvature check: $\gamma_k = d_k^T z = d_k^T H d_k$. If $\gamma_k > 0$, continue with Step 4. Otherwise, compute $h\,(>0)$ such that $\|\Delta\theta_{k-1} + h d_k\|_M = R$, and terminate with $\Delta\theta = \Delta\theta_{k-1} + h d_k$.
4. Step size: $\eta_k = \tau_{k-1}/\gamma_k$.
5. Approximate solution: $\Delta\theta_k = \Delta\theta_{k-1} + \eta_k d_k$. If $\|\Delta\theta_k\|_M < R$, go on to Step 6; else terminate with
$$\Delta\theta = \frac{R}{\|\Delta\theta_k\|_M}\,\Delta\theta_k. \qquad (6)$$
6. Linear-system residuals: $\delta_k = \delta_{k-1} - \eta_k z$ [$= -g - H\Delta\theta_k = -q'(\Delta\theta_k)$]. If $\|\delta_k\|_2$ is small enough, i.e., $\|\delta_k\|_2 \le \xi\|g\|_2$, then terminate with $\Delta\theta = \Delta\theta_k$.
7. Pseudoresiduals: $z = M^{-1}\delta_k$, and then compute $\tau_k = \delta_k^T z$.
8. Conjugation factor: $\beta_{k+1} = \tau_k/\tau_{k-1}$.
9. Search direction: $d_{k+1} = z + \beta_{k+1} d_k$.
10. If $k < k_{\rm limit}$, set $k = k+1$ and return to Step 2. Otherwise, terminate with $\Delta\theta = \Delta\theta_k$. □

At Step 3, $h$ is obtainable from Eq. (5) with $s = \Delta\theta_{k-1}$ and $p = d_k$ plugged in. Likewise, in place of Eq. (6) at Step 5, we may use Eq. (5) to obtain $\Delta\theta = \Delta\theta_{k-1} + h d_k$ such that $\|\Delta\theta_{k-1} + h d_k\|_M = R$, but both computations become identical if $R \le \|\Delta\theta_{\rm Cauchy}\|_M$; otherwise, Eq. (6) is less expensive and tends to bias the step more towards the Newton direction. The inner iterative process terminates (i.e., stops at inner iteration $k$) when one of the next four conditions holds:
$$\text{(A)}\ d_k^T H d_k \le 0, \quad \text{(B)}\ \|\Delta\theta_k\|_M \ge R, \quad \text{(C)}\ \|H\Delta\theta_k + g\|_2 \le \xi\|g\|_2, \quad \text{(D)}\ k = k_{\rm limit}. \qquad (7)$$
Condition (D) at Step 10 is the least likely to be met, since there would be no prior knowledge for presetting the limit $k_{\rm limit}$ on inner iterations (usually, $k_{\rm limit} = n$). As long as $d_k^T H d_k > 0$ holds, PCG works properly until the CG trajectory hits the trust-region boundary [Condition (B) at Step 5], or until the 2-norm linear-system residuals become small [Condition (C) at Step 6], where $\xi$ can be fixed (e.g., $\xi = 0.01$). Condition (A), $d_k^T H d_k \le 0$ (at Step 3), may hold when the local model is not strictly convex (i.e., $H$ is not positive definite). That is, $d_k$ is a direction of zero or negative curvature; a typical exploitation of non-positive curvature is to set $\Delta\theta$ equal to the "step to the trust-region boundary along that curvature segment (in Step 3)" as a model minimizer in the trust region. In this way, the terminated $k$th CG step yields an approximate solution to the trust-region subproblem (3), and it belongs to the Krylov subspace
$$\mathrm{span}\left\{ -M^{-\frac{1}{2}}g,\ -(M^{-\frac{1}{2}}HM^{-\frac{1}{2}})M^{-\frac{1}{2}}g,\ \ldots,\ -(M^{-\frac{1}{2}}HM^{-\frac{1}{2}})^{k-1}M^{-\frac{1}{2}}g \right\},$$
resulting from our application of CG (without multiplying by $M^{-\frac{1}{2}}$) to the symmetrized Newton formula $(M^{-\frac{1}{2}}HM^{-\frac{1}{2}})(M^{\frac{1}{2}}\Delta\theta) = -M^{-\frac{1}{2}}g$, because $M^{-1}H$ (in the system $M^{-1}H\Delta\theta = -M^{-1}g$) is unlikely to be symmetric (see page 317 in [3]) even if $M$ is a diagonal matrix (unless $M = I$). The overall memory requirement of Algorithm 1 is $O(n)$ because at most five $n$-vectors are enough to implement it. Since the matrix-vector product $Hd_k$ at Step 2 dominates the operation cost of the entire inner-outer process, we can employ Pearlmutter's method, with no $H$ explicitly required. To better understand the method, we first describe a straightforward implicit sparse matrix-vector multiply when $H = J^T J$; it evaluates $J^T J d_i$ (without forming $J^T J$) as a two-step implicit matrix-vector product, $z = J^T(Jd_i)$, exploiting the block-angular $J$ in Eq. (1); i.e., working on each block, $A_k$ and $B_k$, in a row-wise manner below:

Algorithm 2: Implicit (i.e., matrix-free) sparse matrix-vector multiplication step with an $F$-output NN model at inner iteration $i$, starting with $z = 0$:
for $p = 1$ to $d$ (i.e., one sweep of the $d$ training data):
  (a) do a forward pass to compute the $F$ final outputs $y_p(\theta)$ on datum $p$;
  for $k = 1$ to $F$ (at each terminal node $k$):
    (b) do a backward pass to obtain the $p$th row of $A_k$ as the $C_A$-vector $a_{p,k}^T$, and the $p$th row of $B_k$ as the $n_B$-vector $b_{p,k}^T$;
    (c) compute $\alpha_k a_{p,k}$ and $\alpha_k b_{p,k}$, where the scalar $\alpha_k = a_{p,k}^T d^a_{i,k} + b_{p,k}^T d^b_i$, and then add them to their corresponding elements of $z$;
  end for $k$.
end for $p$.
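The row-wise two-step product of Algorithm 2 can be sketched in NumPy as below. This is an illustrative implementation (not the authors' code), with block shapes and variable names chosen for the example; it accumulates $z = J^T(Jd)$ block by block without ever forming $J$ or $J^T J$ explicitly:

```python
import numpy as np

def gn_hessian_vec(A_blocks, B_blocks, d_A, d_B):
    """Matrix-free Gauss-Newton Hessian-vector product z = J^T (J d)
    for a block-angular Jacobian with rows [0 ... A_k ... 0 | B_k],
    k = 1..F.  d_A is a list of the F terminal sub-vectors of d and
    d_B the hidden sub-vector; J^T J is never formed."""
    F = len(A_blocks)
    z_A = [np.zeros(A.shape[1]) for A in A_blocks]   # terminal slots of z
    z_B = np.zeros(B_blocks[0].shape[1])             # hidden slot of z
    for k in range(F):
        # (J d) restricted to output k: alpha = A_k d_A[k] + B_k d_B
        alpha = A_blocks[k] @ d_A[k] + B_blocks[k] @ d_B
        # accumulate J_k^T alpha into the corresponding slots of z
        z_A[k] += A_blocks[k].T @ alpha
        z_B += B_blocks[k].T @ alpha
    return z_A, z_B
```

The per-block work is proportional to $d(C_A + n_B)$ for each of the $F$ outputs, matching the $O(m l_u)$ count discussed next.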
□ Here, Step (a) costs at least $2dn$ (see details in [8]); Step (b) costs at least $2m l_u$, where $m = Fd$ and $l_u = C_A + n_B < n = FC_A + n_B$; and Step (c) costs $4m l_u$; overall, Algorithm 2 costs $O(m l_u)$, linear in $F$. Note that if sparsity is ignored, the cost becomes $O(mn)$, quadratic in $F$, since $mn = Fd(FC_A + n_B)$. Algorithm 2 can extract explicitly the $F$ pairs of row vectors ($a^T$ and $b^T$) of $J$ (with $F l_u$ storage) on each datum, making it easier to apply other numerical linear algebra approaches such as preconditioning to reduce the number of inner iterations. Yet, if the row vectors are not needed explicitly, then Pearlmutter's method is more efficient, calculating $\alpha_k$ [see Step (c)] in its forward pass (i.e., $\mathcal{R}\{y_k\} = \alpha_k$; see Eq. (4.3) on page 151 in [11]). When $H = J^T J$, it is easy to simplify its backward pass (see Eq. (4.4) on page 152 in [11]), just by eliminating the terms involving the residuals $r$ and the second derivatives of the node functions $f''(.)$, so as to multiply the vectors $a_k$ and $b_k$ through by the scalar $\alpha_k$ implicitly. This simplified method of Pearlmutter runs in time $O(dn)$, whereas Algorithm 2 runs in $O(m l_u)$. Since $m l_u - dn = dF(C_A + n_B) - d(FC_A + n_B) = d(F-1)n_B$, Pearlmutter's method can be up to $F$ times faster than Algorithm 2. Furthermore, Pearlmutter's original method efficiently multiplies an $n$-vector by the "full" Hessian matrix, still in $O(dn)$, for $z = Hd_i = J^T(Jd_i) + Sd_i = \sum_{j=1}^{m}(u_j^T d_i)u_j + \sum_{j=1}^{m} r_j[\nabla^2 r_j]d_i$, where $u_j^T$ is the $j$th row vector of $J$; notably, the method automatically exploits the block-arrow sparsity of $H$ [see Eq. (1), right] in essentially the same way as the standard BP deals with the block-angular sparsity of $J$ [see Eq. (1), left] to perform the matrix-vector product $g = J^T r$ in $O(dn)$.
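For reference, the inner iteration of Algorithm 1 can be sketched as follows for the special case of the identity preconditioner $M = I$ (so the $M$-norm is the ordinary 2-norm). This is a simplified illustration, not the authors' implementation; `hess_vec` stands for any matrix-free Hessian-vector product, such as Algorithm 2 or Pearlmutter's method:

```python
import numpy as np

def steihaug_cg(hess_vec, g, R, xi=0.01, kmax=None):
    """Truncated (Steihaug-Toint) CG sketch for min q(s) = g^T s + 0.5 s^T H s
    subject to ||s|| <= R, with M = I.  hess_vec(v) returns H v; H itself is
    never formed.  Terminates on the conditions (A)-(D) of Eq. (7)."""
    n = g.size
    kmax = n if kmax is None else kmax
    s = np.zeros(n)
    delta = -g.copy()            # linear-system residual -g - H s
    d = delta.copy()             # first search direction
    tau = delta @ delta
    for _ in range(kmax):        # condition (D): iteration limit
        z = hess_vec(d)
        gamma = d @ z            # curvature check, condition (A)
        if gamma <= 0:           # non-positive curvature: go to the boundary
            return s + _boundary_root(s, d, R) * d
        eta = tau / gamma
        s_new = s + eta * d
        if np.linalg.norm(s_new) >= R:          # condition (B): Eq. (6)
            return (R / np.linalg.norm(s_new)) * s_new
        delta = delta - eta * z
        if np.linalg.norm(delta) <= xi * np.linalg.norm(g):
            return s_new         # condition (C): inexact Newton step accepted
        tau_new = delta @ delta
        d = delta + (tau_new / tau) * d
        s, tau = s_new, tau_new
    return s

def _boundary_root(s, d, R):
    """Positive root h of ||s + h d|| = R (cf. Eq. (5) with M = I)."""
    sd, dd, ss = s @ d, d @ d, s @ s
    return (-sd + np.sqrt(sd * sd + dd * (R * R - ss))) / dd
```

With a large radius and tight tolerance the routine reduces to plain CG and returns (approximately) the Newton step; with a small radius it stops on the trust-region boundary.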
3 Experiments and Discussion

In simulation, we compared the following five algorithms:
Algorithm A: Online-BP (i.e., $H = I$) with a fixed momentum (0.8);
Algorithm B: Algorithm 2 alone for Algorithm 1 with $H = J^T J$ (see [6]);
Algorithm C: Pearlmutter's method alone for Algorithm 1 with $H = J^T J$;
Algorithm D: Algorithm 2 to obtain the preconditioner $M = \mathrm{diag}(J^T J)$ only, and Pearlmutter's method for Algorithm 1 with $H = J^T J$;
Algorithm E: Same as Algorithm D except with the "full" Hessian $H = J^T J + S$. □

Algorithm A is tested for speed-comparison purposes, because if it works, it is probably the fastest. In Algorithms D and E, Algorithm 2 was employed only for obtaining a diagonal preconditioner $M = \mathrm{diag}(J^T J)$ (or Jacobi preconditioner) for Algorithm 1, whereas in Algorithms B and C, no preconditioning ($M = I$) was applied. The performance comparisons were made on a nonlinear regression task and on a classification benchmark, the letter recognition problem, from the UCI machine learning repository. All the experiments were conducted on a 1.6-GHz Pentium-IV PC with FreeBSD 4.5 and the gcc-2.95.3 compiler (with the -O2 optimization flag). The first, regression task was a real-world application, color recipe prediction: the problem of determining the mixing proportions of available colorants to reproduce a given target color, requiring mappings from 16 inputs (16 spectral reflectance signals of the target color) to ten outputs ($F = 10$; ten colorant proportions) using 1,000 training data ($d = 1{,}000$; $m = 10{,}000$) with 302 test data. The table below shows the results averaged over 20 trials with a single 16-82-10 MLP [$n = 2{,}224$ ($C_A = 83$; $n_B = 1{,}394$; $l_u = 1{,}477$); hence, $\frac{m l_u}{dn} = 6.6$], which was optimized until "training RMSE $\le 0.002$ (application requirement)" was satisfied, at which point we say that "convergence" (a relatively early stop) occurs.
Clearly, the posed regression task is nontrivial: Algorithm A, online-BP, took roughly six days (averaged over only ten trials), nearly 280 ($= 8748.4/31.2$) times slower than the fastest, Algorithm D. In generalization performance, all the posed algorithms were more or less equivalent.

Model                         Single 16-82-10 MLP                   Five-MLP mixed
Algorithm            A            B      C      D      E            B      C      D
Total time (min)     8748.4       336.4  107.2  31.2   64.5         162.3  57.6   20.9
Stopped epoch        2,916,495.2  272.5  261.5  132.7  300.3        147.3  160.0  179.1
Time/epoch (sec)     0.2          73.8   24.6   14.1   12.9         65.2   21.6   7.0
Inner itr./epoch     N/A          218.3  216.0  142.7  110.9        193.8  174.1  66.0
Flops ratio/itr.     N/A          3.9    1.0 (C, D)    1.3          4.1    1.2 (C, D)
Test RMSE            0.020        0.015  0.015  0.015  0.015        0.016  0.016  0.017

We also observed that use of the full Hessian matrix (Algorithm E) helped reduce the number of inner iterations per epoch, although the total convergence time turned out to be greater than that obtained with the GN-Hessian (Algorithm D), presumably because our Jacobi preconditioner is more suitable for the GN-Hessian than for the full Hessian, and perhaps because the inner iterative process of Algorithm E can terminate upon detection of non-positive curvature in Eq. (7)(A); this extra chance of termination may increase the total number of epochs, but it helps reduce the time per epoch. Remarkably, the time per inner iteration of Algorithm E did not differ much from that of Algorithms C and D owing to Pearlmutter's method; in fact, given the preconditioner $M$, Algorithm E merely needed about 1.3 times more flops∗ per inner iteration than Algorithms C and D did, although Algorithm B needed nearly 3.9 times more. The measured megaflop rates for all these codes lie roughly in the range 200-270 Mflop/sec, typically below 10% of peak machine speed. For improving single-MLP performance, one might employ two layers of hidden nodes (rather than one large hidden layer; see the letter problem below), which increases $n_B$ while reducing $n_A$, rendering Algorithm 2 less efficient (i.e., slower).
Alternatively, one might introduce direct connections between the input and terminal output layers, which increases $C_A$, the column size of $A_k$, while retaining the nice parameter separability. Yet another approach (if applicable) is to use a "complementary mixtures of $Z$ MLP-experts" model (or a neuro-fuzzy modular network) that combines $Z$ smaller-size MLPs complementarily; the associated residual vector to be minimized becomes $r(\theta) = y(\theta) - t = \left(\sum_{i=1}^{Z} w_i o_i\right) - t$, where the scalar $w_i$, the $i$th output of the integrating unit, is the $i$th (normalized) mixing proportion assigned to the outputs (the $F$-vector $o_i$) of expert-MLP $i$. [∗ The floating-point operation counts were measured by using PAPI (Performance Application Programming Interface); see http://icl.cs.utk.edu/projects/papi/.] Note that each expert learns "residuals" rather than "desired outputs" (unlike in the committee method below), in the sense that only the final combined outputs $y$ must come close to the desired ones $t$. That is, there are strong coupling effects (see page 80 in [5]) among all the experts; hence, it is crucial to consider the global Hessian across all experts to optimize them simultaneously [7]. The corresponding $J$ has the same block-angular form as that in Eq. (1) (left), with $A_k \equiv [A^1_k A^2_k \cdots A^Z_k]$ and $B_k \equiv [B^1_k B^2_k \cdots B^Z_k]$ ($k = 1, \cdots, F$). Here, the residual Jacobian portion for the parameters of the integrating unit was omitted because those parameters were merely fine-tuned with a steepest-descent type method, owing to our knowledge-based design of the input partition to avoid (too many) local experts.
Specifically, the spectral reflectance signals (16 inputs) were converted to the hue angle as input to the integrating unit, which consists of five bell-shaped basis functions, partitioning that hue subspace alone, in a fuzzy fashion, into only five color regions (red, yellow, green, blue, and violet) for five 16-16-10 MLP-experts, each of which receives all 16 spectral signals as input [hence, $Z = 5$; $n = 2{,}210$ ($C_A = 85$; $n_B = 1{,}360$); $\frac{m l_u}{dn} = 6.5$]. Owing to the localized parameter tuning, our five-MLP mixtures model was better at learning; see the faster learning in the table above. In particular, our model with Algorithm D worked 353 ($\approx 123.1 \times 60.0/20.9$) times faster than with Algorithm A, which took 123.1 hours (see [6]), and 419 ($\approx 8748.4/20.9$) times faster than the single MLP with Algorithm A. For our complementary mixtures model, the $\mathcal{R}\{.\}$-operator of Pearlmutter's method is readily applicable; for instance, at terminal node $k$ ($k = 1, \cdots, F$): $\mathcal{R}\{r_k\} = \mathcal{R}\{y_k\} = \sum_{i}^{Z} \mathcal{R}\{o_{i,k}\} w_i + \sum_{i}^{Z} \mathcal{R}\{w_i\} o_{i,k}$, where each $\mathcal{R}\{o_{i,k}\}$ yields $\alpha_k$ [see Algorithm 2(c)] for each expert-MLP $i$ ($i = 1, \cdots, Z$). The second benchmark, the letter classification problem, involves 16 inputs (features) and 26 outputs (letter classes) with 16,000 training data ($F = 26$; $d = 16{,}000$; $m = 416{,}000$) plus 4,000 test data. We used a 16-70-50-26 MLP (see [12]) ($n = 6{,}066$) with 10 sets of different initial parameters randomly generated uniformly in the range $[-0.2, 0.2]$. We implemented block-mode learning (as well as batch mode) simply by splitting the training data set into two or four equally-sized data blocks; each data block alone is employed for Algorithms 1 and 2 except for computing $\rho$ in Eq. (4), where the evaluation of $E(.)$ involves all $d$ training data. Notice that the two-block mode learning scheme updates the model's parameters $\theta$ twice per epoch, whereas online-BP updates them on each datum (i.e., $d$ times per epoch).
We observed that possible redundancy in the data set appeared to help reduce the number of inner iterations, speeding up our iterative batch-mode learning; therefore, we did not use preconditioning. The next table shows the average performance (over ten trials) when the best test-set performance was obtained by epoch 1,000 with online-BP (i.e., Algorithm A) and by epoch 50 with Algorithm C in three learning modes:

Average results       Online-BP    Four-block mode   Two-block mode   Batch mode
Total time (min)      63.2         22.4              41.0             61.1
Stopped epoch         597.8        36.6              22.1             27.1
Time/epoch (sec)      6.3          36.8              111.7            135.2
Avg. inner itr.       N/A          4.5/block         26.3/block       31.0/batch
Error (train/test)    2.3% / 6.4%  2.7% / 5.1%       1.2% / 4.6%      1.2% / 4.9%
Committee error       0.2% / 3.0%  1.2% / 2.8%       0.3% / 2.2%      0.1% / 2.3%

On average, Algorithm C in four-block mode worked about three ($\approx 63.2/22.4$) times faster than online-BP, and thus can work faster than batch-mode nonlinear-CG algorithms, since, as reported in [12], online-BP worked faster than nonlinear CG. Here, we also tested the committee methods (see Chap. 8 in [13]) that merely combine all the (equally-weighted) outputs of the ten MLPs, which were optimized independently in this experiment. The committee error was better than the average error, as expected. Intriguingly, our block-mode learning schemes introduced a small (harmless) bias, improving the test-data performance; specifically, the two-block mode yielded the best test error rate of 2.2% even with this simple committee method.

4 Conclusion and Future Directions

Pearlmutter's method can construct Krylov subspaces efficiently for implementing iterative batch- or block-mode learning. In our simulation examples, the simpler version of Pearlmutter's method (see Algorithms C and D) worked excellently.
But it would be of interest to investigate other real-life large-scale problems to find out the strengths of the full-Hessian based methods (see Algorithm E), perhaps with a more elaborate preconditioner, which would be much more time-consuming per epoch but might reduce the total time dramatically; hence, there is a delicate balancing act to deal with. Besides the simple committee method, it would be worth examining our algorithms for implementing other statistical learning methods (e.g., boosting) in conjunction with appropriate numerical linear algebra techniques. These are part of our overly ambitious goal of attacking practical large-scale problems.

References

[1] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10:251-276, 1998.
[2] A. R. Conn, N. I. M. Gould, and P. L. Toint. Trust-Region Methods. SIAM, 2000.
[3] J. W. Demmel. Applied Numerical Linear Algebra. SIAM, 1997.
[4] J. E. Dennis, D. M. Gay, and R. E. Welsch. An adaptive nonlinear least-squares algorithm. ACM Trans. on Mathematical Software, 7(3):348-368, 1981.
[5] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79-87, 1991.
[6] E. Mizutani and J. W. Demmel. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning. Neural Networks (Elsevier), 16:745-753, 2003.
[7] E. Mizutani and J. W. Demmel. On separable nonlinear least squares algorithms for neuro-fuzzy modular network learning. In Proceedings of the IEEE Int'l Joint Conf. on Neural Networks, Vol. 3, pp. 2399-2404, Honolulu, USA, May 2002. (Available at http://www.cs.berkeley.edu/~eiji/ijcnn02.pdf.)
[8] E. Mizutani and S. E. Dreyfus. On complexity analysis of supervised MLP-learning for algorithmic comparisons. In Proceedings of the INNS-IEEE Int'l Joint Conf. on Neural Networks, Vol.
1, pp. 347-352, Washington, D.C., July 2001.
[9] J. J. Moré and D. C. Sorensen. Computing a trust region step. SIAM J. Sci. Stat. Comput., 4(3):553-572, 1983.
[10] T. Steihaug. The conjugate gradient method and trust regions in large scale optimization. SIAM J. Numer. Anal., 20(3):626-637, 1983.
[11] B. A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147-160, 1994.
[12] H. Schwenk and Y. Bengio. Boosting neural networks. Neural Computation, 12(8):1869-1887, 2000.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001 (corrected printing 2002).
AUC Optimization vs. Error Rate Minimization Corinna Cortes∗ and Mehryar Mohri AT&T Labs - Research 180 Park Avenue, Florham Park, NJ 07932, USA {corinna, mohri}@research.att.com

Abstract

The area under an ROC curve (AUC) is a criterion used in many applications to measure the quality of a classification algorithm. However, the objective function optimized in most of these algorithms is the error rate and not the AUC value. We give a detailed statistical analysis of the relationship between the AUC and the error rate, including the first exact expression of the expected value and the variance of the AUC for a fixed error rate. Our results show that the average AUC is monotonically increasing as a function of the classification accuracy, but that the standard deviation for uneven distributions and higher error rates is noticeable. Thus, algorithms designed to minimize the error rate may not lead to the best possible AUC values. We show that, under certain conditions, the global function optimized by the RankBoost algorithm is exactly the AUC. We report the results of our experiments with RankBoost on several datasets, demonstrating the benefits of an algorithm specifically designed to globally optimize the AUC over other existing algorithms that optimize an approximation of the AUC or only locally optimize the AUC.

1 Motivation

In many applications, the overall classification error rate is not the most pertinent performance measure; criteria such as ordering or ranking seem more appropriate. Consider, for example, the list of relevant documents returned by a search engine for a specific query. That list may contain several thousand documents, but, in practice, only the top fifty or so are examined by the user. Thus, a search engine's ranking of the documents is more critical than the accuracy of its classification of all documents as relevant or not.
More generally, for a binary classifier assigning a real-valued score to each object, a better correlation between output scores and the probability of correct classification is highly desirable. A natural criterion, or summary statistic, often used to measure the ranking quality of a classifier is the area under an ROC curve (AUC) [8].1 However, the objective function optimized by most classification algorithms is the error rate and not the AUC. Recently, several algorithms have been proposed for maximizing the AUC value locally [4] or maximizing some approximations of the global AUC value [9, 15], but, in general, these algorithms do not obtain AUC values significantly better than those obtained by an algorithm designed to minimize the error rate. Thus, it is important to determine the relationship between the AUC values and the error rate.

∗ This author's new address is: Google Labs, 1440 Broadway, New York, NY 10018, corinna@google.com.
1 The AUC value is equivalent to the Wilcoxon-Mann-Whitney statistic [8] and closely related to the Gini index [1]. It has been re-invented under the name of L-measure by [11], as already pointed out by [2], and slightly modified under the name of Linear Ranking by [13, 14].

[Figure 1: An example of ROC curve, plotting the true positive rate against the false positive rate (AUC = 0.718). The line connecting (0, 0) and (1, 1), corresponding to random classification, is drawn for reference. True positive rate = correctly classified positive / total positive; false positive rate = incorrectly classified negative / total negative. The true positive (negative) rate is sometimes referred to as the sensitivity (resp. specificity) in this context.]
In the following sections, we give a detailed statistical analysis of the relationship between the AUC and the error rate, including the first exact expression of the expected value and the variance of the AUC for a fixed error rate.2 We show that, under certain conditions, the global function optimized by the RankBoost algorithm is exactly the AUC. We report the results of our experiments with RankBoost in several datasets and demonstrate the benefits of an algorithm specifically designed to globally optimize the AUC over other existing algorithms optimizing an approximation of the AUC or only locally optimizing the AUC. 2 Definition and properties of the AUC The Receiver Operating Characteristics (ROC) curves were originally developed in signal detection theory [3] in connection with radio signals, and have been used since then in many other applications, in particular for medical decision-making. Over the last few years, they have found increased interest in the machine learning and data mining communities for model evaluation and selection [12, 10, 4, 9, 15, 2]. The ROC curve for a binary classification problem plots the true positive rate as a function of the false positive rate. The points of the curve are obtained by sweeping the classification threshold from the most positive classification value to the most negative. For a fully random classification, the ROC curve is a straight line connecting the origin to (1, 1). Any improvement over random classification results in an ROC curve at least partially above this straight line. Fig. (1) shows an example of ROC curve. The AUC is defined as the area under the ROC curve and is closely related to the ranking quality of the classification as shown more formally by Lemma 1 below. Consider a binary classification task with m positive examples and n negative examples. We will assume that a classifier outputs a strictly ordered list for these examples and will denote by 1X the indicator function of a set X. 
Lemma 1 ([8]) Let $c$ be a fixed classifier. Let $x_1, \ldots, x_m$ be the output of $c$ on the positive examples and $y_1, \ldots, y_n$ its output on the negative examples. Then, the AUC, $A$, associated to $c$ is given by:
$$A = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} 1_{x_i > y_j}}{mn} \qquad (1)$$
that is, the value of the Wilcoxon-Mann-Whitney statistic [8].

Proof. The proof is based on the observation that the AUC value is exactly the probability $P(X > Y)$, where $X$ is the random variable corresponding to the distribution of the outputs for the positive examples and $Y$ the one corresponding to the negative examples [7]. The Wilcoxon-Mann-Whitney statistic is clearly the expression of that probability in the discrete case, which proves the lemma [8].

Thus, the AUC can be viewed as a measure based on pairwise comparisons between classifications of the two classes. With a perfect ranking, all positive examples are ranked higher than the negative ones and $A = 1$. Any deviation from this ranking decreases the AUC.

2 An attempt in that direction was made by [15], but, unfortunately, the authors' analysis and the result are both wrong.

[Figure 2: For a fixed number of errors $k$, there may be $x$, $0 \le x \le k$, false positive examples. The region above the threshold $\theta$ contains $m - (k - x)$ positive and $x$ negative examples; the region below contains $k - x$ positive and $n - x$ negative examples.]

3 The Expected Value of the AUC

In this section, we compute exactly the expected value of the AUC over all classifications with a fixed number of errors, and compare that to the error rate. Different classifiers may have the same error rate but different AUC values. Indeed, for a given classification threshold $\theta$, an arbitrary reordering of the examples with outputs greater than $\theta$ clearly does not affect the error rate but leads to different AUC values. Similarly, one may reorder the examples with output less than $\theta$ without changing the error rate. Assume that the number of errors $k$ is fixed. We wish to compute the average value of the AUC over all classifications with $k$ errors.
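Eq. (1) translates directly into code; the sketch below (an illustrative helper, not from the paper, assuming a strictly ordered list with no tied scores) counts the correctly ordered (positive, negative) pairs:

```python
def auc_wmw(pos_scores, neg_scores):
    """AUC as the Wilcoxon-Mann-Whitney statistic of Eq. (1):
    the fraction of (positive, negative) pairs ranked correctly.
    Assumes no tied scores (a strictly ordered list)."""
    m, n = len(pos_scores), len(neg_scores)
    wins = sum(1 for x in pos_scores for y in neg_scores if x > y)
    return wins / (m * n)
```

For instance, with a perfect ranking every pair counts and the function returns 1; a single inverted pair out of $mn$ lowers the AUC by $1/(mn)$.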
Our model is based on the simple assumption that all classifications or rankings with k errors are equiprobable. One could perhaps argue that errors are not necessarily evenly distributed, e.g., examples with very high or very low ranks are less likely to be errors, but we cannot justify such biases in general.

For a given classification, there may be x, 0 ≤ x ≤ k, false positive examples. Since the number of errors is fixed, there are k − x false negative examples. Figure 2 shows the corresponding configuration. The two regions of examples with classification outputs above and below the threshold are separated by a vertical line. For a given x, the computation of the AUC, A, as given by Eq. (1) can be divided into the following three parts:

$$A = \frac{A_1 + A_2 + A_3}{mn}, \quad \text{with} \qquad (2)$$

A_1 = the sum over all pairs (x_i, y_j) with x_i and y_j in distinct regions;
A_2 = the sum over all pairs (x_i, y_j) with x_i and y_j in the region above the threshold;
A_3 = the sum over all pairs (x_i, y_j) with x_i and y_j in the region below the threshold.

The first term, A_1, is easy to compute. Since there are m − (k − x) positive examples above the threshold and n − x negative examples below the threshold, A_1 is given by:

$$A_1 = (m - (k - x))(n - x) \qquad (3)$$

To compute A_2, we can assign to each negative example above the threshold a position based on its classification rank. Let position one be the first position above the threshold and let α_1 < . . . < α_x denote the positions in increasing order of the x negative examples in the region above the threshold. The total number of examples classified as positive is N = m − (k − x) + x. Thus, by definition of A_2,

$$A_2 = \sum_{i=1}^{x} (N - \alpha_i) - (x - i) \qquad (4)$$

where the first term N − α_i represents the number of examples ranked higher than the i-th example and the second term x − i discounts the number of negative examples incorrectly ranked higher than the i-th example. Similarly, let α′_1 < . . .
< α′_{k−x} denote the positions of the k − x positive examples below the threshold, counting positions in reverse by starting from the threshold. Then, A_3 is given by:

$$A_3 = \sum_{j=1}^{x'} (N' - \alpha'_j) - (x' - j) \qquad (5)$$

with N′ = n − x + (k − x) and x′ = k − x. Combining the expressions of A_1, A_2, and A_3 leads to:

$$A = \frac{A_1 + A_2 + A_3}{mn} = 1 + \frac{(k - 2x)^2 + k}{2mn} - \frac{\sum_{i=1}^{x} \alpha_i + \sum_{j=1}^{x'} \alpha'_j}{mn} \qquad (6)$$

Lemma 2 For a fixed x, the average value of the AUC A is given by:

$$\langle A \rangle_x = 1 - \frac{\frac{x}{n} + \frac{k-x}{m}}{2} \qquad (7)$$

Proof. The proof is based on the computation of the average values of $\sum_{i=1}^{x} \alpha_i$ and $\sum_{j=1}^{x'} \alpha'_j$ for a given x. We start by computing the average value ⟨α_i⟩_x for a given i, 1 ≤ i ≤ x. Consider all the possible positions for α_1 . . . α_{i−1} and α_{i+1} . . . α_x, when the value of α_i is fixed at say α_i = l. We have i ≤ l ≤ N − (x − i) since there need to be at least i − 1 positions before α_i and x − i positions after it. There are l − 1 possible positions for α_1 . . . α_{i−1} and N − l possible positions for α_{i+1} . . . α_x. Since the total number of ways of choosing the x positions for α_1 . . . α_x out of N is $\binom{N}{x}$, the average value ⟨α_i⟩_x is:

$$\langle \alpha_i \rangle_x = \frac{\sum_{l=i}^{N-(x-i)} l \binom{l-1}{i-1} \binom{N-l}{x-i}}{\binom{N}{x}} \qquad (8)$$

Thus,

$$\Big\langle \sum_{i=1}^{x} \alpha_i \Big\rangle_x = \frac{\sum_{i=1}^{x} \sum_{l=i}^{N-(x-i)} l \binom{l-1}{i-1} \binom{N-l}{x-i}}{\binom{N}{x}} = \frac{\sum_{l=1}^{N} l \sum_{i=1}^{x} \binom{l-1}{i-1} \binom{N-l}{x-i}}{\binom{N}{x}} \qquad (9)$$

Using the classical identity $\sum_{p_1+p_2=p} \binom{u}{p_1} \binom{v}{p_2} = \binom{u+v}{p}$, we can write:

$$\Big\langle \sum_{i=1}^{x} \alpha_i \Big\rangle_x = \frac{\sum_{l=1}^{N} l \binom{N-1}{x-1}}{\binom{N}{x}} = \frac{N(N+1)}{2} \frac{\binom{N-1}{x-1}}{\binom{N}{x}} = \frac{x(N+1)}{2} \qquad (10)$$

Similarly, we have:

$$\Big\langle \sum_{j=1}^{x'} \alpha'_j \Big\rangle_x = \frac{x'(N'+1)}{2} \qquad (11)$$

Replacing $\langle \sum_{i=1}^{x} \alpha_i \rangle_x$ and $\langle \sum_{j=1}^{x'} \alpha'_j \rangle_x$ in Eq. (6) by the expressions given by Eq. (10) and Eq. (11) leads to:

$$\langle A \rangle_x = 1 + \frac{(k-2x)^2 + k - x(N+1) - x'(N'+1)}{2mn} = 1 - \frac{\frac{x}{n} + \frac{k-x}{m}}{2} \qquad (12)$$

which ends the proof of the lemma.

Note that Eq. (7) shows that the average AUC value for a given x is simply one minus the average of the accuracy rates for the positive and negative classes.

Proposition 1 Assume that a binary classification task with m positive examples and n negative examples is given.
Then, the expected value of the AUC A over all classifications with k errors is given by:

$$\langle A \rangle = 1 - \frac{k}{m+n} - \frac{(n-m)^2 (m+n+1)}{4mn} \left( \frac{k}{m+n} - \frac{\sum_{x=0}^{k-1} \binom{m+n}{x}}{\sum_{x=0}^{k} \binom{m+n+1}{x}} \right) \qquad (13)$$

Proof. Lemma 2 gives the average value of the AUC for a fixed value of x. To compute the average over all possible values of x, we need to weight the expression of Eq. (7) with the total number of possible classifications for a given x. There are $\binom{N}{x}$ possible ways of choosing the positions of the x misclassified negative examples, and similarly $\binom{N'}{x'}$ possible ways of choosing the positions of the x′ = k − x misclassified positive examples. Thus, in view of Lemma 2, the average AUC is given by:

$$\langle A \rangle = \frac{\sum_{x=0}^{k} \binom{N}{x} \binom{N'}{x'} \left(1 - \frac{\frac{x}{n} + \frac{k-x}{m}}{2}\right)}{\sum_{x=0}^{k} \binom{N}{x} \binom{N'}{x'}} \qquad (14)$$

[Figure 3: Mean (left) and relative standard deviation (right) of the AUC as a function of the error rate. Each curve corresponds to a fixed ratio of r = n/(n + m). The average AUC value monotonically increases with the accuracy. For n = m, as for the top curve in the left plot, the average AUC coincides with the accuracy. The standard deviation decreases with the accuracy, and the lowest curve corresponds to n = m.]

This expression can be simplified into Eq. (13) (footnote 3) using the following novel identities:

$$\sum_{x=0}^{k} \binom{N}{x} \binom{N'}{x'} = \sum_{x=0}^{k} \binom{n+m+1}{x} \qquad (15)$$

$$\sum_{x=0}^{k} x \binom{N}{x} \binom{N'}{x'} = \sum_{x=0}^{k} \frac{(k-x)(m-n)+k}{2} \binom{n+m+1}{x} \qquad (16)$$

that we obtained by using Zeilberger's algorithm (footnote 4) and numerous combinatorial 'tricks'.

From the expression of Eq. (13), it is clear that the average AUC value is identical to the accuracy of the classifier only for even distributions (n = m). For n ≠ m, the expected value of the AUC is a monotonic function of the accuracy, see Fig. 3 (left).
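As a numerical check of Proposition 1, the sketch below (our own code, not from the paper) evaluates ⟨A⟩ both by the configuration-weighted sum of Eq. (14) and by the closed form of Eq. (13), and verifies the position-sum average of Eq. (10) by exhaustive enumeration. Recall that N = m − (k − x) + x and N′ = n − x + (k − x) both depend on x.

```python
from itertools import combinations
from math import comb

def mean_position_sum(N, x):
    """Average of sum(alpha_i) over all x-subsets of {1..N}; Eq. (10) gives x(N+1)/2."""
    subsets = list(combinations(range(1, N + 1), x))
    return sum(map(sum, subsets)) / len(subsets)

def expected_auc_direct(m, n, k):
    """<A> via the configuration-weighted sum of Eq. (14)."""
    num = den = 0.0
    for x in range(max(0, k - m), min(k, n) + 1):  # x values with valid configurations
        N, Np = m - (k - x) + x, n - x + (k - x)
        w = comb(N, x) * comb(Np, k - x)
        num += w * (1 - (x / n + (k - x) / m) / 2)
        den += w
    return num / den

def expected_auc_closed(m, n, k):
    """<A> via the closed form of Eq. (13)."""
    top = sum(comb(m + n, x) for x in range(k))          # x = 0 .. k-1
    bot = sum(comb(m + n + 1, x) for x in range(k + 1))  # x = 0 .. k
    return (1 - k / (m + n)
            - (n - m) ** 2 * (m + n + 1) / (4 * m * n)
              * (k / (m + n) - top / bot))
```

For n = m the second term of Eq. (13) vanishes and the closed form reduces to 1 − k/(m + n), the accuracy, as noted in the text.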
For a fixed ratio n/(n + m), the curves are obtained by increasing the accuracy from n/(n + m) to 1. The average AUC varies monotonically in the range of accuracy between 0.5 and 1.0. In other words, on average, there seems to be nothing to gain in designing specific learning algorithms for maximizing the AUC: a classification algorithm minimizing the error rate also optimizes the AUC. However, this only holds for the average AUC. Indeed, we will show in the next section that the variance of the AUC value is not null for any ratio n/(n + m) when k ≠ 0.

4 The Variance of the AUC

Let $D = mn + \frac{(k-2x)^2 + k}{2}$, $a = \sum_{i=1}^{x} \alpha_i$, $a' = \sum_{j=1}^{x'} \alpha'_j$, and α = a + a′. Then, by Eq. (6), mnA = D − α. Thus, the variance of the AUC, σ²(A), is given by:

$$(mn)^2 \sigma^2(A) = \langle (D - \alpha)^2 \rangle - (\langle D \rangle - \langle \alpha \rangle)^2 \qquad (17)$$
$$= \langle D^2 \rangle - \langle D \rangle^2 + \langle \alpha^2 \rangle - \langle \alpha \rangle^2 - 2(\langle \alpha D \rangle - \langle \alpha \rangle \langle D \rangle)$$

As before, to compute the average of a term X over all classifications, we can first determine its average ⟨X⟩_x for a fixed x, and then use the function F defined by:

$$F(Y) = \frac{\sum_{x=0}^{k} \binom{N}{x} \binom{N'}{x'} Y}{\sum_{x=0}^{k} \binom{N}{x} \binom{N'}{x'}} \qquad (18)$$

and ⟨X⟩ = F(⟨X⟩_x). A crucial step in computing the exact value of the variance of the AUC is to determine the value of the terms of the type $\langle a^2 \rangle_x = \langle (\sum_{i=1}^{x} \alpha_i)^2 \rangle_x$.

[Footnote 3: An essential difference between Eq. (14) and the expression given by [15] is the weighting by the number of configurations. The authors' analysis leads them to the conclusion that the average AUC is identical to the accuracy for all ratios n/(n + m), which is false.]

[Footnote 4: We thank Neil Sloane for having pointed us to Zeilberger's algorithm and Maple package.]

Lemma 3 For a fixed x, the average of $(\sum_{i=1}^{x} \alpha_i)^2$ is given by:

$$\langle a^2 \rangle_x = \frac{x(N+1)}{12} (3Nx + 2x + N) \qquad (19)$$

Proof.
By definition of a, ⟨a²⟩_x = b + 2c with:

$$b = \Big\langle \sum_{i=1}^{x} \alpha_i^2 \Big\rangle_x \qquad c = \Big\langle \sum_{1 \le i < j \le x} \alpha_i \alpha_j \Big\rangle_x \qquad (20)$$

Reasoning as in the proof of Lemma 2, we can obtain:

$$b = \frac{\sum_{i=1}^{x} \sum_{l=i}^{N-(x-i)} l^2 \binom{l-1}{i-1} \binom{N-l}{x-i}}{\binom{N}{x}} = \frac{\sum_{l=1}^{N} l^2 \binom{N-1}{x-1}}{\binom{N}{x}} = \frac{(N+1)(2N+1)x}{6} \qquad (21)$$

To compute c, we start by computing the average value ⟨α_i α_j⟩_x for a given pair (i, j) with i < j. As in the proof of Lemma 2, consider all the possible positions of α_1 . . . α_{i−1}, α_{i+1} . . . α_{j−1}, and α_{j+1} . . . α_x when α_i is fixed at α_i = l, and α_j is fixed at α_j = l′. There are l − 1 possible positions for α_1 . . . α_{i−1}, l′ − l − 1 possible positions for α_{i+1} . . . α_{j−1}, and N − l′ possible positions for α_{j+1} . . . α_x. Thus, we have:

$$\langle \alpha_i \alpha_j \rangle_x = \frac{\sum_{i \le l < l' \le N-(x-j)} l\, l' \binom{l-1}{i-1} \binom{l'-l-1}{j-i-1} \binom{N-l'}{x-j}}{\binom{N}{x}} \qquad (22)$$

and

$$c = \frac{\sum_{l < l'} l\, l' \sum_{m_1+m_2+m_3 = x-2} \binom{l-1}{m_1} \binom{l'-l-1}{m_2} \binom{N-l'}{m_3}}{\binom{N}{x}} \qquad (23)$$

Using the identity $\sum_{m_1+m_2+m_3 = x-2} \binom{l-1}{m_1} \binom{l'-l-1}{m_2} \binom{N-l'}{m_3} = \binom{N-2}{x-2}$, we obtain:

$$c = \frac{(N+1)(3N+2)x(x-1)}{24} \qquad (24)$$

Combining Eq. (21) and Eq. (24) leads to Eq. (19).

Proposition 2 Assume that a binary classification task with m positive examples and n negative examples is given. Then, the variance of the AUC A over all classifications with k errors is given by:

$$\sigma^2(A) = F\left(\Big(1 - \tfrac{\frac{x}{n} + \frac{k-x}{m}}{2}\Big)^2\right) - F\left(1 - \tfrac{\frac{x}{n} + \frac{k-x}{m}}{2}\right)^2 \qquad (25)$$
$$+\, F\left(\frac{mx^2 + n(k-x)^2 + (m(m+1)x + n(n+1)(k-x)) - 2x(k-x)(m+n+1)}{12 m^2 n^2}\right)$$

Proof. Eq. (17) can be developed and expressed in terms of F, D, a, and a′:

$$(mn)^2 \sigma^2(A) = F\big([D - \langle a + a' \rangle_x]^2\big) - F\big(D - \langle a + a' \rangle_x\big)^2 + F\big(\langle a^2 \rangle_x - \langle a \rangle_x^2\big) + F\big(\langle a'^2 \rangle_x - \langle a' \rangle_x^2\big) \qquad (26)$$

The expressions for ⟨a⟩_x and ⟨a′⟩_x were given in the proof of Lemma 2, and that of ⟨a²⟩_x by Lemma 3. The following formula can be obtained in a similar way: $\langle a'^2 \rangle_x = \frac{x'(N'+1)}{12}(3N'x' + 2x' + N')$. Replacing these expressions in Eq. (26) and further simplifications give exactly Eq. (25) and prove the proposition.

The expression of the variance is illustrated by Fig.
3 (right), which shows the value of one standard deviation of the AUC divided by the corresponding mean value of the AUC. This figure is parallel to the one showing the mean of the AUC (Fig. 3 (left)). Each line is obtained by fixing the ratio n/(n + m) and varying the number of errors from 1 to the size of the smallest class. The more uneven class distributions have the highest variance, and the variance increases with the number of errors. These observations contradict the inexact claim of [15] that the variance is zero for all error rates with even distributions n = m. In Fig. 3 (right), the even distribution n = m corresponds to the lowest dashed line.

Table 1: Accuracy and AUC values for several datasets from the UC Irvine repository. The values for RankBoost are obtained by 10-fold cross-validation. The values for AUCsplit are from [4].

                                       AUCsplit [4]                 RankBoost
Dataset       Size  # of   n/(n+m)    Accuracy (%)  AUC (%)        Accuracy (%)  AUC (%)
                    Attr.  (%)
Breast-Wpbc   194   33     23.7       69.5 ± 10.6   59.3 ± 16.2    65.5 ± 13.8   80.4 ± 8.0
Credit        653   15     45.3       -             -              81.0 ± 7.4    94.5 ± 2.9
Ionosphere    351   34     35.9       89.6 ± 5.0    89.7 ± 6.7     83.6 ± 10.9   98.0 ± 3.3
Pima          768   8      34.9       72.5 ± 5.1    76.7 ± 6.0     69.7 ± 7.6    84.8 ± 6.5
SPECTF        269   43     20.4       -             -              67.3          93.4
Page-blocks   5473  10     10.2       96.8 ± 0.2    95.1 ± 6.9     92.0 ± 2.5    98.5 ± 1.5
Yeast (CYT)   1484  8      31.2       71.1 ± 3.6    73.3 ± 4.0     45.3 ± 3.8    78.5 ± 3.0

5 Experimental Results

Proposition 2 above demonstrates that, for uneven distributions, classifiers with the same fixed (low) accuracy exhibit noticeably different AUC values. This motivates the use of algorithms directly optimizing the AUC rather than doing so indirectly via minimizing the error rate. Under certain conditions, RankBoost [5] can be viewed exactly as an algorithm optimizing the AUC. In this section, we make the connection between RankBoost and AUC optimization, and compare the performance of RankBoost to two recent algorithms proposed for optimizing an approximation of the AUC [15] or locally optimizing the AUC [4].
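The exact variance of Proposition 2 can be validated against brute force. The sketch below (our own code with hypothetical names, not from the paper) enumerates every equiprobable configuration of error positions, evaluates A with Eq. (6), and compares the resulting variance with the closed form of Eq. (25).

```python
from itertools import combinations
from math import comb

def auc_moments_bruteforce(m, n, k):
    """Exact mean and variance of A over all equiprobable classifications with
    k errors: enumerate the positions alpha, alpha' and apply Eq. (6)."""
    values = []
    for x in range(max(0, k - m), min(k, n) + 1):
        N, Np = m - (k - x) + x, n - x + (k - x)  # sizes of the two regions
        for alphas in combinations(range(1, N + 1), x):
            for alphas_p in combinations(range(1, Np + 1), k - x):
                values.append(1 + ((k - 2 * x) ** 2 + k) / (2 * m * n)
                              - (sum(alphas) + sum(alphas_p)) / (m * n))
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var

def auc_variance_closed(m, n, k):
    """Variance of A via Proposition 2, Eq. (25)."""
    def F(g):  # configuration-weighted average of Eq. (18)
        num = den = 0.0
        for x in range(max(0, k - m), min(k, n) + 1):
            w = comb(m - (k - x) + x, x) * comb(n - x + (k - x), k - x)
            num += w * g(x)
            den += w
        return num / den
    mean = lambda x: 1 - (x / n + (k - x) / m) / 2
    h = lambda x: (m * x ** 2 + n * (k - x) ** 2
                   + m * (m + 1) * x + n * (n + 1) * (k - x)
                   - 2 * x * (k - x) * (m + n + 1)) / (12 * m ** 2 * n ** 2)
    return F(lambda x: mean(x) ** 2) - F(mean) ** 2 + F(h)
```

The enumeration is exponential in k and only feasible for tiny cases, but on those it matches Eq. (25) to machine precision.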
The objective of RankBoost is to produce a ranking that minimizes the number of incorrectly ordered pairs of examples, possibly with different costs assigned to the mis-rankings. When the examples to be ranked are simply two disjoint sets, the objective function minimized by RankBoost is

$$rloss = \sum_{i=1}^{m} \sum_{j=1}^{n} \frac{1}{m} \frac{1}{n} 1_{x_i \le y_j} \qquad (27)$$

which is exactly one minus the Wilcoxon-Mann-Whitney statistic. Thus, by Lemma 1, the objective function maximized by RankBoost coincides with the AUC.

RankBoost's optimization is based on combining a number of weak rankings. For our experiments, we chose as weak rankings threshold rankers with the range {0, 1}, similar to the boosted stumps often used by AdaBoost [6]. We used the so-called Third Method of RankBoost for selecting the best weak ranker. According to this method, at each step, the weak threshold ranker is selected so as to maximize the AUC of the weighted distribution. Thus, with this method, the global objective of obtaining the best AUC is pursued by selecting the weak ranking with the best AUC at each step.

Furthermore, the RankBoost algorithm maintains a perfect 50-50% distribution of the weights on the positive and negative examples. By Proposition 1, for even distributions, the mean of the AUC is identical to the classification accuracy. For threshold rankers such as step functions, or stumps, there is no variance of the AUC, so the mean of the AUC equals the observed AUC. That is, instead of viewing RankBoost as selecting the weak ranker with the best weighted AUC value, one can view it as selecting the weak ranker with the lowest weighted error rate. This is similar to the choice of the best weak learner for boosted stumps in AdaBoost. So, for stumps, AdaBoost and RankBoost differ only in the updating scheme of the weights: RankBoost updates the positive examples differently from the negative ones, while AdaBoost uses one common scheme for the two groups.
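A minimal illustration of Eq. (27) (our own sketch, assuming no ties between positive and negative outputs so that rloss and the AUC are exact complements):

```python
def rank_loss(pos_scores, neg_scores):
    """rloss of Eq. (27): the average indicator of mis-ordered (pos, neg) pairs."""
    m, n = len(pos_scores), len(neg_scores)
    return sum(1 for x in pos_scores for y in neg_scores if x <= y) / (m * n)

def wmw_statistic(pos_scores, neg_scores):
    """The Wilcoxon-Mann-Whitney statistic of Eq. (1), i.e. the AUC."""
    m, n = len(pos_scores), len(neg_scores)
    return sum(1 for x in pos_scores for y in neg_scores if x > y) / (m * n)
```

With tie-free scores the two quantities sum to one, which is the sense in which minimizing rloss maximizes the AUC.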
Our experimental results corroborate the observation that RankBoost is an algorithm optimizing the AUC. RankBoost based on boosted stumps obtains AUC values that are substantially better than those reported in the literature for algorithms designed to locally or approximately optimize the AUC. Table 1 compares the results of RankBoost on a number of datasets from the UC Irvine repository to the results reported by [4]. The results for RankBoost are obtained by 10-fold cross-validation. For RankBoost, the accuracy and the best AUC values reported on each line of the table correspond to the same boosting step. RankBoost consistently outperforms AUCsplit in a comparison based on AUC values, even for datasets such as Breast-Wpbc and Pima where the two algorithms obtain similar accuracies.

The table also lists results for the UC Irvine Credit Approval and SPECTF heart datasets, for which the authors of [15] report results corresponding to their AUC optimization algorithms. The AUC values reported by [15] are no better than 92.5% for the Credit Approval dataset and only 87.5% for the SPECTF dataset, both substantially lower. From the table, it is also clear that RankBoost is not an error rate minimization algorithm. The accuracy for the Yeast (CYT) dataset is as low as 45%.

6 Conclusion

A statistical analysis of the relationship between the AUC value and the error rate was given, including the first exact expression of the expected value and standard deviation of the AUC for a fixed error rate. The results offer a better understanding of the effect on the AUC value of algorithms designed for error rate minimization. For uneven distributions and relatively high error rates, the standard deviation of the AUC suggests that algorithms designed to optimize the AUC value may lead to substantially better AUC values. Our experimental results using RankBoost corroborate this claim.
In separate experiments we have observed that AdaBoost achieves significantly better error rates than RankBoost (as expected) but that it also leads to AUC values close to those achieved by RankBoost. It is a topic for further study to explain and understand this property of AdaBoost. A partial explanation could be that, just like RankBoost, AdaBoost maintains at each boosting round an equal distribution of the weights for positive and negative examples.

References

[1] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth International, Belmont, CA, 1984.
[2] J.-H. Chauchat, R. Rakotomalala, M. Carloz, and C. Pelletier. Targeting customer groups using gain and cost matrix: a marketing application. Technical report, ERIC Laboratory, University of Lyon 2, 2001.
[3] J. P. Egan. Signal Detection Theory and ROC Analysis. Academic Press, 1975.
[4] C. Ferri, P. Flach, and J. Hernández-Orallo. Learning decision trees using the area under the ROC curve. In ICML-2002. Morgan Kaufmann, 2002.
[5] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. In ICML-98. Morgan Kaufmann, San Francisco, US, 1998.
[6] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Proceedings of the Second European Conference on Computational Learning Theory, volume 2, 1995.
[7] D. M. Green and J. A. Swets. Signal Detection Theory and Psychophysics. Wiley, New York, 1966.
[8] J. A. Hanley and B. J. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 1982.
[9] M. C. Mozer, R. Dodier, M. D. Colagrosso, C. Guerra-Salcedo, and R. Wolniewicz. Prodding the ROC curve. In NIPS-2002. MIT Press, 2002.
[10] C. Perlich, F. Provost, and J. Simonoff. Tree induction vs. logistic regression: A learning curve analysis. Journal of Machine Learning Research, 2003.
[11] G.
Piatetsky-Shapiro and S. Steingold. Measuring lift quality in database marketing. In SIGKDD Explorations. ACM SIGKDD, 2000.
[12] F. Provost and T. Fawcett. Analysis and visualization of classifier performance: Comparison under imprecise class and cost distributions. In KDD-97. AAAI, 1997.
[13] S. Rosset. Ranking methods for flexible evaluation and efficient comparison of 2-class models. Master's thesis, Tel-Aviv University, 1999.
[14] S. Rosset, E. Neumann, U. Eick, N. Vatnik, and I. Idan. Evaluation of prediction models for marketing campaigns. In KDD-2001. ACM Press, 2001.
[15] L. Yan, R. Dodier, M. C. Mozer, and R. Wolniewicz. Optimizing classifier performance via the Wilcoxon-Mann-Whitney statistics. In ICML-2003, 2003.
Semi-Supervised Learning with Trees

Charles Kemp, Thomas L. Griffiths, Sean Stromsten & Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139
{ckemp,gruffydd,sean_s,jbt}@mit.edu

Abstract

We describe a nonparametric Bayesian approach to generalizing from few labeled examples, guided by a larger set of unlabeled objects and the assumption of a latent tree-structure to the domain. The tree (or a distribution over trees) may be inferred using the unlabeled data. A prior over concepts generated by a mutation process on the inferred tree(s) allows efficient computation of the optimal Bayesian classification function from the labeled examples. We test our approach on eight real-world datasets.

1 Introduction

People have remarkable abilities to learn concepts from very limited data, often just one or a few labeled examples per class. Algorithms for semi-supervised learning try to match this ability by extracting strong inductive biases from a much larger sample of unlabeled data. A general strategy is to assume some latent structure T that underlies both the label vector Y to be learned and the observed features X of the full data (unlabeled and labeled; see Figure 1). The unlabeled data can be used to help identify the latent structure T, and an assumption that Y is somehow "smooth" with respect to T (or, in Bayesian terms, can be assigned a strong prior conditional on T) provides the inductive bias needed to estimate Y successfully from very few labeled examples Yobs.

Different existing approaches can be understood within this framework. The closest to our current work is [1] and its cousins [2-5]. The structure T is assumed to be a low-dimensional manifold, whose topology is approximated by a sparse neighborhood graph defined over the data points (based on Euclidean distance between feature vectors in the X matrix).
The label vector Y is assumed to be smooth with respect to T; [1] implements this smoothness assumption by defining a Gaussian field over all complete labelings Y of the neighborhood graph that expects neighbors to have the same label. This approach performs well in classifying data with a natural manifold structure, e.g., handwritten digits.

The graphical model in Figure 1 suggests a more general strategy for exploiting other kinds of latent structure T, not just low-dimensional manifolds. In particular, trees arise prominently in both natural and human-generated domains (e.g., in biology, language and information retrieval). Here we describe an approach to semi-supervised learning based on mapping the data onto the leaf nodes of a rooted (and typically ultrametric) tree T. The label vector Y is generated from a stochastic mutation process operating over branches of T. Tree T can be inferred from unlabeled data using either bottom-up methods (agglomerative clustering) or more complex probabilistic methods.

[Figure 1: A general approach to semi-supervised learning. X is an observed object-feature matrix, Y the hidden vector of true labels for these objects, and Yobs a sparse vector of observed labels. The unlabeled data in X assist in inferring Y by allowing us to infer some latent structure T that is assumed to generate both X and Y.]

The mutation process defines a prior over all possible labelings of the unlabeled data, favoring those that maximize a tree-specific notion of "smoothness". Figure 2 illustrates this Tree-Based Bayes (TBB) approach. Each of the 32 objects in this dataset has two continuous features (x and y coordinates); X is a 32-by-2 matrix. Yobs contains four entries, two positive and two negative. The shading in part (b) represents a probabilistic inference about Y: the darker an object's node in the tree, the more likely that its label is positive.
TBB classifies unlabeled data by integrating over all possible labelings of the domain that are consistent with the observed labels Yobs, and is thus an instance of optimal Bayesian concept learning [6]. Typically, optimal Bayes is of theoretical interest only [7], because the sum over labelings is in general intractable and it is difficult to specify sufficiently powerful and noise-resistant priors for real-world domains. Here, a prior defined in terms of a tree-based mutation process makes the approach efficient and empirically successful. The next section describes TBB, as well as a simple heuristic method, Tree Nearest Neighbor (TNN), which we show approximates TBB in the limit of high mutation rate. Section 3 presents experimental comparisons with other approaches on a range of datasets.

[Figure 2: Illustration of the Tree-Based Bayesian approach to semi-supervised learning. (a) We observe a set of unlabeled objects (small points) with some latent hierarchical structure (gray ellipses) along with two positive and two negative examples of a new concept (black and white circles). (b) Inferring the latent tree, and treating the concept as generated from a mutation process on the tree, we can probabilistically classify the unlabeled objects.]

2 Tree-Based Bayes (TBB)

We assume a binary classification problem with Y ∈ {−1, 1}^n. We choose a label y_i for an unlabeled object x_i by computing p(y_i = 1 | Yobs, X) and thresholding at 0.5. Generalization to the multi-class case will be straightforward. Ideally we would sum over all possible latent trees T:

$$p(y_i = 1 \mid Y_{obs}, X) = \sum_{T} p(y_i = 1 \mid Y_{obs}, T)\, p(T \mid Y_{obs}, X) \qquad (1)$$

First we consider p(y_i = 1 | Yobs, T) and the classification of object x_i given a particular tree T. Section 2.2 discusses p(T | Yobs, X), the inference of tree T, and approaches to approximating the sum over trees in Equation 1.
We predict object x_i's label by summing over all possible complete labelings Y of the data:

$$p(y_i = 1 \mid Y_{obs}, T) = \sum_{Y} p(y_i = 1 \mid Y)\, p(Y \mid Y_{obs}, T) \qquad (2)$$
$$= \sum_{Y} p(y_i = 1 \mid Y)\, \frac{p(Y_{obs} \mid Y, T)\, p(Y \mid T)}{p(Y_{obs} \mid T)} \qquad (3)$$
$$= \frac{\sum_{Y} p(y_i = 1 \mid Y)\, p(Y_{obs} \mid Y)\, p(Y \mid T)}{\sum_{Y} p(Y_{obs} \mid Y)\, p(Y \mid T)} \qquad (4)$$

In general, the likelihood p(Yobs | Y) depends on assumptions about sampling and noise. Typical simplifying assumptions are that the labeled objects were chosen randomly from all objects in the domain, and that all observations are free of noise. Then p(Yobs | Y) ∝ 1 if Yobs is consistent with Y and is zero otherwise. Under these assumptions, Equation 4 becomes:

$$p(y_i = 1 \mid Y_{obs}, T) = \frac{\sum_{Y \,\text{consistent with}\, Y_{obs}:\, y_i = 1}\; p(Y \mid T)}{\sum_{Y \,\text{consistent with}\, Y_{obs}}\; p(Y \mid T)} \qquad (5)$$

The probability that y_i = 1 reduces to the weighted fraction of label vectors consistent with Yobs that set y_i = 1, with each label vector weighted by its prior under the tree, p(Y | T).

When class frequencies are unbalanced, small training sets provide little scope for learning if constructed using random sampling. Consider the problem of identifying genetic markers for a disease that afflicts one person in 10,000. A training set for this problem might be constructed by "retrospective sampling," e.g., taking data from 20 patients with the disease and 20 healthy subjects. Randomly sampling subjects from the entire population would mean that even a medium-sized training set would have little chance of including anyone with the disease. Retrospective sampling can be modeled by specifying a more complex likelihood p(Yobs | Y). The likelihood can also be modified to handle additional complexities, such as learning from labeled examples of just a single class, or learning in the presence of label noise. We consider none of these complexities here. Our experiments explore both random and retrospective sampling, but the algorithm we implement is strictly correct only for noise-free learning under random sampling.
2.1 Bayesian classification with a mutation model

In many tree-structured domains it is natural to think of features arising from a history of stochastic events or mutations. We develop a mutation model that induces a sensible "smoothness" prior p(Y | T) and enables efficient computation of Equation 5 via belief propagation on a Bayes net. The model combines aspects of several previous proposals for probabilistic learning with trees [8, 9, 10].

Let L be a feature corresponding to the class label. Suppose that L is defined at every point along every branch, not just at the leaf nodes where the data points lie. Imagine L spreading out over the tree from root to leaves: it starts out at the root with some value and could switch values at any point along any branch. Whenever a branch splits, both lower branches inherit the value of L at the point immediately before the split. Transitions between states of L are modeled using a continuous-time Markov chain with infinitesimal matrix:

$$Q = \begin{pmatrix} -\lambda & \lambda \\ \lambda & -\lambda \end{pmatrix}$$

The free parameter, λ, will be called the mutation rate. Note that the mutation process is symmetric: mutations from −1 to 1 are just as likely as mutations in the other direction. Other models of mutation could be substituted if desired. Generalization to the k-class case is achieved by specifying a k by k matrix Q, with −λ on the diagonal and λ/(k−1) on the off-diagonal. Transition probabilities along a branch of length t are given by:

$$e^{Qt} = \begin{pmatrix} \frac{1 + e^{-2\lambda t}}{2} & \frac{1 - e^{-2\lambda t}}{2} \\ \frac{1 - e^{-2\lambda t}}{2} & \frac{1 + e^{-2\lambda t}}{2} \end{pmatrix} \qquad (6)$$

That is, the probability that a parent and child separated by a branch of length t have different values of L is $\frac{1 - e^{-2\lambda t}}{2}$.

This mutation process induces a prior p(Y | T) equal to the probability of generating the label vector Y over the leaves of T under the mutation process. The resulting distribution favors labelings that are "smooth" with respect to T. Regardless of λ, it is always more likely for L to stay the same than to switch its value along a branch.
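The closed form of Eq. (6) can be confirmed numerically; the sketch below (our own code, not from the paper) compares it against a truncated Taylor series for the matrix exponential, computed with plain 2x2 matrix arithmetic.

```python
import math

def mutation_transition(lam, t):
    """Closed-form e^{Qt} of Eq. (6) for the symmetric 2-state mutation chain."""
    stay = (1 + math.exp(-2 * lam * t)) / 2
    flip = (1 - math.exp(-2 * lam * t)) / 2
    return [[stay, flip], [flip, stay]]

def matrix_exp_series(M, terms=60):
    """e^M for a 2x2 matrix via a truncated Taylor series, as an independent check."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity = M^0 / 0!
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = [[sum(power[i][r] * M[r][j] for r in range(2)) for j in range(2)]
                 for i in range(2)]    # power = M^k
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result
```

Each row of the transition matrix sums to one, and as t grows both rows approach the uniform stationary distribution (0.5, 0.5).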
Thus labelings that do not require very many mutations are preferred, and the two hypotheses that assign the same label to all leaf nodes receive the most weight. Because mutations are more likely to occur along longer branches, the prior also favors hypotheses in which label changes occur between clusters (where branches tend to be longer) rather than within clusters (where branches tend to be shorter).

The independence assumptions implicit in the mutation model allow the right side of Equation 5 to be computed efficiently. Inspired by [9], we set up a Bayes net with the same topology as T that captures the joint probability distribution over all nodes. We associate with each branch a conditional probability table that specifies the value of the child conditioned on the value of the parent (based on Equation 6), and set the prior probabilities at the root node to the uniform distribution (the stationary distribution of the Markov chain specified by Q). Evaluating Equation 5 now reduces to a standard problem of inference in a Bayes net: we clamp the nodes in Yobs to their observed values, and compute the posterior marginal probability at node y_i. The tree structure makes this computation efficient and allows specially tuned inference algorithms, as in [9].

2.2 A distribution over trees

We now consider p(T | Yobs, X), the second component of Equation 1. Using Bayes' theorem:

$$p(T \mid Y_{obs}, X) \propto p(Y_{obs}, X \mid T)\, p(T) \qquad (7)$$

We assume that each discrete feature in X is generated independently over T according to the mutation model just outlined. Continuous features can be handled by an analogous stochastic diffusion process in a continuous space (see for example [11]). Because the features are conditionally independent of each other and of Yobs given the tree, p(Yobs, X | T) can be computed using the methods of the previous section. To finish the theoretical development of the model it remains only to specify p(T), a prior over tree structures.
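On trees small enough to enumerate, the Bayes-net marginal of Equation 5 can be computed by brute force, which makes the smoothness behavior easy to inspect. The following sketch (our own illustration; the tree, node names, and function names are ours, and real implementations would use belief propagation instead of enumeration) sums the mutation prior over all joint states of a tiny balanced tree with unit branch lengths.

```python
import math
from itertools import product

def flip_prob(lam, t):
    """Probability that parent and child differ across a branch of length t, Eq. (6)."""
    return (1 - math.exp(-2 * lam * t)) / 2

def posterior(tree, lengths, lam, observed, query):
    """p(query = +1 | observed) by brute-force enumeration of all node states.
    tree: dict child -> parent ('root' has no entry); lengths: child -> branch length."""
    nodes = ['root'] + list(tree)
    num = den = 0.0
    for states in product([-1, 1], repeat=len(nodes)):
        s = dict(zip(nodes, states))
        if any(s[leaf] != val for leaf, val in observed.items()):
            continue  # inconsistent with the clamped observations
        p = 0.5       # uniform prior at the root
        for child, parent in tree.items():
            f = flip_prob(lam, lengths[child])
            p *= f if s[child] != s[parent] else 1 - f
        den += p
        if s[query] == 1:
            num += p
    return num / den

# A tiny balanced tree: root -> u, v; u -> l1, l2; v -> l3, l4 (all branch lengths 1).
tree = {'u': 'root', 'v': 'root', 'l1': 'u', 'l2': 'u', 'l3': 'v', 'l4': 'v'}
lengths = {c: 1.0 for c in tree}
```

With a single positive observation at leaf l1, the sibling l2 receives a higher posterior probability of being positive than the distant leaf l3, and both exceed 0.5, exactly the tree-smoothness the prior is meant to encode.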
Section 3.2 uses a uniform prior, but a Dirichlet Diffusion Tree prior is another option [11].

2.3 Approximating the sum over trees

The sum over trees in Equation 1 is intractable for datasets of even moderate size. We therefore consider two approximations. Markov Chain Monte Carlo (MCMC) techniques have been used to approximate similar sums over trees in Bayesian phylogenetics [12], and Section 3.2 applies these ideas to a small-scale example. Although theoretically attractive, MCMC approaches are still expensive to use with large datasets. Section 3.1 follows a simpler approach: we assume that most of the probability p(T | Yobs, X) is concentrated on or near the most probable tree T* and approximate Equation 1 as p(y_i = 1 | Yobs, T*). The tree T* can be estimated using more or less sophisticated means. In Section 3.1 we use a greedy method: average-link agglomerative clustering on the object-feature matrix X, using Hamming or Euclidean distance in discrete or continuous domains, respectively. In Section 3.2 we compare this greedy method to the best tree found in our MCMC runs. Note that we ignore Yobs when building T*, because we run many trials on each dataset and do not want to compute a new tree for each value of Yobs. Since our data include many features and few labeled objects, the contribution of Yobs is likely to be negligible.

2.4 Tree Nearest Neighbor (TNN)

A Bayesian formulation based on the mutation process provides a principled approach to learning with trees, but there are simpler algorithms that instantiate similar intuitions. For instance, we could build a one-nearest-neighbor classifier using the metric of distance in the tree T (with ties resolved randomly). It is clear how this Tree Nearest Neighbor (TNN) algorithm reflects the assumption that nearby leaves in T are likely to have the same label, but it is not necessarily clear when and why this simple approach should work well.
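The greedy average-link construction of T* described in Section 2.3 can be sketched in a few lines (our own illustrative code, not the authors' implementation; it returns the merge structure as nested tuples of leaf indices rather than an ultrametric tree with branch lengths):

```python
import math

def average_link_tree(points, dist):
    """Greedy average-link agglomerative clustering over a list of points.
    Repeatedly merges the two clusters with the smallest average pairwise distance."""
    clusters = [(i,) for i in range(len(points))]  # each cluster = tuple of leaf indices
    trees = list(range(len(points)))               # parallel list of subtree structures
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = sum(dist(points[i], points[j])
                        for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])   # average linkage
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a], trees[a] = clusters[a] + clusters[b], (trees[a], trees[b])
        del clusters[b], trees[b]
    return trees[0]
```

On two well-separated pairs of points, the first two merges join the within-pair neighbors, so the returned structure mirrors the obvious two-cluster hierarchy.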
An analysis of Tree-Based Bayes provides some insight here: TBB and TNN become equivalent when the λ parameter of TBB is set sufficiently high.

Theorem 1 For each ultrametric tree T, there is a λ0 such that TNN and TBB produce identical classifications for all examples with a unique nearest neighbor when λ > λ0.

A proof is available at http://www.mit.edu/~ckemp/papers/treesslproof.pdf, but we give some intuition for the result here. Consider the Bayes net described in Section 2.1 and suppose x_i is an unlabeled object. The value chosen for y_i will depend on all the labels in Yobs, but the influence of any single label decreases with distance in the tree from y_i. Once λ becomes sufficiently high it can be shown that y_i is always determined uniquely by the closest labeled example in the tree.

Given this equivalence between the algorithms, TNN is the method of choice when a high mutation rate is indicated. It is not only faster, but numerically more stable. For large values of λ, the probabilities manipulated by TBB become very close to 0.5 and variables that should be different may become indistinguishable within the limits of computational precision. Our implementation of TBB therefore uses TNN when cross-validation indicates that a sufficiently high value of λ is required.

3 Experiments

3.1 Trees versus Manifolds

We compared TBB and TNN with the Laplacian method of Belkin and Niyogi [4], an approach that effectively assumes a latent manifold structure T. We also ran generic one-nearest-neighbor (NN) as a baseline. The best performing method on a given dataset should be the algorithm that assumes the right latent structure for that domain.
We therefore tested the algorithms on several different types of data: four taxonomic datasets (Beetles, Crustaceans, Salamanders and Worms, with 192, 56, 30 and 286 objects respectively), two molecular biology sets (Gene Promoter and Gene Splice, with sizes 106 and 3190), and two “manifold” sets (Digits and Vowels, with sizes 10,000 and 990). The taxonomic datasets were expected to have a tree-like structure. Each set describes the external anatomy of a group of species, based on data available at http://biodiversity.uno.edu/delta/. One feature in the Beetles set, for example, indicates whether a beetle’s body is “strongly flattened, slightly flattened to moderately convex, or strongly convex.” Since these taxonomic sets do not include class labels, we chose features at random to stand in for the class label. We averaged across five such choices for each dataset. The molecular biology sets were taken from the UCI repository. The objects in both sets are strings of DNA, and tree structures might also be appropriate here since these strings arose through evolution. The manifold sets arose from human motor behaviors, and were therefore expected to have a low-dimensional manifold structure. The Digits data are a subset of the MNIST data, and the Vowels data are taken from the UCI repository. Our experiments focused on learning from very small labeled sets. The number of labeled examples was always set to a small multiple (m = 1, 2, 3, 5, or 10) of the total number of classes. The algorithms were compared under random and retrospective sampling, and training sets were always sampled with replacement. For each training-set size m, we averaged across 10 values of Yobs obtained by randomly sampling from the vector Y. Free parameters for TBB (λ) and Laplacian (number of nearest neighbors, number of eigenvectors) were chosen using randomized leave-one-out cross-validation.
Figure 3a shows the performance of the algorithms under random sampling for four representative datasets. TBB outperforms the other algorithms across the four taxonomic sets (only Beetles and Crustaceans shown), but the differences between TBB and Nearest Neighbor are rather small. These results do suggest a substantial advantage for TBB over Laplacian in tree-structured domains. As expected, this pattern is reversed on the Digits set, but it is encouraging that the tree-based methods can still improve on Nearest Neighbor even for datasets that are not normally associated with trees. Neither method beats the baseline on the Vowels or the Gene Promoter sets, but TBB performs well on the Gene Splice set, which suggests that it may find further uses in computational biology. More dramatic differences between the algorithms appear under retrospective sampling (Figure 3b). There is a clear advantage here for TBB on the taxonomic sets. TBB fares better than the other algorithms when the class proportions in the training set do not match the proportions in the population, and it turns out that many of the features in the taxonomic datasets are unbalanced. Since the other datasets have classes of approximately equal size, the results for retrospective sampling are similar to those for random sampling. While not conclusive, our results suggest that TBB may be the method of choice on tree-structured datasets, and is robust even for datasets (like Digits) that are not clearly tree-structured.

3.2 MCMC over trees

Figure 3 shows that TBB can perform well on real-world datasets using only a single tree. Working with a distribution over trees, although costly, could improve performance when there is not sufficient data to strongly constrain the best tree, or when the domain is not strongly tree-structured. Using a small synthetic example, we explored one such case: learning from very sparse and noisy data in a tree-structured domain.
Figure 3: Error rates for four datasets (Beetles, Crustaceans, Gene Splice, Digits; curves: TBB, NN, TNN, Laplacian) under (a) random and (b) retrospective sampling, as a function of the number of labeled examples m per class. Mean standard error bars for each dataset are shown in the upper right corner of the plot.

Figure 4: Error rates on sparse artificial data (TBB ideal, TBB MCMC, TBB modal, TBB agglom, NN) as a function of number of labels observed.

We generated artificial datasets consisting of 20 objects. Each dataset was based on a “true” tree T0, with objects at the leaves of T0. Each object was represented by a vector of 20 binary features generated by a mutation process over T0, with high λ. Most feature values were missing; the algorithms saw only 5 of the 20 features for each object. For each dataset, we created 20 test concepts from the same mutation process. The algorithms saw m labeled examples of each test concept and had to infer the labels of the remaining objects. This experiment was repeated for 10 random trees T0. Our MCMC approach was inspired by an algorithm for reconstruction of phylogenetic trees [12], which uses Metropolis-Hastings over tree topologies with two kinds of proposals: local (nearest neighbor interchange) and global (subtree pruning and regrafting). Unlike the previous section, none of the trees considered (including the true tree T0) was ultrametric. Instead, each branch in each tree was assigned a fixed length. This meant that any two trees with the same hierarchical structure were identical, and we did not have to store trees with the same topology but different branch lengths.
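The feature-generation step above can be sketched schematically. The flip probability below is a stand-in of our own: in the paper the mutation probability along a branch depends on the rate λ and the branch length, which we collapse into a single constant here.

```python
import random

def mutate_down(tree, root, flip_prob, rng):
    """tree: dict parent -> children. Draw a root value uniformly, then copy it
    down the tree, flipping along each branch with probability flip_prob.
    NOTE: flip_prob is a simplification of the paper's lambda-dependent rate."""
    values = {root: rng.randrange(2)}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            values[child] = values[node] ^ int(rng.random() < flip_prob)
            stack.append(child)
    return values

rng = random.Random(0)
tree = {"r": ["a", "b"], "a": [0, 1], "b": [2, 3]}
# One binary feature over the internal nodes and the four leaf objects 0..3.
feature = mutate_down(tree, "r", 0.1, rng)
leaves = [feature[i] for i in range(4)]
```

Repeating this draw independently gives one column of the object-feature matrix per call; masking most entries reproduces the sparse-observation setting of the experiment.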
Figure 4 shows the mean classification error rate, based on 1600 samples after a burn-in of 400 iterations. Four versions of TBB are shown: “ideal” uses the true tree T0, “MCMC” uses model averaging over a distribution of trees, “modal” uses the single most likely tree in the distribution, and “agglom” uses a tree built by average-link clustering. The ideal learner beats all others because the true tree is impossible to identify from such sparse data. Using MCMC over trees brings TBB substantially closer to the ideal than simpler alternatives that ignore the tree structure (NN) or consider only a single tree (modal, agglom).

4 Conclusion

We have shown how to make optimal Bayesian concept learning tractable in a semi-supervised setting by assuming a latent tree structure that can be inferred from the unlabeled data and defining a prior for concepts based on a mutation process over the tree. Our Bayesian framework supports many possible extensions, including active learning, feature selection, and model selection. Inferring the nature of the latent structure T – rather than assuming a manifold structure or a tree structure – is a particularly interesting problem. When little is known about the form of T, Bayesian methods for model selection could be used to choose among approaches that assume manifolds, trees, flat clusters, or other canonical representational forms.

Acknowledgments

This project was supported by the DARPA CALO program and NTT Communication Science Laboratories. Our implementation of the Laplacian method was based on code provided by Mikhail Belkin.

References

[1] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, volume 20, 2003.
[2] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In NIPS, volume 14, 2002.
[3] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML, volume 18, 2001.
[4] M. Belkin and P.
Niyogi. Semi-supervised learning on manifolds. 2003. To appear in Machine Learning, Special Issue on Theoretical Advances in Data Clustering.
[5] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In NIPS, volume 15, 2003.
[6] T. M. Mitchell. Machine Learning. McGraw-Hill, 1997.
[7] D. Haussler, M. Kearns, and R. Schapire. Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension. Machine Learning, 14(1), 1994.
[8] C. Kemp and J. B. Tenenbaum. Theory-based induction. In Proceedings of the 25th Annual Conference of the Cognitive Science Society, 2003.
[9] L. Shih and D. Karger. Learning classes correlated to a hierarchy. 2003. Unpublished manuscript.
[10] J.-P. Vert. A tree kernel to analyze phylogenetic profiles. Bioinformatics, 1(1):1–9, 2002.
[11] R. Neal. Defining priors for distributions using Dirichlet diffusion trees. Technical Report 0108, University of Toronto, 2001.
[12] H. Jow, C. Hudelot, M. Rattray, and P. Higgs. Bayesian phylogenetics using an RNA substitution model applied to early mammalian evolution. Molecular Biology and Evolution, 19(9):1591–1601, 2002.
Finding the M Most Probable Configurations Using Loopy Belief Propagation

Chen Yanover and Yair Weiss
School of Computer Science and Engineering
The Hebrew University of Jerusalem
91904 Jerusalem, Israel
{cheny,yweiss}@cs.huji.ac.il

Abstract

Loopy belief propagation (BP) has been successfully used in a number of difficult graphical models to find the most probable configuration of the hidden variables. In applications ranging from protein folding to image analysis one would like to find not just the best configuration but rather the top M. While this problem has been solved using the junction tree formalism, in many real world problems the clique size in the junction tree is prohibitively large. In this work we address the problem of finding the M best configurations when exact inference is impossible. We start by developing a new exact inference algorithm for calculating the best configurations that uses only max-marginals. For approximate inference, we replace the max-marginals with the beliefs calculated using max-product BP and generalized BP. We show empirically that the algorithm can accurately and rapidly approximate the M best configurations in graphs with hundreds of variables.

1 Introduction

Considerable progress has been made in the field of approximate inference using techniques such as variational methods [7], Monte-Carlo methods [5], mini-bucket elimination [4] and belief propagation (BP) [6]. These techniques allow approximate solutions to various inference tasks in graphical models where building a junction tree is infeasible due to the exponentially large clique size. The inference tasks that have been considered include calculating marginal probabilities, finding the most likely configuration, and evaluating or bounding the log likelihood. In this paper we consider an inference task that has not been tackled with the same tools of approximate inference: calculating the M most probable configurations (MPCs).
This is a natural task in many applications. As a motivating example, consider the protein folding task known as the side-chain prediction problem. In our previous work [17], we showed how to find the minimal-energy side-chain configuration using approximate inference in a graphical model. The graph has 300 nodes, and the clique size in a junction tree calculated using standard software [10] can be on the order of 10^42, so that exact inference is obviously impossible. We showed that loopy max-product belief propagation (BP) achieved excellent results in finding the first MPC for this graph. In the few cases where BP did not converge, Generalized Belief Propagation (GBP) always converged, at the cost of additional computation. But we are also interested in finding the second best configuration, the third best or, more generally, the top M configurations. Can this also be done with BP? The problem of finding the M MPCs has been successfully solved within the junction tree (JT) framework. However, to the best of our knowledge, there has been no equivalent solution when building a junction tree is infeasible. A simple solution would be to output the top M configurations generated by a Monte-Carlo simulation or by a local search algorithm from multiple initializations. As we show in our simulations, both of these solutions are unsatisfactory. Alternatively, one can attempt to use more sophisticated heuristically guided search methods (such as A∗) or use exact MPCs algorithms on an approximated, reduced-size junction tree [4, 1]. However, given the success of BP and GBP in finding the first MPC in similar problems [6, 9], it is natural to look for a method based on BP. In this paper we develop such an algorithm. We start by showing why the standard algorithm [11] for calculating the top M MPCs cannot be used in graphs with cycles.
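The max-product BP routine that underlies the first-MPC computation can be sketched on a small pairwise model. This is an illustrative implementation of standard max-product message passing, not the paper's Matlab code; on a tree the resulting beliefs are exact max-marginals, while on loopy graphs they are the approximation discussed above.

```python
def max_product_bp(n_states, unary, pairwise, iters=50):
    """unary: {node: [phi(x)]}; pairwise: {(i, j): psi} with psi[x_i][x_j].
    Returns normalized beliefs {node: [b(x)]}, the (approximate) max-marginals."""
    msgs = {(i, j): [1.0] * n_states
            for a, b in pairwise for (i, j) in [(a, b), (b, a)]}
    for _ in range(iters):
        for (i, j) in list(msgs):
            # Orient the stored potential so it is indexed [x_i][x_j].
            psi = pairwise.get((i, j)) or \
                [list(col) for col in zip(*pairwise[(j, i)])]
            new = []
            for xj in range(n_states):
                best = 0.0
                for xi in range(n_states):
                    val = unary[i][xi] * psi[xi][xj]
                    for (k, l) in msgs:       # product of incoming messages
                        if l == i and k != j:
                            val *= msgs[(k, i)][xi]
                    best = max(best, val)
                new.append(best)
            z = sum(new) or 1.0
            msgs[(i, j)] = [v / z for v in new]
    beliefs = {}
    for i in unary:
        b = list(unary[i])
        for (k, l) in msgs:
            if l == i:
                for x in range(n_states):
                    b[x] *= msgs[(k, i)][x]
        z = sum(b)
        beliefs[i] = [v / z for v in b]
    return beliefs

# A 3-node chain (a tree, so beliefs are exact) whose MAP assignment is (1, 1, 1).
beliefs = max_product_bp(
    2,
    {0: [1.0, 3.0], 1: [1.0, 1.0], 2: [1.0, 1.0]},
    {(0, 1): [[2.0, 1.0], [1.0, 2.0]], (1, 2): [[2.0, 1.0], [1.0, 2.0]]})
```

Maximizing each belief independently recovers the MAP assignment here, which is exactly the decoding step the max-marginal lemma (next section) justifies.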
We then introduce a novel algorithm called Best Max-Marginal First (BMMF) and show that when the max-marginals are exact it provably finds the M MPCs. We show simulation results of BMMF in graphs where exact inference is impossible, with excellent performance on challenging graphical models with hundreds of variables.

2 Exact MPCs algorithms

We assume our hidden variables are denoted by a vector X, with N = |X|, and the observed variables by Y, where Y = y. Let mk = (mk(1), mk(2), · · · , mk(N)) denote the kth MPC. We first seek a configuration m1 that maximizes Pr(X = x|y). Pearl, Dawid and others [12, 3, 11] have shown that this configuration can be calculated using a quantity known as max-marginals (MMs):

max_marginal(i, j) = max_{x : x(i) = j} Pr(X = x | y)    (1)

Max-marginal lemma: If there exists a unique MAP assignment m1 (i.e. Pr(X = m1|y) > Pr(X = x|y) for all x ≠ m1), then x1 defined by x1(i) = argmax_j max_marginal(i, j) will recover the MAP assignment, m1 = x1.

Proof: Suppose there exists i for which m1(i) = k, x1(i) = l, and k ≠ l. It follows that max_{x : x(i) = k} Pr(X = x|y) > max_{x : x(i) = l} Pr(X = x|y), which contradicts the definition of x1.

When the graph is a tree, the MMs can be calculated exactly using max-product belief propagation [16, 15, 12] in two passes: one up the tree and the other down the tree. Similarly, for an arbitrary graph they can be calculated exactly using two passes of max-propagation in the junction tree [2, 11, 3]. A more efficient algorithm for calculating m1 requires only one pass of max-propagation. After calculating the max-marginal exactly at the root node, the MAP assignment m1 can be calculated by tracing back the pointers that were used during the max-propagation [11]. Figure 1a illustrates this traceback operation for the Viterbi algorithm in HMMs [13] (the pairwise potentials favor configurations where neighboring nodes have different values).
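The max-marginal lemma can be checked by brute-force enumeration on a toy distribution (our own example; the values are unnormalized scores, which is all the argmax needs):

```python
import itertools

def max_marginals(prob, n_vars, n_vals):
    """max_marginal(i, j) = max over assignments x with x(i) = j of prob[x].
    prob maps assignment tuples to (possibly unnormalized) scores."""
    mm = [[0.0] * n_vals for _ in range(n_vars)]
    for x, p in prob.items():
        for i, j in enumerate(x):
            mm[i][j] = max(mm[i][j], p)
    return mm

def decode(mm):
    """Maximize each max-marginal independently (the lemma's x1)."""
    return tuple(max(range(len(row)), key=row.__getitem__) for row in mm)

# Unique MAP at (1, 0, 1); the lemma says decoding the MMs recovers it.
prob = {x: 0.01 for x in itertools.product((0, 1), repeat=3)}
prob[(1, 0, 1)] = 0.5
prob[(0, 1, 0)] = 0.3
mm = max_marginals(prob, 3, 2)
x1 = decode(mm)
```

With a unique MAP, each row of the table is maximized by the MAP's value at that variable, so the independent decoding is consistent; ties would break exactly this guarantee, as the text explains next.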
Figure 1: a. The traceback operation in the Viterbi algorithm. The MAP configuration can be calculated by a forward message passing scheme followed by a backward “traceback”. b. The same traceback operation applied to a loopy graph may give inconsistent results.

After calculating messages from left to right using max-product, we have the max-marginal at node 3 and can calculate x1(3) = 1. We then use the value of x1(3) and the message from node 1 to 2 to find x1(2) = 0. Similarly, we then trace back to find the value of x1(1). These traceback operations, however, are problematic in loopy graphs. Figure 1b shows a simple example from [15] with the same potentials as in figure 1a. After setting x1(3) = 1 we traceback and find x1(2) = 0, x1(1) = 1 and finally x1(3) = 0, which is obviously inconsistent with our initial choice. One advantage of using traceback is that it can recover m1 even if there are “ties” in the MMs, i.e. when there exists a max-marginal that has a non-unique maximizing value. When there are ties, the max-marginal lemma no longer holds and independently maximizing the MMs will not find m1 (cf. [12]). Finding m1 using only MMs requires repeated computation of the MMs — each time with the additional constraint x(i) = j, where i is a tied node and j one of its maximizing values — until no ties exist. It is easy to show that this algorithm will recover m1. The proof is a special case of the proof we present for claim 2 in the next section. However, we need to recalculate the MMs many times until no more ties exist. This is the price we pay for not being able to use traceback. The situation is similar if we seek the M MPCs.

2.1 The Simplified Max-Flow Propagation Algorithm

Nilsson’s Simplified Max-Flow Propagation (SMFP) [11] starts by calculating the MMs and using the max-marginal lemma to find m1.
Since m2 must differ from m1 in at least one variable, the algorithm defines N conditioning sets, Ci ≜ (x(1) = m1(1), x(2) = m1(2), · · · , x(i−1) = m1(i−1), x(i) ≠ m1(i)). It then uses the max-marginal lemma to find the most probable configuration given each conditioning set, xi = argmax_x Pr(X = x|y, Ci), and finally m2 = argmax_{x ∈ {xi}} Pr(X = x|y). Since the conditioning sets form a partition, it is easy to show that the algorithm finds m2 after N calculations of the MMs. Similarly, to find mk the algorithm uses the fact that mk must differ from m1, m2, · · · , mk−1 in at least one variable and forms a new set of up to N conditioning sets. Using the max-marginal lemma one can find the MPC given each of these new conditioning sets. This gives up to N new candidates, in addition to (k−1)(N−1) previously calculated candidates. The most probable candidate out of these k(N−1)+1 candidates is guaranteed to be mk. As pointed out by Nilsson, this simple algorithm may require far too many calculations of the MMs (O(MN)). He suggested an algorithm that uses traceback operations to reduce the computation significantly. Since traceback operations are problematic in loopy graphs, we now present a novel algorithm that does not use traceback but may require far fewer calculations of the MMs than SMFP.

Figure 2: An illustration of our novel BMMF algorithm on a simple example.

2.2 A novel algorithm: Best Max-Marginal First

For simplicity of exposition, we will describe the BMMF algorithm under what we call the strict order assumption: that no two configurations have exactly the same probability. We illustrate our algorithm using a simple example (figure 2). There are 4 binary variables in the graphical model and we can find the top 3 MPCs exactly: 1100, 1110, 0001. Our algorithm outputs a set of candidates xt, one at each iteration. In the first iteration, t = 1, we start by calculating the MMs, and using the max-marginal lemma we find m1.
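Nilsson's conditioning-set construction for m2, described above, can be sketched by brute force, with enumeration standing in for the max-marginal computations (the toy distribution is our own):

```python
import itertools

def second_best(prob, m1):
    """The sets C_i (agree with m1 before position i, differ at i) partition
    the assignments differing from m1; maximize within each and keep the best."""
    best = None
    for i in range(len(m1)):
        candidates = [x for x in prob if x[:i] == m1[:i] and x[i] != m1[i]]
        xi = max(candidates, key=prob.__getitem__)
        if best is None or prob[xi] > prob[best]:
            best = xi
    return best

prob = {x: 0.01 for x in itertools.product((0, 1), repeat=3)}
prob[(1, 1, 0)] = 0.40
prob[(1, 1, 1)] = 0.25
prob[(0, 0, 1)] = 0.20
m1 = max(prob, key=prob.__getitem__)
m2 = second_best(prob, m1)
```

Because the C_i partition everything that differs from m1, the best of the per-set maxima is exactly m2, at the cost of N max-marginal computations.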
We now search the max-marginal table for the next best max-marginal value. In this case it is obtained with x(3) = 1. In the second iteration, t = 2, we lock x(3) = 1; in other words, we calculate the MMs with the added constraint x(3) = 1. We use the max-marginal lemma to find the most likely configuration with x(3) = 1 locked and obtain x2 = 1110. Note that we have found the second most likely configuration. We then add the complementary constraint x(3) ≠ 1 to the originating constraints set and calculate the MMs. In the third iteration, t = 3, we search both previous max-marginal tables and find the best remaining max-marginal. It is obtained at x(1) = 0, t = 1. We now add the constraint x(1) = 0 to the constraints set from t = 1, calculate the MMs and use the max-marginal lemma to find x3 = 0001. Finally, we add the complementary constraint x(1) ≠ 0 to the originating constraints set and calculate the MMs. Thus after 3 iterations we have found the first 3 MPCs using only 5 calculations of the MMs.

The Best Max-Marginal First (BMMF) algorithm for calculating the M most probable configurations:

• Initialization

SCORE_1(i, j) = max_{x : x(i) = j} Pr(X = x | y)    (2)
x_1(i) = argmax_j SCORE_1(i, j)    (3)
CONSTRAINTS_1 = ∅    (4)
USED_2 = ∅    (5)

• For t = 2 : T

SEARCH_t = {(i, j, s) : s < t, x_s(i) ≠ j, (i, j, s) ∉ USED_t}    (6)
(i_t, j_t, s_t) = argmax_{(i, j, s) ∈ SEARCH_t} SCORE_s(i, j)    (7)
CONSTRAINTS_t = CONSTRAINTS_{s_t} ∪ {(x(i_t) = j_t)}    (8)
SCORE_t(i, j) = max_{x : x(i) = j, CONSTRAINTS_t} Pr(X = x | y)    (9)
x_t(i) = argmax_j SCORE_t(i, j)    (10)
USED_{t+1} = USED_t ∪ {(i_t, j_t, s_t)}    (11)
CONSTRAINTS_{s_t} = CONSTRAINTS_{s_t} ∪ {(x(i_t) ≠ j_t)}    (12)
SCORE_{s_t}(i, j) = max_{x : x(i) = j, CONSTRAINTS_{s_t}} Pr(X = x | y)    (13)

Claim 1: x1 calculated by the BMMF algorithm is equal to the MPC m1.

Proof: This is just a restatement of the max-marginal lemma.

Claim 2: x2 calculated by the BMMF algorithm is equal to the second MPC m2.

Proof: We first show that m2(i2) = j2. We know that m2 differs in at least one location from m1.
We also know that, out of all the assignments that differ from m1, it must have the highest probability. Suppose that m2(i2) ≠ j2. By the definition of SCORE_1, this means that there exists an x ≠ m2 that is not m1 whose posterior probability is higher than that of m2. This is a contradiction. Now, out of all assignments for which x(i2) = j2, m2 has the highest posterior probability (recall that by definition, m1(i2) ≠ j2). The max-marginal lemma guarantees that x2 = m2.

Partition Lemma: Let SAT_k denote the set of assignments satisfying CONSTRAINTS_k. Then, after iteration k, the collection {SAT_1, SAT_2, · · · , SAT_k} is a partition of the assignment space.

Proof: By induction over k. For k = 1, CONSTRAINTS_1 = ∅ and the claim trivially holds. For k = 2, SAT_1 = {x | x(i2) ≠ j2} and SAT_2 = {x | x(i2) = j2} are mutually disjoint and SAT_1 ∪ SAT_2 covers the assignment space, therefore {SAT_1, SAT_2} is a partition of the assignment space. Assume that after iteration k−1, {SAT_1, SAT_2, · · · , SAT_{k−1}} is a partition of the assignment space. Note that in iteration k, we add CONSTRAINTS_k = CONSTRAINTS_{s_k} ∪ {(x(i_k) = j_k)} and modify CONSTRAINTS_{s_k} = CONSTRAINTS_{s_k} ∪ {(x(i_k) ≠ j_k)}, while keeping all other constraint sets unchanged. SAT_k and the modified SAT_{s_k} are pairwise disjoint, and SAT_k ∪ SAT_{s_k} covers the originating SAT_{s_k}. Since after iteration k−1 {SAT_1, SAT_2, · · · , SAT_{k−1}} is a partition of the assignment space, so is {SAT_1, SAT_2, · · · , SAT_k}.

Claim 3: x_k, the configuration calculated by the algorithm in iteration k, is m_k, the kth MPC.

Proof: First, note that SCORE_{s_k}(i_k, j_k) ≤ SCORE_{s_{k−1}}(i_{k−1}, j_{k−1}); otherwise (i_k, j_k, s_k) would have been chosen in iteration k−1. Following the partition lemma, each assignment arises at most once. By the strict order assumption, this means that SCORE_{s_k}(i_k, j_k) < SCORE_{s_{k−1}}(i_{k−1}, j_{k−1}). Let m_k ∈ SAT_{s∗}. We know that m_k differs from all previous x_s in at least one location. In particular, m_k must differ from x_{s∗} in at least one location.
Denote that location by i∗, and let m_k(i∗) = j∗. We want to show that SCORE_{s∗}(i∗, j∗) = Pr(X = m_k|y). First, note that (i∗, j∗, s∗) ∉ USED_k. If we had previously used it, then (x(i∗) ≠ j∗) ∈ CONSTRAINTS_{s∗}, which contradicts the definition of s∗. Now suppose there exists m_l, l ≤ k−1, such that m_l ∈ SAT_{s∗} and m_l(i∗) = j∗. Since (i∗, j∗, s∗) ∉ USED_k, this would mean that SCORE_{s_k}(i_k, j_k) ≥ SCORE_{s_{k−1}}(i_{k−1}, j_{k−1}), which is a contradiction. Therefore m_k is the most probable assignment that satisfies CONSTRAINTS_{s∗} and has the value j∗ at location i∗. Hence SCORE_{s∗}(i∗, j∗) = Pr(X = m_k|y).

A consequence of claim 3 is that BMMF will find the top M MPCs using 2M calculations of max-marginals. In contrast, SMFP requires O(MN) calculations. In real-world loopy problems, especially when N ≫ M, this can lead to drastically different run times. First, real-world problems may have thousands of nodes, so a speedup by a factor of N will be very significant. Second, calculating the MMs requires iterative algorithms (e.g. BP or GBP), so a speedup by a factor of N may be the difference between running a month versus running half a day.

3 Approximate MPCs algorithms using loopy BP

We now compare 4 approximate MPCs algorithms:

1. Loopy BMMF. This is exactly the algorithm in section 2.2 with the MMs based on the beliefs computed by loopy max-product BP or max-GBP:

SCORE_k(i, j) = Pr(X = x_k|y) · BEL(i, j | CONSTRAINTS_k) / max_j BEL(i, j | CONSTRAINTS_k)    (14)

2. Loopy SMFP. This is just Nilsson’s SMFP algorithm with the MMs calculated using loopy max-product BP.

3. Gibbs sampling. We collect all configurations sampled during a Gibbs sampling simulation and output the top M of these.

4. Greedy. We collect all configurations encountered during a greedy optimization of the posterior probability (this is just Gibbs sampling at zero temperature) and output the top M of these.
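Equations 2–13 can be exercised end-to-end on a toy model, with exact max-marginals obtained by enumeration standing in for BP/GBP. The 4-variable distribution below is our own construction satisfying the strict order assumption:

```python
import itertools

def score(prob, constraints):
    """Max-marginal table under constraints given as (i, j, eq) triples,
    where eq=True means x(i) = j and eq=False means x(i) != j."""
    n = len(next(iter(prob)))
    mm = [[0.0, 0.0] for _ in range(n)]
    for x, p in prob.items():
        if all((x[i] == j) == eq for (i, j, eq) in constraints):
            for i, j in enumerate(x):
                mm[i][j] = max(mm[i][j], p)
    return mm

def decode(mm):
    return tuple(max((0, 1), key=row.__getitem__) for row in mm)

def bmmf(prob, M):
    scores, cons = [score(prob, set())], [set()]
    xs, used = [decode(scores[0])], set()
    for t in range(1, M):
        # Eqs. 6-7: best unused max-marginal entry across previous tables.
        cand = [(scores[s][i][j], i, j, s)
                for s in range(t) for i in range(len(xs[0])) for j in (0, 1)
                if xs[s][i] != j and (i, j, s) not in used]
        _, i, j, s = max(cand)
        cons.append(cons[s] | {(i, j, True)})     # Eq. 8
        scores.append(score(prob, cons[-1]))      # Eq. 9
        xs.append(decode(scores[-1]))             # Eq. 10
        used.add((i, j, s))                       # Eq. 11
        cons[s] = cons[s] | {(i, j, False)}       # Eq. 12
        scores[s] = score(prob, cons[s])          # Eq. 13
    return xs

# 16 distinct base scores plus three boosted configurations (the top 3 MPCs).
prob = {x: 0.001 * (k + 1)
        for k, x in enumerate(itertools.product((0, 1), repeat=4))}
prob[(1, 1, 0, 0)] = 0.40
prob[(1, 1, 1, 0)] = 0.30
prob[(0, 0, 0, 1)] = 0.20
top3 = bmmf(prob, 3)
```

Swapping the enumeration in `score` for an approximate inference routine gives the loopy BMMF variant of Equation 14.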
All four algorithms were implemented in Matlab, and the number of iterations for greedy and Gibbs were chosen so that the run times would be the same as that of loopy BMMF. Gibbs sampling started from m1, the most probable assignment, and the greedy local search algorithm was initialized to an assignment “similar” to m1 (1% of the variables were chosen randomly and their values flipped). For the protein folding problem [17], we used a database consisting of 325 proteins, each of which gives rise to a graphical model with hundreds of variables and many loops.

Figure 3: The configurations found by loopy BMMF compared to those obtained using Gibbs sampling and greedy local search for a large toy-QMR model (left) and a 32 × 32 spin glass model (right).

We compared the top 100 correct configurations obtained by the A∗ heuristic search algorithm [8] to those found by the loopy BMMF algorithm, using BP. In all cases where A∗ was feasible, loopy BMMF always found the correct configurations. Also, the BMMF algorithm converged more often (96.3% compared to 76.3%) and ran much faster. We then assessed the performance of the BMMF algorithm for a couple of relatively small problems, where exact inference was possible. For both a small toy-QMR model (with 20 diseases and 50 symptoms) and an 8 × 8 spin glass model, the BMMF algorithm obtained the correct MPCs. Finally, we compared the performance of the algorithms for a couple of hard problems — a large toy-QMR model (with 100 diseases and 200 symptoms) and a 32 × 32 spin glass model with large pairwise interactions. For the toy-QMR model, the MPCs calculated by the BMMF algorithm were better than those calculated by Gibbs sampling (Figure 3, left). For the large spin glass, we found that ordinary BP did not converge and used max-product generalized BP instead.
This is exactly the algorithm described in [18], with marginalizations replaced by maximizations. We found that GBP converged far more frequently, and indeed the MPCs found using GBP are much better than those obtained with Gibbs or greedy (Figure 3, right; Gibbs results are worse than those of the greedy search and are therefore not shown). Note that finding the second MPC using the simple SMFP algorithm requires a week, while loopy BMMF calculated the 25 MPCs in only a few hours.

4 Discussion

Existing algorithms successfully find the M MPCs for graphs where building a JT is possible. However, in many real-world applications exact inference is impossible and approximate techniques are needed. In this paper we have addressed the problem of finding the M MPCs using the techniques of approximate inference. We have presented a new algorithm, called Best Max-Marginal First, that provably solves the problem if MMs can be calculated exactly. We have shown that the algorithm continues to perform well when the MMs are approximated using max-product loopy BP or GBP. Interestingly, the BMMF algorithm uses the numerical values of the approximate MMs to determine what to do in each iteration. The success of loopy BMMF suggests that in some cases max-product loopy BP gives a good numerical approximation to the true MMs. Most existing analysis of loopy max-product [16, 15] has focused on the configurations found by the algorithm. It would be interesting to extend the analysis to bound the approximate MMs, which in turn would lead to a provable approximate MPCs algorithm. While we have used loopy BP to approximate the MMs, any approximate inference algorithm can be used inside BMMF to derive a novel approximate MPCs algorithm. In particular, the algorithm suggested by Wainwright et al. [14] can be shown to give the MAP assignment when it converges. It would be interesting to incorporate their algorithm into BMMF.

References

[1] A. Cano, S. Moral, and A. Salmerón.
Penniless propagation in join trees. Journal of Intelligent Systems, 15:1010–1027, 2000.
[2] R. Cowell. Advanced inference in Bayesian networks. In M.I. Jordan, editor, Learning in Graphical Models. MIT Press, 1998.
[3] P. Dawid. Applications of a general propagation algorithm for probabilistic expert systems. Statistics and Computing, 2:25–36, 1992.
[4] R. Dechter and I. Rish. A scheme for approximating probabilistic inference. In Uncertainty in Artificial Intelligence (UAI 97), 1997.
[5] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Proceedings UAI 2000. Morgan Kaufmann, 2000.
[6] B.J. Frey, R. Koetter, and N. Petrovic. Very loopy belief propagation for unwrapping phase images. In Adv. Neural Information Processing Systems 14. MIT Press, 2001.
[7] T.S. Jaakkola and M.I. Jordan. Variational probabilistic inference and the QMR-DT database. JAIR, 10:291–322, 1999.
[8] Andrew R. Leach and Andrew P. Lemon. Exploring the conformational space of protein side chains using dead-end elimination and the A* algorithm. Proteins: Structure, Function, and Genetics, 33(2):227–239, 1998.
[9] A. Levin, A. Zomet, and Y. Weiss. Learning to perceive transparency from the statistics of natural scenes. In Proceedings NIPS 2002. MIT Press, 2002.
[10] Kevin Murphy. The Bayes Net Toolbox for Matlab. Computing Science and Statistics, 33, 2001.
[11] D. Nilsson. An efficient algorithm for finding the M most probable configurations in probabilistic expert systems. Statistics and Computing, 8:159–173, 1998.
[12] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[13] L.R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77(2):257–286, 1989.
[14] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Exact MAP estimates by (hyper)tree agreement. In Proceedings NIPS 2002. MIT Press, 2002.
[15] M. J.
Wainwright, T. Jaakkola, and A. S. Willsky. Tree consistency and bounds on the performance of the max-product algorithm and its generalizations. Technical Report P-2554, MIT LIDS, 2002.
[16] Y. Weiss and W.T. Freeman. On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs. IEEE Transactions on Information Theory, 47(2):723–735, 2001.
[17] C. Yanover and Y. Weiss. Approximate inference and protein folding. In Proceedings NIPS 2002. MIT Press, 2002.
[18] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. In G. Lakemeyer and B. Nebel, editors, Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2003.
A Nonlinear Predictive State Representation

Matthew R. Rudary and Satinder Singh
Computer Science and Engineering
University of Michigan
Ann Arbor, MI 48109
{mrudary,baveja}@umich.edu

Abstract

Predictive state representations (PSRs) use predictions of a set of tests to represent the state of controlled dynamical systems. One reason why this representation is exciting as an alternative to partially observable Markov decision processes (POMDPs) is that PSR models of dynamical systems may be much more compact than POMDP models. Empirical work on PSRs to date has focused on linear PSRs, which have not allowed for compression relative to POMDPs. We introduce a new notion of tests which allows us to define a new type of PSR that is nonlinear in general and allows for exponential compression in some deterministic dynamical systems. These new tests, called e-tests, are related to the tests used by Rivest and Schapire [1] in their work with the diversity representation, but our PSR avoids some of the pitfalls of their representation—in particular, its potential to be exponentially larger than the equivalent POMDP.

1 Introduction

A predictive state representation, or PSR, captures the state of a controlled dynamical system not as a memory of past observations (as do history-window approaches), nor as a distribution over hidden states (as do partially observable Markov decision process or POMDP approaches), but as predictions for a set of tests that can be done on the system. A test is a sequence of action-observation pairs, and the prediction for a test is the probability of the test-observations happening if the test-actions are executed. Littman et al. [2] showed that PSRs are as flexible a representation as POMDPs and are a more powerful representation than fixed-length history-window approaches.
PSRs are potentially significant for two main reasons: 1) they are expressed entirely in terms of observable quantities and this may allow the development of methods for learning PSR models from observation data that behave and scale better than do existing methods for learning POMDP models from observation data, and 2) they may be much more compact than POMDP representations. It is the latter potential advantage that we focus on in this paper. All PSRs studied to date have been linear, in the sense that the probability of any sequence of k observations given a sequence of k actions can be expressed as a linear function of the predictions of a core set of tests. We introduce a new type of test, the e-test, and present the first nonlinear PSR that can be applied to a general class of dynamical systems. In particular, in the first such result for PSRs we show that there exist controlled dynamical systems whose PSR representation is exponentially smaller than its POMDP representation. To arrive at this result, we briefly review PSRs, introduce e-tests and an algorithm to generate a core set of e-tests given a POMDP, show that a representation built using e-tests is a PSR and that it can be exponentially smaller than the equivalent POMDP, and conclude with example problems and a look at future work in this area. 2 Models of Dynamical Systems A model of a controlled dynamical system defines a probability distribution over sequences of observations one would get for any sequence of actions one could execute in the system. Equivalently, given any history of interaction with the dynamical system so far, a model defines the distribution over sequences of future observations for all sequences of future actions. The state of such a model must be a sufficient statistic of the observed history; that is, it must encode all the information conveyed by the history. POMDPs [3, 4] and PSRs [2] both model controlled dynamical systems. 
In POMDPs, a belief state is used to encode historical information; in PSRs, probabilities of particular future outcomes are used. Here we describe both models and relate them to one another.

POMDPs A POMDP model is defined by a tuple ⟨S, A, O, T, O, b0⟩, where S, A, and O are, respectively, sets of (unobservable) hypothetical underlying-system states, actions that can be taken, and observations that may be issued by the system. T is a set of matrices of dimension |S| × |S|, one for each a ∈ A, such that T^a_{ij} is the probability that the next state is j given that the current state is i and action a is taken. O is a set of |S| × |S| diagonal matrices, one for each action-observation pair, such that O^{a,o}_{ii} is the probability of observing o after arriving in state i by taking action a. Finally, b0 is the initial belief state, an |S| × 1 vector whose ith element is the probability of the system starting in state i. The belief state at history h is b(S|h) = [prob(1|h) prob(2|h) . . . prob(|S| | h)], where prob(i|h) is the probability of the unobserved state being i at history h. After taking an action a in history h and observing o, the belief state can be updated as follows:

b^T(S|hao) = b^T(S|h) T^a O^{a,o} / (b^T(S|h) T^a O^{a,o} 1_{|S|}),

where 1_{|S|} is the |S| × 1 vector consisting of all 1's.

PSRs Littman et al. [2] (inspired by the work of Rivest and Schapire [1] and Jaeger [5]) introduced PSRs to represent the state of a controlled dynamical system using predictions of the outcomes of tests. They define a test t as a sequence of actions and observations t = a_1 o_1 a_2 o_2 · · · a_k o_k; we shall call this type of test a sequence test, or s-test for short. An s-test succeeds iff, when the sequence of actions a_1 a_2 · · · a_k is executed, the system issues the observation sequence o_1 o_2 · · · o_k. The prediction p(t|h) is the probability that the s-test t succeeds from observed history h (of length n w.l.o.g.); that is, p(t|h) = prob(o_{n+1} = o_1, . . . , o_{n+k} = o_k | h, a_{n+1} = a_1, . . .
, a_{n+k} = a_k) (1)

where a_i and o_i denote the action taken and the observation, respectively, at time i. In the rest of this paper, we will abbreviate expressions like the right-hand side of Equation 1 by prob(o_1 o_2 · · · o_k | h a_1 a_2 · · · a_k). A set of s-tests Q = {q_1, q_2, . . . , q_{|Q|}} is said to be a core set if it constitutes a PSR, i.e., if its vector of predictions, p(Q|h) = [p(q_1|h) p(q_2|h) . . . p(q_{|Q|}|h)], is a sufficient statistic for any history h. Equivalently, if Q is a core set, then for any s-test t, there exists a function f_t such that p(t|h) = f_t(p(Q|h)) for all h. The prediction vector p(Q|h) in PSR models corresponds to the belief state b(S|h) in POMDP models. The PSRs discussed by Littman et al. [2] are linear PSRs in the sense that for any s-test t, f_t is a linear function of the predictions of the core s-tests; equivalently, the following equation

∀ s-tests t, ∃ a weight vector w_t s.t. p(t|h) = p^T(Q|h) w_t (2)

defines what it means for Q to constitute a linear PSR. Upon taking action a in history h and observing o, the prediction vector can be updated as follows:

p(q_i|hao) = p(a o q_i|h) / p(ao|h) = f_{aoq_i}(p(Q|h)) / f_{ao}(p(Q|h)) = p^T(Q|h) m_{aoq_i} / (p^T(Q|h) m_{ao}) (3)

where the final right-hand side is only valid for linear PSRs. Thus a linear PSR model is specified by Q and by the weight vectors m_{aoq} in the above equation, for all a ∈ A, o ∈ O, q ∈ Q ∪ {φ} (where φ is the null sequence). It is pertinent to ask what sort of dynamical systems can be modeled by a PSR and how many core tests are required to model a system. In fact, Littman et al. [2] answered these questions with the following result:

Lemma 1 (Littman et al. [2]) For any dynamical system that can be represented by a finite POMDP model, there exists a linear PSR model of size (|Q|) no more than the size (|S|) of the POMDP model.

Littman et al. prove this result by providing an algorithm for constructing a linear PSR model from a POMDP model.
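Both update rules have the same normalized-linear shape. A minimal numerical sketch (the two-state numbers and test weights below are our own invention, not from the paper) makes the parallel concrete:

```python
import numpy as np

def belief_update(b, T_a, O_ao):
    """POMDP update: b^T(S|hao) = b^T(S|h) T^a O^{a,o}, normalized."""
    unnorm = b @ T_a @ O_ao          # row-vector form of b^T T^a O^{a,o}
    return unnorm / unnorm.sum()     # denominator b^T T^a O^{a,o} 1_{|S|}

def psr_update(p, m_ao, M_aoq):
    """Linear PSR update (Eq. 3): p(q_i|hao) = p^T m_{aoq_i} / p^T m_{ao}."""
    return (p @ M_aoq) / (p @ m_ao)  # column i of M_aoq holds m_{a o q_i}

# Toy two-state POMDP, one action-observation pair (numbers invented)
T_a = np.array([[0.9, 0.1],
                [0.2, 0.8]])         # T^a_{ij}: prob next state is j from state i
O_ao = np.diag([0.7, 0.3])          # O^{a,o}_{ii}: prob of o on arriving in i
b_next = belief_update(np.array([0.5, 0.5]), T_a, O_ao)

# Toy linear PSR with two core tests (weight vectors invented)
p = np.array([0.6, 0.3])
m_ao = np.array([0.5, 0.5])
M_aoq = np.array([[0.4, 0.1],
                  [0.2, 0.3]])
p_next = psr_update(p, m_ao, M_aoq)
```

Both functions renormalize a linear functional of the current state vector, which is why the PSR update in Equation 3 mirrors the POMDP belief update so closely.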
The algorithm they present depends on the insight that s-tests are differentiated by their outcome vectors. An outcome vector u(t) for an s-test t = a_1 o_1 a_2 o_2 . . . a_k o_k is an |S| × 1 vector; the ith component of the vector is the probability of t succeeding given that the system is in the hidden state i, i.e.,

u(t) = T^{a_1} O^{a_1,o_1} T^{a_2} O^{a_2,o_2} . . . T^{a_k} O^{a_k,o_k} 1_{|S|}.

Consider the matrix U whose rows correspond to the states in S and whose columns are the outcome vectors for all possible s-tests. Let Q denote the set of s-tests associated with the maximal set of linearly independent columns of U; clearly |Q| ≤ |S|. Littman et al. showed that Q is a core set for a linear PSR model by the following logic. Let U(Q) denote the submatrix consisting of the columns of U corresponding to the s-tests in Q. Clearly, for any s-test t, u(t) = U(Q) w_t for some vector of weights w_t. Therefore, p(t|h) = b^T(S|h) u(t) = b^T(S|h) U(Q) w_t = p^T(Q|h) w_t, which is exactly the requirement for a linear PSR (cf. Equation 2). We will reuse the concept of linear independence of outcome vectors with a new type of test to derive a PSR that is nonlinear in general. This is the first nonlinear PSR that can be used to represent a general class of problems. In addition, this type of PSR in some cases requires a number of core tests that is exponentially smaller than the number of states in the minimal POMDP or the number of core tests in the linear PSR.

3 A new notion of tests

In order to formulate a PSR that requires fewer core tests, we look to a new kind of test: the end test, or e-test for short. An e-test is defined by a sequence of actions and a single ending observation. An e-test e = a_1 a_2 · · · a_k o_k succeeds if, after the sequence of actions a_1 a_2 · · · a_k is executed, o_k is observed. This type of test is inspired by Rivest and Schapire's [1] notion of tests in their work on modeling deterministic dynamical systems.

3.1 PSRs with e-tests

Just as Littman et al.
considered the problem of constructing s-test-based PSRs from POMDP models, here we consider how to construct an e-test-based PSR, or EPSR, from a POMDP model, and we will derive properties of EPSRs from the resulting construction. The |S| × 1 outcome vector for an e-test e = a_1 a_2 . . . a_k o_k is

v(e) = T^{a_1} T^{a_2} . . . T^{a_k} O^{a_k,o_k} 1_{|S|}. (4)

Note that we are using v's to denote outcome vectors for e-tests and u's to denote outcome vectors for s-tests. Consider the matrix V whose rows correspond to S and whose columns are the outcome vectors for all possible e-tests. Let Q_V denote the set of e-tests associated with a maximal set of linearly independent columns of matrix V; clearly |Q_V| ≤ |S|. Note that Q_V is not uniquely defined; there are many such sets. The hope is that the set Q_V is a core set for an EPSR model of the dynamical system represented by the POMDP model. But before we consider this hope, let us consider how we would find Q_V given a POMDP model. We can compute the outcome vector for any e-test from the POMDP parameters using Equation 4. So we could compute the columns of V one by one and check to see how many linearly independent columns we find. If we ever find |S| linearly independent columns, we know we can stop, because we will not find any more. However, if |Q_V| < |S|, then how would we know when to stop? In Figure 1, we present a search algorithm that finds a set Q_V in polynomial time.

    done ← false; i ← 0; L ← {}
    do until done
        done ← true
        N ← generate all one-action extensions of length-i tests in L
        for each t ∈ N
            if v(t) is linearly independent of V(L) then
                L ← L ∪ {t}; done ← false
        end for
        i ← i + 1
    end do
    Q_V ← L

Figure 1: Our search algorithm to find a set of core e-tests given the outcome vectors.

Our algorithm is adapted from Littman et al.'s algorithm for finding core s-tests. The algorithm starts with all e-tests of length one and maintains a set L of currently known linearly independent e-tests.
At each iteration it searches for new linearly independent e-tests among all one-action extensions of the e-tests it added to L at the last iteration, and stops when an iteration does not add to the set L.

Lemma 2 The search algorithm of Figure 1 computes the set Q_V in time polynomial in the size of the POMDP.

Proof Computing the outcome vector for an e-test using Equation 4 is polynomial in the number of states and the length of the e-test. There cannot be more than |S| e-tests in the set L maintained by the search algorithm, and only one-action extensions of the e-tests in L are ever considered. Each extension length considered must add an e-test, else the algorithm stops, and so the maximal length of each e-test in Q_V is upper bounded by the number of states. Therefore, our algorithm returns Q_V in polynomial time. □

Note that this algorithm is only practical if the outcome vectors have been found; this only makes sense if the POMDP model is already known, as outcome vectors map POMDP states to outcomes. We will address learning these models from observations in future work [6]. Next we show that the prediction of any e-test can be computed linearly from the prediction vector for the e-tests in Q_V.

Lemma 3 For any history h and any e-test e, the prediction p(e|h) is some linear function of the prediction vector p(Q_V|h), i.e., p(e|h) = p^T(Q_V|h) w_e for some weight vector w_e.

Proof Let V(Q_V) be the submatrix of V containing the columns corresponding to Q_V. By the definition of Q_V, for any e-test e, v(e) = V(Q_V) w_e for some weight vector w_e. Furthermore, for any history h, p(e|h) = b^T(S|h) v(e) = b^T(S|h) V(Q_V) w_e = p^T(Q_V|h) w_e. □

Note that Lemma 3 does not imply that Q_V constitutes a PSR, let alone a linear PSR, for the definition of a PSR requires that the prediction of all s-tests be computable from the core test predictions. Next we turn to the crucial question: does Q_V constitute a PSR?
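Lemma 2's procedure can be exercised end to end. The sketch below is our own reorganization of Figure 1 (an explicit frontier replaces the done flag, which changes nothing about which tests are found); outcome vectors come from Equation 4. As the input POMDP we use the 4-bit rotate register that Section 4.3 introduces, so the search should return the k + 1 = 5 core e-tests claimed there:

```python
import numpy as np
from itertools import product

# 4-bit rotate register from Section 4.3, encoded as a deterministic POMDP
states = list(product([0, 1], repeat=4))
idx = {s: i for i, s in enumerate(states)}
n = len(states)                                    # |S| = 16
moves = {'R': lambda s: (s[3], s[0], s[1], s[2]),  # rotate right, with wraparound
         'L': lambda s: (s[1], s[2], s[3], s[0]),  # rotate left
         'F': lambda s: (1 - s[0],) + s[1:]}       # flip the leftmost bit
T = {a: np.zeros((n, n)) for a in moves}
for a, f in moves.items():
    for s in states:
        T[a][idx[s], idx[f(s)]] = 1.0
# O^{a,o}: the leftmost bit of the arrival state is observed (same for any action)
O = {(a, o): np.diag([float(s[0] == o) for s in states])
     for a in moves for o in (0, 1)}

def outcome_vector(e):
    """Eq. 4: v(a_1...a_k o_k) = T^{a_1} ... T^{a_k} O^{a_k,o_k} 1_{|S|}."""
    *acts, o = e                       # e-test = action sequence + final observation
    v = O[(acts[-1], o)] @ np.ones(n)
    for a in reversed(acts):
        v = T[a] @ v
    return v

def find_core_etests():
    core, V = [], np.zeros((n, 0))
    frontier = [(a, o) for a in moves for o in (0, 1)]   # all length-1 e-tests
    while frontier:
        added = []
        for e in frontier:
            v = outcome_vector(e)
            # keep e only if v is linearly independent of the columns found so far
            if np.linalg.matrix_rank(np.column_stack([V, v])) > V.shape[1]:
                core.append(e)
                V = np.column_stack([V, v])
                added.append(e)
        # one-action extensions of the tests added in this pass
        frontier = [(a,) + e for e in added for a in moves]
    return core, V

core, V = find_core_etests()
```

On this deterministic system the outcome vectors are binary, and the search terminates with |Q_V| = 5, far below |S| = 16.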
Theorem 1 If V(Q_V), defined as above with respect to some POMDP model of a dynamical system, is a square matrix, i.e., the number of e-tests in Q_V is the number of states |S| (in that POMDP model), then Q_V constitutes a linear EPSR for that dynamical system.

Proof For any history h, p^T(Q_V|h) = b^T(S|h) V(Q_V). If V(Q_V) is square then it is invertible because by construction it has full rank, and hence for any history h, b^T(S|h) = p^T(Q_V|h) V^{-1}(Q_V). For any s-test t = a_1 o_1 a_2 o_2 · · · a_k o_k,

p(t|h) = b^T(S|h) T^{a_1} O^{a_1,o_1} T^{a_2} O^{a_2,o_2} · · · T^{a_k} O^{a_k,o_k} 1_{|S|}   (by first-principles definition)
       = p^T(Q_V|h) V^{-1}(Q_V) T^{a_1} O^{a_1,o_1} T^{a_2} O^{a_2,o_2} · · · T^{a_k} O^{a_k,o_k} 1_{|S|}
       = p^T(Q_V|h) w_t

for some weight vector w_t. Thus, Q_V constitutes a linear EPSR as per the definition in Equation 2. □

We note that the product T^{a_1} O^{a_1,o_1} · · · T^{a_k} O^{a_k,o_k} 1_{|S|} appears often in association with an s-test t = a_1 o_1 · · · a_k o_k, and abbreviate it z(t). We similarly define z(e) = T^{a_1} T^{a_2} · · · T^{a_k} O^{a_k,o_k} 1_{|S|} for the e-test e = a_1 a_2 · · · a_k o_k. Staying with the linear EPSR case for now, we can define an update function for p(Q_V|h) as follows (remembering that V(Q_V) is invertible for this case):

p(e_i|hao) = p(a o e_i|h) / p(ao|h) = b^T(S|h) T^a O^{a,o} z(e_i) / (p^T(Q_V|h) m_{ao}) = p^T(Q_V|h) V^{-1}(Q_V) z(aoe_i) / (p^T(Q_V|h) m_{ao}) = p^T(Q_V|h) m_{aoe_i} / (p^T(Q_V|h) m_{ao}) (5)

where we used the fact that the test ao in the denominator is an e-test. (The form of the linear EPSR update equation is identical to the form of the update in linear PSRs with s-tests given in Equation 3.) Thus, a linear EPSR model is defined by Q_V and the set of weight vectors m_{aoe} for all a ∈ A, o ∈ O, e ∈ Q_V ∪ {φ}, used in Equation 5. Now, let us turn to the case when the number of e-tests in Q_V is less than |S|, i.e., when V(Q_V) is not a square matrix.

Lemma 4 In general, if the number of e-tests in Q_V is less than |S|, then Q_V is not guaranteed to be a linear EPSR.
Proof (Sketch) To prove this lemma, we must only find an example of a dynamical system that is an EPSR but not a linear EPSR. In Section 4.3 we present a k-bit register as an example of such a problem. We show in that section that the state space size is 2^k and the number of s-tests in the core set of a linear s-test-based PSR model is also 2^k, but the number of e-tests in Q_V is only k + 1. Note that this means that the rank of the U matrix for s-tests is 2^k while the rank of the V matrix for e-tests is k + 1. This must mean that Q_V is not a linear EPSR. We skip these details for lack of space. □

Lemma 4 leaves open the possibility that if |Q_V| < |S| then Q_V constitutes a nonlinear EPSR. We were unable, thus far, to evaluate this possibility for general POMDPs but we did obtain an interesting and positive answer, presented in the next section, for the class of deterministic POMDPs.

4 A Nonlinear PSR for Deterministic Dynamical Systems

In deterministic dynamical systems, the predictions of both e-tests and s-tests are binary, and it is this basic fact that allows us to prove the following result.

Theorem 2 For deterministic dynamical systems the set of e-tests Q_V is always an EPSR and in general it is a nonlinear EPSR.

Proof For an arbitrary s-test t = a_1 o_1 a_2 o_2 · · · a_k o_k, and some arbitrary history h that is realizable (i.e., p(h) = 1), and for some vectors w_{a_1 o_1}, w_{a_1 a_2 o_2}, ..., w_{a_1 a_2 · · · a_k o_k} of length |Q_V|, we have

prob(o_1 o_2 · · · o_k | h a_1 a_2 · · · a_k)
= prob(o_1|h a_1) prob(o_2|h a_1 o_1 a_2) · · · prob(o_k|h a_1 o_1 a_2 o_2 · · · a_{k-1} o_{k-1} a_k)
= prob(o_1|h a_1) prob(o_2|h a_1 a_2) · · · prob(o_k|h a_1 a_2 · · · a_k)
= (p^T(Q_V|h) w_{a_1 o_1}) (p^T(Q_V|h) w_{a_1 a_2 o_2}) · · · (p^T(Q_V|h) w_{a_1 a_2 · · · a_k o_k})
= f_t(p(Q_V|h))

In going from the second line to the third, we eliminate observations from the conditions by noting that in a deterministic system, the observation emitted depends only on the sequence of actions executed.
In going from the third line to the fourth, we use the result of Lemma 3 that regardless of the size of Q_V, the predictions for all e-tests for any history h are linear functions of p(Q_V|h). This shows that for deterministic dynamical systems, Q_V, regardless of its size, constitutes an EPSR. □

Update Function: Since predictions are binary in deterministic EPSRs, p(ao|h) must be 1 if o is observed after taking action a in history h:

p(e_i|hao) = p(a o e_i|h) / p(ao|h) = p(a e_i|h) = p^T(Q_V|h) m_{ae_i}

where the second equality from the left comes about because p(ao|h) = 1 and, because o must be observed when a is executed, p(a o e_i|h) = p(a e_i|h); the last equality uses the fact that a e_i is just some other e-test and so from Lemma 3 must be a linear function of p(Q_V|h). It is rather interesting that even though the EPSR formed through Q_V is nonlinear (as seen in Theorem 2), the update function is in fact linear.

4.1 Diversity and e-tests

Rivest and Schapire's [1] diversity representation, the inspiration for e-tests, applies only to deterministic systems and can be explained using the binary outcome matrix V defined at the beginning of Section 3.1. Diversity also uses the predictions of a set of e-tests as its representation of state; it uses as many e-tests as there are distinct columns in the matrix V. Clearly, there can be at most 2^{|S|} distinct columns, and they show that there have to be at least log_2(|S|) distinct columns and that these bounds are tight. Thus the size of the diversity representation can be exponentially smaller or exponentially bigger than the size of a POMDP representation. While we use the same notion of tests as the diversity representation, our use of linear independence of outcome vectors instead of equivalence classes based on equality of outcome vectors allows us to use e-tests on stochastic systems.
Next we show through an example that EPSR models in deterministic dynamic systems can lead to exponential compression over POMDP models in some cases while avoiding the exponential blowup possible in Rivest and Schapire's [1] diversity representation.

4.2 EPSRs can be Exponentially Smaller than Diversity

This first example shows a case in which the size of the EPSR representation is exponentially smaller than the size of the diversity representation. The hit register (see Figure 2a) is a k-bit register (these are the value bits) with an additional special hit bit. There are 2^k + 1 states in the POMDP describing this system—one state in which the hit bit is 1 and 2^k states in which the hit bit is 0 and the value bits take on different combinations of values.

Figure 2: The two example systems. a) The k-bit hit register. There are k value bits and the special hit bit. The value of the hit bit determines the observation and k + 2 actions alter the value of the bits; this is fully specified in Section 4.2. b) The k-bit rotate register. The value of the leftmost bit is observed; this bit can be flipped, and the register can be rotated either to the right or to the left. This is described in greater detail in Section 4.3.

There are k + 2 actions: a flip action Fi for each value bit i that inverts bit i if the hit bit is not set, a set action Sh that sets the hit bit if all the value bits are 0, and a clear action Ch that clears the hit bit. There are two observations: Oh if the hit bit is set and Om otherwise. Rivest and Schapire [1] present a similar problem (their version has no Ch action). The diversity representation requires O(2^{2k}) equivalence classes and thus tests, whereas an EPSR has only 2k + 1 core e-tests (see Table 1 for the core e-tests and update function when k = 2). Table 1: Core e-tests and update functions for the 2-bit hit register problem.
    test       F1             F2             Sh                                Ch
    F1Oh       p(F1Oh)        p(F1Oh)        p(ShOh)                           0
    ShOh       p(F1ShOh)      p(F2ShOh)      p(ShOh)                           p(ShOh)
    F1ShOh     p(ShOh)        p(F2F1ShOh)    p(ShOh) − p(F1Oh) + p(F1ShOh)     p(F1ShOh) − p(F1Oh)
    F2ShOh     p(F2F1ShOh)    p(ShOh)        p(ShOh) − p(F1Oh) + p(F2ShOh)     p(F2ShOh) − p(F1Oh)
    F2F1ShOh   p(F2ShOh)      p(F1ShOh)      p(ShOh) − p(F1Oh) + p(F2F1ShOh)   p(F2F1ShOh) − p(F1Oh)

(Each column gives the updated prediction of the row's e-test after the column's action.)

Lemma 5 For deterministic dynamical systems, the size of the EPSR representation is always upper-bounded by the minimum of the size of the diversity representation and the size of the POMDP representation.

Proof The size of the EPSR representation, |Q_V|, is upper-bounded by |S| by construction of Q_V. The number of e-tests used by the diversity representation is the number of distinct columns in the binary V matrix of Section 3.1, while the number of e-tests used by the EPSR representation is the number of linearly independent columns in V. Clearly the latter is upper-bounded by the former. As a quick example, consider the case of 2-bit binary vectors: there are 4 distinct vectors but only 2 linearly independent ones. □

4.3 EPSRs can be Exponentially Smaller than POMDPs and the Original PSRs

This second example shows a case in which the EPSR representation uses exponentially fewer tests than the number of states in the POMDP representation as well as the original linear PSR representation. The rotate register illustrated in Figure 2b is a k-bit shift-register. There are two observations: O1 is observed if the leftmost bit is 1 and O0 is observed when the leftmost bit is 0. The three actions are R, which shifts the register one place to the right with wraparound, L, which does the opposite, and F, which flips the leftmost bit.

Table 2: Core e-tests and update function for the 4-bit rotate register problem.

    test   R                          L                          F
    FO1    p(FO1) + p(FFO1) − p(RO1)  p(FO1) + p(FFO1) − p(LO1)  p(FFO1)
    RO1    p(RRO1)                    p(FFO1)                    p(RO1)
    LO1    p(FFO1)                    p(RRO1)                    p(LO1)
    FFO1   p(RO1)                     p(LO1)                     p(FO1)
    RRO1   p(LO1)                     p(RO1)                     p(RRO1)
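Table 2's linear update rules can be checked by brute-force simulation. In this sketch (our own), the register's true state is advanced alongside a prediction vector that is updated only via the table; the five predictions remain exact for all 16 hidden states:

```python
import random

def rot_r(b): return (b[3], b[0], b[1], b[2])      # R: rotate right, wraparound
def rot_l(b): return (b[1], b[2], b[3], b[0])      # L: rotate left
def flip(b):  return (1 - b[0], b[1], b[2], b[3])  # F: flip the leftmost bit
step = {'R': rot_r, 'L': rot_l, 'F': flip}

def truth(b):
    """Ground-truth predictions for the core e-tests [FO1, RO1, LO1, FFO1, RRO1]."""
    return [1 - b[0], b[3], b[1], b[0], b[2]]

def update(p, a):
    """Prediction update transcribed from Table 2 (one column per action)."""
    FO1, RO1, LO1, FFO1, RRO1 = p
    if a == 'R':
        return [FO1 + FFO1 - RO1, RRO1, FFO1, RO1, LO1]
    if a == 'L':
        return [FO1 + FFO1 - LO1, FFO1, RRO1, LO1, RO1]
    return [FFO1, RO1, LO1, FO1, RRO1]             # a == 'F'

random.seed(1)
b = (0, 1, 1, 0)              # start state, known here only to score the check
p = truth(b)                  # initial predictions
for a in random.choices('RLF', k=50):
    b, p = step[a](b), update(p, a)
    assert p == truth(b)      # 5 predictions track the 16-state register exactly
```

Note that p(FO1) + p(FFO1) plays the role of the constant 1, which is how rows like FO1 stay linear in the core predictions without an explicit bias term.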
This problem is also presented by Rivest and Schapire as an example of a system whose diversity is exponentially smaller than the number of states in the minimal POMDP, which is 2^k. This is also the number of core s-tests in the equivalent linear PSR (we computed these 2^k s-tests but do not report them here). The diversity is 2k. However, the EPSR that models this system has only k + 1 core e-tests. The tests and update function for the 4-bit rotate register are shown in Table 2.

5 Conclusions and Future Work

In this paper we have used a new type of test, the e-test, to specify a nonlinear PSR for deterministic controlled dynamical systems. This is the first nonlinear PSR for any general class of systems. We proved that in some deterministic systems our new PSR models are exponentially smaller than both the original PSR models and POMDP models. Similarly, compared to the size of Rivest & Schapire's diversity representation (the inspiration for the notion of e-tests), we proved that our PSR models are never bigger but sometimes exponentially smaller. This work has primarily been an attempt to understand the representational implications of using e-tests; as future work, we will explore the computational implications of switching to e-tests.

Acknowledgments Matt Rudary and Satinder Singh were supported by a grant from the Intel Research Council.

References
[1] Ronald L. Rivest and Robert E. Schapire. Diversity-based inference of finite automata. Journal of the ACM, 41(3):555–589, May 1994.
[2] Michael L. Littman, Richard S. Sutton, and Satinder Singh. Predictive representations of state. In Advances in Neural Information Processing Systems 14, 2001.
[3] William S. Lovejoy. A survey of algorithmic methods for partially observed Markov decision processes. Annals of Operations Research, 28(1):47–65, 1991.
[4] Michael L. Littman. Algorithms for Sequential Decision Making. PhD thesis, Brown University, 1996.
[5] Herbert Jaeger.
Observable operator models for discrete stochastic time series. Neural Computation, 12(6):1371–1398, 2000. [6] Satinder Singh, Michael L. Littman, Nicholas E. Jong, David Pardoe, and Peter Stone. Learning predictive state representations. In The Twentieth International Conference on Machine Learning (ICML-2003), 2003. To appear.
A Recurrent Model of Orientation Maps with Simple and Complex Cells Paul Merolla and Kwabena Boahen Department of Bioengineering University of Pennsylvania Philadelphia, PA 19104 {pmerolla,boahen}@seas.upenn.edu

Abstract We describe a neuromorphic chip that utilizes transistor heterogeneity, introduced by the fabrication process, to generate orientation maps similar to those imaged in vivo. Our model consists of a recurrent network of excitatory and inhibitory cells in parallel with a push-pull stage. Similar to a previous model, the recurrent network displays hotspots of activity that give rise to visual feature maps. Unlike previous work, however, the map for orientation does not depend on the sign of contrast. Instead, sign-independent cells driven by both ON and OFF channels anchor the map, while push-pull interactions give rise to sign-preserving cells. These two groups of orientation-selective cells are similar to complex and simple cells observed in V1.

1 Orientation Maps

Neurons in visual areas 1 and 2 (V1 and V2) are selectively tuned for a number of visual features, the most pronounced feature being orientation. Orientation preference of individual cells varies across the two-dimensional surface of the cortex in a stereotyped manner, as revealed by electrophysiology [1] and optical imaging studies [2]. The origin of these preferred orientation (PO) maps is debated, but experiments demonstrate that they exist in the absence of visual experience [3]. To the dismay of advocates of Hebbian learning, these results suggest that the initial appearance of PO maps relies on neural mechanisms oblivious to input correlations. Here, we propose a model that accounts for observed PO maps based on innate noise in neuron thresholds and synaptic currents. The network is implemented in silicon, where heterogeneity is as ubiquitous as it is in biology.

2 Patterned Activity Model

Ernst et al.
have previously described a 2D rate model that can account for the origin of visual maps [4]. Individual units in their network receive isotropic feedforward input from the geniculate and recurrent connections from neighboring units in a Mexican hat profile, described by short-range excitation and long-range inhibition. If the recurrent connections are sufficiently strong, hotspots of activity (or 'bumps') form periodically across space. In a homogeneous network, these bumps of activity are equally stable at any position in the network and are free to wander. Introducing random jitter to the Mexican hat connectivity profiles breaks the symmetry and reduces the number of stable states for the bumps. Subsequently, the bumps are pinned down at the locations that maximize their net local recurrent feedback. In this regime, moving gratings are able to shift the bumps away from their stability points such that the responses of the network resemble PO maps. Therefore, the recurrent network, given an ample amount of noise, can innately generate its own orientation specificity without the need for specific hardwired connections or visually driven learning rules.

2.1 Criticisms of the Bump model

We might posit that the brain uses a similar opportunistic model to derive and organize its feature maps, but the parallels between the primary visual cortex and the Ernst et al. bump model are unconvincing. For instance, the units in their model represent the collective activity of a column, reducing the network dynamics to a firing-rate approximation. But this simplification ignores the rich temporal dynamics of spiking networks, which are known to affect bump stability. More fundamentally, there is no role for functionally distinct neuron types. The primary criticism of Ernst et al.'s bump model is that its input only consists of a luminance channel, and it is not obvious how to replace this channel with ON and OFF rectified channels to account for simple and complex cells.
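A minimal one-dimensional caricature of this rate model can be written in a few lines; the ring size, kernel widths, gains, and jitter level below are invented for illustration and are not the parameters of Ernst et al.:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                     # rate units on a ring
d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
d = np.minimum(d, N - d)                   # circular distance between units
# Mexican hat: short-range excitation minus long-range inhibition
W = 1.5 * np.exp(-d**2 / (2 * 2.0**2)) - 0.5 * np.exp(-d**2 / (2 * 8.0**2))
W += 0.1 * rng.standard_normal((N, N))     # frozen random jitter pins the bumps

r = np.zeros(N)
drive = 0.05 * np.ones(N)                  # isotropic feedforward input
for _ in range(300):                       # relax toward a steady activity profile
    r = np.clip(r + 0.1 * (-r + W @ r + drive), 0.0, 1.0)
```

Without the jitter term the dynamics are translation-symmetric on the ring; with it, the steady profile is heterogeneous and reproducible for a fixed seed, which is the symmetry breaking the text describes.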
One possibility would be to segregate ON-driven and OFF-driven cells (referred to as simple cells in this paper) into two distinct recurrent networks. Because each network would have its own innate noise profile, bumps would form independently. Consequently, there is no guarantee that ON-driven maps would line up with OFF-driven maps, which would result in conflicting orientation signals when these simple cells converge onto sign-independent (complex) cells.

2.2 Simple Cells Solve a Complex Problem

To ensure that both ON-driven and OFF-driven simple cells have the same orientation maps, both ON and OFF bumps must be computed in the same recurrent network so that they are subjected to the same noise profile. We achieve this by building our recurrent network out of cells that are sign-independent; that is, both ON and OFF channels drive the network. These cells exhibit complex cell-like behavior (and are referred to as complex cells in this paper) because they are modulated at double the spatial frequency of a sinusoidal grating input. The simple cells subsequently derive their responses from two separate signals: an orientation-selective feedback signal from the complex cells indicating the presence of either an ON or an OFF bump, and an ON–OFF selection signal that chooses the appropriate response flavor. Figure 1 left illustrates the formation of bumps (highlighted cells) by a recurrent network with a Mexican hat connectivity profile. Extending the Ernst et al. model, these complex bumps seed simple bumps when driven by a grating. Simple bumps that match the sign of the input survive, whereas out-of-phase bumps are extinguished (faded cells) by push-pull inhibition. Figure 1 right shows the local connections within a microcircuit. An EXC (excitatory) cell receives excitatory input from both ON and OFF channels, and projects to other EXC (not shown) and INH (inhibitory) cells. The INH cell projects back in a reciprocal configuration to EXC cells.
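The frequency-doubling claim for the sign-independent cells follows from rectification alone and can be checked in isolation (a toy 1-D grating of our own; no network involved):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 256, endpoint=False)
L = np.sin(2 * np.pi * 4 * x)          # luminance grating, 4 cycles across x
on = np.maximum(L, 0.0)                # ON channel: rectified luminance
off = np.maximum(-L, 0.0)              # OFF channel: rectified inverted luminance
complex_drive = on + off               # drive to sign-independent (complex) cells

def dominant_freq(signal):
    """Index of the strongest nonzero spatial-frequency component."""
    s = signal - signal.mean()
    return int(np.argmax(np.abs(np.fft.rfft(s))))
```

Here dominant_freq(L) is 4 while dominant_freq(complex_drive) is 8: summing the two rectified channels yields |L|, whose fundamental sits at twice the grating frequency.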
The divergence is indicated in Figure 1, left. ON-driven and OFF-driven simple cells receive input in a push-pull configuration (i.e., ON cells are excited by ON inputs and inhibited by OFF inputs, and vice versa), while additionally receiving input from the EXC–INH recurrent network. In this model, we implement our push-pull circuit using monosynaptic inhibitory connections, despite the fact that geniculate input is strictly excitatory. This simplification, while anatomically incorrect, yields a more efficient implementation that is functionally equivalent.

Figure 1: left, Complex and simple cell responses to a sinusoidal grating input. Luminance is transformed into ON (green) and OFF (red) pathways by retinal processing. Complex cells form a recurrent network through excitatory and inhibitory projections (yellow and blue lines, respectively), and clusters of activity occur at twice the spatial frequency of the grating. ON input activates ON-driven simple cells (bright green) and suppresses OFF-driven simple cells (faded red), and vice versa. right, The bump model's local microcircuit: circles represent neurons, curved lines represent axon arbors that end in excitatory synapses (v shape) or inhibitory synapses (open circles). For simplicity, inhibitory interneurons were omitted in our push-pull circuit.

2.3 Mathematical Description

The neurons in our network follow the equation

C dV/dt = −Σ_n δ(t − t_n) + I_syn − I_KCa − I_leak,

where C is the membrane capacitance, dV/dt is the temporal derivative of the membrane voltage, δ(·) is the Dirac delta function, which resets the membrane at the times t_n when it crosses threshold, I_syn is the synaptic current from the network, I_KCa is the spike-rate-adaptation current (defined below), and I_leak is a constant leak current.
Neurons receive synaptic current of the form:

I_syn^EXC = w+ I_ON + w+ I_OFF + w_EE I_EXC − w_EI I_INH + I_back
I_syn^ON  = w+ I_ON − w− I_OFF + w_EE I_EXC − w_EI I_INH
I_syn^OFF = w+ I_OFF − w− I_ON + w_EE I_EXC − w_EI I_INH
I_syn^INH = w_IE I_EXC

where w+ is the excitatory synaptic strength for ON and OFF input synapses, w− is the strength of the push-pull inhibition, w_EE is the synaptic strength for EXC cell projections to other EXC cells, w_EI is the strength of INH cell projections to EXC cells, w_IE is the strength of EXC cell projections to INH cells, I_back is a constant input current, and I_{ON,OFF,EXC,INH} account for all impinging synapses from each of the four cell types. These terms are calculated for cell i using an arbor function that consists of a spatial weighting J(r) and a post-synaptic current waveform α(t):

Σ_{k,n} J(i − k) · α(t − t_n),

where k spans all cells of a given type and n indexes their spike times. The spatial weighting function is described by J(i − k) = exp(−|i − k| / σ), with σ as the space constant. The current waveform, which is non-zero for t > 0, convolves a 1/t function with a decaying exponential:

α(t) = α_0 / (t + τ_c) ∗ exp(−t / τ_e),

where τ_c is the decay rate and τ_e is the time constant of the exponential. Finally, we model spike-rate adaptation with a calcium-dependent potassium channel (KCa), which integrates Ca triggered by spikes at times t_n with a gain K and a time constant τ_k, as described by

I_KCa = K Σ_n exp(−(t − t_n) / τ_k).

3 Silicon Implementation

We implemented our model in silicon using the TSMC (Taiwan Semiconductor Manufacturing Company) 0.25µm 5-metal layer CMOS process. The final chip consists of a 2-D core of 48x48 pixels, surrounded by asynchronous digital circuitry that transmits and receives spikes in real-time.
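The dynamics of Section 2.3 can be simulated directly with forward Euler to see the spike-rate adaptation at work; all constants below (threshold, currents, K, τ_k, step size) are illustrative values of our own, not the chip's:

```python
import numpy as np

C, dt = 1.0, 1e-4                     # capacitance and Euler step (toy units)
V_th, I_syn, I_leak = 1.0, 3.0, 0.5   # threshold, constant drive, leak
K, tau_k = 0.8, 0.5                   # KCa gain and decay time constant

V, I_kca, spikes = 0.0, 0.0, []
for n in range(30000):                # 3 s of simulated time
    I_kca *= np.exp(-dt / tau_k)      # I_KCa = K * sum_n exp(-(t - t_n)/tau_k)
    V += (dt / C) * (I_syn - I_kca - I_leak)
    if V >= V_th:                     # threshold crossing
        spikes.append(n * dt)
        V = 0.0                       # the delta term resets the membrane
        I_kca += K                    # each spike adds K to the adaptation current
isi = np.diff(spikes)                 # interspike intervals
```

Each spike tops up I_KCa, so less net current charges the membrane and the interspike intervals stretch out, which is the throttling behavior the KCa analog implements in silicon.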
Neurons that reach threshold within the array are encoded as address-events and sent off-chip, and concurrently, incoming address-events are sent to their appropriate synapse locations. This interface is compatible with other spike-based chips that use address-events [5]. The fabricated bump chip has close to 460,000 transistors packed in 10 mm2 of silicon area for a total of 9,216 neurons.

3.1 Circuit Design

Our neural circuit was morphed into hardware using four building blocks. Figure 2 shows the transistor implementation for synapses, axonal arbors (diffuser), KCa analogs, and neurons. The circuits are designed to operate in the subthreshold region (except for the spiking mechanism of the neuron). Noise is not purposely designed into the circuits. Instead, random variations from the fabrication process introduce significant deviations in the I-V curves of theoretically identical MOS transistors. The function of the synapse circuit is to convert a brief voltage pulse (neuron spike) into a postsynaptic current with biologically realistic temporal dynamics. Our synapse achieves this by cascading a current-mirror integrator with a log-domain low-pass filter. The current-mirror integrator has a current impulse response that decays as 1/t (with a decay rate set by the voltage τ_c and an amplitude set by A). This time-extended current pulse is fed into a log-domain low-pass filter (equivalent to a current-domain RC circuit) that imposes a rise-time on the post-synaptic current set by τ_e. ON and OFF input synapses receive presynaptic spikes from the off-chip link, whereas EXC and INH synapses receive presynaptic spikes from local on-chip neurons.

Figure 2: Transistor implementations are shown for a synapse, diffuser, KCa analog, and neuron (simplified), with circuit insignias in the top-left of each box. The circuits they interact with are indicated (e.g.
the neuron receives synaptic current from the diffuser as well as adaptation current from the KCa analog; the neuron in turn drives the KCa analog). The far right shows layout for one pixel of the bump chip (vertical dimension is 83µm, horizontal is 30µm). The diffuser circuit models axonal arbors that project to a local region of space with an exponential weighting. Analogous to resistive divider networks, diffusers [6] efficiently distribute synaptic currents to multiple targets. We use four diffusers to implement axonal projections for: the ON pathway, which excites ON and EXC cells and inhibits OFF cells; the OFF pathway, which excites OFF and EXC cells and inhibits ON cells; the EXC cells, which excite all cell types; and the INH cells, which inhibit EXC, ON, and OFF cells. Each diffuser node connects to its six neighbors through transistors that have a pseudo-conductance set by σr, and to its target site through a pseudo-conductance set by σg; the space constant of the exponential synaptic decay is set by the relative levels of σr and σg. The neuron circuit integrates diffuser currents on its membrane capacitance. Diffusers either directly inject current (excitatory) or siphon off current (inhibitory) through a current mirror. Spikes are generated by an inverter with positive feedback (modified from [7]), and the membrane is subsequently reset by the spike signal. We model a calcium concentration in the cell with a KCa analog. K controls the amount of calcium that enters the cell per spike; the concentration decays exponentially with a time constant set by τ_k. Elevated charge levels activate a KCa-like current that throttles the spike-rate of the neuron.

3.2 Experimental Setup

Our setup uses either a silicon retina [8] or a National Instruments DIO (digital input–output) card as input to the bump chip. This allows us to test our V1 model with real-time visual stimuli, similar to the experimental paradigm of electrophysiologists.
More specifically, the setup uses an address-event link [5] to establish virtual point-to-point connectivity between ON or OFF ganglion cells from the retina chip (or DIO card) and ON or OFF synapses on the bump chip. Both the input activity and the output activity of the bump chip are displayed in real-time using receiver chips, which integrate incoming spikes and display their rates as pixel intensities on a monitor. A logic analyzer is used to capture spike output from the bump chip so it can be further analyzed. We investigated responses of the bump chip to gratings moving in sixteen different directions, both qualitatively and quantitatively. For the qualitative aspect, we created a PO map by taking each cell's average activity for each stimulus direction and computing the vector sum. To obtain a quantitative measure, we looked at the normalized vector magnitude (NVM), which reveals the sharpness of a cell's tuning. The NVM is calculated by dividing the vector sum by the magnitude sum for each cell. The NVM is 0 if a cell responds equally to all orientations, and 1 if a cell's orientation selectivity is perfect such that it responds at only a single orientation.

4 Results

We presented sixteen moving gratings to the network, with directions ranging from 0 to 360 degrees. The spatial frequency of the grating is tuned to match the size of the average bump, and the temporal frequency is 1 Hz. Figure 3a shows a resulting PO map for directions from 180 to 360 degrees, looking at the inhibitory cell population (the data look similar for other cell types). Black contours represent stable bump regions, or equivalently, the regions that exceed a prescribed threshold (90 spikes) for all directions. The PO map from the bump chip reveals structure that resembles data from real cortex. Nearby cells tend to prefer similar orientations except at fractures. There are even regions that are similar to pinwheels (delimited by a white rectangle).
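The NVM computation described above (vector sum divided by magnitude sum) can be sketched as follows. The angle-doubling convention, which folds opposite drift directions onto the same orientation, is an assumption here, as the text does not spell it out; the rate values are made up for illustration.

```python
import numpy as np

def normalized_vector_magnitude(rates, directions_deg):
    """NVM = |vector sum| / magnitude sum of the per-direction responses.
    Angles are doubled so that opposite drift directions map onto one
    orientation (assumed convention)."""
    angles = 2.0 * np.deg2rad(np.asarray(directions_deg, dtype=float))
    vector_sum = np.sum(np.asarray(rates, dtype=float) * np.exp(1j * angles))
    return float(np.abs(vector_sum) / np.sum(rates))

directions = np.arange(0.0, 360.0, 22.5)  # sixteen stimulus directions
flat = np.ones(16)                        # equal response at every direction
tuned = np.zeros(16)
tuned[0] = tuned[8] = 100.0               # fires only at one orientation
```

The `flat` cell yields an NVM of 0 and the `tuned` cell an NVM of 1, matching the two limiting cases described in the text.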
A PO map is a useful tool to describe a network's selectivity, but it only paints part of the picture. So we have additionally computed an NVM map and an NVM histogram, shown in Figure 3b and 3c respectively. The NVM map shows that cells with sharp selectivity tend to cluster, particularly around the edge of the bumps. The histogram also reveals that the distribution of cell selectivity across the network varies considerably, skewed towards broadly tuned cells. We also looked at spike rasters from different cell types to gain insight into their phase relationship with the stimulus. In particular, we present recordings for the site indicated by the arrow (see Figure 3a) for gratings moving in eight directions ranging from 0 to 360 degrees in 45-degree increments (this location was chosen because it is in the vicinity of a pinwheel, is reasonably selective, and shows considerable modulation in its firing rate). Figure 4 shows the luminance of the stimulus (bottom sinusoids), ON- (cyan) and OFF-input (magenta) spike trains, and the resulting spike trains from EXC (yellow), INH (blue), ON- (green), and OFF-driven (red) cell types for each of the eight directions. The center polar plot summarizes the orientation selectivity for each cell type by showing the normalized number of spikes for each stimulus. Data is shown for one period. Even though all cell types are selective for the same orientation (regardless of grating direction), complex cell responses tend to be phase-insensitive while the simple cell responses are modulated at the fundamental frequency. It is worth noting that the simple cells have sharper orientation selectivity compared to the complex cells. This trend is characteristic of our data.
Figure 3: (a) PO map for the inhibitory cell population stimulated with eight different directions from 180 to 360 degrees (black represents no activity, contours delineate regions that exceed 90 spikes for all stimuli). Normalized vector magnitude (NVM) data is presented as (b) a map and (c) a histogram.

Figure 4: Spike rasters and polar plot for 8 directions ranging from 0 to 360 degrees. Each set of spike rasters represents, from bottom to top, ON- (cyan) and OFF-input (magenta), INH (yellow), EXC (blue), and ON- (green) and OFF-driven (red). The stimulus period is 1 sec.

5 Discussion

We have implemented a large-scale network of spiking neurons in a silicon chip that is based on layer 4 of the visual cortex. The initial testing of the network reveals a PO map, inherited from innate chip heterogeneities, resembling cortical maps. Our microcircuit proposes a novel function for complex-like cells: they create a sign-independent orientation-selective signal, which through a push-pull circuit creates sharply tuned simple cells with the same orientation preference. Recently, Ringach et al. surveyed orientation selectivity in the macaque [9]. They observed that, in a population of V1 neurons (N=308), the distribution of orientation selectivity is quite broad, having a median NVM of 0.39. We have measured median NVMs ranging from 0.25 to 0.32. Additionally, Ringach et al. found a negative correlation between spontaneous firing rate and NVM. This is consistent with our model because cells closer to the center of the bump have higher firing rates and broader tuning. While the results from the bump chip are promising, our maps are less consistent and noisier than the maps Ernst et al. have reported.
We believe this is because our network is tuned to operate in a fluid state where bumps come on, travel a short distance, and disappear (motivated by cortical imaging studies). But excessive fluidity can cause non-dominant bumps to briefly appear and adversely shift the PO maps. We are currently investigating the role of lateral connections between bumps as a means to suppress these spontaneous shifts. The neural mechanisms that underlie the orientation selectivity of V1 neurons are still highly debated. This may be because neuron responses are not only shaped by feedforward inputs, but are also influenced at the network level. If modeling is going to be a useful guide for electrophysiologists, we must model at the network level while retaining cell-level detail. Our results demonstrate that a spike-based neuromorphic system is well suited to model layer 4 of the visual cortex. The same approach may be used to build large-scale models of other cortical regions.

References

1. Hubel, D. and T. Wiesel, Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol., 1962. 160: p. 106-154.
2. Blasdel, G.G., Orientation selectivity, preference, and continuity in monkey striate cortex. J Neurosci, 1992. 12(8): p. 3139-61.
3. Crair, M.C., D.C. Gillespie, and M.P. Stryker, The role of visual experience in the development of columns in cat visual cortex. Science, 1998. 279(5350): p. 566-70.
4. Ernst, U.A., et al., Intracortical origin of visual maps. Nat Neurosci, 2001. 4(4): p. 431-6.
5. Boahen, K., Point-to-Point Connectivity. IEEE Transactions on Circuits & Systems II, 2000. vol 47 no 5: p. 416-434.
6. Boahen, K. and A. Andreou, A contrast sensitive silicon retina with reciprocal synapses. in NIPS91. 1992: IEEE.
7. Culurciello, E., R. Etienne-Cummings, and K. Boahen, A Biomorphic Digital Image Sensor. IEEE Journal of Solid State Circuits, 2003. vol 38 no 2: p. 281-294.
8.
Zaghloul, K., A silicon implementation of a novel model for retinal processing, in Neuroscience. 2002, UPENN: Philadelphia.
9. Ringach, D.L., R.M. Shapley, and M.J. Hawken, Orientation selectivity in macaque V1: diversity and laminar dependence. J Neurosci, 2002. 22(13): p. 5639-51.
Predicting Speech Intelligibility from a Population of Neurons

Jeff Bondy, Dept. of Electrical Engineering, McMaster University, Hamilton, ON, jeff@soma.crl.mcmaster.ca
Ian C. Bruce, Dept. of Electrical Engineering, McMaster University, Hamilton, ON, ibruce@ieee.org
Suzanna Becker, Dept. of Psychology, McMaster University, becker@mcmaster.ca
Simon Haykin, Dept. of Electrical Engineering, McMaster University, haykin@mcmaster.ca

Abstract

A major issue in evaluating speech enhancement and hearing compensation algorithms is to come up with a suitable metric that predicts intelligibility as judged by a human listener. Previous methods such as the widely used Speech Transmission Index (STI) fail to account for masking effects that arise from the highly nonlinear cochlear transfer function. We therefore propose a Neural Articulation Index (NAI) that estimates speech intelligibility from the instantaneous neural spike rate over time, produced when a signal is processed by an auditory neural model. By using a well-developed model of the auditory periphery and detection theory we show that human perceptual discrimination closely matches the modeled distortion in the instantaneous spike rates of the auditory nerve. In highly rippled frequency transfer conditions the NAI's prediction error is 8% versus the STI's prediction error of 10.8%.

1 Introduction

A wide range of intelligibility measures in current use rest on the assumption that intelligibility of a speech signal is based upon the sum of contributions of intelligibility within individual frequency bands, as first proposed by French and Steinberg [1]. This basic method applies a function of the Signal-to-Noise Ratio (SNR) in a set of bands, then averages across these bands to come up with a prediction of intelligibility.
French and Steinberg's original Articulation Index (AI) is based on 20 equally contributing bands, and produces an intelligibility score between zero and one:

AI = (1/20) Σ_{i=1}^{20} TI_i, (1)

where TI_i (Transmission Index i) is the normalized intelligibility in the ith band. The TI per band is a function of the signal-to-noise ratio:

TI_i = (SNR_i + 12)/30, (2)

for SNRs between −12 dB and 18 dB. An SNR of greater than 18 dB means that the band has perfect intelligibility and TI equals 1, while an SNR under −12 dB means that a band is not contributing at all, and the TI of that band equals 0. The overall intelligibility is then a function of the AI, but this function changes depending on the semantic context of the signal. Kryter validated many of the underlying AI principles [2]. Kryter also presented the mechanics for calculating the AI for different numbers of bands (5, 6, 15, or the original 20), as well as important correction factors [3]. Some of the most important correction factors account for the effects of modulated noise, peak clipping, and reverberation. Even with the application of various correction factors, the AI does not predict intelligibility in the presence of some time-domain distortions. Consequently, the Modulation Transfer Function (MTF) has been utilized to measure the loss of intelligibility due to echoes and reverberation [4]. Steeneken and Houtgast later extended this approach to include nonlinear distortions, giving a new name to the predictor: the Speech Transmission Index (STI) [5]. These metrics proved more valid for a larger range of environments and interferences. The STI test signal is a Gaussian random signal with a long-term average speech spectrum, amplitude modulated by a 0.63 Hz to 12.5 Hz tone. Acoustic components within different frequency bands are switched on and off over the testing sequence to come up with an intelligibility score between zero and one.
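A minimal sketch of the AI calculation just described: each band SNR is mapped linearly onto a Transmission Index clipped to [0, 1], then averaged over the equally contributing bands.

```python
def transmission_index(snr_db):
    """TI = (SNR + 12)/30, clipped: 1 above 18 dB, 0 below -12 dB."""
    return min(max((snr_db + 12.0) / 30.0, 0.0), 1.0)

def articulation_index(band_snrs_db):
    """AI = average of the per-band Transmission Indices."""
    return sum(transmission_index(s) for s in band_snrs_db) / len(band_snrs_db)
```

For example, 20 bands all at 3 dB SNR give an AI of 0.5, while any band at 18 dB or above contributes a full TI of 1.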
Interband intermodulation sources can be discerned, as long as the product does not fall into the testing band. Therefore, the STI allows for standard AI frequency-band-weighted SNR effects, MTF time-domain effects, and some limited measurements of nonlinearities. The STI shows a high correlation with empirical tests, and has been codified as an ANSI standard [6]. For general acoustics it is very good. However, the STI does not accurately model intraband masker nonlinearities, phase distortions, or the underlying auditory mechanisms (outside of independent frequency bands). We therefore sought to extend the AI/STI concepts to predict intelligibility, on the assumption that the closest physical variable we have to the perceptual variable of intelligibility is the auditory nerve response. Using a spiking model of the auditory periphery [7], we form the Neural Articulation Index (NAI) by describing distortions in the spike trains of different frequency bands. The spiking over time of an auditory nerve fiber for an undistorted speech signal (control case) is compared to the neural spiking over time for the same signal after undergoing some distortion (test case). The difference in the estimated instantaneous discharge rate for the two cases is used to calculate a neural equivalent to the TI, the Neural Distortion (ND), for each frequency band. Then the NAI is calculated with a weighted average of NDs at different Best Frequencies (BFs). In general detection theory terms, the control neuronal response sets some locus in a high-dimensional space; the distorted neuronal response will project near that locus if it is perceptually equivalent, or very far away if it is not. Thus, the distance between the control neuronal response and the distorted neuronal response is a function of intelligibility.
Due to the limitations of the STI mentioned above, it is predicted that a measure of the neural coding error will be a better predictor than SNR for human intelligibility word-scores. Our method also has the potential to shed light on the underlying neurobiological mechanisms.

2 Method

2.1 Model

The auditory periphery model used throughout (and hereafter referred to as the Auditory Model) is from [7]. The system is shown in Figure 1.

Figure 1 Block diagram of the computational model of the auditory periphery from the middle ear to the Auditory Nerve. Reprinted from Fig. 1 of [7] with permission from the Acoustical Society of America © (2003).

The auditory periphery model comprises several sections, each providing a phenomenological description of a different part of the cat auditory periphery function. The first section models middle ear filtering. The second section, labeled the "control path," captures the Outer Hair Cells (OHC) modulatory function, and includes a wideband, nonlinear, time-varying, band-pass filter followed by an OHC nonlinearity (NL) and low-pass (LP) filter. This section controls the time-varying, nonlinear behavior of the narrowband signal-path basilar membrane (BM) filter. The control-path filter has a wider bandwidth than the signal-path filter to account for wideband nonlinear phenomena such as two-tone rate suppression. The third section of the model, labeled the "signal path," describes the filter properties and traveling wave delay of the BM (time-varying, narrowband filter); the nonlinear transduction and low-pass filtering of the Inner Hair Cell (IHC NL and LP); spontaneous and driven activity and adaptation in synaptic transmission (synapse model); and spike generation and refractoriness in the auditory nerve (AN). In this model, CIHC and COHC are scaling constants that control IHC and OHC status, respectively.
The parameters of the synapse section of the model are set to produce adaptation and discharge-rate versus level behavior appropriate for a high-spontaneous-rate/low-threshold auditory nerve fiber. In order to avoid having to generate many spike trains to obtain a reliable estimate of the instantaneous discharge rate over time, we instead use the synaptic release rate as an approximation of the discharge rate, ignoring the effects of neural refractoriness.

2.2 Neural articulation index

These results emulate most of the simulations described in Chapter 2 of Steeneken's thesis [8], as it describes the full development of an STI metric from inception to end. For those interested, the following simulations try to map most of the second chapter, but instead of basing the distortion metric on an SNR calculation, we use the neural distortion. There are two sets of experiments. The first, in Section 3.1, deals with applying a frequency weighting structure to combine the band distortion values, while Section 3.2 also introduces redundancy factors. The bands, chosen to match [8], are octave bands centered at [125, 250, 500, 1000, 2000, 4000, 8000] Hz. Only seven bands are used here. The Neural AI (NAI) for this is:

NAI = α_1·NTI_1 + α_2·NTI_2 + … + α_7·NTI_7, (3)

where α_i is the ith band's contribution and NTI_i is the Neural Transmission Index in the ith band. Here all the αs sum to one, so each α factor can be thought of as the percentage contribution of a band to intelligibility. Since NTI is between [0,1], it can also be thought of as the percentage of acoustic features that are intelligible in a particular band. The ND per band is the projection of the distorted (Test) instantaneous spike rate against the clean (Control) instantaneous spike rate:

ND = 1 − (Test^T · Control)/(Control^T · Control), (4)

where Control and Test are vectors of the instantaneous spike rate over time, sampled at 22050 Hz.
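The band distortion and its weighted combination described above can be sketched directly; the spike-rate vectors below are synthetic stand-ins for the model output.

```python
import numpy as np

def neural_distortion(test, control):
    """ND = 1 - (Test . Control)/(Control . Control): zero when the
    distorted rate projects exactly onto the clean rate."""
    test, control = np.asarray(test, float), np.asarray(control, float)
    return 1.0 - np.dot(test, control) / np.dot(control, control)

def neural_ai(ntis, alphas):
    """NAI = sum_i alpha_i * NTI_i, with the alphas summing to one."""
    return float(np.dot(alphas, ntis))

clean = np.array([10.0, 40.0, 25.0, 5.0])  # clean instantaneous rate vector
```

Halving the test rate everywhere gives ND = 0.5, i.e. half of the clean response is unaccounted for; an identical test response gives ND = 0.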
This type of error metric can only deal with steady-state channel distortions, such as the ones used in [8]. ND was then linearly fit to resemble the TI of equations (1)-(2), after normalizing each of the seven bands to have zero mean and unit standard deviation. The NTI in the ith band was calculated as

NTI_i = m·(µ_i − ND_i)/σ_i + b. (5)

NTI_i is then thresholded to be no less than 0 and no greater than 1, following the TI thresholding. In equation (5) the factors m = 2.5 and b = −1 were the best linear fit to produce NTI_i's of 1 in bands with SNR greater than 15 dB, NTI_i's of 0.75 in bands with 7.5 dB SNR, and NTI_i's of 0.5 in bands with 0 dB SNR. This closely followed the procedure outlined in section 2.3.3 of [8]. As the TI is a best linear fit of SNR to intelligibility, the NTI is a best linear fit of neural distortion to intelligibility. The input stimuli were taken from a Dutch corpus [9], and consisted of 10 Consonant-Vowel-Consonant (CVC) words, each spoken by four males and four females and sampled at 44100 Hz. The Steeneken study had many more, but the exact corpus could not be found; 80 total words is enough to produce meaningful frequency weighting factors. There were 26 frequency-channel distortion conditions used for male speakers, 17 for female, and three SNRs (+15 dB, +7.5 dB and 0 dB). The channel conditions were split into four groups, given in Tables 1 through 4 for males; since females have negligible signal in the 125 Hz band, they used a subset, marked with an asterisk in Tables 1 through 4.
Table 1: Rippled Envelope

        Octave-band centre frequency
ID #   125  250  500  1K  2K  4K  8K
1*      1    1    1   1   0   0   0
2*      0    0    0   0   1   1   1
3*      1    1    0   0   0   1   1
4*      0    0    1   1   1   0   0
5*      1    1    0   0   1   1   0
6*      0    0    1   1   0   0   1
7*      1    0    1   0   1   0   1
8*      0    1    0   1   0   1   0

Table 2: Adjacent Triplets

ID #   125  250  500  1K  2K  4K  8K
9       1    1    1   0   0   0   0
10      0    1    1   1   0   0   0
11*     0    0    0   1   1   1   0

Table 3: Isolated Triplets

ID #   125  250  500  1K  2K  4K  8K
12      1    0    1   0   1   0   0
13      1    0    1   0   0   1   0
14      1    0    0   1   0   1   0
15*     0    1    0   1   0   0   1
16*     0    1    0   0   1   0   1
17      0    0    1   0   1   0   1

Table 4: Contiguous Bands

ID #   125  250  500  1K  2K  4K  8K
18*     0    1    1   1   1   0   0
19*     0    0    1   1   1   1   0
20*     0    0    0   1   1   1   1
21      1    1    1   1   1   0   0
22*     0    1    1   1   1   1   0
23*     0    0    1   1   1   1   1
24      1    1    1   1   1   1   0
25      0    1    1   1   1   1   1
26*     1    1    1   1   1   1   1

In the above tables a one represents a passband and a zero a stop band. A 1353-tap FIR filter was designed for each envelope condition. The female envelopes are a subset of these because they have no appreciable speech energy in the 125 Hz octave band. Using the 40 male utterances and 40 female utterances under distortion and calculating the NAI following equation (3) produces only a value between [0,1]. To produce a word-score intelligibility prediction between zero and 100 percent, the NAI value was fit to a third-order polynomial that produced the lowest standard deviation of error from empirical data. While Fletcher and Galt [10] state that the relation between AI and intelligibility is exponential, [8] fits with a third-order polynomial, and we have chosen to compare to [8]. The empirical word-score intelligibility was from [8].

3 Results

3.1 Determining frequency weighting structure

For the first tests, the optimal frequency weights (the values of α_i from equation 3) were designed by minimizing the difference between the predicted intelligibility and the empirical intelligibility.
At each iteration one of the values was dithered up or down, and then the sum of the α_i was normalized to one. This is very similar to [5], whose final standard deviation of prediction error for males was 12.8%, and 8.8% for females. The NAI's final standard deviation of prediction error for males was 8.9%, and 7.1% for females.

Figure 2 Relation between NAI and empirical word-score intelligibility for male (left) and female (right) speech with bandpass limiting and noise. The vertical spread from the best-fitting polynomial for males has a s.d. = 8.9% versus the STI [5] s.d. = 12.8%; for females the fit has a s.d. = 7.1% versus the STI [5] s.d. = 8.8%.

The frequency weighting factors are similar for the NAI and the STI. The STI weighting factors from [8], which produced the optimal prediction of empirical data (male s.d. = 6.8%, female s.d. = 6.0%), and the NAI's are plotted in Figure 3.

Figure 3 Frequency weighting factors for the optimal predictor of male and female intelligibility calculated with the NAI and published by Steeneken [8].

As one can see, the low-frequency information is tremendously suppressed in the NAI, while the high frequencies are emphasized. This may be an effect of the stimulus corpus. The corpus has a high percentage of stops and fricatives in the initial and final consonant positions. Since these have a comparatively large amount of high-frequency signal, they may explain this discrepancy at the cost of the low-frequency weights. [8] does state that these frequency weights are dependent upon the conditions used for evaluation.

3.2 Determining frequency weighting with redundancy factors

In experiment two, rather than using equation (3), which assumes each frequency band contributes independently, we introduce redundancy factors. There is correlation between the different frequency bands of speech [11], which tends to make the STI over-predict intelligibility. The redundancy factors attempt to remove correlated signals between bands.
Equation (3) then becomes:

NAI_r = α_1·NTI_1 − β_1·√(NTI_1·NTI_2) + α_2·NTI_2 − β_2·√(NTI_2·NTI_3) + … + α_7·NTI_7, (6)

where the r subscript denotes a redundant NAI and β is the correlation factor. Only adjacent bands are used here to reduce complexity. We replicated Section 3.1 except using equation (6). The same testing and adaptation strategy from Section 3.1 was used to find the optimal αs and βs.

Figure 4 Relation between NAI_r and empirical word-score intelligibility for male speech (right) and female speech (left) with bandpass limiting and noise with redundancy factors. The vertical spread from the best-fitting polynomial for males has a s.d. = 6.9% versus the STI_r [8] s.d. = 4.7%; for females the best-fitting polynomial has a s.d. = 5.4% versus the STI_r [8] s.d. = 4.0%.

The frequency weighting and redundancy factors given as optimal in Steeneken, versus those calculated by optimizing the NAI_r, are given in Figure 5.

Figure 5 Frequency and redundancy factors for the optimal predictor of male and female intelligibility calculated with the NAI_r and published in [8].

The frequency weights for the NAI_r and STI_r are more similar than in Section 3.1. The redundancy factors are very different though. The NAI redundancy factors show no real frequency dependence, unlike the convex STI redundancy factors. This may be due to differences in optimization that were not clear in [8].

Table 5: Standard Deviation of Prediction Error

          MALE EQ. 3   FEMALE EQ. 3   MALE EQ. 6   FEMALE EQ. 6
NAI          8.9%          7.1%          6.9%          5.4%
STI [5]     12.8%          8.8%            -             -
STI [8]      6.8%          6.0%          4.7%          4.0%

The mean difference in error between the STI_r, as given in [8], and the NAI_r is 1.7%. This difference may be from the limited CVC word choice. It is well within the range of normal speaker variation, about 2%, so we believe that the NAI and NAI_r are comparable to the STI and STI_r in predicting speech intelligibility.

4 Conclusions

These results are very encouraging.
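The redundancy-corrected combination described above can be sketched as follows. The square-root-of-product form used here for the adjacent-band redundancy term follows the usual STIr convention and is an assumption where the source is ambiguous.

```python
import numpy as np

def neural_ai_redundant(ntis, alphas, betas):
    """NAI_r = sum_i alpha_i*NTI_i - sum_i beta_i*sqrt(NTI_i*NTI_{i+1}).
    The beta terms subtract information that adjacent bands share."""
    ntis = np.asarray(ntis, dtype=float)
    direct = float(np.dot(alphas, ntis))
    shared = float(np.dot(betas, np.sqrt(ntis[:-1] * ntis[1:])))
    return direct - shared

alphas = [1.0 / 7] * 7   # equal band weights, summing to one
ntis = [0.8] * 7         # identical per-band transmission indices
```

With all βs zero this reduces to the plain weighted sum of equation (3); with β > 0 and correlated neighbours it predicts lower intelligibility, counteracting the over-prediction noted in the text.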
The NAI provides a modest improvement over the STI in predicting intelligibility. We do not propose this as a replacement for the STI for general acoustics, since the NAI is much more computationally complex than the STI. The NAI's end applications are in predicting intelligibility for the hearing impaired and in using statistical decision theory to describe the auditory system's feature extractors, tasks which the STI cannot do but which are available to the NAI. While the AI and STI can take into account threshold shifts in a hearing-impaired individual, neither can account for sensorineural, suprathreshold degradations [12]. The accuracy of this model, based on cat anatomy and physiology, in predicting human speech intelligibility provides strong validation of attempts to design hearing aid amplification schemes based on physiological data and models [13]. By quantifying the hearing impairment in an intelligibility metric by way of a damaged auditory model, one can provide a more accurate assessment of the distortion, probe how the distortion is changing the neuronal response, and provide feedback for preprocessing via a hearing aid before the impairment. The NAI may also give insight into how the ear codes stimuli for the very robust human auditory system.

References

[1] French, N.R. & Steinberg, J.C. (1947) Factors governing the intelligibility of speech sounds. J. Acoust. Soc. Am. 19:90-119.
[2] Kryter, K.D. (1962) Validation of the articulation index. J. Acoust. Soc. Am. 34:1698-1702.
[3] Kryter, K.D. (1962b) Methods for the calculation and use of the articulation index. J. Acoust. Soc. Am. 34:1689-1697.
[4] Houtgast, T. & Steeneken, H.J.M. (1973) The modulation transfer function in room acoustics as a predictor of speech intelligibility. Acustica 28:66-73.
[5] Steeneken, H.J.M. & Houtgast, T. (1980) A physical method for measuring speech-transmission quality. J. Acoust. Soc. Am. 67(1):318-326.
[6] ANSI (1997) ANSI S3.5-1997 Methods for calculation of the speech intelligibility index. American National Standards Institute, New York.
[7] Bruce, I.C., Sachs, M.B. & Young, E.D. (2003) An auditory-periphery model of the effects of acoustic trauma on auditory nerve responses. J. Acoust. Soc. Am. 113(1):369-388.
[8] Steeneken, H.J.M. (1992) On measuring and predicting speech intelligibility. Ph.D. Dissertation, University of Amsterdam.
[9] van Son, R.J.J.H., Binnenpoorte, D., van den Heuvel, H. & Pols, L.C.W. (2001) The IFA corpus: a phonemically segmented Dutch "open source" speech database. Eurospeech 2001 Poster. http://145.18.230.99/corpus/index.html
[10] Fletcher, H. & Galt, R.H. (1950) The perception of speech and its relation to telephony. J. Acoust. Soc. Am. 22:89-151.
[11] Houtgast, T. & Verhave, J. (1991) A physical approach to speech quality assessment: correlation patterns in the speech spectrogram. Proc. Eurospeech 1991, Genova:285-288.
[12] van Schijndel, N.H., Houtgast, T. & Festen, J.M. (2001) Effects of degradation of intensity, time, or frequency content on speech intelligibility for normal-hearing and hearing-impaired listeners. J. Acoust. Soc. Am. 110(1):529-542.
[13] Sachs, M.B., Bruce, I.C., Miller, R.L. & Young, E.D. (2002) Biological basis of hearing-aid design. Ann. Biomed. Eng. 30:157-168.
Statistical Debugging of Sampled Programs

Alice X. Zheng, EE Division, UC Berkeley, alicez@cs.berkeley.edu
Michael I. Jordan, CS Division and Department of Statistics, UC Berkeley, jordan@cs.berkeley.edu
Ben Liblit, CS Division, UC Berkeley, liblit@cs.berkeley.edu
Alex Aiken, CS Division, UC Berkeley, aiken@cs.berkeley.edu

Abstract

We present a novel strategy for automatically debugging programs given sampled data from thousands of actual user runs. Our goal is to pinpoint those features that are most correlated with crashes. This is accomplished by maximizing an appropriately defined utility function. It has analogies with intuitive debugging heuristics, and, as we demonstrate, is able to deal with various types of bugs that occur in real programs.

1 Introduction

No software is perfect, and debugging is a resource-consuming process. Most users take software bugs for granted, and willingly run buggy programs every day with little complaint. In some sense, these user runs of the program are the ideal test suite any software engineer could hope for. In an effort to harness the information contained in these field tests, companies like Netscape/Mozilla and Microsoft have developed automated, opt-in feedback systems. User crash reports are used to direct debugging efforts toward those bugs which seem to affect the most people. However, we can do much more with the information users may provide. Even if we collect just a little bit of information from every user run, successful or not, we may end up with enough information to automatically pinpoint the locations of bugs. In earlier work [1] we present a program sampling framework that collects data from users at minimal cost; the aggregated runs are then analyzed to isolate the bugs. Specifically, we learn a classifier on the data set, regularizing the parameters so that only the few features that are highly predictive of the outcome have large non-zero weights.
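The regularized-classifier idea can be illustrated with L1-penalized logistic regression trained by proximal gradient descent; this is a sketch in the spirit of the approach, not the paper's exact formulation, and the data below is synthetic, with one predicate feature that deterministically predicts the crash.

```python
import numpy as np

def l1_logistic(X, y, lam=0.05, lr=0.5, iters=3000):
    """Logistic regression with an L1 penalty, fit by proximal gradient
    (ISTA): a gradient step on the logistic loss, then soft-thresholding,
    which drives weights of uninformative features exactly to zero.
    The intercept b is left unpenalized."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(crash)
        r = p - y
        b -= lr * r.mean()                       # intercept step
        w -= lr * (X.T @ r) / n                  # loss gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # shrink
    return w, b

# synthetic user runs: feature 0 is the buggy predicate, 1-9 are noise
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 10)).astype(float)
y = X[:, 0].copy()                               # crash iff feature 0 fired
weights, bias = l1_logistic(X, y)
```

Sorting features by |weight| then surfaces the predicates most correlated with crashing, with the noise features shrunk to (near) zero.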
One limitation of this earlier approach is that it uses different methods to deal with different types of bugs. In this paper, we describe how to design a single classification utility function that integrates the various debugging heuristics. In particular, determinism of some features is a significant issue in this domain, and an additional penalty term for false positives is included to deal with this aspect. Furthermore, utility levels, while subjective, are robust: we offer simple guidelines for their selection, and demonstrate that results remain stable and strong across a wide range of reasonable parameter settings. We start by briefly describing the program sampling framework in Section 2, and present the feature selection framework in Section 3. The test programs and our data set are described in Section 4, followed by experimental results in Section 5. 2 Program Sampling Framework Our approach relies on being able to collect information about program behavior at runtime. To avoid paying large costs in time or space, we sparsely sample the program’s runtime behavior. We scatter a large number of checks in the program code, but do not execute all of them during any single run. The sampled results are aggregated into counts which no longer contain chronology information but are much more space efficient. To catch certain types of bugs, one asks certain types of questions. For instance, function call return values are good sanity checks which many programmers neglect. Memory corruption is another common class of bugs, for which we may check whether pointers are within their prescribed ranges. We add a large set of commonly useful assertions into the code, most of which are wild guesses which may or may not capture interesting behavior. At runtime, the program tosses a coin (with low heads probability) independently for each assertion it encounters, and decides whether or not to execute the assertion. 
However, while it is not expensive to generate a random coin toss, doing so separately for each assertion would incur a very large overhead; the program will run even slower than just executing every assertion. The key is to combine coin tosses. Given i.i.d. Bernoulli random variables with success probability h, the number of trials it takes until the first success is a geometric random variable with probability P(n) = (1 − h)^(n−1) h. Instead of tossing a Bernoulli coin n times, we can generate a geometric random variable to be used as a countdown to the next sample. Each assertion decrements this countdown by 1; when it reaches 0, we perform the assertion and generate another geometric random variable.1 However, checking to see if the counter has reached 0 at every assertion is still an expensive procedure. For further code optimization, we analyze each contiguous acyclic code region (loop- and recursion-free) at compile time and count the maximum number of assertions on any path through that region. Whenever possible, the generated code decrements in bulk, and takes a fast path that skips over the individual checks within a contiguous code region using just a single check against this maximum threshold. Samples are taken in chronological order as the program runs. Useful as it might be, it would take a huge amount of space to record this information. To save space, we instead record only the counts of how often each assertion is found to be true or false. When the program finishes, these counts, along with the program exit status, are sent back to the central server for further analysis. The program sampling framework is a non-trivial software analysis effort. Interested readers may refer to [1] for a more thorough treatment of all the subtleties, along with detailed analyses of performance impact at different sampling rates.
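The geometric countdown trick can be sketched in a few lines. This is an illustrative Python model, not the authors' compiler-based C instrumentation; the names `geometric_countdown` and `SampledChecker` are hypothetical.

```python
import math
import random

def geometric_countdown(h):
    """Draw a Geometric(h) variate: the number of assertion sites passed
    until the next sampled check, with P(n) = (1 - h)**(n - 1) * h."""
    u = 1.0 - random.random()  # u in (0, 1], inverse-CDF sampling
    return max(1, math.ceil(math.log(u) / math.log(1.0 - h)))

class SampledChecker:
    """Hypothetical stand-in for an instrumented program: each call to
    maybe_check() corresponds to reaching one assertion site."""
    def __init__(self, h):
        self.h = h
        self.countdown = geometric_countdown(h)

    def maybe_check(self, assertion):
        self.countdown -= 1
        if self.countdown == 0:        # time to sample: run the check
            self.countdown = geometric_countdown(self.h)
            return assertion()
        return None                    # fast path: skip the check
```

On average one in 1/h assertion sites is executed, which matches tossing an independent Bernoulli coin with heads probability h at every site.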
3 Classification and Feature Selection In the hopes of catching a wide range of bugs, we add a large number of rather wild guesses into the code. Having cast a much bigger net than what we may need, the next step is to identify the relevant features. Let crashes be labeled with an output of 1, and successes labeled with 0. Knowing the final program exit status (crashed or successful) leaves us in a classification setting. [Footnote 1: The sampling density h controls the tradeoff between runtime overhead and data sparsity. It is set to be small enough to have tolerable overhead, which then requires more runs in order to alleviate the effects of sparsity. This is not a problem for large programs like Mozilla and Windows with thousands of crash reports a day.] However, our primary goal is that of feature selection [2]. Good feature selection should be corroborated by classification performance, though in our case, we only care about features that correctly predict one of the two classes. Hence, instead of working in the usual maximum likelihood setting for classification and regularization, we define and maximize a more appropriate utility function. Ultimately, we will see that the two are not wholly unrelated. It has been noted that the goals of variable and feature selection do not always coincide with that of classification [3]. Classification is but the means to an end. As we demonstrate in Section 5, good classification performance assures the user that the system is working correctly, but one still has to examine the selected features to see that they make sense. 3.1 Some characteristics of the problem We concentrate on isolating the bugs that are caused by the occurrence of a small set of features, i.e. assertions that are always true when a crash occurs.2 We want to identify the predicate counts that are positively correlated with the program crashing. In contrast, we do not care much about the features that are highly correlated with successes.
This makes our feature selection an inherently one-sided process. Due to sampling effects, it is quite possible that a feature responsible for the ultimate crash may not have been observed in a given run. This is especially true in the case of “quick and painless” deaths, where a program crashes very soon after the actual bug occurs. Normally this would be an easy bug to find, because one wouldn’t have to look very far beyond the crashing point at the top of the stack. However, this is a challenge for our approach, because there may be only a single opportunity to sample the buggy feature before the program dies. Thus many crashes may have an input feature profile that is very similar to that of a successful run. From the classification perspective, this means that false negatives are quite likely. At the other end of the spectrum, if we are dealing with a deterministic bug3, false positives should have a probability of zero: if the buggy feature is observed to be true, then the program has to crash; if the program did not crash, then the bug must not have occurred. Therefore, for a deterministic bug, any false positives during the training process should incur a much larger penalty compared to any false negatives. 3.2 Designing the utility function Let (x, y) denote a data point, where x is an input vector of non-negative integer counts, and y ∈{0, 1} is the output label. Let f(x; θ) denote a classifier with parameter vector θ. There are four possible prediction outcomes: y = 1 and f(x; θ) = 1, y = 0 and f(x; θ) = 0, y = 1 and f(x; θ) = 0, and y = 0 and f(x; θ) = 1. The last two cases represent false negative and false positive, respectively. 
In the general form of utility maximization for classification (see, e.g., [4]), we can define separate utility functions for each of the four cases, and maximize the sum of the expected utilities: max_θ E_P(Y|x) U(Y, x; θ),   (1) where U(Y, x; θ) = u1(x; θ) Y I{f(x;θ)=1} + u2(x; θ) Y I{f(x;θ)=0} + u3(x; θ)(1 − Y) I{f(x;θ)=0} + u4(x; θ)(1 − Y) I{f(x;θ)=1} + v(θ),   (2) and where I_W is the indicator function for event W. The u_i(x; θ) functions specify the utility of each case. v(θ) is a regularization term, and can be interpreted as a prior over the classifier parameters θ in the Bayesian terminology. We can approximate the distribution P(Y|x) simply by its empirical distribution, P(Y = 1|x) := P̂(Y = 1|x) = y. The actual distribution of input features X is determined by the software under examination, hence it is difficult to specify and highly non-Gaussian. Thus we need a discriminative classifier. Let z = θᵀx, where the x vector is now augmented by a trailing 1 to represent the intercept term.4 We use the logistic function µ(z) to model the class conditional probability: P(Y = 1|x) := µ(z) = 1/(1 + e^(−z)).   (3) The decision boundary is set to 1/2, so that f(x; θ) = 1 if µ(z) > 1/2, and f(x; θ) = 0 if µ(z) ≤ 1/2. The regularization term is chosen to be the ℓ1 norm of θ, which has the effect of driving most θ_i's to zero: v(θ) := −λ‖θ‖₁ = −λ Σ_i |θ_i|. [Footnote 2: There are bugs that are caused by non-occurrence of certain events, such as forgotten initializations. We do not focus on this type of bugs in this paper.] [Footnote 3: A bug is deterministic if it crashes the program every time it is observed. For example, dereferencing a null pointer would crash the program without exception. Note that this notion of determinism is data-dependent: it is always predicated on the trial runs that we have seen.]
To slightly simplify the formula, we choose the same functional form for u1 and u2, but add an extra penalty term for false positives: u1(x; θ) := u2(x; θ) := δ1(log2 µ(x; θ) + 1)   (4) u3(x; θ) := δ2(log2(1 − µ(x; θ)) + 1)   (5) u4(x; θ) := δ2(log2(1 − µ(x; θ)) + 1) − δ3 θᵀx.   (6) Note that the additive constants do not affect the outcome of the optimization; they merely ensure that utility at the decision boundary is zero. Also, we can fold any multiplicative constants of the utility functions into the δ_i, so the base of the log function is freely exchangeable. We find that the expected utility function is equivalent to: E U = δ1 y log µ + δ2(1 − y) log(1 − µ) − δ3 θᵀx (1 − y) I{µ>1/2} − λ‖θ‖₁.   (7) When δ1 = δ2 = 1 and δ3 = 0, Eqn. (7) is akin to the Lasso [5] (standard logistic regression with ML parameter estimation and ℓ1-norm regularization). In general, this expected utility function weighs each class separately using the δ_i, and has an additional penalty term for false positives. Parameter learning is done using stochastic (sub)gradient ascent on the objective function. Besides having desirable properties like fast convergence rate and space efficiency, such on-line methods also improve user privacy. Once the sufficient statistics are collected, the trial run can be discarded, thus obviating the need to permanently store any user's private data on a central server. Eqn. (7) is concave in θ, but the ℓ1 norm and the indicator function are non-differentiable at θ_i = 0 and θᵀx = 0, respectively. This can be handled by subgradient ascent methods5. In practice, we jitter the solution away from the point of non-differentiability by taking a very small step along any subgradient. This means that none of the θ_i's will ever be exactly zero. But this does not matter since weights close enough to zero are essentially taken as zero. Only the few features with the most positive weights are selected at the end.
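The learning rule can be sketched as follows. This is a hypothetical Python illustration of stochastic subgradient ascent on Eqn. (7), not the authors' MATLAB code; natural logs are used (the multiplicative constants fold into the δ_i), and the function names are invented.

```python
import numpy as np

def utility_grad(theta, x, y, d1=1.0, d2=1.0, d3=0.0, lam=0.1):
    """Per-example (sub)gradient of the expected utility of Eqn (7):
    d1*y*log(mu) + d2*(1-y)*log(1-mu) - d3*(theta.x)*(1-y)*1{mu>1/2} - lam*||theta||_1."""
    z = float(theta @ x)
    mu = 1.0 / (1.0 + np.exp(-z))
    g = (d1 * y * (1.0 - mu) - d2 * (1.0 - y) * mu) * x   # logistic terms
    if z > 0.0:                                           # false-positive penalty region
        g = g - d3 * (1.0 - y) * x
    return g - lam * np.sign(theta)                       # l1 subgradient (0 at theta_i = 0)

def fit(X, Y, epochs=60, lr=1e-5, **kw):
    """Stochastic subgradient ascent over the training runs."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            theta = theta + lr * utility_grad(theta, x, y, **kw)
    return theta
```

With δ1 = δ2 = 1 and δ3 = 0 this reduces to ℓ1-regularized logistic regression; raising δ3 makes any positively weighted feature that fires on a successful run costly, as intended for deterministic bugs.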
3.3 Interpretation of the utility functions Let us closely examine the utility functions defined in Eqns. (4)–(6). For the case of Y = 1, Fig. 1(a) plots the function log2 µ(z) + 1. It is positive when z is positive, and approaches 1 as z approaches +∞. It is a crude but smooth approximation of the indicator function for a true positive, y I{µ>1/2}. On the other hand, when z is negative, the utility function is negative, acting as a penalty for false negatives. Similarly, Fig. 1(b) plots the utility functions for Y = 0. In both cases, the utility function has an upper bound of 1, so that the effect of correct classifications is limited. On the other hand, incorrect classifications are undesirable, thus their penalty is an unbounded (but slowly decreasing) negative number. Taking the derivative (d/dz)(log2(1 − µ(z)) + 1) = −µ(z)/log 2, we see that, when z is positive, −1 ≤ −µ(z) ≤ −1/2, so log2(1 − µ(z)) + 1 is sandwiched between two linear functions −z/log 2 and −z/2 log 2. [Footnote 4: Assuming that the more abnormalities there are, the more likely it is for the program to crash, it is reasonable to use a classifier based on a linear combination of features.] [Footnote 5: Subgradients are a generalization of gradients that are also defined at non-differentiable points. A subgradient for a convex function is any sublinear function pivoted at that point, and minorizing the entire convex function. For convex (concave) optimization, any subgradient is a feasible descent (ascent) direction. For more details, see, e.g., [6].] Figure 1: (a) Plot of the true positive indicator function 1{z>0} and the utility function log2 µ(z) + 1. (b) Plot of the true negative indicator function 1{z<0}, the utility function log2(1 − µ(z)) + 1, and its asymptotic slopes −z/log 2 and −z/2 log 2.
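The sandwich bound follows because the utility vanishes at z = 0 and its derivative lies between the two slopes; a quick numerical check (illustrative only; here log 2 denotes the natural logarithm of 2):

```python
import math

def u_true_negative(z):
    """Utility for Y = 0: log2(1 - mu(z)) + 1, with mu the logistic function."""
    mu = 1.0 / (1.0 + math.exp(-z))
    return math.log2(1.0 - mu) + 1.0

# u(0) = 0 and, for z > 0, the derivative -mu(z)/ln 2 lies in
# [-1/ln 2, -1/(2 ln 2)], so u(z) lies between the lines -z/ln 2 and -z/(2 ln 2).
for z in [0.1, 0.5, 1.0, 2.0, 5.0]:
    assert -z / math.log(2) <= u_true_negative(z) <= -z / (2 * math.log(2))
```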
It starts off being closer to −z/2 log 2, but approaches −z/log 2 asymptotically (see Fig. 1(b)). Hence, when the false positive is close to the decision boundary, the additional penalty of θᵀx = z in Eqn. (6) is larger than the default false positive penalty, though the two are asymptotically equivalent. Let us turn to the roles of the multiplicative weights. δ1 and δ2 weigh the relative importance of the two classes, and can be used to deal with imbalanced training sets where one class is disproportionately larger than the other [7]. Most of the time a program exits successfully without crashing, so we have to deal with having many more successful runs than crashed runs (see Section 5). Furthermore, since we really only care about predicting class 1, increasing δ1 beyond an equal balance of the two data sets could be beneficial for feature selection performance. Finally, δ3 is the knob of determinism: if the bug is deterministic, then setting δ3 to a large value will severely penalize false positives; if the bug is not deterministic, then a small value for δ3 affords the necessary slack to accommodate runs which should have failed but did not. As we shall see in Section 5, if the bug is truly deterministic, then the quality of the final features selected will be higher for large δ3 values. In a previous paper [1], we outlined some simple feature elimination heuristics that can be used in the case of a deterministic bug. ⟨Elimination by universal falsehood⟩ discards any counter that is always zero, because it likely represents an assertion that can never be true. This is a very common data preprocessing step. ⟨Elimination by lack of failing example⟩ discards any counter that is zero on all crashes, because what never happens cannot have caused the crash. ⟨Elimination by successful counterexample⟩ discards any counter that is non-zero on any successful run, because these are assertions that can be true without a subsequent program failure.
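The three heuristics act directly on the matrix of assertion counts. A minimal sketch, assuming a runs × features count matrix and a per-run crash label (the names `counts`, `crashed`, and `eliminate` are ours, not from the paper's implementation):

```python
import numpy as np

def eliminate(counts, crashed):
    """Apply the three elimination heuristics for a deterministic bug.
    Returns a boolean mask of features that survive all three."""
    counts = np.asarray(counts)
    crashed = np.asarray(crashed, dtype=bool)
    ever_true = counts.any(axis=0)                         # universal falsehood
    true_on_some_crash = counts[crashed].any(axis=0)       # lack of failing example
    never_true_on_success = ~counts[~crashed].any(axis=0)  # successful counterexample
    return ever_true & true_on_some_crash & never_true_on_success
```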
In our model, if a feature x_i is never positive for any crashes, then its associated weight θ_i will only decrease in the maximization process. Thus it will not be selected as a crash-predictive feature. This handles ⟨elimination by lack of failing example⟩. Also, if a heavily weighted feature x_i is positive on a successful run in the training set, then the classifier is more likely to result in a false positive. The false positive penalty term will then decrease the weight θ_i, so that such a feature is unlikely to be chosen at the end. Thus utility maximization also handles ⟨elimination by successful counterexample⟩. The model we derive here, then, neatly subsumes the ad hoc elimination heuristics used in our earlier work. 4 Two Case Studies As examples, we present two case studies of C programs with bugs that are at the opposite ends of the determinism spectrum. Our deterministic example is ccrypt, a small encryption utility. ccrypt-1.2 is known to contain a bug that involves overwriting existing files. If the user responds to a confirmation prompt with EOF rather than yes or no, ccrypt consistently crashes. Our non-deterministic example is GNU bc-1.06, the Unix command line calculator tool. We find that feeding bc nine megabytes of random input causes it to crash roughly one time in four while calling malloc(), a strong indication of heap corruption. Such bugs are difficult to fix because they are inherently non-deterministic: there is no guarantee that a mangled heap will cause a crash soon or indeed at all. ccrypt's sensitivity to EOF inputs suggests that the problem has something to do with its interactions with standard file operations. Thus, randomly sampling function return values may identify key operations close to the bug. We instrument the program after each function call to sample and record the number of times the return value is negative, zero, or positive.
There are 570 call sites of interest, for 570 × 3 = 1710 counters. In lieu of a large user community, we generate many runs artificially using reasonable inputs. Each run uses a randomly selected set of present or absent files, randomized command line flags, and randomized responses to ccrypt prompts including the occasional EOF. We have collected 7204 trial runs at a sampling rate of 1/100, 1162 of which result in a crash. 6516 (≈90%) of these trial runs are randomly selected for training, and the remaining 688 held aside for cross-validation. Out of the 1710 counter features, 1542 are constant across all runs, leaving 168 counters to be considered in the training process. In the case of bc, we are interested in the behavior of all pointers and buffers. All pointers and array indices are scalars, hence we compare all pairs of scalar values. At any direct assignment to a scalar variable a, we identify all other variables b_1, b_2, …, b_n of the same type that are also in scope. We record the number of times that a is found to be greater than, equal to, or less than each b_i. Additionally, we compare each pointer to the NULL value. There are 30150 counters in all, of which 2908 are not constant across all runs. Our bc data set consists of 3051 runs with distinct random inputs at a sampling rate of 1/1000. 2729 of these runs are randomly chosen as training set, 322 for the hold-out set. 5 Experimental Results We maximize the utility function in Eqn. (7) using stochastic subgradient ascent with a learning rate of 10^(−5). In order to make the magnitude of the weights θ_i comparable to each other, the feature values are shifted and scaled to lie between [0, 1], then normalized to have unit variance. There are four learning parameters, δ1, δ2, δ3, and λ. Since only their relative scale is important, the regularization parameter λ can be set to some fixed value (we use 0.1).
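The preprocessing just described (dropping constant counters, min-max scaling to [0, 1], then normalizing to unit variance) might look like the following sketch, written under that description rather than taken from the authors' code:

```python
import numpy as np

def preprocess(X):
    """Drop counters that are constant across all runs, scale each
    remaining feature to [0, 1], then divide by its standard deviation
    so every feature has unit variance."""
    X = np.asarray(X, dtype=float)
    keep = X.max(axis=0) != X.min(axis=0)   # non-constant across runs
    X = X[:, keep]
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    return X / X.std(axis=0), keep
```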
For each setting of the δ_i, the model is set to run for 60 iterations through the training set, though the process usually converges much sooner. For bc, this takes roughly 110 seconds in MATLAB on a 1.8 GHz Pentium 4 CPU with 1 GB of RAM. The smaller ccrypt dataset requires just under 8 seconds. The values of δ1, δ2, and δ3 can all be set through cross-validation. However, this may take a long time, plus we would like to leave the ultimate control of the values to the users of this tool. The more important knobs are δ1 and δ3: the former controls the relative importance of classification performance on crashed runs, the latter adjusts the believed level of determinism of the bug. Here are some guidelines for setting δ1 and δ3 that we find to work well in practice. (1) In order to counter the effects of imbalanced datasets, the ratio of δ1/δ2 should be at least around the range of the ratio of successful to crashed runs. This is especially crucial for the ccrypt data set, which contains roughly 32 successful runs for every crash. (2) δ3 should not be higher than δ1, because it is ultimately more important to correctly classify crashes than to not have any false positives. Figure 2: (a) best score vs. δ1 and (b) best score vs. δ3 (at δ1 = 30) for the ccrypt data set; (c) best score vs. δ1 and (d) best score vs. δ3 (at δ1 = 5) for the bc data set. All scores shown are the maximum over free parameters. As a performance metric, we look at the hold-out set confusion matrix and define the score as the sum of the percentages of correctly classified data points for each class. Fig. 2(a) shows a plot of cross-validation score (maximum over a number of settings for δ2 and δ3) for the ccrypt data set at various δ1 values.
It is apparent from the plot that any δ1 values in the range of [10, 50] are roughly equivalent in terms of classification performance. Specifically, for the case of δ1 = 30 (which is around the range suggested by Guideline 1 above), Fig. 2(b) shows the cross-validation scores plotted against different values for δ3. In this case, as long as δ3 is in the rough range of [3, 15], the classification performance remains the same.6 Furthermore, settings for δ1 and δ3 that are safe for classification also select high quality features for debugging. The “smoking gun” which directly indicates the ccrypt bug is: traverse.c:122: xreadline() return value == 0 This call to xreadline() returns 0 if the input terminal is at EOF. In all of the above mentioned safe settings for δ1 and δ3, this feature is returned as the top feature. The rest of the higher ranked features are sufficient, but not necessary, conditions for a crash. The only difference is that, in more optimal settings, the separation between the top feature and the rest can be as large as an order of magnitude; in non-optimal settings (classification score-wise), the separation is smaller. For bc, the classification results are even less sensitive to the particular settings of δ1, δ2, and δ3. (See Fig. 2(c,d).) The classification score is roughly constant for δ1 ∈[5, 20], and for a particular value of δ1, such as δ1 = 5, the value of δ3 has little impact on classification performance. This is to be expected: the bug in bc is non-deterministic, and therefore false positives do indeed exist in the training set. Hence any small value for δ3 will do. As for the feature selection results for bc, for all reasonable parameter settings (and even those that do not have the best classification performance), the top features are a group of correlated counters that all point to the index of an array being abnormally big. Below are the top five features for δ1 = 10, δ2 = 2, δ3 = 1: 1. 
storage.c:176: more_arrays(): indx > optopt
2. storage.c:176: more_arrays(): indx > opterr
3. storage.c:176: more_arrays(): indx > use_math
4. storage.c:176: more_arrays(): indx > quiet
5. storage.c:176: more_arrays(): indx > f_count
[Footnote 6: In Fig. 2(b), the classification performance for δ1 = 30 and δ3 = 0 is deceptively high. In this case, the best δ2 value is 5, which offsets the cross-validation score by increasing the number of predicted non-crashes, at the expense of worse crash-prediction performance. The top feature becomes a necessary but not sufficient condition for a crash – a false positive-inducing feature! Hence the lesson is that if the bug is believed to be deterministic then δ3 should always be positive.] These features immediately point to line 176 of the file storage.c. They also indicate that the variable indx seems to be abnormally big. Indeed, indx is the array index that runs over the actual array length, which is contained in the integer variable a_count. The program may crash long after the first array bound violation, which means that there are many opportunities for the sampling framework to observe the abnormally big value of indx. Since there are many comparisons between indx and other integer variables, there is a large set of inter-correlated counters, any subset of which may be picked by our algorithm as the top features. In the training run shown above, the smoking gun of indx > a_count is ranked number 8. But in general its rank could be much smaller, because the top features already suffice for predicting crashes and pointing us to the right line in the code.
In the real world, programs contain not just one, but many bugs, which will not be distinctly labeled in the set of crashed runs. It is difficult to tease out the different failure modes through clustering: clustering relies on macro-level usage patterns, as opposed to the microscopic difference between failures. In on-going research, we are extending our approach to deal with the problem of multiple bugs in larger programs. We are also working on modifying the program sampling framework to allow denser sampling in more important regions of the code. This should alleviate the sparsity of features while reducing the number of runs required to yield useful results. Acknowledgments This work was supported in part by ONR MURI Grant N00014-00-1-0637; NASA Grant No. NAG2-1210; NSF Grant Nos. EIA-9802069, CCR-0085949, ACI-9619020, and IIS-9988642; DOE Prime Contract No. W-7405-ENG-48 through Memorandum Agreement No. B504962 with LLNL. References [1] B. Liblit, A. Aiken, A. X. Zheng, and M. I. Jordan. Bug isolation via remote program sampling. In ACM SIGPLAN PLDI 2003, 2003. [2] A. Blum and P. Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1-2):245–271, 1997. [3] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157–1182, March 2003. [4] E. L. Lehmann. Testing Statistical Hypotheses. John Wiley & Sons, 2nd edition, 1986. [5] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001. [6] J.-B. Hiriart-Urruty and C. Lemarechal. Convex Analysis and Minimization Algorithms, volume II. Springer-Verlag, 1993. [7] N. Japkowicz and S. Stephen. The class imbalance problem: a systematic study. Intelligent Data Analysis Journal, 6(5), November 2002.
Different Cortico-Basal Ganglia Loops Specialize in Reward Prediction on Different Time Scales Saori Tanaka Kenji Doya Nara Institute of Science and Technology ATR Computational Neuroscience Laboratories CREST, Japan Science and Technology Corporation Kyoto, Japan xsaori@atr.co.jp doya@atr.co.jp Go Okada Kazutaka Ueda Yasumasa Okamoto Shigeto Yamawaki Hiroshima University School of Medicine CREST, Japan Science and Technology Corporation Hiroshima, Japan Abstract To understand the brain mechanisms involved in reward prediction on different time scales, we developed a Markov decision task that requires prediction of both immediate and future rewards, and analyzed subjects' brain activities using functional MRI. We estimated the time course of reward prediction and reward prediction error on different time scales from subjects' performance data, and used them as the explanatory variables for SPM analysis. We found topographic maps of different time scales in the medial frontal cortex and striatum. The result suggests that different cortico-basal ganglia loops are specialized for reward prediction on different time scales. 1 Introduction In our daily life, we make decisions based on the prediction of rewards on different time scales; immediate and long-term effects of an action are often in conflict, and biased evaluation of immediate or future outcomes can lead to pathological behaviors. Lesions in the central serotonergic system result in impulsive behaviors in humans [1] and animals [2, 3], which can be attributed to deficits in reward prediction on a long time scale. Damage to the ventral part of the medial frontal cortex (MFC) also causes deficits in decision-making that requires assessment of future outcomes [4-6]. A possible mechanism underlying these observations is that different brain areas are specialized for reward prediction on different time scales, and that the ascending serotonergic system activates those specialized for predictions on longer time scales [7].
The theoretical framework of temporal difference (TD) learning [8] successfully explains reward-predictive activities of the midbrain dopaminergic system as well as those of the cortex and the striatum [9-13]. In TD learning theory, the predicted amount of future reward starting from a state s(t) is formulated as the "value function" V(t) = E[r(t + 1) + γ r(t + 2) + γ^2 r(t + 3) + …]   (1) and learning is based on the TD error δ(t) = r(t) + γ V(t) − V(t − 1).   (2) The 'discount factor' γ controls the time scale of prediction; while only the immediate reward r(t + 1) is considered with γ = 0, rewards in the longer future are taken into account with γ closer to 1. In order to test the above hypothesis [7], we developed a reinforcement learning task which requires a large value of the discount factor for successful performance, and analyzed subjects' brain activities using functional MRI. In addition to conventional block-design analysis, a novel model-based regression analysis revealed topographic representation of the prediction time scale within the cortico-basal ganglia loops. 2 Methods 2.1 Markov Decision Task In the Markov decision task (Fig. 1), markers on the corners of a square present four states, and the subject selects one of two actions by pressing a button (a1 = left button, a2 = right button) (Fig. 1A). The action determines both the amount of reward and the movement of the marker (Fig. 1B). In the REGULAR condition, the next trial is started from the marker position at the end of the previous trial. Therefore, in order to maximize the reward acquired in a long run, the subject has to select an action by taking into account both the immediate reward and the future reward expected from the subsequent state. The optimal behavior is to receive small negative rewards at states s2, s3, and s4 to obtain a large positive reward at state s1 (Fig. 1C).
In the RANDOM condition, the next trial is started from a random marker position so that the subject has to consider only the immediate reward. Thus, the optimal behavior is to collect a larger reward at each state (Fig. 1D). In the baseline condition (NO condition), the reward is always zero. In order to learn the optimal behaviors, the discount factor γ has to be larger than 0.3425 in the REGULAR condition, while it can be arbitrarily small in the RANDOM condition. 2.2 fMRI imaging Eighteen healthy, right-handed volunteers (13 males and 5 females) gave informed consent to take part in the study, with the approval of the ethics and safety committees of ATR and Hiroshima University. A 1.5-Tesla scanner (Marconi, MAGNEX ECLIPSE, Japan) was used to acquire both structural T1-weighted images (TR = 12 s, TE = 450 ms, flip angle = 20 deg, matrix = 256 × 256, FoV = 256 mm, thickness = 1 mm, slice gap = 0 mm) and T2*-weighted echo planar images (TR = 4 s, TE = 55 ms, flip angle = 90 deg, 38 transverse slices, matrix = 64 × 64, FoV = 192 mm, thickness = 4 mm, slice gap = 0 mm) with blood oxygen level-dependent (BOLD) contrast. 2.3 Data analysis The data were preprocessed and analyzed with SPM99 (Friston et al., 1995; Wellcome Department of Cognitive Neurology, London, UK). The first three volumes of images were discarded to avoid T1 equilibrium effects. The images were realigned to the first image as a reference, spatially normalized with respect to the Montreal Neurological Institute EPI template, and spatially smoothed with a Gaussian kernel (8 mm, full-width at half-maximum). Fig. 1. (A) Sequence of stimulus and response events in the Markov decision task. First, one of four squares representing the present state turns green (0 s). As the fixation point turns green (1 s), the subject presses either the right or left button within 1 second.
After a 1 s delay, the green square changes its position (2 s), and then the reward for the current action is presented by a number (2.5 s) and a bar graph showing the cumulative reward during the block is updated (3.0 s). One trial takes four seconds. Subjects performed five trials in the NO condition, 32 trials in the RANDOM condition, five trials in the NO condition, and 32 trials in the REGULAR condition in one block. They repeated four blocks; thus, the entire experiment consisted of 312 trials, taking about 20 minutes. (B) The rule of reward and marker movement. (C) In the REGULAR condition, the optimal behavior is to receive small negative rewards −r1 (−10, −20, or −30 yen) at states s2, s3, and s4 to obtain a large positive reward +r2 (90, 100, or 110 yen) at state s1. (D) In the RANDOM condition, the next trial starts from a random state. Thus, the optimal behavior is to select the larger reward at each state. [Figure 1, panels A–D: task timeline, state-transition diagram over s1–s4 with rewards ±r1 (r1 = 20 ± 10 yen) and ±r2 (r2 = 100 ± 10 yen), and the REGULAR and RANDOM reward structures.] Images of parameter estimates for the contrast of interest were created for each subject. These were then used for a second-level group analysis using a one-sample t-test across the subjects (random effects analysis). We conducted two types of analysis. One was a block-design analysis using three boxcar regressors convolved with a hemodynamic response function as the reference waveform for each condition (RANDOM, REGULAR, and NO). The other was a multivariate regression analysis using explanatory variables representing the time courses of the reward prediction V(t) and the reward prediction error δ(t) estimated from subjects' performance data (described below), in addition to three regressors representing the condition of the block.
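The regression analysis described above builds a design matrix from condition boxcars and model-derived regressors, each convolved with a hemodynamic response function (HRF). A hedged sketch follows; the HRF shape, timings, and regressor values are invented stand-ins, not the SPM99 canonical HRF or the paper's actual design.

```python
import numpy as np

# Hedged sketch of design-matrix construction: a condition boxcar and a
# model-derived regressor, each convolved with an HRF. All shapes and
# numbers below are invented stand-ins.

TR = 4.0                                     # repetition time in seconds
n_scans = 100
t = np.arange(0, 32, TR)
hrf = (t / 6.0) ** 2 * np.exp(-t / 2.0)      # crude gamma-like HRF
hrf /= hrf.sum()                             # unit area

boxcar = np.zeros(n_scans)
boxcar[10:40] = 1.0                          # one invented task block
v = np.random.default_rng(4).normal(size=n_scans)  # stand-in for V(t)

X = np.column_stack([
    np.convolve(boxcar, hrf)[:n_scans],      # block (boxcar) regressor
    np.convolve(v, hrf)[:n_scans],           # model-based regressor
    np.ones(n_scans),                        # constant term
])
```

A voxel-wise GLM then regresses each BOLD time series on the columns of X; the model-based column is what distinguishes this analysis from the pure block design.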
2.4 Estimation of predicted reward V(t) and prediction error δ(t) The time courses of the reward prediction V(t) and the reward prediction error δ(t) were estimated from each subject's performance data, i.e. state s(t), action a(t), and reward r(t), as follows. If the subject starts from a state s(t) and comes back to the same state after k steps, the expected cumulative reward V(t) should satisfy the consistency condition V(t) = r(t + 1) + γ r(t + 2) + … + γ^{k-1} r(t + k) + γ^k V(t). (3) Thus, for each time t of the data file, we calculated the weighted sum of the rewards acquired until the subject returned to the same state and estimated the value function for that episode as V̂(t) = [r(t + 1) + γ r(t + 2) + … + γ^{k-1} r(t + k)] / (1 − γ^k). (4) The estimate of the value function V(t) at time t was given by the average over all previous episodes starting from the same state as at time t, V(t) = (1/L) Σ_{l=1}^{L} V̂(t_l), (5) where {t_1, …, t_L} are the time indices of visits to the same state as s(t), i.e. s(t_1) = … = s(t_L) = s(t). The TD error was given by the difference between the actual reward r(t) and the temporal difference of the value function V(t) according to equation (2). Assuming that different brain areas are involved in reward prediction on different time scales, we varied the discount factor γ over 0, 0.3, 0.6, 0.8, 0.9, and 0.99. Fig. 2. The selected actions of a representative single subject (solid line) and the group-average ratio of selecting the optimal action (dashed line) in the (A) RANDOM and (B) REGULAR conditions. 3 Results 3.1 Behavioral results Figure 2 summarizes the learning performance of a representative single subject (solid line) and the group average (dashed line) during fMRI measurement. Fourteen subjects successfully learned to take larger immediate rewards in the RANDOM condition (Fig.
2A) and to obtain a large positive reward at s1 after small negative rewards at s2, s3, and s4 in the REGULAR condition (Fig. 2B). 3.2 Block-design analysis In the REGULAR vs. RANDOM contrast, we observed significant activation in the dorsolateral prefrontal cortex (DLPFC) (Fig. 3A) (p < 0.001 uncorrected). In the RANDOM vs. REGULAR contrast, we observed significant activation in the lateral orbitofrontal cortex (lOFC) (Fig. 3B) (p < 0.001 uncorrected). The results of the block-design analysis suggest differential involvement of neural pathways in reward prediction on long and short time scales. The result of the RANDOM vs. REGULAR contrast is consistent with previous studies showing that the OFC is involved in reward prediction within a short delay and in reward outcome [14-20]. 3.3 Regression analysis We observed significant correlation with the reward prediction V(t) in the MFC, DLPFC (all γ), ventromedial insula (small γ), dorsal striatum, amygdala, hippocampus, and parahippocampal gyrus (large γ) (p < 0.001 uncorrected) (Fig. 4A). We also found significant correlation with the reward prediction error δ(t) in the IPC, PMd, cerebellum (all γ), ventral striatum (small γ), and lateral OFC (large γ) (p < 0.001 uncorrected) (Fig. 4B). As we changed the time scale parameter γ of reward prediction, we found a rostro-caudal map of correlation with V(t) in the MFC as γ increased. Fig. 3. (A) In the REGULAR vs. RANDOM comparison, significant activation was observed in the DLPFC ((x, y, z) = (46, 45, 9), peak t = 4.06) (p < 0.001 uncorrected). (B) In the RANDOM vs. REGULAR comparison, significant activation was observed in the lateral OFC ((x, y, z) = (-32, 9, -21), peak t = 4.90) (p < 0.001 uncorrected). 4 Discussion In the MFC, the anterior and ventral parts were involved in reward prediction V(t) on shorter time scales (0 ≤ γ ≤ 0.6), whereas the posterior and dorsal parts were involved in reward prediction V(t) on longer time scales (0.6 ≤ γ ≤ 0.99).
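The episodic value estimate of equations (3)-(5) can be sketched directly. Everything below is illustrative: the trajectory is an invented periodic run through the four states with the REGULAR-condition reward pattern, not actual subject data.

```python
# Illustrative sketch of equations (3)-(5); the trajectory is an invented
# periodic run s1 -> s2 -> s3 -> s4 -> s1 with the REGULAR reward pattern.

def episode_value(rewards, t, k, gamma):
    """V_hat(t), eq. (4): discounted rewards collected until the state at
    time t is revisited (after k steps), divided by (1 - gamma**k)."""
    num = sum(gamma ** i * rewards[t + 1 + i] for i in range(k))
    return num / (1.0 - gamma ** k)

gamma = 0.9
rewards = [0] + [-20, -20, -20, 100] * 3     # rewards[t+1] follows state s(t)

v_hat = episode_value(rewards, 0, 4, gamma)  # s1 is revisited after k = 4 steps

# Eq. (5): average V_hat over all episodes starting from the same state.
visits = [0, 4, 8]                           # times at which the state is s1
v_bar = sum(episode_value(rewards, t, 4, gamma) for t in visits) / len(visits)
```

Dividing by (1 − γ^k) makes V̂ satisfy the consistency condition (3) exactly for a perfectly periodic reward stream, and the averaging in (5) smooths estimates across episodes.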
The ventral striatum was involved in the reward prediction error δ(t) on the shortest time scale (γ = 0), while the dorsolateral striatum correlated with the reward prediction V(t) on longer time scales (0.9 ≤ γ ≤ 0.99). These results are consistent with the topographic organization of fronto-striatal connections: the rostral part of the MFC projects to the ventral striatum, whereas the dorsal and posterior parts of the cingulate cortex project to the dorsolateral striatum [21]. In the MFC and the striatum, no significant difference in activity was observed in the block-design analysis, while we did find graded maps of activity for different values of γ. A possible reason is that different parts of the MFC and the striatum are concurrently involved in reward prediction on different time scales, regardless of the task context. Activities of the DLPFC and lOFC, which show significant differences in the block-design analysis (Fig. 3), may be regulated according to the demands of the task. Fig. 4. Voxels with a significant correlation (p < 0.001 uncorrected) with the reward prediction V(t) and the prediction error δ(t) are shown in different colors for different settings of the time scale parameter (γ = 0 in red, γ = 0.3 in orange, γ = 0.6 in yellow, γ = 0.8 in green, γ = 0.9 in cyan, and γ = 0.99 in blue). Voxels correlated with two or more regressors are shown by a mosaic of colors. (A) Significant correlation with the reward prediction V(t) was observed in the MFC, DLPFC, dorsal striatum, insula, and hippocampus. Note the anterior-ventral to posterior-dorsal gradient with increasing γ in the MFC. (B) Significant correlation with the reward prediction error δ(t) at γ = 0 was observed in the ventral striatum. From these results, we propose the following mechanism of reward prediction on different time scales. The parallel cortico-basal ganglia loops are responsible for reward prediction on various time scales.
The 'limbic loop' via the ventral striatum specializes in immediate reward prediction, whereas the 'cognitive and motor loop' via the dorsal striatum specializes in future reward prediction. Each loop learns to predict rewards on its specific time scale. To perform an optimal action under a given time scale, the output of the loop with the appropriate time scale is used for actual action selection. Previous studies of brain damage and serotonergic function suggest that the MFC and the dorsal raphe, which are reciprocally connected [22, 23], play an important role in future reward prediction. The cortico-cortical projections from the MFC, or the serotonergic projections from the dorsal raphe to the cortex and the striatum, may be involved in the modulation of these parallel loops. In the present study, using a novel regression analysis based on subjects' performance data and a reinforcement learning model, we revealed maps of the time scales of reward prediction, which could not be found by conventional block-design analysis. Future studies using this method under pharmacological manipulation of the serotonergic system would clarify the role of serotonin in regulating the time scale of reward prediction. Acknowledgments We thank Nicolas Schweighofer, Kazuyuki Samejima, Masahiko Haruno, Hiroshi Imamizu, Satomi Higuchi, Toshinori Yoshioka, and Mitsuo Kawato for helpful discussions and technical advice. References [1] Rogers, R.D., et al. (1999) Dissociable deficits in the decision-making cognition of chronic amphetamine abusers, opiate abusers, patients with focal damage to prefrontal cortex, and tryptophan-depleted normal volunteers: evidence for monoaminergic mechanisms. Neuropsychopharmacology 20(4):322-339. [2] Evenden, J.L. & Ryan, C.N. (1996) The pharmacology of impulsive behaviour in rats: the effects of drugs on response choice with varying delays of reinforcement. Psychopharmacology (Berl) 128(2):161-170. [3] Mobini, S., et al.
(2000) Effects of central 5-hydroxytryptamine depletion on sensitivity to delayed and probabilistic reinforcement. Psychopharmacology (Berl) 152(4):390-397. [4] Bechara, A., et al. (1994) Insensitivity to future consequences following damage to human prefrontal cortex. Cognition 50(1-3):7-15. [5] Bechara, A., Tranel, D. & Damasio, H. (2000) Characterization of the decision-making deficit of patients with ventromedial prefrontal cortex lesions. Brain 123:2189-2202. [6] Mobini, S., et al. (2002) Effects of lesions of the orbitofrontal cortex on sensitivity to delayed and probabilistic reinforcement. Psychopharmacology (Berl) 160(3):290-298. [7] Doya, K. (2002) Metalearning and neuromodulation. Neural Netw 15(4-6):495-506. [8] Sutton, R.S., Barto, A. G. (1998) Reinforcement learning. Cambridge, MA: MIT press. [9] Houk, J.C., Adams, J.L. & Barto, A.G., A model of how the basal ganglia generate and use neural signals that predict reinforcement, in Models of information processing in the basal ganglia, J.C. Houk, J.L. Davis, and D.G. Beiser, Editors. 1995, MIT Press: Cambridge, Mass. p. 249-270. [10] Schultz, W., Dayan, P. & Montague, P.R. (1997) A neural substrate of prediction and reward. Science 275(5306):1593-1599. [11] Doya, K. (2000) Complementary roles of basal ganglia and cerebellum in learning and motor control. Curr Opin Neurobiol 10(6):732-739. [12] Berns, G.S., et al. (2001) Predictability modulates human brain response to reward. J Neurosci 21(8):2793-2798. [13] O'Doherty, J.P., et al. (2003) Temporal difference models and reward-related learning in the human brain. Neuron 38(2):329-337. [14] Koepp, M.J., et al. (1998) Evidence for striatal dopamine release during a video game. Nature 393(6682):266-268. [15] Rogers, R.D., et al. (1999) Choosing between small, likely rewards and large, unlikely rewards activates inferior and orbital prefrontal cortex. J Neurosci 19(20):9029-9038. [16] Elliott, R., Friston, K.J. & Dolan, R.J. 
(2000) Dissociable neural responses in human reward systems. J Neurosci 20(16):6159-6165. [17] Breiter, H.C., et al. (2001) Functional imaging of neural responses to expectancy and experience of monetary gains and losses. Neuron 30(2):619-639. [18] Knutson, B., et al. (2001) Anticipation of increasing monetary reward selectively recruits nucleus accumbens. J Neurosci 21(16):RC159. [19] O'Doherty, J.P., et al. (2002) Neural responses during anticipation of a primary taste reward. Neuron 33(5):815-826. [20] Pagnoni, G., et al. (2002) Activity in human ventral striatum locked to errors of reward prediction. Nat Neurosci 5(2):97-98. [21] Haber, S.N., et al. (1995) The orbital and medial prefrontal circuit through the primate basal ganglia. J Neurosci 15(7 Pt 1):4851-4867. [22] Celada, P., et al. (2001) Control of dorsal raphe serotonergic neurons by the medial prefrontal cortex: Involvement of serotonin-1A, GABA(A), and glutamate receptors. J Neurosci 21(24):9917-9929. [23] Martin-Ruiz, R., et al. (2001) Control of serotonergic function in medial prefrontal cortex by serotonin-2A receptors through a glutamate-dependent mechanism. J Neurosci 21(24):9856-9866.
2003
Measure Based Regularization Olivier Bousquet, Olivier Chapelle, Matthias Hein Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany {first.last}@tuebingen.mpg.de Abstract We address in this paper the question of how knowledge of the marginal distribution P(x) can be incorporated in a learning algorithm. We suggest three theoretical methods for taking this distribution into account for regularization and provide links to existing graph-based semi-supervised learning algorithms. We also propose practical implementations. 1 Introduction Most existing learning algorithms perform a trade-off between fit of the data and 'complexity' of the solution. The way this complexity is defined varies from one algorithm to the other and is usually referred to as a prior probability or a regularizer. The choice of this term amounts to having a preference for certain solutions, and there is no a priori best such choice since it depends on the learning problem to be addressed. This means that the right choice should be dictated by prior knowledge or assumptions about the problem or the class of problems to which the algorithm is to be applied. Let us consider the binary classification setting. A typical assumption that is (at least implicitly) used in many learning algorithms is the following: Two points that are close in input space should have the same label. One possible way to enforce this assumption is to look for a decision function which is consistent with the training data and which does not change too much between neighboring points. This can be done in a regularization setting, using the Lipschitz norm as a regularizer. For differentiable functions, the Lipschitz norm of a function is the supremum of the norm of the gradient. It is thus natural to consider algorithms of the form min_f sup_x ∥∇f(x)∥ under constraints y_i f(x_i) ≥ 1.
(1) Performing such a minimization on the set of linear functions leads to the maximum margin solution (since the gradient of x ↦ ⟨w, x⟩ is w), whereas the 1-nearest-neighbor decision function is one of the solutions of the above optimization problem when the set of functions is unconstrained [13]. Although very useful because widely applicable, the above assumption is sometimes too weak. Indeed, most 'real-world' learning problems have more structure than what this assumption captures. For example, most data is located in regions where the label is constant (clusters), and regions where the label is not well-defined are typically of low density. This can be formulated via the so-called cluster assumption: Two points that are connected by a line that goes through high density regions should have the same label. Another related way of stating this assumption is to say that the decision boundary should lie in regions of low density. Our goal is to propose possible implementations of this assumption. It is important to notice that in the context of supervised learning, knowledge of the joint probability P(x, y) is enough to achieve perfect classification (taking arg max_y P(x, y) as the decision function), while in semi-supervised learning, even if one knows the distribution P(x) of the instances, there is no unique or optimal way of using it. We will thus try to propose a principled approach to this problem. A similar attempt was made in [10] but in a probabilistic context, where the decision function was modeled by a conditional probability distribution, while here we consider arbitrary real-valued functions and use the standard regularization approach. We will use three methods for obtaining regularizers that depend on the distribution P(x) of the data. In section 2 we suggest to modify the regularizer in a general way by weighting it with the data density.
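The linear special case mentioned above can be checked numerically: for f(x) = ⟨w, x⟩ the gradient is w everywhere, so problem (1) reduces to the hard-margin SVM. A hedged sketch on invented 2-D data (no bias term, as in the statement of (1)):

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: for linear f(x) = <w, x>, sup_x ||grad f(x)|| = ||w||, so
# problem (1) becomes  min ||w||  s.t.  y_i <w, x_i> >= 1  (hard-margin SVM).
# The four data points are invented; minimizing ||w||^2/2 is equivalent.

X = np.array([[1.0, 0.5], [2.0, -0.5], [-1.0, 0.3], [-2.0, -0.2]])
y = np.array([1.0, 1.0, -1.0, -1.0])

res = minimize(
    lambda w: 0.5 * w @ w,
    x0=np.zeros(2),
    method='SLSQP',
    constraints=[{'type': 'ineq',
                  'fun': lambda w, i=i: y[i] * (X[i] @ w) - 1.0}
                 for i in range(len(y))],
)
w = res.x
margins = y * (X @ w)                        # all should be >= 1 at optimum
```

For this toy configuration the active constraints pin w to (1, 0), the usual maximum-margin direction.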
Then in section 3 we adopt a geometric approach where we suggest to modify the distances in input space (in a local manner) to take the density into account (i.e. we stretch or blow up the space depending on the density). The third approach, presented in section 4, builds on spectral methods. The idea is to look for the analogue of graph-based spectral methods when the amount of available data is infinite. We show that these three approaches are related in various ways, and in particular we clarify the asymptotic behavior of graph-based regularization. Finally, in section 5 we give a practical method for implementing one of the proposed regularizers and show its application on a toy problem. 2 Density based regularization The first approach we propose is to start with a gradient-based regularizer like ∥∇f∥ which penalizes large variations of the function. Now, to implement the cluster assumption one has to penalize variations of the function more in high density regions and less in low density regions. A natural way of doing this is to replace ∥∇f∥ by ∥p∇f∥, where p is the density of the marginal distribution P. More generally, instead of the gradient, one can consider a regularization map L : R^X → (R^+)^X, where L(f)(x) is a measure of the smoothness of the function f at the point x, and then consider the following regularization term Ω(f) = ∥L(f) χ(p)∥, (2) where χ is a strictly increasing function. An interesting case is when the norm in (2) is chosen as the L2 norm. Then, Ω(f) can be the norm of a Reproducing Kernel Hilbert Space (RKHS), which means that there exist a Hilbert space H and a kernel function k : X² → R such that √⟨f, f⟩_H = Ω(f) and ⟨f, k(x, ·)⟩_H = f(x). (3) The reason for using an RKHS norm is the so-called representer theorem [5]: the function minimizing the corresponding regularized loss can be expressed as a linear combination of the kernel function evaluated at the labeled points.
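For the L2 norm with L(f) = ∇f and χ(p) = √p, Ω(f)² becomes ∫ ∥∇f(x)∥² p(x) dx, which is just the expectation of ∥∇f∥² under p and therefore has a simple Monte Carlo estimate when samples from p are available. A minimal sketch, with an invented density and test function:

```python
import numpy as np

# Minimal sketch: with L(f) = grad f and chi(p) = sqrt(p) under the L2 norm,
# Omega(f)^2 = integral of ||grad f(x)||^2 p(x) dx = E_p[||grad f||^2],
# estimated here by a sample average. Density and f are both invented.

rng = np.random.default_rng(0)

def grad_f(x):
    # toy smooth function f(x) = sin(x0) + x1^2, gradient in closed form
    return np.stack([np.cos(x[:, 0]), 2.0 * x[:, 1]], axis=1)

x = rng.normal(size=(5000, 2))               # samples from p = N(0, I)
omega_sq = float(np.mean(np.sum(grad_f(x) ** 2, axis=1)))
```

For this toy choice the exact value is (1 + e^{-2})/2 + 4 ≈ 4.57, so the sample average should land close to it.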
However, it is not straightforward to find the kernel associated with an RKHS norm. In general, one has to solve equation (3). For instance, in the case L(f) = (f² + ∥∇f∥²)^{1/2} and without taking the density into account (χ = 1), it has been shown in [3] that the corresponding kernel is the Laplacian one, k(x, y) = exp(−∥x − y∥_{L1}), with associated inner product ⟨f, g⟩_H = ⟨f, g⟩_{L2} + ⟨∇f, ∇g⟩_{L2}. Taking the density into account, this inner product becomes ⟨f, g⟩_H = ⟨f, χ²(p)g⟩_{L2} + ⟨∇f, χ²(p)∇g⟩_{L2}. Plugging g = k(x, ·) into the above and expressing that (3) should hold for all f ∈ H, we find that k must satisfy χ²(p)k(x, ·) − ∇(χ²(p)∇k(x, ·)) = δ(x − ·), where δ is the Dirac delta function. However, solving this differential equation is not an easy task for arbitrary p. Since finding the kernel function associated to a regularizer is, in general, a difficult problem, we propose to perform the minimization of the regularized loss on a fixed set of basis functions, i.e. f is expressed as a linear combination of functions ϕ_i: f(x) = Σ_{i=1}^{l} α_i ϕ_i(x) + b. (4) We will present in section 5 a practical implementation of this approach. 3 Density based change of geometry We now try to adopt a geometric point of view. First we translate the cluster assumption into a geometric statement, then we explore how to enforce it by changing the geometry of our underlying space. A similar approach was recently proposed by Vincent and Bengio [12]. We will see that there exists such a change of geometry which leads to the same type of regularizer that was proposed in section 2. Recall that the cluster assumption states that points are likely to be in the same class if they can be connected by a path through high density regions. Naturally this means that we have to weight paths according to the density they are going through. This leads to introducing a new distance measure on the input space (typically R^d), defined as the length of the shortest weighted path connecting two points.
With this new distance, we simply have to enforce that close points have the same label (we thus recover the standard assumption). Let us make this more precise. We consider the euclidean space R^d as a flat Riemannian manifold with metric tensor δ, denoted by (R^d, δ). A Riemannian manifold (M, g) is also a metric space with the following path (or geodesic) distance: d(x, y) = inf_γ {L(γ) | γ : [a, b] → M, γ(a) = x, γ(b) = y}, where γ is a piecewise smooth curve and L(γ) is the length of the curve, given by L(γ) = ∫_a^b √(g_ij(γ(t)) γ̇^i γ̇^j) dt. (5) We now want to change the metric δ of R^d such that the new distance is the weighted path distance corresponding to the cluster assumption. The only information we have is the local density p(x), which is a scalar at every point and as such can only lead to an isotropic transformation in the tangent space T_x M. Therefore we consider the following conformal transformation of the metric δ: δ_ij → g_ij = (1/χ(p(x))) δ_ij, (6) where χ is a strictly increasing function. We denote by (R^d, g) the distorted euclidean space. Note that this kind of transformation also changes the volume element √g dx^1 … dx^d, where g is the determinant of g_ij: dx^1 … dx^d → √g dx^1 … dx^d = (1/χ(p)^{d/2}) dx^1 … dx^d. (7) In the following we will choose χ(x) = x, which is the simplest choice that gives the desired properties. The distance structure of the transformed space now implements the cluster assumption, since we see from (5) that all paths get weighted by the inverse density. Therefore we can use any metric-based classification method and it will automatically take into account the density of the data. For example, the nearest neighbor classifier in the new distance is equivalent to the density-weighted Lipschitz regularization (1) proposed in the last section. However, implementing such a method requires computing the geodesic distance in (R^d, g), which is non-trivial for arbitrary densities p.
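With χ(p) = p, the length element of (6) scales as ∥dx∥/√p, so a discrete stand-in for the geodesic is a shortest path on a neighborhood graph whose edges are euclidean lengths divided by √p at the edge midpoint. The sketch below is illustrative only: the density, point set, and neighborhood radius are all invented.

```python
import heapq
import numpy as np

# Sketch of a graph approximation to the distorted geodesic with chi(p) = p:
# an edge between nearby points x, y gets length ||x - y|| / sqrt(p(mid)),
# and Dijkstra shortest paths approximate d(x, y). Everything is invented.

def p(x):                                    # two bumps separated by a gap
    c1, c2 = np.array([2.0, 0.0]), np.array([-2.0, 0.0])
    return (np.exp(-np.sum((x - c1) ** 2)) +
            np.exp(-np.sum((x - c2) ** 2)) + 1e-3)

def edge(u, v):
    return np.linalg.norm(u - v) / np.sqrt(p((u + v) / 2.0))

def geodesic(points, i, j, eps=1.5):
    """Dijkstra shortest path on the eps-neighborhood graph."""
    dist = {i: 0.0}
    heap = [(0.0, i)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == j:
            return d
        if d > dist.get(u, np.inf):
            continue
        for v in range(len(points)):
            if v != u and np.linalg.norm(points[u] - points[v]) <= eps:
                nd = d + edge(points[u], points[v])
                if nd < dist.get(v, np.inf):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return np.inf

pts = np.array([[s, 0.0] for s in np.linspace(-3, 3, 25)])
d_geo = geodesic(pts, 0, 24)                 # crosses the low-density gap
d_euc = float(np.linalg.norm(pts[0] - pts[24]))
```

Because the path must cross the low-density gap between the two bumps, the weighted distance comes out much larger than the euclidean one, which is exactly the effect the cluster assumption asks for.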
We suggest the following approximation, which is similar in spirit to the approach in [11]. Since we have a global chart of R^d, we can give for each neighborhood B_ϵ(x) in the euclidean space the following upper and lower bounds for the geodesic distance: inf_{z∈B_ϵ(x)} √(1/p(z)) ∥x − y∥ ≤ d(x, y) ≤ sup_{z∈B_ϵ(x)} √(1/p(z)) ∥x − y∥, ∀y ∈ B_ϵ(x). (8) Then we choose a real ϵ and set, for each x, the distance to all points in a p(x)^{−1/2} ϵ-neighborhood of x as d(x, y) = p((x + y)/2)^{−1/2} ∥x − y∥. The geodesic distance can then be approximated by the shortest path along the obtained graph. We now show the relationship to the regularization-based approach of the previous section. We denote by ∥·∥_{L2(R^d,g,Σ)} the L2 norm in (R^d, g) with respect to the measure Σ, and by µ the standard Lebesgue measure on R^d. Let us consider the regularizer ∥∇f∥²_{L2(R^d,δ,µ)}, which is the standard L2 norm of the gradient. Modifying this regularizer according to section 2 (by changing the underlying measure) gives S(f) = ∥∇f∥²_{L2(R^d,δ,P)}. On the distorted space (R^d, g) we keep the Lebesgue measure µ, which can be done by integrating on the manifold with respect to the density σ = 1/√g = p^{d/2}, which then cancels with the volume element: σ√g dx^1 … dx^d = dx^1 … dx^d. Since on (R^d, g) we have ∥∇f∥² = p(x) δ^{ij} (∂f/∂x^i)(∂f/∂x^j), we get equivalence of S(f): S(f) = ∥∇f∥²_{L2(R^d,δ,P)} = ∫_{R^d} p(x) δ^{ij} (∂f/∂x^i)(∂f/∂x^j) dx^1 … dx^d = ∥∇f∥²_{L2(R^d,g,µ)}. (9) This shows that modifying the measure and keeping the geometry, or modifying the geometry and keeping the Lebesgue measure, leads to the same regularizer S(f). However, there is a structural difference between the spaces (R^d, δ, P) and (R^d, g, µ) even if S(f) is the same. Indeed, for regularization operators corresponding to higher order derivatives the above correspondence is not valid any more. 4 Link with Spectral Techniques Recently, there has been a lot of interest in spectral techniques for non-linear dimension reduction, clustering or semi-supervised learning.
The general idea of these approaches is to construct an adjacency graph on the (unlabeled) points whose weights are given by a matrix W. Then the first eigenvectors of a modified version of W give a more suitable representation of the points (taking into account their manifold and/or cluster structure). An instance of such an approach and related references are given in [1], where the authors propose to use the following regularizer: (1/2) Σ_{i,j=1}^{m} (f_i − f_j)² W_ij = f^⊤(D − W)f, (10) where f_i is the value of the function at point x_i (the index ranges over labeled and unlabeled points), D is a diagonal matrix with D_ii = Σ_j W_ij, and W_ij is chosen as a function of the distance between x_i and x_j, for example W_ij = K(∥x_i − x_j∥/t). Given a sample x_1, …, x_m of m i.i.d. instances sampled according to P(x), it is possible to rewrite (10) after normalization as the following random variable: U_f = (1/(2m(m − 1))) Σ_{i,j} (f(x_i) − f(x_j))² K(∥x_i − x_j∥/t). Under the assumption that f and K are bounded, the result of [4] (see Inequality (5.7) in that paper, which applies to U-statistics) gives P[U_f ≥ E[U_f] + t] ≤ e^{−mt²/C²}, where C is a constant which does not depend on m and t. This shows that for each fixed function, the normalized regularizer U_f converges towards its expectation when the sample size increases. Moreover, one can check that E[U_f] = (1/2) ∫∫ (f(x) − f(y))² K(∥x − y∥/t) dP(x) dP(y). (11) This is the term that should be used as a regularizer if one knows the whole distribution, since it is the limit of (10)¹. The following proposition relates the regularizer (11) to the one defined in (2). Proposition 4.1 If p is a density which is Lipschitz continuous and K is a continuous function on R^+ such that x^{2+d}K(x) ∈ L², then for any function f ∈ C²(R^d) with bounded Hessian, lim_{t→0} (d / (C t^{2+d})) ∫∫ (f(x) − f(y))² K(∥x − y∥/t) p(x) p(y) dx dy (12) = ∫ ∥∇f(x)∥² p²(x) dx, (13) where C = ∫_{R^d} ∥x∥² K(∥x∥) dx. Proof: Let us fix x.
Writing a Taylor-Lagrange expansion of f and p around x in terms of h = (y − x)/t gives ∫ (f(x) − f(y))² K(∥x − y∥/t) p(y) dy = ∫ (t⟨∇f(x), h⟩ + O(t²∥h∥²))² K(∥h∥)(p(x) + O(t∥h∥)) t^d dh = t^{d+2} p(x) ∫ ⟨∇f(x), h⟩² K(∥h∥) dh + O(t^{d+3}). (14) [Footnote 1: We have shown that the convergence of U_f towards E[U_f] happens for each fixed f, but this convergence can be uniform over a set of functions, provided this set is small enough.] To conclude the proof, we rewrite this last integral as ∇f(x)^⊤ (∫ hh^⊤ K(∥h∥) dh) ∇f(x) = ∥∇f(x)∥² C/d. The last equality comes from the fact that, by symmetry considerations, ∫ hh^⊤ K(∥h∥) dh is equal to a constant (call it C₂) times the identity matrix, and this constant can be computed by C₂ d = trace(∫ hh^⊤ K(∥h∥) dh) = ∫ h^⊤h K(∥h∥) dh = C. □ Note that different K lead to different affinity matrices: if we choose K(x) = exp(−x²/2), we get a Gaussian RBF affinity matrix as used in [7], whereas K(x) = 1_{x≤1} leads to an unweighted neighborhood graph (at scale t) [1]. So we have proved that if one takes the limit of the regularizer (10) when the sample size goes to infinity and the scale parameter t goes to 0 (with appropriate scaling), one obtains the regularizer ∫ ∥∇f(x)∥² p²(x) dx = ⟨f, ∇*D_p² ∇f⟩, where ∇* is the adjoint of ∇, D_p is the diagonal operator that maps f to pf, and ⟨·, ·⟩ is the inner product in L². In [2], the authors investigated the limiting behavior of the regularizer D − W obtained from the graph and claimed that it is the empirical counterpart of the Laplace operator defined on the manifold. However, this is true only if the distribution is uniform on the manifold. We have shown that, in the general case, the continuous equivalent of the graph Laplacian is ∇*D_p² ∇. 5 Practical Implementation and Experiments As mentioned in section 2, it is difficult in general to find the kernel associated with a given regularizer; instead, we decided to minimize the regularized loss on a fixed basis of functions (ϕ_i)_{1≤i≤l}, as expressed by equation (4).
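The finite-sample regularizer of equation (10) is easy to verify numerically: for symmetric weights W, the quadratic form f^⊤(D − W)f equals half the weighted sum of squared differences. A small check on invented data:

```python
import numpy as np

# Numerical check of equation (10): for symmetric weights W,
# (1/2) sum_ij (f_i - f_j)^2 W_ij == f^T (D - W) f, with D_ii = sum_j W_ij.
# Points, kernel width, and f are all invented.

rng = np.random.default_rng(1)
x = rng.normal(size=(30, 2))
sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
W = np.exp(-sq / 1.0)                        # Gaussian affinity, t = 1
np.fill_diagonal(W, 0.0)
D = np.diag(W.sum(axis=1))
f = rng.normal(size=30)

lhs = 0.5 * np.sum((f[:, None] - f[None, :]) ** 2 * W)
rhs = float(f @ (D - W) @ f)
```

The identity holds for any symmetric W, which is why D − W (the graph Laplacian) is positive semi-definite and penalizes functions that vary across strongly connected points.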
The regularizer we considered is of the form (2) and reads Ω(f) = ∥ ∥∇f∥ √p ∥²_{L2} = ∫ ∇f(x) · ∇f(x) p(x) dx. Thus, the coefficients α and b in expansion (4) are found by minimizing the following convex regularized functional: R_emp(f) + λ ∥L(f)√p∥²_{L2} = (1/n) Σ_{i=1}^{n} ℓ(f(x_i), y_i) + λ Σ_{i,j=1}^{l} α_i α_j ∫ ∇ϕ_i(x) · ∇ϕ_j(x) p(x) dx. (15) Introducing the l × l matrix H_ij = ∫ ∇ϕ_i(x) · ∇ϕ_j(x) p(x) dx and the n × l matrix K with K_ij = ϕ_j(x_i), the minimization of the functional (15) is equivalent to the following one for the standard L1-SVM loss: min_{α,b} α^⊤Hα + C Σ_{i=1}^{n} ξ_i under constraints ∀i, y_i(Σ_{j=1}^{l} K_ij α_j + b) ≥ 1 − ξ_i. The dual formulation of this optimization problem turns out to be the standard SVM one with a modified kernel function (see also [9]): max_β Σ_{i=1}^{n} β_i − (1/2) Σ_{i,j=1}^{n} β_i β_j y_i y_j L_ij, under constraints 0 ≤ β_i ≤ C and Σ β_i y_i = 0, with L = KH^{−1}K^⊤. Once the vector β has been found, the coefficients α of the expansion are given by α = H^{−1}K^⊤ diag(Y) β. Figure 1: Two moons toy problem: there are 2 labeled points (the cross and the triangle) and 200 unlabeled points. The gray level corresponds to the output of the function. The function was expanded on all unlabeled points (m = 200 in (4)) and the widths of the Gaussians were chosen as σ = 0.5 and σ_p = 0.05. In order to calculate the H_ij, one has to compute an integral. From now on, we consider a special case where this integral can be computed analytically: • The basis functions are Gaussian RBFs, ϕ_i(x) = exp(−∥x − x_i∥²/(2σ²)), where the points x_1, …, x_l can be chosen arbitrarily. We decided to take the unlabeled points (or a subset of them) for this expansion. • The marginal density p is estimated using a Parzen window with a Gaussian kernel, p(x) = (1/m) Σ_{i=1}^{m} exp(−∥x − x_i∥²/(2σ_p²)).
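The modified kernel L = KH^{−1}K^⊤ of the dual problem can be sketched as follows. This is a hedged illustration only: H is approximated by a crude Monte Carlo average over samples from p rather than an analytic integral, and all centers, widths, and data are invented.

```python
import numpy as np

# Sketch of L = K H^{-1} K^T from the dual problem. H_ij, defined as the
# p-weighted inner product of gradients of the Gaussian basis functions,
# is approximated here by a sample average over draws from p = N(0, I).
# Centers, labeled points, and widths are all invented.

rng = np.random.default_rng(2)
centers = rng.normal(size=(10, 2))           # basis centers x_1..x_l
train = rng.normal(size=(6, 2))              # labeled points
sigma = 1.0

def phi(x, c):
    return np.exp(-np.sum((x - c) ** 2, axis=-1) / (2 * sigma ** 2))

def grad_phi(x, c):
    # grad phi(x) = -(x - c) / sigma^2 * phi(x)
    return -(x - c) / sigma ** 2 * phi(x, c)[..., None]

z = rng.normal(size=(2000, 2))               # Monte Carlo samples from p
G = np.stack([grad_phi(z, c) for c in centers])      # shape (l, m, d)
H = np.einsum('imd,jmd->ij', G, G) / len(z)          # H_ij estimate
H += 1e-8 * np.eye(len(centers))                     # numerical jitter

K = np.stack([phi(train, c) for c in centers], axis=1)  # shape (n, l)
L = K @ np.linalg.solve(H, K.T)              # modified kernel matrix
```

Since H is (numerically) positive definite, L is a symmetric positive semi-definite Gram matrix, so it can be dropped into any standard SVM dual solver.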
Defining h = 1/σ² and h_p = 1/σ_p², this integral turns out to be, up to an irrelevant constant factor, H_ij = Σ_{k=1}^{m} exp( −(h²/(2h + h_p)) ∥x_i − x_j∥²/2 − (h h_p/(2h + h_p)) (∥x_i − x_k∥² + ∥x_j − x_k∥²)/2 ) × ( h_p² (x_k − x_i) · (x_k − x_j) − h(h + h_p)(x_i − x_j)² + d(2h + h_p) ), where d is the dimension of the input space. After careful dataset selection [6], we considered the two moons toy problem (see figure 1). On this 2D example, the regularizer we suggested implements the cluster assumption perfectly: the function is smooth on high density regions and the decision boundary lies in a low density region. We also tried some real-world experiments but were not successful. The reason might be that in dimension more than 2, the gradient does not yield a suitable regularizer: there exist non-continuous functions whose regularizer is 0. To avoid this, by the Sobolev embedding lemma, we consider derivatives of order at least d/2. More specifically, we are currently investigating the regularizer associated with a Gaussian kernel of width σ_r [8, page 100], Σ_{p=1}^{∞} (σ_r^{2p} / (p! 2^p)) ∫ ∥∇^p f(x)∥² p(x) dx, with ∇^{2p} ≡ Δ^p. 6 Conclusion We have tried to make a first step towards a theoretical framework for semi-supervised learning. Ideally, this framework should be based on general principles which can then be used to derive new heuristics or justify existing ones. One such general principle is the cluster assumption. Starting from the assumption that the distribution P(x) of the data is known, we have proposed several ideas to implement this principle and shown their relationships. In addition, we have shown the relationship to the limiting behavior of an algorithm based on the graph Laplacian. We believe that this topic deserves further investigation. From a theoretical point of view, other types of regularizers, involving, for example, higher order derivatives, should be studied.
Also from a practical point of view, we should derive efficient algorithms from the proposed ideas, especially by obtaining finite sample approximations of the limit case where P(x) is known. References [1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003. [2] M. Belkin and P. Niyogi. Semi-supervised learning on manifolds. Machine Learning Journal, 2003. To appear. [3] F. Girosi, M. Jones, and T. Poggio. Priors, stabilizers and basis functions: From regularization to radial, tensor and additive splines. Technical Report Artificial Intelligence Memo 1430, Massachusetts Institute of Technology, 1993. [4] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963. [5] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33:82–95, 1971. [6] Doudou LaLoudouana and Mambobo Bonouliqui Tarare. Data set selection. In Advances in Neural Information Processing Systems, volume 15, 2002. [7] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, volume 14, 2001. [8] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002. [9] A. Smola and B. Schölkopf. On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica, 22:211–231, 1998. [10] M. Szummer and T. Jaakkola. Information regularization with partially labeled data. In Advances in Neural Information Processing Systems, volume 15. MIT Press, 2002. [11] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000. [12] P. Vincent and Y. Bengio. Density-sensitive metrics and kernels. Presented at the Snowbird Learning Workshop, 2003.
[13] U. von Luxburg and O. Bousquet. Distance-based classification with lipschitz functions. In Proceedings of the 16th Annual Conference on Computational Learning Theory, 2003.
2003
91
2,498
Geometric Clustering using the Information Bottleneck method

Susanne Still, Department of Physics, Princeton University, Princeton, NJ 08544, susanna@princeton.edu
William Bialek, Department of Physics, Princeton University, Princeton, NJ 08544, wbialek@princeton.edu
Léon Bottou, NEC Laboratories America, 4 Independence Way, Princeton, NJ 08540, leon@bottou.org

Abstract

We argue that K–means and deterministic annealing algorithms for geometric clustering can be derived from the more general Information Bottleneck approach. If we cluster the identities of data points to preserve information about their location, the set of optimal solutions is massively degenerate. But if we treat the equations that define the optimal solution as an iterative algorithm, then a set of “smooth” initial conditions selects solutions with the desired geometrical properties. In addition to conceptual unification, we argue that this approach can be more efficient and robust than classic algorithms.

1 Introduction

Clustering is one of the most widespread methods of data analysis and embodies strong intuitions about the world: many different acoustic waveforms stand for the same word, many different images correspond to the same object, etc. At a colloquial level, clustering groups data points so that points within a cluster are more similar to one another than to points in different clusters. To achieve this, one has to assign data points to clusters and determine how many clusters to use. (Dis)similarity among data points might, in the simplest example, be measured with the Euclidean norm, and then we could ask for a clustering of the points¹ $\{x_i\}$, $i = 1, 2, \ldots, N$, such that the mean square distance among points within the clusters is minimized,

\[ \frac{1}{N_c} \sum_{c=1}^{N_c} \frac{1}{n_c} \sum_{i,j \in c} |x_i - x_j|^2 , \tag{1} \]

where there are $N_c$ clusters and $n_c$ points are assigned to cluster $c$.
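As a concrete reading of eq. (1), here is a minimal sketch of the within-cluster cost; the function name and NumPy vectorization are our own illustrative choices, not code from the paper:

```python
import numpy as np

def within_cluster_cost(X, labels):
    """Eq. (1): (1/Nc) * sum_c (1/n_c) * sum_{i,j in c} |x_i - x_j|^2."""
    clusters = np.unique(labels)
    total = 0.0
    for c in clusters:
        Xc = X[labels == c]
        n_c = len(Xc)
        # all pairwise squared Euclidean distances within cluster c
        d2 = ((Xc[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        total += d2.sum() / n_c
    return total / len(clusters)
```

A clustering that groups coincident points together drives this cost to zero, while any mixed assignment leaves it strictly positive.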
Widely used iterative reallocation algorithms such as K–means [5, 8] provide an approximate solution to the problem of minimizing this quantity. Several alternative cost functions have been proposed (see e.g. [5]), and some use analogies with physical systems [3, 7]. However, this approach does not give a principled answer to how many clusters should be used. One often introduces and optimizes another criterion to find the optimal number of clusters, leading to a variety of “stopping rules” for the clustering process [5]. Alternatively, cross-validation methods can be used [11] or, if the underlying distribution is assumed to have a certain shape (mixture models), then the number of clusters can be found, e.g. by using the BIC [4].

A different view of clustering is provided by information theory. Clustering is viewed as lossy data compression; the identity of individual points ($\sim \log_2 N$ bits) is replaced by the identity of the cluster to which they are assigned ($\sim \log_2 N_c$ bits $\ll \log_2 N$ bits). Each cluster is associated with a representative point $x_c$, and what we lose in the compression are the deviations of the individual $x_{i \in c}$ from the representative $x_c$. One way to formalize this trade-off between data compression and error is rate–distortion theory [10], which again requires us to specify a function $d(x_i, x_c)$ that measures the magnitude of our error in replacing $x_i$ by $x_c$. The trade-off between the coding cost and the distortion defines a one parameter family of optimization problems, and this parameter can be identified with temperature through an analogy with statistical mechanics [9]. As we lower the temperature there are phase transitions to solutions with more and more distinct clusters, and if we fix the number of clusters and vary the temperature we find a smooth variation from “soft” (probabilistic) to “hard” (deterministic) clustering.

¹ Notation: all bold faced variables in this paper denote vectors.
For distortion functions d(x, x′) ∝ (x −x′)2, a deterministic annealing approach to solving the variational problem converges to the K–means algorithm in the limit of zero temperature [9]. A more general information theoretic approach to clustering, the Information Bottleneck method [13], explicitly implements the idea that our analysis of the data typically is motivated by our interest in some derived quantity (e.g., words from sounds) and that we should preserve this relevant information rather than trying to guess at what metric in the space of our data will achieve the proper feature selection. We imagine that each point xi occurs together with a corresponding variable vi, and that v is really the object of interest.2 Rather than trying to select the important features of similarity among different points xi, we cluster in x space to compress our description of these points while preserving as much information as possible about v, and again this defines a one parameter family of optimization problems. In this formulation there is no need to define a similarity (or distortion) measure; this measure arises from the optimization principle itself. Furthermore, this framework allows us to find the optimal number of clusters for a finite data set using perturbation theory [12]. The Information Bottleneck principle thus allows a full solution of the clustering problem. The Information Bottleneck approach is attractive precisely because the generality of information theory frees us from a need to specify in advance what it means for data points to be similar: Two points can be clustered together if this merger does not lose too much information about the relevant variable v. More precisely, because mutual information is invariant to any invertible transformation of the variables, approaches which are built entirely from such information theoretic quantities are independent of any arbitrary assumptions about what it means for two points to be close in the data space. 
This is especially attractive if we want the same information theoretic principles to apply both to the analysis of, for example, raw acoustic waveforms and to the sequences of words for which these sounds might stand [2]. On the other hand, it is not clear how to incorporate a geometric intuition into the Information Bottleneck approach. A natural and purely information theoretic formulation of geometric clustering might ask that we cluster the points, compressing the data index $i \in [1, N]$ into a smaller set of cluster indices $c \in [1, N_c]$, so that we preserve as much information as possible about the locations of the points, i.e. location $x$ becomes the relevant variable.² Because mutual information is a geometric invariant, however, such a problem has an infinitely degenerate set of solutions. We emphasize that this degeneracy is a matter of principle, and not a failing of any approximate algorithm for solving the optimization problem. What we propose here is to lift this degeneracy by choosing the initial conditions for an iterative algorithm which solves the Information Bottleneck equations. In effect our choice of initial conditions expresses a notion of smoothness or geometry in the space of the $\{x_i\}$, and once this is done the dynamics of the iterative algorithm lead to a finite set of fixed points. For a broad range of temperatures in the Information Bottleneck problem the solutions we find in this way are precisely those which would be found by a K–means algorithm, while at a critical temperature we recover the deterministic annealing approach to rate–distortion theory. In addition to the conceptual attraction of connecting these very different approaches to clustering in a single information theoretic framework, we argue that our approach may have some advantages of robustness.

² $v$ does not have to live in the same space as the data $x_i$.
2 Derivation of K–means from the Information Bottleneck method

We use the Information Bottleneck method to solve the geometric clustering problem and compress the data indices $i$ into cluster indices $c$ in a lossy way, keeping as much information about the location $x$ in the compression as possible. The variational principle is then

\[ \max_{p(c|i)} \left[ I(x, c) - \lambda I(c, i) \right] \tag{2} \]

where $\lambda$ is a Lagrange parameter which regulates the trade-off between compression and preservation of relevant information. Following [13], we assume that $p(x|i, c) = p(x|i)$, i.e. the distribution of locations for a datum, if the index of the datum is known, does not depend explicitly on how we cluster. Then $p(x|c)$ is given by the Markov condition

\[ p(x|c) = \frac{1}{p(c)} \sum_i p(x|i)\, p(c|i)\, p(i). \tag{3} \]

For simplicity, let us discretize the space that the data live in, let us assume that it is a finite domain, and that we can estimate the probability distribution $p(x)$ by a normalized histogram. Then the data we observe determine

\[ p(x|i) = \delta_{x x_i}, \tag{4} \]

where $\delta_{x x_i}$ is the Kronecker delta which is 1 if $x = x_i$ and zero otherwise. The probability of indices is, of course, $p(i) = 1/N$. The optimal assignment rule follows from the variational principle (2) and is given by

\[ p(c|i) = \frac{p(c)}{Z(i, \lambda)} \exp\!\left[ \frac{1}{\lambda} \sum_x p(x|i) \log_2 p(x|c) \right], \tag{5} \]

where $Z(i, \lambda)$ ensures normalization. This equation has to be solved self-consistently together with eq. (3) and $p(c) = \sum_i p(c|i)/N$. These are the Information Bottleneck equations and they can be solved iteratively [13]. Denoting by $p_n$ the probability distribution after the $n$-th iteration, the iterative algorithm is given by

\[ p_n(c|i) = \frac{p_{n-1}(c)}{Z_n(i, \lambda)} \exp\!\left[ \frac{1}{\lambda} \sum_x p(x|i) \log_2 p_{n-1}(x|c) \right], \tag{6} \]

\[ p_n(x|c) = \frac{1}{N p_{n-1}(c)} \sum_i p(x|i)\, p_n(c|i), \tag{7} \]

\[ p_n(c) = \frac{1}{N} \sum_i p_n(c|i). \tag{8} \]

Let $d(x, x')$ be a distance measure on the data space.
We choose $N_c$ cluster centers $x_c^{(0)}$ at random and initialize

\[ p_0(x|c) = \frac{1}{Z_0(c, \lambda)} \exp\!\left[ -\frac{1}{s}\, d(x, x_c^{(0)}) \right] \tag{9} \]

where $Z_0(c, \lambda)$ is a normalization constant and $s > 0$ is some arbitrary length scale; the reason for introducing $s$ will become apparent in the following treatment. After each iteration, we determine the cluster centers $x_c^{(n)}$, $n \geq 1$, according to (compare [9])

\[ 0 = \sum_x p_n(x|c) \frac{\partial d(x, x_c^{(n)})}{\partial x_c^{(n)}}, \tag{10} \]

which for the squared distance reduces to

\[ x_c^{(n)} = \sum_x x\, p_n(x|c). \tag{11} \]

We furthermore initialize $p_0(c) = 1/N_c$, where $N_c$ is the number of clusters. Now define the index $c_i^*$ such that it denotes the cluster with cluster center closest to the datum $x_i$ (in the $n$-th iteration):

\[ c_i^* := \arg\min_c d(x_i, x_c^{(n)}). \tag{12} \]

Proposition: If $0 < \lambda < 1$, and if the cluster indexed by $c_i^*$ is non-empty, then for $n \to \infty$

\[ p(c|i) = \delta_{c c_i^*}. \tag{13} \]

Proof: From (7) and (4) we know that $p_n(x|c) \propto \sum_i \delta_{x x_i}\, p_n(c|i)/p_{n-1}(c)$, and from (6) we have

\[ p_n(c|i)/p_{n-1}(c) \propto \exp\!\left[ \frac{1}{\lambda} \sum_x p(x|i) \log_2 p_{n-1}(x|c) \right], \tag{14} \]

and hence $p_n(x|c) \propto (p_{n-1}(x|c))^{1/\lambda}$. Substituting (9), we have $p_1(x|c) \propto \exp[-\frac{1}{s\lambda} d(x, x_c^{(0)})]$. The cluster centers $x_c^{(n)}$ are updated in each iteration and therefore we have after $n$ iterations:

\[ p_n(x|c) \propto \exp\!\left[ -\frac{1}{s\lambda^n}\, d(x, x_c^{(n-1)}) \right] \tag{15} \]

where the proportionality constant has to ensure normalization of the probability measure. Use (14) and (15) to find that

\[ p_n(c|i) \propto p_{n-1}(c) \exp\!\left[ -\frac{1}{s\lambda^n}\, d(x_i, x_c^{(n-1)}) \right], \tag{16} \]

and again the proportionality constant has to ensure normalization. We can now write the probability that a data point is assigned to the cluster nearest to it:

\[ p_n(c_i^*|i) = \left[ 1 + \frac{1}{p_{n-1}(c_i^*)} \sum_{c \neq c_i^*} p_{n-1}(c) \exp\!\left( -\frac{1}{s\lambda^n} \left[ d(x_i, x_c^{(n-1)}) - d(x_i, x_{c_i^*}^{(n-1)}) \right] \right) \right]^{-1} \tag{17} \]

By definition $d(x_i, x_c^{(n-1)}) - d(x_i, x_{c_i^*}^{(n-1)}) > 0$ for all $c \neq c_i^*$, and thus for $n \to \infty$ the exponential factor in (17) goes to 0; for clusters that do not have zero occupancy, i.e. for which $p_{n-1}(c_i^*) > 0$, we have $p(c_i^*|i) \to 1$.
Finally, because of normalization, $p(c \neq c_i^*|i)$ must be zero. □

From eq. (13) it follows with equations (4), (7) and (11) that for $n \to \infty$

\[ x_c = \frac{1}{n_c} \sum_i x_i\, \delta_{c c_i^*}, \tag{18} \]

where $n_c = \sum_i \delta_{c c_i^*}$. This means that for the square distance measure, this algorithm produces the familiar K–means solution: we get a hard clustering assignment (13) where each datum $i$ is assigned to the cluster $c_i^*$ with the nearest center. Cluster centers are updated according to eq. (18) as the average of all the points that have been assigned to that cluster. For some problems, the squared distance might be inappropriate, and the update rule for computing the cluster centers depends on the particular distance function (see eq. 10).

Example. We consider the squared Euclidean distance, $d(x, x') = |x - x'|^2/2$. With this distance measure, eq. (15) tells us that the (Gaussian) distribution $p(x|c)$ contracts around the cluster center $x_c$ as the number of iterations increases. The $x_c$'s are, of course, recomputed in every iteration, following eq. (11). We create a synthetic data set by drawing 2500 data points i.i.d. from four two-dimensional Gaussian distributions with different means and the same variance. Figure 1 shows the result of numerical iteration of the equations (14) and (16), ensuring proper normalization, as well as (8) and (11), with $\lambda = 0.5$ and $s = 0.5$. The algorithm converges to a stable solution after $n = 14$ iterations. This algorithm is less sensitive to initial conditions than the regular K–means algorithm. We measure the goodness of the classification by evaluating how much relevant information $I(x, c)$ the solution captures. In the case we are looking at, the relevant information reduces to the entropy $H[p(c)]$ of the distribution $p(c)$ at the solution³. We used 1000 different random initial conditions for the cluster centers and for each, we iterated eqs. (8), (11), (14) and (16) on the data in Fig. 1.
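The iteration of eqs. (6)-(8) with the annealed initialization (9) can be sketched numerically; the 1-D grid, the six data points, the fixed centers, and the values of λ and s below are illustrative choices of ours, not the paper's experiment:

```python
import numpy as np

# Toy iteration of (6)-(8) with p(x|i) = delta_{x, x_i} (Eq. 4):
# 6 data points on a 1-D grid of X = 10 cells, two clusters, lambda < 1.
xi = np.array([0, 1, 1, 8, 9, 9])   # grid cell of each datum
X, Nc, lam, s = 10, 2, 0.5, 1.0
grid = np.arange(X, dtype=float)

# Eq. (9): initialize p0(x|c) around two (here fixed) cluster centers,
# with d(x, x') = |x - x'|^2 / 2.
centers = np.array([2.0, 7.0])
p_x_c = np.exp(-0.5 / s * (grid[None, :] - centers[:, None]) ** 2)
p_x_c /= p_x_c.sum(axis=1, keepdims=True)          # shape (Nc, X)
p_c = np.full(Nc, 1.0 / Nc)

for _ in range(25):
    # Eq. (6): p_n(c|i) proportional to p_{n-1}(c) * p_{n-1}(x_i|c)^(1/lam)
    log_w = np.log(p_c)[:, None] + np.log(p_x_c[:, xi] + 1e-300) / lam
    p_c_i = np.exp(log_w - log_w.max(axis=0))
    p_c_i /= p_c_i.sum(axis=0)                     # shape (Nc, N)
    # Eq. (7): p_n(x|c) proportional to sum_i delta_{x, x_i} p_n(c|i)
    p_x_c = np.zeros((Nc, X))
    for i, x in enumerate(xi):
        p_x_c[:, x] += p_c_i[:, i]
    p_x_c /= p_x_c.sum(axis=1, keepdims=True)
    # Eq. (8): p_n(c) = (1/N) sum_i p_n(c|i)
    p_c = p_c_i.mean(axis=1)

hard = p_c_i.argmax(axis=0)   # hard assignments, as in the Proposition
```

With λ < 1 the assignments indeed harden, and each datum ends up attached to the cluster whose center is nearest, consistent with eq. (13).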
We found two different values for $H[p(c)]$ at the solution, indicating that there are at least two local maxima in $I(x, c)$. Figure 2 shows the fraction of the initial conditions that converged to the global maximum. This number depends on the parameters $s$ and $\lambda$. For $d(x, x') = |x - x'|^2/2s$, the initial distribution $p_0(x|c)$ is Gaussian with variance $s$. Larger variance $s$ makes the algorithm less sensitive to the initial location of the cluster centers. Figure 2 shows that, for large values of $s$, we obtain a solution that corresponds to the global maximum of $I(x, c)$ for 100% of the initial conditions. Here, we fixed $\lambda$ at reasonably small values to ensure fast convergence ($\lambda \in \{0.05, 0.1, 0.2\}$). For these $\lambda$ values, the number of iterations till convergence lies between 10 and 20 (for $0.5 < s < 500$). As we increase $\lambda$ there is a (noisy) trend to more iterations. In comparison, we did the same test using regular K–means [8] and obtained a globally optimal solution from only 75.8% of the initial cluster locations.

³ $I(x, c) = H[p(c)] + \sum_x p(x) \sum_c p(c|x) \log_2 p(c|x)$. Deterministic assignments: $p(c|i) = \delta_{c c_i^*}$. Data points which are located at one particular position: $p(x|i) = \delta_{x x_i}$. We thus have $p(c|x) = \frac{1}{N p(c)} \sum_i p(c|i)\, p(x|i) = \frac{1}{N p(c)} \sum_i \delta_{x x_i} \delta_{c c_i^*} = \delta_{c c_x^*}$, where $c_x^* = \arg\min_c d(x, x_c)$. Then $\sum_c p(c|x) \log_2 p(c|x) = 0$ and hence $I(x, c) = H[p(c)]$.

Figure 1: 2500 data points drawn i.i.d. from four Gaussian distributions with different means and the same variance. Those data which got assigned to the same cluster are plotted with the same symbol. The dotted traces indicate movements of the cluster centers (black stars) from their initial positions in the lower left corner of the graph to their final positions close to the means of the Gaussian distributions (black circles) after 14 iterations.
To see how this algorithm performs on data in a higher dimensional space, we draw 2500 points from 4 twenty-dimensional Gaussians with variance 0.3 along each dimension. The typical Euclidean distances between the means are around 7. We tested the robustness to initial center locations in the same way as we did for the two dimensional data. Despite the high signal to noise ratio, the regular K–means algorithm [8], run on this data, finds a globally optimal solution for only 37.8% of the initial center locations, presumably because the data is relatively scarce and therefore the objective function is relatively rough. We found that our algorithm converged to the global optimum for between 78.0% and 81.0% of the initial center locations for large enough values of $s$ ($1000 < s < 10000$) and $\lambda = 0.1$.

3 Discussion

Connection to deterministic annealing. For $\lambda = 1$, we obtain the solution

\[ p_n(c|i) \propto \exp\!\left[ -\frac{1}{s}\, d(x_i, x_c^{(n-1)}) \right] \tag{19} \]

where the proportionality constant ensures normalization. This equation, together with eq. (11), recovers the equations derived from rate distortion theory in [9] (for square distance), only here the length scale $s$ appears in the position of the annealing temperature $T$ in [9]. We call this parameter the annealing temperature, because [9] suggests the following deterministic annealing scheme: start with large $T$; fix the $x_c$'s and compute the optimal assignment rule according to eq. (19), then fix the assignment rule and compute the $x_c$'s according to eq. (11), and repeat these two steps until convergence. Then lower the temperature and repeat the procedure.

Figure 2: Robustness of algorithm to initial center positions as a function of the initial variance, $s$. 1000 different random initial positions were used to obtain clustering solutions on the data shown in Fig. 1. Displayed is, as a function of the initial variance $s$, the percent of initial center positions that converge to a global maximum of the objective function. In comparison, regular K–means [8] converges to the global optimum for only 75.8% of the initial center positions. The parameter $\lambda$ is kept fixed at reasonably small values (indicated in the plot) to ensure fast convergence (between 10 and 20 iterations).

There is no general rule that tells us how slow the annealing has to be. In contrast, the algorithm we have derived here for $\lambda < 1$ suggests to start with a very large initial temperature, given by $s\lambda$, by making $s$ very large, and to lower the temperature rapidly by making $\lambda$ reasonably small. In contrast to the deterministic annealing scheme, we do not iterate the equations for the optimal assignment rule and cluster centers till convergence before we lower the temperature; instead the temperature is lowered by a factor of $\lambda$ after each iteration. This produces an algorithm that converges rapidly while finding a globally optimal solution with high probability. For $\lambda = 1$, we furthermore find from eq. (15) that $p_n(x|c) \propto \exp[-\frac{1}{s} d(x, x_c^{(n-1)})]$, and for $d(x, x') = |x - x'|^2/2$, the clusters are simply Gaussians. For $\lambda > 1$, we obtain a useless solution for $n \to \infty$, which assigns all the data to one cluster.

Optimal number of clusters. One of the advancements that the approach we have laid out here should bring is that it should now be possible to extend our earlier results on finding the optimal number of clusters [12] to the problem of geometric clustering. We have to leave the details for a future paper, but essentially we would argue that as we observe a finite number of data points, we make an error in estimating the distribution that underlies the generation of these data points. This mis-estimate leads to a systematic error in evaluating the relevant information. We have computed this error using perturbation theory [12].
For deterministic assignments (as we have in the hard K–means solution), we know that a correction of the error introduces a penalty in the objective function for using more clusters, and this allows us to find the optimal number of clusters. Since our result says that the penalty depends on the number of bins that we use to estimate the distribution underlying the data [12], we either have to know the resolution with which to look at our data, or estimate this resolution from the size of the data set, as e.g. in [1, 6]. A combination of these insights should tell us how to determine, for geometrical clustering, the number of clusters that is optimal for a finite data set.

4 Conclusion

We have shown that it is possible to cast geometrical clustering into the general, information theoretic framework provided by the Information Bottleneck method. More precisely, we cluster the data keeping information about location, and we have shown that the degeneracy of optimal solutions, which arises from the fact that the mutual information is invariant to any invertible transformation of the variables, can be lifted by the correct choice of the initial conditions for the iterative algorithm which solves the Information Bottleneck equations. We have shown that for a large range of values of the Lagrange multiplier $\lambda$ (which regulates the trade-off between compression and preservation of relevant information), we obtain an algorithm that converges to a hard clustering K–means solution. We have found some indication that this algorithm might be more robust to initial center locations than regular K–means. Our results also suggest an annealing scheme, which might prove to be faster than the deterministic annealing approach to geometrical clustering, known from rate–distortion theory [9]. We recover the latter for $\lambda = 1$.
Our results shed new light on the connection between the relatively novel Information Bottleneck method and earlier approaches to clustering, particularly the well-established K–means algorithm. Acknowledgments We thank G. Atwal and N. Slonim for interesting discussions. S. Still acknowledges support from the German Research Foundation (DFG), grant no. Sti197. References [1] W. Bialek and C. G. Callan and S. P. Strong, Phys. Rev. Lett. 77 (1996) 4693-4697, http://arxiv.org/abs/cond-mat/9607180 [2] W. Bialek in Physics of bio-molecules and cells; ´Ecole d’ete de physique th´eorique Les Houches Session LXXV Eds.: H. Flyvbjerg, F. J¨ulicher, P. Ormos and F. David (2001) Springer-Verlag, pp.485–577, http://arxiv.org/abs/physics/0205030 [3] M. Blatt, S. Wiseman and E. Domany, Phys. Rev. Lett. 76 (1996) 3251-3254, http://arxiv.org/abs/cond-mat/9702072 [4] C. Fraley and A. Raftery, J. Am. Stat. Assoc. 97 (2002) 611-631. [5] A. D. Gordon, Classification, (1999) Chapmann and Hall/CRC Press, London. [6] P. Hall and E. J. Hannan, Biometrika 75, 4 (1988) 705-714. [7] D. Horn and A. Gottlieb, Phys. Rev. Lett. 88 (2002) 018702, extended version: http://arxiv.org/abs/physics/0107063 [8] J. MacQueen in Proc. 5th Berkeley Symp. Math. Statistics and Probability Eds.: L.M.L Cam and J. Neyman (1967) University of California Press, pp. 281-297 (Vol. I) [9] K. Rose, E. Gurewitz and G. C. Fox, Phys. Rev. Lett. 65 (1990) 945; and: K. Rose, Proceedings of the IEEE 86, 11 (1998) pp. 2210-2239. [10] C. E. Shannon, Bell System Tech. J. 27, (1948). pp. 379-423, 623-656. See also: C. Shannon and W. Weaver, The Mathematical Theory of Communication (1963) University of Illinois Press [11] P. Smyth, Statistics and Computing 10, 1 (2000) 63-72. [12] S. Still and W. Bialek (2003, submitted), available at http://arxiv.org/abs/physics/0303011 [13] N. Tishby, F. Pereira and W. Bialek in Proc. 37th Annual Allerton Conf. Eds.: B. Hajek and R. S. 
Sreenivas (1999) University of Illinois, http://arxiv.org/abs/physics/0004057
2003
92
2,499
Sample Propagation Mark A. Paskin Computer Science Division University of California, Berkeley Berkeley, CA 94720 mark@paskin.org Abstract Rao–Blackwellization is an approximation technique for probabilistic inference that flexibly combines exact inference with sampling. It is useful in models where conditioning on some of the variables leaves a simpler inference problem that can be solved tractably. This paper presents Sample Propagation, an efficient implementation of Rao–Blackwellized approximate inference for a large class of models. Sample Propagation tightly integrates sampling with message passing in a junction tree, and is named for its simple, appealing structure: it walks the clusters of a junction tree, sampling some of the current cluster’s variables and then passing a message to one of its neighbors. We discuss the application of Sample Propagation to conditional Gaussian inference problems such as switching linear dynamical systems. 1 Introduction Message passing on junction trees is an efficient means of solving many probabilistic inference problems [1, 2]. However, as these are exact methods, their computational costs must scale with the complexity of the inference problem, making them inapplicable to very demanding inference tasks. This happens when the messages become too expensive to compute, as in discrete models of large treewidth or conditional Gaussian models [3]. In these settings it is natural to investigate whether junction tree techniques can be combined with sampling to yield fast, accurate approximate inference algorithms. One way to do this is to use sampling to approximate the messages, as in HUGS [4, 5]. This strategy has two disadvantages: first, the samples must be stored, which limits the sample size by space constraints (rather than time constraints); and second, variables are sampled using only local information, leading to samples that may not be likely under the entire model. 
Another way to integrate sampling and message passing is via Rao–Blackwellization, where we repeatedly sample a subset of the model’s variables and then compute all of the messages exactly, conditioned on these sample values. This technique, suggested in [6] and studied in [7], yields a powerful and flexible approximate inference algorithm; however, it can be expensive because the junction tree algorithm must be run for every sample. In this paper, we present a simple implementation of Rao–Blackwellized approximate inference that avoids running the entire junction tree algorithm for every sample. We develop a new message passing algorithm for junction trees that supports fast retraction of evidence, and we tightly integrate it with a blocking Gibbs sampler so that only one message must be recomputed per sample. The resulting algorithm, Sample Propagation, has an appealing structure: it walks the clusters of a junction tree, resampling some of the current cluster’s variables and then passing a message to the next cluster in the walk. 2 Rao–Blackwellized approximation using junction tree inference We start by presenting our notation and assumptions on the probability model. Then we summarize the three basic ingredients of our approach: message passing in a junction tree, Rao–Blackwellized approximation, and sampling via Markov chain Monte Carlo. 2.1 The probability model Let X = (Xi : i ∈I) be a vector of random variables indexed by the finite, ordered set I, and for each index i let Xi be the range of Xi. We will use the symbols A, B, C, D and E to denote subsets of the index set I. For each subset A, let XA ≡(Xi : i ∈A) be the corresponding subvector of random variables and let XA be its range. It greatly simplifies the exposition to develop a simple notation for assignments of values to subsets of variables. An assignment to a subset A is a set of pairs {(i, xi) : i ∈A}, one per index i ∈A, where xi ∈Xi. 
We use the symbols $u$, $v$, and $w$ to represent assignments, and we use $\mathcal{X}_A$ to denote the set of assignments to the subset $A$ (with the shorthand $\mathcal{X} \equiv \mathcal{X}_I$). We use two operations to generate new assignments from old assignments. Given assignments $u$ and $v$ to disjoint subsets $A$ and $B$, respectively, their union $u \cup v$ is an assignment to $A \cup B$. If $u$ is an assignment to $A$ then the restriction of $u$ to another subset $B$ is $u_B \equiv \{(i, x_i) \in u : i \in B\}$, an assignment to $A \cap B$. We also let functions act on assignments in the natural way: if $u = \{(i, x_i) : i \in D\}$ is an assignment to $D$ and $f$ is a function whose domain is $\mathcal{X}_D$, then we use $f(u)$ to denote $f(x_i : i \in D)$. We consider probability densities of the form

\[ p(u) \propto \prod_{C \in \mathcal{C}} \psi_C(u_C), \qquad u \in \mathcal{X} \tag{1} \]

where $\mathcal{C}$ is a set of subsets of $I$ and each $\psi_C$ is a potential function over $C$ (i.e., a nonnegative function of $\mathcal{X}_C$). This class includes directed graphical models (i.e., Bayesian networks) and undirected graphical models such as Markov random fields. Observed variables are reflected in the model by evidence potentials. We use $p_A(\cdot)$ to denote the marginal density of $X_A$ and $p_{A|B}(\cdot \mid \cdot)$ to denote the conditional density of $X_A$ given $X_B$. Finally, we use the notation of finite measure spaces for simplicity, but our approach extends to the continuous case.

2.2 Junction tree inference

Given a density of the form (1), we view the problem of probabilistic inference as that of computing the expectation of a function $f$: $E[f(X)] = \sum_{u \in \mathcal{X}} p(u) f(u)$. This sum can be expensive to compute when $\mathcal{X}$ is a large space. When the desired expectation is “local” in that $f$ depends only upon some subset of the variables $X_D$, we can compute the expectation more cheaply using a marginal density as

\[ E[f(X_D)] = \sum_{u \in \mathcal{X}_C} p_C(u) f(u_D) \tag{2} \]

where $p_C$ is the marginal density of $X_C$ and $C \supseteq D$ “covers” the input of the function. If this sum is tractable, then we have reduced the problem to that of computing $p_C$. We can compute this marginal via message passing on a junction tree [1, 2].
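The assignment calculus above (union of assignments to disjoint subsets, restriction $u_B$) maps directly onto Python dicts; a small illustrative sketch of our own, not code from the paper:

```python
def union(u, v):
    """Union u ∪ v of assignments to disjoint index subsets."""
    assert not set(u) & set(v), "subsets must be disjoint"
    return {**u, **v}

def restrict(u, B):
    """Restriction u_B: keep only the pairs (i, x_i) with i in B."""
    return {i: x for i, x in u.items() if i in B}
```

For example, `restrict({1: 'a', 2: 'b'}, {2, 3})` keeps only the pair for index 2, an assignment to the intersection of the two subsets.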
A junction tree for $\mathcal{C}$ is a singly-connected, undirected graph $(\mathcal{C}, E)$ with the junction tree property: for each pair of nodes (or clusters) $A, B \in \mathcal{C}$ that contain some $i \in I$, every cluster on the unique path between $A$ and $B$ also contains $i$. In what follows we assume we have a junction tree for $\mathcal{C}$ with a cluster that covers $D$, the input of $f$. (Such a junction tree can always be found, but we may have to enlarge the subsets in $\mathcal{C}$.) Whereas the HUGIN message passing algorithm [2] may be more familiar, Sample Propagation is most easily described by extending the Shafer–Shenoy algorithm [1]. In this algorithm, we define for each edge $B \to C$ of the junction tree a potential over $B \cap C$:

\[ \mu_{BC}(u) \equiv \sum_{v \in \mathcal{X}_{B \setminus C}} \psi_B(u \cup v) \prod_{\substack{(A, B) \in E \\ A \neq C}} \mu_{AB}(u_A \cup v_A), \qquad u \in \mathcal{X}_{B \cap C} \tag{3} \]

$\mu_{BC}$ is called the message from $B$ to $C$. Note that this definition is recursive (messages can depend on each other), with the base case being messages from leaf clusters of the junction tree. For each cluster $C$ we define a potential $\beta_C$ over $C$ by

\[ \beta_C(u) \equiv \psi_C(u) \prod_{(B, C) \in E} \mu_{BC}(u_B), \qquad u \in \mathcal{X}_C \tag{4} \]

$\beta_C$ is called the cluster belief of $C$, and it follows that $\beta_C \propto p_C$, i.e., that the cluster beliefs are the marginals over their respective variables (up to renormalization). Thus we can use the (normalized) cluster beliefs $\beta_C$ for some $C \supseteq D$ to compute the expectation (2). In what follows we will also be interested in computing conditional cluster densities given an evidence assignment $w$ to a subset of the variables $X_E$. Because $p_{I \setminus E | E}(u \mid w) \propto p(u \cup w)$, we can “enter in” this evidence by instantiating $w$ in every cluster potential $\psi_C$. The cluster beliefs (4) will then be proportional to the conditional density $p_{C \setminus E | E}(\cdot \mid w)$. Junction tree inference is often the most efficient means of computing exact solutions to inference problems of the sort described above. However, the sums required by the messages (3) or the function expectations (2) are often prohibitively expensive to compute.
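A minimal numerical check of eqs. (3) and (4) on the smallest possible junction tree, two clusters sharing one binary variable; the random potentials are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Chain junction tree with clusters C1 = {X0, X1} and C2 = {X1, X2}, all binary.
psi1 = rng.random((2, 2))        # psi_{C1}(x0, x1)
psi2 = rng.random((2, 2))        # psi_{C2}(x1, x2)

# Eq. (3): the message from C1 to C2 sums out X0 (= C1 \ C2); C1 is a leaf,
# so there are no incoming messages in the product.
mu12 = psi1.sum(axis=0)          # mu(x1)

# Eq. (4): the cluster belief at C2 is its potential times incoming messages.
beta2 = psi2 * mu12[:, None]
beta2 /= beta2.sum()             # normalized belief, proportional to p(x1, x2)

# Brute-force check against the joint p(x0, x1, x2) from eq. (1).
joint = psi1[:, :, None] * psi2[None, :, :]
joint /= joint.sum()
marg = joint.sum(axis=0)         # exact marginal p(x1, x2)
```

The normalized belief matches the exact marginal, illustrating the claim that $\beta_C \propto p_C$.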
If the variables are all finite-valued, this happens when the clusters of the junction tree are too large; if the model is conditional-Gaussian, this happens when the messages, which are mixtures of Gaussians, have too many mixture components [3].

2.3 Rao–Blackwellized approximate inference

In cases where the expectation is intractable to compute exactly, it can be approximated by a Monte Carlo estimate:

\[ E[f(X_D)] \approx \frac{1}{N} \sum_{n=1}^{N} f(v^n_D) \tag{5} \]

where $\{v^n : 1 \leq n \leq N\}$ are a set of samples of $X$. However, obtaining a good estimate will require many samples if $f(X_D)$ has high variance. Many models have the property that while computing exact expectations is intractable, there exists a subset of random variables $X_E$ such that the conditional expectation $E[f(X_D) \mid X_E = x_E]$ can be computed efficiently. This leads to the Rao–Blackwellized estimate, where we use a set of samples $\{w^n : 1 \leq n \leq N\}$ of $X_E$ to approximate

\[ E[f(X_D)] = E\big[E[f(X_D) \mid X_E]\big] \approx \frac{1}{N} \sum_{n=1}^{N} E[f(X_D) \mid w^n] \tag{6} \]

The first advantage of this scheme over standard Monte Carlo integration is that the Rao–Blackwell theorem guarantees that the expected squared error of the estimate (6) is upper bounded by that of (5), and strictly so when $f(X_D)$ depends on $X_{D \setminus E}$. A second advantage is that (6) requires samples from a smaller (and perhaps better-behaved) probability space.

Algorithm 1 Rao–Blackwell estimation on a junction tree
Input: A set of samples $\{w^n : 1 \leq n \leq N\}$ of $X_E$, a function $f$ of $X_D$, and a cluster $C \supseteq D$
Output: An estimate $\hat{f} \approx E[f(X_D)]$
1: Initialize the estimator $\hat{f} = 0$.
2: for $n = 1$ to $N$ do
3:   Enter the assignment $w^n$ as evidence into the junction tree.
4:   Use message passing to compute the beliefs $\beta_C \propto p_{C \setminus E | E}(\cdot \mid w^n)$ via (3) and (4).
5:   Compute the expectation $E[f(X_D) \mid w^n]$ via (7).
6:   Set $\hat{f} = \hat{f} + E[f(X_D) \mid w^n]$.
7: Set $\hat{f} = \hat{f}/N$.

However, the Rao–Blackwellized estimate (6) is more expensive to compute than (5) because we must compute conditional expectations.
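The difference between estimates (5) and (6) can be seen on a toy Gaussian model where the conditional expectation is available in closed form; the model and sample size below are illustrative choices of ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000

# Toy model: Z ~ N(0,1), X | Z ~ N(Z, 1); we estimate E[X] (= 0) with f(X) = X.
z = rng.normal(size=N)

# Plain Monte Carlo (Eq. 5): sample X as well and average f(X).
x = rng.normal(loc=z)
plain = x.mean()

# Rao-Blackwellized (Eq. 6): E[X | Z = z] = z is known in closed form,
# so we average the conditional expectations instead of raw samples.
rb = z.mean()

# Both estimators are unbiased, but the averaged quantity has lower
# variance in the RB case: Var[E[X|Z]] = 1 versus Var[X] = 2.
```

The Rao-Blackwell theorem's guarantee shows up here as the smaller spread of the conditional expectations `z` compared to the raw samples `x`.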
In many cases, message passing in a junction tree can be used to implement these computations (see Algorithm 1). We can enter each sample assignment w^n as evidence into the junction tree and use message passing to compute the conditional density p_{C\E|E}(· | w^n) for some cluster C that covers D. We then compute the conditional expectation as

    E[f(X_D) | w^n] = Σ_{u ∈ X_{C\E}} p_{C\E|E}(u | w^n) f(u_D ∪ w^n_D)    (7)

2.4 Markov chain Monte Carlo

We now turn to the problem of obtaining the samples {w^n} of X_E. Markov chain Monte Carlo (MCMC) is a powerful technique for generating samples from a complex distribution p: we design a Markov chain whose stationary distribution is p, and simulate the chain to obtain samples [8]. One simple MCMC algorithm is the Gibbs sampler, where each successive state of the Markov chain is chosen by resampling one variable conditioned on the current values of the remaining variables. A more advanced technique is "blocking" Gibbs sampling, where we resample a subset of variables in each step; this technique can yield Markov chains that mix more quickly [9]. To obtain the benefits of sampling in a smaller space, we would like to sample directly from the marginal p_E; however, this requires us to sum out the nuisance variables X_{I\E} from the joint density p. Blocking Gibbs sampling is particularly attractive in this setting because message passing can be used to implement the required marginalizations.¹ Assume that the current state of the Markov chain over X_E is w^n. To generate the next state of the chain w^{n+1} we choose a cluster C (randomly, or according to a schedule) and resample X_{C∩E} given w^n_{E\C}; i.e., we resample the E variables within C given the E variables outside C. The transition density can be computed by entering the evidence w^n_{E\C} into the junction tree, computing the cluster belief at C, and marginalizing down to a conditional density over X_{C∩E}.
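The block-resampling step can be sketched on a tiny chain model. For simplicity the block conditionals below are computed by brute-force enumeration rather than by message passing, and the potentials are invented for the example; the check compares the Gibbs estimate of a marginal against the exact answer.

```python
import random, itertools

# Toy chain model over four binary variables:
# p(x) proportional to g(x0) * phi(x0,x1) * phi(x1,x2) * phi(x2,x3)
phi = {(a, b): 2.0 if a == b else 1.0 for a in (0, 1) for b in (0, 1)}
g = {0: 1.0, 1: 2.0}

def unnorm(x):
    return g[x[0]] * phi[(x[0], x[1])] * phi[(x[1], x[2])] * phi[(x[2], x[3])]

# Exact marginal p(x0 = 1) by brute force, for checking.
states = list(itertools.product((0, 1), repeat=4))
Z = sum(unnorm(s) for s in states)
exact = sum(unnorm(s) for s in states if s[0] == 1) / Z

# Blocking Gibbs: sweep the cluster blocks {0,1}, {1,2}, {2,3}, drawing
# each block jointly from its exact conditional given the other variables.
random.seed(0)
x = [0, 0, 0, 0]
count, iters = 0, 20000
blocks = [(0, 1), (1, 2), (2, 3)]
for _ in range(iters):
    for i, j in blocks:
        vals = [(a, b) for a in (0, 1) for b in (0, 1)]
        weights = []
        for a, b in vals:
            y = list(x); y[i], y[j] = a, b
            weights.append(unnorm(y))
        r = random.uniform(0, sum(weights))
        acc = 0.0
        for (a, b), w in zip(vals, weights):
            acc += w
            if r <= acc:
                x[i], x[j] = a, b
                break
    count += x[0]
est = count / iters

assert abs(est - exact) < 0.03
```

In the full algorithm, the brute-force enumeration over a block is replaced by computing the cluster belief via message passing and marginalizing down to the block, which is what makes large models tractable.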
The complete Gibbs sampler is given as Algorithm 2.²

Algorithm 2 Blocking Gibbs sampler on a junction tree
Input: A subset of variables X_E to sample and a sample size N
Output: A set of samples {w^n : 1 ≤ n ≤ N} of X_E
1: Choose an initial assignment w^0 ∈ X_E.
2: for n = 1 to N do
3:   Choose a cluster C ∈ C.
4:   Enter the evidence w^{n-1}_{E\C} into the junction tree.
5:   Use message passing to compute the beliefs β_C ∝ p_{C|E\C}(· | w^{n-1}_{E\C}) via (3) and (4).
6:   Marginalize over X_{C\E} to obtain the transition density p_{C∩E|E\C}(· | w^{n-1}_{E\C}).
7:   Sample w^n_{C∩E} ~ p_{C∩E|E\C}(· | w^{n-1}_{E\C}) and set w^n_{E\C} = w^{n-1}_{E\C}.

¹Interestingly, the blocking Gibbs proposal [9] makes a different use of junction tree inference than we do here: they use message passing within a block of variables to efficiently generate a sample.
²In cases where the transition density p_{C∩E|E\C}(· | w^n_{E\C}) is too large to represent or too difficult to sample from, we can use the Metropolis–Hastings algorithm, where we instead sample from a simpler proposal distribution q_{C∩E} and then accept or reject the proposal [8].

3 Sample Propagation

Algorithms 1 and 2 represent two of the three key ideas behind our proposal: both Gibbs sampling and Rao–Blackwellized estimation can be implemented efficiently using message passing on a junction tree. The third idea is that these two uses of message passing can be interleaved so that each sample requires only one message to be computed.

3.1 Lazy updating of the Rao–Blackwellized estimates

Algorithms 1 and 2 both process the samples sequentially, so the first advantage of merging them is that the sample set need not be stored. The second advantage is that, by being selective about when the Rao–Blackwellized estimator is updated, we can compute the messages once, not twice, per sample. When the Gibbs sampler chooses to resample a cluster C that covers D (the input of f), we can update the Rao–Blackwellized estimator for free.
In particular, the Gibbs sampler computes the cluster belief β_C ∝ p_{C|E\C}(· | w^{n-1}_{E\C}) in order to compute the transition density p_{C∩E|E\C}(· | w^{n-1}_{E\C}). Once it samples w^n_{C∩E} from this density, we can instantiate the sample in the belief β_C to obtain the conditional density p_{C\E|E}(· | w^n) needed by the Rao–Blackwellized estimator. (This follows from the fact that w^n_{E\C} = w^{n-1}_{E\C}.) In fact, when it is tractable to do so, we can simply use the cluster belief β_C to update the estimator in (7); because it treats more variables exactly, it can yield a lower-variance estimate. Therefore, if we are willing to update the Rao–Blackwellized estimator only when the Gibbs sampler chooses a cluster that covers the function's inputs, we can focus on reducing the computational requirements of the Gibbs sampler. In this scheme the estimate will be based on fewer samples, but the samples that are used will be less correlated because they are more distant from each other in the Markov chain. In parallel estimation problems where every cluster is computing expectations, every sample will be used to update an estimate, but not every estimate will be updated by every sample.

3.2 Optimizing the Gibbs sampler

We now turn to the Gibbs sampler itself, which computes the messages so that it can form the cluster belief β_C when it resamples within a cluster C. An important property of the computation (4) is that it requires only those messages directed towards C; thus, we have again reduced by half the number of messages required per sample. The difficulty in further minimizing the number of messages computed by the Gibbs sampler is that the evidence on the junction tree is constantly changing. It will therefore be useful to modify the message passing so that, rather than instantiating all the evidence and then passing messages, the evidence is instantiated on the fly, on a per-cluster basis.
For each edge B → C we define a potential μ_{BC|E} by

    μ_{BC|E}(u, w) ≡ Σ_{v ∈ X_{B\(C∪E)}} ψ_B(u ∪ v ∪ w_{B\C}) Π_{(A,B) ∈ E, A ≠ C} μ_{AB|E}((u ∪ v ∪ w_{B\C})_A, w)    (8)

where u ∈ X_{B∩C} and w ∈ X_E. This is the conditional message from B to C given evidence w on X_E. Figure 1 illustrates how the ranges of the assignment variables u, v, and w cover the variables of B; the intuition is that when we send a message from B to C, we instantiate all evidence variables that are in B but not those that are in C. This gives us the freedom to later instantiate X_{C∩E} as we wish, or not at all.

[Figure 1: A Venn diagram showing how the ranges of the assignment variables in (8) cover the cluster B.]

It is easy to verify that the conditional belief β_{C|E} given by

    β_{C|E}(u, w) ≡ ψ_C(u) Π_{(B,C) ∈ E} μ_{BC|E}(u_B, w),   u ∈ X_C, w ∈ X_E    (9)

is proportional to the conditional density p_{C|E\C}(u | w_{E\C}).³ Using these modified definitions, we can dramatically reduce the number of messages computed per sample.

Algorithm 3 Sample Propagation
Input: A function f of X_D, a cluster C ⊇ D, a subset E to sample, and a sample size N
Output: An estimate f̂ ≈ E[f(X_D)]
1: Choose an initial assignment w^0 ∈ X_E and compute the messages μ_{AB|E}(·, w^0) via (8).
2: Choose a cluster C_1 ∈ C, initialize the estimator f̂ = 0, and set the sample count M = 0.
3: for n = 1 to N do
4:   Compute the conditional cluster belief β_{C_n|E}(·, w^{n-1}) ∝ p_{C_n|E\C_n}(· | w^{n-1}_{E\C_n}) via (9).
   Advance the Markov chain:
5:   Marginalize over X_{C_n\E} to obtain the transition density p_{C_n∩E|E\C_n}(· | w^{n-1}_{E\C_n}).
6:   Sample w^n_{C_n∩E} ~ p_{C_n∩E|E\C_n}(· | w^{n-1}_{E\C_n}) and set w^n_{E\C_n} = w^{n-1}_{E\C_n}.
   Update any estimates to be computed at C_n:
7:   if D ⊆ C_n ∪ E then
8:     Instantiate w^n_{C_n} in β_{C_n|E} and normalize to obtain p_{C_n\E|E}(· | w^n).
9:     Compute the expectation E[f(X_D) | w^n] via (7).
10:    Set f̂ = f̂ + E[f(X_D) | w^n] and increment the sample count M.
   Take the next step of the walk:
11:   Choose a cluster C_{n+1} that is a neighbor of C_n.
12:   Recompute the message μ_{C_n C_{n+1}|E}(·, w^n) via (8).
13: Set f̂ ← f̂/M.
In particular, the conditional messages have the following important property:

Proposition. Let w and w′ be two assignments to E such that w_{E\D} = w′_{E\D} for some cluster D. Then for all edges B → C with C closer to D than B, μ_{BC|E}(u, w) = μ_{BC|E}(u, w′).

Proof. Assume by induction that the messages into B (except the one from C) are equal given w or w′. There are two cases to consider. If E ∩ D has no overlap with E ∩ B, then w_{B\C} = w′_{B\C} and the equality follows from (8). Otherwise, by the junction tree property we know that if i ∈ B and i ∈ D, then i ∈ C, so again we get w_{B\C} = w′_{B\C}.

Thus, when we resample a cluster C_n, we have w^n_{E\C_n} = w^{n-1}_{E\C_n} and so only those messages directed away from C_n change. In addition, as argued above, when we resample C_{n+1} in iteration n + 1, we only require the messages directed towards C_{n+1}. Combining these two arguments, we find that only the messages on the directed path from C_n to C_{n+1} must be recomputed in iteration n. If we choose C_{n+1} to be a neighbor of C_n, we only have to recompute a single message in each iteration.⁴ Putting all of these optimizations together, we obtain Algorithm 3, which is easily generalized to the case where many function expectations are computed in parallel.

³The modified message passing scheme we describe can be viewed as an implementation of fast retraction for Shafer–Shenoy messages, analogous to the scheme described for HUGIN in [2, §6.4.6].
⁴A similar idea has recently been used to improve the efficiency of the Unified Propagation and Scaling algorithm for maximum likelihood estimation [10].

3.3 Complexity of Sample Propagation

For simplicity of analysis we assume finite-valued variables and tabular potentials. In the Shafer–Shenoy algorithm, the space complexity of representing the exact message (3) is O(|X_{B∩C}|), and the time complexity of computing it is O(|X_B|) (since for each assignment to B ∩ C we must sum over assignments to B\C).
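The invariance stated in the proposition above can be checked numerically on a small chain junction tree. The potentials below are invented for the example; the clusters are B1 = {x0, x1}, B2 = {x1, x2}, B3 = {x2, x3}, the sampled set is E = {x1, x3}, and the check verifies that the conditional message into B3 is unchanged by evidence that differs only on x3 (a variable of B3 itself).

```python
# Chain junction tree: clusters B1={0,1}, B2={1,2}, B3={2,3}; sampled set E={1,3}.
psi1 = {(a, b): 1.0 + a + 2 * b for a in (0, 1) for b in (0, 1)}
psi2 = {(b, c): 1.0 + 3 * b * c for b in (0, 1) for c in (0, 1)}

def mu_12(x1, w):
    # Conditional message B1 -> B2, as in (8): sum over B1\(B2 ∪ E) = {x0};
    # E ∩ (B1\B2) is empty, so no evidence is instantiated in this message.
    return sum(psi1[(a, x1)] for a in (0, 1))

def mu_23(x2, w):
    # Conditional message B2 -> B3: B2\(B3 ∪ E) is empty; the sampled
    # variable x1 lies in B2\B3, so it is instantiated to w[1].
    return psi2[(w[1], x2)] * mu_12(w[1], w)

# Two evidence assignments that agree outside B3 (on x1) but differ on x3.
w  = {1: 0, 3: 0}
w2 = {1: 0, 3: 1}
assert all(mu_23(x2, w) == mu_23(x2, w2) for x2 in (0, 1))
```

Note also that the separator variable of each message remains a free parameter even when it is in E, matching the remark below that sampled variables are not fixed inside the message tables.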
In contrast, when computing the conditional message (8), we only sum over assignments to B\(C ∪ E), since E ∩ (B\C) is instantiated by the current sample. This makes the conditional message cheaper to compute than the exact message: in the finite case the time complexity is O(|X_{B\(E∩(B\C))}|). The space complexity of representing the conditional message is O(|X_{B∩C}|), the same as for the exact message, since it is a potential over the same variables. As we sample more variables, the conditional messages become cheaper to compute. However, note that the space complexity of representing the conditional message is independent of the choice of sampled variables E; even if we sample a given variable, it remains a free parameter of the conditional message. (If we instead fixed its value, the proposition above would not hold.) Thus, the time complexity of computing conditional messages can be reduced by sampling more variables, but only up to a point: computing the conditional message is always Ω(|X_{B∩C}|), since the entire message table must be filled in. This contrasts with the approach of Bidyuk & Dechter [7], where the asymptotic time complexity of each iteration can be reduced arbitrarily by sampling more variables. However, to achieve this their algorithm runs the entire junction tree algorithm in each iteration, and does not reuse messages between iterations. In contrast, Sample Propagation reuses all but one of the messages between iterations, leading to a greatly reduced "constant factor".

4 Application to conditional Gaussian models

A conditional Gaussian (CG) model is a probability distribution over a set of discrete variables {X_i : i ∈ ∆} and continuous variables {X_i : i ∈ Γ} such that the conditional distribution of X_Γ given X_∆ is multivariate Gaussian. Inference in CG models is harder than in models that are totally discrete or totally Gaussian.
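A worked size comparison makes the trade-off concrete. The cluster sizes below are hypothetical, chosen only to illustrate the bounds for binary variables: sampling more of the summed-out variables shrinks the time cost of a conditional message, but never below the size of the message table itself.

```python
# Hypothetical binary model: cluster B has 12 variables, the separator
# B ∩ C has 4, and 6 of the 8 variables in B\C are sampled (in E).
sep        = 2 ** 4         # message table size over B ∩ C (space, in both cases)
exact_time = 2 ** 12        # exact message (3): sum ranges over all of B
cond_time  = 2 ** (12 - 6)  # conditional message (8): sampled vars are instantiated

assert cond_time < exact_time   # sampling makes the message cheaper to compute...
assert cond_time >= sep         # ...but never cheaper than filling the table over B ∩ C
```

Here the conditional message costs 64 operations per computation instead of 4096, while both representations occupy a table of 16 entries.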
For example, consider polytree models: when all of the variables are discrete or all are Gaussian, exact inference is linear in the size of the model; but if the model is CG, then even approximate inference is NP-hard [11]. In traditional junction tree inference, our goal is to compute the marginal for each cluster. However, when p is a CG model, each cluster marginal is a mixture of |X_∆| Gaussians, and is intractable to represent. Instead, we can compute the weak marginals: for each cluster we compute the best conditional Gaussian approximation of p_C. Lauritzen's algorithm [12] is an extension of the HUGIN algorithm that computes these weak marginals exactly. Unfortunately, it is often intractable because it requires strongly rooted junction trees, which can have clusters that contain most or all of the discrete variables [3]. The structure of CG models makes it possible to use Sample Propagation to approximate the weak cluster marginals: we choose E = ∆, since conditioning on the discrete variables leaves a tractable Gaussian inference problem.⁵ The expectations we must compute are of the sufficient statistics of the weak cluster marginals: for each cluster C, we need the distribution of X_{C∩∆} and the conditional means and covariances of X_{C∩Γ} given X_{C∩∆}. As an example, consider the model given in Figure 2(a) for tracking an object whose state (position and velocity) at time t is X_t. At each time step, we obtain a vector measurement Y_t which is either a noisy measurement of the object's position (if Z_t = 0) or an outlier (if Z_t = 1). The Markov chain over Z_t makes it likely that inliers and outliers come in bursts. The task is to estimate the position of the object at all time steps (for T = 100).

⁵We cannot choose E = Γ because computing the conditional messages (8) may require summing discrete variables out of CG potentials, which leads to representational difficulties [3].
In this case one can instead use Bidyuk & Dechter's algorithm, which does not require these operations.

Lauritzen's algorithm is intractable in this case because any strongly rooted junction tree for this network must have a cluster containing all of the discrete variables [3, Thm. 3.18]. Therefore, instead of comparing our approximate position estimates to the correct answer, we sampled a trajectory from the network and computed the average position error to the (unobserved) ground truth. Both Gibbs sampling and Sample Propagation were run with a forwards–backwards sampling schedule; Sample Propagation used the junction tree of Figure 2(b).⁶ Both algorithms were started in the same state and both were allowed to "burn in" for five forwards–backwards passes. We repeated this 10 times and averaged the results over trials. Figure 2(c) shows that Sample Propagation converged much more quickly than Gibbs sampling. Also, Sample Propagation found better answers than Assumed Density Filtering (a standard algorithm for this problem), but at increased computational cost.

[Figure 2: The TRACKING example. (a) The model: states X_1, ..., X_T with measurements Y_1, ..., Y_T and outlier indicators Z_1, ..., Z_T. (b) The junction tree used, a chain of clusters {X_t, X_{t+1}, Z_t, Z_{t+1}}. (c) Average position error versus floating point operations for Assumed Density Filtering, Sample Propagation, and Gibbs sampling.]

Acknowledgements. I thank K. Murphy and S. Russell for comments on a draft of this paper. This research was supported by ONR N00014-001-0637 and an Intel internship.

References
[1] G. Shafer and P. Shenoy. Probability propagation. Annals of Mathematics and Artificial Intelligence, 2:327–352, 1990.
[2] R. Cowell, P. Dawid, S. Lauritzen, and D. Spiegelhalter. Probabilistic Networks and Expert Systems. Springer, 1999.
[3] U. Lerner. Hybrid Bayesian Networks for Reasoning About Complex Systems. PhD thesis, Stanford University, October 2002.
[4] A. Dawid, U. Kjærulff, and S. Lauritzen. Hybrid propagation in junction trees.
In Advances in Intelligent Computing, volume 945 of Lecture Notes in Computer Science. Springer, 1995.
[5] U. Kjærulff. HUGS: Combining exact inference and Gibbs sampling in junction trees. In Proc. of the 11th Conf. on Uncertainty in Artificial Intelligence (UAI-95). Morgan Kaufmann, 1995.
[6] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Proc. of the 16th Conf. on Uncertainty in AI (UAI-00), 2000.
[7] B. Bidyuk and R. Dechter. An empirical study of w-cutset sampling for Bayesian networks. In Proc. of the 19th Conf. on Uncertainty in AI (UAI-03). Morgan Kaufmann, 2003.
[8] R. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, University of Toronto, 1993.
[9] C. S. Jensen, A. Kong, and U. Kjærulff. Blocking Gibbs sampling in very large probabilistic expert systems. International Journal of Human-Computer Studies, 42:647–666, 1995.
[10] Y. W. Teh and M. Welling. On improving the efficiency of the iterative proportional fitting procedure. In Proc. of the 9th Int'l. Workshop on AI and Statistics (AISTATS-03), 2003.
[11] U. Lerner and R. Parr. Inference in hybrid networks: Theoretical limits and practical algorithms. In Proc. of the 17th Conf. on Uncertainty in AI (UAI-01). Morgan Kaufmann, 2001.
[12] S. Lauritzen. Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association, 87(420):1098–1108, 1992.
[13] C. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83:589–601, 1996.

⁶Carter & Kohn [13] describe a specialized algorithm for this model that is similar to a version of Sample Propagation that does not resample the discrete variables on the backwards pass.